This repository can be used on its own, but it is intended to be consumed as a submodule of TKS. TKS enables enthusiasts and administrators alike to easily provision highly available and production-ready Kubernetes clusters and other modern infrastructure on Proxmox VE.
`Bootstrap_Proxmox` sets up a Proxmox server for TKS by creating the necessary user accounts, installing package dependencies, and more.
Ansible is used to configure Proxmox. Logic is split into multiple roles, many of which are highly configurable and even optional. Configurations are applied to TKS using environment variables; for a list of supported environment variables, see the README for each role.
This project assumes you have a network connection, a server, a bootable flash drive or iDRAC, and a workstation with Ansible >= 2.9 installed.
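You can verify the Ansible version on your workstation with:

```
ansible --version
```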
In my case, I have both a Dell server with iDRAC and a Mac Pro that requires a bootable USB drive, so instructions for both methods are provided. After booting from the installation medium, proceed to install Proxmox as usual.
Dell iDRAC:

- Download the latest ISO for Proxmox VE.
- Connect to iDRAC and launch a new Virtual Console.
- Click `Virtual Media` and select your downloaded Proxmox VE ISO as a new CD/DVD Image File.
- Click `Map Device` and reboot the server. Interrupt the boot process by tapping `F10` to access the Dell Lifecycle Controller.
- Navigate through `OS Deployment` -> `Deploy OS` -> `Go Directly to OS Deployment`.
- Set `Available Operating Systems` to `Any Other Operating System`, choose a `Manual Install`, and choose `PVE Virtual CD` as your media.
- Press `Finish` and wait for the Lifecycle Controller to boot into the Proxmox VE installer.
Bootable USB Installer:

- Connect a flash drive to your workstation and use `fdisk` or `diskutil` to determine the mount path (example commands for both tools are sketched after this list). Mine is `/dev/sdf`.
- Export the required variables documented in the role's README. You can find the latest ISO URL for Proxmox here.

  ```
  export TKS_BM_V_PROXMOX_ISO_URL="https://www.proxmox.com/en/downloads?task=callelement&format=raw&item_id=513&element=f85c494b-2b32-4109-b8c1-083cca2b7db6&method=download&args[0]=e20c5339a85f415aa8786ae730d14f05"
  export TKS_BM_V_USB_MOUNTPATH=/dev/sdf
  ```

- Execute the role using the `create_usb_medium.yml` playbook and eject the flash drive when finished.

  ```
  ansible-playbook create_usb_medium.yml
  sudo eject $TKS_BM_V_USB_MOUNTPATH
  ```

- Disconnect the flash drive from your workstation and connect it to your server. Power it on and boot from the USB; with my Mac Pro, I accomplish this by holding down the `Option` key. Allow the computer to boot into the Proxmox VE installer.
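For reference, here is how I would identify the device path with each tool; the output and device names will differ on your machine:

```
# Linux: list block devices and identify the flash drive by its size
sudo fdisk -l

# macOS: external USB media are listed as "external, physical"
diskutil list
```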
Now that Proxmox has been installed, it's time to set up a user account for Ansible to use. We'll also create an SSH key and add it to the `authorized_keys` file for that user, as well as install `sudo` and make them a sudoer.
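If you don't already have a key for this, here is a minimal sketch using `ssh-keygen`. The `sol.milkyway` filename is just the naming convention used in my examples below; any path works as long as it matches what you export later:

```
# Generate a new ed25519 keypair for Ansible to use
ssh-keygen -t ed25519 -f ~/.ssh/sol.milkyway -C "ansible"
```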
- Export the `ANSIBLE_REMOTE_USER` and `ANSIBLE_ASK_PASS` environment variables. This is necessary at first since Proxmox does not yet have an SSH key.

  ```
  export ANSIBLE_REMOTE_USER=root
  export ANSIBLE_ASK_PASS=true
  ```
- In order to use password authentication with Ansible, you will also need to install the `sshpass` package.

  ```
  sudo pacman -S sshpass
  ```
- Create a new SSH key and user account for Ansible to use.

  ```
  export TKS_BP_R_CREATE_USER_ACCOUNT=true
  export TKS_BP_V_PROXMOX_SSH_KEY='~/.ssh/sol.milkyway'
  export TKS_BP_V_PROXMOX_USER_NAME=tj
  ansible-playbook -i inventory.yml TKS-Bootstrap_Proxmox/Ansible/create_user_account.yml
  ```
- Reconfigure Ansible to use SSH keys for authentication as well as your new user account.

  ```
  export ANSIBLE_REMOTE_USER=tj
  export ANSIBLE_ASK_PASS=false
  export ANSIBLE_PRIVATE_KEY_FILE=~/.ssh/sol.milkyway
  ```
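At this point you can sanity-check that key-based authentication works using Ansible's built-in `ping` module, assuming your hosts are listed in `inventory.yml`:

```
ansible all -i inventory.yml -m ping
```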
Now that we can use Ansible freely, we can use the `site.yml` playbook to set up most things. By default, no configuration changes will be applied unless the appropriate environment variable below is set to `true`. Many tasks support further configuration options; see the README for the `Configure_Proxmox` role for examples.
| Variable | Description | Example Value |
| --- | --- | --- |
| `TKS_BP_T_CONFIGURE_REPOSITORIES` | Use the Contributor repositories for packages | `true` |
| `TKS_BP_T_CONFIGURE_UNATTENDED_UPGRADES` | Automatically manage package updates | `true` |
| `TKS_BP_T_CONFIGURE_SYSTEM` | Configure system properties such as OS swappiness | `true` |
| `TKS_BP_T_CONFIGURE_ZFS` | Configure ZFS memory limits, swappiness, email notifications, etc. | `true` |
| `TKS_BP_T_INSTALL_PACKAGES` | Install a list of quality-of-life packages for standard system administration | `true` |
| `TKS_BP_T_INSTALL_SANOID` | Install Sanoid and configure automatic ZFS snapshot management | `true` |
| `TKS_BP_T_INSTALL_POSTFIX` | Install and configure a Postfix SMTP relay for email notifications | `true` |
| `TKS_BP_T_INSTALL_ZSH` | Install and configure ZSH as the default user shell | `true` |
For example, if you wanted to do the following, your steps might look like:
- Switch over to the contributor repositories
- Install my preferred quality-of-life packages
- Configure ZFS, ZED, and Sanoid
- Set up Unattended Upgrades
- Configure your Ansible client:

  ```
  export ANSIBLE_REMOTE_USER="tj"
  export ANSIBLE_ASK_PASS=false
  export ANSIBLE_PRIVATE_KEY_FILE="~/.ssh/sol.milkyway"
  ```
- Export the variables indicating which configurations you wish to apply:

  ```
  export TKS_BP_T_CONFIGURE_REPOSITORIES=true
  export TKS_BP_T_INSTALL_PACKAGES=true
  export TKS_BP_T_INSTALL_POSTFIX=true
  export TKS_BP_T_CONFIGURE_SYSTEM=true
  export TKS_BP_T_CONFIGURE_UNATTENDED_UPGRADES=true
  ```
- Define some variables to configure the `Postfix` relay client. Be mindful not to leave your password in your shell history:

  ```
  export HISTCONTROL=ignoreboth
  export TKS_BP_V_POSTFIX_EMAIL="[email protected]"
  export TKS_BP_V_POSTFIX_PASSWORD="YOURPASSWORD"
  export TKS_BP_V_POSTFIX_SERVER=smtp.gmail.com
  export TKS_BP_V_POSTFIX_PORT=587
  export TKS_BP_V_POSTFIX_TLS='yes'
  ```
- Configure the OS swappiness (a value between 0 and 100; lower values make the kernel swap less aggressively).

  ```
  export TKS_BP_V_SYS_SWAPPINESS=10
  ```
- Configure the version of Sanoid you want to install.

  ```
  export TKS_BP_V_SANOID_VERSION='2.0.3'
  ```
- Define some variables to configure Unattended Upgrades:

  ```
  export TKS_BP_V_UPGRADES_NOTIFY=true
  export TKS_BP_V_UPGRADES_EMAIL="[email protected]"
  export TKS_BP_V_UPGRADES_ON_SHUTDOWN=true
  export TKS_BP_V_UPGRADES_LOG_SYSLOG=true
  ```
- Apply the configurations to Proxmox (a few post-run spot-checks are sketched after this list):

  ```
  ansible-playbook -i inventory.yml TKS-Bootstrap_Proxmox/Ansible/site.yml
  ```
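Once the playbook finishes, here are a few spot-checks you can run on the Proxmox host. These are my own suggestions rather than part of the playbook, and they assume the role installed Postfix (which provides a `sendmail` compatibility binary) and the Debian `unattended-upgrades` package:

```
# Confirm the swappiness value took effect
sysctl vm.swappiness

# Push a test message through the Postfix relay
printf 'Subject: TKS relay test\n\nHello from Proxmox.\n' | sendmail root

# Simulate an unattended-upgrades run without applying anything
unattended-upgrade --dry-run --debug
```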
Your environment may or may not include multiple physical nodes; as a result, this step is optional. In order to form a cluster, you must have at least one master and one node present in your inventory file under the groups `master` and `nodes`. Furthermore, only a single master can be in your inventory. Lastly, as a limitation imposed by Proxmox, your node cannot have any workloads currently running on it.
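For illustration, a minimal `inventory.yml` satisfying these constraints might look like the following; the hostnames are hypothetical placeholders:

```
all:
  children:
    master:
      hosts:
        earth.example.com:
    nodes:
      hosts:
        mars.example.com:
```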
Export the required environment variables and run the `configure_cluster.yml` Ansible playbook. Be mindful not to leave your password in your shell history:
```
export HISTCONTROL=ignoreboth
export TKS_BP_V_PROXMOX_CLUSTER_NAME=TKS
export TKS_BP_V_PROXMOX_MASTER_PASSWORD=YOURPASSWORD
ansible-playbook -i inventory.yml TKS-Bootstrap_Proxmox/Ansible/configure_cluster.yml
unset TKS_BP_V_PROXMOX_MASTER_PASSWORD
```
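To confirm the cluster formed correctly, Proxmox provides the `pvecm` tool; run it on any member node:

```
pvecm status
pvecm nodes
```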
Should something go wrong, or should you wish to undo your clustering, you can do so with the following commands:
```
systemctl stop pve-cluster
systemctl stop corosync
pmxcfs -l
rm /etc/pve/corosync.conf
rm /etc/corosync/*
killall pmxcfs
systemctl start pve-cluster
```
Storage is a delicate component of any environment, and applying automation to it carries a correspondingly larger risk of disaster. Further complicating this is that storage is configured differently in almost every environment. For these reasons, I have decided to leave this portion of TKS as a manual process.
Our goal here is to provide our hypervisor with storage for three primary things: a place to store VMs, their backups, and general data. ZFS is a common storage solution for Proxmox and can satisfy all three requirements in one go. My personal homelab leverages multiple ZFS and hardware RAID arrays. For posterity, I'll include the steps I followed below. As a reminder, these commands will NOT be the same for your system.
- SSH into the server and configure LVM for the hardware RAID volume.

  ```
  pvcreate /dev/sdg
  vgcreate RAIDPool /dev/sdg
  lvcreate RAIDPool /dev/sdg --name RAIDPool_Data -L 4T
  lvcreate RAIDPool /dev/sdg --name RAIDPool_Templates -L 100G
  lvcreate RAIDPool /dev/sdg --name RAIDPool_Backups -L 500G
  mkfs.xfs /dev/RAIDPool/RAIDPool_Data
  mkfs.xfs /dev/RAIDPool/RAIDPool_Templates
  mkfs.xfs /dev/RAIDPool/RAIDPool_Backups
  ```
- Create working directories for Proxmox backups, ISOs, and templates, and configure `/etc/fstab` accordingly.

  ```
  mkdir /mnt/RAIDPool_Backups
  mkdir /mnt/RAIDPool_Templates
  mkdir /mnt/RAIDPool_Data
  echo "/dev/RAIDPool/RAIDPool_Templates /mnt/RAIDPool_Templates xfs defaults 0 0" >> /etc/fstab
  echo "/dev/RAIDPool/RAIDPool_Backups /mnt/RAIDPool_Backups xfs defaults 0 0" >> /etc/fstab
  echo "/dev/RAIDPool/RAIDPool_Data /mnt/RAIDPool_Data xfs defaults 0 0" >> /etc/fstab
  ```
- Import the ZFS storage pools.

  ```
  zpool import -f DataPool
  zpool import -f FlashPool
  # If necessary, use `zfs set` to change the mountpoints to `/mnt/` and re-import
  # (a concrete example is sketched after this list)
  ```
- Make sure all of the filesystems have successfully mounted, and then exit the SSH session.

  ```
  zfs list
  zpool status
  mount -a
  df -h
  ```
- Within the Proxmox UI's Datacenter `Storage` view, disable the `local` Storage ID and remove the `local-lvm` Storage ID to avoid over-provisioning the Proxmox OS SATADOM. Add the following Storage IDs:

  | ID | Type | Content | Path | Shared | Enabled | Nodes | Thin | BS |
  | --- | --- | --- | --- | --- | --- | --- | --- | --- |
  | FlashPool | ZFS | Disk image, Container | | No | Yes | earth | No | 32K |
  | RAIDPool_Backups | Directory | VZDump backup file | /mnt/RAIDPool_Backups | Yes | Yes | all | | |
  | RAIDPool_Templates | Directory | Disk image, ISO image, Snippets, Container template | /mnt/RAIDPool_Templates | Yes | Yes | all | | |
  | RAIDPool_Data | LVM | Disk image, Container | | No | Yes | earth | No | |
  | DataPool | ZFS | Disk image, Container | | No | Yes | earth | Yes | 128K |
- Reboot the server and ensure that all of the storage endpoints are automatically mounted again.

  ```
  reboot
  df -h
  ```
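As a concrete example of the mountpoint adjustment mentioned in the ZFS import step above, plus a couple of extra sanity checks on the LVM layout; the pool name matches my homelab, so substitute your own:

```
# Point the pool's root dataset at /mnt/ and remount it
zfs set mountpoint=/mnt/DataPool DataPool
zfs mount DataPool

# Review the physical volumes, volume groups, and logical volumes created earlier
pvs
vgs
lvs
```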