Proxmox VE Installation

This guide covers two approaches to installing Proxmox Virtual Environment (VE) from scratch, depending on your server's storage configuration.

Approach 1: Hardware RAID 10 with SSD Drives

This is the simpler approach where the entire Proxmox installation (OS + VM storage) runs on a single hardware RAID 10 array using 4 or 6 SSD drives.

Prerequisites

  • A server with a hardware RAID controller (e.g., Dell PERC or HP Smart Array)
  • 4 or 6 SSDs of equal size attached to the controller
  • The Proxmox VE ISO and IPMI/iDRAC (or USB) access for installation

Step 1: Configure Hardware RAID 10

  1. Access the RAID controller BIOS during server boot (usually Ctrl+R for Dell PERC, or through the UEFI setup)
  2. Delete any existing virtual disks / foreign configurations
  3. Create a new Virtual Disk:
    • RAID Level: RAID 10
    • Drives: Select all 4 (or 6) SSD drives
    • Strip Size: 256 KB (good for virtualization workloads)
    • Read Policy: Read Ahead
    • Write Policy: Write Back (if battery/capacitor backed) or Write Through
    • Disk Cache: Disabled
  4. Initialize the virtual disk (Fast Init is fine)
  5. Confirm the virtual disk shows as Optimal status
  6. Save and exit the RAID configuration

Step 2: Boot from Proxmox VE ISO

  1. Mount the Proxmox VE ISO via IPMI virtual media (or USB)
  2. Set the boot order to boot from the virtual CD/USB drive
  3. Reboot the server and select "Install Proxmox VE (Graphical)"
  4. Accept the EULA

Step 3: Select Target Disk

  1. The installer will show your RAID 10 virtual disk as a single drive (e.g., /dev/sda)
  2. Select this disk as the installation target
  3. Click Options to configure the disk layout:
    • Filesystem: ext4 (recommended; the controller already provides the RAID layer). ZFS on top of hardware RAID is possible (select ZFS RAID0, since the installer sees only one disk), but it is generally discouraged because ZFS works best with direct access to the physical drives
    • hdsize: Use full disk or set a specific size
    • swapsize: 8 GB (or match RAM if hibernation is needed)
    • maxroot: 100 GB (for the OS root partition)
    • minfree: 16 GB
    • maxvz: Remaining space (this becomes local-lvm for VM storage)
  4. Click OK and proceed
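As a sanity check, the space left for local-lvm (maxvz) is roughly hdsize minus the other allocations. A quick sketch using the values above and a hypothetical 1800 GB of usable capacity (about what a 4 x 960 GB SSD RAID 10 provides):

```shell
# All sizes in GB; HDSIZE is a hypothetical usable capacity
HDSIZE=1800
SWAPSIZE=8    # swap partition
MAXROOT=100   # OS root partition
MINFREE=16    # space left unallocated in the volume group
MAXVZ=$((HDSIZE - SWAPSIZE - MAXROOT - MINFREE))
echo "local-lvm (maxvz) will be roughly ${MAXVZ} GB"
```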

Step 4: Network and Password Configuration

  1. Country/Timezone: Set appropriately (e.g., Canada, America/Toronto)
  2. Password: Set a strong root password
  3. Email: Enter admin notification email address
  4. Management Interface: Select the primary NIC
  5. Hostname: Set FQDN (e.g., pve1.4goodhosting.com)
  6. IP Address: Set the server's static IP
  7. Netmask: Usually 255.255.255.0 (/24)
  8. Gateway: Set the default gateway
  9. DNS Server: Set DNS (e.g., 8.8.8.8 or your internal DNS)
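The installer writes these settings to /etc/network/interfaces. A sketch of the result, assuming a NIC named eno1 and the placeholder address 203.0.113.10/24 (all values here are examples, not defaults):

```
auto lo
iface lo inet loopback

auto eno1
iface eno1 inet manual

auto vmbr0
iface vmbr0 inet static
    address 203.0.113.10/24
    gateway 203.0.113.1
    bridge-ports eno1
    bridge-stp off
    bridge-fd 0
```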

Step 5: Complete Installation

  1. Review the summary and click Install
  2. Wait for installation to complete (usually 5-10 minutes on SSDs)
  3. Remove the ISO/USB media
  4. Reboot the server

Step 6: Post-Installation Setup

6.1 Access the Web Interface

Open a browser and go to: https://<server-ip>:8006

Login with user root and the password you set during installation.

6.2 Remove Enterprise Repository (if no subscription)

# Disable enterprise repo
sed -i 's/^deb/# deb/' /etc/apt/sources.list.d/pve-enterprise.list

# Add no-subscription repo (bookworm = Proxmox VE 8.x; match the Debian codename of your release)
echo "deb http://download.proxmox.com/debian/pve bookworm pve-no-subscription" > /etc/apt/sources.list.d/pve-no-subscription.list

# Update packages
apt update && apt full-upgrade -y
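The sed one-liner above simply comments out the enterprise deb line. A sandboxed demonstration of the transform on a throwaway file (so the live repo list is untouched):

```shell
# Apply the same substitution to a temporary copy and show the result
TMP=$(mktemp)
echo "deb https://enterprise.proxmox.com/debian/pve bookworm pve-enterprise" > "$TMP"
sed -i 's/^deb/# deb/' "$TMP"
cat "$TMP"   # the line is now commented out, so apt ignores it
rm -f "$TMP"
```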

6.3 Remove Subscription Nag (optional)

sed -Ezi.bak "s/(Ext\.Msg\.show\(\{.*?title: gettext\('No valid sub)/void({ \/\/\1/g" /usr/share/javascript/proxmox-widget-toolkit/proxmoxlib.js
systemctl restart pveproxy.service

# Note: this patch is overwritten whenever proxmox-widget-toolkit is
# updated and must be re-applied afterwards.

6.4 Configure Storage

Verify your storage is set up correctly:

# Check logical volumes
lvs

# You should see:
#   root   pve  (OS root partition)
#   swap   pve  (swap)
#   data   pve  (VM storage - local-lvm)

In the web UI, go to Datacenter > Storage and verify that both default entries are present: local (directory storage for ISOs, templates, and backups) and local-lvm (LVM-thin storage for VM disks).

6.5 Upload ISO Images

Go to local > ISO Images > Upload and upload your OS ISOs (e.g., Ubuntu Server, Windows Server, etc.)

6.6 Configure Networking

# View current network config
cat /etc/network/interfaces

# The default bridge vmbr0 should already be configured.
# To add an additional VLAN-aware bridge, append to /etc/network/interfaces:
auto vmbr1
iface vmbr1 inet manual
    bridge-ports none
    bridge-stp off
    bridge-fd 0
    bridge-vlan-aware yes
    bridge-vids 2-4094

# Apply the change without rebooting
ifreload -a

6.7 Enable Firewall (recommended)

In the web UI: Datacenter > Firewall > Options. Before enabling the firewall, add rules that allow at least:

  • TCP 8006 (Proxmox web UI)
  • TCP 22 (SSH)
  • ICMP (ping, optional)

Then enable the firewall. Adding the allow rules first avoids locking yourself out of the web UI.
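The datacenter firewall can also be configured in a file. A minimal sketch allowing the web UI (TCP 8006), SSH (TCP 22), and ping, assuming the file /etc/pve/firewall/cluster.fw and a placeholder management network of 203.0.113.0/24:

```
[OPTIONS]
enable: 1

[RULES]
IN ACCEPT -source 203.0.113.0/24 -p tcp -dport 8006 # web UI
IN ACCEPT -source 203.0.113.0/24 -p tcp -dport 22 # SSH
IN ACCEPT -p icmp # ping
```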

6.8 Set Up Email Notifications

apt install -y libsasl2-modules

# Configure postfix for relay (e.g., via Gmail or SMTP relay)
cat >> /etc/postfix/main.cf <<EOF
relayhost = [smtp.gmail.com]:587
smtp_use_tls = yes
smtp_sasl_auth_enable = yes
smtp_sasl_security_options = noanonymous
smtp_sasl_password_maps = hash:/etc/postfix/sasl_passwd
smtp_tls_CAfile = /etc/ssl/certs/ca-certificates.crt
EOF

# Add credentials
echo "[smtp.gmail.com]:587 user@gmail.com:app-password" > /etc/postfix/sasl_passwd
postmap /etc/postfix/sasl_passwd
chmod 600 /etc/postfix/sasl_passwd /etc/postfix/sasl_passwd.db
systemctl restart postfix

# Send a test message to confirm the relay works
echo "Proxmox mail relay test" | sendmail root

Approach 2: NVMe Software RAID 1 (OS) + Hardware RAID 10 (VM Storage)

This approach separates the OS and VM storage onto different disk arrays for better performance and reliability. The OS runs on a fast NVMe software RAID 1 mirror, while VMs are stored on a hardware RAID 10 array of SSDs.

Prerequisites

  • Two NVMe drives (M.2 or U.2, connected directly to the motherboard) for the OS
  • 4 or 6 SSDs of equal size attached to a hardware RAID controller for VM storage
  • The Proxmox VE ISO and IPMI/iDRAC (or USB) access for installation

Step 1: Configure Hardware RAID 10 for SSDs

  1. Access the RAID controller BIOS during boot
  2. Create a RAID 10 virtual disk using the 4 or 6 SSD drives (same settings as Approach 1)
  3. Important: The NVMe drives are NOT connected to the RAID controller - they connect directly to the motherboard via M.2 or U.2 slots, so they will not appear in the RAID configuration
  4. Save and exit

Step 2: Install Proxmox VE on NVMe (ZFS Mirror)

  1. Boot from the Proxmox VE ISO
  2. Select "Install Proxmox VE (Graphical)"
  3. Accept the EULA
  4. At the disk selection screen, click Options:
    • Filesystem: ZFS (RAID1)
    • Harddisk 0: Select first NVMe drive (e.g., /dev/nvme0n1)
    • Harddisk 1: Select second NVMe drive (e.g., /dev/nvme1n1)
    • ashift: 12 (for most NVMe drives with 4K sectors)
    • compress: lz4
    • checksum: on
    • hdsize: Use the full NVMe capacity or set a limit
  5. Complete the network and password configuration (same as Approach 1, Steps 4-5)
  6. Install and reboot

Step 3: Verify ZFS Mirror Status

# Check ZFS pool status
zpool status rpool

# Expected output:
#   pool: rpool
#   state: ONLINE
#   config:
#     NAME                                    STATE
#     rpool                                   ONLINE
#       mirror-0                              ONLINE
#         /dev/disk/by-id/nvme-DRIVE1-part3   ONLINE
#         /dev/disk/by-id/nvme-DRIVE2-part3   ONLINE

# Check ZFS datasets
zfs list

Step 4: Configure Hardware RAID 10 Array as VM Storage

4.1 Identify the RAID Virtual Disk

# List all block devices
lsblk

# The hardware RAID 10 array will show as a single device, e.g., /dev/sda
# The NVMe drives will show as /dev/nvme0n1 and /dev/nvme1n1

# Confirm the RAID disk (note: individual drives behind a hardware
# controller may need e.g. -d megaraid,N or -d cciss,N to be queried)
smartctl -a /dev/sda | head -20

4.2 Option A: Add as LVM Storage (Recommended)

# Create a physical volume on the RAID array
pvcreate /dev/sda

# Create a volume group
vgcreate vmdata /dev/sda

# Create a thin pool using 95% of the volume group (leaves headroom for
# thin-pool metadata and snapshots; -l %VG avoids any size arithmetic)
lvcreate -l 95%VG -T vmdata/vm-pool

# Verify
lvs

Add to Proxmox via web UI:

  1. Go to Datacenter > Storage > Add > LVM-Thin
  2. ID: ssd-storage
  3. Volume Group: vmdata
  4. Thin Pool: vm-pool
  5. Content: Disk Image, Container
  6. Click Add

4.3 Option B: Add as ZFS Storage

If you prefer ZFS features on top of the hardware RAID (note: this adds some overhead but gives you snapshots, compression, etc.):

# Create a ZFS pool on the RAID virtual disk
zpool create -f -o ashift=12 vmpool /dev/sda

# Enable compression
zfs set compression=lz4 vmpool

# Disable atime for better performance
zfs set atime=off vmpool

# Add to Proxmox via web UI:
# Datacenter > Storage > Add > ZFS
# ID: ssd-storage
# ZFS Pool: vmpool
# Content: Disk Image, Container

Step 5: Post-Installation Setup

Follow the same post-installation steps as Approach 1 (Steps 6.1 through 6.8).

Step 6: Verify Final Storage Layout

# Check all storage
pvesm status

# Expected output:
# Name          Type     Status  Total       Used      Available  %
# local         dir      active  XXXXX       XXXXX     XXXXX      X%
# local-zfs     zfspool  active  XXXXX       XXXXX     XXXXX      X%
# ssd-storage   lvmthin  active  XXXXX       XXXXX     XXXXX      X%

Your Proxmox node should now have:

  • The OS on the NVMe ZFS mirror (local and local-zfs)
  • VM and container disks on the hardware RAID 10 array (ssd-storage)


Common Post-Install Tasks (Both Approaches)

Create Your First VM

  1. Upload an ISO to local > ISO Images
  2. Click Create VM in the top right
  3. Configure: Name, ISO, OS type, disk (select appropriate storage), CPU, Memory, Network
  4. Start the VM and open the console to complete OS installation

Create a Container (LXC)

  1. Download a template: local > CT Templates > Templates
  2. Click Create CT in the top right
  3. Configure: Hostname, password, template, storage, CPU, memory, network
  4. Start the container

Set Up Backups

  1. Go to Datacenter > Backup
  2. Click Add to create a backup job
  3. Set schedule (e.g., daily at 2:00 AM)
  4. Select VMs/containers to back up
  5. Choose storage destination and compression (zstd recommended)
  6. Set retention policy (e.g., keep last 7 daily backups)
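Backup jobs created in the web UI are stored in /etc/pve/jobs.cfg. A sketch of what a daily 2:00 AM job with zstd compression and 7-day retention might look like (the job ID backup-daily is arbitrary):

```
vzdump: backup-daily
        schedule 02:00
        all 1
        storage local
        mode snapshot
        compress zstd
        prune-backups keep-daily=7
        enabled 1
```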

Monitoring

# Check cluster/node status
pvesh get /nodes/$(hostname)/status

# Check storage health
zpool status     # if using ZFS
pvesm status     # all storage overview

# Check RAID status (hardware)
# Dell: omreport storage vdisk
# HP: ssacli ctrl slot=0 show config