Proxmox VE Installation
This guide covers two approaches to installing Proxmox Virtual Environment (VE) from scratch, depending on your server's storage configuration.
Approach 1: Hardware RAID 10 with SSD Drives
This is the simpler approach where the entire Proxmox installation (OS + VM storage) runs on a single hardware RAID 10 array using 4 or 6 SSD drives.
Prerequisites
- Dedicated server with a hardware RAID controller (e.g., Dell PERC, LSI MegaRAID, HP Smart Array)
- 4 or 6 x SSD drives (same model and capacity recommended)
- Proxmox VE ISO (download from proxmox.com/en/downloads)
- IPMI/iLO/iDRAC access for remote console and ISO mounting
- Minimum 16 GB RAM (64 GB+ recommended for production)
Step 1: Configure Hardware RAID 10
- Access the RAID controller BIOS during server boot (usually Ctrl+R for Dell PERC, or through the UEFI setup)
- Delete any existing virtual disks / foreign configurations
- Create a new Virtual Disk:
- RAID Level: RAID 10
- Drives: Select all 4 (or 6) SSD drives
- Strip Size: 256 KB (good for virtualization workloads)
- Read Policy: Read Ahead
- Write Policy: Write Back (if battery/capacitor backed) or Write Through
- Disk Cache: Disabled
- Initialize the virtual disk (Fast Init is fine)
- Confirm the virtual disk shows as Optimal status
- Save and exit the RAID configuration
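Before committing, it is worth sanity-checking the expected usable capacity: RAID 10 mirrors drives in pairs and stripes across the mirrors, so usable space is half the raw total. A quick sketch (the 960 GB drive size is a placeholder):

```shell
# RAID 10 usable capacity: drives are mirrored in pairs, so usable
# space is (number of drives / 2) * per-drive capacity.
raid10_usable_gb() {
  local drives=$1 drive_gb=$2
  echo $(( drives / 2 * drive_gb ))
}

raid10_usable_gb 4 960   # 4 x 960 GB SSDs -> 1920 GB usable
raid10_usable_gb 6 960   # 6 x 960 GB SSDs -> 2880 GB usable
```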
Step 2: Boot from Proxmox VE ISO
- Mount the Proxmox VE ISO via IPMI virtual media (or USB)
- Set the boot order to boot from the virtual CD/USB drive
- Reboot the server and select "Install Proxmox VE (Graphical)"
- Accept the EULA
Step 3: Select Target Disk
- The installer will show your RAID 10 virtual disk as a single drive (e.g., /dev/sda)
- Select this disk as the installation target
- Click Options to configure the disk layout:
- Filesystem: ext4 (recommended on hardware RAID controllers); the installer sees the array as a single disk, so to get ZFS features on top you would choose ZFS (RAID0) on that one virtual disk (note that ZFS on hardware RAID is generally discouraged, making ext4 the safer default)
- hdsize: Use full disk or set a specific size
- swapsize: 8 GB (or match RAM if hibernation is needed)
- maxroot: 100 GB (for the OS root partition)
- minfree: 16 GB
- maxvz: Remaining space (this becomes local-lvm for VM storage)
- Click OK and proceed
Step 4: Network and Password Configuration
- Country/Timezone: Set appropriately (e.g., Canada, America/Toronto)
- Password: Set a strong root password
- Email: Enter admin notification email address
- Management Interface: Select the primary NIC
- Hostname: Set FQDN (e.g., pve1.4goodhosting.com)
- IP Address: Set the server's static IP
- Netmask: Usually 255.255.255.0 (/24)
- Gateway: Set the default gateway
- DNS Server: Set DNS (e.g., 8.8.8.8 or your internal DNS)
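The installer writes these settings to /etc/network/interfaces, bridging the management NIC as vmbr0. A sketch of the typical result, with placeholder interface name and addresses:

```
auto lo
iface lo inet loopback

iface eno1 inet manual

auto vmbr0
iface vmbr0 inet static
        address 192.0.2.10/24
        gateway 192.0.2.1
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0
```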
Step 5: Complete Installation
- Review the summary and click Install
- Wait for installation to complete (usually 5-10 minutes on SSDs)
- Remove the ISO/USB media
- Reboot the server
Step 6: Post-Installation Setup
6.1 Access the Web Interface
Open a browser and go to: https://<server-ip>:8006
Login with user root and the password you set during installation.
6.2 Remove Enterprise Repository (if no subscription)
# Disable enterprise repo
sed -i 's/^deb/# deb/' /etc/apt/sources.list.d/pve-enterprise.list
# Add no-subscription repo ("bookworm" matches Proxmox VE 8 on Debian 12;
# adjust the codename for your release)
echo "deb http://download.proxmox.com/debian/pve bookworm pve-no-subscription" > /etc/apt/sources.list.d/pve-no-subscription.list
# Update packages
apt update && apt full-upgrade -y
6.3 Remove Subscription Nag (optional)
sed -Ezi.bak "s/(Ext\.Msg\.show\(\{.*?title: gettext\('No valid sub)/void({ \/\/\1/g" /usr/share/javascript/proxmox-widget-toolkit/proxmoxlib.js
systemctl restart pveproxy.service
6.4 Configure Storage
Verify your storage is set up correctly:
# Check logical volumes
lvs
# You should see:
# root pve (OS root partition)
# swap pve (swap)
# data pve (VM storage - local-lvm)
In the web UI, go to Datacenter > Storage and verify:
- local - Directory storage for ISOs, backups, templates
- local-lvm - LVM-Thin storage for VM disks and containers
6.5 Upload ISO Images
Go to local > ISO Images > Upload and upload your OS ISOs (e.g., Ubuntu Server, Windows Server, etc.)
6.6 Configure Networking
# View current network config
cat /etc/network/interfaces
# The default bridge vmbr0 should already be configured
# To add additional bridges for VLANs:
auto vmbr1
iface vmbr1 inet manual
bridge-ports none
bridge-stp off
bridge-fd 0
bridge-vlan-aware yes
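With a VLAN-aware bridge, the VLAN is chosen per virtual NIC rather than per bridge. After editing /etc/network/interfaces, changes can be applied live, and a guest NIC tagged; the VM ID 100 and tag 30 below are placeholders:

```
# Apply interface changes without a reboot (ifupdown2)
ifreload -a

# Attach a VM's second NIC to vmbr1 with VLAN tag 30
qm set 100 --net1 virtio,bridge=vmbr1,tag=30
```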
6.7 Enable Firewall (recommended)
In the web UI, go to Datacenter > Firewall > Options. Enable the firewall and configure rules to allow:
- Port 8006 (Proxmox web UI)
- Port 22 (SSH)
- Port 3128 (Spice proxy, if needed)
- Port 5900-5999 (VNC console)
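The same policy can be expressed in the datacenter firewall file /etc/pve/firewall/cluster.fw; a sketch, where the management subnet 192.0.2.0/24 is a placeholder:

```
[OPTIONS]
enable: 1

[RULES]
IN ACCEPT -source 192.0.2.0/24 -p tcp -dport 8006  # web UI
IN ACCEPT -source 192.0.2.0/24 -p tcp -dport 22    # SSH
IN ACCEPT -p tcp -dport 3128                       # SPICE proxy
IN ACCEPT -p tcp -dport 5900:5999                  # VNC console
```

Make sure the rule allowing port 8006 is in place before enabling the firewall, or you can lock yourself out of the web UI.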
6.8 Set Up Email Notifications
apt install -y libsasl2-modules
# Configure postfix for relay (e.g., via Gmail or SMTP relay)
cat >> /etc/postfix/main.cf <<EOF
relayhost = [smtp.gmail.com]:587
smtp_use_tls = yes
smtp_sasl_auth_enable = yes
smtp_sasl_security_options = noanonymous
smtp_sasl_password_maps = hash:/etc/postfix/sasl_passwd
smtp_tls_CAfile = /etc/ssl/certs/ca-certificates.crt
EOF
# Add credentials
echo "[smtp.gmail.com]:587 user@gmail.com:app-password" > /etc/postfix/sasl_passwd
chmod 600 /etc/postfix/sasl_passwd
postmap /etc/postfix/sasl_passwd
systemctl restart postfix
Approach 2: NVMe Software RAID 1 (OS) + Hardware RAID 10 (VM Storage)
This approach separates the OS and VM storage onto different disk arrays for better performance and reliability. The OS runs on a fast NVMe software RAID 1 mirror, while VMs are stored on a hardware RAID 10 array of SSDs.
Prerequisites
- Dedicated server with a hardware RAID controller
- 2 x NVMe drives (for OS - software RAID 1 mirror)
- 4 or 6 x SSD drives (for VM storage - hardware RAID 10)
- Proxmox VE ISO
- IPMI/iLO/iDRAC access
Step 1: Configure Hardware RAID 10 for SSDs
- Access the RAID controller BIOS during boot
- Create a RAID 10 virtual disk using the 4 or 6 SSD drives (same settings as Approach 1)
- Important: The NVMe drives are NOT connected to the RAID controller - they connect directly to the motherboard via M.2 or U.2 slots, so they will not appear in the RAID configuration
- Save and exit
Step 2: Install Proxmox VE on NVMe (ZFS Mirror)
- Boot from the Proxmox VE ISO
- Select "Install Proxmox VE (Graphical)"
- Accept the EULA
- At the disk selection screen, click Options:
- Filesystem: ZFS (RAID1)
- Harddisk 0: Select first NVMe drive (e.g., /dev/nvme0n1)
- Harddisk 1: Select second NVMe drive (e.g., /dev/nvme1n1)
- ashift: 12 (for most NVMe drives with 4K sectors)
- compress: lz4
- checksum: on
- hdsize: Use the full NVMe capacity or set a limit
- Complete the network and password configuration (same as Approach 1, Steps 4-5)
- Install and reboot
Step 3: Verify ZFS Mirror Status
# Check ZFS pool status
zpool status rpool
# Expected output:
# pool: rpool
# state: ONLINE
# config:
# NAME STATE
# rpool ONLINE
# mirror-0 ONLINE
# /dev/disk/by-id/nvme-DRIVE1-part3 ONLINE
# /dev/disk/by-id/nvme-DRIVE2-part3 ONLINE
# Check ZFS datasets
zfs list
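For scripted monitoring, a small helper can flag any vdev state other than ONLINE in the zpool status output (a sketch; on the host you would feed it the real command's output):

```shell
# Succeeds (exit 0) only if no vdev reports a degraded state.
# Usage on the host: zpool status rpool | check_pool_online
check_pool_online() {
  ! grep -Eq 'DEGRADED|FAULTED|OFFLINE|UNAVAIL|REMOVED'
}
```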
Step 4: Configure Hardware RAID 10 Array as VM Storage
4.1 Identify the RAID Virtual Disk
# List all block devices
lsblk
# The hardware RAID 10 array will show as a single device, e.g., /dev/sda
# The NVMe drives will show as /dev/nvme0n1 and /dev/nvme1n1
# Confirm the RAID disk
smartctl -a /dev/sda | head -20
4.2 Option A: Add as LVM Storage (Recommended)
# Create a physical volume on the RAID array
pvcreate /dev/sda
# Create a volume group
vgcreate vmdata /dev/sda
# Create a thin pool using 95% of the volume group
# (leaves headroom for thin pool metadata; avoids needing bc,
# which is not installed by default)
lvcreate -l 95%VG -T vmdata/vm-pool
# Verify
lvs
Add to Proxmox via web UI:
- Go to Datacenter > Storage > Add > LVM-Thin
- ID: ssd-storage
- Volume Group: vmdata
- Thin Pool: vm-pool
- Content: Disk Image, Container
- Click Add
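If you prefer the shell, the same storage can be registered with pvesm on the Proxmox host instead of the web UI; images,rootdir corresponds to the Disk Image and Container content types:

```
pvesm add lvmthin ssd-storage --vgname vmdata --thinpool vm-pool --content images,rootdir
pvesm status
```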
4.3 Option B: Add as ZFS Storage
If you prefer ZFS features on top of the hardware RAID (note: this adds some overhead but gives you snapshots, compression, etc.):
# Create a ZFS pool on the RAID virtual disk
zpool create -f -o ashift=12 vmpool /dev/sda
# Enable compression
zfs set compression=lz4 vmpool
# Disable atime for better performance
zfs set atime=off vmpool
# Add to Proxmox via web UI:
# Datacenter > Storage > Add > ZFS
# ID: ssd-storage
# ZFS Pool: vmpool
# Content: Disk Image, Container
Step 5: Post-Installation Setup
Follow the same post-installation steps as Approach 1 (Steps 6.1 through 6.8).
Step 6: Verify Final Storage Layout
# Check all storage
pvesm status
# Expected output:
# Name Type Status Total Used Available %
# local dir active XXXXX XXXXX XXXXX X%
# local-zfs zfspool active XXXXX XXXXX XXXXX X%
# ssd-storage lvmthin active XXXXX XXXXX XXXXX X%
Your Proxmox node should now have:
- local - Directory on NVMe (for ISOs, backups, templates)
- local-zfs - ZFS on NVMe mirror (for small/critical VMs)
- ssd-storage - LVM-Thin or ZFS on RAID 10 SSDs (primary VM storage)
Common Post-Install Tasks (Both Approaches)
Create Your First VM
- Upload an ISO to local > ISO Images
- Click Create VM in the top right
- Configure: Name, ISO, OS type, disk (select appropriate storage), CPU, Memory, Network
- Start the VM and open the console to complete OS installation
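The same VM can be created from the shell with qm; a sketch where the VM ID, name, storage, and ISO filename are placeholders:

```
qm create 101 --name web1 --memory 4096 --cores 2 \
  --net0 virtio,bridge=vmbr0 \
  --scsihw virtio-scsi-single --scsi0 local-lvm:32 \
  --cdrom local:iso/ubuntu-24.04-live-server-amd64.iso
qm start 101
```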
Create a Container (LXC)
- Download a template: local > CT Templates > Templates
- Click Create CT in the top right
- Configure: Hostname, password, template, storage, CPU, memory, network
- Start the container
Set Up Backups
- Go to Datacenter > Backup
- Click Add to create a backup job
- Set schedule (e.g., daily at 2:00 AM)
- Select VMs/containers to back up
- Choose storage destination and compression (zstd recommended)
- Set retention policy (e.g., keep last 7 daily backups)
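Jobs created in the UI are stored in /etc/pve/jobs.cfg; a sketch of roughly what the example above looks like there (the job ID and storage name are placeholders):

```
vzdump: backup-daily
        schedule 02:00
        all 1
        storage local
        compress zstd
        mode snapshot
        prune-backups keep-daily=7
        enabled 1
```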
Monitoring
# Check cluster/node status
pvesh get /nodes/$(hostname)/status
# Check storage health
zpool status # if using ZFS
pvesm status # all storage overview
# Check RAID status (hardware)
# Dell: omreport storage vdisk
# HP: ssacli ctrl slot=0 show config
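To close the loop with the email setup above, a small cron entry can alert root when ZFS reports a problem; a sketch, assuming a mail command is available (e.g., from bsd-mailx):

```
# /etc/cron.d/storage-health (sketch)
# "zpool status -x" prints "all pools are healthy" when everything is fine,
# so any other output triggers a full status report by mail
0 * * * * root zpool status -x | grep -qv 'all pools are healthy' && zpool status | mail -s "ZFS alert" root
```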