Technical whitepaper — PodHeitor vSphere for Bacula

Image-level backup via VADP + CBT, CBT-based push replication with 10 DR modes, cross-hypervisor restore (vSphere ↔ Hyper-V ↔ Proxmox), and the VDDK runtime RPM with RPATH=$ORIGIN that avoids polluting the host’s ld.so.

Technical companion to the PodHeitor vSphere plugin page.

1. The problem: stock Bacula doesn’t speak VADP

Bacula Community has, by design, no VADP client and no CBT support. Backing up ESXi VMs therefore reduces to one of:

  • Backup the datastore mounted on the FD host — captures VMDKs in an inconsistent state: no snapshot, no CBT.
  • Bacula Enterprise vSphere plugin — exists, but at enterprise pricing and without cross-site DR replication or 10 failover modes.
  • Standalone Veeam — runs in parallel to Bacula, with separate tooling, RPOs, scripting and licenses. Doubles the TCO.

The PodHeitor vSphere BRC consolidates backup, replication, and cross-hypervisor conversion into a single Bacula plugin with native VADP + VDDK 9.0 integration.

2. Architectural model

Bacula FD → Rust cdylib plugin (.so) → Rust backend binary (PTCOMM pipe)
                                          ↕
                                     vSphere SOAP API (via reqwest)
                                          ↕
                                     VDDK 9.0 (FFI — read/write VMDK blocks)

The .so is a pure-Rust cdylib built from the PodHeitor Rust cdylib workspace (plugin-vsphere crate). It implements the Bacula FD plugin C ABI via the independently re-implemented bacula-fd-abi crate — no Bacula source tree required to build, and no AGPLv3-licensed Bacula objects are linked into the shipped binary. The .so spawns the Rust backend as a subprocess and uses PTCOMM (status_char + 6-digit length + newline + payload) over stdin/stdout.
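The PTCOMM framing described above (status character, 6-digit decimal length, newline, payload) can be sketched as follows. This is an illustrative model only — the helper names and the example status byte are assumptions, not the plugin's actual API:

```python
def encode_frame(status: bytes, payload: bytes) -> bytes:
    """Encode one PTCOMM-style frame: a single status byte, a
    6-digit decimal payload length, a newline, then the payload."""
    if len(status) != 1:
        raise ValueError("status must be a single byte")
    if len(payload) > 999_999:
        raise ValueError("payload exceeds the 6-digit length field")
    return status + b"%06d" % len(payload) + b"\n" + payload

def decode_frame(buf: bytes) -> tuple[bytes, bytes, bytes]:
    """Decode one frame from the front of buf.
    Returns (status, payload, remaining_bytes)."""
    status, length, sep = buf[:1], buf[1:7], buf[7:8]
    if sep != b"\n":
        raise ValueError("malformed frame header")
    n = int(length)
    payload = buf[8:8 + n]
    if len(payload) != n:
        raise ValueError("short frame")
    return status, payload, buf[8 + n:]
```

A fixed-width length field like this lets the receiver read exactly 8 header bytes before knowing how much payload to consume, which keeps the stdin/stdout pipe parsing trivial.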

3. VADP transport modes

VADP supports four transport modes for reading VMDKs; the plugin exposes them via the transport parameter:

Mode              When to use                                  Tradeoff
nbd               Lab, no TLS required                         Cleartext; easy debug
nbdssl (default)  Production without shared SAN                TLS over NBD; acceptable CPU overhead
hotadd            FD running as a VM on the same ESXi cluster  Target VM disks hot-added to the FD VM; LAN-free
san               FD with direct access to the VM's SAN/iSCSI  Pure LAN-free; max throughput
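VDDK connection calls conventionally accept a colon-separated transport preference list (e.g. "san:hotadd:nbdssl:nbd") and fall back down the list; whether the plugin forwards the transport parameter verbatim is an assumption here, and pick_transport is a hypothetical helper used only to illustrate the fallback idea:

```python
def pick_transport(preference: str, available: set[str]) -> str:
    """Return the first transport in a colon-separated preference
    list that the current environment actually offers."""
    for mode in preference.split(":"):
        if mode in available:
            return mode
    raise RuntimeError(f"no usable transport in {preference!r}")
```

For example, an FD with no SAN zoning and no hotadd capability would silently land on nbdssl — exactly the kind of fallback worth logging in production.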

4. CBT — Changed Block Tracking

VADP CBT is the VMware feature that lets a client ask “which blocks have changed since ChangeId X?”. The plugin uses it on every Incremental/Differential:

  1. Consistent VM snapshot (quiesce=true by default — VMware Tools quiesces the guest filesystem).
  2. CBT query against the prior snapshot — returns list of {offset, length} extents.
  3. Read those extents only via VDDK.
  4. Stream over PTCOMM with offsets preserved.
  5. Remove snapshot (with optional snapshot_delete_delay for big commits).

The keep_cbt=true parameter avoids resetting CBT state after backup — useful when multiple Bacula jobs consume the same CBT in parallel.
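Steps 2–4 above can be illustrated with a toy in-memory model: given the {offset, length} extents a CBT query returns, only those byte ranges are copied, with offsets preserved. apply_extents is a hypothetical helper for illustration, not plugin code:

```python
def apply_extents(source: bytes, replica: bytearray,
                  extents: list[tuple[int, int]]) -> int:
    """Copy each changed (offset, length) extent from the source
    disk image onto the replica image, preserving offsets.
    Returns the number of bytes actually transferred."""
    moved = 0
    for offset, length in extents:
        replica[offset:offset + length] = source[offset:offset + length]
        moved += length
    return moved
```

The payoff is visible even in the toy: a 64-byte "disk" with 12 changed bytes moves 12 bytes, not 64 — the same ratio that makes CBT incrementals cheap on multi-TB VMDKs.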

5. CBT-Push replication and the 10 DR modes

The plugin implements CBT-based push replication with a persistent receiver daemon on the DR side. The mode= parameter selects one of ten operations:

Mode                 Function
replication-status   Show replication status and last sync time
seed                 Initial full-disk replication (seed the replica)
cbt-push             Push CBT deltas to the remote replica
failover-test        Boot the replica on an isolated network (non-destructive)
failover-undo        Undo a test failover; power off the replica
failover-planned     Graceful failover: sync → shut down source → boot replica
failover-unplanned   Emergency failover: boot the replica immediately
failover-permanent   Permanent failover: the replica becomes production
failback             Reverse-replicate from the replica back to the source
reprotect            Re-establish replication after failback

The full SRM-style workflow (test → planned failover → reprotect), familiar from VMware Site Recovery Manager and Veeam, is available as plugin modes, drivable from the Bacula Director with no extra product.
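One way to picture the workflow is as a state machine over the mode names. The transitions below are inferred from the table above — they are an illustration of the intended ordering, not enforcement semantics documented for the plugin:

```python
# Inferred (not plugin-documented) allowed transitions between DR modes.
NEXT = {
    "seed":               {"cbt-push"},
    "cbt-push":           {"cbt-push", "failover-test",
                           "failover-planned", "failover-unplanned"},
    "failover-test":      {"failover-undo"},
    "failover-undo":      {"cbt-push"},
    "failover-planned":   {"failback", "failover-permanent"},
    "failover-unplanned": {"failback", "failover-permanent"},
    "failback":           {"reprotect"},
    "reprotect":          {"cbt-push"},
}

def valid_sequence(modes: list[str]) -> bool:
    """Check that each consecutive pair of modes is an allowed step."""
    return all(b in NEXT.get(a, set()) for a, b in zip(modes, modes[1:]))
```

Encoding the flow this way makes it easy to see, for instance, that a test failover must be undone before replication resumes, while a real failover leads to failback and then reprotect.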

5.1 Network mapping and Re-IP

Parameter              Format                          Function
net_map                "source_net=target_net"         Reconfigure replica NICs at failover
reip                   "nic:ip/prefix:gw:dns1,dns2"    Reconfigure the replica's guest IP at failover
storage_map            "source_ds=target_ds"           Datastore mapping
test_failover_network  network name                    Isolated network for test failover (no production pollution)
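A minimal parser for the documented reip format makes the field layout concrete. parse_reip is illustrative only — the plugin's real parser is not shown here, and IPv6 guest addresses would need different handling since they themselves contain colons:

```python
def parse_reip(spec: str) -> dict:
    """Parse the documented reip format "nic:ip/prefix:gw:dns1,dns2"
    (IPv4 assumed; the colon-delimited layout breaks for IPv6)."""
    nic, cidr, gw, dns = spec.split(":")
    ip, prefix = cidr.split("/")
    return {"nic": nic, "ip": ip, "prefix": int(prefix),
            "gateway": gw, "dns": dns.split(",")}
```

So a value like "eth0:192.168.10.50/24:192.168.10.1:10.0.0.53,10.0.0.54" re-addresses the replica's first NIC into the DR subnet with two DNS servers.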

5.2 TLS for the DR channel

Supports dr_tls_cert + dr_tls_key (PEM) via rustls; dr_tls_insecure=true accepts self-signed certificates (lab only). A pre-shared token via dr_auth_token is required in all modes.

6. VDDK runtime RPM — why it matters

VDDK is distributed by VMware as a tarball with bundled libcrypto.so.3/libssl.so.3/libcurl.so.4. Older installs wrote /etc/ld.so.conf.d/vmware-vddk.conf pointing at those libs — making every process on the host (including rpm, dnf, flatpak) load VMware’s libcrypto instead of system OpenSSL. Typical symptom:

flatpak: /usr/lib/vmware-vix-disklib/lib64/libcrypto.so.3: 
    version `OPENSSL_3.4.0' not found (required by /lib64/librpmio.so.9)

The podheitor-vixdisklib-runtime-9.0.1-1.el9.x86_64.rpm fixes this at build time: every .so in /usr/lib/vmware-vix-disklib/lib64 is patched with RPATH=$ORIGIN, so VDDK resolves its siblings from its own directory without polluting the host ld.so. The %post scriptlet removes any stale /etc/ld.so.conf.d/vmware-vddk*.conf, vixlib*.conf, or flatpack*.conf entries and reruns ldconfig.

Air-gapped recovery (host already broken by legacy install):

sudo rm -f /etc/ld.so.conf.d/vmware-vddk.conf \
           /etc/ld.so.conf.d/vmware-vix-disklib.conf \
           /etc/ld.so.conf.d/*vixlib*.conf \
           /etc/ld.so.conf.d/*flatpack*.conf
sudo ldconfig
sudo rpm -Uvh releases/podheitor-vixdisklib-runtime-9.0.1-1.el9.x86_64.rpm

7. Cross-restore (vSphere ↔ Hyper-V ↔ Proxmox)

The plugin participates in two legs of the cross-hypervisor triangle:

  • As source: VMware backups can be restored into Hyper-V or Proxmox via the sister PodHeitor Hyper-V / PodHeitor Proxmox plugins.
  • As target: Hyper-V/Proxmox backups can be restored into vSphere through this plugin (with VHDX → VMDK and qcow2 → VMDK conversion).

8. Compatibility

Component         Supported versions
VMware ESXi       7.0, 8.0, 8.0U3
VMware vCenter    7.0, 8.0
VDDK              8.0.x, 9.0.x
Bacula Community  15.0.x
OS (FD server)    Oracle Linux 9, RHEL 9, Rocky 9, AlmaLinux 9
Architecture      x86_64

9. Lab validation

  • ESXi 8.0U3e standalone host, VDDK 9.0.1 on Oracle Linux 9.5, Bacula Community 15.0.3.
  • 4 CBT-enabled VMs (Alpine, TinyCore, CirrOS, multi-disk).
  • 12/12 replication tests PASSED (April 2026).
  • v1.4.1: lab job 6442 restored 2.147 GB / 4 files in 6s, 0 FD errors; restored Alpine 3.21 VM booted clean.

10. Documented anti-patterns

  • Don’t leave dr_tls_insecure=true in production. Accepts self-signed certs on the DR channel — useful in lab; MITM risk in prod.
  • Don’t use force_san=true without correct SAN zoning. The FD must see the target VM’s LUNs directly; misconfiguration produces confusing I/O errors.
  • Don’t run quiesce=false on DB VMs. Crash-consistent snapshot → DBs need recovery on replica boot/restore.

11. License posture

Plugin under LicenseRef-PodHeitor-Proprietary. Since v1.4.0, the .so is a pure-Rust cdylib — no Bacula AGPLv3 source statically linked. Earlier releases linked C++ pluginlib/metaplugin objects from the Bacula tree — no longer the case. Bindings via the clean-room bacula-fd-abi crate.

Ready to evaluate?

30-day free trial for production vSphere fleets. Guaranteed at minimum 50% discount vs Bacula Enterprise, Veeam or Commvault, with 10 DR modes and cross-hypervisor restore included.

Heitor Faria — Founder, PodHeitor International
[email protected]
☎ +1 (789) 726-1749 · +55 (61) 98268-4220 (WhatsApp)
🔗 PodHeitor vSphere plugin page

