Technical Whitepaper — Version 0.1.0 — May 2026
Author: Heitor Faria · Website: https://podheitor.com · Email: heitor@opentechs.lat · Phone / WhatsApp: +1 786 726-1749 | +55 61 98268-4220
Special offer. Bring your renewal proposal for any commercial enterprise backup platform — Veeam, Commvault, NetBackup, or others. We will prepare a head-to-head proposal targeting at least 50% savings with stronger Firebird-specific functionality. Contact heitor@opentechs.lat for a written quote.
Table of contents
- Executive summary
- Introduction & market context
- Architecture overview
- Backup modes deep dive
- Feature matrix
- Installation guide
- Configuration reference
- FileSet examples
- Sizing & capacity planning
- Performance report
- Compatibility matrix
- Security
- Monitoring
- Troubleshooting guide
- Use cases & deployment scenarios
- Comparison with other approaches
- Roadmap
- Conclusion
- Contact information
- Legal / copyright
1. Executive summary
Firebird is a mature, open-source relational database deployed widely across ERP systems, point-of-sale applications, embedded products, and enterprise applications — with particularly heavy adoption in Latin America, Eastern Europe, and Asia. Despite this footprint, Firebird has historically lacked a first-class backup integration with the open-source backup ecosystem. Administrators have been forced to choose between hand-crafted shell scripts wrapping gbak, premium plugins from commercial backup vendors, or accepting silent backup failures with no verification layer.
The PodHeitor Firebird Backup Plugin for Bacula closes this gap. It delivers a production-grade, Rust-native plugin for Bacula Community 15.0.3+ that integrates directly with Firebird 3.0, 4.0, and 5.0. Three distinct backup modes cover every operational scenario: logical dump via gbak, native page-level nbackup incremental chains, and replay log shipping for near-zero RPO replication. All modes are managed through standard Bacula Job and FileSet configuration — no external scripts, no cron glue, no custom daemons.
From a business perspective, the plugin extends Bacula Community with enterprise-grade Firebird DR capabilities at a fraction of the cost of premium commercial alternatives. Phase-gated validation across 8 development phases and 120 automated test cases has confirmed byte-identical restore accuracy, multi-DB parallel throughput (5.6× speedup), zstd compression (21.4% on-wire ratio), and sub-1% bandwidth throttle drift. For any organisation running Firebird on Linux with Bacula Community, this plugin is the most complete, most cost-effective path to a defensible backup posture.
2. Introduction & market context
2.1 Firebird in production today
Firebird remains the database of choice for millions of installed applications worldwide. It powers major ERP platforms in Latin America (Totvs Protheus and similar), insurance policy systems, logistics and inventory management, and embedded industrial devices. Key characteristics that explain its continued adoption include:
- Zero-cost licensing with no per-seat or per-core fees
- Largely self-managing storage — no dedicated DBA required for routine operation
- ODS (On-Disk Structure) versioning that enables clean migrations across major releases
- Multi-generational architecture that delivers consistent reads without write blocking
- Extremely low memory footprint — production Firebird instances frequently run in under 256 MB RAM
The same characteristics that make Firebird attractive also create backup challenges. The database does not implement a WAL (Write-Ahead Log) stream consumable by generic log-shipping tools. Its on-disk format varies across ODS versions (ODS 12 for FB 3.0, ODS 13 for FB 4.0, ODS 13.1 for FB 5.0). Native backup tools — gbak for logical dumps and nbackup for page-level backups — exist and are robust, but they speak proprietary protocols that no standard open-source backup platform integrated until now.
2.2 Why existing approaches fall short
| Tool | Firebird support | Verdict |
|---|---|---|
| Bacula Community (native, no plugin) | File-level only — backs up the raw .fdb file while the DB is live | Unsafe — data-corruption risk if the DB is busy |
| Veeam | No native Firebird agent | Requires OS-level quiesce scripts; no backup-engine verification |
| Commvault | No native Firebird iDataAgent | Custom scripted integration only |
| Amanda / Bareos | File-level only | Same risk as Bacula Community native file backup |
| Custom shell scripts | Unreliable, no retry, no monitoring, no catalog | Not production-grade |
The gap is clear: until now, no open-source-compatible backup solution integrated with Firebird at the engine level. The PodHeitor plugin fills this gap.
2.3 The PodHeitor approach
The plugin follows the same design philosophy as the broader PodHeitor plugin family: Rust-native implementation, phase-gated development with automated regression tests, zero runtime dependencies beyond the target database tooling, and a PTCOMM protocol architecture that keeps the cdylib/backend split insulated from Bacula internal API changes.
3. Architecture overview
3.1 Two-component design
The plugin is composed of two binaries that ship in the same package:
| Component | File | Role |
|---|---|---|
| Bacula FD plugin (cdylib) | /opt/bacula/plugins/podheitor-firebird-fd.so | Loaded by bacula-fd at runtime; implements the Bacula plugin API |
| Backend binary | /opt/bacula/bin/podheitor-firebird-backend | Forked per job by the cdylib; performs all Firebird interaction |
This separation is intentional and provides three key advantages:
- Isolation. The cdylib is minimal and stable. All logic that touches Firebird tools (gbak, nbackup, gfix, Services API) lives in the backend. A crash or hang in the backend cannot corrupt the Bacula FD process.
- Upgradability. The backend can be updated without restarting bacula-fd. Only the cdylib touches the Bacula plugin ABI.
- Testability. The backend binary can be exercised directly in integration tests without involving Bacula at all.
3.2 PTCOMM protocol
The cdylib and backend communicate over the child process’s stdin/stdout using PTCOMM (PodHeitor Transport Communications), a length-tagged binary framing protocol:
┌────────────────────────────────────────────────────────┐
│ PTCOMM Frame (8-byte header + payload) │
│ │
│ Offset Size Field │
│ ────── ──── ───── │
│ 0 4 Magic (0x50544300) │
│ 4 4 Payload length (u32, big-endian) │
│ 8 N Payload (JSON or binary blob) │
└────────────────────────────────────────────────────────┘
Messages are JSON-serialised Rust enums:
- BackupRequest { db, mode, level, options... } — cdylib → backend, once per job
- BackupChunk { seq, data: bytes } — backend → cdylib, streamed for each data block
- BackupComplete { size, checksum, metadata } — backend → cdylib, end of backup stream
- RestoreRequest { db, restore_path, mode, options... } — cdylib → backend
- RestoreChunk { seq, data: bytes } — cdylib → backend, streamed restore data
- RestoreComplete { rows_verified } — backend → cdylib, end of restore
- ErrorResponse { code, message } — either direction, on failure
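For illustration, the header layout can be checked by hand against a captured backend stream. The sketch below is not a tool shipped with the plugin; the capture path is hypothetical and only the 8-byte header format from the diagram above is assumed.
# Decode the first PTCOMM frame header from a captured stream (illustrative;
# /tmp/ptcomm.capture is a hypothetical dump of the backend's stdout)
capture=/tmp/ptcomm.capture
# Bytes 0-3: magic, expected 50 54 43 00
magic=$(od -An -tx1 -j0 -N4 "$capture" | tr -d ' \n')
# Bytes 4-7: payload length, u32 big-endian
length=$((16#$(od -An -tx1 -j4 -N4 "$capture" | tr -d ' \n')))
[ "$magic" = "50544300" ] && echo "valid PTCOMM frame, payload ${length} bytes" || echo "unexpected magic: ${magic}"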
3.3 Architecture diagram
┌─────────────────────────────────────────────────────────────┐
│ Bacula File Daemon (bacula-fd) │
│ │
│ ┌─────────────────────────────────────────────────────┐ │
│ │ podheitor-firebird-fd.so (cdylib) │ │
│ │ │ │
│ │ bEventJobStart ──► fork backend ──► write job opts │ │
│ │ bEventBackupCommand ──► send BackupRequest │ │
│ │ bEventGetMoreData ◄── receive BackupChunk │ │
│ │ bEventEndBackupJob ──► read BackupComplete │ │
│ │ bEventRestoreCommand ──► send RestoreRequest │ │
│ │ bEventSetFileAttributes ◄── write restored file │ │
│ └───────────────────────┬─────────────────────────────┘ │
│ │ stdin/stdout (PTCOMM) │
│ ┌───────────────────────▼─────────────────────────────┐ │
│ │ podheitor-firebird-backend (subprocess) │ │
│ │ │ │
│ │ ┌──────────┐ ┌──────────┐ ┌──────────────────┐ │ │
│ │ │ DumpMode │ │NbackupMod│ │ ReplayMode │ │ │
│ │ │ (gbak) │ │(nbackup) │ │ (gbak+log ship) │ │ │
│ │ └────┬─────┘ └────┬─────┘ └────────┬─────────┘ │ │
│ │ │ │ │ │ │
│ │ ┌────▼──────────────▼──────────────────▼────────┐ │ │
│ │ │ Firebird Engine (gbak / nbackup / gfix / │ │ │
│ │ │ Services API / Embedded) │ │ │
│ │ └────────────────────────────────────────────────┘ │ │
│ │ │ │
│ │ State dir: /opt/bacula/working/firebird-state/ │ │
│ │ Config: /opt/bacula/etc/podheitor-firebird.conf │ │
│ └─────────────────────────────────────────────────────┘ │
└─────────────────────────────────────────────────────────────┘
│
│ Bacula protocol (TCP)
▼
┌──────────────────┐
│ Bacula Storage │
│ Daemon (bacula- │
│ sd) + Catalog │
└──────────────────┘
3.4 State management
The nbackup mode requires persistent state to track which incremental chain levels have been completed. This state is stored in /opt/bacula/working/firebird-state/ as per-database JSON manifest files:
/opt/bacula/working/firebird-state/
├── prod.fdb.chain.json # nbackup chain manifest
├── employee.fdb.chain.json
└── inventory.fdb.chain.json
Each manifest records:
{
"db_path": "/var/firebird/prod.fdb",
"chain": [
{ "level": 0, "job_id": 100, "size": 1048576, "checksum": "sha256:...", "timestamp": "2026-05-01T02:00:00Z" },
{ "level": 1, "job_id": 101, "size": 204800, "checksum": "sha256:...", "timestamp": "2026-05-02T02:00:00Z" }
]
}
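As a quick operational check, a manifest can be inspected directly. The sketch below assumes the jq utility is installed and uses the example layout above.
# List the recorded chain levels, job IDs and timestamps for one database (requires jq)
jq -r '.chain[] | "L\(.level)  job=\(.job_id)  \(.timestamp)"' \
   /opt/bacula/working/firebird-state/prod.fdb.chain.json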
4. Backup modes deep dive
4.1 dump mode — gbak logical backup
How it works
The dump mode invokes Firebird’s gbak utility to produce a portable logical backup. gbak connects to the Firebird server (or embedded engine), reads all data pages in a consistent snapshot, and writes a platform-independent .fbk stream. The plugin captures this stream, optionally compresses it, and ships it to Bacula as a single virtual file object.
Firebird DB ──► gbak ──► stdout stream ──► (zstd compress) ──► PTCOMM chunk stream ──► Bacula
Parallel workers can be specified (workers=N) to back up multiple databases simultaneously within a single Bacula job.
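For reference, dump mode is roughly equivalent to the manual pipeline below. This is an illustration only: the plugin streams the gbak output over PTCOMM instead of writing a file, and it reads credentials from .fbpass rather than the command line.
# Approximate manual equivalent of dump mode with zstd level 3
# (password on the command line is shown only for illustration)
gbak -b -user SYSDBA -password masterkey \
     localhost:/var/firebird/employee.fdb stdout | zstd -3 > /tmp/employee.fbk.zst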
When to use
- Full nightly backups of small-to-medium databases (< 50 GB uncompressed)
- Cross-version migration (the gbak .fbk format is portable across FB 3/4/5 ODS versions)
- Any scenario requiring logical consistency over raw page fidelity
- Disaster recovery where a clean, importable backup is required with no chain dependencies
Pros and cons
| Aspect | Assessment |
|---|---|
| Portability | Excellent — .fbk restores to any FB version |
| Consistency | Excellent — gbak takes a consistent snapshot |
| Speed | Moderate — reads all pages sequentially |
| Size | Moderate — logical data; zstd compression helps |
| Chain dependency | None — every backup is self-contained |
| Restore simplicity | Excellent — single gbak -r invocation |
| Large DB suitability | Moderate — full backup time grows linearly with DB size |
| Parallel support | Yes — multiple DBs in one job |
4.2 nbackup mode — native page-level incremental chain
How it works
The nbackup mode uses Firebird’s built-in page-level backup facility. It operates in four levels (L0 through L3):
- Level 0 (L0): full page-level snapshot of the database
- Level 1 (L1): pages changed since the last L0
- Level 2 (L2): pages changed since the last L1
- Level 3 (L3): pages changed since the last L2
Firebird tracks changed pages internally via a difference file mechanism. The plugin manages the chain manifest, invokes nbackup -b <level> for the appropriate level, and records the result. On restore, the plugin merges the chain using nbackup -r in strict level order (L0 → L1 → L2 → L3).
Day 1: nbackup -b 0 ──► L0.nbk (full page snapshot)
Day 2: nbackup -b 1 ──► L1.nbk (delta from L0)
Day 3: nbackup -b 2 ──► L2.nbk (delta from L1)
Day 4: nbackup -b 3 ──► L3.nbk (delta from L2)
Day 5: nbackup -b 0 ──► new L0 (chain resets)
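The merge performed on restore is roughly equivalent to the manual nbackup invocation below; the .nbk paths are hypothetical locations of the files restored by Bacula.
# Manual equivalent of the plugin's chain merge (illustrative)
nbackup -R /var/firebird/prod-restored.fdb \
        /tmp/restore/L0.nbk /tmp/restore/L1.nbk /tmp/restore/L2.nbk /tmp/restore/L3.nbk
# Optional integrity check on the merged database
gfix -v -full -user SYSDBA -password masterkey /var/firebird/prod-restored.fdb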
When to use
- Large databases where full gbak every night is too slow or too large
- Environments with defined RTO/RPO windows that benefit from incremental chains
- When disk or bandwidth budget is limited
- When the database is on the same host or accessible via Services API
Pros and cons
| Aspect | Assessment |
|---|---|
| Portability | Low — ODS-specific; must restore to same Firebird major version |
| Consistency | Excellent — Firebird guarantees page-level consistency |
| Speed | Excellent — L1/L2/L3 back up only changed pages |
| Size | Excellent — incremental deltas are small |
| Chain dependency | Required — restore needs complete L0→Ln chain |
| Restore simplicity | Moderate — must restore in chain order |
| Large DB suitability | Excellent — incremental chain scales to multi-GB DBs |
| Parallel support | Per-level, per-DB |
4.3 replay mode — gbak baseline + journal log shipping
How it works
The replay mode implements a two-tier strategy:
- Baseline. A gbak full logical dump is taken periodically (configurable).
- Journal segments. Firebird replication journal files produced by the replication subsystem (FB 4+) are collected from replay_log_dir, shipped to Bacula, and on restore deposited in replay_dest_dir for replay.
This pattern mirrors the WAL-shipping approach used in PostgreSQL streaming replication, adapted to Firebird’s journal mechanism.
Firebird (primary)
├── gbak baseline ──────────────────────────────────────────► Bacula catalog
└── journal segments (repllog.*.journal) ──► PTCOMM stream ──► Bacula catalog
Restore:
gbak -r baseline ──► fresh DB ──► apply journal segments ──► consistent standby
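Expressed with standard tooling, the restore sequence looks roughly like the sketch below. Paths are hypothetical, and the final step assumes the standby's replication configuration points at the directory where the journals are deposited.
# 1. Recreate the database from the gbak baseline restored by Bacula
gbak -c -user SYSDBA -password masterkey \
     /tmp/restore/baseline.fbk /var/firebird/standby.fdb
# 2. Deposit the restored journal segments where the standby expects them;
#    Firebird replication then applies them in order
cp /tmp/restore/repllog.*.journal /var/firebird/replay/standby/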
When to use
- Near-zero RPO requirements where incremental nbackup chains are still too coarse
- Standby replication and DR scenarios
- Firebird 4.0 and 5.0 installations with replication already enabled
- Multi-site DR where journals can be applied on a remote standby
Pros and cons
| Aspect | Assessment |
|---|---|
| RPO | Excellent — sub-minute with frequent journal shipping |
| Portability | Moderate — baseline is portable; journals are version-specific |
| Consistency | Excellent — journals are atomic segments |
| Setup complexity | High — requires Firebird replication to be configured |
| Restore complexity | Moderate — replay requires ordered journal application |
| Large DB suitability | Excellent — journals are small regardless of DB size |
| FB version requirement | Replication available in FB 4.0+ |
5. Feature matrix
| Feature | dump | nbackup | replay |
|---|---|---|---|
| Full backup | Yes | Yes (L0) | Yes (baseline) |
| Incremental backup | No | Yes (L1/L2/L3) | Yes (journals) |
| zstd compression | Yes | Yes | Yes |
| lz4 compression | Yes | Yes | Yes |
| Bandwidth throttle | Yes | Yes | Yes |
| Parallel workers | Yes | No | No |
| Multi-DB per job | Yes | Yes | Yes |
| Cross-version restore | Yes | No | Partial |
| Services API | Yes | Yes | Yes |
| Embedded Firebird | Yes | Yes | No |
| gfix post-restore | Yes | Yes | No |
| Encryption passphrase | Yes | Yes | Yes |
| Prometheus metrics | Yes | Yes | Yes |
| Chain state management | No | Yes | No |
| gc-chain CLI | No | Yes | No |
| Byte-identical restore | Yes | Yes | Yes |
| RPM package | Yes | Yes | Yes |
| DEB package | Yes | Yes | Yes |
| FB 3.0 support | Yes | Yes | No |
| FB 4.0 support | Yes | Yes | Yes |
| FB 5.0 support | Yes | Yes | Yes |
| Parallel gbak workers (FB 5) | Yes | No | No |
| Journal log shipping | No | No | Yes |
| Near-zero RPO | No | Low | Yes |
6. Installation guide
6.1 Prerequisites
- Bacula Community 15.0.3 or later is installed and bacula-fd is running
- Firebird 3.0, 4.0, or 5.0 server (or embedded) is installed
- gbak and nbackup binaries are in $PATH (or full paths are specified in the plugin config)
- OS: RHEL/OL/Rocky 9+ or Ubuntu 22.04+/Debian 12+
- glibc 2.34+
- User bacula exists and has read access to the Firebird database files
6.2 RPM installation (EL9 / OL9 / RHEL9 / Rocky 9)
# 1. Install the RPM
dnf install podheitor-firebird-0.1.0-1.el9.x86_64.rpm
# 2. Verify files are in place
ls -la /opt/bacula/plugins/podheitor-firebird-fd.so
ls -la /opt/bacula/bin/podheitor-firebird-backend
# 3. Check state directory was created
ls -la /opt/bacula/working/firebird-state/
# 4. Restart bacula-fd to load the new plugin
systemctl restart bacula-fd
# 5. Confirm plugin loaded (look for "podheitor-firebird" in log)
journalctl -u bacula-fd --since "1 minute ago" | grep podheitor
6.3 DEB installation (Ubuntu 22.04 / Debian 12)
# 1. Install the DEB
apt install ./podheitor-firebird_0.1.0-1_amd64.deb
# 2. Verify files are in place
ls -la /opt/bacula/plugins/podheitor-firebird-fd.so
ls -la /opt/bacula/bin/podheitor-firebird-backend
# 3. Check state directory was created
ls -la /opt/bacula/working/firebird-state/
# 4. Restart bacula-fd
systemctl restart bacula-fd
# 5. Confirm plugin loaded
journalctl -u bacula-fd --since "1 minute ago" | grep podheitor
6.4 Credentials setup
The plugin reads the Firebird password from a credentials file to avoid embedding passwords in Bacula configuration:
# Create the credentials file
cat > /opt/bacula/etc/.fbpass << 'EOF'
SYSDBA=masterkey
EOF
# Secure it
chown bacula:bacula /opt/bacula/etc/.fbpass
chmod 600 /opt/bacula/etc/.fbpass
6.5 Plugin configuration file
Create /opt/bacula/etc/podheitor-firebird.conf:
# PodHeitor Firebird Plugin configuration
[defaults]
fb_user = "SYSDBA"
fb_host = "localhost"
fb_port = 3050
compress = "zstd"
compress_level = 3
state_dir = "/opt/bacula/working/firebird-state"
[credentials]
password_file = "/opt/bacula/etc/.fbpass"
6.6 Installation verification
Run a test backup using bconsole:
*run job=FB-Test-Dump level=Full yes
Expected: job status T (terminated normally) with a non-zero bytes-written value.
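If the job does not report status T, the job log can be pulled straight from bconsole; substitute the JobId printed by the run command.
# Review the plugin messages for the test job
echo "list joblog jobid=<jobid>" | bconsole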
7. Configuration reference
7.1 Backup parameters (FileSet Plugin string)
| Parameter | Default | Description |
|---|---|---|
| db | (required) | Database path(s), comma-separated |
| mode | dump | Backup mode: dump / nbackup / replay |
| level | 0 | nbackup level (0–3) |
| workers | 1 | Parallel backup workers (dump mode) |
| compress | zstd | Compression codec: zstd / lz4 / none |
| compress_level | 3 | zstd compression level (1–22) |
| bw_limit_kbps | 0 | Bandwidth cap in KB/s; 0 = unlimited |
| gbak_path | gbak | Full path to gbak binary (if not in $PATH) |
| nbackup_path | nbackup | Full path to nbackup binary (if not in $PATH) |
| fb_user | SYSDBA | Firebird username |
| fb_password | (from .fbpass) | Firebird password (prefer the .fbpass file) |
| fb_host | localhost | Firebird server hostname or IP |
| fb_port | 3050 | Firebird server TCP port |
| services_api | false | Use Services API connection (-se) |
| embedded | false | Use embedded Firebird (no server required) |
| replay_log_dir | (empty) | Source directory for replication journal files |
| replay_dest_dir | (empty) | Restore destination for journal files |
| encrypt_passphrase | (empty) | Encryption passphrase (argv passthrough to gbak) |
| metrics_listen | (empty) | Prometheus metrics bind address (e.g. 0.0.0.0:9182); empty = disabled |
| state_dir | /opt/bacula/working/firebird-state | State directory for nbackup chain manifests |
7.2 Restore parameters (Restore Job Plugin string)
| Parameter | Default | Description |
|---|---|---|
| restore_path | (required) | Target filesystem path for the restored database |
| mode | (from backup) | Must match the backup mode used |
| fb_user | SYSDBA | Firebird username for restore operations |
| fb_password | (from .fbpass) | Firebird password |
| fix_shadow | true | Run gfix -sh after restore to clean shadow files |
| replay_dest_dir | (empty) | Directory to deposit restored journal segments |
8. FileSet examples
8.1 Full logical dump
# Full logical dump backup (gbak mode)
FileSet {
Name = "FB-Dump-Full"
Include {
Plugin = "podheitor-firebird: db=/var/firebird/employee.fdb mode=dump compress=zstd workers=2"
}
}
8.2 nbackup incremental chain
# Level 0 (run weekly or monthly)
FileSet {
Name = "FB-Nbackup-L0"
Include {
Plugin = "podheitor-firebird: db=/var/firebird/prod.fdb mode=nbackup level=0"
}
}
# Level 1 (run nightly)
FileSet {
Name = "FB-Nbackup-L1"
Include {
Plugin = "podheitor-firebird: db=/var/firebird/prod.fdb mode=nbackup level=1"
}
}
8.3 Replication log shipping
# Replay / journal log shipping
FileSet {
Name = "FB-Replay-Incr"
Include {
Plugin = "podheitor-firebird: db=/var/firebird/prod.fdb mode=replay replay_log_dir=/var/firebird/replay/prod"
}
}
8.4 Multi-DB parallel backup
# Four databases backed up in parallel with bandwidth cap
FileSet {
Name = "FB-MultiDB"
Include {
Plugin = "podheitor-firebird: db=/var/firebird/db1.fdb,/var/firebird/db2.fdb,/var/firebird/db3.fdb,/var/firebird/db4.fdb mode=dump workers=4 compress=zstd bw_limit_kbps=51200"
}
}
8.5 Restore job example
Job {
Name = "FB-Restore-Employee"
Type = Restore
Client = firebird-fd
FileSet = "FB-Dump-Full"
Storage = File
Pool = Default
Messages = Standard
Where = /tmp/restore
Plugin = "podheitor-firebird: mode=dump restore_path=/var/firebird/employee-restored.fdb fix_shadow=true"
}
9. Sizing & capacity planning
9.1 Memory requirements
| Scenario | Baseline FD | Per-worker (dump) | nbackup | replay |
|---|---|---|---|---|
| Minimum | 512 MB | +128 MB each | +64 MB | +64 MB |
| 4-worker parallel dump | 512 MB | +512 MB (4×128) | — | — |
| Large DB nbackup chain | 512 MB | — | +64 MB | — |
| Replay log shipping | 512 MB | — | — | +64 MB |
Example. A server running 4-worker parallel dump of 4 databases should have at least 1.5 GB RAM allocated to the bacula-fd process group.
9.2 CPU requirements
| Scenario | Recommended cores |
|---|---|
| Single DB dump | 1 |
| 4-DB parallel dump | 4 (1 per worker) |
| nbackup chain (any level) | 1 |
| replay log shipping | 1 |
| Prometheus metrics enabled | +0.1 (negligible) |
9.3 Disk space — plugin binaries
| File | Size |
|---|---|
| podheitor-firebird-fd.so | ~580 KB |
| podheitor-firebird-backend | ~516 KB |
| Total installation footprint | ~1.1 MB |
9.4 Disk space — state directory
Each database tracked by nbackup mode stores a JSON manifest of approximately 1 KB per chain entry. For an environment with 100 databases and 4-level chains, the state directory requires roughly 400 KB — negligible.
9.5 Backup volume estimations
| Mode | Expected size (vs raw DB) |
|---|---|
| dump (no compression) | 60–80% of DB size |
| dump (zstd level 3) | 20–45% of DB size (21.4% observed in lab) |
| nbackup L0 | ~100% of DB size (pages only) |
| nbackup L1/L2/L3 | 1–30% of DB size (changed pages only) |
| replay journal segments | < 1% of DB size per segment |
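As a back-of-envelope example using the lab-observed zstd ratio (real ratios depend heavily on the data), a quick calculation for a hypothetical 40 GB database with 30-day retention:
# Rough pool sizing for nightly dump-mode backups (illustrative numbers)
db_gb=40; ratio_pct=21; retention_days=30
nightly_gb=$(( db_gb * ratio_pct / 100 ))            # about 8 GB per night
echo "nightly: ~${nightly_gb} GB, 30-day pool: ~$(( nightly_gb * retention_days )) GB"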
10. Performance report
All measurements were taken in a controlled lab environment using Firebird containers (LI-V3.0.13, LI-V4.0.7, LI-V5.0.4) on a KVM/QEMU hypervisor with Bacula Community 15.0.3.
10.1 Phase 1 — single-DB gbak dump + restore
| Metric | Result |
|---|---|
| Database | employee.fdb (Firebird sample) |
| Backup size | 1,536 bytes |
| Job status | T (terminated normally) |
| Restore verification | Byte-identical |
| Bacula JobId | 7929 |
10.2 Phase 1b — restore round-trip data integrity
| Metric | Result |
|---|---|
| Rows written pre-backup | 10 |
| Rows recovered post-restore | 10 |
| Verification method | SELECT COUNT(*) comparison |
| Result | PASS |
10.3 Phase 2 — multi-DB parallel backup
| Metric | Sequential | Parallel (4 workers) | Speedup |
|---|---|---|---|
| Databases | 4–7 | 4–7 | — |
| Wall-clock time | Baseline | Baseline / 5.6 | 5.6× |
| zstd compression ratio | — | 21.4% on-wire | — |
| All jobs status | T | T | — |
10.4 Phase 3 — nbackup chain L0+L1+L2+L3
| Level | Backup size | Status |
|---|---|---|
| L0 | ~875 KB | T |
| L1 | ~612 KB | T |
| L2 | ~1.1 MB | T |
| L3 | ~956 KB | T |
| Total chain | 3.5 MB | T |
| Restore verification | Result |
|---|---|
| Rows in original DB | 13,500 |
| Rows recovered | 13,500 |
| Byte-identical at each level | Yes |
10.5 Phase 4 — FB 5 parallel-worker gbak + bandwidth shaping
| Metric | Result |
|---|---|
| Firebird version | LI-V5.0.4 |
| Parallel gbak workers | Enabled |
| Bandwidth target | 64 KB/s |
| Measured throughput | 64.5 KB/s average |
| Drift from target | ±0.8% |
10.6 Phase 5 — replication log shipping
| Metric | Result |
|---|---|
| Journal segments shipped | 3 |
| gbak baseline | Included |
| MD5 verification | Byte-identical round-trip |
| Job status | T |
10.7 Phase 6 — cross-version migration FB 3→5
| Metric | Result |
|---|---|
| Source | FB 3.0 (ODS 12) |
| Destination | FB 5.0 (ODS 13.1) |
| Rows preserved | 10 |
| ODS upgrade | Automatic via gbak restore |
| gfix check | Clean (no errors) |
10.8 Phase 7 — embedded Firebird + Services API
| Test | Result |
|---|---|
| Embedded Firebird backup | PASS |
| Services API remote backup (-se) | PASS |
| Both acceptance gates | PASS |
10.9 Phase 8 — encryption, metrics, gc-chain CLI, packages
| Feature | Result |
|---|---|
| Encryption argv passthrough | PASS |
| Prometheus /metrics endpoint | PASS |
| gc-chain CLI (nbackup manifest GC) | PASS |
| RPM package build | PASS |
| DEB package build | PASS |
10.10 Test suite summary
| Phase | Tests added | Cumulative total |
|---|---|---|
| Phase 0 (bootstrap) | 0 | 0 |
| Phase 1 (dump E2E) | 6 | 6 |
| Phase 1b (restore round-trip) | 4 | 10 |
| Phase 2 (multi-DB parallel) | 17 | 27 |
| Phase 3 (nbackup chain) | 13 | 40 |
| Phase 4 (FB5 parallel + BW) | 16 | 56 |
| Phase 5 (replay/log shipping) | 6 | 62 |
| Phase 6 (cross-version) | 25 | 87 |
| Phase 7 (embedded + svc API) | 19 | 106 |
| Phase 8 (encrypt + metrics + pkg) | 14 | 120 |
11. Compatibility matrix
11.1 Operating system
| OS | Architecture | Status |
|---|---|---|
| RHEL 9 | x86_64 | Supported |
| Oracle Linux 9 | x86_64 | Supported |
| Rocky Linux 9 | x86_64 | Supported |
| AlmaLinux 9 | x86_64 | Supported |
| Ubuntu 22.04 LTS | x86_64 | Supported |
| Debian 12 | x86_64 | Supported |
| RHEL 8 / CentOS 8 | x86_64 | Not tested (glibc < 2.34) |
| Ubuntu 20.04 | x86_64 | Not tested (glibc < 2.34) |
| ARM64 / aarch64 | any | Not yet available |
11.2 Firebird versions
| Firebird version | ODS | dump | nbackup | replay |
|---|---|---|---|---|
| 3.0.x (LI-V3.0.13 tested) | 12 | Yes | Yes | No |
| 4.0.x (LI-V4.0.7 tested) | 13 | Yes | Yes | Yes |
| 5.0.x (LI-V5.0.4 tested) | 13.1 | Yes | Yes | Yes |
11.3 Bacula versions
| Bacula version | Status |
|---|---|
| Community 15.0.3 | Supported (validated) |
| Community 15.0.x (future) | Expected compatible |
| Community 14.x | Not supported |
| Bacula Enterprise | Not required; the plugin targets Bacula Community |
11.4 System libraries
| Library | Minimum version |
|---|---|
| glibc | 2.34 |
| libpthread | included in glibc |
| libdl | included in glibc |
12. Security
12.1 Credential handling
Firebird passwords are never stored in Bacula Job or FileSet configuration. The recommended approach is a dedicated credentials file:
/opt/bacula/etc/.fbpass
Owner: bacula:bacula
Mode: 0600
Format: USERNAME=password (one per line)
The plugin reads this file at job start, before forking the backend. The password is passed to gbak/nbackup via process environment, not command-line arguments, to prevent exposure in ps output.
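For manual gbak runs outside the plugin, the same effect can be achieved with Firebird's standard ISC_USER / ISC_PASSWORD environment variables. The exact mechanism the plugin uses internally is not shown here; the sketch below is a conventional equivalent.
# Keep the password out of the process list when running gbak by hand
export ISC_USER=SYSDBA
export ISC_PASSWORD="$(grep '^SYSDBA=' /opt/bacula/etc/.fbpass | cut -d= -f2-)"
gbak -b localhost:/var/firebird/prod.fdb /tmp/prod.fbk   # no -user/-password arguments
unset ISC_PASSWORD ISC_USER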
12.2 Encryption passphrase
The encrypt_passphrase parameter passes an encryption key through to gbak's native encryption argv. This is an argv passthrough — the plugin does not implement its own encryption layer. Operators using this feature should ensure:
- The passphrase is stored in the .fbpass file, not inline in the FileSet config
- The passphrase is never logged (the plugin redacts it from all log output)
- The same passphrase is available at restore time
12.3 State directory permissions
The state directory /opt/bacula/working/firebird-state/ is created with:
Owner: bacula:bacula
Mode: 0750
Only the bacula user and members of the bacula group can read or write chain manifests. This prevents unauthorised enumeration of backed-up database paths.
12.4 Backend process isolation
The cdylib forks one backend process per Bacula job. The backend inherits only the credentials and configuration needed for that job. No shared mutable state exists between concurrent backup jobs. If the backend crashes or is killed, the cdylib reports a job failure and bacula-fd continues serving other jobs normally.
12.5 Network security
The plugin uses Firebird’s own TCP protocol to connect to the database server. It does not open any additional listening ports. The optional Prometheus metrics endpoint (metrics_listen) is a read-only HTTP endpoint that exposes only operational counters; it does not expose any credentials or data content.
Operators should restrict access to the metrics port using firewall rules:
# Allow metrics access from monitoring host only
firewall-cmd --add-rich-rule='rule family="ipv4" source address="192.168.1.100" port port="9182" protocol="tcp" accept' --permanent
firewall-cmd --reload
13. Monitoring
13.1 Prometheus metrics endpoint
When metrics_listen is set in the plugin string, the backend exposes a /metrics endpoint in Prometheus text format. Example configuration:
Plugin = "podheitor-firebird: db=/var/firebird/prod.fdb mode=dump metrics_listen=0.0.0.0:9182"
13.2 Exposed metrics
| Metric | Type | Description |
|---|---|---|
| podheitor_firebird_backup_jobs_total | Counter | Total backup jobs started |
| podheitor_firebird_backup_bytes_total | Counter | Total bytes streamed to Bacula |
| podheitor_firebird_backup_duration_seconds | Histogram | Per-job backup duration |
| podheitor_firebird_restore_jobs_total | Counter | Total restore jobs started |
| podheitor_firebird_restore_duration_seconds | Histogram | Per-job restore duration |
| podheitor_firebird_nbackup_chain_levels | Gauge | Current nbackup chain depth per DB |
| podheitor_firebird_compression_ratio | Gauge | Most recent compression ratio |
| podheitor_firebird_bandwidth_kbps | Gauge | Instantaneous measured throughput |
| podheitor_firebird_errors_total | Counter | Total error events by error code |
13.3 Prometheus scrape configuration
scrape_configs:
  - job_name: 'podheitor-firebird'
    scrape_interval: 30s
    static_configs:
      - targets: ['firebird-host:9182']
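Once a job with metrics_listen has started, a quick curl against the example target above confirms the endpoint is live:
# Verify the metrics endpoint answers and exposes the plugin's counters
curl -s http://firebird-host:9182/metrics | grep '^podheitor_firebird_' | head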
13.4 Bacula job monitoring
Standard Bacula monitoring applies. The plugin sets appropriate Bacula job status codes:
- T — terminated normally
- E — terminated with errors (plugin logs detail in bacula-fd.log)
- f — fatal error (backend failed to start or crashed)
Check /opt/bacula/log/bacula-fd.log for detailed plugin-level messages.
14. Troubleshooting guide
14.1 Common errors
Plugin not found at startup
Error: cannot open shared object file: No such file or directory
Cause. podheitor-firebird-fd.so is not in the configured plugin directory.
Fix. Verify /opt/bacula/plugins/podheitor-firebird-fd.so exists and check bacula-fd.conf for the correct PluginDirectory directive.
Backend fails to start
podheitor-firebird: failed to fork backend: No such file or directory
Cause. podheitor-firebird-backend binary is missing or not executable.
Fix:
ls -la /opt/bacula/bin/podheitor-firebird-backend
chmod 755 /opt/bacula/bin/podheitor-firebird-backend
Firebird connection refused
podheitor-firebird: gbak error: unavailable database
Cause. Firebird server is not running, or fb_host/fb_port is incorrect.
Fix:
systemctl status firebird
# or for SuperServer:
/opt/firebird/bin/fbguard -pidfile /tmp/firebird.pid &
Wrong password
podheitor-firebird: gbak error: Your user name and password are not defined
Cause. .fbpass file is missing, unreadable, or contains incorrect credentials.
Fix:
cat /opt/bacula/etc/.fbpass # verify content
stat /opt/bacula/etc/.fbpass # verify permissions (must be 600, owner bacula)
nbackup chain corrupted / missing L0
podheitor-firebird: nbackup restore error: cannot find level 0 backup
Cause. The L0 chain entry is missing from the state manifest, or the corresponding Bacula volume has expired.
Fix. Run a new L0 backup job to reset the chain:
*run job=FB-Nbackup-L0 level=Full yes
glibc version mismatch
version 'GLIBC_2.34' not found
Cause. The host glibc is older than 2.34 (common on RHEL 8 / Ubuntu 20.04).
Fix. The plugin requires glibc 2.34+. Upgrade to RHEL 9 / Rocky 9 / Ubuntu 22.04 or later.
14.2 Log locations
| Log | Path |
|---|---|
| bacula-fd main log | /opt/bacula/log/bacula-fd.log |
| Plugin debug output | embedded in bacula-fd.log; prefix podheitor-firebird: |
| Bacula messages | as configured in bacula-fd.conf Messages resource |
| Firebird server log | /opt/firebird/firebird.log |
14.3 Enabling debug logging
Add debug=9 to the Plugin string to enable verbose plugin-level output:
Plugin = "podheitor-firebird: db=/var/firebird/prod.fdb mode=dump debug=9"
This will log every PTCOMM frame, every gbak invocation, and all state transitions to bacula-fd.log.
15. Use cases & deployment scenarios
15.1 ERP system nightly backup
Scenario. A manufacturing company runs Totvs Protheus ERP on Firebird 4.0 with 12 databases ranging from 2 GB to 45 GB. They need nightly full backups with 30-day retention.
Solution.
- Mode: dump with compress=zstd workers=4
- Schedule: nightly at 01:00
- Retention: 30 days in Bacula pool
- Estimated backup window: ~3 hours for all 12 DBs in parallel batches of 4
FileSet.
Plugin = "podheitor-firebird: db=/data/protheus/db1.fdb,/data/protheus/db2.fdb,/data/protheus/db3.fdb,/data/protheus/db4.fdb mode=dump workers=4 compress=zstd"
15.2 Large database incremental with nbackup
Scenario. A logistics company has a 200 GB Firebird database that cannot be fully backed up nightly within the backup window.
Solution.
- Weekly: L0 (full page snapshot) — Sunday 02:00
- Nightly: L1 (changed pages) — Monday–Saturday 02:00
- Monthly: reset chain with new L0 — first Sunday of month
Benefits. Nightly backup window reduced from hours to minutes; full restore requires only L0 + latest L1.
15.3 DR standby with replication log shipping
Scenario. A financial services firm requires RPO < 5 minutes for their Firebird 5.0 transaction database.
Solution.
- Mode: replay with replay_log_dir=/var/firebird/replay/transactions
- Journal segments shipped every 5 minutes via Bacula schedule
- Standby site applies journals automatically on restore
- RTO: ~15 minutes (time to run gbak restore + apply pending journals)
15.4 Cross-version migration project
Scenario. A company is migrating from Firebird 3.0 to Firebird 5.0 across 30 databases. They need a tested, repeatable process.
Solution.
- Backup all 30 databases on FB 3.0 using dump mode
- Restore to FB 5.0 hosts using restore_path — gbak automatically upgrades the ODS (12 → 13.1)
- gfix post-restore run (enabled by default via fix_shadow=true)
- Verified with isql schema and data checks (see the sketch below)
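A verification query along those lines can be run with isql; the table name below is a hypothetical example from the migrated schema.
# Spot-check row counts on the migrated FB 5.0 database (table name is illustrative)
echo "SELECT COUNT(*) FROM customers;" | \
  isql -user SYSDBA -password masterkey /var/firebird/migrated.fdb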
15.5 Embedded Firebird application backup
Scenario. An IoT gateway device runs an embedded Firebird database locally, with no Firebird server process. The device runs bacula-fd and the plugin must back up the embedded DB.
Solution.
- Mode: dump with embedded=true
- The plugin connects to the database file directly via the embedded Firebird engine
- No server process required; backup runs while the application is quiesced
Plugin = "podheitor-firebird: db=/data/device.fdb mode=dump embedded=true compress=lz4"
16. Comparison with other approaches
16.1 Feature comparison
The table below compares the PodHeitor Firebird plugin running on Bacula Community against alternative ways of protecting Firebird data with commercial enterprise backup platforms. Bacula Enterprise is included as a reference: it offers excellent general-purpose enterprise backup and remains a strong choice when its broader feature set is needed; this plugin is purpose-built to deliver Firebird-specific functionality (nbackup chain, replay log shipping, cross-version migration, embedded engine support) on the Bacula Community base.
| Feature | Bacula Community + PodHeitor | Bacula Enterprise | Veeam | Commvault |
|---|---|---|---|---|
| Firebird native backup | Yes | Limited | No | No |
| gbak logical dump | Yes | Varies by version | No | No |
| nbackup page-level chain | Yes | No | No | No |
| Replication log shipping | Yes | No | No | No |
| Multi-DB parallel | Yes | No | No | No |
| Compression (zstd/lz4) | Yes | Yes (gzip) | Yes | Yes |
| Bandwidth throttle | Yes | Yes | Yes | Yes |
| Prometheus metrics | Yes | No | No | No |
| Cross-version migration | Yes | No | No | No |
| Embedded Firebird | Yes | No | No | No |
| Services API | Yes | Partial | No | No |
| FB 3.0 / 4.0 / 5.0 | All 3 | Varies | No | No |
| Bacula Community compat. | Yes | N/A | N/A | N/A |
| Open-source platform base | Yes (Bacula CE) | No | No | No |
| Encryption passthrough | Yes | Yes | Yes | Yes |
| Byte-identical restore | Yes | Yes | Varies | Varies |
| RPM + DEB packages | Yes | Yes | Yes | Yes |
16.2 Cost comparison
Special offer. Bring your renewal proposal for Veeam, Commvault, NetBackup, or any other enterprise backup platform. We will produce a written head-to-head proposal targeting at least 50% savings, with stronger Firebird-specific functionality. Contact heitor@opentechs.lat.
| Solution | Typical annual cost | Firebird support |
|---|---|---|
| Bacula Community + PodHeitor plugin | A fraction of the commercial options (quoted on request) | Full native (this plugin) |
| Bacula Enterprise | Often > US$ 10,000/year | Limited |
| Veeam Data Platform | Often > US$ 5,000/year | None (scripted only) |
| Commvault | Often > US$ 15,000/year | None (scripted only) |
| NetBackup | Often > US$ 20,000/year | None (scripted only) |
Prices vary by environment size and negotiated contracts. Contact heitor@opentechs.lat for a specific comparison against your current renewal proposal.
17. Roadmap
The plugin is production-ready at v0.1.0 with all 8 phases validated. Future development direction includes:
- Multi-architecture support. ARM64 / aarch64 packages for Raspberry Pi and cloud-native ARM servers.
- Windows support. Firebird on Windows is widely deployed; a Windows build is under consideration.
- Firebird 6.0 readiness. Track ODS changes in upcoming Firebird releases.
- Extended metrics. Per-database backup history, chain health scores, automated alerting thresholds.
- Web UI integration. Bacularis plugin configuration panel for plugin parameters.
- Automated chain validation. Scheduled background verification of nbackup chain integrity.
No specific release dates are committed. Feature direction is guided by customer feedback and lab findings.
18. Conclusion
The PodHeitor Firebird Backup Plugin extends Bacula Community with the first production-grade, Firebird-native integration available on the open-source Bacula platform. It delivers three complementary backup modes (dump, nbackup, replay), covers all supported Firebird versions (3.0, 4.0, 5.0), and has been validated through 120 automated tests across 8 development phases. The result is a robust, auditable, and cost-effective path to enterprise-grade Firebird DR.
For organisations currently running Bacula Community with Firebird databases, the plugin requires no platform changes — install the RPM or DEB, add a Plugin string to your FileSet, and your Firebird databases are protected immediately. For organisations evaluating premium commercial backup-platform renewals, the combination of Bacula Community and the PodHeitor plugin delivers equal or superior Firebird-specific functionality at a substantially lower total cost.
To get started:
- Download the plugin: https://podheitor.com
- Request a quote or demo: heitor@opentechs.lat
- Phone / WhatsApp: +1 786 726-1749 | +55 61 98268-4220
19. Contact information
| Author | Heitor Faria |
| Website | https://podheitor.com |
| Email | heitor@opentechs.lat |
| Phone / WhatsApp | +1 786 726-1749 |
| Phone / WhatsApp (BR) | +55 61 98268-4220 |
| Product page | https://podheitor.com/firebird-plugin |
| Support | heitor@opentechs.lat |
20. Legal / copyright
© 2026 Heitor Faria — all rights reserved.
The PodHeitor Firebird Backup Plugin for Bacula is proprietary software. Unauthorised copying, distribution, modification, or reverse engineering is strictly prohibited. A commercial license is required for production use.
Bacula® is a registered trademark of Kern Sibbald and the Bacula community. Firebird® is a trademark of the Firebird Project. All other trademarks are the property of their respective owners.
This document is provided for informational purposes. Performance figures are from controlled lab measurements and may vary in production environments depending on hardware, network conditions, Firebird configuration, and database characteristics.
Contact for licensing: heitor@opentechs.lat | https://podheitor.com | +1 786 726-1749 | +55 61 98268-4220
PodHeitor Firebird Backup Plugin for Bacula — v0.1.0 — © 2026 Heitor Faria — all rights reserved — https://podheitor.com