Technical whitepaper — PodHeitor ADABAS for Bacula

Technical Whitepaper — Version 1.0.0-ce — May 2026

Author: Heitor Faria · Website: https://podheitor.com · Email: heitor@opentechs.lat · Phone / WhatsApp: +1 786 726-1749 | +55 61 98268-4220

Special offer. Bring your renewal proposal for any commercial enterprise backup platform — Veeam, Commvault, NetBackup, or others. We will prepare a head-to-head counter-proposal targeting at least 50% savings with stronger ADABAS-specific functionality. Contact heitor@opentechs.lat for a written quote.

Table of contents

  1. Executive summary
  2. Introduction & market context
  3. Architecture overview
  4. Backup modes deep dive
  5. Feature matrix
  6. Installation guide
  7. Configuration reference
  8. FileSet examples
  9. Sizing & capacity planning
  10. Performance report
  11. Compatibility matrix
  12. Security
  13. Monitoring
  14. Troubleshooting guide
  15. Use cases & deployment scenarios
  16. Comparison with other approaches
  17. Roadmap
  18. Conclusion
  19. Contact information
  20. Legal / copyright

1. Executive summary

ADABAS (Adaptable DAta BAse System), developed by Software AG, is an inverted-list, navigational database management system with a uniquely long production history — originally introduced in 1971, it continues to underpin mission-critical workloads in telecommunications, banking, insurance, and government agencies worldwide. ADABAS runs on z/OS mainframes, AIX, and Linux, and it is not uncommon to encounter active ADABAS instances protecting decades of transaction history in sectors where legacy continuity is mandated by regulation.

Despite this critical footprint, ADABAS backup in Linux environments has historically been managed either through native Software AG utilities — ADASAV, ADABCK, ADAREP — invoked by operator shell scripts, or by OS-level filesystem snapshots that cannot guarantee database consistency. Neither approach integrates with an enterprise backup catalog, provides verified restore workflows, or delivers the retention management, encryption, and policy-based scheduling that modern data protection standards require.

The PodHeitor ADABAS Backup Plugin for Bacula closes this gap. It is a production-grade, Rust-native Bacula File Daemon plugin that integrates ADABAS backup and restore directly into Bacula Community 15.0.3+. The plugin wraps the native ADABAS utilities (adabck, adaopr, adavfy, adarec) through a type-safe, timeout-bounded subprocess layer, streams backup data over the PTCOMM protocol, and stores everything in the standard Bacula catalog. Full backups, PLOG-based incrementals, multi-DBID jobs, point-in-time restore, and post-restore consistency verification are all managed through a single Bacula Plugin string — no operator scripts, no manual sequencing.

Validation testing against ADABAS Community Edition (CE) 7.4.0.3 confirmed: 78 Rust unit tests passing, a full end-to-end backup (7.27 MB, exit 0, valid manifest), a streaming memory profile of 1.94 MB peak for a 7.27 MB dump (O(1) constant-memory streaming), multi-DBID best-effort aggregation, and signal-safe cancellation. License-gated features (live PLOG apply, nucleus-offline restore completion) are code-complete and await a licensed ADABAS instance for final end-to-end validation.

For any organisation running ADABAS on Linux with Bacula Community, the PodHeitor plugin is the most complete, most cost-effective path to a defensible, auditable backup posture — without replacing the backup platform that is already in operation.


2. Introduction & market context

2.1 ADABAS in production today

ADABAS occupies a unique position in the database landscape. It is not a niche academic system — it is the database that has kept entire national telecoms, banking clearinghouses, and insurance policy ledgers running since before modern relational systems matured. Its architecture reflects its era: an inverted-list, direct-access data model with fixed-length records, coupled to the Natural 4GL programming language developed in parallel by Software AG. Key characteristics:

  • Inverted-list / navigational model. Data is stored as numbered fields (FDT — Field Definition Table) accessed by ISN (Internal Sequence Number), enabling extremely fast sequential and direct reads on large flat datasets.
  • High-throughput transaction processing. ADABAS uses a Multi-User Facility (MUF) architecture with SysV IPC message queues for inter-process communication, enabling thousands of concurrent NUC (Nucleus) operations.
  • PLOG / ASSO / DATA / WORK container model. The database is physically split into ASSO (associator), DATA, WORK, and PLOG (Protection LOG) containers — each with independent backup semantics.
  • Natural language integration. Most ADABAS applications are written in Natural, and the NaturalONE development environment tightly couples application logic to the schema.
  • Cross-platform legacy. ADABAS runs on z/OS, BS2000, AIX, HP-UX, and Linux. This plugin targets Linux (x86_64, aarch64), which is the primary deployment target for new ADABAS installations and for organisations migrating from mainframe to commodity hardware.

Sectors where ADABAS remains in active production:

  • Telecommunications: subscriber management, billing, and call data record (CDR) storage at national telecoms in Brazil, Germany, South Africa, and elsewhere.
  • Financial services: core banking ledgers, clearing settlement systems, and insurance policy reserving engines.
  • Government: personnel records, social security registries, and tax databases — especially in Latin America, Central Europe, and sub-Saharan Africa.
  • Utilities: customer billing for electricity and water distribution networks.

2.2 Why existing backup approaches fall short

  • Bacula Community (file-level, no plugin) — consistency: none; backs up raw container files while the ADABAS MUF is running. Verdict: unsafe — ASSO/DATA are captured in an inconsistent mid-transaction state.
  • OS-level LVM / ZFS snapshot — consistency: crash-consistent at best. Verdict: requires a nucleus quiesce; no PLOG integration; manual recovery sequencing.
  • Veeam — no native ADABAS agent. Verdict: VM-level backup only; no application-consistent ADABAS recovery.
  • Commvault — no ADABAS iDataAgent. Verdict: custom scripted integration only.
  • NetBackup — no ADABAS policy type. Verdict: OS-level only; no nucleus quiet-point integration.
  • Software AG native tools only (ADASAV, ADABCK) — consistency: full. Verdict: no catalog, no retention management, no policy scheduling, no encryption integration; manual operator intervention for restore.
  • Custom shell scripts wrapping adabck — consistency: partial; correctness depends on script quality. Verdict: not production-grade — no retry, no verification, no monitoring.

The gap is clear: until now, no open-source-compatible backup solution integrated with ADABAS at the engine level. The PodHeitor plugin fills this gap.

2.3 The PodHeitor approach

The plugin follows the same design philosophy as the broader PodHeitor plugin family: Rust-native implementation, phase-gated development with automated regression tests, zero runtime dependencies beyond the target database tooling, and a PTCOMM protocol architecture that makes the cdylib/backend split trivially safe across Bacula internal API changes. The ADABAS plugin reuses the same PTCOMM protocol, metaplugin cdylib framework, subprocess isolation model, and config-file-merge mechanism that have been validated across the PostgreSQL, Firebird, and other PodHeitor plugins.


3. Architecture overview

3.1 Two-component design

The plugin is composed of two binaries that ship in the same package:

  • Bacula FD plugin (cdylib) — /opt/bacula/plugins/podheitor-adabas-fd.so — loaded by bacula-fd at runtime; implements the Bacula plugin API (pure Rust cdylib, built from metaplugin-rs).
  • Backend binary — /opt/bacula/bin/podheitor-adabas-backend — forked per-job by the cdylib; performs all ADABAS interaction via subprocess calls to the native ADABAS utilities.

This separation provides three key advantages:

  1. Isolation. The cdylib is minimal and stable. All logic that touches ADABAS utilities (adabck, adaopr, adavfy, adarec) lives in the backend. A crash or hang in the backend cannot corrupt the Bacula FD process.
  2. Upgradability. The backend binary can be updated without restarting bacula-fd. Only the cdylib touches the Bacula plugin ABI.
  3. Testability. The backend binary can be exercised directly in integration tests without involving Bacula at all — critical for a system where licensed ADABAS may not be available in CI.

3.2 PTCOMM protocol

The cdylib and backend communicate over the child process’s stdin/stdout using PTCOMM (PodHeitor Transport Communications), a length-tagged binary framing protocol:

┌────────────────────────────────────────────────────────┐
│  PTCOMM Frame (8-byte header + payload)                │
│                                                        │
│  Offset  Size  Field                                   │
│  ──────  ────  ─────                                   │
│  0       4     Magic  (0x50544300)                     │
│  4       4     Payload length (u32, big-endian)        │
│  8       N     Payload (JSON envelope or binary blob)  │
└────────────────────────────────────────────────────────┘

The five-phase PTCOMM handshake covers: capability negotiation → job parameters → backup/restore stream → status reporting → clean shutdown. Every phase is timeout-bounded to prevent a runaway ADABAS subprocess from wedging the Bacula job.
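
The frame layout above can be modelled in a few lines. The following is an illustrative Python sketch of the framing only (the shipped implementation is Rust); the magic value and big-endian length field come directly from the table above:

```python
import struct

PTCOMM_MAGIC = 0x50544300  # "PTC\0", per the frame layout above

def encode_frame(payload: bytes) -> bytes:
    """Prepend the 8-byte PTCOMM header: magic + big-endian payload length."""
    return struct.pack(">II", PTCOMM_MAGIC, len(payload)) + payload

def decode_frame(buf: bytes) -> bytes:
    """Validate the header and return the payload."""
    if len(buf) < 8:
        raise ValueError("short PTCOMM header")
    magic, length = struct.unpack(">II", buf[:8])
    if magic != PTCOMM_MAGIC:
        raise ValueError("bad PTCOMM magic")
    if len(buf) < 8 + length:
        raise ValueError("short PTCOMM frame")
    return buf[8:8 + length]
```

A JSON envelope such as b'{"phase":"hello"}' round-trips through encode_frame/decode_frame unchanged; the 8-byte header is all the framing overhead a packet carries.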

3.3 Architecture diagram

┌─────────────────────────────┐
│  Bacula Director            │    Job definition: Plugin = "podheitor-adabas: dbid=12"
└──────────────┬──────────────┘
               │
┌──────────────▼────────────────────────────────────────────────────┐
│  Bacula File Daemon (bacula-fd)                                   │
│  Loads /opt/bacula/plugins/podheitor-adabas-fd.so                 │
│  (Rust cdylib — delegates every callback to the backend)          │
└──────────────┬────────────────────────────────────────────────────┘
               │ PTCOMM protocol (length-prefixed packets via stdin/stdout)
               ▼
┌──────────────────────────────────────────┐
│  /opt/bacula/bin/podheitor-adabas-backend │  Rust binary
│    ├── main.rs          PTCOMM 5-phase handshake
│    ├── config.rs        plugin string + config-file merge
│    ├── backup.rs        Level F orchestration
│    ├── incremental.rs   Level I (PLOG archiving) orchestration
│    ├── restore.rs       Phase A→E restore orchestration
│    ├── adabck.rs        DUMP + RESTORE subprocess wrappers
│    ├── adaopr.rs        Operator: status + EXT_BACKUP + RAII guard
│    ├── adavfy.rs        Post-restore consistency verify
│    ├── adarec.rs        PLOG apply (license-gated on CE)
│    ├── plog.rs          PLOG discovery + sequence-wrap detection
│    ├── state.rs         Plugin state file (archived + pending_delete)
│    ├── stream.rs        Fixed 1 MiB buffer child_stdout → PTCOMM
│    ├── subproc.rs       Bounded-wait helper (timeouts)
│    ├── metadata.rs      BackupManifest JSON builder
│    └── types.rs         Contracts: AdabasConfig, NucleusStatus, Manifests
└──────────────┬───────────────────────────┘
               │ child subprocess calls (native ADABAS utilities)
               ▼
┌───────────────────────────────────────────┐
│  ADABAS host (same machine as bacula-fd)  │
│    adabck / adaopr / adavfy / adarec      │
│    DBID=<N> ASSO / DATA / WORK containers │
└───────────────────────────────────────────┘

3.4 Co-location constraint

The Rust backend runs on the same Linux host as ADABAS. ADABAS inter-process communication uses SysV message queues — these cannot be tunnelled across machines. The Bacula Storage Daemon can be anywhere; the plugin streams PTCOMM packets over the standard Bacula FD→SD TCP connection. In containerised deployments where ADABAS utilities live inside a Podman/Docker container, the included wrapper script (scripts/podheitor-adabas-backend.podman-wrapper.sh) re-execs the backend inside the container using podman exec -i, transparently bridging the host/container boundary.

3.5 State management

PLOG-based incremental jobs require persistent state to track which protection log files have been archived and which are pending safety-gated deletion. This state is stored per-DBID in:

/opt/bacula/working/podheitor-adabas-state/
  ├── dbid-12.json      # state for DBID 12
  ├── dbid-13.json      # state for DBID 13
  └── dbid-14.json      # state for DBID 14

Each state file is written atomically via a tmpfile-then-rename pattern, ensuring no state corruption on crash. The schema is versioned for forward compatibility.
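
The tmpfile-then-rename pattern can be sketched as follows. This is an illustrative Python model (the real state writer is Rust, and the state fields shown are examples, not the actual schema):

```python
import json, os, tempfile

def write_state_atomic(state_dir: str, dbid: int, state: dict) -> str:
    """Tmpfile-then-rename: readers never observe a partially written file."""
    os.makedirs(state_dir, exist_ok=True)
    final_path = os.path.join(state_dir, f"dbid-{dbid}.json")
    fd, tmp_path = tempfile.mkstemp(dir=state_dir, suffix=".tmp")
    with os.fdopen(fd, "w") as f:
        json.dump(state, f)
        f.flush()
        os.fsync(f.fileno())          # make the bytes durable before the rename
    os.replace(tmp_path, final_path)  # atomic within one POSIX filesystem
    return final_path
```

Because the temporary file lives in the same directory as the final file, os.replace is a same-filesystem rename — a crash mid-write leaves either the old state file or the new one, never a torn mixture.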


4. Backup modes deep dive

4.1 Full backup (Level F) — online ADABCK DUMP

How it works

The full backup mode invokes adabck with DUMP=* to perform a complete online backup of the ADABAS nucleus. Before the dump, the plugin optionally wraps the operation with an adaopr EXT_BACKUP=PREPARE command to notify the nucleus that an external backup tool is beginning, ensuring transactional consistency at the quiet point. On completion, adaopr EXT_BACKUP=CONTINUE releases the quiet point. The dump stream is captured via the BCK001=- environment variable (stdout streaming mode) and relayed to Bacula over PTCOMM in a fixed 1 MiB buffer loop — constant memory regardless of database size.

  adaopr EXT_BACKUP=PREPARE  →  adabck DUMP=* (stdout)  →  1 MiB buffer  →  PTCOMM  →  Bacula SD
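
The fixed-buffer relay at the heart of this pipeline can be modelled in a short Python sketch (illustrative only — the shipped backend is Rust; the frame constant is taken from the PTCOMM layout in section 3.2, and any file-like source/sink pair will do):

```python
import io, struct

BUF_SIZE = 1 * 1024 * 1024  # fixed 1 MiB buffer — memory stays O(1)

def stream_dump(child_stdout, sink) -> int:
    """Relay adabck's stdout to Bacula as length-prefixed PTCOMM data frames.

    At most one 1 MiB chunk is ever resident, regardless of dump size."""
    total = 0
    while True:
        chunk = child_stdout.read(BUF_SIZE)   # never more than 1 MiB in flight
        if not chunk:
            break                             # EOF: adabck closed its stdout
        sink.write(struct.pack(">II", 0x50544300, len(chunk)) + chunk)
        total += len(chunk)
    return total
```

A 500 GB dump and a 7 MB demo database exercise exactly the same loop — only the iteration count differs, which is why the measured peak memory stays near 2 MB.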

When to use

  • Weekly full backup as the anchor point for incremental chains
  • Initial baseline before enabling PLOG incremental archiving
  • Any DR scenario requiring a complete self-contained restore point
  • Environments where PLOG-based incrementals are not licensed (Community Edition)

Key properties

Property Assessment
Consistency Excellent — quiet point via EXT_BACKUP=PREPARE ensures transactional boundary
Online (hot) backup Yes — nucleus remains online throughout
Memory footprint O(1) — fixed 1 MiB streaming buffer validated at 1.94 MB VmHWM for a 7.27 MB dump
Backup size Full ASSO + DATA containers; PLOG not included in Level F
EXT_BACKUP requirement Yes on licensed ADABAS (default on); must be disabled on Community Edition
Multi-DBID support Yes — best-effort per-DBID, failures aggregated without aborting surviving DBIDs
Timeout bounded Yes — default 4 hours; configurable via backup_timeout_secs

4.2 Incremental backup (Level I) — PLOG archiving

How it works

The incremental backup mode archives ADABAS Protection Log (PLOG) files since the last successful job. PLOGs record every committed transaction and are the native ADABAS mechanism for point-in-time recovery. The plugin scans the PLOG directory for files with sequence numbers newer than the last archived sequence (tracked in the per-DBID state file), ships each PLOG as a separate virtual file under /@ADABAS/dbid-<N>/plog-<seq>.adabck, and emits a _plog_manifest.json Restore Object recording sequence ranges, generation counters, and timestamps.

  Scan PLOG.NNNN files  →  sequence-wrap detection  →  stream each PLOG via PTCOMM  →  safety-gated delete

The safety gate ensures PLOG files are never deleted from local disk until a subsequent job has run successfully — giving the operator a full job cycle to abort before losing local copies.
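
The candidate-selection and wrap-detection rules can be sketched with a hypothetical helper (illustrative Python mirroring the logic described for plog.rs; the PLOG.NNNN naming and the "PLOG.0001 after last_seen > 0" wrap rule come from this section):

```python
import re

PLOG_RE = re.compile(r"^PLOG\.(\d{4})$")

def plan_plog_archive(filenames, last_seq: int):
    """Return (candidates, wrapped): sequences newer than last_seq, plus a
    flag when the 4-digit counter has wrapped back to 0001 (new generation)."""
    seqs = sorted(int(m.group(1)) for f in filenames if (m := PLOG_RE.match(f)))
    wrapped = last_seq > 0 and 1 in seqs      # sequence restarted → generation++
    candidates = seqs if wrapped else [s for s in seqs if s > last_seq]
    return candidates, wrapped
```

On a wrap, every present PLOG belongs to the new generation and is shipped; otherwise only sequences beyond the last archived one are candidates, which is what makes a re-run with no new PLOGs idempotent.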

When to use

  • Hourly incremental protection between weekly full backups (RPO ≈ 1 hour)
  • Any environment requiring point-in-time restore capability
  • Environments where full backup time is too long for nightly execution

Key properties

Property Assessment
Licensed ADABAS required Yes — CE does not emit PLOGs (ADR-011, R8)
Incremental size vs. full Typically < 1% of full backup size per hourly interval
Sequence-wrap detection Yes — generation counter increments on PLOG.0001 after last_seen > 0
Safety-gated delete Yes — local PLOGs removed only after next successful job
Retry on transient failure Yes — exponential backoff (100ms, 200ms, 400ms), bounded by plog_ship_retries (default 2)
Full backup retry Never — ADR-009: retry logic is only for PLOG shipping, not Level F dumps
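
The bounded-retry shape described above (doubling delay from 100 ms, capped by plog_ship_retries, and applied only to PLOG shipping per ADR-009) can be sketched in Python — the ship callable and parameter names here are illustrative, not the backend's API:

```python
import time

def ship_with_retry(ship, retries: int = 2, base_delay: float = 0.1):
    """Attempt once, then up to `retries` more times with doubling delay
    (100 ms, 200 ms, ...). Only PLOG shipping is retried — never Level F dumps."""
    delay = base_delay
    for attempt in range(retries + 1):
        try:
            return ship()
        except OSError:                 # transient I/O failure
            if attempt == retries:
                raise                   # retries exhausted: surface the error
            time.sleep(delay)
            delay *= 2
```

Full dumps are excluded deliberately: re-running a multi-hour adabck DUMP on a transient hiccup would be far more disruptive than failing the job and letting the next scheduled run recover.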

4.3 Restore — full (adabck RESTORE) + point-in-time (adarec PLOG apply)

Full restore flow (Phase A → E)

The restore process follows a five-phase orchestration:

  1. Phase A — manifest parse. The plugin reads the _manifest.json Restore Object from the selected backup job to reconstruct the DBID list and backup parameters.
  2. Phase B — stage files. Bacula delivers the backup stream files to a temporary staging directory under $TMPDIR.
  3. Phase C — nucleus gate. The plugin verifies allow_destructive_restore=yes is set (gate enforcement prevents accidental overwrites) and checks nucleus offline status.
  4. Phase D — adabck RESTORE. The staged backup file is fed to adabck RESTORE=* via a named FIFO. The nucleus is overwritten.
  5. Phase E — verify. adavfy runs a consistency check against the restored nucleus. If verify_after_restore=yes (default), a failed verify terminates the job with an error.

Point-in-time restore (PITR)

After a full restore completes (Phase A–E), PLOGs are applied in sequence using adarec CHECKPOINT=(first,last). Checkpoint names are sanitised to [A-Z0-9_]{0,8} before being passed to adarec to prevent DSL injection from hostile manifests. Future versions will support restore_to_time=<ISO8601> via adaplp timestamp-to-checkpoint resolution (v1.1.0 roadmap).

  adabck RESTORE=* (full) → adavfy CHECK → adarec CHECKPOINT=(sync,last) [per PLOG in order]

5. Feature matrix

  • Online (hot) backup — Full: yes · Incr: yes · Restore: n/a
  • EXT_BACKUP quiet point wrap — Full: yes (default on) · Incr: n/a · Restore: n/a
  • adabck DUMP integration — Full: yes · Incr: no · Restore: n/a
  • PLOG archiving — Full: no · Incr: yes (licensed) · Restore: n/a
  • adabck RESTORE integration — Full: n/a · Incr: n/a · Restore: yes (licensed)
  • adarec PLOG apply (PITR) — Full: n/a · Incr: n/a · Restore: yes (licensed)
  • adavfy post-restore verify — Full: n/a · Incr: n/a · Restore: yes (default on)
  • Multi-DBID per job — Full: yes · Incr: yes · Restore: yes
  • Bacula catalog integration — Full: yes · Incr: yes · Restore: yes
  • BackupManifest JSON object — Full: yes · Incr: yes (PlogManifest) · Restore: read
  • LZ4 compression (Bacula) — Full: yes · Incr: yes · Restore: n/a
  • Configurable timeouts — Full: yes · Incr: yes · Restore: yes
  • Signal-safe cancel — Full: yes · Incr: yes · Restore: yes
  • Sequence-wrap detection — Full: n/a · Incr: yes · Restore: n/a
  • Safety-gated PLOG delete — Full: n/a · Incr: yes · Restore: n/a
  • stdout streaming (BCK001=-) — Full: yes · Incr: n/a · Restore: n/a
  • tempfile fallback mode — Full: yes · Incr: n/a · Restore: n/a
  • Podman wrapper support — Full: yes · Incr: yes · Restore: yes
  • ADABAS CE 7.x support — Full: yes (external_backup=no) · Incr: no (PLOGs gated) · Restore: partial
  • ADABAS licensed 8.x support — Full: yes · Incr: yes · Restore: yes
  • Checkpoint-based PITR — Full: n/a · Incr: n/a · Restore: yes (licensed)
  • Diagnostic CLI (--probe-nucleus) — Full: yes · Incr: yes · Restore: yes
  • State file atomic write — Full: n/a · Incr: yes · Restore: n/a

6. Installation guide

6.1 Prerequisites

  • Bacula Community 15.0.3 or later is installed and bacula-fd is running
  • ADABAS 7.x (Community) or 8.x+ (licensed) is installed on the same host as bacula-fd
  • ADABAS utilities adabck, adaopr, adavfy, adarec are in $PATH (or full paths configured via plugin string)
  • OS: Linux x86_64 or aarch64
  • Rust 1.70+ (required only for building from source)
  • The user running bacula-fd must be able to execute ADABAS utilities as the ADABAS DBA (typically sagadmin) — either by co-location under the same UID or via configured sudo rules

6.2 Building from source (recommended)

# 1. Clone or unpack the plugin source
cd /opt/podheitor-adabas-plugin

# 2. Build the Rust cdylib + backend binary (no Bacula source required)
make

# 3. Install (needs root — writes to /opt/bacula/{plugins,bin})
sudo make install

# 4. Restart the File Daemon so it loads the new .so
sudo systemctl restart bacula-fd

# 5. Confirm the plugin is loaded
echo "status client=$(hostname)-fd" | bconsole | grep -i adabas
# Expected: Plugin: ... podheitor-adabas(1.0.0-ce) ...

6.3 Post-install smoke test

# Validate the backend can reach your ADABAS nucleus
/opt/bacula/bin/podheitor-adabas-backend --probe-nucleus 12
# Expected: DBID=12 ONLINE adanuc=<version>

# If OFFLINE or UNKNOWN:
#  OFFLINE  = ADANUC process not running; start it
#  UNKNOWN  = adaopr rejected the DBID; check $PATH includes ADABAS bin dir,
#             or pass --adaopr-path=/absolute/path/to/adaopr

6.4 Packaged RPM / DEB

RPM and DEB packages are planned for the v1.0.0 general availability release. Track publication status in the podheitor.com plugin catalogue.

6.5 Containerised ADABAS (Podman wrapper)

When ADABAS utilities are not natively installed on the FD host but run inside a Podman container:

# Install the wrapper (idempotent)
sudo bash scripts/install-podman-wrapper.sh adabas-ce-test

# The wrapper re-execs the backend inside the named container:
# podman exec -i adabas-ce-test /opt/bacula/bin/podheitor-adabas-backend "$@"

Rollback: a timestamped backup of any previous native backend ELF is preserved at /usr/local/sbin/podheitor-adabas-backend.elf-bak.<ts>.


7. Configuration reference

7.1 Plugin string parameters

All parameters are passed in the Plugin = "podheitor-adabas: <param>=<value> ..." directive. Parameters are space-separated (not colon-separated). Both snake_case and camelCase are accepted for each key.
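
The key normalisation can be sketched as follows — an illustrative Python model of the parsing rules stated here (the actual parser lives in config.rs; the function name is hypothetical):

```python
import re

def parse_plugin_string(s: str) -> dict:
    """Parse 'podheitor-adabas: k=v k=v ...', accepting camelCase or snake_case keys."""
    _, _, params = s.partition(":")
    out = {}
    for token in params.split():          # parameters are space-separated
        key, _, val = token.partition("=")
        # camelCase → snake_case: bufferSize → buffer_size
        key = re.sub(r"([a-z0-9])([A-Z])", r"\1_\2", key).lower()
        out[key] = val
    return out
```

Under this model, bufferSize=16m and buffer_size=16m populate the same key, which is what "both snake_case and camelCase are accepted" means in practice.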

  • dbid (uint, required) — single ADABAS database ID.
  • dbids (list) — comma-separated list for multi-DB jobs: dbids=12,13,14.
  • mode (enum, default online) — online (supported) or cold (deferred).
  • external_backup (bool, default yes) — wrap DUMP in adaopr EXT_BACKUP=PREPARE/CONTINUE; must be no on Community Edition.
  • plog (bool, default yes) — archive PLOGs during Level I jobs; no-op on CE.
  • plog_dir (path, default auto-detect) — directory containing PLOG.* files.
  • adabck_path (path, default adabck) — full path override for the adabck utility.
  • adaopr_path (path, default adaopr) — full path override for adaopr.
  • adavfy_path (path, default adavfy) — full path override for adavfy.
  • adarec_path (path, default adarec) — full path override for adarec (PLOG apply).
  • stream_mode (enum, default auto) — auto (= stdout), stdout, or tempfile fallback.
  • buffer_size (size, default 8m) — streaming buffer (accepts K/M/G suffixes).
  • allow_destructive_restore (bool, default no) — required gate for restore; without it, restore refuses to overwrite an existing DB.
  • verify_after_restore (bool, default yes) — run adavfy DBID=<N> after restore.
  • restore_to_checkpoint (string) — PITR target: "first,last" checkpoint names.
  • restore_to_time (timestamp) — ISO 8601 timestamp (v1.1.0: resolution via adaplp).
  • config_file (path, default /opt/bacula/etc/podheitor-adabas.conf) — file containing default parameter lines, merged under the job's plugin string.
  • status_timeout_secs (uint, default 10) — wall-clock bound on the adaopr status probe.
  • backup_timeout_secs (uint, default 14400) — wall-clock bound on the DUMP subprocess (4 hours).
  • restore_timeout_secs (uint, default 14400) — wall-clock bound on the RESTORE subprocess (4 hours).
  • adarec_timeout_secs (uint, default 1800) — wall-clock bound per-PLOG adarec call (30 min).
  • verify_timeout_secs (uint, default 300) — wall-clock bound on adavfy (5 min).
  • plog_ship_retries (uint, default 2) — max retries (exponential backoff) on transient PLOG ship failure.
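
The size-suffix convention used by buffer_size can be sketched in a few lines (illustrative Python; treating suffixes as case-insensitive binary multiples is an assumption consistent with the 8m/16m examples in this document):

```python
def parse_size(s: str) -> int:
    """Parse '8m', '16K', '1g' style sizes into bytes (K/M/G suffixes)."""
    s = s.strip().lower()
    multipliers = {"k": 1024, "m": 1024 ** 2, "g": 1024 ** 3}
    if s and s[-1] in multipliers:
        return int(s[:-1]) * multipliers[s[-1]]
    return int(s)   # bare number: already bytes
```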

7.2 Environment variables

Variable Effect
ADABAS_LOG_LEVEL Verbosity: off, error, warn, info (default), debug, trace. Aliases: err, warning, numeric 0–5.
ADABAS_DBID Optional fallback DBID for --probe-nucleus. The plugin always prefers the DSL-side dbid=.
PODHEITOR_ADAOPR_PATH Override the adaopr path used by --probe-nucleus. Useful in containerised environments.

7.3 Config file — shared defaults

Create /opt/bacula/etc/podheitor-adabas.conf to share defaults across many jobs:

# One key=value per line; # starts a comment
external_backup      = yes
stream_mode          = auto
buffer_size          = 16m
verify_after_restore = yes
status_timeout_secs  = 15
backup_timeout_secs  = 21600

The plugin reads this file first, then overlays the job’s Plugin = "..." parameters — per-job overrides always win.
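
The merge order can be sketched concretely — an illustrative Python model of "file first, job overlay wins" (the real merge is in config.rs; function names here are hypothetical):

```python
def load_config_file(text: str) -> dict:
    """One key=value per line; '#' starts a comment; surrounding whitespace is ignored."""
    out = {}
    for line in text.splitlines():
        line = line.split("#", 1)[0].strip()   # strip comments and blanks
        if not line:
            continue
        key, _, val = line.partition("=")
        out[key.strip()] = val.strip()
    return out

def merge_config(file_defaults: dict, job_params: dict) -> dict:
    """Config-file defaults first; the job's plugin-string parameters always win."""
    return {**file_defaults, **job_params}
```

So a job carrying buffer_size=8m overrides a shared buffer_size = 16m default, while every unmentioned key keeps its file value.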


8. FileSet examples

8.1 Single DBID — weekly Full + hourly Incremental

FileSet {
  Name = "ADABAS-DBID-12"
  Include {
    Options {
      signature   = MD5
      compression = LZ4
    }
    Plugin = "podheitor-adabas: dbid=12"
  }
}

Schedule {
  Name = "ADABAS-Weekly"
  Run = Level=Full         sun at 02:00
  Run = Level=Incremental  mon-sat hourly at 0:00
}

Job {
  Name     = "ADABAS-DBID-12"
  Type     = Backup
  Level    = Incremental
  FileSet  = "ADABAS-DBID-12"
  Client   = "adabas-host-fd"
  Storage  = "File"
  Pool     = "ADABAS-Pool"
  Messages = "Standard"
  Schedule = "ADABAS-Weekly"
  Max Run Sched Time = 6 hours
}

8.2 Multi-DBID — three databases in one job

FileSet {
  Name = "ADABAS-Prod-Cluster"
  Include {
    Options { signature = MD5; compression = LZ4 }
    Plugin = "podheitor-adabas: dbids=12,13,14"
  }
}

After a Full job, bconsole: list files jobid=<N> shows:

/@ADABAS/dbid-12/full-<ts>.adabck
/@ADABAS/dbid-12/_manifest.json
/@ADABAS/dbid-13/full-<ts>.adabck
/@ADABAS/dbid-13/_manifest.json
/@ADABAS/dbid-14/full-<ts>.adabck
/@ADABAS/dbid-14/_manifest.json

8.3 Community Edition — external_backup=no required

FileSet {
  Name = "ADABAS-CE-DBID-12"
  Include {
    Options { signature = MD5 }
    Plugin = "podheitor-adabas: dbid=12 external_backup=no plog=no"
  }
}

8.4 High-volume — tempfile fallback

Plugin = "podheitor-adabas: dbid=12 stream_mode=tempfile buffer_size=16m"

8.5 Custom timeouts for very large VLDB

Plugin = "podheitor-adabas: dbid=12 status_timeout_secs=30 backup_timeout_secs=28800 verify_timeout_secs=900"

8.6 Interactive restore

bconsole: restore client=adabas-host-fd
# Select files under /@ADABAS/dbid-12/, then:
pluginoptions "podheitor-adabas: dbid=12 allow_destructive_restore=yes"

8.7 Point-in-time restore (licensed ADABAS only)

bconsole: restore
pluginoptions "podheitor-adabas: dbid=12 allow_destructive_restore=yes restore_to_checkpoint=SYNC,W8NW verify_after_restore=yes"

9. Sizing & capacity planning

9.1 Memory requirements

Scenario Peak memory (backend process)
Full backup (stdout mode) ~2 MB (1 MiB streaming buffer + backend overhead) — validated: 1.94 MB VmHWM for 7.27 MB dump
Full backup (tempfile mode) ~2 MB backend + temporary disk space equal to dump size
Incremental (PLOG archiving) ~2 MB per active DBID — PLOGs are small (typically 4 KB–64 KB each)
Multi-DBID job (N DBIDs) Sequential processing — peak memory is single-DBID overhead, not N×
Restore (adabck RESTORE) ~2 MB backend + FIFO buffer; actual RESTORE memory is in the ADABAS nucleus, not the plugin

Key design point. The streaming architecture uses a fixed 1 MiB buffer — memory is O(1) regardless of database size. A 500 GB ADABAS dump uses the same 2 MB peak backend memory as a 7 MB demo database.

9.2 CPU requirements

Scenario Recommended cores
Single DBID full backup 1 (I/O-bound by adabck throughput)
Multi-DBID job (sequential) 1–2
PLOG incremental (N PLOGs) 1 (sequential per-PLOG processing)
Restore + PITR 1–2

9.3 Disk space — plugin binaries

File Size
podheitor-adabas-fd.so ~600 KB (release profile LTO)
podheitor-adabas-backend ~680 KB (release profile LTO)
Total installation footprint ~1.3 MB

9.4 Disk space — state directory

Each DBID tracked for PLOG incremental jobs stores a JSON state file of approximately 2–4 KB. For 100 databases the state directory requires less than 400 KB — negligible.

9.5 Backup volume estimates

Mode Expected size
Full backup (no additional compression) Approximately equal to ASSO + DATA container sizes combined
Full backup + LZ4 (Bacula FileSet compression) Typically 60–80% of raw container sizes depending on data patterns
PLOG incremental per interval Typically < 1% of full backup size per hour; highly workload-dependent

10. Performance report

All measurements were taken on a controlled lab environment using the official Software AG ADABAS Community Edition 7.4.0 Podman container (softwareag/adabas-ce:7.4.0, DBID=12, EMPLOYEES/VEHICLES/MISCELLANEOUS demo databases) running on a Linux host with Bacula Community 15.0.3. CE imposes nucleus parameter caps (NT=3, NU=5, LWP=1 MB, TCPCONN=4) — these are directional benchmarks; production performance on licensed ADABAS will differ.

10.1 Full backup — stdout streaming mode

Metric Result
Database ADABAS CE 7.4.0 DBID=12 (EMPLOYEES/VEHICLES/MISCELLANEOUS)
Backup size 7,275,028 bytes (7.27 MB)
Stream mode stdout (BCK001=-)
Backend peak memory (VmHWM) 1.94 MB
Memory-to-data ratio 0.27 (O(1) — fixed buffer)
BackupManifest emitted Yes — 434 bytes, all 12 fields validated
adabck exit code 0
Backend exit code 0

10.2 Full backup — tempfile fallback mode

Metric Result
Backup size 7,274,968 bytes
Stream mode recorded in manifest tempfile
Status PASS — byte count matches stdout mode (12-byte frame overhead difference expected)

10.3 Multi-DBID best-effort aggregation

Metric Result
DBIDs in job 12 (online), 99 (non-existent)
DBID=12 outcome Backup preserved in catalog
DBID=99 outcome Failed with clear error — DBID=99 UNKNOWN
Aggregate error reported Yes — job log: “1 DBID(s) OK ([12]), 1 failed ([99])”
DBID=12 data integrity Not corrupted by DBID=99 failure

10.4 PLOG incremental archiving (synthetic fixtures)

Metric Result
PLOG fixture files 3 files (seq 4, 5, 6), 4 KB each
Incremental job size 3 × 4 KiB + PlogManifest ≈ 0.18% of 7 MB full
Sequence-wrap simulation PASS — PLOG.0001 after last_seen=6 → generation++ logged
Idempotency (run 2) PASS — 0 candidate PLOGs, 3 previously archived
Safety-gated delete PASS — prior PLOGs removed from fixtures dir after run 2

10.5 Restore orchestration

Test case Result
Restore without allow_destructive_restore=yes PASS — refused with clear message
Restore with missing manifest PASS — “No backup manifest found for DBID=12”
Restore orchestration Phases A→E PASS (CE license-gated at Phase D completion)
Multi-DBID restore grouping PASS — 2 manifests grouped by vpath segment

10.6 Rust unit test suite

Module Tests Status
config 13 Pass
types 10 Pass
ptcomm 5 Pass
adaopr 6 Pass
adabck 4 Pass
adarec 3 Pass
adavfy 1 Pass
incremental 1 Pass
logging 3 Pass
plog 13 Pass
state 6 Pass
restore 3 Pass
metadata 2 Pass
misc 8 Pass
Total 78 / 78 All pass

11. Compatibility matrix

11.1 Operating system

OS Architecture Status
RHEL 9 / Oracle Linux 9 / Rocky 9 x86_64 Supported
Ubuntu 22.04 LTS x86_64 Supported
Debian 12 x86_64 Supported
Linux (any modern distro) aarch64 Build-compatible (cargo target); packages pending
AIX POWER Out of scope for this plugin (separate engagement)
z/OS System/390 Out of scope (mainframe engagement)
Windows x86_64 Explicitly out of scope (ADR-007)

11.2 ADABAS versions

Version Platform Level F Backup Level I PLOG Restore
7.4.x CE (softwareag/adabas-ce:7.4.0) Linux container Yes (external_backup=no) No (license-gated) Partial (orchestration only)
8.x+ licensed Linux Yes (external_backup=yes) Yes Yes
z/OS / BS2000 Mainframe Out of scope Out of scope Out of scope

11.3 Bacula versions

Bacula version Status
Community 15.0.3 Supported and validated
Community 15.0.x (future) Expected compatible (metaplugin-rs tracks the ABI)
Community 14.x and earlier Not supported (metaplugin framework requires 15.0.3+)
Bacula Enterprise Not required; plugin targets Bacula Community

11.4 Natural / NaturalONE application layer

The plugin protects the ADABAS database engine layer (ASSO, DATA, WORK containers and PLOGs). Natural application source code and NaturalONE project artefacts are filesystem objects and should be backed up with standard Bacula file-level jobs. Application-layer backup is explicitly out of scope for this plugin.


12. Security

12.1 No embedded credentials

The plugin does not handle ADABAS passwords — ADABAS authentication is managed by the nucleus via the operator interface and SysV IPC, not by username/password in the backup job. There are no credentials to embed, store, or redact in the Bacula configuration.

12.2 DSL injection prevention

Checkpoint names passed to adarec CHECKPOINT=(first,last) are sanitised to the character class [A-Z0-9_]{0,8} before being passed to the subprocess. This prevents injection of arbitrary ADABAS DSL keywords from a hostile or corrupt _plog_manifest.json.
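The character-class check can be sketched as follows (illustrative Python; whether the backend strips offending characters or rejects the name outright is an implementation detail — this sketch rejects):

```python
import re

CHECKPOINT_RE = re.compile(r"^[A-Z0-9_]{0,8}$")

def sanitize_checkpoint(name: str) -> str:
    """Refuse any checkpoint name outside [A-Z0-9_]{0,8} before it reaches adarec."""
    if not CHECKPOINT_RE.fullmatch(name):
        raise ValueError(f"unsafe checkpoint name: {name!r}")
    return name
```

Because the allowed alphabet contains no separators, quotes, or parentheses, a sanitised name cannot terminate or extend the CHECKPOINT=(first,last) clause it is substituted into.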

12.3 Path validation

The adabck_path, adaopr_path, adavfy_path, and adarec_path parameters are validated to contain no shell metacharacters before being passed to Command::new(). There is no sh -c anywhere in the backend — all subprocess invocations use Command::args([]) directly.

12.4 Destructive restore gate

Restore unconditionally refuses to proceed unless allow_destructive_restore=yes is explicitly set in the plugin options. This gate cannot be bypassed by a misconfigured Job definition — it must be present as an explicit operator decision at restore time. The gate message is:

[podheitor-adabas] restore refused: allow_destructive_restore=yes required
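For illustration, the gate is supplied as a plugin option at restore time. The exact bconsole invocation below is an assumption (the pluginoptions mechanism is standard Bacula; the option string is this plugin's):

```
# In bconsole, when initiating the restore (illustrative invocation):
restore pluginoptions="podheitor-adabas: allow_destructive_restore=yes"
```

Because the gate is a restore-time option rather than part of the stored Job definition, an operator must consciously supply it on every destructive restore.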

12.5 Subprocess isolation

The cdylib forks one backend process per Bacula job. No shared mutable state exists between concurrent backup jobs. If the backend crashes or is killed by an external signal, the cdylib reports a job failure and bacula-fd continues serving other jobs normally. Every subprocess has a bounded wall-clock timeout — a runaway adabck or adarec process cannot permanently wedge the Bacula File Daemon.
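The timeout pattern can be sketched in Rust (not the plugin's actual code; the helper name and polling interval are illustrative):

```rust
use std::process::{Child, Command};
use std::thread;
use std::time::{Duration, Instant};

// Sketch of a bounded wall-clock timeout: poll try_wait() until the
// deadline, then kill the child so a runaway utility cannot wedge the
// File Daemon. Returns Ok(true) if the child exited on its own.
fn wait_with_timeout(child: &mut Child, timeout: Duration) -> std::io::Result<bool> {
    let deadline = Instant::now() + timeout;
    loop {
        if child.try_wait()?.is_some() {
            return Ok(true); // exited normally
        }
        if Instant::now() >= deadline {
            child.kill()?; // "child did not exit within <N>s -- killed"
            child.wait()?; // reap the zombie
            return Ok(false);
        }
        thread::sleep(Duration::from_millis(100));
    }
}

fn main() -> std::io::Result<()> {
    // A deliberately long-running stand-in for a hung adabck process:
    let mut child = Command::new("sleep").arg("60").spawn()?;
    let finished = wait_with_timeout(&mut child, Duration::from_millis(300))?;
    assert!(!finished); // timed out and was killed
    println!("ok");
    Ok(())
}
```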

12.6 Temp file handling

Temporary files created during tempfile-mode backups and restore staging use a process-ID suffix to avoid collisions between concurrent jobs. The StagedFiles Rust type implements Drop to clean up staging directories on all exit paths — including panics and SIGTERM cancellation. On SIGTERM/SIGINT, a global atomic flag is set and polled between streaming reads; the plugin unwinds cleanly without leaving orphaned files.
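A minimal Rust sketch of the Drop-guard idea (the real StagedFiles type has more fields; the names here are illustrative):

```rust
use std::fs;
use std::path::PathBuf;

// Illustrative staging guard: the Drop impl removes the staging directory
// on every exit path, including panics and early returns.
struct StagedFiles {
    dir: PathBuf,
}

impl StagedFiles {
    fn new(job_tag: &str) -> std::io::Result<Self> {
        // Process-ID suffix avoids collisions between concurrent jobs.
        let dir = std::env::temp_dir()
            .join(format!("adabas-stage-{}-{}", job_tag, std::process::id()));
        fs::create_dir_all(&dir)?;
        Ok(StagedFiles { dir })
    }
}

impl Drop for StagedFiles {
    fn drop(&mut self) {
        // Best-effort cleanup; a Drop impl must never panic.
        let _ = fs::remove_dir_all(&self.dir);
    }
}

fn main() -> std::io::Result<()> {
    let path;
    {
        let staged = StagedFiles::new("demo")?;
        path = staged.dir.clone();
        fs::write(path.join("dump.tmp"), b"partial data")?;
        assert!(path.exists());
    } // `staged` dropped here, so the directory is removed
    assert!(!path.exists());
    println!("ok");
    Ok(())
}
```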

12.7 State file permissions

The state directory /opt/bacula/working/podheitor-adabas-state/ is created with mode 0750, owner bacula. State files are written with atomic tmpfile-then-rename to prevent partial-write corruption on crash or power failure.
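The atomic write can be sketched as follows (paths and helper name illustrative):

```rust
use std::fs;
use std::path::Path;

// Sketch of the tmpfile-then-rename state write described above. rename(2)
// within the same filesystem is atomic, so readers see either the old state
// file or the new one -- never a partially written file.
fn write_state_atomically(final_path: &Path, contents: &[u8]) -> std::io::Result<()> {
    let tmp = final_path.with_extension(format!("tmp.{}", std::process::id()));
    fs::write(&tmp, contents)?;    // a crash here only tears the tmp file
    fs::rename(&tmp, final_path)?; // atomic replacement of the visible file
    Ok(())
}

fn main() -> std::io::Result<()> {
    let state = std::env::temp_dir().join("dbid-12.json");
    write_state_atomically(&state, br#"{"last_full":"20260424T020851Z"}"#)?;
    assert_eq!(
        fs::read(&state)?,
        br#"{"last_full":"20260424T020851Z"}"#.to_vec()
    );
    let _ = fs::remove_file(&state);
    println!("ok");
    Ok(())
}
```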


13. Monitoring

13.1 Bacula job status codes

The plugin sets standard Bacula job status codes:

  • T — terminated normally; all DBIDs backed up successfully
  • E — terminated with errors; one or more DBIDs failed (details in job log)
  • f — fatal error; backend failed to start or crashed before any DBID was attempted

13.2 Job log messages

All plugin messages are prefixed [podheitor-adabas] and routed to the Bacula job log via PTCOMM info packets. Standard output from a successful full backup:

[podheitor-adabas] DBID=12 ONLINE adanuc=7.4.0.3
[podheitor-adabas] DBID=12 EXT_BACKUP PREPARE issued
[podheitor-adabas] DBID=12 dump (stdout): 7274972 bytes → /@ADABAS/dbid-12/full-20260424T020851Z.adabck
[podheitor-adabas] DBID=12 manifest: /@ADABAS/dbid-12/_manifest.json
[podheitor-adabas] DBID=12 EXT_BACKUP CONTINUE issued
[podheitor-adabas] backup summary: 1 DBID(s) OK ([12]), 0 failed, 7274972 bytes total

13.3 Log file locations

Log Path
Plugin backend log /opt/bacula/working/podheitor-adabas-backend.log (fallback: /tmp/podheitor-adabas-backend.log)
Bacula job log Via bconsole: list messages or through Bacularis
Plugin state file /opt/bacula/working/podheitor-adabas-state/dbid-<N>.json

13.4 Debug logging

Set ADABAS_LOG_LEVEL=debug (or trace for full PTCOMM frame logging) in the bacula-fd environment to enable verbose output:

export ADABAS_LOG_LEVEL=debug   # in the bacula-fd service environment; restart the FD to apply
echo "run job=ADABAS-DBID-12-Full yes" | bconsole

13.5 Nucleus probe CLI

# Check nucleus reachability and version at any time (no Bacula job required)
/opt/bacula/bin/podheitor-adabas-backend --probe-nucleus 12
# Expected: DBID=12 ONLINE adanuc=<version>

13.6 Bacularis integration

All ADABAS backup jobs are first-class citizens in Bacularis — they appear in the job list, their virtual files (including _manifest.json and PLOG entries) are browsable, and restore jobs can be initiated from the Bacularis web UI. No ADABAS-specific Bacularis extensions are required.


14. Troubleshooting guide

14.1 Common errors

Restore refused — missing gate parameter

[podheitor-adabas] restore refused: allow_destructive_restore=yes required

Cause. Gate enforcement working as designed. Fix. Pass allow_destructive_restore=yes in the pluginoptions directive only when you intend to overwrite the target database.

No backup manifest found

[podheitor-adabas] No backup manifest found for DBID=12

Cause. The restore job’s file selection does not include _manifest.json. Fix. Re-select the full virtual namespace /@ADABAS/dbid-12/* during restore selection.

ADABCK aborted — CE + EXT_BACKUP deadlock

Subprocess 'adabck DBID=12 DUMP=*' exit=<C>: %ADABCK-I-ABORTED

Cause. On CE, EXT_BACKUP=PREPARE before adabck DUMP=* causes adabck to block in do_msgrcv waiting for nucleus IPC — a known CE restriction. Fix. Set external_backup=no in the plugin string for CE jobs. Production licensed ADABAS should keep the default external_backup=yes.

Subprocess timeout

child did not exit within <N>s — killed

Cause. A subprocess (adabck, adavfy, adarec) exceeded its configured wall-clock timeout. Fix. Diagnose whether the underlying operation legitimately requires more time; if so, increase the relevant *_timeout_secs parameter.

Job cancelled mid-stream

cancelled by signal (SIGTERM/SIGINT) mid-stream

Cause. Expected behaviour — operator cancelled the Bacula job, or bacula-fd sent SIGTERM. Fix. No action needed; staged files are cleaned up automatically by the StagedFiles Drop guard.

probe-nucleus returns UNKNOWN

/opt/bacula/bin/podheitor-adabas-backend --probe-nucleus 12 → UNKNOWN

Cause. adaopr exited non-zero — wrong DBID, ADABAS utilities not in PATH, or nucleus not running. Fix. Verify $PATH includes the ADABAS bin directory; confirm the DBID exists via adainfo.sh <DBID>; or specify --adaopr-path=/absolute/path.

Incremental job reports 0 candidate PLOGs

[podheitor-adabas] DBID=12: 0 candidate PLOG(s); 0 already archived; 0 new

Cause (a). No PLOGs have rotated since the last job — normal in a low-activity environment. Cause (b). CE is in use — PLOGs are not emitted by the Community Edition nucleus. Fix. In both cases this is expected; no action required.

14.2 Diagnostic commands reference

# Is the nucleus reachable?
/opt/bacula/bin/podheitor-adabas-backend --probe-nucleus 12

# One-shot verbose log for a backup job
echo "run job=ADABAS-DBID-12-Full yes" | bconsole   # with ADABAS_LOG_LEVEL=debug set in the FD environment

# Inspect the plugin state file (read-only; never edit by hand)
cat /opt/bacula/working/podheitor-adabas-state/dbid-12.json

# Verify plugin is loaded in the FD
echo "status client=$(hostname)-fd" | bconsole | grep -i adabas

15. Use cases & deployment scenarios

15.1 Telecommunications subscriber database

Scenario. A national telecommunications operator runs subscriber management and CDR (Call Detail Record) storage on licensed ADABAS 8.x, two DBID instances, combined DATA+ASSO totalling 400 GB. The regulatory requirement is 7-year data retention with a 4-hour RTO and 1-hour RPO.

Solution.

  • Weekly Full backup (Level F) on Sunday at 02:00 with external_backup=yes (default)
  • Hourly Incremental backup (Level I) — PLOG archiving every hour, Monday through Saturday
  • Bacula pool with 7-year volume retention
  • Offsite copy to tape or cloud via standard Bacula migration job

Result. RPO ≈ 1 hour, RTO driven by ADABAS restore time (< 4 hours for 400 GB over Gigabit LAN to local storage). Full compliance with regulatory retention without any change to the existing ADABAS or Natural applications.
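The schedule in 15.1 could be expressed as a standard Bacula Schedule resource (the resource name is illustrative; verify the hourly run syntax against your Bacula version):

```
Schedule {
  Name = "ADABAS-Weekly-Full-Hourly-Inc"
  Run = Level=Full sun at 02:00                    # weekly Level F (adabck DUMP)
  Run = Level=Incremental mon-sat hourly at 0:05   # hourly PLOG archiving
}
```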

15.2 Core banking ledger

Scenario. A regional bank runs its transaction processing ledger on ADABAS on IBM AIX and is migrating to Linux. The new Linux ADABAS 8.x instance must meet BCBS 239 data lineage requirements — every transaction must be recoverable to a named checkpoint.

Solution.

  • Full backup nightly with external_backup=yes and verify_after_restore=yes
  • PLOG incremental archiving every 15 minutes
  • Point-in-time restore tested quarterly using restore_to_checkpoint=SYNC,<checkpoint>
  • BackupManifest JSON stored in Bacula catalog — provides the auditable evidence trail
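An illustrative plugin option string for the quarterly PITR test, passed at restore time (the checkpoint name SYNCPT01 is a placeholder; the parameters are the ones documented above):

```
"podheitor-adabas: dbid=12 allow_destructive_restore=yes restore_to_checkpoint=SYNC,SYNCPT01"
```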

15.3 Government personnel registry

Scenario. A national social security agency runs ADABAS on Linux for its 40-million-record personnel registry. Budget constraints prohibit commercial backup platform licensing. Bacula Community is already in use for file-level backups across the agency.

Solution.

  • Deploy the PodHeitor plugin on the existing Bacula Community infrastructure — zero platform cost
  • Full backup weekly; PLOG incremental hourly
  • Extend existing Bacula pools and schedules to cover ADABAS jobs
  • No new backup software, no new licensing, no retraining for the existing Bacula team

15.4 Insurance policy reserving engine

Scenario. An insurance company runs ADABAS as its policy reserving engine with nightly batch runs calculating reserves for 5 million policies. The batch run modifies millions of records; the RTO requirement is 2 hours (time to re-run the batch if ADABAS is corrupted).

Solution.

  • Full backup immediately post-batch (approx. 03:00) with a custom backup_timeout_secs=21600 for large databases
  • PLOG archiving every 30 minutes during batch execution to enable fine-grained recovery
  • adavfy consistency verification after every restore (default on) provides auditable proof of recovery integrity

15.5 Development/CI — ADABAS CE container

Scenario. A software team developing Natural/ADABAS applications needs to protect their CE development environment and test the backup plugin in CI before deploying to production.

Solution.

  • Run the official ADABAS CE container: podman run -d --name adabas-ce-test softwareag/adabas-ce:7.4.0
  • Use external_backup=no plog=no in the FileSet (CE restrictions)
  • Deploy the Podman wrapper to bridge the host FD to the container utilities
  • Full backup validates the entire plugin → PTCOMM → adabck pipeline in CI with zero cost
Plugin = "podheitor-adabas: dbid=12 external_backup=no plog=no stream_mode=auto"
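The Podman wrapper mentioned above can be sketched as a one-line forwarder (the container name matches the podman run example; that adabck is on PATH inside the container is an assumption):

```sh
#!/bin/sh
# Hypothetical wrapper installed on the host and referenced via adabck_path:
# forwards the host-side adabck invocation into the CE container unchanged.
exec podman exec -i adabas-ce-test adabck "$@"
```

An analogous wrapper would be deployed for each utility the plugin invokes (adaopr, adavfy, adarec).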

16. Comparison with other approaches

16.1 Feature comparison

The table below compares the PodHeitor ADABAS plugin running on Bacula Community against alternative ways of protecting ADABAS data. Bacula Enterprise is included as a reference: it offers excellent general-purpose enterprise backup and remains a strong choice when broader BEE features are needed; this plugin is purpose-built to deliver ADABAS-specific functionality (PLOG incremental chains, checkpoint-based PITR, online quiet-point integration, catalog-native manifest storage) on the Bacula Community base.

Feature Bacula Community + PodHeitor Bacula Enterprise Veeam Commvault NetBackup
ADABAS native (adabck integration) Yes No No No No
Online (hot) backup with quiet point Yes No No No No
PLOG incremental archiving Yes No No No No
Point-in-time restore (checkpoint) Yes (licensed ADABAS) No No No No
adavfy post-restore verify Yes No No No No
Multi-DBID per job Yes No No No No
Bacula catalog integration Yes Yes N/A N/A N/A
BackupManifest JSON object Yes No No No No
Compression (LZ4 / zstd via Bacula) Yes Yes Yes Yes Yes
Encryption Yes (Bacula native) Yes Yes Yes Yes
Bandwidth throttle Yes (Bacula native) Yes Yes Yes Yes
Retention management Yes (Bacula pools) Yes Yes Yes Yes
Bacula Community compatible Yes N/A N/A N/A N/A
Open-source platform base Yes (Bacula CE) No No No No
ADABAS CE 7.x tested Yes No No No No
Podman wrapper for containerised ADABAS Yes No No No No
Signal-safe cancel + staged-file cleanup Yes N/A N/A N/A N/A
78 automated Rust unit tests Yes N/A N/A N/A N/A

16.2 Cost comparison

Special offer. Bring your renewal proposal for Veeam, Commvault, NetBackup, or any other enterprise backup platform. We will produce a written head-to-head proposal targeting at least 50% savings, with stronger ADABAS-specific functionality that no commercial platform currently offers. Contact heitor@opentechs.lat.

Solution Typical annual cost ADABAS native support
Bacula Community + PodHeitor plugin Significantly less Full native (this plugin)
Bacula Enterprise Often > US$ 10,000/year None (file-level or scripted)
Veeam Data Platform Often > US$ 5,000/year None (VM-level only)
Commvault Often > US$ 15,000/year None (scripted only)
NetBackup Often > US$ 20,000/year None (scripted only)

Prices vary by environment size and negotiated contracts. Contact heitor@opentechs.lat for a specific comparison against your current renewal proposal.


17. Roadmap

The plugin is at v1.0.0-ce — CE-validated end-to-end for full backups; license-gated features (PLOG apply, nucleus-offline restore) are code-complete and await licensed instance validation.

  • v1.0.0 (general availability). Same code as 1.0.0-ce, after passing the license-gated portion of the acceptance matrix on production ADABAS. Target: first customer licensed ADABAS installation.
  • v1.1.0 (planned).
    • adaplp integration to resolve restore_to_time=<ISO8601> timestamps into checkpoint names automatically — enabling time-based PITR without manual checkpoint name lookup
    • stream_mode=auto transparent fallback — currently auto is a synonym of stdout; this change will make auto try stdout first and fall back to tempfile transparently on specific error codes
    • Cold backup mode (ADR-006 accepted, deferred) — for scheduled maintenance-window backups
  • v1.2.0 (future).
    • RPM / DEB packaging and publication to podheitor.com plugin catalogue
    • ARM64 / aarch64 binary packages
    • Windows port (ADR-007 — reassess if a customer requires ADABAS on Windows)
    • Bacularis plugin configuration panel for ADABAS job parameters
    • Extended metrics endpoint (Prometheus) for ADABAS backup job counters

No specific release dates are committed. Feature direction is guided by customer feedback and licensed-instance lab findings.


18. Conclusion

The PodHeitor ADABAS Backup Plugin extends Bacula Community with the first production-grade, ADABAS-native backup integration available on the open-source Bacula platform. It delivers online hot backup via adabck DUMP with transactional quiet-point wrapping, PLOG-based hourly incremental archiving for RPO ≈ 1 hour, checkpoint-based point-in-time restore via adarec, and post-restore consistency verification via adavfy — all managed through a single Bacula FileSet Plugin directive.

The plugin has been validated through 78 Rust unit tests and end-to-end CE testing, with streaming memory efficiency demonstrated at 1.94 MB peak for a running ADABAS instance (O(1) constant-memory streaming). The architecture — Rust cdylib + Rust backend + PTCOMM protocol + subprocess isolation — is the same battle-tested pattern used across all mature PodHeitor plugins.

For organisations running ADABAS on Linux — in telecommunications, banking, insurance, or government — the PodHeitor plugin fills a gap that no other open-source or commercial backup tool currently addresses. It requires no changes to the existing ADABAS installation, no Natural application modifications, and no replacement of an existing Bacula infrastructure. For organisations evaluating commercial backup platform renewals, the combination of Bacula Community and the PodHeitor plugin delivers superior ADABAS-specific DR capability at a fraction of the cost.

To get started:

  • Download the plugin: https://podheitor.com
  • Request a quote or demo: heitor@opentechs.lat
  • Phone / WhatsApp: +1 786 726-1749 | +55 61 98268-4220

19. Contact information

Author Heitor Faria
Company PodHeitor International
Website https://podheitor.com
Email heitor@opentechs.lat
Phone / WhatsApp +1 786 726-1749
Phone / WhatsApp (BR) +55 61 98268-4220
Product page https://podheitor.com/adabas-plugin
Support heitor@opentechs.lat

20. Legal / copyright

© 2026 Heitor Faria — all rights reserved.

The PodHeitor ADABAS Backup Plugin for Bacula is proprietary software. Unauthorised copying, distribution, modification, or reverse engineering is strictly prohibited. A commercial license is required for production use.

Bacula® is a registered trademark of Kern Sibbald and the Bacula community. ADABAS® and Software AG® are registered trademarks of Software AG. Natural® and NaturalONE® are trademarks of Software AG. Veeam® is a trademark of Veeam Software. Commvault® is a trademark of Commvault Systems, Inc. NetBackup® is a trademark of Veritas Technologies LLC. All other trademarks are the property of their respective owners.

This document is provided for informational purposes. Performance figures are from controlled lab measurements using ADABAS Community Edition 7.4.0 and may vary significantly in production environments depending on hardware, ADABAS version, nucleus parameter configuration, database size, and workload characteristics. License-gated features are code-complete and unit-tested but have not been validated end-to-end on a licensed ADABAS instance; performance and compatibility on licensed 8.x+ installations may differ from CE measurements.

Contact for licensing: heitor@opentechs.lat | https://podheitor.com | +1 786 726-1749 | +55 61 98268-4220


