Introduction
Open Sesame is a trust-scoped secret and identity fabric for the desktop. It manages encrypted secret vaults with per-profile trust boundaries, provides window switching with letter-hint overlays, clipboard history with sensitivity detection, keyboard input capture, and text snippet expansion. Everything is scoped to trust profiles that activate based on context or manual selection.
Packages
Open Sesame ships as two packages:
open-sesame (headless core) contains the sesame CLI, daemon-profile, daemon-secrets,
daemon-launcher, and daemon-snippets. It runs anywhere with systemd: desktops, servers, containers,
and VMs. This package provides encrypted vaults, secret management, environment variable injection,
application launching, and profile management without any GUI dependencies.
open-sesame-desktop (GUI layer) depends on open-sesame and adds daemon-wm, daemon-clipboard,
and daemon-input. It requires a COSMIC or Wayland desktop. This package provides the window switcher
overlay, clipboard history, and keyboard input capture.
Installing open-sesame-desktop pulls in open-sesame automatically. On a server or in a container,
install just open-sesame for encrypted secrets and application launching.
Audience
This documentation is written for:
- Contributors working on the Open Sesame codebase. The architecture and platform sections describe internal design, crate structure, IPC protocols, and implementation patterns.
- Extension authors building WASM component model extensions. The extending section covers the extension host runtime, SDK, WIT interfaces, and OCI distribution.
- Platform implementors adding support for new operating systems or compositor backends. The platform section documents the trait abstractions, factory patterns, and feature gating used across platform crates.
- Security auditors reviewing the trust model, cryptographic primitives, sandbox enforcement, and key hierarchy. The secrets, authentication, and compliance sections provide the relevant detail.
- Deployment engineers operating Open Sesame in production. The deployment and packaging sections cover systemd integration, service topology, and package structure.
Navigating the Documentation
- Architecture – internal design: crate map, daemon topology, IPC bus, data flows. Start here for a structural understanding of the system.
- Secrets – vault system: SQLCipher storage, key hierarchy, Argon2id KDF, key-encryption keys, per-profile isolation.
- Authentication – unlock mechanisms: password, SSH agent, multi-factor auth policy engine.
- Platform – OS abstraction layer: Linux (Wayland, D-Bus, evdev, systemd), macOS (Accessibility, Keychain, launchd), Windows (UI Automation, Credential Manager, Task Scheduler).
- Extending – extension system: Wasmtime host, WASI component model, WIT bindings, OCI packaging.
- Desktop – window management: compositor integration, overlay rendering, focus tracking.
- Deployment – operations: systemd units, service readiness, watchdog, packaging.
- Compliance – security posture: Landlock, seccomp, mlock, guard pages, audit logging.
For user-facing quick start instructions, CLI reference, and configuration guide, see the README.
Architecture Overview
Open Sesame is a trust-scoped secret and identity fabric for Linux, built as 21 Rust crates organized into a Cargo workspace. The system runs as seven cooperating daemons under systemd, communicating over a Noise IK encrypted IPC bus. Two Debian packages partition the daemons into a headless core suitable for servers, containers, and VMs, and a desktop layer that requires a Wayland compositor.
Crate Topology
The workspace (Cargo.toml:2-26) contains 21 crates in five layers: memory and cryptographic
foundations, shared abstractions, platform bindings, daemon binaries, and the extension system.
Foundation Layer
| Crate | Purpose |
|---|---|
| core-memory | Page-aligned secure memory allocator backed by memfd_secret(2) |
| core-crypto | Cryptographic primitives: AES-256-GCM, Argon2id, SecureBytes, EncryptedStore |
| core-types | Shared types, error types, and event schema for the IPC bus |
Abstraction Layer
| Crate | Purpose |
|---|---|
| core-config | Configuration schema, validation, hot-reload, and policy override |
| core-ipc | IPC bus protocol, postcard framing, BusServer/BusClient |
| core-fuzzy | Fuzzy matching (nucleo), frecency scoring, FTS5, and index abstractions |
| core-secrets | Secret storage abstraction over platform keystores and age-encrypted vaults |
| core-profile | Profile schema, context-driven activation, isolation contracts, and atomic switching |
| core-auth | Pluggable authentication backends for vault unlock (password, SSH-agent, future FIDO2) |
Platform Layer
| Crate | Purpose |
|---|---|
| platform-linux | Linux API wrappers: evdev, uinput, Wayland protocols, D-Bus, Landlock, seccomp |
| platform-macos | macOS API wrappers: Accessibility, CGEventTap, NSPasteboard, Keychain, LaunchAgent |
| platform-windows | Windows API wrappers: Win32 hooks, UI Automation, Credential Manager, Group Policy |
Daemon Layer
| Crate | Purpose |
|---|---|
| daemon-profile | Profile orchestrator daemon: hosts IPC bus server, context evaluation, concurrent profile activation |
| daemon-secrets | Secrets broker daemon: JIT delivery, keyring integration, profile-scoped access |
| daemon-launcher | Application launcher daemon: fuzzy search, frecency, overlay UI, desktop entry discovery |
| daemon-snippets | Snippet expansion daemon: template rendering, variable substitution, secret injection |
| daemon-wm | Window manager daemon: Wayland overlay window switcher with letter-hint navigation |
| daemon-clipboard | Clipboard manager daemon: history, encryption, sensitivity detection, profile scoping |
| daemon-input | Input remapper daemon: keyboard layers, app-aware rules, macro expansion |
Orchestration and Extension Layer
| Crate | Purpose |
|---|---|
| open-sesame | The sesame CLI binary: platform orchestration for multi-agent desktop control |
| sesame-workspace | Workspace-level integration utilities shared across daemons |
| extension-host | WASM extension host: wasmtime runtime, capability sandbox, extension lifecycle |
| extension-sdk | Extension SDK: types, host function bindings, and WIT interfaces for extensions |
Daemon Model
The seven daemons are split across two systemd targets and two Debian packages.
Headless Daemons (package: open-sesame)
These four daemons have no GUI dependencies and run on any system with systemd: bare-metal servers, containers, VMs, and desktops alike.
| Daemon | Role |
|---|---|
| daemon-profile | The IPC bus host. Manages trust profiles, evaluates context rules (WiFi network, connected hardware), performs atomic profile switching, and hosts the BusServer that all other daemons connect to. |
| daemon-secrets | Secrets broker. Manages SQLCipher-encrypted vaults scoped to trust profiles. Delivers secrets just-in-time to authorized callers over the IPC bus. Enforces per-vault ACLs and rate limiting. |
| daemon-launcher | Application launcher. Discovers .desktop entries, maintains frecency rankings per profile, and launches applications with optional secret injection as environment variables. |
| daemon-snippets | Snippet expansion engine. Renders templates with variable substitution and secret injection. |
Desktop Daemons (package: open-sesame-desktop)
These three daemons require a COSMIC or Wayland compositor. The open-sesame-desktop
package depends on open-sesame, so installing it pulls in all headless daemons
automatically.
| Daemon | Role |
|---|---|
| daemon-wm | Window manager overlay. Renders the Alt+Tab window switcher with letter-hint navigation via Wayland layer-shell protocols. |
| daemon-clipboard | Clipboard manager. Monitors Wayland clipboard events, maintains encrypted history per profile, detects sensitive content (passwords, tokens), and auto-expires sensitive entries. |
| daemon-input | Input remapper. Captures keyboard events via evdev, evaluates compositor-independent shortcut bindings, routes key events to other daemons over the IPC bus. |
Headless/Desktop Split Rationale
The split exists so that secret management, application launching, and snippet expansion
can run on headless infrastructure (CI runners, jump hosts, fleet nodes) without pulling
in Wayland, GTK, or GPU dependencies. A server running only open-sesame gets encrypted
vaults, profile-scoped secrets, and environment injection
(sesame env -p work -- aws s3 ls) with no graphical stack. A developer workstation
installs open-sesame-desktop for the full experience: window switching, clipboard
history, and keyboard shortcuts layered on top of the same headless core.
IPC Bus
All inter-daemon communication flows through a Noise IK encrypted IPC bus
implemented in core-ipc.
Hub-and-Spoke Topology
daemon-profile hosts the BusServer. Every other daemon and every sesame CLI
invocation connects as a BusClient. There is no peer-to-peer communication between
daemons; all messages route through the hub.
Noise IK Transport
The IPC bus uses the Noise IK handshake pattern from
the snow crate (Cargo.toml:89). In the IK pattern the initiator (client) knows
the responder’s (server’s) static public key before the handshake begins. This provides
mutual authentication and forward secrecy on every connection. Messages are framed with
postcard (Cargo.toml:65) for compact binary serialization and deserialized into
core-types::EventKind variants.
Ephemeral CLI Connections
The sesame CLI binary does not maintain a long-lived connection. Each CLI invocation
opens a Noise IK session to daemon-profile, sends one or more EventKind messages,
receives the response, and disconnects. The CLI has no persistent state and can be
invoked from scripts, cron jobs, or CI pipelines without session management.
Message Flow Example
A `sesame secret get -p work aws-access-key` invocation follows this path:
1. The `sesame` CLI opens a Noise IK session to `daemon-profile`.
2. `daemon-profile` authenticates the client and routes the secret-get request to `daemon-secrets`.
3. `daemon-secrets` verifies the caller's clearance against the vault ACL, decrypts the value from the SQLCipher store, and returns it as a `SensitiveBytes` payload.
4. `daemon-profile` forwards the response to the CLI.
5. The CLI writes the secret to stdout and exits.
All secret material in transit is encrypted by the Noise session. All secret material
at rest in daemon memory is held in ProtectedAlloc pages backed by memfd_secret(2)
(see Memory Protection).
Daemon Topology
```mermaid
graph TD
subgraph "open-sesame (headless)"
DP[daemon-profile<br/><i>IPC bus host, profiles</i>]
DS[daemon-secrets<br/><i>vault broker</i>]
DL[daemon-launcher<br/><i>app launch, frecency</i>]
DN[daemon-snippets<br/><i>snippet expansion</i>]
end
subgraph "open-sesame-desktop (GUI)"
DW[daemon-wm<br/><i>window switcher overlay</i>]
DC[daemon-clipboard<br/><i>clipboard history</i>]
DI[daemon-input<br/><i>keyboard capture</i>]
end
CLI[sesame CLI<br/><i>ephemeral connections</i>]
DS -->|BusClient| DP
DL -->|BusClient| DP
DN -->|BusClient| DP
DW -->|BusClient| DP
DC -->|BusClient| DP
DI -->|BusClient| DP
CLI -.->|ephemeral BusClient| DP
subgraph "Foundation Crates"
CM[core-memory]
CC[core-crypto]
CT[core-types]
CI[core-ipc]
CF[core-config]
CZ[core-fuzzy]
CSE[core-secrets]
CP[core-profile]
CA[core-auth]
end
subgraph "Platform Crates"
PL[platform-linux]
end
subgraph "Extension System"
EH[extension-host]
ES[extension-sdk]
end
DP --> CI
DP --> CP
DP --> CF
DP --> CT
DS --> CSE
DS --> CC
DS --> CM
DL --> CZ
DL --> CF
DW --> PL
DC --> PL
DI --> PL
CI --> CT
CI --> CM
CC --> CM
CSE --> CC
CA --> CC
EH --> ES
```
Crate Dependency Highlights
- Every daemon depends on `core-ipc` (for bus connectivity) and `core-types` (for the `EventKind` protocol schema).
- `core-ipc` depends on `core-memory` because Noise session keys are held in `ProtectedAlloc`.
- `core-crypto` depends on `core-memory` because `SecureBytes` wraps `ProtectedAlloc` for key material.
- The three desktop daemons (`daemon-wm`, `daemon-clipboard`, `daemon-input`) depend on `platform-linux` for Wayland protocol bindings, evdev access, and compositor integration.
- `core-secrets` depends on `core-crypto` for vault encryption, and `core-auth` depends on `core-crypto` for key derivation during authentication.
- The `extension-host` crate uses `wasmtime` (Cargo.toml:164) with the component model and pooling allocator for sandboxed WASM extension execution.
Security Boundaries
Each daemon runs as a separate systemd service with:
- Landlock filesystem restrictions (`platform-linux`, `landlock` crate at Cargo.toml:136) limiting each daemon to only the filesystem paths it needs.
- seccomp syscall filtering (`platform-linux`, `libseccomp` crate at Cargo.toml:137) restricting each daemon to a minimal syscall allowlist.
- Noise IK mutual authentication on every IPC connection, preventing unauthorized processes from joining the bus.
- `ProtectedAlloc` secure memory for all key material, with `memfd_secret(2)` removing secret pages from the kernel direct map (see Memory Protection).
See Also
- Memory Protection – secure memory allocator internals
- IPC Protocol – Noise IK handshake, framing, and message schema
- Sandbox Model – Landlock and seccomp configuration
- Profile Trust Model – profile isolation contracts
Memory Protection
All secret-carrying types in Open Sesame are backed by core-memory::ProtectedAlloc,
a page-aligned secure memory allocator that uses memfd_secret(2) on Linux 5.14+ to
remove secret pages from the kernel direct map entirely. This page documents the
allocator internals, the memory layout, the fallback path, and the type hierarchy
built on top of it.
ProtectedAlloc Memory Layout
Every ProtectedAlloc instance maps a contiguous region of virtual memory containing
five sections: three PROT_NONE guard pages, one read-only metadata page, and one or
more read-write data pages. The layout is defined in core-memory/src/alloc.rs:31-32
where OVERHEAD_PAGES is set to 4 (guard0 + metadata + guard1 + guard2), and data
pages are sized to fit the 16-byte canary plus the requested user data length.
mmap'd region (mmap_total bytes)
+------------+------------+------------+---------------------------+------------+
| guard pg 0 | metadata | guard pg 1 | data pages ... | guard pg 2 |
| PROT_NONE | PROT_READ | PROT_NONE | PROT_READ | PROT_WRITE | PROT_NONE |
+------------+------------+------------+---------------------------+------------+
^ ^ ^ ^ ^
| | | | |
mmap_base +1 page +2 pages +3 pages +3 pages
(metadata) (data_region) +data_region_len
(guard2)
Detail of data pages (right-aligned user data):
|<------------- data_region_len (data_pages * page_size) ------------->|
+-------------------+-----------+--------------------------------------+
| padding | canary | user data |
| (filled 0xDB) | 16 bytes | (user_len bytes) |
+-------------------+-----------+--------------------------------------+
^ ^ ^ ^
data_start canary_ptr user_data guard page 2
(PROT_NONE)
Byte-Level Sizes
Given a system page size P (typically 4096) and a requested allocation of N bytes:
| Component | Formula | Example (N=32, P=4096) |
|---|---|---|
| data_pages | ceil((16 + N) / P) | 1 |
| data_region_len | data_pages * P | 4096 |
| mmap_total | (4 + data_pages) * P | 20480 (5 pages) |
| padding_len | data_region_len - 16 - N | 4048 |
The padding is filled with 0xDB (PADDING_FILL, alloc.rs:29), matching
libsodium’s garbage fill convention.
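As a sanity check, the size arithmetic in the table can be reproduced in a few lines. This is an illustrative sketch, not the crate's code; the constants mirror the documented `CANARY_SIZE` and `OVERHEAD_PAGES` values from alloc.rs.

```rust
// Illustrative reproduction of the allocation-size formulas above.
// CANARY_SIZE and OVERHEAD_PAGES mirror the documented alloc.rs constants.
const CANARY_SIZE: usize = 16;
const OVERHEAD_PAGES: usize = 4; // guard0 + metadata + guard1 + guard2

/// Returns (data_pages, data_region_len, mmap_total, padding_len).
fn layout(user_len: usize, page_size: usize) -> (usize, usize, usize, usize) {
    // ceil((16 + N) / P), written without div_ceil for older toolchains
    let data_pages = (CANARY_SIZE + user_len + page_size - 1) / page_size;
    let data_region_len = data_pages * page_size;
    let mmap_total = (OVERHEAD_PAGES + data_pages) * page_size;
    let padding_len = data_region_len - CANARY_SIZE - user_len;
    (data_pages, data_region_len, mmap_total, padding_len)
}

fn main() {
    // Matches the N=32, P=4096 column: 1 data page, 4096-byte data
    // region, 20480 bytes mapped, 4048 bytes of 0xDB padding.
    assert_eq!(layout(32, 4096), (1, 4096, 20480, 4048));
}
```

Note that a request one byte past the single-page capacity (N = 4081 with P = 4096, since 16 + 4081 > 4096) rolls over to two data pages and six mapped pages.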
Guard Pages
Three guard pages are set to PROT_NONE via mprotect(2)
(alloc.rs:422-434). Any read or write to a guard page triggers an
immediate SIGSEGV:
- guard0 (`mmap_base`): prevents underflow from adjacent lower-address allocations.
- guard1 (`mmap_base + 2*P`): separates the read-only metadata page from the writable data region. Prevents metadata corruption from data-region underflow.
- guard2 (`mmap_base + 3*P + data_region_len`): the trailing guard page. Because user data is right-aligned within the data region, a buffer overflow of even one byte hits this page immediately.
Right-Alignment
User data is placed at the end of the data region (alloc.rs:455):
```rust
let user_data_ptr = data_start.add(data_region_len - user_len);
```
This right-alignment means a sequential buffer overflow crosses from user
data directly into the trailing guard page (guard2), triggering SIGSEGV
on the first out-of-bounds byte. Without right-alignment, an overflow
would silently write into unused padding within the same page before
reaching the guard.
Metadata Page
The metadata page (alloc.rs:438-451) stores allocation bookkeeping at
fixed offsets, then is downgraded to PROT_READ:
| Offset | Size | Content |
|---|---|---|
| 0 | 8 bytes | mmap_total (total mapped size) |
| 8 | 8 bytes | Data region offset from mmap_base |
| 16 | 8 bytes | User data offset from mmap_base |
| 24 | 8 bytes | user_len (requested allocation size) |
| 32 | 8 bytes | data_pages count |
| 40 | 16 bytes | Copy of the process-wide canary |
The metadata page is restored to PROT_READ|PROT_WRITE during Drop
(alloc.rs:688-694) so it can be volatile-zeroed before munmap.
memfd_secret(2) Backend
The preferred allocation backend on Linux is memfd_secret(2), invoked
via raw syscall 447 (alloc.rs:130,335). This syscall, available since
Linux 5.14, creates an anonymous file descriptor whose pages are:
- Removed from the kernel direct map: the pages are not addressable by any kernel code path, including `/proc/pid/mem` reads, `process_vm_readv(2)`, kernel modules, and DMA engines.
- Invisible to ptrace: even `CAP_SYS_PTRACE` cannot read the page contents.
- Implicitly locked: the kernel does not swap `memfd_secret` pages to disk. No explicit `mlock(2)` is needed.
The syscall requires CONFIG_SECRETMEM=y in the kernel configuration.
To check whether a running kernel has this enabled:
```sh
zgrep CONFIG_SECRETMEM /proc/config.gz
# or
grep CONFIG_SECRETMEM /boot/config-$(uname -r)
```
Probe and Caching
The allocator probes for memfd_secret availability once at process
startup via probe_memfd_secret() (alloc.rs:125-188) and caches the
result in a OnceLock<bool> (alloc.rs:45). The probe sequence:
1. Call `syscall(447, 0)` to create a secret fd.
2. If `fd < 0`, log an `ERROR`-level security degradation and cache `false`.
3. If `fd >= 0`, close the fd immediately and cache `true`.
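The probe-once-and-cache step can be sketched with `OnceLock`. This is a hedged stand-in: `probe()` below merely counts invocations instead of issuing syscall 447, to show that the cached result prevents repeat probes.

```rust
// Sketch of the probe-once pattern: the availability result is cached in a
// OnceLock so the probe runs exactly once per process. probe() is a
// stand-in for the real memfd_secret syscall probe.
use std::sync::atomic::{AtomicU32, Ordering};
use std::sync::OnceLock;

static MEMFD_SECRET_AVAILABLE: OnceLock<bool> = OnceLock::new();
static PROBE_CALLS: AtomicU32 = AtomicU32::new(0);

fn probe() -> bool {
    // Real code issues syscall(447, 0) and inspects the returned fd;
    // here we count invocations and report "unavailable".
    PROBE_CALLS.fetch_add(1, Ordering::SeqCst);
    false
}

fn memfd_secret_available() -> bool {
    *MEMFD_SECRET_AVAILABLE.get_or_init(probe)
}

fn main() {
    assert!(!memfd_secret_available());
    assert!(!memfd_secret_available());
    // The probe ran only once despite two queries.
    assert_eq!(PROBE_CALLS.load(Ordering::SeqCst), 1);
}
```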
Allocation Sequence
The memfd_secret_mmap() function (alloc.rs:333-372) performs the full
allocation:
syscall(447, 0)– create the secret fd.ftruncate(fd, size)– set the mapping size.mmap(NULL, size, PROT_READ|PROT_WRITE, MAP_SHARED, fd, 0)– map the pages.MAP_SHAREDis required formemfd_secret.close(fd)– the mapping persists after the fd is closed.
Fallback: mmap(MAP_ANONYMOUS)
On kernels without memfd_secret support (Linux < 5.14 or
CONFIG_SECRETMEM disabled), the allocator falls back to
mmap(MAP_ANONYMOUS|MAP_PRIVATE) (alloc.rs:380-398). This fallback
applies two additional protections that memfd_secret provides
implicitly:
- `mlock(2)` (alloc.rs:495): locks the data region pages into RAM, preventing the kernel from swapping them to disk. If `mlock` fails with `ENOMEM` (the `RLIMIT_MEMLOCK` limit is exceeded), the allocator logs a `WARN`-level security degradation but continues (alloc.rs:498-511). If `mlock` fails with any other errno, allocation fails with `ProtectedAllocError::MmapFailed`.
- `madvise(MADV_DONTDUMP)` (alloc.rs:482): excludes the data region from core dumps. This is Linux-specific.
The fallback is a security degradation: pages remain on the kernel
direct map and are readable via /proc/pid/mem by any process running
as the same UID. An ERROR-level audit log is emitted by both
probe_memfd_secret() (alloc.rs:147-161) and core_memory::init()
(core-memory/src/lib.rs:83-91) when operating in fallback mode. The
log message explicitly states that fallback mode does not meet
IL5/IL6, STIG, or PCI-DSS requirements (lib.rs:88).
Canary Verification
Each ProtectedAlloc instance places a 16-byte canary (CANARY_SIZE,
alloc.rs:25) immediately before the user data region. The canary value
is a process-wide random value generated once from getrandom(2) (on Linux,
alloc.rs:56) or getentropy(2) (on macOS, alloc.rs:67) and cached
in a OnceLock<[u8; 16]> (alloc.rs:39).
Placement
The canary is written at user_data_ptr - 16 (alloc.rs:457-464). A
copy is also stored in the metadata page at offset 40 (alloc.rs:446).
Constant-Time Verification on Drop
During ProtectedAlloc::drop() (alloc.rs:636-720), the canary is
verified before any cleanup:
1. The 16 bytes at `canary_ptr` are read as a slice (alloc.rs:641-642).
2. They are compared to the global canary using `fixed_len_constant_time_eq()` (alloc.rs:613-624). This function XORs each byte pair into an accumulator and reads the result through `read_volatile` to prevent the compiler from short-circuiting the comparison:

   ```rust
   fn fixed_len_constant_time_eq(a: &[u8], b: &[u8]) -> bool {
       if a.len() != b.len() {
           return false;
       }
       let mut acc: u8 = 0;
       for (x, y) in a.iter().zip(b.iter()) {
           acc |= x ^ y;
       }
       let result = unsafe { std::ptr::read_volatile(&acc) };
       result == 0
   }
   ```

3. If the comparison fails, an `ERROR`-level audit log is emitted and the process aborts via `std::process::abort()` (alloc.rs:656). The process aborts rather than continuing with potentially compromised key material.
Canary corruption indicates a buffer underflow, heap corruption, or use-after-free in secret-handling code.
Volatile Zeroize
After canary verification passes, the entire data region (not just the
user data portion) is volatile-zeroed via volatile_zero()
(alloc.rs:627-632):
```rust
fn volatile_zero(ptr: *mut u8, len: usize) {
    let slice = unsafe { std::slice::from_raw_parts_mut(ptr, len) };
    slice.zeroize();
    std::sync::atomic::compiler_fence(std::sync::atomic::Ordering::SeqCst);
}
```
The zeroize crate (Cargo.toml:85) performs volatile writes that the
compiler cannot elide. The compiler_fence(SeqCst) (alloc.rs:631)
provides an additional barrier preventing reordering of the zeroize with
the subsequent munmap. This zeroes the canary, the 0xDB padding, and
the user data before the pages are returned to the kernel.
Drop Sequence
The full Drop implementation (alloc.rs:636-720) proceeds in order:
1. Canary check – constant-time comparison, abort on corruption.
2. Volatile-zero the data region – `data_region_len` bytes starting at `data_region`.
3. munlock (fallback only) – unlock data pages (alloc.rs:665).
4. MADV_DODUMP (fallback only, Linux) – re-enable core dump inclusion for the zeroed pages (alloc.rs:675).
5. Zero metadata page – restore `PROT_READ|PROT_WRITE`, volatile-zero (alloc.rs:686-695).
6. munmap – release the entire mapping back to the kernel (alloc.rs:700).
Type Hierarchy
Three types build on ProtectedAlloc to provide ergonomic secret
handling at different layers of the system.
SecureBytes (core-crypto/src/secure_bytes.rs)
SecureBytes is the primary vehicle for cryptographic key material:
master keys, vault keys, derived keys, and KEKs. It wraps a
ProtectedAlloc with an actual_len field to support empty values
(backed by a 1-byte sentinel allocation, secure_bytes.rs:55-56).
Key properties:
- `from_slice(&[u8])` (secure_bytes.rs:73-81): copies directly into protected memory with no intermediate heap allocation. This is the preferred constructor.
- `new(Vec<u8>)` (secure_bytes.rs:51-63): accepts an owned `Vec`, copies into protected memory, then zeroizes the source `Vec` on the unprotected heap. The doc comment (secure_bytes.rs:37-44) explicitly notes the brief heap exposure and recommends `from_slice` when possible.
- `into_protected_alloc()` (secure_bytes.rs:107-120): zero-copy transfer of the inner `ProtectedAlloc` to a new owner. Uses `ManuallyDrop` to suppress the `SecureBytes` destructor and `ptr::read` to move the allocation out. The `ProtectedAlloc` is never copied or re-mapped.
- `Clone` (secure_bytes.rs:124-129): creates a fully independent `ProtectedAlloc` with its own guard pages, canary, and mlock. Both original and clone zeroize independently on drop.
- `Debug` (secure_bytes.rs:146-148): redacted output showing only byte count (`SecureBytes([REDACTED; 32 bytes])`), never contents.
SecureBytes does not implement Serialize or Deserialize. Secrets
must be explicitly converted to SensitiveBytes before crossing a
serialization boundary.
SecureVec (core-crypto/src/secure_vec.rs)
SecureVec is a password input buffer designed for character-by-character
collection in graphical overlays where the full password length is not
known in advance. It pre-allocates a fixed-size ProtectedAlloc (512
bytes for for_password(), secure_vec.rs:14,61) and provides UTF-8
aware push_char/pop_char operations.
Key properties:
- No reallocation: the buffer is fixed-size. `push_char` panics if the buffer is full (secure_vec.rs:118-122). The 512-byte limit accommodates passwords up to approximately 128 four-byte Unicode characters (secure_vec.rs:13).
- Lazy allocation: `SecureVec::new()` (secure_vec.rs:43-48) creates an empty instance with `inner: None` and no mmap. `for_password()` or `with_capacity()` triggers the actual `ProtectedAlloc`.
- UTF-8 aware pop: `pop_char()` (secure_vec.rs:133-160) scans backwards to find multi-byte character boundaries (checking the `0xC0` continuation mask) and zeroizes the removed bytes in-place before adjusting the cursor.
- `clear()` (secure_vec.rs:199-208): zeroizes all written bytes and resets the cursor without deallocating, allowing buffer reuse for sequential vault unlocks.
- Double zeroize on drop: `Drop` (secure_vec.rs:217-229) zeroizes written bytes before `ProtectedAlloc::drop` performs its own volatile-zero of the entire data region.
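The backward boundary scan behind a UTF-8-aware pop can be sketched on a plain byte buffer. This is a simplified stand-in for the real `pop_char()` (which operates on protected memory and a cursor, not a `Vec`), assuming only the documented `0xC0` continuation-mask check and zeroize-before-remove behavior.

```rust
// Sketch of a UTF-8-aware pop: scan backwards past continuation bytes
// (0b10xxxxxx), then zero the removed bytes in place before truncating.
// Simplified stand-in for the documented pop_char(), not the real code.
fn pop_char_sketch(buf: &mut Vec<u8>) -> bool {
    if buf.is_empty() {
        return false;
    }
    // Walk back to the first byte of the last character: a UTF-8
    // continuation byte has the bit pattern 10xxxxxx.
    let mut start = buf.len() - 1;
    while start > 0 && (buf[start] & 0xC0) == 0x80 {
        start -= 1;
    }
    // Zeroize the removed bytes before shrinking, mirroring the docs.
    for b in &mut buf[start..] {
        *b = 0;
    }
    buf.truncate(start);
    true
}

fn main() {
    let mut buf = "aé".as_bytes().to_vec(); // 'é' encodes as two bytes
    assert_eq!(buf.len(), 3);
    assert!(pop_char_sketch(&mut buf)); // removes both bytes of 'é'
    assert_eq!(buf, b"a");
}
```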
SensitiveBytes (core-types/src/sensitive.rs)
SensitiveBytes is the wire-compatible type for secret values in
EventKind IPC messages. It wraps a ProtectedAlloc and implements
Serialize/Deserialize for postcard framing.
Key properties:
- Zero-copy deserialization path: the custom `SensitiveBytesVisitor` (sensitive.rs:112-146) implements `visit_bytes` (sensitive.rs:123-125), which receives a borrowed `&[u8]` from the deserializer and copies directly into a `ProtectedAlloc`. When postcard performs in-memory deserialization, this path avoids any intermediate heap `Vec<u8>`.
- Fallback deserialization path: `visit_byte_buf` (sensitive.rs:129-132) handles deserializers that provide owned bytes. The `Vec<u8>` is copied into protected memory and immediately zeroized.
- Sequence fallback: `visit_seq` (sensitive.rs:136-145) handles deserializers that encode bytes as a sequence of `u8` values. The collected `Vec<u8>` is zeroized after copying.
- `from_protected()` (sensitive.rs:57-62): accepts a `ProtectedAlloc` and `actual_len` directly, enabling zero-copy transfer from `SecureBytes`.
- Serialization (sensitive.rs:95-99): calls `serializer.serialize_bytes()` directly from the protected memory slice. postcard reads the slice without copying.
- `Debug` (sensitive.rs:160-163): redacted output (`[REDACTED; 32 bytes]`).
Zero-Copy Lifecycle
The three types form a zero-copy pipeline for secret material:
1. A vault key is derived by `core-crypto` and stored as `SecureBytes` (in `ProtectedAlloc`).
2. When the key must cross the IPC bus, `SecureBytes::into_protected_alloc()` transfers the `ProtectedAlloc` to `SensitiveBytes::from_protected()` with no heap copy and no re-mapping.
3. `SensitiveBytes` serializes directly from the `ProtectedAlloc` pages into the Noise-encrypted IPC frame.
4. On the receiving end, postcard's `visit_bytes` path deserializes directly into a new `ProtectedAlloc`.
At no point does plaintext key material exist on the unprotected heap,
provided the from_slice constructor path is used rather than
SecureBytes::new(Vec<u8>).
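The zero-copy hand-off relies on the standard `ManuallyDrop` + `ptr::read` move-out pattern that the `into_protected_alloc()` description names. The sketch below demonstrates that pattern on a stand-in type: `Inner` and `SecureBytesSketch` are hypothetical, not the real `ProtectedAlloc` or `SecureBytes`.

```rust
// Hedged sketch of the ManuallyDrop/ptr::read move-out pattern: suppress
// the outer destructor, then move the field out without copying, so the
// Drop impl never runs against the transferred allocation.
use std::mem::ManuallyDrop;

struct Inner(Vec<u8>); // stand-in for ProtectedAlloc

struct SecureBytesSketch {
    inner: Inner,
}

impl SecureBytesSketch {
    fn into_inner(self) -> Inner {
        // Wrap self so its Drop impl will not run, then read the field
        // out by pointer. The allocation is moved, never copied.
        let me = ManuallyDrop::new(self);
        unsafe { std::ptr::read(&me.inner) }
    }
}

impl Drop for SecureBytesSketch {
    fn drop(&mut self) {
        // Stand-in for zeroize-on-drop; must NOT run during into_inner().
        self.inner.0.fill(0);
    }
}

fn main() {
    let s = SecureBytesSketch { inner: Inner(vec![1, 2, 3]) };
    let inner = s.into_inner();
    // The bytes survived the transfer: the destructor was suppressed.
    assert_eq!(inner.0, vec![1, 2, 3]);
}
```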
init_secure_memory() Pre-Sandbox Probe
The core_memory::init() function (core-memory/src/lib.rs:58-107)
must be called before the seccomp sandbox is applied. It performs a probe
allocation of 1 byte (lib.rs:68) which triggers probe_memfd_secret()
internally, caching whether syscall 447 is available. If this probe ran
after seccomp activation, the raw syscall would be killed by the filter.
The function also reads RLIMIT_MEMLOCK via getrlimit(2)
(lib.rs:62-65) and logs it alongside the security posture:
- memfd_secret available: `INFO`-level log with `backend = "memfd_secret"` and the `rlimit_memlock_bytes` value (lib.rs:71-78).
- memfd_secret unavailable: `ERROR`-level log with `backend = "mmap(MAP_ANONYMOUS) fallback"` and remediation instructions (lib.rs:83-91).
- Probe allocation failure: `ERROR`-level log warning that all secret-carrying types will panic on allocation (lib.rs:95-103).
The function is idempotent (lib.rs:57). The OnceLock values for
CANARY, PAGE_SIZE, and MEMFD_SECRET_AVAILABLE (alloc.rs:39,42,45)
ensure that the probe syscall, the getrandom call, and the sysconf
call each execute exactly once per process regardless of how many times
init() is called.
Guard Page SIGSEGV Test Methodology
The guard page tests (core-memory/tests/guard_page_sigsegv.rs) use a
subprocess harness pattern because the expected outcome is process death
by signal, which cannot be caught within a single test process.
Test Structure
Each test case consists of a parent test and a child harness:
- Parent test (e.g., `overflow_hits_trailing_guard_page`, guard_page_sigsegv.rs:58-62): spawns the test binary targeting the harness function by name via `--exact`, with a gating environment variable `__GUARD_PAGE_HARNESS`.
- Child harness (e.g., `overflow_harness`, guard_page_sigsegv.rs:66-83): checks the environment variable, allocates a `ProtectedAlloc` via `from_slice(b"test")`, performs a deliberate out-of-bounds `read_volatile`, and calls `exit(1)` if the read succeeds (which it must not).
- Signal assertion (`assert_signal_death`, guard_page_sigsegv.rs:25-53): the parent verifies the child was killed by `SIGSEGV` (signal 11) or `SIGBUS` (signal 7), handling both direct signal death (`ExitStatusExt::signal()`) and the 128+signal exit code convention used by some platforms.
Test Cases
| Test | Action | Expected Signal |
|---|---|---|
| overflow_hits_trailing_guard_page | Reads one byte past ptr.add(len) (guard_page_sigsegv.rs:78) | SIGSEGV (11) |
| underflow_hits_leading_guard_page | Reads one page before ptr via ptr.sub(page_size) (guard_page_sigsegv.rs:112) | SIGSEGV (11) or SIGBUS (7) |
The overflow test validates right-alignment: because user data is flush
against guard page 2, the very first out-of-bounds byte lands on a
PROT_NONE page. The underflow test reads backward past the canary and
padding into guard page 1 (between metadata and data region).
The environment variable gate (guard_page_sigsegv.rs:67) ensures that
when the test binary is run normally (without __GUARD_PAGE_HARNESS
set), the harness functions return immediately without performing any
unsafe operations.
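The environment-variable gate can be sketched in a few lines. This is a hedged illustration of the pattern, not the actual test code; the variable name mirrors the docs, and the fault-inducing body is elided.

```rust
// Sketch of the env-var-gated harness pattern: the unsafe body runs only
// when the parent test sets the gating variable on the spawned child.
use std::env;
use std::process;

fn overflow_harness_sketch() {
    // Without the gating variable, return immediately: normal runs of the
    // test binary must not execute the fault-inducing body.
    if env::var("__GUARD_PAGE_HARNESS").is_err() {
        return;
    }
    // ... deliberate out-of-bounds read_volatile would go here; the
    // process should die by SIGSEGV before reaching the next line ...
    process::exit(1); // reached only if the expected fault did not fire
}

fn main() {
    overflow_harness_sketch(); // a no-op when the variable is unset
    println!("harness gated off");
}
```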
Platform Support Summary
| Platform | Backend | Protection Level |
|---|---|---|
| Linux 5.14+ with CONFIG_SECRETMEM=y | memfd_secret(2) | Full: pages removed from kernel direct map |
| Linux < 5.14 or without CONFIG_SECRETMEM | mmap(MAP_ANONYMOUS) + mlock + MADV_DONTDUMP | Degraded: pages on direct map, audit-logged |
| Non-Unix | Compile-time stub | ProtectedAlloc::new() returns Err(Unsupported) |
The non-Unix stub (core-memory/src/lib.rs:111-182) exists solely so
the crate compiles in workspace-wide cargo check runs. All methods on
the stub panic or return errors; no secrets can be handled on unsupported
platforms.
See Also
- Architecture Overview – crate topology and daemon model
- IPC Protocol – Noise IK transport that carries `SensitiveBytes` payloads
- Sandbox Model – Landlock and seccomp filters applied after `init_secure_memory()`
IPC Bus Protocol
The core-ipc crate implements the inter-process communication protocol
used by all Open Sesame daemons and the sesame CLI.
Bus Architecture
The IPC bus uses a star topology. daemon-profile hosts a BusServer
that binds a Unix domain socket at $XDG_RUNTIME_DIR/pds/bus.sock. All
other daemons and the sesame CLI connect to this socket as BusClient
instances.
The server accept loop (BusServer::run in server.rs) listens for
incoming connections, extracts UCred credentials via SO_PEERCRED,
enforces a same-UID policy (rejecting connections from different users),
and spawns a per-connection handler task. Each connection performs a
mandatory Noise IK handshake before any application data flows.
Per-connection state is tracked in ConnectionState, which holds:
- The daemon's `DaemonId` (set on first message)
- A registry-verified daemon name (`verified_name`, from the Noise IK handshake)
- An outbound `mpsc::Sender<Vec<u8>>` channel (capacity 256)
- `PeerCredentials` (PID and UID)
- `SecurityLevel` clearance
- Subscription filters
- An optional `TrustVector` computed at connection time
An atomic u64 counter assigns monotonically increasing connection IDs.
Connection state is registered only after the Noise handshake succeeds,
preventing a race where broadcast frames arrive on the outbound channel
before the writer task is ready.
On BusServer::drop, the socket file is removed from the filesystem.
Noise IK Handshake
All socket connections use the Noise Protocol Framework with the IK pattern:
Noise_IK_25519_ChaChaPoly_BLAKE2s
The primitives are:
- X25519 Diffie-Hellman key agreement
- ChaCha20-Poly1305 authenticated encryption (AEAD)
- BLAKE2s hashing
The IK pattern means the initiator (connecting daemon) transmits its static key encrypted in the first message, and the responder’s (bus server’s) static key is pre-known to the initiator. This provides mutual authentication in a single round-trip (2 messages).
From the initiator (client) perspective:
- Write message 1 to responder (ephemeral key + encrypted static key)
- Read message 2 from responder (responder’s ephemeral key)
- Transition to transport mode with forward-secret keys
From the responder (server) perspective:
- Read message 1 from initiator (contains initiator’s ephemeral + encrypted static)
- Write message 2 to initiator (contains responder’s ephemeral)
- Transition to transport mode with forward-secret keys
The handshake has a 5-second timeout (HANDSHAKE_TIMEOUT) to prevent
denial-of-service via slow handshake. The snow crate provides the
Noise implementation.
Prologue Binding
The Noise prologue cryptographically binds OS-level transport identity
to the encrypted channel. Both sides construct an identical prologue
from UCred credentials:
PDS-IPC-v1:<lower_pid>:<lower_uid>:<higher_pid>:<higher_uid>
Canonical ordering is by PID (lower PID first), ensuring both sides produce identical bytes regardless of which side is the server. If either side has incorrect peer credentials (e.g., due to spoofing), the prologue mismatch causes the Noise handshake to fail cryptographically.
PeerCredentials are obtained via:
- `extract_ucred()`: calls `UnixStream::peer_cred()` (which uses `SO_PEERCRED` on Linux) to get the remote peer's PID and UID.
- `local_credentials()`: calls `rustix::process::getuid()` and `std::process::id()` for the local process.
An in-process sentinel (PeerCredentials::in_process()) uses
u32::MAX as the UID, which never matches a real UCred check.
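The canonical-ordering rule can be sketched in a few lines. This is a std-only illustration, not the core-ipc implementation; the function name and the `(pid, uid)` tuple representation are hypothetical stand-ins for the real `PeerCredentials` type.

```rust
/// Hypothetical sketch: build the Noise prologue from two (pid, uid) pairs.
/// Ordering by PID (lower PID first) makes both sides of the connection
/// derive byte-identical prologue material.
fn build_prologue(local: (u32, u32), peer: (u32, u32)) -> Vec<u8> {
    let ((lo_pid, lo_uid), (hi_pid, hi_uid)) =
        if local.0 <= peer.0 { (local, peer) } else { (peer, local) };
    format!("PDS-IPC-v1:{lo_pid}:{lo_uid}:{hi_pid}:{hi_uid}").into_bytes()
}

fn main() {
    // Both sides produce the same bytes regardless of which one is the server.
    let server = (100, 1000);
    let client = (200, 1000);
    assert_eq!(build_prologue(server, client), build_prologue(client, server));
}
```

Because the prologue is mixed into the Noise handshake hash, any disagreement about peer credentials surfaces as a cryptographic handshake failure rather than a policy check that could be skipped.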
Encrypted Transport
After handshake completion, NoiseTransport wraps a
snow::TransportState and provides chunked encrypted I/O.
Noise transport messages are limited to 65535 bytes. The maximum
plaintext per Noise message is 65519 bytes (65535 minus the 16-byte
AEAD tag). Application frames up to 16 MiB
(MAX_FRAME_SIZE = 16 * 1024 * 1024) are chunked into multiple Noise
messages.
Encrypted Frame Wire Format
[4-byte BE chunk_count] (length-prefixed, plaintext)
[length-prefixed encrypted chunk 1]
[length-prefixed encrypted chunk 2]
...
[length-prefixed encrypted chunk N]
Each chunk is individually encrypted by
snow::TransportState::write_message and written via the
length-prefixed framing layer. The chunk count header is transmitted
in the clear because it is not sensitive and the reader needs it to
know how many chunks to expect.
Zero-length payloads send one empty encrypted chunk. On the read path,
the reassembled payload is validated against MAX_FRAME_SIZE, and the
intermediate decrypt buffer is zeroized via zeroize::Zeroize.
A 200 KiB payload requires 4 chunks (ceil(200 * 1024 / 65519) = 4). The maximum number of chunks for a 16 MiB payload is validated on read to reject fabricated chunk counts.
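The chunking arithmetic can be written out directly. The constant names below mirror the ones quoted in this section; the helper functions themselves are illustrative, not the transport.rs API.

```rust
const MAX_FRAME_SIZE: usize = 16 * 1024 * 1024;
// 65535-byte Noise message limit minus the 16-byte AEAD tag.
const MAX_NOISE_PLAINTEXT: usize = 65_535 - 16; // 65_519

/// Number of Noise messages needed to carry one application frame.
/// Zero-length payloads still send one empty encrypted chunk.
fn chunk_count(payload_len: usize) -> usize {
    if payload_len == 0 { 1 } else { payload_len.div_ceil(MAX_NOISE_PLAINTEXT) }
}

/// Upper bound the read path can enforce to reject fabricated chunk counts.
fn max_chunks() -> usize {
    MAX_FRAME_SIZE.div_ceil(MAX_NOISE_PLAINTEXT)
}

fn main() {
    assert_eq!(chunk_count(200 * 1024), 4); // the 200 KiB example above
    assert_eq!(chunk_count(0), 1);          // empty payload: one empty chunk
    println!("a 16 MiB frame needs at most {} chunks", max_chunks());
}
```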
Mutual Exclusion
snow::TransportState requires &mut self for both encrypt and decrypt.
Both the server and client use tokio::select! to multiplex reads and
writes in a single task rather than splitting into separate reader/writer
tasks with a Mutex. The Mutex approach would deadlock because the
reader would hold the lock while awaiting socket I/O, starving the
writer.
Decrypted postcard buffers on the server side and plaintext outbound buffers on the client side are zeroized after processing, as they may contain serialized secret values.
Framing Layer
The framing layer (framing.rs) provides two independent services.
Serialization
encode_frame and decode_frame convert between typed Rust values and
postcard byte payloads:
- `encode_frame<T: Serialize>(value) -> Vec<u8>`: calls `postcard::to_allocvec`.
- `decode_frame<T: DeserializeOwned>(payload) -> T`: calls `postcard::from_bytes`.
These are symmetric: decode_frame(encode_frame(v)) == v.
Wire I/O
write_frame and read_frame add and strip a 4-byte big-endian length
prefix for socket transport:
- `write_frame(writer, payload)`: writes `[4-byte BE length][payload]`, then flushes.
- `read_frame(reader) -> Vec<u8>`: reads the 4-byte length, validates against `MAX_FRAME_SIZE` (16 MiB), then reads the payload.
The length prefix is a wire-only concern. Internal channels (bus routing,
BusServer::publish, subscriber mpsc channels) carry raw postcard
payloads without it.
Socket wire format: [4-byte BE length][postcard payload]
Frames with a length exceeding MAX_FRAME_SIZE are rejected on read to
prevent out-of-memory conditions from malformed or malicious length
prefixes.
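A minimal, std-only sketch of this framing (synchronous `std::io` in place of the real async I/O, with the behavior described above: BE length prefix, `MAX_FRAME_SIZE` validation on read):

```rust
use std::io::{self, Cursor, Read, Write};

const MAX_FRAME_SIZE: usize = 16 * 1024 * 1024;

/// Write [4-byte BE length][payload], then flush.
fn write_frame<W: Write>(w: &mut W, payload: &[u8]) -> io::Result<()> {
    w.write_all(&(payload.len() as u32).to_be_bytes())?;
    w.write_all(payload)?;
    w.flush()
}

/// Read the 4-byte length, reject oversized frames, then read the payload.
fn read_frame<R: Read>(r: &mut R) -> io::Result<Vec<u8>> {
    let mut len_buf = [0u8; 4];
    r.read_exact(&mut len_buf)?;
    let len = u32::from_be_bytes(len_buf) as usize;
    if len > MAX_FRAME_SIZE {
        // Malformed or malicious length prefix: fail before allocating.
        return Err(io::Error::new(io::ErrorKind::InvalidData, "frame exceeds MAX_FRAME_SIZE"));
    }
    let mut payload = vec![0u8; len];
    r.read_exact(&mut payload)?;
    Ok(payload)
}

fn main() -> io::Result<()> {
    let mut buf = Cursor::new(Vec::new());
    write_frame(&mut buf, b"hello")?;
    buf.set_position(0);
    assert_eq!(read_frame(&mut buf)?, b"hello");
    Ok(())
}
```

Note the ordering of the size check: the length is validated before the payload buffer is allocated, which is what makes the prefix safe against memory-exhaustion attacks.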
Message Envelope
Every IPC message is wrapped in Message<T> (message.rs). The current
wire version is 3 (WIRE_VERSION = 3).
| Field | Type | Description |
|---|---|---|
| wire_version | u8 | Wire format version, always serialized first. |
| msg_id | Uuid (v7) | Unique message identifier, time-ordered. |
| correlation_id | Option<Uuid> | Links a response to its originating request's msg_id. |
| sender | DaemonId | Sender daemon identity (UUID v7, dmon- prefix). |
| timestamp | Timestamp | Dual-clock timestamp (wall + monotonic). |
| payload | T | The event or request payload (typically EventKind). |
| security_level | SecurityLevel | Access control level for routing decisions. |
| verified_sender_name | Option<String> | Server-stamped name from Noise IK registry lookup. |
| origin_installation | Option<InstallationId> | v3: sender's installation identity. |
| agent_id | Option<AgentId> | v3: sender's agent identity. |
| trust_snapshot | Option<TrustVector> | v3: trust assessment at message creation time. |
MessageContext carries per-client identity state so Message::new()
can populate all fields. A minimal context requires only a DaemonId;
v3 fields default to None.
The verified_sender_name is set exclusively by route_frame() in the
bus server. Client-supplied values are overwritten. None indicates an
unregistered client. Postcard uses positional encoding, so all Option
fields must always be present on the wire; skip_serializing_if is
deliberately not used.
Message::new() generates a UUID v7 for msg_id (time-ordered) and
leaves correlation_id at None. The with_correlation(id) builder
method sets it for response messages.
Clearance Model
SecurityLevel Enum
SecurityLevel (core-types/src/security.rs) classifies message
sensitivity for bus routing. The variants, ordered from lowest to highest
by their derived Ord:
| Level | Description |
|---|---|
| Open | Visible to all subscribers including extensions. |
| Internal | Visible to authenticated daemons only. This is the default. |
| ProfileScoped | Visible only to daemons holding the current profile's security context. |
| SecretsOnly | Visible only to the secrets daemon. |
Because SecurityLevel derives PartialOrd and Ord, clearance
comparisons use standard Rust ordering:
Open < Internal < ProfileScoped < SecretsOnly.
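Deriving `Ord` on a fieldless enum gives exactly this ordering from declaration order, so clearance checks reduce to ordinary comparisons. A sketch (variant names from the table above; the `derive` list is an assumption about the real definition):

```rust
/// Declaration order drives the derived Ord: later variants compare greater.
#[derive(Debug, Clone, Copy, PartialEq, Eq, PartialOrd, Ord)]
enum SecurityLevel {
    Open,
    Internal,
    ProfileScoped,
    SecretsOnly,
}

fn main() {
    assert!(SecurityLevel::Open < SecurityLevel::Internal);
    assert!(SecurityLevel::Internal < SecurityLevel::ProfileScoped);
    assert!(SecurityLevel::ProfileScoped < SecurityLevel::SecretsOnly);

    // A subscriber receives a message only if its clearance meets the level.
    let clearance = SecurityLevel::Internal;
    assert!(clearance >= SecurityLevel::Open);       // delivered
    assert!(!(clearance >= SecurityLevel::SecretsOnly)); // skipped
}
```

Because the discriminant order is the wire-visible semantics, inserting a new level between existing variants would silently change every comparison; like the event enum, the ordering is effectively append-only.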
ClearanceRegistry
ClearanceRegistry (registry.rs) maps X25519 static public keys
([u8; 32]) to DaemonClearance entries:
```rust
pub struct DaemonClearance {
    pub name: String,
    pub security_level: SecurityLevel,
    pub generation: u64,
}
```
The generation counter increments on every key change (rotation or
crash-revocation). It is used by two-phase rotation to detect concurrent
revocations.
The registry is populated by daemon-profile at startup from per-daemon
keypairs. It is wrapped in RwLock<ClearanceRegistry> inside
ServerState to allow runtime mutation.
After the Noise IK handshake, the server extracts the client’s static
public key via NoiseTransport::remote_static() (which calls
TransportState::get_remote_static()). The Noise IK pattern guarantees
the remote static key is available after handshake. The 32-byte key is
looked up in the registry:
- Found: the connection receives the registered name and clearance level.
- Not found: the connection is treated as an ephemeral client with
SecretsOnlyclearance.
The registry supports rotate_key(old, new) (removes old entry, inserts
new with incremented generation), revoke(pubkey) (removes and returns
the entry), register_with_generation (for revoke-then-reregister
flows), and find_by_name (linear scan, acceptable for fewer than 10
daemons).
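The registry semantics described above can be sketched with a plain `HashMap`. This is an illustrative std-only model, not registry.rs: the `level: u8` field stands in for `SecurityLevel`, and error handling is reduced to `Option`.

```rust
use std::collections::HashMap;

#[derive(Clone, Debug)]
struct DaemonClearance { name: String, level: u8, generation: u64 }

struct ClearanceRegistry { entries: HashMap<[u8; 32], DaemonClearance> }

impl ClearanceRegistry {
    /// Atomically (under the caller's RwLock) replace the old key with the
    /// new one, keeping name and clearance but bumping the generation.
    fn rotate_key(&mut self, old: [u8; 32], new: [u8; 32]) -> Option<u64> {
        let mut entry = self.entries.remove(&old)?;
        entry.generation += 1;
        let generation = entry.generation;
        self.entries.insert(new, entry);
        Some(generation)
    }

    /// Linear scan by name: fine for fewer than 10 daemons.
    fn find_by_name(&self, name: &str) -> Option<&[u8; 32]> {
        self.entries.iter().find(|(_, c)| c.name == name).map(|(k, _)| k)
    }
}

fn main() {
    let mut reg = ClearanceRegistry { entries: HashMap::new() };
    let (old, new) = ([1u8; 32], [2u8; 32]);
    reg.entries.insert(old, DaemonClearance { name: "daemon-wm".into(), level: 1, generation: 0 });
    assert_eq!(reg.rotate_key(old, new), Some(1)); // generation incremented
    assert!(reg.entries.get(&old).is_none());      // old key revoked
    assert_eq!(reg.find_by_name("daemon-wm"), Some(&new));
}
```

The generation bump is what lets two-phase rotation distinguish "my rotation landed" from "someone revoked and re-registered this daemon while I was rotating."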
Routing Enforcement
route_frame() enforces two clearance rules:
- Sender clearance: A daemon may only emit messages at or below its own clearance level. If `conn.security_clearance < msg.security_level`, the frame is rejected and an `AccessDenied` response is sent back to the sender.
- Recipient clearance: When broadcasting, the server skips subscribers whose `security_clearance` is below the message's `security_level`.
Sender Identity Verification
On the first message from a connection, route_frame() records the
self-declared DaemonId. Subsequent messages must use the same
DaemonId. A change mid-session is treated as an impersonation attempt:
the frame is dropped and an AccessDenied response is returned.
The server stamps verified_sender_name onto every routed message by
re-encoding it after registry lookup. If the connection’s
trust_snapshot field is not set on the message, the server also stamps
the connection-level TrustVector. This re-encode adds serialization
overhead on every routed frame, but for a local IPC bus with fewer than
10 daemons the cost is negligible (microseconds per frame).
Ephemeral Clients
Clients whose static public key is not in the ClearanceRegistry
receive SecurityLevel::SecretsOnly clearance. This applies to the
sesame CLI and any other transient tool.
Ephemeral clients are still authenticated: the same-UID check and Noise
IK handshake both apply. They simply lack a pre-registered identity in
the registry. The audit log records these connections as
ephemeral-client-accepted events with the client’s X25519 public key
and PID/UID.
Key Management
Key generation, persistence, and tamper detection are implemented in
noise_keys.rs.
Keypair Generation
generate_keypair() produces an X25519 static keypair via
snow::Builder::generate_keypair(). Both the public and private keys
are 32 bytes. The returned ZeroizingKeypair wrapper guarantees private
key zeroization on drop (including during panics), since snow::Keypair
has no Drop implementation. ZeroizingKeypair::into_inner() transfers
ownership using mem::take to zero the wrapper’s copy.
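The drop-zeroization pattern can be shown with std alone. This sketch is not `ZeroizingKeypair` itself: the real code wraps a `snow::Keypair` and uses the `zeroize` crate, which additionally guards against compiler reordering; here `ptr::write_volatile` stands in to keep the wipe from being optimized away.

```rust
/// Std-only sketch of a wrapper that wipes its key bytes on drop,
/// including drops that happen during a panic unwind.
struct ZeroizingKey(Vec<u8>);

impl Drop for ZeroizingKey {
    fn drop(&mut self) {
        for b in self.0.iter_mut() {
            // Volatile write: the compiler may not elide the store.
            unsafe { std::ptr::write_volatile(b, 0) };
        }
    }
}

impl ZeroizingKey {
    /// Transfer ownership out; mem::take leaves the wrapper holding an
    /// empty buffer, so the Drop impl has nothing left to leak or wipe.
    fn into_inner(mut self) -> Vec<u8> {
        std::mem::take(&mut self.0)
    }
}

fn main() {
    let key = ZeroizingKey(vec![0xAB; 32]);
    let inner = key.into_inner(); // wrapper drops here with an empty Vec
    assert_eq!(inner.len(), 32);
}
```

`mem::take` is the idiomatic way to move a field out of a type that implements `Drop`, since a plain partial move out of such a type is rejected by the compiler.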
Filesystem Layout
Keys are stored under $XDG_RUNTIME_DIR/pds/:
| File | Permissions | Content |
|---|---|---|
| bus.pub | 0644 | Bus server X25519 public key (32 bytes). |
| bus.key | 0600 | Bus server private key (32 bytes). |
| bus.checksum | default | BLAKE3 keyed hash (32 bytes). |
| keys/<daemon>.pub | 0644 | Per-daemon public key (32 bytes). |
| keys/<daemon>.key | 0600 | Per-daemon private key (32 bytes). |
| keys/<daemon>.checksum | default | Per-daemon BLAKE3 keyed hash (32 bytes). |
The keys/ directory is set to mode 0700 to prevent local users from
enumerating registered daemons.
Atomic Writes
Private keys are written atomically: the key is written to a .tmp file
with 0600 permissions set at open time via OpenOptionsExt::mode,
fsynced, then renamed to the final path. This prevents a window where
the key file exists with default (permissive) permissions. The write is
performed inside tokio::task::spawn_blocking to avoid blocking the
async runtime.
Tamper Detection Checksums
Each keypair has an accompanying .checksum file containing
blake3::keyed_hash(public_key, private_key) – a BLAKE3 keyed hash
using the 32-byte public key as the key and the private key as the data.
On read, the checksum is recomputed and compared to the stored value. A
mismatch produces a TAMPER DETECTED error with instructions to delete
the affected files and restart daemon-profile.
This detects partial corruption or partial tampering (e.g., private key replaced but checksum file untouched). It does not prevent an attacker with full filesystem write access from replacing all three files (private key, public key, checksum) with a self-consistent set. That threat model requires a root-of-trust outside the filesystem such as TPM-backed attestation.
Missing checksum files (from older installations) produce a warning rather than an error, for backward compatibility.
Key Rotation
The ClearanceRegistry supports runtime key rotation via
rotate_key(old_pubkey, new_pubkey), which atomically removes the old
entry and inserts the new one with the same name and clearance level but
an incremented generation counter.
The rotation protocol uses KeyRotationPending and
KeyRotationComplete events:
- `daemon-profile` generates a new keypair for the target daemon, writes it to disk, and broadcasts `KeyRotationPending` with the new public key and a grace period.
- The target daemon calls `BusClient::handle_key_rotation`, which reads the new keypair from disk, verifies the announced public key matches what is on disk (detecting tampering), reconnects to the bus with the new key, and re-announces via `DaemonStarted`.
- On reconnection, if the server detects a `DaemonStarted` from a verified name that already has an active connection, it evicts the stale old connection and registers the new one in `name_to_conn`.
connect_with_keypair_retry supports crash-restart scenarios where
daemon-profile may have regenerated a daemon’s keypair. Each retry
re-reads the keypair from disk with exponential backoff.
Request-Response Correlation
The bus supports three message routing patterns.
Request-Response (Unicast Reply)
When a message arrives without a correlation_id, route_frame()
records (msg_id -> sender_conn_id) in the pending_requests table.
The message is then broadcast to eligible subscribers. When a response
arrives (identified by having a correlation_id), the server removes the
matching entry from pending_requests and delivers the response only to
the originating connection.
On the client side, BusClient::request() creates a message, registers
a oneshot::channel waiter keyed by msg_id, sends the message, and
awaits the response with a caller-specified timeout. If the timeout
expires, the waiter is cleaned up and an error is returned.
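The server-side correlation table reduces to a map from request ID to originating connection. A std-only sketch (the real table is keyed by UUID v7 message IDs; `u64` stands in here, and names are illustrative):

```rust
use std::collections::HashMap;

/// msg_id -> originating connection ID, so a later response can be
/// unicast back instead of broadcast.
struct PendingRequests { table: HashMap<u64, u64> }

impl PendingRequests {
    /// A frame without a correlation_id is a request: remember who sent it.
    fn record_request(&mut self, msg_id: u64, sender_conn_id: u64) {
        self.table.insert(msg_id, sender_conn_id);
    }

    /// A frame with a correlation_id is a response: removing the entry
    /// yields the single connection the reply is delivered to.
    fn resolve_response(&mut self, correlation_id: u64) -> Option<u64> {
        self.table.remove(&correlation_id)
    }
}

fn main() {
    let mut pending = PendingRequests { table: HashMap::new() };
    pending.record_request(42, 7);
    assert_eq!(pending.resolve_response(42), Some(7));
    assert_eq!(pending.resolve_response(42), None); // delivered exactly once
}
```

Using `remove` rather than `get` makes delivery exactly-once and keeps the table from growing with completed requests.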
Confirmed RPC
The server provides
register_confirmation(correlation_id, mpsc::Sender), which returns an
RAII ConfirmationGuard. When a correlated response matching the
registered correlation_id arrives at route_frame(), the raw frame is
sent to the confirmation channel instead of (or in addition to) the
normal routing path. The ConfirmationGuard deregisters the route on
drop, preventing stale entries from accumulating if the caller times out
or encounters an error.
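The RAII deregistration idea can be sketched without the async machinery. This model is an assumption-laden simplification: the real guard holds an `mpsc::Sender` route inside shared server state; here a `Rc<RefCell<HashSet>>` of registered correlation IDs stands in.

```rust
use std::cell::RefCell;
use std::collections::HashSet;
use std::rc::Rc;

type Routes = Rc<RefCell<HashSet<u64>>>;

/// Guard that removes its confirmation route when dropped, so a caller
/// that times out or errors cannot leave a stale entry behind.
struct ConfirmationGuard { routes: Routes, correlation_id: u64 }

impl Drop for ConfirmationGuard {
    fn drop(&mut self) {
        self.routes.borrow_mut().remove(&self.correlation_id);
    }
}

fn register_confirmation(routes: &Routes, correlation_id: u64) -> ConfirmationGuard {
    routes.borrow_mut().insert(correlation_id);
    ConfirmationGuard { routes: Rc::clone(routes), correlation_id }
}

fn main() {
    let routes: Routes = Rc::new(RefCell::new(HashSet::new()));
    {
        let _guard = register_confirmation(&routes, 99);
        assert!(routes.borrow().contains(&99)); // route active while guard lives
    } // guard dropped here, e.g. when the caller's timeout fires
    assert!(!routes.borrow().contains(&99));    // route cleaned up automatically
}
```

Tying cleanup to `Drop` means every exit path, including `?` early returns and timeouts, deregisters the route without explicit bookkeeping.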
Pub-Sub Broadcast
Messages without a correlation_id that are not responses are broadcast
to all connected subscribers whose security_clearance meets or exceeds
the message’s security_level. The sender’s own connection is excluded
to prevent feedback loops. The same echo-suppression applies to
BusServer::publish() for in-process subscribers (it decodes the frame
to extract the sender DaemonId and skips matching connections).
Named Unicast
The server maintains a name_to_conn: HashMap<String, u64> mapping,
populated when route_frame() processes DaemonStarted events from
connections with a verified_sender_name.
send_to_named(daemon_name, frame) resolves the daemon name to a
connection ID for O(1) unicast delivery without broadcasting.
Socket Path Resolution
socket_path() in transport.rs resolves the platform-appropriate
socket path:
| Platform | Path |
|---|---|
| Linux | $XDG_RUNTIME_DIR/pds/bus.sock |
| macOS | ~/Library/Application Support/pds/bus.sock |
| Windows | \\.\pipe\pds\bus |
On Linux, XDG_RUNTIME_DIR must be set; its absence is a fatal error.
Socket Permissions
The bus server applies defense-in-depth permissions on bind:
- The socket file is set to mode 0700.
- The parent directory is set to mode 0700.
The real security boundary is UCred UID validation (the same-UID check
in the accept loop), but restrictive filesystem permissions harden
against misconfigured XDG_RUNTIME_DIR permissions.
See Also
- Protocol Evolution – forward compatibility and wire versioning
- Memory Protection – zeroization and secret handling
Protocol Evolution
This page documents how the Open Sesame IPC protocol handles versioning, forward compatibility, and the addition of new event types without breaking existing daemons.
EventKind and Unknown Variant Deserialization
The protocol schema is defined by the EventKind enum in
core-types/src/events.rs. This enum is marked #[non_exhaustive] and
contains a catch-all variant:
```rust
#[derive(Clone, Serialize, Deserialize)]
#[non_exhaustive]
pub enum EventKind {
    // ... all named variants ...
    #[serde(other)]
    Unknown,
}
```
The #[serde(other)] attribute on the Unknown variant is the
forward-compatibility mechanism. When a daemon receives a
postcard-encoded EventKind with a variant discriminant it does not
recognize (because the sender is running a newer version of the code),
serde deserializes it as EventKind::Unknown instead of returning a
deserialization error.
This means a daemon compiled against an older version of core-types
can receive messages containing event variants that did not exist when it
was compiled. The message deserializes successfully; the daemon sees
EventKind::Unknown and can choose to ignore it, log it, or pass it
through.
The #[non_exhaustive] attribute enforces at compile time that all
match arms on EventKind must include a wildcard or Unknown branch.
This prevents new variants from causing compile errors in downstream
crates that have not been updated.
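From a consumer's perspective the pattern looks like the following sketch. The variant names are hypothetical examples, and serde is omitted so the snippet stays self-contained; only the match shape is the point.

```rust
/// Downstream code must match non-exhaustively, which is also where
/// newer-than-us variants (decoded on the wire as Unknown) land.
#[non_exhaustive]
#[derive(Debug)]
enum EventKind {
    ProfileActivated,   // hypothetical example variant
    ClipboardCaptured,  // hypothetical example variant
    Unknown,            // the #[serde(other)] target on the wire
}

fn handle(event: EventKind) -> &'static str {
    match event {
        EventKind::ProfileActivated => "switch profile",
        EventKind::ClipboardCaptured => "store clip",
        // Wildcard arm: future variants and Unknown are ignored, not errors.
        _ => "ignored",
    }
}

fn main() {
    assert_eq!(handle(EventKind::ProfileActivated), "switch profile");
    assert_eq!(handle(EventKind::Unknown), "ignored");
}
```

The two mechanisms compose: `#[serde(other)]` keeps deserialization from failing on unknown discriminants, and `#[non_exhaustive]` keeps downstream matches from failing to compile when variants are appended.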
Postcard Encoding Properties
The IPC bus uses postcard (a #[no_std]-compatible, compact binary
serde format) for all serialization. Several properties of postcard’s
encoding are relevant to protocol evolution.
Externally-Tagged Enums
EventKind uses serde’s default externally-tagged representation.
Postcard encodes externally-tagged enums as a varint discriminant
followed by the variant’s fields in declaration order. The events.rs
source contains an explicit note:
Externally-tagged enum (serde default) for postcard wire compatibility. Postcard does not support
#[serde(tag = "...", content = "...")].
This means:
- Each variant is identified by its position (index) in the enum declaration.
- Adding new variants at the end of the enum produces new discriminant
values that older decoders do not recognize, triggering
#[serde(other)]deserialization toUnknown. - Reordering existing variants would change their discriminants and break all existing decoders. Variant ordering must be append-only.
Positional Field Encoding
Postcard encodes struct fields positionally (by declaration order), not
by name. The Message<T> envelope in message.rs contains a comment
making this explicit:
No `skip_serializing_if` – postcard uses positional encoding, so the field must always be present in the wire format for decode compatibility.
This means:
- Every field in `Message<T>` must always be serialized, even if its value is `None`. Omitting an `Option` field via `skip_serializing_if` would shift all subsequent fields by one position, causing decode failures.
- New fields can only be appended to the end of the struct. The v3 fields (`origin_installation`, `agent_id`, `trust_snapshot`) are explicitly commented as "v3 fields (appended for positional encoding safety)."
- Removing or reordering existing fields is a breaking change.
Implications for Field Addition
When a v3 sender transmits a message with the three new trailing fields
to a v2 receiver, the v2 decoder reads only the fields it knows about
and ignores trailing bytes. Postcard’s from_bytes does not require that
all input bytes be consumed – it reads fields sequentially and stops
when the struct is fully populated. This means appending new Option
fields to Message<T> is backward-compatible as long as older decoders
were compiled without those fields.
When a v2 sender transmits a message missing the v3 trailing fields to
a v3 receiver, postcard::from_bytes encounters end-of-input when
trying to decode the missing fields. In practice, the codebase treats
wire version bumps as requiring atomic deployment of all binaries (see
the wire version section below).
Wire Version Field
The Message<T> struct contains a wire_version: u8 field, always
serialized first. The current value is 3, defined as
pub const WIRE_VERSION: u8 = 3 in message.rs.
The source code documents the wire version contract:
WIRE FORMAT CONTRACT:
v2 fields: `wire_version`, `msg_id`, `correlation_id`, `sender`, `timestamp`, `payload`, `security_level`, `verified_sender_name`.
All v2 binaries must be deployed atomically (single compilation unit). Adding fields requires incrementing this constant and updating the decode path to handle both old and new versions during rolling upgrades.
What the Wire Version Encodes
The wire version tracks changes to the Message<T> envelope
structure – specifically, which fields are present and in what order.
It does not track changes to EventKind variants (those are handled by
#[serde(other)]).
- v2: 8 fields (`wire_version` through `verified_sender_name`)
- v3: 11 fields (adds `origin_installation`, `agent_id`, `trust_snapshot`)
Version Negotiation
The protocol does not perform explicit version negotiation. There is no
handshake phase where client and server agree on a wire version. Instead,
Message::new() always stamps the current WIRE_VERSION, and the source
code states that all binaries must be deployed atomically when the wire
version changes.
A receiver can inspect msg.wire_version after deserialization to
determine which generation of the protocol the sender used. The current
codebase does not implement version-conditional decode logic; all daemons
are expected to be at the same wire version. The comment about “updating
the decode path to handle both old and new versions during rolling
upgrades” describes an intended future capability, not current behavior.
How New Event Variants Are Added
Adding a new EventKind variant follows this procedure:
- Append the new variant to the end of the `EventKind` enum in `core-types/src/events.rs`. Inserting it in the middle would change the discriminant indices of all subsequent variants.
- Add a Debug arm in the `impl_event_debug!` macro invocation at the bottom of `events.rs`. The macro enforces exhaustiveness – omitting a variant is a compile error. Sensitive variants (containing passwords or secret values) go in the sensitive section with explicit REDACTED annotations. All others go in the transparent section.
- No wire version bump is needed for new `EventKind` variants. The `Unknown` catch-all handles unrecognized discriminants at the `EventKind` level. Wire version bumps are only needed for changes to the `Message<T>` envelope structure.
Daemons compiled against the old core-types deserialize the new
variant as EventKind::Unknown. Daemons compiled against the new
core-types see the fully typed variant. Both can coexist on the same
bus.
How New Message Fields Are Added
Adding a new field to Message<T> is a more disruptive change:
- Append the new field to the end of the `Message<T>` struct. Postcard's positional encoding means insertion or reordering breaks all existing decoders.
- Increment `WIRE_VERSION` to signal the structural change.
- Deploy all binaries atomically. The codebase does not currently implement multi-version decode logic. All daemons must be rebuilt and redeployed together.
- Update `MessageContext` if the new field should be populated by the sender (as was done for `origin_installation`, `agent_id`, and `trust_snapshot` in v3).
- Do not use `skip_serializing_if` on the new field. The field must always be present on the wire for positional decode compatibility.
Practical Constraints
Variant Stability
The EventKind enum currently contains over 80 variants spanning window
management, profile lifecycle, clipboard, input, secrets RPC, launcher
RPC, agent lifecycle, authorization, federation, device posture,
multi-factor auth, and bus-level errors. Each variant’s position in the
enum declaration is its wire discriminant. Removing a variant or changing
its position is a breaking wire change.
Enum Variant Field Changes
Postcard encodes variant fields positionally, the same as struct fields. Adding a field to an existing variant, removing a field, or reordering fields within a variant is a breaking wire change. New fields for existing functionality should be introduced as new variants rather than modifications to existing ones.
Sensitivity Redaction
The Debug implementation for EventKind uses a compile-time
exhaustive macro (impl_event_debug!) that separates sensitive variants
from transparent ones. Sensitive variants (SecretGetResponse,
SecretSet, UnlockRequest, SshUnlockRequest, FactorSubmit) have
their secret-bearing fields rendered as [REDACTED; N bytes] in debug
output. Adding a new variant that carries secret material requires
placing it in the sensitive section of the macro.
Forward Compatibility Boundaries
The #[serde(other)] mechanism provides forward compatibility only for
unknown enum variants. It does not help with:
- Unknown fields within a known variant (postcard has no field-skipping mechanism for positional encoding)
- Structural changes to the `Message<T>` envelope
- Changes to the framing layer (length-prefix format, encryption chunking)
- Changes to the Noise handshake parameters
These categories of change require coordinated deployment of all binaries.
See Also
- IPC Bus Protocol – transport, framing, and routing details
Sandbox Model
Open Sesame enforces a three-layer process containment model on Linux: Landlock filesystem sandboxing, seccomp-bpf syscall filtering, and systemd unit hardening. Each daemon receives a tailored sandbox that grants the minimum privileges required for its function. Sandbox application is mandatory — every daemon treats sandbox failure as fatal and refuses to start unsandboxed.
Process Hardening
Before any sandbox is applied, every daemon calls harden_process()
(platform-linux/src/security.rs:14). This function performs two
operations:
- `PR_SET_DUMPABLE(0)` — prevents ptrace attachment by non-root processes and prevents core dumps from containing process memory (security.rs:19).
- `RLIMIT_CORE(0,0)` — sets both soft and hard core dump limits to zero, preventing core files even if dumpable is re-enabled by setuid (security.rs:32-36).
Resource limits are applied via apply_resource_limits()
(security.rs:66). All daemons set RLIMIT_NOFILE to 4096. The
memlock_bytes parameter is set to 0 at the application level; systemd
units provide the actual LimitMEMLOCK=64M constraint.
These hardening calls log errors but do not abort. A daemon still
proceeds to Landlock and seccomp even if prctl or setrlimit fails.
The Landlock and seccomp layers are the hard security boundary.
Landlock Filesystem Sandbox
Landlock provides unprivileged filesystem sandboxing on Linux
kernels >= 5.13. The shared implementation lives in
platform-linux/src/sandbox.rs. Each daemon defines its own ruleset in
a per-daemon apply_sandbox() function.
ABI Level and Enforcement Policy
The sandbox targets Landlock ABI V6 (sandbox.rs:77), which covers
filesystem access (AccessFs), network access (AccessNet), and scope
restrictions (abstract Unix sockets and cross-process signals via
Scope). The Ruleset is created with
handle_access(AccessFs::from_all(abi)) and
handle_access(AccessNet::from_all(abi)) to handle all access types at
the V6 level (sandbox.rs:85-96).
Partial enforcement is treated as a fatal error. If the kernel ABI
cannot fully enforce the requested rules, apply_landlock() returns an
error and the daemon aborts (sandbox.rs:157-161). There is no graceful
degradation path.
ENOENT Handling
Paths that do not exist at sandbox application time are silently skipped
(sandbox.rs:114-120). This is strictly more restrictive than granting
the path, because Landlock denies access to any path not present in the
ruleset. This design handles the case where directories have not yet
been created — for example, the vaults directory before sesame init
runs, or $XDG_RUNTIME_DIR/pds/ before daemon-profile creates it.
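The skip-on-ENOENT policy amounts to treating `NotFound` as "no rule" while propagating every other error. A std-only sketch (the real code opens Landlock `PathFd` handles; `fs::File::open` stands in here, and the function name is hypothetical):

```rust
use std::{fs, io};

/// Try to open a ruleset target path. Missing paths are skipped, which is
/// safe because Landlock denies anything not present in the ruleset.
fn open_rule_target(path: &str) -> io::Result<Option<fs::File>> {
    match fs::File::open(path) {
        Ok(fd) => Ok(Some(fd)),
        Err(e) if e.kind() == io::ErrorKind::NotFound => Ok(None), // skip rule
        Err(e) => Err(e), // permission errors etc. remain fatal
    }
}

fn main() -> io::Result<()> {
    // A path that does not exist yet (e.g. vaults/ before `sesame init`)
    // simply contributes no rule instead of aborting sandbox setup.
    assert!(open_rule_target("/definitely/not/a/real/pds/path")?.is_none());
    Ok(())
}
```

The asymmetry is deliberate: skipping a missing path tightens the sandbox, while ignoring, say, an EACCES could silently hide a misconfiguration, so only `NotFound` is tolerated.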
Nix Symlink Resolution
On NixOS, configuration files are symlinks into /nix/store. Each
daemon calls core_config::resolve_config_real_dirs() before applying
Landlock to discover the real filesystem paths behind config symlinks.
These resolved paths are added as read-only Landlock rules so that
config hot-reload can follow symlinks after the sandbox is applied.
daemon-wm additionally grants blanket read-only access to /nix/store
(daemon-wm/src/sandbox.rs:68-69) for shared libraries, GLib schemas,
locale data, and XKB keyboard rules.
daemon-profile creates its Landlock target directories if they do not
exist before opening PathFd handles
(daemon-profile/src/sandbox.rs:38-42). This handles the race condition
where systemd restarts daemon-profile after a
sesame init --wipe-reset-destroy-all-data before the directories are
recreated.
Non-Directory Inode Handling
The implementation performs fstat() on each PathFd after opening it
to detect whether the inode is a directory or a non-directory file
(sandbox.rs:130-136). For non-directory inodes (sockets, regular
files), directory-only access flags (ReadDir, MakeDir, etc.) are
masked off using AccessFs::from_file(abi). This prevents the Landlock
crate’s PathBeneath::try_compat_inner from reporting
PartiallyEnforced on non-directory fds.
The FsAccess::ReadWriteFile variant (sandbox.rs:22-24) exists
specifically for non-directory paths such as Unix domain sockets,
granting file-level read-write access without directory-only flags.
Scope Restrictions
Two scope modes are available via the LandlockScope enum
(sandbox.rs:54-60):
- `Full` — blocks both abstract Unix sockets and cross-process signals. Uses `Scope::from_all(abi)`, which on ABI V6 includes `AbstractUnixSocket` and `Signal`.
- `SignalOnly` — blocks cross-process signals only, permitting abstract Unix sockets. Uses `Scope::Signal` alone.
Daemons that need D-Bus or Wayland communication via abstract Unix
sockets use SignalOnly. Daemons with no such requirement use Full.
Per-Daemon Filesystem Rules
daemon-profile
Source: daemon-profile/src/sandbox.rs:29. Scope: SignalOnly
(needs D-Bus).
| Path | Access | Purpose |
|---|---|---|
| ~/.config/pds/ | ReadWrite | Audit log, config, vault metadata |
| $XDG_RUNTIME_DIR/pds/ | ReadWrite | IPC bus socket, keys, runtime state |
| $NOTIFY_SOCKET | ReadWriteFile | systemd sd_notify keepalives |
| $SSH_AUTH_SOCK + canonicalized target + parent | ReadWriteFile / ReadOnly | SSH agent auto-unlock |
| ~/.ssh/ + agent.sock + canonicalized target + parent | ReadOnly / ReadWriteFile | Stable SSH agent symlink fallback |
| Resolved config symlink targets | ReadOnly | Config hot-reload on NixOS |
daemon-profile is the only daemon that hosts the IPC bus server socket.
It requires ReadWrite on the entire $XDG_RUNTIME_DIR/pds/ directory
because it creates the bus.sock and bus.pub files at startup.
SSH agent socket handling resolves symlinks to their target inodes. On
Konductor VMs, ~/.ssh/agent.sock is a stable symlink to a per-session
/tmp/ssh-XXXX/agent.PID path. Landlock resolves symlinks to their
target inodes, so the implementation grants access to the symlink path,
the canonicalized target, and the parent directory of the target for
path traversal (daemon-profile/src/sandbox.rs:81-149).
daemon-secrets
Source: daemon-secrets/src/sandbox.rs:7. Scope: Full (no abstract
Unix sockets needed).
| Path | Access | Purpose |
|---|---|---|
~/.config/pds/ | ReadWrite | Vault SQLCipher databases, salt storage |
$XDG_RUNTIME_DIR/pds/keys/ | ReadOnly | IPC client keypair |
$XDG_RUNTIME_DIR/pds/bus.pub | ReadOnly | Bus server public key |
$XDG_RUNTIME_DIR/pds/bus.sock | ReadWriteFile | IPC bus socket |
$XDG_RUNTIME_DIR/bus | ReadWriteFile | D-Bus filesystem socket |
$NOTIFY_SOCKET | ReadWriteFile | systemd sd_notify keepalives |
| Resolved config symlink targets | ReadOnly | Config hot-reload on NixOS |
daemon-secrets has the narrowest Landlock ruleset of all daemons that
handle secret material. It uses LandlockScope::Full to block abstract
Unix sockets. The D-Bus filesystem socket at $XDG_RUNTIME_DIR/bus is
granted as a ReadWriteFile rule because it is a non-directory inode
(daemon-secrets/src/sandbox.rs:44-47).
daemon-wm
Source: daemon-wm/src/sandbox.rs:8. Scope: SignalOnly (Wayland
uses abstract sockets).
| Path | Access | Purpose |
|---|---|---|
$XDG_RUNTIME_DIR/pds/keys/ | ReadOnly | IPC client keypair |
$XDG_RUNTIME_DIR/pds/bus.pub | ReadOnly | Bus server public key |
$XDG_RUNTIME_DIR/pds/bus.sock | ReadWriteFile | IPC bus socket |
$WAYLAND_DISPLAY socket | ReadWriteFile | Wayland compositor protocol |
~/.cache/open-sesame/ | ReadWrite | MRU state, overlay cache |
/etc/fonts | ReadOnly | Fontconfig configuration |
/usr/share/fonts | ReadOnly | System font files |
~/.config/cosmic/ | ReadOnly | COSMIC desktop theme integration |
/nix/store | ReadOnly | Shared libs, schemas, XKB (NixOS) |
/proc | ReadOnly | xdg-desktop-portal PID verification |
/usr/share | ReadOnly | System shared data (fonts, icons, mime, locale) |
/usr/share/X11/xkb | ReadOnly | XKB system rules (non-NixOS) |
~/.local/share/ | ReadOnly | User fonts and theme data |
~/.config/pds/vaults/ | ReadOnly | Salt files and SSH enrollment blobs |
$SSH_AUTH_SOCK + canonicalized paths | ReadWriteFile / ReadOnly | SSH agent auto-unlock |
$NOTIFY_SOCKET | ReadWriteFile | systemd sd_notify keepalives |
| Resolved config symlink targets | ReadOnly | Config hot-reload on NixOS |
daemon-wm has the broadest Landlock ruleset because it renders a Wayland
overlay using SCTK and tiny-skia. It requires access to fonts, theme
data, and system shared resources. GPU/DRI access is intentionally
excluded — rendering uses wl_shm CPU shared memory buffers only
(daemon-wm/src/sandbox.rs:91-93).
daemon-clipboard
Source: daemon-clipboard/src/main.rs:306. Scope: Full.
| Path | Access | Purpose |
|---|---|---|
$XDG_RUNTIME_DIR/pds/keys/ | ReadOnly | IPC client keypair |
$XDG_RUNTIME_DIR/pds/bus.pub | ReadOnly | Bus server public key |
$XDG_RUNTIME_DIR/pds/bus.sock | ReadWriteFile | IPC bus socket |
$WAYLAND_DISPLAY socket | ReadWriteFile | Wayland data-control protocol |
~/.cache/open-sesame/ | ReadWrite | Clipboard history SQLite database |
| Resolved config symlink targets | ReadOnly | Config hot-reload on NixOS |
daemon-input
Source: daemon-input/src/main.rs:319. Scope: Full.
| Path | Access | Purpose |
|---|---|---|
$XDG_RUNTIME_DIR/pds/keys/ | ReadOnly | IPC client keypair |
$XDG_RUNTIME_DIR/pds/bus.pub | ReadOnly | Bus server public key |
$XDG_RUNTIME_DIR/pds/bus.sock | ReadWriteFile | IPC bus socket |
/dev/input | ReadOnly | evdev keyboard device nodes |
/sys/class/input | ReadOnly | evdev device enumeration symlinks |
/sys/devices | ReadOnly | evdev device metadata via sysfs |
| Resolved config symlink targets | ReadOnly | Config hot-reload on NixOS |
daemon-input is the only daemon with access to /dev/input and
/sys/class/input. It reads raw keyboard events via evdev.
daemon-snippets
Source: daemon-snippets/src/main.rs:241. Scope: Full.
| Path | Access | Purpose |
|---|---|---|
$XDG_RUNTIME_DIR/pds/keys/ | ReadOnly | IPC client keypair |
$XDG_RUNTIME_DIR/pds/bus.pub | ReadOnly | Bus server public key |
$XDG_RUNTIME_DIR/pds/bus.sock | ReadWriteFile | IPC bus socket |
~/.config/pds/ | ReadOnly | Config directory (snippet templates) |
| Resolved config symlink targets | ReadOnly | Config hot-reload on NixOS |
daemon-snippets has the narrowest Landlock ruleset of all sandboxed daemons. It requires only IPC bus access and read-only config access.
daemon-launcher
daemon-launcher does not apply Landlock or seccomp. It spawns
arbitrary desktop applications as child processes via fork+exec.
Landlock and seccomp filters inherit across fork+exec and would kill
every spawned application (daemon-launcher/src/main.rs:119-121). The
security boundary for daemon-launcher is IPC bus authentication via
Noise IK. systemd unit hardening provides the process containment layer.
seccomp-bpf Syscall Filtering
The seccomp implementation uses libseccomp to build per-daemon BPF
filters (platform-linux/src/sandbox.rs:259). seccomp is always applied
after Landlock because Landlock setup requires syscalls
(landlock_create_ruleset, landlock_add_rule,
landlock_restrict_self) that the seccomp filter does not permit.
Default Action
The default action for disallowed syscalls is
ScmpAction::KillThread (SECCOMP_RET_KILL_THREAD)
(sandbox.rs:268). This sends SIGSYS to the offending thread rather
than using KillProcess, which would skip the signal handler entirely.
The choice of KillThread over Errno or Log is deliberate —
Errno or Log would allow an attacker to probe for allowed syscalls
(sandbox.rs:256-258).
SIGSYS Handler
A custom SIGSYS signal handler is installed before the seccomp filter
is loaded (sandbox.rs:173-238). The handler is designed to be
async-signal-safe:
- It uses no allocator and makes no heap allocations.
- It extracts the syscall number from siginfo_t at byte offset 24 from the struct base on x86_64 (sandbox.rs:201). This offset corresponds to si_call_addr (8-byte pointer) followed by si_syscall (4-byte int) within the _sigsys union member, which starts at byte offset 16 from the struct base.
- It formats the number into a stack-allocated buffer and writes "SECCOMP VIOLATION: syscall=NNN" to stderr via raw libc::write() on fd 2.
- After logging, it resets SIGSYS to SIG_DFL via libc::signal() and re-raises the signal via libc::raise() (sandbox.rs:226-228).
The handler is registered with SA_SIGINFO | SA_RESETHAND flags
(sandbox.rs:235). SA_RESETHAND ensures the handler fires only
once — subsequent SIGSYS deliveries use the default disposition.
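The allocation-free formatting step described above can be sketched in plain Rust. This is an illustrative stand-in, not the project's handler: the function name, buffer size, and call site are assumptions, and a real handler would pass the resulting bytes to libc::write() on fd 2.

```rust
// Hypothetical sketch: format "SECCOMP VIOLATION: syscall=NNN" into a
// stack buffer with no heap allocation, as an async-signal-safe SIGSYS
// handler must. Names and sizes are illustrative.

const PREFIX: &[u8] = b"SECCOMP VIOLATION: syscall=";

/// Writes the message into `buf` and returns the number of bytes used.
fn format_violation(buf: &mut [u8; 64], mut syscall: u64) -> usize {
    let mut len = 0;
    buf[..PREFIX.len()].copy_from_slice(PREFIX);
    len += PREFIX.len();

    // Render digits into a small scratch array, least significant first.
    let mut digits = [0u8; 20];
    let mut n = 0;
    loop {
        digits[n] = b'0' + (syscall % 10) as u8;
        syscall /= 10;
        n += 1;
        if syscall == 0 {
            break;
        }
    }
    // Copy the digits back in the correct order.
    for i in 0..n {
        buf[len + i] = digits[n - 1 - i];
    }
    len + n
}

fn main() {
    let mut buf = [0u8; 64];
    let len = format_violation(&mut buf, 231);
    assert_eq!(&buf[..len], b"SECCOMP VIOLATION: syscall=231");
    println!("{}", std::str::from_utf8(&buf[..len]).unwrap());
}
```

The same no-allocator discipline applies to every step in the handler: fixed-size stack buffers and raw syscall wrappers only.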
Per-Daemon Syscall Differences
All six sandboxed daemons share a common baseline of approximately 50
syscalls covering I/O basics (read, write, close, openat,
lseek, pread64, fstat, stat, newfstatat, statx, access),
memory management (mmap, mprotect, munmap, madvise, brk),
process/threading (futex, clone3, clone, set_robust_list,
set_tid_address, rseq, sched_getaffinity, prlimit64, prctl,
getpid, gettid, getuid, geteuid, kill), epoll (epoll_wait,
epoll_ctl, epoll_create1, eventfd2, poll, ppoll), timers
(clock_gettime, timer_create, timer_settime, timer_delete),
networking (socket, connect, sendto, recvfrom, recvmsg,
sendmsg, getsockname, getpeername, setsockopt, socketpair,
shutdown, getsockopt), signals (sigaltstack, rt_sigaction,
rt_sigprocmask, rt_sigreturn, tgkill), inotify
(inotify_init1, inotify_add_watch, inotify_rm_watch), and misc
(exit_group, exit, getrandom, memfd_secret, ftruncate,
restart_syscall, pipe2, dup).
The following table lists syscalls that differentiate the daemons:
| Syscall | profile | secrets | wm | clipboard | input | snippets | Purpose |
|---|---|---|---|---|---|---|---|
bind | Y | - | Y | - | - | - | Server socket / Wayland |
listen | Y | - | Y | - | - | - | Server socket / Wayland |
accept4 | Y | - | Y | - | - | - | Server socket / Wayland |
mlock | - | Y | Y | - | - | - | Secret zeroization / SCTK buffers |
munlock | - | Y | - | - | - | - | Secret zeroization |
mlock2 | - | - | Y | - | - | - | SCTK/Wayland runtime |
mremap | - | - | Y | - | - | - | SCTK buffer reallocation |
pwrite64 | - | Y | - | - | - | - | SQLCipher journal writes |
fallocate | - | Y | - | - | - | - | SQLCipher space preallocation |
flock | Y | Y | Y | Y | - | - | Database/file locking |
chmod / fchmod | Y | - | Y | - | - | - | File permission management |
fchown | Y | - | - | - | - | - | IPC socket ownership |
rename | Y | Y | Y | - | - | - | Atomic file replacement |
unlink | Y | Y | Y | - | - | - | File/socket cleanup |
statfs / fstatfs | - | - | Y | - | - | - | Filesystem info (SCTK) |
sched_get_priority_max | - | - | Y | - | - | - | Thread priority (SCTK) |
sysinfo | - | - | Y | - | - | - | System memory info (SCTK) |
memfd_create | Y | - | Y | - | - | - | D-Bus / Wayland shared memory |
nanosleep | Y | Y | Y | - | - | - | Event loop timing |
clock_nanosleep | Y | Y | Y | - | - | - | Event loop timing |
sched_yield | Y | - | Y | - | - | - | Cooperative thread scheduling |
timerfd_create | Y | - | Y | - | - | - | D-Bus / Wayland event loops |
timerfd_settime | Y | - | Y | - | - | - | D-Bus / Wayland event loops |
timerfd_gettime | Y | - | Y | - | - | - | D-Bus / Wayland event loops |
getresuid / getresgid | Y | Y | Y | - | - | - | D-Bus credential passing |
getgid / getegid | Y | Y | Y | - | - | - | D-Bus credential passing |
writev / readv | Y | Y | Y | - | - | - | Scatter/gather I/O |
readlinkat | Y | Y | Y | - | - | - | Symlink resolution |
uname | Y | Y | Y | - | - | - | D-Bus / Wayland runtime |
getcwd | Y | Y | Y | - | - | - | Working directory resolution |
Key observations:
- daemon-secrets uniquely requires mlock/munlock for zeroization of secret material in memory, plus pwrite64 and fallocate for SQLCipher database journal operations.
- daemon-wm has the broadest syscall allowlist (~88 syscalls) due to Wayland/SCTK runtime requirements including mremap, mlock2, statfs/fstatfs, sysinfo, and sched_get_priority_max.
- daemon-profile requires bind/listen/accept4 because it hosts the IPC bus server socket. It also requires fchown for setting socket ownership.
- daemon-input and daemon-snippets have the narrowest allowlists (~57-60 syscalls).
- All sandboxed daemons permit memfd_secret for secure memory allocation and getrandom for cryptographic random number generation.
systemd Unit Hardening
Each daemon runs as a Type=notify systemd user service with
WatchdogSec=30. Service files are located in contrib/systemd/.
Common Directives
All seven daemons share the following systemd hardening:
| Directive | Value | Effect |
|---|---|---|
NoNewPrivileges | yes | Prevents privilege escalation via setuid/setgid binaries |
LimitCORE | 0 | Disables core dumps at the cgroup level |
LimitMEMLOCK | 64M | Caps locked memory at 64 MiB |
Restart | on-failure | Automatic restart on non-zero exit |
RestartSec | 5 | Five-second delay between restarts |
WatchdogSec | 30 | Daemon must call sd_notify(WATCHDOG=1) within 30 seconds |
Per-Daemon systemd Differences
| Directive | profile | secrets | wm | launcher | clipboard | input | snippets |
|---|---|---|---|---|---|---|---|
ProtectHome | read-only | read-only | read-only | - | read-only | read-only | read-only |
ProtectSystem | strict | strict | strict | - | strict | strict | strict |
PrivateNetwork | - | yes | - | - | - | - | - |
ProtectClock | - | - | - | yes | - | - | - |
ProtectKernelTunables | - | - | - | yes | - | - | - |
ProtectKernelModules | - | - | - | yes | - | - | - |
ProtectKernelLogs | - | - | - | yes | - | - | - |
ProtectControlGroups | - | - | - | yes | - | - | - |
LockPersonality | - | - | - | yes | - | - | - |
RestrictSUIDSGID | - | - | - | yes | - | - | - |
SystemCallArchitectures | - | - | - | native | - | - | - |
CapabilityBoundingSet | - | - | - | (empty) | - | - | - |
KillMode | - | - | - | process | - | - | - |
LimitNOFILE | 4096 | 1024 | 4096 | 4096 | 4096 | 4096 | 4096 |
MemoryMax | 128M | 256M | 128M | - | 128M | 128M | 128M |
Notable design decisions:
- daemon-secrets (open-sesame-secrets.service:18) is the only daemon with PrivateNetwork=yes, placing it in its own network namespace with no connectivity. It communicates exclusively via the Unix domain IPC bus socket. It has the lowest LimitNOFILE (1024) but the highest MemoryMax (256M) to accommodate Argon2id, which allocates 19 MiB per key derivation.
- daemon-launcher (open-sesame-launcher.service:17-21) does not set ProtectHome or ProtectSystem because these mount namespace restrictions inherit to child processes spawned via systemd-run --scope. Firefox, for example, writes to /run/user/1000/dconf/ and fails with “Read-only file system” when ProtectSystem=strict is applied to the launcher. Instead, daemon-launcher uses kernel control plane protections and an empty CapabilityBoundingSet to drop all Linux capabilities. KillMode=process ensures spawned applications survive launcher restarts.
- ReadWritePaths vary per daemon: daemon-profile and daemon-secrets get %t/pds and %h/.config/pds; daemon-wm and daemon-clipboard get %h/.cache/open-sesame; daemon-wm additionally gets %h/.cache/fontconfig.
Sandbox Application Order
The sandbox layers are applied in a strict sequence during daemon startup:
1. harden_process() — PR_SET_DUMPABLE(0), RLIMIT_CORE(0,0)
2. apply_resource_limits() — RLIMIT_NOFILE, RLIMIT_MEMLOCK
3. Pre-sandbox I/O — open file descriptors, connect to IPC bus, read keypairs, scan desktop entries (daemon-launcher), open evdev devices (daemon-input)
4. init_secure_memory() — probe memfd_secret before seccomp locks down syscalls
5. apply_landlock() — filesystem containment (implicitly sets PR_SET_NO_NEW_PRIVS via landlock_restrict_self)
6. apply_seccomp() — syscall filtering (must follow Landlock)
This ordering is critical. Landlock setup requires the
landlock_create_ruleset, landlock_add_rule, and
landlock_restrict_self syscalls, which are not in any daemon’s seccomp
allowlist. The IPC bus connection must be established before Landlock
restricts filesystem access, because the daemon reads its keypair from
$XDG_RUNTIME_DIR/pds/keys/.
Daemon Sandbox Capability Matrix
| Daemon | harden_process | Landlock | seccomp | Landlock Scope | PrivateNetwork | ProtectSystem | Approx. Syscalls |
|---|---|---|---|---|---|---|---|
| daemon-profile | Y | Y | Y | SignalOnly | - | strict | ~80 |
| daemon-secrets | Y | Y | Y | Full | Y | strict | ~72 |
| daemon-wm | Y | Y | Y | SignalOnly | - | strict | ~88 |
| daemon-launcher | Y | - | - | N/A | - | - | N/A |
| daemon-clipboard | Y | Y | Y | Full | - | strict | ~60 |
| daemon-input | Y | Y | Y | Full | - | strict | ~60 |
| daemon-snippets | Y | Y | Y | Full | - | strict | ~57 |
Profile Trust Model
Trust profiles are the fundamental isolation boundary in Open Sesame. Every scoped resource – secrets, clipboard content, frecency data, snippets, audit entries, and launch configurations – is partitioned by trust profile.
TrustProfileName Validation
The TrustProfileName type in core-types/src/profile.rs enforces
strict validation at construction time. It is impossible to hold an
invalid TrustProfileName value after construction.
Invariants:
- Non-empty.
- Maximum 64 bytes.
- Must start with an ASCII alphanumeric character: [a-zA-Z0-9].
- Body characters restricted to: [a-zA-Z0-9_-].
- Not . or .. (path traversal prevention).
- No whitespace, path separators, or null bytes.
Invalid characters produce a detailed error message including the byte
value and position:
"trust profile name contains invalid byte 0x{b:02x} at position {i}".
These rules make the name safe for direct use in filesystem paths without additional sanitization.
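The invariants above can be expressed as a small validation function. The following is an illustrative reimplementation, not the code in core-types/src/profile.rs; only the invalid-byte error message follows the documented format, and the other messages are hypothetical.

```rust
// Sketch of the TrustProfileName invariants described above.
// The real type validates at construction and cannot hold an invalid value.

fn validate_profile_name(name: &str) -> Result<(), String> {
    if name.is_empty() {
        return Err("trust profile name is empty".into());
    }
    if name.len() > 64 {
        return Err("trust profile name exceeds 64 bytes".into());
    }
    if name == "." || name == ".." {
        return Err("trust profile name must not be '.' or '..'".into());
    }
    let bytes = name.as_bytes();
    if !bytes[0].is_ascii_alphanumeric() {
        return Err("trust profile name must start with [a-zA-Z0-9]".into());
    }
    for (i, &b) in bytes.iter().enumerate() {
        // Covers whitespace, path separators, and null bytes as well.
        if !(b.is_ascii_alphanumeric() || b == b'_' || b == b'-') {
            return Err(format!(
                "trust profile name contains invalid byte 0x{b:02x} at position {i}"
            ));
        }
    }
    Ok(())
}

fn main() {
    assert!(validate_profile_name("work").is_ok());
    assert!(validate_profile_name("corp-vpn_2").is_ok());
    assert!(validate_profile_name("").is_err());
    assert!(validate_profile_name("../etc").is_err()); // traversal attempt
    assert!(validate_profile_name("has space").is_err()); // whitespace
    assert!(validate_profile_name("-leading").is_err()); // bad first char
    println!("all profile-name checks passed");
}
```

Because every byte is restricted to a filesystem-safe alphabet, a validated name can be spliced directly into the vault and frecency path patterns below.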
Filesystem mappings:
| Resource | Path pattern |
|---|---|
| SQLCipher vault | vaults/{name}.db |
| BLAKE3 KDF context | "pds v2 vault-key {name}" |
| Frecency database | launcher/{name}.frecency.db |
TrustProfileName implements TryFrom<String> and TryFrom<&str>,
returning Error::Validation on failure. It serializes transparently
(via #[serde(transparent)]) as a plain string and deserializes with
validation. All boundary-facing code – CLI argument parsers, IPC
message handlers, config file loaders – validates at entry.
Profile Scoping
Each trust profile isolates the following resources:
| Resource | Isolation mechanism |
|---|---|
| Secrets | Per-profile SQLCipher vault file. Vault keys are derived via BLAKE3 KDF with profile-specific context strings. |
| Clipboard | Cross-profile clipboard access is denied and logged as AuditAction::IsolationViolationAttempt. |
| Frecency | Per-profile SQLite database for launch frecency ranking. Profile switch in daemon-launcher triggers engine.switch_profile(). |
| Extensions | Extension data is scoped per profile via IsolatedResource::Extensions. |
| Window list | Window management state is scoped per profile via IsolatedResource::WindowList. |
| Audit | Audit entries record which profile was involved in each operation via ProfileId fields. |
| Launch profiles | Launch profile definitions live under profiles.<name>.launch_profiles in configuration. |
The IsolatedResource enum in core-profile/src/lib.rs defines the
five isolatable resource types: Clipboard, Secrets, Frecency,
Extensions, WindowList. It is serialized with
#[serde(rename_all = "lowercase")] for configuration and audit log
entries.
Profile State Machine
Each profile has an independent lifecycle state, represented by the
ProfileState enum:
- Inactive: vault closed, no secrets served.
- Active(ProfileId): vault open, serving secrets.
- Transitioning(ProfileId): activation or deactivation in progress.
Multiple profiles may be active concurrently. There is no global “active
profile” singleton – the system supports simultaneous active profiles
with independent vaults. The active_profiles set in daemon-profile is
a HashSet<TrustProfileName>.
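A minimal sketch of this lifecycle model, with simplified stand-in types (the real ProfileState carries a ProfileId and lives alongside daemon-profile's active set):

```rust
// Illustrative model: independent per-profile states plus a set of
// concurrently active profiles. No "active profile" singleton exists.
use std::collections::HashSet;

#[derive(Debug, PartialEq)]
#[allow(dead_code)]
enum ProfileState {
    Inactive,           // vault closed, no secrets served
    Active(u64),        // vault open, serving secrets (u64 stands in for ProfileId)
    Transitioning(u64), // activation or deactivation in progress
}

fn main() {
    // Multiple profiles may be active at the same time.
    let mut active: HashSet<String> = HashSet::new();
    active.insert("work".to_string());
    active.insert("personal".to_string());
    assert_eq!(active.len(), 2);

    let state = ProfileState::Active(7);
    assert_ne!(state, ProfileState::Inactive);
    println!("active profiles: {active:?}");
}
```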
Context-Driven Activation
The ContextEngine in core-profile/src/context.rs evaluates system
signals against activation rules to determine the default profile for
new unscoped launches. Changing the default does not deactivate other
active profiles.
Context Signals
Signals that trigger rule evaluation:
| Signal | Source |
|---|---|
SsidChanged(String) | WiFi network change via D-Bus SSID monitor (platform_linux::dbus::ssid_monitor). |
AppFocused(AppId) | Wayland compositor focus change via platform_linux::compositor::focus_monitor. |
UsbDeviceAttached(String) | USB device insertion (vendor:product identifier). |
UsbDeviceDetached(String) | USB device removal. |
HardwareKeyPresent(String) | Hardware security key detection (e.g., YubiKey). |
TimeWindowEntered(String) | Time-based rule trigger (cron-like expression). |
GeolocationChanged(f64, f64) | Location change (latitude, longitude). |
Signal sources are spawned as long-lived tokio tasks in
daemon-profile/src/main.rs. They are conditionally compiled behind
#[cfg(all(target_os = "linux", feature = "desktop"))].
Activation Rules
Each profile’s activation configuration (ProfileActivation) contains:
- rules: a Vec<ActivationRule>, each specifying a RuleTrigger type and a string value to match.
- combinator: RuleCombinator::All (every rule must match the signal) or RuleCombinator::Any (one matching rule suffices).
- priority: u32 value. When multiple profiles match, the highest priority wins.
- switch_delay_ms: u64 debounce interval in milliseconds. Prevents rapid oscillation when a signal fires repeatedly.
Evaluation Algorithm
When ContextEngine::evaluate(signal) is called:
1. All profiles whose rules match the signal are collected. For All combinators, every rule in the profile must match; for Any, at least one rule must match.
2. Candidates are sorted by priority descending.
3. The highest-priority candidate is selected.
4. If it is already the current default, None is returned (no change).
5. Debounce check: if the candidate was last switched to within switch_delay_ms ago, None is returned.
6. Otherwise, the default is updated, the switch time is recorded, and the new ProfileId is returned.
Rule matching is type-strict: an Ssid trigger only matches
SsidChanged signals, an AppFocus trigger only matches AppFocused
signals, and so on. Mismatched trigger/signal pairs always return false.
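The selection and debounce steps can be sketched with simplified types. This is not the ContextEngine implementation: rule matching is elided, the clock is passed in as a millisecond timestamp, and the debounce here tracks a single engine-wide switch time rather than per-candidate history.

```rust
// Illustrative evaluation sketch: pick the highest-priority matching
// profile, skip no-ops, and suppress rapid oscillation via a debounce.

struct Candidate {
    name: &'static str,
    priority: u32,
    switch_delay_ms: u64,
}

struct Engine {
    current: &'static str,
    last_switch_ms: u64,
}

impl Engine {
    fn evaluate(&mut self, mut matches: Vec<Candidate>, now_ms: u64) -> Option<&'static str> {
        // Candidates sorted by priority descending; highest wins.
        matches.sort_by(|a, b| b.priority.cmp(&a.priority));
        let best = matches.into_iter().next()?;
        if best.name == self.current {
            return None; // already the default
        }
        if now_ms.saturating_sub(self.last_switch_ms) < best.switch_delay_ms {
            return None; // debounced
        }
        self.current = best.name;
        self.last_switch_ms = now_ms;
        Some(best.name)
    }
}

fn main() {
    let mut engine = Engine { current: "personal", last_switch_ms: 0 };
    let c = |name, priority, delay| Candidate { name, priority, switch_delay_ms: delay };

    // "work" outranks "guest" and the debounce window has passed.
    assert_eq!(
        engine.evaluate(vec![c("guest", 1, 500), c("work", 10, 500)], 1_000),
        Some("work")
    );
    // Same candidate again: no change.
    assert_eq!(engine.evaluate(vec![c("work", 10, 500)], 1_100), None);
    // A rapid flip back is suppressed by the debounce interval.
    assert_eq!(engine.evaluate(vec![c("personal", 20, 500)], 1_200), None);
    println!("context evaluation sketch OK");
}
```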
Default Profile
The default profile determines which trust profile is used for new
unscoped launches (launches without an explicit --profile flag). It
is set by:
- Configuration: global.default_profile in the config file, loaded at startup.
- Context engine: automatic switching based on runtime signals overrides the config default.
- Hot reload: when config changes are detected by ConfigWatcher, the context engine is rebuilt with the new default and the default_profile_name is updated. The config_profile_names list is also refreshed so that sesame profile list reflects added or removed profiles.
Default profile changes are:
- Audited via AuditAction::DefaultProfileChanged.
- Broadcast on the IPC bus as EventKind::DefaultProfileChanged.
- Reported by sesame status.
Profile Inheritance
There is no profile inheritance in the current implementation. Each
trust profile is an independent, self-contained configuration with its
own launch profiles, vault, and isolation boundaries. Cross-profile
interaction is limited to qualified tag references (e.g., work:corp)
in launch profile composition, which merge environment at launch time
without merging the profile definitions themselves.
Workspace Conventions
Open Sesame enforces a deterministic directory layout for source code
repositories. Git remote URLs are parsed into canonical filesystem paths
following the convention {root}/{user}/{server}/{org}/{repo}.
Canonical Path Convention
Every repository maps to a unique filesystem path:
/workspace/{user}/{server}/{org}/{repo}
For example:
| Remote URL | Canonical path |
|---|---|
https://github.com/scopecreep-zip/open-sesame | /workspace/usrbinkat/github.com/scopecreep-zip/open-sesame |
git@github.com:braincraftio/k9.git | /workspace/usrbinkat/github.com/braincraftio/k9 |
git@git.braincraft.io:braincraft/k9.git | /workspace/usrbinkat/git.braincraft.io/braincraft/k9 |
The default root is /workspace, configurable via the
SESAME_WORKSPACE_ROOT environment variable or settings.root in
workspaces.toml.
URL Parsing
The parse_url function in sesame-workspace/src/convention.rs accepts
two URL formats:
HTTPS
https://github.com/org/repo[.git]
Splits on / after stripping the scheme. Requires at least three path
components: server/org/repo.
SSH
git@github.com:org/repo.git
Splits on @ to isolate the user portion, then on : to separate the
server from the org/repo path. The path after the colon is split on /
to extract org and repo.
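Both URL shapes reduce to the same (server, org, repo) triple. The following is a hedged sketch of that split, folding in the normalization rules (lowercased server, stripped .git suffix); the real parse_url in sesame-workspace/src/convention.rs also performs component validation and the workspace.git special case.

```rust
// Illustrative parser for the two accepted remote-URL shapes:
//   https://server/org/repo[.git]
//   git@server:org/repo.git

fn parse_remote(url: &str) -> Option<(String, String, String)> {
    let strip_git = |s: &str| s.trim_end_matches(".git").to_string();

    if let Some(rest) = url.strip_prefix("https://").or_else(|| url.strip_prefix("http://")) {
        // HTTPS: split on '/' after stripping the scheme; needs server/org/repo.
        let parts: Vec<&str> = rest.split('/').collect();
        if parts.len() < 3 {
            return None;
        }
        return Some((parts[0].to_lowercase(), parts[1].to_string(), strip_git(parts[2])));
    }
    if let Some((_user, rest)) = url.split_once('@') {
        // SSH: '@' isolates the user, ':' separates server from org/repo.
        let (server, path) = rest.split_once(':')?;
        let (org, repo) = path.split_once('/')?;
        return Some((server.to_lowercase(), org.to_string(), strip_git(repo)));
    }
    None
}

fn main() {
    assert_eq!(
        parse_remote("https://github.com/scopecreep-zip/open-sesame"),
        Some(("github.com".into(), "scopecreep-zip".into(), "open-sesame".into()))
    );
    assert_eq!(
        parse_remote("git@GITHUB.COM:braincraftio/k9.git"),
        Some(("github.com".into(), "braincraftio".into(), "k9".into()))
    );
    // Fewer than three HTTPS path components is rejected.
    assert_eq!(parse_remote("https://github.com/only-org"), None);
    println!("URL parsing sketch OK");
}
```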
workspace.git Format
URLs where the repo component is workspace (or workspace.git) are
treated as org-level workspace repositories. These represent a monorepo
pattern where the org directory itself is a git repository containing
sibling project repos. The canonical path stops at the org level:
https://github.com/braincraftio/workspace.git
-> /workspace/usrbinkat/github.com/braincraftio/
The CloneTarget enum distinguishes Regular(PathBuf) from
WorkspaceGit(PathBuf). Cloning a workspace.git into an existing org
directory that already contains sibling repos triggers a special
initialization flow: git init, git remote add origin,
git fetch origin, then git checkout -f origin/HEAD -B main.
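The org-level versus repo-level distinction can be sketched as a small path-mapping function. This mirrors the CloneTarget shape described above but is illustrative; the real logic also handles the .git suffix and the sibling-repo initialization flow.

```rust
// Sketch: a repo component of "workspace" maps to the org directory
// itself (WorkspaceGit); anything else maps to {org}/{repo} (Regular).
use std::path::PathBuf;

#[derive(Debug, PartialEq)]
enum CloneTarget {
    Regular(PathBuf),
    WorkspaceGit(PathBuf),
}

fn clone_target(root: &str, user: &str, server: &str, org: &str, repo: &str) -> CloneTarget {
    let org_dir: PathBuf = [root, user, server, org].iter().collect();
    if repo == "workspace" {
        // Canonical path stops at the org level for workspace.git repos.
        CloneTarget::WorkspaceGit(org_dir)
    } else {
        CloneTarget::Regular(org_dir.join(repo))
    }
}

fn main() {
    assert_eq!(
        clone_target("/workspace", "usrbinkat", "github.com", "braincraftio", "workspace"),
        CloneTarget::WorkspaceGit(PathBuf::from("/workspace/usrbinkat/github.com/braincraftio"))
    );
    assert_eq!(
        clone_target("/workspace", "usrbinkat", "github.com", "braincraftio", "k9"),
        CloneTarget::Regular(PathBuf::from("/workspace/usrbinkat/github.com/braincraftio/k9"))
    );
    println!("clone-target sketch OK");
}
```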
Normalization and Validation
- Server names are lowercased (GITHUB.COM becomes github.com).
- .git suffixes are stripped from repo names.
- Insecure http:// URLs log a tracing warning about cleartext credential transmission but are not rejected.
Component validation (validate_component) rejects:
| Condition | Rejection reason |
|---|---|
| Empty component | "{label} component is empty" |
Leading . | Prevents collision with .git, .ssh, .config directories. |
Contains .. | Path traversal attack. |
Contains / or \ | Path separator embedded in component. |
| Contains null byte | Null byte injection. |
| Exceeds 255 bytes | Filesystem component length limit (ext4, btrfs). |
| Leading/trailing whitespace | Filesystem ambiguity. |
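The table above maps onto a straightforward guard function. This sketch is illustrative: only the empty-component message follows the documented format, and the remaining messages are hypothetical.

```rust
// Sketch of the validate_component checks from the table above.

fn validate_component(label: &str, c: &str) -> Result<(), String> {
    if c.is_empty() {
        return Err(format!("{label} component is empty"));
    }
    if c.starts_with('.') {
        // Prevents collision with .git, .ssh, .config directories.
        return Err(format!("{label} component has a leading dot"));
    }
    if c.contains("..") {
        return Err(format!("{label} component contains '..'")); // traversal
    }
    if c.contains('/') || c.contains('\\') {
        return Err(format!("{label} component contains a path separator"));
    }
    if c.contains('\0') {
        return Err(format!("{label} component contains a null byte"));
    }
    if c.len() > 255 {
        // ext4/btrfs filesystem component length limit.
        return Err(format!("{label} component exceeds 255 bytes"));
    }
    if c != c.trim() {
        return Err(format!("{label} component has leading/trailing whitespace"));
    }
    Ok(())
}

fn main() {
    assert!(validate_component("repo", "open-sesame").is_ok());
    assert!(validate_component("repo", ".git").is_err());
    assert!(validate_component("org", "a/../b").is_err());
    assert!(validate_component("org", " padded ").is_err());
    println!("component validation sketch OK");
}
```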
Git-Aware Discovery
is_git_repo
The git::is_git_repo function in sesame-workspace/src/git.rs checks
for the existence of a .git entry (directory or file) at the given
path. It does not shell out to git.
Remote URL Extraction
git::remote_url runs
git -C {path} remote get-url origin via std::process::Command with
explicit .arg() calls. Returns Ok(None) if the path lacks a .git
entry or has no origin remote. Returns Ok(Some(url)) on success.
Additional Git Operations
The git module provides:
- current_branch(path): runs git rev-parse --abbrev-ref HEAD.
- is_clean(path): runs git status --porcelain and checks for empty output.
- clone_repo(url, target, depth): clones with optional --depth and the -- separator before URL/path arguments.
All commands use explicit .arg() calls. The module-level documentation
states: “NEVER use format!() to build command strings. NEVER use
shell interpolation.”
Workspace Discovery
discover::discover_workspaces in
sesame-workspace/src/discover.rs walks the directory tree at
{root}/{user}/ to find all git repositories. The walk follows the
convention depth structure:
- Server level: enumerate directories under {root}/{user}/.
- Org level: enumerate directories under each server. If an org directory contains a .git entry, it is recorded as a workspace.git discovery.
- Repo level: enumerate directories under each org. Directories with .git entries are recorded as regular repositories.
Security properties of the walk:
- Symlinks skipped: entry.file_type()?.is_symlink() causes the entry to be skipped at every level. This prevents symlink loops and TOCTOU traversal attacks.
- Permission denied: silently skipped (ErrorKind::PermissionDenied returns Ok(())).
- .git directories: explicitly skipped as traversal targets (they are detected but not descended into).
Results are sorted by path. Each DiscoveredWorkspace includes:
- path: filesystem path to the repository root.
- convention: parsed WorkspaceConvention components (server, org, repo).
- remote_url: from git remote get-url origin, if available.
- linked_profile: resolved from workspace config links, if configured.
- is_workspace_git: true for org-level workspace.git repositories.
Workspace Configuration
workspaces.toml
The user-level workspace configuration is stored at
~/.config/pds/workspaces.toml. The schema is defined by
WorkspaceConfig in core-config/src/schema_workspace.rs:
[settings]
root = "/workspace"
user = "usrbinkat"
default_ssh = true
[links]
"/workspace/usrbinkat/github.com/org" = "personal"
"/workspace/usrbinkat/github.com/org/k9" = "work"
Settings fields:
| Field | Type | Default | Description |
|---|---|---|---|
root | PathBuf | $SESAME_WORKSPACE_ROOT or /workspace | Root directory for all workspaces. |
user | String | $USER or "user" | Username for path construction. |
default_ssh | bool | true | Prefer SSH URLs when cloning. |
Links section: a BTreeMap<String, String> mapping canonical paths
to profile names. More specific paths override less specific ones
(longest prefix wins).
Profile Link Resolution
resolve_workspace_profile in sesame-workspace/src/config.rs resolves
a filesystem path to a profile name using two strategies:
- Exact match: the path matches a link key exactly.
- Longest prefix match: the longest link key that is a prefix of the path wins. Path boundary enforcement prevents /org from matching /organic – the link path must match exactly or be followed by /.
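Both strategies, including the boundary check, fit in a few lines. This sketch stands in for resolve_workspace_profile; shortened paths are used for readability.

```rust
// Sketch: exact match first, then longest prefix match with path-boundary
// enforcement so "/org" cannot match "/organic".
use std::collections::BTreeMap;

fn resolve_profile<'a>(links: &'a BTreeMap<String, String>, path: &str) -> Option<&'a str> {
    // 1. Exact match.
    if let Some(p) = links.get(path) {
        return Some(p);
    }
    // 2. Longest prefix match: the prefix must be followed by '/'.
    links
        .iter()
        .filter(|(key, _)| {
            path.strip_prefix(key.as_str())
                .is_some_and(|rest| rest.starts_with('/'))
        })
        .max_by_key(|(key, _)| key.len())
        .map(|(_, p)| p.as_str())
}

fn main() {
    let mut links = BTreeMap::new();
    links.insert("/ws/u/gh/org".to_string(), "personal".to_string());
    links.insert("/ws/u/gh/org/k9".to_string(), "work".to_string());

    assert_eq!(resolve_profile(&links, "/ws/u/gh/org/k9"), Some("work")); // exact
    assert_eq!(resolve_profile(&links, "/ws/u/gh/org/k9/src"), Some("work")); // longest prefix
    assert_eq!(resolve_profile(&links, "/ws/u/gh/org/other"), Some("personal"));
    assert_eq!(resolve_profile(&links, "/ws/u/gh/organic"), None); // boundary enforced
    println!("profile link resolution sketch OK");
}
```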
.sesame.toml (Local Config)
Workspace-level and repo-level configuration files (.sesame.toml)
provide per-directory overrides. The schema is LocalSesameConfig in
core-config/src/schema_workspace.rs:
# /workspace/usrbinkat/github.com/org/.sesame.toml
profile = "work"
secret_prefix = "MYAPP"
tags = ["dev-rust"]
[env]
RUST_LOG = "debug"
| Field | Type | Description |
|---|---|---|
profile | Option<String> | Default trust profile for this context. |
env | BTreeMap<String, String> | Non-secret environment variables to inject. |
tags | Vec<String> | Launch profile tags to apply by default. |
secret_prefix | Option<String> | Env var prefix for secret injection (e.g., "MYAPP" causes api-key to become MYAPP_API_KEY). |
Multi-Layer Config Precedence
resolve_effective_config in sesame-workspace/src/config.rs merges
configuration from all layers. Precedence (highest to lowest):
1. Repo .sesame.toml ({path}/.sesame.toml)
2. Workspace .sesame.toml ({root}/{user}/{server}/{org}/.sesame.toml)
3. User config links (workspaces.toml [links] section)
Merge semantics per field:
- profile: highest-priority layer wins outright.
- env: all layers are merged into a single BTreeMap. Higher-priority keys override lower-priority ones; keys unique to lower layers are preserved.
- tags: all layers’ tags are concatenated (workspace tags first, then repo tags).
- secret_prefix: highest-priority layer wins outright.
The ConfigProvenance struct tracks which layer determined each value
("user config link", "workspace .sesame.toml", or
"repo .sesame.toml").
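The per-field merge semantics can be sketched with a reduced layer type. This is illustrative only: resolve_effective_config handles more fields (secret_prefix, provenance tracking) and reads real TOML files.

```rust
// Sketch of the merge semantics: profile wins outright, env maps merge
// with higher-priority overrides, tags concatenate lowest-priority first.
use std::collections::BTreeMap;

#[derive(Default, Clone)]
struct Layer {
    profile: Option<String>,
    env: BTreeMap<String, String>,
    tags: Vec<String>,
}

/// `layers` ordered lowest priority first (user link, workspace, repo).
fn merge(layers: &[Layer]) -> Layer {
    let mut out = Layer::default();
    for layer in layers {
        // profile: highest-priority layer that sets it wins outright.
        if layer.profile.is_some() {
            out.profile = layer.profile.clone();
        }
        // env: later (higher-priority) keys override; unique keys survive.
        out.env.extend(layer.env.clone());
        // tags: concatenated, lower-priority layers first.
        out.tags.extend(layer.tags.iter().cloned());
    }
    out
}

fn main() {
    let workspace = Layer {
        profile: Some("work".into()),
        env: BTreeMap::from([("RUST_LOG".into(), "info".into()), ("A".into(), "1".into())]),
        tags: vec!["dev-rust".into()],
    };
    let repo = Layer {
        profile: None,
        env: BTreeMap::from([("RUST_LOG".into(), "debug".into())]),
        tags: vec!["k9".into()],
    };
    let eff = merge(&[workspace, repo]);
    assert_eq!(eff.profile.as_deref(), Some("work")); // repo did not set one
    assert_eq!(eff.env["RUST_LOG"], "debug"); // repo overrides
    assert_eq!(eff.env["A"], "1"); // unique workspace key preserved
    assert_eq!(eff.tags, vec!["dev-rust", "k9"]);
    println!("config merge sketch OK");
}
```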
Platform-Specific Root Resolution
The workspace root is resolved in sesame-workspace/src/config.rs
(resolve_root) with this priority:
1. SESAME_WORKSPACE_ROOT environment variable.
2. config.settings.root from workspaces.toml.
3. Default: /workspace.
The default WorkspaceSettings reads SESAME_WORKSPACE_ROOT at
construction time, so the env var takes effect even without an explicit
workspaces.toml. The username defaults to the USER environment
variable, falling back to the string "user".
Shell Injection Prevention
All git operations in sesame-workspace/src/git.rs use
std::process::Command with explicit .arg() calls. The -- separator
is used before URL and path arguments in git clone to prevent argument
injection (a URL starting with - would otherwise be interpreted as a
flag). No temporary files are created. No secret material is written to
disk.
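The pattern is worth seeing concretely. This sketch builds (but does not spawn) a clone command; the function name is hypothetical while the .arg() and -- discipline matches the convention described above.

```rust
// Sketch: explicit .arg() calls plus a "--" separator so a URL beginning
// with "-" cannot be interpreted as a git flag. The command is only
// constructed here, never spawned.
use std::process::Command;

fn build_clone(url: &str, target: &str, depth: Option<u32>) -> Command {
    let mut cmd = Command::new("git");
    cmd.arg("clone");
    if let Some(d) = depth {
        cmd.arg("--depth").arg(d.to_string());
    }
    // Everything after "--" is treated as positional by git.
    cmd.arg("--").arg(url).arg(target);
    cmd
}

fn main() {
    let cmd = build_clone("--upload-pack=evil", "/tmp/x", Some(1));
    let args: Vec<String> = cmd
        .get_args()
        .map(|a| a.to_string_lossy().into_owned())
        .collect();
    // The hostile "URL" lands after the separator, so it cannot inject
    // an option such as --upload-pack.
    assert_eq!(args, vec!["clone", "--depth", "1", "--", "--upload-pack=evil", "/tmp/x"]);
    println!("command construction sketch OK");
}
```

Building the argv directly, with no shell and no format!(), is what makes the "NEVER use shell interpolation" rule enforceable.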
Cryptographic Agility
Open Sesame implements config-driven cryptographic algorithm selection
across five independent axes: key derivation (KDF), hierarchical key
derivation (HKDF), Noise IPC transport cipher, Noise IPC transport hash,
and audit log chain hash. Algorithm choices are declared in the
[crypto] section of config.toml and dispatched at runtime through
typed enum matching. No algorithm is hardcoded at call sites.
Configuration Schema
The [crypto] section of config.toml maps to CryptoConfigToml
(core-config/src/schema_crypto.rs:14), a string-based TOML
representation with six fields:
[crypto]
kdf = "argon2id"
hkdf = "blake3"
noise_cipher = "chacha-poly"
noise_hash = "blake2s"
audit_hash = "blake3"
minimum_peer_profile = "leading-edge"
These defaults are defined in the Default implementation
(schema_crypto.rs:30-38). At load time,
CryptoConfigToml::to_typed() (schema_crypto.rs:48) converts the
string values to validated enum variants in core_types::CryptoConfig
(core-types/src/crypto.rs:82). Unrecognized algorithm names produce a
core_types::Error::Config error, preventing the daemon from starting
with an invalid configuration.
Algorithm Axes
KDF: Password to Master Key
The KDF converts a user password and 16-byte salt into a 32-byte master
key. Two algorithms are available, selected by the kdf config field
and dispatched through derive_key_kdf()
(core-crypto/src/kdf.rs:60-69).
argon2id (default, KdfAlgorithm::Argon2id):
- Algorithm: Argon2id (hybrid mode, resists both side-channel and GPU attacks)
- Memory cost: 19,456 KiB (19 MiB) (kdf.rs:28)
- Time cost: 2 iterations (kdf.rs:29)
- Parallelism: 1 lane (kdf.rs:30)
- Output: 32 bytes (kdf.rs:31)
- Version: 0x13 (kdf.rs:35)
- Parameters follow OWASP minimum recommendations (kdf.rs:14-17)
- Implementation: argon2 crate with Argon2::new(Algorithm::Argon2id, Version::V0x13, params) (kdf.rs:35)
pbkdf2-sha256 (KdfAlgorithm::Pbkdf2Sha256):
- Algorithm: PBKDF2-HMAC-SHA256
- Iterations: 600,000 (kdf.rs:51)
- Output: 32 bytes
- Parameters follow OWASP recommendations for PBKDF2-SHA256 (kdf.rs:47)
- Implementation: pbkdf2 crate with Hmac<Sha256> (kdf.rs:51)
Both functions return SecureBytes — mlock’d, zeroize-on-drop memory
backed by core_memory::ProtectedAlloc. Intermediate stack arrays are
zeroized via zeroize::Zeroizing before the function returns
(kdf.rs:37, kdf.rs:50).
HKDF: Master Key to Per-Purpose Keys
The HKDF layer derives per-profile, per-purpose 32-byte keys from the
master key. Two algorithms are available, dispatched through the
*_with_algorithm() family of functions in core-crypto/src/hkdf.rs.
blake3 (default, HkdfAlgorithm::Blake3):
- Uses BLAKE3’s built-in `derive_key` mode, which provides extract-then-expand semantics equivalent to HKDF (hkdf.rs:1-5)
- Context string format: `"pds v2 <purpose> <profile_id>"` (hkdf.rs:39-41)
- Domain separation is achieved via BLAKE3’s context string parameter, which internally derives a context key from the string and uses it to key the hash of the input keying material (hkdf.rs:27-33)
- Implementation: `blake3::derive_key(context, ikm)` (hkdf.rs:31)
- Performance: 5-14x faster than SHA-256, with hardware acceleration via AVX2/AVX512/NEON (hkdf.rs:5)
hkdf-sha256 (HkdfAlgorithm::HkdfSha256):
- Standard HKDF extract-then-expand per RFC 5869
- Salt:
None(the IKM serves as both input keying material and implicit salt) (hkdf.rs:121) - Info: the context string bytes, providing domain separation
(
hkdf.rs:123) - Output: 32 bytes (
hkdf.rs:122) - Implementation:
Hkdf::<Sha256>::new(None, ikm)followed byhk.expand(context.as_bytes(), &mut key)(hkdf.rs:121-124) - Intermediate output array is zeroized before return (
hkdf.rs:126)
The key hierarchy derived through HKDF (hkdf.rs:7-14):
User password -> Argon2id -> Master Key (32 bytes)
-> HKDF "vault-key" -> per-profile vault key (encrypts SQLCipher DB)
-> HKDF "clipboard-key" -> per-profile clipboard key (zeroed on profile deactivation)
-> HKDF "ipc-auth-token" -> per-profile IPC authentication token
-> HKDF "ipc-encryption-key" -> per-profile IPC field encryption key
Each purpose has a dedicated public function (derive_vault_key,
derive_clipboard_key, derive_ipc_auth_token,
derive_ipc_encryption_key) with a corresponding *_with_algorithm()
variant that accepts an HkdfAlgorithm parameter. The
algorithm-dispatching variants use a match statement to route to the
correct implementation (hkdf.rs:137-141).
A key-encrypting-key (KEK) for platform keyring storage is derived
separately via derive_kek() (hkdf.rs:91-101). The KEK uses the
hardcoded context string "pds v2 key-encrypting-key" and concatenates
password + salt as the IKM, ensuring cryptographic independence from the
Argon2id master key derivation path. The concatenated IKM is zeroized
after use (hkdf.rs:99).
An extensibility function derive_key() (hkdf.rs:107-110) accepts an
arbitrary purpose string, allowing new key purposes to be added without
modifying the module. Callers must ensure purpose strings are globally
unique.
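The context-string convention underlying the HKDF layer can be sketched as below. The helper name `build_context` mirrors the function referenced in the dispatch examples; this sketch only illustrates the documented `"pds v2 <purpose> <profile_id>"` format and why distinct purposes yield independent keys:

```rust
/// Build the domain-separation context string used by the HKDF layer,
/// following the documented format "pds v2 <purpose> <profile_id>".
fn build_context(purpose: &str, profile_id: &str) -> String {
    format!("pds v2 {purpose} {profile_id}")
}

fn main() {
    let vault_ctx = build_context("vault-key", "work");
    let clip_ctx = build_context("clipboard-key", "work");
    assert_eq!(vault_ctx, "pds v2 vault-key work");
    // Different purposes produce different context strings, so the
    // derived keys are cryptographically independent even though they
    // share the same master key as input keying material.
    assert_ne!(vault_ctx, clip_ctx);
}
```

Because the purpose string is the only thing separating derived keys, two callers picking the same purpose would silently derive identical keys; hence the requirement that purpose strings be globally unique.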
Noise Cipher: IPC Transport Encryption
The Noise IK protocol used for inter-daemon IPC communication supports
two cipher selections via the noise_cipher config field:
chacha-poly (default, NoiseCipher::ChaChaPoly):
- ChaCha20-Poly1305 authenticated encryption
- Constant-time on all architectures without hardware AES
- The leading-edge default for environments where AES-NI is not guaranteed
aes-gcm (NoiseCipher::AesGcm):
- AES-256-GCM authenticated encryption
- Optimal on processors with AES-NI hardware acceleration
- Required for NIST/FedRAMP compliance
The cipher selection is read from config and passed to the Noise
protocol builder at IPC bus initialization. The NoiseCipher enum is
defined in core-types/src/crypto.rs:31-37.
Noise Hash: IPC Transport Hash
The Noise protocol hash function is selected via the noise_hash
config field:
blake2s (default, NoiseHash::Blake2s):
- BLAKE2s (256-bit output, optimized for 32-bit and 64-bit platforms)
- Faster than SHA-256 on platforms without SHA extensions
- The leading-edge default
sha256 (NoiseHash::Sha256):
- SHA-256
- Required for NIST/FedRAMP compliance
- Optimal on processors with SHA-NI hardware extensions
The NoiseHash enum is defined in core-types/src/crypto.rs:43-49.
Audit Hash: Audit Log Chain Integrity
The audit log uses a hash chain where each entry’s hash covers the
previous entry’s hash, providing tamper evidence. The hash function is
selected via the audit_hash config field:
blake3 (default, AuditHash::Blake3):
- BLAKE3 (256-bit output)
- Hardware-accelerated via AVX2/AVX512/NEON where available
- The leading-edge default
sha256 (AuditHash::Sha256):
- SHA-256
- Required for NIST/FedRAMP compliance
The AuditHash enum is defined in core-types/src/crypto.rs:55-61.
At-Rest Encryption
Vault data at rest is encrypted with AES-256-GCM via the
EncryptionKey type (core-crypto/src/encryption.rs:13). This cipher
is not configurable — it is always AES-256-GCM regardless of the
[crypto] config section. The implementation uses the RustCrypto
aes-gcm crate (encryption.rs:5-6).
- Key size: 32 bytes (AES-256) (encryption.rs:24)
- Nonce size: 12 bytes (encryption.rs:42)
- Output: ciphertext with appended 16-byte authentication tag (encryption.rs:37)
- Decrypted plaintext is returned as `SecureBytes` (mlock’d, zeroize-on-drop) (encryption.rs:61)
- The `Debug` implementation redacts key material, printing `"EncryptionKey([REDACTED])"` (encryption.rs:66-68)
Nonce reuse catastrophically breaks both confidentiality and
authenticity. Callers are responsible for ensuring nonce uniqueness per
encryption with the same key (encryption.rs:36-37).
Pre-Defined Crypto Profiles
The minimum_peer_profile config field selects a pre-defined algorithm
profile via the CryptoProfile enum
(core-types/src/crypto.rs:67-75):
leading-edge (default, CryptoProfile::LeadingEdge):
| Axis | Algorithm |
|---|---|
| KDF | Argon2id (19 MiB, 2 iterations) |
| HKDF | BLAKE3 |
| Noise cipher | ChaCha20-Poly1305 |
| Noise hash | BLAKE2s |
| Audit hash | BLAKE3 |
This profile uses modern algorithms that prioritize security margin and performance on commodity hardware without requiring specific hardware acceleration.
governance-compatible (CryptoProfile::GovernanceCompatible):
| Axis | Algorithm |
|---|---|
| KDF | PBKDF2-SHA256 (600K iterations) |
| HKDF | HKDF-SHA256 |
| Noise cipher | AES-256-GCM |
| Noise hash | SHA-256 |
| Audit hash | SHA-256 |
This profile uses exclusively NIST-approved algorithms suitable for environments subject to FedRAMP, FIPS 140-3, or equivalent governance frameworks.
custom (CryptoProfile::Custom):
Individual algorithm selection via the per-axis config fields. Allows mixing algorithms across profiles (e.g., Argon2id KDF with AES-GCM Noise cipher).
The minimum_peer_profile field specifies the minimum cryptographic
profile that the local node will accept from federation peers. A node
configured with "leading-edge" will reject connections from peers
advertising a weaker profile. This field is defined in CryptoConfig
as minimum_peer_profile: CryptoProfile
(core-types/src/crypto.rs:89).
Config-to-Runtime Dispatch
Algorithm selection flows from config to runtime through a three-stage pipeline:
1. TOML parsing: The `[crypto]` section is deserialized into `CryptoConfigToml` (core-config/src/schema_crypto.rs:14), which stores all algorithm names as `String` values.
2. Validation: `CryptoConfigToml::to_typed()` (schema_crypto.rs:48) converts each string to a typed enum variant via `match` statements. Unrecognized strings produce an error. The result is a `core_types::CryptoConfig` struct with typed fields (core-types/src/crypto.rs:82-90).
3. Dispatch: Runtime code calls algorithm-dispatching functions that accept the typed enum and route to the correct implementation. For example, `derive_key_kdf()` (core-crypto/src/kdf.rs:60-69) matches on `KdfAlgorithm`:

```rust
pub fn derive_key_kdf(
    algorithm: &KdfAlgorithm,
    password: &[u8],
    salt: &[u8; 16],
) -> core_types::Result<SecureBytes> {
    match algorithm {
        KdfAlgorithm::Argon2id => derive_key_argon2(password, salt),
        KdfAlgorithm::Pbkdf2Sha256 => derive_key_pbkdf2(password, salt),
    }
}
```

Similarly, `derive_vault_key_with_algorithm()` (core-crypto/src/hkdf.rs:131-141) matches on `HkdfAlgorithm`:

```rust
pub fn derive_vault_key_with_algorithm(
    algorithm: &HkdfAlgorithm,
    master_key: &[u8],
    profile_id: &str,
) -> SecureBytes {
    let ctx = build_context("vault-key", profile_id);
    match algorithm {
        HkdfAlgorithm::Blake3 => derive_32(&ctx, master_key),
        HkdfAlgorithm::HkdfSha256 => derive_32_hkdf_sha256(&ctx, master_key),
    }
}
```
This pattern ensures that adding a new algorithm requires three changes:
add a variant to the core_types enum, add a match arm in the TOML
validator, and add a match arm in the dispatch function. No call sites
need modification.
FIPS Considerations
Open Sesame does not claim FIPS 140-3 validation. The cryptographic
implementations are provided by RustCrypto crates (argon2, pbkdf2,
aes-gcm, blake3, hkdf, sha2) which have not undergone CMVP
certification.
For deployments subject to FIPS requirements, the
governance-compatible profile restricts algorithm selection to
NIST-approved primitives (PBKDF2-SHA256, HKDF-SHA256, AES-256-GCM,
SHA-256). This satisfies the algorithm selection requirement but does
not address the validated module requirement. Organizations requiring a
FIPS-validated cryptographic module would need to replace the RustCrypto
backends with a certified implementation (e.g., AWS-LC, BoringCrypto)
and re-validate.
The minimum_peer_profile mechanism provides a policy enforcement
point: setting it to "governance-compatible" ensures that no peer in
a federated deployment can negotiate a session using non-NIST
algorithms, even if the local node supports them.
Memory Protection
All key material derived through the KDF and HKDF paths is returned as
SecureBytes (core-crypto/src/lib.rs:16), which is backed by
core_memory::ProtectedAlloc. This provides:
- Page-aligned allocation with guard pages
- `mlock` to prevent swapping to disk
- Volatile zeroization on drop
- Canary bytes for buffer overflow detection
The init_secure_memory() function (core-crypto/src/lib.rs:29-31)
must be called before the seccomp sandbox is applied, because it probes
memfd_secret availability. After seccomp is active, memfd_secret
remains in the allowlist for all sandboxed daemons.
Vault Engine
The vault engine provides encrypted per-profile secret storage backed by SQLCipher databases, with a multi-layer key hierarchy derived via Argon2id and BLAKE3.
SQLCipher Configuration
Each vault database is a SQLCipher-encrypted SQLite file. The following PRAGMA directives are applied
in SqlCipherStore::open() (core-secrets/src/sqlcipher.rs) before any table access:
| Parameter | Value | Purpose |
|---|---|---|
| `cipher_page_size` | 4096 | Page-level encryption granularity |
| `cipher_hmac_algorithm` | HMAC_SHA256 | Per-page authentication |
| `cipher_kdf_algorithm` | PBKDF2_HMAC_SHA256 | Internal SQLCipher KDF for page keys |
| `kdf_iter` | 256000 | PBKDF2 iteration count |
| `journal_mode` | WAL | Write-ahead logging for crash safety |
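In SQL terms, the configuration above corresponds to a PRAGMA sequence along these lines (a sketch only; the exact statements and their ordering live in SqlCipherStore::open(), and the key placeholder is hypothetical):

```sql
-- Supply the raw 32-byte key as hex before any other statement.
PRAGMA key = "x'<64 hex digits>'";
PRAGMA cipher_page_size = 4096;
PRAGMA cipher_hmac_algorithm = HMAC_SHA256;
PRAGMA cipher_kdf_algorithm = PBKDF2_HMAC_SHA256;
PRAGMA kdf_iter = 256000;
PRAGMA journal_mode = WAL;
```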
SQLCipher encrypts every database page with AES-256-CBC and authenticates each page with
HMAC-SHA256. The page encryption key is supplied via PRAGMA key as a raw 32-byte hex-encoded
value. After the key pragma executes, both the hex string and the SQL statement are zeroized in
place via zeroize::Zeroize before any further operations proceed.
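The key-then-wipe step can be sketched with std only. Names here are hypothetical, and the real code uses rusqlite for execution and the `zeroize` crate for wiping (which also guarantees the compiler cannot optimize the overwrite away):

```rust
/// Hex-encode a 32-byte vault key into the raw-key PRAGMA form.
fn pragma_key_statement(key: &[u8; 32]) -> String {
    let hex: String = key.iter().map(|b| format!("{b:02x}")).collect();
    format!("PRAGMA key = \"x'{hex}'\";")
}

/// Overwrite a String's bytes in place before dropping it. A sketch of
/// what zeroize::Zeroize does for the hex string and SQL statement.
fn wipe(s: &mut String) {
    unsafe {
        // Writing ASCII NUL keeps the buffer valid UTF-8.
        for b in s.as_bytes_mut() {
            std::ptr::write_volatile(b, 0);
        }
    }
    s.clear();
}

fn main() {
    let key = [0xAB_u8; 32];
    let mut stmt = pragma_key_statement(&key);
    assert!(stmt.starts_with("PRAGMA key = \"x'abab"));
    // ...execute stmt against the SQLCipher connection here...
    wipe(&mut stmt);
    assert!(stmt.is_empty());
}
```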
Key Hierarchy
The key derivation chain from user password to on-disk encryption proceeds through three stages:
User password + per-profile 16-byte random salt
--> Argon2id
--> Master Key (32 bytes, held in SecureBytes / mlock'd memory)
--> BLAKE3 derive_key(context="pds v2 vault-key {profile_id}")
--> Vault Key (32 bytes) -- used as SQLCipher PRAGMA key
--> BLAKE3 derive_key(context="pds v2 entry-encryption-key")
--> Entry Key (32 bytes) -- used for per-entry AES-256-GCM
BLAKE3’s derive_key mode accepts a context string that provides domain separation. The vault
key and entry key share the same vault key as input keying material but use different context
strings, making them cryptographically independent. The vault_key_derivation_domain_separation
test in core-secrets/src/sqlcipher.rs verifies this property.
The full set of derived keys sharing the same master key (defined in core-crypto/src/hkdf.rs):
| Context | Purpose |
|---|---|
| `pds v2 vault-key {profile_id}` | SQLCipher page encryption |
| `pds v2 entry-encryption-key` | Per-entry AES-256-GCM (derived from vault key, not master key) |
| `pds v2 clipboard-key {profile_id}` | Clipboard encryption |
| `pds v2 ipc-auth-token {profile_id}` | IPC authentication |
| `pds v2 ipc-encryption-key {profile_id}` | Per-field IPC encryption (feature-gated) |
| `pds v2 key-encrypting-key` | KEK for platform keyring storage |
An HKDF-SHA256 alternative is available via derive_vault_key_with_algorithm(), selectable per
the HkdfAlgorithm enum. BLAKE3 is the default. The
blake3_and_hkdf_sha256_produce_different_keys test confirms the two algorithms produce
different outputs for the same inputs.
Double Encryption
Each secret value receives two independent layers of encryption:
- Page-level: SQLCipher encrypts the entire database page (key names, values, metadata) using the vault key via AES-256-CBC + HMAC-SHA256.
- Entry-level: Each value is individually encrypted with AES-256-GCM using the entry key before being written to the `value` column. The wire format stored in the database is `[12-byte random nonce][ciphertext + 16-byte GCM tag]`.
Every encryption operation generates a fresh 12-byte random nonce via getrandom. The minimum
wire length for decryption is 28 bytes (12-byte nonce + 16-byte GCM tag); shorter values are
rejected with an error. The encrypt_same_value_produces_different_ciphertext test verifies
nonce uniqueness across 100 consecutive encryptions of identical plaintext.
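The wire-format length check can be sketched as follows (a hypothetical helper; actual decryption is performed by the RustCrypto `aes-gcm` crate after this split):

```rust
const NONCE_LEN: usize = 12;
const TAG_LEN: usize = 16;

/// Split the stored blob into (nonce, ciphertext-with-tag), enforcing
/// the 28-byte minimum: 12-byte nonce + 16-byte GCM tag.
fn split_wire(blob: &[u8]) -> Result<(&[u8], &[u8]), &'static str> {
    if blob.len() < NONCE_LEN + TAG_LEN {
        return Err("ciphertext too short: missing nonce or GCM tag");
    }
    Ok(blob.split_at(NONCE_LEN))
}

fn main() {
    // 12-byte nonce + 16-byte tag over empty plaintext is the minimum.
    let minimal = vec![0u8; 28];
    let (nonce, ct) = split_wire(&minimal).unwrap();
    assert_eq!(nonce.len(), 12);
    assert_eq!(ct.len(), 16);
    // Anything shorter is rejected before decryption is attempted.
    assert!(split_wire(&[0u8; 27]).is_err());
}
```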
The db_file_contains_no_plaintext and db_file_contains_no_key_names_in_plaintext tests read
raw database file bytes and assert that neither secret values nor key names appear anywhere in
the on-disk file.
Database Schema
The schema is created via an idempotent CREATE TABLE IF NOT EXISTS statement during
SqlCipherStore::open():
CREATE TABLE IF NOT EXISTS secrets (
key TEXT PRIMARY KEY,
value BLOB NOT NULL,
created_at INTEGER NOT NULL,
updated_at INTEGER NOT NULL
);
Timestamps are stored as Unix epoch seconds via SystemTime::now(). The
schema_migration_idempotent test verifies that opening a database multiple times does not fail
or corrupt existing data. After schema creation, SqlCipherStore::open() executes
SELECT count(*) FROM sqlite_master to verify the key is correct; a wrong key causes this
statement to fail with “wrong key or corrupt database”.
Per-Profile Vault Isolation
Each profile receives a separate database file at {config_dir}/vaults/{profile_name}.db and a
separate 16-byte random salt file at {config_dir}/vaults/{profile_name}.salt. The salt is
generated by getrandom on first unlock and persisted to disk by generate_profile_salt() in
daemon-secrets/src/unlock.rs.
Isolation is cryptographic, not namespace-based.
core_crypto::derive_vault_key(master_key, profile_id) uses the context string
"pds v2 vault-key {profile_id}", producing a different 32-byte key for each profile ID even
when the master key is the same. Attempting to open a vault encrypted with profile A’s key using
profile B’s key fails at the SELECT count(*) FROM sqlite_master verification step.
Tests in core-secrets/src/sqlcipher.rs that verify isolation:
- `cross_profile_keys_are_independent` – different profile IDs yield different keys, and opening a database with the wrong profile’s key fails.
- `cross_profile_secret_access_returns_error` – reading a key from the wrong profile’s vault returns `NotFound`.
- `different_vault_keys_cannot_access` – a database opened with key A rejects key B.
Vault Lifecycle
Create
A vault is created implicitly on first access after a profile is unlocked.
VaultState::vault_for() in daemon-secrets/src/vault.rs creates the
{config_dir}/vaults/ directory if needed, then calls SqlCipherStore::open() inside
tokio::task::spawn_blocking with a 10-second timeout to avoid blocking the async event loop
during synchronous SQLCipher I/O. The timeout is a defensive measure: if the blocking thread is
killed (e.g., by seccomp SIGSYS), the JoinHandle would hang indefinitely without it. The
opened store is wrapped in JitDelivery with the configured TTL (default 300 seconds, set via
the --ttl CLI flag or PDS_SECRET_TTL environment variable).
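The timeout-wrapped blocking open can be illustrated with a std-only analogue. The real code uses `tokio::task::spawn_blocking` plus an async timeout; this sketch substitutes a plain thread and `mpsc::recv_timeout`, and the function name is hypothetical:

```rust
use std::sync::mpsc;
use std::thread;
use std::time::Duration;

/// Run a blocking operation on a separate thread, giving up after `timeout`.
/// If the worker thread never completes (e.g., killed by seccomp SIGSYS),
/// the caller is not stuck waiting forever.
fn run_blocking_with_timeout<T, F>(f: F, timeout: Duration) -> Result<T, &'static str>
where
    T: Send + 'static,
    F: FnOnce() -> T + Send + 'static,
{
    let (tx, rx) = mpsc::channel();
    thread::spawn(move || {
        let _ = tx.send(f()); // receiver may have timed out already
    });
    rx.recv_timeout(timeout).map_err(|_| "blocking operation timed out")
}

fn main() {
    // A fast operation completes within the deadline.
    let ok = run_blocking_with_timeout(|| 2 + 2, Duration::from_secs(10));
    assert_eq!(ok.unwrap(), 4);

    // A stalled operation is abandoned after the deadline.
    let stuck = run_blocking_with_timeout(
        || thread::sleep(Duration::from_secs(5)),
        Duration::from_millis(50),
    );
    assert!(stuck.is_err());
}
```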
Open
SqlCipherStore::open() performs the following steps in order:
1. Validate that the vault key is exactly 32 bytes.
2. Open the SQLite connection via `Connection::open()`.
3. Set the encryption key via `PRAGMA key` in raw hex mode, then zeroize the hex string and the SQL statement.
4. Apply SQLCipher configuration PRAGMAs (cipher_page_size, HMAC algorithm, KDF algorithm, KDF iterations, WAL mode).
5. Run the idempotent schema migration (`CREATE TABLE IF NOT EXISTS`).
6. Verify the key by executing `SELECT count(*) FROM sqlite_master`.
7. Derive the entry key via `blake3::derive_key("pds v2 entry-encryption-key", vault_key)`, then zeroize the intermediate byte array.
Rekey (C-level Key Scrub)
SqlCipherStore::pragma_rekey_clear() issues PRAGMA rekey = '' to scrub SQLCipher’s internal
C-level copy of the page encryption key from memory. This is defense-in-depth: the Rust-side
entry_key: SecureBytes already zeroizes on drop, but this call ensures the C library’s
internal buffer is also cleared. The method logs a warning on failure but does not panic, since
the connection may already be in a broken state. The
pragma_rekey_clear_does_not_remove_encryption test confirms this scrubs the in-memory key
without removing on-disk encryption.
An AtomicBool (cleared) prevents redundant PRAGMA rekey calls in the Drop
implementation.
Close
Vault closing occurs during profile deactivation (VaultState::deactivate_profile()) or
locking (handle_lock_request()). The sequence is:
1. Remove the profile from the `active_profiles` authorization set. This is the security-critical step and happens first, before any I/O.
2. Flush the JIT cache via `vault.flush().await`. All `SecureBytes` entries are zeroized on drop when the `HashMap` is cleared.
3. Call `pragma_rekey_clear()` to scrub the C-level key buffer.
4. Drop the `SqlCipherStore`. The `entry_key: SecureBytes` zeroizes on drop, and `Drop` skips the redundant `PRAGMA rekey` because `cleared` is already set.
5. Remove the master key from the `master_keys` map. `SecureBytes` zeroizes on drop.
6. Remove any partial multi-factor unlock state for the profile.
7. On Linux, delete the profile’s platform keyring entry via `keyring_delete_profile()`.
On lock-all (no profile specified in LockRequest), the rate limiter state is also reset to a
fresh SecretRateLimiter instance.
Secret Lifecycle
This page describes how secrets move through the system: from storage in an encrypted vault, through a JIT cache, across the IPC bus, and into a child process’s environment. It also covers key material lifecycle and the compliance testing framework.
Secret Storage Operations
The SecretsStore trait (core-secrets/src/store.rs) defines four operations that every storage
backend must implement:
| Operation | Behavior |
|---|---|
| `get(key)` | Retrieve a secret by key. Returns an error if the key does not exist. |
| `set(key, value)` | Store a secret. Overwrites if the key already exists. Updates `updated_at`; sets `created_at` on first insert. |
| `delete(key)` | Delete a secret by key. Returns an error if the key does not exist. |
| `list_keys()` | List all key names in the store. Values are not returned (no bulk decryption). |
The list_keys method intentionally avoids returning values. Listing secrets does not trigger
bulk decryption of every entry, limiting the window during which plaintext exists in memory.
Two implementations exist:
- `SqlCipherStore` (core-secrets/src/sqlcipher.rs): Production backend. Each `set` encrypts the value with per-entry AES-256-GCM before writing to the database. Each `get` decrypts after reading. The `Mutex<Connection>` serializes all database access.
- `InMemoryStore` (core-secrets/src/store.rs): Testing backend. Holds secrets in a `HashMap<String, SecureBytes>` protected by a `tokio::sync::RwLock`. Values are stored as `SecureBytes` (mlock’d, zeroize-on-drop). Does not persist to disk.
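A minimal sketch of the four store operations is below. The real trait is async, returns proper error types, and stores `SecureBytes`; this synchronous toy (`SketchStore`, a hypothetical name) uses plain `Vec<u8>` purely for illustration:

```rust
use std::collections::HashMap;

/// Synchronous sketch of the four SecretsStore operations.
struct SketchStore {
    secrets: HashMap<String, Vec<u8>>,
}

impl SketchStore {
    fn new() -> Self {
        Self { secrets: HashMap::new() }
    }

    fn get(&self, key: &str) -> Result<Vec<u8>, String> {
        self.secrets
            .get(key)
            .cloned()
            .ok_or_else(|| format!("not found: {key}"))
    }

    fn set(&mut self, key: &str, value: Vec<u8>) {
        self.secrets.insert(key.to_string(), value); // overwrite allowed
    }

    fn delete(&mut self, key: &str) -> Result<(), String> {
        self.secrets
            .remove(key)
            .map(drop)
            .ok_or_else(|| format!("not found: {key}"))
    }

    /// Key names only -- never decrypts or returns values in bulk.
    fn list_keys(&self) -> Vec<String> {
        self.secrets.keys().cloned().collect()
    }
}

fn main() {
    let mut store = SketchStore::new();
    store.set("api-key", b"hunter2".to_vec());
    assert_eq!(store.get("api-key").unwrap(), b"hunter2");
    assert_eq!(store.list_keys(), vec!["api-key".to_string()]);
    store.delete("api-key").unwrap();
    assert!(store.get("api-key").is_err());
}
```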
JIT Cache
The JitDelivery<S> wrapper (core-secrets/src/jit.rs) adds a time-limited in-memory cache in
front of any SecretsStore implementation. It exists to avoid repeated SQLCipher decryption for
frequently accessed secrets.
Resolution
JitDelivery::resolve(key) checks the cache first. If a valid (non-expired) entry exists, the
cached SecureBytes clone is returned without touching the underlying store. If the entry is
missing or expired, the value is fetched from the store, cached, and returned.
Both the cache entry and the returned value are independent SecureBytes clones. Each
independently zeroizes on drop.
TTL Expiry
Each cache entry records its fetched_at timestamp as a std::time::Instant. On the next
resolve call, if fetched_at.elapsed() >= ttl, the cached value is considered expired and a
fresh fetch occurs. The default TTL is 300 seconds, configurable via the daemon’s --ttl flag
or PDS_SECRET_TTL environment variable.
The ttl_expiry_refetches test verifies that after TTL expiry, the underlying store is
re-queried and updated values are returned.
Flush on Lock
JitDelivery::flush() clears the entire cache by calling cache.clear(). Because each value
in the cache is a SecureBytes, dropping the HashMap entries triggers zeroization of all
cached secret material. Flush is called during profile deactivation and locking, before the
vault is closed and key material is destroyed.
The flush_clears_cache test verifies that after a flush, the next resolve call fetches
fresh data from the underlying store.
Store Bypass
JitDelivery::store() provides direct access to the underlying SecretsStore, bypassing the
cache. This is used for write operations (set, delete, list_keys) which should not
interact with the read cache. After a set or delete, the daemon calls
vault.flush().await to invalidate any stale cache entries.
Key Material Lifecycle
All key material in the secrets subsystem is held in SecureBytes (core-crypto), which
provides:
- mlock: The backing memory is locked to prevent swapping to disk. On Linux, this uses `memfd_secret` with guard pages when available.
- Zeroize on drop: When a `SecureBytes` value is dropped, its backing memory is overwritten with zeros before deallocation. This is implemented via the `zeroize` crate’s `Zeroize` trait.
- Clone independence: Cloning a `SecureBytes` value creates a new mlock’d allocation. Dropping the clone does not affect the original, and vice versa.
The lifecycle of key material through the system:
1. Derivation: The master key is derived via Argon2id from the user’s password and a per-profile 16-byte salt (`derive_master_key()` in daemon-secrets/src/unlock.rs, which delegates to `core_crypto::derive_key_argon2()`). The result is a 32-byte `SecureBytes` value.
2. Storage: The master key is stored in `VaultState::master_keys` (daemon-secrets/src/vault.rs), a `HashMap<TrustProfileName, SecureBytes>`.
3. Derivation (vault key): On first vault access, `core_crypto::derive_vault_key()` derives a 32-byte vault key from the master key via BLAKE3. The intermediate stack array is wrapped in `zeroize::Zeroizing` and zeroized on scope exit.
4. Use: The vault key is passed to `SqlCipherStore::open()`, which uses it for `PRAGMA key` and derives the entry key. The vault key is not retained by the store after open completes.
5. Destruction: On lock or deactivation, the JIT cache is flushed (zeroizing cached secrets), `pragma_rekey_clear()` scrubs the C-level key buffer, the `SqlCipherStore` is dropped (zeroizing the entry key), and the master key is removed from the map (zeroizing on drop).
Field-Level IPC Encryption
When the ipc-field-encryption feature is enabled, secret values are encrypted with AES-256-GCM
before being placed on the IPC bus, providing a second encryption layer on top of the Noise IK
transport.
The per-profile IPC encryption key is derived via
core_crypto::derive_ipc_encryption_key(master_key, profile_id) using the context string
"pds v2 ipc-encryption-key {profile_id}". The wire format is
[12-byte random nonce][AES-256-GCM ciphertext + tag].
This feature is gated behind ipc-field-encryption and disabled by default for the following
reasons, documented in daemon-secrets/src/vault.rs:
- The Noise IK transport is already the security boundary, matching the precedent set by ssh-agent, 1Password, Vault, and gpg-agent.
- CLI clients lack the master key needed to decrypt per-field encrypted values.
- The per-field key derives from the same master key that transits inside the Noise channel, so it is not an independent trust root.
When enabled, the encryption path in handle_secret_get (daemon-secrets/src/crud.rs) encrypts
values before sending the SecretGetResponse, and the decryption path in handle_secret_set
decrypts incoming values before writing to the vault. The decrypted intermediate Vec<u8> is
explicitly zeroized after the store write completes.
Compliance Testing
The compliance_tests() function (core-secrets/src/compliance.rs) defines a portable test
suite that every SecretsStore implementation must pass. The suite verifies:
| Test case | Assertion |
|---|---|
| Set and get | A stored value is retrievable with identical bytes. |
| Overwrite | Storing to an existing key replaces the value. |
| Get nonexistent | Retrieving a key that does not exist returns an error. |
| Delete | A deleted key is no longer retrievable. |
| Delete nonexistent | Deleting a key that does not exist returns an error. |
| List keys | All stored key names appear in the list. |
| Cleanup | After deleting all keys, the list is empty. |
The in_memory_store_passes_compliance test runs this suite against InMemoryStore. The
SQLCipher backend has its own compliance tests in core-secrets/src/sqlcipher.rs that
additionally verify encryption properties (no plaintext on disk, cross-profile isolation, nonce
uniqueness).
Six-Gate Security Pipeline
Every secret CRUD operation passes through a six-gate security pipeline in
daemon-secrets/src/crud.rs before the vault is accessed. The gates execute in order from
cheapest to most expensive:
1. Lock check: Rejects the request if no profiles are unlocked (`master_keys` is empty).
2. Active profile check: Rejects if the requested profile is not in the `active_profiles` set.
3. Identity check: Logs the requester’s `verified_sender_name` (stamped by the IPC bus server from the Noise IK registry). Expected requesters are daemon-secrets, daemon-launcher, or `None` (CLI relay).
4. Rate limit check: Applies per-requester token bucket rate limiting.
5. ACL check: Evaluates per-daemon per-key access control rules from config.
5.5. Key validation: Validates the secret key name via `core_types::validate_secret_key()`.
6. Vault access: Opens (or retrieves) the vault and performs the requested operation.
Each gate that denies a request emits both a structured tracing log entry and a
SecretOperationAudit IPC event (fire-and-forget to daemon-profile for persistent audit
logging). The denial response is sent immediately and processing stops.
Access Control
The secrets daemon enforces per-daemon per-key access control over secret operations. ACL rules are defined in the configuration file and evaluated as pure functions over config state with no I/O or mutable state. Rate limiting provides a second layer of defense against enumeration attacks.
Per-Daemon Per-Key ACL
Access control is implemented in daemon-secrets/src/acl.rs as two pure functions:
check_secret_access() for get/set/delete operations, and check_secret_list_access() for
list operations.
Configuration
ACL rules are defined in the config file under [profiles.<name>.secrets.access]. Each entry
maps a daemon name to a list of secret key names that daemon is permitted to access:
[profiles.work.secrets.access]
daemon-launcher = ["api-key", "db-password"]
daemon-wm = []
In this example, daemon-launcher can access api-key and db-password in the work
profile. daemon-wm has an explicit empty list, which denies all access including listing.
Evaluation Rules for Get/Set/Delete
The check_secret_access() function evaluates the following rules in order. The first matching
rule determines the outcome:
| Condition | Result | Rationale |
|---|---|---|
| Profile not in config, no ACL policy on any profile | Allow | Backward compatibility with pre-ACL deployments. |
| Profile not in config, ACL policy exists on any other profile | Deny | Fail-closed. An attacker must not bypass ACL by requesting a nonexistent profile. |
| Profile in config, empty access map | Allow | No ACL policy configured for this profile. |
| Unregistered client (`verified_sender_name` is `None`), ACL policy exists | Deny | Unregistered clients cannot be identity-verified. |
| Daemon name absent from access map | Allow | Backward compatible default. Only daemons explicitly listed are restricted. |
| Daemon name present, key in allowed list | Allow | Explicit grant. |
| Daemon name present, key not in allowed list | Deny | Allowlist is strict. |
| Daemon name present, empty allowed list | Deny | Explicit deny-all. Empty list means no access, not unrestricted access. |
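The per-daemon portion of this decision table can be sketched as a pure function. The signature is hypothetical and deliberately simplified: the real `check_secret_access()` also implements the cross-profile fail-closed rule for unknown profiles, which this sketch omits:

```rust
use std::collections::HashMap;

type AccessMap = HashMap<String, Vec<String>>; // daemon name -> allowed keys

/// Sketch of the daemon/key portion of the ACL decision table.
fn check_secret_access(
    access: &AccessMap,      // this profile's [secrets.access] map
    requester: Option<&str>, // verified_sender_name; None = unregistered
    key: &str,
) -> bool {
    // Empty access map: no ACL policy configured for this profile.
    if access.is_empty() {
        return true;
    }
    // Unregistered clients cannot be identity-verified once ACLs exist.
    let Some(daemon) = requester else { return false };
    match access.get(daemon) {
        // Daemon not listed: backward-compatible allow.
        None => true,
        // Listed: strict allowlist; an empty list means deny-all.
        Some(allowed) => allowed.iter().any(|k| k == key),
    }
}

fn main() {
    let mut access = AccessMap::new();
    access.insert(
        "daemon-launcher".into(),
        vec!["api-key".into(), "db-password".into()],
    );
    access.insert("daemon-wm".into(), vec![]); // explicit deny-all

    assert!(check_secret_access(&access, Some("daemon-launcher"), "api-key"));
    assert!(!check_secret_access(&access, Some("daemon-launcher"), "other"));
    assert!(!check_secret_access(&access, Some("daemon-wm"), "api-key"));
    assert!(check_secret_access(&access, Some("daemon-snippets"), "x")); // unlisted
    assert!(!check_secret_access(&access, None, "api-key")); // unregistered
}
```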
Evaluation Rules for List
The check_secret_list_access() function follows the same rules as get/set/delete with one
difference at the daemon-level check:
| Condition | Result |
|---|---|
| Daemon name present, non-empty allowed list | Allow (has at least some access) |
| Daemon name present, empty allowed list | Deny (“no keys allowed” means “cannot even see what keys exist”) |
All other conditions match check_secret_access().
Test Coverage
The ACL module contains 15 tests (acl_001 through acl_015) that verify every branch of
both functions. Each test is prefixed with a SECURITY INVARIANT comment documenting the
property it protects.
Unregistered Client Handling
Client identity is determined by the verified_sender_name field on each IPC message. This
field is stamped by the IPC bus server from the Noise IK static key registry – it is not
self-declared by the client. The check_secret_requester() function in
daemon-secrets/src/acl.rs logs an anomaly warning if a daemon other than daemon-secrets or
daemon-launcher requests secrets, since those are the only expected requesters.
Unregistered clients (those with verified_sender_name set to None) are CLI relay connections
that transit through daemon-profile with Open clearance. When any ACL policy is active,
unregistered clients are denied access to both individual secrets and the key listing. This
prevents bypass via unauthenticated connections.
Audit Logging
Every secret operation emits a structured audit log entry via the audit_secret_access()
function, regardless of whether the operation succeeds or is denied. The log entry includes:
- `event_type`: The operation (`get`, `set`, `delete`, `list`, `unlock`, `lock`).
- `requester`: The `DaemonId` (UUID) of the requesting client.
- `profile`: The target trust profile name.
- `key`: The secret key name (or `-` for operations that do not target a specific key).
- `outcome`: The result (`success`, `denied-locked`, `denied-acl`, `rate-limited`, `not-found`, etc.).
In addition to local tracing logs, each operation also emits a SecretOperationAudit IPC
event that is published to the bus for persistent logging by daemon-profile. This event is
fire-and-forget: delivery failure does not block or fail the secret operation. Both audit paths
are required; the code comments explicitly state that neither path should be removed on the
assumption that the other is sufficient.
Rate Limiting
Rate limiting is implemented in daemon-secrets/src/rate_limit.rs using the governor crate’s
in-memory GCRA (Generic Cell Rate Algorithm) token bucket.
Configuration
The rate limiter is configured with a fixed quota:
- Sustained rate: 10 requests per second
- Burst capacity: 20 requests
These values are hardcoded in SecretRateLimiter::new().
Per-Daemon Buckets
Each daemon receives an independent rate limit bucket, keyed on its verified_sender_name.
Exhausting one daemon’s quota does not affect any other daemon’s ability to access secrets.
Buckets are created lazily on first request from each daemon.
Anonymous Client Isolation
All unregistered clients (those with verified_sender_name set to None) share a single rate
limit bucket keyed on the sentinel value __anonymous__. This prevents bypass via the
new-connection-per-request pattern: an attacker who opens a fresh IPC connection for every
request still draws from the same shared anonymous bucket.
The anonymous bucket is independent from all named daemon buckets. Exhausting the anonymous bucket does not affect registered daemons, and vice versa.
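The bucket-keying and burst behavior can be sketched with a classic token bucket (the real implementation uses the `governor` crate's GCRA; the names and the float-based refill here are hypothetical simplifications):

```rust
use std::collections::HashMap;
use std::time::Instant;

/// Sketch of per-daemon token buckets with a shared anonymous bucket.
struct SketchRateLimiter {
    rate_per_sec: f64,
    burst: f64,
    buckets: HashMap<String, (Instant, f64)>, // key -> (last refill, tokens)
}

impl SketchRateLimiter {
    fn new() -> Self {
        Self { rate_per_sec: 10.0, burst: 20.0, buckets: HashMap::new() }
    }

    /// All unregistered clients collapse onto one sentinel key, so a
    /// fresh-connection-per-request pattern cannot reset the bucket.
    fn check(&mut self, verified_sender_name: Option<&str>) -> bool {
        let key = verified_sender_name.unwrap_or("__anonymous__").to_string();
        let now = Instant::now();
        let (last, tokens) = self.buckets.entry(key).or_insert((now, self.burst));
        // Refill proportionally to elapsed time, capped at burst capacity.
        *tokens = (*tokens + last.elapsed().as_secs_f64() * self.rate_per_sec)
            .min(self.burst);
        *last = now;
        if *tokens >= 1.0 {
            *tokens -= 1.0;
            true
        } else {
            false
        }
    }
}

fn main() {
    let mut rl = SketchRateLimiter::new();
    // A burst of 20 is allowed; the 21st immediate request is denied.
    for _ in 0..20 {
        assert!(rl.check(Some("daemon-launcher")));
    }
    assert!(!rl.check(Some("daemon-launcher")));
    // Other buckets (named or anonymous) are unaffected.
    assert!(rl.check(Some("daemon-wm")));
    assert!(rl.check(None));
}
```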
Rate Limiter Reset
When a lock-all operation succeeds (no profile specified in LockRequest), the rate limiter is
reset to a fresh instance with empty buckets. This occurs in handle_lock_request() in
daemon-secrets/src/unlock.rs.
Test Coverage
The rate limiting module contains five tests (rate_001 through rate_005) that verify:
- Burst capacity of 20 requests is allowed (`rate_001`).
- The 21st request after burst exhaustion is denied (`rate_002`).
- Daemon buckets are independent (`rate_003`).
- The anonymous bucket is independent from named daemon buckets (`rate_004`).
- All anonymous clients share a single bucket (`rate_005`).
Secret Injection
The sesame CLI provides two commands for injecting vault secrets into running processes:
sesame env for spawning a child process with secrets as environment variables, and
sesame export for emitting secrets in shell, dotenv, or JSON format. Both commands enforce a
runtime denylist that blocks security-sensitive environment variable names.
sesame env
sesame env spawns a child process with all secrets from the specified profile(s) injected as
environment variables.
sesame env -p work -- my-application --flag
The command resolves profile specs from the -p flag or, if omitted, from the
SESAME_PROFILES environment variable. If neither is set, the default profile name is used.
The child process also receives a SESAME_PROFILES environment variable containing a CSV of
the resolved profile specs (e.g., work,braincraft:operations), allowing it to know its
security context.
After the child process exits, all secret byte vectors are zeroized via zeroize::Zeroize
before the parent process exits with the child’s exit code.
Multi-Profile Support
Multiple profiles can be specified as a comma-separated list:
sesame env -p "default,work" -- my-application
Secrets are fetched from each profile in order and merged with left-wins collision resolution:
if the same secret key name exists in multiple profiles, the value from the first profile in
the list is used. This is implemented in fetch_multi_profile_secrets() in
open-sesame/src/ipc.rs, which uses a HashSet to track seen key names.
Prefix
The --prefix flag prepends a string to all generated environment variable names:
sesame env -p work --prefix MYAPP -- my-application
# Secret "api-key" becomes MYAPP_API_KEY
sesame export
sesame export emits secrets in one of three formats, suitable for shell evaluation or file
generation.
Shell Format
sesame export -p work --format shell
Output:
export API_KEY="the-secret-value"
export DB_PASSWORD="another-value"
Dotenv Format
sesame export -p work --format dotenv
Output:
API_KEY="the-secret-value"
DB_PASSWORD="another-value"
JSON Format
sesame export -p work --format json
Output:
{"API_KEY":"the-secret-value","DB_PASSWORD":"another-value"}
After output, all intermediate string copies are zeroized via unsafe
as_bytes_mut().zeroize().
Secret Name to Environment Variable Conversion
The secret_key_to_env_var() function in open-sesame/src/env.rs converts secret key names
to environment variable names using the following rules:
| Input character | Output |
|---|---|
| Hyphen (-) | Underscore (_) |
| Dot (.) | Underscore (_) |
| ASCII alphanumeric | Uppercased |
| Underscore (_) | Preserved |
| All other characters | Underscore (_) |
The entire result is uppercased. If a prefix is provided, it is prepended with an underscore separator.
Examples (from tests in open-sesame/src/env.rs):
| Secret key | Prefix | Environment variable |
|---|---|---|
| api-key | None | API_KEY |
| api-key | MYAPP | MYAPP_API_KEY |
| db.host-name | None | DB_HOST_NAME |
Environment Variable Denylist
The DENIED_ENV_VARS constant in open-sesame/src/env.rs defines environment variable names
that must never be overwritten by secret injection. The is_denied_env_var() function checks
against this list using case-insensitive comparison. The BASH_FUNC_ prefix is matched as a
prefix (any variable starting with BASH_FUNC_ is denied).
If a secret’s converted name matches a denied variable, the secret is skipped with a warning printed to stderr. It is not injected into the child process or emitted in export output.
Full Denylist
Dynamic linker – arbitrary code execution:
`LD_PRELOAD`, `LD_LIBRARY_PATH`, `LD_AUDIT`, `LD_DEBUG`, `LD_DEBUG_OUTPUT`, `LD_DYNAMIC_WEAK`, `LD_PROFILE`, `LD_SHOW_AUXV`, `LD_BIND_NOW`, `LD_BIND_NOT`, `DYLD_INSERT_LIBRARIES`, `DYLD_LIBRARY_PATH`, `DYLD_FRAMEWORK_PATH`
Core execution environment:
`PATH`, `HOME`, `USER`, `SHELL`, `LOGNAME`, `LANG`, `TERM`, `DISPLAY`, `WAYLAND_DISPLAY`, `XDG_RUNTIME_DIR`
Shell injection vectors:
`BASH_ENV`, `ENV`, `BASH_FUNC_` (prefix match), `CDPATH`, `GLOBIGNORE`, `SHELLOPTS`, `BASHOPTS`, `PROMPT_COMMAND`, `PS1`, `PS2`, `PS4`, `MAIL`, `MAILPATH`, `MAILCHECK`, `IFS`
Language runtime code execution:
`PYTHONPATH`, `PYTHONSTARTUP`, `PYTHONHOME`, `NODE_OPTIONS`, `NODE_PATH`, `NODE_EXTRA_CA_CERTS`, `PERL5LIB`, `PERL5OPT`, `RUBYLIB`, `RUBYOPT`, `GOPATH`, `GOROOT`, `GOFLAGS`, `JAVA_HOME`, `CLASSPATH`, `JAVA_TOOL_OPTIONS`
Security and authentication:
`SSH_AUTH_SOCK`, `GPG_AGENT_INFO`, `KRB5_CONFIG`, `KRB5CCNAME`, `SSL_CERT_FILE`, `SSL_CERT_DIR`, `CURL_CA_BUNDLE`, `REQUESTS_CA_BUNDLE`, `GIT_SSL_CAINFO`, `NIX_SSL_CERT_FILE`
Nix:
`NIX_PATH`, `NIX_CONF_DIR`
Sudo and privilege escalation:
`SUDO_ASKPASS`, `SUDO_EDITOR`, `VISUAL`, `EDITOR`
Systemd and D-Bus:
`SYSTEMD_UNIT_PATH`, `DBUS_SESSION_BUS_ADDRESS`
Open Sesame namespace:
`SESAME_PROFILE`
Shell Escaping
The shell_escape() function in open-sesame/src/env.rs produces output safe for embedding
in double-quoted export statements. The following transformations are applied:
| Character | Output | Reason |
|---|---|---|
| \0 (null) | Stripped | C string truncation risk |
| " | \" | Shell metacharacter |
| \ | \\ | Shell metacharacter |
| $ | \$ | Variable expansion |
| ` | \` | Command substitution |
| ! | \! | History expansion |
| \n | \n (literal backslash-n) | Newline |
| \r | \r (literal backslash-r) | Carriage return |
JSON Escaping
The json_escape() function produces output safe for embedding in JSON string values:
| Character | Output |
|---|---|
| \0 (null) | Stripped |
| " | \" |
| \ | \\ |
| \n | \n |
| \r | \r |
| \t | \t |
| Other control characters | \uXXXX (Unicode escape) |
Cross-Profile Behavior
Open Sesame enforces strict per-profile isolation for secret storage while providing controlled mechanisms for accessing secrets from multiple profiles in a single session.
Profile Isolation Guarantees
Each trust profile is a cryptographically independent security domain. The following properties hold:
Independent Master Keys
Each profile has its own password, its own 16-byte random salt (stored at
{config_dir}/vaults/{profile}.salt), and its own Argon2id-derived master key. Knowing the
password for profile A reveals nothing about the master key for profile B, even if the user
chooses the same password for both, because the salts differ.
Independent Vault Keys
The vault key for each profile is derived via
core_crypto::derive_vault_key(master_key, profile_id) using the BLAKE3 context string
"pds v2 vault-key {profile_id}". Different profile IDs produce different vault keys even
from the same master key. The different_profiles_produce_different_keys test in
core-crypto/src/hkdf.rs verifies this property.
Independent Database Files
Each profile’s secrets are stored in a separate SQLCipher database file at
{config_dir}/vaults/{profile_name}.db. There is no shared database. Opening profile A’s
database file with profile B’s vault key fails at the
SELECT count(*) FROM sqlite_master verification step in SqlCipherStore::open().
Independent Unlock State
Each profile is unlocked independently via UnlockRequest with an optional profile field.
The VaultState struct in daemon-secrets/src/vault.rs maintains per-profile state in
several maps:
- `master_keys: HashMap<TrustProfileName, SecureBytes>` – per-profile master keys.
- `vaults: HashMap<TrustProfileName, JitDelivery<SqlCipherStore>>` – per-profile open vault handles.
- `active_profiles: HashSet<TrustProfileName>` – profiles authorized for secret access.
- `partial_unlocks: HashMap<TrustProfileName, PartialUnlock>` – in-progress multi-factor unlock sessions.
Multiple profiles may be unlocked and active concurrently. There is no global “locked” state; the daemon starts with empty maps and each profile is unlocked individually.
Independent Deactivation
Locking a single profile (LockRequest with a profile field) removes only that profile’s
master key, vault handle, partial unlock state, and keyring entry. Other profiles remain
unlocked and accessible.
Cross-Profile Tag References
The profile spec format used by sesame env and sesame export supports an org:vault
syntax for referencing profiles with organizational namespaces:
default --> ProfileSpec { org: None, vault: "default" }
braincraft:operations --> ProfileSpec { org: Some("braincraft"), vault: "operations" }
This parsing is implemented in parse_profile_specs() in open-sesame/src/ipc.rs. The org
field is currently informational – it is included in the SESAME_PROFILES CSV injected into
child processes but does not affect vault lookup. The vault field is used as the
TrustProfileName for IPC requests.
The format is designed for future extension to container registry-style references
(e.g., docker.io/project/org:vault@sha256).
Multi-Profile Secret Injection
The sesame env and sesame export commands accept a comma-separated list of profile specs:
sesame env -p "default,work" -- my-application
sesame export -p "default,work,braincraft:operations" --format json
The profile list can also be set via the SESAME_PROFILES environment variable, which is
checked when the -p flag is omitted. Resolution order is implemented in
resolve_profile_specs() in open-sesame/src/ipc.rs:
- If `-p` is provided, use it.
- Otherwise, read `SESAME_PROFILES` from the environment.
- If neither is set, use the default profile name.
Merge Behavior
fetch_multi_profile_secrets() iterates over the profile specs in order. For each profile, it
fetches all secret keys via SecretList, then fetches each value via SecretGet. Keys are
merged into the result with left-wins collision resolution: the first profile in the list that
contains a given key wins. A HashSet<String> tracks which key names have already been seen.
If a profile has no secrets, a warning is printed to stderr but processing continues with the remaining profiles.
Denylist Enforcement
After the secret key name is converted to an environment variable name (via
secret_key_to_env_var()), the result is checked against the denylist
(is_denied_env_var()). Denied variables are skipped with a warning on stderr. This check
applies identically regardless of which profile the secret originated from.
What Crosses Profile Boundaries
| Resource | Crosses boundaries? | Mechanism |
|---|---|---|
| Secret values | No | Each profile’s vault is encrypted with a unique key. |
| Secret key names | No | Key names are only visible within a single profile’s SecretList response. |
| Master keys | No | Each profile has an independent master key derived from its own salt. |
| Environment variables | Yes, at injection time | sesame env -p "a,b" merges secrets from both profiles into a single child process environment. |
| Vault database files | No | Each profile has its own .db file. |
| Salt files | No | Each profile has its own .salt file. |
| JIT cache entries | No | JitDelivery instances are per-profile in the vaults map. |
| Rate limit buckets | No (per-daemon, not per-profile) | Rate limiting is keyed on daemon identity, not profile. |
| ACL rules | No | ACL rules are defined per-profile under [profiles.<name>.secrets.access]. |
| Platform keyring entries | No | Keyring operations are per-profile (keyring_store_profile, keyring_delete_profile). |
The only mechanism by which secrets from different profiles can coexist in the same memory
space is the sesame env / sesame export multi-profile merge, which operates in the CLI
process after secrets have been fetched via IPC from independently unlocked vaults.
Multi-Profile Unlock
Each profile must be unlocked independently before its secrets can be accessed. The
sesame unlock command accepts a -p flag:
sesame unlock -p default
sesame unlock -p work
There is no batch unlock command that accepts multiple profiles in a single invocation. Each
UnlockRequest IPC message targets a single profile. If a profile is already unlocked, the
daemon rejects the request with UnlockRejectedReason::AlreadyUnlocked.
Locking supports both single-profile and all-profile modes:
sesame lock -p work # Lock only the "work" profile
sesame lock # Lock all profiles
Lock-all removes all master keys, flushes all JIT caches, scrubs all C-level key buffers, deletes all keyring entries, clears all partial unlock state, and resets the rate limiter.
Factor Architecture
This page describes the pluggable authentication backend system in core-auth. The system defines a
trait-based dispatch mechanism that allows multiple authentication methods to coexist, with the
AuthDispatcher coordinating backend selection at unlock time.
AuthFactorId
The AuthFactorId enum in core-types/src/auth.rs identifies each authentication factor type.
Six variants exist:
| Variant | Config string | Status |
|---|---|---|
Password | password | Implemented |
SshAgent | ssh-agent | Implemented |
Fido2 | fido2 | Defined, no backend |
Tpm | tpm | Defined, no backend |
Fingerprint | fingerprint | Defined, no backend |
Yubikey | yubikey | Defined, no backend |
The enum derives Serialize, Deserialize, Copy, Hash, Ord, and uses
#[serde(rename_all = "kebab-case")]. The four future variants (Fido2, Tpm, Fingerprint,
Yubikey) are defined to permit forward-compatible policy configuration: a vault metadata file can
reference these factor types in its auth_policy before their backends are implemented.
AuthFactorId::from_config_str() parses the config-file string form.
AuthFactorId::as_config_str() returns the static string. The Display implementation
delegates to as_config_str().
VaultAuthBackend Trait
Defined in core-auth/src/backend.rs, the VaultAuthBackend trait is the extension point for
adding new authentication methods. It requires Send + Sync and uses #[async_trait].
Required Methods
| Method | Signature | Purpose |
|---|---|---|
factor_id | fn(&self) -> AuthFactorId | Which factor this backend provides |
name | fn(&self) -> &str | Human-readable name for audit logs and overlay display |
backend_id | fn(&self) -> &str | Short identifier for IPC messages and config |
is_enrolled | fn(&self, profile, config_dir) -> bool | Whether enrollment artifacts exist on disk |
can_unlock | async fn(&self, profile, config_dir) -> bool | Whether unlock can currently succeed (must complete in <100ms) |
requires_interaction | fn(&self) -> AuthInteraction | What kind of user interaction is needed |
unlock | async fn(&self, profile, config_dir, salt) -> Result<UnlockOutcome, AuthError> | Derive or unwrap the master key |
enroll | async fn(&self, profile, master_key, config_dir, salt, selected_key_index) -> Result<(), AuthError> | Create enrollment artifacts for a profile |
revoke | async fn(&self, profile, config_dir) -> Result<(), AuthError> | Remove enrollment artifacts |
The enroll method accepts an optional selected_key_index for backends that offer multiple
eligible keys (e.g., SSH agent with multiple loaded keys). If None, the backend picks the
first eligible key.
AuthInteraction
The AuthInteraction enum describes the interaction model:
- `None` – Backend can unlock silently (SSH agent with a software key, future TPM, future keyring).
- `PasswordEntry` – Keyboard input required.
- `HardwareTouch` – Physical touch on a hardware token (future FIDO2, PIV with touch policy).
FactorContribution
The FactorContribution enum describes what a backend provides to the unlock process:
- `CompleteMasterKey` – The backend independently unwraps or derives a complete 32-byte master key. Used in `Any` and `Policy` modes.
- `FactorPiece` – The backend provides a piece that must be combined with pieces from other factors via BLAKE3 `derive_key`. Used in `All` mode.
VaultMetadata::contribution_type() returns FactorPiece when auth_policy is All, and
CompleteMasterKey for Any and Policy.
UnlockOutcome
The UnlockOutcome struct is returned by a successful unlock() call:
- `master_key: SecureBytes` – The 32-byte master key (mlock’d, zeroize-on-drop).
- `audit_metadata: BTreeMap<String, String>` – Backend-specific metadata for audit logging (e.g., `ssh_fingerprint`, `key_type`).
- `ipc_strategy: IpcUnlockStrategy` – Which IPC message type to use (`PasswordUnlock` or `DirectMasterKey`).
- `factor_id: AuthFactorId` – Which factor this outcome represents.
IpcUnlockStrategy
- `PasswordUnlock` – Use the `UnlockRequest` IPC message; daemon-secrets performs the KDF.
- `DirectMasterKey` – Use the `SshUnlockRequest` or `FactorSubmit` IPC message with a pre-derived master key.
Both implemented backends (PasswordBackend and SshAgentBackend) use DirectMasterKey. The
password backend derives the KEK client-side via Argon2id and unwraps the master key before
sending it over IPC.
AuthDispatcher
Defined in core-auth/src/dispatcher.rs, the AuthDispatcher holds a
Vec<Box<dyn VaultAuthBackend>> and provides methods for backend discovery and selection.
Construction
AuthDispatcher::new() registers two backends in priority order:
- `SshAgentBackend` (non-interactive)
- `PasswordBackend` (interactive fallback)
Methods
backends(&self) -> &[Box<dyn VaultAuthBackend>] – Access all registered backends.
applicable_backends(profile, config_dir, meta) -> Vec<&dyn VaultAuthBackend> – Returns
backends that are both enrolled in the vault metadata (meta.has_factor(backend.factor_id()))
AND can currently perform an unlock (backend.can_unlock()). Used by the CLI to determine
which factors to attempt.
find_auto_backend(profile, config_dir) -> Option<&dyn VaultAuthBackend> – Returns the
first backend where requires_interaction() == AuthInteraction::None, is_enrolled() is true,
and can_unlock() is true. Does not consult vault metadata – checks enrollment files directly
on disk.
can_auto_unlock(profile, config_dir, meta) -> bool – Policy-aware auto-unlock
feasibility check:
- `Any` mode: delegates to `find_auto_backend()` – a single non-interactive backend suffices.
- `All` or `Policy` mode: all applicable backends must be non-interactive. Returns `false` conservatively if any required factor needs interaction.
password_backend(&self) -> &dyn VaultAuthBackend – Returns the password backend. Panics
if not registered (programming error – the constructor always registers it).
VaultMetadata
Defined in core-auth/src/vault_meta.rs, VaultMetadata is the JSON-serialized record of a
vault’s authentication state. Stored at {config_dir}/vaults/{profile}.vault-meta with
permissions 0o600.
Fields
| Field | Type | Purpose |
|---|---|---|
version | u32 | Format version (currently 1) |
init_mode | VaultInitMode | How the vault was originally initialized |
enrolled_factors | Vec<EnrolledFactor> | Which auth methods are enrolled |
auth_policy | AuthCombineMode | Unlock policy for this vault |
created_at | u64 | Unix epoch seconds of vault creation |
policy_changed_at | u64 | Unix epoch seconds of last policy change |
VaultInitMode
- `Password` – Initialized with password only.
- `SshKeyOnly` – Initialized with SSH key only (random master key, no password).
- `MultiFactor { factors: Vec<AuthFactorId> }` – Initialized with multiple factors.
EnrolledFactor
Each enrolled factor records:
- `factor_id: AuthFactorId` – The factor type.
- `label: String` – Human-readable label (e.g., SSH key fingerprint, “master password”).
- `enrolled_at: u64` – Unix epoch seconds.
Version Gating
VaultMetadata::load() rejects metadata where version > MAX_SUPPORTED_VERSION (currently
1). This prevents a newer binary from silently misinterpreting a vault metadata format it
does not understand.
Persistence
JSON is used rather than TOML to distinguish machine-managed metadata from user-editable
configuration. Writes use atomic rename via a .vault-meta.tmp intermediate file. File
permissions are set to 0o600 on Unix before the rename.
Factory Methods
- `new_password(auth_policy)` – Creates metadata with a single `Password` enrolled factor.
- `new_ssh_only(fingerprint, auth_policy)` – Creates metadata with a single `SshAgent` enrolled factor.
- `new_multi_factor(factors, auth_policy)` – Creates metadata with arbitrary enrolled factors.
Factor Management
- `has_factor(factor_id) -> bool` – Check enrollment.
- `add_factor(factor_id, label)` – Idempotent add (no-op if already enrolled).
- `remove_factor(factor_id)` – Remove by factor ID.
- `contribution_type() -> FactorContribution` – Returns `FactorPiece` for `All` mode, `CompleteMasterKey` for `Any`/`Policy`.
Adding a New Factor
To add a new authentication factor (e.g., FIDO2):
- The `AuthFactorId` variant already exists in `core-types/src/auth.rs` (e.g., `Fido2`).
- Create a new module in `core-auth/src/` implementing a struct (e.g., `Fido2Backend`).
- Implement `VaultAuthBackend` for the struct:
  - `factor_id()` returns the corresponding `AuthFactorId` variant.
  - `is_enrolled()` checks for the factor’s enrollment artifact on disk.
  - `can_unlock()` checks whether the hardware or service is available.
  - `requires_interaction()` returns the appropriate `AuthInteraction` variant.
  - `unlock()` derives or unwraps the 32-byte master key.
  - `enroll()` wraps the master key under the factor’s KEK and writes an enrollment blob.
  - `revoke()` zeroizes and deletes the enrollment blob.
- Register the backend in `AuthDispatcher::new()` at the appropriate priority position (non-interactive backends before interactive ones).
- The CLI unlock flow in `open-sesame/src/unlock.rs` handles unknown factors by reporting that the factor is not yet supported. Adding a match arm in `try_auto_factor()` (for non-interactive factors) or the phase 3 loop (for interactive factors) enables CLI support.
No changes to daemon-secrets are required – the FactorSubmit IPC handler and
PartialUnlock state machine operate on AuthFactorId and SecureBytes generically.
Policy Engine
This page describes the multi-factor authentication policy system. Policies are declared in configuration, persisted in vault metadata, and enforced by daemon-secrets at unlock time through a partial unlock state machine.
AuthCombineMode
Defined in core-types/src/auth.rs, the AuthCombineMode enum determines both the key wrapping
scheme at initialization and the unlock policy evaluation at runtime. It derives Serialize,
Deserialize, and uses #[serde(rename_all = "kebab-case")].
Any (default)
AuthCombineMode::Any
The master key is a random 32-byte value generated via getrandom. Each enrolled factor
independently wraps this master key under its own KEK (Argon2id-derived for password,
BLAKE3-derived for SSH). Any single enrolled factor can unlock the vault alone.
At unlock time in daemon-secrets, the first valid factor submitted completes the unlock
immediately. The PartialUnlock state machine clears all remaining requirements when Any
mode is detected:
if matches!(meta.auth_policy, AuthCombineMode::Any) {
    partial.remaining_required.clear();
    partial.remaining_additional = 0;
}
All
AuthCombineMode::All
Every enrolled factor must be provided at unlock time. Each factor contributes a “piece” (its
unwrapped key material). Once all pieces are collected, daemon-secrets combines them into the
master key via BLAKE3 derive_key:
- Factor pieces are sorted by `AuthFactorId` (which derives `Ord`).
- The sorted pieces are concatenated.
- BLAKE3 `derive_key` is called with context `"pds v2 combined-master-key {profile_id}"` and the concatenated bytes as input.
- The result is a 32-byte master key.
The KDF context constant is ALL_MODE_KDF_CONTEXT defined in daemon-secrets/src/vault.rs
as "pds v2 combined-master-key".
The VaultMetadata::contribution_type() method returns FactorContribution::FactorPiece for
All mode. daemon-secrets checks this to decide whether to verify each factor’s key material
against the vault DB independently (it does not – verification only happens after
combination).
Policy
AuthCombineMode::Policy(AuthPolicy {
required: Vec<AuthFactorId>,
additional_required: u32,
})
A policy expression combining mandatory factors with a threshold of additional factors. Key
wrapping uses independent wraps (same as Any mode – each factor wraps the same random
master key). Policy enforcement happens at the daemon level.
- `required`: Factors that must always succeed. Every factor in this list must be submitted.
- `additional_required`: How many additional enrolled factors (beyond those in `required`) must also succeed.
Example: required: [Password], additional_required: 1 means the password is always required,
plus one more factor (e.g., SSH agent or a future FIDO2 token).
FactorContribution is CompleteMasterKey for Policy mode – each factor independently
unwraps the same master key.
Configuration
Auth policy is configured in config.toml under [profiles.<name>.auth], defined by the
AuthConfig struct in core-config/src/schema_secrets.rs:
[profiles.default.auth]
mode = "any" # "any", "all", or "policy"
required = ["password", "ssh-agent"] # For mode="policy" only
additional_required = 1 # For mode="policy" only
AuthConfig::to_typed() converts the string-based config representation to AuthCombineMode.
It validates that all factor names in required are recognized via
AuthFactorId::from_config_str(). The default AuthConfig uses mode "any" with empty
required and additional_required = 0.
PartialUnlock State Machine
Defined in daemon-secrets/src/vault.rs, the PartialUnlock struct tracks in-progress
multi-factor unlocks. At most one PartialUnlock exists per profile, stored in
VaultState::partial_unlocks.
State
| Field | Type | Purpose |
|---|---|---|
received_factors | HashMap<AuthFactorId, SecureBytes> | Factor keys received so far |
remaining_required | HashSet<AuthFactorId> | Factors still needed |
remaining_additional | u32 | Additional factors still needed beyond required |
deadline | tokio::time::Instant | Expiration time |
Lifecycle
- Creation: A `PartialUnlock` is created on the first `FactorSubmit` for a profile. The `remaining_required` and `remaining_additional` fields are initialized from the vault’s `AuthCombineMode`.
- Factor acceptance: Each `FactorSubmit` records the factor’s key material in `received_factors` and removes the factor from `remaining_required`. If the factor is not in the required set and `remaining_additional > 0`, the additional counter is decremented.
- Completion check: `is_complete()` returns `true` when `remaining_required` is empty AND `remaining_additional == 0`.
- Promotion: When complete, the partial state is removed from the map and the master key is either taken directly (for `Any`/`Policy` mode, the first received factor’s key) or derived by combining all pieces (for `All` mode).
- Expiration: `is_expired()` checks whether `tokio::time::Instant::now() >= deadline`. Expired partials are rejected on the next `FactorSubmit` and removed from the map.
Timeouts
- `PARTIAL_UNLOCK_TIMEOUT_SECS`: 120 seconds. The deadline for collecting all required factors after the first factor is submitted.
- `PARTIAL_UNLOCK_SWEEP_INTERVAL_SECS`: 30 seconds. The interval at which daemon-secrets sweeps and discards expired partial unlock state.
Key Combination (All Mode)
When all factors have been received in All mode, daemon-secrets combines them:
let mut pieces: Vec<_> = partial.received_factors.into_iter().collect();
pieces.sort_by_key(|(id, _)| *id);
let mut combined = Vec::new();
for (_id, piece) in &pieces {
    combined.extend_from_slice(piece.as_bytes());
}
let ctx_str = format!("{ALL_MODE_KDF_CONTEXT} {target}");
let derived: [u8; 32] = blake3::derive_key(&ctx_str, &combined);
combined.zeroize();
The sorting by AuthFactorId ensures deterministic ordering regardless of submission order.
CLI Unlock Flow
The CLI unlock command in open-sesame/src/unlock.rs orchestrates factor submission in three
phases:
Phase 1: Auto-Submit Non-Interactive Factors
The CLI iterates over all enrolled factors and calls try_auto_factor() for each. Currently,
only AuthFactorId::SshAgent is handled – it checks can_unlock() on the SshAgentBackend,
and if available, calls unlock() to derive the master key client-side and submits it via
FactorSubmit IPC.
If the vault uses Any mode and the SSH agent succeeds, the vault is fully unlocked and no
further factors are needed.
Phase 2: Query Remaining Factors
The CLI sends a VaultAuthQuery IPC message to daemon-secrets, which returns:
- `enrolled_factors`: All enrolled factor IDs.
- `auth_policy`: The vault’s `AuthCombineMode`.
- `partial_in_progress`: Whether a `PartialUnlock` exists.
- `received_factors`: Which factors have already been accepted.
The CLI filters out already-received factors to determine what remains.
Phase 3: Prompt Interactive Factors
The CLI iterates over remaining factors:
- Password: Prompts for the password (via `dialoguer` if on a terminal, or reads from stdin), derives the master key client-side using `PasswordBackend::unlock()`, and submits via `FactorSubmit`.
- Other factors: The CLI reports that the factor is not yet supported and exits with an error.
Each FactorSubmit response includes unlock_complete, remaining_factors, and
remaining_additional, allowing the CLI to track progress.
Factor Submission IPC
The submit_factor() function sends EventKind::FactorSubmit with:
- `factor_id`: Which factor type.
- `key_material`: The master key in a `SensitiveBytes` (mlock’d `ProtectedAlloc`).
- `profile`: Target profile name.
- `audit_metadata`: Backend-specific audit fields.
The daemon responds with EventKind::FactorResponse containing acceptance status, completion
status, and remaining factor information.
Daemon-Side Verification
For Any and Policy modes (CompleteMasterKey contribution), daemon-secrets verifies each
submitted factor’s key material against the vault database before accepting it. It derives the
vault key via core_crypto::derive_vault_key() and attempts to open the SQLCipher database. If
the open fails (wrong key, GCM authentication failure), the factor is rejected.
For All mode (FactorPiece contribution), individual pieces cannot be verified against the
vault database. Verification happens after all pieces are combined into the master key.
Password Backend
This page describes the password authentication backend implemented in
core-auth/src/password.rs and core-auth/src/password_wrap.rs. The backend uses Argon2id to
derive a key-encrypting key (KEK) from user-provided password bytes, then wraps or unwraps a
32-byte master key using AES-256-GCM.
PasswordBackend
The PasswordBackend struct holds an optional SecureVec containing password bytes. Password
bytes must be injected via with_password() (builder pattern) or set_password() (mutation)
before calling unlock() or enroll(). The SecureVec type provides mlock’d memory and
zeroize-on-drop semantics.
Trait Implementation
| Method | Behavior |
|---|---|
factor_id() | Returns AuthFactorId::Password |
name() | Returns "Password" |
backend_id() | Returns "password" |
is_enrolled(profile, config_dir) | Checks whether {config_dir}/vaults/{profile}.password-wrap exists |
can_unlock(profile, config_dir) | Returns true only if enrolled AND password bytes have been set |
requires_interaction() | Returns AuthInteraction::PasswordEntry |
KEK Derivation
The derive_kek() method performs:
- Validate the salt is exactly 16 bytes.
- Call `core_crypto::derive_key_argon2(password, salt)` – Argon2id with project-wide parameters.
- Copy the first 32 bytes of the Argon2id output into a `[u8; 32]` KEK array.
The Argon2id parameters are defined in core-crypto (not in core-auth).
Unlock Flow
- Read password bytes from the stored `SecureVec`. Fail with `BackendNotApplicable` if no password was set.
- Load the `PasswordWrapBlob` from `{config_dir}/vaults/{profile}.password-wrap`.
- Derive the KEK via `derive_kek(password, salt)`.
- Call `blob.unwrap(&mut kek)` to decrypt the master key via AES-256-GCM. The KEK is zeroized after use.
- Return an `UnlockOutcome` with `ipc_strategy: DirectMasterKey` and `factor_id: Password`.
Enrollment Flow
- Read password bytes from the stored `SecureVec`.
- Derive the KEK via `derive_kek(password, salt)`.
- Call `PasswordWrapBlob::wrap(master_key, &mut kek)` to encrypt the master key. The KEK is zeroized after use.
- Write the blob to disk via `blob.save(config_dir, profile)`.
Revocation
Revocation overwrites the wrap file with zeros before deletion to prevent casual recovery from disk:
- Read the file length.
- Write a zero-filled buffer of the same length.
- Delete the file via `std::fs::remove_file`.
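The overwrite-then-delete sequence can be sketched in plain Rust. The helper name is hypothetical; note that on journaling or copy-on-write filesystems the old blocks may survive, which is why the text above claims only prevention of *casual* recovery:

```rust
use std::fs;
use std::io::Write;
use std::path::Path;

/// Best-effort scrub: overwrite the wrap file with zeros, flush, then delete.
/// (Hypothetical helper; the real logic lives in the password backend.)
fn revoke_wrap_file(path: &Path) -> std::io::Result<()> {
    // Read the file length.
    let len = fs::metadata(path)?.len() as usize;
    // Write a zero-filled buffer of the same length.
    let mut f = fs::OpenOptions::new().write(true).open(path)?;
    f.write_all(&vec![0u8; len])?;
    f.sync_all()?; // force the zeros to disk before unlinking
    drop(f);
    // Delete the file.
    fs::remove_file(path)
}
```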
PasswordWrapBlob
Defined in core-auth/src/password_wrap.rs, the PasswordWrapBlob struct represents the
on-disk binary format for the AES-256-GCM wrapped master key.
Binary Format
| Offset | Length | Field |
|---|---|---|
| 0 | 1 | Version byte (0x01) |
| 1 | 12 | Nonce (random, generated via getrandom) |
| 13 | 48 | Ciphertext (32-byte master key + 16-byte GCM tag) |

Total size: 61 bytes.
The version constant PASSWORD_WRAP_VERSION is 0x01.
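A minimal (de)serializer for this 61-byte layout can be sketched as follows. The struct and error handling are simplified stand-ins for the real core-auth types; only the byte layout is taken from the table above:

```rust
/// On-disk layout: 1-byte version || 12-byte nonce || 48-byte ciphertext.
const PASSWORD_WRAP_VERSION: u8 = 0x01;

struct PasswordWrapBlob {
    nonce: [u8; 12],
    ciphertext: [u8; 48], // 32-byte master key + 16-byte GCM tag
}

impl PasswordWrapBlob {
    fn serialize(&self) -> Vec<u8> {
        let mut out = Vec::with_capacity(61);
        out.push(PASSWORD_WRAP_VERSION);
        out.extend_from_slice(&self.nonce);
        out.extend_from_slice(&self.ciphertext);
        out
    }

    fn deserialize(data: &[u8]) -> Result<Self, &'static str> {
        // Reject short blobs and unknown versions
        // (AuthError::InvalidBlob in the real code).
        if data.len() < 61 || data[0] != PASSWORD_WRAP_VERSION {
            return Err("invalid blob");
        }
        let mut nonce = [0u8; 12];
        nonce.copy_from_slice(&data[1..13]);
        let mut ciphertext = [0u8; 48];
        ciphertext.copy_from_slice(&data[13..61]);
        Ok(Self { nonce, ciphertext })
    }
}
```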
Wrapping (Encryption)
PasswordWrapBlob::wrap(master_key, kek_bytes):
- Construct an `EncryptionKey` from the 32-byte KEK.
- Generate a 12-byte random nonce via `getrandom`.
- Encrypt the master key with AES-256-GCM using the KEK and nonce.
- Zeroize the KEK bytes.
- Return the blob containing version, nonce, and ciphertext.
Unwrapping (Decryption)
PasswordWrapBlob::unwrap(kek_bytes):
- Construct an `EncryptionKey` from the 32-byte KEK.
- Zeroize the KEK bytes immediately after key construction.
- Decrypt using AES-256-GCM with the stored nonce and ciphertext.
- Return the plaintext as `SecureBytes` (mlock’d, zeroize-on-drop).
- If GCM authentication fails (wrong password), return `AuthError::UnwrapFailed`.
Deserialization
PasswordWrapBlob::deserialize(data) rejects:
- Data shorter than 61 bytes (`AuthError::InvalidBlob`).
- Version bytes other than `0x01` (`AuthError::InvalidBlob`).
Persistence
Path: {config_dir}/vaults/{profile}.password-wrap
Write: save() uses atomic rename via a .password-wrap.tmp intermediate file. On Unix,
file permissions are set to 0o600 (owner read/write only) before the rename. The parent
vaults/ directory is created if it does not exist.
Read: load() reads the file and calls deserialize().
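The atomic-rename write described above can be sketched with std alone (Unix-only because of the permission bits; the helper name is hypothetical):

```rust
use std::fs;
use std::io::Write;
use std::os::unix::fs::PermissionsExt;
use std::path::Path;

/// Write to a `.password-wrap.tmp` sibling, restrict permissions, then
/// rename into place so readers never observe a partial file.
fn save_atomic(path: &Path, bytes: &[u8]) -> std::io::Result<()> {
    if let Some(parent) = path.parent() {
        fs::create_dir_all(parent)?; // create vaults/ if it does not exist
    }
    let tmp = path.with_extension("password-wrap.tmp");
    let mut f = fs::File::create(&tmp)?;
    f.write_all(bytes)?;
    f.sync_all()?;
    // 0o600 (owner read/write only), set BEFORE the rename makes it visible.
    fs::set_permissions(&tmp, fs::Permissions::from_mode(0o600))?;
    fs::rename(&tmp, path) // atomic on the same filesystem
}
```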
Zeroization
The PasswordWrapBlob struct implements Drop to zeroize its nonce and ciphertext
fields. All KEK arrays are zeroized immediately after use in both wrap() and unwrap().
Salt
Each profile has an independent 16-byte salt stored at {config_dir}/vaults/{profile}.salt.
The salt is generated via getrandom during vault initialization
(daemon-secrets/src/unlock.rs::generate_profile_salt).
During sesame init, the salt file is written with:
- The `vaults/` directory created with permissions `0o700`.
- The salt file itself written via `core_config::atomic_write` and then set to permissions `0o600`.
The salt is used as input to both the Argon2id KDF (password backend) and the BLAKE3 challenge derivation (SSH agent backend). Using a per-profile salt ensures that the same password produces different KEKs for different profiles.
Key Material Handling
The password backend’s key material lifecycle:
- **Password bytes**: Stored in `SecureVec` (mlock’d, zeroize-on-drop). Acquired from the user via `dialoguer::Password` (terminal) or stdin (pipe). The `String` holding the raw password is zeroized immediately after copying into the `SecureVec`.
- **KEK (Argon2id output)**: A `[u8; 32]` stack array. Zeroized by `PasswordWrapBlob::wrap()` and `PasswordWrapBlob::unwrap()` after use.
- **Master key**: Returned as `SecureBytes` (backed by `ProtectedAlloc` – mlock’d, mprotect’d, zeroize-on-drop). Transferred to daemon-secrets via the `SensitiveBytes` IPC wrapper, which also uses `ProtectedAlloc`.
At no point does the master key exist in an unprotected heap allocation. The KEK exists briefly on the stack and is zeroized before the function returns.
SSH Agent Backend
This page describes the SSH agent authentication backend implemented in core-auth/src/ssh.rs
and core-auth/src/ssh_types.rs. The backend connects to the user’s SSH agent, signs a
deterministic challenge, derives a KEK from the signature via BLAKE3, and wraps or unwraps
the vault master key using AES-256-GCM.
SshAgentBackend
The SshAgentBackend struct is a zero-sized type. All state lives in the SSH agent process
and on-disk enrollment blobs.
Trait Implementation
| Method | Behavior |
|---|---|
| factor_id() | Returns AuthFactorId::SshAgent |
| name() | Returns "SSH Agent" |
| backend_id() | Returns "ssh-agent" |
| is_enrolled(profile, config_dir) | Checks whether {config_dir}/vaults/{profile}.ssh-enrollment exists |
| can_unlock(profile, config_dir) | Enrolled, blob is parseable, and the enrolled key’s fingerprint is present in the running agent |
| requires_interaction() | Returns AuthInteraction::None |
The can_unlock() check connects to the SSH agent via spawn_blocking (the
ssh-agent-client-rs crate uses synchronous Unix socket I/O) and searches the agent’s
identity list for a key matching the fingerprint stored in the enrollment blob.
Challenge Construction
The challenge is a deterministic 32-byte value derived from the profile name and salt:
```text
context   = "pds v2 ssh-challenge {profile_name}"
challenge = BLAKE3::derive_key(context, salt)
```
The same profile name and salt always produce the same challenge. Different profiles or salts produce different challenges. This determinism is essential because the backend must produce the same challenge at both enrollment and unlock time.
Signature to KEK Derivation
After the SSH agent signs the challenge, the raw signature bytes are fed into a second BLAKE3
derive_key call:
```text
context = "pds v2 ssh-vault-kek {profile_name}"
kek     = BLAKE3::derive_key(context, signature_bytes)
```
The raw signature bytes are zeroized immediately after KEK derivation. The KEK is a 32-byte value used as an AES-256-GCM key to wrap or unwrap the master key.
This two-step derivation (challenge from salt, KEK from signature) ensures:
- The KEK is bound to both the profile identity and the specific SSH key.
- The signature is never stored – only the wrapped master key is persisted.
- The BLAKE3 derivation provides domain separation between the challenge and KEK contexts.
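The two-step structure can be sketched in Rust. To keep the sketch dependency-free and clearly hypothetical, both the KDF (standing in for `blake3::derive_key(context, key_material)`) and the agent signing operation are injected as parameters; only the context strings are taken from the text above:

```rust
/// Sketch of challenge-then-KEK derivation. `kdf` stands in for BLAKE3
/// derive_key; `sign` stands in for the SSH agent signing operation.
fn derive_ssh_kek(
    profile: &str,
    salt: &[u8],
    kdf: impl Fn(&str, &[u8]) -> [u8; 32],
    sign: impl Fn(&[u8; 32]) -> Vec<u8>,
) -> [u8; 32] {
    // Step 1: deterministic challenge bound to profile name + salt.
    let challenge = kdf(&format!("pds v2 ssh-challenge {profile}"), salt);
    // Step 2: the agent signs the challenge; the (deterministic) signature
    // becomes the key material for the KEK. The real code zeroizes the
    // signature bytes immediately after this derivation.
    let signature = sign(&challenge);
    kdf(&format!("pds v2 ssh-vault-kek {profile}"), &signature)
}
```

Because the contexts differ (`ssh-challenge` vs `ssh-vault-kek`), the two derivations are domain-separated even though they use the same KDF.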
Supported Key Types
Defined in core-auth/src/ssh_types.rs, the SshKeyType enum restricts which SSH key types
can be used:
| Type | Wire name | Determinism |
|---|---|---|
| Ed25519 | ssh-ed25519 | Deterministic by specification (RFC 8032) |
| Rsa | ssh-rsa | PKCS#1 v1.5 padding uses no randomness; ssh-agent-client-rs hard-codes SHA-512 |
Excluded key types:
- ECDSA (`ecdsa-sha2-nistp256`, etc.): Non-deterministic. Uses a random `k` value per signature. A different signature on each unlock would produce a different KEK and fail to unwrap the enrollment blob.
- RSA-PSS: Non-deterministic. Uses a random salt per signature.
SshKeyType::from_algorithm() converts from ssh_key::Algorithm, rejecting non-deterministic
types with AuthError::UnsupportedKeyType. SshKeyType::from_wire_name() parses the SSH wire
format string.
EnrollmentBlob
The EnrollmentBlob struct persists the SSH-agent enrollment on disk at
{config_dir}/vaults/{profile}.ssh-enrollment.
Binary Format
| Offset | Length | Field |
|---|---|---|
| 0 | 1 | Version byte (0x01) |
| 1 | 2 | Key fingerprint length N (big-endian u16) |
| 3 | N | Key fingerprint (ASCII, e.g. "SHA256:...") |
| 3+N | 1 | Key type length M (u8) |
| 4+N | M | Key type wire name (ASCII, e.g. "ssh-ed25519") |
| 4+N+M | 12 | Nonce (random) |
| 16+N+M | 48 | Ciphertext (32-byte master key + 16-byte GCM tag) |
The version constant ENROLLMENT_VERSION is 0x01.
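The variable-length layout, including the 256-byte fingerprint cap described under Security below, can be sketched as follows (simplified stand-ins for the real core-auth types):

```rust
/// Serializer/parser sketch for the variable-length enrollment layout.
const ENROLLMENT_VERSION: u8 = 0x01;
const MAX_FINGERPRINT_LEN: usize = 256; // cap against allocation attacks

fn serialize_enrollment(fp: &str, key_type: &str, nonce: &[u8; 12], ct: &[u8; 48]) -> Vec<u8> {
    let mut out = Vec::new();
    out.push(ENROLLMENT_VERSION);
    out.extend_from_slice(&(fp.len() as u16).to_be_bytes()); // big-endian u16
    out.extend_from_slice(fp.as_bytes());
    out.push(key_type.len() as u8);
    out.extend_from_slice(key_type.as_bytes());
    out.extend_from_slice(nonce);
    out.extend_from_slice(ct);
    out
}

/// Parse just the fingerprint field, enforcing the length cap before any
/// slicing (AuthError::InvalidBlob in the real code).
fn fingerprint_of(data: &[u8]) -> Result<&[u8], &'static str> {
    if data.len() < 3 || data[0] != ENROLLMENT_VERSION {
        return Err("invalid blob");
    }
    let n = u16::from_be_bytes([data[1], data[2]]) as usize;
    if n > MAX_FINGERPRINT_LEN || data.len() < 3 + n {
        return Err("invalid blob");
    }
    Ok(&data[3..3 + n])
}
```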
Security
- Fingerprint length is capped at 256 bytes during deserialization to prevent allocation attacks from malformed blobs.
- File permissions are set to `0o600` before atomic rename.
- Revocation overwrites the file with zeros before deletion.
Unlock Flow
- Read and deserialize the enrollment blob from disk.
- Derive the 32-byte challenge: `BLAKE3::derive_key("pds v2 ssh-challenge {profile}", salt)`.
- Connect to the SSH agent (via `spawn_blocking` to avoid blocking the tokio runtime).
- Find the identity matching the enrolled fingerprint.
- Sign the challenge with the enrolled key.
- Derive the KEK: `BLAKE3::derive_key("pds v2 ssh-vault-kek {profile}", signature_bytes)`.
- Zeroize the raw signature bytes.
- Construct an `EncryptionKey` from the KEK, then zeroize the KEK bytes.
- Decrypt the master key from the enrollment blob’s ciphertext using AES-256-GCM.
- Return an `UnlockOutcome` with `ipc_strategy: DirectMasterKey`, `factor_id: SshAgent`, and audit metadata including the SSH fingerprint and key type.
Enrollment Flow
- Connect to the SSH agent, list all identities, and filter to eligible key types (Ed25519, RSA).
- Select a key by `selected_key_index` (required – `None` returns `NoEligibleKey`).
- Sign the challenge with the selected key.
- Derive the KEK from the signature (same derivation as unlock).
- Zeroize the signature bytes.
- Generate a 12-byte random nonce via `getrandom`.
- Encrypt the master key with AES-256-GCM using the KEK and nonce.
- Zeroize the KEK bytes.
- Build and serialize the `EnrollmentBlob` with the key fingerprint, key type, nonce, and ciphertext.
- Write to disk atomically via a `.ssh-enrollment.tmp` intermediate, with `0o600` permissions.
Key Selection
The CLI sesame ssh enroll command in open-sesame/src/ssh.rs supports three methods for
selecting which SSH key to enroll:
Fingerprint via --ssh-key Flag
```sh
sesame ssh enroll --ssh-key SHA256:abc123...
```
The fingerprint is matched against loaded agent keys, with or without the SHA256: prefix.
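Prefix-tolerant matching can be sketched as a one-liner (hypothetical helper; the real CLI logic lives in open-sesame/src/ssh.rs):

```rust
/// Compare a user-supplied fingerprint against an agent key's SHA256
/// fingerprint, tolerating an omitted "SHA256:" prefix on either side.
fn fingerprint_matches(user_input: &str, agent_fp: &str) -> bool {
    let norm = |s: &str| s.strip_prefix("SHA256:").unwrap_or(s).to_string();
    norm(user_input) == norm(agent_fp)
}
```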
Public Key File via --ssh-key Flag
```sh
sesame ssh enroll --ssh-key ~/.ssh/id_ed25519.pub
```
The file is read, parsed as an OpenSSH public key, and its SHA256 fingerprint is computed.
The `~/` prefix is expanded and the path is resolved through `canonicalize()`, then verified to remain within `$HOME` to block path traversal. Files larger than 64 KB are rejected.
Interactive Menu
When --ssh-key is omitted and stdin is a terminal, dialoguer::Select presents a menu of
eligible keys from the agent, showing fingerprint and algorithm. In non-interactive mode
(piped stdin), --ssh-key is required.
Agent Connection
The connect_agent() function in core-auth/src/ssh.rs attempts two socket paths in order:
- `$SSH_AUTH_SOCK`: The standard environment variable, set by `ssh-agent`, `sshd` forwarding, or systemd environment propagation.
- `~/.ssh/agent.sock`: A fallback stable symlink path. On Konductor VMs, `/etc/profile.d/konductor-ssh-agent.sh` creates `~/.ssh/agent.sock` pointing to the forwarded agent socket (`/tmp/ssh-XXXX/agent.PID`) on each SSH login. This gives systemd user services a stable path to the forwarded agent, since `$SSH_AUTH_SOCK` points to a per-session temporary directory that changes on each login.
The function is intentionally synchronous – local Unix socket connect is sub-millisecond.
All agent operations in the async VaultAuthBackend methods are wrapped in
tokio::task::spawn_blocking to avoid blocking the tokio runtime.
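The two-step path resolution can be sketched with std alone. The helper name is hypothetical and only computes the candidate list; the real `connect_agent()` additionally opens the Unix socket:

```rust
use std::path::PathBuf;

/// Candidate agent sockets in the order tried by connect_agent():
/// $SSH_AUTH_SOCK first, then the stable ~/.ssh/agent.sock symlink.
fn agent_socket_candidates() -> Vec<PathBuf> {
    let mut candidates = Vec::new();
    if let Ok(sock) = std::env::var("SSH_AUTH_SOCK") {
        if !sock.is_empty() {
            candidates.push(PathBuf::from(sock));
        }
    }
    if let Ok(home) = std::env::var("HOME") {
        candidates.push(PathBuf::from(home).join(".ssh/agent.sock"));
    }
    candidates
}
```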
Agent Forwarding
For remote or containerized environments where the SSH key lives on the operator’s workstation:
- The SSH agent socket is forwarded via `ssh -A` or `ForwardAgent yes` in SSH config.
- `$SSH_AUTH_SOCK` is set by `sshd` to the forwarded socket path.
- The stable symlink pattern (`~/.ssh/agent.sock`) provides systemd user services access to the forwarded agent, since systemd services do not inherit the per-session `$SSH_AUTH_SOCK`.
- The Konductor profile.d hook creates and maintains this symlink automatically on each SSH login.
This architecture allows vault unlock via SSH agent even when running inside a VM or container, provided the SSH agent is forwarded from the host.
FIDO2 / WebAuthn Backend
> **Status: Design Intent.** The `AuthFactorId::Fido2` variant exists in `core-types::auth` and the `VaultAuthBackend` trait is defined in `core-auth::backend`, but no struct implements this factor today. This page documents what the backend will do when built, grounded in the trait interface and FIDO2 standards.
The FIDO2 backend enables vault unlock using CTAP2-compliant authenticators – USB security
keys, platform authenticators, and BLE/NFC tokens. It maps to AuthFactorId::Fido2 (config
string "fido2") and operates through libfido2 directly, without a browser or the WebAuthn
JavaScript API.
Relevant Standards
| Standard | Role |
|---|---|
| CTAP2 (Client to Authenticator Protocol 2.1) | Wire protocol between host and authenticator. Open Sesame acts as the CTAP2 platform (client). |
| WebAuthn Level 2 (W3C) | Defines the relying party model, credential creation, and assertion ceremonies. Open Sesame borrows the data model (RP ID, credential ID, user handle) but does not use a browser. |
| HMAC-secret extension (CTAP2.1) | Allows the authenticator to compute a deterministic symmetric secret from a caller-provided salt, without exposing the credential private key. This is the primary key-derivation mechanism. |
| credProtect extension | Controls whether credentials are discoverable without user verification. Should be set to level 2 or 3 to prevent silent credential enumeration. |
Mapping to VaultAuthBackend
factor_id()
Returns AuthFactorId::Fido2.
backend_id()
Returns "fido2".
name()
Returns "FIDO2/WebAuthn" (for overlay display and audit logs).
requires_interaction()
Returns AuthInteraction::HardwareTouch. CTAP2 authenticators require user presence (UP) at
minimum; most also support user verification (UV) via on-device PIN or biometric. Both require
physical interaction.
is_enrolled(profile, config_dir)
Checks whether a file {config_dir}/profiles/{profile}/fido2.enrollment exists and contains
a valid enrollment blob (see Enrollment Blob Format below). The enrollment record contains
the credential ID, relying party ID, and the wrapped master key blob. This is a synchronous
filesystem check with no device communication.
can_unlock(profile, config_dir)
- Verify enrollment exists via `is_enrolled()`.
- Enumerate connected FIDO2 devices via `libfido2` device enumeration.
- Return `true` if at least one device is present.
Device enumeration over HID typically completes in under 20ms, well within the 100ms trait
budget. This method does not verify that the connected device holds the enrolled credential
– that requires a CTAP2 transaction and user interaction, which is deferred to unlock().
enroll(profile, master_key, config_dir, salt, selected_key_index)
Enrollment proceeds as follows:
- Enumerate connected FIDO2 authenticators. If `selected_key_index` is `Some(i)`, select the i-th device; otherwise select the first.
- Construct a relying party ID: `open-sesame:{profile}` (synthetic, not a web origin).
- Generate a random 32-byte user ID and a random 16-byte challenge.
- Perform `authenticatorMakeCredential` with:
  - Algorithm: ES256 (COSE -7) preferred, EdDSA (COSE -8) as fallback.
  - Extensions: `hmac-secret: true`, `credProtect: 2`, `rk: true` (resident key).
  - User verification: preferred (UV if the device supports it).
- Store the attestation response (credential ID, public key, attestation object).
- Immediately perform a `getAssertion` with the `hmac-secret` extension, passing `salt` as the HMAC-secret salt input. The authenticator returns a 32-byte HMAC output.
- Use the HMAC output as a key-encryption key (KEK). Wrap `master_key` under this KEK using AES-256-GCM with a random 12-byte nonce.
- Serialize and write the enrollment blob to `{config_dir}/profiles/{profile}/fido2.enrollment`.
unlock(profile, config_dir, salt)
Unlock proceeds as follows:
- Load and deserialize the enrollment blob.
- Perform `authenticatorGetAssertion` for the enrolled RP ID and credential ID, with:
  - Extensions: `hmac-secret` with `salt` as input.
  - User verification: preferred.
- The authenticator returns a 32-byte HMAC output (the KEK) and an assertion signature.
- Unwrap the master key from the enrollment blob using the KEK (AES-256-GCM decrypt).
- If unwrap fails (wrong device or tampered blob), return `AuthError::UnwrapFailed`.
- Return `UnlockOutcome`:
  - `master_key`: the unwrapped 32-byte key.
  - `ipc_strategy`: `IpcUnlockStrategy::DirectMasterKey`.
  - `factor_id`: `AuthFactorId::Fido2`.
  - `audit_metadata`: `{"aaguid": "<hex>", "credential_id": "<hex>", "uv": "true|false"}`.
revoke(profile, config_dir)
Deletes {config_dir}/profiles/{profile}/fido2.enrollment. Does not attempt to delete the
resident credential from the authenticator (CTAP2 does not guarantee remote deletion support
across all devices).
Enrollment Blob Format
```text
Version: u8 (1)
RP ID: length-prefixed UTF-8
Credential ID: length-prefixed bytes
Public Key (COSE): length-prefixed bytes
Attestation Object: length-prefixed bytes (optional, for future policy use)
Wrapped Master Key: 12-byte nonce || ciphertext || 16-byte GCM tag
```
The blob is versioned to allow schema evolution. The version byte is checked on load; unknown
versions produce AuthError::InvalidBlob.
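Length-prefixed fields like those above can be handled with a pair of tiny helpers. The u16 big-endian prefix width is an assumption for illustration (the page does not specify it), and the helper names are hypothetical:

```rust
/// Append a field preceded by a u16 big-endian length prefix (assumed width).
fn put_field(out: &mut Vec<u8>, field: &[u8]) {
    out.extend_from_slice(&(field.len() as u16).to_be_bytes());
    out.extend_from_slice(field);
}

/// Read one length-prefixed field; returns (field, remaining bytes),
/// or None if the buffer is truncated (InvalidBlob in the real code).
fn take_field(data: &[u8]) -> Option<(&[u8], &[u8])> {
    let n = u16::from_be_bytes([*data.first()?, *data.get(1)?]) as usize;
    let field = data.get(2..2 + n)?;
    Some((field, &data[2 + n..]))
}
```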
FactorContribution
- `AuthCombineMode::Any` or `AuthCombineMode::Policy`: The backend provides `FactorContribution::CompleteMasterKey`. It independently unwraps the full 32-byte master key from its enrollment blob.
- `AuthCombineMode::All`: The backend provides `FactorContribution::FactorPiece`. The 32-byte HMAC-secret output is contributed as one input to the combined HKDF derivation. In this mode, enrollment does not wrap the master key; it stores only the credential ID and RP ID. The HMAC-secret output itself is the piece.
Platform Authenticator vs Roaming Authenticator
FIDO2 defines two authenticator attachment modalities:
- Platform authenticators are built into the host device (e.g., Windows Hello TPM-backed key, macOS Touch ID Secure Enclave key, Android biometric key). On Linux desktops, platform authenticators are uncommon.
- Roaming authenticators are external devices connected via USB HID, NFC, or BLE (e.g., YubiKey 5, SoloKeys, Google Titan, Nitrokey).
This backend targets roaming authenticators. For platform biometric unlock on Linux, the
Biometrics backend (AuthFactorId::Fingerprint) is the appropriate
choice – it uses fprintd/polkit rather than CTAP2.
Browser-less Operation
Open Sesame communicates directly with authenticators via libfido2, the reference CTAP2 C
library maintained by Yubico. Consequences:
- No origin binding. The RP ID is a synthetic string (`open-sesame:{profile}`), not a web origin. There is no TLS channel binding.
- No browser UI. The daemon overlay prompts the user to touch the authenticator. The backend blocks on the CTAP2 transaction until UP/UV is satisfied or a timeout expires.
- Attestation is informational. The attestation object is stored for optional future policy enforcement (e.g., restricting enrollment to FIPS-certified authenticators via FIDO Metadata Service lookup) but is not verified during normal unlock.
Integration Dependencies
| Dependency | Type | Purpose |
|---|---|---|
| libfido2 >= 1.13 | System C library | CTAP2 HID/NFC/BLE transport |
| libfido2-dev | System package | Build-time headers and pkg-config |
| Rust crate: libfido2 or ctap-hid-fido2 | Cargo dependency | Safe Rust bindings |
| udev rule or plugdev group | System config | User access to /dev/hidraw* devices |
Threat Model Considerations
- Deterministic KEK. The HMAC-secret output is deterministic for a given (credential, salt) pair. Changing the vault salt invalidates the KEK; re-enrollment is required after re-keying.
- Loss recovery. If the authenticator is lost or destroyed, the enrollment blob is useless. Recovery requires another enrolled factor (password, SSH agent, etc.).
- Clone resistance. Depends on the authenticator hardware. Devices with a secure element (YubiKey 5, SoloKeys v2) resist cloning. Software-only CTAP2 implementations (e.g., a `libfido2` soft token) provide no clone resistance.
- PIN brute-force. CTAP2 authenticators implement per-device PIN retry counters with lockout. This is enforced by the authenticator firmware, not by Open Sesame.
- Relay attacks. An attacker with network access to the USB HID device could relay CTAP2 messages. Physical proximity verification is delegated to the authenticator’s UP mechanism.
See Also
- Factor Architecture – `VaultAuthBackend` trait definition and dispatch
- Hardware Tokens – YubiKey PIV/challenge-response (non-FIDO2 protocols)
- Biometrics – Platform biometric unlock via fprintd
- Policy Engine – Multi-factor combination modes (`Any`, `All`, `Policy`)
TPM 2.0 Backend
> **Status: Design Intent.** The `AuthFactorId::Tpm` variant exists in `core-types::auth` and the `VaultAuthBackend` trait is defined in `core-auth::backend`, but no struct implements this factor today. This page documents what the backend will do when built, grounded in the trait interface and TPM 2.0 standards.
The TPM backend enables vault unlock by sealing the master key to the platform’s Trusted
Platform Module. The sealed blob can only be unsealed when the TPM’s Platform Configuration
Registers (PCRs) match the values recorded at seal time, binding the vault to a specific
machine in a specific boot state. It maps to AuthFactorId::Tpm (config string "tpm").
Relevant Standards
| Specification | Role |
|---|---|
| TPM 2.0 Library Specification (TCG) | Defines the TPM command set, key hierarchies, sealing, and PCR operations. |
| TCG PC Client Platform TPM Profile (PTP) | Specifies PCR allocation and boot measurement conventions for PC platforms. |
| tpm2-tss (TCG Software Stack) | Userspace C library providing ESAPI, FAPI, and TCTI layers for TPM communication. |
| tpm2-tools | Command-line tools built on tpm2-tss, useful for enrollment scripting and debugging. |
| Linux IMA (Integrity Measurement Architecture) | Extends PCR 10 with file hashes during runtime. Optional extension point for runtime integrity. |
Core Concept: Sealing to PCR State
TPM 2.0 sealing binds a data blob to an authorization policy that includes PCR values. The TPM only unseals the blob if the current PCR values match the policy. This creates a hardware-enforced link between the vault key and boot integrity state:
- At enrollment, the backend reads the current PCR values, constructs an authorization policy from them, and seals the master key under the TPM’s Storage Root Key (SRK) with that policy.
- At unlock, the backend asks the TPM to unseal the blob. The TPM internally compares current PCR values against the sealed policy. If they match, the blob is released. If any measured component has changed, unsealing fails.
PCR Selection
The default PCR selection for desktop Linux:
| PCR | Measures |
|---|---|
| 0 | UEFI firmware code |
| 1 | UEFI firmware configuration |
| 2 | Option ROMs / external firmware |
| 3 | Option ROM configuration |
| 7 | Secure Boot state (PK, KEK, db, dbx) |
PCRs 4-6 (boot manager, GPT, resume events) are intentionally excluded by default because kernel updates would invalidate the seal on every update. The PCR set is configurable at enrollment time.
Extending to PCR 11 (unified kernel image measurement, used by systemd-stub) or PCR 10
(IMA) is supported as an opt-in for higher-assurance configurations.
Mapping to VaultAuthBackend
factor_id()
Returns AuthFactorId::Tpm.
backend_id()
Returns "tpm".
name()
Returns "TPM 2.0".
requires_interaction()
Returns AuthInteraction::None. TPM unsealing is a silent, non-interactive operation. The
TPM does not require user presence for unsealing (unlike FIDO2). If a TPM PIN (authValue) is
configured on the sealed object, the interaction type changes to
AuthInteraction::PasswordEntry.
is_enrolled(profile, config_dir)
Checks whether {config_dir}/profiles/{profile}/tpm.enrollment exists and contains a valid
sealed blob with a recognized version byte.
can_unlock(profile, config_dir)
- Verify enrollment exists via `is_enrolled()`.
- Open a connection to the TPM via the TCTI (typically `/dev/tpmrm0`, the kernel resource manager).
- Return `true` if the TPM device is accessible.
PCR matching is not checked here – a trial unseal could exceed the 100ms budget and may trigger rate limiting on some TPM implementations.
enroll(profile, master_key, config_dir, salt, selected_key_index)
- Open a TPM context via tpm2-tss ESAPI.
- Read the current PCR values for the configured PCR selection (default: 0, 1, 2, 3, 7).
- Build a `PolicyPCR` authorization policy from the PCR digest.
- Optionally, combine with `PolicyAuthValue` if the user wants a TPM PIN (defense-in-depth against evil-maid attacks where PCRs match but an attacker has physical access).
- Create a sealed object under the SRK (Storage Hierarchy, persistent handle `0x81000001` or equivalent):
  - Object type: `TPM2_ALG_KEYEDHASH` with the `seal` attribute.
  - Data: the 32-byte `master_key`.
  - Auth policy: the PCR policy (and optionally the PIN policy).
- Persist the sealed object to a TPM NV index, or serialize the public/private portions to disk.
- Write the enrollment blob to `{config_dir}/profiles/{profile}/tpm.enrollment`.
`selected_key_index` is ignored – there is only one TPM per machine.
unlock(profile, config_dir, salt)
- Load the enrollment blob and deserialize the sealed object context.
- Open a TPM context via ESAPI.
- Load the sealed object into the TPM.
- Start a policy session. Execute `PolicyPCR` with the enrolled PCR selection.
- If a TPM PIN was configured, execute `PolicyAuthValue` and provide the PIN.
- Call `TPM2_Unseal` with the policy session.
- If unsealing succeeds, the TPM returns the 32-byte master key.
- If unsealing fails (PCR mismatch), return `AuthError::UnwrapFailed`. The audit metadata should include which PCRs diverged, if determinable.
- Return `UnlockOutcome`:
  - `master_key`: the unsealed 32-byte key.
  - `ipc_strategy`: `IpcUnlockStrategy::DirectMasterKey`.
  - `factor_id`: `AuthFactorId::Tpm`.
  - `audit_metadata`: `{"pcr_selection": "0,1,2,3,7", "tpm_manufacturer": "<vendor>"}`.
revoke(profile, config_dir)
- If the sealed object was persisted to a TPM NV index, evict it with `TPM2_EvictControl`.
- Delete `{config_dir}/profiles/{profile}/tpm.enrollment`.
Enrollment Blob Format
```text
Version: u8 (1)
PCR selection: u32 bitmask (bit N set = PCR N included)
PCR digest at seal time: 32 bytes (SHA-256)
Sealed object public area: length-prefixed bytes (TPM2B_PUBLIC)
Sealed object private area: length-prefixed bytes (TPM2B_PRIVATE)
SRK handle: u32
PIN flag: u8 (0 = no PIN, 1 = PolicyAuthValue included)
```
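The PCR-selection bitmask (bit N set = PCR N included) can be encoded and decoded with two small helpers (hypothetical names, shown only to make the layout concrete):

```rust
/// Encode a PCR list as the u32 bitmask described above.
fn pcr_mask(pcrs: &[u8]) -> u32 {
    pcrs.iter().fold(0u32, |m, &pcr| m | (1u32 << pcr))
}

/// Decode the bitmask back into a sorted PCR list.
fn pcrs_from_mask(mask: u32) -> Vec<u8> {
    (0u8..32).filter(|&n| mask & (1u32 << n) != 0).collect()
}
```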
FactorContribution
- `AuthCombineMode::Any` or `AuthCombineMode::Policy`: The backend provides `FactorContribution::CompleteMasterKey`. The TPM directly unseals the full 32-byte master key.
- `AuthCombineMode::All`: The backend provides `FactorContribution::FactorPiece`. At enrollment, a random 32-byte piece is sealed (not the master key itself). At unlock, the unsealed piece is contributed to the combined HKDF derivation.
Measured Boot Chain
The security of this backend depends on the integrity of the measured boot chain:
- UEFI firmware measures itself and boot configuration into PCRs 0-3.
- Shim / bootloader (GRUB, systemd-boot) is measured by the firmware into PCR 4.
- Secure Boot state (whether Secure Boot is enabled, which keys are enrolled) is reflected in PCR 7.
- Kernel and initramfs, if using `systemd-stub` unified kernel images (UKI), are measured into PCR 11.
If an attacker modifies any component in this chain, the corresponding PCR value changes, and the TPM refuses to unseal the vault key.
Firmware and Kernel Updates
After a firmware or kernel update, PCR values change and the sealed blob becomes invalid. Strategies to manage this:
- Predictive re-sealing. Before a kernel update, predict the new PCR values (using `systemd-pcrphase` or `systemd-measure`) and create a second sealed blob for the new values. Delete the old one after a successful boot.
- Fallback factor. Always maintain a second enrolled factor (password, SSH agent) so access is not lost when PCR values change unexpectedly.
- PCR selection trade-offs. Excluding volatile PCRs (4, 5, 6) from the policy reduces re-enrollment frequency at the cost of reduced boot integrity coverage.
Platform Binding
The TPM is a physical chip (or firmware TPM) soldered to the motherboard. The sealed blob is bound to that specific TPM – it cannot be moved to another machine. This provides:
- Hardware binding. The vault is tied to a specific physical device.
- Anti-theft. A stolen drive cannot be unlocked on another machine.
- Anti-cloning. TPM private keys cannot be extracted (the TPM is designed to resist physical attacks on the chip package).
Integration Dependencies
| Dependency | Type | Purpose |
|---|---|---|
| tpm2-tss >= 4.0 | System C library | ESAPI, FAPI, and TCTI for TPM communication |
| tpm2-tss-devel | System package | Build-time headers |
| Rust crate: tss-esapi | Cargo dependency | Safe Rust bindings to tpm2-tss ESAPI |
| /dev/tpmrm0 | Kernel device | TPM resource manager (kernel >= 4.12) |
| tpm2-abrmd (optional) | System service | Userspace resource manager (alternative to kernel RM) |
| tpm2-tools (optional) | System package | Debugging and manual enrollment scripting |
The user must have read/write access to /dev/tpmrm0 (typically via the tss group or a
udev rule).
Threat Model Considerations
- Evil-maid with matching PCRs. If an attacker can reproduce the exact boot chain (same firmware, same bootloader, same Secure Boot keys), they can unseal the key. Adding a TPM PIN (`PolicyAuthValue`) mitigates this.
- Firmware TPM (fTPM) vulnerabilities. Firmware TPMs run inside the CPU or chipset firmware. Vulnerabilities in fTPM firmware (e.g., AMD fTPM voltage glitching) can potentially extract sealed data. Discrete TPM chips (e.g., Infineon SLB9670) offer stronger physical resistance.
- Running-system compromise. TPM sealing protects at-rest data. Once the system is booted and the vault is unlocked, an attacker with root access can read the master key from daemon-secrets process memory. TPM does not protect against runtime compromise.
- PCR reset attacks. On some platforms, a hardware reset of the TPM (e.g., via LPC bus manipulation) can reset PCRs to zero. Sealing to PCR 7 (Secure Boot state) partially mitigates this because Secure Boot measurements are replayed from firmware on reset.
- vTPM in virtualized environments. A virtual TPM provides no physical security. The hypervisor can read all sealed data. TPM enrollment in a VM is useful for binding a vault to a specific VM instance, not for hardware-level tamper resistance.
See Also
- Factor Architecture – `VaultAuthBackend` trait definition and dispatch
- SED/Opal – Drive-level hardware binding (complementary to TPM)
- Policy Engine – Multi-factor combination modes
- FIDO2/WebAuthn – Alternative hardware factor (portable, not platform-bound)
Biometrics Backend
> **Status: Design Intent.** The `AuthFactorId::Fingerprint` variant exists in `core-types::auth` and the `VaultAuthBackend` trait is defined in `core-auth::backend`, but no struct implements this factor today. This page documents what the backend will do when built, grounded in the trait interface and platform biometric APIs.
The biometrics backend enables vault unlock gated by fingerprint verification (and, in the
future, other biometric modalities such as face recognition). It maps to
AuthFactorId::Fingerprint (config string "fingerprint"). The critical design principle:
biometric data is never used as key material. Biometrics are authentication gates that release
a stored key, not secrets from which keys are derived.
Design Principle: Biometrics Are Not Secrets
Biometric features (fingerprint minutiae, facial geometry) are not secret – they can be observed, photographed, or lifted from surfaces. They are also not stable – they vary between readings. For these reasons, the biometrics backend never derives cryptographic key material from biometric data. Instead:
- At enrollment, the master key (or a KEK) is encrypted and stored on disk.
- The decryption key for that blob is held in a platform keystore that requires biometric verification to release.
- At unlock, the platform biometric subsystem verifies the user, and if successful, releases the decryption key to the backend.
The biometric template (the mathematical representation of the fingerprint or face) never leaves the platform biometric subsystem. Open Sesame never sees, stores, or transmits biometric data.
Platform Biometric APIs
Linux: fprintd
On Linux, fingerprint authentication is mediated by fprintd, a D-Bus service that manages
fingerprint readers and templates. The authentication flow:
- The backend calls `net.reactivated.Fprint.Device.VerifyStart` on the fprintd D-Bus interface.
- `fprintd` communicates with the fingerprint sensor hardware via `libfprint`, acquires a fingerprint image, and matches it against enrolled templates.
- On match, `fprintd` emits a `VerifyStatus` signal with `verify-match`. On failure, `verify-no-match` or `verify-retry-scan`.
- The backend calls `VerifyStop` to end the session.
The backend uses the fprintd D-Bus API directly (not PAM) to avoid requiring a full PAM session context.
Future: macOS LocalAuthentication
On macOS (if platform support is added), LocalAuthentication.framework provides Touch ID
and Face ID gating of Keychain items. A Keychain item with
kSecAccessControlBiometryCurrentSet requires biometric verification before the Keychain
releases the stored secret. This maps directly to the “biometric gates release of a stored
key” model.
Future: Windows Hello
On Windows, Windows.Security.Credentials.KeyCredentialManager and the Windows Hello
biometric subsystem provide similar gating. The TPM-backed key is released only after
Windows Hello verification succeeds.
Mapping to VaultAuthBackend
factor_id()
Returns AuthFactorId::Fingerprint.
backend_id()
Returns "fingerprint".
name()
Returns "Fingerprint".
requires_interaction()
Returns AuthInteraction::HardwareTouch. The user must place their finger on the sensor.
is_enrolled(profile, config_dir)
Checks two conditions:
- An enrollment blob exists at {config_dir}/profiles/{profile}/fingerprint.enrollment.
- At least one fingerprint is enrolled in fprintd for the current system user (queried via net.reactivated.Fprint.Device.ListEnrolledFingers).
Both must be true. If the system fingerprint enrollment is wiped (user re-enrolled fingers in system settings), the Open Sesame enrollment blob still exists on disk but the platform verification will match against different templates, making it effectively stale.
can_unlock(profile, config_dir)
- Verify enrollment exists via is_enrolled().
- Check that fprintd is running (D-Bus name net.reactivated.Fprint is available).
- Check that at least one fingerprint reader device is present.
D-Bus name lookup and device enumeration complete well within the 100ms trait budget.
enroll(profile, master_key, config_dir, salt, selected_key_index)
- Verify that fprintd has at least one enrolled fingerprint for the current user. If not, return AuthError::BackendNotApplicable("no fingerprints enrolled in fprintd; enroll via system settings first").
- Generate a random 32-byte storage key.
- Wrap master_key under the storage key using AES-256-GCM.
- Store the storage key in a location protected by biometric gating:
  - Primary strategy (Linux): Store the storage key in the user’s kernel keyring (keyctl) under a session-scoped key. The keyring entry is created with a timeout matching the user session. Biometric verification via fprintd acts as the authorization gate before the backend retrieves the keyring secret at unlock time.
  - Fallback strategy: Encrypt the storage key with a key derived from salt and a device-specific identifier (machine-id). Store the encrypted storage key in the enrollment blob itself. The biometric check acts as the sole authorization gate.
- Write the enrollment blob to {config_dir}/profiles/{profile}/fingerprint.enrollment.
selected_key_index is unused (there is one biometric subsystem per machine). It is ignored.
unlock(profile, config_dir, salt)
- Load the enrollment blob.
- Initiate fingerprint verification via the fprintd D-Bus API (VerifyStart).
- Wait for the VerifyStatus signal. The daemon overlay displays a “scan your fingerprint” prompt.
- If verification fails (no match, timeout, or sensor error), return AuthError::UnwrapFailed.
- If verification succeeds, retrieve the storage key from the kernel keyring (primary strategy) or decrypt it from the blob (fallback strategy).
- Unwrap the master key using the storage key (AES-256-GCM decrypt).
- Return UnlockOutcome:
  - master_key: the unwrapped 32-byte key.
  - ipc_strategy: IpcUnlockStrategy::DirectMasterKey.
  - factor_id: AuthFactorId::Fingerprint.
  - audit_metadata: {"method": "fprintd", "finger": "<which_finger>"} (if fprintd reports which finger matched).
revoke(profile, config_dir)
- Remove the storage key from the kernel keyring (if using primary strategy).
- Delete {config_dir}/profiles/{profile}/fingerprint.enrollment.
Does not remove fingerprints from fprintd – those are system-level enrollments managed by the user outside of Open Sesame.
Enrollment Blob Format
Version: u8 (1)
Storage strategy: u8 (1 = kernel keyring, 2 = embedded encrypted key)
Wrapped master key: 12-byte nonce || ciphertext || 16-byte GCM tag
Embedded encrypted storage key (strategy 2 only): 12-byte nonce || ciphertext || 16-byte tag
Device binding hash: 32 bytes (SHA-256 of machine-id || profile name)
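As a sketch, the layout above can be decoded with fixed offsets, since a 32-byte master key makes each AES-256-GCM envelope exactly 12 + 32 + 16 = 60 bytes. Struct and field names here are illustrative, not taken from the codebase:

```rust
const ENVELOPE_LEN: usize = 12 + 32 + 16; // nonce || 32-byte ciphertext || GCM tag

#[derive(Debug, PartialEq)]
enum StorageStrategy {
    KernelKeyring, // tag 1
    EmbeddedKey,   // tag 2
}

struct FingerprintEnrollment {
    version: u8,
    strategy: StorageStrategy,
    wrapped_master_key: [u8; ENVELOPE_LEN],
    /// Present only for the embedded-key fallback strategy (tag 2).
    embedded_storage_key: Option<[u8; ENVELOPE_LEN]>,
    device_binding_hash: [u8; 32],
}

fn parse(blob: &[u8]) -> Result<FingerprintEnrollment, String> {
    if blob.len() < 2 {
        return Err("truncated header".into());
    }
    let version = blob[0];
    if version != 1 {
        return Err(format!("unsupported version {version}"));
    }
    let strategy = match blob[1] {
        1 => StorageStrategy::KernelKeyring,
        2 => StorageStrategy::EmbeddedKey,
        t => return Err(format!("unknown storage strategy {t}")),
    };
    // Strategy 2 carries one extra envelope for the encrypted storage key.
    let extra = if strategy == StorageStrategy::EmbeddedKey { ENVELOPE_LEN } else { 0 };
    let expected = 2 + ENVELOPE_LEN + extra + 32;
    if blob.len() != expected {
        return Err(format!("expected {expected} bytes, got {}", blob.len()));
    }
    let mut wrapped = [0u8; ENVELOPE_LEN];
    wrapped.copy_from_slice(&blob[2..2 + ENVELOPE_LEN]);
    let embedded_storage_key = if extra > 0 {
        let mut e = [0u8; ENVELOPE_LEN];
        e.copy_from_slice(&blob[2 + ENVELOPE_LEN..2 + 2 * ENVELOPE_LEN]);
        Some(e)
    } else {
        None
    };
    let mut device_binding_hash = [0u8; 32];
    device_binding_hash.copy_from_slice(&blob[expected - 32..]);
    Ok(FingerprintEnrollment {
        version,
        strategy,
        wrapped_master_key: wrapped,
        embedded_storage_key,
        device_binding_hash,
    })
}
```

Fixed-offset parsing keeps the format unambiguous: every field length is implied by the version and strategy bytes, so a blob of the wrong size is rejected outright.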
FactorContribution
- AuthCombineMode::Any or AuthCombineMode::Policy: The backend provides FactorContribution::CompleteMasterKey. It unwraps the full master key after biometric verification succeeds.
- AuthCombineMode::All: The backend provides FactorContribution::FactorPiece. A random 32-byte piece (not the master key) is stored behind the biometric gate and contributed to HKDF derivation upon successful verification.
The biometric itself does not contribute entropy – it is a gate. The piece is a random value generated at enrollment time and stored behind the biometric gate.
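The mode-to-contribution dispatch can be sketched as a simple match (type names follow this page; the actual core-auth definitions may differ):

```rust
#[derive(Debug, PartialEq)]
enum FactorContribution {
    /// The factor independently yields the full vault master key.
    CompleteMasterKey([u8; 32]),
    /// The factor yields a 32-byte piece fed into the combined HKDF derivation.
    FactorPiece([u8; 32]),
}

enum AuthCombineMode {
    Any,
    All,
    Policy,
}

/// For Any/Policy the backend unwraps the full master key; for All it releases
/// the random per-enrollment piece stored behind the biometric gate.
fn contribution(
    mode: &AuthCombineMode,
    unwrapped_master_key: [u8; 32],
    enrolled_piece: [u8; 32],
) -> FactorContribution {
    match mode {
        AuthCombineMode::Any | AuthCombineMode::Policy => {
            FactorContribution::CompleteMasterKey(unwrapped_master_key)
        }
        AuthCombineMode::All => FactorContribution::FactorPiece(enrolled_piece),
    }
}
```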
Liveness Detection
Fingerprint sensors vary in their resistance to spoofing:
| Sensor Type | Spoofing Resistance | Notes |
|---|---|---|
| Capacitive (most laptop sensors) | Moderate | Detects electrical properties of skin. Gummy fingerprints with conductive material can sometimes fool them. |
| Ultrasonic (e.g., Qualcomm 3D Sonic) | High | Measures sub-dermal features. More resistant to printed or molded replicas. |
| Optical (common in USB readers) | Low | Easiest to spoof with printed or molded fingerprints. |
Open Sesame delegates liveness detection entirely to the sensor hardware and fprintd. The
backend does not attempt its own liveness checks. Deployment guidance: use capacitive or
ultrasonic sensors for security-sensitive configurations, and combine biometric with a second
factor via AuthCombineMode::Policy.
Privacy Guarantees
- No template storage. Open Sesame never stores, transmits, or processes biometric templates. Templates are managed exclusively by fprintd (stored in /var/lib/fprint/).
- No template access. The backend never requests raw biometric data or template bytes. It uses only the verify/match API, which returns a boolean result.
- No cross-profile linkability. The enrollment blob contains no biometric information. An attacker who obtains the blob cannot determine whose fingerprint unlocks the vault.
- User-controlled deletion. Revoking the backend deletes only the encrypted key blob. Biometric templates remain under user control in fprintd.
Integration Dependencies
| Dependency | Type | Purpose |
|---|---|---|
| fprintd >= 1.94 | System service | Fingerprint verification via D-Bus |
| libfprint >= 1.94 | System library | Sensor driver layer (used by fprintd) |
| Rust crate: zbus | Cargo dependency | D-Bus client for fprintd communication |
| Rust crate: keyutils | Cargo dependency | Linux kernel keyring access (primary storage strategy) |
| Compatible fingerprint reader | Hardware | Any reader supported by libfprint |
Threat Model Considerations
- Biometric spoofing. The backend is only as spoof-resistant as the sensor hardware. It should not be the sole factor for high-value vaults. Combining biometric with password or FIDO2 via AuthCombineMode::Policy is recommended.
- Stolen enrollment blob. The blob is useless without passing biometric verification (primary strategy) or without the device-specific derivation inputs (fallback strategy). The biometric gate is the critical protection.
- fprintd compromise. If an attacker can inject false D-Bus responses (by compromising fprintd or the user’s D-Bus session), they can bypass biometric verification. Running fprintd as a system service (not user session) and using D-Bus mediation via AppArmor or SELinux mitigates this.
- Irrevocable biometrics. If a fingerprint is compromised (lifted from a surface), the user cannot change their fingerprint. Mitigation: re-enroll with a different finger and revoke the old enrollment, or add a second factor requirement via policy.
- Fallback strategy weakness. The embedded-key fallback strategy protects the storage key only with device-specific derivation (machine-id + salt). An attacker with the enrollment blob and knowledge of the machine-id can bypass the biometric gate entirely. The primary strategy (kernel keyring) is strongly preferred.
See Also
- Factor Architecture – VaultAuthBackend trait definition and dispatch
- FIDO2/WebAuthn – Roaming authenticator with on-device biometric UV
- Policy Engine – Combining biometric with other factors
Hardware Tokens Backend (YubiKey / Smart Cards / PIV)
Status: Design Intent. The AuthFactorId::Yubikey variant exists in core-types::auth and the VaultAuthBackend trait is defined in core-auth::backend, but no struct implements this factor today. This page documents what the backend will do when built, grounded in the trait interface and relevant smart card standards.
The hardware tokens backend enables vault unlock using YubiKeys, PIV smart cards, and
PKCS#11-compatible cryptographic tokens. It maps to AuthFactorId::Yubikey (config string
"yubikey"). Despite the enum variant name referencing YubiKey specifically, the backend is
designed to support the broader category of challenge-response and certificate-based hardware
tokens.
This backend covers the non-FIDO2 capabilities of these devices. For FIDO2/CTAP2 operation, see the FIDO2/WebAuthn backend.
Supported Protocols
PIV (FIPS 201 / NIST SP 800-73)
Personal Identity Verification is a US government standard for smart card authentication. PIV cards (and YubiKeys with the PIV applet) contain X.509 certificates and corresponding private keys in hardware slots. The private key never leaves the card.
PIV slots relevant to Open Sesame:
| Slot | Purpose | PIN Policy | Touch Policy |
|---|---|---|---|
| 9a | PIV Authentication | Once per session | Configurable |
| 9c | Digital Signature | Always | Configurable |
| 9d | Key Management | Once per session | Configurable |
| 9e | Card Authentication | Never | Never |
Slot 9d (Key Management) is the natural fit for vault unlock – it is designed for key agreement and encryption operations, has a reasonable PIN policy (once per session), and supports touch policy configuration.
HMAC-SHA1 Challenge-Response (YubiKey Slot 2)
YubiKeys support HMAC-SHA1 challenge-response in their OTP applet (slots 1 and 2). The host
sends a challenge, the YubiKey computes HMAC-SHA1 with a pre-programmed 20-byte secret, and
returns the 20-byte response. This is the same mechanism used by ykman and ykchalresp.
Slot 2 (long press) is conventionally used for challenge-response to avoid conflicts with slot 1 (short press, often configured for OTP).
PKCS#11
PKCS#11 is the generic smart card interface. Any token with a PKCS#11 module (OpenSC, YubiKey YKCS11, Nitrokey, etc.) can be used. The backend loads the PKCS#11 shared library, finds a suitable private key object, and performs a sign or decrypt operation.
Mapping to VaultAuthBackend
factor_id()
Returns AuthFactorId::Yubikey.
backend_id()
Returns "yubikey".
name()
Returns "Hardware Token".
requires_interaction()
Returns AuthInteraction::HardwareTouch if the enrolled token has a touch policy enabled.
Returns AuthInteraction::PasswordEntry if the token requires a PIN but no touch. The
interaction type is recorded in the enrollment blob and returned by this method.
Most configurations require touch (physical presence), so HardwareTouch is the common case.
is_enrolled(profile, config_dir)
Checks whether {config_dir}/profiles/{profile}/yubikey.enrollment exists and contains a
valid enrollment blob.
can_unlock(profile, config_dir)
- Verify enrollment exists.
- Based on the enrolled protocol:
  - HMAC-SHA1: Enumerate USB HID devices matching YubiKey vendor/product IDs.
  - PIV/PKCS#11: Attempt to open a PCSC connection and verify that a card is present in a reader.
- Return true if a device is detected.
No cryptographic operation is performed (must stay within 100ms).
enroll(profile, master_key, config_dir, salt, selected_key_index)
The enrollment path depends on the protocol. The backend auto-detects the preferred protocol based on the connected device, or the user specifies via configuration.
HMAC-SHA1 Path
- Enumerate connected YubiKeys. If selected_key_index is Some(i), select the i-th device.
- Issue a challenge-response using salt as the challenge (hashed to fit the challenge length if needed).
- The YubiKey returns a 20-byte HMAC-SHA1 response.
- Derive a 32-byte KEK from the HMAC response using HKDF-SHA256: KEK = HKDF-SHA256(ikm=hmac_response, salt=salt, info="open-sesame:yubikey:{profile}").
- Wrap master_key under the KEK using AES-256-GCM.
- Store the enrollment blob with the YubiKey serial number, slot number, and wrapped master key.
PIV Path
- Open a PCSC connection to the smart card.
- Select the PIV applet (AID A0 00 00 03 08).
- Authenticate to the card (PIN prompt if required by slot policy).
- Read the certificate from the selected slot (default: 9d).
- Generate a random 32-byte challenge.
- Encrypt the challenge using the public key from the certificate (RSA-OAEP or ECDH depending on key type).
- Derive a KEK: KEK = HKDF-SHA256(ikm=challenge, salt=salt, info="open-sesame:piv:{profile}").
- Wrap master_key under the KEK.
- Store the enrollment blob with the certificate fingerprint (SHA-256), slot number, encrypted challenge, and wrapped master key.
PKCS#11 Path
Follows the PIV path but uses the PKCS#11 API (C_FindObjects, C_Decrypt / C_Sign)
instead of raw APDU commands. The enrollment blob additionally stores the PKCS#11 module
path and token serial number.
unlock(profile, config_dir, salt)
HMAC-SHA1 Path
- Load enrollment blob.
- Issue challenge-response with salt as the challenge.
- Derive KEK from the HMAC response (same HKDF as enrollment).
- Unwrap master key. If unwrap fails (different YubiKey or different slot 2 secret), return AuthError::UnwrapFailed.
PIV Path
- Load enrollment blob, including the encrypted challenge.
- Open PCSC connection, select PIV applet, authenticate (PIN if required).
- Decrypt the encrypted challenge using the card’s private key (slot 9d).
- Derive KEK from the decrypted challenge (same HKDF as enrollment).
- Unwrap master key.
Common Outcome
Return UnlockOutcome:
- master_key: the unwrapped 32-byte key.
- ipc_strategy: IpcUnlockStrategy::DirectMasterKey.
- factor_id: AuthFactorId::Yubikey.
- audit_metadata: {"protocol": "hmac-sha1|piv|pkcs11", "serial": "<device_serial>", "slot": "<slot>"}.
revoke(profile, config_dir)
Delete {config_dir}/profiles/{profile}/yubikey.enrollment. Does not modify the token
itself (the HMAC secret or PIV keys remain on the device).
Enrollment Blob Format
Version: u8 (1)
Protocol: u8 (1 = HMAC-SHA1, 2 = PIV, 3 = PKCS#11)
Device serial: length-prefixed UTF-8
Slot/key identifier: length-prefixed UTF-8
Interaction type: u8 (maps to AuthInteraction variant)
--- Protocol-specific fields ---
[HMAC-SHA1]
Wrapped master key: 12-byte nonce || ciphertext || 16-byte GCM tag
[PIV]
Certificate fingerprint: 32 bytes (SHA-256)
Encrypted challenge: length-prefixed bytes
Wrapped master key: 12-byte nonce || ciphertext || 16-byte GCM tag
[PKCS#11]
Module path: length-prefixed UTF-8
Token serial: length-prefixed UTF-8
Certificate fingerprint: 32 bytes
Encrypted challenge: length-prefixed bytes
Wrapped master key: 12-byte nonce || ciphertext || 16-byte GCM tag
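Several fields above are length-prefixed UTF-8. A minimal encode/decode helper pair might look like the following; the u16 big-endian prefix width is an illustrative assumption, since this page does not pin it down:

```rust
/// Append a length-prefixed UTF-8 field (hypothetical u16 big-endian prefix).
fn write_lp_str(out: &mut Vec<u8>, s: &str) {
    let len = u16::try_from(s.len()).expect("field too long for u16 prefix");
    out.extend_from_slice(&len.to_be_bytes());
    out.extend_from_slice(s.as_bytes());
}

/// Read one length-prefixed UTF-8 field, returning it and the remaining bytes.
fn read_lp_str(input: &[u8]) -> Result<(&str, &[u8]), String> {
    if input.len() < 2 {
        return Err("truncated length prefix".into());
    }
    let len = u16::from_be_bytes([input[0], input[1]]) as usize;
    let rest = &input[2..];
    if rest.len() < len {
        return Err("truncated field body".into());
    }
    let (body, rest) = rest.split_at(len);
    let s = std::str::from_utf8(body).map_err(|e| e.to_string())?;
    Ok((s, rest))
}
```

Returning the remaining slice lets the blob reader chain fields (device serial, then slot identifier, and so on) without tracking offsets by hand.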
FactorContribution
- AuthCombineMode::Any or AuthCombineMode::Policy: The backend provides FactorContribution::CompleteMasterKey. It independently unwraps the full master key.
- AuthCombineMode::All: The backend provides FactorContribution::FactorPiece. For HMAC-SHA1, the HKDF output derived from the HMAC response is the piece (32 bytes). For PIV, the decrypted challenge is the piece. The piece is contributed to the combined HKDF derivation.
Touch Requirement for Physical Presence
YubiKeys and some smart cards support a touch policy: the device requires the user to physically touch a contact sensor before performing a cryptographic operation. This provides proof of physical presence, mitigating malware that silently uses the token while plugged in.
| Policy | Behavior |
|---|---|
| Never | No touch required (default for HMAC-SHA1 on some firmware) |
| Always | Touch required for every operation |
| Cached | Touch required once, cached for 15 seconds |
The backend records the touch policy in the enrollment blob so that
requires_interaction() returns the correct AuthInteraction variant. The daemon overlay
displays a “touch your key” prompt when AuthInteraction::HardwareTouch is indicated.
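The recorded-policy-to-interaction mapping can be sketched as a match (variant names follow this page; treating the Cached policy like Always for prompt purposes is an assumption):

```rust
enum TouchPolicy {
    Never,
    Always,
    Cached,
}

#[derive(Debug, PartialEq)]
enum AuthInteraction {
    None,
    HardwareTouch,
    PasswordEntry,
}

/// Touch wins over PIN for prompt purposes: if the slot requires touch, the
/// overlay shows "touch your key" even when a PIN was also entered.
fn interaction_for(touch: TouchPolicy, pin_required: bool) -> AuthInteraction {
    match (touch, pin_required) {
        (TouchPolicy::Always | TouchPolicy::Cached, _) => AuthInteraction::HardwareTouch,
        (TouchPolicy::Never, true) => AuthInteraction::PasswordEntry,
        (TouchPolicy::Never, false) => AuthInteraction::None,
    }
}
```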
HMAC-SHA1 Key Derivation Detail
The HMAC-SHA1 response is only 20 bytes, insufficient for a 32-byte AES key directly. The HKDF-SHA256 expansion step stretches this to 32 bytes:
- The HMAC-SHA1 secret on the YubiKey is 20 bytes (160 bits), programmed at configuration time.
- The challenge (vault salt) is up to 64 bytes.
- The 20-byte HMAC output has at most 160 bits of entropy.
- HKDF’s security bound is min(input_entropy, hash_output_length) = 160 bits, which exceeds the 128-bit security target for the derived 256-bit KEK.
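The bound above is plain min() arithmetic; as a sanity check:

```rust
/// HKDF's extract-then-expand cannot create entropy: the derived key's
/// effective strength is capped by both the input keying material's entropy
/// and the hash output size, regardless of the output key length.
fn hkdf_security_bits(input_entropy_bits: u32, hash_output_bits: u32) -> u32 {
    input_entropy_bits.min(hash_output_bits)
}
```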
Integration Dependencies
| Dependency | Type | Purpose |
|---|---|---|
| pcsc-lite + libpcsclite-dev | System library | PCSC smart card access |
| pcscd | System service | Smart card daemon (must be running for PIV/PKCS#11) |
| opensc (optional) | System package | PKCS#11 module and generic smart card drivers |
| ykpers / yubikey-manager (optional) | System library/tool | YubiKey HID communication for HMAC-SHA1 |
| Rust crate: pcsc | Cargo dependency | PCSC bindings for PIV |
| Rust crate: yubikey | Cargo dependency | YubiKey PIV operations |
| Rust crate: cryptoki | Cargo dependency | PKCS#11 bindings |
| Rust crate: yubico-manager or challenge-response | Cargo dependency | HMAC-SHA1 challenge-response |
Threat Model Considerations
- HMAC-SHA1 secret extraction. The HMAC secret on a YubiKey cannot be read back after programming. Extracting it requires destructive chip analysis.
- PIN brute-force. PIV PINs have a retry counter (default 3 attempts before lockout). After lockout, the PUK (PIN Unlock Key) is required. After PUK lockout, the PIV applet must be reset (destroying all keys).
- Token loss. If the token is lost, the enrollment blob is useless without the physical device. Recovery requires an alternative enrolled factor.
- Relay attacks (HMAC-SHA1). HMAC-SHA1 challenge-response over USB HID can be relayed over a network. Touch policy (set to “always”) mitigates this by requiring physical presence.
- Relay attacks (PIV). Smart card operations over PCSC can be relayed using tools like virtualsmartcard. Touch policy on YubiKey PIV mitigates this.
- SHA-1 and HMAC-SHA1. HMAC-SHA1 is not affected by SHA-1 collision attacks. HMAC security depends on the PRF property of the compression function, not collision resistance. HMAC-SHA1 remains secure for key derivation.
See Also
- Factor Architecture – VaultAuthBackend trait definition and dispatch
- FIDO2/WebAuthn – FIDO2 mode of the same hardware (different protocol)
- TPM – Platform-bound hardware factor (non-portable)
- Policy Engine – Multi-factor combination modes
Self-Encrypting Drive (SED) / TCG Opal Backend
Status: Design Intent. No AuthFactorId variant exists for SED/Opal today. This backend is a future extension that would require adding an AuthFactorId::SedOpal variant to core-types::auth. The VaultAuthBackend trait in core-auth::backend defines the interface it would implement. This page documents the design intent.
The SED/Opal backend enables vault unlock by binding the master key to a Self-Encrypting Drive’s hardware encryption using the TCG Opal 2.0 specification. The drive’s encryption controller holds the key material, accessible only after the drive is unlocked with the correct credentials. This provides protection against physical drive theft without relying on software-layer full-disk encryption.
Relevant Standards
| Specification | Role |
|---|---|
| TCG Opal 2.0 (Trusted Computing Group) | Defines the SED management interface: locking ranges, authentication, band management, and the Security Protocol command set. |
| TCG Opal SSC (Security Subsystem Class) | Profile of TCG Storage that Opal-compliant drives implement. |
| ATA Security Feature Set | Legacy drive locking (ATA password). Opal supersedes this but some drives support both. |
| NVMe Security Send/Receive | Transport for TCG commands on NVMe drives. |
| IEEE 1667 | Silo-based authentication for storage devices (used by some USB encrypted drives). |
Core Concept: Drive-Bound Vault Keys
A Self-Encrypting Drive transparently encrypts all data written to it using a Media Encryption Key (MEK) stored in the drive controller. The MEK is wrapped by a Key Encryption Key (KEK) derived from the user’s authentication credential. Without the correct credential, the MEK cannot be unwrapped and the drive contents are cryptographically inaccessible.
The SED/Opal backend leverages this mechanism for vault key storage:
- At enrollment, the backend stores the vault master key within an Opal locking range’s DataStore table. The locking range is protected by an Opal credential that the backend manages.
- At unlock, the backend authenticates to the Opal Security Provider (SP) using the stored credential, reads the master key from the DataStore, and provides it to daemon-secrets.
Locking Range Architecture
Opal drives support multiple locking ranges. The backend uses a dedicated locking range for Open Sesame, separate from the global locking range (which may be used for full-disk encryption by the OS):
- Global Range (Range 0): Managed by the OS or firmware for full-disk encryption (e.g., sedutil-cli, BitLocker, systemd-cryptenroll).
- Dedicated Range (Range N): A small range allocated for Open Sesame DataStore usage. Contains only the encrypted vault master key blob.
If a dedicated range cannot be allocated (drive does not support multiple ranges, or all ranges are in use), the backend falls back to storing the wrapped key in the DataStore table associated with the Admin SP.
Mapping to VaultAuthBackend
factor_id()
Returns AuthFactorId::SedOpal (to be added to the enum).
backend_id()
Returns "sed-opal".
name()
Returns "Self-Encrypting Drive".
requires_interaction()
Returns AuthInteraction::None. SED unlock is non-interactive. The backend authenticates to
the drive controller programmatically using a credential derived from device-specific secrets,
not a user-entered password.
is_enrolled(profile, config_dir)
Checks whether {config_dir}/profiles/{profile}/sed-opal.enrollment exists and contains a
valid enrollment blob identifying the drive (serial number, locking range, and Opal credential
reference).
can_unlock(profile, config_dir)
- Verify enrollment exists.
- Identify the enrolled drive by serial number.
- Check that the drive is present and accessible (block device exists or can be found by serial via sysfs).
- Return true if the drive is present.
Does not attempt Opal authentication (may exceed 100ms and may trigger lockout counters on failure).
enroll(profile, master_key, config_dir, salt, selected_key_index)
- Enumerate Opal-capable drives by sending TCG Discovery 0 to each block device.
- If selected_key_index is Some(i), select the i-th drive. Otherwise select the first Opal-capable drive.
- Authenticate to the drive’s Admin SP using the SID (Security Identifier) or a pre-configured admin credential.
- Allocate or identify a locking range for Open Sesame use.
- Generate a random Opal credential for the locking range (or derive one from salt and a device-specific secret).
- Store the vault master_key in the DataStore table of the locking range.
- Lock the range, binding it to the generated credential.
- Write the enrollment blob to {config_dir}/profiles/{profile}/sed-opal.enrollment containing the drive serial, locking range index, and the Opal credential (encrypted under a key derived from salt and the machine ID).
unlock(profile, config_dir, salt)
- Load the enrollment blob.
- Derive the Opal credential (decrypt using salt and machine ID).
- Open a session to the drive’s Locking SP.
- Authenticate with the credential.
- Read the master key from the DataStore table.
- Return UnlockOutcome:
  - master_key: the 32-byte key read from the DataStore.
  - ipc_strategy: IpcUnlockStrategy::DirectMasterKey.
  - factor_id: AuthFactorId::SedOpal.
  - audit_metadata: {"drive_serial": "<serial>", "locking_range": "<N>"}.
revoke(profile, config_dir)
- Authenticate to the drive’s Admin SP.
- Erase the master key from the DataStore table (overwrite with zeros).
- Optionally release the locking range allocation.
- Delete {config_dir}/profiles/{profile}/sed-opal.enrollment.
Enrollment Blob Format
Version: u8 (1)
Drive serial: length-prefixed UTF-8
Drive model: length-prefixed UTF-8
Block device path at enrollment time: length-prefixed UTF-8 (informational; may change)
Locking range index: u16
Opal credential (encrypted): 12-byte nonce || ciphertext || 16-byte GCM tag
Machine binding hash: 32 bytes (SHA-256 of machine-id, used in credential derivation)
FactorContribution
- AuthCombineMode::Any or AuthCombineMode::Policy: The backend provides FactorContribution::CompleteMasterKey. The drive hardware releases the full master key from the DataStore.
- AuthCombineMode::All: The backend provides FactorContribution::FactorPiece. A random 32-byte piece (not the master key) is stored in the DataStore and contributed to combined HKDF derivation.
Pre-Boot Authentication
TCG Opal defines a Pre-Boot Authentication (PBA) mechanism: a small region of the drive (the Shadow MBR) is presented to the BIOS/UEFI before the main OS boots. The PBA image authenticates the user and unlocks the drive before the OS sees the encrypted data.
Open Sesame does not implement PBA. It operates entirely within the running OS. If the drive is locked at boot by system-level SED management, Open Sesame assumes the drive is already unlocked by the time daemon-secrets starts. The backend uses Opal only for its DataStore facility (key storage with hardware-gated access), not for drive-level boot locking.
Integration Dependencies
| Dependency | Type | Purpose |
|---|---|---|
| sedutil-cli | System tool | Opal drive management (enrollment, locking range setup) |
| libata / kernel NVMe driver | Kernel | ATA Security / NVMe Security Send/Receive commands |
| Rust crate: sedutil-rs or direct ioctl | Cargo dependency | Programmatic Opal SP communication |
| /dev/sdX or /dev/nvmeXnY | Block device | Target drive |
| Root or CAP_SYS_RAWIO | Privilege | Required for TCG command passthrough via SG_IO / NVMe admin commands |
Privilege Requirements
Opal commands require raw SCSI/ATA/NVMe command passthrough, which typically requires root
or CAP_SYS_RAWIO. Since daemon-secrets runs as a system service, this is consistent with
its privilege model. Enrollment and revocation also require admin-level Opal credentials
(SID or Admin1 authority).
Threat Model
Protects Against
- Physical drive theft. An attacker who steals the drive (but not the machine) cannot access the vault master key. The DataStore contents are encrypted by the drive’s MEK, inaccessible without the Opal credential.
- Offline forensic imaging. Imaging the raw drive platters or flash chips yields only ciphertext.
- Cold boot on different hardware. Moving the drive to another machine does not help because the Opal credential is derived from the original machine’s identity.
Does Not Protect Against
- Running-system compromise. Once the OS is booted and the drive is unlocked, an attacker with root access can read the DataStore contents. SED encryption is transparent to the running OS after unlock.
- DMA attacks. An attacker with physical access to the running machine can use DMA (e.g., via Thunderbolt or FireWire) to read memory containing the unlocked master key.
- SED firmware vulnerabilities. Research has demonstrated that some SED implementations have firmware bugs allowing bypass of Opal locking without the credential (e.g., the 2018 Radboud University disclosure affecting Crucial and Samsung drives). The backend cannot detect or mitigate firmware-level flaws.
- Evil-maid with machine access. If the attacker has access to the original machine (not just the drive), they can boot the machine, wait for the drive to be unlocked by the OS, and extract the master key.
Complementary Use with TPM
SED/Opal and TPM provide complementary hardware binding:
- TPM binds the vault to the boot integrity state (firmware, bootloader, Secure Boot policy). Protects against software-level boot chain tampering.
- SED/Opal binds the vault to the physical drive. Protects against drive theft.
Using both factors in AuthCombineMode::All provides defense in depth: the vault is bound
to both the machine’s boot state and the specific physical drive.
See Also
- Factor Architecture – VaultAuthBackend trait definition and dispatch
- TPM – Complementary platform-binding factor
- Policy Engine – Combining SED with other factors
Federation Factors
Status: Design Intent. No AuthFactorId variant or VaultAuthBackend implementation exists for federation today. Federation is a future capability that builds on top of the existing factor backends. This page documents the design intent for cross-device factor delegation.
Federation factors allow a user to satisfy an authentication factor on one device and have that satisfaction count toward unlocking a vault on a different device. This enables scenarios such as unlocking a headless server’s vault using a phone’s fingerprint sensor, or centrally managing vault unlock across a fleet of machines via an HSM.
Core Concepts
Factor Delegation
Factor delegation separates where a factor is satisfied from where the vault is unlocked:
- Origin device: The device where the user physically performs authentication (touches a FIDO2 key, scans a fingerprint, enters a password).
- Target device: The device where the vault resides and where daemon-secrets runs.
- Delegation token: A cryptographic proof that a specific factor was satisfied on the origin device, valid for a bounded time and scope.
The delegation token does not contain the master key. It is an authorization proof that daemon-secrets on the target device accepts in lieu of direct local factor satisfaction.
Trust Chain
Federation introduces a multi-hop trust chain:
Device Attestation -> Factor Proof -> Delegation Token -> Vault Unlock
- Device attestation. The origin device proves its identity and integrity to the target device. This may use TPM remote attestation, a pre-shared device certificate, or a Noise IK session where the origin device’s static public key is pre-enrolled.
- Factor proof. The origin device satisfies a factor locally (e.g., fingerprint verification via fprintd) and produces a signed statement: “factor F was satisfied by user U at time T on device D.”
- Delegation token. The factor proof is wrapped into a delegation token that specifies the target vault, permitted operations, expiration time, and scope restrictions.
- Vault unlock. The target device’s daemon-secrets receives the delegation token, verifies the full chain (device identity, factor proof signature, token validity, scope), and uses it to authorize the unlock.
Delegation Token Structure
Version: u8 (1)
Token ID: 16 bytes (random, for revocation and audit)
Origin device ID: 32 bytes (public key fingerprint or device certificate hash)
Factor ID: AuthFactorId (which factor was satisfied)
Factor proof signature: length-prefixed bytes (signed by origin device's attestation key)
Timestamp: u64 (Unix epoch seconds when factor was satisfied)
Expiry: u64 (Unix epoch seconds when token becomes invalid)
Target vault: length-prefixed UTF-8 (profile name on target device)
Scope: DelegationScope (see below)
Token signature: length-prefixed bytes (signed by origin device's delegation key)
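Given the timestamp, expiry, and target-vault fields above, the freshness and targeting checks a receiving daemon might run before scope evaluation can be sketched as follows (function and struct names are illustrative; no FederationReceiver exists yet, and signature/attestation verification is assumed to happen before this step):

```rust
/// The temporal fields of a delegation token (Unix epoch seconds).
struct TokenWindow {
    issued_at: u64, // when the factor was satisfied on the origin device
    expiry: u64,    // when the token becomes invalid
}

/// Accept only tokens that are already issued, not yet expired, and aimed at
/// this vault. Rejecting future-dated tokens also surfaces clock skew between
/// origin and target devices.
fn check_token(
    now: u64,
    window: &TokenWindow,
    token_vault: &str,
    local_vault: &str,
) -> Result<(), &'static str> {
    if now < window.issued_at {
        return Err("token issued in the future (clock skew?)");
    }
    if now >= window.expiry {
        return Err("token expired");
    }
    if token_vault != local_vault {
        return Err("token targets a different vault");
    }
    Ok(())
}
```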
DelegationScope
Delegation tokens carry explicit scope restrictions that limit what the token can authorize:
struct DelegationScope {
    /// Which operations the token authorizes.
    allowed_operations: Vec<DelegatedOperation>,
    /// Maximum number of times the token can be used (None = unlimited within expiry).
    max_uses: Option<u32>,
    /// If set, token is only valid from these source IP addresses.
    source_addresses: Option<Vec<IpAddr>>,
}
enum DelegatedOperation {
    /// Unlock the vault (read access to secrets).
    VaultUnlock,
    /// Unlock and modify secrets.
    VaultUnlockWrite,
    /// Unlock a specific secret by path.
    SecretAccess(String),
}
Scope Narrowing
A delegation token can only have equal or narrower scope than the factor it represents. Scope narrowing is enforced at token creation time:
- A password factor with full vault access can delegate a token that only unlocks specific secrets.
- A biometric factor can delegate a token valid for 5 minutes instead of the session duration.
- A FIDO2 factor can delegate a token restricted to a single use.
Scope can never be widened. A token scoped to SecretAccess("/ssh/id_ed25519") cannot be
used to unlock the entire vault.
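The narrowing invariant can be sketched as a subset check over operations. The specific covering rules below (write access subsumes read-only unlock, and vault-wide read subsumes access to a single secret) are assumptions for illustration; `max_uses` and source-address narrowing would follow the same pattern:

```rust
#[derive(PartialEq)]
enum DelegatedOperation {
    VaultUnlock,
    VaultUnlockWrite,
    SecretAccess(String),
}

/// Does one granted operation cover a requested one? Covering rules here
/// are illustrative assumptions, not the shipped policy.
fn covers(granted: &DelegatedOperation, requested: &DelegatedOperation) -> bool {
    use DelegatedOperation::*;
    match (granted, requested) {
        (VaultUnlockWrite, _) => true,                // write subsumes everything
        (VaultUnlock, VaultUnlock) => true,
        (VaultUnlock, SecretAccess(_)) => true,       // vault read covers one secret
        (SecretAccess(a), SecretAccess(b)) => a == b, // exact path match only
        _ => false,
    }
}

/// Enforced at token creation: every requested operation must be covered
/// by something the satisfied factor already grants. Scope never widens.
fn is_narrowing(granted: &[DelegatedOperation], requested: &[DelegatedOperation]) -> bool {
    requested.iter().all(|r| granted.iter().any(|g| covers(g, r)))
}

fn main() {
    use DelegatedOperation::*;
    let full = [VaultUnlock];
    let one_key = [SecretAccess("/ssh/id_ed25519".into())];
    assert!(is_narrowing(&full, &one_key));  // narrowing: allowed
    assert!(!is_narrowing(&one_key, &full)); // widening: rejected
}
```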
Relationship to VaultAuthBackend
Federation does not implement VaultAuthBackend directly. Instead, it wraps existing
backends:
- On the origin device, a standard VaultAuthBackend (fingerprint, FIDO2, password, etc.) performs the actual authentication.
- The origin device’s federation service creates a delegation token signed with the device’s attestation key.
- On the target device, a FederationReceiver component (a new daemon component, not a VaultAuthBackend) validates the token and translates it into an internal unlock authorization.
The target device’s daemon-secrets treats a validated delegation token as equivalent to a
local factor satisfaction for policy evaluation purposes. If the vault’s
AuthCombineMode::Policy requires AuthFactorId::Fingerprint, a delegation token proving
fingerprint satisfaction on a trusted origin device satisfies that requirement.
IPC Flow
Origin Device Target Device
───────────── ─────────────
User touches fingerprint sensor
|
v
FingerprintBackend.unlock() succeeds
|
v
FederationService creates
delegation token
|
v
Noise IK session ──────────────────> FederationReceiver
|
v
Verify device attestation
Verify factor proof signature
Check token expiry and scope
|
v
daemon-secrets accepts
factor as satisfied
|
v
Policy engine evaluates
(may need more factors)
|
v
Vault unlocked (if policy met)
FactorContribution
Federation does not change the FactorContribution type of the underlying factor. If the
delegated factor provides FactorContribution::CompleteMasterKey locally, the delegation
token authorizes release of the same master key on the target device (which must have its
own wrapped copy from a prior enrollment of that factor type).
In AuthCombineMode::All, federation cannot provide a FactorPiece remotely because the
piece must be combined locally with other pieces on the target device. Federation in All
mode requires the origin device to contribute its piece to a multi-party key derivation
protocol. This is deferred to a future design iteration (see Open Questions).
Use Cases
Unlock Server Vault from Phone
A developer manages secrets on a headless server. The server vault requires fingerprint +
password (AuthCombineMode::Policy, both required). The developer:
- Scans a fingerprint on their phone (origin device).
- The phone creates a delegation token for the server vault, scoped to VaultUnlock, expiring in 60 seconds.
- The token is sent to the server over a Noise IK session (the phone’s static public key is pre-enrolled on the server).
- The server’s daemon-secrets accepts the fingerprint factor as satisfied.
- The developer enters a password directly on the server (or via SSH).
- Both policy requirements are met; the vault unlocks.
Fleet Unlock via Central HSM
An organization operates a fleet of machines, each with a vault. A central HSM holds a master delegation key. An administrator:
- Authenticates to the HSM management interface (FIDO2 + password).
- The HSM creates delegation tokens for a set of target machines, each scoped to VaultUnlock, expiring in 5 minutes.
- Tokens are distributed to target machines via the management plane.
- Each machine’s daemon-secrets validates the token against the HSM’s pre-enrolled public key.
- Vaults unlock. The HSM never sees the vault master keys.
Emergency Break-Glass
A break-glass procedure for when normal factors are unavailable:
- An administrator authenticates to a break-glass service using a hardware token.
- The service creates a single-use delegation token (max_uses: 1) for the target vault.
- The token is transmitted to the target device.
- The vault unlocks once. The token is consumed and cannot be reused.
- All break-glass events are audit-logged with the token ID, origin device, and administrator identity.
Remote Attestation
Before a target device accepts a delegation token, it must verify that the origin device is trustworthy. Remote attestation provides this assurance.
Device Identity
Each device in the federation has a long-lived identity key pair. The public key is enrolled on peer devices during a setup ceremony. Options for the identity key:
- TPM-backed key. The device’s TPM generates a non-exportable attestation key. The public portion is enrolled on peers. This proves the origin device has not been cloned.
- Noise IK static keypair. The existing Open Sesame Noise IK transport provides mutual authentication. The origin device’s static public key is already known to the target device from IPC bus enrollment.
- X.509 certificate. A CA-issued device certificate, validated against a pinned CA public key. Suitable for organizational deployments with existing PKI.
Platform State Attestation
Optionally, the origin device can include a TPM quote (signed PCR values) in the delegation token, proving its boot integrity at the time of factor satisfaction. The target device verifies the quote against a known-good PCR policy. This prevents a compromised origin device from generating fraudulent factor proofs.
Time-Bounded Delegation Tokens
All delegation tokens have mandatory expiration:
- Minimum expiry: 10 seconds (prevents creation of tokens that expire before delivery).
- Maximum expiry: Configurable per-vault, default 300 seconds (5 minutes). Longer durations increase the window for token theft and replay.
- Clock skew tolerance: 30 seconds. The target device accepts tokens where now - 30s <= timestamp <= now + 30s and now <= expiry + 30s.
Token expiry is checked at the target device at time of use. A token that was valid when created but has since expired is rejected. There is no renewal mechanism; a new factor satisfaction and new token are required.
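The skew-tolerant acceptance rule can be transcribed directly as a pure function (a sketch, not the shipped validator; all values are Unix epoch seconds):

```rust
const CLOCK_SKEW_TOLERANCE_SECS: u64 = 30;

/// True if a token with the given timestamp and expiry is acceptable at
/// wall-clock time `now`, per the skew rules above.
fn token_times_valid(timestamp: u64, expiry: u64, now: u64) -> bool {
    let skew = CLOCK_SKEW_TOLERANCE_SECS;
    // now - 30s <= timestamp <= now + 30s, written to avoid u64 underflow
    let ts_ok = timestamp + skew >= now && timestamp <= now + skew;
    // now <= expiry + 30s
    let not_expired = now <= expiry + skew;
    ts_ok && not_expired
}

fn main() {
    assert!(token_times_valid(990, 1100, 1000));   // fresh token, within skew
    assert!(!token_times_valid(950, 1100, 1000));  // timestamp outside the skew window
    assert!(!token_times_valid(1190, 1100, 1200)); // expired even with skew allowance
}
```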
Replay Prevention
Each token has a unique random 16-byte ID. The target device maintains a set of consumed token IDs (in memory, persisted to disk for crash recovery). A token ID that has been seen before is rejected, even if the token has not expired.
The consumed-ID set is pruned of entries older than max_expiry + clock_skew_tolerance to
bound memory usage.
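The consumed-ID set with pruning can be sketched as follows (an in-memory illustration only; the names and the HashMap representation are assumptions, and the disk persistence for crash recovery is omitted):

```rust
use std::collections::HashMap;

/// Replay cache sketch: consumed token IDs with the time they were first
/// seen, pruned once they can no longer correspond to a live token.
struct ReplayCache {
    seen: HashMap<[u8; 16], u64>, // token ID -> Unix seconds first seen
    max_expiry_secs: u64,
    skew_secs: u64,
}

impl ReplayCache {
    fn new(max_expiry_secs: u64, skew_secs: u64) -> Self {
        Self { seen: HashMap::new(), max_expiry_secs, skew_secs }
    }

    /// Record a token use; returns false if the ID was already consumed.
    fn consume(&mut self, id: [u8; 16], now: u64) -> bool {
        self.prune(now);
        if self.seen.contains_key(&id) {
            return false; // replay: this ID has been seen before
        }
        self.seen.insert(id, now);
        true
    }

    /// Drop entries older than max_expiry + clock_skew_tolerance.
    fn prune(&mut self, now: u64) {
        let horizon = self.max_expiry_secs + self.skew_secs;
        self.seen.retain(|_, &mut t| now.saturating_sub(t) <= horizon);
    }
}

fn main() {
    let mut cache = ReplayCache::new(300, 30); // horizon = 330 seconds
    assert!(cache.consume([1u8; 16], 0));      // first use succeeds
    assert!(!cache.consume([1u8; 16], 100));   // replay rejected
    assert!(cache.consume([1u8; 16], 400));    // entry pruned past the horizon
}
```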
Security Considerations
- Token theft. A stolen delegation token can be used by an attacker within its validity window and scope. Mitigations: short expiry, single-use tokens (max_uses: 1), source address restrictions, and Noise IK transport encryption (tokens are never sent in plaintext).
- Origin device compromise. If the origin device is compromised, an attacker can generate arbitrary delegation tokens. Mitigations: TPM-backed attestation keys (the attacker cannot extract the signing key without a hardware attack), platform state attestation (a compromised boot state is detected), and administrative revocation of the device’s enrollment on all target devices.
- Target device compromise. If the target device is already compromised, delegation tokens are irrelevant – the attacker already has access to the running system. Federation does not increase or decrease the attack surface of a compromised target.
- Network partition. If origin and target devices cannot communicate directly, the token must be relayed through an intermediary. The token’s cryptographic signatures ensure integrity regardless of the relay path, but relay latency may cause expiry. Pre-generating tokens with longer expiry is an option for intermittently-connected environments.
- Scope escalation. The scope narrowing invariant (delegation can only narrow, never widen) is enforced at token creation on the origin device and verified at the target device. A malicious origin device could create a token with any scope up to the full permissions of the delegated factor – this is inherent to the delegation model. Trust in the origin device is a prerequisite for accepting its tokens.
Open Questions
- AuthCombineMode::All support. Federation in All mode requires multi-party key derivation where the origin device contributes its piece without revealing the combined master key. Threshold secret sharing or secure multi-party computation protocols may be needed.
- Token revocation broadcast. How does a target device learn that a token has been revoked before its natural expiry? Options include a revocation list pushed via the management plane, or making tokens short-lived enough that revocation is unnecessary.
- Multi-hop delegation. Can device A delegate to device B, which then re-delegates to device C? The current design does not support transitive delegation. Each token is signed by the origin device and validated against that device’s enrolled key directly.
- Offline token pre-generation. For air-gapped environments, tokens may need to be generated in advance with longer validity. This increases the theft window and requires careful scope restriction.
See Also
- Factor Architecture – VaultAuthBackend trait definition and dispatch
- Policy Engine – How delegated factors interact with AuthCombineMode
- Biometrics – Common delegation source (phone fingerprint)
- TPM – Remote attestation for device identity
Window Manager Daemon
The daemon-wm process implements a Wayland overlay window switcher with Vimium-style letter hints,
application launching, and inline vault unlock. It runs as a single-threaded tokio process
(current_thread runtime) connected to the IPC bus as a BusClient.
Controller State Machine
The OverlayController (controller.rs) is the single owner of all overlay state, timing, and
decisions. The main loop feeds events in, executes the returned Command list, and does nothing
else. The controller never performs I/O directly.
Phases
The controller tracks a Phase enum with the following variants:
- Idle – Nothing happening. No overlay visible, no timers running.
- Armed – Border visible, keyboard exclusive mode acquired via layer-shell. The picker is not yet visible. The controller waits for either modifier release (quick-switch) or dwell timeout (transition to Picking). Carries entered_at: Instant, selection: usize, input: String, dwell_ms: u32, and an optional PendingLaunch.
- Picking – Full picker visible. The user is browsing the window list or typing hint characters. Carries the same Snapshot, selection, input, and optional PendingLaunch.
- Launching – An application launch request has been sent to daemon-launcher via IPC. The overlay displays a status indicator while waiting for the response.
- LaunchError – A launch failed. The overlay shows an error toast. Any keystroke dismisses.
- Unlocking – Vault unlock in progress. Contains profiles_to_unlock, current_index, password_len, unlock_mode (one of AutoAttempt, WaitingForTouch, Password, Verifying), and the original launch command for retry after unlock.
Events
The controller accepts the following Event variants:
| Event | Source | Description |
|---|---|---|
Activate | IPC WmActivateOverlay | Forward activation (Alt+Tab) |
ActivateBackward | IPC WmActivateOverlayBackward | Backward activation (Alt+Shift+Tab) |
ActivateLauncher | IPC WmActivateOverlayLauncher | Launcher mode (Alt+Space) |
ActivateLauncherBackward | IPC WmActivateOverlayLauncherBackward | Launcher mode backward |
ModifierReleased | Overlay SCTK or IPC InputKeyEvent | Alt/Meta key released |
Char(char) | Overlay or IPC key event | Alphanumeric character typed |
Backspace | Overlay or IPC key event | Backspace pressed |
SelectionDown / SelectionUp | Overlay or IPC key event | Arrow/Tab navigation |
Confirm | Overlay or IPC key event | Enter pressed |
Escape / Dismiss | Overlay or IPC key event | Cancel/timeout |
DwellTimeout | Main loop deadline | Dwell timer expired |
LaunchResult | Command executor callback | Launch IPC completed |
AutoUnlockResult | Command executor callback | SSH agent unlock completed |
TouchResult | Command executor callback | Hardware token touch completed |
UnlockResult | Command executor callback | Password unlock IPC completed |
Transitions
Idle ──Activate──> Armed ──DwellTimeout──> Picking
| |
|<──────Activate──────────| (re-activation cycles selection)
| |
|──ModifierReleased──> Idle (activate selected window)
| |
|──Char──> Picking |──ModifierReleased──> Idle
| |──Escape──> Idle
| |──Confirm──> Idle
| +──launch match──> Launching
|
+──ModifierReleased (fast)──> Idle (quick-switch)
Launching ──LaunchResult(success)──> Idle
Launching ──LaunchResult(VaultsLocked)──> Unlocking
Launching ──LaunchResult(error)──> LaunchError ──any key──> Idle
Unlocking ──AutoUnlockResult(success)──> retry launch or next profile
Unlocking ──AutoUnlockResult(fail)──> Password prompt
Unlocking ──UnlockResult(success)──> retry launch or next profile
Unlocking ──Escape──> Idle
Pre-computed Snapshot
At activation time, the controller builds a Snapshot that carries all data through the entire
overlay lifecycle. The snapshot contains:
- A copy of the window list, MRU-reordered via mru::reorder() and truncated to max_visible_windows (default: 20).
- The origin window (currently focused) rotated from MRU position 0 to the last index.
- Hint strings assigned via hints::assign_app_hints(), parallel to the window list.
- Overlay-ready WindowInfo structs containing app_id and title.
- A clone of the key_bindings map for launch-or-focus resolution.
No recomputation occurs after the snapshot is built. Keyboard actions only update the selection index and input buffer.
Quick-Switch
When ModifierReleased fires during the Armed phase, the controller evaluates three conditions
in on_modifier_released():
- Elapsed time since entered_at is below quick_switch_threshold_ms (default: 250ms from WmConfig).
- No input characters have been typed (input.is_empty()).
- The selection has not moved from snap.initial_forward().
If all three hold, the controller activates initial_forward() – the MRU previous window
(index 0 after origin rotation). Otherwise, it activates the current selection.
This enables fast Alt+Tab release to instantly switch to the previously focused window without ever showing the picker overlay.
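The release decision can be sketched as a pure function over the three conditions (a simplified model; the real on_modifier_released() also handles staged launches and overlay teardown):

```rust
/// Outcome of releasing the modifier in the Armed phase.
#[derive(Debug, PartialEq)]
enum ReleaseAction {
    QuickSwitch(usize),       // instant switch, picker never shown
    ActivateSelection(usize), // normal activation of the current selection
}

/// The three quick-switch conditions, evaluated on modifier release.
fn on_modifier_released(
    elapsed_ms: u32,
    threshold_ms: u32, // quick_switch_threshold_ms, default 250
    input: &str,
    selection: usize,
    initial_forward: usize,
) -> ReleaseAction {
    if elapsed_ms < threshold_ms && input.is_empty() && selection == initial_forward {
        ReleaseAction::QuickSwitch(initial_forward)
    } else {
        ReleaseAction::ActivateSelection(selection)
    }
}

fn main() {
    // Fast release, nothing typed, selection untouched: quick-switch.
    assert_eq!(on_modifier_released(100, 250, "", 0, 0), ReleaseAction::QuickSwitch(0));
    // Held past the threshold: activate the current selection instead.
    assert_eq!(on_modifier_released(400, 250, "", 0, 0), ReleaseAction::ActivateSelection(0));
    // Typing a hint character disqualifies quick-switch.
    assert_eq!(on_modifier_released(100, 250, "f", 0, 0), ReleaseAction::ActivateSelection(0));
}
```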
Dwell Timeout
The main loop calls controller.next_deadline() on each iteration of the tokio::select! loop.
During the Armed phase, this returns entered_at + Duration::from_millis(dwell_ms). The
dwell_ms value is set to:
- quick_switch_threshold_ms (default: 250ms) for ActivationMode::Forward and ActivationMode::Backward.
- min(overlay_delay_ms, 100) for ActivationMode::Launcher and ActivationMode::LauncherBackward, providing a shorter dwell to let the compositor grant keyboard exclusivity before the first keypress.
When the deadline fires, the main loop sends Event::DwellTimeout. The controller’s
on_dwell_timeout() method transitions Armed to Picking and emits Command::ShowPicker with the
snapshot’s pre-computed overlay_windows and hints.
Reactivation
When an Activate or ActivateBackward event arrives while already in Armed or Picking (e.g.,
repeated Alt+Tab intercepted by the compositor):
- The selection index advances forward or backward by one position, wrapping via modular arithmetic over snap.windows.len().
- If in Armed, the phase transitions to Picking with Command::ShowPicker and Command::UpdatePicker.
- A Command::ResetGrace is emitted to reset the overlay’s modifier-poll grace timer, proving Alt is still held.
- last_ipc_advance is set to Instant::now(). Any SelectionDown or SelectionUp event within 100ms (REACTIVATION_DEDUP_MS) is suppressed by is_reactivation_duplicate() to prevent double-advancement from the same physical keystroke arriving via both IPC re-activation and the keyboard handler.
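The wrap-around advance and the duplicate-suppression window can be sketched as two small helpers (function signatures are illustrative; the real controller works on its Phase state):

```rust
/// Selection advance with modular wrap-around, as used on re-activation.
fn advance(selection: usize, len: usize, backward: bool) -> usize {
    if len == 0 {
        return 0;
    }
    if backward {
        (selection + len - 1) % len // avoids underflow at index 0
    } else {
        (selection + 1) % len
    }
}

const REACTIVATION_DEDUP_MS: u64 = 100;

/// True if an arrow/Tab event arrives within the dedup window of the last
/// IPC re-activation, i.e. both stem from the same physical keystroke.
fn is_reactivation_duplicate(now_ms: u64, last_ipc_advance_ms: Option<u64>) -> bool {
    matches!(last_ipc_advance_ms, Some(t) if now_ms.saturating_sub(t) < REACTIVATION_DEDUP_MS)
}

fn main() {
    assert_eq!(advance(4, 5, false), 0); // forward wraps to the top
    assert_eq!(advance(0, 5, true), 4);  // backward wraps to the end
    assert!(is_reactivation_duplicate(150, Some(100)));
    assert!(!is_reactivation_duplicate(250, Some(100)));
    assert!(!is_reactivation_duplicate(250, None));
}
```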
Staged Launch
When the user types a character in on_char() and check_hint_or_launch() finds that the input
does not match any hint (MatchResult::NoMatch) but is a single character matching a
key_bindings entry with a launch command:
- A PendingLaunch struct (containing command, tags, launch_args) is stored in the current phase via set_pending_launch().
- Command::ShowLaunchStaged { command } is emitted to display the intent in the overlay.
- The launch is not executed immediately.
Commitment occurs when:
- ModifierReleased: on_modifier_released() checks for pending_launch before window activation. If present, the controller transitions to Phase::Launching and emits Command::ShowLaunching followed by Command::LaunchApp.
- Confirm (Enter): on_confirm() follows the same path.
- Backspace: If input.pop() empties the buffer, pending_launch is set to None.
- Escape: on_escape() dismisses the overlay entirely, clearing all state.
Overlay Lifecycle
SCTK Layer-Shell Surface
The overlay runs on a dedicated OS thread spawned by overlay::spawn_overlay(), communicating
with the tokio event loop via std::sync::mpsc (commands in) and tokio::sync::mpsc (events
out). The OverlayApp struct holds all Wayland state and creates a wlr-layer-shell surface
with:
- Layer::Overlay – renders above all other surfaces.
- Anchor::TOP | Anchor::BOTTOM | Anchor::LEFT | Anchor::RIGHT – fullscreen coverage.
- KeyboardInteractivity::Exclusive – captures all keyboard input when visible.
The overlay thread runs a manual poll loop using prepare_read() and rustix::event::poll() for
low-latency Wayland event dispatch, draining the command channel every POLL_INTERVAL_MS (4ms).
Show/Hide
- ShowBorder: Creates the layer-shell surface if absent. Sets OverlayPhase::BorderOnly. Acquires keyboard exclusivity. Records activated_at for the stale-activation timeout.
- ShowFull: Stores the windows and hints vectors, transitions to OverlayPhase::Full, and triggers a redraw.
- HideAndSync: Destroys the surface, performs a Wayland display sync via wl_display.roundtrip(), then sends OverlayEvent::SurfaceUnmapped as acknowledgment. The main loop’s execute_commands() waits up to 5 seconds for this event before proceeding with window activation. This ensures the compositor no longer sees the exclusive-keyboard surface before focus transfers.
- Hide: Destroys the surface without synchronization. Used for escape/dismiss where no subsequent window activation is needed.
Modifier Tracking
The overlay tracks alt_held via the SCTK KeyboardHandler’s modifier callback. After
activation, a grace period (MODIFIER_POLL_GRACE_MS = 150ms) prevents premature modifier-release
detection. If no keyboard event arrives within STALE_ACTIVATION_TIMEOUT_MS (3000ms), the overlay
sends OverlayEvent::Dismiss to handle cases where Alt was released before keyboard focus was
granted.
The ConfirmKeyboardInput command from the main loop (sent on the first IPC key event) sets
received_key_event = true, disabling the stale activation timeout.
Overlay Phases
The overlay thread tracks OverlayPhase: Hidden, BorderOnly, Full, Launching,
LaunchError, UnlockPrompt, UnlockProgress. Each phase determines what the render module
draws.
Rendering
The render.rs module implements software rendering using two libraries:
- tiny-skia: 2D path operations. rounded_rect_path() builds quadratic Bezier paths for rounded rectangles. fill_rounded_rect() and stroke_rounded_rect() render filled and stroked shapes onto a tiny_skia::Pixmap. Layout follows a Material Design 4-point grid with base constants: padding (20px), row height (48px), row spacing (8px), badge dimensions (48x32px), badge radius (8px), app column width (180px), text size (16px), border width (3px), corner radius (16px), and column gap (16px). All dimensions scale with HiDPI via the Layout struct.
- cosmic-text: Text shaping, layout, and glyph rasterization. FontSystem manages font discovery and caching. SwashCache provides glyph rasterization. Text is measured with measure_text() (returns width and height) and drawn with draw_text(), both operating on Buffer objects with configurable Attrs (family, weight) and Metrics (font size, line height at 1.3x).
Theme
OverlayTheme defines colors for: background, card_background, card_border,
text_primary, text_secondary, badge_background, badge_text,
badge_matched_background, badge_matched_text, selection_highlight, border_color, plus
border_width and corner_radius. Theme construction follows a priority chain:
- COSMIC system theme: OverlayTheme::from_cosmic() loads platform_linux::cosmic_theme::CosmicTheme and maps its semantic color tokens (background.base, primary.base, primary.on, secondary.component.base, accent.base, accent.on, corner_radii.radius_m) to overlay theme fields.
- User config overrides: OverlayTheme::from_config() compares each WmConfig color field against its default. Non-default values override the COSMIC-derived theme.
- Hardcoded defaults: Dark theme with a Catppuccin-inspired palette (#89b4fa border, #000000c8 background, #1e1e1ef0 cards, #646464 badges, #4caf50 matched badges).
Colors are parsed from CSS hex notation (#RRGGBB or #RRGGBBAA) via Color::from_hex().
Theme updates arrive via OverlayCmd::UpdateTheme on config hot-reload.
Rendered Elements
- Border-only phase: A border indicator around the screen edges.
- Full picker: A centered card with: hint badges (letter hints with badge_background or badge_matched_background depending on match state), an app ID column (optional, controlled by show_app_id), and a title column per window row. The selected row receives a selection_highlight background. An input buffer is displayed for typed characters.
- Launch status: Staged launch intent, launching indicator, or error messages.
- Unlock prompt: Profile name, dot-masked password field (receives only password_len, never password bytes), and optional error message.
- Unlock progress: Profile name with status message (e.g., “Authenticating…”, “Verifying…”, “Touch your security key…”).
MRU Stack
The mru.rs module maintains a file-based most-recently-used window stack at
~/.cache/open-sesame/mru. The cache directory is created with mode 0o700 on Unix.
File Format
One window ID per line, most recent first. The stack is capped at MAX_ENTRIES (64).
Operations
- load(): Opens the file with a shared flock (LOCK_SH | LOCK_NB – never blocks the tokio thread). Parses one ID per line, trimming whitespace and filtering empty lines. Returns MruState containing the ordered stack: Vec<String>.
- save(target): Opens the file with an exclusive flock (LOCK_EX | LOCK_NB). Reads the current stack, removes target from its old position via retain(), inserts it at index 0, truncates to 64 entries, and writes back as newline-joined text. No-op if target is already at position 0.
- seed_if_empty(windows): On first launch or after a crash, seeds the stack from the compositor’s window list. The focused window goes to position 0. No-op if the stack already has entries.
- reorder(windows, get_id, state): Sorts a window slice by MRU stack position. Windows present in the stack sort by their position (0 = most recent). Windows not in the stack receive usize::MAX and sort after all tracked windows, preserving their relative compositor order.
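The reorder operation can be sketched with string IDs (the real function is generic over a get_id accessor and an MruState; this simplified form shows the sorting rule):

```rust
/// Sketch of mru::reorder(): tracked windows sort by stack position
/// (0 = most recent); untracked windows sort after all tracked ones and
/// keep their relative compositor order because the sort is stable.
fn reorder(windows: &mut [String], stack: &[String]) {
    windows.sort_by_key(|w| {
        stack.iter().position(|s| s == w).unwrap_or(usize::MAX)
    });
}

fn main() {
    // "c" was used most recently, then "a"; "b" and "d" are untracked.
    let stack = vec!["c".to_string(), "a".to_string()];
    let mut windows: Vec<String> =
        ["a", "b", "c", "d"].iter().map(|s| s.to_string()).collect();
    reorder(&mut windows, &stack);
    assert_eq!(windows, vec!["c", "a", "b", "d"]);
}
```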
Origin Tracking
After mru::reorder(), the currently focused window (MRU position 0) sits at the beginning of
the sorted list. Snapshot::build() then rotates it to the end via remove() + push(). The
result:
- Index 0 = MRU previous (the quick-switch target).
- Last index = origin (currently focused, lowest switch priority).
- initial_forward() returns 0 unless that is the origin, in which case it returns 1.
- initial_backward() returns the last index unless that is the origin, in which case it returns last - 1.
The origin window remains in the list for display and is reachable by full-circle cycling or explicit hint selection.
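The rotation and initial-selection rules can be sketched as follows. Passing the origin index explicitly is an assumption for illustration; in the real Snapshot it is implicit in the rotated list:

```rust
/// Snapshot::build() rotation: move the focused window (MRU position 0)
/// to the end so index 0 becomes the quick-switch target.
fn rotate_origin(windows: &mut Vec<String>) {
    if windows.len() > 1 {
        let origin = windows.remove(0);
        windows.push(origin);
    }
}

/// initial_forward(): 0 unless that index is the origin, then 1.
fn initial_forward(origin: usize) -> usize {
    if origin == 0 { 1 } else { 0 }
}

/// initial_backward(): the last index unless that is the origin, then last - 1.
fn initial_backward(last: usize, origin: usize) -> usize {
    if origin == last { last.saturating_sub(1) } else { last }
}

fn main() {
    let mut windows: Vec<String> =
        ["focused", "prev", "older"].iter().map(|s| s.to_string()).collect();
    // MRU order puts the focused window first; rotation moves it last.
    rotate_origin(&mut windows);
    assert_eq!(windows, vec!["prev", "older", "focused"]);

    let last = windows.len() - 1; // origin index after rotation
    assert_eq!(initial_forward(last), 0);        // MRU previous: quick-switch target
    assert_eq!(initial_backward(last, last), 1); // skip the origin going backward
}
```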
Inline Vault Unlock
When a launch request returns a LaunchDenial::VaultsLocked { locked_profiles } denial,
on_launch_result() transitions to Phase::Unlocking without dismissing the overlay. The phase
stores the locked_profiles list, a current_index into it, and the original retry_command,
retry_tags, and retry_launch_args for replay after unlock.
Unlock Flow
- Auto-unlock attempt (Command::AttemptAutoUnlock): The commands_unlock::attempt_auto_unlock() handler reads the vault’s salt file from {config_dir}/vaults/{profile}.salt, creates a core_auth::AuthDispatcher, calls find_auto_backend() to locate an SSH agent enrollment, and invokes auto_backend.unlock(). On success, the resulting master key is transferred into SensitiveBytes::from_protected() and sent to daemon-secrets via SshUnlockRequest IPC with a 30-second timeout. The AutoUnlockResult event is fed back through the controller.
- Touch prompt: If the auto-unlock backend sets needs_touch = true, the controller transitions to UnlockMode::WaitingForTouch and emits Command::ShowTouchPrompt. The overlay displays “Touch your security key for {profile}…”.
- Password fallback: If auto-unlock fails (no backend available, agent error, or secrets rejection), the controller transitions to UnlockMode::Password and emits Command::ShowPasswordPrompt. Password bytes are accumulated in a SecureVec (pre-allocated with mlock via SecureVec::for_password()). The overlay receives only password_len via OverlayCmd::ShowUnlockPrompt, never password bytes.
- Password submission (Command::SubmitPasswordUnlock): On Enter, commands_unlock::submit_password_unlock() copies the password from SecureVec into SensitiveBytes::from_slice() (an mlock-to-mlock copy with no heap exposure), clears the SecureVec immediately, shows “Verifying…” in the overlay, and sends UnlockRequest IPC to daemon-secrets with a 30-second timeout (accommodating Argon2id KDF with high memory parameters). AlreadyUnlocked responses are treated as success.
- Multi-profile unlock: If multiple profiles are locked, advance_to_next_profile_or_retry() increments current_index and starts the auto-unlock flow for the next profile.
- Retry: After all profiles are unlocked, the controller emits Command::ActivateProfiles (sends ProfileActivate IPC for each profile) followed by Command::LaunchApp with the original command, tags, and launch args.
Security Properties
- Password bytes never cross the thread boundary to the render thread. The overlay receives only password_len: usize.
- SecureVec uses mlock to prevent swap and core-dump exposure.
- SensitiveBytes uses ProtectedAlloc for the IPC transfer to daemon-secrets.
- The password buffer is zeroized via Command::ClearPasswordBuffer on escape, successful unlock, or any transition out of the Unlocking phase.
Keyboard Input
Keyboard events arrive from two sources:
- SCTK keyboard handler: The overlay’s wlr-layer-shell surface receives Wayland keyboard events when it holds KeyboardInteractivity::Exclusive. The KeyboardHandler implementation maps KeyEvent and Modifiers to OverlayEvent variants.
- IPC InputKeyEvent: daemon-input forwards evdev keyboard events over the IPC bus when a grab is active. The main loop maps these via map_ipc_key_to_event() to controller Event variants.
Both sources pass through a shared KeyDeduplicator instance (8-entry ring buffer, 50ms expiry
window, direction-aware) to ensure only the first arrival of each physical keystroke is processed.
When the overlay activates, Command::ShowBorder triggers an InputGrabRequest publish to
acquire keyboard forwarding from daemon-input. On hide (Command::HideAndSync or
Command::Hide), InputGrabRelease is published. The first IPC key event each activation cycle
sends OverlayCmd::ConfirmKeyboardInput to the overlay thread, setting
ipc_keyboard_active = true and stopping the stale activation timeout.
IPC Interface
| Message | Response | Description |
|---|---|---|
WmListWindows | WmListWindowsResponse { windows } | Returns MRU-reordered window list |
WmActivateWindow { window_id } | WmActivateWindowResponse { success } | Activates a window by ID or app_id match, saves MRU state |
WmActivateOverlay | – | Triggers forward overlay activation |
WmActivateOverlayBackward | – | Triggers backward overlay activation |
WmActivateOverlayLauncher | – | Triggers launcher-mode activation |
WmActivateOverlayLauncherBackward | – | Triggers launcher-mode backward activation |
InputKeyEvent | – | Keyboard event from daemon-input (processed only when not idle) |
KeyRotationPending | – | Reconnects with rotated keypair via BusClient::handle_key_rotation() |
Process Hardening
On Linux, daemon-wm applies the following security measures:
- platform_linux::security::harden_process() for process-level hardening.
- Resource limits: nofile = 4096, memlock_bytes = 0.
- core_types::init_secure_memory() probes memfd_secret and initializes secure memory before the sandbox is applied.
- Landlock filesystem sandbox via daemon_wm::sandbox::apply_sandbox(), applied after the IPC keypair read and bus connection but before IPC traffic processing.
- systemd watchdog notification every 15 seconds via platform_linux::systemd::notify_watchdog(), with platform_linux::systemd::notify_ready() called at startup.
Configuration
The WmConfig struct (core-config/src/schema_wm.rs) provides:
| Field | Type | Default | Description |
|---|---|---|---|
hint_keys | String | "asdfghjkl" | Characters used for hint assignment |
overlay_delay_ms | u32 | 150 | Dwell delay before showing full picker |
activation_delay_ms | u32 | 200 | Delay after activation before dismiss |
quick_switch_threshold_ms | u32 | 250 | Fast-release threshold for instant switch |
border_width | f32 | 4.0 | Border width in pixels |
border_color | String | "#89b4fa" | Border color (CSS hex) |
background_color | String | "#000000c8" | Overlay background (hex with alpha) |
card_color | String | "#1e1e1ef0" | Card background color |
text_color | String | "#ffffff" | Primary text color |
hint_color | String | "#646464" | Hint badge color |
hint_matched_color | String | "#4caf50" | Matched hint badge color |
key_bindings | BTreeMap | (see Hints) | Per-key app bindings |
show_title | bool | true | Show window titles in overlay |
show_app_id | bool | false | Show app IDs in overlay |
max_visible_windows | u32 | 20 | Maximum windows in picker |
Configuration hot-reloads via core_config::ConfigWatcher. When the watcher fires, the main loop
reads the new WmConfig, builds an OverlayTheme::from_config(), sends
OverlayCmd::UpdateTheme to the overlay thread, updates the shared wm_config mutex, and
publishes ConfigReloaded on the IPC bus.
Compositor Backend
Window list polling runs on a dedicated OS thread named wm-winlist-poll because the compositor
backend (platform_linux::compositor::CompositorBackend) performs synchronous Wayland roundtrips
with libc::poll(). On the current_thread tokio runtime, this would block all IPC message
processing. The thread calls backend.list_windows() every 2 seconds, sending results to the
tokio runtime via a tokio::sync::mpsc channel.
If platform_linux::compositor::detect_compositor() fails (e.g., no
wlr-foreign-toplevel-management protocol support), daemon-wm falls back to a D-Bus focus
monitor (platform_linux::compositor::focus_monitor). This monitor receives
FocusEvent::Focus(app_id) and FocusEvent::Closed(app_id) events, maintaining a synthetic
window list by tracking focus changes and window closures.
Dependencies
The daemon-wm crate depends on the following workspace crates: core-types, core-config,
core-ipc, core-crypto, core-auth, core-profile. External dependencies include
smithay-client-toolkit (SCTK), wayland-client, wayland-protocols-wlr, tiny-skia, and
cosmic-text, all gated behind the wayland feature (enabled by default). The platform-linux
crate is used with the cosmic feature for compositor backend and theme integration.
Hint Assignment
The hints module (daemon-wm/src/hints.rs) assigns letter-based hints to windows for
keyboard-driven selection. Hints follow a Vimium-style model where each window receives a unique
string of repeated characters that the user types to select it.
Hint Assignment Algorithm
The assign_hints(count, hint_keys) function generates hints from a configurable key set string
(default: "asdfghjkl" via WmConfig). For N windows and K available keys in the key set:
- The first K windows each receive a single character: a, s, d, f, …
- The next K windows receive doubled characters: aa, ss, dd, ff, …
- The pattern continues with tripled characters, and so on.
Each key is used once at each repetition level before any key repeats at the next level. For
example, with hint_keys = "asd" and 5 windows, the assigned hints are: a, s, d, aa,
ss.
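The level-based assignment can be sketched as follows. This is a minimal illustration: window i receives key (i mod K) repeated (i div K) + 1 times. The function name mirrors the docs, but the signature and return type are assumptions.

```rust
/// Sketch of repetition-level hint assignment: window i gets
/// key[i % K] repeated (i / K) + 1 times. Assumes hint_keys is non-empty.
fn assign_hints(count: usize, hint_keys: &str) -> Vec<String> {
    let keys: Vec<char> = hint_keys.chars().collect();
    let k = keys.len();
    (0..count)
        .map(|i| {
            let reps = i / k + 1; // repetition level (1 = single char, 2 = doubled, ...)
            let key = keys[i % k]; // which key at this level
            std::iter::repeat(key).take(reps).collect()
        })
        .collect()
}
```

With hint_keys = "asd" and five windows this reproduces the a, s, d, aa, ss sequence from the example above.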
This function is used internally. The primary entry point for daemon-wm is assign_app_hints(),
which groups windows by application before assigning.
App Grouping
The assign_app_hints(app_ids, key_bindings) function groups windows by their resolved base key
character before assigning hints. Windows sharing the same base key receive consecutive
repetitions of that character.
For a window list containing two Firefox instances and one Ghostty:
- Firefox window 1: f
- Firefox window 2: ff
- Ghostty window 1: g
The function returns (hint_string, original_index) pairs sorted by original window index,
preserving display order.
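A simplified sketch of the grouping behavior, assuming the base key for each window has already been resolved (the real function takes app IDs and key bindings; the slice-of-chars input here is an illustrative stand-in):

```rust
use std::collections::BTreeMap;

// Sketch: windows sharing a base key get consecutive repetitions of that
// key; results are returned in original (display) order.
fn assign_app_hints(base_keys: &[char]) -> Vec<(String, usize)> {
    let mut counts: BTreeMap<char, usize> = BTreeMap::new();
    let mut out: Vec<(String, usize)> = Vec::new();
    for (idx, &key) in base_keys.iter().enumerate() {
        let n = counts.entry(key).or_insert(0);
        *n += 1; // next repetition level for this key
        out.push((std::iter::repeat(key).take(*n).collect(), idx));
    }
    out.sort_by_key(|&(_, idx)| idx); // preserve display order
    out
}
```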
Key Selection
The base key for each application is determined by key_for_app(app_id, key_bindings) with the
following priority:
1. Explicit Config Override
The key_bindings map in WmConfig allows explicit key-to-app mapping. Each WmKeyBinding
entry contains an apps list of app ID patterns:
[profiles.default.wm.key_bindings.f]
apps = ["firefox", "org.mozilla.firefox"]
launch = "firefox"
key_for_app() iterates all key bindings and checks each pattern against the app ID using three
comparisons:
- Exact match: pattern == app_id
- Case-insensitive match: pattern.to_lowercase() == app_id.to_lowercase()
- Last-segment match: the reverse-DNS last segment of app_id (lowercased) equals the pattern (lowercased). For org.mozilla.firefox, the last segment is firefox.
The first matching binding’s key character is returned.
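The three comparisons can be sketched like this; the (char, Vec<String>) binding representation is an assumption standing in for the real WmKeyBinding map:

```rust
// Sketch of key_for_app: first binding whose pattern matches by exact,
// case-insensitive, or last-segment comparison wins.
fn key_for_app(app_id: &str, bindings: &[(char, Vec<String>)]) -> Option<char> {
    // Reverse-DNS last segment, lowercased ("org.mozilla.firefox" -> "firefox").
    let last_segment = app_id.rsplit('.').next().unwrap_or(app_id).to_lowercase();
    for (key, patterns) in bindings {
        for pat in patterns {
            if pat == app_id
                || pat.to_lowercase() == app_id.to_lowercase()
                || pat.to_lowercase() == last_segment
            {
                return Some(*key);
            }
        }
    }
    None
}
```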
2. Auto-Key Detection
If no explicit binding matches, auto_key_for_app(app_id) extracts the first alphabetic
character from the last segment of the app ID (split on .):
- com.mitchellh.ghostty – last segment is ghostty, auto-key is g.
- firefox – no dots, the full string is the segment, auto-key is f.
- microsoft-edge – auto-key is m.
The character is lowercased. If no alphabetic character is found, None is returned and
assign_app_hints() falls back to 'a'.
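Auto-key extraction amounts to a few iterator steps; a minimal sketch matching the described rules:

```rust
// Sketch: first alphabetic character of the last dot-separated segment,
// lowercased; None if the segment has no alphabetic character.
fn auto_key_for_app(app_id: &str) -> Option<char> {
    app_id
        .rsplit('.')
        .next()? // last segment (or the whole string when there are no dots)
        .chars()
        .find(|c| c.is_alphabetic())
        .map(|c| c.to_ascii_lowercase())
}
```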
Default Key Bindings
WmConfig::default() ships with bindings for common applications:
| Key | Applications | Launch Command |
|---|---|---|
| g | ghostty, com.mitchellh.ghostty | ghostty |
| f | firefox, org.mozilla.firefox | firefox |
| e | microsoft-edge | microsoft-edge |
| c | chromium, google-chrome | – |
| v | code, Code, cursor, Cursor | code |
| n | nautilus, org.gnome.Nautilus | nautilus |
| s | slack, Slack | slack |
| d | discord, Discord | discord |
| m | spotify | spotify |
| t | thunderbird | thunderbird |
Numeric Shorthand
The normalize_input() function expands numeric suffixes before matching. This allows users to
type a2 instead of aa, or f3 instead of fff:
- a2 normalizes to aa
- a3 normalizes to aaa
- f1 normalizes to f
Expansion rules:
- The input must be at least 2 characters long.
- The trailing characters must all be ASCII digits.
- The leading characters must all be the same letter (e.g., a or aa, but not ab).
- The numeric value must be between 1 and 26 inclusive.
If any rule is violated, the input is returned as-is (lowercased). Mixed-character inputs like
ab2 are not expanded because the letter prefix contains non-identical characters.
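The expansion rules can be sketched directly from the list above (illustrative only; byte-length checks assume ASCII input, which hint keys are):

```rust
// Sketch of numeric shorthand expansion: "a2" -> "aa", "f1" -> "f".
// Violating any rule returns the input lowercased, unexpanded.
fn normalize_input(input: &str) -> String {
    let lower = input.to_lowercase();
    let letters: String = lower.chars().take_while(|c| c.is_ascii_alphabetic()).collect();
    let digits = &lower[letters.len()..];
    if lower.len() >= 2
        && !letters.is_empty()
        && !digits.is_empty()
        && digits.chars().all(|c| c.is_ascii_digit())
        // leading characters must all be the same letter
        && letters.chars().all(|c| Some(c) == letters.chars().next())
    {
        if let Ok(n) = digits.parse::<usize>() {
            if (1..=26).contains(&n) {
                let ch = letters.chars().next().unwrap();
                return std::iter::repeat(ch).take(n).collect();
            }
        }
    }
    lower
}
```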
Case-Insensitive Matching
All input is lowercased by normalize_input() before matching. Typing S matches the hint s.
This applies to both direct character matching and numeric shorthand expansion.
Match Results
The match_input(input, hints) function normalizes the input and returns one of three
MatchResult variants:
- Exact(index) – Exactly one hint equals the normalized input, and no other hints share it as a prefix. The controller selects this window.
- Partial(indices) – Multiple hints start with the normalized input. This includes cases where one hint is an exact match but others share the same prefix (e.g., typing a with hints a, aa, aaa yields Partial([0, 1, 2])). The controller updates the display but does not commit a selection.
- NoMatch – No hint starts with the normalized input. The controller checks for a launch command binding.
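The three-way classification can be sketched as follows, assuming the input has already been normalized (the real function calls normalize_input() first):

```rust
// Sketch of the three-way match classification over normalized input.
#[derive(Debug, PartialEq)]
enum MatchResult {
    Exact(usize),
    Partial(Vec<usize>),
    NoMatch,
}

fn match_input(input: &str, hints: &[&str]) -> MatchResult {
    // Indices of every hint that has the input as a prefix.
    let matches: Vec<usize> = hints
        .iter()
        .enumerate()
        .filter(|(_, h)| h.starts_with(input))
        .map(|(i, _)| i)
        .collect();
    match matches.as_slice() {
        [] => MatchResult::NoMatch,
        // Exact only when the sole prefix match equals the input outright.
        [i] if hints[*i] == input => MatchResult::Exact(*i),
        _ => MatchResult::Partial(matches),
    }
}
```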
Focus-or-Launch
When check_hint_or_launch() in the controller receives MatchResult::NoMatch and the input
buffer contains exactly one character, it calls hints::launch_for_key(key, key_bindings). If a
launch command exists for that key:
- A PendingLaunch is staged via set_pending_launch(), containing the command string, tags, and launch args from the binding.
- The overlay displays the staged intent via Command::ShowLaunchStaged.
- The launch executes on modifier release or Enter (see Staged Launch).
If no launch command is configured for the key, the input is treated as a filter with no matches.
Tags and Launch Args
Each WmKeyBinding can carry tags and launch_args fields:
[profiles.default.wm.key_bindings.g]
apps = ["ghostty"]
launch = "ghostty"
tags = ["dev-rust", "ai-tools"]
launch_args = ["--working-directory=/workspace"]
- tags_for_key(key, key_bindings) returns the tags vector for the matching key. Tags are forwarded to daemon-launcher in the LaunchExecute IPC message for launch profile composition (environment variable injection, secret fetching, Nix devshell activation). Tags support qualified cross-profile references using colon syntax (e.g., "work:corp").
- launch_args_for_key(key, key_bindings) returns the launch_args vector. These are appended to the launched command’s argument list.
Both functions perform case-insensitive key lookup by lowercasing the input character before
looking up the BTreeMap.
Clipboard Daemon
The daemon-clipboard process manages per-profile clipboard history with sensitivity
classification. It runs as a single-threaded tokio process (current_thread runtime) connected
to the Noise IK IPC bus as a BusClient.
Storage
Clipboard entries are stored in a SQLite database at ~/.cache/open-sesame/clipboard.db, opened
via rusqlite::Connection. The parent directory is created if absent. The schema consists of a
single table:
CREATE TABLE IF NOT EXISTS clipboard_entries (
entry_id TEXT PRIMARY KEY,
profile_id TEXT NOT NULL,
content TEXT NOT NULL,
content_type TEXT NOT NULL DEFAULT 'text/plain',
sensitivity TEXT NOT NULL DEFAULT 'public',
preview TEXT NOT NULL,
timestamp_ms INTEGER NOT NULL
);
CREATE INDEX IF NOT EXISTS idx_clipboard_profile
ON clipboard_entries(profile_id, timestamp_ms DESC);
The index on (profile_id, timestamp_ms DESC) supports efficient per-profile history queries
ordered by recency.
Per-Profile History
All clipboard entries are associated with a profile_id. Queries filter by profile, ensuring
that clipboard history from one trust profile is not visible to another. This scoping is enforced
at the storage layer – every SELECT, DELETE, and aggregate query includes a
WHERE profile_id = ? predicate.
Sensitivity Classification
Each clipboard entry carries a sensitivity field stored as a text string in SQLite and mapped
to the SensitivityClass enum on read:
| Value | Enum Variant | Description |
|---|---|---|
| public | SensitivityClass::Public | Non-sensitive content |
| confidential | SensitivityClass::Confidential | Internal or business data |
| secret | SensitivityClass::Secret | Credentials, tokens |
| topsecret | SensitivityClass::TopSecret | High-value secrets |
Unknown string values default to Public. The entry_id field uses UUIDv7
(uuid::Uuid::now_v7()), providing time-ordered unique identifiers.
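A sketch of the read-side mapping, with the enum reproduced locally for illustration (the real SensitivityClass lives in a core crate):

```rust
// Sketch: stored text values map to enum variants; anything
// unrecognized defaults to Public, matching the documented behavior.
#[derive(Debug, PartialEq)]
enum SensitivityClass {
    Public,
    Confidential,
    Secret,
    TopSecret,
}

fn sensitivity_from_str(s: &str) -> SensitivityClass {
    match s {
        "confidential" => SensitivityClass::Confidential,
        "secret" => SensitivityClass::Secret,
        "topsecret" => SensitivityClass::TopSecret,
        _ => SensitivityClass::Public, // unknown values default to Public
    }
}
```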
IPC Interface
The daemon handles the following IPC messages:
| Message | Response | Description |
|---|---|---|
| ClipboardHistory | ClipboardHistoryResponse | Returns the most recent limit entries for a profile |
| ClipboardGet | ClipboardGetResponse | Retrieves full content for a specific entry by UUID |
| ClipboardClear | ClipboardClearResponse | Deletes all clipboard entries for a profile |
| KeyRotationPending | – | Reconnects with a rotated IPC keypair |
The ClipboardHistory response includes entry_id, content_type, sensitivity, profile_id,
preview, and timestamp_ms per entry. The content field is not included in history responses
to avoid transmitting large payloads over IPC. Use ClipboardGet to retrieve full content.
All IPC responses are correlated to the original request via Message::with_correlation(msg.msg_id).
Process Hardening
On Linux, daemon-clipboard applies the following security measures:
- platform_linux::security::harden_process() for process-level hardening.
- Resource limits: nofile = 4096, memlock_bytes = 0.
- core_types::init_secure_memory() for memfd_secret probing.
- Landlock filesystem sandbox restricting access to:
  - IPC key directory ($XDG_RUNTIME_DIR/pds/keys/) – read-only.
  - Bus public key ($XDG_RUNTIME_DIR/pds/bus.pub) – read-only.
  - Bus socket ($XDG_RUNTIME_DIR/pds/bus.sock) – read-write.
  - Wayland socket ($XDG_RUNTIME_DIR/$WAYLAND_DISPLAY) – read-write.
  - Cache directory (~/.cache/open-sesame/) – read-write (for the SQLite database).
  - Config symlink targets (e.g., /nix/store paths) – read-only.
- Seccomp syscall filter with an allowlist including: SQLite-relevant syscalls (fsync, fdatasync, flock, pread64, lseek), Wayland protocol syscalls (socket, connect, sendmsg, recvmsg), inotify syscalls for config hot-reload, and memfd_secret for secure memory.
- The sandbox panics on application failure ("refusing to run unsandboxed"), ensuring the daemon never operates without confinement.
Configuration
The daemon loads configuration via core_config::load_config() and establishes a ConfigWatcher
with a callback channel for hot-reload. On config change, the callback sends a notification, and
the event loop publishes ConfigReloaded { changed_keys: ["clipboard"] } to the IPC bus.
Lifecycle
- Startup: Process hardening, directory bootstrap, config load, IPC bus connection with keypair retry (5 attempts, 500ms interval), sandbox application.
- Announcement: Publishes DaemonStarted { capabilities: ["clipboard", "history"] }.
- Readiness: Calls platform_linux::systemd::notify_ready().
- Event loop: tokio::select! over watchdog timer (15s), IPC messages, config reload notifications, SIGINT, and SIGTERM.
- Shutdown: Publishes DaemonStopped { reason: "shutdown" }.
Input Daemon
The daemon-input process captures keyboard events via Linux evdev and forwards them over the
Noise IK IPC bus for consumption by daemon-wm. It runs as a single-threaded tokio process
(current_thread runtime) connected to the IPC bus as a BusClient.
Device Discovery
The spawn_keyboard_readers() function (keyboard.rs) enumerates input devices via
platform_linux::input::enumerate_devices(), filters to those with is_keyboard = true, and
opens each as an async EventStream via platform_linux::input::open_keyboard_stream().
One tokio task is spawned per keyboard device. All tasks funnel events into a single mpsc
channel with a buffer size of 256. If no keyboard devices are found (typically because the user
is not in the input group), the function logs a warning with remediation advice
(sudo usermod -aG input $USER) and returns an empty receiver. This is non-fatal – daemon-wm
falls back to SCTK keyboard input from its layer-shell surface.
Event Reading
Each reader task processes evdev events via stream.next_event().await in a loop. Only
EventSummary::Key events are forwarded:
- value 0: Key release – forwarded as RawKeyEvent { keycode, pressed: false }.
- value 1: Key press – forwarded as RawKeyEvent { keycode, pressed: true }.
- value 2: Key repeat – skipped. Repeat handling is left to the consumer.
The keycode field contains the evdev hardware scan code (e.g., 30 for KEY_A). The keycode is
cast from evdev::Key to u32 via keycode.0 as u32. On read errors (device disconnect,
permission denied), the task logs a warning and returns, ending that device’s reader.
XKB Keysym Translation
The XkbContext struct wraps an xkbcommon::xkb::State initialized with the system’s default
keymap. XkbContext::new() calls Keymap::new_from_names() with empty strings for rules, model,
layout, and variant (meaning system defaults), and KEYMAP_COMPILE_NO_FLAGS. It returns None
if xkbcommon fails to initialize (missing XKB data files).
Translation Process
process_key(evdev_keycode, pressed) translates a raw evdev event into a KeyboardEvent:
- Offset: Adds the XKB offset (xkb_keycode = evdev_keycode + 8) because evdev keycodes are offset by 8 from XKB keycodes.
- Pre-read: Reads the keysym via state.key_get_one_sym() and the UTF-32 character via state.key_get_utf32() before updating state. This ordering is critical: when the Alt key itself is pressed, the modifier mask returned by active_modifiers() must not yet include Alt, ensuring correct modifier-release detection on the receiving end (daemon-wm).
- Modifiers: Calls active_modifiers() to build the current modifier bitmask.
- State update: Calls state.update_key() with the key direction after reading.
- Unicode: The unicode field is populated only on key press (pressed == true) and only when key_get_utf32() returns a non-zero value.
Modifier Bitmask
The active_modifiers() method queries four XKB named modifiers and maps them to GDK-compatible
bit positions:
| Modifier | XKB Constant | Bit Position | GDK Name |
|---|---|---|---|
| Shift | MOD_NAME_SHIFT | bit 0 | GDK_SHIFT_MASK |
| Control | MOD_NAME_CTRL | bit 2 | GDK_CONTROL_MASK |
| Alt | MOD_NAME_ALT | bit 3 | GDK_ALT_MASK |
| Super | MOD_NAME_LOGO | bit 26 | GDK_SUPER_MASK |
Each modifier is checked via state.mod_name_is_active() with STATE_MODS_EFFECTIVE.
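The bitmask composition from the table can be sketched as follows; the function name and boolean parameters are illustrative, since the real code reads each flag from xkbcommon state:

```rust
// GDK-compatible bit positions from the table above.
const GDK_SHIFT_MASK: u32 = 1 << 0;
const GDK_CONTROL_MASK: u32 = 1 << 2;
const GDK_ALT_MASK: u32 = 1 << 3;
const GDK_SUPER_MASK: u32 = 1 << 26;

// Sketch: fold the four active-modifier flags into one bitmask.
fn pack_modifiers(shift: bool, ctrl: bool, alt: bool, superkey: bool) -> u32 {
    let mut mask = 0;
    if shift { mask |= GDK_SHIFT_MASK; }
    if ctrl { mask |= GDK_CONTROL_MASK; }
    if alt { mask |= GDK_ALT_MASK; }
    if superkey { mask |= GDK_SUPER_MASK; }
    mask
}
```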
Fallback
If XkbContext::new() returns None, the daemon logs a warning and constructs KeyboardEvent
structs with the raw evdev keycode as keyval, zero modifiers, and None for unicode.
Grab Protocol
The daemon tracks keyboard grab state via two variables: grab_active: bool and
grab_requester: Option<DaemonId>.
When Grab Is Active
All key events (press and release, value 0 and 1) are translated via XkbContext::process_key()
and published as InputKeyEvent messages on the IPC bus with SecurityLevel::Internal.
When Grab Is Inactive
Key events still flow through XkbContext::process_key() to keep modifier tracking accurate for
future grabs. However, only Alt/Meta release events are forwarded. Specifically, if
pressed == false and the keyval is in the range 0xFFE7..=0xFFEA (Meta_L, Meta_R, Alt_L,
Alt_R), the event is published as InputKeyEvent.
This unconditional forwarding of modifier releases solves a race condition inherent to
single-threaded runtimes: the InputGrabRequest IPC message may arrive after the user has
already released Alt. Without this forwarding, daemon-wm would never detect the Alt release
and the overlay would remain stuck. Only releases are forwarded (not presses), limiting
extraneous IPC traffic to at most 4 keycodes.
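The forwarding rule reduces to a small predicate; a sketch under the assumption that grab state is checked before publishing (function name is illustrative):

```rust
// Sketch of the grab-state forwarding rule:
// - grab active: everything is published;
// - grab inactive: only Alt/Meta *releases* (keysyms 0xFFE7..=0xFFEA,
//   i.e. Meta_L, Meta_R, Alt_L, Alt_R) are forwarded.
fn should_forward(grab_active: bool, keyval: u32, pressed: bool) -> bool {
    if grab_active {
        return true;
    }
    !pressed && (0xFFE7..=0xFFEA).contains(&keyval)
}
```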
IPC Messages
| Message | Response | Description |
|---|---|---|
| InputGrabRequest | InputGrabResponse | Activates the grab and records the requester |
| InputGrabRelease | – | Deactivates the grab if requester matches |
| InputLayersList | InputLayersListResponse | Returns configured input remap layers |
| InputStatus | InputStatusResponse | Returns current daemon status |
| KeyRotationPending | – | Reconnects with a rotated IPC keypair |
KeyDeduplicator
The KeyDeduplicator (daemon-wm/src/ipc_keys.rs) prevents duplicate processing when both the
SCTK keyboard handler and IPC InputKeyEvent fire for the same physical keystroke. It is
instantiated in the daemon-wm main loop, not in daemon-input.
Implementation
- An 8-entry ring buffer stores (keyval: u32, pressed: bool, timestamp: Instant) tuples, initialized to (0, false, epoch).
- accept(keyval, pressed) scans the entire buffer. If any entry matches the same keyval and pressed direction within 50ms of the current time, the event is rejected (returns false). Otherwise, the event is recorded at the current ring index (which advances modulo 8) and accepted (returns true).
- Direction-aware: a press (pressed = true) and release (pressed = false) of the same key are treated as distinct events and do not deduplicate each other.
- The ring buffer wraps on overflow, overwriting the oldest entry.
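A self-contained sketch of the deduplicator; the initial "epoch" timestamp is approximated with a past Instant, since std has no absolute epoch for monotonic time:

```rust
use std::time::{Duration, Instant};

// Sketch of the 8-entry ring-buffer deduplicator with a 50ms window.
struct KeyDeduplicator {
    ring: [(u32, bool, Instant); 8],
    next: usize,
}

impl KeyDeduplicator {
    fn new() -> Self {
        // Stand-in for "epoch": a timestamp old enough that the
        // zero-initialized entries can never match a live event.
        let old = Instant::now()
            .checked_sub(Duration::from_secs(3600))
            .unwrap_or_else(Instant::now);
        KeyDeduplicator { ring: [(0, false, old); 8], next: 0 }
    }

    fn accept(&mut self, keyval: u32, pressed: bool) -> bool {
        let now = Instant::now();
        // Scan the whole buffer for a same-key, same-direction entry
        // recorded within the last 50ms.
        let dup = self.ring.iter().any(|&(k, p, t)| {
            k == keyval && p == pressed && now.duration_since(t) < Duration::from_millis(50)
        });
        if dup {
            return false;
        }
        self.ring[self.next] = (keyval, pressed, now);
        self.next = (self.next + 1) % 8; // wrap, overwriting the oldest entry
        true
    }
}
```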
IPC Key Mapping
map_ipc_key_to_event(keyval, modifiers, unicode) in daemon-wm/src/ipc_keys.rs translates
XKB keysyms received via IPC into controller Event variants:
| Keysym | Constant | Event |
|---|---|---|
| 0xFF1B | Escape | Event::Escape |
| 0xFF0D | Return | Event::Confirm |
| 0xFF8D | KP_Enter | Event::Confirm |
| 0xFF09 | Tab | None (suppressed – cycling handled by IPC re-activation) |
| 0xFF54 | Down | Event::SelectionDown |
| 0xFF52 | Up | Event::SelectionUp |
| 0xFF08 | Backspace | Event::Backspace |
| 0x0020 | Space | Event::Char(' ') |
| Other | – | Event::Char(ch) if unicode is Some and passes is_ascii_graphic() |
Tab is explicitly suppressed because cycling through the window list is handled at the IPC level
by the compositor intercepting Alt+Tab and sending WmActivateOverlay /
WmActivateOverlayBackward. Forwarding Tab as SelectionDown would cause double-advancement.
Process Hardening
On Linux, daemon-input applies:
- platform_linux::security::harden_process() for process-level hardening.
- Resource limits: nofile = 4096, memlock_bytes = 0.
- core_types::init_secure_memory() for memfd_secret probing.
- Landlock sandbox restricting access to:
  - IPC key directory ($XDG_RUNTIME_DIR/pds/keys/) – read-only.
  - Bus public key and socket – read-only and read-write respectively.
  - /dev/input – read-only (evdev device access).
  - /sys/class/input – read-only (device enumeration symlinks).
  - /sys/devices – read-only (device metadata via symlink traversal).
  - Config symlink targets – read-only.
- Seccomp syscall filter with evdev-relevant syscalls (ioctl for device queries), inotify for config hot-reload, memfd_secret, and standard I/O syscalls.
- The sandbox panics on failure, refusing to run unsandboxed.
Compositor-Independent Operation
The daemon reads directly from /dev/input/event* devices rather than relying on compositor
keyboard focus. This design is necessary because:
- The overlay’s KeyboardInteractivity::Exclusive may not be granted immediately by all compositors.
- The InputGrabRequest IPC message may arrive after the triggering keystroke.
- Some compositors may not forward all key events to layer-shell surfaces.
By reading at the evdev level, daemon-input captures keystrokes regardless of which window has
compositor focus, providing a reliable input path for the overlay.
Lifecycle
- Startup: Process hardening, directory bootstrap, config load, keyboard reader spawn, XKB context creation, IPC bus connection with keypair retry (5 attempts, 500ms interval), sandbox application.
- Announcement: Publishes DaemonStarted { capabilities: ["input", "remap"] }.
- Readiness: Calls platform_linux::systemd::notify_ready().
- Event loop: tokio::select! over watchdog timer (15s), keyboard events, IPC messages, config reload notifications, SIGINT, and SIGTERM.
- Shutdown: Publishes DaemonStopped { reason: "shutdown" }.
Snippets Daemon
The daemon-snippets process manages text snippet templates with profile-scoped namespaces. It
runs as a single-threaded tokio process (current_thread runtime) connected to the Noise IK IPC
bus as a BusClient.
Storage
Snippets are stored in an in-memory HashMap<(String, String), String> keyed by
(profile_name, trigger) with the template string as the value. The type alias SnippetMap
defines this type.
The config schema does not yet include a dedicated snippets section, so build_snippet_map()
returns an empty HashMap on startup and after every config reload. All snippet data is
populated at runtime via SnippetAdd IPC messages.
On config hot-reload, the snippet map is rebuilt by calling build_snippet_map() with the new
config, which currently clears all runtime-added snippets. This behavior will change when a
persistent config-based snippet definition is added to the schema.
Profile Scoping
Every snippet is associated with a trust profile name as the first element of its
(profile, trigger) composite key. This ensures that two profiles can define different
expansions for the same trigger string without collision.
All operations are profile-scoped:
- SnippetList: Filters the entire map with .filter(|((p, _), _)| p == &profile_str), returning only snippets belonging to the requested profile.
- SnippetExpand: Performs an exact HashMap::get() lookup with the (profile, trigger) tuple.
- SnippetAdd: Inserts or overwrites at the (profile, trigger) key. A snippet added under profile "work" is not visible from profile "personal".
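The composite-key scoping can be sketched with a few standalone functions over the documented SnippetMap type (function names are illustrative; the daemon handles these operations inline in its message loop):

```rust
use std::collections::HashMap;

// (profile_name, trigger) -> template, as documented.
type SnippetMap = HashMap<(String, String), String>;

// Sketch of SnippetExpand: exact lookup on the composite key.
fn snippet_expand(map: &SnippetMap, profile: &str, trigger: &str) -> Option<String> {
    map.get(&(profile.to_string(), trigger.to_string())).cloned()
}

// Sketch of SnippetList: filter the whole map by profile.
fn snippet_list<'a>(map: &'a SnippetMap, profile: &str) -> Vec<(&'a str, &'a str)> {
    map.iter()
        .filter(|((p, _), _)| p == profile)
        .map(|((_, t), tmpl)| (t.as_str(), tmpl.as_str()))
        .collect()
}
```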
IPC Interface
| Message | Response | Description |
|---|---|---|
| SnippetList | SnippetListResponse | Returns all snippets for the given profile |
| SnippetExpand | SnippetExpandResponse | Looks up the template for an exact trigger |
| SnippetAdd | SnippetAddResponse | Inserts or overwrites a snippet |
| KeyRotationPending | – | Reconnects with a rotated IPC keypair |
The SnippetList response returns Vec<SnippetInfo> where each entry contains trigger and
template_preview. Previews are truncated to 80 characters: templates longer than 80 characters
are cut to 77 characters with ... appended.
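The truncation rule amounts to a one-branch helper; a sketch (function name assumed):

```rust
// Sketch: previews over 80 characters are cut to 77 plus "...",
// yielding exactly 80 characters.
fn template_preview(template: &str) -> String {
    if template.chars().count() > 80 {
        let cut: String = template.chars().take(77).collect();
        format!("{cut}...")
    } else {
        template.to_string()
    }
}
```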
All IPC responses are correlated to the original request via
Message::with_correlation(msg.msg_id).
Trigger Matching
Trigger matching is exact and case-sensitive. The trigger field from a SnippetExpand request
must match the stored trigger string byte-for-byte. The snippet map uses HashMap::get() with
the (profile.to_string(), trigger.clone()) tuple as the key. No fuzzy matching, prefix
matching, or normalization is performed.
Template Format
Templates are stored and returned as plain strings. The module-level documentation describes
variable substitution and secret injection as design goals, but the current implementation
returns the template string verbatim from SnippetExpand without any processing. The expansion
pipeline for variable substitution (${VAR}) and secret injection (${secret:name}) is not yet
implemented.
Process Hardening
On Linux, daemon-snippets applies the following security measures:
platform_linux::security::harden_process()for process-level hardening.- Resource limits:
nofile = 4096,memlock_bytes = 0. core_types::init_secure_memory()formemfd_secretprobing.- Landlock filesystem sandbox restricting access to:
- IPC key directory (
$XDG_RUNTIME_DIR/pds/keys/) – read-only. - Bus public key (
$XDG_RUNTIME_DIR/pds/bus.pub) – read-only. - Bus socket (
$XDG_RUNTIME_DIR/pds/bus.sock) – read-write. - Config directory (
~/.config/pds/) – read-only. - Config symlink targets (e.g.,
/nix/storepaths) – read-only.
- IPC key directory (
- Seccomp syscall filter with an allowlist for standard I/O, memory management, networking (IPC
socket), inotify (config hot-reload),
memfd_secret, and process lifecycle syscalls. - The sandbox panics on application failure, refusing to run unsandboxed.
The sandbox is notably more restrictive than other desktop daemons: daemon-snippets requires no
Wayland socket access, no /dev/input access, and no cache directory writes.
Lifecycle
- Startup: Process hardening, directory bootstrap, config load, snippet map build (empty), IPC bus connection with keypair retry (5 attempts, 500ms interval), sandbox application.
- Announcement: Publishes DaemonStarted { capabilities: ["snippets", "expansion"] }.
- Readiness: Calls platform_linux::systemd::notify_ready().
- Event loop: tokio::select! over watchdog timer (15s), IPC messages, config reload notifications (rebuilds snippet map from config), SIGINT, and SIGTERM.
- Shutdown: Publishes DaemonStopped { reason: "shutdown" }.
Launch Profiles
Launch profiles define composable environment bundles that attach to application launches. Each profile specifies environment variables, secret references, an optional Nix devshell, and an optional working directory. Profiles are scoped to trust profiles and composed at launch time via tags.
Profile Structure
A launch profile is defined by the LaunchProfile struct in core-config/src/schema_wm.rs:
[profiles.work.launch_profiles.dev-rust]
env = { RUST_LOG = "debug", CARGO_HOME = "/workspace/.cargo" }
secrets = ["github-token", "crates-io-token"]
devshell = "/workspace/myproject#rust"
cwd = "/workspace/usrbinkat/github.com/org/repo"
Each field is optional and defaults to empty:
| Field | Type | Description |
|---|---|---|
| env | BTreeMap<String, String> | Static environment variables injected into the child process. |
| secrets | Vec<String> | Secret names fetched from the vault and converted to env vars. |
| devshell | Option<String> | Nix flake devshell reference. Wraps the command in nix develop. |
| cwd | Option<String> | Absolute path used as the working directory for the spawned process. |
Tag System
Key bindings in the window manager configuration reference launch profiles through the tags
field on WmKeyBinding:
[profiles.default.wm.key_bindings.g]
apps = ["ghostty", "com.mitchellh.ghostty"]
launch = "ghostty"
tags = ["dev-rust", "ai-tools"]
launch_args = ["--working-directory=/workspace/user/github.com/org/repo"]
When a key binding triggers a launch, daemon-launcher resolves each tag against the configuration to compose the final environment. Tags are processed in order; the composition rules are described below.
Cross-Profile Tag References
Tags support qualified cross-profile references using the profile:name syntax. An unqualified
tag resolves against the default (or explicitly specified) trust profile. A qualified tag
resolves against a different trust profile.
| Tag | Resolution |
|---|---|
| dev-rust | Resolves dev-rust in the current trust profile. |
| work:corp | Resolves corp in the work trust profile. |
The parsing logic in daemon-launcher/src/launch.rs (parse_tag) splits on the first colon.
If no colon is present, the tag is unqualified and uses the default profile.
Cross-profile references allow a single key binding to compose environments from multiple trust boundaries. For example, a terminal binding might combine a personal development environment with corporate secrets:
tags = ["dev-rust", "work:corp-secrets"]
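The split-on-first-colon parsing described above can be sketched in a few lines (the real parse_tag signature is not shown in the docs, so the tuple return type here is an assumption):

```rust
// Sketch of qualified-tag parsing: "work:corp" -> ("work", "corp");
// an unqualified tag resolves against the default trust profile.
fn parse_tag<'a>(tag: &'a str, default_profile: &'a str) -> (&'a str, &'a str) {
    match tag.split_once(':') {
        Some((profile, name)) => (profile, name),
        None => (default_profile, tag),
    }
}
```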
Tag Composition Rules
When multiple tags are specified, they are processed sequentially. The composition semantics are:
- Environment variables: merged into a single BTreeMap. When the same key appears in multiple tags, the later tag wins.
- Secrets: accumulated. Duplicate secret names (same name, same trust profile) are deduplicated; secrets from different trust profiles are kept independently.
- Devshell: last tag with a non-None devshell wins.
- Working directory: last tag with a non-None cwd wins.
This is implemented in daemon-launcher/src/launch.rs in the launch_entry function. The
composed environment is applied to the child process after secret fetching completes.
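The four composition rules can be sketched over a local mirror of the LaunchProfile struct. Note one simplification: this sketch dedupes secrets by name only, whereas the real rule dedupes (name, trust profile) pairs:

```rust
use std::collections::BTreeMap;

// Local mirror of the documented LaunchProfile fields, for illustration.
#[derive(Default, Clone)]
struct LaunchProfile {
    env: BTreeMap<String, String>,
    secrets: Vec<String>,
    devshell: Option<String>,
    cwd: Option<String>,
}

// Sketch of sequential tag composition.
fn compose(profiles: &[LaunchProfile]) -> LaunchProfile {
    let mut out = LaunchProfile::default();
    for p in profiles {
        out.env.extend(p.env.clone()); // later tags win on key conflicts
        for s in &p.secrets {
            if !out.secrets.contains(s) { // dedupe by name (simplified)
                out.secrets.push(s.clone());
            }
        }
        if p.devshell.is_some() { out.devshell = p.devshell.clone(); } // last Some wins
        if p.cwd.is_some() { out.cwd = p.cwd.clone(); } // last Some wins
    }
    out
}
```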
Configuration Schema
Launch profiles live under each trust profile’s configuration section:
[profiles.personal]
# ... other profile settings ...
[profiles.personal.launch_profiles.dev-rust]
env = { RUST_LOG = "debug" }
secrets = ["github-token"]
devshell = "/workspace/project#rust"
cwd = "/workspace/usrbinkat/github.com/org/repo"
[profiles.personal.launch_profiles.ai-tools]
env = { ANTHROPIC_MODEL = "claude-sonnet-4-20250514" }
secrets = ["anthropic-api-key"]
The full path in the config tree is
profiles.<trust_profile_name>.launch_profiles.<launch_profile_name>. Daemon-launcher reads
these from the hot-reloaded configuration state (ConfigWatcher) at launch time, so changes
take effect without daemon restart.
Denial Handling
If a tag references a trust profile or launch profile that does not exist, daemon-launcher
returns a structured LaunchDenial to the window manager:
- LaunchDenial::ProfileNotFound – the trust profile name in a qualified tag does not exist.
- LaunchDenial::LaunchProfileNotFound – the launch profile name does not exist within the resolved trust profile.
The window manager can use these denials to display user-facing error messages.
Secret Injection
Secrets flow from per-profile SQLCipher vaults into launched processes as environment variables.
Two mechanisms exist: the sesame env CLI command for interactive use, and the daemon-launcher
IPC path for overlay-driven launches.
Vault to Environment Pipeline
daemon-launcher Path
When daemon-launcher processes a LaunchExecute request with tags, the pipeline is:
- Tag resolution: each tag is resolved to a LaunchProfile from the hot-reloaded configuration. Cross-profile tags (work:corp) route to different trust profiles.
- Secret collection: secret names from all resolved launch profiles are accumulated with their owning trust profile name. Duplicates (same name, same profile) are deduplicated.
- IPC fetch: for each (secret_name, trust_profile_name) pair, daemon-launcher sends a SecretGet request over the Noise IK bus to daemon-secrets. The request specifies the trust profile that owns the vault.
- Name conversion: the secret name is converted to an environment variable name (see below).
- Environment injection: the secret value is inserted into the composed environment map, then passed to the child process via Command::env().
- Zeroization: after the child process is spawned and the environment has been copied to the OS process, all secret values in the composed environment map are zeroized via zeroize::Zeroize.
Batched Denial Collection
Daemon-launcher does not abort on the first secret fetch failure. Instead, it collects all denials and returns them in a single response so the window manager can prompt for all required vault unlocks at once:
- Locked vaults: SecretDenialReason::Locked or ProfileNotActive denials are collected into a locked_profiles list.
- Missing secrets: SecretDenialReason::NotFound denials increment a missing_count.
- Rate limiting: SecretDenialReason::RateLimited causes an immediate abort with LaunchDenial::RateLimited.
After iterating all secrets, locked vaults take priority: if any vaults are locked,
LaunchDenial::VaultsLocked is returned with the full list. Otherwise, if secrets are missing,
LaunchDenial::SecretNotFound is returned.
sesame env
The sesame env command spawns a child process with vault secrets injected as environment
variables:
sesame env -p work -- my-command --flag
It connects to the IPC bus, fetches all secrets for the specified profile(s), converts each secret name to an env var, injects them into the child process, waits for the child to exit, zeroizes all secret copies, and exits with the child’s exit code.
The child also receives a SESAME_PROFILES environment variable containing a comma-separated
list of the profile specs that were used.
sesame export
The sesame export command outputs secrets in shell, dotenv, or JSON format without spawning
a child:
sesame export -p work --format shell
sesame export -p work --format dotenv
sesame export -p work --format json
Output is written to stdout. Secret values are zeroized after printing.
Secret Name to Env Var Conversion
Two conversion implementations exist, with slightly different rules:
daemon-launcher (launch.rs)
Applies to secrets injected via launch profile tags:
- Uppercase the entire name.
- Replace hyphens with underscores.
Examples: github-token becomes GITHUB_TOKEN. anthropic-api-key becomes
ANTHROPIC_API_KEY.
sesame env / sesame export (env.rs)
Applies to secrets injected via the CLI:
- Uppercase the entire name.
- Replace hyphens, dots, and non-alphanumeric characters (except underscores) with underscores.
Examples: api-key becomes API_KEY. db.host-name becomes DB_HOST_NAME.
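Both rules are small string transforms; a sketch with illustrative function names:

```rust
// daemon-launcher rule: uppercase, hyphens -> underscores.
fn launcher_env_name(secret: &str) -> String {
    secret.to_uppercase().replace('-', "_")
}

// CLI rule: uppercase, then every non-alphanumeric character
// (except underscore) -> underscore.
fn cli_env_name(secret: &str) -> String {
    secret
        .to_uppercase()
        .chars()
        .map(|c| if c.is_ascii_alphanumeric() || c == '_' { c } else { '_' })
        .collect()
}
```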
Prefix System
The --prefix flag (available on sesame env and sesame export) prepends a string to every
generated environment variable name, separated by an underscore:
sesame env --prefix MYAPP -p work -- my-command
With prefix MYAPP, the secret api-key becomes MYAPP_API_KEY.
The prefix is also configurable per-workspace via .sesame.toml:
secret_prefix = "MYAPP"
The prefix is applied after the name-to-env-var conversion, so the full transformation is:
secret_name -> uppercase + substitute -> prepend prefix.
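The ordering (convert first, then prefix) can be sketched as a final step (function name assumed):

```rust
// Sketch: the prefix is prepended after name conversion,
// joined with an underscore.
fn apply_prefix(prefix: Option<&str>, env_name: &str) -> String {
    match prefix {
        Some(p) => format!("{p}_{env_name}"),
        None => env_name.to_string(),
    }
}
```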
Denied Environment Variables
The sesame env and sesame export commands maintain a deny list of environment variable names
that must never be overwritten by secret injection. This prevents secrets with adversarial names
from hijacking the dynamic linker, shell execution, or privilege escalation vectors. The deny
list includes:
- Dynamic linker variables: LD_PRELOAD, LD_LIBRARY_PATH, DYLD_INSERT_LIBRARIES, and others.
- Core execution: PATH, HOME, SHELL, USER.
- Shell injection vectors: BASH_ENV, IFS, PROMPT_COMMAND, and others.
- Language runtime injection: PYTHONPATH, NODE_OPTIONS, RUBYOPT, and others.
- Open Sesame’s own namespace: SESAME_PROFILE.
Matching is case-insensitive. The BASH_FUNC_ prefix is matched as a prefix pattern to block
Bash function export injection.
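A minimal sketch of the deny check, assuming only the abbreviated list shown above (the real list is longer):

```rust
// Illustrative deny-list check: case-insensitive exact names plus the
// BASH_FUNC_ prefix pattern. Abbreviated; the real list is longer.
const DENIED: &[&str] = &[
    "LD_PRELOAD", "LD_LIBRARY_PATH", "DYLD_INSERT_LIBRARIES",
    "PATH", "HOME", "SHELL", "USER",
    "BASH_ENV", "IFS", "PROMPT_COMMAND",
    "PYTHONPATH", "NODE_OPTIONS", "RUBYOPT",
    "SESAME_PROFILE",
];

fn is_denied(var: &str) -> bool {
    let upper = var.to_ascii_uppercase();
    // BASH_FUNC_ is matched as a prefix to block exported-function injection
    upper.starts_with("BASH_FUNC_") || DENIED.iter().any(|d| *d == upper)
}
```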
Implicit Environment Variables
Every launched process receives these environment variables regardless of tag configuration:
| Variable | Value |
|---|---|
SESAME_PROFILE | The trust profile name used for the launch. |
SESAME_APP_ID | The desktop entry ID of the launched application. |
SESAME_SOCKET | Path to the IPC bus Unix socket. |
These are injected after the composed environment, so they cannot be overridden by launch profile
env entries.
Desktop Entry Discovery
Daemon-launcher discovers launchable applications by scanning XDG desktop entry files, builds a fuzzy search index, and ranks results using frecency (frequency + recency).
XDG Desktop Entry Scanning
The scanner in daemon-launcher/src/scanner.rs uses the freedesktop-desktop-entry crate to
enumerate .desktop files from $XDG_DATA_DIRS/applications/. Scanning is synchronous and runs
in a tokio::task::spawn_blocking context at daemon startup.
Filtering Rules
Entries are filtered before indexing:
| Condition | Action |
|---|---|
NoDisplay=true | Skipped. Non-launchable entries (e.g., D-Bus activatable services). |
Hidden=true | Skipped. Explicitly hidden by the packager. |
No Exec= field | Skipped. Not a launchable application. |
| Duplicate ID | Only the first occurrence is kept. |
Indexed Fields
For each surviving entry, the scanner produces a MatchItem with:
- id: the desktop entry ID (e.g., org.mozilla.firefox).
- name: the localized Name= field, falling back to the entry ID.
- extra: a space-joined string of Keywords= and Categories= values, used to broaden the fuzzy match surface.
The Exec line is cached separately in a CachedEntry for post-scan use during launch
execution. The Exec cache is stored as a HashMap<String, CachedEntry> keyed by entry ID.
Fuzzy Search
Daemon-launcher uses the nucleo fuzzy matching library (via the core-fuzzy crate). Items are
injected into the matcher at startup via an Injector. Queries arrive as LaunchQuery IPC
messages and are dispatched to SearchEngine::query(), which combines fuzzy match scores with
frecency boosts.
Query results are returned as LaunchResult values containing the entry ID, display name, icon,
and composite score.
Frecency Ranking
Launch frequency and recency are tracked in a per-profile SQLite database managed by
core-fuzzy::FrecencyDb. The database file is stored at:
~/.config/pds/launcher/{profile_name}.frecency.db
Each trust profile has its own frecency database, providing isolation between profiles. When a
LaunchQuery specifies a different profile than the current one, the search engine switches its
frecency context via engine.switch_profile().
When a LaunchExecute succeeds, engine.record_launch(entry_id) updates the frecency database.
The frecency boost is refreshed periodically via engine.refresh_frecency().
Desktop Entry Field Code Stripping
Before executing an Exec line, the scanner strips freedesktop %-prefixed field codes. These
are placeholder tokens defined by the Desktop Entry Specification that would normally be replaced
by a file manager:
Stripped codes: %f, %F, %u, %U, %d, %D, %n, %N, %i, %c, %k, %v, %m.
The literal %% sequence is collapsed to a single %.
After stripping, multiple consecutive spaces from removed codes are collapsed. The result is
then tokenized using freedesktop quoting rules (double-quote escaping for \", \`, \\,
\$). The tokenizer does not invoke a shell.
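The stripping and collapsing steps can be sketched as follows. This is an illustrative stand-in for the scanner's code, covering field-code removal, %% collapsing, and space collapsing (quoting-rule tokenization is omitted):

```rust
// Sketch: drop %-prefixed field codes, collapse %% to a literal %,
// then collapse the space runs left behind by removed codes.
fn strip_field_codes(exec: &str) -> String {
    let mut out = String::new();
    let mut chars = exec.chars();
    while let Some(c) = chars.next() {
        if c == '%' {
            match chars.next() {
                Some('%') => out.push('%'), // %% -> literal %
                _ => {}                     // drop the field code (%f, %u, ...)
            }
        } else {
            out.push(c);
        }
    }
    // collapse runs of spaces produced by removed codes
    let mut collapsed = String::new();
    let mut prev_space = false;
    for c in out.chars() {
        if c == ' ' {
            if !prev_space {
                collapsed.push(c);
            }
            prev_space = true;
        } else {
            collapsed.push(c);
            prev_space = false;
        }
    }
    collapsed.trim().to_string()
}
```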
3-Strategy Resolution Fallback
When a LaunchExecute request arrives, the entry ID is resolved against the cached entries
using three strategies in order:
- Exact match: the entry ID matches a cache key exactly (e.g., org.mozilla.firefox).
- Last segment match: the entry ID matches the last dot-separated segment of a cached ID, case-insensitively (e.g., firefox matches org.mozilla.firefox).
- Case-insensitive full ID match: the entry ID matches a cached ID when both are lowercased (e.g., alacritty matches Alacritty).
If none of the three strategies produces a match, LaunchDenial::EntryNotFound is returned.
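The fallback order can be sketched as follows; the cache value type is elided and `resolve` is an illustrative name, not the actual function:

```rust
use std::collections::HashMap;

// Sketch of the three-strategy resolution: exact match, then
// last-segment match (case-insensitive), then case-insensitive full match.
fn resolve<'a>(cache: &'a HashMap<String, ()>, id: &str) -> Option<&'a str> {
    // 1. Exact match against a cache key
    if let Some((k, _)) = cache.get_key_value(id) {
        return Some(k.as_str());
    }
    let id_lower = id.to_lowercase();
    // 2. Last dot-separated segment of a cached ID, case-insensitively
    for k in cache.keys() {
        if let Some(last) = k.rsplit('.').next() {
            if last.to_lowercase() == id_lower {
                return Some(k);
            }
        }
    }
    // 3. Case-insensitive full ID match
    cache.keys().find(|k| k.to_lowercase() == id_lower).map(|k| k.as_str())
}
```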
Process Management
Daemon-launcher spawns child processes in isolated systemd scopes with zombie reaping and post-spawn secret zeroization.
systemd-run Scope Isolation
Every launched process is wrapped in a transient systemd user scope via:
systemd-run --user --scope --unit=app-open-sesame-{entry_id}-{pid}.scope -- {program} {args}
This provides:
- cgroup isolation: the child runs in its own cgroup, enabling per-application resource accounting via systemd-cgtop.
- No inherited limits: the child does not inherit MemoryMax or mount namespace restrictions from the launcher’s service unit.
- Launcher restart survival: because KillMode=process semantics apply to scopes (the scope itself has no main process), children survive launcher daemon restarts.
- Clean unit naming: the scope name is sanitized from the entry ID by replacing non-alphanumeric characters with dashes and collapsing runs.
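The unit-name sanitization can be sketched as follows, assuming the systemd-run template shown above (the function name is illustrative):

```rust
// Sketch: replace non-alphanumeric characters with dashes, collapse runs,
// and format the transient scope unit name.
fn scope_unit(entry_id: &str, pid: u32) -> String {
    let mut sanitized = String::new();
    let mut prev_dash = false;
    for c in entry_id.chars() {
        if c.is_ascii_alphanumeric() {
            sanitized.push(c);
            prev_dash = false;
        } else if !prev_dash {
            sanitized.push('-'); // collapse runs of non-alphanumerics
            prev_dash = true;
        }
    }
    format!("app-open-sesame-{}-{}.scope", sanitized.trim_matches('-'), pid)
}
```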
Fallback to Direct Spawn
If systemd-run is unavailable (not installed, or the spawn fails), daemon-launcher falls back
to a direct Command::spawn(). The via_scope flag in the log output indicates which path was
taken.
No Sandbox Inheritance
Daemon-launcher intentionally does not apply seccomp or Landlock sandboxing to itself. Seccomp
and Landlock rules inherit across fork+exec and would be applied to every child process,
breaking arbitrary desktop applications. The security boundary for daemon-launcher is the Noise
IK authenticated IPC bus, not process-level sandboxing.
Child Reaping
After spawning, daemon-launcher reaps the wrapper process (or direct child) in a
tokio::task::spawn_blocking closure that calls child.wait(). This prevents zombie
accumulation.
When using systemd-run scopes, the reaped process is the systemd-run wrapper, not the
application itself. The application continues running under the transient scope until it exits
naturally.
Secret Zeroization
Secret values pass through two zeroization points:
- Error paths: if a secret value fails UTF-8 validation, the raw bytes are zeroized before returning the error.
- Post-spawn cleanup: after Command::spawn() copies the environment to the OS process, all values in the composed environment BTreeMap are zeroized via zeroize::Zeroize, and the map is dropped.
This ensures secret material does not persist in the daemon-launcher process memory after it has been handed off to the child.
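A stdlib-only sketch of the post-spawn cleanup is shown below. The real code uses the zeroize crate, which performs the overwrite in place with compiler fences so it cannot be optimized away; this stand-in only illustrates the shape of the cleanup.

```rust
use std::collections::BTreeMap;

// Illustrative stand-in for zeroize::Zeroize on the composed env map:
// overwrite each value's retained buffer with NUL bytes, then empty it.
// (zeroize adds compiler fences; this sketch does not.)
fn zeroize_env(env: &mut BTreeMap<String, String>) {
    for value in env.values_mut() {
        let len = value.len();
        value.clear();                     // keeps the allocation
        value.push_str(&"\0".repeat(len)); // overwrite the same buffer
        value.clear();
    }
}
```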
I/O Configuration
Spawned processes have their I/O handles configured as:
| Stream | Configuration |
|---|---|
| stdin | /dev/null (Stdio::null()) |
| stdout | /dev/null (Stdio::null()) |
| stderr | Inherited from daemon-launcher (Stdio::inherit()) |
Stderr inheritance allows application error output to reach the journal when daemon-launcher runs under systemd.
Environment Propagation
The composed environment (launch profile env vars + secrets + implicit SESAME_* vars) is
propagated to both the systemd-run wrapper and the direct spawn fallback. The systemd-run
process passes its environment through to the child in the scope.
The working directory (cwd) from the launch profile is validated as an absolute, existing
directory path before being set on the command. Relative paths are rejected with an error.
Structured Logging
All Open Sesame daemons use the tracing crate for structured, leveled logging. Log output is
configurable between JSON and human-readable formats, with journald integration on Linux.
Tracing Integration
Every daemon initializes a tracing-subscriber stack at startup. The two supported output
formats are:
- JSON (--log-format json, default for daemon-profile): machine-parseable structured JSON, one object per line. Enabled via tracing_subscriber::fmt().json().init().
- Pretty (--log-format pretty): human-readable colored output via tracing_subscriber::fmt().init().
The format is selected via the --log-format CLI flag or the PDS_LOG_FORMAT environment
variable. The implementation is in daemon-profile/src/sandbox.rs (init_logging).
RUST_LOG and Log Levels
All daemons read the RUST_LOG environment variable via
tracing_subscriber::EnvFilter::try_from_default_env(). If RUST_LOG is not set, the default
filter is info.
Standard tracing levels are used throughout:
| Level | Usage |
|---|---|
error | IPC failures, secret fetch denials, audit chain verification failures, sandbox application failures. |
warn | Non-fatal issues: systemd-run fallback, corrupt audit tail entry, HTTP git URL detected. |
info | Daemon lifecycle (starting, ready, shutting down), launch execution, watchdog ticks, config reloads, key rotation, audit chain verification on startup. |
debug | Child reaping status, context engine debounce suppression. |
journald Integration
The tracing-journald crate is a Linux dependency of daemon-launcher and other daemons. When
running under systemd, structured log fields are forwarded to the journal as journal fields,
enabling filtering with journalctl:
journalctl --user -u daemon-launcher.service
journalctl --user -u daemon-profile.service
Structured Fields
Tracing spans and events use structured key-value fields throughout the codebase. Notable patterns:
- Launch execution: entry_id, program, arg_count, scope_name, tags, devshell, env_count, secret_count, via_scope, and pid are attached to launch log lines in daemon-launcher/src/launch.rs.
- Secret fetching: secret_count, plus per-secret reason fields on denial.
- Watchdog: watchdog_tick_count tracks event loop health in daemon-profile/src/main.rs.
- IPC messages: sender and msg_id identify message origin.
- Audit: path, sequence, and entries track audit log state at startup.
- Security posture: the sandbox status is logged after Landlock and seccomp application.
- Key rotation: daemon_name, generation, and clearance fields on rotation events.
- Desktop entry resolution: entry_id and resolved_id are logged with the resolution strategy used.
Daemon Startup Logging Sequence
Daemon-profile follows this startup sequence (other daemons follow a similar pattern):
- "daemon-profile starting" – logged immediately after CLI parsing.
- harden_process() and apply_resource_limits() – the platform layer hardens the process (RLIMIT_NOFILE, RLIMIT_MEMLOCK, etc.).
- init_secure_memory() – probes memfd_secret(2) availability and logs whether the kernel supports sealed anonymous memory for secret storage.
- Sandbox application – logs the Landlock and seccomp result via a ?status structured field.
- IPC bus server bind – logs the path and confirms Noise IK encryption.
- Per-daemon keypair generation – logs daemon and clearance for each of the six known daemons.
- Audit logger initialization – logs path and sequence (chain head position).
- Audit chain verification – logs the entries count if the chain is intact, or an error if verification fails.
- Context engine initialization – logs profile (the default ProfileId).
- platform_linux::systemd::notify_ready() – sends READY=1 to systemd.
- "daemon-profile ready" – logged after readiness notification.
Audit Chain
Open Sesame maintains a tamper-evident audit log using a BLAKE3 hash chain. Every auditable
operation appends a JSONL entry whose prev_hash field contains the hash of the previous
entry’s serialized JSON. Tampering with any entry invalidates all subsequent hashes.
Hash Chain Mechanics
The AuditLogger in core-profile/src/audit.rs maintains three pieces of mutable state:
- last_hash: the hex-encoded hash of the most recently written entry.
- sequence: a monotonically increasing counter (starts at 1).
- hash_algorithm: either Blake3 or Sha256 (configurable at construction; the default is BLAKE3).
When append(action) is called:
- The sequence is incremented.
- A wall-clock timestamp (milliseconds since Unix epoch) is captured.
- An AuditEntry is constructed with the current last_hash as its prev_hash.
- The entry is serialized to a single-line JSON string.
- The JSON bytes are hashed with the configured algorithm (BLAKE3 or SHA-256).
- The resulting hex digest becomes the new last_hash.
- The JSON line is written to the underlying Write sink and flushed.
The first entry in a fresh log has an empty string as its prev_hash.
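The append sequence can be sketched as follows. std's DefaultHasher stands in for BLAKE3 (an external crate), the JSON is hand-rolled rather than serde-serialized, and the timestamp is omitted, but the chaining structure matches the steps above.

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// DefaultHasher stands in for BLAKE3 in this sketch.
fn digest(bytes: &[u8]) -> String {
    let mut h = DefaultHasher::new();
    bytes.hash(&mut h);
    format!("{:016x}", h.finish())
}

// Minimal AuditLogger sketch: lines is a stand-in for the Write sink.
struct AuditLogger {
    last_hash: String,
    sequence: u64,
    lines: Vec<String>,
}

impl AuditLogger {
    fn new() -> Self {
        Self { last_hash: String::new(), sequence: 0, lines: Vec::new() }
    }

    fn append(&mut self, action: &str) {
        self.sequence += 1;
        // hand-rolled single-line JSON in place of serde serialization
        let line = format!(
            r#"{{"sequence":{},"action":"{}","prev_hash":"{}"}}"#,
            self.sequence, action, self.last_hash
        );
        self.last_hash = digest(line.as_bytes()); // becomes next prev_hash
        self.lines.push(line);
    }
}
```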
Entry Structure
Each JSONL line contains:
{
"sequence": 1,
"timestamp_ms": 1700000000000,
"action": { "ProfileActivated": { "target": "...", "duration_ms": 42 } },
"prev_hash": "",
"agent_id": "..."
}
| Field | Type | Description |
|---|---|---|
sequence | u64 | Monotonically increasing, starting at 1. |
timestamp_ms | u64 | Wall clock time in milliseconds since Unix epoch. |
action | AuditAction | The auditable operation (see variants below). |
prev_hash | String | Hex-encoded hash of the previous entry’s JSON. Empty for the first entry. |
agent_id | Option<AgentId> | The agent identity that triggered the action, if known. |
AuditAction Variants
The AuditAction enum in core-profile/src/lib.rs is #[non_exhaustive] and currently
defines:
| Variant | Fields | Description |
|---|---|---|
ProfileActivated | target: ProfileId, duration_ms: u32 | A trust profile was activated. |
ProfileDeactivated | target: ProfileId, duration_ms: u32 | A trust profile was deactivated. |
ProfileActivationFailed | target: ProfileId, reason: String | Activation failed. |
DefaultProfileChanged | previous: ProfileId, current: ProfileId | The default profile for new launches changed. |
IsolationViolationAttempt | from_profile, resource | A cross-profile resource access was blocked. |
SecretAccessed | profile_id: ProfileId, secret_ref: String | A secret was read from a vault. |
KeyRotationStarted | daemon_name: String, generation: u64 | IPC bus key rotation began. |
KeyRotationCompleted | daemon_name: String, generation: u64 | Key rotation completed. |
KeyRevoked | daemon_name: String, reason: String, generation: u64 | A daemon’s key was revoked. |
SecretOperationAudited | action, profile, key, requester, outcome | A secret operation was logged. |
AgentConnected | agent_id: AgentId, agent_type: AgentType | An agent connected. |
AgentDisconnected | agent_id: AgentId, reason: String | An agent disconnected. |
InstallationCreated | id, org, machine_binding_present | A new installation was registered. |
ProfileIdMigrated | name, old_id, new_id | A profile’s internal ID was migrated. |
AuthorizationRequired | request_id: Uuid, operation: String | An operation requires authorization. |
AuthorizationGranted | request_id, delegator, scope | Authorization was granted. |
AuthorizationDenied | request_id: Uuid, reason: String | Authorization was denied. |
AuthorizationTimeout | request_id: Uuid | An authorization request timed out. |
DelegationRevoked | delegation_id, revoker, reason | A delegation was revoked. |
HeartbeatRenewed | delegation_id, renewal_source | A delegation heartbeat was renewed. |
FederationSessionEstablished | session_id, remote_installation | A federation session was established. |
FederationSessionTerminated | session_id: Uuid, reason: String | A federation session ended. |
PostureEvaluated | composite_score: f64 | A security posture evaluation produced a score. |
Tamper Detection: sesame audit verify
The sesame audit verify command in open-sesame/src/audit.rs reads the audit log at
~/.config/pds/audit.jsonl and replays the hash chain:
$ sesame audit verify
OK: 1247 entries verified.
The verification algorithm in core_profile::verify_chain:
- Iterates each non-empty JSONL line in order.
- Parses each line as an AuditEntry.
- Checks that entry.prev_hash matches the hash computed from the previous line’s raw JSON bytes.
- If any mismatch is found, returns an error identifying the broken sequence number and the expected vs. actual prev_hash.
Verification detects: modified entries, deleted entries, reordered entries, and injected entries.
The test suite in core-profile/src/audit.rs explicitly validates detection of all four
tampering modes.
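The replay can be sketched as follows; DefaultHasher again stands in for BLAKE3, and field extraction is naive string search rather than full JSON parsing:

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// DefaultHasher stands in for BLAKE3 in this sketch.
fn digest(bytes: &[u8]) -> String {
    let mut h = DefaultHasher::new();
    bytes.hash(&mut h);
    format!("{:016x}", h.finish())
}

// Naive extraction of the "prev_hash" field from a single JSON line.
fn prev_hash_field(line: &str) -> Option<&str> {
    let marker = r#""prev_hash":""#;
    let start = line.find(marker)? + marker.len();
    let end = line[start..].find('"')? + start;
    Some(&line[start..end])
}

// Replay the chain: Ok(verified entry count) or Err(broken sequence position).
fn verify_chain(lines: &[&str]) -> Result<usize, usize> {
    let mut expected = String::new(); // first entry has an empty prev_hash
    let mut count = 0;
    for (i, line) in lines.iter().filter(|l| !l.is_empty()).enumerate() {
        match prev_hash_field(line) {
            Some(p) if p == expected => {}
            _ => return Err(i + 1),
        }
        expected = digest(line.as_bytes());
        count += 1;
    }
    Ok(count)
}
```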
sesame audit tail
The sesame audit tail command displays recent audit entries:
sesame audit tail 10
sesame audit tail --follow
Without --follow, the command reads the last N entries from the log file and pretty-prints
each as indented JSON separated by --- dividers.
With --follow, it watches the audit log file for new appends using
notify::RecommendedWatcher (inotify on Linux). When the file grows, only the new bytes are
read (via Seek::SeekFrom::Start(last_len)), parsed line by line, and printed. The follow loop
exits on Ctrl-C (SIGINT).
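The incremental read can be sketched as follows; this is illustrative, and in the real implementation the call is driven by notify watcher events rather than polling:

```rust
use std::fs::File;
use std::io::{Read, Seek, SeekFrom};

// On file growth, seek to the previously seen length and read only the
// new bytes, returning any complete non-empty lines.
fn read_new_lines(path: &str, last_len: &mut u64) -> std::io::Result<Vec<String>> {
    let mut f = File::open(path)?;
    f.seek(SeekFrom::Start(*last_len))?;
    let mut buf = String::new();
    f.read_to_string(&mut buf)?;
    *last_len += buf.len() as u64;
    Ok(buf.lines().filter(|l| !l.is_empty()).map(str::to_string).collect())
}
```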
Chain Recovery After Corruption
On daemon-profile startup, the audit logger loads its state from the last line of the existing
log file. The load_audit_state function in daemon-profile/src/context.rs:
- Reads the file contents (returns (empty, 0) if the file does not exist).
- Finds the last non-empty line by iterating in reverse.
- Attempts to parse it as an AuditEntry.
- If successful, computes its BLAKE3 hash and extracts its sequence number.
- If parsing fails (corrupt last entry), falls back to (empty_hash, 0), starting a fresh chain segment.
After loading, the startup code runs verify_chain on the existing log if the sequence is
greater than 0. A verification failure is logged at error level but does not prevent the
daemon from starting – the daemon continues appending to the potentially-broken chain.
Chain Continuity Across Restarts
The audit chain survives daemon restarts. On restart, daemon-profile loads the last hash and
sequence from disk and continues appending. The hash of the last pre-restart entry becomes the
prev_hash of the first post-restart entry, maintaining an unbroken chain. The test
chain_resumes_after_restart in core-profile/src/audit.rs validates this property across two
simulated sessions with five total entries.
File Format and Location
- Path: ~/.config/pds/audit.jsonl (resolved via core_config::config_dir()).
- Format: JSON Lines – one JSON object per line, newline-delimited.
- Hash algorithm: BLAKE3 by default. SHA-256 is supported as an alternative. The algorithm must be consistent within a single log file for verification to succeed.
- Write mode: append-only (OpenOptions::new().create(true).append(true)). Each write is followed by an explicit flush() via BufWriter.
- Agent identity: the default_agent_id is derived from the installation namespace and the Unix UID of the running process: uuid::Uuid::new_v5(&install_ns, "agent:human:uid{uid}").
Retention and Rotation
The current implementation does not perform automatic log rotation or retention. The audit log
grows unboundedly. External log rotation (e.g., logrotate) can be applied, but rotating the
file severs the hash chain – sesame audit verify can only validate entries present in a
single contiguous file. Operators who require forensic auditability across rotation boundaries
should archive rotated segments and verify them independently.
Health Checks
Open Sesame provides daemon health monitoring through sesame status and systemd watchdog
integration.
sesame status
The sesame status command in open-sesame/src/status.rs connects to the IPC bus and sends a
StatusRequest message to daemon-profile. The response (StatusResponse) includes:
- Per-vault lock state (lock_state: BTreeMap<TrustProfileName, bool>): each trust profile’s vault is reported as locked or unlocked. Displayed as a table with profile names and colored status indicators.
- Default profile (default_profile: TrustProfileName): the currently active default profile for new unscoped launches.
- Active profiles (active_profiles: Vec<TrustProfileName>): the list of profiles that are currently activated (vault open, serving secrets).
- Global locked flag (locked: bool): legacy fallback used when per-profile lock state is unavailable.
Example output:
Vaults:
personal unlocked
work locked
Default profile: personal
Active profiles:
- personal (default)
If the lock_state map is empty (daemon-secrets has not reported per-profile state), the
display falls back to a single global locked/unlocked indicator.
Liveness Check
The sesame status command implicitly tests daemon-profile liveness. If the IPC bus is
unreachable (daemon-profile is not running or the socket is missing), the connect() call fails
with an error. This makes sesame status usable as a basic health check in scripts and
monitoring systems.
systemd Integration
Type=notify and sd_notify
Daemons use systemd’s Type=notify service type. After completing initialization (config
loaded, IPC connected, indexes built), each daemon calls
platform_linux::systemd::notify_ready(), which sends READY=1 to systemd. This tells systemd
that the daemon is ready to accept requests.
The NOTIFY_SOCKET path is included in daemon-profile’s Landlock ruleset so that sd_notify
calls succeed after the filesystem sandbox is applied. Abstract sockets (prefixed with @)
bypass Landlock AccessFs rules and do not require explicit allowlisting.
WatchdogSec=30
Daemon-profile runs a tokio interval timer at half the watchdog interval (15 seconds) and calls
platform_linux::systemd::notify_watchdog() on each tick, which sends WATCHDOG=1 to systemd.
If a daemon fails to send a watchdog notification within 30 seconds (two missed ticks), systemd
considers the daemon unresponsive and restarts it according to the unit’s Restart= policy.
The watchdog tick in daemon-profile also serves as the reconciliation driver – every other tick (every 30 seconds), it reconciles state with daemon-secrets.
Reconciliation
Daemon-profile reconciles with daemon-secrets every 30 seconds (every other watchdog tick,
controlled by watchdog_tick_count.is_multiple_of(2)). The reconciliation RPC updates:
- The global locked flag.
- The active_profiles set.
- Per-profile lock state.
This ensures that sesame status reports current state even if an IPC event was lost or a
daemon restarted between reconciliation cycles.
Crash-Restart Detection
Daemon-profile tracks daemon identities via DaemonTracker in daemon-profile/src/main.rs.
The tracker maintains a HashMap<String, DaemonId> mapping daemon names to their last known
identity.
When a DaemonStarted event arrives from a daemon name that already has a registered
DaemonId, the track() method detects a crash-restart: the old ID differs from the new one.
It returns Some(old_id), allowing daemon-profile to clean up stale state associated with the
previous instance.
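The detection logic can be sketched as follows; DaemonId is reduced to an integer for illustration:

```rust
use std::collections::HashMap;

type DaemonId = u64; // stand-in for the real identity type

// track() returns the stale DaemonId when a known daemon name
// reappears with a different identity (crash-restart).
struct DaemonTracker {
    known: HashMap<String, DaemonId>,
}

impl DaemonTracker {
    fn new() -> Self {
        Self { known: HashMap::new() }
    }

    fn track(&mut self, name: &str, id: DaemonId) -> Option<DaemonId> {
        match self.known.insert(name.to_string(), id) {
            Some(old) if old != id => Some(old), // crash-restart detected
            _ => None,
        }
    }
}
```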
Watchdog Logging
Watchdog ticks are logged at info level for the first three ticks and then every 20th tick
(controlled by watchdog_tick_count <= 3 || watchdog_tick_count.is_multiple_of(20)). This
provides startup confirmation without flooding the journal during steady-state operation.
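The quoted condition, written as a standalone predicate (using % in place of is_multiple_of):

```rust
// Log the first three ticks, then every 20th tick thereafter.
fn should_log_tick(tick: u64) -> bool {
    tick <= 3 || tick % 20 == 0
}
```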
Metrics and Observability
This page describes the metrics and observability design for Open Sesame.
Structured logging is implemented today; metrics export and the
sesame status --doctor command are planned.
Current State
All daemons emit structured log events via the tracing crate. Log output
includes span context (daemon ID, profile name, operation), timestamps, and
severity levels. Logs are written to journald when running under systemd,
or to stderr in development.
Metrics export (Prometheus, OpenTelemetry) is not yet implemented.
Planned Metrics Export
Prometheus
Each daemon will expose a /metrics endpoint on a local Unix socket (not a
TCP port) in Prometheus exposition format. A Prometheus instance or
prometheus-node-exporter textfile collector can scrape these.
OpenTelemetry (OTLP)
For environments with an OTLP collector (Grafana Agent, OpenTelemetry
Collector), daemons will support OTLP export over gRPC or HTTP. This is
configured in ~/.config/pds/observability.toml.
Planned Metric Categories
Daemon Health
| Metric | Type | Description |
|---|---|---|
pds_daemon_uptime_seconds | Gauge | Seconds since daemon start |
pds_daemon_restart_count | Counter | systemd restart count (from watchdog) |
pds_daemon_memory_rss_bytes | Gauge | Resident set size |
pds_daemon_memory_locked_bytes | Gauge | mlock’d memory |
Vault Operations
| Metric | Type | Description |
|---|---|---|
pds_vault_unlock_total | Counter | Unlock attempts (labeled by factor, result) |
pds_vault_unlock_duration_seconds | Histogram | Time to complete unlock |
pds_vault_secret_read_total | Counter | Secret read operations |
pds_vault_secret_write_total | Counter | Secret write operations |
pds_vault_acl_denial_total | Counter | ACL-denied operations |
IPC Throughput
| Metric | Type | Description |
|---|---|---|
pds_ipc_messages_sent_total | Counter | Messages sent (labeled by event kind) |
pds_ipc_messages_received_total | Counter | Messages received |
pds_ipc_message_bytes_total | Counter | Total bytes over the bus |
pds_ipc_request_duration_seconds | Histogram | Request-response round-trip time |
pds_ipc_connections_active | Gauge | Current connected clients |
pds_ipc_clearance_drop_total | Counter | Messages dropped by clearance check |
Memory Protection Posture
| Metric | Type | Description |
|---|---|---|
pds_mlock_limit_bytes | Gauge | Configured LimitMEMLOCK |
pds_mlock_used_bytes | Gauge | Currently locked memory |
pds_seccomp_active | Gauge | 1 if seccomp filter is loaded, 0 otherwise |
pds_landlock_active | Gauge | 1 if Landlock restrictions are active |
sesame status --doctor
The sesame status --doctor command (tracked as issue #20) performs a
comprehensive system health check. The planned implementation runs 43
individual checks across 6 categories.
Check Categories
1. Daemon Liveness
Verifies each daemon process is running, its systemd unit is active, and it
responds to StatusRequest on the IPC bus.
2. IPC Connectivity
Tests Noise IK handshake to the bus server, measures round-trip latency, verifies the socket file exists with correct permissions.
3. Vault Integrity
Checks SQLCipher database integrity, verifies enrolled auth factors match configuration, tests that the vault salt is present and the correct length.
4. Cryptographic Posture
Verifies Noise IK keypairs exist, checks key file permissions (0600),
validates that the ClearanceRegistry is populated, confirms mlock is
available for secret memory.
5. Platform Integration
Checks Wayland session type, verifies COSMIC compositor protocol
availability, tests xdg-desktop-portal connectivity, confirms D-Bus
session bus access.
6. Configuration
Validates TOML configuration against the schema, checks for deprecated keys, verifies file permissions on sensitive config files.
Output Formats
The --doctor command supports multiple output formats:
- Text (default) – Human-readable output with pass/fail/warn indicators and remediation suggestions.
- JSON (--format json) – Machine-parseable output for CI integration.
- Prometheus exposition (--format prometheus) – Each check becomes a gauge metric (pds_doctor_check{name="...",category="..."} with value 0, 1, or 2 for pass, fail, or warn).
- OTLP (--format otlp) – Exports check results as OpenTelemetry metrics to a configured collector.
Governance Profile Filtering
Checks can be filtered by governance profile to focus on compliance-relevant items:
# Run only STIG-relevant checks
sesame status --doctor --governance stig
# Run PCI-DSS checks, output as JSON
sesame status --doctor --governance pci-dss --format json
# Run SOC2 checks
sesame status --doctor --governance soc2
Each check is tagged with the governance frameworks it is relevant to. For
example, the mlock availability check is relevant to STIG and PCI-DSS but
not SOC2; the audit log integrity check is relevant to all three.
Single Desktop
Open Sesame runs as a full desktop suite on COSMIC/Wayland systems, providing secret management, window management overlays, clipboard isolation, input capture, and application launching across trust profiles.
Package Model
A desktop installation requires both packages:
| Package | Contents |
|---|---|
open-sesame | CLI (sesame), headless daemons: profile, secrets, launcher, snippets |
open-sesame-desktop | Desktop daemons: wm, clipboard, input; Wayland/COSMIC integration |
The open-sesame-desktop package depends on open-sesame. Installing the desktop package
pulls in the headless package automatically.
Installation
APT (Ubuntu 24.04 Noble)
curl -fsSL https://scopecreep-zip.github.io/open-sesame/gpg.key \
| sudo gpg --dearmor -o /usr/share/keyrings/open-sesame.gpg
echo "deb [signed-by=/usr/share/keyrings/open-sesame.gpg] \
https://scopecreep-zip.github.io/open-sesame noble main" \
| sudo tee /etc/apt/sources.list.d/open-sesame.list
sudo apt update
sudo apt install open-sesame open-sesame-desktop
Package integrity is verified via GPG-signed repository indices and SLSA build provenance
attestations generated by GitHub Actions. See SECURITY.md for verification commands.
Nix Flake
nix profile install github:ScopeCreep-zip/open-sesame
Pre-built binaries are available from the scopecreep-zip.cachix.org binary cache with
Ed25519 signing. For home-manager integration, add the flake input and include the Open Sesame
module in the home-manager configuration.
From Source
cargo build --release --workspace
The seven daemon binaries (daemon-profile, daemon-secrets, daemon-launcher,
daemon-snippets, daemon-wm, daemon-clipboard, daemon-input) and the sesame CLI
are placed in target/release/.
Initialization
After installation, initialize the installation identity and default vault:
sesame init
This command performs the following:
- Generates a UUID v4 installation identifier, persisted to ~/.config/pds/installation.toml as an InstallationConfig (core-config/src/schema_installation.rs).
- Derives a deterministic namespace UUID for profile ID generation (the namespace field).
- Creates the default trust profile vault (vaults/default.db) as a SQLCipher-encrypted database.
- Enrolls the first authentication factor (password via Argon2id KDF or SSH agent key).
- Installs and enables systemd user services and targets.
Optionally, an organizational namespace can be provided at init time:
sesame init --org braincraft.io
This populates the org field in installation.toml with the domain and a deterministic
namespace derived as uuid5(NAMESPACE_URL, domain), per the OrgConfig type.
Daemon Architecture
All seven daemons run as systemd user services:
| Daemon | Responsibility | Target | SecurityLevel |
|---|---|---|---|
daemon-profile | IPC bus host, key management, profile activation, audit | Headless | Internal |
daemon-secrets | SQLCipher vaults, ACL, rate limiting | Headless | SecretsOnly |
daemon-launcher | Application launching, frecency scoring | Headless | Internal |
daemon-snippets | Snippet management | Headless | Internal |
daemon-wm | Window management, Alt+Tab overlay | Desktop | Internal |
daemon-clipboard | Clipboard isolation per profile | Desktop | ProfileScoped |
daemon-input | Keyboard/mouse capture, hotkey routing | Desktop | Internal |
Inter-daemon communication uses the Noise IK protocol (X25519 + ChaChaPoly + BLAKE2s) over a
Unix domain socket IPC bus hosted by daemon-profile. Each daemon registers its X25519 static
public key in the clearance registry (core-ipc/src/registry.rs). Messages are routed based on
SecurityLevel ordering: Open < Internal < ProfileScoped < SecretsOnly. A daemon can only
emit messages at or below its own clearance level, and only receives messages at or below its
clearance level.
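The clearance check can be sketched by deriving Ord from declaration order; this is an illustrative reduction of the registry logic, not the core-ipc types:

```rust
// Variant order gives Open < Internal < ProfileScoped < SecretsOnly.
#[allow(dead_code)]
#[derive(Debug, Clone, Copy, PartialEq, Eq, PartialOrd, Ord)]
enum SecurityLevel {
    Open,
    Internal,
    ProfileScoped,
    SecretsOnly,
}

// A daemon may emit or receive a message only at or below its clearance.
fn within_clearance(daemon: SecurityLevel, message: SecurityLevel) -> bool {
    message <= daemon
}
```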
systemd Targets
Two systemd targets compose the service graph:
| Target | WantedBy | Daemons |
|---|---|---|
open-sesame-headless.target | default.target | profile, secrets, launcher, snippets |
open-sesame-desktop.target | graphical-session.target | wm, clipboard, input |
The desktop target declares Requires=open-sesame-headless.target graphical-session.target.
Starting the desktop target starts all headless daemons first. Stopping the desktop target
leaves headless daemons running, so secret management continues when the graphical session
is inactive.
All daemon services use Type=notify with WatchdogSec=30. Failed services restart
after RestartSec=5.
Service Dependency Graph
default.target
+-- open-sesame-headless.target
+-- open-sesame-profile.service
+-- open-sesame-secrets.service (Requires/After: profile)
+-- open-sesame-launcher.service (After: profile)
+-- open-sesame-snippets.service (After: profile)
graphical-session.target
+-- open-sesame-desktop.target (Requires: headless)
+-- open-sesame-wm.service
+-- open-sesame-clipboard.service
+-- open-sesame-input.service
File Locations
Configuration
| Path | Purpose |
|---|---|
| ~/.config/pds/config.toml | User configuration: profiles, crypto, agents, extensions |
| ~/.config/pds/installation.toml | Installation identity: UUID, namespace, org, machine binding |
| ~/.config/pds/ssh-agent.env | SSH agent socket path for factor enrollment |
| /etc/pds/policy.toml | System policy overrides (enterprise-managed, read-only at runtime) |
Configuration follows layered inheritance: system policy > user config > built-in defaults.
Each PolicyOverride (core-config/src/schema.rs) records a dotted key path, enforced value,
and source string (e.g., /etc/pds/policy.toml).
Runtime
| Path | Purpose |
|---|---|
| $XDG_RUNTIME_DIR/pds/ | Runtime directory |
| $XDG_RUNTIME_DIR/pds/bus.sock | Noise IK IPC bus socket |
The IPC socket path can be overridden via the global.ipc.socket_path key in config.toml.
Default channel capacity per subscriber is 1024 messages with a 5-second slow-subscriber
timeout (IpcConfig in core-config/src/schema.rs).
Data
| Path | Purpose |
|---|---|
| ~/.config/pds/vaults/{profile}.db | SQLCipher encrypted vault per trust profile |
| ~/.config/pds/launcher/{profile}.frecency.db | Application launch frecency database |
| ~/.config/pds/audit/ | BLAKE3 hash-chained audit log |
Each trust profile name maps 1:1 to a vault file. The TrustProfileName type
(core-types/src/profile.rs) enforces path safety: ASCII alphanumeric plus hyphens and
underscores, max 64 bytes, no path traversal components.
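The validation rule is small enough to sketch directly; note that the character class alone already excludes every path-traversal ingredient (no `/`, `\`, or `.`). The function name is hypothetical:

```python
import re

_NAME_RE = re.compile(r"^[A-Za-z0-9_-]+$")

def is_valid_profile_name(name: str) -> bool:
    # ASCII alphanumeric plus hyphen and underscore, max 64 bytes.
    # '/', '\' and '.' are not in the class, so ".." or "a/b" can
    # never survive validation.
    return len(name.encode()) <= 64 and bool(_NAME_RE.match(name))

assert is_valid_profile_name("work")
assert is_valid_profile_name("ci-production_2")
assert not is_valid_profile_name("../etc")   # traversal rejected
assert not is_valid_profile_name("a" * 65)   # over 64 bytes
assert not is_valid_profile_name("café")     # non-ASCII rejected
```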
Security Hardening
Each daemon service applies the following systemd directives (see contrib/systemd/*.service):
- NoNewPrivileges=yes – prevents privilege escalation.
- ProtectSystem=strict – mounts the filesystem read-only except explicit ReadWritePaths.
- ProtectHome=read-only – prevents writes outside ~/.config/pds/ and $XDG_RUNTIME_DIR/pds/.
- LimitCORE=0 – disables core dumps.
- LimitMEMLOCK=64M – permits memfd_secret(2) and mlock(2) allocations for secure memory.
- MemoryMax – per-daemon ceiling (128M for profile, 256M for secrets).
The secrets daemon additionally declares PrivateNetwork=yes, preventing all network access
from the process that holds decrypted vault keys.
Landlock filesystem sandboxing and seccomp-bpf syscall filtering are applied in-process by each daemon at startup. Partially-enforced Landlock is treated as a fatal error; the daemon does not start with degraded isolation.
Desktop Overlay
On COSMIC desktops, daemon-wm provides an Alt+Tab window switching overlay rendered via the
COSMIC compositor backend (platform-linux). The overlay displays windows scoped to the active
trust profile. Profile switching is routed through the IPC bus at Internal security level.
Verification
After initialization:
# Verify all daemons are healthy
sesame status
# Verify systemd targets
systemctl --user status open-sesame-headless.target
systemctl --user status open-sesame-desktop.target
# Verify IPC bus socket exists
ls -la $XDG_RUNTIME_DIR/pds/bus.sock
# Verify vault was created
ls -la ~/.config/pds/vaults/default.db
Headless Server
Open Sesame operates without a display server, making it suitable for servers, CI/CD runners,
containers, and virtual machines. The headless deployment uses the open-sesame package only,
with no GUI dependencies.
Package
Only the open-sesame package is required. It contains:
- sesame CLI
- daemon-profile (IPC bus host, key management, audit)
- daemon-secrets (SQLCipher vaults, ACL, rate limiting)
- daemon-launcher (application launching, frecency scoring)
- daemon-snippets (snippet management)
The open-sesame-desktop package is not installed. The three desktop daemons (daemon-wm,
daemon-clipboard, daemon-input) are absent and no Wayland or COSMIC libraries are linked.
Use Cases
CI/CD Secret Injection
Open Sesame injects secrets into build processes as environment variables, scoped by trust profile:
sesame env -p ci-production -- make deploy
The sesame env command activates the named profile, decrypts the vault, and launches the
child process with secrets projected into its environment. The child process inherits only
the secrets defined in the ci-production profile’s vault. On process exit, the secrets are
not persisted anywhere on disk outside the encrypted vault.
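The projection pattern (decrypt, inject into the child environment only, never touch disk) can be sketched with Python's subprocess module; the `secrets` dict stands in for the decrypted vault contents:

```python
import os
import subprocess

def run_with_secrets(secrets: dict[str, str], argv: list[str]) -> int:
    # Project secrets into the child's environment only. The parent
    # environment is copied, not mutated, and nothing is written to disk.
    env = {**os.environ, **secrets}
    return subprocess.run(argv, env=env, check=False).returncode

# Child sees the secret; the parent process environment is unchanged.
rc = run_with_secrets({"DATABASE_URL": "postgres://example"},
                      ["sh", "-c", 'test -n "$DATABASE_URL"'])
assert rc == 0
assert "DATABASE_URL" not in os.environ
```

This mirrors the `sesame env` contract: the secret lives only in the child's address space for the child's lifetime.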
Server Credential Management
Long-running services can read secrets at startup or on demand:
# One-shot: print a secret value
sesame secret get -p work database-url
# Launch a service with its secret environment
sesame env -p production -- ./my-service
Container Secret Injection
In container environments, Open Sesame can run as a sidecar or init container that projects secrets into shared volumes or environment:
# In an init container or entrypoint script
sesame init --non-interactive
sesame env -p container -- exec "$@"
systemd Target
The headless target starts on default.target, requiring no graphical session:
# contrib/systemd/open-sesame-headless.target
[Unit]
Description=Open Sesame Headless Suite
# No display server required. Suitable for servers, containers, VMs.
[Install]
WantedBy=default.target
The four headless daemons are PartOf=open-sesame-headless.target. The profile daemon starts
first; secrets, launcher, and snippets declare ordering dependencies on profile.
Starting and Stopping
# Start the headless suite
systemctl --user start open-sesame-headless.target
# Stop all headless daemons
systemctl --user stop open-sesame-headless.target
# Enable on boot
systemctl --user enable open-sesame-headless.target
SSH Agent Forwarding for Remote Vault Unlock
When Open Sesame is installed on a remote server, vault unlock can use an SSH agent key from the operator’s local machine. This avoids storing passwords on the server.
Setup
1. On the remote server, enroll an SSH agent factor during sesame init:
   sesame init --auth-factor ssh-agent
2. When connecting, forward the SSH agent:
   ssh -A user@server
3. On the remote server, unlock the vault using the forwarded agent:
   sesame unlock -p work --factor ssh-agent
The SSH agent backend (AuthFactorId::SshAgent in core-types/src/auth.rs) produces a
deterministic signature over a challenge, which is processed through
BLAKE3 derive_key("pds v2 ssh-vault-kek {profile}") to produce a KEK. The KEK unwraps the
master key from the EnrollmentBlob stored on disk. The SSH private key never leaves the
local machine.
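The shape of the KEK derivation can be illustrated with the standard library. This is a sketch only: the real implementation uses BLAKE3's derive_key mode, for which stdlib BLAKE2b stands in here, and the signature bytes are invented:

```python
import hashlib

def derive_kek(ssh_signature: bytes, profile: str) -> bytes:
    # Stand-in for BLAKE3 derive_key("pds v2 ssh-vault-kek {profile}").
    # Domain separation: the profile name is bound into the context, so
    # the same SSH key yields a different KEK per profile.
    context = f"pds v2 ssh-vault-kek {profile}".encode()
    return hashlib.blake2b(context + b"\x00" + ssh_signature,
                           digest_size=32).digest()

sig = b"deterministic-ssh-signature-bytes"
assert derive_kek(sig, "work") == derive_kek(sig, "work")        # deterministic
assert derive_kek(sig, "work") != derive_kek(sig, "personal")    # per-profile
```

The determinism requirement is why the SSH factor works for unlock: the same key and challenge always reproduce the same KEK, which then unwraps the master key from the EnrollmentBlob.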
Multi-Factor on Headless
The AuthCombineMode (core-types/src/auth.rs) supports three modes for headless
environments:
| Mode | Behavior |
|---|---|
| Any | Any single enrolled factor unlocks. SSH agent alone suffices. |
| All | All enrolled factors required. Both password and SSH agent must be provided. |
| Policy | Configurable: e.g., SSH agent always required, plus one additional factor. |
For headless servers where interactive password entry is impractical, enrolling SSH agent as
the sole factor with Any mode provides passwordless unlock gated on SSH key possession.
Configuration
Headless configuration is identical to desktop, minus the window manager, clipboard, and input
sections. The relevant top-level configuration file is ~/.config/pds/config.toml with the
schema defined in core-config/src/schema.rs.
Key headless-specific settings:
[global]
default_profile = "production"
[global.ipc]
# Custom socket path for containerized deployments
# socket_path = "/run/pds/bus.sock"
channel_capacity = 1024
[global.logging]
level = "info"
json = true # Structured output for log aggregation
journald = true # journald integration on systemd hosts
File Locations
| Path | Purpose |
|---|---|
| ~/.config/pds/config.toml | User configuration |
| ~/.config/pds/installation.toml | Installation identity |
| ~/.config/pds/vaults/{profile}.db | Encrypted vaults |
| $XDG_RUNTIME_DIR/pds/bus.sock | IPC socket |
| ~/.config/pds/audit/ | Audit log |
Security Notes
The secrets daemon runs with PrivateNetwork=yes, which is particularly relevant on servers
where network-facing services coexist. Even if an adjacent service is compromised, it cannot
reach the secrets daemon over the network. All access is via the authenticated Noise IK IPC
bus over a Unix domain socket.
On headless systems without memfd_secret(2) support (e.g., older kernels or containers
without CONFIG_SECRETMEM=y), the daemons fall back to mmap(MAP_ANONYMOUS) with mlock(2)
and MADV_DONTDUMP. This fallback is logged at ERROR level with an explicit compliance impact
statement naming the frameworks affected (IL5/IL6, STIG, PCI-DSS) and the remediation command
to enable CONFIG_SECRETMEM.
Headless-First Design
Every sesame CLI command works from explicit primitives without interactive prompts. The CLI
does not assume a terminal is attached. Exit codes, structured JSON output, and non-interactive
flags support automation:
# Non-interactive unlock with SSH agent
sesame unlock -p work --factor ssh-agent --non-interactive
# JSON output for scripting
sesame secret list -p work --json | jq '.[].key'
# Exit code indicates vault lock state
sesame status -p work --quiet; echo $?
Multi-User
Open Sesame supports multiple users on a shared workstation. Each user operates an independent set of daemons, vaults, and IPC buses with kernel-enforced isolation between users.
Per-User Service Instances
Each user runs their own systemd user services. There is no system-wide Open Sesame daemon.
When user alice and user bob both log into the same machine:
- Alice’s daemon-profile listens on $XDG_RUNTIME_DIR/pds/bus.sock (typically /run/user/1000/pds/bus.sock).
- Bob’s daemon-profile listens on /run/user/1001/pds/bus.sock.
- Each user’s daemon set is managed by their own systemctl --user instance.
- The two sets of daemons have no knowledge of each other.
Isolation Boundaries
| Boundary | Mechanism |
|---|---|
| IPC bus | Separate Unix domain sockets under each user’s $XDG_RUNTIME_DIR |
| Configuration | Separate ~/.config/pds/ per user home directory |
| Vaults | Separate SQLCipher databases per user, per profile |
| Audit logs | Separate BLAKE3 hash chains per user |
| Secret memory | memfd_secret(2) pages are per-process; invisible to other UIDs and to root |
| Process isolation | Landlock + seccomp per daemon; ProtectHome=read-only prevents cross-user access |
memfd_secret Isolation
On Linux 5.14+ with CONFIG_SECRETMEM=y, all secret-carrying memory allocations
(SecureBytes, SecureVec, SensitiveBytes) use memfd_secret(2). Pages allocated via
this syscall are:
- Removed from the kernel direct map.
- Invisible to /proc/pid/mem reads.
- Inaccessible to kernel modules and DMA.
- Inaccessible via ptrace, even as root.
This means that even a root-level compromise on the shared workstation cannot read another user’s decrypted secrets from memory. The secrets exist only in the virtual address space of the owning process.
When memfd_secret is unavailable, the fallback is mmap(MAP_ANONYMOUS) with mlock(2) and
MADV_DONTDUMP. This prevents secrets from being swapped to disk or appearing in core dumps,
but does not remove them from the kernel direct map. The fallback is logged at ERROR level
with compliance impact.
RLIMIT_MEMLOCK
Each daemon service sets LimitMEMLOCK=64M (see contrib/systemd/*.service). On a
multi-user workstation, the total memfd_secret and mlock usage is the sum across all
users’ daemon instances. System administrators should verify that the system-wide locked
memory limit and per-user RLIMIT_MEMLOCK (via /etc/security/limits.conf) accommodate
the expected number of concurrent users.
Shared Workstation Model
Separate Vaults, Separate Profiles
Each user has their own InstallationConfig with a distinct installation UUID. Two users on
the same machine have different installation IDs, different vault encryption keys, and
different profile IDs even if both name a profile work. The TrustProfileName maps to a
per-user vault file at ~/.config/pds/vaults/{name}.db.
Hardware Security Key per User
Users can enroll different hardware security keys (FIDO2, YubiKey) as authentication factors.
The AuthFactorId::Fido2 and AuthFactorId::Yubikey variants in core-types/src/auth.rs
support per-user enrollment. A shared YubiKey slot is not assumed; each user’s enrollment
produces a distinct credential ID.
Profile Activation Independence
Profile activation is per-user. Alice activating her corporate profile does not affect
Bob’s active profile. The daemon-profile instance for each user independently evaluates
activation rules (ActivationConfig in core-config/src/schema.rs): WiFi SSID triggers,
USB device presence, time-of-day rules, and security key requirements.
System Policy
Enterprise administrators can enforce organization-wide defaults via /etc/pds/policy.toml.
This file is read-only at runtime and applies to all users on the machine.
# /etc/pds/policy.toml
[[policy]]
key = "crypto.kdf"
value = "argon2id"
source = "enterprise-security-policy"
[[policy]]
key = "audit.enabled"
value = true
source = "enterprise-security-policy"
[[policy]]
key = "clipboard.max_history"
value = 0
source = "enterprise-data-loss-prevention"
Each entry corresponds to a PolicyOverride struct (core-config/src/schema.rs) with a
dotted key path, enforced value, and source identifier. Policy overrides take precedence
over user configuration. Users cannot override a policy-locked key.
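The precedence rule is a straightforward layered merge. A sketch assuming the config has already been flattened to dotted key paths (the flattening step and the function name are hypothetical):

```python
def effective_config(defaults: dict, user: dict, policy: dict) -> dict:
    # Precedence: system policy > user config > built-in defaults.
    # Later dicts win, so a policy-locked key overrides any user setting.
    return {**defaults, **user, **policy}

cfg = effective_config(
    {"crypto.kdf": "argon2id", "clipboard.max_history": 50},  # built-in defaults
    {"clipboard.max_history": 200},                           # user preference
    {"clipboard.max_history": 0},                             # enterprise DLP lock
)
assert cfg["clipboard.max_history"] == 0      # policy wins over user config
assert cfg["crypto.kdf"] == "argon2id"        # untouched default survives
```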
Policy Distribution
System policy files are managed by the organization’s configuration management tooling
(Ansible, Puppet, Chef, NixOS modules, or similar). Open Sesame does not implement its own
policy distribution mechanism. The file at /etc/pds/policy.toml is a standard configuration
file managed by the operating system’s package manager or configuration management.
Kernel Requirements
For full multi-user isolation:
| Requirement | Purpose | Verification |
|---|---|---|
| Linux 5.14+ | memfd_secret(2) | uname -r |
| CONFIG_SECRETMEM=y | Kernel direct-map removal | grep SECRETMEM /boot/config-$(uname -r) |
| systemd 255+ | Per-user service management | systemctl --version |
| Sufficient RLIMIT_MEMLOCK | Locked memory for all users | ulimit -l |
Auditing in Multi-User Environments
Each user’s audit log is independent. The BLAKE3 hash-chained audit log for a user resides at
~/.config/pds/audit/ under that user’s home directory. Audit verification with
sesame audit verify operates on the current user’s chain only.
For centralized audit collection across all users on a workstation, the structured JSON
logging output (global.logging.json = true) can be forwarded to a central log aggregator
via journald or a sidecar log shipper. Each log entry includes the installation ID, which
uniquely identifies the user’s Open Sesame instance.
Fleet Management
Design Intent. This page describes the architecture for managing Open Sesame across many devices. The primitives referenced below (InstallationId, OrganizationNamespace, PolicyOverride, structured logging) exist in the type system and configuration schema today. Fleet-scale orchestration tooling that consumes these primitives is not yet implemented.
Overview
Fleet management treats each Open Sesame installation as an independently-operating node that can be configured, monitored, and audited from a central control plane. The design relies on three properties already present in the system:
- Every installation has a globally unique InstallationId (UUID v4, generated at sesame init), defined in core-types/src/security.rs.
- Installations can be grouped by OrganizationNamespace (domain-derived UUID), enabling fleet-wide identity correlation.
- System policy (/etc/pds/policy.toml) is a static file that can be distributed by configuration management without requiring a running Open Sesame daemon.
Profile and Policy Distribution
Configuration Management Integration
Open Sesame’s configuration is file-based. Fleet-wide profile templates and security policies are distributed as files via existing configuration management tools:
Configuration Management (Ansible/Puppet/Chef/NixOS)
+-- /etc/pds/policy.toml System policy overrides
+-- ~/.config/pds/config.toml User configuration template
+-- ~/.config/pds/installation.toml Pre-seeded installation identity (optional)
The PolicyOverride type (core-config/src/schema.rs) supports locking any configuration
key to an enforced value with a source identifier:
# Enforced by fleet management
[[policy]]
key = "crypto.kdf"
value = "argon2id"
source = "fleet-security-baseline-2025"
[[policy]]
key = "crypto.minimum_peer_profile"
value = "governance-compatible"
source = "fleet-security-baseline-2025"
Pre-Seeded Installation Identity
For fleet provisioning, installation.toml can be pre-generated with a known UUID and
organizational namespace, then distributed to devices before sesame init runs. The
InstallationConfig (core-config/src/schema_installation.rs) fields are:
| Field | Purpose | Fleet Use |
|---|---|---|
| id | UUID v4, unique per device | Asset tracking, audit correlation |
| namespace | Derived UUID for deterministic profile IDs | Cross-device profile identity |
| org.domain | Organization domain | Fleet grouping |
| org.namespace | uuid5(NAMESPACE_URL, domain) | Deterministic namespace derivation |
| machine_binding.binding_hash | BLAKE3 hash of machine identity material | Hardware attestation |
| machine_binding.binding_type | machine-id or tpm-bound | Binding method |
Pre-seeding the org field ensures all fleet devices share a common organizational namespace,
enabling deterministic profile ID generation across the fleet.
Centralized Audit Log Collection
Each Open Sesame installation produces a BLAKE3 hash-chained audit log and structured log output. Fleet-scale audit aggregation uses the existing structured logging infrastructure.
journald and Log Shipping
# config.toml on fleet devices
[global.logging]
level = "info"
json = true
journald = true
With json = true and journald = true, all daemon log entries are structured JSON emitted
to the systemd journal. A log shipper (Promtail, Fluentd, Vector, Filebeat) forwards journal
entries to a central aggregator.
Each structured log entry includes:
- installation_id – the device’s UUID from installation.toml.
- daemon_id – which daemon emitted the entry (DaemonId from core-types/src/ids.rs).
- profile – active trust profile name.
- event – the operation performed.
- Timestamp, severity, and span context.
Audit Chain Verification
The BLAKE3 hash-chained audit log provides tamper evidence at the device level. For fleet-wide integrity verification:
- Collect audit chain files from each device.
- Run sesame audit verify against each chain independently.
- Cross-reference audit entries with centralized log aggregator records.
A broken hash chain on any device indicates tampering or data loss on that device.
Remote Unlock Patterns
Fleet devices may need to be unlocked without physical operator presence.
SSH Agent Forwarding
An operator connects to a fleet device and forwards their SSH agent:
ssh -A operator@fleet-device-042
sesame unlock -p production --factor ssh-agent
The SSH agent factor (AuthFactorId::SshAgent) derives a KEK from the forwarded key’s
deterministic signature without the private key leaving the operator’s machine.
Delegated Factors (Design Intent)
The DelegationGrant type (core-types/src/security.rs) models time-bounded,
scope-narrowed capability delegation. A fleet operator could issue a delegation grant to
an automation agent, authorizing it to unlock specific profiles on specific devices:
DelegationGrant {
delegator: <operator-agent-id>,
scope: CapabilitySet { Unlock },
initial_ttl: 3600s,
heartbeat_interval: 300s,
nonce: <16 random bytes>,
signature: <Ed25519 over grant fields>,
}
The grant’s scope is intersected with the delegator’s own capabilities, ensuring the
automation agent cannot exceed the operator’s authority. The initial_ttl and
heartbeat_interval fields enforce time-bounded access with mandatory renewal.
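Both rules (scope intersection and time-bounded liveness) are simple enough to sketch. The field names mirror the pseudocode above; the Python types are stand-ins for the Rust DelegationGrant:

```python
from dataclasses import dataclass

@dataclass
class Grant:
    scope: frozenset[str]        # requested capabilities
    issued_at: float             # seconds
    initial_ttl: float
    last_heartbeat: float
    heartbeat_interval: float

def effective_scope(delegator_caps: frozenset[str], grant: Grant) -> frozenset[str]:
    # A grant can narrow authority but never widen it: intersect the
    # requested scope with what the delegator itself holds.
    return delegator_caps & grant.scope

def is_live(grant: Grant, now: float) -> bool:
    # Expired TTL or a missed heartbeat both invalidate the grant.
    return (now - grant.issued_at <= grant.initial_ttl
            and now - grant.last_heartbeat <= grant.heartbeat_interval)

g = Grant(frozenset({"Unlock", "SecretRead"}), 0.0, 3600.0, 0.0, 300.0)
# Operator only holds Unlock, so SecretRead is silently dropped:
assert effective_scope(frozenset({"Unlock"}), g) == frozenset({"Unlock"})
assert is_live(g, 250.0)
assert not is_live(g, 301.0)   # heartbeat missed
```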
Fleet Health Monitoring
Daemon Health
All daemons use Type=notify with WatchdogSec=30. systemd restarts unhealthy daemons
automatically. Fleet monitoring collects systemd service states via standard node monitoring
(node_exporter, osquery, or equivalent).
Security Posture Signals
Key posture signals per device:
| Signal | Source | Meaning |
|---|---|---|
| memfd_secret availability | Daemon startup log | Whether secrets are removed from kernel direct map |
| Landlock enforcement | Daemon startup log | Whether filesystem sandboxing is active |
| seccomp-bpf active | Daemon startup log | Whether syscall filtering is active |
| Kernel version | uname -r | Whether platform meets minimum requirements |
| CONFIG_SECRETMEM=y | /boot/config-* | Kernel compiled with secret memory support |
Devices that log memfd_secret fallback at ERROR level are operating at a reduced security
posture. Fleet management should alert on this condition and schedule kernel upgrades.
Structured Alerting
With JSON-structured logging forwarded to a central aggregator, fleet operators can define alerts on:
- Vault unlock failures (rate limiting triggered).
- Audit chain verification failures.
- Daemon restart loops (watchdog failures).
- Security posture degradation (memfd_secret fallback).
- Policy override conflicts (user config conflicts with fleet policy).
Kubernetes
Design Intent. This page describes how Open Sesame can operate as a secret provider in Kubernetes environments. The type system primitives referenced below (InstallationId, AgentIdentity, DelegationGrant, Attestation) exist today. The Kubernetes-specific integration components (sidecar container image, CSI driver, admission webhook) are not yet implemented.
Architecture
Open Sesame in Kubernetes operates as a per-pod or per-node secret provider. The headless
daemon set (daemon-profile, daemon-secrets) runs inside a sidecar container or as a
DaemonSet, providing secrets to application containers via environment injection or shared
volumes.
No desktop daemons are deployed. The open-sesame package alone is sufficient.
Sidecar Pattern
The sidecar pattern deploys an Open Sesame container alongside each application pod:
Pod
+-- app-container
| Reads secrets from shared volume or environment
+-- open-sesame-sidecar
daemon-profile, daemon-secrets
Mounts: /run/pds (shared tmpfs)
Mounts: /etc/pds/installation.toml (ConfigMap or Secret)
Mounts: /etc/pds/policy.toml (ConfigMap)
Secret Projection
The sidecar decrypts vault secrets and writes them to a shared tmpfs volume that the
application container mounts:
# Init container or sidecar entrypoint
sesame init --non-interactive --installation /etc/pds/installation.toml
sesame unlock -p $PROFILE --factor ssh-agent --non-interactive
sesame env -p $PROFILE --export-to /run/pds/secrets.env
Alternatively, the sidecar can project secrets as individual files:
/run/pds/secrets/
+-- database-url
+-- api-key
+-- tls-cert
The tmpfs mount ensures secrets are never written to persistent storage on the node.
Kubernetes Secret Objects (Design Intent)
A controller component could synchronize vault contents into native Kubernetes Secret
objects, enabling standard envFrom and volume mount patterns:
Open Sesame Vault (SQLCipher)
--> Controller watches vault changes
--> Creates/updates Kubernetes Secret objects
--> Pods consume via standard envFrom/volumeMount
This approach trades the stronger isolation of the sidecar pattern for compatibility with existing Kubernetes-native workflows.
Pod Identity
Installation ID per Pod
Each pod receives a unique InstallationId via a pre-seeded installation.toml. The
InstallationConfig (core-config/src/schema_installation.rs) for a pod includes:
| Field | Value | Purpose |
|---|---|---|
| id | UUID v4, unique per pod instance | Audit trail attribution |
| org.domain | Organization domain | Fleet grouping |
| machine_binding.binding_type | machine-id | Pod identity binding |
| machine_binding.binding_hash | BLAKE3 hash of pod UID + node ID | Anti-migration attestation |
The machine binding hash can incorporate the Kubernetes pod UID and node identity, binding the installation to a specific pod lifecycle. If the pod is rescheduled to a different node, the binding hash does not match, requiring re-attestation.
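A sketch of the anti-migration property, with stdlib BLAKE2b standing in for BLAKE3 and a hypothetical composition of pod UID and node ID:

```python
import hashlib

def binding_hash(pod_uid: str, node_id: str) -> str:
    # Illustrative only: binds the installation to one pod-on-one-node.
    # A NUL separator prevents ambiguity between the two fields.
    return hashlib.blake2b(f"{pod_uid}\x00{node_id}".encode(),
                           digest_size=32).hexdigest()

h = binding_hash("pod-7f3a", "node-01")
assert h == binding_hash("pod-7f3a", "node-01")   # stable over pod lifetime
assert h != binding_hash("pod-7f3a", "node-02")   # reschedule breaks the binding
```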
Workload Attestation (Design Intent)
The Attestation enum (core-types/src/security.rs) includes ProcessAttestation with an
exe_hash field. In Kubernetes, workload attestation extends this concept:
- Container image digest serves as the exe_hash, verified against a signed manifest.
- Service account token provides Kubernetes-native identity.
- Node attestation via TPM or machine binding provides hardware-rooted trust.
These attestation signals compose into a TrustVector (core-types/src/security.rs):
TrustVector {
authn_strength: High, // Service account + image signature
authz_freshness: <since last token rotation>,
delegation_depth: 1, // Delegated from cluster operator
device_posture: 0.8, // Node with TPM but no memfd_secret
network_exposure: Encrypted, // Noise IK over loopback or pod network
agent_type: Service { unit: "my-app-pod" },
}
DaemonSet Pattern (Design Intent)
For clusters where per-pod sidecars are too resource-intensive, a DaemonSet deploys one Open Sesame instance per node:
Node
+-- open-sesame DaemonSet pod
| daemon-profile, daemon-secrets
| Exposes: /run/pds/bus.sock (hostPath)
|
+-- app-pod-1 (mounts /run/pds/bus.sock)
+-- app-pod-2 (mounts /run/pds/bus.sock)
Application pods connect to the node-level IPC bus. Each connecting pod authenticates via
Attestation::UCred (UID/PID from the Unix domain socket) and receives capabilities scoped
to its service account identity.
The SecurityLevel hierarchy (core-types/src/security.rs) ensures that application pods
at Open or Internal clearance cannot read SecretsOnly messages on the shared bus.
Service Mesh Integration (Design Intent)
Open Sesame’s Noise IK transport (core-ipc) provides mutual authentication with forward
secrecy. In service mesh contexts, Noise IK can serve as an alternative to mTLS for
service-to-service communication:
| Property | mTLS (Istio/Linkerd) | Noise IK |
|---|---|---|
| Key exchange | X.509 certificates, CA hierarchy | X25519 static keys, clearance registry |
| Forward secrecy | Per-connection via TLS 1.3 | Per-connection via Noise IK |
| Identity binding | SPIFFE ID in SAN | AgentId + InstallationId |
| Revocation | CRL/OCSP, short-lived certs | Clearance registry generation counter |
| Trust model | Centralized CA | Peer-to-peer, registry-based |
The clearance registry (core-ipc/src/registry.rs) maps X25519 public keys to daemon
identities and security levels. In a Kubernetes context, the registry could be populated
from a shared ConfigMap or CRD, enabling cross-pod Noise IK authentication without a
certificate authority.
Resource Considerations
Sidecar Resources
Minimum resource requests for the Open Sesame sidecar:
| Resource | Request | Limit | Notes |
|---|---|---|---|
| CPU | 10m | 100m | Idle after vault unlock |
| Memory | 32Mi | 128Mi | LimitMEMLOCK=64M for secure memory |
| Ephemeral storage | 10Mi | 50Mi | Vault DB + audit log |
Security Context
securityContext:
runAsNonRoot: true
readOnlyRootFilesystem: true
allowPrivilegeEscalation: false
seccompProfile:
type: RuntimeDefault
The Open Sesame daemons apply their own seccomp-bpf filters in-process, layered on top of the Kubernetes-level seccomp profile.
memfd_secret in Containers
memfd_secret(2) requires CONFIG_SECRETMEM=y in the host kernel. Most managed Kubernetes
distributions (GKE, EKS, AKS) use kernels that do not enable this option by default. On these
platforms, Open Sesame falls back to mmap with mlock and logs the security posture
degradation at ERROR level. Operators running on custom node images or bare-metal Kubernetes
can enable CONFIG_SECRETMEM=y for full protection.
Air-Gapped Environments
Design Intent. This page describes operating Open Sesame in air-gapped, SCIF, and offline environments (IL5/IL6 and above). Core vault operations require no network access today. The key ceremony procedures and audit export tooling described below are architectural targets grounded in the existing type system and cryptographic primitives.
Offline-First Architecture
Open Sesame’s core functionality requires no network access. The secrets daemon
(daemon-secrets) runs with PrivateNetwork=yes in its systemd unit, enforcing network
isolation at the kernel level. All inter-daemon communication occurs over a local Unix domain
socket via the Noise IK protocol.
Operations that work fully offline:
- Vault creation, unlock, lock.
- Secret read, write, delete, list.
- Profile activation and switching.
- Audit log generation and verification.
- Application launching with secret injection.
- Clipboard isolation.
The only operations that require network access are SSH agent forwarding (which requires an SSH connection) and extension installation from OCI registries (which can be pre-staged).
memfd_secret as Security Floor
Air-gapped environments operating at IL5/IL6 or within SCIFs require memfd_secret(2) as a
mandatory security control. The kernel must be compiled with CONFIG_SECRETMEM=y.
Verification
# Verify kernel support
grep CONFIG_SECRETMEM /boot/config-$(uname -r)
# Expected: CONFIG_SECRETMEM=y
# Verify at runtime
sesame status --security-posture
On systems where memfd_secret is unavailable, Open Sesame logs at ERROR level with an
explicit compliance impact statement:
ERROR memfd_secret unavailable: secrets remain on kernel direct map.
Compliance impact: does not meet IL5/IL6, DISA STIG, PCI-DSS requirements
for memory isolation. Remediation: enable CONFIG_SECRETMEM=y in kernel config.
For air-gapped deployments, memfd_secret availability should be a deployment gate. Do not
proceed with secret enrollment on systems that report this fallback.
Kernel Configuration
Air-gapped systems should use a hardened kernel with at minimum:
CONFIG_SECRETMEM=y # memfd_secret(2) support
CONFIG_SECURITY_LANDLOCK=y # Landlock filesystem sandboxing
CONFIG_SECCOMP=y # seccomp-bpf syscall filtering
CONFIG_SECCOMP_FILTER=y # BPF filter programs for seccomp
Air-Gapped Key Ceremony
Master Key Generation
In an air-gapped environment, the initial key ceremony is performed on a physically isolated machine:
1. Preparation. Boot the ceremony machine from verified media. Verify that the kernel supports memfd_secret.
2. Initialization. Run sesame init to generate the InstallationConfig:
   - UUID v4 installation identifier.
   - Organization namespace (if enterprise-managed).
   - Machine binding via /etc/machine-id or TPM (MachineBindingType in core-types/src/security.rs).
3. Factor Enrollment. Enroll authentication factors per the site’s AuthCombineMode policy (core-types/src/auth.rs):
   - Password – Argon2id KDF with 19 MiB memory, 2 iterations.
   - SshAgent – deterministic SSH signature-derived KEK.
   - Fido2, Tpm, Yubikey – hardware factors (defined in AuthFactorId; backends not yet implemented).
4. Policy Lock. Deploy /etc/pds/policy.toml to enforce cryptographic algorithm selection:
   [[policy]]
   key = "crypto.kdf"
   value = "argon2id"
   source = "airgap-key-ceremony-2025"

   [[policy]]
   key = "crypto.minimum_peer_profile"
   value = "leading-edge"
   source = "airgap-key-ceremony-2025"
5. Verification. Run sesame status and sesame audit verify to confirm the installation is healthy and the audit chain has a valid genesis entry.
Factor Enrollment for “All” Mode
For high-security environments, AuthCombineMode::All requires every enrolled factor to be
present at unlock time. The master key is derived from chaining all factor contributions:
BLAKE3 derive_key("pds v2 combined-master-key {profile}", sorted_factor_pieces)
--> Combined Master Key
This prevents any single compromised factor from unlocking the vault.
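The order-independence of the combination can be sketched with the stdlib; BLAKE2b stands in for BLAKE3 derive_key, and the factor pieces are invented placeholders:

```python
import hashlib

def combined_master_key(profile: str, factor_pieces: list[bytes]) -> bytes:
    # Stand-in for BLAKE3 derive_key("pds v2 combined-master-key {profile}").
    # Sorting the pieces makes the result independent of enrollment order.
    context = f"pds v2 combined-master-key {profile}".encode()
    h = hashlib.blake2b(context, digest_size=32)
    for piece in sorted(factor_pieces):
        h.update(piece)
    return h.digest()

pw, ssh = b"password-derived-piece", b"ssh-derived-piece"
# Same factors in any order yield the same master key:
assert combined_master_key("work", [pw, ssh]) == combined_master_key("work", [ssh, pw])
# Omitting any factor yields a different (useless) key:
assert combined_master_key("work", [pw]) != combined_master_key("work", [pw, ssh])
```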
Factor Enrollment for “Policy” Mode
The AuthPolicy struct (core-types/src/auth.rs) supports threshold-based unlock:
[auth]
mode = "policy"
[auth.policy]
required = ["password"]
additional_required = 1
# Enrolled: password, ssh-agent, fido2
# Unlock requires: password + (ssh-agent OR fido2)
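The threshold check described by this configuration can be sketched in a few lines. This is a std-only sketch; the struct shape mirrors the TOML above, but the field and function names are illustrative rather than the actual `AuthPolicy` API.

```rust
use std::collections::HashSet;

// Illustrative sketch of threshold-based unlock evaluation.
struct AuthPolicy {
    required: Vec<String>,       // factors that must always be present
    additional_required: usize,  // how many extra enrolled factors are needed
}

fn policy_satisfied(policy: &AuthPolicy, presented: &HashSet<String>) -> bool {
    // Every required factor must be present...
    if !policy.required.iter().all(|f| presented.contains(f)) {
        return false;
    }
    // ...plus at least `additional_required` factors beyond the required set.
    let extra = presented
        .iter()
        .filter(|f| !policy.required.contains(*f))
        .count();
    extra >= policy.additional_required
}

fn main() {
    let policy = AuthPolicy {
        required: vec!["password".to_string()],
        additional_required: 1,
    };
    let ok: HashSet<String> =
        ["password", "ssh-agent"].iter().map(|s| s.to_string()).collect();
    let missing: HashSet<String> =
        ["password"].iter().map(|s| s.to_string()).collect();
    assert!(policy_satisfied(&policy, &ok));       // password + ssh-agent unlocks
    assert!(!policy_satisfied(&policy, &missing)); // password alone is not enough
}
```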
Audit Chain Export
The BLAKE3 hash-chained audit log provides tamper evidence that can be verified independently. For air-gapped environments where logs cannot be streamed to a central aggregator:
Export Procedure
1. Export. Copy the audit chain from the air-gapped machine to removable media:

       cp -r ~/.config/pds/audit/ /media/audit-export/

2. Transfer. Move the removable media through the appropriate security boundary (data diode, manual review, or similar).
3. Verify. On the receiving side, verify the chain integrity:

       sesame audit verify --path /media/audit-export/

   Verification checks that each entry's BLAKE3 hash chains to the previous entry. Any modification, deletion, or reordering of entries breaks the chain.
Chain Properties
Each audit entry contains:
- Timestamp.
- Operation type (unlock, lock, secret read/write/delete, profile switch).
- Profile name.
- BLAKE3 hash of the previous entry (chain link).
- BLAKE3 hash of the current entry’s contents.
The chain starts from a genesis entry created at sesame init. The hash algorithm is
configurable via CryptoConfigToml.audit_hash (core-config/src/schema_crypto.rs):
BLAKE3 (default) or SHA-256 (governance-compatible).
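The chain walk that `sesame audit verify` performs can be sketched as follows. This std-only sketch uses `DefaultHasher` as a stand-in for the configured hash (BLAKE3 or SHA-256), and the entry layout is simplified to a single payload string; the real entries carry the structured fields listed above.

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Stand-in for the configured audit hash (BLAKE3 or SHA-256); illustration only.
fn entry_hash(prev_hash: u64, payload: &str) -> u64 {
    let mut h = DefaultHasher::new();
    prev_hash.hash(&mut h);
    payload.hash(&mut h);
    h.finish()
}

struct AuditEntry {
    prev_hash: u64,  // chain link to the previous entry
    payload: String, // timestamp, operation type, profile name, ...
    hash: u64,       // hash of this entry's contents
}

// Walk the chain from the genesis entry, re-deriving each hash.
fn verify_chain(genesis_hash: u64, entries: &[AuditEntry]) -> bool {
    let mut prev = genesis_hash;
    for e in entries {
        if e.prev_hash != prev || entry_hash(prev, &e.payload) != e.hash {
            return false; // modification, deletion, or reordering detected
        }
        prev = e.hash;
    }
    true
}

fn main() {
    let genesis = entry_hash(0, "genesis");
    let h1 = entry_hash(genesis, "unlock work");
    let mut chain = vec![AuditEntry {
        prev_hash: genesis,
        payload: "unlock work".to_string(),
        hash: h1,
    }];
    assert!(verify_chain(genesis, &chain));
    chain[0].payload = "unlock personal".to_string(); // tamper with an entry
    assert!(!verify_chain(genesis, &chain));          // the chain no longer verifies
}
```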
Compliance Mapping
NIST 800-53
| Control | Open Sesame Mechanism |
|---|---|
| SC-28 (Protection of Information at Rest) | SQLCipher AES-256-CBC + HMAC-SHA512, Argon2id KDF |
| SC-12 (Cryptographic Key Establishment) | BLAKE3 domain-separated key derivation hierarchy |
| SC-13 (Cryptographic Protection) | Config-selectable algorithms via CryptoConfig; governance-compatible profile uses NIST-approved algorithms |
| AU-10 (Non-repudiation) | BLAKE3 hash-chained audit log |
| AC-3 (Access Enforcement) | Per-daemon SecurityLevel clearance, CapabilitySet authorization |
| IA-5 (Authenticator Management) | Multi-factor auth policy (AuthCombineMode), hardware factor support |
DISA STIG
| STIG Control | Open Sesame Mechanism |
|---|---|
| Encrypted storage at rest | SQLCipher vaults, per-profile encryption keys |
| Memory protection | memfd_secret(2), guard pages, volatile zeroize |
| Audit trail integrity | BLAKE3 hash chain, tamper detection |
| Least privilege | Landlock, seccomp-bpf, per-daemon clearance levels |
| No core dumps | LimitCORE=0, MADV_DONTDUMP |
Extension Pre-Staging
In air-gapped environments, WASI extensions cannot be fetched from OCI registries at runtime. Extensions are pre-staged during the provisioning phase:
1. On a connected machine, fetch the extension OCI artifact. The `OciReference` type (`core-types/src/oci.rs`) captures registry, principal, scope, revision, and provenance digest:

       registry.example.com/org/extension:1.0.0@sha256:abc123

2. Transfer the artifact to the air-gapped machine via removable media.
3. Install from the local artifact:

       sesame extension install --from-file /media/extensions/extension-1.0.0.wasm
The extension’s content hash (manifest_hash in AgentType::Extension, defined in
core-types/src/security.rs) is verified at load time regardless of how the artifact
was delivered.
Edge and Embedded
Design Intent. This page describes deploying Open Sesame on IoT, embedded, and resource-constrained environments. The headless daemon architecture and ARM64 build targets exist today. Embedded-specific optimizations (reduced memory profiles, static linking, busybox integration) are architectural targets.
Minimal Footprint
Edge deployments use the headless package only. The four headless daemons (daemon-profile,
daemon-secrets, daemon-launcher, daemon-snippets) provide secret management without
GUI dependencies.
For the most constrained environments, only daemon-profile and daemon-secrets are
required. The launcher and snippets daemons are optional if application launching and snippet
management are not needed.
Resource Profile
| Resource | Desktop Default | Edge Target |
|---|---|---|
| LimitMEMLOCK | 64M | 8M–16M (configurable) |
| MemoryMax (profile) | 128M | 32M |
| MemoryMax (secrets) | 256M | 64M |
| IPC channel capacity | 1024 | 64–128 |
| Daemons | 7 | 2–4 |
| Vault count | Multiple profiles | Single profile typical |
The LimitMEMLOCK value in each daemon’s systemd unit controls the maximum memfd_secret
and mlock allocation. Edge devices with limited RAM should reduce this to match available
memory, trading maximum concurrent secret capacity for lower memory pressure.
The IPC channel capacity is configurable via global.ipc.channel_capacity in config.toml
(IpcConfig in core-config/src/schema.rs). Reducing it from the default 1024 lowers
per-subscriber memory usage.
ARM64 Native Builds
Open Sesame builds natively for aarch64-linux. The CI pipeline produces ARM64 .deb
packages and Nix derivations without cross-compilation or QEMU emulation, avoiding the
performance and correctness risks of emulated builds.
Supported Targets
| Target | Status | Use Case |
|---|---|---|
| x86_64-linux | Supported | Desktop, server, cloud |
| aarch64-linux | Supported | Edge, embedded, ARM servers, Raspberry Pi |
Building for ARM64
# Native build on an ARM64 host
cargo build --release --workspace
# Or install from the APT repository (ARM64 packages available)
sudo apt install open-sesame
Embedded Linux Considerations
systemd Environments
On embedded Linux systems running systemd, Open Sesame’s service files work without modification. Adjust resource limits in the service unit overrides:
systemctl --user edit open-sesame-secrets.service
[Service]
LimitMEMLOCK=16M
MemoryMax=64M
Non-systemd Environments (Design Intent)
Embedded systems using busybox init, OpenRC, or runit do not have systemd user services. For these environments, Open Sesame daemons can be started as supervised processes:
# Direct daemon startup (no systemd)
daemon-profile &
daemon-secrets &
The daemons use sd_notify for systemd integration but do not require it. On non-systemd
systems, the watchdog and notify protocols are inactive; the daemons start and run without
them.
Static Linking (Design Intent)
For minimal embedded root filesystems without a full glibc, static linking with musl is an architectural target:
# Target: static musl binary
cargo build --release --target aarch64-unknown-linux-musl
Static binaries eliminate shared library dependencies, simplifying deployment to embedded images. The primary obstacle is SQLCipher’s C dependency, which requires careful static linking configuration.
Secure Boot Chain
Edge devices in high-security deployments benefit from a layered protection model that roots trust in hardware.
TPM Integration (Design Intent)
The MachineBindingType::TpmBound variant (core-types/src/security.rs) represents
TPM-sealed key material:
MachineBinding {
binding_hash: BLAKE3(tpm_sealed_key || installation_id),
binding_type: TpmBound,
}
TPM-bound installations tie the vault master key to a specific device’s TPM PCR state. If the device’s boot chain is modified (firmware update, rootkit), the PCR values change and the TPM refuses to unseal the key, preventing vault unlock on a compromised device.
Self-Encrypting Drives (SED)
On devices with SED-capable storage, the layered protection model is:
Layer 1: SED hardware encryption (transparent, always-on)
Layer 2: SQLCipher vault encryption (application-level, per-profile keys)
Layer 3: memfd_secret(2) (runtime memory protection, kernel direct-map removal)
Each layer is independent. SED protects data at rest at the storage level. SQLCipher
protects vault files even if the drive is mounted on another system. memfd_secret
protects decrypted secrets in memory even if the OS kernel is partially compromised.
memfd_secret on Edge Kernels
Many embedded Linux distributions use custom or vendor kernels that may not include
CONFIG_SECRETMEM=y. Before deploying Open Sesame to edge devices, verify kernel support:
grep CONFIG_SECRETMEM /boot/config-$(uname -r)
# or, if /boot/config is not available:
zcat /proc/config.gz 2>/dev/null | grep SECRETMEM
For Yocto/Buildroot-based images, add CONFIG_SECRETMEM=y to the kernel defconfig. For
vendor kernels where this is not possible, Open Sesame operates in fallback mode with
mlock(2), logged at ERROR level.
Edge-Specific Configuration
# config.toml for edge deployment
[global]
default_profile = "device"
[global.ipc]
channel_capacity = 64
slow_subscriber_timeout_ms = 2000
[global.logging]
level = "warn" # Reduce log volume on constrained storage
json = true # Structured output for remote collection
journald = false # May not have journald on embedded systems
Connectivity Patterns
Edge devices are often intermittently connected. Open Sesame’s offline-first design means all core operations (vault unlock, secret access, profile switching) work without network connectivity. Network-dependent operations are limited to:
- SSH agent forwarding for remote unlock (requires SSH connection).
- Extension fetching from OCI registries (can be pre-staged).
- Log shipping to central aggregator (buffered locally, forwarded when connected).
- Audit chain export (manual transfer via removable media if never connected).
For permanently air-gapped edge devices, see the Air-Gapped Environments documentation.
Identity Model
Open Sesame’s identity model provides globally unique, collision-resistant identifiers for installations, organizations, profiles, and vaults. The model supports federation across devices and organizations without a central identity provider.
InstallationId
Every Open Sesame installation has a unique identity, defined as InstallationId in
core-types/src/security.rs:
#![allow(unused)]
fn main() {
pub struct InstallationId {
pub id: Uuid, // UUID v4, generated at sesame init
pub org_ns: Option<OrganizationNamespace>, // Enterprise namespace
pub namespace: Uuid, // Derived, for deterministic ID generation
pub machine_binding: Option<MachineBinding>, // Hardware attestation
}
}
The id field is a UUID v4 generated once at sesame init and persisted in
~/.config/pds/installation.toml (InstallationConfig in
core-config/src/schema_installation.rs). It never changes unless the user explicitly
re-initializes.
The namespace field is derived deterministically:
namespace = uuid5(org_ns.namespace || PROFILE_NS, "install:{id}")
This derived namespace seeds deterministic ProfileId generation, ensuring that the same
profile name on two different installations produces different profile IDs.
Properties
| Property | Value |
|---|---|
| Generation | UUID v4 for InstallationId.id; UUID v7 via Uuid::now_v7() for ProfileId, AgentId, and other define_id! types |
| Persistence | ~/.config/pds/installation.toml |
| Scope | One per user per machine |
| Collision resistance | 122 bits of randomness (UUID v4) |
Organization Namespace
The OrganizationNamespace (core-types/src/security.rs) groups installations by
organization:
#![allow(unused)]
fn main() {
pub struct OrganizationNamespace {
pub domain: String, // e.g., "braincraft.io"
pub namespace: Uuid, // uuid5(NAMESPACE_URL, domain)
}
}
The namespace UUID is derived deterministically from the domain string using UUID v5 with the URL namespace. Any installation that specifies the same organization domain produces the same namespace UUID, enabling cross-installation identity correlation without a central registry.
Enrollment
sesame init --org braincraft.io
This writes the OrgConfig to installation.toml:
[org]
domain = "braincraft.io"
namespace = "a1b2c3d4-..." # uuid5(NAMESPACE_URL, "braincraft.io")
Identity Hierarchy
The identity model forms a four-level hierarchy:
Organization (OrganizationNamespace)
+-- Installation (InstallationId)
+-- Profile (TrustProfileName -> ProfileId)
+-- Vault (SQLCipher DB, 1:1 with profile)
Each level narrows scope:
- Organization – optional grouping by domain. Two installations in the same org share a namespace for deterministic ID derivation.
- Installation – a single `sesame init` on a single machine for a single user. Identified by UUID v4.
- Profile – a trust context (e.g., `work`, `personal`, `ci-production`). The `TrustProfileName` type (`core-types/src/profile.rs`) is a validated, path-safe string: ASCII alphanumeric plus hyphens and underscores, max 64 bytes, no path traversal. The `ProfileId` is a UUID v7 generated via the `define_id!` macro (`core-types/src/ids.rs`).
- Vault – a SQLCipher database scoped to one profile. The vault file path is `vaults/{profile_name}.db`. The encryption key is derived via `BLAKE3 derive_key("pds v2 vault-key {profile}")` from the profile's master key.
Device Identity
An installation can optionally be bound to a specific machine via MachineBinding
(core-types/src/security.rs):
#![allow(unused)]
fn main() {
pub struct MachineBinding {
pub binding_hash: [u8; 32], // BLAKE3 hash of machine identity material
pub binding_type: MachineBindingType, // MachineId or TpmBound
}
}
Two binding types are defined:
| Type | Source | Portability |
|---|---|---|
| MachineId | BLAKE3 hash of /etc/machine-id + installation ID | Survives reboots, not disk clones |
| TpmBound | TPM-sealed key material | Survives reboots, tied to hardware TPM |
Machine binding serves two purposes:
1. Attestation. The `Attestation::DeviceAttestation` variant (`core-types/src/security.rs`) includes a `MachineBinding` and a verification timestamp. This allows federation peers to verify that an identity claim originates from a specific physical device.
2. Migration detection. If an `installation.toml` is copied to a different machine, the machine binding hash does not match `/etc/machine-id` on the new host. The system detects this and can require re-attestation.
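Migration detection reduces to recomputing the binding hash on the current host and comparing it with the stored value. The sketch below is std-only and uses `DefaultHasher` as a stand-in for BLAKE3; function names and inputs are illustrative.

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Stand-in for BLAKE3(machine identity material || installation id).
fn binding_hash(machine_id: &str, installation_id: &str) -> u64 {
    let mut h = DefaultHasher::new();
    machine_id.hash(&mut h);
    installation_id.hash(&mut h);
    h.finish()
}

// Recompute the hash from the current host and compare with the stored binding.
fn binding_matches(stored: u64, current_machine_id: &str, installation_id: &str) -> bool {
    binding_hash(current_machine_id, installation_id) == stored
}

fn main() {
    let stored = binding_hash("machine-aaaa", "install-1");
    assert!(binding_matches(stored, "machine-aaaa", "install-1"));
    // installation.toml copied to a different host: hash mismatch,
    // so the system can require re-attestation.
    assert!(!binding_matches(stored, "machine-bbbb", "install-1"));
}
```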
Cross-Device Identity Correlation
Same Organization
Two installations in the same organization (same org.domain) share a derived namespace.
The ProfileRef type (core-types/src/profile.rs) fully qualifies a profile across
installations:
#![allow(unused)]
fn main() {
pub struct ProfileRef {
pub name: TrustProfileName,
pub id: ProfileId,
pub installation: InstallationId,
}
}
A ProfileRef uniquely identifies a profile in a federation context. Two devices with the
same organization and the same profile name (e.g., `work`) produce different ProfileRef
values because their installation.id fields differ.
Cross-Organization
Installations in different organizations have different namespace derivations. Cross-organization identity correlation requires explicit trust establishment (out-of-band key exchange or mutual attestation), not namespace collision.
ID Generation
The define_id! macro in core-types/src/ids.rs generates typed ID wrappers over UUID v7:
#![allow(unused)]
fn main() {
define_id!(ProfileId, "prof");
define_id!(AgentId, "agent");
define_id!(DaemonId, "dmon");
define_id!(ExtensionId, "ext");
// ... and others
}
Each ID type:
- Wraps a `Uuid` (UUID v7 via `Uuid::now_v7()`).
- Displays with a type prefix (e.g., `prof-01234567-...`, `agent-89abcdef-...`).
- Implements `Serialize`/`Deserialize` as a transparent UUID.
- Is `Copy`, `Eq`, `Hash`, and `Ord`.
UUID v7 is time-ordered, so IDs generated later sort after IDs generated earlier. This provides natural chronological ordering for audit logs and event streams without a separate timestamp field.
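The ordering property follows from the v7 layout: a 48-bit millisecond timestamp occupies the most significant bits, so plain integer comparison yields chronological order. The sketch below is illustrative only (it is not the `uuid` crate and omits the version/variant bits).

```rust
// Illustrative v7-style layout: 48-bit millisecond timestamp in the high bits,
// remaining low bits filled with entropy. Not the real uuid crate encoding.
fn v7_like(millis: u64, entropy: u128) -> u128 {
    ((millis as u128) << 80) | (entropy & ((1u128 << 80) - 1))
}

fn main() {
    let earlier = v7_like(1_700_000_000_000, 0xdead_beef);
    let later = v7_like(1_700_000_000_001, 0x0000_0001);
    // The later timestamp sorts after the earlier one regardless of entropy bits,
    // which is what gives audit logs their natural chronological order.
    assert!(earlier < later);
}
```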
Installation Configuration on Disk
The InstallationConfig struct (core-config/src/schema_installation.rs) is the
TOML-serialized form of the installation identity:
# ~/.config/pds/installation.toml
id = "a1b2c3d4-e5f6-7890-abcd-ef1234567890"
namespace = "fedcba98-7654-3210-fedc-ba9876543210"
[org]
domain = "braincraft.io"
namespace = "12345678-abcd-ef01-2345-6789abcdef01"
[machine_binding]
binding_hash = "a1b2c3d4e5f6..." # hex-encoded BLAKE3 hash
binding_type = "machine-id"
The org and machine_binding sections are optional. A personal desktop installation
without enterprise management or hardware binding omits both:
id = "a1b2c3d4-e5f6-7890-abcd-ef1234567890"
namespace = "fedcba98-7654-3210-fedc-ba9876543210"
Agent Identity
Open Sesame models every entity that interacts with the system – human operators, AI agents, system services, and WASI extensions – as an agent with a typed identity, local process binding, and capability-scoped session.
AgentId and AgentType
The AgentId type (core-types/src/ids.rs) is a UUID v7 wrapper generated via
define_id!(AgentId, "agent"). Each agent receives a unique identifier at registration
time, displayed with the agent- prefix (e.g., agent-01941c8a-...).
The AgentType enum (core-types/src/security.rs) classifies what kind of entity the
agent is:
#![allow(unused)]
fn main() {
pub enum AgentType {
Human,
AI { model_family: String },
Service { unit: String },
Extension { manifest_hash: [u8; 32] },
}
}
| Variant | Description | Example |
|---|---|---|
| Human | Interactive operator with keyboard/mouse | Desktop user |
| AI { model_family } | LLM-based agent, API-driven | model_family: "claude-4" |
| Service { unit } | systemd service or daemon process | unit: "daemon-launcher.service" |
| Extension { manifest_hash } | WASI extension, content-addressed | SHA-256 of the WASM module |
AgentType is descriptive metadata, not a trust tier. An AI agent with proper attestations
and a delegation chain can have higher effective trust than a human agent without a security
key. Trust is evaluated via TrustVector, not AgentType.
Local Process Identity
The LocalAgentId enum (core-types/src/security.rs) binds an agent to a local process:
#![allow(unused)]
fn main() {
pub enum LocalAgentId {
UnixUid(u32),
ProcessIdentity { uid: u32, process_name: String },
SystemdUnit(String),
WasmHash([u8; 32]),
}
}
| Variant | Verification | Use Case |
|---|---|---|
| UnixUid | UCred from Unix domain socket | Minimal identity, CLI tools |
| ProcessIdentity | UCred + /proc/{pid}/exe inspection | Named processes |
| SystemdUnit | systemd unit name lookup | Daemon services |
| WasmHash | Content hash of WASM module bytes | Sandboxed extensions |
Local agent identity is established during IPC connection setup. When a process connects to the Noise IK bus, the server extracts UCred (pid, uid, gid) from the Unix domain socket and looks up the connecting process’s identity.
AgentIdentity
The AgentIdentity struct (core-types/src/security.rs) is the complete identity record
for an agent during a session:
#![allow(unused)]
fn main() {
pub struct AgentIdentity {
pub id: AgentId,
pub agent_type: AgentType,
pub local_id: LocalAgentId,
pub installation: InstallationId,
pub attestations: Vec<Attestation>,
pub session_scope: CapabilitySet,
pub delegation_chain: Vec<DelegationLink>,
}
}
| Field | Purpose |
|---|---|
id | Globally unique agent identifier (UUID v7) |
agent_type | Classification: Human, AI, Service, Extension |
local_id | Process-level binding on this machine |
installation | Which Open Sesame installation this agent belongs to |
attestations | Evidence accumulated during this session |
session_scope | Effective capabilities for this session |
delegation_chain | Chain of authority from the root delegator |
AgentMetadata
The AgentMetadata struct (core-types/src/security.rs) describes an agent’s type and the
attestation methods available to it:
#![allow(unused)]
fn main() {
pub struct AgentMetadata {
pub agent_type: AgentType,
pub available_attestation_methods: Vec<AttestationMethod>,
}
}
Available attestation methods vary by agent type:
| Agent Type | Typical Attestation Methods |
|---|---|
| Human | MasterPassword, SecurityKey, DeviceAttestation |
| AI | Delegation, ProcessAttestation |
| Service | ProcessAttestation, DeviceAttestation |
| Extension | ProcessAttestation (WASM hash verification) |
The AttestationMethod enum (core-types/src/security.rs) defines the methods:
- `MasterPassword` – password-based, for human agents.
- `SecurityKey` – FIDO2/WebAuthn hardware token.
- `ProcessAttestation` – process identity verification via `/proc` inspection.
- `Delegation` – authority delegated from another agent.
- `DeviceAttestation` – machine-level binding (TPM, machine-id).
Attestation
The Attestation enum (core-types/src/security.rs) captures the evidence used to verify
an agent’s identity claim. Each variant records the specific data for one verification
method:
| Variant | Evidence |
|---|---|
| UCred | pid, uid, gid from Unix domain socket |
| NoiseIK | X25519 public key, registry generation counter |
| MasterPassword | Timestamp of successful verification |
| SecurityKey | FIDO2 credential ID, verification timestamp |
| ProcessAttestation | pid, SHA-256 of executable, uid |
| Delegation | Delegator AgentId, granted CapabilitySet, chain depth |
| DeviceAttestation | MachineBinding, verification timestamp |
| RemoteAttestation | Remote InstallationId, nested remote device attestation |
| HeartbeatRenewal | Original attestation type, renewal attestation, renewal timestamp |
Multiple attestations compose to strengthen trust. For example, UCred + MasterPassword
produces a higher TrustLevel in the TrustVector than either alone. The attestations
vector on AgentIdentity accumulates all attestation evidence for the current session.
Machine Agents
Service accounts and AI agents operate as machine agents with restricted capabilities. A
machine agent’s AgentIdentity is established as follows:
1. Registration. The agent is registered with an `AgentId`, `AgentType`, and initial `CapabilitySet`. For example, a CI runner agent might receive:

       CapabilitySet { SecretRead { key_pattern: Some("ci/*") }, SecretList }

2. Attestation. At connection time, the agent presents attestation evidence. For a `Service` agent, this is `Attestation::ProcessAttestation`:

       Attestation::ProcessAttestation {
           pid: 12345,
           exe_hash: <SHA-256 of /usr/bin/ci-runner>,
           uid: 1001,
       }

3. Session scope. The agent's `session_scope` is the intersection of its registered capabilities and any delegation grant's scope. The agent cannot exceed the capabilities it was registered with, and delegation further narrows scope.
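The session-scope computation is a plain set intersection. A minimal sketch, modeling capabilities as strings over `HashSet` (the real `CapabilitySet` is a typed lattice, so the names here are illustrative):

```rust
use std::collections::HashSet;

// Session scope = registered capabilities ∩ delegation grant scope.
fn session_scope(registered: &HashSet<String>, grant: &HashSet<String>) -> HashSet<String> {
    registered.intersection(grant).cloned().collect()
}

fn main() {
    let registered: HashSet<String> =
        ["SecretRead:ci/*", "SecretList"].iter().map(|s| s.to_string()).collect();
    let grant: HashSet<String> =
        ["SecretRead:ci/*", "SecretWrite"].iter().map(|s| s.to_string()).collect();
    let scope = session_scope(&registered, &grant);
    // The agent keeps only what both sets allow: it was never registered for
    // SecretWrite, and the grant did not include SecretList.
    assert!(scope.contains("SecretRead:ci/*"));
    assert!(!scope.contains("SecretWrite"));
    assert!(!scope.contains("SecretList"));
}
```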
Agent Lifecycle
Registration
Agent registration creates an AgentId and associates it with an AgentType and initial
capability set. For built-in daemons, registration is automatic at IPC bus connection via
the clearance registry (core-ipc/src/registry.rs). Each daemon’s X25519 public key maps
to a DaemonId and SecurityLevel.
Key Rotation (Design Intent)
The clearance registry maintains a registry_generation counter. When an agent’s X25519
key pair is rotated:
- The new public key is registered with an incremented generation.
- The old public key is revoked (removed from the registry).
- Peers that cached the old key receive a registry update.
The Attestation::NoiseIK variant records the registry_generation at the time of
verification, enabling peers to detect stale attestations.
Revocation
Revoking an agent removes its public key from the clearance registry. Subsequent connection attempts with the revoked key are rejected. Active sessions using the revoked key continue until the next re-authentication interval.
Human-to-Agent Delegation
A human operator can delegate capabilities to a machine agent via DelegationGrant
(core-types/src/security.rs). The delegation:
- Narrows scope: the delegatee's effective capabilities are `delegator_scope.intersection(grant.scope)`.
- Is time-bounded: `initial_ttl` sets the maximum grant lifetime.
- Requires heartbeat: `heartbeat_interval` sets how often the delegatee must renew.
- Is signed: Ed25519 signature over grant fields prevents tampering.
- Records depth: `DelegationLink.depth` tracks how many hops from the root delegator (0 = direct from human).
See the Delegation documentation for the full delegation model.
Trust Evaluation
Agent trust is not determined by AgentType alone. The TrustVector
(core-types/src/security.rs) evaluates trust across multiple dimensions:
#![allow(unused)]
fn main() {
pub struct TrustVector {
pub authn_strength: TrustLevel, // None < Low < Medium < High < Hardware
pub authz_freshness: Duration, // Time since last authorization refresh
pub delegation_depth: u8, // 0 = direct human
pub device_posture: f64, // 0.0 = unknown, 1.0 = fully attested
pub network_exposure: NetworkTrust, // Local < Encrypted < Onion < PublicInternet
pub agent_type: AgentType, // Metadata, not a trust tier
}
}
Authorization decisions consume the TrustVector holistically. A Service agent on a
local Unix socket with Hardware-level authentication and zero delegation depth may be
trusted more than a Human agent on an encrypted network with Medium authentication
and a stale authorization token.
Delegation
Open Sesame implements capability delegation via the DelegationGrant type, enabling agents
to transfer a subset of their capabilities to other agents with time-bounded, scope-narrowed,
cryptographically signed grants.
DelegationGrant
The DelegationGrant struct is defined in core-types/src/security.rs:
#![allow(unused)]
fn main() {
pub struct DelegationGrant {
pub delegator: AgentId,
pub scope: CapabilitySet,
pub initial_ttl: Duration,
pub heartbeat_interval: Duration,
pub nonce: [u8; 16],
pub point_of_use_filter: Option<OciReference>,
pub signature: Vec<u8>, // Ed25519 signature over the grant fields
}
}
| Field | Purpose |
|---|---|
delegator | The AgentId of the agent issuing the grant |
scope | Maximum capabilities the delegatee may exercise |
initial_ttl | Time-to-live from grant creation; the grant expires after this duration |
heartbeat_interval | How often the delegatee must renew; missed heartbeat revokes the grant |
nonce | 16-byte anti-replay nonce, unique per grant |
point_of_use_filter | Optional OCI reference restricting where the grant can be used |
signature | Ed25519 signature over all other fields by the delegator |
Scope Narrowing
Delegation enforces a fundamental invariant: a delegatee can never exceed its delegator’s capabilities. The delegatee’s effective capabilities are computed as:
effective = delegator_scope.intersection(grant.scope)
The CapabilitySet type (core-types/src/security.rs) implements lattice operations:
| Operation | Method | Semantics |
|---|---|---|
| Union | a.union(b) | All capabilities from both sets |
| Intersection | a.intersection(b) | Only capabilities in both sets |
| Subset test | a.is_subset(b) | True if every capability in a is in b |
| Superset test | a.is_superset(b) | True if every capability in b is in a |
| Empty set | CapabilitySet::empty() | No capabilities |
| Full set | CapabilitySet::all() | All non-parameterized capabilities |
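The table's operations behave like ordinary set operations. A minimal std-only sketch over `HashSet<&str>` (the real `CapabilitySet` is a typed lattice; the `caps` helper is hypothetical):

```rust
use std::collections::HashSet;

// Hypothetical helper: build a capability set from string names.
fn caps(items: &[&'static str]) -> HashSet<&'static str> {
    items.iter().copied().collect()
}

fn main() {
    let a = caps(&["SecretRead", "SecretList"]);
    let b = caps(&["SecretList", "Unlock"]);

    let union_set: HashSet<_> = a.union(&b).copied().collect();
    let inter_set: HashSet<_> = a.intersection(&b).copied().collect();

    assert_eq!(union_set.len(), 3);               // all capabilities from both sets
    assert_eq!(inter_set, caps(&["SecretList"])); // only capabilities in both
    assert!(inter_set.is_subset(&a));             // intersection never widens a set
    assert!(union_set.is_superset(&b));           // union contains everything from b
}
```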
Example
A human operator holds { Admin, SecretRead, SecretWrite, SecretList, Unlock }. The
operator delegates to a CI agent with scope
{ SecretRead { key_pattern: Some("ci/*") }, SecretList }:
Delegator scope: { Admin, SecretRead, SecretWrite, SecretList, Unlock }
Grant scope: { SecretRead { key_pattern: "ci/*" }, SecretList }
Effective: { SecretRead { key_pattern: "ci/*" }, SecretList }
The CI agent can read secrets matching ci/* and list secret keys. It cannot write secrets,
unlock vaults, or perform admin operations, even though the delegator holds those
capabilities.
Parameterized Capabilities
Several capabilities accept optional parameters that further restrict scope:
#![allow(unused)]
fn main() {
Capability::SecretRead { key_pattern: Option<String> }
Capability::SecretWrite { key_pattern: Option<String> }
Capability::SecretDelete { key_pattern: Option<String> }
Capability::Delegate { max_depth: u8, scope: Box<CapabilitySet> }
}
A SecretRead with key_pattern: None permits reading any secret. A SecretRead with
key_pattern: Some("ci/*") restricts reads to keys matching the glob pattern. Delegation
intersection treats parameterized capabilities as more restrictive: the result uses the
narrower pattern.
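The narrowing rule for optional patterns can be sketched as a small match: `None` means unrestricted, so intersecting with any concrete pattern keeps the restriction. This is a simplified sketch; a full implementation would intersect the globs themselves, and the function name is illustrative.

```rust
// Intersection of optional key patterns: None = unrestricted.
fn narrow(a: Option<&str>, b: Option<&str>) -> Option<String> {
    match (a, b) {
        (None, None) => None, // both unrestricted
        (Some(p), None) | (None, Some(p)) => Some(p.to_string()), // keep the restriction
        (Some(p), Some(q)) if p == q => Some(p.to_string()),
        // Both restricted and different: a full implementation would compute the
        // glob intersection; this sketch simply keeps the second (grant) pattern.
        (Some(_), Some(q)) => Some(q.to_string()),
    }
}

fn main() {
    // Delegator may read anything; grant restricts to ci/* -> effective is ci/*.
    assert_eq!(narrow(None, Some("ci/*")), Some("ci/*".to_string()));
    assert_eq!(narrow(None, None), None);
}
```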
Time-Bounded Grants
Every DelegationGrant has two temporal controls:
initial_ttl
The grant is valid for initial_ttl from creation time. After this duration, the grant
expires regardless of heartbeat activity. This prevents indefinite capability transfer.
heartbeat_interval
The delegatee must renew the grant at intervals not exceeding heartbeat_interval. A
missed heartbeat revokes the grant. This provides continuous verification that the delegatee
is still active and authorized.
The Attestation::HeartbeatRenewal variant (core-types/src/security.rs) records heartbeat
events:
#![allow(unused)]
fn main() {
Attestation::HeartbeatRenewal {
original_attestation_type: AttestationType,
renewal_attestation: Box<Attestation>,
renewed_at: u64,
}
}
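Both temporal controls can be sketched as a single validity predicate over elapsed durations. The struct and function names below are illustrative, not the actual grant-checking API:

```rust
use std::time::Duration;

// Illustrative temporal fields of a grant.
struct GrantTimes {
    initial_ttl: Duration,        // maximum lifetime from grant creation
    heartbeat_interval: Duration, // maximum gap between renewals
}

fn grant_valid(g: &GrantTimes, age: Duration, since_last_heartbeat: Duration) -> bool {
    // Expired grants are invalid regardless of heartbeat activity,
    // and a missed heartbeat revokes an otherwise-live grant.
    age <= g.initial_ttl && since_last_heartbeat <= g.heartbeat_interval
}

fn main() {
    let g = GrantTimes {
        initial_ttl: Duration::from_secs(3600),
        heartbeat_interval: Duration::from_secs(60),
    };
    assert!(grant_valid(&g, Duration::from_secs(120), Duration::from_secs(30)));
    assert!(!grant_valid(&g, Duration::from_secs(4000), Duration::from_secs(30))); // TTL expired
    assert!(!grant_valid(&g, Duration::from_secs(120), Duration::from_secs(90)));  // missed heartbeat
}
```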
Delegation Chains
Grants can be chained: agent A delegates to agent B, which delegates to agent C. The
DelegationLink struct tracks position in the chain:
#![allow(unused)]
fn main() {
pub struct DelegationLink {
pub grant: DelegationGrant,
pub depth: u8, // 0 = direct from human operator
}
}
The AgentIdentity.delegation_chain field (core-types/src/security.rs) stores the full
chain of DelegationLink entries from the root delegator to the current agent.
Chain Depth Control
The Capability::Delegate variant includes a max_depth field:
#![allow(unused)]
fn main() {
Capability::Delegate {
max_depth: u8,
scope: Box<CapabilitySet>,
}
}
max_depth limits how many times a delegation can be re-delegated. A grant with
max_depth: 2 allows:
Human (depth 0) -> Agent A (depth 1) -> Agent B (depth 2)
Agent B cannot further delegate because depth 2 equals max_depth. This prevents unbounded
delegation chains that would make audit trails difficult to follow.
Chain Verification
To verify a delegation chain:
1. Start from the root delegator (depth 0). Verify the root is a known, trusted agent (typically `AgentType::Human`).
2. For each link in the chain:
   - Verify the `signature` over the `DelegationGrant` fields using the delegator's Ed25519 public key.
   - Verify that the grant has not expired (`initial_ttl` not exceeded).
   - Verify that the heartbeat is current (`heartbeat_interval` not exceeded).
   - Verify that `depth` does not exceed the `Delegate.max_depth` from the delegator's capability.
   - Compute effective scope as `previous_scope.intersection(grant.scope)`.
3. The final effective scope is the intersection of all grants in the chain.
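The depth check and scope folding from these steps can be sketched as a single walk over the chain. Signature, TTL, and heartbeat verification are elided; the types and names below are illustrative, with capabilities modeled as strings.

```rust
use std::collections::HashSet;

// Illustrative chain link: just the grant scope and depth.
struct Link {
    scope: HashSet<String>,
    depth: u8,
}

// Fold the chain: reject over-deep links, intersect scopes at each hop.
fn effective_scope(
    root_scope: &HashSet<String>,
    chain: &[Link],
    max_depth: u8,
) -> Option<HashSet<String>> {
    let mut scope = root_scope.clone();
    for link in chain {
        if link.depth > max_depth {
            return None; // chain exceeds the Delegate.max_depth limit
        }
        // Each link can only narrow the scope, never widen it.
        scope = scope.intersection(&link.scope).cloned().collect();
    }
    Some(scope)
}

fn main() {
    let root: HashSet<String> =
        ["SecretRead", "SecretWrite", "SecretList"].iter().map(|s| s.to_string()).collect();
    let chain = vec![
        Link { scope: ["SecretRead", "SecretList"].iter().map(|s| s.to_string()).collect(), depth: 1 },
        Link { scope: ["SecretRead"].iter().map(|s| s.to_string()).collect(), depth: 2 },
    ];
    let eff = effective_scope(&root, &chain, 2).unwrap();
    let expected: HashSet<String> = ["SecretRead".to_string()].into_iter().collect();
    assert_eq!(eff, expected);
    // Depth 2 exceeds max_depth 1: verification fails.
    assert!(effective_scope(&root, &chain, 1).is_none());
}
```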
Monotonic Narrowing
Each link in the chain can only narrow capabilities, never widen them. The intersection operation guarantees:
scope_n <= scope_{n-1} <= ... <= scope_0
where <= means “is a subset of.” This is a structural property of the lattice:
a.intersection(b).is_subset(a) is always true.
Anti-Replay
Each DelegationGrant contains a 16-byte nonce field. The nonce must be unique across all
grants from a given delegator. A delegation verifier maintains a set of observed nonces and
rejects grants with previously-seen nonces. This prevents replay attacks where a revoked or
expired grant is re-presented.
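The verifier's nonce tracking reduces to a set membership test. A minimal std-only sketch (the `ReplayGuard` type is hypothetical; the real verifier would also scope nonces per delegator and persist them):

```rust
use std::collections::HashSet;

// Per-delegator nonce tracker (illustrative).
struct ReplayGuard {
    seen: HashSet<[u8; 16]>,
}

impl ReplayGuard {
    fn new() -> Self {
        ReplayGuard { seen: HashSet::new() }
    }

    // Returns true the first time a nonce is observed, false on replay.
    fn accept(&mut self, nonce: [u8; 16]) -> bool {
        self.seen.insert(nonce)
    }
}

fn main() {
    let mut guard = ReplayGuard::new();
    let nonce = [7u8; 16];
    assert!(guard.accept(nonce));  // first presentation accepted
    assert!(!guard.accept(nonce)); // re-presented grant rejected
}
```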
Point-of-Use Filter
The point_of_use_filter field is an optional OciReference (core-types/src/oci.rs) that
restricts where the delegation can be used:
#![allow(unused)]
fn main() {
pub struct OciReference {
pub registry: String,
pub principal: String,
pub scope: String,
pub revision: String,
pub provenance: Option<String>,
}
}
When present, the delegation is only valid in the context of the specified OCI artifact. This is intended for extension-scoped delegations: a grant that authorizes an extension to read secrets only when running as part of a specific, content-addressed WASM module.
The Delegate Capability
The Capability::Delegate variant is itself a capability that must be held to issue
delegations:
#![allow(unused)]
fn main() {
Capability::Delegate {
max_depth: u8,
scope: Box<CapabilitySet>,
}
}
An agent without Capability::Delegate in its session_scope cannot create
DelegationGrant entries. The scope field within the Delegate capability limits what
the agent can delegate, and max_depth limits the chain length. The ability to delegate is
itself subject to delegation narrowing.
Revocation
Delegation grants are revoked in the following scenarios:
- TTL expiry. The `initial_ttl` has elapsed since grant creation.
- Missed heartbeat. The delegatee did not renew within `heartbeat_interval`.
- Delegator revocation. The delegator explicitly revokes the grant (removes it from the active grant set).
- Chain invalidation. Any link in the delegation chain is revoked, which invalidates all downstream links.
Revocation is immediate and does not require the delegatee’s cooperation. The delegatee’s next operation that requires the revoked capability is denied.
Multi-Cluster Federation
Design Intent. This page describes cross-cluster secret synchronization between Open Sesame installations. The primitives referenced below (InstallationId, ProfileRef, OrganizationNamespace, CryptoConfig, Noise IK transport) exist in the type system and IPC layer today. The synchronization protocol, conflict resolution logic, and selective sync policies are architectural targets.
Overview
Multi-cluster federation enables multiple Open Sesame installations to share secrets and profiles across trust boundaries. Each installation operates independently and maintains full functionality without connectivity to peers. Synchronization is an additive capability layered on top of the existing single-installation model.
Prerequisites
Federated installations must share an OrganizationNamespace (core-types/src/security.rs).
Installations in different organizations cannot federate without explicit trust establishment.
The shared org namespace ensures deterministic profile ID derivation, so the same profile name
on different installations can be correlated.
Each peer must meet the minimum_peer_profile requirement from CryptoConfig
(core-types/src/crypto.rs):
pub struct CryptoConfig {
    // ...
    pub minimum_peer_profile: CryptoProfile, // LeadingEdge, GovernanceCompatible, or Custom
}
A peer advertising GovernanceCompatible algorithms (PBKDF2-SHA256, HKDF-SHA256, AES-GCM,
SHA-256) is rejected by an installation requiring LeadingEdge unless the policy explicitly
allows it.
Profile-Scoped Synchronization
Synchronization is scoped to individual trust profiles. The ProfileRef type
(core-types/src/profile.rs) fully qualifies a profile in a federation context:
pub struct ProfileRef {
    pub name: TrustProfileName,
    pub id: ProfileId,
    pub installation: InstallationId,
}
Two installations with the profile work have different ProfileRef values because their
InstallationId fields differ. Federation maps these profiles to each other by matching on
TrustProfileName within the shared org namespace.
Selective Sync Policies (Design Intent)
Not all secrets in a profile need to be synchronized. Selective sync policies control which secrets replicate:
Sync Policy for profile "work":
- sync: secrets matching "shared/*"
- exclude: secrets matching "local/*"
- direction: bidirectional
- peers: [installation-uuid-1, installation-uuid-2]
Secrets matching the local/* pattern remain on the originating installation. Secrets
matching shared/* replicate to specified peers. The policy is configured per-profile and
per-installation.
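The pattern matching implied by the policy above can be sketched directly. This assumes only trailing-star globs of the form `prefix/*`; the hypothetical `should_sync` helper is illustrative, not the real policy engine:

```rust
/// Minimal sketch of the "prefix/*" patterns used by selective sync policies.
/// Only trailing-star globs are handled; real policies may be richer.
fn matches(pattern: &str, key: &str) -> bool {
    match pattern.strip_suffix('*') {
        Some(prefix) => key.starts_with(prefix),
        None => pattern == key,
    }
}

/// A key replicates when it matches the sync pattern and no exclude pattern.
fn should_sync(key: &str, sync: &str, exclude: &str) -> bool {
    matches(sync, key) && !matches(exclude, key)
}

fn main() {
    // "shared/*" replicates; "local/*" stays on the originating installation.
    assert!(should_sync("shared/api-key", "shared/*", "local/*"));
    assert!(!should_sync("local/scratch-token", "shared/*", "local/*"));
}
```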
Conflict Resolution (Design Intent)
When two installations modify the same secret independently (e.g., during a network partition), a conflict arises at synchronization time.
Last-Writer-Wins with Vector Clocks
Each secret carries a vector clock with one entry per installation that has modified it:
Secret "shared/api-key":
Installation A: version 3
Installation B: version 2
On synchronization:
- No conflict. One vector clock strictly dominates the other (all entries greater or equal, at least one strictly greater). The dominating version wins.
- Concurrent writes. Neither vector clock dominates. This is a true conflict. Resolution strategy:
  - Default: last-writer-wins. The write with the latest wall-clock timestamp wins. The losing version is preserved in a conflict log for manual review.
  - Configurable. Future policy options include: reject (require manual resolution), merge (for structured secret formats), or defer to a specific installation.
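Dominance checking is a small amount of code. The following sketch represents a vector clock as a map from installation name to version, with missing entries treated as zero; it is illustrative only:

```rust
use std::collections::HashMap;

type Clock = HashMap<&'static str, u64>;

/// True when `a` dominates `b`: every entry of `a` is >= the corresponding
/// entry of `b` (missing entries count as 0) and at least one is strictly
/// greater.
fn dominates(a: &Clock, b: &Clock) -> bool {
    let ge_all = b.iter().all(|(k, vb)| a.get(k).copied().unwrap_or(0) >= *vb);
    let gt_any = a.iter().any(|(k, va)| *va > b.get(k).copied().unwrap_or(0));
    ge_all && gt_any
}

fn main() {
    let a = Clock::from([("install_A", 3), ("install_B", 2)]);
    let b = Clock::from([("install_A", 2), ("install_B", 2)]);
    // a strictly dominates b: no conflict, a's version wins.
    assert!(dominates(&a, &b));

    let c = Clock::from([("install_A", 1), ("install_B", 3)]);
    // Neither clock dominates the other: a true concurrent-write conflict.
    assert!(!dominates(&a, &c) && !dominates(&c, &a));
}
```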
Conflict Log
All conflicts are recorded in the audit chain with both versions, their vector clocks, and
the resolution applied. The sesame audit verify command can surface unreviewed conflicts.
Split-Brain Handling (Design Intent)
When network connectivity between peers is lost, each installation continues operating independently. This is the normal mode of operation for Open Sesame – the system is designed for offline-first use.
Partition Behavior
During a partition:
- Each installation reads and writes its local vault without restriction.
- No synchronization occurs.
- The audit chain continues recording local operations.
Convergence on Reconnect
When connectivity is restored:
- Each peer advertises its vector clock state for each synchronized profile.
- Peers exchange only the secrets that have changed since the last synchronization point.
- Conflicts are resolved per the configured policy.
- Audit chains from both peers are cross-referenced to build a unified timeline.
The convergence protocol is idempotent: re-running synchronization after a successful sync produces no changes.
Encrypted Replication
All synchronization traffic between peers uses the Noise IK protocol, the same transport used for local IPC. This provides:
| Property | Mechanism |
|---|---|
| Mutual authentication | X25519 static keys, verified against clearance registry |
| Forward secrecy | Per-session Noise IK ephemeral keys |
| Encryption | ChaChaPoly (default) or AES-256-GCM (NoiseCipher in core-types/src/crypto.rs) |
| Integrity | Noise protocol MAC |
| Replay protection | Noise protocol nonce management |
Peer identity is established via the Attestation::RemoteAttestation variant
(core-types/src/security.rs):
Attestation::RemoteAttestation {
    remote_installation: InstallationId,
    remote_device_attestation: Box<Attestation>,
}
This nests the remote peer’s device attestation (e.g., machine binding, TPM) inside a remote attestation wrapper, providing end-to-end identity verification for the replication channel.
Topology
Federation supports multiple topologies:
Hub-Spoke
A central installation acts as the synchronization hub. Leaf installations sync with the hub only:
Hub
/ | \
A B C
Simpler to manage. The hub is a single point of failure for synchronization (not for local operation).
Mesh
Every installation syncs with every other installation:
A --- B
| X |
C --- D
No single point of failure. Higher bandwidth and complexity. See Mesh Topology for the full mesh design.
Partial Mesh
Selected installations sync with selected peers:
A --- B
|
C --- D
Supports organizational boundaries where teams share a subset of profiles.
Security Considerations
Trust Boundary
Each synchronized secret crosses a trust boundary at the profile level. An installation’s
SecurityLevel hierarchy (Open < Internal < ProfileScoped < SecretsOnly) applies locally.
A remote peer with access to a shared profile can read secrets at the ProfileScoped level
for that profile, but cannot escalate to SecretsOnly on the local installation.
Delegation for Sync Agents
The synchronization agent on each installation operates with a DelegationGrant scoped to
the secrets being synchronized:
DelegationGrant {
delegator: <local-operator>,
scope: { SecretRead { key_pattern: "shared/*" }, SecretWrite { key_pattern: "shared/*" } },
initial_ttl: 86400s,
heartbeat_interval: 3600s,
...
}
This ensures the sync agent cannot access local-only secrets, even if compromised.
Audit Trail
Every secret received from a remote peer is recorded in the local audit chain with the
remote peer’s InstallationId and the grant under which the sync occurred. This provides
a tamper-evident record of which secrets were replicated from where.
Human-Agent Orchestration
Design Intent. This page describes mixed workflows where human operators and AI/service agents collaborate on secret-bearing operations. The agent identity model (AgentIdentity, AgentType, DelegationGrant, TrustVector) is defined in the type system today. The approval gate mechanisms, escalation protocols, and multi-party authorization described below are architectural targets.
Overview
Open Sesame treats human operators and machine agents as peers in the same identity system.
Both are AgentIdentity instances with typed identities, attestations, capability sets, and
delegation chains. The difference is not in system architecture but in the attestation methods
available and the trust policies applied.
The core principle: agents can perform secret-bearing operations only to the extent that a
human has authorized them, either directly (via DelegationGrant) or via policy.
Agent Types in Orchestration
The AgentType enum (core-types/src/security.rs) defines the entity classification:
| Type | Role in Orchestration |
|---|---|
| Human | Approver, delegator, root of trust for capability chains |
| AI { model_family } | Automated operations, LLM-driven workflows, copilot actions |
| Service { unit } | Background processes, CI/CD pipelines, cron jobs |
| Extension { manifest_hash } | WASI plugins operating in a content-addressed sandbox |
Approval Gates (Design Intent)
An approval gate is a policy that requires human authorization before an agent can access a secret or perform a privileged operation.
Gate Model
When an AI or Service agent requests a capability that requires approval:
- The agent submits a request specifying the capability needed and the context (profile, secret key pattern, operation type).
- The request is held in a pending state. The agent blocks or receives a pending response.
- One or more human operators are notified.
- The human reviews the request and either approves (issuing a DelegationGrant) or denies.
- On approval, the agent’s session_scope is updated with the granted capabilities for the duration of initial_ttl.
Gate Conditions
Approval gates are triggered by the gap between an agent’s session_scope and the
capabilities required for the requested operation:
Agent session_scope: { SecretList, StatusRead }
Requested operation: SecretRead { key_pattern: "production/db-*" }
Gap: { SecretRead { key_pattern: "production/db-*" } }
--> Approval gate triggered
If the agent already holds the required capability (e.g., from a prior delegation), no gate is triggered.
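The gap computation is an ordinary set difference. A minimal sketch, with capability names as strings standing in for the Capability enum and a hypothetical `gate_gap` helper:

```rust
use std::collections::HashSet;

/// The gap is the set difference: required capabilities the session does not
/// hold. An empty gap means no approval gate fires.
fn gate_gap<'a>(session: &HashSet<&'a str>, required: &HashSet<&'a str>) -> HashSet<&'a str> {
    required.difference(session).copied().collect()
}

fn main() {
    let session = HashSet::from(["SecretList", "StatusRead"]);
    let required = HashSet::from(["SecretRead:production/db-*"]);

    // Non-empty gap: approval gate triggered.
    assert!(!gate_gap(&session, &required).is_empty());

    // An agent already holding the capability triggers no gate.
    let already_held = HashSet::from(["StatusRead"]);
    assert!(gate_gap(&session, &already_held).is_empty());
}
```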
Escalation (Design Intent)
Escalation is the process by which an agent requests elevated capabilities beyond its current session scope.
Escalation Flow
1. Agent detects it needs Capability::Unlock for profile "production"
2. Agent does not hold Unlock in session_scope
3. Agent submits escalation request:
- Requested: { Unlock }
- Context: profile "production", reason "scheduled key rotation"
   - Requested TTL: 3600s
4. Human operator reviews escalation request
5. Human approves with narrowed scope:
DelegationGrant {
delegator: <human-agent-id>,
scope: { Unlock },
initial_ttl: 300s, // 5 minutes, not the 1 hour requested
heartbeat_interval: 60s,
nonce: <random>,
signature: <Ed25519>,
}
6. Agent's effective scope becomes:
session_scope.union(granted_scope).intersection(delegator_scope)
7. After 300s, the grant expires and the agent loses Unlock
The human can:
- Approve with the requested scope and TTL.
- Approve with a narrower scope or shorter TTL (the human always narrows, never widens).
- Deny the request.
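The effective-scope formula from step 6 of the flow can be sketched with plain sets. Capability names are strings here as a stand-in for CapabilitySet, and the `effective` helper is hypothetical:

```rust
use std::collections::HashSet;

/// Effective scope after approval:
/// session_scope.union(granted_scope).intersection(delegator_scope).
/// The final intersection guarantees a grant can never widen the agent's
/// scope beyond what the delegator itself holds.
fn effective<'a>(
    session: &HashSet<&'a str>,
    granted: &HashSet<&'a str>,
    delegator: &HashSet<&'a str>,
) -> HashSet<&'a str> {
    let unioned: HashSet<&str> = session.union(granted).copied().collect();
    unioned.intersection(delegator).copied().collect()
}

fn main() {
    let session = HashSet::from(["SecretList"]);
    let granted = HashSet::from(["Unlock"]);
    let delegator = HashSet::from(["SecretList", "Unlock", "SecretRead"]);
    assert_eq!(
        effective(&session, &granted, &delegator),
        HashSet::from(["SecretList", "Unlock"])
    );

    // A grant outside the delegator's own scope is silently dropped.
    let rogue = HashSet::from(["SecretDelete"]);
    assert!(!effective(&session, &rogue, &delegator).contains("SecretDelete"));
}
```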
Automatic Escalation Policies (Design Intent)
For well-defined, repetitive workflows, policies can pre-authorize escalation without human-in-the-loop:
# config.toml
[[agents.auto_escalation]]
agent_type = "service"
unit = "backup-agent.service"
capabilities = ["secret-read"]
key_pattern = "backup/*"
max_ttl = "1h"
require_device_attestation = true
This pre-authorization avoids interactive approval for routine operations while maintaining the capability lattice’s scope-narrowing invariant.
Audit Trail
All agent actions are attributed in the audit log. The audit entry for any operation includes:
| Field | Source |
|---|---|
| Agent ID | AgentIdentity.id |
| Agent type | AgentIdentity.agent_type |
| Delegation chain | AgentIdentity.delegation_chain – full chain from root delegator |
| Effective capabilities | AgentIdentity.session_scope at time of operation |
| Operation | The specific action performed (read, write, delete, unlock, etc.) |
| Profile | Which trust profile the operation targeted |
| Timestamp | Operation time |
| Attestations | Which attestation methods were active |
Chain Attribution
For delegated operations, the audit trail records the entire delegation chain:
Audit entry for SecretRead("production/api-key"):
Agent: agent-01941c8a-... (AI, model_family: "claude-4")
Delegation chain:
[0] Human operator-5678 -> DelegationGrant { scope: {SecretRead, SecretList}, ttl: 3600s }
[1] agent-01941c8a-... (current agent)
Attestations: [NoiseIK, Delegation]
This provides full provenance: who authorized the AI agent, what scope was granted, and when the delegation expires.
Multi-Party Authorization (Design Intent)
For critical operations (e.g., deleting a production secret, rotating a root key), multi-party authorization requires approval from multiple human operators.
N-of-M Model
A multi-party policy specifies:
- M – total number of designated approvers.
- N – minimum number who must approve.
- Timeout – how long to wait for approvals before the request expires.
Policy for Capability::SecretDelete { key_pattern: "production/*" }:
Approvers: [operator-A, operator-B, operator-C] (M = 3)
Required: 2 (N = 2)
Timeout: 1 hour
Authorization Flow
- An agent or human requests a capability that matches a multi-party policy.
- All M approvers are notified.
- Each approver independently reviews and approves or denies.
- When N approvals are collected, a composite DelegationGrant is issued:
  - The scope is the intersection of all approvers’ individual scopes.
  - The initial_ttl is the minimum of all approvers’ specified TTLs.
  - Each approver’s signature is recorded.
- If the timeout expires before N approvals are collected, the request is denied.
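Composite-grant assembly reduces to an intersection and a minimum. A minimal sketch, assuming string capability names and a hypothetical `composite` helper (signature collection is omitted):

```rust
use std::collections::HashSet;

/// Assemble a composite grant once the N-of-M threshold is met: scope is
/// the intersection of all approvers' scopes, TTL the minimum of their TTLs.
/// Returns None while the request is still pending.
fn composite<'a>(
    approvals: &[(HashSet<&'a str>, u64)], // (approver scope, approver TTL seconds)
    n_required: usize,
) -> Option<(HashSet<&'a str>, u64)> {
    if approvals.len() < n_required {
        return None;
    }
    let mut scope = approvals[0].0.clone();
    let mut ttl = approvals[0].1;
    for (s, t) in &approvals[1..] {
        scope = scope.intersection(s).copied().collect();
        ttl = ttl.min(*t);
    }
    Some((scope, ttl))
}

fn main() {
    let a = (HashSet::from(["SecretDelete", "SecretRead"]), 3600);
    let b = (HashSet::from(["SecretDelete"]), 1800);

    // One approval with N = 2: the request is still pending.
    assert!(composite(&[a.clone()], 2).is_none());

    // Two approvals: the composite scope narrows, the TTL is the minimum.
    let (scope, ttl) = composite(&[a, b], 2).unwrap();
    assert_eq!(scope, HashSet::from(["SecretDelete"]));
    assert_eq!(ttl, 1800);
}
```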
Multi-Party Attestation
The Attestation::Delegation variant records the delegator’s AgentId and the granted
scope. For multi-party authorization, multiple Attestation::Delegation entries appear
in the agent’s attestations vector, one per approver.
Trust Vector in Orchestration
The TrustVector (core-types/src/security.rs) provides the quantitative basis for
authorization decisions in mixed human-agent workflows:
| Dimension | Effect on Orchestration |
|---|---|
| authn_strength | Higher strength reduces approval gate friction |
| authz_freshness | Stale authorization triggers re-approval |
| delegation_depth | Deeper chains require stronger attestations at each link |
| device_posture | Low posture (no memfd_secret, no TPM) may trigger additional approval requirements |
| network_exposure | Remote agents (Encrypted, Onion, PublicInternet) face stricter policies than local agents |
| agent_type | Metadata for policy matching, not a trust tier |
Worked Example: AI Copilot Accessing Secrets
1. A developer invokes an AI copilot to debug a production issue.
2. The copilot (AgentType::AI, model_family: "claude-4") needs to read a database connection string.
3. The copilot’s session_scope does not include SecretRead for production/*.
4. An approval gate fires. The developer receives a prompt:
   Agent "copilot-agent-01941c8a" (AI/claude-4) requests:
     SecretRead { key_pattern: "production/db-connection" }
   Reason: "Debugging connection timeout in production service"
   Approve for 10 minutes? [y/N]
5. The developer approves. A DelegationGrant is issued with initial_ttl: 600s.
6. The copilot reads the secret. The audit log records the read with the full delegation chain.
7. After 10 minutes, the grant expires. The copilot can no longer read production secrets.
Mesh Topology
Design Intent. This page describes a peer-to-peer federation mesh where Open Sesame installations synchronize state without a central authority. The identity model (InstallationId, OrganizationNamespace), Noise IK transport (core-ipc), and attestation types (Attestation::RemoteAttestation) exist in the type system today. The gossip protocol, CRDT-based state merging, and convergence guarantees described below are architectural targets.
Overview
A mesh topology connects Open Sesame installations as peers where each node can communicate directly with any other node. There is no central server, certificate authority, or coordinator. Trust is established through mutual Noise IK authentication and device attestation. State convergence is achieved through gossip-based propagation and conflict-free replicated data types (CRDTs).
Trust Establishment
Initial Bootstrap
Two installations establish trust through an out-of-band key exchange:
1. Key display. Installation A displays its X25519 static public key and InstallationId:

   sesame federation show-identity
   # Output:
   # Installation: a1b2c3d4-...
   # Org: braincraft.io
   # Public key: base64(X25519 static key)
   # Machine binding: machine-id (verified)

2. Key import. Installation B imports A’s identity:

   sesame federation trust --installation a1b2c3d4-... --pubkey base64(...)

3. Mutual verification. Both installations perform a Noise IK handshake. Each peer verifies the other’s static key matches the imported value. The Attestation::RemoteAttestation type records the result:

   Attestation::RemoteAttestation {
       remote_installation: InstallationId { id: a1b2c3d4-..., ... },
       remote_device_attestation: Box::new(Attestation::DeviceAttestation {
           binding: MachineBinding { binding_hash: [...], binding_type: MachineId },
           verified_at: 1711234567,
       }),
   }
Trust Anchors
Each installation maintains a set of trusted peer identities (public keys + installation IDs). This set is the trust anchor for the mesh. A peer not in the trust set is rejected during Noise IK handshake. Trust anchors can be:
- Manually established (out-of-band, as described above).
- Transitively established (A trusts B, B trusts C, A can choose to trust C based on B’s attestation).
- Organizationally established (all installations in the same OrganizationNamespace share a common trust anchor via policy distribution).
Transitive trust is opt-in and policy-controlled. An installation is never forced to trust a peer it has not explicitly approved or that does not meet its configured policy.
Gossip Protocol (Design Intent)
State changes propagate through the mesh via a gossip protocol.
What is Gossiped
- Profile metadata. Profile names, IDs, and sync policies for synchronized profiles.
- Secret updates. Encrypted secret payloads for profiles configured for synchronization.
- Policy updates. Organization-wide policy changes from /etc/pds/policy.toml.
- Peer identity. New peer introductions (installation ID + public key) for mesh expansion.
- Revocation notices. Key revocation and delegation revocation events.
Gossip Mechanics
- A node produces a state change (e.g., writes a secret to a synchronized profile).
- The node selects a random subset of its known peers (fanout factor, typically 3–5).
- The node sends the update to the selected peers over Noise IK connections.
- Each receiving peer checks whether the update is new (vector clock comparison). If new, the peer applies the update locally and re-gossips to its own random subset of peers.
- Propagation continues until all nodes have seen the update.
Dissemination Guarantees
With a fanout factor of f and n nodes:
- Expected propagation rounds: O(log_f(n)).
- For 100 nodes with fanout 3: approximately 5 rounds to reach all peers.
- Probabilistic guarantee: with sufficient fanout, all nodes receive the update with high probability. Deterministic delivery is not guaranteed per round; convergence is eventual.
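The round estimate above can be checked numerically. This is a rough expected-value sketch (real gossip propagation is probabilistic, so this is a floor estimate, not a guarantee); the `expected_rounds` helper is hypothetical:

```rust
/// Expected gossip rounds to reach n nodes with fanout f: roughly log_f(n),
/// since the informed population multiplies by about f each round.
fn expected_rounds(n: f64, fanout: f64) -> u32 {
    (n.ln() / fanout.ln()).ceil() as u32
}

fn main() {
    // The 100-node / fanout-3 example from the text: about 5 rounds.
    assert_eq!(expected_rounds(100.0, 3.0), 5);
}
```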
CRDT-Based State Merging (Design Intent)
To achieve convergence without coordination, synchronized state uses conflict-free replicated data types.
Secret State as a Map CRDT
Each synchronized profile’s secret store is modeled as a map from secret key to (value, vector clock) pairs:
Profile "work" secrets:
"shared/api-key" -> (encrypted_value, {install_A: 3, install_B: 2})
"shared/db-url" -> (encrypted_value, {install_A: 1})
The CRDT merge rule:
- Concurrent writes to the same key. Last-writer-wins based on wall-clock timestamp, with installation ID as tiebreaker. The losing value is preserved in a conflict log.
- Non-conflicting writes. Different keys or causally ordered writes merge automatically with no conflict.
- Deletes. A tombstone entry replaces the value. Tombstones are retained for a configurable duration (default: 30 days) to ensure propagation across partitioned nodes.
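The last-writer-wins rule with an installation-ID tiebreaker can be sketched as a pure merge function. Vector clocks, tombstones, and the conflict log are omitted; the `Versioned` type and `merge` function are illustrative only:

```rust
/// One key's versioned value: later wall-clock timestamp wins; on a tie,
/// the lexically larger installation ID breaks the tie deterministically.
#[derive(Clone, PartialEq, Debug)]
struct Versioned {
    value: &'static str,        // stands in for the encrypted payload
    timestamp: u64,             // wall-clock write time
    installation: &'static str, // tiebreaker
}

fn merge(a: Versioned, b: Versioned) -> Versioned {
    // Tuple comparison: timestamp first, then installation ID.
    if (a.timestamp, a.installation) >= (b.timestamp, b.installation) {
        a
    } else {
        b
    }
}

fn main() {
    let from_a = Versioned { value: "v-from-A", timestamp: 100, installation: "install_A" };
    let from_b = Versioned { value: "v-from-B", timestamp: 200, installation: "install_B" };
    // The merge is commutative: either argument order yields the same winner,
    // which is what lets partitioned peers converge without coordination.
    assert_eq!(merge(from_a.clone(), from_b.clone()), merge(from_b, from_a));
}
```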
Convergence Properties
| Property | Guarantee |
|---|---|
| Eventual consistency | All connected peers converge to the same state |
| Commutativity | Updates can be applied in any order |
| Idempotency | Re-applying an update has no additional effect |
| Partition tolerance | Nodes operate independently during partitions |
Partition Tolerance
During Partition
Partitioned nodes continue operating independently. Each node:
- Reads and writes its local vault without restriction.
- Records all operations in its local audit chain.
- Queues outgoing gossip messages for delivery when connectivity is restored.
On Reconnect
When a partitioned node reconnects:
- The node exchanges vector clocks with peers to identify divergence.
- Missing updates are transferred in both directions.
- Conflicts (concurrent writes to the same key) are resolved per the CRDT merge rule.
- Audit chains from both sides are cross-referenced.
Convergence Verification
After reconnection, nodes can verify convergence:
sesame federation verify-convergence --profile work
This compares the local state hash with hashes reported by peers. A mismatch indicates an update still in transit or an unresolved conflict.
Peer Discovery and Mesh Expansion
Manual Peer Addition
sesame federation trust --installation <uuid> --pubkey <base64>
Organization-Scoped Discovery (Design Intent)
Within an OrganizationNamespace, peer discovery can be bootstrapped from a shared
configuration distributed via policy:
# /etc/pds/policy.toml
[[policy]]
key = "federation.bootstrap_peers"
value = [
{ installation = "a1b2c3d4-...", pubkey = "base64(...)" },
{ installation = "e5f6a7b8-...", pubkey = "base64(...)" },
]
source = "enterprise-fleet-management"
New installations in the org automatically discover existing peers from the bootstrap list. Each peer still performs mutual Noise IK authentication before sharing state.
Peer Removal
Removing a peer from the trust set:
sesame federation untrust --installation <uuid>
The removed peer’s public key is revoked. Gossip messages from the revoked peer are rejected. Secrets previously shared with the revoked peer remain encrypted in local vaults; they are not retroactively expunged from the peer’s copy.
Security Properties
No Central Authority
There is no CA, no coordinator, and no single point of compromise. Compromising one node does not grant access to other nodes’ local-only secrets. Synchronized secrets are limited to what was explicitly configured for sync.
Forward Secrecy
Each Noise IK connection uses ephemeral keys, providing forward secrecy per session. Compromising a node’s static key does not compromise past session traffic (though it does compromise future sessions until the key is rotated and the old key revoked in peers’ trust sets).
Minimum Peer Crypto Profile
The CryptoConfig.minimum_peer_profile field (core-types/src/crypto.rs) enforces a floor
on the cryptographic algorithms a peer must use:
Local: LeadingEdge (Argon2id, BLAKE3, ChaChaPoly, BLAKE2s)
Peer: GovernanceCompatible (PBKDF2-SHA256, HKDF-SHA256, AES-GCM, SHA-256)
If minimum_peer_profile = LeadingEdge:
--> Peer rejected (does not meet minimum)
If minimum_peer_profile = GovernanceCompatible:
--> Peer accepted
This prevents a mesh node with weak cryptographic configuration from weakening the overall mesh security posture.
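The floor check reduces to an ordering comparison on the profile enum. A minimal sketch assuming a simplified two-variant CryptoProfile (the real enum also has Custom, which needs algorithm-by-algorithm comparison):

```rust
/// Simplified profile ordering: the derive makes
/// GovernanceCompatible < LeadingEdge by declaration order.
#[derive(PartialEq, PartialOrd, Debug)]
enum CryptoProfile {
    GovernanceCompatible,
    LeadingEdge,
}

/// A peer is accepted only when its advertised profile meets the local floor.
fn peer_accepted(peer: CryptoProfile, minimum: CryptoProfile) -> bool {
    peer >= minimum
}

fn main() {
    // GovernanceCompatible peer against a LeadingEdge floor: rejected.
    assert!(!peer_accepted(CryptoProfile::GovernanceCompatible, CryptoProfile::LeadingEdge));
    // The same peer against a GovernanceCompatible floor: accepted.
    assert!(peer_accepted(CryptoProfile::GovernanceCompatible, CryptoProfile::GovernanceCompatible));
}
```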
Extension System
Open Sesame provides a WASM-based extension system composed of two crates: extension-host (the
runtime) and extension-sdk (the authoring toolkit). Extensions are distributed as OCI artifacts.
Current Implementation Status
Both crates are in an early scaffolding phase. The extension-host crate declares its module-level
documentation and dependency structure but contains no runtime logic. The extension-sdk crate
declares its module-level documentation and enforces #![forbid(unsafe_code)] but contains no
bindings or type definitions beyond the crate root. The architectural contracts (crate boundaries,
dependency selections, WIT/OCI integration points) are established; functional implementation is
pending.
extension-host
The extension-host crate provides the Wasmtime-backed runtime for executing WASM component model
extensions with capability-based sandboxing. Each extension runs in its own Store with capabilities
enforced from its manifest.
Dependencies
| Crate | Purpose |
|---|---|
| wasmtime | WebAssembly runtime engine with component model support |
| wasmtime-wasi | WASI preview 2 implementation for Wasmtime |
| core-types | Shared types for the extension/host boundary |
| core-config | Configuration loading for extension manifests |
| core-ipc | IPC bus client for extension-to-daemon communication |
| extension-sdk | Shared type definitions between host and guest |
| tokio | Async runtime for extension lifecycle management |
| anyhow | Error handling for Wasmtime operations |
Planned Architecture
Based on the crate’s declared dependencies and documentation, the extension host is designed around these components:
- Wasmtime engine with pooling allocator: The wasmtime dependency provides the core WebAssembly execution engine. Pooling allocation pre-allocates memory slots for extension instances, reducing per-instantiation overhead.
- WASI component model: The wasmtime-wasi dependency provides WASI preview 2 support, giving extensions controlled access to filesystem, networking, clocks, and random number generation through capability handles.
- Capability sandbox: Each extension’s Store is configured with capabilities declared in its manifest. Extensions cannot access resources beyond what their manifest declares.
- IPC bus integration: The core-ipc dependency allows extensions to communicate with daemon processes over the Noise IK encrypted bus, subject to clearance checks.
extension-sdk
The extension-sdk crate provides the types, host function bindings, and WIT interface definitions
that extension authors use to build WASM component model extensions targeting the extension host.
Dependencies
| Crate | Purpose |
|---|---|
| core-types | Shared types for the extension/host boundary |
| wit-bindgen | Code generation from WIT (WebAssembly Interface Type) definitions |
| serde | Serialization for extension configuration and data exchange |
WIT Bindings
The wit-bindgen dependency generates Rust bindings from WIT interface definitions. WIT defines the
contract between extensions (guests) and the extension host: what functions the host exports to
extensions, what functions extensions must implement, and the types exchanged across the boundary.
The SDK crate enforces #![forbid(unsafe_code)] – all unsafe operations are confined to the
generated bindings and the host runtime.
OCI Distribution
Extensions are packaged and distributed as OCI (Open Container Initiative) artifacts. The
OciReference type in core-types (defined in core-types/src/oci.rs) provides the addressing
scheme:
registry/principal/scope:revision[@provenance]
Examples:
- registry.example.com/org/extension:1.0.0
- registry.example.com/org/extension:1.0.0@sha256:abc123
The OciReference type parses and validates this format, requiring at least three path segments
(registry, principal, scope), a non-empty revision after :, and an optional provenance hash after
@. It implements FromStr, Display, Serialize, and Deserialize.
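The parsing rules just described can be sketched with standard string splitting. This simplified parser is illustrative only; the real OciReference performs additional validation:

```rust
/// Simplified parser for registry/principal/scope:revision[@provenance]:
/// at least three non-empty path segments, a non-empty revision after ':',
/// and an optional provenance hash after '@'.
fn parse(reference: &str) -> Option<(Vec<&str>, &str, Option<&str>)> {
    // Split off the optional provenance first (at the first '@').
    let (rest, provenance) = match reference.split_once('@') {
        Some((r, p)) => (r, Some(p)),
        None => (reference, None),
    };
    let (path, revision) = rest.split_once(':')?;
    if revision.is_empty() {
        return None;
    }
    let segments: Vec<&str> = path.split('/').collect();
    if segments.len() < 3 || segments.iter().any(|s| s.is_empty()) {
        return None;
    }
    Some((segments, revision, provenance))
}

fn main() {
    let (segs, rev, prov) =
        parse("registry.example.com/org/extension:1.0.0@sha256:abc123").unwrap();
    assert_eq!(segs, ["registry.example.com", "org", "extension"]);
    assert_eq!(rev, "1.0.0");
    assert_eq!(prov, Some("sha256:abc123"));

    // Too few path segments: rejected.
    assert!(parse("registry.example.com/extension:1.0.0").is_none());
}
```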
The OCI distribution model allows extensions to be:
- Published to any OCI-compliant registry (Docker Hub, GitHub Container Registry, self-hosted registries).
- Content-addressed via the optional provenance field for integrity verification.
- Version-pinned via the revision field for reproducible deployments.
- Scoped by principal (organization/user) and scope (extension name) for namespace isolation.
Extension Lifecycle
The planned extension lifecycle follows five phases:
- Discover: Resolve an OciReference to a registry, pull the extension artifact, and verify its provenance hash if present.
- Load: Parse the WASM component from the artifact. Validate the component’s WIT imports against what the host can provide.
- Sandbox: Create a Wasmtime Store with WASI capabilities scoped to the extension’s manifest. Configure resource limits (memory, fuel/instruction count, file descriptors).
- Execute: Instantiate the component in the store. Call the extension’s exported entry points. The extension communicates with daemons via host-provided IPC functions.
- Teardown: Drop the store, releasing all resources. The pooling allocator reclaims the memory slot for reuse.
Extension Host Capabilities
The extension-host crate provides the Wasmtime-backed runtime that executes
WASM component model extensions. Each extension runs in an isolated Store
with capabilities enforced from its manifest.
Current Implementation Status
As of this writing, the extension host is scaffolded but not fully wired into
the daemon runtime. The crate declares dependencies on wasmtime,
wasmtime-wasi, core-types, core-config, core-ipc, and extension-sdk.
The public module (extension-host/src/lib.rs) contains documentation comments
describing the intended architecture but no exported functions or types yet.
The sections below describe the design that these crates are being built toward.
Wasmtime Runtime Configuration
The extension host uses Wasmtime as its WebAssembly runtime. The planned configuration includes:
- Cranelift compiler backend – Wasmtime’s default optimizing compiler. Extensions are compiled ahead of time on first load, then cached.
- Component model – Extensions are WASM components (not core modules). This enables typed interfaces via WIT and structured capability passing.
- Pooling allocator – When multiple extensions run concurrently, the pooling instance allocator pre-reserves virtual address space for all instances, avoiding per-instantiation mmap overhead. Configuration parameters (instance count, memory pages, table elements) are derived from the extension manifest’s declared resource limits.
WASI Sandbox
Each extension Store is configured with a WASI context that restricts what the guest can access. The sandbox follows a deny-by-default model:
| Resource | Default | With Capability Grant |
|---|---|---|
| Filesystem read | Denied | Scoped to declared directories |
| Filesystem write | Denied | Scoped to declared directories |
| Network sockets | Denied | Denied (no current grant path) |
| Environment variables | Denied | Filtered set from manifest |
| Clock (monotonic) | Allowed | Allowed |
| Clock (wall) | Allowed | Allowed |
| Random (CSPRNG) | Allowed | Allowed |
| stdin/stdout/stderr | Redirected to host log | Redirected to host log |
Extensions cannot access the host filesystem, network, or other extensions’ memory unless the host explicitly grants a capability through the WIT interface.
Capability Grants
The host exposes functionality to extensions through WIT-defined interfaces. An extension’s manifest declares which capabilities it requires; the host validates these at load time and links only the granted imports.
Planned capability categories:
- secret-read – Read a named secret from the active vault (routed through daemon-secrets via IPC). The extension never receives the vault master key.
- secret-write – Store or update a secret. Requires explicit user approval on first use.
- config-read – Read configuration values from core-config.
- ipc-publish – Publish an EventKind message to the IPC bus at the extension’s clearance level.
- clipboard-write – Write to the clipboard via daemon-clipboard.
- notification – Display a desktop notification.
Each capability is a separate WIT interface. An extension that declares
secret-read but not secret-write receives a linker that provides only
the read import; the write import is left unresolved, causing instantiation
to fail if the guest attempts to call it.
Resource Limits
The host enforces per-extension resource limits to prevent a misbehaving extension from affecting system stability:
- Memory – Maximum linear memory size, configured in the pooling allocator. The default is 64 MiB per extension instance.
- Fuel (CPU) – Wasmtime’s fuel metering limits the number of instructions an extension can execute per invocation. When fuel is exhausted, the call traps with a deterministic error.
- Table elements – Maximum number of indirect function table entries.
- Instances – Maximum number of concurrent component instances across all loaded extensions.
- Execution timeout – A wall-clock deadline per invocation. Implemented via Wasmtime’s epoch interruption mechanism: the host increments the epoch on a timer, and each Store is configured with a maximum epoch delta.
These limits are declared in the extension manifest and validated against
system-wide maximums set in core-config.
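The manifest-versus-system validation can be sketched as follows; the field names and ceiling values are invented for illustration, not taken from `core-config`:

```rust
/// Per-extension limits declared in a manifest (illustrative fields).
#[derive(Debug, Clone, Copy)]
struct DeclaredLimits {
    max_memory_mib: u64,
    max_fuel: u64,
}

/// System-wide ceilings (invented values standing in for core-config).
const SYSTEM_MAX_MEMORY_MIB: u64 = 64;
const SYSTEM_MAX_FUEL: u64 = 10_000_000;

/// Reject any manifest that asks for more than the system-wide maximum,
/// rather than silently clamping, so misconfiguration surfaces at load time.
fn validate_limits(l: DeclaredLimits) -> Result<DeclaredLimits, String> {
    if l.max_memory_mib > SYSTEM_MAX_MEMORY_MIB {
        return Err(format!(
            "memory {} MiB exceeds system cap {} MiB",
            l.max_memory_mib, SYSTEM_MAX_MEMORY_MIB
        ));
    }
    if l.max_fuel > SYSTEM_MAX_FUEL {
        return Err(format!("fuel {} exceeds system cap {}", l.max_fuel, SYSTEM_MAX_FUEL));
    }
    Ok(l)
}

fn main() {
    // Within both ceilings: accepted as declared.
    assert!(validate_limits(DeclaredLimits { max_memory_mib: 16, max_fuel: 1_000_000 }).is_ok());
    // Over the memory ceiling: the extension fails to load.
    assert!(validate_limits(DeclaredLimits { max_memory_mib: 128, max_fuel: 1_000_000 }).is_err());
}
```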
Example Extension
This page walks through creating a WASM component model extension for Open Sesame, from WIT interface definition through OCI packaging. Because the extension runtime is not yet fully wired, sections that describe design intent rather than working code are marked accordingly.
Prerequisites
- Rust toolchain with the `wasm32-wasip2` target: `rustup target add wasm32-wasip2`
- `wasm-tools` for component composition: `cargo install wasm-tools`
- An OCI-compatible registry (e.g., `ghcr.io`) for publishing.
Step 1: Define the WIT Interface
Design intent. No `.wit` files ship in the repository yet. The `extension-sdk` crate will provide canonical WIT definitions; what follows is the planned schema.
Create a wit/ directory with the extension’s world:
// wit/world.wit
package open-sesame:example@0.1.0;
world greeting {
import open-sesame:host/config-read@0.1.0;
export greet: func(name: string) -> string;
}
The import line declares that this extension requires the config-read
capability from the host. The export line declares the function the host
will call.
Step 2: Implement the Guest in Rust
Create a new crate:
cargo new --lib greeting-extension
cd greeting-extension
Add dependencies to Cargo.toml:
[package]
name = "greeting-extension"
version = "0.1.0"
edition = "2021"
[lib]
crate-type = ["cdylib"]
[dependencies]
wit-bindgen = "0.41"
The extension-sdk crate (extension-sdk/Cargo.toml) depends on
wit-bindgen for generating Rust bindings from WIT definitions. Guest code
uses the wit_bindgen::generate! macro:
// src/lib.rs
wit_bindgen::generate!({
    world: "greeting",
    path: "../wit",
});

struct Component;

impl Guest for Component {
    fn greet(name: String) -> String {
        // Read a greeting template from config (host-provided import).
        let template = open_sesame::host::config_read::get("greeting.template")
            .unwrap_or_else(|| "Hello, {}!".to_string());
        template.replace("{}", &name)
    }
}

export!(Component);
Step 3: Build the WASM Component
Compile to a core WASM module, then convert to a component:
cargo build --target wasm32-wasip2 --release
wasm-tools component new \
target/wasm32-wasip2/release/greeting_extension.wasm \
-o greeting.component.wasm
The resulting greeting.component.wasm is a self-describing component that
declares its imports and exports in the component model type system.
Step 4: Package as an OCI Artifact
Design intent. OCI distribution is defined in `core-types/src/oci.rs` as `OciReference`, but the pull/push workflow is not yet implemented.
Open Sesame identifies extensions by OCI references with the format:
registry/principal/scope:revision[@provenance]
For example:
ghcr.io/my-org/greeting-extension:0.1.0@sha256:abcdef1234567890
The OciReference struct parses this into five fields:
| Field | Example | Required |
|---|---|---|
| registry | ghcr.io | Yes |
| principal | my-org | Yes |
| scope | greeting-extension | Yes |
| revision | 0.1.0 | Yes |
| provenance | sha256:abcdef1234567890 | No |
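As a sketch of the parsing rules (not the actual `core-types` implementation), the grammar can be handled with two splits: peel off the optional `@provenance`, then separate `scope:revision` from the path segments:

```rust
/// Illustrative re-implementation of registry/principal/scope:revision[@provenance].
/// The real parser lives in core-types/src/oci.rs and may differ in detail.
#[derive(Debug, PartialEq)]
struct OciReference {
    registry: String,
    principal: String,
    scope: String,
    revision: String,
    provenance: Option<String>,
}

fn parse_oci_reference(s: &str) -> Option<OciReference> {
    // Split off the optional @provenance suffix first, so the ':' inside
    // "sha256:..." does not confuse the revision split below.
    let (head, provenance) = match s.split_once('@') {
        Some((h, p)) => (h, Some(p.to_string())),
        None => (s, None),
    };
    // The revision follows the last ':' in the remaining path.
    let (path, revision) = head.rsplit_once(':')?;
    let mut parts = path.splitn(3, '/');
    Some(OciReference {
        registry: parts.next()?.to_string(),
        principal: parts.next()?.to_string(),
        scope: parts.next()?.to_string(),
        revision: revision.to_string(),
        provenance,
    })
}

fn main() {
    let r = parse_oci_reference(
        "ghcr.io/my-org/greeting-extension:0.1.0@sha256:abcdef1234567890",
    )
    .unwrap();
    assert_eq!(r.registry, "ghcr.io");
    assert_eq!(r.principal, "my-org");
    assert_eq!(r.scope, "greeting-extension");
    assert_eq!(r.revision, "0.1.0");
    assert_eq!(r.provenance.as_deref(), Some("sha256:abcdef1234567890"));
}
```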
Push the component to a registry using an OCI-compatible tool:
oras push ghcr.io/my-org/greeting-extension:0.1.0 \
greeting.component.wasm:application/vnd.wasm.component.v1+wasm
Step 5: Write the Extension Manifest
Design intent. The manifest schema is not yet finalized.
The extension manifest declares metadata, capabilities, and resource limits:
[extension]
name = "greeting-extension"
version = "0.1.0"
oci = "ghcr.io/my-org/greeting-extension:0.1.0"
[capabilities]
config-read = true
[limits]
max_memory_mib = 16
max_fuel = 1_000_000
Place this file in ~/.config/pds/extensions/greeting-extension.toml.
Step 6: Load and Test
Design intent. The CLI subcommand for extension management is not yet implemented.
Once the extension host runtime is wired, the intended workflow:
# Install from OCI
sesame extension install ghcr.io/my-org/greeting-extension:0.1.0
# List installed extensions
sesame extension list
# Invoke directly for testing
sesame extension call greeting-extension greet "World"
Testing During Development
Until the full extension host is available, extensions can be tested with standalone Wasmtime:
wasmtime run --wasm component-model greeting.component.wasm \
--invoke greet "World"
For Rust-level unit tests, the extension-sdk crate includes proptest as
a dev-dependency for property-based testing of WIT type serialization.
Authentication Backends
The core-auth crate defines a pluggable authentication system for vault
unlock. Each authentication factor (password, SSH agent, hardware token)
is implemented as a struct that implements the VaultAuthBackend trait.
This page describes how to implement a new backend.
The VaultAuthBackend Trait
The trait is defined in core-auth/src/backend.rs. A backend must implement
all of the following methods:
#[async_trait]
pub trait VaultAuthBackend: Send + Sync {
    fn factor_id(&self) -> AuthFactorId;
    fn name(&self) -> &str;
    fn backend_id(&self) -> &str;
    fn is_enrolled(&self, profile: &TrustProfileName, config_dir: &Path) -> bool;
    async fn can_unlock(&self, profile: &TrustProfileName, config_dir: &Path) -> bool;
    fn requires_interaction(&self) -> AuthInteraction;

    async fn unlock(
        &self,
        profile: &TrustProfileName,
        config_dir: &Path,
        salt: &[u8],
    ) -> Result<UnlockOutcome, AuthError>;

    async fn enroll(
        &self,
        profile: &TrustProfileName,
        master_key: &SecureBytes,
        config_dir: &Path,
        salt: &[u8],
        selected_key_index: Option<usize>,
    ) -> Result<(), AuthError>;

    async fn revoke(
        &self,
        profile: &TrustProfileName,
        config_dir: &Path,
    ) -> Result<(), AuthError>;
}
Method Descriptions
factor_id()
Returns the AuthFactorId enum variant that identifies this factor. Used in
policy evaluation and audit logging.
name()
Human-readable name for audit logs and overlay display (e.g., "SSH Agent",
"FIDO2 Token").
backend_id()
Short machine-readable identifier for IPC messages and configuration files
(e.g., "ssh-agent", "fido2").
is_enrolled(profile, config_dir)
Synchronous check for whether enrollment data exists for the given profile.
Reads from the filesystem under `config_dir`; keep this to cheap metadata
checks (e.g., file existence) rather than I/O that could block.
can_unlock(profile, config_dir)
Asynchronous readiness check. Returns true if the backend can currently
perform an unlock. Must complete in under 100 ms. For example, an SSH agent
backend checks whether SSH_AUTH_SOCK is set and the agent is reachable; a
FIDO2 backend checks whether a token is plugged in.
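A caller can also enforce the 100 ms budget defensively. This std-only sketch illustrates the contract (the real dispatcher is async; `can_unlock_with_budget` is a hypothetical helper, not part of the API):

```rust
use std::sync::mpsc;
use std::thread;
use std::time::Duration;

/// Run a readiness probe on a worker thread and treat anything slower
/// than the 100 ms budget as "not ready". The closure stands in for a
/// backend's can_unlock() check.
fn can_unlock_with_budget<F>(probe: F) -> bool
where
    F: FnOnce() -> bool + Send + 'static,
{
    let (tx, rx) = mpsc::channel();
    thread::spawn(move || {
        // Ignore send errors: the caller may have timed out and dropped rx.
        let _ = tx.send(probe());
    });
    rx.recv_timeout(Duration::from_millis(100)).unwrap_or(false)
}

fn main() {
    // A fast probe reports its real answer.
    assert!(can_unlock_with_budget(|| true));

    // A hung probe (e.g. an unreachable agent socket) counts as not ready.
    assert!(!can_unlock_with_budget(|| {
        thread::sleep(Duration::from_millis(500));
        true
    }));
}
```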
requires_interaction()
Returns an AuthInteraction variant:
- `AuthInteraction::None` – No user interaction needed (SSH software key, TPM, OS keyring).
- `AuthInteraction::PasswordEntry` – Keyboard input required.
- `AuthInteraction::HardwareTouch` – Physical touch on a hardware device.
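The unlock UX branches on this declaration to pick a prompt. A hypothetical sketch (`prompt_for` is illustrative, not part of the real API):

```rust
/// Mirror of the AuthInteraction variants described above (illustrative).
enum AuthInteraction {
    None,
    PasswordEntry,
    HardwareTouch,
}

/// Decide what, if anything, to show the user before unlock proceeds.
fn prompt_for(interaction: &AuthInteraction) -> Option<&'static str> {
    match interaction {
        AuthInteraction::None => None, // unlock proceeds silently
        AuthInteraction::PasswordEntry => Some("Enter your vault password"),
        AuthInteraction::HardwareTouch => Some("Touch your security token"),
    }
}

fn main() {
    assert!(prompt_for(&AuthInteraction::None).is_none());
    assert_eq!(
        prompt_for(&AuthInteraction::HardwareTouch),
        Some("Touch your security token")
    );
}
```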
unlock(profile, config_dir, salt)
The core unlock operation. Derives or unwraps the master key and returns an
UnlockOutcome.
enroll(profile, master_key, config_dir, salt, selected_key_index)
Enrolls this backend for a profile. Receives the master key so the backend
can wrap or encrypt it for later retrieval. selected_key_index optionally
specifies which eligible key to use (e.g., which SSH key from the agent).
revoke(profile, config_dir)
Removes enrollment data for this backend from the profile.
UnlockOutcome
A successful unlock() call returns:
pub struct UnlockOutcome {
    pub master_key: SecureBytes,
    pub audit_metadata: BTreeMap<String, String>,
    pub ipc_strategy: IpcUnlockStrategy,
    pub factor_id: AuthFactorId,
}
- `master_key` – The 32-byte master key (for `DirectMasterKey` strategy) or password bytes (for `PasswordUnlock` strategy). Held in `SecureBytes`, which is zeroized on drop.
- `audit_metadata` – Key-value pairs for audit logging (e.g., `"key_fingerprint" => "SHA256:..."`, `"key_comment" => "user@host"`).
- `ipc_strategy` – Determines which IPC message type carries the key to daemon-secrets:
  - `IpcUnlockStrategy::PasswordUnlock` – daemon-secrets performs the KDF.
  - `IpcUnlockStrategy::DirectMasterKey` – The master key is pre-derived; daemon-secrets uses it directly.
- `factor_id` – Echoes back the factor identifier for correlation.
FactorContribution
The FactorContribution enum determines how a backend’s output participates
in multi-factor composition:
- `CompleteMasterKey` – This backend produces a complete, independently valid master key. Used in `Any` mode (any single factor suffices) and in `Policy` mode where individual factors can stand alone.
- `FactorPiece` – This backend produces one piece of a combined key. Used in `All` mode, where the final master key is derived via HKDF from all factor pieces concatenated.
Backends that unwrap an encrypted copy of the master key (SSH agent, FIDO2
with hmac-secret) should use CompleteMasterKey. Backends that contribute
entropy toward a combined derivation (e.g., a partial PIN) should use
FactorPiece.
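A simplified model of the two composition modes, with a stand-in byte-fold where the real code performs HKDF (do not use this fold as a key derivation):

```rust
/// Illustrative model of FactorContribution; not the real core-auth enum.
enum FactorContribution {
    CompleteMasterKey([u8; 32]),
    FactorPiece(Vec<u8>),
}

/// Any mode: the first complete key wins; pieces alone cannot unlock.
fn compose_any(factors: &[FactorContribution]) -> Option<[u8; 32]> {
    factors.iter().find_map(|f| match f {
        FactorContribution::CompleteMasterKey(k) => Some(*k),
        FactorContribution::FactorPiece(_) => None,
    })
}

/// All mode: concatenate every contribution, then derive. The XOR fold
/// below is a placeholder for the real HKDF step.
fn compose_all(factors: &[FactorContribution]) -> [u8; 32] {
    let mut ikm = Vec::new();
    for f in factors {
        match f {
            FactorContribution::CompleteMasterKey(k) => ikm.extend_from_slice(k),
            FactorContribution::FactorPiece(p) => ikm.extend_from_slice(p),
        }
    }
    let mut out = [0u8; 32];
    for (i, b) in ikm.iter().enumerate() {
        out[i % 32] ^= b; // stand-in for HKDF(ikm)
    }
    out
}

fn main() {
    let pieces = [
        FactorContribution::FactorPiece(vec![1, 2, 3]),
        FactorContribution::FactorPiece(vec![4, 5, 6]),
    ];
    // Pieces alone never satisfy Any mode.
    assert!(compose_any(&pieces).is_none());
    // All mode combines every contribution deterministically.
    assert_eq!(compose_all(&pieces), compose_all(&pieces));
    // A complete key satisfies Any mode directly.
    let complete = [FactorContribution::CompleteMasterKey([7u8; 32])];
    assert_eq!(compose_any(&complete), Some([7u8; 32]));
}
```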
Registration with AuthDispatcher
After implementing the trait, register the backend with the AuthDispatcher:
let fido2_backend = Fido2Backend::new(/* config */);
dispatcher.register(Box::new(fido2_backend));
The dispatcher iterates registered backends during unlock, filtering by enrollment status and the active vault’s auth policy.
VaultMetadata Integration
Enrollment data is persisted alongside the vault’s VaultMetadata. Each
backend is responsible for writing its own enrollment artifacts under
config_dir/profiles/<profile>/auth/<backend_id>/. The format is
backend-specific; common patterns include:
- A wrapped (encrypted) copy of the master key.
- A credential ID or public key for verification during unlock.
- Parameters for key derivation (iteration count, algorithm identifiers).
The is_enrolled() method checks for the existence and validity of these
artifacts.
Example: Skeleton FIDO2 Backend
The following skeleton illustrates the structure of a hypothetical FIDO2 backend. It does not compile as-is; it shows the trait method signatures and their responsibilities.
use core_auth::{
    AuthError, AuthInteraction, FactorContribution, IpcUnlockStrategy,
    UnlockOutcome, VaultAuthBackend,
};
use core_crypto::SecureBytes;
use core_types::{AuthFactorId, TrustProfileName};
use std::collections::BTreeMap;
use std::path::Path;

pub struct Fido2Backend {
    // Configuration: acceptable authenticator AAGUIDs, timeout, etc.
}

#[async_trait::async_trait]
impl VaultAuthBackend for Fido2Backend {
    fn factor_id(&self) -> AuthFactorId {
        AuthFactorId::Fido2
    }

    fn name(&self) -> &str {
        "FIDO2 Token"
    }

    fn backend_id(&self) -> &str {
        "fido2"
    }

    fn is_enrolled(&self, profile: &TrustProfileName, config_dir: &Path) -> bool {
        let cred_path = config_dir
            .join("profiles")
            .join(profile.as_str())
            .join("auth/fido2/credential.json");
        cred_path.exists()
    }

    async fn can_unlock(&self, _profile: &TrustProfileName, _config_dir: &Path) -> bool {
        // Check if a FIDO2 authenticator is available via platform API.
        // Must return within 100 ms.
        check_authenticator_present().await
    }

    fn requires_interaction(&self) -> AuthInteraction {
        AuthInteraction::HardwareTouch
    }

    async fn unlock(
        &self,
        profile: &TrustProfileName,
        config_dir: &Path,
        salt: &[u8],
    ) -> Result<UnlockOutcome, AuthError> {
        // 1. Load credential ID from enrollment data.
        let cred = load_credential(profile, config_dir)?;

        // 2. Perform FIDO2 assertion with hmac-secret extension.
        //    This requires user touch on the authenticator.
        let hmac_secret = perform_assertion(&cred, salt).await?;

        // 3. Use the hmac-secret output to unwrap the stored master key.
        let wrapped_key = load_wrapped_key(profile, config_dir)?;
        let master_key = unwrap_master_key(&wrapped_key, &hmac_secret)?;

        Ok(UnlockOutcome {
            master_key,
            audit_metadata: BTreeMap::from([
                ("credential_id".into(), hex::encode(&cred.id)),
                ("authenticator_aaguid".into(), cred.aaguid.to_string()),
            ]),
            ipc_strategy: IpcUnlockStrategy::DirectMasterKey,
            factor_id: AuthFactorId::Fido2,
        })
    }

    async fn enroll(
        &self,
        profile: &TrustProfileName,
        master_key: &SecureBytes,
        config_dir: &Path,
        salt: &[u8],
        _selected_key_index: Option<usize>,
    ) -> Result<(), AuthError> {
        // 1. Perform FIDO2 credential creation (MakeCredential).
        // 2. Use hmac-secret extension to derive a wrapping key.
        // 3. Wrap the master_key with the derived wrapping key.
        // 4. Persist credential ID + wrapped key under config_dir.
        Ok(())
    }

    async fn revoke(
        &self,
        profile: &TrustProfileName,
        config_dir: &Path,
    ) -> Result<(), AuthError> {
        let auth_dir = config_dir
            .join("profiles")
            .join(profile.as_str())
            .join("auth/fido2");
        if auth_dir.exists() {
            std::fs::remove_dir_all(&auth_dir)
                .map_err(|e| AuthError::Io(e.to_string()))?;
        }
        Ok(())
    }
}
Testing a New Backend
A backend implementation should verify the following:
- Enrollment round-trip – Enroll with a known master key, then confirm `is_enrolled()` returns `true` and the enrollment artifacts exist on disk.
- Unlock round-trip – After enrollment, call `unlock()` and verify the returned `master_key` matches the original.
- Wrong-key rejection – Tamper with enrollment data or use a different salt, and verify `unlock()` returns `AuthError`.
- Revocation – Call `revoke()`, confirm `is_enrolled()` returns `false`, and confirm the enrollment directory is removed.
- Readiness check – Verify `can_unlock()` returns `false` when the backing resource is unavailable (e.g., no SSH agent socket, no FIDO2 token connected).
- Interaction declaration – Verify `requires_interaction()` returns the correct variant. The unlock UX uses this to decide whether to show a password prompt or a “touch your token” message.
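These checks can be exercised against a toy synchronous backend. The XOR “wrapping” below is a deliberately insecure stand-in for the real hmac-secret wrap, used only to demonstrate the round-trip shape:

```rust
use std::fs;
use std::path::Path;

/// Toy "wrapping": XOR the master key with a device-held secret.
/// Stands in for a real wrap; never use XOR wrapping in practice.
fn wrap(master_key: &[u8], device_secret: &[u8]) -> Vec<u8> {
    master_key
        .iter()
        .zip(device_secret.iter().cycle())
        .map(|(a, b)| a ^ b)
        .collect()
}

fn enroll(dir: &Path, master_key: &[u8], device_secret: &[u8]) -> std::io::Result<()> {
    fs::create_dir_all(dir)?;
    fs::write(dir.join("wrapped.key"), wrap(master_key, device_secret))
}

fn is_enrolled(dir: &Path) -> bool {
    dir.join("wrapped.key").exists()
}

fn unlock(dir: &Path, device_secret: &[u8]) -> std::io::Result<Vec<u8>> {
    let wrapped = fs::read(dir.join("wrapped.key"))?;
    Ok(wrap(&wrapped, device_secret)) // XOR is its own inverse
}

fn revoke(dir: &Path) -> std::io::Result<()> {
    if dir.exists() {
        fs::remove_dir_all(dir)?;
    }
    Ok(())
}

fn main() -> std::io::Result<()> {
    let dir = std::env::temp_dir().join("open-sesame-backend-roundtrip");
    let master = b"0123456789abcdef0123456789abcdef".to_vec();
    let secret = b"device-secret".to_vec();

    // Enrollment round-trip.
    enroll(&dir, &master, &secret)?;
    assert!(is_enrolled(&dir));

    // Unlock round-trip: the unwrapped key matches the original.
    assert_eq!(unlock(&dir, &secret)?, master);

    // Wrong-key rejection: a different device secret yields garbage.
    assert_ne!(unlock(&dir, b"wrong-secret")?, master);

    // Revocation removes the enrollment directory.
    revoke(&dir)?;
    assert!(!is_enrolled(&dir));
    Ok(())
}
```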
Adding Platform Backends
This page describes how to add a new operating system backend or a new compositor backend within an existing platform crate.
Platform Crate Structure
Open Sesame uses one platform crate per operating system:
| Crate | Target | Status |
|---|---|---|
| platform-linux | target_os = "linux" | Implemented: compositor backends, evdev input, D-Bus, systemd, Landlock/seccomp sandbox |
| platform-macos | target_os = "macos" | Scaffolded: module declarations with no functional code |
| platform-windows | target_os = "windows" | Scaffolded: module declarations with no functional code |
Each crate compiles as an empty library on non-target platforms. All public modules are gated with
#[cfg(target_os = "...")]. Platform crates contain no business logic – they provide safe Rust
abstractions consumed by daemon crates.
Compositor Trait and Factory Pattern
The platform-linux crate demonstrates the reference pattern for abstracting over multiple backends
within a single platform.
The Trait
The CompositorBackend trait in platform-linux/src/compositor.rs defines the interface:
pub trait CompositorBackend: Send + Sync {
    fn list_windows(&self) -> BoxFuture<'_, core_types::Result<Vec<Window>>>;
    fn list_workspaces(&self) -> BoxFuture<'_, core_types::Result<Vec<Workspace>>>;
    fn activate_window(&self, id: &WindowId) -> BoxFuture<'_, core_types::Result<()>>;
    fn set_window_geometry(&self, id: &WindowId, geom: &Geometry)
        -> BoxFuture<'_, core_types::Result<()>>;
    fn move_to_workspace(&self, id: &WindowId, ws: &CompositorWorkspaceId)
        -> BoxFuture<'_, core_types::Result<()>>;
    fn focus_window(&self, id: &WindowId) -> BoxFuture<'_, core_types::Result<()>>;
    fn close_window(&self, id: &WindowId) -> BoxFuture<'_, core_types::Result<()>>;
    fn name(&self) -> &str;
}
Methods return BoxFuture (Pin<Box<dyn Future<Output = T> + Send>>) instead of using async fn
in the trait. This is required for dyn-compatibility – the factory function returns
Box<dyn CompositorBackend> for runtime backend selection.
The Factory
detect_compositor() probes the runtime environment and returns the appropriate backend:
pub fn detect_compositor() -> core_types::Result<Box<dyn CompositorBackend>> {
    // 1. Try COSMIC-specific protocols (if cosmic feature enabled)
    // 2. Try wlr-foreign-toplevel-management-v1
    // 3. Return Error::Platform if nothing works
}
Detection order matters: more specific backends are tried first (COSMIC), with generic fallbacks last
(WLR). Each backend’s connect() method probes for required protocols and returns an error if they
are unavailable, allowing the factory to fall through to the next candidate.
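The fall-through shape can be illustrated with toy backends whose `connect()` probes a list of advertised protocol names (the types here are stand-ins, not the real `platform-linux` code):

```rust
/// Minimal model of the factory's fall-through. Each candidate's connect()
/// probes for its protocols and errors out if they are not advertised.
trait Backend {
    fn name(&self) -> &'static str;
}

struct Cosmic;
struct Wlr;

impl Backend for Cosmic {
    fn name(&self) -> &'static str { "cosmic" }
}
impl Backend for Wlr {
    fn name(&self) -> &'static str { "wlr" }
}

fn connect_cosmic(protocols: &[&str]) -> Result<Cosmic, String> {
    if protocols.contains(&"zcosmic_toplevel_info_v1") {
        Ok(Cosmic)
    } else {
        Err("cosmic protocols not advertised".into())
    }
}

fn connect_wlr(protocols: &[&str]) -> Result<Wlr, String> {
    if protocols.contains(&"zwlr_foreign_toplevel_manager_v1") {
        Ok(Wlr)
    } else {
        Err("wlr protocol not advertised".into())
    }
}

fn detect(protocols: &[&str]) -> Result<Box<dyn Backend>, String> {
    // Most specific first, generic fallback last.
    if let Ok(b) = connect_cosmic(protocols) {
        return Ok(Box::new(b));
    }
    if let Ok(b) = connect_wlr(protocols) {
        return Ok(Box::new(b));
    }
    Err("no supported compositor backend".into())
}

fn main() {
    // A wlroots compositor falls through to the generic backend.
    assert_eq!(detect(&["zwlr_foreign_toplevel_manager_v1"]).unwrap().name(), "wlr");
    // COSMIC wins when both protocol families are present.
    assert_eq!(
        detect(&["zcosmic_toplevel_info_v1", "zwlr_foreign_toplevel_manager_v1"])
            .unwrap()
            .name(),
        "cosmic"
    );
    // No supported protocols: the factory errors out.
    assert!(detect(&[]).is_err());
}
```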
Backend Implementations
Each backend is a pub(crate) module containing a struct that implements CompositorBackend:
- `backend_cosmic.rs` – `CosmicBackend` using `ext_foreign_toplevel_list_v1` + `zcosmic_toplevel_{info,manager}_v1`
- `backend_wlr.rs` – `WlrBackend` using `zwlr_foreign_toplevel_manager_v1`
Backends are pub(crate) because callers interact with them only through
Box<dyn CompositorBackend> returned by the factory. The concrete types are not part of the
public API.
Adding a New Compositor Backend
To add support for a compositor that uses different protocols (e.g., GNOME/Mutter, KDE/KWin, Hyprland IPC):
Step 1: Create the Backend Module
Create platform-linux/src/backend_<name>.rs with a struct implementing CompositorBackend. The
struct must be Send + Sync.
For operations not supported by the compositor’s protocols, return Error::Platform with a
descriptive message:
fn set_window_geometry(&self, _id: &WindowId, _geom: &Geometry)
    -> BoxFuture<'_, core_types::Result<()>>
{
    Box::pin(async {
        Err(core_types::Error::Platform(
            "set_window_geometry not supported by <name> protocol".into(),
        ))
    })
}
Provide a connect() constructor that probes for required protocols/interfaces and returns
core_types::Result<Self>.
Step 2: Register the Module
Add the module declaration to platform-linux/src/lib.rs:
#[cfg(all(target_os = "linux", feature = "<name>"))]
pub(crate) mod backend_<name>;
Step 3: Add the Detection Arm
Add a match arm to detect_compositor() in platform-linux/src/compositor.rs. Place it in the
detection order based on protocol specificity:
#[cfg(feature = "<name>")]
{
    match crate::backend_<name>::<Name>Backend::connect() {
        Ok(backend) => {
            tracing::info!("compositor backend: <name>");
            return Ok(Box::new(backend));
        }
        Err(e) => {
            tracing::info!("<name> backend unavailable, trying next: {e}");
        }
    }
}
Step 4: Add the Feature Flag
In platform-linux/Cargo.toml, add a feature flag for the new backend:
[features]
<name> = [
"desktop",
"dep:<new-protocol-crate>",
]
If the new backend uses only existing dependencies (e.g., communicating via D-Bus with zbus), no
additional optional dependencies are needed.
Feature Gating and Conditional Compilation
Platform crates use a layered feature flag model:
- No features: Headless-safe modules only (sandbox, security, systemd, dbus, cosmic_keys, cosmic_theme, clipboard trait). Suitable for server/container deployments.
- `desktop`: Wayland compositor integration, evdev input, focus monitoring. Pulls in `wayland-client`, `wayland-protocols`, `wayland-protocols-wlr`, `smithay-client-toolkit`, `evdev`.
- `cosmic`: COSMIC-specific protocols. Implies `desktop`. Pulls in `cosmic-client-toolkit` and `cosmic-protocols` (GPL-3.0).
This layering isolates build dependencies and license obligations. The cosmic feature flag
specifically isolates GPL-3.0 dependencies so that builds without COSMIC support remain under the
project’s base license.
Conditional compilation uses #[cfg(all(target_os = "linux", feature = "..."))] on module
declarations in lib.rs. Backend modules are pub(crate) so they remain internal implementation
details.
Adding a New OS Platform
To add a platform crate for a new operating system:
- Create `platform-<os>/` with `Cargo.toml` and `src/lib.rs`.
- Gate all modules with `#[cfg(target_os = "<os>")]`.
- Depend on `core-types` for shared types (`Window`, `WindowId`, `Error`, `Result`).
- Implement the same logical modules as the other platform crates (window management, clipboard, input, credential storage, daemon lifecycle). The specific API surface depends on what the OS provides.
- Use `pub(crate)` for backend implementation modules; expose only traits and factory functions as the public API.
- Add the crate to the workspace `Cargo.toml`.
- Update daemon crates to conditionally depend on the new platform crate via `[target.'cfg(target_os = "<os>")'.dependencies]`.
The platform crate should contain no business logic. It provides safe wrappers over OS APIs, and daemon crates compose these wrappers into application behavior.
Nix Packaging
Open Sesame provides a Nix flake (flake.nix) that produces two packages, an overlay, a Home Manager
module, and a development shell. The flake targets x86_64-linux and aarch64-linux.
Flake Structure
The flake uses nixpkgs (nixos-unstable) as its sole input. It exposes the following outputs:
| Output | Description |
|---|---|
| packages.<system>.open-sesame | Headless package (CLI + 4 daemons) |
| packages.<system>.open-sesame-desktop | Desktop package (3 GUI daemons); depends on headless |
| packages.<system>.default | Alias for open-sesame-desktop |
| overlays.default | Nixpkgs overlay adding both packages |
| homeManagerModules.default | Home Manager module for declarative configuration |
| devShells.<system>.default | Development shell with Rust toolchain and native dependencies |
Headless Package (nix/package.nix)
The headless package builds five binary crates with --no-default-features, disabling all
desktop/GUI code paths:
| Crate | Binary |
|---|---|
| open-sesame | sesame |
| daemon-profile | daemon-profile |
| daemon-secrets | daemon-secrets |
| daemon-launcher | daemon-launcher |
| daemon-snippets | daemon-snippets |
Build dependencies:
- nativeBuildInputs: `pkg-config`, `installShellFiles`
- buildInputs: `openssl`, `libseccomp`
The install phase copies the five binaries, the example configuration file, and five systemd user
units (the headless target plus four service files) into $out.
Source filtering uses lib.fileset.unions to include only Cargo.toml, Cargo.lock,
rust-toolchain.toml, config.example.toml, .cargo/, contrib/, and all crate directories
(matched by prefix: core-*, daemon-*, platform-*, extension-*, sesame-*, open-sesame,
xtask). Documentation, analysis files, and CI configuration are excluded.
Desktop Package (nix/package-desktop.nix)
The desktop package builds four binary crates with default features (desktop enabled):
| Crate | Binary |
|---|---|
| open-sesame | sesame (rebuilt with desktop features) |
| daemon-wm | daemon-wm |
| daemon-clipboard | daemon-clipboard |
| daemon-input | daemon-input |
Additional build dependencies beyond the headless set:
- nativeBuildInputs: adds `makeWrapper`
- buildInputs: adds `fontconfig`, `wayland`, `wayland-protocols`, `libxkbcommon`
- propagatedBuildInputs: `open-sesame` (the headless package)
The propagatedBuildInputs declaration ensures the headless binaries (sesame, daemon-profile,
daemon-secrets, daemon-launcher, daemon-snippets) appear on PATH when the desktop package
is installed.
The daemon-wm binary is wrapped with wrapProgram to set XKB_CONFIG_ROOT to
${xkeyboard-config}/etc/X11/xkb. This is required because libxkbcommon needs evdev keyboard
rules at runtime, and the Nix store path differs from the system default.
The install phase copies the three desktop systemd user units (the desktop target plus wm, clipboard,
and input service files) into $out.
cargoLock.outputHashes
Both packages declare outputHashes for three git dependencies that Cargo.lock references:
outputHashes = {
"cosmic-client-toolkit-0.2.0" = "sha256-ymn+BUTTzyHquPn4hvuoA3y1owFj8LVrmsPu2cdkFQ8=";
"cosmic-protocols-0.2.0" = "sha256-ymn+BUTTzyHquPn4hvuoA3y1owFj8LVrmsPu2cdkFQ8=";
"nucleo-0.5.0" = "sha256-Hm4SxtTSBrcWpXrtSqeO0TACbUxq3gizg1zD/6Yw/sI=";
};
The headless package includes these hashes even though it does not build the COSMIC crates, because Cargo.lock references workspace members and Cargo resolves the entire lock file before building.
Home Manager Module
The Home Manager module is available at homeManagerModules.default. It configures Open Sesame
declaratively under programs.open-sesame.
Options
| Option | Type | Default | Description |
|---|---|---|---|
| enable | bool | false | Enable the Open Sesame desktop suite |
| headless | bool | false | Headless mode: only starts profile, secrets, launcher, and snippets daemons. Omits GUI daemons and graphical-session dependency. |
| package | package | auto-selected | Defaults to open-sesame-desktop or open-sesame depending on headless |
| settings | TOML attrset | {} | WM key bindings and settings for the default profile |
| profiles | attrsOf (tomlFormat.type) | {} | Additional profile configuration keyed by trust profile name |
| logLevel | enum | "info" | RUST_LOG level for all daemons. One of: error, warn, info, debug, trace |
Generated Configuration
When settings or profiles are non-empty, the module generates
~/.config/pds/config.toml (via xdg.configFile."pds/config.toml") with config_version = 3.
The settings option populates profiles.default.wm, while profiles allows defining additional
trust profiles with launch profiles and vault configuration.
Systemd Service Generation
The module generates two systemd user targets and up to seven services:
Headless target (open-sesame-headless):
- `WantedBy = [ "default.target" ]` – starts on login regardless of graphical session
- Four services: `open-sesame-profile`, `open-sesame-secrets`, `open-sesame-launcher`, `open-sesame-snippets`
- All services declare `PartOf = [ "open-sesame-headless.target" ]`
Desktop target (open-sesame-desktop, omitted in headless mode):
- `Requires = [ "open-sesame-headless.target" "graphical-session.target" ]`
- `WantedBy = [ "graphical-session.target" ]`
- Three services: `open-sesame-wm`, `open-sesame-clipboard`, `open-sesame-input`
All services share common hardening directives:
- `Type = "notify"` with `WatchdogSec = 30`
- `Restart = "on-failure"` with `RestartSec = 5`
- `NoNewPrivileges = true`
- `LimitMEMLOCK = "64M"` (required for mlock-backed `ProtectedAlloc`)
- `LimitCORE = 0` (disables core dumps to prevent secret leakage)
- `Environment = [ "RUST_LOG=${cfg.logLevel}" ]`
Per-service hardening varies. For example, daemon-secrets sets PrivateNetwork = true and
MemoryMax = "256M", while daemon-launcher sets CapabilityBoundingSet = "" and
SystemCallArchitectures = "native".
The daemon-profile service uses ProtectHome = "read-only", ProtectSystem = "strict", and
ReadWritePaths = [ "%t/pds" "%h/.config/pds" ] to restrict filesystem access.
tmpfiles.d Rules
The module creates tmpfiles.d rules to ensure runtime directories exist before services start:
d %t/pds 0700 - - -
d %h/.config/pds 0700 - - -
d %h/.cache/open-sesame 0700 - - -
In desktop mode, an additional rule is added:
d %h/.cache/fontconfig 0755 - - -
These directories must exist on the real filesystem because ProtectSystem=strict bind-mounts
ReadWritePaths into each service’s mount namespace, and the source directory must already exist.
SSH Agent Integration
The module sets systemd.user.sessionVariables.SSH_AUTH_SOCK = "${HOME}/.ssh/agent.sock" to provide
a stable socket path for systemd user services. The daemon-profile and daemon-wm services
additionally load EnvironmentFile = [ "-%h/.config/pds/ssh-agent.env" ] (the leading - makes
the file optional).
Cachix Binary Cache
The flake declares a Cachix binary cache in its nixConfig:
nixConfig = {
extra-substituters = [ "https://scopecreep-zip.cachix.org" ];
extra-trusted-public-keys = [
"scopecreep-zip.cachix.org-1:LPiVDsYXJvgljVfZPN43zBWB7ZCGFr2jZ/lBinnPGvU="
];
};
Users who pass --accept-flake-config (or have the substituter trusted) automatically pull
pre-built binaries for both x86_64-linux and aarch64-linux.
CI pushes to the Cachix cache on every release via the nix.yml workflow using
cachix/cachix-action@v15 with the SCOPE_CREEP_CACHIX_PRIVATE_KEY secret. The same workflow
runs on pull requests for cache warming (build only, no push without the secret).
preCheck
Both packages set preCheck = "export HOME=$(mktemp -d)" to provide test isolation. Tests that
create configuration or runtime directories write to a temporary home instead of interfering with
the build sandbox.
Debian Packaging
Open Sesame ships as two .deb packages built with cargo-deb. The two-package model mirrors the
Nix split: a headless package for servers and containers, and a desktop package that adds GUI daemons
for COSMIC/Wayland.
Package Overview
open-sesame (headless)
Defined in open-sesame/Cargo.toml under [package.metadata.deb].
| Field | Value |
|---|---|
| Package name | open-sesame |
| Section | utils |
| Priority | optional |
| Depends | libc6, libgcc-s1, libseccomp2 |
| Recommends | openssh-client |
| Suggests | open-sesame-desktop |
Installed binaries (to /usr/bin/):
- `sesame` (CLI)
- `daemon-profile`
- `daemon-secrets`
- `daemon-launcher`
- `daemon-snippets`
Installed systemd units (to /usr/lib/systemd/user/):
- `open-sesame-headless.target`
- `open-sesame-profile.service`
- `open-sesame-secrets.service`
- `open-sesame-launcher.service`
- `open-sesame-snippets.service`
Additional assets:
- Man page: `/usr/share/man/man1/sesame.1.gz` (generated by xtask)
- Shell completions: bash (`/usr/share/bash-completion/completions/sesame`), zsh (`/usr/share/zsh/vendor-completions/_sesame`), and fish (`/usr/share/fish/vendor_completions.d/sesame.fish`)
- Example config: `/usr/share/doc/open-sesame/config.example.toml`
Maintainer scripts are sourced from scripts/.
open-sesame-desktop
Defined in daemon-wm/Cargo.toml under [package.metadata.deb].
| Field | Value |
|---|---|
| Package name | open-sesame-desktop |
| Section | utils |
| Priority | optional |
| Depends | open-sesame, libc6, libgcc-s1, libseccomp2, libxkbcommon0, libwayland-client0, libfontconfig1, libfreetype6, fonts-dejavu-core |
| Recommends | xdg-utils, fontconfig |
| Suggests | cosmic-desktop |
The open-sesame dependency ensures the headless daemons and CLI are installed before the desktop
layer.
Installed binaries (to /usr/bin/):
- `daemon-wm`
- `daemon-clipboard`
- `daemon-input`
Installed systemd units (to /usr/lib/systemd/user/):
- `open-sesame-desktop.target`
- `open-sesame-wm.service`
- `open-sesame-clipboard.service`
- `open-sesame-input.service`
Maintainer scripts are sourced from scripts/desktop/.
Systemd Targets
open-sesame-headless.target
[Unit]
Description=Open Sesame Headless Suite
Documentation=https://github.com/scopecreep-zip/open-sesame
[Install]
WantedBy=default.target
The headless target is wanted by default.target, meaning it activates on every user login
regardless of whether a graphical session exists. The four headless services declare
PartOf=open-sesame-headless.target.
open-sesame-desktop.target
[Unit]
Description=Open Sesame Desktop Suite
Documentation=https://github.com/scopecreep-zip/open-sesame
Requires=open-sesame-headless.target graphical-session.target
After=open-sesame-headless.target graphical-session.target
[Install]
WantedBy=graphical-session.target
The desktop target requires both the headless target (for IPC bus and secrets infrastructure) and
graphical-session.target (for Wayland compositor access). It is wanted by
graphical-session.target, so it only activates when a graphical session starts.
Service Hardening
All services in contrib/systemd/ use Type=notify with WatchdogSec=30, Restart=on-failure,
RestartSec=5, and NoNewPrivileges=yes. Resource limits include LimitMEMLOCK=64M (for
mlock-backed protected allocations), LimitCORE=0 (prevents core dumps), and MemoryMax caps
per daemon.
The daemon-profile service, which hosts the IPC bus, sets ProtectHome=read-only and
ProtectSystem=strict with ReadWritePaths=%t/pds %h/.config/pds.
Maintainer Scripts
Headless Package
postinst (scripts/postinst):
- Enables services globally with `systemctl --global enable` for the four headless services and the headless target. This persists across future logins and new users.
- Reloads all active user managers with `systemctl reload 'user@*.service'` so they see the new unit files.
- Iterates over all currently logged-in users (by parsing UIDs from `systemctl list-units 'user@*'`) and restarts each headless service using `systemctl --user -M "$uid@"` with a `SYSTEMD_BUS_TIMEOUT=25s` timeout.
prerm (scripts/prerm):
- On `remove|deconfigure`: stops all headless services for active users in reverse dependency order (snippets, launcher, secrets, profile), then disables globally.
- On `upgrade`: stops services only (does not disable). The postinst of the new version restarts with new binaries.
postrm (scripts/postrm):
- On `remove|purge`: reloads user managers to clear removed unit files. Prints a message noting that user configuration at `~/.config/pds/` is preserved.
Desktop Package
postinst (scripts/desktop/postinst):
- Enables desktop services globally: `open-sesame-wm.service`, `open-sesame-clipboard.service`, `open-sesame-input.service`, `open-sesame-desktop.target`.
- Reloads active user managers.
- Restarts desktop services for all active users.
- Prints a note that `daemon-input` requires `input` group membership for keyboard capture.
prerm (scripts/desktop/prerm):
- On `remove|deconfigure`: stops desktop services (input, clipboard, wm) for active users, then disables globally.
- On `upgrade`: stops services only.
postrm (scripts/desktop/postrm):
- On `remove|purge`: reloads user managers. Notes that headless daemons remain installed.
User Iteration Pattern
All maintainer scripts use the same active_user_uids() helper to discover logged-in users:
active_user_uids() {
systemctl list-units 'user@*' --legend=no 2>/dev/null \
| sed -n 's/.*user@\([0-9]\+\)\.service.*/\1/p'
}
This pattern is derived from systemd-update-helper.in and ensures services are managed for all
active user sessions, not just the invoking user.
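The UID extraction can be exercised in isolation by feeding it sample `systemctl list-units` output. The sample lines below are illustrative; only the `user@<uid>.service` token matters to the sed expression.

```shell
#!/bin/sh
# Extract UIDs from user@<uid>.service unit names, as the maintainer
# scripts do with `systemctl list-units 'user@*' --legend=no`.
active_user_uids() {
  sed -n 's/.*user@\([0-9]\+\)\.service.*/\1/p'
}

# Sample text resembling systemctl's list-units output:
sample='user@1000.service loaded active running User Manager for UID 1000
user@1001.service loaded active running User Manager for UID 1001'

printf '%s\n' "$sample" | active_user_uids
# prints:
# 1000
# 1001
```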
Distribution
Open Sesame uses semantic-release for automated versioning, GitHub Actions for building, SLSA attestations for supply chain security, and GitHub Pages for hosting an APT repository alongside documentation.
Semantic Release
Version management is configured in release.config.mjs. Semantic-release runs on pushes to main
and analyzes conventional commits to determine version bumps.
Release Rules
| Commit type | Release |
|---|---|
| feat | minor |
| fix | patch |
| perf | patch |
| revert | patch |
| docs (scope: README) | patch |
| refactor, style, chore, test, build, ci | no release |
| Any commit with scope no-release | no release |
Plugin Pipeline
The semantic-release plugin chain executes in order:
1. `@semantic-release/commit-analyzer` – Analyzes commits using the `conventionalcommits` preset to determine the version bump type.
2. `@semantic-release/exec` – Generates a release header from `.github/templates/RELEASE_HEADER.md`.
3. `@semantic-release/release-notes-generator` – Generates release notes from commits, categorized by type.
4. `@semantic-release/changelog` – Updates `CHANGELOG.md`.
5. `@semantic-release/exec` – Updates the `[workspace.package]` version in `Cargo.toml` using `sed`, then runs `cargo generate-lockfile` to update `Cargo.lock`.
6. `@semantic-release/git` – Commits `CHANGELOG.md`, `Cargo.toml`, and `Cargo.lock` with message `chore(release): <version> [skip ci]`.
7. `@semantic-release/github` – Creates the GitHub release.
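The Cargo.toml version rewrite performed by the exec plugin can be reproduced with a sed one-liner. The exact expression the project uses is not shown here, so the pattern below is an assumption that targets the first `version = "…"` line; `NEXT_VERSION` stands in for semantic-release's computed version.

```shell
#!/bin/sh
# Simulate the exec step against a minimal workspace Cargo.toml.
NEXT_VERSION=1.7.0

cat > Cargo.toml <<'EOF'
[workspace.package]
version = "1.6.3"
edition = "2021"
EOF

# Rewrite the version line in place (assumed pattern, not the
# project's verbatim sed expression).
sed -i "s/^version = \".*\"/version = \"$NEXT_VERSION\"/" Cargo.toml

grep '^version' Cargo.toml
# prints: version = "1.7.0"
```

In the real pipeline this is followed by `cargo generate-lockfile` so that `Cargo.lock` picks up the new workspace version before the release commit.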
Release Pipeline DAG
The release workflow (release.yml) runs on pushes to main and defines the following job
dependency graph:
semantic-release
├── build (amd64) ─┬──► attest
├── build (arm64) ─┤
│ └──► upload-assets
├── nix-cache
├── build-docs
│
└── [all above] ────────────► publish ──► cleanup
All downstream jobs gate on needs.semantic-release.outputs.new_release == 'true'. If
semantic-release determines no version bump is needed, the pipeline stops after the first job.
Job Details
semantic-release: Checks out with fetch-depth: 0, installs Node.js via mise, runs
npx semantic-release. Outputs new_release, version, and tag.
build: Runs on a dual-architecture matrix (ubuntu-24.04 for amd64, ubuntu-24.04-arm for
arm64). Installs Rust and cargo-deb via mise. Raises RLIMIT_MEMLOCK to 256 MiB with prlimit
before building (required by ProtectedAlloc). Builds .deb packages via mise tasks
(ci:build:deb / ci:build:deb:arm64), renames them with architecture suffixes, and uploads as
artifacts.
nix-cache: Calls the reusable nix.yml workflow with the release tag. Builds both open-sesame
and open-sesame-desktop for each architecture and pushes to Cachix.
build-docs: Builds rustdoc and mdBook documentation via mise run ci:docs:all and
mise run ci:docs:combine, then uploads as an artifact.
attest: Downloads all .deb artifacts and generates SLSA build provenance attestations using
actions/attest-build-provenance@v2.
upload-assets: Downloads .deb artifacts, generates SHA256SUMS.txt, renders install
instructions from a template with per-architecture checksums, and uploads all .deb files and
checksums to the GitHub release using softprops/action-gh-release@v2.
publish: Downloads .deb artifacts and documentation, imports the GPG signing key, generates
the APT repository via mise run ci:release:apt-repo, and deploys the combined APT repository and
documentation site to GitHub Pages.
cleanup: Deletes old releases, keeping the 10 most recent. Uses
dev-drprasad/delete-older-releases@v0.3.4. Tags are preserved.
APT Repository
The APT repository is hosted on GitHub Pages and generated by the ci:release:apt-repo mise task
during the publish job. The process:
1. Downloads all `.deb` artifacts into a `packages/` directory.
2. Imports the GPG private key (`GPG_PRIVATE_KEY` secret) using `crazy-max/ghaction-import-gpg@v6`.
3. Generates the `Packages` index and signs the repository with GPG.
4. Combines the APT repository with the documentation site into a single `gh-pages/` directory.
5. Deploys to GitHub Pages using `actions/deploy-pages@v5`.
The publish job runs in the github-pages environment and requires pages: write and
id-token: write permissions.
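On the consumer side, a Pages-hosted APT repository is typically added with a pinned signing key in a sources.list entry. The URL, keyring path, and suite/component names below are illustrative placeholders, not verbatim project values:

```
deb [signed-by=/usr/share/keyrings/open-sesame.gpg] https://scopecreep-zip.github.io/open-sesame stable main
```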
SLSA Build Provenance
Every .deb artifact receives a SLSA build provenance attestation generated by
actions/attest-build-provenance@v2. This runs in the attest job after the build completes. The
workflow declares attestations: write permission at the top level.
Attestations provide a cryptographic link between each .deb file and its GitHub Actions build,
allowing consumers to verify that artifacts were produced by the CI pipeline and not tampered with.
Checksum Verification
The upload-assets job generates SHA256SUMS.txt containing SHA-256 hashes for all .deb files:
sha256sum ./*.deb > SHA256SUMS.txt
The checksums file is uploaded alongside the .deb files to the GitHub release. Per-architecture
checksums are also interpolated into the release body template for inline verification instructions.
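Consumers can verify a downloaded package against the published checksums with `sha256sum -c`. A minimal sketch, using a stand-in file since the real artifact names come from the release:

```shell
#!/bin/sh
# Create a stand-in .deb and a checksum file, then verify it the same
# way a consumer would against the released SHA256SUMS.txt.
printf 'stand-in package payload' > open-sesame_1.6.3_amd64.deb
sha256sum ./*.deb > SHA256SUMS.txt

# Verification succeeds only if the file is byte-identical.
sha256sum -c SHA256SUMS.txt
# prints: ./open-sesame_1.6.3_amd64.deb: OK
```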
Workflow Permissions
The release workflow requests the following permissions:
| Permission | Purpose |
|---|---|
contents: write | Create GitHub releases, push version commits |
pages: write | Deploy APT repo and docs to GitHub Pages |
id-token: write | OIDC token for Pages deployment and attestations |
attestations: write | SLSA build provenance |
issues: write | Semantic-release issue comments |
pull-requests: write | Semantic-release PR comments |
Packaging for New Distributions
This guide covers the requirements and considerations for packaging Open
Sesame on Linux distributions beyond the officially supported Debian/Ubuntu
.deb packages and Nix flake.
Common Requirements
Regardless of distribution, all packages must satisfy the following.
Two-Package Split
Open Sesame ships as two logical packages:
- open-sesame (headless) – Contains the `sesame` CLI, `daemon-profile`, `daemon-secrets`, `daemon-launcher`, `daemon-snippets`, and their systemd user service units. Has no GUI dependencies.
- open-sesame-desktop (requires open-sesame) – Contains `daemon-wm`, `daemon-clipboard`, `daemon-input`, and the COSMIC/Wayland compositor integration. Depends on `libwayland-client`, `libxkbcommon`, and `cosmic-protocols`.
systemd User Services
All daemons run as systemd user services (systemctl --user). Packages must
install unit files to /usr/lib/systemd/user/. The services use:
- `Type=notify` with `sd_notify` readiness.
- `Restart=on-failure` with `RestartSec=2`.
- Ordering via `After=` and `Requires=` (daemon-profile starts first as the IPC bus host; all others depend on it).
LimitMEMLOCK
daemon-secrets requires mlock for secret memory. The systemd unit sets
`LimitMEMLOCK=64M`. If a packaging override or a distribution's system-wide default caps the memlock limit below this threshold, vault operations will fail. The corresponding PAM/security limit is:
# /etc/security/limits.d/open-sesame.conf
* soft memlock 65536
* hard memlock 65536
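A packaging sanity check can compare the current ulimit against the 64 MiB requirement. The helper below converts a systemd-style size such as `64M` into the KiB units that `ulimit -l` reports; it is a sketch for packagers, and the daemons do their own failure reporting at runtime.

```shell
#!/bin/sh
# Convert a systemd-style size suffix (K/M/G) to KiB for comparison
# with `ulimit -l`, which reports KiB (or "unlimited").
to_kib() {
  case "$1" in
    *K) echo "${1%K}" ;;
    *M) echo $(( ${1%M} * 1024 )) ;;
    *G) echo $(( ${1%G} * 1024 * 1024 )) ;;
    *)  echo $(( $1 / 1024 )) ;;  # bare bytes
  esac
}

required=$(to_kib 64M)   # 65536 KiB, matching the limits.d entry above
current=$(ulimit -l)

if [ "$current" != unlimited ] && [ "$current" -lt "$required" ]; then
  echo "memlock limit ${current} KiB is below required ${required} KiB" >&2
fi
```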
Binary Paths
All binaries install to /usr/bin/. Configuration lives under
~/.config/pds/ (per XDG Base Directory specification).
AUR (Arch Linux)
Arch packaging uses PKGBUILD files. Two packages are needed.
open-sesame
pkgname=open-sesame
pkgver=1.6.3
pkgrel=1
pkgdesc='Programmable desktop suite - headless daemons and CLI'
arch=('x86_64' 'aarch64')
url='https://github.com/ScopeCreep-zip/open-sesame'
license=('GPL-3.0-only')
depends=('gcc-libs' 'sqlcipher' 'openssl')
makedepends=('cargo' 'pkg-config')
build() {
cd "$srcdir/$pkgname-$pkgver"
cargo build --release \
--bin sesame \
--bin daemon-profile \
--bin daemon-secrets \
--bin daemon-launcher \
--bin daemon-snippets
}
package() {
cd "$srcdir/$pkgname-$pkgver"
for bin in sesame daemon-profile daemon-secrets daemon-launcher daemon-snippets; do
install -Dm755 "target/release/$bin" "$pkgdir/usr/bin/$bin"
done
install -Dm644 dist/systemd/*.service -t "$pkgdir/usr/lib/systemd/user/"
install -Dm644 dist/limits.conf "$pkgdir/etc/security/limits.d/open-sesame.conf"
}
open-sesame-desktop
pkgname=open-sesame-desktop
pkgver=1.6.3
pkgrel=1
pkgdesc='Programmable desktop suite - COSMIC/Wayland compositor integration'
arch=('x86_64' 'aarch64')
depends=('open-sesame' 'wayland' 'libxkbcommon' 'cosmic-protocols')
makedepends=('cargo' 'pkg-config')
build() {
cd "$srcdir/open-sesame-$pkgver"
cargo build --release \
--bin daemon-wm \
--bin daemon-clipboard \
--bin daemon-input
}
package() {
cd "$srcdir/open-sesame-$pkgver"
for bin in daemon-wm daemon-clipboard daemon-input; do
install -Dm755 "target/release/$bin" "$pkgdir/usr/bin/$bin"
done
install -Dm644 dist/systemd/daemon-wm.service -t "$pkgdir/usr/lib/systemd/user/"
install -Dm644 dist/systemd/daemon-clipboard.service -t "$pkgdir/usr/lib/systemd/user/"
install -Dm644 dist/systemd/daemon-input.service -t "$pkgdir/usr/lib/systemd/user/"
}
RPM (Fedora / RHEL)
Spec File Considerations
- BuildRequires: `cargo`, `rust-packaging`, `pkg-config`, `sqlcipher-devel`, `openssl-devel`, `wayland-devel`, `libxkbcommon-devel`.
- License tag: `GPL-3.0-only`.
- Subpackages: Use `%package desktop` for the GUI subpackage with `Requires: %{name} = %{version}-%{release}`.
- systemd macros: Use `%systemd_user_post`, `%systemd_user_preun`, and `%systemd_user_postun` for service lifecycle.
- Vendored dependencies: Fedora policy requires vendored dependencies to be audited. Run `cargo vendor` and include the vendor tarball as a secondary source.
- SELinux: daemon-secrets performs `mlock` and reads `SSH_AUTH_SOCK`. A custom SELinux policy module may be required for confined users. The base package should include a `.te` policy file or document the required booleans.
Alpine Linux
Static Linking and musl
Alpine uses musl libc. Open Sesame compiles against musl with the
x86_64-unknown-linux-musl target. Considerations:
- SQLCipher: Must be compiled against musl. Alpine's `sqlcipher` package provides this.
- OpenSSL vs. rustls: If the build uses OpenSSL for TLS, link against Alpine's `openssl-dev` (which is musl-compatible). Alternatively, `rustls` avoids the system OpenSSL dependency entirely.
- Static binary: For maximum portability, build fully static binaries with `RUSTFLAGS='-C target-feature=+crt-static'`. The resulting binaries run on any sufficiently recent Linux kernel (`mlock2` requires Linux 4.4; Landlock requires 5.13).
- No systemd: Alpine uses OpenRC by default. Provide OpenRC init scripts as an alternative to systemd user services. The init scripts must set the `MEMLOCK` ulimit and run daemons as the logged-in user, not root.
APKBUILD
The APKBUILD follows the same two-package split. Use subpackages for
the desktop variant. Alpine’s Rust packaging infrastructure supports
cargo auditable build for SBOM embedding.
Flatpak
Sandbox Implications
Flatpak introduces a second layer of sandboxing on top of Open Sesame’s own Noise IK IPC isolation and Landlock filesystem restrictions.
Key issues:
- Nested sandboxing: daemon-secrets uses `mlock`, `seccomp`, and Landlock. Inside a Flatpak sandbox, `seccomp` filters compose (the stricter filter wins), but Landlock may conflict with Flatpak's own filesystem portals.
- Unix socket access: The IPC bus uses a Unix domain socket under `$XDG_RUNTIME_DIR`. Flatpak must be configured to expose this path, or the socket must use a portal.
- SSH agent: Flatpak does not expose `SSH_AUTH_SOCK` by default. The `--socket=ssh-auth` permission is required for SSH agent unlock.
- Wayland: The desktop package requires `--socket=wayland` and access to the COSMIC compositor protocols, which may not be available through the standard Wayland portal.
For these reasons, Flatpak packaging is considered lower priority. The recommended approach is native packaging for distributions that target the COSMIC desktop.
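Were a Flatpak manifest attempted anyway, the permissions implied above would look roughly like this `finish-args` fragment. This is an illustrative sketch; the filesystem grant assumes the bus socket lives at `$XDG_RUNTIME_DIR/pds`, and no official manifest exists.

```yaml
finish-args:
  - --socket=wayland            # compositor access for the desktop daemons
  - --socket=ssh-auth           # expose SSH_AUTH_SOCK for agent unlock
  - --filesystem=xdg-run/pds    # IPC bus Unix socket under $XDG_RUNTIME_DIR
```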
Homebrew (macOS)
When platform-macos Is Implemented
Open Sesame currently targets Linux with COSMIC/Wayland. A platform-macos
crate is planned but not yet implemented. When it becomes available:
- Formula structure: A single formula covering the headless components (there is no separate desktop package on macOS; window management uses native Accessibility APIs).
- launchd: Replace systemd user services with `launchd` plist files installed to `~/Library/LaunchAgents/`.
- Keychain integration: The macOS keychain can serve as an auth backend (similar to SSH agent), replacing `mlock`-based secret memory with Secure Enclave operations where available.
- Dependencies: `sqlcipher` is available via Homebrew. No Wayland dependencies are needed.
This section will be expanded when platform-macos reaches a functional
state.
Testing Methodology
Open Sesame uses a layered testing strategy spanning unit tests, integration tests, property-based tests, and snapshot tests across the workspace.
Test Categories
Unit Tests
Unit tests are embedded in source files using #[cfg(test)] modules. They cover pure logic such as
hint assignment, configuration validation, cryptographic derivation, rate limiting, ACL enforcement,
audit logging, and type conversions. Approximately 576 test functions exist in src/ modules across
55 source files in the workspace.
Integration Tests
Integration tests live in <crate>/tests/ directories and test cross-module behavior:
| File | Test Count | Scope |
|---|---|---|
core-ipc/tests/socket_integration.rs | 21 | Noise IK encrypted IPC: connect, pub/sub, request/response, clearance enforcement, identity binding, unicast routing |
daemon-wm/tests/wm_integration.rs | 43 | Hint assignment, hint matching, overlay controller state machine, config validation |
open-sesame/tests/cli_integration.rs | 18 | CLI argument parsing, help output, exit codes (no running daemon required) |
core-memory/tests/guard_page_sigsegv.rs | 4 | Guard page SIGSEGV verification via subprocess harness |
core-ipc/tests/daemon_keypair.rs | 1 | Keypair persistence, file permissions, tamper detection |
Property-Based Tests (proptest)
The proptest crate is a dev-dependency in 10 workspace crates:
`core-types`, `core-crypto`, `core-config`, `core-secrets`, `core-profile`, `core-fuzzy`, `platform-linux`, `platform-macos`, `platform-windows`, and `extension-sdk`.
Property-based tests generate random inputs to verify invariants such as serialization round-trips, key derivation determinism, and type conversion totality.
Snapshot Tests (insta)
The insta crate is declared as a workspace dependency with yaml, json, and redactions
features. Snapshot tests capture serialized output and compare against stored reference files,
detecting unintended changes to wire formats and configuration serialization.
Test Isolation
HOME Directory Isolation
Both Nix packages and the CI pipeline set HOME=$(mktemp -d) before running tests:
export HOME=$(mktemp -d)
This is configured as preCheck in nix/package.nix and nix/package-desktop.nix. Tests that
create configuration directories (~/.config/pds/), runtime directories
($XDG_RUNTIME_DIR/pds/), or keypair files write to the temporary directory instead of the real
home.
For IPC integration tests, core-ipc/tests/daemon_keypair.rs uses
noise::set_runtime_dir_override() to redirect directory creation without mutating environment
variables, avoiding race conditions in parallel test execution.
RLIMIT_MEMLOCK Requirement
The ProtectedAlloc allocator uses mlock to pin secret-holding pages in physical memory,
preventing swap exposure. This requires a sufficient RLIMIT_MEMLOCK limit.
In CI, prlimit raises the limit before test execution:
sudo prlimit --pid $$ --memlock=268435456:268435456
This sets both soft and hard limits to 256 MiB. The same prlimit invocation is used in the build
jobs for .deb packaging and Nix builds.
Tests that allocate ProtectedAlloc instances fail with ENOMEM if the memlock limit is
insufficient. The systemd service units set LimitMEMLOCK=64M for production use.
Test Execution
CI Pipeline
Tests run via mise run ci:test in the test.yml workflow on both ubuntu-24.04 (amd64) and
ubuntu-24.04-arm (arm64). The mise task runner manages Rust toolchain installation and task
orchestration.
Nix Builds
The Nix packages run cargo tests during the build phase:
- Headless: tests the five headless crates with `--no-default-features`.
- Desktop: tests the entire workspace with `--workspace`.
Both set preCheck = "export HOME=$(mktemp -d)" for isolation.
Local Execution
Developers can run the full test suite with:
sudo prlimit --pid $$ --memlock=268435456:268435456
cargo test --workspace
The prlimit invocation is required for core-memory and any crate that transitively uses
ProtectedAlloc.
Security Tests
Open Sesame includes targeted security tests that verify memory protection, cryptographic isolation, IPC authentication, and authorization enforcement. These tests validate security invariants that, if broken, would compromise secret confidentiality.
Guard Page SIGSEGV Verification
File: core-memory/tests/guard_page_sigsegv.rs
ProtectedAlloc wraps sensitive data in page-aligned memory with guard pages on both sides. The
guard page tests verify that out-of-bounds access triggers SIGSEGV (signal 11) rather than silently
reading adjacent memory.
Subprocess Harness Pattern
Direct SIGSEGV in a test process would kill the entire test runner. The tests use a subprocess harness:
- The parent test (`overflow_hits_trailing_guard_page`, `underflow_hits_leading_guard_page`) spawns the test binary as a child process, targeting a specific harness function with `--exact` and passing an environment variable `__GUARD_PAGE_HARNESS` to gate execution.
- The child harness (`overflow_harness`, `underflow_harness`) checks for the environment variable. If absent, it returns immediately (a no-op when run as part of the normal test suite). If present, it allocates a `ProtectedAlloc`, performs a deliberate out-of-bounds read, and calls `exit(1)` as an unreachable fallback.
- The parent inspects the child's exit status. On Unix, it checks `status.signal()` for SIGSEGV (11) or SIGBUS (7). As a fallback for platforms that encode signal death as exit code 128+signal, it also checks the exit code.
Test Coverage
- Trailing guard page: reads one byte past `ptr.add(len)`, triggering SIGSEGV on the guard page after the data region.
- Leading guard page: reads one full page before the data pointer (`ptr.sub(page_size)`), past any canary and padding, into the guard page between the metadata region and the data region. Accepts both SIGSEGV (11) and SIGBUS (7).
Canary Verification
File: core-memory/src/alloc.rs (unit tests)
ProtectedAlloc writes a canary value into the metadata region during allocation. Unit tests verify
canary behavior:
- `canary_is_consistent`: verifies that canary derivation is deterministic – the same allocation size always produces the same canary.
- `alloc_canary_plus_data_spans_page_boundary`: verifies correct behavior when the canary plus user data cross a page boundary.
The canary is checked on Drop. If the canary has been corrupted (indicating a buffer underflow or
use-after-free into the metadata region), the allocator detects the tampering.
Postcard Wire Format Compatibility
File: core-types/src/sensitive.rs
SensitiveBytes provides custom Serialize and Deserialize implementations to maintain wire
compatibility with postcard (the IPC serialization format). The serializer writes raw bytes directly
from protected memory via serialize_bytes. The deserializer implements a custom Visitor with two
paths:
- Zero-copy path (`visit_bytes`): copies directly from the deserializer's borrowed input buffer into a `ProtectedAlloc`. No intermediate heap `Vec<u8>` is created. This is the path postcard uses for in-memory deserialization.
- Owned path (`visit_byte_buf`): accepts an owned `Vec<u8>`, copies into `ProtectedAlloc`, then zeroizes the `Vec<u8>` before dropping it.
This ensures that SensitiveBytes and Vec<u8> produce identical wire representations, maintaining
backward compatibility with any code that previously used plain byte vectors.
Cross-Profile Vault Isolation
File: core-secrets/src/sqlcipher.rs (unit tests)
SQLCipher vaults are encrypted with per-profile vault keys derived via BLAKE3 domain separation. Three tests verify isolation:
- `cross_profile_keys_are_independent`: derives vault keys for profiles "work" and "personal" from the same master key and asserts the derived keys differ. Opens a vault with the "work" key, stores a secret, then attempts to reopen the same database file with the "personal" key. The `SqlCipherStore::open` call must return an error because SQLCipher cannot decrypt pages with the wrong key.
- `cross_profile_secret_access_returns_error`: creates two separate vault databases for "work" and "personal" profiles. Stores a secret in the "work" vault, then attempts to read the same key name from the "personal" vault. The result must be `Err(core_types::Error::NotFound(_))`.
- `vault_key_derivation_domain_separation`: verifies that `core_crypto::derive_vault_key` produces distinct keys for different profile names, confirming the BLAKE3 domain separation functions correctly.
IPC Authentication and Authorization
File: core-ipc/tests/socket_integration.rs
The IPC integration tests verify several security invariants of the Noise IK transport and bus server:
Noise Handshake Rejection
noise_handshake_rejects_wrong_key: a client connects expecting an incorrect server public key.
The Noise IK handshake fails because the client’s static key lookup does not match the server’s
actual identity.
Clearance Escalation Blocking
clearance_escalation_blocked: a client registered at SecurityLevel::Open attempts to publish a
message at SecurityLevel::Internal. The bus server silently drops the frame. An
Internal-clearance receiver does not receive it.
Sender Identity Binding
sender_identity_change_blocked: after a client’s first message binds its DaemonId to the
connection, any subsequent message with a different DaemonId is dropped. This prevents a
compromised client from impersonating another daemon mid-session.
Verified Sender Name Stamping
verified_sender_name_stamped: messages routed through the bus carry a verified_sender_name field
stamped by the server from the Noise IK registry lookup. The sender cannot self-declare this field.
The test verifies the stamped name matches the registry entry ("test-client-0"), not anything the
sender included in the message payload.
Unicast Response Routing
secret_response_not_received_by_bystander: when a request/response pair uses correlation IDs, the
response is unicast-routed to the original requester only. A bystander client connected to the same
bus does not receive the correlated response.
Orphan Response Dropping
uncorrelated_response_is_dropped: a message with a fabricated correlation_id (no matching
pending request) is silently dropped by the bus server and not broadcast to any client.
Ephemeral Client UCred Validation
ephemeral_client_gets_secrets_only_clearance: an unregistered key (ephemeral CLI connection) that
passes UCred same-UID validation receives SecretsOnly clearance, allowing it to send unlock and
secret CRUD messages without being pre-registered in the key registry.
Clearance-Level Message Filtering
secrets_only_message_not_delivered_to_internal_daemon: a message published at SecretsOnly level
is not delivered to Internal-clearance recipients, since Internal < SecretsOnly in the clearance
hierarchy.
Keypair Persistence Security
File: core-ipc/tests/daemon_keypair.rs
This test verifies filesystem security invariants for daemon keypair storage:
- The keys directory has `0700` permissions.
- Private key files (`.key`) have `0600` permissions.
- Public key files (`.pub`) have `0644` permissions.
- Bus keypair files (`bus.key`, `bus.pub`, `bus.checksum`) have correct permissions.
- Corrupting the checksum file triggers a `TAMPER DETECTED` error on the next read, preventing use of tampered keypairs.
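The permission invariants can be spot-checked from the shell. The sketch below recreates the documented layout in a scratch directory (the keys-directory location is hypothetical; only the modes and file names mirror the invariants above) and verifies it with `stat`, the same way an auditor might inspect the real directory.

```shell
#!/bin/sh
# Recreate the documented permission layout in a scratch directory
# and verify it with stat.
keys=$(mktemp -d)/keys           # hypothetical location
install -dm700 "$keys"
install -m600 /dev/null "$keys/bus.key"
install -m644 /dev/null "$keys/bus.pub"

stat -c '%a %n' "$keys" "$keys/bus.key" "$keys/bus.pub"
# prints (paths abbreviated):
#   700 .../keys
#   600 .../keys/bus.key
#   644 .../keys/bus.pub
```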
Seccomp Allowlist
Each daemon applies a seccomp filter via platform_linux::sandbox::apply_seccomp. The function uses
libseccomp to install a BPF filter with a default-deny policy (SCMP_ACT_ERRNO(EPERM)), adding
only the syscalls required by each daemon’s SeccompProfile. This prevents an attacker who gains
code execution within a daemon from making arbitrary system calls.
Seccomp is combined with Landlock filesystem restrictions in the apply_sandbox function, which
each daemon calls during initialization. Per-daemon sandbox configurations are defined in each
daemon’s sandbox.rs module (e.g., daemon-secrets/src/sandbox.rs, daemon-wm/src/sandbox.rs).
Daemons that do not need network access (e.g., daemon-secrets) additionally set
PrivateNetwork=true at the systemd level.
IPC Integration Tests
The core-ipc crate includes a comprehensive integration test suite in
core-ipc/tests/socket_integration.rs. All tests exercise the full Noise IK
encrypted transport – there is no plaintext transport path in the codebase.
Test Infrastructure
Helpers
The test suite provides three helper functions:
- `start_server_with_clients(n)` – Creates a temporary directory, generates a server keypair, registers `n` client keypairs at `SecurityLevel::Internal` in a `ClearanceRegistry`, binds a `BusServer` to a Unix socket, and returns the server, temp directory, server public key, and client keypairs.
- `start_server()` – Convenience wrapper that registers a single client at Internal clearance.
- `connect_with_keypair(id, sock, server_pub, kp)` – Connects a `BusClient` via `connect_encrypted` with the given `DaemonId` and keypair.
All tests create isolated Unix sockets in tempfile::TempDir instances,
ensuring no shared state between tests.
Test Coverage
Server Lifecycle
server_bind_creates_socket_file – Verifies that BusServer::bind
creates the Unix socket file on disk, including parent directory creation.
client_connect_and_server_accept – Verifies that after a client
performs a Noise IK handshake, the server reports a connection count of 1.
Publish-Subscribe
publish_subscribe_roundtrip – Client A publishes a DaemonStarted
event at Internal level. Client B receives it and verifies the event kind
matches. Confirms that broadcast delivery works end-to-end over encrypted
transport.
sender_does_not_receive_own_message – A client publishes a message
and then attempts to receive. The receive times out, confirming that the bus
server does not echo messages back to the sender.
multiple_clients_receive_broadcast – One sender, two receivers. Both
receivers get the ConfigReloaded event, verifying fan-out broadcast.
Request-Response
request_response_correlation – Client A sends a SecretList request
via client.request(). Client B receives it, constructs a
SecretListResponse with .with_correlation(request_msg.msg_id), and sends
it back. Client A’s request() future resolves with the correlated response.
Verifies that correlation ID routing works correctly.
launch_execute_response_roundtrip – End-to-end test of the
LaunchExecute / LaunchExecuteResponse request-response pair, simulating
the CLI sending a launch command and daemon-launcher responding with a PID.
launch_execute_error_roundtrip – Same as above, but the launcher
responds with error: Some("desktop entry 'nonexistent' not found") and
denial: Some(LaunchDenial::EntryNotFound). Verifies error propagation
through the correlated response path.
request_timeout – A client sends a StatusRequest with a 100 ms
timeout. No responder exists. The request() call returns an error
containing “timed out”.
Unicast Routing
secret_response_not_received_by_bystander – Three clients: a
requester, a bystander, and a simulated secrets daemon. The requester sends
a SecretList request. Both the bystander and the secrets daemon receive
the broadcast request. The secrets daemon responds with a correlated
SecretListResponse. The requester receives it, but the bystander does not.
This verifies that correlated responses are unicast-routed to the original
requester, not broadcast.
uncorrelated_response_is_dropped – A client sends a response message
with a fabricated correlation ID that matches no pending request. The bus
server drops it; no other client receives it. This prevents response
injection attacks.
Noise Handshake Security
noise_handshake_rejects_wrong_key – A client attempts to connect
using a server public key that does not match the actual server. The Noise IK
handshake fails, and connect_encrypted returns an error. This is a
fundamental authentication property of the IK pattern: the initiator pins
the responder’s static key.
client_connect_retry_on_missing_socket – A client attempts to connect
to a nonexistent socket path. The connection fails with an error containing
“failed to connect” rather than hanging or panicking.
Clearance Enforcement
clearance_escalation_blocked – Two clients are registered: one at
SecurityLevel::Open, one at SecurityLevel::Internal. The Open-clearance
client publishes a message at Internal level. The Internal-clearance client
does not receive it. The bus server silently drops frames that exceed the
sender’s clearance.
secrets_only_message_not_delivered_to_internal_daemon – A client
registered at SecurityLevel::SecretsOnly publishes at SecretsOnly level.
A client registered at SecurityLevel::Internal does not receive it. This
verifies the lattice property: Internal clearance is below SecretsOnly, so
Internal recipients are excluded from SecretsOnly-level messages. This
isolation ensures that daemon-secrets traffic is partitioned from general
daemon traffic.
ephemeral_client_gets_secrets_only_clearance – A client connects with
an unregistered keypair (not in the ClearanceRegistry). The connection
succeeds via UCred same-UID validation, and the server reports 1 connection.
Ephemeral clients (typically sesame CLI invocations) receive SecretsOnly
clearance, allowing them to interact with daemon-secrets without being
pre-registered.
Sender Identity
sender_identity_change_blocked – A client sends a first message with
DaemonId(20), binding that identity to the connection. It then sends a
second message with DaemonId(99). The receiver gets the first message but
not the second. The bus server drops messages where the sender’s DaemonId
does not match the identity bound on the first message, preventing identity
spoofing mid-session.
verified_sender_name_stamped – A client registered as
"test-client-0" sends a message. The receiver inspects
msg.verified_sender_name and finds it set to Some("test-client-0"). This
field is stamped by the server from the Noise IK registry lookup, not
self-declared by the sender. Recipients can trust this field for
authorization decisions.
Cross-Daemon Routing
registered_client_overlay_reaches_daemon_wm – A CLI client registered
at Internal clearance publishes WmActivateOverlay. A simulated daemon-wm
client receives it. Verifies the overlay activation path from CLI to window
manager.
Graceful Shutdown
shutdown_flushes_publish_before_disconnect – A CLI client publishes
WmActivateOverlay then calls client.shutdown().await. A daemon-wm
client receives the event after the CLI has disconnected. This verifies that
shutdown() flushes outbound frames before closing the connection.
drop_without_shutdown_may_lose_message – A CLI client publishes then
immediately drops the client handle without calling shutdown(). The test
documents that this races the I/O task and message delivery is
non-deterministic. This test exists as a regression companion to the
shutdown_flushes test: it demonstrates the data loss that shutdown() was
introduced to prevent.
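The race these two tests document can be reproduced with plain threads and a channel: frames handed to a background I/O task are only guaranteed to be written if the caller closes the channel and joins the task. This std-only sketch uses invented names (Client, publish, shutdown) that merely mirror the shape of the real async client.

```rust
use std::sync::mpsc;
use std::thread;

// A client hands frames to a background I/O task over a channel. shutdown()
// closes the channel and joins the task, guaranteeing the frames are flushed;
// a plain drop would race the task instead.
struct Client {
    tx: Option<mpsc::Sender<String>>,
    io_task: Option<thread::JoinHandle<Vec<String>>>,
}

impl Client {
    fn new() -> Self {
        let (tx, rx) = mpsc::channel::<String>();
        // The "I/O task" drains frames until the channel closes.
        let io_task = thread::spawn(move || rx.iter().collect::<Vec<String>>());
        Client { tx: Some(tx), io_task: Some(io_task) }
    }

    fn publish(&self, frame: &str) {
        self.tx.as_ref().unwrap().send(frame.to_string()).unwrap();
    }

    /// Flush all outbound frames before disconnecting.
    fn shutdown(mut self) -> Vec<String> {
        drop(self.tx.take()); // close the channel so the drain terminates
        self.io_task.take().unwrap().join().unwrap()
    }
}

fn main() {
    let client = Client::new();
    client.publish("WmActivateOverlay");
    let flushed = client.shutdown();
    assert_eq!(flushed, vec!["WmActivateOverlay".to_string()]);
}
```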
CI Pipeline
Open Sesame uses four GitHub Actions workflows for testing, documentation, release, and Nix builds.
Workflow Overview
| Workflow | File | Triggers | Purpose |
|---|---|---|---|
| Test | test.yml | Push to main/master, PRs | Run cargo test on dual architectures |
| Docs | docs.yml | Push to main/master, PRs | Build rustdoc and mdBook |
| Release | release.yml | Push to main, manual dispatch | Semantic-release, build, attest, publish |
| Nix | nix.yml | Called by release.yml, PRs | Build Nix packages and push to Cachix |
test.yml
The test workflow runs on every push to main/master and on pull requests targeting those
branches.
Dual-Architecture Matrix
matrix:
include:
- arch: amd64
runner: ubuntu-24.04
- arch: arm64
runner: ubuntu-24.04-arm
Both runners use Ubuntu 24.04. ARM builds use GitHub’s native ubuntu-24.04-arm runner (not
emulation).
Execution
- Checks out the repository.
- Installs the Rust toolchain via jdx/mise-action@v4 with caching enabled.
- Raises RLIMIT_MEMLOCK to 256 MiB with sudo prlimit --pid $$ --memlock=268435456:268435456. This is required because ProtectedAlloc uses mlock to pin secret-holding memory pages.
- Runs mise run ci:test.
The MISE_AUTO_INSTALL environment variable is set to "false" to prevent automatic tool
installation outside the explicit mise-action step.
docs.yml
The docs workflow runs on pushes and PRs to main/master. It runs on ubuntu-latest (single
architecture).
- Checks out the repository.
- Installs Rust via mise with caching.
- Runs mise run ci:docs to build documentation.
This workflow validates that documentation builds succeed but does not deploy. Deployment occurs in
the release workflow’s build-docs and publish jobs.
release.yml
The release workflow is the primary CI/CD pipeline. It triggers on pushes to main and supports
manual dispatch with a dry-run option.
Permissions
The workflow declares the following permissions:
- contents: write – GitHub release creation, version commits
- pages: write – GitHub Pages deployment
- id-token: write – OIDC tokens for Pages and attestations
- attestations: write – SLSA build provenance
- issues: write, pull-requests: write – semantic-release comments
Job Dependency Graph
semantic-release ──┬──► build (amd64) ──┬──► attest
├──► build (arm64) ──┤
│ └──► upload-assets
├──► nix-cache
├──► build-docs
│
└──► [build + upload-assets + build-docs] ──► publish ──► cleanup
All jobs after semantic-release are gated on new_release == 'true'.
Build Job
The build job uses the same dual-architecture matrix as the test workflow. It installs rust and
cargo:cargo-deb via mise, raises the memlock limit, and runs architecture-specific mise tasks:
| Architecture | Build Task | Rename Task |
|---|---|---|
| amd64 | ci:build:deb | ci:release:rename-deb |
| arm64 | ci:build:deb:arm64 | ci:release:rename-deb:arm64 |
The rename task adds architecture suffixes to the .deb filenames. Artifacts are uploaded with
1-day retention.
Nix Cache Job
Calls the reusable nix.yml workflow, passing the release tag and the
SCOPE_CREEP_CACHIX_PRIVATE_KEY secret.
Build Docs Job
Checks out the release tag, runs mise run ci:docs:all and mise run ci:docs:combine to produce
a combined rustdoc and mdBook site. The result is uploaded as a documentation artifact.
Publish Job
The publish job:
- Downloads .deb artifacts and documentation.
- Imports the GPG signing key via crazy-max/ghaction-import-gpg@v6.
- Runs mise run ci:release:apt-repo to generate the signed APT repository.
- Deploys the combined APT repository and documentation to GitHub Pages via actions/deploy-pages@v5.
This job runs in the github-pages environment.
nix.yml
The Nix workflow serves dual purposes:
- Reusable workflow: called by release.yml with a tag input to build and push release artifacts to Cachix.
- Standalone PR workflow: runs on PRs to main for cache warming (builds packages but the Cachix action only pushes when the auth token is available).
Matrix
matrix:
include:
- system: x86_64-linux
runner: ubuntu-24.04
- system: aarch64-linux
runner: ubuntu-24.04-arm
Execution
- Checks out at the specified tag (or current ref for PRs).
- Installs Nix via cachix/install-nix-action@v31.
- Configures Cachix via cachix/cachix-action@v15 with the scopecreep-zip cache name.
- Raises the memlock limit.
- Builds both open-sesame and open-sesame-desktop for the matrix system with --accept-flake-config -L.
Mise Task Runner
All workflows use jdx/mise-action@v4 to install tools and run tasks. Mise manages:
- Rust toolchain version (from rust-toolchain.toml or mise config)
- Node.js (for semantic-release in the release workflow)
- cargo-deb (for .deb packaging in the build job)
Task names follow the convention ci:<category>:<action> (e.g., ci:test, ci:build:deb,
ci:docs:all, ci:release:apt-repo).
Environment Variables
| Variable | Value | Purpose |
|---|---|---|
| CARGO_TERM_COLOR | always | Colored cargo output in CI logs |
| MISE_AUTO_INSTALL | false | Prevent implicit tool installation |
Compliance Framework Mapping
This page maps Open Sesame’s security controls to specific requirements in NIST 800-53, DISA STIG, PCI-DSS, SOC 2, and FedRAMP. Controls that are fully implemented cite the source crate or configuration. Controls that depend on design-intent features are marked accordingly.
NIST 800-53 Rev. 5
AC – Access Control
| Control | Title | Open Sesame Mechanism | Status |
|---|---|---|---|
| AC-3 | Access Enforcement | SecurityLevel clearance hierarchy (core-types/src/security.rs): Open < Internal < ProfileScoped < SecretsOnly. Each daemon registers at a clearance level; messages are routed only to peers at sufficient clearance. CapabilitySet enforces per-agent authorization. | Implemented |
| AC-4 | Information Flow Enforcement | IPC bus enforces sender clearance: a daemon cannot emit messages above its own SecurityLevel. Recipient filtering ensures low-clearance daemons never receive high-clearance messages (core-ipc/src/server.rs). | Implemented |
| AC-6 | Least Privilege | Per-daemon Landlock filesystem sandboxing, seccomp-bpf syscall allowlists, systemd NoNewPrivileges=yes, empty capability bounding set, ProtectSystem=strict. | Implemented |
| AC-6(1) | Authorize Access to Security Functions | Capability::Admin, Capability::Unlock, Capability::Lock restricted to agents with explicit grants. Delegation narrows scope via CapabilitySet.intersection(). | Implemented |
| AC-6(9) | Log Use of Privileged Functions | BLAKE3 hash-chained audit log records all vault operations (core-profile). | Implemented |
| AC-17 | Remote Access | Noise IK mutual authentication for all IPC. SSH agent forwarding for remote vault unlock. PrivateNetwork=yes on secrets daemon. | Implemented |
AU – Audit and Accountability
| Control | Title | Open Sesame Mechanism | Status |
|---|---|---|---|
| AU-2 | Event Logging | Structured JSON logging (global.logging.json = true), journald integration. Events include: unlock/lock, secret CRUD, profile activation, daemon lifecycle. | Implemented |
| AU-3 | Content of Audit Records | Each entry includes: timestamp, agent identity, operation, profile, security level. AgentIdentity provides agent type, delegation chain, attestations. | Implemented |
| AU-10 | Non-repudiation | BLAKE3 hash-chained audit log. Each entry’s hash chains to the previous. sesame audit verify detects tampering. | Implemented |
| AU-11 | Audit Record Retention | Audit chain files persist on disk indefinitely. Retention policy is delegated to the operating environment. | Implemented (storage) |
| AU-12 | Audit Record Generation | All daemons emit structured log events. The audit chain is generated by core-profile’s audit logger. | Implemented |
IA – Identification and Authentication
| Control | Title | Open Sesame Mechanism | Status |
|---|---|---|---|
| IA-2 | Identification and Authentication (Organizational Users) | AuthFactorId enum: Password, SshAgent, Fido2, Tpm, Fingerprint, Yubikey (core-types/src/auth.rs). Password and SshAgent backends implemented. | Partially Implemented |
| IA-2(1) | Multi-Factor Authentication to Privileged Accounts | AuthCombineMode: Any, All, Policy (core-types/src/auth.rs). Policy mode supports threshold-based MFA (N required factors + M additional). | Implemented |
| IA-2(6) | Access to Accounts – Separate Device | Hardware security keys (FIDO2, YubiKey) as separate physical devices. SSH agent forwarding uses the operator’s local key. | Partially Implemented (SSH agent implemented; FIDO2/YubiKey defined but backends not yet implemented) |
| IA-5 | Authenticator Management | Argon2id KDF (19 MiB, 2 iterations) for password. BLAKE3 domain-separated key derivation. Per-profile salts. | Implemented |
| IA-5(2) | Public Key-Based Authentication | Noise IK X25519 static keys for IPC. SSH agent Ed25519/RSA keys for vault unlock. | Implemented |
SC – System and Communications Protection
| Control | Title | Open Sesame Mechanism | Status |
|---|---|---|---|
| SC-8 | Transmission Confidentiality and Integrity | Noise IK protocol: X25519 + ChaChaPoly + BLAKE2s with forward secrecy. All IPC authenticated and encrypted. | Implemented |
| SC-12 | Cryptographic Key Establishment and Management | BLAKE3 domain-separated key hierarchy. Master key derived from auth factors. Sub-keys derived via BLAKE3 derive_key with unique context strings per purpose. | Implemented |
| SC-13 | Cryptographic Protection | CryptoConfig (core-types/src/crypto.rs) with configurable algorithm selection. GovernanceCompatible profile uses NIST-approved algorithms (PBKDF2-SHA256, HKDF-SHA256, AES-GCM, SHA-256). | Implemented |
| SC-28 | Protection of Information at Rest | SQLCipher: AES-256-CBC + HMAC-SHA512 per page. Per-profile encryption keys. | Implemented |
| SC-28(1) | Cryptographic Protection (at Rest) | Vault files are encrypted at rest with keys derived from Argon2id KDF output through BLAKE3 domain-separated derivation. | Implemented |
| SC-39 | Process Isolation | Per-daemon systemd services with Landlock, seccomp-bpf, NoNewPrivileges, ProtectSystem=strict. Secrets daemon: PrivateNetwork=yes. | Implemented |
SI – System and Information Integrity
| Control | Title | Open Sesame Mechanism | Status |
|---|---|---|---|
| SI-7 | Software, Firmware, and Information Integrity | GPG-signed APT packages. SLSA build provenance. OciReference with provenance digest for extensions. WASM extensions identified by content hash (AgentType::Extension { manifest_hash }). | Implemented |
| SI-16 | Memory Protection | memfd_secret(2) removes pages from kernel direct map. Guard pages (PROT_NONE). Volatile zeroize on drop. LimitCORE=0, MADV_DONTDUMP. | Implemented |
DISA STIG
| STIG Requirement | Open Sesame Mechanism | Status |
|---|---|---|
| Encrypted storage at rest | SQLCipher AES-256-CBC + HMAC-SHA512, per-profile encryption keys | Implemented |
| Memory protection for credentials | memfd_secret(2), guard pages, canary verification, volatile zeroize | Implemented |
| Audit trail integrity | BLAKE3 hash chain with tamper detection via sesame audit verify | Implemented |
| Least privilege process isolation | Landlock, seccomp-bpf, per-daemon clearance levels, systemd hardening | Implemented |
| No core dumps | LimitCORE=0 in all daemon services, MADV_DONTDUMP on secure allocations | Implemented |
| Authentication strength | Argon2id with memory-hard parameters (19 MiB). Multi-factor support. | Implemented |
| Access control for sensitive data | SecurityLevel hierarchy, CapabilitySet authorization | Implemented |
| Session management | Heartbeat-based delegation with TTL expiry, TrustVector.authz_freshness | Implemented (types); Design Intent (runtime enforcement) |
PCI-DSS v4.0
Requirement 3: Protect Stored Account Data
| Sub-Requirement | Open Sesame Mechanism |
|---|---|
| 3.5.1 Restrict access to cryptographic keys | Master key held in memfd_secret(2) memory, accessible only to the owning daemon process. Key derivation hierarchy: master key -> per-profile vault key -> SQLCipher page key. |
| 3.5.1.2 Store secret keys in fewest possible locations | One master key per installation, derived into per-profile keys. Master key exists only in protected memory; never on disk in plaintext. |
| 3.6.1 Key management procedures | sesame init generates keys. AuthCombineMode defines unlock policy. Key rotation via re-enrollment. |
Requirement 7: Restrict Access to System Components and Cardholder Data
| Sub-Requirement | Open Sesame Mechanism |
|---|---|
| 7.2.1 Access control system | CapabilitySet per agent. SecurityLevel per daemon. DelegationGrant for scoped access transfer. |
| 7.2.2 Assign access based on job classification | Trust profiles map to roles. Each profile has its own vault with its own secrets. |
Requirement 8: Identify Users and Authenticate Access
| Sub-Requirement | Open Sesame Mechanism |
|---|---|
| 8.3.1 All user access authenticated | All IPC authenticated via Noise IK. Vault unlock requires enrolled factor(s). |
| 8.3.2 Strong authentication for all access | Argon2id (memory-hard). Multi-factor via AuthCombineMode. Hardware factors defined. |
| 8.6.1 System and application accounts managed | AgentIdentity with typed AgentType, capability scoping, delegation chains. |
Requirement 10: Log and Monitor All Access
| Sub-Requirement | Open Sesame Mechanism |
|---|---|
| 10.2.1 Audit logs capture events | BLAKE3 hash-chained audit log, structured JSON logging. |
| 10.2.1.2 All actions by administrative accounts | Capability::Admin operations logged with full agent identity and delegation chain. |
| 10.3.1 Audit log protected from tampering | Hash chain provides tamper evidence. sesame audit verify detects modification. |
SOC 2 Trust Service Criteria
| Criteria | Category | Open Sesame Mechanism |
|---|---|---|
| CC6.1 | Logical and Physical Access Controls | SecurityLevel hierarchy, CapabilitySet, Noise IK authentication, per-daemon sandbox |
| CC6.2 | Prior to Issuing System Credentials | sesame init with factor enrollment. AgentIdentity creation with attestation. |
| CC6.3 | Based on Authorization | CapabilitySet intersection for delegation. Policy-based approval gates (Design Intent). |
| CC6.6 | Restrict Access | Landlock, seccomp-bpf, PrivateNetwork=yes (secrets daemon), ProtectHome=read-only |
| CC6.7 | Restrict Transmission | Noise IK encryption for all IPC. No plaintext secret transmission. |
| CC6.8 | Prevent or Detect Unauthorized Software | WASM extensions identified by manifest_hash. OciReference with provenance. GPG-signed packages. |
| CC7.1 | Monitor Infrastructure | systemd watchdog (WatchdogSec=30), structured logging, sesame status |
| CC7.2 | Monitor for Anomalies | Rate-limited vault unlock attempts. Audit chain verification. |
| CC8.1 | Changes to Infrastructure | Configuration layered inheritance with PolicyOverride audit trail |
FedRAMP
FedRAMP baselines inherit from NIST 800-53. The controls mapped in the NIST 800-53 section above apply to FedRAMP at the corresponding baseline level (Low, Moderate, High).
Cryptographic Algorithm Compliance
FedRAMP requires FIPS 140-validated cryptographic modules. Open Sesame provides a
GovernanceCompatible crypto profile (core-types/src/crypto.rs) that selects
NIST-approved algorithms:
| Component | LeadingEdge (Default) | GovernanceCompatible |
|---|---|---|
| KDF | Argon2id | PBKDF2-SHA256 (600K iterations) |
| HKDF | BLAKE3 | HKDF-SHA256 |
| Noise cipher | ChaChaPoly | AES-256-GCM |
| Noise hash | BLAKE2s | SHA-256 |
| Audit hash | BLAKE3 | SHA-256 |
The GovernanceCompatible profile uses algorithms that have FIPS 140-validated
implementations in widely-used cryptographic libraries. Open Sesame itself is not
FIPS-validated; deployments requiring FIPS validation must use a FIPS-validated
cryptographic provider at the library level. See
Cryptographic Inventory for the full algorithm inventory.
Cryptographic Inventory
This page provides an exhaustive inventory of every cryptographic algorithm used in Open Sesame, where it is used, the key sizes and parameters, and the relevant standards references.
Algorithm Summary
| Algorithm | Purpose | Key Size | Standard | Crate |
|---|---|---|---|---|
| Argon2id | Password -> master key derivation | 32 bytes output | RFC 9106 | core-crypto (kdf.rs) |
| PBKDF2-SHA256 | Password -> master key (governance-compatible) | 32 bytes output | NIST SP 800-132, RFC 8018 | core-crypto (kdf.rs) |
| BLAKE3 derive_key | Master key -> per-purpose sub-keys | 32 bytes output | BLAKE3 spec (domain-separated KDF mode) | core-crypto (hkdf.rs) |
| HKDF-SHA256 | Master key -> per-purpose sub-keys (governance-compatible) | 32 bytes output | RFC 5869, NIST SP 800-56C | core-crypto (hkdf.rs) |
| AES-256-GCM | Key wrapping (PasswordWrapBlob, EnrollmentBlob) | 256-bit key | NIST SP 800-38D, FIPS 197 | core-crypto |
| AES-256-CBC + HMAC-SHA512 | SQLCipher page encryption | 256-bit key (encrypt) + 512-bit key (MAC) | FIPS 197, FIPS 198-1 | SQLCipher (via rusqlite) |
| X25519 | Noise IK key agreement | 256-bit (32 bytes) | RFC 7748 | snow (via core-ipc) |
| ChaChaPoly | Noise IK transport encryption (default) | 256-bit key | RFC 7539 | snow (via core-ipc) |
| BLAKE2s | Noise IK hashing (default) | 256-bit output | RFC 7693 | snow (via core-ipc) |
| AES-256-GCM (Noise) | Noise IK transport encryption (governance-compatible) | 256-bit key | NIST SP 800-38D | snow (via core-ipc) |
| SHA-256 (Noise) | Noise IK hashing (governance-compatible) | 256-bit output | FIPS 180-4 | snow (via core-ipc) |
| BLAKE3 | Audit log hash chain (default) | 256-bit output | BLAKE3 spec | core-profile |
| SHA-256 | Audit log hash chain (governance-compatible) | 256-bit output | FIPS 180-4 | core-profile |
| Ed25519 | Delegation grant signatures | 256-bit key (32 bytes) | RFC 8032 | core-types (security.rs) |
Argon2id
Standard: RFC 9106
Purpose: Derives the master key from a user-supplied password. Used by the Password
authentication factor (AuthFactorId::Password in core-types/src/auth.rs).
Parameters:
| Parameter | Value | Rationale |
|---|---|---|
| Memory | 19 MiB (19,456 KiB) | Memory-hard to resist GPU/ASIC attacks |
| Iterations | 2 | Balanced with memory cost for interactive use |
| Parallelism | 1 | Single-threaded derivation |
| Output | 32 bytes | 256-bit master key |
| Salt | 16 bytes, per-profile, random | Unique per vault |
Implementation: core-crypto/src/kdf.rs, function derive_key_argon2.
Known residual: The Argon2id working memory (19 MiB) resides on the unprotected heap
during derivation. This is an upstream limitation of the argon2 crate. See GitHub
issue #14.
BLAKE3 Key Derivation
Standard: BLAKE3 specification, KDF mode
Purpose: Derives per-purpose sub-keys from the master key using domain-separated context strings. Each context string is globally unique and hardcoded.
Context strings used in the system:
| Context String | Purpose | Source |
|---|---|---|
| "pds v2 vault-key {profile}" | SQLCipher vault encryption key | core-crypto/src/hkdf.rs |
| "pds v2 clipboard-key {profile}" | Clipboard encryption key | core-crypto/src/hkdf.rs |
| "pds v2 ipc-auth-token {profile}" | IPC bus authentication token | core-crypto/src/hkdf.rs |
| "pds v2 ipc-encryption-key {profile}" | IPC field encryption key | core-crypto/src/hkdf.rs |
| "pds v2 ssh-vault-kek {profile}" | SSH agent KEK derivation | core-auth |
| "pds v2 combined-master-key {profile}" | Combined key from all factors (All mode) | core-auth |
| "pds v2 kek-salt {profile}" | Salt derivation for KEK wrapping | core-crypto/src/hkdf.rs |
Implementation: core-crypto/src/hkdf.rs, function derive_32 wrapping
blake3::derive_key.
BLAKE3’s KDF mode internally derives a context key from the context string, then uses it as keying material with extract-then-expand semantics equivalent to HKDF.
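The domain-separation scheme itself can be illustrated without the hash: each purpose maps to a globally unique context string templated with the profile name. The sketch below shows only the string construction and its uniqueness property; the real code passes these strings to blake3::derive_key, which is omitted here to keep the example dependency-free (the "pds v2 {purpose} {profile}" template is inferred from the table above).

```rust
use std::collections::HashSet;

// Build a domain-separation context string for one purpose and one profile.
fn context(purpose: &str, profile: &str) -> String {
    format!("pds v2 {purpose} {profile}")
}

fn main() {
    let purposes = ["vault-key", "clipboard-key", "ipc-auth-token",
                    "ipc-encryption-key", "ssh-vault-kek",
                    "combined-master-key", "kek-salt"];
    // Context strings must be unique across purposes and across profiles,
    // so no two derived sub-keys can collide.
    let mut seen = HashSet::new();
    for profile in ["work", "personal"] {
        for p in purposes {
            assert!(seen.insert(context(p, profile)));
        }
    }
    assert_eq!(seen.len(), 14); // 7 purposes x 2 profiles, all distinct
}
```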
HKDF-SHA256
Standard: RFC 5869, NIST SP 800-56C
Purpose: Governance-compatible alternative to BLAKE3 key derivation. Used when
CryptoConfigToml.hkdf = "hkdf-sha256" (core-config/src/schema_crypto.rs).
Implementation: core-crypto/src/hkdf.rs, function derive_32_hkdf_sha256. Uses the
hkdf crate with sha2::Sha256.
The same context strings listed above for BLAKE3 are used as the HKDF info parameter. The salt is extracted from the master key. Output is 32 bytes.
AES-256-GCM (Key Wrapping)
Standard: NIST SP 800-38D, FIPS 197
Purpose: Wraps and unwraps the master key under a key-encryption key (KEK) derived from an authentication factor.
Used in:
- PasswordWrapBlob – master key wrapped under the Argon2id-derived KEK. Stored on disk in the vault metadata.
- EnrollmentBlob – master key wrapped under the SSH agent-derived KEK. Stored on disk for the SSH agent factor.
Parameters:
| Parameter | Value |
|---|---|
| Key size | 256 bits (32 bytes) |
| Nonce | 96 bits (12 bytes), random per wrap |
| Tag | 128 bits (16 bytes) |
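For illustration, a wrap blob with these parameters could be laid out as nonce || ciphertext || tag. This layout is hypothetical — the actual PasswordWrapBlob serialization is not specified here — but the sketch makes the size arithmetic from the table concrete.

```rust
// Hypothetical blob layout: 12-byte nonce, ciphertext, 16-byte GCM tag.
const NONCE_LEN: usize = 12; // 96-bit nonce, random per wrap
const TAG_LEN: usize = 16;   // 128-bit authentication tag

/// Split a serialized blob into (nonce, ciphertext, tag), or None if too short.
fn split_blob(blob: &[u8]) -> Option<(&[u8], &[u8], &[u8])> {
    if blob.len() < NONCE_LEN + TAG_LEN {
        return None; // cannot even hold the nonce and tag
    }
    let (nonce, rest) = blob.split_at(NONCE_LEN);
    let (ciphertext, tag) = rest.split_at(rest.len() - TAG_LEN);
    Some((nonce, ciphertext, tag))
}

fn main() {
    // A wrapped 32-byte master key: 12 + 32 + 16 = 60 bytes on disk.
    let blob = vec![0u8; 60];
    let (nonce, ct, tag) = split_blob(&blob).unwrap();
    assert_eq!((nonce.len(), ct.len(), tag.len()), (12, 32, 16));
}
```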
AES-256-CBC + HMAC-SHA512 (SQLCipher)
Standard: FIPS 197 (AES), FIPS 198-1 (HMAC), FIPS 180-4 (SHA-512)
Purpose: SQLCipher page-level encryption for vault databases. Each page in the SQLite database is independently encrypted and authenticated.
Parameters:
| Parameter | Value |
|---|---|
| Encryption | AES-256-CBC per page |
| Authentication | HMAC-SHA512 per page |
| Key derivation | Per-page key from vault key via SQLCipher’s internal KDF |
| Page size | 4096 bytes (SQLCipher default) |
| KDF iterations | Controlled by SQLCipher; the vault key itself is pre-derived via Argon2id + BLAKE3 |
Implementation: SQLCipher via the rusqlite crate with the bundled-sqlcipher feature.
Noise IK (IPC Transport)
Standard: Noise Protocol Framework (noiseprotocol.org), pattern IK
Purpose: All inter-daemon communication on the IPC bus. Provides mutual authentication, encryption, and forward secrecy.
Pattern: IK (Initiator knows responder’s static key)
Default cipher suite: Noise_IK_25519_ChaChaPoly_BLAKE2s
| Component | Default (LeadingEdge) | Governance-Compatible |
|---|---|---|
| Key agreement | X25519 (RFC 7748) | X25519 (RFC 7748) |
| Cipher | ChaChaPoly (RFC 7539) | AES-256-GCM (NIST SP 800-38D) |
| Hash | BLAKE2s (RFC 7693) | SHA-256 (FIPS 180-4) |
Additional binding: The UCred (pid, uid, gid) of the connecting process is bound into the Noise prologue, preventing a process from impersonating another process’s Noise session.
Implementation: core-ipc, using the snow crate. Cipher suite selection is configured
via CryptoConfigToml.noise_cipher and CryptoConfigToml.noise_hash in
core-config/src/schema_crypto.rs.
Ed25519 (Delegation Signatures)
Standard: RFC 8032
Purpose: Signs DelegationGrant structs to prevent tampering with capability
delegations. The 64-byte signature is stored in DelegationGrant.signature
(core-types/src/security.rs).
Key size: 256-bit private key, 256-bit public key.
FIPS Path
The following table summarizes FIPS 140 validation status for each algorithm:
| Algorithm | FIPS-Validated Implementations Available | Open Sesame Profile |
|---|---|---|
| Argon2id | No FIPS 140 validation exists | LeadingEdge only |
| PBKDF2-SHA256 | Yes (multiple vendors) | GovernanceCompatible |
| BLAKE3 | No FIPS 140 validation exists | LeadingEdge only |
| HKDF-SHA256 | Yes (via HMAC-SHA256) | GovernanceCompatible |
| AES-256-GCM | Yes (multiple vendors) | Both profiles |
| AES-256-CBC | Yes (multiple vendors) | Both profiles (SQLCipher) |
| HMAC-SHA512 | Yes (multiple vendors) | Both profiles (SQLCipher) |
| X25519 | Partial (some FIPS modules include it) | Both profiles |
| ChaChaPoly | No FIPS 140 validation exists | LeadingEdge only |
| AES-256-GCM (Noise) | Yes (multiple vendors) | GovernanceCompatible |
| BLAKE2s | No FIPS 140 validation exists | LeadingEdge only |
| SHA-256 | Yes (multiple vendors) | GovernanceCompatible |
| Ed25519 | Partial (some FIPS modules include it) | Both profiles |
For deployments requiring full FIPS 140 compliance, set the crypto profile to
governance-compatible:
[crypto]
kdf = "pbkdf2-sha256"
hkdf = "hkdf-sha256"
noise_cipher = "aes-gcm"
noise_hash = "sha256"
audit_hash = "sha256"
minimum_peer_profile = "governance-compatible"
This configuration uses only algorithms with widely available FIPS 140-validated implementations. Open Sesame itself is not a FIPS-validated module; the FIPS boundary is at the cryptographic library level.
Crypto Agility
All cryptographic algorithm selections are config-driven via CryptoConfigToml
(core-config/src/schema_crypto.rs). The to_typed() method converts string-based
configuration into validated CryptoConfig enum variants.
Adding a new algorithm requires:
- Adding a variant to the relevant enum in core-types/src/crypto.rs (e.g., KdfAlgorithm::Scrypt).
- Adding the string mapping in core-config/src/schema_crypto.rs.
- Implementing the algorithm in the corresponding core-crypto function.
The minimum_peer_profile field in CryptoConfig allows heterogeneous crypto profiles
within a federation: each installation selects its own algorithms but can set a floor for
what it accepts from peers. This enables gradual migration from one algorithm to another
without a coordinated cutover.
PBKDF2-SHA256
Standard: NIST SP 800-132, RFC 8018
Purpose: Governance-compatible alternative to Argon2id for password-based key derivation.
Used when CryptoConfigToml.kdf = "pbkdf2-sha256".
Parameters:
| Parameter | Value |
|---|---|
| Hash | SHA-256 |
| Iterations | 600,000 |
| Output | 32 bytes |
| Salt | 16 bytes, per-profile, random |
Implementation: core-crypto/src/kdf.rs, function derive_key_pbkdf2.
PBKDF2-SHA256 provides FIPS 140 compliance for the KDF layer but is significantly less resistant to GPU/ASIC attacks than Argon2id due to its lack of memory-hardness. It should be selected only when FIPS compliance is a hard requirement.
Zero Trust Posture
This page describes how Open Sesame applies zero trust principles to its architecture. Zero trust in this context means that no component, process, or network path is implicitly trusted. Every interaction is authenticated, authorized, and audited, regardless of origin.
Principles
Never Trust, Always Verify
Every IPC message on the bus is authenticated via the Noise IK protocol. There is no unauthenticated communication path between daemons.
Implementation: When a daemon connects to the IPC bus hosted by daemon-profile, the
Noise IK handshake verifies the connecting daemon’s X25519 static public key against the
clearance registry (core-ipc/src/registry.rs). UCred (pid, uid, gid) from the Unix domain
socket is bound into the Noise prologue, preventing a compromised process from reusing
another process’s Noise session.
Clients connecting with unregistered keypairs (e.g., ephemeral sesame CLI
invocations) are validated via UCred same-UID checks and receive SecretsOnly
clearance, allowing them to interact with daemon-secrets without
pre-registration.
The SecurityLevel enum (core-types/src/security.rs) defines the clearance hierarchy:
pub enum SecurityLevel {
    Open,          // Visible to all, including extensions
    Internal,      // Authenticated daemons only
    ProfileScoped, // Daemons with current profile's security context
    SecretsOnly,   // Secrets daemon only
}
A message at SecretsOnly level is delivered only to daemons registered at SecretsOnly
clearance. A daemon at Internal clearance never sees it. This is enforced in the IPC
server’s message routing loop (core-ipc/src/server.rs): the server checks the
recipient’s conn.security_clearance >= msg.security_level before delivering each
message, and applies the same check against the sender’s connection before
accepting each published message from a daemon.
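The two routing checks can be sketched directly from the enum: deriving Ord on the variants in declaration order yields the documented lattice Open < Internal < ProfileScoped < SecretsOnly. The function names below are illustrative, not the core-ipc API.

```rust
// Derived Ord gives the documented ordering:
// Open < Internal < ProfileScoped < SecretsOnly.
#[derive(Clone, Copy, PartialEq, Eq, PartialOrd, Ord, Debug)]
enum SecurityLevel {
    Open,
    Internal,
    ProfileScoped,
    SecretsOnly,
}

// Recipient-side check: deliver only to sufficiently cleared connections.
fn may_deliver(recipient_clearance: SecurityLevel, msg_level: SecurityLevel) -> bool {
    recipient_clearance >= msg_level
}

// Sender-side check: a daemon cannot publish above its own clearance.
fn may_publish(sender_clearance: SecurityLevel, msg_level: SecurityLevel) -> bool {
    sender_clearance >= msg_level
}

fn main() {
    use SecurityLevel::*;
    // Internal-clearance daemons never see SecretsOnly traffic.
    assert!(!may_deliver(Internal, SecretsOnly));
    assert!(may_deliver(SecretsOnly, SecretsOnly));
    // An Open-clearance client cannot escalate by publishing at Internal.
    assert!(!may_publish(Open, Internal));
    assert!(may_publish(Internal, Internal));
}
```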
Least Privilege
Each daemon operates with the minimum privileges required for its function. Privilege boundaries are enforced at multiple layers:
Per-Daemon Clearance
| Daemon | Clearance | Rationale |
|---|---|---|
| daemon-secrets | SecretsOnly | Holds decrypted vault keys; must not leak to other daemons |
| daemon-clipboard | ProfileScoped | Handles clipboard content scoped to the active profile |
| daemon-profile | Internal | IPC bus host; sees all Internal-level and below |
| daemon-wm | Internal | Window management; no access to secrets |
| daemon-launcher | Internal | Application launching; receives secrets only via env injection |
| daemon-input | Internal | Keyboard/mouse capture; no secret access |
| daemon-snippets | Internal | Snippet management; no secret access |
Filesystem Sandboxing (Landlock)
Each daemon restricts its own filesystem access at startup via Landlock. The secrets daemon,
for example, can access only $XDG_RUNTIME_DIR/pds/ and ~/.config/pds/. Attempts to read
or write outside these paths return EACCES.
Partially-enforced Landlock is a fatal error. If the kernel supports Landlock but enforcement is incomplete (e.g., missing filesystem support), the daemon aborts rather than operating with degraded isolation.
Syscall Filtering (seccomp-bpf)
Each daemon installs a seccomp-bpf filter with a per-daemon syscall allowlist. Unallowed
syscalls terminate the offending thread (SECCOMP_RET_KILL_THREAD). A SIGSYS handler logs
the denied syscall before the thread dies, providing visibility into unexpected syscall
usage.
systemd Hardening
All daemon services apply:
| Directive | Effect |
|---|---|
| NoNewPrivileges=yes | Prevents privilege escalation via setuid/setgid binaries |
| ProtectSystem=strict | Root filesystem mounted read-only |
| ProtectHome=read-only | Home directory read-only except explicit ReadWritePaths |
| LimitCORE=0 | Core dumps disabled |
| LimitMEMLOCK=64M | Locked memory budget for memfd_secret and mlock |
| MemoryMax | Per-daemon memory ceiling |
The secrets daemon additionally uses PrivateNetwork=yes, which creates a network namespace
with only a loopback interface. The secrets daemon has no path to any network socket.
Capability-Based Authorization
The CapabilitySet type (core-types/src/security.rs) implements fine-grained,
capability-based authorization. Each agent’s session_scope defines exactly which
operations it can perform. The defined capabilities are:
- Admin, SecretRead, SecretWrite, SecretDelete, SecretList
- ProfileActivate, ProfileDeactivate, ProfileList, ProfileSetDefault
- StatusRead, AuditRead, ConfigReload
- Unlock, Lock
- Delegate, ExtensionInstall, ExtensionManage
Delegation narrows scope via lattice intersection:
effective = delegator_scope.intersection(grant.scope). A delegatee can never exceed the
delegator’s capabilities.
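The narrowing rule is plain set intersection. In this std-only sketch, CapabilitySet is modeled as a BTreeSet over a few representative variants; the real type in core-types may be represented differently.

```rust
use std::collections::BTreeSet;

// A few representative capabilities; the full set is larger.
#[derive(Clone, Copy, PartialEq, Eq, PartialOrd, Ord, Debug)]
enum Capability { SecretRead, SecretWrite, SecretList, Unlock }

type CapabilitySet = BTreeSet<Capability>;

/// effective = delegator_scope ∩ grant.scope: a delegatee can never
/// exceed the delegator's capabilities.
fn effective(delegator: &CapabilitySet, grant: &CapabilitySet) -> CapabilitySet {
    delegator.intersection(grant).copied().collect()
}

fn main() {
    use Capability::*;
    let delegator: CapabilitySet = [SecretRead, SecretList].into_iter().collect();
    // The grant asks for more than the delegator holds...
    let grant: CapabilitySet = [SecretRead, SecretWrite, Unlock].into_iter().collect();
    // ...but the effective scope is cut down to the overlap.
    let expected: CapabilitySet = [SecretRead].into_iter().collect();
    assert_eq!(effective(&delegator, &grant), expected);
}
```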
Continuous Verification
Trust is not established once and cached. Multiple mechanisms provide ongoing verification:
Watchdog
All daemons report health to systemd via WatchdogSec=30. If a daemon fails to report
within 30 seconds, systemd restarts it. This detects hung processes and ensures daemon
liveness.
Audit Chain
The BLAKE3 hash-chained audit log provides a tamper-evident record of all operations. Each
entry hashes the previous entry’s hash, forming a chain from the genesis entry at
sesame init to the most recent operation. Verification via sesame audit verify detects:
- Modified entries (hash mismatch).
- Deleted entries (chain gap).
- Reordered entries (hash mismatch).
- Inserted entries (hash mismatch).
The hash algorithm is configurable: BLAKE3 (default) or SHA-256 (governance-compatible),
via CryptoConfigToml.audit_hash (core-config/src/schema_crypto.rs).
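The chain shape can be sketched with std alone. Note the loud caveat: DefaultHasher below is a stand-in for BLAKE3/SHA-256 and is NOT cryptographic; the point is only that each entry commits to its predecessor, so any modification breaks every subsequent link.

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Link one entry to the previous hash. (DefaultHasher is a non-cryptographic
// stand-in for the real BLAKE3/SHA-256 chain hash.)
fn link(prev_hash: u64, payload: &str) -> u64 {
    let mut h = DefaultHasher::new();
    prev_hash.hash(&mut h);
    payload.hash(&mut h);
    h.finish()
}

/// Build the hash chain for a sequence of entries, starting from genesis.
fn chain(entries: &[&str]) -> Vec<u64> {
    let mut hashes = Vec::new();
    let mut prev = 0u64; // genesis
    for e in entries {
        prev = link(prev, e);
        hashes.push(prev);
    }
    hashes
}

/// Recompute the chain and compare: any edit, gap, or reorder mismatches.
fn verify(entries: &[&str], hashes: &[u64]) -> bool {
    chain(entries) == hashes
}

fn main() {
    let entries = ["init", "unlock work", "secret read db-pass"];
    let hashes = chain(&entries);
    assert!(verify(&entries, &hashes));
    // Modifying any entry breaks the chain from that point on.
    let tampered = ["init", "unlock work", "secret read admin-pass"];
    assert!(!verify(&tampered, &hashes));
}
```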
Authorization Freshness
The TrustVector.authz_freshness field (core-types/src/security.rs) tracks how long
since the last authorization refresh. Delegated capabilities expire via
DelegationGrant.initial_ttl and require periodic renewal via heartbeat_interval. A
stale authorization is equivalent to no authorization.
Heartbeat Renewal
The Attestation::HeartbeatRenewal variant records heartbeat events for time-bounded
attestations. Missing a heartbeat revokes the corresponding delegation.
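A sketch of the combined TTL and heartbeat freshness check, assuming illustrative field names (the real DelegationGrant and TrustVector types live in core-types):

```rust
use std::time::{Duration, Instant};

/// Hypothetical grant shape for illustration only — field names mirror the
/// documented initial_ttl / heartbeat_interval concepts, not the real API.
struct Grant {
    issued: Instant,
    initial_ttl: Duration,
    last_heartbeat: Instant,
    heartbeat_interval: Duration,
}

impl Grant {
    /// A delegation is usable only while both its overall TTL and its
    /// heartbeat window are fresh. Stale authorization == no authorization.
    fn is_fresh(&self, now: Instant) -> bool {
        now.duration_since(self.issued) <= self.initial_ttl
            && now.duration_since(self.last_heartbeat) <= self.heartbeat_interval
    }
}

fn main() {
    let t0 = Instant::now();
    let grant = Grant {
        issued: t0,
        initial_ttl: Duration::from_secs(3600),
        last_heartbeat: t0,
        heartbeat_interval: Duration::from_secs(30),
    };
    // Fresh shortly after issuance...
    assert!(grant.is_fresh(t0 + Duration::from_secs(10)));
    // ...but a missed heartbeat revokes the delegation even though the
    // overall TTL has not expired.
    assert!(!grant.is_fresh(t0 + Duration::from_secs(31)));
}
```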
Device Health as Posture Signal
The availability of memfd_secret(2) is a binary posture signal. A system with
memfd_secret removes secret pages from the kernel direct map; a system without it leaves
secrets accessible to any process that can read /proc/pid/mem or perform DMA.
| Posture Signal | Value | Meaning |
|---|---|---|
| memfd_secret available | device_posture: 1.0 | Secrets removed from kernel direct map |
| memfd_secret unavailable | device_posture: 0.5 | Secrets on kernel direct map (mlock fallback) |
| No mlock | device_posture: 0.0 | Secrets may be swapped to disk |
The TrustVector.device_posture field (core-types/src/security.rs) is a f64 from 0.0
(unknown) to 1.0 (fully attested). In a federation context, a peer with low device posture
may be restricted from receiving high-sensitivity secrets.
Additional posture signals include:
- Landlock enforcement status – whether the filesystem sandbox is active.
- seccomp-bpf status – whether syscall filtering is active.
- Machine binding – whether the installation is bound to specific hardware via MachineBindingType::TpmBound or MachineBindingType::MachineId.
- Kernel version – whether the kernel meets minimum requirements for all security controls.
Microsegmentation via Profile Isolation
Trust profiles are the microsegmentation boundary in Open Sesame. Each profile is an isolated trust context:
| Boundary | Isolation Mechanism |
|---|---|
| Secrets | Separate SQLCipher vault per profile (vaults/{name}.db) |
| Encryption keys | Separate BLAKE3-derived vault key per profile |
| Clipboard | Profile-scoped clipboard history |
| Audit | Profile attribution in every audit entry |
| Frecency | Separate frecency database per profile |
| Environment | Profile-scoped secret injection via sesame env -p {profile} |
Cross-profile access is not possible without explicit configuration. A daemon operating in
the work profile cannot read secrets from the personal profile’s vault. The vault
encryption keys are derived from different BLAKE3 context strings
("pds v2 vault-key work" vs. "pds v2 vault-key personal"), so even with the master key,
the derived keys are distinct.
The LaunchProfile type (core-types/src/profile.rs) allows explicit profile stacking for
applications that need secrets from multiple profiles:
pub struct LaunchProfile {
    pub trust_profiles: Vec<TrustProfileName>,
    pub conflict_policy: ConflictPolicy,
}
When multiple profiles are stacked, the ConflictPolicy determines how secret key
collisions are handled: Strict (abort), Warn (log and use higher-precedence), or Last
(silently use higher-precedence). The default is Strict, preventing accidental secret
leakage across profile boundaries.
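The collision handling can be sketched as follows; the merge function and its signature are illustrative, not the real resolution code:

```rust
use std::collections::BTreeMap;

#[derive(Clone, Copy)]
enum ConflictPolicy {
    Strict, // abort on any key collision (the default)
    Warn,   // log and use the higher-precedence value
    Last,   // silently use the higher-precedence value
}

/// Hypothetical merge of stacked profile secrets; `profiles` are ordered
/// from lowest to highest precedence.
fn merge(
    profiles: &[BTreeMap<String, String>],
    policy: ConflictPolicy,
) -> Result<BTreeMap<String, String>, String> {
    let mut merged = BTreeMap::new();
    for profile in profiles {
        for (k, v) in profile {
            if merged.contains_key(k) {
                match policy {
                    ConflictPolicy::Strict => {
                        return Err(format!("secret key collision: {k}"));
                    }
                    ConflictPolicy::Warn => {
                        eprintln!("warning: {k} overridden by higher-precedence profile");
                    }
                    ConflictPolicy::Last => {}
                }
            }
            // Later (higher-precedence) profiles overwrite earlier ones.
            merged.insert(k.clone(), v.clone());
        }
    }
    Ok(merged)
}

fn main() {
    let work = BTreeMap::from([("API_KEY".to_string(), "w".to_string())]);
    let personal = BTreeMap::from([("API_KEY".to_string(), "p".to_string())]);

    // The default Strict policy aborts instead of leaking across profiles.
    assert!(merge(&[work.clone(), personal.clone()], ConflictPolicy::Strict).is_err());

    // Last silently takes the higher-precedence (later) value.
    let m = merge(&[work, personal], ConflictPolicy::Last).unwrap();
    assert_eq!(m["API_KEY"], "p");
}
```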
Explicit Security Posture
Open Sesame does not degrade silently. Security controls that fail are fatal, with one documented exception:
- Landlock enforcement failure: fatal. Daemon does not start.
- seccomp-bpf installation failure: fatal. Daemon does not start.
- memfd_secret unavailability: non-fatal. Daemon starts with mlock fallback. Logged at ERROR level with an explicit compliance impact statement naming affected frameworks (IL5/IL6, DISA STIG, PCI-DSS) and the exact remediation command.
The memfd_secret exception exists because the feature depends on kernel configuration that
application software cannot control. The ERROR-level log ensures the operator is informed of
the reduced posture, and the compliance impact statement provides actionable remediation
guidance.
Network Trust Model
The NetworkTrust enum (core-types/src/security.rs) classifies the trust level of the
network path:
pub enum NetworkTrust {
    Local,          // Unix domain socket, same machine
    Encrypted,      // Noise IK, TLS, WireGuard
    Onion,          // Tor, Veilid
    PublicInternet, // Unencrypted or minimally authenticated
}
The ordering represents decreasing trust: Local is most trusted (no network traversal),
PublicInternet is least trusted.
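In Rust, deriving Ord on an enum compares variants in declaration order, which is one natural way to realize the described ordering; the derives in this sketch are an assumption for illustration:

```rust
/// Illustrative mirror of the NetworkTrust enum. Declaration order means
/// earlier variants compare as lower, i.e. "more trusted".
#[derive(Debug, PartialEq, Eq, PartialOrd, Ord)]
enum NetworkTrust {
    Local,
    Encrypted,
    Onion,
    PublicInternet,
}

fn main() {
    // Local is most trusted; PublicInternet is least.
    assert!(NetworkTrust::Local < NetworkTrust::Encrypted);
    assert!(NetworkTrust::Onion < NetworkTrust::PublicInternet);

    // A policy can then express "reject anything less trusted than X":
    let threshold = NetworkTrust::Encrypted;
    assert!(NetworkTrust::Local <= threshold);      // allowed
    assert!(NetworkTrust::Onion > threshold);       // requires stronger auth
}
```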
In the current implementation, all IPC communication uses Local (Unix domain socket). In
a federation context (Design Intent), Encrypted (Noise IK over TCP) would be used for
cross-machine communication. The TrustVector.network_exposure field allows authorization
policies to require stronger authentication for less-trusted network paths.
Linux Platform
The platform-linux crate provides safe Rust abstractions over Linux-specific APIs consumed by the daemon
crates. It contains no business logic. All modules are gated with #[cfg(target_os = "linux")].
Feature Flags
The crate uses two feature flags to control dependency scope:
- No features (default): Only headless-safe modules are compiled: sandbox, security, systemd, dbus, cosmic_keys, cosmic_theme, and the clipboard trait definition. This is sufficient for the open-sesame (headless) package.
- desktop: Enables Wayland compositor integration (compositor, focus_monitor), evdev input capture (input), and pulls in wayland-client, wayland-protocols, wayland-protocols-wlr, smithay-client-toolkit, and evdev.
- cosmic: Enables COSMIC-specific Wayland protocol support. Implies desktop. Pulls in cosmic-client-toolkit and cosmic-protocols, which are GPL-3.0 licensed. This feature flag isolates the GPL license obligation to builds that opt in.
Compositor Abstraction
The CompositorBackend Trait
Window and workspace management is abstracted behind the CompositorBackend trait defined in
compositor.rs. The trait requires Send + Sync and exposes these operations:
- list_windows() – enumerate all toplevel windows
- list_workspaces() – enumerate workspaces
- activate_window(id) – bring a window to the foreground
- set_window_geometry(id, geom) – resize/reposition a window
- move_to_workspace(id, ws) – move a window to a different workspace
- focus_window(id) – set input focus to a window
- close_window(id) – request a window to close
- name() – human-readable backend name for diagnostics
All methods return Pin<Box<dyn Future<Output = T> + Send>> (aliased as BoxFuture) to maintain
dyn-compatibility. This is required because detect_compositor() returns Box<dyn CompositorBackend>
for runtime backend selection.
The trait also defines a Workspace struct with fields id (CompositorWorkspaceId), name
(String), and is_active (bool).
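The dyn-compatibility constraint can be sketched as follows. The two-method trait subset, FakeBackend, and the hand-rolled no-op waker are illustrative only, not the real API:

```rust
use std::future::Future;
use std::pin::Pin;
use std::task::{Context, Poll, RawWaker, RawWakerVTable, Waker};

// Alias mirroring the documented one.
type BoxFuture<T> = Pin<Box<dyn Future<Output = T> + Send>>;

/// async fn in traits is not object-safe, so each method returns a boxed
/// future. That is what makes Box<dyn CompositorBackend> possible.
trait CompositorBackend: Send + Sync {
    fn name(&self) -> &'static str;
    fn list_windows(&self) -> BoxFuture<Vec<String>>; // window titles as a stand-in
}

struct FakeBackend;

impl CompositorBackend for FakeBackend {
    fn name(&self) -> &'static str {
        "fake"
    }
    fn list_windows(&self) -> BoxFuture<Vec<String>> {
        // Box::pin an async block to satisfy the boxed-future signature.
        Box::pin(async { vec!["terminal".to_string(), "browser".to_string()] })
    }
}

// Minimal no-op waker so the example can poll without an executor.
fn noop_waker() -> Waker {
    fn clone(_: *const ()) -> RawWaker {
        RawWaker::new(std::ptr::null(), &VTABLE)
    }
    fn noop(_: *const ()) {}
    static VTABLE: RawWakerVTable = RawWakerVTable::new(clone, noop, noop, noop);
    unsafe { Waker::from_raw(RawWaker::new(std::ptr::null(), &VTABLE)) }
}

fn main() {
    // Runtime selection yields a trait object — the reason the methods must
    // return BoxFuture rather than use async fn.
    let backend: Box<dyn CompositorBackend> = Box::new(FakeBackend);
    let mut fut = backend.list_windows();
    let waker = noop_waker();
    let mut cx = Context::from_waker(&waker);
    match fut.as_mut().poll(&mut cx) {
        Poll::Ready(windows) => assert_eq!(windows.len(), 2),
        Poll::Pending => unreachable!("future is immediately ready"),
    }
    assert_eq!(backend.name(), "fake");
}
```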
Runtime Backend Detection
The detect_compositor() factory function probes the Wayland display for supported protocols and
instantiates the appropriate backend:
- If the cosmic feature is enabled, attempt to connect the CosmicBackend. On success, return it.
- If COSMIC protocols are unavailable (or the feature is disabled), attempt to connect the WlrBackend.
- If neither backend connects, return Error::Platform.
This detection runs once at daemon startup. The returned Box<dyn CompositorBackend> is stored and used
for the daemon’s lifetime.
CosmicBackend
The CosmicBackend (in backend_cosmic.rs) targets the COSMIC desktop compositor (cosmic-comp). It
uses three Wayland protocols:
- ext_foreign_toplevel_list_v1 – standard protocol for window enumeration (toplevel handles with identifier, app_id, title).
- zcosmic_toplevel_info_v1 – COSMIC-specific extension providing activation state detection via State::Activated.
- zcosmic_toplevel_manager_v1 – COSMIC-specific extension providing window activation (manager.activate(handle, seat)) and close operations.
Connection and Protocol Probing
CosmicBackend::connect() opens a Wayland connection, initializes the registry, and verifies that all
three required protocol interfaces are advertised in the global list. It does not bind protocol objects
during probing – binding ExtForeignToplevelListV1 causes the compositor to start sending toplevel
events, and if the probe event queue is then dropped, those objects become zombies that cause the
compositor to close the connection.
The backend holds the wayland_client::Connection and an op_lock (Mutex<()>) that serializes all
protocol operations. Concurrent bind/destroy cycles on the same wl_display can corrupt compositor
state and crash cosmic-comp.
Window Enumeration (2-Roundtrip Pattern)
enumerate() follows a two-roundtrip protocol flow:
- Roundtrip 1: Bind ext_foreign_toplevel_list_v1 and zcosmic_toplevel_info_v1. Receive all ExtForeignToplevelHandleV1 events (identifier, app_id, title, Done).
- Request zcosmic_toplevel_handle for each handle via info.get_cosmic_toplevel().
- Roundtrip 2: Receive cosmic state events. Detect activation by checking for State::Activated in the state byte array (packed u32 values in native endian).
Windows are converted to core_types::Window structs. The WindowId is derived deterministically
using UUID v5 with a fixed namespace ("open-sesame-wind" as bytes) and the protocol identifier as
input. The focused window is reordered to the end of the list (MRU ordering for Alt+Tab).
After enumeration, all protocol objects are destroyed in the correct order per the protocol
specification: destroy cosmic handles, destroy foreign toplevel handles, stop the list, roundtrip for
the finished event, destroy the list, flush.
Window Activation (3-Roundtrip Pattern)
activate() uses a separate disposable Wayland connection to avoid crashing cosmic-comp. The compositor
panics (toplevel_management.rs:267 unreachable!()) when protocol objects are destroyed while an
activation is in flight, which would kill the entire COSMIC desktop session. The disposable connection
isolates this breakage from the shared connection used for enumeration.
- Roundtrip 1: Enumerate toplevels on the disposable connection.
- Find the target window by deterministic UUID mapping. Request its cosmic handle.
- Roundtrip 2: Receive the cosmic handle.
- Call manager.activate(cosmic_handle, seat).
- Roundtrip 3: Ensure activation is processed.
Protocol objects are intentionally leaked. The leaked objects cause a broken pipe when the EventQueue
drops, but this only affects the disposable connection.
Unsupported Operations
set_window_geometry and move_to_workspace return Error::Platform – these operations are not
supported by the COSMIC toplevel protocols. focus_window delegates to activate_window.
WlrBackend
The WlrBackend (in backend_wlr.rs) implements CompositorBackend using
wlr-foreign-toplevel-management-v1. This protocol is supported by sway, Hyprland, niri, Wayfire,
and COSMIC (which advertises it for backwards compatibility).
Architecture
Unlike the COSMIC backend’s re-enumerate-on-each-call approach, the WLR backend maintains a continuously updated state snapshot:
- A dedicated dispatch thread (wlr-dispatch) continuously reads Wayland events using prepare_read() + libc::poll() with a 50ms periodic wake-up.
- On each Done event (the protocol’s atomic commit point), the dispatch thread publishes the committed toplevel state to a shared Arc<Mutex<WlrState>>.
- On Closed events, the toplevel is removed from shared state and the handle proxy is destroyed.
- list_windows() reads the snapshot under the mutex. No Wayland roundtrips occur on the API thread.
- activate_window() and close_window() call proxy methods directly (wayland-client 0.31 proxies are Send + Sync) and flush the shared connection.
The dispatch loop uses exponential backoff (100ms to 30s) on read, dispatch, or flush errors.
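The backoff policy can be sketched as a pure function. The 100ms and 30s bounds come from the text; doubling-with-clamp is an assumption about the exact growth curve:

```rust
use std::time::Duration;

/// Sketch of the dispatch loop's error backoff: double on each consecutive
/// error, clamp to a ceiling, reset (pass None) after a success.
fn next_backoff(current: Option<Duration>) -> Duration {
    const INITIAL: Duration = Duration::from_millis(100);
    const MAX: Duration = Duration::from_secs(30);
    match current {
        None => INITIAL,          // first error after a healthy stretch
        Some(d) => (d * 2).min(MAX),
    }
}

fn main() {
    let mut backoff = None;
    let mut observed = Vec::new();
    for _ in 0..12 {
        let d = next_backoff(backoff);
        observed.push(d);
        backoff = Some(d);
    }
    assert_eq!(observed[0], Duration::from_millis(100));
    assert_eq!(observed[1], Duration::from_millis(200));
    // After enough consecutive failures the delay saturates at 30s.
    assert_eq!(*observed.last().unwrap(), Duration::from_secs(30));
}
```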
Unsupported Operations
set_window_geometry and move_to_workspace return Error::Platform – the wlr-foreign-toplevel
protocol does not support these operations. focus_window delegates to activate_window.
Focus Monitor
The focus_monitor module (in focus_monitor.rs) tracks the active window and sends FocusEvent
values through a tokio::sync::mpsc channel. It uses wlr-foreign-toplevel-management-v1 and is
compatible with sway, Hyprland, niri, Wayfire, and COSMIC.
FocusEvent has two variants:
- Focus(String) – an app gained focus; payload is the app_id.
- Closed(String) – a window closed; payload is the app_id.
The monitor runs as a long-lived async task. It connects to the Wayland display, binds the wlr foreign
toplevel manager (version 1-3), and enters an async event loop using tokio::io::unix::AsyncFd on the
Wayland socket file descriptor. On each Done event, if the activated app_id changed, a
FocusEvent::Focus is sent via try_send. On Closed events, a FocusEvent::Closed is sent and the
handle proxy is destroyed.
The focus monitor is re-exported from compositor for backward compatibility: downstream crates import
platform_linux::compositor::{FocusEvent, focus_monitor}.
Clipboard
The clipboard module defines the DataControl trait for Wayland clipboard access. It abstracts over
two protocols:
- ext-data-control-v1 (preferred, standardized)
- wlr-data-control-v1 (fallback for older compositors)
The trait provides:
- read_selection() – read the current clipboard content with MIME type metadata.
- write_selection(content) – write content to the clipboard.
- subscribe() – subscribe to clipboard change notifications via a tokio::sync::mpsc::Receiver<ClipboardContent>.
- protocol_name() – diagnostic name.
ClipboardContent carries a mime_type string and data byte vector.
The connect_data_control() factory function currently returns an error – clipboard implementation is
deferred to a later phase. The trait definition and module are available as the integration contract.
On COSMIC, the COSMIC_DATA_CONTROL_ENABLED=1 environment variable is required for data-control
protocol access.
Input
The input module (in input.rs) provides evdev device discovery and async keyboard event streaming.
Device Discovery
enumerate_devices() iterates /dev/input/event* via the evdev crate’s built-in enumerator. Each
device is classified:
- Keyboard: supports KEY_A, KEY_Z, and KEY_ENTER. This heuristic excludes power buttons, media controllers, and other devices that report KEY events but lack a full key set.
- Pointer: supports BTN_LEFT.
The function returns a Vec<DeviceInfo> with path, name, is_keyboard, and is_pointer fields.
Devices that fail to open (EACCES) are silently skipped.
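The classification heuristic can be sketched with string key names standing in for evdev key codes:

```rust
use std::collections::HashSet;

/// Mirrors the documented DeviceInfo fields (path omitted for brevity).
struct DeviceInfo {
    name: String,
    is_keyboard: bool,
    is_pointer: bool,
}

/// Sketch of the heuristic; real code checks evdev key codes, not strings.
fn classify(name: &str, supported_keys: &HashSet<&str>) -> DeviceInfo {
    // A keyboard must support KEY_A, KEY_Z, and KEY_ENTER — this filters
    // out power buttons and media controllers that report only a few KEY
    // events.
    let is_keyboard = ["KEY_A", "KEY_Z", "KEY_ENTER"]
        .iter()
        .all(|k| supported_keys.contains(k));
    // A pointer supports BTN_LEFT.
    let is_pointer = supported_keys.contains("BTN_LEFT");
    DeviceInfo { name: name.to_string(), is_keyboard, is_pointer }
}

fn main() {
    let kbd = HashSet::from(["KEY_A", "KEY_Z", "KEY_ENTER", "KEY_SPACE"]);
    let power = HashSet::from(["KEY_POWER"]);
    let mouse = HashSet::from(["BTN_LEFT", "BTN_RIGHT"]);

    assert!(classify("AT keyboard", &kbd).is_keyboard);
    // A power button reports KEY events but is not classified as a keyboard.
    assert!(!classify("Power Button", &power).is_keyboard);
    let m = classify("Mouse", &mouse);
    assert!(m.is_pointer && !m.is_keyboard);
    println!("classified: {}", m.name);
}
```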
Keyboard Streaming
open_keyboard_stream(path) opens an evdev device and returns an EventStream (from the evdev crate)
that uses AsyncFd<Device> internally. This is fully async with no spawn_blocking required. Call
stream.next_event().await to read events.
The device is not grabbed (EVIOCGRAB is not used). Events are read passively – they also reach the
compositor. This is intentional: the system observes and forwards copies rather than stealing events.
Requires input group membership. Root is never required. For future remap support via /dev/uinput,
a udev rule is needed: KERNEL=="uinput", GROUP="uinput", MODE="0660".
D-Bus Integration
The dbus module (in dbus.rs) provides typed D-Bus proxies using zbus with
default-features = false, features = ["tokio"] to ensure all I/O runs on the tokio runtime with no
background threads.
Session Bus
SessionBus::connect() opens a connection to the D-Bus session bus. It serves as the shared connection
handle for all proxies.
Secret Service (org.freedesktop.secrets)
SecretServiceProxy provides raw store/retrieve/delete/has operations for the freedesktop Secret
Service API. It opens a plain-text session (secrets transmitted unencrypted over D-Bus, which is safe
because D-Bus is local transport). The proxy operates on the default collection
(/org/freedesktop/secrets/aliases/default) and identifies items by application and account
attributes with type master-key-wrapped.
This module provides only the low-level D-Bus proxy. Business logic (KeyLocker trait, key hierarchy)
lives in daemon-secrets.
Global Shortcuts Portal (org.freedesktop.portal.GlobalShortcuts)
GlobalShortcutsProxy provides compositor-agnostic global hotkey registration through
xdg-desktop-portal. Supported on COSMIC, KDE Plasma 6.4+, and niri. The proxy supports
create_session, bind_shortcuts, and list_shortcuts operations.
NetworkManager SSID Monitor
ssid_monitor() is a long-lived async task that monitors the active WiFi SSID via NetworkManager D-Bus
signals on the system bus. It subscribes to the StateChanged signal on
org.freedesktop.NetworkManager, re-reads the primary active connection’s SSID on each state change,
and sends the SSID string through a tokio::sync::mpsc::Sender<String> when it changes.
The SSID reading traverses the NetworkManager object graph: primary connection -> connection type check
(must be 802-11-wireless) -> device list -> active access point -> SSID byte array -> UTF-8 string.
This enables context-based profile activation (e.g., activate “work” profile when connected to the office WiFi).
COSMIC Key Injection
The cosmic_keys module (in cosmic_keys.rs) manages keybindings in COSMIC desktop’s shortcut
configuration files:
- ~/.config/cosmic/com.system76.CosmicSettings.Shortcuts/v1/custom – custom Spawn(...) bindings
- ~/.config/cosmic/com.system76.CosmicSettings.Shortcuts/v1/system_actions – maps System(...) action variants to command strings
System Actions Override Strategy
For Alt+Tab integration, the module overrides system_actions rather than adding a competing
Spawn(...) binding. COSMIC’s default keybindings map Alt+Tab to System(WindowSwitcher). Adding a
parallel Spawn(...) binding would race with the default and leak the Alt modifier to applications.
By overriding system_actions, the compositor’s own built-in Alt+Tab binding fires sesame, and the key
event is consumed at compositor level before any application sees the Alt keypress.
The overrides point WindowSwitcher to sesame wm overlay and WindowSwitcherPrevious to
sesame wm overlay --backward.
Injection Safety
All values written to RON configuration files are escaped through escape_ron_string(), which handles
backslash and double-quote characters to prevent RON injection.
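A minimal sketch of such escaping (the function name matches the documented helper; the body is illustrative):

```rust
/// Sketch of RON string escaping: backslashes and double quotes are the
/// two characters that could break out of a RON string literal, so each
/// is prefixed with a backslash.
fn escape_ron_string(s: &str) -> String {
    let mut out = String::with_capacity(s.len());
    for c in s.chars() {
        match c {
            '\\' => out.push_str("\\\\"),
            '"' => out.push_str("\\\""),
            _ => out.push(c),
        }
    }
    out
}

fn main() {
    // A malicious key combo cannot terminate the string and inject RON.
    assert_eq!(
        escape_ron_string(r#"alt+space"), evil: ("#),
        r#"alt+space\"), evil: ("#
    );
    assert_eq!(escape_ron_string(r"a\b"), r"a\\b");
    assert_eq!(escape_ron_string("plain"), "plain");
}
```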
Configuration Files
The files are in RON (Rusty Object Notation) format. The compositor watches these files via
cosmic_config::calloop::ConfigWatchSource and live-reloads on change – no logout is required.
Before writing, the module creates a .bak backup of the existing file. The
setup_keybinding(launcher_key_combo) function:
- Overrides system_actions for WindowSwitcher/WindowSwitcherPrevious.
- Adds a custom Spawn(...) binding for the launcher key (e.g., alt+space).
- Adds a backward variant with Shift (e.g., alt+shift+space).
remove_keybinding() removes all sesame entries from both files. If system_actions becomes empty
after removal, the file is deleted so COSMIC falls back to system defaults at /usr/share/cosmic/.
COSMIC Theme Integration
The cosmic_theme module (in cosmic_theme.rs) reads theme colors, fonts, corner radii, and dark/light
mode from COSMIC’s RON configuration at ~/.config/cosmic/:
- Theme mode: com.system76.CosmicTheme.Mode/v1/is_dark
- Dark theme: com.system76.CosmicTheme.Dark/v1/
- Light theme: com.system76.CosmicTheme.Light/v1/
CosmicTheme::load() reads the mode, selects the appropriate theme directory, and deserializes
background, primary, secondary containers, accent colors, and corner radii from individual RON
files. Returns None on non-COSMIC systems where these files do not exist.
The types (CosmicColor, ComponentColors, Container, AccentColors, CornerRadii) provide the
theme data needed for overlay rendering. CosmicColor stores RGBA as 0.0-1.0 floats with a
to_rgba() conversion to (u8, u8, u8, u8).
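The float-to-byte conversion can be sketched as follows; the struct and method mirror the documented names, while the rounding and clamping behavior are assumptions:

```rust
/// COSMIC stores RGBA channels as 0.0–1.0 floats; overlay rendering wants
/// bytes. Clamping guards against out-of-range config values.
struct CosmicColor {
    r: f32,
    g: f32,
    b: f32,
    a: f32,
}

impl CosmicColor {
    fn to_rgba(&self) -> (u8, u8, u8, u8) {
        fn chan(v: f32) -> u8 {
            (v.clamp(0.0, 1.0) * 255.0).round() as u8
        }
        (chan(self.r), chan(self.g), chan(self.b), chan(self.a))
    }
}

fn main() {
    let accent = CosmicColor { r: 0.0, g: 0.5, b: 1.0, a: 1.0 };
    assert_eq!(accent.to_rgba(), (0, 128, 255, 255));
}
```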
systemd Integration
The systemd module (in systemd.rs) provides three helpers using the sd-notify crate:
- notify_ready() – sends READY=1 to systemd for Type=notify services. Preserves NOTIFY_SOCKET (does not unset it) so subsequent watchdog pings continue to work.
- notify_watchdog() – sends a watchdog keepalive ping.
- notify_status(status) – updates the daemon’s status string visible in systemctl status.
Adding a New Compositor Backend
To add support for a new compositor (e.g., GNOME/Mutter via org.gnome.Mutter.IdleMonitor, KDE/KWin
via org.kde.KWin, or Hyprland IPC):
- Create backend_<name>.rs in platform-linux/src/ implementing the CompositorBackend trait.
- Add pub(crate) mod backend_<name>; to lib.rs, gated behind an appropriate feature flag.
- Add a match arm to detect_compositor() in compositor.rs. Place it in the detection order according to protocol specificity (more specific protocols first, generic fallbacks last).
- Add the feature flag to Cargo.toml with any new protocol dependencies.
The backend struct must be Send + Sync. Methods return BoxFuture for dyn-compatibility. For
operations not supported by the target compositor’s protocols, return Error::Platform with a
descriptive message.
macOS Platform
The platform-macos crate provides safe Rust abstractions over macOS-specific APIs consumed by the
daemon crates. It contains no business logic. All modules are gated with #[cfg(target_os = "macos")];
on other platforms the crate compiles as an empty library with no exports.
Implementation Status
The crate is scaffolded with module declarations only. macOS implementations are deferred until the Linux platform is validated on Pop!_OS / COSMIC. The module structure, API boundaries, and dependency selections are defined. No functional code exists.
Dependencies
The Cargo.toml declares macOS-specific dependencies:
| Crate | Purpose |
|---|---|
| core-types | Shared types (Window, WindowId, Error, etc.) |
| security-framework | Keychain Services API (create/read/delete keychain items) |
| objc2 | Objective-C runtime bindings for Accessibility and AppKit APIs |
| core-foundation | CFString, CFDictionary, CFRunLoop interop |
| core-graphics | CGEventTap, CGEventPost for input monitoring and injection |
| serde | Serialization for configuration types |
| tokio | Async runtime integration |
| tracing | Structured logging |
Module Structure
accessibility
Window management via the Accessibility API (AXUIElement). This module will provide the macOS
equivalent of the Linux compositor backends: window enumeration, activation, geometry manipulation,
and close operations. On macOS, all window management goes through the Accessibility framework rather
than compositor-specific protocols.
clipboard
Clipboard access via NSPasteboard. This module will provide read, write, and change-notification
functionality equivalent to the Linux DataControl trait. macOS clipboard access does not require
special permissions.
input
Input monitoring via CGEventTap (listen-only) and input injection via CGEventPost. Both operations
require the Accessibility permission in TCC. The module will provide keyboard event observation
equivalent to the Linux evdev module. Unlike Linux evdev, macOS input monitoring is global by default
and does not require group membership – it requires a TCC permission grant instead.
keychain
Per-profile named keychains via the security-framework crate (Keychain Services API). This module
will store wrapped key-encryption keys, equivalent to the Linux SecretServiceProxy in the dbus
module. macOS uses per-user keychains rather than a D-Bus Secret Service.
launch_agent
LaunchAgent plist generation and launchctl lifecycle management. This is the macOS equivalent of
systemd service units. The module will generate property list files for
~/Library/LaunchAgents/, register them with launchctl, and manage daemon lifecycle (start, stop,
status). Unlike systemd’s Type=notify, LaunchAgents use process lifecycle for readiness signaling.
tcc
Transparency, Consent, and Control (TCC) permission state introspection. This module will query the TCC database to determine whether Accessibility and Input Monitoring permissions have been granted before attempting operations that require them. This allows the system to provide actionable error messages rather than silently failing.
Platform-Specific Considerations
Accessibility API vs. Wayland Protocols
On Linux, window management is mediated by compositor-specific Wayland protocols. On macOS, the
Accessibility API (AXUIElement) provides a single, compositor-independent interface for window
enumeration, activation, geometry, and close operations. The trade-off is that Accessibility access
requires an explicit TCC permission grant from the user, and the API surface is significantly
different from Wayland protocols.
TCC Permissions
macOS requires explicit user consent for two operations that Open Sesame uses:
- Accessibility: Required for window management (AXUIElement) and input injection (CGEventPost).
- Input Monitoring: Required for keyboard event observation (CGEventTap in listen-only mode).
These permissions cannot be granted programmatically. The application must be added to the relevant
TCC lists in System Settings. The tcc module exists to detect permission state and guide the user
through the grant process.
launchd vs. systemd
macOS uses launchd instead of systemd for daemon management. Key differences:
- Readiness signaling: systemd supports Type=notify with sd_notify(READY=1). launchd uses process lifecycle – a LaunchAgent is considered ready when the process is running.
- Watchdog: systemd supports WatchdogSec with periodic keepalive pings. launchd has KeepAlive, which restarts crashed processes but does not support health-check pings.
- Socket activation: systemd supports ListenStream for socket-activated services. launchd supports Sockets in the plist for equivalent functionality.
- Configuration format: systemd uses INI-style unit files. launchd uses XML property lists in ~/Library/LaunchAgents/.
- Dependency ordering: systemd supports After=, Requires=, Wants=. launchd has limited dependency support via WatchPaths and QueueDirectories.
Keychain vs. Secret Service
Linux uses the freedesktop Secret Service API (org.freedesktop.secrets) over D-Bus for credential
storage. macOS uses the Keychain Services API directly. Both provide encrypted-at-rest storage scoped
to the user session, but the API surfaces are entirely different. The keychain module will present
the same logical operations (store, retrieve, delete, has) as the Linux SecretServiceProxy.
Windows Platform
The platform-windows crate provides safe Rust abstractions over Windows-specific APIs consumed by
the daemon crates. It contains no business logic. All modules are gated with
#[cfg(target_os = "windows")]; on other platforms the crate compiles as an empty library with no
exports.
Implementation Status
The crate is scaffolded with module declarations only. Windows implementations are deferred until the Linux and macOS platforms are validated. The module structure, API boundaries, and dependency selections are defined. No functional code exists.
Dependencies
The Cargo.toml declares Windows-specific dependencies:
| Crate | Purpose |
|---|---|
| core-types | Shared types (Window, WindowId, Error, etc.) |
| windows | Official Microsoft Windows API bindings (Win32, COM, WinRT) |
| serde | Serialization for configuration types |
| tokio | Async runtime integration |
| tracing | Structured logging |
Module Structure
clipboard
Clipboard monitoring via AddClipboardFormatListener. This module will provide clipboard change
notifications and read/write operations, equivalent to the Linux DataControl trait. Windows
clipboard access uses the Win32 clipboard API and does not require elevated privileges.
credential
Credential storage via CryptProtectData (DPAPI) and CredRead/CredWrite (Credential Manager).
This module will store wrapped key-encryption keys, equivalent to the Linux SecretServiceProxy in
the dbus module. DPAPI provides user-scoped encryption tied to the Windows login credentials. The
Credential Manager provides a higher-level API for named credentials visible in the Windows
Credential Manager UI.
hotkey
Global hotkey registration via RegisterHotKey/UnregisterHotKey. This module will provide
compositor-independent hotkey capture, equivalent to the Linux Global Shortcuts portal or COSMIC key
injection. On Windows, global hotkeys are registered per-thread and deliver WM_HOTKEY messages to
the registering thread’s message loop.
input_hook
Input capture via SetWindowsHookEx(WH_KEYBOARD_LL). This module will provide low-level keyboard
monitoring equivalent to the Linux evdev module. Low-level keyboard hooks see all keyboard input
system-wide. The crate documentation notes that EDR (Endpoint Detection and Response) disclosure is
required – low-level keyboard hooks are flagged by security software and must be documented for
enterprise deployment.
named_pipe
IPC bootstrap via Named Pipes. This is the Windows equivalent of Unix domain sockets used by the Noise IK IPC bus on Linux. Named Pipes provide the transport layer for inter-daemon communication on Windows. Security descriptors on the pipe control which processes can connect.
policy
Enterprise policy reading via Group Policy registry keys. This module will read
HKLM\Software\Policies\OpenSesame\ for enterprise-managed configuration overrides. This has no
direct Linux equivalent – the closest analog is /etc/pds/ system configuration, but Group Policy
provides domain-joined management capabilities.
task_scheduler
Daemon autostart via Task Scheduler COM API. This is the Windows equivalent of systemd user services and macOS LaunchAgents. The module will create scheduled tasks that run at user logon to start the daemon processes.
ui_automation
Window management and enumeration via UI Automation COM API. This module provides the Windows equivalent of the Linux compositor backends. UI Automation exposes the desktop automation tree, allowing enumeration of all top-level windows, reading their properties (title, class, process), and performing actions (activate, minimize, close, move, resize).
virtual_desktop
Workspace management via the Virtual Desktop COM API. This module will provide workspace enumeration
and window-to-desktop movement, equivalent to the Linux list_workspaces and move_to_workspace
compositor operations. The Windows Virtual Desktop API is undocumented and version-fragile – COM
interface GUIDs change between Windows 10 and Windows 11 builds.
Platform-Specific Considerations
UI Automation vs. Wayland Protocols
On Linux, window management uses compositor-specific Wayland protocols (wlr-foreign-toplevel, COSMIC
toplevel). On Windows, UI Automation provides a single COM-based interface that works across all
window managers. The trade-off is COM initialization complexity and the need to handle apartment
threading models correctly (CoInitializeEx with COINIT_MULTITHREADED or
COINIT_APARTMENTTHREADED).
Credential Manager vs. Secret Service
Linux uses the freedesktop Secret Service API over D-Bus. Windows uses DPAPI
(CryptProtectData/CryptUnprotectData) for raw encryption tied to user credentials, and the
Credential Manager API (CredRead/CredWrite) for named credential storage. Both provide
user-scoped encrypted-at-rest storage, but the APIs are entirely different.
Task Scheduler vs. systemd
Windows uses the Task Scheduler for daemon autostart. Key differences from systemd:
- Readiness signaling: systemd supports Type=notify. Task Scheduler has no equivalent; the task is considered running when the process starts.
- Watchdog: systemd supports WatchdogSec. Task Scheduler can restart failed tasks but does not support health-check pings.
- Dependencies: systemd supports After=, Requires=. Task Scheduler supports task dependencies but with a less expressive model.
- Configuration: systemd uses INI-style unit files. Task Scheduler uses XML task definitions registered via COM or schtasks.exe.
Named Pipes vs. Unix Domain Sockets
The Noise IK IPC bus uses Unix domain sockets on Linux. On Windows, Named Pipes provide equivalent functionality with OS-level access control via security descriptors. Named Pipes support both byte-mode and message-mode communication; the IPC bus would use byte-mode to match the stream semantics of Unix domain sockets.
EDR Disclosure
Low-level keyboard hooks (WH_KEYBOARD_LL) and clipboard monitoring are flagged by Endpoint
Detection and Response (EDR) software common in enterprise environments. Deployment in managed
environments requires documentation of these behaviors and may require allowlist entries in the
organization’s security tooling.
Changelog
The changelog is auto-generated from conventional commit messages and
maintained in the repository root at
CHANGELOG.md.
Each GitHub Release includes the relevant changelog section along with
install instructions for the APT repository and direct .deb download
links. Release assets include SHA256 checksums and
SLSA Build Provenance attestations that can be verified
with:
gh attestation verify "open-sesame-linux-$(uname -m).deb" --owner ScopeCreep-zip
For the full version history, see the
Releases page or
the CHANGELOG.md file in the repository root.
License
Open Sesame is licensed under GPL-3.0-only (GNU General Public License, version 3, with no “or later” clause).
Why GPL-3.0
The cosmic-protocols crate, which provides Wayland protocol definitions for
the COSMIC desktop compositor, is licensed under GPL-3.0-only. Because
Open Sesame links against cosmic-protocols in the platform-linux and
daemon-wm crates, the entire combined work must be distributed under
GPL-3.0-only to satisfy the license terms.
License Text
The full license text is in the
LICENSE
file at the repository root. It is the standard GNU General Public License
version 3 as published by the Free Software Foundation on 29 June 2007.
SPDX Identifier
All crate manifests declare license = "GPL-3.0-only" in their
Cargo.toml workspace configuration, using the
SPDX license identifier.
Security Hardening Field Guide
A practical, encyclopedic reference for debugging and troubleshooting Linux security hardening across seccomp-bpf, Landlock, systemd sandboxing, and related tooling. Written for engineers hardening multi-daemon Linux applications.
1. Overview
Modern Linux application hardening is built on defense-in-depth: multiple independent security layers that each reduce the blast radius of a compromise. No single layer is sufficient. The three primary layers are:
| Layer | Scope | Enforced By |
|---|---|---|
| systemd sandboxing | Mount namespaces, resource limits, lifecycle | systemd (PID 1 / user manager) |
| Landlock | Filesystem access control | Kernel LSM, applied per-process |
| seccomp-bpf | Syscall filtering | Kernel, applied per-thread/process |
These layers compose because they operate at different abstraction levels:
- systemd mount namespaces control what the process can see on the filesystem. A process inside `ProtectSystem=strict` literally cannot write to `/usr` because its mount namespace has a read-only bind mount.
- Landlock controls what the process is allowed to access within the paths it can see. Even if systemd exposes a writable path, Landlock can restrict the process to specific subdirectories.
- seccomp-bpf controls what the process is allowed to do at the syscall level. Even if a process can open a file, seccomp can block it from calling `execve`, `ptrace`, or `mount`.
A compromised daemon that escapes one layer still faces the others. This guide covers how to implement each layer correctly, the non-obvious failure modes, and how to debug them when things go wrong.
2. seccomp-bpf
2.1 How seccomp works
seccomp-bpf attaches a BPF program to a process (or thread) that intercepts every syscall before the kernel executes it. The BPF program inspects the syscall number and arguments, then returns a verdict.
Activation:
prctl(PR_SET_NO_NEW_PRIVS, 1, 0, 0, 0); // required first
seccomp(SECCOMP_SET_MODE_FILTER, flags, &prog);
The SECCOMP_FILTER_FLAG_TSYNC flag is critical for multi-threaded programs:
it synchronizes the filter to all threads in the thread group atomically. Without
it, each thread must install the filter individually, creating a race window.
Action modes:
| Action | Behavior | Use Case |
|---|---|---|
| SECCOMP_RET_KILL_PROCESS | Kills the entire process with SIGSYS | Production: fail-closed, no zombie threads |
| SECCOMP_RET_KILL_THREAD | Kills only the offending thread | Dangerous with async runtimes (see 2.2) |
| SECCOMP_RET_ERRNO | Returns an errno to the caller | Graceful degradation, testable |
| SECCOMP_RET_LOG | Allows but logs via audit | Development/audit mode |
Choosing an action mode:
- Use `SECCOMP_RET_KILL_PROCESS` in production. It is the safest default. A process that violates its seccomp policy is compromised and should die.
- Use `SECCOMP_RET_LOG` during development to discover which syscalls your code actually needs without killing it.
- Use `SECCOMP_RET_ERRNO` (EPERM) only when you have code that gracefully handles the error (e.g., optional features that degrade).
- Avoid `SECCOMP_RET_KILL_THREAD` unless you fully understand the implications for your threading model. Read section 2.2.
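Each action is simply a BPF return word: the high 16 bits select the verdict and the low 16 bits (`SECCOMP_RET_DATA`) carry verdict-specific data, such as the errno for `SECCOMP_RET_ERRNO`. A minimal sketch of the encoding, with constant values copied from the kernel UAPI header `<linux/seccomp.h>` (the `errno_verdict` helper is illustrative, not a kernel or libc API):

```rust
#![allow(dead_code)]

// Verdict constants from <linux/seccomp.h> (kernel UAPI).
const SECCOMP_RET_KILL_PROCESS: u32 = 0x8000_0000;
const SECCOMP_RET_KILL_THREAD: u32 = 0x0000_0000;
const SECCOMP_RET_ERRNO: u32 = 0x0005_0000;
const SECCOMP_RET_LOG: u32 = 0x7ffc_0000;
const SECCOMP_RET_ALLOW: u32 = 0x7fff_0000;
// Low 16 bits of the return word carry verdict-specific data.
const SECCOMP_RET_DATA: u32 = 0x0000_ffff;

/// Build an ERRNO verdict; e.g. errno_verdict(1) yields EPERM on Linux.
fn errno_verdict(errno: u16) -> u32 {
    SECCOMP_RET_ERRNO | (u32::from(errno) & SECCOMP_RET_DATA)
}

fn main() {
    // A filter returning this word makes the blocked syscall fail with
    // EPERM instead of killing the thread or process.
    assert_eq!(errno_verdict(1), 0x0005_0001);
    println!("ERRNO(EPERM) verdict = {:#010x}", errno_verdict(1));
}
```

These are the same return words a hand-rolled BPF filter (or a wrapper library such as libseccomp) ultimately emits.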
2.2 KillThread + async runtimes (CRITICAL)
This is the single most dangerous failure mode in seccomp’d async applications.
When SECCOMP_RET_KILL_THREAD kills a thread in tokio’s (or async-std’s)
blocking thread pool, the JoinHandle returned by spawn_blocking never
resolves. The kernel destroys the thread. The channel that the runtime uses
to send the result back is dropped without sending. The JoinHandle future
polls forever.
The cascade:
1. A `spawn_blocking` task calls a blocked syscall (e.g., `ftruncate` for SQLite WAL rollback).
2. seccomp kills that thread with SIGSYS.
3. The `JoinHandle` future never completes.
4. The `tokio::select!` branch waiting on that handle blocks forever.
5. The event loop freezes. No other futures make progress.
6. The watchdog timer (if it ticks inside the same event loop) stops ticking.
7. systemd's `WatchdogSec` fires and kills the process.
This is silent. No logs. No crash. No panic. The process simply freezes and systemd eventually SIGKILLs it. Journalctl shows a watchdog timeout with no preceding error messages.
Design rule: Every spawn_blocking in a seccomp-filtered process MUST
have a timeout wrapper:
#![allow(unused)]
fn main() {
use tokio::time::{timeout, Duration};
let result = timeout(
Duration::from_secs(10),
tokio::task::spawn_blocking(move || {
// potentially blocked operation
database.execute("PRAGMA wal_checkpoint(TRUNCATE)")
}),
)
.await;
match result {
Ok(Ok(Ok(rows))) => { /* success */ }
Ok(Ok(Err(e))) => { /* database error */ }
Ok(Err(e)) => { /* JoinError: thread panicked */ }
Err(_) => {
// TIMEOUT: likely seccomp killed the thread
tracing::error!("spawn_blocking timed out -- possible seccomp kill");
// Initiate graceful shutdown or restart
}
}
}
This does not prevent the thread death, but it prevents the entire event loop from freezing and gives you a log line to debug.
2.3 SIGSYS signal handler
When a filter returns SECCOMP_RET_TRAP, the kernel delivers a catchable SIGSYS
carrying the blocked syscall number, and you can install a handler to log it.
The KILL_THREAD and KILL_PROCESS verdicts, by contrast, terminate the target
as if by SIGSYS without invoking any handler, so for debugging use a TRAP (or
LOG) action and install a handler like the one below.
Constraints: The signal handler runs in signal context. You must not allocate, lock mutexes, or call most libc functions. Use only async-signal-safe functions.
#![allow(unused)]
fn main() {
use libc::{
c_int, c_void, sigaction, siginfo_t, SA_RESETHAND, SA_SIGINFO, SIGSYS,
};
unsafe extern "C" fn sigsys_handler(
_sig: c_int,
info: *mut siginfo_t,
_ctx: *mut c_void,
) {
// si_syscall contains the blocked syscall number
let syscall = (*info).si_syscall;
// Write directly to stderr (fd 2) -- no allocator, no buffering
// Manual integer formatting in a stack buffer
let mut buf = [0u8; 64];
let prefix = b"seccomp: blocked syscall ";
buf[..prefix.len()].copy_from_slice(prefix);
let mut pos = prefix.len();
// Convert syscall number to decimal digits
if syscall == 0 {
buf[pos] = b'0';
pos += 1;
} else {
let mut n = syscall;
let start = pos;
while n > 0 {
buf[pos] = b'0' + (n % 10) as u8;
pos += 1;
n /= 10;
}
buf[start..pos].reverse();
}
buf[pos] = b'\n';
pos += 1;
let _ = libc::write(2, buf.as_ptr() as *const c_void, pos);
}
pub fn install_sigsys_handler() {
unsafe {
let mut sa: sigaction = std::mem::zeroed();
sa.sa_sigaction = sigsys_handler as usize;
sa.sa_flags = SA_SIGINFO | SA_RESETHAND;
sigaction(SIGSYS, &sa, std::ptr::null_mut());
}
}
}
Important: Install the handler before applying the seccomp filter.
Why output may not appear in journalctl:
- With `KILL_THREAD`, the thread dies but the process lives. The write to stderr may succeed, but if the process later freezes (see 2.2), journald may not flush the pipe buffer before systemd kills it.
- With `KILL_PROCESS`, the write races against process teardown.
- Use `SA_RESETHAND` so the handler fires once, then the default (kill) takes effect on the next violation.
2.4 Building seccomp allowlists with strace
The only reliable way to build a seccomp allowlist is to trace your application under real workloads.
Step 1: Trace all threads
# Attach to a running process
strace -f -o /tmp/trace.log -p $(pidof my-daemon)
# Or launch under strace
strace -f -o /tmp/trace.log -- ./my-daemon
The -f flag follows child threads and processes.
Step 2: Exercise ALL code paths
This is where most allowlists fail. You must exercise:
- Startup and initialization
- Normal operation (happy path)
- Error paths (invalid input, network failure, disk full)
- Shutdown (graceful and SIGTERM)
- Database operations (open, read, write, WAL checkpoint, vacuum)
- Config reload (inotify, file re-read)
- IPC (socket creation, connection, message exchange)
Step 3: Extract unique syscalls
awk -F'(' '{print $1}' /tmp/trace.log \
| sed 's/^[0-9]* *//' \
| sort -u \
> /tmp/syscalls.txt
Step 4: Diff against your allowlist
Compare the trace output against your current allowlist. Add any missing syscalls.
Commonly missed syscalls:
| Syscall | Triggered By |
|---|---|
| ftruncate | SQLite WAL rollback/checkpoint |
| fsync | SQLite PRAGMA, checkpoint, journal |
| fdatasync | SQLite WAL writes |
| pwrite64 | SQLite WAL page writes |
| fallocate | SQLite pre-allocating journal/WAL space |
| readlink | Symlink resolution (common on NixOS) |
| inotify_init1 | File watcher initialization |
| inotify_add_watch | Watching config files for changes |
| inotify_rm_watch | Cleaning up file watches |
| statx | Modern stat replacement (glibc 2.28+) |
| getrandom | Cryptographic RNG, SQLCipher |
| clone3 | Modern thread creation (glibc 2.34+) |
2.5 Common pitfalls
fdatasync vs fsync:
SQLite uses both. fdatasync for WAL writes (it only needs data, not
metadata). fsync for PRAGMA operations and WAL checkpoints (it needs full
metadata sync). Missing either one causes intermittent seccomp kills that
only trigger under write load.
SQLite WAL mode syscall set:
A complete SQLite WAL allowlist includes: openat, ftruncate, pwrite64,
pread64, fallocate, rename, fsync, fdatasync, fcntl (for
F_SETLK/F_GETLK advisory locking), fstat, lseek, unlink.
D-Bus / zbus syscalls:
If your daemon communicates over D-Bus (e.g., for desktop integration):
socket, connect, sendmsg, recvmsg, geteuid (D-Bus auth), shutdown,
getsockopt, setsockopt.
inotify file watchers:
Any config hot-reload mechanism using inotify needs: inotify_init1,
inotify_add_watch, inotify_rm_watch, read (for reading events from the
inotify fd), epoll_ctl (if using epoll to watch the inotify fd).
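If you also want a policy that auditors can read without opening the source, the same groups can be approximated at the systemd layer with `SystemCallFilter=`. Listing syscalls without a leading `~` makes it an allowlist, and repeated allowlist lines merge. A sketch (the curated `@system-service` set already covers most common daemon syscalls, so the extra lines add only the storage and watcher calls discussed above):

```
[Service]
# Allowlist: only the named syscalls (plus the @system-service set) are permitted.
SystemCallFilter=@system-service
SystemCallFilter=ftruncate fallocate fsync fdatasync
SystemCallFilter=inotify_init1 inotify_add_watch inotify_rm_watch
# Return EPERM instead of killing the process on a violation.
SystemCallErrorNumber=EPERM
```

This is coarser than an application-level filter (see section 6), but it catches regressions early and documents intent in the unit file.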
SECCOMP_FILTER_FLAG_TSYNC timing:
TSYNC applies the filter to all existing threads. If a file watcher
thread was spawned before seccomp is applied, it gets the filter too. If that
thread’s syscalls are not in the allowlist, it dies. Either:
- Apply seccomp before spawning any background threads, or
- Ensure the allowlist covers all threads’ syscalls, or
- Have background threads install their own filters before doing work.
3. Landlock
3.1 How Landlock works
Landlock is a Linux Security Module (LSM) that provides unprivileged, process-level filesystem access control. Unlike seccomp (which filters syscalls), Landlock filters filesystem operations on specific paths.
#![allow(unused)]
fn main() {
// Pseudocode for Landlock setup
let ruleset = Ruleset::default()
.handle_access(AccessFs::from_all(abi_version))?
.create()?;
// Grant read-only access to config directory
ruleset.add_rule(PathBeneath::new(
File::open("/etc/myapp")?,
AccessFs::ReadFile | AccessFs::ReadDir,
))?;
// Grant read-write access to runtime directory
ruleset.add_rule(PathBeneath::new(
File::open("/run/user/1000/myapp")?,
AccessFs::from_all(abi_version),
))?;
// Enforce -- no more rules can be added after this
ruleset.restrict_self()?;
}
Key properties:
- Rules are additive: you start with no access and grant specific paths.
- Rules are inherited: child processes inherit the restriction.
- Rules are stackable: multiple Landlock rulesets compose (intersection).
- Landlock requires no privileges – any process can restrict itself (like seccomp, `restrict_self()` requires `no_new_privs` to be set first).
ABI versions (V1 through V6) add support for new access rights. Always query the running kernel’s supported version and degrade gracefully:
#![allow(unused)]
fn main() {
let abi = landlock::ABI::V3; // minimum supported
let actual = landlock::ABI::new_current().unwrap_or(abi);
}
3.2 Symlink resolution
Landlock grants access to the resolved path, not the symlink itself. This is a critical distinction on distributions that use symlink farms.
NixOS and Guix store all packages in /nix/store/ (or /gnu/store/)
and symlink configuration files into place:
/etc/myapp/config.toml -> /nix/store/abc123-myapp-config/config.toml
If you grant Landlock access to /etc/myapp/, the process can open the
symlink. But the target is in /nix/store/, which is not in the ruleset.
The open fails with EACCES.
Solution: Canonicalize all config paths before building Landlock rules:
#![allow(unused)]
fn main() {
use std::fs;
use std::path::{Path, PathBuf};
use std::collections::HashSet;
fn resolve_landlock_paths(paths: &[&str]) -> HashSet<PathBuf> {
let mut resolved = HashSet::new();
for path in paths {
let p = Path::new(path);
if p.exists() {
// Add the original path
resolved.insert(p.to_path_buf());
// Add the canonical (resolved) path
if let Ok(canonical) = fs::canonicalize(p) {
resolved.insert(canonical.clone());
// Also add parent directories for traversal
if let Some(parent) = canonical.parent() {
resolved.insert(parent.to_path_buf());
}
}
}
}
resolved
}
}
Then add all resolved paths as read-only rules.
3.3 Common pitfalls
/dev/urandom blocked:
SQLCipher and OpenSSL read from /dev/urandom for random bytes. If Landlock
blocks /dev/urandom, they fall back to the getrandom() syscall, which
bypasses the filesystem entirely. This usually works, but you may see EACCES
errors in logs. Grant read access to /dev/urandom to silence them:
#![allow(unused)]
fn main() {
ruleset.add_rule(PathBeneath::new(
File::open("/dev/urandom")?,
AccessFs::ReadFile,
))?;
}
NOTIFY_SOCKET path:
sd_notify() communicates with systemd via a Unix socket whose path is in
$NOTIFY_SOCKET. This can be either:
- Abstract socket (prefixed with `@`): bypasses the filesystem entirely. Landlock does not apply. No rule needed.
- Filesystem socket (e.g., `/run/user/1000/systemd/notify`): Landlock must allow write access to this path, or `sd_notify()` silently fails.
Check before adding rules:
#![allow(unused)]
fn main() {
if let Ok(sock) = std::env::var("NOTIFY_SOCKET") {
if !sock.starts_with('@') {
// Filesystem socket -- add to Landlock rules
let sock_path = Path::new(&sock);
if let Some(parent) = sock_path.parent() {
ruleset.add_rule(PathBeneath::new(
File::open(parent)?,
AccessFs::WriteFile,
))?;
}
}
}
}
Abstract sockets bypass Landlock entirely:
Any Unix domain socket with an abstract address (beginning with a null byte,
shown as @ in ss output) is not subject to Landlock filesystem rules.
This is by design – abstract sockets live in the network namespace, not the
filesystem. If you need to restrict abstract socket access, seccomp can block
connect/bind outright, but a classic BPF filter cannot dereference the
sockaddr pointer to inspect the address; for finer-grained control, use
network namespaces.
4. systemd Sandboxing
4.1 Mount namespaces
systemd can create per-service mount namespaces that restrict the filesystem view. This is the outermost sandbox layer.
Key directives for [Service] sections:
[Service]
# Read-only root filesystem (bind mount overlays)
ProtectSystem=strict
# User home directory is read-only
ProtectHome=read-only
# Specific writable paths (bind-mounted into the namespace)
ReadWritePaths=/run/user/%U/myapp /home/%U/.local/share/myapp
# Restrict /proc, /sys, kernel tunables
ProtectProc=invisible
ProtectKernelTunables=yes
ProtectKernelModules=yes
ProtectControlGroups=yes
# Private /tmp
PrivateTmp=yes
# No new privileges (required for seccomp)
NoNewPrivileges=yes
# Restrict capabilities
CapabilityBoundingSet=
AmbientCapabilities=
Critical requirement: Every path listed in ReadWritePaths= must exist
on the host before the service starts. If the directory does not exist,
systemd cannot create the bind mount, and the service fails with
exit status 226/NAMESPACE.
This is the most common systemd sandbox failure mode.
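A cheap way to catch this before systemd does is a preflight check in your installer or test harness. A hypothetical `missing_paths` helper (the name and the example paths are illustrative, not part of any API):

```rust
use std::path::Path;

/// Return the configured writable paths that do not yet exist on the host.
/// Anything listed here would make systemd fail the unit with
/// 226/NAMESPACE, so create these (e.g., via tmpfiles.d) before starting.
fn missing_paths<'a>(read_write_paths: &[&'a str]) -> Vec<&'a str> {
    read_write_paths
        .iter()
        .copied()
        .filter(|p| !Path::new(p).exists())
        .collect()
}

fn main() {
    // Illustrative values; /tmp exists on any Linux host.
    let missing = missing_paths(&["/tmp", "/run/user/0/definitely-missing"]);
    assert_eq!(missing, vec!["/run/user/0/definitely-missing"]);
    eprintln!("missing ReadWritePaths: {missing:?}");
}
```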
4.2 tmpfiles.d for directory pre-creation
The chicken-and-egg problem: your daemon creates its directories on first run, but systemd’s mount namespace fails if those directories do not already exist.
Solution: Use systemd-tmpfiles to create directories at user session
login, before any service starts.
For NixOS (in your system or home-manager configuration):
systemd.user.tmpfiles.rules = [
"d %t/myapp 0700 - - -" # /run/user/UID/myapp
"d %h/.config/myapp 0700 - - -" # ~/.config/myapp
"d %h/.local/share/myapp 0700 - - -"
];
For other distributions, create ~/.config/systemd/user/tmpfiles.d/myapp.conf:
# Type Path Mode User Group Age
d %t/myapp 0700 - - -
d %h/.config/myapp 0700 - - -
d %h/.local/share/myapp 0700 - - -
Specifiers: %t = $XDG_RUNTIME_DIR, %h = $HOME, %U = numeric UID.
For wipe/reinitialize flows (e.g., factory reset, test harness):
# Recreate directories after wiping
rm -rf ~/.local/share/myapp
systemd-tmpfiles --user --create
systemctl --user restart myapp.service
Defense-in-depth: The application should also create its directories on
startup (a bootstrap_dirs() function) so it works on platforms without
systemd (containers, macOS, BSDs). tmpfiles.d is the systemd-specific layer;
application bootstrap is the portable layer.
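A minimal sketch of that portable layer, assuming a `bootstrap_dirs()` function as named above (std-only; the demo paths in `main` are illustrative):

```rust
use std::fs;
use std::io;
use std::os::unix::fs::PermissionsExt;
use std::path::Path;

/// Portable directory bootstrap: create each directory (and its parents)
/// with 0700 permissions. Idempotent -- create_dir_all is a no-op for
/// directories that already exist -- so it is safe on every startup.
fn bootstrap_dirs(dirs: &[&Path]) -> io::Result<()> {
    for dir in dirs {
        fs::create_dir_all(dir)?;
        fs::set_permissions(dir, fs::Permissions::from_mode(0o700))?;
    }
    Ok(())
}

fn main() -> io::Result<()> {
    // Illustrative paths; a real daemon derives these from
    // XDG_RUNTIME_DIR / XDG_CONFIG_HOME / XDG_DATA_HOME.
    let base = std::env::temp_dir().join("myapp-bootstrap-demo");
    bootstrap_dirs(&[&base.join("config"), &base.join("data")])?;
    assert!(base.join("config").is_dir());
    Ok(())
}
```

Call it before applying Landlock, so the directories exist when the rules are built.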
4.3 Service type alignment
Type=notify:
The daemon signals readiness by calling sd_notify("READY=1"). systemd waits
for this signal before marking the service as active.
#![allow(unused)]
fn main() {
// Using the sd-notify crate or raw socket write
sd_notify::notify(false, &[sd_notify::NotifyState::Ready])?;
}
If you set Type=simple but your daemon calls sd_notify, systemd ignores
the notification silently. The service is marked active immediately on exec.
This is not an error – it just means your readiness signal does nothing.
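The notify protocol itself is just a datagram sent to the Unix socket named by $NOTIFY_SOCKET, so a dependency-free fallback is easy to sketch. This is a simplified stand-in for the sd-notify crate, handling filesystem socket paths only (abstract `@`-sockets need a leading-NUL sockaddr, which std's path-based API cannot express); `main` simulates systemd by binding a receiver socket:

```rust
#![allow(dead_code)]
use std::io;
use std::os::unix::net::UnixDatagram;
use std::path::Path;

/// Send one notify datagram (e.g. "READY=1") to a filesystem notify socket.
fn sd_notify_to<P: AsRef<Path>>(socket: P, state: &str) -> io::Result<()> {
    let sock = UnixDatagram::unbound()?;
    sock.send_to(state.as_bytes(), socket)?;
    Ok(())
}

/// sd_notify stand-in: no-op outside systemd; abstract sockets unsupported.
fn sd_notify(state: &str) -> io::Result<bool> {
    match std::env::var("NOTIFY_SOCKET") {
        Ok(path) if path.starts_with('@') => Err(io::Error::new(
            io::ErrorKind::Unsupported,
            "abstract notify socket: use libc or the sd-notify crate",
        )),
        Ok(path) => sd_notify_to(path, state).map(|()| true),
        Err(_) => Ok(false), // not running under systemd
    }
}

fn main() -> io::Result<()> {
    // Simulate systemd: bind a receiver and send READY=1 to it.
    let path = std::env::temp_dir().join("notify-demo.sock");
    let _ = std::fs::remove_file(&path);
    let receiver = UnixDatagram::bind(&path)?;
    sd_notify_to(&path, "READY=1")?;
    let mut buf = [0u8; 64];
    let n = receiver.recv(&mut buf)?;
    assert_eq!(&buf[..n], b"READY=1");
    Ok(())
}
```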
WatchdogSec=:
The daemon must call sd_notify("WATCHDOG=1") at least every
WatchdogSec / 2 interval. If the event loop freezes (e.g., due to seccomp
killing a thread – see 2.2), the watchdog fires and systemd restarts the
service.
#![allow(unused)]
fn main() {
// Tick the watchdog inside the main event loop
loop {
tokio::select! {
msg = ipc_rx.recv() => { handle_message(msg).await; }
_ = watchdog_interval.tick() => {
sd_notify::notify(false, &[sd_notify::NotifyState::Watchdog])?;
}
}
}
}
Place the watchdog tick in the event loop, not in a separate thread. A separate thread will keep ticking even when the event loop is frozen, defeating the purpose.
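systemd exports the configured watchdog interval to the service as $WATCHDOG_USEC (in microseconds). A small helper can derive the half-interval tick rate; the value is passed in as an argument here for testability, where real code would read `std::env::var("WATCHDOG_USEC")`:

```rust
use std::time::Duration;

/// Derive the watchdog tick interval from systemd's $WATCHDOG_USEC value
/// (microseconds). Returns None when no watchdog is configured or the
/// value is malformed.
fn watchdog_tick_interval(watchdog_usec: Option<&str>) -> Option<Duration> {
    let usec: u64 = watchdog_usec?.trim().parse().ok()?;
    if usec == 0 {
        return None;
    }
    // Tick at half the configured interval to leave headroom.
    Some(Duration::from_micros(usec / 2))
}

fn main() {
    // WatchdogSec=30s -> WATCHDOG_USEC=30000000 -> tick every 15s.
    assert_eq!(
        watchdog_tick_interval(Some("30000000")),
        Some(Duration::from_secs(15))
    );
    // Not running under systemd: no watchdog, nothing to tick.
    assert!(watchdog_tick_interval(None).is_none());
}
```

Deriving the interval from the environment keeps the unit file as the single source of truth: changing `WatchdogSec=` never desynchronizes the daemon's heartbeat.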
TimeoutStopSec=:
How long systemd waits after sending SIGTERM before sending SIGKILL. Set this
to give your daemon time for graceful shutdown (flush databases, close
connections), but not so long that a hung daemon blocks restarts.
TimeoutStopSec=10s
4.4 Common pitfalls
RuntimeDirectory= with ProtectSystem=strict:
For user services, RuntimeDirectory=myapp creates /run/user/UID/myapp
inside the mount namespace. This directory is only visible to that specific
service instance. Other services in the same user session cannot see it.
If you need a shared runtime directory, use ReadWritePaths= with a
directory created by tmpfiles.d.
PrivateNetwork=yes and Unix sockets:
PrivateNetwork=yes creates a new network namespace with only a loopback
interface. TCP/UDP connections to external hosts are blocked. However, Unix
domain sockets on the filesystem are unaffected – they are filesystem
operations, not network operations. This means IPC over Unix sockets works
fine with PrivateNetwork=yes, which is usually what you want for a daemon
that only communicates via local IPC.
sd_notify silently succeeds when NOTIFY_SOCKET is unset:
When running outside systemd (e.g., in a terminal for debugging),
$NOTIFY_SOCKET is not set. The sd_notify() call returns success without
doing anything. Add diagnostic logging so you know whether notifications are
actually being delivered:
#![allow(unused)]
fn main() {
if std::env::var("NOTIFY_SOCKET").is_ok() {
tracing::info!("systemd notify socket available");
sd_notify::notify(false, &[sd_notify::NotifyState::Ready])?;
} else {
tracing::warn!("NOTIFY_SOCKET not set -- sd_notify disabled");
}
}
5. Debugging Toolkit
5.1 strace
strace is the single most valuable tool for debugging seccomp and Landlock issues.
Trace all threads of a running process:
strace -f -o /tmp/trace.log -p $(pidof my-daemon)
Filter out noisy syscalls:
strace -f -e trace='!read,write,close,epoll_wait,futex,nanosleep' \
-o /tmp/trace.log -p $(pidof my-daemon)
Find seccomp kills:
grep "killed by SIGSYS" /tmp/trace.log
The last syscall logged for that thread (immediately before the
+++ killed by SIGSYS +++ line) is the blocked syscall. Example:
[pid 12345] ftruncate(7, 0) = ?
[pid 12345] +++ killed by SIGSYS (core dumped) +++
This tells you ftruncate is missing from the allowlist.
Trace all daemons simultaneously for comprehensive coverage:
for pid in $(pgrep -f 'my-daemon'); do
strace -f -o /tmp/trace-${pid}.log -p $pid &
done
# Exercise all code paths, then kill strace processes
5.2 journalctl
View logs for a user service:
journalctl --user -u my-daemon.service --no-pager -o short-precise
Key exit status codes:
| Status | Meaning | Likely Cause |
|---|---|---|
| 226/NAMESPACE | Mount namespace setup failed | ReadWritePaths directory does not exist |
| 31/SYS | Killed by signal 31 (SIGSYS) | seccomp blocked a syscall |
| 6/ABRT | Aborted | Watchdog timeout, assertion failure, or panic |
| -1/WATCHDOG | Watchdog timeout | Event loop frozen (see 2.2) |
Watch in real time:
journalctl --user -u my-daemon.service -f -o short-precise
5.3 systemctl
Check service status:
systemctl --user status my-daemon.service
Look for: Active: (running/failed/inactive), Main PID:, exit code/status.
Clear failed state: After a service fails, systemd remembers the failure. You must reset it before restarting:
systemctl --user reset-failed my-daemon.service
systemctl --user start my-daemon.service
Recreate tmpfiles.d directories:
systemd-tmpfiles --user --create
This is idempotent – safe to run anytime.
5.4 Diagnostic patterns
“No such file or directory” + status=226:
The service’s ReadWritePaths or ReadOnlyPaths references a directory that
does not exist on the host filesystem. systemd cannot create the bind mount
into the namespace.
Fix: Ensure tmpfiles.d rules create all required directories. Run
systemd-tmpfiles --user --create and retry.
Watchdog timeout with no error logs:
The event loop is frozen. The most common cause is seccomp KILL_THREAD
silently destroying a thread that tokio::select! is waiting on (see 2.2).
Debug: Attach strace to the process, exercise the code path that triggers the
freeze, look for SIGSYS kills. Add timeout wrappers to spawn_blocking
calls to regain visibility.
“database is locked” after timeout:
A spawn_blocking thread was killed by seccomp while holding an fcntl
advisory lock on a SQLite file. The lock was not released because the thread
died without running destructors. The file descriptor may still be open (held
by the process, not the thread).
Fix: Add the missing syscall to the allowlist. If the database is stuck,
restart the process (the lock is released when the fd is closed on process
exit). For robustness, set PRAGMA busy_timeout so SQLite retries instead of
immediately returning SQLITE_BUSY.
Silent timeout from CLI (e.g., 5 seconds, no response): The daemon received the IPC message but froze during processing. The CLI’s request timeout fires. This is the user-visible symptom of the event loop freeze described above.
Debug: Check if the daemon process is still running (ps aux | grep daemon).
If it is running but not responding, it is frozen. Attach strace.
6. Defense-in-Depth Architecture
The two-tier model:
+--------------------------+
| systemd (outer) |
| Mount namespaces |
| Resource limits |
| (LimitNOFILE, MemoryMax) |
| Watchdog lifecycle |
| ProtectSystem, |
| ProtectHome |
+-----------+--------------+
|
+-----------v--------------+
| Application (inner) |
| Landlock filesystem ACL |
| seccomp-bpf syscall |
| filter |
| setrlimit |
| (NOFILE, MEMLOCK) |
| Directory bootstrap |
+--------------------------+
systemd owns:
- Process lifecycle (start, stop, restart, watchdog)
- Outer filesystem isolation (mount namespaces)
- Resource limits that survive application bugs (
MemoryMax,TasksMax) - Compliance posture (auditors can inspect unit files)
The application owns:
- Inner filesystem isolation (Landlock – more granular than mount namespaces)
- Syscall filtering (seccomp – systemd’s
SystemCallFilteris a convenience wrapper, but application-level gives more control) - Resource self-limits (
setrlimit– defense against fd leaks, memory leaks) - Directory bootstrapping (portable across platforms)
Both layers are required:
- systemd provides the compliance and lifecycle layer. Auditors and distribution packagers can review unit files without reading application code.
- Landlock and seccomp provide the defense-in-depth layer. They protect against vulnerabilities within the application itself.
- The application must work on non-systemd platforms (containers, macOS, embedded Linux). Landlock and seccomp are Linux-specific but do not require systemd. The application’s bootstrap code handles the portable case.
7. Checklist: Hardening a New Daemon
Use this checklist when adding security hardening to a new daemon. Each item addresses a specific failure mode described in this guide.
systemd unit file
- `Type=notify` with `sd_notify("READY=1")` in application code
- `WatchdogSec=30s` (adjust to your heartbeat interval)
- `TimeoutStopSec=10s` (enough for graceful shutdown)
- `Restart=on-failure`, `RestartSec=2s`
- `ProtectSystem=strict`
- `ProtectHome=read-only`
- `ReadWritePaths=` for every writable directory
- `NoNewPrivileges=yes`
- `PrivateTmp=yes`
- tmpfiles.d rules for every directory in `ReadWritePaths`
Application bootstrap
- `bootstrap_dirs()` creates all required directories (portable fallback)
- `setrlimit(RLIMIT_NOFILE, ...)` to cap file descriptors
- `setrlimit(RLIMIT_MEMLOCK, ...)` if using mlock for secrets
Landlock
- Grant `ReadWrite` to runtime directory (`$XDG_RUNTIME_DIR/myapp`)
- Grant `ReadOnly` to config directory (`$XDG_CONFIG_HOME/myapp`)
- Grant `ReadWrite` to data directory (`$XDG_DATA_HOME/myapp`)
- Canonicalize all paths to resolve symlinks (NixOS/Guix)
- Grant `ReadOnly` to `/dev/urandom` if using crypto
- Check `$NOTIFY_SOCKET` – if it is a filesystem path, grant write access
- Test on NixOS or with symlinked configs
seccomp allowlist
- Trace with `strace -f` under ALL code paths
- Include `fsync` AND `fdatasync` (SQLite uses both)
- Include `inotify_init1`, `inotify_add_watch`, `inotify_rm_watch` if using file watchers
- Include `readlink`, `readlinkat` if paths may be symlinks
- Include `getrandom` for crypto operations
- Include `clone3` if targeting glibc >= 2.34
- Use `SECCOMP_RET_KILL_PROCESS` (not `KILL_THREAD`) in production
- Use `SECCOMP_FILTER_FLAG_TSYNC` for multi-threaded programs
Defensive timeouts
- Every `spawn_blocking` wrapped with `tokio::time::timeout`
- Timeout duration is shorter than `WatchdogSec / 2`
- Timeout fires a log message identifying the blocked operation
SIGSYS handler
- Installed before the seccomp filter is applied
- Uses only async-signal-safe functions (raw `write` to fd 2)
- Logs the blocked syscall number
- Uses `SA_RESETHAND` to avoid infinite handler loops
Watchdog
- Ticks inside the main event loop (`tokio::select!` branch)
- Does NOT tick in a separate thread
- Interval is `WatchdogSec / 2` or less
Testing
- Wipe all state directories and recreate from scratch
- Start all daemons – verify no 226/NAMESPACE errors
- Exercise all features under normal operation
- Trigger error paths (bad input, network down, disk full)
- Verify watchdog ticks appear in journal
- Verify graceful shutdown completes within `TimeoutStopSec`
- Run full test cycle twice (catches state leaks from first run)