Introduction

Open Sesame is a trust-scoped secret and identity fabric for the desktop. It manages encrypted secret vaults with per-profile trust boundaries, provides window switching with letter-hint overlays, clipboard history with sensitivity detection, keyboard input capture, and text snippet expansion. Everything is scoped to trust profiles that activate based on context or manual selection.

Packages

Open Sesame ships as two packages:

open-sesame (headless core) contains the sesame CLI, daemon-profile, daemon-secrets, daemon-launcher, and daemon-snippets. It runs anywhere with systemd: desktops, servers, containers, and VMs. This package provides encrypted vaults, secret management, environment variable injection, application launching, and profile management without any GUI dependencies.

open-sesame-desktop (GUI layer) depends on open-sesame and adds daemon-wm, daemon-clipboard, and daemon-input. It requires a COSMIC or Wayland desktop. This package provides the window switcher overlay, clipboard history, and keyboard input capture.

Installing open-sesame-desktop pulls in open-sesame automatically. On a server or in a container, install just open-sesame for encrypted secrets and application launching.

Audience

This documentation is written for:

  • Contributors working on the Open Sesame codebase. The architecture and platform sections describe internal design, crate structure, IPC protocols, and implementation patterns.
  • Extension authors building WASM component model extensions. The extending section covers the extension host runtime, SDK, WIT interfaces, and OCI distribution.
  • Platform implementors adding support for new operating systems or compositor backends. The platform section documents the trait abstractions, factory patterns, and feature gating used across platform crates.
  • Security auditors reviewing the trust model, cryptographic primitives, sandbox enforcement, and key hierarchy. The secrets, authentication, and compliance sections provide the relevant detail.
  • Deployment engineers operating Open Sesame in production. The deployment and packaging sections cover systemd integration, service topology, and package structure.
Documentation Map

  • Architecture – internal design: crate map, daemon topology, IPC bus, data flows. Start here for a structural understanding of the system.
  • Secrets – vault system: SQLCipher storage, key hierarchy, Argon2id KDF, key-encryption keys, per-profile isolation.
  • Authentication – unlock mechanisms: password, SSH agent, multi-factor auth policy engine.
  • Platform – OS abstraction layer: Linux (Wayland, D-Bus, evdev, systemd), macOS (Accessibility, Keychain, launchd), Windows (UI Automation, Credential Manager, Task Scheduler).
  • Extending – extension system: Wasmtime host, WASI component model, WIT bindings, OCI packaging.
  • Desktop – window management: compositor integration, overlay rendering, focus tracking.
  • Deployment – operations: systemd units, service readiness, watchdog, packaging.
  • Compliance – security posture: Landlock, seccomp, mlock, guard pages, audit logging.

For user-facing quick start instructions, CLI reference, and configuration guide, see the README.

Architecture Overview

Open Sesame is a trust-scoped secret and identity fabric for Linux, built as 21 Rust crates organized into a Cargo workspace. The system runs as seven cooperating daemons under systemd, communicating over a Noise IK encrypted IPC bus. Two Debian packages partition the daemons into a headless core suitable for servers, containers, and VMs, and a desktop layer that requires a Wayland compositor.

Crate Topology

The workspace (Cargo.toml:2-26) contains 21 crates in five layers: memory and cryptographic foundations, shared abstractions, platform bindings, daemon binaries, and the extension system.

Foundation Layer

  • core-memory – Page-aligned secure memory allocator backed by memfd_secret(2)
  • core-crypto – Cryptographic primitives: AES-256-GCM, Argon2id, SecureBytes, EncryptedStore
  • core-types – Shared types, error types, and event schema for the IPC bus

Abstraction Layer

  • core-config – Configuration schema, validation, hot-reload, and policy override
  • core-ipc – IPC bus protocol, postcard framing, BusServer/BusClient
  • core-fuzzy – Fuzzy matching (nucleo), frecency scoring, FTS5, and index abstractions
  • core-secrets – Secret storage abstraction over platform keystores and age-encrypted vaults
  • core-profile – Profile schema, context-driven activation, isolation contracts, and atomic switching
  • core-auth – Pluggable authentication backends for vault unlock (password, SSH-agent, future FIDO2)

Platform Layer

  • platform-linux – Linux API wrappers: evdev, uinput, Wayland protocols, D-Bus, Landlock, seccomp
  • platform-macos – macOS API wrappers: Accessibility, CGEventTap, NSPasteboard, Keychain, LaunchAgent
  • platform-windows – Windows API wrappers: Win32 hooks, UI Automation, Credential Manager, Group Policy

Daemon Layer

  • daemon-profile – Profile orchestrator daemon: hosts IPC bus server, context evaluation, concurrent profile activation
  • daemon-secrets – Secrets broker daemon: JIT delivery, keyring integration, profile-scoped access
  • daemon-launcher – Application launcher daemon: fuzzy search, frecency, overlay UI, desktop entry discovery
  • daemon-snippets – Snippet expansion daemon: template rendering, variable substitution, secret injection
  • daemon-wm – Window manager daemon: Wayland overlay window switcher with letter-hint navigation
  • daemon-clipboard – Clipboard manager daemon: history, encryption, sensitivity detection, profile scoping
  • daemon-input – Input remapper daemon: keyboard layers, app-aware rules, macro expansion

Orchestration and Extension Layer

  • open-sesame – The sesame CLI binary: platform orchestration for multi-agent desktop control
  • sesame-workspace – Workspace-level integration utilities shared across daemons
  • extension-host – WASM extension host: wasmtime runtime, capability sandbox, extension lifecycle
  • extension-sdk – Extension SDK: types, host function bindings, and WIT interfaces for extensions

Daemon Model

The seven daemons are split across two systemd targets and two Debian packages.

Headless Daemons (package: open-sesame)

These four daemons have no GUI dependencies and run on any system with systemd: bare-metal servers, containers, VMs, and desktops alike.

  • daemon-profile – The IPC bus host. Manages trust profiles, evaluates context rules (WiFi network, connected hardware), performs atomic profile switching, and hosts the BusServer that all other daemons connect to.
  • daemon-secrets – Secrets broker. Manages SQLCipher-encrypted vaults scoped to trust profiles. Delivers secrets just-in-time to authorized callers over the IPC bus. Enforces per-vault ACLs and rate limiting.
  • daemon-launcher – Application launcher. Discovers .desktop entries, maintains frecency rankings per profile, and launches applications with optional secret injection as environment variables.
  • daemon-snippets – Snippet expansion engine. Renders templates with variable substitution and secret injection.

Desktop Daemons (package: open-sesame-desktop)

These three daemons require a COSMIC or Wayland compositor. The open-sesame-desktop package depends on open-sesame, so installing it pulls in all headless daemons automatically.

  • daemon-wm – Window manager overlay. Renders the Alt+Tab window switcher with letter-hint navigation via Wayland layer-shell protocols.
  • daemon-clipboard – Clipboard manager. Monitors Wayland clipboard events, maintains encrypted history per profile, detects sensitive content (passwords, tokens), and auto-expires sensitive entries.
  • daemon-input – Input remapper. Captures keyboard events via evdev, evaluates compositor-independent shortcut bindings, and routes key events to other daemons over the IPC bus.

Headless/Desktop Split Rationale

The split exists so that secret management, application launching, and snippet expansion can run on headless infrastructure (CI runners, jump hosts, fleet nodes) without pulling in Wayland, GTK, or GPU dependencies. A server running only open-sesame gets encrypted vaults, profile-scoped secrets, and environment injection (sesame env -p work -- aws s3 ls) with no graphical stack. A developer workstation installs open-sesame-desktop for the full experience: window switching, clipboard history, and keyboard shortcuts layered on top of the same headless core.

IPC Bus

All inter-daemon communication flows through a Noise IK encrypted IPC bus implemented in core-ipc.

Hub-and-Spoke Topology

daemon-profile hosts the BusServer. Every other daemon and every sesame CLI invocation connects as a BusClient. There is no peer-to-peer communication between daemons; all messages route through the hub.

Noise IK Transport

The IPC bus uses the Noise IK handshake pattern from the snow crate (Cargo.toml:89). In the IK pattern the initiator (client) knows the responder’s (server’s) static public key before the handshake begins. This provides mutual authentication and forward secrecy on every connection. Messages are framed with postcard (Cargo.toml:65) for compact binary serialization and deserialized into core-types::EventKind variants.

Ephemeral CLI Connections

The sesame CLI binary does not maintain a long-lived connection. Each CLI invocation opens a Noise IK session to daemon-profile, sends one or more EventKind messages, receives the response, and disconnects. The CLI has no persistent state and can be invoked from scripts, cron jobs, or CI pipelines without session management.

Message Flow Example

A sesame secret get -p work aws-access-key invocation follows this path:

  1. sesame CLI opens a Noise IK session to daemon-profile.
  2. daemon-profile authenticates the client and routes the secret-get request to daemon-secrets.
  3. daemon-secrets verifies the caller’s clearance against the vault ACL, decrypts the value from the SQLCipher store, and returns it as a SensitiveBytes payload.
  4. daemon-profile forwards the response to the CLI.
  5. The CLI writes the secret to stdout and exits.

All secret material in transit is encrypted by the Noise session. All secret material at rest in daemon memory is held in ProtectedAlloc pages backed by memfd_secret(2) (see Memory Protection).

Daemon Topology

graph TD
    subgraph "open-sesame (headless)"
        DP[daemon-profile<br/><i>IPC bus host, profiles</i>]
        DS[daemon-secrets<br/><i>vault broker</i>]
        DL[daemon-launcher<br/><i>app launch, frecency</i>]
        DN[daemon-snippets<br/><i>snippet expansion</i>]
    end

    subgraph "open-sesame-desktop (GUI)"
        DW[daemon-wm<br/><i>window switcher overlay</i>]
        DC[daemon-clipboard<br/><i>clipboard history</i>]
        DI[daemon-input<br/><i>keyboard capture</i>]
    end

    CLI[sesame CLI<br/><i>ephemeral connections</i>]

    DS -->|BusClient| DP
    DL -->|BusClient| DP
    DN -->|BusClient| DP
    DW -->|BusClient| DP
    DC -->|BusClient| DP
    DI -->|BusClient| DP
    CLI -.->|ephemeral BusClient| DP

    subgraph "Foundation Crates"
        CM[core-memory]
        CC[core-crypto]
        CT[core-types]
        CI[core-ipc]
        CF[core-config]
        CZ[core-fuzzy]
        CSE[core-secrets]
        CP[core-profile]
        CA[core-auth]
    end

    subgraph "Platform Crates"
        PL[platform-linux]
    end

    subgraph "Extension System"
        EH[extension-host]
        ES[extension-sdk]
    end

    DP --> CI
    DP --> CP
    DP --> CF
    DP --> CT
    DS --> CSE
    DS --> CC
    DS --> CM
    DL --> CZ
    DL --> CF
    DW --> PL
    DC --> PL
    DI --> PL
    CI --> CT
    CI --> CM
    CC --> CM
    CSE --> CC
    CA --> CC
    EH --> ES

Crate Dependency Highlights

  • Every daemon depends on core-ipc (for bus connectivity) and core-types (for the EventKind protocol schema).
  • core-ipc depends on core-memory because Noise session keys are held in ProtectedAlloc.
  • core-crypto depends on core-memory because SecureBytes wraps ProtectedAlloc for key material.
  • The three desktop daemons (daemon-wm, daemon-clipboard, daemon-input) depend on platform-linux for Wayland protocol bindings, evdev access, and compositor integration.
  • core-secrets depends on core-crypto for vault encryption and core-auth depends on core-crypto for key derivation during authentication.
  • The extension-host crate uses wasmtime (Cargo.toml:164) with the component model and pooling allocator for sandboxed WASM extension execution.

Security Boundaries

Each daemon runs as a separate systemd service with:

  • Landlock filesystem restrictions (platform-linux, landlock crate at Cargo.toml:136) limiting each daemon to only the filesystem paths it needs.
  • seccomp syscall filtering (platform-linux, libseccomp crate at Cargo.toml:137) restricting each daemon to a minimal syscall allowlist.
  • Noise IK mutual authentication on every IPC connection, preventing unauthorized processes from joining the bus.
  • ProtectedAlloc secure memory for all key material, with memfd_secret(2) removing secret pages from the kernel direct map (see Memory Protection).

Memory Protection

All secret-carrying types in Open Sesame are backed by core-memory::ProtectedAlloc, a page-aligned secure memory allocator that uses memfd_secret(2) on Linux 5.14+ to remove secret pages from the kernel direct map entirely. This page documents the allocator internals, the memory layout, the fallback path, and the type hierarchy built on top of it.

ProtectedAlloc Memory Layout

Every ProtectedAlloc instance maps a contiguous region of virtual memory containing five sections: three PROT_NONE guard pages, one read-only metadata page, and one or more read-write data pages. The layout is defined in core-memory/src/alloc.rs:31-32 where OVERHEAD_PAGES is set to 4 (guard0 + metadata + guard1 + guard2), and data pages are sized to fit the 16-byte canary plus the requested user data length.

                              mmap'd region (mmap_total bytes)
 +------------+------------+------------+---------------------------+------------+
 | guard pg 0 |  metadata  | guard pg 1 |       data pages ...      | guard pg 2 |
 | PROT_NONE  | PROT_READ  | PROT_NONE  |  PROT_READ | PROT_WRITE  | PROT_NONE  |
 +------------+------------+------------+---------------------------+------------+
 ^            ^            ^            ^                           ^
 |            |            |            |                           |
 mmap_base    +1 page      +2 pages     +3 pages                   +3 pages
              (metadata)                 (data_region)              +data_region_len
                                                                   (guard2)


              Detail of data pages (right-aligned user data):

 |<------------- data_region_len (data_pages * page_size) ------------->|
 +-------------------+-----------+--------------------------------------+
 |     padding       |  canary   |             user data                |
 |  (filled 0xDB)    |  16 bytes |           (user_len bytes)           |
 +-------------------+-----------+--------------------------------------+
 ^                   ^           ^                                      ^
 data_start          canary_ptr  user_data                              guard page 2
                                                                        (PROT_NONE)

Byte-Level Sizes

Given a system page size P (typically 4096) and a requested allocation of N bytes (example values after each arrow are for N=32, P=4096):

  • data_pages = ceil((16 + N) / P) → 1
  • data_region_len = data_pages * P → 4096
  • mmap_total = (4 + data_pages) * P → 20480 (5 pages)
  • padding_len = data_region_len - 16 - N → 4048

The padding is filled with 0xDB (PADDING_FILL, alloc.rs:29), matching libsodium’s garbage fill convention.
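These formulas can be sanity-checked with a short sketch; the function and constant names below are illustrative, not the actual alloc.rs API:

```rust
/// Canary placed immediately before the user data (CANARY_SIZE in alloc.rs).
const CANARY_SIZE: usize = 16;

/// Compute the layout sizes from the formulas above: 4 overhead pages
/// (guard0 + metadata + guard1 + guard2) plus enough data pages for the
/// canary and the requested user length.
fn layout(user_len: usize, page: usize) -> (usize, usize, usize, usize) {
    let data_pages = (CANARY_SIZE + user_len + page - 1) / page; // ceil division
    let data_region_len = data_pages * page;
    let mmap_total = (4 + data_pages) * page;
    let padding_len = data_region_len - CANARY_SIZE - user_len;
    (data_pages, data_region_len, mmap_total, padding_len)
}

fn main() {
    // The worked example from the table: N = 32, P = 4096.
    let (data_pages, data_region_len, mmap_total, padding_len) = layout(32, 4096);
    println!("{data_pages} {data_region_len} {mmap_total} {padding_len}"); // 1 4096 20480 4048
}
```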

Guard Pages

Three guard pages are set to PROT_NONE via mprotect(2) (alloc.rs:422-434). Any read or write to a guard page triggers an immediate SIGSEGV:

  • guard0 (mmap_base): prevents underflow from adjacent lower-address allocations.
  • guard1 (mmap_base + 2*P): separates the read-only metadata page from the writable data region. Prevents metadata corruption from data-region underflow.
  • guard2 (mmap_base + 3*P + data_region_len): the trailing guard page. Because user data is right-aligned within the data region, a buffer overflow of even one byte hits this page immediately.

Right-Alignment

User data is placed at the end of the data region (alloc.rs:455):

let user_data_ptr = data_start.add(data_region_len - user_len);

This right-alignment means a sequential buffer overflow crosses from user data directly into the trailing guard page (guard2), triggering SIGSEGV on the first out-of-bounds byte. Without right-alignment, an overflow would silently write into unused padding within the same page before reaching the guard.

Metadata Page

The metadata page (alloc.rs:438-451) stores allocation bookkeeping at fixed offsets, then is downgraded to PROT_READ:

  • offset 0, 8 bytes – mmap_total (total mapped size)
  • offset 8, 8 bytes – data region offset from mmap_base
  • offset 16, 8 bytes – user data offset from mmap_base
  • offset 24, 8 bytes – user_len (requested allocation size)
  • offset 32, 8 bytes – data_pages count
  • offset 40, 16 bytes – copy of the process-wide canary

The metadata page is restored to PROT_READ|PROT_WRITE during Drop (alloc.rs:688-694) so it can be volatile-zeroed before munmap.

memfd_secret(2) Backend

The preferred allocation backend on Linux is memfd_secret(2), invoked via raw syscall 447 (alloc.rs:130,335). This syscall, available since Linux 5.14, creates an anonymous file descriptor whose pages are:

  • Removed from the kernel direct map: the pages are not addressable by any kernel code path, including /proc/pid/mem reads, process_vm_readv(2), kernel modules, and DMA engines.
  • Invisible to ptrace: even CAP_SYS_PTRACE cannot read the page contents.
  • Implicitly locked: the kernel does not swap memfd_secret pages to disk. No explicit mlock(2) is needed.

The syscall requires CONFIG_SECRETMEM=y in the kernel configuration. To check whether a running kernel has this enabled:

zgrep CONFIG_SECRETMEM /proc/config.gz
# or
grep CONFIG_SECRETMEM /boot/config-$(uname -r)

Probe and Caching

The allocator probes for memfd_secret availability once at process startup via probe_memfd_secret() (alloc.rs:125-188) and caches the result in a OnceLock<bool> (alloc.rs:45). The probe sequence:

  1. Call syscall(447, 0) to create a secret fd.
  2. If fd < 0, log an ERROR-level security degradation and cache false.
  3. If fd >= 0, close the fd immediately and cache true.
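The probe-and-cache pattern can be sketched with std's OnceLock. The probe body below is a stand-in that pretends syscall 447 is unavailable rather than issuing the real syscall:

```rust
use std::sync::OnceLock;

static MEMFD_SECRET_AVAILABLE: OnceLock<bool> = OnceLock::new();

/// Hypothetical stand-in for the raw syscall(447, 0) probe; the real probe
/// issues the syscall and closes the returned fd. Here we pretend the kernel
/// lacks CONFIG_SECRETMEM, so the fallback path would be taken.
fn probe_memfd_secret() -> bool {
    false
}

/// Probe once per process and cache the result, mirroring the
/// OnceLock<bool> caching described above.
fn memfd_secret_available() -> bool {
    *MEMFD_SECRET_AVAILABLE.get_or_init(probe_memfd_secret)
}

fn main() {
    // The probe closure runs only on the first call; later calls hit the cache.
    println!("{}", memfd_secret_available()); // false
    println!("{}", memfd_secret_available()); // false (cached)
}
```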

Allocation Sequence

The memfd_secret_mmap() function (alloc.rs:333-372) performs the full allocation:

  1. syscall(447, 0) – create the secret fd.
  2. ftruncate(fd, size) – set the mapping size.
  3. mmap(NULL, size, PROT_READ|PROT_WRITE, MAP_SHARED, fd, 0) – map the pages. MAP_SHARED is required for memfd_secret.
  4. close(fd) – the mapping persists after the fd is closed.

Fallback: mmap(MAP_ANONYMOUS)

On kernels without memfd_secret support (Linux < 5.14 or CONFIG_SECRETMEM disabled), the allocator falls back to mmap(MAP_ANONYMOUS|MAP_PRIVATE) (alloc.rs:380-398). This fallback applies two additional protections that memfd_secret provides implicitly:

  • mlock(2) (alloc.rs:495): locks the data region pages into RAM, preventing the kernel from swapping them to disk. If mlock fails with ENOMEM (the RLIMIT_MEMLOCK limit is exceeded), the allocator logs a WARN-level security degradation but continues (alloc.rs:498-511). If mlock fails with any other errno, allocation fails with ProtectedAllocError::MmapFailed.
  • madvise(MADV_DONTDUMP) (alloc.rs:482): excludes the data region from core dumps. This is Linux-specific.

The fallback is a security degradation: pages remain on the kernel direct map and are readable via /proc/pid/mem by any process running as the same UID. An ERROR-level audit log is emitted by both probe_memfd_secret() (alloc.rs:147-161) and core_memory::init() (core-memory/src/lib.rs:83-91) when operating in fallback mode. The log message explicitly states that fallback mode does not meet IL5/IL6, STIG, or PCI-DSS requirements (lib.rs:88).

Canary Verification

Each ProtectedAlloc instance places a 16-byte canary (CANARY_SIZE, alloc.rs:25) immediately before the user data region. The canary value is a process-wide random value generated once from getrandom(2) (on Linux, alloc.rs:56) or getentropy(2) (on macOS, alloc.rs:67) and cached in a OnceLock<[u8; 16]> (alloc.rs:39).

Placement

The canary is written at user_data_ptr - 16 (alloc.rs:457-464). A copy is also stored in the metadata page at offset 40 (alloc.rs:446).

Constant-Time Verification on Drop

During ProtectedAlloc::drop() (alloc.rs:636-720), the canary is verified before any cleanup:

  1. The 16 bytes at canary_ptr are read as a slice (alloc.rs:641-642).

  2. They are compared to the global canary using fixed_len_constant_time_eq() (alloc.rs:613-624). This function XORs each byte pair, ORs the differences into an accumulator, and reads the accumulator through read_volatile to prevent the compiler from short-circuiting the comparison:

    fn fixed_len_constant_time_eq(a: &[u8], b: &[u8]) -> bool {
        if a.len() != b.len() {
            return false;
        }
        let mut acc: u8 = 0;
        for (x, y) in a.iter().zip(b.iter()) {
            acc |= x ^ y;
        }
        let result = unsafe { std::ptr::read_volatile(&acc) };
        result == 0
    }
  3. If the comparison fails, an ERROR-level audit log is emitted and the process aborts via std::process::abort() (alloc.rs:656). The process aborts rather than continuing with potentially compromised key material.

Canary corruption indicates a buffer underflow, heap corruption, or use-after-free in secret-handling code.

Volatile Zeroize

After canary verification passes, the entire data region (not just the user data portion) is volatile-zeroed via volatile_zero() (alloc.rs:627-632):

fn volatile_zero(ptr: *mut u8, len: usize) {
    let slice = unsafe { std::slice::from_raw_parts_mut(ptr, len) };
    slice.zeroize();
    std::sync::atomic::compiler_fence(std::sync::atomic::Ordering::SeqCst);
}

The zeroize crate (Cargo.toml:85) performs volatile writes that the compiler cannot elide. The compiler_fence(SeqCst) (alloc.rs:631) provides an additional barrier preventing reordering of the zeroize with the subsequent munmap. This zeroes the canary, the 0xDB padding, and the user data before the pages are returned to the kernel.
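For illustration, the same effect can be approximated without the zeroize crate using std's write_volatile; this is a sketch, not the actual implementation:

```rust
/// A std-only sketch of volatile zeroing: write_volatile stores cannot be
/// elided by the optimizer, and the fence guards against reordering with a
/// subsequent munmap/free.
fn volatile_zero_std(buf: &mut [u8]) {
    for b in buf.iter_mut() {
        // SAFETY: `b` is a valid, aligned &mut u8.
        unsafe { std::ptr::write_volatile(b, 0) };
    }
    std::sync::atomic::compiler_fence(std::sync::atomic::Ordering::SeqCst);
}

fn main() {
    let mut secret = *b"hunter2";
    volatile_zero_std(&mut secret);
    println!("{:?}", secret); // [0, 0, 0, 0, 0, 0, 0]
}
```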

Drop Sequence

The full Drop implementation (alloc.rs:636-720) proceeds in order:

  1. Canary check – constant-time comparison, abort on corruption.
  2. Volatile-zero the data region – data_region_len bytes starting at data_region.
  3. munlock (fallback only) – unlock data pages (alloc.rs:665).
  4. MADV_DODUMP (fallback only, Linux) – re-enable core dump inclusion for the zeroed pages (alloc.rs:675).
  5. Zero metadata page – restore PROT_READ|PROT_WRITE, volatile-zero (alloc.rs:686-695).
  6. munmap – release the entire mapping back to the kernel (alloc.rs:700).

Type Hierarchy

Three types build on ProtectedAlloc to provide ergonomic secret handling at different layers of the system.

SecureBytes (core-crypto/src/secure_bytes.rs)

SecureBytes is the primary vehicle for cryptographic key material: master keys, vault keys, derived keys, and KEKs. It wraps a ProtectedAlloc with an actual_len field to support empty values (backed by a 1-byte sentinel allocation, secure_bytes.rs:55-56).

Key properties:

  • from_slice(&[u8]) (secure_bytes.rs:73-81): copies directly into protected memory with no intermediate heap allocation. This is the preferred constructor.
  • new(Vec<u8>) (secure_bytes.rs:51-63): accepts an owned Vec, copies into protected memory, then zeroizes the source Vec on the unprotected heap. The doc comment (secure_bytes.rs:37-44) explicitly notes the brief heap exposure and recommends from_slice when possible.
  • into_protected_alloc() (secure_bytes.rs:107-120): zero-copy transfer of the inner ProtectedAlloc to a new owner. Uses ManuallyDrop to suppress the SecureBytes destructor and ptr::read to move the allocation out. The ProtectedAlloc is never copied or re-mapped.
  • Clone (secure_bytes.rs:124-129): creates a fully independent ProtectedAlloc with its own guard pages, canary, and mlock. Both original and clone zeroize independently on drop.
  • Debug (secure_bytes.rs:146-148): redacted output showing only byte count (SecureBytes([REDACTED; 32 bytes])), never contents.

SecureBytes does not implement Serialize or Deserialize. Secrets must be explicitly converted to SensitiveBytes before crossing a serialization boundary.
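The ManuallyDrop + ptr::read move described for into_protected_alloc() can be sketched with placeholder types; a Vec stands in for the mmap-backed allocation, and the names are illustrative rather than the real API:

```rust
use std::mem::ManuallyDrop;

/// Placeholder for the mmap-backed allocation (the real type owns pages).
struct ProtectedAlloc(Vec<u8>);

struct SecureBytes {
    inner: ProtectedAlloc,
    actual_len: usize,
}

impl Drop for SecureBytes {
    fn drop(&mut self) {
        // The real type zeroizes here; a zero-copy transfer must bypass this.
    }
}

impl SecureBytes {
    /// Zero-copy transfer in the spirit of into_protected_alloc():
    /// suppress our own destructor, then move the allocation out.
    fn into_protected_alloc(self) -> (ProtectedAlloc, usize) {
        let this = ManuallyDrop::new(self);
        // SAFETY: `this` is never dropped, so `inner` is not freed twice.
        let inner = unsafe { std::ptr::read(&this.inner) };
        (inner, this.actual_len)
    }
}

fn main() {
    let sb = SecureBytes { inner: ProtectedAlloc(vec![1, 2, 3]), actual_len: 3 };
    let (alloc, len) = sb.into_protected_alloc();
    println!("{} bytes transferred, first = {}", len, alloc.0[0]); // 3 bytes transferred, first = 1
}
```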

SecureVec (core-crypto/src/secure_vec.rs)

SecureVec is a password input buffer designed for character-by-character collection in graphical overlays where the full password length is not known in advance. It pre-allocates a fixed-size ProtectedAlloc (512 bytes for for_password(), secure_vec.rs:14,61) and provides UTF-8 aware push_char/pop_char operations.

Key properties:

  • No reallocation: the buffer is fixed-size. push_char panics if the buffer is full (secure_vec.rs:118-122). The 512-byte limit accommodates passwords up to approximately 128 four-byte Unicode characters (secure_vec.rs:13).
  • Lazy allocation: SecureVec::new() (secure_vec.rs:43-48) creates an empty instance with inner: None and no mmap. for_password() or with_capacity() triggers the actual ProtectedAlloc.
  • UTF-8 aware pop: pop_char() (secure_vec.rs:133-160) scans backwards to find multi-byte character boundaries (checking the 0xC0 continuation mask) and zeroizes the removed bytes in-place before adjusting the cursor.
  • clear() (secure_vec.rs:199-208): zeroizes all written bytes and resets the cursor without deallocating, allowing buffer reuse for sequential vault unlocks.
  • Double zeroize on drop: Drop (secure_vec.rs:217-229) zeroizes written bytes before ProtectedAlloc::drop performs its own volatile-zero of the entire data region.
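The backward scan in pop_char() can be sketched as follows; the function name and the plain (non-volatile) zeroing are simplifications of the real SecureVec code:

```rust
/// Remove the last UTF-8 character before `cursor`: scan backwards past
/// continuation bytes (0b10xx_xxxx under the 0xC0 mask), zero the removed
/// bytes in place, and return the new cursor position.
fn pop_char_bytes(buf: &mut [u8], cursor: usize) -> usize {
    if cursor == 0 {
        return 0;
    }
    let mut start = cursor - 1;
    while start > 0 && buf[start] & 0xC0 == 0x80 {
        start -= 1;
    }
    for b in &mut buf[start..cursor] {
        // The real code zeroizes with volatile writes; plain writes shown here.
        *b = 0;
    }
    start
}

fn main() {
    // "aé" is [0x61, 0xC3, 0xA9]; popping removes both bytes of 'é'.
    let mut buf = vec![0x61u8, 0xC3, 0xA9];
    let cursor = pop_char_bytes(&mut buf, 3);
    println!("{cursor} {:?}", buf); // 1 [97, 0, 0]
}
```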

SensitiveBytes (core-types/src/sensitive.rs)

SensitiveBytes is the wire-compatible type for secret values in EventKind IPC messages. It wraps a ProtectedAlloc and implements Serialize/Deserialize for postcard framing.

Key properties:

  • Zero-copy deserialization path: the custom SensitiveBytesVisitor (sensitive.rs:112-146) implements visit_bytes (sensitive.rs:123-125) which receives a borrowed &[u8] from the deserializer and copies directly into a ProtectedAlloc. When postcard performs in-memory deserialization, this path avoids any intermediate heap Vec<u8>.
  • Fallback deserialization path: visit_byte_buf (sensitive.rs:129-132) handles deserializers that provide owned bytes. The Vec<u8> is copied into protected memory and immediately zeroized.
  • Sequence fallback: visit_seq (sensitive.rs:136-145) handles deserializers that encode bytes as a sequence of u8 values. The collected Vec<u8> is zeroized after copying.
  • from_protected() (sensitive.rs:57-62): accepts a ProtectedAlloc and actual_len directly, enabling zero-copy transfer from SecureBytes.
  • Serialization (sensitive.rs:95-99): calls serializer.serialize_bytes() directly from the protected memory slice. postcard reads the slice without copying.
  • Debug (sensitive.rs:160-163): redacted output ([REDACTED; 32 bytes]).

Zero-Copy Lifecycle

The three types form a zero-copy pipeline for secret material:

  1. A vault key is derived by core-crypto and stored as SecureBytes (in ProtectedAlloc).
  2. When the key must cross the IPC bus, SecureBytes::into_protected_alloc() transfers the ProtectedAlloc to SensitiveBytes::from_protected() with no heap copy and no re-mapping.
  3. SensitiveBytes serializes directly from the ProtectedAlloc pages into the Noise-encrypted IPC frame.
  4. On the receiving end, postcard’s visit_bytes path deserializes directly into a new ProtectedAlloc.

At no point does plaintext key material exist on the unprotected heap, provided the from_slice constructor path is used rather than SecureBytes::new(Vec<u8>).

init_secure_memory() Pre-Sandbox Probe

The core_memory::init() function (core-memory/src/lib.rs:58-107) must be called before the seccomp sandbox is applied. It performs a probe allocation of 1 byte (lib.rs:68) which triggers probe_memfd_secret() internally, caching whether syscall 447 is available. If this probe ran after seccomp activation, the raw syscall would be killed by the filter.

The function also reads RLIMIT_MEMLOCK via getrlimit(2) (lib.rs:62-65) and logs it alongside the security posture:

  • memfd_secret available: INFO-level log with backend = "memfd_secret" and the rlimit_memlock_bytes value (lib.rs:71-78).
  • memfd_secret unavailable: ERROR-level log with backend = "mmap(MAP_ANONYMOUS) fallback" and remediation instructions (lib.rs:83-91).
  • Probe allocation failure: ERROR-level log warning that all secret-carrying types will panic on allocation (lib.rs:95-103).

The function is idempotent (lib.rs:57). The OnceLock values for CANARY, PAGE_SIZE, and MEMFD_SECRET_AVAILABLE (alloc.rs:39,42,45) ensure that the probe syscall, the getrandom call, and the sysconf call each execute exactly once per process regardless of how many times init() is called.

Guard Page SIGSEGV Test Methodology

The guard page tests (core-memory/tests/guard_page_sigsegv.rs) use a subprocess harness pattern because the expected outcome is process death by signal, which cannot be caught within a single test process.

Test Structure

Each test case consists of a parent test and a child harness:

  1. Parent test (e.g., overflow_hits_trailing_guard_page, guard_page_sigsegv.rs:58-62): spawns the test binary targeting the harness function by name via --exact, with a gating environment variable __GUARD_PAGE_HARNESS.
  2. Child harness (e.g., overflow_harness, guard_page_sigsegv.rs:66-83): checks the environment variable, allocates a ProtectedAlloc via from_slice(b"test"), performs a deliberate out-of-bounds read_volatile, and calls exit(1) if the read succeeds (which it must not).
  3. Signal assertion (assert_signal_death, guard_page_sigsegv.rs:25-53): the parent verifies the child was killed by SIGSEGV (signal 11) or SIGBUS (signal 7), handling both direct signal death (ExitStatusExt::signal()) and the 128+signal exit code convention used by some platforms.
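The signal-assertion logic can be sketched with std's ExitStatusExt (Unix-only); this is a simplified stand-in for the assert_signal_death helper, not its actual code:

```rust
use std::os::unix::process::ExitStatusExt;
use std::process::ExitStatus;

/// Accept either direct signal death or the 128+signal exit-code
/// convention, as described above.
fn died_by_signal(status: ExitStatus, expected: &[i32]) -> bool {
    if let Some(sig) = status.signal() {
        return expected.contains(&sig);
    }
    if let Some(code) = status.code() {
        return expected.iter().any(|s| code == 128 + s);
    }
    false
}

fn main() {
    // Raw wait status 11 = killed by SIGSEGV; 139 << 8 = exited with code 139.
    let killed = ExitStatus::from_raw(11);
    let coded = ExitStatus::from_raw(139 << 8);
    println!("{} {}", died_by_signal(killed, &[11, 7]), died_by_signal(coded, &[11, 7]));
}
```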

Test Cases

  • overflow_hits_trailing_guard_page – reads one byte past ptr.add(len) (guard_page_sigsegv.rs:78); expected signal: SIGSEGV (11)
  • underflow_hits_leading_guard_page – reads one page before ptr via ptr.sub(page_size) (guard_page_sigsegv.rs:112); expected signal: SIGSEGV (11) or SIGBUS (7)

The overflow test validates right-alignment: because user data is flush against guard page 2, the very first out-of-bounds byte lands on a PROT_NONE page. The underflow test reads backward past the canary and padding into guard page 1 (between metadata and data region).

The environment variable gate (guard_page_sigsegv.rs:67) ensures that when the test binary is run normally (without __GUARD_PAGE_HARNESS set), the harness functions return immediately without performing any unsafe operations.

Platform Support Summary

  • Linux 5.14+ with CONFIG_SECRETMEM=y – backend: memfd_secret(2) – full protection: pages removed from the kernel direct map
  • Linux < 5.14 or without CONFIG_SECRETMEM – backend: mmap(MAP_ANONYMOUS) + mlock + MADV_DONTDUMP – degraded: pages remain on the direct map, audit-logged
  • Non-Unix – backend: compile-time stub – ProtectedAlloc::new() returns Err(Unsupported)

The non-Unix stub (core-memory/src/lib.rs:111-182) exists solely so the crate compiles in workspace-wide cargo check runs. All methods on the stub panic or return errors; no secrets can be handled on unsupported platforms.

IPC Bus Protocol

The core-ipc crate implements the inter-process communication protocol used by all Open Sesame daemons and the sesame CLI.

Bus Architecture

The IPC bus uses a star topology. daemon-profile hosts a BusServer that binds a Unix domain socket at $XDG_RUNTIME_DIR/pds/bus.sock. All other daemons and the sesame CLI connect to this socket as BusClient instances.

The server accept loop (BusServer::run in server.rs) listens for incoming connections, extracts UCred credentials via SO_PEERCRED, enforces a same-UID policy (rejecting connections from different users), and spawns a per-connection handler task. Each connection performs a mandatory Noise IK handshake before any application data flows.

Per-connection state is tracked in ConnectionState, which holds:

  • The daemon’s DaemonId (set on first message)
  • A registry-verified daemon name (verified_name, from Noise IK handshake)
  • An outbound mpsc::Sender<Vec<u8>> channel (capacity 256)
  • PeerCredentials (PID and UID)
  • SecurityLevel clearance
  • Subscription filters
  • An optional TrustVector computed at connection time

An atomic u64 counter assigns monotonically increasing connection IDs. Connection state is registered only after the Noise handshake succeeds, preventing a race where broadcast frames arrive on the outbound channel before the writer task is ready.

On BusServer::drop, the socket file is removed from the filesystem.

Noise IK Handshake

All socket connections use the Noise Protocol Framework with the IK pattern:

Noise_IK_25519_ChaChaPoly_BLAKE2s

The primitives are:

  • X25519 Diffie-Hellman key agreement
  • ChaCha20-Poly1305 authenticated encryption (AEAD)
  • BLAKE2s hashing

The IK pattern means the initiator (connecting daemon) transmits its static key encrypted in the first message, and the responder’s (bus server’s) static key is pre-known to the initiator. This provides mutual authentication in a single round-trip (2 messages).

From the initiator (client) perspective:

  1. Write message 1 to responder (ephemeral key + encrypted static key)
  2. Read message 2 from responder (responder’s ephemeral key)
  3. Transition to transport mode with forward-secret keys

From the responder (server) perspective:

  1. Read message 1 from initiator (contains initiator’s ephemeral + encrypted static)
  2. Write message 2 to initiator (contains responder’s ephemeral)
  3. Transition to transport mode with forward-secret keys

The handshake has a 5-second timeout (HANDSHAKE_TIMEOUT) to prevent denial-of-service via slow handshake. The snow crate provides the Noise implementation.

Prologue Binding

The Noise prologue cryptographically binds OS-level transport identity to the encrypted channel. Both sides construct an identical prologue from UCred credentials:

PDS-IPC-v1:<lower_pid>:<lower_uid>:<higher_pid>:<higher_uid>

Canonical ordering is by PID (lower PID first), ensuring both sides produce identical bytes regardless of which side is the server. If either side has incorrect peer credentials (e.g., due to spoofing), the prologue mismatch causes the Noise handshake to fail cryptographically.

PeerCredentials are obtained via:

  • extract_ucred(): calls UnixStream::peer_cred() (uses SO_PEERCRED on Linux) to get the remote peer’s PID and UID.
  • local_credentials(): calls rustix::process::getuid() and std::process::id() for the local process.

An in-process sentinel (PeerCredentials::in_process()) uses u32::MAX as the UID, which never matches a real UCred check.
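The canonical ordering rule can be shown as a small pure function. This is a sketch assuming only the format string documented above; the function name is hypothetical:

```rust
// Build the Noise prologue from both peers' (PID, UID) pairs. The pair with
// the lower PID always comes first, so initiator and responder derive
// byte-identical prologues regardless of which side is the server.
fn prologue(pid_a: u32, uid_a: u32, pid_b: u32, uid_b: u32) -> String {
    let ((lp, lu), (hp, hu)) = if pid_a <= pid_b {
        ((pid_a, uid_a), (pid_b, uid_b))
    } else {
        ((pid_b, uid_b), (pid_a, uid_a))
    };
    format!("PDS-IPC-v1:{lp}:{lu}:{hp}:{hu}")
}

fn main() {
    // Both sides agree no matter which peer enumerates credentials first.
    let a = prologue(4321, 1000, 1234, 1000);
    let b = prologue(1234, 1000, 4321, 1000);
    assert_eq!(a, b);
    assert_eq!(a, "PDS-IPC-v1:1234:1000:4321:1000");
    println!("ok");
}
```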

Encrypted Transport

After handshake completion, NoiseTransport wraps a snow::TransportState and provides chunked encrypted I/O.

Noise transport messages are limited to 65535 bytes. The maximum plaintext per Noise message is 65519 bytes (65535 minus the 16-byte AEAD tag). Application frames up to 16 MiB (MAX_FRAME_SIZE = 16 * 1024 * 1024) are chunked into multiple Noise messages.

Encrypted Frame Wire Format

[4-byte BE chunk_count]     (length-prefixed, plaintext)
[length-prefixed encrypted chunk 1]
[length-prefixed encrypted chunk 2]
...
[length-prefixed encrypted chunk N]

Each chunk is individually encrypted by snow::TransportState::write_message and written via the length-prefixed framing layer. The chunk count header is transmitted in the clear because it is not sensitive and the reader needs it to know how many chunks to expect.

Zero-length payloads send one empty encrypted chunk. On the read path, the reassembled payload is validated against MAX_FRAME_SIZE, and the intermediate decrypt buffer is zeroized via zeroize::Zeroize.

A 200 KiB payload is split into 4 chunks (⌈204800 / 65519⌉ = 4). On the read path, the chunk count header is validated against the maximum possible for a 16 MiB payload, rejecting fabricated chunk counts.
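The chunking arithmetic can be sketched as a hypothetical helper (mirroring the rule that zero-length payloads still occupy one empty encrypted chunk):

```rust
const MAX_NOISE_PLAINTEXT: usize = 65_535 - 16; // 65_519 bytes per Noise message

// Number of Noise messages needed to carry one application frame.
fn chunk_count(payload_len: usize) -> usize {
    if payload_len == 0 {
        1 // empty frames still send one empty encrypted chunk
    } else {
        (payload_len + MAX_NOISE_PLAINTEXT - 1) / MAX_NOISE_PLAINTEXT // ceiling division
    }
}

fn main() {
    assert_eq!(chunk_count(0), 1);                  // empty frame
    assert_eq!(chunk_count(65_519), 1);             // exactly one full chunk
    assert_eq!(chunk_count(200 * 1024), 4);         // the 200 KiB example
    assert_eq!(chunk_count(16 * 1024 * 1024), 257); // upper bound for 16 MiB
    println!("ok");
}
```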

Mutual Exclusion

snow::TransportState requires &mut self for both encrypt and decrypt. Both the server and client use tokio::select! to multiplex reads and writes in a single task rather than splitting into separate reader/writer tasks guarded by a Mutex. The Mutex approach would deadlock: the reader task would hold the lock while awaiting socket I/O, indefinitely starving the writer.

Decrypted postcard buffers on the server side and plaintext outbound buffers on the client side are zeroized after processing, as they may contain serialized secret values.

Framing Layer

The framing layer (framing.rs) provides two independent services.

Serialization

encode_frame and decode_frame convert between typed Rust values and postcard byte payloads:

  • encode_frame<T: Serialize>(value) -> Vec<u8>: calls postcard::to_allocvec.
  • decode_frame<T: DeserializeOwned>(payload) -> T: calls postcard::from_bytes.

These are symmetric: decode_frame(encode_frame(v)) == v.

Wire I/O

write_frame and read_frame add and strip a 4-byte big-endian length prefix for socket transport:

  • write_frame(writer, payload): writes [4-byte BE length][payload], then flushes.
  • read_frame(reader) -> Vec<u8>: reads the 4-byte length, validates against MAX_FRAME_SIZE (16 MiB), then reads the payload.

The length prefix is a wire-only concern. Internal channels (bus routing, BusServer::publish, subscriber mpsc channels) carry raw postcard payloads without it.

Socket wire format: [4-byte BE length][postcard payload]

Frames with a length exceeding MAX_FRAME_SIZE are rejected on read to prevent out-of-memory conditions from malformed or malicious length prefixes.
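The wire I/O half of the framing layer can be sketched with synchronous std::io in place of the real async implementation (a simplification, not the actual code):

```rust
use std::io::{self, Cursor, Read, Write};

const MAX_FRAME_SIZE: u32 = 16 * 1024 * 1024;

// Write [4-byte BE length][payload], then flush.
fn write_frame<W: Write>(w: &mut W, payload: &[u8]) -> io::Result<()> {
    w.write_all(&(payload.len() as u32).to_be_bytes())?;
    w.write_all(payload)?;
    w.flush()
}

// Read the 4-byte length, validate it, then read the payload.
fn read_frame<R: Read>(r: &mut R) -> io::Result<Vec<u8>> {
    let mut len_buf = [0u8; 4];
    r.read_exact(&mut len_buf)?;
    let len = u32::from_be_bytes(len_buf);
    // Reject oversized length prefixes before allocating, guarding against OOM.
    if len > MAX_FRAME_SIZE {
        return Err(io::Error::new(io::ErrorKind::InvalidData, "frame too large"));
    }
    let mut payload = vec![0u8; len as usize];
    r.read_exact(&mut payload)?;
    Ok(payload)
}

fn main() -> io::Result<()> {
    let mut wire = Vec::new();
    write_frame(&mut wire, b"hello")?;
    assert_eq!(&wire[..4], &5u32.to_be_bytes()); // BE length prefix
    let mut cursor = Cursor::new(wire);
    assert_eq!(read_frame(&mut cursor)?, b"hello".to_vec());
    println!("ok");
    Ok(())
}
```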

Message Envelope

Every IPC message is wrapped in Message<T> (message.rs). The current wire version is 3 (WIRE_VERSION = 3).

| Field | Type | Description |
|---|---|---|
| wire_version | u8 | Wire format version, always serialized first. |
| msg_id | Uuid (v7) | Unique message identifier, time-ordered. |
| correlation_id | Option&lt;Uuid&gt; | Links a response to its originating request’s msg_id. |
| sender | DaemonId | Sender daemon identity (UUID v7, dmon- prefix). |
| timestamp | Timestamp | Dual-clock timestamp (wall + monotonic). |
| payload | T | The event or request payload (typically EventKind). |
| security_level | SecurityLevel | Access control level for routing decisions. |
| verified_sender_name | Option&lt;String&gt; | Server-stamped name from Noise IK registry lookup. |
| origin_installation | Option&lt;InstallationId&gt; | v3: sender’s installation identity. |
| agent_id | Option&lt;AgentId&gt; | v3: sender’s agent identity. |
| trust_snapshot | Option&lt;TrustVector&gt; | v3: trust assessment at message creation time. |

MessageContext carries per-client identity state so Message::new() can populate all fields. A minimal context requires only a DaemonId; v3 fields default to None.

The verified_sender_name is set exclusively by route_frame() in the bus server. Client-supplied values are overwritten. None indicates an unregistered client. Postcard uses positional encoding, so all Option fields must always be present on the wire; skip_serializing_if is deliberately not used.

Message::new() generates a UUID v7 for msg_id (time-ordered) and leaves correlation_id at None. The with_correlation(id) builder method sets it for response messages.

Clearance Model

SecurityLevel Enum

SecurityLevel (core-types/src/security.rs) classifies message sensitivity for bus routing. The variants, ordered from lowest to highest by their derived Ord:

| Level | Description |
|---|---|
| Open | Visible to all subscribers including extensions. |
| Internal | Visible to authenticated daemons only. This is the default. |
| ProfileScoped | Visible only to daemons holding the current profile’s security context. |
| SecretsOnly | Visible only to the secrets daemon. |

Because SecurityLevel derives PartialOrd and Ord, clearance comparisons use standard Rust ordering: Open < Internal < ProfileScoped < SecretsOnly.
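A minimal sketch of the clearance lattice, relying only on the fact stated above that declaration order drives the derived Ord (the may_emit helper is hypothetical, mirroring the sender-side routing rule):

```rust
// Declaration order determines the derived Ord, as in any Rust enum.
#[derive(Debug, Clone, Copy, PartialEq, Eq, PartialOrd, Ord)]
enum SecurityLevel {
    Open,
    Internal,
    ProfileScoped,
    SecretsOnly,
}

// A daemon may only emit messages at or below its own clearance level.
fn may_emit(clearance: SecurityLevel, msg_level: SecurityLevel) -> bool {
    clearance >= msg_level
}

fn main() {
    use SecurityLevel::*;
    assert!(Open < Internal && Internal < ProfileScoped && ProfileScoped < SecretsOnly);
    assert!(may_emit(SecretsOnly, Internal));  // high clearance, lower-level message
    assert!(!may_emit(Internal, SecretsOnly)); // rejected: AccessDenied in the real bus
    println!("ok");
}
```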

ClearanceRegistry

ClearanceRegistry (registry.rs) maps X25519 static public keys ([u8; 32]) to DaemonClearance entries:

pub struct DaemonClearance {
    pub name: String,
    pub security_level: SecurityLevel,
    pub generation: u64,
}

The generation counter increments on every key change (rotation or crash-revocation). It is used by two-phase rotation to detect concurrent revocations.

The registry is populated by daemon-profile at startup from per-daemon keypairs. It is wrapped in RwLock<ClearanceRegistry> inside ServerState to allow runtime mutation.

After the Noise IK handshake, the server extracts the client’s static public key via NoiseTransport::remote_static() (which calls TransportState::get_remote_static()). The Noise IK pattern guarantees the remote static key is available after handshake. The 32-byte key is looked up in the registry:

  • Found: the connection receives the registered name and clearance level.
  • Not found: the connection is treated as an ephemeral client with SecretsOnly clearance.

The registry supports rotate_key(old, new) (removes old entry, inserts new with incremented generation), revoke(pubkey) (removes and returns the entry), register_with_generation (for revoke-then-reregister flows), and find_by_name (linear scan, acceptable for fewer than 10 daemons).
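The rotation path can be sketched with a plain HashMap. This is a minimal illustration of the generation-bump behavior only; field types are stand-ins and the real registry also implements revoke, register_with_generation, and find_by_name:

```rust
use std::collections::HashMap;

#[derive(Clone, Debug)]
struct DaemonClearance {
    name: String,
    security_level: u8, // stand-in for the real SecurityLevel enum
    generation: u64,
}

struct ClearanceRegistry {
    entries: HashMap<[u8; 32], DaemonClearance>,
}

impl ClearanceRegistry {
    // Remove the old entry and re-insert under the new key with the same
    // name and clearance but a bumped generation counter.
    fn rotate_key(&mut self, old: &[u8; 32], new: [u8; 32]) -> bool {
        match self.entries.remove(old) {
            Some(mut entry) => {
                entry.generation += 1; // lets two-phase rotation detect concurrent revocations
                self.entries.insert(new, entry);
                true
            }
            None => false,
        }
    }
}

fn main() {
    let (old, new) = ([1u8; 32], [2u8; 32]);
    let mut reg = ClearanceRegistry { entries: HashMap::new() };
    reg.entries.insert(old, DaemonClearance {
        name: "daemon-secrets".into(),
        security_level: 3,
        generation: 0,
    });
    assert!(reg.rotate_key(&old, new));
    assert!(!reg.entries.contains_key(&old));
    assert_eq!(reg.entries[&new].generation, 1);
    println!("ok");
}
```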

Routing Enforcement

route_frame() enforces two clearance rules:

  1. Sender clearance: A daemon may only emit messages at or below its own clearance level. If conn.security_clearance < msg.security_level, the frame is rejected and an AccessDenied response is sent back to the sender.
  2. Recipient clearance: When broadcasting, the server skips subscribers whose security_clearance is below the message’s security_level.

Sender Identity Verification

On the first message from a connection, route_frame() records the self-declared DaemonId. Subsequent messages must use the same DaemonId. A change mid-session is treated as an impersonation attempt: the frame is dropped and an AccessDenied response is returned.

The server stamps verified_sender_name onto every routed message by re-encoding it after registry lookup. If the connection’s trust_snapshot field is not set on the message, the server also stamps the connection-level TrustVector. This re-encode adds serialization overhead on every routed frame, but for a local IPC bus with fewer than 10 daemons the cost is negligible (microseconds per frame).

Ephemeral Clients

Clients whose static public key is not in the ClearanceRegistry receive SecurityLevel::SecretsOnly clearance. This applies to the sesame CLI and any other transient tool.

Ephemeral clients are still authenticated: the same-UID check and Noise IK handshake both apply. They simply lack a pre-registered identity in the registry. The audit log records these connections as ephemeral-client-accepted events with the client’s X25519 public key and PID/UID.

Key Management

Key generation, persistence, and tamper detection are implemented in noise_keys.rs.

Keypair Generation

generate_keypair() produces an X25519 static keypair via snow::Builder::generate_keypair(). Both the public and private keys are 32 bytes. The returned ZeroizingKeypair wrapper guarantees private key zeroization on drop (including during panics), since snow::Keypair has no Drop implementation. ZeroizingKeypair::into_inner() transfers ownership using mem::take to zero the wrapper’s copy.

Filesystem Layout

Keys are stored under $XDG_RUNTIME_DIR/pds/:

| File | Permissions | Content |
|---|---|---|
| bus.pub | 0644 | Bus server X25519 public key (32 bytes). |
| bus.key | 0600 | Bus server private key (32 bytes). |
| bus.checksum | default | BLAKE3 keyed hash (32 bytes). |
| keys/&lt;daemon&gt;.pub | 0644 | Per-daemon public key (32 bytes). |
| keys/&lt;daemon&gt;.key | 0600 | Per-daemon private key (32 bytes). |
| keys/&lt;daemon&gt;.checksum | default | Per-daemon BLAKE3 keyed hash (32 bytes). |

The keys/ directory is set to mode 0700 to prevent local users from enumerating registered daemons.

Atomic Writes

Private keys are written atomically: the key is written to a .tmp file with 0600 permissions set at open time via OpenOptionsExt::mode, fsynced, then renamed to the final path. This prevents a window where the key file exists with default (permissive) permissions. The write is performed inside tokio::task::spawn_blocking to avoid blocking the async runtime.
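The write-tmp, fsync, rename sequence can be sketched with synchronous std::fs on Unix (the real code runs the equivalent inside tokio::task::spawn_blocking; this helper is illustrative):

```rust
use std::fs::{self, OpenOptions};
use std::io::Write;
use std::os::unix::fs::OpenOptionsExt;
use std::path::Path;

// Atomically write a private key with 0600 permissions from the moment the
// file exists, so there is no window with default (permissive) permissions.
fn write_key_atomic(path: &Path, key: &[u8]) -> std::io::Result<()> {
    let tmp = path.with_extension("tmp");
    let mut f = OpenOptions::new()
        .write(true)
        .create_new(true)
        .mode(0o600) // permissions applied at open time, before any bytes land
        .open(&tmp)?;
    f.write_all(key)?;
    f.sync_all()?; // fsync so the rename publishes fully durable contents
    fs::rename(&tmp, path) // atomic replacement of the final path
}

fn main() -> std::io::Result<()> {
    let path = std::env::temp_dir().join(format!("bus-{}.key", std::process::id()));
    let _ = fs::remove_file(&path);
    let _ = fs::remove_file(path.with_extension("tmp"));
    write_key_atomic(&path, &[0u8; 32])?;
    assert_eq!(fs::read(&path)?.len(), 32);
    fs::remove_file(&path)?;
    println!("ok");
    Ok(())
}
```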

Tamper Detection Checksums

Each keypair has an accompanying .checksum file containing blake3::keyed_hash(public_key, private_key) – a BLAKE3 keyed hash using the 32-byte public key as the key and the private key as the data. On read, the checksum is recomputed and compared to the stored value. A mismatch produces a TAMPER DETECTED error with instructions to delete the affected files and restart daemon-profile.

This detects partial corruption or partial tampering (e.g., private key replaced but checksum file untouched). It does not prevent an attacker with full filesystem write access from replacing all three files (private key, public key, checksum) with a self-consistent set. That threat model requires a root-of-trust outside the filesystem such as TPM-backed attestation.

Missing checksum files (from older installations) produce a warning rather than an error, for backward compatibility.

Key Rotation

The ClearanceRegistry supports runtime key rotation via rotate_key(old_pubkey, new_pubkey), which atomically removes the old entry and inserts the new one with the same name and clearance level but an incremented generation counter.

The rotation protocol uses KeyRotationPending and KeyRotationComplete events:

  1. daemon-profile generates a new keypair for the target daemon, writes it to disk, and broadcasts KeyRotationPending with the new public key and a grace period.
  2. The target daemon calls BusClient::handle_key_rotation, which reads the new keypair from disk, verifies the announced public key matches what is on disk (detecting tampering), reconnects to the bus with the new key, and re-announces via DaemonStarted.
  3. On reconnection, if the server detects a DaemonStarted from a verified name that already has an active connection, it evicts the stale old connection and registers the new one in name_to_conn.

connect_with_keypair_retry supports crash-restart scenarios where daemon-profile may have regenerated a daemon’s keypair. Each retry re-reads the keypair from disk with exponential backoff.

Request-Response Correlation

The bus supports three message routing patterns.

Request-Response (Unicast Reply)

When a message arrives without a correlation_id, route_frame() records (msg_id -> sender_conn_id) in the pending_requests table. The message is then broadcast to eligible subscribers. When a response arrives (identified by having a correlation_id), the server removes the matching entry from pending_requests and delivers the response only to the originating connection.

On the client side, BusClient::request() creates a message, registers a oneshot::channel waiter keyed by msg_id, sends the message, and awaits the response with a caller-specified timeout. If the timeout expires, the waiter is cleaned up and an error is returned.
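The server-side correlation logic can be sketched as a small state machine over a pending_requests map. The Frame and Route types here are hypothetical minimal views, not the real message types:

```rust
use std::collections::HashMap;

// Minimal view of a message, enough to drive routing decisions.
struct Frame {
    msg_id: u64,
    correlation_id: Option<u64>,
}

#[derive(Debug, PartialEq)]
enum Route {
    Broadcast,    // request: fan out to eligible subscribers
    Unicast(u64), // response: deliver only to the originating connection
    Drop,         // response with no pending request (e.g. requester gone)
}

#[derive(Default)]
struct Router {
    pending_requests: HashMap<u64, u64>, // msg_id -> originating conn_id
}

impl Router {
    fn route(&mut self, sender_conn: u64, frame: &Frame) -> Route {
        match frame.correlation_id {
            None => {
                // A request: remember who asked, then broadcast.
                self.pending_requests.insert(frame.msg_id, sender_conn);
                Route::Broadcast
            }
            // A response: deliver only to the original requester, if still pending.
            Some(corr) => match self.pending_requests.remove(&corr) {
                Some(conn) => Route::Unicast(conn),
                None => Route::Drop,
            },
        }
    }
}

fn main() {
    let mut router = Router::default();
    // Conn 1 sends a request with msg_id 100.
    assert_eq!(router.route(1, &Frame { msg_id: 100, correlation_id: None }), Route::Broadcast);
    // Conn 2 responds; the reply goes back to conn 1 only.
    assert_eq!(router.route(2, &Frame { msg_id: 101, correlation_id: Some(100) }), Route::Unicast(1));
    // A duplicate response finds no pending entry.
    assert_eq!(router.route(2, &Frame { msg_id: 102, correlation_id: Some(100) }), Route::Drop);
    println!("ok");
}
```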

Confirmed RPC

The server provides register_confirmation(correlation_id, mpsc::Sender), which returns an RAII ConfirmationGuard. When a correlated response matching the registered correlation_id arrives at route_frame(), the raw frame is sent to the confirmation channel instead of (or in addition to) the normal routing path. The ConfirmationGuard deregisters the route on drop, preventing stale entries from accumulating if the caller times out or encounters an error.

Pub-Sub Broadcast

Messages without a correlation_id that are not responses are broadcast to all connected subscribers whose security_clearance meets or exceeds the message’s security_level. The sender’s own connection is excluded to prevent feedback loops. The same echo-suppression applies to BusServer::publish() for in-process subscribers (it decodes the frame to extract the sender DaemonId and skips matching connections).

Named Unicast

The server maintains a name_to_conn: HashMap<String, u64> mapping, populated when route_frame() processes DaemonStarted events from connections with a verified_sender_name. send_to_named(daemon_name, frame) resolves the daemon name to a connection ID for O(1) unicast delivery without broadcasting.

Socket Path Resolution

socket_path() in transport.rs resolves the platform-appropriate socket path:

| Platform | Path |
|---|---|
| Linux | $XDG_RUNTIME_DIR/pds/bus.sock |
| macOS | ~/Library/Application Support/pds/bus.sock |
| Windows | \\.\pipe\pds\bus |

On Linux, XDG_RUNTIME_DIR must be set; its absence is a fatal error.
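The Linux branch can be sketched as a pure function, so the XDG_RUNTIME_DIR requirement is testable without touching the process environment (function name hypothetical; the real socket_path() reads std::env directly):

```rust
use std::path::PathBuf;

// Resolve the Linux bus socket path; a missing runtime dir is a fatal error.
fn linux_socket_path(xdg_runtime_dir: Option<&str>) -> Result<PathBuf, &'static str> {
    let dir = xdg_runtime_dir.ok_or("XDG_RUNTIME_DIR must be set")?;
    Ok(PathBuf::from(dir).join("pds").join("bus.sock"))
}

fn main() {
    assert_eq!(
        linux_socket_path(Some("/run/user/1000")).unwrap(),
        PathBuf::from("/run/user/1000/pds/bus.sock")
    );
    assert!(linux_socket_path(None).is_err()); // absence is fatal on Linux
    println!("ok");
}
```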

Socket Permissions

The bus server applies defense-in-depth permissions on bind:

  • The socket file is set to mode 0700.
  • The parent directory is set to mode 0700.

The real security boundary is UCred UID validation (the same-UID check in the accept loop), but restrictive filesystem permissions harden against misconfigured XDG_RUNTIME_DIR permissions.

See Also

Protocol Evolution

This page documents how the Open Sesame IPC protocol handles versioning, forward compatibility, and the addition of new event types without breaking existing daemons.

EventKind and Unknown Variant Deserialization

The protocol schema is defined by the EventKind enum in core-types/src/events.rs. This enum is marked #[non_exhaustive] and contains a catch-all variant:

#[derive(Clone, Serialize, Deserialize)]
#[non_exhaustive]
pub enum EventKind {
    // ... all named variants ...

    #[serde(other)]
    Unknown,
}

The #[serde(other)] attribute on the Unknown variant is the forward-compatibility mechanism. When a daemon receives a postcard-encoded EventKind with a variant discriminant it does not recognize (because the sender is running a newer version of the code), serde deserializes it as EventKind::Unknown instead of returning a deserialization error.

This means a daemon compiled against an older version of core-types can receive messages containing event variants that did not exist when it was compiled. The message deserializes successfully; the daemon sees EventKind::Unknown and can choose to ignore it, log it, or pass it through.

The #[non_exhaustive] attribute forces match expressions in downstream crates to include a wildcard arm. This prevents new variants from causing compile errors in crates that have not been updated.

Postcard Encoding Properties

The IPC bus uses postcard (a #[no_std]-compatible, compact binary serde format) for all serialization. Several properties of postcard’s encoding are relevant to protocol evolution.

Externally-Tagged Enums

EventKind uses serde’s default externally-tagged representation. Postcard encodes externally-tagged enums as a varint discriminant followed by the variant’s fields in declaration order. The events.rs source contains an explicit note:

Externally-tagged enum (serde default) for postcard wire compatibility. Postcard does not support #[serde(tag = "...", content = "...")].

This means:

  • Each variant is identified by its position (index) in the enum declaration.
  • Adding new variants at the end of the enum produces new discriminant values that older decoders do not recognize, triggering #[serde(other)] deserialization to Unknown.
  • Reordering existing variants would change their discriminants and break all existing decoders. Variant ordering must be append-only.

Positional Field Encoding

Postcard encodes struct fields positionally (by declaration order), not by name. The Message<T> envelope in message.rs contains a comment making this explicit:

No skip_serializing_if – postcard uses positional encoding, so the field must always be present in the wire format for decode compatibility.

This means:

  • Every field in Message<T> must always be serialized, even if its value is None. Omitting an Option field via skip_serializing_if would shift all subsequent fields by one position, causing decode failures.
  • New fields can only be appended to the end of the struct. The v3 fields (origin_installation, agent_id, trust_snapshot) are explicitly commented as “v3 fields (appended for positional encoding safety).”
  • Removing or reordering existing fields is a breaking change.

Implications for Field Addition

When a v3 sender transmits a message with the three new trailing fields to a v2 receiver, the v2 decoder reads only the fields it knows about and ignores trailing bytes. Postcard’s from_bytes does not require that all input bytes be consumed – it reads fields sequentially and stops when the struct is fully populated. This means appending new Option fields to Message<T> is backward-compatible as long as older decoders were compiled without those fields.
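The trailing-bytes tolerance can be illustrated with a toy positional decoder. Postcard's real encoding uses varints; single bytes stand in for fields here, and decode_v2 is purely illustrative:

```rust
// Fields are read in declaration order, and decoding stops once the known
// struct is fully populated — bytes appended by a newer sender are ignored.
fn decode_v2(bytes: &[u8]) -> Option<(u8, u8)> {
    Some((*bytes.first()?, *bytes.get(1)?))
}

fn main() {
    let v2_wire = [3u8, 42];    // wire_version plus one payload field
    let v3_wire = [3u8, 42, 7]; // same prefix plus an appended v3 field
    // The v2 decoder produces identical results for both encodings.
    assert_eq!(decode_v2(&v2_wire), decode_v2(&v3_wire));
    // The reverse direction fails: a v3 decoder runs out of input on v2 bytes.
    assert!(v2_wire.get(2).is_none());
    println!("ok");
}
```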

When a v2 sender transmits a message missing the v3 trailing fields to a v3 receiver, postcard::from_bytes encounters end-of-input when trying to decode the missing fields. In practice, the codebase treats wire version bumps as requiring atomic deployment of all binaries (see the wire version section below).

Wire Version Field

The Message<T> struct contains a wire_version: u8 field, always serialized first. The current value is 3, defined as pub const WIRE_VERSION: u8 = 3 in message.rs.

The source code documents the wire version contract:

WIRE FORMAT CONTRACT:

v2 fields: wire_version, msg_id, correlation_id, sender, timestamp, payload, security_level, verified_sender_name

All v2 binaries must be deployed atomically (single compilation unit). Adding fields requires incrementing this constant and updating the decode path to handle both old and new versions during rolling upgrades.

What the Wire Version Encodes

The wire version tracks changes to the Message<T> envelope structure – specifically, which fields are present and in what order. It does not track changes to EventKind variants (those are handled by #[serde(other)]).

  • v2: 8 fields (wire_version through verified_sender_name)
  • v3: 11 fields (adds origin_installation, agent_id, trust_snapshot)

Version Negotiation

The protocol does not perform explicit version negotiation. There is no handshake phase where client and server agree on a wire version. Instead, Message::new() always stamps the current WIRE_VERSION, and the source code states that all binaries must be deployed atomically when the wire version changes.

A receiver can inspect msg.wire_version after deserialization to determine which generation of the protocol the sender used. The current codebase does not implement version-conditional decode logic; all daemons are expected to be at the same wire version. The comment about “updating the decode path to handle both old and new versions during rolling upgrades” describes an intended future capability, not current behavior.

How New Event Variants Are Added

Adding a new EventKind variant follows this procedure:

  1. Append the new variant to the end of the EventKind enum in core-types/src/events.rs. Inserting it in the middle would change the discriminant indices of all subsequent variants.
  2. Add a Debug arm in the impl_event_debug! macro invocation at the bottom of events.rs. The macro enforces exhaustiveness – omitting a variant is a compile error. Sensitive variants (containing passwords or secret values) go in the sensitive section with explicit REDACTED annotations. All others go in the transparent section.
  3. No wire version bump is needed for new EventKind variants. The Unknown catch-all handles unrecognized discriminants at the EventKind level. Wire version bumps are only needed for changes to the Message<T> envelope structure.

Daemons compiled against the old core-types deserialize the new variant as EventKind::Unknown. Daemons compiled against the new core-types see the fully typed variant. Both can coexist on the same bus.
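The coexistence behavior reduces to discriminant dispatch. A toy analog of #[serde(other)], with illustrative variant names rather than the real 80-plus EventKind set:

```rust
// Postcard identifies a variant by its declaration index; an older decoder
// maps any index it does not recognize to Unknown instead of erroring.
#[derive(Debug, PartialEq)]
enum Event {
    ProfileChanged,
    ClipboardCopied,
    Unknown,
}

fn decode(discriminant: u32) -> Event {
    match discriminant {
        0 => Event::ProfileChanged,
        1 => Event::ClipboardCopied,
        _ => Event::Unknown, // variant appended by a newer sender
    }
}

fn main() {
    assert_eq!(decode(0), Event::ProfileChanged);
    // Index 7 was appended to the enum after this decoder was compiled.
    assert_eq!(decode(7), Event::Unknown);
    println!("ok");
}
```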

How New Message Fields Are Added

Adding a new field to Message<T> is a more disruptive change:

  1. Append the new field to the end of the Message<T> struct. Postcard’s positional encoding means insertion or reordering breaks all existing decoders.
  2. Increment WIRE_VERSION to signal the structural change.
  3. Deploy all binaries atomically. The codebase does not currently implement multi-version decode logic. All daemons must be rebuilt and redeployed together.
  4. Update MessageContext if the new field should be populated by the sender (as was done for origin_installation, agent_id, and trust_snapshot in v3).
  5. Do not use skip_serializing_if on the new field. The field must always be present on the wire for positional decode compatibility.

Practical Constraints

Variant Stability

The EventKind enum currently contains over 80 variants spanning window management, profile lifecycle, clipboard, input, secrets RPC, launcher RPC, agent lifecycle, authorization, federation, device posture, multi-factor auth, and bus-level errors. Each variant’s position in the enum declaration is its wire discriminant. Removing a variant or changing its position is a breaking wire change.

Enum Variant Field Changes

Postcard encodes variant fields positionally, the same as struct fields. Adding a field to an existing variant, removing a field, or reordering fields within a variant is a breaking wire change. New fields for existing functionality should be introduced as new variants rather than modifications to existing ones.

Sensitivity Redaction

The Debug implementation for EventKind uses a compile-time exhaustive macro (impl_event_debug!) that separates sensitive variants from transparent ones. Sensitive variants (SecretGetResponse, SecretSet, UnlockRequest, SshUnlockRequest, FactorSubmit) have their secret-bearing fields rendered as [REDACTED; N bytes] in debug output. Adding a new variant that carries secret material requires placing it in the sensitive section of the macro.

Forward Compatibility Boundaries

The #[serde(other)] mechanism provides forward compatibility only for unknown enum variants. It does not help with:

  • Unknown fields within a known variant (postcard has no field-skipping mechanism for positional encoding)
  • Structural changes to the Message<T> envelope
  • Changes to the framing layer (length-prefix format, encryption chunking)
  • Changes to the Noise handshake parameters

These categories of change require coordinated deployment of all binaries.

See Also

Sandbox Model

Open Sesame enforces a three-layer process containment model on Linux: Landlock filesystem sandboxing, seccomp-bpf syscall filtering, and systemd unit hardening. Each daemon receives a tailored sandbox that grants the minimum privileges required for its function. Sandbox application is mandatory — every daemon treats sandbox failure as fatal and refuses to start unsandboxed.

Process Hardening

Before any sandbox is applied, every daemon calls harden_process() (platform-linux/src/security.rs:14). This function performs two operations:

  1. PR_SET_DUMPABLE(0) — prevents ptrace attachment by non-root processes and prevents core dumps from containing process memory (security.rs:19).
  2. RLIMIT_CORE(0,0) — sets both soft and hard core dump limits to zero, preventing core files even if dumpable is re-enabled by setuid (security.rs:32-36).

Resource limits are applied via apply_resource_limits() (security.rs:66). All daemons set RLIMIT_NOFILE to 4096. The memlock_bytes parameter is set to 0 at the application level; systemd units provide the actual LimitMEMLOCK=64M constraint.

These hardening calls log errors but do not abort. A daemon still proceeds to Landlock and seccomp even if prctl or setrlimit fails. The Landlock and seccomp layers are the hard security boundary.

Landlock Filesystem Sandbox

Landlock provides unprivileged filesystem sandboxing on Linux kernels >= 5.13. The shared implementation lives in platform-linux/src/sandbox.rs. Each daemon defines its own ruleset in a per-daemon apply_sandbox() function.

ABI Level and Enforcement Policy

The sandbox targets Landlock ABI V6 (sandbox.rs:77), which covers filesystem access (AccessFs), network access (AccessNet), and scope restrictions (abstract Unix sockets and cross-process signals via Scope). The Ruleset is created with handle_access(AccessFs::from_all(abi)) and handle_access(AccessNet::from_all(abi)) to handle all access types at the V6 level (sandbox.rs:85-96).

Partial enforcement is treated as a fatal error. If the kernel ABI cannot fully enforce the requested rules, apply_landlock() returns an error and the daemon aborts (sandbox.rs:157-161). There is no graceful degradation path.

ENOENT Handling

Paths that do not exist at sandbox application time are silently skipped (sandbox.rs:114-120). This is strictly more restrictive than granting the path, because Landlock denies access to any path not present in the ruleset. This design handles the case where directories have not yet been created — for example, the vaults directory before sesame init runs, or $XDG_RUNTIME_DIR/pds/ before daemon-profile creates it.

On NixOS, configuration files are symlinks into /nix/store. Each daemon calls core_config::resolve_config_real_dirs() before applying Landlock to discover the real filesystem paths behind config symlinks. These resolved paths are added as read-only Landlock rules so that config hot-reload can follow symlinks after the sandbox is applied.

daemon-wm additionally grants blanket read-only access to /nix/store (daemon-wm/src/sandbox.rs:68-69) for shared libraries, GLib schemas, locale data, and XKB keyboard rules.

daemon-profile creates its Landlock target directories if they do not exist before opening PathFd handles (daemon-profile/src/sandbox.rs:38-42). This handles the race condition where systemd restarts daemon-profile after a sesame init --wipe-reset-destroy-all-data before the directories are recreated.

Non-Directory Inode Handling

The implementation performs fstat() on each PathFd after opening it to detect whether the inode is a directory or a non-directory file (sandbox.rs:130-136). For non-directory inodes (sockets, regular files), directory-only access flags (ReadDir, MakeDir, etc.) are masked off using AccessFs::from_file(abi). This prevents the Landlock crate’s PathBeneath::try_compat_inner from reporting PartiallyEnforced on non-directory fds.

The FsAccess::ReadWriteFile variant (sandbox.rs:22-24) exists specifically for non-directory paths such as Unix domain sockets, granting file-level read-write access without directory-only flags.

Scope Restrictions

Two scope modes are available via the LandlockScope enum (sandbox.rs:54-60):

  • Full — blocks both abstract Unix sockets and cross-process signals. Uses Scope::from_all(abi) which on ABI V6 includes AbstractUnixSocket and Signal.
  • SignalOnly — blocks cross-process signals only, permitting abstract Unix sockets. Uses Scope::Signal alone.

Daemons that need D-Bus or Wayland communication via abstract Unix sockets use SignalOnly. Daemons with no such requirement use Full.

Per-Daemon Filesystem Rules

daemon-profile

Source: daemon-profile/src/sandbox.rs:29. Scope: SignalOnly (needs D-Bus).

| Path | Access | Purpose |
|---|---|---|
| ~/.config/pds/ | ReadWrite | Audit log, config, vault metadata |
| $XDG_RUNTIME_DIR/pds/ | ReadWrite | IPC bus socket, keys, runtime state |
| $NOTIFY_SOCKET | ReadWriteFile | systemd sd_notify keepalives |
| $SSH_AUTH_SOCK + canonicalized target + parent | ReadWriteFile / ReadOnly | SSH agent auto-unlock |
| ~/.ssh/ + agent.sock + canonicalized target + parent | ReadOnly / ReadWriteFile | Stable SSH agent symlink fallback |
| Resolved config symlink targets | ReadOnly | Config hot-reload on NixOS |

daemon-profile is the only daemon that hosts the IPC bus server socket. It requires ReadWrite on the entire $XDG_RUNTIME_DIR/pds/ directory because it creates the bus.sock and bus.pub files at startup.

SSH agent socket handling resolves symlinks to their target inodes. On Konductor VMs, ~/.ssh/agent.sock is a stable symlink to a per-session /tmp/ssh-XXXX/agent.PID path. Landlock resolves symlinks to their target inodes, so the implementation grants access to the symlink path, the canonicalized target, and the parent directory of the target for path traversal (daemon-profile/src/sandbox.rs:81-149).
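Because Landlock rules attach to target inodes rather than symlink paths, the grant set for a symlinked agent socket can be sketched as pure path logic. The helper below is hypothetical; the real logic (including the ReadWriteFile vs ReadOnly distinction per path) lives in daemon-profile/src/sandbox.rs:

```rust
use std::path::{Path, PathBuf};

/// Given the configured socket path and its canonicalized target,
/// compute the paths that need Landlock rules. Hypothetical sketch;
/// the real code also assigns per-path access levels.
fn agent_socket_grants(symlink: &Path, target: &Path) -> Vec<PathBuf> {
    let mut grants = vec![symlink.to_path_buf(), target.to_path_buf()];
    // The parent directory of the target is needed for path traversal,
    // because Landlock resolves symlinks to target inodes.
    if let Some(parent) = target.parent() {
        grants.push(parent.to_path_buf());
    }
    grants
}

fn main() {
    let grants = agent_socket_grants(
        Path::new("/home/kat/.ssh/agent.sock"),
        Path::new("/tmp/ssh-abc123/agent.4242"),
    );
    assert_eq!(grants.len(), 3);
    assert_eq!(grants[2], PathBuf::from("/tmp/ssh-abc123"));
}
```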

daemon-secrets

Source: daemon-secrets/src/sandbox.rs:7. Scope: Full (no abstract Unix sockets needed).

| Path | Access | Purpose |
| --- | --- | --- |
| ~/.config/pds/ | ReadWrite | Vault SQLCipher databases, salt storage |
| $XDG_RUNTIME_DIR/pds/keys/ | ReadOnly | IPC client keypair |
| $XDG_RUNTIME_DIR/pds/bus.pub | ReadOnly | Bus server public key |
| $XDG_RUNTIME_DIR/pds/bus.sock | ReadWriteFile | IPC bus socket |
| $XDG_RUNTIME_DIR/bus | ReadWriteFile | D-Bus filesystem socket |
| $NOTIFY_SOCKET | ReadWriteFile | systemd sd_notify keepalives |
| Resolved config symlink targets | ReadOnly | Config hot-reload on NixOS |

daemon-secrets has the narrowest Landlock ruleset of all daemons that handle secret material. It uses LandlockScope::Full to block abstract Unix sockets. The D-Bus filesystem socket at $XDG_RUNTIME_DIR/bus is granted as a ReadWriteFile rule because it is a non-directory inode (daemon-secrets/src/sandbox.rs:44-47).

daemon-wm

Source: daemon-wm/src/sandbox.rs:8. Scope: SignalOnly (Wayland uses abstract sockets).

| Path | Access | Purpose |
| --- | --- | --- |
| $XDG_RUNTIME_DIR/pds/keys/ | ReadOnly | IPC client keypair |
| $XDG_RUNTIME_DIR/pds/bus.pub | ReadOnly | Bus server public key |
| $XDG_RUNTIME_DIR/pds/bus.sock | ReadWriteFile | IPC bus socket |
| $WAYLAND_DISPLAY socket | ReadWriteFile | Wayland compositor protocol |
| ~/.cache/open-sesame/ | ReadWrite | MRU state, overlay cache |
| /etc/fonts | ReadOnly | Fontconfig configuration |
| /usr/share/fonts | ReadOnly | System font files |
| ~/.config/cosmic/ | ReadOnly | COSMIC desktop theme integration |
| /nix/store | ReadOnly | Shared libs, schemas, XKB (NixOS) |
| /proc | ReadOnly | xdg-desktop-portal PID verification |
| /usr/share | ReadOnly | System shared data (fonts, icons, mime, locale) |
| /usr/share/X11/xkb | ReadOnly | XKB system rules (non-NixOS) |
| ~/.local/share/ | ReadOnly | User fonts and theme data |
| ~/.config/pds/vaults/ | ReadOnly | Salt files and SSH enrollment blobs |
| $SSH_AUTH_SOCK + canonicalized paths | ReadWriteFile / ReadOnly | SSH agent auto-unlock |
| $NOTIFY_SOCKET | ReadWriteFile | systemd sd_notify keepalives |
| Resolved config symlink targets | ReadOnly | Config hot-reload on NixOS |

daemon-wm has the broadest Landlock ruleset because it renders a Wayland overlay using SCTK and tiny-skia. It requires access to fonts, theme data, and system shared resources. GPU/DRI access is intentionally excluded — rendering uses wl_shm CPU shared memory buffers only (daemon-wm/src/sandbox.rs:91-93).

daemon-clipboard

Source: daemon-clipboard/src/main.rs:306. Scope: Full.

| Path | Access | Purpose |
| --- | --- | --- |
| $XDG_RUNTIME_DIR/pds/keys/ | ReadOnly | IPC client keypair |
| $XDG_RUNTIME_DIR/pds/bus.pub | ReadOnly | Bus server public key |
| $XDG_RUNTIME_DIR/pds/bus.sock | ReadWriteFile | IPC bus socket |
| $WAYLAND_DISPLAY socket | ReadWriteFile | Wayland data-control protocol |
| ~/.cache/open-sesame/ | ReadWrite | Clipboard history SQLite database |
| Resolved config symlink targets | ReadOnly | Config hot-reload on NixOS |

daemon-input

Source: daemon-input/src/main.rs:319. Scope: Full.

| Path | Access | Purpose |
| --- | --- | --- |
| $XDG_RUNTIME_DIR/pds/keys/ | ReadOnly | IPC client keypair |
| $XDG_RUNTIME_DIR/pds/bus.pub | ReadOnly | Bus server public key |
| $XDG_RUNTIME_DIR/pds/bus.sock | ReadWriteFile | IPC bus socket |
| /dev/input | ReadOnly | evdev keyboard device nodes |
| /sys/class/input | ReadOnly | evdev device enumeration symlinks |
| /sys/devices | ReadOnly | evdev device metadata via sysfs |
| Resolved config symlink targets | ReadOnly | Config hot-reload on NixOS |

daemon-input is the only daemon with access to /dev/input and /sys/class/input. It reads raw keyboard events via evdev.

daemon-snippets

Source: daemon-snippets/src/main.rs:241. Scope: Full.

| Path | Access | Purpose |
| --- | --- | --- |
| $XDG_RUNTIME_DIR/pds/keys/ | ReadOnly | IPC client keypair |
| $XDG_RUNTIME_DIR/pds/bus.pub | ReadOnly | Bus server public key |
| $XDG_RUNTIME_DIR/pds/bus.sock | ReadWriteFile | IPC bus socket |
| ~/.config/pds/ | ReadOnly | Config directory (snippet templates) |
| Resolved config symlink targets | ReadOnly | Config hot-reload on NixOS |

daemon-snippets has the narrowest Landlock ruleset of all sandboxed daemons. It requires only IPC bus access and read-only config access.

daemon-launcher

daemon-launcher does not apply Landlock or seccomp. It spawns arbitrary desktop applications as child processes via fork+exec. Landlock and seccomp filters inherit across fork+exec and would kill every spawned application (daemon-launcher/src/main.rs:119-121). The security boundary for daemon-launcher is IPC bus authentication via Noise IK. systemd unit hardening provides the process containment layer.

seccomp-bpf Syscall Filtering

The seccomp implementation uses libseccomp to build per-daemon BPF filters (platform-linux/src/sandbox.rs:259). seccomp is always applied after Landlock because Landlock setup requires syscalls (landlock_create_ruleset, landlock_add_rule, landlock_restrict_self) that the seccomp filter does not permit.

Default Action

The default action for disallowed syscalls is ScmpAction::KillThread (SECCOMP_RET_KILL_THREAD) (sandbox.rs:268). This sends SIGSYS to the offending thread rather than using KillProcess, which would skip the signal handler entirely. The choice of KillThread over Errno or Log is deliberate — Errno or Log would allow an attacker to probe for allowed syscalls (sandbox.rs:256-258).

SIGSYS Handler

A custom SIGSYS signal handler is installed before the seccomp filter is loaded (sandbox.rs:173-238). The handler is designed to be async-signal-safe:

  • It uses no allocator and makes no heap allocations.
  • It extracts the syscall number from siginfo_t at byte offset 24 from the struct base on x86_64 (sandbox.rs:201). This offset corresponds to si_call_addr (8-byte pointer) followed by si_syscall (4-byte int) within the _sigsys union member, which starts at byte offset 16 from the struct base.
  • It formats the number into a stack-allocated buffer and writes "SECCOMP VIOLATION: syscall=NNN" to stderr via raw libc::write() on fd 2.
  • After logging, it resets SIGSYS to SIG_DFL via libc::signal() and re-raises the signal via libc::raise() (sandbox.rs:226-228).

The handler is registered with SA_SIGINFO | SA_RESETHAND flags (sandbox.rs:235). SA_RESETHAND ensures the handler fires only once — subsequent SIGSYS deliveries use the default disposition.

Per-Daemon Syscall Differences

All six sandboxed daemons share a common baseline of approximately 50 syscalls:

  • I/O basics: read, write, close, openat, lseek, pread64, fstat, stat, newfstatat, statx, access
  • Memory management: mmap, mprotect, munmap, madvise, brk
  • Process/threading: futex, clone3, clone, set_robust_list, set_tid_address, rseq, sched_getaffinity, prlimit64, prctl, getpid, gettid, getuid, geteuid, kill
  • epoll and polling: epoll_wait, epoll_ctl, epoll_create1, eventfd2, poll, ppoll
  • Timers: clock_gettime, timer_create, timer_settime, timer_delete
  • Networking: socket, connect, sendto, recvfrom, recvmsg, sendmsg, getsockname, getpeername, setsockopt, socketpair, shutdown, getsockopt
  • Signals: sigaltstack, rt_sigaction, rt_sigprocmask, rt_sigreturn, tgkill
  • inotify: inotify_init1, inotify_add_watch, inotify_rm_watch
  • Misc: exit_group, exit, getrandom, memfd_secret, ftruncate, restart_syscall, pipe2, dup

The following table lists syscalls that differentiate the daemons:

| Syscall | profile | secrets | wm | clipboard | input | snippets | Purpose |
| --- | --- | --- | --- | --- | --- | --- | --- |
| bind | Y | - | Y | - | - | - | Server socket / Wayland |
| listen | Y | - | Y | - | - | - | Server socket / Wayland |
| accept4 | Y | - | Y | - | - | - | Server socket / Wayland |
| mlock | - | Y | Y | - | - | - | Secret zeroization / SCTK buffers |
| munlock | - | Y | - | - | - | - | Secret zeroization |
| mlock2 | - | - | Y | - | - | - | SCTK/Wayland runtime |
| mremap | - | - | Y | - | - | - | SCTK buffer reallocation |
| pwrite64 | - | Y | - | - | - | - | SQLCipher journal writes |
| fallocate | - | Y | - | - | - | - | SQLCipher space preallocation |
| flock | Y | Y | Y | Y | - | - | Database/file locking |
| chmod / fchmod | Y | - | Y | - | - | - | File permission management |
| fchown | Y | - | - | - | - | - | IPC socket ownership |
| rename | Y | Y | Y | - | - | - | Atomic file replacement |
| unlink | Y | Y | Y | - | - | - | File/socket cleanup |
| statfs / fstatfs | - | - | Y | - | - | - | Filesystem info (SCTK) |
| sched_get_priority_max | - | - | Y | - | - | - | Thread priority (SCTK) |
| sysinfo | - | - | Y | - | - | - | System memory info (SCTK) |
| memfd_create | Y | - | Y | - | - | - | D-Bus / Wayland shared memory |
| nanosleep | Y | Y | Y | - | - | - | Event loop timing |
| clock_nanosleep | Y | Y | Y | - | - | - | Event loop timing |
| sched_yield | Y | - | Y | - | - | - | Cooperative thread scheduling |
| timerfd_create | Y | - | Y | - | - | - | D-Bus / Wayland event loops |
| timerfd_settime | Y | - | Y | - | - | - | D-Bus / Wayland event loops |
| timerfd_gettime | Y | - | Y | - | - | - | D-Bus / Wayland event loops |
| getresuid / getresgid | Y | Y | Y | - | - | - | D-Bus credential passing |
| getgid / getegid | Y | Y | Y | - | - | - | D-Bus credential passing |
| writev / readv | Y | Y | Y | - | - | - | Scatter/gather I/O |
| readlinkat | Y | Y | Y | - | - | - | Symlink resolution |
| uname | Y | Y | Y | - | - | - | D-Bus / Wayland runtime |
| getcwd | Y | Y | Y | - | - | - | Working directory resolution |

Key observations:

  • daemon-secrets uniquely requires mlock/munlock for zeroization of secret material in memory, plus pwrite64 and fallocate for SQLCipher database journal operations.
  • daemon-wm has the broadest syscall allowlist (~88 syscalls) due to Wayland/SCTK runtime requirements including mremap, mlock2, statfs/fstatfs, sysinfo, and sched_get_priority_max.
  • daemon-profile requires bind/listen/accept4 because it hosts the IPC bus server socket. It also requires fchown for setting socket ownership.
  • daemon-input and daemon-snippets have the narrowest allowlists (~57-60 syscalls).
  • All sandboxed daemons permit memfd_secret for secure memory allocation and getrandom for cryptographic random number generation.

systemd Unit Hardening

Each daemon runs as a Type=notify systemd user service with WatchdogSec=30. Service files are located in contrib/systemd/.

Common Directives

All seven daemons share the following systemd hardening:

| Directive | Value | Effect |
| --- | --- | --- |
| NoNewPrivileges | yes | Prevents privilege escalation via setuid/setgid binaries |
| LimitCORE | 0 | Disables core dumps at the cgroup level |
| LimitMEMLOCK | 64M | Caps locked memory at 64 MiB |
| Restart | on-failure | Automatic restart on non-zero exit |
| RestartSec | 5 | Five-second delay between restarts |
| WatchdogSec | 30 | Daemon must call sd_notify(WATCHDOG=1) within 30 seconds |
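Assembled from these directives, a daemon's hardening block looks roughly like the fragment below. This is an illustrative sketch, not a verbatim unit file; the shipped units live in contrib/systemd/ and add per-daemon directives on top.

```ini
# Hypothetical excerpt mirroring the common directives above.
[Service]
Type=notify
WatchdogSec=30
Restart=on-failure
RestartSec=5
NoNewPrivileges=yes
LimitCORE=0
LimitMEMLOCK=64M
```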

Per-Daemon systemd Differences

| Directive | profile | secrets | wm | launcher | clipboard | input | snippets |
| --- | --- | --- | --- | --- | --- | --- | --- |
| ProtectHome | read-only | read-only | read-only | - | read-only | read-only | read-only |
| ProtectSystem | strict | strict | strict | - | strict | strict | strict |
| PrivateNetwork | - | yes | - | - | - | - | - |
| ProtectClock | - | - | - | yes | - | - | - |
| ProtectKernelTunables | - | - | - | yes | - | - | - |
| ProtectKernelModules | - | - | - | yes | - | - | - |
| ProtectKernelLogs | - | - | - | yes | - | - | - |
| ProtectControlGroups | - | - | - | yes | - | - | - |
| LockPersonality | - | - | - | yes | - | - | - |
| RestrictSUIDSGID | - | - | - | yes | - | - | - |
| SystemCallArchitectures | - | - | - | native | - | - | - |
| CapabilityBoundingSet | - | - | - | (empty) | - | - | - |
| KillMode | - | - | - | process | - | - | - |
| LimitNOFILE | 4096 | 1024 | 4096 | 4096 | 4096 | 4096 | 4096 |
| MemoryMax | 128M | 256M | 128M | - | 128M | 128M | 128M |

Notable design decisions:

  • daemon-secrets (open-sesame-secrets.service:18) is the only daemon with PrivateNetwork=yes, placing it in its own network namespace with no connectivity. It communicates exclusively via the Unix domain IPC bus socket. It has the lowest LimitNOFILE (1024) but the highest MemoryMax (256M) to accommodate Argon2id, which allocates 19 MiB per key derivation.
  • daemon-launcher (open-sesame-launcher.service:17-21) does not set ProtectHome or ProtectSystem because these mount namespace restrictions inherit to child processes spawned via systemd-run --scope. Firefox, for example, writes to /run/user/1000/dconf/ and fails with “Read-only file system” when ProtectSystem=strict is applied to the launcher. Instead, daemon-launcher uses kernel control plane protections and an empty CapabilityBoundingSet to drop all Linux capabilities. KillMode=process ensures spawned applications survive launcher restarts.
  • ReadWritePaths vary per daemon: daemon-profile and daemon-secrets get %t/pds and %h/.config/pds; daemon-wm and daemon-clipboard get %h/.cache/open-sesame; daemon-wm additionally gets %h/.cache/fontconfig.

Sandbox Application Order

The sandbox layers are applied in a strict sequence during daemon startup:

  1. harden_process() — PR_SET_DUMPABLE(0), RLIMIT_CORE(0,0)
  2. apply_resource_limits() — RLIMIT_NOFILE, RLIMIT_MEMLOCK
  3. Pre-sandbox I/O — open file descriptors, connect to IPC bus, read keypairs, scan desktop entries (daemon-launcher), open evdev devices (daemon-input)
  4. init_secure_memory() — probe memfd_secret before seccomp locks down syscalls
  5. apply_landlock() — filesystem containment (implicitly sets PR_SET_NO_NEW_PRIVS via landlock_restrict_self)
  6. apply_seccomp() — syscall filtering (must follow Landlock)

This ordering is critical. Landlock setup requires the landlock_create_ruleset, landlock_add_rule, and landlock_restrict_self syscalls, which are not in any daemon’s seccomp allowlist. The IPC bus connection must be established before Landlock restricts filesystem access, because the daemon reads its keypair from $XDG_RUNTIME_DIR/pds/keys/.

Daemon Sandbox Capability Matrix

| Daemon | harden_process | Landlock | seccomp | Landlock Scope | PrivateNetwork | ProtectSystem | Approx. Syscalls |
| --- | --- | --- | --- | --- | --- | --- | --- |
| daemon-profile | Y | Y | Y | SignalOnly | - | strict | ~80 |
| daemon-secrets | Y | Y | Y | Full | Y | strict | ~72 |
| daemon-wm | Y | Y | Y | SignalOnly | - | strict | ~88 |
| daemon-launcher | Y | - | - | N/A | - | - | N/A |
| daemon-clipboard | Y | Y | Y | Full | - | strict | ~60 |
| daemon-input | Y | Y | Y | Full | - | strict | ~60 |
| daemon-snippets | Y | Y | Y | Full | - | strict | ~57 |

Profile Trust Model

Trust profiles are the fundamental isolation boundary in Open Sesame. Every scoped resource – secrets, clipboard content, frecency data, snippets, audit entries, and launch configurations – is partitioned by trust profile.

TrustProfileName Validation

The TrustProfileName type in core-types/src/profile.rs enforces strict validation at construction time. It is impossible to hold an invalid TrustProfileName value after construction.

Invariants:

  • Non-empty.
  • Maximum 64 bytes.
  • Must start with an ASCII alphanumeric character: [a-zA-Z0-9].
  • Body characters restricted to: [a-zA-Z0-9_-].
  • Not . or .. (path traversal prevention).
  • No whitespace, path separators, or null bytes.

Invalid characters produce a detailed error message including the byte value and position: "trust profile name contains invalid byte 0x{b:02x} at position {i}".

These rules make the name safe for direct use in filesystem paths without additional sanitization.
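The invariants above can be sketched as a single validation pass. The free function below is a hypothetical approximation of the real constructor in core-types/src/profile.rs, including its documented error format:

```rust
/// Sketch of the TrustProfileName invariants described above.
/// Hypothetical shape; the real type validates in its constructor.
fn validate_profile_name(name: &str) -> Result<(), String> {
    if name.is_empty() {
        return Err("trust profile name is empty".into());
    }
    if name.len() > 64 {
        return Err("trust profile name exceeds 64 bytes".into());
    }
    if name == "." || name == ".." {
        return Err("trust profile name is a path traversal token".into());
    }
    let bytes = name.as_bytes();
    if !bytes[0].is_ascii_alphanumeric() {
        return Err("trust profile name must start with [a-zA-Z0-9]".into());
    }
    for (i, &b) in bytes.iter().enumerate() {
        if !(b.is_ascii_alphanumeric() || b == b'_' || b == b'-') {
            // Error format documented above: byte value and position.
            return Err(format!(
                "trust profile name contains invalid byte 0x{b:02x} at position {i}"
            ));
        }
    }
    Ok(())
}

fn main() {
    assert!(validate_profile_name("work-2024").is_ok());
    assert!(validate_profile_name("../etc").is_err());
    assert!(validate_profile_name("").is_err());
    assert_eq!(
        validate_profile_name("bad name").unwrap_err(),
        "trust profile name contains invalid byte 0x20 at position 3"
    );
}
```

Because every byte is restricted to `[a-zA-Z0-9_-]`, the validated name can be interpolated into `vaults/{name}.db` and similar paths with no further escaping.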

Filesystem mappings:

| Resource | Path pattern |
| --- | --- |
| SQLCipher vault | vaults/{name}.db |
| BLAKE3 KDF context | "pds v2 vault-key {name}" |
| Frecency database | launcher/{name}.frecency.db |

TrustProfileName implements TryFrom<String> and TryFrom<&str>, returning Error::Validation on failure. It serializes transparently (via #[serde(transparent)]) as a plain string and deserializes with validation. All boundary-facing code – CLI argument parsers, IPC message handlers, config file loaders – validates at entry.

Profile Scoping

Each trust profile isolates the following resources:

| Resource | Isolation mechanism |
| --- | --- |
| Secrets | Per-profile SQLCipher vault file. Vault keys are derived via BLAKE3 KDF with profile-specific context strings. |
| Clipboard | Cross-profile clipboard access is denied and logged as AuditAction::IsolationViolationAttempt. |
| Frecency | Per-profile SQLite database for launch frecency ranking. Profile switch in daemon-launcher triggers engine.switch_profile(). |
| Extensions | Extension data is scoped per profile via IsolatedResource::Extensions. |
| Window list | Window management state is scoped per profile via IsolatedResource::WindowList. |
| Audit | Audit entries record which profile was involved in each operation via ProfileId fields. |
| Launch profiles | Launch profile definitions live under profiles.<name>.launch_profiles in configuration. |

The IsolatedResource enum in core-profile/src/lib.rs defines the five isolatable resource types: Clipboard, Secrets, Frecency, Extensions, WindowList. It is serialized with #[serde(rename_all = "lowercase")] for configuration and audit log entries.

Profile State Machine

Each profile has an independent lifecycle state, represented by the ProfileState enum:

  • Inactive: vault closed, no secrets served.
  • Active(ProfileId): vault open, serving secrets.
  • Transitioning(ProfileId): activation or deactivation in progress.

Multiple profiles may be active concurrently. There is no global “active profile” singleton – the system supports simultaneous active profiles with independent vaults. The active_profiles set in daemon-profile is a HashSet<TrustProfileName>.

Context-Driven Activation

The ContextEngine in core-profile/src/context.rs evaluates system signals against activation rules to determine the default profile for new unscoped launches. Changing the default does not deactivate other active profiles.

Context Signals

Signals that trigger rule evaluation:

| Signal | Source |
| --- | --- |
| SsidChanged(String) | WiFi network change via D-Bus SSID monitor (platform_linux::dbus::ssid_monitor). |
| AppFocused(AppId) | Wayland compositor focus change via platform_linux::compositor::focus_monitor. |
| UsbDeviceAttached(String) | USB device insertion (vendor:product identifier). |
| UsbDeviceDetached(String) | USB device removal. |
| HardwareKeyPresent(String) | Hardware security key detection (e.g., YubiKey). |
| TimeWindowEntered(String) | Time-based rule trigger (cron-like expression). |
| GeolocationChanged(f64, f64) | Location change (latitude, longitude). |

Signal sources are spawned as long-lived tokio tasks in daemon-profile/src/main.rs. They are conditionally compiled behind #[cfg(all(target_os = "linux", feature = "desktop"))].

Activation Rules

Each profile’s activation configuration (ProfileActivation) contains:

  • rules: a Vec<ActivationRule>, each specifying a RuleTrigger type and a string value to match.
  • combinator: RuleCombinator::All (every rule must match the signal) or RuleCombinator::Any (one matching rule suffices).
  • priority: u32 value. When multiple profiles match, the highest priority wins.
  • switch_delay_ms: u64 debounce interval in milliseconds. Prevents rapid oscillation when a signal fires repeatedly.

Evaluation Algorithm

When ContextEngine::evaluate(signal) is called:

  1. All profiles whose rules match the signal are collected. For All combinators, every rule in the profile must match; for Any, at least one rule must match.
  2. Candidates are sorted by priority descending.
  3. The highest-priority candidate is selected.
  4. If it is already the current default, None is returned (no change).
  5. Debounce check: if the candidate was last switched to within switch_delay_ms ago, None is returned.
  6. Otherwise, the default is updated, the switch time is recorded, and the new ProfileId is returned.

Rule matching is type-strict: an Ssid trigger only matches SsidChanged signals, an AppFocus trigger only matches AppFocused signals, and so on. Mismatched trigger/signal pairs always return false.
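The six evaluation steps can be condensed into a short sketch. The struct and field names below are hypothetical stand-ins for the real ContextEngine in core-profile/src/context.rs; rule matching is assumed to have already produced the list of matching profiles:

```rust
use std::collections::HashMap;
use std::time::{Duration, Instant};

// Hypothetical candidate produced by rule matching.
struct Candidate {
    name: &'static str,
    priority: u32,
    switch_delay_ms: u64,
}

struct Engine {
    current: Option<&'static str>,
    last_switch: HashMap<&'static str, Instant>,
}

impl Engine {
    fn evaluate(&mut self, mut matched: Vec<Candidate>) -> Option<&'static str> {
        // Steps 1-3: sort matching profiles by priority descending, take the best.
        matched.sort_by(|a, b| b.priority.cmp(&a.priority));
        let best = matched.into_iter().next()?;
        // Step 4: no change if it is already the current default.
        if self.current == Some(best.name) {
            return None;
        }
        // Step 5: debounce to suppress rapid oscillation.
        if let Some(t) = self.last_switch.get(best.name) {
            if t.elapsed() < Duration::from_millis(best.switch_delay_ms) {
                return None;
            }
        }
        // Step 6: commit the switch and report the new default.
        self.last_switch.insert(best.name, Instant::now());
        self.current = Some(best.name);
        Some(best.name)
    }
}

fn main() {
    let mut engine = Engine { current: Some("personal"), last_switch: HashMap::new() };
    let matched = vec![
        Candidate { name: "work", priority: 10, switch_delay_ms: 500 },
        Candidate { name: "personal", priority: 1, switch_delay_ms: 500 },
    ];
    assert_eq!(engine.evaluate(matched), Some("work"));
    // Re-evaluating the same winner is a no-op (step 4).
    let again = vec![Candidate { name: "work", priority: 10, switch_delay_ms: 500 }];
    assert_eq!(engine.evaluate(again), None);
}
```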

Default Profile

The default profile determines which trust profile is used for new unscoped launches (launches without an explicit --profile flag). It is set by:

  1. Configuration: global.default_profile in the config file, loaded at startup.
  2. Context engine: automatic switching based on runtime signals overrides the config default.
  3. Hot reload: when config changes are detected by ConfigWatcher, the context engine is rebuilt with the new default and the default_profile_name is updated. The config_profile_names list is also refreshed so that sesame profile list reflects added or removed profiles.

Default profile changes are:

  • Audited via AuditAction::DefaultProfileChanged.
  • Broadcast on the IPC bus as EventKind::DefaultProfileChanged.
  • Reported by sesame status.

Profile Inheritance

There is no profile inheritance in the current implementation. Each trust profile is an independent, self-contained configuration with its own launch profiles, vault, and isolation boundaries. Cross-profile interaction is limited to qualified tag references (e.g., work:corp) in launch profile composition, which merge environment at launch time without merging the profile definitions themselves.

Workspace Conventions

Open Sesame enforces a deterministic directory layout for source code repositories. Git remote URLs are parsed into canonical filesystem paths following the convention {root}/{user}/{server}/{org}/{repo}.

Canonical Path Convention

Every repository maps to a unique filesystem path:

/workspace/{user}/{server}/{org}/{repo}

For example:

| Remote URL | Canonical path |
| --- | --- |
| https://github.com/scopecreep-zip/open-sesame | /workspace/usrbinkat/github.com/scopecreep-zip/open-sesame |
| git@github.com:braincraftio/k9.git | /workspace/usrbinkat/github.com/braincraftio/k9 |
| git@git.braincraft.io:braincraft/k9.git | /workspace/usrbinkat/git.braincraft.io/braincraft/k9 |

The default root is /workspace, configurable via the SESAME_WORKSPACE_ROOT environment variable or settings.root in workspaces.toml.

URL Parsing

The parse_url function in sesame-workspace/src/convention.rs accepts two URL formats:

HTTPS

https://github.com/org/repo[.git]

Splits on / after stripping the scheme. Requires at least three path components: server/org/repo.

SSH

git@github.com:org/repo.git

Splits on @ to isolate the user portion, then on : to separate the server from the org/repo path. The path after the colon is split on / to extract org and repo.
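A minimal sketch of the two splitting strategies, assuming well-formed input (the real parse_url in sesame-workspace/src/convention.rs additionally runs validate_component and handles the workspace.git case):

```rust
/// Hypothetical sketch: parse an HTTPS or SSH remote into
/// (server, org, repo), lowercasing the server and stripping ".git".
fn parse_url(url: &str) -> Option<(String, String, String)> {
    if let Some(rest) = url.strip_prefix("https://") {
        // HTTPS: split on '/' into server/org/repo after stripping the scheme.
        let mut parts = rest.splitn(3, '/');
        let server = parts.next()?.to_lowercase();
        let org = parts.next()?.to_string();
        let repo = parts.next()?.trim_end_matches(".git").to_string();
        return Some((server, org, repo));
    }
    // SSH: git@server:org/repo.git — split on '@', then ':', then '/'.
    let (_user, rest) = url.split_once('@')?;
    let (server, path) = rest.split_once(':')?;
    let (org, repo) = path.split_once('/')?;
    Some((
        server.to_lowercase(),
        org.to_string(),
        repo.trim_end_matches(".git").to_string(),
    ))
}

fn main() {
    assert_eq!(
        parse_url("git@github.com:braincraftio/k9.git"),
        Some(("github.com".to_string(), "braincraftio".to_string(), "k9".to_string()))
    );
    // Server names are lowercased during normalization.
    assert_eq!(
        parse_url("https://GITHUB.COM/scopecreep-zip/open-sesame"),
        Some(("github.com".to_string(), "scopecreep-zip".to_string(), "open-sesame".to_string()))
    );
}
```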

workspace.git Format

URLs where the repo component is workspace (or workspace.git) are treated as org-level workspace repositories. These represent a monorepo pattern where the org directory itself is a git repository containing sibling project repos. The canonical path stops at the org level:

https://github.com/braincraftio/workspace.git
  -> /workspace/usrbinkat/github.com/braincraftio/

The CloneTarget enum distinguishes Regular(PathBuf) from WorkspaceGit(PathBuf). Cloning a workspace.git into an existing org directory that already contains sibling repos triggers a special initialization flow: git init, git remote add origin, git fetch origin, then git checkout -f origin/HEAD -B main.

Normalization and Validation

  • Server names are lowercased (GITHUB.COM becomes github.com).
  • .git suffixes are stripped from repo names.
  • Insecure http:// URLs log a tracing warning about cleartext credential transmission but are not rejected.

Component validation (validate_component) rejects:

| Condition | Rejection reason |
| --- | --- |
| Empty component | "{label} component is empty" |
| Leading . | Prevents collision with .git, .ssh, .config directories. |
| Contains .. | Path traversal attack. |
| Contains / or \ | Path separator embedded in component. |
| Contains null byte | Null byte injection. |
| Exceeds 255 bytes | Filesystem component length limit (ext4, btrfs). |
| Leading/trailing whitespace | Filesystem ambiguity. |
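The rejection rules amount to a short sequence of byte checks. This is a hypothetical approximation of validate_component; the real function and its exact error strings live in sesame-workspace/src/convention.rs:

```rust
/// Sketch of per-component validation following the rules above.
/// Error messages other than the "empty" case are approximations.
fn validate_component(label: &str, value: &str) -> Result<(), String> {
    if value.is_empty() {
        return Err(format!("{label} component is empty"));
    }
    if value.starts_with('.') || value.contains("..") {
        return Err(format!("{label} component risks path traversal"));
    }
    if value.contains('/') || value.contains('\\') || value.contains('\0') {
        return Err(format!("{label} component embeds a separator or null byte"));
    }
    if value.len() > 255 {
        return Err(format!("{label} component exceeds 255 bytes"));
    }
    if value != value.trim() {
        return Err(format!("{label} component has leading/trailing whitespace"));
    }
    Ok(())
}

fn main() {
    assert!(validate_component("repo", "open-sesame").is_ok());
    assert!(validate_component("repo", ".git").is_err());
    assert!(validate_component("org", "a/../b").is_err());
    assert!(validate_component("server", " github.com").is_err());
}
```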

Git-Aware Discovery

is_git_repo

The git::is_git_repo function in sesame-workspace/src/git.rs checks for the existence of a .git entry (directory or file) at the given path. It does not shell out to git.

Remote URL Extraction

git::remote_url runs git -C {path} remote get-url origin via std::process::Command with explicit .arg() calls. Returns Ok(None) if the path lacks a .git entry or has no origin remote. Returns Ok(Some(url)) on success.

Additional Git Operations

The git module provides:

  • current_branch(path): runs git rev-parse --abbrev-ref HEAD.
  • is_clean(path): runs git status --porcelain and checks for empty output.
  • clone_repo(url, target, depth): clones with optional --depth and -- separator before URL/path arguments.

All commands use explicit .arg() calls. The module-level documentation states: “NEVER use format!() to build command strings. NEVER use shell interpolation.”

Workspace Discovery

discover::discover_workspaces in sesame-workspace/src/discover.rs walks the directory tree at {root}/{user}/ to find all git repositories. The walk follows the convention depth structure:

  1. Server level: enumerate directories under {root}/{user}/.
  2. Org level: enumerate directories under each server. If an org directory contains a .git entry, it is recorded as a workspace.git discovery.
  3. Repo level: enumerate directories under each org. Directories with .git entries are recorded as regular repositories.

Security properties of the walk:

  • Symlinks skipped: entry.file_type()?.is_symlink() causes the entry to be skipped at every level. This prevents symlink loops and TOCTOU traversal attacks.
  • Permission denied: silently skipped (ErrorKind::PermissionDenied returns Ok(())).
  • .git directories: explicitly skipped as traversal targets (they are detected but not descended into).

Results are sorted by path. Each DiscoveredWorkspace includes:

  • path: filesystem path to the repository root.
  • convention: parsed WorkspaceConvention components (server, org, repo).
  • remote_url: from git remote get-url origin, if available.
  • linked_profile: resolved from workspace config links, if configured.
  • is_workspace_git: true for org-level workspace.git repositories.

Workspace Configuration

workspaces.toml

The user-level workspace configuration is stored at ~/.config/pds/workspaces.toml. The schema is defined by WorkspaceConfig in core-config/src/schema_workspace.rs:

[settings]
root = "/workspace"
user = "usrbinkat"
default_ssh = true

[links]
"/workspace/usrbinkat/github.com/org" = "personal"
"/workspace/usrbinkat/github.com/org/k9" = "work"

Settings fields:

| Field | Type | Default | Description |
| --- | --- | --- | --- |
| root | PathBuf | $SESAME_WORKSPACE_ROOT or /workspace | Root directory for all workspaces. |
| user | String | $USER or "user" | Username for path construction. |
| default_ssh | bool | true | Prefer SSH URLs when cloning. |

Links section: a BTreeMap<String, String> mapping canonical paths to profile names. More specific paths override less specific ones (longest prefix wins).

resolve_workspace_profile in sesame-workspace/src/config.rs resolves a filesystem path to a profile name using two strategies:

  1. Exact match: the path matches a link key exactly.
  2. Longest prefix match: the longest link key that is a prefix of the path wins. Path boundary enforcement prevents /org from matching /organic – the link path must match exactly or be followed by /.
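The boundary-enforced prefix lookup can be sketched in a few lines. The function name and signature below are hypothetical; the real resolve_workspace_profile lives in sesame-workspace/src/config.rs:

```rust
use std::collections::BTreeMap;

/// Sketch of the two-strategy lookup described above: exact match,
/// then longest prefix with a '/' boundary check.
fn resolve_profile<'a>(links: &'a BTreeMap<String, String>, path: &str) -> Option<&'a str> {
    // 1. Exact match wins immediately.
    if let Some(p) = links.get(path) {
        return Some(p.as_str());
    }
    // 2. Longest prefix match with path-boundary enforcement:
    //    "/org" must not match "/organic", so the byte after the
    //    prefix must be '/'.
    links
        .iter()
        .filter(|(k, _)| {
            path.starts_with(k.as_str()) && path.as_bytes().get(k.len()) == Some(&b'/')
        })
        .max_by_key(|(k, _)| k.len())
        .map(|(_, v)| v.as_str())
}

fn main() {
    let mut links = BTreeMap::new();
    links.insert("/workspace/u/github.com/org".to_string(), "personal".to_string());
    links.insert("/workspace/u/github.com/org/k9".to_string(), "work".to_string());

    // Exact match.
    assert_eq!(resolve_profile(&links, "/workspace/u/github.com/org/k9"), Some("work"));
    // Longest prefix wins for paths below k9.
    assert_eq!(resolve_profile(&links, "/workspace/u/github.com/org/k9/src"), Some("work"));
    // Boundary enforcement: /org does not match /organic.
    assert_eq!(resolve_profile(&links, "/workspace/u/github.com/organic"), None);
}
```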

.sesame.toml (Local Config)

Workspace-level and repo-level configuration files (.sesame.toml) provide per-directory overrides. The schema is LocalSesameConfig in core-config/src/schema_workspace.rs:

# /workspace/usrbinkat/github.com/org/.sesame.toml
profile = "work"
secret_prefix = "MYAPP"
tags = ["dev-rust"]

[env]
RUST_LOG = "debug"

| Field | Type | Description |
| --- | --- | --- |
| profile | Option<String> | Default trust profile for this context. |
| env | BTreeMap<String, String> | Non-secret environment variables to inject. |
| tags | Vec<String> | Launch profile tags to apply by default. |
| secret_prefix | Option<String> | Env var prefix for secret injection (e.g., "MYAPP" causes api-key to become MYAPP_API_KEY). |

Multi-Layer Config Precedence

resolve_effective_config in sesame-workspace/src/config.rs merges configuration from all layers. Precedence (highest to lowest):

  1. Repo .sesame.toml ({path}/.sesame.toml)
  2. Workspace .sesame.toml ({root}/{user}/{server}/{org}/.sesame.toml)
  3. User config links (workspaces.toml [links] section)

Merge semantics per field:

  • profile: highest-priority layer wins outright.
  • env: all layers are merged into a single BTreeMap. Higher-priority keys override lower-priority ones; keys unique to lower layers are preserved.
  • tags: all layers’ tags are concatenated (workspace tags first, then repo tags).
  • secret_prefix: highest-priority layer wins outright.

The ConfigProvenance struct tracks which layer determined each value ("user config link", "workspace .sesame.toml", or "repo .sesame.toml").
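The per-field merge semantics can be sketched by folding the layers in ascending priority order (lowest first). The Layer struct is a hypothetical stand-in for the real schema types consumed by resolve_effective_config:

```rust
use std::collections::BTreeMap;

// Hypothetical simplified layer; the real types live in
// core-config/src/schema_workspace.rs.
#[derive(Default, Clone)]
struct Layer {
    profile: Option<String>,
    env: BTreeMap<String, String>,
    tags: Vec<String>,
}

/// Merge layers ordered lowest priority first
/// (user config link, workspace .sesame.toml, repo .sesame.toml).
fn merge(layers: &[Layer]) -> Layer {
    let mut out = Layer::default();
    for layer in layers {
        // profile: highest-priority layer that sets it wins outright.
        if layer.profile.is_some() {
            out.profile = layer.profile.clone();
        }
        // env: higher-priority keys override, unique lower keys survive.
        out.env.extend(layer.env.clone());
        // tags: concatenated, workspace tags before repo tags.
        out.tags.extend(layer.tags.clone());
    }
    out
}

fn main() {
    let workspace = Layer {
        profile: Some("work".to_string()),
        env: BTreeMap::from([
            ("RUST_LOG".to_string(), "info".to_string()),
            ("CI".to_string(), "1".to_string()),
        ]),
        tags: vec!["dev-rust".to_string()],
    };
    let repo = Layer {
        profile: None,
        env: BTreeMap::from([("RUST_LOG".to_string(), "debug".to_string())]),
        tags: vec!["k9".to_string()],
    };
    let eff = merge(&[workspace, repo]);
    assert_eq!(eff.profile.as_deref(), Some("work")); // repo did not override
    assert_eq!(eff.env["RUST_LOG"], "debug");         // repo overrides workspace
    assert_eq!(eff.env["CI"], "1");                   // unique lower key survives
    assert_eq!(eff.tags, vec!["dev-rust".to_string(), "k9".to_string()]);
}
```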

Platform-Specific Root Resolution

The workspace root is resolved in sesame-workspace/src/config.rs (resolve_root) with this priority:

  1. SESAME_WORKSPACE_ROOT environment variable.
  2. config.settings.root from workspaces.toml.
  3. Default: /workspace.

The default WorkspaceSettings reads SESAME_WORKSPACE_ROOT at construction time, so the env var takes effect even without an explicit workspaces.toml. The username defaults to the USER environment variable, falling back to the string "user".

Shell Injection Prevention

All git operations in sesame-workspace/src/git.rs use std::process::Command with explicit .arg() calls. The -- separator is used before URL and path arguments in git clone to prevent argument injection (a URL starting with - would otherwise be interpreted as a flag). No temporary files are created. No secret material is written to disk.
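The defense can be demonstrated without spawning anything: build the command with explicit .arg() calls and inspect the resulting argv. This is an illustrative sketch, not the clone_repo implementation:

```rust
use std::ffi::OsStr;
use std::process::Command;

fn main() {
    // A hostile "URL" that would be parsed as a git flag if it ever
    // appeared before the "--" separator.
    let url = "--upload-pack=/tmp/evil";

    // Explicit .arg() calls: each value is exactly one argv entry,
    // with no shell involved. The command is built but never run.
    let mut cmd = Command::new("git");
    cmd.arg("clone").arg("--depth").arg("1").arg("--").arg(url).arg("/tmp/target");

    let args: Vec<&OsStr> = cmd.get_args().collect();
    // Everything after "--" is positional; git cannot interpret the
    // hostile value as an option.
    let sep = args.iter().position(|a| *a == OsStr::new("--")).unwrap();
    assert_eq!(args[sep + 1], OsStr::new(url));
}
```

Because the arguments never pass through a shell, there is no string to interpolate into, which is exactly what the module documentation's "NEVER use format!() to build command strings" rule enforces.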

Cryptographic Agility

Open Sesame implements config-driven cryptographic algorithm selection across five independent axes: key derivation (KDF), hierarchical key derivation (HKDF), Noise IPC transport cipher, Noise IPC transport hash, and audit log chain hash. Algorithm choices are declared in the [crypto] section of config.toml and dispatched at runtime through typed enum matching. No algorithm is hardcoded at call sites.

Configuration Schema

The [crypto] section of config.toml maps to CryptoConfigToml (core-config/src/schema_crypto.rs:14), a string-based TOML representation with six fields:

[crypto]
kdf = "argon2id"
hkdf = "blake3"
noise_cipher = "chacha-poly"
noise_hash = "blake2s"
audit_hash = "blake3"
minimum_peer_profile = "leading-edge"

These defaults are defined in the Default implementation (schema_crypto.rs:30-38). At load time, CryptoConfigToml::to_typed() (schema_crypto.rs:48) converts the string values to validated enum variants in core_types::CryptoConfig (core-types/src/crypto.rs:82). Unrecognized algorithm names produce a core_types::Error::Config error, preventing the daemon from starting with an invalid configuration.
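The string-to-enum conversion is a plain exhaustive match per axis. A minimal sketch for the kdf field, with names approximating core_types (the real to_typed() covers all six fields and returns core_types::Error::Config):

```rust
// Hypothetical approximation of one axis of CryptoConfigToml::to_typed().
#[derive(Debug, PartialEq)]
enum KdfAlgorithm {
    Argon2id,
    Pbkdf2Sha256,
}

fn parse_kdf(s: &str) -> Result<KdfAlgorithm, String> {
    match s {
        "argon2id" => Ok(KdfAlgorithm::Argon2id),
        "pbkdf2-sha256" => Ok(KdfAlgorithm::Pbkdf2Sha256),
        // An unrecognized name is a hard config error: the daemon
        // refuses to start rather than falling back silently.
        other => Err(format!("unknown kdf algorithm: {other}")),
    }
}

fn main() {
    assert_eq!(parse_kdf("argon2id"), Ok(KdfAlgorithm::Argon2id));
    assert_eq!(parse_kdf("pbkdf2-sha256"), Ok(KdfAlgorithm::Pbkdf2Sha256));
    assert!(parse_kdf("md5").is_err());
}
```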

Algorithm Axes

KDF: Password to Master Key

The KDF converts a user password and 16-byte salt into a 32-byte master key. Two algorithms are available, selected by the kdf config field and dispatched through derive_key_kdf() (core-crypto/src/kdf.rs:60-69).

argon2id (default, KdfAlgorithm::Argon2id):

  • Algorithm: Argon2id (hybrid mode, resists both side-channel and GPU attacks)
  • Memory cost: 19,456 KiB (19 MiB) (kdf.rs:28)
  • Time cost: 2 iterations (kdf.rs:29)
  • Parallelism: 1 lane (kdf.rs:30)
  • Output: 32 bytes (kdf.rs:31)
  • Version: 0x13 (kdf.rs:35)
  • Parameters follow OWASP minimum recommendations (kdf.rs:14-17)
  • Implementation: argon2 crate with Argon2::new(Algorithm::Argon2id, Version::V0x13, params) (kdf.rs:35)

pbkdf2-sha256 (KdfAlgorithm::Pbkdf2Sha256):

  • Algorithm: PBKDF2-HMAC-SHA256
  • Iterations: 600,000 (kdf.rs:51)
  • Output: 32 bytes
  • Parameters follow OWASP recommendations for PBKDF2-SHA256 (kdf.rs:47)
  • Implementation: pbkdf2 crate with Hmac<Sha256> (kdf.rs:51)

Both functions return SecureBytes — mlock’d, zeroize-on-drop memory backed by core_memory::ProtectedAlloc. Intermediate stack arrays are zeroized via zeroize::Zeroizing before the function returns (kdf.rs:37, kdf.rs:50).

HKDF: Master Key to Per-Purpose Keys

The HKDF layer derives per-profile, per-purpose 32-byte keys from the master key. Two algorithms are available, dispatched through the *_with_algorithm() family of functions in core-crypto/src/hkdf.rs.

blake3 (default, HkdfAlgorithm::Blake3):

  • Uses BLAKE3’s built-in derive_key mode, which provides extract-then-expand semantics equivalent to HKDF (hkdf.rs:1-5)
  • Context string format: "pds v2 <purpose> <profile_id>" (hkdf.rs:39-41)
  • Domain separation is achieved via BLAKE3’s context string parameter, which internally derives a context key from the string and uses it to key the hash of the input keying material (hkdf.rs:27-33)
  • Implementation: blake3::derive_key(context, ikm) (hkdf.rs:31)
  • Performance: 5-14x faster than SHA-256 with hardware acceleration via AVX2/AVX512/NEON (hkdf.rs:5)

hkdf-sha256 (HkdfAlgorithm::HkdfSha256):

  • Standard HKDF extract-then-expand per RFC 5869
  • Salt: None (the IKM serves as both input keying material and implicit salt) (hkdf.rs:121)
  • Info: the context string bytes, providing domain separation (hkdf.rs:123)
  • Output: 32 bytes (hkdf.rs:122)
  • Implementation: Hkdf::<Sha256>::new(None, ikm) followed by hk.expand(context.as_bytes(), &mut key) (hkdf.rs:121-124)
  • Intermediate output array is zeroized before return (hkdf.rs:126)

The key hierarchy derived through HKDF (hkdf.rs:7-14):

User password -> Argon2id -> Master Key (32 bytes)
  -> HKDF "vault-key"          -> per-profile vault key (encrypts SQLCipher DB)
  -> HKDF "clipboard-key"      -> per-profile clipboard key (zeroed on profile deactivation)
  -> HKDF "ipc-auth-token"     -> per-profile IPC authentication token
  -> HKDF "ipc-encryption-key" -> per-profile IPC field encryption key

Each purpose has a dedicated public function (derive_vault_key, derive_clipboard_key, derive_ipc_auth_token, derive_ipc_encryption_key) with a corresponding *_with_algorithm() variant that accepts an HkdfAlgorithm parameter. The algorithm-dispatching variants use a match statement to route to the correct implementation (hkdf.rs:137-141).

A key-encrypting-key (KEK) for platform keyring storage is derived separately via derive_kek() (hkdf.rs:91-101). The KEK uses the hardcoded context string "pds v2 key-encrypting-key" and concatenates password + salt as the IKM, ensuring cryptographic independence from the Argon2id master key derivation path. The concatenated IKM is zeroized after use (hkdf.rs:99).

An extensibility function derive_key() (hkdf.rs:107-110) accepts an arbitrary purpose string, allowing new key purposes to be added without modifying the module. Callers must ensure purpose strings are globally unique.

Noise Cipher: IPC Transport Encryption

The Noise IK protocol used for inter-daemon IPC communication supports two cipher selections via the noise_cipher config field:

chacha-poly (default, NoiseCipher::ChaChaPoly):

  • ChaCha20-Poly1305 authenticated encryption
  • Constant-time on all architectures without hardware AES
  • The leading-edge default for environments where AES-NI is not guaranteed

aes-gcm (NoiseCipher::AesGcm):

  • AES-256-GCM authenticated encryption
  • Optimal on processors with AES-NI hardware acceleration
  • Required for NIST/FedRAMP compliance

The cipher selection is read from config and passed to the Noise protocol builder at IPC bus initialization. The NoiseCipher enum is defined in core-types/src/crypto.rs:31-37.

Noise Hash: IPC Transport Hash

The Noise protocol hash function is selected via the noise_hash config field:

blake2s (default, NoiseHash::Blake2s):

  • BLAKE2s (256-bit output, optimized for 32-bit and 64-bit platforms)
  • Faster than SHA-256 on platforms without SHA extensions
  • The leading-edge default

sha256 (NoiseHash::Sha256):

  • SHA-256
  • Required for NIST/FedRAMP compliance
  • Optimal on processors with SHA-NI hardware extensions

The NoiseHash enum is defined in core-types/src/crypto.rs:43-49.

Audit Hash: Audit Log Chain Integrity

The audit log uses a hash chain where each entry’s hash covers the previous entry’s hash, providing tamper evidence. The hash function is selected via the audit_hash config field:

blake3 (default, AuditHash::Blake3):

  • BLAKE3 (256-bit output)
  • Hardware-accelerated via AVX2/AVX512/NEON where available
  • The leading-edge default

sha256 (AuditHash::Sha256):

  • SHA-256
  • Required for NIST/FedRAMP compliance

The AuditHash enum is defined in core-types/src/crypto.rs:55-61.

At-Rest Encryption

Vault data at rest is encrypted with AES-256-GCM via the EncryptionKey type (core-crypto/src/encryption.rs:13). This cipher is not configurable — it is always AES-256-GCM regardless of the [crypto] config section. The implementation uses the RustCrypto aes-gcm crate (encryption.rs:5-6).

  • Key size: 32 bytes (AES-256) (encryption.rs:24)
  • Nonce size: 12 bytes (encryption.rs:42)
  • Output: ciphertext with appended 16-byte authentication tag (encryption.rs:37)
  • Decrypted plaintext is returned as SecureBytes (mlock’d, zeroize-on-drop) (encryption.rs:61)
  • The Debug implementation redacts key material, printing "EncryptionKey([REDACTED])" (encryption.rs:66-68)

Nonce reuse catastrophically breaks both confidentiality and authenticity. Callers are responsible for ensuring nonce uniqueness per encryption with the same key (encryption.rs:36-37).

Pre-Defined Crypto Profiles

The minimum_peer_profile config field selects a pre-defined algorithm profile via the CryptoProfile enum (core-types/src/crypto.rs:67-75):

leading-edge (default, CryptoProfile::LeadingEdge):

| Axis | Algorithm |
|---|---|
| KDF | Argon2id (19 MiB, 2 iterations) |
| HKDF | BLAKE3 |
| Noise cipher | ChaCha20-Poly1305 |
| Noise hash | BLAKE2s |
| Audit hash | BLAKE3 |

This profile uses modern algorithms that prioritize security margin and performance on commodity hardware without requiring specific hardware acceleration.

governance-compatible (CryptoProfile::GovernanceCompatible):

| Axis | Algorithm |
|---|---|
| KDF | PBKDF2-SHA256 (600K iterations) |
| HKDF | HKDF-SHA256 |
| Noise cipher | AES-256-GCM |
| Noise hash | SHA-256 |
| Audit hash | SHA-256 |

This profile uses exclusively NIST-approved algorithms suitable for environments subject to FedRAMP, FIPS 140-3, or equivalent governance frameworks.

custom (CryptoProfile::Custom):

Individual algorithm selection via the per-axis config fields. Allows mixing algorithms across profiles (e.g., Argon2id KDF with AES-GCM Noise cipher).

The minimum_peer_profile field specifies the minimum cryptographic profile that the local node will accept from federation peers. A node configured with "leading-edge" will reject connections from peers advertising a weaker profile. This field is defined in CryptoConfig as minimum_peer_profile: CryptoProfile (core-types/src/crypto.rs:89).
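
A hypothetical [crypto] section illustrating the fields named in this chapter (the noise_cipher, noise_hash, audit_hash, and minimum_peer_profile field names come from the text; the string values are the lowercase algorithm names listed above):

```toml
[crypto]
noise_cipher         = "chacha-poly"      # or "aes-gcm"
noise_hash           = "blake2s"          # or "sha256"
audit_hash           = "blake3"           # or "sha256"
minimum_peer_profile = "leading-edge"     # or "governance-compatible"
```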

Config-to-Runtime Dispatch

Algorithm selection flows from config to runtime through a three-stage pipeline:

  1. TOML parsing: The [crypto] section is deserialized into CryptoConfigToml (core-config/src/schema_crypto.rs:14), which stores all algorithm names as String values.

  2. Validation: CryptoConfigToml::to_typed() (schema_crypto.rs:48) converts each string to a typed enum variant via match statements. Unrecognized strings produce an error. The result is a core_types::CryptoConfig struct with typed fields (core-types/src/crypto.rs:82-90).

  3. Dispatch: Runtime code calls algorithm-dispatching functions that accept the typed enum and route to the correct implementation. For example, derive_key_kdf() (core-crypto/src/kdf.rs:60-69) matches on KdfAlgorithm:

    pub fn derive_key_kdf(
        algorithm: &KdfAlgorithm,
        password: &[u8],
        salt: &[u8; 16],
    ) -> core_types::Result<SecureBytes> {
        match algorithm {
            KdfAlgorithm::Argon2id => derive_key_argon2(password, salt),
            KdfAlgorithm::Pbkdf2Sha256 => derive_key_pbkdf2(password, salt),
        }
    }

    Similarly, derive_vault_key_with_algorithm() (core-crypto/src/hkdf.rs:131-141) matches on HkdfAlgorithm:

    pub fn derive_vault_key_with_algorithm(
        algorithm: &HkdfAlgorithm,
        master_key: &[u8],
        profile_id: &str,
    ) -> SecureBytes {
        let ctx = build_context("vault-key", profile_id);
        match algorithm {
            HkdfAlgorithm::Blake3 => derive_32(&ctx, master_key),
            HkdfAlgorithm::HkdfSha256 => derive_32_hkdf_sha256(&ctx, master_key),
        }
    }

This pattern ensures that adding a new algorithm requires three changes: add a variant to the core_types enum, add a match arm in the TOML validator, and add a match arm in the dispatch function. No call sites need modification.
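
The validation stage (stage 2) follows the same match-driven shape. A minimal self-contained sketch of the string-to-enum conversion, with names that mirror the text but are illustrative rather than the crate's exact definitions:

```rust
// Sketch of stage 2: a String from CryptoConfigToml is converted to a typed
// enum variant; unrecognized strings produce an error instead of a fallback.
#[derive(Debug, PartialEq)]
enum KdfAlgorithm {
    Argon2id,
    Pbkdf2Sha256,
}

fn parse_kdf(s: &str) -> Result<KdfAlgorithm, String> {
    match s {
        "argon2id" => Ok(KdfAlgorithm::Argon2id),
        "pbkdf2-sha256" => Ok(KdfAlgorithm::Pbkdf2Sha256),
        other => Err(format!("unrecognized kdf algorithm: {other:?}")),
    }
}

fn main() {
    assert_eq!(parse_kdf("argon2id"), Ok(KdfAlgorithm::Argon2id));
    // Fail-closed: typos and unknown algorithms are rejected at config load.
    assert!(parse_kdf("md5").is_err());
}
```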

FIPS Considerations

Open Sesame does not claim FIPS 140-3 validation. The cryptographic implementations are provided by RustCrypto crates (argon2, pbkdf2, aes-gcm, blake3, hkdf, sha2) which have not undergone CMVP certification.

For deployments subject to FIPS requirements, the governance-compatible profile restricts algorithm selection to NIST-approved primitives (PBKDF2-SHA256, HKDF-SHA256, AES-256-GCM, SHA-256). This satisfies the algorithm selection requirement but does not address the validated module requirement. Organizations requiring a FIPS-validated cryptographic module would need to replace the RustCrypto backends with a certified implementation (e.g., AWS-LC, BoringCrypto) and re-validate.

The minimum_peer_profile mechanism provides a policy enforcement point: setting it to "governance-compatible" ensures that no peer in a federated deployment can negotiate a session using non-NIST algorithms, even if the local node supports them.

Memory Protection

All key material derived through the KDF and HKDF paths is returned as SecureBytes (core-crypto/src/lib.rs:16), which is backed by core_memory::ProtectedAlloc. This provides:

  • Page-aligned allocation with guard pages
  • mlock to prevent swapping to disk
  • Volatile zeroization on drop
  • Canary bytes for buffer overflow detection

The init_secure_memory() function (core-crypto/src/lib.rs:29-31) must be called before the seccomp sandbox is applied, because it probes memfd_secret availability. After seccomp is active, memfd_secret remains in the allowlist for all sandboxed daemons.

Vault Engine

The vault engine provides encrypted per-profile secret storage backed by SQLCipher databases, with a multi-layer key hierarchy derived via BLAKE3.

SQLCipher Configuration

Each vault database is a SQLCipher-encrypted SQLite file. The following PRAGMA directives are applied in SqlCipherStore::open() (core-secrets/src/sqlcipher.rs) before any table access:

| Parameter | Value | Purpose |
|---|---|---|
| cipher_page_size | 4096 | Page-level encryption granularity |
| cipher_hmac_algorithm | HMAC_SHA256 | Per-page authentication |
| cipher_kdf_algorithm | PBKDF2_HMAC_SHA256 | Internal SQLCipher KDF for page keys |
| kdf_iter | 256000 | PBKDF2 iteration count |
| journal_mode | WAL | Write-ahead logging for crash safety |

SQLCipher encrypts every database page with AES-256-CBC and authenticates each page with HMAC-SHA256. The page encryption key is supplied via PRAGMA key as a raw 32-byte hex-encoded value. After the key pragma executes, both the hex string and the SQL statement are zeroized in place via zeroize::Zeroize before any further operations proceed.

Key Hierarchy

The key derivation chain from user password to on-disk encryption proceeds through three stages:

User password + per-profile 16-byte random salt
    --> Argon2id
    --> Master Key (32 bytes, held in SecureBytes / mlock'd memory)
        --> BLAKE3 derive_key(context="pds v2 vault-key {profile_id}")
        --> Vault Key (32 bytes) -- used as SQLCipher PRAGMA key
            --> BLAKE3 derive_key(context="pds v2 entry-encryption-key")
            --> Entry Key (32 bytes) -- used for per-entry AES-256-GCM

BLAKE3’s derive_key mode accepts a context string that provides domain separation. The vault key and entry key share the same vault key as input keying material but use different context strings, making them cryptographically independent. The vault_key_derivation_domain_separation test in core-secrets/src/sqlcipher.rs verifies this property.

The full set of derived keys sharing the same master key (defined in core-crypto/src/hkdf.rs):

| Context | Purpose |
|---|---|
| pds v2 vault-key {profile_id} | SQLCipher page encryption |
| pds v2 entry-encryption-key | Per-entry AES-256-GCM (derived from vault key, not master key) |
| pds v2 clipboard-key {profile_id} | Clipboard encryption |
| pds v2 ipc-auth-token {profile_id} | IPC authentication |
| pds v2 ipc-encryption-key {profile_id} | Per-field IPC encryption (feature-gated) |
| pds v2 key-encrypting-key | KEK for platform keyring storage |

An HKDF-SHA256 alternative is available via derive_vault_key_with_algorithm(), selectable per the HkdfAlgorithm enum. BLAKE3 is the default. The blake3_and_hkdf_sha256_produce_different_keys test confirms the two algorithms produce different outputs for the same inputs.

Double Encryption

Each secret value receives two independent layers of encryption:

  1. Page-level: SQLCipher encrypts the entire database page (key names, values, metadata) using the vault key via AES-256-CBC + HMAC-SHA256.
  2. Entry-level: Each value is individually encrypted with AES-256-GCM using the entry key before being written to the value column. The wire format stored in the database is [12-byte random nonce][ciphertext + 16-byte GCM tag].

Every encryption operation generates a fresh 12-byte random nonce via getrandom. The minimum wire length for decryption is 28 bytes (12-byte nonce + 16-byte GCM tag); shorter values are rejected with an error. The encrypt_same_value_produces_different_ciphertext test verifies nonce uniqueness across 100 consecutive encryptions of identical plaintext.

The db_file_contains_no_plaintext and db_file_contains_no_key_names_in_plaintext tests read raw database file bytes and assert that neither secret values nor key names appear anywhere in the on-disk file.

Database Schema

The schema is created via an idempotent CREATE TABLE IF NOT EXISTS statement during SqlCipherStore::open():

CREATE TABLE IF NOT EXISTS secrets (
    key        TEXT PRIMARY KEY,
    value      BLOB NOT NULL,
    created_at INTEGER NOT NULL,
    updated_at INTEGER NOT NULL
);

Timestamps are stored as Unix epoch seconds via SystemTime::now(). The schema_migration_idempotent test verifies that opening a database multiple times does not fail or corrupt existing data. After schema creation, SqlCipherStore::open() executes SELECT count(*) FROM sqlite_master to verify the key is correct; a wrong key causes this statement to fail with “wrong key or corrupt database”.

Per-Profile Vault Isolation

Each profile receives a separate database file at {config_dir}/vaults/{profile_name}.db and a separate 16-byte random salt file at {config_dir}/vaults/{profile_name}.salt. The salt is generated by getrandom on first unlock and persisted to disk by generate_profile_salt() in daemon-secrets/src/unlock.rs.

Isolation is cryptographic, not namespace-based. core_crypto::derive_vault_key(master_key, profile_id) uses the context string "pds v2 vault-key {profile_id}", producing a different 32-byte key for each profile ID even when the master key is the same. Attempting to open a vault encrypted with profile A’s key using profile B’s key fails at the SELECT count(*) FROM sqlite_master verification step.

Tests in core-secrets/src/sqlcipher.rs that verify isolation:

  • cross_profile_keys_are_independent – different profile IDs yield different keys, and opening a database with the wrong profile’s key fails.
  • cross_profile_secret_access_returns_error – reading a key from the wrong profile’s vault returns NotFound.
  • different_vault_keys_cannot_access – a database opened with key A rejects key B.

Vault Lifecycle

Create

A vault is created implicitly on first access after a profile is unlocked. VaultState::vault_for() in daemon-secrets/src/vault.rs creates the {config_dir}/vaults/ directory if needed, then calls SqlCipherStore::open() inside tokio::task::spawn_blocking with a 10-second timeout to avoid blocking the async event loop during synchronous SQLCipher I/O. The timeout is a defensive measure: if the blocking thread is killed (e.g., by seccomp SIGSYS), the JoinHandle would hang indefinitely without it. The opened store is wrapped in JitDelivery with the configured TTL (default 300 seconds, set via the --ttl CLI flag or PDS_SECRET_TTL environment variable).

Open

SqlCipherStore::open() performs the following steps in order:

  1. Validate that the vault key is exactly 32 bytes.
  2. Open the SQLite connection via Connection::open().
  3. Set the encryption key via PRAGMA key in raw hex mode, then zeroize the hex string and the SQL statement.
  4. Apply SQLCipher configuration PRAGMAs (cipher_page_size, HMAC algorithm, KDF algorithm, KDF iterations, WAL mode).
  5. Run the idempotent schema migration (CREATE TABLE IF NOT EXISTS).
  6. Verify the key by executing SELECT count(*) FROM sqlite_master.
  7. Derive the entry key via blake3::derive_key("pds v2 entry-encryption-key", vault_key), then zeroize the intermediate byte array.

Rekey (C-level Key Scrub)

SqlCipherStore::pragma_rekey_clear() issues PRAGMA rekey = '' to scrub SQLCipher’s internal C-level copy of the page encryption key from memory. This is defense-in-depth: the Rust-side entry_key: SecureBytes already zeroizes on drop, but this call ensures the C library’s internal buffer is also cleared. The method logs a warning on failure but does not panic, since the connection may already be in a broken state. The pragma_rekey_clear_does_not_remove_encryption test confirms this scrubs the in-memory key without removing on-disk encryption.

An AtomicBool (cleared) prevents redundant PRAGMA rekey calls in the Drop implementation.

Close

Vault closing occurs during profile deactivation (VaultState::deactivate_profile()) or locking (handle_lock_request()). The sequence is:

  1. Remove the profile from the active_profiles authorization set. This is the security-critical step and happens first, before any I/O.
  2. Flush the JIT cache via vault.flush().await. All SecureBytes entries are zeroized on drop when the HashMap is cleared.
  3. Call pragma_rekey_clear() to scrub the C-level key buffer.
  4. Drop the SqlCipherStore. The entry_key: SecureBytes zeroizes on drop, and Drop skips the redundant PRAGMA rekey because cleared is already set.
  5. Remove the master key from the master_keys map. SecureBytes zeroizes on drop.
  6. Remove any partial multi-factor unlock state for the profile.
  7. On Linux, delete the profile’s platform keyring entry via keyring_delete_profile().

On lock-all (no profile specified in LockRequest), the rate limiter state is also reset to a fresh SecretRateLimiter instance.

Secret Lifecycle

This page describes how secrets move through the system: from storage in an encrypted vault, through a JIT cache, across the IPC bus, and into a child process’s environment. It also covers key material lifecycle and the compliance testing framework.

Secret Storage Operations

The SecretsStore trait (core-secrets/src/store.rs) defines four operations that every storage backend must implement:

| Operation | Behavior |
|---|---|
| get(key) | Retrieve a secret by key. Returns an error if the key does not exist. |
| set(key, value) | Store a secret. Overwrites if the key already exists. Updates updated_at; sets created_at on first insert. |
| delete(key) | Delete a secret by key. Returns an error if the key does not exist. |
| list_keys() | List all key names in the store. Values are not returned (no bulk decryption). |

The list_keys method intentionally avoids returning values. Listing secrets does not trigger bulk decryption of every entry, limiting the window during which plaintext exists in memory.

Two implementations exist:

  • SqlCipherStore (core-secrets/src/sqlcipher.rs): Production backend. Each set encrypts the value with per-entry AES-256-GCM before writing to the database. Each get decrypts after reading. The Mutex<Connection> serializes all database access.
  • InMemoryStore (core-secrets/src/store.rs): Testing backend. Holds secrets in a HashMap<String, SecureBytes> protected by a tokio::sync::RwLock. Values are stored as SecureBytes (mlock’d, zeroize-on-drop). Does not persist to disk.

JIT Cache

The JitDelivery<S> wrapper (core-secrets/src/jit.rs) adds a time-limited in-memory cache in front of any SecretsStore implementation. It exists to avoid repeated SQLCipher decryption for frequently accessed secrets.

Resolution

JitDelivery::resolve(key) checks the cache first. If a valid (non-expired) entry exists, the cached SecureBytes clone is returned without touching the underlying store. If the entry is missing or expired, the value is fetched from the store, cached, and returned.

Both the cache entry and the returned value are independent SecureBytes clones. Each independently zeroizes on drop.

TTL Expiry

Each cache entry records its fetched_at timestamp as a std::time::Instant. On the next resolve call, if fetched_at.elapsed() >= ttl, the cached value is considered expired and a fresh fetch occurs. The default TTL is 300 seconds, configurable via the daemon’s --ttl flag or PDS_SECRET_TTL environment variable.

The ttl_expiry_refetches test verifies that after TTL expiry, the underlying store is re-queried and updated values are returned.
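
The expiry check can be sketched with std types only. The struct and field names below are illustrative, not the crate's actual definitions (which hold SecureBytes, not Vec<u8>):

```rust
use std::collections::HashMap;
use std::time::{Duration, Instant};

struct CacheEntry {
    value: Vec<u8>,
    fetched_at: Instant, // recorded at fetch time, compared against the TTL
}

struct JitCache {
    ttl: Duration,
    entries: HashMap<String, CacheEntry>,
}

impl JitCache {
    // Return the cached value if it is still fresh; otherwise fetch, cache,
    // and return the new value.
    fn resolve(&mut self, key: &str, fetch: impl Fn() -> Vec<u8>) -> Vec<u8> {
        match self.entries.get(key) {
            Some(e) if e.fetched_at.elapsed() < self.ttl => e.value.clone(),
            _ => {
                let value = fetch();
                self.entries.insert(
                    key.to_string(),
                    CacheEntry { value: value.clone(), fetched_at: Instant::now() },
                );
                value
            }
        }
    }
}

fn main() {
    let mut cache = JitCache { ttl: Duration::from_millis(50), entries: HashMap::new() };
    assert_eq!(cache.resolve("api-key", || b"v1".to_vec()), b"v1");
    // Within the TTL, the cached value is returned; the store is not queried.
    assert_eq!(cache.resolve("api-key", || b"v2".to_vec()), b"v1");
    std::thread::sleep(Duration::from_millis(60));
    // After expiry, the underlying store is re-queried.
    assert_eq!(cache.resolve("api-key", || b"v2".to_vec()), b"v2");
}
```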

Flush on Lock

JitDelivery::flush() clears the entire cache by calling cache.clear(). Because each value in the cache is a SecureBytes, dropping the HashMap entries triggers zeroization of all cached secret material. Flush is called during profile deactivation and locking, before the vault is closed and key material is destroyed.

The flush_clears_cache test verifies that after a flush, the next resolve call fetches fresh data from the underlying store.

Store Bypass

JitDelivery::store() provides direct access to the underlying SecretsStore, bypassing the cache. This is used for write operations (set, delete, list_keys) which should not interact with the read cache. After a set or delete, the daemon calls vault.flush().await to invalidate any stale cache entries.

Key Material Lifecycle

All key material in the secrets subsystem is held in SecureBytes (core-crypto), which provides:

  • mlock: The backing memory is locked to prevent swapping to disk. On Linux, this uses memfd_secret with guard pages when available.
  • Zeroize on drop: When a SecureBytes value is dropped, its backing memory is overwritten with zeros before deallocation. This is implemented via the zeroize crate’s Zeroize trait.
  • Clone independence: Cloning a SecureBytes value creates a new mlock’d allocation. Dropping the clone does not affect the original, and vice versa.

The lifecycle of key material through the system:

  1. Derivation: The master key is derived via Argon2id from the user’s password and a per-profile 16-byte salt (derive_master_key() in daemon-secrets/src/unlock.rs, which delegates to core_crypto::derive_key_argon2()). The result is a 32-byte SecureBytes value.
  2. Storage: The master key is stored in VaultState::master_keys (daemon-secrets/src/vault.rs), a HashMap<TrustProfileName, SecureBytes>.
  3. Derivation (vault key): On first vault access, core_crypto::derive_vault_key() derives a 32-byte vault key from the master key via BLAKE3. The intermediate stack array is wrapped in zeroize::Zeroizing and zeroized on scope exit.
  4. Use: The vault key is passed to SqlCipherStore::open(), which uses it for PRAGMA key and derives the entry key. The vault key is not retained by the store after open completes.
  5. Destruction: On lock or deactivation, the JIT cache is flushed (zeroizing cached secrets), pragma_rekey_clear() scrubs the C-level key buffer, the SqlCipherStore is dropped (zeroizing the entry key), and the master key is removed from the map (zeroizing on drop).

Field-Level IPC Encryption

When the ipc-field-encryption feature is enabled, secret values are encrypted with AES-256-GCM before being placed on the IPC bus, providing a second encryption layer on top of the Noise IK transport.

The per-profile IPC encryption key is derived via core_crypto::derive_ipc_encryption_key(master_key, profile_id) using the context string "pds v2 ipc-encryption-key {profile_id}". The wire format is [12-byte random nonce][AES-256-GCM ciphertext + tag].

This feature is gated behind ipc-field-encryption and disabled by default for the following reasons, documented in daemon-secrets/src/vault.rs:

  • The Noise IK transport is already the security boundary, matching the precedent set by ssh-agent, 1Password, Vault, and gpg-agent.
  • CLI clients lack the master key needed to decrypt per-field encrypted values.
  • The per-field key derives from the same master key that transits inside the Noise channel, so it is not an independent trust root.

When enabled, the encryption path in handle_secret_get (daemon-secrets/src/crud.rs) encrypts values before sending the SecretGetResponse, and the decryption path in handle_secret_set decrypts incoming values before writing to the vault. The decrypted intermediate Vec<u8> is explicitly zeroized after the store write completes.

Compliance Testing

The compliance_tests() function (core-secrets/src/compliance.rs) defines a portable test suite that every SecretsStore implementation must pass. The suite verifies:

| Test case | Assertion |
|---|---|
| Set and get | A stored value is retrievable with identical bytes. |
| Overwrite | Storing to an existing key replaces the value. |
| Get nonexistent | Retrieving a key that does not exist returns an error. |
| Delete | A deleted key is no longer retrievable. |
| Delete nonexistent | Deleting a key that does not exist returns an error. |
| List keys | All stored key names appear in the list. |
| Cleanup | After deleting all keys, the list is empty. |

The in_memory_store_passes_compliance test runs this suite against InMemoryStore. The SQLCipher backend has its own compliance tests in core-secrets/src/sqlcipher.rs that additionally verify encryption properties (no plaintext on disk, cross-profile isolation, nonce uniqueness).

Six-Gate Security Pipeline

Every secret CRUD operation passes through a six-gate security pipeline in daemon-secrets/src/crud.rs before the vault is accessed. The gates execute in order from cheapest to most expensive:

  1. Lock check: Rejects the request if no profiles are unlocked (master_keys is empty).
  2. Active profile check: Rejects if the requested profile is not in the active_profiles set.
  3. Identity check: Logs the requester’s verified_sender_name (stamped by the IPC bus server from the Noise IK registry). Expected requesters are daemon-secrets, daemon-launcher, or None (CLI relay).
  4. Rate limit check: Applies per-requester token bucket rate limiting.
  5. ACL check: Evaluates per-daemon per-key access control rules from config.
  5.5. Key validation: Validates the secret key name via core_types::validate_secret_key().
  6. Vault access: Opens (or retrieves) the vault and performs the requested operation.

Each gate that denies a request emits both a structured tracing log entry and a SecretOperationAudit IPC event (fire-and-forget to daemon-profile for persistent audit logging). The denial response is sent immediately and processing stops.

Access Control

The secrets daemon enforces per-daemon per-key access control over secret operations. ACL rules are defined in the configuration file and evaluated as pure functions over config state with no I/O or mutable state. Rate limiting provides a second layer of defense against enumeration attacks.

Per-Daemon Per-Key ACL

Access control is implemented in daemon-secrets/src/acl.rs as two pure functions: check_secret_access() for get/set/delete operations, and check_secret_list_access() for list operations.

Configuration

ACL rules are defined in the config file under [profiles.<name>.secrets.access]. Each entry maps a daemon name to a list of secret key names that daemon is permitted to access:

[profiles.work.secrets.access]
daemon-launcher = ["api-key", "db-password"]
daemon-wm = []

In this example, daemon-launcher can access api-key and db-password in the work profile. daemon-wm has an explicit empty list, which denies all access including listing.

Evaluation Rules for Get/Set/Delete

The check_secret_access() function evaluates the following rules in order. The first matching rule determines the outcome:

| Condition | Result | Rationale |
|---|---|---|
| Profile not in config, no ACL policy on any profile | Allow | Backward compatibility with pre-ACL deployments. |
| Profile not in config, ACL policy exists on any other profile | Deny | Fail-closed. An attacker must not bypass ACL by requesting a nonexistent profile. |
| Profile in config, empty access map | Allow | No ACL policy configured for this profile. |
| Unregistered client (verified_sender_name is None), ACL policy exists | Deny | Unregistered clients cannot be identity-verified. |
| Daemon name absent from access map | Allow | Backward-compatible default. Only daemons explicitly listed are restricted. |
| Daemon name present, key in allowed list | Allow | Explicit grant. |
| Daemon name present, key not in allowed list | Deny | Allowlist is strict. |
| Daemon name present, empty allowed list | Deny | Explicit deny-all. Empty list means no access, not unrestricted access. |
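
The daemon-level rules above reduce to a small pure function. A simplified sketch (one profile's access map only; the real function also handles the profile-level and cross-profile rules, and operates on the full config type):

```rust
use std::collections::HashMap;

// Illustrative allowlist evaluation: daemon name -> list of permitted keys.
fn check_secret_access(
    access: &HashMap<String, Vec<String>>,
    sender: Option<&str>, // verified_sender_name, stamped by the IPC bus
    key: &str,
) -> bool {
    if access.is_empty() {
        return true; // no ACL policy configured for this profile
    }
    let Some(daemon) = sender else {
        return false; // unregistered client with an active ACL policy: deny
    };
    match access.get(daemon) {
        None => true, // unlisted daemon: backward-compatible allow
        // Strict allowlist: an empty list denies everything.
        Some(keys) => keys.iter().any(|k| k == key),
    }
}

fn main() {
    let mut access = HashMap::new();
    access.insert("daemon-launcher".to_string(), vec!["api-key".to_string()]);
    access.insert("daemon-wm".to_string(), vec![]);
    assert!(check_secret_access(&access, Some("daemon-launcher"), "api-key")); // explicit grant
    assert!(!check_secret_access(&access, Some("daemon-launcher"), "other")); // strict allowlist
    assert!(!check_secret_access(&access, Some("daemon-wm"), "api-key")); // empty list: deny-all
    assert!(check_secret_access(&access, Some("daemon-profile"), "api-key")); // unlisted: allow
    assert!(!check_secret_access(&access, None, "api-key")); // unregistered: deny
}
```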

Evaluation Rules for List

The check_secret_list_access() function follows the same rules as get/set/delete with one difference at the daemon-level check:

| Condition | Result |
|---|---|
| Daemon name present, non-empty allowed list | Allow (has at least some access) |
| Daemon name present, empty allowed list | Deny (“no keys allowed” means “cannot even see what keys exist”) |

All other conditions match check_secret_access().

Test Coverage

The ACL module contains 15 tests (acl_001 through acl_015) that verify every branch of both functions. Each test is prefixed with a SECURITY INVARIANT comment documenting the property it protects.

Unregistered Client Handling

Client identity is determined by the verified_sender_name field on each IPC message. This field is stamped by the IPC bus server from the Noise IK static key registry – it is not self-declared by the client. The check_secret_requester() function in daemon-secrets/src/acl.rs logs an anomaly warning if a daemon other than daemon-secrets or daemon-launcher requests secrets, since those are the only expected requesters.

Unregistered clients (those with verified_sender_name set to None) are CLI relay connections that transit through daemon-profile with Open clearance. When any ACL policy is active, unregistered clients are denied access to both individual secrets and the key listing. This prevents bypass via unauthenticated connections.

Audit Logging

Every secret operation emits a structured audit log entry via the audit_secret_access() function, regardless of whether the operation succeeds or is denied. The log entry includes:

  • event_type: The operation (get, set, delete, list, unlock, lock).
  • requester: The DaemonId (UUID) of the requesting client.
  • profile: The target trust profile name.
  • key: The secret key name (or - for operations that do not target a specific key).
  • outcome: The result (success, denied-locked, denied-acl, rate-limited, not-found, etc.).

In addition to local tracing logs, each operation also emits a SecretOperationAudit IPC event that is published to the bus for persistent logging by daemon-profile. This event is fire-and-forget: delivery failure does not block or fail the secret operation. Both audit paths are required; the code comments explicitly state that neither should be removed on the assumption that the other is sufficient.

Rate Limiting

Rate limiting is implemented in daemon-secrets/src/rate_limit.rs using the governor crate’s in-memory GCRA (Generic Cell Rate Algorithm) token bucket.

Configuration

The rate limiter is configured with a fixed quota:

  • Sustained rate: 10 requests per second
  • Burst capacity: 20 requests

These values are hardcoded in SecretRateLimiter::new().

Per-Daemon Buckets

Each daemon receives an independent rate limit bucket, keyed on its verified_sender_name. Exhausting one daemon’s quota does not affect any other daemon’s ability to access secrets. Buckets are created lazily on first request from each daemon.

Anonymous Client Isolation

All unregistered clients (those with verified_sender_name set to None) share a single rate limit bucket keyed on the sentinel value __anonymous__. This prevents bypass via the new-connection-per-request pattern: an attacker who opens a fresh IPC connection for every request still draws from the same shared anonymous bucket.

The anonymous bucket is independent from all named daemon buckets. Exhausting the anonymous bucket does not affect registered daemons, and vice versa.

Rate Limiter Reset

When a lock-all operation succeeds (no profile specified in LockRequest), the rate limiter is reset to a fresh instance with empty buckets. This occurs in handle_lock_request() in daemon-secrets/src/unlock.rs.

Test Coverage

The rate limiting module contains five tests (rate_001 through rate_005) that verify:

  • Burst capacity of 20 requests is allowed (rate_001).
  • The 21st request after burst exhaustion is denied (rate_002).
  • Daemon buckets are independent (rate_003).
  • The anonymous bucket is independent from named daemon buckets (rate_004).
  • All anonymous clients share a single bucket (rate_005).

Secret Injection

The sesame CLI provides two commands for injecting vault secrets into running processes: sesame env for spawning a child process with secrets as environment variables, and sesame export for emitting secrets in shell, dotenv, or JSON format. Both commands enforce a runtime denylist that blocks security-sensitive environment variable names.

sesame env

sesame env spawns a child process with all secrets from the specified profile(s) injected as environment variables.

sesame env -p work -- my-application --flag

The command resolves profile specs from the -p flag or, if omitted, from the SESAME_PROFILES environment variable. If neither is set, the default profile name is used.

The child process also receives a SESAME_PROFILES environment variable containing a CSV of the resolved profile specs (e.g., work,braincraft:operations), allowing it to know its security context.

After the child process exits, all secret byte vectors are zeroized via zeroize::Zeroize before the parent process exits with the child’s exit code.

Multi-Profile Support

Multiple profiles can be specified as a comma-separated list:

sesame env -p "default,work" -- my-application

Secrets are fetched from each profile in order and merged with left-wins collision resolution: if the same secret key name exists in multiple profiles, the value from the first profile in the list is used. This is implemented in fetch_multi_profile_secrets() in open-sesame/src/ipc.rs, which uses a HashSet to track seen key names.
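The left-wins merge can be sketched as follows. This is an illustrative standalone function, not the real fetch_multi_profile_secrets() signature; the (key, value) vectors stand in for the per-profile SecretList/SecretGet results.

```rust
use std::collections::HashSet;

/// Merge secrets from profiles in command-line order; the first profile
/// that defines a key wins.
fn merge_left_wins(profiles: &[Vec<(String, String)>]) -> Vec<(String, String)> {
    let mut seen: HashSet<String> = HashSet::new();
    let mut merged = Vec::new();
    for secrets in profiles {
        for (key, value) in secrets {
            // Skip keys already provided by an earlier (higher-priority) profile.
            if seen.insert(key.clone()) {
                merged.push((key.clone(), value.clone()));
            }
        }
    }
    merged
}
```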

Prefix

The --prefix flag prepends a string to all generated environment variable names:

sesame env -p work --prefix MYAPP -- my-application
# Secret "api-key" becomes MYAPP_API_KEY

sesame export

sesame export emits secrets in one of three formats, suitable for shell evaluation or file generation.

Shell Format

sesame export -p work --format shell

Output:

export API_KEY="the-secret-value"
export DB_PASSWORD="another-value"

Dotenv Format

sesame export -p work --format dotenv

Output:

API_KEY="the-secret-value"
DB_PASSWORD="another-value"

JSON Format

sesame export -p work --format json

Output:

{"API_KEY":"the-secret-value","DB_PASSWORD":"another-value"}

After output, all intermediate string copies are zeroized via unsafe as_bytes_mut().zeroize().

Secret Name to Environment Variable Conversion

The secret_key_to_env_var() function in open-sesame/src/env.rs converts secret key names to environment variable names using the following rules:

Input character        Output
Hyphen (-)             Underscore (_)
Dot (.)                Underscore (_)
ASCII alphanumeric     Uppercased
Underscore (_)         Preserved
All other characters   Underscore (_)

The entire result is uppercased. If a prefix is provided, it is prepended with an underscore separator.

Examples (from tests in open-sesame/src/env.rs):

Secret key     Prefix   Environment variable
api-key        None     API_KEY
api-key        MYAPP    MYAPP_API_KEY
db.host-name   None     DB_HOST_NAME
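The conversion rules above can be sketched as a standalone function. This is an illustrative reimplementation, not the actual code in open-sesame/src/env.rs.

```rust
/// Convert a secret key name to an environment variable name:
/// ASCII alphanumerics are uppercased, underscores are preserved, and
/// everything else (hyphens, dots, ...) becomes an underscore. An
/// optional prefix is prepended with an underscore separator.
fn secret_key_to_env_var(key: &str, prefix: Option<&str>) -> String {
    let converted: String = key
        .chars()
        .map(|c| {
            if c.is_ascii_alphanumeric() || c == '_' {
                c.to_ascii_uppercase()
            } else {
                '_'
            }
        })
        .collect();
    match prefix {
        Some(p) => format!("{}_{}", p, converted),
        None => converted,
    }
}
```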

Environment Variable Denylist

The DENIED_ENV_VARS constant in open-sesame/src/env.rs defines environment variable names that must never be overwritten by secret injection. The is_denied_env_var() function checks against this list using case-insensitive comparison. The BASH_FUNC_ prefix is matched as a prefix (any variable starting with BASH_FUNC_ is denied).

If a secret’s converted name matches a denied variable, the secret is skipped with a warning printed to stderr. It is not injected into the child process or emitted in export output.
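The check can be sketched as below. The two-entry list is a small excerpt of the real DENIED_ENV_VARS constant, used here only to keep the example self-contained.

```rust
/// Excerpt of the denylist for illustration; the real constant in
/// open-sesame/src/env.rs is much longer (see Full Denylist below).
const DENIED_ENV_VARS: &[&str] = &["LD_PRELOAD", "PATH"];

/// Case-insensitive exact match against the denylist, plus a prefix
/// match for BASH_FUNC_ (exported bash function definitions).
fn is_denied_env_var(name: &str) -> bool {
    let upper = name.to_ascii_uppercase();
    upper.starts_with("BASH_FUNC_") || DENIED_ENV_VARS.iter().any(|d| *d == upper)
}
```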

Full Denylist

Dynamic linker – arbitrary code execution:

  • LD_PRELOAD
  • LD_LIBRARY_PATH
  • LD_AUDIT
  • LD_DEBUG
  • LD_DEBUG_OUTPUT
  • LD_DYNAMIC_WEAK
  • LD_PROFILE
  • LD_SHOW_AUXV
  • LD_BIND_NOW
  • LD_BIND_NOT
  • DYLD_INSERT_LIBRARIES
  • DYLD_LIBRARY_PATH
  • DYLD_FRAMEWORK_PATH

Core execution environment:

  • PATH
  • HOME
  • USER
  • SHELL
  • LOGNAME
  • LANG
  • TERM
  • DISPLAY
  • WAYLAND_DISPLAY
  • XDG_RUNTIME_DIR

Shell injection vectors:

  • BASH_ENV
  • ENV
  • BASH_FUNC_ (prefix match)
  • CDPATH
  • GLOBIGNORE
  • SHELLOPTS
  • BASHOPTS
  • PROMPT_COMMAND
  • PS1, PS2, PS4
  • MAIL, MAILPATH, MAILCHECK
  • IFS

Language runtime code execution:

  • PYTHONPATH, PYTHONSTARTUP, PYTHONHOME
  • NODE_OPTIONS, NODE_PATH, NODE_EXTRA_CA_CERTS
  • PERL5LIB, PERL5OPT
  • RUBYLIB, RUBYOPT
  • GOPATH, GOROOT, GOFLAGS
  • JAVA_HOME, CLASSPATH, JAVA_TOOL_OPTIONS

Security and authentication:

  • SSH_AUTH_SOCK
  • GPG_AGENT_INFO
  • KRB5_CONFIG, KRB5CCNAME
  • SSL_CERT_FILE, SSL_CERT_DIR
  • CURL_CA_BUNDLE, REQUESTS_CA_BUNDLE
  • GIT_SSL_CAINFO
  • NIX_SSL_CERT_FILE

Nix:

  • NIX_PATH
  • NIX_CONF_DIR

Sudo and privilege escalation:

  • SUDO_ASKPASS
  • SUDO_EDITOR
  • VISUAL
  • EDITOR

Systemd and D-Bus:

  • SYSTEMD_UNIT_PATH
  • DBUS_SESSION_BUS_ADDRESS

Open Sesame namespace:

  • SESAME_PROFILE

Shell Escaping

The shell_escape() function in open-sesame/src/env.rs produces output safe for embedding in double-quoted export statements. The following transformations are applied:

Character   Output                      Reason
\0 (null)   Stripped                    C string truncation risk
"           \"                          Shell metacharacter
\           \\                          Shell metacharacter
$           \$                          Variable expansion
`           \`                          Command substitution
!           \!                          History expansion
\n          \n (literal backslash-n)    Newline
\r          \r (literal backslash-r)    Carriage return
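The transformations in the table can be sketched as a single pass over the input. This is an illustrative reimplementation of the behavior described above, not the actual shell_escape() code.

```rust
/// Escape a secret value for embedding inside a double-quoted shell
/// export statement: strip NULs, backslash-escape shell metacharacters,
/// and replace newlines/carriage returns with literal \n and \r.
fn shell_escape(value: &str) -> String {
    let mut out = String::with_capacity(value.len());
    for c in value.chars() {
        match c {
            '\0' => {} // stripped: C string truncation risk
            '"' | '\\' | '$' | '`' | '!' => {
                out.push('\\');
                out.push(c);
            }
            '\n' => out.push_str("\\n"), // literal backslash-n
            '\r' => out.push_str("\\r"), // literal backslash-r
            _ => out.push(c),
        }
    }
    out
}
```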

JSON Escaping

The json_escape() function produces output safe for embedding in JSON string values:

Character                  Output
\0 (null)                  Stripped
"                          \"
\                          \\
\n                         \n
\r                         \r
\t                         \t
Other control characters   \uXXXX (Unicode escape)
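A sketch of the same escaping rules, again as an illustrative reimplementation rather than the real json_escape() (the hex-digit case of the \uXXXX escape is an assumption here):

```rust
/// Escape a secret value for embedding in a JSON string: strip NULs,
/// escape quotes/backslashes and common whitespace, and encode any other
/// control character as a \uXXXX Unicode escape.
fn json_escape(value: &str) -> String {
    let mut out = String::with_capacity(value.len());
    for c in value.chars() {
        match c {
            '\0' => {} // stripped
            '"' => out.push_str("\\\""),
            '\\' => out.push_str("\\\\"),
            '\n' => out.push_str("\\n"),
            '\r' => out.push_str("\\r"),
            '\t' => out.push_str("\\t"),
            c if (c as u32) < 0x20 => out.push_str(&format!("\\u{:04x}", c as u32)),
            _ => out.push(c),
        }
    }
    out
}
```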

Cross-Profile Behavior

Open Sesame enforces strict per-profile isolation for secret storage while providing controlled mechanisms for accessing secrets from multiple profiles in a single session.

Profile Isolation Guarantees

Each trust profile is a cryptographically independent security domain. The following properties hold:

Independent Master Keys

Each profile has its own password, its own 16-byte random salt (stored at {config_dir}/vaults/{profile}.salt), and its own Argon2id-derived master key. Knowing the password for profile A reveals nothing about the master key for profile B, even if the user chooses the same password for both, because the salts differ.

Independent Vault Keys

The vault key for each profile is derived via core_crypto::derive_vault_key(master_key, profile_id) using the BLAKE3 context string "pds v2 vault-key {profile_id}". Different profile IDs produce different vault keys even from the same master key. The different_profiles_produce_different_keys test in core-crypto/src/hkdf.rs verifies this property.

Independent Database Files

Each profile’s secrets are stored in a separate SQLCipher database file at {config_dir}/vaults/{profile_name}.db. There is no shared database. Opening profile A’s database file with profile B’s vault key fails at the SELECT count(*) FROM sqlite_master verification step in SqlCipherStore::open().

Independent Unlock State

Each profile is unlocked independently via UnlockRequest with an optional profile field. The VaultState struct in daemon-secrets/src/vault.rs maintains per-profile state in several maps:

  • master_keys: HashMap<TrustProfileName, SecureBytes> – per-profile master keys.
  • vaults: HashMap<TrustProfileName, JitDelivery<SqlCipherStore>> – per-profile open vault handles.
  • active_profiles: HashSet<TrustProfileName> – profiles authorized for secret access.
  • partial_unlocks: HashMap<TrustProfileName, PartialUnlock> – in-progress multi-factor unlock sessions.

Multiple profiles may be unlocked and active concurrently. There is no global “locked” state; the daemon starts with empty maps and each profile is unlocked individually.

Independent Deactivation

Locking a single profile (LockRequest with a profile field) removes only that profile’s master key, vault handle, partial unlock state, and keyring entry. Other profiles remain unlocked and accessible.

Cross-Profile Tag References

The profile spec format used by sesame env and sesame export supports an org:vault syntax for referencing profiles with organizational namespaces:

default                  -->  ProfileSpec { org: None,               vault: "default" }
braincraft:operations    -->  ProfileSpec { org: Some("braincraft"), vault: "operations" }

This parsing is implemented in parse_profile_specs() in open-sesame/src/ipc.rs. The org field is currently informational – it is included in the SESAME_PROFILES CSV injected into child processes but does not affect vault lookup. The vault field is used as the TrustProfileName for IPC requests.

The format is designed for future extension to container registry-style references (e.g., docker.io/project/org:vault@sha256).
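The split itself is straightforward; a minimal sketch (illustrative, not the real parse_profile_specs() code) might look like:

```rust
/// Parsed form of an org:vault profile spec. An optional single colon
/// separates the organizational namespace from the vault name.
#[derive(Debug, PartialEq)]
struct ProfileSpec {
    org: Option<String>,
    vault: String,
}

fn parse_profile_spec(spec: &str) -> ProfileSpec {
    match spec.split_once(':') {
        Some((org, vault)) => ProfileSpec {
            org: Some(org.to_string()),
            vault: vault.to_string(),
        },
        None => ProfileSpec { org: None, vault: spec.to_string() },
    }
}
```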

Multi-Profile Secret Injection

The sesame env and sesame export commands accept a comma-separated list of profile specs:

sesame env -p "default,work" -- my-application
sesame export -p "default,work,braincraft:operations" --format json

The profile list can also be set via the SESAME_PROFILES environment variable, which is checked when the -p flag is omitted. Resolution order is implemented in resolve_profile_specs() in open-sesame/src/ipc.rs:

  1. If -p is provided, use it.
  2. Otherwise, read SESAME_PROFILES from the environment.
  3. If neither is set, use the default profile name.
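The resolution order can be sketched as follows, with the CLI flag and environment lookup passed in explicitly for testability (the real resolve_profile_specs() reads the environment itself; this signature is illustrative):

```rust
/// Resolve the comma-separated profile list: -p flag first, then the
/// SESAME_PROFILES environment variable, then the default profile name.
fn resolve_profiles(flag: Option<&str>, env: Option<&str>, default: &str) -> Vec<String> {
    let raw = flag.or(env).unwrap_or(default);
    raw.split(',')
        .map(|s| s.trim().to_string())
        .filter(|s| !s.is_empty())
        .collect()
}
```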

Merge Behavior

fetch_multi_profile_secrets() iterates over the profile specs in order. For each profile, it fetches all secret keys via SecretList, then fetches each value via SecretGet. Keys are merged into the result with left-wins collision resolution: the first profile in the list that contains a given key wins. A HashSet<String> tracks which key names have already been seen.

If a profile has no secrets, a warning is printed to stderr but processing continues with the remaining profiles.

Denylist Enforcement

After the secret key name is converted to an environment variable name (via secret_key_to_env_var()), the result is checked against the denylist (is_denied_env_var()). Denied variables are skipped with a warning on stderr. This check applies identically regardless of which profile the secret originated from.

What Crosses Profile Boundaries

Resource                   Crosses boundaries?                Mechanism
Secret values              No                                 Each profile's vault is encrypted with a unique key.
Secret key names           No                                 Key names are only visible within a single profile's SecretList response.
Master keys                No                                 Each profile has an independent master key derived from its own salt.
Environment variables      Yes, at injection time             sesame env -p "a,b" merges secrets from both profiles into a single child process environment.
Vault database files       No                                 Each profile has its own .db file.
Salt files                 No                                 Each profile has its own .salt file.
JIT cache entries          No                                 JitDelivery instances are per-profile in the vaults map.
Rate limit buckets         No (per-daemon, not per-profile)   Rate limiting is keyed on daemon identity, not profile.
ACL rules                  No                                 ACL rules are defined per-profile under [profiles.<name>.secrets.access].
Platform keyring entries   No                                 Keyring operations are per-profile (keyring_store_profile, keyring_delete_profile).

The only mechanism by which secrets from different profiles can coexist in the same memory space is the sesame env / sesame export multi-profile merge, which operates in the CLI process after secrets have been fetched via IPC from independently unlocked vaults.

Multi-Profile Unlock

Each profile must be unlocked independently before its secrets can be accessed. The sesame unlock command accepts a -p flag:

sesame unlock -p default
sesame unlock -p work

There is no batch unlock command that accepts multiple profiles in a single invocation. Each UnlockRequest IPC message targets a single profile. If a profile is already unlocked, the daemon rejects the request with UnlockRejectedReason::AlreadyUnlocked.

Locking supports both single-profile and all-profile modes:

sesame lock -p work          # Lock only the "work" profile
sesame lock                  # Lock all profiles

Lock-all removes all master keys, flushes all JIT caches, scrubs all C-level key buffers, deletes all keyring entries, clears all partial unlock state, and resets the rate limiter.

Factor Architecture

This page describes the pluggable authentication backend system in core-auth. The system defines a trait-based dispatch mechanism that allows multiple authentication methods to coexist, with the AuthDispatcher coordinating backend selection at unlock time.

AuthFactorId

The AuthFactorId enum in core-types/src/auth.rs identifies each authentication factor type. Six variants exist:

Variant       Config string   Status
Password      password        Implemented
SshAgent      ssh-agent       Implemented
Fido2         fido2           Defined, no backend
Tpm           tpm             Defined, no backend
Fingerprint   fingerprint     Defined, no backend
Yubikey       yubikey         Defined, no backend

The enum derives Serialize, Deserialize, Copy, Hash, Ord, and uses #[serde(rename_all = "kebab-case")]. The four future variants (Fido2, Tpm, Fingerprint, Yubikey) are defined to permit forward-compatible policy configuration: a vault metadata file can reference these factor types in its auth_policy before their backends are implemented.

AuthFactorId::from_config_str() parses the config-file string form. AuthFactorId::as_config_str() returns the static string. The Display implementation delegates to as_config_str().

VaultAuthBackend Trait

Defined in core-auth/src/backend.rs, the VaultAuthBackend trait is the extension point for adding new authentication methods. It requires Send + Sync and uses #[async_trait].

Required Methods

  • factor_id – fn(&self) -> AuthFactorId – Which factor this backend provides.
  • name – fn(&self) -> &str – Human-readable name for audit logs and overlay display.
  • backend_id – fn(&self) -> &str – Short identifier for IPC messages and config.
  • is_enrolled – fn(&self, profile, config_dir) -> bool – Whether enrollment artifacts exist on disk.
  • can_unlock – async fn(&self, profile, config_dir) -> bool – Whether unlock can currently succeed (must complete in <100ms).
  • requires_interaction – fn(&self) -> AuthInteraction – What kind of user interaction is needed.
  • unlock – async fn(&self, profile, config_dir, salt) -> Result<UnlockOutcome, AuthError> – Derive or unwrap the master key.
  • enroll – async fn(&self, profile, master_key, config_dir, salt, selected_key_index) -> Result<(), AuthError> – Create enrollment artifacts for a profile.
  • revoke – async fn(&self, profile, config_dir) -> Result<(), AuthError> – Remove enrollment artifacts.

The enroll method accepts an optional selected_key_index for backends that offer multiple eligible keys (e.g., SSH agent with multiple loaded keys). If None, the backend picks the first eligible key.

AuthInteraction

The AuthInteraction enum describes the interaction model:

  • None – Backend can unlock silently (SSH agent with a software key, future TPM, future keyring).
  • PasswordEntry – Keyboard input required.
  • HardwareTouch – Physical touch on a hardware token (future FIDO2, PIV with touch policy).

FactorContribution

The FactorContribution enum describes what a backend provides to the unlock process:

  • CompleteMasterKey – The backend independently unwraps or derives a complete 32-byte master key. Used in Any and Policy modes.
  • FactorPiece – The backend provides a piece that must be combined with pieces from other factors via BLAKE3 derive_key. Used in All mode.

VaultMetadata::contribution_type() returns FactorPiece when auth_policy is All, and CompleteMasterKey for Any and Policy.

UnlockOutcome

The UnlockOutcome struct is returned by a successful unlock() call:

  • master_key: SecureBytes – The 32-byte master key (mlock’d, zeroize-on-drop).
  • audit_metadata: BTreeMap<String, String> – Backend-specific metadata for audit logging (e.g., ssh_fingerprint, key_type).
  • ipc_strategy: IpcUnlockStrategy – Which IPC message type to use (PasswordUnlock or DirectMasterKey).
  • factor_id: AuthFactorId – Which factor this outcome represents.

IpcUnlockStrategy

  • PasswordUnlock – Use the UnlockRequest IPC message; daemon-secrets performs the KDF.
  • DirectMasterKey – Use the SshUnlockRequest or FactorSubmit IPC message with a pre-derived master key.

Both implemented backends (PasswordBackend and SshAgentBackend) use DirectMasterKey. The password backend derives the KEK client-side via Argon2id and unwraps the master key before sending it over IPC.

AuthDispatcher

Defined in core-auth/src/dispatcher.rs, the AuthDispatcher holds a Vec<Box<dyn VaultAuthBackend>> and provides methods for backend discovery and selection.

Construction

AuthDispatcher::new() registers two backends in priority order:

  1. SshAgentBackend (non-interactive)
  2. PasswordBackend (interactive fallback)

Methods

backends(&self) -> &[Box<dyn VaultAuthBackend>] – Access all registered backends.

applicable_backends(profile, config_dir, meta) -> Vec<&dyn VaultAuthBackend> – Returns backends that are both enrolled in the vault metadata (meta.has_factor(backend.factor_id())) AND can currently perform an unlock (backend.can_unlock()). Used by the CLI to determine which factors to attempt.

find_auto_backend(profile, config_dir) -> Option<&dyn VaultAuthBackend> – Returns the first backend where requires_interaction() == AuthInteraction::None, is_enrolled() is true, and can_unlock() is true. Does not consult vault metadata – checks enrollment files directly on disk.

can_auto_unlock(profile, config_dir, meta) -> bool – Policy-aware auto-unlock feasibility check:

  • Any mode: delegates to find_auto_backend() – a single non-interactive backend suffices.
  • All or Policy mode: all applicable backends must be non-interactive. Returns false conservatively if any required factor needs interaction.

password_backend(&self) -> &dyn VaultAuthBackend – Returns the password backend. Panics if not registered (programming error – the constructor always registers it).

VaultMetadata

Defined in core-auth/src/vault_meta.rs, VaultMetadata is the JSON-serialized record of a vault’s authentication state. Stored at {config_dir}/vaults/{profile}.vault-meta with permissions 0o600.

Fields

Field               Type                  Purpose
version             u32                   Format version (currently 1)
init_mode           VaultInitMode         How the vault was originally initialized
enrolled_factors    Vec<EnrolledFactor>   Which auth methods are enrolled
auth_policy         AuthCombineMode       Unlock policy for this vault
created_at          u64                   Unix epoch seconds of vault creation
policy_changed_at   u64                   Unix epoch seconds of last policy change

VaultInitMode

  • Password – Initialized with password only.
  • SshKeyOnly – Initialized with SSH key only (random master key, no password).
  • MultiFactor { factors: Vec<AuthFactorId> } – Initialized with multiple factors.

EnrolledFactor

Each enrolled factor records:

  • factor_id: AuthFactorId – The factor type.
  • label: String – Human-readable label (e.g., SSH key fingerprint, “master password”).
  • enrolled_at: u64 – Unix epoch seconds.

Version Gating

VaultMetadata::load() rejects metadata where version > MAX_SUPPORTED_VERSION (currently 1). This prevents a newer binary from silently misinterpreting a vault metadata format it does not understand.

Persistence

JSON is used rather than TOML to distinguish machine-managed metadata from user-editable configuration. Writes use atomic rename via a .vault-meta.tmp intermediate file. File permissions are set to 0o600 on Unix before the rename.
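The write-to-temp-then-rename pattern can be sketched as below. This is an illustrative Unix-only sketch, not the actual VaultMetadata::save() implementation; the function name is hypothetical.

```rust
use std::fs;
use std::io::Write;
use std::path::Path;

/// Atomically replace `path` with `contents`: write a sibling temp file,
/// restrict its permissions, then rename it into place so readers never
/// observe a partially written file.
fn atomic_write_0600(path: &Path, contents: &[u8]) -> std::io::Result<()> {
    // Build the ".tmp" sibling path next to the destination.
    let mut tmp_os = path.as_os_str().to_owned();
    tmp_os.push(".tmp");
    let tmp = std::path::PathBuf::from(tmp_os);

    let mut f = fs::File::create(&tmp)?;
    f.write_all(contents)?;
    f.sync_all()?; // flush to disk before the rename makes it visible
    drop(f);

    #[cfg(unix)]
    {
        use std::os::unix::fs::PermissionsExt;
        // Owner read/write only, set before the rename.
        fs::set_permissions(&tmp, fs::Permissions::from_mode(0o600))?;
    }
    fs::rename(&tmp, path) // atomic on the same filesystem
}
```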

Factory Methods

  • new_password(auth_policy) – Creates metadata with a single Password enrolled factor.
  • new_ssh_only(fingerprint, auth_policy) – Creates metadata with a single SshAgent enrolled factor.
  • new_multi_factor(factors, auth_policy) – Creates metadata with arbitrary enrolled factors.

Factor Management

  • has_factor(factor_id) -> bool – Check enrollment.
  • add_factor(factor_id, label) – Idempotent add (no-op if already enrolled).
  • remove_factor(factor_id) – Remove by factor ID.
  • contribution_type() -> FactorContribution – Returns FactorPiece for All mode, CompleteMasterKey for Any/Policy.

Adding a New Factor

To add a new authentication factor (e.g., FIDO2):

  1. The AuthFactorId variant already exists in core-types/src/auth.rs (e.g., Fido2).
  2. Create a new module in core-auth/src/ implementing a struct (e.g., Fido2Backend).
  3. Implement VaultAuthBackend for the struct:
    • factor_id() returns the corresponding AuthFactorId variant.
    • is_enrolled() checks for the factor’s enrollment artifact on disk.
    • can_unlock() checks whether the hardware or service is available.
    • requires_interaction() returns the appropriate AuthInteraction variant.
    • unlock() derives or unwraps the 32-byte master key.
    • enroll() wraps the master key under the factor’s KEK and writes an enrollment blob.
    • revoke() zeroizes and deletes the enrollment blob.
  4. Register the backend in AuthDispatcher::new() at the appropriate priority position (non-interactive backends before interactive ones).
  5. The CLI unlock flow in open-sesame/src/unlock.rs handles unknown factors by reporting that the factor is not yet supported. Adding a match arm in try_auto_factor() (for non-interactive factors) or the phase 3 loop (for interactive factors) enables CLI support.

No changes to daemon-secrets are required – the FactorSubmit IPC handler and PartialUnlock state machine operate on AuthFactorId and SecureBytes generically.

Policy Engine

This page describes the multi-factor authentication policy system. Policies are declared in configuration, persisted in vault metadata, and enforced by daemon-secrets at unlock time through a partial unlock state machine.

AuthCombineMode

Defined in core-types/src/auth.rs, the AuthCombineMode enum determines both the key wrapping scheme at initialization and the unlock policy evaluation at runtime. It derives Serialize, Deserialize, and uses #[serde(rename_all = "kebab-case")].

Any (default)

AuthCombineMode::Any

The master key is a random 32-byte value generated via getrandom. Each enrolled factor independently wraps this master key under its own KEK (Argon2id-derived for password, BLAKE3-derived for SSH). Any single enrolled factor can unlock the vault alone.

At unlock time in daemon-secrets, the first valid factor submitted completes the unlock immediately. The PartialUnlock state machine clears all remaining requirements when Any mode is detected:

if matches!(meta.auth_policy, AuthCombineMode::Any) {
    partial.remaining_required.clear();
    partial.remaining_additional = 0;
}

All

AuthCombineMode::All

Every enrolled factor must be provided at unlock time. Each factor contributes a “piece” (its unwrapped key material). Once all pieces are collected, daemon-secrets combines them into the master key via BLAKE3 derive_key:

  1. Factor pieces are sorted by AuthFactorId (which derives Ord).
  2. The sorted pieces are concatenated.
  3. BLAKE3 derive_key is called with context "pds v2 combined-master-key {profile_name}" and the concatenated bytes as input.
  4. The result is a 32-byte master key.

The KDF context constant is ALL_MODE_KDF_CONTEXT defined in daemon-secrets/src/vault.rs as "pds v2 combined-master-key".

The VaultMetadata::contribution_type() method returns FactorContribution::FactorPiece for All mode. daemon-secrets checks this to decide whether each submitted factor’s key material can be verified against the vault DB independently; for FactorPiece it cannot, so verification happens only after the pieces are combined.

Policy

AuthCombineMode::Policy(AuthPolicy {
    required: Vec<AuthFactorId>,
    additional_required: u32,
})

A policy expression combining mandatory factors with a threshold of additional factors. Key wrapping uses independent wraps (same as Any mode – each factor wraps the same random master key). Policy enforcement happens at the daemon level.

  • required: Factors that must always succeed. Every factor in this list must be submitted.
  • additional_required: How many additional enrolled factors (beyond those in required) must also succeed.

Example: required: [Password], additional_required: 1 means the password is always required, plus one more factor (e.g., SSH agent or a future FIDO2 token).

FactorContribution is CompleteMasterKey for Policy mode – each factor independently unwraps the same master key.

Configuration

Auth policy is configured in config.toml under [profiles.<name>.auth], defined by the AuthConfig struct in core-config/src/schema_secrets.rs:

[profiles.default.auth]
mode = "any"                          # "any", "all", or "policy"
required = ["password", "ssh-agent"]  # For mode="policy" only
additional_required = 1               # For mode="policy" only

AuthConfig::to_typed() converts the string-based config representation to AuthCombineMode. It validates that all factor names in required are recognized via AuthFactorId::from_config_str(). The default AuthConfig uses mode "any" with empty required and additional_required = 0.

PartialUnlock State Machine

Defined in daemon-secrets/src/vault.rs, the PartialUnlock struct tracks in-progress multi-factor unlocks. At most one PartialUnlock exists per profile, stored in VaultState::partial_unlocks.

State

Field                  Type                                 Purpose
received_factors       HashMap<AuthFactorId, SecureBytes>   Factor keys received so far
remaining_required     HashSet<AuthFactorId>                Factors still needed
remaining_additional   u32                                  Additional factors still needed beyond required
deadline               tokio::time::Instant                 Expiration time

Lifecycle

  1. Creation: A PartialUnlock is created on the first FactorSubmit for a profile. The remaining_required and remaining_additional fields are initialized from the vault’s AuthCombineMode.

  2. Factor acceptance: Each FactorSubmit records the factor’s key material in received_factors and removes the factor from remaining_required. If the factor is not in the required set and remaining_additional > 0, the additional counter is decremented.

  3. Completion check: is_complete() returns true when remaining_required is empty AND remaining_additional == 0.

  4. Promotion: When complete, the partial state is removed from the map and the master key is either taken directly (for Any/Policy mode, the first received factor’s key) or derived by combining all pieces (for All mode).

  5. Expiration: is_expired() checks whether tokio::time::Instant::now() >= deadline. Expired partials are rejected on the next FactorSubmit and removed from the map.
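The acceptance and completion bookkeeping (steps 2 and 3) can be sketched as follows. The types here are simplified stand-ins: FactorId stands in for AuthFactorId, and key material and deadlines are omitted.

```rust
use std::collections::HashSet;

#[derive(Clone, Copy, PartialEq, Eq, Hash)]
enum FactorId { Password, SshAgent }

/// Simplified partial-unlock bookkeeping: factors are removed from the
/// required set as they arrive; a non-required factor counts against the
/// additional threshold instead.
struct PartialUnlock {
    remaining_required: HashSet<FactorId>,
    remaining_additional: u32,
}

impl PartialUnlock {
    fn accept(&mut self, factor: FactorId) {
        if !self.remaining_required.remove(&factor) && self.remaining_additional > 0 {
            self.remaining_additional -= 1;
        }
    }

    fn is_complete(&self) -> bool {
        self.remaining_required.is_empty() && self.remaining_additional == 0
    }
}
```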

Timeouts

  • PARTIAL_UNLOCK_TIMEOUT_SECS: 120 seconds. The deadline for collecting all required factors after the first factor is submitted.
  • PARTIAL_UNLOCK_SWEEP_INTERVAL_SECS: 30 seconds. The interval at which daemon-secrets sweeps and discards expired partial unlock state.

Key Combination (All Mode)

When all factors have been received in All mode, daemon-secrets combines them:

let mut pieces: Vec<_> = partial.received_factors.into_iter().collect();
pieces.sort_by_key(|(id, _)| *id);
let mut combined = Vec::new();
for (_id, piece) in &pieces {
    combined.extend_from_slice(piece.as_bytes());
}
let ctx_str = format!("{ALL_MODE_KDF_CONTEXT} {target}");
let derived: [u8; 32] = blake3::derive_key(&ctx_str, &combined);
combined.zeroize();

The sorting by AuthFactorId ensures deterministic ordering regardless of submission order.

CLI Unlock Flow

The CLI unlock command in open-sesame/src/unlock.rs orchestrates factor submission in three phases:

Phase 1: Auto-Submit Non-Interactive Factors

The CLI iterates over all enrolled factors and calls try_auto_factor() for each. Currently, only AuthFactorId::SshAgent is handled – it checks can_unlock() on the SshAgentBackend, and if available, calls unlock() to derive the master key client-side and submits it via FactorSubmit IPC.

If the vault uses Any mode and the SSH agent succeeds, the vault is fully unlocked and no further factors are needed.

Phase 2: Query Remaining Factors

The CLI sends a VaultAuthQuery IPC message to daemon-secrets, which returns:

  • enrolled_factors: All enrolled factor IDs.
  • auth_policy: The vault’s AuthCombineMode.
  • partial_in_progress: Whether a PartialUnlock exists.
  • received_factors: Which factors have already been accepted.

The CLI filters out already-received factors to determine what remains.

Phase 3: Prompt Interactive Factors

The CLI iterates over remaining factors:

  • Password: Prompts for password (via dialoguer if terminal, or reads from stdin), derives the master key client-side using PasswordBackend::unlock(), and submits via FactorSubmit.
  • Other factors: The CLI reports that the factor is not yet supported and exits with an error.

Each FactorSubmit response includes unlock_complete, remaining_factors, and remaining_additional, allowing the CLI to track progress.

Factor Submission IPC

The submit_factor() function sends EventKind::FactorSubmit with:

  • factor_id: Which factor type.
  • key_material: The master key in a SensitiveBytes (mlock’d ProtectedAlloc).
  • profile: Target profile name.
  • audit_metadata: Backend-specific audit fields.

The daemon responds with EventKind::FactorResponse containing acceptance status, completion status, and remaining factor information.

Daemon-Side Verification

For Any and Policy modes (CompleteMasterKey contribution), daemon-secrets verifies each submitted factor’s key material against the vault database before accepting it. It derives the vault key via core_crypto::derive_vault_key() and attempts to open the SQLCipher database. If the open fails (wrong key, GCM authentication failure), the factor is rejected.

For All mode (FactorPiece contribution), individual pieces cannot be verified against the vault database. Verification happens after all pieces are combined into the master key.

Password Backend

This page describes the password authentication backend implemented in core-auth/src/password.rs and core-auth/src/password_wrap.rs. The backend uses Argon2id to derive a key-encrypting key (KEK) from user-provided password bytes, then wraps or unwraps a 32-byte master key using AES-256-GCM.

PasswordBackend

The PasswordBackend struct holds an optional SecureVec containing password bytes. Password bytes must be injected via with_password() (builder pattern) or set_password() (mutation) before calling unlock() or enroll(). The SecureVec type provides mlock’d memory and zeroize-on-drop semantics.

Trait Implementation

Method                             Behavior
factor_id()                        Returns AuthFactorId::Password
name()                             Returns "Password"
backend_id()                       Returns "password"
is_enrolled(profile, config_dir)   Checks whether {config_dir}/vaults/{profile}.password-wrap exists
can_unlock(profile, config_dir)    Returns true only if enrolled AND password bytes have been set
requires_interaction()             Returns AuthInteraction::PasswordEntry

KEK Derivation

The derive_kek() method performs:

  1. Validate salt is exactly 16 bytes.
  2. Call core_crypto::derive_key_argon2(password, salt) – Argon2id with project-wide parameters.
  3. Copy the first 32 bytes of the Argon2id output into a [u8; 32] KEK array.

The Argon2id parameters are defined in core-crypto (not in core-auth).

Unlock Flow

  1. Read password bytes from the stored SecureVec. Fail with BackendNotApplicable if no password was set.
  2. Load the PasswordWrapBlob from {config_dir}/vaults/{profile}.password-wrap.
  3. Derive the KEK via derive_kek(password, salt).
  4. Call blob.unwrap(&mut kek) to decrypt the master key via AES-256-GCM. The KEK is zeroized after use.
  5. Return an UnlockOutcome with ipc_strategy: DirectMasterKey and factor_id: Password.

Enrollment Flow

  1. Read password bytes from the stored SecureVec.
  2. Derive the KEK via derive_kek(password, salt).
  3. Call PasswordWrapBlob::wrap(master_key, &mut kek) to encrypt the master key. The KEK is zeroized after use.
  4. Write the blob to disk via blob.save(config_dir, profile).

Revocation

Revocation overwrites the wrap file with zeros before deletion to prevent casual recovery from disk:

  1. Read the file length.
  2. Write a zero-filled buffer of the same length.
  3. Delete the file via std::fs::remove_file.

PasswordWrapBlob

Defined in core-auth/src/password_wrap.rs, the PasswordWrapBlob struct represents the on-disk binary format for the AES-256-GCM wrapped master key.

Binary Format

Offset  Length  Field
0       1       Version byte (0x01)
1       12      Nonce (random, generated via getrandom)
13      48      Ciphertext (32-byte master key + 16-byte GCM tag)

Total size: 61 bytes.

The version constant PASSWORD_WRAP_VERSION is 0x01.

Wrapping (Encryption)

PasswordWrapBlob::wrap(master_key, kek_bytes):

  1. Construct an EncryptionKey from the 32-byte KEK.
  2. Generate a 12-byte random nonce via getrandom.
  3. Encrypt the master key with AES-256-GCM using the KEK and nonce.
  4. Zeroize the KEK bytes.
  5. Return the blob containing version, nonce, and ciphertext.

Unwrapping (Decryption)

PasswordWrapBlob::unwrap(kek_bytes):

  1. Construct an EncryptionKey from the 32-byte KEK.
  2. Zeroize the KEK bytes immediately after key construction.
  3. Decrypt using AES-256-GCM with the stored nonce and ciphertext.
  4. Return the plaintext as SecureBytes (mlock’d, zeroize-on-drop).
  5. If GCM authentication fails (wrong password), return AuthError::UnwrapFailed.

Deserialization

PasswordWrapBlob::deserialize(data) rejects:

  • Data shorter than 61 bytes (AuthError::InvalidBlob).
  • Version bytes other than 0x01 (AuthError::InvalidBlob).

Persistence

Path: {config_dir}/vaults/{profile}.password-wrap

Write: save() uses atomic rename via a .password-wrap.tmp intermediate file. On Unix, file permissions are set to 0o600 (owner read/write only) before the rename. The parent vaults/ directory is created if it does not exist.

Read: load() reads the file and calls deserialize().
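The atomic-rename write with 0o600 permissions can be sketched as (Unix-only; helper name assumed):

```rust
use std::fs;
use std::io::Write;
use std::os::unix::fs::PermissionsExt;
use std::path::Path;

/// Write `data` to `dir/name` atomically: write a `.tmp` sibling,
/// chmod it to 0o600, fsync, then rename over the destination.
fn save_atomic(dir: &Path, name: &str, data: &[u8]) -> std::io::Result<()> {
    fs::create_dir_all(dir)?; // create the parent directory if missing
    let tmp = dir.join(format!("{name}.tmp"));
    let mut file = fs::File::create(&tmp)?;
    file.write_all(data)?;
    file.set_permissions(fs::Permissions::from_mode(0o600))?;
    file.sync_all()?;
    drop(file);
    fs::rename(&tmp, dir.join(name)) // atomic on the same filesystem
}
```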

Zeroization

The PasswordWrapBlob struct implements Drop to zeroize its nonce and ciphertext fields. All KEK arrays are zeroized immediately after use in both wrap() and unwrap().

Salt

Each profile has an independent 16-byte salt stored at {config_dir}/vaults/{profile}.salt. The salt is generated via getrandom during vault initialization (daemon-secrets/src/unlock.rs::generate_profile_salt).

During sesame init, the salt file is written with:

  • The vaults/ directory created with permissions 0o700.
  • The salt file itself written via core_config::atomic_write and then set to permissions 0o600.

The salt is used as input to both the Argon2id KDF (password backend) and the BLAKE3 challenge derivation (SSH agent backend). Using a per-profile salt ensures that the same password produces different KEKs for different profiles.
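Salt generation can be sketched as follows, using /dev/urandom as a Linux stand-in for the getrandom crate that the real generate_profile_salt uses:

```rust
use std::fs::File;
use std::io::Read;

/// Generate a 16-byte per-profile salt. The real implementation calls
/// the getrandom crate; /dev/urandom is an equivalent entropy source
/// on Linux for this sketch.
fn generate_profile_salt() -> std::io::Result<[u8; 16]> {
    let mut salt = [0u8; 16];
    File::open("/dev/urandom")?.read_exact(&mut salt)?;
    Ok(salt)
}
```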

Key Material Handling

The password backend’s key material lifecycle:

  1. Password bytes: Stored in SecureVec (mlock’d, zeroize-on-drop). Acquired from the user via dialoguer::Password (terminal) or stdin (pipe). The String holding the raw password is zeroized immediately after copying into the SecureVec.

  2. KEK (Argon2id output): A [u8; 32] stack array. Zeroized by PasswordWrapBlob::wrap() and PasswordWrapBlob::unwrap() after use.

  3. Master key: Returned as SecureBytes (backed by ProtectedAlloc – mlock’d, mprotect’d, zeroize-on-drop). Transferred to daemon-secrets via SensitiveBytes IPC wrapper which also uses ProtectedAlloc.

At no point does the master key exist in an unprotected heap allocation. The KEK exists briefly on the stack and is zeroized before the function returns.
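The stack-array zeroization pattern can be sketched with a volatile wipe (the real code uses the zeroize crate plus mlock-backed SecureVec/SecureBytes, which this stand-in does not replicate):

```rust
/// Wipe a KEK array with volatile writes so the compiler cannot
/// elide the stores after the last ordinary read of the array.
fn zeroize_kek(kek: &mut [u8; 32]) {
    for byte in kek.iter_mut() {
        unsafe { std::ptr::write_volatile(byte, 0) };
    }
    // Prevent the compiler from reordering the wipe past later code.
    std::sync::atomic::compiler_fence(std::sync::atomic::Ordering::SeqCst);
}
```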

SSH Agent Backend

This page describes the SSH agent authentication backend implemented in core-auth/src/ssh.rs and core-auth/src/ssh_types.rs. The backend connects to the user’s SSH agent, signs a deterministic challenge, derives a KEK from the signature via BLAKE3, and wraps or unwraps the vault master key using AES-256-GCM.

SshAgentBackend

The SshAgentBackend struct is a zero-sized type. All state lives in the SSH agent process and on-disk enrollment blobs.

Trait Implementation

Method                            Behavior
factor_id()                       Returns AuthFactorId::SshAgent
name()                            Returns "SSH Agent"
backend_id()                      Returns "ssh-agent"
is_enrolled(profile, config_dir)  Checks whether {config_dir}/vaults/{profile}.ssh-enrollment exists
can_unlock(profile, config_dir)   Enrolled, blob is parseable, and the enrolled key’s fingerprint is present in the running agent
requires_interaction()            Returns AuthInteraction::None

The can_unlock() check connects to the SSH agent via spawn_blocking (the ssh-agent-client-rs crate uses synchronous Unix socket I/O) and searches the agent’s identity list for a key matching the fingerprint stored in the enrollment blob.

Challenge Construction

The challenge is a deterministic 32-byte value derived from the profile name and salt:

context = "pds v2 ssh-challenge {profile_name}"
challenge = BLAKE3::derive_key(context, salt)

The same profile name and salt always produce the same challenge. Different profiles or salts produce different challenges. This determinism is essential because the backend must produce the same challenge at both enrollment and unlock time.

Signature to KEK Derivation

After the SSH agent signs the challenge, the raw signature bytes are fed into a second BLAKE3 derive_key call:

context = "pds v2 ssh-vault-kek {profile_name}"
kek = BLAKE3::derive_key(context, signature_bytes)

The raw signature bytes are zeroized immediately after KEK derivation. The KEK is a 32-byte value used as an AES-256-GCM key to wrap or unwrap the master key.

This two-step derivation (challenge from salt, KEK from signature) ensures:

  • The KEK is bound to both the profile identity and the specific SSH key.
  • The signature is never stored – only the wrapped master key is persisted.
  • The BLAKE3 derivation provides domain separation between the challenge and KEK contexts.

Supported Key Types

Defined in core-auth/src/ssh_types.rs, the SshKeyType enum restricts which SSH key types can be used:

Type     Wire name    Determinism
Ed25519  ssh-ed25519  Deterministic by specification (RFC 8032)
Rsa      ssh-rsa      PKCS#1 v1.5 padding uses no randomness; ssh-agent-client-rs hard-codes SHA-512

Excluded key types:

  • ECDSA (ecdsa-sha2-nistp256, etc.): Non-deterministic. Uses a random k value per signature. A different signature on each unlock would produce a different KEK and fail to unwrap the enrollment blob.
  • RSA-PSS: Non-deterministic. Uses a random salt per signature.

SshKeyType::from_algorithm() converts from ssh_key::Algorithm, rejecting non-deterministic types with AuthError::UnsupportedKeyType. SshKeyType::from_wire_name() parses the SSH wire format string.

EnrollmentBlob

The EnrollmentBlob struct persists the SSH-agent enrollment on disk at {config_dir}/vaults/{profile}.ssh-enrollment.

Binary Format

Offset    Length  Field
0         1       Version byte (0x01)
1         2       Key fingerprint length N (big-endian u16)
3         N       Key fingerprint (ASCII, e.g. "SHA256:...")
3+N       1       Key type length M (u8)
4+N       M       Key type wire name (ASCII, e.g. "ssh-ed25519")
4+N+M     12      Nonce (random)
16+N+M    48      Ciphertext (32-byte master key + 16-byte GCM tag)

The version constant ENROLLMENT_VERSION is 0x01.

Security

  • Fingerprint length is capped at 256 bytes during deserialization to prevent allocation attacks from malformed blobs.
  • File permissions are set to 0o600 before atomic rename.
  • Revocation overwrites the file with zeros before deletion.
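The variable-length layout and the 256-byte fingerprint cap can be sketched as a parser (field handling only; the struct and helper names here are illustrative):

```rust
const ENROLLMENT_VERSION: u8 = 0x01;

struct EnrollmentHeader {
    fingerprint: String,
    key_type: String,
}

/// Parse the fingerprint and key-type fields of an enrollment blob,
/// enforcing the version byte and the 256-byte fingerprint cap.
fn parse_header(data: &[u8]) -> Result<EnrollmentHeader, &'static str> {
    if data.len() < 3 || data[0] != ENROLLMENT_VERSION {
        return Err("InvalidBlob");
    }
    let n = u16::from_be_bytes([data[1], data[2]]) as usize;
    if n > 256 {
        return Err("InvalidBlob"); // cap defeats allocation attacks
    }
    if data.len() < 3 + n + 1 {
        return Err("InvalidBlob");
    }
    let fingerprint = std::str::from_utf8(&data[3..3 + n])
        .map_err(|_| "InvalidBlob")?
        .to_string();
    let m = data[3 + n] as usize;
    // key type, then the 12-byte nonce and 48-byte ciphertext must follow
    if data.len() < 4 + n + m + 12 + 48 {
        return Err("InvalidBlob");
    }
    let key_type = std::str::from_utf8(&data[4 + n..4 + n + m])
        .map_err(|_| "InvalidBlob")?
        .to_string();
    Ok(EnrollmentHeader { fingerprint, key_type })
}
```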

Unlock Flow

  1. Read and deserialize the enrollment blob from disk.
  2. Derive the 32-byte challenge: BLAKE3::derive_key("pds v2 ssh-challenge {profile}", salt).
  3. Connect to the SSH agent (via spawn_blocking to avoid blocking the tokio runtime).
  4. Find the identity matching the enrolled fingerprint.
  5. Sign the challenge with the enrolled key.
  6. Derive the KEK: BLAKE3::derive_key("pds v2 ssh-vault-kek {profile}", signature_bytes).
  7. Zeroize the raw signature bytes.
  8. Construct an EncryptionKey from the KEK, then zeroize the KEK bytes.
  9. Decrypt the master key from the enrollment blob’s ciphertext using AES-256-GCM.
  10. Return an UnlockOutcome with ipc_strategy: DirectMasterKey, factor_id: SshAgent, and audit metadata including the SSH fingerprint and key type.

Enrollment Flow

  1. Connect to the SSH agent, list all identities, filter to eligible key types (Ed25519, RSA).
  2. Select a key by selected_key_index (required – None returns NoEligibleKey).
  3. Sign the challenge with the selected key.
  4. Derive the KEK from the signature (same derivation as unlock).
  5. Zeroize the signature bytes.
  6. Generate a 12-byte random nonce via getrandom.
  7. Encrypt the master key with AES-256-GCM using the KEK and nonce.
  8. Zeroize the KEK bytes.
  9. Build and serialize the EnrollmentBlob with the key fingerprint, key type, nonce, and ciphertext.
  10. Write to disk atomically via a .ssh-enrollment.tmp intermediate, with 0o600 permissions.

Key Selection

The CLI sesame ssh enroll command in open-sesame/src/ssh.rs supports three methods for selecting which SSH key to enroll:

Fingerprint via --ssh-key Flag

sesame ssh enroll --ssh-key SHA256:abc123...

The fingerprint is matched against loaded agent keys, with or without the SHA256: prefix.

Public Key File via --ssh-key Flag

sesame ssh enroll --ssh-key ~/.ssh/id_ed25519.pub

The file is read, parsed as an OpenSSH public key, and its SHA256 fingerprint is computed. Path traversal via ~/ is resolved through canonicalize() and verified to remain within $HOME. Files larger than 64 KB are rejected.
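The containment and size checks can be sketched as follows (helper name assumed; the real code additionally parses the key as an OpenSSH public key):

```rust
use std::path::{Path, PathBuf};

const MAX_PUBKEY_BYTES: u64 = 64 * 1024;

/// Resolve a public-key path, require it to remain inside `home`,
/// and reject files larger than 64 KB.
fn validate_pubkey_path(path: &Path, home: &Path) -> Result<PathBuf, &'static str> {
    let canonical = path.canonicalize().map_err(|_| "unreadable path")?;
    let home = home.canonicalize().map_err(|_| "bad home")?;
    if !canonical.starts_with(&home) {
        return Err("path escapes $HOME");
    }
    let len = std::fs::metadata(&canonical)
        .map_err(|_| "unreadable path")?
        .len();
    if len > MAX_PUBKEY_BYTES {
        return Err("file too large");
    }
    Ok(canonical)
}
```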

Interactive Menu

When --ssh-key is omitted and stdin is a terminal, dialoguer::Select presents a menu of eligible keys from the agent, showing fingerprint and algorithm. In non-interactive mode (piped stdin), --ssh-key is required.

Agent Connection

The connect_agent() function in core-auth/src/ssh.rs attempts two socket paths in order:

  1. $SSH_AUTH_SOCK: The standard environment variable, set by ssh-agent, sshd forwarding, or systemd environment propagation.

  2. ~/.ssh/agent.sock: A fallback stable symlink path. On Konductor VMs, /etc/profile.d/konductor-ssh-agent.sh creates ~/.ssh/agent.sock pointing to the forwarded agent socket (/tmp/ssh-XXXX/agent.PID) on each SSH login. This gives systemd user services a stable path to the forwarded agent, since $SSH_AUTH_SOCK points to a per-session temporary directory that changes on each login.

The function is intentionally synchronous – local Unix socket connect is sub-millisecond. All agent operations in the async VaultAuthBackend methods are wrapped in tokio::task::spawn_blocking to avoid blocking the tokio runtime.
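The two-path lookup can be sketched as (function names are illustrative; the ordering matches the description above):

```rust
use std::os::unix::net::UnixStream;
use std::path::{Path, PathBuf};

/// Candidate agent sockets, tried in order: $SSH_AUTH_SOCK first,
/// then the stable ~/.ssh/agent.sock symlink.
fn agent_socket_candidates(home: &Path) -> Vec<PathBuf> {
    let mut paths = Vec::new();
    if let Ok(sock) = std::env::var("SSH_AUTH_SOCK") {
        paths.push(PathBuf::from(sock));
    }
    paths.push(home.join(".ssh/agent.sock"));
    paths
}

/// Synchronous connect: a local Unix socket connect is sub-millisecond,
/// so async callers wrap this in spawn_blocking.
fn connect_agent(home: &Path) -> Option<UnixStream> {
    agent_socket_candidates(home)
        .iter()
        .find_map(|p| UnixStream::connect(p).ok())
}
```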

Agent Forwarding

For remote or containerized environments where the SSH key lives on the operator’s workstation:

  • The SSH agent socket is forwarded via ssh -A or ForwardAgent yes in SSH config.
  • $SSH_AUTH_SOCK is set by sshd to the forwarded socket path.
  • The stable symlink pattern (~/.ssh/agent.sock) provides systemd user services access to the forwarded agent, since systemd services do not inherit the per-session $SSH_AUTH_SOCK.
  • The Konductor profile.d hook creates and maintains this symlink automatically on each SSH login.

This architecture allows vault unlock via SSH agent even when running inside a VM or container, provided the SSH agent is forwarded from the host.

FIDO2 / WebAuthn Backend

Status: Design Intent. The AuthFactorId::Fido2 variant exists in core-types::auth and the VaultAuthBackend trait is defined in core-auth::backend, but no struct implements this factor today. This page documents what the backend will do when built, grounded in the trait interface and FIDO2 standards.

The FIDO2 backend enables vault unlock using CTAP2-compliant authenticators – USB security keys, platform authenticators, and BLE/NFC tokens. It maps to AuthFactorId::Fido2 (config string "fido2") and operates through libfido2 directly, without a browser or the WebAuthn JavaScript API.

Relevant Standards

Standard                                      Role
CTAP2 (Client to Authenticator Protocol 2.1)  Wire protocol between host and authenticator. Open Sesame acts as the CTAP2 platform (client).
WebAuthn Level 2 (W3C)                        Defines the relying party model, credential creation, and assertion ceremonies. Open Sesame borrows the data model (RP ID, credential ID, user handle) but does not use a browser.
HMAC-secret extension (CTAP 2.1)              Allows the authenticator to compute a deterministic symmetric secret from a caller-provided salt, without exposing the credential private key. This is the primary key-derivation mechanism.
credProtect extension                         Controls whether credentials are discoverable without user verification. Should be set to level 2 or 3 to prevent silent credential enumeration.

Mapping to VaultAuthBackend

factor_id()

Returns AuthFactorId::Fido2.

backend_id()

Returns "fido2".

name()

Returns "FIDO2/WebAuthn" (for overlay display and audit logs).

requires_interaction()

Returns AuthInteraction::HardwareTouch. CTAP2 authenticators require user presence (UP) at minimum; most also support user verification (UV) via on-device PIN or biometric. Both require physical interaction.

is_enrolled(profile, config_dir)

Checks whether a file {config_dir}/profiles/{profile}/fido2.enrollment exists and contains a valid enrollment blob (see Enrollment Blob Format below). The enrollment record contains the credential ID, relying party ID, and the wrapped master key blob. This is a synchronous filesystem check with no device communication.

can_unlock(profile, config_dir)

  1. Verify enrollment exists via is_enrolled().
  2. Enumerate connected FIDO2 devices via libfido2 device enumeration.
  3. Return true if at least one device is present.

Device enumeration over HID typically completes in under 20ms, well within the 100ms trait budget. This method does not verify that the connected device holds the enrolled credential – that requires a CTAP2 transaction and user interaction, which is deferred to unlock().

enroll(profile, master_key, config_dir, salt, selected_key_index)

Enrollment proceeds as follows:

  1. Enumerate connected FIDO2 authenticators. If selected_key_index is Some(i), select the i-th device; otherwise select the first.
  2. Construct a relying party ID: open-sesame:{profile} (synthetic, not a web origin).
  3. Generate a random 32-byte user ID and a random 16-byte challenge.
  4. Perform authenticatorMakeCredential with:
    • Algorithm: ES256 (COSE -7) preferred, EdDSA (COSE -8) as fallback.
    • Extensions: hmac-secret: true, credProtect: 2, rk: true (resident key).
    • User verification: preferred (UV if the device supports it).
  5. Store the attestation response (credential ID, public key, attestation object).
  6. Immediately perform a getAssertion with the hmac-secret extension, passing salt as the HMAC-secret salt input. The authenticator returns a 32-byte HMAC output.
  7. Use the HMAC output as a key-encryption key (KEK). Wrap master_key under this KEK using AES-256-GCM with a random 12-byte nonce.
  8. Serialize and write the enrollment blob to {config_dir}/profiles/{profile}/fido2.enrollment.

unlock(profile, config_dir, salt)

Unlock proceeds as follows:

  1. Load and deserialize the enrollment blob.
  2. Perform authenticatorGetAssertion for the enrolled RP ID and credential ID, with:
    • Extensions: hmac-secret with salt as input.
    • User verification: preferred.
  3. The authenticator returns a 32-byte HMAC output (the KEK) and an assertion signature.
  4. Unwrap the master key from the enrollment blob using the KEK (AES-256-GCM decrypt).
  5. If unwrap fails (wrong device or tampered blob), return AuthError::UnwrapFailed.
  6. Return UnlockOutcome:
    • master_key: the unwrapped 32-byte key.
    • ipc_strategy: IpcUnlockStrategy::DirectMasterKey.
    • factor_id: AuthFactorId::Fido2.
    • audit_metadata: {"aaguid": "<hex>", "credential_id": "<hex>", "uv": "true|false"}.

revoke(profile, config_dir)

Deletes {config_dir}/profiles/{profile}/fido2.enrollment. Does not attempt to delete the resident credential from the authenticator (CTAP2 does not guarantee remote deletion support across all devices).

Enrollment Blob Format

Version: u8 (1)
RP ID: length-prefixed UTF-8
Credential ID: length-prefixed bytes
Public Key (COSE): length-prefixed bytes
Attestation Object: length-prefixed bytes (optional, for future policy use)
Wrapped Master Key: 12-byte nonce || ciphertext || 16-byte GCM tag

The blob is versioned to allow schema evolution. The version byte is checked on load; unknown versions produce AuthError::InvalidBlob.

FactorContribution

  • AuthCombineMode::Any or AuthCombineMode::Policy: The backend provides FactorContribution::CompleteMasterKey. It independently unwraps the full 32-byte master key from its enrollment blob.
  • AuthCombineMode::All: The backend provides FactorContribution::FactorPiece. The 32-byte HMAC-secret output is contributed as one input to the combined HKDF derivation. In this mode, enrollment does not wrap the master key; it stores only the credential ID and RP ID. The HMAC-secret output itself is the piece.

Platform Authenticator vs Roaming Authenticator

FIDO2 defines two authenticator attachment modalities:

  • Platform authenticators are built into the host device (e.g., Windows Hello TPM-backed key, macOS Touch ID Secure Enclave key, Android biometric key). On Linux desktops, platform authenticators are uncommon.
  • Roaming authenticators are external devices connected via USB HID, NFC, or BLE (e.g., YubiKey 5, SoloKeys, Google Titan, Nitrokey).

This backend targets roaming authenticators. For platform biometric unlock on Linux, the Biometrics backend (AuthFactorId::Fingerprint) is the appropriate choice – it uses fprintd/polkit rather than CTAP2.

Browser-less Operation

Open Sesame communicates directly with authenticators via libfido2, the reference CTAP2 C library maintained by Yubico. Consequences:

  • No origin binding. The RP ID is a synthetic string (open-sesame:{profile}), not a web origin. There is no TLS channel binding.
  • No browser UI. The daemon overlay prompts the user to touch the authenticator. The backend blocks on the CTAP2 transaction until UP/UV is satisfied or a timeout expires.
  • Attestation is informational. The attestation object is stored for optional future policy enforcement (e.g., restricting enrollment to FIPS-certified authenticators via FIDO Metadata Service lookup) but is not verified during normal unlock.

Integration Dependencies

Dependency                              Type              Purpose
libfido2 >= 1.13                        System C library  CTAP2 HID/NFC/BLE transport
libfido2-dev                            System package    Build-time headers and pkg-config
Rust crate: libfido2 or ctap-hid-fido2  Cargo dependency  Safe Rust bindings
udev rule or plugdev group              System config     User access to /dev/hidraw* devices

Threat Model Considerations

  • Deterministic KEK. The HMAC-secret output is deterministic for a given (credential, salt) pair. Changing the vault salt invalidates the KEK; re-enrollment is required after re-keying.
  • Loss recovery. If the authenticator is lost or destroyed, the enrollment blob is useless. Recovery requires another enrolled factor (password, SSH agent, etc.).
  • Clone resistance. Depends on the authenticator hardware. Devices with a secure element (YubiKey 5, SoloKeys v2) resist cloning. Software-only CTAP2 implementations (e.g., libfido2 soft token) provide no clone resistance.
  • PIN brute-force. CTAP2 authenticators implement per-device PIN retry counters with lockout. This is enforced by the authenticator firmware, not by Open Sesame.
  • Relay attacks. An attacker with network access to the USB HID device could relay CTAP2 messages. Physical proximity verification is delegated to the authenticator’s UP mechanism.

TPM 2.0 Backend

Status: Design Intent. The AuthFactorId::Tpm variant exists in core-types::auth and the VaultAuthBackend trait is defined in core-auth::backend, but no struct implements this factor today. This page documents what the backend will do when built, grounded in the trait interface and TPM 2.0 standards.

The TPM backend enables vault unlock by sealing the master key to the platform’s Trusted Platform Module. The sealed blob can only be unsealed when the TPM’s Platform Configuration Registers (PCRs) match the values recorded at seal time, binding the vault to a specific machine in a specific boot state. It maps to AuthFactorId::Tpm (config string "tpm").

Relevant Standards

Specification                                   Role
TPM 2.0 Library Specification (TCG)             Defines the TPM command set, key hierarchies, sealing, and PCR operations.
TCG PC Client Platform TPM Profile (PTP)        Specifies PCR allocation and boot measurement conventions for PC platforms.
tpm2-tss (TCG Software Stack)                   Userspace C library providing ESAPI, FAPI, and TCTI layers for TPM communication.
tpm2-tools                                      Command-line tools built on tpm2-tss, useful for enrollment scripting and debugging.
Linux IMA (Integrity Measurement Architecture)  Extends PCR 10 with file hashes during runtime. Optional extension point for runtime integrity.

Core Concept: Sealing to PCR State

TPM 2.0 sealing binds a data blob to an authorization policy that includes PCR values. The TPM only unseals the blob if the current PCR values match the policy. This creates a hardware-enforced link between the vault key and boot integrity state:

  1. At enrollment, the backend reads the current PCR values, constructs an authorization policy from them, and seals the master key under the TPM’s Storage Root Key (SRK) with that policy.
  2. At unlock, the backend asks the TPM to unseal the blob. The TPM internally compares current PCR values against the sealed policy. If they match, the blob is released. If any measured component has changed, unsealing fails.

PCR Selection

The default PCR selection for desktop Linux:

PCR  Measures
0    UEFI firmware code
1    UEFI firmware configuration
2    Option ROMs / external firmware
3    Option ROM configuration
7    Secure Boot state (PK, KEK, db, dbx)

PCRs 4-6 (boot manager, GPT, resume events) are intentionally excluded by default because kernel updates would invalidate the seal on every update. The PCR set is configurable at enrollment time.

Extending to PCR 11 (unified kernel image measurement, used by systemd-stub) or PCR 10 (IMA) is supported as an opt-in for higher-assurance configurations.

Mapping to VaultAuthBackend

factor_id()

Returns AuthFactorId::Tpm.

backend_id()

Returns "tpm".

name()

Returns "TPM 2.0".

requires_interaction()

Returns AuthInteraction::None. TPM unsealing is a silent, non-interactive operation. The TPM does not require user presence for unsealing (unlike FIDO2). If a TPM PIN (authValue) is configured on the sealed object, the interaction type changes to AuthInteraction::PasswordEntry.

is_enrolled(profile, config_dir)

Checks whether {config_dir}/profiles/{profile}/tpm.enrollment exists and contains a valid sealed blob with a recognized version byte.

can_unlock(profile, config_dir)

  1. Verify enrollment exists via is_enrolled().
  2. Open a connection to the TPM via the TCTI (typically /dev/tpmrm0, the kernel resource manager).
  3. Return true if the TPM device is accessible.

PCR matching is not checked here – a trial unseal could exceed the 100ms budget and may trigger rate limiting on some TPM implementations.

enroll(profile, master_key, config_dir, salt, selected_key_index)

  1. Open a TPM context via tpm2-tss ESAPI.
  2. Read the current PCR values for the configured PCR selection (default: 0, 1, 2, 3, 7).
  3. Build a PolicyPCR authorization policy from the PCR digest.
  4. Optionally, combine with PolicyAuthValue if the user wants a TPM PIN (defense-in-depth against evil-maid attacks where PCRs match but an attacker has physical access).
  5. Create a sealed object under the SRK (Storage Hierarchy, persistent handle 0x81000001 or equivalent):
    • Object type: TPM2_ALG_KEYEDHASH with seal attribute.
    • Data: the 32-byte master_key.
    • Auth policy: the PCR policy (and optionally PIN policy).
  6. Persist the sealed object to a TPM NV index, or serialize the public/private portions to disk.
  7. Write the enrollment blob to {config_dir}/profiles/{profile}/tpm.enrollment.

selected_key_index is ignored – there is only one TPM per machine.

unlock(profile, config_dir, salt)

  1. Load the enrollment blob and deserialize the sealed object context.
  2. Open a TPM context via ESAPI.
  3. Load the sealed object into the TPM.
  4. Start a policy session. Execute PolicyPCR with the enrolled PCR selection.
  5. If a TPM PIN was configured, execute PolicyAuthValue and provide the PIN.
  6. Call TPM2_Unseal with the policy session.
  7. If unsealing succeeds, the TPM returns the 32-byte master key.
  8. If unsealing fails (PCR mismatch), return AuthError::UnwrapFailed. The audit metadata should include which PCRs diverged, if determinable.
  9. Return UnlockOutcome:
    • master_key: the unsealed 32-byte key.
    • ipc_strategy: IpcUnlockStrategy::DirectMasterKey.
    • factor_id: AuthFactorId::Tpm.
    • audit_metadata: {"pcr_selection": "0,1,2,3,7", "tpm_manufacturer": "<vendor>"}.

revoke(profile, config_dir)

  1. If the sealed object was persisted to a TPM NV index, evict it with TPM2_EvictControl.
  2. Delete {config_dir}/profiles/{profile}/tpm.enrollment.

Enrollment Blob Format

Version: u8 (1)
PCR selection: u32 bitmask (bit N set = PCR N included)
PCR digest at seal time: 32 bytes (SHA-256)
Sealed object public area: length-prefixed bytes (TPM2B_PUBLIC)
Sealed object private area: length-prefixed bytes (TPM2B_PRIVATE)
SRK handle: u32
PIN flag: u8 (0 = no PIN, 1 = PolicyAuthValue included)
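The PCR-selection bitmask can be sketched as an encode/decode pair (this backend is design intent, so the function names are illustrative):

```rust
/// Encode a PCR list as the u32 bitmask stored in the enrollment blob
/// (bit N set = PCR N included).
fn encode_pcr_mask(pcrs: &[u8]) -> u32 {
    pcrs.iter().fold(0u32, |mask, &pcr| mask | (1u32 << pcr))
}

/// Decode the bitmask back into an ordered PCR list.
fn decode_pcr_mask(mask: u32) -> Vec<u8> {
    (0u8..32).filter(|&pcr| mask & (1u32 << pcr) != 0).collect()
}
```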

FactorContribution

  • AuthCombineMode::Any or AuthCombineMode::Policy: The backend provides FactorContribution::CompleteMasterKey. The TPM directly unseals the full 32-byte master key.
  • AuthCombineMode::All: The backend provides FactorContribution::FactorPiece. At enrollment, a random 32-byte piece is sealed (not the master key itself). At unlock, the unsealed piece is contributed to the combined HKDF derivation.

Measured Boot Chain

The security of this backend depends on the integrity of the measured boot chain:

  1. UEFI firmware measures itself and boot configuration into PCRs 0-3.
  2. Shim / bootloader (GRUB, systemd-boot) is measured by the firmware into PCR 4.
  3. Secure Boot state (whether Secure Boot is enabled, which keys are enrolled) is reflected in PCR 7.
  4. Kernel and initramfs, if using systemd-stub unified kernel images (UKI), are measured into PCR 11.

If an attacker modifies any component in this chain, the corresponding PCR value changes, and the TPM refuses to unseal the vault key.

Firmware and Kernel Updates

After a firmware or kernel update, PCR values change and the sealed blob becomes invalid. Strategies to manage this:

  • Predictive re-sealing. Before a kernel update, predict the new PCR values (using systemd-pcrphase or systemd-measure) and create a second sealed blob for the new values. Delete the old one after successful boot.
  • Fallback factor. Always maintain a second enrolled factor (password, SSH agent) so access is not lost when PCR values change unexpectedly.
  • PCR selection trade-offs. Excluding volatile PCRs (4, 5, 6) from the policy reduces re-enrollment frequency at the cost of reduced boot integrity coverage.

Platform Binding

The TPM is a physical chip (or firmware TPM) soldered to the motherboard. The sealed blob is bound to that specific TPM – it cannot be moved to another machine. This provides:

  • Hardware binding. The vault is tied to a specific physical device.
  • Anti-theft. A stolen drive cannot be unlocked on another machine.
  • Anti-cloning. TPM private keys cannot be extracted (the TPM is designed to resist physical attacks on the chip package).

Integration Dependencies

Dependency             Type              Purpose
tpm2-tss >= 4.0        System C library  ESAPI, FAPI, and TCTI for TPM communication
tpm2-tss-devel         System package    Build-time headers
Rust crate: tss-esapi  Cargo dependency  Safe Rust bindings to tpm2-tss ESAPI
/dev/tpmrm0            Kernel device     TPM resource manager (kernel >= 4.12)
tpm2-abrmd (optional)  System service    Userspace resource manager (alternative to kernel RM)
tpm2-tools (optional)  System package    Debugging and manual enrollment scripting

The user must have read/write access to /dev/tpmrm0 (typically via the tss group or a udev rule).

Threat Model Considerations

  • Evil-maid with matching PCRs. If an attacker can reproduce the exact boot chain (same firmware, same bootloader, same Secure Boot keys), they can unseal the key. Adding a TPM PIN (PolicyAuthValue) mitigates this.
  • Firmware TPM (fTPM) vulnerabilities. Firmware TPMs run inside the CPU or chipset firmware. Vulnerabilities in fTPM firmware (e.g., AMD fTPM voltage glitching) can potentially extract sealed data. Discrete TPM chips (e.g., Infineon SLB9670) offer stronger physical resistance.
  • Running-system compromise. TPM sealing protects at-rest data. Once the system is booted and the vault is unlocked, an attacker with root access can read the master key from daemon-secrets process memory. TPM does not protect against runtime compromise.
  • PCR reset attacks. On some platforms, a hardware reset of the TPM (e.g., via LPC bus manipulation) can reset PCRs to zero. Sealing to PCR 7 (Secure Boot state) partially mitigates this because Secure Boot measurements are replayed from firmware on reset.
  • vTPM in virtualized environments. A virtual TPM provides no physical security. The hypervisor can read all sealed data. TPM enrollment in a VM is useful for binding a vault to a specific VM instance, not for hardware-level tamper resistance.

Biometrics Backend

Status: Design Intent. The AuthFactorId::Fingerprint variant exists in core-types::auth and the VaultAuthBackend trait is defined in core-auth::backend, but no struct implements this factor today. This page documents what the backend will do when built, grounded in the trait interface and platform biometric APIs.

The biometrics backend enables vault unlock gated by fingerprint verification (and, in the future, other biometric modalities such as face recognition). It maps to AuthFactorId::Fingerprint (config string "fingerprint"). The critical design principle: biometric data is never used as key material. Biometrics are authentication gates that release a stored key, not secrets from which keys are derived.

Design Principle: Biometrics Are Not Secrets

Biometric features (fingerprint minutiae, facial geometry) are not secret – they can be observed, photographed, or lifted from surfaces. They are also not stable – they vary between readings. For these reasons, the biometrics backend never derives cryptographic key material from biometric data. Instead:

  1. At enrollment, the master key (or a KEK) is encrypted and stored on disk.
  2. The decryption key for that blob is held in a platform keystore that requires biometric verification to release.
  3. At unlock, the platform biometric subsystem verifies the user, and if successful, releases the decryption key to the backend.

The biometric template (the mathematical representation of the fingerprint or face) never leaves the platform biometric subsystem. Open Sesame never sees, stores, or transmits biometric data.

Platform Biometric APIs

Linux: fprintd

On Linux, fingerprint authentication is mediated by fprintd, a D-Bus service that manages fingerprint readers and templates. The authentication flow:

  1. The backend calls net.reactivated.Fprint.Device.VerifyStart on the fprintd D-Bus interface.
  2. fprintd communicates with the fingerprint sensor hardware via libfprint, acquires a fingerprint image, and matches it against enrolled templates.
  3. On match, fprintd emits a VerifyStatus signal with verify-match. On failure, verify-no-match or verify-retry-scan.
  4. The backend calls VerifyStop to end the session.

The backend uses the fprintd D-Bus API directly (not PAM) to avoid requiring a full PAM session context.

Future: macOS LocalAuthentication

On macOS (if platform support is added), LocalAuthentication.framework provides Touch ID and Face ID gating of Keychain items. A Keychain item with kSecAccessControlBiometryCurrentSet requires biometric verification before the Keychain releases the stored secret. This maps directly to the “biometric gates release of a stored key” model.

Future: Windows Hello

On Windows, Windows.Security.Credentials.KeyCredentialManager and the Windows Hello biometric subsystem provide similar gating. The TPM-backed key is released only after Windows Hello verification succeeds.

Mapping to VaultAuthBackend

factor_id()

Returns AuthFactorId::Fingerprint.

backend_id()

Returns "fingerprint".

name()

Returns "Fingerprint".

requires_interaction()

Returns AuthInteraction::HardwareTouch. The user must place their finger on the sensor.

is_enrolled(profile, config_dir)

Checks two conditions:

  1. An enrollment blob exists at {config_dir}/profiles/{profile}/fingerprint.enrollment.
  2. At least one fingerprint is enrolled in fprintd for the current system user (queried via net.reactivated.Fprint.Device.ListEnrolledFingers).

Both must be true. If the system fingerprint enrollment is wiped (e.g., the user re-enrolled fingers in system settings), the Open Sesame enrollment blob still exists on disk, but platform verification will match against different templates, rendering the blob effectively stale.

can_unlock(profile, config_dir)

  1. Verify enrollment exists via is_enrolled().
  2. Check that fprintd is running (D-Bus name net.reactivated.Fprint is available).
  3. Check that at least one fingerprint reader device is present.

D-Bus name lookup and device enumeration complete well within the 100ms trait budget.

enroll(profile, master_key, config_dir, salt, selected_key_index)

  1. Verify that fprintd has at least one enrolled fingerprint for the current user. If not, return AuthError::BackendNotApplicable("no fingerprints enrolled in fprintd; enroll via system settings first").
  2. Generate a random 32-byte storage key.
  3. Wrap master_key under the storage key using AES-256-GCM.
  4. Store the storage key in a location protected by biometric gating:
    • Primary strategy (Linux): Store the storage key in the user’s kernel keyring (keyctl) under a session-scoped key. The keyring entry is created with a timeout matching the user session. Biometric verification via fprintd acts as the authorization gate before the backend retrieves the keyring secret at unlock time.
    • Fallback strategy: Encrypt the storage key with a key derived from salt and a device-specific identifier (machine-id). Store the encrypted storage key in the enrollment blob itself. The biometric check acts as the sole authorization gate.
  5. Write the enrollment blob to {config_dir}/profiles/{profile}/fingerprint.enrollment.

selected_key_index is ignored (there is only one biometric subsystem per machine).
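Steps 2–3 of enrollment (generate a random storage key, wrap the master key under it) and the matching unwrap at unlock can be sketched as below. Python is used for illustration, and because the standard library has no AES-GCM, the sketch substitutes an HMAC-SHA256 counter-mode keystream with an encrypt-then-MAC tag as a stand-in for the AES-256-GCM wrapping the text specifies; the `nonce || ciphertext || tag` layout mirrors the enrollment blob format.

```python
import hmac, hashlib, secrets

def _keystream(key: bytes, nonce: bytes, n: int) -> bytes:
    # Counter-mode keystream from HMAC-SHA256 (stdlib stand-in for AES).
    out, counter = b"", 0
    while len(out) < n:
        out += hmac.new(key, nonce + counter.to_bytes(4, "big"), hashlib.sha256).digest()
        counter += 1
    return out[:n]

def wrap(storage_key: bytes, master_key: bytes) -> bytes:
    # Produces nonce || ciphertext || 16-byte tag, as in the blob layout.
    nonce = secrets.token_bytes(12)
    ct = bytes(a ^ b for a, b in zip(master_key, _keystream(storage_key, nonce, len(master_key))))
    tag = hmac.new(storage_key, nonce + ct, hashlib.sha256).digest()[:16]
    return nonce + ct + tag

def unwrap(storage_key: bytes, blob: bytes) -> bytes:
    nonce, ct, tag = blob[:12], blob[12:-16], blob[-16:]
    expected = hmac.new(storage_key, nonce + ct, hashlib.sha256).digest()[:16]
    if not hmac.compare_digest(tag, expected):
        raise ValueError("UnwrapFailed")  # analogous to AuthError::UnwrapFailed
    return bytes(a ^ b for a, b in zip(ct, _keystream(storage_key, nonce, len(ct))))

storage_key = secrets.token_bytes(32)   # step 2: random 32-byte storage key
master_key = secrets.token_bytes(32)
blob = wrap(storage_key, master_key)    # step 3: wrap master_key
assert unwrap(storage_key, blob) == master_key
```

Unwrapping with any other key fails the tag check, which is the condition that surfaces as AuthError::UnwrapFailed in the unlock flow.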

unlock(profile, config_dir, salt)

  1. Load the enrollment blob.
  2. Initiate fingerprint verification via fprintd D-Bus API (VerifyStart).
  3. Wait for the VerifyStatus signal. The daemon overlay displays a “scan your fingerprint” prompt.
  4. If verification fails (no match, timeout, or sensor error), return AuthError::UnwrapFailed.
  5. If verification succeeds, retrieve the storage key from the kernel keyring (primary strategy) or decrypt it from the blob (fallback strategy).
  6. Unwrap the master key using the storage key (AES-256-GCM decrypt).
  7. Return UnlockOutcome:
    • master_key: the unwrapped 32-byte key.
    • ipc_strategy: IpcUnlockStrategy::DirectMasterKey.
    • factor_id: AuthFactorId::Fingerprint.
    • audit_metadata: {"method": "fprintd", "finger": "<which_finger>"} (if fprintd reports which finger matched).

revoke(profile, config_dir)

  1. Remove the storage key from the kernel keyring (if using primary strategy).
  2. Delete {config_dir}/profiles/{profile}/fingerprint.enrollment.

Does not remove fingerprints from fprintd – those are system-level enrollments managed by the user outside of Open Sesame.

Enrollment Blob Format

Version: u8 (1)
Storage strategy: u8 (1 = kernel keyring, 2 = embedded encrypted key)
Wrapped master key: 12-byte nonce || ciphertext || 16-byte GCM tag
Embedded encrypted storage key (strategy 2 only): 12-byte nonce || ciphertext || 16-byte tag
Device binding hash: 32 bytes (SHA-256 of machine-id || profile name)
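A parser for this layout might look like the following sketch. Python is used for illustration; the fixed 60-byte wrapped-key size assumes the ciphertext wraps a 32-byte key (12-byte nonce + 32-byte ciphertext + 16-byte tag), and the function name is hypothetical.

```python
def parse_fingerprint_blob(data: bytes) -> dict:
    # Layout per the enrollment blob format above.
    WRAPPED = 12 + 32 + 16          # nonce || ciphertext || GCM tag
    version, strategy = data[0], data[1]
    if version != 1:
        raise ValueError(f"unsupported version {version}")
    if strategy not in (1, 2):      # 1 = kernel keyring, 2 = embedded key
        raise ValueError(f"unknown storage strategy {strategy}")
    off = 2
    wrapped_master = data[off:off + WRAPPED]; off += WRAPPED
    embedded = None
    if strategy == 2:               # embedded encrypted storage key present
        embedded = data[off:off + WRAPPED]; off += WRAPPED
    binding = data[off:off + 32]; off += 32
    if off != len(data) or len(binding) != 32:
        raise ValueError("truncated enrollment blob")
    return {"version": version, "strategy": strategy,
            "wrapped_master_key": wrapped_master,
            "embedded_storage_key": embedded,
            "device_binding_hash": binding}

# Strategy-1 example: version, strategy, wrapped key, binding hash.
blob = bytes([1, 1]) + bytes(60) + bytes(32)
parsed = parse_fingerprint_blob(blob)
```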

FactorContribution

  • AuthCombineMode::Any or AuthCombineMode::Policy: The backend provides FactorContribution::CompleteMasterKey. It unwraps the full master key after biometric verification succeeds.
  • AuthCombineMode::All: The backend provides FactorContribution::FactorPiece. A random 32-byte piece (not the master key) is stored behind the biometric gate and contributed to HKDF derivation upon successful verification.

The biometric itself does not contribute entropy – it is a gate. The piece is a random value generated at enrollment time and stored behind the biometric gate.

Liveness Detection

Fingerprint sensors vary in their resistance to spoofing:

| Sensor Type | Spoofing Resistance | Notes |
| --- | --- | --- |
| Capacitive (most laptop sensors) | Moderate | Detects electrical properties of skin. Gummy fingerprints with conductive material can sometimes fool them. |
| Ultrasonic (e.g., Qualcomm 3D Sonic) | High | Measures sub-dermal features. More resistant to printed or molded replicas. |
| Optical (common in USB readers) | Low | Easiest to spoof with printed or molded fingerprints. |

Open Sesame delegates liveness detection entirely to the sensor hardware and fprintd. The backend does not attempt its own liveness checks. Deployment guidance: use capacitive or ultrasonic sensors for security-sensitive configurations, and combine biometric with a second factor via AuthCombineMode::Policy.

Privacy Guarantees

  1. No template storage. Open Sesame never stores, transmits, or processes biometric templates. Templates are managed exclusively by fprintd (stored in /var/lib/fprint/).
  2. No template access. The backend never requests raw biometric data or template bytes. It uses only the verify/match API, which returns a boolean result.
  3. No cross-profile linkability. The enrollment blob contains no biometric information. An attacker who obtains the blob cannot determine whose fingerprint unlocks the vault.
  4. User-controlled deletion. Revoking the backend deletes only the encrypted key blob. Biometric templates remain under user control in fprintd.

Integration Dependencies

| Dependency | Type | Purpose |
| --- | --- | --- |
| fprintd >= 1.94 | System service | Fingerprint verification via D-Bus |
| libfprint >= 1.94 | System library | Sensor driver layer (used by fprintd) |
| Rust crate: zbus | Cargo dependency | D-Bus client for fprintd communication |
| Rust crate: keyutils | Cargo dependency | Linux kernel keyring access (primary storage strategy) |
| Compatible fingerprint reader | Hardware | Any reader supported by libfprint |

Threat Model Considerations

  • Biometric spoofing. The backend is only as spoof-resistant as the sensor hardware. It should not be the sole factor for high-value vaults. Combining biometric with password or FIDO2 via AuthCombineMode::Policy is recommended.
  • Stolen enrollment blob. The blob is useless without passing biometric verification (primary strategy) or without the device-specific derivation inputs (fallback strategy). The biometric gate is the critical protection.
  • fprintd compromise. If an attacker can inject false D-Bus responses (by compromising fprintd or the user’s D-Bus session), they can bypass biometric verification. Running fprintd as a system service (not user session) and using D-Bus mediation via AppArmor or SELinux mitigates this.
  • Irrevocable biometrics. If a fingerprint is compromised (lifted from a surface), the user cannot change their fingerprint. Mitigation: re-enroll with a different finger and revoke the old enrollment, or add a second factor requirement via policy.
  • Fallback strategy weakness. The embedded-key fallback strategy protects the storage key only with device-specific derivation (machine-id + salt). An attacker with the enrollment blob and knowledge of the machine-id can bypass the biometric gate entirely. The primary strategy (kernel keyring) is strongly preferred.

See Also

Hardware Tokens Backend (YubiKey / Smart Cards / PIV)

Status: Design Intent. The AuthFactorId::Yubikey variant exists in core-types::auth and the VaultAuthBackend trait is defined in core-auth::backend, but no struct implements this factor today. This page documents what the backend will do when built, grounded in the trait interface and relevant smart card standards.

The hardware tokens backend enables vault unlock using YubiKeys, PIV smart cards, and PKCS#11-compatible cryptographic tokens. It maps to AuthFactorId::Yubikey (config string "yubikey"). Despite the enum variant name referencing YubiKey specifically, the backend is designed to support the broader category of challenge-response and certificate-based hardware tokens.

This backend covers the non-FIDO2 capabilities of these devices. For FIDO2/CTAP2 operation, see the FIDO2/WebAuthn backend.

Supported Protocols

PIV (FIPS 201 / NIST SP 800-73)

Personal Identity Verification is a US government standard for smart card authentication. PIV cards (and YubiKeys with the PIV applet) contain X.509 certificates and corresponding private keys in hardware slots. The private key never leaves the card.

PIV slots relevant to Open Sesame:

| Slot | Purpose | PIN Policy | Touch Policy |
| --- | --- | --- | --- |
| 9a | PIV Authentication | Once per session | Configurable |
| 9c | Digital Signature | Always | Configurable |
| 9d | Key Management | Once per session | Configurable |
| 9e | Card Authentication | Never | Never |

Slot 9d (Key Management) is the natural fit for vault unlock – it is designed for key agreement and encryption operations, has a reasonable PIN policy (once per session), and supports touch policy configuration.

HMAC-SHA1 Challenge-Response (YubiKey Slot 2)

YubiKeys support HMAC-SHA1 challenge-response in their OTP applet (slots 1 and 2). The host sends a challenge, the YubiKey computes HMAC-SHA1 with a pre-programmed 20-byte secret, and returns the 20-byte response. This is the same mechanism used by ykman and ykchalresp.

Slot 2 (long press) is conventionally used for challenge-response to avoid conflicts with slot 1 (short press, often configured for OTP).

PKCS#11

PKCS#11 is the generic smart card interface. Any token with a PKCS#11 module (OpenSC, YubiKey YKCS11, Nitrokey, etc.) can be used. The backend loads the PKCS#11 shared library, finds a suitable private key object, and performs a sign or decrypt operation.

Mapping to VaultAuthBackend

factor_id()

Returns AuthFactorId::Yubikey.

backend_id()

Returns "yubikey".

name()

Returns "Hardware Token".

requires_interaction()

Returns AuthInteraction::HardwareTouch if the enrolled token has a touch policy enabled. Returns AuthInteraction::PasswordEntry if the token requires a PIN but no touch. The interaction type is recorded in the enrollment blob and returned by this method.

Most configurations require touch (physical presence), so HardwareTouch is the common case.

is_enrolled(profile, config_dir)

Checks whether {config_dir}/profiles/{profile}/yubikey.enrollment exists and contains a valid enrollment blob.

can_unlock(profile, config_dir)

  1. Verify enrollment exists.
  2. Based on the enrolled protocol:
    • HMAC-SHA1: Enumerate USB HID devices matching YubiKey vendor/product IDs.
    • PIV/PKCS#11: Attempt to open a PCSC connection and verify that a card is present in a reader.
  3. Return true if a device is detected.

No cryptographic operation is performed (the check must stay within the 100ms trait budget).

enroll(profile, master_key, config_dir, salt, selected_key_index)

The enrollment path depends on the protocol. The backend auto-detects the preferred protocol based on the connected device, or the user specifies via configuration.

HMAC-SHA1 Path

  1. Enumerate connected YubiKeys. If selected_key_index is Some(i), select the i-th device.
  2. Issue a challenge-response using salt as the challenge (hashed to fit the challenge length if needed).
  3. The YubiKey returns a 20-byte HMAC-SHA1 response.
  4. Derive a 32-byte KEK from the HMAC response using HKDF-SHA256: KEK = HKDF-SHA256(ikm=hmac_response, salt=salt, info="open-sesame:yubikey:{profile}").
  5. Wrap master_key under the KEK using AES-256-GCM.
  6. Store the enrollment blob with the YubiKey serial number, slot number, and wrapped master key.
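The HMAC-SHA1 path (steps 2–5) can be reproduced end to end in software, since the device-side operation is just HMAC-SHA1 over the challenge with the 20-byte slot secret. The sketch below simulates the YubiKey's computation with the standard library and implements the RFC 5869 HKDF-SHA256 step from the KEK formula above; the example secret, salt, and profile values are placeholders.

```python
import hmac, hashlib

def yubikey_hmac_sha1(slot_secret: bytes, challenge: bytes) -> bytes:
    # What the YubiKey's OTP applet computes in slot 2, simulated in software.
    return hmac.new(slot_secret, challenge, hashlib.sha1).digest()  # 20 bytes

def hkdf_sha256(ikm: bytes, salt: bytes, info: bytes, length: int = 32) -> bytes:
    # RFC 5869 extract-and-expand, as used in the KEK derivation above.
    prk = hmac.new(salt, ikm, hashlib.sha256).digest()
    okm, block = b"", b""
    for i in range(1, -(-length // 32) + 1):
        block = hmac.new(prk, block + info + bytes([i]), hashlib.sha256).digest()
        okm += block
    return okm[:length]

slot_secret = bytes(range(20))        # placeholder 20-byte slot 2 secret
salt = b"vault-salt-example"          # placeholder vault salt
profile = "work"
response = yubikey_hmac_sha1(slot_secret, salt)   # steps 2-3
kek = hkdf_sha256(response, salt,                 # step 4
                  f"open-sesame:yubikey:{profile}".encode())
```

A different slot secret yields a different KEK, which is why unlocking with the wrong YubiKey fails at the unwrap step rather than at challenge time.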

PIV Path

  1. Open a PCSC connection to the smart card.
  2. Select the PIV applet (AID A0 00 00 03 08).
  3. Authenticate to the card (PIN prompt if required by slot policy).
  4. Read the certificate from the selected slot (default: 9d).
  5. Generate a random 32-byte challenge.
  6. Encrypt the challenge using the public key from the certificate (RSA-OAEP or ECDH depending on key type).
  7. Derive a KEK: KEK = HKDF-SHA256(ikm=challenge, salt=salt, info="open-sesame:piv:{profile}").
  8. Wrap master_key under the KEK.
  9. Store the enrollment blob with the certificate fingerprint (SHA-256), slot number, encrypted challenge, and wrapped master key.

PKCS#11 Path

Follows the PIV path but uses the PKCS#11 API (C_FindObjects, C_Decrypt / C_Sign) instead of raw APDU commands. The enrollment blob additionally stores the PKCS#11 module path and token serial number.

unlock(profile, config_dir, salt)

HMAC-SHA1 Path

  1. Load enrollment blob.
  2. Issue challenge-response with salt as the challenge.
  3. Derive KEK from the HMAC response (same HKDF as enrollment).
  4. Unwrap master key. If unwrap fails (different YubiKey or different slot 2 secret), return AuthError::UnwrapFailed.

PIV Path

  1. Load enrollment blob, including the encrypted challenge.
  2. Open PCSC connection, select PIV applet, authenticate (PIN if required).
  3. Decrypt the encrypted challenge using the card’s private key (slot 9d).
  4. Derive KEK from the decrypted challenge (same HKDF as enrollment).
  5. Unwrap master key.

Common Outcome

Return UnlockOutcome:

  • master_key: the unwrapped 32-byte key.
  • ipc_strategy: IpcUnlockStrategy::DirectMasterKey.
  • factor_id: AuthFactorId::Yubikey.
  • audit_metadata: {"protocol": "hmac-sha1|piv|pkcs11", "serial": "<device_serial>", "slot": "<slot>"}.

revoke(profile, config_dir)

Delete {config_dir}/profiles/{profile}/yubikey.enrollment. Does not modify the token itself (the HMAC secret or PIV keys remain on the device).

Enrollment Blob Format

Version: u8 (1)
Protocol: u8 (1 = HMAC-SHA1, 2 = PIV, 3 = PKCS#11)
Device serial: length-prefixed UTF-8
Slot/key identifier: length-prefixed UTF-8
Interaction type: u8 (maps to AuthInteraction variant)
--- Protocol-specific fields ---
[HMAC-SHA1]
  Wrapped master key: 12-byte nonce || ciphertext || 16-byte GCM tag
[PIV]
  Certificate fingerprint: 32 bytes (SHA-256)
  Encrypted challenge: length-prefixed bytes
  Wrapped master key: 12-byte nonce || ciphertext || 16-byte GCM tag
[PKCS#11]
  Module path: length-prefixed UTF-8
  Token serial: length-prefixed UTF-8
  Certificate fingerprint: 32 bytes
  Encrypted challenge: length-prefixed bytes
  Wrapped master key: 12-byte nonce || ciphertext || 16-byte GCM tag
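The length-prefixed UTF-8 fields in this blob can be encoded and decoded with a small pair of helpers. This Python sketch assumes a big-endian u16 length prefix, which the format above does not specify, and uses placeholder field values.

```python
import struct

def put_str(buf: bytearray, s: str) -> None:
    data = s.encode("utf-8")
    buf += struct.pack(">H", len(data)) + data   # assumed u16 big-endian prefix

def take_str(data: bytes, off: int) -> tuple[str, int]:
    (n,) = struct.unpack_from(">H", data, off)
    off += 2
    return data[off:off + n].decode("utf-8"), off + n

# Encode the fixed header of the hardware-token enrollment blob.
buf = bytearray()
buf.append(1)              # Version
buf.append(1)              # Protocol: 1 = HMAC-SHA1
put_str(buf, "12345678")   # Device serial (placeholder)
put_str(buf, "2")          # Slot/key identifier
buf.append(0)              # Interaction type

# Decode it back.
data = bytes(buf)
version, protocol = data[0], data[1]
serial, off = take_str(data, 2)
slot, off = take_str(data, off)
interaction = data[off]
```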

FactorContribution

  • AuthCombineMode::Any or AuthCombineMode::Policy: The backend provides FactorContribution::CompleteMasterKey. It independently unwraps the full master key.
  • AuthCombineMode::All: The backend provides FactorContribution::FactorPiece. For HMAC-SHA1, the HKDF output derived from the HMAC response is the piece (32 bytes). For PIV, the decrypted challenge is the piece. The piece is contributed to the combined HKDF derivation.

Touch Requirement for Physical Presence

YubiKeys and some smart cards support a touch policy: the device requires the user to physically touch a contact sensor before performing a cryptographic operation. This provides proof of physical presence, mitigating malware that silently uses the token while plugged in.

| Policy | Behavior |
| --- | --- |
| Never | No touch required (default for HMAC-SHA1 on some firmware) |
| Always | Touch required for every operation |
| Cached | Touch required once, cached for 15 seconds |

The backend records the touch policy in the enrollment blob so that requires_interaction() returns the correct AuthInteraction variant. The daemon overlay displays a “touch your key” prompt when AuthInteraction::HardwareTouch is indicated.

HMAC-SHA1 Key Derivation Detail

The HMAC-SHA1 response is only 20 bytes, too short to use directly as a 32-byte AES key. The HKDF-SHA256 expansion step stretches it to 32 bytes:

  • The HMAC-SHA1 secret on the YubiKey is 20 bytes (160 bits), programmed at configuration time.
  • The challenge (vault salt) is up to 64 bytes.
  • The 20-byte HMAC output has at most 160 bits of entropy.
  • HKDF’s security bound is min(input_entropy, hash_output_length) = 160 bits, which exceeds the 128-bit security target for the derived 256-bit KEK.

Integration Dependencies

| Dependency | Type | Purpose |
| --- | --- | --- |
| pcsc-lite + libpcsclite-dev | System library | PCSC smart card access |
| pcscd | System service | Smart card daemon (must be running for PIV/PKCS#11) |
| opensc (optional) | System package | PKCS#11 module and generic smart card drivers |
| ykpers / yubikey-manager (optional) | System library/tool | YubiKey HID communication for HMAC-SHA1 |
| Rust crate: pcsc | Cargo dependency | PCSC bindings for PIV |
| Rust crate: yubikey | Cargo dependency | YubiKey PIV operations |
| Rust crate: cryptoki | Cargo dependency | PKCS#11 bindings |
| Rust crate: yubico-manager or challenge-response | Cargo dependency | HMAC-SHA1 challenge-response |

Threat Model Considerations

  • HMAC-SHA1 secret extraction. The HMAC secret on a YubiKey cannot be read back after programming. Extracting it requires destructive chip analysis.
  • PIN brute-force. PIV PINs have a retry counter (default 3 attempts before lockout). After lockout, the PUK (PIN Unlock Key) is required. After PUK lockout, the PIV applet must be reset (destroying all keys).
  • Token loss. If the token is lost, the enrollment blob is useless without the physical device. Recovery requires an alternative enrolled factor.
  • Relay attacks (HMAC-SHA1). HMAC-SHA1 challenge-response over USB HID can be relayed over a network. Touch policy (set to “always”) mitigates this by requiring physical presence.
  • Relay attacks (PIV). Smart card operations over PCSC can be relayed using tools like virtualsmartcard. Touch policy on YubiKey PIV mitigates this.
  • SHA-1 and HMAC-SHA1. HMAC-SHA1 is not affected by SHA-1 collision attacks. HMAC security depends on the PRF property of the compression function, not collision resistance. HMAC-SHA1 remains secure for key derivation.

See Also

  • Factor Architecture – VaultAuthBackend trait definition and dispatch
  • FIDO2/WebAuthn – FIDO2 mode of the same hardware (different protocol)
  • TPM – Platform-bound hardware factor (non-portable)
  • Policy Engine – Multi-factor combination modes

Self-Encrypting Drive (SED) / TCG Opal Backend

Status: Design Intent. No AuthFactorId variant exists for SED/Opal today. This backend is a future extension that would require adding an AuthFactorId::SedOpal variant to core-types::auth. The VaultAuthBackend trait in core-auth::backend defines the interface it would implement. This page documents the design intent.

The SED/Opal backend enables vault unlock by binding the master key to a Self-Encrypting Drive’s hardware encryption using the TCG Opal 2.0 specification. The drive’s encryption controller holds the key material, accessible only after the drive is unlocked with the correct credentials. This provides protection against physical drive theft without relying on software-layer full-disk encryption.

Relevant Standards

| Specification | Role |
| --- | --- |
| TCG Opal 2.0 (Trusted Computing Group) | Defines the SED management interface: locking ranges, authentication, band management, and the Security Protocol command set. |
| TCG Opal SSC (Security Subsystem Class) | Profile of TCG Storage that Opal-compliant drives implement. |
| ATA Security Feature Set | Legacy drive locking (ATA password). Opal supersedes this but some drives support both. |
| NVMe Security Send/Receive | Transport for TCG commands on NVMe drives. |
| IEEE 1667 | Silo-based authentication for storage devices (used by some USB encrypted drives). |

Core Concept: Drive-Bound Vault Keys

A Self-Encrypting Drive transparently encrypts all data written to it using a Media Encryption Key (MEK) stored in the drive controller. The MEK is wrapped by a Key Encryption Key (KEK) derived from the user’s authentication credential. Without the correct credential, the MEK cannot be unwrapped and the drive contents are cryptographically inaccessible.

The SED/Opal backend leverages this mechanism for vault key storage:

  1. At enrollment, the backend stores the vault master key within an Opal locking range’s DataStore table. The locking range is protected by an Opal credential that the backend manages.
  2. At unlock, the backend authenticates to the Opal Security Provider (SP) using the stored credential, reads the master key from the DataStore, and provides it to daemon-secrets.

Locking Range Architecture

Opal drives support multiple locking ranges. The backend uses a dedicated locking range for Open Sesame, separate from the global locking range (which may be used for full-disk encryption by the OS):

  • Global Range (Range 0): Managed by the OS or firmware for full-disk encryption (e.g., sedutil-cli, BitLocker, systemd-cryptenroll).
  • Dedicated Range (Range N): A small range allocated for Open Sesame DataStore usage. Contains only the encrypted vault master key blob.

If a dedicated range cannot be allocated (drive does not support multiple ranges, or all ranges are in use), the backend falls back to storing the wrapped key in the DataStore table associated with the Admin SP.

Mapping to VaultAuthBackend

factor_id()

Returns AuthFactorId::SedOpal (to be added to the enum).

backend_id()

Returns "sed-opal".

name()

Returns "Self-Encrypting Drive".

requires_interaction()

Returns AuthInteraction::None. SED unlock is non-interactive. The backend authenticates to the drive controller programmatically using a credential derived from device-specific secrets, not a user-entered password.

is_enrolled(profile, config_dir)

Checks whether {config_dir}/profiles/{profile}/sed-opal.enrollment exists and contains a valid enrollment blob identifying the drive (serial number, locking range, and Opal credential reference).

can_unlock(profile, config_dir)

  1. Verify enrollment exists.
  2. Identify the enrolled drive by serial number.
  3. Check that the drive is present and accessible (block device exists or can be found by serial via sysfs).
  4. Return true if the drive is present.

Does not attempt Opal authentication (may exceed 100ms and may trigger lockout counters on failure).

enroll(profile, master_key, config_dir, salt, selected_key_index)

  1. Enumerate Opal-capable drives by sending TCG Discovery 0 to each block device.
  2. If selected_key_index is Some(i), select the i-th drive. Otherwise select the first Opal-capable drive.
  3. Authenticate to the drive’s Admin SP using the SID (Security Identifier) or a pre-configured admin credential.
  4. Allocate or identify a locking range for Open Sesame use.
  5. Generate a random Opal credential for the locking range (or derive one from salt and a device-specific secret).
  6. Store the vault master_key in the DataStore table of the locking range.
  7. Lock the range, binding it to the generated credential.
  8. Write the enrollment blob to {config_dir}/profiles/{profile}/sed-opal.enrollment containing the drive serial, locking range index, and the Opal credential (encrypted under a key derived from salt and the machine ID).

unlock(profile, config_dir, salt)

  1. Load the enrollment blob.
  2. Derive the Opal credential (decrypt using salt and machine ID).
  3. Open a session to the drive’s Locking SP.
  4. Authenticate with the credential.
  5. Read the master key from the DataStore table.
  6. Return UnlockOutcome:
    • master_key: the 32-byte key read from the DataStore.
    • ipc_strategy: IpcUnlockStrategy::DirectMasterKey.
    • factor_id: AuthFactorId::SedOpal.
    • audit_metadata: {"drive_serial": "<serial>", "locking_range": "<N>"}.
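Step 2's credential derivation is what binds the enrollment to the original machine. The sketch below illustrates that binding with a single-block HKDF-SHA256 in Python; the exact inputs and info string are assumptions for illustration, not a specified format.

```python
import hmac, hashlib

def credential_protection_key(salt: bytes, machine_id: bytes, profile: str) -> bytes:
    # HKDF-SHA256 (extract, then one expand block) over salt and machine-id.
    # The Opal credential in the enrollment blob is encrypted under this key,
    # so the blob is useless when moved to a different machine.
    prk = hmac.new(salt, machine_id, hashlib.sha256).digest()
    info = f"open-sesame:sed-opal:{profile}".encode()  # assumed info string
    return hmac.new(prk, info + b"\x01", hashlib.sha256).digest()

salt = b"vault-salt"  # placeholder values
key_here = credential_protection_key(salt, b"machine-id-aaaa", "work")
key_elsewhere = credential_protection_key(salt, b"machine-id-bbbb", "work")
```

Because the machine-id differs, `key_elsewhere` cannot decrypt a credential enrolled on the first machine, which is the property the "cold boot on different hardware" threat-model entry relies on.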

revoke(profile, config_dir)

  1. Authenticate to the drive’s Admin SP.
  2. Erase the master key from the DataStore table (overwrite with zeros).
  3. Optionally release the locking range allocation.
  4. Delete {config_dir}/profiles/{profile}/sed-opal.enrollment.

Enrollment Blob Format

Version: u8 (1)
Drive serial: length-prefixed UTF-8
Drive model: length-prefixed UTF-8
Block device path at enrollment time: length-prefixed UTF-8 (informational; may change)
Locking range index: u16
Opal credential (encrypted): 12-byte nonce || ciphertext || 16-byte GCM tag
Machine binding hash: 32 bytes (SHA-256 of machine-id, used in credential derivation)

FactorContribution

  • AuthCombineMode::Any or AuthCombineMode::Policy: The backend provides FactorContribution::CompleteMasterKey. The drive hardware releases the full master key from the DataStore.
  • AuthCombineMode::All: The backend provides FactorContribution::FactorPiece. A random 32-byte piece (not the master key) is stored in the DataStore and contributed to combined HKDF derivation.

Pre-Boot Authentication

TCG Opal defines a Pre-Boot Authentication (PBA) mechanism: a small region of the drive (the Shadow MBR) is presented to the BIOS/UEFI before the main OS boots. The PBA image authenticates the user and unlocks the drive before the OS sees the encrypted data.

Open Sesame does not implement PBA. It operates entirely within the running OS. If the drive is locked at boot by system-level SED management, Open Sesame assumes the drive is already unlocked by the time daemon-secrets starts. The backend uses Opal only for its DataStore facility (key storage with hardware-gated access), not for drive-level boot locking.

Integration Dependencies

| Dependency | Type | Purpose |
| --- | --- | --- |
| sedutil-cli | System tool | Opal drive management (enrollment, locking range setup) |
| libata / kernel NVMe driver | Kernel | ATA Security / NVMe Security Send/Receive commands |
| Rust crate: sedutil-rs or direct ioctl | Cargo dependency | Programmatic Opal SP communication |
| /dev/sdX or /dev/nvmeXnY | Block device | Target drive |
| Root or CAP_SYS_RAWIO | Privilege | Required for TCG command passthrough via SG_IO / NVMe admin commands |

Privilege Requirements

Opal commands require raw SCSI/ATA/NVMe command passthrough, which typically requires root or CAP_SYS_RAWIO. Since daemon-secrets runs as a system service, this is consistent with its privilege model. Enrollment and revocation also require admin-level Opal credentials (SID or Admin1 authority).

Threat Model

Protects Against

  • Physical drive theft. An attacker who steals the drive (but not the machine) cannot access the vault master key. The DataStore contents are encrypted by the drive’s MEK, inaccessible without the Opal credential.
  • Offline forensic imaging. Imaging the raw drive platters or flash chips yields only ciphertext.
  • Cold boot on different hardware. Moving the drive to another machine does not help because the Opal credential is derived from the original machine’s identity.

Does Not Protect Against

  • Running-system compromise. Once the OS is booted and the drive is unlocked, an attacker with root access can read the DataStore contents. SED encryption is transparent to the running OS after unlock.
  • DMA attacks. An attacker with physical access to the running machine can use DMA (e.g., via Thunderbolt or FireWire) to read memory containing the unlocked master key.
  • SED firmware vulnerabilities. Research has demonstrated that some SED implementations have firmware bugs allowing bypass of Opal locking without the credential (e.g., the 2018 Radboud University disclosure affecting Crucial and Samsung drives). The backend cannot detect or mitigate firmware-level flaws.
  • Evil-maid with machine access. If the attacker has access to the original machine (not just the drive), they can boot the machine, wait for the drive to be unlocked by the OS, and extract the master key.

Complementary Use with TPM

SED/Opal and TPM provide complementary hardware binding:

  • TPM binds the vault to the boot integrity state (firmware, bootloader, Secure Boot policy). Protects against software-level boot chain tampering.
  • SED/Opal binds the vault to the physical drive. Protects against drive theft.

Using both factors in AuthCombineMode::All provides defense in depth: the vault is bound to both the machine’s boot state and the specific physical drive.

See Also

Federation Factors

Status: Design Intent. No AuthFactorId variant or VaultAuthBackend implementation exists for federation today. Federation is a future capability that builds on top of the existing factor backends. This page documents the design intent for cross-device factor delegation.

Federation factors allow a user to satisfy an authentication factor on one device and have that satisfaction count toward unlocking a vault on a different device. This enables scenarios such as unlocking a headless server’s vault using a phone’s fingerprint sensor, or centrally managing vault unlock across a fleet of machines via an HSM.

Core Concepts

Factor Delegation

Factor delegation separates where a factor is satisfied from where the vault is unlocked:

  • Origin device: The device where the user physically performs authentication (touches a FIDO2 key, scans a fingerprint, enters a password).
  • Target device: The device where the vault resides and where daemon-secrets runs.
  • Delegation token: A cryptographic proof that a specific factor was satisfied on the origin device, valid for a bounded time and scope.

The delegation token does not contain the master key. It is an authorization proof that daemon-secrets on the target device accepts in lieu of direct local factor satisfaction.

Trust Chain

Federation introduces a multi-hop trust chain:

Device Attestation -> Factor Proof -> Delegation Token -> Vault Unlock

  1. Device attestation. The origin device proves its identity and integrity to the target device. This may use TPM remote attestation, a pre-shared device certificate, or a Noise IK session where the origin device’s static public key is pre-enrolled.
  2. Factor proof. The origin device satisfies a factor locally (e.g., fingerprint verification via fprintd) and produces a signed statement: “factor F was satisfied by user U at time T on device D.”
  3. Delegation token. The factor proof is wrapped into a delegation token that specifies the target vault, permitted operations, expiration time, and scope restrictions.
  4. Vault unlock. The target device’s daemon-secrets receives the delegation token, verifies the full chain (device identity, factor proof signature, token validity, scope), and uses it to authorize the unlock.

Delegation Token Structure

Version: u8 (1)
Token ID: 16 bytes (random, for revocation and audit)
Origin device ID: 32 bytes (public key fingerprint or device certificate hash)
Factor ID: AuthFactorId (which factor was satisfied)
Factor proof signature: length-prefixed bytes (signed by origin device's attestation key)
Timestamp: u64 (Unix epoch seconds when factor was satisfied)
Expiry: u64 (Unix epoch seconds when token becomes invalid)
Target vault: length-prefixed UTF-8 (profile name on target device)
Scope: DelegationScope (see below)
Token signature: length-prefixed bytes (signed by origin device's delegation key)
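Issuance and verification of such a token can be sketched as follows. This Python illustration takes two liberties, both labeled here: it represents the token as a dict rather than the binary layout above, and it uses HMAC-SHA256 as a stdlib stand-in where the design calls for signatures with the origin device's asymmetric delegation key.

```python
import hmac, hashlib, json, secrets, time

# Stand-in for the origin device's delegation key; the design specifies an
# asymmetric key, HMAC is used only to keep this sketch stdlib-only.
DEVICE_KEY = secrets.token_bytes(32)

def issue_token(factor_id: str, target_vault: str, ttl: int, now: int) -> dict:
    body = {
        "version": 1,
        "token_id": secrets.token_hex(16),  # random, for revocation and audit
        "factor_id": factor_id,
        "timestamp": now,
        "expiry": now + ttl,
        "target_vault": target_vault,
    }
    payload = json.dumps(body, sort_keys=True).encode()
    body["signature"] = hmac.new(DEVICE_KEY, payload, hashlib.sha256).hexdigest()
    return body

def verify_token(token: dict, expected_vault: str, now: int) -> bool:
    # Checks performed by the FederationReceiver: signature, expiry, target.
    body = {k: v for k, v in token.items() if k != "signature"}
    payload = json.dumps(body, sort_keys=True).encode()
    good_sig = hmac.compare_digest(
        token["signature"], hmac.new(DEVICE_KEY, payload, hashlib.sha256).hexdigest())
    return good_sig and token["expiry"] > now and token["target_vault"] == expected_vault

now = int(time.time())
tok = issue_token("fingerprint", "work", ttl=300, now=now)
```

Note that, as the text states, the token carries only an authorization proof; no key material appears in the body.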

DelegationScope

Delegation tokens carry explicit scope restrictions that limit what the token can authorize:

```rust
use std::net::IpAddr;

struct DelegationScope {
    /// Which operations the token authorizes.
    allowed_operations: Vec<DelegatedOperation>,
    /// Maximum number of times the token can be used (None = unlimited within expiry).
    max_uses: Option<u32>,
    /// If set, token is only valid from these source IP addresses.
    source_addresses: Option<Vec<IpAddr>>,
}

enum DelegatedOperation {
    /// Unlock the vault (read access to secrets).
    VaultUnlock,
    /// Unlock and modify secrets.
    VaultUnlockWrite,
    /// Unlock a specific secret by path.
    SecretAccess(String),
}
```

Scope Narrowing

A delegation token can only have equal or narrower scope than the factor it represents. Scope narrowing is enforced at token creation time:

  • A password factor with full vault access can delegate a token that only unlocks specific secrets.
  • A biometric factor can delegate a token valid for 5 minutes instead of the session duration.
  • A FIDO2 factor can delegate a token restricted to a single use.

Scope can never be widened. A token scoped to SecretAccess("/ssh/id_ed25519") cannot be used to unlock the entire vault.
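The narrowing rule reduces to two checks: the child's operations must be a subset of the parent's, and a use limit may be added or lowered but never raised. A minimal Python sketch of that check (the helper name is hypothetical, and operations are modeled as strings rather than the DelegatedOperation enum):

```python
def is_narrowing(parent_ops: set, parent_max_uses, child_ops: set, child_max_uses) -> bool:
    """True iff the child scope is equal to or narrower than the parent.
    None for a use limit means unlimited, per DelegationScope.max_uses."""
    if not child_ops <= parent_ops:        # operations may only shrink
        return False
    if parent_max_uses is None:
        return True                        # any child limit narrows "unlimited"
    return child_max_uses is not None and child_max_uses <= parent_max_uses

# A full-vault factor may delegate a single-use unlock token...
ok = is_narrowing({"VaultUnlock", "VaultUnlockWrite"}, None, {"VaultUnlock"}, 1)
# ...but a single-secret token can never be widened to full vault access.
widened = is_narrowing({"SecretAccess:/ssh/id_ed25519"}, 1, {"VaultUnlock"}, 1)
```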

Relationship to VaultAuthBackend

Federation does not implement VaultAuthBackend directly. Instead, it wraps existing backends:

  1. On the origin device, a standard VaultAuthBackend (fingerprint, FIDO2, password, etc.) performs the actual authentication.
  2. The origin device’s federation service creates a delegation token signed with the device’s attestation key.
  3. On the target device, a FederationReceiver component (a new daemon component, not a VaultAuthBackend) validates the token and translates it into an internal unlock authorization.

The target device’s daemon-secrets treats a validated delegation token as equivalent to a local factor satisfaction for policy evaluation purposes. If the vault’s AuthCombineMode::Policy requires AuthFactorId::Fingerprint, a delegation token proving fingerprint satisfaction on a trusted origin device satisfies that requirement.

IPC Flow

Origin Device                          Target Device
─────────────                          ─────────────
User touches fingerprint sensor
    |
    v
FingerprintBackend.unlock() succeeds
    |
    v
FederationService creates
  delegation token
    |
    v
Noise IK session ──────────────────>  FederationReceiver
                                           |
                                           v
                                      Verify device attestation
                                      Verify factor proof signature
                                      Check token expiry and scope
                                           |
                                           v
                                      daemon-secrets accepts
                                      factor as satisfied
                                           |
                                           v
                                      Policy engine evaluates
                                      (may need more factors)
                                           |
                                           v
                                      Vault unlocked (if policy met)

FactorContribution

Federation does not change the FactorContribution type of the underlying factor. If the delegated factor provides FactorContribution::CompleteMasterKey locally, the delegation token authorizes release of the same master key on the target device (which must have its own wrapped copy from a prior enrollment of that factor type).

In AuthCombineMode::All, federation cannot provide a FactorPiece remotely because the piece must be combined locally with other pieces on the target device. Federation in All mode requires the origin device to contribute its piece to a multi-party key derivation protocol. This is deferred to a future design iteration (see Open Questions).

Use Cases

Unlock Server Vault from Phone

A developer manages secrets on a headless server. The server vault requires fingerprint + password (AuthCombineMode::Policy, both required). The developer:

  1. Scans a fingerprint on their phone (origin device).
  2. The phone creates a delegation token for the server vault, scoped to VaultUnlock, expiring in 60 seconds.
  3. The token is sent to the server over a Noise IK session (phone’s static public key is pre-enrolled on the server).
  4. The server’s daemon-secrets accepts the fingerprint factor as satisfied.
  5. The developer enters a password directly on the server (or via SSH).
  6. Both policy requirements are met; the vault unlocks.

Fleet Unlock via Central HSM

An organization operates a fleet of machines, each with a vault. A central HSM holds a master delegation key. An administrator:

  1. Authenticates to the HSM management interface (FIDO2 + password).
  2. The HSM creates delegation tokens for a set of target machines, each scoped to VaultUnlock, expiring in 5 minutes.
  3. Tokens are distributed to target machines via the management plane.
  4. Each machine’s daemon-secrets validates the token against the HSM’s pre-enrolled public key.
  5. Vaults unlock. The HSM never sees the vault master keys.

Emergency Break-Glass

A break-glass procedure for when normal factors are unavailable:

  1. An administrator authenticates to a break-glass service using a hardware token.
  2. The service creates a single-use delegation token (max_uses: 1) for the target vault.
  3. The token is transmitted to the target device.
  4. The vault unlocks once. The token is consumed and cannot be reused.
  5. All break-glass events are audit-logged with the token ID, origin device, and administrator identity.

Remote Attestation

Before a target device accepts a delegation token, it must verify that the origin device is trustworthy. Remote attestation provides this assurance.

Device Identity

Each device in the federation has a long-lived identity key pair. The public key is enrolled on peer devices during a setup ceremony. Options for the identity key:

  • TPM-backed key. The device’s TPM generates a non-exportable attestation key. The public portion is enrolled on peers. This proves the origin device has not been cloned.
  • Noise IK static keypair. The existing Open Sesame Noise IK transport provides mutual authentication. The origin device’s static public key is already known to the target device from IPC bus enrollment.
  • X.509 certificate. A CA-issued device certificate, validated against a pinned CA public key. Suitable for organizational deployments with existing PKI.

Platform State Attestation

Optionally, the origin device can include a TPM quote (signed PCR values) in the delegation token, proving its boot integrity at the time of factor satisfaction. The target device verifies the quote against a known-good PCR policy. This prevents a compromised origin device from generating fraudulent factor proofs.

Time-Bounded Delegation Tokens

All delegation tokens have mandatory expiration:

  • Minimum expiry: 10 seconds (prevents creation of tokens that expire before delivery).
  • Maximum expiry: Configurable per-vault, default 300 seconds (5 minutes). Longer durations increase the window for token theft and replay.
  • Clock skew tolerance: 30 seconds. The target device accepts tokens where timestamp <= now + 30s (the token does not appear to have been issued in the future beyond tolerable skew) and now <= expiry + 30s (the token is not past expiry beyond tolerable skew).

Token expiry is checked at the target device at time of use. A token that was valid when created but has since expired is rejected. There is no renewal mechanism; a new factor satisfaction and new token are required.
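
A sketch of the time check performed at use time, under the reading that skew tolerance applies symmetrically at both bounds (names are illustrative, not the actual implementation):

```rust
const CLOCK_SKEW_TOLERANCE_SECS: u64 = 30;

/// Illustrative time-validity check performed at the target device.
fn token_time_valid(timestamp: u64, expiry: u64, now: u64) -> bool {
    // The token must not claim to come from the future beyond tolerable skew...
    let not_from_future = timestamp <= now + CLOCK_SKEW_TOLERANCE_SECS;
    // ...and must not be past its expiry, again allowing for skew.
    let not_expired = now <= expiry + CLOCK_SKEW_TOLERANCE_SECS;
    not_from_future && not_expired
}

fn main() {
    let (ts, exp) = (1_000, 1_060); // issued at t=1000 with 60-second validity
    assert!(token_time_valid(ts, exp, 1_030));  // mid-lifetime: accepted
    assert!(token_time_valid(ts, exp, 1_085));  // within skew past expiry: accepted
    assert!(!token_time_valid(ts, exp, 1_200)); // long expired: rejected
    assert!(!token_time_valid(ts, exp, 900));   // issued "in the future": rejected
}
```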

Replay Prevention

Each token has a unique random 16-byte ID. The target device maintains a set of consumed token IDs (in memory, persisted to disk for crash recovery). A token ID that has been seen before is rejected, even if the token has not expired.

The consumed-ID set is pruned of entries older than max_expiry + clock_skew_tolerance to bound memory usage.
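
A minimal sketch of the consume-and-prune behavior, assuming an in-memory map from token ID to expiry (the real implementation also persists the set to disk for crash recovery):

```rust
use std::collections::HashMap;

/// Illustrative consumed-token-ID set with pruning.
struct ReplayGuard {
    seen: HashMap<[u8; 16], u64>, // token ID -> expiry (Unix seconds)
}

impl ReplayGuard {
    fn new() -> Self {
        Self { seen: HashMap::new() }
    }

    /// Records a token ID; returns false if it was already consumed.
    fn consume(&mut self, id: [u8; 16], expiry: u64) -> bool {
        self.seen.insert(id, expiry).is_none()
    }

    /// Drops IDs whose tokens could no longer validate anyway.
    fn prune(&mut self, now: u64, skew: u64) {
        self.seen.retain(|_, expiry| now <= *expiry + skew);
    }
}

fn main() {
    let mut guard = ReplayGuard::new();
    let id = [7u8; 16];
    assert!(guard.consume(id, 1_060));  // first use accepted
    assert!(!guard.consume(id, 1_060)); // replay rejected
    guard.prune(2_000, 30);             // 1_060 + 30 < 2_000: entry dropped
    assert!(guard.seen.is_empty());
}
```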

Security Considerations

  • Token theft. A stolen delegation token can be used by an attacker within its validity window and scope. Mitigations: short expiry, single-use tokens (max_uses: 1), source address restrictions, and Noise IK transport encryption (tokens are never sent in plaintext).
  • Origin device compromise. If the origin device is compromised, an attacker can generate arbitrary delegation tokens. Mitigations: TPM-backed attestation keys (attacker cannot extract the signing key without hardware attack), platform state attestation (compromised boot state is detected), and administrative revocation of the device’s enrollment on all target devices.
  • Target device compromise. If the target device is already compromised, delegation tokens are irrelevant – the attacker already has access to the running system. Federation does not increase or decrease the attack surface of a compromised target.
  • Network partition. If origin and target devices cannot communicate directly, the token must be relayed through an intermediary. The token’s cryptographic signatures ensure integrity regardless of the relay path, but relay latency may cause expiry. Pre-generating tokens with longer expiry is an option for intermittently-connected environments.
  • Scope escalation. The scope narrowing invariant (delegation can only narrow, never widen) is enforced at token creation on the origin device and verified at the target device. A malicious origin device could create a token with any scope up to the full permissions of the delegated factor – this is inherent to the delegation model. Trust in the origin device is a prerequisite for accepting its tokens.

Open Questions

  • AuthCombineMode::All support. Federation in All mode requires multi-party key derivation where the origin device contributes its piece without revealing the combined master key. Threshold secret sharing or secure multi-party computation protocols may be needed.
  • Token revocation broadcast. How does a target device learn that a token has been revoked before its natural expiry? Options include a revocation list pushed via the management plane, or making tokens short-lived enough that revocation is unnecessary.
  • Multi-hop delegation. Can device A delegate to device B, which then re-delegates to device C? The current design does not support transitive delegation. Each token is signed by the origin device and validated against that device’s enrolled key directly.
  • Offline token pre-generation. For air-gapped environments, tokens may need to be generated in advance with longer validity. This increases the theft window and requires careful scope restriction.

See Also

  • Factor Architecture – VaultAuthBackend trait definition and dispatch
  • Policy Engine – How delegated factors interact with AuthCombineMode
  • Biometrics – Common delegation source (phone fingerprint)
  • TPM – Remote attestation for device identity

Window Manager Daemon

The daemon-wm process implements a Wayland overlay window switcher with Vimium-style letter hints, application launching, and inline vault unlock. It runs as a single-threaded tokio process (current_thread runtime) connected to the IPC bus as a BusClient.

Controller State Machine

The OverlayController (controller.rs) is the single owner of all overlay state, timing, and decisions. The main loop feeds events in, executes the returned Command list, and does nothing else. The controller never performs I/O directly.

Phases

The controller tracks a Phase enum with the following variants:

  • Idle – Nothing happening. No overlay visible, no timers running.
  • Armed – Border visible, keyboard exclusive mode acquired via layer-shell. The picker is not yet visible. The controller waits for either modifier release (quick-switch) or dwell timeout (transition to Picking). Carries entered_at: Instant, selection: usize, input: String, dwell_ms: u32, and an optional PendingLaunch.
  • Picking – Full picker visible. The user is browsing the window list or typing hint characters. Carries the same Snapshot, selection, input, and optional PendingLaunch.
  • Launching – An application launch request has been sent to daemon-launcher via IPC. The overlay displays a status indicator while waiting for the response.
  • LaunchError – A launch failed. The overlay shows an error toast. Any keystroke dismisses.
  • Unlocking – Vault unlock in progress. Contains profiles_to_unlock, current_index, password_len, unlock_mode (one of AutoAttempt, WaitingForTouch, Password, Verifying), and the original launch command for retry after unlock.
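
Condensed, the phase set looks roughly like this (fields abbreviated from the prose above; the Snapshot, PendingLaunch, and unlock-mode details are elided, and this is not the actual controller.rs definition):

```rust
use std::time::Instant;

// Abbreviated sketch of the controller phases described above.
enum Phase {
    Idle,
    Armed { entered_at: Instant, selection: usize, input: String, dwell_ms: u32 },
    Picking { selection: usize, input: String },
    Launching,
    LaunchError,
    Unlocking { current_index: usize, password_len: usize },
}

fn main() {
    let phase = Phase::Armed {
        entered_at: Instant::now(),
        selection: 0,
        input: String::new(),
        dwell_ms: 250,
    };
    // Only Armed carries a dwell deadline.
    assert!(matches!(phase, Phase::Armed { dwell_ms: 250, .. }));
}
```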

Events

The controller accepts the following Event variants:

Event                         Source                                 Description
─────                         ──────                                 ───────────
Activate                      IPC WmActivateOverlay                  Forward activation (Alt+Tab)
ActivateBackward              IPC WmActivateOverlayBackward          Backward activation (Alt+Shift+Tab)
ActivateLauncher              IPC WmActivateOverlayLauncher          Launcher mode (Alt+Space)
ActivateLauncherBackward      IPC WmActivateOverlayLauncherBackward  Launcher mode backward
ModifierReleased              Overlay SCTK or IPC InputKeyEvent      Alt/Meta key released
Char(char)                    Overlay or IPC key event               Alphanumeric character typed
Backspace                     Overlay or IPC key event               Backspace pressed
SelectionDown / SelectionUp   Overlay or IPC key event               Arrow/Tab navigation
Confirm                       Overlay or IPC key event               Enter pressed
Escape / Dismiss              Overlay or IPC key event               Cancel/timeout
DwellTimeout                  Main loop deadline                     Dwell timer expired
LaunchResult                  Command executor callback              Launch IPC completed
AutoUnlockResult              Command executor callback              SSH agent unlock completed
TouchResult                   Command executor callback              Hardware token touch completed
UnlockResult                  Command executor callback              Password unlock IPC completed

Transitions

Idle ──Activate──> Armed ──DwellTimeout──> Picking
                     |                        |
                     |<──────Activate──────────|  (re-activation cycles selection)
                     |                        |
                     |──ModifierReleased──> Idle (activate selected window)
                     |                        |
                     |──Char──> Picking        |──ModifierReleased──> Idle
                     |                        |──Escape──> Idle
                     |                        |──Confirm──> Idle
                     |                        +──launch match──> Launching
                     |
                     +──ModifierReleased (fast)──> Idle (quick-switch)

Launching ──LaunchResult(success)──> Idle
Launching ──LaunchResult(VaultsLocked)──> Unlocking
Launching ──LaunchResult(error)──> LaunchError ──any key──> Idle

Unlocking ──AutoUnlockResult(success)──> retry launch or next profile
Unlocking ──AutoUnlockResult(fail)──> Password prompt
Unlocking ──UnlockResult(success)──> retry launch or next profile
Unlocking ──Escape──> Idle

Pre-computed Snapshot

At activation time, the controller builds a Snapshot that carries all data through the entire overlay lifecycle. The snapshot contains:

  • A copy of the window list, MRU-reordered via mru::reorder() and truncated to max_visible_windows (default: 20).
  • The origin window (currently focused) rotated from MRU position 0 to the last index.
  • Hint strings assigned via hints::assign_app_hints(), parallel to the window list.
  • Overlay-ready WindowInfo structs containing app_id and title.
  • A clone of the key_bindings map for launch-or-focus resolution.

No recomputation occurs after the snapshot is built. Keyboard actions only update the selection index and input buffer.

Quick-Switch

When ModifierReleased fires during the Armed phase, the controller evaluates three conditions in on_modifier_released():

  1. Elapsed time since entered_at is below quick_switch_threshold_ms (default: 250ms from WmConfig).
  2. No input characters have been typed (input.is_empty()).
  3. The selection has not moved from snap.initial_forward().

If all three hold, the controller activates initial_forward() – the MRU previous window (index 0 after origin rotation). Otherwise, it activates the current selection.

This enables fast Alt+Tab release to instantly switch to the previously focused window without ever showing the picker overlay.
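
The three conditions condense to a single predicate. This is an illustrative restatement, not the actual on_modifier_released() code:

```rust
/// Illustrative quick-switch predicate mirroring the three conditions above.
fn is_quick_switch(
    elapsed_ms: u32,
    quick_switch_threshold_ms: u32,
    input: &str,
    selection: usize,
    initial_forward: usize,
) -> bool {
    elapsed_ms < quick_switch_threshold_ms // released fast enough
        && input.is_empty()                // no hint characters typed
        && selection == initial_forward    // selection untouched
}

fn main() {
    // Fast Alt+Tab release with no interaction: quick-switch fires.
    assert!(is_quick_switch(120, 250, "", 0, 0));
    // Typing a hint character disables quick-switch.
    assert!(!is_quick_switch(120, 250, "f", 0, 0));
    // Holding past the threshold disables it too.
    assert!(!is_quick_switch(300, 250, "", 0, 0));
}
```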

Dwell Timeout

The main loop calls controller.next_deadline() on each iteration of the tokio::select! loop. During the Armed phase, this returns entered_at + Duration::from_millis(dwell_ms). The dwell_ms value is set to:

  • quick_switch_threshold_ms (default: 250ms) for ActivationMode::Forward and ActivationMode::Backward.
  • min(overlay_delay_ms, 100) for ActivationMode::Launcher and ActivationMode::LauncherBackward, providing a shorter dwell to let the compositor grant keyboard exclusivity before the first keypress.

When the deadline fires, the main loop sends Event::DwellTimeout. The controller’s on_dwell_timeout() method transitions Armed to Picking and emits Command::ShowPicker with the snapshot’s pre-computed overlay_windows and hints.
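
The dwell-duration rule can be sketched as follows (enum and parameter names mirror the prose, not the actual crate):

```rust
// Illustrative restatement of the dwell-duration rule described above.
enum ActivationMode {
    Forward,
    Backward,
    Launcher,
    LauncherBackward,
}

fn dwell_ms(mode: &ActivationMode, quick_switch_threshold_ms: u32, overlay_delay_ms: u32) -> u32 {
    match mode {
        // Window-switch modes dwell for the quick-switch threshold.
        ActivationMode::Forward | ActivationMode::Backward => quick_switch_threshold_ms,
        // Launcher modes use a short dwell so the compositor can grant
        // keyboard exclusivity before the first keypress.
        ActivationMode::Launcher | ActivationMode::LauncherBackward => overlay_delay_ms.min(100),
    }
}

fn main() {
    assert_eq!(dwell_ms(&ActivationMode::Forward, 250, 150), 250);
    assert_eq!(dwell_ms(&ActivationMode::Launcher, 250, 150), 100);
    assert_eq!(dwell_ms(&ActivationMode::Launcher, 250, 50), 50);
}
```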

Reactivation

When an Activate or ActivateBackward event arrives while already in Armed or Picking (e.g., repeated Alt+Tab intercepted by the compositor):

  1. The selection index advances forward or backward by one position, wrapping via modular arithmetic over snap.windows.len().
  2. If in Armed, the phase transitions to Picking with Command::ShowPicker and Command::UpdatePicker.
  3. A Command::ResetGrace is emitted to reset the overlay’s modifier-poll grace timer, proving Alt is still held.
  4. last_ipc_advance is set to Instant::now(). Any SelectionDown or SelectionUp event within 100ms (REACTIVATION_DEDUP_MS) is suppressed by is_reactivation_duplicate() to prevent double-advancement from the same physical keystroke arriving via both IPC re-activation and the keyboard handler.
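
The wrapping advance in step 1 can be sketched as plain modular arithmetic (illustrative, not the actual controller code):

```rust
/// Illustrative selection advance with wrap-around over the window count.
fn advance(selection: usize, len: usize, backward: bool) -> usize {
    if backward {
        (selection + len - 1) % len
    } else {
        (selection + 1) % len
    }
}

fn main() {
    assert_eq!(advance(0, 3, false), 1); // forward
    assert_eq!(advance(2, 3, false), 0); // wraps past the end
    assert_eq!(advance(0, 3, true), 2);  // backward wraps to the end
}
```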

Staged Launch

When the user types a character in on_char() and check_hint_or_launch() finds that the input does not match any hint (MatchResult::NoMatch) but is a single character matching a key_bindings entry with a launch command:

  1. A PendingLaunch struct (containing command, tags, launch_args) is stored in the current phase via set_pending_launch().
  2. Command::ShowLaunchStaged { command } is emitted to display the intent in the overlay.
  3. The launch is not executed immediately.

Commitment occurs when:

  • ModifierReleased: on_modifier_released() checks for pending_launch before window activation. If present, the controller transitions to Phase::Launching and emits Command::ShowLaunching followed by Command::LaunchApp.
  • Confirm (Enter): on_confirm() follows the same path.
  • Backspace: If input.pop() empties the buffer, pending_launch is set to None.
  • Escape: on_escape() dismisses the overlay entirely, clearing all state.

Overlay Lifecycle

SCTK Layer-Shell Surface

The overlay runs on a dedicated OS thread spawned by overlay::spawn_overlay(), communicating with the tokio event loop via std::sync::mpsc (commands in) and tokio::sync::mpsc (events out). The OverlayApp struct holds all Wayland state and creates a wlr-layer-shell surface with:

  • Layer::Overlay – renders above all other surfaces.
  • Anchor::TOP | Anchor::BOTTOM | Anchor::LEFT | Anchor::RIGHT – fullscreen coverage.
  • KeyboardInteractivity::Exclusive – captures all keyboard input when visible.

The overlay thread runs a manual poll loop using prepare_read() and rustix::event::poll() for low-latency Wayland event dispatch, draining the command channel every POLL_INTERVAL_MS (4ms).

Show/Hide

  • ShowBorder: Creates the layer-shell surface if absent. Sets OverlayPhase::BorderOnly. Acquires keyboard exclusivity. Records activated_at for stale-activation timeout.
  • ShowFull: Stores the windows and hints vectors, transitions to OverlayPhase::Full, and triggers a redraw.
  • HideAndSync: Destroys the surface, performs a Wayland display sync via wl_display.roundtrip(), then sends OverlayEvent::SurfaceUnmapped as acknowledgment. The main loop’s execute_commands() waits up to 5 seconds for this event before proceeding with window activation. This ensures the compositor no longer sees the exclusive-keyboard surface before focus transfers.
  • Hide: Destroys the surface without synchronization. Used for escape/dismiss where no subsequent window activation is needed.

Modifier Tracking

The overlay tracks alt_held via the SCTK KeyboardHandler’s modifier callback. After activation, a grace period (MODIFIER_POLL_GRACE_MS = 150ms) prevents premature modifier-release detection. If no keyboard event arrives within STALE_ACTIVATION_TIMEOUT_MS (3000ms), the overlay sends OverlayEvent::Dismiss to handle cases where Alt was released before keyboard focus was granted.

The ConfirmKeyboardInput command from the main loop (sent on the first IPC key event) sets received_key_event = true, disabling the stale activation timeout.

Overlay Phases

The overlay thread tracks OverlayPhase: Hidden, BorderOnly, Full, Launching, LaunchError, UnlockPrompt, UnlockProgress. Each phase determines what the render module draws.

Rendering

The render.rs module implements software rendering using two libraries:

  • tiny-skia: 2D path operations. rounded_rect_path() builds quadratic Bezier paths for rounded rectangles. fill_rounded_rect() and stroke_rounded_rect() render filled and stroked shapes onto a tiny_skia::Pixmap. Layout follows a Material Design 4-point grid with base constants: padding (20px), row height (48px), row spacing (8px), badge dimensions (48x32px), badge radius (8px), app column width (180px), text size (16px), border width (3px), corner radius (16px), and column gap (16px). All dimensions scale with HiDPI via the Layout struct.
  • cosmic-text: Text shaping, layout, and glyph rasterization. FontSystem manages font discovery and caching. SwashCache provides glyph rasterization. Text is measured with measure_text() (returns width and height) and drawn with draw_text(), both operating on Buffer objects with configurable Attrs (family, weight) and Metrics (font size, line height at 1.3x).

Theme

OverlayTheme defines colors for: background, card_background, card_border, text_primary, text_secondary, badge_background, badge_text, badge_matched_background, badge_matched_text, selection_highlight, border_color, plus border_width and corner_radius. Theme construction follows a priority chain:

  1. COSMIC system theme: OverlayTheme::from_cosmic() loads platform_linux::cosmic_theme::CosmicTheme and maps its semantic color tokens (background.base, primary.base, primary.on, secondary.component.base, accent.base, accent.on, corner_radii.radius_m) to overlay theme fields.
  2. User config overrides: OverlayTheme::from_config() compares each WmConfig color field against its default. Non-default values override the COSMIC-derived theme.
  3. Hardcoded defaults: Dark theme with Catppuccin-inspired palette (#89b4fa border, #000000c8 background, #1e1e1ef0 cards, #646464 badges, #4caf50 matched badges).

Colors are parsed from CSS hex notation (#RRGGBB or #RRGGBBAA) via Color::from_hex(). Theme updates arrive via OverlayCmd::UpdateTheme on config hot-reload.

Rendered Elements

  • Border-only phase: A border indicator around the screen edges.
  • Full picker: A centered card with: hint badges (letter hints with badge_background or badge_matched_background depending on match state), app ID column (optional, controlled by show_app_id), and title column per window row. The selected row receives a selection_highlight background. An input buffer is displayed for typed characters.
  • Launch status: Staged launch intent, launching indicator, or error messages.
  • Unlock prompt: Profile name, dot-masked password field (receives only password_len, never password bytes), and optional error message.
  • Unlock progress: Profile name with status message (e.g., “Authenticating…”, “Verifying…”, “Touch your security key…”).

MRU Stack

The mru.rs module maintains a file-based most-recently-used window stack at ~/.cache/open-sesame/mru. The cache directory is created with mode 0o700 on Unix.

File Format

One window ID per line, most recent first. The stack is capped at MAX_ENTRIES (64).

Operations

  • load(): Opens the file with a shared flock (LOCK_SH | LOCK_NB – never blocks the tokio thread). Parses one ID per line, trimming whitespace and filtering empty lines. Returns MruState containing the ordered stack: Vec<String>.
  • save(target): Opens the file with an exclusive flock (LOCK_EX | LOCK_NB). Reads the current stack, removes target from its old position via retain(), inserts it at index 0, truncates to 64 entries, and writes back as newline-joined text. No-op if target is already at position 0.
  • seed_if_empty(windows): On first launch or after crash, seeds the stack from the compositor’s window list. The focused window goes to position 0. No-op if the stack already has entries.
  • reorder(windows, get_id, state): Sorts a window slice by MRU stack position. Windows present in the stack sort by their position (0 = most recent). Windows not in the stack receive usize::MAX and sort after all tracked windows, preserving their relative compositor order.
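
The reorder operation relies on a stable sort keyed by stack position, with untracked windows keyed usize::MAX. A minimal sketch (simplified signature, not the actual mru.rs one):

```rust
/// Illustrative MRU reorder: windows in the stack sort by recency; windows
/// not in the stack keep their compositor order after all tracked ones.
fn reorder(windows: &mut Vec<String>, stack: &[String]) {
    // sort_by_key is stable, so usize::MAX ties preserve relative order.
    windows.sort_by_key(|id| {
        stack.iter().position(|s| s == id).unwrap_or(usize::MAX)
    });
}

fn main() {
    let mut windows = vec!["term".to_string(), "browser".to_string(), "editor".to_string()];
    let stack = vec!["editor".to_string(), "term".to_string()]; // most recent first
    reorder(&mut windows, &stack);
    // "browser" is untracked and sorts after the tracked windows.
    assert_eq!(windows, vec!["editor", "term", "browser"]);
}
```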

Origin Tracking

After mru::reorder(), the currently focused window (MRU position 0) sits at the beginning of the sorted list. Snapshot::build() then rotates it to the end via remove() + push(). The result:

  • Index 0 = MRU previous (the quick-switch target).
  • Last index = origin (currently focused, lowest switch priority).
  • initial_forward() returns 0 unless that is the origin, in which case it returns 1.
  • initial_backward() returns the last index unless that is the origin, in which case it returns last - 1.

The origin window remains in the list for display and is reachable by full-circle cycling or explicit hint selection.
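
The rotation step can be sketched as follows (illustrative; mirrors the remove() + push() behavior described for Snapshot::build()):

```rust
/// Rotate the focused window (MRU position 0) to the end of the list.
fn rotate_origin(windows: &mut Vec<String>) {
    if windows.len() > 1 {
        let origin = windows.remove(0);
        windows.push(origin);
    }
}

fn main() {
    let mut windows = vec!["focused".to_string(), "previous".to_string(), "older".to_string()];
    rotate_origin(&mut windows);
    // Index 0 is now the quick-switch target; the origin sits last.
    assert_eq!(windows, vec!["previous", "older", "focused"]);
}
```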

Inline Vault Unlock

When a launch request returns a LaunchDenial::VaultsLocked { locked_profiles } denial, on_launch_result() transitions to Phase::Unlocking without dismissing the overlay. The phase stores the locked_profiles list, a current_index into it, and the original retry_command, retry_tags, and retry_launch_args for replay after unlock.

Unlock Flow

  1. Auto-unlock attempt (Command::AttemptAutoUnlock): The commands_unlock::attempt_auto_unlock() handler reads the vault’s salt file from {config_dir}/vaults/{profile}.salt, creates a core_auth::AuthDispatcher, calls find_auto_backend() to locate an SSH agent enrollment, and invokes auto_backend.unlock(). On success, the resulting master key is transferred into SensitiveBytes::from_protected() and sent to daemon-secrets via SshUnlockRequest IPC with a 30-second timeout. The AutoUnlockResult event is fed back through the controller.

  2. Touch prompt: If the auto-unlock backend sets needs_touch = true, the controller transitions to UnlockMode::WaitingForTouch and emits Command::ShowTouchPrompt. The overlay displays “Touch your security key for {profile}…”.

  3. Password fallback: If auto-unlock fails (no backend available, agent error, or secrets rejection), the controller transitions to UnlockMode::Password and emits Command::ShowPasswordPrompt. Password bytes are accumulated in a SecureVec (pre-allocated with mlock via SecureVec::for_password()). The overlay receives only password_len via OverlayCmd::ShowUnlockPrompt, never password bytes.

  4. Password submission (Command::SubmitPasswordUnlock): On Enter, commands_unlock::submit_password_unlock() copies the password from SecureVec into SensitiveBytes::from_slice() (mlock-to-mlock copy, no heap exposure), clears the SecureVec immediately, shows “Verifying…” in the overlay, and sends UnlockRequest IPC to daemon-secrets with a 30-second timeout (accommodating Argon2id KDF with high memory parameters). AlreadyUnlocked responses are treated as success.

  5. Multi-profile unlock: If multiple profiles are locked, advance_to_next_profile_or_retry() increments current_index and starts the auto-unlock flow for the next profile.

  6. Retry: After all profiles are unlocked, the controller emits Command::ActivateProfiles (sends ProfileActivate IPC for each profile) followed by Command::LaunchApp with the original command, tags, and launch args.

Security Properties

  • Password bytes never cross the thread boundary to the render thread. The overlay receives only password_len: usize.
  • SecureVec uses mlock to prevent swap and core-dump exposure.
  • SensitiveBytes uses ProtectedAlloc for the IPC transfer to daemon-secrets.
  • The password buffer is zeroized via Command::ClearPasswordBuffer on escape, successful unlock, or any transition out of the Unlocking phase.

Keyboard Input

Keyboard events arrive from two sources:

  1. SCTK keyboard handler: The overlay’s wlr-layer-shell surface receives Wayland keyboard events when it holds KeyboardInteractivity::Exclusive. The KeyboardHandler implementation maps KeyEvent and Modifiers to OverlayEvent variants.
  2. IPC InputKeyEvent: daemon-input forwards evdev keyboard events over the IPC bus when a grab is active. The main loop maps these via map_ipc_key_to_event() to controller Event variants.

Both sources pass through a shared KeyDeduplicator instance (8-entry ring buffer, 50ms expiry window, direction-aware) to ensure only the first arrival of each physical keystroke is processed.
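
A sketch of the deduplication idea (simplified: the real KeyDeduplicator uses an 8-entry ring buffer and is direction-aware; this version only models the expiry window):

```rust
use std::time::{Duration, Instant};

/// Simplified keystroke deduplicator: the first arrival of a keycode within
/// the expiry window is accepted, later duplicates are dropped.
struct KeyDeduplicator {
    recent: Vec<(u32, Instant)>,
    window: Duration,
}

impl KeyDeduplicator {
    fn new(window_ms: u64) -> Self {
        Self { recent: Vec::new(), window: Duration::from_millis(window_ms) }
    }

    /// Returns true only for the first arrival of a physical keystroke.
    fn accept(&mut self, keycode: u32) -> bool {
        let now = Instant::now();
        // Forget entries older than the expiry window.
        self.recent.retain(|&(_, t)| now.duration_since(t) < self.window);
        if self.recent.iter().any(|&(k, _)| k == keycode) {
            false // duplicate arrival of the same physical keystroke
        } else {
            self.recent.push((keycode, now));
            true
        }
    }
}

fn main() {
    let mut dedup = KeyDeduplicator::new(50);
    assert!(dedup.accept(30));  // first arrival (e.g. via SCTK)
    assert!(!dedup.accept(30)); // second arrival (e.g. via IPC) is dropped
    assert!(dedup.accept(31));  // a different key is unaffected
}
```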

When the overlay activates, Command::ShowBorder triggers an InputGrabRequest publish to acquire keyboard forwarding from daemon-input. On hide (Command::HideAndSync or Command::Hide), InputGrabRelease is published. The first IPC key event each activation cycle sends OverlayCmd::ConfirmKeyboardInput to the overlay thread, setting ipc_keyboard_active = true and stopping the stale activation timeout.

IPC Interface

Message                             Response                              Description
───────                             ────────                              ───────────
WmListWindows                       WmListWindowsResponse { windows }     Returns MRU-reordered window list
WmActivateWindow { window_id }      WmActivateWindowResponse { success }  Activates a window by ID or app_id match, saves MRU state
WmActivateOverlay                   –                                     Triggers forward overlay activation
WmActivateOverlayBackward           –                                     Triggers backward overlay activation
WmActivateOverlayLauncher           –                                     Triggers launcher-mode activation
WmActivateOverlayLauncherBackward   –                                     Triggers launcher-mode backward activation
InputKeyEvent                       –                                     Keyboard event from daemon-input (processed only when not idle)
KeyRotationPending                  –                                     Reconnects with rotated keypair via BusClient::handle_key_rotation()

Process Hardening

On Linux, daemon-wm applies the following security measures:

  • platform_linux::security::harden_process() for process-level hardening.
  • Resource limits: nofile = 4096, memlock_bytes = 0.
  • core_types::init_secure_memory() probes memfd_secret and initializes secure memory before the sandbox is applied.
  • Landlock filesystem sandbox via daemon_wm::sandbox::apply_sandbox(), applied after IPC keypair read and bus connection but before IPC traffic processing.
  • systemd watchdog notification every 15 seconds via platform_linux::systemd::notify_watchdog(), with platform_linux::systemd::notify_ready() called at startup.

Configuration

The WmConfig struct (core-config/src/schema_wm.rs) provides:

Field                       Type      Default       Description
─────                       ────      ───────       ───────────
hint_keys                   String    "asdfghjkl"   Characters used for hint assignment
overlay_delay_ms            u32       150           Dwell delay before showing full picker
activation_delay_ms         u32       200           Delay after activation before dismiss
quick_switch_threshold_ms   u32       250           Fast-release threshold for instant switch
border_width                f32       4.0           Border width in pixels
border_color                String    "#89b4fa"     Border color (CSS hex)
background_color            String    "#000000c8"   Overlay background (hex with alpha)
card_color                  String    "#1e1e1ef0"   Card background color
text_color                  String    "#ffffff"     Primary text color
hint_color                  String    "#646464"     Hint badge color
hint_matched_color          String    "#4caf50"     Matched hint badge color
key_bindings                BTreeMap  (see Hints)   Per-key app bindings
show_title                  bool      true          Show window titles in overlay
show_app_id                 bool      false         Show app IDs in overlay
max_visible_windows         u32       20            Maximum windows in picker

Configuration hot-reloads via core_config::ConfigWatcher. When the watcher fires, the main loop reads the new WmConfig, builds an OverlayTheme::from_config(), sends OverlayCmd::UpdateTheme to the overlay thread, updates the shared wm_config mutex, and publishes ConfigReloaded on the IPC bus.

Compositor Backend

Window list polling runs on a dedicated OS thread named wm-winlist-poll because the compositor backend (platform_linux::compositor::CompositorBackend) performs synchronous Wayland roundtrips with libc::poll(). On the current_thread tokio runtime, this would block all IPC message processing. The thread calls backend.list_windows() every 2 seconds, sending results to the tokio runtime via a tokio::sync::mpsc channel.

If platform_linux::compositor::detect_compositor() fails (e.g., no wlr-foreign-toplevel-management protocol support), daemon-wm falls back to a D-Bus focus monitor (platform_linux::compositor::focus_monitor). This monitor receives FocusEvent::Focus(app_id) and FocusEvent::Closed(app_id) events, maintaining a synthetic window list by tracking focus changes and window closures.

Dependencies

The daemon-wm crate depends on the following workspace crates: core-types, core-config, core-ipc, core-crypto, core-auth, core-profile. External dependencies include smithay-client-toolkit (SCTK), wayland-client, wayland-protocols-wlr, tiny-skia, and cosmic-text, all gated behind the wayland feature (enabled by default). The platform-linux crate is used with the cosmic feature for compositor backend and theme integration.

Hint Assignment

The hints module (daemon-wm/src/hints.rs) assigns letter-based hints to windows for keyboard-driven selection. Hints follow a Vimium-style model where each window receives a unique string of repeated characters that the user types to select it.

Hint Assignment Algorithm

The assign_hints(count, hint_keys) function generates hints from a configurable key set string (default: "asdfghjkl" via WmConfig). For N windows and K available keys in the key set:

  1. The first K windows each receive a single character: a, s, d, f, …
  2. The next K windows receive doubled characters: aa, ss, dd, ff, …
  3. The pattern continues with tripled characters, and so on.

Each key is used once at each repetition level before any key repeats at the next level. For example, with hint_keys = "asd" and 5 windows, the assigned hints are: a, s, d, aa, ss.
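The assignment above can be sketched in a few lines. This is a minimal, self-contained illustration of the rule, not the actual implementation in daemon-wm/src/hints.rs, which may differ in details such as handling an empty key set:

```rust
// Sketch only: assumes a non-empty key set.
fn assign_hints(count: usize, hint_keys: &str) -> Vec<String> {
    let keys: Vec<char> = hint_keys.chars().collect();
    let k = keys.len();
    (0..count)
        .map(|i| {
            // Window i uses key i % K, repeated (i / K) + 1 times, so each
            // key appears once per repetition level before any key repeats.
            let level = i / k + 1;
            keys[i % k].to_string().repeat(level)
        })
        .collect()
}

fn main() {
    // With hint_keys = "asd" and 5 windows: a, s, d, aa, ss.
    assert_eq!(assign_hints(5, "asd"), vec!["a", "s", "d", "aa", "ss"]);
}
```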

This function is used internally. The primary entry point for daemon-wm is assign_app_hints(), which groups windows by application before assigning.

App Grouping

The assign_app_hints(app_ids, key_bindings) function groups windows by their resolved base key character before assigning hints. Windows sharing the same base key receive consecutive repetitions of that character.

For a window list containing two Firefox instances and one Ghostty:

  • Firefox window 1: f
  • Firefox window 2: ff
  • Ghostty window 1: g

The function returns (hint_string, original_index) pairs sorted by original window index, preserving display order.
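A sketch of the grouping behavior, assuming app IDs have already been resolved to base key characters (the real assign_app_hints performs that resolution itself via key_for_app):

```rust
use std::collections::HashMap;

/// Sketch of app-grouped assignment over pre-resolved base keys.
fn assign_app_hints(base_keys: &[char]) -> Vec<(String, usize)> {
    let mut seen: HashMap<char, usize> = HashMap::new();
    let mut hints: Vec<(String, usize)> = base_keys
        .iter()
        .enumerate()
        .map(|(idx, &key)| {
            // Windows sharing a base key get consecutive repetitions: f, ff, ...
            let n = seen.entry(key).or_insert(0);
            *n += 1;
            (key.to_string().repeat(*n), idx)
        })
        .collect();
    // Pairs are sorted by original window index, preserving display order.
    hints.sort_by_key(|&(_, idx)| idx);
    hints
}

fn main() {
    // Two Firefox windows and one Ghostty window.
    let hints = assign_app_hints(&['f', 'f', 'g']);
    assert_eq!(
        hints,
        vec![("f".to_string(), 0), ("ff".to_string(), 1), ("g".to_string(), 2)]
    );
}
```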

Key Selection

The base key for each application is determined by key_for_app(app_id, key_bindings) with the following priority:

1. Explicit Config Override

The key_bindings map in WmConfig allows explicit key-to-app mapping. Each WmKeyBinding entry contains an apps list of app ID patterns:

```toml
[profiles.default.wm.key_bindings.f]
apps = ["firefox", "org.mozilla.firefox"]
launch = "firefox"
```

key_for_app() iterates all key bindings and checks each pattern against the app ID using three comparisons:

  • Exact match: pattern == app_id
  • Case-insensitive match: pattern.to_lowercase() == app_id.to_lowercase()
  • Last-segment match: the reverse-DNS last segment of app_id (lowercased) equals the pattern (lowercased). For org.mozilla.firefox, the last segment is firefox.

The first matching binding’s key character is returned.

2. Auto-Key Detection

If no explicit binding matches, auto_key_for_app(app_id) extracts the first alphabetic character from the last segment of the app ID (split on .):

  • com.mitchellh.ghostty – last segment is ghostty, auto-key is g.
  • firefox – no dots, the full string is the segment, auto-key is f.
  • microsoft-edge – auto-key is m.

The character is lowercased. If no alphabetic character is found, None is returned and assign_app_hints() falls back to 'a'.
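The two resolution steps can be sketched as follows. These are simplified, hypothetical signatures (the real key_for_app also takes the key_bindings map and iterates it):

```rust
/// Sketch of the three pattern comparisons used for explicit bindings.
fn pattern_matches(pattern: &str, app_id: &str) -> bool {
    let last_segment = app_id.rsplit('.').next().unwrap_or(app_id);
    pattern == app_id                                            // exact
        || pattern.to_lowercase() == app_id.to_lowercase()       // case-insensitive
        || last_segment.to_lowercase() == pattern.to_lowercase() // last segment
}

/// Auto-key: first alphabetic character of the last dot-separated segment.
fn auto_key_for_app(app_id: &str) -> Option<char> {
    let last_segment = app_id.rsplit('.').next().unwrap_or(app_id);
    last_segment
        .chars()
        .find(|c| c.is_alphabetic())
        .map(|c| c.to_ascii_lowercase())
}

fn main() {
    assert!(pattern_matches("firefox", "org.mozilla.firefox"));
    assert_eq!(auto_key_for_app("com.mitchellh.ghostty"), Some('g'));
    assert_eq!(auto_key_for_app("firefox"), Some('f'));
    assert_eq!(auto_key_for_app("microsoft-edge"), Some('m'));
}
```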

Default Key Bindings

WmConfig::default() ships with bindings for common applications:

| Key | Applications | Launch Command |
|---|---|---|
| g | ghostty, com.mitchellh.ghostty | ghostty |
| f | firefox, org.mozilla.firefox | firefox |
| e | microsoft-edge | microsoft-edge |
| c | chromium, google-chrome | |
| v | code, Code, cursor, Cursor | code |
| n | nautilus, org.gnome.Nautilus | nautilus |
| s | slack, Slack | slack |
| d | discord, Discord | discord |
| m | spotify | spotify |
| t | thunderbird | thunderbird |

Numeric Shorthand

The normalize_input() function expands numeric suffixes before matching. This allows users to type a2 instead of aa, or f3 instead of fff:

  • a2 normalizes to aa
  • a3 normalizes to aaa
  • f1 normalizes to f

Expansion rules:

  • The input must be at least 2 characters long.
  • The trailing characters must all be ASCII digits.
  • The leading characters must all be the same letter (e.g., a or aa, but not ab).
  • The numeric value must be between 1 and 26 inclusive.

If any rule is violated, the input is returned as-is (lowercased). Mixed-character inputs like ab2 are not expanded because the letter prefix contains non-identical characters.
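The rules above can be sketched directly. This is an illustrative reimplementation; the real normalize_input in daemon-wm/src/hints.rs may differ on undocumented edge cases:

```rust
/// Sketch of numeric-shorthand expansion per the documented rules.
fn normalize_input(input: &str) -> String {
    let lower = input.to_lowercase();
    let letters: String = lower.chars().take_while(|c| c.is_ascii_alphabetic()).collect();
    let digits = &lower[letters.len()..];
    let first = letters.chars().next();
    let same_letter = first.map_or(false, |f| letters.chars().all(|c| c == f));
    // Rules: >= 2 chars, all-digit suffix, identical-letter prefix, value 1..=26.
    if lower.len() >= 2
        && same_letter
        && !digits.is_empty()
        && digits.chars().all(|c| c.is_ascii_digit())
    {
        if let Ok(n) = digits.parse::<usize>() {
            if (1..=26).contains(&n) {
                return first.unwrap().to_string().repeat(n);
            }
        }
    }
    lower // any rule violated: return as-is, lowercased
}

fn main() {
    assert_eq!(normalize_input("a2"), "aa");
    assert_eq!(normalize_input("f1"), "f");
    assert_eq!(normalize_input("ab2"), "ab2"); // mixed letters: not expanded
    assert_eq!(normalize_input("S"), "s");     // lowercased only
}
```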

Case-Insensitive Matching

All input is lowercased by normalize_input() before matching. Typing S matches the hint s. This applies to both direct character matching and numeric shorthand expansion.

Match Results

The match_input(input, hints) function normalizes the input and returns one of three MatchResult variants:

  • Exact(index) – Exactly one hint equals the normalized input, and no other hints share it as a prefix. The controller selects this window.
  • Partial(indices) – Multiple hints start with the normalized input. This includes cases where one hint is an exact match but others share the same prefix (e.g., typing a with hints a, aa, aaa yields Partial([0, 1, 2])). The controller updates the display but does not commit a selection.
  • NoMatch – No hint starts with the normalized input. The controller checks for a launch command binding.
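The three variants can be sketched as a prefix scan over the hint list; `input` is assumed to be already normalized (the real match_input calls normalize_input first):

```rust
#[derive(Debug, PartialEq)]
enum MatchResult {
    Exact(usize),
    Partial(Vec<usize>),
    NoMatch,
}

/// Sketch of match_input over pre-normalized input.
fn match_input(input: &str, hints: &[&str]) -> MatchResult {
    let matches: Vec<usize> = hints
        .iter()
        .enumerate()
        .filter(|(_, h)| h.starts_with(input))
        .map(|(i, _)| i)
        .collect();
    if matches.is_empty() {
        MatchResult::NoMatch
    } else if matches.len() == 1 && hints[matches[0]] == input {
        // Exactly one hint equals the input and nothing else shares the prefix.
        MatchResult::Exact(matches[0])
    } else {
        MatchResult::Partial(matches)
    }
}

fn main() {
    let hints = ["a", "aa", "aaa"];
    // "a" is itself a hint, but "aa"/"aaa" share the prefix: Partial.
    assert_eq!(match_input("a", &hints), MatchResult::Partial(vec![0, 1, 2]));
    assert_eq!(match_input("aaa", &hints), MatchResult::Exact(2));
    assert_eq!(match_input("z", &hints), MatchResult::NoMatch);
}
```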

Focus-or-Launch

When check_hint_or_launch() in the controller receives MatchResult::NoMatch and the input buffer contains exactly one character, it calls hints::launch_for_key(key, key_bindings). If a launch command exists for that key:

  1. A PendingLaunch is staged via set_pending_launch(), containing the command string, tags, and launch args from the binding.
  2. The overlay displays the staged intent via Command::ShowLaunchStaged.
  3. The launch executes on modifier release or Enter (see Staged Launch).

If no launch command is configured for the key, the input is treated as a filter with no matches.

Tags and Launch Args

Each WmKeyBinding can carry tags and launch_args fields:

```toml
[profiles.default.wm.key_bindings.g]
apps = ["ghostty"]
launch = "ghostty"
tags = ["dev-rust", "ai-tools"]
launch_args = ["--working-directory=/workspace"]
```
  • tags_for_key(key, key_bindings) returns the tags vector for the matching key. Tags are forwarded to daemon-launcher in the LaunchExecute IPC message for launch profile composition (environment variable injection, secret fetching, Nix devshell activation). Tags support qualified cross-profile references using colon syntax (e.g., "work:corp").
  • launch_args_for_key(key, key_bindings) returns the launch_args vector. These are appended to the launched command’s argument list.

Both functions perform case-insensitive key lookup by lowercasing the input character before looking up the BTreeMap.

Clipboard Daemon

The daemon-clipboard process manages per-profile clipboard history with sensitivity classification. It runs as a single-threaded tokio process (current_thread runtime) connected to the Noise IK IPC bus as a BusClient.

Storage

Clipboard entries are stored in a SQLite database at ~/.cache/open-sesame/clipboard.db, opened via rusqlite::Connection. The parent directory is created if absent. The schema consists of a single table:

```sql
CREATE TABLE IF NOT EXISTS clipboard_entries (
    entry_id TEXT PRIMARY KEY,
    profile_id TEXT NOT NULL,
    content TEXT NOT NULL,
    content_type TEXT NOT NULL DEFAULT 'text/plain',
    sensitivity TEXT NOT NULL DEFAULT 'public',
    preview TEXT NOT NULL,
    timestamp_ms INTEGER NOT NULL
);

CREATE INDEX IF NOT EXISTS idx_clipboard_profile
    ON clipboard_entries(profile_id, timestamp_ms DESC);
```

The index on (profile_id, timestamp_ms DESC) supports efficient per-profile history queries ordered by recency.

Per-Profile History

All clipboard entries are associated with a profile_id. Queries filter by profile, ensuring that clipboard history from one trust profile is not visible to another. This scoping is enforced at the storage layer – every SELECT, DELETE, and aggregate query includes a WHERE profile_id = ? predicate.

Sensitivity Classification

Each clipboard entry carries a sensitivity field stored as a text string in SQLite and mapped to the SensitivityClass enum on read:

| Value | Enum Variant | Description |
|---|---|---|
| public | SensitivityClass::Public | Non-sensitive content |
| confidential | SensitivityClass::Confidential | Internal or business data |
| secret | SensitivityClass::Secret | Credentials, tokens |
| topsecret | SensitivityClass::TopSecret | High-value secrets |

Unknown string values default to Public. The entry_id field uses UUIDv7 (uuid::Uuid::now_v7()), providing time-ordered unique identifiers.

IPC Interface

The daemon handles the following IPC messages:

| Message | Response | Description |
|---|---|---|
| ClipboardHistory | ClipboardHistoryResponse | Returns the most recent limit entries for a profile |
| ClipboardGet | ClipboardGetResponse | Retrieves full content for a specific entry by UUID |
| ClipboardClear | ClipboardClearResponse | Deletes all clipboard entries for a profile |
| KeyRotationPending | (none) | Reconnects with a rotated IPC keypair |

The ClipboardHistory response includes entry_id, content_type, sensitivity, profile_id, preview, and timestamp_ms per entry. The content field is not included in history responses to avoid transmitting large payloads over IPC. Use ClipboardGet to retrieve full content.

All IPC responses are correlated to the original request via Message::with_correlation(msg.msg_id).

Process Hardening

On Linux, daemon-clipboard applies the following security measures:

  • platform_linux::security::harden_process() for process-level hardening.
  • Resource limits: nofile = 4096, memlock_bytes = 0.
  • core_types::init_secure_memory() for memfd_secret probing.
  • Landlock filesystem sandbox restricting access to:
    • IPC key directory ($XDG_RUNTIME_DIR/pds/keys/) – read-only.
    • Bus public key ($XDG_RUNTIME_DIR/pds/bus.pub) – read-only.
    • Bus socket ($XDG_RUNTIME_DIR/pds/bus.sock) – read-write.
    • Wayland socket ($XDG_RUNTIME_DIR/$WAYLAND_DISPLAY) – read-write.
    • Cache directory (~/.cache/open-sesame/) – read-write (for SQLite database).
    • Config symlink targets (e.g., /nix/store paths) – read-only.
  • Seccomp syscall filter with an allowlist including: SQLite-relevant syscalls (fsync, fdatasync, flock, pread64, lseek), Wayland protocol syscalls (socket, connect, sendmsg, recvmsg), inotify syscalls for config hot-reload, and memfd_secret for secure memory.
  • If applying the sandbox fails, the daemon panics with "refusing to run unsandboxed", ensuring it never operates without confinement.

Configuration

The daemon loads configuration via core_config::load_config() and establishes a ConfigWatcher with a callback channel for hot-reload. On config change, the callback sends a notification, and the event loop publishes ConfigReloaded { changed_keys: ["clipboard"] } to the IPC bus.

Lifecycle

  1. Startup: Process hardening, directory bootstrap, config load, IPC bus connection with keypair retry (5 attempts, 500ms interval), sandbox application.
  2. Announcement: Publishes DaemonStarted { capabilities: ["clipboard", "history"] }.
  3. Readiness: Calls platform_linux::systemd::notify_ready().
  4. Event loop: tokio::select! over watchdog timer (15s), IPC messages, config reload notifications, SIGINT, and SIGTERM.
  5. Shutdown: Publishes DaemonStopped { reason: "shutdown" }.

Input Daemon

The daemon-input process captures keyboard events via Linux evdev and forwards them over the Noise IK IPC bus for consumption by daemon-wm. It runs as a single-threaded tokio process (current_thread runtime) connected to the IPC bus as a BusClient.

Device Discovery

The spawn_keyboard_readers() function (keyboard.rs) enumerates input devices via platform_linux::input::enumerate_devices(), filters to those with is_keyboard = true, and opens each as an async EventStream via platform_linux::input::open_keyboard_stream().

One tokio task is spawned per keyboard device. All tasks funnel events into a single mpsc channel with a buffer size of 256. If no keyboard devices are found (typically because the user is not in the input group), the function logs a warning with remediation advice (sudo usermod -aG input $USER) and returns an empty receiver. This is non-fatal – daemon-wm falls back to SCTK keyboard input from its layer-shell surface.

Event Reading

Each reader task processes evdev events via stream.next_event().await in a loop. Only EventSummary::Key events are forwarded:

  • value 0: Key release – forwarded as RawKeyEvent { keycode, pressed: false }.
  • value 1: Key press – forwarded as RawKeyEvent { keycode, pressed: true }.
  • value 2: Key repeat – skipped. Repeat handling is left to the consumer.

The keycode field contains the evdev hardware scan code (e.g., 30 for KEY_A). The keycode is cast from evdev::Key to u32 via keycode.0 as u32. On read errors (device disconnect, permission denied), the task logs a warning and returns, ending that device’s reader.

XKB Keysym Translation

The XkbContext struct wraps an xkbcommon::xkb::State initialized with the system’s default keymap. XkbContext::new() calls Keymap::new_from_names() with empty strings for rules, model, layout, and variant (meaning system defaults), and KEYMAP_COMPILE_NO_FLAGS. It returns None if xkbcommon fails to initialize (missing XKB data files).

Translation Process

process_key(evdev_keycode, pressed) translates a raw evdev event into a KeyboardEvent:

  1. Offset: Adds the XKB offset (xkb_keycode = evdev_keycode + 8) because evdev keycodes are offset by 8 from XKB keycodes.
  2. Pre-read: Reads the keysym via state.key_get_one_sym() and UTF-32 character via state.key_get_utf32() before updating state. This ordering is critical: when the Alt key itself is pressed, the modifier mask returned by active_modifiers() must not yet include Alt, ensuring correct modifier-release detection on the receiving end (daemon-wm).
  3. Modifiers: Calls active_modifiers() to build the current modifier bitmask.
  4. State update: Calls state.update_key() with the key direction after reading.
  5. Unicode: The unicode field is populated only on key press (pressed == true) and only when key_get_utf32() returns a non-zero value.

Modifier Bitmask

The active_modifiers() method queries four XKB named modifiers and maps them to GDK-compatible bit positions:

| Modifier | XKB Constant | Bit Position | GDK Name |
|---|---|---|---|
| Shift | MOD_NAME_SHIFT | bit 0 | GDK_SHIFT_MASK |
| Control | MOD_NAME_CTRL | bit 2 | GDK_CONTROL_MASK |
| Alt | MOD_NAME_ALT | bit 3 | GDK_ALT_MASK |
| Super | MOD_NAME_LOGO | bit 26 | GDK_SUPER_MASK |

Each modifier is checked via state.mod_name_is_active() with STATE_MODS_EFFECTIVE.
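The resulting bitmask can be sketched as below; in this simplified form the four booleans stand in for the state.mod_name_is_active() queries:

```rust
// GDK-compatible bit positions from the table above.
const GDK_SHIFT_MASK: u32 = 1 << 0;
const GDK_CONTROL_MASK: u32 = 1 << 2;
const GDK_ALT_MASK: u32 = 1 << 3;
const GDK_SUPER_MASK: u32 = 1 << 26;

/// Sketch of the bitmask construction in active_modifiers().
fn modifier_mask(shift: bool, ctrl: bool, alt: bool, superkey: bool) -> u32 {
    let mut mask = 0;
    if shift { mask |= GDK_SHIFT_MASK; }
    if ctrl { mask |= GDK_CONTROL_MASK; }
    if alt { mask |= GDK_ALT_MASK; }
    if superkey { mask |= GDK_SUPER_MASK; }
    mask
}

fn main() {
    assert_eq!(modifier_mask(true, false, true, false), 0b1001); // Shift+Alt
    assert_eq!(modifier_mask(false, false, false, true), 1 << 26);
}
```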

Fallback

If XkbContext::new() returns None, the daemon logs a warning and constructs KeyboardEvent structs with the raw evdev keycode as keyval, zero modifiers, and None for unicode.

Grab Protocol

The daemon tracks keyboard grab state via two variables: grab_active: bool and grab_requester: Option<DaemonId>.

When Grab Is Active

All key events (press and release, value 0 and 1) are translated via XkbContext::process_key() and published as InputKeyEvent messages on the IPC bus with SecurityLevel::Internal.

When Grab Is Inactive

Key events still flow through XkbContext::process_key() to keep modifier tracking accurate for future grabs. However, only Alt/Meta release events are forwarded. Specifically, if pressed == false and the keyval is in the range 0xFFE7..=0xFFEA (Meta_L, Meta_R, Alt_L, Alt_R), the event is published as InputKeyEvent.

This unconditional forwarding of modifier releases solves a race condition inherent to single-threaded runtimes: the InputGrabRequest IPC message may arrive after the user has already released Alt. Without this forwarding, daemon-wm would never detect the Alt release and the overlay would remain stuck. Only releases are forwarded (not presses), limiting extraneous IPC traffic to at most 4 keycodes.
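The forwarding decision reduces to a small predicate, sketched here (the keysym range covers Meta_L, Meta_R, Alt_L, Alt_R):

```rust
/// Sketch: with an active grab everything is published; otherwise only
/// Alt/Meta releases (keysyms 0xFFE7..=0xFFEA) are forwarded.
fn should_forward(grab_active: bool, pressed: bool, keyval: u32) -> bool {
    grab_active || (!pressed && (0xFFE7..=0xFFEA).contains(&keyval))
}

fn main() {
    assert!(should_forward(true, true, 0x0061));    // grab active: everything
    assert!(should_forward(false, false, 0xFFE9));  // Alt_L release forwarded
    assert!(!should_forward(false, true, 0xFFE9));  // Alt_L press dropped
    assert!(!should_forward(false, false, 0x0061)); // ordinary release dropped
}
```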

IPC Messages

| Message | Response | Description |
|---|---|---|
| InputGrabRequest | InputGrabResponse | Activates the grab and records the requester |
| InputGrabRelease | (none) | Deactivates the grab if requester matches |
| InputLayersList | InputLayersListResponse | Returns configured input remap layers |
| InputStatus | InputStatusResponse | Returns current daemon status |
| KeyRotationPending | (none) | Reconnects with a rotated IPC keypair |

KeyDeduplicator

The KeyDeduplicator (daemon-wm/src/ipc_keys.rs) prevents duplicate processing when both the SCTK keyboard handler and IPC InputKeyEvent fire for the same physical keystroke. It is instantiated in the daemon-wm main loop, not in daemon-input.

Implementation

  • An 8-entry ring buffer stores (keyval: u32, pressed: bool, timestamp: Instant) tuples, initialized to (0, false, epoch).
  • accept(keyval, pressed) scans the entire buffer. If any entry matches the same keyval and pressed direction within 50ms of the current time, the event is rejected (returns false). Otherwise, the event is recorded at the current ring index (which advances modulo 8) and accepted (returns true).
  • Direction-aware: a press (pressed = true) and release (pressed = false) of the same key are treated as distinct events and do not deduplicate each other.
  • The ring buffer wraps on overflow, overwriting the oldest entry.
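The behavior above can be sketched as follows; this is an illustrative reconstruction, not the code in daemon-wm/src/ipc_keys.rs:

```rust
use std::time::{Duration, Instant};

/// Sketch of the 8-entry ring-buffer deduplicator.
struct KeyDeduplicator {
    ring: [(u32, bool, Instant); 8],
    idx: usize,
}

impl KeyDeduplicator {
    fn new() -> Self {
        // Seed entries far in the past so they never match a real event.
        let epoch = Instant::now()
            .checked_sub(Duration::from_secs(3600))
            .unwrap_or_else(Instant::now);
        Self { ring: [(0, false, epoch); 8], idx: 0 }
    }

    /// Rejects an event if the same keyval and direction was seen < 50ms ago.
    fn accept(&mut self, keyval: u32, pressed: bool) -> bool {
        let now = Instant::now();
        let dup = self.ring.iter().any(|&(k, p, t)| {
            k == keyval && p == pressed && now.duration_since(t) < Duration::from_millis(50)
        });
        if dup {
            return false;
        }
        // Record at the current index; the ring wraps, evicting the oldest.
        self.ring[self.idx] = (keyval, pressed, now);
        self.idx = (self.idx + 1) % 8;
        true
    }
}

fn main() {
    let mut dedup = KeyDeduplicator::new();
    assert!(dedup.accept(30, true));  // first press of KEY_A accepted
    assert!(!dedup.accept(30, true)); // duplicate press within 50ms rejected
    assert!(dedup.accept(30, false)); // release is a distinct direction
}
```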

IPC Key Mapping

map_ipc_key_to_event(keyval, modifiers, unicode) in daemon-wm/src/ipc_keys.rs translates XKB keysyms received via IPC into controller Event variants:

| Keysym | Constant | Event |
|---|---|---|
| 0xFF1B | Escape | Event::Escape |
| 0xFF0D | Return | Event::Confirm |
| 0xFF8D | KP_Enter | Event::Confirm |
| 0xFF09 | Tab | None (suppressed: cycling handled by IPC re-activation) |
| 0xFF54 | Down | Event::SelectionDown |
| 0xFF52 | Up | Event::SelectionUp |
| 0xFF08 | Backspace | Event::Backspace |
| 0x0020 | Space | Event::Char(' ') |
| (other) | | Event::Char(ch) if unicode is Some and passes is_ascii_graphic() |

Tab is explicitly suppressed because cycling through the window list is handled at the IPC level by the compositor intercepting Alt+Tab and sending WmActivateOverlay / WmActivateOverlayBackward. Forwarding Tab as SelectionDown would cause double-advancement.
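The mapping table reduces to a single match expression; this sketch omits the modifiers parameter for brevity:

```rust
#[derive(Debug, PartialEq)]
enum Event {
    Escape,
    Confirm,
    SelectionDown,
    SelectionUp,
    Backspace,
    Char(char),
}

/// Sketch of the keysym-to-event mapping described above.
fn map_ipc_key_to_event(keyval: u32, unicode: Option<char>) -> Option<Event> {
    match keyval {
        0xFF1B => Some(Event::Escape),
        0xFF0D | 0xFF8D => Some(Event::Confirm), // Return, KP_Enter
        0xFF09 => None, // Tab suppressed: cycling handled by IPC re-activation
        0xFF54 => Some(Event::SelectionDown),
        0xFF52 => Some(Event::SelectionUp),
        0xFF08 => Some(Event::Backspace),
        0x0020 => Some(Event::Char(' ')),
        _ => unicode.filter(char::is_ascii_graphic).map(Event::Char),
    }
}

fn main() {
    assert_eq!(map_ipc_key_to_event(0xFF1B, None), Some(Event::Escape));
    assert_eq!(map_ipc_key_to_event(0xFF09, Some('\t')), None); // Tab
    assert_eq!(map_ipc_key_to_event(0x0061, Some('a')), Some(Event::Char('a')));
}
```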

Process Hardening

On Linux, daemon-input applies:

  • platform_linux::security::harden_process() for process-level hardening.
  • Resource limits: nofile = 4096, memlock_bytes = 0.
  • core_types::init_secure_memory() for memfd_secret probing.
  • Landlock sandbox restricting access to:
    • IPC key directory ($XDG_RUNTIME_DIR/pds/keys/) – read-only.
    • Bus public key and socket – read-only and read-write respectively.
    • /dev/input – read-only (evdev device access).
    • /sys/class/input – read-only (device enumeration symlinks).
    • /sys/devices – read-only (device metadata via symlink traversal).
    • Config symlink targets – read-only.
  • Seccomp syscall filter with evdev-relevant syscalls (ioctl for device queries), inotify for config hot-reload, memfd_secret, and standard I/O syscalls.
  • The sandbox panics on failure, refusing to run unsandboxed.

Compositor-Independent Operation

The daemon reads directly from /dev/input/event* devices rather than relying on compositor keyboard focus. This design is necessary because:

  1. The overlay’s KeyboardInteractivity::Exclusive may not be granted immediately by all compositors.
  2. The InputGrabRequest IPC message may arrive after the triggering keystroke.
  3. Some compositors may not forward all key events to layer-shell surfaces.

By reading at the evdev level, daemon-input captures keystrokes regardless of which window has compositor focus, providing a reliable input path for the overlay.

Lifecycle

  1. Startup: Process hardening, directory bootstrap, config load, keyboard reader spawn, XKB context creation, IPC bus connection with keypair retry (5 attempts, 500ms interval), sandbox application.
  2. Announcement: Publishes DaemonStarted { capabilities: ["input", "remap"] }.
  3. Readiness: Calls platform_linux::systemd::notify_ready().
  4. Event loop: tokio::select! over watchdog timer (15s), keyboard events, IPC messages, config reload notifications, SIGINT, and SIGTERM.
  5. Shutdown: Publishes DaemonStopped { reason: "shutdown" }.

Snippets Daemon

The daemon-snippets process manages text snippet templates with profile-scoped namespaces. It runs as a single-threaded tokio process (current_thread runtime) connected to the Noise IK IPC bus as a BusClient.

Storage

Snippets are stored in an in-memory HashMap<(String, String), String> keyed by (profile_name, trigger) with the template string as the value. The type alias SnippetMap defines this type.

The config schema does not yet include a dedicated snippets section, so build_snippet_map() returns an empty HashMap on startup and after every config reload. All snippet data is populated at runtime via SnippetAdd IPC messages.

On config hot-reload, the snippet map is rebuilt by calling build_snippet_map() with the new config, which currently clears all runtime-added snippets. This behavior will change when a persistent config-based snippet definition is added to the schema.

Profile Scoping

Every snippet is associated with a trust profile name as the first element of its (profile, trigger) composite key. This ensures that two profiles can define different expansions for the same trigger string without collision.

All operations are profile-scoped:

  • SnippetList: Filters the entire map with .filter(|((p, _), _)| p == &profile_str), returning only snippets belonging to the requested profile.
  • SnippetExpand: Performs an exact HashMap::get() lookup with the (profile, trigger) tuple.
  • SnippetAdd: Inserts or overwrites at the (profile, trigger) key. A snippet added under profile "work" is not visible from profile "personal".

IPC Interface

| Message | Response | Description |
|---|---|---|
| SnippetList | SnippetListResponse | Returns all snippets for the given profile |
| SnippetExpand | SnippetExpandResponse | Looks up the template for an exact trigger |
| SnippetAdd | SnippetAddResponse | Inserts or overwrites a snippet |
| KeyRotationPending | (none) | Reconnects with a rotated IPC keypair |

The SnippetList response returns Vec<SnippetInfo> where each entry contains trigger and template_preview. Previews are truncated to 80 characters: templates longer than 80 characters are cut to 77 characters with ... appended.
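The truncation rule can be sketched as follows (an illustrative reconstruction of the documented behavior):

```rust
/// Sketch of the preview rule: templates over 80 characters are cut to 77
/// characters with "..." appended, yielding an 80-character preview.
fn template_preview(template: &str) -> String {
    if template.chars().count() <= 80 {
        template.to_string()
    } else {
        let cut: String = template.chars().take(77).collect();
        format!("{cut}...")
    }
}

fn main() {
    let long = "x".repeat(100);
    let preview = template_preview(&long);
    assert_eq!(preview.chars().count(), 80);
    assert!(preview.ends_with("..."));
    assert_eq!(template_preview("short"), "short");
}
```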

All IPC responses are correlated to the original request via Message::with_correlation(msg.msg_id).

Trigger Matching

Trigger matching is exact and case-sensitive. The trigger field from a SnippetExpand request must match the stored trigger string byte-for-byte. The snippet map uses HashMap::get() with the (profile.to_string(), trigger.clone()) tuple as the key. No fuzzy matching, prefix matching, or normalization is performed.

Template Format

Templates are stored and returned as plain strings. The module-level documentation describes variable substitution and secret injection as design goals, but the current implementation returns the template string verbatim from SnippetExpand without any processing. The expansion pipeline for variable substitution (${VAR}) and secret injection (${secret:name}) is not yet implemented.

Process Hardening

On Linux, daemon-snippets applies the following security measures:

  • platform_linux::security::harden_process() for process-level hardening.
  • Resource limits: nofile = 4096, memlock_bytes = 0.
  • core_types::init_secure_memory() for memfd_secret probing.
  • Landlock filesystem sandbox restricting access to:
    • IPC key directory ($XDG_RUNTIME_DIR/pds/keys/) – read-only.
    • Bus public key ($XDG_RUNTIME_DIR/pds/bus.pub) – read-only.
    • Bus socket ($XDG_RUNTIME_DIR/pds/bus.sock) – read-write.
    • Config directory (~/.config/pds/) – read-only.
    • Config symlink targets (e.g., /nix/store paths) – read-only.
  • Seccomp syscall filter with an allowlist for standard I/O, memory management, networking (IPC socket), inotify (config hot-reload), memfd_secret, and process lifecycle syscalls.
  • The sandbox panics on application failure, refusing to run unsandboxed.

The sandbox is notably more restrictive than other desktop daemons: daemon-snippets requires no Wayland socket access, no /dev/input access, and no cache directory writes.

Lifecycle

  1. Startup: Process hardening, directory bootstrap, config load, snippet map build (empty), IPC bus connection with keypair retry (5 attempts, 500ms interval), sandbox application.
  2. Announcement: Publishes DaemonStarted { capabilities: ["snippets", "expansion"] }.
  3. Readiness: Calls platform_linux::systemd::notify_ready().
  4. Event loop: tokio::select! over watchdog timer (15s), IPC messages, config reload notifications (rebuilds snippet map from config), SIGINT, and SIGTERM.
  5. Shutdown: Publishes DaemonStopped { reason: "shutdown" }.

Launch Profiles

Launch profiles define composable environment bundles that attach to application launches. Each profile specifies environment variables, secret references, an optional Nix devshell, and an optional working directory. Profiles are scoped to trust profiles and composed at launch time via tags.

Profile Structure

A launch profile is defined by the LaunchProfile struct in core-config/src/schema_wm.rs:

```toml
[profiles.work.launch_profiles.dev-rust]
env = { RUST_LOG = "debug", CARGO_HOME = "/workspace/.cargo" }
secrets = ["github-token", "crates-io-token"]
devshell = "/workspace/myproject#rust"
cwd = "/workspace/usrbinkat/github.com/org/repo"
```

Each field is optional and defaults to empty:

| Field | Type | Description |
|---|---|---|
| env | BTreeMap<String, String> | Static environment variables injected into the child process. |
| secrets | Vec<String> | Secret names fetched from the vault and converted to env vars. |
| devshell | Option<String> | Nix flake devshell reference. Wraps the command in nix develop. |
| cwd | Option<String> | Absolute path used as the working directory for the spawned process. |

Tag System

Key bindings in the window manager configuration reference launch profiles through the tags field on WmKeyBinding:

```toml
[profiles.default.wm.key_bindings.g]
apps = ["ghostty", "com.mitchellh.ghostty"]
launch = "ghostty"
tags = ["dev-rust", "ai-tools"]
launch_args = ["--working-directory=/workspace/user/github.com/org/repo"]
```

When a key binding triggers a launch, daemon-launcher resolves each tag against the configuration to compose the final environment. Tags are processed in order; the composition rules are described below.

Cross-Profile Tag References

Tags support qualified cross-profile references using the profile:name syntax. An unqualified tag resolves against the default (or explicitly specified) trust profile. A qualified tag resolves against a different trust profile.

| Tag | Resolution |
|---|---|
| dev-rust | Resolves dev-rust in the current trust profile. |
| work:corp | Resolves corp in the work trust profile. |

The parsing logic in daemon-launcher/src/launch.rs (parse_tag) splits on the first colon. If no colon is present, the tag is unqualified and uses the default profile.

Cross-profile references allow a single key binding to compose environments from multiple trust boundaries. For example, a terminal binding might combine a personal development environment with corporate secrets:

```toml
tags = ["dev-rust", "work:corp-secrets"]
```
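The split-on-first-colon rule is small enough to sketch directly (an illustrative version of parse_tag; the real function's exact signature may differ):

```rust
/// Sketch of parse_tag: split on the first colon; an unqualified tag
/// resolves against the default (or explicitly specified) trust profile.
fn parse_tag<'a>(tag: &'a str, default_profile: &'a str) -> (&'a str, &'a str) {
    match tag.split_once(':') {
        Some((trust_profile, launch_profile)) => (trust_profile, launch_profile),
        None => (default_profile, tag),
    }
}

fn main() {
    assert_eq!(parse_tag("dev-rust", "personal"), ("personal", "dev-rust"));
    assert_eq!(parse_tag("work:corp", "personal"), ("work", "corp"));
}
```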

Tag Composition Rules

When multiple tags are specified, they are processed sequentially. The composition semantics are:

  • Environment variables: merged into a single BTreeMap. When the same key appears in multiple tags, the later tag wins.
  • Secrets: accumulated. Duplicate secret names (same name, same trust profile) are deduplicated; secrets from different trust profiles are kept independently.
  • Devshell: last tag with a non-None devshell wins.
  • Working directory: last tag with a non-None cwd wins.

This is implemented in daemon-launcher/src/launch.rs in the launch_entry function. The composed environment is applied to the child process after secret fetching completes.
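The composition rules can be sketched as a fold over the resolved profiles. This is a simplified illustration; the real launch_entry also tracks each secret's owning trust profile for deduplication:

```rust
use std::collections::BTreeMap;

#[derive(Clone, Default)]
struct LaunchProfile {
    env: BTreeMap<String, String>,
    secrets: Vec<String>,
    devshell: Option<String>,
    cwd: Option<String>,
}

/// Sketch: later tags win for colliding env keys, devshell, and cwd;
/// secrets accumulate with duplicates removed.
fn compose(resolved: &[LaunchProfile]) -> LaunchProfile {
    let mut out = LaunchProfile::default();
    for p in resolved {
        out.env.extend(p.env.clone()); // later insert overwrites: later tag wins
        for s in &p.secrets {
            if !out.secrets.contains(s) {
                out.secrets.push(s.clone());
            }
        }
        if p.devshell.is_some() {
            out.devshell = p.devshell.clone();
        }
        if p.cwd.is_some() {
            out.cwd = p.cwd.clone();
        }
    }
    out
}

fn main() {
    let dev_rust = LaunchProfile {
        env: BTreeMap::from([("RUST_LOG".to_string(), "info".to_string())]),
        secrets: vec!["github-token".to_string()],
        devshell: Some("/workspace/project#rust".to_string()),
        cwd: None,
    };
    let ai_tools = LaunchProfile {
        env: BTreeMap::from([("RUST_LOG".to_string(), "debug".to_string())]),
        secrets: vec!["github-token".to_string(), "anthropic-api-key".to_string()],
        devshell: None,
        cwd: None,
    };
    let composed = compose(&[dev_rust, ai_tools]);
    assert_eq!(composed.env["RUST_LOG"], "debug"); // later tag wins
    assert_eq!(composed.secrets.len(), 2);         // duplicate deduplicated
    assert!(composed.devshell.is_some());          // last non-None wins
}
```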

Configuration Schema

Launch profiles live under each trust profile’s configuration section:

```toml
[profiles.personal]
# ... other profile settings ...

[profiles.personal.launch_profiles.dev-rust]
env = { RUST_LOG = "debug" }
secrets = ["github-token"]
devshell = "/workspace/project#rust"
cwd = "/workspace/usrbinkat/github.com/org/repo"

[profiles.personal.launch_profiles.ai-tools]
env = { ANTHROPIC_MODEL = "claude-sonnet-4-20250514" }
secrets = ["anthropic-api-key"]
```

The full path in the config tree is profiles.<trust_profile_name>.launch_profiles.<launch_profile_name>. Daemon-launcher reads these from the hot-reloaded configuration state (ConfigWatcher) at launch time, so changes take effect without daemon restart.

Denial Handling

If a tag references a trust profile or launch profile that does not exist, daemon-launcher returns a structured LaunchDenial to the window manager:

  • LaunchDenial::ProfileNotFound – the trust profile name in a qualified tag does not exist.
  • LaunchDenial::LaunchProfileNotFound – the launch profile name does not exist within the resolved trust profile.

The window manager can use these denials to display user-facing error messages.

Secret Injection

Secrets flow from per-profile SQLCipher vaults into launched processes as environment variables. Two mechanisms exist: the sesame env CLI command for interactive use, and the daemon-launcher IPC path for overlay-driven launches.

Vault to Environment Pipeline

daemon-launcher Path

When daemon-launcher processes a LaunchExecute request with tags, the pipeline is:

  1. Tag resolution: each tag is resolved to a LaunchProfile from the hot-reloaded configuration. Cross-profile tags (work:corp) route to different trust profiles.
  2. Secret collection: secret names from all resolved launch profiles are accumulated with their owning trust profile name. Duplicates (same name, same profile) are deduplicated.
  3. IPC fetch: for each (secret_name, trust_profile_name) pair, daemon-launcher sends a SecretGet request over the Noise IK bus to daemon-secrets. The request specifies the trust profile that owns the vault.
  4. Name conversion: the secret name is converted to an environment variable name (see below).
  5. Environment injection: the secret value is inserted into the composed environment map, then passed to the child process via Command::env().
  6. Zeroization: after the child process is spawned and the environment has been copied to the OS process, all secret values in the composed environment map are zeroized via zeroize::Zeroize.

Batched Denial Collection

Daemon-launcher does not abort on the first secret fetch failure. Instead, it collects all denials and returns them in a single response so the window manager can prompt for all required vault unlocks at once:

  • Locked vaults: SecretDenialReason::Locked or ProfileNotActive denials are collected into a locked_profiles list.
  • Missing secrets: SecretDenialReason::NotFound denials increment a missing_count.
  • Rate limiting: SecretDenialReason::RateLimited causes an immediate abort with LaunchDenial::RateLimited.

After iterating all secrets, locked vaults take priority: if any vaults are locked, LaunchDenial::VaultsLocked is returned with the full list. Otherwise, if secrets are missing, LaunchDenial::SecretNotFound is returned.

sesame env

The sesame env command spawns a child process with vault secrets injected as environment variables:

```sh
sesame env -p work -- my-command --flag
```

It connects to the IPC bus, fetches all secrets for the specified profile(s), converts each secret name to an env var, injects them into the child process, waits for the child to exit, zeroizes all secret copies, and exits with the child’s exit code.

The child also receives a SESAME_PROFILES environment variable containing a comma-separated list of the profile specs that were used.

sesame export

The sesame export command outputs secrets in shell, dotenv, or JSON format without spawning a child:

```sh
sesame export -p work --format shell
sesame export -p work --format dotenv
sesame export -p work --format json
```

Output is written to stdout. Secret values are zeroized after printing.

Secret Name to Env Var Conversion

Two conversion implementations exist, with slightly different rules:

daemon-launcher (launch.rs)

Applies to secrets injected via launch profile tags:

  • Uppercase the entire name.
  • Replace hyphens with underscores.

Examples: github-token becomes GITHUB_TOKEN. anthropic-api-key becomes ANTHROPIC_API_KEY.

sesame env / sesame export (env.rs)

Applies to secrets injected via the CLI:

  • Uppercase the entire name.
  • Replace hyphens, dots, and non-alphanumeric characters (except underscores) with underscores.

Examples: api-key becomes API_KEY. db.host-name becomes DB_HOST_NAME.
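Both conversions can be sketched side by side; these are illustrative reconstructions of the documented rules, not the code in launch.rs and env.rs:

```rust
/// daemon-launcher rule: uppercase, hyphens to underscores.
fn launcher_env_name(secret: &str) -> String {
    secret.to_uppercase().replace('-', "_")
}

/// sesame env / export rule: uppercase, then any character that is not
/// alphanumeric or an underscore becomes an underscore.
fn cli_env_name(secret: &str) -> String {
    secret
        .to_uppercase()
        .chars()
        .map(|c| if c.is_ascii_alphanumeric() || c == '_' { c } else { '_' })
        .collect()
}

fn main() {
    assert_eq!(launcher_env_name("github-token"), "GITHUB_TOKEN");
    assert_eq!(cli_env_name("api-key"), "API_KEY");
    assert_eq!(cli_env_name("db.host-name"), "DB_HOST_NAME");
}
```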

Prefix System

The --prefix flag (available on sesame env and sesame export) prepends a string to every generated environment variable name, separated by an underscore:

sesame env --prefix MYAPP -p work -- my-command

With prefix MYAPP, the secret api-key becomes MYAPP_API_KEY.

The prefix is also configurable per-workspace via .sesame.toml:

secret_prefix = "MYAPP"

The prefix is applied after the name-to-env-var conversion, so the full transformation is: secret_name -> uppercase + substitute -> prepend prefix.
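Taken together, the CLI-side conversion and prefixing can be sketched as a single function. This is a hypothetical helper, not the actual env.rs code; the name to_env_var is illustrative:

```rust
// Hypothetical helper sketching the env.rs rules: uppercase, replace any
// character that is not ASCII alphanumeric or '_' with '_', then apply an
// optional prefix separated by an underscore.
fn to_env_var(name: &str, prefix: Option<&str>) -> String {
    let base: String = name
        .chars()
        .map(|c| {
            if c.is_ascii_alphanumeric() || c == '_' {
                c.to_ascii_uppercase()
            } else {
                '_'
            }
        })
        .collect();
    match prefix {
        Some(p) => format!("{p}_{base}"),
        None => base,
    }
}

fn main() {
    assert_eq!(to_env_var("db.host-name", None), "DB_HOST_NAME");
    assert_eq!(to_env_var("api-key", Some("MYAPP")), "MYAPP_API_KEY");
}
```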

Denied Environment Variables

The sesame env and sesame export commands maintain a deny list of environment variable names that must never be overwritten by secret injection. This prevents secrets with adversarial names from hijacking the dynamic linker, shell execution, or privilege escalation vectors. The deny list includes:

  • Dynamic linker variables: LD_PRELOAD, LD_LIBRARY_PATH, DYLD_INSERT_LIBRARIES, and others.
  • Core execution: PATH, HOME, SHELL, USER.
  • Shell injection vectors: BASH_ENV, IFS, PROMPT_COMMAND, and others.
  • Language runtime injection: PYTHONPATH, NODE_OPTIONS, RUBYOPT, and others.
  • Open Sesame’s own namespace: SESAME_PROFILE.

Matching is case-insensitive. The BASH_FUNC_ prefix is matched as a prefix pattern to block Bash function export injection.
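A minimal sketch of the deny-list check, assuming a small illustrative subset of the list (the real list is longer):

```rust
// Illustrative subset of the deny list; the real list is longer.
const DENY: &[&str] = &[
    "LD_PRELOAD", "LD_LIBRARY_PATH", "PATH", "HOME", "SHELL", "USER",
    "BASH_ENV", "IFS", "PROMPT_COMMAND", "PYTHONPATH", "NODE_OPTIONS",
    "SESAME_PROFILE",
];

// Case-insensitive membership test, plus BASH_FUNC_ as a prefix pattern
// to block Bash exported-function injection.
fn is_denied(name: &str) -> bool {
    let upper = name.to_ascii_uppercase();
    upper.starts_with("BASH_FUNC_") || DENY.contains(&upper.as_str())
}

fn main() {
    assert!(is_denied("ld_preload"));
    assert!(is_denied("BASH_FUNC_ls%%"));
    assert!(!is_denied("API_KEY"));
}
```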

Implicit Environment Variables

Every launched process receives these environment variables regardless of tag configuration:

Variable        Value
SESAME_PROFILE  The trust profile name used for the launch.
SESAME_APP_ID   The desktop entry ID of the launched application.
SESAME_SOCKET   Path to the IPC bus Unix socket.

These are injected after the composed environment, so they cannot be overridden by launch profile env entries.

Desktop Entry Discovery

Daemon-launcher discovers launchable applications by scanning XDG desktop entry files, builds a fuzzy search index, and ranks results using frecency (frequency + recency).

XDG Desktop Entry Scanning

The scanner in daemon-launcher/src/scanner.rs uses the freedesktop-desktop-entry crate to enumerate .desktop files from $XDG_DATA_DIRS/applications/. Scanning is synchronous and runs in a tokio::task::spawn_blocking context at daemon startup.

Filtering Rules

Entries are filtered before indexing:

Condition       Action
NoDisplay=true  Skipped. Non-launchable entries (e.g., D-Bus activatable services).
Hidden=true     Skipped. Explicitly hidden by the packager.
No Exec= field  Skipped. Not a launchable application.
Duplicate ID    Only the first occurrence is kept.

Indexed Fields

For each surviving entry, the scanner produces a MatchItem with:

  • id: the desktop entry ID (e.g., org.mozilla.firefox).
  • name: the localized Name= field, falling back to the entry ID.
  • extra: a space-joined string of Keywords= and Categories= values, used to broaden fuzzy match surface.

The Exec line is cached separately in a CachedEntry for post-scan use during launch execution. The Exec cache is stored as a HashMap<String, CachedEntry> keyed by entry ID.

Fuzzy Matching

Daemon-launcher uses the nucleo fuzzy matching library (via the core-fuzzy crate). Items are injected into the matcher at startup via an Injector. Queries arrive as LaunchQuery IPC messages and are dispatched to SearchEngine::query(), which combines fuzzy match scores with frecency boosts.

Query results are returned as LaunchResult values containing the entry ID, display name, icon, and composite score.

Frecency Ranking

Launch frequency and recency are tracked in a per-profile SQLite database managed by core-fuzzy::FrecencyDb. The database file is stored at:

~/.config/pds/launcher/{profile_name}.frecency.db

Each trust profile has its own frecency database, providing isolation between profiles. When a LaunchQuery specifies a different profile than the current one, the search engine switches its frecency context via engine.switch_profile().

When a LaunchExecute succeeds, engine.record_launch(entry_id) updates the frecency database. The frecency boost is refreshed periodically via engine.refresh_frecency().

Desktop Entry Field Code Stripping

Before executing an Exec line, the scanner strips freedesktop %-prefixed field codes. These are placeholder tokens defined by the Desktop Entry Specification that would normally be replaced by a file manager:

Stripped codes: %f, %F, %u, %U, %d, %D, %n, %N, %i, %c, %k, %v, %m.

The literal %% sequence is collapsed to a single %.

After stripping, multiple consecutive spaces from removed codes are collapsed. The result is then tokenized using freedesktop quoting rules (double-quote escaping for \", \`, \\, \$). The tokenizer does not invoke a shell.
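The stripping and collapsing steps (not the quoting-aware tokenizer) can be sketched as follows. This is an illustrative helper, not the scanner's actual code:

```rust
// Sketch of field-code handling: drop any %-code, collapse %% to a
// literal %, then squeeze the runs of spaces that removed codes leave
// behind. The real implementation also tokenizes with freedesktop
// quoting rules afterwards.
fn strip_field_codes(exec: &str) -> String {
    let mut out = String::new();
    let mut chars = exec.chars();
    while let Some(c) = chars.next() {
        if c == '%' {
            match chars.next() {
                Some('%') => out.push('%'), // %% -> literal %
                _ => {}                     // %f, %U, %c, ... removed
            }
        } else {
            out.push(c);
        }
    }
    // Collapse runs of spaces left behind by removed codes.
    let mut collapsed = String::new();
    let mut prev_space = false;
    for c in out.chars() {
        if c == ' ' {
            if !prev_space {
                collapsed.push(c);
            }
            prev_space = true;
        } else {
            collapsed.push(c);
            prev_space = false;
        }
    }
    collapsed.trim().to_string()
}

fn main() {
    assert_eq!(strip_field_codes("firefox %u"), "firefox");
    assert_eq!(strip_field_codes("app --pct 100%% %F now"), "app --pct 100% now");
}
```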

3-Strategy Resolution Fallback

When a LaunchExecute request arrives, the entry ID is resolved against the cached entries using three strategies in order:

  1. Exact match: the entry ID matches a cache key exactly (e.g., org.mozilla.firefox).
  2. Last segment match: the entry ID matches the last dot-separated segment of a cached ID, case-insensitively (e.g., firefox matches org.mozilla.firefox).
  3. Case-insensitive full ID match: the entry ID matches a cached ID when both are lowercased (e.g., alacritty matches Alacritty).

If none of the three strategies produces a match, LaunchDenial::EntryNotFound is returned.
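The three strategies can be sketched over the Exec cache. This hypothetical helper maps entry IDs to plain strings for brevity; the real cache holds CachedEntry values:

```rust
use std::collections::HashMap;

// Resolve an entry ID against cache keys using the three strategies, in
// order: exact, last dot-separated segment (case-insensitive), then full
// ID case-insensitive.
fn resolve<'a>(cache: &'a HashMap<String, String>, id: &str) -> Option<&'a str> {
    // 1. Exact match.
    if let Some((k, _)) = cache.get_key_value(id) {
        return Some(k.as_str());
    }
    let lower = id.to_ascii_lowercase();
    // 2. Last dot-separated segment, case-insensitively.
    if let Some(k) = cache.keys().find(|k| {
        k.rsplit('.')
            .next()
            .map(|s| s.to_ascii_lowercase())
            .as_deref()
            == Some(lower.as_str())
    }) {
        return Some(k);
    }
    // 3. Case-insensitive full ID match.
    cache
        .keys()
        .find(|k| k.to_ascii_lowercase() == lower)
        .map(|k| k.as_str())
}

fn main() {
    let mut cache = HashMap::new();
    cache.insert("org.mozilla.firefox".to_string(), "firefox %u".to_string());
    cache.insert("Alacritty".to_string(), "alacritty".to_string());
    assert_eq!(resolve(&cache, "firefox"), Some("org.mozilla.firefox"));
    assert_eq!(resolve(&cache, "alacritty"), Some("Alacritty"));
    assert_eq!(resolve(&cache, "missing"), None);
}
```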

Process Management

Daemon-launcher spawns child processes in isolated systemd scopes with zombie reaping and post-spawn secret zeroization.

systemd-run Scope Isolation

Every launched process is wrapped in a transient systemd user scope via:

systemd-run --user --scope --unit=app-open-sesame-{entry_id}-{pid}.scope -- {program} {args}

This provides:

  • cgroup isolation: the child runs in its own cgroup, enabling per-application resource accounting via systemd-cgtop.
  • No inherited limits: the child does not inherit MemoryMax or mount namespace restrictions from the launcher’s service unit.
  • Launcher restart survival: because KillMode=process semantics apply to scopes (the scope itself has no main process), children survive launcher daemon restarts.
  • Clean unit naming: the scope name is sanitized from the entry ID by replacing non-alphanumeric characters with dashes and collapsing runs.
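The unit-name sanitization can be sketched like this (hypothetical helper mirroring the rule above: non-alphanumeric characters become dashes, and runs collapse):

```rust
// Build the transient scope name from an entry ID and the child PID,
// replacing non-alphanumeric characters with dashes and collapsing runs.
fn sanitize_scope(entry_id: &str, pid: u32) -> String {
    let mut unit = String::new();
    let mut prev_dash = false;
    for c in entry_id.chars() {
        if c.is_ascii_alphanumeric() {
            unit.push(c);
            prev_dash = false;
        } else if !prev_dash {
            unit.push('-');
            prev_dash = true;
        }
    }
    format!("app-open-sesame-{}-{}.scope", unit.trim_matches('-'), pid)
}

fn main() {
    assert_eq!(
        sanitize_scope("org.mozilla.firefox", 4242),
        "app-open-sesame-org-mozilla-firefox-4242.scope"
    );
}
```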

Fallback to Direct Spawn

If systemd-run is unavailable (not installed, or the spawn fails), daemon-launcher falls back to a direct Command::spawn(). The via_scope flag in the log output indicates which path was taken.

No Sandbox Inheritance

Daemon-launcher intentionally does not apply seccomp or Landlock sandboxing to itself. Seccomp and Landlock rules inherit across fork+exec and would be applied to every child process, breaking arbitrary desktop applications. The security boundary for daemon-launcher is the Noise IK authenticated IPC bus, not process-level sandboxing.

Child Reaping

After spawning, daemon-launcher reaps the wrapper process (or direct child) in a tokio::task::spawn_blocking closure that calls child.wait(). This prevents zombie accumulation.

When using systemd-run scopes, the reaped process is the systemd-run wrapper, not the application itself. The application continues running under the transient scope until it exits naturally.

Secret Zeroization

Secret values pass through two zeroization points:

  1. Error paths: if a secret value fails UTF-8 validation, the raw bytes are zeroized before returning the error.
  2. Post-spawn cleanup: after Command::spawn() copies the environment to the OS process, all values in the composed environment BTreeMap are zeroized via zeroize::Zeroize, and the map is dropped.

This ensures secret material does not persist in the daemon-launcher process memory after it has been handed off to the child.
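A dependency-free sketch of the post-spawn cleanup idea. The real code uses zeroize::Zeroize, whose volatile writes also stop the compiler from eliding the wipe; a plain loop like this gives no such guarantee:

```rust
use std::collections::BTreeMap;

// Overwrite every secret value in the composed environment map before
// dropping it. Values are modeled as Vec<u8> here for simplicity.
fn zeroize_env(env: &mut BTreeMap<String, Vec<u8>>) {
    for value in env.values_mut() {
        for byte in value.iter_mut() {
            *byte = 0; // best-effort overwrite of secret material
        }
    }
}

fn main() {
    let mut env = BTreeMap::new();
    env.insert("API_KEY".to_string(), b"s3cret".to_vec());
    zeroize_env(&mut env);
    assert_eq!(env["API_KEY"], vec![0u8; 6]);
    drop(env); // nothing secret left in the map
}
```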

I/O Configuration

Spawned processes have their I/O handles configured as:

Stream  Configuration
stdin   /dev/null (Stdio::null())
stdout  /dev/null (Stdio::null())
stderr  Inherited from daemon-launcher (Stdio::inherit())

Stderr inheritance allows application error output to reach the journal when daemon-launcher runs under systemd.

Environment Propagation

The composed environment (launch profile env vars + secrets + implicit SESAME_* vars) is propagated to both the systemd-run wrapper and the direct spawn fallback. The systemd-run process passes its environment through to the child in the scope.

The working directory (cwd) from the launch profile is validated as an absolute, existing directory path before being set on the command. Relative paths are rejected with an error.

Structured Logging

All Open Sesame daemons use the tracing crate for structured, leveled logging. Log output is configurable between JSON and human-readable formats, with journald integration on Linux.

Tracing Integration

Every daemon initializes a tracing-subscriber stack at startup. The two supported output formats are:

  • JSON (--log-format json, default for daemon-profile): machine-parseable structured JSON, one object per line. Enabled via tracing_subscriber::fmt().json().init().
  • Pretty (--log-format pretty): human-readable colored output via tracing_subscriber::fmt().init().

The format is selected via the --log-format CLI flag or the PDS_LOG_FORMAT environment variable. The implementation is in daemon-profile/src/sandbox.rs (init_logging).

RUST_LOG and Log Levels

All daemons read the RUST_LOG environment variable via tracing_subscriber::EnvFilter::try_from_default_env(). If RUST_LOG is not set, the default filter is info.

Standard tracing levels are used throughout:

Level  Usage
error  IPC failures, secret fetch denials, audit chain verification failures, sandbox application failures.
warn   Non-fatal issues: systemd-run fallback, corrupt audit tail entry, HTTP git URL detected.
info   Daemon lifecycle (starting, ready, shutting down), launch execution, watchdog ticks, config reloads, key rotation, audit chain verification on startup.
debug  Child reaping status, context engine debounce suppression.

journald Integration

The tracing-journald crate is a Linux dependency of daemon-launcher and other daemons. When running under systemd, structured log fields are forwarded to the journal as journal fields, enabling filtering with journalctl:

journalctl --user -u daemon-launcher.service
journalctl --user -u daemon-profile.service

Structured Fields

Tracing spans and events use structured key-value fields throughout the codebase. Notable patterns:

  • Launch execution: entry_id, program, arg_count, scope_name, tags, devshell, env_count, secret_count, via_scope, pid are attached to launch log lines in daemon-launcher/src/launch.rs.
  • Secret fetching: secret_count and per-secret reason fields on denial.
  • Watchdog: watchdog_tick_count tracks event loop health in daemon-profile/src/main.rs.
  • IPC messages: sender and msg_id identify message origin.
  • Audit: path, sequence, entries track audit log state at startup.
  • Security posture: sandbox status is logged after Landlock and seccomp application.
  • Key rotation: daemon_name, generation, clearance fields on rotation events.
  • Desktop entry resolution: entry_id, resolved_id logged with the resolution strategy used.

Daemon Startup Logging Sequence

Daemon-profile follows this startup sequence (other daemons follow a similar pattern):

  1. "daemon-profile starting" – logged immediately after CLI parsing.
  2. harden_process() and apply_resource_limits() – the platform layer hardens the process (RLIMIT_NOFILE, RLIMIT_MEMLOCK, etc.).
  3. init_secure_memory() – probes memfd_secret(2) availability and logs whether the kernel supports sealed anonymous memory for secret storage.
  4. Sandbox application – logs the Landlock and seccomp result via ?status structured field.
  5. IPC bus server bind – logs path and confirms Noise IK encryption.
  6. Per-daemon keypair generation – logs daemon, clearance for each of the six known daemons.
  7. Audit logger initialization – logs path and sequence (chain head position).
  8. Audit chain verification – logs entries count if the chain is intact, or an error if verification fails.
  9. Context engine initialization – logs profile (the default ProfileId).
  10. platform_linux::systemd::notify_ready() – sends READY=1 to systemd.
  11. "daemon-profile ready" – logged after readiness notification.

Audit Chain

Open Sesame maintains a tamper-evident audit log using a BLAKE3 hash chain. Every auditable operation appends a JSONL entry whose prev_hash field contains the hash of the previous entry’s serialized JSON. Tampering with any entry invalidates all subsequent hashes.

Hash Chain Mechanics

The AuditLogger in core-profile/src/audit.rs maintains three pieces of mutable state:

  • last_hash: the hex-encoded hash of the most recently written entry.
  • sequence: a monotonically increasing counter (starts at 1).
  • hash_algorithm: either Blake3 or Sha256 (configurable at construction, default is BLAKE3).

When append(action) is called:

  1. The sequence is incremented.
  2. A wall-clock timestamp (milliseconds since Unix epoch) is captured.
  3. An AuditEntry is constructed with the current last_hash as its prev_hash.
  4. The entry is serialized to a single-line JSON string.
  5. The JSON bytes are hashed with the configured algorithm (BLAKE3 or SHA-256).
  6. The resulting hex digest becomes the new last_hash.
  7. The JSON line is written to the underlying Write sink and flushed.

The first entry in a fresh log has an empty string as its prev_hash.
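The append steps can be sketched minimally as follows. A std hasher stands in for BLAKE3 and the JSON line is hand-rolled rather than serde-serialized, purely to keep the example dependency-free:

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Mutable logger state: hash of the last entry written, plus a counter.
struct ChainState {
    last_hash: String,
    sequence: u64,
}

// Append one entry: embed the *previous* hash as prev_hash, then hash
// the serialized line to become the next prev_hash. The real logger
// hashes the JSON bytes with BLAKE3 and writes/flushes the line.
fn append(state: &mut ChainState, action: &str) -> String {
    state.sequence += 1;
    let line = format!(
        "{{\"sequence\":{},\"action\":\"{}\",\"prev_hash\":\"{}\"}}",
        state.sequence, action, state.last_hash
    );
    let mut h = DefaultHasher::new();
    line.as_bytes().hash(&mut h);
    state.last_hash = format!("{:016x}", h.finish());
    line
}

fn main() {
    let mut st = ChainState { last_hash: String::new(), sequence: 0 };
    let first = append(&mut st, "ProfileActivated");
    assert!(first.contains("\"prev_hash\":\"\"")); // fresh log: empty prev_hash
    let second = append(&mut st, "SecretAccessed");
    assert!(!second.contains("\"prev_hash\":\"\"")); // chained to entry 1
    assert_eq!(st.sequence, 2);
}
```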

Entry Structure

Each JSONL line contains:

{
  "sequence": 1,
  "timestamp_ms": 1700000000000,
  "action": { "ProfileActivated": { "target": "...", "duration_ms": 42 } },
  "prev_hash": "",
  "agent_id": "..."
}

Field         Type             Description
sequence      u64              Monotonically increasing, starting at 1.
timestamp_ms  u64              Wall clock time in milliseconds since Unix epoch.
action        AuditAction      The auditable operation (see variants below).
prev_hash     String           Hex-encoded hash of the previous entry’s JSON. Empty for the first entry.
agent_id      Option<AgentId>  The agent identity that triggered the action, if known.

AuditAction Variants

The AuditAction enum in core-profile/src/lib.rs is #[non_exhaustive] and currently defines:

Variant                       Fields                                                Description
ProfileActivated              target: ProfileId, duration_ms: u32                   A trust profile was activated.
ProfileDeactivated            target: ProfileId, duration_ms: u32                   A trust profile was deactivated.
ProfileActivationFailed       target: ProfileId, reason: String                     Activation failed.
DefaultProfileChanged         previous: ProfileId, current: ProfileId               The default profile for new launches changed.
IsolationViolationAttempt     from_profile, resource                                A cross-profile resource access was blocked.
SecretAccessed                profile_id: ProfileId, secret_ref: String             A secret was read from a vault.
KeyRotationStarted            daemon_name: String, generation: u64                  IPC bus key rotation began.
KeyRotationCompleted          daemon_name: String, generation: u64                  Key rotation completed.
KeyRevoked                    daemon_name: String, reason: String, generation: u64  A daemon’s key was revoked.
SecretOperationAudited        action, profile, key, requester, outcome              A secret operation was logged.
AgentConnected                agent_id: AgentId, agent_type: AgentType              An agent connected.
AgentDisconnected             agent_id: AgentId, reason: String                     An agent disconnected.
InstallationCreated           id, org, machine_binding_present                      A new installation was registered.
ProfileIdMigrated             name, old_id, new_id                                  A profile’s internal ID was migrated.
AuthorizationRequired         request_id: Uuid, operation: String                   An operation requires authorization.
AuthorizationGranted          request_id, delegator, scope                          Authorization was granted.
AuthorizationDenied           request_id: Uuid, reason: String                      Authorization was denied.
AuthorizationTimeout          request_id: Uuid                                      An authorization request timed out.
DelegationRevoked             delegation_id, revoker, reason                        A delegation was revoked.
HeartbeatRenewed              delegation_id, renewal_source                         A delegation heartbeat was renewed.
FederationSessionEstablished  session_id, remote_installation                       A federation session was established.
FederationSessionTerminated   session_id: Uuid, reason: String                      A federation session ended.
PostureEvaluated              composite_score: f64                                  A security posture evaluation produced a score.

Tamper Detection: sesame audit verify

The sesame audit verify command in open-sesame/src/audit.rs reads the audit log at ~/.config/pds/audit.jsonl and replays the hash chain:

$ sesame audit verify
OK: 1247 entries verified.

The verification algorithm in core_profile::verify_chain:

  1. Iterates each non-empty JSONL line in order.
  2. Parses each line as an AuditEntry.
  3. Checks that entry.prev_hash matches the hash computed from the previous line’s raw JSON bytes.
  4. If any mismatch is found, returns an error identifying the broken sequence number and the expected vs. actual prev_hash.

Verification detects: modified entries, deleted entries, reordered entries, and injected entries. The test suite in core-profile/src/audit.rs explicitly validates detection of all four tampering modes.
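A dependency-free sketch of the replay loop. A std hasher stands in for BLAKE3, and the prev_hash field is extracted naively rather than by parsing the entry with serde:

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Stand-in digest for BLAKE3 over a line's raw bytes.
fn digest(line: &str) -> String {
    let mut h = DefaultHasher::new();
    line.as_bytes().hash(&mut h);
    format!("{:016x}", h.finish())
}

// Naive extraction of the prev_hash field from a JSONL line.
fn prev_hash_of(line: &str) -> &str {
    line.split("\"prev_hash\":\"")
        .nth(1)
        .and_then(|s| s.split('"').next())
        .unwrap_or("")
}

// Replay the chain: each line must embed the digest of the previous
// line's raw bytes. On mismatch, report the position of the break.
fn verify(lines: &[String]) -> Result<usize, usize> {
    let mut expected = String::new();
    let mut count = 0;
    for line in lines.iter().filter(|l| !l.is_empty()) {
        count += 1;
        if prev_hash_of(line) != expected {
            return Err(count); // broken at this entry
        }
        expected = digest(line);
    }
    Ok(count)
}

fn main() {
    let e1 = String::from("{\"sequence\":1,\"prev_hash\":\"\"}");
    let e2 = format!("{{\"sequence\":2,\"prev_hash\":\"{}\"}}", digest(&e1));
    assert_eq!(verify(&[e1.clone(), e2.clone()]), Ok(2));
    // Tamper with entry 1's payload: entry 2's prev_hash no longer matches.
    let tampered = String::from("{\"sequence\":9,\"prev_hash\":\"\"}");
    assert_eq!(verify(&[tampered, e2]), Err(2));
}
```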

sesame audit tail

The sesame audit tail command displays recent audit entries:

sesame audit tail 10
sesame audit tail --follow

Without --follow, the command reads the last N entries from the log file and pretty-prints each as indented JSON separated by --- dividers.

With --follow, it watches the audit log file for new appends using notify::RecommendedWatcher (inotify on Linux). When the file grows, only the new bytes are read (seeking to SeekFrom::Start(last_len)), parsed line by line, and printed. The follow loop exits on Ctrl-C (SIGINT).

Chain Recovery After Corruption

On daemon-profile startup, the audit logger loads its state from the last line of the existing log file. The load_audit_state function in daemon-profile/src/context.rs:

  1. Reads the file contents (returns (empty, 0) if the file does not exist).
  2. Finds the last non-empty line by iterating in reverse.
  3. Attempts to parse it as an AuditEntry.
  4. If successful, computes its BLAKE3 hash and extracts its sequence number.
  5. If parsing fails (corrupt last entry), falls back to (empty_hash, 0), starting a fresh chain segment.

After loading, the startup code runs verify_chain on the existing log if the sequence is greater than 0. A verification failure is logged at error level but does not prevent the daemon from starting – the daemon continues appending to the potentially-broken chain.

Chain Continuity Across Restarts

The audit chain survives daemon restarts. On restart, daemon-profile loads the last hash and sequence from disk and continues appending. The hash of the last pre-restart entry becomes the prev_hash of the first post-restart entry, maintaining an unbroken chain. The test chain_resumes_after_restart in core-profile/src/audit.rs validates this property across two simulated sessions with five total entries.

File Format and Location

  • Path: ~/.config/pds/audit.jsonl (resolved via core_config::config_dir()).
  • Format: JSON Lines – one JSON object per line, newline-delimited.
  • Hash algorithm: BLAKE3 by default. SHA-256 is supported as an alternative. The algorithm must be consistent within a single log file for verification to succeed.
  • Write mode: append-only (OpenOptions::new().create(true).append(true)). Each write is followed by an explicit flush() via BufWriter.
  • Agent identity: the default_agent_id is derived from the installation namespace and the Unix UID of the running process: uuid::Uuid::new_v5(&install_ns, "agent:human:uid{uid}").

Retention and Rotation

The current implementation performs no automatic log rotation or retention, so the audit log grows without bound. External log rotation (e.g., logrotate) can be applied, but rotating the file severs the hash chain: sesame audit verify can only validate entries present in a single contiguous file. Operators who require forensic auditability across rotation boundaries should archive rotated segments and verify them independently.

Health Checks

Open Sesame provides daemon health monitoring through sesame status and systemd watchdog integration.

sesame status

The sesame status command in open-sesame/src/status.rs connects to the IPC bus and sends a StatusRequest message to daemon-profile. The response (StatusResponse) includes:

  • Per-vault lock state (lock_state: BTreeMap<TrustProfileName, bool>): each trust profile’s vault is reported as locked or unlocked. Displayed as a table with profile names and colored status indicators.
  • Default profile (default_profile: TrustProfileName): the currently active default profile for new unscoped launches.
  • Active profiles (active_profiles: Vec<TrustProfileName>): the list of profiles that are currently activated (vault open, serving secrets).
  • Global locked flag (locked: bool): legacy fallback used when per-profile lock state is unavailable.

Example output:

Vaults:
  personal  unlocked
  work      locked
Default profile: personal
Active profiles:
  - personal (default)

If the lock_state map is empty (daemon-secrets has not reported per-profile state), the display falls back to a single global locked/unlocked indicator.

Liveness Check

The sesame status command implicitly tests daemon-profile liveness. If the IPC bus is unreachable (daemon-profile is not running or the socket is missing), the connect() call fails with an error. This makes sesame status usable as a basic health check in scripts and monitoring systems.

systemd Integration

Type=notify and sd_notify

Daemons use systemd’s Type=notify service type. After completing initialization (config loaded, IPC connected, indexes built), each daemon calls platform_linux::systemd::notify_ready(), which sends READY=1 to systemd. This tells systemd that the daemon is ready to accept requests.

The NOTIFY_SOCKET path is included in daemon-profile’s Landlock ruleset so that sd_notify calls succeed after the filesystem sandbox is applied. Abstract sockets (prefixed with @) bypass Landlock AccessFs rules and do not require explicit allowlisting.

WatchdogSec=30

Daemon-profile runs a tokio interval timer at half the watchdog interval (15 seconds) and calls platform_linux::systemd::notify_watchdog() on each tick, which sends WATCHDOG=1 to systemd.

If a daemon fails to send a watchdog notification within 30 seconds (two missed ticks), systemd considers the daemon unresponsive and restarts it according to the unit’s Restart= policy.

The watchdog tick in daemon-profile also serves as the reconciliation driver – every other tick (every 30 seconds), it reconciles state with daemon-secrets.

Reconciliation

Daemon-profile reconciles with daemon-secrets every 30 seconds (every other watchdog tick, controlled by watchdog_tick_count.is_multiple_of(2)). The reconciliation RPC updates:

  • The global locked flag.
  • The active_profiles set.
  • Per-profile lock state.

This ensures that sesame status reports current state even if an IPC event was lost or a daemon restarted between reconciliation cycles.

Crash-Restart Detection

Daemon-profile tracks daemon identities via DaemonTracker in daemon-profile/src/main.rs. The tracker maintains a HashMap<String, DaemonId> mapping daemon names to their last known identity.

When a DaemonStarted event arrives from a daemon name that already has a registered DaemonId, the track() method detects a crash-restart: the old ID differs from the new one. It returns Some(old_id), allowing daemon-profile to clean up stale state associated with the previous instance.

Watchdog Logging

Watchdog ticks are logged at info level for the first three ticks and then every 20th tick (controlled by watchdog_tick_count <= 3 || watchdog_tick_count.is_multiple_of(20)). This provides startup confirmation without flooding the journal during steady-state operation.

Metrics and Observability

This page describes the metrics and observability design for Open Sesame. Structured logging is implemented today; metrics export and the sesame status --doctor command are planned.

Current State

All daemons emit structured log events via the tracing crate. Log output includes span context (daemon ID, profile name, operation), timestamps, and severity levels. Logs are written to journald when running under systemd, or to stderr in development.

Metrics export (Prometheus, OpenTelemetry) is not yet implemented.

Planned Metrics Export

Prometheus

Each daemon will expose a /metrics endpoint on a local Unix socket (not a TCP port) in Prometheus exposition format. A Prometheus instance or prometheus-node-exporter textfile collector can scrape these.

OpenTelemetry (OTLP)

For environments with an OTLP collector (Grafana Agent, OpenTelemetry Collector), daemons will support OTLP export over gRPC or HTTP. This is configured in ~/.config/pds/observability.toml.

Planned Metric Categories

Daemon Health

Metric                          Type     Description
pds_daemon_uptime_seconds       Gauge    Seconds since daemon start
pds_daemon_restart_count        Counter  systemd restart count (from watchdog)
pds_daemon_memory_rss_bytes     Gauge    Resident set size
pds_daemon_memory_locked_bytes  Gauge    mlock’d memory

Vault Operations

Metric                             Type       Description
pds_vault_unlock_total             Counter    Unlock attempts (labeled by factor, result)
pds_vault_unlock_duration_seconds  Histogram  Time to complete unlock
pds_vault_secret_read_total        Counter    Secret read operations
pds_vault_secret_write_total       Counter    Secret write operations
pds_vault_acl_denial_total         Counter    ACL-denied operations

IPC Throughput

Metric                            Type       Description
pds_ipc_messages_sent_total       Counter    Messages sent (labeled by event kind)
pds_ipc_messages_received_total   Counter    Messages received
pds_ipc_message_bytes_total       Counter    Total bytes over the bus
pds_ipc_request_duration_seconds  Histogram  Request-response round-trip time
pds_ipc_connections_active        Gauge      Current connected clients
pds_ipc_clearance_drop_total      Counter    Messages dropped by clearance check

Memory Protection Posture

Metric                 Type   Description
pds_mlock_limit_bytes  Gauge  Configured LimitMEMLOCK
pds_mlock_used_bytes   Gauge  Currently locked memory
pds_seccomp_active     Gauge  1 if seccomp filter is loaded, 0 otherwise
pds_landlock_active    Gauge  1 if Landlock restrictions are active

sesame status --doctor

The sesame status --doctor command (tracked as issue #20) performs a comprehensive system health check. The planned implementation runs 43 individual checks across 6 categories.

Check Categories

1. Daemon Liveness

Verifies each daemon process is running, its systemd unit is active, and it responds to StatusRequest on the IPC bus.

2. IPC Connectivity

Tests Noise IK handshake to the bus server, measures round-trip latency, verifies the socket file exists with correct permissions.

3. Vault Integrity

Checks SQLCipher database integrity, verifies enrolled auth factors match configuration, tests that the vault salt is present and the correct length.

4. Cryptographic Posture

Verifies Noise IK keypairs exist, checks key file permissions (0600), validates that the ClearanceRegistry is populated, confirms mlock is available for secret memory.

5. Platform Integration

Checks Wayland session type, verifies COSMIC compositor protocol availability, tests xdg-desktop-portal connectivity, confirms D-Bus session bus access.

6. Configuration

Validates TOML configuration against the schema, checks for deprecated keys, verifies file permissions on sensitive config files.

Output Formats

The --doctor command supports multiple output formats:

  • Text (default) – Human-readable output with pass/fail/warn indicators and remediation suggestions.
  • JSON (--format json) – Machine-parseable output for CI integration.
  • Prometheus exposition (--format prometheus) – Each check becomes a gauge metric (pds_doctor_check{name="...",category="..."} with value 0, 1, or 2 for pass, fail, or warn).
  • OTLP (--format otlp) – Exports check results as OpenTelemetry metrics to a configured collector.

Governance Profile Filtering

Checks can be filtered by governance profile to focus on compliance-relevant items:

# Run only STIG-relevant checks
sesame status --doctor --governance stig

# Run PCI-DSS checks, output as JSON
sesame status --doctor --governance pci-dss --format json

# Run SOC2 checks
sesame status --doctor --governance soc2

Each check is tagged with the governance frameworks it is relevant to. For example, the mlock availability check is relevant to STIG and PCI-DSS but not SOC2; the audit log integrity check is relevant to all three.

Single Desktop

Open Sesame runs as a full desktop suite on COSMIC/Wayland systems, providing secret management, window management overlays, clipboard isolation, input capture, and application launching across trust profiles.

Package Model

A desktop installation requires both packages:

Package              Contents
open-sesame          CLI (sesame), headless daemons: profile, secrets, launcher, snippets
open-sesame-desktop  Desktop daemons: wm, clipboard, input; Wayland/COSMIC integration

The open-sesame-desktop package depends on open-sesame. Installing the desktop package pulls in the headless package automatically.

Installation

APT (Ubuntu 24.04 Noble)

curl -fsSL https://scopecreep-zip.github.io/open-sesame/gpg.key \
  | sudo gpg --dearmor -o /usr/share/keyrings/open-sesame.gpg

echo "deb [signed-by=/usr/share/keyrings/open-sesame.gpg] \
  https://scopecreep-zip.github.io/open-sesame noble main" \
  | sudo tee /etc/apt/sources.list.d/open-sesame.list

sudo apt update
sudo apt install open-sesame open-sesame-desktop

Package integrity is verified via GPG-signed repository indices and SLSA build provenance attestations generated by GitHub Actions. See SECURITY.md for verification commands.

Nix Flake

nix profile install github:ScopeCreep-zip/open-sesame

Pre-built binaries are available from the scopecreep-zip.cachix.org binary cache with Ed25519 signing. For home-manager integration, add the flake input and include the Open Sesame module in the home-manager configuration.

From Source

cargo build --release --workspace

The seven daemon binaries (daemon-profile, daemon-secrets, daemon-launcher, daemon-snippets, daemon-wm, daemon-clipboard, daemon-input) and the sesame CLI are placed in target/release/.

Initialization

After installation, initialize the installation identity and default vault:

sesame init

This command performs the following:

  1. Generates a UUID v4 installation identifier, persisted to ~/.config/pds/installation.toml as an InstallationConfig (core-config/src/schema_installation.rs).
  2. Derives a deterministic namespace UUID for profile ID generation (namespace field).
  3. Creates the default trust profile vault (vaults/default.db) as a SQLCipher-encrypted database.
  4. Enrolls the first authentication factor (password via Argon2id KDF or SSH agent key).
  5. Installs and enables systemd user services and targets.

Optionally, an organizational namespace can be provided at init time:

sesame init --org braincraft.io

This populates the org field in installation.toml with the domain and a deterministic namespace derived as uuid5(NAMESPACE_URL, domain), per the OrgConfig type.

Daemon Architecture

All seven daemons run as systemd user services:

Daemon            Responsibility                                           Target    SecurityLevel
daemon-profile    IPC bus host, key management, profile activation, audit  Headless  Internal
daemon-secrets    SQLCipher vaults, ACL, rate limiting                     Headless  SecretsOnly
daemon-launcher   Application launching, frecency scoring                  Headless  Internal
daemon-snippets   Snippet management                                       Headless  Internal
daemon-wm         Window management, Alt+Tab overlay                       Desktop   Internal
daemon-clipboard  Clipboard isolation per profile                          Desktop   ProfileScoped
daemon-input      Keyboard/mouse capture, hotkey routing                   Desktop   Internal

Inter-daemon communication uses the Noise IK protocol (X25519 + ChaChaPoly + BLAKE2s) over a Unix domain socket IPC bus hosted by daemon-profile. Each daemon registers its X25519 static public key in the clearance registry (core-ipc/src/registry.rs). Messages are routed based on SecurityLevel ordering: Open < Internal < ProfileScoped < SecretsOnly. A daemon can only emit messages at or below its own clearance level, and only receives messages at or below its clearance level.
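The clearance ordering and the receive-side check can be sketched with a derived Ord enum. Variant names are from the text; the real type and registry live in core-ipc/src/registry.rs:

```rust
// Declaration order gives the clearance ordering via derived Ord:
// Open < Internal < ProfileScoped < SecretsOnly.
#[derive(Debug, Clone, Copy, PartialEq, Eq, PartialOrd, Ord)]
enum SecurityLevel {
    Open,
    Internal,
    ProfileScoped,
    SecretsOnly,
}

// A daemon only receives messages at or below its own clearance level.
fn may_deliver(message: SecurityLevel, receiver: SecurityLevel) -> bool {
    message <= receiver
}

fn main() {
    use SecurityLevel::*;
    assert!(may_deliver(Internal, SecretsOnly)); // e.g. launcher -> secrets
    assert!(!may_deliver(SecretsOnly, Internal)); // secrets-level msg withheld
    assert!(Open < Internal && Internal < ProfileScoped && ProfileScoped < SecretsOnly);
}
```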

systemd Targets

Two systemd targets compose the service graph:

Target                       Wanted by                 Daemons
open-sesame-headless.target  default.target            profile, secrets, launcher, snippets
open-sesame-desktop.target   graphical-session.target  wm, clipboard, input

The desktop target declares Requires=open-sesame-headless.target graphical-session.target. Starting the desktop target starts all headless daemons first. Stopping the desktop target leaves headless daemons running, so secret management continues when the graphical session is inactive.

All daemon services use Type=notify with WatchdogSec=30. Services that fail are restarted after RestartSec=5.

Service Dependency Graph

default.target
  +-- open-sesame-headless.target
        +-- open-sesame-profile.service
        +-- open-sesame-secrets.service   (Requires/After: profile)
        +-- open-sesame-launcher.service  (After: profile)
        +-- open-sesame-snippets.service  (After: profile)

graphical-session.target
  +-- open-sesame-desktop.target          (Requires: headless)
        +-- open-sesame-wm.service
        +-- open-sesame-clipboard.service
        +-- open-sesame-input.service

File Locations

Configuration

| Path | Purpose |
|---|---|
| ~/.config/pds/config.toml | User configuration: profiles, crypto, agents, extensions |
| ~/.config/pds/installation.toml | Installation identity: UUID, namespace, org, machine binding |
| ~/.config/pds/ssh-agent.env | SSH agent socket path for factor enrollment |
| /etc/pds/policy.toml | System policy overrides (enterprise-managed, read-only at runtime) |

Configuration follows layered inheritance: system policy > user config > built-in defaults. Each PolicyOverride (core-config/src/schema.rs) records a dotted key path, enforced value, and source string (e.g., /etc/pds/policy.toml).
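The precedence order can be illustrated with a toy merge. This Python sketch models the documented behavior (policy-locked keys win over user config, which wins over defaults); the real resolution lives in core-config, and the keys shown are examples:

```python
# Toy model of the documented precedence: system policy > user config >
# built-in defaults. Dotted key paths are flattened to plain dict keys here.
def effective_config(defaults: dict, user: dict, policy: dict) -> dict:
    merged = {**defaults, **user}   # user config overrides built-in defaults
    merged.update(policy)           # policy-locked keys win unconditionally
    return merged

cfg = effective_config(
    defaults={"crypto.kdf": "argon2id", "global.logging.level": "warn"},
    user={"crypto.kdf": "scrypt", "global.logging.level": "debug"},
    policy={"crypto.kdf": "argon2id"},  # locked by /etc/pds/policy.toml
)
assert cfg["crypto.kdf"] == "argon2id"          # user cannot override policy
assert cfg["global.logging.level"] == "debug"   # user overrides defaults
```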

Runtime

| Path | Purpose |
|---|---|
| $XDG_RUNTIME_DIR/pds/ | Runtime directory |
| $XDG_RUNTIME_DIR/pds/bus.sock | Noise IK IPC bus socket |

The IPC socket path can be overridden via the global.ipc.socket_path key in config.toml. Default channel capacity per subscriber is 1024 messages with a 5-second slow-subscriber timeout (IpcConfig in core-config/src/schema.rs).

Data

| Path | Purpose |
|---|---|
| ~/.config/pds/vaults/{profile}.db | SQLCipher encrypted vault per trust profile |
| ~/.config/pds/launcher/{profile}.frecency.db | Application launch frecency database |
| ~/.config/pds/audit/ | BLAKE3 hash-chained audit log |

Each trust profile name maps 1:1 to a vault file. The TrustProfileName type (core-types/src/profile.rs) enforces path safety: ASCII alphanumeric plus hyphens and underscores, max 64 bytes, no path traversal components.
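The validation rules stated above are simple to express. A minimal Python sketch of the documented constraints (the real type is TrustProfileName in core-types; this is illustrative, not its actual code):

```python
import re

# Sketch of the documented TrustProfileName constraints: ASCII alphanumeric
# plus hyphen and underscore, at most 64 bytes, no path traversal possible.
def is_valid_profile_name(name: str) -> bool:
    return (
        0 < len(name.encode("utf-8")) <= 64
        and re.fullmatch(r"[A-Za-z0-9_-]+", name) is not None
    )

assert is_valid_profile_name("work")
assert is_valid_profile_name("ci-production_2")
assert not is_valid_profile_name("../../etc/passwd")  # traversal rejected
assert not is_valid_profile_name("a" * 65)            # over 64 bytes
```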

Security Hardening

Each daemon service applies the following systemd directives (see contrib/systemd/*.service):

  • NoNewPrivileges=yes – prevents privilege escalation.
  • ProtectSystem=strict – mounts the filesystem read-only except explicit ReadWritePaths.
  • ProtectHome=read-only – prevents writes outside ~/.config/pds/ and $XDG_RUNTIME_DIR/pds/.
  • LimitCORE=0 – disables core dumps.
  • LimitMEMLOCK=64M – permits memfd_secret(2) and mlock(2) allocations for secure memory.
  • MemoryMax – per-daemon ceiling (128M for profile, 256M for secrets).

The secrets daemon additionally declares PrivateNetwork=yes, preventing all network access from the process that holds decrypted vault keys.

Landlock filesystem sandboxing and seccomp-bpf syscall filtering are applied in-process by each daemon at startup. Partially-enforced Landlock is treated as a fatal error; the daemon does not start with degraded isolation.

Desktop Overlay

On COSMIC desktops, daemon-wm provides an Alt+Tab window switching overlay rendered via the COSMIC compositor backend (platform-linux). The overlay displays windows scoped to the active trust profile. Profile switching is routed through the IPC bus at Internal security level.

Verification

After initialization:

# Verify all daemons are healthy
sesame status

# Verify systemd targets
systemctl --user status open-sesame-headless.target
systemctl --user status open-sesame-desktop.target

# Verify IPC bus socket exists
ls -la $XDG_RUNTIME_DIR/pds/bus.sock

# Verify vault was created
ls -la ~/.config/pds/vaults/default.db

Headless Server

Open Sesame operates without a display server, making it suitable for servers, CI/CD runners, containers, and virtual machines. The headless deployment uses the open-sesame package only, with no GUI dependencies.

Package

Only the open-sesame package is required. It contains:

  • sesame CLI
  • daemon-profile (IPC bus host, key management, audit)
  • daemon-secrets (SQLCipher vaults, ACL, rate limiting)
  • daemon-launcher (application launching, frecency scoring)
  • daemon-snippets (snippet management)

The open-sesame-desktop package is not installed. The three desktop daemons (daemon-wm, daemon-clipboard, daemon-input) are absent and no Wayland or COSMIC libraries are linked.

Use Cases

CI/CD Secret Injection

Open Sesame injects secrets into build processes as environment variables, scoped by trust profile:

sesame env -p ci-production -- make deploy

The sesame env command activates the named profile, decrypts the vault, and launches the child process with secrets projected into its environment. The child process inherits only the secrets defined in the ci-production profile’s vault. On process exit, the secrets are not persisted anywhere on disk outside the encrypted vault.
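The injection pattern itself is straightforward: overlay the decrypted secrets onto the parent environment for a single child process. A minimal Python sketch, with a hypothetical secret name and value:

```python
import os
import subprocess

# Minimal sketch of the env-injection pattern: secrets exist only in the
# child's environment, never on disk. DATABASE_URL here is hypothetical.
def run_with_secrets(secrets: dict, argv: list) -> subprocess.CompletedProcess:
    child_env = {**os.environ, **secrets}
    return subprocess.run(argv, env=child_env, capture_output=True, text=True)

result = run_with_secrets(
    {"DATABASE_URL": "postgres://ci:****@db/prod"},
    ["sh", "-c", 'printf %s "$DATABASE_URL"'],
)
assert result.stdout == "postgres://ci:****@db/prod"
```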

Server Credential Management

Long-running services can read secrets at startup or on demand:

# One-shot: print a secret value
sesame secret get -p work database-url

# Launch a service with its secret environment
sesame env -p production -- ./my-service

Container Secret Injection

In container environments, Open Sesame can run as a sidecar or init container that projects secrets into shared volumes or environment:

# In an init container or entrypoint script
sesame init --non-interactive
sesame env -p container -- exec "$@"

systemd Target

The headless target starts on default.target, requiring no graphical session:

# contrib/systemd/open-sesame-headless.target
[Unit]
Description=Open Sesame Headless Suite
# No display server required. Suitable for servers, containers, VMs.

[Install]
WantedBy=default.target

The four headless daemons are PartOf=open-sesame-headless.target. The profile daemon starts first; secrets, launcher, and snippets declare ordering dependencies on profile.

Starting and Stopping

# Start the headless suite
systemctl --user start open-sesame-headless.target

# Stop all headless daemons
systemctl --user stop open-sesame-headless.target

# Enable on boot
systemctl --user enable open-sesame-headless.target

SSH Agent Forwarding for Remote Vault Unlock

When Open Sesame is installed on a remote server, vault unlock can use an SSH agent key from the operator’s local machine. This avoids storing passwords on the server.

Setup

  1. On the remote server, enroll an SSH agent factor during sesame init:

    sesame init --auth-factor ssh-agent
    
  2. When connecting, forward the SSH agent:

    ssh -A user@server
    
  3. On the remote server, unlock the vault using the forwarded agent:

    sesame unlock -p work --factor ssh-agent
    

The SSH agent backend (AuthFactorId::SshAgent in core-types/src/auth.rs) produces a deterministic signature over a challenge, which is processed through BLAKE3 derive_key("pds v2 ssh-vault-kek {profile}") to produce a KEK. The KEK unwraps the master key from the EnrollmentBlob stored on disk. The SSH private key never leaves the local machine.
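The shape of that derivation (context-separated, deterministic, fixed-size output) can be illustrated with stdlib primitives. This sketch uses BLAKE2b as a stand-in for BLAKE3's derive_key, since BLAKE3 is not in the Python stdlib; it shows the structure only, not the real key hierarchy:

```python
import hashlib

# Stand-in sketch of the documented KEK derivation. BLAKE2b replaces
# BLAKE3 derive_key here; the context string is folded into the keyed hash.
def derive_kek(ssh_signature: bytes, profile: str) -> bytes:
    context = f"pds v2 ssh-vault-kek {profile}".encode()
    ctx_key = hashlib.blake2b(context, digest_size=32).digest()
    return hashlib.blake2b(ssh_signature, key=ctx_key, digest_size=32).digest()

sig = b"deterministic-ssh-signature-over-challenge"
assert derive_kek(sig, "work") == derive_kek(sig, "work")      # deterministic
assert derive_kek(sig, "work") != derive_kek(sig, "personal")  # per-profile
assert len(derive_kek(sig, "work")) == 32
```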

Multi-Factor on Headless

The AuthCombineMode (core-types/src/auth.rs) supports three modes for headless environments:

| Mode | Behavior |
|---|---|
| Any | Any single enrolled factor unlocks. SSH agent alone suffices. |
| All | All enrolled factors required. Both password and SSH agent must be provided. |
| Policy | Configurable: e.g., SSH agent always required, plus one additional factor. |

For headless servers where interactive password entry is impractical, enrolling SSH agent as the sole factor with Any mode provides passwordless unlock gated on SSH key possession.

Configuration

Headless configuration is identical to desktop, minus the window manager, clipboard, and input sections. The relevant top-level configuration file is ~/.config/pds/config.toml with the schema defined in core-config/src/schema.rs.

Key headless-specific settings:

[global]
default_profile = "production"

[global.ipc]
# Custom socket path for containerized deployments
# socket_path = "/run/pds/bus.sock"
channel_capacity = 1024

[global.logging]
level = "info"
json = true        # Structured output for log aggregation
journald = true    # journald integration on systemd hosts

File Locations

| Path | Purpose |
|---|---|
| ~/.config/pds/config.toml | User configuration |
| ~/.config/pds/installation.toml | Installation identity |
| ~/.config/pds/vaults/{profile}.db | Encrypted vaults |
| $XDG_RUNTIME_DIR/pds/bus.sock | IPC socket |
| ~/.config/pds/audit/ | Audit log |

Security Notes

The secrets daemon runs with PrivateNetwork=yes, which is particularly relevant on servers where network-facing services coexist. Even if an adjacent service is compromised, it cannot reach the secrets daemon over the network. All access is via the authenticated Noise IK IPC bus over a Unix domain socket.

On headless systems without memfd_secret(2) support (e.g., older kernels or containers without CONFIG_SECRETMEM=y), the daemons fall back to mmap(MAP_ANONYMOUS) with mlock(2) and MADV_DONTDUMP. This fallback is logged at ERROR level with an explicit compliance impact statement naming the frameworks affected (IL5/IL6, STIG, PCI-DSS) and the remediation command to enable CONFIG_SECRETMEM.

Headless-First Design

Every sesame CLI command works from explicit primitives without interactive prompts. The CLI does not assume a terminal is attached. Exit codes, structured JSON output, and non-interactive flags support automation:

# Non-interactive unlock with SSH agent
sesame unlock -p work --factor ssh-agent --non-interactive

# JSON output for scripting
sesame secret list -p work --json | jq '.[].key'

# Exit code indicates vault lock state
sesame status -p work --quiet; echo $?

Multi-User

Open Sesame supports multiple users on a shared workstation. Each user operates an independent set of daemons, vaults, and IPC buses with hardware-enforced isolation between users.

Per-User Service Instances

Each user runs their own systemd user services. There is no system-wide Open Sesame daemon. When user alice and user bob both log into the same machine:

  • Alice’s daemon-profile listens on $XDG_RUNTIME_DIR/pds/bus.sock (typically /run/user/1000/pds/bus.sock).
  • Bob’s daemon-profile listens on /run/user/1001/pds/bus.sock.
  • Each user’s daemon set is managed by their own systemctl --user instance.
  • The two sets of daemons have no knowledge of each other.

Isolation Boundaries

| Boundary | Mechanism |
|---|---|
| IPC bus | Separate Unix domain sockets under each user’s $XDG_RUNTIME_DIR |
| Configuration | Separate ~/.config/pds/ per user home directory |
| Vaults | Separate SQLCipher databases per user, per profile |
| Audit logs | Separate BLAKE3 hash chains per user |
| Secret memory | memfd_secret(2) pages are per-process; invisible to other UIDs and to root |
| Process isolation | Landlock + seccomp per daemon; ProtectHome=read-only prevents cross-user access |

memfd_secret Isolation

On Linux 5.14+ with CONFIG_SECRETMEM=y, all secret-carrying memory allocations (SecureBytes, SecureVec, SensitiveBytes) use memfd_secret(2). Pages allocated via this syscall are:

  • Removed from the kernel direct map.
  • Invisible to /proc/pid/mem reads.
  • Inaccessible to kernel modules and DMA.
  • Inaccessible via ptrace even as root.

This means that even a root-level compromise on the shared workstation cannot read another user’s decrypted secrets from memory. The secrets exist only in the virtual address space of the owning process.

When memfd_secret is unavailable, the fallback is mmap(MAP_ANONYMOUS) with mlock(2) and MADV_DONTDUMP. This prevents secrets from being swapped to disk or appearing in core dumps, but does not remove them from the kernel direct map. The fallback is logged at ERROR level with compliance impact.

RLIMIT_MEMLOCK

Each daemon service sets LimitMEMLOCK=64M (see contrib/systemd/*.service). On a multi-user workstation, the total memfd_secret and mlock usage is the sum across all users’ daemon instances. System administrators should verify that the system-wide locked memory limit and per-user RLIMIT_MEMLOCK (via /etc/security/limits.conf) accommodate the expected number of concurrent users.

Shared Workstation Model

Separate Vaults, Separate Profiles

Each user has their own InstallationConfig with a distinct installation UUID. Two users on the same machine have different installation IDs, different vault encryption keys, and different profile IDs even if both name a profile work. The TrustProfileName maps to a per-user vault file at ~/.config/pds/vaults/{name}.db.

Hardware Security Key per User

Users can enroll different hardware security keys (FIDO2, YubiKey) as authentication factors. The AuthFactorId::Fido2 and AuthFactorId::Yubikey variants in core-types/src/auth.rs support per-user enrollment. A shared YubiKey slot is not assumed; each user’s enrollment produces a distinct credential ID.

Profile Activation Independence

Profile activation is per-user. Alice activating her corporate profile does not affect Bob’s active profile. The daemon-profile instance for each user independently evaluates activation rules (ActivationConfig in core-config/src/schema.rs): WiFi SSID triggers, USB device presence, time-of-day rules, and security key requirements.

System Policy

Enterprise administrators can enforce organization-wide defaults via /etc/pds/policy.toml. This file is read-only at runtime and applies to all users on the machine.

# /etc/pds/policy.toml

[[policy]]
key = "crypto.kdf"
value = "argon2id"
source = "enterprise-security-policy"

[[policy]]
key = "audit.enabled"
value = true
source = "enterprise-security-policy"

[[policy]]
key = "clipboard.max_history"
value = 0
source = "enterprise-data-loss-prevention"

Each entry corresponds to a PolicyOverride struct (core-config/src/schema.rs) with a dotted key path, enforced value, and source identifier. Policy overrides take precedence over user configuration. Users cannot override a policy-locked key.

Policy Distribution

System policy files are managed by the organization’s configuration management tooling (Ansible, Puppet, Chef, NixOS modules, or similar). Open Sesame does not implement its own policy distribution mechanism. The file at /etc/pds/policy.toml is a standard configuration file managed by the operating system’s package manager or configuration management.

Kernel Requirements

For full multi-user isolation:

| Requirement | Purpose | Verification |
|---|---|---|
| Linux 5.14+ | memfd_secret(2) | uname -r |
| CONFIG_SECRETMEM=y | Kernel direct-map removal | grep SECRETMEM /boot/config-$(uname -r) |
| systemd 255+ | Per-user service management | systemctl --version |
| Sufficient RLIMIT_MEMLOCK | Locked memory for all users | ulimit -l |

Auditing in Multi-User Environments

Each user’s audit log is independent. The BLAKE3 hash-chained audit log for a user resides at ~/.config/pds/audit/ under that user’s home directory. Audit verification with sesame audit verify operates on the current user’s chain only.

For centralized audit collection across all users on a workstation, the structured JSON logging output (global.logging.json = true) can be forwarded to a central log aggregator via journald or a sidecar log shipper. Each log entry includes the installation ID, which uniquely identifies the user’s Open Sesame instance.

Fleet Management

Design Intent. This page describes the architecture for managing Open Sesame across many devices. The primitives referenced below (InstallationId, OrganizationNamespace, PolicyOverride, structured logging) exist in the type system and configuration schema today. Fleet-scale orchestration tooling that consumes these primitives is not yet implemented.

Overview

Fleet management treats each Open Sesame installation as an independently-operating node that can be configured, monitored, and audited from a central control plane. The design relies on three properties already present in the system:

  1. Every installation has a globally unique InstallationId (UUID v4, generated at sesame init), defined in core-types/src/security.rs.
  2. Installations can be grouped by OrganizationNamespace (domain-derived UUID), enabling fleet-wide identity correlation.
  3. System policy (/etc/pds/policy.toml) is a static file that can be distributed by configuration management without requiring a running Open Sesame daemon.

Profile and Policy Distribution

Configuration Management Integration

Open Sesame’s configuration is file-based. Fleet-wide profile templates and security policies are distributed as files via existing configuration management tools:

Configuration Management (Ansible/Puppet/Chef/NixOS)
  +-- /etc/pds/policy.toml            System policy overrides
  +-- ~/.config/pds/config.toml       User configuration template
  +-- ~/.config/pds/installation.toml  Pre-seeded installation identity (optional)

The PolicyOverride type (core-config/src/schema.rs) supports locking any configuration key to an enforced value with a source identifier:

# Enforced by fleet management
[[policy]]
key = "crypto.kdf"
value = "argon2id"
source = "fleet-security-baseline-2025"

[[policy]]
key = "crypto.minimum_peer_profile"
value = "governance-compatible"
source = "fleet-security-baseline-2025"

Pre-Seeded Installation Identity

For fleet provisioning, installation.toml can be pre-generated with a known UUID and organizational namespace, then distributed to devices before sesame init runs. The InstallationConfig (core-config/src/schema_installation.rs) fields are:

| Field | Purpose | Fleet Use |
|---|---|---|
| id | UUID v4, unique per device | Asset tracking, audit correlation |
| namespace | Derived UUID for deterministic profile IDs | Cross-device profile identity |
| org.domain | Organization domain | Fleet grouping |
| org.namespace | uuid5(NAMESPACE_URL, domain) | Deterministic namespace derivation |
| machine_binding.binding_hash | BLAKE3 hash of machine identity material | Hardware attestation |
| machine_binding.binding_type | machine-id or tpm-bound | Binding method |

Pre-seeding the org field ensures all fleet devices share a common organizational namespace, enabling deterministic profile ID generation across the fleet.
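A pre-seeded file might look roughly like the following. This is a hypothetical sketch: field names follow the InstallationConfig fields listed above, but the exact TOML layout and all values shown are placeholders, not a verified schema:

```toml
# Hypothetical pre-seeded installation.toml fragment for fleet provisioning;
# field names follow InstallationConfig, values are placeholders.
id = "6f1c2a9e-0000-4000-8000-000000000042"   # unique per device

[org]
domain = "example.com"
namespace = "..."   # uuid5(NAMESPACE_URL, domain) — derived, not chosen

[machine_binding]
binding_type = "machine-id"
binding_hash = "..."   # BLAKE3 hash of machine identity material
```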

Centralized Audit Log Collection

Each Open Sesame installation produces a BLAKE3 hash-chained audit log and structured log output. Fleet-scale audit aggregation uses the existing structured logging infrastructure.

journald and Log Shipping

# config.toml on fleet devices
[global.logging]
level = "info"
json = true
journald = true

With json = true and journald = true, all daemon log entries are structured JSON emitted to the systemd journal. A log shipper (Promtail, Fluentd, Vector, Filebeat) forwards journal entries to a central aggregator.

Each structured log entry includes:

  • installation_id – the device’s UUID from installation.toml.
  • daemon_id – which daemon emitted the entry (DaemonId from core-types/src/ids.rs).
  • profile – active trust profile name.
  • event – the operation performed.
  • Timestamp, severity, and span context.

Audit Chain Verification

The BLAKE3 hash-chained audit log provides tamper evidence at the device level. For fleet-wide integrity verification:

  1. Collect audit chain files from each device.
  2. Run sesame audit verify against each chain independently.
  3. Cross-reference audit entries with centralized log aggregator records.

A broken hash chain on any device indicates tampering or data loss on that device.

Remote Unlock Patterns

Fleet devices may need to be unlocked without physical operator presence.

SSH Agent Forwarding

An operator connects to a fleet device and forwards their SSH agent:

ssh -A operator@fleet-device-042
sesame unlock -p production --factor ssh-agent

The SSH agent factor (AuthFactorId::SshAgent) derives a KEK from the forwarded key’s deterministic signature without the private key leaving the operator’s machine.

Delegated Factors (Design Intent)

The DelegationGrant type (core-types/src/security.rs) models time-bounded, scope-narrowed capability delegation. A fleet operator could issue a delegation grant to an automation agent, authorizing it to unlock specific profiles on specific devices:

DelegationGrant {
    delegator: <operator-agent-id>,
    scope: CapabilitySet { Unlock },
    initial_ttl: 3600s,
    heartbeat_interval: 300s,
    nonce: <16 random bytes>,
    signature: <Ed25519 over grant fields>,
}

The grant’s scope is intersected with the delegator’s own capabilities, ensuring the automation agent cannot exceed the operator’s authority. The initial_ttl and heartbeat_interval fields enforce time-bounded access with mandatory renewal.
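The intersection and TTL rules can be modeled in a few lines. This is a toy illustration of the documented semantics; the capability names and the Grant shape are hypothetical simplifications of the real DelegationGrant:

```python
from dataclasses import dataclass

# Toy model of scope intersection and TTL expiry for delegation grants.
# Capability names and field layout are illustrative only.
@dataclass
class Grant:
    scope: frozenset
    issued_at: float
    initial_ttl: float

def effective_scope(delegator_caps: frozenset, grant: Grant) -> frozenset:
    # A grant can never exceed the delegator's own capabilities.
    return delegator_caps & grant.scope

def is_live(grant: Grant, now: float) -> bool:
    return now < grant.issued_at + grant.initial_ttl

operator = frozenset({"Unlock", "SecretRead"})
grant = Grant(scope=frozenset({"Unlock", "SecretWrite"}),
              issued_at=0.0, initial_ttl=3600.0)

assert effective_scope(operator, grant) == frozenset({"Unlock"})  # narrowed
assert is_live(grant, now=1800.0)
assert not is_live(grant, now=3600.0)
```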

Fleet Health Monitoring

Daemon Health

All daemons use Type=notify with WatchdogSec=30. systemd restarts unhealthy daemons automatically. Fleet monitoring collects systemd service states via standard node monitoring (node_exporter, osquery, or equivalent).

Security Posture Signals

Key posture signals per device:

| Signal | Source | Meaning |
|---|---|---|
| memfd_secret availability | Daemon startup log | Whether secrets are removed from kernel direct map |
| Landlock enforcement | Daemon startup log | Whether filesystem sandboxing is active |
| seccomp-bpf active | Daemon startup log | Whether syscall filtering is active |
| Kernel version | uname -r | Whether platform meets minimum requirements |
| CONFIG_SECRETMEM=y | /boot/config-* | Kernel compiled with secret memory support |

Devices that log memfd_secret fallback at ERROR level are operating at a reduced security posture. Fleet management should alert on this condition and schedule kernel upgrades.

Structured Alerting

With JSON-structured logging forwarded to a central aggregator, fleet operators can define alerts on:

  • Vault unlock failures (rate limiting triggered).
  • Audit chain verification failures.
  • Daemon restart loops (watchdog failures).
  • Security posture degradation (memfd_secret fallback).
  • Policy override conflicts (user config conflicts with fleet policy).

Kubernetes

Design Intent. This page describes how Open Sesame can operate as a secret provider in Kubernetes environments. The type system primitives referenced below (InstallationId, AgentIdentity, DelegationGrant, Attestation) exist today. The Kubernetes-specific integration components (sidecar container image, CSI driver, admission webhook) are not yet implemented.

Architecture

Open Sesame in Kubernetes operates as a per-pod or per-node secret provider. The headless daemon set (daemon-profile, daemon-secrets) runs inside a sidecar container or as a DaemonSet, providing secrets to application containers via environment injection or shared volumes.

No desktop daemons are deployed. The open-sesame package alone is sufficient.

Sidecar Pattern

The sidecar pattern deploys an Open Sesame container alongside each application pod:

Pod
  +-- app-container
  |     Reads secrets from shared volume or environment
  +-- open-sesame-sidecar
        daemon-profile, daemon-secrets
        Mounts: /run/pds (shared tmpfs)
        Mounts: /etc/pds/installation.toml (ConfigMap or Secret)
        Mounts: /etc/pds/policy.toml (ConfigMap)

Secret Projection

The sidecar decrypts vault secrets and writes them to a shared tmpfs volume that the application container mounts:

# Init container or sidecar entrypoint
sesame init --non-interactive --installation /etc/pds/installation.toml
sesame unlock -p $PROFILE --factor ssh-agent --non-interactive
sesame env -p $PROFILE --export-to /run/pds/secrets.env

Alternatively, the sidecar can project secrets as individual files:

/run/pds/secrets/
  +-- database-url
  +-- api-key
  +-- tls-cert

The tmpfs mount ensures secrets are never written to persistent storage on the node.

Kubernetes Secret Objects (Design Intent)

A controller component could synchronize vault contents into native Kubernetes Secret objects, enabling standard envFrom and volume mount patterns:

Open Sesame Vault (SQLCipher)
  --> Controller watches vault changes
    --> Creates/updates Kubernetes Secret objects
      --> Pods consume via standard envFrom/volumeMount

This approach trades the stronger isolation of the sidecar pattern for compatibility with existing Kubernetes-native workflows.

Pod Identity

Installation ID per Pod

Each pod receives a unique InstallationId via a pre-seeded installation.toml. The InstallationConfig (core-config/src/schema_installation.rs) for a pod includes:

| Field | Value | Purpose |
|---|---|---|
| id | UUID v4, unique per pod instance | Audit trail attribution |
| org.domain | Organization domain | Fleet grouping |
| machine_binding.binding_type | machine-id | Pod identity binding |
| machine_binding.binding_hash | BLAKE3 hash of pod UID + node ID | Anti-migration attestation |

The machine binding hash can incorporate the Kubernetes pod UID and node identity, binding the installation to a specific pod lifecycle. If the pod is rescheduled to a different node, the binding hash does not match, requiring re-attestation.

Workload Attestation (Design Intent)

The Attestation enum (core-types/src/security.rs) includes ProcessAttestation with an exe_hash field. In Kubernetes, workload attestation extends this concept:

  • Container image digest serves as the exe_hash, verified against a signed manifest.
  • Service account token provides Kubernetes-native identity.
  • Node attestation via TPM or machine binding provides hardware-rooted trust.

These attestation signals compose into a TrustVector (core-types/src/security.rs):

TrustVector {
    authn_strength: High,          // Service account + image signature
    authz_freshness: <since last token rotation>,
    delegation_depth: 1,           // Delegated from cluster operator
    device_posture: 0.8,           // Node with TPM but no memfd_secret
    network_exposure: Encrypted,   // Noise IK over loopback or pod network
    agent_type: Service { unit: "my-app-pod" },
}

DaemonSet Pattern (Design Intent)

For clusters where per-pod sidecars are too resource-intensive, a DaemonSet deploys one Open Sesame instance per node:

Node
  +-- open-sesame DaemonSet pod
  |     daemon-profile, daemon-secrets
  |     Exposes: /run/pds/bus.sock (hostPath)
  |
  +-- app-pod-1  (mounts /run/pds/bus.sock)
  +-- app-pod-2  (mounts /run/pds/bus.sock)

Application pods connect to the node-level IPC bus. Each connecting pod authenticates via Attestation::UCred (UID/PID from the Unix domain socket) and receives capabilities scoped to its service account identity.

The SecurityLevel hierarchy (core-types/src/security.rs) ensures that application pods at Open or Internal clearance cannot read SecretsOnly messages on the shared bus.

Service Mesh Integration (Design Intent)

Open Sesame’s Noise IK transport (core-ipc) provides mutual authentication with forward secrecy. In service mesh contexts, Noise IK can serve as an alternative to mTLS for service-to-service communication:

| Property | mTLS (Istio/Linkerd) | Noise IK |
|---|---|---|
| Key exchange | X.509 certificates, CA hierarchy | X25519 static keys, clearance registry |
| Forward secrecy | Per-connection via TLS 1.3 | Per-connection via Noise IK |
| Identity binding | SPIFFE ID in SAN | AgentId + InstallationId |
| Revocation | CRL/OCSP, short-lived certs | Clearance registry generation counter |
| Trust model | Centralized CA | Peer-to-peer, registry-based |

The clearance registry (core-ipc/src/registry.rs) maps X25519 public keys to daemon identities and security levels. In a Kubernetes context, the registry could be populated from a shared ConfigMap or CRD, enabling cross-pod Noise IK authentication without a certificate authority.

Resource Considerations

Sidecar Resources

Minimum resource requests for the Open Sesame sidecar:

| Resource | Request | Limit | Notes |
|---|---|---|---|
| CPU | 10m | 100m | Idle after vault unlock |
| Memory | 32Mi | 128Mi | LimitMEMLOCK=64M for secure memory |
| Ephemeral storage | 10Mi | 50Mi | Vault DB + audit log |

Security Context

securityContext:
  runAsNonRoot: true
  readOnlyRootFilesystem: true
  allowPrivilegeEscalation: false
  seccompProfile:
    type: RuntimeDefault

The Open Sesame daemons apply their own seccomp-bpf filters in-process, layered on top of the Kubernetes-level seccomp profile.

memfd_secret in Containers

memfd_secret(2) requires CONFIG_SECRETMEM=y in the host kernel. Most managed Kubernetes distributions (GKE, EKS, AKS) use kernels that do not enable this option by default. On these platforms, Open Sesame falls back to mmap with mlock and logs the security posture degradation at ERROR level. Operators running on custom node images or bare-metal Kubernetes can enable CONFIG_SECRETMEM=y for full protection.

Air-Gapped Environments

Design Intent. This page describes operating Open Sesame in air-gapped, SCIF, and offline environments (IL5/IL6 and above). Core vault operations require no network access today. The key ceremony procedures and audit export tooling described below are architectural targets grounded in the existing type system and cryptographic primitives.

Offline-First Architecture

Open Sesame’s core functionality requires no network access. The secrets daemon (daemon-secrets) runs with PrivateNetwork=yes in its systemd unit, enforcing network isolation at the kernel level. All inter-daemon communication occurs over a local Unix domain socket via the Noise IK protocol.

Operations that work fully offline:

  • Vault creation, unlock, lock.
  • Secret read, write, delete, list.
  • Profile activation and switching.
  • Audit log generation and verification.
  • Application launching with secret injection.
  • Clipboard isolation.

The only operations that require network access are SSH agent forwarding (which requires an SSH connection) and extension installation from OCI registries (which can be pre-staged).

memfd_secret as Security Floor

Air-gapped environments operating at IL5/IL6 or within SCIFs require memfd_secret(2) as a mandatory security control. The kernel must be compiled with CONFIG_SECRETMEM=y.

Verification

# Verify kernel support
grep CONFIG_SECRETMEM /boot/config-$(uname -r)
# Expected: CONFIG_SECRETMEM=y

# Verify at runtime
sesame status --security-posture

On systems where memfd_secret is unavailable, Open Sesame logs at ERROR level with an explicit compliance impact statement:

ERROR memfd_secret unavailable: secrets remain on kernel direct map.
      Compliance impact: does not meet IL5/IL6, DISA STIG, PCI-DSS requirements
      for memory isolation. Remediation: enable CONFIG_SECRETMEM=y in kernel config.

For air-gapped deployments, memfd_secret availability should be a deployment gate. Do not proceed with secret enrollment on systems that report this fallback.

Kernel Configuration

Air-gapped systems should use a hardened kernel with at minimum:

CONFIG_SECRETMEM=y          # memfd_secret(2) support
CONFIG_SECURITY_LANDLOCK=y  # Landlock filesystem sandboxing
CONFIG_SECCOMP=y            # seccomp-bpf syscall filtering
CONFIG_SECCOMP_FILTER=y     # BPF filter programs for seccomp

Air-Gapped Key Ceremony

Master Key Generation

In an air-gapped environment, the initial key ceremony is performed on a physically isolated machine:

  1. Preparation. Boot the ceremony machine from verified media. Verify kernel supports memfd_secret.

  2. Initialization. Run sesame init to generate the InstallationConfig:

    • UUID v4 installation identifier.
    • Organization namespace (if enterprise-managed).
    • Machine binding via /etc/machine-id or TPM (MachineBindingType in core-types/src/security.rs).
  3. Factor Enrollment. Enroll authentication factors per the site’s AuthCombineMode policy (core-types/src/auth.rs):

    • Password – Argon2id KDF with 19 MiB memory, 2 iterations.
    • SshAgent – Deterministic SSH signature-derived KEK.
    • Fido2, Tpm, Yubikey – Hardware factors (defined in AuthFactorId; backends not yet implemented).
  4. Policy Lock. Deploy /etc/pds/policy.toml to enforce cryptographic algorithm selection:

    [[policy]]
    key = "crypto.kdf"
    value = "argon2id"
    source = "airgap-key-ceremony-2025"
    
    [[policy]]
    key = "crypto.minimum_peer_profile"
    value = "leading-edge"
    source = "airgap-key-ceremony-2025"
    
  5. Verification. Run sesame status and sesame audit verify to confirm the installation is healthy and the audit chain has a valid genesis entry.

Factor Enrollment for “All” Mode

For high-security environments, AuthCombineMode::All requires every enrolled factor to be present at unlock time. The master key is derived from chaining all factor contributions:

BLAKE3 derive_key("pds v2 combined-master-key {profile}", sorted_factor_pieces)
  --> Combined Master Key

This prevents any single compromised factor from unlocking the vault.
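The chained combination above can be sketched in a few lines. This is a shape-only illustration: it uses std's DefaultHasher as a stand-in for BLAKE3 derive_key, and the framing of the sorted factor pieces is assumed, not taken from the implementation.

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Combine per-factor contributions for AuthCombineMode::All: sort the
// factor pieces so enrollment order does not matter, then derive under
// the profile-scoped context string from the documentation above.
// DefaultHasher stands in for BLAKE3 derive_key; it is NOT cryptographic.
fn combined_master_key(profile: &str, mut factor_pieces: Vec<Vec<u8>>) -> u64 {
    factor_pieces.sort();
    let mut h = DefaultHasher::new();
    format!("pds v2 combined-master-key {profile}").hash(&mut h);
    for piece in &factor_pieces {
        piece.hash(&mut h); // Hash length-prefixes each piece, keeping them delimited
    }
    h.finish()
}

fn main() {
    let a = combined_master_key("work", vec![b"password-kek".to_vec(), b"ssh-kek".to_vec()]);
    let b = combined_master_key("work", vec![b"ssh-kek".to_vec(), b"password-kek".to_vec()]);
    assert_eq!(a, b); // sorting makes the combination order-independent
    let c = combined_master_key("personal", vec![b"password-kek".to_vec(), b"ssh-kek".to_vec()]);
    assert_ne!(a, c); // the profile name domain-separates the derived key
}
```

Because every factor piece feeds the derivation, omitting or substituting any one factor changes the output entirely.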

Factor Enrollment for “Policy” Mode

The AuthPolicy struct (core-types/src/auth.rs) supports threshold-based unlock:

[auth]
mode = "policy"

[auth.policy]
required = ["password"]
additional_required = 1
# Enrolled: password, ssh-agent, fido2
# Unlock requires: password + (ssh-agent OR fido2)
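The threshold check implied by this configuration can be sketched as follows. The struct is a simplification that mirrors the TOML field names (required, additional_required); the real AuthPolicy type lives in core-types/src/auth.rs and may differ.

```rust
use std::collections::HashSet;

// Simplified policy: some factors are mandatory, plus a count of
// additional factors drawn from the remaining enrolled set.
struct AuthPolicy {
    required: Vec<&'static str>,
    additional_required: usize,
}

fn policy_satisfied(policy: &AuthPolicy, presented: &HashSet<&'static str>) -> bool {
    // Every required factor must be present...
    if !policy.required.iter().all(|r| presented.contains(r)) {
        return false;
    }
    // ...plus at least `additional_required` factors beyond the required set.
    let extra = presented.iter().filter(|f| !policy.required.contains(f)).count();
    extra >= policy.additional_required
}

fn main() {
    // Enrolled: password, ssh-agent, fido2; required: password + 1 more.
    let policy = AuthPolicy { required: vec!["password"], additional_required: 1 };
    let ok: HashSet<_> = ["password", "ssh-agent"].into_iter().collect();
    let missing_extra: HashSet<_> = ["password"].into_iter().collect();
    let missing_required: HashSet<_> = ["ssh-agent", "fido2"].into_iter().collect();
    assert!(policy_satisfied(&policy, &ok));
    assert!(!policy_satisfied(&policy, &missing_extra));
    assert!(!policy_satisfied(&policy, &missing_required));
}
```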

Audit Chain Export

The BLAKE3 hash-chained audit log provides tamper evidence that can be verified independently. For air-gapped environments where logs cannot be streamed to a central aggregator:

Export Procedure

  1. Export. Copy the audit chain from the air-gapped machine to removable media:

    cp -r ~/.config/pds/audit/ /media/audit-export/
    
  2. Transfer. Move the removable media through the appropriate security boundary (data diode, manual review, or similar).

  3. Verify. On the receiving side, verify the chain integrity:

    sesame audit verify --path /media/audit-export/
    

    Verification checks that each entry’s BLAKE3 hash chains to the previous entry. Any modification, deletion, or reordering of entries breaks the chain.
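The chain walk performed by the verifier can be sketched as a short loop. The entry fields follow the Chain Properties list below only loosely, and DefaultHasher stands in for the configurable BLAKE3/SHA-256 hash.

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Illustrative audit entry; the real log hashes entries with BLAKE3
// (or SHA-256). DefaultHasher here is a non-cryptographic stand-in.
#[derive(Hash, Clone)]
struct AuditEntry {
    timestamp: u64,
    operation: String,
    profile: String,
    prev_hash: u64, // chain link: hash of the previous entry
}

fn entry_hash(e: &AuditEntry) -> u64 {
    let mut h = DefaultHasher::new();
    e.hash(&mut h);
    h.finish()
}

// Each entry's prev_hash must equal the hash of the entry before it,
// starting from the genesis entry's hash.
fn verify_chain(genesis_hash: u64, entries: &[AuditEntry]) -> bool {
    let mut expected = genesis_hash;
    for e in entries {
        if e.prev_hash != expected {
            return false;
        }
        expected = entry_hash(e);
    }
    true
}

fn main() {
    let genesis = AuditEntry { timestamp: 0, operation: "init".into(), profile: "work".into(), prev_hash: 0 };
    let g = entry_hash(&genesis);
    let e1 = AuditEntry { timestamp: 1, operation: "unlock".into(), profile: "work".into(), prev_hash: g };
    let e2 = AuditEntry { timestamp: 2, operation: "secret-read".into(), profile: "work".into(), prev_hash: entry_hash(&e1) };
    assert!(verify_chain(g, &[e1.clone(), e2.clone()]));
    // Tampering with any entry breaks every later link.
    let mut tampered = e1.clone();
    tampered.operation = "lock".into();
    assert!(!verify_chain(g, &[tampered, e2]));
}
```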

Chain Properties

Each audit entry contains:

  • Timestamp.
  • Operation type (unlock, lock, secret read/write/delete, profile switch).
  • Profile name.
  • BLAKE3 hash of the previous entry (chain link).
  • BLAKE3 hash of the current entry’s contents.

The chain starts from a genesis entry created at sesame init. The hash algorithm is configurable via CryptoConfigToml.audit_hash (core-config/src/schema_crypto.rs): BLAKE3 (default) or SHA-256 (governance-compatible).

Compliance Mapping

NIST 800-53

| Control | Open Sesame Mechanism |
| --- | --- |
| SC-28 (Protection of Information at Rest) | SQLCipher AES-256-CBC + HMAC-SHA512, Argon2id KDF |
| SC-12 (Cryptographic Key Establishment) | BLAKE3 domain-separated key derivation hierarchy |
| SC-13 (Cryptographic Protection) | Config-selectable algorithms via CryptoConfig; governance-compatible profile uses NIST-approved algorithms |
| AU-10 (Non-repudiation) | BLAKE3 hash-chained audit log |
| AC-3 (Access Enforcement) | Per-daemon SecurityLevel clearance, CapabilitySet authorization |
| IA-5 (Authenticator Management) | Multi-factor auth policy (AuthCombineMode), hardware factor support |

DISA STIG

| STIG Control | Open Sesame Mechanism |
| --- | --- |
| Encrypted storage at rest | SQLCipher vaults, per-profile encryption keys |
| Memory protection | memfd_secret(2), guard pages, volatile zeroize |
| Audit trail integrity | BLAKE3 hash chain, tamper detection |
| Least privilege | Landlock, seccomp-bpf, per-daemon clearance levels |
| No core dumps | LimitCORE=0, MADV_DONTDUMP |

Extension Pre-Staging

In air-gapped environments, WASI extensions cannot be fetched from OCI registries at runtime. Extensions are pre-staged during the provisioning phase:

  1. On a connected machine, fetch the extension OCI artifact. The OciReference type (core-types/src/oci.rs) captures registry, principal, scope, revision, and provenance digest:

    registry.example.com/org/extension:1.0.0@sha256:abc123
    
  2. Transfer the artifact to the air-gapped machine via removable media.

  3. Install from the local artifact:

    sesame extension install --from-file /media/extensions/extension-1.0.0.wasm
    

The extension’s content hash (manifest_hash in AgentType::Extension, defined in core-types/src/security.rs) is verified at load time regardless of how the artifact was delivered.

Edge and Embedded

Design Intent. This page describes deploying Open Sesame on IoT, embedded, and resource-constrained environments. The headless daemon architecture and ARM64 build targets exist today. Embedded-specific optimizations (reduced memory profiles, static linking, busybox integration) are architectural targets.

Minimal Footprint

Edge deployments use the headless package only. The four headless daemons (daemon-profile, daemon-secrets, daemon-launcher, daemon-snippets) provide secret management without GUI dependencies.

For the most constrained environments, only daemon-profile and daemon-secrets are required. The launcher and snippets daemons are optional if application launching and snippet management are not needed.

Resource Profile

| Resource | Desktop Default | Edge Target |
| --- | --- | --- |
| LimitMEMLOCK | 64M | 8M–16M (configurable) |
| MemoryMax (profile) | 128M | 32M |
| MemoryMax (secrets) | 256M | 64M |
| IPC channel capacity | 1024 | 64–128 |
| Daemons | 7 | 2–4 |
| Vault count | Multiple profiles | Single profile typical |

The LimitMEMLOCK value in each daemon’s systemd unit controls the maximum memfd_secret and mlock allocation. Edge devices with limited RAM should reduce this to match available memory, trading maximum concurrent secret capacity for lower memory pressure.

The IPC channel capacity is configurable via global.ipc.channel_capacity in config.toml (IpcConfig in core-config/src/schema.rs). Reducing it from the default 1024 lowers per-subscriber memory usage.

ARM64 Native Builds

Open Sesame builds natively for aarch64-linux. The CI pipeline produces ARM64 .deb packages and Nix derivations without cross-compilation or QEMU emulation, avoiding the performance and correctness risks of emulated builds.

Supported Targets

| Target | Status | Use Case |
| --- | --- | --- |
| x86_64-linux | Supported | Desktop, server, cloud |
| aarch64-linux | Supported | Edge, embedded, ARM servers, Raspberry Pi |

Building for ARM64

# Native build on an ARM64 host
cargo build --release --workspace

# Or install from the APT repository (ARM64 packages available)
sudo apt install open-sesame

Embedded Linux Considerations

systemd Environments

On embedded Linux systems running systemd, Open Sesame’s service files work without modification. Adjust resource limits in the service unit overrides:

systemctl --user edit open-sesame-secrets.service

# Drop-in override contents:
[Service]
LimitMEMLOCK=16M
MemoryMax=64M

Non-systemd Environments (Design Intent)

Embedded systems using busybox init, OpenRC, or runit do not have systemd user services. For these environments, Open Sesame daemons can be started as supervised processes:

# Direct daemon startup (no systemd)
daemon-profile &
daemon-secrets &

The daemons use sd_notify for systemd integration but do not require it. On non-systemd systems, the watchdog and notify protocols are inactive; the daemons start and run without them.

Static Linking (Design Intent)

For minimal embedded root filesystems without a full glibc, static linking with musl is an architectural target:

# Target: static musl binary
cargo build --release --target aarch64-unknown-linux-musl

Static binaries eliminate shared library dependencies, simplifying deployment to embedded images. The primary obstacle is SQLCipher’s C dependency, which requires careful static linking configuration.

Secure Boot Chain

Edge devices in high-security deployments benefit from a layered protection model that roots trust in hardware.

TPM Integration (Design Intent)

The MachineBindingType::TpmBound variant (core-types/src/security.rs) represents TPM-sealed key material:

MachineBinding {
    binding_hash: BLAKE3(tpm_sealed_key || installation_id),
    binding_type: TpmBound,
}

TPM-bound installations tie the vault master key to a specific device’s TPM PCR state. If the device’s boot chain is modified (firmware update, rootkit), the PCR values change and the TPM refuses to unseal the key, preventing vault unlock on a compromised device.

Self-Encrypting Drives (SED)

On devices with SED-capable storage, the layered protection model is:

Layer 1: SED hardware encryption (transparent, always-on)
Layer 2: SQLCipher vault encryption (application-level, per-profile keys)
Layer 3: memfd_secret(2) (runtime memory protection, kernel direct-map removal)

Each layer is independent. SED protects data at rest at the storage level. SQLCipher protects vault files even if the drive is mounted on another system. memfd_secret protects decrypted secrets in memory even if the OS kernel is partially compromised.

memfd_secret on Edge Kernels

Many embedded Linux distributions use custom or vendor kernels that may not include CONFIG_SECRETMEM=y. Before deploying Open Sesame to edge devices, verify kernel support:

grep CONFIG_SECRETMEM /boot/config-$(uname -r)
# or, if /boot/config is not available:
zcat /proc/config.gz 2>/dev/null | grep SECRETMEM

For Yocto/Buildroot-based images, add CONFIG_SECRETMEM=y to the kernel defconfig. For vendor kernels where this is not possible, Open Sesame operates in fallback mode with mlock(2), logged at ERROR level.

Edge-Specific Configuration

# config.toml for edge deployment
[global]
default_profile = "device"

[global.ipc]
channel_capacity = 64
slow_subscriber_timeout_ms = 2000

[global.logging]
level = "warn"       # Reduce log volume on constrained storage
json = true          # Structured output for remote collection
journald = false     # May not have journald on embedded systems

Connectivity Patterns

Edge devices are often intermittently connected. Open Sesame’s offline-first design means all core operations (vault unlock, secret access, profile switching) work without network connectivity. Network-dependent operations are limited to:

  • SSH agent forwarding for remote unlock (requires SSH connection).
  • Extension fetching from OCI registries (can be pre-staged).
  • Log shipping to central aggregator (buffered locally, forwarded when connected).
  • Audit chain export (manual transfer via removable media if never connected).

For permanently air-gapped edge devices, see the Air-Gapped Environments documentation.

Identity Model

Open Sesame’s identity model provides globally unique, collision-resistant identifiers for installations, organizations, profiles, and vaults. The model supports federation across devices and organizations without a central identity provider.

InstallationId

Every Open Sesame installation has a unique identity, defined as InstallationId in core-types/src/security.rs:

#![allow(unused)]
fn main() {
pub struct InstallationId {
    pub id: Uuid,                                    // UUID v4, generated at sesame init
    pub org_ns: Option<OrganizationNamespace>,       // Enterprise namespace
    pub namespace: Uuid,                             // Derived, for deterministic ID generation
    pub machine_binding: Option<MachineBinding>,     // Hardware attestation
}
}

The id field is a UUID v4 generated once at sesame init and persisted in ~/.config/pds/installation.toml (InstallationConfig in core-config/src/schema_installation.rs). It never changes unless the user explicitly re-initializes.

The namespace field is derived deterministically:

namespace = uuid5(org_ns.namespace || PROFILE_NS, "install:{id}")

This derived namespace seeds deterministic ProfileId generation, ensuring that the same profile name on two different installations produces different profile IDs.

Properties

| Property | Value |
| --- | --- |
| Generation | UUID v4 for InstallationId.id; UUID v7 via Uuid::now_v7() for ProfileId, AgentId, and other define_id! types |
| Persistence | ~/.config/pds/installation.toml |
| Scope | One per user per machine |
| Collision resistance | 122 bits of randomness (UUID v4) |

Organization Namespace

The OrganizationNamespace (core-types/src/security.rs) groups installations by organization:

#![allow(unused)]
fn main() {
pub struct OrganizationNamespace {
    pub domain: String,       // e.g., "braincraft.io"
    pub namespace: Uuid,      // uuid5(NAMESPACE_URL, domain)
}
}

The namespace UUID is derived deterministically from the domain string using UUID v5 with the URL namespace. Any installation that specifies the same organization domain produces the same namespace UUID, enabling cross-installation identity correlation without a central registry.
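The property that matters here is determinism: the same domain always yields the same namespace. The sketch below demonstrates that property only, using std's DefaultHasher as a stand-in for the real uuid5(NAMESPACE_URL, domain) derivation (which hashes with SHA-1 and sets UUID version/variant bits).

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Stand-in for uuid5(NAMESPACE_URL, domain): deterministic derivation
// from a fixed namespace plus the domain string. Not a real UUID v5.
fn derive_namespace(domain: &str) -> u64 {
    let mut h = DefaultHasher::new();
    "namespace-url".hash(&mut h); // fixed namespace, like NAMESPACE_URL
    domain.hash(&mut h);
    h.finish()
}

fn main() {
    // Any installation enrolling the same domain derives the same
    // namespace, with no central registry involved.
    assert_eq!(derive_namespace("braincraft.io"), derive_namespace("braincraft.io"));
    assert_ne!(derive_namespace("braincraft.io"), derive_namespace("example.com"));
}
```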

Enrollment

sesame init --org braincraft.io

This writes the OrgConfig to installation.toml:

[org]
domain = "braincraft.io"
namespace = "a1b2c3d4-..."   # uuid5(NAMESPACE_URL, "braincraft.io")

Identity Hierarchy

The identity model forms a four-level hierarchy:

Organization (OrganizationNamespace)
  +-- Installation (InstallationId)
        +-- Profile (TrustProfileName -> ProfileId)
              +-- Vault (SQLCipher DB, 1:1 with profile)

Each level narrows scope:

  1. Organization – optional grouping by domain. Two installations in the same org share a namespace for deterministic ID derivation.
  2. Installation – a single sesame init on a single machine for a single user. Identified by UUID v4.
  3. Profile – a trust context (e.g., work, personal, ci-production). The TrustProfileName type (core-types/src/profile.rs) is a validated, path-safe string: ASCII alphanumeric plus hyphens and underscores, max 64 bytes, no path traversal. The ProfileId is a UUID v7 generated via the define_id! macro (core-types/src/ids.rs).
  4. Vault – a SQLCipher database scoped to one profile. The vault file path is vaults/{profile_name}.db. The encryption key is derived via BLAKE3 derive_key("pds v2 vault-key {profile}") from the profile’s master key.

Device Identity

An installation can optionally be bound to a specific machine via MachineBinding (core-types/src/security.rs):

#![allow(unused)]
fn main() {
pub struct MachineBinding {
    pub binding_hash: [u8; 32],            // BLAKE3 hash of machine identity material
    pub binding_type: MachineBindingType,  // MachineId or TpmBound
}
}

Two binding types are defined:

| Type | Source | Portability |
| --- | --- | --- |
| MachineId | BLAKE3 hash of /etc/machine-id + installation ID | Survives reboots, not disk clones |
| TpmBound | TPM-sealed key material | Survives reboots, tied to hardware TPM |

Machine binding serves two purposes:

  1. Attestation. The Attestation::DeviceAttestation variant (core-types/src/security.rs) includes a MachineBinding and a verification timestamp. This allows federation peers to verify that an identity claim originates from a specific physical device.

  2. Migration detection. If an installation.toml is copied to a different machine, the machine binding hash does not match /etc/machine-id on the new host. The system detects this and can require re-attestation.

Cross-Device Identity Correlation

Same Organization

Two installations in the same organization (same org.domain) share a derived namespace. The ProfileRef type (core-types/src/profile.rs) fully qualifies a profile across installations:

#![allow(unused)]
fn main() {
pub struct ProfileRef {
    pub name: TrustProfileName,
    pub id: ProfileId,
    pub installation: InstallationId,
}
}

A ProfileRef uniquely identifies a profile in a federation context. Two devices in the same organization with the same profile name (for example, work) produce different ProfileRef values because their installation.id fields differ.

Cross-Organization

Installations in different organizations have different namespace derivations. Cross-organization identity correlation requires explicit trust establishment (out-of-band key exchange or mutual attestation), not namespace collision.

ID Generation

The define_id! macro in core-types/src/ids.rs generates typed ID wrappers over UUID v7:

#![allow(unused)]
fn main() {
define_id!(ProfileId, "prof");
define_id!(AgentId, "agent");
define_id!(DaemonId, "dmon");
define_id!(ExtensionId, "ext");
// ... and others
}

Each ID type:

  • Wraps a Uuid (UUID v7 via Uuid::now_v7()).
  • Displays with a type prefix (e.g., prof-01234567-..., agent-89abcdef-...).
  • Implements Serialize/Deserialize as a transparent UUID.
  • Is Copy, Eq, Hash, and Ord.

UUID v7 is time-ordered, so IDs generated later sort after IDs generated earlier. This provides natural chronological ordering for audit logs and event streams without a separate timestamp field.
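The time-ordering property follows from the UUID v7 bit layout: a 48-bit Unix-millisecond timestamp occupies the most significant bits, so byte-wise comparison orders IDs chronologically. The sketch below builds the layout by hand (the 12-bit rand_a segment is left zero for brevity; a real generator fills it with entropy).

```rust
// Minimal sketch of the UUID v7 layout: 48-bit ms timestamp, version
// nibble, RFC 4122 variant bits, then random bits (here: caller-supplied).
fn uuid_v7(unix_ms: u64, entropy: u64) -> u128 {
    let ts = (unix_ms as u128 & 0xFFFF_FFFF_FFFF) << 80; // top 48 bits
    let version = 0x7u128 << 76;                          // version nibble = 7
    let variant = 0b10u128 << 62;                         // RFC 4122 variant
    let rand = entropy as u128 & 0x3FFF_FFFF_FFFF_FFFF;   // low 62 bits
    ts | version | variant | rand
}

fn main() {
    let earlier = uuid_v7(1_700_000_000_000, u64::MAX);
    let later = uuid_v7(1_700_000_000_001, 0);
    // An ID generated one millisecond later sorts after the earlier one,
    // regardless of the random bits.
    assert!(later > earlier);
}
```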

Installation Configuration on Disk

The InstallationConfig struct (core-config/src/schema_installation.rs) is the TOML-serialized form of the installation identity:

# ~/.config/pds/installation.toml
id = "a1b2c3d4-e5f6-7890-abcd-ef1234567890"
namespace = "fedcba98-7654-3210-fedc-ba9876543210"

[org]
domain = "braincraft.io"
namespace = "12345678-abcd-ef01-2345-6789abcdef01"

[machine_binding]
binding_hash = "a1b2c3d4e5f6..."  # hex-encoded BLAKE3 hash
binding_type = "machine-id"

The org and machine_binding sections are optional. A personal desktop installation without enterprise management or hardware binding omits both:

id = "a1b2c3d4-e5f6-7890-abcd-ef1234567890"
namespace = "fedcba98-7654-3210-fedc-ba9876543210"

Agent Identity

Open Sesame models every entity that interacts with the system – human operators, AI agents, system services, and WASI extensions – as an agent with a typed identity, local process binding, and capability-scoped session.

AgentId and AgentType

The AgentId type (core-types/src/ids.rs) is a UUID v7 wrapper generated via define_id!(AgentId, "agent"). Each agent receives a unique identifier at registration time, displayed with the agent- prefix (e.g., agent-01941c8a-...).

The AgentType enum (core-types/src/security.rs) classifies what kind of entity the agent is:

#![allow(unused)]
fn main() {
pub enum AgentType {
    Human,
    AI { model_family: String },
    Service { unit: String },
    Extension { manifest_hash: [u8; 32] },
}
}

| Variant | Description | Example |
| --- | --- | --- |
| Human | Interactive operator with keyboard/mouse | Desktop user |
| AI { model_family } | LLM-based agent, API-driven | model_family: "claude-4" |
| Service { unit } | systemd service or daemon process | unit: "daemon-launcher.service" |
| Extension { manifest_hash } | WASI extension, content-addressed | SHA-256 of the WASM module |

AgentType is descriptive metadata, not a trust tier. An AI agent with proper attestations and a delegation chain can have higher effective trust than a human agent without a security key. Trust is evaluated via TrustVector, not AgentType.

Local Process Identity

The LocalAgentId enum (core-types/src/security.rs) binds an agent to a local process:

#![allow(unused)]
fn main() {
pub enum LocalAgentId {
    UnixUid(u32),
    ProcessIdentity { uid: u32, process_name: String },
    SystemdUnit(String),
    WasmHash([u8; 32]),
}
}

| Variant | Verification | Use Case |
| --- | --- | --- |
| UnixUid | UCred from Unix domain socket | Minimal identity, CLI tools |
| ProcessIdentity | UCred + /proc/{pid}/exe inspection | Named processes |
| SystemdUnit | systemd unit name lookup | Daemon services |
| WasmHash | Content hash of WASM module bytes | Sandboxed extensions |

Local agent identity is established during IPC connection setup. When a process connects to the Noise IK bus, the server extracts UCred (pid, uid, gid) from the Unix domain socket and looks up the connecting process’s identity.

AgentIdentity

The AgentIdentity struct (core-types/src/security.rs) is the complete identity record for an agent during a session:

#![allow(unused)]
fn main() {
pub struct AgentIdentity {
    pub id: AgentId,
    pub agent_type: AgentType,
    pub local_id: LocalAgentId,
    pub installation: InstallationId,
    pub attestations: Vec<Attestation>,
    pub session_scope: CapabilitySet,
    pub delegation_chain: Vec<DelegationLink>,
}
}

| Field | Purpose |
| --- | --- |
| id | Globally unique agent identifier (UUID v7) |
| agent_type | Classification: Human, AI, Service, Extension |
| local_id | Process-level binding on this machine |
| installation | Which Open Sesame installation this agent belongs to |
| attestations | Evidence accumulated during this session |
| session_scope | Effective capabilities for this session |
| delegation_chain | Chain of authority from the root delegator |

AgentMetadata

The AgentMetadata struct (core-types/src/security.rs) describes an agent’s type and the attestation methods available to it:

#![allow(unused)]
fn main() {
pub struct AgentMetadata {
    pub agent_type: AgentType,
    pub available_attestation_methods: Vec<AttestationMethod>,
}
}

Available attestation methods vary by agent type:

| Agent Type | Typical Attestation Methods |
| --- | --- |
| Human | MasterPassword, SecurityKey, DeviceAttestation |
| AI | Delegation, ProcessAttestation |
| Service | ProcessAttestation, DeviceAttestation |
| Extension | ProcessAttestation (WASM hash verification) |

The AttestationMethod enum (core-types/src/security.rs) defines the methods:

  • MasterPassword – password-based, for human agents.
  • SecurityKey – FIDO2/WebAuthn hardware token.
  • ProcessAttestation – process identity verification via /proc inspection.
  • Delegation – authority delegated from another agent.
  • DeviceAttestation – machine-level binding (TPM, machine-id).

Attestation

The Attestation enum (core-types/src/security.rs) captures the evidence used to verify an agent’s identity claim. Each variant records the specific data for one verification method:

| Variant | Evidence |
| --- | --- |
| UCred | pid, uid, gid from Unix domain socket |
| NoiseIK | X25519 public key, registry generation counter |
| MasterPassword | Timestamp of successful verification |
| SecurityKey | FIDO2 credential ID, verification timestamp |
| ProcessAttestation | pid, SHA-256 of executable, uid |
| Delegation | Delegator AgentId, granted CapabilitySet, chain depth |
| DeviceAttestation | MachineBinding, verification timestamp |
| RemoteAttestation | Remote InstallationId, nested remote device attestation |
| HeartbeatRenewal | Original attestation type, renewal attestation, renewal timestamp |

Multiple attestations compose to strengthen trust. For example, UCred + MasterPassword produces a higher TrustLevel in the TrustVector than either alone. The attestations vector on AgentIdentity accumulates all attestation evidence for the current session.
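The composition rule can be illustrated with a small, ordered TrustLevel enum. The specific mapping below (which combinations yield which level) is invented for illustration; only the ordering of levels and the UCred + MasterPassword example come from the documentation.

```rust
// Illustrative mapping from accumulated attestations to authn_strength.
// The concrete policy is an assumption, not the shipped evaluation logic.
#[derive(Debug, PartialEq, Eq, PartialOrd, Ord, Clone, Copy)]
enum TrustLevel { None, Low, Medium, High, Hardware }

#[derive(PartialEq)]
enum Attestation { UCred, MasterPassword, SecurityKey }

fn authn_strength(attestations: &[Attestation]) -> TrustLevel {
    let ucred = attestations.contains(&Attestation::UCred);
    let password = attestations.contains(&Attestation::MasterPassword);
    let key = attestations.contains(&Attestation::SecurityKey);
    match (ucred, password, key) {
        (_, _, true) => TrustLevel::Hardware,  // hardware token dominates
        (true, true, _) => TrustLevel::High,   // socket identity + password compose
        (_, true, _) => TrustLevel::Medium,
        (true, _, _) => TrustLevel::Low,
        _ => TrustLevel::None,
    }
}

fn main() {
    use Attestation::*;
    // UCred + MasterPassword produces a higher level than either alone.
    assert!(authn_strength(&[UCred, MasterPassword]) > authn_strength(&[UCred]));
    assert!(authn_strength(&[UCred, MasterPassword]) > authn_strength(&[MasterPassword]));
    assert_eq!(authn_strength(&[SecurityKey]), TrustLevel::Hardware);
}
```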

Machine Agents

Service accounts and AI agents operate as machine agents with restricted capabilities. A machine agent’s AgentIdentity is established as follows:

  1. Registration. The agent is registered with an AgentId, AgentType, and initial CapabilitySet. For example, a CI runner agent might receive:

    CapabilitySet { SecretRead { key_pattern: Some("ci/*") }, SecretList }
    
  2. Attestation. At connection time, the agent presents attestation evidence. For a Service agent, this is Attestation::ProcessAttestation:

    #![allow(unused)]
    fn main() {
    Attestation::ProcessAttestation {
        pid: 12345,
        exe_hash: <SHA-256 of /usr/bin/ci-runner>,
        uid: 1001,
    }
    }
  3. Session scope. The agent’s session_scope is the intersection of its registered capabilities and any delegation grant’s scope. The agent cannot exceed the capabilities it was registered with, and delegation further narrows scope.

Agent Lifecycle

Registration

Agent registration creates an AgentId and associates it with an AgentType and initial capability set. For built-in daemons, registration is automatic at IPC bus connection via the clearance registry (core-ipc/src/registry.rs). Each daemon’s X25519 public key maps to a DaemonId and SecurityLevel.

Key Rotation (Design Intent)

The clearance registry maintains a registry_generation counter. When an agent’s X25519 key pair is rotated:

  1. The new public key is registered with an incremented generation.
  2. The old public key is revoked (removed from the registry).
  3. Peers that cached the old key receive a registry update.

The Attestation::NoiseIK variant records the registry_generation at the time of verification, enabling peers to detect stale attestations.

Revocation

Revoking an agent removes its public key from the clearance registry. Subsequent connection attempts with the revoked key are rejected. Active sessions using the revoked key continue until the next re-authentication interval.

Human-to-Agent Delegation

A human operator can delegate capabilities to a machine agent via DelegationGrant (core-types/src/security.rs). The delegation:

  • Narrows scope: the delegatee’s effective capabilities are delegator_scope.intersection(grant.scope).
  • Is time-bounded: initial_ttl sets the maximum grant lifetime.
  • Requires heartbeat: heartbeat_interval sets how often the delegatee must renew.
  • Is signed: Ed25519 signature over grant fields prevents tampering.
  • Records depth: DelegationLink.depth tracks how many hops from the root delegator (0 = direct from human).

See the Delegation documentation for the full delegation model.

Trust Evaluation

Agent trust is not determined by AgentType alone. The TrustVector (core-types/src/security.rs) evaluates trust across multiple dimensions:

#![allow(unused)]
fn main() {
pub struct TrustVector {
    pub authn_strength: TrustLevel,       // None < Low < Medium < High < Hardware
    pub authz_freshness: Duration,        // Time since last authorization refresh
    pub delegation_depth: u8,             // 0 = direct human
    pub device_posture: f64,              // 0.0 = unknown, 1.0 = fully attested
    pub network_exposure: NetworkTrust,   // Local < Encrypted < Onion < PublicInternet
    pub agent_type: AgentType,            // Metadata, not a trust tier
}
}

Authorization decisions consume the TrustVector holistically. A Service agent on a local Unix socket with Hardware-level authentication and zero delegation depth may be trusted more than a Human agent on an encrypted network with Medium authentication and a stale authorization token.

Delegation

Open Sesame implements capability delegation via the DelegationGrant type, enabling agents to transfer a subset of their capabilities to other agents with time-bounded, scope-narrowed, cryptographically signed grants.

DelegationGrant

The DelegationGrant struct is defined in core-types/src/security.rs:

#![allow(unused)]
fn main() {
pub struct DelegationGrant {
    pub delegator: AgentId,
    pub scope: CapabilitySet,
    pub initial_ttl: Duration,
    pub heartbeat_interval: Duration,
    pub nonce: [u8; 16],
    pub point_of_use_filter: Option<OciReference>,
    pub signature: Vec<u8>,   // Ed25519 signature over the grant fields
}
}

| Field | Purpose |
| --- | --- |
| delegator | The AgentId of the agent issuing the grant |
| scope | Maximum capabilities the delegatee may exercise |
| initial_ttl | Time-to-live from grant creation; the grant expires after this duration |
| heartbeat_interval | How often the delegatee must renew; a missed heartbeat revokes the grant |
| nonce | 16-byte anti-replay nonce, unique per grant |
| point_of_use_filter | Optional OCI reference restricting where the grant can be used |
| signature | Ed25519 signature over all other fields by the delegator |

Scope Narrowing

Delegation enforces a fundamental invariant: a delegatee can never exceed its delegator’s capabilities. The delegatee’s effective capabilities are computed as:

effective = delegator_scope.intersection(grant.scope)

The CapabilitySet type (core-types/src/security.rs) implements lattice operations:

| Operation | Method | Semantics |
| --- | --- | --- |
| Union | a.union(b) | All capabilities from both sets |
| Intersection | a.intersection(b) | Only capabilities in both sets |
| Subset test | a.is_subset(b) | True if every capability in a is in b |
| Superset test | a.is_superset(b) | True if every capability in b is in a |
| Empty set | CapabilitySet::empty() | No capabilities |
| Full set | CapabilitySet::all() | All non-parameterized capabilities |

Example

A human operator holds { Admin, SecretRead, SecretWrite, SecretList, Unlock }. The operator delegates to a CI agent with scope { SecretRead { key_pattern: Some("ci/*") }, SecretList }:

Delegator scope:  { Admin, SecretRead, SecretWrite, SecretList, Unlock }
Grant scope:      { SecretRead { key_pattern: "ci/*" }, SecretList }
Effective:        { SecretRead { key_pattern: "ci/*" }, SecretList }

The CI agent can read secrets matching ci/* and list secret keys. It cannot write secrets, unlock vaults, or perform admin operations, even though the delegator holds those capabilities.
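The narrowing above can be reproduced with plain sets. This models only the non-parameterized lattice; the real CapabilitySet additionally intersects key patterns, which sets of opaque names cannot express.

```rust
use std::collections::BTreeSet;

// Plain-set sketch of CapabilitySet lattice operations.
type Caps = BTreeSet<&'static str>;

fn caps(items: &[&'static str]) -> Caps {
    items.iter().copied().collect()
}

fn main() {
    let delegator = caps(&["Admin", "SecretRead", "SecretWrite", "SecretList", "Unlock"]);
    let grant = caps(&["SecretRead", "SecretList"]);

    // effective = delegator_scope.intersection(grant.scope)
    let effective: Caps = delegator.intersection(&grant).copied().collect();
    assert_eq!(effective, grant);

    // Monotonic narrowing: the intersection is a subset of both inputs.
    assert!(effective.is_subset(&delegator));
    assert!(effective.is_subset(&grant));
    assert!(!effective.contains("SecretWrite")); // capabilities are never widened
}
```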

Parameterized Capabilities

Several capabilities accept optional parameters that further restrict scope:

#![allow(unused)]
fn main() {
Capability::SecretRead { key_pattern: Option<String> }
Capability::SecretWrite { key_pattern: Option<String> }
Capability::SecretDelete { key_pattern: Option<String> }
Capability::Delegate { max_depth: u8, scope: Box<CapabilitySet> }
}

A SecretRead with key_pattern: None permits reading any secret. A SecretRead with key_pattern: Some("ci/*") restricts reads to keys matching the glob pattern. Delegation intersection treats parameterized capabilities as more restrictive: the result uses the narrower pattern.
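A minimal matcher for these patterns might look like the following. It assumes patterns are either exact keys or a prefix followed by a trailing *, which covers the ci/* examples in this section; the shipped matcher may support richer globs.

```rust
// key_pattern: None means unrestricted; Some(p) restricts to keys
// matching p, where a trailing '*' is treated as a prefix wildcard.
fn pattern_matches(pattern: Option<&str>, key: &str) -> bool {
    match pattern {
        None => true, // no pattern: any secret key is permitted
        Some(p) => match p.strip_suffix('*') {
            Some(prefix) => key.starts_with(prefix),
            None => key == p,
        },
    }
}

fn main() {
    assert!(pattern_matches(None, "anything/at/all"));
    assert!(pattern_matches(Some("ci/*"), "ci/deploy-token"));
    assert!(!pattern_matches(Some("ci/*"), "prod/db-password"));
    assert!(pattern_matches(Some("exact-key"), "exact-key"));
}
```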

Time-Bounded Grants

Every DelegationGrant has two temporal controls:

initial_ttl

The grant is valid for initial_ttl from creation time. After this duration, the grant expires regardless of heartbeat activity. This prevents indefinite capability transfer.

heartbeat_interval

The delegatee must renew the grant at intervals not exceeding heartbeat_interval. A missed heartbeat revokes the grant. This provides continuous verification that the delegatee is still active and authorized.
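The two temporal controls compose as a simple conjunction: a grant is live only while both checks pass. The sketch below expresses all times as durations since grant creation; field names mirror DelegationGrant, but the check itself is illustrative.

```rust
use std::time::Duration;

// Temporal fields of a grant, with all times relative to grant creation.
struct GrantTimes {
    initial_ttl: Duration,
    heartbeat_interval: Duration,
}

fn grant_valid(g: &GrantTimes, now: Duration, last_heartbeat: Duration) -> bool {
    // Expired outright once initial_ttl elapses, regardless of heartbeats.
    if now > g.initial_ttl {
        return false;
    }
    // Revoked if the delegatee missed a heartbeat window.
    now.saturating_sub(last_heartbeat) <= g.heartbeat_interval
}

fn main() {
    let g = GrantTimes {
        initial_ttl: Duration::from_secs(3600),
        heartbeat_interval: Duration::from_secs(60),
    };
    assert!(grant_valid(&g, Duration::from_secs(90), Duration::from_secs(60)));
    // Missed heartbeat: the last renewal was 120 seconds ago.
    assert!(!grant_valid(&g, Duration::from_secs(180), Duration::from_secs(60)));
    // TTL expiry wins even over an up-to-date heartbeat.
    assert!(!grant_valid(&g, Duration::from_secs(3700), Duration::from_secs(3690)));
}
```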

The Attestation::HeartbeatRenewal variant (core-types/src/security.rs) records heartbeat events:

#![allow(unused)]
fn main() {
Attestation::HeartbeatRenewal {
    original_attestation_type: AttestationType,
    renewal_attestation: Box<Attestation>,
    renewed_at: u64,
}
}

Delegation Chains

Grants can be chained: agent A delegates to agent B, which delegates to agent C. The DelegationLink struct tracks position in the chain:

#![allow(unused)]
fn main() {
pub struct DelegationLink {
    pub grant: DelegationGrant,
    pub depth: u8,   // 0 = direct from human operator
}
}

The AgentIdentity.delegation_chain field (core-types/src/security.rs) stores the full chain of DelegationLink entries from the root delegator to the current agent.

Chain Depth Control

The Capability::Delegate variant includes a max_depth field:

#![allow(unused)]
fn main() {
Capability::Delegate {
    max_depth: u8,
    scope: Box<CapabilitySet>,
}
}

max_depth limits how many times a delegation can be re-delegated. A grant with max_depth: 2 allows:

Human (depth 0) -> Agent A (depth 1) -> Agent B (depth 2)

Agent B cannot further delegate because depth 2 equals max_depth. This prevents unbounded delegation chains that would make audit trails difficult to follow.

Chain Verification

To verify a delegation chain:

  1. Start from the root delegator (depth 0). Verify the root is a known, trusted agent (typically AgentType::Human).
  2. For each link in the chain:
    • Verify the signature over the DelegationGrant fields using the delegator’s Ed25519 public key.
    • Verify that the grant has not expired (initial_ttl not exceeded).
    • Verify that the heartbeat is current (heartbeat_interval not exceeded).
    • Verify that depth does not exceed the Delegate.max_depth from the delegator’s capability.
    • Compute effective scope as previous_scope.intersection(grant.scope).
  3. The final effective scope is the intersection of all grants in the chain.
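The verification steps above can be condensed into a chain walk. Signature, TTL, and heartbeat checks are elided here, leaving the two structural invariants: depth increases by one per hop up to max_depth, and scope only ever narrows.

```rust
use std::collections::BTreeSet;

// One hop in the chain: the grant's scope and its depth from the root.
struct Link {
    scope: BTreeSet<&'static str>,
    depth: u8,
}

// Returns the final effective scope, or None if the chain is invalid.
fn verify_chain(
    root_scope: &BTreeSet<&'static str>,
    links: &[Link],
    max_depth: u8,
) -> Option<BTreeSet<&'static str>> {
    let mut effective = root_scope.clone();
    for (i, link) in links.iter().enumerate() {
        // Depth must increase by exactly one per hop, never past max_depth.
        if link.depth != (i as u8) + 1 || link.depth > max_depth {
            return None;
        }
        // Compute effective scope as previous_scope ∩ grant.scope.
        effective = effective.intersection(&link.scope).copied().collect();
    }
    Some(effective)
}

fn main() {
    let root = BTreeSet::from(["SecretRead", "SecretWrite", "SecretList"]);
    let a = Link { scope: BTreeSet::from(["SecretRead", "SecretList"]), depth: 1 };
    let b = Link { scope: BTreeSet::from(["SecretRead"]), depth: 2 };
    // Human (depth 0) -> Agent A (depth 1) -> Agent B (depth 2), max_depth = 2.
    let effective = verify_chain(&root, &[a, b], 2).unwrap();
    assert_eq!(effective, BTreeSet::from(["SecretRead"]));

    // A third hop would exceed max_depth, so the chain is rejected.
    let d = Link { scope: BTreeSet::from(["SecretRead", "SecretList"]), depth: 1 };
    let e = Link { scope: BTreeSet::from(["SecretRead"]), depth: 2 };
    let c = Link { scope: BTreeSet::from(["SecretRead"]), depth: 3 };
    assert!(verify_chain(&root, &[d, e, c], 2).is_none());
}
```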

Monotonic Narrowing

Each link in the chain can only narrow capabilities, never widen them. The intersection operation guarantees:

scope_n <= scope_{n-1} <= ... <= scope_0

where <= means “is a subset of.” This is a structural property of the lattice: a.intersection(b).is_subset(a) is always true.

Anti-Replay

Each DelegationGrant contains a 16-byte nonce field. The nonce must be unique across all grants from a given delegator. A delegation verifier maintains a set of observed nonces and rejects grants with previously-seen nonces. This prevents replay attacks where a revoked or expired grant is re-presented.
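The observed-nonce set is a one-line data structure. A sketch of the per-delegator guard:

```rust
use std::collections::HashSet;

// A verifier accepts each 16-byte grant nonce at most once.
struct ReplayGuard {
    seen: HashSet<[u8; 16]>,
}

impl ReplayGuard {
    fn new() -> Self {
        ReplayGuard { seen: HashSet::new() }
    }

    // Returns true if the nonce is fresh; false means a replayed grant.
    fn accept(&mut self, nonce: [u8; 16]) -> bool {
        self.seen.insert(nonce) // HashSet::insert returns false on duplicates
    }
}

fn main() {
    let mut guard = ReplayGuard::new();
    let nonce = [7u8; 16];
    assert!(guard.accept(nonce));     // first presentation accepted
    assert!(!guard.accept(nonce));    // re-presented grant rejected
    assert!(guard.accept([8u8; 16])); // a different nonce is still fine
}
```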

Point-of-Use Filter

The point_of_use_filter field is an optional OciReference (core-types/src/oci.rs) that restricts where the delegation can be used:

#![allow(unused)]
fn main() {
pub struct OciReference {
    pub registry: String,
    pub principal: String,
    pub scope: String,
    pub revision: String,
    pub provenance: Option<String>,
}
}

When present, the delegation is only valid in the context of the specified OCI artifact. This is intended for extension-scoped delegations: a grant that authorizes an extension to read secrets only when running as part of a specific, content-addressed WASM module.

The Delegate Capability

The Capability::Delegate variant is itself a capability that must be held to issue delegations:

#![allow(unused)]
fn main() {
Capability::Delegate {
    max_depth: u8,
    scope: Box<CapabilitySet>,
}
}

An agent without Capability::Delegate in its session_scope cannot create DelegationGrant entries. The scope field within the Delegate capability limits what the agent can delegate, and max_depth limits the chain length. The ability to delegate is itself subject to delegation narrowing.

Revocation

Delegation grants are revoked in the following scenarios:

  1. TTL expiry. The initial_ttl has elapsed since grant creation.
  2. Missed heartbeat. The delegatee did not renew within heartbeat_interval.
  3. Delegator revocation. The delegator explicitly revokes the grant (removes it from the active grant set).
  4. Chain invalidation. Any link in the delegation chain is revoked, which invalidates all downstream links.

Revocation is immediate and does not require the delegatee’s cooperation. The delegatee’s next operation that requires the revoked capability is denied.
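The time-based conditions (1) and (2) can be sketched as a single validity predicate. The struct below mirrors the grant fields described above but is otherwise illustrative:

```rust
/// Illustrative grant timing data; field names echo DelegationGrant, but this
/// is a sketch, not the real verifier.
struct GrantTimes {
    created_at: u64,         // seconds since epoch
    initial_ttl: u64,        // seconds
    heartbeat_interval: u64, // seconds
    last_heartbeat: u64,     // seconds since epoch
}

/// A grant is active only while both the TTL and the heartbeat window hold.
fn is_active(g: &GrantTimes, now: u64) -> bool {
    let ttl_ok = now < g.created_at + g.initial_ttl;
    let heartbeat_ok = now < g.last_heartbeat + g.heartbeat_interval;
    ttl_ok && heartbeat_ok
}

fn main() {
    let g = GrantTimes {
        created_at: 0,
        initial_ttl: 86_400,
        heartbeat_interval: 3_600,
        last_heartbeat: 0,
    };
    assert!(is_active(&g, 1_800));   // within TTL and heartbeat window
    assert!(!is_active(&g, 4_000));  // heartbeat missed
    assert!(!is_active(&g, 90_000)); // TTL elapsed
}
```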

Multi-Cluster Federation

Design Intent. This page describes cross-cluster secret synchronization between Open Sesame installations. The primitives referenced below (InstallationId, ProfileRef, OrganizationNamespace, CryptoConfig, Noise IK transport) exist in the type system and IPC layer today. The synchronization protocol, conflict resolution logic, and selective sync policies are architectural targets.

Overview

Multi-cluster federation enables multiple Open Sesame installations to share secrets and profiles across trust boundaries. Each installation operates independently and maintains full functionality without connectivity to peers. Synchronization is an additive capability layered on top of the existing single-installation model.

Prerequisites

Federated installations must share an OrganizationNamespace (core-types/src/security.rs). Installations in different organizations cannot federate without explicit trust establishment. The shared org namespace ensures deterministic profile ID derivation, so the same profile name on different installations can be correlated.

Each peer must meet the minimum_peer_profile requirement from CryptoConfig (core-types/src/crypto.rs):

pub struct CryptoConfig {
    // ...
    pub minimum_peer_profile: CryptoProfile,  // LeadingEdge, GovernanceCompatible, or Custom
}

A peer advertising GovernanceCompatible algorithms (PBKDF2-SHA256, HKDF-SHA256, AES-GCM, SHA-256) is rejected by an installation requiring LeadingEdge unless the policy explicitly allows it.

Profile-Scoped Synchronization

Synchronization is scoped to individual trust profiles. The ProfileRef type (core-types/src/profile.rs) fully qualifies a profile in a federation context:

pub struct ProfileRef {
    pub name: TrustProfileName,
    pub id: ProfileId,
    pub installation: InstallationId,
}

Two installations that each define the profile work have different ProfileRef values because their InstallationId fields differ. Federation maps these profiles to each other by matching on TrustProfileName within the shared org namespace.

Selective Sync Policies (Design Intent)

Not all secrets in a profile need to be synchronized. Selective sync policies control which secrets replicate:

Sync Policy for profile "work":
  - sync: secrets matching "shared/*"
  - exclude: secrets matching "local/*"
  - direction: bidirectional
  - peers: [installation-uuid-1, installation-uuid-2]

Secrets matching the local/* pattern remain on the originating installation. Secrets matching shared/* replicate to specified peers. The policy is configured per-profile and per-installation.
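A sketch of the pattern check, under the assumption that sync patterns are simple trailing-asterisk globs (the real policy engine is a design target):

```rust
/// Minimal glob check: only trailing-asterisk patterns are handled here.
fn matches(pattern: &str, key: &str) -> bool {
    match pattern.strip_suffix('*') {
        Some(prefix) => key.starts_with(prefix),
        None => key == pattern,
    }
}

/// Policy from the example above: sync "shared/*", exclude "local/*".
fn should_sync(key: &str) -> bool {
    !matches("local/*", key) && matches("shared/*", key)
}

fn main() {
    assert!(should_sync("shared/api-key"));      // replicates to peers
    assert!(!should_sync("local/scratch-token")); // stays on this installation
    assert!(!should_sync("other/key"));           // not covered by the sync rule
}
```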

Conflict Resolution (Design Intent)

When two installations modify the same secret independently (e.g., during a network partition), a conflict arises at synchronization time.

Last-Writer-Wins with Vector Clocks

Each secret carries a vector clock with one entry per installation that has modified it:

Secret "shared/api-key":
  Installation A: version 3
  Installation B: version 2

On synchronization:

  1. No conflict. One vector clock strictly dominates the other (all entries greater or equal, at least one strictly greater). The dominating version wins.

  2. Concurrent writes. Neither vector clock dominates. This is a true conflict. Resolution strategy:

    • Default: last-writer-wins. The write with the latest wall-clock timestamp wins. The losing version is preserved in a conflict log for manual review.
    • Configurable. Future policy options include: reject (require manual resolution), merge (for structured secret formats), or defer to a specific installation.
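The dominance test from step 1 can be sketched over vector clocks keyed by installation. This is illustrative code, not the synchronization implementation:

```rust
use std::cmp::Ordering;
use std::collections::{HashMap, HashSet};

// Some(Greater/Less) means one clock strictly dominates; Some(Equal) means
// identical clocks; None means concurrent writes, i.e. a true conflict.
fn compare(a: &HashMap<&str, u64>, b: &HashMap<&str, u64>) -> Option<Ordering> {
    let keys: HashSet<_> = a.keys().chain(b.keys()).collect();
    let (mut a_ahead, mut b_ahead) = (false, false);
    for k in keys {
        let va = a.get(k).copied().unwrap_or(0); // absent entry counts as 0
        let vb = b.get(k).copied().unwrap_or(0);
        if va > vb { a_ahead = true; }
        if vb > va { b_ahead = true; }
    }
    match (a_ahead, b_ahead) {
        (true, false) => Some(Ordering::Greater),
        (false, true) => Some(Ordering::Less),
        (false, false) => Some(Ordering::Equal),
        (true, true) => None, // neither dominates: concurrent writes
    }
}

fn main() {
    // The "shared/api-key" example above: A has seen at least as many writes
    // everywhere and strictly more somewhere, so A's version wins cleanly.
    let a = HashMap::from([("install_A", 3), ("install_B", 2)]);
    let b = HashMap::from([("install_A", 2), ("install_B", 2)]);
    assert_eq!(compare(&a, &b), Some(Ordering::Greater)); // no conflict

    let c = HashMap::from([("install_A", 1), ("install_B", 3)]);
    assert_eq!(compare(&a, &c), None); // concurrent: resolution policy applies
}
```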

Conflict Log

All conflicts are recorded in the audit chain with both versions, their vector clocks, and the resolution applied. The sesame audit verify command can surface unreviewed conflicts.

Split-Brain Handling (Design Intent)

When network connectivity between peers is lost, each installation continues operating independently. This is the normal mode of operation for Open Sesame: the system is designed for offline-first use.

Partition Behavior

During a partition:

  • Each installation reads and writes its local vault without restriction.
  • No synchronization occurs.
  • The audit chain continues recording local operations.

Convergence on Reconnect

When connectivity is restored:

  1. Each peer advertises its vector clock state for each synchronized profile.
  2. Peers exchange only the secrets that have changed since the last synchronization point.
  3. Conflicts are resolved per the configured policy.
  4. Audit chains from both peers are cross-referenced to build a unified timeline.

The convergence protocol is idempotent: re-running synchronization after a successful sync produces no changes.

Encrypted Replication

All synchronization traffic between peers uses the Noise IK protocol, the same transport used for local IPC. This provides:

Property | Mechanism
--- | ---
Mutual authentication | X25519 static keys, verified against clearance registry
Forward secrecy | Per-session Noise IK ephemeral keys
Encryption | ChaChaPoly (default) or AES-256-GCM (NoiseCipher in core-types/src/crypto.rs)
Integrity | Noise protocol MAC
Replay protection | Noise protocol nonce management

Peer identity is established via the Attestation::RemoteAttestation variant (core-types/src/security.rs):

Attestation::RemoteAttestation {
    remote_installation: InstallationId,
    remote_device_attestation: Box<Attestation>,
}

This nests the remote peer’s device attestation (e.g., machine binding, TPM) inside a remote attestation wrapper, providing end-to-end identity verification for the replication channel.

Topology

Federation supports multiple topologies:

Hub-Spoke

A central installation acts as the synchronization hub. Leaf installations sync with the hub only:

      Hub
     / | \
    A  B  C

Simpler to manage. The hub is a single point of failure for synchronization (not for local operation).

Mesh

Every installation syncs with every other installation:

    A --- B
    |  X  |
    C --- D

No single point of failure. Higher bandwidth and complexity. See Mesh Topology for the full mesh design.

Partial Mesh

Selected installations sync with selected peers:

    A --- B
          |
    C --- D

Supports organizational boundaries where teams share a subset of profiles.

Security Considerations

Trust Boundary

Each synchronized secret crosses a trust boundary at the profile level. An installation’s SecurityLevel hierarchy (Open < Internal < ProfileScoped < SecretsOnly) applies locally. A remote peer with access to a shared profile can read secrets at the ProfileScoped level for that profile, but cannot escalate to SecretsOnly on the local installation.

Delegation for Sync Agents

The synchronization agent on each installation operates with a DelegationGrant scoped to the secrets being synchronized:

DelegationGrant {
    delegator: <local-operator>,
    scope: { SecretRead { key_pattern: "shared/*" }, SecretWrite { key_pattern: "shared/*" } },
    initial_ttl: 86400s,
    heartbeat_interval: 3600s,
    ...
}

This ensures the sync agent cannot access local-only secrets, even if compromised.

Audit Trail

Every secret received from a remote peer is recorded in the local audit chain with the remote peer’s InstallationId and the grant under which the sync occurred. This provides a tamper-evident record of which secrets were replicated from where.

Human-Agent Orchestration

Design Intent. This page describes mixed workflows where human operators and AI/service agents collaborate on secret-bearing operations. The agent identity model (AgentIdentity, AgentType, DelegationGrant, TrustVector) is defined in the type system today. The approval gate mechanisms, escalation protocols, and multi-party authorization described below are architectural targets.

Overview

Open Sesame treats human operators and machine agents as peers in the same identity system. Both are AgentIdentity instances with typed identities, attestations, capability sets, and delegation chains. The difference is not in system architecture but in the attestation methods available and the trust policies applied.

The core principle: agents can perform secret-bearing operations only to the extent that a human has authorized them, either directly (via DelegationGrant) or via policy.

Agent Types in Orchestration

The AgentType enum (core-types/src/security.rs) defines the entity classification:

Type | Role in Orchestration
--- | ---
Human | Approver, delegator, root of trust for capability chains
AI { model_family } | Automated operations, LLM-driven workflows, copilot actions
Service { unit } | Background processes, CI/CD pipelines, cron jobs
Extension { manifest_hash } | WASI plugins operating in a content-addressed sandbox

Approval Gates (Design Intent)

An approval gate is a policy that requires human authorization before an agent can access a secret or perform a privileged operation.

Gate Model

When an AI or Service agent requests a capability that requires approval:

  1. The agent submits a request specifying the capability needed and the context (profile, secret key pattern, operation type).
  2. The request is held in a pending state. The agent blocks or receives a pending response.
  3. One or more human operators are notified.
  4. The human reviews the request and either approves (issuing a DelegationGrant) or denies.
  5. On approval, the agent’s session_scope is updated with the granted capabilities for the duration of initial_ttl.

Gate Conditions

Approval gates are triggered by the gap between an agent’s session_scope and the capabilities required for the requested operation:

Agent session_scope: { SecretList, StatusRead }
Requested operation: SecretRead { key_pattern: "production/db-*" }

Gap: { SecretRead { key_pattern: "production/db-*" } }
  --> Approval gate triggered

If the agent already holds the required capability (e.g., from a prior delegation), no gate is triggered.
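The gap computation is plain set difference. In this sketch, strings stand in for the real Capability values:

```rust
use std::collections::HashSet;

// The approval gate fires when the requested capabilities are not fully
// covered by the agent's session scope. Capability names are illustrative.
fn gap<'a>(session: &HashSet<&'a str>, requested: &HashSet<&'a str>) -> HashSet<&'a str> {
    requested.difference(session).copied().collect()
}

fn main() {
    let session = HashSet::from(["SecretList", "StatusRead"]);
    let requested = HashSet::from(["SecretRead:production/db-*"]);

    let missing = gap(&session, &requested);
    assert!(!missing.is_empty()); // non-empty gap --> approval gate triggered

    let already_held = gap(&session, &HashSet::from(["StatusRead"]));
    assert!(already_held.is_empty()); // capability already held --> no gate
}
```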

Escalation (Design Intent)

Escalation is the process by which an agent requests elevated capabilities beyond its current session scope.

Escalation Flow

1. Agent detects it needs Capability::Unlock for profile "production"
2. Agent does not hold Unlock in session_scope
3. Agent submits escalation request:
     - Requested: { Unlock }
     - Context: profile "production", reason "scheduled key rotation"
     - Requested TTL: 300s
4. Human operator reviews escalation request
5. Human approves with narrowed scope:
     DelegationGrant {
         delegator: <human-agent-id>,
         scope: { Unlock },
         initial_ttl: 300s,          // 5 minutes, not the 1 hour requested
         heartbeat_interval: 60s,
         nonce: <random>,
         signature: <Ed25519>,
     }
6. Agent's effective scope becomes:
     session_scope.union(granted_scope).intersection(delegator_scope)
7. After 300s, the grant expires and the agent loses Unlock

The human can:

  • Approve with the requested scope and TTL.
  • Approve with a narrower scope or shorter TTL (the human always narrows, never widens).
  • Deny the request.
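Step 6 of the flow above, sketched with string capability names standing in for the real types:

```rust
use std::collections::HashSet;

// The approved grant is unioned into the session scope, then clamped to the
// delegator's own scope so the result can never exceed what the delegator holds.
fn effective<'a>(
    session: &HashSet<&'a str>,
    granted: &HashSet<&'a str>,
    delegator: &HashSet<&'a str>,
) -> HashSet<&'a str> {
    let unioned: HashSet<&str> = session.union(granted).copied().collect();
    unioned.intersection(delegator).copied().collect()
}

fn main() {
    let session = HashSet::from(["SecretList"]);
    let granted = HashSet::from(["Unlock"]);
    let delegator = HashSet::from(["SecretList", "Unlock", "SecretRead"]);

    let scope = effective(&session, &granted, &delegator);
    assert_eq!(scope, HashSet::from(["SecretList", "Unlock"]));
}
```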

Automatic Escalation Policies (Design Intent)

For well-defined, repetitive workflows, policies can pre-authorize escalation without human-in-the-loop:

# config.toml
[[agents.auto_escalation]]
agent_type = "service"
unit = "backup-agent.service"
capabilities = ["secret-read"]
key_pattern = "backup/*"
max_ttl = "1h"
require_device_attestation = true

This pre-authorization avoids interactive approval for routine operations while maintaining the capability lattice’s scope-narrowing invariant.

Audit Trail

All agent actions are attributed in the audit log. The audit entry for any operation includes:

Field | Source
--- | ---
Agent ID | AgentIdentity.id
Agent type | AgentIdentity.agent_type
Delegation chain | AgentIdentity.delegation_chain (full chain from root delegator)
Effective capabilities | AgentIdentity.session_scope at time of operation
Operation | The specific action performed (read, write, delete, unlock, etc.)
Profile | Which trust profile the operation targeted
Timestamp | Operation time
Attestations | Which attestation methods were active

Chain Attribution

For delegated operations, the audit trail records the entire delegation chain:

Audit entry for SecretRead("production/api-key"):
  Agent: agent-01941c8a-... (AI, model_family: "claude-4")
  Delegation chain:
    [0] Human operator-5678 -> DelegationGrant { scope: {SecretRead, SecretList}, ttl: 3600s }
    [1] agent-01941c8a-... (current agent)
  Attestations: [NoiseIK, Delegation]

This provides full provenance: who authorized the AI agent, what scope was granted, and when the delegation expires.

Multi-Party Authorization (Design Intent)

For critical operations (e.g., deleting a production secret, rotating a root key), multi-party authorization requires approval from multiple human operators.

N-of-M Model

A multi-party policy specifies:

  • M – total number of designated approvers.
  • N – minimum number who must approve.
  • Timeout – how long to wait for approvals before the request expires.

Policy for Capability::SecretDelete { key_pattern: "production/*" }:
  Approvers: [operator-A, operator-B, operator-C]  (M = 3)
  Required:  2                                       (N = 2)
  Timeout:   1 hour

Authorization Flow

  1. An agent or human requests a capability that matches a multi-party policy.
  2. All M approvers are notified.
  3. Each approver independently reviews and approves or denies.
  4. When N approvals are collected, a composite DelegationGrant is issued:
    • The scope is the intersection of all approvers’ individual scopes.
    • The initial_ttl is the minimum of all approvers’ specified TTLs.
    • Each approver’s signature is recorded.
  5. If the timeout expires before N approvals are collected, the request is denied.

Multi-Party Attestation

The Attestation::Delegation variant records the delegator’s AgentId and the granted scope. For multi-party authorization, multiple Attestation::Delegation entries appear in the agent’s attestations vector, one per approver.

Trust Vector in Orchestration

The TrustVector (core-types/src/security.rs) provides the quantitative basis for authorization decisions in mixed human-agent workflows:

Dimension | Effect on Orchestration
--- | ---
authn_strength | Higher strength reduces approval gate friction
authz_freshness | Stale authorization triggers re-approval
delegation_depth | Deeper chains require stronger attestations at each link
device_posture | Low posture (no memfd_secret, no TPM) may trigger additional approval requirements
network_exposure | Remote agents (Encrypted, Onion, PublicInternet) face stricter policies than local agents
agent_type | Metadata for policy matching, not a trust tier

Worked Example: AI Copilot Accessing Secrets

  1. A developer invokes an AI copilot to debug a production issue.

  2. The copilot (AgentType::AI, model_family: "claude-4") needs to read a database connection string.

  3. The copilot’s session_scope does not include SecretRead for production/*.

  4. An approval gate fires. The developer receives a prompt:

    Agent "copilot-agent-01941c8a" (AI/claude-4) requests:
      SecretRead { key_pattern: "production/db-connection" }
    Reason: "Debugging connection timeout in production service"
    Approve for 10 minutes? [y/N]
    
  5. The developer approves. A DelegationGrant is issued with initial_ttl: 600s.

  6. The copilot reads the secret. The audit log records the read with the full delegation chain.

  7. After 10 minutes, the grant expires. The copilot can no longer read production secrets.

Mesh Topology

Design Intent. This page describes a peer-to-peer federation mesh where Open Sesame installations synchronize state without a central authority. The identity model (InstallationId, OrganizationNamespace), Noise IK transport (core-ipc), and attestation types (Attestation::RemoteAttestation) exist in the type system today. The gossip protocol, CRDT-based state merging, and convergence guarantees described below are architectural targets.

Overview

A mesh topology connects Open Sesame installations as peers where each node can communicate directly with any other node. There is no central server, certificate authority, or coordinator. Trust is established through mutual Noise IK authentication and device attestation. State convergence is achieved through gossip-based propagation and conflict-free replicated data types (CRDTs).

Trust Establishment

Initial Bootstrap

Two installations establish trust through an out-of-band key exchange:

  1. Key display. Installation A displays its X25519 static public key and InstallationId:

    sesame federation show-identity
    # Output:
    #   Installation: a1b2c3d4-...
    #   Org: braincraft.io
    #   Public key: base64(X25519 static key)
    #   Machine binding: machine-id (verified)
    
  2. Key import. Installation B imports A’s identity:

    sesame federation trust --installation a1b2c3d4-... --pubkey base64(...)
    
  3. Mutual verification. Both installations perform a Noise IK handshake. Each peer verifies the other’s static key matches the imported value. The Attestation::RemoteAttestation type records the result:

    Attestation::RemoteAttestation {
        remote_installation: InstallationId { id: a1b2c3d4-..., ... },
        remote_device_attestation: Box::new(Attestation::DeviceAttestation {
            binding: MachineBinding { binding_hash: [...], binding_type: MachineId },
            verified_at: 1711234567,
        }),
    }

Trust Anchors

Each installation maintains a set of trusted peer identities (public keys + installation IDs). This set is the trust anchor for the mesh. A peer not in the trust set is rejected during Noise IK handshake. Trust anchors can be:

  • Manually established (out-of-band, as described above).
  • Transitively established (A trusts B, B trusts C, A can choose to trust C based on B’s attestation).
  • Organizationally established (all installations in the same OrganizationNamespace share a common trust anchor via policy distribution).

Transitive trust is opt-in and policy-controlled. An installation is never forced to trust a peer it has not explicitly approved or that does not meet its configured policy.

Gossip Protocol (Design Intent)

State changes propagate through the mesh via a gossip protocol.

What is Gossiped

  • Profile metadata. Profile names, IDs, and sync policies for synchronized profiles.
  • Secret updates. Encrypted secret payloads for profiles configured for synchronization.
  • Policy updates. Organization-wide policy changes from /etc/pds/policy.toml.
  • Peer identity. New peer introductions (installation ID + public key) for mesh expansion.
  • Revocation notices. Key revocation and delegation revocation events.

Gossip Mechanics

  1. A node produces a state change (e.g., writes a secret to a synchronized profile).
  2. The node selects a random subset of its known peers (fanout factor, typically 3–5).
  3. The node sends the update to the selected peers over Noise IK connections.
  4. Each receiving peer checks whether the update is new (vector clock comparison). If new, the peer applies the update locally and re-gossips to its own random subset of peers.
  5. Propagation continues until all nodes have seen the update.

Dissemination Guarantees

With a fanout factor of f and n nodes:

  • Expected propagation rounds: O(log_f(n)).
  • For 100 nodes with fanout 3: approximately 5 rounds to reach all peers.
  • Probabilistic guarantee: with sufficient fanout, all nodes receive the update with high probability. Deterministic delivery is not guaranteed per round; convergence is eventual.
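The round estimate follows from the roughly f-fold growth of informed nodes per round. A quick sketch of the arithmetic:

```rust
// With fanout f, the number of informed nodes grows roughly f-fold per round,
// so full coverage takes about ceil(log_f(n)) rounds. This is the expected
// figure, not a deterministic bound.
fn expected_rounds(nodes: f64, fanout: f64) -> u32 {
    (nodes.ln() / fanout.ln()).ceil() as u32
}

fn main() {
    // 100 nodes, fanout 3: log_3(100) ≈ 4.19, so about 5 rounds.
    assert_eq!(expected_rounds(100.0, 3.0), 5);
    // 10 nodes, fanout 3: log_3(10) ≈ 2.10, so about 3 rounds.
    assert_eq!(expected_rounds(10.0, 3.0), 3);
}
```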

CRDT-Based State Merging (Design Intent)

To achieve convergence without coordination, synchronized state uses conflict-free replicated data types.

Secret State as a Map CRDT

Each synchronized profile’s secret store is modeled as a map from secret key to (value, vector clock) pairs:

Profile "work" secrets:
  "shared/api-key" -> (encrypted_value, {install_A: 3, install_B: 2})
  "shared/db-url"  -> (encrypted_value, {install_A: 1})

The CRDT merge rule:

  • Concurrent writes to the same key. Last-writer-wins based on wall-clock timestamp, with installation ID as tiebreaker. The losing value is preserved in a conflict log.
  • Non-conflicting writes. Different keys or causally ordered writes merge automatically with no conflict.
  • Deletes. A tombstone entry replaces the value. Tombstones are retained for a configurable duration (default: 30 days) to ensure propagation across partitioned nodes.
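The last-writer-wins rule for concurrent writes can be sketched as a merge function. Note that it is commutative and idempotent, which is what makes the map a CRDT; field names here are illustrative:

```rust
// Later wall-clock timestamp wins; installation ID breaks ties so that every
// node makes the same choice regardless of the order updates arrive in.
#[derive(Clone, PartialEq, Debug)]
struct Versioned {
    value: String,
    timestamp: u64,       // wall-clock seconds
    installation: String, // tiebreaker for identical timestamps
}

fn merge(a: Versioned, b: Versioned) -> Versioned {
    if (a.timestamp, &a.installation) >= (b.timestamp, &b.installation) {
        a
    } else {
        b
    }
}

fn main() {
    let a = Versioned { value: "v1".into(), timestamp: 100, installation: "install_A".into() };
    let b = Versioned { value: "v2".into(), timestamp: 200, installation: "install_B".into() };
    assert_eq!(merge(a.clone(), b.clone()).value, "v2"); // later write wins
    assert_eq!(merge(b.clone(), a).value, "v2");         // commutative: order-independent
    assert_eq!(merge(b.clone(), b.clone()).value, "v2"); // idempotent
}
```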

Convergence Properties

Property | Guarantee
--- | ---
Eventual consistency | All connected peers converge to the same state
Commutativity | Updates can be applied in any order
Idempotency | Re-applying an update has no additional effect
Partition tolerance | Nodes operate independently during partitions

Partition Tolerance

During Partition

Partitioned nodes continue operating independently. Each node:

  • Reads and writes its local vault without restriction.
  • Records all operations in its local audit chain.
  • Queues outgoing gossip messages for delivery when connectivity is restored.

On Reconnect

When a partitioned node reconnects:

  1. The node exchanges vector clocks with peers to identify divergence.
  2. Missing updates are transferred in both directions.
  3. Conflicts (concurrent writes to the same key) are resolved per the CRDT merge rule.
  4. Audit chains from both sides are cross-referenced.

Convergence Verification

After reconnection, nodes can verify convergence:

sesame federation verify-convergence --profile work

This compares the local state hash with hashes reported by peers. A mismatch indicates an update still in transit or an unresolved conflict.

Peer Discovery and Mesh Expansion

Manual Peer Addition

sesame federation trust --installation <uuid> --pubkey <base64>

Organization-Scoped Discovery (Design Intent)

Within an OrganizationNamespace, peer discovery can be bootstrapped from a shared configuration distributed via policy:

# /etc/pds/policy.toml
[[policy]]
key = "federation.bootstrap_peers"
value = [
    { installation = "a1b2c3d4-...", pubkey = "base64(...)" },
    { installation = "e5f6a7b8-...", pubkey = "base64(...)" },
]
source = "enterprise-fleet-management"

New installations in the org automatically discover existing peers from the bootstrap list. Each peer still performs mutual Noise IK authentication before sharing state.

Peer Removal

Removing a peer from the trust set:

sesame federation untrust --installation <uuid>

The removed peer’s public key is revoked. Gossip messages from the revoked peer are rejected. Secrets previously shared with the revoked peer remain encrypted in local vaults; they are not retroactively expunged from the peer’s copy.

Security Properties

No Central Authority

There is no CA, no coordinator, and no single point of compromise. Compromising one node does not grant access to other nodes’ local-only secrets. Synchronized secrets are limited to what was explicitly configured for sync.

Forward Secrecy

Each Noise IK connection uses ephemeral keys, providing forward secrecy per session. Compromising a node’s static key does not compromise past session traffic (though it does compromise future sessions until the key is rotated and the old key revoked in peers’ trust sets).

Minimum Peer Crypto Profile

The CryptoConfig.minimum_peer_profile field (core-types/src/crypto.rs) enforces a floor on the cryptographic algorithms a peer must use:

Local: LeadingEdge (Argon2id, BLAKE3, ChaChaPoly, BLAKE2s)
Peer:  GovernanceCompatible (PBKDF2-SHA256, HKDF-SHA256, AES-GCM, SHA-256)

If minimum_peer_profile = LeadingEdge:
  --> Peer rejected (does not meet minimum)

If minimum_peer_profile = GovernanceCompatible:
  --> Peer accepted

This prevents a mesh node with weak cryptographic configuration from weakening the overall mesh security posture.

Extension System

Open Sesame provides a WASM-based extension system composed of two crates: extension-host (the runtime) and extension-sdk (the authoring toolkit). Extensions are distributed as OCI artifacts.

Current Implementation Status

Both crates are in early scaffolding phase. The extension-host crate declares its module-level documentation and dependency structure but contains no runtime logic. The extension-sdk crate declares its module-level documentation and enforces #![forbid(unsafe_code)] but contains no bindings or type definitions beyond the crate root. The architectural contracts (crate boundaries, dependency selections, WIT/OCI integration points) are established; functional implementation is pending.

extension-host

The extension-host crate provides the Wasmtime-backed runtime for executing WASM component model extensions with capability-based sandboxing. Each extension runs in its own Store with capabilities enforced from its manifest.

Dependencies

Crate | Purpose
--- | ---
wasmtime | WebAssembly runtime engine with component model support
wasmtime-wasi | WASI preview 2 implementation for Wasmtime
core-types | Shared types for the extension/host boundary
core-config | Configuration loading for extension manifests
core-ipc | IPC bus client for extension-to-daemon communication
extension-sdk | Shared type definitions between host and guest
tokio | Async runtime for extension lifecycle management
anyhow | Error handling for Wasmtime operations

Planned Architecture

Based on the crate’s declared dependencies and documentation, the extension host is designed around these components:

  • Wasmtime engine with pooling allocator: The wasmtime dependency provides the core WebAssembly execution engine. Pooling allocation pre-allocates memory slots for extension instances, reducing per-instantiation overhead.
  • WASI component model: The wasmtime-wasi dependency provides WASI preview 2 support, giving extensions controlled access to filesystem, networking, clocks, and random number generation through capability handles.
  • Capability sandbox: Each extension’s Store is configured with capabilities declared in its manifest. Extensions cannot access resources beyond what their manifest declares.
  • IPC bus integration: The core-ipc dependency allows extensions to communicate with daemon processes over the Noise IK encrypted bus, subject to clearance checks.

extension-sdk

The extension-sdk crate provides the types, host function bindings, and WIT interface definitions that extension authors use to build WASM component model extensions targeting the extension host.

Dependencies

Crate | Purpose
--- | ---
core-types | Shared types for the extension/host boundary
wit-bindgen | Code generation from WIT (WebAssembly Interface Type) definitions
serde | Serialization for extension configuration and data exchange

WIT Bindings

The wit-bindgen dependency generates Rust bindings from WIT interface definitions. WIT defines the contract between extensions (guests) and the extension host: what functions the host exports to extensions, what functions extensions must implement, and the types exchanged across the boundary. The SDK crate enforces #![forbid(unsafe_code)] – all unsafe operations are confined to the generated bindings and the host runtime.

OCI Distribution

Extensions are packaged and distributed as OCI (Open Container Initiative) artifacts. The OciReference type in core-types (defined in core-types/src/oci.rs) provides the addressing scheme:

registry/principal/scope:revision[@provenance]

Examples:

  • registry.example.com/org/extension:1.0.0
  • registry.example.com/org/extension:1.0.0@sha256:abc123

The OciReference type parses and validates this format, requiring at least three path segments (registry, principal, scope), a non-empty revision after :, and an optional provenance hash after @. It implements FromStr, Display, Serialize, and Deserialize.
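The splitting logic implied by this format can be sketched as follows. The real parser in core-types/src/oci.rs may differ in detail; the parse helper and its tuple return type here are illustrative:

```rust
// Splits registry/principal/scope:revision[@provenance] into its parts.
// Returns None when the format constraints described above are violated.
fn parse(input: &str) -> Option<(String, String, String, String, Option<String>)> {
    // The optional provenance hash follows '@' and may itself contain ':'.
    let (path_and_rev, provenance) = match input.split_once('@') {
        Some((head, prov)) => (head, Some(prov.to_string())),
        None => (input, None),
    };
    let (path, revision) = path_and_rev.split_once(':')?;
    if revision.is_empty() {
        return None; // revision after ':' must be non-empty
    }
    let segments: Vec<&str> = path.split('/').collect();
    if segments.len() < 3 {
        return None; // registry, principal, and scope are all required
    }
    let registry = segments[0].to_string();
    let principal = segments[1].to_string();
    let scope = segments[2..].join("/"); // scope may be nested
    Some((registry, principal, scope, revision.to_string(), provenance))
}

fn main() {
    let (reg, principal, scope, rev, prov) =
        parse("registry.example.com/org/extension:1.0.0@sha256:abc123").unwrap();
    assert_eq!(reg, "registry.example.com");
    assert_eq!(principal, "org");
    assert_eq!(scope, "extension");
    assert_eq!(rev, "1.0.0");
    assert_eq!(prov.as_deref(), Some("sha256:abc123"));
    assert!(parse("registry.example.com/extension:1.0.0").is_none()); // too few segments
}
```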

The OCI distribution model allows extensions to be:

  • Published to any OCI-compliant registry (Docker Hub, GitHub Container Registry, self-hosted registries).
  • Content-addressed via the optional provenance field for integrity verification.
  • Version-pinned via the revision field for reproducible deployments.
  • Scoped by principal (organization/user) and scope (extension name) for namespace isolation.

Extension Lifecycle

The planned extension lifecycle follows five phases:

  1. Discover: Resolve an OciReference to a registry, pull the extension artifact, and verify its provenance hash if present.
  2. Load: Parse the WASM component from the artifact. Validate the component’s WIT imports against what the host can provide.
  3. Sandbox: Create a Wasmtime Store with WASI capabilities scoped to the extension’s manifest. Configure resource limits (memory, fuel/instruction count, file descriptors).
  4. Execute: Instantiate the component in the store. Call the extension’s exported entry points. The extension communicates with daemons via host-provided IPC functions.
  5. Teardown: Drop the store, releasing all resources. The pooling allocator reclaims the memory slot for reuse.

Extension Host Capabilities

The extension-host crate provides the Wasmtime-backed runtime that executes WASM component model extensions. Each extension runs in an isolated Store with capabilities enforced from its manifest.

Current Implementation Status

As of this writing, the extension host is scaffolded but not fully wired into the daemon runtime. The crate declares dependencies on wasmtime, wasmtime-wasi, core-types, core-config, core-ipc, and extension-sdk. The public module (extension-host/src/lib.rs) contains documentation comments describing the intended architecture but no exported functions or types yet. The sections below describe the design that these crates are being built toward.

Wasmtime Runtime Configuration

The extension host uses Wasmtime as its WebAssembly runtime. The planned configuration includes:

  • Cranelift compiler backend – Wasmtime’s default optimizing compiler. Extensions are compiled ahead of time on first load, then cached.
  • Component model – Extensions are WASM components (not core modules). This enables typed interfaces via WIT and structured capability passing.
  • Pooling allocator – When multiple extensions run concurrently, the pooling instance allocator pre-reserves virtual address space for all instances, avoiding per-instantiation mmap overhead. Configuration parameters (instance count, memory pages, table elements) are derived from the extension manifest’s declared resource limits.

WASI Sandbox

Each extension Store is configured with a WASI context that restricts what the guest can access. The sandbox follows a deny-by-default model:

Resource | Default | With Capability Grant
--- | --- | ---
Filesystem read | Denied | Scoped to declared directories
Filesystem write | Denied | Scoped to declared directories
Network sockets | Denied | Denied (no current grant path)
Environment variables | Denied | Filtered set from manifest
Clock (monotonic) | Allowed | Allowed
Clock (wall) | Allowed | Allowed
Random (CSPRNG) | Allowed | Allowed
stdin/stdout/stderr | Redirected to host log | Redirected to host log

Extensions cannot access the host filesystem, network, or other extensions’ memory unless the host explicitly grants a capability through the WIT interface.

Capability Grants

The host exposes functionality to extensions through WIT-defined interfaces. An extension’s manifest declares which capabilities it requires; the host validates these at load time and links only the granted imports.

Planned capability categories:

  • secret-read – Read a named secret from the active vault (routed through daemon-secrets via IPC). The extension never receives the vault master key.
  • secret-write – Store or update a secret. Requires explicit user approval on first use.
  • config-read – Read configuration values from core-config.
  • ipc-publish – Publish an EventKind message to the IPC bus at the extension’s clearance level.
  • clipboard-write – Write to the clipboard via daemon-clipboard.
  • notification – Display a desktop notification.

Each capability is a separate WIT interface. An extension that declares secret-read but not secret-write receives a linker that provides only the read import; the write import is never added, so a component whose world requires it fails at instantiation time rather than gaining silent access.
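In terms of Wasmtime's component Linker, the grant check could look like the following sketch (HostState and the per-capability add_to_linker functions stand in for bindings the WIT definitions would generate):

```rust
use wasmtime::component::Linker;

fn link_granted(engine: &wasmtime::Engine, grants: &[&str]) -> Linker<HostState> {
    let mut linker: Linker<HostState> = Linker::new(engine);
    for cap in grants {
        match *cap {
            // Each arm wires up exactly one WIT interface.
            "secret-read" => secret_read::add_to_linker(&mut linker, |s| s).unwrap(),
            "config-read" => config_read::add_to_linker(&mut linker, |s| s).unwrap(),
            other => eprintln!("unknown capability ignored: {other}"),
        }
    }
    // Capabilities absent from `grants` are never added, so a component
    // whose world imports them cannot be instantiated.
    linker
}
```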

Resource Limits

The host enforces per-extension resource limits to prevent a misbehaving extension from affecting system stability:

  • Memory – Maximum linear memory size, configured in the pooling allocator. The default is 64 MiB per extension instance.
  • Fuel (CPU) – Wasmtime’s fuel metering limits the number of instructions an extension can execute per invocation. When fuel is exhausted, the call traps with a deterministic error.
  • Table elements – Maximum number of indirect function table entries.
  • Instances – Maximum number of concurrent component instances across all loaded extensions.
  • Execution timeout – A wall-clock deadline per invocation. Implemented via Wasmtime’s epoch interruption mechanism: the host increments the epoch on a timer, and each Store is configured with a maximum epoch delta.

These limits are declared in the extension manifest and validated against system-wide maximums set in core-config.
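The translation from manifest values to runtime parameters is simple arithmetic plus a clamp against the system-wide maximums. A self-contained sketch (field and parameter names are illustrative, not the final manifest schema):

```rust
#[derive(Debug, PartialEq)]
struct RuntimeLimits {
    max_memory_bytes: u64, // pooling allocator ceiling for linear memory
    max_fuel: u64,         // instructions allowed per invocation
    epoch_deadline: u64,   // epoch ticks before an invocation is interrupted
}

/// Derive runtime limits from manifest values, clamped to system maximums.
/// `epoch_tick_ms` is the interval at which the host timer bumps the epoch.
fn derive_limits(
    manifest_memory_mib: u64,
    manifest_fuel: u64,
    timeout_ms: u64,
    system_max_memory_mib: u64,
    epoch_tick_ms: u64,
) -> RuntimeLimits {
    RuntimeLimits {
        max_memory_bytes: manifest_memory_mib.min(system_max_memory_mib) * 1024 * 1024,
        max_fuel: manifest_fuel,
        // Round up so short timeouts still receive at least one epoch tick.
        epoch_deadline: timeout_ms.div_ceil(epoch_tick_ms).max(1),
    }
}

fn main() {
    let limits = derive_limits(16, 1_000_000, 250, 64, 100);
    assert_eq!(limits.epoch_deadline, 3); // 250 ms of budget at 100 ms ticks
    println!("{limits:?}");
}
```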

Example Extension

This page walks through creating a WASM component model extension for Open Sesame, from WIT interface definition through OCI packaging. Because the extension runtime is not yet fully wired, sections that describe design intent rather than working code are marked accordingly.

Prerequisites

  • Rust toolchain with the wasm32-wasip2 target:

    rustup target add wasm32-wasip2
    
  • wasm-tools for component composition:

    cargo install wasm-tools
    
  • An OCI-compatible registry (e.g., ghcr.io) for publishing.

Step 1: Define the WIT Interface

Design intent. No .wit files ship in the repository yet. The extension-sdk crate will provide canonical WIT definitions; what follows is the planned schema.

Create a wit/ directory with the extension’s world:

// wit/world.wit
package open-sesame:example@0.1.0;

world greeting {
  import open-sesame:host/config-read@0.1.0;
  export greet: func(name: string) -> string;
}

The import line declares that this extension requires the config-read capability from the host. The export line declares the function the host will call.

Step 2: Implement the Guest in Rust

Create a new crate:

cargo new --lib greeting-extension
cd greeting-extension

Add dependencies to Cargo.toml:

[package]
name = "greeting-extension"
version = "0.1.0"
edition = "2021"

[lib]
crate-type = ["cdylib"]

[dependencies]
wit-bindgen = "0.41"

The extension-sdk crate (extension-sdk/Cargo.toml) depends on wit-bindgen for generating Rust bindings from WIT definitions. Guest code uses the wit_bindgen::generate! macro:

// src/lib.rs
wit_bindgen::generate!({
    world: "greeting",
    path: "../wit",
});

struct Component;

impl Guest for Component {
    fn greet(name: String) -> String {
        // Read a greeting template from config (host-provided import).
        let template = open_sesame::host::config_read::get("greeting.template")
            .unwrap_or_else(|| "Hello, {}!".to_string());
        template.replace("{}", &name)
    }
}

export!(Component);

Step 3: Build the WASM Component

Compile for the component target:

cargo build --target wasm32-wasip2 --release

With wasm32-wasip2, rustc emits a WASM component directly, so no conversion step is needed; wasm-tools component new (together with a WASI adapter) is only required when building for the older wasm32-wasip1 target. Copy the artifact to a stable name:

cp target/wasm32-wasip2/release/greeting_extension.wasm \
  greeting.component.wasm

The resulting greeting.component.wasm is a self-describing component that declares its imports and exports in the component model type system.
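To confirm the build produced what the host expects, wasm-tools can print the component's inferred WIT, which should show the config-read import and the greet export:

```shell
wasm-tools component wit greeting.component.wasm
```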

Step 4: Package as an OCI Artifact

Design intent. OCI distribution is defined in core-types/src/oci.rs as OciReference but the pull/push workflow is not yet implemented.

Open Sesame identifies extensions by OCI references with the format:

registry/principal/scope:revision[@provenance]

For example:

ghcr.io/my-org/greeting-extension:0.1.0@sha256:abcdef1234567890

The OciReference struct parses this into five fields:

Field        Example                     Required
registry     ghcr.io                     Yes
principal    my-org                      Yes
scope        greeting-extension          Yes
revision     0.1.0                       Yes
provenance   sha256:abcdef1234567890     No
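The split into those five fields can be sketched with plain string operations (a simplified stand-in for the real OciReference parser; it ignores edge cases such as registries with port numbers):

```rust
#[derive(Debug, PartialEq)]
struct OciRef {
    registry: String,
    principal: String,
    scope: String,
    revision: String,
    provenance: Option<String>,
}

fn parse_oci(input: &str) -> Option<OciRef> {
    // Optional provenance follows the first '@'.
    let (rest, provenance) = match input.split_once('@') {
        Some((r, p)) => (r, Some(p.to_string())),
        None => (input, None),
    };
    // The revision follows the last ':' of what remains.
    let (path, revision) = rest.rsplit_once(':')?;
    // The path must be exactly registry/principal/scope.
    let mut parts = path.split('/');
    let (registry, principal, scope) = (parts.next()?, parts.next()?, parts.next()?);
    if parts.next().is_some() || registry.is_empty() || revision.is_empty() {
        return None;
    }
    Some(OciRef {
        registry: registry.to_string(),
        principal: principal.to_string(),
        scope: scope.to_string(),
        revision: revision.to_string(),
        provenance,
    })
}

fn main() {
    let r = parse_oci("ghcr.io/my-org/greeting-extension:0.1.0@sha256:abcdef1234567890")
        .expect("valid reference");
    assert_eq!(r.revision, "0.1.0");
    println!("{r:?}");
}
```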

Push the component to a registry using an OCI-compatible tool:

oras push ghcr.io/my-org/greeting-extension:0.1.0 \
  greeting.component.wasm:application/vnd.wasm.component.v1+wasm

Step 5: Write the Extension Manifest

Design intent. The manifest schema is not yet finalized.

The extension manifest declares metadata, capabilities, and resource limits:

[extension]
name = "greeting-extension"
version = "0.1.0"
oci = "ghcr.io/my-org/greeting-extension:0.1.0"

[capabilities]
config-read = true

[limits]
max_memory_mib = 16
max_fuel = 1_000_000

Place this file in ~/.config/pds/extensions/greeting-extension.toml.

Step 6: Load and Test

Design intent. The CLI subcommand for extension management is not yet implemented.

Once the extension host runtime is wired, the intended workflow:

# Install from OCI
sesame extension install ghcr.io/my-org/greeting-extension:0.1.0

# List installed extensions
sesame extension list

# Invoke directly for testing
sesame extension call greeting-extension greet "World"

Testing During Development

Until the full extension host is available, extensions can be tested with standalone Wasmtime:

wasmtime run --wasm component-model \
  --invoke 'greet("World")' greeting.component.wasm

Flags must precede the component path; recent Wasmtime versions take a WAVE-style call expression for --invoke on components, so the exact syntax varies with the installed version.

For Rust-level unit tests, the extension-sdk crate includes proptest as a dev-dependency for property-based testing of WIT type serialization.

Authentication Backends

The core-auth crate defines a pluggable authentication system for vault unlock. Each authentication factor (password, SSH agent, hardware token) is implemented as a struct that implements the VaultAuthBackend trait. This page describes how to implement a new backend.

The VaultAuthBackend Trait

The trait is defined in core-auth/src/backend.rs. A backend must implement all of the following methods:

#[async_trait]
pub trait VaultAuthBackend: Send + Sync {
    fn factor_id(&self) -> AuthFactorId;
    fn name(&self) -> &str;
    fn backend_id(&self) -> &str;
    fn is_enrolled(&self, profile: &TrustProfileName, config_dir: &Path) -> bool;
    async fn can_unlock(&self, profile: &TrustProfileName, config_dir: &Path) -> bool;
    fn requires_interaction(&self) -> AuthInteraction;
    async fn unlock(
        &self,
        profile: &TrustProfileName,
        config_dir: &Path,
        salt: &[u8],
    ) -> Result<UnlockOutcome, AuthError>;
    async fn enroll(
        &self,
        profile: &TrustProfileName,
        master_key: &SecureBytes,
        config_dir: &Path,
        salt: &[u8],
        selected_key_index: Option<usize>,
    ) -> Result<(), AuthError>;
    async fn revoke(
        &self, profile: &TrustProfileName, config_dir: &Path,
    ) -> Result<(), AuthError>;
}

Method Descriptions

factor_id()

Returns the AuthFactorId enum variant that identifies this factor. Used in policy evaluation and audit logging.

name()

Human-readable name for audit logs and overlay display (e.g., "SSH Agent", "FIDO2 Token").

backend_id()

Short machine-readable identifier for IPC messages and configuration files (e.g., "ssh-agent", "fido2").

is_enrolled(profile, config_dir)

Synchronous check for whether enrollment data exists for the given profile. It may stat or read small files under config_dir, but must not perform I/O that could block for long (no network or device access).

can_unlock(profile, config_dir)

Asynchronous readiness check. Returns true if the backend can currently perform an unlock. Must complete in under 100 ms. For example, an SSH agent backend checks whether SSH_AUTH_SOCK is set and the agent is reachable; a FIDO2 backend checks whether a token is plugged in.

requires_interaction()

Returns an AuthInteraction variant:

  • AuthInteraction::None – No user interaction needed (SSH software key, TPM, OS keyring).
  • AuthInteraction::PasswordEntry – Keyboard input required.
  • AuthInteraction::HardwareTouch – Physical touch on a hardware device.

unlock(profile, config_dir, salt)

The core unlock operation. Derives or unwraps the master key and returns an UnlockOutcome.

enroll(profile, master_key, config_dir, salt, selected_key_index)

Enrolls this backend for a profile. Receives the master key so the backend can wrap or encrypt it for later retrieval. selected_key_index optionally specifies which eligible key to use (e.g., which SSH key from the agent).

revoke(profile, config_dir)

Removes enrollment data for this backend from the profile.

UnlockOutcome

A successful unlock() call returns:

pub struct UnlockOutcome {
    pub master_key: SecureBytes,
    pub audit_metadata: BTreeMap<String, String>,
    pub ipc_strategy: IpcUnlockStrategy,
    pub factor_id: AuthFactorId,
}

  • master_key – The 32-byte master key (for DirectMasterKey strategy) or password bytes (for PasswordUnlock strategy). Held in SecureBytes, which is zeroized on drop.
  • audit_metadata – Key-value pairs for audit logging (e.g., "key_fingerprint" => "SHA256:...", "key_comment" => "user@host").
  • ipc_strategy – Determines which IPC message type carries the key to daemon-secrets:
    • IpcUnlockStrategy::PasswordUnlock – daemon-secrets performs the KDF.
    • IpcUnlockStrategy::DirectMasterKey – The master key is pre-derived; daemon-secrets uses it directly.
  • factor_id – Echoes back the factor identifier for correlation.

FactorContribution

The FactorContribution enum determines how a backend’s output participates in multi-factor composition:

  • CompleteMasterKey – This backend produces a complete, independently valid master key. Used in Any mode (any single factor suffices) and in Policy mode where individual factors can stand alone.
  • FactorPiece – This backend produces one piece of a combined key. Used in All mode, where the final master key is derived via HKDF from all factor pieces concatenated.

Backends that unwrap an encrypted copy of the master key (SSH agent, FIDO2 with hmac-secret) should use CompleteMasterKey. Backends that contribute entropy toward a combined derivation (e.g., a partial PIN) should use FactorPiece.
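The All-mode combination can be sketched with the hkdf and sha2 crates (the info label, piece ordering, and helper name are assumptions; the real derivation lives in core-crypto):

```rust
use hkdf::Hkdf;
use sha2::Sha256;

/// Combine factor pieces into one 32-byte master key via HKDF.
/// Pieces must be concatenated in a stable, policy-defined order.
fn combine_factor_pieces(salt: &[u8], pieces: &[&[u8]]) -> [u8; 32] {
    let ikm: Vec<u8> = pieces.concat();
    let hk = Hkdf::<Sha256>::new(Some(salt), &ikm);
    let mut master_key = [0u8; 32];
    hk.expand(b"open-sesame/all-mode/v1", &mut master_key)
        .expect("32 bytes is a valid HKDF-SHA256 output length");
    master_key
}
```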

Registration with AuthDispatcher

After implementing the trait, register the backend with the AuthDispatcher:

let fido2_backend = Fido2Backend::new(/* config */);
dispatcher.register(Box::new(fido2_backend));

The dispatcher iterates registered backends during unlock, filtering by enrollment status and the active vault’s auth policy.

VaultMetadata Integration

Enrollment data is persisted alongside the vault’s VaultMetadata. Each backend is responsible for writing its own enrollment artifacts under config_dir/profiles/<profile>/auth/<backend_id>/. The format is backend-specific; common patterns include:

  • A wrapped (encrypted) copy of the master key.
  • A credential ID or public key for verification during unlock.
  • Parameters for key derivation (iteration count, algorithm identifiers).

The is_enrolled() method checks for the existence and validity of these artifacts.

Example: Skeleton FIDO2 Backend

The following skeleton illustrates the structure of a hypothetical FIDO2 backend. It does not compile as-is; it shows the trait method signatures and their responsibilities.

use core_auth::{
    AuthError, AuthInteraction, FactorContribution, IpcUnlockStrategy,
    UnlockOutcome, VaultAuthBackend,
};
use core_crypto::SecureBytes;
use core_types::{AuthFactorId, TrustProfileName};
use std::collections::BTreeMap;
use std::path::Path;

pub struct Fido2Backend {
    // Configuration: acceptable authenticator AAGUIDs, timeout, etc.
}

#[async_trait::async_trait]
impl VaultAuthBackend for Fido2Backend {
    fn factor_id(&self) -> AuthFactorId {
        AuthFactorId::Fido2
    }

    fn name(&self) -> &str {
        "FIDO2 Token"
    }

    fn backend_id(&self) -> &str {
        "fido2"
    }

    fn is_enrolled(&self, profile: &TrustProfileName, config_dir: &Path) -> bool {
        let cred_path = config_dir
            .join("profiles")
            .join(profile.as_str())
            .join("auth/fido2/credential.json");
        cred_path.exists()
    }

    async fn can_unlock(&self, _profile: &TrustProfileName, _config_dir: &Path) -> bool {
        // Check if a FIDO2 authenticator is available via platform API.
        // Must return within 100 ms.
        check_authenticator_present().await
    }

    fn requires_interaction(&self) -> AuthInteraction {
        AuthInteraction::HardwareTouch
    }

    async fn unlock(
        &self,
        profile: &TrustProfileName,
        config_dir: &Path,
        salt: &[u8],
    ) -> Result<UnlockOutcome, AuthError> {
        // 1. Load credential ID from enrollment data.
        let cred = load_credential(profile, config_dir)?;

        // 2. Perform FIDO2 assertion with hmac-secret extension.
        //    This requires user touch on the authenticator.
        let hmac_secret = perform_assertion(&cred, salt).await?;

        // 3. Use the hmac-secret output to unwrap the stored master key.
        let wrapped_key = load_wrapped_key(profile, config_dir)?;
        let master_key = unwrap_master_key(&wrapped_key, &hmac_secret)?;

        Ok(UnlockOutcome {
            master_key,
            audit_metadata: BTreeMap::from([
                ("credential_id".into(), hex::encode(&cred.id)),
                ("authenticator_aaguid".into(), cred.aaguid.to_string()),
            ]),
            ipc_strategy: IpcUnlockStrategy::DirectMasterKey,
            factor_id: AuthFactorId::Fido2,
        })
    }

    async fn enroll(
        &self,
        profile: &TrustProfileName,
        master_key: &SecureBytes,
        config_dir: &Path,
        salt: &[u8],
        _selected_key_index: Option<usize>,
    ) -> Result<(), AuthError> {
        // 1. Perform FIDO2 credential creation (MakeCredential).
        // 2. Use hmac-secret extension to derive a wrapping key.
        // 3. Wrap the master_key with the derived wrapping key.
        // 4. Persist credential ID + wrapped key under config_dir.
        Ok(())
    }

    async fn revoke(
        &self,
        profile: &TrustProfileName,
        config_dir: &Path,
    ) -> Result<(), AuthError> {
        let auth_dir = config_dir
            .join("profiles")
            .join(profile.as_str())
            .join("auth/fido2");
        if auth_dir.exists() {
            std::fs::remove_dir_all(&auth_dir)
                .map_err(|e| AuthError::Io(e.to_string()))?;
        }
        Ok(())
    }
}

Testing a New Backend

A backend implementation should verify the following:

  1. Enrollment round-trip – Enroll with a known master key, then confirm is_enrolled() returns true and the enrollment artifacts exist on disk.

  2. Unlock round-trip – After enrollment, call unlock() and verify the returned master_key matches the original.

  3. Wrong-key rejection – Tamper with enrollment data or use a different salt, and verify unlock() returns AuthError.

  4. Revocation – Call revoke(), confirm is_enrolled() returns false, and confirm the enrollment directory is removed.

  5. Readiness check – Verify can_unlock() returns false when the backing resource is unavailable (e.g., no SSH agent socket, no FIDO2 token connected).

  6. Interaction declaration – Verify requires_interaction() returns the correct variant. The unlock UX uses this to decide whether to show a password prompt or a “touch your token” message.
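The first four checks can be exercised with a toy backend that XOR-"wraps" the key in a temporary directory (std-only; XOR is not real cryptography, and these free functions merely mirror the trait methods for illustration):

```rust
use std::fs;
use std::path::{Path, PathBuf};

fn xor(data: &[u8], key: &[u8]) -> Vec<u8> {
    data.iter().zip(key.iter().cycle()).map(|(a, b)| a ^ b).collect()
}

fn enroll(dir: &Path, master_key: &[u8], wrap: &[u8]) -> std::io::Result<()> {
    fs::create_dir_all(dir)?;
    fs::write(dir.join("wrapped.key"), xor(master_key, wrap))
}

fn is_enrolled(dir: &Path) -> bool {
    dir.join("wrapped.key").exists()
}

fn unlock(dir: &Path, wrap: &[u8]) -> std::io::Result<Vec<u8>> {
    Ok(xor(&fs::read(dir.join("wrapped.key"))?, wrap))
}

fn revoke(dir: &Path) -> std::io::Result<()> {
    if dir.exists() {
        fs::remove_dir_all(dir)?;
    }
    Ok(())
}

fn main() -> std::io::Result<()> {
    let dir: PathBuf = std::env::temp_dir().join("os-auth-demo/profiles/default/auth/toy");
    let master_key = b"0123456789abcdef0123456789abcdef";

    enroll(&dir, master_key, b"wrapping-secret")?; // 1. enrollment round-trip
    assert!(is_enrolled(&dir));
    assert_eq!(unlock(&dir, b"wrapping-secret")?, master_key); // 2. unlock round-trip
    assert_ne!(unlock(&dir, b"a-different-key")?, master_key); // 3. wrong-key rejection
    revoke(&dir)?; // 4. revocation
    assert!(!is_enrolled(&dir));
    Ok(())
}
```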

Adding Platform Backends

This page describes how to add a new operating system backend or a new compositor backend within an existing platform crate.

Platform Crate Structure

Open Sesame uses one platform crate per operating system:

Crate              Target                  Status
platform-linux     target_os = "linux"     Implemented: compositor backends, evdev input, D-Bus, systemd, Landlock/seccomp sandbox
platform-macos     target_os = "macos"     Scaffolded: module declarations with no functional code
platform-windows   target_os = "windows"   Scaffolded: module declarations with no functional code

Each crate compiles as an empty library on non-target platforms. All public modules are gated with #[cfg(target_os = "...")]. Platform crates contain no business logic – they provide safe Rust abstractions consumed by daemon crates.

Compositor Trait and Factory Pattern

The platform-linux crate demonstrates the reference pattern for abstracting over multiple backends within a single platform.

The Trait

The CompositorBackend trait in platform-linux/src/compositor.rs defines the interface:

pub trait CompositorBackend: Send + Sync {
    fn list_windows(&self) -> BoxFuture<'_, core_types::Result<Vec<Window>>>;
    fn list_workspaces(&self) -> BoxFuture<'_, core_types::Result<Vec<Workspace>>>;
    fn activate_window(&self, id: &WindowId) -> BoxFuture<'_, core_types::Result<()>>;
    fn set_window_geometry(&self, id: &WindowId, geom: &Geometry)
        -> BoxFuture<'_, core_types::Result<()>>;
    fn move_to_workspace(&self, id: &WindowId, ws: &CompositorWorkspaceId)
        -> BoxFuture<'_, core_types::Result<()>>;
    fn focus_window(&self, id: &WindowId) -> BoxFuture<'_, core_types::Result<()>>;
    fn close_window(&self, id: &WindowId) -> BoxFuture<'_, core_types::Result<()>>;
    fn name(&self) -> &str;
}

Methods return BoxFuture (Pin<Box<dyn Future<Output = T> + Send>>) instead of using async fn in the trait. This is required for dyn-compatibility – the factory function returns Box<dyn CompositorBackend> for runtime backend selection.
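A self-contained illustration of the pattern, using a trimmed-down trait rather than the real CompositorBackend (the minimal no-op waker exists only so the demo future can be polled without an async runtime):

```rust
use std::future::Future;
use std::pin::Pin;
use std::task::{Context, Poll, RawWaker, RawWakerVTable, Waker};

type BoxFuture<'a, T> = Pin<Box<dyn Future<Output = T> + Send + 'a>>;

// `async fn` in traits is not dyn-compatible; boxed futures are.
trait Backend: Send + Sync {
    fn name(&self) -> &str;
    fn list_windows(&self) -> BoxFuture<'_, Vec<String>>;
}

struct DummyBackend;

impl Backend for DummyBackend {
    fn name(&self) -> &str { "dummy" }
    fn list_windows(&self) -> BoxFuture<'_, Vec<String>> {
        Box::pin(async { vec!["terminal".to_string(), "editor".to_string()] })
    }
}

// Runtime backend selection works because Box<dyn Backend> is a valid type.
fn detect() -> Box<dyn Backend> { Box::new(DummyBackend) }

// Minimal executor for futures that are immediately ready (demo only).
fn poll_ready<T>(mut fut: BoxFuture<'_, T>) -> T {
    fn clone(p: *const ()) -> RawWaker { RawWaker::new(p, &VTABLE) }
    fn noop(_: *const ()) {}
    static VTABLE: RawWakerVTable = RawWakerVTable::new(clone, noop, noop, noop);
    let waker = unsafe { Waker::from_raw(RawWaker::new(std::ptr::null(), &VTABLE)) };
    let mut cx = Context::from_waker(&waker);
    match fut.as_mut().poll(&mut cx) {
        Poll::Ready(v) => v,
        Poll::Pending => unreachable!("demo future is always ready"),
    }
}

fn main() {
    let backend = detect();
    assert_eq!(backend.name(), "dummy");
    assert_eq!(poll_ready(backend.list_windows()).len(), 2);
}
```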

The Factory

detect_compositor() probes the runtime environment and returns the appropriate backend:

pub fn detect_compositor() -> core_types::Result<Box<dyn CompositorBackend>> {
    // 1. Try COSMIC-specific protocols (if cosmic feature enabled)
    // 2. Try wlr-foreign-toplevel-management-v1
    // 3. Return Error::Platform if nothing works
}

Detection order matters: more specific backends are tried first (COSMIC), with generic fallbacks last (WLR). Each backend’s connect() method probes for required protocols and returns an error if they are unavailable, allowing the factory to fall through to the next candidate.

Backend Implementations

Each backend is a pub(crate) module containing a struct that implements CompositorBackend:

  • backend_cosmic.rsCosmicBackend using ext_foreign_toplevel_list_v1 + zcosmic_toplevel_{info,manager}_v1
  • backend_wlr.rsWlrBackend using zwlr_foreign_toplevel_manager_v1

Backends are pub(crate) because callers interact with them only through Box<dyn CompositorBackend> returned by the factory. The concrete types are not part of the public API.

Adding a New Compositor Backend

To add support for a compositor that uses different protocols (e.g., GNOME/Mutter, KDE/KWin, Hyprland IPC):

Step 1: Create the Backend Module

Create platform-linux/src/backend_<name>.rs with a struct implementing CompositorBackend. The struct must be Send + Sync.

For operations not supported by the compositor’s protocols, return Error::Platform with a descriptive message:

fn set_window_geometry(&self, _id: &WindowId, _geom: &Geometry)
    -> BoxFuture<'_, core_types::Result<()>>
{
    Box::pin(async {
        Err(core_types::Error::Platform(
            "set_window_geometry not supported by <name> protocol".into(),
        ))
    })
}

Provide a connect() constructor that probes for required protocols/interfaces and returns core_types::Result<Self>.

Step 2: Register the Module

Add the module declaration to platform-linux/src/lib.rs:

#[cfg(all(target_os = "linux", feature = "<name>"))]
pub(crate) mod backend_<name>;

Step 3: Add the Detection Arm

Add a match arm to detect_compositor() in platform-linux/src/compositor.rs. Place it in the detection order based on protocol specificity:

#[cfg(feature = "<name>")]
{
    match crate::backend_<name>::<Name>Backend::connect() {
        Ok(backend) => {
            tracing::info!("compositor backend: <name>");
            return Ok(Box::new(backend));
        }
        Err(e) => {
            tracing::info!("<name> backend unavailable, trying next: {e}");
        }
    }
}

Step 4: Add the Feature Flag

In platform-linux/Cargo.toml, add a feature flag for the new backend:

[features]
<name> = [
    "desktop",
    "dep:<new-protocol-crate>",
]

If the new backend uses only existing dependencies (e.g., communicating via D-Bus with zbus), no additional optional dependencies are needed.

Feature Gating and Conditional Compilation

Platform crates use a layered feature flag model:

  • No features: Headless-safe modules only (sandbox, security, systemd, dbus, cosmic_keys, cosmic_theme, clipboard trait). Suitable for server/container deployments.
  • desktop: Wayland compositor integration, evdev input, focus monitoring. Pulls in wayland-client, wayland-protocols, wayland-protocols-wlr, smithay-client-toolkit, evdev.
  • cosmic: COSMIC-specific protocols. Implies desktop. Pulls in cosmic-client-toolkit and cosmic-protocols (GPL-3.0).

This layering isolates build dependencies and license obligations. The cosmic feature flag specifically isolates GPL-3.0 dependencies so that builds without COSMIC support remain under the project’s base license.

Conditional compilation uses #[cfg(all(target_os = "linux", feature = "..."))] on module declarations in lib.rs. Backend modules are pub(crate) so they remain internal implementation details.

Adding a New OS Platform

To add a platform crate for a new operating system:

  1. Create platform-<os>/ with Cargo.toml and src/lib.rs.
  2. Gate all modules with #[cfg(target_os = "<os>")].
  3. Depend on core-types for shared types (Window, WindowId, Error, Result).
  4. Implement the same logical modules as the other platform crates (window management, clipboard, input, credential storage, daemon lifecycle). The specific API surface depends on what the OS provides.
  5. Use pub(crate) for backend implementation modules; expose only traits and factory functions as the public API.
  6. Add the crate to the workspace Cargo.toml.
  7. Update daemon crates to conditionally depend on the new platform crate via [target.'cfg(target_os = "<os>")'.dependencies].

The platform crate should contain no business logic. It provides safe wrappers over OS APIs, and daemon crates compose these wrappers into application behavior.

Nix Packaging

Open Sesame provides a Nix flake (flake.nix) that produces two packages, an overlay, a Home Manager module, and a development shell. The flake targets x86_64-linux and aarch64-linux.

Flake Structure

The flake uses nixpkgs (nixos-unstable) as its sole input. It exposes the following outputs:

Output                                   Description
packages.<system>.open-sesame            Headless package (CLI + 4 daemons)
packages.<system>.open-sesame-desktop    Desktop package (3 GUI daemons); depends on headless
packages.<system>.default                Alias for open-sesame-desktop
overlays.default                         Nixpkgs overlay adding both packages
homeManagerModules.default               Home Manager module for declarative configuration
devShells.<system>.default               Development shell with Rust toolchain and native dependencies

Headless Package (nix/package.nix)

The headless package builds five binary crates with --no-default-features, disabling all desktop/GUI code paths:

Crate             Binary
open-sesame       sesame
daemon-profile    daemon-profile
daemon-secrets    daemon-secrets
daemon-launcher   daemon-launcher
daemon-snippets   daemon-snippets

Build dependencies:

  • nativeBuildInputs: pkg-config, installShellFiles
  • buildInputs: openssl, libseccomp

The install phase copies the five binaries, the example configuration file, and five systemd user units (the headless target plus four service files) into $out.

Source filtering uses lib.fileset.unions to include only Cargo.toml, Cargo.lock, rust-toolchain.toml, config.example.toml, .cargo/, contrib/, and all crate directories (matched by prefix: core-*, daemon-*, platform-*, extension-*, sesame-*, open-sesame, xtask). Documentation, analysis files, and CI configuration are excluded.

Desktop Package (nix/package-desktop.nix)

The desktop package builds four binary crates with default features (desktop enabled):

Crate              Binary
open-sesame        sesame (rebuilt with desktop features)
daemon-wm          daemon-wm
daemon-clipboard   daemon-clipboard
daemon-input       daemon-input

Additional build dependencies beyond the headless set:

  • nativeBuildInputs: adds makeWrapper
  • buildInputs: adds fontconfig, wayland, wayland-protocols, libxkbcommon
  • propagatedBuildInputs: open-sesame (the headless package)

The propagatedBuildInputs declaration ensures the headless binaries (sesame, daemon-profile, daemon-secrets, daemon-launcher, daemon-snippets) appear on PATH when the desktop package is installed.

The daemon-wm binary is wrapped with wrapProgram to set XKB_CONFIG_ROOT to ${xkeyboard-config}/etc/X11/xkb. This is required because libxkbcommon needs evdev keyboard rules at runtime, and the Nix store path differs from the system default.

The install phase copies the three desktop systemd user units (the desktop target plus wm, clipboard, and input service files) into $out.

cargoLock.outputHashes

Both packages declare outputHashes for three git dependencies that Cargo.lock references:

outputHashes = {
  "cosmic-client-toolkit-0.2.0" = "sha256-ymn+BUTTzyHquPn4hvuoA3y1owFj8LVrmsPu2cdkFQ8=";
  "cosmic-protocols-0.2.0" = "sha256-ymn+BUTTzyHquPn4hvuoA3y1owFj8LVrmsPu2cdkFQ8=";
  "nucleo-0.5.0" = "sha256-Hm4SxtTSBrcWpXrtSqeO0TACbUxq3gizg1zD/6Yw/sI=";
};

The headless package includes these hashes even though it does not build the COSMIC crates, because Cargo.lock references workspace members and Cargo resolves the entire lock file before building.

Home Manager Module

The Home Manager module is available at homeManagerModules.default. It configures Open Sesame declaratively under programs.open-sesame.

Options

Option     Type                        Default         Description
enable     bool                        false           Enable the Open Sesame desktop suite
headless   bool                        false           Headless mode: starts only the profile, secrets, launcher, and snippets daemons; omits GUI daemons and the graphical-session dependency
package    package                     auto-selected   Defaults to open-sesame-desktop or open-sesame depending on headless
settings   TOML attrset                {}              WM key bindings and settings for the default profile
profiles   attrsOf (tomlFormat.type)   {}              Additional profile configuration keyed by trust profile name
logLevel   enum                        "info"          RUST_LOG level for all daemons: error, warn, info, debug, or trace

Generated Configuration

When settings or profiles are non-empty, the module generates ~/.config/pds/config.toml (via xdg.configFile."pds/config.toml") with config_version = 3. The settings option populates profiles.default.wm, while profiles allows defining additional trust profiles with launch profiles and vault configuration.
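An illustrative Home Manager configuration using these options (the concrete keys under settings and profiles are placeholders, since the profile schema is documented elsewhere):

```nix
{
  programs.open-sesame = {
    enable = true;
    logLevel = "debug";
    # Populates profiles.default.wm in ~/.config/pds/config.toml
    settings = {
      switcher_modifier = "Super";
    };
    # Additional trust profiles, keyed by profile name
    profiles.work = {
      vault = { };
    };
  };
}
```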

Systemd Service Generation

The module generates two systemd user targets and up to seven services:

Headless target (open-sesame-headless):

  • WantedBy = [ "default.target" ] – starts on login regardless of graphical session
  • Four services: open-sesame-profile, open-sesame-secrets, open-sesame-launcher, open-sesame-snippets
  • All services declare PartOf = [ "open-sesame-headless.target" ]

Desktop target (open-sesame-desktop, omitted in headless mode):

  • Requires = [ "open-sesame-headless.target" "graphical-session.target" ]
  • WantedBy = [ "graphical-session.target" ]
  • Three services: open-sesame-wm, open-sesame-clipboard, open-sesame-input

All services share common hardening directives:

  • Type = "notify" with WatchdogSec = 30
  • Restart = "on-failure" with RestartSec = 5
  • NoNewPrivileges = true
  • LimitMEMLOCK = "64M" (required for mlock-backed ProtectedAlloc)
  • LimitCORE = 0 (disables core dumps to prevent secret leakage)
  • Environment = [ "RUST_LOG=${cfg.logLevel}" ]

Per-service hardening varies. For example, daemon-secrets sets PrivateNetwork = true and MemoryMax = "256M", while daemon-launcher sets CapabilityBoundingSet = "" and SystemCallArchitectures = "native".

The daemon-profile service uses ProtectHome = "read-only", ProtectSystem = "strict", and ReadWritePaths = [ "%t/pds" "%h/.config/pds" ] to restrict filesystem access.

tmpfiles.d Rules

The module creates tmpfiles.d rules to ensure runtime directories exist before services start:

d %t/pds 0700 - - -
d %h/.config/pds 0700 - - -
d %h/.cache/open-sesame 0700 - - -

In desktop mode, an additional rule is added:

d %h/.cache/fontconfig 0755 - - -

These directories must exist on the real filesystem because ProtectSystem=strict bind-mounts ReadWritePaths into each service’s mount namespace, and the source directory must already exist.

SSH Agent Integration

The module sets systemd.user.sessionVariables.SSH_AUTH_SOCK = "${HOME}/.ssh/agent.sock" to provide a stable socket path for systemd user services. The daemon-profile and daemon-wm services additionally load EnvironmentFile = [ "-%h/.config/pds/ssh-agent.env" ] (the leading - makes the file optional).

Cachix Binary Cache

The flake declares a Cachix binary cache in its nixConfig:

nixConfig = {
  extra-substituters = [ "https://scopecreep-zip.cachix.org" ];
  extra-trusted-public-keys = [
    "scopecreep-zip.cachix.org-1:LPiVDsYXJvgljVfZPN43zBWB7ZCGFr2jZ/lBinnPGvU="
  ];
};

Users who pass --accept-flake-config (or have the substituter trusted) automatically pull pre-built binaries for both x86_64-linux and aarch64-linux.

CI pushes to the Cachix cache on every release via the nix.yml workflow using cachix/cachix-action@v15 with the SCOPE_CREEP_CACHIX_PRIVATE_KEY secret. The same workflow runs on pull requests for cache warming (build only, no push without the secret).

preCheck

Both packages set preCheck = "export HOME=$(mktemp -d)" to provide test isolation. Tests that create configuration or runtime directories write to a temporary home instead of interfering with the build sandbox.

Debian Packaging

Open Sesame ships as two .deb packages built with cargo-deb. The two-package model mirrors the Nix split: a headless package for servers and containers, and a desktop package that adds GUI daemons for COSMIC/Wayland.

Package Overview

open-sesame (headless)

Defined in open-sesame/Cargo.toml under [package.metadata.deb].

Field          Value
Package name   open-sesame
Section        utils
Priority       optional
Depends        libc6, libgcc-s1, libseccomp2
Recommends     openssh-client
Suggests       open-sesame-desktop

Installed binaries (to /usr/bin/):

  • sesame (CLI)
  • daemon-profile
  • daemon-secrets
  • daemon-launcher
  • daemon-snippets

Installed systemd units (to /usr/lib/systemd/user/):

  • open-sesame-headless.target
  • open-sesame-profile.service
  • open-sesame-secrets.service
  • open-sesame-launcher.service
  • open-sesame-snippets.service

Additional assets:

  • Man page: /usr/share/man/man1/sesame.1.gz (generated by xtask)
  • Shell completions: bash (/usr/share/bash-completion/completions/sesame), zsh (/usr/share/zsh/vendor-completions/_sesame), and fish (/usr/share/fish/vendor_completions.d/sesame.fish)
  • Example config: /usr/share/doc/open-sesame/config.example.toml

Maintainer scripts are sourced from scripts/.

open-sesame-desktop

Defined in daemon-wm/Cargo.toml under [package.metadata.deb].

Field          Value
Package name   open-sesame-desktop
Section        utils
Priority       optional
Depends        open-sesame, libc6, libgcc-s1, libseccomp2, libxkbcommon0, libwayland-client0, libfontconfig1, libfreetype6, fonts-dejavu-core
Recommends     xdg-utils, fontconfig
Suggests       cosmic-desktop

The open-sesame dependency ensures the headless daemons and CLI are installed before the desktop layer.

Installed binaries (to /usr/bin/):

  • daemon-wm
  • daemon-clipboard
  • daemon-input

Installed systemd units (to /usr/lib/systemd/user/):

  • open-sesame-desktop.target
  • open-sesame-wm.service
  • open-sesame-clipboard.service
  • open-sesame-input.service

Maintainer scripts are sourced from scripts/desktop/.

Systemd Targets

open-sesame-headless.target

[Unit]
Description=Open Sesame Headless Suite
Documentation=https://github.com/scopecreep-zip/open-sesame

[Install]
WantedBy=default.target

The headless target is wanted by default.target, meaning it activates on every user login regardless of whether a graphical session exists. The four headless services declare PartOf=open-sesame-headless.target.

open-sesame-desktop.target

[Unit]
Description=Open Sesame Desktop Suite
Documentation=https://github.com/scopecreep-zip/open-sesame
Requires=open-sesame-headless.target graphical-session.target
After=open-sesame-headless.target graphical-session.target

[Install]
WantedBy=graphical-session.target

The desktop target requires both the headless target (for IPC bus and secrets infrastructure) and graphical-session.target (for Wayland compositor access). It is wanted by graphical-session.target, so it only activates when a graphical session starts.

Service Hardening

All services in contrib/systemd/ use Type=notify with WatchdogSec=30, Restart=on-failure, RestartSec=5, and NoNewPrivileges=yes. Resource limits include LimitMEMLOCK=64M (for mlock-backed protected allocations), LimitCORE=0 (prevents core dumps), and MemoryMax caps per daemon.

The daemon-profile service, which hosts the IPC bus, sets ProtectHome=read-only and ProtectSystem=strict with ReadWritePaths=%t/pds %h/.config/pds.
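
Taken together, the hardening directives above combine into unit files along these lines (an illustrative sketch, not a verbatim copy of the shipped files in contrib/systemd/):

```ini
[Service]
Type=notify
WatchdogSec=30
Restart=on-failure
RestartSec=5
NoNewPrivileges=yes
LimitMEMLOCK=64M
LimitCORE=0
MemoryMax=128M          ; per-daemon cap; the actual value varies by daemon
; The following three apply to the daemon-profile service:
ProtectHome=read-only
ProtectSystem=strict
ReadWritePaths=%t/pds %h/.config/pds
```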

Maintainer Scripts

Headless Package

postinst (scripts/postinst):

  1. Enables services globally with systemctl --global enable for the four headless services and the headless target. This persists across future logins and new users.
  2. Reloads all active user managers with systemctl reload 'user@*.service' so they see the new unit files.
  3. Iterates over all currently logged-in users (by parsing UIDs from systemctl list-units 'user@*') and restarts each headless service using systemctl --user -M "$uid@" with a SYSTEMD_BUS_TIMEOUT=25s timeout.

prerm (scripts/prerm):

  • On remove|deconfigure: stops all headless services for active users in reverse dependency order (snippets, launcher, secrets, profile), then disables globally.
  • On upgrade: stops services only (does not disable). The postinst of the new version restarts with new binaries.

postrm (scripts/postrm):

  • On remove|purge: reloads user managers to clear removed unit files. Prints a message noting that user configuration at ~/.config/pds/ is preserved.

Desktop Package

postinst (scripts/desktop/postinst):

  1. Enables desktop services globally: open-sesame-wm.service, open-sesame-clipboard.service, open-sesame-input.service, open-sesame-desktop.target.
  2. Reloads active user managers.
  3. Restarts desktop services for all active users.
  4. Prints a note that daemon-input requires input group membership for keyboard capture.

prerm (scripts/desktop/prerm):

  • On remove|deconfigure: stops desktop services (input, clipboard, wm) for active users, then disables globally.
  • On upgrade: stops services only.

postrm (scripts/desktop/postrm):

  • On remove|purge: reloads user managers. Notes that headless daemons remain installed.

User Iteration Pattern

All maintainer scripts use the same active_user_uids() helper to discover logged-in users:

active_user_uids() {
    systemctl list-units 'user@*' --legend=no 2>/dev/null \
        | sed -n 's/.*user@\([0-9]\+\)\.service.*/\1/p'
}

This pattern is derived from systemd-update-helper.in and ensures services are managed for all active user sessions, not just the invoking user.

Distribution

Open Sesame uses semantic-release for automated versioning, GitHub Actions for building, SLSA attestations for supply chain security, and GitHub Pages for hosting an APT repository alongside documentation.

Semantic Release

Version management is configured in release.config.mjs. Semantic-release runs on pushes to main and analyzes conventional commits to determine version bumps.

Release Rules

Commit type                              Release
feat                                     minor
fix                                      patch
perf                                     patch
revert                                   patch
docs (scope: README)                     patch
refactor, style, chore, test, build, ci  no release
Any scope no-release                     no release

Plugin Pipeline

The semantic-release plugin chain executes in order:

  1. @semantic-release/commit-analyzer – Analyzes commits using the conventionalcommits preset to determine the version bump type.
  2. @semantic-release/exec – Generates a release header from .github/templates/RELEASE_HEADER.md.
  3. @semantic-release/release-notes-generator – Generates release notes from commits, categorized by type.
  4. @semantic-release/changelog – Updates CHANGELOG.md.
  5. @semantic-release/exec – Updates the [workspace.package] version in Cargo.toml using sed, then runs cargo generate-lockfile to update Cargo.lock.
  6. @semantic-release/git – Commits CHANGELOG.md, Cargo.toml, and Cargo.lock with message chore(release): <version> [skip ci].
  7. @semantic-release/github – Creates the GitHub release.
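
Step 5's version bump can be sketched as a sed one-liner (illustrative; the exact command lives in release.config.mjs, and the version number here is made up), demonstrated against a throwaway Cargo.toml:

```shell
VERSION=1.6.4
tmp=$(mktemp -d)
printf '[workspace.package]\nversion = "1.6.3"\nedition = "2021"\n' > "$tmp/Cargo.toml"
# Rewrite the workspace version line in place, as the exec plugin does.
sed -i "s/^version = \".*\"/version = \"${VERSION}\"/" "$tmp/Cargo.toml"
grep '^version' "$tmp/Cargo.toml"
```

After this step, cargo generate-lockfile brings Cargo.lock in line with the new version.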

Release Pipeline DAG

The release workflow (release.yml) runs on pushes to main and defines the following job dependency graph:

semantic-release
├── build (amd64)       ─┬──► attest
├── build (arm64)       ─┤
│                        └──► upload-assets
├── nix-cache
├── build-docs
│
└── [all above] ────────────► publish ──► cleanup

All downstream jobs gate on needs.semantic-release.outputs.new_release == 'true'. If semantic-release determines no version bump is needed, the pipeline stops after the first job.

Job Details

semantic-release: Checks out with fetch-depth: 0, installs Node.js via mise, runs npx semantic-release. Outputs new_release, version, and tag.

build: Runs on a dual-architecture matrix (ubuntu-24.04 for amd64, ubuntu-24.04-arm for arm64). Installs Rust and cargo-deb via mise. Raises RLIMIT_MEMLOCK to 256 MiB with prlimit before building (required by ProtectedAlloc). Builds .deb packages via mise tasks (ci:build:deb / ci:build:deb:arm64), renames them with architecture suffixes, and uploads as artifacts.

nix-cache: Calls the reusable nix.yml workflow with the release tag. Builds both open-sesame and open-sesame-desktop for each architecture and pushes to Cachix.

build-docs: Builds rustdoc and mdBook documentation via mise run ci:docs:all and mise run ci:docs:combine, then uploads as an artifact.

attest: Downloads all .deb artifacts and generates SLSA build provenance attestations using actions/attest-build-provenance@v2.

upload-assets: Downloads .deb artifacts, generates SHA256SUMS.txt, renders install instructions from a template with per-architecture checksums, and uploads all .deb files and checksums to the GitHub release using softprops/action-gh-release@v2.

publish: Downloads .deb artifacts and documentation, imports the GPG signing key, generates the APT repository via mise run ci:release:apt-repo, and deploys the combined APT repository and documentation site to GitHub Pages.

cleanup: Deletes old releases, keeping the 10 most recent. Uses dev-drprasad/delete-older-releases@v0.3.4. Tags are preserved.

APT Repository

The APT repository is hosted on GitHub Pages and generated by the ci:release:apt-repo mise task during the publish job. The process:

  1. Downloads all .deb artifacts into a packages/ directory.
  2. Imports the GPG private key (GPG_PRIVATE_KEY secret) using crazy-max/ghaction-import-gpg@v6.
  3. Generates the Packages index and signs the repository with GPG.
  4. Combines the APT repository with the documentation site into a single gh-pages/ directory.
  5. Deploys to GitHub Pages using actions/deploy-pages@v5.

The publish job runs in the github-pages environment and requires pages: write and id-token: write permissions.

SLSA Build Provenance

Every .deb artifact receives a SLSA build provenance attestation generated by actions/attest-build-provenance@v2. This runs in the attest job after the build completes. The workflow declares attestations: write permission at the top level.

Attestations provide a cryptographic link between each .deb file and its GitHub Actions build, allowing consumers to verify that artifacts were produced by the CI pipeline and not tampered with.

Checksum Verification

The upload-assets job generates SHA256SUMS.txt containing SHA-256 hashes for all .deb files:

sha256sum ./*.deb > SHA256SUMS.txt

The checksums file is uploaded alongside the .deb files to the GitHub release. Per-architecture checksums are also interpolated into the release body template for inline verification instructions.

Workflow Permissions

The release workflow requests the following permissions:

Permission            Purpose
contents: write       Create GitHub releases, push version commits
pages: write          Deploy APT repo and docs to GitHub Pages
id-token: write       OIDC token for Pages deployment and attestations
attestations: write   SLSA build provenance
issues: write         Semantic-release issue comments
pull-requests: write  Semantic-release PR comments

Packaging for New Distributions

This guide covers the requirements and considerations for packaging Open Sesame on Linux distributions beyond the officially supported Debian/Ubuntu .deb packages and Nix flake.

Common Requirements

Regardless of distribution, all packages must satisfy the following.

Two-Package Split

Open Sesame ships as two logical packages:

  • open-sesame (headless) – Contains the sesame CLI, daemon-profile, daemon-secrets, daemon-launcher, daemon-snippets, and their systemd user service units. Has no GUI dependencies.
  • open-sesame-desktop (requires open-sesame) – Contains daemon-wm, daemon-clipboard, daemon-input, and the COSMIC/Wayland compositor integration. Depends on libwayland-client, libxkbcommon, and cosmic-protocols.

systemd User Services

All daemons run as systemd user services (systemctl --user). Packages must install unit files to /usr/lib/systemd/user/. The services use:

  • Type=notify with sd_notify readiness.
  • Restart=on-failure with RestartSec=2.
  • Ordering via After= and Requires= (daemon-profile starts first as the IPC bus host; all others depend on it).

LimitMEMLOCK

daemon-secrets requires mlock for secret memory, and its systemd unit sets LimitMEMLOCK=64M. A packaged systemd override, or a system-wide default that drops the memlock limit below this threshold, causes vault operations to fail. The corresponding PAM/security limit is:

# /etc/security/limits.d/open-sesame.conf
*  soft  memlock  65536
*  hard  memlock  65536

Binary Paths

All binaries install to /usr/bin/. Configuration lives under ~/.config/pds/ (per XDG Base Directory specification).

AUR (Arch Linux)

Arch packaging uses PKGBUILD files. Two packages are needed.

open-sesame

pkgname=open-sesame
pkgver=1.6.3
pkgrel=1
pkgdesc='Programmable desktop suite - headless daemons and CLI'
arch=('x86_64' 'aarch64')
url='https://github.com/ScopeCreep-zip/open-sesame'
license=('GPL-3.0-only')
depends=('gcc-libs' 'sqlcipher' 'openssl')
makedepends=('cargo' 'pkg-config')

build() {
    cd "$srcdir/$pkgname-$pkgver"
    cargo build --release \
        --bin sesame \
        --bin daemon-profile \
        --bin daemon-secrets \
        --bin daemon-launcher \
        --bin daemon-snippets
}

package() {
    cd "$srcdir/$pkgname-$pkgver"
    for bin in sesame daemon-profile daemon-secrets daemon-launcher daemon-snippets; do
        install -Dm755 "target/release/$bin" "$pkgdir/usr/bin/$bin"
    done
    install -Dm644 dist/systemd/*.service -t "$pkgdir/usr/lib/systemd/user/"
    install -Dm644 dist/limits.conf "$pkgdir/etc/security/limits.d/open-sesame.conf"
}

open-sesame-desktop

pkgname=open-sesame-desktop
pkgver=1.6.3
pkgrel=1
pkgdesc='Programmable desktop suite - COSMIC/Wayland compositor integration'
arch=('x86_64' 'aarch64')
depends=('open-sesame' 'wayland' 'libxkbcommon' 'cosmic-protocols')
makedepends=('cargo' 'pkg-config')

build() {
    cd "$srcdir/open-sesame-$pkgver"
    cargo build --release \
        --bin daemon-wm \
        --bin daemon-clipboard \
        --bin daemon-input
}

package() {
    cd "$srcdir/open-sesame-$pkgver"
    for bin in daemon-wm daemon-clipboard daemon-input; do
        install -Dm755 "target/release/$bin" "$pkgdir/usr/bin/$bin"
    done
    install -Dm644 dist/systemd/daemon-wm.service -t "$pkgdir/usr/lib/systemd/user/"
    install -Dm644 dist/systemd/daemon-clipboard.service -t "$pkgdir/usr/lib/systemd/user/"
    install -Dm644 dist/systemd/daemon-input.service -t "$pkgdir/usr/lib/systemd/user/"
}

RPM (Fedora / RHEL)

Spec File Considerations

  • BuildRequires: cargo, rust-packaging, pkg-config, sqlcipher-devel, openssl-devel, wayland-devel, libxkbcommon-devel.
  • License tag: GPL-3.0-only.
  • Subpackages: Use %package desktop for the GUI subpackage with Requires: %{name} = %{version}-%{release}.
  • systemd macros: Use %systemd_user_post, %systemd_user_preun, and %systemd_user_postun for service lifecycle.
  • Vendor dependencies: Fedora policy requires vendored dependencies to be audited. Run cargo vendor and include the vendor tarball as a secondary source.
  • SELinux: daemon-secrets performs mlock and reads SSH_AUTH_SOCK. A custom SELinux policy module may be required for confined users. The base package should include a .te policy file or document the required booleans.
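
A minimal spec fragment reflecting these points might look like the following (hypothetical sketch; field values and macro usage would need review against current Fedora packaging guidelines):

```spec
Name:           open-sesame
License:        GPL-3.0-only
BuildRequires:  cargo, rust-packaging, pkg-config
BuildRequires:  sqlcipher-devel, openssl-devel

%package desktop
Summary:        COSMIC/Wayland desktop layer
Requires:       %{name} = %{version}-%{release}

%post
%systemd_user_post open-sesame-profile.service

%preun
%systemd_user_preun open-sesame-profile.service
```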

Alpine Linux

Static Linking and musl

Alpine uses musl libc. Open Sesame compiles against musl with the x86_64-unknown-linux-musl target. Considerations:

  • SQLCipher: Must be compiled against musl. Alpine’s sqlcipher package provides this.
  • OpenSSL vs. rustls: If the build uses OpenSSL for TLS, link against Alpine’s openssl-dev (which is musl-compatible). Alternatively, rustls avoids the system OpenSSL dependency entirely.
  • Static binary: For maximum portability, build fully static binaries with RUSTFLAGS='-C target-feature=+crt-static'. The resulting binaries run on any kernel new enough for the syscalls the daemons rely on (mlock2 requires Linux >= 4.4; Landlock requires >= 5.13).
  • No systemd: Alpine uses OpenRC by default. Provide OpenRC init scripts as an alternative to systemd user services. The init scripts must set the MEMLOCK ulimit and run daemons as the logged-in user, not root.
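
An OpenRC script for the headless bus daemon could start from something like this (hypothetical sketch; the project does not currently ship OpenRC scripts, and the paths and variable names here are illustrative):

```sh
#!/sbin/openrc-run
# Hypothetical OpenRC script for daemon-profile. Must run as the
# logged-in user, never root.
name="open-sesame-profile"
command="/usr/bin/daemon-profile"
command_background="yes"
pidfile="/run/user/${SESAME_UID}/pds/profile.pid"
rc_ulimit="-l 65536"   # raise MEMLOCK for mlock-backed secret memory

depend() {
    need localmount
}
```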

APKBUILD

The APKBUILD follows the same two-package split. Use subpackages for the desktop variant. Alpine’s Rust packaging infrastructure supports cargo auditable build for SBOM embedding.

Flatpak

Sandbox Implications

Flatpak introduces a second layer of sandboxing on top of Open Sesame’s own Noise IK IPC isolation and Landlock filesystem restrictions.

Key issues:

  • Nested sandboxing: daemon-secrets uses mlock, seccomp, and Landlock. Inside a Flatpak sandbox, seccomp filters compose (the stricter filter wins), but Landlock may conflict with Flatpak’s own filesystem portals.
  • Unix socket access: The IPC bus uses a Unix domain socket under $XDG_RUNTIME_DIR. Flatpak must be configured to expose this path, or the socket must use a portal.
  • SSH agent: Flatpak does not expose SSH_AUTH_SOCK by default. The --socket=ssh-auth permission is required for SSH agent unlock.
  • Wayland: The desktop package requires --socket=wayland and access to the COSMIC compositor protocols, which may not be available through the standard Wayland portal.
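
If a manifest were attempted despite these caveats, the permission surface discussed above would translate into finish-args roughly like this (hypothetical fragment, not a shipped manifest):

```yaml
finish-args:
  - --socket=wayland                  # compositor access for the desktop daemons
  - --socket=ssh-auth                 # expose SSH_AUTH_SOCK for agent unlock
  - --filesystem=xdg-run/pds:create   # IPC bus socket under $XDG_RUNTIME_DIR
```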

For these reasons, Flatpak packaging is considered lower priority. The recommended approach is native packaging for distributions that target the COSMIC desktop.

Homebrew (macOS)

When platform-macos Is Implemented

Open Sesame currently targets Linux with COSMIC/Wayland. A platform-macos crate is planned but not yet implemented. When it becomes available:

  • Formula structure: A single formula covering the headless components (there is no separate desktop package on macOS; window management uses native Accessibility APIs).
  • launchd: Replace systemd user services with launchd plist files installed to ~/Library/LaunchAgents/.
  • Keychain integration: The macOS keychain can serve as an auth backend (similar to SSH agent), replacing mlock-based secret memory with Secure Enclave operations where available.
  • Dependencies: sqlcipher is available via Homebrew. No Wayland dependencies are needed.
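
A launchd agent for the headless bus daemon might be sketched as follows (hypothetical; the label and binary path are illustrative, not shipped files):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN"
  "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <key>Label</key><string>zip.scopecreep.open-sesame.profile</string>
    <key>ProgramArguments</key>
    <array><string>/opt/homebrew/bin/daemon-profile</string></array>
    <key>RunAtLoad</key><true/>
    <key>KeepAlive</key><true/>
</dict>
</plist>
```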

This section will be expanded when platform-macos reaches a functional state.

Testing Methodology

Open Sesame uses a layered testing strategy spanning unit tests, integration tests, property-based tests, and snapshot tests across the workspace.

Test Categories

Unit Tests

Unit tests are embedded in source files using #[cfg(test)] modules. They cover pure logic such as hint assignment, configuration validation, cryptographic derivation, rate limiting, ACL enforcement, audit logging, and type conversions. Approximately 576 test functions exist in src/ modules across 55 source files in the workspace.

Integration Tests

Integration tests live in <crate>/tests/ directories and test cross-module behavior:

File                                     Test count  Scope
core-ipc/tests/socket_integration.rs     21          Noise IK encrypted IPC: connect, pub/sub, request/response, clearance enforcement, identity binding, unicast routing
daemon-wm/tests/wm_integration.rs        43          Hint assignment, hint matching, overlay controller state machine, config validation
open-sesame/tests/cli_integration.rs     18          CLI argument parsing, help output, exit codes (no running daemon required)
core-memory/tests/guard_page_sigsegv.rs  4           Guard page SIGSEGV verification via subprocess harness
core-ipc/tests/daemon_keypair.rs         1           Keypair persistence, file permissions, tamper detection

Property-Based Tests (proptest)

The proptest crate is a dev-dependency in 10 workspace crates:

  • core-types, core-crypto, core-config, core-secrets, core-profile, core-fuzzy
  • platform-linux, platform-macos, platform-windows
  • extension-sdk

Property-based tests generate random inputs to verify invariants such as serialization round-trips, key derivation determinism, and type conversion totality.
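
The core invariant these tests assert is a round-trip law. The sketch below shows the law with a hand-rolled length-prefixed codec and a plain predicate; the real tests use proptest's random generators against the actual serializers (encode/decode here are hypothetical stand-ins, not the postcard wire format):

```rust
// Hypothetical codec used only to illustrate the round-trip invariant.
fn encode(bytes: &[u8]) -> Vec<u8> {
    let mut out = (bytes.len() as u32).to_le_bytes().to_vec();
    out.extend_from_slice(bytes);
    out
}

fn decode(wire: &[u8]) -> Option<Vec<u8>> {
    if wire.len() < 4 {
        return None;
    }
    let len = u32::from_le_bytes(wire[..4].try_into().ok()?) as usize;
    (wire.len() - 4 == len).then(|| wire[4..].to_vec())
}

// The property proptest checks for arbitrary inputs: decode(encode(x)) == x.
fn round_trips(input: &[u8]) -> bool {
    decode(&encode(input)).as_deref() == Some(input)
}
```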

Snapshot Tests (insta)

The insta crate is declared as a workspace dependency with yaml, json, and redactions features. Snapshot tests capture serialized output and compare against stored reference files, detecting unintended changes to wire formats and configuration serialization.

Test Isolation

HOME Directory Isolation

Both Nix packages and the CI pipeline set HOME=$(mktemp -d) before running tests:

export HOME=$(mktemp -d)

This is configured as preCheck in nix/package.nix and nix/package-desktop.nix. Tests that create configuration directories (~/.config/pds/), runtime directories ($XDG_RUNTIME_DIR/pds/), or keypair files write to the temporary directory instead of the real home.

For IPC integration tests, core-ipc/tests/daemon_keypair.rs uses noise::set_runtime_dir_override() to redirect directory creation without mutating environment variables, avoiding race conditions in parallel test execution.

RLIMIT_MEMLOCK Requirement

The ProtectedAlloc allocator uses mlock to pin secret-holding pages in physical memory, preventing swap exposure. This requires a sufficient RLIMIT_MEMLOCK limit.

In CI, prlimit raises the limit before test execution:

sudo prlimit --pid $$ --memlock=268435456:268435456

This sets both soft and hard limits to 256 MiB. The same prlimit invocation is used in the build jobs for .deb packaging and Nix builds.

Tests that allocate ProtectedAlloc instances fail with ENOMEM if the memlock limit is insufficient. The systemd service units set LimitMEMLOCK=64M for production use.

Test Execution

CI Pipeline

Tests run via mise run ci:test in the test.yml workflow on both ubuntu-24.04 (amd64) and ubuntu-24.04-arm (arm64). The mise task runner manages Rust toolchain installation and task orchestration.

Nix Builds

The Nix packages run cargo tests during the build phase:

  • Headless: tests the five headless crates with --no-default-features
  • Desktop: tests the entire workspace with --workspace

Both set preCheck = "export HOME=$(mktemp -d)" for isolation.

Local Execution

Developers can run the full test suite with:

sudo prlimit --pid $$ --memlock=268435456:268435456
cargo test --workspace

The prlimit invocation is required for core-memory and any crate that transitively uses ProtectedAlloc.

Security Tests

Open Sesame includes targeted security tests that verify memory protection, cryptographic isolation, IPC authentication, and authorization enforcement. These tests validate security invariants that, if broken, would compromise secret confidentiality.

Guard Page SIGSEGV Verification

File: core-memory/tests/guard_page_sigsegv.rs

ProtectedAlloc wraps sensitive data in page-aligned memory with guard pages on both sides. The guard page tests verify that out-of-bounds access triggers SIGSEGV (signal 11) rather than silently reading adjacent memory.

Subprocess Harness Pattern

Direct SIGSEGV in a test process would kill the entire test runner. The tests use a subprocess harness:

  1. The parent test (overflow_hits_trailing_guard_page, underflow_hits_leading_guard_page) spawns the test binary as a child process, targeting a specific harness function with --exact and passing an environment variable __GUARD_PAGE_HARNESS to gate execution.
  2. The child harness (overflow_harness, underflow_harness) checks for the environment variable. If absent, it returns immediately (a no-op when run as part of the normal test suite). If present, it allocates a ProtectedAlloc, performs a deliberate out-of-bounds read, and calls exit(1) as an unreachable fallback.
  3. The parent inspects the child’s exit status. On Unix, it checks status.signal() for SIGSEGV (11) or SIGBUS (7). As a fallback for platforms that encode signal death as exit code 128+signal, it also checks the exit code.
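
The parent-side status inspection (step 3) can be sketched in isolation. This hedged example substitutes a shell child that raises signal 11 for the real re-exec'd test harness:

```rust
use std::os::unix::process::ExitStatusExt; // Unix-only signal() accessor
use std::process::Command;

// Spawn a child that dies from signal 11 and classify the death the same
// way the guard-page tests do: check signal() first, then fall back to the
// 128+signal exit-code encoding. The `sh -c "kill -11 $$"` child is a
// stand-in for re-invoking the test binary with the harness env var set.
fn child_died_of_sigsegv_or_sigbus() -> bool {
    let status = Command::new("sh")
        .args(["-c", "kill -11 $$"])
        .status()
        .expect("failed to spawn child");
    matches!(status.signal(), Some(11) | Some(7))
        || matches!(status.code(), Some(139) | Some(135))
}
```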

Test Coverage

  • Trailing guard page: reads one byte past ptr.add(len), triggering SIGSEGV on the guard page after the data region.
  • Leading guard page: reads one full page before the data pointer (ptr.sub(page_size)), past any canary and padding, into the guard page between the metadata region and data region. Accepts both SIGSEGV (11) and SIGBUS (7).

Canary Verification

File: core-memory/src/alloc.rs (unit tests)

ProtectedAlloc writes a canary value into the metadata region during allocation. Unit tests verify canary behavior:

  • canary_is_consistent: verifies that canary derivation is deterministic – the same allocation size always produces the same canary.
  • alloc_canary_plus_data_spans_page_boundary: verifies correct behavior when the canary plus user data cross a page boundary.

The canary is checked on Drop. If the canary has been corrupted (indicating a buffer underflow or use-after-free into the metadata region), the allocator detects the tampering.

Postcard Wire Format Compatibility

File: core-types/src/sensitive.rs

SensitiveBytes provides custom Serialize and Deserialize implementations to maintain wire compatibility with postcard (the IPC serialization format). The serializer writes raw bytes directly from protected memory via serialize_bytes. The deserializer implements a custom Visitor with two paths:

  • Zero-copy path (visit_bytes): copies directly from the deserializer’s borrowed input buffer into a ProtectedAlloc. No intermediate heap Vec<u8> is created. This is the path postcard uses for in-memory deserialization.
  • Owned path (visit_byte_buf): accepts an owned Vec<u8>, copies into ProtectedAlloc, then zeroizes the Vec<u8> before dropping it.

This ensures that SensitiveBytes and Vec<u8> produce identical wire representations, maintaining backward compatibility with any code that previously used plain byte vectors.
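
The owned-path hygiene can be sketched without serde. In this simplified stand-in (ProtectedBuf is hypothetical; the real code copies into ProtectedAlloc), the transient Vec is wiped before it drops:

```rust
// Hypothetical stand-in for the mlock-backed ProtectedAlloc buffer.
struct ProtectedBuf(Vec<u8>);

fn absorb_owned(mut transient: Vec<u8>) -> ProtectedBuf {
    // Copy into the protected region first...
    let protected = ProtectedBuf(transient.clone());
    // ...then wipe the transient heap buffer so no secret bytes linger.
    // (The real implementation uses proper zeroization that defeats
    // dead-store elimination; a plain loop is only illustrative.)
    for b in transient.iter_mut() {
        *b = 0;
    }
    protected
}
```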

Cross-Profile Vault Isolation

File: core-secrets/src/sqlcipher.rs (unit tests)

SQLCipher vaults are encrypted with per-profile vault keys derived via BLAKE3 domain separation. Three tests verify isolation:

  • cross_profile_keys_are_independent: derives vault keys for profiles “work” and “personal” from the same master key. Asserts the derived keys differ. Opens a vault with the “work” key, stores a secret, then attempts to reopen the same database file with the “personal” key. The SqlCipherStore::open call must return an error because SQLCipher cannot decrypt pages with the wrong key.

  • cross_profile_secret_access_returns_error: creates two separate vault databases for “work” and “personal” profiles. Stores a secret in the “work” vault, then attempts to read the same key name from the “personal” vault. The result must be Err(core_types::Error::NotFound(_)).

  • vault_key_derivation_domain_separation: verifies that core_crypto::derive_vault_key produces distinct keys for different profile names, confirming the BLAKE3 domain separation functions correctly.
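
The property under test can be sketched with a stand-in hash (DefaultHasher here replaces BLAKE3's keyed derivation, and the context string is made up):

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Illustrative derivation: mix a domain-separation context, the master
// key, and the profile name. Distinct profiles must yield distinct keys;
// identical inputs must yield the same key.
fn derive_vault_key(master: &[u8], profile: &str) -> u64 {
    let mut h = DefaultHasher::new();
    b"vault key v1".hash(&mut h); // domain-separation context (made up)
    master.hash(&mut h);
    profile.hash(&mut h);
    h.finish()
}
```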

IPC Authentication and Authorization

File: core-ipc/tests/socket_integration.rs

The IPC integration tests verify several security invariants of the Noise IK transport and bus server:

Noise Handshake Rejection

noise_handshake_rejects_wrong_key: a client connects expecting an incorrect server public key. The Noise IK handshake fails because the client’s static key lookup does not match the server’s actual identity.

Clearance Escalation Blocking

clearance_escalation_blocked: a client registered at SecurityLevel::Open attempts to publish a message at SecurityLevel::Internal. The bus server silently drops the frame. An Internal-clearance receiver does not receive it.

Sender Identity Binding

sender_identity_change_blocked: after a client’s first message binds its DaemonId to the connection, any subsequent message with a different DaemonId is dropped. This prevents a compromised client from impersonating another daemon mid-session.

Verified Sender Name Stamping

verified_sender_name_stamped: messages routed through the bus carry a verified_sender_name field stamped by the server from the Noise IK registry lookup. The sender cannot self-declare this field. The test verifies the stamped name matches the registry entry ("test-client-0"), not anything the sender included in the message payload.

Unicast Response Routing

secret_response_not_received_by_bystander: when a request/response pair uses correlation IDs, the response is unicast-routed to the original requester only. A bystander client connected to the same bus does not receive the correlated response.

Orphan Response Dropping

uncorrelated_response_is_dropped: a message with a fabricated correlation_id (no matching pending request) is silently dropped by the bus server and not broadcast to any client.

Ephemeral Client UCred Validation

ephemeral_client_gets_secrets_only_clearance: an unregistered key (ephemeral CLI connection) that passes UCred same-UID validation receives SecretsOnly clearance, allowing it to send unlock and secret CRUD messages without being pre-registered in the key registry.

Clearance-Level Message Filtering

secrets_only_message_not_delivered_to_internal_daemon: a message published at SecretsOnly level is not delivered to Internal-clearance recipients, since Internal < SecretsOnly in the clearance hierarchy.
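
The delivery rule these tests pin down can be expressed as an ordering check (a sketch; the variant names follow the text, but the actual clearance type lives in the workspace):

```rust
#[derive(Clone, Copy, PartialEq, Eq, PartialOrd, Ord, Debug)]
enum SecurityLevel {
    Open,
    Internal,
    SecretsOnly,
}

// A recipient sees a message only when its clearance dominates the
// message's level, so Internal recipients never see SecretsOnly traffic.
fn delivered(recipient: SecurityLevel, msg_level: SecurityLevel) -> bool {
    recipient >= msg_level
}
```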

Keypair Persistence Security

File: core-ipc/tests/daemon_keypair.rs

This test verifies filesystem security invariants for daemon keypair storage:

  • The keys directory has 0700 permissions.
  • Private key files (.key) have 0600 permissions.
  • Public key files (.pub) have 0644 permissions.
  • Bus keypair files (bus.key, bus.pub, bus.checksum) have correct permissions.
  • Corrupting the checksum file triggers a TAMPER DETECTED error on the next read, preventing use of tampered keypairs.

Seccomp Allowlist

Each daemon applies a seccomp filter via platform_linux::sandbox::apply_seccomp. The function uses libseccomp to install a BPF filter with a default-deny policy (SCMP_ACT_ERRNO(EPERM)), adding only the syscalls required by each daemon’s SeccompProfile. This prevents an attacker who gains code execution within a daemon from making arbitrary system calls.

Seccomp is combined with Landlock filesystem restrictions in the apply_sandbox function, which each daemon calls during initialization. Per-daemon sandbox configurations are defined in each daemon’s sandbox.rs module (e.g., daemon-secrets/src/sandbox.rs, daemon-wm/src/sandbox.rs). Daemons that do not need network access (e.g., daemon-secrets) additionally set PrivateNetwork=true at the systemd level.

IPC Integration Tests

The core-ipc crate includes a comprehensive integration test suite in core-ipc/tests/socket_integration.rs. All tests exercise the full Noise IK encrypted transport – there is no plaintext transport path in the codebase.

Test Infrastructure

Helpers

The test suite provides three helper functions:

  • start_server_with_clients(n) – Creates a temporary directory, generates a server keypair, registers n client keypairs at SecurityLevel::Internal in a ClearanceRegistry, binds a BusServer to a Unix socket, and returns the server, temp directory, server public key, and client keypairs.

  • start_server() – Convenience wrapper that registers a single client at Internal clearance.

  • connect_with_keypair(id, sock, server_pub, kp) – Connects a BusClient via connect_encrypted with the given DaemonId and keypair.

All tests create isolated Unix sockets in tempfile::TempDir instances, ensuring no shared state between tests.

Test Coverage

Server Lifecycle

server_bind_creates_socket_file – Verifies that BusServer::bind creates the Unix socket file on disk, including parent directory creation.

client_connect_and_server_accept – Verifies that after a client performs a Noise IK handshake, the server reports a connection count of 1.

Publish-Subscribe

publish_subscribe_roundtrip – Client A publishes a DaemonStarted event at Internal level. Client B receives it and verifies the event kind matches. Confirms that broadcast delivery works end-to-end over encrypted transport.

sender_does_not_receive_own_message – A client publishes a message and then attempts to receive. The receive times out, confirming that the bus server does not echo messages back to the sender.

multiple_clients_receive_broadcast – One sender, two receivers. Both receivers get the ConfigReloaded event, verifying fan-out broadcast.

Request-Response

request_response_correlation – Client A sends a SecretList request via client.request(). Client B receives it, constructs a SecretListResponse with .with_correlation(request_msg.msg_id), and sends it back. Client A’s request() future resolves with the correlated response. Verifies that correlation ID routing works correctly.

launch_execute_response_roundtrip – End-to-end test of the LaunchExecute / LaunchExecuteResponse request-response pair, simulating the CLI sending a launch command and daemon-launcher responding with a PID.

launch_execute_error_roundtrip – Same as above, but the launcher responds with error: Some("desktop entry 'nonexistent' not found") and denial: Some(LaunchDenial::EntryNotFound). Verifies error propagation through the correlated response path.

request_timeout – A client sends a StatusRequest with a 100 ms timeout. No responder exists. The request() call returns an error containing “timed out”.

Unicast Routing

secret_response_not_received_by_bystander – Three clients: a requester, a bystander, and a simulated secrets daemon. The requester sends a SecretList request. Both the bystander and the secrets daemon receive the broadcast request. The secrets daemon responds with a correlated SecretListResponse. The requester receives it, but the bystander does not. This verifies that correlated responses are unicast-routed to the original requester, not broadcast.

uncorrelated_response_is_dropped – A client sends a response message with a fabricated correlation ID that matches no pending request. The bus server drops it; no other client receives it. This prevents response injection attacks.

Noise Handshake Security

noise_handshake_rejects_wrong_key – A client attempts to connect using a server public key that does not match the actual server. The Noise IK handshake fails, and connect_encrypted returns an error. This is a fundamental authentication property of the IK pattern: the initiator pins the responder’s static key.

client_connect_retry_on_missing_socket – A client attempts to connect to a nonexistent socket path. The connection fails with an error containing “failed to connect” rather than hanging or panicking.

Clearance Enforcement

clearance_escalation_blocked – Two clients are registered: one at SecurityLevel::Open, one at SecurityLevel::Internal. The Open-clearance client publishes a message at Internal level. The Internal-clearance client does not receive it. The bus server silently drops frames that exceed the sender’s clearance.

secrets_only_message_not_delivered_to_internal_daemon – A client registered at SecurityLevel::SecretsOnly publishes at SecretsOnly level. A client registered at SecurityLevel::Internal does not receive it. This verifies the lattice property: Internal clearance is below SecretsOnly, so Internal recipients are excluded from SecretsOnly-level messages. This isolation ensures that daemon-secrets traffic is partitioned from general daemon traffic.

ephemeral_client_gets_secrets_only_clearance – A client connects with an unregistered keypair (not in the ClearanceRegistry). The connection succeeds via UCred same-UID validation, and the server reports 1 connection. Ephemeral clients (typically sesame CLI invocations) receive SecretsOnly clearance, allowing them to interact with daemon-secrets without being pre-registered.

Sender Identity

sender_identity_change_blocked – A client sends a first message with DaemonId(20), binding that identity to the connection. It then sends a second message with DaemonId(99). The receiver gets the first message but not the second. The bus server drops messages where the sender’s DaemonId does not match the identity bound on the first message, preventing identity spoofing mid-session.

verified_sender_name_stamped – A client registered as "test-client-0" sends a message. The receiver inspects msg.verified_sender_name and finds it set to Some("test-client-0"). This field is stamped by the server from the Noise IK registry lookup, not self-declared by the sender. Recipients can trust this field for authorization decisions.

Cross-Daemon Routing

registered_client_overlay_reaches_daemon_wm – A CLI client registered at Internal clearance publishes WmActivateOverlay. A simulated daemon-wm client receives it. Verifies the overlay activation path from CLI to window manager.

Graceful Shutdown

shutdown_flushes_publish_before_disconnect – A CLI client publishes WmActivateOverlay then calls client.shutdown().await. A daemon-wm client receives the event after the CLI has disconnected. This verifies that shutdown() flushes outbound frames before closing the connection.

drop_without_shutdown_may_lose_message – A CLI client publishes then immediately drops the client handle without calling shutdown(). The test documents that this races the I/O task and message delivery is non-deterministic. This test exists as a regression companion to the shutdown_flushes test: it demonstrates the data loss that shutdown() was introduced to prevent.

CI Pipeline

Open Sesame uses four GitHub Actions workflows for testing, documentation, release, and Nix builds.

Workflow Overview

| Workflow | File | Triggers | Purpose |
| --- | --- | --- | --- |
| Test | test.yml | Push to main/master, PRs | Run cargo test on dual architectures |
| Docs | docs.yml | Push to main/master, PRs | Build rustdoc and mdBook |
| Release | release.yml | Push to main, manual dispatch | Semantic-release, build, attest, publish |
| Nix | nix.yml | Called by release.yml, PRs | Build Nix packages and push to Cachix |

test.yml

The test workflow runs on every push to main/master and on pull requests targeting those branches.

Dual-Architecture Matrix

matrix:
  include:
    - arch: amd64
      runner: ubuntu-24.04
    - arch: arm64
      runner: ubuntu-24.04-arm

Both runners use Ubuntu 24.04. ARM builds use GitHub’s native ubuntu-24.04-arm runner (not emulation).

Execution

  1. Checks out the repository.
  2. Installs the Rust toolchain via jdx/mise-action@v4 with caching enabled.
  3. Raises RLIMIT_MEMLOCK to 256 MiB with sudo prlimit --pid $$ --memlock=268435456:268435456. This is required because ProtectedAlloc uses mlock to pin secret-holding memory pages.
  4. Runs mise run ci:test.

The MISE_AUTO_INSTALL environment variable is set to "false" to prevent automatic tool installation outside the explicit mise-action step.

docs.yml

The docs workflow runs on pushes and PRs to main/master. It runs on ubuntu-latest (single architecture).

  1. Checks out the repository.
  2. Installs Rust via mise with caching.
  3. Runs mise run ci:docs to build documentation.

This workflow validates that documentation builds succeed but does not deploy. Deployment occurs in the release workflow’s build-docs and publish jobs.

release.yml

The release workflow is the primary CI/CD pipeline. It triggers on pushes to main and supports manual dispatch with a dry-run option.

Permissions

The workflow declares the following permissions:

  • contents: write – GitHub release creation, version commits
  • pages: write – GitHub Pages deployment
  • id-token: write – OIDC tokens for Pages and attestations
  • attestations: write – SLSA build provenance
  • issues: write, pull-requests: write – semantic-release comments

Job Dependency Graph

semantic-release ──┬──► build (amd64)  ──┬──► attest
                   ├──► build (arm64)  ──┤
                   │                     └──► upload-assets
                   ├──► nix-cache
                   ├──► build-docs
                   │
                   └──► [build + upload-assets + build-docs] ──► publish ──► cleanup

All jobs after semantic-release are gated on new_release == 'true'.

Build Job

The build job uses the same dual-architecture matrix as the test workflow. It installs rust and cargo:cargo-deb via mise, raises the memlock limit, and runs architecture-specific mise tasks:

| Architecture | Build Task | Rename Task |
| --- | --- | --- |
| amd64 | ci:build:deb | ci:release:rename-deb |
| arm64 | ci:build:deb:arm64 | ci:release:rename-deb:arm64 |

The rename task adds architecture suffixes to the .deb filenames. Artifacts are uploaded with 1-day retention.

Nix Cache Job

Calls the reusable nix.yml workflow, passing the release tag and the SCOPE_CREEP_CACHIX_PRIVATE_KEY secret.

Build Docs Job

Checks out the release tag, runs mise run ci:docs:all and mise run ci:docs:combine to produce a combined rustdoc and mdBook site. The result is uploaded as a documentation artifact.

Publish Job

The publish job:

  1. Downloads .deb artifacts and documentation.
  2. Imports the GPG signing key via crazy-max/ghaction-import-gpg@v6.
  3. Runs mise run ci:release:apt-repo to generate the signed APT repository.
  4. Deploys the combined APT repository and documentation to GitHub Pages via actions/deploy-pages@v5.

This job runs in the github-pages environment.

nix.yml

The Nix workflow serves dual purposes:

  • Reusable workflow: called by release.yml with a tag input to build and push release artifacts to Cachix.
  • Standalone PR workflow: runs on PRs to main for cache warming (builds packages but the Cachix action only pushes when the auth token is available).

Matrix

matrix:
  include:
    - system: x86_64-linux
      runner: ubuntu-24.04
    - system: aarch64-linux
      runner: ubuntu-24.04-arm

Execution

  1. Checks out at the specified tag (or current ref for PRs).
  2. Installs Nix via cachix/install-nix-action@v31.
  3. Configures Cachix via cachix/cachix-action@v15 with the scopecreep-zip cache name.
  4. Raises the memlock limit.
  5. Builds both open-sesame and open-sesame-desktop for the matrix system with --accept-flake-config -L.

Mise Task Runner

All workflows use jdx/mise-action@v4 to install tools and run tasks. Mise manages:

  • Rust toolchain version (from rust-toolchain.toml or mise config)
  • Node.js (for semantic-release in the release workflow)
  • cargo-deb (for .deb packaging in the build job)

Task names follow the convention ci:<category>:<action> (e.g., ci:test, ci:build:deb, ci:docs:all, ci:release:apt-repo).

Environment Variables

| Variable | Value | Purpose |
| --- | --- | --- |
| CARGO_TERM_COLOR | always | Colored cargo output in CI logs |
| MISE_AUTO_INSTALL | false | Prevent implicit tool installation |

Compliance Framework Mapping

This page maps Open Sesame’s security controls to specific requirements in NIST 800-53, DISA STIG, PCI-DSS, SOC 2, and FedRAMP. Controls that are fully implemented cite the source crate or configuration. Controls that depend on design-intent features are marked accordingly.

NIST 800-53 Rev. 5

AC – Access Control

| Control | Title | Open Sesame Mechanism | Status |
| --- | --- | --- | --- |
| AC-3 | Access Enforcement | SecurityLevel clearance hierarchy (core-types/src/security.rs): Open < Internal < ProfileScoped < SecretsOnly. Each daemon registers at a clearance level; messages are routed only to peers at sufficient clearance. CapabilitySet enforces per-agent authorization. | Implemented |
| AC-4 | Information Flow Enforcement | IPC bus enforces sender clearance: a daemon cannot emit messages above its own SecurityLevel. Recipient filtering ensures low-clearance daemons never receive high-clearance messages (core-ipc/src/server.rs). | Implemented |
| AC-6 | Least Privilege | Per-daemon Landlock filesystem sandboxing, seccomp-bpf syscall allowlists, systemd NoNewPrivileges=yes, empty capability bounding set, ProtectSystem=strict. | Implemented |
| AC-6(1) | Authorize Access to Security Functions | Capability::Admin, Capability::Unlock, Capability::Lock restricted to agents with explicit grants. Delegation narrows scope via CapabilitySet.intersection(). | Implemented |
| AC-6(9) | Log Use of Privileged Functions | BLAKE3 hash-chained audit log records all vault operations (core-profile). | Implemented |
| AC-17 | Remote Access | Noise IK mutual authentication for all IPC. SSH agent forwarding for remote vault unlock. PrivateNetwork=yes on secrets daemon. | Implemented |

AU – Audit and Accountability

| Control | Title | Open Sesame Mechanism | Status |
| --- | --- | --- | --- |
| AU-2 | Event Logging | Structured JSON logging (global.logging.json = true), journald integration. Events include: unlock/lock, secret CRUD, profile activation, daemon lifecycle. | Implemented |
| AU-3 | Content of Audit Records | Each entry includes: timestamp, agent identity, operation, profile, security level. AgentIdentity provides agent type, delegation chain, attestations. | Implemented |
| AU-10 | Non-repudiation | BLAKE3 hash-chained audit log. Each entry’s hash chains to the previous. sesame audit verify detects tampering. | Implemented |
| AU-11 | Audit Record Retention | Audit chain files persist on disk indefinitely. Retention policy is delegated to the operating environment. | Implemented (storage) |
| AU-12 | Audit Record Generation | All daemons emit structured log events. The audit chain is generated by core-profile’s audit logger. | Implemented |

IA – Identification and Authentication

| Control | Title | Open Sesame Mechanism | Status |
| --- | --- | --- | --- |
| IA-2 | Identification and Authentication (Organizational Users) | AuthFactorId enum: Password, SshAgent, Fido2, Tpm, Fingerprint, Yubikey (core-types/src/auth.rs). Password and SshAgent backends implemented. | Partially Implemented |
| IA-2(1) | Multi-Factor Authentication to Privileged Accounts | AuthCombineMode: Any, All, Policy (core-types/src/auth.rs). Policy mode supports threshold-based MFA (N required factors + M additional). | Implemented |
| IA-2(6) | Access to Accounts – Separate Device | Hardware security keys (FIDO2, YubiKey) as separate physical devices. SSH agent forwarding uses the operator’s local key. | Partially Implemented (SSH agent implemented; FIDO2/YubiKey defined but backends not yet implemented) |
| IA-5 | Authenticator Management | Argon2id KDF (19 MiB, 2 iterations) for password. BLAKE3 domain-separated key derivation. Per-profile salts. | Implemented |
| IA-5(2) | Public Key-Based Authentication | Noise IK X25519 static keys for IPC. SSH agent Ed25519/RSA keys for vault unlock. | Implemented |

SC – System and Communications Protection

| Control | Title | Open Sesame Mechanism | Status |
| --- | --- | --- | --- |
| SC-8 | Transmission Confidentiality and Integrity | Noise IK protocol: X25519 + ChaChaPoly + BLAKE2s with forward secrecy. All IPC authenticated and encrypted. | Implemented |
| SC-12 | Cryptographic Key Establishment and Management | BLAKE3 domain-separated key hierarchy. Master key derived from auth factors. Sub-keys derived via BLAKE3 derive_key with unique context strings per purpose. | Implemented |
| SC-13 | Cryptographic Protection | CryptoConfig (core-types/src/crypto.rs) with configurable algorithm selection. GovernanceCompatible profile uses NIST-approved algorithms (PBKDF2-SHA256, HKDF-SHA256, AES-GCM, SHA-256). | Implemented |
| SC-28 | Protection of Information at Rest | SQLCipher: AES-256-CBC + HMAC-SHA512 per page. Per-profile encryption keys. | Implemented |
| SC-28(1) | Cryptographic Protection (at Rest) | Vault files are encrypted at rest with keys derived from Argon2id KDF output through BLAKE3 domain-separated derivation. | Implemented |
| SC-39 | Process Isolation | Per-daemon systemd services with Landlock, seccomp-bpf, NoNewPrivileges, ProtectSystem=strict. Secrets daemon: PrivateNetwork=yes. | Implemented |

SI – System and Information Integrity

| Control | Title | Open Sesame Mechanism | Status |
| --- | --- | --- | --- |
| SI-7 | Software, Firmware, and Information Integrity | GPG-signed APT packages. SLSA build provenance. OciReference with provenance digest for extensions. WASM extensions identified by content hash (AgentType::Extension { manifest_hash }). | Implemented |
| SI-16 | Memory Protection | memfd_secret(2) removes pages from kernel direct map. Guard pages (PROT_NONE). Volatile zeroize on drop. LimitCORE=0, MADV_DONTDUMP. | Implemented |

DISA STIG

| STIG Requirement | Open Sesame Mechanism | Status |
| --- | --- | --- |
| Encrypted storage at rest | SQLCipher AES-256-CBC + HMAC-SHA512, per-profile encryption keys | Implemented |
| Memory protection for credentials | memfd_secret(2), guard pages, canary verification, volatile zeroize | Implemented |
| Audit trail integrity | BLAKE3 hash chain with tamper detection via sesame audit verify | Implemented |
| Least privilege process isolation | Landlock, seccomp-bpf, per-daemon clearance levels, systemd hardening | Implemented |
| No core dumps | LimitCORE=0 in all daemon services, MADV_DONTDUMP on secure allocations | Implemented |
| Authentication strength | Argon2id with memory-hard parameters (19 MiB). Multi-factor support. | Implemented |
| Access control for sensitive data | SecurityLevel hierarchy, CapabilitySet authorization | Implemented |
| Session management | Heartbeat-based delegation with TTL expiry, TrustVector.authz_freshness | Implemented (types); Design Intent (runtime enforcement) |

PCI-DSS v4.0

Requirement 3: Protect Stored Account Data

| Sub-Requirement | Open Sesame Mechanism |
| --- | --- |
| 3.5.1 Restrict access to cryptographic keys | Master key held in memfd_secret(2) memory, accessible only to the owning daemon process. Key derivation hierarchy: master key -> per-profile vault key -> SQLCipher page key. |
| 3.5.1.2 Store secret keys in fewest possible locations | One master key per installation, derived into per-profile keys. Master key exists only in protected memory; never on disk in plaintext. |
| 3.6.1 Key management procedures | sesame init generates keys. AuthCombineMode defines unlock policy. Key rotation via re-enrollment. |

Requirement 7: Restrict Access to System Components and Cardholder Data

| Sub-Requirement | Open Sesame Mechanism |
| --- | --- |
| 7.2.1 Access control system | CapabilitySet per agent. SecurityLevel per daemon. DelegationGrant for scoped access transfer. |
| 7.2.2 Assign access based on job classification | Trust profiles map to roles. Each profile has its own vault with its own secrets. |

Requirement 8: Identify Users and Authenticate Access

| Sub-Requirement | Open Sesame Mechanism |
| --- | --- |
| 8.3.1 All user access authenticated | All IPC authenticated via Noise IK. Vault unlock requires enrolled factor(s). |
| 8.3.2 Strong authentication for all access | Argon2id (memory-hard). Multi-factor via AuthCombineMode. Hardware factors defined. |
| 8.6.1 System and application accounts managed | AgentIdentity with typed AgentType, capability scoping, delegation chains. |

Requirement 10: Log and Monitor All Access

| Sub-Requirement | Open Sesame Mechanism |
| --- | --- |
| 10.2.1 Audit logs capture events | BLAKE3 hash-chained audit log, structured JSON logging. |
| 10.2.1.2 All actions by administrative accounts | Capability::Admin operations logged with full agent identity and delegation chain. |
| 10.3.1 Audit log protected from tampering | Hash chain provides tamper evidence. sesame audit verify detects modification. |

SOC 2 Trust Service Criteria

| Criteria | Category | Open Sesame Mechanism |
| --- | --- | --- |
| CC6.1 | Logical and Physical Access Controls | SecurityLevel hierarchy, CapabilitySet, Noise IK authentication, per-daemon sandbox |
| CC6.2 | Prior to Issuing System Credentials | sesame init with factor enrollment. AgentIdentity creation with attestation. |
| CC6.3 | Based on Authorization | CapabilitySet intersection for delegation. Policy-based approval gates (Design Intent). |
| CC6.6 | Restrict Access | Landlock, seccomp-bpf, PrivateNetwork=yes (secrets daemon), ProtectHome=read-only |
| CC6.7 | Restrict Transmission | Noise IK encryption for all IPC. No plaintext secret transmission. |
| CC6.8 | Prevent or Detect Unauthorized Software | WASM extensions identified by manifest_hash. OciReference with provenance. GPG-signed packages. |
| CC7.1 | Monitor Infrastructure | systemd watchdog (WatchdogSec=30), structured logging, sesame status |
| CC7.2 | Monitor for Anomalies | Rate-limited vault unlock attempts. Audit chain verification. |
| CC8.1 | Changes to Infrastructure | Configuration layered inheritance with PolicyOverride audit trail |

FedRAMP

FedRAMP baselines inherit from NIST 800-53. The controls mapped in the NIST 800-53 section above apply to FedRAMP at the corresponding baseline level (Low, Moderate, High).

Cryptographic Algorithm Compliance

FedRAMP requires FIPS 140-validated cryptographic modules. Open Sesame provides a GovernanceCompatible crypto profile (core-types/src/crypto.rs) that selects NIST-approved algorithms:

| Component | LeadingEdge (Default) | GovernanceCompatible |
| --- | --- | --- |
| KDF | Argon2id | PBKDF2-SHA256 (600K iterations) |
| HKDF | BLAKE3 | HKDF-SHA256 |
| Noise cipher | ChaChaPoly | AES-256-GCM |
| Noise hash | BLAKE2s | SHA-256 |
| Audit hash | BLAKE3 | SHA-256 |

The GovernanceCompatible profile uses algorithms that have FIPS 140-validated implementations in widely-used cryptographic libraries. Open Sesame itself is not FIPS-validated; deployments requiring FIPS validation must use a FIPS-validated cryptographic provider at the library level. See Cryptographic Inventory for the full algorithm inventory.

Cryptographic Inventory

This page provides an exhaustive inventory of every cryptographic algorithm used in Open Sesame, where it is used, the key sizes and parameters, and the relevant standards references.

Algorithm Summary

| Algorithm | Purpose | Key Size | Standard | Crate |
| --- | --- | --- | --- | --- |
| Argon2id | Password -> master key derivation | 32 bytes output | RFC 9106 | core-crypto (kdf.rs) |
| PBKDF2-SHA256 | Password -> master key (governance-compatible) | 32 bytes output | NIST SP 800-132, RFC 8018 | core-crypto (kdf.rs) |
| BLAKE3 derive_key | Master key -> per-purpose sub-keys | 32 bytes output | BLAKE3 spec (domain-separated KDF mode) | core-crypto (hkdf.rs) |
| HKDF-SHA256 | Master key -> per-purpose sub-keys (governance-compatible) | 32 bytes output | RFC 5869, NIST SP 800-56C | core-crypto (hkdf.rs) |
| AES-256-GCM | Key wrapping (PasswordWrapBlob, EnrollmentBlob) | 256-bit key | NIST SP 800-38D, FIPS 197 | core-crypto |
| AES-256-CBC + HMAC-SHA512 | SQLCipher page encryption | 256-bit key (encrypt) + 512-bit key (MAC) | FIPS 197, FIPS 198-1 | SQLCipher (via rusqlite) |
| X25519 | Noise IK key agreement | 256-bit (32 bytes) | RFC 7748 | snow (via core-ipc) |
| ChaChaPoly | Noise IK transport encryption (default) | 256-bit key | RFC 7539 | snow (via core-ipc) |
| BLAKE2s | Noise IK hashing (default) | 256-bit output | RFC 7693 | snow (via core-ipc) |
| AES-256-GCM (Noise) | Noise IK transport encryption (governance-compatible) | 256-bit key | NIST SP 800-38D | snow (via core-ipc) |
| SHA-256 (Noise) | Noise IK hashing (governance-compatible) | 256-bit output | FIPS 180-4 | snow (via core-ipc) |
| BLAKE3 | Audit log hash chain (default) | 256-bit output | BLAKE3 spec | core-profile |
| SHA-256 | Audit log hash chain (governance-compatible) | 256-bit output | FIPS 180-4 | core-profile |
| Ed25519 | Delegation grant signatures | 256-bit key (32 bytes) | RFC 8032 | core-types (security.rs) |

Argon2id

Standard: RFC 9106

Purpose: Derives the master key from a user-supplied password. Used by the Password authentication factor (AuthFactorId::Password in core-types/src/auth.rs).

Parameters:

| Parameter | Value | Rationale |
| --- | --- | --- |
| Memory | 19 MiB (19,456 KiB) | Memory-hard to resist GPU/ASIC attacks |
| Iterations | 2 | Balanced with memory cost for interactive use |
| Parallelism | 1 | Single-threaded derivation |
| Output | 32 bytes | 256-bit master key |
| Salt | 16 bytes, per-profile, random | Unique per vault |

Implementation: core-crypto/src/kdf.rs, function derive_key_argon2.

Known residual: The Argon2id working memory (19 MiB) resides on the unprotected heap during derivation. This is an upstream limitation of the argon2 crate. See GitHub issue #14.

BLAKE3 Key Derivation

Standard: BLAKE3 specification, KDF mode

Purpose: Derives per-purpose sub-keys from the master key using domain-separated context strings. Each context string is globally unique and hardcoded.

Context strings used in the system:

| Context String | Purpose | Source |
| --- | --- | --- |
| "pds v2 vault-key {profile}" | SQLCipher vault encryption key | core-crypto/src/hkdf.rs |
| "pds v2 clipboard-key {profile}" | Clipboard encryption key | core-crypto/src/hkdf.rs |
| "pds v2 ipc-auth-token {profile}" | IPC bus authentication token | core-crypto/src/hkdf.rs |
| "pds v2 ipc-encryption-key {profile}" | IPC field encryption key | core-crypto/src/hkdf.rs |
| "pds v2 ssh-vault-kek {profile}" | SSH agent KEK derivation | core-auth |
| "pds v2 combined-master-key {profile}" | Combined key from all factors (All mode) | core-auth |
| "pds v2 kek-salt {profile}" | Salt derivation for KEK wrapping | core-crypto/src/hkdf.rs |

Implementation: core-crypto/src/hkdf.rs, function derive_32 wrapping blake3::derive_key.

BLAKE3’s KDF mode internally derives a context key from the context string, then uses it as keying material with extract-then-expand semantics equivalent to HKDF.

HKDF-SHA256

Standard: RFC 5869, NIST SP 800-56C

Purpose: Governance-compatible alternative to BLAKE3 key derivation. Used when CryptoConfigToml.hkdf = "hkdf-sha256" (core-config/src/schema_crypto.rs).

Implementation: core-crypto/src/hkdf.rs, function derive_32_hkdf_sha256. Uses the hkdf crate with sha2::Sha256.

The same context strings listed above for BLAKE3 are used as the HKDF info parameter. The salt is extracted from the master key. Output is 32 bytes.

AES-256-GCM (Key Wrapping)

Standard: NIST SP 800-38D, FIPS 197

Purpose: Wraps and unwraps the master key under a key-encryption key (KEK) derived from an authentication factor.

Used in:

  • PasswordWrapBlob – master key wrapped under the Argon2id-derived KEK. Stored on disk in the vault metadata.
  • EnrollmentBlob – master key wrapped under the SSH agent-derived KEK. Stored on disk for SSH agent factor.

Parameters:

| Parameter | Value |
| --- | --- |
| Key size | 256 bits (32 bytes) |
| Nonce | 96 bits (12 bytes), random per wrap |
| Tag | 128 bits (16 bytes) |

AES-256-CBC + HMAC-SHA512 (SQLCipher)

Standard: FIPS 197 (AES), FIPS 198-1 (HMAC), FIPS 180-4 (SHA-512)

Purpose: SQLCipher page-level encryption for vault databases. Each page in the SQLite database is independently encrypted and authenticated.

Parameters:

| Parameter | Value |
| --- | --- |
| Encryption | AES-256-CBC per page |
| Authentication | HMAC-SHA512 per page |
| Key derivation | Per-page key from vault key via SQLCipher’s internal KDF |
| Page size | 4096 bytes (SQLCipher default) |
| KDF iterations | Controlled by SQLCipher; the vault key itself is pre-derived via Argon2id + BLAKE3 |

Implementation: SQLCipher via the rusqlite crate with the bundled-sqlcipher feature.

Noise IK (IPC Transport)

Standard: Noise Protocol Framework (noiseprotocol.org), pattern IK

Purpose: All inter-daemon communication on the IPC bus. Provides mutual authentication, encryption, and forward secrecy.

Pattern: IK (Initiator knows responder’s static key)

Default cipher suite: Noise_IK_25519_ChaChaPoly_BLAKE2s

| Component | Default (LeadingEdge) | Governance-Compatible |
| --- | --- | --- |
| Key agreement | X25519 (RFC 7748) | X25519 (RFC 7748) |
| Cipher | ChaChaPoly (RFC 7539) | AES-256-GCM (NIST SP 800-38D) |
| Hash | BLAKE2s (RFC 7693) | SHA-256 (FIPS 180-4) |

Additional binding: The UCred (pid, uid, gid) of the connecting process is bound into the Noise prologue, preventing a process from impersonating another process’s Noise session.

Implementation: core-ipc, using the snow crate. Cipher suite selection is configured via CryptoConfigToml.noise_cipher and CryptoConfigToml.noise_hash in core-config/src/schema_crypto.rs.

Ed25519 (Delegation Signatures)

Standard: RFC 8032

Purpose: Signs DelegationGrant structs to prevent tampering with capability delegations. The 64-byte signature is stored in DelegationGrant.signature (core-types/src/security.rs).

Key size: 256-bit private key, 256-bit public key.

FIPS Path

The following table summarizes FIPS 140 validation status for each algorithm:

| Algorithm | FIPS-Validated Implementations Available | Open Sesame Profile |
| --- | --- | --- |
| Argon2id | No FIPS 140 validation exists | LeadingEdge only |
| PBKDF2-SHA256 | Yes (multiple vendors) | GovernanceCompatible |
| BLAKE3 | No FIPS 140 validation exists | LeadingEdge only |
| HKDF-SHA256 | Yes (via HMAC-SHA256) | GovernanceCompatible |
| AES-256-GCM | Yes (multiple vendors) | Both profiles |
| AES-256-CBC | Yes (multiple vendors) | Both profiles (SQLCipher) |
| HMAC-SHA512 | Yes (multiple vendors) | Both profiles (SQLCipher) |
| X25519 | Partial (some FIPS modules include it) | Both profiles |
| ChaChaPoly | No FIPS 140 validation exists | LeadingEdge only |
| AES-256-GCM (Noise) | Yes (multiple vendors) | GovernanceCompatible |
| BLAKE2s | No FIPS 140 validation exists | LeadingEdge only |
| SHA-256 | Yes (multiple vendors) | GovernanceCompatible |
| Ed25519 | Partial (some FIPS modules include it) | Both profiles |

For deployments requiring full FIPS 140 compliance, set the crypto profile to governance-compatible:

[crypto]
kdf = "pbkdf2-sha256"
hkdf = "hkdf-sha256"
noise_cipher = "aes-gcm"
noise_hash = "sha256"
audit_hash = "sha256"
minimum_peer_profile = "governance-compatible"

This configuration uses only algorithms with widely available FIPS 140-validated implementations. Open Sesame itself is not a FIPS-validated module; the FIPS boundary is at the cryptographic library level.

Crypto Agility

All cryptographic algorithm selections are config-driven via CryptoConfigToml (core-config/src/schema_crypto.rs). The to_typed() method converts string-based configuration into validated CryptoConfig enum variants.

Adding a new algorithm requires:

  1. Adding a variant to the relevant enum in core-types/src/crypto.rs (e.g., KdfAlgorithm::Scrypt).
  2. Adding the string mapping in core-config/src/schema_crypto.rs.
  3. Implementing the algorithm in the corresponding core-crypto function.

The minimum_peer_profile field in CryptoConfig allows heterogeneous crypto profiles within a federation: each installation selects its own algorithms but can set a floor for what it accepts from peers. This enables gradual migration from one algorithm to another without a coordinated cutover.

PBKDF2-SHA256

Standard: NIST SP 800-132, RFC 8018

Purpose: Governance-compatible alternative to Argon2id for password-based key derivation. Used when CryptoConfigToml.kdf = "pbkdf2-sha256".

Parameters:

| Parameter | Value |
| --- | --- |
| Hash | SHA-256 |
| Iterations | 600,000 |
| Output | 32 bytes |
| Salt | 16 bytes, per-profile, random |

Implementation: core-crypto/src/kdf.rs, function derive_key_pbkdf2.

PBKDF2-SHA256 provides FIPS 140 compliance for the KDF layer but is significantly less resistant to GPU/ASIC attacks than Argon2id due to its lack of memory-hardness. It should be selected only when FIPS compliance is a hard requirement.

Zero Trust Posture

This page describes how Open Sesame applies zero trust principles to its architecture. Zero trust in this context means that no component, process, or network path is implicitly trusted. Every interaction is authenticated, authorized, and audited, regardless of origin.

Principles

Never Trust, Always Verify

Every IPC message on the bus is authenticated via the Noise IK protocol. There is no unauthenticated communication path between daemons.

Implementation: When a daemon connects to the IPC bus hosted by daemon-profile, the Noise IK handshake verifies the connecting daemon’s X25519 static public key against the clearance registry (core-ipc/src/registry.rs). UCred (pid, uid, gid) from the Unix domain socket is bound into the Noise prologue, preventing a compromised process from reusing another process’s Noise session.

Unregistered clients (e.g., ad-hoc sesame CLI invocations) are admitted as ephemeral peers after UCred same-UID validation and receive SecretsOnly clearance, allowing them to interact with daemon-secrets without appearing in the ClearanceRegistry (see the ephemeral_client_gets_secrets_only_clearance test).

The SecurityLevel enum (core-types/src/security.rs) defines the clearance hierarchy:

pub enum SecurityLevel {
    Open,           // Visible to all, including extensions
    Internal,       // Authenticated daemons only
    ProfileScoped,  // Daemons with current profile's security context
    SecretsOnly,    // Secrets daemon only
}

A message at SecretsOnly level is delivered only to daemons registered at SecretsOnly clearance. A daemon at Internal clearance never sees it. This is enforced in the IPC server’s message routing loop (core-ipc/src/server.rs): the server checks conn.security_clearance >= msg.security_level before delivering each message, and applies the same check against the sender’s clearance before accepting each published message, so a daemon can neither receive nor emit above its own clearance.

Least Privilege

Each daemon operates with the minimum privileges required for its function. Privilege boundaries are enforced at multiple layers:

Per-Daemon Clearance

| Daemon | Clearance | Rationale |
| --- | --- | --- |
| daemon-secrets | SecretsOnly | Holds decrypted vault keys; must not leak to other daemons |
| daemon-clipboard | ProfileScoped | Handles clipboard content scoped to the active profile |
| daemon-profile | Internal | IPC bus host; sees all Internal-level and below |
| daemon-wm | Internal | Window management; no access to secrets |
| daemon-launcher | Internal | Application launching; receives secrets only via env injection |
| daemon-input | Internal | Keyboard/mouse capture; no secret access |
| daemon-snippets | Internal | Snippet management; no secret access |

Filesystem Sandboxing (Landlock)

Each daemon restricts its own filesystem access at startup via Landlock. The secrets daemon, for example, can access only $XDG_RUNTIME_DIR/pds/ and ~/.config/pds/. Attempts to read or write outside these paths return EACCES.

Partially enforced Landlock is a fatal error. If the kernel supports Landlock but enforcement is incomplete (e.g., missing filesystem support), the daemon aborts rather than operating with degraded isolation.

Syscall Filtering (seccomp-bpf)

Each daemon installs a seccomp-bpf filter with a per-daemon syscall allowlist. Unallowed syscalls terminate the offending thread (SECCOMP_RET_KILL_THREAD). A SIGSYS handler logs the denied syscall before the thread dies, providing visibility into unexpected syscall usage.

systemd Hardening

All daemon services apply:

| Directive | Effect |
| --- | --- |
| NoNewPrivileges=yes | Prevents privilege escalation via setuid/setgid binaries |
| ProtectSystem=strict | Root filesystem mounted read-only |
| ProtectHome=read-only | Home directory read-only except explicit ReadWritePaths |
| LimitCORE=0 | Core dumps disabled |
| LimitMEMLOCK=64M | Locked memory budget for memfd_secret and mlock |
| MemoryMax | Per-daemon memory ceiling |

The secrets daemon additionally uses PrivateNetwork=yes, which creates a network namespace with only a loopback interface. The secrets daemon has no path to any network socket.

Capability-Based Authorization

The CapabilitySet type (core-types/src/security.rs) implements fine-grained, capability-based authorization. Each agent’s session_scope defines exactly which operations it can perform. The 17 defined capabilities are:

  • Admin, SecretRead, SecretWrite, SecretDelete, SecretList
  • ProfileActivate, ProfileDeactivate, ProfileList, ProfileSetDefault
  • StatusRead, AuditRead, ConfigReload
  • Unlock, Lock
  • Delegate, ExtensionInstall, ExtensionManage

Delegation narrows scope via lattice intersection: effective = delegator_scope.intersection(grant.scope). A delegatee can never exceed the delegator’s capabilities.

Continuous Verification

Trust is not established once and cached. Multiple mechanisms provide ongoing verification:

Watchdog

All daemons report health to systemd via WatchdogSec=30. If a daemon fails to report within 30 seconds, systemd restarts it. This detects hung processes and ensures daemon liveness.

Audit Chain

The BLAKE3 hash-chained audit log provides a tamper-evident record of all operations. Each entry hashes the previous entry’s hash, forming a chain from the genesis entry at sesame init to the most recent operation. Verification via sesame audit verify detects:

  • Modified entries (hash mismatch).
  • Deleted entries (chain gap).
  • Reordered entries (hash mismatch).
  • Inserted entries (hash mismatch).

The hash algorithm is configurable: BLAKE3 (default) or SHA-256 (governance-compatible), via CryptoConfigToml.audit_hash (core-config/src/schema_crypto.rs).

Authorization Freshness

The TrustVector.authz_freshness field (core-types/src/security.rs) tracks how long since the last authorization refresh. Delegated capabilities expire via DelegationGrant.initial_ttl and require periodic renewal via heartbeat_interval. A stale authorization is equivalent to no authorization.

Heartbeat Renewal

The Attestation::HeartbeatRenewal variant records heartbeat events for time-bounded attestations. Missing a heartbeat revokes the corresponding delegation.

Device Health as Posture Signal

The availability of memfd_secret(2) is a binary posture signal. A system with memfd_secret removes secret pages from the kernel direct map; a system without it leaves secrets accessible to any process that can read /proc/pid/mem or perform DMA.

| Posture Signal | Value | Meaning |
|---|---|---|
| memfd_secret available | device_posture: 1.0 | Secrets removed from kernel direct map |
| memfd_secret unavailable | device_posture: 0.5 | Secrets on kernel direct map (mlock fallback) |
| No mlock | device_posture: 0.0 | Secrets may be swapped to disk |

The TrustVector.device_posture field (core-types/src/security.rs) is an f64 from 0.0 (unknown) to 1.0 (fully attested). In a federation context, a peer with low device posture may be restricted from receiving high-sensitivity secrets.

Additional posture signals include:

  • Landlock enforcement status – whether the filesystem sandbox is active.
  • seccomp-bpf status – whether syscall filtering is active.
  • Machine binding – whether the installation is bound to specific hardware via MachineBindingType::TpmBound or MachineBindingType::MachineId.
  • Kernel version – whether the kernel meets minimum requirements for all security controls.

Microsegmentation via Profile Isolation

Trust profiles are the microsegmentation boundary in Open Sesame. Each profile is an isolated trust context:

| Boundary | Isolation Mechanism |
|---|---|
| Secrets | Separate SQLCipher vault per profile (vaults/{name}.db) |
| Encryption keys | Separate BLAKE3-derived vault key per profile |
| Clipboard | Profile-scoped clipboard history |
| Audit | Profile attribution in every audit entry |
| Frecency | Separate frecency database per profile |
| Environment | Profile-scoped secret injection via sesame env -p {profile} |

Cross-profile access is not possible without explicit configuration. A daemon operating in the work profile cannot read secrets from the personal profile’s vault. The vault encryption keys are derived from different BLAKE3 context strings ("pds v2 vault-key work" vs. "pds v2 vault-key personal"), so even with the master key, the derived keys are distinct.

The LaunchProfile type (core-types/src/profile.rs) allows explicit profile stacking for applications that need secrets from multiple profiles:

```rust
pub struct LaunchProfile {
    pub trust_profiles: Vec<TrustProfileName>,
    pub conflict_policy: ConflictPolicy,
}
```

When multiple profiles are stacked, the ConflictPolicy determines how secret key collisions are handled: Strict (abort), Warn (log and use higher-precedence), or Last (silently use higher-precedence). The default is Strict, preventing accidental secret leakage across profile boundaries.

Explicit Security Posture

Open Sesame does not degrade silently. Security controls that fail are fatal, with one documented exception:

  • Landlock enforcement failure: fatal. Daemon does not start.
  • seccomp-bpf installation failure: fatal. Daemon does not start.
  • memfd_secret unavailability: non-fatal. Daemon starts with mlock fallback. Logged at ERROR level with an explicit compliance impact statement naming affected frameworks (IL5/IL6, DISA STIG, PCI-DSS) and the exact remediation command.

The memfd_secret exception exists because the feature depends on kernel configuration that application software cannot control. The ERROR-level log ensures the operator is informed of the reduced posture, and the compliance impact statement provides actionable remediation guidance.

Network Trust Model

The NetworkTrust enum (core-types/src/security.rs) classifies the trust level of the network path:

```rust
pub enum NetworkTrust {
    Local,           // Unix domain socket, same machine
    Encrypted,       // Noise IK, TLS, WireGuard
    Onion,           // Tor, Veilid
    PublicInternet,  // Unencrypted or minimally authenticated
}
```

The ordering represents decreasing trust: Local is most trusted (no network traversal), PublicInternet is least trusted.

In the current implementation, all IPC communication uses Local (Unix domain socket). In a federation context (Design Intent), Encrypted (Noise IK over TCP) would be used for cross-machine communication. The TrustVector.network_exposure field allows authorization policies to require stronger authentication for less-trusted network paths.

Linux Platform

The platform-linux crate provides safe Rust abstractions over Linux-specific APIs consumed by the daemon crates. It contains no business logic. All modules are gated with #[cfg(target_os = "linux")].

Feature Flags

The crate uses two feature flags to control dependency scope:

  • No features (default): Only headless-safe modules are compiled: sandbox, security, systemd, dbus, cosmic_keys, cosmic_theme, and the clipboard trait definition. This is sufficient for the open-sesame (headless) package.
  • desktop: Enables Wayland compositor integration (compositor, focus_monitor), evdev input capture (input), and pulls in wayland-client, wayland-protocols, wayland-protocols-wlr, smithay-client-toolkit, and evdev.
  • cosmic: Enables COSMIC-specific Wayland protocol support. Implies desktop. Pulls in cosmic-client-toolkit and cosmic-protocols, which are GPL-3.0 licensed. This feature flag isolates the GPL license obligation to builds that opt in.

Compositor Abstraction

The CompositorBackend Trait

Window and workspace management is abstracted behind the CompositorBackend trait defined in compositor.rs. The trait requires Send + Sync and exposes these operations:

  • list_windows() – enumerate all toplevel windows
  • list_workspaces() – enumerate workspaces
  • activate_window(id) – bring a window to the foreground
  • set_window_geometry(id, geom) – resize/reposition a window
  • move_to_workspace(id, ws) – move a window to a different workspace
  • focus_window(id) – set input focus to a window
  • close_window(id) – request a window to close
  • name() – human-readable backend name for diagnostics

All methods return Pin<Box<dyn Future<Output = T> + Send>> (aliased as BoxFuture) to maintain dyn-compatibility. This is required because detect_compositor() returns Box<dyn CompositorBackend> for runtime backend selection.

The trait also defines a Workspace struct with fields id (CompositorWorkspaceId), name (String), and is_active (bool).

Runtime Backend Detection

The detect_compositor() factory function probes the Wayland display for supported protocols and instantiates the appropriate backend:

  1. If the cosmic feature is enabled, attempt to connect the CosmicBackend. On success, return it.
  2. If COSMIC protocols are unavailable (or the feature is disabled), attempt to connect the WlrBackend.
  3. If neither backend connects, return Error::Platform.

This detection runs once at daemon startup. The returned Box<dyn CompositorBackend> is stored and used for the daemon’s lifetime.

CosmicBackend

The CosmicBackend (in backend_cosmic.rs) targets the COSMIC desktop compositor (cosmic-comp). It uses three Wayland protocols:

  • ext_foreign_toplevel_list_v1 – standard protocol for window enumeration (toplevel handles with identifier, app_id, title).
  • zcosmic_toplevel_info_v1 – COSMIC-specific extension providing activation state detection via State::Activated.
  • zcosmic_toplevel_manager_v1 – COSMIC-specific extension providing window activation (manager.activate(handle, seat)) and close operations.

Connection and Protocol Probing

CosmicBackend::connect() opens a Wayland connection, initializes the registry, and verifies that all three required protocol interfaces are advertised in the global list. It does not bind protocol objects during probing – binding ExtForeignToplevelListV1 causes the compositor to start sending toplevel events, and if the probe event queue is then dropped, those objects become zombies that cause the compositor to close the connection.

The backend holds the wayland_client::Connection and an op_lock (Mutex<()>) that serializes all protocol operations. Concurrent bind/destroy cycles on the same wl_display can corrupt compositor state and crash cosmic-comp.

Window Enumeration (2-Roundtrip Pattern)

enumerate() follows a two-roundtrip protocol flow:

  1. Roundtrip 1: Bind ext_foreign_toplevel_list_v1 and zcosmic_toplevel_info_v1. Receive all ExtForeignToplevelHandleV1 events (identifier, app_id, title, Done).
  2. Request zcosmic_toplevel_handle for each handle via info.get_cosmic_toplevel().
  3. Roundtrip 2: Receive cosmic state events. Detect activation by checking for State::Activated in the state byte array (packed u32 values in native endian).

Windows are converted to core_types::Window structs. The WindowId is derived deterministically using UUID v5 with a fixed namespace ("open-sesame-wind" as bytes) and the protocol identifier as input. The focused window is reordered to the end of the list (MRU ordering for Alt+Tab).

After enumeration, all protocol objects are destroyed in the correct order per the protocol specification: destroy cosmic handles, destroy foreign toplevel handles, stop the list, roundtrip for the finished event, destroy the list, flush.

Window Activation (3-Roundtrip Pattern)

activate() uses a separate disposable Wayland connection to avoid crashing cosmic-comp. The compositor panics (toplevel_management.rs:267 unreachable!()) when protocol objects are destroyed while an activation is in flight, which would kill the entire COSMIC desktop session. The disposable connection isolates this breakage from the shared connection used for enumeration.

  1. Roundtrip 1: Enumerate toplevels on the disposable connection.
  2. Find the target window by deterministic UUID mapping. Request its cosmic handle.
  3. Roundtrip 2: Receive the cosmic handle.
  4. Call manager.activate(cosmic_handle, seat).
  5. Roundtrip 3: Ensure activation is processed.

Protocol objects are intentionally leaked. The leaked objects cause a broken pipe when the EventQueue drops, but this only affects the disposable connection.

Unsupported Operations

set_window_geometry and move_to_workspace return Error::Platform – these operations are not supported by the COSMIC toplevel protocols. focus_window delegates to activate_window.

WlrBackend

The WlrBackend (in backend_wlr.rs) implements CompositorBackend using wlr-foreign-toplevel-management-v1. This protocol is supported by sway, Hyprland, niri, Wayfire, and COSMIC (which advertises it for backwards compatibility).

Architecture

Unlike the COSMIC backend’s re-enumerate-on-each-call approach, the WLR backend maintains a continuously updated state snapshot:

  • A dedicated dispatch thread (wlr-dispatch) continuously reads Wayland events using prepare_read() + libc::poll() with a 50ms periodic wake-up.
  • On each Done event (the protocol’s atomic commit point), the dispatch thread publishes the committed toplevel state to a shared Arc<Mutex<WlrState>>.
  • On Closed events, the toplevel is removed from shared state and the handle proxy is destroyed.
  • list_windows() reads the snapshot under the mutex. No Wayland roundtrips occur on the API thread.
  • activate_window() and close_window() call proxy methods directly (wayland-client 0.31 proxies are Send + Sync) and flush the shared connection.

The dispatch loop uses exponential backoff (100ms to 30s) on read, dispatch, or flush errors.

Unsupported Operations

set_window_geometry and move_to_workspace return Error::Platform – the wlr-foreign-toplevel protocol does not support these operations. focus_window delegates to activate_window.

Focus Monitor

The focus_monitor module (in focus_monitor.rs) tracks the active window and sends FocusEvent values through a tokio::sync::mpsc channel. It uses wlr-foreign-toplevel-management-v1 and is compatible with sway, Hyprland, niri, Wayfire, and COSMIC.

FocusEvent has two variants:

  • Focus(String) – an app gained focus; payload is the app_id.
  • Closed(String) – a window closed; payload is the app_id.

The monitor runs as a long-lived async task. It connects to the Wayland display, binds the wlr foreign toplevel manager (version 1-3), and enters an async event loop using tokio::io::unix::AsyncFd on the Wayland socket file descriptor. On each Done event, if the activated app_id changed, a FocusEvent::Focus is sent via try_send. On Closed events, a FocusEvent::Closed is sent and the handle proxy is destroyed.

The focus monitor is re-exported from compositor for backward compatibility: downstream crates import platform_linux::compositor::{FocusEvent, focus_monitor}.

Clipboard

The clipboard module defines the DataControl trait for Wayland clipboard access. It abstracts over two protocols:

  • ext-data-control-v1 (preferred, standardized)
  • wlr-data-control-v1 (fallback for older compositors)

The trait provides:

  • read_selection() – read the current clipboard content with MIME type metadata.
  • write_selection(content) – write content to the clipboard.
  • subscribe() – subscribe to clipboard change notifications via a tokio::sync::mpsc::Receiver<ClipboardContent>.
  • protocol_name() – diagnostic name.

ClipboardContent carries a mime_type string and data byte vector.

The connect_data_control() factory function currently returns an error – clipboard implementation is deferred to a later phase. The trait definition and module are available as the integration contract.

On COSMIC, the COSMIC_DATA_CONTROL_ENABLED=1 environment variable is required for data-control protocol access.

Input

The input module (in input.rs) provides evdev device discovery and async keyboard event streaming.

Device Discovery

enumerate_devices() iterates /dev/input/event* via the evdev crate’s built-in enumerator. Each device is classified:

  • Keyboard: supports KEY_A, KEY_Z, and KEY_ENTER. This heuristic excludes power buttons, media controllers, and other devices that report KEY events but lack a full key set.
  • Pointer: supports BTN_LEFT.

The function returns a Vec<DeviceInfo> with path, name, is_keyboard, and is_pointer fields. Devices that fail to open (EACCES) are silently skipped.

Keyboard Streaming

open_keyboard_stream(path) opens an evdev device and returns an EventStream (from the evdev crate) that uses AsyncFd<Device> internally. This is fully async with no spawn_blocking required. Call stream.next_event().await to read events.

The device is not grabbed (EVIOCGRAB is not used). Events are read passively – they also reach the compositor. This is intentional: the system observes and forwards copies rather than stealing events.

Requires input group membership. Root is never required. For future remap support via /dev/uinput, a udev rule is needed: KERNEL=="uinput", GROUP="uinput", MODE="0660".

D-Bus Integration

The dbus module (in dbus.rs) provides typed D-Bus proxies using zbus with default-features = false, features = ["tokio"] to ensure all I/O runs on the tokio runtime with no background threads.

Session Bus

SessionBus::connect() opens a connection to the D-Bus session bus. It serves as the shared connection handle for all proxies.

Secret Service (org.freedesktop.secrets)

SecretServiceProxy provides raw store/retrieve/delete/has operations for the freedesktop Secret Service API. It opens a plain-text session (secrets transmitted unencrypted over D-Bus, which is acceptable because D-Bus is a local transport). The proxy operates on the default collection (/org/freedesktop/secrets/aliases/default) and identifies items by application and account attributes with type master-key-wrapped.

This module provides only the low-level D-Bus proxy. Business logic (KeyLocker trait, key hierarchy) lives in daemon-secrets.

Global Shortcuts Portal (org.freedesktop.portal.GlobalShortcuts)

GlobalShortcutsProxy provides compositor-agnostic global hotkey registration through xdg-desktop-portal. Supported on COSMIC, KDE Plasma 6.4+, and niri. The proxy supports create_session, bind_shortcuts, and list_shortcuts operations.

NetworkManager SSID Monitor

ssid_monitor() is a long-lived async task that monitors the active WiFi SSID via NetworkManager D-Bus signals on the system bus. It subscribes to the StateChanged signal on org.freedesktop.NetworkManager, re-reads the primary active connection’s SSID on each state change, and sends the SSID string through a tokio::sync::mpsc::Sender<String> when it changes.

The SSID reading traverses the NetworkManager object graph: primary connection -> connection type check (must be 802-11-wireless) -> device list -> active access point -> SSID byte array -> UTF-8 string.

This enables context-based profile activation (e.g., activate “work” profile when connected to the office WiFi).

COSMIC Key Injection

The cosmic_keys module (in cosmic_keys.rs) manages keybindings in COSMIC desktop’s shortcut configuration files:

  • ~/.config/cosmic/com.system76.CosmicSettings.Shortcuts/v1/custom – custom Spawn(...) bindings
  • ~/.config/cosmic/com.system76.CosmicSettings.Shortcuts/v1/system_actions – maps System(...) action variants to command strings

System Actions Override Strategy

For Alt+Tab integration, the module overrides system_actions rather than adding a competing Spawn(...) binding. COSMIC’s default keybindings map Alt+Tab to System(WindowSwitcher). Adding a parallel Spawn(...) binding would race with the default and leak the Alt modifier to applications. By overriding system_actions, the compositor’s own built-in Alt+Tab binding fires sesame, and the key event is consumed at compositor level before any application sees the Alt keypress.

The overrides point WindowSwitcher to sesame wm overlay and WindowSwitcherPrevious to sesame wm overlay --backward.

Injection Safety

All values written to RON configuration files are escaped through escape_ron_string(), which handles backslash and double-quote characters to prevent RON injection.

Configuration Files

The files are in RON (Rusty Object Notation) format. The compositor watches these files via cosmic_config::calloop::ConfigWatchSource and live-reloads on change – no logout is required.

Before writing, the module creates a .bak backup of the existing file. The setup_keybinding(launcher_key_combo) function:

  1. Overrides system_actions for WindowSwitcher/WindowSwitcherPrevious.
  2. Adds a custom Spawn(...) binding for the launcher key (e.g., alt+space).
  3. Adds a backward variant with Shift (e.g., alt+shift+space).

remove_keybinding() removes all sesame entries from both files. If system_actions becomes empty after removal, the file is deleted so COSMIC falls back to system defaults at /usr/share/cosmic/.

COSMIC Theme Integration

The cosmic_theme module (in cosmic_theme.rs) reads theme colors, fonts, corner radii, and dark/light mode from COSMIC’s RON configuration at ~/.config/cosmic/:

  • Theme mode: com.system76.CosmicTheme.Mode/v1/is_dark
  • Dark theme: com.system76.CosmicTheme.Dark/v1/
  • Light theme: com.system76.CosmicTheme.Light/v1/

CosmicTheme::load() reads the mode, selects the appropriate theme directory, and deserializes background, primary, secondary containers, accent colors, and corner radii from individual RON files. Returns None on non-COSMIC systems where these files do not exist.

The types (CosmicColor, ComponentColors, Container, AccentColors, CornerRadii) provide the theme data needed for overlay rendering. CosmicColor stores RGBA as 0.0-1.0 floats with a to_rgba() conversion to (u8, u8, u8, u8).

systemd Integration

The systemd module (in systemd.rs) provides three helpers using the sd-notify crate:

  • notify_ready() – sends READY=1 to systemd for Type=notify services. Preserves NOTIFY_SOCKET (does not unset it) so subsequent watchdog pings continue to work.
  • notify_watchdog() – sends a watchdog keepalive ping.
  • notify_status(status) – updates the daemon’s status string visible in systemctl status.

Adding a New Compositor Backend

To add support for a new compositor (e.g., GNOME/Mutter via org.gnome.Mutter.IdleMonitor, KDE/KWin via org.kde.KWin, or Hyprland IPC):

  1. Create backend_<name>.rs in platform-linux/src/ implementing the CompositorBackend trait.
  2. Add pub(crate) mod backend_<name>; to lib.rs, gated behind an appropriate feature flag.
  3. Add a match arm to detect_compositor() in compositor.rs. Place it in the detection order according to protocol specificity (more specific protocols first, generic fallbacks last).
  4. Add the feature flag to Cargo.toml with any new protocol dependencies.

The backend struct must be Send + Sync. Methods return BoxFuture for dyn-compatibility. For operations not supported by the target compositor’s protocols, return Error::Platform with a descriptive message.

macOS Platform

The platform-macos crate provides safe Rust abstractions over macOS-specific APIs consumed by the daemon crates. It contains no business logic. All modules are gated with #[cfg(target_os = "macos")]; on other platforms the crate compiles as an empty library with no exports.

Implementation Status

The crate is scaffolded with module declarations only. macOS implementations are deferred until the Linux platform is validated on Pop!_OS / COSMIC. The module structure, API boundaries, and dependency selections are defined. No functional code exists.

Dependencies

The Cargo.toml declares macOS-specific dependencies:

| Crate | Purpose |
|---|---|
| core-types | Shared types (Window, WindowId, Error, etc.) |
| security-framework | Keychain Services API (create/read/delete keychain items) |
| objc2 | Objective-C runtime bindings for Accessibility and AppKit APIs |
| core-foundation | CFString, CFDictionary, CFRunLoop interop |
| core-graphics | CGEventTap, CGEventPost for input monitoring and injection |
| serde | Serialization for configuration types |
| tokio | Async runtime integration |
| tracing | Structured logging |

Module Structure

accessibility

Window management via the Accessibility API (AXUIElement). This module will provide the macOS equivalent of the Linux compositor backends: window enumeration, activation, geometry manipulation, and close operations. On macOS, all window management goes through the Accessibility framework rather than compositor-specific protocols.

clipboard

Clipboard access via NSPasteboard. This module will provide read, write, and change-notification functionality equivalent to the Linux DataControl trait. macOS clipboard access does not require special permissions.

input

Input monitoring via CGEventTap (listen-only) and input injection via CGEventPost. Both operations require the Accessibility permission in TCC. The module will provide keyboard event observation equivalent to the Linux evdev module. Unlike Linux evdev, macOS input monitoring is global by default and does not require group membership – it requires a TCC permission grant instead.

keychain

Per-profile named keychains via the security-framework crate (Keychain Services API). This module will store wrapped key-encryption keys, equivalent to the Linux SecretServiceProxy in the dbus module. macOS uses per-user keychains rather than a D-Bus Secret Service.

launch_agent

LaunchAgent plist generation and launchctl lifecycle management. This is the macOS equivalent of systemd service units. The module will generate property list files for ~/Library/LaunchAgents/, register them with launchctl, and manage daemon lifecycle (start, stop, status). Unlike systemd’s Type=notify, LaunchAgents use process lifecycle for readiness signaling.

tcc

Transparency, Consent, and Control (TCC) permission state introspection. This module will query the TCC database to determine whether Accessibility and Input Monitoring permissions have been granted before attempting operations that require them. This allows the system to provide actionable error messages rather than silently failing.

Platform-Specific Considerations

Accessibility API vs. Wayland Protocols

On Linux, window management is mediated by compositor-specific Wayland protocols. On macOS, the Accessibility API (AXUIElement) provides a single, compositor-independent interface for window enumeration, activation, geometry, and close operations. The trade-off is that Accessibility access requires an explicit TCC permission grant from the user, and the API surface is significantly different from Wayland protocols.

TCC Permissions

macOS requires explicit user consent for two operations that Open Sesame uses:

  • Accessibility: Required for window management (AXUIElement) and input injection (CGEventPost).
  • Input Monitoring: Required for keyboard event observation (CGEventTap in listen-only mode).

These permissions cannot be granted programmatically. The application must be added to the relevant TCC lists in System Settings. The tcc module exists to detect permission state and guide the user through the grant process.

launchd vs. systemd

macOS uses launchd instead of systemd for daemon management. Key differences:

  • Readiness signaling: systemd supports Type=notify with sd_notify(READY=1). launchd uses process lifecycle – a LaunchAgent is considered ready when the process is running.
  • Watchdog: systemd supports WatchdogSec with periodic keepalive pings. launchd has KeepAlive which restarts crashed processes but does not support health-check pings.
  • Socket activation: systemd supports ListenStream for socket-activated services. launchd supports Sockets in the plist for equivalent functionality.
  • Configuration format: systemd uses INI-style unit files. launchd uses XML property lists in ~/Library/LaunchAgents/.
  • Dependency ordering: systemd supports After=, Requires=, Wants=. launchd has limited dependency support via WatchPaths and QueueDirectories.

Keychain vs. Secret Service

Linux uses the freedesktop Secret Service API (org.freedesktop.secrets) over D-Bus for credential storage. macOS uses the Keychain Services API directly. Both provide encrypted-at-rest storage scoped to the user session, but the API surfaces are entirely different. The keychain module will present the same logical operations (store, retrieve, delete, has) as the Linux SecretServiceProxy.

Windows Platform

The platform-windows crate provides safe Rust abstractions over Windows-specific APIs consumed by the daemon crates. It contains no business logic. All modules are gated with #[cfg(target_os = "windows")]; on other platforms the crate compiles as an empty library with no exports.

Implementation Status

The crate is scaffolded with module declarations only. Windows implementations are deferred until the Linux and macOS platforms are validated. The module structure, API boundaries, and dependency selections are defined. No functional code exists.

Dependencies

The Cargo.toml declares Windows-specific dependencies:

| Crate | Purpose |
|---|---|
| core-types | Shared types (Window, WindowId, Error, etc.) |
| windows | Official Microsoft Windows API bindings (Win32, COM, WinRT) |
| serde | Serialization for configuration types |
| tokio | Async runtime integration |
| tracing | Structured logging |

Module Structure

clipboard

Clipboard monitoring via AddClipboardFormatListener. This module will provide clipboard change notifications and read/write operations, equivalent to the Linux DataControl trait. Windows clipboard access uses the Win32 clipboard API and does not require elevated privileges.

credential

Credential storage via CryptProtectData (DPAPI) and CredRead/CredWrite (Credential Manager). This module will store wrapped key-encryption keys, equivalent to the Linux SecretServiceProxy in the dbus module. DPAPI provides user-scoped encryption tied to the Windows login credentials. The Credential Manager provides a higher-level API for named credentials visible in the Windows Credential Manager UI.

hotkey

Global hotkey registration via RegisterHotKey/UnregisterHotKey. This module will provide compositor-independent hotkey capture, equivalent to the Linux Global Shortcuts portal or COSMIC key injection. On Windows, global hotkeys are registered per-thread and deliver WM_HOTKEY messages to the registering thread’s message loop.

input_hook

Input capture via SetWindowsHookEx(WH_KEYBOARD_LL). This module will provide low-level keyboard monitoring equivalent to the Linux evdev module. Low-level keyboard hooks see all keyboard input system-wide. The crate documentation notes that EDR (Endpoint Detection and Response) disclosure is required – low-level keyboard hooks are flagged by security software and must be documented for enterprise deployment.

named_pipe

IPC bootstrap via Named Pipes. This is the Windows equivalent of Unix domain sockets used by the Noise IK IPC bus on Linux. Named Pipes provide the transport layer for inter-daemon communication on Windows. Security descriptors on the pipe control which processes can connect.

policy

Enterprise policy reading via Group Policy registry keys. This module will read HKLM\Software\Policies\OpenSesame\ for enterprise-managed configuration overrides. This has no direct Linux equivalent – the closest analog is /etc/pds/ system configuration, but Group Policy provides domain-joined management capabilities.

task_scheduler

Daemon autostart via Task Scheduler COM API. This is the Windows equivalent of systemd user services and macOS LaunchAgents. The module will create scheduled tasks that run at user logon to start the daemon processes.

ui_automation

Window management and enumeration via UI Automation COM API. This module provides the Windows equivalent of the Linux compositor backends. UI Automation exposes the desktop automation tree, allowing enumeration of all top-level windows, reading their properties (title, class, process), and performing actions (activate, minimize, close, move, resize).

virtual_desktop

Workspace management via the Virtual Desktop COM API. This module will provide workspace enumeration and window-to-desktop movement, equivalent to the Linux list_workspaces and move_to_workspace compositor operations. The Windows Virtual Desktop API is undocumented and version-fragile – COM interface GUIDs change between Windows 10 and Windows 11 builds.

Platform-Specific Considerations

UI Automation vs. Wayland Protocols

On Linux, window management uses compositor-specific Wayland protocols (wlr-foreign-toplevel, COSMIC toplevel). On Windows, UI Automation provides a single COM-based interface that works across all window managers. The trade-off is COM initialization complexity and the need to handle apartment threading models correctly (CoInitializeEx with COINIT_MULTITHREADED or COINIT_APARTMENTTHREADED).

Credential Manager vs. Secret Service

Linux uses the freedesktop Secret Service API over D-Bus. Windows uses DPAPI (CryptProtectData/CryptUnprotectData) for raw encryption tied to user credentials, and the Credential Manager API (CredRead/CredWrite) for named credential storage. Both provide user-scoped encrypted-at-rest storage, but the APIs are entirely different.

Task Scheduler vs. systemd

Windows uses the Task Scheduler for daemon autostart. Key differences from systemd:

  • Readiness signaling: systemd supports Type=notify. Task Scheduler has no equivalent; the task is considered running when the process starts.
  • Watchdog: systemd supports WatchdogSec. Task Scheduler can restart failed tasks but does not support health-check pings.
  • Dependencies: systemd supports After=, Requires=. Task Scheduler supports task dependencies but with a less expressive model.
  • Configuration: systemd uses INI-style unit files. Task Scheduler uses XML task definitions registered via COM or schtasks.exe.

Named Pipes vs. Unix Domain Sockets

The Noise IK IPC bus uses Unix domain sockets on Linux. On Windows, Named Pipes provide equivalent functionality with OS-level access control via security descriptors. Named Pipes support both byte-mode and message-mode communication; the IPC bus would use byte-mode to match the stream semantics of Unix domain sockets.

EDR Disclosure

Low-level keyboard hooks (WH_KEYBOARD_LL) and clipboard monitoring are flagged by Endpoint Detection and Response (EDR) software common in enterprise environments. Deployment in managed environments requires documentation of these behaviors and may require allowlist entries in the organization’s security tooling.

Changelog

The changelog is auto-generated from conventional commit messages and maintained in the repository root at CHANGELOG.md.

Each GitHub Release includes the relevant changelog section along with install instructions for the APT repository and direct .deb download links. Release assets include SHA256 checksums and SLSA Build Provenance attestations that can be verified with:

gh attestation verify "open-sesame-linux-$(uname -m).deb" --owner ScopeCreep-zip
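The SHA256 checksums can be verified the usual way. The asset name `SHA256SUMS` below is illustrative; check the release page for the actual checksum file name:

```shell
# Download the checksum file alongside the .deb, then verify.
# --ignore-missing skips entries for assets you did not download.
sha256sum -c SHA256SUMS --ignore-missing
```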

For the full version history, see the Releases page or the CHANGELOG.md file in the repository root.

License

Open Sesame is licensed under GPL-3.0-only (GNU General Public License, version 3, with no “or later” clause).

Why GPL-3.0

The cosmic-protocols crate, which provides Wayland protocol definitions for the COSMIC desktop compositor, is licensed under GPL-3.0-only. Because Open Sesame links against cosmic-protocols in the platform-linux and daemon-wm crates, the entire combined work must be distributed under GPL-3.0-only to satisfy the license terms.

License Text

The full license text is in the LICENSE file at the repository root. It is the standard GNU General Public License version 3 as published by the Free Software Foundation on 29 June 2007.

SPDX Identifier

All crate manifests declare license = "GPL-3.0-only" in their Cargo.toml workspace configuration, using the SPDX license identifier.
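In concrete terms, this is typically expressed with Cargo workspace inheritance — a sketch, with the crate name illustrative:

```toml
# Root Cargo.toml
[workspace.package]
license = "GPL-3.0-only"

# Each member crate's Cargo.toml inherits the field
[package]
name = "daemon-wm"
license.workspace = true
```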

Security Hardening Field Guide

A practical, encyclopedic reference for debugging and troubleshooting Linux security hardening across seccomp-bpf, Landlock, systemd sandboxing, and related tooling. Written for engineers hardening multi-daemon Linux applications.


1. Overview

Modern Linux application hardening is built on defense-in-depth: multiple independent security layers that each reduce the blast radius of a compromise. No single layer is sufficient. The three primary layers are:

| Layer | Scope | Enforced By |
|---|---|---|
| systemd sandboxing | Mount namespaces, resource limits, lifecycle | systemd (PID 1 / user manager) |
| Landlock | Filesystem access control | Kernel LSM, applied per-process |
| seccomp-bpf | Syscall filtering | Kernel, applied per-thread/process |

These layers compose because they operate at different abstraction levels:

  • systemd mount namespaces control what the process can see on the filesystem. A process inside ProtectSystem=strict literally cannot write to /usr because its mount namespace has a read-only bind mount.
  • Landlock controls what the process is allowed to access within the paths it can see. Even if systemd exposes a writable path, Landlock can restrict the process to specific subdirectories.
  • seccomp-bpf controls what the process is allowed to do at the syscall level. Even if a process can open a file, seccomp can block it from calling execve, ptrace, or mount.

A compromised daemon that escapes one layer still faces the others. This guide covers how to implement each layer correctly, the non-obvious failure modes, and how to debug them when things go wrong.


2. seccomp-bpf

2.1 How seccomp works

seccomp-bpf attaches a BPF program to a process (or thread) that intercepts every syscall before the kernel executes it. The BPF program inspects the syscall number and arguments, then returns a verdict.

Activation:

prctl(PR_SET_NO_NEW_PRIVS, 1, 0, 0, 0);  // required first
seccomp(SECCOMP_SET_MODE_FILTER, flags, &prog);

The SECCOMP_FILTER_FLAG_TSYNC flag is critical for multi-threaded programs: it synchronizes the filter to all threads in the thread group atomically. Without it, each thread must install the filter individually, creating a race window.

Action modes:

| Action | Behavior | Use Case |
|---|---|---|
| SECCOMP_RET_KILL_PROCESS | Kills the entire process with SIGSYS | Production: fail-closed, no zombie threads |
| SECCOMP_RET_KILL_THREAD | Kills only the offending thread | Dangerous with async runtimes (see 2.2) |
| SECCOMP_RET_ERRNO | Returns an errno to the caller | Graceful degradation, testable |
| SECCOMP_RET_LOG | Allows but logs via audit | Development/audit mode |

Choosing an action mode:

  • Use SECCOMP_RET_KILL_PROCESS in production. It is the safest default. A process that violates its seccomp policy is compromised and should die.
  • Use SECCOMP_RET_LOG during development to discover which syscalls your code actually needs without killing it.
  • Use SECCOMP_RET_ERRNO(EPERM) only when you have code that gracefully handles the error (e.g., optional features that degrade).
  • Avoid SECCOMP_RET_KILL_THREAD unless you fully understand the implications for your threading model. Read section 2.2.
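The SECCOMP_RET_ERRNO(EPERM) pattern can be sketched as follows: treat PermissionDenied from an optional operation as "feature unavailable" rather than a fatal error. The helper name is illustrative, not part of any real codebase:

```rust
use std::io;

/// Interpret the result of an optional feature probe under a
/// SECCOMP_RET_ERRNO(EPERM) policy: PermissionDenied means the
/// policy blocked the syscall, so the feature degrades gracefully.
fn probe_optional_feature(result: io::Result<()>) -> io::Result<bool> {
    match result {
        Ok(()) => Ok(true), // feature works
        // seccomp returned EPERM: disable the feature, keep running
        Err(e) if e.kind() == io::ErrorKind::PermissionDenied => Ok(false),
        Err(e) => Err(e), // a real failure, propagate
    }
}

fn main() {
    let blocked = io::Error::from(io::ErrorKind::PermissionDenied);
    assert_eq!(probe_optional_feature(Err(blocked)).unwrap(), false);
    assert!(probe_optional_feature(Ok(())).unwrap());
    println!("ok");
}
```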

2.2 KillThread + async runtimes (CRITICAL)

This is the single most dangerous failure mode in seccomp’d async applications.

When SECCOMP_RET_KILL_THREAD kills a thread in tokio’s (or async-std’s) blocking thread pool, the JoinHandle returned by spawn_blocking never resolves. The kernel destroys the thread. The channel that the runtime uses to send the result back is dropped without sending. The JoinHandle future polls forever.

The cascade:

  1. A spawn_blocking task calls a blocked syscall (e.g., ftruncate for SQLite WAL rollback).
  2. seccomp kills that thread with SIGSYS.
  3. The JoinHandle future never completes.
  4. The tokio::select! branch waiting on that handle blocks forever.
  5. The event loop freezes. No other futures make progress.
  6. The watchdog timer (if it ticks inside the same event loop) stops ticking.
  7. systemd’s WatchdogSec fires and kills the process.

This is silent. No logs. No crash. No panic. The process simply freezes and systemd eventually SIGKILLs it. Journalctl shows a watchdog timeout with no preceding error messages.

Design rule: Every spawn_blocking in a seccomp-filtered process MUST have a timeout wrapper:

#![allow(unused)]
fn main() {
use tokio::time::{timeout, Duration};

let result = timeout(
    Duration::from_secs(10),
    tokio::task::spawn_blocking(move || {
        // potentially blocked operation
        database.execute("PRAGMA wal_checkpoint(TRUNCATE)")
    }),
)
.await;

match result {
    Ok(Ok(Ok(rows))) => { /* success */ }
    Ok(Ok(Err(e))) => { /* database error */ }
    Ok(Err(e)) => { /* JoinError: thread panicked */ }
    Err(_) => {
        // TIMEOUT: likely seccomp killed the thread
        tracing::error!("spawn_blocking timed out -- possible seccomp kill");
        // Initiate graceful shutdown or restart
    }
}
}

This does not prevent the thread death, but it prevents the entire event loop from freezing and gives you a log line to debug.

2.3 SIGSYS signal handler

When seccomp blocks a syscall with SECCOMP_RET_TRAP, the kernel delivers a catchable SIGSYS to the offending thread, and a handler can log which syscall was blocked. Note that the KILL_THREAD and KILL_PROCESS actions terminate the thread or process without invoking signal handlers, so use TRAP in development builds when you need this logging.

Constraints: The signal handler runs in signal context. You must not allocate, lock mutexes, or call most libc functions. Use only async-signal-safe functions.

#![allow(unused)]
fn main() {
use libc::{
    c_int, c_void, sigaction, siginfo_t, SA_RESETHAND, SA_SIGINFO, SIGSYS,
};

unsafe extern "C" fn sigsys_handler(
    _sig: c_int,
    info: *mut siginfo_t,
    _ctx: *mut c_void,
) {
    // si_syscall contains the blocked syscall number
    let syscall = (*info).si_syscall();

    // Write directly to stderr (fd 2) -- no allocator, no buffering
    // Manual integer formatting in a stack buffer
    let mut buf = [0u8; 64];
    let prefix = b"seccomp: blocked syscall ";
    buf[..prefix.len()].copy_from_slice(prefix);
    let mut pos = prefix.len();

    // Convert syscall number to decimal digits
    if syscall == 0 {
        buf[pos] = b'0';
        pos += 1;
    } else {
        let mut n = syscall;
        let start = pos;
        while n > 0 {
            buf[pos] = b'0' + (n % 10) as u8;
            pos += 1;
            n /= 10;
        }
        buf[start..pos].reverse();
    }
    buf[pos] = b'\n';
    pos += 1;

    let _ = libc::write(2, buf.as_ptr() as *const c_void, pos);
}

pub fn install_sigsys_handler() {
    unsafe {
        let mut sa: sigaction = std::mem::zeroed();
        sa.sa_sigaction = sigsys_handler as usize;
        sa.sa_flags = SA_SIGINFO | SA_RESETHAND;
        sigaction(SIGSYS, &sa, std::ptr::null_mut());
    }
}
}

Important: Install the handler before applying the seccomp filter.

Why output may not appear in journalctl:

  • With KILL_THREAD, the thread dies but the process lives. The write to stderr may succeed, but if the process later freezes (see 2.2), journald may not flush the pipe buffer before systemd kills it.
  • With KILL_PROCESS, the write races against process teardown.
  • Use SA_RESETHAND so the handler fires once, then the default (kill) takes effect on the next violation.

2.4 Building seccomp allowlists with strace

The only reliable way to build a seccomp allowlist is to trace your application under real workloads.

Step 1: Trace all threads

# Attach to a running process
strace -f -o /tmp/trace.log -p $(pidof my-daemon)

# Or launch under strace
strace -f -o /tmp/trace.log -- ./my-daemon

The -f flag follows child threads and processes.

Step 2: Exercise ALL code paths

This is where most allowlists fail. You must exercise:

  • Startup and initialization
  • Normal operation (happy path)
  • Error paths (invalid input, network failure, disk full)
  • Shutdown (graceful and SIGTERM)
  • Database operations (open, read, write, WAL checkpoint, vacuum)
  • Config reload (inotify, file re-read)
  • IPC (socket creation, connection, message exchange)

Step 3: Extract unique syscalls

awk -F'(' '{print $1}' /tmp/trace.log \
  | sed 's/^[0-9]* *//' \
  | sort -u \
  > /tmp/syscalls.txt

Step 4: Diff against your allowlist

Compare the trace output against your current allowlist. Add any missing syscalls.
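With both files sorted (`sort -u` already guarantees this for the trace side), `comm` shows traced syscalls missing from the allowlist. The path `/tmp/allowlist.txt` stands in for wherever your allowlist lives:

```shell
# comm column 1 = lines only in the first file.
# -23 suppresses allowlist-only (-2) and common (-3) lines,
# leaving syscalls that were traced but are not yet allowed.
comm -23 /tmp/syscalls.txt /tmp/allowlist.txt
```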

Commonly missed syscalls:

| Syscall | Triggered By |
|---|---|
| ftruncate | SQLite WAL rollback/checkpoint |
| fsync | SQLite PRAGMA, checkpoint, journal |
| fdatasync | SQLite WAL writes |
| pwrite64 | SQLite WAL page writes |
| fallocate | SQLite pre-allocating journal/WAL space |
| readlink | Symlink resolution (common on NixOS) |
| inotify_init1 | File watcher initialization |
| inotify_add_watch | Watching config files for changes |
| inotify_rm_watch | Cleaning up file watches |
| statx | Modern stat replacement (glibc 2.28+) |
| getrandom | Cryptographic RNG, SQLCipher |
| clone3 | Modern thread creation (glibc 2.34+) |

2.5 Common pitfalls

fdatasync vs fsync: SQLite uses both. fdatasync for WAL writes (it only needs data, not metadata). fsync for PRAGMA operations and WAL checkpoints (it needs full metadata sync). Missing either one causes intermittent seccomp kills that only trigger under write load.

SQLite WAL mode syscall set: A complete SQLite WAL allowlist includes: openat, ftruncate, pwrite64, pread64, fallocate, rename, fsync, fdatasync, fcntl (for F_SETLK/F_GETLK advisory locking), fstat, lseek, unlink.
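Expressed as data, the set above can live as a named constant merged into each daemon's base allowlist before the filter is built. The names here are an illustrative sketch, not Open Sesame's actual API:

```rust
/// SQLite-in-WAL-mode syscall set, as listed above.
const SQLITE_WAL_SYSCALLS: &[&str] = &[
    "openat", "ftruncate", "pwrite64", "pread64", "fallocate", "rename",
    "fsync", "fdatasync", "fcntl", "fstat", "lseek", "unlink",
];

/// Merge a base allowlist with a feature-specific set, deduplicated,
/// ready to be translated into BPF rules by whatever builder you use.
fn build_allowlist(base: &[&'static str], extra: &[&'static str]) -> Vec<&'static str> {
    let mut all: Vec<&str> = base.iter().chain(extra).copied().collect();
    all.sort_unstable();
    all.dedup();
    all
}

fn main() {
    let list = build_allowlist(&["read", "write", "close"], SQLITE_WAL_SYSCALLS);
    // Missing either fsync or fdatasync causes kills under write load.
    assert!(list.contains(&"fsync") && list.contains(&"fdatasync"));
    println!("{} syscalls", list.len());
}
```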

D-Bus / zbus syscalls: If your daemon communicates over D-Bus (e.g., for desktop integration): socket, connect, sendmsg, recvmsg, geteuid (D-Bus auth), shutdown, getsockopt, setsockopt.

inotify file watchers: Any config hot-reload mechanism using inotify needs: inotify_init1, inotify_add_watch, inotify_rm_watch, read (for reading events from the inotify fd), epoll_ctl (if using epoll to watch the inotify fd).

SECCOMP_FILTER_FLAG_TSYNC timing: TSYNC applies the filter to all existing threads. If a file watcher thread was spawned before seccomp is applied, it gets the filter too. If that thread’s syscalls are not in the allowlist, it dies. Either:

  1. Apply seccomp before spawning any background threads, or
  2. Ensure the allowlist covers all threads’ syscalls, or
  3. Have background threads install their own filters before doing work.

3. Landlock

3.1 How Landlock works

Landlock is a Linux Security Module (LSM) that provides unprivileged, process-level filesystem access control. Unlike seccomp (which filters syscalls), Landlock filters filesystem operations on specific paths.

#![allow(unused)]
fn main() {
// Pseudocode for Landlock setup
let ruleset = Ruleset::default()
    .handle_access(AccessFs::from_all(abi_version))?
    .create()?;

// Grant read-only access to config directory
ruleset.add_rule(PathBeneath::new(
    File::open("/etc/myapp")?,
    AccessFs::ReadFile | AccessFs::ReadDir,
))?;

// Grant read-write access to runtime directory
ruleset.add_rule(PathBeneath::new(
    File::open("/run/user/1000/myapp")?,
    AccessFs::from_all(abi_version),
))?;

// Enforce -- no more rules can be added after this
ruleset.restrict_self()?;
}

Key properties:

  • Rules are additive: you start with no access and grant specific paths.
  • Rules are inherited: child processes inherit the restriction.
  • Rules are stackable: multiple Landlock rulesets compose (intersection).
  • Landlock requires no privileges – any process can restrict itself.

ABI versions (V1 through V6) add support for new access rights. Always query the running kernel’s supported version and degrade gracefully:

#![allow(unused)]
fn main() {
let abi = landlock::ABI::V3; // minimum supported
let actual = landlock::ABI::new_current().unwrap_or(abi);
}

3.2 Symlink resolution (NixOS/Guix)

Landlock grants access to the resolved path, not the symlink itself. This is a critical distinction on distributions that use symlink farms.

NixOS and Guix store all packages in /nix/store/ (or /gnu/store/) and symlink configuration files into place:

/etc/myapp/config.toml -> /nix/store/abc123-myapp-config/config.toml

If you grant Landlock access to /etc/myapp/, the process can open the symlink. But the target is in /nix/store/, which is not in the ruleset. The open fails with EACCES.

Solution: Canonicalize all config paths before building Landlock rules:

#![allow(unused)]
fn main() {
use std::fs;
use std::path::{Path, PathBuf};
use std::collections::HashSet;

fn resolve_landlock_paths(paths: &[&str]) -> HashSet<PathBuf> {
    let mut resolved = HashSet::new();
    for path in paths {
        let p = Path::new(path);
        if p.exists() {
            // Add the original path
            resolved.insert(p.to_path_buf());
            // Add the canonical (resolved) path
            if let Ok(canonical) = fs::canonicalize(p) {
                resolved.insert(canonical.clone());
                // Also add parent directories for traversal
                if let Some(parent) = canonical.parent() {
                    resolved.insert(parent.to_path_buf());
                }
            }
        }
    }
    resolved
}
}

Then add all resolved paths as read-only rules.

3.3 Common pitfalls

/dev/urandom blocked: SQLCipher and OpenSSL read from /dev/urandom for random bytes. If Landlock blocks /dev/urandom, they fall back to the getrandom() syscall, which bypasses the filesystem entirely. This usually works, but you may see EACCES errors in logs. Grant read access to /dev/urandom to silence them:

#![allow(unused)]
fn main() {
ruleset.add_rule(PathBeneath::new(
    File::open("/dev/urandom")?,
    AccessFs::ReadFile,
))?;
}

NOTIFY_SOCKET path: sd_notify() communicates with systemd via a Unix socket whose path is in $NOTIFY_SOCKET. This can be either:

  • Abstract socket (prefixed with @): Bypasses the filesystem entirely. Landlock does not apply. No rule needed.
  • Filesystem socket (e.g., /run/user/1000/systemd/notify): Landlock must allow write access to this path, or sd_notify() silently fails.

Check before adding rules:

#![allow(unused)]
fn main() {
if let Ok(sock) = std::env::var("NOTIFY_SOCKET") {
    if !sock.starts_with('@') {
        // Filesystem socket -- add to Landlock rules
        let sock_path = Path::new(&sock);
        if let Some(parent) = sock_path.parent() {
            ruleset.add_rule(PathBeneath::new(
                File::open(parent)?,
                AccessFs::WriteFile,
            ))?;
        }
    }
}
}

Abstract sockets bypass Landlock entirely: Any Unix domain socket with an abstract address (beginning with a null byte, shown as @ in ss output) is not subject to Landlock filesystem rules. This is by design – abstract sockets live in the network namespace, not the filesystem. If you need to restrict abstract socket access, use seccomp to filter connect/bind with argument inspection, or use network namespaces.


4. systemd Sandboxing

4.1 Mount namespaces

systemd can create per-service mount namespaces that restrict the filesystem view. This is the outermost sandbox layer.

Key directives for [Service] sections:

[Service]
# Read-only root filesystem (bind mount overlays)
ProtectSystem=strict

# User home directory is read-only
ProtectHome=read-only

# Specific writable paths (bind-mounted into the namespace)
ReadWritePaths=/run/user/%U/myapp %h/.local/share/myapp

# Restrict /proc, /sys, kernel tunables
ProtectProc=invisible
ProtectKernelTunables=yes
ProtectKernelModules=yes
ProtectControlGroups=yes

# Private /tmp
PrivateTmp=yes

# No new privileges (required for seccomp)
NoNewPrivileges=yes

# Restrict capabilities
CapabilityBoundingSet=
AmbientCapabilities=

Critical requirement: Every path listed in ReadWritePaths= must exist on the host before the service starts. If the directory does not exist, systemd cannot create the bind mount, and the service fails with exit status 226/NAMESPACE.

This is the most common systemd sandbox failure mode.

4.2 tmpfiles.d for directory pre-creation

The chicken-and-egg problem: your daemon creates its directories on first run, but systemd’s mount namespace fails if those directories do not already exist.

Solution: Use systemd-tmpfiles to create directories at user session login, before any service starts.

For NixOS (in your system or home-manager configuration):

systemd.user.tmpfiles.rules = [
  "d %t/myapp        0700 - - -"    # /run/user/UID/myapp
  "d %h/.config/myapp 0700 - - -"   # ~/.config/myapp
  "d %h/.local/share/myapp 0700 - - -"
];

For other distributions, create ~/.config/systemd/user/tmpfiles.d/myapp.conf:

# Type  Path                      Mode  User  Group  Age
d       %t/myapp                  0700  -     -      -
d       %h/.config/myapp          0700  -     -      -
d       %h/.local/share/myapp     0700  -     -      -

Specifiers: %t = $XDG_RUNTIME_DIR, %h = $HOME, %U = numeric UID.

For wipe/reinitialize flows (e.g., factory reset, test harness):

# Recreate directories after wiping
rm -rf ~/.local/share/myapp
systemd-tmpfiles --user --create
systemctl --user restart myapp.service

Defense-in-depth: The application should also create its directories on startup (a bootstrap_dirs() function) so it works on platforms without systemd (containers, macOS, BSDs). tmpfiles.d is the systemd-specific layer; application bootstrap is the portable layer.
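A minimal portable bootstrap might look like the following sketch. Directory names follow the myapp examples above; real code would consult the XDG base-directory environment variables rather than hardcoding paths:

```rust
use std::fs;
use std::path::PathBuf;

/// Portable fallback: create the daemon's directories with 0700 perms,
/// mirroring the tmpfiles.d rules for platforms without systemd.
fn bootstrap_dirs() -> std::io::Result<Vec<PathBuf>> {
    // Prefer $HOME; fall back to the temp dir so the sketch runs anywhere.
    let base = std::env::var("HOME")
        .map(PathBuf::from)
        .unwrap_or_else(|_| std::env::temp_dir());
    let dirs = vec![
        base.join(".config/myapp"),
        base.join(".local/share/myapp"),
    ];
    for d in &dirs {
        fs::create_dir_all(d)?;
        // Private permissions, matching the 0700 in the tmpfiles.d rules.
        #[cfg(unix)]
        {
            use std::os::unix::fs::PermissionsExt;
            fs::set_permissions(d, fs::Permissions::from_mode(0o700))?;
        }
    }
    Ok(dirs)
}

fn main() {
    let dirs = bootstrap_dirs().expect("bootstrap failed");
    assert!(dirs.iter().all(|d| d.is_dir()));
    println!("created {} dirs", dirs.len());
}
```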

4.3 Service type alignment

Type=notify: The daemon signals readiness by calling sd_notify("READY=1"). systemd waits for this signal before marking the service as active.

#![allow(unused)]
fn main() {
// Using the sd-notify crate or raw socket write
sd_notify::notify(false, &[sd_notify::NotifyState::Ready])?;
}

If you set Type=simple but your daemon calls sd_notify, systemd ignores the notification silently. The service is marked active immediately on exec. This is not an error – it just means your readiness signal does nothing.

WatchdogSec=: The daemon must call sd_notify("WATCHDOG=1") at least every WatchdogSec / 2 interval. If the event loop freezes (e.g., due to seccomp killing a thread – see 2.2), the watchdog fires and systemd restarts the service.

#![allow(unused)]
fn main() {
// Tick the watchdog inside the main event loop
loop {
    tokio::select! {
        msg = ipc_rx.recv() => { handle_message(msg).await; }
        _ = watchdog_interval.tick() => {
            sd_notify::notify(false, &[sd_notify::NotifyState::Watchdog])?;
        }
    }
}
}

Place the watchdog tick in the event loop, not in a separate thread. A separate thread will keep ticking even when the event loop is frozen, defeating the purpose.
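systemd exports the configured timeout to the service as WATCHDOG_USEC (in microseconds). A small sketch of deriving the tick interval from it; the helper name and the fallback value are illustrative:

```rust
use std::time::Duration;

/// Derive the watchdog tick interval from systemd's WATCHDOG_USEC value
/// (set by systemd when WatchdogSec= is configured). Returns None when
/// unset or unparsable, e.g. when running outside systemd.
fn watchdog_interval(watchdog_usec: Option<&str>) -> Option<Duration> {
    let usec: u64 = watchdog_usec?.parse().ok()?;
    // Tick at half the timeout so a single missed tick is survivable.
    Some(Duration::from_micros(usec / 2))
}

fn main() {
    // WatchdogSec=30s => WATCHDOG_USEC=30000000 => tick every 15s.
    let env = std::env::var("WATCHDOG_USEC").ok();
    let interval = watchdog_interval(env.as_deref())
        .unwrap_or(Duration::from_secs(15));
    println!("ticking every {:?}", interval);
}
```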

TimeoutStopSec=: How long systemd waits after sending SIGTERM before sending SIGKILL. Set this to give your daemon time for graceful shutdown (flush databases, close connections), but not so long that a hung daemon blocks restarts.

TimeoutStopSec=10s

4.4 Common pitfalls

RuntimeDirectory= with ProtectSystem=strict: For user services, RuntimeDirectory=myapp creates /run/user/UID/myapp inside the mount namespace. This directory is only visible to that specific service instance. Other services in the same user session cannot see it. If you need a shared runtime directory, use ReadWritePaths= with a directory created by tmpfiles.d.

PrivateNetwork=yes and Unix sockets: PrivateNetwork=yes creates a new network namespace with only a loopback interface. TCP/UDP connections to external hosts are blocked. However, Unix domain sockets on the filesystem are unaffected – they are filesystem operations, not network operations. This means IPC over Unix sockets works fine with PrivateNetwork=yes, which is usually what you want for a daemon that only communicates via local IPC.

sd_notify silently succeeds when NOTIFY_SOCKET is unset: When running outside systemd (e.g., in a terminal for debugging), $NOTIFY_SOCKET is not set. The sd_notify() call returns success without doing anything. Add diagnostic logging so you know whether notifications are actually being delivered:

#![allow(unused)]
fn main() {
if std::env::var("NOTIFY_SOCKET").is_ok() {
    tracing::info!("systemd notify socket available");
    sd_notify::notify(false, &[sd_notify::NotifyState::Ready])?;
} else {
    tracing::warn!("NOTIFY_SOCKET not set -- sd_notify disabled");
}
}

5. Debugging Toolkit

5.1 strace

strace is the single most valuable tool for debugging seccomp and Landlock issues.

Trace all threads of a running process:

strace -f -o /tmp/trace.log -p $(pidof my-daemon)

Filter out noisy syscalls:

strace -f -e trace='!read,write,close,epoll_wait,futex,nanosleep' \
  -o /tmp/trace.log -p $(pidof my-daemon)

Find seccomp kills:

grep "killed by SIGSYS" /tmp/trace.log

The last syscall logged for that thread (immediately before the +++ killed by SIGSYS +++ line) is the blocked syscall. Example:

[pid 12345] ftruncate(7, 0)     = ?
[pid 12345] +++ killed by SIGSYS (core dumped) +++

This tells you ftruncate is missing from the allowlist.

Trace all daemons simultaneously for comprehensive coverage:

for pid in $(pgrep -f 'my-daemon'); do
    strace -f -o /tmp/trace-${pid}.log -p $pid &
done
# Exercise all code paths, then kill strace processes

5.2 journalctl

View logs for a user service:

journalctl --user -u my-daemon.service --no-pager -o short-precise

Key exit status codes:

| Status | Meaning | Likely Cause |
|---|---|---|
| 226/NAMESPACE | Mount namespace setup failed | ReadWritePaths directory does not exist |
| 31/SYS | Killed by signal 31 (SIGSYS) | seccomp blocked a syscall |
| 6/ABRT | Aborted | Watchdog timeout, assertion failure, or panic |
| -1/WATCHDOG | Watchdog timeout | Event loop frozen (see 2.2) |

Watch in real time:

journalctl --user -u my-daemon.service -f -o short-precise

5.3 systemctl

Check service status:

systemctl --user status my-daemon.service

Look for: Active: (running/failed/inactive), Main PID:, exit code/status.

Clear failed state: After a service fails, systemd remembers the failure. You must reset it before restarting:

systemctl --user reset-failed my-daemon.service
systemctl --user start my-daemon.service

Recreate tmpfiles.d directories:

systemd-tmpfiles --user --create

This is idempotent – safe to run anytime.

5.4 Diagnostic patterns

“No such file or directory” + status=226: The service’s ReadWritePaths or ReadOnlyPaths references a directory that does not exist on the host filesystem. systemd cannot create the bind mount into the namespace.

Fix: Ensure tmpfiles.d rules create all required directories. Run systemd-tmpfiles --user --create and retry.

Watchdog timeout with no error logs: The event loop is frozen. The most common cause is seccomp KILL_THREAD silently destroying a thread that tokio::select! is waiting on (see 2.2).

Debug: Attach strace to the process, exercise the code path that triggers the freeze, look for SIGSYS kills. Add timeout wrappers to spawn_blocking calls to regain visibility.

“database is locked” after timeout: A spawn_blocking thread was killed by seccomp while holding an fcntl advisory lock on a SQLite file. The lock was not released because the thread died without running destructors. The file descriptor may still be open (held by the process, not the thread).

Fix: Add the missing syscall to the allowlist. If the database is stuck, restart the process (the lock is released when the fd is closed on process exit). For robustness, set PRAGMA busy_timeout so SQLite retries instead of immediately returning SQLITE_BUSY.

Silent timeout from CLI (e.g., 5 seconds, no response): The daemon received the IPC message but froze during processing. The CLI’s request timeout fires. This is the user-visible symptom of the event loop freeze described above.

Debug: Check if the daemon process is still running (ps aux | grep daemon). If it is running but not responding, it is frozen. Attach strace.


6. Defense-in-Depth Architecture

The two-tier model:

                    +---------------------------+
                    |      systemd (outer)      |
                    |  Mount namespaces         |
                    |  Resource limits          |
                    |  (LimitNOFILE, MemoryMax) |
                    |  Watchdog lifecycle       |
                    |  ProtectSystem,           |
                    |  ProtectHome              |
                    +-------------+-------------+
                                  |
                    +-------------v-------------+
                    |    Application (inner)    |
                    |  Landlock filesystem ACL  |
                    |  seccomp-bpf syscall      |
                    |  filter                   |
                    |  setrlimit                |
                    |  (NOFILE, MEMLOCK)        |
                    |  Directory bootstrap      |
                    +---------------------------+

systemd owns:

  • Process lifecycle (start, stop, restart, watchdog)
  • Outer filesystem isolation (mount namespaces)
  • Resource limits that survive application bugs (MemoryMax, TasksMax)
  • Compliance posture (auditors can inspect unit files)

The application owns:

  • Inner filesystem isolation (Landlock – more granular than mount namespaces)
  • Syscall filtering (seccomp – systemd’s SystemCallFilter is a convenience wrapper, but application-level gives more control)
  • Resource self-limits (setrlimit – defense against fd leaks, memory leaks)
  • Directory bootstrapping (portable across platforms)

Both layers are required:

  • systemd provides the compliance and lifecycle layer. Auditors and distribution packagers can review unit files without reading application code.
  • Landlock and seccomp provide the defense-in-depth layer. They protect against vulnerabilities within the application itself.
  • The application must work on non-systemd platforms (containers, macOS, embedded Linux). Landlock and seccomp are Linux-specific but do not require systemd. The application’s bootstrap code handles the portable case.
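At startup, the application can detect which tier is present and log accordingly. The INVOCATION_ID heuristic below is an assumption for illustration (systemd sets it for services it starts), not an Open Sesame API:

```rust
/// Heuristic: systemd sets INVOCATION_ID in the environment of services
/// it starts. Assumption for illustration only.
fn running_under_systemd() -> bool {
    std::env::var_os("INVOCATION_ID").is_some()
}

fn main() {
    if running_under_systemd() {
        // Outer tier present: unit file owns namespaces, limits, watchdog.
        println!("systemd manager detected");
    } else {
        // Portable path: application-level Landlock/seccomp/bootstrap only.
        println!("standalone: applying in-process hardening only");
    }
}
```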

7. Checklist: Hardening a New Daemon

Use this checklist when adding security hardening to a new daemon. Each item addresses a specific failure mode described in this guide.

systemd unit file

  • Type=notify with sd_notify("READY=1") in application code
  • WatchdogSec=30s (adjust to your heartbeat interval)
  • TimeoutStopSec=10s (enough for graceful shutdown)
  • Restart=on-failure, RestartSec=2s
  • ProtectSystem=strict
  • ProtectHome=read-only
  • ReadWritePaths= for every writable directory
  • NoNewPrivileges=yes
  • PrivateTmp=yes
  • tmpfiles.d rules for every directory in ReadWritePaths

Application bootstrap

  • bootstrap_dirs() creates all required directories (portable fallback)
  • setrlimit(RLIMIT_NOFILE, ...) to cap file descriptors
  • setrlimit(RLIMIT_MEMLOCK, ...) if using mlock for secrets

Landlock

  • Grant ReadWrite to runtime directory ($XDG_RUNTIME_DIR/myapp)
  • Grant ReadOnly to config directory ($XDG_CONFIG_HOME/myapp)
  • Grant ReadWrite to data directory ($XDG_DATA_HOME/myapp)
  • Canonicalize all paths to resolve symlinks (NixOS/Guix)
  • Grant ReadOnly to /dev/urandom if using crypto
  • Check $NOTIFY_SOCKET – if filesystem path, grant write access
  • Test on NixOS or with symlinked configs

seccomp allowlist

  • Trace with strace -f under ALL code paths
  • Include fsync AND fdatasync (SQLite uses both)
  • Include inotify_init1, inotify_add_watch, inotify_rm_watch if using file watchers
  • Include readlink, readlinkat if paths may be symlinks
  • Include getrandom for crypto operations
  • Include clone3 if targeting glibc >= 2.34
  • Use SECCOMP_RET_KILL_PROCESS (not KILL_THREAD) in production
  • Use SECCOMP_FILTER_FLAG_TSYNC for multi-threaded programs

Defensive timeouts

  • Every spawn_blocking wrapped with tokio::time::timeout
  • Timeout duration is shorter than WatchdogSec / 2
  • Timeout fires a log message identifying the blocked operation

SIGSYS handler

  • Installed before seccomp filter is applied
  • Uses only async-signal-safe functions (raw write to fd 2)
  • Logs the blocked syscall number
  • Uses SA_RESETHAND to avoid infinite handler loops

Watchdog

  • Ticks inside the main event loop (tokio::select! branch)
  • Does NOT tick in a separate thread
  • Interval is WatchdogSec / 2 or less

Testing

  • Wipe all state directories and recreate from scratch
  • Start all daemons – verify no 226/NAMESPACE errors
  • Exercise all features under normal operation
  • Trigger error paths (bad input, network down, disk full)
  • Verify watchdog ticks appear in journal
  • Verify graceful shutdown completes within TimeoutStopSec
  • Run full test cycle twice (catches state leaks from first run)

References