Knowledge Edge Map

Date: 2026-02-12
Method: Socratic interview against vault contents


How to Read This

Each domain is rated on a spectrum:

  • Core: You can teach this. Deep, connected, battle-tested.
  • Working: You use this. Practical knowledge, some gaps in theory.
  • Conceptual: You know about this. Right intuitions, missing mechanics.
  • Peripheral: Awareness only. Could look it up but can’t reason from it.
  • Absent: Not in your model at all.

Domain Map

DevOps & Delivery — Core

Solid ground:

  • DORA metrics, Accelerate framework, 24 capabilities model
  • Team Topologies (all 4 team types, cognitive load, Conway’s Law)
  • Continuous delivery philosophy (batch size, Little’s Law, flow)
  • Platform engineering as product thinking

Edges found:

  • DORA failure modes: You identified ops/support teams not fitting and Goodhart’s Law (gaming metrics). The remaining gap: DORA captures delivery but not customer impact — elite DORA metrics can coexist with shipping the wrong thing.
  • Queueing theory is intuitive but not mathematical for you. The relationship between utilization and wait time is hyperbolic, not linear-then-exponential: for an M/M/1 queue, wait = ρ/(1−ρ) × service time, so at 90% utilization the wait is 9× the service time. Your calculus background means this would click fast — look up Kingman’s formula.
  • Regulated industries: acknowledged blind spot. Team Topologies’ autonomy assumptions break down when compliance forces coupling.

Recommended bridge: One session connecting Little’s Law → Kingman’s formula → back-pressure in distributed systems. You have all the pieces; they’re just not wired together.
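To make that bridge concrete, here is the utilization math in runnable form: a sketch of Kingman’s VUT approximation for a G/G/1 queue, which reduces to the M/M/1 result when both squared coefficients of variation are 1. The function name and parameters are illustrative, not from the vault.

```python
def kingman_wait(utilization, service_time, ca2=1.0, cs2=1.0):
    """Kingman's (VUT) approximation for mean queue wait in a G/G/1 queue.

    utilization:  rho, fraction of server capacity in use (0 < rho < 1)
    service_time: mean time to process one item
    ca2, cs2:     squared coefficients of variation of interarrival and
                  service times (1.0 for both recovers the M/M/1 result)
    """
    v = (ca2 + cs2) / 2                  # Variability term
    u = utilization / (1 - utilization)  # Utilization term: the hyperbola
    return v * u * service_time          # V x U x T

# The hyperbolic blow-up: wait as a multiple of a 1-unit service time
for rho in (0.5, 0.8, 0.9, 0.95, 0.99):
    print(rho, kingman_wait(rho, 1.0))  # at 0.9, the wait is 9x service time
```

Note that halving variability (ca2, cs2) halves the wait at any utilization — which is why batch-size reduction and steady arrival rates matter as much as adding capacity.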


Security — Core (Supply Chain) / Peripheral (Everything Else)

Solid ground:

  • Supply chain security concepts (SLSA, SBOM, GUAC, Sigstore, OpenSSF)
  • Shift-left philosophy

Edges found:

  • SLSA knowledge has atrophied — you acknowledged it’s been a minute. The concepts are in your notes but not in working memory.
  • CVE/CWE/CPE taxonomy is not internalized. CVE = specific vulnerability instance, CWE = class of bug, CPE = affected product identifier. Worth memorizing given your supply chain adjacency.
  • Runtime security is intuitive but not practiced. You correctly identified network isolation, auth layers, JIT creds, behavior detection. You didn’t name specific tools (Falco, seccomp, AppArmor/SELinux). The “weirdo OS-level process attestation” you half-remembered is real — seccomp profiles, and at the hardware level, Intel SGX.
  • Build-time vs. runtime gap: Your security mental model stops at the artifact. What happens after deploy is conceptual for you, not operational.

Recommended bridge: Install Falco on a test cluster. Your eBPF awareness means the concepts will land fast — you just need hands-on.
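For a feel of the runtime-security surface beneath Falco, here is a minimal Docker-style seccomp profile that denies every syscall except an explicit allowlist. The fragment is illustrative only — a real workload needs a much longer allowlist, and this one was written by hand, not generated by any tool:

```json
{
  "defaultAction": "SCMP_ACT_ERRNO",
  "architectures": ["SCMP_ARCH_X86_64"],
  "syscalls": [
    {
      "names": ["read", "write", "openat", "close", "fstat",
                "brk", "mmap", "munmap", "exit_group"],
      "action": "SCMP_ACT_ALLOW"
    }
  ]
}
```

This is the “weirdo OS-level process attestation” in its simplest form: the kernel rejects any syscall outside the declared behavior of the process.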


System Design & Distributed Systems — Working/Conceptual

Solid ground:

  • CAP theorem (basic), saga pattern, C4 model
  • Preference for orchestration over choreography (with good practical reasoning)

Edges found:

  • CAP is too coarse and you don’t know that yet. “Consistency” in CAP means specifically linearizability — one very strong guarantee among many. Kleppmann (whose book you’ve read) advocates PACELC: even without partitions, you trade latency vs. consistency. You’re probably making this tradeoff daily without framing it.
  • Saga compensating transactions: Known but didn’t surface in interview. No gap here.
  • Back-pressure is your biggest conceptual gap in this domain. You named reactive strategies (errors, circuit breakers) but missed proactive ones: buffering/queuing, rate limiting, load shedding, pull-based consumption, horizontal scaling. Most importantly: you have queueing theory in one part of your brain and circuit breakers in another, but they’re not connected. WIP limits ARE a back-pressure mechanism. Little’s Law formalizes what back-pressure solves. Build the bridge.

Recommended bridge: Read about reactive streams / pull-based consumption patterns. Then revisit Little’s Law and notice it’s the same thing described mathematically.
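One way to wire the connection immediately: a bounded queue is simultaneously a WIP limit and a back-pressure mechanism. A minimal Python sketch (toy example, not from your notes):

```python
import queue
import threading
import time

# A bounded queue IS a WIP limit: when it fills, put() blocks, and the
# producer slows to the consumer's pace -- back-pressure, not errors.
jobs = queue.Queue(maxsize=5)   # WIP limit of 5
done = []

def consumer():
    while True:
        item = jobs.get()
        if item is None:        # sentinel: shut down
            return
        time.sleep(0.01)        # simulate slow downstream work
        done.append(item)
        jobs.task_done()

t = threading.Thread(target=consumer)
t.start()
for i in range(20):
    jobs.put(i)                 # blocks whenever 5 items are in flight
jobs.put(None)
t.join()
print(len(done))                # all 20 processed; never more than 5 queued
```

No circuit breaker fires and nothing errors — the system simply refuses to accept work faster than it can drain it, which is exactly what Little’s Law says a stable system must do.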


AI/ML — Peripheral

Solid ground:

  • Transformer architecture awareness (self-attention, parallelization)
  • RLHF concept
  • Recent RAG awareness

Edges found:

  • Math → ML bridge is unbuilt. This is your single highest-leverage gap. Gradient descent = walk downhill using partial derivatives (you know these). Forward pass = matrix multiplication + nonlinear activation (you know linear algebra). Backpropagation = the chain rule (you know this). You have ~90% of the prerequisites sitting unused. One focused session would give you a fundamentally different understanding of ML than “stochastic parrots.”
  • ML model provenance: You correctly intuited Merkle trees of training data CIDs but only covered one layer. A backdoored training script with identical data produces a compromised model. Full provenance = data + code + hyperparameters + environment + weights. That’s SLSA attestations applied to ML pipelines. You were closer than you realized.
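The full-provenance idea can be sketched in a few lines — a toy digest, not the actual SLSA attestation format, and every field name below is illustrative:

```python
import hashlib
import json

def provenance_digest(data_cids, code, hyperparams, environment, weights_hash):
    """Hash every layer of an ML pipeline into one attestable digest.

    Changing ANY input -- including the training script -- changes the
    digest, which is why data CIDs alone are not enough.
    """
    record = {
        "data": sorted(data_cids),   # content hashes of the training data
        "code": hashlib.sha256(code.encode()).hexdigest(),
        "hyperparams": json.dumps(hyperparams, sort_keys=True),
        "environment": environment,
        "weights": weights_hash,
    }
    canonical = json.dumps(record, sort_keys=True).encode()
    return hashlib.sha256(canonical).hexdigest()

honest = provenance_digest(
    ["cid1", "cid2"], "train.py v1", {"lr": 0.01}, "py3.12", "w-abc")
backdoored = provenance_digest(
    ["cid1", "cid2"], "train.py v1 + backdoor", {"lr": 0.01}, "py3.12", "w-abc")
assert honest != backdoored  # same data, different script: different provenance
```
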

Recommended bridge: Work through one gradient descent example by hand using your calculus knowledge. Compute the partial derivative of a loss function, update weights, repeat. The entire mystique of ML evaporates when you see it’s optimization you already understand.
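Here is what that hand-worked session looks like in code — a minimal sketch fitting a one-parameter line by gradient descent, with made-up data and learning rate:

```python
# Gradient descent on a one-parameter least-squares fit y ~ w*x:
# loss L(w) = mean((w*x_i - y_i)^2);  dL/dw = mean(2*x_i*(w*x_i - y_i)).
xs = [1.0, 2.0, 3.0]
ys = [2.0, 4.0, 6.0]       # generated by the true slope w = 2

def loss(w):
    return sum((w * x - y) ** 2 for x, y in zip(xs, ys)) / len(xs)

def grad(w):
    # the partial derivative you already know how to take
    return sum(2 * x * (w * x - y) for x, y in zip(xs, ys)) / len(xs)

w = 0.0                    # start from a bad guess
for step in range(100):
    w -= 0.05 * grad(w)    # walk downhill along the derivative

print(round(w, 4))         # converges to the true slope, 2.0
```

Backpropagation is this same loop with the chain rule applied layer by layer, and “training” a transformer is this loop run over billions of parameters.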


Mathematics & Statistics — Core (Frequentist) / Absent (Bayesian)

Solid ground:

  • Calculus (derivatives, integration, FTC)
  • Linear algebra (matrix ops, transformations, Gaussian elimination)
  • Frequentist statistics (hypothesis testing, confidence intervals, CLT)

Edges found:

  • Bayesian statistics is absent. You acknowledged this cleanly. The one-sentence bridge: Bayesian methods let you update beliefs with evidence incrementally, which means you can stop A/B tests early when you have enough signal instead of waiting for a fixed sample size.
  • Causal inference not covered — difference-in-differences, instrumental variables. This matters when you can’t run controlled experiments (most real business decisions).
  • The math you know doesn’t connect to the systems you build. Queueing theory, information theory, and optimization all bridge math → engineering. The queueing theory connection is the most immediate win.
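The one-sentence Bayesian bridge above can be made concrete in code: a toy Beta-Binomial A/B comparison with made-up conversion counts, where Monte Carlo sampling over the two posteriors estimates P(B beats A):

```python
import random

# Beta-Binomial updating: start with a Beta(1, 1) prior over each variant's
# conversion rate; each success adds 1 to alpha, each failure adds 1 to beta.
# P(B beats A) is estimated by sampling both posteriors; you can stop the
# test whenever that probability crosses your threshold -- no fixed n.
random.seed(0)

a_alpha, a_beta = 1 + 120, 1 + 880   # variant A: 120 conversions / 1000
b_alpha, b_beta = 1 + 150, 1 + 850   # variant B: 150 conversions / 1000

samples = 10_000
wins = sum(
    random.betavariate(b_alpha, b_beta) > random.betavariate(a_alpha, a_beta)
    for _ in range(samples)
)
p_b_beats_a = wins / samples
print(p_b_beats_a)   # high probability B is better: enough signal to stop
```

Contrast with the frequentist workflow you know: no p-value, no pre-registered sample size — just a belief that sharpens as data arrives.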

P2P & Decentralized Web — Working

Solid ground:

  • IPFS architecture (CIDs, Bitswap, DHT, libp2p, performance concerns)
  • DID concept and resolution
  • Noosphere project understanding

Edges found:

  • DIDs: missed key rotation. The killer feature of DIDs over raw public keys isn’t multi-key-type support — it’s that you can rotate keys while keeping the same identifier. If a raw public key is compromised, your identity is dead. DIDs solve this through indirection.
  • Local-first hard problems: You correctly identified product-level challenges (network effects, telemetry, integration access). You missed the hard technical problems: schema evolution (how to migrate when you can’t run migrations on everyone’s device), access revocation (how to un-share replicated data), and CRDT tombstone accumulation.
  • SSB vs DHT tradeoff: You had the right intuition but the precise framing is global-but-trustless (DHT) vs. local-but-trusted (SSB gossip). DHT gives global addressability with no trust model; SSB scopes replication to your social graph, giving offline-first and implicit trust at the cost of discoverability.
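The key-rotation indirection is visible in the DID document itself. A hand-written sketch (not a resolvable DID; the key value is a placeholder):

```json
{
  "id": "did:example:alice",
  "verificationMethod": [
    {
      "id": "did:example:alice#key-2",
      "type": "Ed25519VerificationKey2020",
      "controller": "did:example:alice",
      "publicKeyMultibase": "z6MkPlaceholderNewKey"
    }
  ],
  "authentication": ["did:example:alice#key-2"]
}
```

After rotation, `did:example:alice` is unchanged — only the `verificationMethod` entry was swapped from `#key-1` to `#key-2`. Anyone who stored the identifier, rather than the raw key, keeps a valid reference.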

Observability & SRE — Working

Solid ground:

  • OpenTelemetry four signal types (traces, metrics, logs, profiling)
  • Practical alerting principles (no-action = noisy, time horizon matching)
  • Error budget concept

Edges found:

  • Alert calibration is manual for you. You said “review dashboards regularly.” The rigorous approach: track alert-to-incident ratio (precision) and incident-without-alert rate (recall). Your stats background makes this framing natural.
  • Error budgets: Known more fully than surfaced in interview. No significant gap here.
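The precision/recall framing for alerting can be made concrete with a few lines of Python — the counts below are illustrative, not real paging data:

```python
def alert_quality(alerts_fired, alerts_with_incident, incidents_total):
    """Precision/recall framing for alert calibration.

    precision: fraction of fired alerts that matched a real incident
    recall:    fraction of incidents that were caught by an alert
    """
    precision = alerts_with_incident / alerts_fired
    recall = alerts_with_incident / incidents_total
    return precision, recall

# e.g. 40 pages last quarter, 10 matched real incidents, 12 incidents total
p, r = alert_quality(alerts_fired=40, alerts_with_incident=10,
                     incidents_total=12)
print(f"precision={p:.2f} recall={r:.2f}")  # noisy pager, decent coverage
```

Track these two numbers per alert rule and “review dashboards regularly” becomes “delete rules with precision below X; add detection where recall is weak.”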

Top 5 Highest-Leverage Gaps

Ranked by “effort to close vs. value gained,” considering what you already know:

  1. Math → ML bridge (effort: Low). You have 90% of the prerequisites. One session connecting calculus/linear algebra to gradient descent transforms your ML understanding.
  2. Queueing theory → distributed systems (effort: Low). Kingman’s formula + back-pressure patterns. Connects theory you own to systems you build.
  3. Runtime security (Falco, seccomp) (effort: Medium). Your security stops at build-time. One tool (Falco) extends it to runtime. eBPF knowledge is the on-ramp.
  4. Bayesian statistics (effort: Medium). Unlocks sequential testing, better A/B tests, and a different way of reasoning about uncertainty.
  5. CAP → PACELC + consistency models (effort: Medium). You make latency/consistency tradeoffs daily without a framework for them. Reread Kleppmann’s DDIA chapters 7-9.

Meta-Observations

  • You reason from experience, not models. Your answers were consistently grounded in “I’ve seen this” rather than “the theory says.” This is a strength (practical, honest) and a limitation (you miss things you haven’t encountered).
  • Your product/business instincts are strong. When asked about local-first hard problems, you went straight to network effects and telemetry — the business viability questions. Most engineers go to CRDTs. You think about whether something will work in the market, not just whether it compiles.
  • Knowledge atrophies without use. SLSA, DIDs, and saga patterns all showed signs of “I knew this once.” Your vault preserves the notes but not the working fluency. Consider periodic review of high-value notes.
  • You have strong conceptual islands that aren’t connected. Math ↔ ML, queueing theory ↔ back-pressure, SLSA ↔ ML provenance. The knowledge exists in silos. The highest-value work isn’t learning new things — it’s building bridges between things you already know.

Generated by Claude Code on 2026-02-12 via Socratic interview.