Weekly Review — 2026-02-12
Prompt 1: Stale Note Detection
Notes modified in the last 7 days were reviewed for incomplete thoughts, TODOs, and outdated info.
Incomplete Thoughts
| Note | Issue |
|---|---|
| Context Engineering | Just a single quote and attribution. No original analysis, no links to other notes. Stub. |
| Embeddings | One sentence. Doesn’t explain how embedding models differ from word vectors, or name common models (e.g. all-MiniLM-L6-v2, which you already reference in book-diff/README.md). |
| Vector Databases | Empty file (no content). |
| Word Vectors | Empty file (no content). |
| Types of LLM prompts | Just a URL and a single quote block distinguishing Instructions vs Guidance. No synthesis. |
| Recursive Language Model | Single paragraph. No link to the Zhang paper. No connection to your existing Knowledge Diff note that already references it. |
| The Anthropic hive mind | Quotes only, no original commentary. No links to any notes. |
| Untitled 1 / Untitled 2 | Empty files. Should be deleted or given content. |
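The Embeddings stub could make the word-vector contrast concrete with a toy example. This is a hedged sketch with invented 2-D vectors (not real model output): static word vectors give “bank” one fixed vector regardless of context, which is exactly the limitation sentence-embedding models like all-MiniLM-L6-v2 address by encoding whole texts.

```python
import math

# Toy *static* word vectors (word2vec-style): one fixed vector per word,
# regardless of context. Values are invented for illustration.
word_vecs = {
    "bank":  [0.9, 0.1],   # same vector for a river bank or a money bank
    "river": [0.8, 0.2],
    "money": [0.1, 0.9],
}

def cosine(a, b):
    """Cosine similarity between two 2-D vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

# Static vectors can't disambiguate "bank" by context. An embedding
# *model* (e.g. all-MiniLM-L6-v2) instead encodes the whole sentence,
# so "bank of the river" and "bank account" would get different vectors.
print(cosine(word_vecs["bank"], word_vecs["river"]))  # high similarity
print(cosine(word_vecs["bank"], word_vecs["money"]))  # low similarity
```

The note itself could grow from exactly this contrast: word vectors are a lookup table; embedding models are functions over whole texts.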
Outdated Information
| Note | Concern |
|---|---|
| Retrieval-Augmented Generation | Typo: “prokpt” should be “prompt”. The description is also imprecise — RAG doesn’t necessarily do embedding on the prompt; it embeds the query, which may be extracted/reformulated from the prompt. |
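The corrected flow (embed the query, retrieve, then augment the prompt) can be sketched minimally. Everything here is a hypothetical stand-in: `embed()` is a toy character-frequency vector, not a real model, and the two docs are invented.

```python
import math
from collections import Counter

def embed(text):
    """Toy stand-in for an embedding model: a unit-normalized
    character-frequency vector. A real pipeline would call a model
    such as all-MiniLM-L6-v2 here."""
    counts = Counter(ch for ch in text.lower() if ch.isalpha())
    norm = math.sqrt(sum(c * c for c in counts.values()))
    return {ch: c / norm for ch, c in counts.items()}

def cosine(a, b):
    """Cosine similarity of two sparse unit vectors stored as dicts."""
    return sum(v * b.get(k, 0.0) for k, v in a.items())

# Pre-embedded documents (the "vector database" part, kept as a list here).
docs = [
    "Embeddings map text to vectors.",
    "Autoship ships orders on a schedule.",
]
doc_vecs = [embed(d) for d in docs]

def rag_prompt(user_prompt, query):
    """Embed the *query* (possibly reformulated from the prompt),
    retrieve the closest doc, and splice it into the final prompt."""
    scores = [cosine(embed(query), dv) for dv in doc_vecs]
    context = docs[scores.index(max(scores))]
    return f"Context: {context}\n\nQuestion: {user_prompt}"

# The query need not be the prompt verbatim:
print(rag_prompt("What are embeddings?", "embedding vectors"))
```

The shape of the flow is the point: it is the query that gets embedded, and the prompt is only augmented afterward, which is the distinction the note currently blurs.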
Action Items
- Flesh out or delete Vector Databases and Word Vectors (both empty)
- Delete Untitled 1 and Untitled 2 (empty, no purpose)
- Fix typo “prokpt” → “prompt” in Retrieval-Augmented Generation
- Add original commentary to The Anthropic hive mind and Context Engineering
- Add paper citation to Recursive Language Model
Prompt 2: Link Opportunity Finder
Notes that should link OUT
| Note | Suggested Links |
|---|---|
| Context Engineering | [[LLMs]], [[Types of LLM prompts]] (context engineering is about curating prompt context), [[Retrieval-Augmented Generation]] (RAG is a form of context engineering) |
| Types of LLM prompts | [[Context Engineering]] (same source article), [[LLMs]], [[Coding agents]] (the prompt types described are agent-relevant) |
| The Anthropic hive mind | [[LLMs]], [[Coding agents]] (Yegge’s piece is about AI-era strategy), [[Build vs Buy decision frameworks]] (the “atoms moat” argument is a build-vs-buy lens) |
| Recursive Language Model | [[Context Engineering]] (RLM is a context management technique), [[Embeddings]] (RLM is positioned as an alternative to embedding) |
| Retrieval-Augmented Generation | [[Context Engineering]] (RAG is context engineering), [[LLMs]] |
| Embeddings | [[Retrieval-Augmented Generation]] (embeddings power RAG), [[LLMs]] |
| Build vs Buy decision frameworks | [[Platform Strategy]] (already linked from Platform Strategy, should link back) |
| Peer-Responsible quality | [[Continuous Delivery]] or [[Accelerate]] (peer review is a CD practice) |
| StrongDM’s Software Factory | [[Coding agents]], [[LLMs]] (a “software factory” where code must not be written/reviewed by humans is an AI/agents concept) |
Notes that should link TO these new notes
| Existing Note | Should Add Link To |
|---|---|
| Coding agents | [[Context Engineering]], [[Types of LLM prompts]], [[Recursive Language Model]] |
| llm-assistant/Knowledge Diff | Already links to [[Embeddings]], [[Recursive Language Model]], [[Vector Databases]] — good. |
Prompt 3: Daily Note Extraction
Daily notes this week: 2026-02-02 through 2026-02-06
2026-02-02 — Chat with [[dvc]] about LLMs
- Getting good results but having to repeat himself
- Questions about how well LLMs understand existing code and PR flow integration
- Writing in TypeScript (doesn’t know TypeScript)
- Sanskrit Studio — most probable and least probable readings
2026-02-05 — Log spew analysis (Thrive Market work notes)
- Optimizely log noise (invalid user ID format, missing experiment keys)
- Braintree payment errors logged as application errors (MOK-42986)
- Habit formation reward date range issues
- Autoship configuration data empty errors
- Geocoding API misuse (“should be a geoip concern”)
- Brand page event failures
2026-02-06 — Randy/LeadDev connection
- Randy knows people who run LeadDev
- Talked about AI stuff with them
Ideas Worth Promoting to Permanent Notes
- “LLM repetition problem” — dvc’s experience of having to repeat himself to LLMs is a real pattern. Could connect to [[Context Engineering]] (the whole point is solving this) and [[Recursive Language Model]] (stores context to avoid repetition).
- “LLMs and existing codebases” — The question of how well LLMs understand existing code is worth a note. Connects to [[Coding agents]] and the context window problem.
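The repetition problem has a simple structural fix worth capturing in the Context Engineering note: state durable project facts once and prepend them to every request. A minimal sketch, with an invented class and invented facts:

```python
class ContextualSession:
    """Hypothetical wrapper: durable context stated once, reused on every request."""

    def __init__(self, durable_context):
        self.durable_context = durable_context  # facts you'd otherwise retype

    def build_prompt(self, user_message):
        """Prepend the stored facts to the current request."""
        header = "\n".join(f"- {fact}" for fact in self.durable_context)
        return f"Project context:\n{header}\n\nRequest: {user_message}"

session = ContextualSession([
    "Codebase is TypeScript; author is new to TypeScript.",
    "All changes go through PR review.",
])
print(session.build_prompt("Add input validation to the signup form."))
```

This is the same move tools make with project-level instruction files: the context lives in one place instead of in the user's patience.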
Recurring Themes
- AI/LLM adoption appears across 2026-02-02 (dvc conversation) and 2026-02-06 (LeadDev/AI). You’re clearly in a period of active AI thinking.
- This aligns with the cluster of new notes this week: Context Engineering, Embeddings, RAG, Vector Databases, Word Vectors, Recursive Language Model, Types of LLM prompts, Anthropic hive mind, StrongDM’s Software Factory — all AI-related.
Action Items / Commitments
- The 2026-02-05 log spew analysis is Thrive-specific work notes. Consider whether any patterns (e.g. “noisy logs masking real errors”) deserve a permanent note under [[observability]] or [[SRE]].
- The geocoding observation (“should be a geoip concern”) is an architectural insight — consider capturing the general principle.
Summary
This was an AI-focused week. You created ~10 new AI/ML notes, most of which are stubs. The biggest wins would be:
- Fill in or delete the empties — Vector Databases, Word Vectors, Untitled 1, Untitled 2 are dead weight right now.
- Cross-link the AI cluster — These new notes form a natural graph (Context Engineering ↔ RAG ↔ Embeddings ↔ Vector Databases ↔ Word Vectors) but almost none of them link to each other yet. Adding these links would close the RAG/vector database knowledge gap noted in CLAUDE.md.
- Fix the RAG typo — Small but it’ll bug you later.
- Connect the Anthropic hive mind quotes to your own thinking — Right now it’s just excerpts with no commentary. What do you think about the 90-day planning cycle? The atoms moat?