Weekly Review — 2026-02-12

Prompt 1: Stale Note Detection

Notes modified in the last 7 days were reviewed for incomplete thoughts, TODOs, and outdated information.
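
For reference, a minimal sketch of how this scan could be automated; the vault path, the 200-byte stub threshold, and the 7-day window are assumptions, not part of the review prompt itself.

```python
from pathlib import Path
import time

VAULT = Path("~/notes").expanduser()   # assumed vault location
CUTOFF = time.time() - 7 * 24 * 3600   # modified within the last 7 days

for note in sorted(VAULT.rglob("*.md")):
    if note.stat().st_mtime >= CUTOFF:
        size = note.stat().st_size
        flag = "  <- possible stub" if size < 200 else ""  # assumed threshold
        print(f"{note.relative_to(VAULT)} ({size} bytes){flag}")
```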

Incomplete Thoughts

| Note | Issue |
| --- | --- |
| Context Engineering | Just a single quote and attribution. No original analysis, no links to other notes. Stub. |
| Embeddings | One sentence. Doesn’t explain how embedding models differ from word vectors, or name common models (e.g. all-MiniLM-L6-v2, which you already reference in book-diff/README.md). See the sketch after this table. |
| Vector Databases | Empty file (0 bytes). |
| Word Vectors | Empty file (0 bytes). |
| Types of LLM prompts | Just a URL and a single quote block distinguishing Instructions vs Guidance. No synthesis. |
| Recursive Language Model | Single paragraph. No link to the Zhang paper. No connection to your existing Knowledge Diff note, which already references it. |
| The Anthropic hive mind | Quotes only, no original commentary. No links to any notes. |
| Untitled 1 / Untitled 2 | Empty files. Should be deleted or given content. |
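
Since Embeddings is the stub most worth fleshing out, here is a hedged sketch of the distinction that note should capture, assuming the gensim and sentence-transformers packages are available; all-MiniLM-L6-v2 is the model already cited in book-diff/README.md.

```python
# Word vectors: one static vector per token, regardless of context.
import gensim.downloader
wv = gensim.downloader.load("glove-wiki-gigaword-50")  # classic GloVe vectors
print(wv["bank"].shape)   # (50,) -- "bank" gets the same vector everywhere

# Embedding models: one vector per whole text, sensitive to context.
from sentence_transformers import SentenceTransformer
model = SentenceTransformer("all-MiniLM-L6-v2")
embs = model.encode(["I sat on the river bank.",
                     "I deposited cash at the bank."])
print(embs.shape)         # (2, 384) -- one 384-dim vector per sentence
```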

Outdated Information

| Note | Concern |
| --- | --- |
| Retrieval-Augmented Generation | Typo: “prokpt” should be “prompt”. The description is also imprecise: RAG doesn’t necessarily embed the prompt itself; it embeds the query, which may be extracted or reformulated from the prompt (see the sketch below). |
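
To make the corrected description concrete, a minimal sketch of the usual pipeline; the in-memory store and toy documents are hypothetical stand-ins for a real vector database, and the query here is taken verbatim from the prompt (the simplest case of extraction).

```python
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")

# Toy in-memory stand-in for a vector database: embed documents up front.
docs = [
    "Embeddings map whole texts to dense vectors.",
    "Word vectors assign one static vector per token.",
    "RAG retrieves documents and adds them to the model's context.",
]
doc_embs = model.encode(docs, normalize_embeddings=True)

def retrieve(prompt: str, k: int = 2) -> list[str]:
    # The QUERY is what gets embedded. Here it is the prompt verbatim;
    # in practice it is often extracted or reformulated by an LLM first.
    query = prompt
    q_emb = model.encode(query, normalize_embeddings=True)
    scores = doc_embs @ q_emb  # cosine similarity (vectors are unit-length)
    return [docs[i] for i in np.argsort(scores)[::-1][:k]]

# Retrieved text augments the prompt before the generation call.
context = "\n".join(retrieve("How does RAG use embeddings?"))
augmented = f"Context:\n{context}\n\nQuestion: How does RAG use embeddings?"
```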

Action Items

  • Flesh out or delete Vector Databases and Word Vectors (both empty)
  • Delete Untitled 1 and Untitled 2 (empty, no purpose)
  • Fix typo “prokpt” → “prompt” in Retrieval-Augmented Generation
  • Add original commentary to The Anthropic hive mind and Context Engineering
  • Add paper citation to Recursive Language Model

Prompt 2: Link Suggestions

| Note | Suggested Links |
| --- | --- |
| Context Engineering | [[LLMs]], [[Types of LLM prompts]] (context engineering is about curating prompt context), [[Retrieval-Augmented Generation]] (RAG is a form of context engineering) |
| Types of LLM prompts | [[Context Engineering]] (same source article), [[LLMs]], [[Coding agents]] (the prompt types described are agent-relevant) |
| The Anthropic hive mind | [[LLMs]], [[Coding agents]] (Yegge’s piece is about AI-era strategy), [[Build vs Buy decision frameworks]] (the “atoms moat” argument is a build-vs-buy lens) |
| Recursive Language Model | [[Context Engineering]] (RLM is a context-management technique), [[Embeddings]] (RLM is positioned as an alternative to embedding-based retrieval) |
| Retrieval-Augmented Generation | [[Context Engineering]] (RAG is a form of context engineering), [[LLMs]] |
| Embeddings | [[Retrieval-Augmented Generation]] (embeddings power RAG), [[LLMs]] |
| Build vs Buy decision frameworks | [[Platform Strategy]] (already linked from Platform Strategy; should link back) |
| Peer-Responsible quality | [[Continuous Delivery]] or [[Accelerate]] (peer review is a CD practice) |
| StrongDM’s Software Factory | [[Coding agents]], [[LLMs]] (a “software factory” where code must not be written or reviewed by humans is an AI/agents concept) |

| Existing Note | Should Add Link To |
| --- | --- |
| Coding agents | [[Context Engineering]], [[Types of LLM prompts]], [[Recursive Language Model]] |
| llm-assistant/Knowledge Diff | Already links to [[Embeddings]], [[Recursive Language Model]], [[Vector Databases]] — good. |

Prompt 3: Daily Note Extraction

Daily notes this week: 2026-02-02 through 2026-02-06

2026-02-02 — Chat with [[dvc]] about LLMs

  • Getting good results but having to repeat himself
  • Questions about how well LLMs understand existing code and PR flow integration
  • Writing in TypeScript (doesn’t know TypeScript)
  • Sanskrit Studio — most probable and least probable readings

2026-02-05 — Log spew analysis (Thrive Market work notes)

  • Optimizely log noise (invalid user ID format, missing experiment keys)
  • Braintree payment errors logged as application errors (MOK-42986)
  • Habit formation reward date range issues
  • Autoship configuration data empty errors
  • Geocoding API misuse (“should be a geoip concern”)
  • Brand page event failures

2026-02-06 — Randy/LeadDev connection

  • Randy knows people who run LeadDev
  • Talked about AI stuff with them

Ideas Worth Promoting to Permanent Notes

  1. “LLM repetition problem” — dvc’s experience of having to repeat himself to LLMs is a real pattern. Could connect to [[Context Engineering]] (the whole point is solving this) and [[Recursive Language Model]] (stores context to avoid repetition).

  2. “LLMs and existing codebases” — The question of how well LLMs understand existing code is worth a note. Connects to [[Coding agents]] and the context window problem.

Recurring Themes

  • AI/LLM adoption appears across 2026-02-02 (dvc conversation) and 2026-02-06 (LeadDev/AI). You’re clearly in a period of active AI thinking.
  • This aligns with the cluster of new notes this week: Context Engineering, Embeddings, RAG, Vector Databases, Word Vectors, Recursive Language Model, Types of LLM prompts, Anthropic hive mind, StrongDM’s Software Factory — all AI-related.

Action Items / Commitments

  • The 2026-02-05 log spew analysis is Thrive-specific work notes. Consider whether any patterns (e.g. “noisy logs masking real errors”) deserve a permanent note under [[observability]] or [[SRE]].
  • The geocoding observation (“should be a geoip concern”) is an architectural insight — consider capturing the general principle.

Summary

This was an AI-focused week. You created ~10 new AI/ML notes, most of which are stubs. The biggest wins would be:

  1. Fill in or delete the empties — Vector Databases, Word Vectors, Untitled 1, and Untitled 2 are dead weight right now.
  2. Cross-link the AI cluster — These new notes form a natural graph (Context Engineering ↔ RAG ↔ Embeddings ↔ Vector Databases ↔ Word Vectors) but almost none of them link to each other yet. Adding these links would close the RAG/vector database knowledge gap noted in CLAUDE.md; a sketch for auditing the link graph mechanically follows this list.
  3. Fix the RAG typo — Small but it’ll bug you later.
  4. Connect the Anthropic hive mind quotes to your own thinking — Right now it’s just excerpts with no commentary. What do you think about the 90-day planning cycle? The atoms moat?
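
A minimal sketch of how the cross-linking audit in item 2 could be mechanized, assuming an Obsidian-style vault of .md files; the vault path is an assumption, and this only catches exact [[Title]] matches (aliases and heading links are stripped at the | and # separators).

```python
import re
from pathlib import Path

VAULT = Path("~/notes").expanduser()   # assumed vault location
LINK = re.compile(r"\[\[([^\]|#]+)")   # [[Target]], [[Target|alias]], [[Target#h]]

notes = {p.stem: p.read_text(encoding="utf-8") for p in VAULT.rglob("*.md")}
out_links = {name: {t.strip() for t in LINK.findall(text)}
             for name, text in notes.items()}
linked_to = {t for targets in out_links.values() for t in targets}

# Notes with no outgoing links and no incoming links are prime candidates
# for cross-linking (or deletion, if they are also empty).
for name in sorted(notes):
    if not out_links[name] and name not in linked_to:
        print(f"unlinked either way: {name}")
```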