Based on a paper by Zhang. It addresses massive prompts: instead of feeding the prompt to the model directly, the system stores it in a variable inside a REPL-like environment and lets the coding LLM query it programmatically, so the full prompt never enters the model's context window. Critically, the environment also allows invoking sub-LLMs to parse and understand the context variable.

An alternative approach to context engineering: instead of pre-computing embeddings, let the model programmatically explore the context.
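The mechanism above can be sketched roughly as follows. All names here (`ContextREPL`, `peek`, `grep`, `ask_sub_llm`) are hypothetical illustrations, not the paper's actual interface; the sub-LLM call is stubbed out where a real implementation would hit a model API.

```python
# Hypothetical sketch of a REPL-style context environment.
# The large prompt lives in a variable; the coding LLM issues small
# programmatic queries instead of reading the whole thing.
import re

class ContextREPL:
    def __init__(self, context: str):
        self.context = context  # the massive prompt, stored but never shown whole

    def length(self) -> int:
        return len(self.context)

    def peek(self, start: int, end: int) -> str:
        """Return a small slice of the context."""
        return self.context[start:end]

    def grep(self, pattern: str, window: int = 40) -> list[str]:
        """Return short snippets around each regex match."""
        return [
            self.context[max(m.start() - window, 0): m.end() + window]
            for m in re.finditer(pattern, self.context)
        ]

    def ask_sub_llm(self, question: str, snippet: str) -> str:
        """Placeholder: a real implementation would invoke a sub-LLM
        (via a model API) on this chunk of the context."""
        return f"[sub-LLM answer to {question!r} over {len(snippet)} chars]"

# Usage: the coding LLM calls these instead of reading the prompt.
repl = ContextREPL("...huge document... ERROR: disk full ...more text...")
hits = repl.grep(r"ERROR: [a-z ]+")
answer = repl.ask_sub_llm("what failed?", hits[0])
```

The point of the sketch: the model sees only the small return values of `peek`/`grep`/`ask_sub_llm`, while the full context stays in the environment's memory.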