Designing resilient agentic systems with deterministic Markdown workflows
The architecture of a resilient personal agent rests on a simple insight: use deterministic storage for the parts that must be reliable, and LLMs for the parts that benefit from flexible interpretation. Storing facts, message history, and classification results as plain Markdown gives you reproducible inputs and an auditable trail that an LLM can read, summarize, and augment without becoming the single source of truth.
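As a concrete sketch of that idea, a note can carry its deterministic metadata in a simple key-value header above the Markdown body. The layout and helper names below are illustrative assumptions, not a prescribed format, and the parser assumes single-line values:

```python
from pathlib import Path

# Hypothetical note layout: a block of "key: value" metadata lines,
# a blank line, then the Markdown body. Names here are assumptions.
def write_note(path: Path, meta: dict, body: str) -> None:
    header = "\n".join(f"{k}: {v}" for k, v in sorted(meta.items()))
    path.write_text(f"{header}\n\n{body}", encoding="utf-8")

def read_note(path: Path) -> tuple[dict, str]:
    header, _, body = path.read_text(encoding="utf-8").partition("\n\n")
    meta = dict(line.split(": ", 1) for line in header.splitlines())
    return meta, body
```

Because the file is plain text, the same metadata is equally legible to a human in a diff, to a script, and to an LLM reading the note as context.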
Determinism matters when behavior must be repeatable. When you extract structured metadata from notes—tags, summaries, classification labels—record those outputs deterministically. Use small, well-tested algorithms for classification where possible and reserve the LLM for summarization, paraphrasing, or creative rearrangement. This hybrid approach reduces hallucination risk and makes debugging straightforward.
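A "small, well-tested algorithm" for classification can be as modest as a keyword rule table. This sketch is deliberately simple; the rules and tag names are invented for illustration:

```python
# A minimal deterministic tagger: keyword rules, no model call.
# The rule table and tag names are illustrative assumptions.
RULES = {
    "invoice": ("invoice", "amount due", "payment"),
    "meeting": ("agenda", "minutes", "attendees"),
}

def classify(text: str) -> list[str]:
    lowered = text.lower()
    # Iterating over sorted keys keeps output order stable across runs.
    return [tag for tag in sorted(RULES)
            if any(kw in lowered for kw in RULES[tag])]
```

When a note trips a rule, the resulting tags are written back into its metadata; the same input always yields the same tags, which is exactly what makes a misclassification debuggable.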
Context management is a first-class concern. Large context windows are valuable but finite; prioritize instructions and the immediate working set instead of dumping everything at once. Give the model a compact prompt that points to the deterministic artifacts it should consult (note IDs, file paths, or short summaries) rather than re-supplying full history every time.
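One way to keep the prompt compact is to assemble it from pointers and short summaries rather than full note bodies. The structure of `refs` and the wording below are assumptions for the sketch:

```python
def build_prompt(instruction: str, refs: list[dict], limit: int = 5) -> str:
    # refs: [{"id": ..., "summary": ...}] -- only identifiers and short
    # summaries enter the prompt; the full notes stay on disk.
    lines = [instruction, "", "Relevant notes (fetch by id if needed):"]
    for ref in refs[:limit]:
        lines.append(f"- {ref['id']}: {ref['summary']}")
    return "\n".join(lines)
```

The model can then request a specific note by id through a tool call, so the context window holds the working set rather than the archive.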
Design workflows so that deterministic steps are cheaply rerunnable. If a classification algorithm changes, you should be able to re-run only that step across stored notes and produce the same outputs without re-invoking the model for everything. This separation of concerns speeds iteration and reduces cost.
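Rerunning one deterministic step over the whole store can then be a single loop over files. This sketch assumes the notes live in a flat directory of `.md` files and takes the classifier as a plain function:

```python
from pathlib import Path

def reclassify_all(notes_dir: Path, classify) -> dict[str, list[str]]:
    # Re-run a single deterministic step over every stored note.
    # Sorting the paths makes the pass order (and any log) reproducible.
    results = {}
    for path in sorted(notes_dir.glob("*.md")):
        results[path.stem] = classify(path.read_text(encoding="utf-8"))
    return results
```

Swapping in a new rule table and rerunning this pass touches no model API at all, which is where the iteration-speed and cost savings come from.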
Builders of agentic systems must weigh convenience against auditability. Markdown files are less convenient than an opaque vector store, but they are human-readable, versionable, and easy to recover. That tradeoff is often worth it for personal or sensitive agent workflows where explainability and control matter.
Operationally, keep prompts short and focused, and prefer algorithms where edge cases are critical (currency rounding, ordering logic, financial computations). Use the LLM for extraction and synthesis, then apply deterministic rules for decisions that drive external systems.
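Currency rounding is a good example of an edge case that belongs in an algorithm, not a model. A minimal sketch using Python's standard `decimal` module, with the helper name as an assumption:

```python
from decimal import Decimal, ROUND_HALF_UP

def to_cents(amount: str) -> Decimal:
    # Deterministic currency rounding: half-up to two decimal places,
    # on Decimal rather than float, so reruns always agree.
    return Decimal(amount).quantize(Decimal("0.01"), rounding=ROUND_HALF_UP)
```

Note that `float` would already misrepresent a value like 2.675 before rounding began; passing the amount as a string keeps the computation exact end to end.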
Finally, invest in a small suite of tooling: a pagination-friendly notes API, clear conventions for metadata in files, and simple scripts to commit and reconcile changes. These primitives make it possible to combine the creative power of LLMs with the predictability of traditional software engineering.
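The "pagination-friendly notes API" primitive can start as one function. This sketch assumes a flat directory of `.md` files and zero-based pages; the stable sort is the important design choice:

```python
from pathlib import Path

def list_notes(notes_dir: Path, page: int = 0, per_page: int = 20) -> list[str]:
    # A stable sort order means the same page request returns the
    # same slice on every call -- pagination an agent can rely on.
    names = sorted(p.stem for p in notes_dir.glob("*.md"))
    start = page * per_page
    return names[start:start + per_page]
```

An agent can walk the whole store a page at a time without ever holding more than one slice in context, which pairs naturally with the compact-prompt strategy above.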