The assistant can help maintain itself
The same assistant that reads knowledge agents can also help write them. This is not a secondary feature — it is one of the more useful things the system enables.
Meeting transcripts, 1-on-1 notes, and decision logs accumulate patterns that are rarely made explicit anywhere. How someone tends to frame problems. What kinds of tradeoffs they consistently favour. Which communication contexts they find draining versus energising. The vocabulary they reach for when thinking carefully versus when thinking fast. None of this lives in a knowledge agent by default, because no one sat down and wrote it out. It exists only in aggregate, across dozens of conversations.
The assistant can extract it. Ask it to read six months of 1-on-1 notes and identify recurring decision-making patterns. Ask it to synthesise how you give feedback, based on the transcripts where you have given it. Ask it what it would find useful to know about you that is not currently in the profile agent. The output of those conversations is the raw material for agent updates — and in most cases, the assistant can draft the update directly.
This loop matters because it changes how agents accumulate value over time. A knowledge agent written once and left alone gets stale. One that is periodically reviewed, extended, and refined through conversation with the assistant gets sharper. The assistant surfaces what is missing. You decide what to add. The file is updated. The next response is better calibrated.
The format supports this. Because agents are plain text files, the assistant can propose an edit in a message and you can apply it directly. There is no import process, no approval workflow, no schema to conform to. The feedback loop between "what the assistant knows" and "what it could know" stays short enough to act on.
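Because "applying" an edit is just a file write, the whole loop can be scripted. The sketch below is a minimal illustration, not a prescribed workflow: the file name `profile.md`, the dated-section header, and the `apply_update` helper are all assumptions for the sake of the example.

```python
# Minimal sketch: applying an assistant-drafted update to a plain-text
# agent file. No import process, approval workflow, or schema — just an
# append to the file. File name and section format are hypothetical.
from datetime import date
from pathlib import Path

AGENT_FILE = Path("profile.md")  # hypothetical agent file

def apply_update(draft: str) -> None:
    """Append an assistant-drafted section to the agent file,
    stamped with today's date so later reviews can see when
    each piece of context was added."""
    stamp = date.today().isoformat()
    block = f"\n## Update ({stamp})\n\n{draft.strip()}\n"
    existing = AGENT_FILE.read_text() if AGENT_FILE.exists() else ""
    AGENT_FILE.write_text(existing + block)

# Example: paste in a pattern the assistant surfaced from 1-on-1 notes.
apply_update("Tends to frame problems as tradeoffs between speed and reversibility.")
```

The point of the sketch is how little machinery sits between "the assistant proposed this" and "the agent now knows it": one function, one file write.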