I use the same motion to iterate on linter config and hooks (both Claude Code hooks and git hooks), so guardrails are actually enforced and iterated on without cluttering the context.
It's a good idea. I didn't think of it because this project came about as a "let's try to write a remote MCP server now that the standard has stabilized" exercise.
But there are some issues:
1. Cheaper + deterministic: Generating and running code costs much more, both in tokens and context window, than making a single tool call, and the generated query can vary from run to run (timezone handling is a common source of drift).
2. Portability: Not every LLM or LM environment has access to a code interpreter; a plain tool call is a much lower resource requirement.
3. Extensibility: The MCP approach lets us expand the toolkit with additional cognitive scaffolds that contextualize how we experience time for the model. (This is a fancy way of saying: code only gives you the timestamp, but an MCP server can add context around it: "this is when I'm sleeping, this is when I'm eating or commuting," etc.)
4. Security: Ops teams are happier approving a read-only REST call than arbitrary code execution (a minimal sketch of such a tool follows this list).
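To make the comparison concrete, here is a minimal sketch of what that read-only tool call looks like on the server side, assuming the official MCP Python SDK's FastMCP helper. It is illustrative only, not the actual passage-of-time implementation, which exposes more tools and richer descriptions.

```python
# Minimal sketch of a read-only time tool (illustrative, not the actual
# passage-of-time server). Assumes the MCP Python SDK is installed: pip install mcp
from datetime import datetime, timezone

from mcp.server.fastmcp import FastMCP

mcp = FastMCP("passage-of-time")


@mcp.tool()
def current_datetime() -> str:
    """Return the current date and time as an ISO 8601 string in UTC."""
    # Returning UTC keeps the answer deterministic; any timezone math is
    # left to other tools rather than to generated code.
    return datetime.now(timezone.utc).isoformat()


if __name__ == "__main__":
    mcp.run()
```

The model answers a time question with one cheap, deterministic tool call, and the only thing an ops team has to review is a function that reads the clock.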
One last thing I will say: the MCP specification is unclear about how much of a server's initial "instructions" (essentially the server's README.md addressed to the model) actually gets surfaced. In the "passage-of-time" MCP server, the instructions describe each available tool and ask the model to poll the time at each message.
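For what it's worth, the Python SDK lets you attach such instructions when the server is constructed; whether any given client actually surfaces them to the model is exactly the unclear part. A hedged sketch, with illustrative wording rather than the server's actual text:

```python
# Sketch of server-level instructions (illustrative wording, assuming the
# MCP Python SDK's FastMCP `instructions` parameter).
from mcp.server.fastmcp import FastMCP

mcp = FastMCP(
    "passage-of-time",
    instructions=(
        "Call current_datetime at the start of every user message so that "
        "your sense of elapsed time stays grounded in the actual clock."
    ),
)
```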
In practice, this hasn't really worked. I've had to add a custom instruction to "call current_datetime" at each message to get Claude to do it consistently over time.
Still, it matters that I'm asking the model to make a single quick query rather than generate code: LLMs are notoriously unreliable, especially over long context, and a directive like "check the time before every turn" will start getting dropped after enough interactions. An MCP call is more reliable for specific, programmatic queries like retrieving the time.
> But if it's so obvious, why aren't other developers rolling out their own PKMS? Perhaps I'm the first to discover this or perhaps developers aren't writing about their custom PKMS. My guess is that commercial note-taking apps have larger, more vocal communities that drown out the murmurings of other DIY solutions.
It's a good idea to just not do stupid shit that would make it very painful to actually get compliant. Get vendors who have certs, keep infra minimal (which means no infra team). The more you do in house, the more painful compliance will be. Buy, and buy from certified providers, simple. Manage identity centrally, keep all your secrets in a secret manager, use git and do code reviews. You're right, these are all things you should be doing anyway.
"Manage identity centrally" is probably referring to using an identity management system like Okta or Microsoft Identity, or hosting your own IdP, together with strong hardware 2FA. You don't want people manually creating their own accounts for everything, or shared accounts whose password everyone knows (or that sits in a spreadsheet the whole company can access).
At this point most startups would just use Google: they're almost certainly using Google as their email provider already, and "company email" is a de facto root of trust even if you don't intend it to be, so there isn't a whole lot of thought that needs to go into it. It helps that Google has the best 2FA stack of any mainstream cloud service.
Exactly, which is why the whole “fiduciary duty to maximize profits for shareholders” is a totally meaningless/unenforceable/zero-information idea. Shareholders have many many different interests on different time horizons so business operators naturally and necessarily have broad discretion in how they satisfy their fiduciary duties.