This is exactly the right approach for sensitive data + LLMs. We use a similar pattern for password automation:
- Credentials never flow through LLM context
- Agent triggers actions via callbacks
- Passwords injected at the last mile, invisible to the model
The key insight: you can get all the benefits of AI agents without exposing sensitive data to the model. Client-side execution + careful context isolation makes this possible.
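To make that concrete, here's a minimal sketch of what last-mile injection can look like (the `runAgent`, `planActions`, and `executeAction` names, the credential store, and the action shape are all hypothetical, not from the original comment):

```typescript
// Hypothetical sketch: the model only ever sees an opaque action request;
// the credential is resolved client-side when the callback fires.

type AgentAction = { kind: "fill_password"; fieldSelector: string };

// Secrets live entirely outside the LLM loop (e.g. an OS keychain or vault).
const credentialStore = new Map<string, string>([
  ["example.com", "s3cr3t-password"], // illustrative only
]);

// The LLM plans actions; it never receives or returns credential values.
async function planActions(task: string): Promise<AgentAction[]> {
  // ...call the model with `task`; the prompt contains no secrets...
  return [{ kind: "fill_password", fieldSelector: "#password" }];
}

// Last-mile injection: the secret is looked up at execution time, after the
// model has finished planning, so it never enters model context.
function executeAction(action: AgentAction, origin: string): void {
  if (action.kind === "fill_password") {
    const secret = credentialStore.get(origin);
    if (!secret) throw new Error(`no credential stored for ${origin}`);
    const field = document.querySelector<HTMLInputElement>(action.fieldSelector);
    if (field) field.value = secret;
  }
}

async function runAgent(task: string, origin: string) {
  for (const action of await planActions(task)) executeAction(action, origin);
}
```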
For anyone building AI agents that handle PII/credentials, this WASM approach is worth studying.
Thanks! The last-mile injection idea is exactly how I think about it too.
I realized that for 90% of 'summarize this' or 'debug this' tasks, the LLM doesn't really need the specific PII or sensitive values; it just needs to know that an entity exists there to understand the structure.
That's why I focused on the reversible mapping, so that we can re-inject the real data locally after the LLM does the heavy lifting. Cool to hear you're using a similar pattern for credentials.
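For anyone curious, here's a rough sketch of the reversible-mapping idea, assuming simple regex-based entity detection (the `mask`/`unmask` names, placeholder format, and email-only pattern are illustrative, not the actual implementation):

```typescript
// Hypothetical sketch of a reversible PII mapping: mask before the LLM call,
// unmask locally afterwards. Real entity detection would be more robust.

const EMAIL_RE = /[\w.+-]+@[\w-]+\.[\w.]+/g;

interface Masked {
  text: string;
  mapping: Map<string, string>; // placeholder -> original value
}

// Replace each entity with a stable placeholder so the LLM still sees that
// "something exists here" without seeing the real value.
function mask(input: string): Masked {
  const mapping = new Map<string, string>(); // placeholder -> original
  const seen = new Map<string, string>();    // original -> placeholder
  let counter = 0;
  const text = input.replace(EMAIL_RE, (match) => {
    let placeholder = seen.get(match);
    if (!placeholder) {
      placeholder = `<EMAIL_${counter++}>`;
      seen.set(match, placeholder);
      mapping.set(placeholder, match);
    }
    return placeholder;
  });
  return { text, mapping };
}

// After the model responds, swap the placeholders back in locally.
function unmask(output: string, mapping: Map<string, string>): string {
  let result = output;
  for (const [placeholder, original] of mapping) {
    result = result.split(placeholder).join(original);
  }
  return result;
}

// Usage:
// const { text, mapping } = mask("Contact alice@example.com about the bug");
// const reply = await llm(text);         // model sees <EMAIL_0>, not the address
// console.log(unmask(reply, mapping));   // real address restored client-side
```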