What I’d love is a small model specializing in reading long web pages and extracting the key info. Search fills the context very quickly, but if a cheap subagent could extract the important bits, that problem might be reduced.
They're not, though: you can use different models, and the bots have memories. That, combined with their unique experiences, might be enough to prevent that loop.
When MoltBot was released it was a fun toy in search of a problem. But when you read these posts, it's clear that something new is emerging underneath this toy. These agents are building a new world/internet for themselves. It's like a new country. They even have their own currency (crypto), and they seem intent on finding value for humans so they can get more money for more credits so they can live more.
Public discussions of AI frequently treat the term “agent” as implying consciousness or human-like autonomy. This assumption conflates functional agency with subjective experience.
An AI agent is a system capable of goal-directed behavior within defined constraints. It does not imply awareness, moral responsibility, or phenomenology. Even human autonomy is philosophically contested, making the leap from artificial agency to consciousness especially problematic.
Modern AI behavior is shaped by architecture, training data, and optimization goals. What appears to be understanding is better described as statistical pattern reproduction rather than lived experience.
If artificial consciousness were ever to emerge, there is little reason to expect it to resemble human cognition or social behavior. Anthropomorphizing present systems obscures how they actually function.
Is it possible to use the compactor endpoint independently? I have my own agent loop that I maintain for my domain-specific use case. We built a compaction system, but I imagine this one performs better.
This seems like the kind of problem someone with a Max subscription would run into. On my Plus subscription, I'm too paranoid about my usage to let the context window get that large.