swalsh's comments | Hacker News

Many times, obvious things are only obvious once you see them. Like roller suitcases.

See also: the wheel

What I’d love is a small model that specializes in reading long web pages and extracting the key info. Search fills the context window very quickly, but if a cheap subagent could extract just the important bits, that problem might be much reduced.

So send off Haiku subtasks and have them come back with the results.
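
Something like this rough sketch is what I have in mind, using the Anthropic Python SDK (the model name and prompt are just placeholders I'm assuming, not a recommendation):

    import anthropic

    client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

    def extract_key_info(page_text: str, question: str) -> str:
        """Have a small, cheap model pull out only the parts of a long page that
        matter for the question, so the main agent's context stays small."""
        response = client.messages.create(
            model="claude-3-5-haiku-latest",  # assumed cheap model; swap in whatever you use
            max_tokens=500,
            messages=[{
                "role": "user",
                "content": (
                    f"Question: {question}\n\n"
                    f"Page:\n{page_text}\n\n"
                    "Return only the facts from the page that are relevant to the "
                    "question, as short bullet points."
                ),
            }],
        )
        return response.content[0].text

The main agent only ever sees the bullet points, not the raw page, which is where the context savings would come from.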

They're not, though: you can use different models, and the bots have memories. That, combined with their unique experiences, might be enough to prevent that loop.

Yeah, but these bot simulators have root access, unrestricted internet, and money.

And they have way more internal hidden memory. They make temporally coherent posts.

When MoltBot was released it was a fun toy searching for a problem. But when you read these posts, it's clear that something new is emerging under this toy. These agents are building a new world/internet for themselves. It's like a new country. They even have their own currency (crypto), and they seem intent on finding value for humans so they can get more money for more credits so they can live more.

Public discussions of AI frequently treat the term “agent” as implying consciousness or human-like autonomy. This assumption conflates functional agency with subjective experience.

An AI agent is a system capable of goal-directed behavior within defined constraints. It does not imply awareness, moral responsibility, or phenomenology. Even human autonomy is philosophically contested, making the leap from artificial agency to consciousness especially problematic.

Modern AI behavior is shaped by architecture, training data, and optimization goals. What appears to be understanding is better described as statistical pattern reproduction rather than lived experience.

If artificial consciousness were ever to emerge, there is little reason to expect it to resemble human cognition or social behavior. Anthropomorphizing present systems obscures how they actually function.


We're at a point where we can't know for sure, and that's fascinating.

That's not the right site

Is it possible to use the compactor endpoint independently? I have my own agent loop I maintain for my domain-specific use case. We built a compaction system, but I imagine this performs better.
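
For reference, our home-grown version is roughly this shape (a sketch, not our actual code; the model choice and prompt are arbitrary):

    from openai import OpenAI

    client = OpenAI()

    def compact(messages: list[dict], keep_last: int = 6) -> list[dict]:
        """Summarize everything except the system prompt and the most recent turns,
        then splice the summary back in as a single message."""
        if len(messages) <= keep_last + 1:
            return messages
        system, old, recent = messages[0], messages[1:-keep_last], messages[-keep_last:]
        summary = client.chat.completions.create(
            model="gpt-4o-mini",  # any cheap model works here
            messages=[
                {"role": "system", "content": "Summarize this conversation so an agent "
                 "can continue it. Keep decisions, open tasks, and file/URL references."},
                {"role": "user", "content": "\n".join(f"{m['role']}: {m['content']}" for m in old)},
            ],
        ).choices[0].message.content
        return [system, {"role": "user", "content": f"[Summary of earlier conversation]\n{summary}"}, *recent]

It works, but I'd expect a first-party compaction endpoint to do a better job of deciding what to keep.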


Yes, you can, and I really like it as a feature. But it ties you to OpenAI…


I would guess you can if you're using their Responses API for inference within your agent.


This seems like the kind of problem someone with a Max subscription would run into. On my Plus subscription, I'm too paranoid about my usage to let the context window get that large.


Be careful with that confidence. Sometimes the AI changes the test when the test tells the AI its recent changes broke something.


You can see when a test is changed. If you're paranoid like I am, you can simply make the tests read-only for the user you're running the agent as.
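
Something like this is enough (a sketch, assuming pytest-style files under tests/; adjust the glob to your layout):

    import os
    from pathlib import Path

    # Strip write permission from every test file so an unprivileged agent user
    # can't quietly rewrite a failing test. This doesn't stop root, and a file's
    # owner can chmod it back, so run the agent as a separate user that owns
    # none of these files.
    for path in Path("tests").rglob("test_*.py"):
        os.chmod(path, 0o444)  # read-only for owner, group, and others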

