Hacker News | jstummbillig's comments

I say. Vibe coded 4 apps once I got past that, on my way to half a billion in ARR already.

Let's assume that Mozilla is not doing super hot and that's why their CEO is contemplating this topic.

Obviously we are not happy about ads, but we all understand that having money is pretty neat (if only to pay one's salary). Help the CEO fella: what great, unused options is Mozilla missing to generate revenue through their browser?


Adblocker.

I find the long-term memory concepts with regard to AI curiously dubious.

On first glance, of course it's something we want. It's how we do it, after all! Learning on the job is what enables us to do our jobs and so many other things.

On the other hand, humans are frustratingly stuck in their ways and not all that happy to change, and that is something that societies and orgs fight a lot. Do I want to have to convince my coding agent to learn new behavior that conflicts with its existing memory?

It's not at all obvious to me to what extent memory is a bug or a feature. Does somebody have a clear case for why this is something we should want and why it's not a problem?


> Does somebody have a clear case on why this is something that we should want

For coding agents, I think it's clear that nobody wants to repeat the same thing over and over again. If a coding agent makes a mistake once (like `git add .` instead of manually picking files), it should be able to "learn" and never make the same mistake again.
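A minimal sketch of what that kind of "learning" could look like: a persistent lesson store the agent reloads into context each session and consults before running a command. The file name, schema, and helper names here are all made up for illustration.

```python
import json
from pathlib import Path

# Hypothetical lesson store: one JSON file the agent loads at the start
# of every session and appends to whenever the user corrects it.
LESSONS_FILE = Path("agent_lessons.json")

def load_lessons() -> list[dict]:
    if LESSONS_FILE.exists():
        return json.loads(LESSONS_FILE.read_text())
    return []

def record_lesson(trigger: str, rule: str) -> None:
    """Persist a correction, e.g. trigger='git add .'."""
    lessons = load_lessons()
    lessons.append({"trigger": trigger, "rule": rule})
    LESSONS_FILE.write_text(json.dumps(lessons, indent=2))

def check_command(cmd: str) -> list[str]:
    """Return any remembered rules whose trigger appears in a proposed command."""
    return [entry["rule"] for entry in load_lessons() if entry["trigger"] in cmd]

record_lesson("git add .", "stage files explicitly instead of adding everything")
print(check_command("git add . && git commit -m 'wip'"))
```

The same store also makes "unlearning" trivial: delete the offending entry, something that is much harder with memory baked into weights.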

Though I definitely agree w/ you that we shouldn't aspire to 1:1 replicate human memory. We want to be able to make our machines "unlearn" easily when needed, and we also want them to be able to "share" memory with other agents in ways that simply aren't possible with humans (until we all get neuralinks, I guess).


Interesting. Here is my AI-powered dev prediction: we'll move toward event-sourced systems, because AI will be able to discover patterns and workflow correlations that are hard or impossible to recover from state-only CRUD. It seems silly not to preserve all that business information, given this analysis machine we have on our hands.
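For readers unfamiliar with the pattern: in event sourcing the append-only log of business events is the source of truth, and current state is derived by folding over it. A minimal sketch with made-up event names:

```python
from dataclasses import dataclass, field

# Minimal event-sourced account: instead of storing only the current
# balance (state-only CRUD), we append every business event and derive
# state on demand. The preserved log is exactly the history a model
# could later mine for patterns and workflow correlations.

@dataclass
class Account:
    events: list = field(default_factory=list)

    def apply(self, event: dict) -> None:
        self.events.append(event)  # append-only: nothing is overwritten

    @property
    def balance(self) -> int:
        total = 0
        for e in self.events:
            if e["type"] == "deposited":
                total += e["amount"]
            elif e["type"] == "withdrew":
                total -= e["amount"]
        return total

acct = Account()
acct.apply({"type": "deposited", "amount": 100})
acct.apply({"type": "withdrew", "amount": 30})
print(acct.balance)  # state is derived; the full history survives
```

With a plain CRUD row you would only ever see the final balance; here, the fact that a withdrawal followed a deposit remains queryable.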

Yes, according to Duolingo's (obviously biased) CEO.

https://www.youtube.com/watch?v=st6uE-dlunY

Found this episode fairly interesting (without being particularly interested or personally invested in the space)


This is interesting and a nice conversation, thank you.

He talks about how they wanted to let people know that they would stop sending them notifications after five days of inactivity, but that the "passive-aggressive" nature of that notification actually got people to come back. To me it illustrates that it's such a fine line to walk if you want to respect the user but also maybe push through their own lack of motivation.

(I'm not a user of Duolingo so I can't speak to where they land on that but it's clearly controversial)


1250-day streak on Duolingo.

The funny, passive-aggressive communication style is something I personally consider Duolingo's thing. I kinda like that they have a persona and stick with it in all of their communication.

If it were a cold and to-the-point "you have missed today's lesson", I wouldn't come back.


Why do you need code execution envs? Could the skill not just be a function over a business process, do a then b then c?

Turns out that basic shell commands are really powerful for context management. And you get tools which run in shells for free.

But yes. Other agent platforms will adopt this pattern.
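A concrete illustration of "shell commands for context management": rather than pasting a whole file into the model's context, the agent can run ordinary tools like `grep` to pull out only the relevant lines. The log contents below are fabricated for the demo (assumes `grep` is available, as on any Linux box):

```python
import os
import subprocess
import tempfile

# Fake a 1000-line log with a single interesting line buried in it.
with tempfile.NamedTemporaryFile("w", suffix=".log", delete=False) as f:
    for i in range(1000):
        f.write(f"line {i}: {'ERROR disk full' if i == 500 else 'ok'}\n")
    path = f.name

# One grep keeps the context tiny: a single matching line instead of 1000.
out = subprocess.run(
    ["grep", "-n", "ERROR", path],
    capture_output=True, text=True, check=True,
).stdout
print(out.strip())
os.unlink(path)
```

The same trick composes: `head`, `wc -l`, `sed -n '10,20p'`, and friends give an agent cheap, well-understood ways to sample a file before committing context to it.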


I prefer to provide CLIs to my agent.

I find it powerful how it can leverage and self-discover the best way to use a CLI and its parameters to achieve its goals.

It feels more powerful than providing a pre-defined set of functions as MCP tools, which have less flexibility than a CLI.
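The self-discovery step can be as simple as letting the agent read a tool's own usage text before composing a real call. A sketch (the `discover` helper is made up; it uses the Python interpreter itself as a stand-in for an arbitrary CLI):

```python
import subprocess
import sys

# Hypothetical discovery step: before using an unfamiliar CLI, run its
# `--help` and feed that text into the model's context, so it can work
# out the flags itself instead of relying on a fixed tool schema.
def discover(cli: str) -> str:
    """Return the tool's own usage text."""
    result = subprocess.run([cli, "--help"], capture_output=True, text=True)
    return result.stdout or result.stderr  # some tools print help to stderr

help_text = discover(sys.executable)
print(help_text.splitlines()[0])
```

This is the flexibility gap the comment describes: an MCP tool exposes exactly the parameters its author anticipated, while a CLI's full flag surface is documented in-band and discoverable at runtime.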


Is there a fundamental difference between a skill and a tool or could I just make a terse skill and have that be used in the same way as a tool?

I think a tool call can be thought of as a special type of reply whose contents are parsed and an actual function is called. A skill is more of a dynamic context enrichment.
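The distinction can be sketched in a few lines. This is not any particular vendor's API, just the generic shape: the harness parses a structured reply and dispatches it to real code, whereas a skill would merely add text to the prompt.

```python
import json

# Stand-in for a real function the harness exposes to the model.
def get_weather(city: str) -> str:
    return f"Sunny in {city}"

TOOLS = {"get_weather": get_weather}

# Pretend the model replied with this structured blob instead of prose:
model_reply = '{"tool": "get_weather", "arguments": {"city": "Berlin"}}'

call = json.loads(model_reply)                      # parse the reply's contents
result = TOOLS[call["tool"]](**call["arguments"])   # call the actual function
print(result)  # Sunny in Berlin
```

So yes, a terse skill could steer the model toward a workflow, but only a tool call crosses from generated text into executed code.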

Since, per Anthropic's own benchmarks, Sonnet 4.5 is beaten by Opus 4.5, would it not suffice to infer the rest?

https://x.com/OpenAI/status/1999182104362668275


At least this once the AI-ism was not spotted.

Goodness no, I chuckled.

So, right off the bat: 5.2 code talk (through codex) feels really nice. The first coding attempt was a little meh compared to 5.1 codex max (reflecting what they wrote themselves), but simply planning / discussing things felt markedly better than anything I remember from any previous model, from any company.

I remain excited about new models. It's like finding that my coworker is 10% smarter every other week.

