
This is not legal advice, but I think one should always add a license, not so much for the copyright but for the "no warranty" part. If someone claims copyright, one can add whatever license was used in the original work.

In general, where I live (Spain), the main baseline is fault/negligence, so basically "whoever causes damage by fault or negligence must repair it". They'd need to be able to attribute the fault/negligence to me, which, since this is just public code with me promising nothing, will be really hard for them to "prove".

The license implicitly defaults to "I own all the rights", so no one is able to override that implicit license by copying the code and slapping their own license on top. I'm not sure if this is what you were thinking about when you said "claims copyright, one can add whatever"?

Then, on a different note, I'm not licensing/selling/providing any terms, so it's next to impossible for someone to credibly claim I warranted anything: there are no terms in the first place, except any implicit ones.

Maybe it works differently in the US, and because Microsoft is in the US, that could somehow matter for me. But I'm not too worried about it :)

Thanks for the consideration and care though, that's always appreciated! :)


This one's really nice.

- Clear code structure and good architecture (a modular approach reminiscent of Blitz but not as radical, like a Blitz-lite).

- Very easy to follow the code and understand how the main render loop works:

    - For Mac: main loop is at https://github.com/embedding-shapes/one-agent-one-browser/blob/master/src/platform/macos/windowed.rs#L74
   
    - You can see clearly how UI events are passed to the App to handle.

    - App::tick allows the app to handle internal events (Servoshell does something similar with `spin_event_loop` at https://github.com/servo/servo/blob/611f3ef1625f4972337c247521f3a1d65040bd56/components/servo/servo.rs#L176)

    - If a redraw is needed, the main render logic is at https://github.com/embedding-shapes/one-agent-one-browser/blob/master/src/platform/macos/windowed.rs#L221 and calls into `render` of App, which computes a display list (layout) and then translates it into commands to the generic painter, which internally turns those into platform-specific graphics operations.
- It's interesting how the painter for Mac uses Cocoa for graphics; very different from Servo, which uses WebRender, or Blitz, which (in some paths) uses Vello (itself using wgpu). I'd say using Cocoa like that might be closer to what React Native does (expert to confirm this, please?). By the way, this kind of platform-specific binding is a strength of AI coding (and a real pain to do by hand).

- Nice modularity between the platform and browser-app parts, achieved with the App and Painter traits (see the sketch just below).
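
To make that split concrete, here's a minimal sketch of what such a trait boundary can look like. The names and signatures below are my own guesses for illustration, not the repository's actual API:

    // Sketch only: assumed names, not the repo's actual traits.

    /// Platform-agnostic drawing commands produced by the app.
    enum DrawCmd {
        Rect { x: f32, y: f32, w: f32, h: f32 },
        Text { x: f32, y: f32, content: String },
    }

    /// Implemented per platform (e.g. Cocoa on macOS).
    trait Painter {
        fn paint(&mut self, commands: &[DrawCmd]);
    }

    /// Implemented by the browser app; the platform layer owns the event loop.
    trait App {
        /// Handle internal events; returns true if a redraw is needed.
        fn tick(&mut self) -> bool;
        /// Compute a display list (layout) and translate it into painter commands.
        fn render(&mut self) -> Vec<DrawCmd>;
    }

    /// The platform loop, stripped of all windowing details.
    fn run(app: &mut dyn App, painter: &mut dyn Painter) {
        loop {
            // ...dispatch platform UI events to the app here...
            if app.tick() {
                painter.paint(&app.render());
            }
        }
    }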

How to improve it further? I'd say try to map how the architecture corresponds to Web standards, such as https://html.spec.whatwg.org/multipage/webappapis.html#event...

It wouldn't have to be precise or comprehensive, but, for example, parts of App::tick could be documented as an initial attempt at implementing part of the web event loop, and `render` as an attempt at implementing the update-the-rendering task.
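
To illustrate, in terms of the sketch above (again with assumed names and an assumed step mapping, purely for illustration), the doc comments could point straight at the relevant spec sections:

    /// Same App trait as sketched above, annotated against the spec.
    trait App {
        /// A rough, partial analogue of one turn of the event-loop processing model:
        /// https://html.spec.whatwg.org/multipage/webappapis.html#event-loop-processing-model
        /// Runs the oldest queued internal task; returns true if a rendering
        /// opportunity should follow.
        fn tick(&mut self) -> bool;

        /// A rough analogue of the "update the rendering" steps of that same
        /// section: style/layout into a display list, then painter commands.
        fn render(&mut self) -> Vec<DrawCmd>;
    }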

You could also split the web engine part from the app embedding it in a similar way to the current split between platform and app.

Far superior, and more cost-effective, than the attempt at scaling autonomous agent coding pursued by FastRender. It shows how the important part isn't how many agents you can run in parallel, but rather how good an idea the human overseeing the project has (or rather: develops).


Hey, thanks a bunch for the review of the code itself! Personally I hadn't really looked deeply at it yet, especially the Windows and macOS code; interesting to hear that it has a slightly different approach for the painter, which to me is slightly unexpected.

Agree with your conclusion :)

It would be interesting to see how the architecture/design would change if I had focused the agent on making the code as modular and reusable as possible; currently I had some constraints around that, but nothing too strict. You bring up lots of interesting points, thanks again!


You're welcome.

The fact that it is new is meaningless: the output is useless even as a proof of concept web engine and should be discarded, alongside the agent engineering pattern that produced it.

By that logic any experiment that doesn't produce a perfect result should be deleted. I don't think that's a defensible position.

Yes, but that is not what I wrote.

I wrote: "useless even as a proof of concept". It doesn't have to be perfect; it just needs to show a clear path forward.


I think rendering this is a pretty great proof of concept that illustrates a path forward to doing better: https://static.simonwillison.net/static/2026/cursor-simonwil...

If I was trying to build a new browser and I got to that point within a few weeks of starting I would be ecstatic.


So I think that's an easy way to achieve ecstasy for you then. I suggest giving it a try.

A good place to look for how one could do this is https://github.com/DioxusLabs/blitz/tree/main

That project I consider a proper POC of a web engine, even though it doesn't even run JavaScript. Why? Because it has a nice architecture built around a clear idea (radical modularity) which could scale up to a full web engine one day, despite major challenges remaining.

I think that with AI assistance, if you had some idea, you could reshuffle components of Blitz and have your own thing rendering to the screen within a day.

Let's say you had a more ambitious goal, like taking Blitz and adding a JS engine like Boa. Well, if you had a clear idea of how to do it, you could get a nice little POC in a week or two.

Basically what I'm saying is that yes, the AI would save you a ton of typing and you'd be able to iterate on your idea. There are plenty of layout/graphics/JS components out there to choose from, so you could ensure a relatively small and clean POC.
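
To give a flavour of the Boa side of that, here is a minimal sketch of evaluating a script with boa_engine (recent versions, where `eval` takes a `Source`); the DOM glue a real engine would need is deliberately left out:

    use boa_engine::{Context, Source};

    fn main() {
        // A fresh JS context; a real engine would install `window`, `document`,
        // etc. as host-defined globals before running page scripts.
        let mut context = Context::default();

        // In an engine, the source would come from a parsed <script> element.
        let result = context
            .eval(Source::from_bytes("const x = 6 * 7; x"))
            .expect("script should evaluate");

        println!("script result: {}", result.display());
    }

The hard part, of course, is not evaluating scripts but wiring the JS global scope, the DOM tree, and the event loop together according to the specs.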

Someone doing that, with or without AI but with a good idea, would impress me.

FastRender on the other hand is just this humongous pile of spaghetti, and my guess is it still is entirely dependent on existing libraries for actually showing something to the screen.

So that's the clear failure of the agents, in my opinion: why produce so much code when it could be done so easily otherwise? Also, why BS your way through these architecture docs and pretend you are following the specs when in fact you are not?

Every time I try to browse the code I give up, mostly because when I look at something to try to understand how it fits into the whole, I end up realizing it's only used in some unit test.

For a quick comparison:

- https://github.com/DioxusLabs/blitz/blob/f828015b26d32b0bed3...

- https://github.com/wilsonzlin/fastrender/blob/19bf1036105d4e...

I believe the two are more or less doing the same thing, but one is 30x the size of the other.

I can't begin to understand the render loop of Fastrender from the code.

On the other hand, here is the one from the Blitz shell (the default Blitz app putting together the various modular components):

- Window runs in a winit loop: https://github.com/DioxusLabs/blitz/blob/f828015b26d32b0bed3...

- Redraw is at https://github.com/DioxusLabs/blitz/blob/f828015b26d32b0bed3...

- It calls into `paint_scene`, using the generic scene from the generic renderer: https://github.com/DioxusLabs/blitz/blob/f828015b26d32b0bed3... (rough skeleton below)
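
For anyone who hasn't read a winit shell before, the skeleton of such a loop looks roughly like this (a generic sketch against a winit 0.30-style ApplicationHandler, not Blitz's actual code; the paint call is just a placeholder comment):

    use winit::application::ApplicationHandler;
    use winit::event::WindowEvent;
    use winit::event_loop::{ActiveEventLoop, EventLoop};
    use winit::window::{Window, WindowId};

    #[derive(Default)]
    struct Shell {
        window: Option<Window>,
    }

    impl ApplicationHandler for Shell {
        fn resumed(&mut self, event_loop: &ActiveEventLoop) {
            // Create the window once the event loop is live.
            let window = event_loop
                .create_window(Window::default_attributes())
                .expect("failed to create window");
            self.window = Some(window);
        }

        fn window_event(&mut self, event_loop: &ActiveEventLoop, _id: WindowId, event: WindowEvent) {
            match event {
                WindowEvent::CloseRequested => event_loop.exit(),
                WindowEvent::RedrawRequested => {
                    // A Blitz-like shell would call into its generic renderer
                    // here (something like `paint_scene`) to draw the page.
                }
                _ => {}
            }
        }
    }

    fn main() {
        let event_loop = EventLoop::new().expect("failed to create event loop");
        event_loop
            .run_app(&mut Shell::default())
            .expect("event loop error");
    }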

Simple as that, and with a nice idea in terms of modularity.

That's a POC web engine.


> it's very much intended as a research project

If so then the failure of the experiment should be acknowledged.

The failure is described, among other places, at: https://news.ycombinator.com/item?id=46705625

> It's functional enough to render web pages

> FastRender may not be a production-ready browser, but it represents over a million lines of Rust code, written in a few weeks, that can already render real web pages to a usable degree.

This is something that can be done in much less than a million lines of code. There must be a core somewhere in FastRender (probably just a few thousand lines) which puts together existing layout and graphics libraries and makes it render something to the screen.

Doing that in a few weeks isn't impressive, especially not when buried in a million lines of spaghetti code.

If you want an example of a real prototype web engine built along radical design choices, head over to https://github.com/DioxusLabs/blitz

I'm pretty sure it renders far better than FastRender (the edits the agents made to Taffy are probably nonsense), and I'm guessing it is at most 50k lines.

Conclusion:

In light of the efforts to paper over the failures, I'm calling FastRender not a research project but propaganda.


They wanted to figure out patterns to have thousands of agents work on millions of lines together in parallel without stepping on each other's toes. They achieved that. Looks like a success to me.

Implementing a browser was just the demo for that. I called it their "hello world" at the end of my post.


> Looks like a success to me.

How is spaghetti code that does not implement the spec (web standards, in this case) a success?

You are one of the creators of Django, so let me try to give you an analogy: if someone ran thousands of agents in parallel to produce a web framework, and the code ended up being able to connect to a database and render a template using existing libraries, while the rest was total nonsense and otherwise useless to web devs, would you call that a success?

Success in software requires something that works as intended and is maintainable.


Their success criterion was "can we run thousands of agents at once and have them work towards a goal". By that criterion it was a success.

So yes, in your hypothetical I would call it a success IF their goal was parallel agent research. I'd call it a failure if they told me they were trying to build a production-quality alternative to Django.


I understand this is not meant as production-level quality, but as a web engineer I was expecting at least a decent POC with some interesting design ideas; not total spaghetti that even gets the spec wrong (despite the good idea of checking the specs into the repo).

They may have solved a problem related to agent coordination, like the one you discussed in your interview related to conflicts and allowing edits to merge without always compiling.

But at the end of the day, a novelty like this is only useful in so far as it produces good code; I don't see how coding agents are of any help otherwise.

So the failure of the pattern should be acknowledged, so we can move on and figure out what does work.

I speculate that what does work is actually quite similar to managing an open-source project: don't merge if it doesn't pass CI, and get a review from a human (the question is at what level of granularity). You also need humans in the project to decide on ways of doing things, so that the AI is relegated to its strength: applying existing patterns.

In all seriousness, you can tell Wilson to get in touch with me. With even only one person with domain knowledge involved in such an effort, and with some architectural choices made ahead of unleashing the herd, I think one could do amazing stuff.


I was actually thinking of your earlier comments about this from the perspective of a Servo engineer when I asked Wilson how much of his human-level effort on this project related to browser architecture as opposed to parallel agents research.

The answer I got made it clear to me that this wasn't a browser project - he outsourced almost all of the browser design thinking to the agents and focused on his research area, which was parallel agent coordination.

I'm certain having someone in the human driving seat who understands browsers and was actively seeking to build the best possible browser architecture would produce very different results!


Thanks for the clarification.

With the scope of the experiment in mind, I think we can deduce from it that AI is just not able to produce good software unsupervised. It's an important lesson.

To make a wider point, let's look at another of your predictions: that in 2026 the quality of AI code output will be undeniable. I actually think we've already reached that point. Since these agents came around I've never encountered a case where the AI wasn't able to code what I instructed it to. But that's not the same thing as software engineering, and in fact, I have never been impressed by the AI solving real problems for me.

It simply sucks at high quality software architecture. And I don't think this is due to a lack of computing power but that, rather, only humans can figure out what makes sense for them. And this matters, because if the software doesn't make sense, beyond very simple things you can test manually, it becomes impossible to know whether it works as intended.

A Web engine is a great example, because despite the extensive shared test suites and specifications, implementing one remains a challenge. You can write code and pass 90% of some sub-test-suite, and then figure out that your architecture for that Web API is all wrong, that you'll never get to the last 10%, and that in fact your code is fundamentally broken. Unleashing AI without supervision makes this problem worse, I think. Solving it requires human judgement and creativity.


Yeah I agree entirely.

Current coding agents are not up to the task of producing a production-quality web browser.

I genuinely can't imagine a better example of a sophisticated software project than a browser. Chrome, Firefox and WebKit each represent a billion-plus dollars of engineering salaries paid out to expert developers over multiple decades.

Browsers have to deal with a bewildering array of specifications - and handle websites that don't conform fully to those specifications.

They have extreme performance targets.

They operate in the most hostile environment imaginable - executing arbitrary code from the web! - so security is diabolically difficult as well.

I expect there's not a human on earth who could single-handedly architect a production-grade new browser; it requires too much specialist knowledge across too many different disciplines.

On that basis, I should reconsider my 2029 prediction. I was imagining something that looked like FastRender but maybe a little more advanced - working JavaScript, for example.

A much more interesting question is when we might see a production grade browser built mostly by coding agents.

I do think we could see that by 2029, but if we did it would be a very different shape from the FastRender project:

- a team of human experts driving the coding agents

- in development for at least a year

- built with the goal of "production ready Chrome competitor", not as a research project

The question then becomes who would fund such a thing - even a small team of experts doesn't come cheap, and I expect that LLM prices in 2029 will still measure in the tens or hundreds of thousands of dollars for this, if not more.

Hard for me to definitively predict that someone will step up to fund such a project - great open source browser engines exist already, so why would someone fund one from scratch?


> The question then becomes who would fund such a thing

Historically, new web engines came about when a new challenger wanted to have a stake in web standards development. The way it happened was never from scratch but with a fork of an existing engine. The last time this happened was with Google. The reason, I think, was wanting to evolve the web into an application-like platform (HTML5), and a new architectural idea: multi-process.

The person who was in charge of that effort is now at OpenAI.

Today there are also projects like Ladybird and Servo which follow a different model: started from scratch and driven by interest from a developer community. But so far neither has users in the real world, and so they haven't had an impact on the Web in the way Chromium has, yet.

Already today, both development models could benefit from the productivity gains of AI; in 2029 the game may have changed entirely. I can imagine a combination of math (TLA+, like I've done at https://github.com/w3c/IndexedDB/pull/484), web standards in their semi-formal English, and then some further guidance in terms of code architecture (through a conversation-like iterative loop), and see a FastRender-like approach that actually works. Humans would still be the ones defining and solving all the hard problems, but you'd be typing a whole lot less code...

I'm the one who was driving the efforts to start experimenting with AI in Servo, which was cut short by https://github.com/servo/servo/discussions/36379

I've been using AI on side projects ever since, and in those I don't type any code by hand anymore and end up doing things I would not even contemplate (due to time constraints) without the use of AI.

Example: https://medium.com/@polyglot_factotum/tla-in-support-of-ai-c...


I've done this in the parallel post, see https://news.ycombinator.com/item?id=46705625 (and a couple of other replies in that thread)

TL;DR: the code is not a valid POC but throwaway-level quality that could never support a functioning web engine. It's actually very clearly hallucinated AI BS, which is what you get when you don't have a human expert in the loop.

I actually like using AI, but only to save me the typing.


> what matters in the end is what the code does, not what it looks like

That is true in a way, although even for agents readability matters.

But the code here does not actually do the right thing, and the way it is written also means it never could.

Web devs do care whether the engine runs their code according to Web standards (otherwise it's early IE all over again), and end users do care that websites work as their devs intended.

The current state is throwaway-level quality.

I've critiqued it at length in the other post, see https://news.ycombinator.com/item?id=46705625


It's obvious by now that AI can write a whole bunch of code approximating all kinds of things. So there is no reason anymore for this to impress anyone.

A well-architected POC built in a week with a clear path to scaling it to a full implementation down the line would be impressive, but that's not what this is.

The current code output is basically throw-away level quality AI hallucinated BS.


> there are real complex systems being engineered towards the goal of a browser engine, even if not there yet.

In various comments in https://news.ycombinator.com/item?id=46624541 I have explained at length why your fleet of autonomous agents failed miserably at building something that could be seen as a valid POC.

One example: your rendering loop does not follow the web specs and makes no sense.

https://github.com/wilsonzlin/fastrender/blob/19bf1036105d4e...

The above design document is simply nonsense; typical AI hallucinated BS. Detailed critique at https://news.ycombinator.com/item?id=46705625

The actual code is worse; I can only describe it as a tangle of spaghetti. As a Browser expert I can't make much, if anything, out of it. In comparison, when I look at code in Ladybird, a project I am not involved in, I can instantly find my way around the code because I know the web specs.

So I agree this isn't just wiring up of dependencies, and neither is it copied from existing implementations: it's a uniquely bad design that could never support anything resembling a real-world web engine.

Now don't get me wrong, I do think AI could be leveraged to build a web engine, but not by unleashing autonomous agents. You need humans in the loop at all levels of abstractions; the agents should only be used to bang out features re-using patterns established or vetted by human experts.

If you want to do this the right way, get in touch: https://github.com/gterzian


So, first of all, as per my other comments on this thread and coming from a browser engineer: the autonomous coding agents failed miserably.

Whether it is the best-case scenario as a benchmark, I am not so sure.

The Web is indeed standardized and there are many open-source implementations out there. But implementing the Web in a novel way by definition means you are trying to solve some perceived problem with existing implementations.

So I would rephrase your statement as follows: rewriting an existing engine in another language, without any novelty, might be the best-case scenario for autonomous coding agents.

As an example of approaching the problem in a novel way: the FastRender code seems obsessed with metering of resources. Implementing the Web with that constraint in mind would be an interesting problem and not obvious at all. That's not what the project is doing so far, by the way, since the code is quite frankly a bunch of spaghetti that does not follow Web standards at all (in a way that is unrelated to the metering story, so the divergence from the specs is not novel; it's just wrong).


If you paid 5 cents for the code you would have been ripped off; it's throwaway stuff.

