Hacker News | Olshansky's comments


Yes and no.

---

Concrete example of a no: I set up [1] so that anyone can implement a new blog -> RSS feed: docs, agents.md, open source, free, etc...

Concrete example of a yes: Company spends too much money on simple software.

--- Our Vision ---

I feel the need to share: https://grove.city/

Human Flywheel: Human tips creator <-> Creator engages with audience

Agent Flywheel: Human creates creative content <-> Agent tips human

Yes, it uses crypto, but it's just stablecoins.

This is going to exist in some fashion and all online content creation (OSS and other) will need it.

---

As with everything, it's obvious.

[1] https://github.com/Olshansk/rss-feeds


Resurfacing a proposal I put out on llms-txt: https://github.com/AnswerDotAI/llms-txt/issues/88

We should add optional `tips` addresses in llms.txt files.
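For illustration, a hypothetical llms.txt with an optional `tips` section might look like the sketch below. The `Tips` section name and the addresses are illustrative placeholders from the proposal in issue #88, not part of the current llms-txt spec:

```markdown
# Example Blog

> Essays on open-source infrastructure and distributed systems.

## Docs

- [Archive](https://example.com/archive.md): full post archive in markdown

## Tips

<!-- Hypothetical optional section: stablecoin addresses an agent could tip -->
- USDC (Base): 0x0000000000000000000000000000000000000000
```

An agent that found the page useful could parse this section and send a small stablecoin tip to the listed address.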

We're also working on enabling and solving this at Grove.city.

Human <-> Agent <-> Human tips don't account for every edge case, but they're a necessary and happy medium.

Moving fast. Would love to share more with the community.

Wrote about it here: https://x.com/olshansky/status/2008282844624216293


At this point, it's pretty clear that the AI scrapers won't be limited by any voluntary restrictions. ByteDance never seemed to abide by robots.txt restrictions, and I think at least some of the others didn't either.

I can't see this working.


The thesis/approach is:

- Humans tip humans as a lottery ticket for an experience (meet the creator) or a sweepstakes (free stuff)

- Agents tip humans because they know they'll need original online content in the long term to keep improving

For the latter, frontier labs will need to fund their training/inference agents with a tipping jar.

There's no guarantee, but I can see it happening given where things are moving.


> Agents tip humans because they know they'll need original online content in the long term to keep improving.

Why would an agent have any long-term incentive? It's trained to 'do what it's told', not to predict the consequences of its actions.


I like the idea; (original) content creators being credited is good for the entire ecosystem.

Though if LLMs are willingly ignoring robots.txt, often hiding themselves or using third-party scraped data, are they going to pay?


llms-txt may be useful for responsible LLMs, but I am skeptical that llms-txt will reduce the problem of aggressive crawlers. The problematic crawlers are already ignoring robots.txt, spoofing user agents, and rotating through proxies. I'm not sure how llms-txt would address any of that.


This is great, but it saddens me that this is still just the average total compensation of a single engineer at Anthropic.

Unsure what the future looks like unless frontier labs start financing everything that is open source.


Appreciate the feedback!


Appreciate you calling this out. One of my goals is definitely to grow my audience. Will stop doing so.


I didn't get my name on this, but contributed to it as an undergrad: https://ieeexplore.ieee.org/abstract/document/6903679

We sped up fMRI analysis using distributed computing (MapReduce) and GPUs back in 2014.

Funny how nothing has changed.


He was actually my manager when I interned there in 2013 :)

One of the funniest guys I know.



Couldn't agree more

