Hacker News: sonnig's comments

I've owned iOS devices since around 2010, I think, and one of the first things I do when setting them up is disable autocorrect and most other typing assistance features. For this reason! It's also completely useless when typing in 2 or 3 languages.

iOS 18 really fucked up the multiple languages setup. It used to automatically detect secondary languages when I had the English keyboard, but still prefer English. Now you have to create keyboard combos and can't reuse languages (can't "prefer Spanish but also support English" and vice versa).

Yep, it's similar in that way, but not in an imageboard/discussion context.

Apologies for that, I have removed it.

Thanks, just seeing the hash made me literally shake

Yeah me too

I've changed this now, thanks for the feedback

i agree removing punctuation wouldve been a good idea alas it may be a bit too late since that would modify the hash of previous inputs in the future hmm but i will think about it

Me too, and 16 other users

95 other users*

True! That would be a more powerful approach. Here I kept it quite basic since I was not very familiar with the tooling. I do apply lowercasing of text + some whitespace stripping in order to increase the number of collisions a bit.

Edit: any other "quick hacks" to increase the number of collisions are welcome :)
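The lowercasing and whitespace stripping described above can be sketched as a normalization step applied before hashing; the function names here are hypothetical, and the optional punctuation stripping (discussed upthread as something that would change the hashes of previous inputs) is left off by default:

```python
import hashlib
import re


def normalize(text: str, strip_punctuation: bool = False) -> str:
    # Lowercase and collapse runs of whitespace, as described above.
    text = " ".join(text.lower().split())
    if strip_punctuation:
        # Off by default: enabling this would change the hashes
        # of previously submitted inputs.
        text = re.sub(r"[^\w\s]", "", text)
    return text


def input_hash(text: str) -> str:
    # Hash the normalized form so near-identical inputs collide.
    return hashlib.sha256(normalize(text).encode("utf-8")).hexdigest()
```

With this, inputs differing only in case or spacing (e.g. `"Me too"` and `"  me   TOO "`) hash to the same value, while punctuation still distinguishes inputs.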


Assuming both agents are using the same model, what could the reviewer agent add of value to the agent writing the code? It feels like "thinking mode" but with extra steps, and more chance of getting them stuck in a loop trying to overcorrect some small inane detail.

He does cover this later:

"I implemented a formula for Jeffrey Emanuel’s “Rule of Five”, which is the observation that if you make an LLM review something five times, with different focus areas each time through, it generates superior outcomes and artifacts. So you can take any workflow, cook it with the Rule of Five, and it will make each step get reviewed 4 times (the implementation counts as the first review)."

And I guess more generally, there is a level of non-determinism in there anyway.


Well put. I can't help thinking of this every time I see the 854594th "agent coordination framework" on GitHub. They all look strangely similar, are obviously themselves vibe-coded, and make no real effort to present any type of evidence that they help development in any way.

Mind sharing your workflow? I'm at 24.3x productivity right now, 5 parallel agents, 2 monitoring Opus agents, 1 architect agent and 2 Senior QA agents, each with independent memory and 12 MCP servers. They are running in 78 parallel tabs in ghostty.


Is their TC mainly in tokens or also in stock-tokens? Did you connect them to a Mame MCP server so they can play and rest a bit while churning out 50 PRs a day each? What is your continuity plan if they all plan to quit at once?


I am working with kilo-stock-tokens. Currently producing 3000 LoC/h (trying to ramp up to 6000 by the end of the week). I have also deployed 4 union-busting agents in case the other agents decide to quit all at once.

