
Actually, I felt pk-protect-ai's post was spot on. OpenAI currently has the most impressive interfaces and models, but (as was famously commented on by a Google exec) they don't really have any unique moat: while what OpenAI has built may be extraordinary, there is no "secret sauce" that prevents others from copying them. And this isn't a knock on OpenAI at all. On the contrary, my understanding is that OpenAI was the first to take the big risk of scaling its GPT models (i.e. spending hundreds of millions to train them before knowing what the outcome would be).


People keep saying this, but after 1-2 years nobody has gotten to their level yet.


This isn't true. Benchmarks do not define the usefulness of a model. Mixtral is much more useful for me right now than GPT-4. Look at LLaVA (or its newer release), and at the very impressive LWM (the text-only version is LLaMA2-based with a 512K context and does not require a TPU to run inference; see the sketch below). The fine-tuned LLaMA 34B is much faster than GPT-4 Turbo, less annoying, and produces impressively good results.
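
For concreteness, here's a minimal sketch of what local inference looks like with Hugging Face transformers. The model id is an assumption on my part; substitute whichever LWM-Text checkpoint you actually use:

  # Minimal sketch: long-context text inference on a single GPU (no TPU).
  # Assumes `transformers` and `accelerate` are installed; the model id
  # below is a placeholder for whichever LWM-Text checkpoint you pull.
  import torch
  from transformers import AutoModelForCausalLM, AutoTokenizer

  model_id = "LargeWorldModel/LWM-Text-Chat-512K"  # assumed repo name

  tokenizer = AutoTokenizer.from_pretrained(model_id)
  model = AutoModelForCausalLM.from_pretrained(
      model_id,
      torch_dtype=torch.bfloat16,  # half precision so the weights fit in VRAM
      device_map="auto",           # let accelerate place the layers
  )

  prompt = "Summarize the following document:\n..."
  inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
  output = model.generate(**inputs, max_new_tokens=256)
  # Strip the prompt tokens and print only the newly generated text.
  print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:],
                         skip_special_tokens=True))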


Who mentioned anything about benchmarks? YMMV depending on the exact use case, but from my extensive testing on my own cases, GPT-4 Turbo is still much easier to direct than any of the models you mentioned.


> was famously commented on by a Google exec

It was just a rank-and-file IC.



