
What's frustrating is that they're genuinely good at the things they add: documentation, UI/UX, their model zoo, etc. That alone is something to be proud of and adds real value. As of this writing they have 2,377 commits - there is quite a bit of effort, and resulting value, in what they're doing.

However, IMO it is pretty sleazy that they frequently make claims like "Ollama now supports X" with zero mention of llama.cpp[0] - an incredible project that makes what they're doing possible in the first place and largely enables these announcements. They don't even mention llama.cpp in their GitHub README or release notes, which cranks the sleaze up a few notches.

I don't know who they are or what their "angle" is, but this reeks of "some business opportunity/VC/something is going to come along and we'll cash in on AI hype while potentially misrepresenting what we're actually doing". To a more naive audience that doesn't quite understand the shoulders of giants they're standing on, it makes it seem that they are doing far more than they actually are.

Of course I don't know that this is the case, but it sure looks like it. It would be trivial for them to address, but they're also very good at marketing, and I assume that takes priority.

[0] - https://ollama.com/blog



First of all, they are not violating any license or terms in any form. They add value and enable thousands of people to use local LLMs who would not be able to do so as easily otherwise. Maybe llama.cpp should mention that Ollama takes care of easy, workable access to their functionality…


> First of all, they are not violating any license or terms in any form.

IANAL, but from what I understand that's debatable at the very least. You'll notice I said "sleazy" and didn't touch on the license, potential legal issues, etc.

I'm pointing out that other projects that are substantially based on or dependent on other software to do the "heavy lifting" nearly always acknowledge it. An example is faster-whisper, which is a good parallel and actually has "with CTranslate2" right in the heading[0], with direct links to whisper.cpp and CTranslate2 immediately following.

Ollama is the diametric opposite of this - unless you go spelunking through commits, etc., you'd have no idea that Ollama does relatively little of the underlying LLM work itself. Take a look at llama.cpp to see just how much of the "Ollama functionality" it actually provides.
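To make that concrete, here's a rough Python sketch - not lifted from either project's docs, with the ports, endpoints, and model name being my best recollection of the defaults, so treat it as illustrative - showing that talking to Ollama's local HTTP API and talking to llama.cpp's own bundled llama-server are nearly the same exercise; the actual generation happens in llama.cpp either way:

    import requests  # third-party; pip install requests

    PROMPT = "Why is the sky blue?"

    # Ollama's local API (default port 11434); "llama3" is just an example model name
    r = requests.post(
        "http://localhost:11434/api/generate",
        json={"model": "llama3", "prompt": PROMPT, "stream": False},
    )
    print(r.json()["response"])

    # llama.cpp's own llama-server (default port 8080), no Ollama involved
    r = requests.post(
        "http://localhost:8080/completion",
        json={"prompt": PROMPT, "n_predict": 128},
    )
    print(r.json()["content"])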

Then look at /r/LocalLLaMA, HN, etc. to see just how many Ollama users (most of them) have no idea that llama.cpp even exists.

I don't know how this could be anything other than an attempt to mislead people into thinking Ollama is uniquely and directly implementing all of the magic. It's pretty glaring and has been pointed out repeatedly. It's not some casual oversight.

> They add value and enable thousands of people to use local LLMs who would not be able to do so as easily otherwise.

That's the very first thing I said, going so far as to mention the commits, the model zoo, etc., while specifically acknowledging the level of effort and added value.

> Maybe llama.cpp should mention that Ollama takes care of easy, workable access to their functionality…

Are you actually suggesting that enabling software should mention, track, or even be aware of the likely countless projects that are built on it?

[0] - https://github.com/SYSTRAN/faster-whisper


The llama.cpp license (MIT) does actually require attribution, and I'm not sure exactly how Ollama is complying with that.



