From what I understood, the case against OpenAI wasn't about the summarisation. It was about the fact that the AI was trained on copyrighted work. In the case of Wikipedia, the assumption is that someone purchased the book, read it, and then summarised it.
They’re sort of separate. In a sense you could say that the ChatGPT model is a lossily compressed version of its training corpus. We acknowledge that a jpeg of a copyrighted image is a violation. If the model can recite Harry Potter word for word, even imperfectly, this is evidence that the model itself is an encoding of the book (among other things).
You hear people saying that a trained model can’t be a violation because humans can recite poetry, etc., but a transformer model is not human, and, in a way that matters both philosophically and economically, human brains can’t be copied and scaled.
They're very separate in terms of what seems to have happened in this case. This lawsuit isn't about memory or LLMs being archival/compression software (imho, a very far reach) or anything like that. The plaintiffs took a bit of text that was generated by ChatGPT and accused OpenAI of violating their IP rights, using the output as proof. As far as I understand, the method by which ChatGPT arrived at the output, or how Game of Thrones is "stored" within it, is irrelevant; the authors allege that the output text itself is infringing regardless of circumstance, and therefore OpenAI should pay up. If it's eventually found that the short summary is indeed infringing on the copyright of the full work, there is absolutely nothing preventing the authors (or someone else who could later refer to this case) from suing someone else who wrote a similar summary, with or without the use of AI.
> You hear people saying that a trained model can’t be a violation because humans can recite poetry, etc
Also worth noting that, if a person performs a copyrighted work from memory - like a poem, a play, or a piece of music - that can still be a copyright violation. "I didn't copy anything, I just memorized it" isn't the get-out-of-jail-free card some people think it is.
A jpeg of a copyrighted image can be copyright infringement, but isn't necessarily. A trained model can be copyright infringement, but isn't necessarily. A human reciting poetry can be copyright infringement, but isn't necessarily.
The means of reproduction are immaterial; what matters is whether a specific use is permitted or not. That a reproduction of a work is found to be infringing in one context doesn't mean it is always infringing in all contexts; conversely, that a reproduction is considered fair use doesn't mean all uses of that reproduction will be considered fair.
I would guess that if there were a court case where a poet sued someone who was reciting his poetry commercially, for pay (say, selling tickets specifically for it), the poet might very well win. So reciting poetry probably could be copyright infringement at a certain scale.
And since AI companies are commercial entities, I would lean towards the view that what they're doing in general, even when not reproducing specific works, could be infringement too.
That doesn't really make sense. Just because you purchased a book does not mean the copyright goes away (for new works based on the book; for the physical book you bought, the doctrine of first sale gives you some rights, but only in that specific physical copy). If OpenAI pirated material, that would be a separate issue from whether the output of the LLM is infringing.
No, I used to work at a newspaper, and we were constantly switching between text editing, graphic design, and image processing tools for our work. This makes a lot of sense! That said, most magazines and newspapers have designated people who focus on each of these, and the chances of one person having to shift between all three are fairly slim.
Yes, these are faces, but why do they look like pillars to me? Ornamented and sculpted pillars are pretty common across civilizations, and I can imagine a sloping tent-like roof setup held up by these pillars. How does one distinguish a pillar from an obelisk?
What's the point of GenAI in a manufacturing pipeline? Good ol' ML-based AI automation is already heavily used in larger manufacturing plants to identify defects.
Large companies negotiate and get lower rates, but individuals get the sticker price. So it is in Microsoft's interest if more people took up the individual option. That said, I don't see it working in companies where everything is locked down.
I've been using the RayBan Meta glasses for a while now, and the main reason I like them is that they do not have a display (https://balanarayan.com/2024/12/31/ray-ban-meta-long-term-re...). Another screen to glare at is the last thing I need, but I can imagine there are people who want one.
I use them for taking videos when I'm out and for listening to music without putting on headphones or earphones. While they're not the best at anything, they're definitely capable of doing a lot of things well enough, and that is what matters a lot of the time.
Same, but I would love to have map navigation displayed occasionally. I cycle in the city a lot, and so many times I've had to pull out my phone (+unlock with Face ID) while cycling just to see the directions, which is both frustrating and dangerous.
Until my 15" MBAir purchase last year, it had been seventeen years since I'd purchased a laptop (edit: M2Pro Mini was first new computer in fifteen years).
Costco had an $850 deal for an M3 Air, and as soon as I picked it up the sales clerk was printing my purchase ticket. It is so light, and the feet actually seem like they'll last a few years (unlike the solid-body designs since 2009).
Keyboard is fantastic. Battery life is unreal. Screen is beautiful.
I have been a full-time Macintosh user since 1991 (I remember all of 68k->PPC->x86->Silicon), and only recently set up my first Linux machine (because the modern OEM OSes are increasingly too invasive with AI'ification), a re-purposed MacPro5,1 #4evr