Think of it this way: out of every company in the world you could invest that money in, do you think it's best invested in your company or some other one? Because if there's another one (e.g. Nvidia, Apple, etc.), then you should take the money out and move it into that company's stock.
I don’t think anyone really knows the answer yet. UK law has much looser standards for copyrightability than US law - UK law accepts the “sweat of the brow” doctrine, where mere human effort is enough to create copyright even if it lacks any significant creative element. Under UK law, a transcriptionist transcribing an audio recording creates a new copyright in the transcription, separate from the copyright in the audio itself; US law does not consider a mere verbatim transcription to be sufficiently original to create a new copyright. But will UK judges extend “sweat of the brow” to include AI sweat as well as human sweat? My gut feeling is probably “yes”, but I’m not aware of any case law on the topic yet. A complicating factor is that a lot of wealthy vested interests are going to push for the law in this area to evolve in a way that suits them - both in the courts and in Parliament - so the law might not evolve the way you’d expect if judges were just left to logically extend existing precedents.
Even in the US, I think the situation is complex. If I prompt an LLM to edit a copyrighted human-written text, the LLM output is going to be copyrighted, because even if the LLM’s changes aren’t copyrightable, the underlying text is. And what happens if an LLM proposes edits, and then a human uses their own judgement to decide which LLM edits to accept and which not to? That act of human judgement might provide grounds for copyrightability which weren’t present in the raw LLM output.
Human attention got ruined with the later generations. The generation before us was skilled on a whole different level, and it's hard for a millennial (me) or Gen Z to get close to them.
Commenting with this self-superior attitude is what you're spending your limited time on earth doing? Have a little joy in your life, or at least leave people alone.
Just imagine that other people will carry the burden, and mentally distance yourself from it so it stops wearing you out. You can take up the burden again later, once you've recovered and others are worn out.
My hunch is that OpenAI strategically used Ghibli as the example in their earlier DALL-E blog posts because the PM had earlier said anime isn't protected by copyright when it comes to training. OpenAI is always sneakier than most people give them credit for.
I know the comments here are gonna be negative, but I just find this so sick and awesome. Feels like it's finally close to the potential we knew was possible a few years ago. Feels like a Pixar moment, when CG tech showed a new realm of what was possible with Toy Story.
These videos are a very impressive engineering feat. There are a lot of uses for this capability that will be beneficial to society, and in the coming years people will come up with more good uses nobody today has thought of yet.
But clearly we also see some major downsides. We already have an epidemic of social media rotting people's minds, and everything about this capability is set to supercharge those trends. OpenAI addresses some of these concerns, but there's absolutely no reason to think OpenAI will do anything other than whatever they perceive will make them the most money.
An analogy would be a company coming up with a way to synthesize and distribute infinite high-fructose corn syrup. There are positive aspects to cheaply making sweet tasting food, but we can also expect some very adverse effects on nutritional health. Sora looks like the equivalent for the mind.
There's an optimistic take on this fantastic new technology making the world a better place for all of us in the long run, after society and culture have adapted to it. It's going to be a bumpy ride before we get there.
I actually wonder if this will kill off the social apps and the bragging that happens on them. They'll be flooded with people faking themselves doing the unimaginable.
This is also my thesis. The internet is going to be saturated with AI slop indiscernible from real content. Once it reaches a tipping point, there will no longer be much of a reason to consume the content at all. I think social networks that can authenticate video/photo/text content as human-created will be a major trend in a few years.
I have no clue if the reactions are real, but there are some videos online of people showing their grandparents gameplay from Grand Theft Auto games trying to convince them that it is real footage. The point of the videos is to laugh at their reactions where they question if it really happened, etc.
Maybe this will result in something similar, but it can affect more people who aren’t as wary.
I regularly get AI movie recaps in my shorts feed and I just eat them up.
The very fact that I (or billions of others) waste time on shorts is an issue. I don't even play games anymore, it's just shorts. That is a concerning rewiring of the brain :/
Guess what I'm trying to say is that there is a market out there. It's not pretty, but there certainly is.
Will keep trying to not watch these damn shorts...
Yes, I wonder if the content distribution networks that call themselves "social networks" can even survive something like this.
Of course, the ones focused on the content can always editorialize the spam out. And in real social networks you can ask your friends to stop making that much slop. But this could finally be the end of Facebook-like stuff.
> There are a lot of uses for this capability that will be beneficial to society
Are there? “A lot” of them? Please name a few that will be more beneficial than the very obvious detrimental uses like “making up life-destroying lies about your political opponents or groups of people you want to vilify” or “getting away with wrongdoing by convincing the judge a real video of yourself is a deepfake”.
It can generate funny videos of bald JD Vance and Harry Potter characters for TikTok. Which makes me wonder, what is the actual plan to make money off these models? Billions have been invested but the only thing they seem to be capable of is shitposting and manipulation. Where is the money going to come from?
> There are a lot of uses for this capability that will be beneficial to society
Please enlighten me. What are they? If my elderly grandma is on her deathbed and I have no way to get to see her before she passes, will she get more warmth and fond memories of me with a clip of my figure riding an AI generated dragon saying goodbye, or a handwritten letter?
Still zero responses, eh? My example was charged but I clearly had a point: how does AI fill a void where meaning should be, over what has worked for centuries? How is it better than face to face, or a handwritten letter?
Suno allows me to rapidly flesh out demos and brainstorm. I've played music manually my whole life. It's easier for me to find what I'm looking for while avoiding demo love.
I still feel this is limited by what it learned from. It looks cool, but it also looks like something I'd dreamt or seen while flicking through TV channels. Kind of like spam for the eyes.
No doubt they'll be able to create Hollywood-quality clips if the tools get good enough to keep objects consistent - for example, coming back to the same scene with the same decor - and to keep the actors emotionally consistent.
I think this is not nearly as important as most people think it is.
In Hollywood movies, everyone already knows about "continuity errors" - like when the water level of a glass goes up over time due to shots being spliced together. Sometimes shots with continuity errors are explicitly chosen by the editor because they had the most emotional resonance for the scene.
These types of things rarely affect our human subjective enjoyment of a video.
In terms of physics errors - current human CGI has physics errors. People just accept it and move on.
We know that Superman can't lift an airplane because all of that weight on a single point of the fuselage wouldn't hold, but, like, whatever.
Location consistency is important. Even something as simple and subtle as breaking the 180-degree rule [1] feels super uncanny to most audiences. Let alone changing the set the actor occupies, their wardrobe, props, etc.
There are lots of tools being built to address this, but they're still immature.
Well put. Honestly, the actor part is mostly solved by now; the tricky part is depicting any kind of believable, persistent space across different shots. Based on amateur outputs from places like https://www.reddit.com/r/aivideo/, at least!
This release is clearly capable of generating mind-blowingly realistic short clips, but I don't see any evidence that longer, multi-shot videos can be automated yet. With a professional's time and existing editing techniques, however...
I wonder if this stuff is trained on enough Hallmark movies that even AI actors will buy a hot coffee at a cafe and then proceed to flail the empty cup around like the humans do. Really takes me out of the scene every time - they can't even put water in the cup!?
No way man, this is why I loved Mr. Robot: they actually paid a real expert and worked the story around realism, not just made-up gobbledygook that shuts my brain off entirely to its nonsense.
Cool demo! But let’s pour one out for all the weird, janky, hand crafted videos that made early internet so fun. Anyone else still crave that kind of content?
The ability for the masses to create any video just by typing, among the other features, is not novel technology? Or is it just the lack of emotional response?