While most here are aligned with your perspective, and for good reasons, let me offer an alternate one. Today AI can take a goal and create a workflow for it, something that orgs pay for in SaaS solutions.
AI does it imperfectly today, but if you had to bet, would you bet that it gets better or worse? I would bet that it will improve, and, as is often the case with tech, at an exponential rate. Then we would see any workflow described in plain language and, within minutes, great software churned out. It might be a question of when (not if) that happens. And are you prepared for that state of affairs?
Reading is thinking someone else's thoughts => That is true if you are strictly reading passively. Typically what happens is that reading opens many doors that lead to your own thinking. Of course, it depends on the type of material you are reading as well. But often reading broadens your thinking relative to just putting your own thoughts on paper.
Definitely a good point. I live in a college town and know many people who read all the time but don't actually do anything active with what they've read. They just consume it continuously and think they understand many topics. Except when you talk to them, it comes out quickly that they didn't actually understand what they read on a deep level; they just went along for the "thinking ride".
And, as you point out, if you push yourself to read actively, it helps a lot!
Those four characters are enough friction to slowly grind down the number of today's outraged people into a population small enough that, when Google stops supporting '-ai', people will think it's weird that you still care.
Why would it cast any doubt? If you can use o1 output to build a better R1, then use R1 output to build a better X1... then a better X2... XN, that just shows a method to create better systems for a fraction of the cost from where we stand. If it was that obvious, OpenAI should have done it themselves. But the disruptors did it. In hindsight it might sound obvious, but that is true for all innovations. It is all good stuff.
I think it would cast doubt on the narrative "you could have trained o1 with much less compute, and r1 is proof of that", if it turned out that in order to train r1 in the first place, you had to have access to a bunch of outputs from o1. In other words, you had to do the really expensive o1 training in the first place.
(with the caveat that all we have right now are accusations that DeepSeek made use of OpenAI data - it might just as well turn out that DeepSeek really did work independently, and you really could have gotten o1-like performance with much less compute)
> In this study, we demonstrate that reasoning capabilities can be significantly improved through large-scale reinforcement learning (RL), even without using supervised fine-tuning (SFT) as a cold start. Furthermore, performance can be further enhanced with the inclusion of a small amount of cold-start data.
Is this cold-start data what OpenAI is claiming is their output? If so, what's the big deal?
DeepSeek claims that the cold-start data is from DeepSeekV3, which is the model that has the $5.5M pricetag. If that data were actually the output of o1 (a model that had a much higher training cost, and its own RL post-training), that would significantly change the narrative of R1's development, and what's possible to build from scratch on a comparable training budget.
In the paper DeepSeek just says they have ~800k responses that they used for the cold start data on R1, and are very vague about how they got it:
> To collect such data, we have explored several approaches: using few-shot prompting with a long CoT as an example, directly prompting models to generate detailed answers with reflection and verification, gathering DeepSeek-R1-Zero outputs in a readable format, and refining the results through post-processing by human annotators.
My surface-level reading of these two sections is that the 800k samples come from R1-Zero (i.e. "the above RL training") and V3:
>We curate reasoning prompts and generate reasoning trajectories by performing rejection sampling from the checkpoint from the above RL training. In the previous stage, we only included data that could be evaluated using rule-based rewards. However, in this stage, we expand the dataset by incorporating additional data, some of which use a generative reward model by feeding the ground-truth and model predictions into DeepSeek-V3 for judgment.
>For non-reasoning data, such as writing, factual QA, self-cognition, and translation, we adopt the DeepSeek-V3 pipeline and reuse portions of the SFT dataset of DeepSeek-V3. For certain non-reasoning tasks, we call DeepSeek-V3 to generate a potential chain-of-thought before answering the question by prompting.
The non-reasoning portion of the DeepSeek-V3 dataset is described as:
>For non-reasoning data, such as creative writing, role-play, and simple question answering, we utilize DeepSeek-V2.5 to generate responses and enlist human annotators to verify the accuracy and correctness of the data.
I think if we were to take them at their word on all this, it would imply there is no specific OpenAI data in their pipeline (other than perhaps their pretraining corpus containing some incidental ChatGPT outputs that are posted on the web). I guess it's unclear where they got the "reasoning prompts" and corresponding answers, so you could sneak in some OpenAI data there?
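For what it's worth, rejection sampling for SFT data curation in the sense the paper describes usually looks roughly like the sketch below. This is just my schematic reading, not DeepSeek's code; `policy`, `judge`, and the prompt set are hypothetical stand-ins.

    # Minimal sketch of rejection sampling for SFT data curation.
    # `policy` stands in for the RL checkpoint and `judge` for a rule-based
    # check or a generative reward model (e.g. an LLM comparing the answer
    # against the ground truth). Both are hypothetical placeholders.
    def curate_sft_data(prompts, policy, judge, samples_per_prompt=16):
        dataset = []
        for prompt in prompts:
            # Sample several candidate reasoning trajectories per prompt.
            candidates = [policy.generate(prompt) for _ in range(samples_per_prompt)]
            # Reject candidates the judge doesn't accept.
            accepted = [c for c in candidates if judge(prompt, c)]
            if accepted:
                dataset.append({"prompt": prompt, "response": accepted[0]})
        return dataset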
That's what I am gathering as well. Where is OpenAI going to get substantial proof to claim that their outputs were used?
The reasoning prompts and answers for SFT from V3, you mean? No idea. For that matter, you have no idea where OpenAI got this data from either. If they open this can of worms, their can of worms will be opened as well.
It's like the claim "they showed anyone can create a powerful model from scratch" becomes "false yet true".
Maybe they needed OpenAI for their process. But now that their model is open source, anyone can use that as their cold start and spend the same amount.
"From scratch" is a moving target. No one who makes their model with massive data from the net is really doing anything from scratch.
Yeah, but that kills the implied hope of building a better model for cheaper. Like this you'll always have a ceiling of being a bit worse than the OpenAI models.
The logic doesn't exactly hold; it is like saying that a student is limited by their teachers. It is certainly possible that a bad teacher will hold the student back, but ultimately a student can lag behind or improve on the teacher with only a little extra stimulus.
They probably would need some other source of truth than an existing model, but it isn't clear how much additional data is needed.
Don't forget that this model probably has far fewer params than o1 or even 4o. This is a compression/distillation, which means it frees up a lot of compute resources to build models much more powerful than o1. At least this allows further scaling compute-wise (if not in the amount of non-synthetic source material available for training).
If R1 were better than o1, yes, you would be right. But the reporting I've seen is that it's almost as good. Being able to copy cutting-edge models won't advance the state of the art in terms of intelligence. They have made improvements in other areas, but if they reused o1 to train their model, that would be effectively a ctrl-c / ctrl-v strictly in terms of task performance.
It's not just about whether competitors can improve on OpenAI's models. It's about whether they can continually create reasonable substitutes for orders of magnitude less investment.
> It's about whether they can continually create reasonable substitutes for orders of magnitude less investment
That just means that the edge you’re able to retain if you invest $1B is nonexistent. It also means there’s a huge disincentive to invest $1B if your reward instantly evaporates. That would normally be fine if the competitor is otherwise able to get to that new level without the $1B. But if it relies on your $1B to then be able to put in $100M in the first place to replicate your investment, it essentially means the market for improvements disappears OR there’s legislation written to ensure competitors aren’t allowed to do that.
This is a tragedy of the commons and we already have historical example for how humans tried to deal with it and all the problems that come with it. The cost of producing a book requires substantial capital but the cost of copying it requires a lot less. Copyright law, however flawed and imperfect, tries to protect the incentive to create in the face of that.
> That just means that the edge you’re able to retain if you invest $1B is nonexistent.
Jeez. Must be really tough to have some comparatively small group of people financially destroy your industry with your own mechanically-harvested professional output while dubiously claiming to be better than you, when in reality it's just a lot cheaper. Must be tough.
Maybe they should take some time to self-reflect and make some art and writing about it using the products they make that mechanically harvest the work of millions of people, and have already screwed up the commercial art and writing marketplaces pretty thoroughly. Maybe tell DeepSeek it's their therapist and get some emotional support and guidance.
This. There is something doubly evil about OpenAI harvesting all of that work for its own economic benefit, while also destroying the opportunity for those it stole from to continue to ply their craft.
And then all of their stans taking on a persecution complex because people that actually made all of the “data” don’t uncritically accept their work as equivalent adds insult to injury.
>it essentially means the market for improvements disappears OR there’s legislation...
This is possibly true, though with billions already invested I'm not sure that OpenAI would just...stop absent legislation. And, there may be technical or other solutions beyond legislation. [0]
But, really, your comment here considers what might come next. OTOH, I was replying to your prior comment that seemed to imply that DeepSeek's achievement was of little consequence if they weren't improving on OpenAI's work. My reply was that simply approximating OpenAI's performance at much lower cost could still be extraordinarily consequential, if for no other reason than the challenges you subsequently outlined in this comment's parent.
[0] On that note, I'm not sure (and admittedly haven't yet researched) how DeepSeek just wholesale ingested ChatGPT's "output" to be used for its own model's training, so not sure what technical measures might be available to prevent this going forward.
First, there could have been industrial espionage involved so who knows. Ignoring that, you’re missing what I’m saying. Think of it this way - if it requires O1’s input to reach almost the same task performance, then this approach gives you a cheap way to replicate the performance of a leading edge model at a fraction of the cost. It does not give you a way to train something that beats a cutting edge model. Cutting edge models require a lot of R&D & capital expenditure - if they’re just going to be trivially copied after public availability, the response is going to be legislation to keep the incentive there to keep meaningful investments in that area. Otherwise you’re going to have another AI winter where progress shrivels because investment dollars dry up.
That’s why it’s so hard to understand the true cost of training Deepseek whereas it’s a little bit easier for cutting edge models (& even then still difficult).
"Otherwise you’re going to have another AI winter where progress shrivels because investment dollars dry up."
Tbh a lot of people in the world would love this outcome. They will use AI because not using it puts them at a comparative disadvantage - but would rather AI doesn't develop further or didn't develop at all (i.e. they don't value the absolute advantage/value). There's both good and bad reasons for this.
When you build a new model, there is a spectrum of how you use the old model: 1. taking the weights, 2. training on the logits, 3. training on model output, 4. training from scratch. We don't know how much advantage #3 gives. It might be the case that with enough output from the old model, it is almost as useful as taking the weights.
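To make the difference between #2 and #3 concrete, here's a rough PyTorch-style sketch (my own illustration, not from either paper): #2 needs access to the teacher's full next-token distribution, while #3 only needs its sampled text, which then serves as ordinary hard labels.

    import torch
    import torch.nn.functional as F

    # 2. Training on the logits: the student matches the teacher's full
    #    next-token distribution (soft targets; requires the teacher model).
    def logit_distillation_loss(student_logits, teacher_logits, T=2.0):
        return F.kl_div(
            F.log_softmax(student_logits / T, dim=-1),
            F.softmax(teacher_logits / T, dim=-1),
            reduction="batchmean",
        ) * (T * T)

    # 3. Training on model output: the teacher's sampled tokens are treated
    #    as ordinary labels (API access to the teacher's text is enough).
    def output_imitation_loss(student_logits, teacher_token_ids):
        return F.cross_entropy(
            student_logits.view(-1, student_logits.size(-1)),
            teacher_token_ids.view(-1),
        )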
I lean on the idea that R1-Zero was trained from cold start, at the same time, they have tried many things including using OpenAI APIs. These things can happen in parallel.
> you had to do the really expensive o1 training in the first place
It is no better for OpenAI in this scenario either, any competitor can easily copy their expensive training without spending the same, i.e. there is a second mover advantage and no economic incentive to be the first one.
To put it another way, the $500 billion Stargate investment will be worth just $5 billion once the models become available for consumption, because it will only take that much to replicate the same outcomes with new techniques, even if the cold start needed o1 output for RL.
Now that it's been done, is OpenAI needed, or can you iterate on DeepSeek only moving forward?
My understanding is this effectively builds on OpenAI's very expensive initial work, provides a "nearly as good as" model for orders of magnitude cheaper to train and run, that also provides a basis to continue building on and improving without openAI, and without human bottlenecks.
That cuts OAI off at the knees in terms of market viability after billions have been spent. If DS can iterate and match the capabilities of the current in-development OAI models in the next year, it may come down to regulatory capture and government intervention to ensure its viability as a company.
You cannot really have successful government intervention against open source code and open weights.
The attempt in cryptography with PGP and export controls made that clear.
Even if DS specifically is banned (and even effectively), a dozen other clean room replications following their published methods will become available.
It is possible this government will ban all "unapproved" LLMs not running at an authorized provider [1], saying it is a weapon, or AGI, or Skynet, or whatever makes the powers that be sound important, thus establishing the need for control [2]. The rest of the world will keep innovating.
—-
[1] Bans just need to work economically, not at the information level, i.e. organizations with liability considerations will not use "unapproved" models, and they are the ones who will spend the bulk of the money, and that is what the bans need to protect.
[2] If they were smart, they could do this positively without the backlash bans would have, by giving protections to compliant models, like legal indemnity for model companies and users, without necessarily blocking others.
o1 wouldn't exist without the combined compute of every mind that led to the training data they used in the first place. How many h100 equivalents are the rolling continuum of all of human history?
Human reasoning, as it exists today, is the result of tens of thousands of years of intuition slowly distilled down to efficient abstract concepts like "numbers", "zero", "angles", "cause", "effect", "energy", "true", "false", ...
I don't know what reasoning from scratch would look like without training on examples from other reasoning beings. As human children do.
Actually I also think it's possible. Start with the natural-numbers axiom system. Form all valid sentences of increasing length. RL a model to search for counterexamples or proofs. This, on a sufficiently powerful computer, should produce superhuman math performance (efficiency), even at compute parity.
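As a toy illustration of the "form sentences, then search for counterexamples" half of that idea (brute force over small numbers only; the actual proposal would need a real proof system and an RL-trained search policy rather than enumeration):

    from itertools import product

    # Toy conjectures over the naturals, refuted (or not) by brute-force
    # counterexample search over a small range.
    conjectures = {
        "n + m == m + n": lambda n, m: n + m == m + n,
        "n * n >= n":     lambda n, m: n * n >= n,
        "n + m >= n * m": lambda n, m: n + m >= n * m,  # false, e.g. n=2, m=3
    }

    for name, claim in conjectures.items():
        counterexample = next(
            ((n, m) for n, m in product(range(50), repeat=2) if not claim(n, m)),
            None,
        )
        if counterexample:
            print(f"{name}: refuted by {counterexample}")
        else:
            print(f"{name}: no counterexample found")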
I wonder how much discovery in math happens as a result in lateral thinking epiphanies. IE: A mathematician is trying to solve a problem, their mind is open to inspiration, and something in nature, or their childhood or a book synthesizes with their mental model and gives them the next node in their mental graph that leads to a solution and advancement.
In an axiomatic system, those solutions are checkable, but how discoverable are they when your search space starts from infinity? How much do you lose by disregarding the gritty reality and foam of human experience? It provides inspirational texture that helps mathematicians in the search at least.
Reality is a massive corpus of cause and effect that can be modeled mathematically. I think you're throwing the baby out with the bathwater if you even want to be able to math in a vacuum. Maybe there is a self optimization spider that can crawl up the axioms and solve all of math. I think you'll find that you can generate new math infinitely, and reality grounds it and provides the gravity to direct efforts towards things that are useful, meaningful and interesting to us.
As I mentioned in a sister comment, Gödel's incompleteness theorems also throw a wrench into things, because you will be able to construct logically consistent "truths" that may not actually exist in reality. At which point, your model of reality becomes decreasingly useful.
At the end of the day, all theory must be empirically verified, and contextually useful reasoning simply cannot develop in a vacuum.
Those theorems are only relevant if "reasoning" is taken to its logical extreme (no pun intended). If reasoning is developed/trained/evolved purely in order to be useful and not pushed beyond practical applications, the question of "what might happen with arbitrarily long proofs" doesn't even come up.
On the contrary, when reasoning about the real world, one must reason starting from assumptions that are uncertain (at best) or even "clearly wrong but still probably useful for this particular question" (at worst). Any long and logic-heavy proof would make the results highly dubious.
A question is: what algorithms does the brain use to make these creative lateral leaps? Are they replicable?
Unless the brain is using physics that we don’t understand or can’t replicate, it seems that, at least theoretically, there should be a way to model what it’s doing with silicon and code.
States like inspiration and creativity seem to correlate in an interesting way with ‘temperature’, ‘top p’, and other LLM inputs. By turning up the randomness and accepting a wider range of output, you get more nonsense, but you also potentially get more novel insights and connections. Human creativity seems to work in a somewhat similar way.
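A minimal sketch of those two knobs (nothing LLM-specific, just the sampling math as I understand it): temperature rescales the logits before sampling, and top-p trims the candidate pool to the smallest set covering p of the probability mass.

    import numpy as np

    def sample_next_token(logits, temperature=1.0, top_p=0.9):
        rng = np.random.default_rng()
        # Temperature > 1 flattens the distribution (more surprising picks),
        # temperature < 1 sharpens it (more conservative picks).
        z = logits / temperature
        probs = np.exp(z - z.max())
        probs /= probs.sum()
        # Top-p (nucleus): keep the smallest set of tokens whose cumulative
        # probability reaches top_p, then renormalize and sample.
        order = np.argsort(probs)[::-1]
        cutoff = np.searchsorted(np.cumsum(probs[order]), top_p) + 1
        keep = order[:cutoff]
        return rng.choice(keep, p=probs[keep] / probs[keep].sum())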
Dogs are probably the best example I can think of. They learn through experience and clearly reason, but without a complex language to define abstract concepts. Its very basic reasoning, but they do learn and apply that learning.
To your point, experience is the training. Without language/data to represent human experience and knowledge to train a model, how would you give it 'experience'?
And yet dogs, to a very high degree, just learn the same things. At least the same kinds of things, over and over.
They were pre-designed to learn what they always learn. Their minds structured to readily make the same connections as puppies, that dogs have always needed to survive.
Not for real reasoning, which by its nature, does not have a limit.
> just learn the same things. At least the same kinds of things, over and over.
It's easy to train the same things to a degree, but it's amazing to watch different dogs individually learn and reason through things completely differently, even within a breed or even a litter.
Reasoning ability is always limited by the capacity of the thinker to frame the concepts and interactions. It's always limited by definition; we only push that limit farther than other species, and AGI may eventually push it past our abilities.
There was necessarily a "first reasoning being" who learned reasoning from scratch, and then it's improved from there. Humans needed tens of thousands of years because:
- humans experience reality at a slower pace than AI could theoretically experience a simulated reality
- humans have to transfer knowledge to the next generation every 80 years (in a manner that's very lossy), and around half of each human lifespan is spent learning things that the previous generation already knew
Whether the first reasoning entity is an individual organism or a group of organisms is completely irrelevant to the original point. If one were to grant that there was in fact a "first reasoning group" rather than a "first reasoning being" the original argument would remain intact.
That was the easy part though; figuring out how to handle all the unintended side effects it generated is still an ongoing process. Please sit back and relax while we solve the few incidental events occurring here and there. Rest assured we are putting our best effort into their resolution.
It is possible to learn to reason from scratch, that's what R1-0 did, but the resulting chains of thought aren't legible to humans.
To quote DeepSeek directly:
> DeepSeek-R1-Zero, a model trained via large-scale reinforcement learning (RL) without supervised fine-tuning (SFT) as a preliminary step, demonstrated remarkable performance on reasoning. With RL, DeepSeek-R1-Zero naturally emerged with numerous powerful and interesting reasoning behaviors. However, DeepSeek-R1-Zero encounters challenges such as endless repetition, poor readability, and language mixing. To address these issues and further enhance reasoning performance, we introduce DeepSeek-R1, which incorporates cold-start data before RL.
If you look at the benchmarks of the DeepSeek-V3-Base, it is quite capable, even in 0-shot: https://huggingface.co/deepseek-ai/DeepSeek-V3-Base#base-mod... This is not from scratch. These benchmark numbers are an indication that the base model already had a large number of reasoning/LLM tokens in the pre-training set.
On the other hand, my take on it, the ability to do reasoning in a long context is a general capability. And my guess is that it can be bootstrapped from scratch, without having to do training on all of the internet or having to distill models trained on the internet.
> These benchmark numbers are an indication that the base model already had a large number of reasoning/LLM tokens in the pre-training set.
But we already know that is the case: the Deepseek v3 paper says it was posttrained partly with an internal version of R1:
> Reasoning Data. For reasoning-related datasets, including those focused on mathematics, code competition problems, and logic puzzles, we generate the data by leveraging an internal DeepSeek-R1 model. Specifically, while the R1-generated data demonstrates strong accuracy, it suffers from issues such as overthinking, poor formatting, and excessive length. Our objective is to balance the high accuracy of R1-generated reasoning data and the clarity and conciseness of regularly formatted reasoning data.
And DeepSeekMath did a repeated cycle of this kind of thing, mixing in 10% of old, previously seen data with newly generated data from the last generation in a continuous bootstrap.
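My rough mental model of that bootstrap loop, as a sketch (all the helper names are placeholders, not DeepSeekMath's actual code):

    import random

    # Schematic self-improvement bootstrap: each round, fine-tune on mostly
    # freshly generated data plus ~10% previously seen data to limit drift.
    def bootstrap(model, prompts, rounds=4, old_fraction=0.10):
        seen = []
        for _ in range(rounds):
            # model.generate and passes_filter are hypothetical placeholders.
            new_data = [ex for ex in (model.generate(p) for p in prompts)
                        if ex.passes_filter()]
            k = int(len(new_data) * old_fraction / (1 - old_fraction))
            mixed = new_data + random.sample(seen, min(k, len(seen)))
            model = model.finetune(mixed)  # also a placeholder
            seen.extend(new_data)
        return model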
Possible? I guess evolution did it over the course of a few billion years. For engineering purposes, starting from the best advanced position seems far more efficient.
I've been giving this a lot of thought over the last few months. My personal insight is that "reasoning" is simply the application of a probabilistic reasoning manifold on an input in order to transform it into constrained output that serves the stability or evolution of a system.
This manifold is constructed via learning a decontextualized pattern space on a given set of inputs. Given the inherent probabilistic nature of sampling, true reasoning is expressed in terms of probabilities, not axioms. It may be possible to discover axioms by locating fixed points or attractors on the manifold, but ultimately you're looking at a probabilistic manifold constructed from your input set.
But I don't think you can untie this "reasoning" from your input data. It's possible you will find "meta-reasoning", or similar structures found in any sufficiently advanced reasoning manifold, but these highly decontextualized structures might be entirely useless without proper recontextualization, necessitating that a reasoning manifold is trained on input whose patterns follow learnable underlying rules, if the manifold is to be useful for processing input of that kind.
Decontextualization is learning, decomposing aspects of an input into context-agnostic relationships. But recontextualization is the other half of that, knowing how to take highly abstract, sometimes inexpressible, context-agnostic relationships and transform them into useful analysis in novel domains.
This doesn't mean a well-trained model can't reason about input it hasn't encountered before, just that the input needs to be in some way causally connected to the same laws which governed the input the manifold was trained on.
I'm sure we could create a fully generalized reasoning manifold which could handle anything, but I don't see how we possibly get that without first considering and encountering all possible inputs. But these inputs still have to have some form of constraint governed by laws that must be learned through sampling, otherwise you'd just be training on effectively random data.
The other commenter who suggested simply generating all possible sentences and training on internal consistency should probably consider Gödel's incompleteness theorems, and that internal consistency isn't enough to accurately model and interpret the universe. One could construct a thought experiment about an isolated brain in a jar with effectively unlimited neuronal connections, but no sensory connection to the outside world. It's possible, with enough connections, that the likelihood of the brain conceiving of true events it hasn't actually encountered does increase meaningfully. But the brain still has nothing to validate against, and can't simply assume that because something is internally logically consistent, that it must exist or have existed.
If OpenAi had to account for the cost of producing all the copyrighted material they trained their LLM on, their system would be worth negative trillions of dollars.
Let's just assume that the cost of training can be externalized to other people for free.
Even if what OpenAI asserts in the title of this post is true, their system is still worth negative trillions of dollars.
If other players can access that data with relatively less effort, then it's futile trying to train your models and improve upon them, as clearly you don't have an architectural moat, just a training moat.
Kind of like an office scene where an introverted hardworker does all the tedious work, while his extroverted colleague promotes it as his and gains credit.
At the pace that DeepSeek is developing we should expect them to surpass OpenAI in not that long.
The big question really is: are we doing it wrong? Could we have created o1 for a fraction of the price? Will o4 cost less to train than o1 did?
The second question is naturally. If we create a smarter LLM, can we use it to create another LLM that is even smarter?
It would have been fantastic if DeepSeek could have come out with an o3 competitor before o3 even became publicly available. That way we would have known for sure that we’re doing it wrong. Cause then either we could have used o1 to train a better AI or we could have just trained in a smarter and cheaper way.
The whole discussion is about whether or not the second case of using o1 outputs to fine tune R1 is what allowed R1 to become so good. If that's the case then your assertion that DeepSeek will surpass OpenAI doesn't really make sense because they're dependent on a frontier model in order to match, not surpass.
Yeah, that's my point. If they do end up surpassing OpenAI then it would seem likely that they aren't just relying on copying from o1, or whatever model is the frontier model at that time.
Sure. This is fine. Data is still a product, no matter how much businesses would like to turn it into a service.
The model already embodies the "total sum of a massive amount of compute" used to create it; if it's possible to reuse that embodied compute to create a better model, that's good for the world. Forcing everyone to redo all that compute for themselves is, conversely, bad for the world.
I mean, yes that's how progress works. Has OpenAI got a patent? If not it's fair game.
We don't make people figure out how to domesticate a cow every time they want a hamburger. Or test hundreds of thousands of filaments before they can have a lightbulb. Inventions, once invented, exist as giants to stand upon. The inventor can either choose to disclose the invention and earn a patent for exclusive rights, or they can try to keep it a secret and hope nobody reverse engineers it.
I think the prevailing narrative ATM is that DeepSeek's own innovation was done in isolation and they surpassed OpenAI. Even though in the paper they give a lot of credit to Llama for their techniques. The idea that they used o1's outputs for their distillation further shows that models like o1 are necessary.
All of this should have been clear anyway from the start, but that's the Internet for you.
> The idea that they used o1's outputs for their distillation further shows that models like o1 are necessary.
Hmm, I think the narrative of the rise of LLMs is that once the output of humans has been distilled by the model, the human isn't necessary.
As far as I know, DeepSeek adds only a little to the transformers model while o1/o3 added a special "reasoning component" - if DeepSeek is as good as o1/o3, even taking data from it, then it seems the reasoning component isn't needed.
It seems clear that the term can be used informally to denote the boiling down of human knowledge, indeed it was used that way before AI appeared in the popular imagination.
In the context in which you said it, it matters a lot.
>> The idea that they used o1's outputs for their distillation further shows that models like o1 are necessary.
> Hmm, I think the narrative of the rise of LLMs is that once the output of humans has been distilled by the model, the human isn't necessary.
If deepseek was produced through the distillation (term of art) of o1, then the cost of producing deepseek is strictly higher than the cost of producing o1, and can't be avoided.
Continuing this argument, if the premise is true then deepseek can't be significantly improved without first producing a very expensive hypothetical o1-next model from which to distill better knowledge.
That is the argument that is being made. Please avoid shallow dismissals.
Edit: just to be clear, I doubt that deepseek was produced via distillation (term of art) of o1, since that would require access to o1's weights. It may have used some of o1's outputs to fine tune the model, which still would mean that the cost of training deepseek is strictly higher than training o1.
> just to be clear, I doubt that deepseek was produced via distillation
Yeah, your technical point is kind of ridiculous here: in all my uses of distillation (and in the comment I quoted), distillation is used in the informal sense, and there's no allegation that DeepSeek could have been in possession of OpenAI's model weights, which is what's needed for your "distillation (term of art)".
Because it feeds conspiracy theories and because there's no evidence for it? Also, let's talk DeepSeek in particular, not "China".
Looking back at the article, it is indeed using "distillation" in a special/"term of art" sense, but not using it correctly. I.e., it's not actually speculating that DeepSeek obtained OpenAI's weights and distilled them down, but rather that it used OpenAI's answers/output as a starting point (which is a different method/"term of art").
The R1-Zero paper shows how many training steps the RL took, and it's not many. The cost of the RL is likely a small fraction of the cost of the foundational model.
> the prevailing narrative ATM is that DeepSeek's own innovation was done in isolation and they surpassed OpenAI
I did not think this, nor did I think this was what others assumed. The narrative, I thought, was that there is little point in paying OpenAI for LLM usage when a much cheaper, similar / better version can be made and used for a fraction of the cost (whether it's on the back of existing LLM research doesn't factor in)
Yes, well, the narrative that rocked the stock market is different. It's looking at what DeepSeek did and assuming they may have a competitive advantage in this space and could outperform OpenAI at their own game.
If the narrative is actually that DeepSeek can only reach whatever heights OpenAI has already gotten to with some new tricks, then markets will probably refocus on OpenAI's innovations and price things accordingly, even if the initial cost is huge. It also means OpenAI probably needs a better moat to protect its interests.
I'm not sure where the reality is exactly, but market reactions so far have basically followed that initial narrative and now the rebuttal.
The idea that someone can easily replicate an OpenAI model based simply on OpenAI outputs is, I’d argue, immeasurably worse for OpenAI’s valuation than the idea that someone happened to come up with a few innovations that leapfrogged OpenAI.
The latter could be a one time thing, and/or OpenAi Could still use their financial might to leverage those innovations and get even better with them.
However, the former destroys their business model and no amount of intelligence and innovation from OpenAI protects them from being copied at a fraction of the cost.
> Yes, well the narrative that rocked the stock market is different.
How do you know this?
> If the narrative is actually that DeepSeek can only reach whatever heights OpenAI has already gotten to with some new tricks, then markets will probably refocus on OpenAI's innovations and price things accordingly
Why? If every innovation OpenAI is trying to keep as secret sauce becomes commoditized quickly and cheaply, then why would markets care about any innovations they have? They will be unable to monetize them.
Why would it matter, when Chinese DeepSeek is not going to abide by such rules or be forced to, and will release their model with open weights so anyone anywhere can host it?
Also, scraping most of the websites they scrape is not allowed either; they do it anyway.
There were different narratives for different people. When I heard about R1, my first response was to dig into their paper and its references to figure out how they did it.
Fwiw I assumed they were using o1 to train. But it doesn’t matter: the big story here is that massive compute resources are unlikely to be as important in the future as we thought. It cuts the legs off stargate etc just as it’s announced. The CCP must be highly entertained by the timeline.
But HOW they are necessary is the change. They went from building blocks to stepping stones. From a business standpoint that's very damaging to OAI and other players.
OpenAI couldn't do it: when the high cost of training and access to GPUs is their competitive advantage against startups, they can't admit that it doesn't exist.
When will overtraining happen on the melange of models at scale?
And will AGI only ever be an extension of this concept?
That is where artificial intelligence is going: copying things from other things. Will there be an AI Eureka moment where it deviates and knows where it is wrong, and the reason why?
If they're training R1 on o1 output on the benchmarks, then I don't trust those benchmark results for R1. It means the model is liable to be brittle, and they need to prove otherwise.
It seems like if they in fact distilled then what we have found is that you can create a worse copy of the model for ~5m dollars in compute by training on its outputs.
They're standing on the shoulders of giants, not only in terms of re-using expensive computing power almost for free by using the outputs of expensive models. It's a bit of a tradition in that country, also in manufacturing.
What I meant to say was that OpenAI did put a lot of money into extracting value out of the pile of (partially copyrighted) data, and that DeepSeek was freeloading on that investment without disclosing it, making them look more efficient than they truly are.
Well, originally, OpenAI wasn't supposed to be that kind of organization.
But if you leave someone in the tech industry of SV/SF long enough, they'll start to get high on their own supply and think they're entitled to insane amounts of value, so...
It's because they're the ones who could raise the money to make those models. Academics don't have access to that kind of compute. But the free models exist.
Even assuming the model was somehow publicly available in a form that could be directly copied, that would be a more blatant form of copyright infringement. Distillation launders copyrighted material in a way that OpenAI specifically has argued falls under fair use.
These days I just feel sad for some of the richest people. First their wealth seems to have pulled them away from their humanity. Now I just see them lead troubled existences and inflict pain on others, amplified by said wealth. All the money in the world and no humanity or peace at heart.
The other day I was driving through a school zone and it had that 25 mph radar screen. Most people did slow down, but it got me thinking: with modern cars being "smart", could the government offer a tax break for folks opting into car software that caps speeds dynamically?
This might be in the works already: with hybrids + full electric vehicles being increasingly used, the revenue from a fuel tax is dropping. There are regions investigating use-based taxes that track when and where vehicles are used, and assess a tax based on that information.
This is, of course, a frightening proposal for anyone who wants to keep even one shred of privacy, but so it goes ....
There's a lot of interesting ways to do usage based vehicle tax that don't impact privacy or do so only minimally.
The simplest approach is to just bin roads into N categories; every mile, the car increments your monthly usage counter for that road's category. The categories could be anything: road type, time of day, current congestion level, etc. There's actually little reason to track the specific location or road name.
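Something like this could run entirely in the car and only ever report per-category mile counts (a hypothetical sketch; the categories and rates are made up):

    from collections import Counter

    # Hypothetical in-vehicle tally: only per-category mileage is stored,
    # never specific roads or locations; the counters are reported monthly.
    RATES_PER_MILE = {"highway": 0.02, "arterial": 0.03, "residential": 0.01}

    class UsageMeter:
        def __init__(self):
            self.miles = Counter()

        def record_mile(self, road_category):
            self.miles[road_category] += 1  # called once per mile driven

        def monthly_bill(self):
            return sum(RATES_PER_MILE[c] * n for c, n in self.miles.items())

    meter = UsageMeter()
    for cat in ["highway"] * 30 + ["residential"] * 5:
        meter.record_mile(cat)
    print(f"${meter.monthly_bill():.2f}")  # 30*0.02 + 5*0.01 = $0.65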
It wouldn't need to share any data with anyone, just have dynamic speed control depending on the road someone is driving on. Storing data for all 6 million km of road locally and refreshing it regularly should be doable.
There's no way that's a good idea without an AZ-5 style wax-and-paper seal and flip cap on the steering wheel to turn it off. You want to be able to use your car in emergency situations.
Yesterday my friend bid on a house for $3.7M. There was a double-digit number of bidders. They lost to another buyer who got it for $3.8M all cash. Not saying it is the same everywhere, but where I live it is the norm. Nothing seems to bring down (or even slow) house prices.
At that price level, it's a whole different market and set of buyers. If that's in California, the property taxes alone would be like $3400 a month. I doubt people with that kind of money are troubled by variations in interest rates.
But it isn't? At least, not for productivity uses. You will have to have your keyboard and the external battery with you. Which is a surprising amount of junk to carry everywhere.
I can almost see this being OK for waiting rooms and flights. In the waiting room, though, you will not want to lose your situational awareness. Odds are high that someone is going to be calling your name, not necessarily walking up to you to get your attention.
In lines, this is right out. You can't walk while using it. They explicitly are not supporting you moving through a large room, as I recall.
Makes sense. Maybe by v5 when the battery is internal and it is light enough it could unlock more use cases. I feel they have added several features for situational awareness with video pass through, but might need more.
I don't know how long it will take, all told. I think VR has come farther than a lot of folks realize. Certainly hoping that folks buy this and it establishes a market.
I'm just far less sold on two points from Apple. First is the productivity angle. It's akin to the iPod being productive: yes, it can happen. By and large, though, it is a consumption device. Would love to see numbers exploring that.
Second stickler is the attempt at using all gestures. Even Beat Saber benefits greatly from the feedback on my handles when I hit blocks. Having frictionless gestures just feels unlikely to be as nice as they demo. Reminds me of "Jedi" demos where you swing a light sabre. Neat, until you realize you can't pantomime getting blocked that well. Heck, even blocking is tough if you don't know when the block landed.