ChatGPT seems like a huge distraction for OpenAI if their goal is transformative AI
IMO: the largest value creation from AGI won’t come from building a better shopping or travel assistant. The real pot of gold is in workflow / labor automation but obviously they can’t admit that openly.
It's broader than that; it's about users' data. For example, you can store my address so you can send me the item I ordered. But you can't, without permission, use that address to send me marketing material.
Kinda genius to scale exoskeleton data collection with UMI grippers when most labs are chasing "general" VLMs / VLAs by training on human demonstration videos.
Imo the latter will be very useful for semantic planning and reasoning, but only after manipulation is solved.
A ballpark cost estimate:
- $10 to $20 hourly wages for the data collectors
- $100,000 to $200,000 per day for 10,000 hours of data
- ~1,500 to 2,500 data collectors doing 4 to 6 hours daily
- $750K to $1.25M on hardware costs at $500 per gripper
Fully loaded cost between $4M and $8M for 270,000 hours of data.
Not bad considering the alternatives.
For example, teleoperation is far less efficient: it's 5x-6x slower than human demos and 2x-3x more expensive per hour of operator time. It could become feasible once low-level and mid-level manipulation and task planning are solved.
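A quick sanity check of that math in Python (the wage, gripper cost, and teleop multipliers are the figures above; the overhead needed to reach the "fully loaded" range is my own assumption):

```python
# Back-of-the-envelope check of the estimate above. Wage, gripper
# cost, and teleop multipliers come from the comment; the overhead
# implied by the "fully loaded" range is an assumption on my part.
wage_low, wage_high = 10, 20            # $/hr for data collectors
hours_total = 270_000                   # total demo-hours targeted
collectors = 2_500                      # upper end of the headcount
gripper_cost = 500                      # $ per instrumented gripper

labor = (hours_total * wage_low, hours_total * wage_high)   # ($2.7M, $5.4M)
hardware = collectors * gripper_cost                        # $1.25M

print(labor[0] + hardware, labor[1] + hardware)  # ~$3.95M to ~$6.65M
# The $4M-$8M "fully loaded" range leaves room for management, QA,
# and logistics overhead on top of these raw figures.

# Teleop comparison: 5-6x slower at 2-3x the hourly cost implies
# roughly 10-18x the labor cost for the same hours of demos.
print(5 * 2, 6 * 3)  # 10 18
```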
Not teleoperating can have certain disadvantages due to mismatches between how humans move vs. how robots move though. See here: https://evjang.com/2024/08/31/motors.html
Intuitively, yes. But is it really true in practice?
Thinking about it, I'm reminded of various "additive training" tricks. Teach an AI to do A, and then to do B, and it might just generalize that to doing A+B with no extra training. Works often enough on things like LLMs.
In this case, we use non-robot data to teach an AI how to do diverse tasks, and robot-specific data (real or sim) to teach an AI how to operate a robot body. Which might generalize well enough to "doing diverse tasks through a robot body".
The exoskeletons are instrumented to match the kinematics and sensor suite of the actual robot gripper. You can trivially train a model on human-collected gripper data and replay it on the robot.
You mentioned UMI, which to my knowledge runs VSLAM on camera+IMU data to estimate the gripper pose and no exoskeletons are involved. See here: https://umi-gripper.github.io/
Calling UMI an "exoskeleton" might be a stretch, but the principle is the same: humans use a kinematically matched, instrumented end effector to collect data that can be trivially replayed on the robot.
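A minimal sketch of that record-and-replay idea, assuming a hypothetical robot API (Pose, solve_ik, and command are placeholders, not UMI's or any real SDK's interface):

```python
import time
from dataclasses import dataclass

@dataclass
class Pose:
    xyz: tuple[float, float, float]          # end-effector position (m)
    quat: tuple[float, float, float, float]  # orientation quaternion
    grip: float                              # gripper opening, 0..1

def replay(trajectory: list[Pose], robot, hz: float = 30.0) -> None:
    """Because the handheld gripper is kinematically matched to the
    robot's end effector, recorded poses map straight onto the arm's
    IK solver with no retargeting step."""
    for pose in trajectory:
        joints = robot.solve_ik(pose.xyz, pose.quat)  # hypothetical API
        robot.command(joints, gripper=pose.grip)      # hypothetical API
        time.sleep(1.0 / hz)
```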
There is probably an equivalent of Amdahl's law for GDP - overall productivity will be bottlenecked by the least productive sectors.
Until AI becomes physically embodied, that means high-mix, low-volume physical labor is likely to become a lot more valuable in the mid-term.
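A toy version of that Amdahl-style bound, treating the economy like a program where only part of the work can be sped up (the 70% share and 100x multiplier are illustrative numbers, not data):

```python
def amdahl_gdp(automatable_share: float, speedup: float) -> float:
    """Overall productivity multiplier when only `automatable_share`
    of output gets a `speedup`x boost (classic Amdahl's-law form)."""
    return 1.0 / ((1.0 - automatable_share) + automatable_share / speedup)

# Even a 100x boost to 70% of labor is capped by the remaining 30%
# (e.g. embodied, high-mix physical work):
print(amdahl_gdp(0.70, 100))   # ~3.26x
print(amdahl_gdp(0.70, 1e9))   # ~3.33x, the 1/(1 - 0.7) ceiling
```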
Ben’s original take about 1% of users being creators might end up being right eventually.
Consider the Studio Ghibli phenomenon: it was fun to create and share photos of your loved ones in Ghibli aesthetics, until that novelty wore off.
Video models arguably have a lot more novelty to juice, but they will eventually get boring once you have explored the ultimately finite latent space of interesting content.
It's a supply-demand gap, but since the reasons for it are very apparent, it's completely reasonable to describe it as "consumers paying for [the existence of] datacenters".
I don't see how? It's much more reasonable to say "all electrical consumers pay a proportionate amount to operate the grid based on their usage rates". This is typically spelled out by the rate commissions and designed to make sure one power consumer is not "subsidizing" another.
In the case of your quoted article - taking it at face value - this means "everyone" is paying $0.02/kWh more on their bill. A datacenter is going to be paying thousands of times more than your average household, as it should.
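To put that $0.02/kWh in perspective (the household and datacenter sizes below are illustrative assumptions, not figures from the article):

```python
rate_increase = 0.02                   # $/kWh, the figure from the article

household_kwh = 900                    # typical monthly usage (assumed)
datacenter_kwh = 100 * 1000 * 24 * 30  # a 100 MW facility (assumed)

print(household_kwh * rate_increase)     # ~$18/month extra
print(datacenter_kwh * rate_increase)    # ~$1.44M/month extra
print(datacenter_kwh // household_kwh)   # ~80,000x the household's usage
```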
I don't see a problem with this at all. Cheap electricity is required to have any sort of industrial base in any country. Paying a proportionate amount of what it costs the grid to serve you seems about as fair a model as I can come up with.
If you need to subsidize some households, then having subsidized rates for usage under the average household consumption level for the area might make sense?
I don't really blame the last watt added to the grid for the incremental uptick in costs. It was coming either way, given our severe lack of investment in dispatchable power generation and transmission capacity; datacenters simply brought the timeline forward a few years.
There are plenty of actually problematic things in these datacenter deals. The fact that they expose how fragile our grid is, after 50 years of underinvestment, is about the least interesting one to me. I'd start with local (and state) tax credits/abatements myself.
No, it's a lie. It may be true that consumers pay more because data centers raise demand, but that's not equivalent to consumers paying for the data centers' usage. The data centers also pay the increased rate when prices go up.
Data centers get commercial, or maybe even industrial, rates depending on their grid hookup, and utilities love predictable loads. Those rates are lower than residential rates. If you're dishonest and don't understand the cost of operating a grid, you could call that users paying for data centers. But then you'd have to apply the same logic to every commercial/industrial user.
If regular users were paying for the data centers' usage, why are so many data centers going off-grid with turbines, or at least adding partial on-prem generation?