Amanda Askell and Sholto Douglas have something of a fan following on Twitter

ChatGPT seems like a huge distraction for OpenAI if their goal is transformative AI

IMO: the largest value creation from AGI won’t come from building a better shopping or travel assistant. The real pot of gold is in workflow / labor automation, but obviously they can’t admit that openly.


That boat sailed a long time ago


The problem is paternalism and assuming the user is too dumb to take control of their privacy preferences.

Compliance with the cookie banner regulation has measurable negative externalities - one estimate suggests a EUR 14B/year productivity hit in the EU

Most modern browsers allow you to disable all cookies if you like. You can always use incognito mode if you want to be selective about it.

In an ideal world, the EU could have simply educated their constituents about privacy controls available in their browser.


GDPR is not a cookie regulation; it is a tracking regulation.


It's broader; it's about users' data. For example, you can store my address so you can send me the item I ordered. You can't, without permission, use that address to send me marketing stuff.


They do have unreleased Olympiad gold-winning models that are definitely better than GPT-5.

TBD if that performance generalizes to other real world tasks.


Kinda genius to scale exoskeleton data collection with UMI grippers when most labs are chasing "general" VLMs / VLAs by training on human demonstration videos.

Imo the latter will be very useful for semantic planning and reasoning, but only after manipulation is solved.

A ballpark cost estimate -

- $10 to $20 hourly wages for the data collectors

- $100,000 to $200,000 per day for 10,000 hours of data

- ~1,500 to 2,500 data collectors doing 4 to 6 hours daily

- $750K to $1.25M on hardware costs at $500 per gripper

Fully loaded cost between $4M and $8M for 270,000 hours of data.
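
Back-of-the-envelope in Python, if anyone wants to sanity-check the ranges (every input is one of the assumptions above, nothing measured):

    # Sanity check of the ballpark above; all inputs are assumptions.
    wage = (10, 20)          # $/hour per data collector
    daily_hours = 10_000     # hours of data collected per day
    shift = (4, 6)           # hours each collector works per day
    gripper_cost = 500       # $ per instrumented gripper
    total_hours = 270_000    # target dataset size

    daily_cost = (daily_hours * wage[0], daily_hours * wage[1])
    collectors = (daily_hours // shift[1], daily_hours // shift[0])
    hardware = (collectors[0] * gripper_cost, collectors[1] * gripper_cost)
    labor = (total_hours * wage[0], total_hours * wage[1])

    print(daily_cost)    # (100000, 200000) $/day
    print(collectors)    # (1666, 2500) people
    print(hardware)      # (833000, 1250000) ~ $0.8M-$1.25M
    print(labor)         # (2700000, 5400000) ~ $2.7M-$5.4M

Labor plus hardware comes to roughly $3.5M-$6.7M; the $4M-$8M "fully loaded" figure leaves room for overhead (management, QA, logistics) on top.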

Not bad considering the alternatives.

For example, teleoperation is way less efficient - it's 5x-6x slower than human demos and 2x-3x more expensive per hour of operator time. But it could become feasible once low-level and mid-level manipulation and task planning are solved.


Not teleoperating can have certain disadvantages due to mismatches between how humans move vs. how robots move though. See here: https://evjang.com/2024/08/31/motors.html


Intuitively, yes. But is it really true in practice?

Thinking about it, I'm reminded of various "additive training" tricks. Teach an AI to do A, and then to do B, and it might just generalize that to doing A+B with no extra training. Works often enough on things like LLMs.

In this case, we use non-robot data to teach an AI how to do diverse tasks, and robot-specific data (real or sim) to teach an AI how to operate a robot body. Which might generalize well enough to "doing diverse tasks through a robot body".


The exoskeletons are instrumented to match the kinematics and sensor suite of the actual robot gripper. You can trivially train a model on human-collected gripper data and replay it on the robot.
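
In code, the "trivial replay" is just streaming the recorded end-effector poses back to the arm. A minimal sketch, with a hypothetical Robot stub standing in for a real SDK (the pose format and method names are made up for illustration):

    import json, time

    class Robot:
        # Hypothetical stand-in for a real robot SDK, not an actual API.
        def servo_to(self, pose): ...          # command an SE(3) gripper pose
        def set_gripper_width(self, w): ...    # command the finger opening

    def replay(log_path: str, robot: Robot, hz: float = 30.0) -> None:
        with open(log_path) as f:
            frames = json.load(f)   # e.g. [{"pose": [...], "width": 0.04}, ...]
        for frame in frames:
            # Direct replay is only valid because the exoskeleton matches
            # the robot's kinematics; no retargeting step is needed.
            robot.servo_to(frame["pose"])
            robot.set_gripper_width(frame["width"])
            time.sleep(1.0 / hz)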


You mentioned UMI, which to my knowledge runs VSLAM on camera+IMU data to estimate the gripper pose and no exoskeletons are involved. See here: https://umi-gripper.github.io/


"Exoskeleton" was inspired by the more recent Dex-Op paper (https://dex-op.github.io/)

Calling UMI an "exoskeleton" might be a stretch but the principle is the same - humans use a kinematically matched, instrumented end effector to collect data that can be trivially replayed on the robot.


There is probably an equivalent of Amdahl's law for GDP - overall productivity will be bottlenecked by the least productive sectors.

Until AI becomes physically embodied, that means all high-mix, low-volume physical labor is likely to become a lot more valuable in the mid-term.
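
A minimal sketch of that bound, assuming a fraction p of output can be accelerated by a factor s while the rest stays fixed (both inputs are illustrative, not estimates):

    # Amdahl-style ceiling on aggregate productivity gains.
    def aggregate_speedup(p: float, s: float) -> float:
        # p: share of output AI can accelerate; s: speedup on that share
        return 1.0 / ((1.0 - p) + p / s)

    # Even an infinite speedup on 60% of the economy is capped by the
    # untouched 40%: 1 / 0.4 = 2.5x overall.
    print(aggregate_speedup(0.6, 1e9))   # ~2.5
    print(aggregate_speedup(0.6, 10))    # ~2.17

The un-automated share sets the ceiling, which is why the physical-labor bottleneck dominates.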


Ben’s original take about 1% of users being creators might end up being right eventually

Consider the Studio Ghibli phenomenon. It was fun to create and share photos of your loved ones in Ghibli aesthetics until that novelty wore off

Video models arguably have a lot more novelty to juice. But they will eventually get boring once you have explored the ultimately finite latent space of interesting content


Math comparing new datacenter capacity to electric cars -

Projections estimate anywhere from 10GW to 30GW of US datacenter buildout over the next few years

1GW of continuous power can support the uniform draw of ~2.6M Tesla Model 3s, assuming 12,000 miles per year at 250 Wh/mile.

So 26M on the lower end, 80M Model 3s on the upper end.

That's 10x-30x the cumulative number of Model 3s sold so far
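
The per-car arithmetic, spelled out (the 12% charging loss is my assumption; the other figures are the ones above):

    # How many Model 3s can 1GW of continuous power cover?
    gw_wh_per_year = 1e9 * 24 * 365          # 8.76e12 Wh/year
    car_wh_per_year = 12_000 * 250 / 0.88    # ~3.4e6 Wh/year incl. losses
    cars_per_gw = gw_wh_per_year / car_wh_per_year
    print(round(cars_per_gw / 1e6, 2))       # ~2.57M cars per GW
    print(round(10 * cars_per_gw / 1e6))     # ~26M at 10GW
    print(round(30 * cars_per_gw / 1e6))     # ~77M at 30GW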

And remember, all datacenter draw is concentrated. It will disproportionately impact the regions where they're being built.

We need new, clean power sources yesterday


And an increase in power demand and increased power prices will ensure those wind/solar farms get built.


> consumers paying for electricity used by server farms

wait what? consumers are literally paying for server farms? this isn't a supply-demand gap?


It's a supply-demand gap, but since the reasons for it are very apparent, it's completely reasonable to describe it as "consumers paying for [the existence of] datacenters".


I don't see how? It's much more reasonable to state "all electrical consumers are paying a proportionate amount to operate the grid based on their usage rates". This is typically spelled out by the rate commissions and designed to make sure one power consumer is not "subsidizing" the other.

In the case of your quoted article - taking it at face value - this means "everyone" is paying $0.02/kWh more on their bill. A datacenter is going to be paying thousands of times more than your average household, as it should.
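
Rough proportions, with illustrative numbers (the facility size and household usage are my assumptions; only the $0.02/kWh figure comes from the article):

    # Scale difference under a flat $0.02/kWh increase.
    rider = 0.02                              # $/kWh, from the quoted article
    household_kwh_year = 10_800               # ~900 kWh/month, a typical home
    datacenter_kwh_year = 10_000 * 24 * 365   # a 10MW facility, ~87.6M kWh
    print(household_kwh_year * rider)         # $216/year
    print(datacenter_kwh_year * rider)        # ~$1.75M/year, ~8,000x the home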

I don't see a problem with this at all. Cheap electricity is required to have any sort of industrial base in any country. Paying a proportionate amount of what it costs the grid to serve you seems about as fair of a model as I can come up with.

If you need to subsidize some households, then having subsidized rates for usage under the average household consumption level for the area might make sense?

I don't really blame the last watt added to the grid for the incremental uptick in costs. It was coming either way due to our severe lack of investment in dispatchable power generation and transmission capacity - datacenters simply brought the timeline forward a few years.

There are plenty of actual problematic things going into these datacenter deals. Them exposing how fragile our grid is due to severe lack of investment for 50 years is about the least interesting one to me. I'd start with local (and state) tax credits/abatements myself.


No, it's a lie. Consumers paying more because of data centers raising demand could be true, but that's not equivalent to them paying for the data centers' usage. The data centers also have to pay an increased rate when prices go up.

Data centers get commercial or maybe even industrial rates depending on their grid hookup and utilities love predictable loads. Those are lower than residential rates. If you're dishonest and don't understand the cost of operating a grid, you could say that's users paying for data centers. But then you'd need to apply it to every commercial/industrial user.

If the regular users were paying for data centers usage, why are so many of them going off-grid with turbines or at least partially on-prem generation?

The solution is more and cheaper energy.


Also, the pretrained LLM (the one trained to predict the next token of raw text) is not the one most people use

A lot of clever LLM post-training seems to steer the model towards becoming an excellent improv artist, which can lead to “surprise” if prompted well

