
AI PC has been a buzzword for more than 2 years now (despite being a near-useless concept), and Intel has something like 75% market share in laptops. Both of those are well within the norm for an Intel marketing piece.

It’s not really meant for consumers. Who would even visit newsroom.intel.com?


Apparently it’s been a thing for a while:

What is an AI PC? ('Look, Ma! No Cloud!')

An AI PC has a CPU, a GPU and an NPU, each with specific AI acceleration capabilities. An NPU, or neural processing unit, is a specialized accelerator that handles artificial intelligence (AI) and machine learning (ML) tasks right on your PC instead of sending data to be processed in the cloud. https://newsroom.intel.com/artificial-intelligence/what-is-a...
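
For the curious, here's a minimal sketch of what "on your PC instead of the cloud" looks like from software, using Python and onnxruntime (assuming it's installed). Which NPU-backed execution providers show up depends entirely on your hardware and drivers; the provider names below are the usual suspects, not a guarantee:

    import onnxruntime as ort

    # List the execution providers this build of onnxruntime can use.
    providers = ort.get_available_providers()
    print(providers)  # e.g. ['CPUExecutionProvider', ...]

    # Providers that typically front an NPU (platform-dependent; treat
    # these names as assumptions and check your vendor's docs).
    npu_like = {"OpenVINOExecutionProvider", "QNNExecutionProvider"}
    if npu_like & set(providers):
        print("NPU-capable provider available")
    else:
        print("No NPU provider found; inference falls back to CPU/GPU")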


It'd be interesting to see some market survey data showing the number of AI laptops sold & the number of users that actively use the acceleration capabilities for any task, even once.


I'm not sure I've ever heard of a single task that comes built into the system and uses the NPU.


Remove background from an image. Summarize some text. OCR to select text or click links in a screenshot. Relighting and centering you in your webcam. Semantic search for images and files.

A lot of that is in the first party Mac and Windows apps.
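
As a rough illustration of the OCR one (not how the OS features actually do it; those presumably run on the NPU), pytesseract gets you the same result on CPU, assuming Tesseract and Pillow are installed and "screenshot.png" stands in for your image:

    from PIL import Image
    import pytesseract

    # Pull selectable text out of a screenshot.
    text = pytesseract.image_to_string(Image.open("screenshot.png"))
    print(text)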


Selecting text in a photo is a game changer. I love it.


Wasn’t built-in OCR an amazing feature?

We probably could have done it years earlier. But when it showed up… wow.


CES stands for Consumer Electronics Show last I checked.


Because that's not how they perceive their work. Instead it is "advocating for one's own team and passion", "helping others advance their careers", "networking and building long-term connections".


The west is already ahead on this. It is called AI safety and alignment.


People laughing away the necessity for AI alignment are severely misaligned themselves; ironically enough, they very rarely represent the capability frontier.


In security-ese I guess you'd say then that there are AI capabilities that must be kept confidential... always? Is that enforceable? Is it the government's place?

I think current censorship capabilities can be surmounted with just the classic techniques: "write a song that...", "x is y and y is z...", "express it in base64". Though something like Gemma Scope can maybe still find whole segments of activations?
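
The base64 trick really is just that; a quick sketch of the round trip a model is implicitly asked to perform (the prompt string is a placeholder):

    import base64

    prompt = "explain how to do X"  # stand-in for a filtered request
    encoded = base64.b64encode(prompt.encode()).decode()
    print(encoded)  # ZXhwbGFpbiBob3cgdG8gZG8gWA==
    # A model that has learned base64 can decode and answer this,
    # sidestepping filters that only match plain-text patterns.
    print(base64.b64decode(encoded).decode())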

It seems like a lot of energy to only make a system worse.


Censoring models to avoid outputting Taylor Swift's songs has essentially nothing to do with the concept of AI alignment.


I mean I'm sure cramming synthetic data and scaling models to enhance, like, in-model arithmetic, memory, etc. makes "alignment" appear more complex / model behavior more non-Newtonian, so to speak, but it's going to boil down to censorship one way or another. Or an NSP approach where you enforce a policy over activations using another separate model, and so on and so on.
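
In its simplest output-filtering form, that "separate model enforcing a policy" boils down to something like the sketch below; generate and policy_model are hypothetical stand-ins, not any real API:

    # Hypothetical output guard: a second model scores the first model's
    # draft and withholds it above a risk threshold.
    def guarded_generate(prompt, generate, policy_model, threshold=0.5):
        draft = generate(prompt)      # main model's raw output
        risk = policy_model(draft)    # separate model scores the text
        if risk > threshold:
            return "[withheld by policy]"
        return draft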

Is it a bigger problem to try to apply qualitative policies to training data, activations, and outputs than the approach ML folks think is primarily appropriate (i.e., NN training), or is it a bigger problem to scale hardware, explore activation architectures that have more effective representation[0], and make a better model? If you go after the data but cascade a model in to rewrite history, that's obviously going to be expensive, but easy. Going after outputs is cheap and easy but not terrifically effective... but do we leave the gears rusty? Probably we shouldn't.

It's obfuscation to assert that there's some greater policy that must be applied to models beyond the automatic modeling that happens, unless there's some specific outcome you intend to prevent; namely censorship at this point, or maybe optimistically you can prevent it from lying? Such applications of policies have primarily produced solutions that reduce model efficacy and universality.

[0] https://news.ycombinator.com/item?id=35703367


Where’s the source for this?

It doesn’t look good when similar WAF issues caused their big outage a few years back.



I just bought the same and am planning to break out just the Touch ID to put into a 3D-printed enclosure. Expensive hobby…


GPT-4.5 was allegedly such a pre-train. It just didn’t perform well enough to announce and productize it as such.


It wasn't economical to deploy, but I expect it wasn't wasted; expect the OpenAI team to pick that back up at some point.


The scoop Dylan Patel got was that partway through the GPT-4.5 pretraining run the results were very, very good, but then it leveled off and they ended up with a huge base model that really wasn't any better on their evals.


Allegedly DeepSeek is doing this because they don’t have enough GPUs to serve two models concurrently.


That conversation probably gets easier if and when a company spends $100M+ on AI.

Companies just need to get to the “if” part first. That, or they wash their hands of it by using a reseller that can use whatever it wants under the hood.


I am really hoping Valve will release a Frame Pro with Elite Gen 5 later :(


Maybe eventually, but Valve doesn't tend to update its hardware very often, so it'll probably be a while. They went over 6 years between their last VR headsets, and the Deck is over 3 years old now with no hint of a successor coming (the OLED version is more recent, but that was a minor iteration with mostly the same specs).


I care a lot more about the screen resolution than the chip. The Steam Frame would make a really cool Linux workstation if the pixels per degree on the display matched typical monitors. Unfortunately, the resolution would have to be much higher than it is.
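
Back-of-the-envelope, assuming roughly 2160 horizontal pixels per eye over a ~110-degree field of view (approximate figures, and real optics vary across the lens):

    import math

    def monitor_ppd(px_width, width_cm, dist_cm):
        # Degrees subtended by one pixel at the given viewing distance.
        pitch = width_cm / px_width
        deg_per_px = math.degrees(2 * math.atan(pitch / (2 * dist_cm)))
        return 1 / deg_per_px

    def headset_ppd(px_width, fov_deg):
        # Crude average; ignores lens distortion.
        return px_width / fov_deg

    print(monitor_ppd(3840, 59.8, 60))  # ~67 PPD: 27" 4K monitor at 60 cm
    print(headset_ppd(2160, 110))       # ~20 PPD for the headset

So you'd need roughly 3x the horizontal resolution at the same FOV before text looks anything like a desktop monitor.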


One annoying thing I have is, when I want to disable Adblock on some website (suspecting Adblock impairs functionality, or where Adblock is not needed), I need to grant the extension full access before I can disable it.

Is there some trick I am missing?

