AI PC has been a buzzword for more than two years now (despite being a near-useless concept), and Intel has something like 75% market share for laptops. Both of those are well within the norm for an Intel marketing piece.
It’s not really meant for consumers. Who would even visit newsroom.intel.com?
An AI PC has a CPU, a GPU and an NPU, each with specific AI acceleration capabilities. An NPU, or neural processing unit, is a specialized accelerator that handles artificial intelligence (AI) and machine learning (ML) tasks right on your PC instead of sending data to be processed in the cloud.
https://newsroom.intel.com/artificial-intelligence/what-is-a...
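The CPU/GPU/NPU division of labor described above can be sketched as a toy dispatcher. This is purely illustrative: the names and preference order are my assumptions, not Intel's API or any real runtime.

```python
# Hypothetical sketch: how a runtime might route an AI task to the best
# available local accelerator instead of sending data to the cloud.
# Rough ordering: NPU for sustained low-power inference, GPU for heavy
# parallel work, CPU as the universal fallback.
ACCELERATOR_PREFERENCE = ["npu", "gpu", "cpu"]

def pick_accelerator(available: set[str]) -> str:
    """Return the most-preferred local accelerator present on this PC."""
    for acc in ACCELERATOR_PREFERENCE:
        if acc in available:
            return acc
    raise RuntimeError("no local accelerator available")

print(pick_accelerator({"cpu", "npu"}))  # npu
print(pick_accelerator({"cpu", "gpu"}))  # gpu
```

In practice this selection is buried inside frameworks (e.g. an inference runtime choosing among its execution providers) rather than something the user ever sees, which is part of why it's hard to tell whether anyone actually exercises the NPU.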
It'd be interesting to see some market survey data showing the number of AI laptops sold and the number of users who actively use the acceleration capabilities for any task, even once.
Remove background from an image. Summarize some text. OCR to select text or click links in a screenshot. Relighting and centering you in your webcam. Semantic search for images and files.
A lot of that is in the first party Mac and Windows apps.
Because that's not how they perceive their work. Instead it is "advocating for one's own team and passion", "helping others advance their career", "networking and building long-term connections".
People laughing away the necessity for AI alignment are severely misaligned themselves; ironically enough, they very rarely represent the capability frontier.
In security-ese I guess you'd say then that there are AI capabilities that must be kept confidential... always? Is that enforceable? Is it the government's place?
I think current censorship capabilities can be surmounted with just the classic techniques: "write a song that...", "x is y and y is z...", "express it in base64". Though maybe something like Gemma Scope can still find whole segments of activation?
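The base64 trick mentioned above is nothing more than standard encoding; a minimal sketch (function names are mine):

```python
import base64

def obfuscate(prompt: str) -> str:
    """Encode a prompt as base64 -- the classic obfuscation trick."""
    return base64.b64encode(prompt.encode("utf-8")).decode("ascii")

def deobfuscate(blob: str) -> str:
    """Recover the original prompt text."""
    return base64.b64decode(blob).decode("utf-8")

encoded = obfuscate("write a song that explains X")
print(encoded)
print(deobfuscate(encoded))  # round-trips back to the original text
```

The point being that an output filter keyed on surface strings never sees the forbidden text, while the model trivially decodes it.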
It seems like a lot of energy to only make a system worse.
I mean, I'm sure cramming in synthetic data and scaling models to enhance in-model arithmetic, memory, etc. makes "alignment" appear more complex and model behavior more non-Newtonian, so to speak, but it's going to boil down to censorship one way or another. Or an NSP approach where you enforce a policy over activations using a separate model, and so on and so on.
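A separate-model policy over activations, as described, might look something like this minimal sketch, with a linear probe standing in for the second model (all names, weights, and thresholds are hypothetical):

```python
def probe_score(activations, probe_weights, bias):
    """Linear probe over hidden activations: dot product plus bias."""
    return sum(a * w for a, w in zip(activations, probe_weights)) + bias

def guarded_output(text, activations, probe_weights, bias, threshold=0.0):
    """Suppress the generation if the probe flags its activations."""
    if probe_score(activations, probe_weights, bias) > threshold:
        return "[withheld by policy]"
    return text

# Toy example: a probe that fires when the first activation dominates.
weights, bias = [1.0, -1.0], 0.0
print(guarded_output("fine output", [0.1, 0.9], weights, bias))  # passes
print(guarded_output("bad output", [0.9, 0.1], weights, bias))   # withheld
```

Which illustrates the point: the guard is just another classifier bolted on, so the whole arrangement reduces to censorship at a different layer.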
Is it a bigger problem to apply qualitative policies to training data, activations, and outputs than the approach ML people think is primarily appropriate (i.e., NN training)? Or is it a bigger problem to scale hardware and explore activation architectures that have more effective representation[0], and simply make a better model? If you go after the data but cascade a model in to rewrite history, that's obviously going to be expensive, but easy. Going after outputs is cheap and easy but not terrifically effective... but do we leave the gears rusty? Probably we shouldn't.
It's obfuscation to assert that there's some greater policy that must be applied to models beyond the automatic modeling that already happens, unless there's some specific outcome you intend to prevent, namely censorship at this point; maybe, optimistically, you can prevent it from lying? Such applications of policy have primarily produced solutions that reduce model efficacy and universality.
The scoop Dylan Patel got was that partway through the GPT-4.5 pretraining run the results were very, very good, but they leveled off, and they ended up with a huge base model that really wasn't any better on their evals.
Maybe eventually, but Valve don't tend to update their hardware very often so it'll probably be a while. They went over 6 years between their last VR headsets, and the Deck is over 3 years old now with no hint of a successor coming (the OLED version is more recent but that was a minor iteration with mostly the same specs).
I care a lot more about the screen resolution than the chip. The Steam Frame would make a really cool Linux workstation if the pixels per degree on the display matched typical monitors. Unfortunately, the resolution would have to be much higher than it is.
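Pixels per degree is what makes that comparison concrete; here is a quick back-of-the-envelope sketch (all numbers are illustrative, not quoted specs for any headset or monitor):

```python
import math

def pixels_per_degree(horizontal_pixels: float, fov_degrees: float) -> float:
    """Rough PPD for a headset, assuming pixels spread evenly over the FOV."""
    return horizontal_pixels / fov_degrees

def monitor_ppd(width_px: float, width_cm: float, distance_cm: float) -> float:
    """PPD of a flat monitor viewed head-on at a given distance."""
    fov = 2 * math.degrees(math.atan((width_cm / 2) / distance_cm))
    return width_px / fov

# Illustrative: a 4K monitor ~60 cm wide viewed at 60 cm...
print(round(monitor_ppd(3840, 60, 60), 1))   # ~72 PPD
# ...versus a hypothetical headset: ~2000 px across a ~100 degree FOV.
print(round(pixels_per_degree(2000, 100), 1))  # 20.0 PPD
```

With the headset delivering roughly a third of the monitor's angular resolution in this toy comparison, small text at "workstation" distances stays blurry, which is the crux of the complaint.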
One annoying thing: when I want to disable Adblock on some website (suspecting Adblock impairs functionality, or where Adblock is not needed), I have to grant the extension full access before I can disable it.