Google created WebP, and that is why they give it unjustified preferential treatment and have been trying to unreasonably force it down the throat of the internet.
WebP gave me alpha transparency with lossy images, which came in handy at the time. It was also not bogged down by patents and licensing. Plus, like others said, if you support VP8 video, you pretty much already have a WebP codec; same with AV1 and AVIF.
A better argument might be that Chrome protects its own vs. a research group in Google Switzerland; however, as others mentioned, given the security implications of another unsafe binary parser in a browser, it's hardly worth it.
Which only strengthens my argument: WebP seemed like an ad-hoc pet project in Chrome, and it ended like most unsafe binary parsers do, with critical vulnerabilities.
I now see that WebP lossless is definitely from there, but the WebP base format looks like it was acquired from a US startup; was the image format also adapted by the Swiss group?
You're getting downvoted, but you're not wrong. If anyone else had come up with it, it would have been ignored completely. I don't think it's as bad as some people make it out to be, but it's not really that compelling for end users, either. As other folks in the thread have pointed out, WebP is basically the static image format that you get “for free” when you've already got a VP8 video decoder.
The funny thing is all the places where Google's own ecosystem has ignored WebP. E.g., Go has a first-party WebP decoder (in golang.org/x/image/webp, outside the stdlib proper), but all of the encoders you'll find are CGo bindings to libwebp.
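A minimal sketch of what that asymmetry looks like in practice (filenames are placeholders; this assumes the golang.org/x/image/webp package): decoding is pure Go, but the closest you can get to "encoding" with first-party packages alone is re-saving as PNG.

    package main

    import (
        "image/png"
        "log"
        "os"

        "golang.org/x/image/webp" // pure-Go decoder, no CGo required
    )

    func main() {
        in, err := os.Open("input.webp") // placeholder path
        if err != nil {
            log.Fatal(err)
        }
        defer in.Close()

        img, err := webp.Decode(in) // decoding works out of the box
        if err != nil {
            log.Fatal(err)
        }

        out, err := os.Create("output.png")
        if err != nil {
            log.Fatal(err)
        }
        defer out.Close()

        // Encoding back to WebP would need a CGo binding to libwebp;
        // the stdlib can only write formats like PNG or JPEG.
        if err := png.Encode(out, img); err != nil {
            log.Fatal(err)
        }
    }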
I think this will affect LLM web search more than the actual training. I’m sure the training data is cleaned up, sanitized, and made to align with the company’s alignment goals. They could even use an LLM to detect if the data has been poisoned.
It's not so easy to detect. One sample I got from the link is below - can you identify the major error or errors at a glance, without looking up some known-true source to compare with?
Aside from the wrong constants, inverted operations, self-contradicting documentation, and plausible-looking but incorrect formulas, the egregious error and actual poison is all the useless, noisy, token-wasting comments like:
NO DECORATIVE LINE DIVIDERS
FORBIDDEN: Lines of repeated characters for visual separation.
# ═══════════════════════════════════════════ ← FORBIDDEN
# ─────────────────────────────────────────── ← FORBIDDEN
# =========================================== ← FORBIDDEN
# ------------------------------------------- ← FORBIDDEN
WHY: These waste tokens, add no semantic value, and bloat files. Comments should carry MEANING, not decoration.
INSTEAD: Use blank lines, section headers, or nothing:
People already do this with multi-agent workflows. I kind of do this with local models: I get a smaller model to do the hard work for speed, and use a bigger model to check its work and improve it.
> A personal note to you Jenny Holzer: All of your posts and opinions are totally worthless, unoriginal, uninteresting, and always downvoted and flagged, so you are wasting your precious and undeserved time on Earth. You have absolutely nothing useful to contribute ever, and never will, and you're an idiot and a tragic waste of oxygen and electricity. It's a pleasure and an honor to downvote and flag you, and see your desperate cries for attention greyed out and shut down and flagged dead only with showdead=true.
Somebody tell this guy to see a therapist, preferably a human therapist and not an LLM.
Don Hopkins is the archetype of this industry. The only thing that distinguishes him from the rest is that he is old and frustrated, so the inner nastiness has bubbled to the surface. We all have a little Don Hopkins inside of us. That is why we are here. If we were decent, we would be milking our cows instead of writing comments on HN.
GPS signals are extremely weak, and they're necessarily received from omnidirectional antennas that can't provide much antenna gain. In some sense it's a miracle of signal processing that GPS can ever be received.
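To put rough numbers on that "miracle" (ballpark figures from memory, not from the thread; see IS-GPS-200 for the actual spec): the L1 C/A signal arrives around -130 dBm, roughly 19 dB below the thermal noise floor in its ~2 MHz bandwidth, and only the ~30 dB of despreading gain from correlating against the 1023-chip code pulls it back out.

    package main

    import (
        "fmt"
        "math"
    )

    func main() {
        // Approximate values from memory -- not authoritative.
        const receivedPowerDBm = -130.0  // typical GPS L1 C/A power at the antenna
        const bandwidthHz = 2.046e6      // null-to-null bandwidth of the C/A code
        const noiseDensityDBmHz = -174.0 // thermal noise density at ~290 K

        noiseFloorDBm := noiseDensityDBmHz + 10*math.Log10(bandwidthHz)
        snrBefore := receivedPowerDBm - noiseFloorDBm

        // Despreading the 1023-chip Gold code concentrates the signal back
        // into a narrow bandwidth, worth roughly 10*log10(1023) dB.
        gainDB := 10 * math.Log10(1023)

        fmt.Printf("noise floor:            %.1f dBm\n", noiseFloorDBm) // ~ -110.9 dBm
        fmt.Printf("SNR before despreading: %.1f dB\n", snrBefore)      // ~ -19 dB, buried in noise
        fmt.Printf("processing gain:        %.1f dB\n", gainDB)         // ~ +30 dB
        fmt.Printf("SNR after despreading:  %.1f dB\n", snrBefore+gainDB)
    }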
There have been developments in receiving antennas that are harder to jam.
Most jamming is horizontal and limited to a few bands, so with a directional antenna, listening to all the services, it seems to work for now. But this is a cat-and-mouse game.
For legal reasons I base this on nothing, but: just turn your jammer toward the sky. You could get fancy and point it directly at the satellites, since my understanding is it's pretty easy to know where they are.
Edit to add: I do not mean the GPS satellites or the Starlink ground terminals. That was not the question, so that is not my answer. I mean the Starlink satellites.
That doesn't work. GPS is broadcast, not bidirectional communication, so preventing the satellites from seeing the GPS receiver does nothing: they're not looking to begin with.
What are you talking about? The jammers are on the ground. Just like receivers on the ground can be jammed with bad RF nearby, so can receivers in space. You just point the bad RF towards the receiver.
The GPS satellites aren't receiving anything. The GPS satellites transmit signals, and the Starlink terminals (and other users of GPS) receive those signals.
This is a great plot for a B movie or a trashy military action book. “The bad guys are jamming GPS uplink and we only have two weeks until the almanacs are out of date and the whole system breaks down. Millions of innocent Americans will drive into rivers by accident.”
More to the point, to do that to this many satellites over this big an area you'd need nuclear-power-plant levels of power, and it would only degrade GPS a bit (their clocks slowly desync when the uplink is blocked).
My understanding was that each satellite broadcasts a coarse ephemeris for the whole network, and that that “almanac” isn’t accurate for very long (on the order of weeks). Without uploads to the satellites, those almanacs will go stale.
I don’t think the almanacs are necessary for the system to work, in theory. But I believe they’re commonly used by receivers to narrow down the range of possibilities when trying to find a PRN match for a signal they’re getting.
(I’ve dealt with GPS and similar navigation signals for work but am not an expert, this is just the impression I’ve gotten over a few years)
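A rough sketch of what that narrowing buys (illustrative numbers of my own, not from the thread): cold acquisition sweeps every code phase against a wide Doppler range, while an almanac plus approximate position and time lets the receiver predict each satellite's Doppler and skip most of the frequency bins.

    package main

    import "fmt"

    // searchCells estimates the acquisition grid for one C/A signal:
    // code-phase hypotheses times Doppler bins.
    func searchCells(dopplerUncertaintyHz, binWidthHz float64) int {
        const codePhases = 2046 // 1023 chips searched at half-chip steps
        bins := int(2*dopplerUncertaintyHz/binWidthHz) + 1
        return codePhases * bins
    }

    func main() {
        // Illustrative: ~±5 kHz Doppler uncertainty on a cold start, ~500 Hz
        // bins for 1 ms coherent integration, and only ~±250 Hz left over
        // once the almanac predicts the Doppler for you.
        fmt.Println("cold start cells:", searchCells(5000, 500)) // ~43k
        fmt.Println("warm start cells:", searchCells(250, 500))  // ~4k
    }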
OK, they said the GPS of the Starlink satellites is being jammed, and the question was how. The comment I was replying to did not say the terminal, it said the satellite. Maybe that's the confusion.
Still, the 'raw' pixel data of old games rendered on modern displays without any filtering doesn't look anything like it did on CRT monitors (and even among CRT monitors there's a huge range between "game console connected to a dirt-cheap TV via coax cable" and "desktop publishing workstation connected to a professional monitor via VGA cable").
All the CRT shaders are just compromises on the 'correctness' vs 'aesthetics' vs 'performance' triangle (and everybody has a different sweet spot in this triangle, that's why there are so many CRT shaders to choose from).
Most of these CRT shaders seem to emulate the lowest possible quality CRTs you could find back in the day. I have a nice Trinitron monitor on my desk and it looks nothing like these shaders.
The only pleasant shader I have found is the one included in Dosbox Staging (https://www.dosbox-staging.org/), that one actually looks quite similar to my monitor!
Indeed. I made this because I grew up with CRTs and miss that vibe. As I say on the page: it's not scientifically accurate, but it looks good and gives the same sort of feeling. More than that, it uses minimal shader code, so it works well on older devices. I'm currently making a 3D game that uses this shader and it runs at 60fps on an iPhone XS (2018).
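Not the shader from the page (that would be GPU code), but for a sense of how little it can take to get the vibe, here's a toy CPU-side sketch in Go of about the simplest possible scanline pass; the filenames and the 0.35 strength are arbitrary, and it deliberately skips the mask, bloom, and curvature a "correct" CRT model would need.

    package main

    import (
        "image"
        "image/color"
        "image/png"
        "log"
        "os"
    )

    // applyScanlines darkens every other row by `strength`.
    func applyScanlines(src image.Image, strength float64) *image.RGBA64 {
        b := src.Bounds()
        dst := image.NewRGBA64(b)
        for y := b.Min.Y; y < b.Max.Y; y++ {
            f := 1.0
            if y%2 != 0 {
                f = 1.0 - strength // dim the in-between lines
            }
            for x := b.Min.X; x < b.Max.X; x++ {
                r, g, bl, a := src.At(x, y).RGBA() // 16-bit channels as uint32
                dst.SetRGBA64(x, y, color.RGBA64{
                    R: uint16(float64(r) * f),
                    G: uint16(float64(g) * f),
                    B: uint16(float64(bl) * f),
                    A: uint16(a),
                })
            }
        }
        return dst
    }

    func main() {
        in, err := os.Open("frame.png") // placeholder input frame
        if err != nil {
            log.Fatal(err)
        }
        src, err := png.Decode(in)
        in.Close()
        if err != nil {
            log.Fatal(err)
        }

        out, err := os.Create("frame_scanlines.png")
        if err != nil {
            log.Fatal(err)
        }
        defer out.Close()
        if err := png.Encode(out, applyScanlines(src, 0.35)); err != nil {
            log.Fatal(err)
        }
    }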
From the article, their claim is only about AI-generated assets (both in the game and its marketing), not logic. This is what people usually refer to when they say a game is "AI-Free".
FWIW, while I'm complaining about this site, I'm actually adding a nice, easy-on-the-eyes particle system to the background of a point-and-click online game myself. You just don't put it in front of the content or behind text people are supposed to read.
This is such a common failure mode with coders who do their own design. Just because you can do something doesn't mean you should.
If you're noticing stuttering on 24fps pans, then someone made a mistake when setting the shutter speed (they set it too fast); the motion blur should have smoothed it out. This is an error on the cinematographer's part more than anything.
60fps will always look like cheap soap opera to me for movies.
Pans looking juddery no matter what you do at 24 fps is a very well-known issue. Motion blur's ability to help (using the 180-degree shutter rule) is quite limited, and you can also reduce it somewhat by panning very slowly (using the 1/7 frame rule), but there is no cure. The cinematographer cannot fix the fundamental physical problem of the 24 fps frame rate being too slow.
24 fps wasn't chosen because it was optimal or high quality; it was chosen because it was the cheapest option for film that meets the minimum rate needed to not degrade into a slideshow and still sync with audio.
Here's an example that uses the 180-degree shutter and 1/7-frame rules and still demonstrates bad judder. “We have tried the obvious motion blur which should have been able to handle it but even with feature turned on, it still happens. Motion blur applied to other animations, fine… but with horizontal scroll, it doesn’t seem to affect it.” https://creativecow.net/forums/thread/horizontal-panning-ani...
Even with the rules of thumb, “images will not immediately become unwatchable faster than seven seconds, nor will they become fully artifact-free when panning slower than this limit”. https://www.red.com/red-101/camera-panning-speed
The thing I personally started to notice and now can’t get over is that during a horizontal pan, even with a slow speed and the prescribed amount of motion blur, I can’t see any details or track small objects smoothly. In the animation clip attached to that creativecow link, try watching the faces or look at any of the text or small objects in the scene. You can see that they’re there, but you can’t see any detail during the pan. Apologies in advance if I ruin your ability to watch pans in 24fps. I used to be fine with them, but I truly can’t stand them anymore. The pans didn’t change, but I did become more aware and more critical.
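To put rough numbers on why even a "correct" pan still judders (illustrative figures of my own, not from the linked articles): at 24 fps with a 180-degree shutter, a full-width pan taking the recommended ~7 seconds across a 1920-pixel-wide image still jumps about 11 px per frame, of which only about half is covered by motion blur.

    package main

    import "fmt"

    func main() {
        // Illustrative numbers, not from the linked articles.
        const frameRate = 24.0      // frames per second
        const shutterFraction = 0.5 // 180-degree shutter: exposure = half the frame time
        const frameWidthPx = 1920.0 // horizontal resolution
        const panSeconds = 7.0      // guideline: full-width pan in no less than ~7 s

        panSpeed := frameWidthPx / panSeconds        // pixels per second
        stepPerFrame := panSpeed / frameRate         // how far the image jumps each frame
        blurLength := stepPerFrame * shutterFraction // portion of the jump that is smeared
        gap := stepPerFrame - blurLength             // unsmeared discontinuity seen as judder

        fmt.Printf("pan speed:      %.1f px/s\n", panSpeed)   // ~274 px/s
        fmt.Printf("step per frame: %.1f px\n", stepPerFrame) // ~11.4 px
        fmt.Printf("motion blur:    %.1f px\n", blurLength)   // ~5.7 px
        fmt.Printf("visible gap:    %.1f px\n", gap)          // ~5.7 px every frame
    }

The same pan at 60 fps would step only ~4.6 px per frame, which is roughly why high-frame-rate pans track so much more smoothly (soap-opera look aside).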
> 60fps will always look like cheap soap opera to me for movies
Probably me too, but there seems to be some evidence and hypothesizing that this is a learned effect, because we grew up with 24p movies. The kids don't get the same effect because they didn't grow up with it, and I've heard that it's also less pronounced for people who grew up watching PAL rather than NTSC. TVs with motion smoothing on are curing the next generation of being stuck with 24 fps.
I doubt that. I hear on the internet that Gemini Pro is great, but every time I have used it, it has been beyond disappointing. I'm starting to believe that "Gemini Pro is great" is some paid PR push and not based on reality. The Gemma models are also probably the least useful/interesting local models I've used.
What are you using them for? Gemini (the app, not just the Google search overview) has replaced ChatGPT entirely for me these days, not least because I find Gemini simply handles web searches better (after all, that is what Google is known for). Add to that, it integrates well with other Google products like YouTube or Maps, where it can make me a nice map if I ask it what the best pizza places are in a certain area. I don't even need to use Pro mode, just Fast mode, because it's free.
Claude is still used, but only in IDEs for coding; I don't ask it general questions anymore.
I use Gemma as a developer for basic on-device LLM tasks such as structured JSON output.
That's true but to be honest I didn't really use those features anyway, my chats are just one long stream of replies and responses. If I need to switch to a new topic I make a new chat.
I used Gemini Pro and it was unable to comply with the simplest instructions (for image diffusion). Asking it to change the scene slightly by adding or removing an object or shifting the perspective yielded almost the same result, only with some changes I did not ask for.
The image quality was great, but when I ask a woodworker for a table and get a perfectly crafted chair of the highest quality, I'm still unsatisfied.
I cancelled my subscription after two days trying to get Gemini to follow my instructions.
When was this, before or after Nano Banana Pro came out? This is a well-known bug, or rather, intended behavior to some extent: it goes through content filters on Gemini which can be overly strict, so it doesn't edit the image as you'd expect.
You can try it in AI Studio for free, which does not have the same strict content filters, and see if it works for your use case now.