phire's comments | Hacker News

Yes, the actual bandwidth of the last-mile analog line was much, much higher. That's why we eventually got 8mbit ADSL or 24mbit ADSL2+ running across it. Or even 50-300mbit with VDSL in really ideal conditions.

Though the actual available bandwidth was very dependent on distance. People would lease dedicated pairs for high bandwidth across town (or according to a random guy I talked to at a cafe: just pirate an unused pair that happened to run between their two buildings). But once we start talking between towns, the 32kbit you could get from the digital trunk lines was almost always higher than what you could get on a raw analog line over the same distance.


Yeah, I’m the same. I default to anyhow unless I need a strong API boundary (like if I’m publishing a library crate).

Sure, it’s slightly more error-prone than proper enum errors, but there’s so much less friction, and it’s much better than just panicking (or unwrapping) everywhere.
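
For what it’s worth, a minimal sketch of that style (the function, file name, and error messages are all made up for illustration):

    use anyhow::{Context, Result};

    // Any error type that implements std::error::Error bubbles up
    // through `?`, with no hand-written error enum required.
    fn read_port(path: &str) -> Result<u16> {
        let text = std::fs::read_to_string(path)
            .with_context(|| format!("failed to read {path}"))?;
        let port = text
            .trim()
            .parse::<u16>()
            .context("port file did not contain a valid u16")?;
        Ok(port)
    }

    fn main() -> Result<()> {
        let port = read_port("port.txt")?;
        println!("would listen on port {port}");
        Ok(())
    }

The `?` plus `.context()` combination is where most of the friction savings are: you still get a readable error chain, without defining an enum variant for every failure mode.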


It seems very useful for archiving branches that never got merged.

Sometimes I work on a feature, and it doesn’t quite work out for some reason or another. The branch will probably never get merged, but it’s still useful for reference later when I want to see what didn’t work when taking a second attempt.

Those abandoned branches have been polluting my branch list. In the past I have cloned the repo a second time just to “archive” them. Tags seem like a better idea.
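
Something like this, presumably (the archive/ prefix is just a naming convention, nothing git-special):

    git tag archive/my-feature my-feature    # snapshot the branch tip as a tag
    git branch -D my-feature                 # then drop the branch
    # and later, if the second attempt needs it:
    git checkout -b my-feature-v2 archive/my-feature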


I don’t think I’ve ever returned to an old branch that could still be easily rebased on top of the main branch. And if I really wanted to, I’d prefer to extract a patch so that I can copy the interesting lines.

Any branch older than 6 months is a strong candidate for deletion.


I sometimes leave merged branches around for quite a while. Because I squash them when I merge to master, the ability to bisect within the original branch is sometimes very handy when tracking down a bug.

What made you decide to squash when merging instead of leaving the commits in the history so you can always bisect?

Not GP, but we do the same. Branches become the atomic unit of bisection in master, but the need is extremely rare; I think that’s because we have good tests.

We also keep merged branches around. It has never actually happened, but if we needed to bisect within a merged branch, we could do that.

I know squash-merge isn't everyone's cup of tea, but I find it to be simpler and clearer for the 99+% case, and only slightly less convenient for the remainder.
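
As a sketch, that two-stage bisect might look something like this (all refs here are hypothetical):

    # Stage 1: bisect master, where each commit is one squashed branch.
    git bisect start master v1.2.0    # <bad> <good>
    # ...build/test and mark each checkout good/bad until git
    # names the offending squash commit, then:
    git bisect reset

    # Stage 2: bisect within the kept branch for the exact commit.
    git bisect start feature/foo $(git merge-base master feature/foo)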


The same reason your history textbook is not infinitely long. The longer something is, the less digestible it is. When we need more granularity, it's there in the branches.

Wonder if it's worth squashing in the branch, merging to main, then immediately reverting.

Now the work is visible in history, the branch can be deleted, and anyone in the future can search for the ticket number or whatever, if your commit messages are useful.

Dunno if it's worth polluting history, just thinking out loud.
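
A sketch of that flow (branch name and ticket number are made up):

    git checkout main
    git merge --squash feature/foo    # stage the whole branch as one commit
    git commit -m "TICKET-123: abandoned foo experiment, see message body"
    git revert HEAD                   # immediately back it out of main
    git branch -D feature/foo         # the work now lives in main's history

The net effect on main's tree is nothing, but the whole change sits in history as one searchable commit.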


Let it go. You’re not going to bother fixing those to work with master.

It just moves the trash from the branch list to the tag list.


Yeah, I think the author has been caught out by the fact that there simply isn’t a canonical way to encode h264.

JPEG is nice and simple: most encoders will produce (more or less) the same result for any given quality settings. The standard tells you exactly how to compress the image. Some encoders (like mozjpeg) use a few non-standard tricks to produce 5-20% better compression, but it’s essentially just a clever lossy preprocessing pass.

With h264, the standard essentially just says how decompressors should work, and it’s up to the individual encoders to work out how to make the best use of the available functionality for their intended use case. I’m not sure any encoder uses the full functionality (the standard allows arbitrary frame ordering, but x264 refuses to use it without b-frames, and I haven’t found an encoder that takes advantage of it). Which means different encoders produce wildly different output.
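
You can see this by feeding the same source to two encoders at the same nominal settings (assuming an ffmpeg build that includes both libx264 and libopenh264; file names are made up):

    # Same source, same target bitrate, two spec-compliant encoders:
    ffmpeg -i source.y4m -c:v libx264     -b:v 2M x264.mp4
    ffmpeg -i source.y4m -c:v libopenh264 -b:v 2M openh264.mp4
    # Both outputs decode anywhere, but the bitstreams (and quality) differ a lot.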

I’m guessing Moonlight makes the assumption that most of its compression will come from motion prediction, and then takes massive shortcuts when encoding I-frames.


But you spend much less energy fighting gravity.

I’d expect it to have more range underwater than a typical quadcopter has through air. And a much longer “flight” time.

But I doubt it gains enough to compete with a fixed-wing drone using the same battery.


>I’d expect it to have more range underwater than a typical quadcopter has through air.

I would expect the opposite, with the higher drag being much more of an issue than gravity. But I would be interested to hear a definitive answer.


It never stopped.

It just takes a backwards step from time to time, when a major architectural innovation delivers better performance at significantly lower clock speeds. Intel's last backwards step was from Pentium 4 to Core, all the way back in ~2005. AMD's last backwards step was from Bulldozer (and friends) to Zen, in 2017.

7GHz is ridiculous and probably just a false rumour, but IMO Intel and AMD are probably due for another backwards step; they are exceeding the peak speeds from the P4/Bulldozer eras. And Apple has proved that you can get better performance at lower clock speeds.


Intel's plan for the P4 was to scale to 10GHz. It's always been a race, but plans don't always work out.

And IBM were planning for the PS3's Cell processor to run at something like 6GHz, with later versions scaling further. Though it's not like Sony were planning to ship the PS3 clocked that high; they were just expecting their 3-4GHz CPU to run much cooler than it did.

You can really see where the industry hit the wall with Dennard scaling.


I mean, it was one of those inevitable technologies.

Other companies had already invented the CCD; it was only a matter of time before someone digitised the signal and paired it with a storage device. It was an obvious concept.

All Kodak really did was develop an obvious concept into a prototype many years before it could be viable, and then receive a patent for it.


It seems to be more that they are simultaneously launching and killing the product.

Sounds like they entered into a contract to develop and sell the CM0 to several large manufacturers, who happen to all be in China, hence the launch. But then they discovered that the supply of the RAM chips it uses is extremely low (manufacturing apparently stopped years ago), and they want to direct as many of them as possible towards the Pi Zero 2.

So we will probably see a follow-up to both later, and the CM0-B (or whatever they call it) will be more widely available.


But they obviously knew the RAM was EOL, since they already use it in the Zero 2. It would be monumentally incompetent for either party to not know this, so there must be a plan.

Perhaps these RAM chips are more readily available in China through some means. There are companies that will extend the lifetime of a product if you can get them the design; we've used that for niche (expensive) RAMs. Surprised it would be worth it for something at the low end. Maybe they just have a huge pile of them in China.


The 512MB RAM die is actually embedded into the same RP3A0 package as the CPU (it's the exact same CPU die used in the Raspberry Pi 3). So the stock is exactly the same worldwide, and linked. And I'm pretty sure the RP3A0 chips are packaged outside of China and would need to be shipped in for this.

Besides, China's RAM manufacturing industry is reasonably new, and only makes DDR4 and LPDDR4, not the older LPDDR2 which the RP3A0 uses.

But yes, they would have known LPDDR2 was EOL. It was EOLed 6 years ago, before they even launched the Zero 2 (which they only introduced because the BCM2835 chip used by the original Zero was EOL), so it's not exactly clear why they are launching the CM0 now.

What makes the most sense to me is that they are currently developing a new chip that will be a more-or-less drop-in replacement for the RP3A0. If it's drop-in, then the design work on the CM0 won't be wasted.

Which would give us some clues about what the RP4x chip is, and its current status (close enough that they know it will arrive before they run out of RP3A0 chips for the Pi Zero 2, but far enough away that it's still worth launching the CM0 now, as long as supply is limited).

This RP4x chip presumably needs low enough power/cost to fit the Pi Zero 3 budget (so quad Cortex-A725 cores?), while also using modern memory (LPDDR4, if not LPDDR5) to push the EOL out as far as possible. Since the Raspberry Pi 3 depends on the same EOL LPDDR2 memory, this theoretical RP4x chip will probably be used for a product refresh there too (lowering their costs, as a bonus).


A new chip makes the most sense, good point. But I'm pretty sure LPDDR4 is also pretty much EOL; it's going to get as expensive as LPDDR5 at this rate.

Problem is that I don't think LPDDR5 comes in any sizes smaller than 1GB, so if they want to stick to the current 512MB spec (and price point), LPDDR4 might be the way to go.

It might be nearer to EOL, but it's not actually EOL yet, and should be fine for 5+ years after any EOL is announced.


Micron have announced EOL. Yes, they'll supply industry for a while, but I don't think that's the place Raspberry Pi want to be.

Others seem to have delayed their announcements after realising they can still make a load of money off DDR4. Also not a great situation for a Raspberry Pi chip. https://www.trendforce.com/news/2025/09/02/news-samsung-sk-h...


I'm not sure that reporting is correct.

Micron have EOLed a bunch of DDR4/LPDDR4 parts, but that happens all the time, even on current nodes. There are still plenty of LPDDR4 parts in Micron's catalog [1] that are not EOL. TBH, I'm not exactly sure what the pattern is; maybe it's just parts on older nodes?

Reporters might be overreacting to product change notifications?

Also, it doesn't really matter when the first fabs start EOLing parts, it only matters when the last fab closes. And LPDDR4 might be a long-lived part, simply because China have opened a DDR4/LPDDR4 fab. Which might actually cause non-Chinese fabs to shut down DDR4/LPDDR4 early to focus on nodes with less competition.

[1] https://www.micron.com/products/memory/dram-components/lpddr...


MobyGames [1] claim they are unique missions. More or less the same story, but different maps, objectives and briefing text.

Maybe I need to re-download it, and check out the differences. I remember playing those six missions so many times before eventually saving up enough pocket money to buy the game, but I don't exactly remember them being different.

And it's actually six maps, three for each faction.

[1] https://www.mobygames.com/game/57961/warcraft-ii-tides-of-da...


Yeah, 9 patches for the original game, then the Battle.net Edition in 1999 (which added support for TCP/IP networking and Battle.net matchmaking), and at least one downloadable patch for that.

https://warcraft.wiki.gg/wiki/Warcraft_II_patch_information#...

