MontagFTB's comments

We saw these in Ravenna


What, no Doom running on Voyager 2?


If the Standard has anything to say about compatibility between different language versions, I doubt many developers know those details. This is a breeding ground for ODR violations, as you’re likely mixing output from compilers built in different eras of the language’s lifetime, especially at higher optimization settings.

This flies in the face of modern principles like building all your C++, from source, at the same time, with the same settings.

Languages like Rust include these settings in symbol names as a hash to prevent these kinds of issues by design. Unless everyone on your team is a moderate-level language lawyer, you must enforce this by some other means or risk some really gnarly issues.
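
To make the hazard concrete, here is a minimal sketch (file and symbol names are hypothetical) of how one compile-time switch produces an ODR violation the linker cannot see:

    // widget.h, a shared header whose layout depends on a switch
    #pragma once
    struct Widget {
    #ifdef ENABLE_STATS
      long stats[16];  // present only in TUs built with -DENABLE_STATS
    #endif
      int value = 0;
    };
    int read_value(const Widget& w);  // defined in b.cpp

    // a.cpp, built with -DENABLE_STATS:
    //   #include "widget.h"
    //   int main() { Widget w; w.value = 42; return read_value(w); }
    //
    // b.cpp, built without the define:
    //   #include "widget.h"
    //   int read_value(const Widget& w) { return w.value; }
    //
    // The link succeeds (the mangled names agree), but b.cpp reads
    // `value` at offset 0 while a.cpp stored it at offset 128 on a
    // typical LP64 target. No diagnostic, just garbage at run time.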


> Languages like Rust include these settings in symbol names as a hash to prevent these kinds of issues by design.

Historically, C++ compilers' name mangling schemes did precisely the same thing. The 2000-2008 period for gcc was particularly painful, since the compiler developers changed the scheme frequently, precisely to "prevent these kinds of issues by design". The only reason most C++ developers don't think about this much any more is that most C++ compilers haven't needed to change their mangling scheme for a decade or more.


C++’s name mangling scheme handles some things like namespaces and overloading, but it does not account for other settings that can affect the ABI layer of the routine, like compile-time switches or optimization level.
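
This is easy to check under the common Itanium ABI: the mangled name encodes the namespace and the parameter types, and nothing about how the TU was built.

    // overload.cpp
    namespace app {
      int frob(int)    { return 1; }  // mangles to _ZN3app4frobEi
      int frob(double) { return 2; }  // mangles to _ZN3app4frobEd
    }

    // $ g++ -c overload.cpp && nm overload.o   (addresses elided)
    //   T _ZN3app4frobEd
    //   T _ZN3app4frobEi
    //
    // Recompile with -O3, -ffast-math, -fno-exceptions, or different
    // packing flags, and the symbols come out byte-for-byte identical;
    // the linker has nothing with which to diagnose a mismatch.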


The name mangling scheme was changed to reflect things other than namespaces and overloading; it was modified to reflect fundamental compiler version incompatibilities (i.e., the ABI).

Optimization level should never cause link-time or run-time issues; if it does, I'd consider that a compiler/linker bug, not an issue with the language.


Decompressing, rotating the pixels, and deflating them again still takes more time than updating the orientation value in the Exif (and the chunk’s associated CRC)
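
A rough sketch of the cheap path, assuming you have already located the eXIf chunk and the byte offset of the orientation field within it (finding those, and handling the Exif byte order, is the fiddly part elided here):

    #include <cstddef>
    #include <cstdint>
    #include <zlib.h>  // PNG chunks use the same CRC-32 as zlib

    // `chunk` points at the 4-byte type tag ("eXIf"); the data follows,
    // then the 4-byte big-endian CRC. `orient_off` is the offset of the
    // Exif Orientation value within the chunk data (hypothetical input;
    // locating it requires walking the TIFF structure).
    void set_orientation(uint8_t* chunk, std::size_t data_len,
                         std::size_t orient_off, uint8_t orientation) {
      chunk[4 + orient_off] = orientation;  // e.g. 6 = rotate 90 CW
      // The chunk CRC covers the type tag and the data, not the length.
      uint32_t crc = crc32(0L, chunk, static_cast<uInt>(4 + data_len));
      uint8_t* p = chunk + 4 + data_len;
      p[0] = crc >> 24; p[1] = crc >> 16; p[2] = crc >> 8; p[3] = crc;
    }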


It's still negligible from the consumer standpoint.

Like, if you had millions of images you needed to rotate on a server in a batch job, then OK.

But if you're just rotating one photo, or even a hundred, that you've just taken, it's plenty fast enough.


Dithering the errors across the image would make the final result a lot more palette-able.


There are plenty of posts out there on using Knuth’s dancing links as a fast sudoku solver. Has it fallen out of fashion?


Dancing links is a very cute data structure for a backtracking search, but there is a lot more to writing a good Sudoku solver than just having a good data structure for backtracking: propagation (making deductions), heuristics, learning, parallelism, restarts, no-goods, ...

While 9x9 Sudoku problems are trivial to solve for more or less any program, 25x25 Sudoku instances are quite tricky and a simple and fast but naive search for a solution can easily take hours.
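
For flavor, the bottom rung of that ladder, "naked singles" propagation over candidate bitmasks, looks roughly like this (a toy sketch; peers[c] is assumed to list the cells sharing a row/column/box with c, excluding c itself):

    #include <cstddef>
    #include <cstdint>
    #include <vector>

    // cand[c] = bitmask of digits still possible in cell c; a solved
    // cell has exactly one bit set. Runs to a fixed point and returns
    // false on contradiction. Real solvers layer much stronger rules
    // (hidden singles, pointing pairs, ...) on top of this.
    bool propagate(std::vector<uint32_t>& cand,
                   const std::vector<std::vector<int>>& peers) {
      bool changed = true;
      while (changed) {
        changed = false;
        for (std::size_t c = 0; c < cand.size(); ++c) {
          if (cand[c] == 0) return false;         // no candidates left
          if (cand[c] & (cand[c] - 1)) continue;  // not a single yet
          for (int p : peers[c])                  // strip the digit
            if (cand[p] & cand[c]) { cand[p] &= ~cand[c]; changed = true; }
        }
      }
      return true;
    }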


For generating puzzles it's really useful, since it lets you determine whether a randomly generated puzzle has exactly one solution (an exact cover problem). And it's fast, so adding it to a pipeline doesn't incur much, if any, overhead.
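
The usual shape of that check, sketched with hypothetical helpers (Grid, find_most_constrained_cell, candidates, place, and unplace stand in for whatever your solver provides):

    // Uniqueness test: count solutions, but stop as soon as a second
    // one turns up, so over-generous candidates are rejected quickly.
    int count_solutions(Grid& g, int cap) {
      int cell = find_most_constrained_cell(g);  // hypothetical helper
      if (cell < 0) return 1;             // no empty cells: one solution
      int found = 0;
      for (int digit : candidates(g, cell)) {
        place(g, cell, digit);
        found += count_solutions(g, cap - found);
        unplace(g, cell, digit);
        if (found >= cap) break;          // a second solution: bail out
      }
      return found;
    }

    // A generated puzzle is keepable iff count_solutions(g, 2) == 1.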


Is there any property in particular of dancing links that you think helps in determining this, or is it just that a backtracking search can be used to test all cases?

For pen-and-paper puzzles like Sudoku there is usually the goal that a solution should be findable by a series of deductive steps. For 9x9 Sudoku, most deductive steps used correspond to the effects that well-known propagation techniques offer[1]. With a suitable propagation level, if the puzzle is solved search-free, then one knows both that there is only one solution and that there is a deductive path to it.

[1]: See "Sudoku as a Constraint Problem", Helmut Simonis, https://ai.dmi.unibas.ch/_files/teaching/fs21/ai/material/ai... for some data on 9x9 Sudoku difficulty and the propagation techniques that are needed for search-free solving.


How’d they get Claude listed as one of the contributors? Is that due to changes coming into the repo from a Claude/GitHub integration?


it's just claude code committing and pushing for me because i'm lazy


Not lazy! This should be a requirement, so future “us” can discern authorship - just like any developer.


It will probably go the opposite way in the future, though. People will note when AI wasn't in the loop, like how "sent from my iPhone" was both a status signal and a request for leniency when it came to spellcheck.


if you read the article, it says it is entirely vibecoded


Perhaps not, but a big benefit according to OP is the smaller number of tokens / less context pollution that skills introduce vs. MCP.


A blast from the past! I once wrote an implementation of dancing_links in C++ as part of a Sudoku solver: https://github.com/stlab/adobe_source_libraries/blob/main/ad...
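
The core of it is satisfying to write: an unlinked node keeps its pointers to its old neighbors, so backtracking is an O(1) splice. Roughly, following Knuth's node layout (not necessarily the layout in the file above):

    struct Node {
      Node *left, *right, *up, *down;
      Node* column;  // this node's column header
      int size = 0;  // column headers only: rows remaining
    };

    // Remove a column header and every row that uses that column...
    void cover(Node* c) {
      c->right->left = c->left;  // unlink the header horizontally
      c->left->right = c->right;
      for (Node* r = c->down; r != c; r = r->down)
        for (Node* n = r->right; n != r; n = n->right) {
          n->down->up = n->up;   // unlink each cell vertically
          n->up->down = n->down;
          n->column->size--;
        }
    }

    // ...and undo it in exactly the reverse order. The removed nodes
    // still point at their old neighbors, so relinking is the mirror.
    void uncover(Node* c) {
      for (Node* r = c->up; r != c; r = r->up)
        for (Node* n = r->left; n != r; n = n->left) {
          n->column->size++;
          n->down->up = n;
          n->up->down = n;
        }
      c->right->left = c;
      c->left->right = c;
    }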


The largest tsunami on record came from a landslide in a bay: https://en.wikipedia.org/wiki/1958_Lituya_Bay_earthquake_and...


Yeah that'll happen when a good chunk of a mountain basically drops into your body of water, lol:

"The large mass of rock, acting as a monolith (thus resembling high-angle asteroid impact), struck with great force the sediments at bottom of Gilbert Inlet at the head of the bay. The impact created a large crater and displaced and folded recent and Tertiary deposits and sedimentary layers to an unknown depth."

With updated modeling showing that the impact triggered the glacier to lift and subsequently release even more material, it's shocking anyone in the bay survived at all.

Edit - found a video with said paper's modeling implemented, pretty neat: https://www.youtube.com/watch?v=B1axr5YGRwQ


Much of the literature references this as the biggest ever tsunami at 500+ meters, but an account from one of the survivors who was there on his fishing boat (with his 7-year-old son!) said this specific thing:

"The wave definitely started in Gilbert Inlet, just before the end of the quake. It was not a wave at first. It was like an explosion, or a glacier sluff. The wave came out of the lower part, and looked like the smallest part of the whole thing. The wave did not go up 1,800 feet, the water splashed there."

Still insane, but it was the immediate splash that scoured away trees and soil cover up to 527 meters up the mountain face, not a proper tsunami.

Both the fisherman and his son survived btw.


Yeah, I personally agree it’s not quite the same as if the Pacific Ocean were coming up 527 meters before hitting the shore :)


I'd absolutely love to see something like this live, in the flesh, obviously from far enough away to survive it. It would be one of those spectacles that completely flabbergasts one's sense of importance in a self-remaking world.

