Here's a tip: any time you've got before/after screen grabs, don't do this thing where you've got to drag a line to switch between the two, don't have a fade, don't have a sliding transition, or anything like that. Just have it display one, then have a single button that you click to have it immediately display the other. Then when you click the button again, it goes back to displaying the first one again. Click, click, click - and your eyes do all the work for you.
Also: I can't work out which image is which. Taking the first image as an example: we've got MATERIAL-STYLE on the left, and LIFTKIT on the right. But what's the left? Does this mean that when you drag the line to the right, revealing the left image, you're looking at MATERIAL-STYLE? Or does this mean you see MATERIAL-STYLE when you drag the line to the left?
(This might seem like pointless quibbling, but this thing bills itself as "The UI Framework for Perfectionists".)
Hey Tom, I'm the creator. They're actually even worse than what you're describing. On touchscreens, the handle slides up and down as you try to move it left or right. Horrible, isn't it? One of these days, I'll get around to fixing it. The only reason it hasn't been done yet is that, to be perfectly honest, you're the first one to give this feedback. So I appreciate it!
If you got rid of the slider entirely and just had it flick between the two images instantly, the entire handle business would become irrelevant, and you'd never need to think about it again!
I admit I don't do web stuff, so perhaps this is hard to do. But I think it's the ideal. Before/after comparisons are very easy to assess if you can flick between the two cases and let your eyes show you the differences. The value of having an image that's part one and part the other (and two completely separate parts!) seems a bit questionable.
(My line of work means I'm unlikely to end up a customer, so you don't have to pay attention to my opinions.)
The flipping-between is a great hack -- as you said your eyes (really, brain) just do the work for you.
I learnt about it in Japan where proof-readers and editors would (or do) quickly lift a top page up and down to spot mistakes with kanji (pictographs). And sure enough, even from a page of dense script the dissonance of the error really does pop out at you.
I likewise tucked that little trick into my belt -- it comes in useful anytime you're trying to manually spot a pattern across complex data. This technique has the same "vibe" as FFTs to me: it's just neat feeling like you're getting computation from the universe for free.
Solar PV is in a similar category: free electrons if you can arrange the magic rocks just right :)
If you put two proofs side by side, you can view them from the right distance and then cross or uncross your eyes like a stereogram until they converge, which makes the differences shimmer.
And once you have the hang of this technique, congratulations! You can now enjoy those 3D "Magic Eye" images that stumped a significant portion of the population back in the 90s :)
I use ScreenFloat[0] in a similar way to catch differences between GUI settings, like the cPanel PHP extensions selector, which has tons of checkboxes. Position a screenshot of settings for site A over the settings for site B, adjust the transparency, and any differences will jump out.
I'd like to imagine I know which image in each example was the better-designed one, but the handle going to the side opposite the label was making me second-guess. I took it as "move the handle away from a label to reveal that label's version", so I hope that's what you intended.
OTOH, I'm on a touchscreen (iPad/iOS 26/WebKit) and it didn't go up and down, it went side to side.
As other feedback, the dumpster fire and deprecation warnings in the docs make me want to try this. I find builder-to-builder candor refreshingly helpful, treating your doc reader like an actual partner instead of like a euphemism. Appreciate your same candor throughout these comments.
Chainlift > Agency Services > Team menu option seems inert.
I always found this UI pattern a bit odd, because there just aren't that many situations where you want to compare the left side of image A and the right side of image B.
I see it a lot in photography, to show before/after processing - but what you want to be able to quickly compare is the same part of an image with and without the processing applied.
One of the photography tools I make is a LUT viewer/converter - and while I didn't have the slider at first, I guess it's standard enough at this point that people asked for it and I added it.
But I made two additions to it that make it more useful IMO:
- have labels on the left/right top corners, so it's immediately clear which version of the image you're looking at
- click and hold on the image to preview the full unprocessed version; release to return to the processed view. That makes it easy to quickly compare the two versions of the same spot of a photo. (similar to what you suggest, but non-latching)
I've been wondering that myself. The descriptions seem to indicate that fully dragged to the left is liftkit, but my first assumption was that would be fully dragged to the right.
it's bad UX.
There's a tiny little arrow on the line's grab indicator showing which "side" you should look at. You can barely see it. Below that, the two labels are floated to either side...
I agree, the x-axis labels are not helpful! Thankfully, the first example is “buttons with corrected icon spacing”, and the image on the right looks much better than the one on the left (a bigger difference in quality than in the other two examples), which is visible when the slider is on the left.
Suggestion to devs: put the label “material-style” in the lower left of its image and “liftkit” in the lower right of its image, and cover them appropriately as the slider moves, and then it'll be clear which framework the current image (or portion of it) belongs to.
If you were going to do this for the slider approach, you could arrange the labels at the `block-start` and `block-end` of the image and support RTL scripts/languages natively.
I used this a lot when I was doing Windows stuff professionally, and I always really liked it.
The command line interface is good too: supply the file spec that you'd type into the GUI, and it'll print a list of matching files to stdout, one per line. Very easy to work with. I cobbled together a bit of Python stuff so that any time I was putting together a tool that needed to search for files, it could find the Everything command line tool if present, and use that instead of os.walk and the like, for a useful speedup.
(If nothing else, "es PATTERN" (to instantly find any name matching PATTERN anywhere on the system) is less typing than "find FOLDER -iname 'PATTERN'", and finishes more quickly. And compared to using locate, there's less chance of the database being out of date.)
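In case it's useful to anyone, the wrapper amounted to something like the sketch below. It's from memory rather than the actual code, the helper name is made up, and it assumes the `es` tool is on the PATH; also, the fallback only walks a directory tree rather than the whole system, so it's not a perfect substitute.

```python
import os
import shutil
import subprocess

def find_files(pattern, fallback_root="."):
    """Return matching file paths, using Everything's `es` CLI when available."""
    es = shutil.which("es") or shutil.which("es.exe")
    if es is not None:
        # es prints one matching path per line on stdout.
        result = subprocess.run([es, pattern], capture_output=True, text=True, check=True)
        return [line for line in result.stdout.splitlines() if line]
    # Fall back to a (much slower) os.walk when Everything isn't installed.
    matches = []
    for dirpath, _dirnames, filenames in os.walk(fallback_root):
        for name in filenames:
            if pattern.lower() in name.lower():
                matches.append(os.path.join(dirpath, name))
    return matches
```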
I've often been suspicious of this too, having noticed that building one of my projects on Apple Silicon is way quicker than I'd expect relative to x64, given relative test suite run times and relative PassMark numbers.
I don't know how to set up proper cross-compilation on Apple Silicon, so I tried compiling the same code on 2 macOS systems and 1 Linux system, running the corresponding test suite, and getting some numbers. It's not exactly conclusive, and if I was doing this properly properly then I'd try a bit harder to make everything match up, but it does indeed look like using clang to build x64 code is more expensive - for whatever reason - than using it to build ARM code.
Systems, including clang version and single-core PassMark:
- M4 Max Mac Studio, clang-1700.6.3.2 (PassMark: 5000)
- x64 i7-5557U Macbook Pro, clang-1500.1.0.2.5 (PassMark: 2290)
- x64 AMD 2990WX Linux desktop, clang-20 (PassMark: 2431)
Single-thread build times (in seconds). Code is a bunch of C++, plus some FOSS dependencies that are C, everything built with optimisation enabled:
- Mac Studio: 365
- x64 Macbook Pro: 1705
- x64 Linux: 1422
(Linux time excludes build times for some of the FOSS dependencies, which on Linux come prebuilt via the package manager.)
Single-thread test suite times (in seconds), an approximate indication of relative single-thread performance:
- Mac Studio: 120
- x64 Macbook Pro: 350
- x64 Linux: 309
Build time/test time makes it look like ARM clang is an outlier:
- Mac Studio: 3.04
- x64 Macbook Pro: 4.87
- x64 Linux: 4.60
(The Linux value is flattered here, as it excludes dependency build times, as above. The C dependencies don't add much when building in parallel, but, looking at the above numbers, I wonder if they'd add up to enough when built in series to make the x64 figures the same.)
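(For anyone wanting to check my arithmetic, the ratios are just build time divided by test suite time for each system, using the figures above:)

```python
# Build time / test suite time, from the figures above.
times_s = {
    "Mac Studio": (365, 120),
    "x64 Macbook Pro": (1705, 350),
    "x64 Linux": (1422, 309),
}
for name, (build_s, test_s) in times_s.items():
    print(f"{name}: {build_s / test_s:.2f}")
# Mac Studio: 3.04, x64 Macbook Pro: 4.87, x64 Linux: 4.60
```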
This article contains multiple lies. For starters, it is not the 5th of February. Also, 1 BTC = 1 BTC, or, in dollar terms, despite the claims, $64,400.
I didn't pay much attention to the rest of it but I can't imagine it was any better.
Well I messed up my joke slightly by adding a time-dependent BTC value, which was silly of me, but I was trying to cover all bases. I should have stuck to the classic 1 BTC = 1 BTC :(
(Anyway, in answer to your question: at time of writing, it's the 6th of February, though I don't know what else you might be expecting.)
But, regardless, even if you feel you have won this time - for how long? It won't be the 5th of February for you forever!
Inflexible window layout. (For example, suppose you want to see the breakpoints list, call stack, and find results simultaneously. You can't, as they all share the same panel, which is always on the left of the window.)
I was struck by the "Magnitude: High | Applicability: High" bit. Who writes like this? More importantly, who reads like this? The V4 doc (which I have yet to read, but I did a text search) has 64 occurrences of this sort of phrasing; not actually all that many, given that there are 293 pages, but enough to be interesting. I wonder if this extra stuff is there to make LLMs pay particular attention.
Intel's software optimization guides have similar annotations on many of their guidelines, and have done since long before LLMs were a thing. As a reader it's useful to know how impactful a given recommendation is and how generally applicable it is without having to read the more detailed explanations.
Ahh, interesting, thanks. (I read the reference manuals but typically ignore the rest... I don't need to write this stuff, just read it!) I've seen people recommend creating docs to be LLM-friendly and I was wondering if this was an instance of that.
But that is exactly what the flag button is there for?! - but this discussion has been had numerous times, and the two sides will never agree.
Safest to flag (or not) as you see fit, because you are a good person rather than an evil one. Then rely on the admins to rescue needlessly ultraflagged articles as appropriate. They are pretty good at doing the right thing.
You're saying the discussion of which chemical respirators to wear to protests has been had numerous times?
I'd say this is a productive topic of conversation for many HN users. There are not "two sides" on this topic, unless we're talking 3M vs MSA. The people flagging or commenting with opposing political views are disrupting conversation, likely because they disagree with how the topic has been framed. This is exactly like PHP fans going into a Python thread and telling everyone Python sucks, disrupting the people who just wanted to discuss getting things done within the framework of Python. They might have some valid points, but they're not germane to civil discussion.
No, I was referring to discussion of the semantics of flagging. Apologies; I thought it was phrased clearly enough, but, perhaps not. (Maybe I should have said "that discussion" rather than "this discussion"? This is my native tongue, so you can't trust me to get this stuff perfectly right.)
Ah, I don't know that there was a problem with your phrasing. Rather it's that the meta-discussion of flagging is so inactionable that the possibility didn't even cross my mind. Mea culpa.
(Not unrelated: answer from Andrei Herasimchuk at https://www.quora.com/Why-does-Adobe-Photoshop-differentiate...)