
> They're not just small luxuries, but actually better-performant and more practical than the popular alternatives in almost every way.

Most of us who use fountain pens feel this way too.

Literally just an hour ago I tried picking up a gel pen for writing, and three minutes later it went back into storage. It's a Uniball One, so it's not a bad gel pen either.


They also offer a personal license, which I would say should be affordable to any dev who is earning money. Rider is $149/year for the first year, $119 for the second, and $89/year from the third year onward.


That calculation does not hold for developing countries. Rider's license fee is huge in countries whose per capita GDP is a small fraction of the USA's.


I think it is a reasonable assumption that developers in developing countries earn much more than the per capita GDP.

(I say this as someone from a Newly Industrialized Country, and I can easily afford the All Products Pack.)


We should probably look at developer salaries / hourly rates rather than GDP. Most of the people in developing countries don't need IDEs.

But yeah, if you work for low rates, then you have to work more hours to pay for your tools.


> Most of the people in developing countries don't need IDEs

Most people in general, I would say. I haven’t tried JetBrains editors in a while, and the “developing country” definition is very unclear in my opinion (and also part of why I roll my eyes at the “what about developing countries?” argument sometimes), but I do think the yearly price looks good for WebStorm at least, as someone living in Colombia.

For reference, at the time of writing, the standard Netflix plan costs 26,900 COP a month, which ends up being 322,800 COP yearly. Meanwhile, WebStorm’s first year comes to 298,541.10 COP after USD-to-COP conversion. It isn’t an insignificant sum, but if it offers significant added value, I think it’s a fair price, certainly better than the Netflix pricing. The second year is reduced to 237,967.54 COP, and the third to 177,393.99 COP; that last one is even less than what you’d pay for the Netflix basic plan over a year (202,800 COP).


Almost all (or maybe even all) of what WebStorm does, you can do in Rider or RustRover (which is also free). So it makes no sense not to also make WebStorm free.


You called it "markdown noise". I call it "easy to see if I actually bolded the whole word or forgot to bold the last character" or "easy to see if I also underlined the space after the word".


I used Logseq for about a year and really hated the block-based editor (the same reason I stopped using Notion). I went back to using flat files for a while until I found Zettlr.


Diagonal bridge, for one.


Japanese railway companies in the Kanto region are moving to QR codes for individual tickets in 2027.

The bulk of ticketing will still be FeliCa cards, though, because as far as I know neither QR codes nor EMV open-loop systems can handle the required throughput of 60 persons/minute/gate (one person per second).


Decoding 1 x86 instruction per cycle is easy. That was solved some 40 years ago.

The problem is that a superscalar CPU needs to decode multiple x86 instructions per cycle. I think the latest Intel big-core pipeline can handle (IIRC) 6 instructions per cycle, so to keep the pipeline full the decoder MUST be able to decode 6 per cycle too.

If it's ARM, it's easy to do multiple decode. The M1 does (IIRC) 8 per cycle easily, because the instruction length is fixed: the first decoder starts at PC, the second starts at PC+4, etc. But x86 instructions are variable length, so after the first decoder decodes the instruction at IP, where does the second decoder start decoding?
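
A toy model of that dependency (hypothetical C, just to illustrate; insn_length stands in for the real prefix/opcode/ModRM parsing a decoder has to do):

```c
#include <stdio.h>
#include <stddef.h>

/* Stand-in for real x86 length decoding: in hardware, the length is
 * only known after parsing prefixes, the opcode, ModRM, SIB, etc.
 * Here we just pretend with a made-up instruction stream. */
static size_t insn_length(size_t addr) {
    static const size_t lens[] = {1, 3, 2, 7, 5, 4};
    return lens[addr % 6];
}

int main(void) {
    /* Fixed-length ISA (ARM64): slot i starts at PC + 4*i, so all
     * start addresses are known before any decoding happens. */
    size_t pc = 0x1000;
    for (int i = 0; i < 6; i++)
        printf("fixed slot %d starts at %#zx\n", i, pc + 4 * (size_t)i);

    /* Variable-length ISA (x86): slot i+1's start address is only
     * known after slot i's length is determined, a serial chain the
     * hardware somehow has to break to decode 6 per cycle. */
    size_t ip = 0x1000;
    for (int i = 0; i < 6; i++) {
        printf("x86 slot %d starts at %#zx\n", i, ip);
        ip += insn_length(ip); /* must finish before the next slot */
    }
    return 0;
}
```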


It isn't quite that bad. The decoders write stop bits back into the L1i, to demarcate where the instructions align. Since those bits aren't indexed in the cache and don't affect associativity, they don't really cost much: a handful of 6T SRAM cells per cache line.
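
A rough sketch of that idea in C (not any specific microarchitecture; the field names are made up):

```c
#include <stdint.h>
#include <stdio.h>

/* Sketch of predecode/stop bits: one extra bit per byte of the line
 * marks "an instruction starts here". That's 64 extra bits per
 * 64-byte line, written back the first time the line is decoded, so
 * later fetches can hand every start offset to the decoders at once. */
struct icache_line {
    uint8_t  bytes[64];    /* raw instruction bytes */
    uint64_t insn_start;   /* bit i set => instruction begins at byte i */
};

int main(void) {
    struct icache_line line = {0};
    line.insn_start = 0x8421; /* toy boundaries at offsets 0, 5, 10, 15 */
    for (unsigned i = 0; i < 64; i++)
        if ((line.insn_start >> i) & 1)
            printf("a decode slot can start at offset %u\n", i);
    return 0;
}
```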


I would have assumed it just decodes the x86 into a 32-bit ARM-like internal ISA, similar to how a JIT works in software. x86 decoding is extremely costly in software if you build an interpreter: probably around 30%, and that's assuming you have a cache. But with JIT code morphing in Blink, decoding cost drops to essentially nothing. As best as I understand it, all x86 microprocessors since the NexGen Nx586 have worked this way too. Once you're code morphing the frontend user-facing ISA, a much bigger problem rears its ugly head, which is the 4096-byte page size. That's something Apple really harped on with their M1 design, which increased it to 16KB. It matters since morphed code can't be connected across page boundaries.


It decodes to uOPs optimized for the exact microarchitecture of that particular CPU. High performance ARM64 designs do the same.

But in the specific case of tracking variable-length instruction boundaries, that happens in the L1i cache. uOP caches make decode bandwidth less critical, but it is still important enough to optimize.


That's called a uOP cache, which Intel has been using since Sandy Bridge (and AMD too, though I can't remember off the top of my head since when). But that's more transistors for the cache and its control mechanism.


It's definitely better than what NVIDIA does, inventing an entirely new ISA each year. If the hardware isn't paying the cost for a frontend, then it shovels the burden onto software. There's a reason every AI app has to bundle a 500MB matrix multiplication library in each download, and it's because GPUs force you to compile your code ten times for the last ten years of ISAs.


Part of it is that, but part of it is that people pay for getting from 95% optimal to 99% optimal, and doing that is actually a lot of work. If you peek inside the matrix multiplication library you'll note that it's not just "we have the best algorithm for the last 7 GPU microarchitectures" but also 7 implementations for the latest architecture, because that's what it takes to go fast. Kind of like how if you take an uninformed look at glibc memcpy you'll see there is an AVX2 path and an ERMS path, but it will also switch between algorithms based on the size of the input. You can easily go "yeah, my SSE2 code is tiny and gets decent performance", but if you stop there you're leaving something on the table, and with GPUs it's this but even more extreme.
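
The memcpy point in miniature (a toy sketch; the thresholds are made up and the three "paths" are just stand-ins for glibc's hand-tuned implementations):

```c
#include <stdio.h>
#include <string.h>

/* Stand-ins for the specialized paths; in glibc each is a separate,
 * hand-tuned implementation (SSE2, AVX2, ERMS "rep movsb", ...). */
static void *copy_small(void *d, const void *s, size_t n) { return memcpy(d, s, n); }
static void *copy_avx2 (void *d, const void *s, size_t n) { return memcpy(d, s, n); }
static void *copy_erms (void *d, const void *s, size_t n) { return memcpy(d, s, n); }

/* Toy size-based dispatch; the real selection logic is far hairier
 * and also depends on CPU features detected at load time. */
static void *my_memcpy(void *d, const void *s, size_t n) {
    if (n < 32)         return copy_small(d, s, n); /* short: avoid setup cost */
    if (n < 256 * 1024) return copy_avx2(d, s, n);  /* medium: vector loop     */
    return copy_erms(d, s, n);                      /* huge: rep movsb wins    */
}

int main(void) {
    char src[64] = "hello", dst[64];
    my_memcpy(dst, src, sizeof src);
    puts(dst);
    return 0;
}
```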


Using the uops directly as the ISA would be a bad idea for code density. In RISC-V land, vendors tend to target standard extensions/profiles, but when their hardware is capable of other operations they often expose those through custom extensions.


IMO if the trade-off is cheaper, faster hardware iteration, then Nvidia's strategy makes a lot of sense.


"maths" is British English for "math" in US English.


Personally, while there are some tasks that are easier with multiple cursors, I have yet to find any use case where regexp replace in selections doesn't work just as well.


You cannot edit in real time with regex.

With regex, you need to plan everything you want to do and then surgically do it.

With multiple cursors, you only need to know (roughly) where you need to change; the rest you figure out on the fly.
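
To make the "plan everything up front" point concrete (a toy C program using POSIX regex; the pattern and input are hypothetical): the whole edit, swapping the two sides of every simple assignment, has to be designed into the pattern before anything changes, then it is applied mechanically.

```c
#include <regex.h>
#include <stdio.h>

int main(void) {
    const char *text = "x = y; foo = bar; n = count;";
    regex_t re;
    regmatch_t m[3];

    /* The entire edit is planned here, in the pattern... */
    regcomp(&re, "([a-z_]+) = ([a-z_]+);", REG_EXTENDED);

    /* ...then executed mechanically: print the text before each
     * match, then the match with capture groups 1 and 2 swapped. */
    const char *p = text;
    while (regexec(&re, p, 3, m, 0) == 0) {
        printf("%.*s", (int)m[0].rm_so, p);
        printf("%.*s = %.*s;",
               (int)(m[2].rm_eo - m[2].rm_so), p + m[2].rm_so,
               (int)(m[1].rm_eo - m[1].rm_so), p + m[1].rm_so);
        p += m[0].rm_eo;
    }
    printf("%s\n", p); /* remainder; full output: y = x; bar = foo; count = n; */
    regfree(&re);
    return 0;
}
```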


In editors like vim/emacs it is possible and common.


I find Emacs macros and rectangular editing more useful in more situations than multiple cursors.

As for regexps, the syntax in Emacs is hell, but I do know Emcacs has very powerful edit/replace tools.


Multi-cursor is more flexible than rectangular selection: you can skip or delete words of varying length, etc. It is certainly less powerful than vim macros and regex, but it's easy to use because 1) there are fewer shortcuts to remember, 2) there's no need to think too much, like figuring out a regex for simple tasks, and 3) it gives instant feedback.

Video showing this: https://youtu.be/lhFNWTAIzOI?t=28


skill issue tbh


"Emcacs"...I like it.


Yeah... typing fast and from the phone has this problem


There's a reason people use visual editors and not ed.


Wouldn’t remotely compare regex to ed.

I also haven’t found a use for multiple cursors.

Writing a regex is quick and easy.


