Hacker News: reichstein's comments

... thus falling into the pit of failure, where every way out is just a little too far away.


When I started using Linux, I didn't do so because I disliked Windows so much; I was just an insatiably curious nerd.

But since then, each new version of Windows has made me more and more grateful for not having to deal with that dumpster fire on my personal devices.

The saddest part to me is that I have the strong impression it wouldn't take that much work to turn Windows into a much better system. But for whatever reason, Microsoft is not interested in making that happen. Maybe they are incapable of doing so. But the system itself is not the reason.


A semantic version of 0.15 (major version zero) means it's still in development. It's not supposed to be stable. Going from 0.14 to 0.15 allows breaking changes.

Try making a similar change between version 5.0 and 6.0, with hundreds of thousands of existing users, programs, packages and frameworks that all have to be updated. (Yes, also the users who have to learn the new thing.)
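The 0.x rule described above can be sketched as a small predicate (Python; `may_break` is a hypothetical helper name, not from any semver library):

```python
# Sketch of the semver compatibility rule: below 1.0, a minor bump
# (0.14 -> 0.15) is allowed to break; at or above 1.0, only a major
# bump (5.0 -> 6.0) is.
def may_break(old, new):
    old_major, old_minor = old
    new_major, new_minor = new
    if old_major == 0:
        # Pre-1.0: any major or minor change may break compatibility.
        return new_major != old_major or new_minor != old_minor
    # Post-1.0: only a major change may break compatibility.
    return new_major != old_major

print(may_break((0, 14), (0, 15)))  # True
print(may_break((5, 0), (5, 1)))    # False
print(may_break((5, 0), (6, 0)))    # True
```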


So "cooperative multitasking is not preemptive multitasking".

The typical use of the word "asynchronous" means that the _language is single-threaded_ with cooperative multitasking (yield points) and event based, and external computations may run concurrently, instead of blocking, and will report result(s) as events.

There is no point in having asynchrony in a multithreaded or concurrent execution model: you can use blocking I/O and still have progress in the program while that one execution thread is blocked. Then you don't need the yield points to be explicit.
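A minimal Python asyncio sketch of that single-threaded, cooperative model: each `await` is an explicit yield point, and the event loop interleaves tasks at exactly those points.

```python
import asyncio

order = []

async def worker(name, delay):
    order.append(f"{name} start")
    # Explicit yield point: control returns to the event loop here,
    # and the "blocking" wait runs concurrently instead of blocking.
    await asyncio.sleep(delay)
    order.append(f"{name} done")

async def main():
    # Both workers make progress on a single thread, interleaved
    # only at the yield points.
    await asyncio.gather(worker("a", 0.02), worker("b", 0.01))

asyncio.run(main())
print(order)  # ['a start', 'b start', 'b done', 'a done']
```

"b" finishes first even though it started second, because "a" yielded at its `await` instead of blocking the thread.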


While this is indeed the most common use, I'll bring as counter-examples Rust (or C#, or F#, or OCaml 5+) that supports both OS threads and async. OS threads are good for CPU-bound tasks, async for IO-bound tasks.

The main benefit of having async (or Go-style M:N scheduling) is that you can afford to launch as many tasks/fibers/goroutines/... as you want, as long as you have RAM. If you're using OS threads, you need to pool them responsibly to avoid choking your CPU with context-switches, running out of OS threads, running out of RAM, etc. – hardly impossible, but if you're doing more than just I/O, you can run into interesting deadlocks.
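A quick illustration of the "tasks are cheap" point, using Python's asyncio: 10,000 concurrent tasks on one OS thread, each costing only a small coroutine object rather than a full thread stack.

```python
import asyncio

async def tiny_task(i):
    # Yield to the scheduler once, then return.
    await asyncio.sleep(0)
    return i

async def main():
    # Launching 10,000 OS threads would need pooling; 10,000 tasks
    # on the event loop is routine.
    results = await asyncio.gather(*(tiny_task(i) for i in range(10_000)))
    return sum(results)

total = asyncio.run(main())
print(total)  # 49995000
```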


> The main benefit of having async (or Go-style M:N scheduling) is that you can afford to launch as many tasks/fibers/goroutines/... as you want

Some have argued that the real solution to this problem is to "just" fix OS threads. Rumor has it Google has done exactly this, but keeps it close to their chest:

https://www.youtube.com/watch?v=KXuZi9aeGTw

https://lwn.net/Articles/879398/

Somewhat related and also by Google is WebAssembly Promise Integration, which converts blocking code into non-blocking code without requiring language support:

https://v8.dev/blog/jspi

I see a possible future where the "async/await" idea simply fades away outside of niche use-cases.


I think that Promise Integration will have a role, but I suspect that it can quickly destroy the performance of wasm code in unexpected ways.

As for fixing OS threads, indeed, this may very well change the ecosystem, but many developers expect their code to be cross-platform, so it might take a while before there is a solution that works everywhere.


Aka "Every beef anyone has ever had with Nvidia in one outrage-friendly article."

If you want to hate on Nvidia, there'll be something for you in there.

An entire section on 12vhpwr connectors, with no mention of 12V-2x6.

A lot of "OMG Monopoly" and "why won't people buy AMD" without considering that maybe ... AMD cards are not considered by the general public to be as good _where it counts_. (Like performance per watt, aka heat.) Maybe it's all perception, but then AMD should work on that perception. If you want the cooler CPU/GPU, the perception is that that's Intel/Nvidia. That's reason enough for me, and many others.

Availability isn't great, I'll admit that, if you don't want to settle for a 5060.


JSON is a text format. A parser must recognize the text `2` as a valid production of the JSON number grammar.

Converting that text to _any_ kind of numerical value is outside the scope of the specification. (At least the JSON.org specification, the RFC tries to say more.)

As a textual format, when you use it for data interchange between different platforms, you should ensure that the endpoints agree on the _interpretation_, otherwise they won't see the same data.

Again outside of the scope of the JSON specification.
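Python's standard `json` module illustrates the point: the text is fixed, but the numeric interpretation is a parser-level choice. The `parse_int` hook below mimics an endpoint (such as a JavaScript engine) that reads every number as an IEEE-754 double.

```python
import json

text = '{"n": 2, "big": 10000000000000000000000001}'

# One interpretation: arbitrary-precision Python ints, no data loss.
default = json.loads(text)

# Another interpretation of the *same text*: everything is a double,
# so "big" is silently rounded.
as_double = json.loads(text, parse_int=float)

print(default["big"])            # 10000000000000000000000001 (exact)
print(as_double["big"] == 1e25)  # True: precision lost in this interpretation
```

Both parsers accepted the same valid JSON text; they just disagree on what the number _means_, which is exactly the interoperability hazard described above.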


The more a format restricts, the more useful it is. E.g. if a format allows pretty much anything and it's up to parsers to accept or reject it, we may as well say "any text file" (or even "any data file") -- it would allow for anything.

Similarly to a "schema-less" DBMS -- you will still have a schema, it will just be in your application code, not enforced by the DBMS.

JSON is a nice balance between convenience and restrictions, but it's still a compromise.


A JSON parser has to check if a numeric value is actually numeric - the JSON {"a" : 123456789} is valid, but {"a" : 12345678f} is not. Per the RFC, a standards-compliant JSON parser can also refuse {"a": 123456789} if it considers the number too large.
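For example, with Python's standard `json` parser:

```python
import json

# The number grammar accepts a plain integer...
ok = json.loads('{"a": 123456789}')

# ...but a stray suffix makes the text invalid JSON, and a conforming
# parser must reject it.
try:
    json.loads('{"a": 12345678f}')
    rejected = False
except json.JSONDecodeError:
    rejected = True

print(ok, rejected)  # {'a': 123456789} True
```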


Your irony detector may need calibration.

Or mine does.


The JSON spec only defines the JSON text format. It doesn't say what the text means. There are obvious interpretations, but every program that reads or writes JSON can decide what it does with it.

On the other hand, the thing that makes JSON actually useful is the interoperability: that JSON written by one program, on one platform, can be read by another program on another platform. Those programs have to agree on a protocol, on what the JSON text must satisfy and what it means. It's usually not considered valuable to require object properties to be in a specific order, so they don't. But they could.
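A small Python illustration of that last point: the common interpretation ignores property order, even though the texts themselves differ.

```python
import json

left = json.loads('{"x": 1, "y": 2}')
right = json.loads('{"y": 2, "x": 1}')

# Equal as values under this reader's (typical) interpretation...
same_value = (left == right)
# ...even though the raw texts are different. A protocol could instead
# demand a fixed order and compare the text itself.
same_text = ('{"x": 1, "y": 2}' == '{"y": 2, "x": 1}')

print(same_value, same_text)  # True False
```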


So you're saying that "an ask" is "an order" or "a demand", rather than "a request". Why not use those words?

I don't understand what "an ask" means. I don't know what the speaker intended with it, and I wouldn't know how a receiver would understand it.

It's just communicating badly, using words with no fixed shared meaning. Or somebody too afraid of being confrontational to phrase a demand as an actual demand.

And "learnings" is just somebody too lazy to say "lessons learned".


If it actually is stronger than a simple request, I could see saying "an ask" as a way of demanding using softer language. If your boss were to say "I demand ...", everybody is going to say they're a demanding jerk, but if they come to you with "an ask", that could carry the weight of the demand without sounding...demanding.

That said, I've never considered "an ask" to have any stronger meaning than a request. If I hear "an ask", I'm assuming I can push back the same amount I would to any other request.


I'm personally convinced that the reason the conditional operator is called "the ternary operator" is that the ANSI C Programming Language book contains the phrase "the ternary operator, ?:", and a lot of readers didn't know what "ternary" meant and thought it was a name.


This sounds reasonable to me. I used to think you could guess what someone's first programming language was based on whether they wrote "methods" or "procedures". :D


Agree, other than that I wouldn't use "2-arity". Maybe "2-ary", but that's just "binary" written confusingly. It works for "n-ary".

I'd rather say that a binary operator _has_ an arity of 2 or talk about the arity _of_ an operator.

