
> The RISC-V ISA is already an industry standard and the next step is impartial recognition from a trusted international organization.

I'm confused. Isn't RISC-V International itself a trusted international organization? It's hard to see how an organization that standardizes screws and plugs could possibly be qualified to develop ISAs.


ISO defines standards for much more than bolts and plugs. A few examples include: the C++ ISO standard, IT security standards and workplace safety standards, and that’s a small subset of what they do.

They develop a well defined standard, not the technologies mentioned in the standard. So yes, they’re qualified.


But isn't RISC-V just a standard? ISO will decide what is RISC-V and what isn't. Then its complicated process will become an obstacle to innovation.


C++ "standard" sounds more like an example of why technology should avoid standards


It is certainly an example of why SC22 is a bad idea

The "C++ Standards Committee" is Working Group #21 of Sub Committee #22, of the Joint Technical Committee #1 between ISO and the IEC.

It is completely the wrong shape of organization for this work: a large, unwieldy bureaucracy created so that sovereign entities could somehow agree on things. This works pretty well for ISO 216 (the A-series paper sizes), and while it isn't very productive for something like ISO 26262 (safety), it can't do much harm there. For the deeply technical work of a programming language it's hopeless.

The IETF shows a much better way to develop standards for technology.


The fact that the C++ committee is technically a subgroup of a subgroup of a subgroup is among the least of the issues of ISO for standardization.

The main problem is that ISO is a net negative value-add for standardization. At one point, the ISO editor came back and said "you need to change the standard because you used periods instead of commas for the decimal point, a violation of ISO rules." Small wonder there's muttering about taking C and C++ out of ISO.


I would argue that the structural problem is an underlying cause. So it won't be the proximate cause, but when you dig deeper, when you keep asking "But why?" like a five-year-old, the answer is ultimately ISO's structure and nothing to do with Bjarne's language in particular.

Hence the concern for the non-language but still deeply technical RISC-V standardization.


The Titanic is not an example of why building ships should be avoided. C++ is a great example, yes, of the damage ambitious and egotistical personalities can inflict when cooperation is necessary.


Say what you will about C++, but it is undoubtedly one of the most successful and influential programming languages in history.


By which metric?

C, Java, Rust, JS, C# do exist


> influential

It's certainly a cautionary tale


If we are taking cheap potshots, there's a standard for standards: https://xkcd.com/927/ or in the proposed XKCD URI form xkcd://927


> It's hard to see how an organization that standardizes screws and plugs could possibly be qualified to develop ISAs.

You, my friend, have not delved into the rabbit hole that is standardisation organizations.

ISO and the IEC go so far beyond bolts and screws that it's frankly dizzying how far-reaching their fingers are in our society.

As for why, the top comment explained it well: there is a movement to block RISC-V adoption in the US for geopolitical shenanigans. Standardisation by a trusted authority may help.


FTA: “Since 1987, JTC 1 has overseen the standardization of many foundational IT standards, including JPEG, MPEG, and the C and C++ programming languages”

Compared to ISO, RISC-V International has almost no experience maintaining standards.

Even if you think that isn't valuable, the reality is that there is prestige/trustworthiness associated with an "ISO standard" sticker, similar to how a "published in prestigious journal J" sticker gives a scientific paper prestige/trustworthiness.


I've never heard of SPARK. What advantages does it have compared to Lean?


It's basically a subset of Ada, so you can use it anywhere you'd use Ada. I don't think Lean is at a point where it could replace Ada.


In a project, can you develop one module in Ada and another in SPARK and compile them together? So you could have safety-critical code in one module and regular Ada code in other modules?


Yes, you can mix and match the two. This lets you do things like build a library with SPARK for some critical portion where you want SPARK's guarantees and can accept its limitations, and incorporate it into an application built with the rest of Ada.


Oh lovely - need to put Ada on the learning plan. Formal languages were a bit of a drag because you needed to maintain a separate "specification" copy of your critical code.

Going through Ada sample programs and surprised I can grok stuff without knowing anything about the language. Wondering why it never took off in the mainstream software world. Sure, it's a bit verbose, but so are Java and the macOS APIs.


I think it was at least the slow and expensive compilers back in the day, the defense and aerospace stigma, and in more recent times a common misperception that it's closed source. And it's never been the cool thing.

And yes you’re right, it’s a very good language.


https://learn.adacore.com/ - I'd start here, good set of tutorials on the language including some comparative ones. It won't teach you everything you might need to know, but it's a free and good starting point.


Lean the math prover? What does that have to do with Ada/Rust?


> Lean the math prover? What does that have to do with Ada/Rust?

I'm going to be rude, but there are 4 sentences in this thread and you appear to have not read two of them.

The comment I responded to:

>> I've never heard of SPARK. What advantages does it have compared to Lean? [emphasis added]

The "It" in my response refers to SPARK.


There was no need to be rude.


They have different definitions of failure. In Lean, a failure is calculating the wrong thing. In SPARK, a failure is failing to calculate at all, because of a memory issue or something like that. As far as I've seen, SPARK encourages ephemeral data structures and effectful computations. Lean is less familiar to me, but I've got the impression that it is about correct computation assuming infinite memory and stack, and that value-centered computations are encouraged. SPARK did not have pointers for a long time. Then SPARK got pointers, but only unique ones. Lean has shared pointers to immutable data structures, and infinitely recursive data structures.

Yet another place I have found provable code is Eiffel. There is a "proven" doubly linked list in Eiffel: something not possible in SPARK, as it goes against unique pointers, and not possible in Lean, as it goes against immutability.
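To make the Lean side of that distinction concrete, here is a minimal sketch (assuming Lean 4; `double` and `double_add` are hypothetical names). The thing you prove is that the computation produces the right value, and the proof is checked at compile time:

```lean
-- A function together with a compile-time proof about its result.
def double (n : Nat) : Nat := 2 * n

-- If this theorem did not hold, the file would simply not compile:
-- in Lean, "failure" means computing the wrong value.
theorem double_add (a b : Nat) : double (a + b) = double a + double b := by
  simp only [double]
  omega
```

Nothing here says anything about memory or stack usage at runtime, which is exactly the kind of failure SPARK is concerned with.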


It depends on who you are.

For implementers of third-party compilers, researchers of the Rust programming language, and programmers who write unsafe code, this is indeed a problem. It's bad.

For the designers of Rust, "no formal specification" allows them to make changes as long as it is not breaking. It's good.


Medical or military applications often require the software stack/tooling to be certified following certain rules. I know most of these certifications are bogus, but how is that handled with Rust?


Ferrocene provides a certified compiler (based on a spec they've written documenting how it behaves) which is usable for many use cases, but it obviously depends on what exactly your specific domain needs.


Not implementing the Zbb extension but implementing big-endian. That sounds like doing it the hard way.


They are not rejecting Safe C++; they are rejecting memory safety. The majority of them believe that memory safety is just hype, and the minority who know it's a problem don't want to restrict how they code. If the code runs, it's fine. If it doesn't, the coder running is fine too.


The principles document that was accepted feels very targeted at Safe C++ specifically. It’s fair to say they rejected it.


I work on a Swift/iOS app that wraps a C++ library.

90+% of our crashes are hard-to-diagnose C++ crashes. Our engineers are smart and hardworking, but they throw their hands up at this.

Please tell me my options aren’t limited to “please be better at programming”…?


Does iOS let you run it in another process? That's a common technique to isolate your app from crashy 3rd party components. This can work if you don't pass it untrusted data. If there's untrusted data coming in and you give it to a crashy c++ component, you're just asking to be pwned.

For containing legacy C++ codebases https://fil-c.org/ looks promising as well, and could fit the bill better if the data was user supplied. It's been discussed on HN many times, most recently here https://news.ycombinator.com/item?id=45133938 .. but currently doesn't support iOS.


> Our engineers are smart and hardworking but they throw their hands up at this.

Since you don't think this is a skill issue, shouldn't you support Safe C++, which eliminates unsafety rather than just turning a blind eye to it?

> Please tell me my options aren’t limited to “please be better at programming”…?

You can only use Valgrind/ASan, stress testing, and rewriting in other languages to pay off the technical debt. Even if a god points out every bug in your code, you'd still need to put in great effort to fix them. If you don't pay for it while coding, then you must pay for it after coding. There are no shortcuts.


Have you tried enabling asan? It’s not really the same kind of language guarantees but it does catch a lot of the same errors.

In general I think static analysis is a crutch for C++ to claim safety but it is still a very useful tool that devs should use in development.


Sorry, but yes: when your app crashes, there can be two issues. Either the C++ library you use is shit, or your engineers don't understand the underlying concepts of allocating/deallocating things, because with Swift they never had to learn them. With Rust the code just wouldn't compile at all; that's the only difference.


They published a paper for it, which includes more details. https://www.usenix.org/conference/osdi24/presentation/chen-h...


Wait, doesn't that also pretty obviously say it's not a microkernel at all? They use "class 1 mechanism-enforced isolation", which isn't address space isolation per the paper, and thus they solved IPC performance by not having any IPC. It's monolithic.


Well, it is clear that they have a new definition of a microkernel, since there are now new technologies that achieve isolation without compromising performance. Microkernel vs monolithic kernel is more marketing rhetoric than a technical distinction.


> Microkernel vs monolithic kernel is more of a marketing rhetoric than technical differences.

It's not; these terms have meanings, and they can't just make up something different and pretend it's a microkernel when it isn't. That doesn't make it bad, it's just not what they are claiming. It also obviously isn't IPC, despite their continued use of the term throughout.

Also, their isolation says it's ARM Watchpoint, which is debugger support? Maybe they are trapping unexpected address writes, but that isn't doing much for restricting privileges. It also lists Intel PKS, which Linux already supports/uses as well...


You can read the slide deck and the paper. They are pretty transparent about what they do. The whole point is how they tried to adapt microkernel concepts while still retaining the required performance.

They address at length why they don't use traditional IPC for the most heavily exercised parts of the kernel.

It being or not being a microkernel is not in itself a very interesting take. What's interesting is how useful or not what they do is.


This is actually quite easy to achieve, as long as you cannot realize your own mistakes.

