
The big failure is that we stick with languages designed for computers and not people.

A C (or Rust) kernel is a heroic effort that takes man-years to complete. A Lisp one is an end-of-semester project that everyone builds for their make-believe machine (also implemented in Lisp).



A toy C kernel is also an end of semester project.

What makes real kernels take man-years to complete is the hardware support; the majority of Linux source code is drivers - the endless tables of hardware register definitions, opcodes, and state-machine handling.
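
For a rough flavor of what those tables look like (a minimal sketch in C; the register names, offsets, and bit values are made up for illustration, not taken from any real datasheet), a large share of driver code is just definitions like this plus the state machine that pokes them:

    /* Hypothetical NIC register map: names, offsets, and bit values are
       invented for illustration, not from any real device's datasheet. */
    #define NIC_REG_CTRL     0x0000u   /* control register */
    #define NIC_REG_STATUS   0x0004u   /* status register */
    #define NIC_REG_TX_RING  0x0010u   /* TX descriptor ring base address */
    #define NIC_REG_RX_RING  0x0018u   /* RX descriptor ring base address */

    #define NIC_CTRL_RESET   (1u << 0) /* write 1 to reset the device */
    #define NIC_CTRL_ENABLE  (1u << 1) /* enable the TX/RX engines */
    #define NIC_STATUS_LINK  (1u << 2) /* link is up */

Multiply that by hundreds of registers, dozens of hardware revisions, and thousands of devices, and the man-years add up.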


But couldn't we do something about that as well? Couldn't drivers be built on some abstraction that would simplify some work?

I have zero knowledge about this area though


If you want multiplatform drivers that you can use to plug your device into computers of any architecture, there are abstractions for that. IMO, it's easier to write 3 or 4 versions of your driver than to use them, but they exist and some people really like them.

If you mean standard logical interfaces, those exist. Also, hardware interfaces are highly standardized.

The problem is that the drivers are exactly the code you write to make all the abstractions fit each other. So there is very little you can do to abstract them away.


> Couldn't drivers be built on some abstraction that would simplify some work?

That's like asking the alchemist to publicly publish their manuscripts.

In an ideal world, yes. However, we don't live there. Until a few years ago, GPU drivers and other drivers were guarded more carefully than fucking Fort Knox.

Once you publish your drivers, you reveal a part of the inner workings of your hardware, and that's a no-no for companies.

Plus, what the other commenter said - getting hardware guys to design for a common driver interface is probably not gonna get traction.


Somebody somewhere has to do the work of making sure everything works together. Right now that's the OS. You're proposing moving that work to a standards committee. Either way, the problem persists. You either do that or go the Apple way, which is to vertically integrate the whole stack from hardware to software, but then you have Apple's problem, which is lower hardware compatibility.


I'm sure the hardware folks will be lining up to cooperate with the annoying software engineers giving them abstract constraints lol


If you could get every hardware manufacturer in the world on board with such an interface, perhaps. But even if 90% of them were on board, there would be edge cases that people and companies would demand support for, and there goes your standard.

Drivers exist to ultimately turn actual hardware circuits off and on, often for highly specialized and performance-critical applications, and are often written based on the requirements of a circuit diagram. So any unified driver platform would also involve unified hardware standards, likely to the detriment of performance in some applications, and good luck telling electrical engineers around the world to design circuits to a certain standard so the kernel developers can have it easier.
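
Concretely (a toy sketch in C; the base address, register offset, and pin number are invented, not from any real board), "turning a circuit on" from a driver often boils down to a volatile write into a memory-mapped register, exactly as the circuit diagram dictates:

    #include <stdint.h>

    /* Hypothetical memory-mapped GPIO block; all addresses and bit
       positions here are made up for illustration only. */
    #define GPIO_BASE 0x40020000u
    #define GPIO_OUT  (*(volatile uint32_t *)(GPIO_BASE + 0x14u))

    static void set_circuit_power(int on)
    {
        if (on)
            GPIO_OUT |=  (1u << 5);   /* drive pin 5 high: circuit on */
        else
            GPIO_OUT &= ~(1u << 5);   /* drive pin 5 low: circuit off */
    }

Which pin, which bit, and what "on" even means all come from that one device's schematic, which is part of why this code resists generic abstraction.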


If you are ok with the performance you can obtain from an FPGA, you could do it now. Look at FPGA hardware-software co-design and related stuff.

If you mean, in general, for the hardware that already exists, that's what the HAL (Hardware Abstraction Layer) of the operating system tries to do.
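
For a flavor of what a HAL-style boundary looks like (a hand-wavy sketch in C; the struct and hook names are invented, not any particular kernel's actual interface), the OS defines a table of operations and each driver fills it in:

    #include <stddef.h>
    #include <stdint.h>

    /* Hypothetical HAL-style interface for a block device. */
    struct block_dev_ops {
        int  (*init)(void *hw);
        int  (*read)(void *hw, uint64_t lba, void *buf, size_t nblocks);
        int  (*write)(void *hw, uint64_t lba, const void *buf, size_t nblocks);
        void (*shutdown)(void *hw);
    };

    /* Each vendor's driver supplies its own implementations of these hooks;
       the rest of the kernel only ever calls through the struct. */

The catch, as noted above, is that the code behind each hook is still device-specific: the abstraction relocates the work, it doesn't remove it.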


FWIW, Android has a HAL, which is just this.


It is unfortunate that this field underestimates the importance of the "people" part in favor of the "computer" part. There's definitely a balance to be struck. I do believe that languages designed for computers have done a pretty decent job of adopting features geared more towards the "people" part of the equation. Unfortunately, programmers are very tribal and very eager to toss the wine out with the cork when it comes to ideas that might help but that they've misapplied.


How is Lisp performance these days? It was around in the 70’s, right? So I guess the overhead couldn’t be too bad!


Considering how much of modern software is written in JavaScript and Python, I have a hard time seeing how Lisp overhead would pose much of a problem. Erlang was good enough for telecom equipment 30 years ago, so that also gives us a data point.

If we entertain the idea that the von Neumann architecture may be a local maximum, then we can do even better; Lisp machines had specialized instructions for Lisp, which let it run at performance competitive with a normal programming language.

The issue doesn't seem to be performance; it seems to still come down to being too eccentric for a lot of use cases, and difficult for many humans to grasp.

- https://en.wikipedia.org/wiki/Erlang_(programming_language)

- https://en.wikipedia.org/wiki/Lisp_machine


> The issue doesn't seem to be performance; it seems to still come down to being too eccentric for a lot of use cases, and difficult for many humans to grasp.

Lisp is not too difficult to grasp, it's that everyone suffers from infix operator brain damage inflicted in childhood. We are in the same place Europe was in 1300. Arabic numerals are here and clearly superior.

But how do we know we can trust them? After all DCCCLXXIX is so much clearer than 879 [0].

Once everyone who is wedded to infix notation is dead, our great-grandchildren will wonder what made so many people waste so much time implementing towers of abstraction to accept and render a notation that only made sense for quill and parchment.

[0] https://lispcookbook.github.io/cl-cookbook/numbers.html#work...


It's not about prefix notation, it's that the fully uniform syntax has legitimate ergonomic problems for editing, human reading, and static analysis. Sexprs are better for computers than for humans in a lot of ways.


Only when not using one of the many Lisp editors that have existed since the Lisp Machines (Symbolics, TI) and Interlisp-D (Xerox), and that survive today in Emacs SLIME, Cursive, LispWorks, Allegro Common Lisp, Racket, and VS Code with Calva.


Not true at all IMO. Reading code is reading code regardless of whether you have a fancy IDE or not.

S-expressions are indisputably harder to learn to read. Most languages have some flexibility in how you can format your code before it becomes unreadable or confusing. C has some, Lua has some, Ruby has some, and Python has maybe fewer but only because you're more tightly constrained by the whitespace syntax. Sexpr family languages meanwhile rely heavily on very very specific indentation structure to just make the code intelligible, let alone actually readable. It's not uncommon to see things like ))))))))) at the end of a paragraph of code. Yes, you can learn to see past it, but it's there and it's an acquired skill that simply isn't necessary for other syntax styles.

And moreover, the attitude in the Lisp community that you need an IDE kind of illustrates my point.

To write a Python script you can pop open literally any text editor and have a decent time just banging out your code. This can scale up to 100s or even 1000s of LoC.

You can do that with Lisp or Scheme too, but it's harder, and the stacks of parentheses can get painful even if you know what you're doing, at which point you really start to benefit from a paren matcher or something more powerful like Paredit.

You don't really need a full-powered IDE for Lisp any more than you need one for Python. In terms of runtime-based code analysis, Python and Ruby are about on par with Lisp, especially if you use a commercial IDE like JetBrains'. IDEs can and do keep a running copy of any of those interpreters in memory and dynamically pull up docstrings, look for call sites, rename methods, run a REPL, etc. Hot-reloading is almost as sketchy in Lisp as it is in Python; it's just more culturally acceptable to do it in Lisp.

The difference is that Python and Ruby syntax is not uniform and is therefore much easier to work with using static analysis tools. There's a middle ground between "dumb code editor" and "full-power IDE" where Python and Ruby can exist in an editor like Neovim, and a user can be surprisingly productive without any intelligent completion, or using some clunky open-source LSP integration developed by some 22-year-old in his spare time. With Lisp you don't have as much middle ground of tooling, precisely because it's harder to write useful tooling for it without a running image. And this is even more painful with Scheme than with Lisp, because Scheme dialects are often not equipped to do anything like that.

All that is to say: s-exprs are hard for humans to deal with. They aren't meant for humans to read and write. They never were. And that's OK! I love Lisp and Scheme (especially Gauche). It's just wrong to assert that everyone is brain-damaged and that's why they don't use Lisp.


It surprised me to learn that John McCarthy never intended S-expressions to be the human-facing syntax of LISP.

http://jmc.stanford.edu/articles/lisp/lisp.pdf


"One can even conjecture that LISP owes its survival specifically to the fact that its programs are lists, which everyone, including me, has regarded as a disadvantage"

Not the first time someone didn't realize what they had.


Programming without an IDE in the 21st century is like making fire with stones and wooden sticks.

A required skill for survival in the woods, not something to do daily.

This point of view applies to any programming language.

By the way, the two languages you use as examples are decades behind Lisp in GC technology and native code generation.


I view code in many contexts though: diffs in emails, code snippets on web pages, GitHub's web UI. There are countless ways in which I need to read a piece of code outside of my preferred editor. And it is nicer, in my opinion, to read languages that have visually distinct parts to them. I'm sure it is because I'm used to it, but it really makes it hard to switch to a language that looks so uniform and requires additional tools outside of my brain to take a look at it.


You are free to criticize or look down on the way other people work. That doesn't change the silliness of the assertion that infix operator brain damage is the reason we aren't all using Lisp right now. It's totally valid to argue that we are all missing out on the benefits of superpowered interpreter-compilers like SBCL and Chez. But prefix math operators are only a small part of the reason why.


> S-expressions are indisputably harder to learn to read.

Has this been studied? This is a very strong claim to make without any references.

What if you took two groups of software developers: one with 5-10 years of experience in a popular language of choice, let's say C, and one of people who write Lisp professionally (maybe Clojure? Common Lisp? Academics who work with Scheme/Racket?), and then had scientists who know how to evaluate cognitive effort measure the difference in reading difficulty?


I see you are debating Lisp's ergonomics, but that doesn't dismiss the paradigm. Erlang, Haskell, and Prolog have far better syntax readability, so I don't see this as really relevant in discussing the alternative to von Neumann.

There are other ergonomics issues beyond syntax that pose problems for adoption (Haskell in production has become something of a running gag). Moving the paradigm into a mixed language alongside procedural code seems to have helped its adoption a lot in recent years (Swift, Rust, Python, C++).


I am responding to the assertion that the reason we don't all use Lisp is because we all have brain damage. My claim is that there are broader ergonomic issues with the language family. You could argue that maybe the system architecture and execution model of the Lisp machines should be debated separately from its syntax, but I am responding to an argument about its syntax.


Well said. I have always struggled with the S-expression syntax, though I have wanted to learn Lisp for a long time.


Depends on the Lisp, but Clojure is in the same order of magnitude as Java for the most part, and SBCL Common Lisp is one of the fastest GC languages.


Both Lisps (Common and Scheme) are garbage-collected, so they're in the 'slow as molasses' group of languages (which covers pretty much everything outside of C, C++, Rust, Fortran, Pascal, Ada, and assembly); but among the 'slow as molasses' group, Common Lisp (at least SBCL, which may be the most prolific implementation) is blazingly, scorchingly, stupendously fast. If you know how to use it it's a bat out of hell outrunning greased lightning.

On the Scheme side of things Chez is pretty fast. It's not 'I've gained a whole new level of respect for the people who engineered my CPU' levels fast, but it's still pretty decent.


In the arbitrary collection of programs in https://benchmarksgame-team.pages.debian.net/benchmarksgame/... , SBCL runs about 5x slower than C and Racket about 15x slower than C. Chez Scheme should be in between, something like 10x slower than C. (Anyway, some C programs use very advanced low-level tricks, so the difference for realistic programs may be smaller.)

It's a pity they don't run benchmarks for Clojure, and I have no idea how to make up a number.


> Chez Scheme should be in between

? The default implementation as of Racket version 8.0 uses Chez Scheme as its core compiler and runtime system.

https://docs.racket-lang.org/reference/implementations.html

> some C programs use very advanced low level tricks

* possible hand-written vector instructions or "unsafe" or naked ffi are flagged

https://benchmarksgame-team.pages.debian.net/benchmarksgame/...



