I would use Helix in the terminal if it supported Emacs keybindings, tbh, but I don't want to relearn another set of keybindings. Still, I'd be interested in what it becomes.
The keybindings are one of the major selling points of Helix. It uses Kakoune-style object-verb actions (like Vim's visual mode) by default, with multiple selections. If you're comfortable with Emacs bindings, then you're better off with a lightweight Emacs alternative.
As an evil user, this is potentially huge to me.
Emacs happens to have the best balance of relatively easy-to-set-up configuration, a powerful package ecosystem, and proper hackability of all the editors I've found.
It's not very fast though and has some conventions that feel archaic.
I'm also an Emacs evil user (Neovim too). I think the Kakoune editing model has the potential to surpass even the vim/evil model. Its default object-verb order makes it easy to preview and change the selection before proceeding with the action. That's not possible in Vim's normal mode. Vim does have visual mode, but the Kakoune model also uses multiple cursors, making it more powerful. I really wanted to try the Kakoune model in Emacs, but the package needs a bit more work.
Another issue I have with evil is that it changes a lot of Emacs' default bindings, making it hard to do certain tasks. Some operations simply don't work at all. The kakoune package doesn't do this - at least not in insert mode.
> It's not very fast though and has some conventions that feel archaic
Sadly, multithreading is an afterthought for Emacs. There is just too much legacy code to make it easy. The language design is also from another era: the default dynamic binding feels very alien when almost every language in common use is lexically scoped by default. Scheme, on the other hand, feels very modern thanks to very careful language design, but the effort to switch Emacs to Scheme never gained much steam.
There is one aspect where none of the new editors (hx with Scheme and nvim with Lua) can match Emacs: Emacs is written almost entirely in Elisp, with the C parts acting merely as libraries. The extension language of the other editors is just an addition to their core editing code.
> Sadly, multithreading is an afterthought for Emacs
It is, but it's usable. I'm actually amazed that, even after three major versions, the built-in threading is not used by the community.
Yes, the threads currently are not usable for number crunching in the background. And yes, there are bugs, and trying to do many things from the background thread doesn't work, sometimes in unexpected ways. You can still block the main thread from the background thread since some things block the event loop, no matter where they were started.
But, the threads do give you independent control flows. Whatever you cannot do in the background, you can offload to the main thread with a timer and a queue of lambdas.
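The "timer and a queue of lambdas" idea is language-agnostic; here's a minimal sketch of the same pattern in Rust rather than Elisp (all names are mine, and a channel stands in for the timer-driven queue):

```rust
use std::sync::mpsc;
use std::thread;

// A background thread enqueues closures, and the main control flow
// drains and runs them - in Emacs, a timer would trigger the drain.
fn drain_jobs() -> Vec<i32> {
    let (tx, rx) = mpsc::channel::<Box<dyn FnOnce() -> i32 + Send>>();

    let producer = thread::spawn(move || {
        // Work that must happen on the main thread gets packaged up...
        tx.send(Box::new(|| 1)).unwrap();
        tx.send(Box::new(|| 2)).unwrap();
    });
    producer.join().unwrap();

    // ...and the main control flow runs the queued jobs at its leisure.
    let mut results = Vec::new();
    while let Ok(job) = rx.try_recv() {
        results.push(job());
    }
    results
}

fn main() {
    println!("{:?}", drain_jobs()); // [1, 2]
}
```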
The built-in threads are very, very bare-bones - it's around 15 functions, for threads, mutexes, and condition variables. They are very limited by their "mostly cooperative" nature. However, with a bit of sugar, they are usable for at least one thing: async processes and network communication.
In a background thread, you can "block" to wait for a child process to do something. It's natural and requires no macrology (async.el...). The same is true for network communication. You can block and wait for a response while the rest of Emacs does whatever. With just two functions, you can write non-blocking code as if you used `call-process`. Sequential actions - call this, wait for it to finish, call that, wait for it to finish, etc. - can now be coded in a sequential way, without having to worry about callbacks, sentinels, and the poor man's FSM implementation that invariably appears in Elisp that doesn't use threads.
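As a rough cross-language illustration (Rust, not Elisp; `run_steps` is a made-up name), blocking on child processes sequentially inside a background thread reads as ordinary straight-line code:

```rust
use std::process::Command;
use std::thread;

// Run two child processes one after another in a background thread,
// blocking only that thread, and return their combined output.
fn run_steps() -> String {
    let worker = thread::spawn(|| {
        // Call this, wait for it to finish...
        let first = Command::new("echo").arg("step-1").output().expect("spawn failed");
        // ...then call that, wait for it to finish - plain sequential
        // code, no callbacks, sentinels, or hand-rolled state machines.
        let second = Command::new("echo").arg("step-2").output().expect("spawn failed");
        format!(
            "{} {}",
            String::from_utf8_lossy(&first.stdout).trim(),
            String::from_utf8_lossy(&second.stdout).trim()
        )
    });
    // The main control flow is free to do other work here.
    worker.join().expect("worker panicked")
}

fn main() {
    println!("{}", run_steps());
}
```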
The threads built into Emacs, currently, are closer to green threads or coroutines, functionally, than to OS-level threads. But that's still a huge help in a bunch of important and pervasive scenarios. It's really strange that nobody seems to realize this.
With threads (as they are), the continuation-passing-style compiler macro (in generator.el), and dynamic modules (for actual parallelism where needed), Emacs now has everything it needs to become non-blocking by default. Of course, that would entail rewriting everything on top of these abstractions, which is unrealistic - but for new code and packages? I think we're just one package (along the lines of dash, s, etc.) away from convenient concurrency and parallelism in Emacs. The problem, of course, is that someone needs to design and write that package...
> Yes, the threads currently are not usable for number crunching in the background. And yes, there are bugs, and trying to do many things from the background thread doesn't work, sometimes in unexpected ways. You can still block the main thread from the background thread since some things block the event loop, no matter where they were started.
Many dynamic languages have bodged on threading over the past decade. (I don't think dynamic languages are intrinsically unthreadable or anything, but their interpreters were pretty deeply based on not having threads.) What that has shown is that 90%-effective threading is useless, and 99%-effective threading is superficially appealing but always, always blows up at any sort of scale.
You really need threads that don't come with all those caveats.
I expect it will get there, but another thing we've learned from previous efforts is that telling the community it's ready before it's ready causes "$LANGUAGE threading" searches to be filled with posts telling people how bad it is, even years and years after it has actually been fixed. It's probably a blessing in disguise that it's not something the community is pervasively trying to use.
Well, threading - shared-state parallelism, more precisely - is hard to do well, and retrofitting it into a program that wasn't designed with that kind of threading in mind is even more challenging. I don't think any popular languages solved this, save for Java. Especially in the recent versions, with the new virtual threads - as much as I dislike Java, I have to say they did an excellent job on this. Other languages and platforms (that I know of; what's the .NET story here?) are all shitshows to varying degrees, trying to catch up and failing over and over again.
I think the only sensible way to offer parallelism in Emacs is to exclude the "shared-state" part, the way Racket (places) and OCaml do it. I think Python also tries to do it with subinterpreters? Let another instance of the interpreter run in the same process and communicate via message passing. That's probably still a huge undertaking, but at least it seems more viable than going through the whole codebase and adding locks everywhere...
Still, the "threads" in Emacs, as incomplete and half-baked as they are, can be useful. And if nobody uses them, there's no incentive for the developers to improve them. So I think we need at least some early adopters if we want the threading support in Emacs to get better.
Generally, I don't feel like concurrency and parallelism are problematic areas anymore in 2023, aside from existing aged stacks. From the perspective of someone who mainly lives in the .NET ecosystem, it has indeed been a shitshow outside of it for a long time, with many architectural choices throughout the industry paying for the sins of Java (e.g. Kafka) and imposing limitations that seemed nonsensical and embarrassing even 7 years ago.
We're talking about parallelism here, not concurrency. async/await solves concurrency, not parallelism (on its own). Kotlin coroutines solve parallelism only because they piggyback on Java threads. I'm not sure about Go, but it's probably M:N scheduling (so with parallelism), like what you get on the BEAM. Then again, on the BEAM you don't get to "share" anything (other than binaries, IIRC).
I'd say concurrency is largely a solved problem, yes; limited parallelism (e.g., with message passing) also mostly works. We don't need to worry about the "C10K problem" anymore. But shared (mutable) state parallelism is, I think, still far from solved - if it can ever be "solved", which is a pretty big assumption :)
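A minimal sketch of that kind of shared-nothing, message-passing parallelism, in Rust (function and variable names are mine):

```rust
use std::sync::mpsc;
use std::thread;

// Parallelism without shared mutable state: each worker owns its data
// outright and communicates results over a channel, in the spirit of
// BEAM processes or Racket places.
fn parallel_sum(chunks: Vec<Vec<u64>>) -> u64 {
    let (tx, rx) = mpsc::channel();
    let n = chunks.len();
    for chunk in chunks {
        let tx = tx.clone();
        thread::spawn(move || {
            // `chunk` is moved into the worker; nothing is shared.
            let partial: u64 = chunk.iter().sum();
            tx.send(partial).expect("receiver alive");
        });
    }
    drop(tx); // close our copy so the channel ends when workers finish

    // Collect one partial result per worker.
    (0..n).map(|_| rx.recv().expect("worker sent")).sum()
}

fn main() {
    let total = parallel_sum(vec![vec![1, 2, 3], vec![4, 5], vec![6]]);
    println!("{total}");
}
```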
I feel like the Scheme transition would have gone better if it had been proposed today. The dev community is much larger, and there seems to be more activity around this stuff now.
Case in point: the recent async updates, native compilation, and more. They often leave something to be desired but are nevertheless huge upgrades.
A. It's true that Wasmer is bigger, but even my phone has 16GB of RAM and 1TB of storage space.
B. There are probably more computers running WebAssembly loads today than Lisp ones.
C. Lisp is a mature language, almost too mature. I used it extensively in the '80s and it served me well then, but that was 40 years ago. At least have the decency to use a more modern language like Lua, which is what Neovim uses.
Lua is very weak for "programming in the large". It's just a little bit better than early JavaScript. Scripts are OK, but anything that requires more code with more structure requires incredible amounts of perseverance and discipline from all contributors. You can do fairly large programs in Lua as a small team of highly skilled hackers, but the barrier to entry will be much higher than if you did it in a language that offers ready-made abstractions.
I use AwesomeWM, which is basically the Emacs of window managers, with Lua instead of Elisp. The code is very well written and documented, yet getting into it is much more complicated than if it were written in Python - even poorly written Python.
C. There aren't any other languages that meet the criteria. Lua was a no-go from the start. The maintainers did not like the language, and it necessitated adding more C code to Helix which could complicate building even further. https://github.com/helix-editor/helix/discussions/3806#discu...
Rune is not mature and is developed by a handful of people. Please check the thread I linked, literally everything you could say has already been addressed.
Saying "X is better than Y" or "Why not Z?" is not constructive at all.
It is constructive, though - Lisp used to be fun when parsing complex language grammar was not as trivial as it is today. I loved it, and I worked with muLisp for many years... in the '80s and '90s. For AI (or, rather, "expert systems"), it was without a match... prior to the '80s, though. There were Lisp machines even... in the past.
Why? The properties of lisp make it suitable for both a config file, and an extension language. It can be easily embedded into existing languages and is extremely flexible.
Right, but his concern about security is not just about restricting how you use your computer; it's about trying to keep it safe in a manageable way.
However, the current state of guarantees around it is not at the point where Turing-completeness is the biggest issue. You can have a simple config, but the rest of the system is still unverifiable and running unknown blobs.
I think the reasonable way out is through restricted capabilities. We won't get a fully verifiable system we can inspect anytime soon. Probably not before the dark days of mandatory and somewhat provable ad impressions.
So, if I use an editor config off the Internet, I need to inspect it for malware, because it's code, not configuration? Yes, there are languages for configuration - Jsonnet, Starlark, Dhall, which are execution safe - unlike Lisp and Lua!
Do you inspect all the code you run on your computer? You probably got all of it off the internet, except for the firmware blobs you couldn't even inspect if you wanted to.
And hell, even an "execution safe" configuration can contain malware if there's a parser bug.
At some point you have to choose who to trust and not to trust to write code that runs on your system, and all you can really do is try to verify that they did in fact write it, and run untrusted code in isolation from sensitive data.
You already face the same threat then. Many, if not most, nontrivial programs have at least one way to escalate from config to arbitrary code execution. For example, sway has exec, and basically any useful editor has "on save" actions, etc. No need for a Turing-complete language when you can just shell out.
Whenever I update my Spacemacs config and packages I'm kind of doing that; there's no way you can honestly convince yourself that you thoroughly reviewed everything. But I guess the same applies when you update your boring text editor's binary and forget to opt out of some new feature you may not want, as your old config might not mean the same thing anymore.
I think the real problem is being able to trust your entire system. It'd help much more to have a better capability system, so my rogue text editor can't upload my photos or credit card info from my browser profile to the internet. Today, things kind of work only because of tons of well-intentioned, well-behaved people collaborating.
Scheme has sandboxing in the form of environments. You can evaluate[0] / load[1] untrusted code by applying an environment specifier[2] with all of the symbols you trust the code to use. For example, if you don't want the code to be able to use IO, simply don't add (scheme read) and (scheme write) to the environment that you eval / load the code with.
It's possible to write declarative configuration in scheme. You can see that in Guix. Eventually someone will write a macro to create something purely declarative, like use-package in emacs.
> it's a tremendous foot gun
I've never seen this footgun in action with elisp in Emacs, lua in neovim or vimscript in vim. Is this anything more than hypothetical?
> Also a huge security hole.
If you put an editor in a position where its Turing-complete configuration is a security hole, you'll be in a lot more trouble than you imagine. Editors by definition are meant to modify stuff in a filesystem. With those privileges, it won't matter what the config language is. The plugins, even in WebAssembly, will cause serious issues.
Any chance of seeing the things I miss the most from Racket in other langs?
1. Parameters and Syntax Parameters (Syntax parameters make macros more powerful)
2. Turing complete macros (not just syntax-case)
3. Typed Racket
I almost used https://gamelisp.rs/ for a project but the nightly feature it needs broke and it's no longer maintained, glad to see something similar arise! You might want to consider adopting their choice of VecDeque as a list replacement, I think it makes a lot more sense than naive linked lists on modern machines.
1. I do support parameters now, syntax parameters not yet. I would like to! But Racket has a pretty hefty head start on me so it'll take some time.
2. Right now I have syntax-rules macros. I also have defmacro-style macros that get used internally in the kernel during expansion, but I haven't opened them up to user space yet. Syntax-case will be coming soon, hopefully.
3. The odds of me being able to come up with an implementation to match Typed Racket pound for pound are pretty low. I have toyed with using contracts as types (where possible), with moderate but promising success in certain situations. I have a soft spot for Racket and have been modeling behavior after it; however, it will take time to create a macro system powerful enough to match it. It wouldn't be impossible to create an alternative syntax that just lowers to Steel after type checking, but I haven't put time into that.
On the list type - the list currently in use is an unrolled linked list https://github.com/mattwparas/im-lists, which I've found yields much better iteration performance than naive linked lists. When possible, the vm also does some in-place mutation on the lists when consing, which helps performance. I can also hot swap in a vlist, but for the moment I've stuck with unrolled linked lists.
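For readers unfamiliar with the structure: a toy unrolled linked list can be sketched in a few lines of Rust (this is an illustration only, far simpler than im-lists) - each node stores a contiguous chunk of elements, so iteration chases far fewer pointers than a cons-cell list:

```rust
// Maximum elements per node; real implementations tune this.
const CHUNK: usize = 4;

struct Node<T> {
    elems: Vec<T>, // at most CHUNK elements, stored contiguously
    next: Option<Box<Node<T>>>,
}

struct UnrolledList<T> {
    head: Option<Box<Node<T>>>,
}

impl<T> UnrolledList<T> {
    fn from_iter(iter: impl IntoIterator<Item = T>) -> Self {
        let mut items: Vec<T> = iter.into_iter().collect();
        let mut head = None;
        // Build back-to-front so each node holds a full chunk.
        while !items.is_empty() {
            let split = items.len().saturating_sub(CHUNK);
            let elems = items.split_off(split);
            head = Some(Box::new(Node { elems, next: head }));
        }
        UnrolledList { head }
    }

    fn iter(&self) -> impl Iterator<Item = &T> {
        let mut node = self.head.as_deref();
        std::iter::from_fn(move || {
            let n = node?;
            node = n.next.as_deref();
            Some(n.elems.iter()) // yield one chunk's iterator at a time
        })
        .flatten()
    }
}

fn main() {
    let list = UnrolledList::from_iter(0..10);
    let collected: Vec<i32> = list.iter().copied().collect();
    println!("{collected:?}");
}
```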
> 2. Turing complete macros (not just syntax-case)
I assume you mean `syntax-rules` here. If I understand correctly, it's pretty easy to simulate a paper tape with a `syntax-rules` macro - that is, `syntax-rules` macros are already "Turing complete".
May I ask, what is your personal journey of learning to code? Did you also discover Lisp/Scheme through SICP? And what have you professionally used Scheme for?
I am currently going through SICP, and I am also interested in Rust, so this project is a great discovery! Maybe I can contribute to it also.
I learned to code primarily through school - I had the privilege of studying at Northwestern where a lot of the Racket people teach, so my first programming class was in Racket. I have worked through some of SICP and How to Design Programs. After Racket I learned some C, C++, and C#. Then taught myself python just independently doing some projects, ended up back taking a few classes in Racket, then one in Agda that got me down the programming language rabbit hole. Took a class in Rust and that got me working on Steel.
I haven't _directly_ used Scheme professionally, except for some Steel scripts for automating workflows and some Racket programs for Spark query plan analysis. I'd like to use Scheme more in my professional work, but for now I'm quite happy just working on it for fun.
Contributions are welcome! Feel free to either join the discord and ask questions there if you want a more chat based place, or open a discussion on github if you'd like to learn more. I have it on my TODO list to set up a matrix chat, just haven't gotten around to it - so apologies for having discord as the only chatroom.
They're using hash array mapped tries. I don't have my own personal implementation, I have been using https://github.com/bodil/im-rs until I can get around to making my own implementation (not that I really need to, but it would be a fun exercise).
Functions generate a hash based on a unique id generated for the function, plus the hash of any captured variables and a hash of the function's pointer address. That's off the top of my head, though, so I could be missing some details.
Hashing maps is tricky! With a sufficiently deep hash map you can run into problems, since hashing invokes an equality check as well. At least the way I handle it, you just attempt to naively hash the keys and values of the hash map to create a hash code for that object. If the equality check reaches a sufficiently large depth, eq returns false so we don't stack overflow.
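A hedged sketch of that depth-limited strategy in Rust (the `Value` type and `MAX_DEPTH` cutoff are made up for illustration; Steel's internals differ):

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// A hypothetical nested value: maps can contain maps as keys or values.
enum Value {
    Int(i64),
    Map(Vec<(Value, Value)>), // association list standing in for a hash map
}

const MAX_DEPTH: u32 = 64;

// Naively hash the keys and values recursively, bailing out past
// MAX_DEPTH so a pathologically deep structure can't blow the stack.
// Returns false if the depth limit was hit (treated as "not hashable").
fn hash_value(v: &Value, state: &mut DefaultHasher, depth: u32) -> bool {
    if depth > MAX_DEPTH {
        return false;
    }
    match v {
        Value::Int(i) => i.hash(state),
        Value::Map(pairs) => {
            for (k, val) in pairs {
                if !hash_value(k, state, depth + 1) || !hash_value(val, state, depth + 1) {
                    return false;
                }
            }
        }
    }
    true
}

fn main() {
    let shallow = Value::Map(vec![(Value::Int(1), Value::Int(2))]);
    let mut h = DefaultHasher::new();
    println!("hashable: {}", hash_value(&shallow, &mut h, 0));
}
```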
Can someone explain these scheme and lisp languages to me?
Every time I look at these languages, I can't grasp what you can use them for.
And why one would use them.
I always feel like I'm missing something.
In the example scripts and code snippets, I can see that you can define functions, that you can use lists, mathematical operations, you can build some algorithms, you can print text, but it never goes further than that.
I've only used languages like Python, Rust, C, Java, JavaScript, and they all have a very similar vibe, you have a std lib, which can interact with many things, you can build UIs, networking libraries and all that. And I could probably start using any language that is "similar" to these.
But I could never use one of these scheme/lisp languages, as I can't really grasp them.
Sorry, this comment is all over the place, because I can't really explain what's going on in my head when I see languages like this.
I'd call myself a proficient programmer, but every time I look at these languages, it feels as if I've never seen code once in my lifetime.
Any help or hint at what I'm missing is appreciated.
> And I could probably start using any language that is "similar" to these.
Then you probably could start also using any of the Lisps mentioned. Most likely you are just hung up on the surface syntax, which does not take long to get used to.
> it feels like as I've never seen code once in my lifetime.
I think you must be fixating on syntax, or have been seeing some code that is advanced or (it exists) a very confusing example by some academic to demonstrate some curiosity.
Except for idiomatic recursion (which you don't have to use), Scheme semantics should initially look familiar to a Python or JavaScript programmer, like a subset of that, just with a different syntax. (Scheme nuances are much better designed, but to a new programmer semantics will look like a subset.)
And the Scheme syntax is one of the simplest ever, once you understand it.
What should instead be confusing is a language with very different semantics, like lazy evaluation, or an OOPL with complicating dispatch rules to reason about.
Most of them use most of the same core idioms you're familiar with from other languages; they just look superficially different because of the syntax (especially the parenthesis placement).
Try moving the left parenthesis of each expression over to the right by one - `(f x y)` becomes `f (x y)`, which looks a lot like `f(x, y)` - and it may become clearer.
Why not just program everything in assembly? High-level languages give you powers of expression that lead to better programs, or are simply more convenient for the particular task at hand. Lisp languages have several features that aren't found in most other programming languages. And a lot of features that are found in other languages originated in Lisp.
Perhaps what you are missing is the practical part. For that you should look to the two types of Lisp in widespread usage: Common Lisp and Emacs Lisp. Common Lisp is a general-purpose language with a very large standard library and a rich set of third-party libraries. Emacs Lisp is a complete language, but it's only really used to build text-editing functionality for Emacs. There's tons of real, effective code out there in these languages.
Or perhaps you are confused about the functional programming part. This is only strongly associated with Scheme as other Lisps support other paradigms like object-oriented programming. Functional programming is a thing that takes a while to understand, although my theory is it's actually more natural and it's only because you already learnt imperative programming, which is thoroughly unnatural, that you find it odd. Once grasped it will help you with other languages that support functional programming like JavaScript and Python.
They say learning Lisp will make you a better programmer even if you never use it again. I tend to agree with this.
At least for me, there is a certain appeal in building up the world. With Scheme you get a very small set of functionality, but you use that to implement the rest of the language and build abstractions on top of abstractions. Seeing how the whole system can be built from a small core is pretty cool, and also very satisfying. There is a talk from Andy Wingo, one of the maintainers of Guile, where he describes working on Guile as akin to tending a garden, and I think it's an apt comparison. Something about it feels very organic and personal, which is part of the appeal.
The syntax itself doesn't _really_ matter, it just makes it easy to do so - functions and syntax visually look the same, so it makes it easy to build.
It's not for everyone, but I think it's worth exploring for a little bit. Similarly, I think it's worth learning any language just a bit, if only to expand your toolkit. The parentheses do disappear at a certain point and you learn to read it, but if it's not your thing, that's fine.
Basically the differences are in the concepts you'll use to write code. Lisps themselves are very different from each other, but just like the languages you're used to, many lisp distributions have standard libraries that can be called, and those building blocks can be used to build applications or whatever else. In this case specifically, Steel provides the facility to call Rust functions within a Steel program: https://github.com/mattwparas/steel.
So, although I haven't used Steel, it looks like the advantage you'd get from using it is the opportunity to take advantage of features it provides like transducers and contracts, which are features common to some other Lisps as well.
So, just like choosing any other language, it boils down to a series of tradeoffs.
You're not missing anything. You have two choices: a) either take it as true that there's nothing to see here and move on, b) or if you have some spare time learn lisp and then move on. (b) avoids that nagging feeling that you're somehow unworthy.
My first introduction to Lisp-like programming languages were the SICP videos with Abelson and Sussman so to me Lisps are just programming languages like any other but with a nicer syntax and great support for interactive programming.
used to be on the same camp. now I think I get it.
the language is malleable. because it is homoiconic. so while developing software, you are simultaneously writing a domain specific language for your problem. because macros.
in the end, if you like your craft, you end up with a "language" that is very suitable for solving the problem you have at hand, with very little noise.
the downside is, probably most others will not understand your code so lisps have heavy bias towards "solo hackers". BUT... if that is important to you, you can code towards understandability. so much so that you can make it very hard for others to make mistakes when using the public api.
so with most programming languages, you program within the constraints of the syntactic rules of the language. with lisps, you define the language you want to approach the problem, that language comes out by itself iteratively. for some, that is a joy. others don't care for it.
Why do you think homoiconicity is a gimmick? Have you seen languages that offer metaprogramming capabilities as simple & powerful as Lisp without homoiconicity?
Homoiconicity is why Lisps have rich macros and legendary metaprogramming capabilities. You don't even have to deal with it yourself to feel its usefulness. Emacs use-package is an example.
This is just perfection on first glance, exactly what I've been looking for to augment my Rust projects that need a command/configuration language. And with a working repl, too, apparently.
I hope the second and third glances will be good too.
I'm familiar with the Steele quote. The manner in which C++ programmers were "dragged halfway to Lisp" concerns primarily manual memory management and all the bugginess attending thereto (Java came out before the STL and smart pointers were widely adopted). Concerns about memory safety were a significant part of the impetus for developing Java in the first place. Steele was responding to complaints that he had turned his back on Lisp, by countering that he was instead bringing the C++ crowd closer to Lisp with a C++-like language that had Lisp-like memory management.
But the reason why Rust is such revolutionary computer science and, quite possibly, the most interesting thing to happen to PL design in decades is because with safe Rust you get all the memory-safety advantages of Java or Lisp, without a GC because the borrow checker statically guarantees object lifetimes. So Rust programmers don't need to be dragged halfway to Lisp the way C++ programmers were in the mid-90s, because Rust has the same memory-safety guarantees with none of the drawbacks of GC.
> with safe Rust you get all the memory-safety advantages of Java or Lisp, without a GC
Safe Rust does not protect against memory/resource leaks when reference cycles are present. To avoid those, tracing GC is still needed - and most likely unavoidable in the general case. Note that avoiding reference cycles is a global concern that can't be localized to any single part of the program, so trying to ensure this statically is roughly as hard as proving that a random piece of C/C++ code does not corrupt memory.
You can of course stick to tree-like allocation patterns where object lifecycles nest cleanly, and that's what the borrow checker is all about. You can also use arenas/regions, and future Rust versions will hopefully make those easier to use.
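A minimal demonstration of such a leak in safe Rust (type and function names are mine), along with the usual `Weak` workaround:

```rust
use std::cell::RefCell;
use std::rc::{Rc, Weak};

// Two nodes that keep each other alive: safe Rust compiles this happily,
// but the heap allocations are never freed.
struct Node {
    other: RefCell<Option<Rc<Node>>>,
}

fn make_cycle() -> (Rc<Node>, Rc<Node>) {
    let a = Rc::new(Node { other: RefCell::new(None) });
    let b = Rc::new(Node { other: RefCell::new(None) });
    *a.other.borrow_mut() = Some(Rc::clone(&b));
    *b.other.borrow_mut() = Some(Rc::clone(&a)); // cycle: a -> b -> a
    (a, b)
}

fn main() {
    let (a, b) = make_cycle();
    // Each node is held by one external Rc plus one Rc inside the cycle,
    // so when `a` and `b` go out of scope the counts only drop to 1,
    // never 0 - the nodes leak.
    println!("{} {}", Rc::strong_count(&a), Rc::strong_count(&b)); // 2 2
    // The usual workaround is to make one direction a `Weak` reference,
    // which is exactly the "avoid cycles" discipline described above.
    let _downgraded: Weak<Node> = Rc::downgrade(&a);
}
```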
> Java came out before the STL and smart pointers were widely adopted.
This is a strange comment.
Java 1.0 was released in Jan 1996. (Java wasn't very useful before 1.1.) However, Stepanov proposed the STL to the ANSI/ISO committee in Nov 1993, and HP released a working version to the Internet in Aug 1994.
"[W]idely adopted" is an editorial term. It is meaningless without some backing evidence. ("Never been worse" has a similar sentiment.) How do you determine what counts?
I worked on enterprise C++ for years in the mid-2000s that didn't use any smart pointers. There are many huge, old enterprise C++ projects that don't use STL or smart pointers. And, there are still many huge old Java enterprise projects that use shitty cast-from-Object-type to pass around typed data, instead of generics, or something better. Not much being said here!
> the most interesting thing to happen to PL design in decades
"[D]ecades" is a wild overstatement. In the last 10 years, I would vote for LLVM, which greatly improved the velocity of (experimental) programming language development. Would Rust have developed so quickly without LLVM? Probably not. Look at the speed of development in Rust, Swift, Zig, and many others that use LLVM as their backend. It is night-and-day compared to 20 years ago in a GCC-only open source compiler world. I remember the bad old days where GCC was the elephant in the room, but so hard to add and maintain frontends, that few did it.
> Rust has the same memory-safety guarantees with none of the drawbacks of GC
I've never seen this claim before. Are there any counterpoints?
Rust spent its entire innovation budget on zero-cost memory safety! Basically, instead of you tracking lifetimes (like in C with malloc/free) or the runtime tracking lifetimes (like in a GC'd language), Rust lets the compiler do this with a 'borrowing' system.
Borrowing in Rust means objects exist in two states: owned (which get dropped when they fall out of scope, like in C++), and borrowed, which means some other scope owns the object and we only have a reference to it. The compiler verifies that a borrow cannot outlive the actual object, so it statically prevents use-after-free errors.
Rust also has a distinction between immutable and mutable values, and statically checks that any mutable reference is exclusive and that immutable references are only shared with other immutable references. This helps prevent data races and similar mistakes.
Finally, Rust also has smart pointers in case you truly don't know when an object will no longer be needed, although the names differ from C++; there's Rc/Arc for reference counting (like shared_ptr), Box for owning pointers (like unique_ptr?), and RefCell, which acts like a runtime borrow checker.
Apart from these features that prevent use-after-free and aliasing, Rust also has a feature called 'unsafe' with which you can bypass all these and e.g. work with raw pointers. Unsafe is generally used sparingly (and if not, attracts a lot of criticism, like happened to actix-web), and the safe abstractions on top also provide more pedestrian safety features like bounds checking. You can skip bounds checking on e.g. a Vec, but doing so actually requires you to drop into unsafe yourself, since the get_unchecked function is marked unsafe in the stdlib.
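To make the above concrete, a small sketch in safe Rust (the function name `demo` is mine) showing ownership, the two borrow kinds, bounds-checked indexing, and the explicit `unsafe` opt-out:

```rust
// A sketch of the rules described above, not Rust's full story.
fn demo() -> (i32, Option<i32>) {
    let mut v = vec![10, 20, 30]; // `v` is owned here; dropped at end of scope

    // Reading through a shared borrow (indexing borrows `v` immutably):
    // any number of shared borrows may coexist, reads only.
    let first = v[0];

    // Exclusive (mutable) borrow: one at a time, and the compiler rejects
    // code where a shared borrow is still live at this point.
    let slot = &mut v[1];
    *slot = 25;

    // Safe indexing is bounds-checked; out of range yields None...
    let out_of_bounds = v.get(99).copied();

    // ...while skipping the check requires an explicit `unsafe` block,
    // since `get_unchecked` is marked unsafe in the stdlib.
    let second = unsafe { *v.get_unchecked(1) };

    (first + second, out_of_bounds)
}

fn main() {
    println!("{:?}", demo()); // (35, None)
}
```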
Small interesting side note: I'm pretty sure Rust is actually doing very little new things, PL-design wise. It's more of a realization of theory that has been around for years if not decades.
How does Steel handle garbage collection? Would it be possible to manually control the garbage collector? For example, a game that runs the GC at the end of every frame.
Immutable values are reference counted, so for most code, things will be dropped when they exit scope. For captured mutable values, there is a fairly mundane mark-and-sweep collector. It is possible to manually control the garbage collector; however, I have not optimized it for that kind of workload. If you were embedding Steel in a game, I don't think it would be strictly necessary to tune the GC as long as you aren't using a ton of mutation. If you were using a lot of mutation and still wanted a relatively performant collection at the end of every frame, then the underlying GC implementation would have to be changed or swapped for a different one (which is not impossible - I just have only one GC implemented).
Thanks, I think swapping and controlling the GC would be a very useful feature.
In the game example I gave, performance is important, but what's also important is consistency. Interactive apps rely on a steady framerate, so what you want to avoid is accumulating garbage across multiple frames and then doing a single large collection pass.
In other words, it's better to do a bit of GC every frame than a bunch at once and risk stuttering.
(Note that doing GC over large object graphs will nonetheless involve significant overhead, even with efficient implementations as seen here; GC is not at all a silver bullet, and should be avoided if at all possible. The actual point of GC is to enable computing over highly general, possibly cyclical object graphs - if that doesn't apply, other memory management strategies can be used instead.)
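To illustrate why cyclic graphs are the case tracing GC exists for: plain reference counting never frees a cycle. A sketch with a hypothetical `Node` type (not from any particular library):

```rust
use std::cell::RefCell;
use std::rc::Rc;

// A node whose `next` field can point at another node, allowing cycles.
struct Node {
    next: RefCell<Option<Rc<Node>>>,
}

fn main() {
    let a = Rc::new(Node { next: RefCell::new(None) });
    let b = Rc::new(Node { next: RefCell::new(Some(Rc::clone(&a))) });
    // Close the cycle: a -> b -> a.
    *a.next.borrow_mut() = Some(Rc::clone(&b));

    // Each node is now kept alive by the other, so when `a` and `b`
    // leave scope the counts only drop to 1 and the pair leaks.
    assert_eq!(Rc::strong_count(&a), 2);
    assert_eq!(Rc::strong_count(&b), 2);
    // A tracing (mark-and-sweep) pass - or `Weak` references used
    // manually - is what reclaims graphs like this.
}
```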
I'll make a tracking issue for it - new GC work is fun! I'll do some research, I have on my back log to integrate Steel into Bevy or some other Rust game engine, would give me a reason to make some fun GCs
Not at all! I agree it is a useful feature, I'd be curious how much it is necessary if not using any mutation, but the best way for me to find out is to try it out :)
Heh, neat to see you've got the Perceus paper there too. That is in fact the other part (the "ARC") of Nim's memory management, for those unaware - with the only differences being that Nim frees memory at the end of scope rather than at last use, and that Perceus may be atomic (I don't fully remember; Nim's ARC isn't atomic).
The bacon-rajan-cc link has only -implemented- a stop-the-world version, but notes right at the top of the README that it -can- be concurrent, and the stop-the-world-only-ness is only 'Currently.'
Samsara is implementing the same algorithm and seems to be further along. Though there's also 'shredder' https://github.com/Others/shredder with a different overall approach.
Thanks! Was this originally a "just because" project, or do you expect it to offer some improvements or advantages over existing embeddable Schemes like Chibi (as you mentioned)?
Originally started it as a school project, which then during the pandemic morphed into something to work on while cooped up. It is really a passion project, working on it is fun! There are some interesting design things I wanted to explore, like how to get good performance out of safe Rust, using unrolled linked lists or vlists instead of naive linked lists, using contracts, etc.
Chibi is an impressive scheme implementation, and it will take a long time before Steel can hit the same level of compliance as Chibi. There is not a Chibi equivalent in native Rust that I am aware of. There are other embedded scripting languages for Rust that are pleasant - but no schemes of the maturity of Chibi. So in that regard, I'm hoping to offer a compelling scheme in native Rust to make integration with Rust applications relatively easy and painless.
I also don't have a particularly strong need to be 100% completely compliant with the scheme specs. The plan is to have compatibility layers so that portable scheme code can be used, however there are things about scheme that I think Racket (for example) improved on, and I'd like to explore that as well as much as I can.
Fantastic project! If you're looking to embed a DSL/full programming language into your application, this seems right on point. Rhai and RustPython are two other options if you don't dig parentheses.
Yep - you can write standalone steel code without interacting with Rust at all, just interacting with the interpreter. I've done the first few days of the advent of code in Steel without needing to touch any Rust. Now, I will say that Steel has gotten visibility faster than I've been able to keep up with, so you might find a native function missing, or something that you would like to use that isn't implemented yet - at which point you either need to implement it yourself or open up an issue for someone to get to :)
Hopefully the linked README provides a general overview (I know I need to write some more documentation!), but Steel is an implementation of the scheme programming language (not entirely compliant yet, but aiming for R5RS and R7RS compliance). It can be used as a standalone language via the interpreter/repl (like Python or Racket), or it can be embedded inside applications, like Lua. There are hundreds (thousands, probably) of embeddable languages, each with their own flavor - see a list compiled here for example https://github.com/dbohdan/embedded-scripting-languages
Use cases are generally for either configuration, scripting, or plugins - so scripting in games, or adding extensions to your text editor without having to use FFI or RPC + serializing a bunch of data. The advantage it has over using dynamic libraries (in general) is it runs in the same process, and can access the internal data structures directly without a lot of ceremony involved. The downside is that it is typically not as fast as native code unless a JIT is involved.
JavaScript is an example of an embedded scripting language, where the browser is the host application.
When I tried to introduce s-expressions to a DSL my co-workers nearly lynched me. The parentheses were so violently hated I sunk into a deep hole and still haven’t come back out of it.
It really is kind of frustrating isn't it? Because anyone who's put any real effort into learning Lisp knows that the parentheses are not what's hard to understand, actually. You don't even have to read them, really.
So when people complain loudly about parentheses in Lisp that just tells me they probably never made any real effort to learn it, and are actively opposed to trying.
It's not a productive starting point for a discussion.
They're not. But they're also not required to flaunt their own ignorance as though it's a well-formed opinion. Also, in a professional setting, I find it a bit unprofessional and rude to poo poo technical ideas over trivialities.
> But they're also not required to flaunt their own ignorance as though it's a well-formed opinion.
I don’t like the parentheses. I’m not flaunting anything. Is it “flaunting” to calmly and respectfully share an opinion? To you it’s trivial; to me it’s not.
It seems like you’re interested in creating conflict with people who don’t like a thing that you like - which, tbh, I would call unprofessional behavior.
I'm not a fan of the parentheses either, but when I learned about s-expressions and how Lisp programs are also data structures, that piqued my interest and helped me look past them.
I question the judgement of people who can't look past the syntax when there is a very good and interesting technical reason behind it.
Code as data is interesting, but mostly orthogonal to S expressions. For example, Prolog code has the same property without S expressions, and more esoterically TeX (which is succinctly explained to a lisper as programming with defmacro but not defun).
Though it's just more powerful in Lisp precisely because the code is just lists in a language designed around working with lists.
So you can actually leverage this property in Lisp without the code becoming inscrutable for it, which in my experience doesn't usually happen in other languages.
> I question people's judgement who can't look past the syntax when there is a very good, and interesting technical reason behind them
The list of things that are interesting is endless, though. I see something I’m turned off by, I move on. There are plenty of valuable things to spend time on.
What's an A-expression? I tried looking it up but didn't find anything.
I have heard of M-expressions but never of an actual implementation. Scheme also seems to have some SRFIs involving alternate syntax that's whitespace dependent, like SRFI 119 (wisp), SRFI 110 (sweet-expressions or T-expressions), or SRFI 49.
Yeah it is a personal issue, there are people who really like Lisp syntax (I'm one of them). That doesn't mean anything bad, of course, some people like dark text on a light background, others light text on a dark background. Everyone is different and that's what makes the world so awesome.
I see something like:
some-var: I64 := a * b - c ^ d ^ e;
...and my brain gives out, while I find:
(let ((some-var (- (* a b)
                   (^ c d e))))
  (declare (type I64 some-var))
  ...)
How do you look at the latter and know the code structure? Are you counting parentheses? Or are you relying on conventions around whitespace indentation? If the latter, are you not concerned a misplaced parenthesis might make the code different from how it appears? There could be bugs not shown in the indentation.
Most Lispers I know code in Emacs or other smart editors that provide automatic code formatting and colored parens. But why not just make the indentation (or whatever it is that you really rely upon) the actual syntax, so there CANNOT be hidden bugs of that sort?
Edit: to expand on this, I think it is no coincidence that most lisps remain untyped to this day. Strong typing is about having the compiler enforce type rules so you the developer can’t fuck it up. Weak typing is more convenient, but ultimately a source of bugs. Lisp has, effectively, weak syntax. I don’t like weak syntax for the same reasons I don’t like weak typing.
> But why not just make the indentation (or whatever that you really rely upon) the actual syntax, so there CANNOT be hidden bugs of that sort?
That's Python. When whitespace matters, any aesthetic reformatting mistake can change the program's meaning. With s-expressions this cannot happen. A lisp code parser is completely deterministic regardless of where the newlines, spaces, and tabs occur. You can remove all the newlines from a 10,000-line Lisp program and the compiler will parse it exactly the same as if it were formatted aesthetically.* You can also write a simple program that takes that godawful one-line program and reformats it aesthetically however you like--the meaning won't change.
IOW in Lisp the aesthetics of the source code do not determine its meaning; aesthetics and meaning are orthogonal properties and you are free to adjust the two independently. This is also somewhat true in languages like C, but rather than several special-case punctuation characters, in Lisp there's only one: The parenthesis. Lisp is thus similar in spirit to HTML where semantics and layout are [mostly] independent.
* With a few obvious exceptions like EOL comments, and newlines that are part of quoted strings.
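To make the determinism claim concrete, here is a toy s-expression reader (a sketch, not any real Lisp's reader); two very different formattings of the same code produce identical trees:

```rust
// Whitespace is only a token separator in s-expressions, so wildly
// different formattings parse to exactly the same structure.
#[derive(Debug, PartialEq)]
enum Sexp {
    Atom(String),
    List(Vec<Sexp>),
}

// Split source into tokens; parens are self-delimiting.
fn tokenize(src: &str) -> Vec<String> {
    src.replace('(', " ( ")
        .replace(')', " ) ")
        .split_whitespace()
        .map(String::from)
        .collect()
}

// Recursive-descent parse of one expression (assumes well-formed input).
fn parse(tokens: &[String], pos: &mut usize) -> Sexp {
    if tokens[*pos] == "(" {
        *pos += 1;
        let mut items = Vec::new();
        while tokens[*pos] != ")" {
            items.push(parse(tokens, pos));
        }
        *pos += 1; // consume ")"
        Sexp::List(items)
    } else {
        let atom = Sexp::Atom(tokens[*pos].clone());
        *pos += 1;
        atom
    }
}

fn read(src: &str) -> Sexp {
    parse(&tokenize(src), &mut 0)
}

fn main() {
    let pretty = "(let ((x 1))\n  (+ x\n     2))";
    let one_line = "(let((x 1))(+ x 2))";
    // Same tree, regardless of newlines and indentation.
    assert_eq!(read(pretty), read(one_line));
}
```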
> any aesthetic reformatting mistake can change the program's meaning.
Why would you just randomly change indentation? On the contrary, I don't want the indentation to say something else than the code actually does.
> You can remove all the newlines from a 10,000-line Lisp program and the compiler will parse it exactly the same as if it were formatted aesthetically.
> Why would you just randomly change indentation? On the contrary, I don't want the indentation to say something else than the code actually does.
Because the user may want the development environment to display snippets of code in various places: REPL, debugger, code browsers, inspectors, various editor types, ...
In a Lisp system the code can be data and text. Code formatters can reformat code depending on user preferences, device types (color, font, ...), view sizes, ...
In Lisp, code often gets generated (for example via macros), and this code will be automatically laid out in various views (different widths, different fonts, different levels of detail). Code can be small or large, and the system may abbreviate parts, which one can expand if necessary.
Source code is not necessarily static text in a file system. Code can just be list-based data structures, and layout is fluid.
In Common Lisp the formatted output of code is also user extensible/customizable, a 'pretty printer' is a part of the language spec:
Use an editor that auto-inserts parens and that indents the code correctly. Now nothing bad can happen. And the parens are used to edit code structurally.
re typing: Coalton brings Haskell-like typing on top of CL. https://github.com/coalton-lang/coalton/ Other lisps are typed: typed racket, Carp… and btw, SBCL's compiler brings some welcome type warnings and errors (unlike Python, for instance).
Well I don't like indentation-based syntax because when I write code with such syntax down in a notebook I have this bad tendency to drift to the right as I go lower down the page. And then if I go training or something and return the drift causes me to lose the structure of the code.
With round brackets on the other hand, well... I think my brain just works with round brackets. Like my mind is coded in Lisp or something, I dunno. It would explain why I'm so slow: GC pauses. My mind should really be ported to SBCL or something.
You're clearly the exception though. It's almost always just ignorance in my experience. That's why I used the word probably. I assume if someone asked you what you thought about Lisp, you'd have a lot more interesting stuff to say than just "ew, parentheses", right?
That's my point, is all.
As for readability, I don't think it's trivial at all. I just don't think syntax has all that much to do with it. It plays a role, but a very minor one compared to overall code quality. In other words, it's syntax that's mostly a trivial point, not readability.
One undeniable advantage of Lisp in terms of syntactic readability though is that other languages always end up piling on more syntax over time as the language gets older. That's by far my least favourite thing about Rust for instance, even though I do love the language. There's always some new syntax, keyword, or position that an existing keyword can suddenly go in. The longer I go without actively using Rust, the more work it is to start again, because I have to go learn all this new stuff now. And syntax always takes a while to feel intuitive, at least for me. But it's still a lot easier than grokking new semantics and paradigms. I still don't have a good handle on async rust.
If someone told me to go read some ALGOL 68, I could probably do just fine because there's nothing semantically unfamiliar about it compared to something like C. But if someone gave me some Haskell written as sexps I would be utterly lost despite the syntax being perfectly comfortable to me. Because I never quite grokked typed FP.
No, the horrible syntax of Lisp is pretty much all I have to say on it, honestly. I don’t care for Lisp for the same reason I don’t care for Perl: write-only languages are a bigger hindrance to software maintenance than whatever advantage might be obtained. And the advantages of Lisp have long since been obtained by other languages.
Lisps are not write-only. They're easy to read precisely because there is no inscrutable syntax. Every pair of parentheses has a first element, which tells you what they're doing. The parentheses denote a scope in which to look for arguments.
Conventional style guides also make the readability practically a trivial issue because you end up indenting arguments such that the structure of the program is reflected in the indentation, because each S-expression within a parenthesized S-expression is itself an AST node. Writing readable Lisp is just a matter of reflecting the AST's structure in the indentation, which most people do in other languages anyway (to some degree).
> the advantages of lisp have long since been obtained by other languages.
Clearly not, since most other languages are still not S-expression-based. (Though, admittedly, some other advantages have been copied in other languages.)
I did it once in an internal UI because we needed to rapidly expose some functionality that could be composed/piped in complex ways and it would take a while to implement a normal UI
Was met with skepticism, esp around the engineering/maintenance overhead, until I told them the parser took me a couple hours to write and was only a hundred lines of code or so
It is really fun (sad) to see this; I've seen it so many times myself too. You show how something would look with s-expressions compared to the existing syntax, and all they can focus on is how many parentheses there are. But when you sit down and count, there are the same number as in the non-s-expression version, just in different locations. And when you remove the parentheses, they can kind of understand the code, kind of.
One option that might be suitable for a DSL is implicit parens based on whitespace: a newline opens a new paren, and the paren closes when it reaches another line with the same indentation, e.g.:
(defun factorial (x)
  (if (zerop x)
      1
      (* x (factorial (- x 1)))))
could be rewritten as
defun factorial (x)
  if (zerop x)
    1
    * x (factorial (- x 1))
I doubt that SRFI 49, or any other proposal I've seen online, has been battle-tested.
I've written thousands of lines of Scheme using my own preprocessor, and it's my favorite code to look at. I prefer Haskell, for incredible ease of parallelism and a deeper mathematical foundation.
The two features I look for in a reduced parenthesis syntax (like looking for the bone marrow in a beef stew recipe) are:
1. Some constructions begin doubly parenthesized. One needs a symbol to represent the missing object one parenthesis in. I use $.
2. One can write more expressive lines with a flavor of open paren that autocloses at the end of the line. I use |.
Whitespace significance might be the choice even more controversial than Lisp parens. For myself, I appreciate both, but those that do not, really do not.
Probably there are more Python users out there than combined users of all lisp-like languages, which I guess would mean people are less scared of white-space significance than s-expressions :)
The things I'm looking for in language syntax these days are
1. That it's unambiguous enough that my editor can format it correctly every time (provided the code is correct, obviously). Indentation-based languages like Python fail this.
2. That its elements and keywords are distinct enough that my editor can apply colours on it. I sometimes feel that S-expression languages fail at this because of their small number of distinct keywords, but that might not be true.
Besides these two points, if the language is not a joke language, it's probably fine.
But that's what's ambiguous about Python's syntax. And I can make it a bit worse:
if a:
print("foo")
if b:
print("bar")
This is not a huge problem, but problems with syntax never are (in non-joke relatively modern languages). But it's still something I prefer that languages fix in their syntax if I get to have a choice.
The example you just posted is a syntax error, as it should be. And I still don’t get what is ambiguous about the syntax, the specification clearly states how indentation corresponds to a syntax tree.
I want this to succeed!! Please do not let the word "srfi" ever appear in the packages list... naming libraries with obscure numbers no one remembers was a terrible, terrible idea that all Schemes seem to perpetuate.
SRFI editor here. The numeric designations are there because, among other reasons, there is sometimes more than one SRFI for a particular general idea. But there is nothing stopping libraries from having more than one name, including a semantically meaningful one. In fact, there has been a SRFI standardizing how that is done since 2008 [1].
It is a standard. What do you expect?
But some (or many) Schemes offer their own libs on top of (or beneath) the SRFI implementations, and those have readable names.
At the moment I only have 1 SRFI package, and it's just to check against compatibility :) - my plan is to wrap the SRFI packages with friendly names and just include the metadata in the package spec so someone searching for it can find it easily.
I wonder where the name came from. Being HN, here is my nitpicking imperative:
Names are important. Our inheritance is wit, self-awareness, and irony; names that puncture ego and power and that appeal to joy: C, C++, GNU, Rust, Google, Yahoo!, Vim, Git, awk, etc. Others are beautiful, evocative images, like Apple and Amazon. Names communicate our culture and ideals to each other and to the next generation.
Careless, thoughtless names like Microsoft, IBM, etc. (ok it's ironic and self-deprecating, but without self-awareness or wit!), etc. should be hated and banned. Egotistical BS, especially Tolkien plagiarists who assert they are supernatural, should be tarred and feathered and paraded around town (with wit and irony).
(Plenty of names fall in some middle ground.)
If Steel is just a derivative of 'Rust' [edit: it is not, see the response below], it misses the self-awareness. Someone naming their development product - designed to build great structures - 'Rust' is engaging in a little self-deprecation, joy, and self-awareness. Naming the derivative project 'Steel' possibly misses all that; there's a reason the original wasn't named Iron or Steel or Carbon Fiber. But maybe there's more to the name.
1. Guy Steele (along with Gerald Sussman) created scheme, and Steel is close to Steele, just drop the e.
2. Steel is a Scheme, and I observed that Scheme names have a tendency to be crime-related: Scheme, Racket (racketeering), Larceny, etc. Not a scientific analysis at all, but I found it funny at the time that Steel sounds like "steal".
3. You made the observation, Steel sounds like something that would be associated with Rust.
Beyond that, I just liked the name. No SEO involved, and arguably probably should have picked something more searchable, but I didn't start making it with the intention of there being a lot of users, it started as a project for school.
(I always try and cram as many simultaneous jokes as possible into project names - and have a soft spot for lisp - so I wish you much joy and whatever level of success is most fun for you)
No judgement (I think great puns bring great minds together so namespace clashes are inevitable), but this was similar to the thinking behind a name for "Guile Steel", the low-level Scheme being plotted about by some of the Guile Scheme folks here: https://dustycloud.org/blog/guile-steel-proposal/
I don't think it's particularly careless: SBCL is short for 'Steel Bank Common Lisp', which is a reference to how the CMU university, named after Carnegie Mellon, could be respectively substituted with 'Steel' (Carnegie) and 'Bank' (Mellon).
[0]: https://github.com/helix-editor/helix/pull/8675