
I've worked with Andy a bunch and am glad to see him getting recognition for his work on Guile.


Came here to see if anyone mentioned propagators. That thesis is excellent. I second the recommendation.


But do you want your web browser to have the privilege to read your SSH private key? That's the risk of running programs "as you".


The whole problem is mapping privilege to users and groups, so doas doesn't solve the issues explained in the article.

> The key difference here would be that www-deployment can not delegate as easily to arbitrary users, as they would need to ask someone with root access to add additional users to the www-deployment group. But I am left wondering if this use case (if it is important enough)...

Delegation is the killer feature of the object capability model. It's not just important enough, it's the most important. Keep in mind that the ACL model allows delegation, too, it's just unsafe. Users share credentials all the time. Capabilities allow delegation in a way that can be attenuated, revoked, and audited.
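To make the attenuate/revoke point concrete, here's a minimal sketch of the classic caretaker pattern in Python (names are illustrative, not from any particular library): a capability is just a reference, so delegation is handing the reference over, and attenuation or revocation is a wrapper around it.

    def make_revocable(cap):
        # Returns (proxy, revoke). The proxy forwards calls to the
        # underlying capability until revoke() is called.
        live = [True]
        def proxy(*args, **kwargs):
            if not live[0]:
                raise PermissionError("capability revoked")
            return cap(*args, **kwargs)
        def revoke():
            live[0] = False
        return proxy, revoke

    # Hand out a revocable read capability instead of sharing credentials:
    f = open("/etc/hostname")
    read_cap, revoke = make_revocable(f.read)
    print(read_cap())  # the delegate can read...
    revoke()           # ...until we take it back; further calls raise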


Firstly, thank you for engaging and trying to enlighten me.

I do understand why capability delegation is useful and I am familiar with using Unix sockets to delegate the control of daemons using socket permissions, which feels similar to what we see here with capsudod (I have not read the code sadly, too much other code to read today).

However, I am still puzzled what the advantage of having a herd of capsudod instances running is over, say, my proposal of allowing users to set up their own doas.conf(5)s to delegate capabilities. Yes, we still need SUID and we will need to be darn sure 1,000 or so lines are properly secured, but it is attenuable, revocable, auditable, and feels (perhaps wrongly, because I have a bias towards text files describing the state of a system?) more natural to me than putting it all into the running state of a daemon.
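For concreteness, the kind of rule I have in mind would look like this in standard doas.conf(5) syntax (a hypothetical rule; the per-user config location is the novel part of the proposal):

    permit nopass alice as www-deploy cmd /usr/local/bin/deploy-site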

Is there some other strength/weakness of these approaches that I am failing to see? I am no systems programmer, but I find topics like this interesting and dream of a day when I could be one.


> However, I am still puzzled what the advantage of having a herd of capsudod instances running is over, say, my proposal of allowing users to set up their own doas.conf(5)s to delegate capabilities. Yes, we still need SUID and we will need to be darn sure 1,000 or so lines are properly secured, but it is attenuable, revocable, auditable, and feels (perhaps wrongly, because I have a bias towards text files describing the state of a system?) more natural to me than putting it all into the running state of a daemon.

I think two separate discussions are being mixed here. The above seems mostly concerned with the chosen interface of capsudo. Imperative vs. declarative is orthogonal to the discussion about object capabilities vs. ACLs.


It's nice to see other people writing about the capability transfer feature of Unix domain sockets. File paths are not object capabilities, but file descriptors are. Using a privileged daemon on top of an ambient authority system like Linux seems to be a good way to retrofit object capabilities onto the operating systems we already use. This is the same approach we took in Goblins[0] for our Unix domain socket netlayer for the OCapN[1] protocol.

[0] https://spritely.institute/news/spritely-goblins-v0-16-0-rel...

[1] https://ocapn.org
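For anyone who hasn't seen the mechanism: a minimal sketch of descriptor passing in Python (assuming Python 3.9+ on a Unix system, where socket.send_fds/recv_fds wrap SCM_RIGHTS):

    import os
    import socket

    parent_sock, child_sock = socket.socketpair(socket.AF_UNIX, socket.SOCK_STREAM)

    if os.fork() == 0:
        # Child: never opens the path itself; the received descriptor
        # is the capability.
        parent_sock.close()
        msg, fds, flags, addr = socket.recv_fds(child_sock, 1024, maxfds=1)
        with os.fdopen(fds[0]) as f:
            print("child read:", f.read())
        os._exit(0)

    child_sock.close()
    # Parent holds the authority: it opens the file and delegates access
    # by sending the descriptor, not the path.
    fd = os.open("/etc/hostname", os.O_RDONLY)
    socket.send_fds(parent_sock, [b"fd"], [fd])
    os.close(fd)
    os.wait()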


> Large language models (LLMs) are an indisputable breakthrough of the last five years

Actually, a lot of people dispute this, and I'm sure the author knows that!


UT2k4 was a LAN party favorite of mine. One of the last good multiplayer FPS games before Epic lost its way.


UT2004 and its Xbox counterpart, Unreal Championship 2, were great experiences. UT3 tried to set itself apart by tuning its pace significantly higher and lost its audience in the process, and UT2016 was never going to be a significant title given its development history.


Despite not being an ML programmer, I found the spec pretty easy to read for the most part. One of the least intimidating specifications I have ever read, surprisingly.


I wish I could say the same. I don't know what wall I'm hitting that keeps it from clicking for me; otherwise I'd go off and write my own Wasm interpreter just for the fun of it lol


This is silly. The semantics are entirely different!


How so? Quite literally, symbols are used as immutable strings with a shorter syntax. So much so that I've been finding their literal constraints limiting lately.


Almost the entire value of symbols separate from strings is at the level of programmer communication rather than PL semantics.

It tells a reader of the code that this term is arbitrary but significant, probably represents an enmeshment with another part of the code, and will not be displayed to any user. When you see a new term in code, those are a lot of the things you're going to need to figure out about it anyway. It's a very valuable & practical signal.

If you need to mutate or concat or interpolate or capitalize or apply any other string operation to it, it probably shouldn't be a symbol anymore, or shouldn't have been one to start with.


> Almost the entire value of symbols separate from strings is at the level of programmer communication rather than PL semantics.

That's the opposite of reality. Symbols are necessitated by PL semantics, which is why languages that don't have those problematic string semantics tend not to bother with symbols.

> It tells a reader of the code that this term is arbitrary but significant

That you can do that with symbols is not why they exist (you can need to associate additional semantics with pretty much any first- or third-party type, after all; that's why the newtype pattern is so popular in modern statically typed languages).

And it's not like you need formal symbols to do it in the first place. For instance, like another nearby commenter, in Python I do that by using single- and double-quoted strings, having taken up that habit from Erlang (where symbols are single-quoted and strings are double-quoted).


> And it's not like you need formal symbols to do it in the first place.

I mean we don't need any of this shit. Go take a long bath and then write some assembly I don't care. Symbols are a useful tool in some languages, for the reasons I described. That you're creating ad hoc quoting conventions to recover part of the functionality of this feature in languages that don't have it is a pretty strong signal I'm correct! Opposite of reality my ass lol.


> This is silly.

Oh my bad, great counterpoint.

> The semantics are entirely different!

They're not. A symbol is an arbitrary identifier, which can be used to point to system elements (e.g. classes, methods, etc.). These are all things you can do just fine with immutable interned strings, which is exactly what languages that have immutable interned strings do.
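To illustrate in Python (a sketch; sys.intern is the stdlib interning primitive, and the identity behavior shown is CPython's):

    import sys

    a = "".join(["sym", "bol"])   # built at runtime, so not auto-interned
    b = "".join(["sym", "bol"])
    assert a == b and a is not b  # equal values, distinct objects

    ia, ib = sys.intern(a), sys.intern(b)
    assert ia is ib               # interning gives symbol-like identity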

You'd just have a broken VM if you used mutable strings for metaprogramming in Ruby, so it needs symbols. Both are things it inherited from all of Perl, Smalltalk, and Lisp.


Ruby always had immutable (frozen) strings, so no, this was never a reason for Symbols' existence.


They aren't interned frozen strings (unless they were symbols; String#intern was, and still is, an alias for String#to_sym, and String#freeze did not and does not imply String#intern or String#to_sym). It also took an extra step, even for literals, to either freeze or intern them prior to Ruby 2.3 introducing the "# frozen_string_literal: true" file-level option (and Ruby 3.4 making it unnecessary because it is on by default).

Amusingly, string literals interned by default in 3.4, or because of the setting in earlier (2.3+) Rubies, are still (despite being interned) Strings, while Strings interned with String#intern are Symbols.


> They aren't interned frozen strings

Doesn't matter. The parent claim was:

> You'd just have a broken VM if you used mutable strings for metaprogramming in Ruby

From day one it was possible to freeze strings used in metaprogramming. I mean, Ruby Hashes do that to string keys.

> Ruby 3.4 making it unnecessary because it is on by default.

That's incorrect: https://byroot.github.io/ruby/performance/2025/10/28/string-...


I personally do not want to write HTML, and I especially do not want to encode logic into it. This wave of HTMX-likes has some interesting ideas, but encoding it all into HTML just feels so wrong to me. JSX is likewise awful. We need real programming languages here.

