stevefan1999's comments | Hacker News

Does anyone have any recommendations on "slabtops", i.e. computers with a C64 form factor but with a small screen embedded in the keyboard, so I could use one as a scientific computer rather than a laptop?

There's this one[0] that seems pretty approachable; I've been considering making it.

[0]: https://news.ycombinator.com/item?id=46265015


Okay, at first I thought you were selling Tor access or vanity hidden service domains, since Tor stands for The Onion Router, but it turns out you are selling real onions.

Without CTEs, is SQL a programming language?
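
Arguably it becomes one once you add recursive CTEs, since WITH RECURSIVE gives SQL the iteration it otherwise lacks. A quick, purely illustrative sketch using Python's built-in sqlite3 (the query is made up for the example):

  import sqlite3

  con = sqlite3.connect(":memory:")
  # A recursive CTE computing the first 10 factorials: the CTE keeps
  # re-selecting from itself until the WHERE condition stops it.
  rows = con.execute("""
      WITH RECURSIVE fact(n, f) AS (
          SELECT 1, 1
          UNION ALL
          SELECT n + 1, f * (n + 1) FROM fact WHERE n < 10
      )
      SELECT n, f FROM fact
  """).fetchall()
  print(rows)  # [(1, 1), (2, 2), (3, 6), ..., (10, 3628800)]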

At this point I believe running Common Lisp/Scheme from SectorLisp wouldn't be that far off

So technically someone could run a C compiler written in Guile on SectorLisp?

I think the reason stage0 and similar projects work this way is that they found Lisp (and other approaches) too limiting; the specific reasons they settled on the current design are mentioned in their docs.


In fact the MES-compiler, the one that I want to replace, is written in Scheme. It also comes with a Scheme interpreter written in C. Before the MES-compiler can be executed, the source has to be unzipped and unpacked with two programs that are also compiled from C sources. So you already need a compiler that supports a sizable subset of C to compile all these programs in order to compile the MES-compiler, which is a rather compliant C-compiler and which is used to compile the Tiny C Compiler and a standard library (included in the MES sources). The unzip and untar programs are also needed to unpack the archive with the Tiny C Compiler sources.

Once everything is unpacked, are the resulting compiler materials sufficient to compile and run those packing and unpacking programs?

I am not sure if I follow you completely. Yes, I guess that is the case, but what would be the use? Before you can compile the Tiny C Compiler, you already need to unpack the .tar.gz file containing the sources. The sources for ungz and untar are themselves written in C.

So, you need a minimal C compiler that can also compile the ungz and untar programs from their C sources. At the moment, I have a minimal C compiler that can compile the Tiny C Compiler and the standard library that is part of the GNU Mes compiler. One of the next tasks will be to see whether it can also compile the sources for ungz and untar, which can be found in [0]. I think it will not be so hard, as these sources have already been adapted to be compiled with a minimal C compiler.

[0] https://github.com/oriansj/mescc-tools-extra


Super fascinating stuff! I love this kind of thing. Genuinely, reading comments like yours really cheers me up in a way.

I once had the thought of keeping a single SD card or hard drive with a bootstrapping source and a hash on it, combined with systems like Tor and torrenting in general, perhaps a self-hosted Wikipedia, and all the mirrors of open source software I use.

The idea is that such a drive is now possible if I want it. I am not usually paranoid, but in actuality, feats like bootstrappability remove paranoia, because I feel there must be other people who do this and who verify its security, and I can always go to them, or at least know they exist :)

Bootstrappability is really cool and, in my opinion, severely underrated by many, and I am glad that people like you are working on it. Have a nice day!

Regarding the comment itself, I didn't know this was the case for Mes; I had thought the only thing it depends on is Scheme, so thanks for sharing this.


I understand you to mean that SectorLisp is taken as the 'seed' that we have to trust not to be malicious (or not to secretly execute a malicious program if it finds one). In the live-bootstrap project it is 'hex0' that is the seed: a simple program that converts hexadecimal characters (in pairs) into bytes.
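
For anyone curious, the idea behind hex0 is small enough to sketch; here is a rough, purely illustrative Python version (the real hex0 is written directly as machine code so that nothing below it has to be trusted, and its comment handling differs in detail):

  import sys

  def hex0(src: bytes) -> bytes:
      out = bytearray()
      nibble = None
      i = 0
      while i < len(src):
          c = chr(src[i])
          if c in "#;":  # skip a comment up to the end of the line
              while i < len(src) and src[i] != ord("\n"):
                  i += 1
              continue
          if c in "0123456789abcdefABCDEF":
              v = int(c, 16)
              if nibble is None:      # first hex digit of the pair
                  nibble = v
              else:                   # second digit: emit one byte
                  out.append((nibble << 4) | v)
                  nibble = None
          i += 1                      # everything else (whitespace etc.) is ignored
      return bytes(out)

  if __name__ == "__main__":
      sys.stdout.buffer.write(hex0(sys.stdin.buffer.read()))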

don't threaten me with a good time

That's really good, because it means more exposure; more exposure means more improvement, and more improvement eventually digs out bad bugs and reduces the attack surface in the long run.


Leaks time and order, just like uuid v7.
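
For illustration: per RFC 9562 the first 48 bits of a UUIDv7 are a Unix timestamp in milliseconds, so anyone who sees the id learns roughly when the row was created (the UUID below is a made-up example):

  import uuid, datetime

  u = uuid.UUID("01912d68-783e-7a3f-8bba-6f8b8c6e4f2a")
  ms = int.from_bytes(u.bytes[:6], "big")  # the 48-bit timestamp field
  print(datetime.datetime.fromtimestamp(ms / 1000, tz=datetime.timezone.utc))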


Kinda the point? Ordered PKs give you a better index.


A server farm, eh?


The ability to pick fields is nice, but the article fails to mention GraphQL's schema stitching and federation capability, which is its actual killer feature and one yet to be matched by any other "RPC" protocol, except perhaps gRPC, which is insanely good for backends but maybe too demanding for the web, even with grpc-web *1.

It allows you to split your GraphQL schema into multiple "subgraphs", forward each to a different microservice, and keep separation of concerns at the backend level, while stitching everything back into one unified graph for the frontend, giving you the best of both worlds, at least in theory.

Yet unfortunately, both stitching and federation are rarely used in practice, partly because people lack the fundamentals needed to comprehend and manage that complexity, and partly because web development moves so fast that one product is pushed out after another year after year, the old code basically thrown away and left unmaintained until it becomes "siloified"/solidified *2. So it is only natural that a simple solution like REST with OpenAPI/Swagger beats the complicated GraphQL, because the tech market right now just wants to build the product quick and dirty, get the money, then let it go, rinse and repeat. The last 30 years of VC have basically been that.

So let me tell you, this is the real reason GraphQL lost: GraphQL is the good money that was driven out, because the market just needs money, regardless of whether it is good, bad or ugly.

As an aside, I enjoy GraphQL so much in C#: https://chillicream.com/docs/hotchocolate/v15/defining-a-sch...; they even have an EF Core integration, which is mind-boggling: https://chillicream.com/docs/hotchocolate/v15/integrations/e...

It is so natural, and I've tried to make it run with the new single-file C#, plus dependency injection and NativeAOT... I think I posted the single-file code in their discussions tab, but I couldn't find it again.

Another honorable mention would be this: https://opensource.expediagroup.com/graphql-kotlin/docs/sche.... I used it before together with Koin and Exposed, but I eventually went back to Spring Boot and Hibernate because I needed the integrations, much as I loved having the innovation.

*1: For example, why force everyone onto HTTP/2, and thus effectively onto TLS by convention? This makes gRPC development hard enough that you need a self-signed key and certificate just to start the server, and that is already a big barrier for most developers. And Protobuf, being a compact and concise binary protocol, is basically unreadable without the schema/reflection/introspection, whereas GraphQL still returns JSON by default and can return MessagePack/CBOR depending on what the HTTP request headers ask for. Yes, grpc-web does return JSON and can be configured to run over h2c, but it feels more like an afterthought and is not designed for frontend developers.

*2: Maybe the better word would be "enshittified", but enshittification is a dynamic race to the bottom, while what I mean is more like rotting to death like a zombie; is that going overboard?


The problem is, container-based (or immutable) development environments, like DevContainers and Nix Flakes, still aren't the popular choice for most development.

I self-hosted DevPod and Coder, but it is quite tedious to do so. I'm experimenting with Eclipse Che now and I'm quite satisfied with it, except that it is hard to set up (you need a K8S cluster attached to an OIDC endpoint for authentication and authorization, plus a git forge for credentials), and the fact that I cannot run the real web version of VSCode (it looks like VSCode, but IIRC it is a Monaco-based editor that matches VSCode almost one-to-one without actually being it) or most of its extensions (so I'm limited to Open VSX) is a dealbreaker. In exchange, though, I get a purely K8S-based development lifecycle: my entire dev environment lives on K8S (including temporary port forwarding -- I have wildcard DNS set up for that), so all my work lives on K8S.

Maybe I could combine a few more open source projects together to make a product.


Uhm, pardon my ignorance... but wouldn't restricting an AI agent in a development environment be just a matter of a well-placed systemd-nspawn call?...


That's not the only thing you need to manage. A system-level sandbox is all about limiting the physical scope the LLM agent can reach (physical in the sense of interacting with the system through the shell and syscalls), but what about the logical scope it can reach before anything ever hits the physical layer? E.g. git branch/commit, npm run build, kubectl apply, or psql running scripts that truncate your SQL tables or drop the database. Those are not easily controllable, since they are concrete and depend on contextual details.


These you surely have handled already, as a human is able to fat-finger a database drop as well.


Sure, but at least we can slow down that fat finger by adding safeguards and clean boundary checks; with an LLM agent things are automated at a much higher pace, more "fat fingers" can happen simultaneously, and the cascading effects can become beyond repair. This is why we need not just physical limitations but logical limitations as well.


That's exactly why I let the LLM run read-only commands automatically, but anything that could potentially trigger mutation (either removal or insertion) requires manual intervention.

Another way to prevent this is to take a filesystem snapshot on each approved mutation command (that's where COW-based filesystems like ZFS and Btrfs would shine), except you also have to block the LLM from deleting your filesystem and snapshots, or dd'ing stuff over your block devices to corrupt them, and I bet it will eventually evolve into something that egregious.
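
A minimal sketch of that flow, purely illustrative: the allowlist and the "tank/work" dataset name are made up, and a real setup would still need the block-device and snapshot protections mentioned above:

  import shlex, subprocess, time

  # Commands/subcommands the agent may run unattended; everything else
  # requires human approval plus a pre-mutation snapshot.
  READ_ONLY = {("ls",), ("cat",), ("rg",), ("git", "status"),
               ("git", "log"), ("git", "diff"),
               ("kubectl", "get"), ("kubectl", "describe")}

  def run_agent_command(cmd: str, dataset: str = "tank/work") -> None:
      argv = shlex.split(cmd)
      if tuple(argv[:1]) in READ_ONLY or tuple(argv[:2]) in READ_ONLY:
          subprocess.run(argv, check=True)   # read-only: run automatically
          return
      if input(f"Allow mutating command {cmd!r}? [y/N] ").lower() != "y":
          return                             # human declined
      # Snapshot the working dataset first so the change can be rolled back
      # (ZFS shown here; `btrfs subvolume snapshot` works the same way).
      subprocess.run(["zfs", "snapshot", f"{dataset}@pre-{int(time.time())}"],
                     check=True)
      subprocess.run(argv, check=True)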

