I've been working for a while on https://github.com/connet-dev/connet. It takes a different twist on the same problem - instead of an overlay network at L4 (wireguard, etc.) or a publicly accessible endpoint at L7 (like ngrok), it "projects" a remote endpoint locally (e.g. as if you were running the service on your own computer). Of course, "locally" can always be a VPS with caddy in front to give you an ngrok-like experience.
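If "projecting" sounds abstract, the effect (leaving aside connet's actual transport and auth) is roughly a local listener that forwards traffic to the remote peer's service, so local clients just talk to 127.0.0.1 as if the service were here. A rough Go sketch of that effect - not connet's code, and the remote address is made up:

```go
// Illustrative only: the *effect* of "projecting" a remote service locally.
// connet does this over its own peer-to-peer transport; this sketch simply
// forwards a local TCP listener to a remote address to show the idea.
package main

import (
	"io"
	"log"
	"net"
)

func main() {
	// Local clients connect here, as if the service ran on this machine.
	l, err := net.Listen("tcp", "127.0.0.1:8080")
	if err != nil {
		log.Fatal(err)
	}
	for {
		local, err := l.Accept()
		if err != nil {
			log.Fatal(err)
		}
		go func(local net.Conn) {
			defer local.Close()
			// Hypothetical remote endpoint standing in for the peer's service.
			remote, err := net.Dial("tcp", "remote.example.com:8080")
			if err != nil {
				log.Print(err)
				return
			}
			defer remote.Close()
			go io.Copy(remote, local) // client -> service
			io.Copy(local, remote)    // service -> client
		}(local)
	}
}
```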
The reason connet exists is that nothing (at the time I started, including netbird, tailscale/headscale, frp, rathole, etc.) gave the same easy-to-understand, FOSS, self-hosted, direct peer-to-peer way of remote access to your resources. I believe it accomplishes this, and it is self-hosted. And while a cloud deployment at https://connet.dev exists, it is nothing more than the FOSS project repackaged with user/token management.
This is meant just for computers, right? A quick check of the readme showed that devices must run this or that command, which seems difficult to do on a smartphone. I guess the ngrok-like setup would be the way to go for that case, given the increasing prevalence of phones and tablets as the single form of computing for lots of people.
I've been thinking a lot about this case specifically. And you are right, phones are largely not supported right now - I've been researching how to make that happen. One setup that currently works for me is running connet via Termux, and I've made the necessary changes to support that.
Native iOS/Android clients, if possible, will probably be the next things I'll work on. At minimum they should enable you to run a "source" (e.g. a consumer of an exposed service), but ideally they will be the whole deal.
A neat idea, but projecting all of these services onto localhost is a bit of a security nightmare. Have you considered looking at what something like Twingate does? Using the CGNAT IP space for the projection allows you to give every individual service its own IP address, which helps quite a bit with isolating the services from e.g. malicious web pages.
I'll take a look at what Twingate does for sure, thanks for pointing that out.
A few things worth mentioning about connet's current state - you can technically bind to any local IP, not just loopback (or listen on them all). You also have the option of directly running a TLS/HTTPS destination (for mutual TLS directly to the service) or source (e.g. for mutual authentication between your local listener and the outside world). Another option is to build your own client and define how you want to source traffic - maybe it's part of your app and there are no sockets or anything - you just connect and start talking.
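For the mutual TLS case, it's the standard TLS story: both sides present certificates and verify each other. A generic Go sketch of a listener that requires client certificates - standard library only, not connet's API, and the file names are made up:

```go
// Generic mutual-TLS listener sketch (standard library, not connet's API).
// Assumes server.crt/server.key and clients-ca.crt exist; names are hypothetical.
package main

import (
	"crypto/tls"
	"crypto/x509"
	"log"
	"os"
)

func main() {
	cert, err := tls.LoadX509KeyPair("server.crt", "server.key")
	if err != nil {
		log.Fatal(err)
	}
	caPEM, err := os.ReadFile("clients-ca.crt")
	if err != nil {
		log.Fatal(err)
	}
	clientCAs := x509.NewCertPool()
	clientCAs.AppendCertsFromPEM(caPEM)

	cfg := &tls.Config{
		Certificates: []tls.Certificate{cert},
		ClientCAs:    clientCAs,
		// Require and verify a client certificate: mutual authentication.
		ClientAuth: tls.RequireAndVerifyClientCert,
	}
	l, err := tls.Listen("tcp", "127.0.0.1:8443", cfg)
	if err != nil {
		log.Fatal(err)
	}
	defer l.Close()
	for {
		conn, err := l.Accept()
		if err != nil {
			log.Fatal(err)
		}
		// Only clients with a certificate signed by clients-ca.crt get here.
		conn.Close()
	}
}
```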
connet [1] works in a p2p fashion and is pretty quick if it can establish a direct connection. Most other solutions route through a separate node, so if your latency to that node is low, they should be comparable to hitting it directly. It also has a docker release on ghcr. There is also a saas version [2], if you just want to try it without running the control plane.
Just released v0.12.0, which has a lot of package cleanup and some important bugfixes. Next up is making the relay infrastructure much lighter, requiring less synchronization.
Personally, I'm using the hosted version [0] (which is just a repackage of the open source version with dynamic tokens) to expose my NAS and Syncthing web UIs and manage them while I'm away. Sometimes even from my phone (with Termux).
Just finished a major revamp (v0.10) of the API (you can use connet as part of an application, not just through the CLI), which also fixed a few issues I'd been seeing.
Now I'm gearing up to update the relay protocols - currently relays are closed off by the control server (e.g. you ask it to provision you a relay resource), which requires the relay to communicate with the control server itself. In the new version, the relays will operate on their own (there might be a shared secret with the control server, in case you want a closed-off relay) and peers will reserve directly with the desired relays. Maybe in the future, relays might form clusters on their own to take advantage of better relay-to-relay networking, and peers will reserve only at the relay closest to them.
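To make the shared-secret part concrete, here is one hypothetical way it could look (just a sketch of the idea, not the actual protocol): a reservation request carries an HMAC keyed with the secret the relay shares with the control server, and the relay only accepts reservations that verify:

```go
// Hypothetical sketch of a shared-secret check for relay reservations.
// Not connet's actual protocol; it just illustrates the "closed-off relay" idea.
package main

import (
	"crypto/hmac"
	"crypto/sha256"
	"fmt"
)

// reservationTag signs a peer's reservation request with the secret the relay
// shares with the control server.
func reservationTag(secret, request []byte) []byte {
	m := hmac.New(sha256.New, secret)
	m.Write(request)
	return m.Sum(nil)
}

// relayAccepts verifies the tag; without the shared secret a peer cannot
// reserve on a closed-off relay.
func relayAccepts(secret, request, tag []byte) bool {
	return hmac.Equal(reservationTag(secret, request), tag)
}

func main() {
	secret := []byte("example-shared-secret") // hypothetical secret
	req := []byte("reserve:destination=nas")  // hypothetical request encoding
	tag := reservationTag(secret, req)
	fmt.Println(relayAccepts(secret, req, tag)) // true
}
```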
Another stream of work is giving peers identities. Right now the server gives them an internal identity to better support reconnects, but these are not stable (e.g. they don't survive client restarts). In the future, a peer will advertise its identity, and other peers may choose which peers to allow comms with and which to ignore, pushing more decisions into the peers themselves.
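As a hypothetical sketch of what "choosing which peers to allow" could look like (not the planned implementation): if a peer's identity were, say, a fingerprint of its certificate's public key, the other side could check it against a local allowlist during the TLS handshake:

```go
// Hypothetical sketch: a peer allows or ignores other peers based on a stable
// identity derived from their certificate's public key. Not connet's planned
// implementation, just an illustration of pushing the decision into the peer.
package main

import (
	"crypto/sha256"
	"crypto/tls"
	"crypto/x509"
	"encoding/hex"
	"errors"
)

// peerID derives a stable identity: SHA-256 of the certificate's public key.
func peerID(cert *x509.Certificate) string {
	sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
	return hex.EncodeToString(sum[:])
}

// verifyAllowed plugs into tls.Config.VerifyPeerCertificate and rejects any
// peer whose identity is not on the local allowlist.
func verifyAllowed(allowlist map[string]bool) func([][]byte, [][]*x509.Certificate) error {
	return func(rawCerts [][]byte, _ [][]*x509.Certificate) error {
		if len(rawCerts) == 0 {
			return errors.New("no peer certificate")
		}
		cert, err := x509.ParseCertificate(rawCerts[0])
		if err != nil {
			return err
		}
		if !allowlist[peerID(cert)] {
			return errors.New("peer not allowed")
		}
		return nil
	}
}

func main() {
	// Hypothetical allowlist entry: the hex fingerprint of a trusted peer.
	allowlist := map[string]bool{"<hex fingerprint of a trusted peer>": true}
	_ = &tls.Config{VerifyPeerCertificate: verifyAllowed(allowlist)}
}
```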
Yet another change I'm thinking about is exposing raw endpoints to enable users of the system to implement other protocols. I'm not quite sure if this is really needed (the destination/source, e.g. server/client, model covers a lot of ground by itself), but it would be great if these were not the only options.
Many options for how to continue, but if I run out of ideas, there is always a Rust rewrite to throw in /s
Actually, I realized that I've used `/s` incorrectly. I've been thinking about rewriting the clients in Rust, mostly to allow simpler embedding in other languages - Java and Swift for example (I think it would be great if connet were available on mobile - on Android you can use Termux to compile/run it, but it is a pain). This would make it harder to embed in golang though.
Another option is to try to rewrite the clients in each language, but most languages fare poorly on QUIC support - in Java, for example, I'm not aware of a library that is advertised as production-ready (looking at kwik with its fork of TLS).
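For comparison, this is roughly the surface a client port needs from a QUIC library, shown with Go's quic-go (just a sketch against a recent quic-go version; the address and ALPN name are made up, and a real client would also set up proper certificate verification):

```go
// Sketch of the minimal QUIC surface a client port needs: dial, open a stream,
// write/read. Uses quic-go; the address and ALPN protocol name are made up.
package main

import (
	"context"
	"crypto/tls"
	"log"

	"github.com/quic-go/quic-go"
)

func main() {
	ctx := context.Background()
	tlsConf := &tls.Config{
		NextProtos: []string{"example-proto"}, // hypothetical ALPN
	}
	conn, err := quic.DialAddr(ctx, "relay.example.com:443", tlsConf, nil)
	if err != nil {
		log.Fatal(err)
	}
	stream, err := conn.OpenStreamSync(ctx)
	if err != nil {
		log.Fatal(err)
	}
	defer stream.Close()
	if _, err := stream.Write([]byte("hello")); err != nil {
		log.Fatal(err)
	}
}
```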
I am excited about the new version of NixOS. A few days ago I realized that November was almost gone and went looking for when I could expect the new release. And right on schedule it popped out (I was checking throughout the day). Big props to the release managers (and of course the maintainers).
I use nix via flakes on my own machines (via NixOS), in my projects (with direnv), on my infrastructure/servers (NixOS deployed with colmena) and at work (nix-darwin and projects). So far the upgrade has gone painlessly; the only change I needed to make was how git is configured in home manager. I continue to be amazed at how well NixOS works.
Edit: The only place where I still struggle to adopt nix is on my phone. Last time I tried nix-on-droid it didn't even run, but I plan to try it again. I'm still new to Android (and GrapheneOS).
Tunneling p2p with relay fallback is essentially what connet [1] aspires to be. There are a lot of privacy/security benefits to exposing endpoints only to participating peers. You can either run it yourself or use the hosted version [2].
Been using profiles for some years now and they are great. I usually start with the default profile, then navigate to "about:profiles" to open all the ones I need. Thanks to profiles, when my Manjaro install broke, I migrated to NixOS and all my browsing sessions were ready to use, just the way I left them. Getting a dedicated, more integrated UI for managing profiles will be great.
The one thing I'm missing is "incognito" profiles - e.g. easily spawning a temporary profile (without any identity attached) when I'm researching/navigating unusual sites and killing it once I'm done. Having multiple of these would be a great improvement over normal incognito windows (which share identities).
When I adopted mmap in klevdb [1], I saw dramatic performance improvements. Once klevdb completes a write segment, it will reopen it, on demand, for reading with mmap (segments are basically parts of a write-only log). With this, any random reads are super fast (though of course not as fast as sequential ones).
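For flavor, the read side looks roughly like this with golang.org/x/exp/mmap - not klevdb's actual code, and the segment file name and offsets are made up:

```go
// Sketch of reading a completed segment via mmap, in the spirit of what klevdb
// does (not its actual code; the file name and record layout are hypothetical).
// Once the file is mapped, random reads are memory accesses served by the page
// cache, which is what makes them so fast.
package main

import (
	"fmt"
	"log"

	"golang.org/x/exp/mmap"
)

func main() {
	// A completed (write-finished) segment file, reopened read-only via mmap.
	r, err := mmap.Open("segment-00000001.log") // hypothetical file name
	if err != nil {
		log.Fatal(err)
	}
	defer r.Close()

	// Random read: grab 64 bytes at an arbitrary offset, no seek syscall needed.
	buf := make([]byte, 64)
	if _, err := r.ReadAt(buf, 4096); err != nil {
		log.Fatal(err)
	}
	fmt.Printf("segment is %d bytes, read %d bytes at offset 4096\n", r.Len(), len(buf))
}
```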
I also started self-hosting more and more. But instead of making services available on the internet/intranet (e.g. VPS reverse proxy/tailscale), I'm binding them to localhost and using connet [1] (cloud or self-hosted [2]) to cast them locally onto my PC/phone (when I need them). These include my NAS web UI and the Syncthing instance running on my NAS, and I'm looking to add more.