ikatson's comments | Hacker News

How about doing the changes, then baking them into the DB Docker image, i.e. "docker commit"?

Then spin up the DB using that image instead of an empty one for every test run.

This implies that starting the DB through Docker is faster than what you're doing now, of course.


Yeah there's absolutely no way restarting the container will be faster.


The author has a FAQ related to the video, and in it he expands on "Why don’t you mention HLG in the demo": https://www.yedlin.net/HDR_Demo_FAQ.html


> My thought was why use UTM? Most of this can be achieved with qemu alone

Afaik UTM uses QEMU under the hood, but provides a nice UI on top for the basic use cases. It also has a library of prepared images, so a new VM is only a few clicks away once you decide you want one.

It can also modify the VM after creation, resize its storage, etc.

Of course, all of this can be done with QEMU alone, but UTM makes it easier to deal with than remembering tons of QEMU command-line arguments.


Guess you misunderstood me. I know that UTM is built on top of QEMU; I use it as well. I mean, when you're already using this init-image tooling etc., why click through the UI to set up a VM? One would think to offload this to a script as well, because in the post's steps UTM is just a means to start the resulting image.


Maybe I did misunderstand; I agree the post makes it overly complicated! You definitely don't need cloud-init just to run Linux on macOS.


> Afaik UTM uses QEMU under the hood

During installation, UTM asks whether you want to use Apple's Virtualization framework rather than QEMU.


Thanks to this website, I gave up chasing the elusive target of squeezing 1 Gbps over Wi-Fi on any of my devices - it explains really well what is going on, and why I was banging my head against the wall trying to make it work.

A useful trick to understand your current Wi-Fi speed on a Mac is to Option-click (alt-click) the Wi-Fi icon - it immediately shows tons of useful connection properties that you can make sense of after reading this page.

One more thing that doesn't seem to be mentioned and that may decrease Wi-Fi speed is installing open-source firmware like OpenWrt on your router. While it gives a ton of cool benefits, it's entirely possible that the drivers etc. that come with it are worse than the ones in the stock firmware.


It’s not that the drivers are worse. It’s that the router's hardware acceleration might not be available, since it is all closed-source/proprietary. If you care about having a better router experience, something like MikroTik or an OPNsense box running on a general-purpose computer is great. Then you can combine it with dedicated access points like Eero or Omada.


This is one of the reasons I've separated the router from the AP on my network. I'm running OpenWrt on a normal computer as the router, because I'm familiar with setting it up, and using some TP-Link Omada EAP devices for the Wi-Fi part of things. I don't like that they're completely closed as far as firmware/software goes, but I'll live with that compromise because it gives me the ability to manage the Wi-Fi separately. The APs also support VLANs and some other features that let me handle things more nicely than I could with consumer-grade routers (with or without stock firmware).


Can you suggest how to get the best router experience if I intend to use an alternative firmware? (I am thinking of OpenWrt right now)

I have some experience running m0n0wall (it wasn't deprecated back then) as a router and a Ubiquiti Long Range AP (bought old, used) as a dedicated Wi-Fi AP at my old home.


My current personal setup is a 10-year-old desktop PC with a dual Intel NIC running OPNsense as my router. This is more than enough power, and aside from a relatively long reboot time (about 80 seconds or so) it works wonderfully. Connected to this router are my WAN (Frontier fiber) on one NIC port and a TP-Link unmanaged gigabit switch with PoE ports on the other. The Wi-Fi is provided by TP-Link Omada access points managed by a local hardware controller (TP-Link sells these, and they support many more APs at a time than I would ever need). This setup has been amazing both in terms of managing it and in terms of user experience: fast Wi-Fi, built-in roaming over my 6 access points, and a router that is powerful, fast, and very flexible to manage, with a great community behind it. I would have loved an open-source AP solution, but Omada at least does work well, just like most TP-Link stuff.


My sibling comment covers what I've done at a high level. What seems to work really well for me is to use a separate router and AP, so that the Wi-Fi side of things is completely divorced from managing the router. That'll let you use something like OPNsense, OpenWrt, or pfSense to manage the traffic. If you don't already have a device to do that, I'd highly recommend looking at ServeTheHome's reviews of some of the N100/N300-based devices that have been coming out lately; there are quite a few really powerful ones that would work wonderfully for this (I'm using a Ryzen-based normal PC that I've optimized for power, since those didn't exist when I did this).

Some recent reviews that give an idea of what's out there (look around further too; there are a few 4x2.5GbE + 2x10GbE ones as well):

https://www.servethehome.com/asrock-industrial-4x4-box-8840u...

I'm actually running mine with the router in a VM in Proxmox with a PCIe passthrough NIC, because I'm also running a few other network-critical services that I wanted more isolation for (Omada controller, mail server, LDAP, etc.) but don't want the power budget for yet another server.

EDIT: bah, wrong second link for STH, https://www.servethehome.com/everything-homelab-node-goes-1u...


There are 2 very different approaches, depending on the reasoning behind your interest in running alternative firmware.

Reasoning 1: FLOSS/libre principles - find whatever wireless router has the best radios but still complies with your particular set of openness principles. More than anything, the radio performance will still be your limiting factor, so the rest of the box ends up not mattering, and you can run your software straight on it without much worry. If your ideal FLOSS hardware doesn't support running your ideal FLOSS "smarts" directly, you can mix this with the 2nd approach; otherwise just stick with the one box.

Reasoning 2: You just want better software - find the device with the best radios and see if it has a native "AP" mode (it probably will). The radios will likely outperform the rest of the device if you want to do any "smarts" with the traffic, so completely ignore whether or not that specific device can run open software, and get an x86 tiny/mini/micro PC to run some set of software like OPNsense or whatever you prefer. The AP is then a dumb passthrough and the PC is flexible, so the two sides no longer limit each other (both in performance and in lifecycle).


For rqbit at least, there were only a few things necessary to start off with:

1. The bittorrent protocol specification: https://www.bittorrent.org/beps/bep_0003.html

2. Wireshark dumps of some existing BitTorrent clients to write unit-tests for RPC serialization/deserialization. I used qBittorrent, but you can use any other existing client.

3. (Kind of optional) The DHT protocol: https://www.bittorrent.org/beps/bep_0005.html. This actually came later; you can download torrents using just #1. But if you try to do so, you'll discover that most peer information is stored in the DHT, not in trackers.

Everything else was heuristics, observing real network behaviour, and tweaking the code accordingly.

That said, I'm not exactly the go-to expert on "how to develop BT clients and servers", as rqbit isn't as fully featured as the more mature clients. But given that the above links got me that far, I'm sure they can give a very decent start.
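
To give a flavour of #1: a lot of BEP 3 (the .torrent metainfo, tracker responses) is bencode, and a first decoder for it fits on a screen. A rough sketch (not rqbit's actual parser - that one also handles lists and dicts, and avoids copying), covering just integers and byte strings:

    #[derive(Debug, PartialEq)]
    enum Bencode {
        Int(i64),
        Bytes(Vec<u8>),
    }

    // Decode one bencode value from the front of `input`, returning the value
    // and the remaining bytes.
    fn decode(input: &[u8]) -> Option<(Bencode, &[u8])> {
        match *input.first()? {
            b'i' => {
                // Integer: "i<digits>e"
                let end = input.iter().position(|&b| b == b'e')?;
                let n = std::str::from_utf8(&input[1..end]).ok()?.parse().ok()?;
                Some((Bencode::Int(n), &input[end + 1..]))
            }
            b'0'..=b'9' => {
                // Byte string: "<length>:<bytes>"
                let colon = input.iter().position(|&b| b == b':')?;
                let len: usize = std::str::from_utf8(&input[..colon]).ok()?.parse().ok()?;
                let rest = &input[colon + 1..];
                Some((Bencode::Bytes(rest.get(..len)?.to_vec()), rest.get(len..)?))
            }
            _ => None,
        }
    }

    fn main() {
        assert_eq!(decode(b"i42e"), Some((Bencode::Int(42), &b""[..])));
        assert_eq!(decode(b"4:spam"), Some((Bencode::Bytes(b"spam".to_vec()), &b""[..])));
    }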


Thank you.


2 years ago, when I decided to create rqbit, I had been using qBittorrent as my main client.

I think there was a bug in it, or something else causing it on my end, that made it download torrents really slowly - it couldn't saturate my gigabit network, whereas previously it got close. It didn't bother me enough to investigate deeper, but instead it sparked curiosity about what it would take to make a qBittorrent myself.

Since then, either qBittorrent fixed the bug, or whatever else was causing it went away, so it's no longer an issue. So if you like qBittorrent, I'm with you - it's great!

My *recent* motivation to put some more work into rqbit was caused by something else though:

1. I had some free time on my hands, and was craving writing some Rust.

2. I gave the client to my dad, who put it on his Raspberry Pi, said it's the fastest torrent client he's ever seen, and asked if certain features were available. I implemented them all, because it wasn't hard, and I made my dad happier at the same time :)


At the moment, it doesn't listen for incoming BitTorrent connections on either UDP or TCP, so external peers can't connect to it.

It only listens on UDP for DHT requests.

So if it connects to peers by itself (only TCP is supported), then it can upload to them. In reality, this makes uploading rare.

For "being a good citizen" of the network, a couple days ago I implemented storing peer information, so that it can be returned back when DHT nodes query for it. It has limitations though, e.g. only storing 1000 peers at the moment, and not cleaning up old peers once the peer store is full.

Listening on UDP for torrent requests seems like a big change; maybe someday.


When I first built it a couple of years ago, it all went pretty smoothly until I hit concurrent communication with many peers, both for the DHT and for torrent downloading.

E.g. parsing bencode and the other binary protocols involved (the BitTorrent peer protocol, tracker requests, the DHT protocol) was all easy compared to managing state.

Managing state (e.g. which parts we have downloaded so far, what we are downloading at the moment, which peers we are expecting to get pieces from, etc.) has a lot of edge cases that need to be handled.

For a recent example, I was dealing with a bug that manifested like this: when resuming a torrent from the "paused" state, it was stuck at ~99.87% downloaded and never progressed. It turned out that when I put the torrent into the "paused" state, after disconnecting the peers, I wasn't marking the in-flight pieces for re-download. So when new peers connected, they had nothing to do.
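
In code, the shape of the fix is roughly this (made-up names, not the real rqbit types): on pause, whatever was in flight has to go back into the "needed" set, otherwise nothing will re-request those pieces after resuming.

    use std::collections::HashSet;

    struct PieceTracker {
        needed: HashSet<u32>,    // pieces still to be downloaded
        in_flight: HashSet<u32>, // pieces currently requested from peers
    }

    impl PieceTracker {
        fn pause(&mut self) {
            // The buggy version effectively just cleared `in_flight` here,
            // losing those pieces; putting them back into `needed` fixes it.
            for piece in self.in_flight.drain() {
                self.needed.insert(piece);
            }
        }
    }

    fn main() {
        let mut t = PieceTracker {
            needed: HashSet::new(),
            in_flight: [7u32, 8].into_iter().collect(),
        };
        t.pause();
        assert!(t.in_flight.is_empty() && t.needed.contains(&7) && t.needed.contains(&8));
    }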

DHT was also tricky, for multiple reasons:

- as it works over UDP, there's no "connection" - you need to match incoming messages to previously sent requests yourself (there's a small sketch of this at the end of this comment). But you might never get a response, so you must manage some kind of timeouts to clean up old outgoing requests that are no longer expected to complete.

- the DHT involves "recursive" requests. You query N peers at first, and each of them might return M other peers. So on the next round you need to make N * M requests, and this recursion can continue forever, growing exponentially. You need some heuristics in place so as not to overload the network, or at least your own computer. For example, my macOS UI was freezing when rqbit was trying to send too many DHT requests at once.

- Handling the above (and other) cases makes the code messier. Managing code complexity, at least so that I myself can understand what on earth is going on, is much harder than e.g. implementing the binary protocols.

Otherwise, it's all about tricky details and behaviours that can only be observed under certain circumstances, and only if you're curious enough to look, e.g.:

- Garbage collection. Ensuring that when a client (e.g. a peer, a browser, etc.) disconnects, everything is cleaned up. E.g. when the torrent is paused, this causes a massive "stop" for all spawned tasks. If you don't account for that, they might keep running forever.

- Network issues. Reconnecting to peers, to the DHT, etc., and retrying everything that can be retried, can be a head-scratcher sometimes.

All that said, it's quite fun to deal with these when that is the goal in itself - to enjoy the process of coding.
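
To illustrate the request/response matching from the first bullet, a rough sketch (again, not the actual rqbit code): tag every outgoing query with a transaction id, keep a table of pending ids with timestamps, and periodically expire the ones that were never answered.

    use std::collections::HashMap;
    use std::net::SocketAddr;
    use std::time::{Duration, Instant};

    struct Pending {
        target: SocketAddr,
        sent_at: Instant,
    }

    struct Outstanding {
        next_id: u16,
        pending: HashMap<u16, Pending>,
        timeout: Duration,
    }

    impl Outstanding {
        fn new(timeout: Duration) -> Self {
            Self { next_id: 0, pending: HashMap::new(), timeout }
        }

        // Register an outgoing query; the returned id is what gets embedded
        // in the message so the reply can be correlated later.
        fn register(&mut self, target: SocketAddr) -> u16 {
            let id = self.next_id;
            self.next_id = self.next_id.wrapping_add(1);
            self.pending.insert(id, Pending { target, sent_at: Instant::now() });
            id
        }

        // A datagram arrived: accept it only if we're still waiting on that id.
        fn on_response(&mut self, id: u16) -> Option<Pending> {
            self.pending.remove(&id)
        }

        // UDP gives no error for a lost packet, so stale entries have to be
        // dropped explicitly or the table grows forever.
        fn expire(&mut self) {
            let timeout = self.timeout;
            self.pending.retain(|_, p| p.sent_at.elapsed() < timeout);
        }
    }

    fn main() {
        let mut out = Outstanding::new(Duration::from_secs(5));
        let id = out.register("1.2.3.4:6881".parse().unwrap());
        assert_eq!(out.on_response(id).map(|p| p.target.port()), Some(6881));
        out.expire();
    }

A real implementation would also cap the number of queries in flight at once, which is what keeps the recursive lookups from the second bullet from exploding.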


I found the trickiest parts to be DHT concurrency (finally solved this adequately after 5-6 years of experimenting) and efficient block requesting (I've rewritten this every few years, but my latest implementation seems to have been solid since 2020 or so). The major thing I've not solved, but have also been too lazy to tackle, is a peer cache. I just re-announce when all peers are exhausted and start over.


Concurrency and the shared-state nature of torrents are definitely what make it all hard and tricky. I've recently rewritten this part of the DHT completely, but judging by your experience, it looks like this time won't be the last.

For block requesting, rqbit has a pretty simple algorithm https://github.com/ikatson/rqbit/blob/main/crates/librqbit/s..., and it didn't show up in benchmarks, thanks to Rust being fast by default, I guess. I admit though, I've never looked at how other clients do it; maybe the rqbit algorithm is too naive.
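
For reference, the naive version of the idea looks something like this (an illustrative sketch, not the code behind the link above): scan the piece bitfields and pick the first piece that we still need, that the peer has, and that nobody is already downloading.

    // `have`: pieces we already downloaded; `in_flight`: pieces currently
    // requested from some peer; `peer_has`: this peer's bitfield.
    fn pick_next_piece(have: &[bool], in_flight: &[bool], peer_has: &[bool]) -> Option<usize> {
        (0..have.len())
            .find(|&i| !have[i] && !in_flight[i] && peer_has.get(i).copied().unwrap_or(false))
    }

    fn main() {
        let have = [true, false, false, true];
        let in_flight = [false, true, false, false];
        let peer_has = [true, true, true, false];
        assert_eq!(pick_next_piece(&have, &in_flight, &peer_has), Some(2));
    }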


After reading the title of the article, I tried to solve it for fun before looking up the solution.

The logic followed what I learned at school about how to do it the long way, but of course with binary numbers it's so much easier.

I ended up with a shorter implementation that follows the same idea, but uses recursion [1]. The recursion is compiled away by Rust [2].

[1] https://play.rust-lang.org/?version=stable&mode=debug&editio... [2] https://godbolt.org/z/bq3fnrE4h
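
Roughly, the idea is binary long division via shift-and-subtract, written recursively - something like this sketch (illustrative only, not necessarily identical to the code behind [1]):

    // Divide n by d using only shifts, comparisons and subtraction.
    fn div_rem(n: u64, d: u64) -> (u64, u64) {
        assert!(d != 0);
        if n < d {
            return (0, n);
        }
        // Divide n/2 first, then account for the lowest bit: the remainder
        // doubles (plus that bit) and either stays under d or yields one
        // more quotient bit.
        let (q, r) = div_rem(n >> 1, d);
        let r2 = (r << 1) | (n & 1);
        if r2 >= d {
            (2 * q + 1, r2 - d)
        } else {
            (2 * q, r2)
        }
    }

    fn main() {
        assert_eq!(div_rem(100, 3), (33, 1));
        assert_eq!(div_rem(u64::MAX, 7), (u64::MAX / 7, u64::MAX % 7));
    }

The recursion depth is bounded by the bit width (64 here), so even if the compiler doesn't turn it into a loop, the stack stays small.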


For simple use cases like this, you can also create an SSH SOCKS proxy with a single SSH command, and then configure your browser to use a local port as a SOCKS proxy.

It does not require any software installed on the server, and the whole setup should be quicker than configuring a VPN server and client.

Also, an HTTP proxy takes a couple more steps to set up, but will allow you to use command-line tools on the client, not just the browser. The majority of command-line tools support the http_proxy and https_proxy environment variables.

An easy and pretty secure way to set up an HTTP proxy is:

1. Install tinyproxy.

2. Configure it to listen only on localhost, and start it.

3. SSH port-forward localhost:8888 from your server, for example to the same port on your client.

4. Configure your clients to use localhost:8888 as a proxy.


Of course there are alternatives like this, and thank you for sharing, but in my eyes this actually requires significantly more work and mental effort. Spinning up a droplet on DO and opening the config file in WireGuard is literally executing one command, and doesn't require touching my browser configuration. It takes a couple more clicks to just delete the droplet. Done.


