Bisect is one of those things where if you're on a certain kind of project, it's really useful, and if you're not on that kind of project you never need it.
If the contributor count is high enough (or you're otherwise in a role for which "contribution" is primarily adjusting others' code), or the behaviors that get reported in bugs are specific and testable, then bisect is invaluable.
If you're in a project where buggy behavior wasn't introduced so much as grew (e.g. the behavior evolved A -> B -> C -> D -> E over time and a bug is reported due to undesirable interactions between released/valuable features in A, C, and E), then bisecting to find "when did this start" won't tell you much that's useful.

If you often have to write bespoke test scripts to run in bisect (e.g. because "test for presence of bug" is a process that involves restarting/orchestrating lots of services and/or debugging by interacting with a GUI), then you have to balance the time spent writing those against the time it'd take you to figure out the causal commit by hand.

If you're in a project where you're personally familiar with roughly what was released when, or where the release process/community is well-connected, it's often better to promote practices like "ask in Slack/the mailing list whether anyone has made changes to ___ recently; whoever pipes up will help you debug" rather than "everyone should be really good at bisect". Those aren't mutually exclusive, but they both take work to install in a community and thus have an opportunity cost.
This and many other perennial discussions about Git (including TFA) have a common cause: people assume that criticisms/recommendations aimed at those who use Git as a release coordinator/member of a disconnected team of volunteers apply equally to people who use Git as members of small, tightly-coupled teams of collaborators (e.g. working on closed-source software).
> If you're in a project where buggy behavior wasn't introduced so much as grew (e.g. the behavior evolved A -> B -> C -> D -> E over time and a bug is reported due to undesirable interactions between released/valuable features in A, C, and E), then bisecting to find "when did this start" won't tell you much that's useful.
I actually think that is the most useful time to use bisect. Since the cause isn't immediately obvious in that situation, trying to find it by reading through the code is much harder, which is exactly when bisect helps.
I'm glad it works for you! I may not have described the situation super clearly: most bugs I triage are either very causally shallow (i.e. they line up exactly with a release or merge, or have an otherwise very well-known cause like "negative input in this form field causes ISE on submit"), or else they're causally well understood but not immediately solvable.
For example, take a made-up messaging app. Let's call it ButtsApp. Three big ButtsApp releases shipped, in order, adding the features: 1) "send messages"; 2) "oops/undo send"; and 3) "accounts can have multiple users operating on them simultaneously". All of these were deemed necessary features and were released over successive months.
Most of the bugs that I've spent lots of time diagnosing in my career are of the interacting-known-features variety. In that example, it would be "user A logs in and sends a message, but user B logs in and can undo the sends of user A" or similar. I don't need bisect to tell me that the issue only became problematic when multi-user support was released, but that release isn't getting rolled back. The code triggering the bug is in the undo-send feature that was released months ago, and the offending/buggy action is from the original send-message feature.
Which commit is at fault? Some combination of "none of them" and "all of them". More importantly: is it useful to know commit specifics if we already know that the bug is caused by the interaction of a bunch of separately-released features? In many cases, the "ballistics" of where a bug was added to the codebase are less important.
Again, there are some projects where bisect is solid gold--projects where the bug triage/queue person is more of a traffic cop than a feature/area owner--but in a lot of other projects, bugs are usually some combination of trivially easy to root-cause and/or difficult to fix regardless of whether the causal commit is identified.
Git bisect is a wonder, especially combined with its ability to potentially do the success/fail testing on its own (with the help of some command you provide).
It is a tragedy that more people don't know about it.
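For anyone who hasn't tried the automated flavor: `git bisect run` will happily drive any command that exits 0 when the bug is absent and non-zero when it's present (exit 125 means "can't test this revision, skip it"). Here's a rough sketch of what such a test script might look like; the `make` build step and the `./app --selftest` invocation are just placeholders for whatever builds your project and reproduces your bug:

    #!/usr/bin/env python3
    # Hypothetical bisect test script: exit 0 if the bug is absent at the
    # currently checked-out revision, non-zero if it's present.
    import subprocess
    import sys

    # Rebuild at this revision (the build command is an assumption).
    if subprocess.run(["make", "-s"]).returncode != 0:
        sys.exit(125)  # 125 tells `git bisect run` to skip unbuildable revisions

    # Reproduce the bug however makes sense; this invocation is a placeholder.
    result = subprocess.run(["./app", "--selftest"], capture_output=True, text=True)
    sys.exit(0 if "OK" in result.stdout else 1)

You'd kick it off with something like `git bisect start`, `git bisect bad HEAD`, `git bisect good <last-known-good-tag>`, then `git bisect run ./check_bug.py`, and bisect walks the history for you.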
Yes, in fact, the protocol states that the client can queue up multiple requests. The purpose of this is to fill up the gap created by the RTT. It is actually quite elegant in its simplicity.
An extension was introduced for continuous updates that allows the server to push frames without receiving requests, so this isn't universally true for all RFB (VNC) software. This is implemented in TigerVNC and noVNC to name a few.
Of course, continuous updates have the buffer-bloat problem that we're all discussing, so they also implemented fairly complex congestion control on top of the whole thing.
Effectively, they just moved congestion control from the client to the server while making things slightly more complicated.
I have some experience with pushing video frames over TCP.
It appears that the writer has jumped to conclusions at every turn, and usually the wrong ones.
The reason that the simple "poll for jpeg" method works is that polling is actually a very crude congestion control mechanism. The sender only sends the next frame when the receiver has received the last frame and asks for more. The downside of this is that network latency affects the frame rate.
The frame rate issue with the polling method can be solved by sending multiple frame requests at a time, but only as many as will fit within one RTT, so the client needs to know the minimum RTT and the sender's maximum frame rate.
The RFB (VNC) protocol does this, by the way, though the bit about rtt_min and frame rate isn't in the spec.
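To make the bookkeeping concrete, this is roughly what I mean (a sketch with made-up numbers, not anything from the RFB spec):

    import math

    def requests_in_flight(rtt_min_s, max_fps):
        # Keep enough frame requests outstanding to cover one round trip,
        # so the sender never sits idle waiting for the next request.
        return max(1, math.ceil(rtt_min_s * max_fps))

    # e.g. 60 ms minimum RTT and a sender capped at 30 fps:
    # ceil(0.060 * 30) = 2 requests in flight at any time.
    print(requests_in_flight(0.060, 30))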
Now, I will not go through every wrong assumption, but as for this nonsense about P-frames and I-frames: with TCP, you only need one I-frame. The rest can be all P-frames. I don't understand how they came to the conclusion that sending only I-frames over TCP might help with their latency problem. Just turn off B-frames and you should be OK.
The actual problem with the latency was that they had frames piling up in buffers between the sender and the receiver. If you're pushing video frames over TCP, you need feedback. The server needs to know how fast it can send. Otherwise, you get pile-up and a bunch of latency. That's all there is to it.
The simplest, absolutely foolproof way to do this is to use TCP's own congestion control. Spin up a thread that does two things: encodes video frames and sends them out on the socket using a blocking send/write call. Set SO_SNDBUF on that socket to a value that's proportional to your maximum latency tolerance and the rough size of your video frames.
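In rough Python terms it looks something like this (the frame size and buffering budget below are made-up numbers, and the address is a placeholder):

    import socket

    FRAME_SIZE = 100 * 1024   # rough size of one encoded frame (assumed)
    MAX_BUFFERED_FRAMES = 2   # latency budget: at most ~2 frames in flight

    sock = socket.create_connection(("198.51.100.7", 5000))  # placeholder host
    # A small send buffer means sendall() blocks as soon as the network falls
    # behind, so the encode loop back-pressures instead of letting frames
    # pile up in kernel buffers.
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF,
                    FRAME_SIZE * MAX_BUFFERED_FRAMES)

    def send_frame(encoded_frame):
        sock.sendall(encoded_frame)  # the blocking write is the feedback loop

The kernel may round or adjust the buffer size you ask for, but the principle is the same: keep the buffer small so blocking writes become your congestion signal.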
One final bit of advice: use ffmpeg (libavcodec, libavformat, etc). It's much simpler to actually understand what you're doing with that than some convoluted gstreamer pipeline.
This reminds me of my own troubles with my AEG washing machine.
Probably the most important lesson I learned from that (for someone who wants to fix their washing machine ASAP) is that there are non-user-serviceable error codes, and you need to perform an undocumented procedure on your machine to get at those codes. I wrote about it in more detail here: https://andri.yngvason.is/repairing-the-washing-machine.html
I would have loved to have an open source diagnostics dongle for my AEG. Maybe next time I'll try and make one. :)
After having used their repair service more than 10 times for my dishwasher during its warranty period, and then breaking off its front handle (well, the entire front panel really) after 2 more years, I'm never buying an AEG device ever again. I opened it up and fixed it myself, and oh my god, the whole thing just screamed cost cutting. They literally used the power button of a different model or machine, and then just mounted a different power button on top that presses the underlying one. And of course the load-bearing part that holds the front panel and display onto the door frame is just two tiny bolts in the corners. Great idea to have the entire thing flex constantly in one place. Absolute junk.
This is the downward spiral for a lot of brands: they sell out to an investor, who coasts on the brand's reputational inertia while cutting cost and quality, etc. There are barely any good brands left. IIRC Miele is still one of the few good ones for home appliances, but they're also significantly more expensive. At least for the initial purchase; I'm sure it evens out long term.
How new is your washing machine? Mine (US market, Electrolux branded) displays fault codes through the main 8 segment LCD and makes component tests available from that same diagnostic menu. Service literature was available directly from Electrolux — from a paid service with a free trial, although there are plenty of youtube videos covering the same information.
The blog post that I linked to answers your question.
I was able to get at the diagnostics menu (also explained in the blog), but I had to interrogate a service tech in order to learn how to trigger it (also mentioned in the blog).
The manual did not contain this information and I could not find it via Google.
Where did you sign up for the "paid service with a free trial"?
I've used valgrind proactively as long as I've been programming in C and C++.
The errors that are caught by valgrind (and asan) will very often not cause immediate crashes or any kind of obvious bugs. Instead, they cause intermittent bugs that are very hard to catch otherwise. They may even be security vulnerabilities.
Another good reason for using it proactively is that small memory leaks tend to sneak in over time. They might not cause any issues, and most of them don't even accumulate. However, having all those leaks in there makes it much more difficult to find big leaks when they happen, because you have to sift through all the insignificant ones before you can fix the one that's causing problems.
Drawing into the spectrogram is a fun trick. I would really like to know how much data you can store in that bird using some digital modulation method such as FSK (frequency shift keying).
There could even be multiple carriers in the signal.
It would be even cooler if the bird were to preserve phase. Then you could use PSK!
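For a sense of scale, here's a back-of-the-envelope 2-FSK sketch in Python; the symbol rate, the tone frequencies, and the assumption that the bird could reproduce them cleanly are all guesses on my part:

    import numpy as np

    SAMPLE_RATE = 44100
    BAUD = 4                 # symbols per second a bird might plausibly manage (guess)
    F0, F1 = 2000.0, 3000.0  # whistle-range tones for bit 0 and bit 1 (guess)

    def fsk_modulate(data):
        # Naive 2-FSK: one tone burst per bit, most significant bit first.
        samples_per_bit = SAMPLE_RATE // BAUD
        t = np.arange(samples_per_bit) / SAMPLE_RATE
        chunks = []
        for byte in data:
            for bit in range(7, -1, -1):
                freq = F1 if (byte >> bit) & 1 else F0
                chunks.append(np.sin(2 * np.pi * freq * t))
        return np.concatenate(chunks)

    waveform = fsk_modulate(b"hi")

At 4 baud with one bit per symbol that's only about 0.5 bytes/second per carrier, so multiple carriers (or more tones per symbol) would be doing most of the heavy lifting.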
> I would really like to know how much data you can store in that bird using some digital modulation method such as FSK (frequency shift keying).
The video shows the bird is capable of remembering and reproducing 5-10 frequency graphs simultaneously (which is what you'd need to draw a picture), so you can multiplex those.
> There could even be multiple carriers in the signal.
Or same carrier, but different sets of frequency keys for each stream.
> It would be even cooler if the bird were to preserve phase. Then you could use PSK!
Maybe they do, someone should ping Ben to test that :D
I used to work for a company that makes equipment for the food processing industry.
Sometimes conveyor belts would be left running for days or even weeks in the test area. After a while, you would start to see very fine dust on and around the conveyor belts. This was finely ground POM plastic. On some occasions, there were actually heaps of that stuff forming beneath the conveyor belts.
In the factories, everything gets washed down with pressure washers at least once per day, so very little of this stuff goes into the food, but it definitely gets washed away out to sea.
I think there is probably a widespread misunderstanding about how micro-plastics enter the food. It does not seem very likely that they come from the packaging or your tupperware (unless your tupperware is so old that it has actually started to disintegrate). It seems much likelier that the plastics were in the food before it was packaged.
Maybe not? We already have a variety of metals in our bodies, and cells already interact with them as needed and filter them out if not. (I'm obviously being very generic here. Heavy metals are an obvious exception to this.)
Meanwhile, biology has no idea what plastic is and it seems like our bodies have a hard time filtering it out.
> Personally I would have bought the good drill BTW, it is expensive, but it is one of the most used tool
Yes, a good drill is a must-have for any home-owner. Anecdotally, I've owned a "prosumer grade" power drill ever since I bought my first apartment and I've used it a lot more than I originally anticipated. It's held up remarkably well against a lot of use and abuse over a decade. Well, I've replaced the chuck and applied some epoxy to the plastic housing, but it still works. :)
Is it a good drill? I'd say that it's not great, but it's been good enough.
I've also bought some "consumer grade" tools and I would not recommend buying those, even if you can get one for less than it costs to rent a proper tool. Sometimes you get lucky with those tools, but most of the time they are not worth the price of the box they come in. Often the problem is that even though the tool doesn't exactly fall apart in your hands (this has happened to me), the precision of the result is just unworkable and you waste a bunch of time dealing with issues you otherwise wouldn't have.
Absolutely! The one thing I've been taught by one of my mentors, and consistently found true, whether working on the home, automobiles, bicycles, or in the shop, is:
Get the best tool that you can for the job.
E.g.: in powered tools, always go for the brushless motor version (lighter, better power, smoother, more durable). Better to buy 2nd-hand Matco/Snap-On wrenches etc. than cheap hand tools (sure, the mid-price ones might also have a lifetime warranty, but the turnaround time won't help you in the field; get ones that will break more rarely in the first place). Whatever the situation, do a bit of research and find what is best.
The good tools will be a joy to use every time and will spare you a myriad of frustrated swear-fests over your lifetime. And they'll last longer.
Are you saying that you've never used git bisect? If that's the case, I think you're missing out.