
Exception-free code is necessarily littered with exit-value checks at every call, which discourages refactoring and adds massive noise. You can call the decision to eschew exceptions "sane" and "clean", but I find the resulting code to be neither. Or, practically speaking, exit codes will often not be checked at all, or an error check will be omitted by mistake, interrupting the entire intended error-handling chain. Somehow that qualifies as a better outcome, or as more sane? Perhaps it does for a code base that is not changing (write-once) or is not expected to change (i.e. throwaway code written in "true microservice" style).
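
A rough sketch of what I mean (Java-flavoured, with made-up helper names, just for illustration): with exit codes every call site needs its own check and one omitted check silently breaks the chain, while with exceptions the failure propagates unless it is deliberately swallowed.

    // Sketch with hypothetical helpers; not any particular API.
    class ErrorStyles {
        // Exit-code style: every call must be followed by a check.
        static int copyWithCodes() {
            int rc = openInput();
            if (rc != 0) return rc;
            rc = readAll();              // oops: the "if (rc != 0) return rc;" is missing here,
            rc = writeAll();             // so a read failure is silently discarded
            if (rc != 0) return rc;
            return 0;
        }

        // Exception style: the happy path reads straight through,
        // and a failure anywhere propagates to the caller.
        static void copyWithExceptions() throws java.io.IOException {
            openInputOrThrow();
            readAllOrThrow();
            writeAllOrThrow();
        }

        // Stand-ins so the sketch compiles; a real API would do actual I/O.
        static int openInput() { return 0; }
        static int readAll()   { return -1; }   // pretend the read fails
        static int writeAll()  { return 0; }
        static void openInputOrThrow() throws java.io.IOException {}
        static void readAllOrThrow()   throws java.io.IOException { throw new java.io.IOException("read failed"); }
        static void writeAllOrThrow()  throws java.io.IOException {}
    }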


SRE is an anti-pattern that Google is unwilling to admit and is instead selling books on. Just like QA, release engineering, continuing engineering, and DBA, it should not exist as a separate department/job title, because these critical parts of software development should not be treated as optional and thrown over the wall for someone with no stake in developing the product to take care of.


I've been on the other side of this (i.e. companies that have no SREs or QA, or in one case a company that had QA and got rid of it), and it has always been an unmitigated disaster.

The root cause of this disaster is that, when writing software, interruptions are the death of productivity. Having a software engineer wear too many different hats at one time, especially when some of those hats are largely real-time interrupt driven, can absolutely kill productivity.

To emphasize, I'm not at all in favor of "throwing things over the wall". Software engineers are responsible, for example, for making software that is easy to test and has good observability in place for when production problems show up. But just because you listed a bunch of things that are "critical for software development" doesn't mean that one person or role should be responsible for all of these things.

At the very least, for smaller teams, I recommend a rotating role so that devs working on feature development aren't constantly interrupted by production issues; instead, each dev gets a week-long stint where all they're expected to do is work on support issues and production tooling improvements.


I agree very much with you that interruptions are the death of productivity. Your suggestion of weekly rotations is great.

However, I argue that if the engineers are interrupted by QA issues, they will be motivated to find ways not to have those QA issues. In the absence of that, we end up with the familiar "feature complete, let QA find the bugs" situation.


> However, I argue that if the engineers are interrupted by QA issues, they will be motivated to find ways to not have those QA issues.

There are institutional limitations that engineers cannot overcome, no matter how zealous or motivated. Moreover, companies also ought to remember that engineers can "find ways to not have those QA issues" by seeking employment elsewhere!


This is such a crazy take to me. Any profession that matures eventually specializes. I wouldn't expect the same person to pour my foundation, install the plumbing, and wire the building, yet in an ever-expanding field we expect someone to be able to do it all. Also, saying that people who don't code have no stake is so egocentric.


Pouring a foundation, installing plumbing, and wiring a building are specialized out of physical necessity: a mistake cannot be redone except at great cost. That justifies the specialization. Unlike building a bridge, compiling software is essentially free. QA, release engineering, and database design can and should be repeated and iterated on by software engineers, because they are a necessary part of development, and removing them from the expected work distorts incentives.


Regardless of these fields being separate departments/job titles: people are not getting promoted for doing QA, release engineering, continuing engineering, or DBA work. It's a huge cultural problem in tech.


I agree that doing this work is looked down upon and not recognized as the critical work that it is.


Having full-stack engineers take care of everything works, but only up to a certain org size (like a small startup). Once the org gets larger and systems get more complex, you usually need specialisation. It's natural, and Google didn't really invent anything here.


I agree, very cool. I still miss the fast productive workflow of Jeskola Buzz from back in the day. Modular software synth + tracker with pattern sequencing. https://youtu.be/8J8i72a11W4?si=IRic-Z_YMinudnhn


Buzz tracker was so awesome

I never got around to making full tunes in it, but I spent countless hours exploring the modular aspects

It actually was what opened my eyes to things like PureData, Max/MSP, etc. Highly underrated program.


Have you considered the impact of traffic calming measures on emergency services? https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9922345/#:~:tex....


Have you considered the impact on emergency services of encouraging drivers to go as fast as they can whilst not paying sufficient attention?


There are plenty of traffic calming measures that aren't speed bumps


Honestly speaking, that's an incredibly difficult issue to try and optimize for. There are a ton of different measures you could implement to try and improve ambulance travel times, but they're the same street design choices that we know drastically increase accident rates and fatalities for drivers, cyclists, and pedestrians alike.

Wider travel lanes on normal streets? More signalized intersections with overrides for emergency services instead of roundabouts, stop signs, or other measures meant to decrease intersection accidents and fatalities? Removal of speed bumps, raised pedestrian crosswalks, etc.? Additional lanes so ambulances have space to pass other cars?

Sure, they could all ostensibly improve ambulance travel times. But they'd do so by dramatically increasing the number of fatalities on our streets. Not to mention the workload on those same emergency services. So while it can make sense to consider the impact on those services, they probably shouldn't be the driving factor. Or even a main one.

On the other hand, even if speed bumps and other measures cause minor delays, other changes might be able to balance them out. Dedicated bus lanes, for example, are basically exclusive express lanes you could choose to route emergency service vehicles down with potentially significant time savings.


In the Netherlands we have a completely separated bus network: no speed bumps, traffic light priority, and audible cues at intersections. It works pretty well for emergency services.


Just have emergency services use the bus lane?

And if there isn't one, your problem isn't traffic calming, your problem is the lack of a bus lane.


I tried it out. The instructions have tips on what record to pick: they say to pick a well-known version of a song (not a live version, etc.), preferably one with a beat, but also that the app doesn't use any third-party APIs or libraries, only Apple APIs. So my guess as to what it's doing is using ShazamKit recognition behind the scenes and looking at the frequencySkew value of the matched result. It also gives you one answer after listening, instead of a continuous gauge, which seems to corroborate song recognition. It probably won't work with an obscure record that isn't Shazamable, and so I don't think it can measure wow & flutter.

Still pretty cool for those who need to calibrate a turntable, or to verify 33 vs 45 RPM for a record.


I love that everyone is guessing all these methods of detecting pitches or using the camera to count rotations, and it turns out that they're most likely literally just using a built-in API and displaying a return value [0].

0: https://developer.apple.com/documentation/shazamkit/shmatche...


I guess a phone's camera could be used to measure the RPM, e.g. by filming the label and timing how many milliseconds it takes to do a full rotation (1800 ms for a 33 1/3 RPM record).
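
Back-of-the-envelope, something like this (a hypothetical sketch, not the app's actual code):

    // Sketch: convert a measured rotation period into RPM.
    final class Rpm {
        // period of one full rotation, in milliseconds -> revolutions per minute
        static double fromPeriodMillis(double periodMs) {
            return 60_000.0 / periodMs;
        }

        public static void main(String[] args) {
            System.out.println(fromPeriodMillis(1800));  // ~33.3 (a 33 1/3 RPM record)
            System.out.println(fromPeriodMillis(1333));  // ~45
        }
    }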


I tried to build a prototype of something like this with OpenCV once, but didn't get it to work reliably. I have a feeling there should be a relatively simple signal-processing version of this, basically a spatiotemporal Fourier transform, that would solve it.


I would go with something like SIFT features (or a non-patented variant thereof if you plan commercial usage). You analyze a first picture and extract the center part. You then run a feature detector on each image and can cheaply match the features (scale- and rotation-invariantly) against each other; this gives you rotation and scaling.
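
Once the matching works, the per-frame rotation estimate itself is simple geometry. A rough, library-free sketch (hypothetical; it assumes you already have matched keypoint pairs and the label centre):

    // Sketch: average the angular change of matched keypoints around the label centre.
    final class RotationEstimate {
        record Pt(double x, double y) {}

        // prev[i] and curr[i] are the same physical feature seen in consecutive frames.
        static double degreesBetweenFrames(Pt centre, Pt[] prev, Pt[] curr) {
            double sum = 0;
            for (int i = 0; i < prev.length; i++) {
                double a0 = Math.atan2(prev[i].y() - centre.y(), prev[i].x() - centre.x());
                double a1 = Math.atan2(curr[i].y() - centre.y(), curr[i].x() - centre.x());
                double d = Math.toDegrees(a1 - a0);
                if (d <= -180) d += 360;   // unwrap into (-180, 180]
                if (d > 180)   d -= 360;
                sum += d;
            }
            return sum / prev.length;      // average degrees rotated between the two frames
        }
    }

Divide that by the frame interval to get degrees per second, and from there the RPM.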


Yea that's precisely what I ended up doing, but the frame extraction was quite finicky.


How can it use ShazamKit if it processes everything on-device?

I'm really really curious how this is done now...


They don't claim everything is done on-device, just that the audio stream processing is.

> Grooved does not collect any data, whatsoever. The audio stream is processed locally on your device and never recorded.

Which is consistent with how ShazamKit works [0]:

> Audio is not shared with Apple and audio signatures cannot be inverted, ensuring content remains secure and private.

0: https://developer.apple.com/shazamkit/


Jef Raskin's "The Humane Interface" was such an eye-opener for me: no modes (that explained why I was always frustrated with vi or emacs), and the idea that computers should never lose your data.

From this post I learned there was a project called Archy implementing Cat in software.

The core principles still ring so very true: https://web.archive.org/web/20061025010636/http://rchi.raski...

> Computer rage is a familiar phenomenon because computers are so adept at losing your data. At any given moment, you are one innocent step away from destroying minutes, hours, days, months, or years of work.

> Archy never loses your work. This shouldn't be a groundbreaking innovation in computer design, but it is. You never have to save because it's done for you automatically. Your data is stored in such a way that if your computer crashes, your information will still be there the next time you start Archy up.

I think these days we are 70-80% of the way to this groundbreaking innovation: my computers no longer lose my in-progress e-mail or documents, but they sure do still lose form input here and there, and I still accidentally select all text and type over it in a system with single-level undo (iOS).


Thanks to Moore's law, this excellent essay by the late, great Michael O'Connor Clarke [1], and our largely successful KillSave campaign, we at least have autosave as a widely adopted measure of sanity. It's better than nothing.

[1] https://web.archive.org/web/20080621173441/http://killsave.o...


Ugh, I loathe autosave popping up in more and more places, particularly web apps. I've messed up the working copy of my wife's business website and various docs more than once when she's asked me to fix or tweak something. I automatically change some random things around in Squarespace/Webnode/Canva just to get a feel for a tool I've never used and to discover how the workflow compares to something I'm familiar with. Then I find out there's no undo, or it doesn't revert everything, I have no idea what changes I've made vs the original, and it's all autosaved. Terrible UX!

Autosave has no place in anything more complex than a simple text form, unless it's solely a backup solution for crashes or is paired with a very solid version control system.


Agreed, autosave is wrong, because save is, among other things, a destructive operation: it obliterates the old version of what you save. That should require a decision by the user.

What macOS native apps all do, and have for years, is the right thing. They save your work, but not in your file. Open something, make modifications, close the program, open it: the modifications are still there. Crash? When you reboot, you still have your work. You save when you're ready to replace the old version with the new one, the program doesn't make that choice for you.

Mostly they don't save undo history between boots, though, and that's too bad. Ideally every format in which people actually do the work would be non-destructive, preserving an undo history back to whatever the initial state of the file was. Photoshop got this right, so it's common in image editing, but it should work everywhere.

Clearly you don't usually want finished work to contain the complete edit history, so exporting a rendered version is essential. But this isn't a difficult problem.


We basically need the concept of quicksave and discrete save slots from video games.

The system should regularly quicksave, but when I want to make a conscious checkpoint, let me do so. Perhaps use a VMS-style versioned file system if you want a separate channel of undo history.


That seems to be an odd formulation of a call for crash-only software.

https://dslab.epfl.ch/pubs/crashonly.pdf


iOS has more than a single level of undo, but it's terrible at exposing it, and it's easy to overwrite by mistake.


Also, first-party apps like Notes have started reinventing modal interfaces.

sigh


Oops, my mistake.. yeah, it does have multiple levels of undo.

My favorite 3-way merge tool, kdiff3, has no undo.


Sad that C is still being used with a straight face. If you can't be bothered to develop in C++ and pay only for what you use, missing RAII is the least of your problems.


You can be really productive, expressive, and performant in Groovy. So much of the language still works well with @CompileStatic and doesn't require dynamic typing, so you can write clear code that has decent refactoring support in IntelliJ.

Don’t forget traits! When are we getting traits in Java? Probably never.

Optional semicolons and parentheses to cut the line noise and enable internal DSLs.

Though Java has gained implicit typing with `var` and now has reasonable lambdas, it is still not a high-level language; maybe it is now medium-high.


Manual memory management vs garbage collection is a trade-off between the speed of allocation and the speed of freeing memory: freeing is expensive in a garbage-collected language, but allocation is basically free.

Modern garbage collectors are very good and handle many common use cases rather efficiently, with minimal or zero pauses, such as freeing short-lived objects. Many GC operations are done in parallel threads without stopping the application (you just pay the CPU overhead cost).

JITs in both the CLR and the JVM also perform optimizations such as escape analysis, which stack-allocates objects that never escape a function's scope. These objects thus never have to be GC'd.

So really with a GC’d language, you mostly have to worry about pauses and GC CPU overhead. Most GCs can be tuned for a predictable workload. (A bigger challenge for a GC is a variable workload.)
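
To make the escape analysis point concrete, here is a minimal Java sketch (a made-up example, not from any benchmark): the temporary object never leaves the method, so a JIT that performs escape analysis can scalar-replace it and nothing ever reaches the heap or the GC.

    // Sketch: 'p' is never stored in a field, returned, or passed to another method,
    // so escape analysis can scalar-replace it -- effectively no heap allocation at all.
    final class EscapeSketch {
        record Pair(int a, int b) {}

        static int sum(int a, int b) {
            Pair p = new Pair(a, b);   // candidate for scalar replacement
            return p.a() + p.b();
        }
    }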


Correction: JVM implementations perform escape analysis in particular because Java does not have structs. .NET does not perform escape analysis for objects, and all attempts to experiment with it so far have shown a greater impact on compilation speed with little to no profit in allocation rate. However, you will find the average allocation traffic much lower in .NET, because C# already puts far more data on the stack through structs, non-boxing operations (where Java boxes), stack-allocated buffers, non-copying slicing with spans, and object pooling (which is common in Java too).


Thank you, I missed the stack allocation design doc stating it’s on the roadmap. (https://github.com/dotnet/runtime/blob/main/docs/design/core...)

Appreciate the detail about the stack allocated bits in .NET.


Yeah, it kind of is. There are quite a few experiments that are conducted to see if they show promise in prototype form and are then taken further for proper integration if they do.

Unfortunately, object stack allocation was not one of them: even though the DOTNET_JitObjectStackAllocation configuration knob exists today, enabling it makes zero impact, as it almost never kicks in. By the end of the experiment [0], it was concluded that, given how a lot of C# code is written, there are many other lower-hanging fruits to pick before investing effort in this kind of feature becomes profitable.

To contrast this, as a continuation of the green threads experiment, the runtime-handled tasks experiment [1], which moves async state machine handling from IL emitted by Roslyn to special-cased methods and then purely to runtime code, has been a massive success and is now being worked on for integration into one of the future versions of .NET (hopefully 10?).

[0] https://github.com/dotnet/runtime/issues/11192

[1] https://github.com/dotnet/runtimelab/blob/feature/async2-exp...


I don't think so, because it is the runtime JIT (just-in-time) optimizer that gives the CLR and the JVM the critical speed advantage that allows them to beat C and C++.

The inlining of virtual calls is the critical optimization that enables this. Because C/C++ is optimized statically and never at runtime, it is unable to optimize the results of function pointer lookups (in C, and thus also virtual calls in C++). The JITs, however, can inline through function pointer lookups.

In sufficiently complex programs, where polymorphism is used (i.e. old code that calls new code without knowing about it), this yields an unsurpassed speed advantage. Polymorphism is critical to managing complexity as an application evolves (even the Linux kernel, written in C, uses polymorphism, e.g. see struct file_operations).
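
A hedged Java sketch of the kind of call site I mean: statically this is an indirect (virtual) call, but at run time it happens to be monomorphic, so the JIT can profile it, devirtualize, and inline apply() into the hot loop, whereas a purely static compiler looking only at run() generally cannot.

    interface Op { long apply(int x); }

    final class Square implements Op {
        public long apply(int x) { return (long) x * x; }
    }

    final class Hot {
        // run() is "old code" that knows nothing about Square ("new code"),
        // yet the JIT can observe that this call site only ever sees Square,
        // devirtualize it, and inline apply() into the loop body.
        static long run(Op op, int n) {
            long total = 0;
            for (int i = 0; i < n; i++) {
                total += op.apply(i);
            }
            return total;
        }

        public static void main(String[] args) {
            System.out.println(run(new Square(), 1_000_000));
        }
    }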

