
Best of luck! My year of the Linux desktop was 2006, so it's been 20 years now (with a short five-year relapse around 2012). I never looked back.

(Similarities to smoking cessation are neither coincidental nor intentional, but unavoidable.)


Mine was a few years earlier (YoLotD). Sadly, I kept up with the fags until 2018 ...

FTR to other readers, "fags" in this context refers to cigarettes (per the GP's parenthetical remark about smoking).

Very kind of you to note a potential en_GB => en_US buggeration.

Nice numbers, and it's always worth knowing an order of magnitude. But these charts are far from what "every programmer should know".

I think we can safely steelman the claim to "every Python programmer should know", and narrow it further to every "serious" Python programmer writing Python professionally for some "important" reason, not just everyone who picks up Python for a scripting task. Obviously there's not much reason for a C# programmer to go try to memorize all these numbers.

Though IMHO it suffices just to know that "Python is 40-50x slower than C and is bad at using multiple CPUs" is not just some sort of anti-Python propaganda from haters, but a fairly reasonable engineering estimate. If you know that, you don't really need that chart. If your task can tolerate that sort of performance, you're fine; if not, figure out early how you are going to solve that problem, be it through one of the several ways of binding faster code to Python, using PyPy, or not using Python in the first place, whatever is appropriate for your use case.
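
If you'd rather sanity-check that estimate on your own machine than memorize a chart, the stdlib timeit module is enough for a rough measurement; the second variant assumes numpy is installed and merely stands in for "binding faster code to Python":

    # pure-Python loop over a million ints
    python3 -m timeit -s "xs = list(range(10**6))" "s = 0" "for x in xs: s += x"

    # the same reduction pushed down into C via numpy (assumes numpy is installed)
    python3 -m timeit -s "import numpy as np; xs = np.arange(10**6)" "xs.sum()"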


And yet Java is more than Java. There are lots of more modern languages on the JVM. The ecosystem is huge and still has lots of inertia.

Yeah. Some of my critique applies to the language, some to the JVM and is thus cross-language.

Kotlin sure is less awful, for example. But the JVM, as I describe, was always a failed experiment.


Pip is not Linux-specific; it's the same on Windows/Mac. I prefer AppImages because they are just statically compiled binaries. I prefer apt & friends because it is good old packaging. But Flatpak and Snap? Hell no. I see so little advantage there.

This is exactly the type of bad redaction which the X-ray software will also find.



You can X-ray a PDF?


Unsure if you are serious, but the commenter is referring to the name of the tool this post links to.


What's wrong with a big end-of-day commit? Sure, a well-crafted git history can be very valuable. But then somebody comes along and decides to just flush your well-curated history down the toilet (i.e. delete it and start over from scratch somewhere else), and then all the valuable metadata stored in the history is lost.

Maybe consider putting your energy into good documentation inside the repository instead. I would love to see more projects with documentation that covers the timeline and ideas during development, instead of having to extract this information from metadata - which is what commit messages are, in the end.


> But then somebody comes along and decides to just flush your well-curated history down the toilet (i.e. delete it and start over from scratch somewhere else), and then all the valuable metadata stored in the history is lost.

How does this happen? I haven't run into this.

> Maybe consider putting your energy into good documentation inside the repository

I'd say both are valuable.

I use git log and git blame to try to understand how a piece of code came to be. This has saved me a few times.

Recently, I was about to replace something strange with something way more obvious to fix a rendering issue (in some HTML, an SVG file was displayed by pasting its content directly into the HTML, and I was about to use an img tag to display it instead), but the git log told me that the SVG had previously been displayed using an img tag, and the change was made to fix the issue that the links in the SVG were not working. I would have inadvertently reverted a fix and caused a regression.

I would have missed the reason the code was like this with a big "work" end-of-day commit.

It would have been better if the person had commented their change with something like "I know this looks weird, but we need it for the SVG to be interactive" (and I told them so, btw), but it's easy not to notice a situation where a comment is warranted. When you've spent a couple of hours in some code, your change can end up feeling obvious to you.

The code history is one of the strategies to understand the code, and meaningful commits help with this.


When working on a feature branch it can be useful to break up your changes into logical commits. This gives you the flexibility to roll back to an earlier iteration while still working on the feature, if needed.

One of my git habits is to git reset the entire feature branch just before opening a PR, then rebuild it with carefully crafted commits, where I try to make each commit one "step" towards building the feature. This forces me to review every change one last time, and the person doing code review can then follow the same progression.
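
For anyone unfamiliar with the mechanics, the rebuild step looks roughly like this (branch and remote names are just placeholders):

    git checkout my-feature
    git reset origin/main    # mixed reset: keep all changes in the working tree, drop the WIP commits
    git add -p               # stage one logical "step" of the feature
    git commit
    # repeat add -p / commit until nothing is left unstaged, then open the PR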

These benefits hold even if the branch ultimately gets squashed when merging into main/master. I've also found that even if you squash when merging, you can still browse back to the PR in your git repository's web UI and see the full commit history there.


> What's wrong with a big end-of-day commit?

It's useless for everything but the code-preservation part; it doesn't tell you anything.

> But then somebody comes along and decides to just flush your well-curated history down the toilet (i.e. delete it and start over from scratch somewhere else), and then all the valuable metadata stored in the history is lost.

I would be very angry if someone deleted my work; why would I accept that? If a colleague throws my work into the bin, I will complain to my superior. They pay me for it, after all.

> Maybe consider putting your energy into good documentation inside the repository instead. I would love to see more projects with documentation that covers the timeline and ideas during development

That's what commit messages are? They give you the ability to click on any line in your codebase and get an explanation of why that line is there, what it is supposed to do, and how it came to be. That's very valuable and, in my opinion, much more useful than static standalone documentation.

First you think of commits as backups, then as a way to distribute code. Later you see them as a way to record time. A useful insight for me was to consider what time is a prerequisite for: causality. Now I see that a VCS is less about recording actual history and more about recording evolutionary dependencies, causality and intent. I also perceive my work less as producing a final state of a codebase and more as producing part of the history of a codebase. My work output is not a single distribution of code, but documented, explainable and attributed diffs, i.e. commits.


Atomic commits compose more easily: in case you want to pull a few out to ship as their own topic, or separate out the noisy changes so rebases are quicker, or separate out the machine-generated commit so you can drop it and regenerate it on top of whatever.

My commit messages are pretty basic “verbed foo” notes to myself, and I’m going to squash merge them to mainline anyway. The atomic commits, sometimes aided by git add -p, are to keep me nimble in an active codebase.


> Maybe consider putting your energy into good documentation inside the repository.

Commit messages are documentation.

If you have a good commit history you don't need to write tons of documents explaining each decision. The history will contain everything you need, including when the code was changed and by whom, what the change was, and why the code exists. You have a good interface for retrieving that documentation (git log, perhaps with -S, -G, --grep, -L and some pathspecs) without needing to maintain extra infrastructure, and without it becoming cluttered over time (it stays mostly hidden unless you actively search it). You also don't need to remember to update the documents; you write them as part of each commit.
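
As a rough sketch of what those queries look like in practice (paths and search terms are made up):

    git log -S "retry_count" -- src/        # commits that added or removed that string
    git log --grep="timeout" --oneline      # full-text search over commit messages
    git log -L 120,160:src/server.c         # history of a specific line range
    git blame -w src/server.c               # which commit last touched each line (ignoring whitespace)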

And that's not a hack, Git was made for that.


A surprisingly large number of devs do the work of recording data into a VCS (probably because they are told to by colleagues or superiors), but never seem to use it. Then they tell you that crafting proper commits isn't all that important. Well, that's because they never actually use the VCS. In my book, only generating commits isn't really using a VCS; that is just the information-gathering part. You also need to run queries on the collected data, otherwise, yes, it would be quite useless.


Agreed. If they don't care about the history, they don't need a VCS. There's no point in keeping a history if the history isn't helpful.


I think they do somewhat care about the history, but only as a backup, a list of older versions, and a bunch of potential merge bases. They do not care about the history in the sense of evolution and causality. It really depends on what you see as the history, so it is a manifestation of a somewhat philosophical problem.


For me the point of splitting commits is not documentation (though it can be an added benefit). It is so that you can easily roll back a feature or cherry-pick, and it also makes the use of blame and bisect more natural. Anyway, that's git: it gives you a lot of options, do what you want with them. If a big end-of-day commit is fine for you, great, but some people prefer to work differently.

But that's not actually the reason I use "git add -p" the most. The way I use it is to exclude temporary code like traces and overrides from my commits while still keeping them in my working copy.
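
In practice that looks roughly like this (the hunk answers are interactive):

    git add -p    # stage only the hunks meant for publication,
                  # answer "n" to the debug traces and local overrides
    git commit
    git diff      # the temporary hunks stay uncommitted in the working copy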


Hmm, this idea of maintaining working copies that differ from upstream strikes me as fragile and cumbersome. For a solo project, sure, whatever works. But for larger projects, IMHO this workflow is an antipattern.


To be fair, yes, it is a bit fragile and cumbersome, though it works for me.

However, it doesn't make "git add -p" less useful when the idea is to separate what you want to publish from what you want to keep in your work zone, be it your working copy or a dev branch.

As always with git: it is not very opinionated, it lets users have their own opinions, and they do! Monorepos vs. many repos, rebase vs. merge, clean vs. honest history... it can do it all, and I don't think the debates will ever settle on what is an "antipattern", as I don't think there is a single "right" answer.


Yeah, TIMTOWTDI, different strokes, etc... and I'm not claiming there's "one true way". I was just reacting (almost viscerally) to the idea of deliberately maintaining diffs in a stateful local env, which feels like it's begging for "works on my machine" issues. My instinct to avoid that, on principle, extends beyond project source code to "fiddly" local dev environments, seeking things like devcontainers, fully-reproducible builds for CI, etc.


If commit messages are meaningful and the commits are well crafted (with the help of git add -p, for example), this documentation can be generated from the metadata ;P Also, big end-of-day commits normally cover multiple different fixes, which is, I hope you can understand, not very nice to have in one big commit.

If someone else decides your implementation of something is not good enough, and they manage to get enough buy-in to rewrite it from scratch, maybe they were right to start with? And if your history is not clear about the why of your changes, you have nothing to defend your work with.


Also, rebasing is a lot easier when you have small commits, rather than a mega conflict.


>What's wrong with a big end-of-day commit?

Hoo boy, I guess you never tried to use `git blame` on years-old shit, huh? Don't push a commit for every line, but one for each logical unit, like one particular feature or issue.

>But then somebody comes along and decides to just flush your well-curated history down the toilet (i.e. delete it and start over from scratch somewhere else), and then all the valuable metadata stored in the history is lost.

This doesn't just accidentally happen. There are tools to migrate repositories and to flush ancient commits in huge repositories. If you curate your commit history, this is probably never necessary, or may only become necessary after decades.

>Maybe consider putting your energy into good documentation inside the repository.

Commit messages are documentation for code, basically. `git blame` associates the messages with individual lines and lets you step through all the revisions.
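
A sketch of that stepping-through, with a made-up file name:

    git blame src/render.c                  # find the commit that last touched a line
    git show <commit>                       # read its message and the full diff
    git log --follow -p -- src/render.c     # walk through every revision of the file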

>I would love to see more projects with documentation that covers the timeline and ideas during development, instead of having to extract this information from metadata - which is what commit messages are, in the end.

The commit messages are for detailed information, not so much for architectural or API documentation. This doesn't mean you should get rid of commit metadata! Eventually, you will find a situation where you wonder what the hell you or someone else was doing, and the commit message will be a critical piece of the puzzle. You can also leave JIRA links or whatever in the message, although that adds a dependency on JIRA for more details.


And User Mode Linux was the basic technology behind dirt-cheap (not so) virtual machines at some VPS providers 15 years ago. This had some disadvantages; for instance, you could not load custom kernel modules in the VM (such as for a VPN) - actually, you could not modify the kernel at all.


Another major disadvantage, at least back then, was that it did not support SMP at all.


When I used Gentoo, where you typically configure and compile the kernel yourself, I never used an initramfs.

This was 20 years ago. Gentoo was really a great teacher.


The problem with that was that you'd run literally every module initialization, and occasionally there were some that crashed the kernel.


Only if you compiled your kernel with literally every module. If you compile your kernel with only the modules your system needs, there’s no such issue.


which then made it difficult to upgrade your hardware.


Well yes, when you start customizing Gentoo, it tends to make hardware changes more difficult. E.g. -march=native makes CPU changes difficult, but it's still very common on Gentoo.


I am not sure about your use case. There are many JS libraries which will generate QR codes client-side. How many QR codes do you handle that you need to optimize for file size? Or is it just an academic interest?

SVGs are XML, so technically, yes, you can just embed your visually encoded payload data with namespaced attributes and elements. If you don't want to use namespaces, you can use off-canvas text, hidden/opacity=0 text or even XML comments. You can even use the regular metadata section of SVGs. And you can make the whole QR code within the SVG a clickable link.


Talos Linux [1], "the Kubernetes Operating System", is written in Go. That means it works exactly like the little demo here, where the kernel hands over to a statically compiled Go binary as the init process.

Talos is a really interesting Linux distribution because it has no classical user space, i.e. there is no such thing as a $PATH including /bin, /usr/bin, etc. Instead of a shell there is a network API, following the Kubernetes configuration-as-code paradigm. The Linux host (node) is supposed to run only containerized applications. If you really want to, you can use a special container to get access to the actual user space of the node.

[1] https://www.talos.dev/ [2] https://github.com/siderolabs/talos/releases/tag/v1.11.5


I also use Talos, but I wonder if just using systemd for the init process wouldn't have been easier. You can interface with systemd from Go quite easily anyway...


s6 (perhaps with s6-rc) is another interesting option. One could say it’s less opinionated than systemd. Or perhaps it’s more correct to say it has another set of opinions.


Off-topic, I guess: are there any large-scale success stories using this OS?


Yes. I know of at least one big cloud provider in Germany (actually the biggest) that uses Talos for their managed k8s.

