cap10morgan's comments | Hacker News

This highlights a common pitfall: If you “solve” a problem with a “temporary” solution, you lower the priority of the better solution below every unsolved problem. And there are always enough of those to ensure no one ever revisits the temporary solutions.


I've noticed this in my career. The lesson is to not "ship" something until you're proud of it. Sometimes that's easier said than done though.


Then perfectionism sets in, you wind up tweaking and adjusting every detail ad infinitum, and never ship anything.

Bird in the hand, worse is better, etc.


Right! That's why I suggest "proud of" not "sure there are no flaws". If your level of "I'm proud of this thing" is "I can't think of any way to improve it" then you should probably recalibrate.


"There is nothing more permanent than a temporary solution that works"

I have no idea where I heard that, but I use it often at work to make sure we don't ship temporary solutions and instead do it right the first time.


> "There is nothing more permanent than a temporary solution that works"

> I use it often at work to ensure we ... do it right the first time.

If it works, you did it right!


I disagree; I consider that an absolute win of efficient engineering. Develop the feature well enough that it lasts 30 years without needing to be fixed.


It has to be good enough. Was it? You could argue it was, because it survived that long. But it wasn't a standalone product, rather part of Windows, which was a (commercially) successful product because it was good or great enough. Some parts were better, some were worse. While not completely and utterly broken, I think the suggestion here is that the Format dialog fell in the "worse" camp. So, I'm not saying that it was okay because there were more important things to fix (that was true too, probably), but I'm saying that it was okay because there were enough equally important things that were done better/well enough.


I don't condemn it. I condemn the implementation that doesn't render it with the latest widgets and instead keeps shipping the same widgets that were abandoned ages ago.

At any given time, there should be one code path to render an abstract UI definition to a screen. It might vary with screen capabilities, running environment, or size, but there should be only one, so you don't need to maintain and ship ancient unmaintained software.
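
A rough sketch of that idea in Go, with hypothetical types (not any real Windows or KDE API), just to show the shape of "one renderer behind an abstract UI definition":

    package main

    import "fmt"

    // Widget is a hypothetical abstract UI node; a real system would also
    // carry layout, state, and event wiring.
    type Widget struct {
        Kind     string // "window", "label", "button", ...
        Label    string
        Children []Widget
    }

    // Renderer is the single code path: one interface, with the concrete
    // implementation chosen by environment or screen capabilities.
    type Renderer interface {
        Render(w Widget) error
    }

    // ConsoleRenderer stands in for "whatever toolkit is current here".
    type ConsoleRenderer struct{}

    func (r ConsoleRenderer) Render(w Widget) error {
        fmt.Println(w.Kind+":", w.Label)
        for _, c := range w.Children {
            if err := r.Render(c); err != nil {
                return err
            }
        }
        return nil
    }

    func main() {
        dialog := Widget{Kind: "window", Label: "Format", Children: []Widget{
            {Kind: "label", Label: "Capacity"},
            {Kind: "button", Label: "Start"},
        }}
        var r Renderer = ConsoleRenderer{} // the one renderer picked for this environment
        _ = r.Render(dialog)
    }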


> I condemn the implementation that doesn't render it with the latest widgets and, instead, implements the same widgets that have been abandoned ages ago.

Interestingly I think this generalises to the whole Windows Explorer.

The Explorer introduced in Win95 and later in NT 4 is a lovely bit of UI design. It introduced the Taskbar (never seen before, and no, the NeXT Dock is not a taskbar, and Acorn's Paul Fellows said NeXT's implementation was apparently derived from the Icon Bar in Acorn RISC OS -- NeXT hired an Acorn developer and he took his Archimedes with him to California). It introduced the Start Menu, with an elegant system of folders and shortcuts as its storage model, later imitated with the newly-customisable Apple Menu of MacOS 8 and later.

But in Win98, Microsoft bodged the Explorer with Active Desktop, which renders via Internet Explorer 4, so that MS could justify bundling IE4 with Windows to the US DOJ in court. That version is multithreaded, which is good, but it's also much bigger and much slower... because it renders via IE. That means new slowdowns and new ugliness, like windows of generic icons, which then get replaced with the correct icons as the renderer tries to catch up. So, they hid that, with an empty window and a flashlight scanning, while the HTML renderer tries to create a view of the Control Panel. It also added wallpapers in folder views, a horribly ugly idea.

And _that_ ugly version is what KDE ended up copying, rather than the cleaner quicker one that was first launched.

And KDE didn't notice and copy the nice, neat, Unix-like "just display a Start Menu built from the contents of a directory" idea. It implemented a database instead, and so every successive start menu implementation copies that instead.

The ugly hack done for some other, non-technical reason ends up being the one that influences the successor designs, and the classic clean original implementation is forgotten.

In this case, the results are GNOME 2, MATE, KDE, Cinnamon, LXDE/LXQt, even much of Xfce...


I think folder views are one of the most amazing ideas, and I first saw them in Windows with IE3. You don't build an e-mail client: you build a view that sees a folder full of e-mail messages (each one a file) and displays it as an e-mail reader. Add a service to send messages in your outbox and poll services to populate your inbox, plus a viewer and an editor for messages, and you are set.

As for the renderer trying to catch up, it could be implemented as something that reads and caches all required graphic resources before attempting to render to the display, so that everything appears correct the first time.

As for the UI, it's fine if the current renderer uses HTML - you just build something that reads the abstract UI representation and outputs HTML for the window renderer.
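
A tiny, hypothetical Go sketch of that "gather everything first, then render" approach (placeholder types, not any real shell API):

    package main

    import "fmt"

    // Icon is a stand-in for a real graphic resource.
    type Icon struct{ Name string }

    // IconCache holds every resource a view needs before the first paint.
    type IconCache map[string]Icon

    func (c IconCache) Preload(names []string) {
        for _, n := range names {
            if _, ok := c[n]; !ok {
                c[n] = Icon{Name: n} // stand-in for loading the real resource from disk
            }
        }
    }

    func render(items []string, cache IconCache) {
        for _, it := range items {
            fmt.Printf("%s [%s]\n", it, cache[it].Name) // every icon already resolved
        }
    }

    func main() {
        items := []string{"Display", "Network", "Printers"}
        cache := IconCache{}
        cache.Preload(items) // read and cache resources up front...
        render(items, cache) // ...so the first render shows the correct icons
    }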


> Windows with IE3

Which version of Windows?

I used IE3. It wasn't great. It came with MS Internet Mail and News.

https://web.archive.org/web/20110816052247/http://www.nwnetw...

I quite liked MSIMN, it was actually a pretty good client -- it just needed spam filtering, which it never got until it was a bloated mess.

> you build a view that sees a folder full of e-mail messages

BeOS mail did that, probably first.

But I don't remember MSIMN doing that.

It does sound just like a Maildir folder, though.

https://web.archive.org/web/19971012032244/http://www.qmail....

Maildir dates back to around the same time that Win95 was first released, and Win95 as shipped didn't have MSIMN. It had the Inbox client, designed to talk to MSN and Microsoft Mail.

https://en.wikipedia.org/wiki/Microsoft_Mail


Important caveat with Firefox: You can run your own Firefox Account server (see https://mozilla-services.readthedocs.io/en/latest/howtos/run...) and then e.g. connect to it over a VPN to mitigate a lot of this.
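
For reference, the client side is just a couple of about:config prefs. Host names below are placeholders; the autoconfig pref is the one the linked guide describes, and the token-server pref only matters if you also run your own sync server:

    // user.js sketch -- point Firefox at a self-hosted account/sync stack
    user_pref("identity.fxaccounts.autoconfig.uri", "https://accounts.example.internal");
    user_pref("identity.sync.tokenserver.uri", "https://sync.example.internal/token/1.0/sync/1.5");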


Problem with this is that it needs the sync server:

> Since the Mozilla-hosted sync servers will not trust assertions issued by third-party accounts servers, you will also need to run your own sync-1.5 server.

The tutorial refers to the old unmaintained version: https://github.com/mozilla-services/syncserver, see https://github.com/mozilla-services/syncserver/commit/8d9804...

The alternative is https://github.com/mozilla-services/syncstorage-rs which is ridiculously hard to set up.


I've posted on this a few times. Rehashing[0]:

> https://github.com/mozilla-services/syncserver/pull/294

> So basically they stopped running the older version themselves but don't consider the newer version production-ready yet. What a mess.

This doesn't seem to have improved much since...

Some of my experience self-hosting the whole stack previously:

https://news.ycombinator.com/item?id=30315816

https://news.ycombinator.com/item?id=30727935

[0]: https://news.ycombinator.com/item?id=30728966


> The alternative is https://github.com/mozilla-services/syncstorage-rs which is ridiculously hard to set up.

https://github.com/mozilla-services/syncstorage-rs/issues/49...

This is the issue to watch: SQLite support. That would make it feasible to run a simple sync server for a single user or a small group. But it is not moving forward.


I got stuck with the new one as well last time I tried. But it seems like there is a Docker option now, which seems to make things a lot easier.


I would encourage folks to listen to this podcast episode from April 2020 for some background on how the virus itself can carry detectable earmarks of being natural or lab-grown and what we see in SARS-CoV-2 based on that:

https://gimletmedia.com/shows/science-vs/dvheexn/coronavirus...

Gives you some good additional questions to ask when reports like this come out.


Peter Thiel almost certainly didn’t invent this. The idea that our decisions are heavily influenced by those around us is one of those things that’s been well-established in the relevant scientific communities for some time but has a hard time breaking into the popular consciousness because we don’t want it to be true.


Peter Thiel has mentioned the works and ideas of philosopher René Girard repeatedly. I think that's what he's referring to.


He said Peter "popularized" it, not invented.


He didn’t popularize it either.


He popularized it in tech circles where people all read from the same basket of 5 pop science books.


What's in the basket?


In my experience "Does your company measure the long term benefit of X?" is 99.99999% "no" for any X.


I'm hoping this leads to an announcement of some kind of hypervisor support in iPadOS 15. I would imagine that would come at WWDC if it's coming. Seems like it would allow software development on the iPad while retaining app sandboxing.


This is an incredible idea. Suddenly the iPad would be a general-purpose computer and its utility would skyrocket.


I see a lot of comments in here about the v2+ rule’s reason for existing being to allow importing multiple major versions of a module. That’s not it at all. As the blog post (which was easily Googled by me and explains things quite well IMO) states: “If an old package and a new package have the same import path, the new package must be backwards compatible with the old package.”

They’re trying not to break compatibility until a downstream developer explicitly opts in to that breakage. The simplest way to do that is to give the module a new name. And the simplest way to do that is append the major version to the name.

Here is the post: https://blog.golang.org/v2-go-modules

For more background on this principle, I recommend Rich Hickey’s Clojure/conj keynote “Spec-ulation” from 2016: http://blog.ezyang.com/2016/12/thoughts-about-spec-ulation-r...
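
A minimal illustration of the rule, with hypothetical module paths (two go.mod files, library and consumer):

    // go.mod of the library after the breaking change: the module path itself
    // gains the major version, so the old and new versions have different names.
    module example.com/mymod/v2

    go 1.16

    // go.mod of a consumer that has explicitly opted in to the new major version
    // (its import paths likewise use example.com/mymod/v2/...):
    module example.com/myapp

    go 1.16

    require example.com/mymod/v2 v2.0.1

Existing importers of example.com/mymod keep building against v1 until they change the import path themselves.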


That's weird. How, then, do other languages manage to solve importing packages/modules cleanly without requiring that the name of the package include the version number, while still allowing developers to specify exactly which version of the module they want to use?

Surely someone else has solved this problem before, like Rust's Cargo, or Java's Maven, or JS's NPM, or Python's PIP, Ruby's Gems, or...

Unlike in every single one of those other languages, this Golang decision means that you can't pin to a specific minor version, or a specific patch version (in the sense of MAJOR.MINOR.PATCH).

It's very in-character for the Golang team though: they value simplicity-for-themselves no matter what the complexity-for-you cost is.


A lot of languages still haven't. A problem here is that this can't be solved by the package manager alone but needs support in the module loader too (often built into the language).

Python's PIP is getting a proper dependency solver, but there can still only be one package with a given name in an environment. So if package A needs numpy>=2 and package B needs numpy<2 there is no solution.

If you release a new major version of your package and you expect this will be a problem for people, you have to use a different package name (e.g. beautifulsoup4, jinja2). That is if the name for the next version isn't getting squatted on pypi.org.


That's a fair question and something I should have clarified in my comment (though I have a feeling it is likely addressed in the blog post I linked to).

It gets into the deeper motivation behind not breaking backwards compatibility within the same-named module. There are two bad extremes here:

1. Never update dependency versions for fear of breakage. This leaves you open to security vulnerabilities (that are easy to exploit because they are often thoroughly documented in the fixed version's changelog / code). And/or you're stuck with other bugs / missing features the upstream maintainer has already addressed.

2. Always just update all deps to latest and hope things don't break, maybe running the test suite and doing a "smoke test" build and run if you're lucky. Often a user becomes the first to find a broken corner case that you didn't think to check but that they rely on.

The approach outlined by Rich Hickey in that Spec-ulation talk I linked to allows you to be (relatively) confident that a package with the same name will always be backwards-compatible and you can track the latest release and find a happy middle ground between those two extremes.

Go's approach is one of the few (only?) first-class implementations of this idea in a language's package system (in Clojure, perhaps ironically, this is merely a suggested convention). The Go modules system has its fair share of confusing idiosyncrasies, but this is one of my favorite features of it and I hope they stick to their guns.


It seems like this approach (including the major version number in the package name) has been practiced at the distro level for a long time. E.g. SDL 1.2 vs 2.0 have distinct package names on every distro I'm aware of.


It's short-sighted to think that Go package management is "simple"; a lot of thought and consideration was put into it, and it improves on other languages (even recent ones like Rust).

Good read: https://research.swtch.com/vgo-principles which answers some of your questions.


"A lot of thought and consideration" was put into the ten previous approaches they've attempted to solve this problem.

I'm starting to think that the reason they've tried so hard to avoid solving interesting problems in the language is that every time they've tried they've made something worse than every other alternative that existed in the problem space.


With great effort, sometimes simple solutions are discovered.


Other languages don't allow you to import multiple versions of a package. Allowing this opens a can of worms, mostly to do with global state. The two versions of the module can still compete for the same global resources: lock files, registering themselves somewhere, or just standard library stuff like the flag package. Unless developers actually test all their major versions running together, you are just crossing your fingers and hoping. These are the same problems we had with vendoring, and one of the reasons the 'libraries should not vendor' recommendation is made.
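
A small runnable Go sketch of that global-state problem (the library names are hypothetical, but the flag package really does panic when the same flag name is registered twice):

    package main

    import (
        "flag"
        "fmt"
    )

    // Stand-ins for example.com/lib and example.com/lib/v2: both major versions
    // register the same process-wide flag in their setup code.
    func libV1Setup() { flag.Bool("lib-debug", false, "enable lib debug output (v1)") }
    func libV2Setup() { flag.Bool("lib-debug", false, "enable lib debug output (v2)") }

    func main() {
        defer func() {
            if r := recover(); r != nil {
                fmt.Println("collision:", r) // the flag package panics with a "flag redefined" message
            }
        }()
        libV1Setup()
        libV2Setup() // two major versions linked into one binary fight over shared state
    }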


I'm not aware of Maven allowing me to use multiple major versions of a library in the same package.


You can shade a dependency, which, with the requisite budget of frustration, allows importing multiple versions of a library in the same program.

http://maven.apache.org/plugins/maven-shade-plugin/


I think that, in conjunction with minimum version selection, it makes dependency resolution much cheaper and simpler.


> Unlike in every single one of those other languages, this Golang decision means that you can't pin to a specific minor version

Yes, it is possible to pin to a minor version.

The version is specified in the `go.mod` file.

Look at this go.mod example: https://github.com/ukiahsmith/modbin1/blob/main/go.mod

The upstream `modlib1` library at v4 has tagged versions for v4.0.0 and v4.1.0. The go.mod file pins the version to the _older_ v4.0.0.
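
For anyone following along, the shape of that go.mod (hypothetical paths here, mirroring the linked example) is simply:

    module example.com/modbin1

    go 1.16

    // pins the exact tagged release, even though v4.1.0 exists upstream
    require example.com/modlib1/v4 v4.0.0

With Go's minimum version selection, the build keeps using v4.0.0 until this line is bumped (or some other dependency requires a newer v4.x).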


So why can't the module developer choose to change the url if they want to introduce breaking changes?

Why is this built into the module system and triggered by a v2? I don't think there is a compelling reason to do so, and there are several compelling reasons not to do so.

I think I prefer the status quo where there is very strong pressure to remain backwards compatible if your module is used by a few people. This leads to far less churn in the ecosystem and more stable modules.

The article makes the assumption that a major version number is used only for breaking changes, but many software packages use such version numbers for major feature releases (as indeed Go 2 seems to plan to, when/if they release it). I'm not clear why they should be forced to adopt URLs ending in /v2 as well.


I guess you are getting into Go design philosophy. These questions are similar to "why doesn't Go let lint just issue a warning for unused variables/packages instead of making it a hard compiler error?" The answers to these are unfortunately unsatisfactory, no matter how logical.

The reasonable thing is to use a language that aligns with your philosophy.


I've used Go for almost a decade and am pretty happy with it, including the rule you mention. It's fine to disagree with the decisions made; the disagreement may or may not be heeded, but if no one ever disagreed, the language would be poorer for it, IMO.


A module developer _can_ change the URL if they want to introduce a breaking change. Absolutely no problem here. But they do not _have_ to: adding the version as v2, v3, ... to the module name works too. Nobody is "forced to adopt urls ending in /v2". Go modules work pretty well and are very flexible, but they seem to differ too much from what people are used to, and nobody really seems to read the available documentation describing why it works that way and not how language X does it.


As I understand it, importers are forced to use URLs ending in /v2 for imports, and to find out which major version a project is on when they import it. Please correct me if I'm wrong.

I'd rather this were a choice made by package maintainers individually than one imposed by the Go tooling. Most packages simply don't need it, as they strive to be backwards compatible and never introduce large breaking changes. Should they always be relegated to versions like 1.9343.234234 because the go tool requires it?


Honestly, yes. According to semver, a major version change is for when you make breaking API changes. If a project is backwards compatible, it wouldn't need to increment the major version.


I swear I remember reading this somewhere in the Go docs as well but only found it in the FAQ:

https://golang.org/doc/faq#get_version

I think this is a reasonable enough approach. You see it in Debian for dependencies so I'm okay with it for Go. It outlines clearly what your dependencies are.


> During a brownout, password authentication will temporarily fail to alert users who haven't migrated their authentication calls.

This took me a second to correctly parse. Would have been better written as: “During a brownout, password authentication will temporarily fail. This is to alert users who haven't migrated their authentication calls.”


Aside: this is the English language version of spreading a bunch of ideas across multiple lines of code instead of one mega line of code.


Even kept as one sentence it could be much clearer: “To alert users who haven't migrated their authentication calls, during a brownout, password authentication will temporarily fail.”


This confused me too. It also confused me that they only occur for 3 hour periods on 2 specific days. You wouldn't be "alerted" unless you happened to attempt login during those windows.

Can anyone provide more context for this deprecation strategy?


Best to compare Brownout to other strategies that get to the same end result. The goal is that this feature (password authentication) goes away. A common default is a flag day. There's an announcement (maybe now) and on a set date that feature just goes away.

For users who pay attention (say, us) and prioritise accordingly, these strategies are the same: they know the feature is going away and can plan for that.

But for users who weren't paying attention or who didn't correctly prioritise, adding a Brownout offers some final warning that helps push more people to start preparing before the final flag day happens.

It doesn't need to get everyone, if 80% of users whose business processes still depend upon password authenticated GitHub notice something went wrong, and during diagnosis discover that what went wrong is they're relying on a deprecated feature, that's a big improvement over 100% of those processes dropping dead on flag day.

Brownout is a desirable choice where you're sure that some large population will not heed the advance notice. I bet that a lot of corporate GitHub setups have all contact mail from GitHub either going to /dev/null or to some business person who hasn't the first clue what "password authentication on the GitHub API" is. Maybe they'll email the right dev person, maybe they forward to an employee who left six months ago, either way it's far from certain anybody who needs to take action will learn from an email.

With UX feature deprecation you can tell the live users of the service. But in APIs even if you notionally have a way to feed stuff back (like a "warnings" field in the results) it's probably blackholed by lazy programmers or lands in a debug log nobody reads. So "It stopped working" is the best way to get attention, but without a Brownout that's permanent. The user learns what's wrong too late to do much about it which sucks.

Brownout is something ISRG's Let's Encrypt has used, because of course Let's Encrypt is largely an API too; they publish feature changes, but a huge fraction of their subscribers aren't paying attention, so the Brownout is the first they'll know anything is happening that matters to them.


> Best to compare Brownout to other strategies that get to the same end result.

Sure, the isolated period blackout (“brownout” is a bad metaphor) of the deprecated function has some obvious communicative utility compared to flag day, but once you accept shut-off for communication, it kind of immediately suggests communication methods that have a stronger guarantee of reaching the audience, like progressively frequent blackouts (or probabilistic denials) over a period of time leading to total shutoff.


As I understand it, any API call using the deprecated authentication scheme will fail for the 3-hour period, not just logins.

So for some folks, maybe CI will be broken, or deployment automation, or even code review.

The trade-off here is to be disruptive enough that folks will notice and fix old callers of the API, while not leaving thousands of coders permanently in the lurch (they'll notice and complain, but three hours later they can get back to work, while someone fixes the infrastructure in the meantime).


They might have determined that the bulk of the accounts that were going to authenticate (active accounts) would do so during these periods, so they'd reach the vast majority of accounts without severely breaking the systems using those accounts. This might be better than refusing queries at random, I think.


What isn’t stated in that post is that we are sending monthly email notifications to any user found using password authentication during the deprecation. As a result, we expect the vast majority of users will have been notified several times before the brownout. The brownout is mostly aimed at unearthing any forgotten automations that will break when we disable support for password authentication permanently.


They're going to make the service fail during a known period so people will know they haven't migrated.


We updated the language, thanks for pointing that out.


Just putting a comma after "fail" would fix it.


It is free for the time being. But you point out a good reason why we built dgit to support other storage backends.


Cool! We'll look into that.

