Hacker News

Python minor versions allow things to be removed and thus 'break backwards compatibility'. There is a process for warning of upcoming deprecation then removing it after that. For modules this is documented in PEP 4 (from 2000). Similar deprecation schedules are used by other important ecosystems such as Django.
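For illustration, the deprecate-then-remove cycle looks roughly like this in library code (the function names here are hypothetical):

```python
import warnings

def new_api():
    """The supported replacement."""
    return "result"

def old_api():
    """Deprecated: kept around for a few releases, then removed per the schedule."""
    warnings.warn(
        "old_api() is deprecated and will be removed in a future release; "
        "use new_api() instead",
        DeprecationWarning,
        stacklevel=2,  # attribute the warning to the caller, not this function
    )
    return new_api()
```

Callers see a DeprecationWarning for a release or two, and then the function disappears in a later minor version.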

We'd be on way more than Python 3 if the major version was bumped for each one.



This. There are plenty of backward incompatible changes in every minor version bump.


I think he’s hoping they’d adopt semantic versioning; that probably won’t happen.

https://semver.org/spec/v2.0.0.html
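The scheme's core promise is mechanical: only a change in the first number may break you. A minimal sketch of that rule (ignoring pre-release and build metadata beyond stripping them):

```python
def parse_semver(version: str) -> tuple:
    # MAJOR.MINOR.PATCH; strip any pre-release ("-rc.1") or build ("+abc") suffix
    core = version.split("-")[0].split("+")[0]
    major, minor, patch = (int(part) for part in core.split("."))
    return major, minor, patch

def may_break(installed: str, candidate: str) -> bool:
    # Under semver, only a MAJOR bump signals backward-incompatible changes
    return parse_semver(candidate)[0] > parse_semver(installed)[0]
```

So `may_break("1.4.2", "2.0.0")` is true, while `may_break("1.4.2", "1.5.0")` is false.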


It's also a bad idea. Semver doesn't distinguish between removing an API that has been deprecated for a decade and Python 2->3. Or, for that matter, between removing one function and removing large parts of the standard library.


Both are changes that will break existing code. SemVer clearly signals this. Why bother with how long something has been deprecated?


You don't see a difference between removing a rarely-used module that has been deprecated for a decade, giving developers 10 years to replace it on the one hand and overhauling the entire language, breaking not just some but most code, on the other hand?

I'm sure Python 2.7 broke compatibility, but you don't see people refusing to upgrade from 2.6 ten years after its release.


Red Hat 6 is supported for another 18 months and still ships with Py 2.6.

Python's slow-moving, softly-softly approach to breaking changes has not been good for the ecosystem. I'll be glad when 2.x is dead. 7 months 8 days 14 hours... https://pythonclock.org/


But that's just an enterprise distribution doing enterprise distribution things. If you don't upgrade in general, of course you don't upgrade Python.


"just"? From where I'm sitting that's a pretty large chunk of IT.

This has knock-on effects: authors who want to deploy scripts/apps with the minimum fuss will avoid depending on whatever /opt-based repo RH uses to ship a recent Python 2 (and the hoops you have to jump through to install and activate it). So they remain compatible with 2.6.

All of the other applications and 3rd-party modules shipping with RH 6 are also chained to Py2.6.

Many conservative shops (industry verticals) will refuse to upgrade _anything_ until they absolutely have to. I suspect we live in slightly different IT worlds (lucky you!). This is a problem I see frequently and that's why I'm suggesting Python needs more strict impetus for timely upgrades, not more decade-long opportunities for balkanization and incompatibility.


What I mean is not that Red Hat is insignificant, but that it's not special that it does not upgrade Python. Even in non-conservative shops that generally would just use the latest version, using Python 3 instead of 2 was not a no-brainer for a long time.


Of course there's a difference, but it's irrelevant here, because no matter how long breaking changes have been announced, they break existing code. I prefer a versioning scheme that signals this.


I was once a fervent devotee of semver. It's so predictable! It's structured! There's a system! Everyone loves a system, especially engineers.

I've since become disillusioned.

The problem with semver, in my experience, is that it's impossible to predict whether a change will actually break someone else's code. Of course there are certain classes of change that are more likely to cause problems for other people. Changing a function signature, or deleting a function outright, is obviously a breaking change.

But the line between breaking and non-breaking isn't a bright one. Move away from the obvious examples and things start to get murky. Even the humble bugfix can be problematic. What if a client application unwittingly relies on the buggy behavior? Now that fix is breaking for them. Is that a contrived concern? Maybe, though anyone who has written an emulator can attest that this is a real problem.
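A contrived but concrete version of the bugfix problem (all names hypothetical):

```python
def head(items):
    # v1.0.0 behavior: silently returns None for an empty list.
    # Arguably a bug -- raising IndexError would be more correct.
    return items[0] if items else None

def first_name(users):
    # A client that unwittingly depends on the buggy None:
    return head(users) or "anonymous"

# If v1.0.1 "fixes" head() to raise IndexError on empty input, that
# patch release breaks first_name([]) -- semver's PATCH promise
# ("no breaking changes") is violated without anyone noticing.
```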

What about a non-breaking feature addition? Let's say the new feature requires some extra branches in a function, but doesn't change the function's interface or behavior for people who don't use the feature. Fine, non-breaking. Now say these branches alter the function's performance, and a client application's batch job that used to run in under an hour now takes four hours. It does run, so it's not "broken," but four hours is an unacceptable runtime to the users. Is that still a non-breaking change?

What these examples demonstrate to me is that semver's breaking/non-breaking change concept is incoherent. It conceives of changes as universally breaking or non-breaking, out of context, but a change can only be breaking or non-breaking in the context of a specific client application. Even the seemingly obvious example of deleting a function is non-breaking for applications that don't use the function!

I think the way we release software reflects that we know this deep down, even if we don't admit it. Imagine how you might handle upgrading a library in an application you've written. The new library is a bugfix release. Do you upgrade the library, push it to production at 100%, and gallop off to lunch without so much as a glance over your shoulder? My guess is no. Personally I'd be running my test suite, reading the library's change log, making a gradual release, and keeping a close watch on instrumentation during and after.
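That caution also shows up in how dependencies get declared in practice; for example, a requirements.txt that pins exactly instead of trusting the major-version signal (package names hypothetical):

```
# Trusting semver: accept any compatible 1.x release sight unseen
somelib>=1.4.2,<2.0.0

# Not trusting semver: pin exactly, bump deliberately after testing
otherlib==3.1.7
```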

The interactions between software systems are simply too complex, too nuanced, too specific to the particular applications. We put all these safeguards in place and do things cautiously because we've been burnt too many times. And we've learned that in actual fact, the line between breaking and non-breaking doesn't exist.


> I prefer a versioning scheme that signals this.

How about <Very breaking changes>.<Breaking changes>.<Bugfixes>? That's what Python already does.
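And under that scheme, removals land in the middle number, which is why runtime compatibility checks key on it. For example, asyncio.coroutine was deprecated in 3.8 and removed in 3.11, a minor release:

```python
import sys

# @asyncio.coroutine was removed in 3.11 after a multi-release
# deprecation -- a backward-incompatible change in a minor version bump.
HAS_LEGACY_COROUTINE = sys.version_info < (3, 11)
```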

Semver is garbage.



