Hacker News | cpburns2009's comments

All I can say is I hate when projects just lazily list every pull request as their change log / release notes. It makes it very difficult to discover what breaking changes there are.

That has been my concern as well. The script I wrote tries to bucket entries into categories, including "Backward Incompatible Change", so those are easier to spot. Since it's automated, I'm trading some accuracy for time saved. That seemed like the only practical choice given how much history I had to backfill, but it's been surprisingly decent so far.

I am also planning to add some PR templates so contributors include the context up front, which should make any release note generation more accurate.
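
Roughly, the bucketing amounts to matching PR titles and labels against an ordered list of patterns. Here's a simplified sketch of the idea (the patterns and label names are just illustrative, not what the script actually ships with):

    import re

    # Ordered list of (category, pattern); first match wins, so breaking
    # changes are checked before anything else.
    CATEGORIES = [
        ("Backward Incompatible Change",
         re.compile(r"\b(breaking|backward[- ]incompatible)\b", re.I)),
        ("Feature", re.compile(r"\b(add|feature|support)\b", re.I)),
        ("Bug Fix", re.compile(r"\b(fix|bug)\b", re.I)),
    ]

    def bucket(pr_title: str, pr_labels: list[str]) -> str:
        """Pick a changelog category from a PR's title and labels."""
        text = " ".join([pr_title, *pr_labels])
        for category, pattern in CATEGORIES:
            if pattern.search(text):
                return category
        return "Other"

    print(bucket("Drop Python 3.7 support (breaking)", ["breaking-change"]))
    # -> Backward Incompatible Change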

Are you using any tooling to help with changelog curation? I know towncrier is all about fragments, so contributors must write a brief summary of their contribution, which would be more in line with your preference.


Not related really, as you can (and should) publish BREAKING.md separately. Release notes should inform more about new stuff than the update process. Usually PRs have details, so release notes can be easily automated. Migrations, on the other hand…

+1 on separating "how to upgrade" due to breaking changes from "what’s new". A dedicated BREAKING.md / MIGRATIONS.md is a really good idea.

One thing I am trying to do is make the generator surface breaking/migration items explicitly, but I still think anything that requires human judgment (migration steps, caveats) should be hand-curated in a dedicated document like you suggested.


Andy Serkis did a narration of the Lord of the Rings books a few years ago. My guess is it comes from that.

I finally decided to promote my gitignore pattern Python library, pathspec, from v0.x to v1 after 12 years or so.

I'm thinking of reviving my Python SQL parser prototype I have half done. Or maybe resume my Mako template plugin for PyCharm.


Are you sure it's not I.O. µ (micro) ring?

The u is for userspace.

Why would I regret it? I work on open source projects to satisfy a particular itch. I can work on an interesting topic in my free time and drop it if I get bored. I don't have the stress of worrying about deadlines or monetization. The fact that it sits out in the open rather than hidden away on some dusty hard drive is a perk because at least a few people have found them useful. If you can utilize their features to make money yourself, good for you. Why should I care outside of greed or envy?

What is your open-source project?

I have a few. My GitHub user is cpburnz.

Okay, I will visit.

What was Wenger thinking sending Walcott on that early?

I have fond memories of my Gyarados soloing the Elite Four in Yellow.

The stale-bots are even worse than that. The reporter may have responded quickly, and the bug may be acknowledged as real. But if there's simply no activity in the issue for the month following, it will be closed.

Yeah, I don't see how Python is fundamentally different from JavaScript as far as dynamism goes. Sure, Python has operator overloading, but JavaScript would implement those as regular methods. Python's __init__ and __new__ aren't any more convoluted than JavaScript's constructors. Python may support multiple inheritance, but method and attribute resolution just uses the MRO, which is no different from JavaScript's prototype chain.
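
For example, the linearized MRO is right there to inspect, much like walking a prototype chain:

    class A:
        def who(self):
            return "A"

    class B:
        def who(self):
            return "B"

    class C(A, B):
        pass

    # C's method resolution order is C -> A -> B -> object.
    print([cls.__name__ for cls in C.__mro__])  # ['C', 'A', 'B', 'object']
    print(C().who())                            # 'A': first match in the MRO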


Urban myths.

Most people who parrot Python dynamism as the root cause never used Smalltalk, Self, or Common Lisp, or even PyPy for that matter.


It really depends on the third party service.

For service A, a 500 error may be common and you just need to try again, and a descriptive 400 error indicates the original request was actually handled. In these cases I'd log as a warning.

For service B, a 500 error may indicate the whole API is down, in which case I'd log a warning and not try any more requests for 5 minutes.

For service C, a 500 error may be an anomaly, so I'd treat it as a hard error and log it as an error.
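
In code it ends up being a per-service policy rather than one blanket rule. A rough sketch of the kind of thing I mean (the service names, levels, and 5-minute cooldown are just placeholders):

    import logging
    import time

    log = logging.getLogger("integrations")

    # Services currently paused after an outage-style failure.
    PAUSE_UNTIL: dict[str, float] = {}

    def handle_500(service: str) -> None:
        if service == "service_a":
            # Routine flakiness: warn and let the caller retry.
            log.warning("%s returned 500; retrying", service)
        elif service == "service_b":
            # The whole API tends to go down: warn and back off for 5 minutes.
            log.warning("%s appears down; pausing requests for 5 minutes", service)
            PAUSE_UNTIL[service] = time.monotonic() + 300
        else:
            # Service C style: a 500 is a genuine anomaly, so it's a hard error.
            log.error("%s returned 500 unexpectedly", service)

    def may_call(service: str) -> bool:
        """Skip services that are inside a cooldown window."""
        return time.monotonic() >= PAUSE_UNTIL.get(service, 0.0)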


What's the difference between B and C? API being down seems like an anomaly.

Also, you can't know how frequently you'll get 500s at the time you're doing integration, so you'll have to go back after some time to revisit log severities. Which doesn't sound optimal.


Exactly. What’s worse is that if you have something like a web service that calls an external API, when that API goes down your log is going to be littered with errors and possibly even tracebacks which is just noise. If you set up a simple “email me on error” kind of service you will get as many emails as there were user requests.

In theory, some sort of internal API status tracker would be better: something with a heuristic for whether the API is up or down and what the error rate is. It should warn you when the API goes down and when it comes back up. Logging could still show an error or a warning for each request, but you don't need to get an email about each one.
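
Something along these lines, say; the window size, threshold, and notification mechanism are all placeholders, but the point is to alert on transitions rather than on every failed request:

    import logging
    from collections import deque

    log = logging.getLogger("api_status")

    class ApiStatus:
        """Track recent call results and only alert on up/down transitions."""

        def __init__(self, window: int = 20, threshold: float = 0.5) -> None:
            self.results = deque(maxlen=window)  # recent True/False outcomes
            self.threshold = threshold
            self.is_down = False

        def record(self, ok: bool) -> None:
            self.results.append(ok)
            error_rate = 1 - sum(self.results) / len(self.results)
            if not self.is_down and error_rate >= self.threshold:
                self.is_down = True
                # One alert per outage instead of one per user request.
                log.error("API looks down (error rate %.0f%%)", error_rate * 100)
            elif self.is_down and error_rate < self.threshold:
                self.is_down = False
                log.info("API is back up")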


I forgot to mention that for service B, the API being down is a common, daily occurrence and does not last long. The behavior of services A-C is from my real world experience.

I do mean revisiting the log severities as the behavior of the API becomes known. You start off treating every error as a hard error. As you learn the behavior of the API over time, you adjust the logging and error handling accordingly.

