
Personally, I would like to see shttp:// (HTTP over SSH) as a valid protocol.

Realistically, I know this would end up being just as bad and broken as every other web tech.

However, due to the superb OpenSSH implementation, I tend to view SSH as the superior transport technology compared to TLS.


Different audiences mean TLS and SSH face very different constraints, policies, and technology choices for success.

Most obviously for trust. TOFU is simple, and while not fool-proof, it's at least easy to reason about the consequences. Using SSH with certificates (which somebody is bound to mention) is an afterthought, and it shows.
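For illustration, here's a minimal sketch of TOFU-style pinning in Python, applied to a TLS endpoint for concreteness. The function name and pin-file path are made up for the example, and a real SSH client pins the raw host key rather than a PEM certificate, but the trust model is the same: believe whatever the server presents on first contact, and require the same fingerprint ever after.

  import hashlib
  import pathlib
  import ssl

  def tofu_check(host, port=443, store="pins.txt"):
      pem = ssl.get_server_certificate((host, port))   # whatever the server presents
      fingerprint = hashlib.sha256(pem.encode()).hexdigest()
      pins = pathlib.Path(store)
      known = dict(line.split() for line in pins.read_text().splitlines()) if pins.exists() else {}
      if host not in known:
          known[host] = fingerprint                    # first use: trust and record it
          pins.write_text("".join(f"{h} {f}\n" for h, f in known.items()))
          return True
      return known[host] == fingerprint                # later uses: must match the pin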

There's also a very different default mindset about who is authenticating to whom and why. In TLS the server must authenticate and clients largely do not. In SSH the server's "authentication" is often limited to just proving possession of a private key corresponding to some public key, but the client must provide a username and state up front how it plans to authenticate before proceeding.

This is why the FIDO OpenSSH integration results in a file on your laptop (or whatever client) with local information, whereas WebAuthn (the FIDO integration for HTTPS) doesn't do anything like that.

As you'd expect, although the underlying primitives aren't dissimilar (Diffie-Hellman-style key agreement, AES-encrypt everything, bind identity to the encrypted session using public-key signatures), the details are tailored to their application. TLS isn't a better SSH, and SSH isn't a better TLS.


It doesn't have to be. Mail the keys.


That vaguely touches on (warning: going off topic here, my apologies) my main problem with Python. dict.keys() returns an iterator, which is all fine and good, very efficient I suppose. However, the only reason I ever use dict.keys() is so I can sort the keys, and you can't sort an iterator, so every single dict.keys() in my code base is in the form list(dict.keys()). And it's not even like you need the iterator; a dict works perfectly fine as a key iterator on its own. /rant


> sorted(my_dict.keys())

Or just:

> sorted(my_dict)

Sorting keys like that is also a bit odd. Keys are insertion-ordered; use that property where you can and avoid needlessly re-sorting.
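For the record, a quick illustration of both points (the dict contents here are just an example):

  d = {"b": 2, "a": 1, "c": 3}
  list(d)           # ['b', 'a', 'c']  -- keys come back in insertion order
  sorted(d)         # ['a', 'b', 'c']  -- iterating the dict directly is enough
  sorted(d.keys())  # same result; the .keys() call is redundant here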


Glorious. Thank you, kind soul, for listening to my slightly raving rant.

Now I feel several sorts of ashamed at missing what is kind of a core function.

salutes


Fwiw, make sure you're using a newish Python version. Dicts weren't always insertion-ordered by default.


3.6+ (as a CPython implementation detail; guaranteed from 3.7 on)


> the only reason I ever use dict.keys() is so I can sort the keys

Hum... I use dict.keys() every time I use dictionaries to describe some unspecified data set, which happens way more frequently than any use case that requires sorting the keys. It's even incentivized by the language with the kwargs construct and is mainstream usage within libraries.

I do agree that the result of dict.keys() should have a sort method. There is no reason not to, but it doesn't follow from there that making it an iterator is only a minor gain.

That's the same situation as the GP asking for the removal of a main feature of the library just because he doesn't work with the kind of software that uses it. Congratulations: remove cursors from the library and suddenly Python is a lot less useful for data science.


It doesn’t and shouldn’t have a sort method because python prefers functions that act on objects rather than methods attached to objects.

That’s why you use “len(obj)” rather than “obj.len()”, and why you use “sorted(dict_keys)” rather than adding a “sort” method to random iterables.


Oh, ok. I was comparing it with 'list.sort()', but yeah, that one mutates the list and returns None (just checked), so 'sorted' is the expected way to sort a dictionary's keys, and my post is just wrong.


'list.sort()' is one of those leftovers from "Python before it was really Python", and it's not particularly worth changing because it's still useful sometimes.

In general, Python prefers builtins that operate on protocols. You use "len(x)" because it's consistent; otherwise you might have to call "x.len()", "x.length", "x.size()", or any number of other possibilities. And if an object doesn't define it? You're out of luck. With the protocol approach, any object that supports __len__ can be passed to len().

The same applies to "sorted()". list.sort() is useful in some specific circumstances where "sorted()" is not adequate, but in general the answer to "how do I sort something?" is to use sorted().
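As a rough sketch of the protocol idea (the Bag class here is invented for illustration): because it implements __len__ and __iter__, the len() and sorted() builtins just work on it, without the class ever growing its own .length or .sort.

  class Bag:
      def __init__(self, *items):
          self._items = list(items)

      def __len__(self):            # the protocol len() looks for
          return len(self._items)

      def __iter__(self):           # the protocol sorted() and list() look for
          return iter(self._items)

  bag = Bag(3, 1, 2)
  len(bag)      # 3
  sorted(bag)   # [1, 2, 3]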


I will admit I would not know the difference between gitflow and a hole in the ground. But that diagram looks an awful lot like the workflow you tend to pick up when using Fossil:

  * the primary dev branch
  * branch and merge for feature development
  * a long-term branch for each release
The author's only real concern appeared to be no rebase, and rebase is already a big no-no in any sort of distributed development; just adopt that attitude for your personal development as well.


> rebase is already a big no-no in any sort of distributed development

You are ill-informed on this subject, and I have a proof-by-existence.

Sun Microsystems, Inc., used a rebase workflow from the mid-90s all the way until the end, at least for the codebases that made up Solaris (OS/Net and other consolidations), and it did so while using a VCS that did not support rebase out of the box (Teamware, over NFS). It was essentially a git rebase workflow years before Larry McVoy created BitKeeper, and even more years before Git was created, and even more years before the rebase vs. merge workflow controversies.

When I was at Sun we had ~2,000 developers on mainline Solaris. The rebase workflow worked fantastically well.

Large projects had their own repositories that were rebased periodically, and team members would base their clones on the project's clone (known as a "gate").

Large projects' gates would get archived for archeological interest.

All history upstream was linear. No. Merge. Commits. (We called them "merge turds", and they were a big no-no.)

Solaris development was undeniably distributed development.


> The author's only real concern appeared to be no rebase

This is my take as well. That blog post doesn't have much to say about anything, mostly weak points stemming from a personal distaste for some things, and it boils down to 'I like rebase more'.


It's easy to lose track of this in the face of scene-graph vector formats like SVG, but you have to remember that canvas + JavaScript can be thought of as a procedural CAD system, that is, the epitome of drawing tools.

SVG is a static, weak alternative to true procedural generation.

For reference, the PostScript language is the same way. Hand-written procedural PostScript is an amazing drawing tool.
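As a rough sketch of the procedural idea, in Python for consistency with the rest of this page (the canvas API itself is JavaScript, and the curve and numbers here are arbitrary): a short loop computes a rose curve point by point, something trivial to generate procedurally but tedious to hand-author as static markup, and the same generator can feed ctx.lineTo() on a canvas or be flattened into an SVG path.

  import math

  def rose_points(petals=5, radius=100.0, steps=720):
      for i in range(steps + 1):
          t = 2 * math.pi * i / steps
          r = radius * math.cos(petals * t)    # polar rose: r = R * cos(k * t)
          yield (r * math.cos(t), r * math.sin(t))

  # The generator, not the markup, is the drawing; emitting SVG is just one backend.
  path_d = "M " + " L ".join(f"{x:.1f} {y:.1f}" for x, y in rose_points())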


The mafia is just government by other means, and "The government" hates the competition.


Or as I like to call it, xkcd 1179 time


Marketing tends to follow a tick-tock model.

New and improved! (we made the box smaller)

25% more! (box back to normal size, price goes up.)


I thought Ceph had a reputation for being easy to run, at least compared to its rivals in the distributed storage space (Gluster, Lustre, et al.).


Sure, but when you're running a service you kind of want to know what you're running. I'm sure Drew is able to figure out the beginning rather easily, but you kind of want to be able to see pitfalls, common misconfigurations, etc. It takes a bit of tinkering :).


DarkPlaces Quake does this. See sv_cullentities_trace.

https://www.youtube.com/watch?v=hf94PbE-b9I


My understanding of Swiss neutrality (simplistic and very poor) is that the Swiss mercenaries were too good, and after they were finally defeated a ban on fielding mercenary companies was enforced, which led naturally to the neutral stance Switzerland takes today.

https://en.wikipedia.org/wiki/Swiss_mercenaries

Except the pope, being the pope, got to keep his.

