I doubt that he imagined that he could write a law that would force the encryption algorithms to yield to the ASIO. It was all about using a big stick to force people to help break encryption, by inserting back doors etc.
Sure he was a lawyer and banker and probably never should have been involved with things like encryption and the NBN, but that's politics for you.
DO's branding and a lot of their offering is pretty good, but their locations for non-US customers are much worse than many of their competitors. For instance (and a particular point for me), they still don't have any presence in Australia after over half a decade of it being marked as "under review" on their customer feedback page.
Having geographical locations to back up the quality of the offering is a step forward IMO.
Most likely Telstra (the telco with ~40% market share) charges too much for peering network traffic in Australia? Same reason the Google Cloud free tier excludes only China and Australia.
Vultr has a POP in Sydney and seem to manage just fine, offering plans similar to DO. I like DO but as I'm based in New Zealand, it's a no brainer to go with Vultr due to their Sydney POP.
> Inside of the home directory a file ~/.identity contains the JSON formatted user record of the user. It follows the format defined in JSON User Records.
Why couldn't this have gone into ~/.config/? There's enough garbage cluttering my home directory.
Since the user record is cryptographically signed the user cannot make modifications to the file on their own (at least not without corrupting it, or knowing the private key used for signing the record).
That sounds like something that shouldn't even be in the user's own home directory.
...and JSON, of all things. Every aspect of systemd which I've worked with seems to be full of sprawling complexity and overengineering, and this is no exception. I know "it's not the UNIX philosophy" is a common dismissive complaint about it, but looking at the design gives a very different feeling than the "original UNIX" designs, which felt humble and simple.
JSON is particularly poorly suited for data which will be digitally signed because it does not reliably round-trip: if you read the data in with a parser and re-serialize it, the output you get will often not be bitwise identical to the input.
My favourite example of this is JSON's treatment of numbers. There is literally no reliable way to serialise a large number without putting it in a string. I'm just waiting for the first security vulnerability caused by a JSON decoder not deserialising a large integer correctly.
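To make the round-trip problem concrete, here's a minimal sketch using Python's standard json module (the exact output bytes depend on the implementation, which is precisely the problem for signatures):

```python
import json

# The same document, parsed and re-serialized, need not come back
# byte-for-byte identical -- which breaks any naive scheme that
# verifies a signature against re-serialized bytes.
original = '{"n": 1E12}'
roundtrip = json.dumps(json.loads(original))
print(roundtrip)  # {"n": 1000000000000.0}
```

The input `1E12` comes back as `1000000000000.0`; another implementation might emit `1e+12` or `1000000000000`. Any of these invalidates a signature computed over the original bytes.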
Likely because, despite looking similar, those key/value and .ini-style config files used by every other system tool are actually hundreds of subtly incompatible or poorly specified formats. Unicode handling, whitespace handling, quoting, indentation, multi-line values, line continuations, comments, non-string datatypes, lists, maps... you use YAML for human-maintained files and YAML or JSON for machine-generated ones, so people can use an off-the-shelf parser instead of implementing your particular key/value specification. Even TOML is better than rolling your own, since at least it has a common specification.
I once found a .cfg file with an incomprehensibly foreign configuration format specific to that one file. A standardized config format like YAML or JSON has lots of gotchas, but you can learn them once and then know them forever. With custom_format5345 you get the problems of poorly designed configuration formats (lots of gotchas, sometimes more than YAML) plus the downside of having to relearn an undocumented configuration format.
Making it JSON is just begging for users to think they can edit it. I hope they put a big banner disclaimer at the top at least. (Does JSON officially have comments yet?)
Please tell me what you'll end up with if you encode and then decode the number 1E12 using two different JSON implementations. Check the spec, and then tell me if it's simple.
Does the JSON number type permit me to store the number 123,456,789,098,765,432,123,456,789,098,765,432,123? When I read it in with a parser, what integer value will my code see?
This is not an academic question: large integers are common, for example, as cryptographic keys.
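What your code sees depends entirely on the decoder. Python's json module happens to preserve arbitrary-precision integers, but any decoder that maps JSON numbers to IEEE-754 doubles (as JavaScript's JSON.parse does) silently loses precision. A quick sketch:

```python
import json

big = 123456789098765432123456789098765432123
text = json.dumps({"key": big})

# Python's json round-trips arbitrary-precision ints exactly:
assert json.loads(text)["key"] == big

# Simulate a JavaScript-style decoder that stores every number as a
# double, by forcing integers through float:
as_double = json.loads(text, parse_int=float)["key"]
print(as_double == big)  # False: the nearest double is a different value
```

Two spec-conforming parsers, two different values for the same document.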
I'm the kind of guy who goes through $HOME every once in a while and starts looking for workarounds or filing bugs requesting XDG Base Directory spec support.
But in this case I can't complain. It's literally about the behavior of the home folder. This one makes complete sense to me.
> I'm the kind of guy who goes through $HOME every once in a while and starts looking for workarounds or filing bugs requesting XDG Base Directory spec support.
I used to be that guy. Now I’m not. Re workarounds: usually that means env vars, but all that crap in env is copied to the execution environment of every single process, which is pretty awful. Re bug reports: usually only a few “that guy”s care at all, sometimes there’s endless debate about whether things go into XDG_DATA_HOME or XDG_CACHE_HOME, the occasional accepted PR requires so much effort I might as well just try to forget about all the garbage sitting in the $HOME.
I would suspect (though I'll admit without looking into it) that in order to know where $XDG_CONFIG_HOME is, one first has to load the user info. Bit of a chicken and the egg scenario.
Thinking out loud, there is no reason .identity needs to be in the root of your home directory. The .identity file contains arbitrary metadata, which could just as easily specify which subdirectory should be mounted as $HOME. You could also move your other $XDG_ directories like .config out of $HOME.
It's needed before the actual contents of the home directory are available (i.e. mounted), if I understand correctly. Nor is it actually user-modifiable:
> Since the user record is cryptographically signed the user cannot make modifications to the file on their own (at least not without corrupting it, or knowing the private key used for signing the record).
> This file system should contain a single directory named after the user. This directory will become the home directory of the user when activated. It contains a second copy of the user record in the ~/.identity file, like in the other storage mechanisms.
Not quite sure what the purpose of this copy is, given users can delete / replace it?
Just out of curiosity, why is it the opposite of a standard if it's configurable? XDG home seems to have a sane default of .config, but also provides configuration with the $XDG_CONFIG_HOME environment variable. To me it looks like that's part of the specified XDG Base Directory standard.
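Right: the spec itself defines the fallback, which is small enough to sketch (hypothetical helper name; the behavior — use $XDG_CONFIG_HOME when set and non-empty, else ~/.config — is what the XDG Base Directory spec prescribes):

```python
import os

def xdg_config_home() -> str:
    # Per the XDG Base Directory spec: honour $XDG_CONFIG_HOME when it
    # is set and non-empty; otherwise fall back to ~/.config.
    value = os.environ.get("XDG_CONFIG_HOME", "")
    return value if value else os.path.join(os.path.expanduser("~"), ".config")

os.environ["XDG_CONFIG_HOME"] = "/tmp/example-config"  # example value
print(xdg_config_home())  # /tmp/example-config
```

So a configurable override with a mandated default is still a standard, not the opposite of one.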
Passwords for Blizzard accounts are also case-insensitive, as they are converted to upper case before hashing. Try it!
I first found this while working on a WoW server emulator in around 2009, but I believe it's been the case since Battle.net 1.0 was launched in 1996. In order to preserve backwards compatibility, it's never been changed.
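A sketch of why upper-casing before hashing makes passwords case-insensitive. The `USERNAME:PASSWORD` SHA-1 construction below is how open-source WoW server emulators implement the legacy auth scheme — an assumption drawn from emulator sources, not official Blizzard documentation:

```python
import hashlib

def legacy_account_hash(username: str, password: str) -> str:
    # Both fields are upper-cased before hashing (as in open-source
    # WoW emulator auth code), so case differences vanish entirely.
    data = f"{username.upper()}:{password.upper()}".encode("utf-8")
    return hashlib.sha1(data).hexdigest()

print(legacy_account_hash("Arthas", "Frostmourne") ==
      legacy_account_hash("ARTHAS", "fRoStMoUrNe"))  # True
```

Once every case variant maps to the same stored hash, the server has no way to distinguish them — hence backwards compatibility locks the behavior in.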
I feel like as a collective set of industries across development and security we have been telling people for years not to reinvent the wheel because it's easier to stand on the shoulders of giants. Despite this, there seems to be an increasing push (from, for example, Go programmers) to go back to DIY. Is this not tearing away all the work we've done to stop people building their own auth, their own crypto, their own file read/write mechanisms and use ones that are battle-tested and safe?
I should backtrack a tiny bit before I'm downvoted to oblivion.
I'm not opposed to libraries to handle a lot of stuff (like auth, crypto, and file I/O). I think it's typically irresponsible to reinvent these things for any reason other than "it's a fun personal project".
What I dislike about a lot of the frameworks is how locked-in I feel. There's (often) only one way to do things, and I don't always feel it's the best way, like my aforementioned parsing logic.
Sure, the framework's way might not always be the best way of doing things but I prefer to just accept it and move on. It saves spending time debating, deciding, experimenting with different ways of structuring code, when it probably doesn't really make that big of a difference in the end anyway. Most web applications are really not that different from each other.
On top of that it prevents really bad coders from rolling their own in a really bad way. Remember when everybody was coding their own MVC framework in PHP?
As for C# not parsing JSON correctly, that seems more like a problem with a language or library not anything to do with frameworks?
I overly simplified, but basically the issue came from having to figure out how the route was binding, then finding the class that was handling that binding, then finding the class that handled the parsing, etc.
It's not intrinsic to all frameworks I'm sure, but imposed structure like that does inherently lead to bureaucracy, which can make figuring stuff out difficult.
This sort of thing could not appeal to me any more, honestly. I've been trying to work out the most cost-efficient way of doing exactly this.
Of course, I am aware that _the_ most cost-efficient way is to accept that having the brand new and shiny every year is - for the most part - utterly pointless, but hey.
I've played WoW for something like 11 years, from its peak in around 2008-2010 through its slow decline since. Logging in now and facing the reality that this game is genuinely fading away is difficult for me to properly process. The GP comment mentioned how non-transient games such as Pokémon are, and how games that actually do fade away are a new phenomenon. I hadn't really thought about this until the last couple of WoW expansions and their effect on the playerbase.
It makes me really sad to think about; WoW (and video gaming in general) has been a huge part of my life, and tinkering with video games under the hood is what kickstarted my career. I don't know if I will ever find a replacement for it when it's gone.
I believe that at one stage the Quad9 resolvers were owned by IBM. A brief look at the site indicates it was transferred to CleanerDNS, which is a 501(c)(3). Do you know how much involvement IBM still has in the project, if any?
This is a rather unique take on Go that I haven't seen before. A quick scan of your post history indicates you feel quite strongly (negatively) about Go. May I ask what inspired this particular take, and what language background you have?