I feel you (like many people) got burned by the steep learning curve. Empirically, some pretty high-powered companies use Nix successfully. It's of course always difficult to know the counterfactual (would they have been fine with Ubuntu?), but the power to get SBOMs, patch a dependency deep in the dependency stack, roll back entire server installs, etc. really helps these people scale.
nixpkgs is also the largest and most up-to-date package set (followed by Arch), so there's clearly something in the technology that allows a loosely organised group of people to scale to that level.
NixOS has very limited usage, with few companies adopting it for critical or commercial tasks. It is more common in experimental niches.
One of the main issues with nixpkgs is that users have to rely on overlays to override or patch a package. This can lead to obscure errors, because if something fails in the original package or a Nix module it's hard to pinpoint the problem. Additionally, the heavy use of symlinks in the directory hierarchy further complicates things, giving the impression that NixOS is a patched-together and poorly designed structure.
As someone who has tried Nix, uses NixOS, and created my own modular configuration, I made optimizations and wrote some modules to scratch my own itch. I realized I was wasting time trying to make one tool configure other tools. That’s essentially what NixOS does through Nix. Why complicate a Linux system when I can just write bash scripts and automate my tasks without hassle? Sure, they might say it’s reproducible, but it really isn’t. Several packages in NixOS can fail because a developer redefined a variable; this then affects another part of the module and misconfigures the upstream package. So, you end up struggling with something that should be simple and straightforward to diagnose.
I know it's not a proper measurement, but I can't remember the last time I missed something in the AUR, whereas in my short time on NixOS I've missed two apps and had one more disappear in a NixOS channel upgrade.
I've looked into this but saw hugely variable throughput, sometimes as little as 20 MB/second. Even at full throughput I think S3 single-key performance maxes out at ~130 MB/second. How did you get these huge S3 blobs into Lambda in a reasonable amount of time?
* With larger Lambdas you get more predictable performance; 2 GB RAM Lambdas should get you ~90 MB/s [0].
* Assuming you can parse faster than you read from S3 (true for most workloads?), that read throughput is your bottleneck.
* Set a target query time, e.g. 1 s. At ~90 MB/s, that means for queries to finish in 1 s each record on S3 has to be 90 MB or smaller.
* So partition your data in such a way that each record on S3 is smaller than 90 MB.
* Forgot to mention: you can also do parallel reads from S3; depending on your data format / parsing speed that might be something to look into as well.
This is a somewhat simplified guide (e.g. for some workloads merging data takes time and we're not including that here) but it should be good enough to start with.
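To make the arithmetic concrete, here's a tiny sketch (Haskell, just for illustration; the 90 MB/s and 1 s figures are the assumptions from this thread, not AWS guarantees):

    -- Back-of-the-envelope sizing for the guide above.
    -- Largest record (in MB) you can scan within the target query time,
    -- given a sustained single-stream read throughput from S3.
    maxRecordSizeMB :: Double -> Double -> Double
    maxRecordSizeMB throughputMBps targetSeconds = throughputMBps * targetSeconds

    main :: IO ()
    main = do
      print (maxRecordSizeMB 90 1)        -- ~90 MB records for a 1 s budget
      print (maxRecordSizeMB (4 * 90) 1)  -- ~360 MB if you do 4 parallel ranged reads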
After reading it I found it much harder to enjoy movies showing bad security though (such as heists, nuclear anything, ...).
E.g. from the book I learned about the IAEA recommendations for safekeeping nuclear material [1], and it's pretty clear that smart people spent some time thinking about the various threats.
Anyway, rambling. It's a great and very entertaining book, go read it!
I think this is a common misunderstanding. The images are in the public domain. Nothing stops Getty (or you, or anyone) from selling them, even though you can just use them for free.
The value-add service that Getty offers is legal indemnification, i.e. they cover the legal costs if the image turns out to be copyrighted after all. To offer this service they spend some time and money upfront to research images' copyright status.
Whether you think that's good value for money is up to you.
> they spend some time and money upfront to research images' copyright status.
From the discussion a few days ago, that doesn't seem to be the case. It seems to be more like they just gamble on not getting caught most of the time.
https://news.ycombinator.com/item?id=22340547
The other issue brought up recently is when they try to enforce their licensing of public domain images, which is a lot more shady. Selling you a licence, sure, why not. Complaining you’re using a public domain image without Getty’s licence? Threatening legal action over the same?
There may be a lot of value in a lot of their portfolio. But there are some warty rough edges too.
I don't misunderstand it at all. I am aware it's legal. I just think that Getty should be completely transparent about the copyright status, instead of granting a restricted license to use something they don't own the rights to grant in the first place.
If they really do indemnify you, it's actually a pretty huge benefit. It's pretty easy to use content that is 'royalty free' but then get sued later on when you find out it actually wasn't.
More often than not recruiters (external and in-house) make up large numbers to get you to reply. Here is a verbatim quote from a mail I got last year:
> For the right candidate year 1 comp will be up to £500k.
After going through a three-hour coding test, phone screen + onsite, it turned out to be £120k + a smallish bonus. Maybe I'm not the right candidate, but most likely £500k was never on the table.
It's not the first time this has happened to me or people I know either. What I'm getting at is that I'd take the Oxford Knight compensation numbers with a pinch of salt. Those 300k jobs _do_ exist, but nowhere near as many as in the US.
Additionally, levels.fyi and Glassdoor data don't support a large number of 300k jobs in London.
The original SES doesn't seem to do anything to prevent Meltdown/Spectre attacks [1].
This version removed direct access to "Date" [2], but I'm not sure I'd trust any code running in the same process space, given how hard it is to fix Spectre in general.
What I really want is a JavaScript API (it doesn't need to be a "VM", just a wrapper for an existing engine) that makes it trivial to manage JavaScript engine spaces, but where, instead of them merely being separate memory allocators (as would be the case if you allocated two JavaScriptCore runtimes or engines or whatever they're called), the code is run in a separate process that doesn't contain anyone else's memory or information, and all communication with it is done via some kind of IPC (whose use you would then minimize).
Spectre relies on speculatively accessing data and then extracting information about said data through a side channel, despite the speculative execution never committing. A separate process means the address spaces are separate, so speculative execution cannot reach the data in the first place.
Meltdown is similar, but because a CPU affected by Meltdown does not perform permission checks during speculative execution, you can read memory that the execution environment doesn't even have permissions for. E.g. kernel memory.
The fix for Spectre is thus to only consider address spaces a security boundary; interpreters or JITs cannot (in general) be considered security boundaries any more.
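For what it's worth, the separate-process-plus-IPC shape described above is straightforward to sketch. Here's a minimal Haskell example (Haskell only to stay consistent with the other snippets in this thread); "untrusted-worker.js" is a hypothetical script that reads one JSON request per line on stdin and writes one result per line on stdout:

    import System.IO
    import System.Process

    main :: IO ()
    main = do
      -- Run the untrusted evaluator in a separate OS process, so it has its own
      -- address space; speculation inside the child can't reach this process's memory.
      (Just hin, Just hout, _, ph) <-
        createProcess (proc "node" ["untrusted-worker.js"])
          { std_in = CreatePipe, std_out = CreatePipe }
      hSetBuffering hin LineBuffering
      -- All communication is explicit, line-oriented IPC over pipes.
      hPutStrLn hin "{\"expr\": \"1 + 1\"}"
      reply <- hGetLine hout
      putStrLn ("worker replied: " ++ reply)
      _ <- waitForProcess ph
      pure ()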
It should survive; the European Banking Authority is based in London (it will move post-Brexit), and the UK Treasury was a major influencer on this legislation.
Worth saying also: Open Banking actually came out of the UK Competition and Markets Authority - it's just become tied up with PSD2 (as it's one way to achieve compliance with that legislation).
I do agree, but I also remember that Haskell is 27 years old. Newer Haskell-inspired languages like PureScript don't have a built-in list type any more.
There's a lot of old stuff in Haskell, e.g. String, which is a list of Char. We have a number of new preludes (base, foundation, protolude, ..) that improve the situation a lot, so I'm not sure we really need a "Python 3" moment.
We could definitely be more aggressive in pointing out that you need to use a new prelude though.
Any recommendations on which new prelude to use? I'm fairly competent with Haskell but have never looked into these and am feeling decision paralysis -- too many choices.
I've tried a bunch of alternative preludes and my experience is that they make it very hard to integrate with code that uses the standard prelude. Foundation seems to have the highest chance of success right now, ClassyPrelude seems to be the most widely used, and Protolude seems to be more like a framework for building your own prelude.
If you're competent with Haskell, don't bother. A new Prelude will save you a bunch of imports; that's it. If you're fretting over which one to use, you've already spent too much time making this decision; just stick with the regular Prelude and use your imports.
The worst thing about the default Prelude is that it encourages bad behavior among new users--for instance, with partial functions like `head`. Experienced people know what to avoid and what to work around. So if picking an alternative Prelude seems like too much work, it is.
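To make the `head` point concrete (listToMaybe is plain base, nothing prelude-specific):

    import Data.Maybe (listToMaybe)

    -- Prelude's head is partial: it compiles fine and then crashes at runtime.
    firstOrBoom :: [Int] -> Int
    firstOrBoom = head        -- firstOrBoom [] throws "Prelude.head: empty list"

    -- A total alternative: the empty case shows up in the type.
    firstMaybe :: [Int] -> Maybe Int
    firstMaybe = listToMaybe

    main :: IO ()
    main = print (firstMaybe [])  -- Nothing, no crash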
I have a project where I have mostly dumped the Prelude. All it is doing is saving me a bunch of imports. That's nice but not earth-shaking.
We're mostly using Protolude at the moment. It has some nice properties (quick sketch after the list), e.g.
* "id" renamed to "identity" so you can use id in your own code
* panic functions like "notImplemented"
* a generic string converter "toS"
* lots more
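A quick sketch of what that looks like in practice (assuming the protolude package and GHC; nothing here is specific to our codebase):

    {-# LANGUAGE NoImplicitPrelude #-}
    {-# LANGUAGE OverloadedStrings #-}

    import Protolude

    -- "identity" is Prelude's "id"; the short name is free for your own bindings.
    sameText :: Text -> Text
    sameText = identity

    -- "toS" converts between the common string types ([Char], Text, ByteString).
    greeting :: [Char] -> Text
    greeting name = "hello " <> toS name

    -- "notImplemented" is a typed placeholder that only panics if evaluated.
    fancyFeature :: Text -> Text
    fancyFeature = notImplemented

    main :: IO ()
    main = putText (sameText (greeting "protolude"))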
But to be fair most modern preludes work OK.
Foundation is a bit different in that it doesn't ship a huge amount yet, but it has the potential to eventually replace the core prelude with saner default types like UTF-8-encoded strings. If you need to ship to production yesterday, I would not use Foundation just yet.
Everything that's wrong with [] is wrong with [Char] and more so. In a Unicode world, it rarely makes sense to iterate over codepoints in a string, and it's rarely useful to prepend codepoints or drop codepoints at the beginning of a string. Usually an array-like string (e.g. Text) is better; occasionally something like Seq Char might be useful.
> In a Unicode world, it rarely makes sense to iterate over codepoints in a string
I love that people think that being Unicode somehow makes strings into opaque objects that you can never inspect or manipulate. Do you think that strings magically pop into existence fully formed and then magically disappear into a magic box and come out as rendered glyphs on a screen?
I don't disagree that [Char] is a stupid way to represent strings. Strings should very obviously just be byte arrays. Go does it right; it's one of the few things it does right. It turns out the creators of UTF-8 know how to deal with Unicode properly. Who would have thought?
I just mean that inspecting and manipulating strings is sufficiently complex that most of the time you use something like libicu to do it, so the apparent convenience of [] is not useful to the average programmer.
I have no problem with an opaque string type that supports being serialised into bytes. I also have no problem with a string type that exposes the reality that it's internally represented as UTF-8. But I can understand that maybe that's a little imperfect, because you might not want to maintain a perfectly normalised, correct UTF-8 encoding all the time. E.g. if you concatenate "blahblahlblaha" and the combining acute symbol followed by "blahlbahlblah", you might want to just store them together and normalise them later or something? I don't know.
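FWIW the two forms you'd be storing are easy to see with Text; a small illustration (the actual normalisation step would come from something like text-icu or unicode-transforms, which I'm only mentioning, not calling, here):

    {-# LANGUAGE OverloadedStrings #-}

    import qualified Data.Text as T
    import Data.Text (Text)

    -- The same rendered glyph, written two ways:
    precomposed, combining :: Text
    precomposed = "\x00e9"     -- U+00E9, a single precomposed codepoint
    combining   = "e\x0301"    -- "e" followed by U+0301 COMBINING ACUTE ACCENT

    main :: IO ()
    main = do
      print (T.length precomposed)      -- 1
      print (T.length combining)        -- 2
      -- Equal only after normalisation (e.g. NFC), which you can defer.
      print (precomposed == combining)  -- False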