15rthughes's comments

You really can’t believe someone wants you to justify a stupid rule and a toxic workplace environment?


I honestly don’t care whether this is “professional” enough for Hacker News.

You have the self-inflated ego of a psychopath if you think this is true. This comment is the living embodiment of the Dunning-Kruger effect.


Adding the integers 2 and 2 together is definitely a grey area.


There’s an argument to be made that micro packages follow the ideas of the Unix philosophy, but comparing is-odd and a derivative that simply negates its result to piping two well-optimized coreutils together is definitely a stretch.
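
For reference, here's roughly what that micro-package pattern boils down to in TypeScript (an illustrative sketch, not the actual published source of is-odd or is-even):

    // Approximately what is-odd does, minus its input validation:
    const isOdd = (n: number): boolean => Math.abs(n % 2) === 1;
    // And the "derivative" package is essentially a one-line negation:
    const isEven = (n: number): boolean => !isOdd(n);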

I’d also say that implying JS developers inherently have a wider range of experience and knowledge compared to developers of coreutils is flat out absurd.


>> "I’d also say that implying JS developers inherently have a wider range of experience and knowledge compared to developers of coreutils is flat out absurd."

I was way too ambiguous, so I see how you understood it that way. I meant in the downward direction. There are more packages from more people in JS, so you end up with lots of first projects, small experiments, etc. So you have 100% of coreutils being decent to amazing, while it's probably closer to 1% with NPM. Both ways have their advantages, but it's easier to footgun with the highly inclusive NPM approach.


Deep neural networks and their training process rely on multivariable calculus; this is nowhere near a new type of mathematics.
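
To spell that out, backpropagation is the multivariable chain rule applied layer by layer. With purely illustrative notation (loss L, layer activations a^(k), weights W^(k)):

    \frac{\partial L}{\partial W^{(k)}} =
      \frac{\partial L}{\partial a^{(n)}}
      \frac{\partial a^{(n)}}{\partial a^{(n-1)}}
      \cdots
      \frac{\partial a^{(k)}}{\partial W^{(k)}}

That's old calculus; the novelty is in scale and engineering, not the mathematics.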


>JavaScript developers were often less trusted or harder to find.

Everyone and their mother writes in JavaScript nowadays. It’s gotten to be a bubble waiting to burst at this point.


The whole point of this notation is to describe limiting behavior; the true value of N isn’t important in the slightest.
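
Concretely, the standard textbook definition only constrains behavior past some threshold:

    f(n) = O(g(n)) \iff \exists\, c > 0,\ n_0 \ \text{such that}\ |f(n)| \le c \cdot g(n) \ \text{for all}\ n \ge n_0

Any finite prefix of values, and any constant factor, is irrelevant by construction.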


That is actually my point. Remember, the goal is to nub an array into a set.

Perfect hash tables, where there are no collisions, are O(1), but they're not suitable for casual use: you need to guarantee your data set never produces hash-bucket collisions. That is why most implementations of "maps" in programming languages switch to very wide trees after a certain size. A common example is HAMTs (Bagwell is famous for these).

They're flexible and perform well on modern hardware for arbitrary key lookup.
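
A minimal sketch of the indexing trick, assuming the usual 32-way branching (the helper name is made up for illustration):

    // A HAMT consumes the hash a few bits at a time; 5 bits per level
    // selects one of 32 children at each node.
    function childIndex(hash: number, level: number): number {
      return (hash >>> (level * 5)) & 0b11111; // unsigned shift, mask to 0..31
    }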

If your constant-time hash table is taken to the worst case (as we do in O analysis), then every insert would collide into the same bucket and require a potentially full linear search of the data every time to see whether it already sits in your oversubscribed bucket. Over n inserts, that's O(n^2).
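
A sketch of that degenerate case, where every key lands in one bucket (the function name is illustrative):

    // n inserts, each scanning everything stored so far: O(n^2) total.
    function nubDegenerate(xs: number[]): number[] {
      const bucket: number[] = []; // the single oversubscribed bucket
      for (const x of xs) {
        if (!bucket.includes(x)) bucket.push(x); // linear scan every time
      }
      return bucket;
    }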

You could make it better by sorting on insert. That's O(n log n) because it's comparison-based.

I've provided a much better solution and links to code that does it. Maps and hash tables don't solve this problem any better than just sorting the array.
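
For concreteness, a sketch of the sort-based approach (not the linked code, just the idea; note it gives up first-occurrence order, unlike Haskell's nub):

    function nubBySort(xs: number[]): number[] {
      const sorted = [...xs].sort((a, b) => a - b); // comparison sort: O(n log n)
      const out: number[] = [];
      for (const x of sorted) {
        // duplicates are adjacent after sorting, so one linear pass suffices
        if (out.length === 0 || out[out.length - 1] !== x) out.push(x);
      }
      return out;
    }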

It's worth noting that if we're ignoring preprocessing and the domain is well constrained, we could probably get O(1) per operation when doing a bunch of these ops in bulk, if we really wanna go off the deep end. But that's cheating because we'd be ignoring construction costs, as you are here.
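
A sketch of that "cheating" version, assuming keys are nonnegative integers below some known bound (the bound here is made up):

    const DOMAIN = 1 << 16; // assumed upper bound on key values, purely illustrative
    function nubBounded(xs: number[]): number[] {
      const seen = new Uint8Array(DOMAIN); // the construction cost we're ignoring
      const out: number[] = [];
      for (const x of xs) {
        if (!seen[x]) {
          seen[x] = 1; // each membership test and update is O(1)
          out.push(x);
        }
      }
      return out;
    }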

