This subject always seems to get bogged down in discussions about ordered vs. unordered keys, which to me seems totally irrelevant. No one seems to mention the glaring shortcoming, which is that, since dictionary keys are required to be hashable, Python has the bizarre situation where dicts cannot be dict keys, as in...
`{{'foo': 'bar'}: 1, {3:4, 5:6}: 7}`
...and there is no reasonable builtin way to get around this!
You may ask: "Why on earth would you ever want a dictionary with dictionaries for its keys?"
More generally, sometimes you have an array, and for whatever reason it is convenient to use its members as keys. Sometimes the array in question happens to be an array of dicts. Bang, suddenly it's impossible to use said array's elements as keys! I'm not sure what infuriates me more: said impossibility, or the Python community's collective attitude of "that never happens or is needed, therefore no frozendict for you".
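For what it's worth, one workaround sketch (not a builtin, just a convention you have to adopt yourself): freeze each dict into a `frozenset` of its items before using it as a key. This assumes the values are themselves hashable; the recursive call handles nested dicts.

```python
def freeze(d):
    """Convert a dict into a hashable frozenset of items, recursing into nested dicts."""
    return frozenset(
        (k, freeze(v) if isinstance(v, dict) else v) for k, v in d.items()
    )

lookup = {freeze({'foo': 'bar'}): 1, freeze({3: 4, 5: 6}): 7}
print(lookup[freeze({5: 6, 3: 4})])  # insertion order of the keys doesn't matter -> 7
```

The obvious downside is that every caller has to remember to call `freeze`, which is exactly the ergonomic gap a builtin frozendict would fill.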
> the glaring shortcoming which is that, since dictionary keys are required to be hashable, Python has the bizarre situation where dicts cannot be dict keys
There is nothing at all bizarre or unexpected about this. Mutable objects should not be expected to be valid keys for a hash-based mapping — because the entire point of that data structure is to look things up by a hash value that doesn't change, but mutating an object in general changes what its hash should be.
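A minimal sketch of the failure mode: if an object's hash depends on mutable state, mutating it after insertion strands the entry in the wrong hash bucket. (The `Point` class here is hypothetical, purely for illustration.)

```python
class Point:
    """Deliberately broken: hashable but mutable, with hash tied to state."""
    def __init__(self, x):
        self.x = x
    def __hash__(self):
        return hash(self.x)
    def __eq__(self, other):
        return isinstance(other, Point) and self.x == other.x

p = Point(1)
d = {p: "here"}
p.x = 2           # mutate the key in place; its hash changes
print(p in d)     # False: the dict now probes the wrong bucket
```

The entry isn't gone, it's just unreachable by normal lookup, which is exactly why Python refuses to hash its mutable collections in the first place.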
Besides which, looking things up in such a dictionary is awkward.
> More generally, sometimes you have an array, and for whatever reason, it is convenient to use its members as keys.
We call them lists, unless you're talking about e.g. Numpy arrays with a `dtype` of `object` or something. I can't think of ever being in the situation you describe, but if the point is that your keys are drawn from the list contents, you could just use the list index as a key. Or just store key-value tuples. It would help if you could point at an actual project where you encountered the problem.
Turning a dictionary into a tuple of tuples `((k1, v1), (k2, v2), ...)`: isn't that a reasonable way around it?
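A sketch of that tuple-of-tuples idea: sorting the items first gives a canonical order, so equal dicts produce equal keys regardless of insertion order (this assumes the dict's keys are mutually sortable and everything is hashable).

```python
def dict_key(d):
    # Sort items so equal dicts map to equal tuples regardless of insertion order.
    return tuple(sorted(d.items()))

table = {dict_key({'a': 1, 'b': 2}): 'x'}
print(table[dict_key({'b': 2, 'a': 1})])  # -> 'x'
```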
If you want to use objects as hash map keys, you need to think about how to hash them and how to compare them for equality; that's all there is to it. There will be complications, such as floats, which have a tricky notion of equality, or, in Python, mutable collections, which deliberately aren't hashable.
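The float complication shows up vividly with NaN in CPython, where dict lookup short-circuits on object identity before falling back to `==`:

```python
nan = float('nan')
d = {nan: 1}
print(nan in d)           # True: lookup checks identity before equality
print(float('nan') in d)  # False: a different NaN object, and NaN != NaN
```

So the "same" key can be findable or unfindable depending on whether you hold the original object, which is the kind of subtlety you sign up for when defining key equality.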
> I have zero idea how to make small talk with people I haven't known for years.
Here's a trick, it sounds stupid but it works like magic.
Just talk about mundane things that are physically present. Mention the color of the wallpaper. Mention the painting on the wall. Talk about how noisy the room is, or about the food on the plate in front of you. Literally act like you're an image classifier tasked with outputting a text summary of the scene you find yourself in...
If you're the cerebral type like I am, you'll feel afraid these topics will bore the other person. But surprisingly, they don't, if the other person is neurotypical.
To me, the fact that a blog post would be used to train AI is a good thing. Hell yes I want my writing to inform the future zeitgeist! I guess it helps that the things I want to write about are novel things no one has ever written about. I could see how AI would demoralize me if I were otherwise employed writing Generic Politics Blog #84773. But as someone who writes original unique content, I'm like, hell yes, the more readers the merrier, whether they be human or AI or some unholy combination!
Some of us are born with small frenula of the tongue (or we undergo tongue-tie surgery as kids) and can thus perform Khecari mudra without the traditional self-mutilation used by yoga masters. https://en.wikipedia.org/wiki/Khecar%C4%AB_mudr%C4%81 This can be useful for cleaning tonsil stones or post-nasal drip, but of course you must do so discreetly, since people would consider that absolutely disgusting.
If you want to read out loud for long stretches of time and you hate taking breaks to catch your breath: you can read out loud while inhaling too! (It feels and sounds super weird though so this isn't very useful in practice.)
And here's a party trick related to OP's super power. Pick a distant object and cross your eyes so as to see it double, preferably with the two doubles distant from each other (i.e., cross your eyes significantly). Then, alternately switch between staring at the left double, and the right double. If you do it right, it will look like your eyes are moving in a bizarre alien way.
Then copy that and paste it a bunch of times to make it multi-line.
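If you'd rather generate the repeated pattern than copy-paste it, a couple of lines will do (counts are arbitrary, tweak to taste):

```python
# Print a block of repeated WORDs suitable for cross-viewing.
for _ in range(10):
    print("WORD " * 12)
```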
Cross your eyes so that the WORDs overlap (all except the leftmost and rightmost). You now see two cursors instead of one. Position your two cursors anywhere you want, then insert a space in order to make the corresponding WORD (or ORDW or RDWO or DWOR) sink into the screen. (Or rise, if you parallel-view.)
We used to do this in the computer labs back in 6th grade.
I wrote a paper about doing this using human eyes as the "repeating pattern" (either someone else's, or your own in a mirror): https://philpapers.org/archive/ALEDSK.pdf ...You can use this trick to make boring meetings or conversations mildly more amusing (but be careful not to look like a clown crossing your eyes).
If you're an expert at this, you can even do it to your own hands. Hold both hands in front of you, but with one of them palm-away and one of them palm-toward you, so that they have the same shape, then cross- or parallel-view them to get an illusory middle third hand. Walk around while focusing on the third hand and it's a seriously trippy effect.
Another "super power" application similar to OP: the ability to confirm whether or not two distant digital clocks' seconds-digits are perfectly in sync. Since they're distant, it takes time to shift one's gaze from one to the other, making it hard to confirm whether they're in sync. But cross your eyes so as to reduce the distance, and voila.
Yet another application: quickly assume the same head-tilt angle as your conversation partner. Suppose they tilt their head to the left by N degrees and you want to tilt yours the same way; how can you be sure you have the exact correct tilt? Easy: parallel-view their eyes (as described in the aforementioned paper). You will HAVE to tilt your head the same as them in order to see their "third eye" (and once you've locked on to it, you can effortlessly adjust your head tilt as they do, using the third eye as a guide).
Stereogramming your colleagues' eyes during boring meetings.
Ha
Edit: I accidentally did something similar by imagining the crease on an N95 mask as a smile near their nose. It made them look like ducks and I had to bite my tongue so hard to not laugh. I could not unsee it.
If you're distant enough / the people are sitting close enough, you can stereogram two people's faces together. You usually only get fleeting moments of crispness when their heads are aligned correctly though.
Yep! If I knew someone IRL who was into this kind of stuff, I'd really love to experiment with this sort of thing and mirrors. Arrange so that you can stereogram your conversation partner's face with a mirror image of your own face (and that he can do the same with your face and a mirror image of his face). If anyone's in NYC and interested in these sorts of things, my email is in my HN profile "about".
My Library of Ordinal Notation Systems shows a way you can systematically write more and more complex code, with no end; even if you had access to strong AGIs, they could never "finish" the exercise. https://github.com/semitrivial/IONs
Compile error messages in a classical typed language: "Error: Object of type 'StructA' cannot be assigned to variable of type 'StructB'"
Compile error messages in TypeScript when you use a library like React: "Error: Cannot reconcile <5 pages of arcane gibberish> with <5 pages of different arcane gibberish>"
I wonder how many dependencies could be eliminated by systematically searching for low-hanging fruit and addressing it ad hoc. For example, if commonly-used library A uses one minor thing from (and thus imports all of) library B, which in turn imports hundreds of other libraries, then someone should add the minor thing in question to A directly and remove the dependency on B there.
It's interesting to think of how this sort of "neighborhood watch" could be incentivized, since it's probably way too big of a task for purely volunteer work. It's tricky though because any incentive to remove dependencies would automatically be a perverse incentive to ADD dependencies (so that you can later remove them and get the credit for it).
Then the code for library B still exists and still potentially has bugs; the only difference is that the same bug now has to be fixed in project A1, then again in project A2 and project A3, etc. There is a cost there too, outlined in the recent article 'Tech Debt: My Rust Library Is Now a CDO': https://news.ycombinator.com/item?id=39827645
I guess there's a hybrid model where you're able to select exactly what you're depending on and pull it in dynamically at build/package time.
I've thought a little about, for example, building something that could slice just the needed utility functions out of a shell utility library. (Not really to minimize the dependency graph, just to reduce the source-time overhead of parsing a large utility library that you only want a few functions from.)
Would obviously need a lot of toolchain work to really operationalize broadly.
I can at least imagine the first few steps of how I might build a Nix expression that, say, depends on the source of some other library, runs a few tools to find and extract a specific function plus the other (manually identified) bits of source necessary to build a ~library with just that one function, and then lets the primary project depend on that. It smells like a fair bit of work, but not so much that I wouldn't try it if the complexity/stability of the dependency graph were causing me trouble?
Isn't that already just the role of tree-shaking optimizers? At that point the problem seems to be languages that don't have good tree-shakers, don't/can't tree-shake library dependencies, or maybe that tree-shaking should happen earlier and more often than it often does?
Observably, the "granularity pendulum" in the JS ecosystem seems very directly related to the module system. CommonJS was tough to tree-shake, so you sometimes had wild levels of granularity where even individual functions might be their own package in the dependency graph. ESM is a lot easier to tree-shake, and alongside ESM adoption you see more of the libraries that once published dozens or hundreds of sub-packages repackage back into just one top-level package.
I imagine the answer's ~yes from the perspective of something you build and deploy (and I agree it's relevant to the article, though I'll caveat that I read xamuel as asking the question more broadly).
Relying on a post-build process to avoid deploying unused code and dependencies still exposes you to a subset of the problems, since most if not all of the dependency graph is still present at build time.
Sufficiently rich correct-by-definition metadata on the internal and external dependencies of each package might let you prune some branches without requiring the dependency to be present, but broadly speaking there are a lot of cases where that can't really help?
> Some package managers have "features" (e.g. Rust's cargo) or "extras" (e.g. Python) which might be what you are talking about.
I don't think so (though I agree that mechanisms like this are one way to approach the problem).
AFAIK both of these examples are mostly used to provide ~optional behavior (usually to exclude dependencies if you don't need the behavior). This can minimize the set of dependencies, but it rests on the maintainers' sense of what the core of their library is and what's ancillary. Put the other way around, both require the software's maintainers to anticipate your use case and feel like it was a good use of their time to split things up very granularly.
In xamuel's hypothetical of library A using one minor thing from library B, this almost certainly means reusing less of library B than its maintainers anticipate.
I can imagine this working in cases where the package is a true bundle of discrete utilities that almost no one will need all of (the package itself is an incredibly small core/stub and each utility is a feature/extra), and the maintainers want to intentionally design it for modular consumption.
But it's hard to imagine many maintainers going through the work of dicing a cohesive library up into granular units when they think most users will be consuming it whole?