Your intuition is exactly correct. An investor with tax to offset can essentially access the same future upside at a discount.
However, this discussion will be a perfect introduction to "finances at this level", where about 60% of the action is injecting more variables until you can fit a veneer of quantification onto any narrative.
Best lines in this article. But it doesn't get to what is, IMO, a very important point: why can't these processes easily be structured? Here are some good reasons:
- Your process interacts with an unstructured external world (physical reality, customer communication, etc.)
- Your process interacts with differently structured processes, where the best agreed-upon transfer protocol is itself unstructured (could be external, like data sources, or even internal between teams with different taxonomies)
- Your process must support a wild kind of variability that is not worth categorizing (e.g. every kind of special delivery instruction a customer might provide)
Believing you can always solve these with the right taxonomy and process diagram is like believing there is always another manager to complain to. Experienced process design instead pushes semi-structured variability to the edges, acknowledges those edges, and watches them like a hawk for danger.
We should ABSOLUTELY be applying those principles more to AI... if anything, AI should help us decouple systems and overreach less on system scope. We should get more comfortable building smaller, well-structured processes that float in an unstructured soup, because it has gotten much cheaper for us to let every process have an unstructured edge.
Ok, so you define violence to include advocating for a law that tramples someone's rights. The next person says that advocating for laws is its own inalienable right, so you trampled him. And the whole semantic-redefinition snake just eats its own tail.
If we want constitutional rights to have any force, we have to push for a world where words mean something.
Words do mean something - which is exactly why “violence” already has recognised psychological and coercive forms in law and medicine. Pretending otherwise isn’t defending meaning, it’s narrowing it for comfort. People who’ve lived under regimes of fear understand that harm doesn’t need batons to leave marks. But sure, if the only kind of wound you acknowledge is one that bleeds, then the rest of us must be imagining things.
Most people just mean physical violence when they say violence; if you use the word differently, you will trick many people into thinking you're saying something you aren't.
“Most people” once thought depression was laziness and marital rape was impossible. Appealing to what most people think isn’t clarity, it’s inertia. Language changes because our understanding of harm does. The fact that many still default to the physical doesn’t make the rest untrue - it just shows how far denial can pass for common sense.
This is so true. When you get DSA wrong, you end up needing insanely complex system designs to compensate -- and being great at testing just can't keep up with the curse of dimensionality from having more moving parts.
Those human imperfections likely decrease randomness - for example, leaving cards that started adjacent more likely to remain adjacent than strict chance would allow.
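You can see this effect in a quick simulation. The sketch below (my own illustration, not from the comment) uses the Gilbert-Shannon-Reeds model of a human riffle shuffle and counts how many originally adjacent card pairs remain adjacent after a few riffles, versus after a fully random permutation:

```python
import random

def riffle(deck, rng):
    """One imperfect riffle (GSR model): cut at the middle, then
    interleave by dropping cards from each half with probability
    proportional to that half's remaining size."""
    cut = len(deck) // 2
    left, right = deck[:cut], deck[cut:]
    out = []
    while left or right:
        if rng.random() < len(left) / (len(left) + len(right)):
            out.append(left.pop(0))
        else:
            out.append(right.pop(0))
    return out

def adjacent_pairs_kept(deck):
    """Count originally adjacent pairs (i, i+1) that are still
    next to each other, in either order."""
    pos = {card: i for i, card in enumerate(deck)}
    return sum(1 for c in range(len(deck) - 1)
               if abs(pos[c] - pos[c + 1]) == 1)

rng = random.Random(0)
n, trials = 52, 2000
few_riffles = uniform = 0
for _ in range(trials):
    d = list(range(n))
    for _ in range(3):      # only a few imperfect shuffles
        d = riffle(d, rng)
    few_riffles += adjacent_pairs_kept(d)
    d = list(range(n))
    rng.shuffle(d)          # fully random permutation, for comparison
    uniform += adjacent_pairs_kept(d)

print(few_riffles / trials, uniform / trials)
```

After three riffles, the average number of surviving adjacent pairs stays well above the roughly two pairs a uniformly random permutation of 52 cards retains, which is exactly the residual adjacency structure the comment describes.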
This all day. Programmer since the C64: C++, Java, F#, Python, JavaScript, and everything in between. Code was never the point, but it wasn't just commerce either - it's fun making machines do things they couldn't before. AI is an s-tier upgrade to that mission.
Feels like cultivating acceptance and indifference to your own entanglements is the most isolationist thing you can actually do. To be entangled is to be biased about what's happening to you... do we think the crocodile was indifferent to the escape of its prey, or to being culled in an act of revenge?
Anyway, if folks enjoy this theme I recommend Scavengers Reign, which does a beautiful job of illustrating struggle with biological entanglement.
Drawing on your other comment about spurious correlations, might there be a more direct mathematical test for an unexpectedly high number of aligned correlations?
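One way such a test could be sketched is a permutation test: count how many variable pairs show a strong correlation, then shuffle each column independently to destroy all cross-variable structure and ask how often chance alone produces at least as many. This is my own illustrative sketch (the threshold and toy data are assumptions, not from the thread):

```python
import numpy as np

def count_strong_corrs(X, threshold=0.5):
    """Count column pairs whose |Pearson r| exceeds the threshold."""
    r = np.corrcoef(X, rowvar=False)
    iu = np.triu_indices_from(r, k=1)
    return int(np.sum(np.abs(r[iu]) > threshold))

def permutation_pvalue(X, threshold=0.5, n_perm=500, seed=0):
    """Shuffle each column independently to break all cross-column
    structure; the p-value is the fraction of shuffled datasets with
    at least as many strong correlations as the real data."""
    rng = np.random.default_rng(seed)
    observed = count_strong_corrs(X, threshold)
    hits = 0
    for _ in range(n_perm):
        Xp = np.column_stack([rng.permutation(col) for col in X.T])
        if count_strong_corrs(Xp, threshold) >= observed:
            hits += 1
    return observed, (hits + 1) / (n_perm + 1)

# Toy demo: 8 series that all share one latent driver, plus noise.
rng = np.random.default_rng(1)
latent = rng.normal(size=200)
X = latent[:, None] + 0.8 * rng.normal(size=(200, 8))
obs, p = permutation_pvalue(X)
print(obs, p)
```

A small p-value says the number of aligned correlations is higher than independent series would produce, without committing to which specific pairs are "real" - which seems closer to the question being asked than testing each pair separately.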
Trust was its own reason. It's useful for the whole world to have a currency and business environment that operates by rules, even when the rules aren't perfect or fair.
That environment isn't being outcompeted by better, fairer rules - it's just getting vandalized for a few people's gain, creating risk for everyone else.
On most of these comprehension tasks, AI is already superhuman (in part because Gary picked scaled tasks that humans are surprisingly bad at).