Disclosing information is not a medically neutral act. Knowing you have a medical condition can create a great deal of anxiety and anguish and prompt lots of tests. If the result of all those tests and anxiety is "no action indicated," you've basically given your patient a condition that reduces their quality of life for no upside.
I once had a misdiagnosis of an incurable illness that I didn't actually have, and the stress of dealing with that caused me to develop another, very real medical condition that took a year to get under control.
Hypothetically (totally made-up numbers): if a positive result on the test means there is a 1/10000 chance you have cancer, and a negative result means a 1/20000 chance, while the test itself has a 1/1000 chance of giving the patient an adverse reaction, I think the question most patients would ask is why the test was run in the first place.
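To make that concrete with the same made-up numbers, here is a rough sketch (it deliberately ignores the downstream value of the extra certainty, so treat it as an illustration, not an analysis):

```python
# Purely illustrative, using the made-up numbers from the comment above.
p_cancer_if_positive = 1 / 10_000
p_cancer_if_negative = 1 / 20_000
p_adverse_reaction = 1 / 1_000

# How much does the test actually shift your estimated risk?
risk_shift = p_cancer_if_positive - p_cancer_if_negative  # 0.00005

# The chance of being harmed by the test itself dwarfs that shift.
print(p_adverse_reaction / risk_shift)  # 20.0: the test is 20x more likely to hurt you than to move the needle
```

With numbers like these, "why was the test run in the first place?" is exactly the right question.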
We're not talking about hiding information, we're talking about not looking for it in the first place. Information that is costly to acquire but not actionable once acquired.
In some cases the knowledge itself is a curse. These commenters mostly have no clue what they’re talking about and it shows.
My spouse found out she had a benign brain tumor, an incidental finding from a brain scan done for some other reason. She now has to get annual scans to make sure the size doesn’t change. Guess what? It hasn’t changed in 5 years.
You might say “better safe than sorry!” To that I say: bullshit. It’s caused her lots of unnecessary stress and anxiety. EVERY year she goes back to the testing center and stresses out about whether it has changed in the last year. She sleeps poorly sometimes because of the anxiety, and so on. Knowing every microscopic issue within your body is not always a net benefit! Quality of life matters too, not just longevity.
I think it really depends on the type of cancer. Actionable information is the most useful information.
So my wife has gone through all this extra stress to MAYBE catch a cancerous tumor (28%). That’s assuming it even grows large enough to affect her before she dies of something else. And I see that the survival rate for some brain tumors, even when found very early, is very poor (5-10% for some, like glioblastoma).
Lots of “what ifs” here. And for what? All I’m arguing is that knowledge is not always actionable, and what’s not actionable can keep you up at night.
The point I’m making is that we should not be pursuing a life of zero risk and perfect decisions. Life is filled with risk (and good and bad luck). That’s just life.
It depends on your personality or worldview. Some people would be much more comfortable lowering their chances of “what ifs” than leaving it all to fate.
I agree with you. If a patient expresses that sentiment to their doctor, the doctor should act accordingly and order the extra screening. At the end of the day it should be a conversation with your provider.
There should definitely be an honest discussion about pros and cons. And not just the physical, but the mental aspect as well.
Just like the opinion would be different if the size hadn't changed but she had embarked on a risky treatment that left her permanently disabled or dead.
Hindsight is twenty-twenty. If you take the wrong course of action, of course you are going to be upset. But that goes for both possible choices. It's not like the choice is ignoring it versus taking some safe but possibly unnecessary action. Both choices could kill you.
On the other hand, the placebo effect works even when the placebo is clearly labelled "placebo". So I guess there's potential to tell people needlessly disconcerting facts and then take the edge off with reassuring bluster and functionless comforts.
This is a "why don't you just" answer. The reason the establishment does this is that we know the outcome of telling people is worse than not telling them. This is an expensive lesson learned over over a century of medical treatment.
Yes, because I have met many doctors whose judgement I profoundly mistrust, and prefer my own. Sometimes their whole paradigm is flawed, but sometimes they're just not informed about my own values. And I would rather die by my own misjudgment than theirs.
I'm an old guy, it's happened several times. The last time, a surgeon removed a tumor, found that it was malignant ... and then told me that it was no big deal, it was a kind of cancer that would not have caused serious problems. She said if she had to get cancer she'd pick this kind. I wish she had told me that before the surgery. I may have had it anyway, but maybe not. Wouldn't you value being fully informed more after that? Surgeons have as much of a conflict of interest when selling their own services as anyone else.
I'm not sure what your point is. This discussion is about medical researchers making decisions about thousands or millions of patients in aggregate... what you're describing is a common thing (you often don't know how bad a tumor is until it's removed).
The doctor didn't know that before removing the tumor (almost certainly; the alternative is medical fraud).
Doctors going into uber-salesman mode and selling dangerous surgery is super common; it’s so common among heart surgeons it’s comical. The point is, blindly trusting doctors and their judgment will in all likelihood just turn you into a sickly perma-patient.
Also, if the outcome is worse by informing, doesn't that imply a violation of "first, do no harm"? Which, to be fair, the OP says they wouldn't prioritize...
Depends on how you interpret: "First do no harm". Is that an obligation to minimize the harm to an individual patient? Or is the goal to maximize the health of many patients? Like I've said elsewhere, medical reasoning is subtle.
> We already have an extreme shortage of available healthcare workers. We don't need to stress them further because 20% of the population suddenly decides they need 80 elective surgeries to remove things that would've gone away or stayed benign on their own.
Strawman. No one is suggesting adding extra stress to healthcare workers. It's also not your or your doctor's call to make: let's gatekeep this patient's cancer because our hospital can't deal with the workload. What a truly wicked idea.
To help alleviate the extreme shortage of available healthcare workers, we should instead allow those who want these elective surgeries to pay for them! Drive money into healthcare, scale up treatments, drive money into research. Let the system work.
It's significantly more wicked to pretend that tests, treatments, and more aren't done by healthcare workers (yes, even private ones), and to inundate them with unimportant medical procedures while truly sick people are dying.
Yes, this is true even if the person opting for the elective surgery has millions, potentially even billions of dollars to pay with. Having money doesn't make your illness more important.
Don't get all holier-than-thou on topics like this; it's already a difficult-enough topic.
I often wonder if people who make these kinds of statements simply don't know how market forces work, or if they know how market forces work but just choose to pretend they don't exist in certain contexts where that reality feels particularly unfair...
Demand suppression doesn't work. "Having money doesn't make your illness more important" sounds like a noble sentiment, but by applying it in the real world you'd actually be reducing the total size of the pool of resources available to treat everyone. Talk about holier than thou...
Of course I know how market forces work. They are not the only forces we can manage. They are not the only levers we can pull. Economics are cool, but not almighty. There is no scenario in which a hospital gets extra profit, and turns around and simply reduces the cost for those in need. No - no additional resources are being removed. That's not how hospitals work.
You claim to know, yet you still make statements that are obviously foolish given said knowledge? Imagine applying this logic to other industries:
> The existence of folding phones inundates phone manufacturers with orders for devices with unimportant luxury features when there are people who are struggling to afford even a basic entry-level phone. Having money doesn't make your needs more important. We should ban folding phones to make entry-level phones more accessible to poor people.
Do you agree the paragraph above is unreasonable and that trying to implement it in the real world would make things worse for everyone? If so, why did you just propose the same thing for medical care a couple comments back?
That's what I'm so curious about. I see this all the time: when the subject matter is emotionally or politically charged, people revert to a "there's a fixed-size pie and I want to make sure I get a big piece" economic model, even while appearing, in other contexts, to understand that that's not how things work.
> It's significantly more wicked to pretend that tests, treatments, and more aren't done by healthcare workers (yes, even private ones), and to inundate them with unimportant medical procedures while truly sick people are dying.
Strawman+ad hominem. No one is suggesting to pretend _anything_. Charge premiums for these tests based on how "unimportant" they are. Use market forces to move money from those willing to pay, to those who cannot.
Actually, neither: you were suggesting that money makes the problem valid. It doesn't. You can charge all you want; it either accomplishes nothing (why are we trying to find ways to funnel more profit to hospitals???) or it pulls staff toward these premium services when they would be better used elsewhere.
This is an unbelievably inefficient way to try to move money to those who need it. The market correction should happen elsewhere.
It's easy to think of notation like shell expansions, that all you're doing is replacing expressions with other expressions.
But it goes much deeper than that. My professor once explained how many great discoveries are paired with new notation. The new notation signifies "here's a new way to think about this problem", and many of today's unsolved problems will eventually give way to powerful notation.
The DSL/language-driven approach first creates a notation fitting the problem space directly, then worries about implementing the notation. It's truly empowering. But that is the Lisp way. The APL (or Clojure) way is about making your base types truly useful: 100 functions on 1 data structure instead of 10 on 10. So instead of creating a DSL in APL, you design and lay out your data very carefully, and then everything just falls into place, a bit backwards from the first impression.
One of the issues DSLs give me is that the process of using them invariably obsoletes their utility. That is, the process of writing an implementation seems to be synonymous with the process of learning what DSL your problem really needs.
If you can manage to fluidly update your DSL design along the way, it might work, but in my experience the premature assumptions of initial designs end up getting baked in to so much code that it's really painful to migrate.
APL, on the other hand, I have found extremely amenable to updates and rewrites. I mean, even just psychologically, it feels way more sensible to rewrite a couple of lines of code versus a couple of hundred, and in practice I find the language very well suited to quickly exploring a problem domain with code sketches.
I was playing with Uiua, a stack- and array-based programming language. It was amazing to solve Advent of Code problems with just a few lines of code. And, as GP said, once you get the data into the right form of array, the handful of functions in the standard library is sufficient.
> One of the issues DSLs give me is that the process of using them invariably obsoletes their utility.
That means your DSL is too specific. It should be targeted at the domain, not at the application.
But yes, it's very hard to make them general enough to be robust yet specific enough to be productive. It takes a really deep understanding of the domain, and even that is not enough.
Another way of putting it is that, in practice, we want the ability to easily iterate and find that perfect DSL, don't you think?
IMHO, one big source of technical debt is code relying on some faulty semantics. Maybe initial abstractions baked into the codebase were just not quite right, or maybe the target problem changed under our feet, or maybe the interaction of several independent API boundaries turned out to be messy.
What I was trying to get at above is that APL is pretty great for iteratively refining our knowledge of the target domain and producing working code at the same time. It's just that APL works best when reifying that language down into short APL expressions instead of English words.
That particular quote is from the "Epigrams on Programming" article by Alan J. Perlis, from 1982. Lots of ideas/"Epigrams" from that list are useful, and many languages have implemented lots of them. But some of them aren't so obvious until you've actually put it into practice. Full list can be found here: https://web.archive.org/web/19990117034445/http://www-pu.inf... (the quote in question is item #9)
I think most people haven't experienced the whole "100 functions on 1 data structure instead of 10 on 10" thing themselves, so there are no attempts to bring it to other languages; you can't pursue what you're not aware of to begin with.
Then the whole static-typing hype (that is the current cycle) makes it kind of difficult, because static typing kind of tries to force you into the opposite: one function you can only use for whatever type you specify in the parameters. Of course, traits/interfaces/whatever-your-language-calls-them help with this somewhat, even if it's still pretty static.
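A tiny Python sketch of that interface escape hatch, using typing.Protocol as a stand-in for traits (the names here are made up for illustration):

```python
from typing import Protocol

class Sized(Protocol):
    """Anything with a __len__ satisfies this protocol."""
    def __len__(self) -> int: ...

# One function usable with any type that fits the protocol,
# rather than one function per concrete parameter type.
def is_empty(x: Sized) -> bool:
    return len(x) == 0

print(is_empty([]), is_empty("abc"), is_empty({1: 2}))  # True False False
```

It helps, but the shape of the data still has to be declared up front, which is the "pretty static" part.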
> static typing kind of tries to force you into the opposite
The entire point being to restrict what can be done in order to catch errors. The two things are fundamentally at odds.
Viewed in that way typed metaprogramming is an attempt to generalize those constraints to the extent possible without doing away with them.
I would actually expect array languages to play quite well with the latter. A sequence of transformations without a bunch of conditionals in the middle should generally have a predictable output type for a given input type. The primary issue I run into with numpy is the complexity of accounting for type conversions relative to the input type. When you start needing to account for variable bit widths, things tend to spiral out of control.
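A small numpy illustration of what I mean (the exact promotion rules also depend on your numpy version, so take the comments as indicative):

```python
import numpy as np

a = np.arange(5, dtype=np.uint8)
b = np.arange(5, dtype=np.int16)

# The output dtype depends on both operands, not just on `a`.
print((a + b).dtype)     # int16: uint8 gets promoted to the wider signed type

# Reductions and float-valued ufuncs promote on their own terms.
print(a.mean().dtype)    # float64
print(np.sqrt(a).dtype)  # a small float type picked by the ufunc, not by you

# A pipeline written against uint8 input can therefore produce a different
# chain of dtypes when fed int32 data, which is where the bookkeeping spirals.
```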
"APL is like a diamond. It has a beautiful crystal structure; all of its parts are related in a uniform and elegant way. But if you try to extend this structure in any way - even by adding another diamond - you get an ugly kludge. LISP, on the other hand, is like a ball of mud. You can add any amount of mud to it and it still looks like a ball of mud."
-- https://wiki.c2.com/?JoelMosesOnAplAndLisp
Some of us think in those terms and daily have to fight those who want 20 different objects, each 5-10 levels deep in inheritance, to achieve the same thing.
I wouldn't say 100 functions over one data structure, but, e.g., in Python I prefer a few data structures like dictionaries and arrays, with 10-30 top-level functions that operate over those.
If your requirements are fixed, it's easy to go nuts and design all kinds of object hierarchies - but if your requirements change a lot, I find it much easier to stay close to the original structure of the data that lives in the many files, and operate on those structures.
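A toy sketch of that style (the fields and functions are made up, just to show the shape of it):

```python
# Plain data: a list of dicts mirroring what's in the files, no class hierarchy.
records = [
    {"name": "a.csv", "rows": 120, "tags": ["raw"]},
    {"name": "b.csv", "rows": 87,  "tags": ["raw", "dirty"]},
    {"name": "c.csv", "rows": 301, "tags": ["clean"]},
]

# A handful of top-level functions that all operate on that one shape.
def with_tag(recs, tag):
    return [r for r in recs if tag in r["tags"]]

def total_rows(recs):
    return sum(r["rows"] for r in recs)

def rename(recs, old, new):
    return [{**r, "name": r["name"].replace(old, new)} for r in recs]

# When requirements change, you add or tweak a function; the data stays put.
print(total_rows(with_tag(records, "raw")))  # 207
```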
Seeing that diamond metaphor, and then learning how APL sees "operators" as building "functions that are variants of other functions"(1), made me think of currying and higher-order functions in Haskell.
The high regularity of APL operators, which work the same for all functions, forces the developer to represent business logic in different parts of the data structure.
That was a good approach when it was created, but modern functional programming offers other tools. Creating pipelines from functors, monads, arrows... allows the programmer to move some of that business logic back into generic functions, retaining the generality and ease of refactoring without forcing the structure of the data to carry that meaning. Modern PL design has built upon those early insights to provide new tools for the same goal.
If I could write Haskell and build an Android app without having to be an expert in both Haskell and the low-level Android SDK/NDK, I'd be happy to learn it properly.
It is! (https://github.com/cnlohr/rawdrawandroid) - and that's just for writing a simple Android app in C. If you want to access the numerous APIs to do anything useful, it's even more pain.
Good point. Notation matters in how we explore ideas.
Reminds me of Richard Feynman. He started inventing his own math notation as a teenager while learning trigonometry. He didn’t like how sine and cosine were written, so he made up his own symbols to simplify the formulas and reduce clutter. Just to make it all more intuitive for him.
Indeed, historically. But are we not moving into a society where thought is unwelcome? We build tools to hide underlying notation and structure, not because it affords abstraction but because it's "efficient". Is there not a tragedy afoot, by which technology, at its peak, nullifies all its foundations? Those of us who can do mental formalism, mathematics, code, etc. - I doubt we will have any place in a future society that values only superficial convenience and the appearance of correctness, and shuns as "slow old throwbacks" those who reason symbolically, "the hard way" (without AI).
(cue a dozen comments on how "AI actually helps" and amplifies symbolic human thought processes)
Let's think about how an abstraction can be useful, and then redundant.
Logarithms allow us to simplify a hard problem (multiplying large numbers) into a simpler one (addition), but the abstraction results in an approximation. It's a good enough approximation for lots of situations, but it's a map, not the territory. You could also handle division, which means you could take decent stabs at powers and roots, and voila: once you made that good enough and a bit faster, an engineering and scientific revolution could take place. Marvelous.
For centuries people produced log tables - some so frustratingly inaccurate that Charles Babbage thought of a machine to automate their calculation - and we had slide rules and we made progress.
And then a descendant of Babbage's machine arrived - the calculator, or computer - and we didn't need the abstraction any more. We could quickly type 35325 x 948572 and far faster than any log table lookup, be confident that the answer was exactly 33,508,305,900. And a new revolution is born.
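A quick sketch of the old abstraction next to the new tool, using those same numbers (rounding to four figures to mimic a printed log table):

```python
import math

x, y = 35325, 948572

# Log-table style: multiply by adding logarithms, at 4-figure precision.
lx = round(math.log10(x), 4)
ly = round(math.log10(y), 4)
estimate = 10 ** (lx + ly)

exact = x * y
print(f"log-table estimate: {estimate:,.0f}")
print(f"exact product:      {exact:,}")
print(f"relative error:     {abs(estimate - exact) / exact:.3%}")
```

Good enough to build bridges with, and yet the exact answer from the calculator made the whole apparatus unnecessary.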
This is the path we're on. You don't need to know how multiplication by hand works in order to be able to do multiplication - you use the tool available to you. For a while we had a tool that helped (roughly), and then we got a better tool thanks to that tool. And we might be about to get a better tool again where instead of doing the maths, the tool can use more impressive models of physics and engineering to help us build things.
The metaphor I often use is that these tools don't replace people, they just give them better tools. There will always be a place for being able to work from fundamentals, but most people don't need those fundamentals - you don't need to understand how calculus was invented to use it, the same way you don't need to build a toaster from scratch to have breakfast, or know how to build your car from base materials to get to the mountains at the weekend.
> This is the path we're on. You don't need to know how multiplication by hand works in order to be able to do multiplication - you use the tool available to you.
What tool exactly are you referring to? If you mean LLMs, I actually view them as a regression with respect to basically every one of the "characteristics of notation" desired by the article. There is a reason mathematics is no longer done with long-form prose and instead uses its own, more economical notation that is sufficiently precise as to even be evaluated and analyzed by computers.
Natural languages have a lot of ambiguity, and their grammars allow nonsense to be expressed in them ("colorless green ideas sleep furiously"). Moreover two people can read the same word and connect two different senses or ideas to them ("si duo idem faciunt, non est idem").
Practice with expressing thoughts in formal language is essential for actually patterning your thoughts against the structures of logic. You would not say that someone who is completely ignorant of Nihongo understands Japanese culture, and custom, and manner of expression; similarly, you cannot say that someone ignorant of the language of syllogism and modus tollens actually knows how to reason logically.
You can, of course, get a translator - and that is what maybe some people think the LLM can do for you, both with Nihongo, and with programming languages or formal mathematics.
Otherwise, if you already know how to express what you want with sufficient precision, you're going to just express your ideas in the symbolic, formal language itself; you're not going to just randomly throw in some nondeterminism at the end by leaving the output up to the caprice of some statistical model, or allow something to get "lost in translation."
> If you mean LLMs, I actually view them as a regression with respect to basically every one of the "characteristics of notation" desired by the article.
LLMs are not used for notation; you are right that they're not precise enough for accurate knowledge.
What LLMs do as a tool is address the frame problem: they give the reasoning system access to the "common sense" knowledge needed for a specific situation, retrieving it efficiently from a humongous corpus of diverse background knowledge.
Classic AI based on logical inference was never able to achieve this retrieval, hence the unfulfilled promises in the 2000s of autonomous agents based on ontologies. Those promises seem approachable now thanks to the huge statistical databases of all topics stored in compressed LLM models.
A viable problem-solving system should combine the precision of symbolic reasoning with the breadth of generative models, to create checks and heuristics that guide the autonomous agents to interact with the real world in ways that make sense given the background relevant cultural knowledge.
You need to see the comment I was replying to in order to understand the context of my point.
LLMs are part of what I was thinking of, but not the totality.
We're pretty close to generative AI - and by that I don't just mean LLMs, but the entire space - being able to use formal notations and abstractions more usefully and correctly, and thereby improving reasoning.
The comment I was replying to complained about this shifting value away from fundamentals and this being a tragedy. My point is that this is just human progress. It's what we do. You buy a microwave, you don't build one yourself. You use a calculator app on your phone, you don't work out the fundamentals of multiplication and division from first principles when you're working out how to split the bill at dinner.
I agree with your general take on all of this, but I'd add that AI will get to the point where it can express "thoughts" in formal language, and then provide appropriate tools to get the job done, and that's fine.
I might not understand Japanese culture without knowledge of Nihongo, but if I'm trying to get across Tokyo in rush hour traffic and don't know how to, do I need to understand Japanese culture, or do I need a tool to help me get my objective done?
If I care deeply about understanding Japanese culture, I will want to dive deep. And I should. But for many people, that's not their thing, and we can't all dive deep on everything, so having tools that do that for us better than existing tools is useful. That's my point: abstractions and tools allow people to get stuff done that ultimately leads to better tools and better abstractions, and so on. Complaining that people don't have a first principle grasp of everything isn't useful.
> But are we not moving into a society where thought is unwelcome?
Not really, no. If anything clear thinking and insight will give an even bigger advantage in a society with pervasive LLM usage. Good prompts don't write themselves.
There's something to economy of thought and ergonomics. On a smaller scale, when CoffeeScript popped up, it radically altered how I wrote JavaScript, because of the lambda shorthand and all the syntactic conveniences. It made it easier to think, read, and rewrite.
Same goes for SML/Haskell and Lisps (at least for me).
Musk said: “Crazy things like just cursory examination of Social Security and we’ve got people in there that are 150 years old.”
He qualified this claim as a “cursory examination”. It’s clearly a comment about the quality of the data and systems: that this is the kind of thing that would be prone to fraud.
Before you hit downvote, please provide evidence that you didn’t hallucinate Musk’s claims here.
But then he goes around saying he actually found massive evidence of fraud. It’s not like he tweets: “Guys, my admittedly cursory examination gave me a feeling this old system which I’ve never seen before could be prone to fraud.”
What proof does he (or you) even have that a COBOL system is particularly prone to fraud? The world’s most important banks still run many things on COBOL. Are you saying that bank mainframes are full of IT fraud?
The point is that a date set to 1875 is actually a null in COBOL.
It doesn't mean the data is corrupt or inaccurate, any more than a null reference in Java means your program is going to crash. It simply indicates that the date is not present for that record.
If Musk was posting: "Guys I just discovered they have tons of null checks in their code here, that's obviously an indication of fraud!" — would that make any sense to you?
It's disingenuous in that they are both platforms doing the same thing.
Apple could argue that it's Microsoft's 30% cut that makes it impossible for developers to get their games published on the App Store. That too, would be disingenuous.
But Microsoft is making a PR maneuver that it can bring to the table to strike a deal with Apple.
It never made sense to me why they raycast from the car. Humans don't play this way. The car is an abstraction; the model doesn't need to care about it.
It literally doesn't matter if it's 1px or 100 meters to the wall, just learn to not hit it.
Instead, measure from the _camera_. That's all that matters. That's what humans do when we play.
Bonus: with this added perspective you'll be able to drive maps with hills and jumps. Not just flat maps.
RL expert here: the problem with vision is that, most likely, it takes too long to render the frames and to process the rendered images in your NN.
At that point you need to run the game real-time instead of faster, so you need a lot of compute to generate your data. You will also need the bandwidth to throw the data around, and the GPUs to take all that input size.
It's definitely possible, but not the place to start, and will require a lot more compute infrastructure.
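Back-of-the-envelope numbers (the ray count and image size are made up, just to show the scale of the gap):

```python
import numpy as np

# A raycast observation: a handful of distances per step.
rays = np.zeros(16, dtype=np.float32)

# A modest vision observation: one downscaled RGB frame per step.
frame = np.zeros((84, 84, 3), dtype=np.uint8)

print(rays.nbytes)   # 64 bytes per step
print(frame.nbytes)  # 21168 bytes per step, roughly 330x more to move and learn from
```

And that is before the cost of rendering those frames in real time instead of running the simulation faster than real time.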
They're driving non-flat maps by the end of the article and in the subsequent article. AFAIK they can only go on what they see on screen; they don't have access to the camera's rendering data.
That would indeed be newsworthy if countless businesses were insisting that they are trying to make their employees wet by pouring sand on them, providing them with T-shirts with the word "WET" printed on them, asking them to wish their dryness away, or (the most enlightened ones) telling their employees to go outside on cloudy days: anything except, you know, simply pouring water on them.
You have no authority to treat your patient like a child.