Well yes - at least as things currently stand. It's interesting to me not for what it is right now, but what the trend might be. The extremes are probably something like:
1. Damp squib, goes nowhere. In 3 years' time it's all forgotten about.
2. Replaces every software engineer on the planet, and we all just talk to HAL for our every need.
Either extreme seems reasonably unlikely. So the big question is: what are the plausible outcomes in the middle? Selfishly, I'd be delighted if a virtual assistant would help with the mechanical dreariness of keeping type definitions consistent between front and back end, ensuring API definitions are similarly consistent, updating interface definitions when implementing classes change (and vice versa), etc.
That's the positive interpretation, obviously. Given how the optimism of the "read-write web" morphed into the dystopian mess that is social media, I don't doubt my optimistic aspirations will be off the mark.
Actually, on second thoughts, maybe I'd rather not know how it's going to turn out...
> There's no way any business will hand business logic to a black box.
You mean, a black box like a programmer's brain? An AI backend will get used if it's demonstrably better on any dimension. The current iteration is no doubt a bit of a toy, but don't underestimate it.
It seems incredibly obvious that you could turn this into a real product, where the LLM generates the code once based on a high-level description of a schema and an API, and caches it until the description changes somehow.
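A minimal sketch of that generate-and-cache idea, purely as illustration: `generate_backend` is a hypothetical stand-in for whatever code-generation model you'd call, not an existing product's API.

```python
import hashlib

def generate_backend(description: str) -> str:
    """Hypothetical call out to a code-generating LLM (not a real API)."""
    raise NotImplementedError("plug in your model of choice here")

_cache: dict[str, str] = {}

def backend_for(description: str) -> str:
    """Regenerate the backend code only when the schema/API description changes."""
    key = hashlib.sha256(description.encode("utf-8")).hexdigest()
    if key not in _cache:
        _cache[key] = generate_backend(description)
    return _cache[key]
```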
GPT can generate thousands of lines of code nearly instantly, and can regenerate it all on the fly whenever you want to make a few tweaks. No more worrying about high-level architecture designed to keep complexity understandable for mere humans. No code style guides or best practices. No need to manage team sizes to keep communication overheads small.
Then you train another AI to generate a fuzz test suite to check an API for violations of the API contract. Thousands of tests checking every possible corner case, again generated nearly instantly.
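For a rough picture of what a generated contract-checking suite might look like, here's a single hand-written property-based test using Hypothesis; the `/users` endpoint and its contract are invented for the example.

```python
import requests
from hypothesis import given, strategies as st

BASE_URL = "http://localhost:8000"  # assumed local test deployment

# Invented contract for the example: POST /users returns 201, echoes the
# name back, and assigns a positive integer id.
@given(name=st.text(min_size=1, max_size=64))
def test_create_user_contract(name):
    resp = requests.post(f"{BASE_URL}/users", json={"name": name})
    assert resp.status_code == 201
    body = resp.json()
    assert body["name"] == name
    assert isinstance(body["id"], int) and body["id"] > 0
```

The LLM's job would be to emit many of these from the same API description, covering the corner cases nobody writes by hand.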
Don't underestimate where this could go. The current version linked here is a limited prototype of what's to come.
How many businesses operate at the whims of an Excel spreadsheet, hewing to the output of cell C1? A spreadsheet whose creation myth sits alongside a departed founder, and no one really knows how it works.
People already do need it! In the US alone, about a half million people are on dialysis. There are many health and functional consequences to both intermittent and peritoneal dialysis; they are not exactly benign treatments.
The major benefit of implanting it under the skin, as we do with pacemakers, is that avoiding permanent holes or tubes through the skin reduces infection risk.
Consider also the danger of having something dangling from your body that is powered by your arterial blood pressure (from a major artery, as the kidney is). A trip and fall could be instantly fatal.
I had a kidney stent in for a few days while I healed from an operation. Let me tell you, it’s a huge quality-of-life downer (obviously the medical benefit outweighed the lifestyle concern). You definitely don’t want medical equipment dangling on the outside of your body if you can help it.
Popular shareware proggies like "Xara3D", for example. I don't think they'd qualify as modeling software. They had basic rotate/translate/scale transform routines in them, plus glitzy effects.
(Note "shareware" usually meant cracked as far as youthful hobbyist website building from basements and bedrooms was concerned.)
There were some people who made 3D animated gifs of diverse subjects for a living (e.g. The Animation Factory). I am sure they used real 3D modelling, animation and rendering software for that.
The video evidence of the L-form switching is no doubt very interesting, but a more accurate headline would be “One cause of resistance to certain antibiotics in UTIs identified.” Or the original article’s title, which is “Possible role of L-form switching in recurrent urinary tract infection”; see https://www.nature.com/articles/s41467-019-12359-3
I would love to see data on how common this phenomenon is in various populations of UTI patients (elderly, young, inpatient, outpatient, etc), given different prior exposures to antibiotics; for now it looks like 30 patients were assessed.
There are obviously many different causes of resistance previously identified, going all the way back to penicillinase enzymes inactivating penicillin. As often happens with lay summaries, this makes it sound a little too much like the cause of all antibiotic resistance has been found.
Other known causes of antibiotic resistance include the formation of biofilm -- essentially a colony of bacteria -- and rapid genetic mutation in the face of harsh conditions.
Well, for starters, it's hard to get 200 people who might be eligible for life-preserving surgery to volunteer to possibly get a sham procedure for the exclusive benefit of others; not to mention building a team of surgeons, hospitals, etc. willing to do the trial, and finding someone to pay the millions of dollars in treatment and administrative costs. Clinical trials involving surgery, especially with sham procedures as a proper control, are exceedingly rare in the US for these reasons. (This is discussed in the OP itself.)
So what good does increasing US medical school graduation rates do? OK, it would displace some IMGs/FMGs from residency positions, but it doesn't ultimately create more doctors. You can't be licensed to practice independently in the US unless you enter a residency, take the USMLE Step 3 after intern year, and typically you also take a specialty board exam at the end of residency.
We need to see the full, published study and its methods (particularly around recruitment and exclusion criteria) before we can judge it properly. Until then, the presented statistics about accuracy, sensitivity, and specificity potentially bear no relation to real world usage, if the cohort and data quality were tightly controlled, as you'd expect for an initial study involving the makers of the algorithm. A few other thoughts:
1. Even at 98% sensitivity and 90% specificity [0], which I don't think would hold up with real world usage in casual, healthy users, if AFib has a prevalence of roughly 2-3% [1], then by a quick back of the envelope calculation (sketched after this list) a positive test result is still about 5× more likely to be a false positive than a true positive. With those odds, I don't think many cardiologists are going to answer the phone. You'd still need an EKG to diagnose AFib.
2. There is huge variance among people's real world use of wearable sensors, and also among the quality of the sensors. (Imagine people that wear the watch looser, sweat more, have different skin, move it around a lot, etc.) You'd likely need to do an open, third-party validation study of the accuracy of the sensors in the Apple Watch before you can expect doctors to use the data. My understanding is that the Apple Watch sensors are actually pretty good compared to other wearable sensors, but I don't know of any rigorous study that compares them to an EKG.
3. Obviously, this is only for AFib. AFib is a sweet corner case in terms of extrapolating from heart rate to arrhythmia, because it's a rapid & irregular rhythm that probably contains some subpatterns in beats that are hard for humans to appreciate. As others—including Cardiogram themselves [2]—have pointed out previously, many serious arrhythmias are not possible to detect with only an optical heart rate sensor.
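To make the arithmetic in point 1 explicit, here's the back-of-envelope Bayes calculation; the sensitivity, specificity, and prevalence figures are just the ones quoted above, not validated numbers.

```python
def ppv(sensitivity: float, specificity: float, prevalence: float) -> float:
    """Positive predictive value via Bayes' rule."""
    true_pos = sensitivity * prevalence
    false_pos = (1 - specificity) * (1 - prevalence)
    return true_pos / (true_pos + false_pos)

# Quoted figures: 98% sensitivity, 90% specificity, ~2-3% AFib prevalence.
for prev in (0.02, 0.03):
    p = ppv(0.98, 0.90, prev)
    print(f"prevalence {prev:.0%}: PPV ≈ {p:.0%}, "
          f"false:true positives ≈ {(1 - p) / p:.1f}:1")
```

At the low end of that prevalence range it works out to roughly five false positives for every true positive.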
Full journal publication is coming--as you likely know, the system doesn't always move as fast as we'd like.
> quick back of the envelope calculation a positive test result is still 5× more likely to be a false positive than a true positive.
For what it's worth, about 10% of people who come in to the cardiology clinic experiencing symptoms are diagnosed with an abnormal heart rhythm. So even a 20% positive predictive value would be an improvement over the status quo.
As mentioned below, you can use other risk factors (like CHA2DS2-Vasc, or even simply age) to raise the pre-test probability, and thereby control the false positive rate.
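As a quick illustration of that point (same quoted 98%/90% figures as above; the 10% pre-test probability for a higher-risk subgroup is just an assumed number):

```python
def ppv(sens: float, spec: float, prev: float) -> float:
    tp = sens * prev
    fp = (1 - spec) * (1 - prev)
    return tp / (tp + fp)

# Screening everyone vs. only a higher-risk subgroup (pre-test probabilities assumed).
print(f"general users (~2% pre-test probability): PPV ≈ {ppv(0.98, 0.90, 0.02):.0%}")
print(f"higher-risk subgroup (assumed ~10%):      PPV ≈ {ppv(0.98, 0.90, 0.10):.0%}")
```

Raising the pre-test probability from ~2% to ~10% roughly triples the positive predictive value.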
As a meta-point, I do think we let the perfect be the enemy of the good in medicine, and that potentially scares people away who could otherwise make positive contributions. For example, many of the most common screening methods in use today are simple, linear models with c-statistics below 0.8. You can build a far-from-perfect system, and still improve dramatically over how people receive healthcare today.
My overall message to machine learning practitioners sitting on the sidelines would be: please join our field. The status quo in medicine is much more primitive than we have been led to believe, and your skills can very literally save lives.
Thanks for replying! I'll certainly be looking forward to the publication.
>about 10% of people who come in to the cardiology clinic experiencing symptoms are diagnosed with an abnormal heart rhythm
OK, but I'd be careful to keep the comparison apples to apples; your app is about asymptomatic AFib. So how many of those people going to the cardiology clinic had undiagnosed AFib, for how many of those would a new diagnosis of AFib have changed the plan of care, etc.? Kind of like robbiep was saying, I would be interested in the actual added value from the larger perspective.
Totally appreciate your point about perfect being the enemy of the good. The danger is that these semi-medical wearables currently straddle a strange zone between medical and consumer use. The inevitable marketing strategy is to co-opt the positive reputation of medical products while acknowledging none of the pitfalls of consumer products. Most of the screening methods you bring up are used by a doctor on symptomatic patients with a suggestive history, and only as a partial component of clinical judgement. The way Cardiogram seems to make the most money, on the other hand, is to sell the product to asymptomatic, casual users. (Furthermore, CHA2DS2-Vasc costs 30 seconds of talking or reading a medical record, not $700 in Apple products.) So you're inevitably running up against some doubts among physicians [0].
And finally, I agree that more machine learning practitioners should join medical research. I hope the field works to set more reasonable expectations, however, as in: ML will solve very specific subtasks in clinical reasoning (as in the diabetic retinopathy study [1]). Instead, the headlines usually ratchet that up to "AI will replace radiology/cardiology/$specialty in X years." That tends to hurt the people currently in the trenches, since their contribution in bringing about practical, incremental change is diminished. The top answer of this Quora thread [2] has a good discussion of the many dimensions of the problem.
The diabetic retinopathy study (and the somewhat recent Stanford dermatology study) were the first ML studies I had read about that blew me away in terms of their sensitivities and specificities, as compared to real doctors. Your comment on specific subtasks is perfect, and I try to use these examples when discussing ML with fellow medical students.
However, like you said, the medical field is very slow, and has quite a lot of inertia to maintain the status quo. Unless insurance companies refuse to compensate practitioners that don't use these tools, I fear that few, if any, in the healthcare field will opt to use such techniques.
And finally: How should someone with both a medical and computer science background get into ML?
I found the Statistical Learning self paced course on Stanford's site to be a great formal intro to ML algorithms implemented in R, and it is taught by the inimitable Hastie and Tibshirani: http://statlearning.class.stanford.edu
> by a quick back of the envelope calculation a positive test result is still 5× more likely to be a false positive than a true positive. With those odds, I don't think many cardiologists are going to answer the phone. You'd still need an EKG to diagnose AFib.
This is a good point, and certainly nobody should go directly to a cardiologist based on these results. It seems that this would be a good system to recommend that people get an EKG done, though.
> probably contains some subpatterns in beats that are hard for humans to appreciate
Not really, no... As you said, AFib is one of a very small number of causes of irregularly irregular heart rates (and is by far the most common). AFib is pretty easy to spot, even just by feeling someone's pulse with your fingers.