I don't think so. A decent C programmer could pretty much imagine how each line of C translated into assembly and, with certainty, how every byte of data moved through the machine. That's been lost with the rise of higher-level languages, interpreters and their bytecode, the explosion of libraries, and especially the rise of cut-and-paste coding. IMO, 90% of today's developers have never thought about how their code connects to the metal. Starting with CS101 in Java, they've always lived entirely within an abstract level of source code. Coding with AI just pushes that abstraction a couple of steps higher, much as templates in 4GL languages attempted but failed to do, though of course the abstraction has now climbed far beyond that level. Software craftsmanship has indeed left the building; only the product matters now.
The problem for software artisans is that unlike other handmade craftwork, nobody else ever sees your code. There's no way to differentiate your work from that which is factory-made or LLM-generated.
Therefore I think artisan coders will need to rely on a combination of customisation and customer service. Their specialty will need to be very specific features that the usual mass code-creation market doesn't cater for, delivered with swift and helpful support.
If fluid intelligence is based on the ability to recognize new patterns (unsupervised learning) and crystallized intelligence on recognizing known patterns (supervised learning), then age alone, more than physiology, may differentiate the two.
Youngsters know no patterns, so they can't match new events to known ones. Oldsters know that most seemingly new stuff is not really new; it's just the same old stuff, so they reduce the cost of thinking and reject the noise by adding the new unlabeled event to an existing cluster rather than creating a new, noisy one. That's wisdom. But it's also a behavior that will inevitably increase as we age and our clusters establish themselves and prove their worth.
So aren't those two forms of intelligence less about a difference in brain physiology and more about having learned to employ common sense?
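To make the clustering analogy concrete, here's a toy Python sketch. It is purely illustrative, not a cognitive model: the events, distance measure, and threshold are all invented. A learner with no clusters has to create one for almost every event, while a learner with established clusters absorbs most new events into them.

    # Illustrative sketch of the clustering analogy (not a cognitive model).
    # A "young" learner with an empty cluster set creates a new cluster for
    # nearly every event; an "old" learner with established clusters absorbs
    # most events into them. Names and the threshold are invented.
    import math

    def assign(event, clusters, threshold=1.0):
        """Assign an event (a feature vector) to the nearest existing cluster,
        or create a new cluster if nothing is close enough."""
        best, best_dist = None, math.inf
        for center in clusters:
            dist = math.dist(event, center)
            if dist < best_dist:
                best, best_dist = center, dist
        if best is not None and best_dist <= threshold:
            return best          # "same old stuff": absorb into a known cluster
        clusters.append(list(event))
        return clusters[-1]      # genuinely new: pay the cost of a new cluster

    clusters = []                # a young learner starts with none
    for event in [(0.0, 0.0), (0.1, 0.1), (5.0, 5.0), (5.2, 4.9)]:
        assign(event, clusters)
    print(len(clusters))         # 2: the later events were folded into existing clusters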
More than socio-economic status, the chief factor that advances US political candidates is, simply, fame. These days fame is achieved by somehow becoming an outlier: loud extremism, incessant self-promotion, and spending truly insane amounts of money. Intelligence of any kind is irrelevant.
Yeah. The right hasn't been able to repeat Trump; other candidates following his playbook have usually failed. And I think it's because they lack his three-plus decades of lowest-common-denominator fame and the money that let him buy his way out of repeated business failure and corruption. It's a perfect storm.
At the very least, every school, subject, and teacher should be obliged to conduct experiments during the school year -- A/B/C trials in which various forms of note taking are explored: handwritten, computer-typed, and neither.
Then see how it affects the kids' learning speed and retention of the various subjects. Afterwards, the teachers need to compare notes with one another to learn what they did differently and what did or didn't work for them.
Ideally they'd also assess how this worked for different types of students: those with good vs. bad reading skills, those with good vs. bad grades, and especially those who are underperforming their potential.
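For what it's worth, the analysis side of such a trial is simple enough. Here is a toy Python sketch with made-up retention scores and a plain one-way ANOVA standing in for whatever a real study design would actually require:

    # Toy sketch of comparing three note-taking conditions on a retention quiz.
    # The scores below are made up; a real trial needs proper design, sample
    # sizes, and subgroup analysis, but this shows the basic comparison.
    from scipy.stats import f_oneway

    handwritten = [78, 82, 75, 88, 80]
    typed       = [71, 69, 74, 77, 70]
    no_notes    = [65, 60, 72, 58, 66]

    stat, p = f_oneway(handwritten, typed, no_notes)
    print(f"F={stat:.2f}, p={p:.3f}")  # a small p suggests the conditions differ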
The idea that we would A/B test handwritten vs. typed notes to see what would improve retention is focusing on the wrong thing. It's like A/B testing mayo or no mayo on your Big Mac to see which version is a healthier meal. No part of the school system is optimized for retention. It's common for students to take a biology class in 9th grade and then never study biology again for the rest of their lives. Everyone knows they won't remember any biology by the time they graduate, and no one cares.
We know what increases retention: active recall and spaced repetition. These are basic principles of cognitive science that have been empirically validated many times. Please try to implement those before demanding that teachers run A/B tests over what font to write the homework assignments in.
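For concreteness, here is a minimal Python sketch of spaced repetition combined with active recall, using Leitner-style boxes. The intervals are assumptions for illustration, not a validated schedule:

    # Minimal sketch of spaced repetition with active recall (Leitner-style).
    # The intervals and the example card are illustrative, not a real schedule.
    from datetime import date, timedelta

    INTERVALS = [1, 3, 7, 21, 60]   # days between reviews per "box" (assumed values)

    def review(card, recalled_correctly, today=None):
        """Active recall: quiz first, then move the card up or back and
        schedule the next review further out the more often it is recalled."""
        today = today or date.today()
        if recalled_correctly:
            card["box"] = min(card["box"] + 1, len(INTERVALS) - 1)
        else:
            card["box"] = 0  # forgotten: start the spacing over
        card["due"] = today + timedelta(days=INTERVALS[card["box"]])
        return card

    card = {"prompt": "What does a ribosome do?", "box": 0, "due": date.today()}
    review(card, recalled_correctly=True)   # next review in 3 days, then 7, 21, 60...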
You can certainly make it harder to cheat. AIs will inevitably generate summaries that are very similarly written and formatted -- content, context, and sequence -- making it easy for a prof (and their AI) to detect AI use, especially if students are also quizzed to validate that they actually know what's in their own summary.
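A rough Python sketch of what that detection could look like, using plain word-overlap similarity between submissions. The 0.6 cutoff and the data layout are assumptions; a real tool would use something stronger, such as stylometry or embeddings:

    # Flag pairs of submissions whose word overlap is suspiciously high.
    # Cutoff and data structure are assumptions for illustration only.
    from itertools import combinations

    def word_set(text):
        return set(text.lower().split())

    def jaccard(a, b):
        a, b = word_set(a), word_set(b)
        return len(a & b) / len(a | b) if a | b else 0.0

    def flag_similar(submissions, cutoff=0.6):
        """Return pairs of student IDs whose summaries overlap beyond the cutoff."""
        return [(s1, s2)
                for (s1, t1), (s2, t2) in combinations(submissions.items(), 2)
                if jaccard(t1, t2) >= cutoff]

    # submissions = {"student_a": "...", "student_b": "..."}
    # print(flag_similar(submissions))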
Alternately, the prof can require that students write out notes, in longhand, as they read, and require that a photocopy of those notes be submitted, along with a handwritten outline / rough draft, to validate the essays that follow.
I think it's inevitable that "show your work" will soon become the mantra not just of math, hard science, and engineering courses, but of every course.
I’m confused by the “at last”; it’s been consistently covered by The Guardian:
iran site:theguardian.com
There is a narrative that has been floating around, and it seems like a Russian psyop designed to sow discord (not accusing you personally of being a bot): “the lefties are friends with Iran and don’t complain about their atrocities”. It is objectively false.
That's a great answer that offers concrete insight into what design thinkers are trying to achieve. And it seems like they have a chance to succeed if they also employ iterative experimental methods to learn whether their mental model of user experience is incorrect or incomplete. Do they?
Traditionally you use a lot of paper and experiential prototypes to iterate on, which doesn't cover everything but helps refine assumptions. I sometimes like starting by mocking downstream output such as reports and report data, which is a quick way to test specific assumptions about the client's operations and strategic goals, and which can then shape the detailed project. When I can, I also try to iterate using scenario-based wargaming, especially for complex processes with a lot of handoffs and edge cases; it lets us "chaos monkey" situations and stress-test our assumptions.
More than once, early iterations have led me to call off a project and tell the client that they'd be wasting their money with us. These were problems that could be solved more effectively internally (with process, education, or cultural changes), that weren't going to be effectively addressed by the proposed project, or, quite often, cases where what the client wanted was not what they actually needed.
Increasingly, AI technical/functional prototyping is making it into the early design process, where traditionally we'd be doing clickable prototypes; it lets us get cheap working prototypes in place for users to test drive and give feedback on. I like to iterate aggressively on the data schema up front, so this fits well with my bias towards getting the database and query models largely created during the design effort, based on domain research and collaboration.
Another classic example is data scientists trying to model biological processes (or answer questions about processes while ignorant of which components regulate others). Systems biology has a long history of largely clueless attempts to predict outcomes from complex processes that no one understands well enough to model usefully. The biologists know this but the data scientists do not.