The point of assigning blame here isn't so much a moral exercise as it is to decide what went wrong, how to deal with it, and how to prevent the same failure modes in the future.
I think you need a balance. I’ve seen products fall apart due to a high error rate.
I like to think of intentionalists—people who want to understand systems—and vibe coders—people who just want things to work on screen expediently.
I think success requires a balance of both. The current problem I see with AI is that it accelerates the vibe part more than the intentionalist part and throws the system out of balance.
Don’t disagree… I think it’s just applying a lot more pressure on dev teams to do things faster though. Devs tend to be expensive and expectations on productivity have increased dramatically.
Nobody wants teams to ship crap, but also folks are increasingly questioning why a bit of final polishing takes so long.
I have a theory that vibe coding existed before AI.
I’ve worked with plenty of developers who are happy to slam null checks everywhere to solve NREs, with no thought to why the object is null or whether it should even be null there. There’s just a vibe that the null check works and solves the problem at hand.
I actually think a few folks like this can be valuable around the edges of software but whole systems built like this are a nightmare to work on. IMO AI vibe coding is an accelerant on this style of not knowing why something works but seeing what you want on the screen.
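To make the pattern concrete, here's a minimal TypeScript sketch (all names are hypothetical, not from any real codebase). Both versions stop the crash; only the second treats the null as the signal it is.

```typescript
interface Order {
  id: string;
  customer?: { email: string };
}

const sent: string[] = [];
function sendEmail(addr: string): void {
  sent.push(addr); // stand-in for a real mailer
}

// Vibe fix: the NRE disappears, but nobody asked why customer was
// missing, so the order silently skips notification forever.
function notifyVibe(order: Order): void {
  if (order.customer == null) return; // "null check works, ship it"
  sendEmail(order.customer.email);
}

// Intentional fix: decide whether a customer-less order is even valid
// here. If it isn't, fail loudly so the root cause gets found.
function notifyIntentional(order: Order): void {
  if (order.customer == null) {
    throw new Error(`order ${order.id} reached notify() without a customer`);
  }
  sendEmail(order.customer.email);
}
```

Same null check, same line of code even; the difference is whether anyone asked the "should it even be null here" question before choosing `return` over `throw`.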
I've written before about my experience at a shop like this. The null check would swallow the exception and do nothing about the failure so things just errored silently. Many high fives and haughty remarks about how smart the team was for doing this were had at the expense of lesser teams that didn't. The whole operation ran on a hackneyed MVP architecture from a Learning Tree class a guy took in 2008 and snippets stolen from StackOverflow and passed around on a USB key. Deviation from this bible was heresy and rebuked with sharp, unprofessional behavior. It was not a good place to work for those who value independent thought.
> AI vibe coding is an accelerant on this style of not knowing why something works but seeing what you want on the screen.
I've been saying this exact thing for years now. It also does the whole CRUD app "copy, paste, find, replace from another part of the application" workflow for building new domains very well. If you can bootstrap a codebase with good architectural practices and tests then Claude Code is a productivity godsend for building business apps.
Yeah, but you had to integrate it until it at least compiled, which kind of made people think about what they were pasting.
I had a peer who suddenly started completing more stories for a month or two when our output had been largely equal before. They got promoted over me. I reviewed one of their PRs... what a mess. They were supposed to implement caching. Their first attempt created the cache but never stored anything in it. Their next attempt stored the data in the cache but never looked at the cache, always retrieving from the API. They deleted that PR to hide their incompetence and opened a new one that was finally right. They were just blindly using AI to crank out their stories.
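For illustration only (the actual PRs aren't shown), the two broken attempts and the final fix might have looked like this in TypeScript:

```typescript
const cache = new Map<string, string>();
let apiCalls = 0;

function fetchFromApi(key: string): string {
  apiCalls++; // counter lets us observe whether the cache did anything
  return `value-for-${key}`; // stand-in for a real API call
}

// Attempt 1: creates the cache but never stores anything in it,
// so every lookup misses and hits the API.
function getV1(key: string): string {
  const hit = cache.get(key); // always undefined: nothing is ever written
  return hit ?? fetchFromApi(key);
}

// Attempt 2: stores into the cache but never reads it back,
// so every lookup still hits the API.
function getV2(key: string): string {
  const value = fetchFromApi(key);
  cache.set(key, value);
  return value;
}

// The eventual fix: a read-through cache that both reads and writes.
function getV3(key: string): string {
  const hit = cache.get(key);
  if (hit !== undefined) return hit;
  const value = fetchFromApi(key);
  cache.set(key, value);
  return value;
}
```

Both broken versions compile, run, and return correct values, which is exactly why they can sail through a review that only checks "does the story appear done on screen."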
That team had something like 40% of capacity being spent on tech debt, rework, and bug fixes. The leadership wanted speed above all else. They even tried to fire me because they thought I was slow, even though I was doing as much or more work than my peers.
It's a frustrating situation. I had a stretch in my career when I was the clean-up person who did the 90% of the work that was left after management thought a junior had gotten it 90% done. It's potentially very satisfying work, but it's easy to feel unappreciated doing it (they wished the junior could have gotten it done and thought I was "too slow," though in retrospect one year of that was an annus mirabilis in which I completed an almost unbelievable number of diverse projects).
Yeah, but integrating manually is more likely to force them to think than if the agent just does everything. You used to have to search Stack Overflow, which required articulating the problem. Now you can just tell Copilot to fix it.
> I actually think a few folks like this can be valuable around the edges of software but whole systems built like this are a nightmare to work on. IMO AI vibe coding is an accelerant on this style of not knowing why something works but seeing what you want on the screen.
I would correct that: it's not an accelerant of "seeing what you want on the screen," it's an accelerant of "seeing something on the screen."
[Hey guys, that's a non-LLM "it's not X, it's Y"!]
Things like habitual, unthoughtful null-checks are a recipe for subtle data errors that are extremely hard to fix because they only get noticed far away (in time and space) from the actual root cause.
I agree, but I'd draw a different comparison. That is, vibe coding has accelerated the type of developer who relied on Stack Overflow to solve all their problems: the kind of dev who doesn't try to solve problems themselves. It has just accelerated that way of working, while being less reliable than before.
This matches my first thought about this "study" (remember what CodeRabbit sells...): can you compare these types of PRs directly? Is the conclusion that AI produces more bugs, or is that a symptom of something else, like AI PRs being produced by less experienced developers?
One of my frustrations with AI, and one of the reasons I've settled into a tab-complete based usage of it for a lot of things, is precisely that the style of code it uses in the language I'm using puts out a lot of things I consider errors based on the "middle-of-the-road" code style that it has picked up from all the code it has ingested. For instance, I use a policy of "if you don't create invalid data, you won't have to deal with invalid data" [1], but I have to fight the AI on that all the time because it is a routine mistake programmers make and it makes the same mistake repeatedly. I have to fight the AI to properly create types [2] because it just wants to slam everything out as base strings and integers, and inline all manipulations on the spot (repeatedly, if necessary) rather than define methods... at all, let alone correctly use methods to maintain invariants. (I've seen it make methods on some occasions. I've never seen it correctly define invariants with methods.)
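A sketch of the "properly create types" point, using TypeScript's branded-type idiom (the email example and regex are mine, not the commenter's):

```typescript
// Base-string style: any string can be passed where an email is expected,
// and validation gets re-inlined at every call site, or forgotten.
function sendRawEmail(to: string): string {
  return `sent to ${to}`; // nothing stops "to" from being garbage
}

// Typed style: the only way to obtain an Email is through parseEmail,
// so code downstream never has to deal with invalid data.
type Email = string & { readonly __brand: "Email" };

function parseEmail(raw: string): Email {
  // Deliberately simple check, for illustration only.
  if (!/^[^@\s]+@[^@\s]+\.[^@\s]+$/.test(raw)) {
    throw new Error(`invalid email: ${raw}`);
  }
  return raw as Email;
}

function sendTypedEmail(to: Email): string {
  // No re-validation needed here: the type maintains the invariant.
  return `sent to ${to}`;
}
```

The point is that the invariant lives in one method (`parseEmail`) instead of being inlined, repeatedly and inconsistently, at every use site, which is exactly the habit the commenter describes fighting the AI over.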
Using tab complete gives me the chance to generate a few lines of a solution, then stop it, correct the architectural mistakes it is making, and then move on.
To AI's credit, once corrected, it is reasonably good at using the correct approach. I would like to be able to prompt the tab completion better, and the IDEs could stand to feed the tab completion code more information from the LSP about available methods and their arguments and such, but that's a transient feature issue rather than a fundamental problem. Which is also a reason I fight the AI on this matter rather than just sitting back: In the end, AI benefits from well-organized code too. They are not infinite, they will never be infinite, and while code optimized for AI and code optimized for humans will probably never quite be the same, they are at least correlated enough that it's still worth fighting the AI tendency to spew code out that spends code quality without investing in it.
> a lot of things I consider errors based on the "middle-of-the-road" code style that it has picked up from all the code it has ingested. For instance, I use a policy of "if you don't create invalid data, you won't have to deal with invalid data"
Yea, this is something I've also noticed but it never frustrated me to the point where I wanted to write about it. Playing around with Claude, I noticed it has been trained to code very defensively. Null checks everywhere. Data validation everywhere (regardless of whether the input was created by the user, or under the tight control of the developer). "If" tests for things that will never happen. It's kind of a corporate "safe" style you train junior programmers to do in order to keep them from wrecking things too badly, but when you know what you're doing, it's just cruft.
For example, it loves to test all my C++ class member variables for null, even though there is no code path that creates an incomplete class instance, and I throw if construction fails. Yet it still happily whistles along, checking everything for null in every method, unless I correct it.
> is precisely that the style of code it uses in the language I'm using puts out a lot of things I consider errors based on the "middle-of-the-road" code style that it has picked up from all the code it has ingested.
That is a really good point: the output you're gonna get is going to be mediocre, because it was trained (in aggregate) on mediocrity.
So the people who gush about LLMs were probably subpar programmers to start, and the ones that complain probably tend to be better than average; who else would be irritated by mediocrity?
And then you have to think about the long-term social effects: the more code the mediocrity machine puts out, the more mediocre code people are exposed to, and the more mediocre habits they'll pick up and normalize. IMHO, a lot of mediocrity comes from "growing up" in an environment with poor to mediocre norms. The next generation of seniors, who will have more experience operating LLMs than writing code themselves, are probably more likely to get stuck in mediocrity.
I know someone's going to make an analogy to compilers to dismiss what I'm saying: but the thing about compilers is they are typically written by very talented and experienced people who've spent a lot of time carefully reasoning about how they behave in different scenarios. That's nothing like an LLM (just imagine how bad compilers would be if they were written by a bunch of mediocre developers from an outsourcing body shop, that's an LLM).
This is close to my approach. I love copilot intellisense at GitHub’s entry tier because I can accept/reject on the line level.
I barely ever use AI code gen at the file level.
Other uses I’ve gotten are:
1. It’s a great replacement for search in many cases
2. I have used it to fully generate bash functions and regexes. I think it’s useful here because the languages are dense and esoteric. So most of my time is remembering syntax. I don’t have it generate pipelines of scripts though.
In some cases I feel like I get better quality in slightly more time than usual. My testing situation on the front end is terribly ugly because of the "test framework can't know React is done rendering" problem, but working with Junie I figured out a way to isolate object-based components and run them as real unit tests with mocks. I had some unmaintainable TypeScript which would explode with gobbledygook error messages that neither Junie nor I could understand whenever I changed anything, but after two days of talking about it and working on it, it was an amazing feeling to see the type finally make sense to me and Junie at the same time.
In cases where I would have tried one thing, I can now try two or three things and keep the one I like best. I write better comments (I don't do the Claude.md thing, but I do write "exemplar" classes that have prescriptive AND descriptive comments and say "take a look at...") and more tests than I would on my own for the backend.
Even if you don't want Junie writing a line of code, it shines at understanding code bases. When I couldn't figure out how to use an open source package from reading the docs, I'd always open it in the IDE and inspect the code. Now I do the same but ask Junie questions like "How do I do X?" or "How is feature Y implemented?" and often get answers quicker than by digging into unfamiliar code manually.
On the other hand it is sometimes "lights on and nobody home", and for a particular patch I am working on now it's tried a few things that just didn't work or had convoluted if-then-else ladders that I hate (even if I told it I didn't like that) but out of all that fighting I got a clear idea of where to put the patch to make it really simple and clean.
But yeah, if you aren't paying attention it can slip something bad past you.
I'd call some null-pointer-lint-with-automatic-fixes tools "vibe coding", tbh. I've run across a couple that do a pretty good job of detecting possible nulls and adding annotations about them, and that's great... but then the fix is "if null, return null", and in practice it's frequently applied completely blindly without any regard for correctness.
If you lean on tools like that, you can rapidly degrade your codebase into "everything might be null and might short circuit silently and it can't tell you about when it happens", leaving you with buggy software that is next to impossible to understand or troubleshoot because there aren't "should not be null" hints or stack traces or logs or anything that would help figure out causes.
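A hypothetical TypeScript sketch of that degradation: each layer got the "if null, return null" auto-fix, so two completely different root causes produce the same silent null three calls away from their cause.

```typescript
interface User { id: string; plan?: string }

// The data store: the only place the real root causes live.
const db = new Map<string, User>();

function findUser(id: string): User | null {
  return db.get(id) ?? null; // root cause A: no such user
}

// Each layer below was "fixed" by the lint: if null, return null.
function getPlan(id: string): string | null {
  const user = findUser(id);
  if (user === null) return null; // auto-fix, short circuit #1
  return user.plan ?? null;       // root cause B: user has no plan
}

function planLabel(id: string): string | null {
  const plan = getPlan(id);
  if (plan === null) return null; // auto-fix, short circuit #2
  return `Plan: ${plan}`;
}

// By the time the UI renders a blank label, there is no exception, no
// stack trace, and no log line saying which short circuit fired, or why.
```

A missing user and a user with no plan are indistinguishable at the call site, which is precisely the "next to impossible to troubleshoot" situation described above.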
If you've been in the industry long enough, you've certainly crossed paths with a boss who said it needed to be fixed in 5 minutes or else, even if the problem wasn't caused by you and the solution clearly needed more than 5 minutes. (The root cause was usually that someone else only had 5 minutes to do something, too.)
I once had a job where my boss ordered (that's the word he used) me to do the wrong thing. The rest of the team and I refused, except for one guy, who did it because he was certain that 9 out of 10 people were wrong and he was the only one who was right.
The company spent 2M USD on returns, refunds, and compensation for a project that probably didn't cost that much to build.
"It was just a patch! How could he possibly have known?" said the dismissed manager.
I caught Claude trying to sneak something into a CI script yesterday as I was wrangling with how to run framework and dotnet tests next to each other without horrendously slowing down the framework tests.
It tried to sneak in a change to the CI build script to proceed to the next step on failure.
1. if it won't compile you'll give up on the tool in minutes or an hour.
2. if it won't run you'll give up in a few hours or a day.
3. if it sneaks in something you don't find until you're almost - or already - in production it's too late.
charitable: the model was trained on a lot of weak/lazy code.
less-charitable: there's a vested interest in the approach you saw.
Yeah, it’s trained to do that somewhere, though it’s not necessarily malicious. In RLHF (the model fine-tuning), the HF stands for human feedback, but it is really another model, trained to score replies the way a human would. And so if that reward model likes code that passes tests more than code that’s stuck in a debugging loop, that’s what the main model becomes optimized for.
In a complex model like Claude there is no doubt much more at work, but some version of optimizing for the wrong thing is what’s ultimately at play.
Yeah. There are times when silently swallowing nulls is the proper answer. I've found myself doing it many times in C# to trap events that get triggered during creation. But you should never do so unless you've traced where they're coming from!
You missed step 3 - click the first result, which is an ad for some shovelware app with an almost identical name and icon to the real thing, and which displays ads.
For example, my friends told me about wordle and said they play it on an app on their phone. I search "wordle" in the app store, and the correct app is the 4th result, after an ad and some other gunk.
My friend told me I should use "Google Authenticator", the first result is an ad for another paid authenticator app that uses the Google logo in its first screenshot to try and confuse people into downloading it instead.
The Apple App Store is actively hostile to users. I thought the point of iPhones was you could give them to your technologically illiterate parents without them getting phished, but I guess those times are gone.
It's gone from "search something in the app store and install the first result" to "google '<term> iOS app store' and click the first result", that gives you better safer results.
I did a decent amount of work in MagicaVoxel in the past.
I like that Voxel Max works on iPad. It also allows 3D meshes to be imported and voxelized.
Voxel Max has a good amount of polish these days. It’s my top option with MagicaVoxel a close second.
I’ve also used Qubicle and Goxel. Qubicle is okay for specific things. I really like its masking planes feature. I really don’t like Goxel. Its UI just feels clunky.
Apple may have good hardware, but their software support is comically bad. Their support for backwards compatibility (aside from Rosetta), which you need for gaming, has basically been "FU, I'm Apple".
Apple has had little to no interest in becoming a real gaming platform. Unless that changes, gamers will more likely be moving into the sweet embrace of Gaben on Linux.
Open the App Store on an iPhone. Of the four tabs, two are game-centric (“Games” and “Arcade”). Another (“Today”) consistently devotes more than half of its features to games.
In their most recent operating systems, they have released a separate app specifically for games (look at that domain, even).
And how about all the games no longer on the App Store?
Say, Flight Control (one of the first games to hit a million unit sales), or the Infinity Blade series (which wiki says was removed due to incompatibility with newer Apple platform changes)?
Both of those examples are old and precede current efforts. Plus, for the third time:
> Whether they’re succeeding at it is another story.
I’m not arguing Apple excels or is even decent at video games, I’m simply pointing out that it’s clear they are interested in having them on their platforms.
Of course, they also took the route of inventing a new 3D API, Metal, which is at odds with Vulkan. There is HoneyKrisp, of course, but if one wants decent gaming on an M1 or M2 laptop, Asahi Linux is actually the superior choice.
I don't think one can call it anywhere close to a success when the best way to run AAA games on your hardware is to literally replace the entire operating system with one that uses cobbled-together components like FEX and Wine/Proton, etc... The fact that that works with more games is insane.
> Whether they’re succeeding at it is another story.
You may disagree with their strategy all you like. You may even think they are doing everything wrong, that’s perfectly legitimate. But they are clearly interested in having gaming happen on their platforms. The claim that they aren’t is the only thing I disputed.
The Apple hardware is indeed very nice, but it's not a good environment for gaming. They've traditionally been quite gaming-hostile with refusing to support the later generations on OpenGL. Then there was a wrapper for Windows games called Whisky, but it was finicky and became unmaintained. Apple has their own App store which sells some games, which is in direct competition with Steam and others, so those actors are probably a bit wary of spending too much resources on the platform. Also a lot of gamer culture is related to building your own hardware, which Apple will never support.
Meanwhile gaming on Linux is becoming better than Windows these days, especially with all the trash to be circumvented on Win11, and Steam working hard on SteamOS etc.
I hate to break it to ya, but Apple Silicon isn't in the top 25 highest-performing consumer GPUs. It's probably not even in the top 25 most-efficient either: https://browser.geekbench.com/opencl-benchmarks
It doesn't seem like Nvidia even has any 3nm GPUs on the market. But sure. When you control for power efficiency, it turns out there's no difference at all!
Please never divide anything by TDP. Use actual power measurements, unless you're trying to ensure your numbers end up being bullshit. (In particular, any number someone claims is a TDP for an Apple processor is made up, because Apple doesn't publish or specify any quantity remotely similar to TDP.)
Are you seriously trying to claim that Apple's total system wall power numbers are appropriate for comparing against an AMD or Intel processor TDP number? You really are trying to ensure the numbers you calculate are bullshit.
I think you did not read the context of this discussion. We're talking about GPU power draw, not SOCs, which can be measured on Apple Silicon and compared against third-party raster workloads.
If you think any of my calculations are wrong, please cite them and correct them. GPU-to-GPU, Apple's raster performance is lacking.
The big advantage of Macs when it comes to GPUs isn't their direct speed, it's the unified memory model. If I want to buy a GPU that has 64-128GB of addressable memory, it will cost an enormous amount, and the computer itself will be a rack-mounted server module that is loud and not a consumer PC. You can buy a Mac with a unified memory model, and even though its GPU is not in the top rankings, the fact that it can operate on your model in regular memory is what gives it its advantage.
I'm a huge fan of Apple's hardware since they introduced their own silicon, but this is just silly. Apple doesn't have the personality needed to court and work with game companies. They're busy expecting everyone to come to them when they'd have to actually work to entice them.
Society grows great when people plant trees whose shade they will never sit in. The problem is that we aren’t raising all of the kids right. It’s a societal problem in as much as it is a personal problem for folks unwilling and often unable to work with their kids on this stuff.
We aren’t a nation of nerds, I doubt we ever were, but nerds really ought to create a support system for each other. I understand why people care so much about which school district they are in. It’s as much about a culture of curiosity as test scores.
I’m a nerd, but we were never a nation of nerds and things turned out pretty well. The reality is that, even for smart people, the world is pretty hard to navigate with book learning. I’m reminded of the last president of Afghanistan, Ashraf Ghani, a professor at Hopkins with a PhD from Columbia who wrote a book called “Fixing Failed States.” Yet he was spectacularly unsuccessful at fixing the problems that were squarely within the field of his expertise.
Given the limits academia’s predictive power with respect to complex issues, I think it’s more important to select for and socialize pro-adaptive “gut feelings.” I went to the Iowa Caucuses back in 2019. These were democrats, but not highly educated ones. Mostly farm and farm adjacent people. But watching them ask questions and deliberate, there was a degree of level-headedness, practicality, prudence, skepticism, and caution that was just remarkable to watch. These are folks who don’t have much book learning but come from generations of people who managed to plan and organize their lives well enough to survive Iowa’s brutally harsh winters and short planting window (about 14 days—either side of that and you and your whole family die). You need smart people to do smart people things, but those conscientious normies are the backbone of a healthy society.
> The reality is that, even for smart people, the world is pretty hard to navigate with book learning. I’m reminded of the last president of Afghanistan, Ashraf Ghani, a professor at Hopkins with a PhD from Columbia who wrote a book called “Fixing Failed States.” Yet he was spectacularly unsuccessful at fixing the problems that were squarely within the field of his expertise.
Outliers.
You cannot come to conclusions based on examining outliers only. The better conclusion is from taking a sample of the population, and checking the correlation between test scores and success.
> Given the limits academia’s predictive power with respect to complex issues, I think it’s more important to select for and socialize pro-adaptive “gut feelings.”
There are plenty of studies that determine the correlation between academic performance and success. Have you possibly even considered that the basic "gut feeling" only gets better (i.e. yields more predictive successes) with better academic scores?
IOW, the more you know, the more you learn, the better your heuristic is when making snap conclusions.
> I’m not talking about individual success i’m talking about societal success.
I don't know what that means.
Social mobility? Academic success corresponds quite strongly to that too.
Collective success? Groups who are academically successful also correlate quite well to various measures of success.
I mean, unless we reduce the scope of our samples to the outliers, and look at a non-representative sample, it's really quite hard to support the claim that "gut-feel" is at all valuable without high academic performance.
> It’s a societal problem in as much as it is a personal problem for folks unwilling and often unable to work with their kids on this stuff.
Even that is multi-dimensional. Another big problem we have in the US is that there are groups of people who don't want their children to learn certain things that most well-educated people take for granted.
For example, it's pretty common to this day for some school districts around the country to skip over teaching evolution. It's also common to misrepresent the causes behind the civil war and gloss over the genocide of native populations.
Others could probably come up with additional examples.
My daughter, at her very expensive deep-blue private school, learned that the Constitution was inspired by the Iroquois—who didn’t have written language—but didn’t learn about the English civil war where the ideas behind the constitution actually had their genesis.
In terms of being a citizen in America, it’s far more important to understand the English civil war, British history, etc. Those are the instruction manual for the actual society we have inherited. Even in my deep red state public school system, we spent way more time than was warranted on Native Americans and other things that people feel guilty about. If you’re born on a multi-generational colony ship, you need to know how the CO2 scrubbers work. It doesn’t actually help you to know that some indigenous population was decimated by the mining of the uranium that powers the ship’s reactors.
> It doesn’t actually help you to know that some indigenous population was decimated by the mining of the uranium that powers the ship’s reactors.
It does, because for people to survive and thrive, they need politics and institutions that don't kill them and that produce CO2 scrubbers. The politics and institutions turn out to be much harder than the scrubbers - few societies produce the latter, and it's generally the ones with much stronger human rights.
But the world’s most technologically advanced civilization was built by politics and institutions that killed and displaced the native Americans then glorified that effort in movies and television. The guys who built the moon rocket and silicon valley grew up playing cowboys and indians.
Nothing is pure. You are ignoring quite a lot, including quite a lot that distinguishes that society and its peers from the other, far less accomplished ones.
The question is not purity, but facing our own faults, personal and societal, do we give up and indulge them or do we keep our vision and confidence and keep improving?
You're moving the goalposts. You made a good point earlier: "for people to survive and thrive, they need politics and institutions that don't kill them and that produce CO2 scrubbers."
We know what "politics and institutions" created the CO2 scrubbers (i.e. our present technologically advanced and prosperous society). It was the ones that displaced and killed the native americans and celebrated it in movies. By your own logic, we should be teaching how to maintain those politics and institutions, so we maintain our prosperity. Insofar as there is any point in learning about history, surely it is learning about what has worked?
This is silly. The 'society' has done very, very many things over the centuries (including other awful ones - slavery, Japanese-American internment, segregation, oppression in Latin America and elsewhere, climate change, etc etc). To pick one and say it's necessary to CO2 scrubbers is just a rhetorical/philosophical game.
That can be fun - we're on HN after all - and even informative to explore, but is not tied to reality.
The point I’m trying to get at is that you seem to be trying to smuggle “liberty and justice” in under the cover of CO2 scrubbers. But the same “politics and institutions” that created our technologically advanced, prosperous society, also did those other things. Even if they weren’t individually “necessary” to that advancement, it seems like being fairly insensitive to such outcomes has been a feature of the approach that has made the U.S. successful. So why fix what isn’t broken?
Thanks for clarifying - I didn't know what you were after.
> Even if they weren’t individually “necessary” to that advancement, it seems like being fairly insensitive to such outcomes has been a feature of the approach that has made the U.S. successful. So why fix what isn’t broken?
To say taking away people's freedoms and lives "isn't broken" is obviously part of the philosophical game.
If you mean to posit the old Faustian choice: If doing such things is required for power, should we do them? Do the ends justify the means? It's a challenging hypothetical, of course, but these days it's extremely overdone, widely used by bad people as a propaganda assault in order to seize power, and now some fools take them seriously. I'm bored with it, and taking them seriously is obviously ridiculous and dangerous.
The interesting part gets little discussion, and is far more interesting because it applies to reality: How do we take care of all of people's needs: life, liberty, and the pursuit of happiness, for all? Cutting the specs down to 'my needs' or 'many people's needs but destroy the rest' is just the corruption of power.
Some of the answer is low-hanging fruit, provided by generations before us, especially in our free democracy. Some needs to be discussed. Shall we start?
I'd imagine the same; skipping evolution entirely is hard. Dismissing it however, is not that uncommon.
> Between 2007 and 2019, there definitely was progress: from 51 percent of high school biology teachers reporting emphasizing evolution and not creationism in 2007 to 67 percent in 2019. It was matched by a drop from 23 to 12 percent of teachers who offer mixed messages by endorsing both evolution and creationism as a valid scientific alternative to evolution, from 18 to 15 percent of teachers who endorse neither evolution nor creationism, and from 8.6 to 5.6 percent of teachers who endorse creationism while not endorsing evolution.
> What would you have the education system do? Put iPads in front of kids all day?
A clear majority of parents that I know actually would have the education system do that. Hence the oftentimes poor results.
A private school I looked at in 2025 required iPads (and nothing else) because their entire management of students was done by an iPad application (that worked on nothing but iPads).
The school admin/marketer/consultant/whatever I spoke to during the sales call literally did not understand what I meant when I said, "If your management is so incompetent at decision-making that they got shanghaied into buying into this deficient ecosystem when almost any other decision would have worked for both major mobile platforms, why on earth would I think that the other decisions they make would be any good?"[1]
------------------------------
[1] Management who make obviously incompetent decisions like "Our study material only works on iPads" are obviously incompetent or otherwise disconnected from reality.
With "obvious" ideas like reading being good, you tend not to have people chiming in to say so. That creates a filter where you only get the contrarianism.
... Prepare shorter or lighter materials for them to read, as this article suggests? Why has reading whole books become the holy grail of the education system?
The said education system expected this:
> As a high school student less than a decade ago, he was assigned many whole books and plays to read, among them, “The Immortal Life of Henrietta Lacks,” “The Crucible” and “Their Eyes Were Watching God.”
Yeah, sounds like a great way to filter for the maybe 20% who are good readers and make sure the remaining 80% hate reading for the rest of their lives.
You can say it’s like childcare, sure. But learning has to come from somewhere. Parents seem to be doing less and less out of the classroom. Does that mean we’re just giving up then?
Maybe literature is just a terrible medium for culture except for the relatively brief period in human history when they were extraordinarily cheap to produce and disseminate compared to other cultural products.
Edit: but insofar as media criticism in education is bound to the book rather than the dominant forms of the day, I think children are being done a disservice.
It's still by far the best medium that requires you to be active and imaginative while packing the best information density and usability. Plus it works offline, without power, you can carry it around, &c.
Books forge you in a way the short "content" we consume all day long today never will. There are a few long-form podcasts here and there that could be comparable, but that's not the bulk of the media kids "consume".
Let the market solve it. If the market requires educated adults the market will create that environment or something, answer is probably private schools. I assume they’d say something like that.
Slight problem with that if you would like to live in a functioning, thriving democracy: democracy in the sense of "one person, one vote" requires or at least greatly benefits from a broadly educated population. It's not sufficient, but very likely necessary.
>Let the market solve it. If the market requires educated adults the market will create that environment or something, answer is probably private schools. I assume they’d say something like that.
I don't pretend to speak for anyone else, but I am more than my economic inputs and outputs, and while it was in a somewhat different context, Heinlein's prose applies in spades WRT your assertion:
“I had to perform an act of faith. I had to prove to myself that I was a man. Not just a producing-consuming economic animal…but a man.”
― Robert A. Heinlein[0][1]
The market has never solved anything in ways that are beneficial for humanity. (Just commenting on the first part of your comment, given that your last sentence implies you're just saying what market evangelists would say.)
I always wonder about the truth in “no advertising is bad advertising.” I think you can have bad advertising that alienates customers, but this doesn’t seem to cross that line. We’re all talking about McDonald’s now, after all.
We’re a game studio with less technical staff using git (art and design) so we use hooks to break some commands that folks usually mess up.
Surprisingly most developers don’t know git well either and this saves them some pain too.
The few power users who know what they’re doing just disable these hooks.