
> no one actually wants to buy a tungsten cube

Apparently some people do and don't even regret the purchase: https://thume.ca/2019/03/03/my-tungsten-cube/


Those metrics are all aggregate ones. A group containing Bill Gates plus one destitute homeless person who is $1M in debt has great metrics of that sort. Total debt is a tiny fraction of total income. Income per person is huge, and doesn't stop being huge when you adjust for price differences or hours worked or anything else you care to adjust for. But that destitute homeless person with a $1M debt is still destitute and homeless and $1M in debt.

I haven't commented on "repayment behaviour" because your other comments don't actually mention that. Maybe there's something behind one of the links you posted that explains what you mean by it. I did have a quick look at the not-paywalled ones and didn't see anything of the kind.

(The above isn't a claim that actually the US economy is in a very real sense tanking, or that not-very-rich Americans are heading for destitution, or anything else so concrete. Just pointing out why the things you've been posting don't seem like they address the objection being made.)


Are you sure? I can't find any trace of any book by Richard Dawkins with a title much like that, and that doesn't seem like a very on-brand sort of cover pic for a book by him, and an image search for "Richard Dawkins book cover" doesn't turn up anything like it.

Most likely "I Am Me, I Am Free: The Robot's Guide to Freedom. - David Icke"

Confusing David "The monarchy are secretly lizards" Icke with Dawkins is astonishing.

More "info": https://en.wikipedia.org/wiki/Reptilian_conspiracy_theory#Da...


Omg. I am such an idiot!

I feel that my life has been improved by all three of these. I hadn't seen the "hysterical clickbait" one before you pointed it out, so thank you even though clearly that was the opposite of your intent.

It's somewhat live, as it now has

META-MELTDOWN: WE BROKE HACKER NEWS WITH THIS ONE SIMPLE TRICK (dosaygo-studio.github.io)


I am not 100% convinced by this. The matchup between their painting-based economic index (the first principal component from a PCA, where the data for each painting is a vector of pixel counts for colours in each of 108 HSV-based bins) and GDP growth is pretty dubious, and in the places where the two do vary together the painting-based metric frequently changes several years before the allegedly-corresponding change in GDP growth.
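
For concreteness, the construction described there amounts to something like the following (a rough sketch of my own, not their code; the 12x3x3 HSV binning and the use of OpenCV and scikit-learn are my assumptions):

    import glob
    import cv2
    import numpy as np
    from sklearn.decomposition import PCA

    def hsv_histogram(path, bins=(12, 3, 3)):      # 12*3*3 = 108 bins (my guess at their binning)
        img = cv2.cvtColor(cv2.imread(path), cv2.COLOR_BGR2HSV)
        hist = cv2.calcHist([img], [0, 1, 2], None, list(bins),
                            [0, 180, 0, 256, 0, 256])   # OpenCV's hue range is 0-180
        return (hist / hist.sum()).flatten()            # normalised 108-dimensional colour vector

    # hypothetical directory of painting images, one histogram per painting
    painting_paths = sorted(glob.glob("paintings/*.jpg"))
    X = np.array([hsv_histogram(p) for p in painting_paths])
    index = PCA(n_components=1).fit_transform(X)[:, 0]  # first principal component = the "economic" index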

They have ad hoc explanations for the divergences and try to make lemonade out of the lemons by claiming that their index reveals "higher-frequency fluctuations that traditional series smooth over" but I am willing to bet that if they had had to predict the divergences before doing the calculations they wouldn't have been able to.

I think this is probably mostly pareidolia.


The argument is not that the color index is a perfect replica of GDP, but that it is an independent, higher-frequency proxy for economic activity that captures dimensions missed by traditional reconstructions.

The value of the index lies precisely in where it converges with broad historical trends and where it diverges, suggesting new information. The observation that the color index frequently changes before GDP is a sign of its validity, not a weakness: shifts in consumer demand and sentiment, or supply-chain shocks, can show up in painting colours before they show up in GDP, making the index a leading indicator.


Also, color pigments might age differently.

Is the image we see today really what was originally painted?

E.g. Rembrandt's famous Night Watch, which was originally larger and brighter.


I'm 0% convinced. You can tell from a color palette whether some wallpaper was from the '70s or '80s, but that tells you nothing about the economic conditions and everything about what colors were in style.


As someone else said, we don't know for sure. But it's not like there aren't some at-least-kinda-plausible candidate harms. Here are a few off the top of my head.

(By way of reminder, the question here is about the harms of LLMs specifically to the people using them, so I'm going to ignore e.g. people losing their jobs because their bosses thought an LLM could replace them, possible environmental costs, having the world eaten by superintelligent AI systems that don't need humans any more, use of LLMs to autogenerate terrorist propaganda or scam emails, etc.)

People become like those they spend time with. If a lot of people are spending a lot of time with LLMs, they are going to become more like those LLMs. Maybe only in superficial ways (perhaps they increase their use of the word "delve" or the em-dash or "it's not just X, it's Y" constructions), maybe in deeper ways (perhaps they adapt their _personalities_ to be more like the ones presented by the LLMs). In an individual isolated case, this might be good or bad. When it happens to _everyone_ it makes everyone just a bit more similar to one another, which feels like probably a bad thing.

Much of the point of an LLM as opposed to, say, a search engine is that you're outsourcing not just some of your remembering but some of your thinking. Perhaps widespread use of LLMs will make people mentally lazier. People are already mostly very lazy mentally. This might be bad for society.

People tend to believe what LLMs tell them. LLMs are not perfectly reliable. Again, in isolation this isn't particularly alarming. (People aren't perfectly reliable either. I'm sure everyone reading this believes at least one untrue thing that they believe because some other person said it confidently.) But, again, when large swathes of the population are talking to the same LLMs which make the same mistakes, that could be pretty bad.

Everything in the universe tends to turn into advertising under the influence of present-day market forces. There are less-alarming ways for that to happen with LLMs (maybe they start serving ads in a sidebar or something) and more-alarming ways: maybe companies start paying OpenAI to manipulate their models' output in ways favourable to them. I believe that in many jurisdictions "subliminal advertising" in movies and television is illegal; I believe it's controversial whether it actually works. But I suspect something similar could be done with LLMs: find things associated with your company and train the LLM to mention them more often and with more positive associations. If it can be done, there's a good chance that eventually it will be. Ewww.

All the most capable LLMs run in the cloud. Perhaps people will grow dependent on them, and then the companies providing them -- which are, after all, mostly highly unprofitable right now -- decide to raise their prices massively, to a point at which no one would have chosen to use them so much at the outset. (But at which, having grown dependent on the LLMs, they continue using them.)


I don't agree with most of these points; I think the points about atrophy, trust, etc. will have a brief period of adjustment, and then we'll manage. For atrophy specifically, the world didn't end when our math skills atrophied with calculators; it won't end with LLMs, and maybe we'll learn things much more easily now.

I do agree about ads, it will be extremely worrying if ads bias the LLM. I don't agree about the monopoly part, we already have ways of dealing with monopolies.

In general, I think the "AI is the worst thing ever" concerns are overblown. There are some valid reasons to worry, but overall I think LLMs are a massively beneficial technology.


For the avoidance of doubt, I was not claiming that AI is the worst thing ever. I too think that complaints about that are generally overblown. (Unless it turns out to kill us all or something of the kind, which feels to me like it's unlikely but not nearly as close to impossible as I would be comfortable with[1].) I was offering examples of ways in which LLMs could plausibly turn out to do harm, not examples of ways in which LLMs will definitely make the world end.

Getting worse at mental arithmetic because of having calculators didn't matter much because calculators are just unambiguously better at arithmetic than we are, and if you always have one handy (which these days you effectively do) then overall you're better at arithmetic than if you were better at doing it in your head but didn't have a calculator. (Though, actually, calculators aren't quite unambiguously better because it takes a little bit of extra time and effort to use one, and if you can't do easy arithmetic in your head then arguably you have lost something.)

If thinking-atrophy due to LLMs turns out to be OK in the same way as arithmetic-atrophy due to calculators has, it will be because LLMs are just unambiguously better at thinking than we are. That seems to me (a) to be a scenario in which those exotic doomy risks become much more salient and (b) like a bigger thing to be losing from our lives than arithmetic. Compare "we will have lost an important part of what it is to be human if we never do arithmetic any more" (absurd) with "we will have lost an important part of what it is to be human if we never think any more" (plausible, at least to me).

[1] I don't see how one can reasonably put less than 50% probability on AI getting to clearly-as-smart-as-humans-overall level in the next decade, or less than 10% probability on AI getting clearly-much-smarter-than-humans-overall soon after if it does, or less than 10% probability on having things much smarter than humans around not causing some sort of catastrophe, all of which means a minimum 0.5% chance of AI-induced catastrophe in the not-too-distant future. And those estimates look to me like they're on the low side.
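
Spelled out, that lower bound is just the product of the three estimates: 0.5 × 0.1 × 0.1 = 0.005, i.e. 0.5%.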


Any sort of atrophy of anything happens because you don't need the skill any more. If you need the skill, it won't atrophy. It doesn't matter whether it's LLMs or calculators or anything else: atrophy is always a non-issue, provided the technology doesn't go away (you don't want to have forgotten how to forage for food if civilization collapses).


Right. But (1) no longer needing the skill of thinking seems not obviously a good thing, and (2) in scenarios where in fact there is no need for humans to think any more I would be seriously worried about doomy outcomes.

(Maybe no longer needing the skill of thinking would be fine! Maybe what happens then is that people who like thinking can go on thinking, and people who don't like thinking and were already pretty bad at it outsource their thinking to AI systems that do it better, and everything's OK. But don't you think it sounds like the sort of transformation where if someone described it and said "... what could possibly go wrong?" you would interpret that as sarcasm? It doesn't seem like the sort of future where we could confidently expect that it would all be fine.)


I'm not sure we'd ever outsource thinking itself to an LLM, we do it too often and too quickly for outsourcing it to work well.


The "obvious" thing to try, which presumably some people are trying pretty hard right now[1], is to (1) use a mathematically-tuned LLM like this one to propose informal Next Things To Try, (2) use an LLM (possibly the same LLM) to convert those into proof assistant formalism, (3) use the proof assistant to check whether what the LLM has suggested is valid, and (4) hook the whole thing together to make a proof-finding-and-verifying machine that never falsely claims to have proved something (because everything goes through that proof assistant) and therefore can tolerate confabulations from LLM #1 and errors from LLM #2 because all those do is waste some work.

[1] IIRC, AlphaProof is a bit like this. But I bet that either there's a whole lot of effort on this sort of thing in the major AI labs, or else there's some good reason to expect it not to work that I haven't thought of. (Maybe just the "bitter lesson", I guess.)
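
In code, the loop in question might look roughly like this (a sketch only; propose_step, formalize and check_with_proof_assistant are hypothetical stand-ins for LLM #1, LLM #2 and the proof assistant):

    def prove(goal, max_attempts=10_000):
        """Hypothetical search loop: the LLMs propose, the proof assistant disposes."""
        accepted_steps = []
        for _ in range(max_attempts):
            idea = propose_step(goal, accepted_steps)     # LLM #1: informal Next Thing To Try
            formal = formalize(idea, accepted_steps)      # LLM #2: translate into proof-assistant syntax
            ok, open_goals = check_with_proof_assistant(formal, accepted_steps)
            if not ok:
                continue                # confabulation or mistranslation: wasted work, never a false proof
            accepted_steps.append(formal)
            if not open_goals:
                return accepted_steps   # every step machine-checked
        return None                     # gave up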

It would doubtless be challenging to get such a system to find large difficult proofs, because it's not so easy to tell what's making progress and what isn't. Maybe you need LLM #3, which again might or might not be the same as the other two LLMs, to assess what parts of the attempt so far seem like they're useful, and scrub the rest from the context or at least stash it somewhere less visible.

It is, of course, also challenging for human mathematicians to find large difficult proofs, and one of the reasons for them is that it's not so easy to tell what's making progress and what isn't. Another major reason, though, is that sometimes you need a genuinely new idea, and so far LLMs aren't particularly good at coming up with those. But a lot of new-enough-ideas[2] are things like "try a version of this technique that worked well in an apparently unrelated field", which is the kind of thing LLMs aren't so bad at.

[2] Also a lot of the new-enough-ideas that mathematicians get really happy about. One of the cool things about mathematics is the way that superficially-unrelated things can turn out to share some of their structure. If LLMs get good at finding that sort of thing but never manage any deeper creativity than that, it could still be enough to produce things that human mathematicians find beautiful.


I think it's fair to say that summing the series directly would be slow, even if it's not slow when you already happen to have summed the previous n-1 terms.

Not least because for modestly-sized target sums the number of terms you need to sum is more than is actually feasible. For instance, if you're interested in approximating a sum of 100 then you need something on the order of exp(100) or about 10^43 terms. You can't just say "well, it's not slow to add up 10^43 numbers, because it's quick if you've already done the first 10^43-1 of them".
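
For the curious, a quick sanity check of that exp(100) figure, assuming the series in question is the harmonic series (which is what the exp(100) suggests) and using the standard approximation H_n ≈ ln n + γ for the n-th partial sum:

    import math

    target = 100.0
    euler_gamma = 0.5772156649015329

    # H_n ≈ ln(n) + γ, so reaching H_n ≈ target needs n ≈ exp(target - γ)
    n_needed = math.exp(target - euler_gamma)
    print(f"terms needed: roughly {n_needed:.2e}")   # ≈ 1.5e+43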


Nah, look at their posting history. In the last hour they've posted a whole slew of comments with the same sort of tone and the same AI-ish stylistic quirks, all in quite surprisingly quick succession if the author is actually reading the things they're commenting on and thinking about them before posting. (And their comments before this posting spree are quite different in style.) I won't say it's impossible for this to be human work, but it sure doesn't look like it.


Yeah you're right. I wanted to give them the benefit of the doubt, but the comment history makes it pretty obvious.


For that sort of task: no, Tao isn't all that much better than a "regular researcher" at relatively easy work. But the tougher the problems you set them at, the more advantage Tao will have.

... But mathematics gets very specialized. If it's a problem in a field the other guy is familiar with and Tao isn't, they'll outperform Tao, unless it's a tough enough problem that Tao takes the time to learn a new field for it, in which case maybe he'll win after all through sheer brainpower.

Yes, Tao is very very smart, but it's not like he's 100x better at everything than every other mathematician.

