woopsn's comments | Hacker News

I would like AI that helps me read and understand the text, but I don't see real value in having textbooks generated by AI.


But when working with complex numbers I hardly ever write (a, b) for a+ib, while I use the "escape hatches" all the time. They solve equations that have no real solution, they give me paths from x=-1 to x=1 that don't cross the origin, etc. There's only so much to learn about C as a vector space, while the theory tying it to R (and even N) is very deep.


Thing is, there's no such thing as an escape hatch. Either you are working in the reals, or you are working in the complex plane. They don't "solve equations that have no real solution": any given equation is either a real-number equation or a complex-number equation, not both. If you work in the complex plane, you have a different equation describing a different space! It just looks the same in standard notation.

If you don't realize this, you can draw conclusions that don't make sense in the space you're actually working in. Take a simple equation like y = -x^2 - 5, representing a thrown ball's trajectory. It never crosses zero; there are no solutions. You can't "pop into the complex numbers and find a solution," because the thing it represents is confined to the reals.

So if you find yourself reaching for complex numbers, you have to understand that the thing you are working with is no longer one-dimensional, even if that second dimension collapses back to 0 at the end.
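
To make that concrete, here's a minimal sketch (Python/NumPy; the quadratic is just the hypothetical ball example above):

  import numpy as np

  # y = -x^2 - 5: the "thrown ball" example. Coefficients for -x^2 + 0x - 5.
  roots = np.roots([-1, 0, -5])
  print(roots)                        # approx [0+2.236j, 0-2.236j]

  # A root finder happily returns answers, but they are purely imaginary:
  # there is no real x where this expression equals zero, which is the only
  # kind of answer that means anything for a ball's trajectory.
  print(np.all(np.iscomplex(roots)))  # True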


I guess this PRNG has a recurrence period much less than 52! (or wasn't seeded appropriately), so it only sampled a fraction of the distribution.

"... the system ties its number generation to the number of seconds that have passed since midnight, resetting once each day, which further limits the possible random values. Only about 86 million arrangements could be generated this way,"


If you take N samples of a real signal, you will get N/2+1 bins of information from the DFT, covering 0 Hz out to about half the sampling rate.

The bins do not actually measure a specific frequency; each is more like an average of the power around that frequency.

As you take more and more samples, the bin spacing gets finer and the range of frequencies going into each average becomes tighter (kind of). By collecting enough samples (at an appropriate rate), you can get as precise a measurement as you need around particular frequencies, and other signal-processing tricks can take you further.

If you graph the magnitude of the DFT, signals whose power is concentrated at just a few frequencies show just a few peaks, around the corresponding bins. E.g. a major chord would show three fundamental peaks corresponding to the three tones (plus a bunch of harmonics).

https://en.wikipedia.org/wiki/Fourier_transform#/media/File:...

So you detect these peaks to find out what frequency components are present (though peak detection itself is complicated).
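
A quick numerical illustration (Python/NumPy; the chord frequencies are rounded to whole Hz so each tone lands exactly on a bin, which sidesteps leakage):

  import numpy as np

  fs = 8192                # sample rate (Hz), chosen so the bins are 1 Hz wide
  N = 8192                 # N samples -> N//2 + 1 = 4097 rfft bins, 0..4096 Hz
  t = np.arange(N) / fs

  # Hypothetical C major chord, rounded to integer Hz (roughly C4, E4, G4).
  tones = [262, 330, 392]
  x = sum(np.sin(2 * np.pi * f * t) for f in tones)

  X = np.fft.rfft(x)       # one complex value per bin
  mag = np.abs(X)

  # The three largest peaks sit exactly at the three tone bins.
  print(np.sort(np.argsort(mag)[-3:]))   # [262 330 392]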


It's depressing how common this accusation has become here. Before LLMs ruined everything, you know what? People wrote things you wouldn't like, in a way you wouldn't like. Especially on their blogs. But HN is so smart it can immediately tell: this tenured Yale professor has no life and is trying to win the message board game with AI slop!


Nobody in this thread accused an LLM of writing the OP. Instead, they are saying that it is dumb and easy in the way a lot of LLM writing is, and that LLMs wouldn't have any problem writing it. This author is being disliked in the traditional way, but with an LLM-assisted proof that actually shows that LLMs can write this crap, and write it well.

The real proposal should be that slate-dot-com-type "Is Food Really Good For You?" or "Hands Are A Completely Unnecessary Part Of The Arm" article authors should be replaced by an LLM.

I like the proliferation of LLM slop, because it involuntarily reveals the emptiness of an enormous proportion of actual human writing. You can't help but see it, even if you don't want to. You end up forced to talk about the author's resume in defense.


>It's impossible to take this article's criticisms of AI seriously when it's so obviously over-edited with AI itself.

Someone in this thread accused an LLM of writing the OP.


Are you attempting to claim that my identification of AI slop is incorrect?

If so, you're almost certainly wrong.


Convolution with the Dirac delta will give you an exact sample of f(0), and in principle a whole signal could be constructed as a combination of delayed delta signals - but we can't realize an exact delta signal in most spaces, only approximations.

As a result we get finite resolution and truncation of the spectrum. So "Fourier analysis with a pre-applied lowpass filter" would be analysis of sampled signals, with the filter determined by the sampling kernel (the delta approximator) and the properties of the DFT.

But so long as the sampling kernel is good (that is the actual terminology), we can form f exactly as the limit of these fuzzy interpolations.
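
A minimal numerical sketch of that limit (Python/NumPy; a Gaussian stands in as the hypothetical "good" sampling kernel):

  import numpy as np

  def f(x):
      return np.cos(x) + 0.5 * x**2       # any smooth test signal, f(0) = 1

  def delta_eps(x, eps):
      # Gaussian approximation to the Dirac delta: total mass 1, width eps.
      return np.exp(-0.5 * (x / eps) ** 2) / (eps * np.sqrt(2 * np.pi))

  x = np.linspace(-5, 5, 200_001)
  dx = x[1] - x[0]

  # (f * delta_eps)(0) = integral of f(t) * delta_eps(-t) dt  ->  f(0) as eps -> 0
  for eps in (1.0, 0.1, 0.01):
      print(eps, np.sum(f(x) * delta_eps(-x, eps)) * dx)   # 1.106..., 1.00001..., ~1.0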

The term "resolution of the identity" is associated with the fact that delta doesn't exist in most function spaces and instead has to be approximated. A good sampling kernel "resolves" the missing (convolutional) identity. I like thinking of the term also in the sense that these operators behave like the identity if it were only good up to some resolution.


Does the fact that BB(k)=N is provable up to some k < 748 mean that all halting problems for machines with k states are answered by a proof in ZFC?


748 is not tight. As given in the article, k=643 is already independent of ZFC, and the author speculates that something as small as BB(9) could be as well.

The 748/745/643 numbers are just examples of actual machines people have written, using that many states, that halt iff a proof of "false" is found.

At any rate, given the precise k, I believe your intuition is correct. I've heard this called "proof by simulation": if you know a bound on BB(N), you can run any N-state machine for that many steps, and if it hasn't halted by then, you know it never will. But this property is exactly the intuition for why BB grows so fast, and why we will likely never definitively know anything beyond BB(5). Pinning down BB(6) seems like it might require solving a Collatz-like problem, for example.
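
Here's what "proof by simulation" amounts to, as a sketch (Python; step and bb_bound are hypothetical stand-ins, since no usable bound is known beyond BB(5)):

  def halts(step, initial_config, bb_bound):
      """Decide halting for an n-state machine, *given* an upper bound on BB(n).

      step(config) advances the machine one step and returns None on halt.
      bb_bound is a known upper bound on BB(n) -- the part we almost never have.
      """
      config = initial_config
      for _ in range(bb_bound):
          config = step(config)
          if config is None:
              return True       # halted within the bound
      return False              # ran past BB(n) steps without halting, so it never will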


When was the circle discovered? When it became essential to physics?


When it was essential to perception. It's necessary to have a model of a circle (or ellipse, ...) in order to correctly parse (at least) visual perception -- because space is inherently geometrical.


You need some additional assumptions. Only near equilibrium / in the thermodynamic limit is the system linear in entropy. What governs the physical processes you mention is conservation and dynamics pushing toward equipartition of energy - but outside that regime these are no longer "theorems".


Given that the links work, the quotes were actually said, the numbers are correct, the cited research actually exists, etc., we can immediately rule that out.

