> Wow, I was already impressed with the new comment feature on erdosproblems.com and how it's already been used to solve some of the problems. Excited to see if AI can make a meaningful contribution here.
Since then, there has been some discussion of GPT Pro finding a bunch of references, enabling many of the problem statuses to be changed from "open" to "solved". But it seems that LLMs couldn't find the right reference for this problem.
But there was a different meaningful contribution from AI here instead.
> you're assuming a lot. Including a notion of mathematical existence which bears little relation to any concept that most lay people have of what mathematical existence might mean.
John Horton Conway:
> It's a funny thing that happens with mathematicians. What's the ontology of mathematical things? How do they exist? In what sense do they exist? There's no doubt that they do exist but you can't poke and prod them except by thinking about them. It's quite astonishing and I still don't understand it, having been a mathematician all my life. How can things be there without actually being there? There's no doubt that 2 is there or 3 or the square root of omega. They're very real things. I still don't know the sense in which mathematical objects exist, but they do. Of course, it's hard to say in what sense a cat is out there, too, but we know it is, very definitely. Cats have a stubborn reality but maybe numbers are stubborner still. You can't push a cat in a direction it doesn't want to go. You can't do it with a number either.
Well, 57% is most, although that doesn't account for households with multiple kids.
Either way it's close, and the closer you get to Manhattan the higher that number goes. Remember, there are over 8 million people in NYC, and over 12 million during the work day.
Of further potential interest: This paper cites an earlier paper by Keogh and Lin with the provocative title "Clustering of time-series subsequences is meaningless", available online at https://www.cs.ucr.edu/~eamonn/meaningless.pdf
> Given the recent explosion of interest in streaming data and online algorithms, clustering of time series subsequences, extracted via a sliding window, has received much attention. In this work we make a surprising claim. Clustering of time series subsequences is meaningless. More concretely, clusters extracted from these time series are forced to obey a certain constraint that is pathologically unlikely to be satisfied by any dataset, and because of this, the clusters extracted by any clustering algorithm are essentially random. While this constraint can be intuitively demonstrated with a simple illustration and is simple to prove, it has never appeared in the literature. We can justify calling our claim surprising, since it invalidates the contribution of dozens of previously published papers. We will justify our claim with a theorem, illustrative examples, and a comprehensive set of experiments on reimplementations of previous work. Although the primary contribution of our work is to draw attention to the fact that an apparent solution to an important problem is incorrect and should no longer be used, we also introduce a novel method which, based on the concept of time series motifs, is able to meaningfully cluster subsequences on some time series datasets.
Several commenters here seem to ask "okay, so then what's the right way to cluster windows of a time series?" Perhaps the final sentence of this abstract suggests a solution in that direction?
The suggested solution is to look at motifs: windows that are highly similar once trivial matches due to window overlap are excluded. If you take this to its logical conclusion, you end up in the Matrix Profile rabbit hole. https://www.cs.ucr.edu/~eamonn/MatrixProfile.html
If you were to compute it the naive way, that would be slow, but with increasingly sophisticated algorithms developed over a series of twenty papers, you can get massive speedups. Lots of clever tricks to enjoy! Though I guess you can skip the scenic route if you want and just read the first and last papers.
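If it helps to see what the matrix profile actually is before wading into those papers, here is a minimal naive sketch in Python. The names (`naive_matrix_profile`, `znorm`) and the specific choices (z-normalized Euclidean distance, a half-window exclusion zone for trivial matches) are my own illustration, not something lifted from the papers:

```python
import numpy as np

def znorm(x):
    # z-normalize a window; guard against constant windows
    s = x.std()
    return (x - x.mean()) / s if s > 0 else x - x.mean()

def naive_matrix_profile(ts, m):
    # For each length-m window, the distance to its nearest
    # non-trivial match anywhere else in the series. O(n^2 * m).
    n = len(ts) - m + 1
    excl = m // 2                      # exclusion zone: skip overlapping windows
    windows = np.array([znorm(ts[i:i + m]) for i in range(n)])
    profile = np.full(n, np.inf)
    for i in range(n):
        for j in range(n):
            if abs(i - j) <= excl:
                continue               # trivial match due to window overlap
            d = np.linalg.norm(windows[i] - windows[j])
            if d < profile[i]:
                profile[i] = d
    return profile

# The window with the smallest profile value is one half of the top motif pair.
ts = np.sin(np.linspace(0, 20 * np.pi, 1000)) + 0.1 * np.random.randn(1000)
mp = naive_matrix_profile(ts, m=50)
print("top motif candidate starts at index", int(np.argmin(mp)))
```

The nested loop is the whole idea; the paper series is essentially about replacing it with much faster exact algorithms.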
In case anyone was curious, the Internet Archive snapshots of my parent commenter's link show large dozen egg prices of:
$7.90 in March 2024,
$7.50 in November 2023,
$6.50 in February 2023.
Oh good thinking! So in line with the sibling commenter, they’ve gone up some, but not a crazy amount, with most of that increase happening prior to the outbreak. And still cheaper than Vital Farms prices mentioned by others elsewhere in the thread.
> totalling almost $2 billion. The LAPD's budget for one fiscal year is larger than most country's GDPs
In case anyone was curious, https://en.wikipedia.org/wiki/List_of_countries_by_GDP_(nomi... suggests that ~17 countries have a GDP of less than $2 billion per year. Seeing as how there are 193+ countries, this means that the LAPD budget exceeds the GDP of fewer than 10% of countries (17/193 ≈ 9%). (The median country GDP is ~$50 billion per year.)
For some extra context: while these 17 countries include some very poor countries, the primary reason they have such small GDPs is their small populations. Their combined population is approximately the same as that of the city of Los Angeles.