Quite amazing how little traction this story is getting. Back in the 1950s-2000s, when Russia's influence on the West was rather subdued, this would have been the only news in town.
Instead it's more like "nothing to see here, kindly vote in our puppet candidate X, thanks!" for various local values of X.
Believe me, we've known for a long time that this is a feature, not a bug.
The Tuesday law was passed in 1845. Instead of changing it, many legislators are pushing in the opposite direction: trying to selectively suppress their opponents' votes further. If it hurts them more than us, it's a worthy goal!
Not with a randomized audit, such as this one for the 2022 primary [1]. If it flipped just one vote out of 100, and you drew an audit sample of just 1000 votes, the probability of detecting it would be 99.996%.
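For anyone who wants to check that figure, here's a minimal back-of-the-envelope sketch in Python (my own, not the audit's actual methodology; it assumes each sampled ballot is flipped independently with probability 1/100, i.e. sampling with replacement from a large pool):

    p_flip = 1 / 100          # fraction of ballots with a flipped vote
    sample_size = 1000        # ballots drawn for the audit
    # P(at least one flipped ballot shows up in the sample)
    p_detect = 1 - (1 - p_flip) ** sample_size
    print(f"{p_detect:.3%}")  # 99.996%

With millions of ballots in the pool, sampling without replacement gives essentially the same number.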
What do you audit if the tally and the paper ballot are consistent with each other? The only check possible is the voter verifying the printout themselves before they hand over the paper ballot.
The problem as stated was that the marker machine lies on 1 out of 15 entries, so the paper would occasionally contain an incorrect selection. So, yeah, it would require no one noticing during the act.
Indeed, and the math is the same. If out of 3 million voters, just 1000 double-check the printout, they will detect a 1/100 flip with probability 99.996%.
Of course, it's important that when enough people check their ballot and say "hey, this isn't what I meant," it triggers a formal audit, not just letting those 1000 have a redo and chalking it up to human error.
Sure, but 1000 is just 1 in 3000 voters. In practice it's going to be way more than that, probably 2 or 3 in 10. That's hundreds of thousands of voters, many of whom are going to be punctilious people. Of all the suggested fuckery methods, this would be caught the fastest IMO.
And yet somebody who voted said far above in this thread that the machine reads a barcode on their ballot, so they have a 0% chance of verifying whether their vote was entered correctly. And there is always the added problem of a Dieselgate-style obfuscation: the machine counts votes differently in verification mode than in actual vote-counting mode.
My preferred machine would be one that did not use integrated circuits, but was simple enough that the entire board and circuit was visible - with no software beyond the circuitry at all. You just need a very simple sensor and tally wheels that mechanically advance, like those used for measuring wheels etc. No need for memory. Keep automation to the absolute bare minimum.
> nothing burger event for an index that has lost relevance
This argument has been around since time immemorial. The right way to think of it is more like a country club or a who's who, rather than a survey or a directory.
As for the news at hand, it's really more about Intel than Nvidia. Sic transit gloria mundi.
I wish some of the sleeping/eating studies covered the options "sleep when I'm tired" and "eat when I'm hungry."
When remote work is an option, it'd be nice to know the health opportunity cost of RTO. Sadly the cohort is too hard to study outside of nursing home residents.
That's not really surprising, considering that's exactly how interest rate increases are supposed to work. They crowd out a bit of business investment and a lot of housing investment [1]. This in turn cools the economy by curtailing construction.
Unfortunately, housing in most areas was screwed to begin with. This was not nearly the case with prior inflation bouts that required rate increases. The Fed was left with two bad choices. It did what it had to do.
If the economy was Windows XP, housing would be its networking stack - its most exploitable sector. This is largely because of local governance and regulatory capture [2]. Housing has been artificially undersupplied for 5 decades under a variety of pretexts, such as architectural integrity. It has effectively turned the sector into a pyramid scheme that captures the wages of renters.
If you want to consider interest rates in the usual model where increases are meant to reduce demand and lower prices, the Fed either really screwed up or we're about to see a huge drop in the housing market.
Prices consistently went up over the last couple years of high rates. The recent fed funds rate drop didn't seem to help, and last I checked loan rates had stayed flat or actually gone up a bit.
If loans weren't hurt enough, then the Fed stopped raising rates too early and cut too early. Otherwise it would seem that they cut in anticipation of something coming, and loan rates are staying high in anticipation of a higher risk of those same issues ahead.
Prices are not the relevant statistic though, it's housing starts [1]. They dropped from ~1.8M in April 2022 to ~1.3M today. Prices are affected by supply and demand equally, while starts are much more affected by supply.
As for how the Treasury and Fed managed the crisis, it was easily the most incredible macro success story since I've been alive. Had you told me in 2022 that they'd manage sub-3% inflation without a recession at all, I'd have said you believed in fairy tales.
> Prices are not the relevant statistic though, it's housing starts [1]. They dropped from ~1.8M in April 2022 to ~1.3M today. Prices are affected by supply and demand equally, while starts are much more affected by supply.
New construction and prices are always connected; I'm not sure how you could consider one without the other. New builds increase supply, but building them requires buyers willing to pay market price for them. You can't have one without the other.
> As for how the Treasury and Fed managed the crisis, it was easily the most incredible macro success story since I've been alive. Had you told me in 2022 that they'd manage sub-3% inflation without a recession at all, I'd have said you believed in fairy tales.
It's too early to make that call either way. They may very well have managed the "soft landing" they kept pitching, but we really won't know for sure for at least a decade or so. Markets move slowly and economies move even slower; give it time to shake out before popping bottles and awarding Nobel Prizes.
The goal in increasing rates is to cool the economy by reducing construction, i.e. housing starts. Exchanging existing units doesn't affect employment and output as much. Since starts dropped by a third, it follows that prices would have been even higher without the rate increases.
> we really won't know for sure for at least a decade or so
We absolutely already know. The contractionary effect of a rate increase is largest in the short term. You can't have a lagging effect after rates are cut if there was no contraction to begin with.
> The goal in increasing rates is to cool the economy by reducing construction, i.e. housing starts
That may be a goal, but it isn't the goal. The Fed doesn't directly control interest rates on new construction; they control the Fed funds rate. Their changes affect the cost for banks to borrow money regardless of the type of loans they underwrite. Increasing rates should decrease demand for new construction, but it decreases demand for existing homes as well. It also negatively impacts employment and many other areas; basically, if you run on debt, higher rates hurt.
> We absolutely already know. The contractionary effect of a raise in rates is largest in the short term. You can't have a lagging effect after rates are cut if there was no contraction to begin with.
What makes you say that? Lagging effects after a rate cut aren't directly controlled by, or limited by, the effects we've already seen; they wouldn't be lagging if they had to have shown up already. More importantly, in my opinion, contraction isn't an absolute and requires a baseline for comparison.
After rates are cut, the only meaningful comparison is against where we would have been without intervention. Comparing against a gross number isn't particularly helpful.
For example, say we had a house worth $100k and it was on track to be worth $110k next year. If we intervene and now it will only be worth $105k next year, wasn't that functionally a contraction induced by the intervention even though it didn't fall below the present value of $100k?
> it decreases demand for existing homes as well. It also negatively impacts employment and many other areas, basically if your run on debt the higher rates hurt
The question is not what's affected. It's how much monetary policy it takes to achieve the desired output and employment effect, and where is the employment effect. We want to stop increasing rates once the effect is achieved.
We raise rates because the economy is overheating. Too much money is chasing too few goods, raising prices. In response to the favorable prices, too many jobs are chasing too few workers, raising wages.
Rising prices and wages (without rising productivity) means inflation. We're at one edge of the Phillips curve [1] and need to move back to the middle.
> That may be a goal, but it isn't the goal.
The goal is actually to raise unemployment. That's what the "cooling the overheating economy" euphemism means. However, we don't have the tools to raise it evenly in all sectors. The only tool we have is the short term interest rate.
Luckily, it affects the long term rate, which affects demand for homes, new and existing. For (say) every 20 fewer homes sold, one realtor and one banker might go unemployed. But for every 20 fewer new homes built (say), 50 laborers might go unemployed. That's why the transmission is primarily through construction [2] [3]. The consumer durables sector (appliances) used to have a large multiplier too, but most of those manufacturing jobs have been automated or moved overseas.
> they wouldn't be lagging if the effects must have already happened
You can model the lag effects as geometric decays of the original impulse. You need a negative original effect (one impulse of high unemployment) that then regresses back to baseline (continued but fading unemployment). We didn't have any impulse, hence no decay.
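A toy illustration, with made-up numbers purely to show the shape of the curve (baseline, impulse size, and decay rate are all assumptions of mine):

    baseline = 4.0   # unemployment rate, percent (assumed)
    impulse = 1.5    # initial jump caused by the rate hikes (assumed)
    decay = 0.7      # fraction of the gap that survives each period (assumed)

    path = [baseline + impulse * decay**t for t in range(8)]
    print([round(u, 2) for u in path])
    # jumps to 5.5, then eases back toward 4.0 a bit more slowly each period;
    # with impulse = 0 the path is flat at baseline: no impulse, nothing to decay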
The only thing that could really go wrong is a resurgence of inflation if the cuts were too fast, but chances are we would have seen that already too.
> relative to where we would have been without intervention
The problem here is we were at too much output/employment. We had to get through that moment and fix the problem without overshooting in the opposite direction.
We couldn't do better, that's the whole point. It's not like there was potentially more output to be had. The actual problem is that output was too high. That's why there was inflation.
Is there a blog post about what specifically she found to be inscrutable? The C++ doesn't look all that terse at a syntactic level, and has plenty of comments. Are the problems at the domain level?
I can imagine she was glad it took only some gentle massaging to make it work. If she had encountered roadblocks there, she would have had to dig in for real.
The code is quite low on comments and doesn't really explain the math there. It probably makes sense if you have background in the lofty math side of computer graphics, but that's a slightly different skillset than being able to reverse-engineer and bring up exotic hardware.
At the risk of turning a unison into a chord, here's my two cents.
If:
1. You know where the 'creases' of orthogonality are. You've carved the turkey 1000 times and you never get it wrong anymore.
2. As a result, there is hardly any difference in complexity between code that is and isn't easy to extend.
Then write code that is easy to extend, not delete.
The question is whether your impression of the above is true. It won't be for most junior developers, nor for many senior ones. If orthogonality isn't something you preoccupy yourself with, it probably won't be.
In my experience, the most telling heuristic is rewriting propensity. I'm talking about rewriting while writing, not about refactoring later. Unless something is obvious, you won't get the right design on the first write. You certainly won't get the correct extensible design. If you're instructed to write it just once, then by all means make it easy to delete.
Here's an algebraic example to keep things theoretical. If the easy-to-delete version proposed by the article is:
f(x) = 6x^2 - 5x + 1
The prospective extensible version is:
g(x,a,b) = ax + b
f(x,q()) = q(x,3,-1) * q(x,2,-1)
f(x,g)
It's the generalization for factorable polynomials. It's clearly harder to read than the easy-to-delete version. It's more complex to write, and so on.
However, it's algebraically orthogonal. It has advantages in some cases, for instance if you later add code for a 6th-order polynomial and need to use its zeroes for something else.
We know that it could be better in some cases. Is it a good bet to predict that it will be better overall? The problem domain can fracture across a thousand orthogonal "creases" like this one. The relevant skill is in making the right bets.
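For concreteness, the two versions might look roughly like this in Python (names and structure are my own rendering, not the article's):

    def f_simple(x):
        # easy-to-delete version: one hard-coded polynomial
        return 6 * x**2 - 5 * x + 1

    def linear(x, a, b):
        return a * x + b

    def f_extensible(x, factor=linear):
        # "orthogonal" version: 6x^2 - 5x + 1 == (3x - 1)(2x - 1),
        # a product of parameterized linear factors whose zeroes
        # could be reused elsewhere
        return factor(x, 3, -1) * factor(x, 2, -1)

    assert f_simple(2) == f_extensible(2) == 15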
Here's an example that's not orthogonal. Let's say we think the leading coefficient 6 might be more likely to change in the future:
g(x,a) = ax^2 - 5x + 1
f(x,q()) = q(x,6)
f(x,g)
This version is most likely just adding complexity. A single function is almost always a better bet.
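In the same hypothetical rendering, the non-orthogonal version would be something like:

    def quadratic(x, a):
        # only the leading coefficient is parameterized;
        # the rest stays hard-coded
        return a * x**2 - 5 * x + 1

    def f_speculative(x, poly=quadratic):
        return poly(x, 6)

    assert f_speculative(2) == 15  # same value as f_simple above

All the indirection buys you is the ability to swap out the 6, which a one-line edit to the simple function would handle anyway.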
The most salient feature of Russian meddling is playing all sides, both ideologically and in levels of sophistication.
The less-sophisticated efforts are basically misdirection and camouflage. They are intentionally disarming, akin to how their 'strongman' puppets make an effort to bumble and 'accidentally' feed the trolls. Covfefe bigly, if you will.
Global Russian paranoia would actually be alright. We used to have that in the 80s. Certainly better than global Russian domination by way of stooges like Maduro, Orban, Wagner in Africa, and you-know-who in the US.