> France ("consistently invested in nuclear for the past half-century" ).
Not really, and this is currently causing a big problem. France stopped building new reactors after 2002. They have built only one new-generation EPR, which was very late and cost six times the original estimate.
Many of the existing reactors are very old and need to be replaced, but that's difficult to do because of the bad experience with the Flamanville reactor.
There are other EPR projects abroad, and most overshot their budgets by $5-10B or more while running about a decade late. Meanwhile, renewables and storage keep getting cheaper and better at remarkable speed. Even France is adding much more renewable than nuclear capacity right now. Which is a pragmatic approach... stay skilled in nuclear and keep a minimum investment to keep the industry alive, but invest mostly in renewables.
If you hang them at 45 degrees, the depth is reduced by a factor of sqrt(2) (to about 0.7 x hanger length), and you lose space on each side. And the more you increase the angle, the more space you lose on the sides.
With this technique, you reduce the depth to 0.5 x hanger length without losing space on the sides.
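A quick Python sketch of the projection geometry (the 45 cm hanger length is just an example number): rotating a hanger by an angle away from its normal front-facing position uses L·cos(angle) of depth but eats L·sin(angle) of rod width.

```python
import math

def rotated_hanger(length_cm: float, angle_deg: float):
    """Depth used and rod width consumed when a hanger is rotated
    angle_deg away from its normal, front-facing position."""
    a = math.radians(angle_deg)
    depth = length_cm * math.cos(a)   # projection toward the back wall
    side = length_cm * math.sin(a)    # footprint along the rod
    return depth, side

for deg in (0, 45, 60):
    depth, side = rotated_hanger(45.0, deg)  # 45 cm: a typical hanger
    print(f"{deg:>2} deg: depth {depth:4.1f} cm, side {side:4.1f} cm")
# 45 deg gives depth L/sqrt(2) (~0.71 L); reaching 0.5 L by rotation
# alone needs 60 deg and costs ~0.87 L along the rod, which is the
# side loss the notched-rod technique avoids.
```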
With this solution you lose the most space on the sides because of the notched rod. The notches have to be wide enough for the thickest items, but then a shirt takes as much space as a thick coat.
If your goal is to save as much depth as possible, the solution is obviously rods that run front to back, as opposed to side to side. There are plenty of options available, from retractable ones to wall-mounted ones that don't require a closet, and so on.
> The notches have to be wide enough for the thickest items, but then a shirt takes as much space as a thick coat.
It's probably easier to take out a hinger and let a thick coat take up two spaces (or more). Assuming you have only a few coats, that's more efficient than increasing the spacing uniformly.
It does seem like you lose some flexibility due to the slots, though.
It also seems possible to put many more slots in the rod. There is no functional reason why slots and hingers need to match one-to-one.
> If your goal is to save as much depth as possible, the solution is obviously rods that run front to back, as opposed to side to side.
Somehow it feels like it would be harder to store the same amount of clothes on front-to-back rods. I also think it would be harder to browse clothes and take out the shirt you want; even with the retractable rod you have to pull out and push in rods just to see what's on them.
Yes, but folding the clothes doubles their thickness, so you can fit fewer hingers on a single rod than you could traditional hangers.
Also, the coat hinger system uses a fixed distance between the hingers (due to the slots in the rod), which seems like it would be less efficient: thick sweaters and thin T-shirts take up the same width.
It's actually not clear if the hinger system works all that well if you need to hang a lot of thick clothes; the rack looks pretty crowded in the available pictures.
I guess you would have to try this in practice to see which system works better. I wouldn't be surprised if it boils down to "the 45 degree angle is better, but the coat hinger looks neater".
A cryptographically secure pseudorandom number generator lets me pump out a stream of digits that's certainly computable, but unless you know my private key, you won't be able to predict it.
(Finding out the private key from the stream is 'computable', because the definition of computable is comfortable with running exponentially long brute force searches. But that's why 'computable' is not a useful definition in practice. You want something that captures 'tractable', not just 'possible on a Turing machine at all'.)
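For concreteness, a toy version of such a keyed stream in Python — a sketch only, using HMAC-SHA256 in counter mode, which is one standard way to build a pseudorandom generator from a secret key:

```python
import hmac, hashlib

def keyed_digit_stream(key: bytes, n_digits: int) -> str:
    """Deterministic digit stream: trivially computable if you hold the
    key, unpredictable if you don't. HMAC-SHA256 in counter mode; the
    slight modulo bias is left in for brevity -- not production code."""
    digits = []
    counter = 0
    while len(digits) < n_digits:
        block = hmac.new(key, counter.to_bytes(8, "big"), hashlib.sha256).digest()
        digits.extend(str(b % 10) for b in block)
        counter += 1
    return "".join(digits[:n_digits])

print(keyed_digit_stream(b"my-private-key", 40))
```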
With that simple sequence (1, 10, 100, etc.), each digit is at least computable in O(1) time, which is maybe a good way to look at predictability. It's a simple rule, and no effort comparable to computing every digit before it is required.
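A sketch of that rule in Python: position n (1-indexed) holds a 1 exactly when n − 1 is a triangular number, which is a constant-time check:

```python
import math

def digit(n: int) -> int:
    """n-th digit (1-indexed) of 1 10 100 1000 ... concatenated:
    it's a 1 exactly when n - 1 = k(k+1)/2 for some k >= 0."""
    m = 8 * (n - 1) + 1          # triangular-number test: m must be a perfect square
    r = math.isqrt(m)
    return 1 if r * r == m else 0

print("".join(str(digit(n)) for n in range(1, 16)))  # 110100100010000
```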
If you define predictable as computable with finite memory, it is correct.
If you have finite memory, you have a finite number of states and will eventually return to a previous state. In your example, you will eventually run out of memory to track the number of consecutive zeros.
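A toy illustration of that argument in Python: the predictor below remembers only the previous and current zero-run lengths, saturating at some cap (i.e. finitely many states), and starts missing every 1 once the runs outgrow the cap. The cap values are arbitrary examples.

```python
def stream(n: int) -> list[int]:
    """First n digits of 1, 10, 100, 1000, ... concatenated."""
    out, zeros = [], 0
    while len(out) < n:
        out += [1] + [0] * zeros
        zeros += 1
    return out[:n]

def finite_predictor(digits: list[int], cap: int) -> int:
    """Predictor with finite memory: it tracks the previous and current
    zero-run lengths, each capped at `cap`. It guesses '1 next' exactly
    when the current run is one longer than the previous run. Returns
    the number of wrong guesses."""
    prev, cur, mistakes = -1, 0, 0
    for d in digits:
        guess = 1 if cur == prev + 1 else 0
        mistakes += guess != d
        if d == 1:
            prev, cur = min(cur, cap), 0
        else:
            cur = min(cur + 1, cap)
    return mistakes

digits = stream(5050)  # exactly 100 blocks, longest zero run = 99
for cap in (10, 50, 1000):
    print(cap, finite_predictor(digits, cap))
# A cap above 99 leaves only one warm-up mistake; a smaller cap misses
# the 1 at the end of every block whose zero run exceeds the cap.
```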
For example, you could set cookies before visiting another website. This is currently impossible in an iframe but possible in a browser.
I've wanted to do this to automatically login users on some external websites.
That's a security feature. If that were possible in any environment, it would be a massive security issue. If you actually own the other service, you can still do it.
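If you do own the external service, the usual pattern is a redirect handoff: send the user to the external site with a short-lived signed token, and let that site set its own first-party cookie. A rough Flask sketch — the route, the shared secret, and the session helper are all made-up names:

```python
# Sketch of a redirect-based auto-login handoff. Site A links to
# https://external.example/sso?token=<signed token>; the external site
# verifies the token and sets its *own* first-party cookie, so no
# cross-site cookie writing is needed.
from flask import Flask, request, redirect, make_response, abort
from itsdangerous import URLSafeTimedSerializer, BadSignature

app = Flask(__name__)
signer = URLSafeTimedSerializer("secret-shared-with-site-A")  # made up

def create_session(user: str) -> str:
    # Stub: mint a real session in your session store here.
    return f"demo-session-for-{user}"

@app.route("/sso")
def sso():
    try:
        user = signer.loads(request.args["token"], max_age=60)
    except (KeyError, BadSignature):  # expired tokens raise a BadSignature subclass
        abort(403)
    resp = make_response(redirect("/dashboard"))
    resp.set_cookie("session", create_session(user), httponly=True, secure=True)
    return resp
```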
With composition, A uses B, but B can never use A.
With inheritance, Child can use the Parent, but Parent will also call the Child (virtual methods), which in turn can call the Parent again, etc., so the code can become difficult to follow. It can get very complicated with multiple inheritance and multiple levels.
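A minimal Python sketch of that bouncing control flow (the class and method names are just for illustration):

```python
class Parent:
    def run(self):
        print("Parent.run starts")
        self.step()               # "virtual" call: resolves to Child.step
        print("Parent.run ends")

    def step(self):
        print("Parent.step (default)")

    def helper(self):
        print("Parent.helper")

class Child(Parent):
    def step(self):               # override, invoked from inside Parent.run
        print("Child.step")
        self.helper()             # ...which jumps back up into Parent

Child().run()
# Parent.run starts / Child.step / Parent.helper / Parent.run ends
```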
Such code would be difficult to follow regardless of whether you use inheritance or not. Sometimes, two pieces of code developed independently just need to interact very closely with each other.
Remember that OOP and inheritance to some extent came out of the need to develop GUI systems. Inheritance is still heavily used in GUI toolkits because it's a good fit for that problem space. You have graphs of objects that need to be treated at different levels of abstraction, and controls often need to customize (override) or implement some behavior that shouldn't itself be a part of the public API.
Attempts to get rid of inheritance and OOP in UIs end up looking like Compose or React. I found very quickly when working with these that pure composition just wasn't sufficient and that these approaches have their own issues; problems that OOP trivially solves become difficult or impossible to solve cleanly without it.
Humans are pretty bad at these questions. Even with the simplest ones, like "Sally (a girl) has 3 brothers. Each brother has 2 sisters. How many sisters does Sally have?", I think a lot of people will give an incorrect answer. And for questions like "Argue for and against the use of kubernetes in the style of a haiku", 99.99% of people will not be able to do it.
The thing with humans is they will say "I don't remember how many syllables a haiku has" and "what the hell is kubernetes?" No LLM can reliably produce a haiku because its tokenization deprives it of reliable information about syllable counts. They should all say "I'm sorry, I can't count syllables, but I'll try my best anyway." But the current models don't do that, because they were trained on texts by humans, who can do haiku, and were not properly taught their own limits by reinforcement learning. It's Dunning-Kruger gone berserk.
Eh, it's not D&K gone berserk, it's what happens when you attempt to compress reality down to a single dimension (text). If you're doing a haiku, you will likely subvocalize it to ensure you're saying it correctly. It will be interesting when we get multimodal AI that can speak and listen to itself to detect things like this.
The problem isn’t just that everything is text. It’s that everything is a Fourier transform of text in such a way that it’s not actually possible for an LLM to learn to count syllables.
Imagine you have a lot more computing resources in a multimodal LLM. It sees your request to count syllables and realizes it can't do that from text alone (hell, I can't, and I have to vocalize it). It then sends your request to an audio module and 'says' the sentence, and a listening module that understands syllables 'hears' the sentence.
This is how it works in most humans. If you do this every day, you'll likely develop some kind of mental shortcut to reduce the effort needed, but at the end of the day there is no unsolvable problem on the AI side.
> Whereas, in biological brains, the weights are updated continuously.
My personal impression is that many "weights" are updated during sleep. For example, when practicing juggling, I will make no progress at all over hours of training. But later, after a night of sleep, I see large and instant progress.
If I learn something new in the morning, very often I still remember it in the afternoon, even though I haven't been to sleep yet.
Neuroscientists/psychologists/etc believe [0] humans have four tiers of memory: sensory memory (stores what you are experiencing right now, lasts for less than a second); working memory (lasts up to 30 seconds); intermediate-term memory (lasts 2-3 hours); long-term memory (anything from 30 minutes ago until the end of your life).
We don't need to sleep to form new long-term memories – if at dinner time you can still remember what you ate for breakfast (I usually can if I think about it), that's your long-term memory at work. What we need sleep for is pruning our long-term memory – each night the brain basically runs a compression process, deciding which long-term memories to keep and which to throw away (forget), and how much detail to keep for each memory.
Regarding your juggling example – most neuroscientists believe that the brain stores different types of memories differently. How to perform a task is a procedural memory, and new or improved motor skills such as juggling are a particular form of procedural memory. How the brain processes them is likely quite different from how it processes episodic memories (events of your life) or semantic memories (facts, general knowledge, etc). Sleep may play a somewhat different role for each different memory type, so what's true for learning juggling may not be true for learning facts.