> They don’t understand it and think it will replace them so they are afraid.
I don't have evidence, but I am certain that AI has already replaced most logo and simple landing-page designers. AI in Figma is surprisingly good.
I doubt it, you’ll still need humans to create novel ideas and designs because things will get stale after a while and trends/styles will continue to evolve.
Exactly. People are getting very good at detecting AI-generated designs -- because everyone can play around with it themselves and see in what ways they always tend to look alike.
To make an impression, it will become even more important to go with a real designer who can work in creative ways to regain people's attention.
But I have little doubt that a lot of the bread-and-butter, not-too-important, I-just-need-to-have-something jobs will no longer be contracted to actual designers.
Just bought a digital piano as a New Year's present to myself. So far I am playing one-note-at-a-time melodies from my child's practice book and enjoying my slow progress, but I am really struggling. I've spent three evenings on the simplest song and still can't play it end to end reliably.
I have to label every note with a letter in the music book, because the only other way for me to "read" sheet music is to count the lines for each note, which is unbearably slow.
I wonder if there is any modern way (AI, a Bluetooth MIDI app, etc.) to get over the initial hurdles more easily?
That's a good question. While I think my app might help with your ability to pick out notes, it's not going to assist with things like proper pianistic ergonomics, fingering, etc.
I’ve heard some people say that the Piano Adventures Player app is a nice little tool because it serves as a supplement to the books themselves.
Funnily, I felt the opposite when I learnt to read music a few years back - that even though I was extremely slow, I could now (theoretically) read and learn to play any music, just slowly! You'll get faster and better with practice, as with everything. I'm still slow. Just not as slow. But crucially I'm fast enough that it's not the bottleneck anymore, getting the notes under my fingers is.
Former band kid who also just got a digital keyboard. Ime learning to read the staff just came from putting in the time on the instrument, but I’m also looking for ways to speed that up. I had the idea of making flashcards and even putting it into an SRS like Anki to see if I can make the process of (re-)learning the staff faster and make it stick. If you come across anything that would help I’m interested too!
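In case it helps, here's a rough sketch of that flashcard idea in plain Python (no Anki-specific libraries; the file name and card wording are placeholders I made up) that writes a CSV you can import into Anki as basic front/back cards:

    import csv

    # Staff positions -> note letters (lines bottom-to-top: treble E G B D F,
    # bass G B D F A; spaces bottom-to-top: treble F A C E, bass A C E G).
    TREBLE = {"line 1 (bottom)": "E", "space 1": "F", "line 2": "G",
              "space 2": "A", "line 3": "B", "space 3": "C",
              "line 4": "D", "space 4": "E", "line 5 (top)": "F"}
    BASS = {"line 1 (bottom)": "G", "space 1": "A", "line 2": "B",
            "space 2": "C", "line 3": "D", "space 3": "E",
            "line 4": "F", "space 4": "G", "line 5 (top)": "A"}

    with open("staff_cards.csv", "w", newline="") as f:
        writer = csv.writer(f)
        for clef, mapping in (("Treble", TREBLE), ("Bass", BASS)):
            for position, note in mapping.items():
                writer.writerow([f"{clef} clef, {position}", note])  # front, back

Image-based fronts (a note actually drawn on a staff) would probably train recognition better than verbal descriptions, but this is the zero-dependency version.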
I've found a colorful note guide (such as: https://www.amazon.com/dp/B0BC8NVW4Q) to be very helpful. Without it, I felt completely lost when looking at sheet music.
> As long as the vehicle has a known starting point, quantum accelerometers can continue tracking its movement accurately, regardless of what’s happening above the atmosphere.
So this is a much more precise replacement for an inertial guidance system? If true, I'd expect the UK MoD to be involved to the point of making the technology a military secret, but clearly that didn't happen.
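For anyone wondering why accelerometer precision matters so much for inertial guidance, here's a toy 1-D dead-reckoning sketch (generic Python, nothing specific to the quantum devices in the article): starting from a known position, you integrate acceleration twice, so even a tiny constant sensor bias grows into a position error that scales with the square of elapsed time.

    # Toy 1-D dead reckoning: integrate acceleration twice from a known start.
    def dead_reckon(accels, dt, x0=0.0, v0=0.0, bias=0.0):
        x, v = x0, v0
        for a in accels:
            v += (a + bias) * dt   # acceleration -> velocity
            x += v * dt            # velocity -> position
        return x

    true_x = dead_reckon([0.1] * 600, dt=1.0)              # 10 minutes at 0.1 m/s^2
    drifted = dead_reckon([0.1] * 600, dt=1.0, bias=1e-4)
    print(drifted - true_x)  # ~18 m of error from a 0.0001 m/s^2 bias

A more precise accelerometer directly shrinks that drift, which is exactly the pitch for navigation without GPS.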
Given that there is no dev mode or SSH server running on a console, how do they even read low-level binary code such as the boot loader? Do they transplant the memory chips?
In this case, by using fault injection to induce a glitch into a test mode which bypasses secure boot and loads code from SPI, combined with a SPI emulator (and I2C to send the boot vectors).
Chip-off is a common way to retrieve the ROM of embedded devices. It often requires multiple chip-off reads and a reconstruction of the striped data across the chips.
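For a rough idea of what that reconstruction step looks like, here's a hypothetical Python sketch that re-interleaves a dump striped round-robin across multiple flash chips; the real stripe size, chip order, and any ECC/scrambling layer depend entirely on the controller, so treat it as an illustration only.

    # Reassemble a ROM striped round-robin across multiple chip-off dumps.
    def destripe(dump_paths, stripe_size=0x800):   # stripe size is a guess
        dumps = [open(p, "rb").read() for p in dump_paths]
        out = bytearray()
        for offset in range(0, len(dumps[0]), stripe_size):
            for chip in dumps:                     # chip0, chip1, chip0, ...
                out += chip[offset:offset + stripe_size]
        return bytes(out)

    # rom = destripe(["chip0.bin", "chip1.bin"])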
At first I wanted to comment that 20k users/day would require manufacturing the same number of terminals, which seemed impressive to me. But then I decided to compare with the iPhone and was blown away by the volume: 235M iPhones a year, or about 643K units manufactured every day!
It's very likely that the number of terminals doesn't equal the number of users. Terminals on airliners, ships, and other shared-use cases serve far more users than the terminal count would indicate. The other side of the coin is that they're not adding 20,000 individual retail accounts each day.
Cables on overhead high-voltage lines are mounted on stacks of ceramic insulators, but here they are seemingly just sleeved in some protection and hung on a tunnel wall. Why is that?
Overhead conductors use air as the insulator. Underground cables use an insulating jacket. In the past it was really difficult to build cables with voltage ranges in the 10s of thousands of volts without additional complexity like a dielectric oil being pumped through the cable. I think modern dielectrics are significantly better though.
Yeah, the wires in the new London tunnels are XLPE. Despite being first used in the late '60s, it took a long time to become commonly used. Much of the surrounding infrastructure is still very old, though.
The cost of oil-insulated cables that can do 132 kV is about £900 a metre. Whilst there are HV cables on the outskirts of London, they are much rarer in zones 1-3.
I assume that pylons with bare cables are much, much cheaper. The problem is planning permission and physical clearance: no one wants to live near HV cables (that they know of, anyway. There are a bunch of 33 kV cables buried outside posh people's houses in zone 5, and a bunch in canals.)
Overhead high voltage conductors are not insulated with a coating, probably for many reasons but certainly for cost and heat dissipation.
That means the path through the air to any conducting material has to maintain a certain distance, even when the hardware is wet or iced over or whatever else can happen up there.
Overhead lines need big ceramic stacks because the air is the insulation. In tunnels, the insulation is in the cable itself, and the tunnel just provides structure, cooling, and controlled geometry.
My immediate question: if all of that was on-disk data duplication, why did it affect the download size? Can't a small download be expanded into the optimal layout on the client side?
It didn't. They downloaded 43 GB instead of 152 GB, according to SteamDB: https://steamdb.info/app/553850/depots/ Now it is 20 GB => 21 GB. Steam is pretty good at deduplicating data in transit from their servers. They are not idiots who would let developers/publishers eat their downstream connection with duplicated data.
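The generic mechanism looks roughly like this (a sketch with made-up chunk size and hash choice, not a description of Steam's actual depot format): the server publishes a manifest of chunk hashes, the client only fetches hashes it doesn't already have, and bytes duplicated inside the install collapse to a single chunk on the wire.

    import hashlib

    CHUNK = 1 << 20  # 1 MiB chunks, purely illustrative

    def chunk_hashes(path):
        """Yield the hash of each fixed-size chunk of a file."""
        with open(path, "rb") as f:
            while (block := f.read(CHUNK)):
                yield hashlib.sha256(block).hexdigest()

    def plan_download(manifest_hashes, local_hashes):
        """Return only the chunks the client still needs; duplicated data
        maps to the same hash and is transferred once."""
        have = set(local_hashes)
        return [h for h in manifest_hashes if h not in have]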
Furthermore, this raises the possibility of a "de-debloater" that HDD users could run, which would duplicate the data into its loading-optimized form, if they decided they wanted to spend the space on it. (And a "de-de-debloater" to recover the space when they're not actively playing the game...)
> to recover the space when they're not actively playing the game
This would defeat the purpose. The goal of the duplication is to place the related data physically close together on the disk. Hard links, removing then replacing, etc., wouldn't preserve the physical layout of the data, meaning the terribly slow read head has to physically sweep around more.
I think the sane approach would be to have an HDD/SSD switch for the file lookups, with all the references pointing to the same file in the SSD case.
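Something like this hypothetical lookup table (all names invented) is what I mean; the game resolves assets through it, and only the HDD layout actually duplicates the bytes on disk:

    # Hypothetical asset table: SSD installs resolve every reference to one
    # shared copy, HDD installs keep a per-level duplicate placed next to the
    # rest of that level's data.
    ASSET_TABLE = {
        "rock_diffuse.tex": {
            "ssd": "shared/rock_diffuse.tex",
            "hdd": {"level01": "level01/rock_diffuse.tex",
                    "level02": "level02/rock_diffuse.tex"},
        },
    }

    def resolve(asset, level, storage):
        entry = ASSET_TABLE[asset]
        return entry["ssd"] if storage == "ssd" else entry["hdd"][level]

    print(resolve("rock_diffuse.tex", "level02", "hdd"))  # level02/rock_diffuse.tex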
Sure, but defrag is a very slow process, especially if you're re-bloating (since it requires shifting things to make space), and definitely not something that could happen in the background, as the player is playing. Re-bloating definitely wouldn't be good for a quick "Ok, I'm ready to play!".
Depending on how the data duplication is actually done (with texture atlasing, for example, the actual bits can be very different after image compression), it can be much harder to do rote bit-level deduplication. They could potentially ship the code to generate all of that locally, but then they have to deal with a lot of extra rights/contracts to do so (proprietary codecs/tooling is super, super common in gamedev).
It's also largely because devs/publishers honestly just don't think about it; they've been doing it for as long as optical media has been prevalent (early/mid '90s). Only in the last few years have devs actually taken a look and realized it doesn't make as much sense as it used to, especially if, as in this case, the majority of the time is spent on runtime generation, or if they require a 2080 as the minimum spec: what's the point of optimizing for one low-end component if most people running the game are on high-end systems?
Hitman recently (4 years ago) did a similar massive file shrink and mentioned many of the same things.
Sure it can - it would need either special pre- and post-processing, or lrzip ("long range zip") to do it automatically. lrzip should be better known; it often finds significant redundancy in huge archives like VM images.
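As a toy illustration of what a long-range pre-pass buys you (this is not lrzip's actual rolling-hash algorithm), scanning an archive for large blocks that repeat far apart can be as simple as:

    import hashlib
    from collections import defaultdict

    def long_range_duplicates(path, block=1 << 20):
        """Hash fixed 1 MiB blocks and report any that occur more than once,
        however far apart. Small-window compressors miss these; this toy
        version only catches block-aligned repeats."""
        seen = defaultdict(list)
        with open(path, "rb") as f:
            offset = 0
            while (data := f.read(block)):
                seen[hashlib.sha256(data).hexdigest()].append(offset)
                offset += len(data)
        return {h: offs for h, offs in seen.items() if len(offs) > 1}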
I came into the comments specifically to ask if this flag existed. I feel bad that the author developed this whole flow just because they didn't know about this, but that's pretty common with git.
Discoverability is a big problem, especially for CLI tools, which can't afford to show small hints or "what's new" popups. I myself learned it from someone else, not from the docs.
update-refs only works in the narrow case where every branch starts from the tip of the previous one. Your helper might still be useful if it properly "replants" the whole tree while keeping its structure.
No, as far as I can tell, it's basically just doing update-refs. But in my defense, I only found out while looking for the option that, for some reason, my git manpages are from an old version that predates it.