Hacker News: nopurpose's comments

> They don’t understand it and think it will replace them so they are afraid.

I don't have evidence, but I am certain that AI has already replaced most logo and simple landing-page designers. AI in Figma is surprisingly good.


I doubt it; you'll still need humans to come up with novel ideas and designs, because things will get stale after a while and trends/styles will continue to evolve.


Exactly. People are getting very good at detecting AI-generated designs, because everyone can play around with the tools themselves and see the ways the output always tends to look alike.

To make an impression, it will become even more important to go with a real designer who can work in creative ways to regain people's attention.

But I have little doubt that a lot of the bread-and-butter, not-too-important, I-just-need-to-have-something jobs will no longer be contracted to actual designers.


Just bought a digital piano as a New Year's present to myself. So far I am playing single-note-at-a-time melodies from my child's practice book and enjoying my slow progress, but I am really struggling. I've spent three evenings on the simplest song and still can't play it end to end reliably.

I have to label every note with a letter in the music book, because the only other way for me to "read" sheet music is to count lines for each note, which is unbearably slow.

I wonder if there is any modern way (AI, Bluetooth MIDI app, etc.) to get over the initial hurdles more easily?
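As an aside, the letter-labeling step is mechanical enough to script. A rough Python sketch, assuming a Bluetooth MIDI keyboard (or app) that delivers standard MIDI note numbers, mapping those numbers to the letter names one would pencil into the book:

```python
# Map a MIDI note number to its letter name and octave.
# Middle C (MIDI 60) is C4 in the common convention.
NOTE_NAMES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

def note_name(midi_number: int) -> str:
    octave = midi_number // 12 - 1
    return f"{NOTE_NAMES[midi_number % 12]}{octave}"

print(note_name(60))  # C4
print(note_name(69))  # A4 (concert A, 440 Hz)
```

Any app that listens on a MIDI port could use a table like this to show the letter of each key as it is pressed.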


That's a good question. While I think my app might help with your ability to pick out notes, it's not going to assist with things like proper pianistic ergonomics, fingering, etc.

I’ve heard some people say that the Piano Adventures Player app is a nice little tool because it serves as a supplement to the books themselves.

https://pianoadventures.com/resources/piano-adventures-playe...

Good luck on your journey!


Funnily, I felt the opposite when I learnt to read music a few years back - that even though I was extremely slow, I could now (theoretically) read and learn to play any music, just slowly! You'll get faster and better with practice, as with everything. I'm still slow. Just not as slow. But crucially I'm fast enough that it's not the bottleneck anymore, getting the notes under my fingers is.

Keep at it!


Former band kid who also just got a digital keyboard. IME, learning to read the staff just came from putting in time on the instrument, but I'm also looking for ways to speed that up. I had the idea of making flashcards and even putting them into an SRS like Anki to see if I can make the process of (re-)learning the staff faster and make it stick. If you come across anything that would help, I'm interested too!
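The flashcard idea is easy to bootstrap, since Anki can import plain tab-separated front/back pairs. A minimal Python sketch that generates a deck for the treble-clef lines and spaces (the prompts and bottom-up labeling here are just one hypothetical way to phrase the cards):

```python
# Generate a tab-separated deck of staff-reading flashcards that Anki can
# import (File > Import, one card per line: front<TAB>back).
# Front: staff position description; back: the note letter.
import csv
import io

TREBLE_LINES = ["E", "G", "B", "D", "F"]   # "Every Good Boy Does Fine", bottom-up
TREBLE_SPACES = ["F", "A", "C", "E"]       # the spaces spell "FACE", bottom-up

def build_deck() -> str:
    buf = io.StringIO()
    writer = csv.writer(buf, delimiter="\t")
    for i, note in enumerate(TREBLE_LINES, start=1):
        writer.writerow([f"Treble clef, line {i} (from bottom)", note])
    for i, note in enumerate(TREBLE_SPACES, start=1):
        writer.writerow([f"Treble clef, space {i} (from bottom)", note])
    return buf.getvalue()

print(build_deck())
```

Extending it with the bass clef, ledger lines, or images of actual notes would follow the same pattern.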


I've found a colorful note guide (such as: https://www.amazon.com/dp/B0BC8NVW4Q) to be very helpful. Without it, I felt completely lost when looking at sheet music.


Keep at it! It takes a few years, but so long as you practice new things consistently and every day you'll keep getting better and better.


I mean it took me like eight years as a kid to get good at this. It's just a slog. All learning is exponential.


> As long as the vehicle has a known starting point, quantum accelerometers can continue tracking its movement accurately, regardless of what’s happening above the atmosphere.

So this is a much more precise replacement for an inertial guidance system? If true, I'd expect the UK MoD to be involved to the point of making the technology a military secret, but clearly that didn't happen.


Given that there is no dev mode or SSH server running on a console, how do they even read low-level binary code such as the bootloader? Do they transplant memory chips?


In this case, by using fault injection to glitch the chip into a test mode that bypasses secure boot and loads code from SPI, combined with a SPI emulator (and I2C to send the boot vectors).

https://m.youtube.com/watch?v=cVJZYT8kYsI


Chip-off is a common way to retrieve the ROM of embedded devices. It often requires multiple chip-off reads and a reconstruction of the striped data across the chips.


Can we have another Battlezone, please?


At first I wanted to comment that 20K users/day requires manufacturing the same number of terminals, which seemed impressive to me. But then I decided to compare with the iPhone and was blown away by the volume: 235M iPhones a year is 643K units manufactured every day!


It's very likely that the terminal count doesn't equal the user count. Terminals on airliners, ships, and other shared use cases serve far more users than the number of terminals would suggest. The other side of the coin is that they're not adding 20,000 individual retail accounts each day.


AFAIK Starlink has the largest PCB manufacturing facility in the US.


Cables on overhead high-voltage lines are mounted using stacks of ceramic insulators, but here they are seemingly just sleeved in some protection and hung on a tunnel wall. Why is that?


Overhead conductors use air as the insulator. Underground cables use an insulating jacket. In the past it was really difficult to build cables rated in the tens of thousands of volts without additional complexity, like dielectric oil being pumped through the cable. I think modern dielectrics are significantly better, though.


Modern cables with XLPE insulation can handle very high voltages without active oil cooling, here’s a 345/400kV rated underground cable assembly rated for 90C: https://assets.southwire.com/adaptivemedia/rendition?id=332f...


Yeah, the wires in the new London tunnels are XLPE. Despite first being used in the late '60s, it took a long time to become common. Much of the surrounding infrastructure is still very old, though.


Cost, mainly

The cost of oil-insulated cables that can do 132kV is about £900 a metre. While there are HV cables on the outskirts of London, they are much rarer in zones 1-3.

I assume that pylons with bare cables are much, much cheaper. The problem is planning permission and physical clearance, and no one wants to live near HV cables (that they know of; there are a bunch of 33kV cables buried outside posh people's houses in zone 5, and a bunch in canals).


Overhead high voltage conductors are not insulated with a coating, probably for many reasons but certainly for cost and heat dissipation.

That means the path through the air to any conducting material needs a certain clearance distance, even when wet or iced over or whatever else can happen up there.


Overhead lines need big ceramic stacks because the air is the insulation. In tunnels, the insulation is in the cable itself, and the tunnel just provides structure, cooling, and controlled geometry.


The tunnel is too small to use air as an insulator so they use cable assemblies with multiple layers of insulation.

Here are some high-voltage cable spec sheets that show a cross-section of the assembly for voltages above 69kV: https://www.southwire.com/wire-cable/high-voltage-undergroun...


> my quarterly feedback was that I don’t ask too many questions

Sounds like your manager felt he needed to provide at least some feedback, and that was the best/safest thing he could come up with.


My immediate question: if all of that was on-disk data duplication, why did it affect the download size? Couldn't a small download be expanded into the optimal layout on the client side?


It didn't. They downloaded 43 GB instead of 152 GB, according to SteamDB: https://steamdb.info/app/553850/depots/ Now it is 20 GB => 21 GB. Steam is pretty good at deduplicating data in transit from their servers. They are not idiots who would let developers/publishers eat their downstream connection with duplicated data.

https://partner.steamgames.com/doc/sdk/uploading#AppStructur...


Furthermore, this raises the possibility of a "de-debloater" that HDD users could run, which would duplicate the data into its loading-optimized form, if they decided they wanted to spend the space on it. (And a "de-de-debloater" to recover the space when they're not actively playing the game...)

The whole industry could benefit from this.


> to recover the space when they're not actively playing the game

This would defeat the purpose. The goal of the duplication is to place related data physically close together on the disk. Hard links, removing then replacing, etc., wouldn't preserve the physical placement of the data, meaning the terribly slow read head has to physically sweep around more.

I think the sane approach would be to have an HDD/SSD switch for the file lookups, with all the references pointing to the same file for SSD.
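The SSD half of that switch is essentially hard links: every duplicated path still resolves, but the bytes exist once on disk. A minimal Python sketch (the `.pak` file names are made up for illustration):

```python
# Sketch of the SSD-mode idea: replace duplicated asset copies with hard
# links so every expected path still resolves, but storage is shared.
import os
import tempfile

root = tempfile.mkdtemp()
src = os.path.join(root, "textures_level1.pak")
dup = os.path.join(root, "textures_level2.pak")

with open(src, "wb") as f:
    f.write(b"\x00" * 1024 * 1024)  # stand-in for a shared asset blob

os.link(src, dup)  # second directory entry, same inode, no extra space used

assert os.path.samefile(src, dup)
print(os.stat(src).st_nlink)  # link count is now 2
```

On an HDD the engine would instead want real copies laid out near the data that is read alongside them, which is exactly why a naive link-based "de-bloater" defeats the optimization.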


So you'd have to defrag after re-bloating, to make all the files contiguous again. That tool already exists, and the re-bloater could just call it.


Sure, but defrag is a very slow process, especially if you're re-bloating (since it requires shifting things to make space), and definitely not something that could happen in the background, as the player is playing. Re-bloating definitely wouldn't be good for a quick "Ok, I'm ready to play!".


I imagine it'd be equivalent to a download task, just one that doesn't consume bandwidth.


Depending on how the data duplication is actually done (e.g. with texture atlasing, the actual bits can be very different after image compression), rote bit-level deduplication can be much harder. They could potentially ship the code to generate all of it locally, but then they would have to deal with a lot of extra rights/contracts (proprietary codecs/tooling is super, super common in gamedev).

It's also largely because devs/publishers honestly just don't think about it; they've been doing it as long as optical media has been prevalent (early/mid '90s). Only in the last few years have devs actually taken a look and realized it doesn't make as much sense as it used to, especially if, as in this case, the majority of the time is spent on runtime generation, or if the minimum spec already requires a 2080: what's the point of optimizing for one low-end component if most people running it are on high-end systems?

Hitman recently (4 years ago) did a similar massive file shrink and mentioned many of the same things.


Sure it can; it would need either special pre- and postprocessing or lrzip ("long range zip") to do it automatically. lrzip deserves to be better known; it often finds significant redundancy in huge archives like VM images.


That particular case can be solved much more easily by rebasing the outermost branch with the `--update-refs` flag.
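To make this concrete, here is a throwaway-repo demo (driven from Python purely for self-containedness; the branch names are made up) showing that rebasing the outermost branch of a stack with `--update-refs` (git >= 2.38) also repoints the branches underneath it:

```python
# Demo: `git rebase --update-refs` moves inner stacked branches too.
import pathlib
import subprocess
import tempfile

repo = pathlib.Path(tempfile.mkdtemp())

def git(*args):
    subprocess.run(["git", *args], cwd=repo, check=True, capture_output=True)

def commit(name):
    (repo / name).write_text(name)
    git("add", name)
    git("commit", "-qm", name)

git("init", "-q", "-b", "main")
git("config", "user.email", "demo@example.com")
git("config", "user.name", "demo")
commit("base")
git("checkout", "-qb", "feature-a"); commit("a")   # stacked on main
git("checkout", "-qb", "feature-b"); commit("b")   # stacked on feature-a
git("checkout", "-q", "main"); commit("hotfix")    # main moves ahead
git("checkout", "-q", "feature-b")

# One rebase of the outermost branch; feature-a is repointed automatically.
git("rebase", "-q", "--update-refs", "main")
git("merge-base", "--is-ancestor", "main", "feature-a")  # raises if it isn't
print("feature-a now sits on top of main")
```

Without `--update-refs`, feature-a would still point at the old pre-rebase commits and need its own rebase.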


I came into the comments specifically to ask if this flag existed. I feel bad that the author developed this whole flow just because they didn't know about this, but that's pretty common with git.


I'm pretty sure the author was Claude, so don't feel too bad for it.


Thanks. This is going to be so useful, but it pains me to know I could have been using --update-refs for the last three years.

I used to dutifully read release notes for every git release, but stopped at some point. Apparently that point was more than three years ago.


Discoverability is a big problem, especially for CLI tools, which can't afford to show small hints or "what's new" popups. I myself learned about it from someone else, not the docs.


I plan to pay it forward today with a post on my work slack. I just need to try it a time or two myself first.


Except they do. You can type <tab>, search the man page, or read the release notes. They just don't force it on the user.


Exactly, I was reading the blog and wondering the whole time how it's better than --update-refs, which I have been using a lot recently.


Yep. I set this in .gitconfig
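For reference, the setting presumably meant here is `rebase.updateRefs` (git >= 2.38), which makes every rebase behave as if `--update-refs` were passed:

```
[rebase]
	updateRefs = true
```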


I'm guilty, lol. I wrote a helper to do rebase chains like this.


update-refs only works in the narrow case where every branch starts from the tip of the previous one. Your helper might still be useful if it properly "replants" the whole tree, keeping its structure.


No, as far as I can tell, it's basically just doing update-refs. But in my defense, I just discovered, while looking for the option, that my git man pages are from an old version before it was introduced.


Though at that point it may be easier to rewrite your helper to manage rebase's interactive scripts.

