brewmarche's comments | Hacker News

Still using it; it’s fine performance-wise, maybe needs another new battery in a year or so. Apple Pay, authenticators and messaging apps are working.

Was hoping for the new iPhone Fold (with Touch ID even) to be small, but it looks like it’s going to have a really weird ratio when folded.

Of course there are caveats:

- Spotify not getting app updates anymore (but still playing fine)
- some websites do not support the Safari version, e.g. GitHub
- most banking apps are not supported


I only knew about SECAM, where it’s even part of the name (Système Électronique Couleur Avec Mémoire)

You can decode a PAL signal without any memory; the memory is only needed to correct for phase errors. In SECAM, though, it's a hard requirement, because the two color-difference components, Db and Dr, are transmitted on alternating lines and you need both on each line (toy sketch below).

Yes, that is called "PAL-S". But the system was designed to use the delay-line method, and that has been employed since its inception (first broadcast in 1967).
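
A toy sketch of that SECAM one-line memory (made-up component labels and values; real SECAM FM-modulates the chroma subcarrier, which this ignores entirely): each incoming line carries only one of Db/Dr, and the delay supplies the other from the line above.

    # One-line delay: pair each line's transmitted component with the previous line's,
    # so both Db and Dr are available for every displayed line.
    def recover_chroma(tx_lines):
        recovered = []
        previous = None  # the one-line delay memory: (component_name, values)
        for name, values in tx_lines:
            if previous is not None:
                prev_values = previous[1]
                db = values if name == "Db" else prev_values
                dr = values if name == "Dr" else prev_values
                recovered.append({"Db": db, "Dr": dr})
            previous = (name, values)
        return recovered

    # Lines alternate between the two components:
    tx = [("Db", [0.1, 0.2]), ("Dr", [0.3, 0.4]), ("Db", [0.5, 0.6]), ("Dr", [0.7, 0.8])]
    print(recover_chroma(tx))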

Someone else wrote that it was chosen to best match PAL and NTSC. IIRC there is also a Technology Connections video about those early PCM adaptor devices that would record to VHS tape.

<https://en.wikipedia.org/w/index.php?title=44,100_Hz&oldid=1...>

Take it with a grain of salt, I’m not really knowledgeable about this.

E: also note the section about prime number squares below


48kHz and 44.1kHz devices appeared at roughly the same time. Sony's first 44.1kHz device shipped in 1979. Philips wanted to use 44.0kHz.

If you can do 44.1kHz on an NTSC recording device, you can do 44.0kHz too. Neither NTSC digital format uses all of the available space in the horizontal blanking intervals on an NTSC VHS device, so using less really isn't a problem.

Why is 44.0kHz better? There's a very easy way to do excellent sample rate conversions from 44.0kHz to 48kHz: you upsample the audio by 12 (by inserting 11 zeros between consecutive samples), apply a 22kHz low-pass filter, and then decimate by 11 (by keeping only every 11th sample). To go in the other direction, upsample by 11, filter, and decimate by 12. Plausibly implementable on 1979 tech, and trivially implementable on modern tech (rough sketch below).

To perform the same conversion from 44.1kHz to 48kHz, you would have to upsample by 160, filter at a sample rate of 160×44.1kHz, and then decimate by 147. Or upsample by 147, filter, and decimate by 160. Impossible with ancient tech, and challenging even on modern tech. (I would imagine modern solutions would use polyphase filters instead, with table sizes that would be impractical on 1979 VLSI.) Polyphase filter tables for 44.0kHz/48.0kHz conversion are massively smaller too.
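
A rough sketch of that upsample/filter/decimate recipe in numpy/scipy (the 1023-tap firwin filter is an arbitrary choice; scipy.signal.resample_poly(x, 12, 11) does the same job with a proper polyphase structure):

    import numpy as np
    from scipy import signal

    def rational_resample(x, up, down, fs_in):
        # Upsample: insert (up - 1) zeros between consecutive samples.
        upsampled = np.zeros(len(x) * up)
        upsampled[::up] = x
        fs_up = fs_in * up
        # Low-pass at the tighter of the input/output Nyquist limits
        # (22 kHz for the 44.0 kHz <-> 48 kHz case), compensating the
        # gain lost to zero-stuffing.
        cutoff = min(fs_in, fs_in * up / down) / 2
        taps = signal.firwin(1023, cutoff, fs=fs_up)
        filtered = signal.lfilter(taps, 1.0, upsampled) * up
        # Decimate: keep every `down`-th sample.
        return filtered[::down]

    # 44.0 kHz -> 48.0 kHz: up by 12, down by 11
    x = np.sin(2 * np.pi * 1000 * np.arange(4400) / 44000)  # 1 kHz tone at 44.0 kHz
    y = rational_resample(x, up=12, down=11, fs_in=44000)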

As for the prime factors... the factors of 7 (twice) in 44100 really aren't useful for anything. More useful would be factors of two (five times), which would increase the greatest common divisor with 48000 from 300 to 4,000!
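
The arithmetic behind that, with nothing but the Python standard library:

    from fractions import Fraction
    from math import gcd

    print(gcd(44100, 48000), Fraction(48000, 44100))  # 300   160/147
    print(gcd(44000, 48000), Fraction(48000, 44000))  # 4000  12/11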


> Now you could play it back wrong by emitting a sharp pulse f_s times per second with the indicated level. This will have a lot of frequency content above 20kHz and, in fact, above f_s/2. It will sound all kinds of nasty.

Wouldn’t the additional frequencies be inaudible with the original frequencies still present? Why would that sound nasty?


Because the rest of the system is not necessarily designed to tolerate high frequency content gracefully. Any nonlinearities can easily cause that high frequency junk to turn back into audible junk.

This is like the issues xiphmont talks about with trying to reproduce sound above 20kHz, but worse, as this would be (trying to) play back high energy signals that weren’t even present in the original recording.
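
A quick numerical illustration of the "nonlinearities turn ultrasonic junk audible" point (made-up numbers; the 0.1·x² term just stands in for whatever mild nonlinearity an amp or speaker has):

    import numpy as np

    fs = 192000                 # high rate so the ultrasonic tones are representable
    t = np.arange(fs) / fs      # one second
    # Two ultrasonic tones, inaudible on their own
    x = np.sin(2 * np.pi * 24000 * t) + np.sin(2 * np.pi * 25000 * t)
    # A mildly nonlinear "amplifier/speaker": the squared term creates intermodulation
    y = x + 0.1 * x**2

    spectrum = np.abs(np.fft.rfft(y)) / len(y)
    freqs = np.fft.rfftfreq(len(y), 1 / fs)
    band = (freqs > 500) & (freqs < 2000)
    # The 25 kHz - 24 kHz difference tone lands at a very audible 1 kHz
    print(freqs[band][np.argmax(spectrum[band])])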


That would mean that higher sampling rates (which can carry more inaudible high-frequency content) could cause similar problems. OK, xiphmont actually mentions that; sorry, I had only watched the video when I replied.


If I were designing a live audio workflow from scratch, my intuition would be to sample at a somewhat high frequency (at least 48kHz, maybe 96kHz), do the math on the actual latency / data-rate tradeoff (some rough numbers below), and also filter the data as needed to minimize high-frequency content (again, being careful with latency and fidelity tradeoffs).

But I have never done this and don't have any plans to do so, so I'll let other people worry about it. But maybe some day I'll carry out my evil plot to write an alternative to brutefir that gets good asymptotic complexity without adding latency. :)
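
For a sense of scale on that latency / data-rate math (arbitrary buffer size and bit depth, not a recommendation):

    # Back-of-envelope latency and raw data rate at two candidate sample rates.
    for fs in (48000, 96000):
        buffer_frames = 64
        latency_ms = 1000 * buffer_frames / fs
        data_rate_kbps = fs * 24 * 2 / 1000  # 24-bit stereo
        print(f"{fs} Hz: {latency_ms:.2f} ms per {buffer_frames}-frame buffer, {data_rate_kbps:.0f} kbit/s")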


This is a nice video. But I’m wondering: do we even need to get back the original signal from the samples? The zero-order hold output actually contains the same audible frequencies, doesn’t it? If we only want to listen to it, the stepped wave would be enough.
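
A small numpy check of that intuition (the oversampling factor just simulates the "analogue" staircase on a finer grid): the audible tone does survive, but images also appear around every multiple of the sample rate, attenuated by the hold's sinc response.

    import numpy as np

    fs = 44100
    oversample = 16                    # simulate the stepped output on a finer grid
    f0 = 1000                          # 1 kHz test tone
    samples = np.sin(2 * np.pi * f0 * np.arange(fs) / fs)  # one second of samples

    # Zero-order hold: repeat each sample -> the staircase waveform
    zoh = np.repeat(samples, oversample)
    spectrum = np.abs(np.fft.rfft(zoh)) / len(zoh)
    freqs = np.fft.rfftfreq(len(zoh), 1 / (fs * oversample))

    def level(f):
        return spectrum[np.argmin(np.abs(freqs - f))]

    print(level(f0))        # the original 1 kHz tone is still there
    print(level(fs - f0))   # image at 43.1 kHz (attenuated, but present)
    print(level(fs + f0))   # image at 45.1 kHz, and so on around every multiple of fs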


I think I had to disable spellcheck to fix the ignored keystrokes; it happened even after disabling formatting.


ahh, it might have been spellcheck then. I turned off all that stuff. In the heat of the moment, maybe I was a bit too angry to do proper root cause analysis :P


> git/GitHub/gitlab/codeberge

Is this about commit signing? Git and all of the mentioned forges support SSH keys for that afaik (for the forges, by uploading the public key in the settings).

git configuration:

    gpg.format = ssh
    user.signingkey = /path/to/key.pub

If you need local verification of commit signatures, you also need gpg.ssh.allowedSignersFile to list the known keys (including yours). ssh-add can remember credentials. Security keys are supported too.


I’m wondering: wouldn’t a default-deny inbound firewall still need hole punching with IPv6? You wouldn’t need STUN to find your global address, but if you use varying ports you’d need to communicate the port first, and you’d also need to time the simultaneous open. So a coordinating party is still needed somewhere. Getting rid of TURN relays (if you’re affected by symmetric NATs) is of course a huge plus.
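
Roughly what that simultaneous open looks like over UDP, as a hedged sketch (the peer address and port are placeholders you'd get from the coordinating party out of band):

    import socket

    LOCAL_PORT = 40000
    # Learned out of band from the coordinator; documentation-prefix placeholder
    peer_addr, peer_port = "2001:db8::1234", 40000

    sock = socket.socket(socket.AF_INET6, socket.SOCK_DGRAM)
    sock.bind(("::", LOCAL_PORT))
    sock.settimeout(2.0)

    # The outbound packet creates state on this side's stateful firewall; the peer
    # does the same at roughly the same time, so each side's reply matches that state.
    sock.sendto(b"punch", (peer_addr, peer_port))
    try:
        data, addr = sock.recvfrom(1500)
        print("hole punched, got", data, "from", addr)
    except socket.timeout:
        print("no reply yet; retry while the coordinator keeps both sides in sync")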


No, you'd have something like UPnP open a port on the firewall, I imagine. It depends on the setup, which can now be much more flexible, since the firewall can run on the machine itself. You also have the benefit that multiple machines can listen on the same port, so you don't need a proxy any more.


You should use unique local addresses (ULAs, fc00::/7), not link-local addresses (fe80::/10), for this. Choose a random prefix and advertise it in your network (you can use a website like https://www.unique-local-ipv6.com if you want).

This prevents clashing subnets when using a VPN, as sometimes happens with IPv4.
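
If you'd rather not rely on a website, the RFC 4193 scheme (fd00::/8 plus a random 40-bit global ID) is easy to do locally; a minimal sketch (the printed prefix is of course just an example):

    import secrets
    import ipaddress

    # fd00::/8 plus 40 random bits gives a /48 you can subnet further
    global_id = secrets.randbits(40)
    prefix = ipaddress.IPv6Network(((0xFD << 120) | (global_id << 80), 48))
    print(prefix)                                # e.g. fd3c:9a2e:71ff::/48
    print(next(prefix.subnets(new_prefix=64)))   # first /64 to advertise on the LAN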


There’s a difference between the European version of the Apple dongle and the ones sold in other regions: the European version maxes out at 0.5 Vrms instead of 1 Vrms.

