landonxjames's comments | Hacker News

This is infuriating to me. I manually manage my music library and have for years. I buy the iPhone with the most storage so I can keep my entire library with me locally. This used to work great, but it has degraded over the last decade. Now when I drag new music to my phone in iTunes, nothing happens for minutes; then, if I get lucky, it finally starts transferring, but sometimes nothing happens at all and I have to retry.

Recently, when I load new music onto my phone, I find that random unrelated album art has been mangled or swapped with art from other artists' albums. And some music, which exists on my phone's storage, is now greyed out and, when clicked, says "This item is not currently available in your country or region." I am considering switching back to an iPod with an upgraded drive and giving up on keeping music on my phone entirely.


There was a fantastic soundtrack created for this audiobook in the 90s by the industrial group Black Rain. https://room40.bandcamp.com/album/neuromancer


Wow, that is absolutely phenomenal! This alone makes me want to listen to the audiobook, which would be my first. Perhaps a dumb question, but are ambient tracks and/or similar FX common in audiobooks? I'd always assumed it was simply a reading.


Many audiobooks have brief music interludes between major sections, but generally that's it, just narration.

Soundbooth Theater does exactly that, with music and sound effects:

https://soundbooththeater.com/


Woah, thank you so much for posting this. What an atmosphere.


I have used Jco quite a bit (and contributed a few times) to build simple utilities binding Rust code to JS [1][2]. I think it is great, and the Component Model is the most exciting step towards genuinely useful polyglot libraries I have seen in years. I wish it were better publicized, but I understand keeping things low-key until it is more fleshed out (the async and stream support coming in Preview 3 are the real missing pieces for my use cases).

[1] https://github.com/awslabs/aws-wasm-checksums [2] https://github.com/landonxjames/aws-sdk-wasi-example


Thanks for using and contributing to Jco!

> (the async and stream support coming in Preview 3 are the real missing pieces for my use cases).

Currently this is a huge focus for most of the people working on the Component Model, and Jco is one of the implementations that needs to be done before Preview 3 can ship, so we're hard at work on it.

> exciting step towards genuinely useful polyglot libraries I have seen in years

I certainly agree (I'm biased) -- I think it's going to be the kind of tech that is new for a little bit and then absolutely everywhere and mostly boring. I think the Docker arc is almost guaranteed to happen again, essentially.

The architectural underpinnings, the implementation, and the possibilities unlocked by this wave of Wasm are amazing -- truly awesome stuff, many years in the making, thanks to many dedicated contributors.


Repeatedly calling the lawsuit baseless makes OpenAI's point a lot weaker. They obviously don't like the suit, but I don't think you can credibly argue that there aren't tricky questions around the use of copyrighted materials in training data. Pretending otherwise is disingenuous.


They pay their lawyers and whoever made this page a lot for the express purpose of credibly arguing that it is very clearly totally legal and very cool to use any IP they want to train their models.

Could you argue with a straight face that the NYT newspaper could be a surrogate girlfriend for you the way a GPT can? They maintain that it is obviously a transformative use and therefore not an infringement of copyright. You and I may disagree with this assertion, but you can see how they could view the suit as baseless, ridiculous, and frivolous when their livelihoods depend on that being the case.


Not sure about Neural DSP or reverbs in general, but real-time neural-network-based DSP seems very possible. The open-source Neural Amp Modeler [1] would be a good place to start diving in.

[1] https://www.neuralampmodeler.com/the-code
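
For a flavor of how these capture-style modelers work, here's a toy sketch of my own -- not NAM's actual architecture (NAM's standard models are WaveNet-style, if I remember right). The idea: record a dry signal and the same performance through the gear you want to model, then fit a small causal network to map one to the other. Everything below is placeholder data, just enough to make the sketch runnable:

    # Toy sketch of the "capture" idea, NOT NAM's real model.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class TinyCapture(nn.Module):
        def __init__(self, channels=16, kernel=64):
            super().__init__()
            self.pad = kernel - 1  # left-pad only, so the model stays causal
            self.conv1 = nn.Conv1d(1, channels, kernel)
            self.conv2 = nn.Conv1d(channels, 1, 1)

        def forward(self, x):            # x: (batch, 1, samples)
            x = F.pad(x, (self.pad, 0))  # pad on the past side only
            return self.conv2(torch.tanh(self.conv1(x)))

    model = TinyCapture()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)

    # Stand-ins for real data: "dry" would be a clean DI recording and
    # "wet" the same performance re-amped through the target gear.
    dry = torch.randn(1, 1, 48000)
    wet = torch.tanh(3.0 * dry)  # fake "amp" so the demo runs end to end

    for step in range(200):
        opt.zero_grad()
        loss = F.mse_loss(model(dry), wet)
        loss.backward()
        opt.step()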


I have tried NAM but with limited success in modeling some time-based effects (e.g. octave shifting). However, I have not tried to model reverb effects.


To handle time-based effects you need a custom architecture.

https://www.research.ed.ac.uk/en/publications/neural-modelli...

Don’t use NAM. Learn PyTorch.


NAM uses PyTorch for its NN implementation?


It is 100% possible and there are a slew of tricks you can use to get big performance boosts with negligible cost to accuracy.


Do you know what the tricks are?


1. Don't use LSTMs (4 vector-matrix multiplies) or GRUs (3 multiplies). Use a fixed HiPPO matrix to update state. Just 1 multiply, and since it's fixed you can unroll during training, which is much faster than backprop through time.

2. Write SIMD intrinsics by hand. None of the libraries are as fast.

3. Don't use sigmoid or tanh functions as your nonlinear activation. Instead, approximate them with the softsign function, which is much cheaper.

Depending on the exact architecture, these optimizations have yielded 10-30x improvements for single-threaded CPU real-time audio applications.

When GPU audio matures all this may be unnecessary.
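
Roughly, in NumPy (a toy illustration of tricks 1 and 3 -- a random matrix stands in for the actual fixed HiPPO matrix, and the inner loop is exactly the part you'd hand-write with SIMD intrinsics per trick 2):

    import numpy as np

    N = 16                           # state dimension
    A = 0.1 * np.random.randn(N, N)  # placeholder for the fixed HiPPO matrix
    B = np.random.randn(N)           # input projection
    C = np.random.randn(N)           # output projection

    def softsign(x):
        # x / (1 + |x|): shaped like tanh, but no exp() calls (trick 3)
        return x / (1.0 + np.abs(x))

    def process(samples, state):
        out = np.empty_like(samples)
        for i, x in enumerate(samples):
            state = A @ state + B * x     # ONE matvec per sample (trick 1)
            out[i] = softsign(C @ state)  # cheap nonlinearity on the readout
        return out, state

    state = np.zeros(N)
    block, state = process(np.random.randn(256), state)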


I like rust-analyzer in VSCode, but I've found that it does seem to struggle with large projects that have multiple nested Cargo workspaces. IntelliJ with the Rust plugin has handled that (admittedly niche) case better so far. I still prefer VSCode, though, so I just open each workspace in an individual window and it works more or less as expected.
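
One thing I haven't tried yet but keep meaning to: rust-analyzer's linkedProjects setting is supposed to let a single window index several Cargo projects. No idea how well it copes with deeply nested workspaces, and the paths below are made up for illustration:

    // .vscode/settings.json (hypothetical paths)
    {
      "rust-analyzer.linkedProjects": [
        "service-a/Cargo.toml",
        "service-b/nested/Cargo.toml"
      ]
    }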


I believe 2014 is the model year of that Tesla; it seems that particular crash happened in 2018. https://incidentdatabase.ai/cite/320/



According to the final tweet in that thread [0], it seems they are pivoting to a sort of AI-generated-art social media platform. I'm curious why they think this has the potential to be a “mass market changing” business, since the social media world already seems pretty oversaturated.

[0] https://twitter.com/suhail/status/1591831193598963713


A cynical take: something doused in enough buzzwords to keep the VCs from asking for the half of the money that remains back.


He says in his 4th tweet in the thread: "I think there’s a large opportunity to make an experience using many advances in AI (not just diffusion) to make a new kind of Creative Suite. A different set of products with a brand new UX." He links to the prototype at the end of the thread - https://playgroundai.com/

The current site doesn't look very inspiring. Like you say, it's just another "AI-generated-art social media platform". But if his plan is to develop it into some sort of After Effects competitor, then ... maybe that's a market opportunity worth cracking?


The last thing he already knew how to build didn't work, so he's going to try something else he already knows how to build.

PMF? What's that?


There's definitely PMF for AI image tools...

...but that particular industry is moving so fast it's already homogenized (with the best and cheapest tools being made by the core AI developers themselves), so without extreme differentiation a new player can't compete.

The beta app in the Twitter thread has fewer features than current open-source AI image tooling.


It's not AI-generated "social media".

The social aspect was just added as an extra feature.

It's more like a Photoshop + Canva type of project.


I recently saw a Twitter thread from last year where someone made a comic book with AI-generated backgrounds. The characters were added in later, but it stuck with me as a very cool future use case.

https://twitter.com/ursulav/status/1467652391059214337

