From https://www.reddit.com/r/rust/comments/1pszdao/media_eilmeld...: eilmeldung is based on the awesome newsflash library and supports many RSS providers. It has vim-like key bindings, is configurable, and comes with a powerful query language and bulk operations.
This project is not AI (vibe-)coded! And it is sad that I even have to say this.
Still, as a full disclosure, with this project I wanted to find out if and how LLMs can be used to learn a new programming language, Rust in this case. Each line of code was written by myself; it contains all my beginner mistakes, warts and all. More on this at the bottom of the GitHub page.
There are several nice references within the article. I think gvnn: Neural Network Library for Geometric Computer Vision (https://arxiv.org/abs/1607.07405) should also be mentioned.
A lot has happened since then. References up to spring 2014 can be found, e.g., in our own work on super-resolving arbitrarily sized images with convolutional encoder/decoders: Super-Resolution with Fast Approximate Convolutional Sparse Coding, http://brml.org/uploads/tx_sibibtex/281.pdf (which still leaves plenty of room to be extended, e.g. to color, and improved upon).
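To make the "convolutional encoder/decoder" idea concrete, here is a minimal sketch of such a network for super-resolution. This is not the paper's architecture; the layer sizes, the x2 upscale factor, and the single (grayscale) channel are illustrative assumptions. Because it is fully convolutional, it accepts arbitrarily sized inputs, which is the property the comment above refers to.

```python
# Minimal convolutional encoder/decoder sketch for super-resolution.
# Illustrative only: layer widths, kernel sizes, and the x2 upscale
# factor are assumptions, not the linked paper's actual design.
import torch
import torch.nn as nn

class ConvEncDec(nn.Module):
    def __init__(self, channels=1, features=64, upscale=2):
        super().__init__()
        # Encoder: extract feature maps from the low-resolution input.
        self.encoder = nn.Sequential(
            nn.Conv2d(channels, features, kernel_size=5, padding=2),
            nn.ReLU(inplace=True),
            nn.Conv2d(features, features, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
        )
        # Decoder: upsample with a transposed convolution, then
        # reconstruct the high-resolution image.
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(features, features, kernel_size=4,
                               stride=upscale, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(features, channels, kernel_size=3, padding=1),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

# Fully convolutional, so any input size works:
model = ConvEncDec()
lowres = torch.randn(1, 1, 48, 48)   # a 48x48 grayscale patch
highres = model(lowres)              # -> shape (1, 1, 96, 96)
print(highres.shape)
```

Extending it to color would amount to setting channels=3, which is one of the directions the comment mentions.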
Our natural gait, it turns out, defines us as humans. Not in the broad sense (that we're the only truly bipedal mammal on Earth, blah blah blah) but as individuals. Researchers are increasingly convinced that how we walk can identify us as unique individuals, much like a fingerprint or retina scan.
This sort of research has been underway since the late 1990s, picking up more urgency after the 2001 terrorist attacks on New York and Washington and the London transit bombings in 2005. DARPA and Homeland Security, among others, are keenly interested in video analysis programs that can separate out and analyze an individual's gait, then use it like a fingerprint. DARPA has been sponsoring "human identification at a distance" studies since 2000, which often combine gait analysis with facial and gesture analysis. It's a hot field right now, and it has been heralded as a less invasive approach than retina scans, blood tests, or fingerprinting. One study I read on the algorithms of walking put the appeal simply: "Advantages include the fact that it does not require subject cooperation or body contact, and the sensor may be located remotely."
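To give a flavor of how "gait as a fingerprint" can work computationally, here is a toy sketch: average a person's binary silhouette frames over a gait cycle into a single template (in the spirit of gait-energy-image approaches), then identify a new sequence by nearest-neighbor matching against enrolled templates. This is a hypothetical illustration of the general idea, not any specific DARPA-funded system; the silhouette extraction from video is assumed to have already happened, and the names and data below are made up.

```python
# Toy sketch of gait-based identification from silhouette sequences.
# Assumptions: binary silhouettes (T, H, W) are already extracted from
# video; matching is plain Euclidean nearest neighbor. Illustrative only.
import numpy as np

def gait_template(silhouettes):
    """Average a stack of binary silhouette frames (T, H, W) over the
    gait cycle and return a normalized template vector."""
    template = np.mean(silhouettes, axis=0)              # pixel-wise average
    template = template / (np.linalg.norm(template) + 1e-8)
    return template.ravel()

def identify(probe, gallery):
    """Return the enrolled identity whose template is closest
    (in Euclidean distance) to the probe template."""
    return min(gallery, key=lambda name: np.linalg.norm(probe - gallery[name]))

# Hypothetical enrollment data: one silhouette stack per subject.
rng = np.random.default_rng(0)
gallery = {
    "subject_a": gait_template(rng.random((30, 64, 44)) > 0.5),
    "subject_b": gait_template(rng.random((30, 64, 44)) > 0.5),
}
probe = gait_template(rng.random((30, 64, 44)) > 0.5)
print(identify(probe, gallery))
```

Note how this matches the quoted appeal: nothing here requires the subject's cooperation or body contact, only a camera far enough away to capture a walking silhouette.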