Hacker News | patrickkidger's comments

Quick heads-up that these days I recommend https://github.com/patrick-kidger/jaxtyping over the older repository you've linked there.

I learnt a lot the first time around, so the newer one is much better :)


Ah, I would have never thought jaxtyping supports torch :)


I think so!

I've not tried a couple of the things you mention (e.g. background images), but for dynamic placement there are libraries like https://typst.app/universe/package/meander

The core of Typst is a pretty featureful, well-thought-out functional language. This makes writing libraries for any piece of functionality very pleasant.


This is how static type checkers are told that an imported object is part of the public API for that file. (In addition to anything else present in that file.)

Cf. "the intention here is that only names imported using the form X as X will be exported" from PEP 484. [1]

I'm generally a fan of the style of putting all the implementation in private modules (whose names start with an underscore) and then using __init__.py files solely to declare the public API.

[1] https://peps.python.org/pep-0484/
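To make the pattern concrete, here's a runnable sketch (the package name `mypkg` and the class `Solver` are hypothetical) that builds a tiny throwaway package on disk, with the implementation in a private `_core.py` module and an `__init__.py` that only declares the public API via the `X as X` re-export form:

```python
import sys, tempfile, textwrap
from pathlib import Path

# Build a tiny throwaway package to demonstrate the pattern
# (hypothetical names: mypkg, Solver).
root = Path(tempfile.mkdtemp())
pkg = root / "mypkg"
pkg.mkdir()
(pkg / "_core.py").write_text(textwrap.dedent("""
    class Solver:
        def solve(self):
            return 42
"""))
# __init__.py only declares the public API; the `X as X` form marks
# the re-export as intentional for static type checkers (PEP 484).
(pkg / "__init__.py").write_text("from mypkg._core import Solver as Solver\n")

sys.path.insert(0, str(root))
import mypkg

print(mypkg.Solver().solve())  # 42
```

In real code you wouldn't generate the files at runtime, of course; the point is just the layout: `_core.py` holds the implementation, `__init__.py` holds nothing but re-exports.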


That looks like it's only for stub files, not __init__.py


It also applies to any .py file. (At least in practice with e.g. pyright)

That said, the documentation on this matter is close to nonexistent.


Oh neat! This is my library. Happy to answer any questions.

(Though it's really a pretty tiny library that just does what it says on the tin, not sure how many questions there can be. :D )


I have a question. Why do you prefix your package files with an underscore?

In fact, you write all of your python like you really have something to hide ;) Like `_Todo`.

Where did you get this pattern?

(I’m way more curious than accusatory. Are people embracing private modules these days as a convention, and I just missed it?)


I think _private has always been a convention in Python, though I'd say most Python is not so rigorous about it. I don't see why it couldn't be applied to modules.

I honestly love when I see a package do stuff like this: it's very clear what is the public interface that I should consider usable (without sin) and what is supposed to be an internal detail.

Same with the modules: then it is very clear that the re-export of those names in __init__.py is where they're meant to be consumed, and the other modules are just for organizational purposes, not API purposes.

_Todo is then a private type.

Very clean.


I tend to do the same, and so do some colleagues, so I guess this is a somewhat common pattern.

The way I see it there are two schools:

- The whitelist school: you write everything without the _ prefix, then you whitelist what you want accessible with __all__.

- The explicit school: you forget about __all__ and just use _ for symbols, modules, etc.

I find the latter more readable and consistent (it can be applied to attributes, free functions, modules, ...).


Yup, you(/sibling comments) have it correct, it's to mark it as private.

Not sure where I got it from, it just seems clean. I don't think I see this super frequently in the ecosystem at large, although anything I've had a hand in will tend to use this style!


I just want to say this is brilliant. I've had my share of problems with asyncio and went back to using sync python and deque instead.


Went scrolling looking for this! Most of the article is about problems solved in JAX.

Also worth noting the Array API standard exists now. This is generally also trying to straighten out the sharp edges.


Same here, beautiful solution


> What open source alternatives?

Helix:

https://github.com/helix-editor/helix/

Like vim, but it already has LSP support etc. out of the box. Things are already there, so the config files are minimal.


I have a US number and live in Switzerland. At least for me, I only receive SMS messages whenever I visit the US -- the rest of the time they're just dropped and I'll never see them.

(Doesn't really bother me, my friends and I all use WhatsApp/etc. anyway.)

n=1 though, maybe this is some quirk of my phone provider.


FWIW - I used to do research in this area - PINNs are a terribly overhyped idea.

See for example https://www.nature.com/articles/s42256-024-00897-5

Classical solvers are very, very good at solving PDEs. In contrast, PINNs solve PDEs by... training a neural network. Not once, to be reused later, but every single time you solve a new PDE!

You can vary this idea to try to fix it, but it's still really hard to make it better than any classical method.

As such, the main use cases for PINNs -- they do have them! -- are solving awkward stuff like high-dimensional PDEs or nonlocal operators. Here it's not that the PINNs get any better; it's just that all the classical solvers fall off a cliff.
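To make the "classical solvers are very good" point concrete, here is a minimal classical solve of a 1D Poisson problem (u''(x) = f(x) on [0, 1] with zero boundary conditions): one pass of the tridiagonal Thomas algorithm, no training loop. A PINN would instead train a network to minimise the same residual, from scratch, for each new problem. (This is my own sketch, not code from any of the papers discussed.)

```python
import math

# Classical solve of u''(x) = f(x) on [0, 1], u(0) = u(1) = 0, using
# second-order central differences and the Thomas (tridiagonal) algorithm.
n = 99                                  # interior grid points
h = 1.0 / (n + 1)
x = [h * (i + 1) for i in range(n)]
f = [math.sin(math.pi * xi) for xi in x]

# Tridiagonal system: (u[i-1] - 2 u[i] + u[i+1]) / h^2 = f[i]
a = [1.0 / h**2] * n                    # sub-diagonal
b = [-2.0 / h**2] * n                   # diagonal
c = [1.0 / h**2] * n                    # super-diagonal
d = f[:]

# Forward elimination.
for i in range(1, n):
    w = a[i] / b[i - 1]
    b[i] -= w * c[i - 1]
    d[i] -= w * d[i - 1]

# Back substitution.
u = [0.0] * n
u[-1] = d[-1] / b[-1]
for i in range(n - 2, -1, -1):
    u[i] = (d[i] - c[i] * u[i + 1]) / b[i]

# The exact solution is -sin(pi x) / pi^2; the error is O(h^2).
err = max(abs(ui - (-math.sin(math.pi * xi) / math.pi**2))
          for ui, xi in zip(u, x))
```

The whole thing runs in linear time and is deterministic, which is the bar any PINN has to beat on problems like this.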

---

Importantly -- none of the above applies to stuff like neural differential equations or neural closure models. These are genuinely really cool and have wide-ranging applications! The difference is that PINNs are numerical solvers, whilst NDEs/NCMs are techniques for modelling data.

/rant ;)


I concur. As a postdoc for many years adjacent to this work, I was similarly unimpressed.

The best part about PINNs is that since there are so many parameters to tune, you can get several papers out of the same problem. Then these researchers get more publications, hence better job prospects, and go on to promote PINNs even more. Eventually they’ll move on, but not before having sucked the air out of more promising research directions.

—a jaded academic


I believe a lot of this hype is purely attributable to Karniadakis and to how bad a lot of the methods in many areas of engineering are. The methods coming out of CRUNCH (PINNs chief among them) seem (if they are not actually) more intelligent in comparison, since engineers are happy to take a pure brute-force solution to inverse or model-selection problems as "innovative", haha.


The general rule of thumb to go by is that whatever Karniadakis proposes doesn't actually work outside of his benchmarks. PINNs don't really work, and _his flavor_ of neural operators also doesn't really work.

PINNs have serious problems with the way the "PDE component" of the loss function needs to be posed, and outside of throwing tons of (often Chinese) PhD students and postdocs at them, they usually don't work for actual problems. That is mostly owed to the instabilities of higher-order automatic derivatives, at which point PINN people go through a cascade of alternative approaches to obtain those higher-order derivatives. But these are all just hacks.


I love Karniadakis's energy. I invited him to give a talk at my research center, and his talk was fun and really targeted at physicists who understand numerical computing. He gave a good sell and was highly opinionated, which was super welcome. His main argument was that these are just other ways to arrive at an optimisation, and that they worked very quickly with only a bit of data. (I am sure he would correct me greatly at this point; I'm not an expert on this topic.) But he knew the field very well and talked at length about the differences between one iterative method he developed and the method that Yao Lai at Stanford developed. I had her work on my mind because she talked at an AI conference I organised in Oslo. I liked that he seemed willing to disagree with people because he simply believed he was correct.

Edit: this is the Yao Lai paper I'm talking about:

https://www.sciencedirect.com/science/article/pii/S002199912...


What do you do now?



Likewise, spending another comment just to agree. Both on the low profile and the low travel distance.

I've tried low-profile chocs and they still have too much travel! But I'm stuck with them as split keyboards are important for me just for the usual collection of wrist health reasons.

So I'm just waiting for Apple to make a split keyboard I guess :)


I have sincerely been considering a bandsaw and a soldering iron! To find out how hard it is to split a keyboard that’s already in one piece and have it remain working.


M1242 but it has too much travel by modern standards.


So why this over qutebrowser [1]? (Which has been my go-to keyboard-first browser for a long time.) This isn't mentioned in the FAQ, despite being, I think, the natural comparison.

[1] https://github.com/qutebrowser/qutebrowser


As someone who used Qute for a long time:

* Python is much slower than SBCL (yes, even if rendering is done by Blink); including the lack of threading

* Bookmarks are pure crap; they have neither tags nor directories to sort them better

* Less hackable (e.g. something that should be possible in Nyxt: https://github.com/qutebrowser/qutebrowser/issues/3933)

* Massive gaps: https://github.com/qutebrowser/qutebrowser/issues/2328 https://github.com/qutebrowser/qutebrowser/issues/2492 https://github.com/qutebrowser/qutebrowser/issues/5731 (!!!)

* Per domain/URL settings never progressed further than the initial batch of properties: https://github.com/qutebrowser/qutebrowser/issues/3636

* Adblocking is better than a hosts file but still missing a lot compared to uBlock (https://github.com/qutebrowser/qutebrowser/issues/6480). No script-blocking matrix like uBlock's "advanced mode" at all.

My impression is that it has been stuck in bug fixing/dependency churn for a long time now. Switched to Firefox while waiting for Nyxt to be usable (apparently, Nyxt 4 will be it).


> My impression is that it has been stuck in bug fixing/dependency churn for a long time now

I don't think it's just your impression: it's exactly what happened. Depending on Qt for the rendering engine means the browser has been tied to the painfully long release cycle of the whole of Qt. Quickly fixing bugs or implementing new features is hard, they have to hack around limited APIs, beg for more and continually fix new bugs introduced by upstream (both Qt and google).


Does Nyxt have uBlock Origin? It would be a must-have for me too.


Not yet, but Nyxt 4 is supposed to support WebExtensions.


Nice!! Then I can also use my password manager and more. Will deffo give it a try then. I hope it'll come soon.


You can redirect in QB. This is how I do it (from my config; the imports and the register call live elsewhere in the file, shown here so the snippet is complete):

    from qutebrowser.api import interceptor
    from qutebrowser.extensions import interceptors
    from PyQt5.QtCore import QUrl

    def redirect(info: interceptor.Request):
        if info.request_url.host() == "en.m.wikipedia.org":
            new_url = QUrl(info.request_url)
            new_url.setHost("en.wikipedia.org")
            try:
                info.redirect(new_url)
            except interceptors.RedirectFailedException:
                pass

    interceptor.register(redirect)


Cool, thanks for the tip!


For me: CL/SBCL. It is more fun for me.


I loved qutebrowser, but many pages didn't work because of the rendering engine. That made me go back to Firefox.


The engine is QtWebEngine, which is essentially Chromium without the proprietary stuff. It may be a bit outdated, but I've never seen a page not being rendered properly. Maybe you used it way back when the default engine was QtWebKit.


Interesting. I'll give it another try.


Also no Python, all Common Lisp.


Vim vs Emacs bindings for one.


You can configure both to use either.

List of emacs-like config in Qutebrowser:

https://github.com/qutebrowser/qutebrowser/blob/main/doc/hel...


Like always, it's a second-class citizen. I spent a stupid 6 months trying to use Emacs like vim. Emacs isn't a text editor. If you need to edit text as a rectangle of characters then you can drop in evil mode. Expecting to use Emacs control characters from evil mode is a bit like using kanji to write English.


Evil (VIM emulation mode in Emacs) does not in any way behave like a second-class citizen. I use evil every single day and it's fantastic.

Emacs is a text editor, yes, among other things.

If anyone is reading this who hasn't tried Emacs, don't let takes like this put you off giving it a try. Doom Emacs is a fantastic experience to get started, but there are more minimal starter kits that give you just evil-mode to start.


I literally said you can use evil mode to edit text.

But trying to use vim inspired motion and editing in other modes is a terrible idea. Just learn how Emacs does it and stop thinking of everything as text. There is usually deeper semantic meaning behind the syntax that an Emacs mode will let you edit directly.


If you want to write English using kanji I recommend starting here: https://www.zompist.com/yingzi/yingzi.htm


It was my experience too that it's better to commit to using Emacs like Emacs. `C-x SPC` is the Emacs way to select a rectangle of characters.


Doom Emacs was everything I wanted Neovim to be, for me personally. I know it's a big war on the web, but for some of us evil-mode Emacs is the easy way to use vim motions.

The only real disadvantage for me is that it's significantly easier to run Neovim on Windows (work).

