This is how static type checkers are told that an imported object is part of the public API for that file. (In addition to anything else present in that file.)
Cf. "the intention here is that only names imported using the form X as X will be exported" from PEP 484 [1].
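For anyone unfamiliar, a minimal sketch of what that looks like (the package and names here are made up):

    # mypkg/__init__.py
    # Under e.g. mypy's --no-implicit-reexport (implied by --strict), a plain
    # import is treated as private to this file; the redundant-looking
    # "X as X" form marks the name as an intentional re-export.
    from ._core import Widget as Widget
    from ._core import make_widget as make_widget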
I'm generally a fan of the style of putting all the implementation in private modules (whose names start with an underscore) and then using __init__.py files solely to declare the public API.
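A sketch of that layout (all names hypothetical); listing names in __all__ is the other form type checkers recognize as an explicit re-export:

    # mypkg/
    #   __init__.py   <- declares the public API, nothing else
    #   _parser.py    <- implementation detail
    #   _render.py    <- implementation detail

    # mypkg/__init__.py
    from ._parser import parse
    from ._render import render

    __all__ = ["parse", "render"]  # re-exported even without the "X as X" form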
I think _private has always been a convention in Python, though I'd say most Python is not so rigorous about it. I don't see why it couldn't be applied to modules.
I honestly love when I see a package do stuff like this: it's very clear then what is the public interface that I should consider usable (without sin), and what is supposed to be an internal detail.
Same with the modules: then it is very clear that the re-export of those names in __init__.py is where they're meant to be consumed, and the other modules are just for organizational purposes, not API purposes.
Yup, you(/sibling comments) have it correct, it's to mark it as private.
Not sure where I got it from, it just seems clean. I don't think I see this super frequently in the ecosystem at large, although anything I've had a hand in will tend to use this style!
I have a US number and live in Switzerland. At least for me, I only receive SMS messages whenever I visit the US -- the rest of the time they're just dropped and I'll never see them.
(Doesn't really bother me, my friends and I all use WhatsApp/etc. anyway.)
n=1 though, maybe this is some quirk of my phone provider.
Classical solvers are very, very good at solving PDEs. In contrast, PINNs solve PDEs by... training a neural network. Not once, producing something that can be reused later, but every single time you solve a new PDE!
You can vary this idea to try to fix it, but it's still really hard to make it better than any classical method.
As such, the main use cases for PINNs -- they do have them! -- are to solve awkward stuff like high-dimensional PDEs or nonlocal operators or something. Here it's not that the PINNs got any better, it's just that all the classical solvers fall off a cliff.
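To make that concrete, here's a toy PINN sketch (my own illustration in PyTorch, not from any paper): the whole training loop exists to solve one fixed 1D Poisson problem, u''(x) = -pi^2 sin(pi x) with u(0) = u(1) = 0 (exact solution sin(pi x)), and you rerun it from scratch for every new equation.

    import math
    import torch
    import torch.nn as nn

    torch.manual_seed(0)

    # The network *is* the candidate solution u(x).
    net = nn.Sequential(nn.Linear(1, 32), nn.Tanh(),
                        nn.Linear(32, 32), nn.Tanh(),
                        nn.Linear(32, 1))
    opt = torch.optim.Adam(net.parameters(), lr=1e-3)

    for step in range(5000):
        x = torch.rand(128, 1, requires_grad=True)  # random collocation points
        u = net(x)
        # The PDE residual needs derivatives of the network itself,
        # computed via nested autograd.
        du = torch.autograd.grad(u, x, torch.ones_like(u), create_graph=True)[0]
        d2u = torch.autograd.grad(du, x, torch.ones_like(du), create_graph=True)[0]
        residual = d2u + math.pi**2 * torch.sin(math.pi * x)
        bc = torch.cat([net(torch.zeros(1, 1)), net(torch.ones(1, 1))])
        loss = residual.pow(2).mean() + bc.pow(2).mean()  # PDE loss + boundary loss
        opt.zero_grad()
        loss.backward()
        opt.step()

A classical finite-difference solve of the same problem is a single tridiagonal linear system, done in microseconds.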
---
Importantly -- none of the above applies to stuff like neural differential equations or neural closure models. These are genuinely really cool and have wide-ranging applications! The difference is that PINNs are numerical solvers, whilst NDEs/NCMs are techniques for modelling data.
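For contrast, a toy neural-ODE sketch (assuming the third-party torchdiffeq package; the dynamics and sizes are made up): here the network is the unknown vector field fitted to observed trajectories, and a classical solver does the integration.

    import torch
    from torchdiffeq import odeint  # pip install torchdiffeq

    torch.manual_seed(0)

    # Synthetic "observed" data: trajectories of a known linear system (a rotation).
    A = torch.tensor([[0.0, 1.0], [-1.0, 0.0]])
    t = torch.linspace(0.0, 2.0, 25)
    y0 = torch.tensor([[1.0, 0.0]])
    with torch.no_grad():
        y_obs = odeint(lambda t, y: y @ A.T, y0, t)

    # The network models the unknown vector field dy/dt = f(y).
    field = torch.nn.Sequential(torch.nn.Linear(2, 32), torch.nn.Tanh(),
                                torch.nn.Linear(32, 2))
    opt = torch.optim.Adam(field.parameters(), lr=1e-2)

    for step in range(500):
        y_pred = odeint(lambda t, y: field(y), y0, t)  # differentiable ODE solve
        loss = (y_pred - y_obs).pow(2).mean()
        opt.zero_grad()
        loss.backward()
        opt.step()

Once trained, the same field can be integrated from any new initial condition, which is exactly the reusability a per-problem PINN solve lacks.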
I concur. As a postdoc for many years adjacent to this work, I was similarly unimpressed.
The best part about PINNs is that since there are so many parameters to tune, you can get several papers out of the same problem. Then these researchers get more publications, hence better job prospects, and go on to promote PINNs even more. Eventually they’ll move on, but not before having sucked the air out of more promising research directions.
I believe a lot of this hype is purely attributable to Karniadakis and to how bad a lot of the methods in many areas of engineering are. The methods coming out of CRUNCH (PINNs chief among them) seem, if they are not actually, more intelligent in comparison, since engineers are happy to take a pure-brute-force solution to inverse or model-selection problems as "innovative", haha.
The general rule of thumb to go by is that whatever Karniadakis proposes doesn't actually work outside of his benchmarks. PINNs don't really work, and _his flavor_ of neural operators also doesn't really work.
PINNs have serious problems with the way the "PDE component" of the loss function needs to be posed, and outside of throwing tons of (often Chinese) PhD students and postdocs at it, they usually don't work for actual problems. That's mostly owed to the instabilities of higher-order automatic derivatives, at which point the PINN people go through a cascade of alternative approaches to obtain these higher-order derivatives. But these are all just hacks.
I love Karniadakis's energy. I invited him to give a talk in my research center and his talk was fun and really targeted at physicists who understand numerical computing. He gave a good sell and was highly opinionated, which was super welcome. His main argument was that these are just other ways to arrive at optimisation, and that they worked very quickly with only a bit of data. I am sure he would correct me greatly at this point. I'm not an expert on this topic, but he knew the field very well and talked at length about the differences between one iterative method he developed and the method that Yao Lai at Stanford developed. I had her work on my mind because she talked at an AI conference I organised in Oslo. I liked that he seemed willing to disagree with people about his own opinions because he simply believed he was correct.
Edit: this is the Yao Lai paper I'm talking about:
Likewise, spending another comment just to agree, both on the low profile and the low travel distance.
I've tried low-profile chocs and they still have too much travel! But I'm stuck with them as split keyboards are important for me just for the usual collection of wrist health reasons.
So I'm just waiting for Apple to make a split keyboard I guess :)
I have sincerely been considering a bandsaw and a soldering iron! To find out how hard it is to split a keyboard that’s already in one piece and have it remain working.
So why this over qutebrowser [1]? (It has been my go-to keyboard-first browser for a long time.) This isn't mentioned in the FAQ, despite being, I think, the natural comparison.
My impression is that it has been stuck in bug fixing/dependency churn for a long time now. Switched to Firefox while waiting for Nyxt to be usable (apparently, Nyxt 4 will be it).
> My impression is that it has been stuck in bug fixing/dependency churn for a long time now
I don't think it's just your impression: it's exactly what happened. Depending on Qt for the rendering engine means the browser has been tied to the painfully long release cycle of the whole of Qt. Quickly fixing bugs or implementing new features is hard; they have to hack around limited APIs, beg for more, and continually fix new bugs introduced by upstream (both Qt and Google).
The engine is QtWebEngine, which is essentially Chromium without the proprietary stuff. It may be a bit outdated, but I've never seen a page not being rendered properly. Maybe you used it way back when the default engine was QtWebKit.
Like always, it's a second-class citizen. I spent a stupid six months trying to use Emacs like Vim. Emacs isn't a text editor. If you need to edit text as a rectangle of characters, then you can drop in evil mode. Expecting to use Emacs control characters from evil mode is a bit like using Kanji to write English.
Evil (the Vim emulation mode in Emacs) does not in any way behave like a second-class citizen. I use evil every single day and it's fantastic.
Emacs is a text editor, yes, among other things.
If anyone is reading this who hasn't tried Emacs, don't let takes like this put you off giving Emacs a try. Doom Emacs is a fantastic experience to get started with, but there are more minimal starter kits that give you just evil-mode to start.
I literally said you can use evil mode to edit text.
But trying to use Vim-inspired motion and editing in other modes is a terrible idea. Just learn how Emacs does it and stop thinking of everything as text. There is usually deeper semantic meaning behind the syntax that an Emacs mode will let you edit directly.
Doom Emacs was everything I wanted Neovim to be, for me personally. I know it's a big war on the web, but for some of us, evil-mode Emacs is the easy way to use Vim motions.
The only real disadvantage for me is that it's significantly easier to run Neovim on Windows (work).
I learnt a lot the first time around, so the newer one is much better :)