Just like I code in a programming language, I'm not proposing to turn everything into English prose. Rather, using (abbreviated) names for variables and perhaps a bit more common language in papers (but that's maybe a separate topic).
Also I'm not sure what you mean by "back", is it referring to what we iirc called story exercises in Dutch primary school ("Jan goes to the store and buys seven ladders, then sells three..." etc.) or was this a thing a few hundred years ago or so?
Yeah, imagine making a calculation or transforming a complex formula with words and full sentences. Algebraic notation was a pretty big invention for a reason. For instance, one reason we use single letters with subscripts is so a multi-letter name isn't mistaken for a product. Try to write and manipulate the Schrödinger equation with words. Imagine solving the hydrogen atom; it already takes something like 50 pages in algebraic notation...
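To make the contrast concrete, here is the standard one-dimensional time-dependent Schrödinger equation in symbolic form:

```latex
i\hbar \frac{\partial \psi}{\partial t}
  = -\frac{\hbar^2}{2m} \frac{\partial^2 \psi}{\partial x^2} + V(x)\,\psi
```

Spelled out in words, the same single line becomes: "the imaginary unit, times the reduced Planck constant, times the first partial derivative of the wave function with respect to time, equals minus the reduced Planck constant squared over twice the mass, times the second partial derivative of the wave function with respect to position, plus the potential at that position times the wave function." One line of symbols versus a paragraph of prose, before any manipulation has even started.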
And I don't really understand the "I didn't do math and Greek in school" complaint. I barely had a foreign language, but if you're actually learning the concept, you memorize the letter as well. You can't understand what a wave function is and then not remember that its symbol is ψ (psi). And if you don't know what a wave function is, it won't help to write derivate_2nd_order(waveFunction, time).
EDIT: obviously we're not talking about stories to teach newcomers, you're talking about writing equations in scientific articles and books with words.
I guess it's all coming down to mnemonics: aiding our memories and communicating.
Sure, this is the "state of the art", but even though pure-language notations might be even worse, I can't help thinking that people who think like the parent might find something better still.
Maybe something inspired by braille notation, or something invented while trying to understand how our brain works (just speculating here), will be even more expressive.
I actually like seeing an adult bothered by the fact that the same symbols that make science more expressive are also the reason there's a steep ladder for newcomers to understand what's being expressed, given it's all very arbitrary (someone in the 16th century chose a random Greek letter to represent X).
Imagine how much science would improve with more "brain power" also able to try to solve some problems, if there were less arbitrariness.
Anything other than a single-letter variable, with at most subscripts, bold, and upper/lower case, simply doesn't work in maths and science. And because we only have 26 letters, you do have to go to Greek.
Actually, that might be a good exercise: try writing some moderately abstract equations with variable names like you'd use in a programming language, and you'll find yourself shortening them pretty quickly. We literally do it sometimes when modeling an equation for a new domain: we start by writing words, and by the end of the blackboard they've already become symbols.
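As a minimal sketch of that exercise (the function names and the choice of the quadratic formula are just for illustration): the same formula, first with programming-style names, then with the conventional single letters it tends to collapse into.

```python
import math

# Verbose, programming-style names for the quadratic formula:
def quadratic_roots_verbose(coefficient_quadratic, coefficient_linear, constant_term):
    discriminant = coefficient_linear ** 2 - 4 * coefficient_quadratic * constant_term
    square_root_of_discriminant = math.sqrt(discriminant)
    return (
        (-coefficient_linear + square_root_of_discriminant) / (2 * coefficient_quadratic),
        (-coefficient_linear - square_root_of_discriminant) / (2 * coefficient_quadratic),
    )

# The same formula with the conventional single letters a, b, c:
# x = (-b ± sqrt(b² - 4ac)) / 2a
def quadratic_roots(a, b, c):
    d = math.sqrt(b * b - 4 * a * c)
    return ((-b + d) / (2 * a), (-b - d) / (2 * a))
```

Both compute the same thing, but try transforming the verbose version by hand (completing the square, say) and you'll start abbreviating almost immediately.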
It’s funny; when I was reading the HN comment I was just saying to myself “it would be so nice if the person had used the symbol for phi (φ) rather than spelling it out”. So my reaction was the opposite to yours since my brain comprehends that notation more easily than words.
Using symbols reduces the amount of text your brain has to parse. It makes it much easier to reach consensus on a shared understanding of things. The price to pay is to learn this new notation or language.
Yeah, don't we have the mantra "less code is better code" or something like that? Too much verbosity and our brain turns off. I did type out Psi because I couldn't be bothered to enter the symbol on mobile, but yeah, it's so weird.
Chinese readers learn thousands of characters; I'm pretty sure the problem with understanding science has nothing to do with a few Greek symbols.
The price paid for learning the language of math is one that everyone who needs to work with these things is happy to pay. If a notation doesn't make sense, it's discarded for one that does.
It may not make sense to a layperson, but the layperson isn't really the audience.
Writing formal mathematics as programming languages is basically what automated theorem provers do. The proofs are mostly unreadable.
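As a sketch of what that looks like (assuming Lean 4 with Mathlib's `ring` tactic available):

```lean
-- A schoolbook identity, stated and proved formally.
-- Even with a powerful tactic doing the algebra, the statement
-- has to spell out every symbol the blackboard version leaves implicit.
example (a b : ℤ) : (a + b) ^ 2 = a ^ 2 + 2 * a * b + b ^ 2 := by
  ring
```

And this is a best case: the tactic hides the actual proof term, which, if printed, runs to many lines of rewrites a human would never want to read.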
Mathematical notation really isn't that hard, as long as you treat it as its own thing and learn it properly, rather than trying to use a (likely imperative) model of programming as a reference point.