"Scientific models are much simpler than most software"
I've been noodling with some similar ideas around collaborative editing of interconnected declarative models, and I don't think the above is necessarily true. A diagram of the Linux Kernel might be even more complex than a diagram of some wild systems biology[0], but surely there is ever more complexity to discover. The tools for massively distributed digital collaboration seem much more polished in software than scientific fields today. I hope the author's work is a step in the right direction!
For similar Scheme-y computer algebra stuff, see Sussman and Wisdom's work in "Structure and Interpretation of Classical Mechanics" and follow-ons.
>A diagram of the Linux Kernel might be even more complex than a diagram of some wild systems biology...
The only reason it could be more complicated is if we only include a small portion of the systems biology diagram. Since we do not understand every reaction pathway in any biological system, the biological side will necessarily be a partial picture. On the software side, we can produce a full diagram.
It will be a long time before we are able to produce a full systems biology diagram.
You can have a scientific model of photosynthesis and explain the concept and how it works to students based on that model. They wouldn't be able to somehow implement that process afterwards, nor would anyone expect them to.
Software isn't like that. Software has to describe a process unambiguously and exhaustively. There is no option to say "and now you do X" while leaving X undefined.
Software can't be simpler than the process it describes, it can only provide a simpler interface.
You can also easily explain to students how a piece of complex software works, at the same level of detail. The point is that biology is vastly more complex than any software humans have ever written. Evolution selects for spaghetti code.
Evolution seems to select for highly redundant and holographic code, where holographic means non-sparsely encoded in this context. Height, for example, is encoded in a wide variety of genes which can be turned on and off by even more genes.
Why would you use the term "holographic" instead of "non-sparsely encoded," "robust," "redundant," or any of the other better-understood and seemingly more descriptive terms one could use?
Because holographic encoding is a particular type of non-sparse encoding and redundancy. "Holographic" really is the most descriptive way I can think of to describe it without using many more words.
Redundancy can be achieved through multiple independent systems, e.g. repeating the encoded text in multiple locations; such an approach is not holographic, though. Parity is a more holographic encoding scheme in that redundancy comes from the interaction of multiple pieces of encoded text.
The term is even more applicable when talking about convolutional neural networks in which every neural node significantly contributes to every encoding. Here the similarity to optical holograms becomes quite apparent.
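To make the repetition-versus-parity contrast concrete, here is a small TypeScript sketch (purely my own illustration, not from any source in this thread): with repetition, a lost piece is recovered from its dedicated copy alone; with XOR parity, recovering a lost piece requires combining all the surviving pieces, so the redundancy lives in their interaction.

```typescript
// Repetition: each piece has its own dedicated copy; recovering a lost piece
// touches only that copy and nothing else.
function recoverFromCopy(copies: number[], lostIndex: number): number {
  return copies[lostIndex];
}

// XOR parity: one extra value ties all pieces together.
function parity(pieces: number[]): number {
  return pieces.reduce((acc, p) => acc ^ p, 0);
}

// Recovering a single lost piece requires XOR-ing *all* surviving pieces
// with the parity value; no single location holds the redundancy.
function recoverFromParity(surviving: number[], parityValue: number): number {
  return surviving.reduce((acc, p) => acc ^ p, parityValue);
}

const data = [1, 2, 4];
const p = parity(data);                                // 1 ^ 2 ^ 4 = 7
const lost = recoverFromParity([data[0], data[2]], p); // 1 ^ 4 ^ 7 = 2
console.log(lost);                                     // the missing piece, 2
```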
The comparison is between a model and a software implementation, not between a model and a model of the software. The point is that there are tools that help with the implementation, but not with the model.
Yes, but the software may be based on symbolic reasoning, in which case the computer can 'use' a description of a process symbolically and non-exhaustively, just as a human would.
It's only slightly related, but is there some sort of zoom-in version of a formula display language out there?
Let's say I have a Wikipedia article that contains the formula F = G (m1 m2) / r^2.
Normally this dense notation requires a lot of pre-existing knowledge to decipher. But if you had some JavaScript system to "zoom in", you could replace parts of it - first with words (linking to the corresponding article), then with the explanation itself.
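I don't know of an existing tool that does exactly this, but a sketch of the data model such a zoom system might use is easy to write. In the TypeScript below, the `ZoomNode` shape and the gravitation example are purely my own illustration: each fragment of a formula carries progressively more verbose renderings, and a UI would just swap between them on click.

```typescript
// Each piece of a formula can be shown at increasing levels of verbosity.
interface ZoomNode {
  symbol: string;       // terse notation, e.g. "G"
  word: string;         // human-readable name, e.g. "gravitational constant"
  article?: string;     // link to a fuller explanation
  explanation?: string; // inline explanation at the deepest zoom level
  children?: ZoomNode[];
}

// Newton's law of gravitation, F = G (m1 m2) / r^2, as a zoomable tree.
const gravitation: ZoomNode = {
  symbol: "F = G (m1 m2) / r^2",
  word: "Newton's law of universal gravitation",
  article: "https://en.wikipedia.org/wiki/Newton%27s_law_of_universal_gravitation",
  children: [
    { symbol: "F", word: "gravitational force",
      explanation: "the attractive force between the two bodies" },
    { symbol: "G", word: "gravitational constant",
      article: "https://en.wikipedia.org/wiki/Gravitational_constant" },
    { symbol: "m1, m2", word: "masses of the two bodies" },
    { symbol: "r", word: "distance between the centres of the two bodies" },
  ],
};

// Render at a chosen zoom level: 0 = symbols, 1 = words, 2 = explanations.
function render(node: ZoomNode, level: number): string {
  if (level === 0) return node.symbol;
  if (level === 1) return node.article ? `[${node.word}](${node.article})` : node.word;
  return node.explanation ?? node.word;
}

// Example: expand the constant G one level.
console.log(render(gravitation.children![1], 1));
// => [gravitational constant](https://en.wikipedia.org/wiki/Gravitational_constant)
```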
This would be lovely, as would be comprehensive documentation of software, but it has the same problem: someone has to write the words, and the documentation; and, even if he or she is motivated to document, there's no reason to believe that the person with the domain expertise to write such documentation has any particularly good writing ability, so now you're coördinating the efforts of two people just to write one equation. Since the rewards are not the sort of thing that will be immediately apparent to a domain expert (who is after all surrounded by people who already understand, or can relatively easily come to understand, the material in question), this effort is unlikely to be expended—at least not on a large scale.
(If the documentation is written by a separate person, not coördinating with the original coder / equat-or, then one must doubt its accuracy; and, if it is auto-generated—well, have you ever seen really good auto-generated documentation?)
>The main difference between Leibniz and computer algebra systems is that Leibniz is a scientific notation whereas computer algebra systems are tools for doing computations. A notation needs to be clear, a tool needs to be flexible and efficient.
Different priorities lead to different design decisions. In a computer algebra system, you never see a definition of a term algebra or a list of rewrite rules. They are hidden from the user, and in the case of commercial software even heavily encoded to make them inaccessible. A computer algebra system is the equivalent of a single giant unpublished Leibniz context. In contrast, Leibniz encourages small, readable, and remixable contexts that can be examined and understood by scientists with reasonable effort. However, Leibniz lacks the huge database of simplification rules that make up the bulk of a computer algebra system.
You don't "run" Leibniz code. Leibniz is not a programming language. A Leibniz context is more like a database of equations than like a program.
You can use the database of equations (and rules) to "reduce" a term, which is also what computer algebra systems do. The difference is that in Leibniz, the database of equations and rules is always known and can be inspected, whereas in a computer algebra system it is carefully hidden from the user. Since it is in this database that scientific knowledge is stored, this difference is quite important.
You can indeed consider a context plus a term to be reduced as a program, which is executed by term reduction. But the unit of code in Leibniz is the context, not the context + term-to-be-reduced. If you want to compare to programming languages, a context is more like a library.
The difference is in the focus rather than in the principles. Leibniz is a Turing-complete language, so you can use it for programming. But that's not what Leibniz is designed for. Term reduction is more a way of exploring the definitions in a context than a way to run code.
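To make "a database of rules plus term reduction" concrete in a generic way, here is a TypeScript sketch of plain term rewriting. This is not Leibniz syntax, and every name in it is my own; it only illustrates the idea of a readable rule set that a reducer applies to a term until nothing matches any more.

```typescript
// A term is a constant, a variable, or an operator applied to sub-terms.
type Term =
  | { kind: "const"; value: number }
  | { kind: "var"; name: string }
  | { kind: "op"; op: string; args: Term[] };

// A rewrite rule returns a simplified term when it matches, or null otherwise.
type Rule = (t: Term) => Term | null;

// A tiny "context": two rules encoding x + 0 = x and x * 1 = x.
const rules: Rule[] = [
  (t) =>
    t.kind === "op" && t.op === "+" &&
    t.args[1].kind === "const" && t.args[1].value === 0
      ? t.args[0]
      : null,
  (t) =>
    t.kind === "op" && t.op === "*" &&
    t.args[1].kind === "const" && t.args[1].value === 1
      ? t.args[0]
      : null,
];

// Reduction: rewrite sub-terms first, then apply rules at the root until
// no rule matches any more.
function reduce(t: Term): Term {
  if (t.kind === "op") t = { ...t, args: t.args.map(reduce) };
  for (const rule of rules) {
    const next = rule(t);
    if (next !== null) return reduce(next);
  }
  return t;
}

// (x * 1) + 0 reduces to x.
const x: Term = { kind: "var", name: "x" };
const expr: Term = {
  kind: "op", op: "+",
  args: [{ kind: "op", op: "*", args: [x, { kind: "const", value: 1 }] },
         { kind: "const", value: 0 }],
};
console.log(reduce(expr)); // { kind: "var", name: "x" }
```

The contrast described above is that in Leibniz the analogue of `rules` is small, published, and inspectable, while a computer algebra system ships a vastly larger rule base that the user never gets to see.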
The documentation here is quite sparse, so correct me if I'm wrong. This seems like a theorem-proving system like PROLOG dressed up as something new.
No, not really. The Wikipedia page of Maude has a good explanation of what it is like: https://en.wikipedia.org/wiki/Maude_system. You can also see the Maude book if you're interested [1].
I'm a bit out of my league here, but how does Maude compare to say Idris?
I remember dabbling with Pure, which used term rewriting and lent itself to expressing mathematical equations in a familiar format, but it was not a prover.
I downloaded the PDF for Maude, and I will certainly take a look in the meantime. My interests are solely for learning, and I thought using something like Idris to work my way through Sussman's Structure and Interpretation of Classical Mechanics might drive it home for me in ways that my physics classes didn't.
I tried reading your 27-page-long essay but got bored a couple of pages in and then skimmed the rest. After that I looked at the examples in the repository.
My only question is: why? Seriously, what problem are you trying to solve here?
Your 27-page-long essay was useless in conveying that to me, because it is written in an overly explicit way and seems to have too much background but not much real content.
Looking at the examples I just don't see a point. I'm a scientist. I work with PDEs. You even mentioned one of the software I work with in your paper.
Nothing you have written explains to me why I should decide to write e.g.:
nabla v = 0
In whatever notation it is you are trying to sell me.
In Leibniz, you wouldn't write only "nabla v = 0", but also what v is (by associating it with a sort), what properties v has (e.g. being positive), and how you actually obtained "nabla v = 0" from some first principles, using a sequence of transformations.
The point of Leibniz is to communicate information to other scientists, but in a notation that can be analyzed and verified by computer programs. An immediate advantage is coherence checking by the Leibniz system - you cannot use undefined quantities in an equation, for example.
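As a rough illustration of what coherence checking buys you, here is a TypeScript sketch. This is not Leibniz syntax; the declaration table and `checkEquation` are purely my own invention. The idea is that a context carries a table of declared quantities and their sorts, and every equation is checked against it.

```typescript
// A toy declaration table: quantity name -> sort.
const declared = new Map<string, string>();

function declare(name: string, sort: string): void {
  declared.set(name, sort);
}

// Reject any equation that mentions a symbol that was never declared.
function checkEquation(equation: string): void {
  const symbols = equation.match(/[A-Za-z_]\w*/g) ?? [];
  for (const s of symbols) {
    if (!declared.has(s)) {
      throw new Error(`undefined quantity '${s}' in '${equation}'`);
    }
  }
}

declare("nabla", "operator");        // the differential operator
declare("v", "velocity-field");      // what v is
checkEquation("nabla v = 0");        // passes: everything is declared
checkEquation("nabla w = 0");        // throws: w was never defined
```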
Leibniz and Modelica indeed share the objective of writing digital scientific models rather than software. But the kind of information that can be expressed is quite different. Perhaps the most important difference (for me at least) is that Leibniz permits analyzing and reasoning about models, not just simulating them. I don't think Modelica can do this, but I can't claim to know it well enough.
[0]: https://s-media-cache-ak0.pinimg.com/originals/35/98/92/3598...