Tensors are a good example: in a proof you might want to do induction over the dimensions of a tensor, which means your type of tensors needs to contain tensors of all dimensions. But working on the type of all tensors is not so nice anymore: a lot of algebraic properties no longer hold, because they only hold for tensors of a specific dimension. An example is the Tensor library in the Deep_Learning AFP entry.
Now in Isabelle most structures are encoded as type classes, but when your type no longer fits into the type class you suddenly need to prove a lot of theorems yourself, and the automation does not work as nicely as with type classes. Generally, in HOL provers you want your type to have nice algebraic properties on the entire type; if that is not the case, proofs get much more convoluted. Isabelle's simplifier supports this situation using conditional rewrite rules, but it still does not work as nicely as letting type inference handle these cases.
In dependently typed theorem provers it isn't a problem to prove everything on tensors with a fixed dimension. When a proof requires induction on the dimension itself, that shape can always be constructed within the proof itself.
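For illustration, here is a rough sketch in Lean 3 syntax (all names made up) of the dependently typed version, where the shape is part of the type:

    -- a tensor indexed by its list of dimensions
    inductive tensor (α : Type) : list ℕ → Type
    | scalar : α → tensor []
    | node   : Π {n : ℕ} {ds : list ℕ}, (fin n → tensor ds) → tensor (n :: ds)

    -- an algebraic fact can now be stated for one fixed but arbitrary shape,
    -- and a proof that needs induction over the shape simply inducts on `ds`
    example : ∀ (ds : list ℕ) (t : tensor ℕ ds), t = t :=
    λ ds t, rfl

The point is that the quantification over the shape happens in the statement, not in the type of tensors itself.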
The memory consumption will surely be better, as HOL or Isabelle only stores the fact that a theorem was proven, not how. But then Lean can store the proofs and has two independent external type checkers. Isabelle has the infrastructure to do this, but it cannot cope with it beyond a certain size (Isabelle's HOL-Proofs does not even contain all of HOL/Complex_Main). In my experience Lean feels much faster than Isabelle, so I would guess the proof would take much longer in Isabelle than in Lean. But I don't have any concrete measurements.
In Isabelle one might want to trust the code generator to evaluate certain computations in ML.
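Lean draws a similar line between evaluation inside and outside the kernel; a tiny (vanilla Lean 3) illustration:

    -- `#eval` runs compiled VM bytecode: fast, but it happens outside the kernel,
    -- so you are trusting the compiler, much like trusting a code generator
    #eval 123456789 * 987654321

    -- `#reduce` evaluates by kernel reduction: slower, but checked by the kernel
    #reduce 2 * 3 + 1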
One of my PhD students will soon have to learn an interactive prover. I was going to recommend Isabelle/HOL because it's got the best automation, but maybe I should consider Lean (and learn it along with him).
I worry slightly about Lean being immature and lacking a big library ecosystem in 2017. OTOH, it's also good to be at the cutting edge.
But your "intermediate specifications" are not part of your specification! The specification says that the result is a sorted version of the input. It does not care about the induction hypothesis in your program. It might be technically necessary depending on the verification condition generator you are using. When you verify a functional program using Isabelle, Lean, or Coq your statement is that the output is a sorted version of the input.
But in practice, you need to specify what sortedness means, and in a generic sorting algorithm that means axiomatising the comparison predicate, including its side effects. Note, for example, that the C standard library's qsort takes an arbitrary comparison function, which may not be transitive, may have arbitrary side effects, etc. The behaviour of sorting becomes pretty complicated in this scenario.
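To make that concrete, here is a rough Lean 3 sketch of such a statement, with my_sort and is_perm as hypothetical placeholders for a concrete sorting function and a permutation predicate, and with the assumptions on the comparison predicate R spelled out:

    -- sortedness with respect to an arbitrary comparison predicate R
    def sorted {α : Type} (R : α → α → Prop) : list α → Prop
    | (x :: y :: l) := R x y ∧ sorted (y :: l)
    | _             := true

    -- the functional-correctness statement: if R is total and transitive,
    -- the output is an R-sorted permutation of the input
    def sort_spec {α : Type} (R : α → α → Prop)
      (my_sort : list α → list α) (is_perm : list α → list α → Prop) : Prop :=
    (∀ a b, R a b ∨ R b a) → (∀ a b c, R a b → R b c → R a c) →
    ∀ l, sorted R (my_sort l) ∧ is_perm l (my_sort l)

Dropping either hypothesis on R is exactly where the qsort-style behaviour gets complicated.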
Lean is LCF style in the sense that there is a small kernel, and all proofs written by the user and generated by the automation are checked by this kernel. This kernel (i.e. type checker) is much smaller than that of Coq or Agda.
It is _not_ LCF style in the sense that there is only a "theorem" datatype with proof operations. Lean proofs are type-checked expressions.
It is hard to find a terse description of the Calculus of Constructions or CIC. Lean's logic essentially consists of Martin-Löf type theory + (noncumulative) universes with an impredicative Prop + inductive types (with a single eliminator each) + quotient types + function and Prop extensionality, and optionally choice to obtain a classical logic. Most other features, like Agda-style dependent pattern matching or mutual and nested inductive types, are compiled down to these more basic features.
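Two tiny Lean 3 sketches of those last points (rose and Z2 are just illustrative names):

    -- a nested inductive: the front end compiles the nesting through `list`
    -- away and produces a plain inductive with a single eliminator
    inductive rose (α : Type)
    | node : α → list rose → rose

    -- quotients are built in: `quot r` forms the quotient of a type by a relation
    def same_parity (m n : ℕ) : Prop := m % 2 = n % 2
    def Z2 : Type := quot same_parity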
You mean that there is NOT only a "theorem" datatype?
In contrast to Curry/Howard provers, the LCF approach forgets proofs: it guarantees soundness by giving you access to proof rules only through the "theorem" datatype (which is the key trusted computing base). To be sure, the "theorem" datatype may internally maintain a proof object (e.g. for the purposes of program extraction), but that's not necessary.
This is orthogonal. But first: buffer overflows, null pointer accesses, segmentation faults, etc. still happen in the kernel. And this is a big problem!
But let's assume this is solved. Then an API can be checked for certain properties: you can specify what it means for something to be read-only in an environment etc., and ensure that this is enforced by the operating system.
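As a toy illustration in Lean syntax (all names made up; this is not any real kernel model), the kind of statement meant here looks like:

    -- toy model: addresses, values and process ids are all just ℕ
    structure machine :=
    (mem      : ℕ → ℕ)             -- memory contents
    (readonly : ℕ → ℕ → Prop)      -- readonly p a : address a is read-only for process p

    -- `step p s s'` : process p can take the machine from state s to state s'
    constant step : ℕ → machine → machine → Prop

    -- the property to establish: no step of a process changes an address
    -- that is read-only for that process
    def readonly_enforced : Prop :=
    ∀ p s s' a, step p s s' → s.readonly p a → s'.mem a = s.mem a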
As an example: the verification of seL4 not only guarantees the absence of classical security holes (buffer overflows etc.) but also certain non-interference properties.
They almost certainly aren't USB chargers. Probably USB ports for inserting thumb drives to update the navigational charts. These kinds of USB ports are [mis]used for charging phones all the time, and the phone can end up being mounted as a mass-storage device.
No idea how the rest of the exploit would work in this scenario though, I have a hard time believing it.
I wonder if the coinductive approach to calculus by Pavlović et al. [1, 2] has been considered as a basis for formalisation.
I secretly hope that I will one day get to teach calculus to computer scientists. In that case I would introduce the reals as a coinductive data type, and the usual operations (such as differentiation, integration, solving differential equations) as stream operations. That should appeal to programmers, although it would be weird for conventional mathematicians.
[1] D. Pavlović, M.H. Escardó, Calculus in coinductive form.
[2] D. Pavlović, V. Pratt, The continuum as a final coalgebra.
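To give a flavour of the stream view (only a rough sketch of the general idea in Lean 3 syntax, assuming mathlib's ℚ, not the exact formulation of [1] or [2]): identify an analytic function with the stream of its Taylor coefficients; differentiation and integration then become ordinary stream operations.

    import data.rat.order data.nat.cast   -- mathlib's ℚ; exact module names may vary

    -- a stream of rationals, encoded as a function out of ℕ:
    -- `a n` is the n-th Taylor coefficient of Σ aₙ·xⁿ
    def coeffs : Type := ℕ → ℚ

    -- the derivative of Σ aₙ·xⁿ has coefficients (n+1)·aₙ₊₁
    def deriv (a : coeffs) : coeffs :=
    λ n, (n + 1 : ℚ) * a (n + 1)

    -- integration with constant term c shifts in the other direction
    def integ (c : ℚ) (a : coeffs) : coeffs :=
    λ n, match n with
         | 0       := c
         | (m + 1) := a m / (m + 1 : ℚ)
         end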
I haven't read the paper, but if you define the reals as constructive Cauchy sequences you'd be forced into a coinductive definition. Is that equivalent?
I'm not familiar with constructive Cauchy sequences. The usual way of using Cauchy sequences is to quotient them by the ideal of Cauchy sequences that converge to 0.
In constructivism, once you have a Cauchy sequence of rationals, that represents a real. However given two such Cauchy sequences, you cannot always prove whether one is less than, equal to, or greater than the other.
This has some interesting consequences. For example, only continuous functions can be functions in constructivism. (If you try to construct a function that is discontinuous at a point, there are input Cauchy sequences for which you cannot produce an output Cauchy sequence, so it is not a well-defined function.)
Streams are coinductive, so the reals are the elements of the final coalgebra of (rat * -) that satisfy the Cauchy condition. So something like (Sigma (nu (rat * -)) isCauchy).
Don't you need to quotient after you've got Cauchy sequences? For example, (0,0,0,0,...) represents the same real number (as a Cauchy sequence) as (1,0,0,0,0,0,...), but the two are different as streams.
Yeah, it depends on how you're encoding it. Quotient types are, to my knowledge, still tricky to implement, but you can make a setoid and equip the type with its own equality. Of course, now we run directly into issues of totality and semi-decidability.
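Putting the pieces of this sub-thread together, a rough Lean 3 sketch (assuming mathlib's ℚ; the names qseq, is_cauchy, requiv, creal are all made up) of the "streams + Cauchy predicate + quotient" construction:

    import data.rat.order   -- mathlib's ℚ and its order/abs; exact module name may vary

    -- streams of rationals, encoded as functions out of ℕ
    def qseq : Type := ℕ → ℚ

    -- the Cauchy condition from the Sigma type above
    def is_cauchy (f : qseq) : Prop :=
    ∀ ε : ℚ, 0 < ε → ∃ N : ℕ, ∀ m n, N ≤ m → N ≤ n → abs (f m - f n) < ε

    -- two streams are identified when their difference tends to 0,
    -- e.g. (0,0,0,...) and (1,0,0,0,...) both stand for the real number 0
    def requiv (f g : qseq) : Prop :=
    ∀ ε : ℚ, 0 < ε → ∃ N : ℕ, ∀ n, N ≤ n → abs (f n - g n) < ε

    -- Cauchy streams first, then the quotient step discussed above
    def creal : Type :=
    quot (λ x y : {f : qseq // is_cauchy f}, requiv x.val y.val)

Whether one can compute with this (deciding requiv, or even the ordering) is exactly where the totality and semi-decidability issues show up.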