Calculus for mathematicians (1997) [pdf] (cr.yp.to)
106 points by bumbledraven on May 28, 2015 | hide | past | favorite | 86 comments


The hardest (ie, best) math prof I ever had (I was in EE, he was in the math department) used to say that "engineers can teach calculus, but their students cannot go on to teach calculus". I'm impressed by someone who knows a subject deeply enough to take that long of a view.

Personally, I just like the engineering view of dx and dy as simply being new variables (with caveats that we immediately forget). Which is why I'll probably not teach calculus any time soon.

But, still, the best "revelation" moment I had was when he off-handedly said "An integral is the inner product of a function and a suitably-dimensioned unit". Light bulb came on, and I "got" integration for the first time. Too bad they can't lead up to it that way in high school.

(Side rant: why do they teach trig before calculus in high school? That's completely backwards. Trig is a bunch of arbitrary formulas if you don't have the calculus behind them.)


> Trig is a bunch of arbitrary formulas if you don't have the calculus behind them.

Why do you claim this? You don't have to have seen calculus to appreciate how trig functions are defined, how to manipulate them, or how to use them in applications.

As a math professor, I personally like the fact that we teach trig and exponential/logarithmic functions before calculus. They are (as you well know) exceedingly rich examples which illustrate why calculus is interesting and useful, and knowing them already enables the student to study calculus without excessive digressions.


I don't know, I agree with the parent. I hated trig in my first encounters with it; the thing that made it practical was writing videogames and graphics demos, and the thing that made it interesting was calculus. For many people, it's the last math they're taught, they don't get the applications, and it's no wonder they don't hunger for more.


sin and cos are the natural basis of solutions of y'' + y = 0. How do you define them?
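That ODE characterization can be computed from directly. A sketch (Python; the function name is mine): integrating y'' + y = 0 with y(0) = 0, y'(0) = 1 numerically reproduces sin without any triangles:

```python
import math

# Integrate y'' + y = 0 with y(0) = 0, y'(0) = 1 as the first-order
# system y' = v, v' = -y, using a classical RK4 step.
def solve(t_end, n=10_000):
    y, v = 0.0, 1.0
    h = t_end / n
    for _ in range(n):
        k1y, k1v = v, -y
        k2y, k2v = v + h / 2 * k1v, -(y + h / 2 * k1y)
        k3y, k3v = v + h / 2 * k2v, -(y + h / 2 * k2y)
        k4y, k4v = v + h * k3v, -(y + h * k3y)
        y += h / 6 * (k1y + 2 * k2y + 2 * k3y + k4y)
        v += h / 6 * (k1v + 2 * k2v + 2 * k3v + k4v)
    return y

print(solve(math.pi / 2))  # ≈ 1.0, i.e. sin(pi/2)
```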


Ratios in a right triangle perhaps?


How did you get that information? How did you, in a non-magical way, go from information about an angle to information about a ratio?

If students don't know this information, then perhaps they are studying applications. So, what applications are students taught in typical trigonometry texts? Periodic behavior perhaps? Like sound? Only perhaps a brief blurb in the text that application is even possible. Perhaps they look at something about an inclined plane. It is unlikely that they will touch projectiles.

It appears that trigonometry is there to give students some sense of mild comfort for future work in physics or engineering. This makes me think, "Why not statistics instead?"


> How did you, in a non-magical way, go from information about an angle to information about a ratio?

By having a right triangle?

The rest of your post seems to show that you want trig to be about periodic behavior, when it really is about triangles. That's what trigonometry means - measuring triangles.

Yes, trig has applications to periodic behavior, projectiles, differential equations, inclined planes, and all kinds of other stuff. But the point of a trig class is not to teach the applications. The point is to teach the tools, and maybe touch on the applications.


The problem is this. Using compass and ruler constructions there is a set of angles you can construct, and you can calculate sin and cos for those angles. You can even write the values for those out explicitly. However no part of this construction sheds light on how to find sin and cos for angles that you don't know how to construct. Or even gives good intuition that no matter how you do it, you can define it in a way that makes sense for all angles.

In fact we draw a picture, people look at it, and their intuition tells them that things will work out. Very few students will notice the logical gaps.

But to close the logical gaps, you need to start with Calculus first, and then derive trig formulas from that.

(Yes, I'm aware of the history here. Euclid presented trig reasonably rigorously a very long time before Calculus. Newton invented Calculus in the 1600s, and then used it as a heuristic to figure out answers that he then rederived using trig in The Principia. Leibniz reinvented Calculus in part based on inspiration from Newton's work. None of this was made formally correct until the late 1800s.)

(I have no opinion on pedagogical arguments about which is best to present first. I believe that we present trig first as a holdover from a curriculum where The Elements was the standard textbook until very recently.)


Well... you can use the half-angle formulas and the angle addition formulas to calculate sin and cos for angles that are arbitrarily close to the ones that you want. Add to that the idea that sin and cos must be continuous (I consider that intuitively obvious from a unit circle, but I don't know how to make that argument rigorous), and you can start to interpolate. You can in fact use these methods to calculate sin and cos for an arbitrary angle to any desired degree of precision... if you have the patience. It will be shorter to use the series derived from calculus, I'll admit.


I do not believe that there is an argument for continuity without starting with Calculus. Certainly starting from ruler and compass constructions it is not obvious.

That said, if you have enough Calculus to define how to measure the arclength of a segment of the circle, you can quickly prove that sin and cos in radians exist, have a nice power series, and so on.

It is like x^y with x positive. We can manually define it for every rational y. But the easiest way to get a rigorous and straightforward definition is to prove the algebraic properties of the integral of 1/x, use that to define the logarithm, define its inverse function to be the exponential, prove its algebraic properties, and then define x^y as e^(y*log(x)). And it all just works.
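That construction is easy to sketch numerically (Python; the function names are mine, and crude midpoint sums plus bisection stand in for the actual proofs):

```python
import math

def log_int(x, n=20_000):
    # log(x) := integral of 1/t from 1 to x  (midpoint Riemann sum, x > 0)
    dt = (x - 1) / n
    return sum(dt / (1 + (i + 0.5) * dt) for i in range(n))

def exp_inv(y):
    # exp(y) := the inverse of log, found by bisection
    lo, hi = 1e-9, 1e9
    for _ in range(60):
        mid = math.sqrt(lo * hi)  # geometric midpoint suits log's scale
        if log_int(mid) < y:
            lo = mid
        else:
            hi = mid
    return lo

def power(x, y):
    # x**y := exp(y * log(x)) for x > 0
    return exp_inv(y * log_int(x))

print(power(2.0, 0.5))  # ≈ 1.41421, i.e. sqrt(2)
```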


It's easy to show more or less directly (by comparing arclength to straight-line length), and certainly without calculus, that the absolute difference between sin(x + delta) and sin(x) is at most |delta|.


You're right. And ditto for cos(x+delta) vs cos(x).

Of course that assumes that arclength is well-defined. The standard approach to which is, of course, Calculus.


Replying really late, just in case anybody reads this. (I went on vacation, and this occurred to me then.)

If you don't have calculus, you don't have anything like a delta-epsilon proof of continuity. But without calculus, you also don't know that you need it. So you just assume (correctly) that you can interpolate, and it works just like you expect, and life goes on.


There's a topological direction... but I have to agree that calculus is much more conceptually immediate to humans anyway.

But even a disembodied being of pure reason might eventually discover continuity via logic->topology.


As young students, we are taught that these functions are ratios of sides in right triangles. It is fairly intuitive for them that, provided they can draw a right triangle, they just need to measure the sides. And it is easy to accept that someone computed sines and cosines for a lot of angles and put them in a table. My point being that we didn't need calculus (in the past) to compute trigonometric functions, and I don't see how it is a burden for students to be introduced to those functions without the calculus definition.

And the reality is that the definition of sine as a ratio of the catheti and the hypotenuse is a rigorous definition of the function. Strictly, this sine is different from the sine of calculus. The first, the sine from Euclidean geometry, assigns a real to a pair of rays, while the calculus sine is a function from the real numbers to the reals. And it does take some work to link them formally.


What kind of pedagogical or pragmatic relevance do you see trig as a building block for? I would answer that question by saying that it most likely comes up again either in physics or engineering contexts, or in a standardized exam like MCAT. And only in the sense of familiarity with the unit circle and trig functions.

What other foundation or learning pathway do you see trig serving as? Somebody else mentioned that trig serves use by teaching students that calculus has rich applications. So then I question, what kind of applications are students learning in trig? And if students are to learn rich examples of calculus applications, then why not statistics, which is also relevant to the bio / social sciences? Also, couldn't we mash trig inside calculus?


It seems to me that if I know trig, I can use it to solve the set of geometric problems to which it is applicable, without having been taught any special applications of trig. That's valuable in itself.

Then I take physics, and I find a whole bunch of other applications. I take calculus, and I find a bunch more uses. I take mechanics, and I find a bunch more. But it is not the job of trig to teach me those applications (though hints would be useful). It's not trig's job to teach me physics - that's a job for physics. But I need trig as a foundation.

I'm not sure that I answered your question, though...


But that's my point: it's not really about triangles; that's just a rather mundane application. Trig is really about complex exponentials that solve f'' + f = 0.


It is about triangles. The idea that sine is the solution of this ODE for f(0)=0 and f'(0)=1 is quite modern.

I would say a course on trigonometry usually covers (my experience):

- the trigonometric functions, and their exact values for the angles 30, 45, 60, 90, ... degrees
- formulas for the sine and cosine of the sum and difference of two angles
- formulas for the double and the half angle
- the law of sines and the law of cosines
- lots of relations derived from the Pythagorean theorem (sin^2 + cos^2 = 1)
- how to solve trigonometric equations

With all this, you are equipped to completely determine a triangle, knowing some of its angles and the lengths of some of its sides. As an application, I was taught how to measure heights and distances, provided you can measure angles.
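That heights-and-distances application fits in a few lines (Python; the numbers are made up):

```python
import math

# Measuring a height you cannot reach: stand d metres from the base and
# measure the angle of elevation a to the top; then tan(a) = h / d.
d = 50.0                 # distance from the base (made-up number)
a = math.radians(30)     # measured angle of elevation
h = d * math.tan(a)
print(h)                 # ≈ 28.87 metres
```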

Thus, without trigonometry, it would be fairly hard to take a course on analytic geometry.

Now, how would the course be enhanced by introducing sine as the solution of an ODE?


I think I am missing something, because I am unable to see why it is a huge burden to introduce sine and cosine without their rigorous definition. At what age are students taught trigonometry? And what does a course on trigonometry cover? What do you think they would be able to do without it?

When we were introduced to the sine and cosine functions, we were already familiar with Thales' theorem, so we could show that this ratio was a constant.

I am quite sure that historically, too, sine and cosine predate the more formal constructions of those functions, be it as a series, as the solution of an ODE, or as the inverse of arcsin (with arcsin defined as an integral)...


I think I wasn't clear in saying that I believe the current pedagogical value of trigonometry is in giving students a brief familiarity with the trig functions when they see it again in the context of physics or engineering. Or standardized testing. I think those are the likely scenarios where students are going to be seeing relevance in trigonometry.

What other foundation or learning pathway do you see trigonometry serving as? Somebody else mentioned that it gives students a sense of applications, so they know that calculus is not for nothing. So then I question: what applications? And I pose, how about statistics?


It may be because of your math education but I was introduced to trigonometry in junior high in China. I had, and my classmates had, no trouble understanding them as functions of angles coming from ratios. It is the glue that binds circles and triangles and squares and ... . By high school analytic geometry greatly expanded their scope and use. This is the problem I saw in American high school when I moved to US: shallow introduction to mathematical topics made them vapid and jejune. Ancient Greeks were enthralled by trigonometry, ancient Egyptians built Pyramids and ancient Chinese built great dams with trigonometry. Calling it practically useless and pedagogically only useful as a prep for calculus is going too far.


"So, what applications are students taught in typical trigonometric texts?"

"If a pyramid is 250 cubits high and the side of its base 360 cubits long, what is its seked?" (http://en.m.wikipedia.org/wiki/Rhind_Mathematical_Papyrus#Py...)

Well, maybe not that typical, but it is an example without any periodicity in sight.


My introduction was through model rocketry. Somewhere along the line, Al-Biruni's method for finding the radius of the earth was brought up. That was in the early '70s, in what would be "junior high" in the US (elementary school in my part of Canada). It was always about right triangles, not periodic functions, at the beginning, which makes a whole lot of sense - trigonometry was both useful and used for a whole lot of years before calculus was invented. And like logarithms in the pre-scientific-calculator days, there was a point where one turned to tables for practical reasons without thinking of the table values as "magical" - we were taught how to calculate intermediate values to the limits of practicality. Is there any practical sense (a sense that would be useful for people who would be entering the trades track) in being able to calculate much more accurately than you can measure angles?


coordinates of rotaty-thing


Infinite sums are as clean as anything.


Trig is the math of triangles and circles — it can be fully understood in geometric terms. Calculus requires far more foundation.

The linked article is interesting but the definition of differentiability looks wrong to me — maybe my brain needs more coffee but it looks like only linear functions are differentiable as defined.

I learned my calculus the pure math way — axioms and analysis. Epsilon delta arguments make more sense to me than "Ball". The fact that to clarify the examples the author resorts to epsilon delta description suggests to me that this approach is clever rather than clear.


You can find your epsilons and deltas in the definition of an open ball. They are just two different ways of writing the same idea. For example, in Definition 2.1, the 'h' in the open ball F is your epsilon and the 'h' in B is the delta. In an epsilon-delta proof, you show that for all |x - c| < delta, |f(x) - f(c)| < epsilon. The open ball B is just the set of all x satisfying |x - c| < delta and the open ball F is the set of all f(x) satisfying |f(x) - f(c)| < epsilon.

Any epsilon or delta that you choose implies a set of numbers satisfying those conditions. Those sets are open balls. By using them, you don't have to say things like "all 'x' such that ...". Which method you prefer probably depends on what you're more familiar with and how you tend to think. Open balls can be easier to visualize, if that's how you think.

Note that not just any ball will do. Closed balls are like open balls but also include their boundary; that is, they use a less-than-or-equal-to instead of a less-than. Which you use can make a big difference. An open ball on the real number line is just an open interval, an interval excluding its endpoints. It's easy to generalize: an open ball in a Cartesian plane is a disc excluding its border. In three dimensions, it's a sphere excluding its surface... and that's why it's called a ball.
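The dictionary between the two styles fits in one line; with B(c, δ) the open ball of radius δ around c, continuity at c reads:

```latex
B(c,\delta) = \{\, x : |x - c| < \delta \,\},
\qquad
f \text{ continuous at } c
\iff
\forall \varepsilon > 0\;\exists \delta > 0:\;
f\bigl(B(c,\delta)\bigr) \subseteq B\bigl(f(c),\varepsilon\bigr).
```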


I understand what an open ball is (and surely if you read my original post carefully this would be clear), I just don't think it actually makes the discussion clearer. In particular, when explaining something to people, starting with creating unfamiliar concepts is a Bad Idea if those concepts don't have a significant payoff.

In particular, because this is not a discussion about arbitrary spaces, the use of the word "Ball" is counter-intuitive. (But I admit I am probably biased by my own experience.)


"(Side rant: why do they teach trig before calculus in high school? That's completely backwards. Trig is a bunch of arbitrary formulas if you don't have the calculus behind them.)"

I'm fairly convinced the answer is that That Is What The Curriculum Does, and Do Not Question It.

None of the subsequent 9 hours of debate since you posted this convinces me otherwise. It's not possible that there's a better way, and we can marshal all sorts of rationalizations about how this is the best way, and the possibility that it might not be simply can not be conceived. If you think that there might be a better way, you just must not be aware of how what we already have is perfect.

Our math curriculum could be significantly improved in many ways, except that this is the general societal attitude towards it that I see, and it turns out that "Fix the math curriculum" becomes an unsolvable problem when you add the constraint "But don't make any changes to it of any kind, not even to merely reorder a few topics". Feh.


> Side rant: why do they teach trig before calculus in high school? That's completely backwards. Trig is a bunch of arbitrary formulas if you don't have the calculus behind them.

Math is hardly the only place where arbitrary facts/formulas are taught, and people taught to apply them, before learning the underlying math/reasoning behind the arbitrary facts/formulas.

And trig is useful in lots of places in the science curriculum without the backing calculus, so teaching it in the math curriculum early to support the broader curriculum makes sense from that perspective.


> An integral is the inner product of a function and a suitably-dimensioned unit.

Does that mean that ∫f(x)dx is f.(dx, dx, …) = f(x_0)·dx + f(x_1)·dx + f(x_2)·dx + … for all x in the domain?


Integral[f(x)g(x)dx] defines an inner product on a space of functions (glossing over exactly what functions) on an interval. So Integral[f(x) dx] is the inner product (f, 1), where "1" is the constant function g(x) = 1 on the interval.


So ∫f(x)g(x)dx = (f.g)·dx?


No, f·g = ∫f(x)g(x)dx.


But don't you need to account for the width dx of the slices you sum up? I.e. wouldn't f.g = f(x_0)g(x_0) + f(x_1)g(x_1) + … over a real interval blow up?


The point is that f.g (or, as a mathematician or physicist might write it, ⟨f, g⟩ or ⟨f|g⟩) has no independent meaning, but must be defined; and one way to define it (for `L^2` functions, the only one compatible with the `L^2` norm) is as stephencanon did at https://news.ycombinator.com/item?id=9620263 .


Well, it's not just any arbitrary definition, it's the projection of f onto g. Intuitively, sum(f_i * g_i).


> Well, it's not just any arbitrary definition, it's the projection of f onto g.

'Projection' also has no intuitive (EDIT: I meant 'intrinsic') meaning; "inner product" is the same structure as "projection + norm" (subject to appropriate axioms). Anyway, I didn't mean to claim that the definition was arbitrary, but rather that there was no way to argue against it: definitions can't be wrong (at worst, they can be infelicitous, uninteresting, or uninhabited).

> Intuitively, sum(f_i * g_i).

I think rndn (https://news.ycombinator.com/item?id=9621422 )'s objection applies to this intuition: to get a reasonable approximation of the integral, you need a lot of sample points, and any sum that doesn't take into account the spacing of those sample points has a good chance of diverging. (Consider f = g = 1, so that the sum is just a count of the number of sample points!)

Once you write sum(f(x_i) * g(x_i) * (dx)_i), of course, this becomes just notation for (a sequence of) Riemann sums, whose limit is by definition the integral (for continuous functions).
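That Riemann-sum inner product can be checked numerically; a sketch (Python; `inner` is my name for the helper). The dx weight is exactly what keeps the sum from growing with the number of sample points:

```python
import math

# Riemann-sum inner product <f, g> = integral of f(x)g(x) over [a, b],
# via the midpoint rule. Without the dx factor the sum would diverge
# as n grows; with it, the sum converges to the integral.
def inner(f, g, a, b, n=100_000):
    dx = (b - a) / n
    return sum(f(a + (i + 0.5) * dx) * g(a + (i + 0.5) * dx)
               for i in range(n)) * dx

# <sin, sin> over a full period is pi; <sin, cos> is 0 (orthogonality).
print(inner(math.sin, math.sin, 0, 2 * math.pi))  # ≈ 3.14159...
print(inner(math.sin, math.cos, 0, 2 * math.pi))  # ≈ 0.0
```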


Sampling has nothing to do with this. The inner product of f(x) and g(x) simply is the integral with respect to x of their pointwise scalar products (I suppose in a very broad sense "pointwise" could be seen as analogous to sampling, as could "with respect to x", but the Calculus Gods will smite any who think of dx as a sample of x). The "ah ha" moment was in seeing inner product as the fundamental operation and integration as derived from it.


> I suppose in a very broad sense "pointwise" could be seen as analogous to sampling

Indeed, I don't understand how it could be otherwise. To multiply functions pointwise, you need to know their values at points. It seems to me that 'sampling' is a very good word to describe the process of evaluating a function at a lot of points.

> as could "with respect to x", but the Calculus Gods will smite any who think of dx as a sample of x

Indeed not! It is the spacing between sample points. That is, the `dx` in an integral literally stands for the "ghost of [the] departed quantity" `x_{i + 1} - x_i` (and, in an infinitesimal approach to calculus, it doesn't just stand for but literally is such a difference).


> Does that mean that ∫f(x)dx is f.(dx, dx, …) = f(x_0)·dx + f(x_1)·dx + f(x_2)·dx + … for all x in the domain?

I think that that is what is meant, except that it's not clear what you mean by "for all `x` in the domain"—`x` occurs bound on both sides. Of course this interpretation requires that one understand it as a philosophy rather than a calculation; for example, as your explicit version points out, one really needs tag points spaced `dx` apart to define the inner product, and (absent infinitesimals) the result will be only an approximation to the true integral.

stephencanon (https://news.ycombinator.com/item?id=9620263) gives another interpretation that is unimpeachably mathematically correct, but (a) it is so nearly circular that I think it must be not what bandrami (https://news.ycombinator.com/item?id=9616961) meant, and (b) (perhaps more importantly) the unit there is built into the definition of the inner product itself, rather than being part of the second "inner multiplicand".


No, stephencanon's comment is what I mean, and this may be an example of what my prof meant when he said that my students would not then be able to go on to teach calculus.

By that point (this course was "differential operators", 700-level stuff) we all had a decent intuition of what inner and outer products are. The prof's comment was that looking at f(x) as an infinite-dimensioned vector, there is a unit *-cube g(x)=1 of compatible dimensions that can produce an inner product f|g (I'm not going to hunt through my character map for the dot or the integral sign). That inner product is the same as Integral(f(x), dx). This was in analogy to the differential operator being the exterior ("wedge") product of a function and its field.

The real point of the definition was relating y' and Integral(y) to div y and grad y: in y' you're going from vectors to tensors, and in Integral(y) you're going from vectors to scalars. Or, Integral(y) is a projection of y on some unit cube, and y' is finding the function of which y is the projection on an appropriate unit cube.


Does that mean that ∫f(x)dx is f.(dx, dx, …) = f(x_0)·dx + f(x_1)·dx + f(x_2)·dx + … for all x in the domain?

No, though it is true that the integral with respect to x, as an operation, is the limit of that summation as dx approaches 0. But the point wasn't about any particular numerical or symbolic manipulation we could do (this was a graduate-level calculus class, after all; we all knew how to actually integrate things).

Generally, when you learn inner products of functions, you learn the definition

f·g = ∫(f(x)g(x))dx where the concatenation there of f(x) and g(x) represents scalar multiplication. We all know how integration works, so this becomes how we define inner products.

My professor's point was to reverse the primacy there. We have a sense from vector operations of what inner products are; that can inform our intuition of what an integral is. That is, rather than saying "I know how to do an integral so I can now take an inner product of two functions", say "I have an intuition of what an inner product is, that is, the projection of one vector onto another to form a scalar, and that should inform my intuition of what an integral is".

The larger motivation for the whole talk was introducing Clifford algebras and the symmetry between dot product generalized to inner product opposed by outer product on the one hand, and cross product generalized to wedge product opposed by interior product on the other hand.

And, just to finish the mathjerking, the whole point of the course was to get to:

∫_∂Ω ω = ∫_Ω dω

ie, the most general case of Stokes' theorem. But that takes a lot of sussing out of what the differential operator d actually is.


> mathjerking

Well, a proper math education is something that many people can only dream of.


Maybe to teach people how to use formulas they didn't make up themselves. This is the part of math that most of us do after school.


As much as I respect djb, the title is a huge misnomer. If you're a mathematician you want analysis and topology. You want definitions of continuity and differentiability that generalize nicely to arbitrary dimension, to manifolds, to Lie groups, to whatever. You want to marvel at the connections between certain special integrals and infinite sums, and you want to see the full construction of the real numbers for its own sake. You want a list of equivalent definitions of differentiability so you can get a better intuition and use whichever one is the nicest for what you're trying to do.

In a very strong sense, writing a document that "focuses purely on calculus" is antithetical to mathematics.


Hm, I always thought Spivak was calculus for mathematicians.


Apostol, I would have said. That's what MIT's 18.014 Calculus with Theory uses, for example.

http://ocw.mit.edu/courses/mathematics/18-014-calculus-with-...


I am amazed to see that the limit is defined via continuity. It is interesting.


99. Expository notes

Common practice in calculus books is to define continuity using limits. I define limits using continuity; continuity is a simpler concept.
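One standard way to carry this out (a sketch; the paper's exact wording may differ): define the limit through the patched function being continuous:

```latex
\lim_{x \to c} f(x) = L
\quad\Longleftrightarrow\quad
g(x) := \begin{cases} f(x) & x \neq c \\ L & x = c \end{cases}
\text{ is continuous at } c.
```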


Are they equivalent? Or do both systems work smoothly?


Continuity in the topological sense implies continuity by limits. For topological spaces with a countable (local) basis, the converse is also true.

So, in general it's not equivalent. For the reals etc., it is.


They are equivalent if you consider limits of nets rather than limits of sequences.


They are logically equivalent for the reals. Search the web for [limits "in terms of continuity"].


How difficult would it be to write this in a formal system? How much space would it take?


Since real analysis is arguably the core of mathematics and of applications of mathematics, it has been formalised in many ways. See [1] for an overview of the state of the art.

[1] S. Boldo, C. Lelay, G. Melquiond, Formalization of Real Analysis: A Survey of Proof Assistants and Libraries, https://hal.inria.fr/hal-00806920v1/document


Even defining the real numbers is already quite involved; for example, in Isabelle/HOL they are introduced as a quotient over Cauchy sequences:

http://isabelle.in.tum.de/dist/library/HOL/HOL/Real.html

This is the classical way (and similar to HOL Light or HOL4). AFAIK in Coq the standard way is to introduce the reals axiomatically.

The bottom theories in http://isabelle.in.tum.de/dist/library/HOL/HOL/ are all about classical real analysis, based on topology and real-normed vector spaces.


I wonder if the coinductive approach to calculus by Pavlovic et al [1, 2] has been considered as a basis for formalisation.

I secretly hope that I will one day get to teach calculus to computer scientists. In that case I would introduce the reals as a coinductive data type, and the usual operations (such as differentiation, integration, and solving differential equations) as stream operations. That should appeal to programmers, although it would be weird for conventional mathematicians.

[1] D. Pavlović, M. H. Escardó, Calculus in coinductive form.

[2] D. Pavlović, V. Pratt, The continuum as a final coalgebra.


I haven't read the paper, but if you define the reals as constructive Cauchy sequences you'd be forced into a coinductive definition. Is that equivalent?


I'm not familiar with constructive Cauchy sequences. The usual way of using Cauchy sequences is to quotient them by the ideal of Cauchy sequences that converge to 0.


In constructivism, once you have a Cauchy sequence of rationals, that represents a real. However given two such Cauchy sequences, you cannot always prove whether one is less than, equal to, or greater than the other.

This has some interesting consequences. For example, only continuous functions can be functions in constructivism. (If you try to construct a function that is discontinuous at a point, there are Cauchy sequences you can give it for which you cannot construct the Cauchy sequence coming out. So it is not a well-defined function.)
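A small sketch of this representation (Python; the encoding, a Cauchy sequence with an explicit modulus of convergence, is one common constructive choice, and the names are mine):

```python
from fractions import Fraction

# A constructive real as a function n -> Fraction whose value is within
# 2^-n of the number it represents (Cauchy sequence with explicit modulus).

def add(x, y):
    # request extra precision from both arguments so the sum meets its
    # bound: 2^-(n+1) + 2^-(n+1) = 2^-n
    return lambda n: x(n + 1) + y(n + 1)

def sqrt2(n):
    # approximate sqrt(2) to within 2^-n by bisection over the rationals
    lo, hi = Fraction(1), Fraction(2)
    while hi - lo > Fraction(1, 2 ** n):
        mid = (lo + hi) / 2
        if mid * mid < 2:
            lo = mid
        else:
            hi = mid
    return lo

two_sqrt2 = add(sqrt2, sqrt2)
print(float(two_sqrt2(20)))  # ≈ 2.82843 = 2*sqrt(2)
```

Note that comparing two such reals for equality requires deciding whether their difference is 0, which is exactly what constructivism says you cannot always do.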


I've seen an impl somewhere where the (int => rat) bit from the Isabelle/HOL design was (stream rat). That's about the best I have at the moment.


BTW, in what sense is that coinductive? For what functor is it a final coalgebra?


Streams are coinductive, thus the reals are the type of final coalgebras of (rat * -) which satisfy the Cauchy condition. So something like (Sigma (nu (rat * -)) isCauchy).


Don't you need to quotient after you've got Cauchy sequences? For example, (0,0,0,0,...) represents the same real number (as a Cauchy sequence) as (1,0,0,0,0,0,...), but the two are different as streams.


Yeah, depends on how you're encoding it. Quotient types are to my knowledge still tricky to implement, but you can try to make a setoid and structure this type with some kind of equality. Of course, now we run directly into totality and semi-decidability.


The Coq standard library formalizes real analysis. Have a look at the "Reals" section:

https://coq.inria.fr/library/


You would need to get to the reals, which already takes a fair amount of time and includes a lot of machinery. Depending on how you constructed the reals, you might be able to reuse some concepts. So, it depends on where you start and which path you take.

Also, arguing by reference to the choice of appropriate values is incomplete argumentation and so it won't be accepted by a formal system. You'd have to fill these holes.

I'd suggest to start by evaluating the existing formalized constructions of the reals.

I think it is not too hard.


Definitely stashing this away in my time machine for when I travel back to the 17th century. Make both Leibniz and Newton cry...


If you're taking things like that back in time, make sure to translate them to French. Also, learn French. Not knowing French in the 17th century is like not knowing English today.


Also, learn French in Canada, not France prior to your trip back in time. I've been told that Canadians speak French like someone from the 1600s, and those visitors (in 2015) from France find it archaic to the ear.


Did Newton speak French? I'd guess not (or not very well). His principal scientific works were written in Latin, which was the academic lingua franca of the day.


I'm learning French. But where is the time machine?


I remember the "concept" part of my high school calculus class. The teacher had us recite out loud the epsilon-delta definition of continuity until everybody memorized it so they could regurgitate it on the test.

I always wondered why analogies and pictures weren't used more often:

examples:

A 100m sprint is a continuous function of time (f(t) = distance from starting line) because sprinters can't teleport. In fact it is uniformly continuous because people have a maximum speed.

Beating Usain Bolt's record is a discontinuous function of completion time, because f(world_record + epsilon) = 0 while f(world_record - epsilon) = 1.

Fundamental theorem of calculus: If you want to know how fast a guy is running at time t, look at how much ground he covered in 1 second. To get more and more accurate, look at how far he traveled in 0.5 seconds and so on...
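That last intuition can be run directly (Python; the position function is a toy model, not real sprint data):

```python
# Toy model (made-up): a runner whose position is 5*t^2 metres at time t.
def position(t):
    return 5 * t * t

def average_speed(pos, t, dt):
    # ground covered in a window of length dt, divided by dt
    return (pos(t + dt) - pos(t)) / dt

# Shrinking the window homes in on the instantaneous speed at t = 2,
# which is 20 m/s for this model.
for dt in (1.0, 0.5, 0.1, 0.01):
    print(dt, average_speed(position, 2.0, dt))  # equals 20 + 5*dt
```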


It was very DJB of him to use the gauge integral, which nobody else does.


I was lucky enough that this is how I was taught calculus in high-school. It definitely wasn't easy at the time, but I feel for all the students for whom calculus is taught as a mindless set of algebraic rules.

This focus on calculating derivatives as opposed to actually understanding the concept and why the calculations work that way is, I think, why so many struggle with it.


Definition 5.1/5.2 is interesting. It defines the derivative at a point c, not the derivative function of f. Note that f1(x) is not equal to f'(x) for all x, but f1(c) = f'(c).


Yeah. The paper says: "The derivative of f at c is written f'(c). The derivative of f, written f', is the function c -> f'(c)."

So the derivative (f') is the result of substituting c for x in f1. For example, if f1 = (x -> x + c) then we would have f' = (c -> c + c) = (c -> 2c).
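For the example in the parent (which implicitly takes f(x) = x², since f1(x) = x + c there), the factorization can be checked directly; a sketch:

```python
# Check, for f(x) = x^2, the factorization f(x) - f(c) = f1(x) * (x - c)
# with f1(x) = x + c; f1 is continuous, so f'(c) = f1(c) = 2c.
def f(x):
    return x * x

def f1(x, c):
    return x + c

c = 3.0
for x in (2.9, 2.99, 3.01, 3.1):
    assert abs((f(x) - f(c)) - f1(x, c) * (x - c)) < 1e-9

print(f1(c, c))  # the derivative at c = 3: 6.0
```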


Try to get the derivative of sin(x) from his approach.



One thing that bugs me about this, is that a lot of theorems seem to be formulated backwards. To give an example:

Theorem 9.1. Let f be a continuous real-valued function. Let y be a real number. Let b ≤ c be real numbers with f(b) ≤ y ≤ f(c). Then f(x) = y for some x in [b, c].

Why is this well-formed? Once you say "Let y be a real number", I'm free to pick any real number, which means that there might not be a b and c such that f(b) ≤ y ≤ f(c). Now, I obviously understand what is said, but shouldn't this be formulated more like:

Let f be a continuous real-valued function. Let b ≤ c be real numbers from the domain of f. Let y be a real number in the closed interval bounded by f(b) and f(c) ([f(b), f(c)] or [f(c), f(b)], depending on whether f(b) ≤ f(c) or not). Then there exists an x in [b, c] such, that f(x) = y.

The way this and other theorems, definitions, etc. are formulated in the article bugs me, because I must go back and re-qualify variables based on information deduced from things introduced, after the variable in question was introduced.


The "let" should really be read as "for all". If you do so, your objection disappears, since quantifying over the empty set makes the statement vacuously true.


You also need to translate "with" as a forall (pi binder). Then you'll merely note that the formula is unsatisfiable without a proof that y is within the proper bounds.


Oh good, so I'm not the only one whose mind tries to translate mathematical prose into the notation of dependent type theory!


Sometimes your problem really does just call for a hammer :)



