Make: Calculus PDF


Santi Dubrova

Aug 4, 2024, 11:49:37 PM
to desqmesecon
Make: Calculus imagines how Newton might have used 3D-printed models, construction toys, programming, craft materials, and an Arduino or two to teach calculus concepts in an intuitive way. The book relies on as little algebra as possible while retaining enough to allow comparison with a traditional curriculum.

This book is not a traditional Calculus I textbook. Rather, it will take the reader on a tour of key concepts in calculus that lend themselves to hands-on projects. This book also defines terms and common symbols for self-learners.


Then I add a typing rule that says: when you see an assignment such as the one above, you type check the right-hand side in a temporary type context in which the declared type (the one after the :) is associated with the identifier, and then you make sure the declared type equals the abstraction's type.
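Written as an inference rule, what I have in mind is something like this (using $\Gamma$ for the ambient context):

$$\frac{\Gamma,\, x : T \;\vdash\; M : T}{\Gamma \;\vdash\; (x : T = M)\ \mathrm{ok}}$$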


And finally I add another rule that lets me put a list of assignments on top of a lambda term (the one I would actually evaluate), such that all these assignments are added to the global scope before the term is evaluated.
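To make this concrete, here is a rough OCaml sketch of how I picture the two rules working together; the type and term representations are just placeholders:

(* Placeholder representations for types and terms of the simply
   typed lambda calculus, plus top-level definitions. *)
type typ = TInt | TArrow of typ * typ

type term =
  | Var of string
  | IntLit of int
  | Lam of string * typ * term
  | App of term * term

type def = { name : string; declared : typ; body : term }

(* Standard simply typed inference. *)
let rec infer (ctx : (string * typ) list) (t : term) : typ =
  match t with
  | Var x -> List.assoc x ctx
  | IntLit _ -> TInt
  | Lam (x, ty, body) -> TArrow (ty, infer ((x, ty) :: ctx) body)
  | App (f, a) ->
    (match infer ctx f with
     | TArrow (dom, cod) when dom = infer ctx a -> cod
     | _ -> failwith "ill-typed application")

(* Rule 1: check a definition in a temporary context that already
   binds the identifier to its declared type (so it may refer to
   itself), then demand the body's type equals the declared type. *)
let check_def (ctx : (string * typ) list) (d : def) : unit =
  let tmp = (d.name, d.declared) :: ctx in
  if infer tmp d.body <> d.declared then
    failwith ("type mismatch in definition of " ^ d.name)

(* Rule 2: a program is a list of definitions followed by the term
   to evaluate; every definition is added to the global scope before
   that term is checked (and, later, evaluated). *)
let check_program (defs : def list) (main : term) : typ =
  let global =
    List.fold_left
      (fun ctx d -> check_def ctx d; (d.name, d.declared) :: ctx)
      [] defs
  in
  infer global main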


So my questions are: is this really Turing complete? And am I missing something when I say everything would be "well typed" (for instance, could I define the Y combinator in a way I haven't yet realized, or is there some other gotcha in this type system)?


If all you have is this definition rule (which doesn't perform any computation), it won't help you. You also need a way to use the definition, as you did in your code example: there you implicitly used another language construct, in which a definition is available on the next line.


From a typing perspective, it doesn't matter whether $T$ is a function type or whether $M$ is an abstraction. Those restrictions would only ensure that you can find a nice execution semantics (the general rule as phrased above allows recursive definitions like let x : int = x + 1 in x, which cannot be usefully evaluated, but you can always define such cases as looping forever).
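Written as an inference rule, the general (recursion-permitting) form of such a let is something like:

$$\frac{\Gamma,\, x : T \;\vdash\; M : T \qquad \Gamma,\, x : T \;\vdash\; N : U}{\Gamma \;\vdash\; (\mathrm{let}\ x : T = M\ \mathrm{in}\ N) : U}$$

With $x$ in scope while typing $M$, the definition let x : int = x + 1 in x is accepted even though it can only loop.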


You can't make a polymorphic fixpoint combinator. This restriction comes with the simply typed lambda calculus. If you added polymorphism (which in itself does not allow recursive functions, for example System F is normalizing), you could get a polymorphic fixpoint combinator with a suitably generalized recursive let construct.
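For instance, in an ML-style language the recursive let gives you a polymorphic fixpoint directly; a minimal sketch (eta-expanded so it also behaves under call-by-value):

(* A recursive let yields a polymorphic fixpoint combinator; the
   extra argument keeps it from looping immediately under strict
   evaluation. *)
let rec fix : (('a -> 'b) -> 'a -> 'b) -> 'a -> 'b =
  fun f x -> f (fix f) x

(* Example: factorial without writing the recursion by hand. *)
let fact = fix (fun self n -> if n = 0 then 1 else n * self (n - 1))
let () = assert (fact 5 = 120)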


Author/educators Joan Horvath and Rich Cameron join me to talk about their new book, Make: Calculus, which takes a different approach to teaching and learning calculus. This book, like their previous book, Make: Geometry, relies on visual models built with Legos or 3D printers to teach calculus concepts that can be hard to grasp from equations alone.


In the first part of the interview, we talk about the book and its unique approach to calculus. In the second part, Joan and Rich talk us through several ways to visualize calculus using models. I have provided links to the audio-only (podcast) version, followed by a video of the second part, the demo portion.




This sort of "phobia" started from the very first moment I delved into integrals. Riemann sums seemed to make sense, though for me they were not enough to justify the use of "dx" after the integral sign and the function. After all, you could still do without it in practice (what is the need for writing down the base of these rectangles over and over?). I satisfied myself by thinking it was something merely symbolic, there to remind students what they were doing when they calculated definite integrals, and/or to help them remember with respect to which variable they were integrating (kind of like the reason we sometimes use dy/dx to write a derivative). Or so I thought.
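Concretely, what I had in mind is the usual limit of Riemann sums, where $\Delta x$ really is the width of each rectangle's base and the $dx$ is what remains of it in the notation:

$$\int_a^b f(x)\,dx \;=\; \lim_{n \to \infty} \sum_{i=1}^{n} f(x_i^*)\,\Delta x, \qquad \Delta x = \frac{b-a}{n}, \quad x_i^* \in [x_{i-1}, x_i].$$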


Having now been introduced to differential equations, I'm starting to realize I was completely wrong! I find "dy" and "dx" spread out around equations! How could that be possible if they are just a fancy way of transcribing derivatives and integrals? I imagined they had no meaning outside of those particular contexts (i.e., dy/dx, or to indicate an integration with respect to x or whatever).


EDIT: I don't think my question is a duplicate of Is $\frac{\textrm{d}y}{\textrm{d}x}$ not a ratio?, as that one doesn't address their use in integrals and in differential equations. Regardless of whether dy/dx is a ratio or not, what I'm really asking is why we use dx and dy separately for integration and differential equations. Even if they are numbers, if they tend to 0, then dx (or dy) * whatever = 0. Am I wrong in thinking that way?


But note: even though a rigorous approach might avoid using differentials entirely, there is no need to throw "differential intuition" out the window, because it makes perfect sense if we just think of $dx$ and $dy$ as being extremely tiny but finite numbers, and if we replace $=$ with $\approx$ in the equations we derive. Perhaps the word "infinitesimal" could be thought of as meaning "so tiny that the errors in our approximations are utterly negligible". We can plausibly obtain exact equations "in the limit" (if we are careful).

There is something aesthetically appealing about treating $dx$ and $dy$ symmetrically, which can perhaps in some situations give us a feeling that the approach using differentials is the "right" or more beautiful way to do these computations. Compare these two ways of writing an "exact" differential equation:
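For instance, one can either keep $dx$ and $dy$ on an equal footing or single out $x$ as the independent variable:

$$M(x,y)\,dx + N(x,y)\,dy = 0 \qquad \text{versus} \qquad M(x,y) + N(x,y)\,\frac{dy}{dx} = 0.$$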


Additionally, in differential geometry, quantities like $dx$ are defined precisely as "differential forms", and some treatments of calculus (like Hubbard & Hubbard) embrace differential forms at an early stage. But you can understand calculus rigorously without using differential forms.


There is a whole pyramid of mathematical things that we accept without demur these days that were once as suspect as differentials probably still are. What tends to happen is that we first learn how the thing works, and only then do we have to find out what it is or might be.


One way to define the algebra of differential forms $\Omega(M)$ on a smooth manifold $M$ (as explained in John Baez's week287) is as the exterior algebra of the dual of the module of derivations on the algebra $C^\infty(M)$ of smooth functions $M \to \mathbb{R}$. Given that derivations are vector fields, that 1-forms send vector fields to smooth functions, and that some handwaving about area elements suggests k-forms should be built from 1-forms in an anticommutative fashion, I am almost willing to accept this definition as properly motivated.
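Spelled out, the chain of definitions being described is (with duals taken as $C^\infty(M)$-modules):

$$\mathrm{Der}\bigl(C^\infty(M)\bigr) \cong \Gamma(TM), \qquad \Omega^1(M) = \mathrm{Hom}_{C^\infty(M)}\bigl(\mathrm{Der}(C^\infty(M)),\, C^\infty(M)\bigr), \qquad \Omega(M) = \Lambda^{\bullet}\,\Omega^1(M).$$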


Now, the exterior derivative (together with the Hodge star and some fiddling) generalizes the three main operators of multivariable calculus: the divergence, the gradient, and the curl. My intuition about the definitions and properties of these operators comes mostly from basic E&M, and when I think about the special cases of Stokes' theorem for div, grad, and curl, I think about the "physicist's proofs." What I'm not sure how to do, though, is to relate this down-to-earth context with the high-concept algebraic context described above.


Question: How do I see conceptually that differential forms and the exterior derivative, as defined above, naturally have physical interpretations generalizing the "naive" physical interpretations of the divergence, the gradient, and the curl? (By "conceptually" I mean that it is very unsatisfying just to write down the definitions and compute.) And how do I gain physical intuition for the generalized Stokes' theorem?


The first thing to realise is that the div-grad-curl story is inextricably linked to calculus in a three-dimensional euclidean space. This is not surprising if you consider that this stuff used to go by the name of "vector calculus" at a time when a physicist's definition of a vector was "a quantity with both magnitude and direction". Hence the inner product is an essential part of the baggage, as is the three-dimensionality (in the guise of the cross product of vectors).


In three-dimensional euclidean space you have the inner product and the cross product and this allows you to write the de Rham sequence in terms of div, grad and curl as follows:
$$\begin{matrix}
\Omega^0 & \stackrel{d}{\longrightarrow} & \Omega^1 & \stackrel{d}{\longrightarrow} & \Omega^2 & \stackrel{d}{\longrightarrow} & \Omega^3 \\
\uparrow & & \uparrow & & \uparrow & & \uparrow \\
\Omega^0 & \stackrel{\mathrm{grad}}{\longrightarrow} & \mathcal{X} & \stackrel{\mathrm{curl}}{\longrightarrow} & \mathcal{X} & \stackrel{\mathrm{div}}{\longrightarrow} & \Omega^0
\end{matrix}$$
where $\mathcal{X}$ stands for vector fields and the vertical maps are, from left to right, the following isomorphisms:
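namely (up to sign and normalization conventions)

$$f \mapsto f, \qquad X \mapsto X^\flat = \langle X, -\rangle, \qquad X \mapsto \star\, X^\flat, \qquad f \mapsto f\,\mathrm{vol} = f \star 1,$$

where $\flat$ denotes the metric dual of a vector field, $\star$ the Hodge star and $\mathrm{vol}$ the euclidean volume form.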


The beauty of this is that, first of all, the two vector calculus identities $\mathrm{div} \circ \mathrm{curl} = 0$ and $\mathrm{curl} \circ \mathrm{grad} = 0$ are now subsumed simply in $d^2 = 0$, and that whereas div, grad, curl are trapped in three-dimensional euclidean space, the de Rham complex exists in any differentiable manifold without any extra structure. We teach the language of differential forms to our undergraduates in Edinburgh in their third year and this is one way to motivate it.
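Concretely, chasing the diagram with these identifications (and using $\star\star = 1$ on forms in three euclidean dimensions) gives

$$\mathrm{curl}(\mathrm{grad}\, f) \;\longleftrightarrow\; \star\, d\,(df) = 0, \qquad \mathrm{div}(\mathrm{curl}\, X) \;\longleftrightarrow\; \star\, d \star\!\bigl(\star\, d\, X^\flat\bigr) = \star\, d\, d\, X^\flat = 0.$$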


Another answer mentioned Gravitation by Misner, Thorne and Wheeler. Personally I found their treatment of differential forms very confusing when I was a student. I'm happier with the idea of a dual vector space than I am with the "milk crates" they draw to illustrate differential forms. Wald's book on General Relativity had, to my mind, a much nicer treatment of this subject.


There is a book that not many physicists I know of seem to like (except mathematical physicists, of course), but that is a true gem in the eyes of mathematicians: I am referring to V. Arnold's Mathematical Methods of Classical Mechanics.
