> (2) How can we express something like a type with a parameter, and the initialization of a type with a parameter?
Hi Mingli,

> (2) How can we express something like a type with parameter and the initialization of a type with parameter.

I'm not an expert, so don't take my word for it. But let me guess. Haskell is based on simple type theory while set.mm is ZFC, i.e. set theory. They are different beasts. You can embed type theory, of course, but that would not be set.mm anymore, IMHO. But maybe some real expert will say that I'm talking nonsense and this should be easy to do :)
...
This has already appeared in this thread: in AV's code you have ( QFrm ` V ) as the type of all quadratic forms over V. Or rather, that is how it should read: there appears to be an error in the code as posted, which would have tripped the definition checker. Here's a revised version (still not checked, though):
Hi Mingli,
Let me try to build on and complete the other answers, starting from your own definition.
You write:
```metamath
df-qfrm $a |- QFrm = { V --> ( Scalar ` V ) | V e. LVec /\
q e. V --> ( Scalar ` V ) /\ v e. V /\ a e. ( Scalar ` V ) /\
q ` ( a .x. v ) = ( a * a ) .x. v }
$.
```
So your `QFrm` is in the form ` { A --> B | ph } `. Unfortunately, this syntax has not been defined in set.mm. A definition for a "class of something" has to take one of the following forms:
```metamath
cab $a class { x | ph } $.
crab $a class { x e. A | ph } $.
copab $a class { <. x , y >. | ph } $.
coprab $a class { <. <. x , y >. , z >. | ph } $.
```
The first form defines "the class of all `x` fulfilling `ph`",
the second "the set of `x` in `A` fulfilling `ph`"; the third
defines a relation (like '<'), and the last also defines a
relation, but with three components.
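To make the second form concrete, here is a sketch; the token `Evens` and the label `df-evens` are made up for illustration, they are not in set.mm:
```metamath
$( Hypothetical use of the { x e. A | ph } form: the even
   natural numbers, carved out of NN by a condition. $)
df-evens $a |- Evens = { x e. NN | E. n e. NN x = ( 2 x. n ) } $.
```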
Here, you want to define a set of mappings, so the second form is what we need. Something like ` QFrm = { q e. M | ph(q) } `, where `q e. M` would express "q is a mapping", and `ph(q)` would express "q is quadratic".
For "q e. M" to express "q is a mapping", we have ~df-map, and the operation `^m`: the set of all functions that map from ` B ` to ` A ` is written ` ( A ^m B ) `.
Here we want mappings from the base set of a vector space ` V ` to the base set of its scalar field, so we have to write ` ( ( Base ` ( Scalar ` V ) ) ^m ( Base ` V ) ) `. Alexander has explained why we need to use ` Base ` here.
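As a sanity check, membership in that set unfolds to exactly the function statement we want; this is a sketch of what ~ elmapg gives us:
```metamath
$( Unfolding set exponentiation (cf. ~ elmapg ):
     q e. ( ( Base ` ( Scalar ` V ) ) ^m ( Base ` V ) )
   is equivalent to
     q : ( Base ` V ) --> ( Base ` ( Scalar ` V ) )
   i.e. "q maps the vectors of V to its scalars". $)
```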
Now, a quadratic form always relates to a specific vector space.
Above, we have not defined what ` V ` is, and so our definition is
not a closed form.
There are two ways around this: either we define ` QFrm ` as the
set of all quadratic forms, regardless of the underlying vector
space, or we first fix a specific vector space, and then define
the class of quadratic forms over that specific vector space. I
think the latter is usually better, and what you try to achieve.
So here what we want is a function, mapping one vector space ` w `
to the class of quadratic forms over ` w `, ` ( QFrm ` w ) `.
Later on, we will be able to state "` Q ` is a quadratic form over
V" by writing `Q e. ( QFrm ` V )`.
We can use the "maps to" notation here: ` ( w e. Field |->
F(w) ) ` stands for "the function which maps the field `w` to
`F(w)`.
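For instance (a small sketch, not tied to our definition), the real squaring function in this notation:
```metamath
$( The "maps to" notation on a familiar example:
     ( x e. RR |-> ( x ^ 2 ) )
   is the function mapping each real x to its square; evaluating
   a maps-to function at an argument is what ~ fvmpt captures. $)
```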
Putting it together, this gives:
```metamath
df-qfrm $a |- QFrm = ( w e. Field |-> { q e. ( ( Base ` ( Scalar
` w ) ) ^m ( Base ` w ) ) | ph } ) $.
```
Of course I've still omitted `ph` here, which corresponds to
your:
```metamath
v e. V /\ a e. ( Scalar ` V ) /\ q ` ( a .x. v ) = ( a * a ) .x. v
```
We have not introduced `v` and `a` yet though. Those are
respectively any vector and any scalar, so we
shall use "for all": ` A. a e. ( Base ` ( Scalar ` w ) ) A. v e. (
Base ` w ) ps `
Finally, ` .x. ` is just another class variable, like ` A `. In
the theorems which make use of it, like ~ drngmcl, there is a
hypothesis to give it a value. In your definition, we cannot use
such a trick, so we have to use everything verbatim. The scalar
multiplication operation of ` w ` is ` ( .s ` w ) `, so instead of
` ( a .x. v ) ` one has to write ` ( a ( .s ` w ) v ) `. Note
also that ` ( q ` v ) ` is a scalar, not a vector, so on the
right-hand side the outer multiplication must be the ring
multiplication ` ( .r ` ( Scalar ` w ) ) ` of the scalar field,
not ` ( .s ` w ) `. This gives:
```metamath
( q ` ( a ( .s ` w ) v ) ) = ( ( a ( .r ` ( Scalar ` w ) ) a ) (
.r ` ( Scalar ` w ) ) ( q ` v ) )
```
It also looks like we need to express the fact that ` q(u+v) - q(u) - q(v) ` is bilinear. It is a function of two variables `u` and `v`, so we can express it using ~ cmpt2:
```metamath
cmpt2 $a class ( x e. A , y e. B |-> C ) $.
```
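For instance (a sketch), real addition can be written as such a two-argument function; our bilinearity witness below has exactly this shape:
```metamath
$( A two-argument "maps to" function: explicit real addition.
     ( x e. RR , y e. RR |-> ( x + y ) )
   Such an operation F is applied with the syntax ( A F B )
   (cf. ~ df-ov ). $)
```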
Let's assume we have a similar definition for Bilinear Forms,
`BFrm`. A statement "` Q ` is a bilinear form over ` w `" is
expressed ` Q e. ( BFrm ` w ) `. This gives:
```metamath
( u e. ( Base ` w ) , v e. ( Base ` w ) |-> ( ( q ` ( u ( +g ` w
) v ) ) ( -g ` ( Scalar ` w ) ) ( ( q ` u ) ( +g ` ( Scalar ` w )
) ( q ` v ) ) ) ) e. ( BFrm ` w )
```
Let's put everything together once again:
```metamath
${
  $d w q v a u $.
  df-qfrm $a |- QFrm = ( w e. Field |->
    { q e. ( ( Base ` ( Scalar ` w ) ) ^m ( Base ` w ) ) |
      ( A. a e. ( Base ` ( Scalar ` w ) ) A. v e. ( Base ` w )
          ( q ` ( a ( .s ` w ) v ) )
            = ( ( a ( .r ` ( Scalar ` w ) ) a ) ( .r ` ( Scalar ` w ) ) ( q ` v ) )
        /\ ( u e. ( Base ` w ) , v e. ( Base ` w ) |->
             ( ( q ` ( u ( +g ` w ) v ) ) ( -g ` ( Scalar ` w ) )
               ( ( q ` u ) ( +g ` ( Scalar ` w ) ) ( q ` v ) ) ) )
           e. ( BFrm ` w ) ) } ) $.
$}
```
That is essentially Mario's answer, with the addition of the
"bilinear form" part. It's quite complex, but the next theorems
are going to be simpler... and Metamath forces some formalism on
us (which is good!).
Actually, there is one little difference from Mario's definition. I've written ` ( w e. Field |-> ... ) `, but we usually prefer ` ( w e. _V |-> ... ) `. By using the universal class ` _V `, we do not force ` w ` to be a field; this allows us to be as general as possible, and to define "what a quadratic form would be if ` w ` were a vector space".
In practice, as explained by Alexander, as soon as we need any specific field or vector space property (say, distributivity), it will have to appear as a hypothesis, like ` ( W e. Field -> ... ) ` or ` ( W e. LVec -> ... ) `.
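To make that concrete, here is a sketch of the shape such a theorem could take; the label `qfrmcl` and the statement itself are made up for illustration:
```metamath
$( Hypothetical closure theorem: over a field, a quadratic form
   sends vectors to scalars. The field property appears as an
   explicit antecedent rather than being built into df-qfrm . $)
qfrmcl $p |- ( ( W e. Field /\ Q e. ( QFrm ` W ) /\ X e. ( Base ` W ) )
  -> ( Q ` X ) e. ( Base ` ( Scalar ` W ) ) ) $= ? $.
```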
Of course, `BFrm`, the function mapping a field to the set of its bilinear forms, will need to be defined first: this is left as an exercise! ;-)
BR,
Thierry
That's nonsense; this should be easy to do. :) The equivalent of a type with a parameter in set.mm is a function from sets to sets. You use function application to represent the application of the type function to some particular type.
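To illustrate with tokens that do exist in set.mm ( ~ df-word ), a parameterized "list type" and its typing judgement look like this:
```metamath
$( Haskell:  xs :: [a]         -- xs has type "list over a"
   set.mm:   |- W e. Word A    -- W is a word (finite sequence) over A
   The type constructor [_] becomes the set-valued function Word ,
   and the typing judgement :: becomes membership e. . $)
```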
Hi Mingli,
It's very good that you have a paper to follow, and very good that you set yourself as a target to prove another of the "100 theorems"! Good luck, and don't hesitate to shout for help whenever you feel blocked, I'm sure you'll find many willing to help!
The title of the paper is "A very simple proof", but not
everything is as simple as it claims ;-)
Very very quickly surveying what you might need:
It seems you're right that we do not have the 3D cross product in
Metamath yet. That's probably a very good addition! (and
absolutely necessary for that proof)
To express the Euclidean space in 3 dimensions, you can use
` ( EEhil ` 3 ) `. It is defined as ` ( RR^ ( 1 ... 3 ) ) `, and
that's a structure which is also a vector space. Then, for
example, the set of quadratic forms over RR^3 will be
` ( QFrm ` ( EEhil ` 3 ) ) ` (once you've defined it!).
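To give an idea, here is a sketch of what such a definition could look like; the token `crossp` and the label `df-crossp` are made up, and triples are represented as functions on ` ( 1 ... 3 ) ` to match ` ( RR^ ( 1 ... 3 ) ) `:
```metamath
$( Hypothetical 3D cross product on triples of reals. $)
df-crossp $a |- crossp = ( u e. ( RR ^m ( 1 ... 3 ) ) ,
    v e. ( RR ^m ( 1 ... 3 ) ) |->
  { <. 1 , ( ( ( u ` 2 ) x. ( v ` 3 ) ) - ( ( u ` 3 ) x. ( v ` 2 ) ) ) >. ,
    <. 2 , ( ( ( u ` 3 ) x. ( v ` 1 ) ) - ( ( u ` 1 ) x. ( v ` 3 ) ) ) >. ,
    <. 3 , ( ( ( u ` 1 ) x. ( v ` 2 ) ) - ( ( u ` 2 ) x. ( v ` 1 ) ) ) >. } ) $.
```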
We already have quite some matrix algebra, defined by Stefan O'Rear and Alexander Van der Vekens.
Alexander has already introduced the concept of "linear dependency" as `linDepS` (see ~df-lindeps).
Norm has a whole chapter about projective geometry, but unfortunately I don't understand it, so I can't tell whether it could be useful.
I hope this helps somehow!
BR,
Thierry
There isn't really any essential way that a type is different from a set, except that a type abstracts some encoding details that sets instantiate in a particular way. That means that if you have a type you want to model then sets are a way to do it, maybe not the only way. Of course the function giving the type family is only the beginning of the story: in haskell an inductive data type comes with constructors and pattern matching, and that corresponds to functions and a recursion principle in metamath.
> There isn't really any essential way that a type is different from a set, except that a type abstracts some encoding details that sets instantiate in a particular way. [...]

I'm thinking more about elements that may belong to many sets, while they can have only one type. Isn't that the main reason why type theory was invented in the first place? Yes, it's possible to use set theory with particular axioms like ZFC, but trying to model type theory inside it kinda defeats the purpose. Why not implement type theory in Metamath, then?
At least in my email client these got linked funny. The
references are to http://us.metamath.org/holuni/mmhol.html and
https://github.com/digama0/dtt.mm respectively, as far as I know.
Hope that helps people follow along in case anyone is unfamiliar.
I don't see why you are so eager to say "types are not quite sets".
Type theory thinking can still be useful without the trappings of type theory itself, and in fact type theory simulated in set theory has some advantages over pure type theory in that you can bend the rules when it is advantageous to do so.
In lean, there is a concept called "defeq", or definitional equality, and it is strictly stronger than regular equality: there are things that are propositionally equal but not defeq, like x + (y + z) and (x + y) + z. You can prove these two are equal, but if you have a type family indexed in natural numbers and an element in F ((x + y) + z) you can't just treat it as an element of F (x + (y + z)). In metamath you can; since the values are equal the sets are equal so any element of one is also an element of the other. Defeq is simply not a thing in metamath, and this is very freeing.
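In Metamath terms, the transport Mario describes is just equality chaining; a sketch using theorems that do exist in set.mm:
```metamath
$( ~ addass :  |- ( ( A e. CC /\ B e. CC /\ C e. CC )
                     -> ( ( A + B ) + C ) = ( A + ( B + C ) ) )
   ~ fveq2  :  |- ( A = B -> ( F ` A ) = ( F ` B ) )
   ~ eleq2  :  |- ( A = B -> ( C e. A <-> C e. B ) )
   Chaining these gives
     ( F ` ( ( x + y ) + z ) ) = ( F ` ( x + ( y + z ) ) )
   and hence membership in one iff membership in the other:
   no "transport" step is needed. $)
```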
Hi Mingli,
You're trying to understand how a proof is stored and verified. There's absolutely nothing wrong with that, but I just wanted to state that understanding those details is not mandatory.
I think when I first started with Metamath, I focused on
understanding simple proofs on the website or in MMJ2 (both are
similar), created my first proofs, and let the tool (MMJ2) store
and verify the proofs for me. I consider the compressed proof
format a kind of "binary executable" which we don't absolutely
need to read or understand.
MMJ2 and the website both hide some steps of the proof
(syntactical steps) and allow you to focus on the more important
ones (logical steps).
Like all proofs in set.mm, the proof you chose is provided in compressed format. That format favors space over readability, so as little information as possible is included.

> 1. So the question here is: cdr, ccrg, cfield, df-field are all $a, and only elin2 is a $p; where are the $f or $e?
The $f and $e hypotheses which are "in scope" are known, so they do not need to be repeated in the statement list in parentheses. That format is described in Appendix B of the Metamath book, which states:
If the statement being proved has m mandatory hypotheses, integers 1 through m correspond to the labels of these hypotheses in the order shown by the show statement ... / full command, i.e., the RPN order of the mandatory hypotheses.
Here is the output of that command:
67962 isfld $p |- ( R e. Field <-> ( R e. DivRing /\ R e. CRing ) ) $= ... $.
Its mandatory hypotheses in RPN order are:
cR $f class R $.
So there is a single mandatory hypothesis, `cR`, referred to by
an `A` in the compressed proof (the letter `A` encodes the
integer 1).
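Putting that together, the whole compressed proof is presumably something like this sketch (I have not copied the exact line from set.mm):
```metamath
isfld $p |- ( R e. Field <-> ( R e. DivRing /\ R e. CRing ) )
  $= ( cdr ccrg cfield df-field elin2 ) ABCDEF $.
$( A is the mandatory hypothesis cR ; B through F refer, in
   order, to the five labels listed in parentheses. $)
```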
```
MM> show proof isfld /all
1 cA=cR $f class R
2 cB=cdr $a class DivRing
3 cC=ccrg $a class CRing
4 cX=cfield $a class Field
5 elin2.x=df-field $a |- Field = ( DivRing i^i CRing )
6 isfld=elin2 $p |- ( R e. Field <-> ( R e. DivRing /\ R e. CRing ) )
```
> 2. But where does step 1 come from? And when the verifier reads cdr in step 2, what happens?
> It seems I should understand precisely how class notions take effect?
What happens at step 1 is that the expression "class R" is pushed onto the proof stack.
Here is how I would describe the rest of the verification
process:
Next, `B` is read, which refers to `cdr`. This pushes "class DivRing" on the stack, which now contains "class R / class DivRing".
And so on until step 5, where the stack contains "class R / class DivRing / class CRing / class Field / |- Field = ( DivRing i^i CRing )".
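Spelled out (a sketch of the stack contents after each step, matching the /all listing above):
```metamath
$( step 1 (A = cR)       : class R
   step 2 (B = cdr)      : class R / class DivRing
   step 3 (C = ccrg)     : class R / class DivRing / class CRing
   step 4 (D = cfield)   : ... / class Field
   step 5 (E = df-field) : ... / |- Field = ( DivRing i^i CRing )
   step 6 (F = elin2)    : pops all five items, pushes the
                           substituted conclusion of elin2 . $)
```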
Then, at step 6, `elin2` is invoked. It requires 4 class
variables, A, B, C and X, and a hypothesis ~elin2.x of the form
` |- X = ( B i^i C ) `.
All 5 items are popped from the stack, in the order in which they
are declared: ` A ` gets ` R `, ` B ` gets ` DivRing `, ` C ` gets
` CRing `, ` X ` gets ` Field `, and the hypothesis slot is
matched by ~df-field.
Finally, the conclusion of elin2, ` |- ( A e. X <-> ( A e. B /\ A e. C ) ) `, is pushed onto the stack, which, after substitution, results in ` |- ( R e. Field <-> ( R e. DivRing /\ R e. CRing ) ) `.
That is the final and only expression remaining on the proof
stack, and the verifier finally checks that it matches the
statement of ~isfld.
And that's it! Luckily this was a very short proof!
In practice, if you use MMJ2 or check the website proof of ~isfld, you will see that the proof only has 2 steps: steps 1-4 are hidden, and only 5-6 are shown.
That's because steps 1-4 only deal with "syntactically" building
the expressions which will later be used in substitutions, not
deriving new true statements.
BR,
Thierry