Status?


Ulrik Buchholtz

Sep 30, 2008, 7:18:23 PM
to vdash
Hi!

First of all: I'm very excited about vdash (just saw it on /.), as
I've been thinking about such a project for almost a year now. Let's
hope it works out alright and does not end like the QED project.

Anyway, what's the current status of vdash? In the Ignite talk,
Cameron asks people to help add math to it, but looking at the
website and the single thread on the list, it appears we're not quite
there yet.

Can people start contributing code to the backend, or are we still at
the drawing board? Is there some document that summarizes the current
ideas for the structure/organization of the vdash system?

Best regards,
Ulrik Buchholtz

Null Set

Oct 1, 2008, 2:50:08 AM
to vdash
I am also very excited about this project!

It certainly doesn't appear to be ready yet, but I did manage to find
the wiki on the website, at http://vdash.org/wiki/Main_Page

Random Page will bring you to the mockup of Cantor's Theorem that was
used in the slides.

Abhishek Mishra

Oct 1, 2008, 3:20:25 AM
to vdash
It's a great project.

And yes, looking at the link to the wiki you provided,
there seems to be just one page in the wiki.

I have even created a login :), but I don't know whether it will still
be there when the site goes live!

-Abhishek

On Oct 1, 11:50 am, Null Set <Tychomon...@gmail.com> wrote:
> I am also very excited about this project!
>
> It certainly doesn't appear to be ready yet, but I did manage to find
> the wiki on the website, at http://vdash.org/wiki/Main_Page

Cameron Freer

Oct 1, 2008, 3:43:47 AM
to vdash


Hi!

It's great to see how much interest and excitement vdash has
generated. Thanks for getting in touch!

The site hasn't launched yet (though as Null Set points out, there's
a MediaWiki installation which I've been modifying slowly). I'll
write up a longer document with the proposed structure shortly. For
now, here's a brief outline of the minimal steps necessary to get a
prototype up and running.

-----

1. An Isabelle interface that asynchronously verifies entries and
records the results in the SQL database. In particular, links to
definitions or results on other vdash pages should be recorded
appropriately as dependencies. (A rough sketch follows this list.)

2. MediaWiki parser of Isabelle/Isar proofs (possibly via GeSHi)
which also displays data about verification.

3. One-time conversion of IsarMathLib to seed the wiki.

4. Finalize license issues. I now believe there are good reasons
for content to be in the public domain (rather than under the BSD
license, as noted in the e-club slides) but with a strong informal
culture of attribution. I'll elaborate on this issue shortly.
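
To make step 1 concrete, here is a rough sketch in Python of the kind of
asynchronous checker I have in mind. The table names, the [[...]] link
syntax, and the "isabelle-check" command are placeholders, not decisions:
it pulls pending pages from the database, runs the checker on each, and
records the verdict together with the linked pages as dependencies.

    import re
    import sqlite3
    import subprocess

    DB_FILE = "vdash.db"              # hypothetical database file
    CHECK_CMD = ["isabelle-check"]    # placeholder for the real batch-mode entry point

    def extract_dependencies(text):
        """Titles of other vdash pages linked from this entry, assuming
        [[Page Title]] wiki-link syntax (an assumption, not a decision)."""
        return set(re.findall(r"\[\[([^]|#]+)", text))

    def verify_pending(conn):
        """Run the checker on every queued page and record verdict + dependencies."""
        rows = conn.execute(
            "SELECT id, text FROM pages WHERE status = 'pending'").fetchall()
        for page_id, text in rows:
            path = "/tmp/page_%d.thy" % page_id
            with open(path, "w") as f:
                f.write(text)
            result = subprocess.run(CHECK_CMD + [path], capture_output=True, text=True)
            status = "verified" if result.returncode == 0 else "failed"
            conn.execute("UPDATE pages SET status = ?, log = ? WHERE id = ?",
                         (status, result.stdout + result.stderr, page_id))
            conn.execute("DELETE FROM dependencies WHERE page_id = ?", (page_id,))
            for dep in extract_dependencies(text):
                conn.execute("INSERT INTO dependencies (page_id, depends_on) VALUES (?, ?)",
                             (page_id, dep))
        conn.commit()

    if __name__ == "__main__":
        verify_pending(sqlite3.connect(DB_FILE))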

-----

Other features that would be nice in the relatively near future:

5. A method of locking down verified pages. Perhaps allow only edits
that keep a verified page correct, or allow only forks.

6. More sophisticated versioning of proofs. When a page changes,
allow for links that automatically point to the new statement, as
well as static links to the old version.

7. Parse error messages from Isabelle in a useful way.
For example, a script might take the subgoals noted in Isabelle error
messages and expand the document with the outline (see the sketch
after this list):
show ___ fix ___ assume ___ sorry

8. Display attribution in a detailed manner, sorted by degree of
contribution.

9. Automated feeds/scrapes from open-access sources (with
appropriate licenses) to create stubs as targets for formalization.
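
And as a sketch of item 7, the error-message parsing could start as small
as the script below. The assumed format of Isabelle's subgoal output is
only an approximation; splitting out the "fix" and "assume" parts from the
meta-quantifiers and premises would be the next refinement.

    import re

    def subgoals_to_isar(isabelle_output):
        """Turn numbered subgoal lines (e.g. ' 1. P a') from a failed run
        into a skeletal Isar proof for the user to fill in."""
        pieces = []
        for line in isabelle_output.splitlines():
            m = re.match(r"\s*\d+\.\s+(.*)", line)
            if m:
                pieces.append('  show "%s"\n    sorry' % m.group(1))
        return "proof -\n" + "\nnext\n".join(pieces) + "\nqed"

    # Example: two reported subgoals "1. P a" and "2. Q a" become
    #   proof -
    #     show "P a"
    #       sorry
    #   next
    #     show "Q a"
    #       sorry
    #   qed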

-----

There are many other interesting directions for both UI and back-end
enhancements. I'll soon post some more of my ideas, as well as
pointers to existing tools and proposed features. But I'd also love
to hear what technical ideas you all have, in what ways you'd like
to contribute, and in what directions you'd like to see this go in
the future.

Best,
- Cameron

Chao Xu

Oct 1, 2008, 1:05:18 PM
to vd...@googlegroups.com
Are individual statements going to be saved in a separate database?
MediaWiki's revision storage isn't the best format for this data,
since converting it to other formats would be difficult.
I also find categorizing items in MediaWiki a huge pain.
Other than MediaWiki being the most popular wiki system out there, I
don't see many advantages to it.

--
----------
Chao Xu
http://mgccl.com

Cameron Freer

Oct 1, 2008, 2:36:20 PM
to vd...@googlegroups.com
On Thu, 2 Oct 2008, Chao Xu wrote:

> Are individual statements going to be saved in a separate database?
> MediaWiki's revision storage isn't the best format for this data,
> since converting it to other formats would be difficult.
> I also find categorizing items in MediaWiki a huge pain.
> Other than MediaWiki being the most popular wiki system out there, I
> don't see many advantages to it.

> ----------
> Chao Xu
> http://mgccl.com


I agree that MediaWiki may not be the best choice. In addition to
being very popular, it is also fairly mature and has many
extensions, but in the long run we may be better off using something
else, or building our own wiki.

SWiM, which Ian (drevicko) has mentioned, is based on IkeWiki.
I suspect that using IkeWiki at this stage would add a lot of
overhead (for semantic metadata) without corresponding benefit.

My inclination had been to modify MediaWiki for now, and maybe
eventually migrate to a custom-built system, but I'd appreciate
hearing other suggestions people may have.

Best,
- Cameron

Tim Green

Oct 1, 2008, 2:42:25 PM
to vdash
On Oct 1, 7:36 pm, Cameron Freer <fr...@math.mit.edu> wrote:
> I agree that MediaWiki may not be the best choice.  In addition to
> being very popular, it is also fairly mature and has many
> extensions, but in the long run we may be better off using something
> else, or building our own wiki.
>
> SWiM, which Ian (drevicko) has mentioned, is based on IkeWiki.
> I suspect that using IkeWiki at this stage would add a lot of
> overhead (for semantic metadata) without corresponding benefit.
>
> My inclination had been to modify MediaWiki for now, and maybe
> eventually migrate to a custom-built system, but I'd appreciate
> hearing other suggestions people may have.
>
> Best,
>  - Cameron

I'd think the best way to proceed would be to use MediaWiki, as it has
mature wiki capabilities, but to make sure there's an easy way out if
needs grow. MediaWiki is good if all you want to do is replicate
Wikipedia, but I find it's pretty restrictive if you want to do
anything a bit more magical. I'd keep a separate database of the
individual theorems/proofs, via whatever extension you're writing to do
the verification, so that it can be migrated or used elsewhere when
needed.

I personally use Drupal for a number of projects due to its
flexibility, good module writing support and good user access control
- so that might be worth looking into, but it might ultimately be best
to write something from scratch for the more advanced features like
guides and annotators.

One question: is there going to be a facility for checking for
duplicate proofs? Maybe some sort of theorem fingerprint hash?
I can certainly imagine people duplicating effort if this starts
to get big.
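
Just to illustrate what I mean by a fingerprint (a rough sketch in
Python; the normalization is something I've invented, and a real version
would need to respect Isabelle's actual syntax and bound-variable
renaming):

    import hashlib
    import re

    def fingerprint(statement):
        """Crude duplicate-detection fingerprint: strip (* ... *) comments,
        collapse whitespace, lowercase, hash.  Alpha-equivalent statements
        with different variable names will NOT collide here, so a real
        version should hash a normal form produced by the prover itself."""
        s = re.sub(r"\(\*.*?\*\)", "", statement, flags=re.DOTALL)
        s = re.sub(r"\s+", " ", s).strip().lower()
        return hashlib.sha1(s.encode("utf-8")).hexdigest()

    # Statements differing only in spacing get the same fingerprint:
    #   fingerprint("A ==> A") == fingerprint("A  ==>   A")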

Chao Xu

Oct 1, 2008, 3:05:15 PM
to vd...@googlegroups.com
I'm also a Drupal user and appreciate its flexibility.
If we create a database for storing the statements and provide APIs for
accessing it, then it won't matter what we use for presentation.
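
For what it's worth, a first cut at such a statement store could be as
small as this (only a sketch; the schema and file name are things I've
just made up):

    import sqlite3

    SCHEMA = """
    CREATE TABLE IF NOT EXISTS statements (
        id        INTEGER PRIMARY KEY,
        name      TEXT UNIQUE,   -- e.g. 'Cantor_theorem'
        kind      TEXT,          -- 'theorem', 'definition', 'locale', ...
        statement TEXT,          -- the formal assertion
        proof     TEXT,          -- the Isar proof text
        status    TEXT           -- 'pending', 'verified', 'failed'
    );
    """

    def get_statement(conn, name):
        """Any presentation layer (MediaWiki, Drupal, a custom UI) would go
        through accessors like this instead of the wiki's own tables."""
        row = conn.execute(
            "SELECT statement, proof, status FROM statements WHERE name = ?",
            (name,)).fetchone()
        return None if row is None else dict(zip(("statement", "proof", "status"), row))

    conn = sqlite3.connect("vdash.db")   # hypothetical database file
    conn.executescript(SCHEMA)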


edb...@infinitebits.net

Oct 1, 2008, 4:25:03 PM
to vd...@googlegroups.com
Cameron and all,

I love MediaWiki and administer numerous instances of it for varied
purposes. Nonetheless, I wonder how easy it is going to be to plumb
its content database to semantically meaningful proof libraries. It
seems to me like the ultimate Web 2.0 challenge.

However, it probably won't be any harder than any other wiki framework,
and MediaWiki can be made to take care of so many formatting and
organizational issues automagically, and it is a fast way to get
collaboratively generated content online.

But surely the wiki just acts as a GUI for a revolutionary backend
mathematical knowledge database. There have been so many partially
successful demonstration projects before (the QED Project, etc.) that
withered mainly, IMHO, because they were too specialized and couldn't
easily be enough things to enough people.

What's deeply meaningful about math is not mathematical facts
(theorems) themselves, but the highly intricate graph of inferential
connections between them. These connections need to be tracked in
three different ways:

1) hard-core logical connections (MetaMath is a good example of this, but
its proofs aren't very human-friendly, and its subject scope is limited);
2) intermediate technical connections (the kind of proof presentation
that working mathematicians like to see in LaTeX); and
3) pedagogical connections -- the ability to backtrack from one
mathematical statement to more elementary ones, perhaps with well-placed
explanatory notes available in the presentation.

I've long felt that any ultimately successful mathematical knowledge
base would have to map all three kinds of connections onto a single
collection of theorems -- without ever internally confusing apples and
oranges -- which won't be easy.
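
To make that a bit more concrete, one could imagine a single dependency
graph whose edges carry a type, so that the same collection of theorems
supports all three views without mixing them up (a toy sketch in Python,
with names I've invented):

    from collections import defaultdict

    # One edge kind per way of tracking connections.
    LOGICAL, TECHNICAL, PEDAGOGICAL = "logical", "technical", "pedagogical"

    class KnowledgeGraph:
        def __init__(self):
            self.edges = defaultdict(set)      # (source, kind) -> set of targets

        def add_link(self, source, target, kind):
            self.edges[(source, kind)].add(target)

        def links(self, source, kind):
            """Connections of `source` under exactly one view, so the logical,
            technical and pedagogical layers never get mixed up."""
            return self.edges[(source, kind)]

    g = KnowledgeGraph()
    g.add_link("Cantor_theorem", "def_surjective", LOGICAL)
    g.add_link("Cantor_theorem", "intro_to_cardinality", PEDAGOGICAL)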

It will also be interesting to see how the system keeps track of which
axioms each statement does or does not assume. For example, some parts
of the space of proven theorems need the Axiom of Choice invoked
somewhere in their proof trees, and some parts can do without it.
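
Tracking axiom usage could then fall out of the same graph: take the
closure of the logical edges and check whether the Axiom of Choice (or
any other axiom) is reachable (again just a sketch, building on the
previous one):

    def axioms_used(graph, theorem, axioms, kind=LOGICAL):
        """Walk the logical dependencies of `theorem` (using the graph from
        the previous sketch) and report which of the named axioms appear
        anywhere in its proof tree."""
        seen, stack, used = set(), [theorem], set()
        while stack:
            node = stack.pop()
            if node in seen:
                continue
            seen.add(node)
            if node in axioms:
                used.add(node)
            stack.extend(graph.links(node, kind))
        return used

    # e.g. axioms_used(g, "Cantor_theorem", {"axiom_of_choice"}) tells us
    # whether AC is invoked anywhere below Cantor's theorem.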

Modular ("object oriented") design will be key here, so that different
front ends and various proof checkers can interact with the same
foundational database. Importing mathematical knowledge into this
database without confusing shades of meaning between different fields,
schools of thought, and individual mathematicians, will be quite a
semantic challenge. But, if successful, this could place mathematics
on a firmer logical foundation than it has ever had before. Someday,
no new proof will be published before it is first digested by a
checker (or several) against a comprehensive mathematical knowledge
base to see if it provokes contradictions with anything else we think
we know. There is no such thing as an isolated mathematical fact.

I'm in awe of the goals and motivations of this project and I dearly
hope it succeeds. But that will require both design brilliance and
bountiful amounts of highly skilled help to import a large amount of
mathematical knowledge from its currently distributed, disparate and
overlappingly repetitive forms. It's quite a bit of informational
entropy that needs to be reduced so we can have it all in an essential
form, with interfaces that make it useful to many.

Best Regards,

Ed Boyce

slawekk

Oct 2, 2008, 8:09:03 PM
to vdash
One of the issues that the wiki design has to tackle is ensuring the
consistency of the proofs. This means that if someone has proven a
theorem (which is typically a lot of work), then the theorem should
not stop being valid just because someone else decided to modify
something that the theorem depends on. Previous formalized-mathematics
wiki concepts tried to design elaborate version control schemes, where
a theorem would be valid in the context of the specific version of the
theorem database present at the moment it was verified. The problem
with this solution is that it would probably require a custom wiki
engine.
I would like to suggest a less ambitious solution: identify all
operations that may result in inconsistency and ban them.

The operations to ban are:

- modification of an assertion of a theorem that another theorem
depends on (proofs can be modified, as long as they remain correct).

- modification of a definition that is referenced somewhere

- adding an assumption to a locale that is referenced elsewhere (I am
assuming Isar will be used here), or removing an assumption that is
referenced in a theorem proven in that locale.

- removing notation conventions from locales

To do this, one needs to maintain a database of assertions, definitions
and locales.
To be able to build such a database, adding theorems to the simplifier
should also be forbidden (if a theorem is added to the simplifier, the
prover may use it in a proof even when it is not explicitly referenced
in the text).
The system has to be able to do some limited parsing of pages to
update the database and to verify that the above rules are not broken.
A page that breaks any of the rules should not be saveable, even if
Isabelle verifies that it is correct.
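
A sketch of such a check, in Python against a hypothetical dependencies
table (nothing here is tied to any particular wiki engine):

    def edit_is_allowed(db, old_entries, new_entries):
        """Reject edits that would break verified dependents.  `old_entries`
        and `new_entries` map names of theorems, definitions and locales on
        the page to their assertions, as produced by a limited parse of the
        page text; `db` holds a dependencies table like the one above."""
        for name, old_assertion in old_entries.items():
            count = db.execute(
                "SELECT COUNT(*) FROM dependencies WHERE depends_on = ?",
                (name,)).fetchone()[0]
            if count == 0:
                continue                  # nothing depends on it: edit freely
            if name not in new_entries:
                return False, "%s is referenced elsewhere and may not be removed" % name
            if new_entries[name] != old_assertion:
                return False, "%s is referenced elsewhere; its assertion may not change" % name
        return True, "ok"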

Another issue is a WYSIWYG editor for the proofs. This would be nice to
have, but very difficult to create. I suggest not doing that, at least
initially. Instead, allow for easy downloads of snapshots of the
formal part of the wiki, so that one can create the proofs off-line
with existing tools and then copy and paste them into the regular
ASCII-based wiki editor.

Slawekk

Cameron Freer

Oct 3, 2008, 3:55:18 PM
to vdash


I agree on all points. Ultimately, we may move to a custom wiki
engine, and all manner of elaborate in-browser editors, but we can
get quite far without either of these for now.

Your proposed ban on consistency-destroying operations, and easy
snapshot downloads, seem like great places to start.

There are also many useful (but not especially flashy) UI
improvements that we can make (like parsing of Isabelle error
messages, and detailed displays of dependencies) long before adding
various AJAX or other interactive or WYSIWYG features.

Best,
- Cameron

Hendrik Boom

Oct 6, 2008, 5:15:46 PM
to vdash


On Oct 1, 2:50 am, Null Set <Tychomon...@gmail.com> wrote:
> I am also very excited about this project!
>
> It certainly doesn't appear to be ready yet, but I did manage to find
> the wiki on the website, at http://vdash.org/wiki/Main_Page

At the moment, all that is there is

: Forbidden
:
: You don't have permission to access /wiki/Main_Page on this server.

Hendrik Boom

Oct 6, 2008, 9:26:24 PM
to vdash


On Oct 1, 2:36 pm, Cameron Freer <fr...@math.mit.edu> wrote:
> My inclination had been to modify MediaWiki for now, and maybe
> eventually migrate to a custom-built system, but I'd appreciate
> hearing other suggestions people may have.

ikiwiki is capable of using several different revision control systems
as back ends, including monotone, which is replicated, paranoid about
data loss, and can record certificates for particular revisions as
being formally verified.

Since monotone's database is replicated across as many servers as
people care to provide, it is easy for users on other systems to
experiment with presentation and editing styles, independently of the
monotonic accumulation of mathematical truth, which is shared.
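
For example (a sketch only; please check the monotone documentation for
the exact command syntax), a checker could stamp a revision with a
verification certificate along these lines:

    import subprocess

    def certify_verified(revision_id, checker="Isabelle2008"):
        """Attach a verification certificate to a monotone revision once the
        proof checker accepts it.  Assumes monotone's `mtn cert' command; the
        certificate name and value here are conventions I am inventing."""
        subprocess.check_call(["mtn", "cert", revision_id, "vdash-verified", checker])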

-- hendrik

>
> Best,
>  - Cameron

Cameron Freer

Oct 10, 2008, 11:23:17 AM
to vdash
On Thu, 2 Oct 2008, slawekk wrote:

> One of the issues that the wiki design has to tackle is ensuring the
> consistency of the proofs. This means that if someone has proven a
> theorem (which is typically a lot of work), then the theorem should
> not stop being valid just because someone else decided to modify
> something that the theorem depends on. Previous formalized-mathematics
> wiki concepts tried to design elaborate version control schemes, where
> a theorem would be valid in the context of the specific version of the
> theorem database present at the moment it was verified. The problem
> with this solution is that it would probably require a custom wiki
> engine.
> I would like to suggest a less ambitious solution: identify all
> operations that may result in inconsistency and ban them.

I agree that your less ambitious solution is a great place to start.
For reference, slides 14-39 of
http://homepages.inf.ed.ac.uk/rpollack/mathWiki/slides/Corbineau.pdf
outline several more complicated possibilities.

That said, things along these lines might not be too hard to implement
using a revision control system (as Hendrik Boom suggests).

For now, it is probably best to just get started using a simple
scheme that ensures consistency, and then migrate to allow more
sophisticated operations later.

Best,
- Cameron

Cameron Freer

Nov 16, 2008, 10:35:21 PM
to vdash
On Tue, 30 Sep 2008, Ulrik Buchholtz wrote:

> Can people start contributing code to the backend, or are we still at
> the drawing board? Is there some document that summarizes the current
> ideas for the structure/organization of the vdash system?

I've posted some additional explanation to the website: http://vdash.org/
Feedback appreciated.

I've been working on getting a very basic first version ready
(Monotone + ikiwiki + IsarMathLib, with Isabelle connected
by a simple script which will soon be replaced by a more
robust queue system). Then people can start experimenting
with development ideas on their own, by cloning the repository
and making ikiwiki plugins, CSS, &c.

Best,
- Cameron
