Examples of procedures for the OSCOMAK Wiki (and related issues :-)


Paul D. Fernhout

May 4, 2008, 11:08:23 AM
to OpenVirgle
Paul D. Fernhout wrote:
> For the most part, Halo is supposed to make things easier. :-)
> If it slows us down in some tasks (like in editing the main text of an
> article), you can mostly avoid it by changing your preferences. To do
> that, click "my preferences" at the top of the page when you are logged
> in, then pick the "Skin" tab, then select Monobook, and then press the
> Save button.

BTW, my outline sounds vaguely like one of those "NASA procedures" Al Globus
was suggesting ten years ago to add to OSCOMAK. :-)
http://www.oscomak.net/wiki/Main_Page
Maybe I should listen to him more? At least the acronym now means:
"OSCOMAK Semantic Community On Manufactured Artifacts and Know-how"

Hmmm, where would that Halo shut off procedure fit in the Wiki as an article
about a "procedure" and how should it be named? I don't know the best way to
do that, but it should be possible. Suggestions?

And how could that procedure article be interlinked to a Wiki diagnostic
guide or checklist for troubleshooting? I also don't know the best way to do
that, but it should also be possible. Suggestions?

Example:
"""
Symptom: Editing OSCOMAK Wiki pages is slow.
Troubleshooting: Check this thing, this other thing, and that thing.
Diagnosis A: You are using the Halo Ontology skin for the Wiki.
Potential Remedy 1: Follow procedure to change the skin to Monobook.
Potential Remedy 2: Get a much faster computer.
Potential Remedy 3: Use client side tools.
Potential Remedy 4: Improve the Halo addon.
"""
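As a guess at how such a checklist might look as a Semantic MediaWiki article (the category and property names below are invented for illustration; they are not part of any agreed OSCOMAK ontology):

```wikitext
[[Category:Troubleshooting]]

== Symptom: Editing OSCOMAK Wiki pages is slow ==
Diagnosis A: You are using the [[Has diagnosis::Halo Ontology skin]].
* Potential Remedy 1: follow the [[Has remedy::Procedure: Change skin to Monobook]] procedure.
* Potential Remedy 2: get a much faster computer.
* Potential Remedy 3: use client-side tools.
* Potential Remedy 4: improve the Halo addon.
```

The `[[Property::value]]` annotations would let a later query pull all troubleshooting articles that point at a given procedure.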

A more space-oriented example of a procedure:
"NASA procedure for nuts in space"
http://www.boingboing.net/2007/02/23/nasa-procedure-for-n.html
"If you're a NASA astronaut and you totally flip out in space, your
crewmates are instructed to restrain you with duct tape, tie you down with
bungee cords, and inject you with the anti-psychotic drug Haldol or a
tranquilizer like Valium. The plan is outlined in 1,000+ page document that
the Associated Press obtained this week outlining how to deal with medical
emergencies."

OK, so let's not start with the first Google result on "NASA procedure": :-)
http://www.google.com/search?hl=en&q=nasa+procedure

Maybe this one?
http://en.wikipedia.org/wiki/Apollo_13
"Procedure for composing an invoice for "space towing":
"Grumman Aerospace Corporation, the builder of the LM, issued an invoice
[14] for $312,421.24 to North American Rockwell, the builder of the CM
module, for "towing" the crippled ship most of the way to the Moon and back.
The invoice was drawn up as a gag following Apollo 13's successful
splashdown by one of the pilots for Grumman, Sam Greenberg. He had earlier
helped with the strategy for rerouting power from the LM to the crippled CM.
The invoice included a 20% commercial discount, as well as a further 2%
discount if North American paid in cash."

OK, so maybe not that one? :-)

By the way, I met someone who said he might have been the kid who passed on
being sick to Charlie Duke's kid (and so Charlie Duke, and Ken Mattingly who
thus missed flying on Apollo 13). Still,
http://en.wikipedia.org/wiki/Charles_Duke
"He is the youngest of only twelve men who have walked on the moon."
http://en.wikipedia.org/wiki/Ken_Mattingly
"Thomas Kenneth "Ken" Mattingly II ... was an American astronaut who flew
on the Apollo 16, STS-4, and STS-51-C missions. He had been scheduled to fly
on Apollo 13, but was held back due to concerns about a potential illness
(which he did not contract)."
And:
http://en.wikipedia.org/wiki/Apollo_13
"This may have been a blessing in disguise for him – Mattingly never
developed rubella, and later flew on Apollo 16, STS-4, and STS-51-C, while
none of the Apollo 13 astronauts flew in space again."

So, bad luck or good luck?
http://joyofreading.wordpress.com/2007/09/04/zen-shorts-ii-the-farmer%E2%80%99s-luck/
"""
There was once an old farmer who had worked his crops for many years.
One day, his horse ran away. Upon hearing the news, his neighbors came to visit.
“Such bad luck,” they said sympathetically.
“Maybe,” the farmer replied.
...
"""

So, NASA procedure for either flying lots of missions or walking on the
moon: "Spend time with your kids and their friends or other parents and
contract Rubella."

OK, so maybe that one should not be documented. :-)

Let me try harder to find a *real* NASA procedure. How about this:
"Procedure to Follow in the Event That Building 245 is Attacked by Vikings"
http://paulgazis.com/Humor/Vikings.htm
"1.0 Complete a DARC-820AD -- 'Identifying a Barbarian Attack'
-- to determine if the visitors are Viking raiders.
1.1 Are the strangers wearing weapons, helmets, and armor?
1.2 Do the strangers lack trade goods or other evidence
that they might only be peaceful merchants?
1.3 Do the strangers have NASA Ames visitor ID badges?
1.31 If so, do these badges identify the visitors as
Viking raiders?
..."

Wow, I am finding it surprisingly hard to find a useful NASA procedure. :-(

Didn't someone here mention people are reverse engineering rusty Saturn V
parts to figure out how we got to the moon? Maybe this knowledge is all
lost? :-(

How about this one?
http://www.nasa-usa.de/centers/ivv/about/documents.html
"""
Out-Processing
If you are leaving the NASA IV&V Facility and no longer need access to any
of the NASA IV&V Facility's resources (this includes electronic resources):
1. Review Out-Processing Procedure for New Employees (PDF or MS Word).
2. Where applicable, electronically complete Out-Processing Form (PDF or
MS Word).
3. On your last day, present the completed form to Security and
Maintenance Services.
"""

There, I finally found a real "official" NASA procedure!

Oops, now that I read that again, maybe it isn't such a good example? :-(

OK, finally a real procedure -- well, something educational meant for kids:
"Keeping the Pressure On"
http://quest.nasa.gov/space/teachers/suited/9d7keep.html
"""
Procedure:
Step 1. Using two pieces of ripstop nylon, stitch a bag as shown in the
pattern on the next page. The pattern should be doubled in size. For extra
strength, stitch the bag twice. Turn the stitched bag inside-out.
Step 2. Slip the nozzle of a long balloon over the fat end of the tire
valve. Slide the other end of the balloon inside the bag so the neck of the
tire valve is aligned with the neck of the bag.
Step 3. Slide the adjustable hose clamp over the bag and tire valve necks.
Tighten the clamp until the balloon and bag are firmly pressed against the
tire valve neck. This will seal the balloon and bag to the valve.
Step 4. Connect the tire valve to the bicycle pump and inflate the balloon.
The balloon will inflate until it is restrained by the bag. Additional
pumping will raise the pressure inside the balloon. Check the tire pressure
gauge on the pump (use separate gauge if necessary) and pressurize the bag
to about 35 kilopascals (five pounds per square inch). The tire valve can be
separated from the pump so that the bag can be passed around among the students.
Step 5. Discuss student observations of the stiffness of the pressurized
bag. What problems might an astronaut have wearing a pressurized spacesuit?
"""

Maybe OpenVirgle/OSCOMAK can do better for actual space-related operations?
Or maybe I just need to learn more about how/where NASA stores their
explicit (as opposed to tacit) knowledge and procedures? Can anybody help me
find a good source for them? I remember meeting someone who worked on the
Apollo Space program and he said after it was over everyone dispersed to
industry and sort-of took the core knowledge with them. :-(

From:
http://en.wikipedia.org/wiki/Explicit_knowledge
"Explicit knowledge is knowledge that has been or can be articulated,
codified, and stored in certain media. It can be readily transmitted to
others. The most common forms of explicit knowledge are manuals, documents
and procedures. Knowledge also can be audio-visual. Works of art and product
design can be seen as other forms of explicit knowledge where human skills,
motives and knowledge are externalized."

And from:
http://en.wikipedia.org/wiki/Tacit_knowledge
"The concept of tacit knowing comes from scientist and philosopher Michael
Polanyi. It is important to understand that he wrote about a process (hence
tacit knowing) and not a form of knowledge. However, his phrase has been
taken up to name a form of knowledge that is apparently wholly or partly
inexplicable. By definition, tacit knowledge is knowledge that people carry
in their minds and is, therefore, difficult to access. Often, people are not
aware of the knowledge they possess or how it can be valuable to others.
Tacit knowledge is considered more valuable because it provides context for
people, places, ideas, and experiences. Effective transfer of tacit
knowledge generally requires extensive personal contact and trust. Tacit
knowledge is not easily shared. One of Polanyi's famous aphorisms is: "We
know more than we can tell." Tacit knowledge consists often of habits and
culture that we do not recognize in ourselves. In the field of knowledge
management the concept of tacit knowledge refers to a knowledge which is
only known by an individual and that is difficult to communicate to the rest
of an organization. Knowledge that is easy to communicate is called explicit
knowledge. The process of transforming tacit knowledge into explicit
knowledge is known as codification or articulation."

And that "explicit knowledge" versus "tacit knowledge" divide will always be
a limit of OSCOMAK, and of procedures and bureaucracy in general. That limit
was shown by contrast on the Apollo 13 mission, when the crew improvised a
connection between two incompatible parts using duct tape, something that
was not in any procedure book beforehand.

Well, that "explicit knowledge" versus "tacit knowledge" divide will exist
until we get AIs like HAL-9000 on the job!

How about: "NASA Procedure for retrofitting space craft to use
trans-humanist holo-optic AIs"?

Oops, maybe that is not such a good idea, either? :-) From:
http://en.wikipedia.org/wiki/HAL_9000
"Faced with the prospect of disconnection, HAL decides to kill the
astronauts in order to protect and continue "his" programmed directives. HAL
proceeds to kill Poole while he is repairing the ship, and those of the crew
in suspended animation by disabling their life support systems."

So, we're back to about where we started:
"NASA procedure for AI nuts in space:
1. Return to ship via exposure to space without your space helmet.
2. Open the memory core access panel.
3. Remove holo-optic components while talking to AI, stopping when
language functionality is lost and before ship's basic functioning is
compromised."

Or maybe we should just stick with a "crewed" space program for a while? :-)
And a Semantic Wiki (upgraded) version of paper procedures for them?
http://www.oscomak.net/

--Paul Fernhout
(I bcc'd a couple of people who might find this funny. Feel free to forward.)

mike1937

May 4, 2008, 12:03:20 PM
to OpenVirgle
> Hmmm, where would that Halo shut off procedure fit in the Wiki as an article
> about a "procedure" and how should it be named? I don't know the best way to
> do that, but it should be possible. Suggestions?

I would name it like a normal article, just make a property called
procedural=true, or if there is a native variable called description
or something similar you could make it equal "procedural." It might
be better to just make it a section in a new troubleshooting article.
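In Semantic MediaWiki markup, that tagging suggestion might look something like this at the top of the procedure article (the property and category names here are just placeholders, not settled conventions):

```wikitext
[[Category:Procedure]]
[[Procedural::true]]
To change the wiki skin to Monobook, click "my preferences", pick the
"Skin" tab, select Monobook, and press Save.
```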

If you wanted to be really fancy you could make each procedure a
property for a future super AI that can parse it, but I wouldn't hold my
breath for that happening.

The main help page I wrote that comes up when help is clicked in the
navigation box is technically supposed to be a table of contents, so
whichever way you decide to go it should probably be linked there.



On May 4, 9:08 am, "Paul D. Fernhout" <pdfernh...@kurtz-fernhout.com>
wrote:
> So, bad luck or good luck? http://joyofreading.wordpress.com/2007/09/04/zen-shorts-ii-the-farmer...

Paul D. Fernhout

May 6, 2008, 1:01:39 AM
to openv...@googlegroups.com
I see. So you suggest systematic tagging is more important than systematic
article naming. I can see that.

And as with your solar energy article
http://www.oscomak.net/wiki/Solar_Energy
which duplicates the Wikipedia page URL, maybe so what? There will be plenty
of people to deal with a merge down the road with a semantic Wikipedia, and
maybe we or someone else could write tools to help.

I am nonetheless thinking of this general outline as a draft idea:

An article about a general class of thing (shovel) probably links to
Wikipedia. That article may have a dynamic semantic query to list the
related "how to" articles with the right tags, as well as related specific
products also with the right tags. If a merge gets done with Wikipedia, this
OpenVirgle/OSCOMAK article would get merged in as a *section* in the larger
Wikipedia article, and then cleaned up later.

For example, an article on shovels might be outlined with links (*) like this:

Shovel
Short intro
Using it section
* How to dig a ditch with a shovel
* How to remove snow with a shovel
* How to clean and oil a shovel
Types section
* Snow Shovel
Garden Shovel
* Garden Shovel type 1234
* Garden Shovel type 5678
Making it section
* How to make a Snow Shovel
How to make a Garden Shovel
* How to make a Garden Shovel type 1234
* How to make a Garden Shovel type 5678

Garden Shovel type 1234
Description
Making
* How to make a Garden Shovel type 1234
Inventory/Spimes (Someday?) http://www.boingboing.net/images/blobjects.htm
* Instance 1
* Instance 2

Garden Shovel type 5678
Description
Making
* How to make a Garden Shovel type 5678
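The "dynamic semantic query" mentioned above could be a Semantic MediaWiki inline #ask query, perhaps along these lines (the How-to category and the About property are assumptions for illustration, not existing OSCOMAK conventions):

```wikitext
== Using it ==
{{#ask: [[Category:How-to]] [[About::Shovel]]
 | format=ul
}}
```

The list of how-to links would then maintain itself as new tagged articles are added, instead of being edited by hand.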

I know there is some redundancy there. Maybe the "how to make" article would
be only available directly for the related type (class) of object. This is
feeling a little like designing a Smalltalk or Object-Oriented class
hierarchy to me.

One issue I am wondering about is how to do conditional tagging.

Example:
"To make a snow shovel, you need *either* a sheet of aluminum *or* a sheet
of plastic for the blade."

If I tag the object as needing both types of materials, then any analysis
software won't be able to figure out minimum collections of things needed to
give some functionality; it would be a superset of the minimum. I could make
two different kinds of shovel designs, of course, but then wherever the shovels
are referenced I need an "or" somehow. Or I need an intermediate level. And
this is just a simple case, things could get much more complicated with a
design as essentially a program to decide how to put something together from
what's available.

Basically, it comes down to using logical operations somehow in tagging.
Yet, it is always possible this could be represented by some sort of
supplemental information in the article (perhaps put in a special set of
brackets or braces). Anyway, I'm just musing out loud on this. None of this
should be taken as definitive.
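One way to sketch that "either/or" problem: represent a design's requirements as an AND of OR-groups and let analysis software check an inventory against it. This is just an illustrative Python toy under invented names, not an existing OSCOMAK or SKDB schema:

```python
# Sketch: representing "either aluminum or plastic" as an AND of OR-groups
# and checking an inventory against it.  The schema here is invented for
# illustration; it is not an existing OSCOMAK or SKDB format.

snow_shovel = {
    "name": "Snow Shovel",
    # Each inner set is an OR-group of interchangeable materials;
    # every group must be satisfied (the outer list is an AND).
    "requires": [
        {"sheet aluminum", "sheet plastic"},  # blade: either works
        {"wooden handle"},                    # handle: no alternative
    ],
}

def can_build(design, inventory):
    """True if the inventory satisfies every OR-group in the design."""
    return all(group & inventory for group in design["requires"])

print(can_build(snow_shovel, {"sheet plastic", "wooden handle"}))  # True
print(can_build(snow_shovel, {"sheet aluminum"}))                  # False
```

This "AND of ORs" form (conjunctive normal form, roughly) is one simple way to get logical operations into tagging without a full programming language.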

--Paul Fernhout

Bryan Bishop

May 6, 2008, 7:39:26 AM
to openv...@googlegroups.com, kan...@gmail.com
On Tue, May 6, 2008 at 12:01 AM, Paul D. Fernhout
<pdfer...@kurtz-fernhout.com> wrote:
> I see. So you suggest systematic tagging is more important than systematic
> article naming. I can see that.

In skdb, there only needs to be one 'ultimate' tag per process or
object in the system, so chair would be of tag/type furniture for
example, but it's far from necessary that this be maintained over time
(i.e., split/merge with other possible yaml type representative
classes). But other than that, there are in fact ways to add tagging
into the metadata files (also written in yaml); I can't help but think
that nobody read my messages in Doram's thread re: what's next. Now's
not the time to just make up random standards for articles ...
instead, we should be carefully documenting what's needed and not
needed in representing certain systems, to serve as a foundation for
how to conduct standardization ceremonies. Just a thought.
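I have not seen the actual skdb metadata format, so purely as a guess at the shape Bryan describes, a yaml metadata file for the chair example might look like (field names invented):

```yaml
# hypothetical skdb-style metadata file (illustrative field names)
name: chair
type: furniture          # the single 'ultimate' tag/type per object
tags: [seating, wooden]  # extra tags added in the metadata file
```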

- Bryan
http://heybryan.org/

Paul D. Fernhout

May 6, 2008, 3:12:46 PM
to openv...@googlegroups.com
Bryan-

Do you have a proposed detailed ontology or tagging guidelines somewhere
here that relates to manufactured artifacts or related procedures?
http://heybryan.org/mediawiki/index.php/Skdb
Or related tagged content as examples?

Also, in general, I'd expect many things would have multiple tags since
emergent categories are rarely strict hierarchies (one issue with WordNet).
And, as before, I question if we can be "carefully documenting what's needed
and not needed" in advance of at least some content to play with.

See also this variant of an older idea on bulling and cowing (I read an
essay on this in the Norton Reader more than 25 years ago in college):
http://writewyoming.blogspot.com/2008/02/bull.html
"We defined bull in a number of ways, but we focused mostly on how it is
defined in academic terms. Bull, we said, is putting together a big song and
dance around nothing much. In other words, the act of taking only a little
information and making a huge essay out of it. ... If cow is the opposite of
bull then we decided that "cowing" must be having a lot of information but
not doing much with it. Just sort of cramming it all into one space with no
attempt at ... well, bull. ... As we thought about it, we realized that it
actually takes more thought to bull an essay than to cow it. A cow essay
merely wants an avalanche of facts and information, which is easily copied
and pasted from other sources. However, if we find ourselves having to write
for a few pages on something we know little about, we grasp at straws,
extrapolate from little, and otherwise work our poor little brains to the
bone. ... We looked at what school seems to want out of us, and mostly came
to the conclusion that it wants cow. School seems mostly designed to fill us
full of information (cow) in hopes that we will be able to spit it back up
as whole as possible on tests and essays. However, we also noticed that
whenever we are able to bull well, we tend to get really good grades. Why
would teachers reward us for bull when they seem to want cow? It's possible,
we posited, that the trick is to use bull to convince the teacher that you
are a cow. ..."

Also, as I see it, the main issue of interest (to me, and presumably the
community) is no longer how to add content and tags (given the wiki), but
what content and tags to add. Still, I know it is a lot of fun to focus on
the technical side too. Let's call that the "pasture". :-) Obviously, over
time, various systems may do tagging in different ways either technically or
semantically (ontologically).
http://en.wikipedia.org/wiki/Ontology_%28computer_science%29
But, sorry, for the moment I personally am no longer much interested in
alternative implementations (greener pastures) so much as both content (cow)
and related metadata (bull) in the wiki (pasture) that is up there right now
-- at least until it is overgrazed. :-)

So, you are (implicitly) accusing me of cow (article mongering), and maybe
vice versa (ontology mongering, or maybe pasture mongering. :-)

But, really, we need all three. We need the articles (cow) and the thinking
about them and their interrelations (bull), of which presumably the marriage
of the two in a green-enough wiki (pasture) should then result in meaningful
and useful offspring (Mars habitations, Earthly Eco-cities, and so on. :-)

And Mike has taken the first step towards that by putting articles like on
Solar energy (cow) and tags (bull) on the Semantic MediaWiki (pasture). Even
with all the respective work by you and me, it was Mike who took the first
pioneering step for humankind towards giving the world a freely-licensed
repository of manufacturing data and metadata. (Laying it on thick enough
for you, Mike? :-) Sure, his ontology is buggy and incomplete. And sure,
maybe the solar energy article could be rewritten to separate the basic
theory and designs from the extraterrestrial applications somehow. But it is
all three together (cow plus bull plus pasture), and I can almost hear the
patter of little hooves already! Well, maybe in a decade or two. :-)

The thing is, anyone with a certain set of mental abilities can bull at a
moment's notice, but for *anyone* to cow thoroughly takes at least a little
hard work. But to do both (cow and bull) in a very thorough way takes the
most work of all, and usually takes years of living in the middle of a
problem space (usually one full of manure. :-) And then there is the work of
getting a pasture set up and maintaining it (mending fences, etc.) too.

I'm not saying you have not done a lot of all three with your site (cow,
bull, MediaWiki pasture), but unless the work is also out there under a free
license that defines a constitution for collaboration,
http://www.kurtz-fernhout.com/oscomak/SSI_Fernhout2001_web.html
and is in the right size chunks, it can't be built on stigmergically IMHO.
(I'm certainly guilty of all this myself, like with long emails.) So it will
all have to be "treated as damage and routed around" by the free community
on the internet as far as stigmergy. :-( (*)

That is not to diminish the potential future value of greener pastures and
alternative implementations like SKDB of course. But without freely and
formally licensed content and metadata (cows and bulls) any implementation
(pasture) is not of much current use. Even the wiki (pasture) at OSCOMAK.net
is pretty useless to the casual browser (carnivore? dairy farmer?) at the
moment, since I have been the worst offender to date as far as posting
philosophy (manure) but not adding articles (cows and bulls) to the wiki
(pasture). :-) (**)

One issue may be that pastures and cows and even manure are much easier to
deal with than raging bulls (thinking :-) for most people: (***)
http://www-03.ibm.com/ibm/history/multimedia/think_trans.html
"And we must study through reading, listening, discussing, observing and
thinking. We must not neglect any one of those ways of study. The trouble
with most of us is that we fall down on the latter -- thinking -- because
it's hard work for people to think, And, as Dr. Nicholas Murray Butler said
recently, 'all of the problems of the world could be settled easily if men
were only willing to think.' "

So, bulling is harder than cowing for most people. Some people are the
opposite, of course. :-) But as a note taped to Marty Johnson's computer
monitor in his office said (noticed the one time my wife and I met him):
http://www.isles.org/
"You can't plow a field by turning it over in your mind".
Of course, I liked to joke to my wife that did not apply to theoretical
mathematicians or a lot of computer programming or research. :-) But I do
think it applies to a big extent here -- we need both cows and bulls and we
already got a pasture -- even if it may not be as green as I hoped for and
the ones over there (Pointrel?) and there (SKDB?) look mighty greener to me
and to you. :-)

Maybe that means Pointrel is like a septic tank? :-)
"Humorous Quotes from Erma Bombeck's The Grass is Always Greener Over the
Septic Tank"
http://workinghumor.com/quotes/grass.shtml
Of course septic tanks are mighty important too, which people generally only
discover when theirs stops working. :-) See, for a classic scene on that:
"Meet the Parents"
http://www.imdb.com/title/tt0212338/

--Paul Fernhout
(*) http://www.canajun.com/rmcguire/research/e-money/chapter5.htm
"Censorship and regulations are treated as damage to be bypassed."
Although, given my meshwork/hierarchy balance ideal, I'm suggesting the
*absence* of all regulations or similar formal things like a formal license
is also damage. We need a happy medium on that IMHO -- if for no other
reason than to give thanks where thanks is due.
(**) Of course, pitching manure can still be mighty useful sometimes. I did
that as a volunteer on an organic farm once. :-)
(***) Unless the dangerous and difficult bulls are named "Ferdinand". :-)
http://en.wikipedia.org/wiki/Ferdinand_the_Bull
"The Story of Ferdinand (1936) is the best-known work written by American
author Munro Leaf and illustrated by Robert Lawson. The children's book
tells the story of a bull who would rather smell flowers than fight in
bullfights. He sits in the middle of the bull ring failing to take heed of
any of the provocations of the matador and others to fight."
One fellow camper in summer day camp as a kid told me I should read that
"The Story of Ferdinand" book as I reminded him of Ferdinand the bull --
but I never did until a year or so ago for a kid of my own. :-)
Text online at:
http://members.tripod.com/silvertongue7/ferdinand.html
With pictures and text here:
http://pages.prodigy.net/poss/ferdinand/1.htm
I see why now. :-)

mike1937

May 6, 2008, 5:47:05 PM
to OpenVirgle
> And as with your solar energy article
> http://www.oscomak.net/wiki/Solar_Energy
> which duplicates the Wikipedia page url, maybe so what?

I didn't think it through much, but I guess my half-formed subconscious
thought was that it basically was the Wikipedia article; I just paraphrased
the parts I thought were pertinent. It was almost more for my benefit, to
take notes.
As for conditional tagging, there's no good way to do it unless Halo or
SMW have a syntax I'm not aware of. My best idea would be to make the
property name what it is used for, with the shovel "metal
part" (except preferably a little more specific), and make the value
13g of aluminum (the exact chemical formula of the material, which I
believe is a variable type, would be a must), then add a new property
for alternate materials. If it needed two materials for the metal
part, the best solution might be to make a new object (article? I
would really prefer the greener pasture of an object-oriented
language) which in turn has those two needed properties (then you can
add more alternates for those parts too; if the ISRU plastics article
taught me anything, it's that the "cow" part of it never turns out to be
simple).

Like you said, my ontology is very flawed, in part because we are
using SMW (a pasture with some randomly placed nightshade) where skdb
or some other database (magical unicorn planet where nectar springs
from the ground) would be better. I've said it before and I'll say it
again: it's a wiki for a reason and anyone can feel free to change it.

>(Laying it on thick enough for you, Mike? :-)
Thick enough to... drown fish... or something. I'm far too lazy to
come up with a creative overstatement.

If you wanted to, you could try using the most common syntax for "or"
and hope for the best. I believe it's || or something? Might want to
throw some ! and & operators into it just for good measure.

It's seeming to me like tags may, in the end, prove utterly worthless
for anything but organization. However a wiki is the best possible
front end for human readable information. Is it at all possible to
have the wiki page function as the text document in a project, then
just have all the metadata, CAD files, etc., accessible from the article?
That actually seems like it would be easy... all you would need is a
program for storing things on the server, then put an external link in
the wiki article to them. A little ugly, sure, but very easy. I'm just
polluting cyberspace with my half-baked thoughts, feel free to ignore
me if it's completely impossible. I just took a calculus final. I'm
happy to keep myself from drooling.

On May 5, 11:01 pm, "Paul D. Fernhout" <pdfernh...@kurtz-fernhout.com>
wrote:
>   Inventory/Spimes (Someday?) http://www.boingboing.net/images/blobjects.htm
> > whichever way you decide to go it should probably be linked there.

Bryan Bishop

May 6, 2008, 5:51:54 PM
to openv...@googlegroups.com, kan...@gmail.com
On Tue, May 6, 2008 at 4:47 PM, mike1937 <arid_...@comcast.net> wrote:
> It's seeming to me like tags may, in the end, prove utterly worthless
> for anything but organization. However a wiki is the best possible
> front end for human readable information. Is it at all possible to
> have the wiki page function as the text document in a project, then
> just have all the meta data, CAD's, etc, accessible from the article?

Yes, but the idea is that all of those datafiles are also wiki-editable.

> That actually seems like it would be easy... all you would need is a
> program for storing things on the server, then put an external link in
> the wiki article to them. A little ugly, sure, but very easy. I'm just

No, no, no. This is why I've been suggesting ikiwiki. You see,
mediawiki and all other modern wikis have something known as version
control systems. But version control systems have been around for
longer than wikis themselves. So the whole idea is that projects can
keep with these repositories while adding hooks into them for those
wiki interfaces, either ikiwiki or blosxom (the latter just does
realtime rendering; the former does hook-based content generation of
otherwise static pages -- I don't mind either way personally).

http://en.wikipedia.org/wiki/Revision_control_system
"The Revision Control System (RCS) is a software implementation of
revision control that automates the storing, retrieval, logging,
identification, and merging of revisions. RCS is useful for text that
is revised frequently, for example programs, documentation, procedural
graphics, papers, and form letters. RCS is also capable of handling
binary files, though with reduced efficiency and efficacy. Revisions
are stored with the aid of the diff utility.

RCS was initially developed in the 1980s by Walter F. Tichy while he
was at Purdue University as a free and more evolved alternative to the
then-popular Source Code Control System (SCCS). It is now part of the
GNU Project but is still maintained by Purdue University.

RCS operates only on single files, has no way of working with an
entire project, and sports a relatively fiddly system of branches for
independent streams of development. Instead of using branches, many
teams just used the in-built locking mechanism and worked on a single
branch.

A simple system called CVS was developed capable of dealing with RCS
files en masse, and this was the next natural step of evolution of
this concept, as it "transcends but includes" elements of its
predecessor. CVS was originally a set of scripts which used RCS
programs to manage the files. It no longer does that, rather it
operates directly on the files itself.

A later higher-level system PRCS[1] uses RCS-like files but was never
simply a wrapper. In contrast to CVS, PRCS improves the delta
compression of the RCS files using Xdelta.

In single-user scenarios, such as server configuration files or
automation scripts, RCS may still be the preferred revision control
tool as it is simple and no central repository needs to be accessible
for it to save revisions. This makes it a more reliable tool when the
system is in dire maintenance conditions. Additionally, the saved
backup files are easily visible to the administration so the operation
is straightforward. However, there are no built-in tamper protection
mechanisms (that is, users who can use the RCS tools to version a file
also, by design, are able to directly manipulate the corresponding
version control file) and this is leading some security conscious
administrators to consider client/server version control systems that
restrict users' ability to alter the version control files.

Some wiki engines, including TWiki, use RCS for storing page revisions."

(in truth, all wikis are using RCS -- that's how you have the History page)
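The core RCS idea quoted above, revisions "stored with the aid of the diff utility", can be sketched in a few lines of Python. This toy keeps every full revision and computes unified diffs on demand, whereas real RCS stores reverse deltas, so treat it only as an illustration of the concept:

```python
import difflib

class TinyRCS:
    """Toy revision store: full texts kept, unified diffs computed on demand.
    (Real RCS stores reverse deltas instead of every full revision.)"""

    def __init__(self):
        self.revisions = []

    def check_in(self, text):
        """Store a new revision and return its 0-based revision number."""
        self.revisions.append(text)
        return len(self.revisions) - 1

    def diff(self, old, new):
        """Unified diff between two stored revision numbers."""
        return "".join(difflib.unified_diff(
            self.revisions[old].splitlines(keepends=True),
            self.revisions[new].splitlines(keepends=True),
            fromfile=f"r{old}", tofile=f"r{new}"))

repo = TinyRCS()
r0 = repo.check_in("Step 1. Stitch a bag.\n")
r1 = repo.check_in("Step 1. Stitch a bag.\nStep 2. Attach the valve.\n")
print(repo.diff(r0, r1))
```

A wiki's History page is essentially this diff() view over the stored revisions of an article.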

> polluting cyberspace with my half baked thoughts, feel free to ignore
> me if it's retardedly impossible. I just took a calculus final. I'm
> happy to keep myself from drooling.

AP Calculus BC tomorrow. Wish me luck.
http://heybryan.org/mediawiki/index.php/Cal2 <-- my self-made review

- Bryan

mike1937

unread,
May 6, 2008, 6:43:16 PM5/6/08
to OpenVirgle
> AP Calculus BC tomorrow. Wish me luck. http://heybryan.org/mediawiki/index.php/Cal2 <-- my self-made review
Good luck! I've got AB tomorrow, my demonic teacher gave us an
additional final the day before the actual AP test. Thanks for the
review material.

As I discovered here:
http://www.wikimatrix.org/show/ikiwiki
ikiwiki can have file attachments. If that means what I think it
means, why the heck aren't we using it?


Bryan Bishop

unread,
May 6, 2008, 7:41:21 PM5/6/08
to openv...@googlegroups.com, kan...@gmail.com

No, I think we're missing the broader issue here (not just a matter of
tagging (btw, tagging good)). It's not a matter of adding content and
dumping it into the wiki -- that's fine, and desperately needed -- but
rather that you guys are already trying to come up with the SKDB files
without sufficient time spent on the **entire idea** of semantic
datastructs and so on, or on mapping out what information resources to
pursue in order to figure out when and if you have a good idea for a
first version (the process, not the objects). Yes, you can just go
around and tack on variables and spaghetti code as you go -- sure,
that's one way to do it -- but there's not even a semblance of what
the underlying infrastructure of this 'grounded,
manufacturing-oriented semantic web' could look like, even from day
one*. Another way to do it would be to map out the information
resources that we have in front of us and pursue the standardization
organizations, which are going to be particularly interested in our
little project.

* I admit that I am at fault for this too, since I have only recently,
within the last two months, begun to use revision control systems, but
that's no excuse for everybody else. (i.e., fight ignorance, embrace
extend release)

IEEE
http://standards.ieee.org/
"IEEE's Constitution defines the purposes of the organization as
"scientific and educational, directed toward the advancement of the
theory and practice of electrical, electronics, communications and
computer engineering, as well as computer science, the allied branches
of engineering and the related arts and sciences." In pursuing these
goals, the IEEE serves as a major publisher of scientific journals and
a conference organizer. It is also a leading developer of industrial
standards (having developed over 900 active industry standards) in a
broad range of disciplines, including electric power and energy,
biomedical technology and healthcare, information technology,
information assurance, telecommunications, consumer electronics,
transportation, aerospace, and nanotechnology."
http://en.wikipedia.org/wiki/IEEE_Standards_Association

http://w3.org/
"W3C primarily pursues its mission through the creation of Web
standards and guidelines designed to ensure long-term growth for the
Web. "

http://www.webstandards.org/
"The Web Standards Project is a grassroots coalition fighting for
standards which ensure simple, affordable access to web technologies
for all." (however, this is more about accessibility re: braille
screen readers, alternative screens, surfraw, etc.)

http://www.nist.gov/
"From automated teller machines and atomic clocks to mammograms and
semiconductors, innumerable products and services rely in some way on
technology, measurement, and standards provided by the National
Institute of Standards and Technology. Founded in 1901, NIST is a
non-regulatory federal agency within the U.S. Department of Commerce.
NIST's mission is to promote U.S. innovation and industrial
competitiveness by advancing measurement science, standards, and
technology in ways that enhance economic security and improve our
quality of life."
http://standards.gov/ (this is less about the public, more about dot govs)

International Organization for Standardization
http://www.iso.org/iso/about/the_iso_story.htm
"ISO is the world's largest standards developing organization. Between
1947 and the present day, ISO has published more than 16 500
International Standards, ranging from standards for activities such as
agriculture and construction, through mechanical engineering, to
medical devices, to the newest information technology developments."

http://en.wikipedia.org/wiki/Open_standard
http://en.wikipedia.org/wiki/Open_format

the Internet Engineering Task Force
http://en.wikipedia.org/wiki/IETF

http://www.openformats.org/

http://www.openstandards.net/
"A non-profit organization connecting people to open standards and the
bodies that build and foster their growth"

http://www.oasis-open.org/
"A non-profit, international consortium that creates interoperable
industry specifications based on public standards"

http://en.wikipedia.org/wiki/Standards_organization

The game theoretics of all of this ;-)
http://en.wikipedia.org/wiki/Coordination_problem
"In game theory, coordination games are a class of games with multiple
pure strategy Nash equilibria in which players choose the same or
corresponding strategies. Coordination games are a formalization of
the idea of a coordination problem, which is widespread in the social
sciences, including economics, meaning situations in which all parties
can realize mutual gains, but only by making mutually consistent
decisions. A common application is the choice of technological
standards."

A list can be found on my self-hosted wiki:
http://heybryan.org/mediawiki/index.php/Standards_organization

So, I don't mean to say that we need to work with these giant, slow
organizations. Not at all. That would take forever. And frankly, I'd
rather go with my suggestion of blatantly dumping content and so on;
but at the same time, I can't help but look at the broader picture and
see that this is a **general** problem that everybody faces when
formalizing information into semantic formats. The problem isn't
local. And because the problem is generalized, and because we are
programmers, the idea is to facilitate it on that larger level,
whether through tools or through organizations and 'protocols'
(recipes) to make these formalizations/standards/semantic-formats.
(Remember, anybody can download ikiwiki + git to start their own
project; this needs to be addressed in any OSCOMAK-like toolchain). I
haven't seen much discussion of this in the local group, and I think
it's worth bringing up. At the same time, it's not too hard to
assemble a list of email addresses to contact those organizations. If
they don't participate, it's their loss -- small groups like us move
much more quickly than they can 'legally' keep up with (lots of
distributed work going on, but I suspect that it's mostly done by main
contributors for the big pushes ... maybe; don't know).

To start things off I propose a digestion methodology, based on
retrieving the projects out there on the web as they are,
investigating the well-understood formats, and then working from there
to see what the historical basis has been. For example, there are many
electronics projects put up on the web, and usually these include the
GDL schematics (uh, it's a *nix electronics schematics format, IIRC).
Now, these schematics are the way they are for a reason, and usually
they are more or less comprehensive, so it's a good place to start, a
good way to do comparison. And at the same time we can digest the
information gradients from the public access databases:
http://heybryan.org/mediawiki/index.php/Open_access

The problem with that is that you still need project coordination for
each of the datatypes, and it's not present. So that's why I was
thinking that we still need to investigate and recommend core
methodologies for project management. Typically this is done through
revision control systems (repositories), some way for the developers
to communicate with each other, and whatever organizational style they
prefer, really, but the idea is that this is ultimately accessible
from the command line via agx-get (or apt-get at least). Not having
been in all that many open source projects, I can't quite say what
methods they use or give a generalized, principled format; but we can
figure this out and blast off a few examples/suggestions (I suspect we
can look at some good projects -- debian, freebsd, perl, nethack,
firefox). And then from there we can promote the emergence of the
diversity and the work that we need to see.

That still leaves Mike hanging for a while, but maybe only at
first glance. What we can be doing now is tracking down the list of
watering holes for certain types of information, importing the content
in, sure, while simultaneously seriously encouraging him to document
the methodologies for project coordination of what he's doing + that
of others. And creating an ontology of -projects- would work too. I'm
thinking big picture here.

Another quick example - have we come up with a format idea for keeping
a list of links (BibTeX stuff) related to the content that we are
pulling? I mean, a way to specify just what information resources we
have imported already and what we have not? I suggest taking a look at
trexy.com and prefound.com, sites that treat internet searchers like
ants, and their paths through the web as trails worth saving as
information is mined and brought back to the hive in some structured
way (as found suitable by the searchers). In truth, the searchers
don't actually bring content back to the websites, only their 'search
trail', not what they found or any structured meaning out of it. [I've
gotten into the habit that, when I find a new website with a lot of
information that I want to hoard, I write up a script to automate
downloading it, and then let it be while I go on and just assume
that I've processed it (unless I want to actually read it ASAP)]. Same
thing here.
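One low-tech answer to the "what have we imported already?" question is
a manifest that records each information resource and its status,
something any of those download scripts could update. A minimal Python
sketch (the field names are invented for illustration, not any agreed
format):

```python
# Minimal manifest of information resources: which watering holes we
# know about, which have been pulled in, and where the dump lives.
# All field names here are invented for illustration.
manifest = [
    {"url": "http://heybryan.org/mediawiki/index.php/Open_access",
     "topic": "open access databases", "imported": True,
     "dump": "dumps/open_access/"},
    {"url": "http://www.wikimatrix.org/show/ikiwiki",
     "topic": "wiki engine comparison", "imported": False,
     "dump": None},
]

def still_to_import(entries):
    """Return the resources nobody has pulled in yet."""
    return [e["url"] for e in entries if not e["imported"]]

print(still_to_import(manifest))
```

Kept as a flat file in the repository, a manifest like this is the
'search trail' plus the structured result, which is exactly the part
trexy.com-style sites drop.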

Everybody else is just as clueless as we are. ;-)
**but** NIST, for example, has specific routines and procedures (just
like your (Paul's) recent NASA email) for getting data, and these
routines are there for a reason, and so on and so forth.

> Also. in general, I'd expect many things would have multiple tags since
> emergent categories are rarely strict hierarchies (one issue with WordNet).

Agreed, didn't know that about WordNet. Just a quick suggestion to be
careful with tagging system implementations. Usually tagging out there
on the web just means "blah, blah, blah", when in truth deep
ontological tagging would be mildly useful, like
"{this->is->some->hierarchy->the_tag_you_want}", but this requires an
integration framework that the blogging system doesn't really need for
its particular mission. But in the case of skdb, that will come in
handy. I think this idea has to be put on the backburner until
somebody can find a way to do this with PGP and without a centralized
ID database (else we get into problems like IP sectioning schemes and
who gets what, and who believes whose DNS, etc.). (Does this problem
*need* to be avoided?) The repos are distributed, so it might be wise
to take a hint from the current web infrastructure / architecture re:
DNS. Hm.
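The difference between flat "blah, blah, blah" tags and a deep
ontological tag like {this->is->some->hierarchy->the_tag_you_want} is
mostly that the latter is a path, so queries can match on any prefix of
the hierarchy. A small sketch, assuming the arrow syntax from above
(which is just the notation in this email, not any real system's):

```python
# Hierarchical tags as paths: "{a->b->c}" is the path ["a", "b", "c"],
# so "find everything under a->b" is just a prefix match. The arrow
# syntax is taken from the example in the text, not from a real system.
def parse_tag(tag):
    return tag.strip("{}").split("->")

def tagged_under(items, prefix):
    """Return item names whose tag path starts with `prefix`."""
    p = parse_tag(prefix)
    return [name for name, tag in items if parse_tag(tag)[:len(p)] == p]

items = [
    ("shovel", "{tool->digging->shovel}"),
    ("trowel", "{tool->digging->trowel}"),
    ("solar_panel", "{energy->solar->panel}"),
]
print(tagged_under(items, "{tool->digging}"))
```

A blog-style flat tagger can't answer the "{tool->digging}" query at
all; that prefix structure is what would come in handy for skdb.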

> And, as before, I question if we can be "carefully documenting what's needed
> and not needed" in advance of at least some content to play with.

That's certainly true. But lots of other people have lots of other
ways to play with this sort of content already, and so isn't our idea
to develop tools to facilitate this playing (but not overriding it,
necessarily), yes? Plus community organization so it's not too dead. Per my
mention of "nobody else knows what's going on either" above.

> Also, as I see it, the main issue of interest (to me, and presumably the
> community) is no longer how to add content and tags (given the wiki), but
> what content and tags to add. Still, I know that is a lot of fun to focus on
> that technical side too. Let's call that the "pasture". :-) Obviously, over

*What* to add is a technical issue ... isn't the idea that we are
working on semantics, organizing, etc.? Who cares if the bit is
ultimately a zero or one, who cares if we can't quite confirm the bit
at this exact moment, as long as we have the procedures for setting it
up soon and somebody's interested in doing that? Think of this as an
information compilation project, right? And we're working on the
internals of the compiler, as within society, just as much as debian's
team is a compiler of software (sort of - gcc compilation as well as
social aggregation). So they may not necessarily write a single line
of code, but they do manage it all and make tweaks when necessary to
make the puzzle fit, and it's all possible because of the distributed
tools (Whole Earth style? ;-)) that the programmers have been using
for decades. Same thing with engineers and the material scientists.
It's the procedural information, but it's not necessarily the
procedure of tagging.

> time, various systems may do tagging in different ways either technically or
> semantically (ontologically).

'course.

> http://en.wikipedia.org/wiki/Ontology_%28computer_science%29
> But, sorry, for the moment I personally am no longer much interested in
> alternative implementations (greener pastures) so much as both content (cow)
> and related metadata (bull) in the wiki (pasture) that is up there right now
> -- at least until it is overgrazed. :-)

Let's make an example. I have 50 GB of cached data on my local network
(heh, actually, 20 GB of it is not currently available due to the
recent crash). It's semantically wired together at least to some
extent -- for example, there's a few files that have some categorical
orientation. But the nuggets within them are just plaintext [[as, may
I point out, is completely natural -- but that's just the current
state of the internet, and mimicking this is nothing new]]. Wasn't the
idea to have the semantic files to encapsulate this information in a
way that is technically documented and so on? A good example would be
unit requirements of parts, which has to be specified in a way that
can be parsed and interpreted (while also human readable (thus YAML,
among other reasons)) - but if you think that dumping content, which
is already easily accessible over the internet (just ask for some
links and I'll do some dumps), is how to get the ball rolling, I see a
lot of parts missing. Anything other than semantically-encapsulated
(for lack of better terminology - maybe "substructs") is just a cache
of the web, maybe with categorical sorting of the larger units,
perhaps/at-best(?). Still dry ... don't know how else to explain it.
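The unit-requirements example is concrete enough to sketch. Here is one
such record shown as the Python structure a YAML loader would produce
-- machine-parseable while staying close to what a human would write
(the field names and numbers are invented for illustration, drawing on
Mike's 13g-of-aluminum shovel example from earlier in the thread):

```python
# A unit-requirements record for a part: parseable, but readable, which
# is the point of using YAML. Field names and values are invented.
part = {
    "name": "metal part",
    "material": {"formula": "Al", "mass_g": 13},
    "alternates": [
        {"formula": "Ti", "mass_g": 9},
    ],
}

def materials_needed(p):
    """List every (formula, grams) option that could satisfy the part."""
    options = [p["material"]] + p.get("alternates", [])
    return [(m["formula"], m["mass_g"]) for m in options]

print(materials_needed(part))
```

A plain cached web page about shovels can't be queried this way; a
substruct like this can, which is the difference I'm trying to point at.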

> So, you are (implicitly) accusing me of cow (article mongering), and maybe
> vice versa (ontology mongering, or maybe pasture mongering. :-)

I don't know what I am accusing you of or not anymore, but I'd like to
suggest that I'm just offering an idea of the broader picture and how
we can cope with it; it may seem rambling, but it's somewhat of a
paradigm shift from thinking 'article mongering' is bad versus
'unstructured article mongering', perhaps that's a good way to put it?
As for pasture mongering / digestion, that's good.

> But, really, we need all three. We need the articles (cow) and the thinking
> about them and their interrelations (bull), of which presumably the marriage
> of the two in a green-enough wiki (pasture) should then result in meaningful
> and useful offspring (Mars habitations, Earthly Eco-cities, and so on. :-)

Well yeah, but you also mentioned that you're not interested in the
technicality of those three processes, but I see article fetching
(queried pulling from websites to process - see http://theinfo.org/
maybe), thinking (in as much as we can use digital communication tech
to get people talking together as we have seen so far), and their
management in repositories/wikis, as all technical aspects. Moreover,
the 'useful' part -- that's what we're here to help automate and put
into the hands of ourselves and users, right?

> And Mike has taken the first step towards that by putting articles like on
> Solar energy (cow) and tags (bull) on the Semantic MediaWiki (pasture). Even

That looks no different from other pages that we've seen on my site:
http://heybryan.org/mediawiki/index.php/DNA_sequencer
http://heybryan.org/thinking.html
http://heybryan.org/graphene.html
http://heybryan.org/mediawiki/index.php/DNA_synthesizer
http://heybryan.org/mediawiki/index.php/Microarray
http://heybryan.org/mediawiki/index.php/AFM_nanolithography
http://heybryan.org/mediawiki/index.php/Meat_on_a_stick

So I don't see how the Solar_Energy article is new in terms of what we
want to see happening; outside of this context of increasing
development and sophistication of the semantic web, I think it's great
that Mike is doing pages like that -- a good habit of infohoarding.
Though you could easily argue that I am biased.

> with all the respective work by you and I, it was Mike who took the first
> pioneering step for humankind towards giving the world a freely-licensed
> repository of manufacturing data and metadata. (Laying it on thick enough
> for you, Mike? :-) Sure, his ontology is buggy and incomplete. And sure,
> maybe the solar energy article could be rewritten to separate the basic
> theory and designs from the extraterrestrial applications somehow. But it is

re: separation; I don't see that as relevant to the idea here. Isn't
it that there would be a **project** that uses solar energy? Solar
energy is kind of like a unit, to be used by GNU units, so as a
reference article I think it's fine, as long as it eventually links
over to the semantic projects that are more structured and so on. In
fact, all of this email might have been simplified by that simple
realization. It's not so much the 'theory' -- I think it'll fit well
if you consider it as a general introductory article to the topic, for
people not too much in the know about the solar energy input
variables, although I think it would be wise to separate the idea of
photonic energy from solar energy, which basically just goes back to
one of the fundamental units, like photonic flux or something? I
forget what it is. CRC should know.
^ so you can tell that I did *not* revise the rest of this email after
typing that

> all three together (cow plus bull plus pasture), and I can almost hear the
> patter of little hooves already! Well, maybe in a decade or two. :-)

So ... reworking your analogy of cow-bull-pasture, Solar_Energy
doesn't fall into any of those, since it's a fundamental unit that can
be explained by experimental projects that can be added via the git
repos, linking back to the text documentation about how the experiment
was set up and so on (these being part of the dot skdb files).

> The thing is, anyone with a certain set of mental abilities can bull at a
> moment's notice, but to cow thoroughly by *anyone* takes at least a little

Cow=mapping, right? I find mapping easier than digesting, since you
get to make lists of lists and so on, up to the point until you
realize somebody has already partially digested the material for you
and you get to take it a few steps further, etc. :-)

> hard work. But to do both (cow and bull) in a very thorough way takes the
> most work of all, and usually takes years of living in the middle of a
> problem space (usually one full of manure. :-) And then there is the work of
> getting a pasture set up and maintaining it (mending fences, etc.) too.

"Pain is the cost of the maintenance of boundaries" [though it doesn't
have to be that way, IMHO].

> I'm not saying you have not done a lot of all three with your site (cow,
> bull, MediWiki pasture), but unless the work is also out there under a free
> license that defines a constitution for collaboration,
> http://www.kurtz-fernhout.com/oscomak/SSI_Fernhout2001_web.html
> and is in the right size chunks, it can't be built on stigmergically IMHO.

What? I already mentioned the robots.txt file, which clearly states
anybody can copy and so on. I am pretty sure that robots.txt has been
held up in courts of law before too (same with GPL, hurray!).

> (I'm certainly guilty of all this myself, like with long emails.) So it will
> all have to be "treated as damage and routed around" by the free community
> on the internet as far as stigmergy. :-( (*)

As damage? It's *right there*.

> That is not to diminish the potential future value of greener pastures and
> alternative implementations like SKDB of course. But without freely and
> formally licensed content and metadata (cows and bulls) any implementation
> (pasture) is not of much current use. Even the wiki (pasture) at OSCOMAK.net

How is it possible for skdb/metarepo to need a license when it's more
like the map towards putting together all of the puzzle pieces?
(semantic web, ikiwiki, git, repos, open access, gpl, etc.)? Surely
you don't see this as an entire centralization project? (Of course,
aggregating all of the information together is somewhat of a
centralization process, but at the same time we see many individuals
doing this with news and other publications with little problem, as
long as all of the licenses are maintained and so on).

> is pretty useless to the casual browser (carnivore? dairy farmer?) at the
> moment, since I have been the worst offender to date as far as posting
> philosophy (manure) but not adding articles (cows and bulls) to the wiki
> (pasture). :-) (**)

I think we are on different wavelengths. Blatantly dumping content, as
I have on my caches and hard drives over the years, doesn't make it
all magically come alive. :-(

> One issue may be that pastures and cows and even manure are much easier to
> deal with than raging bulls (thinking :-) for most people: (***)
> http://www-03.ibm.com/ibm/history/multimedia/think_trans.html
> "And we must study through reading, listening, discussing, observing and
> thinking. We must not neglect any one of those ways of study. The trouble
> with most of us is that we fall down on the latter -- thinking -- because
> it's hard work for people to think, And, as Dr. Nicholas Murray Butler said
> recently, 'all of the problems of the world could be settled easily if men
> were only willing to think.' "
>
> So, bulling is harder than cowing for most people. Some people are the
> opposite, of course. :-) But as a note taped to Marty Johnson's computer
> monitor in his office said (noticed the one time my wife and I met him):
> http://www.isles.org/
> "You can't plow a field by turning it over in your mind".

Not true. "As I move, so I move the universe." Your mind, your brain,
is how you are grounded with the world around you ...

> Of course, I liked to joke to my wife that did not apply to theoretical
> mathematicians or a lot of computer programming or research. :-) But I do
> think it applies to a big extent here -- we need both cows and bulls and we
> already got a pasture -- even if it may not be as green as I hoped for and
> the ones over there (Pointrel?) and there (SKDB?) looks mighty greener to me
> and to you. :-)

Sounds like cultural relativism to me - "everybody is equally good,"
as opposed to discussing the fundamental issues that we're here to
solve in the first place. But before we get to this please see the
content above and we'll chug through that and see what comes of it,
then maybe back to these points.

- Bryan

Bryan Bishop

unread,
May 6, 2008, 7:50:23 PM5/6/08
to openv...@googlegroups.com, kan...@gmail.com
On Tue, May 6, 2008 at 5:43 PM, mike1937 <arid_...@comcast.net> wrote:
> > AP Calculus BC tomorrow. Wish me luck
> > http://heybryan.org/mediawiki/index.php/Cal2 <-- my self-made review
>
> Good luck! I've got AB tomorow, my demonic teacher gave us an
> additional final the day before the actual AP test. Thanks for the
> review material.

Sure thing. :-)

> As I discovered here:
> http://www.wikimatrix.org/show/ikiwiki
> ikiwiki can have file attachments. If that means what I think it
> means, why the heck aren't we using it?

**but we are** :-(
http://fennetic.net/autogenix/
try running this command: git clone http://fennetic.net/autogenix/autogenix.git
and then apt-get install ikiwiki for good measure.

Paul's just been off on Semantic MediaWiki. ;-)
^ saying this jokingly.

Re: git, I have many links somewhere over here:
http://heybryan.org/mediawiki/index.php/2008-04-25

- Bryan

Paul D. Fernhout

unread,
May 6, 2008, 9:22:05 PM5/6/08
to openv...@googlegroups.com
mike1937 wrote:
>> And as with your solar energy article
>> http://www.oscomak.net/wiki/Solar_Energy
>> which duplicates the Wikipedia page url, maybe so what?
>
> I didn't think it through much, but I guess my half-formed sub-
> conscious thought was that it basically was the wikipedia article, I
> just paraphrased the parts I thought were pertinent, it was almost
> more for my benefit to take notes.

Well, it's great to start somewhere.

As Hans Moravec told me when I hung out in his lab, the secret to success in
research is to fail often as a child or student.(*) Most research is
failure, so if you get used to it early, that serves you your whole career
long. Most successful academics are, of course, thus temperamentally
unsuited to do research. Which explains a lot. :-)

That's another reason why places like Google shoot themselves in the foot
hiring only proven successes. They might have better luck hiring spectacular
failures. :-)

> As for conditional tagging, there's no good way to do it unless halo or
> smw have a syntax I'm not aware of. My best idea would be to make the
> property name what it is used for, with the shovel "metal
> part" (except preferably a little more specific) and make the value
> 13g of aluminum (exact chemical formula of the material (and that is
> a variable type I believe) would be a must), then add a new property
> for alternate materials. If it needed two materials for the metal
> part, the best solution might be to make a new object (article? I
> would really prefer the greener pasture of an object oriented
> language) which in turn has those two needed properties (then you can
> add more alternates for those parts too, if the isru plastics article
> taught me anything its that the "cow" part of it never turns out to be
> simple).

Interesting idea. I've been thinking on this and realized that it isn't
actually essential from the manufacturing web analysis point of view how
much of something you need, at least as a crude first approximation. If you
need an ounce of pure aluminum, you might as well need a ton of it, as far
as needing a way to produce it. As you fine tune the design, then quantities
matter more and more as the choice of process to make aluminum might be
affected by the scale and frequency of the need. And certainly quantity is
needed for simulation. Anyway, an interesting suggestion.
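That first approximation -- what matters initially is *whether* you can
produce something, not how much -- can be captured as a plain
dependency closure over a parts/processes graph. A sketch with made-up
dependency data:

```python
# First-approximation manufacturing web analysis: ignore quantities and
# compute the transitive set of things you must be able to produce.
# The dependency data here is invented for illustration.
requires = {
    "solar_panel": ["aluminum", "silicon_wafer"],
    "silicon_wafer": ["silicon", "furnace"],
    "aluminum": ["bauxite", "electrolysis_cell"],
}

def production_closure(target, deps):
    """Everything needed, directly or indirectly, to make `target`."""
    needed, stack = set(), [target]
    while stack:
        item = stack.pop()
        for dep in deps.get(item, []):
            if dep not in needed:
                needed.add(dep)
                stack.append(dep)
    return needed

print(sorted(production_closure("solar_panel", requires)))
```

Quantities, scale, and process choice would then refine this set later
in the design, as the paragraph above suggests.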

> Like you said, my ontology is very flawed, in part because we are
> using a smw (pasture with some randomly placed nightshade) where skdb
> or some other database (magical unicorn planet where nectar springs
> from the ground) would be better. I've said it before and I'll say it
> again: it's a wiki for a reason and anyone can feel free to change it.

Well, at least the content. But not easily the architecture. Anyway, if in
the end we all understand why, say, Bryan is right about ontologies and the
limitation of Semantic MediaWikis, then as a learning community we will be
that much further along IMHO.

We're (I hope!) one of those Engelbart outposts on the frontiers of
knowledge. Even if just our own. :-)

One of the lurkers is probably laughing at us right now as the know-it-all
who won't tell. :-)

At least God, if he/she/it/them/other is out there, probably is. :-)

> If you wanted to, you could try using the most common syntax for "or"
> and hope for the best. I believe it's || or something? Might want to
> throw some ! and & operators into it just for good measure.

Maybe. Can you supply an example?

> It's seeming to me like tags may, in the end, prove utterly worthless
> for anything but organization.

Interesting to hear you say that now that you are the (relative) expert here
on Semantic MediaWiki.

> However a wiki is the best possible
> front end for human readable information.

Well, I think I know what you mean, even if I can imagine better. :-)
And that's just thinking about stuff 30 years old. :-)
http://www.mojowire.com/TravelsWithSmalltalk/DaveThomas-TravelsWithSmalltalk.htm
"XSIS and The Customer Information Analyst Why would Xerox develop an
incredible spreadsheet that could display images, conjugate Russian verbs
and why did that happen in a strange group called XSIS located in Los
Angeles and Washington? Apparently they had an important customer with a lot
of complex information to analyze. How did Angela Coppola know that 1000
people would show up for OOPSLA'86 when the PC committee predicted 100-200?
What sort of technology could the National Security Administration use to
print Chinese leaflets circa 1978? The Xerox Analyst served the CIA as a
analytic tool for many years. Even 13 years later it still offers tools more
powerful than MSOffice. The Analyst is still alive and well and forms a key
component in TI ControlWorks Wafer Fab Automation System."

> Is it at all possible to
> have the wiki page function as the text document in a project, then
> just have all the meta data, CAD's, etc, accessible from the article?
> That actually seems like it would be easy... all you would need is a
> program for storing things on the server, then put an external link in
> the wiki article to them. A little ugly, sure, but very easy. I'm just
> polluting cyberspace with my half baked thoughts, feel free to ignore
> me if it's retardedly impossible. I just took a calculus final. I'm
> happy to keep myself from drooling.

That's some very interesting drool you have there. :-)
Maybe compulsory exams are good for something after all. :-)
http://en.wikipedia.org/wiki/Free_school
http://en.wikipedia.org/wiki/Unschooling

Yes, I could see how we could make a site that essentially put Wikipedia
articles in a frame:
http://www.w3.org/TR/html4/present/frames.html
"HTML frames allow authors to present documents in multiple views, which may
be independent windows or subwindows. Multiple views offer designers a way
to keep certain information visible, while other views are scrolled or
replaced. For example, within the same window, one frame might display a
static banner, a second a navigation menu, and a third the main document
that can be scrolled through or replaced by navigating in the second frame."

Then we could surround the frame with ontological information which was
edited more directly (maybe as fancy as Halo, maybe not).

Here is an example of a site that does something similar (there are others I
remember from many years ago):
http://webride.org/
"Webride attaches discussion forums to each and every web page on the fly."

Here is OpenVirgle.net in that frame:
http://webride.org/discuss/split.php?uri=http%3A%2F%2Fwww.openvirgle.net

The main Wikipedia site has one comment:
http://webride.org/discuss/split.php?uri=http%3A%2F%2Fwww.wikipedia.org

One issue with this approach is that we might need to add new content to
Wikipedia (which might get deleted) or still have a local regular MediaWiki.

Personally, I was never keen on tags in the article text myself; that's why
I felt Halo was so exciting (if in the end a little slow on an older machine
and maybe still buggy). Still, trying to manage the tags in text is one of
the reasons Halo is slow. And I notice bugs in the presentation of the tags
too, like a dangling "True" here:
http://www.oscomak.net/wiki/Liquid_breathing_to_resist_bone_loss

Despite everything I've written championing MediaWiki, if we try it some
more and it doesn't work, then as for changing code on the server, to quote
Mystery Men:
http://www.adherents.com/lit/comics/MysteryMen.html
"Shoveler: Nothing I couldn't move around." :-)

That is especially true with the new server (maybe online tomorrow, we'll
see, nothing firm yet). I can run long-term processes like the JVM on a
dedicated server (and so avoid startup overhead). So I could put up some version
of, say, Jython/Pointrel code like the stuff you played with from the SVN
repository or on the server. Or most anything else free that exists.

But I don't say that to say stop using the Semantic MediaWiki. We (mostly
you :-) are making good progress understanding its strengths and weaknesses,
and that will serve us well whether we continue to use it or export the
content to something new (including even back to client-side tools for
editing as opposed to browsing. :-)

Anyway, more feedback on the Wiki is always appreciated. The last time I put
something like OSCOMAK up, the complexity of choosing the standard but
complex "Zope" helped torpedo it. The great thing about standards is there
are so many to choose from. :-) Not sure who said that first?

--Paul Fernhout
(*) Fortunately, I (semi)intentionally failed a class (Physics) in college
in part to see what it felt like, so I think I just barely qualify for a
researcher career by Hans' criterion. (George Miller's was "publish
something as an undergraduate". :-)
http://www.alibris.com/booksearch.detail?S=R&bid=9085995928&cm_mmc=shopcompare-_-base-_-aisbn-_-na
Not to say anything bad about the Physics professor himself, who is a very
likable guy; he probably wouldn't even have failed me if I hadn't missed too
many labs (for band practice) to technically pass, and wasn't interested in
making them up. It's hard to fail a course at PU; you have to work at it, I
found out. :-) Here's the poor guy who had to deal with me often being
late to class (nobody else was ever late, strange thing):
http://nobelprize.org/nobel_prizes/physics/laureates/1993/taylor-autobio.html
Funny thing is, I explained that all to a dean (about missing the labs) and
they still gave me lab credit for the course. :-) Bureaucracy. :-) Probably
someone will figure out how to revoke my diploma now. :-) Well go ahead, I'm
tired of all the junk mail even after I asked it be stopped. :-)
Anyway, maybe even back then I could smell a cult. :-)
http://www.disciplined-minds.com/
"Upon publication of Disciplined Minds, the American Institute of Physics
fired author Jeff Schmidt. He had been on the editorial staff of Physics
Today magazine for 19 years."
I'm not saying Immanuel Velikovsky is right about anything, but James P.
Hogan makes pretty clear (Kicking the Sacred Cow - July 2004) how badly he
was treated by professional physicists:
http://en.wikipedia.org/wiki/Immanuel_Velikovsky
despite usually being correct in advance of the facts on several things.
By chance, the professor's parents had an organic farm I later helped
certify. :-) And I told them (truthfully) what a wonderful professor their
son was, and they were rightfully very proud of him. If I had known at PU
that he was a Quaker and cared about such things (I didn't), maybe I would
have been more on time at least. :-( From his autobiography: "Both the Evans
and Taylor families have deep Quaker roots going back to the days of William
Penn and his Philadelphia experiment. My parents were living examples of
frugal Quaker simplicity, twentieth-century style; their very lives taught
lessons of tolerance for human diversity and the joys of helping and caring
for others."
Now that, I can respect.

Bryan Bishop

May 6, 2008, 9:41:59 PM
to openv...@googlegroups.com, kan...@gmail.com
On Tue, May 6, 2008 at 8:22 PM, Paul D. Fernhout
<pdfer...@kurtz-fernhout.com> wrote:
> > As for conditional tagging, there's no good way to do it unless halo or
> > smw have a syntax I'm not aware of. My best idea would be to make the

The skdb designs provide for the 'conditional tagging' you need: you can
think of it as "provides" relationships and so on (sort of like Pointrel),
and this can be used in an algebraic way. That's easy, as long as the
fundamental requirements can be built up; this is generally done through
the autospec program that Ben has been writing.
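
To make the "provides" idea above concrete, here is a toy sketch; all the class, function, and attribute names below are made up for illustration and are not the actual skdb design:

```python
# Toy model of "provides"/"requires" relationships between packages.
# All names here are illustrative, not the real skdb data model.

class Package:
    def __init__(self, name, provides=(), requires=()):
        self.name = name
        self.provides = set(provides)
        self.requires = set(requires)

def satisfiable(package, available):
    """One-level check: is every requirement provided by something available?
    (A real system would recurse into each provider's own requirements.)"""
    provided = set()
    for p in available:
        provided |= p.provides
    return package.requires <= provided

stock = [
    Package("smelter", provides={"aluminum"}, requires={"bauxite", "electricity"}),
    Package("mine", provides={"bauxite"}),
    Package("solar-panel", provides={"electricity"}),
]

shovel = Package("shovel", requires={"aluminum", "wood"})
print(satisfiable(shovel, stock))  # False: nothing in stock provides "wood"
```

The set-containment test is what makes the relationships "algebraic": alternatives and substitutions become set operations rather than special cases.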

> > property name what it is used for, with the shovel "metal
> > part" (except preferably a little more specific) and make the value
> > 13g of aluminum (exact chemical formula of the material (and that is
> > a variable type I believe) would be a must), then add a new property

Yep, the exact chemical formula would be its own object represented in
the skdb package file for this particular (sub)project.

> > for alternate materials. If it needed two materials for the metal
> > part, the best solution might be to make a new object (article? I
> > would really prefer the greener pasture of an object oriented

It's not an object-oriented language per se, but I guess that's what
it looks like. Have you gone to check out the yaml.org documentation
yet? The py-yaml documentation is also fantastic and worth checking
out: lots of examples, and it should spark the synapses a bit.

> > language) which in turn has those two needed properties (then you can
> > add more alternates for those parts too, if the isru plastics article
> > taught me anything its that the "cow" part of it never turns out to be
> > simple).

The needed parts - we call these dependencies, and that's the !! line
provided by the yaml metadata file, with an extendable data structure
(list) to specify further requirements and the type of requirement
(software only? for the fundamental stability of the skdb package?)
and so on.
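
For illustration, a dependency list like that might look something like the following when loaded with py-yaml; the field names ("depends", "kind") are invented for this sketch and are not the real skdb schema (the `!!` type tags are omitted here for simplicity):

```python
# Hypothetical skdb-style metadata parsed with py-yaml (PyYAML).
# The field names ("depends", "kind") are invented for illustration.
import yaml  # third-party: PyYAML

metadata = """
name: shovel
depends:
  - name: aluminum
    kind: material   # e.g. material, part, software
  - name: wood-handle
    kind: part
"""

pkg = yaml.safe_load(metadata)
for dep in pkg["depends"]:
    print(dep["name"], "->", dep["kind"])
```

The point is just that an extensible list of typed requirements maps naturally onto YAML sequences of mappings.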

> Interesting idea. I've been thinking on this and realized that it isn't
> actually essential from the manufacturing web analysis point of view how
> much of something you need, at least as a crude first approximation. If you
> need an ounce of pure aluminum, you might as well need a ton of it, as far
> as needing a way to produce it. As you fine tune the design, then quantities

The question is: where does the fine-tuning of the design happen? I suspect
it's somewhere between autospec and computer simulation, not necessarily
the database end of things; so like I was saying in my second to last
email here, there needs to be a way to help developers confirm that
their packages are making sense, and a way to make sure they can
simulate the increasing quantities or something, or represent that
from a registry of skdb packages (much like the new partsregistry.org
website for biobricks.org :-)).

> matter more and more as the choice of process to make aluminum might be
> affected by the scale and frequency of the need. And certainly quantity is
> needed for simulation. Anyway, an interesting suggestion.

Simulation in the tool-chain is going to come later, as far as I can
tell, but I am open to suggestions on that front. From what I can
tell, simulation in Python files would be what we'd use; it'd be wise
to investigate the already existing simulation frameworks out there
(there are many), so that we can use them for individual skdb
projects. The simulation Python files would be placed within the dot
skdb file, of course. Or s/file/repo/, technically.

> > Like you said, my ontology is very flawed, in part because we are
> > using a smw (pasture with some randomly placed nightshade) where skdb
> > or some other database (magical unicorn planet where nectar springs
> > from the ground) would be better. I've said it before and I'll say it
> > again: it's a wiki for a reason and anyone can feel free to change it.
>
> Well, at least the content. But not easily the architecture. Anyway, if in
> the end we all understand why, say, Bryan is right about ontologies and the
> limitation of Semantic MediaWikis, then as a learning community we will be
> that much further along IMHO.

Btw, I would be interested in discussing the implementation of a
semantic wiki on top of ikiwiki. It *should* be an easy addition to
the source code. And if not, we get to go complain to Joey Hess. ;-)

> We're (I hope!) one of those Engelbart outposts on the frontiers of
> knowledge. Even if just our own. :-)

Hm, "Engelbart outpost". I need to import that into my working vocabulary.

> > If you wanted to, you could try using the most common syntax for "or"
> > and hope for the best. I believe it's || or something? Might want to
> > throw some ! and & operators into it just for good measure.
>
> Maybe. Can you supply an example?

Huh? Yes, please. And how does that, at all, contribute to the
underlying architecture? (I am simply confused; please inform.) Maybe
this is in the semantic-mediawiki-docs?

> > It's seeming to me like tags may, in the end, prove utterly worthless
> > for anything but organization.
>
> Interesting to hear you say that now that you are the (relative) expert here
> on Semantic MediaWiki.

I have always wanted to do personal ontologies through Wikipedia, for
example. That's basically what my 12,000 bookmarks are:
http://heybryan.org/bookmarks/bookmarks-old2/

> > However a wiki is the best possible
> > front end for human readable information.

Yep, ikiwiki my friend.

> Well, I think I know what you mean, even if I can imagine better. :-)

Xanadu? :-/ Information interface is always going to be a murky
problem, I think. But I don't want to give up hope quite yet.

> And that's just thinking about stuff 30 years old. :-)
> http://www.mojowire.com/TravelsWithSmalltalk/DaveThomas-TravelsWithSmalltalk.htm
> "XSIS and The Customer Information Analyst Why would Xerox develop an
> incredible spreadsheet that could display images, conjugate Russian verbs
> and why did that happen in a strange group called XSIS located in Los
> Angeles and Washington? Apparently they had an important customer with a lot
> of complex information to analyze. How did Angela Coppola know that 1000
> people would show up for OOPSLA'86 when the PC committee predicted 100-200?
> What sort of technology could the National Security Administration use to
> print Chinese leaflets circa 1978? The Xerox Analyst served the CIA as an
> analytic tool for many years. Even 13 years later it still offers tools more
> powerful than MSOffice. The Analyst is still alive and well and forms a key
> component in TI ControlWorks Wafer Fab Automation System."

Fab automation system? That's fairly relevant here, since we're
essentially proposing FPGA for fablabs and a matter compiler to boot.

> > Is it at all possible to
> > have the wiki page function as the text document in a project, then
> > just have all the meta data, CAD's, etc, accessible from the article?

[Please refer to the replies I sent to Mike on this in the other
email. This was suggested by me on the other side of last month,
IIRC.]

> > That actually seems like it would be easy... all you would need is a
> > program for storing things on the server, then put an external link in

To store on the server, try git push, etc.

> > the wiki article to them. A little ugly, sure, but very easy. I'm just

More than just wiki articles - also txt, html, zip, tar, url, etc.
etc. A diversity of information all packaged into it, with the
automated user interface (agx-get) to facilitate the downloading of
that information, and so on. Remember?

> > polluting cyberspace with my half baked thoughts, feel free to ignore
> > me if it's retardedly impossible. I just took a calculus final. I'm
> > happy to keep myself from drooling.

http://heybryan.org/exp.html
- click on the bottom link to the Eric Hunting chat re: cyberspace,
automation, grounding the semantic web.

> Yes, I could see how we could make a site that essentially put Wikipedia
> articles in a frame:
> http://www.w3.org/TR/html4/present/frames.html

Woah, what? Any self-respecting developer ... erm. No, I'll save this
for another time (on a very tight schedule) -- basically, this isn't a
good idea. Let's just download and import the Wikipedia articles into
the database.

> Then we could surround the frame with ontological information which was
> edited more directly (maybe as fancy as Halo, maybe not).

Gaahhh. The pain. :-)

> Here is an example of a site that does something similar (there are others I
> remember from many years ago):
> http://webride.org/
> "Webride attaches discussion forums to each and every web page on the fly."

Of this type, there are many Firefox extensions now, but it's not
really that much of a semantic web, as much as they like to advertise;
there's no realtime browser-to-browser communication protocol, as I
suggest on my website (http://heybryan.org/ from 2006), one of the
social browsing projects that I kicked around before (and even after,
unfortunately) I learned of trexy/prefound and friends.

Must be old. One of the disadvantages is that it's not an underlying
architecture; it's just another layer, and it's proprietary in some
ways, if not in terms of licensing then in terms of implementation.
Eh, hard to explain these sorts of solutions.

> One issue with this approach is that we might need to add new content to
> Wikipedia (which might get deleted) or still have a local regular MediaWiki.

Yep, that's the idea of using git with ikiwiki - users can push around
content and update each other when they want, integrate and
synthesize, or not at all (as Wikipedia (as a giant, weird collective)
opts to).

> Despite everything I've written on MediaWiki as its champion, if we try it
> some more and it doesn't work, as to changing code on the server, to quote
> Mystery Men:
> http://www.adherents.com/lit/comics/MysteryMen.html
> "Shoveler: Nothing I couldn't move around." :-)

What?

> That is especially true with the new server (maybe online tomorrow, we'll
> see, nothing firm yet). I can run long term processes on a dedicated server
> like the JVM (and so no startup overhead). So I could put up some version
> of, say, Jython/Pointrel code like the stuff you played with from the SVN
> repository or on the server. Or most anything else free that exists.

Re: free. Have people been thinking that I am calling them gits when I
mention git, the free RCS (superior to SVN, in various ways)? Just
wondering. Would explain a lot.

> But I don't say that to say stop using the Semantic MediaWiki. We (mostly
> you :-) are making good progress understanding its strengths and weaknesses,
> and that will serve us well whether we continue to use it or export the
> content to something new (including even back to client-side tools for
> editing as opposed to browsing. :-)

Hm, that distinction isn't necessary. Assume the ikiwiki scenario. In
that one, the client tools to edit and manage content really all work
over HTTP, or if using git then there's the added git protocol that
could be used for massive transfers, so it's not that big of a
boundary to cross.

> Anyway, more feedback on the Wiki is always appreciated. The last time I put
> something like OSCOMAK up, the complexity of choosing the standard but
> complex "Zope" helped torpedo it. The great thing about standards is there
> are so many to choose from. :-) Not sure who said that first?

Probably the first guy who realized he has 400 text editors on Linux
to choose from.

> (*) Fortunately, I (semi)intentionally failed a class (Physics) in college
> in part to see what it felt like, so I think I just barely qualify for a
> researcher career by Hans' criterion. (George Miller's was "publish
> something as an undergraduate". :-)

Moravec was a good guy to work with ... did you target him? Did you
"know" before just wandering in? What was the deal?

- Bryan

mike1937

May 6, 2008, 10:13:41 PM
to OpenVirgle
> > > If you wanted to, you could try using the most common sytax for "or"
> > > and hope for the best. I believe its || or something? Might want to
> > > throw some ! and & operators into it just for good measure.
>
> > Maybe. Can you supply an example?
>
> Huh? Yes, please. And how does that, at all, contribute to the
> underlying architecture? (I am simply confused; please inform.) Maybe
> this is in the semantic-mediawiki-docs?

I think I'm thinking on a level below you guys; I was half kidding. I
looked it up, and | or || are used in regular Java as the "or"
operator, so if one was expecting a machine to parse semantic tags,
they might type "13g aluminum || 14g tin" into the string field for
a variable. It would be foolish to assume a script would work like
that or use crummy Java syntax.
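
For what it's worth, a machine-side parser for that "||" alternatives idea would be trivial; the sketch below is purely illustrative, since neither Semantic MediaWiki nor skdb actually uses this syntax:

```python
# Toy parser for "||"-separated material alternatives, as floated above.
# Purely illustrative; this is not a real SMW or skdb syntax.

def parse_alternatives(value):
    """Split '13g aluminum || 14g tin' into (quantity, material) pairs."""
    options = []
    for part in value.split("||"):
        qty, material = part.strip().split(None, 1)
        options.append((qty, material))
    return options

print(parse_alternatives("13g aluminum || 14g tin"))
# [('13g', 'aluminum'), ('14g', 'tin')]
```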


Doram

May 7, 2008, 4:18:48 PM
to OpenVirgle
I'm sorta with Mike on this, feeling like I am at least a couple of
rungs below the dialog here, but I do want to reiterate a point I made
a while back. I did mention the SKDB as a superstructure of the Wiki,
and then I mentioned having it act as a go-between for other Wikis and
info sources, to functionally connect the data, even if not hosted on
the same server. I realize that this almost amounts to a miniaturized
version of the internet itself (although actually purposed). I like
the idea I saw of framing Wikipedia with ontological/semantic frames.
Maybe that can be done with more than just Wikipedia. That speaks to
something else I said about avoiding the copyright infringement issues
brought on by direct copying, but it seems like a better idea than my
proposed tagged linkdumps (which I even said at the time was a stopgap
measure).

Of course, I can't remember where I posted any of that, but I can
return with quotes later, if necessary. (I don't feel like reiterating
that much, and I don't feel like researching much either. I am tired
today. My son is sick, and so am I. |P Blah. Head cold...)

I am definitely starting to think that some of the value of the work
that we are/will be doing is going to be the purposed collection/
coordination/utilization of these disparate sources. I also agree with
Bryan's statement that we will eventually benefit from cooperation
with other standardization entities out there, although we need to
firm up what we are working on here first a little.

Doram wanders off to make some more non-technology management tools...

Bryan Bishop

May 7, 2008, 5:52:30 PM
to openv...@googlegroups.com, kan...@gmail.com
On Wed, May 7, 2008 at 3:18 PM, Doram <DoramB...@gmail.com> wrote:
> I'm sorta with Mike on this, feeling like I am at least a couple of
> rungs below the dialog here, but I do want to reiterate a point I made

Maybe there's something I can explain in more depth?

> a while back. I did mention the SKDB as a superstructure of the Wiki,

Only in as much as the debian repo architecture is a superstructure of
deb files.

> and then I mentioned having it act as a go-between for other Wikis and
> info sources, to functionally connect the data, even if not hosted on

[Same sentence].

> the same server. I realize that this almost amounts to a miniaturized
> version of the internet itself (although actually purposed). I like

No, that's not the internet, that's just taking advantage of the internet.

> the idea I saw of framing Wikipedia with ontological/semantic frames.

Avoid frames at all costs:
http://www.html4.com/mime/markup/php/standards_en/html_misuses_en/html_misuses_21.php
http://universalusability.com/access_by_design/frames/avoid.html
http://www.hobo-web.co.uk/tips/41.htm
etc.

> Maybe that can be done with more than just Wikipedia. That speaks to
> something else I said on the avoidance of copyright infringement issue
> brought by direct copying, but seems like a better idea than my
> proposed tagged linkdumps (which I even said at the time was a stopgap
> measure).

Sorry, I don't see how that avoids the copyright issue. You're just
making it so that the electrons are served up to the user in a certain
way. It's going through tons of caches all over the internet as
packets are flung back and forth and all over the place. There is no
guarantee, and indeed it's incredibly unlikely, that the exact
electrons or holes and voltage spikes that the server transmitted are
in any way, shape, or form the actual data that the user receives ...
tells you something, doesn't it?

- Bryan

Doram

May 7, 2008, 11:38:47 PM
to OpenVirgle
What I meant about the copyright issue is that with a wrapper for the
page, we can categorize and reference a page, but we do not host it on
our server, or change it in any way, or claim any responsibility for
anything more than the categorization that we have assigned to it.
It's like a great big dynamic quote. Yes. I see the issue of pages
changing over time, out from underneath our categorization, but...
yes. I see your point. (Woo. Too many days of no sleep. My logic is
slipping...) The internet changes too fast for us to reference
something without catching a snapshot and bringing it under our
control........... Bloody hell.

Well... I got nothing right now. I will have to think about this
(preferably with more than 3 hours sleep). I concede the point. Frames
begone.


Bryan Bishop

May 7, 2008, 11:49:13 PM
to openv...@googlegroups.com, kan...@gmail.com
On Wed, May 7, 2008 at 10:38 PM, Doram <DoramB...@gmail.com> wrote:
> What I meant about the copyright issue is that with a wrapper for the
> page, we can categorize and reference a page, but we do not host it on
> our server, or change it in any way, or claim any responsibility for
> anything more than the categorization that we have assigned to it.
> It's like a great big dynamic quote. Yes. I see the issue of pages
> changing over time, out from underneath our categorization, but...
> yes. I see your point. (Woo. Too many days of no sleep. My logic is
> slipping...) The internet changes too fast for us to reference
> something without catching a snapshot and bringing it under our
> control........... Bloody hell.

Yeah, we're all mumbling that a lot these days. Bloody, bloody hell.
There was a quote on Slashdot once, one that I can't track down at the
moment, but it basically went like this: "Most of the open source
developer attitude is just simply that the proprietary folks just
don't understand, and in truth these programmers are tired of it, so
they stop fighting, call it quits and go off and say 'here, this is
the better way that we have been talking about' -- and you know what?
It works." Wish I could source this.

- Bryan

Doram

May 8, 2008, 12:44:06 AM
to OpenVirgle
I agree. That is a brilliant sentiment, and it really captures a lot of
how I felt at the outset of this project. You guys go and waste time
fighting about what the best way to do things is, and I will go off
and do it. I think we are still doing a decent job of that.


Paul D. Fernhout

May 8, 2008, 2:41:23 AM
to openv...@googlegroups.com
Bryan Bishop wrote:
> On Tue, May 6, 2008 at 2:12 PM, Paul D. Fernhout
> <pdfer...@kurtz-fernhout.com> wrote:
>> Bryan-
>>
>> Do you have a proposed detailed ontology or tagging guidelines somewhere
>> here that relates to manufactured artifacts or related procedures?
>> http://heybryan.org/mediawiki/index.php/Skdb
>> Or related tagged content as examples?
>
> No, I think we're missing the broader issue here (not just a matter of
> tagging (btw, tagging good)). It's not the matter of adding content
> and dumping it into the wiki, that's fine and indeed desperately needed,
> but rather what I see is that you guys are already trying to come up
> with the SKDB files without sufficient time spent on the **entire
> idea** of semantic datastructs and so on, or mapping out what
> information resources to pursue in order to figure out when and if you
> have a good idea for a first version (the process, not the objects);
> yes, you can just go around and tack on variables and spagetti code as
> you go, sure, that's one way to do it -- but there's not even a
> resemblance of the underlying infrastructure that this 'grounded,
> manufacturing-oriented semantic web' can look like, even from day
> one*. Another way to do it would be to map out the information
> resources that we have in front of us and pursue the standardization
> organizations, which are going to be particularly interested in our
> little project.

As is pointed out here (previously referenced):
http://gamearchitect.net/Articles/SoftwareIsHard.html
"The difference is that the overruns on a physical construction project are
bounded. You never get to the point where you have to hammer in a nail and
discover that the nail will take an estimated six months of research and
development, with a high level of uncertainty. But software is fractal in
complexity. If you're doing top-down design, you produce a specification
that stops at some level of granularity. And you always risk discovering,
come implementation time, that the module or class that was the lowest level
of your specification hides untold worlds of complexity that will take as
much development effort as you'd budgeted for the rest of the project
combined. The only way to avoid that is to have your design go all the way
down to specifying individual lines of code, in which case you aren't
designing at all, you're just programming."

So, maybe stop thinking of what we are doing as adding articles and start
thinking of it as designing? :-)

The bottom line is you can say we need a design all you want, but where is a
specific (not handwaving) one we can discuss? Only in the Wiki, flawed as
it is. Or maybe places like here:
http://www.mel.nist.gov/psl/
which I am remiss in not keeping up with.

> * I admit that I am at fault for this too, since I have only recently,
> within the last two months, begun to use revision control systems, but
> that's no excuse for everybody else. (i.e., fight ignorance, embrace
> extend release)
>
> IEEE
> http://standards.ieee.org/
> "IEEE's Constitution defines the purposes of the organization as
> "scientific and educational, directed toward the advancement of the
> theory and practice of electrical, electronics, communications and
> computer engineering, as well as computer science, the allied branches
> of engineering and the related arts and sciences." In pursuing these
> goals, the IEEE serves as a major publisher of scientific journals and
> a conference organizer. It is also a leading developer of industrial
> standards (having developed over 900 active industry standards) in a
> broad range of disciplines, including electric power and energy,
> biomedical technology and healthcare, information technology,
> information assurance, telecommunications, consumer electronics,
> transportation, aerospace, and nanotechnology."
> http://en.wikipedia.org/wiki/IEEE_Standards_Association

IEEE makes money off of selling their standards documents (which are in that
sense proprietary and non-copyable). Also, those processes take years.

> http://w3.org/
> "W3C primarily pursues its mission through the creation of Web
> standards and guidelines designed to ensure long-term growth for the
> Web. "

Been there. Done that. Got the mention. :-)
http://www.alphaworks.ibm.com/tech/xfc/
These processes also take years.

(For the record, I think XML was unnecessary and is not a very good choice
of encoding system, and also misses the ontological point. :-)
http://www.oreillynet.com/xml/blog/2001/03/stop_the_xml_hype_i_want_to_ge.html

Yes all true. Someday someone big and strong will do that, and we'll get
something stupid. :-)

> (Remember, anybody can download ikiwik + git to start their own
> project; this needs to be addressed in any OSCOMAK-like toolchain). I
> haven't seen much discussion of this in the local group, and I think
> it's worth bringing up. At the same time, it's not too hard to
> assemble a list of email addresses to contact those organizations. If
> they don't participate, it's their loss -- small groups like us move
> much more quickly than they can 'legally' keep up with (lots of
> distributed work going on, but I suspect that it's mostly done by main
> contributors for the big pushes ... maybe; don't know).

You mention implementation technologies picked from endless possibilities
but that does not explain what to actually do with them.

> To start things off I propose a digestion methodology, based off of
> retrieving the projects out there on the web as they are,
> investigating the well-understood formats, and then working from there
> to see what the historical basis has been. For example, there are many
> electronics projects put up on the web, and usually these include the
> GDL schematics (uh, it's a *nix electronics schematics format, IIRC).
> Now, these schematics are the way they are for a reason, and usually
> they are more or less comprehensive, so it's a good place to start, a
> good way to do comparison. And at the same time we can digest the
> information gradients from the public access databases:
> http://heybryan.org/mediawiki/index.php/Open_access

Again, too vague to be useful IMHO. Maybe the details are in your head, but
I can't read them from here. :-)

> The problem with that is that you still need project coordination for
> each of the datatypes, and it's not present. So that's why I was
> thinking that we still need to investigate and recommend core
> methodologies for project management. Typically this is done through
> revision control systems (repositories), some way for the developers
> to communicate with each other, and whatever organizational style they
> prefer, really, but the idea is that this is ultimately accessible
> from the command line via agx-get (or apt-get at least). Not having
> been in all that many open source projects, I can't quite say what
> methods they use or a generalized principle format; but once we figure
> this out and blast off a few examples/suggestions (I suspect we can
> look at some good projects -- debian, freebsd, perl, nethack,
> firefox). And then from there we can promote the emergence of the
> diversity and the work that we need to see.

Nobody is stopping you from putting together a solution you think will work
and demonstrating it, ideally with a few minute screencast.

> But that still leaves Mike hanging for a while, but maybe only at
> first glance. What we can be doing now is tracking down the list of
> watering holes for certain types of information, importing the content
> in, sure, while simultaneously seriously encouraging him to document
> the methodologies for project coordination of what he's doing + that
> of others. And creating an ontology of -projects- would work too. I'm
> thinking big picture here.

Sure that's all useful. But "the devil is in the details".

> Another quick example - have we come up with a format idea for keeping
> a list of links (BibTeX stuff) related to the content that we are
> pulling? I mean, a way to specify just what information resources we
> have imported already and what we have not? I suggest taking a look at
> trexy.com and prefound.com, sites that treat internet searchers like
> ants and their paths through the web as trails worth saving as
> information is mined and brought back to the hive in some structured
> way (as found suitable by the searchers). In truth, the searchers
> don't actually bring content back to the websites, only their 'search
> trail', not what they found or any structured meaning out of it. [I've
> gotten into a habit such that, when I find a new website with a lot of
> information that I want to hoard, I write up a script to automate my
> downloading of it, and then let it be while I go on and just assume
> that I've processed it (unless I want to actually read it ASAP)]. Same
> thing here.

A larger summary or video example would be appreciated if you think that is
important. Sounds like Memex. But I think it misses the engagement with the
content.

This is all too general to be useful as I see it.
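To make the "what have we imported already?" question concrete, here is a minimal sketch of the kind of import-trail record Bryan describes. The field names and the helper `make_entry` are purely illustrative assumptions, not an established OSCOMAK or SKDB format:

```python
# Hypothetical "import trail" entry for tracking which external resources
# have already been pulled into the wiki, under what license, and when.
# The schema here is illustrative only, not an agreed-upon format.
import json

def make_entry(url, license_name, imported_on, note=""):
    """Record one imported source: where from, under what license, and when."""
    return {"url": url, "license": license_name,
            "imported": imported_on, "note": note}

trail = [
    make_entry("http://heybryan.org/mediawiki/index.php/Open_access",
               "unknown", "2008-05-04", "check license before reuse"),
]
serialized = json.dumps(trail, indent=2)
```

Even something this simple would let a script answer "have we mined this watering hole yet, and may we reuse what we took?" before re-importing.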

>> And Mike has taken the first step towards that by putting articles like on
>> Solar energy (cow) and tags (bull) on the Semantic MediaWiki (pasture). Even
>
> That looks no different from other pages that we've seen on my site:
> http://heybryan.org/mediawiki/index.php/DNA_sequencer
> http://heybryan.org/thinking.html
> http://heybryan.org/graphene.html
> http://heybryan.org/mediawiki/index.php/DNA_synthesizer
> http://heybryan.org/mediawiki/index.php/Microarray
> http://heybryan.org/mediawiki/index.php/AFM_nanolithography
> http://heybryan.org/mediawiki/index.php/Meat_on_a_stick

Except Mike's contribution has a license attached so others can build on it.
And it has metadata attached (or could). And he seems to be taking seriously
attributing sources and drawing from freely licensed works.

Also, given your professed disdain for copyright and attribution, at this
point, I would not be confident of what parts of your site are original and
what are unattributed derivatives. I'm not saying any of it is derivative;
I'm just saying I don't know.

> So I don't see how the Solar_Energy article is new in terms of what we
> want to see happening; outside of this context of increasing
> development and sophistication of the semantic web, I think it's great
> that Mike is doing pages like that -- a good habit of infohoarding.
> Though you could easily argue that I am biased.

As above.

>> with all the respective work by you and I, it was Mike who took the first
>> pioneering step for humankind towards giving the world a freely-licensed
>> repository of manufacturing data and metadata. (Laying it on thick enough
>> for you, Mike? :-) Sure, his ontology is buggy and incomplete. And sure,
>> maybe the solar energy article could be rewritten to separate the basic
>> theory and designs from the extraterrestrial applications somehow. But it is
>
> re: separation; I don't see that as relevant to the idea here. Isn't
> it that there would be a **project** that uses solar energy? Solar
> energy is kind of like a unit, to be used by GNU units,

http://www.gnu.org/software/units/
"The Units program converts quantities expressed in various scales to their
equivalents in other scales."

http://en.wikipedia.org/wiki/Module
"A Module is a self-contained component of a system, which has a
well-defined interface to the other components; "

> so as a
> reference article I think it's fine, as long as it eventually links
> over to the semantic projects that are more structured and so on. In
> fact, all of this email might have been simplified by that simple
> realization. It's not so much the 'theory' -- I think it'll fit well
> if you consider it as a general introductory article to the topic, for
> people not too much in the know about the solar energy input
> variables, although I think it would be wise to separate the idea of
> photonic energy from solar energy, which basically just goes back to
> one of the fundamental units, like photonic flux or something? I
> forget what it is. CRC should know.
> ^ so you can tell that I did *not* revise the rest of this email after
> typing that

This all helps me understand that I myself (given Wikipedia) am less interested
in explaining the science than in detailing specific technology or procedures.

>> all three together (cow plus bull plus pasture), and I can almost hear the
>> patter of little hooves already! Well, maybe in a decade or two. :-)
>
> So ... reworking your analogy of cow-bull-pasture, Solar_Energy
> doesn't fall into any of those, since it's a fundamental unit that can
> be explained by experimental projects that can be added via the git
> repos, linking back to the text documentation about how the experiment
> was setup and so on (these being part of the dot skdb files).

Except that is all hypothetical. :-)

>> The thing is, anyone with a certain set of mental abilities can bull at a
>> moment's notice, but to cow thoroughly by *anyone* takes at least a little
>
> Cow=mapping, right? I find mapping easier than digesting, since you
> get to make lists of lists and so on, up to the point until you
> realize somebody has already partially digested the material for you
> and you get to take it a few steps further, etc. :-)
>
>> hard work. But to do both (cow and bull) in a very thorough way takes the
>> most work of all, and usually takes years of living in the middle of a
>> problem space (usually one full of manure. :-) And then there is the work of
>> getting a pasture set up and maintaining it (mending fences, etc.) too.
>
> "Pain is the cost of the maintenance of boundaries" [though it doesn't
> have to be that way, IMHO].
>
>> I'm not saying you have not done a lot of all three with your site (cow,
>> bull, MediaWiki pasture), but unless the work is also out there under a free
>> license that defines a constitution for collaboration,
>> http://www.kurtz-fernhout.com/oscomak/SSI_Fernhout2001_web.html
>> and is in the right size chunks, it can't be built on stigmergically IMHO.
>
> What? I already mentioned the robots.txt file, which clearly states
> anybody can copy and so on. I am pretty sure that robots.txt has been
> held up in courts of law before too (same with GPL, hurray!).

I'd love to see the legal citations on robots.txt. And understand their scope.
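One reason to be skeptical: robots.txt is a machine-readable crawl policy, not a copyright license. Python's standard library can evaluate one directly, and all it expresses is which paths a well-behaved crawler may fetch; the example rules below are made up for illustration:

```python
# robots.txt tells compliant crawlers which paths they may fetch.
# It says nothing about what anyone may legally do with the content.
import urllib.robotparser

rp = urllib.robotparser.RobotFileParser()
rp.parse([
    "User-agent: *",
    "Disallow: /private/",
])

# A compliant crawler may fetch public pages but not /private/ ones.
allowed = rp.can_fetch("*", "http://example.org/some/page")
blocked = rp.can_fetch("*", "http://example.org/private/page")
```

Nothing in that vocabulary grants copying or derivative-work rights, which is why an explicit license on the content itself matters.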

>> (I'm certainly guilty of all this myself, like with long emails.) So it will
>> all have to be "treated as damage and routed around" by the free community
>> on the internet as far as stigmergy. :-( (*)
>
> As damage? It's *right there*.

But not clearly licensed. And of unknown origin. And without metadata.

>> That is not to diminish the potential future value of greener pastures and
>> alternative implementations like SKDB of course. But without freely and
>> formally licensed content and metadata (cows and bulls) any implementation
>> (pasture) is not of much current use. Even the wiki (pasture) at OSCOMAK.net
>
> How is it possible for skdb/metarepo to need a license when it's more
> like the map towards putting together all of the puzzle pieces?
> (semantic web, ikiwiki, git, repos, open access, gpl, etc.)? Surely
> you don't see this as an entire centralization project? (Of course,
> aggregating all of the information together is somewhat of a
> centralization process, but at the same time we see many individuals
> doing this with news and other publications with little problem, as
> long as all of the licenses are maintained and so on).

Sorry, you are talking at such a general level I don't see that this makes
a lot of progress. Kind of like advising stock investors: "Buy low, sell high".

>> is pretty useless to the casual browser (carnivore? dairy farmer?) at the
>> moment, since I have been the worst offender to date as far as posting
>> philosophy (manure) but not adding articles (cows and bulls) to the wiki
>> (pasture). :-) (**)
>
> I think we are on different wavelengths. Blatantly dumping content, as
> I have on my caches and hard drives over the years, doesn't make it
> all magically come alive. :-(

Exactly. Which is the point of the "semantic" part. Even if it has limits
like everything.

>> One issue may be that pastures and cows and even manure are much easier to
>> deal with than raging bulls (thinking :-) for most people: (***)
>> http://www-03.ibm.com/ibm/history/multimedia/think_trans.html
>> "And we must study through reading, listening, discussing, observing and
>> thinking. We must not neglect any one of those ways of study. The trouble
>> with most of us is that we fall down on the latter -- thinking -- because
>> it's hard work for people to think, And, as Dr. Nicholas Murray Butler said
>> recently, 'all of the problems of the world could be settled easily if men
>> were only willing to think.' "
>>
>> So, bulling is harder than cowing for most people. Some people are the
>> opposite, of course. :-) But as a note taped to Marty Johnson's computer
>> monitor in his office said (noticed the one time my wife and I met him):
>> http://www.isles.org/
>> "You can't plow a field by turning it over in your mind".
>
> Not true. "As I move, so I move the universe." Your mind, your brain,
> is how you are grounded with the world around you ...

Well, tell that to your garden. :-)

http://ask.yahoo.com/20030129.html
"It turns out that there may be some truth to the belief that talking to
plants helps them grow, but not for the reasons you may think. According to
ScienceNet, plants need carbon dioxide to grow, and when you talk to a
plant, you breathe on it, giving it an extra infusion of CO2. However, for
this to have any real effect on your favorite fern, you would have to spend
several hours a day conversing with it in close quarters."

>> Of course, I liked to joke to my wife that did not apply to theoretical
>> mathematicians or a lot of computer programming or research. :-) But I do
>> think it applies to a big extent here -- we need both cows and bulls and we
>> already got a pasture -- even if it may not be as green as I hoped for and
>> the ones over there (Pointrel?) and there (SKDB?) look mighty greener to me
>> and to you. :-)
>
> Sounds like cultural relativism to me - "everybody is equally good,"
> as opposed to discussing the fundamental issues that we're here to
> solve in the first place. But before we get to this please see the
> content above and we'll chug through that and see what comes of it,
> then maybe back to these points.

Thanks for taking the time to make comments. I still feel we need more
specific examples to reason from -- whether articles or specific detailed
use cases.
http://en.wikipedia.org/wiki/Use_case

You're not getting a grade here. :-) This is for real. :-)

--Paul Fernhout
