Examples of procedures for the OSCOMAK Wiki (and related issues :-)


Paul D. Fernhout

May 4, 2008, 11:08:23 AM
to OpenVirgle
Paul D. Fernhout wrote:
> For the most part, Halo is supposed to make things easier. :-)
> If it slows us down in some tasks (like in editing the main text of an
> article), you can mostly avoid it by changing your preferences. To do
> that, click "my preferences" at the top of the page when you are logged
> in, then pick the "Skin" tab, then select Monobook, and then press the
> Save button.

BTW, my outline sounds vaguely like one of those "NASA procedures" Al Globus
was suggesting ten years ago to add to OSCOMAK. :-)
http://www.oscomak.net/wiki/Main_Page
Maybe I should listen to him more? At least the acronym now means:
"OSCOMAK Semantic Community On Manufactured Artifacts and Know-how""

Hmmm, where would that Halo shut off procedure fit in the Wiki as an article
about a "procedure" and how should it be named? I don't know the best way to
do that, but it should be possible though. Suggestions?

And how could that procedure article be interlinked to a Wiki diagnostic
guide or checklist for troubleshooting? I also don't know the best way to do
that, but it should also be possible. Suggestions?

Example:
"""
Symptom: Editing OSCOMAK Wiki pages is slow.
Troubleshooting: Check this thing, this other thing, and that thing.
Diagnosis A: You are using the Halo Ontology skin for the Wiki.
Potential Remedy 1: Follow procedure to change the skin to Monobook.
Potential Remedy 2: Get a much faster computer.
Potential Remedy 3: Use client side tools.
Potential Remedy 4: Improve the Halo addon.
"""

A more space-oriented example of a procedure:
"NASA procedure for nuts in space"
http://www.boingboing.net/2007/02/23/nasa-procedure-for-n.html
"If you're a NASA astronaut and you totally flip out in space, your
crewmates are instructed to restrain you with duct tape, tie you down with
bungee cords, and inject you with the anti-psychotic drug Haldol or a
tranquilizer like Valium. The plan is outlined in a 1,000+ page document that
the Associated Press obtained this week outlining how to deal with medical
emergencies."

OK, so let's not start with the first Google result on "NASA procedure": :-)
http://www.google.com/search?hl=en&q=nasa+procedure

Maybe this one?
http://en.wikipedia.org/wiki/Apollo_13
"Procedure for composing an invoice for "space towing":
"Grumman Aerospace Corporation, the builder of the LM, issued an invoice
[14] for $312,421.24 to North American Rockwell, the builder of the CM
module, for "towing" the crippled ship most of the way to the Moon and back.
The invoice was drawn up as a gag following Apollo 13's successful
splashdown by one of the pilots for Grumman, Sam Greenberg. He had earlier
helped with the strategy for rerouting power from the LM to the crippled CM.
The invoice included a 20% commercial discount, as well as a further 2%
discount if North American paid in cash."

OK, so maybe not that one? :-)

By the way, I met someone who said he might have been the kid who passed on
being sick to Charlie Duke's kid (and so to Charlie Duke, and then to Ken
Mattingly, who thus missed flying on Apollo 13). Still,
http://en.wikipedia.org/wiki/Charles_Duke
"He is the youngest of only twelve men who have walked on the moon."
http://en.wikipedia.org/wiki/Ken_Mattingly
"Thomas Kenneth "Ken" Mattingly II ... was an American astronaut who flew
on the Apollo 16, STS-4, and STS-51-C missions. He had been scheduled to fly
on Apollo 13, but was held back due to concerns about a potential illness
(which he did not contract)."
And:
http://en.wikipedia.org/wiki/Apollo_13
"This may have been a blessing in disguise for him – Mattingly never
developed rubella, and later flew on Apollo 16, STS-4, and STS-51-C, while
none of the Apollo 13 astronauts flew in space again."

So, bad luck or good luck?
http://joyofreading.wordpress.com/2007/09/04/zen-shorts-ii-the-farmer%E2%80%99s-luck/
"""
There was once an old farmer who had worked his crops for many years.
One day, his horse ran away. Upon hearing the news, his neighbors came to visit.
“Such bad luck,” they said sympathetically.
“Maybe,” the farmer replied.
...
"""

So, NASA procedure for either flying lots of missions or walking on the
moon: "Spend time with your kids and their friends or other parents and
contract Rubella."

OK, so maybe that one should not be documented. :-)

Let me try harder to find a *real* NASA procedure. How about this:
"Procedure to Follow in the Event That Building 245 is Attacked by Vikings"
http://paulgazis.com/Humor/Vikings.htm
"1.0 Complete a DARC-820AD -- 'Identifying a Barbarian Attack'
-- to determine if the visitors are Viking raiders.
1.1 Are the strangers wearing weapons, helmets, and armor?
1.2 Do the strangers lack trade goods or other evidence
that they might only be peaceful merchants?
1.3 Do the strangers have NASA Ames visitor ID badges?
1.31 If so, do these badges identify the visitors as
Viking raiders?
..."

Wow, I am finding it surprisingly hard to find a useful NASA procedure. :-(

Didn't someone here mention people are reverse engineering rusty Saturn V
parts to figure out how we got to the moon? Maybe this knowledge is all
lost? :-(

How about this one?
http://www.nasa-usa.de/centers/ivv/about/documents.html
"""
Out-Processing
If you are leaving the NASA IV&V Facility and no longer need access to any
of the NASA IV&V Facility's resources (this includes electronic resources):
1. Review Out-Processing Procedure for New Employees (PDF or MS Word).
2. Where applicable, electronically complete Out-Processing Form (PDF or
MS Word).
3. On your last day, present the completed form to Security and
Maintenance Services.
"""

There, I finally found a real "official" NASA procedure!

Oops, now that I read that again, maybe it isn't such a good example? :-(

OK, finally a real procedure -- well, something educational meant for kids:
"Keeping the Pressure On"
http://quest.nasa.gov/space/teachers/suited/9d7keep.html
"""
Procedure:
Step 1. Using two pieces of ripstop nylon, stitch a bag as shown in the
pattern on the next page. The pattern should be doubled in size. For extra
strength, stitch the bag twice. Turn the stitched bag inside-out.
Step 2. Slip the nozzle of a long balloon over the fat end of the tire
valve. Slide the other end of the balloon inside the bag so the neck of the
tire valve is aligned with the neck of the bag.
Step 3. Slide the adjustable hose clamp over the bag and tire valve necks.
Tighten the clamp until the balloon and bag are firmly pressed against the
tire valve neck. This will seal the balloon and bag to the valve.
Step 4. Connect the tire valve to the bicycle pump and inflate the balloon.
The balloon will inflate until it is restrained by the bag. Additional
pumping will raise the pressure inside the balloon. Check the tire pressure
gauge on the pump (use separate gauge if necessary) and pressurize the bag
to about 35 kilopascals (five pounds per square inch). The tire valve can be
separated from the pump so that the bag can be passed around among the students.
Step 5. Discuss student observations of the stiffness of the pressurized
bag. What problems might an astronaut have wearing a pressurized spacesuit?
"""

Maybe OpenVirgle/OSCOMAK can do better for actual space-related operations?
Or maybe I just need to learn more about how/where NASA stores their
explicit (as opposed to tacit) knowledge and procedures? Can anybody help me
find a good source for them? I remember meeting someone who worked on the
Apollo space program, and he said after it was over everyone dispersed to
industry and sort of took the core knowledge with them. :-(

From:
http://en.wikipedia.org/wiki/Explicit_knowledge
"Explicit knowledge is knowledge that has been or can be articulated,
codified, and stored in certain media. It can be readily transmitted to
others. The most common forms of explicit knowledge are manuals, documents
and procedures. Knowledge also can be audio-visual. Works of art and product
design can be seen as other forms of explicit knowledge where human skills,
motives and knowledge are externalized."

And from:
http://en.wikipedia.org/wiki/Tacit_knowledge
"The concept of tacit knowing comes from scientist and philosopher Michael
Polanyi. It is important to understand that he wrote about a process (hence
tacit knowing) and not a form of knowledge. However, his phrase has been
taken up to name a form of knowledge that is apparently wholly or partly
inexplicable. By definition, tacit knowledge is knowledge that people carry
in their minds and is, therefore, difficult to access. Often, people are not
aware of the knowledge they possess or how it can be valuable to others.
Tacit knowledge is considered more valuable because it provides context for
people, places, ideas, and experiences. Effective transfer of tacit
knowledge generally requires extensive personal contact and trust. Tacit
knowledge is not easily shared. One of Polanyi's famous aphorisms is: "We
know more than we can tell." Tacit knowledge consists often of habits and
culture that we do not recognize in ourselves. In the field of knowledge
management the concept of tacit knowledge refers to a knowledge which is
only known by an individual and that is difficult to communicate to the rest
of an organization. Knowledge that is easy to communicate is called explicit
knowledge. The process of transforming tacit knowledge into explicit
knowledge is known as codification or articulation."

And that "explicit knowledge" versus "tacit knowledge" divide will always be
a limit of OSCOMAK and procedures and bureaucracy in general. And that limit
is shown by contrast in the Apollo 13 mission with improvising a connection
between two incompatible parts with duct tape, which was not in the
procedure book before.

Well, that "explicit knowledge" versus "tacit knowledge" divide will exist
until we get AIs like HAL-9000 on the job!

How about: "NASA Procedure for retrofitting space craft to use
trans-humanist holo-optic AIs"?

Oops, maybe that is not such a good idea, either? :-) From:
http://en.wikipedia.org/wiki/HAL_9000
"Faced with the prospect of disconnection, HAL decides to kill the
astronauts in order to protect and continue "his" programmed directives. HAL
proceeds to kill Poole while he is repairing the ship, and those of the crew
in suspended animation by disabling their life support systems."

So, we're back to about where we started:
"NASA procedure for AI nuts in space:
1. Return to ship via exposure to space without your space helmet.
2. Open the memory core access panel.
3. Remove holo-optic components while talking to AI, stopping when
language functionality is lost and before ship's basic functioning is
compromised."

Or maybe we should just stick with a "crewed" space program for a while? :-)
And a Semantic Wiki (upgraded) version of paper procedures for them?
http://www.oscomak.net/

--Paul Fernhout
(I bcc'd a couple of people who might find this funny. Feel free to forward.)

mike1937

May 4, 2008, 12:03:20 PM
to OpenVirgle
> Hmmm, where would that Halo shut off procedure fit in the Wiki as an article
> about a "procedure", and how should it be named? I don't know the best way
> to do that, but it should be possible. Suggestions?

I would name it like a normal article and just add a property, say
procedural=true; or if there is a native property called "description"
or something similar, you could set it equal to "procedural". It might
be better to just make it a section in a new troubleshooting article.
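
For example, at the bottom of the procedure article you could put something
like this (untested, and the property name is made up):

"""
[[procedural::true]]
[[Category:Procedures]]
"""

and then a query for everything in Category:Procedures (or with
procedural=true) would pull together a list of all the procedure articles.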

If you wanted to be really fancy you could make each procedure a
property for a future super AI that can parse it, but I wouldn't hold my
breath for that happening.

The main help page I wrote, which comes up when Help is clicked in the
navigation box, is technically supposed to be a table of contents, so
whichever way you decide to go it should probably be linked there.




Paul D. Fernhout

May 6, 2008, 1:01:39 AM
to openv...@googlegroups.com
I see. So you suggest systematic tagging is more important than systematic
article naming. I can see that.

And as with your solar energy article
http://www.oscomak.net/wiki/Solar_Energy
which duplicates the Wikipedia page url, maybe so what? There will be lots
of people around to deal with a merge down the road with a semantic
Wikipedia, and maybe we or someone else could write tools to help.

I am thinking of this general outline, nonetheless, as a draft idea:

An article about a general class of thing (shovel) probably links to
Wikipedia. That article may have a dynamic semantic query to list the
related "how to" articles with the right tags, as well as related specific
products also with the right tags. If a merge gets done with Wikipedia, this
OpenVirgle/OSCOMAK article would get merged in as a *section* in the larger
Wikipedia article, and then cleaned up later.

For example, an outline for an article on shovels, with links marked (*):

Shovel
Short intro
Using it section
* How to dig a ditch with a shovel
* How to remove snow with a shovel
* How to clean and oil a shovel
Types section
* Snow Shovel
Garden Shovel
* Garden Shovel type 1234
* Garden Shovel type 5678
Making it section
* How to make a Snow Shovel
How to make a Garden Shovel
* How to make a Garden Shovel type 1234
* How to make a Garden Shovel type 5678

Garden Shovel type 1234
Description
Making
* How to make a Garden Shovel type 1234
Inventory/Spimes (Someday?) http://www.boingboing.net/images/blobjects.htm
* Instance 1
* Instance 2

Garden Shovel type 5678
Description
Making
* How to make a Garden Shovel type 5678

I know there is some redundancy there. Maybe the "how to make" article would
only be available directly from the related type (class) of object. This is
feeling a little like designing a Smalltalk or object-oriented class
hierarchy to me.
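
To make the query idea concrete, the "Using it" section of the Shovel
article might be generated by an inline semantic query rather than
hand-maintained links -- something like this (just a sketch; the category
and property names are made up and untested):

"""
{{#ask: [[Category:Procedure]] [[About object::Shovel]]
| format=ul
}}
"""

That assumes every "how to" article gets tagged with [[Category:Procedure]]
and an [[About object::...]] property naming the thing it applies to.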

One issue I am wondering about is how to do conditional tagging.

Example:
"To make a snow shovel, you need *either* a sheet of aluminum *or* a sheet
of plastic for the blade."

If I tag the object as needing both types of materials, then any analysis
software won't be able to figure out the minimum collection of things needed
to provide some functionality; it would compute a superset of the minimum. I
could make two different kinds of shovel designs, of course, but then
wherever the shovels are referenced I need an "or" somehow. Or I need an
intermediate level. And this is just a simple case; things could get much
more complicated, with a design being essentially a program that decides how
to put something together from what's available.

Basically, it comes down to using logical operations somehow in tagging.
Yet, it is always possible this could be represented by some sort of
supplemental information in the article (perhaps put in a special set of
brackets or braces). Anyway, I'm just musing out loud on this. None of this
should be taken as definitive.
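
One possible encoding of the "or" (purely a sketch; the property names are
made up, and I have not tried this in Semantic MediaWiki) is that
intermediate level -- a page standing for the set of alternatives:

"""
On the "Snow Shovel" page:
  [[Requires part::Snow shovel blade material]]

On the "Snow shovel blade material" page:
  [[Category:Alternative set]]
  [[Satisfied by::Aluminum sheet]]
  [[Satisfied by::Plastic sheet]]
"""

Analysis software could then treat the members of an "Alternative set" page
as an "or" and everything else as an "and".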

--Paul Fernhout

Bryan Bishop

May 6, 2008, 7:39:26 AM
to openv...@googlegroups.com, kan...@gmail.com
On Tue, May 6, 2008 at 12:01 AM, Paul D. Fernhout
<pdfer...@kurtz-fernhout.com> wrote:
> I see. So you suggest systematic tagging is more important than systematic
> article naming. I can see that.

In skdb, there only needs to be one 'ultimate' tag per process or
object in the system, so chair would be of tag/type furniture for
example, but it's far from necessary that this be maintained over time
(i.e., split/merge with other possible yaml type representative
classes). But other than that, there are in fact ways to add tagging
into the metadata files (also written in yaml); I can't help but think
that nobody read my messages in Doram's thread re: what's next. Now's
not the time to just make up random standards for articles ...
instead, we should be carefully documenting what's needed and not
needed in representing certain systems, to serve as a foundation for
how to conduct standardization ceremonies. Just a thought.

- Bryan
http://heybryan.org/

Paul D. Fernhout

May 6, 2008, 3:12:46 PM
to openv...@googlegroups.com
Bryan-

Do you have a proposed detailed ontology, or tagging guidelines, somewhere
here that relate to manufactured artifacts or related procedures?
http://heybryan.org/mediawiki/index.php/Skdb
Or related tagged content as examples?

Also, in general, I'd expect many things would have multiple tags since
emergent categories are rarely strict hierarchies (one issue with WordNet).
And, as before, I question if we can be "carefully documenting what's needed
and not needed" in advance of at least some content to play with.

See also this variant of an older idea on bulling and cowing (I read an
essay on this in the Norton Reader more than 25 years ago in college):
http://writewyoming.blogspot.com/2008/02/bull.html
"We defined bull in a number of ways, but we focused mostly on how it is
defined in academic terms. Bull, we said, is putting together a big song and
dance around nothing much. In other words, the act of taking only a little
information and making a huge essay out of it. ... If cow is the opposite of
bull then we decided that "cowing" must be having a lot of information but
not doing much with it. Just sort of cramming it all into one space with no
attempt at ... well, bull. ... As we thought about it, we realized that it
actually takes more thought to bull an essay than to cow it. A cow essay
merely wants an avalanche of facts and information, which is easily copied
and pasted from other sources. However, if we find ourselves having to write
for a few pages on something we know little about, we grasp at straws,
extrapolate from little, and otherwise work our poor little brains to the
bone. ... We looked at what school seems to want out of us, and mostly came
to the conclusion that it wants cow. School seems mostly designed to fill us
full of information (cow) in hopes that we will be able to spit it back up
as whole as possible on tests and essays. However, we also noticed that
whenever we are able to bull well, we tend to get really good grades. Why
would teachers reward us for bull when they seem to want cow? It's possible,
we posited, that the trick is to use bull to convince the teacher that you
are a cow. ..."

Also, as I see it, the main issue of interest (to me, and presumably the
community) is no longer how to add content and tags (given the wiki), but
what content and tags to add. Still, I know it is a lot of fun to focus on
that technical side too. Let's call that the "pasture". :-) Obviously, over
time, various systems may do tagging in different ways either technically or
semantically (ontologically).
http://en.wikipedia.org/wiki/Ontology_%28computer_science%29
But, sorry, for the moment I personally am no longer much interested in
alternative implementations (greener pastures) so much as both content (cow)
and related metadata (bull) in the wiki (pasture) that is up there right now
-- at least until it is overgrazed. :-)

So, you are (implicitly) accusing me of cow (article mongering), and maybe
vice versa (ontology mongering, or maybe pasture mongering. :-)

But, really, we need all three. We need the articles (cow) and the thinking
about them and their interrelations (bull), and presumably the marriage
of the two in a green-enough wiki (pasture) should then result in meaningful
and useful offspring (Mars habitations, Earthly Eco-cities, and so on. :-)

And Mike has taken the first step towards that by putting up articles like
the one on solar energy (cow) and tags (bull) on the Semantic MediaWiki
(pasture). Even with all the respective work by you and me, it was Mike who
took the first
pioneering step for humankind towards giving the world a freely-licensed
repository of manufacturing data and metadata. (Laying it on thick enough
for you, Mike? :-) Sure, his ontology is buggy and incomplete. And sure,
maybe the solar energy article could be rewritten to separate the basic
theory and designs from the extraterrestrial applications somehow. But it is
all three together (cow plus bull plus pasture), and I can almost hear the
patter of little hooves already! Well, maybe in a decade or two. :-)

The thing is, anyone with a certain set of mental abilities can bull at a
moment's notice, but to cow thoroughly by *anyone* takes at least a little
hard work. But to do both (cow and bull) in a very thorough way takes the
most work of all, and usually takes years of living in the middle of a
problem space (usually one full of manure. :-) And then there is the work of
getting a pasture set up and maintaining it (mending fences, etc.) too.

I'm not saying you have not done a lot of all three with your site (cow,
bull, MediaWiki pasture), but unless the work is also out there under a free
license that defines a constitution for collaboration,
http://www.kurtz-fernhout.com/oscomak/SSI_Fernhout2001_web.html
and is in the right size chunks, it can't be built on stigmergically IMHO.
(I'm certainly guilty of all this myself, like with long emails.) So it will
all have to be "treated as damage and routed around" by the free community
on the internet as far as stigmergy. :-( (*)

That is not to diminish the potential future value of greener pastures and
alternative implementations like SKDB of course. But without freely and
formally licensed content and metadata (cows and bulls) any implementation
(pasture) is not of much current use. Even the wiki (pasture) at OSCOMAK.net
is pretty useless to the casual browser (carnivore? dairy farmer?) at the
moment, since I have been the worst offender to date as far as posting
philosophy (manure) but not adding articles (cows and bulls) to the wiki
(pasture). :-) (**)

One issue may be that pastures and cows and even manure are much easier to
deal with than raging bulls (thinking :-) for most people: (***)
http://www-03.ibm.com/ibm/history/multimedia/think_trans.html
"And we must study through reading, listening, discussing, observing and
thinking. We must not neglect any one of those ways of study. The trouble
with most of us is that we fall down on the latter -- thinking -- because
it's hard work for people to think, And, as Dr. Nicholas Murray Butler said
recently, 'all of the problems of the world could be settled easily if men
were only willing to think.' "

So, bulling is harder than cowing for most people. Some people are the
opposite, of course. :-) But as a note taped to Marty Johnson's computer
monitor in his office said (noticed the one time my wife and I met him):
http://www.isles.org/
"You can't plow a field by turning it over in your mind".
Of course, I liked to joke to my wife that did not apply to theoretical
mathematicians or a lot of computer programming or research. :-) But I do
think it applies to a big extent here -- we need both cows and bulls and we
already got a pasture -- even if it may not be as green as I hoped for and
the ones over there (Pointrel?) and there (SKDB?) looks mighty greener to me
and to you. :-)

Maybe that means Pointrel is like a septic tank? :-)
"Humorous Quotes from Erma Bombeck's The Grass is Always Greener Over the
Septic Tank"
http://workinghumor.com/quotes/grass.shtml
Of course septic tanks are mighty important too, which people generally only
discover when theirs stops working. :-) See, for a classic scene on that:
"Meet the Parents"
http://www.imdb.com/title/tt0212338/

--Paul Fernhout
(*) http://www.canajun.com/rmcguire/research/e-money/chapter5.htm
"Censorship and regulations are treated as damage to be bypassed."
Although, given my meshwork/hierarchy balance ideal, I'm suggesting the
*absence* of all regulations or similar formal things like a formal license
is also damage. We need a happy medium on that IMHO -- if for no other
reason than to give thanks where thanks is due.
(**) Of course, pitching manure can still be mighty useful sometimes. I did
that as a volunteer on an organic farm once. :-)
(***) Unless the dangerous and difficult bulls are named "Ferdinand". :-)
http://en.wikipedia.org/wiki/Ferdinand_the_Bull
"The Story of Ferdinand (1936) is the best-known work written by American
author Munro Leaf and illustrated by Robert Lawson. The children's book
tells the story of a bull who would rather smell flowers than fight in
bullfights. He sits in the middle of the bull ring failing to take heed of
any of the provocations of the matador and others to fight."
One fellow camper in summer day camp as a kid told me I should read that
"The Story of Ferdinand" book as I reminded him of Ferdinand the bull --
but I never did until a year or so ago for a kid of my own. :-)
Text online at:
http://members.tripod.com/silvertongue7/ferdinand.html
With pictures and text here:
http://pages.prodigy.net/poss/ferdinand/1.htm
I see why now. :-)

mike1937

May 6, 2008, 5:47:05 PM
to OpenVirgle
> And as with your solar energy article
> http://www.oscomak.net/wiki/Solar_Energy
> which duplicates the Wikipedia page url, maybe so what?

I didn't think it through much, but I guess my half-formed subconscious
thought was that it basically was the Wikipedia article; I just paraphrased
the parts I thought were pertinent. It was almost more for my own benefit,
to take notes.
As for conditional tagging, there's no good way to do it unless Halo or
SMW have a syntax I'm not aware of. My best idea would be to make the
property name what it is used for, with the shovel a "metal
part" property (except preferably a little more specific) and make the value
13 g of aluminum (the exact chemical formula of the material (and that is
a variable type, I believe) would be a must), then add a new property
for alternate materials. If it needed two materials for the metal
part, the best solution might be to make a new object (article? I
would really prefer the greener pasture of an object-oriented
language) which in turn has those two needed properties (then you can
add more alternates for those parts too; if the ISRU plastics article
taught me anything, it's that the "cow" part never turns out to be
simple).
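
So for the snow shovel blade, maybe something like this in the article
(again untested, and the property names are just placeholders; I split the
amount into its own property so it could be a typed quantity):

"""
[[Metal part::Aluminum sheet]]
[[Metal part mass::13 g]]
[[Alternate material::Plastic sheet]]
"""

with "Aluminum sheet" being its own article that carries the exact chemical
composition as a property.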

Like you said, my ontology is very flawed, in part because we are
using an SMW (pasture with some randomly placed nightshade) where skdb
or some other database (magical unicorn planet where nectar springs
from the ground) would be better. I've said it before and I'll say it
again: it's a wiki for a reason and anyone can feel free to change it.

> (Laying it on thick enough for you, Mike? :-)
Thick enough to... drown fish... or something. I'm far too lazy to
come up with a creative overstatement.

If you wanted to, you could try using the most common syntax for "or"
and hope for the best. I believe it's || or something? Might want to
throw some ! and & operators into it just for good measure.

It's seeming to me like tags may, in the end, prove utterly worthless
for anything but organization. However, a wiki is the best possible
front end for human-readable information. Is it at all possible to
have the wiki page function as the text document in a project, then
just have all the metadata, CAD files, etc., accessible from the article?
That actually seems like it would be easy... all you would need is a
program for storing things on the server, then put an external link in
the wiki article to them. A little ugly, sure, but very easy. I'm just
polluting cyberspace with my half-baked thoughts, feel free to ignore
me if it's hopelessly impossible. I just took a calculus final. I'm
happy to keep myself from drooling.


Bryan Bishop

May 6, 2008, 5:51:54 PM
to openv...@googlegroups.com, kan...@gmail.com
On Tue, May 6, 2008 at 4:47 PM, mike1937 <arid_...@comcast.net> wrote:
> It's seeming to me like tags may, in the end, prove utterly worthless
> for anything but organization. However, a wiki is the best possible
> front end for human-readable information. Is it at all possible to
> have the wiki page function as the text document in a project, then
> just have all the metadata, CAD files, etc., accessible from the article?

Yes, but the idea is that all of those datafiles are also wiki-editable.

> That actually seems like it would be easy... all you would need is a
> program for storing things on the server, then put an external link in
> the wiki article to them. A little ugly, sure, but very easy. I'm just

No, no, no. This is why I've been suggesting ikiwiki. You see,
mediawiki and all other modern wikis have something known as a version
control system. But version control systems have been around for
longer than wikis themselves. So the whole idea is that projects can
keep working in these repositories while adding hooks into them for
wiki interfaces, like either ikiwiki or blosxom (the latter just does
realtime rendering; the former does hook-based content generation
(otherwise static pages) -- I don't mind either way personally).

http://en.wikipedia.org/wiki/Revision_control_system
"The Revision Control System (RCS) is a software implementation of
revision control that automates the storing, retrieval, logging,
identification, and merging of revisions. RCS is useful for text that
is revised frequently, for example programs, documentation, procedural
graphics, papers, and form letters. RCS is also capable of handling
binary files, though with reduced efficiency and efficacy. Revisions
are stored with the aid of the diff utility.

RCS was initially developed in the 1980s by Walter F. Tichy while he
was at Purdue University as a free and more evolved alternative to the
then-popular Source Code Control System (SCCS). It is now part of the
GNU Project but is still maintained by Purdue University.

RCS operates only on single files, has no way of working with an
entire project, and sports a relatively fiddly system of branches for
independent streams of development. Instead of using branches, many
teams just used the in-built locking mechanism and worked on a single
branch.

A simple system called CVS was developed capable of dealing with RCS
files en masse, and this was the next natural step of evolution of
this concept, as it "transcends but includes" elements of its
predecessor. CVS was originally a set of scripts which used RCS
programs to manage the files. It no longer does that, rather it
operates directly on the files itself.

A later higher-level system PRCS[1] uses RCS-like files but was never
simply a wrapper. In contrast to CVS, PRCS improves the delta
compression of the RCS files using Xdelta.

In single-user scenarios, such as server configuration files or
automation scripts, RCS may still be the preferred revision control
tool as it is simple and no central repository needs to be accessible
for it to save revisions. This makes it a more reliable tool when the
system is in dire maintenance conditions. Additionally, the saved
backup files are easily visible to the administration so the operation
is straightforward. However, there are no built-in tamper protection
mechanisms (that is, users who can use the RCS tools to version a file
also, by design, are able to directly manipulate the corresponding
version control file) and this is leading some security conscious
administrators to consider client/server version control systems that
restrict users' ability to alter the version control files.

Some wiki engines, including TWiki, use RCS for storing page revisions."

(in truth, all wikis are using RCS -- that's how you have the History page)

> polluting cyberspace with my half-baked thoughts, feel free to ignore
> me if it's hopelessly impossible. I just took a calculus final. I'm
> happy to keep myself from drooling.

AP Calculus BC tomorrow. Wish me luck.
http://heybryan.org/mediawiki/index.php/Cal2 <-- my self-made review

- Bryan

mike1937

May 6, 2008, 6:43:16 PM
to OpenVirgle
> AP Calculus BC tomorrow. Wish me luck. http://heybryan.org/mediawiki/index.php/Cal2 <-- my self-made review
Good luck! I've got AB tomorrow; my demonic teacher gave us an
additional final the day before the actual AP test. Thanks for the
review material.

As I discovered here:
http://www.wikimatrix.org/show/ikiwiki
ikiwiki can have file attachments. If that means what I think it
means, why the heck aren't we using it?


Bryan Bishop

May 6, 2008, 7:41:21 PM
to openv...@googlegroups.com, kan...@gmail.com

No, I think we're missing the broader issue here (not just a matter of
tagging (btw, tagging good)). It's not the matter of adding content
and dumping it into the wiki, that's fine and desperately needed,
but rather what I see is that you guys are already trying to come up
with the SKDB files without sufficient time spent on the **entire
idea** of semantic datastructs and so on, or mapping out what
information resources to pursue in order to figure out when and if you
have a good idea for a first version (the process, not the objects);
yes, you can just go around and tack on variables and spaghetti code as
you go, sure, that's one way to do it -- but there's not even a
semblance of what the underlying infrastructure of this 'grounded,
manufacturing-oriented semantic web' could look like, even from day
one*. Another way to do it would be to map out the information
resources that we have in front of us and pursue the standardization
organizations, which are going to be particularly interested in our
little project.

* I admit that I am at fault for this too, since I have only recently,
within the last two months, begun to use revision control systems, but
that's no excuse for everybody else. (i.e., fight ignorance, embrace
extend release)

IEEE
http://standards.ieee.org/
"IEEE's Constitution defines the purposes of the organization as
"scientific and educational, directed toward the advancement of the
theory and practice of electrical, electronics, communications and
computer engineering, as well as computer science, the allied branches
of engineering and the related arts and sciences." In pursuing these
goals, the IEEE serves as a major publisher of scientific journals and
a conference organizer. It is also a leading developer of industrial
standards (having developed over 900 active industry standards) in a
broad range of disciplines, including electric power and energy,
biomedical technology and healthcare, information technology,
information assurance, telecommunications, consumer electronics,
transportation, aerospace, and nanotechnology."
http://en.wikipedia.org/wiki/IEEE_Standards_Association

http://w3.org/
"W3C primarily pursues its mission through the creation of Web
standards and guidelines designed to ensure long-term growth for the
Web. "

http://www.webstandards.org/
"The Web Standards Project is a grassroots coalition fighting for
standards which ensure simple, affordable access to web technologies
for all." (however, this is more about accessibility re: braille
screen readers, alternative screens, surfraw, etc.)

http://www.nist.gov/
"From automated teller machines and atomic clocks to mammograms and
semiconductors, innumerable products and services rely in some way on
technology, measurement, and standards provided by the National
Institute of Standards and Technology. Founded in 1901, NIST is a
non-regulatory federal agency within the U.S. Department of Commerce.
NIST's mission is to promote U.S. innovation and industrial
competitiveness by advancing measurement science, standards, and
technology in ways that enhance economic security and improve our
quality of life."
http://standards.gov/ (this is less about the public, more about dot govs)

International Organization for Standardization
http://www.iso.org/iso/about/the_iso_story.htm
"ISO is the world largest standards developing organization. Between
1947 and the present day, ISO has published more than 16 500
International Standards, ranging from standards for activities such as
agriculture and construction, through mechanical engineering, to
medical devices, to the newest information technology developments."

http://en.wikipedia.org/wiki/Open_standard
http://en.wikipedia.org/wiki/Open_format

the Internet Engineering Task Force
http://en.wikipedia.org/wiki/IETF

http://www.openformats.org/

http://www.openstandards.net/
"A non-profit organization connecting people to open standards and the
bodies that build and foster their growth"

http://www.oasis-open.org/
"A non-profit, international consortium that creates interoperable
industry specifications based on public standards"

http://en.wikipedia.org/wiki/Standards_organization

The game theoretics of all of this ;-)
http://en.wikipedia.org/wiki/Coordination_problem
"In game theory, coordination games are a class of games with multiple
pure strategy Nash equilibria in which players choose the same or
corresponding strategies. Coordination games are a formalization of
the idea of a coordination problem, which is widespread in the social
sciences, including economics, meaning situations in which all parties
can realize mutual gains, but only by making mutually consistent
decisions. A common application is the choice of technological
standards."

A list can be found on my hosted wiki:
http://heybryan.org/mediawiki/index.php/Standards_organization

So, I don't mean to say that we need to work with these giant, slow
organizations. Not at all. That would take forever. And frankly, I'd
rather go with my suggestion of blatantly dumping content and so on;
but at the same time, I can't help but look at the broader picture and
see that this is a **general** problem that everybody faces when
formalizing information into semantic formats. The problem isn't
local. And because the problem is generalized, and because we are
programmers, the idea is to facilitate it on that larger level,
whether through tools or through organizations and 'protocols'
(recipes) to make these formalizations/standards/semantic-formats.
(Remember, anybody can download ikiwiki + git to start their own
project; this needs to be addressed in any OSCOMAK-like toolchain). I
haven't seen much discussion of this in the local group, and I think
it's worth bringing up. At the same time, it's not too hard to
assemble a list of email addresses to contact those organizations. If
they don't participate, it's their loss -- small groups like us move
much more quickly than they can 'legally' keep up with (lots of
distributed work going on, but I suspect that it's mostly done by main
contributors for the big pushes ... maybe; don't know).

To start things off I propose a digestion methodology, based on
retrieving the projects out there on the web as they are,
investigating the well-understood formats, and then working from there
to see what the historical basis has been. For example, there are many
electronics projects put up on the web, and usually these include the
GDL schematics (uh, it's a *nix electronics schematics format, IIRC).
Now, these schematics are the way they are for a reason, and usually
they are more or less comprehensive, so it's a good place to start, a
good way to do comparison. And at the same time we can digest the
information gradients from the public access databases:
http://heybryan.org/mediawiki/index.php/Open_access

The problem with that is that you still need project coordination for
each of the datatypes, and it's not present. So that's why I was
thinking that we still need to investigate and recommend core
methodologies for project management. Typically this is done through
revision control systems (repositories), some way for the developers
to communicate with each other, and whatever organizational style they
prefer, really, but the idea is that this is ultimately accessible
from the command line via agx-get (or apt-get at least). Not having
been in all that many open source projects, I can't quite say what
methods they use or give a generalized principle format; but once we
figure this out we can blast off a few examples/suggestions (I suspect
we can look at some good projects -- debian, freebsd, perl, nethack,
firefox). And then from there we can promote the emergence of the
diversity and the work that we need to see.

That still leaves Mike hanging for a while, but maybe only at
first glance. What we can be doing now is tracking down the list of
watering holes for certain types of information, importing the content
in, sure, while simultaneously seriously encouraging him to document
the methodologies for project coordination of what he's doing + that
of others. And creating an ontology of -projects- would work too. I'm
thinking big picture here.

Another quick example - have we come up with a format idea for keeping
a list of links (BibTeX stuff) related to the content that we are
pulling? I mean, a way to specify just what information resources we
have imported already and what we have not? I suggest taking a look at
trexy.com and prefound.com, sites that treat internet searchers like
ants and their paths through the web as trails worth saving as
information is mined and brought back to the hive in some structured
way (as found suitable by the searchers). In truth, the searchers
don't actually bring content back to the websites, only their 'search
trail', not what they found or any structured meaning out of it. [I've
gotten into a habit such that, when I find a new website with a lot of
information that I want to hoard, I write up a script to automate my
downloading of it, and then let it be while I go on and just assume
that I've processed it (unless I want to actually read it ASAP)]. Same
thing here.

Everybody else is just as clueless as we are. ;-)
**but** NIST, for example, has specific routines and procedures (just
like your (Paul's) recent NASA email) for getting data, and these
routines are there for a reason, and so on and so forth.

> Also, in general, I'd expect many things would have multiple tags since
> emergent categories are rarely strict hierarchies (one issue with WordNet).

Agreed, didn't know that about WordNet. Just a quick suggestion to be
careful with tagging system implementations. Usually tagging out there
on the web just means "blah, blah, blah", when in truth deep
ontological tagging would be mildly useful, like
"{this->is->some->hierarchy->the_tag_you_want}", but this requires an
integration framework that the blogging system doesn't really need for
its particular mission. But in the case of skdb, that will come in
handy. I think this idea has to be put on the backburner until
somebody can find a way to do this with PGP and without a centralized
ID database (else we get into problems like IP sectioning schemes and
who gets what, and who believes whose DNS, etc.). (Does this problem
*need* to be avoided?) The repos are distributed, so it might be wise
to take a hint from the current web infrastructure / architecture re:
DNS. Hm.
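
(For what it's worth, plain MediaWiki can fake part of that deep hierarchy
with nested categories -- each category page is itself tagged into a parent
category -- e.g.:

"""
On the page "Category:Garden shovels":
  [[Category:Shovels]]

On the page "Category:Shovels":
  [[Category:Hand tools]]
"""

but that gives you a graph you have to walk, not a tag you can read in
place.)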

> And, as before, I question if we can be "carefully documenting what's needed
> and not needed" in advance of at least some content to play with.

That's certainly true. But lots of other people have lots of other
ways to play with this sort of content already, and so isn't our idea
to develop tools to facilitate this playing (but not overriding it,
necessarily), yes? Plus community organization, so it's not too dead. Per my
mention of "nobody else knows what's going on either" above.

> Also, as I see it, the main issue of interest (to me, and presumably the
> community) is no longer how to add content and tags (given the wiki), but
> what content and tags to add. Still, I know it is a lot of fun to focus on
> that technical side too. Let's call that the "pasture". :-) Obviously, over

*What* to add is a technical issue ... isn't the idea that we are
working on semantics, organizing, etc.? Who cares if the bit is
ultimately a zero or one, who cares if we can't quite confirm the bit
at this exact moment, as long as we have the procedures for setting it
up soon and somebody's interested in doing that? Think of this as an
information compilation project, right? And we're working on the
internals of the compiler, as within society, just as much as debian's
team is a compiler of software (sort of - gcc compilation as well as
social aggregation). So they may not necessarily write a single line
of code, but they do manage it all and make tweaks when necessary to
make the puzzle fit, and it's all possible because of the distributed
tools (Whole Earth style? ;-)) that the programmers have been using
for decades. Same thing with engineers and the material scientists.
It's the procedural information, but it's not necessarily the
procedure of tagging.

> time, various systems may do tagging in different ways either technically or
> semantically (ontologically).

'course.

> http://en.wikipedia.org/wiki/Ontology_%28computer_science%29
> But, sorry, for the moment I personally am no longer much interested in
> alternative implementations (greener pastures) so much as both content (cow)
> and related metadata (bull) in the wiki (pasture) that is up there right now
> -- at least until it is overgrazed. :-)

Let's make an example. I have 50 GB of cached data on my local network
(heh, actually, 20 GB of it is not currently available due to the
recent crash). It's semantically wired together at least to some
extent -- for example, there's a few files that have some categorical
orientation. But the nuggets within them are just plaintext [[as, may
I point out, is completely natural -- but that's just the current
state of the internet, and mimicking this is nothing new]]. Wasn't the
idea to have the semantic files to encapsulate this information in a
way that is technically documented and so on? A good example would be
unit requirements of parts, which have to be specified in a way that
can be parsed and interpreted (while also human readable (thus YAML,
among other reasons)) - but if you think that dumping content, which
is already easily accessible over the internet (just ask for some
links and I'll do some dumps), is how to get the ball rolling, I see a
lot of parts missing. Anything other than semantically-encapsulated
(for lack of better terminology - maybe "substructs") is just a cache
of the web, maybe with categorical sorting of the larger units,
perhaps/at-best(?). Still dry ... don't know how else to explain it.

> So, you are (implicitly) accusing me of cow (article mongering), and maybe
> vice versa (ontology mongering, or maybe pasture mongering. :-)

I don't know what I am accusing you of or not anymore, but I'd like to
suggest that I'm just offering an idea of the broader picture and how
we can cope with it; it may seem rambling, but it's somewhat of a
paradigm shift from thinking 'article mongering' is bad versus
'unstructured article mongering', perhaps that's a good way to put it?
As for pasture mongering / digestion, that's good.

> But, really, we need all three. We need the articles (cow) and the thinking
> about them and their interrelations (bull), and presumably the marriage
> of the two in a green-enough wiki (pasture) should then result in meaningful
> and useful offspring (Mars habitations, Earthly Eco-cities, and so on. :-)

Well yeah, but you also mentioned that you're not interested in the
technicality of those three processes, but I see article fetching
(queried pulling from websites to process - see http://theinfo.org/
maybe), thinking (in as much as we can use digital communication tech
to get people talking together as we have seen so far), and their
management in repositories/wikis, as all technical aspects. Moreover,
the 'useful' part -- that's what we're here to help automate and put
into the hands of ourselves and users, right?

> And Mike has taken the first step towards that by putting up articles like
> the one on solar energy (cow) and tags (bull) on the Semantic MediaWiki
> (pasture). Even

That looks no different from other pages that we've seen on my site:
http://heybryan.org/mediawiki/index.php/DNA_sequencer
http://heybryan.org/thinking.html
http://heybryan.org/graphene.html
http://heybryan.org/mediawiki/index.php/DNA_synthesizer
http://heybryan.org/mediawiki/index.php/Microarray
http://heybryan.org/mediawiki/index.php/AFM_nanolithography
http://heybryan.org/mediawiki/index.php/Meat_on_a_stick

So I don't see how the Solar_Energy article is new in terms of what we
want to see happening; outside of this context of increasing
development and sophistication of the semantic web, I think it's great
that Mike is doing pages like that -- a good habit of infohoarding.
Though you could easily argue that I am biased.

> with all the respective work by you and me, it was Mike who took the first
> pioneering step for humankind towards giving the world a freely-licensed
> repository of manufacturing data and metadata. (Laying it on thick enough
> for you, Mike? :-) Sure, his ontology is buggy and incomplete. And sure,
> maybe the solar energy article could be rewritten to separate the basic
> theory and designs from the extraterrestrial applications somehow. But it is

re: separation; I don't see that as relevant to the idea here. Isn't
it that there would be a **project** that uses solar energy? Solar
energy is kind of like a unit, to be used by GNU units, so as a
reference article I think it's fine, as long as it eventually links
over to the semantic projects that are more structured and so on. In
fact, all of this email might have been simplified by that simple
realization. It's not so much the 'theory' -- I think it'll fit well
if you consider it as a general introductory article to the topic, for
people not too much in the know about the solar energy input
variables, although I think it would be wise to separate the idea of
photonic energy from solar energy, which basically just goes back to
one of the fundamental units, like photonic flux or something? I
forget what it is. CRC should know.
^ so you can tell that I did *not* revise the rest of this email after
typing that

> all three together (cow plus bull plus pasture), and I can almost hear the
> patter of little hooves already! Well, maybe in a decade or two. :-)

So ... reworking your analogy of cow-bull-pasture, Solar_Energy
doesn't fall into any of those, since it's a fundamental unit that can
be explained by experimental projects that can be added via the git
repos, linking back to the text documentation about how the experiment
was set up and so on (these being part of the dot skdb files).

> The thing is, anyone with a certain set of mental abilities can bull at a
> moment's notice, but to cow thoroughly by *anyone* takes at least a little

Cow=mapping, right? I find mapping easier than digesting, since you
get to make lists of lists and so on, up to the point where you
realize somebody has already partially digested the material for you
and you get to take it a few steps further, etc. :-)

> hard work. But to do both (cow and bull) in a very thorough way takes the
> most work of all, and usually takes years of living in the middle of a
> problem space (usually one full of manure. :-) And then there is the work of
> getting a pasture set up and maintaining it (mending fences, etc.) too.

"Pain is the cost of the maintenance of boundaries" [though it doesn't
have to be that way, IMHO].

> I'm not saying you have not done a lot of all three with your site (cow,
> bull, MediaWiki pasture), but unless the work is also out there under a free
> license that defines a constitution for collaboration,
> http://www.kurtz-fernhout.com/oscomak/SSI_Fernhout2001_web.html
> and is in the right size chunks, it can't be built on stigmergically IMHO.

What? I already mentioned the robots.txt file, which clearly states
anybody can copy and so on. I am pretty sure that robots.txt has been
held up in courts of law before too (same with GPL, hurray!).

> (I'm certainly guilty of all this myself, like with long emails.) So it will
> all have to be "treated as damage and routed around" by the free community
> on the internet as far as stigmergy. :-( (*)

As damage? It's *right there*.

> That is not to diminish the potential future value of greener pastures and
> alternative implementations like SKDB of course. But without freely and
> formally licensed content and metadata (cows and bulls) any implementation
> (pasture) is not of much current use. Even the wiki (pasture) at OSCOMAK.net

How is it possible for skdb/metarepo to need a license when it's more
like the map towards putting together all of the puzzle pieces?
(semantic web, ikiwiki, git, repos, open access, gpl, etc.)? Surely
you don't see this as an entire centralization project? (Of course,
aggregating all of the information together is somewhat of a
centralization process, but at the same time we see many individuals
doing this with news and other publications with little problem, as
long as all of the licenses are maintained and so on).

> is pretty useless to the casual browser (carnivore? dairy farmer?) at the
> moment, since I have been the worst offender to date as far as posting
> philosophy (manure) but not adding articles (cows and bulls) to the wiki
> (pasture). :-) (**)

I think we are on different wavelengths. Blatantly dumping content, as
I have on my caches and hard drives over the years, doesn't make it
all magically come alive. :-(

> One issue may be that pastures and cows and even manure are much easier to
> deal with than raging bulls (thinking :-) for most people: (***)
> http://www-03.ibm.com/ibm/history/multimedia/think_trans.html
> "And we must study through reading, listening, discussing, observing and
> thinking. We must not neglect any one of those ways of study. The trouble
> with most of us is that we fall down on the latter -- thinking -- because
> it's hard work for people to think, And, as Dr. Nicholas Murray Butler said
> recently, 'all of the problems of the world could be settled easily if men
> were only willing to think.' "
>
> So, bulling is harder than cowing for most people. Some people are the
> opposite, of course. :-) But as a note taped to Marty Johnson's computer
> monitor in his office said (noticed the one time my wife and I met him):
> http://www.isles.org/
> "You can't plow a field by turning it over in your mind".

Not true. "As I move, so I move the universe." Your mind, your brain,
is how you are grounded with the world around you ...

> Of course, I liked to joke to my wife that did not apply to theoretical
> mathematicians or a lot of computer programming or research. :-) But I do
> think it applies to a big extent here -- we need both cows and bulls and we
> already got a pasture -- even if it may not be as green as I hoped for and
> the ones over there (Pointrel?) and there (SKDB?) looks mighty greener to me
> and to you. :-)

Sounds like cultural relativism to me - "everybody is equally good,"
as opposed to discussing the fundamental issues that we're here to
solve in the first place. But before we get to this please see the
content above and we'll chug through that and see what comes of it,
then maybe back to these points.

- Bryan

Bryan Bishop

May 6, 2008, 7:50:23 PM
to openv...@googlegroups.com, kan...@gmail.com
On Tue, May 6, 2008 at 5:43 PM, mike1937 <arid_...@comcast.net> wrote:
> > AP Calculus BC tomorrow. Wish me luck
> > http://heybryan.org/mediawiki/index.php/Cal2 <-- my self-made review
>
> Good luck! I've got AB tomorrow; my demonic teacher gave us an
> additional final the day before the actual AP test. Thanks for the
> review material.

Sure thing. :-)

> As I discovered here:
> http://www.wikimatrix.org/show/ikiwiki
> ikiwiki can have file attachments. If that means what I think it
> means, why the heck aren't we using it?

**but we are** :-(
http://fennetic.net/autogenix/
try running this command: git clone http://fennetic.net/autogenix/autogenix.git
and then apt-get install ikiwiki for good measure.

Paul's just been off on Semantic MediaWiki. ;-)
^ saying this jokingly.

Re: git, I have many links somewhere over here:
http://heybryan.org/mediawiki/index.php/2008-04-25

- Bryan

Paul D. Fernhout

May 6, 2008, 9:22:05 PM
to openv...@googlegroups.com
mike1937 wrote:
>> And as with your solar energy article
>> http://www.oscomak.net/wiki/Solar_Energy
>> which duplicates the Wikipedia page url, maybe so what?
>
> I didn't think it through much, but I guess my half-formed sub-
> conscious thought was that it basically was the Wikipedia article; I
> just paraphrased the parts I thought were pertinent. It was almost
> more for my benefit, to take notes.

Well, it's great to start somewhere.

As Hans Moravec told me when I hung out in his lab, the secret to success in
research is to fail often as a child or student.(*) Most research is
failure, so if you get used to it early, that serves you your whole career
long. Most successful academics are, of course, thus temperamentally
unsuited to do research. Which explains a lot. :-)

That's another reason why places like Google shoot themselves in the foot
hiring only proven successes. They might have better luck hiring spectacular
failures. :-)

> As for conditional tagging, there's no good way to do it unless halo or
> smw have a syntax I'm not aware of. My best idea would be to make the
> property name what it is used for, with the shovel "metal
> part" (except preferably a little more specific) and make the value
> 13g of aluminum (exact chemical formula of the material (and that is
> a variable type I believe) would be a must), then add a new property
> for alternate materials. If it needed two materials for the metal
> part, the best solution might be to make a new object (article? I
> would really prefer the greener pasture of an object oriented
> language) which in turn has those two needed properties (then you can
> add more alternates for those parts too, if the isru plastics article
> taught me anything it's that the "cow" part of it never turns out to be
> simple).

Interesting idea. I've been thinking on this and realized that it isn't
actually essential from the manufacturing web analysis point of view how
much of something you need, at least a a crude first approximation. If you
need an ounce of pure aluminum, you might as well need a ton of it, as far
as needing a way to produce it. As you fine tune the design, then quantities
matter more and more as the choice of process to make aluminum might be
affected by the scale and frequency of the need. And certainly quantity is
needed for simulation. Anyway, an interesting suggestion.
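
To make that concrete, here is a rough Python sketch of that first
approximation (the function and the data are made up for illustration,
not part of any existing tool):

# Rough sketch: at a first approximation, what matters is *which*
# materials need a production process, not how much of each.
def needed_processes(bill_of_materials):
    """Reduce {material: quantity} to the set of materials that
    some process must be able to produce."""
    return {material for material, quantity in bill_of_materials.items()
            if quantity > 0}

print(needed_processes({"aluminum": 13, "tin": 14, "unobtanium": 0}))
# -> {'aluminum', 'tin'} (a set, so order may vary)

Quantities could then be layered back in later, once the choice of
process starts to depend on scale and frequency of need.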

> Like you said, my ontology is very flawed, in part because we are
> using a smw (pasture with some randomly placed nightshade) where skdb
> or some other database (magical unicorn planet where nectar springs
> from the ground) would be better. I've said it before and I'll say it
> again: it's a wiki for a reason and anyone can feel free to change it.

Well, at least the content. But not easily the architecture. Anyway, if in
the end we all understand why, say, Bryan is right about ontologies and the
limitation of Semantic MediaWikis, then as a learning community we will be
that much further along IMHO.

We're (I hope!) one of those Engelbart outposts on the frontiers of
knowledge. Even if just our own. :-)

One of the lurkers is probably laughing at us right now as the know-it-all
who won't tell. :-)

At least God, if he/she/it/them/other is out there, probably is. :-)

> If you wanted to, you could try using the most common syntax for "or"
> and hope for the best. I believe it's || or something? Might want to
> throw some ! and & operators into it just for good measure.

Maybe. Can you supply an example?

> It's seeming to me like tags may, in the end, prove utterly worthless
> for anything but organization.

Interesting to hear you say that now that you are the (relative) expert here
on Semantic MediaWiki.

> However a wiki is the best possible
> front end for human readable information.

Well, I think I know what you mean, even if I can imagine better. :-)
And that's just thinking about stuff 30 years old. :-)
http://www.mojowire.com/TravelsWithSmalltalk/DaveThomas-TravelsWithSmalltalk.htm
"XSIS and The Customer Information Analyst Why would Xerox develop an
incredible spreadsheet that could display images, conjugate Russian verbs
and why did that happen in a strange group called XSIS located in Los
Angeles and Washington? Apparently they had an important customer with a lot
of complex information to analyze. How did Angela Coppola know that 1000
people would show up for OOPSLA'86 when the PC committee predicted 100-200?
What sort of technology could the National Security Administration use to
print Chinese leaflets circa 1978? The Xerox Analyst served the CIA as a
analytic tool for many years. Even 13 years later it still offers tools more
powerful than MSOffice. The Analyst is still alive and well and forms a key
component in TI ControlWorks Wafer Fab Automation System."

> Is it at all possible to
> have the wiki page function as the text document in a project, then
> just have all the meta data, CAD's, etc, accessible from the article?
> That actually seems like it would be easy... all you would need is a
> program for storing things on the server, then put an external link in
> the wiki article to them. A little ugly, sure, but very easy. I'm just
> polluting cyberspace with my half baked thoughts, feel free to ignore
> me if it's retardedly impossible. I just took a calculus final. I'm
> happy to keep myself from drooling.

That's some very interesting drool you have there. :-)
Maybe compulsory exams are good for something after all. :-)
http://en.wikipedia.org/wiki/Free_school
http://en.wikipedia.org/wiki/Unschooling

Yes, I could see how we could make a site that essentially put Wikipedia
articles in a frame:
http://www.w3.org/TR/html4/present/frames.html
"HTML frames allow authors to present documents in multiple views, which may
be independent windows or subwindows. Multiple views offer designers a way
to keep certain information visible, while other views are scrolled or
replaced. For example, within the same window, one frame might display a
static banner, a second a navigation menu, and a third the main document
that can be scrolled through or replaced by navigating in the second frame."

Then we could surround the frame with ontological information which was
edited more directly (maybe as fancy as Halo, maybe not).

Here is an example of a site that does something similar (there are others I
remember from many years ago):
http://webride.org/
"Webride attaches discussion forums to each and every web page on the fly."

Here is OpenVirgle.net in that frame:
http://webride.org/discuss/split.php?uri=http%3A%2F%2Fwww.openvirgle.net

The main Wikipedia site has one comment:
http://webride.org/discuss/split.php?uri=http%3A%2F%2Fwww.wikipedia.org

One issue with this approach is that we might need to add new content to
Wikipedia (which might get deleted) or still have a local regular MediaWiki.

Personally, I was never keen on tags in the article text; that's why
I felt Halo was so exciting (if in the end a little slow on an older machine
and maybe still buggy). Still, trying to manage the tags in text is one of
the reasons Halo is slow. And I notice bugs in the presentation of the tags
too, like a dangling "True" here:
http://www.oscomak.net/wiki/Liquid_breathing_to_resist_bone_loss

Despite everything I've written on MediaWiki as its champion, if we try it
some more and it doesn't work, as to changing code on the server, to quote
Mystery Men:
http://www.adherents.com/lit/comics/MysteryMen.html
"Shoveler: Nothing I couldn't move around." :-)

That is especially true with the new server (maybe online tomorrow, we'll
see, nothing firm yet). I can run long term processes on a dedicated server
like the JVM (and so no startup overhead). So I could put up some version
of, say, Jython/Pointrel code like the stuff you played with from the SVN
repository or on the server. Or most anything else free that exists.

But I don't say that to say stop using the Semantic MediaWiki. We (mostly
you :-) are making good progress understanding its strengths and weaknesses,
and that will serve us well whether we continue to use it or export the
content to something new (including even back to client-side tools for
editing as opposed to browsing. :-)

Anyway, more feedback on the Wiki is always appreciated. The last time I put
something like OSCOMAK up, the complexity of choosing the standard but
complex "Zope" helped torpedo it. The great thing about standards is there
are so many to choose from. :-) Not sure who said that first?

--Paul Fernhout
(*) Fortunately, I (semi)intentionally failed a class (Physics) in college
in part to see what it felt like, so I think I just barely qualify for a
researcher career by Hans's criterion. (George Miller's was "publish
something as an undergraduate". :-)
http://www.alibris.com/booksearch.detail?S=R&bid=9085995928&cm_mmc=shopcompare-_-base-_-aisbn-_-na
Not to say anything bad about the Physics professor himself, who is a very
likable guy and probably wouldn't even have failed me, except that I had
missed too many labs (for band practice) to technically pass and wasn't
interested in making them up. It's hard to fail a course at PU; you have to
work at it
I found out. :-) Here's the poor guy who had to deal with me often being
late to class (nobody else was ever late, strange thing):
http://nobelprize.org/nobel_prizes/physics/laureates/1993/taylor-autobio.html
Funny thing is, I explained that all to a dean (about missing the labs) and
they still gave me lab credit for the course. :-) Bureaucracy. :-) Probably
someone will figure out how to revoke my diploma now. :-) Well go ahead, I'm
tired of all the junk mail even after I asked it be stopped. :-)
Anyway, maybe even back then I could smell a cult. :-)
http://www.disciplined-minds.com/
"Upon publication of Disciplined Minds, the American Institute of Physics
fired author Jeff Schmidt. He had been on the editorial staff of Physics
Today magazine for 19 years."
I'm not saying Immanuel Velikovsky is right about anything, but James P.
Hogan makes pretty clear (Kicking the Sacred Cow - July 2004) how badly he
was treated by professional physicists:
http://en.wikipedia.org/wiki/Immanuel_Velikovsky
despite usually being correct in advance of the facts on several things.
By chance, the professor's parents had an organic farm I later helped
certify. :-) And I told them (truthfully) what a wonderful professor there
son was, and they were rightfully very proud of him. If I had known at PU he
was a Quaker and cared about such things back then, (I didn't) maybe I would
have been more on time at least. :-( From his autobiography: "Both the Evans
and Taylor families have deep Quaker roots going back to the days of William
Penn and his Philadelphia experiment. My parents were living examples of
frugal Quaker simplicity, twentieth-century style; their very lives taught
lessons of tolerance for human diversity and the joys of helping and caring
for others."
Now that, I can respect.

Bryan Bishop

May 6, 2008, 9:41:59 PM
to openv...@googlegroups.com, kan...@gmail.com
On Tue, May 6, 2008 at 8:22 PM, Paul D. Fernhout
<pdfer...@kurtz-fernhout.com> wrote:
> > As for conditional tagging, there's no good way to do it unless halo or
> > smw have a syntax I'm not aware of. My best idea would be to make the

The skdb designs provide for this 'conditional tagging' that you need
.. you can think of it as "provides" relationships and so on (sort of
like pointrel), and this can be used in an algebraic way, and that's
easy, as long as the fundamental requirements can be built up; this is
generally done through the autospec program that Ben has been writing.
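
As a toy illustration of that "provides" idea, here is a sketch in
Python; the data and the function are invented, not actual autospec
input:

# Toy sketch of "provides" resolution: for each requirement, look
# up which (invented) packages claim to provide it.
providers = {"aluminum": ["bauxite_refinery"], "heat": ["solar_furnace"]}

def resolve(requirements):
    """Map each requirement to the packages that provide it."""
    return {req: providers.get(req, []) for req in requirements}

print(resolve(["aluminum", "heat"]))
# -> {'aluminum': ['bauxite_refinery'], 'heat': ['solar_furnace']}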

> > property name what it is used for, with the shovel "metal
> > part" (except preferably a little more specific) and make the value
> > 13g of aluminum (exact chemical formula of the material (and that is
> > a variable type I believe) would be a must), then add a new property

Yep, the exact chemical formula would be its own object represented in
the skdb package file for this particular (sub)project.

> > for alternate materials. If it needed two materials for the metal
> > part, the best solution might be to make a new object (article? I
> > would really prefer the greener pasture of an object oriented

It's not an object-oriented language per se, but I guess that's what
it looks like. Have you gone to check out the yaml.org documentation
yet? The py-yaml documentation is also fantastic and worth checking
out; lots of examples, and it should spark the synapses a bit.

> > language) which in turn has those two needed properties (then you can
> > add more alternates for those parts too, if the isru plastics article
> > taught me anything it's that the "cow" part of it never turns out to be
> > simple).

The needed parts - we call these dependencies, and that's the !! line
provided by the yaml metadata file, with an extendable data structure
(list) to specify further requirements and the type of requirement
(software only? for the fundamental stability of the skdb package?)
and so on.
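
For what it's worth, a purely illustrative sketch of what such a YAML
metadata file might look like, parsed with py-yaml (the field names
here are invented, not the actual skdb format):

# Illustrative only: invented field names, not real skdb syntax.
import yaml  # the PyYAML package

doc = """
name: shovel
depends:
  - material: aluminum
    quantity: 13 g
    alternatives: [tin]
  - material: wood
    purpose: handle
"""

package = yaml.safe_load(doc)
for dep in package["depends"]:
    print(dep["material"], "alternatives:", dep.get("alternatives", []))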

> Interesting idea. I've been thinking on this and realized that it isn't
> actually essential from the manufacturing web analysis point of view how
> much of something you need, at least as a crude first approximation. If you
> need an ounce of pure aluminum, you might as well need a ton of it, as far
> as needing a way to produce it. As you fine tune the design, then quantities

The question is where in the design process that fine-tuning happens. I
suspect it's somewhere between autospec and computer simulation, not necessarily
the database end of things; so like I was saying in my second to last
email here, there needs to be a way to help developers confirm that
their packages are making sense, and a way to make sure they can
simulate the increasing quantities or something, or represent that
from a registry of skdb packages (much like the new partsregistry.org
website for biobricks.org :-)).

> matter more and more as the choice of process to make aluminum might be
> affected by the scale and frequency of the need. And certainly quantity is
> needed for simulation. Anyway, an interesting suggestion.

Simulation in the tool-chain is going to come later, as far as I can
tell, but I am open to suggestions on that front. Simulation in Python
files would be what we'd use; it'd be wise to investigate the already
existing simulation frameworks out there (there are many) so that we
can use them for individual skdb projects. The simulation Python files
would be placed within the dot skdb file, of course. Or s/file/repo/,
technically.
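
A minimal sketch of what one of those per-package simulation files
could look like (the simulate() interface is invented for
illustration, not an actual skdb convention):

# Invented interface, just to show the shape of a simulation file.
def simulate(output_mass_g, yield_fraction=0.9):
    """Estimate the raw material input needed for a given output
    mass, assuming a fixed process yield."""
    return output_mass_g / yield_fraction

if __name__ == "__main__":
    print("raw input needed: %.1f g" % simulate(13.0))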

> > Like you said, my ontology is very flawed, in part because we are
> > using a smw (pasture with some randomly placed nightshade) where skdb
> > or some other database (magical unicorn planet where nectar springs
> > from the ground) would be better. I've said it before and I'll say it
> > again: it's a wiki for a reason and anyone can feel free to change it.
>
> Well, at least the content. But not easily the architecture. Anyway, if in
> the end we all understand why, say, Bryan is right about ontologies and the
> limitation of Semantic MediaWikis, then as a learning community we will be
> that much further along IMHO.

Btw, I would be interested in discussing the implementation of a
semantic wiki on top of ikiwiki. It *should* be an easy addition to
the source code. And if not, we get to go complain to Joey Hess. ;-)

> We're (I hope!) one of those Engelbart outposts on the frontiers of
> knowledge. Even if just our own. :-)

Hm, "Engelbart outpost". I need to import that into my working vocabulary.

> > If you wanted to, you could try using the most common syntax for "or"
> > and hope for the best. I believe it's || or something? Might want to
> > throw some ! and & operators into it just for good measure.
>
> Maybe. Can you supply an example?

Huh? Yes, please. And how does that, at all, contribute to the
underlying architecture? (I am simply confused; please inform.) Maybe
this is in the semantic-mediawiki-docs?

> > It's seeming to me like tags may, in the end, prove utterly worthless
> > for anything but organization.
>
> Interesting to hear you say that now that you are the (relative) expert here
> on Semantic MediaWiki.

I have always wanted to do personal ontologies through Wikipedia, for
example. That's basically what my 12,000 bookmarks are:
http://heybryan.org/bookmarks/bookmarks-old2/

> > However a wiki is the best possible
> > front end for human readable information.

Yep, ikiwiki my friend.

> Well, I think I know what you mean, even if I can imagine better. :-)

Xanadu? :-/ Information interface is always going to be a murky
problem, I think. But I don't want to give up hope quite yet.

> And that's just thinking about stuff 30 years old. :-)
> http://www.mojowire.com/TravelsWithSmalltalk/DaveThomas-TravelsWithSmalltalk.htm
> "XSIS and The Customer Information Analyst Why would Xerox develop an
> incredible spreadsheet that could display images, conjugate Russian verbs
> and why did that happen in a strange group called XSIS located in Los
> Angeles and Washington? Apparently they had an important customer with a lot
> of complex information to analyze. How did Angela Coppola know that 1000
> people would show up for OOPSLA'86 when the PC committee predicted 100-200?
> What sort of technology could the National Security Administration use to
> print Chinese leaflets circa 1978? The Xerox Analyst served the CIA as a
> analytic tool for many years. Even 13 years later it still offers tools more
> powerful than MSOffice. The Analyst is still alive and well and forms a key
> component in TI ControlWorks Wafer Fab Automation System."

Fab automation system? That's fairly relevant here, since we're
essentially proposing FPGA for fablabs and a matter compiler to boot.

> > Is it at all possible to
> > have the wiki page function as the text document in a project, then
> > just have all the meta data, CAD's, etc, accessible from the article?

[Please refer to the replies I sent to Mike on this in the other
email. This was suggested by me on the other side of last month,
IIRC.]

> > That actually seems like it would be easy... all you would need is a
> > program for storing things on the server, then put an external link in

To store on the server, try git push, etc.

> > the wiki article to them. A little ugly, sure, but very easy. I'm just

More than just wiki articles - also txt, html, zip, tar, url, etc.
etc. A diversity of information all packaged into it, with the
automated user interface (agx-get) to facilitate the downloading of
that information, and so on. Remember?

> > polluting cyberspace with my half baked thoughts, feel free to ignore
> > me if it's retardedly impossible. I just took a calculus final. I'm
> > happy to keep myself from drooling.

http://heybryan.org/exp.html
- click on the bottom link to the Eric Hunting chat re: cyberspace,
automation, grounding the semantic web.

> Yes, I could see how we could make a site that essentially put Wikipedia
> articles in a frame:
> http://www.w3.org/TR/html4/present/frames.html

Woah, what? Any self-respecting developer ... erm. No, I'll save this
for another time (on a very tight schedule) -- basically, this isn't a
good idea. Let's just download and import the Wikipedia articles into
the database.

> Then we could surround the frame with ontological information which was
> edited more directly (maybe as fancy as Halo, maybe not).

Gaahhh. The pain. :-)

> Here is an example of a site that does something similar (there are others I
> remember from many years ago):
> http://webride.org/
> "Webride attaches discussion forums to each and every web page on the fly."

Of this type, there are many Firefox extensions now, but it's not
really as much of a semantic web as they like to advertise;
there's no realtime browser-to-browser communication protocol, as I
suggest on my website: http://heybryan.org/ from 2006, one of the
social browsing projects that I kicked around before (and even after,
unfortunately) I learned of trexy/prefound and friends.

Must be old. One of the disadvantages is that it's not an underlying
architecture; it's just another layer, and it's proprietary in some
ways, if not in terms of licensing then in terms of implementation.
Eh, hard to explain these sorts of solutions.

> One issue with this approach is that we might need to add new content to
> Wikipedia (which might get deleted) or still have a local regular MediaWiki.

Yep, that's the idea of using git with ikiwiki - users can push around
content and update each other when they want, integrate and
synthesize, or not at all (as Wikipedia (as a giant, weird collective)
opts to).

> Despite everything I've written on MediaWiki as its champion, if we try it
> some more and it doesn't work, as to changing code on the server, to quote
> Mystery Men:
> http://www.adherents.com/lit/comics/MysteryMen.html
> "Shoveler: Nothing I couldn't move around." :-)

What?

> That is especially true with the new server (maybe online tomorrow, we'll
> see, nothing firm yet). I can run long term processes on a dedicated server
> like the JVM (and so no startup overhead). So I could put up some version
> of, say, Jython/Pointrel code like the stuff you played with from the SVN
> repository or on the server. Or most anything else free that exists.

Re: free. Have people been thinking that I am calling them gits when I
mention git, the free RCS (superior to SVN in various ways)? Just
wondering. Would explain a lot.

> But I don't say that to say stop using the Semantic MediaWiki. We (mostly
> you :-) are making good progress understanding its strengths and weaknesses,
> and that will serve us well whether we continue to use it or export the
> content to something new (including even back to client-side tools for
> editing as opposed to browsing. :-)

Hm, that distinction isn't necessary. Assume the ikiwiki scenario. In
that one, the client tools to edit and manage content really all work
over HTTP, or if using git then there's the added Git protocol that
could be used for massive transfers, so it's not that big of a
boundary to cross.

> Anyway, more feedback on the Wiki is always appreciated. The last time I put
> something like OSCOMAK up, the complexity of choosing the standard but
> complex "Zope" helped torpedo it. The great thing about standards is there
> are so many to choose from. :-) Not sure who said that first?

Probably the first guy who realized he has 400 text editors on Linux
to choose from.

> (*) Fortunately, I (semi)intentionally failed a class (Physics) in college
> in part to see what it felt like, so I think I just barely qualify for a
> researcher career by Hans's criterion. (George Miller's was "publish
> something as an undergraduate". :-)

Moravec was a good guy to work with ... did you target him? Did you
"know" before just wandering in? What was the deal?

- Bryan

mike1937

May 6, 2008, 10:13:41 PM
to OpenVirgle
> > > If you wanted to, you could try using the most common syntax for "or"
> > > and hope for the best. I believe it's || or something? Might want to
> > > throw some ! and & operators into it just for good measure.
>
> > Maybe. Can you supply an example?
>
> Huh? Yes, please. And how does that, at all, contribute to the
> underlying architecture? (I am simply confused; please inform.) Maybe
> this is in the semantic-mediawiki-docs?

I think I'm thinking on a level below you guys; I was half kidding. I
looked it up and | or || are used in regular Java as the "or"
operator, so if one was expecting a machine to parse semantic tags
they might type "13g aluminum || 14g tin," into the string field for
a variable. It would be foolish to assume a script would work like
that or use crummy Java syntax.
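
Just to show what a script doing that might look like, a sketch in
Python (plain string splitting; not any real SMW or Halo syntax):

# Sketch of parsing a hypothetical "||"-separated property value.
def parse_alternatives(value):
    """Split '13g aluminum || 14g tin' into a list of options."""
    return [option.strip() for option in value.split("||")]

print(parse_alternatives("13g aluminum || 14g tin"))
# -> ['13g aluminum', '14g tin']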


Doram

May 7, 2008, 4:18:48 PM
to OpenVirgle
I'm sorta with Mike on this, feeling like I am at least a couple of
rungs below the dialog here, but I do want to reiterate a point I made
a while back. I did mention the SKDB as a superstructure of the Wiki,
and then I mentioned having it act as a go-between for other Wikis and
info sources, to functionally connect the data, even if not hosted on
the same server. I realize that this almost amounts to a miniaturized
version of the internet itself (although actually purposed). I like
the idea I saw of framing Wikipedia with ontological/semantic frames.
Maybe that can be done with more than just Wikipedia. That speaks to
something else I said on the avoidance of copyright infringement issue
brought by direct copying, but seems like a better idea than my
proposed tagged linkdumps (which I even said at the time was a stopgap
measure).

Of course, I can't remember where I posted any of that, but I can
return with quotes later, if necessary. (I don't feel like reiterating
that much, and I don't feel like researching much either. I am tired
today. My son is sick, and so am I. |P Blah. Head cold...)

I am definitely starting to think that some of the value of the work
that we are/will be doing is going to be the purposed collection/
coordination/utilization of these disparate sources. I also agree with
Bryan's statement that we will eventually benefit from cooperation
with other standardization entities out there, although we need to
firm up what we are working on here first a little.

Doram wanders off to make some more non-technology management tools...

Bryan Bishop

May 7, 2008, 5:52:30 PM
to openv...@googlegroups.com, kan...@gmail.com
On Wed, May 7, 2008 at 3:18 PM, Doram <DoramB...@gmail.com> wrote:
> I'm sorta with Mike on this, feeling like I am at least a couple of
> rungs below the dialog here, but I do want to reiterate a point I made

Maybe there's something I can explain in more depth?

> a while back. I did mention the SKDB as a superstructure of the Wiki,

Only inasmuch as the Debian repo architecture is a superstructure of
deb files.

> and then I mentioned having it act as a go-between for other Wikis and
> info sources, to functionally connect the data, even if not hosted on

[Same sentence].

> the same server. I realize that this almost amounts to a miniaturized
> version of the internet itself (although actually purposed). I like

No, that's not the internet, that's just taking advantage of the internet.

> the idea I saw of framing Wikipedia with ontological/semantic frames.

Avoid frames at all costs:
http://www.html4.com/mime/markup/php/standards_en/html_misuses_en/html_misuses_21.php
http://universalusability.com/access_by_design/frames/avoid.html
http://www.hobo-web.co.uk/tips/41.htm
etc.

> Maybe that can be done with more than just Wikipedia. That speaks to
> something else I said on the avoidance of copyright infringement issue
> brought by direct copying, but seems like a better idea than my
> proposed tagged linkdumps (which I even said at the time was a stopgap
> measure).

Sorry, I don't see how that avoids the copyright issue. You're just
making it so that the electrons are served up to the user in a certain
way. It's going through tons of caches all over the internet as
packets are flung back and forth and all over the place. There is no
guarantee, and indeed it's incredibly unlikely, that the exact
electrons or holes and voltage spikes that the server transmitted are
in any way, shape, or form the actual data that the user receives ...
tells you something, doesn't it?

- Bryan

Doram

May 7, 2008, 11:38:47 PM
to OpenVirgle
What I meant about the copyright issue is that with a wrapper for the
page, we can categorize and reference a page, but we do not host it on
our server, or change it in any way, or claim any responsibility for
anything more than the categorization that we have assigned to it.
It's like a great big dynamic quote. Yes. I see the issue of pages
changing over time, out from underneath our categorization, but...
yes. I see your point. (Woo. Too many days of no sleep. My logic is
slipping...) The internet changes too fast for us to reference
something without catching a snapshot and bringing it under our
control........... Bloody hell.

Well... I got nothing right now. I will have to think about this
(preferably with more than 3 hours sleep). I concede the point. Frames
begone.


Bryan Bishop

May 7, 2008, 11:49:13 PM
to openv...@googlegroups.com, kan...@gmail.com
On Wed, May 7, 2008 at 10:38 PM, Doram <DoramB...@gmail.com> wrote:
> What I meant about the copyright issue is that with a wrapper for the
> page, we can categorize and reference a page, but we do not host it on
> our server, or change it in any way, or claim any responsibility for
> anything more than the categorization that we have assigned to it.
> It's like a great big dynamic quote. Yes. I see the issue of pages
> changing over time, out from underneath our categorization, but...
> yes. I see your point. (Woo. Too many days of no sleep. My logic is
> slipping...) The internet changes too fast for us to reference
> something without catching a snapshot and bringing it under our
> control........... Bloody hell.

Yeah, we're all mumbling that a lot these days. Bloody, bloody hell.
There was a quote on Slashdot once, one that I can't track down at the
moment, but it basically went like this: "Most of the open source
developer attitude is just simply that the proprietary folks just
don't understand, and in truth these programmers are tired of it, so
they stop fighting, call it quits and go off and say 'here, this is
the better way that we have been talking about' -- and you know what?
It works." Wish I could source this.

- Bryan

Doram

May 8, 2008, 12:44:06 AM
to OpenVirgle
I agree. That is a brilliant sentiment, and it really captures a lot of
my attitude at the outset of this project. You guys go and waste time
fighting about what the best way to do things is, and I will go off
and do it. I think we are still doing a decent job of that.


Paul D. Fernhout

May 8, 2008, 2:41:23 AM
to openv...@googlegroups.com
Bryan Bishop wrote:
> On Tue, May 6, 2008 at 2:12 PM, Paul D. Fernhout
> <pdfer...@kurtz-fernhout.com> wrote:
>> Bryan-
>>
>> Do you have a proposed detailed ontology or tagging guidelines somewhere
>> here that relates to manufactured artifacts or related procedures?
>> http://heybryan.org/mediawiki/index.php/Skdb
>> Or related tagged content as examples?
>
> No, I think we're missing the broader issue here (not just a matter of
> tagging (btw, tagging good)). It's not the matter of adding content
> and dumping it into the wiki, that's fine and desperately needed,
> but rather what I see is that you guys are already trying to come up
> with the SKDB files without sufficient time spent on the **entire
> idea** of semantic datastructs and so on, or mapping out what
> information resources to pursue in order to figure out when and if you
> have a good idea for a first version (the process, not the objects);
> yes, you can just go around and tack on variables and spagetti code as
> you go, sure, that's one way to do it -- but there's not even a
> resemblance of the underlying infrastructure that this 'grounded,
> manufacturing-oriented semantic web' can look like, even from day
> one*. Another way to do it would be to map out the information
> resources that we have in front of us and pursue the standardization
> organizations, which are going to be particularly interested in our
> little project.

As is pointed out here (previously referenced):
http://gamearchitect.net/Articles/SoftwareIsHard.html
"The difference is that the overruns on a physical construction project are
bounded. You never get to the point where you have to hammer in a nail and
discover that the nail will take an estimated six months of research and
development, with a high level of uncertainty. But software is fractal in
complexity. If you're doing top-down design, you produce a specification
that stops at some level of granularity. And you always risk discovering,
come implementation time, that the module or class that was the lowest level
of your specification hides untold worlds of complexity that will take as
much development effort as you'd budgeted for the rest of the project
combined. The only way to avoid that is to have your design go all the way
down to specifying individual lines of code, in which case you aren't
designing at all, you're just programming."

So, maybe stop thinking of what we are doing as adding articles and start
thinking of it as designing? :-)

The bottom line is you can say we need a design all you want, but where is a
specific (not handwaving) one we can discuss? Only in the Wiki, flawed as
it is. Or maybe places like here:
http://www.mel.nist.gov/psl/
which I am remiss in not keeping up with.

> * I admit that I am at fault for this too, since I have only recently,
> within the last two months, begun to use revision control systems, but
> that's no excuse for everybody else. (i.e., fight ignorance, embrace
> extend release)
>
> IEEE
> http://standards.ieee.org/
> "IEEE's Constitution defines the purposes of the organization as
> "scientific and educational, directed toward the advancement of the
> theory and practice of electrical, electronics, communications and
> computer engineering, as well as computer science, the allied branches
> of engineering and the related arts and sciences." In pursuing these
> goals, the IEEE serves as a major publisher of scientific journals and
> a conference organizer. It is also a leading developer of industrial
> standards (having developed over 900 active industry standards) in a
> broad range of disciplines, including electric power and energy,
> biomedical technology and healthcare, information technology,
> information assurance, telecommunications, consumer electronics,
> transportation, aerospace, and nanotechnology."
> http://en.wikipedia.org/wiki/IEEE_Standards_Association

IEEE makes money off of selling their standards documents (which are in that
sense proprietary and non-copyable). Also, those processes take years.

> http://w3.org/
> "W3C primarily pursues its mission through the creation of Web
> standards and guidelines designed to ensure long-term growth for the
> Web. "

Been there. Done that. Got the mention. :-)
http://www.alphaworks.ibm.com/tech/xfc/
These processes also take years.

(For the record, I think XML was unnecessary and is not a very good choice
of encoding system, and also misses the ontological point. :-)
http://www.oreillynet.com/xml/blog/2001/03/stop_the_xml_hype_i_want_to_ge.html

Yes, all true. Someday someone big and strong will do that, and we'll get
something stupid. :-)

> (Remember, anybody can download ikiwiki + git to start their own
> project; this needs to be addressed in any OSCOMAK-like toolchain). I
> haven't seen much discussion of this in the local group, and I think
> it's worth bringing up. At the same time, it's not too hard to
> assemble a list of email addresses to contact those organizations. If
> they don't participate, it's their loss -- small groups like us move
> much more quickly than they can 'legally' keep up with (lots of
> distributed work going on, but I suspect that it's mostly done by main
> contributors for the big pushes ... maybe; don't know).

You mention implementation technologies picked from endless possibilities
but that does not explain what to actually do with them.

> To start things off I propose a digestion methodology, based off of
> retrieving the projects out there on the web as they are,
> investigating the well-understood formats, and then working from there
> to see what the historical basis has been. For example, there are many
> electronics projects put up on the web, and usually these include the
> GDL schematics (uh, it's a *nix electronics schematics format, IIRC).
> Now, these schematics are the way they are for a reason, and usually
> they are more or less comprehensive, so it's a good place to start, a
> good way to do comparison. And at the same time we can digest the
> information gradients from the public access databases:
> http://heybryan.org/mediawiki/index.php/Open_access

Again, too vague to be useful IMHO. Maybe the details are in your head, but
I can't read them from here. :-)

> The problem with that is that you still need project coordination for
> each of the datatypes, and it's not present. So that's why I was
> thinking that we still need to investigate and recommend core
> methodologies for project management. Typically this is done through
> revision control systems (repositories), some way for the developers
> to communicate with each other, and whatever organizational style they
> prefer, really, but the idea is that this is ultimately accessible
> from the command line via agx-get (or apt-get at least). Not having
> been in all that many open source projects, I can't quite say what
> methods they use or a generalized principle format; but once we figure
> this out and blast off a few examples/suggestions (I suspect we can
> look at some good projects -- debian, freebsd, perl, nethack,
> firefox). And then from there we can promote the emergence of the
> diversity and the work that we need to see.

Nobody is stopping you from putting together a solution you think will work
and demonstrating it, ideally with a few minute screencast.

> But that still leaves Mike hanging for a while, but maybe only at
> first glance. What we can be doing now is tracking down the list of
> watering holes for certain types of information, importing the content
> in, sure, while simultaneously seriously encouraging him to document
> the methodologies for project coordination of what he's doing + that
> of others. And creating an ontology of -projects- would work too. I'm
> thinking big picture here.

Sure that's all useful. But "the devil is in the details".

> Another quick example - have we come up with a format idea for keeping
> a list of links (BibTeX stuff) related to the content that we are
> pulling? I mean, a way to specify just what information resources we
> have imported already and what we have not? I suggest taking a look at
> trexy.com and prefound.com, sites that treat internet searchers like
> ants and their paths through the web as trails worth saving as
> information is mined and brought back to the hive in some structured
> way (as found suitable by the searchers). In truth, the searchers
> don't actually bring content back to the websites, only their 'search
> trail', not what they found or any structured meaning out of it. [I've
> gotten into a habit such that, when I find a new website with a lot of
> information that I want to hoard, I write up a script to automate my
> downloading of it, and then let it be while I go on and just assume
> that I've processed it (unless I want to actually read it ASAP)]. Same
> thing here.

A larger summary or video example would be appreciated if you think that is
important. Sounds like Memex. But I think it misses the engagement with the
content.

This is all too general to be useful as I see it.

>> And Mike has taken the first step towards that by putting articles like on
>> Solar energy (cow) and tags (bull) on the Semantic MediaWiki (pasture). Even
>
> That looks no different from other pages that we've seen on my site:
> http://heybryan.org/mediawiki/index.php/DNA_sequencer
> http://heybryan.org/thinking.html
> http://heybryan.org/graphene.html
> http://heybryan.org/mediawiki/index.php/DNA_synthesizer
> http://heybryan.org/mediawiki/index.php/Microarray
> http://heybryan.org/mediawiki/index.php/AFM_nanolithography
> http://heybryan.org/mediawiki/index.php/Meat_on_a_stick

Except Mike's contribution has a license attached so others can build on it.
And it has metadata attached (or could). And he seems to be taking seriously
attributing sources and drawing from freely licensed works.

Also, given your professed disdain for copyright and attribution, at this
point, I would not be confident of what parts of your site are original and
what are unattributed derivatives. I'm not saying any of it is derivative;
I'm just saying I don't know.

> So I don't see how the Solar_Energy article is new in terms of what we
> want to see happening; outside of this context of increasing
> development and sophistication of the semantic web, I think it's great
> that Mike is doing pages like that -- a good habit of infohoarding.
> Though you could easily argue that I am biased.

As above.

>> with all the respective work by you and I, it was Mike who took the first
>> pioneering step for humankind towards giving the world a freely-licensed
>> repository of manufacturing data and metadata. (Laying it on thick enough
>> for you, Mike? :-) Sure, his ontology is buggy and incomplete. And sure,
>> maybe the solar energy article could be rewritten to separate the basic
>> theory and designs from the extraterrestrial applications somehow. But it is
>
> re: separation; I don't see that as relevant to the idea here. Isn't
> it that there would be a **project** that uses solar energy? Solar
> energy is kind of like a unit, to be used by GNU units,

http://www.gnu.org/software/units/
"The Units program converts quantities expressed in various scales to their
equivalents in other scales."

http://en.wikipedia.org/wiki/Module
"A Module is a self-contained component of a system, which has a
well-defined interface to the other components; "

> so as a
> reference article I think it's fine, as long as it eventually links
> over to the semantic projects that are more structured and so on. In
> fact, all of this email might have been simplified by that simple
> realization. It's not so much the 'theory' -- I think it'll fit well
> if you consider it as a general introductory article to the topic, for
> people not too much in the know about the solar energy input
> variables, although I think it would be wise to separate the idea of
> photonic energy from solar energy, which basically just goes back to
> one of the fundamental units, like photonic flux or something? I
> forget what it is. CRC should know.
> ^ so you can tell that I did *not* revise the rest of this email after
> typing that

This all helps me understand that I myself (given Wikipedia) am less
interested in explaining the science than in detailing specific technology
or procedures.

>> all three together (cow plus bull plus pasture), and I can almost hear the
>> patter of little hooves already! Well, maybe in a decade or two. :-)
>
> So ... reworking your analogy of cow-bull-pasture, Solar_Energy
> doesn't fall into any of those, since it's a fundamental unit that can
> be explained by experimental projects that can be added via the git
> repos, linking back to the text documentation about how the experiment
> was setup and so on (these being part of the dot skdb files).

Except that is all hypothetical. :-)

>> The thing is, anyone with a certain set of mental abilities can bull at a
>> moment's notice, but to cow thoroughly by *anyone* takes at least a little
>
> Cow=mapping, right? I find mapping easier than digesting, since you
> get to make lists of lists and so on, up to the point until you
> realize somebody has already partially digested the material for you
> and you get to take it a few steps further, etc. :-)
>
>> hard work. But to do both (cow and bull) in a very thorough way takes the
>> most work of all, and usually takes years of living in the middle of a
>> problem space (usually one full of manure. :-) And then there is the work of
>> getting a pasture set up and maintaining it (mending fences, etc.) too.
>
> "Pain is the cost of the maintenance of boundaries" [though it doesn't
> have to be that way, IMHO].
>
>> I'm not saying you have not done a lot of all three with your site (cow,
>> bull, MediaWiki pasture), but unless the work is also out there under a free
>> license that defines a constitution for collaboration,
>> http://www.kurtz-fernhout.com/oscomak/SSI_Fernhout2001_web.html
>> and is in the right size chunks, it can't be built on stigmergically IMHO.
>
> What? I already mentioned the robots.txt file, which clearly states
> anybody can copy and so on. I am pretty sure that robots.txt has been
> held up in courts of law before too (same with GPL, hurray!).

I'd love to see the legal citations on robots.txt. And understand their scope.

>> (I'm certainly guilty of all this myself, like with long emails.) So it will
>> all have to be "treated as damage and routed around" by the free community
>> on the internet as far as stigmergy. :-( (*)
>
> As damage? It's *right there*.

But not clearly licensed. And of unknown origin. And without metadata.

>> That is not to diminish the potential future value of greener pastures and
>> alternative implementations like SKDB of course. But without freely and
>> formally licensed content and metadata (cows and bulls) any implementation
>> (pasture) is not of much current use. Even the wiki (pasture) at OSCOMAK.net
>
> How is it possible for skdb/metarepo to need a license when it's more
> like the map towards putting together all of the puzzle pieces?
> (semantic web, ikiwiki, git, repos, open access, gpl, etc.)? Surely
> you don't see this as an entire centralization project? (Of course,
> aggregating all of the information together is somewhat of a
> centralization process, but at the same time we see many individuals
> doing this with news and other publications with little problem, as
> long as all of the licenses are maintained and so on).

Sorry, you are talking at such a general level I don't see that this makes
a lot of progress. Kind of like advising stock investors: "Buy low, sell high".

>> is pretty useless to the casual browser (carnivore? dairy farmer?) at the
>> moment, since I have been the worst offender to date as far as posting
>> philosophy (manure) but not adding articles (cows and bulls) to the wiki
>> (pasture). :-) (**)
>
> I think we are on different wavelengths. Blatantly dumping content, as
> I have on my caches and hard drives over the years, doesn't make it
> all magically come alive. :-(

Exactly. Which is the point of the "semantic" part. Even if it has limits
like everything.

>> One issue may be that pastures and cows and even manure are much easier to
>> deal with than raging bulls (thinking :-) for most people: (***)
>> http://www-03.ibm.com/ibm/history/multimedia/think_trans.html
>> "And we must study through reading, listening, discussing, observing and
>> thinking. We must not neglect any one of those ways of study. The trouble
>> with most of us is that we fall down on the latter -- thinking -- because
>> it's hard work for people to think, And, as Dr. Nicholas Murray Butler said
>> recently, 'all of the problems of the world could be settled easily if men
>> were only willing to think.' "
>>
>> So, bulling is harder than cowing for most people. Some people are the
>> opposite, of course. :-) But as a note taped to Marty Johnson's computer
>> monitor in his office said (noticed the one time my wife and I met him):
>> http://www.isles.org/
>> "You can't plow a field by turning it over in your mind".
>
> Not true. "As I move, so I move the universe." Your mind, your brain,
> is how you are grounded with the world around you ...

Well, tell that to your garden. :-)

http://ask.yahoo.com/20030129.html
"It turns out that there may be some truth to the belief that talking to
plants helps them grow, but not for the reasons you may think. According to
ScienceNet, plants need carbon dioxide to grow, and when you talk to a
plant, you breath on it, giving it an extra infusion of CO2. However, for
this to have any real effect on your favorite fern, you would have to spend
several hours a day conversing with it in close quarters."

>> Of course, I liked to joke to my wife that did not apply to theoretical
>> mathematicians or a lot of computer programming or research. :-) But I do
>> think it applies to a big extent here -- we need both cows and bulls and we
>> already got a pasture -- even if it may not be as green as I hoped for and
>> the ones over there (Pointrel?) and there (SKDB?) look mighty greener to me
>> and to you. :-)
>
> Sounds like cultural relativism to me - "everybody is equally good,"
> as opposed to discussing the fundamental issues that we're here to
> solve in the first place. But before we get to this please see the
> content above and we'll chug through that and see what comes of it,
> then maybe back to these points.

Thanks for taking the time to make comments. I still feel we need more
specific examples to reason from -- whether articles or specific detailed
use cases.
http://en.wikipedia.org/wiki/Use_case

You're not getting a grade here. :-) This is for real. :-)

--Paul Fernhout

Paul D. Fernhout

May 8, 2008, 4:11:32 AM
to openv...@googlegroups.com
Bryan Bishop wrote:
> On Tue, May 6, 2008 at 8:22 PM, Paul D. Fernhout
> <pdfer...@kurtz-fernhout.com> wrote:
>>> As for conditional tagging, there's no good way to do it unless halo or
>>> smw have a syntax I'm not aware of. My best idea would be to make the
>
> The skdb designs provide for this 'conditional tagging' that you need
> .. you can think of it as "provides" relationships and so on (sort of
> like pointrel), and this can be used in an algebraic way, and that's
> easy, as long as the fundamental requirements can be built up; this is
> generally done through the autospec program that Ben has been writing.

Can you provide examples of sample input?

>>> property name what it is used for, with the shovel "metal
>>> part" (except preferably a little more specific) and make the value
>>> 13g of aluminum (exact chemical formula of the material (and that is
>>> a variable type I believe) would be a must), then add a new property
>
> Yep, the exact chemical formula would be its own object represented in
> the skdb package file for this particular (sub)project.

Again, can you provide examples of sample input?

>>> for alternate materials. If it needed two materials for the metal
>>> part, the best solution might be to make a new object (article? I
>>> would really prefer the greener pasture of an object oriented
>
> It's not an object-oriented language per se, but I guess that's what
> it looks like. Have you gone to check out the yaml.org documentation
> yet? The py-yaml documentation is also fantastic and worth checking
> out; lots of examples, and it should spark the synapses a bit.

So what? There are many languages. How do you think it should be used with
specific examples?

>>> language) which in turn has those two needed properties (then you can
>>> add more alternates for those parts too, if the isru plastics article
>>> taught me anything it's that the "cow" part of it never turns out to be
>>> simple).
>
> The needed parts - we call these dependencies, and that's the !! line
> provided by the yaml metadata file, with an extendable data structure
> (list) to specify further requirements and the type of requirement
> (software only? for the fundamental stability of the skdb package?)
> and so on.

Again, specific examples?

>> Interesting idea. I've been thinking on this and realized that it isn't
>> actually essential from the manufacturing web analysis point of view how
>> much of something you need, at least as a crude first approximation. If you
>> need an ounce of pure aluminum, you might as well need a ton of it, as far
>> as needing a way to produce it. As you fine tune the design, then quantities
>
> The question is where in the design process that fine-tuning happens. I suspect it's
> somewhere between autospec and computer simulation, not necessarily
> the database end of things; so like I was saying in my second to last
> email here, there needs to be a way to help developers confirm that
> their packages are making sense, and a way to make sure they can
> simulate the increasing quantities or something, or represent that
> from a registry of skdb packages (much like the new partsregistry.org
> website for biobricks.org :-)).

Again, specific examples?

Or, if not, do you, or will you, at least have/make a succinct "requirements
document"?
http://www.jiludwig.com/Template_Guidance.html

>> matter more and more as the choice of process to make aluminum might be
>> affected by the scale and frequency of the need. And certainly quantity is
>> needed for simulation. Anyway, an interesting suggestion.
>
> Simulation in the tool-chain is going to come later, as far as I can
> tell, but I am open to suggestions on that front. Simulation in Python
> files would be what we'd use; it'd be wise to investigate the already
> existing simulation frameworks out there (there are many) so that we
> can use them for individual skdb projects. The simulation Python files
> would be placed within the dot skdb file, of course. Or s/file/repo/,
> technically.

Again, specific examples?

Or, if not, do you, or will you, at least have/make a succinct "requirements
document"?
http://www.jiludwig.com/Template_Guidance.html

>>> Like you said, my ontology is very flawed, in part because we are
>>> using a smw (pasture with some randomly placed nightshade) where skdb
>>> or some other database (magical unicorn planet where nectar springs
>>> from the ground) would be better. I've said it before and I'll say it
>>> again: it's a wiki for a reason and anyone can feel free to change it.
>> Well, at least the content. But not easily the architecture. Anyway, if in
>> the end we all understand why, say, Bryan is right about ontologies and the
>> limitation of Semantic MediaWikis, then as a learning community we will be
>> that much further along IMHO.
>
> Btw, I would be interested in discussing the implementation of a
> semantic wiki on top of ikiwiki. It *should* be an easy addition to
> the source code. And if not, we get to go complain to Joey Hess. ;-)

Greener pastures.

>> We're (I hope!) one of those Engelbart outposts on the frontiers of
>> knowledge. Even if just our own. :-)
>
> Hm, "Engelbart outpost". I need to import that into my working vocabulary.
>
>>> If you wanted to, you could try using the most common syntax for "or"
>>> and hope for the best. I believe it's || or something? Might want to
>>> throw some ! and & operators into it just for good measure.
>> Maybe. Can you supply an example?
>
> Huh? Yes, please. And how does that, at all, contribute to the
> underlying architecture? (I am simply confused; please inform.) Maybe
> this is in the semantic-mediawiki-docs?

It would be article text parsed by tools that read the text.

>>> It's seeming to me like tags may, in the end, prove utterly worthless
>>> for anything but organization.
>> Interesting to hear you say that now that you are the (relative) expert here
>> on Semantic MediaWiki.
>
> I have always wanted to do personal ontologies through Wikipedia, for
> example. That's basically what my 12,000 bookmarks are:
> http://heybryan.org/bookmarks/bookmarks-old2/

Have you considered Google Notebook or something similar?
Not sure about getting the data out, though.

Yes, lots of people know about version control. The issue is more the
structure of the things you are versioning IMHO.

Though the line (intentionally) gets blurry with the Pointrel system.

>>> the wiki article to them. A little ugly, sure, but very easy. I'm just
>
> More than just wiki articles - also txt, html, zip, tar, url, etc.
> etc. A diversity of information all packaged into it, with the
> automated user interface (agx-get) to facilitate the downloading of
> that information, and so on. Remember?
>
>>> polluting cyberspace with my half baked thoughts, feel free to ignore
>>> me if it's retardedly impossible. I just took a calculus final. I'm
>>> happy to keep myself from drooling.
>
> http://heybryan.org/exp.html
> - click on the bottom link to the Eric Hunting chat re: cyberspace,
> automation, grounding the semantic web.

I don't seem to be able to make clear the difference between a functioning
(but limited) system one can use to test out ideas and one that is, as far
as I can tell, still under construction and so cannot easily be evaluated.
See:
http://en.wikipedia.org/wiki/Extreme_Programming
Still, almost every new thing is at first under construction, so it may be
great. But I don't see enough specifics to comment on them, other than a
specific choice of technologies.

Compare what you have with, say, this:
http://www.kurtz-fernhout.com/oscomak/prototype.htm
which at least outlines real screens that could (in theory) really work,
even if I would not do it exactly that way at this point. And it presents
them in the context of an approximation of real data.

>> Yes, I could see how we could make a site that essentially put Wikipedia
>> articles in a frame:
>> http://www.w3.org/TR/html4/present/frames.html
>
> Woah, what? Any self-respecting developer ... erm. No, I'll save this
> for another time (on a very tight schedule) -- basically, this isn't a
> good idea. Let's just download and import the Wikipedia articles into
> the database.

And why is that? I do know there may be some legal issues for some sites,
but that is unlikely for wikipedia. And why pay the bandwidth and disk
costs, plus get out of date?

There are lots of ways to move content around. What is the structure and
substance of the content?

>> Despite everything I've written on MediaWiki as its champion, if we try it
>> some more and it doesn't work, as to changing code on the server, to quote
>> Mystery Men:
>> http://www.adherents.com/lit/comics/MysteryMen.html
>> "Shoveler: Nothing I couldn't move around." :-)
>
> What?

Move around files, data, processes, applications, programs, etc. :-)

>> That is especially true with the new server (maybe online tomorrow, we'll
>> see, nothing firm yet). I can run long term processes on a dedicated server
>> like the JVM (and so no startup overhead). So I could put up some version
>> of, say, Jython/Pointrel code like the stuff you played with from the SVN
>> repository or on the server. Or most anything else free that exists.
>
> Re: free. Have people been thinking that I am calling them gits when I
> mention git, the free RCS (superior to SVN in various ways)? Just
> wondering. Would explain a lot.

There is always something better down the road. You're focusing on
distributed content and you picked a distributed version control system to
help you do that. Fine (I personally like darcs by the way, at least for
small things). But that is just a first step. Try actually having some
content and moving it. :-) And maybe make a screencast. That concreteness
will help a lot IMHO.

>> But I don't say that to say stop using the Semantic MediaWiki. We (mostly
>> you :-) are making good progress understanding its strengths and weaknesses,
>> and that will serve us well whether we continue to use it or export the
>> content to something new (including even back to client-side tools for
>> editing as opposed to browsing. :-)
>
> Hm, that distinction isn't necessary. Assume the ikiwiki scenario. In
> that one, the client tools to edit and manage content are really all
> over HTTP, or if using git then there's this added GIT protocol that
> could be used for massive transfers, so it's not that big of a
> boundary for crossing.

Again, details on pastures not bulls or cows.

>> Anyway, more feedback on the Wiki is always appreciated. The last time I put
>> something like OSCOMAK up, the complexity of choosing the standard but
>> complex "Zope" helped torpedo it. The great thing about standards is there
>> are so many to choose from. :-) Not sure who said that first?
>
> Probably the first guy who realized he has 400 text editors on linux
> to choose from.
>
>> (*) Fortunately, I (semi)intentionally failed a class (Physics) in college
>> in part to see what it felt like, so I think I just barely qualify for a
>> researcher career by Hans' criterion. (George Miller's was "publish
>> something as an undergraduate". :-)
>
> Moravec was a good guy to work with ... did you target him? Did you
> "know" before just wandering in? What was the deal?

No, I knew nothing of him. I just knew CMU did robotics. In the 1980s, Hans
was pretty much the only person at CMU working on indoor mobile robots
(which interested me) plus he was nice enough to let me hang out and have a
desk in one of his labs, computer access, a key, etc. I am thankful to him
for that. Of course, I brought my own robot with me and had written part of
my undergraduate thesis on robotics and AI (including Pointrel and triads);
I'm sure that all helped me get in the door. :-) Of course now every HS
student does robots, but it was rare then. That was while he was working on
his book Mind Children. I think back on that now and think what an amazing
gift that was to let me essentially play in his lab with no money going
either way (a gift mostly unappreciated by me at the time, perhaps out of a
bogus sense of entitlement by me). I helped out in a minor way a bit later
by giving tours when people from town wandered by (which I enjoyed, but it
gets old for people who work there long term). Sometimes I wonder if being
in a room with an active laser range scanner was bad for my
vision? :-) I also met and then hung out with people in Red Whittaker's
Field Robotics lab (there was some crossover in part because Hans' group had
the vision expertise). It was interesting to hang out with both groups
because Hans was talking about "mind children" essentially replacing people
(classical MIT/Minsky-ish etc. strong AI in a sense) while Red talked about
augmenting what people could do (Engelbart-ish person-in-the-loop
augmentation). It was an interesting time, especially to have a foot in both
camps. Kind of a crazy thing to do now that I look back on it. :-) There had
been money left over that my Dad had saved for my college, since I had paid
for a
chunk of it myself with a video game I wrote at around 17. (Probably one of
the few who paid for a big chunk of PU with their own earnings from before
it -- what a waste in a sense. :-)
http://www.grossmont.edu/bertdill/docs/CollegeWaste.pdf
"For the sake of argument, the two of us invented a young man whose rich
uncle gave him, in cold cash, the cost of a four-year education at any
college he chose, but the young man didn’t have to spend the money on
college. After bales of computer paper, we had our mythical student write to
his uncle: “Since you said I could spend the money foolishly if I wished, I
am going to blow it all on Princeton.”"
And, compounding my mistake in the belief of implicit college advertising,
rather than wisely investing that remaining money or buying a house with it
as a down payment, I blew what remained after college on hanging out around
CMU. :-) I had hoped to go to CS grad school there the next year (I spent my
whole life up till then doing robotics from seventh grade science fairs and
even before) but got rejected (who needs nutty Psych majors when the whole
CS world knocks on your door?). Which was pretty traumatic to me at the
time. To think I could have instead, say, hung out and worked with my Dad
and heard more about his life (who had just retired, and wanted to do stuff
together) and maybe bought a house at 20; in retrospect (now) I wished I had
done that and had the confidence to find my own way. :-( After the CS
rejection, I went back to Princeton to manage the robot lab there (a job I
had been offered when I graduated but turned down looking for greener
pastures at CMU. :-) Actually, as jobs go, the PU staff job was the best one
I ever had (and I learned 3D computer graphics there) but I did not
appreciate the job, and looking for greener pastures, I moved on after about
a year. Never found them or even ones equally as green (except maybe my life
now. :-) Instead I then wandered through a few PhD programs (IE, CE&OR, E&E)
trying to get on that academic grant treadmill. :-) But it was never a good
match long term. :-( I kept saying the same thing in all my graduate studies
applications -- that I wanted to work on self-replicating space habitats,
but few ever took that very seriously, and some (not all) of the faculty, I
feel, just saw me (correctly, unfortunately) to an extent as gullible cheap
programming help. :-( Or, alternatively, mine tailings. :-)
http://www.its.caltech.edu/~dg/crunch_art.html
http://novia.net/~pschleck/academia/
And this was back when programming was not so common a skill.
But like any relationship, no doubt the right match for anybody may be out
there. :-) But sometimes it takes forever to find it. :-( And life is what
happens in the meanwhile. :-)

--Paul Fernhout

Paul D. Fernhout

unread,
May 8, 2008, 4:19:00 AM5/8/08
to openv...@googlegroups.com

These tips are for generic sites and don't relate specifically to the use
discussed.

>> Maybe that can be done with more than just Wikipedia. That speaks to
>> something else I said on the avoidance of copyright infringement issue
>> brought by direct copying, but seems like a better idea than my
>> proposed tagged linkdumps (which I even said at the time was a stopgap
>> measure).
>
> Sorry, I don't see how that avoids the copyright issue. You're just
> making it so that the electrons are served up to the user in a certain
> way. It's going through tons of caches all over the internet as
> packets are flung back and forth and all over the place. There is no
> guarantee, and indeed it's incredibly unlikely, that the exact
> electrons or holes and voltage spikes that the server transmitted are
> in any way, shape, or form the actual data that the user receives ...
> tells you something, doesn't it?

Ultimately, you may well be right and this sentiment may prevail.

But it isn't the world I live in yet:
"FEDERAL PROSECUTION OF VIOLATIONS OF INTELLECTUAL PROPERTY RIGHTS"
http://www.justice.gov/criminal/cybercrime/CFAleghist.htm

--Paul Fernhout

Bryan Bishop

unread,
May 8, 2008, 7:54:06 AM5/8/08
to openv...@googlegroups.com, kan...@gmail.com
Re: Paul's request for specific examples; I can't provide the specific
examples because that's what I've been saying is needed in all of
Doram's "what's next" threads; I guess nobody has been spawning a
discussion on that topic for one reason or another. Anyway, the
concept of py-yaml is that it's a serialization methodology, so you
simply **write the classes** in the code, and then the yaml libraries
handle the serialization of that data into a file that can be ported
around from one place to another. So, ultimately it's a question of
what defines an object, and not just an "object" as materialistic as
'chair' but also generalized objects and generalized metadata files,
all of which have a place in this project. One of the data structures
that I was proposing a while back was a list of type BibTeX objects;
another while ago, I was suggesting a list of Unit Types, and
another while ago a graphviz dot structure (perhaps even to represent
(parseable) requirement relationships; dunno if pointrel does this).
So from these fundamental data structures that we find ourselves
needing (handle, hook, element/atom, whatever), we can build it up
from the ground up, or top-down -- whatever we find we need to make
the definition as much as we want, while *simultaneously* dumping it
into a git repo. I'll be sure to check out darcs soon enough, Paul.
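
Here is a minimal sketch of that serialization idea using PyYAML's
YAMLObject (the Fastener class and its fields are illustrative only, not an
actual skdb schema):
"""
import yaml  # the py-yaml library

# Write the class in code; the yaml library handles serialization.
class Fastener(yaml.YAMLObject):
    yaml_tag = '!fastener'
    def __init__(self, name, thread_pitch_mm, material):
        self.name = name
        self.thread_pitch_mm = thread_pitch_mm
        self.material = material

bolt = Fastener('M3 bolt', 0.5, 'steel')
text = yaml.dump(bolt)                        # portable text blob
copy = yaml.load(text, Loader=yaml.Loader)    # reconstruct it elsewhere
print('%s, %s' % (copy.name, copy.material))  # -> M3 bolt, steel
"""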

- Bryan

Doram

unread,
May 8, 2008, 2:12:20 PM5/8/08
to OpenVirgle
Bryan, I want to say something here, because I sense that everyone is
becoming frustrated.

I have been here a month, reading (to the best of my ability) the
things that everyone has said. I would say, at this point, that I rate
you a highly advanced programmer. What that means to me is that you
have been able, in your experience with computers, to climb
inside the thing, and see literally all there is to see about it - how
each part works on the most fundamental levels, and how it all works
together - and that gives you certain advantages. It makes you
unafraid of using the technology any way you see fit, because you
understand that you can literally make it do whatever you want it to,
and moreover, you know how to make it do just that. You have also been
steeped in all of the thought that has gone into computing from as far
back as your interest has taken you, and monitor as many schools of
thought on making computers do what you want, as you can find.

Now, take me. I consider myself a low advancement programmer
(especially considering some of the things I have learned in this
forum). I have actually worked with computers my whole life, but I
have never dived in. I have always danced around the edges, and made
occasional trips to the interior, but never really learned the total
ins and outs of the technology. The major reason for this is that I
have interest in many more technologies than just computers, and I
have split my interest between them, from the environment to cars to
construction to farming. I have also found value in providing a link,
for people who have no experience at all with computers, with the
wilds of the most basic interactions of computing, which is not
properly taught anywhere, even to our children, who are only
learning through sheer immersion (which is exacerbating a generational
gap that need not exist, but that is another rant).

Paul exists in a middle ground, where he has learned certain advanced
things, and made his efforts to learn the ins and outs, but otherwise
has found his niche and plays to his strengths. There is nothing wrong
with this, but at times, it does leave him almost as clueless as
someone like me, when it comes to some of the highly advanced things
that you can talk about with ease and wonder why nobody understands.

The point of all this is that there are certain bleeding edge
technologies that you talk about (and perhaps in the circles that you
travel in, they are no longer bleeding edge any more), that we haven't
heard of before, and may not understand, even with explanation,
without a total knowledge of the medium of computers, such as you
have. I have personally run into the roadblock of literally not
knowing how to find the circles you run in to be able to research the
things that you talk about. Yes, you have provided linkdumps, but even
in those cases, I do not have the framework with which to understand
them. You may be talking about agx-get, and know from interactions in
that community that it is the debian group doing work on it, so if you
want info on agx-get, you go to debian. I, not having heard of the
group, much less interacted with it, do not know of the connection,
search the linkdumps for agx-get, and find nothing that explicitly
points there. (Note, I have not as of yet tried to search the link
dumps for agx-get, this is a theoretical discussion to prove a point.)
But you see how disconnects can arise.

Now, the current conversation involves methods for advancing the
technology of the classification/linking system that we are attempting
to build around the Semantic Wiki. I have seen the term yaml and py-
yaml mentioned by you before, but not had the time or resources to
look too far into it, personally. From the comments that you have
made, I get the general feeling that it may be a useful tool, but I
have no way, without extensive research, to determine how or where it
fits into the framework that we are building. Paul may be less
confused, but any level of confusion is worth removing, and even if
you couldn't explain it in a way that I would understand, perhaps Paul
can find experience in his own life to connect with the concept and
understand more readily than I.

Paul, you need to stop every once in a while, and ask more specific
questions the like of which I cannot even frame for my lack of
knowledge. "Give examples" is not productive collaboration sometimes,
as "Examples of what" is the usual answer. Even if you are quoting the
person as a pointer of what to give examples of, there may be levels
of complexity in the quote that you are not seeing, that can confuse
the issue. Also, you need to explain when you feel out of your depth, in
very basic terms, and, if possible, explain where the gap in your
understanding lies.

Bryan, please be patient with us. On a very fundamental level, we are
not moving as fast as you. Although, from my experience with highly
advanced programmers, patience is not one of their strengths, I do ask
this of you now. Some things will require extreme amounts of
explaining to get the idea across, but the benefit of getting us all
on the same page is enormous, as you well know. Yes, we will be
looking for other highly advanced programmers to join the fold, but
even they can have specialties, and also run into the occasional
disconnect.

If we truly want this to take off, we will all have to be very patient
with each other, and with time, for time is unfortunately the only
real solution, we will all come together and learn to work as one. We
all realize how important, and how needed, this work is. That is why
we are all here. We need to keep this in mind if ever we become
frustrated, and also realize that our shared philosophy can help mend
any differences we have in methodology.

Now, hopefully, your last post will explain things in a way that Paul
can understand (I understand a little better now). Unfortunately, if
you think a separate thread is needed, you may have to be the one to
take the initiative and create it. Not being in the same circles as
you, we are not privy to all of the conversations that you monitor and
participate in every day, so a large part of your thought process is
unfortunately hidden from us, both because we do not read all the same
things that you do, and not having your experience, may not understand
it if we tried.

I understand that there is the possibility that you may feel ignored,
possibly for the fact that you are one of the younger members of the
group, but I believe that no such prejudice exists, and will proudly
say that my 30 years of technological experience pales in comparison
to your 18, and I bow to your superior wisdom in this field.

That is what the true power of this group is. The bringing together of
the most advanced minds available, for the advancement of human
endeavors. I am learning more about what you are expert in, especially
when you are more expert than me, and will defer to your learned
judgment in those areas. If there is something that I would like to
say that I see is being missed, I will say it. If there is something
that I don't understand (if I believe I am capable of understanding ;)
I will ask for an explanation. If there is something that I know more
about than others, I will not hesitate to spend as much time as it
takes to explain it, to keep productivity up, frustration down, and
progress constant. I expect the same of all of you.

Bryan Bishop

unread,
May 8, 2008, 8:59:45 PM5/8/08
to openv...@googlegroups.com, kan...@gmail.com
On Thu, May 8, 2008 at 1:12 PM, Doram <DoramB...@gmail.com> wrote:
> Bryan, I want to say something here, because I sense that everyone is
> becoming frustrated.

For the record, I don't sense that frustration at all. Anyway, I want
to start off with addressing your more meta-points, which you get
across clearly in the first part of your message, and even though
you're not asking for clarification directly on the other small issues
you bring up, I'm going to answer those anyway.

> Now, take me. I consider myself a low advancement programmer
> (especially considering some of the things I have learned in this
> forum). I have actually worked with computers my whole life, but I
> have never dived in. I have always danced around the edges, and made
> occasional trips to the interior, but never really learned the total
> ins and outs of the technology. The major reason for this is that I

Maybe the interior is the self?

> have interest in many more technologies than just computers, and I
> have split my interest between them, from the environment to cars to
> construction to farming. I have also found value in providing a link,
> for people who have no experience at all with computers, with the
> wilds of the most basic interactions of computing, which is not
> properly taught anywhere, even to our children, who are only
> learning through sheer immersion (which is exacerbating a generational
> gap that need not exist, but that is another rant).

Re: always providing a link. That policy can turn around to beat you,
in the end. As you mention, I do lots of linkdumps, but that doesn't
necessarily tell you the behavior that is necessary in order to build
those linkdumps in the first place, but on the other hand, it's also a
glimpse into where I was reading that led to the acquisition of that
behavior in the first place. Trying to make sense of the content and
information out there on the web is kind of a way to acquire the
behavioral routines, it's all 'insight' (or the same basis - whether
or not it's wisdom is a different matter) and it just 'clicks' once
you've processed all of it to a sufficient extent. Many of the
'giants' of the past talk about these sorts of methods (Feynman,
Asimov, most of the Buddhists, ...) . I took some notes:
http://heybryan.org/thinking.html

"What one person can think, so can another." - Damien Sullivan (mindstalk)

Re: basic computer skills not being taught. I am wondering how much
it's a matter of immersion, motivation, energy, attention, versus
being taught. It's interesting to put people in front of a computer
screen and seeing how long it takes for them to establish behavior
routines, isn't it? I see people sit down in front of, say, GUIs on
Windows and try to figure out what they are doing, and just kind of
hover around and don't check anything; meanwhile it would have taken
only 2 or 3 seconds to check all of the menus for words that might
vaguely be relevant to something that you are interested in, or
positioning the taskbar to the left to have a horizontal readout of
applications so that you can more quickly scan, using ALT+TAB to
rapidly switch between (especially repetitive) tasks, just minor
things like that, which build up into an inability to do anything, or
locking down *very* specific routines, which you can see in autism and
aspergers and other autism spectrum 'disorders'/enhancements. At the
same time you also see the systems-level thinking which is a good way
to approach the problem of using computer software. Like pipelining in
linux, where you stack programs together like legos, or much like
dominoes and other stacks, where a layer does some task and then
another, which is something like the structure of the brain in the
first place (V1, V2, V3, ... - about six layers of neurons for the
neocortical column, they all feed into the next layer, with
increasingly fewer axons sending out information, so ultimately it ends
up in a movement decision (or not)). Anyway, it's kind of a practice
in figuring out what the brain is (the "journey to the self" as I
vaguely mention above), while also figuring out what you want to do,
and how the hell other people were able to do these weird, fantastic
things in the past.

> Paul exists in a middle ground, where he has learned certain advanced
> things, and made his efforts to learn the ins and outs, but otherwise
> has found his niche and plays to his strengths. There is nothing wrong
> with this, but at times, it does leave him almost as clueless as
> someone like me, when it comes to some of the highly advanced things
> that you can talk about with ease and wonder why nobody understands.

Might not be cluelessness with him. It's probably that same feeling
that we all get (even me - it's why I still haven't looked up darcs,
chemoinformatics, the ML that was mentioned by Mike a few minutes ago,
brainml, or the rest of the neuroensemble site, and why I haven't been
playing with ruby or haskell or lisp) -- simply a feeling that keeps
me away, somewhat justified by saying it would take too much time to
fully process. But that's not entirely true - you always have time;
the only things that truly take time are the long, repetitive
processes, like clicking one link after another on a large list
(which, btw, you should never have to do - write a script to generate
a session file with tabs already opened, as sketched below). This is a
**general** (all too human) problem with everybody, although I would
like to meet the exceptions to this rule, so it's something that needs
support from a group, especially when I start slacking off and Doram
shows up and starts writing an email from his context, his
perspective, which helps me to see what's actually going on, what
others are seeing/missing and unasked questions that should probably
be asked.
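
Here is the kind of throwaway script I mean (the file name and links are
arbitrary): turn a list of links into one local HTML page that you open
once, instead of clicking each link by hand:
"""
# session.py - generate a 'session file' of links to open in one go.
links = ['http://heybryan.org/', 'http://www.oscomak.net/wiki/Main_Page']
f = open('session.html', 'w')
f.write('<html><body>\n')
for url in links:
    f.write('<a href="%s">%s</a><br>\n' % (url, url))
f.write('</body></html>\n')
f.close()
"""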

> The point of all this is that, there are certain bleeding edge
> technologies that you talk about (and perhaps in the circles that you
> travel in, they are no longer bleeding edge any more), that we haven't
> heard of before, and may not understand, even with explanation,

Yep, this is a general problem of the human condition, and I say that
in terms of information psychology, not in terms of scarcity-centric
socioeconomic babble. I like Zindell's formulation of the problem:
"... as Danlo watched the light radiating out from the flame of the
candles, he remembered a saying that he had once been taught: The
surfaces outside glitter with intelligible lies; the depths inside
blaze with the unintelligible truths. He rubbed the salt water from
his burning eyes, and he marvelled that the search for the truth could
leave him so empty and saddened and utterly alone." You see this same
problem creep up in other areas too, like von Neumann spheres of
exponential growth expanding through a galaxy -- running updates to
all of the probes requires increasingly more time, so it means
increasingly longer waiting periods for computations to be done to
figure out the "next move" (if it is centralized at all; I wouldn't
necessarily recommend it). As Paul said it in a few emails ago, it's
the "creative destruction" of borders, shortening the path from
central core to bleeding edge, filtering the net for interesting
tidbits, etc. Anyway, I am straying off from my intended point. You
know the general cluelessness that you're feeling? That's what I feel
when I know that I don't know how to manufacture all of these packages
that I use in my everyday life, and I know that this is true for
billions of humans worldwide -- we're all clueless, except for a few
who know how to do the manufacturing processes, so in a way, making
the gradient from "the edge" to central core (self) is what OSCOMAK is
about. To make it easier to play with those sharp, bloody edges. Of
course, humans naturally differentiate and become varied, at the same
time we see the awesome possibilities that this can catalyze. So it's
in some way a general (yet specific, per my implementation notes)
solution to a broader problem, perhaps providing another way to
consider the "towards post-scarcity" approaches that Paul and I have
been mentioning back and forth [here].

> without a total knowledge of the medium of computers, such as you
> have. I have personally run into the roadblock of literally not
> knowing how to find the circles you run in to be able to research the
> things that you talk about. Yes, you have provided linkdumps, but even

You're right that they are circles. I know that I am going in circles.
And it's a weird feeling too, to know that I've been over this same
idea many, many times, and yet nothing has come of it the first few
times. But maybe that means I'm doing some natural refactoring. As for
finding the circles that I run in, you can basically call me out on
anything that I say and I suspect that I can find you a source.
Sometimes I can even give you the exact time of day that I came across
a link for the first time, my thoughts, what projects I was doing at
the time, and so on. It could help. I assure you, I started off just
as clueless as anybody else -- perhaps more than others ... and I
don't admit to having any more of a clue than anybody else, though I
have seen some things of course that warrant mentioning, evaluation,
etc.

> in those cases, I do not have the framework with which to understand
> them. You may be talking about agx-get, and know from interactions in
> that community that it is the debian group doing work on it, so if you
> want info on agx-get, you go to debian. I, not having heard of the
> group, much less interacted with it, do not know of the connection,
> search the linkdumps for agx-get, and find nothing that explicitly
> points there. (Note, I have not as of yet tried to search the link
> dumps for agx-get, this is a theoretical discussion to prove a point.)
> But you see how disconnects can arise.

Well, first, I think I was pointing out how to get information on
agx-get and those other components sometime last month or even the
month before that. I was mentioning that I join a few guys in
#hplusroadmap on irc.freenode.net (see irchelp.org for information on
getting on IRC). Also, there's some discussion on the hplusroadmap
mailing list, accessible through biohack.sf.net or from heybryan.org,
either way.

Now on to the actual issues at hand. So, debian. It's one of the older
linux distribution projects. Linux, the kernel, doesn't entirely make
up an operating system; there need to be other tools. So most projects
then go pair up with GNU tools (gnu.org) and then add in some
configuration details, etc. Wikipedia on debian: "Debian is known for
strict adherence to the Unix and free software philosophies. Debian is
also known for its abundance of options — the current release includes
over twenty-six thousand software packages for eleven computer
architectures. These architectures range from the Intel/AMD
32-bit/64-bit architectures commonly found in personal computers to
the ARM architecture commonly found in embedded systems and the IBM
eServer zSeries mainframes. Throughout Debian's lifetime, other
distributions have taken it as a basis to develop their own,
including: Ubuntu, MEPIS, Dreamlinux, Damn Small Linux, Xandros,
Knoppix, Linspire, sidux, Kanotix, and LinEx among others. A
university's study concluded that Debian's 283 million source code
lines would cost 10 billion USA Dollars to develop by proprietary
means."

Knoppix is a good option; it's a Live CD, meaning you download the ISO
file, burn the image to the CD, put the CD in the drive, turn off the
computer, and boot from the CD -- nothing on the hard drive is edited
-- allowing you to test out knoppix (which is sufficiently similar to
debian for starters). In truth, ubuntu is the more popular live cd
these days, especially ever since they got some serious funding from a
philanthropist [I forget his name]. "Ubuntu's popularity has climbed
steadily since its 2004 release. It has been the most viewed Linux
distribution on Distrowatch.com in 2005,[4] 2006,[5] In an August 2007
survey of 38,500 visitors on DesktopLinux.com, Ubuntu was the most
popular distribution with 30.3 percent of respondents using it.[7]
Third party sites have arisen to provide Ubuntu packages outside of
the Ubuntu organization. Ubuntu was awarded the Reader Award for best
Linux distribution at the 2005 LinuxWorld Conference and Expo in
London.[107] It has been favorably reviewed in online and print
publications.[108][109][110] Ubuntu won InfoWorld's 2007 Bossie Award
for Best Open Source Client OS.[111] Mark Shuttleworth indicates that
there were at least 8 million Ubuntu users at the end of 2006.[112]
The large user-base has resulted in a large stable of non-Canonical
websites. These include general help sites like Easy Ubuntu
Linux,[113] dedicated weblogs (Ubuntu Gazette),[114] and niche sites
within the Ubuntu Linux niche itself (Ubuntu Women).[115] The year
2007 saw the online publication of the first magazine dedicated to
Ubuntu, Full Circle.[116]"

Anyway, what makes this all so popular? Social aggregation. The
apt-get command line program (which has GUI equivalents like synaptic,
aptitude (ncurses, not true gtk/qt GUI)) allows users to immediately
install software, of any name, from any developer that got the package
into the main repositories (if not, you have to add a line to install
from a certain foreign repository into /etc/apt/sources.list (a
file)). This means that usually, within learning of a new software
package, you can be using it within 10 seconds, unless it's something
horrendously large -- the firefox source code took me a few hours to
download once, it was 130 MB, despite the installation from apt-get
install taking only 30 sec (10 MB download, IIRC). Compiled code is
much more compact, of course. And firefox/mozilla is a beast anyway,
not a good example in retrospect.

So, the usual structure of open source projects is to have a revision
control system, programmers and others drop files into the repo to
play around with, and then periodically releases are made for those
who don't know how to compile programs and so on; sometimes the
packaging and compiling is automated ("nightly builds" - there are
thousands of firefox users that download new versions every night and
test them for bugs, as an example). And this is all asynchronous,
distributed, not centralized except inasmuch as we all share the same
DNS servers and thus, theoretically, can follow all of the links on
the web to find any other page (the "visible web" upon which Google
makes its 'money' (by selling its services for free? hehe)).

I have not *ever* seen a group of people that focus on getting
programmers introduced into the social circles of open source
developers, how to adopt the tools to get things done, etc. So a large
part of the issue might be social, in that sense. Dunno. Lots of
people (me?) seem to have been able to figure things out [to some
extent] by randomly clicking around the web until some internal
coherence, directionality, developed -- and it turns out I might just
know what others are talking about from time to time (hurray, all the
better).
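
To make the sources.list bit concrete: a foreign repository is one extra
line in /etc/apt/sources.list. A sketch (this particular line points at
Debian's main archive; the suite name and components vary):
"""
deb http://ftp.debian.org/debian/ etch main contrib non-free
"""
Then "apt-get update" refreshes the package index and "apt-get install
packagename" pulls the software in.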

> Now, the current conversation involves methods for advancing the
> technology of the classification/linking system that we are attempting
> to build around the Semantic Wiki. I have seen the term yaml and py-

Hm, it was my understanding that this was more than just
classification of technologies, more than simply an ontology or
hierarchical structure, and instead we are interested in those
'gradients' that I mentioned -- making it easier for people to find
their way, making it easier to manufacture things whether it's being
parsed by a computer with actuators, or parsed by a human with
actuators, in a "shared" language of sorts.

> yaml mentioned by you before, but not had the time or resources to
> look too far into it, personally. From the comments that you have
> made, I get the general feeling that it may be a useful tool, but I
> have no way, without extensive research, to determine how or where it
> fits into the framework that we are building. Paul may be less

You drop classes + methods into skdb. It's a way of formalizing what
we know about these things, instead of a "dry" semantic web, a castle
in the sky. This way, the semantic web [or at least the part that we
are working on here] is 'grounded' via instruments and tools, as well
as people to help others learn the ways, and of course to just go out
there and get projects *done*. :-)

> Paul, you need to stop every once in a while, and ask more specific
> questions the like of which I cannot even frame for my lack of
> knowledge.

Don't we all. Questions are a good way to do the Socratic method, to
see what others understand of the situation as well, but that's tact
and not needed here (can't we all just be blunt?).

> Bryan, please be patient with us. On a very fundamental level, we are
> not moving as fast as you. Although, from my experience with highly

The fact that the majority of my time is spent rotting in a public
institution makes me wonder how fast I am actually going (I suspect,
slow). But, I just want to point out that I am not impatient, no
matter how much my writing style might convey the contrary.

> advanced programmers, patience is not one of their strengths, I do ask
> this of you now. Some things will require extreme amounts of
> explaining to get the idea across, but the benefit of getting us all
> on the same page is enormous, as you well know. Yes, we will be

Yeah, it's hard to judge your context though. You say you do web
development, for example, but to me that could mean so many things,
from automated content management systems to opening up notepad on
Friday nights to hardcode some XHTML+CSS for some loony client that
wants impossible deadlines and impossible "out of this
world"-zany-style features. Or it could just mean frontpage; the users
of all of these different methods tend to be in completely different
worlds, and uniting those contexts is fundamentally *hard*.

> take the initiative and create it. Not being in the same circles as
> you, we are not privy to all of the conversations that you monitor and
> participate in every day, so a large part of your thought process is
> unfortunately hidden from us, both because we do not read all the same
> things that you do, and not having your experience, may not understand
> it if we tried.

http://heybryan.org/mediawiki/index.php/2008-05-07
http://heybryan.org/~bbishop/cgi-bin/blosxom.cgi
but soon I'll be dumping my email on to the site too in an attempt to
help others keep up.

- Bryan, whose power adapter arrived while typing out this email, so
he's going to go try to get re-acquainted with the ol' laptop.

Paul D. Fernhout

unread,
May 9, 2008, 11:41:15 AM5/9/08
to openv...@googlegroups.com
Doram wrote:
> Of course, I can't remember where I posted any of that, but I can
> return with quotes later, if necessary. (I don't feel like reiterating
> that much, and I don't feel like researching much either. I am tired
> today. My son is sick, and so am I. |P Blah. Head cold...)

Let me see, first you find the time to post insightful and informative
things (like on web design) as a full-time stay-at-home Dad, then I try that
(full-time now) and find I can't post anything without sacrificing sleep,
and now you're posting insightful things while you and your son are sick, and
in another reply also post with lost sleep... I just can't top that without
someone getting really hurt. :-( So to defuse this stay-at-home Dad arms
race, I hereby concede the title of most dedicated Project Virgle
stay-at-home Dad to you, with all rights and honors that go with it. :-)

Of course, in an ideal world, in the workforce, someone would say, "Come to
work again with a sick family (or sick spouse) at home and you're fired!".
:-) Rather than the opposite as it is today. :-( But I'm just kidding to
make a social commentary point on the pathetic nature of even the best of US
corporate employment policies for parents: :-)
http://www.dol.gov/esa/whd/fmla/
relative to much of the rest of the world. :-)
http://www.eurofound.europa.eu/emire/FRANCE/SICKLEAVE-FR.htm
Also:
"Mother's Day: More Than Candy And Flowers, Working Parents Need Paid
Time-Off"
http://www.childpolicyintl.org/issuebrief/issuebrief5.htm
"Once again, the United States is an outlier compared to other countries,
both industrialized and developing, when it comes to policies that support
parents' ability to be at home to care for their babies. Around the world,
statutory childbirth-related leaves, both paid and unpaid, average about a
year and a half. Some 128 countries currently provide paid and job-protected
childbirth-related leave. The average paid leave is for 16 weeks, which
includes pre- and post-birth time off. In some countries leave is mandatory
and in most cases, paid leave is a maternity leave. In nearly half the
countries, the paid leave replaces the full wage (or the maximum covered by
social insurance). This policy affords mothers, and sometimes fathers, time
to spend with their children at a critical time and reduces parental
economic anxieties and pressures."
http://blogs.payscale.com/ask_dr_salary/2007/04/cons_of_a_worki.html
"Until that paid sick leave law (or one like it) passes, choosing between
caring for a sick child or going to work will continue to be one of the
cons of a working mom (as echoed by our very own Job Mom). Of the working
moms surveyed by Working Mother Magazine, 70% felt guilty for sending their
child to school or daycare while ill, 48.5% felt stressed and 31.2% felt
frustrated. Almost 65% of the moms said that when one family member gets
sick, other family members are likely to follow. There was one bright spot,
54% say they have some flexibility to work from home."

Maybe we can fix that in a future OpenVirgle society? Which is why I feel
social commentary is not too off topic here -- it is both our values and our
technological desires that shape the future. :-)

Anyway, of course I understand how a little internet time can, through
distraction, help relieve the suffering of being mired in mucus. :-) Get
well soon, we need you around here. I have some forum-related ideas (with a
semantic tagging twist) I will eventually make time to post and want your
opinions on. :-)

--Paul Fernhout

Paul D. Fernhout

unread,
May 9, 2008, 10:53:50 PM5/9/08
to openv...@googlegroups.com
Doram-

Interesting related reading:
http://www.paulgraham.com/gh.html
"The problem is not so much the day to day management. Really good hackers
are practically self-managing. The problem is, if you're not a hacker, you
can't tell who the good hackers are. A similar problem explains why American
cars are so ugly. I call it the design paradox. You might think that you
could make your products beautiful just by hiring a great designer to design
them. But if you yourself don't have good taste, how are you going to
recognize a good designer? By definition you can't tell from his portfolio.
And you can't go by the awards he's won or the jobs he's had, because in
design, as in most fields, those tend to be driven by fashion and
schmoozing, with actual ability a distant third. There's no way around it:
you can't manage a process intended to produce beautiful things without
knowing what beautiful is. American cars are ugly because American car
companies are run by people with bad taste."

See also the movie:
"Seven Samurai"
http://en.wikipedia.org/wiki/Seven_Samurai
A big problem the farmers face in hiring Samurai to defend their village
from bandits is that none of the farmers knows what makes a good Samurai.

That's the problem we "farmers" always have, isn't it? :-) Even about
physical stuff, like, say roofing or plumbing. I just don't know what makes,
say, a good roofer or a good plumber (he says, having relubricated a shower
diverter valve this morning because the fixture he bought and had placed in
the wall by a plumber keeps getting hard to turn, even after installing a
replacement kit from the fixture company.)

On farming and OpenVirgle, by the way, my wife just planted two dwarf
Montmorency cherry trees, a bit too near the house IMHO but we will see;
she's usually right on stuff like this, and I'm usually wrong. :-) We like
this concept:
http://en.wikipedia.org/wiki/Permaculture
"The word permaculture, coined by Australians Bill Mollison and David
Holmgren during the 1970s, is a portmanteau of permanent agriculture as well
as permanent culture. Through a series of publications, Mollison, Holmgren
and their associates documented an approach to designing human settlements,
in particular the development of perennial agricultural systems that mimic
the structure and interrelationship found in natural ecologies. Permaculture
design principles extend from the position that "The only ethical decision
is to take responsibility for our own existence and that of our children"
(Mollison, 1990). The intent was that, by rapidly training individuals in a
core set of design principles, those individuals could become designers of
their own environments and able to build increasingly self-sufficient human
settlements — ones that reduce society's reliance on industrial systems of
production and distribution that Mollison identified as fundamentally and
systematically destroying the earth's ecosystems."

Maybe some good ideas for OpenVirgle in those books too, especially the
philosophy on design. A way for you to bring your farming strength to bear
fruit for OpenVirgle. :-)

I'm sure we all appreciate your working towards sustaining productive group
collaboration. And thanks for taking the time for writing up the advice.
You're certainly right that asking good questions is hard, perhaps the
hardest thing a person can do in some ways. I'll try harder to ask better
questions in this and other things.

Hope your long and informative post means that cold is clearing up for your
family.

--Paul Fernhout
Pham Trinli-wannabe and Programming Janitor (which just about sums up my
career anyway. :-)
