Manufacturing standards and metadata (was Re: thingiverse)

Paul D. Fernhout

Nov 19, 2008, 1:06:19 AM
to openmanu...@googlegroups.com
Smári McCarthy wrote:
> I think we need to push for, as Bryan says, a standard description for
> "open hardware designs".. I haven't seen the VOICED XML format (will
> check in a bit) but I think it may be similar to the format I was
> tossing around for OpenCAD.

As I thought about OSCOMAK over the years, and as more and more related
projects popped up (the apparently-now-defunct ThinkCycle by MIT in
particular, but others too) it began to seem clearer to me that some
standard for representing manufacturing information would be a good thing so
people could share it in a common format.

And Bryan says something similar to what you say as well (including in this
thread) whether with SKDB or his other work.

Over the past few years, it seems like there has been a proliferation of now
dozens of groups interested in open manufacturing, plus lots of companies
that might use such things in house. I wonder if we will see one common
repository any time soon like with Wikipedia (which itself eclipsed other
works). I can think of reasons both why it could happen (it's getting
easier) and why it wouldn't (proprietary information and security aspects).
But, rather than everything being on one server, I can more readily expect
that everyone might settle on one standard (or a small number of them) for
encoding that information, wherever it is stored. It's kind of like how
there isn't just one web server for all web pages, but instead we have the
HTML standard (and others like HTTP) and lots of web servers serving HTML
pages with HTTP and search engines like Google to crawl them. And now we are
starting to get the semantic web version of that, and OWL (mentioned below).

Of course, it's been said that the great thing about standards is, there are
so many to choose from. :-)

One guideline that sometimes helps in choosing:
"Principle of Least Power"
http://www.w3.org/DesignIssues/Principles.html#PLP

And Bryan is right to point to the problems with ontologies that don't
agree. By the way, for those new to the word:
http://en.wikipedia.org/wiki/Ontology
"Traditionally listed as a part of the major branch of philosophy known as
metaphysics, ontology deals with questions concerning what entities exist or
can be said to exist, and how such entities can be grouped, related within a
hierarchy, and subdivided according to similarities and differences."
An ontology is in some sense both a model of part of the world and an
agreement to talk about that model with common terms for similar parts of
it. So, a simple ontology for an automotive domain might subdivide a car
arbitrarily into "engine", "frame", "drive train", and "suspension" (it can
be arbitrary and incomplete, and that's when problems can come in :-) and
then further say all engines can only have certain types of components (and
then problems also come in when you have to deal with different types of
engines etc.). More flexible systems may take the abstraction one level up
and may talk about defining names for parts you might want to talk about
later in terms of more basic concepts. Some examples of ontologies from:
http://en.wikipedia.org/wiki/Ontology_(computer_science)
"Examples of published ontologies ... Gellish English dictionary, an
ontology that includes a dictionary and taxonomy that includes an upper
ontology and a lower ontology that focusses on industrial and business
applications in engineering, technology and procurement. ... IDEAS Group A
formal ontology for enterprise architecture being developed by the
Australian, Canadian, UK and U.S. Defence Depts ... WordNet Lexical
reference system"

I am really not that knowledgeable about all the manufacturing standards out
there and the assumptions they make about what is important to model. I wish
I was. Maybe we need a comprehensive list somewhere? Or maybe there is one
(or Bryan's work is a good step towards that)?

The biggest problem with XML as a social/technical movement is that most
people assumed a standard format for encoding hierarchical data would
somehow solve the ontology problem of all that data being interpreted
different ways by different people with different assumptions and different
needs (and of course, XML did not solve that problem, as XML by itself is
mostly about encoding hierarchical data in a human readable way, not saying
what that data hierarchy means). Some XML-related folks have thought some
about those ontology issues on some topics, but that is not thinking about
XML so much as thinking in XML about usages of XML (and supporting
technologies), and their research would apply equally well to other
situations and textual representation systems. Personally, I've never been a
big XML fan as a syntax. But don't tell these people: :-)
"XML at IBM Research"
http://domino.research.ibm.com/comm/research_projects.nsf/pages/xml.index.html
"IBM XSL Formatting Objects Composer"
http://www.alphaworks.ibm.com/tech/xfc/
I recognize XML has its merits as a human readable interchange format, of
course, in the absence of something better:
http://codemines.blogspot.com/2006/08/now-they-have-two-problems.html
And I've certainly been guilty of inflicting a few binary formats on the
world and myself, and I regret that now. :-( See:
http://c2.com/cgi/wiki?PowerOfPlainText
I've gone back and forth over the years on how to encode data for the
Pointrel Data Repository System -- trying variations of binary, some use of
XML, and variations of custom plain text (where it is right now).
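
As a small illustration of the point above that XML pins down syntax but not
meaning, here are two made-up fragments describing the same fastener; both
are well-formed, but a program written for one vocabulary learns nothing from
the other (element names are invented for the example):

import xml.etree.ElementTree as ET

doc_a = '<part><name>M3 bolt</name><quantity>4</quantity></part>'
doc_b = '<component label="bolt, M3" count="4"/>'

def quantity_from_vocab_a(xml_text):
    # Only understands the first vocabulary; returns None otherwise.
    root = ET.fromstring(xml_text)
    node = root.find("quantity")
    return int(node.text) if node is not None else None

print(quantity_from_vocab_a(doc_a))  # 4
print(quantity_from_vocab_a(doc_b))  # None -- same facts, different ontology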

As for an organization mandated to do standards about manufacturing in the
USA, NIST (National Institute of Standards and
Technology) has done some work on this:
http://www.mel.nist.gov/psl/
"The Process Specification Language (PSL) defines a neutral representation
for manufacturing processes that supports automated reasoning. Process data
is used throughout the life cycle of a product, from early indications of
manufacturing process flagged during design, through process planning,
validation, production scheduling and control. In addition, the notion of
process also underlies the entire manufacturing cycle, coordinating the
workflow within engineering and shop floor manufacturing."

But I don't really understand their work yet in detail (just not enough
time). It seems like there is a lot of abstraction, but I've been on their
mailing list for years and not seen much activity (or maybe I dropped off it
somehow?). They also list related projects with links:
http://www.mel.nist.gov/psl/projects.html
"""
* EPFL Timed Multi-Level Petri Nets for Integrated Process and Job Shop
Production Planning
* <I-N-OVA> (Constraint Model of Activity)
* IP3S (Integrated Process Planning/Production Scheduling)
* IPPD (Shared Integrated Product/Process Development)
* IPPI (Integrated Product Processing Initiative)
* JTF Core Plan Representation
* MSD (Manufacturing Simulation Driver) project
* Oak Ridge National Laboratories (ORNL)
* Ontology.org
* PIF (Process Interchange Format)
* SPAR - Shared Planning and Activity Representation
* TOVE (Toronto Virtual Enterprise)
* WfMC (Workflow Management Coalition)
"""
(I tried four at random and the links were all broken. :-( )

From the following link,
http://www.mel.nist.gov/psl/people.html
it seems that NIST has got one or two people working on PSL now, which seems
pitiful for something that could transform how trillions of dollars of
business are done annually -- but that's really Congress' fault, if
anyone's. And as the PSL group says on their site:
http://www.mel.nist.gov/psl/yourrole.html
"The reader is encouraged to play an integral role in the definition of the
Process Specification Language by providing comments and suggestions as the
project progresses. It is our intention to create this language to suit the
needs and desires of you, our consumer. We can only do this with your feedback."

Every system of knowledge representation emphasizes different things. For
OSCOMAK, I was more interested in manufacturing webs (and so the dependency
between processes) in order to reason about manufacturing ecologies that
could bootstrap or sustain themselves than I was interested in the details
of processes themselves. However, most people interested in making things
want the details about making something specific, and can fill in a lot of
gaps with their general knowledge, and are less interested in how that thing
fits together with other things. So, we see on, say, the Makezine project
site (hosted on Instructables now?)
http://makezine.com/projects/
http://www.instructables.com/
or other similar sites
http://www.wikihow.com/Main-Page
essentially a photo journal where people tell how to make something, but
there is not much metadata there to connect each project with other projects.

Anyway, a key point is that what you focus on representing has a lot to do
with your goals. And here is a conflict of goals -- most people want to make
specific things (or tell others how to make specific things), and a few
people want to reason about how lots of things are made. Ideally, some good
standard could support both well, but I wonder if it will.

For me, NIST's PSL seems like it is very complex and abstract because it
wants to talk about what is going on in a finer-grained way than what I was
interested in. I just want to know what processes depend on each other in a
general way (not very time specific) defined as "manufacturing recipes", and
also list what artifacts are needed to be made to support human life
comfortably (on Earth or in space) so I can then figure out all the
manufacturing requirements as an extended infrastructure. I'm not saying the
other things NIST is interested in aren't very useful in other situations,
of course (especially in detailed simulation down the road), and I may well
not yet appreciate the importance of some of it.
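
A rough sketch of the kind of "manufacturing recipe" dependency reasoning I
mean, assuming Python and completely made-up recipe names (this is an
illustration of the idea, not OSCOMAK code):

# Each recipe names the artifacts it consumes; reasoning about a
# "manufacturing ecology" then becomes graph traversal over recipes.
RECIPES = {
    "bread": ["flour", "oven"],
    "flour": ["wheat", "mill"],
    "oven": ["bricks", "clay"],
    "mill": ["bricks", "steel"],
}

def requirements(artifact, seen=None):
    """All artifacts (direct and indirect) needed to make `artifact`."""
    seen = set() if seen is None else seen
    for needed in RECIPES.get(artifact, []):
        if needed not in seen:
            seen.add(needed)
            requirements(needed, seen)
    return seen

print(sorted(requirements("bread")))
# Anything printed here that has no recipe of its own (wheat, bricks,
# clay, steel) marks where the ecology is not yet self-sustaining.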

I'm also not 100% sure on the licensing of NIST's work. As a government
entity, the work should be in the public domain -- except they seem to have
hired contractors (including universities) to work on it, which under the
Bayh-Dole Act may mean part or all of it is proprietary. Of course that
might defeat the purpose of the standard as far as broad adoption, but I am
reminded of a story I heard once where some very expensive and hard-to-make
piece of a space probe for NASA with a mirror finish surface (required to do
its job) was delivered to a NASA facility, and, following NASA procedures to
tag everything that came in, a tag was riveted to the perfect surface
rendering the item useless. :-( In the case of NIST & PSL, they seem to have
a link to ISO referencing how PSL is standardized,
http://www.mel.nist.gov/psl/faq.html#d0e116
but ISO makes money by selling copies of its standards (a crazy business
model in these information age days, I feel, especially for a standards
organization presumably interested in widespread adoption of its standards).
See:
http://www.iso.org/iso/copyright.htm
"All ISO publications are protected by copyright. Therefore and unless
otherwise specified, no part of an ISO publication may be reproduced or
utilized in any form or by any means, electronic or mechanical, including
photocopying, microfilm, scanning, without permission in writing from the
publisher."

There is a lot of legacy insanity from the pre-internet age. :-( Although
I'm sure the internet age is producing its own share.

Note that in practice the way the ISO standards work is that a company buys a
printed copy of the standard for an employee or contractor who sits down and
implements something that conforms to it. That is how the ISO model of
development works, as far as I understand it (having been one of those people
who got the joy of implementing something according to one of their binary
standards once. :-) I now agree with Alan Kay when he says that any standard
with more than three lines is ambiguous -- which is why you need defining
free and open source reference implementations. :-)

Anyway, I guess within the bounds of copyright you can reverse engineer some
of these standards, but it is an extra hurdle.

In the past, NASA adopted the "STEP" standard:
"NASA adopts STEP standard", Manufacturing Engineering, Sep 1999
http://findarticles.com/p/articles/mi_qa3618/is_/ai_n8852496
"STEP, the Standard for the Exchange of Product model data, was adopted by
the National Aeronautics and Space Administration (NASA) when it required
that all its computer-aided engineering, design, and manufacturing systems
have STEP-compliant tools for data interchange."

But there we have another ISO (proprietary) standard:
http://en.wikipedia.org/wiki/ISO_10303
"STEP is developed and maintained by the ISO technical committee TC 184,
Technical Industrial automation systems and integration, sub-committee SC4
Industrial data. Like other ISO and IEC standards STEP is copyright by ISO
and is not freely available."

This part from that Wikipedia article is also of interest:
"""
Future of STEP
Despite the many successes of STEP there is still a question in user's minds
about the speed of its development and deployment [5]. Many critics point
out correctly that the XML standards for e-commerce are being developed much
more quickly. To match with these mappings from STEP data models into XML on
the basis of DTD and later XML-Schema were defined. Another rather new
approach is to use the Semantic Web based on the Web Ontology Language for
exchanging product information. Fundamentally, product model data is
different from other kinds of e-commerce data such as invoices, receipts,
etc. The traditional method for communicating product model information is
to make a drawing and the traditional method to communicate an invoice is to
make a form. When you make a drawing or 3D model you need to define
information with many subtle and complex relationships and this makes the
STEP data exchange problem more difficult.
"""

That reference to OWL links to:
http://en.wikipedia.org/wiki/Web_Ontology_Language
"The Web Ontology Language (OWL) is a family of knowledge representation
languages for authoring ontologies, and is endorsed by the World Wide Web
Consortium.[1] This family of languages is based on two (largely, but not
entirely, compatible) semantics: OWL DL and OWL Lite semantics are based on
Description Logics,[2] which have attractive and well-understood
computational properties, while OWL Full uses a novel semantic model
intended to provide compatibility with RDF Schema. OWL ontologies are most
commonly serialized using RDF/XML syntax. OWL is considered one of the
fundamental technologies underpinning the Semantic Web, and has attracted
both academic and commercial interest. In October 2007, a new W3C working
group[3] was started to extend OWL with several new features as proposed in
the OWL 1.1 member submission.[4] This new version, called OWL 2, has
already found its way into semantic editors such as Protégé and semantic
reasoners such as Pellet[5] and FaCT++[6]"

And then RDF brings me back to the pre-WordNet Pointrel System. Sigh. :-)
"On college and space habitats"
http://groups.google.com/group/openvirgle/msg/231e63e966e932df

Anyway, if you want to be mainstream, it sounds like stuff related to OWL and
RDF/XML is where it is at. Whether I'll be big enough to go there myself is
a different story. :-)

From this document:
"OWL Web Ontology Language Guide"
http://www.w3.org/TR/owl-guide/
"""
The World Wide Web as it is currently constituted resembles a poorly mapped
geography. Our insight into the documents and capabilities available are
based on keyword searches, abetted by clever use of document connectivity
and usage patterns. The sheer mass of this data is unmanageable without
powerful tool support. In order to map this terrain more precisely,
computational agents require machine-readable descriptions of the content
and capabilities of Web accessible resources. These descriptions must be in
addition to the human-readable versions of that information.

The OWL Web Ontology Language is intended to provide a language that can be
used to describe the classes and relations between them that are inherent in
Web documents and applications.

This document demonstrates the use of the OWL language to

1. formalize a domain by defining classes and properties of those classes,
2. define individuals and assert properties about them, and
3. reason about these classes and individuals to the degree permitted by
the formal semantics of the OWL language.

The sections are organized to present an incremental definition of a set of
classes, properties and individuals, beginning with the fundamentals and
proceeding to more complex language components.
"""

Which, in our case, is more a description of the open manufacturing
standardization problem than a specific solution. :-)

They do include an example of a Winery, which is a manufacturing operation
of a sort, so that is progress. :-) Anyway, it's a document I have only
skimmed myself and should read in depth in any case.
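
As a toy illustration of those three numbered steps (classes, individuals,
limited reasoning), here is a sketch in plain Python triples rather than real
OWL/RDF syntax; the class and individual names are mine, loosely echoing the
guide's winery flavor:

# (subject, predicate, object) triples, OWL-style but much simplified.
triples = [
    ("Winery", "subClassOf", "ManufacturingOperation"),   # 1. classes
    ("ManufacturingOperation", "subClassOf", "Thing"),
    ("ChateauExample", "type", "Winery"),                 # 2. an individual
]

def types_of(individual):
    """3. Trivial reasoning: infer all classes via subClassOf closure."""
    found = {o for s, p, o in triples if s == individual and p == "type"}
    changed = True
    while changed:
        changed = False
        for s, p, o in triples:
            if p == "subClassOf" and s in found and o not in found:
                found.add(o)
                changed = True
    return found

print(types_of("ChateauExample"))
# {'Winery', 'ManufacturingOperation', 'Thing'}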

--Paul Fernhout

Bryan Bishop

Nov 19, 2008, 3:57:21 AM
to openmanu...@googlegroups.com, kan...@gmail.com
On 11/19/08, Paul D. Fernhout <pdfer...@kurtz-fernhout.com> wrote:
> And Bryan says something similar to what you say as well (including in this
> thread) whether with SKDB or his other work.

<snip> <-- Stuff that Paul already knows I agree with here -->

> Every system of knowledge representation emphasizes different things. For
> OSCOMAK, I was more interested in manufacturing webs (and so the dependency
> between processes) in order to reason about manufacturing ecologies that
> could bootstrap or sustain themselves than I was interested in the details
> of processes themselves. However, most people interested in making things
> want the details about making something specific, and can fill in a lot of

First, there's a reason for this emphasis on how to make things; this
is the backbone of your requirements/dependency web for Freitas-style
closure engineering and recipes of requirements from one process to
another. But as you go on to mention, in practice this ends up being
silly photoblog things. And these can't even be downloaded all at once
without a web crawler anyway, which is going in the very wrong
direction.

((I used to get into some arguments about whether or not to allow
"products" and "items" into SKDB/OSCOMAK because processes would keep
it all pure and everything, an 'object' being a final true realworld
grounded instantiation of a list or network-applied topology of
intermingling processes, but somehow I've been convinced of otherwise.
I don't presently remember how this happened, but it might be due to
the late hour of the night here.))

> gaps with their general knowledge, and are less interested in how that thing
> fits together with other things. So, we see on, say, the Makezine project
> site (hosted on Instructables now?)
> http://makezine.com/projects/
> http://www.instructables.com/
> or other similar sites
> http://www.wikihow.com/Main-Page
> essentially a photo journal where people tell how to make something, but
> there is not much metadata there to connect each project with other
> projects.

There are many peculiar observations that we can make about the "Maker
community", the one fueled by the commercial powerhouse of the
O'Reilly Media Empire, but I'm not going to rehash them here. Eric
might have a list, and I know Ben's been complaining about it for a
while, and I've met my fair share of people with magic bullet
mentalities quickly approaching the danger zone. I don't know what's
wrong with these sectors of the web; so close to the mark but at the
same time missing it completely. How could this be?

> Anyway, a key point is that what you focus on representing has a lot to do
> with your goals. And here is a conflict of goals -- most people want to make
> specific things (or tell others how to make specific things), and a few
> people want to reason about how lots of things are made. Ideally, some good
> standard could support both well, but I wonder if it will.

I doubt the solution would be some compromise between those, or any,
two extremes. In the end the packaging will probably be in the form of
some sort of mental paradigm shift where you don't think in terms of
magic bullets for designs but instead some integrative approach
overall that helps the misdirected in some way, while also promoting
the use of tools that make easily shared designs, and everybody lives
happily ever after. Oh, and documentation isn't a chore. Yeah, add
that to the wish list too. Joking aside, I still strongly doubt that
compromise is the solution.

- Bryan

Smári McCarthy

Nov 19, 2008, 6:10:01 AM
to openmanu...@googlegroups.com, kan...@gmail.com

The file format I suggested was much much simpler.

<?xml version="1.0" ?>
<project>
<name>Acrylic chandelier</name>
<description>
A nice laser-cuttable chandelier.
</description>
<version>1</version>
<hash>MD5 sum of the project</hash>
<website>Location of further design information</website>
<authors>
<author>Me</author>
<author>Myself...</author>
</authors>
<files>
<file desc="Fixes to the bulb fixture"fixment.svg"/>
<file desc="The sides of the chandelier" url="sides.svg"/>
<file desc="Pattern for a side" url="pattern1.png"/>
<file desc="Pattern for a side" url="pattern2.png"/>
<file desc="Pattern for a side" url="pattern3.png"/>
<file desc="Pattern for a side" url="pattern4.png"/>
</files>
</project>


Further meta-data can be added to the format later, but this currently
gives enough information to a program designed to present the data to a
user in an orderly fashion. The only important point here is the fact
that a typical "project" consists of more than one "file" and requires
some descriptions.

So you provide a folder or zip file containing all the files along with
a file called "catalog.xml" that contains the metadata.

KISS.

- Smári
--
Smári McCarthy
sm...@yaxic.org http://smari.yaxic.org
(+354) 662 2701 - "Technology is about people"

Bryan Bishop

Nov 19, 2008, 7:53:03 AM
to Smári McCarthy, openmanu...@googlegroups.com, kan...@gmail.com

Yes, but what's important is the standard set of files that we expect
to see referenced in something like the <FILES> list. CAD? XML? YAML?
And on top of that, what standard package of tools should each one
open with at a minimum or be packaged standard with fabuntu? This is
in a sense what I'm doing with some microtools for managing
repositories, but of course it's heavily format dependent, though the
basic set of tools that are used for file management clearly need to
be extended to repository management, such as simple operations for --
say -- confirming that two CAD files contain the same information
within, if they use cross-references, for instance. Just a small
example. I encourage everyone to take a strong point from debian and
how they do it. Here's how they package software, an example:

http://en.wikipedia.org/wiki/Deb_(file_format)
http://debcreator.cmsoft.net/
details: http://tldp.org/HOWTO/Debian-Binary-Package-Building-HOWTO/x60.html
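
As one tiny example of such a microtool, here is a naive sketch (my own,
hypothetical; real CAD equivalence would need format-aware parsing and
cross-reference resolution, which a plain content hash cannot see):

import hashlib

def normalized_digest(path):
    """Hash a file's content with line endings and trailing blanks removed."""
    with open(path, "rb") as f:
        data = b"\n".join(line.rstrip() for line in f.read().splitlines())
    return hashlib.sha256(data).hexdigest()

def probably_identical(path_a, path_b):
    # Only a first-pass check: identical digests mean identical normalized
    # bytes, but differing digests do NOT prove the designs differ.
    return normalized_digest(path_a) == normalized_digest(path_b)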

Anyway, for the record, here's the VOICED system repository XML format
that I discourage because there are some improvements to be done to it:
http://heybryan.org/~bbishop/docs/repo/

<!DOCTYPE RepositoryXML>
<RepositorySystem>
<System SystemDescription="consumer"
SystemContributingInstitution="" SystemType="empty" SystemName="salton
electric wok" >
<Artifact ArtifactName="lid assembly" ArtifactCBName="none"
ArtifactIsAssembly="1" ArtifactManufacturer=""
ArtifactModificationDate="" ArtifactCreationDate="2008-07-23"
ArtifactDescription="" ArtifactParent="salton wok"
ArtifactTrademark="" ArtifactReleaseDate="" ArtifactQty="1" >
<ArtifactFile ArtifactFileType="1"
ArtifactFileExtension="8312" >lidassembly1-FILE</ArtifactFile>
<ArtifactImage>lidassembly-IMAGE</ArtifactImage>
<CreatorInfo CreatorFirstName="" CreatorLastName=""
CreatorEmail="" CreatorAffiliation="" />
</Artifact>
<Artifact ArtifactName="internal" ArtifactCBName="empty"
ArtifactIsAssembly="0" ArtifactManufacturer="empty"
ArtifactModificationDate="" ArtifactCreationDate=""
ArtifactDescription="empty" ArtifactParent=""
ArtifactTrademark="empty" ArtifactReleaseDate="" ArtifactQty="0" >
<CreatorInfo CreatorFirstName="" CreatorLastName=""
CreatorEmail="" CreatorAffiliation="" />
</Artifact>
<Artifact ArtifactName="lid handle" ArtifactCBName="handle"
ArtifactIsAssembly="0" ArtifactManufacturer=""
ArtifactModificationDate="2008-06-24"
ArtifactCreationDate="2008-07-23" ArtifactDescription=""
ArtifactParent="lid assembly" ArtifactTrademark=""
ArtifactReleaseDate="2000-01-01" ArtifactQty="1" >
<Subfunction SubIsSupporting="0" SubInputArtifact="empty"
SubOutputArtifact="empty" SubSubfunction="import" />
<CreatorInfo CreatorFirstName="" CreatorLastName=""
CreatorEmail="" CreatorAffiliation="" />
</Artifact>
<Artifact ArtifactName="external" ArtifactCBName="empty"
ArtifactIsAssembly="0" ArtifactManufacturer="empty"
ArtifactModificationDate="" ArtifactCreationDate=""
ArtifactDescription="empty" ArtifactParent=""
ArtifactTrademark="empty" ArtifactReleaseDate="" ArtifactQty="0" >
<CreatorInfo CreatorFirstName="" CreatorLastName=""
CreatorEmail="" CreatorAffiliation="" />
</Artifact>
</System>
<lidassembly1-FILE><![CDATA[NbFJ1o5FGU1v1My6MoGoKzFM1E6bI88znd32cEwlEIcR3ql38Bt7y4yw==]]></lidassembly1-FILE>
<lidassembly-IMAGE><![CDATA[AAAQhXicnmpx6uhooAgv9891opkxNVALACNbFJ1o5FGU1v1My6MoGoKzFM1E6bI88znd32cEwlEIcR3ql38Bt7y4yw==]]></lidassembly-IMAGE>
</RepositorySystem>

I discourage immediately using it in a similar way because it's easier if you
just honestly put the image and data files in separate directories, and
because some of the formats are based on proprietary software
installations (you have no understanding of how annoying this makes it
for me to work with it at all); and also because the FS and CFG and
assembly graphs aren't properly cross-referenced anyway, which is some
very important metadata.

- Bryan
http://heybryan.org/
1 512 203 0507

Paul D. Fernhout

Nov 19, 2008, 10:27:14 AM
to openmanu...@googlegroups.com
Smári-

Thanks for sharing this format. It can be very hard to make things very
simple sometimes.

We have a few issues being discussed here, if I may try to summarize.

* Information for people to use to make things vs. what a completely
automated system needs to know (if we had one).

* Information about how to make one thing vs. information that links things
and processes together in a web for supply chain analysis.

* Other things I care about having to do with the social or technical
process of contributing, like licensing, versions, transactions, etc. :-)

We also need to distinguish how you exchange information (XML is good for
that) and how you archive it (other things may be better, whether database
records or flat files in different formats).

Here is a format I am currently playing with, just to say something like
"Object1 has color red1, oh, I mean, red2; Oops, no it doesn't, it should be
red3. And I meant a slightly different predicate each time, too." :-)

============ file: test002_output.pointrel
Signature: Pointrel20081028.0.1
pointrel-archive://7d7d9c3b-0ceb-433c-8ed8-698112c76cf8

[pointrel-transaction://892ed4cd-c95d-4c75-94fc-574f7dc23708
timestamp: 2008-11-13T21:39:03.755Z
author: Test Author <te...@example.com>
license: GNU:FDL
license: GNU:LGPL
license: CC:by-sa

@pointrel-triple://287bf781-053b-4d62-ac2a-86ac7b4b5469
timestamp: 2008-11-13T21:39:03.741Z

~
~ object1
~ color
~ red1
~

]pointrel-transaction://892ed4cd-c95d-4c75-94fc-574f7dc23708

[pointrel-transaction://f4bdb0b0-aa56-47a3-b466-fe97b41c3cb1
timestamp: 2008-11-13T21:39:03.757Z
author: Test Author <te...@example.com>
license: GNU:FDL
license: GNU:LGPL
license: CC:by-sa

@pointrel-triple://5458a086-ba32-41ae-ad9a-8dc8dbdb6670
timestamp: 2008-11-13T21:39:03.756Z

~
~ object1
~ has color
~ red2
~

]pointrel-transaction://f4bdb0b0-aa56-47a3-b466-fe97b41c3cb1

[pointrel-transaction://c2a4097c-0ffb-4aa8-9261-65588cce8782
timestamp: 2008-11-13T21:39:03.757Z
author: Test Author <te...@example.com>
license: GNU:FDL
license: GNU:LGPL
license: CC:by-sa

@pointrel-triple://803ac376-de9a-4ee0-b3ad-e919de530816
timestamp: 2008-11-13T21:39:03.757Z

~
~ object1
~ has
color
~ red3
~

]pointrel-transaction://c2a4097c-0ffb-4aa8-9261-65588cce8782
==========================

Importing your file would (in theory) come out as a lot of triples asserting
each of the pieces of information, all nested inside one transaction. Other
people's changes to it would be in new transactions, ideally all signed by a
public key somehow. I've been thinking I should write an XML importer for
easy comparison (and to maybe work with Bryan's latest stuff.) The key issue
here is refinement (or branching) of a design as a process to be supported,
including looking at old designs.
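
A sketch of what that importer might do, assuming Python and Smári's element
names; this is not working Pointrel code, just the shape of the idea (one
transaction wrapping one triple per fact):

import uuid
import xml.etree.ElementTree as ET
from datetime import datetime, timezone

def catalog_to_transaction(xml_text, author="Test Author <te...@example.com>"):
    """Turn a catalog.xml into one transaction full of (s, p, o) triples."""
    project = ET.fromstring(xml_text)
    subject = "project:" + str(uuid.uuid4())
    triples = [(subject, "name", project.findtext("name")),
               (subject, "version", project.findtext("version"))]
    for f in project.findall("files/file"):
        triples.append((subject, "has file", f.get("url")))
        triples.append((f.get("url"), "description", f.get("desc")))
    return {"transaction": "pointrel-transaction://" + str(uuid.uuid4()),
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "author": author,
            "triples": triples}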

Anyway, just ideas I am experimenting with right now to give a sense of
other aspects related to storing information that might be of interest; your
format is clearer to most people in practice right now, and people would
just make differently named files, or put them in differently named
subdirectories, or use SVN or another file based version control system like
Git or Bazaar.

OWL and RDF/XML is the mainstream way to go (for the cutting edge. :-)
And sticking that data in a Semantic MediaWiki or other database backed
website (with versions and authentication) is also a standard. (What OSCOMAK
does at the moment.)
http://www.oscomak.net/wiki/Main_Page

I care more about some other aspects of all that than some people, including
an emphasis of a different aspect of simplicity, but I've been doing this so
long that technology like OWL and RDF is eclipsing the ideas and
implementations I've long played with, and it's hard to let go of my own
idiosyncratic approach. :-) Of course, what is idiosyncratic today may be a
standard tomorrow. :-) But that is a long shot. One person doing a little
part-time is pretty small potatoes compared to dozens or hundreds of people
at places like IBM Research thinking about this full time, going to
conferences, embedding the concepts like XML, RDF, and OWL into commercial
and free products, etc. If you want a safe bet, this is the kind of wagon train
to hitch up with:
"The XDB (eXtensible DataBase)"
http://www.webdav.org/
http://infolab.stanford.edu/~maluf/papers/xdb_ipg_ggf03.pdf
"The XDB (eXtensible DataBase) has been released as open source software by
the NASA Ames Research Center. ... XDB-IPG has been used
to create a powerful set of novel information management systems for a
variety of scientific and engineering applications."

I'm reminded of an analogy I thought about first in the context of thinking
about an organization like the New Alchemy Institute, which in the 1970s was
one of the major places working on alternative technology. The effort of a
small organization (or even just one person) in a dark time can be thought
of like a light bulb turned on in your house in the middle of the night to
use the kitchen. It seems blindingly brilliant, so bright you can't even
look at it, it seems so out of place. But it illuminates the room and you
can go about your activities in the kitchen with your back turned to it. :-)
Then you go back to bed, forgetting to turn the light off, and when you wake
up with the sun, you go back to the kitchen and don't even notice the light
is on anymore because there is so much sunlight streaming in the windows.
(Or you may not notice if it is off, either, if it burns out in the
meanwhile. :-) What we are seeing here is the continuing dawn of open
manufacturing (and ideally with an alternative sustainable twist), even
though that process has been going on indirectly for decades (EF
Schumacher's Small is Beautiful is connected to it too), even if there were
some bright lights here and there many years ago (especially people like
Vannevar Bush, Theodore Sturgeon, and William Kent, and many others, who all
inspired me indirectly or directly).
http://en.wikipedia.org/wiki/Memex
http://www.p2pfoundation.net/Skills_of_Xanadu?title=Skills_of_Xanadu
http://www.bkent.net/Doc/darxrp.htm
As I am sure they inspired many other people too.

My wife likes to say big ideas are like whales, if you are lucky you get to
swim with them for a time, but you can't own them.

And I should add, maybe sometimes the whale swims out to sea on the whale's
own to do bigger things without you, and then do you swim out after that
whale, swim back to shore, or look for another whale? :-)

Anyway, there is a lot happening now. I doubt any one person's work will be
critical at this point, but we can all contribute in various ways to
collectively raising the temperature surrounding open manufacturing to the
boiling point. That's why I like a mission statement like (inspired by the
Chaordic Commons vision):
http://www.chaordic.org/
"OSCOMAK [or the open manufacturing list, which is busy superseding it :-)]
supports playful learning communities of individuals and groups chaordically
building free and open source knowledge, tools, and simulations which lay
the groundwork for humanity's sustainable development..."

People here should feel free to swipe that mission statement and rework it
for this list.

The key idea being "playful learning communities". The most important issue
is we are all learning here and sharing ideas. If we progress that way, I
think we will get somewhere interesting at least. And even if we don't, it
will be fun. :-)

--Paul Fernhout

Bryan Bishop

Nov 19, 2008, 1:31:49 PM
to openmanu...@googlegroups.com, kan...@gmail.com
On 11/19/08, Bryan Bishop <kan...@gmail.com> wrote:
> On 11/19/08, Paul D. Fernhout <pdfer...@kurtz-fernhout.com> wrote:
> > of processes themselves. However, most people interested in making things
> > want the details about making something specific, and can fill in a lot of
>
> First, there's a reason for this emphasis on how to make things; this
> is the backbone of your requirements/dependency web for Freitas-style
> closure engineering and recipes of requirements from one process to
> another. But as you go on to mention, in practice this ends up being
> silly photoblog things. And these can't even be downloaded all at once
> without a web crawler anyway, which is going in the very wrong
> direction.
>
> ((I used to get into some arguments about whether or not to allow
> "products" and "items" into SKDB/OSCOMAK because processes would keep
> it all pure and everything, an 'object' being a final true realworld
> grounded instantiation of a list or network-applied topology of
> intermingling processes, but somehow I've been convinced of otherwise.
> I don't presently remember how this happened, but it might be due to
> the late hour of the night here.))

You can visualize this as me trying to sweep out all of the pesky
objects piling up on my hard drives -- it's hard to keep the processes
clean and the epistemology separated (focus on products v. focus on
processes that go into making the final product that pops out of the
tubes in the end), and in practice this hasn't happened yet, but that
doesn't mean it shouldn't.

> > gaps with their general knowledge, and are less interested in how that
> > thing fits together with other things. So, we see on, say, the Makezine project
> > site (hosted on Instructables now?)
> > http://makezine.com/projects/
> > http://www.instructables.com/
> > or other similar sites
> > http://www.wikihow.com/Main-Page
> > essentially a photo journal where people tell how to make something, but
> > there is not much metadata there to connect each project with other
> > projects.

Actually, I put some more thought into this after my complaining in my
previous response. I still strongly doubt that the issue is that there
needs to be a compromise between the two extremes presented above.
What I notice instead is that the nature of the constraints on the
contexts in which these people are making things is such that makers
don't extract much value out of digital representations of designs
other than the fact that it didn't arrive by the physical postal
service. For the majority of the people making amateur rockets on
instructables, making 3D lego printers out of legos, or extracting DNA
from a shot glass, the design information is going to be read by human
eyes in the end anyway, so why spice it up in some intermediate format
that serves to be pedantic and obsessive compulsive in its
requirements?

In programming circles this is obviously not the
case because we can very clearly point a newbie to a directory of tens
of thousands of files that all contain the same information but
expressed in different ways, and then see him become enlightened about
the nature of the file formatting issue and how his program is largely
useless when it's not supplied well-formatted information. This isn't
so much the case with the contraptions and designs presented in
instructables, though there are many times where there are minute
details that become important (and apparent when you just sit there
scratching your head in a daze) but these are somehow treated like
zazen, "acquired by experience" (or, in some users' cases, not
acquired at all).

Fablabs help this situation out a little bit, but
again not everyone has a reprap, mill, lathe, laser cutter, or steel
mill sitting in their garage wired up to a computer to play around
with, and we're right back to the classic "bootstrapping problem"
where in order to enter information into the repository, you need to
be consistent with respect to previous information, which is difficult
when there is no previous information, especially with no physical
device drivers for starting the manipulations of matter and energy
into different forms. There are long-term solutions to this that we
all know about, such as making tools and setting up as many shops for
as many people as possible and so on, but this doesn't address our
immediate concerns with the awesome potential of the maker communities
and their data entry habits.

Heck, even if you threw up the VOICED
repositoryEntry app (which, btw, is online and GPL'd) and threw in
information about projects, in the end I guarantee you that you
will see people reverting back to their same ways as on the current
instructional websites, howstuffworks, etc., again largely because
it's a shorter path than being pedantic about data entry etc. Overall
I largely doubt that this reversion is because these people genuinely
are uninterested in process ecologies and manufacturing hardware;
rather, it's because that's just how human design behavior or "get
something done" behavior works. They already have their own internal
stories going around in their heads about all of the different
components that they are considering for a design, and these
"photoblogs", as Paul's calling them, are just easy ways to write out
the stories. While it's great that we're doing community anthropology
and archiving all of these stories, this "follow the shortest path"
thing is a bit of a scam because overall it doesn't promote progress,
even if it seems like progress in the near term. This is in comparison
to chess masters who have to examine large possibility spaces for
their next moves, mentally pacing themselves down 5 or 6 depths into
the tree to find a winning move. To complete the analogy, that's kind
of where we are here: individual stories on instructables are like
moving pawns all over the place; meanwhile, what we need are some
chessmasters to chug through the designs, but nobody's playing on the
chess board (repository formats). Ouch.

Anyway, there are ways to show that there
is value in digital design representations to the maker communities
even without them having their super-awesome make-anything shop in
their garage. The most obvious method is the "kit method", where
repository entries are treated more like a kit, where materials and
items involved in it are automatically linked up to ordering systems
across the web for the constituent parts, while also wired up to the
local user's inventory system at home ("one click ordering" which also
updates your computational inventory (and if we have an open source
barcode scanner somewhere on the net that would be useful too)). The
problem with this is that design representations then involve more
linking to "click here to order products" rather than more design
information.

To add some incentive for people to use this system,
maybe you could also make some silly "impact factor" metrics about how
many people a person's design could be made by, knowing the
inventories of the userbase that might be interested in that sort of
device or something, so somebody with a 100% impact factor would be
using materials that are common to all inventories, and somebody with
a low impact factor is doing something pretty weird. Unfortunately
there's not much value in this sort of metric, especially since we do
not want to emphasize individual designers too much (since it
shouldn't matter whether A or B was the one to package up a set of
designs into a tar file). It would, however, maybe conceivably allow
for programs to help figure out alternatives and replacements to
experiment with -- similar components of other designs that do similar
subfunctions. Mixing and matching designs like this is 'dangerous'
because you don't get guarantees, but there are ways that you can
harvest collective experience and start figuring out general rules for
"good replacements and substitutions". This would then be some extra
incentive to prefer your designs in a repository format: you have more
of a chance of finding an equivalent way of implementing the
functionality.

This is one of the parts of the projects that I am
doing in the lab, though the catch is that it requires good data to
work with -- it kind of allows you to statistically sort out "modules"
of interest in design graphs for the ways things wire together and
then figure that it may or may not be interesting, and because of the
"function structure" / black box diagram substrate it then lets you
see if it's even reasonable to hook up a certain subgraph in place of
another (i.e., if it has 4 inputs of snot, and five outputs of blot,
and you just need something that does electrical -> electrical, it's
obviously not qualified) (I've been looking back at origami for this,
since origami designs are easily written into a computer format:)

fold(A,B)
crease latest edge
fold(T,R)
etc..

^ I guess I'm imagining some sort of pop-the-stack architecture there,
where each fold adds a new surface and a new set of labeled points
that allow for new creases to be made. An instance of some good
origami would be the Theo Jansen mechanism, and of course many
aesthetic origami projects. The old systems like webEOS, webOrigami
and EOS were supposedly doing something like this, and Asem hasn't
replied to me yet as to whether or not it still exists. It was an AJAX
frontend + Mathematica notebook backend that did computational origami
folding into different shapes and structures from a programming
language input, based somewhat on graph grammars.
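
A toy version of that pop-the-stack idea might look something like this (my
own sketch, not webEOS/EOS; point labels and operations are invented):

# Each fold pushes a new "state" onto a stack: the crease just made plus
# whatever labeled points it introduces, so later creases can refer to them.
class OrigamiSheet:
    def __init__(self):
        self.stack = []                      # history of folds, newest last
        self.points = {"A", "B", "C", "D"}   # corners of the square

    def fold(self, p, q):
        new_point = "P%d" % len(self.stack)  # label born from this crease
        self.points.add(new_point)
        self.stack.append(("fold", p, q, new_point))
        return new_point

    def unfold(self):
        return self.stack.pop()              # pop-the-stack: undo latest fold

sheet = OrigamiSheet()
m = sheet.fold("A", "B")
sheet.fold(m, "C")
print(sheet.stack)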

Anyway, it turns out that I have other incentives to do inventory
software as well. It's one of those things that you say you'll do for
all of your boxes of stuff, but then you never do. The other incentive
is actually from my synthetic biology software work, hopefully to help
automate the inventory transfer of agents from one lab to another, such
as sending specialized
chemicals to those who need them when they want to implement certain
plasmids, or just solving routing problems in general. I'm sure
there's already software for the routing stuff out there, though not
for inventory management wired up to web services for requesting
various genes etc. This would be for users to use some of my programs
to automatically generate biological circuits, and then using the same
"kit" concept, somehow come across all of the necessary materials to
make the little critters so.

So it's still the right direction, though it's a slightly different
twist on things - it's not so much top-down enforcement, but bottom-up
enforcement of our interests being represented by choosing to flock
more to designs that are well represented and usable for those of us
without every single possible item lying around, and other various
things that push us closer to the paths that we're interested in
seeing happening.
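
For what it's worth, the "kit method" and the impact-factor metric sketched
above could start out very small; here is a rough illustration (field names
and data are invented, assuming Python):

def missing_parts(design_bom, inventory):
    """Kit view: what a given user would still need to order."""
    return {part: qty - inventory.get(part, 0)
            for part, qty in design_bom.items()
            if inventory.get(part, 0) < qty}

def impact_factor(design_bom, inventories):
    """Fraction of known user inventories that could build the design as-is."""
    able = sum(1 for inv in inventories if not missing_parts(design_bom, inv))
    return able / len(inventories) if inventories else 0.0

design = {"m3 bolt": 8, "acrylic sheet": 1, "led": 4}
users = [{"m3 bolt": 20, "acrylic sheet": 2, "led": 10},
         {"m3 bolt": 2, "led": 4}]
print(missing_parts(design, users[1]))   # what user 2 would have to order
print(impact_factor(design, users))      # 0.5 in this made-up example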

Paul D. Fernhout

Nov 19, 2008, 4:45:05 PM
to openmanu...@googlegroups.com
Bryan-

In partial answer to why "how to make it" photo blogs don't have much data,
here is a pessimistic assessment of human psychology and metadata: :-)
"Metacrap: Putting the torch to seven straw-men of the meta-utopia"
http://www.well.com/~doctorow/metacrap.htm
"""
# 2. The problems
* 2.1 People lie
* 2.2 People are lazy
* 2.3 People are stupid
* 2.4 Mission: Impossible -- know thyself
* 2.5 Schemas aren't neutral
* 2.6 Metrics influence results
* 2.7 There's more than one way to describe something

... A world of exhaustive, reliable metadata would be a utopia. It's also a
pipe-dream, founded on self-delusion, nerd hubris and hysterically inflated
market opportunities. ... Do we throw out metadata, then? Of course not.
Metadata can be quite useful, if taken with a sufficiently large pinch of
salt. The meta-utopia will never come into being, but metadata is often a
good means of making rough assumptions about the information that floats
through the Internet. ...

"""

I think a more complete (and charitable) answer is one that I think I read
somewhere by Eric Hunting in his LUF pages, that people are still working
out how to do this as a community.

An alternative to metadata is of course better (smarter) search engines. :-)

But the idea of the semantic web suggests that search engines are not good
enough by themselves.
http://en.wikipedia.org/wiki/Semantic_Web
"Humans are capable of using the Web to carry out tasks such as finding the
Finnish word for "monkey", reserving a library book, and searching for a low
price on a DVD. However, a computer cannot accomplish the same tasks without
human direction because web pages are designed to be read by people, not
machines. The semantic web is a vision of information that is understandable
by computers, so that they can perform more of the tedious work involved in
finding, sharing and combining information on the web."

So here we see a basic conflict. Cory Doctorow says it ain't gonna happen in
a pure way, but Tim Berners-Lee says it should happen and that's presumably
how he spends his time.

And then there are people in the practical middle:
"The Need for Creating Tag Standards"
http://neosmart.net/blog/2007/the-need-for-creating-tag-standards/
"Web 2.0, blogging, and tags all go together, hand-in-hand. However, while
RPC standards exist for blogs and the pinheads boggle over the true
definition of a "blog," no one has a cast-in-iron standard for tags.
Depending on where you go and who you ask, tags are implemented differently,
and even defined in their own unique way. Even more importantly, tags were
meant to be universal and compatible: a medium of sharing and conveying info
across the internet — the very embodiment of a semantic web. Unfortunately,
they're not. Far from it, tags create more discord and confusion than they
do minimize it. "

In general I'd agree that it would be best not to compromise on an approach
(make one thing vs. look at a manufacturing ecology), and it should somehow
support both detailed descriptions on how to make things and dependencies
(like your emphasis with SKDB building on the Debian packaging dependencies
model, although watch out for circular dependencies, of course. :-) A
prolific inventor once told me that average engineering compromises between
two goals (example, cheap vs. durable, like older US cars?), but the best
inventive engineering is not a compromise -- it figures out how to satisfy
both goals at once (example, cheap *and* durable, like newer Japanese cars? :-).
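
On the "watch out for circular dependencies" aside: in a Debian-style
dependency map that is at least easy to check for mechanically. A minimal
sketch (package names invented, assuming Python; a naive depth-first search,
fine for small maps):

DEPENDS = {"lathe": ["steel", "motor"],
           "motor": ["steel", "lathe"],   # oops: lathe <-> motor cycle
           "steel": []}

def find_cycle(deps):
    """Return one dependency cycle as a list of names, or None."""
    def visit(node, path):
        if node in path:
            return path[path.index(node):] + [node]
        for child in deps.get(node, []):
            cycle = visit(child, path + [node])
            if cycle:
                return cycle
        return None
    for start in deps:
        cycle = visit(start, [])
        if cycle:
            return cycle
    return None

print(find_cycle(DEPENDS))   # ['lathe', 'motor', 'lathe']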

--Paul Fernhout

marc fawzi

Nov 19, 2008, 5:12:14 PM
to openmanu...@googlegroups.com
Hello all,

I was invited to this group by a friend just recently and I've been listening to the discussions with a good deal of curiosity and interest.

This post below triggered my interest in jumping head first into the conversation.

I think labeling people "stupid, lazy, liars" is itself a problem, even when the intent is pragmatic.

<<
    * 2.1 People lie
    * 2.2 People are lazy
    * 2.3 People are stupid
    * 2.4 Mission: Impossible -- know thyself
>>

I would rephrase as:

2.1 People state things that are inconsistent with what I know or what I will know in the future
2.2 People work slower and/or with less effort than I do when it comes to certain tasks (they may work faster and/or with more effort on other tasks, including ones I don't know about)
2.3 People do things that are inconsistent with what I consider a smart way
2.4 Mission: Possible -- empathize with others as well as with yourself ... only through empathy do you get to know yourself or others.

Sorry for this, but I'm dating a therapist. ;-)

Carry on ...

Marc
http://evolvingtrends.wordpress.com/

Paul D. Fernhout

Nov 19, 2008, 9:27:58 PM
to openmanu...@googlegroups.com
Marc-

Your rewording is much better, agreed.

Cory Doctorow can be controversial -- it probably increases advertising
revenue or something. :-) I thought about saying something when I linked it,
but I did not. I did add myself "I think a more complete (and charitable)
answer is [one] that ...".

But in any case, I found it interesting to find such extreme opinions on
metadata (and human nature) between Doctorow and Berners-Lee.

But, to affirm that Doctorow's position on metadata has a little more
currency right now than Berners-Lee's, it's amazing how easily you can find
information using Google even without widespread semantic tagging: :-)
http://www.google.com/search?hl=en&q=dating+a+therapist

Of course Google has a few tricks that it does, including perhaps now using
some of WordNet's ontology in various ways. WordNet was developed by my
undergraduate advisor (Psychology), by the way. See:
http://groups.google.com/group/openvirgle/msg/231e63e966e932df
"That advisor was George A. Miller;
http://en.wikipedia.org/wiki/George_Armitage_Miller
My 1985 UG senior thesis work ("Why Intelligence: Object, Evolution,
Stability and Model") with him may have very slightly help inspired Wordnet
http://en.wikipedia.org/wiki/WordNet
and so even more indirectly Simpli and Google AdSense:
http://en.wikipedia.org/wiki/AdSense
in the sense of my enthusiastically talking to him a lot about networks of
concepts for AI I wanted to put on a hard disk for a Commodore PET using
Pointrel triads. That hard disk had eaten a document George was writing in
his office on a deadline so he let me have it in the lab to play with
(rather than throw it out) -- that file incident was probably the only
time I heard him swear. :-) Of course, the actual idea and all the hard
work and the psycholinguistic design behind WordNet is all his. ...
Being around young people can be inspiring in many ways that are not
"plagiarism". Young people bring a hopefulness which can be infectious --
even if in retrospect my plan to build a human level AI using a Commodore
PET and an unreliable 10MB harddisk was absurd. George's brilliance lay in
maybe later thinking, "What AI-ish thing can I build with all I know and the
tools at hand?" He may well have done WordNet whether he had met me in my
enthusiastic unreasonableness or not. Still, it is often the annoying
seemingly ignorant questions of youth that make us old geezers think. :-) "

Of course, many other people George was in communication with at the time
talked about concept networks back then (in an "if only" way) -- including
probably Allen Newell and Herbert Simon, and Marvin Minsky who was another
student of George's. George told me one of the reasons he built WordNet was
essentially just to get everyone to shut up about how amazing such a net of
concepts would be if they had it. :-) I think maybe he was politely
including me in there too. :-) He was not convinced when he started it
whether it would really be useful for much more than furthering the science
of psycholinguistics and being a better dictionary, if I recall correctly.
(I think that even back then George felt there was a lot more to human
intelligence than just pushing symbols around like AI types back then,
including me, liked to talk about. It's easy for AI types who know so little
about human psychology to latch onto one part of intelligence and think it
defines the whole. Even Marvin Minsky has come around to talking about human
minds being able to juggle multiple simultaneous representations and picking
the best one for a task -- which is another aspect of representing
manufacturing metadata that people may eventually want to explore.)

And many people still do talk about concept networks about manufacturing in
an "if only" way, obviously, like me and Bryan in this thread. :-) If only
we had the metadata, then designing self-replicating space habitats would be
easy. :-) But in any case, I feel we need to do for manufacturing what
George Miller did for words in dictionaries (and WordNet goes far beyond
what a dictionary does, because it was informed by George's understanding of
the human mind). At the very least, then we can either use it or shut up
about what it might do for us. :-) But what George had that made WordNet
special is a deep understanding of psycholinguistics from a long career (he
started WordNet around age 65). Ultimately, a great system for manufacturing
may take someone with the same good grasp of manufacturing technology to get
around the chicken/egg problem Bryan raises, of wanting to put in data that
fits together with other data, but there is no data in there yet. Of the
people I've read here, Eric Hunting certainly seems to write like he has a
good grasp of these manufacturing issues. Certainly *I* don't have a good
grasp of manufacturing on a practical basis. I know some about AI-type
things, but little about manufacturing content. And success may well best
come from someone who loves manufacturing techniques for *themselves* as
much as George Miller loves words for *themselves*.

From what I later learned about biology as an Ecology and Evolution grad
student (an indirect way to study networks that make things :-), I do feel
George Miller put something problematical about metaphysics in WordNet. Or
rather, he hard coded aspects of a metaphysics into WordNet when it should
have been above it somehow -- for example some species interrelationships
are encoded in the hierarchy of nouns. Take the ever controversial Platypus,
for example:
http://wordnet.princeton.edu/perl/webwn?o2=&o0=1&o7=&o5=&o1=1&o6=&o4=&o3=&r=1&s=platypus&i=3&h=1100#c
There's something that bothers me about stuff only known to a few
biologists, and still somewhat contested, being in there. It seems to me
like a taxonomic hierarchy (or alternative hierarchies!) should be off to
the side somehow. And human minds are flexible enough to handle that -- the
notion that an item can be classified multiple ways. But nothing is perfect.
WordNet has certainly already made a few people fortunes (including at
Google). It is also a credit to George that he got WordNet out as a form of
"free and open source" content and software back in the 1980s. Maybe I
learned something from his example eventually, since, having made some money
with a video game I wrote, I was gung-ho on proprietary software and patents
back then. :-)

William Kent's ideas are more flexible as far as supporting alternate
goal-dependent metaphysics,
http://www.bkent.net/Doc/darxrp.htm
and the more I think on it, the more I think that when I visited an IBM
facility's library for an afternoon in 1980 as a teenager interested in AI,
I just *must* have seen Bill Kent's book there, since it had just recently
been published and he worked at IBM, and the Pointrel system is so similar
to what he proposes with his ROSE/STAR system in many ways, or at least was
in earlier versions (he didn't have transactions in the model). I did not
actually read that book consciously until I stumbled across it in the late
1990s in the Iowa State library, so maybe I never did see it back then; I
can't remember for sure. Often people may get a glimpse of something
somewhere and then forget about it. So it is possible that in some sense
WordNet is another legacy of Bill Kent's "Data & Reality" book indirectly.

I did a high school independent project on AI in 10th grade based on the
Winston AI book from MIT, which I got by chance when my father took me to
the Trenton State College computer fair:
http://people.csail.mit.edu/phw/Books/AIBACK.HTML
http://en.wikipedia.org/wiki/Trenton_Computer_Festival
along with a book on Pascal which was a revelation to me as a BASIC and
assembler programmer back then. So, some of those ideas from Winston's AI
book may be mixed up in there too in my work. :-)

And maybe even an interaction with David Gelernter (or perhaps a colleague)
http://en.wikipedia.org/wiki/David_Gelernter
as we overlapped when he was at SUNY Stony Brook as a grad student and I was
a freshman there. It's so hard to recall precisely, but I vaguely remember
having a discussion with someone on the SUNY SB campus whom I met just once
in passing while going to see some computer equipment, and who was arguing
for n-tuples as a better abstraction when I was talking about liking triples
(sometimes too much abstraction can be a problem, too :-). I was trying to
get access to probably the only unused old mini-computer on campus to do AI
work, but I gave up when it seemed like it only took paper tape, as my
Commodore PET was much easier to use.
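
(For anyone who hasn't run into the distinction, here is a minimal sketch --
purely illustrative Python, not actual Pointrel or Linda code -- of the same
fact stored as a triple, as a wider n-tuple, and then as triples again by
naming the statement itself:)

    # Illustrative sketch only -- not actual Pointrel or Linda code.
    # The same fact as a triple versus a wider n-tuple.
    triple = ("platypus", "is-a", "monotreme")   # subject, relation, object
    ntuple = ("platypus", "is-a", "monotreme", "some-source", "some-date")

    # An n-tuple can always be decomposed into triples by giving the
    # statement itself a name and hanging the extra slots off that name:
    statement_id = "fact-0001"
    as_triples = [
        (statement_id, "subject",  "platypus"),
        (statement_id, "relation", "is-a"),
        (statement_id, "object",   "monotreme"),
        (statement_id, "source",   "some-source"),
        (statement_id, "date",     "some-date"),
    ]
    print(triple, ntuple, as_triples)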

Anyway, it's all interesting to me as a glimpse of how people perhaps spark
off each other, and how people take an idea from one source and move it in
different ways, and then the idea may move on through other people in new
ways (the science of memetics).
http://en.wikipedia.org/wiki/Memetics
And those days a quarter century ago are so hazy to me now that it isn't
always clear to me anymore in what ways information was flowing back then.
But Bill Kent's clearly written 1978 "Data and Reality" book certainly
predates my work on the Pointrel triple system, George Miller's WordNet, and
David Gelernter's Linda tuplespace work.

I'm sure the same thing is happening even now with "open manufacturing"
ideas bouncing around and transforming based on people's own knowledge and
interests and inclinations. It's hard to follow the bouncing ball, and maybe
not even worth it in the end. It depends on your priorities.

Of course, Bill Kent chucked his career in data processing for technical
writing and then to go take photographs of the American Southwest. :-)
http://www.bkent.net/
His 1978 "Data & Reality" book never got the attention it deserved, and IMHO
still deserves even a few years after his death, and even with RDF and OWL
and so on. Bill Kent really grasped a layer of abstraction of reality which
may even be psychologically hazardous for some. :-) It can be hard to hold
onto your roots (whatever they are) while working at such an abstract level
of understanding reality (that's the kind of thing extropians apparently
talk about, like with Transcranial Magnetic Stimulation possibly shutting
off some of the brain's filtering and categorizing systems temporarily).
http://heybryan.org/~bbishop/docs/how-much-would-it-cost-to-become-transhuman.html
http://brainmeta.com/forum/index.php?showtopic=6048
http://en.wikipedia.org/wiki/Transcranial_magnetic_stimulation
http://www.huge-entity.com/2006/06/autism-reality-and-flooding-of-minds.html
From the last link: "By controlling our perceptions our brains allow new
landscapes of thought to be painted as inner worlds. The twin savants in
Sacks' tale may be able to dance amongst the base forms of existence, but to
them a poem in 5 - 7 - 5 structure is devoid of beauty, a musical
composition in 2/4 time is a mere collection of patterns which they revel in
factoring. Perhaps in restricting the objective world the human mind is
capable of enriching the inner subjective self, and in doing so, places
forever hidden from view the original source of our greatest accomplishments."
(Makes me wonder if accidentally getting hit over the head with a baseball
bat on a playground when I was about nine or ten made a difference in my
mental life. :-)

But Bill Kent managed to survive his foray into abstraction, and he left us
a great book and other writings and photographs, and in the end he died of
natural causes visiting family and residing in a place he loved.
http://www.bkent.net/obituary.htm
That's a life pretty well lived, if you ask me.

From:
http://en.wikipedia.org/wiki/The_Gambler_(song)
"The gambler then mentions that the "secret to survivin' is knowing what to
throw away and knowing what to keep" and that "the best you can hope for is
to die in your sleep"."

(I'm glad to say I had the pleasure of corresponding with Bill Kent a few
times in the years before he died, in relation to the Pointrel system and
other things.)

I'm sure I've given your therapist friend more than enough "metadata" about
me in this one post to spend a good long time analyzing me. :-) Which just
shows how valuable metadata can be. :-) And why the OWL specification has a
section on privacy.
http://www.w3.org/TR/owl-ref/#Privacy

--Paul Fernhout

>> http://www.well.com/~doctorow/metacrap.htm

marc fawzi

unread,
Nov 20, 2008, 12:50:37 AM11/20/08
to openmanu...@googlegroups.com
Hi Paul,

I read your response with deep interest, then my ADD (or rogue interrupt handler) kicked in halfway through, with the following question:

Assume that the following is true:

1. Our ability to perceive 'space' (abstractly speaking) gives us the possibility of having an orientation 
2. ---->Our ability to perceive 'time' gives us certainty about the state of our orientation in space
3. ----------->Our ability to know the state of our orientation in space and our ability to select a new state constrained by the geometry of spacetime allow us to generate non-random patterns 
4. ------------------>Our ability to generate patterns allows us to store patterns (if you can move and change direction then you can be moved and your direction can be changed)
5. ----------------------->The ability to store and generate non-random patterns gives us the ability to compute.
6.------------------------------>Our ability to compute gives us our 'knowing.'

My question is: is there a non-computational kind of 'knowing' ?

Penrose said there is, and he called it Objective Reduction (a spacetime geometry that is non-computable, which he called 'blisters', which gives rise to non-computable knowing, per the hierarchy I've devised above...)

Yikes.

Thank you for sharing your story, which is probably the biggest, non-random, multidimensional pattern I've been able to store in my mental spacetime, in over 10 years. 
 
I can tell you're a master programmer

:-)

Paul D. Fernhout

unread,
Nov 20, 2008, 9:10:09 PM11/20/08
to openmanu...@googlegroups.com
I feel that there are a variety of models you can make of the world or of
consciousness (or of manufacturing. :-) Still, "the map is not the
territory", meaning all these models are at best approximations, some more
useful at the moment than others. I don't know if people can have
significant experience unmediated by models (other than of the operant
conditioning kind).

What I feel these models have in common is that they flow out of things like
assumptions, values, priorities, goals (including desired patterns to
preserve), choice of acceptable reasoning tools (formal logic, intuition,
consilience, pattern recognition or pattern completion, and so on), and
choice of acceptable ways to acquire trusted information (like from
scientific publications, family, experience, sensation, revelation, some
variety of direct spiritual experiences, some variety of faith, media,
memory, etc.). These issues interrelate in various ways (for example, your
values might lead you to prefer reasoning tools of one sort which might lead
to new insights of a certain type that might eventually affect your values,
etc.)

These are the same issues that relate to the metadata choices, or even to
design choices between, say, SKDB and OSCOMAK. Bryan might value using
industry standards like git more than I for sharing information, while I
might value license trails more than he does. Even if the world adopted one
or the other system worldwide for those specific reasons, it doesn't make
one "right" and one "wrong" -- unless you've already assumed that what
matters to define right or wrong is popularity, which is another assumption.
(Of course, in practice, the world may well adopt neither, but again, that
does not make them "failures" depending on how we define our goals -- like
if we value personal learning or the sense of flow of building them.)
Bertrand Russell wrote in one of his essays something to the effect that at
the core of every philosopher's work is one unrecognized assumption. :-)

I'm purposely sidestepping the deeper issue you raise. :-) But on that theme
of the emergence of computation, you might like this webcomic I just saw
from a link on the Extropian chat list, and which gets exactly at this issue
of building up computation from scratch and what that means about awareness
and a sense of time:
"A Bunch of Rocks"
http://xkcd.com/505/

Thanks for your other comments. :-)

Bryan Bishop

unread,
Nov 20, 2008, 9:26:05 PM11/20/08
to openmanu...@googlegroups.com, kan...@gmail.com
On 11/19/08, marc fawzi <marc....@gmail.com> wrote:
> I read your response with deep interest, then my ADD (or rogue interrupt
> handler) kicked in half way through, with the following question:

I like your line of questioning, and have some things on my site you
might be interested in:

http://heybryan.org/intense_world_syndrome.html
http://heybryan.org/thinking.html
http://heybryan.org/recursion.html
http://heybryan.org/buildingbrains.html
http://heybryan.org/bookmarking.html
http://heybryan.org/humancortex.html
http://heybryan.org/2008-08-15.html

So some of the "insight incubation" stuff might count, but that
doesn't give you insight into what to call things (sometimes). There are
some occasions of spooky occurrences of people coming up with the same
names, but.

Bryan Bishop

unread,
Nov 20, 2008, 9:29:15 PM11/20/08
to openmanu...@googlegroups.com, kan...@gmail.com
On 11/19/08, Paul D. Fernhout <pdfer...@kurtz-fernhout.com> wrote:
> In partial answer to why "how to make it" photo blogs don't have much data,
> I think a more complete (and charitable) answer is one that I think I read
> somewhere by Eric Hunting in his LUF pages, that people are still working
> out how to do this as a community.

Draft response I wrote the other day but didn't send because it conveyed no new information:

In programming, you don't have to agree as a community necessarily (I mean with reference to what to call things - like libc), you just know that the libraries are there and available because you have to write and compile your system from scratch, or get it from somebody else to load on to your (somewhat) standard hardware setup. In the design of manufacturables this isn't quite the case, because you could just as easily invent something as a placeholder and then there's nothing people can do to yell at you and show you that you didn't bother to implement that placeholder. In programming, they show that your code doesn't run as intended, and it's pretty easy to show incompleteness. There are a number of other "flipped constraints" in physical engineering that might mandate a complete re-examination of an overall model, unless everyone is doing a common bootstrapped system and so is working from primitive built tools and so on for their technological infrastructure. But then you're less likely to get rolling.

- Bryan

marc fawzi

unread,
Nov 20, 2008, 9:29:48 PM11/20/08
to openmanu...@googlegroups.com
Awesome comic and perfectly on target :)

Good and bad (or right and wrong, outside of computation) are psychic values like love, trust, friendship, beauty, wisdom (my favorite), etc

They are non-computable judgments... à la Penrose's OR.

I raised the issue/asked the question because there seems to be a tendency among Semantic Web/Web 3.0 folks to talk about "trust" as a computable value. So if 3,000 people say they trust you and 500 say they trust me, then you're more trustworthy. Oops. Doesn't work that way. Same with Wisdom of Crowds (wrote a piece called Unwisdom of Crowds on my blog that ironically got really popular)

Marc
http://evolvingtrends.wordpress.com/

marc fawzi

unread,
Nov 20, 2008, 9:38:40 PM11/20/08
to openmanu...@googlegroups.com
Will browse tonight. Thanks.

Very interested in potentially strong connection between symbolic density and synchronicity.

http://www.blackwell-synergy.com/doi/pdf/10.1111/j.0021-8774.2005.00531.x

I actually "paid" for this article.... but have it saved on my other PC in storage right now :) as I nomad through life. It's one of those articles that has one interesting idea and spends 30 pages explaining it, which isn't "bad" but adds to the info flood, which isn't "bad" but adds to the confusion, which isn't "bad" but adds to the search for meaning, which isn't "bad" but ... shoot me. 

Bryan Bishop

unread,
Nov 20, 2008, 9:47:22 PM11/20/08
to openmanu...@googlegroups.com, kan...@gmail.com
On 11/20/08, marc fawzi <marc....@gmail.com> wrote:
> I raised the issue/asked the question because there seems to be a tendency
> among Semantic Web/Web 3.0 folks to talk about "trust" as a computable
> value. So if 3,000 people say they trust you and 500 say they trust me, then
> you're more trustworthy. Oops. Doesn't work that way. Same with Wisdom of
> Crowds (wrote a piece called Unwisdom of Crowds on my blog that ironically
> got really popular)

There are some people (me?) who do not understand the folk-psych
version of 'trust', and because there's no general computation for
trust, I suspect that there are alternatives that aren't compromises
but rather better solutions to the overall troubles that we see. In
manufacturing, for example, rather than trusting that your Magical
Supplier of All Materials Ever will Always Exist, perhaps you should
actually investigate that a bit, eh? Using this approach you get to
think about some interesting things.

- Bryan

marc fawzi

unread,
Nov 20, 2008, 10:10:19 PM11/20/08
to openmanu...@googlegroups.com
It's just the choice of words... I'm using the word "affinity" instead of "trust" and that resolves the issue for me. Not against systems that compute affinity ... just picking on choice of words, and that's how I made my entry into this discussion :)

marc fawzi

unread,
Nov 21, 2008, 12:32:25 AM11/21/08
to openmanu...@googlegroups.com
I should say that as recently as last year I was advocating a "hard to game" trust metric and said things like "FriendRank" in public... :)

But recently something happened and I started thinking that I should be more careful in word selection because words are powerful. They are powerful because of the associations we make. "Trust" is a powerful word in our psyche. "Affinity" is much less so.

For example, instead of using the word "trust" as in "buyer-seller trust matrix" I opted to use the word "affinity"  (see here)... Something like "reputation" is computable, simply as the sum of all ratings from all people.
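
(To make that concrete, a toy sketch in Python -- made-up names and numbers, not any particular system -- of "reputation" as a simple aggregate, which is computable but says nothing about whether you should actually trust anyone:)

    # Toy sketch: "reputation" as a plain sum of ratings -- easy to compute,
    # easy to game, and not the same thing as trust.
    ratings = {
        "seller_a": [5, 4, 5, 3],
        "seller_b": [2, 5],
    }

    def reputation(name):
        """Sum of all ratings received by this name."""
        return sum(ratings.get(name, []))

    for name in ratings:
        print(name, reputation(name))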

So I started compiling a list of words that have psychic qualities and maybe the word psychic is not the right word but for now that's what I'm calling those words.

The list so far:

trust, trustworthy, friend, friendship, truth, wisdom, beauty, love, good, bad

I'm excluding those (and similar words) from usage in any computational scheme

There is actually a lot of work done on computational theories of beauty, love, even good and bad, but given that these words represent non-computable judgments I've decided to keep them out of my lexicon for anything to do with Web 2.0, Web 3.0/Semantic Web, P2P and all computational models therein.

Does that make sense?

Sorry if I have taken this thread off topic

I'll see what I can do to bring it back to topic, if I did take it off topic

Smári McCarthy

unread,
Nov 21, 2008, 5:01:48 AM11/21/08
to openmanu...@googlegroups.com
-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1

Sorry I just have to chime in here. Inline.

marc fawzi wrote:
> Hi Paul,
>
> I read your response with deep interest, then my ADD (or rogue interrupt
> handler) kicked in half way through, with the following question:
>
> Assume that the following is true:
>
> 1. Our ability to perceive 'space' (abstractly speaking) gives us the
> possibility of having an orientation

Is it our ability to perceive space that gives us the possibility of
orientation, or is it our ability to perceive relative orientation that
allows us to build mental models of space? That is to say, can there not
be spatial dimensions we cannot perceive?

> 2. ---->Our ability to perceive 'time' gives us certainty about the
> state of our orientation in space

What is it with people and treating time as something other than a
spatial dimension?

> 3. ----------->Our ability to know the state of our orientation in space
> and our ability to select a new state constrained by the geometry of
> spacetime allow us to generate non-random patterns

This is referred to as causal determinism and has been rejected since
Heisenberg. If you're interested in causal determinism anyway - as I am
- then I suggest reading:

* Meditations on First Philosophy by Rene Descartes
* A Philosophical Essay on Probabilities by Pierre-Simon Laplace
* The Logic of Chance by John Venn (and anything from the early days
of probability theory)
* Carnot, Pascal, Fourier, etc, on thermodynamics
* Anything by Thomas Hobbes, Baruch Spinoza, Gottfried Leibniz, David
Hume, Arthur Schopenhauer, Friedrich Nietzsche
* The ontological proofs for the existence of God, see particularly
Kurt Gödel on this issue
* Anything worth reading on the subject of Deism or (to a lesser
degree) Calvinism.
* My upcoming novel, The Dream Machine, as soon as I finish writing it
(shameless plug ;)


> 4. ------------------>Our ability to generate patterns allows us to
> store patterns (if you can move and change direction then you can be
> moved and your direction can be changed)

How are patterns "stored"? What is the storage mechanism? Assuming
causal determinism and using cybernetics terms, storage need only
encompass the convolution of all previous transformations applied to the
system from the point of memory, as perfect reconstruction can be
performed by applying the inverse of the convolute to the system. But
"where" to store that?

> 5. ----------------------->The ability to store and generate non-random
> patterns gives us the ability to compute.

Is summation a non-random pattern? (yes!) How then would you
constructively define summation? The ability to compute must include the
ability to define the method of computation.
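
(For instance, a constructive, Peano-style definition of addition in terms
of a successor operation -- just a sketch to make "constructively define
summation" concrete:)

    # Addition defined only in terms of a successor operation and recursion.
    def succ(n):
        return n + 1        # stand-in for the primitive successor operation

    def add(a, b):
        """a + b, by applying succ to a exactly b times."""
        return a if b == 0 else succ(add(a, b - 1))

    assert add(3, 4) == 7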

> 6.------------------------------>Our ability to compute gives us our
> 'knowing.'

Sounds reasonable if the other points can be resolved. Good luck!


- Smári


-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1.4.9 (GNU/Linux)
Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org

iEYEARECAAYFAkkmhwcACgkQ9cJSn8kDvvH6ywCgyqv+vfogNx9Cqkq5rwfKu0gV
HbsAniM1KOHgZPz1BGQY3eMrnmculsCT
=L6sT
-----END PGP SIGNATURE-----

Paul D. Fernhout

unread,
Nov 21, 2008, 1:10:44 PM11/21/08
to openmanu...@googlegroups.com
All good points as to why this is different than programming, agreed.

The same is true for writing an essay. There is no way you can tell if it
will be understandable or useful to others without asking them -- unless you
either are a very experienced writer (and know broad formulas) or unless you
know your audience very well and can "simulate" their reactions.

That's one reason I feel simulators of virtual worlds are important for
testing designs down the road. But that suggests putting a bigger emphasis
on designs being machine readable to put into the simulation, although, with
the right virtual tools, a tele-operated robot or avatar in the simulation
might be able to be directed by a person outside the simulation to follow
written instructions. :-)

--Paul Fernhout

marc fawzi

unread,
Nov 21, 2008, 1:19:22 PM11/21/08
to openmanu...@googlegroups.com
<<
>
> Assume that the following is true:
>
> 1. Our ability to perceive 'space' (abstractly speaking) gives us the
> possibility of having an orientation

Is it our ability to perceive space that gives us the possibility of
orientation, or is it our ability to perceive relative orientation that
allows us to build mental models of space? That is to say, can there not
be spatial dimensions we cannot perceive?
>>

The context is human perception, both what we see through our eyes and what we can imagine visually. If we can imagine those extra spatial dimensions visually then we are back to what we can perceive, not what we can't perceive. Are you saying that what we can't know has something to do with what we know?

<<

> 2. ---->Our ability to perceive 'time' gives us certainty about the
> state of our orientation in space

What is it with people and treating time as something other than a
spatial dimension?
>>

"treating" is one word

Very different from "perceiving"

How are you treating time right now? As a spatial dimension? If I say meet me in an hour, are you going to show me how you'd do that in 4D? What does it have to do with what we can perceive? Again, are you saying that what we "treat" time as is the same as how we "perceive" time?
First you say it's not about what we can perceive but about what we cannot perceive. Then you say it's not about how we perceive (time in this case) but how we "treat" it in theory. So first you switch out of context into the opposite context, from what we can perceive to what we cannot perceive, and then you change context from how we perceive something to how we treat it theoretically. Now you have an opposing context and a completely different context, and you're still not interested in discussing the original context because the opposing context and the completely different context are more interesting to you. Which is fine with me, but it's just as easy for me to dismiss your opposing context and your completely different context and focus on the original context, i.e. what we can perceive.

<<
> 3. ----------->Our ability to know the state of our orientation in space
> and our ability to select a new state constrained by the geometry of
> spacetime allow us to generate non-random patterns

This is referred to as causal determinism and has been rejected since
Heisenberg. If you're interested in causal determinism anyway - as I am
- - then I suggest reading:

 * Meditations on First Philosophy by Rene Descartes
 * A Philosophical Essay on Probabilities by Pierre-Simeon Laplace
 * The Logic of Chance by John Venn (and anything from the early days
of probability theory)
 * Carnot, Pascal, Fourier, etc, on thermodynamics
 * Anything by Thomas Hobbes, Baruch Spinoza, Gottfried Leibniz, David
Hume, Arthur Schopenhauer, Friedrich Nietzsche
 * The ontological proofs for the existence of God, see particularly
Kurt Gödel on this issue
 * Anything worth reading on the subject of Deism or (to a lesser
degree) Calvinism.
 * My upcoming novel, The Dream Machine, as soon as I finish writing it
(shameless plug ;)

>>

Keep on dreaming :)

Re: Causal determinism. The word "rejected" implies that there is a central authority on science that approves and rejects theories. But logic/maths has the biggest authority within science, by definition, so causal determinism cannot be said to be "rejected" unless it has been proven wrong, i.e. with a formal proof that no one can contest on a mathematical or meta-mathematical basis. (Gödel's work tells us that logic is not infallible, or not immune to uncertainty. It does not tell us that all logic is uncertain, i.e. it does not tell us that logic as a tool is broken; it tells us that it can be broken. For example, "if A = B and B = C then A = C" still holds.)

Penrose's OR is not about causal determinism. It's about Objective Reduction, another view of QM that does not assume randomness but rather non-computable judgment. Same with Bohm's implicate order, but I find Penrose's OR more attractive to my intuition.

<<

> 4. ------------------>Our ability to generate patterns allows us to
> store patterns (if you can move and change direction then you can be
> moved and your direction can be changed)

How are patterns "stored"? What is the storage mechanism? Assuming
causal determinism and using cybernetics terms, storage need only
encompass the convolution of all previous transformations applied to the
system from the point of memory, as perfect reconstruction can be
performed by applying the inverse of the convolute to the system. But
"where" to store that?
>>

Well, you went off context (i.e. what we can perceive) so far away that I can
understand why the simple concept of storing a pattern in our memory becomes
a convoluted argument. If I can generate a pattern in my perception, e.g. a
circle, then I can also store it.

<<
> 5. ----------------------->The ability to store and generate non-random
> patterns gives us the ability to compute.

Is summation a non-random pattern? (yes!) How then would you
constructively define summation? The ability to compute must include the
ability to define the method of computation.
>>

Again, the context is what we perceive (summation in this case) not how we "treat" summation theoretically. If you stick to the context the concept becomes really simple: If two apples land on the ground in front of you and you put your finger on the first apple and give it the label "1" then move your finger and point it at the second apple and give it the label "2" then your finger is now pointing at ordinal position 2, and that's how you count, perceptually.

<<

> 6.------------------------------>Our ability to compute gives us our
> 'knowing.'

Sounds reasonable if the other points can be resolved. Good luck!


 - Smári
>>

I didn't have to resolve. I just had to retain context and not jump into the opposing context (what we can perceive) or a completely different context (how we treat things theoretically)




 

Bryan Bishop

unread,
Nov 21, 2008, 1:22:17 PM11/21/08
to openmanu...@googlegroups.com, kan...@gmail.com
On 11/21/08, Paul D. Fernhout <pdfer...@kurtz-fernhout.com> wrote:
> That's one reason I feel simulator of virtual worlds are important to
> testing designs down the road. But that suggests putting a bigger emphasis
> on designs being machine readable to put into the simulation, although, with

One of the other factors of putting in machine readable information is
content-generation of instructions for people to assemble the designs.
This is another reason to look at origami and papercraft, since it's
so easy to generate fold and crease instructions for users, with even
more media like 3D models, rotatable visualizations and demos, than
you can with simple origami instruction booklets. Also, English text
generation is possible if you think of it kind of like serialization.
I thought this was already going to be implemented in the system?
Maybe we're not synchronized here enough?
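
(As a rough sketch of that serialization idea -- the step format below is
invented for the example, not webEOS or SKDB output:)

    # Treat the instruction text as a serialization of structured fold steps.
    fold_steps = [
        {"op": "valley", "from": "corner A", "to": "corner C"},
        {"op": "mountain", "from": "the edge midpoint", "to": "corner B"},
    ]

    def to_english(steps):
        """Render structured fold steps as numbered plain-English text."""
        return "\n".join(
            "%d. Make a %s fold from %s to %s." % (i, s["op"], s["from"], s["to"])
            for i, s in enumerate(steps, 1))

    print(to_english(fold_steps))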

But anyway, throwing it into a simulation is a pretty good solution
for the problem we were talking about [I'm not referring to the other
previous reasons for simulations]. That's one of the avenues that the
lab I'm working in is interested in exploring regardless; it's a good
way to show somebody that their design isn't simulatable and doesn't
compile. "Well I know it works in reality" might not be as good as
"aha, it works in the simulation, maybe you should go explore this
area of abstraction in the package contents <point to some tools for
exploring mechanical details of, I don't know, the arrangements of
bolts in a mechanical contraption>".

> the right virtual tools, a tele-operated robot or avatar in the simulation
> might be able to be directed by a person outside the simulation to follow
> written instructions. :-)

Maybe. I don't understand where you're going with that, natural
language interpretation engines and nonplayable characters in the
simulator? Huh?

marc fawzi

unread,
Nov 21, 2008, 1:41:35 PM11/21/08
to openmanu...@googlegroups.com
Basically, the difference between how we perceive things and how we treat them in maths is this, sometimes:

1. I see a dog (I perceive a dog in front of me)

2. I open up my note book and start describing a bottle of champagne

#2 is how I "treat" the dog theoretically

;0

This is a pattern very familiar to me in theoretical physics. You take one thing or problem and you turn it into a different problem.

If you stick to what you perceive rather than how you theoretically "treat" things that you perceive then you won't have to end up with convoluted arguments.

 
On Fri, Nov 21, 2008 at 10:35 AM, marc fawzi <marc....@gmail.com> wrote:
Sorry, eating blackberries while typing causes errors:


<<
I just had to retain context and not jump into the opposing context (what we can perceive) or a completely different context (how we treat things theoretically)
>>

Should read:

I just had to retain context and not jump into the opposing context (i.e. what we can NOT perceive) or a completely different context (how we TREAT things theoretically)

marc fawzi

unread,
Nov 21, 2008, 1:35:26 PM11/21/08
to openmanu...@googlegroups.com
Sorry, eating blackberries while typing causes errors:

<<
I just had to retain context and not jump into the opposing context (what we can perceive) or a completely different context (how we treat things theoretically)
>>

Should read:

I just had to retain context and not jump into the opposing context (i.e. what we can NOT perceive) or a completely different context (how we TREAT things theoretically)


Paul D. Fernhout

unread,
Nov 21, 2008, 3:06:40 PM11/21/08
to openmanu...@googlegroups.com
Reminds me of this reference on the ExI list on social software:
http://www.shirky.com/writings/group_enemy.html
and how the human group (not the software) is the best at judging
trustworthiness, and in relation to what (or, in his words, a "reputation").
From there: "The world's best reputation management system is right here,
in the brain. And actually, it's right here, in the back, in the emotional
part of the brain. Almost all the work being done on reputation systems
today is either trivial or useless or both, because reputations aren't
linearizable, and they're not portable."

I find his other comments there on core groups policing a system interesting
as well, in relation to a "meshwork/hierarchy" balance.

--Paul Fernhout

Paul D. Fernhout

unread,
Nov 21, 2008, 3:14:39 PM11/21/08
to openmanu...@googlegroups.com
Actually, I feel the issue of the source of data (or the purported path by
which you received data, like in email headers) is important metadata,
because you would use it to decide how much to trust a particular
manufacturing recipe or plan. So, I think what is as important as or more
important than a "trust" ranking is instead all the metadata you want to
know to make that decision for yourself (or at least, to track the
assessments of someone else who you otherwise rely on). But once you make
that decision, you might store the information somewhere yourself as
metadata for automated querying (show me all the statements about some topic
by people I trust on this topic a lot).
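
(A minimal sketch of that kind of query in Python -- the field names and
data are invented, assuming the statements already carry source metadata:)

    # Query statements by topic, filtered through your own per-topic trust
    # assessments stored as metadata.
    statements = [
        {"topic": "casting", "text": "recipe A", "source": "alice"},
        {"topic": "casting", "text": "recipe B", "source": "bob"},
    ]
    my_trust = {("alice", "casting"): 0.9,
                ("bob",   "casting"): 0.2}

    def trusted_statements(topic, threshold=0.5):
        """Statements on a topic whose source I trust above a threshold."""
        return [s for s in statements
                if s["topic"] == topic
                and my_trust.get((s["source"], topic), 0.0) >= threshold]

    print(trusted_statements("casting"))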

In the current implementation of the Pointrel system, transactions can (in
theory) do some of that, by wrapping layers of identity around statements
(presumably all signed with a public key). But that is all in theory -- it
does not do that yet.

--Paul Fernhout

marc fawzi

unread,
Nov 21, 2008, 3:22:25 PM11/21/08
to openmanu...@googlegroups.com
I agree.

I'm just against _computing_ trust, wisdom, and related concepts..

"computing" of course includes summation of units of such concepts as in digg or whatever "wisdom of crowds" model  // explaining the obvious

I'm actually fine with computing "reputation" because to me it has a very weak connection to "trust"  but we have to be careful around word choices because if someone connects reputation strongly with 'trust' then it becomes the equivalent of trust, more or less. So it's safer to choose words that have universally weak connection to the non-computable judgments of trust, wisdom, friendship, beauty, love, truth (outside of computation), right and wrong (outside of computation), good and bad, etc

marc fawzi

unread,
Nov 21, 2008, 3:45:24 PM11/21/08
to openmanu...@googlegroups.com
And in my agreement with you, Paul, I'm including my agreement re: your proposed approach of leaving non-computable judgments up to the user, and if we do automate then we should not confuse/trick/fail the user by appearing to compute "trust" when in fact we're computing popularity and associating that with "good", or when we add reputation ratings together and associate that with "trust" or "good"

Paul D. Fernhout

unread,
Nov 23, 2008, 6:40:54 PM11/23/08
to openmanu...@googlegroups.com
Yes, I still think papercraft would be a good thing to investigate. I am
still thinking about buying a papercraft cutter like the Graphtec one
referenced here:
"Common format to export to Papecraft cutters, 3D printers, and
do-it-yourself plans?"
http://groups.google.com/group/openvirgle/msg/99b9855310a9e00f

More related OpenVirgle stuff for reference for others:
"OpenVirgle Papercraft idea"
http://groups.google.com/group/openvirgle/browse_thread/thread/c5eb0d7e676219dc
"""
But it turns out origami is really just a subset of a larger area that is
sometimes termed "papercraft". See:
http://en.wikipedia.org/wiki/Papercraft
"Paper models, also called card models or papercraft, are models constructed
mainly from sheets of heavy paper or card stock as a hobby. It may be
considered a broad category that contains origami, and card modelling, with
origami being a paper model made from folding paper (without using glue),
and card modelling as the making of scale models from sheets of card on
which the parts were printed, usually in full colour, for one to cut out,
fold, score and glue together. They appear to be generally more popular in
Europe and Japan than in the United States."

Papercraft in turn is a sort of subset of build contraptions at home using
common cheap things like paper, straws, tape, toothpicks, and so on. Mr.
Rogers and many others have big books on this kind of stuff. Both school
teachers and homeschoolers are big on it too. :-) So there is potentially a
vast audience worldwide -- including people with a computer and printer
(even at work :-) but with a tiny budget for toys for their kids.
"""

On the simulation issue, I was just musing on how you could use a simulation
to test if you had complete-enough written directions to make something. So
I thought you could create a simulator that had an avatar in it, and you
operate the avatar yourself, getting the avatar to follow the written
instructions. It's not that the instructions are in a structured system and
you generate natural language (although that would be nice, to print some
subset of them for people as a backup on paper :-)
http://www.kurtz-fernhout.com/oscomak/goals.htm

--Paul Fernhout


Bryan Bishop

unread,
Nov 23, 2008, 6:52:45 PM11/23/08
to openmanu...@googlegroups.com, kan...@gmail.com
On Sun, Nov 23, 2008 at 5:40 PM, Paul D. Fernhout wrote:
> Yes, I still think papercraft would be a good thing to investigate. I am
> still thinking about buying a papercraft cutter like the Graphtec one
> referenced here:
> "Common format to export to Papecraft cutters, 3D printers, and
> do-it-yourself plans?"
> http://groups.google.com/group/openvirgle/msg/99b9855310a9e00f

As it turns out, one of the other things that ADL does is sheet metal
folding and cutting, which is very similar to origami. I recently sent
an email on this to some lab members, which I'll reproduce here:

===
Theo Jansen mechanism:
http://www.youtube.com/watch?v=-GgOn66knqA
a mechanical contraption out of origami.

Origami graph rewriting:
http://heybryan.org/books/Manufacturing/origami/WebEOS%20-%20system%20for%20origami%20construction%20and%20proving%20on%20the%20web.pdf
===

Huh. I was expecting that email to be slightly more
dramatic/impressive. Well, anyway, there were some posts on openvirgle
that I recall writing about origami, or papercraft or some such. There
are more papers here:
http://heybryan.org/books/Manufacturing/origami/

> More related OpenVirgle stuff for reference for others:
> "OpenVirgle Papercraft idea"
> http://groups.google.com/group/openvirgle/browse_thread/thread/c5eb0d7e676219dc
> """

In the paper linked above, the webEOS / webOrigami people had a
Mathematica notebook application that was interpreting various API
calls for folding a virtual piece of paper based off of a popped stack
of available nodes from which to fold to or from (theoretically, you
can make a fold to any point, but let's keep it simple, eh?). A few
weekends ago I wrote a program called 'supermetal' that generates
metallic contraptions based off of placing further primitive geometric
shapes on the 'critical points' of the already-placed objects. This is
somewhat similar. The problem with folding though is that you can't
really just splice-and-dice, because if you have a frog made out of
origami, you can't just splice the head on to another origami shape
because of the folding patterns involved. But with papercraft, sure.
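
(A sketch in the spirit of that approach -- purely a guess at the idea, not
the actual 'supermetal' code:)

    # Grow a random contraption by attaching new primitive shapes at the
    # open "connection sites" of shapes already placed. Illustrative only.
    import random

    PRIMITIVES = {"cube": 6, "tetrahedron": 4, "cylinder": 2}  # shape -> sites

    def grow(steps=5, seed=0):
        random.seed(seed)
        placed = []                    # list of (shape, parent_index)
        open_sites = [(None, 0)]       # (parent_index, site_number); one root site
        for _ in range(steps):
            parent, _site = open_sites.pop(random.randrange(len(open_sites)))
            shape = random.choice(sorted(PRIMITIVES))
            placed.append((shape, parent))
            new_index = len(placed) - 1
            # every site of the new shape stays open except the one used to attach
            open_sites += [(new_index, s) for s in range(PRIMITIVES[shape] - 1)]
        return placed

    print(grow())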

There's a blender meshing script that takes an object and converts it
into papercraft output, Ben was experimenting with this back in, uh,
March?
http://heybryan.org/mediawiki/index.php/Skdb

> But it turns out origami is really just a subset of a larger area that is
> sometimes termed "papercraft". See:
> http://en.wikipedia.org/wiki/Papercraft
> "Paper models, also called card models or papercraft, are models constructed
> mainly from sheets of heavy paper or card stock as a hobby. It may be
> considered a broad category that contains origami, and card modelling, with
> origami being a paper model made from folding paper (without using glue),
> and card modelling as the making of scale models from sheets of card on
> which the parts were printed, usually in full colour, for one to cut out,
> fold, score and glue together. They appear to be generally more popular in
> Europe and Japan than in the United States."
>
> Papercraft in turn is a sort of subset of build contraptions at home using
> common cheap things like paper, straws, tape, toothpicks, and so on. Mr.
> Rogers and many others have big books on this kind of stuff. Both school
> teachers and homeschoolers are big on it too. :-) So there is potentially a
> vast audience worldwide -- including people with a computer and printer
> (even at work :-) but with a tiny budget for toys for their kids.
> """

For toys, if you can figure out how to automatically generate and
simulate or test Theo Jansen mechanisms for kinematic movements, you
can make up lots of interesting, weird and bizarre toys that would get
kids pretty excited. For paper folding, these would have to be pretty
large (as large as the toddlers and kids themselves), which is pretty
neat as a holiday project or something.

Paul D. Fernhout

unread,
Nov 23, 2008, 8:52:33 PM11/23/08
to openmanu...@googlegroups.com
Bryan Bishop wrote:
> For toys, if you can figure out how to automatically generate and
> simulate or test Theo Jansen mechanisms for kinematic movements, you
> can make up lots of interesting, weird and bizarre toys that would get
> kids pretty excited. For paper folding, these would have to be pretty
> large (as large as the toddlers and kids themselves), which is pretty
> neat as a holiday project or something.

Great idea. I've long wanted to take our PlantStudio "evolutionary arts"
software to the next level (evolving arbitrary hierarchical or meshwork 3D
forms).

--Paul Fernhout

Bryan Bishop

unread,
Nov 23, 2008, 9:08:01 PM11/23/08
to openmanu...@googlegroups.com, kan...@gmail.com

This is sorta-kinda what the Automated Design Lab likes to do. Not
necessarily genetic algorithms, however. There are other interesting
ways to search through the implicit possibilities of the designs, like
informed searches, depth-first, breadth-first, A*, backpropagation,
etc. Like I mentioned, 'supermetal' does some of this; it's just a
random shape generator based off of primitives with a limited number
of connection sites added each time. So there are two parts to the
puzzle there, but what's the missing link to getting actual moving
pepakura constructions like Theo Jansen mechanisms and related amazing
fantastical mechanicals?
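
(A minimal sketch of one of those searches -- breadth-first over a toy
design space; the primitives and the goal test are invented for the
example:)

    # Breadth-first search over a toy "design space": a state is the tuple
    # of primitives added so far. A real system would score kinematics,
    # cost, and so on instead of this toy goal test.
    from collections import deque

    PRIMITIVES = ["link", "joint", "crank"]

    def neighbors(design):
        return [design + (p,) for p in PRIMITIVES]

    def bfs(goal_test, max_depth=3):
        queue = deque([()])            # start from the empty design
        while queue:
            design = queue.popleft()
            if goal_test(design):
                return design
            if len(design) < max_depth:
                queue.extend(neighbors(design))
        return None

    # toy goal: at least one crank and two links somewhere in the design
    print(bfs(lambda d: d.count("crank") >= 1 and d.count("link") >= 2))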

marc fawzi

unread,
Nov 25, 2008, 10:22:54 PM11/25/08
to openmanu...@googlegroups.com

Michael Koch

unread,
Nov 28, 2008, 10:25:58 PM11/28/08
to Open Manufacturing
Hello Bryan,

First, I want to introduce myself to the group. My name is Michael
Koch and I'm a graduate student working on the VOICED project at
Oregon State. I was recommended to this group by Richard Schulte, and
have been very impressed with all the conversations I've seen on here.

Bryan, I assume it is okay to pass your suggestions on to the VOICED
team? And if anyone has any questions about VOICED and its current
status, feel free to ask. I will do my best to answer them, or relay
them to the correct sources.

Thanks,
Michael Koch


On Nov 19, 6:53 am, "Bryan Bishop" <kanz...@gmail.com> wrote:
> On 11/19/08, Smári McCarthy <s...@hi.is> wrote:
>
>
>
> > The file format I suggested was much much simpler.
>
> > <?xml version="1.0" ?>
> > <project>
> >    <name>Acrylic chandelier</name>
> >    <description>
> >    A nice laser-cuttable chandelier.
> >    </description>
> >    <version>1</version>
> >    <hash>MD5 sum of the project</hash>
> >    <website>Location of further design information</website>
> >    <authors>
> >            <author>Me</author>
> >            <author>Myself...</author>
> >    </authors>
> >    <files>
> >            <file desc="Fixes to the bulb fixture"fixment.svg"/>
> >            <file desc="The sides of the chandelier" url="sides.svg"/>
> >            <file desc="Pattern for a side" url="pattern1.png"/>
> >            <file desc="Pattern for a side" url="pattern2.png"/>
> >            <file desc="Pattern for a side" url="pattern3.png"/>
> >            <file desc="Pattern for a side" url="pattern4.png"/>
> >    </files>
> > </project>
>
> > Further meta-data can be added to the format later, but this currently
> > gives enough information to a program designed to present the data to a
> > user in an orderly fashion. The only important point here is the fact
> > that a typical "project" consists of more than one "file" and requires
> > some descriptions.
>
> Yes, but what's important is the standard set of files that we expect
> to see referenced in something like the <FILES> list. CAD? XML? YAML?
> And on top of that, what standard package of tools should each one
> open with at a minimum or be packaged standard with fabuntu? This is
> in a sense what I'm doing with some microtools for managing
> repositories, but of course it's heavily format dependent, though the
> basic set of tools that are used for file management clearly need to
> be extended to repository management, such as simple operations for --
> say -- confirming that two CAD files contain the same information
> within, if they use cross-references, for instance. Just a small
> example. I encourage everyone to take a strong point from debian and
> how they do it. Here's how they package software, an example:
>
> http://en.wikipedia.org/wiki/Deb_(file_format)
> http://debcreator.cmsoft.net/
> details: http://tldp.org/HOWTO/Debian-Binary-Package-Building-HOWTO/x60.html
>
> Anyway, for the record, here's the VOICED system repository XML format
> that I discourage because there are some improvements to be done to it:
> http://heybryan.org/~bbishop/docs/repo/
>
> <!DOCTYPE RepositoryXML>
> <RepositorySystem>
>     <System SystemDescription="consumer"
> SystemContributingInstitution="" SystemType="empty" SystemName="salton
> electric wok" >
>         <Artifact ArtifactName="lid assembly" ArtifactCBName="none"
> ArtifactIsAssembly="1" ArtifactManufacturer=""
> ArtifactModificationDate="" ArtifactCreationDate="2008-07-23"
> ArtifactDescription="" ArtifactParent="salton wok"
> ArtifactTrademark="" ArtifactReleaseDate="" ArtifactQty="1" >
>             <ArtifactFile ArtifactFileType="1"
> ArtifactFileExtension="8312" >lidassembly1-FILE</ArtifactFile>
>             <ArtifactImage>lidassembly-IMAGE</ArtifactImage>
>             <CreatorInfo CreatorFirstName="" CreatorLastName=""
> CreatorEmail="" CreatorAffiliation="" />
>         </Artifact>
>         <Artifact ArtifactName="internal" ArtifactCBName="empty"
> ArtifactIsAssembly="0" ArtifactManufacturer="empty"
> ArtifactModificationDate="" ArtifactCreationDate=""
> ArtifactDescription="empty" ArtifactParent=""
> ArtifactTrademark="empty" ArtifactReleaseDate="" ArtifactQty="0" >
>             <CreatorInfo CreatorFirstName="" CreatorLastName=""
> CreatorEmail="" CreatorAffiliation="" />
>         </Artifact>
>         <Artifact ArtifactName="lid handle" ArtifactCBName="handle"
> ArtifactIsAssembly="0" ArtifactManufacturer=""
> ArtifactModificationDate="2008-06-24"
> ArtifactCreationDate="2008-07-23" ArtifactDescription=""
> ArtifactParent="lid assembly" ArtifactTrademark=""
> ArtifactReleaseDate="2000-01-01" ArtifactQty="1" >
>             <Subfunction SubIsSupporting="0" SubInputArtifact="empty"
> SubOutputArtifact="empty" SubSubfunction="import" />
>             <CreatorInfo CreatorFirstName="" CreatorLastName=""
> CreatorEmail="" CreatorAffiliation="" />
>         </Artifact>
>         <Artifact ArtifactName="external" ArtifactCBName="empty"
> ArtifactIsAssembly="0" ArtifactManufacturer="empty"
> ArtifactModificationDate="" ArtifactCreationDate=""
> ArtifactDescription="empty" ArtifactParent=""
> ArtifactTrademark="empty" ArtifactReleaseDate="" ArtifactQty="0" >
>             <CreatorInfo CreatorFirstName="" CreatorLastName=""
> CreatorEmail="" CreatorAffiliation="" />
>         </Artifact>
>     </System>
>     <lidassembly1-FILE><![CDATA[NbFJ1o5FGU1v1My6MoGoKzFM1E6bI88znd32cEwlEIcR3ql38Bt7y4yw==]]></lidassembly1-FILE>
>     <lidassembly-IMAGE><![CDATA[AAAQhXicnmpx6uhooAgv9891opkxNVALACNbFJ1o5FGU1v1My6MoGoKzFM1E6bI88znd32cEwlEIcR3ql38Bt7y4yw==]]></lidassembly-IMAGE>
> </RepositorySystem>
>
> I discourage the immediate similar use because it's easier if you just
> honestly put the image and data files in separate directories and
> because some of the formats are based off of proprietary software
> installations (you have no understanding of how annoying this makes it
> for me to work with it at all); and also because the FS and CFG and
> assembly graphs aren't properly cross-referenced anyway, which is some
> very important metadata.
>
> - Bryan
> http://heybryan.org/
> 1 512 203 0507

Bryan Bishop

unread,
Nov 28, 2008, 10:31:28 PM11/28/08
to openmanu...@googlegroups.com, kan...@gmail.com
On Fri, Nov 28, 2008 at 9:25 PM, Michael Koch
<michael.d...@gmail.com> wrote:
> Hello Bryan,

Hey Michael.

> First, I want to introduce myself to the group. My name is Michael
> Koch and I'm a graduate student working on the VOICED project at
> Oregon State. I was recommended to this group by Richard Schulte, and
> have been very impressed with all the conversations I've seen on here.

Ah, hey there. I am over at the University of Texas at Austin in the
Automated Design Lab with Dr. Campbell & VOICED. I was hanging out
around this mailing list before I got the job though (reverse order,
you see).

> Bryan, I assume it is okay to pass your suggestions onto the VOICED
> team? and if anyone has any questions about VOICED and its current
> status, feel free to ask. I will do my best to answer them, or relay
> them to the correct sources.

Yeah, absolutely. Somebody else from Rob Stone's lab signed up on my
IEEE Manufacturing and Automation Group (MAG) two months ago, which
was also surprising since it was unprompted. I guess we're all looking
at the same things consistently. Also, a blog account on the
voiced-evo wiki would be neat. I have various code pushes that might
be of interest.

- Bryan
