Re: thingiverse

Bryan Bishop

Nov 18, 2008, 7:58:42 AM
to Michel Bauwens, openmanufacturing, Matt Campbell, Dave Rauchwerk, Bryan Bishop, Vinay Gupta, Smári McCarthy, Christian Siefkes, Eric Hunting, Nathan Cravens, Marcin Jakubowski
On 11/18/08, Michel Bauwens <michel...@gmail.com> wrote:
> This seems quite sophisticated .. any comments,
> see http://www.thingiverse.com/

See also http://unptnt.com/ - a new one that I picked up on at my
local dorkbot meeting. They are in "double secret alpha", but I don't
know why. They have a neat feature that lets you work with a bill of
materials via a popup window that harvests datasheet information from
the various websites where you might buy things, like Amazon,
Radioshack, digikey, etc. (Wish it would do material sourcing too, but
unfortunately mindat.org, RosettaNet and NOAA only go so far.)

I looked at thingiverse.com, and it looks like on the "upload" tab
they allow any file format and any submission can have any number of
different files and varying degrees of openness. While it's great to
see yet another open hardware directory, the lack of standardization
is worrisome. The XML repository format (essentially like a TAR file
with a minimum consistent set of files within it) that VOICED has been
tossing around might be worth looking into after a few improvements.

When working with the database of parts that I have available to me as
part of my lab work, it's hard enough to link up the different
components and expect the people who did data entry to have kept to
the same ontology, much less to have kept their black box diagrams in
line and so on. Some of the repository entries have CDD files
("ConceptDraw"), some have JPGs, others (a few) have CAD models, but
none of this is consistent, and that makes it unusable when you start
doing large scale stuff. Standardization is important.

I wish we could all just use fabuntu as the final say in the matter,
and point out that if fabuntu can't control machinery to physically
make the system you describe in your 'open hardware directory', then
something is very wrong.

http://fabuntu.org/ (which needs some work because it looks rather dead)

- Bryan
http://heybryan.org/
1 512 203 0507

Smári McCarthy

Nov 18, 2008, 8:04:06 AM
to Bryan Bishop, Michel Bauwens, openmanufacturing, Matt Campbell, Dave Rauchwerk, Vinay Gupta, Christian Siefkes, Eric Hunting, Nathan Cravens, Marcin Jakubowski

Michel, Bryan, great finds.

I think we need to push for, as Bryan says, a standard description for
"open hardware designs". I haven't seen the VOICED XML format (will
check in a bit), but I think it may be similar to the format I was
tossing around for OpenCAD.

The Fabuntu project was, last I knew, run by Ed Baaffi at SETC (Boston's
South End Technology Center). I haven't heard from him for a while, but
he was doing some really cool stuff in a similar vein.

- Smári


- --
Smári McCarthy
sm...@yaxic.org http://smari.yaxic.org
(+354) 662 2701 - "Technology is about people"

Bryan Bishop

Nov 18, 2008, 8:39:35 AM
to Smári McCarthy, Michel Bauwens, openmanufacturing, Matt Campbell, Dave Rauchwerk, Vinay Gupta, Christian Siefkes, Eric Hunting, Nathan Cravens, Marcin Jakubowski
On 11/18/08, Smári McCarthy <sp...@hi.is> wrote:
> I think we need to push for, as Bryan says, a standard description for
> "open hardware designs".. I haven't seen the VOICED XML format (will
> check in a bit) but I think it may be similar to the format I was
> tossing around for OpenCAD.

An example:
http://heybryan.org/~bbishop/docs/repo/cd%20player.repo

See also:
http://repository.designengineeringlab.org/
(for a web interface)

This might not be what you wanted for OpenCAD. Think of it as a zip
file but without compression (compression is optional, I guess). 3D
modeling information certainly belongs within it, as long as it's
properly cross-referenced to the other information, like annotations
about "this part is called blah, made out of matweb material ID 74126,
and has this many ports for connecting to other blahs". One of the
issues that begins to develop is that you need standardization across
the ontologies of the different open hardware directory sites: if one
says "I have xyz ports" and another is not capturing that information,
then things start to explode and little children cry when large
databases can't be connected together. For my lab work about a month
ago I wrote a few scripts to help repair databases that are
"incomplete" in the information they capture; maybe I'll go find the
files and throw them online. Would anyone like to have those
available?
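
(Roughly, the repair pass amounts to something like this toy sketch;
the field names are hypothetical, not the actual schema:)

REQUIRED_FIELDS = ["name", "material_id", "ports"]

def repair(parts):
    """Flag and placeholder-fill missing fields in a merged parts db."""
    incomplete = []
    for part_id, record in parts.items():
        for field in REQUIRED_FIELDS:
            if field not in record:
                record[field] = None  # placeholder; a human fills it in
                incomplete.append((part_id, field))
    return incomplete

# for part_id, field in repair(merged_databases):
#     print(part_id, "is missing", field)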

As a minimum, the TAR file, or VOICED XML or whatever we want to call
it, should tend to contain things such as: README files; metadata
covering what sort of units the item works with (see GNU units or
unum.sf.net), the authors, the one-line description, the name and so
on; CAD files; GXML; CAD cross-referenced to "labels" in GXML files;
licensing information; and sometimes (when necessary) scripts for
simulating, optimizing or otherwise using the package in a useful way.
This needs some public review/debate/comment, so if anybody reading
this finds something terribly wrong, please rant.
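
To make the minimum-contents idea concrete, here is a rough sketch of
checking such a package (assuming it's a plain tar file; the member
names are just the ones proposed above, nothing official):

import tarfile

REQUIRED = ["README", "metadata", "LICENSE"]  # minimum consistent set

def check_package(path):
    """Return the required members missing from a .repo tar file."""
    with tarfile.open(path) as tar:
        names = tar.getnames()
    return [member for member in REQUIRED if member not in names]

# missing = check_package("cd player.repo")
# if missing:
#     print("not a conforming package; missing:", missing)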

I remember reading a quote somewhere. Somebody understood the problem
quite well. It went something like this: "the problems of open
hardware design and compiling designs together is much harder than
writing a mere compiler for your microprocessor's architecture; there
will be many more interconnecting files, linkers and other tricks of
the trade in the end." (Tell this to the gcc gurus and they will jeer,
of course.) Wish I could remember the source.

> The Fabuntu project was last I knew run by Ed Baaffi at SETC (Boston's
> South End Technology Center). I haven't heard from him for a while, but
> he was doing some really cool stuff in a similar vein.

Is he still alive?

albanetcsr

Nov 19, 2008, 2:54:44 AM
to Open Manufacturing
I like the idea of a universal XML file describing a design of
(almost) anything. However, I think that as long as there is no
universal fabricator capable of building (almost) anything, there will
be a problem defining a universal design/manufacturing description
format. I guess what I'm trying to say is that file formats naturally
arise once there are tools to read/process them. To reframe the
problem, the universal fabricator of today is a person who reads
instructions in plain language. The instructions can have additional
files for automation tools - for example a PDF file with a drilling
template, DWG/SKP or a series of JPG files aiding in assembly, a DXF
or NC file for CNC routing a part, an EPS file for a laser cutter, a
BRD file to make a PCB, etc. As far as this goes, sites like
instructables and thingiverse hit the nail on the head, because all
you really need today to make something is a PDF/HTML instruction and
an arbitrary set of files readable by the widely available automation
tools required for making it. This is a bottom-up approach which may
not be well structured, but it works.


Bryan Bishop

Nov 19, 2008, 3:20:21 AM
to openmanu...@googlegroups.com, kan...@gmail.com
On 11/19/08, albanetcsr <alban...@gmail.com> wrote:
> I like the idea of universal XML file describing a design of (almost)
> anything. However, I think that so long as there is no universal
> fabricator capable of building (almost) anything, there will be a
> problem defining universal design/manufacturing description/format. I

That's why we're talking about a repository format. This is much like
the concept of a zip file, or more accurately a TAR file, which is
just a way of sending a bunch of related files at the same time, plus
or minus compression if you want it. Within this bunch of files there
should be a minimum of standardization, such that there are ways to
find which files are what and so on.

> guess what I'm trying to say is that file formats naturally arise once
> there are tools to read/process them. To reframe the problem, the
> universal fabricator of today is a person that reads instructions in
> plain language. The instructions can have additional files for

There are many tools that read many different types of file formats,
but the majority of them are proprietary (no surprise here). This is
the "driver problem" all over again, or perhaps it's the same
persistent driver problem that the open source software community has
been groaning about since the dawn of mankind's information tech.
Surprisingly, a lot does get done though, lots of reverse engineering
of drivers and so on, but that's because of the distributed tool base
of the computer, which has been an amazing success. (Now if only the
production tools were just as well distributed; computational design
is pretty easy to come by - opencores, for instance, and so on.)

> automation tools - for example PDF file with a drilling template, DWG/

Gah, PDF is terrible for this. Shoot anybody using PDF for this.
Scientific papers shouldn't be PDFs either: just give me your damn
models and the code you wrote :-) and not a teaser in
only-human-readable-files. How useless. :-( But that's a separate
rant.

> SKP or series of JPG files aiding in assembly, DXF or NC file for CNC

JPG only for archival purposes, please. Shoot anybody using it for
technical data transfer.

> routing a part, EPS file for a laser cutter, BRD file to make a PCB
> etc. As far as this goes, sites like instructables and thingiverse hit
> the nail on the head because all you really need today to make
> something is a PDF/HTML instruction and arbitrary set of files
> readable by widely available automation tools required for making this
> something. This is a bottom-up approach which may not be well
> structured, but it works.

No, it doesn't work. Show me the repositories that keep track of these
files in a consistent way and with enough information -- consistently.
Bottom-up approaches are great, but some of the facilitators that are
currently operating don't seem to have the big picture of what's going
on.

- Bryan

albanetcsr

Nov 19, 2008, 6:00:41 PM
to Open Manufacturing
> No, it doesn't work. Show me the repositories that keep track of these
> files in a consistent way and with enough information -- consistently.
> Bottom-up approaches are great, but some of the facilitators that are
> currently operating don't seem to have the big picture of what's going
> on.

But is there actually a consistent way to efficiently describe the
making of an arbitrary thing, given a number of unknown variables
arising from the different natures of things, various possible
manufacturing methods, limited access to tools and resources, format
incompatibilities, etc.?

In my opinion, human readable text with project-specific supporting
files is the optimal solution.

-Vit

Bryan Bishop

Nov 19, 2008, 11:06:14 PM
to openmanu...@googlegroups.com, kan...@gmail.com
On 11/19/08, albanetcsr <alban...@gmail.com> wrote:
> > No, it doesn't work. Show me the repositories that keep track of these
> > files in a consistent way and with enough information -- consistently.
> > Bottom-up approaches are great, but some of the facilitators that are
> > currently operating don't seem to have the big picture of what's going
> > on.
>
> But is there actually a consistent way to efficiently describe making
> of an arbitrary thing, given a number of unknown variables arising
> from different nature of things, various possible manufacturing
> methods, limited access to tools and resources, format
> incompatibilities etc.

Yes. Somehow one human made it, thus showing the possibility; if I
have to wire up motion sensor balls to the person while they make it,
to capture all of the information, so be it. I doubt that this will be
the case in the majority of situations. (Animation crews for 3D
rendered movie scenes do this to capture human peculiarities in
motion, for instance.) Anyway, there are only so many ways under the
stars that you could manufacture; there's a limited set of fundamental
operations, like casting and its fellows. Tools can be used to make
other tools; sometimes you can make a tool to give to your friend or
fellow user; etc.

> In my opinion, human readable text with project-specific supporting
> files is the optimal solution.

As an output, human readable text is fine, this is true. In fact it is
my intention that human readable instructions should be generated, and
perhaps sometimes partly written by hand. But this is deceiving: just
having this information distracts from what actually matters. Your
CNCzone machine setup isn't going to be able to read human readable
text; it probably takes something like gcode instead, for very good
reasons.

Michel Bauwens

Nov 20, 2008, 7:30:00 PM
to Bryan Bishop, Smári McCarthy, openmanufacturing, Matt Campbell, Dave Rauchwerk, Vinay Gupta, Christian Siefkes, Eric Hunting, Nathan Cravens, Marcin Jakubowski
thanks for the pointers ...

you guys are probably familiar with this: http://p2pfoundation.net/Towards_a_Free_Matter_Economy
--
The P2P Foundation researches, documents and promotes peer to peer alternatives.

Wiki and Encyclopedia, at http://p2pfoundation.net; Blog, at http://blog.p2pfoundation.net; Newsletter, at http://integralvisioning.org/index.php?topic=p2p

Basic essay at http://www.ctheory.net/articles.aspx?id=499; interview at http://poynder.blogspot.com/2006/09/p2p-very-core-of-world-to-come.html
BEST VIDEO ON P2P: http://video.google.com.au/videoplay?docid=4549818267592301968&hl=en-AU

KEEP UP TO DATE through our Delicious tags at http://del.icio.us/mbauwens

The work of the P2P Foundation is supported by SHIFTN, http://www.shiftn.com/

albanetcsr

Nov 20, 2008, 8:59:27 PM
to Open Manufacturing
> Yes. Somehow one human made it, thus showing the possibility; if I
> have to wire up motion sensor balls to the person while they make it
> to capture all of the information, so be it. I doubt that this will be
> the case in the majority of situations. Animation crews for 3D
> rendered movie scenes do this to capture human peculiarities in
> motion, for instance. Anyway, there's only so many ways under the
> stars that you could manufacture, there's a limited set of fundamental
> operations like casting and other fellows. Tools can be used to make
> other tools; sometimes you can make a tool to give to your friend or
> fellow user. etc.

I'm not saying it's not possible; I'm saying it would not be
optimal/efficient given the set of tools available to the average
hobbyist or DIYer today. Maybe I misunderstand - do you have an
example of what would be a consistent way to describe a design and/or
a manufacturing process?

-Vit

Bryan Bishop

Nov 20, 2008, 10:41:33 PM
to openmanu...@googlegroups.com, kan...@gmail.com

The average hobbyist has hands and sometimes can get toolsets, like
screwdrivers and other basic mechanical energy systems. Not freely,
and not out in the middle of nowhere, this is true. But we're working
on that. :-) As for a consistent way to do design, there are many CAD
file formats that can be converted from one to another. Of course the
majority of them are proprietary, though there are some open source
standards with API libraries out there, IIRC. [As for describing
manufacturing processes, I hope Paul was wrong when he said it might
take another George Miller to come up with an initial description
dataset. The recipes and dependency stuff, though, is kind of like a
linked-list data struct, if that helps; see the sketch below.]
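
Something like this (a toy sketch, names made up):

class Step:
    """One manufacturing step; 'requires' links it to prior steps."""
    def __init__(self, name, requires=None):
        self.name = name
        self.requires = requires or []  # links to prerequisite Steps

cast = Step("cast the blank")
drill = Step("drill mounting holes", requires=[cast])
tap = Step("tap the holes", requires=[drill])
# Following the 'requires' links backward from 'tap' recovers the
# whole chain of operations, linked-list style.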

Paul D. Fernhout

Nov 21, 2008, 9:52:10 AM
to openmanu...@googlegroups.com
There is a lot to be said for successive approximation -- we create a system
people can use to make things, and bit by bit there is more precision
allowing dumb machines to help. Which suggests to me having a flexible data
storage system at the heart of it, like one based on RDF triples or some
other open-ended approach like Squeak Smalltalk objects. :-)
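
To illustrate what I mean by open-ended (a toy sketch, not RDF syntax
itself; the statement names are made up):

facts = set()

def add(subject, predicate, obj):
    """Store one statement; no schema has to exist in advance."""
    facts.add((subject, predicate, obj))

add("bracket_recipe", "consumes", "steel plate")
add("bracket_recipe", "uses", "drill press")
add("bracket_recipe", "failure_probability", 0.02)

# A new kind of statement is just another triple; nothing breaks:
add("bracket_recipe", "documented_in", "bracket.html")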

Also, again, the design relates to your goals. Bryan seems more interested
in full automation at the moment than I. For me, as I outlined in this paper
prototype:
http://www.kurtz-fernhout.com/oscomak/prototype.htm
I feel a free-form text area is good enough for my immediate goals of
looking at manufacturing webs (admittedly, with pictures and data file
attachments, which are important, and are not in that prototype).

But what I also included there, and what is essential to my interests,
is formally defining what a "manufacturing recipe" uses, consumes, and
produces, as well as other information about the level of effort
required and failure probabilities (all needed for simulation and
design at the level of abstraction I am focusing on).
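
In sketch form (the field names are just illustrations of the idea,
not a fixed schema):

from dataclasses import dataclass

@dataclass
class Recipe:
    """What a manufacturing step needs and yields, for simulation."""
    name: str
    uses: list       # tools/facilities needed but not consumed
    consumes: list   # inputs used up or incorporated
    produces: list   # outputs
    effort_hours: float = 0.0
    failure_probability: float = 0.0

casting = Recipe("sand-cast bracket",
                 uses=["furnace", "sand mold"],
                 consumes=["aluminum ingot"],
                 produces=["rough bracket casting"],
                 effort_hours=3.0,
                 failure_probability=0.1)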

Still, when I started getting really excited about self-replicating space
habitats twenty years ago, my hope too was, like Bryan, for full automation.
And I still think that makes a lot of sense in the long term (and James P.
Hogan's books, see my next post, have been a big inspiration to me in that
regard).

Anyway, I'll agree we can have both information about how things link
together and details about each one in the same repository. And I'd add, it
would be good to also include information about how to use artifacts, which
is a slightly different type of procedural information as well, like
information on how to take pictures well is independent of how you make a
camera. Al Globus had suggested that to me -- that much of what an
organization like NASA is concerned about is defining and following
procedures (from pressurizing a spacesuit to entering and exiting a "clean
room"). That procedural knowledge interacts with making artifacts, but it is
not quite the same thing. Although, this does relate to the "process" versus
"artifact" distinction Bryan referred to elsewhere (is everything just a
"process"?)

But as to emphasis, for me, right now I care more about the network
issues than I care about the individual manufacturing details. But it
is people who care about the individual details who would ideally be
adding the network-related metadata for each one. But if they don't,
perhaps someone like me could come in afterwards and add in the
information (for example, building on Appropedia's content).

--Paul Fernhout

Bryan Bishop

Nov 21, 2008, 1:15:54 PM
to openmanu...@googlegroups.com, kan...@gmail.com
On 11/21/08, Paul D. Fernhout <pdfer...@kurtz-fernhout.com> wrote:
> There is a lot to be said for successive approximation -- we create a system
> people can use to make things, and bit by bit there is more precision
> allowing dumb machines to help. Which suggest to me having a flexible data

Right, layers of details and so on. But the problem is that it's not
global successive approximation across your entire repository or data
set; it's localized, incremental approximation, which by definition
means you're not going to be able to standardize which types of
incremental new data for different aspects of the system land on the
"same level". It's like growing a messy bush: what somebody thinks is
most important about a capacitor goes at the second level of
turtles-all-the-way-down style abstraction, while somebody else, with
their own design elsewhere in the repository, decides that
capacitor-like effects do not belong in the second level of
functionality for their system. And since there's no shared
terminology in use, automated discovery of the differences is out, so
you end up with a confounded problem: different levels of abstraction
for different parts of the repository, which is fine overall, but with
people calling them different things and placing the interfaces at
inconsistent places in the trees.

I suppose you can have "dreamweaver" people that come in and make a
translation layer providing a consistent set of interfaces to all of
the available packages, kind of a role above that of the "package
maintainer" (somebody who sees a package through into the repository,
making sure it's well formatted, not idiotic, not going to blow up the
galaxies, that sort of thing). Different users would then opt to use
different translation layer packages, or perhaps just one overall, but
I suspect that sometimes users will want to use processes or packages
in the repository that are not included in an official translation
layer release, and so will make their own; so there would be multiples
of these "translation layer packages" floating around.

The user would have to manually link one "abstraction layer" in a
package to a certain type of abstraction layer according to his own
package. Perhaps the first-level routines he wants to access are the
overall standardized electrical outlets, rather than aesthetic
information or something, so that when he writes queries or programs
over his datasets, he can consistently assume that everything he knows
of from the database can be tied down via the routing information that
his translation layer has.

An obvious translation layer would be top-down: the "first level of
API functions for a certain repository entry" would be something like
the metadata; the second would be basic input/output graph details (or
maybe something in between) in a certain format representing black box
diagram information; and even further down would be different types of
mathematical API functions, which could either be packaged within that
same repository tar file or exist as references to other things in the
repository (symbolic links). Either way, the question of whether to
fold something into a package or give it its own package would be a
function of how many people would download the package just to get the
subcomponent, versus how annoying it would be to not have it
pre-packaged (but it would get automatically downloaded anyway, so I'm
just going in circles here).

I'm pretty bad at making up layers of abstraction for different
packages in a repository, but by doing it with translation layers, you
just add the various layers to the pot for your package, and then you
let others link them up and find their own patterns for what they want
to do, hoping that you included whatever information they need to
process. Something like the sketch below.
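
A minimal sketch of the idea (all names hypothetical): a translation
layer is just a mapping from each package's local layer names onto a
shared vocabulary, and queries go through the mapping:

# Each package exposes its own idiosyncratic layer names.
package_layers = {
    "cd_player": {"meta": "...", "io_graph": "..."},
    "capacitor": {"summary": "...", "ports": "..."},
}

# A translation layer maps local names onto one shared vocabulary.
translation = {
    "cd_player": {"metadata": "meta", "black_box": "io_graph"},
    "capacitor": {"metadata": "summary", "black_box": "ports"},
}

def lookup(package, standard_layer):
    """Fetch a standard abstraction layer regardless of local naming."""
    local_name = translation[package][standard_layer]
    return package_layers[package][local_name]

# lookup("capacitor", "black_box") and lookup("cd_player", "black_box")
# now answer the same kind of question despite inconsistent trees.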

> storage system at the heart of it, like one based on RDF triples or some
> other open-ended approach like Squeak Smalltalk objects. :-)

Paul, :-( you need to stop pushing RDF-vs.-XML as the argument and
instead focus on the data structure questions. I don't care whether
it's triples or quadruples; the question is which variables need to be
stored and what information, in what formats, needs to be captured
consistently. The implementation details of how to serialize and
deserialize those types of data structures are trivial, as evidenced
by the ridiculous number of different types of structs already in use
and the many different libraries and APIs for reading and writing
them. Whether they go over CGI protocols or backend TCP/IP connections
over different ports, these are all things that have been proven and
we know that they work; the meaning, content, or information of the
system is where we're stuck.

> Also, again, the design relates to your goals. Bryan seems more interested
> in full automation at the moment than I. For me, as I outlined in this paper

Meh.

> I feel a free form text area is good enough form my immediate goals of
> looking at manufacturing webs (admittedly, with pictures and data file
> attachments, which are important, and are not in that prototype).

But you know nothing of the structure of what's important about
different manufacturing processes; this knowledge would contribute to
an understanding of what's important for users to input about
manufacturing processes, the different types of data, the different
variables and factors, rather than just hoping that all users will
braindump some text that happens to be useful.

> But, what I also included there, and is essential to my interests, is
> formally defining what a "manufacturing recipe" uses, consumes, and
> produces, as well as other information about the level of effort required
> and failure probabilities (all needed for simulation and design at the level
> of abstraction I am focusing on).

Yeah, cross-referenced indices to other items in a database or data set.

> Anyway, I'll agree we can have both information about how things link
> together and details about each one in the same repository. And I'd add, it
> would be good to also include information about how to use artifacts, which
> is a slightly different type of procedural information as well, like
> information on how to take pictures well is independent of how you make a
> camera. Al Globus had suggested that to me -- that much of what an

I figure that "how to use this system, process, object thingy in the
repo" would be a function of expressed interfaces, either in the
metadata or in one of the upper layers of abstraction demanded as
input to the repository. In "function structures" you see nodes and
arcs like "input human energy" and "human energy => chemical energy".
When you splice graphs together and find isomorphisms, you are left
with dangling nodes and arcs, which is where you see interfaces for
external systems to play around with. This is also where you want to
look at GNU units or unum.sf.net for the supply of unit-based
information, in the same sense as dimensional analysis, for making
sure that things can fit together, as well as just exposing knowledge
of the existence of those interfaces and ports of the thingy you've
added to the repo. (So the thread's original title (thingiverse) must
be sticking with me. Ouch.)
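
A toy sketch of the dangling-arc idea (labels made up):

# A function structure as directed arcs: (source, flow, target).
arcs = {
    ("human", "human energy", "crank"),
    ("crank", "mechanical energy", "generator"),
    ("generator", "electrical energy", None),  # None = dangling arc
}

# Dangling arcs are the ports other systems can splice into.
ports = [(flow, src) for (src, flow, dst) in arcs if dst is None]
# -> [("electrical energy", "generator")]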

> organization like NASA is concerned about is defining and following
> procedures (from pressurizing a spacesuit to entering and exiting a "clean
> room"). That procedural knowledge interacts with making artifacts, but it is

Sure, part of the information in the repo should be the "function
structure" stuff, and then also information about ensuring that the
claimed deliverables are made. Re: your mention of failure modes, that
information is currently captured in the VOICED design repository in a
number of different ways, so you might want to go browse around it:
http://repository.designengineeringlab.org/

> not quite the same thing. Although, this does relate to the "process" versus
> "artifact" distinction Bryan referred to elsewhere (is everything just a
> "process"?)

All things are processes :-). There's actually an entire branch of
physics for this:
http://forum.wolframscience.com/showthread.php?s=&threadid=1539
http://www.twistet.com/

"Following my 2004 examination of graph-theoretic network evolution
under a uniquely simple rule and the consequent realisation that there
were serious scale constraints to what we could look at with computer
simulations, I've become more and more focused on producing at least a
descriptive account of the aforementioned (cubicly) expanding bubbles
of locally conservative space."

> But as to emphasis, for me, right now I care more about the network issues
> that I care about the individual manufacturing details. But, it is people

Huh? That's like saying "I care more about linked lists than I do
about the actual data structures that matter." I mean, linked lists
are simple things to write. I am confused.

> who care about the individual details who would ideally be adding for each
> one the network-related metadata. But, if they don't, perhaps someone like
> me could come in afterwards and add in the information (for example,
> building on Appropedia's content).

- Bryan

albanetcsr

Nov 23, 2008, 4:03:42 PM
to Open Manufacturing
The problem with structure (XML-like or database-type structure) is
that, ideally, you want your data to be normalized/hierarchical. When
you attempt to stuff arbitrary knowledge into such a structure, you
will inevitably violate these conditions, which will prompt you to
modify the structure. This cycle will continue as long as you're
adding new data, with complexity growing exponentially. Perhaps I'm
using the wrong terminology (math was a long time ago), but I hope you
understand what I am saying. The nature of knowledge (as we humans
represent it) is not structure in the aforementioned sense - it is
relation, and so I see why Paul is mentioning Semantic Web concepts.

I think I see where you're coming from - somewhere in your posts you
mentioned a genetic-algorithm-based simulation concerned with
self-replication of a system (I guess from your other reply the thing
you're interested in is the rate of closure for different scenarios).
It seems to me one could build a fairly simple structured model of
manufacturing for this specific simulation. Otherwise, have you
considered a neural net as storage for manufacturing knowledge?



Bryan Bishop

Nov 23, 2008, 4:36:24 PM
to openmanu...@googlegroups.com, kan...@gmail.com
On Sun, Nov 23, 2008 at 3:03 PM, albanetcsr <alban...@gmail.com> wrote:
> The problem with structure (XML-like or database-type structure) is
> that ideally, you want your data to be normalized/hierarchical. When
> you attempt to stuff arbitrary knowledge into such structure, you will
> inevitably violate these conditions, which will prompt you to modify
> the structure. This cycle will continue as long as you're adding new

Not really; I've outlined a method previously, in a recent email, to
arbitrarily hook up the different levels of abstraction of the
different entries in a repository. One person might think that the
quantum physics of an electron is more important, if there's an
electron in the repository for some reason, whereas another person
wants to access things on a bulk, relativistic level, so the different
abstraction levels can be mixed and matched like a meal plan. Hrm,
that's a bad analogy, but my point still stands: this doesn't require
modification of the overall standardization schema, because as long as
the metadata generally stays the same and pointers still exist to
similar types of data, things are going well.

> data, with complexity growing exponentially. Perhaps I'm using wrong
> terminology (math was a long time ago), but I hope you understand what
> I am saying. The nature of knowledge (as we humans represent it) is
> not structure in aforementioned sense - it is relation, and so I see
> why Paul is mentioning Semantic web concepts.

Paul's semantics stuff is still typical data structures -- linked
lists, webs of lists and cycles within graphs that self-reference each
other, etc. Typical comp sci stuff. There's no 'magic' in it just
because it has the word 'semantic' in its name.

> I think I see where you're coming from - somewhere in your posts you
> mentioned a genetic algorithm based simulation concerned with self-
> replication of a system (I guess from your other reply the thing

Well, no, not a genetic algorithm, but a full bruteforce search
algorithm in the case of automated computation of kinematic
self-replicating machines, sure.

> you're interested in is the rate of closure for different scenarios).

What is 'rate of closure'?

> It seems to me one could build a fairly simple structured model of
> manufacturing for this specific simulation. Otherwise, have you
> considered a neural net as a storage of manufacturing knowledge?

Sorry, neural networks already somewhat contain this information,
supposedly, and it's not working out too well. I would prefer exact
numbers, the same ones that are in the handbooks on manufacturing.

- Bryan

Paul D. Fernhout

Nov 23, 2008, 5:57:16 PM
to openmanu...@googlegroups.com
Bryan Bishop wrote:
> On Sun, Nov 23, 2008 at 3:03 PM, albanetcsr <alban...@gmail.com> wrote:
>> you're interested in is the rate of closure for different scenarios).
>
> What is 'rate of closure'?

Is it possible that albanetcsr could be referring to something that I wrote
and maybe attributed it to you?

From:
http://www.pdfernhout.net/princeton-graduate-school-plans.html
"13. Using all the above resources that will be created over the next five
years, I would like to go through several iterations of designing and
prototyping a self replicating habitat with constantly increasing levels of
closure. Closure is the amount of processed goods that must be imported into
the system for it to replicate. Examples of early bottlenecks will be
computers. I hope to have 95% closure by mass in the first prototype, 98% in
the next, 99% in the next, 99.5% in the next, 99.9%, and finally 100%. This
will be over the next ten years. The specific cost for this development will
be 10 million dollars. NASA, SSI, the UN, and various other sources may
contribute towards this. An essential first step will be a feasibility
analysis so I have the figures and documentation to convince others that
this project can really succeed. This first analysis will take six months
and cost $10,000."

--Paul Fernhout

Bryan Bishop

Nov 23, 2008, 6:06:59 PM
to openmanu...@googlegroups.com, kan...@gmail.com
On Sun, Nov 23, 2008 at 4:57 PM, Paul D. Fernhout wrote:
> Bryan Bishop wrote:
> > On Sun, Nov 23, 2008 at 3:03 PM, albanetcsr wrote:
> > > you're interested in is the rate of closure for different scenarios).
> >
> > What is 'rate of closure'?
>
> Is it possible that albanetcsr could be referring to something that I wrote
> and maybe attributed it to you?

There's also:
http://www.islandone.org/MMSG/aasm/AASM53.html#536
http://www.molecularassembler.com/KSRM/5.6.htm
which I'll quote from anyway just because, though the 'rate of
closure' is still foreign to me, even with the quote you produced from
your post-scarcity-princeton article.

================

Fundamental to the problem of designing self-replicating systems is
the issue of closure.

In its broadest sense, this issue reduces to the following question:
Does system function (e.g., factory output) equal or exceed system
structure (e.g., factory components or input needs)? If the answer is
negative, the system cannot independently fully replicate itself; if
positive, such replication may be possible.

Consider, for example, the problem of parts closure. Imagine that the
entire factory and all of its machines are broken down into their
component parts. If the original factory cannot fabricate every one of
these items, then parts closure does not exist and the system is not
fully self-replicating.

In an arbitrary system there are three basic requirements to achieve closure:
Matter closure - can the system manipulate matter in all ways
necessary for complete self-construction?
Energy closure - can the system generate sufficient energy and in the
proper format to power the processes of self-construction?
Information closure - can the system successfully command and control
all processes required for complete self-construction?

Partial closure results in a system which is only partially
self-replicating. Some vital matter, energy, or information must be
provided from the outside or the machine system will fail to
reproduce. For instance, various preliminary studies of the matter
closure problem in connection with the possibility of "bootstrapping"
in space manufacturing have concluded that 90-96% closure is
attainable in specific nonreplicating production applications (Bock,
1979; Miller and Smith, 1979; O'Neill et al., 1980). The 4-10% that
still must be supplied sometimes are called "vitamin parts." These
might include hard-to-manufacture but lightweight items such as
microelectronics components, ball bearings, precision instruments and
others which may not be cost-effective to produce via automation
off-Earth except in the longer term. To take another example, partial
information closure would imply that factory-directive control or
supervision is provided from the outside, perhaps (in the case of a
lunar facility) from Earth-based computers programmed with
human-supervised expert systems or from manned remote teleoperation
control stations on Earth or in low Earth orbit.

The fraction of total necessary resources that must be supplied by
some external agency has been dubbed the "Tukey Ratio" (Heer, 1980).
Originally intended simply as an informal measure of basic materials
closure, the most logical form of the Tukey Ratio is computed by
dividing the mass of the external supplies per unit time interval by
the total mass of all inputs necessary to achieve self-replication.
(This is actually the inverse of the original version of the ratio.)
In a fully self-replicating system with no external inputs, the Tukey
Ratio thus would be zero (0%).

It has been pointed out that if a system is "truly isolated in the
thermodynamic sense and also perhaps in a more absolute sense (no
exchange of information with the environment) then it cannot be
self-replicating without violating the laws of thermodynamics"
(Heer,1980). While this is true, it should be noted that a system
which achieves complete "closure" is not "closed" or "isolated" in the
classical sense. Materials, energy, and information still flow into
the system which is thermodynamically "open"; these flows are of
indigenous origin and may be managed autonomously by the SRS itself
without need for direct human intervention.

Closure theory. For replicating machine systems, complete closure is
theoretically quite plausible; no fundamental or logical
impossibilities have yet been identified. Indeed, in many areas
automata theory already provides relatively unambiguous conclusions.
For example, the theoretical capability of machines to perform
"universal computation" and "universal construction" can be
demonstrated with mathematical rigor (Turing, 1936; von Neumann, 1966;
see also sec. 5.2), so parts assembly closure is certainly
theoretically possible.

An approach to the problem of closure in real engineering-systems is
to begin with the issue of parts closure by asking the question: can a
set of machines produce all of its elements? If the manufacture of
each part requires, on average, the addition of >1 new parts to
produce it, then an infinite number of parts are required in the
initial system and complete closure cannot be achieved. On the other
hand, if the mean number of new parts per original part is <1, then
the design sequence converges to some finite ensemble of elements and
bounded replication becomes possible.

The central theoretical issue is: can a real machine system itself
produce and assemble all the kinds of parts of which it is comprised?
In our generalized terrestrial industrial economy manned by humans the
answer clearly is yes, since "the set of machines which make all other
machines is a subset of the set of all machines" (Freitas et
al.,1981). In space a few percent of total system mass could feasibly
be supplied from Earth-based manufacturers as "vitamin parts."
Alternatively, the system could be designed with components of very
limited complexity (Heer, 1980). The minimum size of a self-sufficient
"machine economy" remains unknown.

===
===

According to the NASA study final report [2]: "In actual practice, the
achievement of full closure will be a highly complicated, iterative
engineering design process.* Every factory system, subsystem,
component structure, and input requirement must be carefully matched
against known factory output capabilities. Any gaps in the
manufacturing flow must be filled by the introduction of additional
machines, whose own construction and operation may create new gaps
requiring the introduction of still more machines. The team developed
a simple iterative procedure for generating designs for engineering
systems which display complete closure. The procedure must be
cumulatively iterated, first to achieve closure starting from some
initial design, then again to eliminate overclosure to obtain an
optimized design. Each cycle is broken down into a succession of
subiterations which ensure qualitative, quantitative, and throughput
closure. In addition, each subiteration is further decomposed into
design cycles for each factory subsystem or component." A few
subsequent attempts to apply closure analysis have concentrated
largely on qualitative materials closure in machine replicator systems
while de-emphasizing quantitative and nonmaterials closure issues
[1128], or have considered closure issues only in the more limited
context of autocatalytic chemical networks [2367, 2686]. However, Suh
[1160] has presented a systematic approach to manufacturing system
design wherein a hierarchy of functional requirements and design
parameters can be evaluated, yielding a "functionality matrix" (Figure
3.61) that can be used to compare structures, components, or features
of a design with the functions they perform, with a view to achieving
closure.

* To get a sense of the complex iterative nature of closure
engineering, the reader should ponder the design process that he or
she might undertake in order to generate the following full-closure
self-referential "pangram" [2687] (a sentence using all 26 letters at
least once), written by Lee Sallows and reported by
Hofstadter [260]: "Only the fool would take trouble to verify that his
sentence was composed of ten a's, three b's, four c's, four d's,
forty-six e's, sixteen f's, four g's, thirteen h's, fifteen i's, two
k's, nine l's, four m's, twenty-five n's, twenty-four o's, five p's,
sixteen r's, forty-one s's, thirty-seven t's, ten u's, eight v's, four
x's, eleven y's, twenty-seven commas, twenty-three apostrophes, seven
hyphens, and, last but not least, a single !" Self-enumerating
sentences like these are also called "Sallowsgrams" [2687] and have
been generated in English, French, Dutch, and Japanese languages using
iterative computer programs.

Partial closure results in a system that is only partially
self-replicating. With partial closure, the machine system will fail
to self-replicate if some vital matter, energy, or information input
is not provided from the outside. For instance, various preliminary
studies [2688-2690] of the materials closure problem in connection
with the possibility of macroscale "bootstrapping" in space
manufacturing have concluded that 90-96% closure is attainable in
specific nonreplicating manufacturing applications. The 4-10% that
still must be supplied are sometimes called "vitamin parts." (The
classic example of self-replication without complete materials
closure: Humans self-reproduce but must take in vitamin C, whereas
most other self-reproducing vertebrates can make their own vitamin C
[2691].) In the case of macroscale replicators, vitamin parts might
include hard-to-manufacture but lightweight items such as
microelectronics components, ball bearings, precision instruments, and
other parts which might not be cost-effective to produce via
automation off-Earth except in the longer term. To take another
example, partial information closure might imply that factory control
or supervision is provided from the outside, perhaps (in the case of a
lunar facility) from Earth-based computers programmed with
human-supervised expert systems or from manned remote teleoperation
control stations located on Earth or in low Earth orbit.

Regarding closure engineering, Friedman [573] observes that "if 96%
closure can really be attained for the lunar solar cell example, it
would represent a factor of 25 less material that must be expensively
transported to the moon. However, ...a key factor ... which deserves
more emphasis [is] the ratio of the weight of a producing assembly to
the product assembly. For example, the many tons of microchip
manufacturing equipment required to produce a few ounces of microchips
makes this choice a poor one – at least early in the evolution – for
self-replication, thus making microelectronics the top of everyone's
list of 'vitamin parts'."

================
================
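
(The parts-closure convergence argument above is just a geometric
series; a quick sketch of the arithmetic, with made-up numbers:)

def total_parts(n0, r):
    """Total part count if each part needs, on average, r new parts.

    Finite only when r < 1; diverges otherwise (no closure possible).
    """
    return float("inf") if r >= 1 else n0 / (1 - r)

# total_parts(100, 0.5) -> 200.0: the design converges to a finite set.
# total_parts(100, 1.1) -> inf: complete parts closure is unreachable.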

albanetcsr

Nov 23, 2008, 6:09:12 PM
to Open Manufacturing
Right. I suppose the right word should be "closure ratio": the
fraction of total necessary resources that a self-replicating system
is capable of producing (vs. those that must be supplied externally).


Paul D. Fernhout

Nov 23, 2008, 6:16:34 PM
to openmanu...@googlegroups.com
Bryan-

Thanks for all your comments.

On the two specific points below (flexibility and metadata issues).

I see code as being part of the system (there is a "run jython code" option
in the current desktop prototype). This is to support the idea that you
use tools defined in software that modify the database. So, you might have
an annotation tool which could annotate content put in by another tool (say,
an "Appropedia import tool"). So, it is the tools that would define and
constrain a lot of this, even though the triple format is itself open-ended
and can also be modified by a low-level general-purpose browser. Presumably,
one could even make a tool which reads the equivalent of XML DTDs or other
XML schemas to restrict what the tool can do in terms of adding triples or
validating entries, like what albanetcsr mentions.
http://en.wikipedia.org/wiki/Document_Type_Definition
http://en.wikipedia.org/wiki/XML_schema
http://en.wikipedia.org/wiki/XML_Schema_(W3C)

On the second point, I must not be being clear. You can create a system that
knows how processes interrelate and interdepend without knowing much about
what goes on in each process. An example is how Debian apt-get understands
package interdependencies, but apt-get itself knows little about the
internal contents of packages beyond that they have files in them; beyond
that, the packages may all be implemented in different languages, contain
readme files written in various ways, and so on. So there is a definition of
interrelation information without much commonality of procedural operation.

By contrast, the procedural operation defining *bytecode* in any Java jar
file all runs the same on any JVM it is compatible with, but you can mix any
combination of incompatible jar files, so in that sense, jar files represent
a definition of machine interpretable procedural operations without any
commonality of interrelation information.

What I'm saying is that right now, I'm more interested in the commonality of
interrelation information than machine interpretability of procedural
operation information. But, if you just want to skdb-get your files and have
the desktop fab next to your computer make something, you probably care
about both (and perhaps even more about machine interpretation, since maybe
you can guess right about dependencies for some simple things.)

--Paul Fernhout

Bryan Bishop wrote:
> On 11/21/08, Paul D. Fernhout <pdfer...@kurtz-fernhout.com> wrote:
>> There is a lot to be said for successive approximation -- we create a system
>> people can use to make things, and bit by bit there is more precision
>> allowing dumb machines to help. Which suggest to me having a flexible data
>
> Right, layers of details and so on. But the problem is that it's not
> global successive approximation across your entire repository or data
> set, but rather it's localized incremental approximations, which by
> definition means that you're not going to be able to standardize what
> types of incremental new data for different aspects of the system are
> going to be on the 'same level'.
>

> [big snip]

Bryan Bishop

Nov 23, 2008, 6:44:02 PM
to openmanu...@googlegroups.com, kan...@gmail.com
On Sun, Nov 23, 2008 at 5:16 PM, Paul D. Fernhout wrote:
> What I'm saying is that right now, I'm more interested in the commonality of
> interrelation information than machine interpretability of procedural
> operation information. But, if you just want to skdb-get your files and have
> the desktop fab next to your computer make something, you probably care
> about both (and perhaps even more about machine interpretation, since maybe
> you can guess right about dependencies for some simple things.)

I attribute this more to a misunderstanding of the difference (or lack
thereof, rather) between objects and processes. For a given "product"
or "object" in the system, there are instructions/dependencies that
make up its specification. When you "tug" on the chain of
dependencies, you unravel a specific set of instructions and steps
that represent the manufacture of the thingy, or the physical
expression of the object, either physically or virtually for
analysis/validation (see the sketch below). This still sounds no
different from what you want to do... even though I might have 3D CAD
files in the repository, that's just specific information for
parameterizing the processes in the repository. This is still
interrelation information, just serialized and pulled out of the db.
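
(A toy sketch of the "tug", with made-up recipe names:)

recipes = {
    "bracket": ["rough casting", "drill press"],
    "rough casting": ["aluminum ingot"],
    "drill press": [],
    "aluminum ingot": [],
}

def tug(thing, ordered=None):
    """Unravel the dependency chain into an ordered build sequence."""
    ordered = ordered if ordered is not None else []
    for dep in recipes[thing]:
        tug(dep, ordered)
    if thing not in ordered:
        ordered.append(thing)
    return ordered

# tug("bracket")
# -> ['aluminum ingot', 'rough casting', 'drill press', 'bracket']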

Vitaly Mankevich

Nov 23, 2008, 6:49:54 PM
to openmanu...@googlegroups.com
> There's also:
> http://www.islandone.org/MMSG/aasm/AASM53.html#536
> http://www.molecularassembler.com/KSRM/5.6.htm
> which I'll quote from anyway just because, though the 'rate of
> closure' is still foreign to me, even with the quote you produced from
> your post-scarcity-princeton article.

Bryan, ok, I see that it's "closure", if that's so important ;)
English is not my native language, so sometimes it seems natural to
attach a qualifier to a word used without one, and it doesn't make
things sound awkward, especially when the concept is new.

You said: "Not really, I've outlined a method previously in a recent
email to arbitrarily hook up the different levels of abstraction of
the different entries in a repository." I may have missed it - do you
have a link?

Paul D. Fernhout

Nov 23, 2008, 7:11:27 PM
to openmanu...@googlegroups.com
Yeah, like vitamins. :-)

You might imagine a self-replicating space habitat might be able to do all
its own mining and make all its own physical structure, except maybe it
might still need to import, say, a few hundred pounds of IC chips from some
larger specialized habitat (or even Earth :-). So, is such a habitat 99.999%
self-replicating by mass, or is it not self-replicating at all?

Actually, inspired by bacterial colonies and biofilms and similar systems,
http://en.wikipedia.org/wiki/Biofilm
http://en.wikipedia.org/wiki/Colony_(biology)
I've come to think in terms of "self-replicating space habitat *networks*" :-)

--Paul Fernhout

Bryan Bishop

Nov 23, 2008, 7:44:55 PM
to openmanu...@googlegroups.com, kan...@gmail.com
On Sun, Nov 23, 2008 at 5:49 PM, Vitaly Mankevich wrote:
> Bryan, ok, I see that it's "closure", if that's so important ;)
> English is not my native language, so sometimes it seems natural to

I don't have a magic detector to tell me when somebody doesn't speak
English natively :-) so misunderstandings are bound to happen.

> You said: "Not really, I've outlined a method previously in a recent
> email to arbitrarily hook up the different levels of abstraction of
> the different entries in a repository." I may have missed it - do you
> have a link?

http://groups.google.com/group/openmanufacturing/msg/412fbe3fdf68765b

Bryan Bishop

Nov 23, 2008, 7:47:22 PM
to openmanu...@googlegroups.com, kan...@gmail.com
On Sun, Nov 23, 2008 at 6:11 PM, Paul D. Fernhout wrote:
> You might imagine a self-replicating space habitat might be able to do all
> its own mining and make all its own physical structure, except maybe it
> might still need to import, say, a few hundred pounds of IC chips from some
> larger specialized habitat (or even Earth :-). So, is such a habitat 99.999%
> self-replicating by mass, or is it not self-replicating at all?

Based on the way I treat the RepRap community, I'd have to put my
foot down firmly and say that personally it's binary for me. The
ratio isn't all that important, since even 99.99999999-ad-infinitum
(which really, truthfully, equals 100%) doesn't mean 100%
(self-replication), thus it doesn't apply.

See: "How can 0.999999999.. = 1?"
http://www.purplemath.com/modules/howcan1.htm
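
(In symbols, it's just a geometric series:)

0.\overline{9} = \sum_{k=1}^{\infty} \frac{9}{10^k} = \frac{9/10}{1 - 1/10} = 1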
