This could be solved in numerous ways, but solving it wouldn't negate
the other problems that I think you have nailed...
> People see BitTorrent as the means they get.... 'stuff'. Are they
> comfortable bringing that activity mainstream? Anonymity is a big deal in
> this space, which can be a bit of a pain for a social platform, depending on
> how you look at it. I can't see people getting 'stuff' while logged in as
> themselves.
Exactly. Like it or not, many bittorrent users have been using the
protocol/application to share files. Chances are they won't want that
activity tied to their identity.
> Discovery/Search - for exactly the same reason you need pirate bay or
> equivalent, you need a central server to locate people, etc. - unless they
> have some pretty funky new distributed tech.
I think this part could be done by extending bittorrent client
software so that it would "serve" the identity of the user. This would
need a defined distributed architecture, but a "finger"-type
protocol could be implemented in the BitTorrent protocol.
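To make that idea concrete, here is a minimal, purely hypothetical sketch of what a finger-style identity lookup served by a client extension might look like. The class name, record fields, and values are all my own invention for illustration, not part of any BitTorrent specification.

```python
import json

# Hypothetical sketch: a peer "serves" its identity the way the old
# finger protocol served user info. Fields are illustrative only.

class IdentityPeer:
    def __init__(self, handle, pubkey, endpoints):
        self.record = {
            "handle": handle,        # human-readable name
            "pubkey": pubkey,        # lets friends verify signed posts
            "endpoints": endpoints,  # where this peer's client listens
        }

    def finger(self, query):
        """Answer a finger-style query: return the identity record
        as JSON if the handle matches, else None."""
        if query == self.record["handle"]:
            return json.dumps(self.record)
        return None

peer = IdentityPeer("alice", "ed25519:AAAA...", ["198.51.100.7:6881"])
print(peer.finger("alice"))   # serves the record
print(peer.finger("bob"))     # None: unknown handle
```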
> Synchronisation between nodes - e.g. my desktop/laptop/tablet and within the
> social graph. This is interesting in general, but gets harder when the data
> is volatile (which it generally is not with BitTorrent in its current form).
> e.g. I update my status - how does that get reliably to everyone... in
> near-real-time? Distributed twitter.... tough.
Not that it is perfect, but diaspora relies on Reddis to distribute
data about updates (as far as I know - please jump in and correct me if I
am wrong there). Erlang (the programming language) also has some
long-standing examples of how this can work with Mnesia database. No
matter how it's done, work will be needed to improve reliability and
performance, but this is now within our grasp (even in applications
that run over HTTP).
> Authentication - not clear to me how this is managed.
>
> Essentially the same things Diaspora has to worry about, plus not being
> perceived as 'social'. The user base is a big deal, but it is the weakest
> form of user base because they are almost entirely anonymous.
This is really the biggest obstacle, and it's not a technical problem.
> One slight
> upside is that this user base probably has a higher than usual proportion of
> 'early adopter' psychology.
> My take is that a logical network topology based on the social graph is a
> good thing, and could improve the viability of a P2P physical topology, but
> it needs to start with a decent model of the social graph, and I'm still not
> seeing that - it always seems to be tacked on to some other property that
> people are trying to leverage for existing network effects.
--
Sam Rose
Future Forward Institute and Forward Foundation
Tel:+1(517) 639-1552
Cel: +1-(517)-974-6451
skype: samuelrose
email: samue...@gmail.com
http://futureforwardinstitute.com
http://forwardfound.org
http://hollymeadcapital.com
http://p2pfoundation.net
http://socialmediaclassroom.com
"The universe is not required to be in perfect harmony with human
ambition." - Carl Sagan
This is generally a great idea. We've created an architecture at
https://docs.google.com/present/view?id=dfxhxcx8_67c87nc9hp that we
believe would allow people to write and run applications as we know
them, but have them work in a distributed fashion. Still, I think it
could be useful to start from the "user story" application perspective
to see what people are thinking of from that end.
This reminds me a little of Peerscape, which was implemented as a web
browser add-on. Unfortunately, the project seems to have petered out.
--
The Doctor [412/724/301/703]
http://drwho.virtadpt.net/
"I am everywhere."
Seeing as how it's implemented on top of the original BitTorrent
client, it seems as if a user's data is served off of their running
client and cached locally by people who are connected to them. As for
how it's secured, I don't know. I'd think that access to a particular
user's information would be restricted by whether or not the user has
made something public, private (friends-only), or private
(user-defined group only). ACLs on the user's machine would, of
course, apply to the cache files maintained by the client.
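The access rule guessed at above can be sketched as a simple default-deny check. This is an illustrative toy, not Peerscape's or BitTorrent's actual mechanism; all names and data are made up.

```python
# Illustrative sketch of the access rule described above: a cached item
# is visible if it is public, if the requester is a friend, or if the
# requester belongs to the item's user-defined group.

def can_read(item, requester, friends, groups):
    """item: dict with 'visibility' and, for group items, 'group'."""
    vis = item["visibility"]
    if vis == "public":
        return True
    if vis == "friends":
        return requester in friends
    if vis == "group":
        return requester in groups.get(item["group"], set())
    return False  # default-deny anything unrecognized

friends = {"bob", "carol"}
groups = {"makers": {"dave", "carol"}}
print(can_read({"visibility": "public"}, "mallory", friends, groups))               # True
print(can_read({"visibility": "friends"}, "bob", friends, groups))                  # True
print(can_read({"visibility": "group", "group": "makers"}, "bob", friends, groups)) # False
```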
> People see BitTorrent as the means they get.... 'stuff'. Are they
> comfortable bringing that activity mainstream? Anonymity is a big deal in
They aren't now?
> this space, which can be a bit of a pain for a social platform, depending on
> how you look at it. I can't see people getting 'stuff' while logged in as
> themselves.
They do it all the time with private and registration-only trackers.
And hybrid BitTorrent/social networking sites like hexagon.cc, for
that matter.
> Discovery/Search - for exactly the same reason you need pirate bay or
> equivalent, you need a central server to locate people, etc. - unless they
> have some pretty funky new distributed tech.
It's possible that they're using DHT search to find users, sort of
like the method used by Gnutella to find files (either searching
actively or searching the cached indices of peers). I'll have to play
around with it to find out for sure.
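For readers unfamiliar with DHT lookups, here is a toy in-memory model of the Kademlia-style idea: hash the user handle into the same keyspace as node IDs, and the nodes XOR-closest to that key are responsible for the record. This is an assumption-laden sketch, not the real BitTorrent DHT wire protocol.

```python
import hashlib

# Toy model: map user handles to responsible peers via XOR distance,
# Kademlia-style. All peers computing the same function agree on the
# storers, which is what makes lookups work without a central index.

def node_id(name):
    return int.from_bytes(hashlib.sha1(name.encode()).digest(), "big")

def closest_nodes(key, nodes, k=2):
    """Return the k node IDs XOR-closest to the key."""
    return sorted(nodes, key=lambda n: n ^ key)[:k]

nodes = [node_id(f"peer-{i}") for i in range(8)]
key = node_id("user:alice")            # hash the handle into the keyspace
responsible = closest_nodes(key, nodes)
print(len(responsible))                # 2 peers hold alice's record
```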
> Synchronisation between nodes - e.g. my desktop/laptop/tablet and within the
> social graph. This is interesting in general, but gets harder when the data
> is volatile (which it generally is not with BitTorrent in its current form).
> e.g. I update my status - how does that get reliably to everyone... in
> near-real-time? Distributed twitter.... tough.
Tough but not impossible. I think it'll be worth experimenting with
to see how they did it. It would also be worth playing with the
distributed microblogging implementations out there (like Plexus and
rstat.us) to see how well they work.
It might also be worth taking a look at Fossil to see how well its
distributed wiki/bug tracker functionality works.
> Authentication - not clear to me how this is managed.
Nor I, from the article.
> what's the relationship between bittorrent as social network &
> municipal broadband networks/mesh nets?
We're not sure how a BitTorrent-based socnet will work on a mesh
network. It's an experiment we have on the table for Byzantium in the
future, and we'll post our results when we try it.
As for municipal broadband, it appears that those battles have yet to
be really won, given the pushback that the big ISPs are putting on the
municipal projects. Off the cuff, my concern is that the ISPs will
treat BT-based socnets like they treat BT, i.e., with suspicion and
possibly throttling. That could hurt adoption.
- v
I guess I'll use my free dumb question for newcomers card here. What is Reddis? Any project links?
I tried googling various combinations of Diaspora and Reddis to no avail. Too many false matches on the Indian word Reddis and Indian Diaspora.
- Curtis
> Not that it is perfect, but diaspora relies on Reddis to distribute
> data about updates (as far as I know. Please jump in correct me if I
> am wrong there).
--
In theory, there is no difference between theory and practice.
In<fnord> practice, there is. .... Yogi Berra
Thanks Miles.
--
Sam Rose
On May 13, 2011, at 3:43 PM, Bryce Lynch wrote:
> On Thu, May 12, 2011 at 22:04, Colin Hawkett <haw...@gmail.com> wrote:
>> Article looking at the possibility of BitTorrent becoming distributed social
>> network, more-or-less along the lines of Diaspora, but with an
>> already-installed user base in the hundreds of millions.
>
> This reminds me a little of Peerscape, which was implemented as a web
> browser add-on. Unfortunately, the project seems to have petered out.
Peerscape sounds like a great idea. That is how I would imagine the distributed net to function.
"Under the hood, your computer stores copies of your data, the data of your friends and the groups you have joined, and some data about, e.g., friends of friends. It also caches copies of other data that you navigate to. Computers that store the same data establish connections among themselves to keep it in sync.
It's all about managing information instead of servers. Peerscape uses public-key cryptographic signatures to encode the relationships among people, groups, and their content."
No need for servers, and the data can be locally stored and redundantly kept available for whoever would like to access it.
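A minimal sketch of the sync idea in that quote: peers holding the same items can tell whether their copies agree by comparing content hashes, and only transfer data when the hashes differ. This is my own toy illustration, not Peerscape's implementation.

```python
import hashlib

# Peers compare content hashes and pull only what differs or is missing.

def digest(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def sync(local_store, remote_store):
    """Pull any item whose hash differs from (or is absent in) the
    local store. Returns the keys that were transferred."""
    pulled = []
    for key, data in remote_store.items():
        if key not in local_store or digest(local_store[key]) != digest(data):
            local_store[key] = data
            pulled.append(key)
    return pulled

mine = {"status:1": b"hello"}
theirs = {"status:1": b"hello", "status:2": b"new post"}
print(sync(mine, theirs))   # only the item we lacked is transferred
```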
Sepp
I apologize if I'm saying anything that's been covered here before, I only read back a few months in the archives so far.
Now that I see it's a key-value store, NOSQL variant, I can already see huge scalability problems with that approach. It hints that they do not have the optimal architecture. Though, I'll have to dig into it further to make sure.
Social networking software performance is a systems architecture problem at heart. There isn't any amount of Moore's Law goodness that can compensate for the sins of a bad architecture.
Social network servers have a data distribution problem not a database problem. Many good lessons come from finance for building trading and exchange data feeds. People pay big money to make sure their data comes in reliably with extremely low latency. The scalability issues have been solved. You know how quickly the volumes blow up when all hell breaks loose in the markets with the high-frequency-trading algorithms running full tilt, right?
You need to use some of the same kinds of tricks in a distributed system to get scalability, but that's not possible to do in a 100% pure peer-to-peer architecture.
There is a reason that Twitter performance sucks despite the fact that some supposedly very experienced people must have been working on it for years. They're optimizing the parts without changing the fundamental architecture to a data distribution architecture. The signs of some wrong architectural decision are written all over my machine every few days when I see Twitter not respond to mundane everyday requests, not throttle back to increase latency to reduce server load, or post a Twitter is Busy message of some sort or another.
Twitter acts like a highway that is tuned to be close to maximum capacity; just before the drop in efficiency hits and you start seeing red brake lights. One little thing out of place on the side of the road and traffic comes to a standstill.
This needs to be even more carefully considered for a true distributed peer-to-peer architecture to work. Efficiency is going to dictate that some peers are going to serve different distribution roles in the network. How and why they do this is critical to acceptance of any system.
Where can I find the best historical thinking on these issues for this list? Any good blog posts? Pointers to web sites or design documents? Who are the people here who consider themselves system-level architects for social networking?
- Curtis
Reddis is a nosql data store. Hmmm. Wow. It's hard to search for. Can't
seem to find a project site for it at the moment.
> I tried googling various combinations of Diaspora and Reddis to no avail. Too many false matches on the Indian word Reddis and Indian Diaspora.
Yeah. They need some SEO love it looks like.
> - Curtis
partially because it's redis (one "d")
I've been hacking on Diaspora for a few months, though I'm far from
understanding all of the technology that has gone into it. Here are my
thoughts.
On 05/13/2011 02:20 PM, Curtis Faith wrote:
> Thank you, I figured it was something like that.
>
> I apologize if I'm saying anything that's been covered here before, I
> only read back a few months in the archives so far.
>
> Now that I see it's a key-value store, NOSQL variant, I can already see
> huge scalability problems with that approach. It hints that they do not
> have the optimal architecture. Though, I'll have to dig into it further
> to make sure.
I agree that Diaspora has not found the optimal architecture for
p2psn, but I do think that they've yet to be topped. I'm hoping that
MondoNet or Project Byzantium will have something to say here.
>
> Social networking software performance is a systems architecture
> problem at heart. There isn't any amount of Moore's Law goodness that
> can compensate for the sins of a bad architecture.
This is where I've found my head at lately. What if we get a /64 block
of v6 space from ARIN and put it to use building a non-web
application that uses direct addressing for each peer?
solve a lot of the problems introduced in p2p systems via NAT and
other subnetting practices. Perhaps the web-based, server-based,
db-based approach is too encumbered by HTTP packaging. Perhaps we
need to think outside of the browser. Stand-alone software could open
up its own port, instead of having to package and push everything
through 80.
>
> Social network servers have a data distribution problem not a database
> problem. Many good lessons come from finance for building trading and
> exchange data feeds. People pay big money to make sure their data comes
> in reliably with extremely low latency. The scalability issues have been
> solved. You know how quickly the volumes blow up when all hell breaks
> loose in the markets with the high-frequency-trading algorithms running
> full tilt, right?
>
> You need to use some of the same kinds of tricks in a distributed
> system to get scalability, but that's not possible to do in a 100% pure
> peer-to-peer architecture.
What are the effects when we don't view the architecture as
single-tier peering, but as multi-tiered and federated? That is, a
peer-to-peer network on a local level connected with other
peer-to-peer networks on a regional level, and so on...
We can introduce incredibly low latency in local-scale interaction at
the cost of introducing some latency in global-scale ones.
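A toy model of that trade-off, with a made-up topology: same-region messages take one direct hop, while cross-region messages are relayed through regional hubs. The region names and hop labels are illustrative.

```python
# Multi-tier federation sketch: cheap local delivery, a few extra hops
# (and thus some added latency) for global delivery via regional hubs.

def route(src_region, dst_region):
    """Return the hop sequence a message would take."""
    if src_region == dst_region:
        return ["peer->peer"]                 # direct local delivery
    return ["peer->local-hub",                # up to our region's hub
            "local-hub->remote-hub",          # across the federation
            "remote-hub->peer"]               # down to the recipient

print(len(route("us-east", "us-east")))   # 1 hop locally
print(len(route("us-east", "eu-west")))   # 3 hops across regions
```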
>
> There is a reason that Twitter performance sucks despite the fact that
> some supposedly very experienced people must have been working on it for
> years. They're optimizing the parts without changing the fundamental
> architecture to a data distribution architecture. The signs of some
> wrong architectural decision are written all over my machine every few
> days when I see Twitter not respond to mundane everyday requests, not
> throttle back to increase latency to reduce server load, or post a
> Twitter is Busy message of some sort or another.
>
> Twitter acts like a highway that is tuned to be close to maximum
> capacity; just before the drop in efficiency hits and you start seeing
> red brake lights. One little thing out of place on the side of the road
> and traffic comes to a standstill.
>
> This needs to be even more carefully considered for a true distributed
> peer-to-peer architecture to work. Efficiency is going to dictate that
> some peers are going to serve different distribution roles in the
> network. How and why they do this is critical to acceptance of any system.
I couldn't agree more. Every system has its trade-offs. Let's consider
the relationship between efficiency, latency, reliability,
distribution, and federation. I'm trying to get the Free Network
Foundation going to have just this type of conversation. Are people
here interested in participating?
>
> Where can I find the best historical thinking on these issues for this
> list? Any good blog posts? Pointers to web sites or design documents?
> Who are the people here who consider themselves system-level architects
> for social networking?
I don't consider myself a system-level architect. Not yet at least.
But I am keenly interested in having this conversation, particularly
with you, Curtis. I've got a lot to learn. We all have. Charles knows
a thing or two about systems architecture, as do many others. Still,
maybe what we need is new ideas. I'm not sure that there really is a
historical conversation to point to, but I think that's exactly why we
should be having the conversation.
>
> - Curtis
>
I REALLY like this idea. I wonder if anyone on list has connections
to people in the First Nations community. I just really think this is
an idea worth exploring.
imw
Oops, Miles is right. :)
--
Sam Rose
What makes you think that key-value does not scale?
>
> Social networking software performance is a systems architecture problem at heart. There isn't any amount of Moore's Law goodness that can compensate for the sins of a bad architecture.
>
> Social network servers have a data distribution problem not a database problem. Many good lessons come from finance for building trading and exchange data feeds. People pay big money to make sure their data comes in reliably with extremely low latency. The scalability issues have been solved. You know how quickly the volumes blow up when all hell breaks loose in the markets with the high-frequency-trading algorithms running full tilt, right?
>
> You need to use some of the same kinds of tricks in a distributed system to get scalability, but that's not possible to do in a 100% pure peer-to-peer architecture.
>
> There is a reason that Twitter performance sucks despite the fact that some supposedly very experienced people must have been working on it for years. They're optimizing the parts without changing the fundamental architecture to a data distribution architecture. The signs of some wrong architectural decision are written all over my machine every few days when I see Twitter not respond to mundane everyday requests, not throttle back to increase latency to reduce server load, or post a Twitter is Busy message of some sort or another.
>
> Twitter acts like a highway that is tuned to be close to maximum capacity; just before the drop in efficiency hits and you start seeing red brake lights. One little thing out of place on the side of the road and traffic comes to a standstill.
>
> This needs to be even more carefully considered for a true distributed peer-to-peer architecture to work. Efficiency is going to dictate that some peers are going to serve different distribution roles in the network. How and why they do this is critical to acceptance of any system.
>
> Where can I find the best historical thinking on these issues for this list? Any good blog posts? Pointers to web sites or design documents? Who are the people here who consider themselves system-level architects for social networking?
Look no further:
http://www.w3.org/DesignIssues/
I believe that diaspora is employing
http://www.amqp.org/confluence/display/AMQP/Advanced+Message+Queuing+Protocol
for addressing this. My understanding is that AMQP is capable of
meeting the challenge.
Queuing systems fall over under high load. Multicast/data fountains are
what a lot of people are using now. Moves the complexity into the
network instead of the servers. Push the ugly around and all that. I
like the multicast approach and I've got a lot of experience with it.
Including building a multicast to unicast converter and back again.
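To show the shape of a multicast-to-unicast converter without any network code, here is an in-memory model: one message arrives on a multicast-style channel and is fanned out as individual unicast sends to subscribers who can't join the group. Class and field names are illustrative; a real converter sits on the network, not in a dict.

```python
# Toy multicast-to-unicast fan-out: the converter replicates one inbound
# message into N per-subscriber sends.

class Converter:
    def __init__(self):
        self.unicast_subscribers = []

    def subscribe(self, address):
        self.unicast_subscribers.append(address)

    def on_multicast(self, message):
        """Replicate one inbound message into N unicast sends."""
        return [(addr, message) for addr in self.unicast_subscribers]

conv = Converter()
for addr in ("10.0.0.5", "10.0.0.6", "10.0.0.7"):
    conv.subscribe(addr)
sends = conv.on_multicast("market tick 42")
print(len(sends))   # one inbound message became three unicast sends
```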
Hear, hear!
> Social network servers have a data distribution problem not a database problem. Many good lessons come from finance for building trading and exchange data feeds. People pay big money to make sure their data comes in reliably with extremely low latency. The scalability issues have been solved. You know how quickly the volumes blow up when all hell breaks loose in the markets with the high-frequency-trading algorithms running full tilt, right?
>
Of course near-real-time financial trading has very different
performance requirements than group messaging (which is really what
social networking is all about).
> You need to use some of the same kinds of tricks in a distributed system to get scalability, but that's not possible to do in a 100% pure peer-to-peer architecture.
>
Depends on how you define "pure peer-to-peer." USENET (NNTP) is a
pretty good example of a highly scalable multiply connected mesh network.
(Yes, it now has a hierarchy, but it really doesn't have to.) Freenet,
Gnunet, and Gnutella do a pretty good job of scaling.
General message queuing systems still have queue bottlenecks at the network, disk I/O, and CPU level that can be easily exceeded with the wrong overall system architecture.
The key to scalability is to not have any bottlenecks that can't be subdivided and load-balanced on the fly without taking any nodes offline. What matters here is which nodes are talking to which other nodes, and what kind of information they are sending to each other, not the specific implementation of the nodes themselves. That's what I meant by optimizing the parts without changing the fundamental architecture.
Most of the architectural issues associated with the protocols are irrelevant to scalability. You could store data in in-memory lists for high performance, but if the amount of data coming into or out of a node exceeds the network or CPU performance capability, that node is effectively down. If this happens enough, fail-overs merely cascade to other previously under-capacity nodes taking the whole system down to a crawl as users everywhere halt.
So you need to have a way of making sure that this doesn't happen. For example, a mechanism so that nodes start offloading work and user connection responsibility to underutilized server nodes long before any given node reaches capacity. You can't have every peer talk to the busy nodes or when someone with 100,000 followers tweets you'll use up too many resources (individual TCP packets and the associated CPU time to process them). A good architecture will do what Charles describes, it will fan out and distribute the work from updates and change the connection topology based on the characteristics of the communication requests themselves.
Communication data should roll up geographically or topically and then fan out on the same basis. If you have the luxury of collocating a group of servers (for instance, if you are a centralized company, like say Twitter or Facebook or a stock market exchange data provider) then you can use IP multicasting to make the network communications very efficient. In a federation of peer servers located all over the world, you won't have this option.
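The fan-out point above (a node with 100,000 followers exhausting itself on direct sends) can be made concrete with a little arithmetic: relaying through a tree with bounded fan-out k costs the origin only k sends and reaches everyone in roughly log_k(N) relay hops. The numbers here are illustrative.

```python
import math

# Direct push: origin opens N connections. Bounded fan-out tree: origin
# sends k copies, relays repeat, and the update reaches all N followers
# in ~log_k(N) hops.

def direct_cost(followers):
    return followers                        # sends from the origin alone

def tree_depth(followers, fanout):
    return math.ceil(math.log(followers, fanout))

N, k = 100_000, 50
print(direct_cost(N))      # 100000 sends from one node
print(tree_depth(N, k))    # everyone reached in ~3 relay hops
```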
Not all of these are axiomatic:
> Scalability is all about being able to decompose work into parallelizable chunks and minimizing the work that needs to be done on any given node.
>
No. It's about being able to decompose and parallelize as necessary.
Minimizing load on given nodes is not necessary.
> The key to scalability is to not have any bottlenecks that can't be subdivided and load-balanced on the fly without taking any nodes offline.
>
It's also nice to be tolerant of offline nodes - failures, upgrades,
flakey connectivity associated with mobile nodes (we are talking mobile
mesh networks here), etc. Scalability and data availability are
somewhat tied at the hip.
> Most of the architectural issues associated with the protocols are irrelevant to scalability. You could store data in in-memory lists for high performance, but if the amount of data coming into or out of a node exceeds the network or CPU performance capability, that node is effectively down. If this happens enough, fail-overs merely cascade to other previously under-capacity nodes taking the whole system down to a crawl as users everywhere halt.
>
Network architecture is a central consideration, and that is driven by
protocol and data architecture. Building around a multi-cast or
broadcast protocol leads to very different architectures and node
designs than peer-wise protocols.
Miles Fidelman
Why not? You could have multicast on a regional basis and then
unicast-multicast conversion across the WAN. Or build multi cast into
the WAN. MPLS L3VPN and all that jazz. 'Tis how I built my last
distributed application. *shrugs* I just hack on stuff and get paid good
money for it. I'm not a wizard architect or anything. Just a builder
that sees problems and builds solutions and solves problems as they come
up.
AMQP supports multicast.
I haven't been following this super closely, but unless I missed something big, IPv4 is still the only standard you can count on, and multicasting is only supported by certain routers. One can certainly build an abstraction for the architecture that uses multicasting when available; I just don't see how that will be very often in most circumstances. Absent a central authority with access to known hardware or hardware that supports known protocols end-to-end, you need to allow for at least some segments of the communication coming through unicast.
Perhaps we are envisioning different use cases for this. I'm thinking group conferencing with groups located in different parts of the world, not in the same building. I'm also thinking about Twitter-like mixed one-to-many streams of low-bandwidth communications. Unless I'm missing something important (and it wouldn't be the first time this happened), multicasting only provides a benefit if more than one destination (router or server node) is connected to any given multicast-capable router (like a Cisco router supporting MPLS L3VPN, for instance).
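The rough arithmetic behind that point: per published message, unicast puts one copy per receiver on the backbone, while multicast puts one copy per participating router, so the win only appears when receivers cluster behind routers. The example counts below are made up.

```python
# Per-message backbone copies: unicast pays per receiver, multicast pays
# per router (the router replicates locally).

def unicast_sends(receivers_per_router):
    return sum(receivers_per_router)

def multicast_sends(receivers_per_router):
    return len(receivers_per_router)

spread_out = [1, 1, 1, 1]    # one receiver behind each router
clustered = [50, 30, 20]     # hackerspace-style clusters

print(unicast_sends(spread_out), multicast_sends(spread_out))  # 4 vs 4: no win
print(unicast_sends(clustered), multicast_sends(clustered))    # 100 vs 3: big win
```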
Your use of the term WAN is perhaps telling. It is common when discussing corporate I.T. infrastructure or private networks. For most individuals, there is just: the internet. And you can't count on anything existing between you and any other users. No leased-line or ATM or VPN connections, or any particular SLA.
The architecture for the next net needs to work with that as a baseline. Any optimizations for special cases should be built on top of that.
I do consider the case of coworking spaces, hacker spaces, maker spaces, etc. to be a likely growing target for this technology, and in these spaces it seems reasonable to suppose that capable routers will exist if there is a compelling reason for their purchase. It also seems reasonable to conclude that many people might be on the same video conference on a regular basis, connecting with other hackerspaces, etc. So if there are benefits to be gained through optimizing for multicast within a LAN, I expect these groups to buy the hardware to keep their LAN efficient in that event.
In the end, I think we'd both propose the same solution, as you stated: "multicast on a regional basis and then unicast-multicast conversion across the WAN." We just differ in our opinion of how often the multicast feature will be used in practice.
Peace
Curtis
Actually, I sure don't. I originally thought the focus was on
distributed, decentralized INFRASTRUCTURE - e.g., wireless mesh
networks that are not dependent on carriers, and are not subject to
disruption or centralized control.
The focus seems to have shifted to decentralized applications - again,
less vulnerable to control and disruption, but dependent on the current
Internet infrastructure.
These are two very different problems, that require very different
solutions.
I'm concerned that this conversation addresses how to build rather than what to build. We have a general sense of the problem we're trying to solve here, but I'd like to see specifics - which is why I mentioned use cases or user stories a couple days ago. It's meaningless to build a social network platform without a sense of how it'll be used. Replicating Facebook/Twitter streaming of short burst messages, I submit, is not enough. I think we're already beginning to realize the limitations of those systems, which support shallow and broad messaging and not-quite-conversation, rather than deeper conversation and relationship (as some of the older forum-based systems, like the WELL, did). And perhaps we could look at some specific cases: e.g., I'm involved with the Society for Participatory Medicine, which focuses on patient communities - communities that could be well served by powerful networking and communication that includes granular privacy controls.
I advocate for more of a "what to build" conversation before "how to build," though I understand that developers and network administrators are inherently more focused on the how than the why.
Ok... this clarifies things a lot. So, partly to play Devil's
advocate, and partly to understand the purpose here, let me ask two very
specific questions:
1. What does a hardware/software stack - user-owned or otherwise - have
to do with allowing "communities to self-organize,
self-govern, and empower themselves to manifest ideas that are good for
themselves and that build resilience, sustainability, and
thrivability"?
2. Why a specific and/or new software/hardware stack, as opposed to the
myriad that already exist (user-owned and otherwise)?
Vanessa offers a partial answer with:
> this group has been a hub for rich discussion, and i think if we are
> truly interested in how to communicate/collaborate in practice, not
> just in theory, then we should just begin.
>
> we'll learn what the better tools/features are that we need/desire by
> using the existing ones and realizing what's missing.
>
> thinking and talking about what would be best is a fun exercise, but
> not always practical when it comes to actual human behavior. i think
> we should use ourselves as the test case and begin to experiment with
> what works best for us, and then build the tools to support those
> existing behaviors.
To which, let me elaborate:
There's LOTS of existing hardware/software stacks. At the
hardware/network level these range from fax machines and the Internet to
amateur radio to wireless mesh networks. For that matter, one can
include African drums, smoke signals, and telephone co-ops started a
hundred years ago by Iowa farmers who discovered that strands of barbed
wire worked just fine for supporting telephone party lines.
At the software level, there were the old computer bulletin boards,
freenets, P2P networks (USENET, FidoNet), and there are today's P2P
applications (FreeNet, GnuNet, Gnutella, IRC, etc.), as well as the
myriad of web sites, blogs, and wikis built from open-source software
and running on user-owned machines. And then there are more covert
technologies, such as onion routers, steganography, botnets, and so forth.
Lots of people (including people on this list) are already using various
combinations of these technologies for applications ranging from
day-to-day business, to social networking, to military operations in
unfriendly territory, to sharing music, to supporting disaster response
(from ham radio operators to Ushahidi), to fomenting revolution in the
streets (fax machines in the Tiananmen Square days, Twitter in the
Middle East), to education, to various collaborative economic projects
(search on the "Public Webmarket," a project I was involved in way back
when), to militia groups and terrorist cells.
All of the technology to date has exhibited both strengths and
weaknesses, and continues to evolve as people identify various
shortcomings. The key point is that the technology has been used, and
continues to be used, successfully in lots of venues - and usage
patterns and technology are co-evolving rapidly.
Which leads to the obvious question: What experiences and gaps have led
to this particular set of people having this particular discussion?
What problem are people actually trying to solve - beyond, perhaps,
finding a new and different way of doing things (or more cynically, "not
invented here")?
Personally, my experiences lead me to the conclusion that
self-organization and self-government have very little to do with
technology and lots to do with group process and social infrastructure -
ranging from rules of order to contracts to accounting rules to legal
and cultural context. There's a reason that autocratic, hierarchical
organizations seem to get things done more effectively than ad-hoc
groups of small organizations or individuals; and it has far more to do
with the ability to make decisions quickly than with technology
(beyond the basics of having a way to bring people together in a
physical or virtual room). [I was reminded of this recently during the
process of starting a new venture. After starting down the road of
organizing as a cooperative, pretty much everybody else involved said
"every decision is way too painful in a coop" and reminded me of how
many small coops have been reorganizing as LLCs, with clear lines of
authority.]
The Internet is a really interesting example of self-organization on a
global scale - dating back to a few dozen researchers, albeit with the
granddaddy of all funders - which has evolved to become critical global
infrastructure, where nobody owns more than a small piece of the
equipment, and nobody is in charge. It works, continues to evolve, and
has become central to an awful lot of day-to-day life.
Then there's eBay and PayPal - originally an auction site for hobbyists
- which have enabled a whole slew of new economic activity.
I'd be really interested in hearing about real examples of
self-organization and self-government - particularly those that don't
deconstruct to one or two core people actually starting things off. I'd
like to hear about some real examples of emergent self-organization and
self-government, and the ways that technology has enabled/supported
these activities. That would seem to be a good starting point for
looking at what's missing or what needs to be changed.
Miles Fidelman
All of the technology to date has exhibited both strengths and weaknesses, and continues to evolve as people identify various shortcomings. The key point is that the technology has been used, and continues to be used, successfully in lots of venues - and both usage, usage patterns, and technology are co-evolving rapidly....
Personally, my experiences lead me to the conclusion that self-organization and self-government have very little to do with technology and lots to do with group process and social infrastructure - ranging from rules of order to contracts to accounting rules to legal and cultural context.
Agreed, though it keeps striking me as a set of issues that border on
intractable:
1. Shifting to user-owned infrastructure, en masse, requires that there
be really serious incentives for everyone involved. To date, it's been
much easier for people to buy phones, computers, network services from
large organizations that make everything easy and comparatively cheap.
Yes, there are ham radio operators floating around, and people who run
independent wireless networks and such - but most people just don't want
to be bothered until it's too late. Military units, first responders,
militias, terrorist groups will acquire technology in anticipation of
action; most people can't even be bothered to have a first aid kit and
some emergency supplies at home.
2. To the extent that technology is pre-positioned, widely available,
and widely used - it becomes a serious target. What's worked to date is
when people find a new way to take advantage of technology that has
become critical to businesses (fax during the Tiananmen Square
uprisings, twitter more recently); use it until blocked; find
work-arounds in the moment (e.g., satphones smuggled over the border).
It's always going to be an arms race, and it may well be better not to
present an easy-to-anticipate target.
Having said that, there are quite a few technologies already available
that are hard to shut down without shutting down things the
powers-that-be care about:
- onion routers provide for circumventing firewalls
- steganography seems to be widely used by terrorist groups (or so many
would have us believe)
- botnets seem to be pretty hard to shut down (or even track down)
- satphones can be smuggled pretty easily and they can be had from
companies and countries that are on different sides of conflicts
- there's still an awful lot of ham, CB, and emergency radio gear
floating around
- the military spends billions on spread-spectrum, software-defined
radios and ad-hoc mesh networking - with most of the core technology
published in publicly-available journals
Would I like to have a data-capable smartphone / hotspot that relies
totally on ad hoc mesh networking for connectivity? Of course. Getting
enough other people to adopt them, for them to actually be usable, is a
much harder problem. It's hard enough for a wireless carrier to roll
out a new generation of technology - after investing billions in new
phones, towers, radios, roaming agreements, and marketing to make the
new phones useful on day one.
> Social infrastructures are a critical topic for discussion, and I
> welcome it as part of Contact Con. But the technology underlying those
> social arrangements IS relevant and of central concern here.
Sure... but I expect that useful results are more likely to come out of
MILCON and DEFCON.
I should add that there seems to be an awful lot of ready-to-use mesh
networking gear coming out for the first responder community -- stuff
that's designed to be carried into wilderness areas by firefighters, and
into earthquake zones and other places where existing infrastructure has
been knocked out. That's the kind of stuff that might actually be
useful, and available, when needed.
And for some really interesting stuff, google "opportunistic
communications" and "pocket-switched networks" (yes, that's pocket, with
an "o"). You'll find stuff like this:
www.cl.cam.ac.uk/~jac22/talks/p2p-keynote-6.9.6.ppt.gz
- Where is your profile page served from? Where is your personal data? Does it remain available if you are not online? If it does, how is it secured?
- People see BitTorrent as the means they get.... 'stuff'. Are they comfortable bringing that activity mainstream? Anonymity is a big deal in this space, which can be a bit of a pain for a social platform, depending on how you look at it. I can't see people getting 'stuff' while logged in as themselves.
- Discovery/Search - for exactly the same reason you need pirate bay or equivalent, you need a central server to locate people, etc. - unless they have some pretty funky new distributed tech.
- Synchronisation between nodes - e.g. my desktop/laptop/tablet and within the social graph. This is interesting in general, but gets harder when the data is volatile (which it generally is not with BitTorrent in its current form). e.g. I update my status - how does that get reliably to everyone... in near-real-time? Distributed twitter.... tough.
- Authentication - not clear to me how this is managed.
Essentially the same things Diaspora has to worry about, plus not being perceived as 'social'. The user base is a big deal, but it is the weakest form of user base because they are almost entirely anonymous. One slight upside is that this user base probably has a higher than usual proportion of 'early adopter' psychology.
My take is that a logical network topology based on the social graph is a good thing, and could improve the viability of a P2P physical topology, but it needs to start with a decent model of the social graph, and I'm still not seeing that - it always seems to be tacked on to some other property that people are trying to leverage for existing network effects.
Jon Lebkowsky wrote:
> I'm concerned that this conversation addresses how to build rather than
> what to build. We have a general sense what problem we're trying to solve here,
Actually, I sure don't. I originally thought the focus was on distributed, decentralized INFRASTRUCTURE - e.g., wireless mesh networks that are not dependent on carriers, and are not subject to disruption or centralized control.
The focus seems to have shifted to decentralized applications - again, less vulnerable to control and disruption, but dependent on the current Internet infrastructure.
These are two very different problems that require very different solutions.
--
In theory, there is no difference between theory and practice.
In<fnord> practice, there is. .... Yogi Berra
> You grab a torrent file from some website - it contains a list of trackers
> Your BitTorrent client then contacts those trackers to find out which other
> connected peers have part or all of the file you want
That hasn't been the only way of setting up a torrent for a couple of
years now. DHTs have been part of the BitTorrent protocol since...
let me check... May of 2005?
--
The Doctor [412/724/301/703]
http://drwho.virtadpt.net/
"I am everywhere."
On Fri, May 13, 2011 at 12:04, Colin Hawkett <haw...@gmail.com> wrote:
> You grab a torrent file from some website - it contains a list of trackers
> Your BitTorrent client then contacts those trackers to find out which other
> connected peers have part or all of the file you want
That hasn't been the only way of setting up a torrent for a couple of
years now. DHTs have been part of the BitTorrent protocol since...
let me check... May of 2005?
> The focus seems to have shifted to decentralized applications - again,
> less vulnerable to control and disruption, but dependent on the current
> Internet infrastructure.
I think people here underestimate the scope and scale of this infrastructure - it can't be "replaced" by some in-the-air radio links... There is a role for "REAL" infrastructure - fiber in the ground on a massive scale. I think the European, or at least French, model - where the cities are wired with fiber optics, the cities own the fiber, and companies then pay to run services over the wires - is a very good model.
It doesn't. I've found a couple of torrents out there that only
supported DHT, but the downside is how long they take to get running
due to the discovery process.
> populating that torrent file with a 'good' node (i.e. reliable server) is
> still a recommended strategy. It seems the main difference is that the
To get a swarm moving fast, it is. Kademlia (which is the DHT
implementation used by BitTorrent) can bootstrap itself from only one
node in the swarm. The identifiers of each node in the swarm are
hashes, just as the files handled by a particular torrent are hashes.
Each node, after computing its identifier, then tries to find other
nodes by trying to contact permutations of its identifier (one up and
one down, ten up and ten down) until it finds (a) a node, and (b) a node
that knows something about the swarm it wants to join.
> tracker(s) contained in the torrent file will query the network according to
> the DHT algorithm, rather than holding info about all of the peers
> themselves - i.e. every node is a tracker. Am I right in thinking that the
> reliability/availability of the tracker(s) in the torrent file is just as
> important with either algorithm?
To the best of my knowledge, this is not the case. DHT aims to
replace trackers entirely, if I'm reading the specs correctly.
> I was having trouble working out how widely the spec. is actually used - are
> most torrent files following the DHT spec. these days? I assume most clients
It depends.
Some trackers explicitly forbid distributed torrents to keep the files
private, and when torrents are uploaded DHT support is stripped off.
Others are pretty much DHT only these days (like the Pirate Bay).
Most trackers I've seen support both.
Most modern torrent clients support DHT, and add the necessary
information to the .torrent files they generate automatically unless
you tell them not to.
> (regardless of the torrent file format) are using DHT as the primary
> discovery mechanism?
I would say it's probably about half-and-half, in my experience.
> As far as I can tell DHT isn't part of the protocol, and remains in draft
> status. Where did 2005 come from? I'm probably looking in the wrong place :)
https://secure.wikimedia.org/wikipedia/en/wiki/BitTorrent_%28protocol%29#Distributed_trackers
http://torrentfreak.com/bittorrent-2005-part-6-torrent-clients/
http://azureus.sourceforge.net/changelog.php
On Mon, May 16, 2011 at 13:52, Colin Hawkett <haw...@gmail.com> wrote:
> It seems to me that a DHT implementation still requires a torrent file
> specifying tracker(s) to kick off the DHT based discovery process, and that
It doesn't. I've found a couple of torrents out there that only
supported DHT, but the downside is how long they take to get running
due to the discovery process.
> populating that torrent file with a 'good' node (i.e. reliable server) is
> still a recommended strategy. It seems the main difference is that the
To get a swarm moving fast, it is. Kademlia (which is the DHT
implementation used by BitTorrent) can bootstrap itself from only one
node in the swarm. The identifiers of each node in the swarm are
hashes, just as the files handled by a particular torrent are hashes.
Each node, after computing its identifier, then tries to find other
nodes by trying to contact permutations of its identifier (one up and
one down, ten up and ten down) until it finds (a) a node, and (b) a node
that knows something about the swarm it wants to join.
Colin Hawkett wrote:
> The desired outcome, I think most here agree, is that infrastructure
> should be managed in the common/public interest. So our options are to
> either create new infrastructure with the goal that it either has good
> management (or is unmanageable? or can't be corrupted?), or get good
> management of what we have. Stepping back from the infrastructure
> issue a little, I think we would mostly agree that it is a specific
> manifestation of something we see repeated throughout society -
> corporate interest trumps public interest.
>
<snip>
> It seems clear to me that the cause of the problem is dodgy
> government, not dodgy infrastructure. From this perspective there are
> a few issues with the new infrastructure approach -
>
> a) if it ain't broke don't fix it.
> b) we'll still have dodgy government, and they're gonna make the new
> stuff annoyingly difficult to run if we expect them not to control it
> (some stuff [1] [2] around bitcoin is already showing this behaviour).
> [1] <http://launch.is/blog/l019-bitcoin-p2p-currency-the-most-dangerous-project-weve-ev.html>
> [2] <http://launch.is/blog/l020-is-bitcoin-the-wikileaks-of-monetary-policy.html>
> c) we'll still have dodgy government, so corporate interest trumps
> public interest in all the other things we have yet to build new ones of.
> d) building new ones 'cos they buggered up the old ones' seems like an
> unnecessarily hard slog.
>
> So I guess my logic is that if infrastructure isn't the problem, then
> how about the dodgy government thing? What can we do from a next-net
> perspective?
I'd also add "how about this 'dodgy corporate' thing?" :-)
A couple of thoughts present themselves:
1. There are quite a few models of infrastructure ownership to draw on:
- user ownership of infrastructure (works well for large corporations,
not so well for individuals)
- condominiums and cooperatives (works well for houses, apartment
buildings, housing complexes, farmers, and a surprising number of
electric and telephone companies)
- municipal ownership (e.g., of waterworks, electric utilities; with
municipal telecom utilities among the few that offer gigE network
services) -- one can argue that a local government is essentially a
condo association writ large
- various forms of user-owned financial institutions (credit unions,
mutual banks and insurance companies, etc.) - along with a growing "move
your money" campaign (moveyourmoneyproject.org)
- various forms of purchase aggregation (i.e., multiple people or groups
buying things together - in order to increase leverage in the market,
such as insurance purchased through a professional association)
- various forms of affinity programs (e.g., discounts negotiated by
AARP, AAA, etc., certified products like Fair Trade coffee) - again,
ways to shape a market
2. There are examples of each of these applied to telecom
infrastructure (e.g., corporate and academic networks, CREDO telephone
service, municipal telecom utilities, etc.)
3. Put one or more of these together, coupled with a grass-roots
"move-your-telecom" campaign, and something might be doable.
Miles Fidelman
A couple of thoughts present themselves:
1. There are quite a few models of infrastructure ownership to draw on:
- user ownership of infrastructure (works well for large corporations,
not so well for individuals)
- condominiums and cooperatives (works well for houses, apartment
buildings, housing complexes, farmers, and a surprising number of
electric and telephone companies)
- municipal ownership (e.g., of waterworks, electric utilities; with
municipal telecom utilities among the few that offer gigE network
services) -- one can argue that a local government is essentially a
condo association writ large
- various forms of user-owned financial institutions (credit unions,
mutual banks and insurance companies, etc.) - along with a growing "move
your money" campaign (moveyourmoneyproject.org)
- various forms of purchase aggregation (i.e., multiple people or groups
buying things together - in order to increase leverage in the market,
such as insurance purchased through a professional association)
- various forms of affinity programs (e.g., discounts negotiated by
AARP, AAA, etc., certified products like Fair Trade coffee) - again,
ways to shape a market
So I'm thinking here too that if we trust government, and it represents our interests, then through the policy making process, which is more or less a constantly changing re-assessment of the landscape for the good of the people, the right forms of infrastructure ownership will fall out at the places where they need to. It seems to me that the most obvious way to make this work is to have the policy making process publicly crowd-sourced, and transparent so corporations don't have the ability to corrupt the system by corrupting a small number of individuals.
> So I'm thinking here too that if we trust government, and it
> represents our interests, then through the policy making process,
> which is more or less a constantly changing re-assessment of the
> landscape for the good of the people, the right forms of
> infrastructure ownership will fall out at the places where they need
> to. It seems to me that the most obvious way to make this work is to
> have the policy making process publicly crowd-sourced, and transparent
> so corporations don't have the ability to corrupt the system by
> corrupting a small number of individuals.
Well, that's not what I was suggesting. I was suggesting that money =
power.
It's pretty hard to change government without money and power -
particularly when a lot of what we buy funds the status quo (every time
you drink out of a Dixie Cup, or wipe your rear with a Georgia-Pacific
paper product, you're funding the Koch Brothers and the Tea Party!).
I'm suggesting that rather than tilt against windmills to change
government, and wait for government to solve telecom problems, that we
take more direct action - become our own corporations and/or their
equivalent (cooperatives, local governments, etc.).
In banking, that means taking money out of Bank of America accounts and
putting it into community banks and credit unions. In telecom, it's
about municipal networks.
Change the economics first, then change Government.
I'm suggesting that rather than tilt against windmills to change government, and wait for government to solve telecom problems, that we take more direct action - become our own corporations and/or their equivalent (cooperatives, local governments, etc.).
In banking, that means taking money out of Bank of America accounts and putting it into community banks and credit unions. In telecom, it's about municipal networks.
Change the economics first, then change Government.
Ah, but what if the Users *were* the shareholders?
And what if the ROI for those owners was the Product itself?
And, for those who are not yet part of that organization,
what if the Profit charged against them was treated as
though they were making a late investment?
A corporation structured in this way would not have the
traditional conflict between shareholders and users because
those two groups would be one and the same!
We can start by organizing Users to pre-pay for the
Product (internet access in this case), but treat those
funds as investment and real co-ownership in that corp.
We, the Users, *already* pay all the Costs of operation
anyway, we are just paying them late, and are therefore
also required to pay Profit.
When a corporation is User Owned, and the return for
those investments is the Product itself, then those
owners do not buy the Product because they own it
already as a side-effect of their co-owning the Sources.
Those Owners must still pay all the Costs, but cannot
pay Profit, for who would they pay it to?
Umm... they're called:
- mutual banks and insurance companies
- condominiums
- cooperatives
- associations
- .....
Sounds like a great idea. Go build it! Let us know when we can join up.
>>
>> When a corporation is User Owned, and the return for those
>> investments is the Product itself, then those owners do not buy
>> the Product because they own it already as a side-effect of their
>> co-owning the Sources.
>>
>> Those Owners must still pay all the Costs, but cannot pay Profit,
>> for who would they pay it to?
>
>
> Sounds like a great idea. Go build it! Let us know when we can join
> up.
>
Does sound like a great idea, but I disagree with the second
sentiment. We're building the same thing.
Hey Patrick, you wanna build with us?
FNF could be exactly the corporation that you're talking about.
take care,
imw
A consumer cooperative is very different from what I propose in a few
ways, but maybe the most important is the way the Consumer/Owners
*buy* the Products back from the group instead of being the owners
already.
This means the cooperative actually charges a Price above Cost (Profit)
against those Consumer/Owners, and then tries to get rid of that
hot potato in a variety of ways; most usually this ends up causing the
cooperative to suffer from consolidation of control into the hands of
the originators, who gain ownership of all the growth caused by those
overpayments.
The User Ownership model I propose is more like what would happen in
the smallest of scenarios.
For example, imagine you and your neighbor run a network wire between
your houses. Obviously you must collectively pay the Costs of
purchasing the equipment and the Costs of the labor to install it, and
the Costs of supplying the electricity, but you would not and COULD
not pay Profit because you are not each buying the Product back from
the collective two of you -- that would be nonsense, though that is
exactly what we will face if we give control to a municipality, a
cooperative, an association, or probably nearly any other
organizational form currently in existence...
Colin Hawkett wrote:
> From the corporate perspective, I'm not even
> sure it is legal for them to do anything but act
> in the interests of their shareholders.
Ah, but what if the Users *were* the shareholders?
Governments cannot act in our (the Users') interest because they are
controlled by corporations.
Corporations cannot act in our interest because they are controlled by
investors that expect Profit as reward.
Treating Profit as reward causes conflict with Users because Profit
only occurs when Users lack control.
Profit is *undefined* when Users own and control the Sources of
Production because the Product is not sold - since it is already in
the hands of those who need it!
> the people are the shareholders in government,
I wish that were true, but Governments are composed of humans that
respond to the power consolidated into the hands of the owners of the
Corporations.
> capitalism isn't actually broken
Capitalism is broken because we have mistaken Profit as a reward
instead of understanding it as a measure of the Payer's dependence
upon those current Owners.
When we finally realize what Profit really is, we will treat it as an
investment from the Consumer who paid it - causing a negative feedback
loop that will cause Profit to safely approach zero as each
User/Consumer slowly gains the ownership they need to finally stop
paying tribute to another.
Scalability is all about being able to decompose work into parallelizable chunks and minimizing the work that needs to be done on any given node.
General message queuing systems still have queue bottlenecks at the network, disk i/o and CPU level that can be easily exceeded with the wrong overall system architecture.
The key to scalability is to not have any bottlenecks that can't be subdivided and load-balanced on the fly without taking any nodes offline. What matters here is which nodes are talking to which other nodes, and what kind of information they are sending to each other, not the specific implementation of the nodes themselves. That's what I meant by optimizing the parts without changing the fundamental architecture.
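One classic technique for the kind of on-the-fly subdivision and rebalancing described above is consistent hashing, where adding or removing a node remaps only a small arc of the key space rather than reshuffling everything. A toy sketch (not from the original post; the class and node names are illustrative):

```python
import bisect
import hashlib

class ConsistentHashRing:
    # Minimal consistent-hash ring: each key maps to the first node
    # point at or after its hash, so adding or removing a node only
    # remaps the keys in that node's arc of the ring
    def __init__(self, nodes=(), replicas=64):
        self.replicas = replicas       # virtual points per node, for balance
        self._ring = []                # sorted list of (hash, node) points
        for node in nodes:
            self.add(node)

    def _hash(self, key: str) -> int:
        return int.from_bytes(hashlib.sha1(key.encode()).digest()[:8], "big")

    def add(self, node: str):
        for i in range(self.replicas):
            bisect.insort(self._ring, (self._hash(f"{node}#{i}"), node))

    def node_for(self, key: str) -> str:
        # find the first ring point past the key's hash, wrapping around
        h = self._hash(key)
        idx = bisect.bisect_right(self._ring, (h, chr(0x10FFFF)))
        return self._ring[idx % len(self._ring)][1]

ring = ConsistentHashRing(["node-a", "node-b", "node-c"])
owner = ring.node_for("user:12345")
```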
Most of the architectural issues associated with the protocols are irrelevant to scalability. You could store data in in-memory lists for high performance, but if the amount of data coming into or out of a node exceeds the network or CPU performance capability, that node is effectively down. If this happens enough, fail-overs merely cascade to other previously under-capacity nodes taking the whole system down to a crawl as users everywhere halt.
So you need to have a way of making sure that this doesn't happen. For example, a mechanism so that nodes start offloading work and user connection responsibility to underutilized server nodes long before any given node reaches capacity. You can't have every peer talk to the busy nodes or when someone with 100,000 followers tweets you'll use up too many resources (individual TCP packets and the associated CPU time to process them). A good architecture will do what Charles describes, it will fan out and distribute the work from updates and change the connection topology based on the characteristics of the communication requests themselves.
Communication data should roll up geographically or topically and then fan out on the same basis. If you have the luxury of collocating a group of servers (for instance, if you are a centralized company, like say Twitter or Facebook or a stock market exchange data provider) then you can use IP multicasting to make the network communications very efficient. In a federation of peer servers located all over the world, you won't have this option.
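The fan-out-on-write pattern sketched above can be illustrated crudely: one post becomes per-follower inbox appends, partitioned into shards that independent workers (or servers) could process, instead of every reader hammering one busy node. This is a toy sketch, not any real system's API; all names are invented:

```python
from collections import defaultdict, deque

# follower graph: who follows whom (illustrative data)
followers = defaultdict(set)
followers["alice"] = {"bob", "carol", "dave"}

# per-user inbox queues; in a real deployment these would live on
# many machines, sharded so no single node absorbs the whole write load
inboxes = defaultdict(deque)

def publish(author: str, message: str, shard_count: int = 4):
    # fan out on write: one post becomes N inbox appends, split across
    # shard_count chunks that separate workers could handle in parallel
    work = sorted(followers[author])
    shards = [work[i::shard_count] for i in range(shard_count)]
    for shard in shards:               # each shard could run on its own worker
        for follower in shard:
            inboxes[follower].append((author, message))

publish("alice", "hello, distributed world")
```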
On May 13, 2011, at 5:00 PM, Charles N Wyble wrote:
> On 05/13/2011 03:57 PM, Samuel Rose wrote:
>> On Fri, May 13, 2011 at 3:20 PM, Curtis Faith <cur...@worldhouse.org> wrote:
>>> Social network servers have a data distribution problem not a database problem. Many good lessons come from finance for building trading and exchange data feeds. People pay big money to make sure their data comes in reliably with extremely low latency. The scalability issues have been solved. You know how quickly the volumes blow up when all hell breaks loose in the markets with the high-frequency-trading algorithms running full tilt, right?
>>>
>>> You need to use some of the same kinds of tricks in a distributed system to get scalability, but that's not possible to do in a 100% pure peer-to-peer architecture.
>>>
>> I believe that diaspora is employing
>> http://www.amqp.org/confluence/display/AMQP/Advanced+Message+Queuing+Protocol
>> for addressing this. My understanding is that AMQP is capable of
>> meeting the challenge.
>
> Queuing systems fall over under high load. Multicast/data fountains are
> what a lot of people are using now. Moves the complexity into the
> network instead of the servers. Push the ugly around and all that. I
> like the multicast approach and I've got a lot of experience with it.
> Including building a multicast to unicast converter and back again.
>
>
>
On Sat, May 14, 2011 at 8:46 AM, Venessa Miemis <veness...@gmail.com> wrote:
Is this something that people just go off and do? Or are we thinking to organize in some way? It seems to me that we would benefit from a plan, a set of parameters, a set of models. Maybe some categories of activity, e.g. network infrastructure, network management, use cases, application structure, core technologies, etc.
"thinking and talking about what would be best is a fun exercise, but not always practical when it comes to actual human behavior. i think we should use ourselves as the test case and begin to experiment with what works best for us, and then build the tools to support those existing behaviors."
I think there's been an implicit promise that a project or projects would emerge that would be more focused - beyond loose discussion, data gathering, and curation. We seem to have a set of participants who could take next steps effectively if we can figure out how to organize. Perhaps we focus on finding funding sources so that there can be compensation for doing something in a more organized way.
~ Jon
richard adler wrote:
Agreed, though it keeps striking me as a set of issues that border on intractable:
Personally, my experiences lead me to the conclusion that
self-organization and self-government have very little to do with
technology and lots to do with group process and social
infrastructure - ranging from rules of order to contracts to
accounting rules to legal and cultural context.
There is truth in this. But a crucial part of these discussions is the example provided by the recent popular uprisings in the Middle East. We saw tremendous ingenuity displayed by activists there, but those innovations also faced serious obstacles because those group processes and social infrastructures were occurring on top of physical infrastructures they did not own and control.
To be sure, people can find ways to route around that lack of control, as indeed they did at critical moments earlier this year. But at least one of the goals of these discussions could be--and should be--models for internet infrastructures that would move that control from corporate or governmental entities and into the hands of the people who use them.
1. Shifting to user-owned infrastructure, en masse, requires that there be really serious incentives for everyone involved. To date, it's been much easier for people to buy phones, computers, network services from large organizations that make everything easy and comparatively cheap. Yes, there are ham radio operators floating around, and people who run independent wireless networks and such - but most people just don't want to be bothered until it's too late. Military units, first responders, militias, terrorist groups will acquire technology in anticipation of action; most people can't even be bothered to have a first aid kit and some emergency supplies at home.
2. To the extent that technology is pre-positioned, widely available, and widely used - it becomes a serious target. What's worked to date is when people find a new way to take advantage of technology that has become critical to businesses (fax during the Tiananmen Square uprisings, Twitter more recently); use it until blocked; find work-arounds in the moment (e.g., satphones smuggled over the border). It's always going to be an arms race, and it may well be better not to present an easy-to-anticipate target.
Having said that, there are quite a few technologies already available that are hard to shut down without shutting down things the powers-that-be care about:
- onion routers provide for circumventing firewalls
- steganography seems to be widely used by terrorist groups (or so many would have us believe)
- botnets seem to be pretty hard to shut down (or even track down)
- satphones can be smuggled pretty easily and they can be had from companies and countries that are on different sides of conflicts
- there's still an awful lot of ham, CB, and emergency radio gear floating around
- the military spends billions on spread-spectrum, software-defined radios and ad-hoc mesh networking - with most of the core technology published in publicly-available journals
Would I like to have a data-capable smart phone / hot spot that relies totally on ad hoc mesh networking for connectivity? Of course. Getting enough other people to adopt them, for them to actually be usable, is a much harder problem. It's hard enough for a wireless carrier to roll out a new generation of technology - after investing billions in new phones, towers, radios, roaming agreements, and marketing to make the new phones useful on day one.
Sure... but I expect that useful results are more likely to come out of MILCOM and DEFCON.
Social infrastructures are a critical topic for discussion, and I welcome it as part of Contact Con. But the technology underlying those social arrangements IS relevant and of central concern here.
I'm just saying, if we change the corporations (one corporation
at a time - by creating new corps that do the right thing), then
we don't need to separate the goals of gov from corp.
> I think the key is to crowd-source government
If we have crowd-owned corps, we will have crowd-controlled gov.
>> I wish that were true, but Governments are composed of humans
>> that respond to the power consolidated into the hands of the
>> owners of the Corporations.
>
> Again, this is the point - I agree with you it is not true - this is the
> problem statement. This is what needs fixing.
We can change the corporation's goal of keeping Price above Cost by
attracting Consumers to Invest for the purpose of receiving at-cost
Product. At that point, we will be free to redirect the special value
called Profit (which is defined as Price above Cost) to be treated as
though the Payer of that special value had just made an investment
and is the real owner of that growth. This auto-distributes control
into the hands of those willing to pay for growth and removes the
unnatural and dangerous drive toward scarcity and destruction that
treating Profit as reward tends to cause.
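The bookkeeping being proposed can be made concrete with toy numbers. This is one reading of the mechanism described above, not a formal economic model; the prices, names, and `sell` function are invented for illustration.

```python
# Toy bookkeeping for "Profit treated as Payer investment":
# Price above Cost is credited to the buyer as an ownership stake,
# rather than paid out to current owners as reward.

from collections import defaultdict

COST = 80.0    # actual cost to deliver one unit of Product
PRICE = 100.0  # price charged while latecomers still pay above cost

ownership = defaultdict(float)  # payer -> accumulated investment

def sell(payer):
    """Sell one unit; book the profit as the payer's investment."""
    profit = PRICE - COST
    ownership[payer] += profit
    return profit

# Three purchases by two latecomers; each purchase grows the
# purchaser's stake, auto-distributing control toward those who pay.
for buyer in ("alice", "bob", "alice"):
    sell(buyer)

print(dict(ownership))
```

The point of the sketch: ownership accrues in proportion to what each payer has contributed above cost, so a buyer who pays twice holds twice the stake, which is the "auto-distributes control" claim in the paragraph above.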
>> Capitalism is broken because we have mistaken Profit as a reward
>> instead of understanding it as a measure of the Payer's dependence
>> upon those current Owners.
>
> I don't think you're describing capitalism there. Surely redefining profit
> as investment changes the system to something else? Can you point me
> to any widely accepted definition of capitalism in these terms?
I'm not describing the currently operating Capitalism that is raping our
planet, I am describing why Capitalism is broken, and how it must change
if we are to continue to exist as a species.
Yes, redefining Profit as Payer Investment changes the system dynamics,
and that is a good thing because the current system is extremely wrong.
Colin, why does government currently not act in our interest?
Colin Hawkett wrote:
> Patrick Anderson wrote:
>> Governments cannot act in our (the Users') interest
>> because they are controlled by corporations.
>
> That is the point - our end game should be to change this.
We're actually pretty close to that now:
You're essentially describing the capability built into the One Laptop
per Child computer - they establish IP over WiFi mesh connections with
each other, and through each other, until a connection reaches an
Internet POP.
A lot of cell phones support WiFi as well as 3G, and the IP stack will
select WiFi if you're in range of an access point (sort of - there's a
lot of proprietary stuff in those stacks, and carriers like to bill).
And then there are cellular microcells - when you walk into your house,
your phone ends up connecting to the local microcell, which in turn is
connected to a broadband wired network.
It pretty much comes down to configuration at the routing level - what
routes your local node publishes, and peering configuration between
various networks. The original Internet was essentially a mesh
network. Private peering arrangements are what route things through
specific large broadband networks at the edge, and large backbone
networks at the core. (Note that it's a little more complex than that -
full mesh routing works well in a small network, but in a network the
size of today's Internet, routing tables and calculations become
intractable for a fully peer-to-peer mesh.)
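The scaling point in the parenthetical can be made concrete with back-of-the-envelope arithmetic. This is a rough sketch only, not a model of BGP or of any real routing protocol; the two-level clustering scheme here is a simplification chosen to show the shape of the curves.

```python
# Why full-mesh routing does not scale: with n nodes, every node
# tracks a route to every other node, so total routing state grows
# quadratically. A simple two-level hierarchy (roughly sqrt(n)
# clusters of sqrt(n) nodes each) grows far more slowly.

import math

def full_mesh_routes(n):
    # every node holds n-1 routes
    return n * (n - 1)

def two_level_routes(n):
    # each node holds routes to its ~sqrt(n) cluster peers
    # plus ~sqrt(n) cluster gateways
    k = math.isqrt(n)
    return n * 2 * k

for n in (100, 10_000, 1_000_000):
    print(n, full_mesh_routes(n), two_level_routes(n))
```

At a hundred nodes the difference is modest; at a million nodes the full mesh needs on the order of a trillion route entries while the hierarchy needs billions - which is why the Internet moved from its original mesh toward hierarchical peering, as the paragraph above notes.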
It also comes down to a systems administration issue - configuring and
tuning routing systems is not an easy task, and the state of the art
does not come close to this being a completely automated process
(witness the various Internet outages when a corrupted routing table
gets propagated).
There's an awful lot of research and development needed (and going on,
mostly with military funding) to get to a point where this kind of
capability can be deployed on a large scale, to non-technical users.
Just one man's opinion of course, but 40 years in the industry, dating
back to year 2 of the ARPANET (1971), and a long stint at BBN working on
things like network management for large pieces of the early Internet,
plus a lot of years working on municipal networks (among other things)
provides some amount of perspective.
You're kidding, right?
Admittedly I could only skim, there was so much verbiage here since I last looked. However I have some points that I think are relevant based on what I see here...
Re a corporation's responsibility to shareholders, I think the legal obligation is contractual, and a corporation can structure an agreement that tells shareholders, for instance, that sustainability will have priority, or that some percentage of profits will be committed to environmental stewardship, that sort of thing. I.e. (noting that I'm not a lawyer) I think there's no law that says a corporation has to put profit first, just a decree from Milton Friedman... but it could be seen as a contractual issue and I assume shareholders could create legal grief if a corporation wasn't behaving according to their expectations. I think that's the basis for some class action suits.
Re all this talk about "government": You don't get anywhere by conceiving of government as a black box or monolithic entity that can be described as any one thing. What we call "government" is an incredibly complex ecosystem of legislatures, courts, officials, agencies, laws, regulations, etc. The components don't act in concert or with singularity of purpose. Any attempt to address government as a "thing" and not as a complex system is a fantasy and won't get anywhere.
I did see something recently from someone who was disillusioned after starting to work in a legislative office, and finding that all the work there is driven by greedy corporations. It's either big greedy corporations wrangling with other big greedy corporations, or with small greedy corporations, and sometimes small greedy corporations wrangling with other small greedy corporations. There's no time or energy left for the kinds of civic issues we would hope they'd be dealing with.
I have said for many years that citizens' best bet is to swarm legislators - show up in real numbers with coherent proposals and arguments - but numbers and facts won't necessarily trump money in that world. Where I've seen lobbying work, it was because corporations friendly to our side of the issue stepped in (this in the muni wireless battle in Texas, where Intel, Dell, and Texas Instruments were instrumental in making the bad legislation go away).
My advice to this list is that we should avoid talking about "government" and focus on actions we can take that don't require that conversation, at least for now.
Miles Fidelman wrote:
>
> You're kidding, right?
My hope was to have Colin answer "Because corporations control governments".
Then the question becomes "Why do corporations not act in our interest?"
The answer to that is "Because shareholders (currently) require Profit
as payment."
Then the question becomes "What if we could pay shareholders with
Product instead?"
The answer to that is "We can only treat Product as reward if
shareholders are Users, and hold exactly as much ownership as they
intend to use of the Product."
Then the question becomes "How do we ensure ownership is distributed
to the new Users who come on board as the org grows?"
The answer to that is "We can treat the Profit those new Users pay as
though it were an investment from those Payers."
Actually when I tested this a year or so ago it was broken. The mesh
didn't actually route traffic. :( I hope it's been fixed since.
Giant +1 to the previous paragraphs. The nextnet/Free Network Foundation
roadmap that I and others are working on, addresses a lot of these
issues. I expect to publish it by the end of May. I think that will
really bring a lot of this discussion, thinking etc to a common point.
--
Charles N Wyble cha...@knownelement.com @charlesnw on twitter
http://blog.knownelement.com
Building an alternative, global scale, secure,
cost effective bit-moving platform
for tomorrow's alternate default free zone.
Well yeah :-)
> Then the question becomes "Why do corporations not act in our interest?"
>
> The answer to that is "Because shareholders (currently) require Profit
> as payment."
>
> Then the question becomes "What if we could pay shareholders with
> Product instead?"
>
> The answer to that is "We can only treat Product as reward if
> shareholders are Users, and hold exactly as much ownership as they
> intend to use of the Product."
>
> Then the question becomes "How do we ensure ownership is distributed
> to the new Users who come on board as the org grows?"
>
> The answer to that is "We can treat the Profit those new Users pay as
> though it were an investment from those Payers."
>
That doesn't really help a lot when faced with entrenched monopolies,
backed by government regulations and enforcement that help them stay
entrenched.
Are you saying it is impossible to start new businesses that:
1.) Are funded and Owned by the Users.
2.) Treat Product as reward for those investments.
3.) Treat Profit against latecomers as Payer investment.
Are you saying the governments won't allow such an
organizational form?
Maybe you are right, and we should prepare for that
resistance. I wonder what machinations they will try...
Not completely - we certainly have enough cooperatives and municipal
utilities to point to, as well as various forms of joint ventures,
partnerships, and so forth - all of which are examples of user-owned
businesses.
What I am saying is that:
- entrenched monopolies control a lot of the resources needed to succeed
in a telecom. business (e.g., access to rights of way, ownership of
wireless spectrum, etc.)
- there are plenty of cases of entrenched monopolies throwing up
roadblocks to things like municipal utilities (one of the more effective
forms for a community to own infrastructure) -- there are more than a
few states where it is illegal for a local government to go into the
telecom. business; and additional states where municipalities have to
"compete on an even footing" with the private sector (i.e., even if the
city already owns a wireless tower, they have to include its cost in
prices charged)
- there are lots of other regulatory games that entrenched monopolies
can play - for example: want to put wire or an antenna on a power pole,
then you need to both negotiate access, and you need people with the
right certifications to do the work (actually, not a bad idea - working
near live electric wires is pretty dangerous)
- competing with an entrenched monopoly is hard and costly, and they can
afford to operate at a loss to drive you out of business --- it's pretty
hard to generate a critical mass of users when the entrenched
competition is giving away service
- and then, if you're relying on any portion of a carrier's
infrastructure, they are going to do things like block your packets,
impose contractual restrictions (it's a violation of your terms of
service to share your WiFi with your neighbor), and so forth
- by all rights, government should be imposing anti-trust restrictions
as a counterbalance to these kinds of things, but I haven't seen a lot
of anti-trust enforcement since the telco breakup a few decades ago
In short, building and operating infrastructure is a complicated and
expensive proposition that involves competing with 800-pound gorillas.
It's doable if you want to play hardball, but it's not a game for
theoreticians.
Those organizational forms do not comply with:
2.) Treat Product as reward for those investments.
3.) Treat Profit against latecomers as Payer investment.
> it is illegal for a local government
> to go into the telecom business;
Yet another reason to incorporate in a GNU way instead
of handing it over to a poorly structured city government
that will overcharge for access and never deliver what we
really need since it will once again be 'us' against
'them' - for we, as citizens, do not have real ownership
of the cities even though we pay taxes into slush-funds
that are then doled out by well intentioned tyrants of
the majority.
> competing with an entrenched monopoly is hard and costly,
> and they can afford to operate at a loss to drive you out
> of business --- it's pretty hard to generate a critical mass
> of users when the entrenched competition is giving away service
I agree this will be a problem, but I think we can address this
issue by beginning small and growing slowly.
Of course no business can operate at a loss indefinitely.
We will have another advantage in that we will be able to operate
at "at cost" indefinitely - since we will not be trying to
perpetuate Profit, but will instead be paying investors with the
Product itself - and so will only need to collect the Costs of
operation instead of continually trying to charge ourselves
more than it really Costs to sustain that network.
> by all rights, government should be imposing anti-trust
> restrictions as a counterbalance to these kinds of things,
So you want a government corrupted by corporations to turn
against those corporations?
How do you propose we do that?
I doubt you have enough money to purchase such legislation,
and even if you could, it would only be a brief win in a
small skirmish that would soon be washed away by all the
other corporate pressure to work against us, the Users.
My background in trading and finance is very strong and I've been thinking about this for years.
The key is to get enough people to synchronize their small actions.
That's why a collaboration environment is the first step. The rest of the problems are actually much easier. Corporations respond to well-defined external stimuli. We can create the ones we need to make the changes we want happen in the most effective humane way possible if we collaborate together.
Peace
Curtis
Perhaps a silly question: What's wrong with existing collaboration
environments?
There's an awful lot of collaboration being done via everything from
email to Usenet to Drupal to special-purpose sites like Ushahidi. And
there are plenty of examples of ad hoc collaboration in response to
special circumstances (e.g., some of the family-matching sites that
cropped up after Hurricane Katrina hit New Orleans). There are the larger, more formal
efforts such as those around various open source software projects.
There are the flash mob performance art groups like the Banditos
Mysteriosos. And of course there are examples of boycotts organized
across the network (remember "New Coke?").
Not that I'm arguing against new, better, broader technology and
infrastructure; just that there are a lot of tools already available.
Which brings us back to the question: What's missing for your purposes?
I have a lot of concrete specific ways and associated elaborate but simple plans we can use to fix the plutocracy and corporate greed/power problems, and divert resources from them to good projects.