On May 23, 5:22 pm, Maciej Sobczak <
see.my.homep...@gmail.com> wrote:
> Don't hesitate to describe your idea in more detail - in particular,
> try to describe how you would like to use it.
Firstly, let me explain my intentions, then I shall explain my
intended solution.
I am trying to create a system/library in C++ that will allow the
creation of arbitrarily sized/complex P2P networks, in which
arbitrary blocks of data can be routed to any node. The system must be
capable of scaling to 40K nodes, so obviously efficiency, overhead,
and routing mechanisms are a concern. Lastly, it must work on
Windows and Linux. The eventual goal, years down the track, is a
decentralised social network at that scale.
My Design:
All nodes have an identification number: a 64-bit number unique
within the network. This serves as their 'address' of sorts.
All nodes send out a BONJOUR message, which propagates to all other
nodes within a certain hop count. This serves two purposes: it lets
nodes become aware of the presence of specific nodes, and it forms
the basis for determining the optimal path to each node.
You see, every BONJOUR message carries a count, which is
incremented each time the message is propagated/flooded again. That
way, the optimal path to every node in range can be found by
simply inspecting these packets and remembering which socket delivered
the BONJOUR message with the smallest hop count for a given node. Then,
by sending future messages through that socket, they will follow the
optimal path to the destination. Naturally, other options will be
stored as well, so if a path is broken, alternative paths are still known.
This will all be stored in a hash table for efficient lookup. The only
downside is memory usage, but even a table storing information for
40K nodes would still be under a MB.
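To make the idea concrete, here is a rough sketch of how I imagine the routing table working. All names (RoutingTable, onBonjour, etc.) are my own invention, and SocketId is just a stand-in for a real socket handle:

```cpp
#include <algorithm>
#include <cstdint>
#include <unordered_map>
#include <vector>

using NodeId = std::uint64_t;
using SocketId = int;  // stand-in for a real socket handle

struct Route {
    SocketId via;  // which socket delivered the BONJOUR
    int hops;      // hop count it arrived with
};

class RoutingTable {
public:
    // Called for every BONJOUR we see: remember which socket delivered
    // the smallest hop count for this node, but keep alternatives too.
    void onBonjour(NodeId node, SocketId via, int hops) {
        auto& routes = table_[node];
        auto it = std::find_if(routes.begin(), routes.end(),
                               [&](const Route& r) { return r.via == via; });
        if (it == routes.end())
            routes.push_back({via, hops});
        else if (hops < it->hops)
            it->hops = hops;  // same socket, shorter path discovered
        std::sort(routes.begin(), routes.end(),
                  [](const Route& a, const Route& b) { return a.hops < b.hops; });
    }

    // Best known next hop for a node, or -1 if the node is unknown.
    SocketId bestHop(NodeId node) const {
        auto it = table_.find(node);
        if (it == table_.end() || it->second.empty()) return -1;
        return it->second.front().via;
    }

    // Drop all routes through a dead socket; backup routes take over.
    void socketClosed(SocketId via) {
        for (auto& [node, routes] : table_) {
            routes.erase(std::remove_if(routes.begin(), routes.end(),
                             [&](const Route& r) { return r.via == via; }),
                         routes.end());
        }
    }

private:
    std::unordered_map<NodeId, std::vector<Route>> table_;
};
```

The point is that the hash table gives O(1) lookup of the best socket per node, and keeping the sorted alternatives means a broken link only costs a local erase, not a network-wide rediscovery.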
Of course, all messages sent through the network will also carry a
message ID and a TTL, to prevent a single message from getting stuck
in a loop and saturating bandwidth.
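The forwarding check I have in mind would look roughly like this (again, the names are hypothetical; a real implementation would also age old IDs out of the seen-set to bound memory):

```cpp
#include <cstdint>
#include <unordered_set>

struct MessageHeader {
    std::uint64_t messageId;  // unique per message, network-wide
    int ttl;                  // remaining hops
};

class LoopGuard {
public:
    // Returns true if the message should be forwarded on, decrementing
    // its TTL; returns false for duplicates and for expired messages.
    bool shouldForward(MessageHeader& h) {
        if (h.ttl <= 0) return false;                         // TTL expired
        if (!seen_.insert(h.messageId).second) return false;  // already seen
        --h.ttl;
        return true;
    }

private:
    std::unordered_set<std::uint64_t> seen_;
};
```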
In terms of maximum expected load, I would say about 40K nodes, each
with about 8-9 TCP connections, and each node processing about 30-50
small messages a second.
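As a back-of-envelope check on the "under a MB" claim (my own rough estimate, with assumed per-entry sizes): a routing entry of a 64-bit node ID, a socket handle, and a hop count is 16 bytes, so 40K entries stay under a megabyte even with generous hash-table overhead per entry:

```cpp
#include <cstddef>

constexpr std::size_t kNodes = 40'000;
// 8 bytes node ID + 4 bytes socket handle + 4 bytes hop count (assumed).
constexpr std::size_t kBytesPerEntry = 8 + 4 + 4;

// Total table size for a given per-entry bookkeeping overhead.
constexpr std::size_t tableBytes(std::size_t overheadPerEntry) {
    return kNodes * (kBytesPerEntry + overheadPerEntry);
}
```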
So the immediate decision is this: is YAMI4 fast/scalable enough for
this, or do the parameters mechanism and the rest of the machinery
add too much overhead?
Ultimately, if bandwidth is going to be saturated before CPU, I
would use YAMI4.
Conversely, if CPU is going to be saturated before bandwidth, then I
would have to write my own socket handling and so on, without the
overhead of parameters.
You would have a better idea of YAMI's overhead than me.
On May 23, 5:22 pm, Maciej Sobczak <
see.my.homep...@gmail.com> wrote:
> If it is possible, then such a project might be an
> interesting extension to the YAMI4 distribution.
If I write it with YAMI, I have no problem writing it as an extension,
so that you can plug it into the main source tree if you wish, or
make it available to others.