Iris v0.3.0 out, with fresh new APIs


Péter Szilágyi

Aug 11, 2014, 5:16:27 AM
to projec...@googlegroups.com
Hey all,

  I'm happy to announce the next development in the project's timeline, the release of version v0.3.0 of the Iris core. It features a lot of stability fixes and optimizations, but most importantly, reworked and stabilized client APIs for both Erlang and Go!

  You can find the official announcement on the Iris website, detailing all the added goodies and a lot of fresh resources that you can sink your teeth into (new playgrounds, quick start guides, dev docs and more) :D

Happy coding,
  Peter

Lee Sylvester

Aug 11, 2014, 12:53:20 PM
to projec...@googlegroups.com
Hi Péter,

This is excellent stuff.  I must have missed this the first time around.  I'm currently rebuilding a cloud infrastructure I wrote last year (properly this time) and I'm looking for a decent messaging system.  A couple of years ago, I used Skynet when writing the platform in Go.  At the time, I'd found Go to be too immature for my needs, and so migrated it to Erlang.  It now works great, but it isn't flexible enough for my needs, hence the rewrite.  Iris looks like a perfect fit, and exactly what I was looking for.

To reiterate the questions I posted to the Erlang list:

1. Is it possible to cluster brokers?  Having a single broker would be a single point of failure.  It would be better to cluster the brokers; or maybe even have one broker per physical VM.

2. What messaging protocol does this use?

3. “When” a node is temporarily partitioned on the network, does it try to reconnect?  If so, how often and for how long?

4. When sending a message to a cluster, how is the node selected?  I'm assuming round robin.  However, it would be good to be able to supply a callback for a node so that it can supply a value.  This value could then be compared against its peers and the lowest value selected.  For instance, I have a messaging transport cluster which can cater for a given number of clients.  Currently, I ping all available nodes and they readily supply a count of their connected peers.  I can then make an informed choice, always picking the lowest value to get the least loaded node.  This is a simple task, but one which would work well in this scenario, especially when tasks may take seconds or hours (video encoding).
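That least-loaded pick can be sketched in a few lines of Go. The node names and the count-gathering step here are hypothetical; in the setup described above, the counts would come from pinging each node for its connected-peer count.

```go
package main

import "fmt"

// pickLeastLoaded returns the node reporting the fewest connected
// peers. Ties are broken by node name for determinism.
func pickLeastLoaded(counts map[string]int) string {
	best, bestCount := "", int(^uint(0)>>1) // start with max int
	for node, c := range counts {
		if c < bestCount || (c == bestCount && node < best) {
			best, bestCount = node, c
		}
	}
	return best
}

func main() {
	// Hypothetical peer counts gathered by pinging each node.
	counts := map[string]int{"node-a": 12, "node-b": 3, "node-c": 7}
	fmt.Println(pickLeastLoaded(counts)) // node-b has the fewest peers
}
```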

Thanks,
Lee

Lee Sylvester

Aug 11, 2014, 1:10:23 PM
to projec...@googlegroups.com
The website answered Question 1 :-)

Lee Sylvester

Aug 11, 2014, 1:39:48 PM
to projec...@googlegroups.com
Okay, so, for point two, the answer is "it shouldn't (and mustn't) matter", which I would agree with.

Point 4 is a little disappointing, though.  From the source, it looks like nodes are selected randomly.  Personally, I think this is a good opportunity for node clusters to specify their own choice of load balancing.  Since processes can be expensive and lengthy, choosing a node which is under heavy load while other nodes are vacant is not acceptable for resource-intensive networks.  On our own network, for example, a single node may be tied up with heavy, resource-intensive parsing for a couple of hours at a time, while identical nodes may be handling many hundreds of small processes.  A node will not know whether it is being given a heavy process or a light one, but it does know whether it is available, and it knows the length of its queue.

Just a thought.

I would still like to know about point 3 :-)

Thanks,
Lee

Péter Szilágyi

Aug 11, 2014, 2:02:17 PM
to Lee Sylvester, projec...@googlegroups.com, erlang-q...@erlang.org
Hey Lee,

  I'm cross replying to both the erlang-questions and the project-iris mailing list to have an answer at both sides.

  Just to have a complete answer to all raised questions:

1. Is it possible to cluster brokers?  Having a single broker would be a single point of failure.  It would be better to cluster the brokers; or maybe even have one broker per physical VM.
As you probably found out, the brokers are actually local access points into the system. The routing itself is done in a decentralized way by the full collection of brokers (or relays). So yes, Iris is meant to run one node per VM, with all clients attaching to that single node; the Iris nodes in the network then form the P2P overlay themselves.

2. What messaging protocol does this use?
A completely custom one, based on Go's gob encoding. Since Iris isn't meant to talk internally to anything else (client applications are free to do whatever they want), there is no need to standardize a protocol. This way it can evolve freely whenever needed (and it currently does so heavily, since there are quite a few rough edges that need sorting out).
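For readers unfamiliar with gob: the stream self-describes its types, which is what lets a private protocol evolve without a frozen schema. Here is a minimal round-trip sketch; the `envelope` struct is a made-up stand-in, not Iris's actual wire format.

```go
package main

import (
	"bytes"
	"encoding/gob"
	"fmt"
)

// envelope is a hypothetical stand-in for an internal wire message;
// the real Iris field layout is not part of any public spec.
type envelope struct {
	Cluster string
	Payload []byte
}

// roundTrip encodes an envelope to a gob stream and decodes it back.
func roundTrip(in envelope) (envelope, error) {
	var buf bytes.Buffer
	if err := gob.NewEncoder(&buf).Encode(in); err != nil {
		return envelope{}, err
	}
	var out envelope
	err := gob.NewDecoder(&buf).Decode(&out)
	return out, err
}

func main() {
	out, err := roundTrip(envelope{Cluster: "webserver", Payload: []byte("hi")})
	if err != nil {
		panic(err)
	}
	fmt.Println(out.Cluster, string(out.Payload)) // webserver hi
}
```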

3. “When” a node is temporarily partitioned on the network, does it try to reconnect?  If so, how often and for how long?
Each and every Iris node is constantly running a bootstrapping mechanism. In essence, it is based on two procedures: random address probing and local scanning. Local scanning finds nodes close by (IP wise), whereas the probing will discover clusters further away (again, IP wise). This mechanism is running constantly, even if the node is converged within the network. This ensures that in the case of partitioning and reconnecting, the network can converge quickly back into a single one. The bootstrapper will probably need some love in the future as it's still in its original state from a year ago, but you get the picture :) 

4. When sending a message to a cluster, how is the node selected? [...]
Actually, it is neither round robin nor random. There was a minor logical flaw when I designed the load balancer, so it needs a bit of rework (if the nodes are 100% loaded, the balancer screws things up). The way it works is that Iris constantly measures the CPU usage on the nodes and, based on the load and the number of requests handled over the previous time cycle, tries to approximate each node's processing capacity (e.g. it can handle 100 "webserver" requests/sec). Based on that approximation, it distributes the arriving requests among the participants. This mechanism obviously only works for shorter tasks (i.e. on the order of 10s/100s of ms), so the jury's still out on the best solution for running longer "tasks".
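The capacity-weighted idea above can be sketched as a proportional split. This is not Iris's actual balancer code, just an illustration: given per-node capacity estimates from the last cycle, incoming requests are divided in proportion to them.

```go
package main

import "fmt"

// distribute splits n incoming requests among nodes in proportion to
// their approximated capacities (requests handled per cycle). A
// sketch of the capacity-weighted idea, not Iris's real balancer.
func distribute(n int, capacity []int) []int {
	total := 0
	for _, c := range capacity {
		total += c
	}
	shares := make([]int, len(capacity))
	if total == 0 {
		return shares // no known capacity: nothing to assign
	}
	assigned := 0
	for i, c := range capacity {
		shares[i] = n * c / total
		assigned += shares[i]
	}
	// Hand any leftover from integer rounding to the largest node.
	big := 0
	for i, c := range capacity {
		if c > capacity[big] {
			big = i
		}
	}
	shares[big] += n - assigned
	return shares
}

func main() {
	// Node capacities of 100, 50, 50 req/cycle give a 2:1:1 split.
	fmt.Println(distribute(200, []int{100, 50, 50})) // [100 50 50]
}
```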

  All in all there is still a *lot* to do, but I'm hoping there is also enough to get people interested in it :)

Cheers,
  Peter


--
You received this message because you are subscribed to the Google Groups "Iris cloud messaging" group.
To unsubscribe from this group and stop receiving emails from it, send an email to project-iris...@googlegroups.com.
For more options, visit https://groups.google.com/d/optout.

Lee Sylvester

Aug 11, 2014, 5:15:24 PM
to projec...@googlegroups.com, erlang-q...@erlang.org
Hi Péter,

Thank you very much for your answers.  I am very much interested in Iris and would like to contribute where possible.  I have lots of experience with both Erlang and Go, and with messaging and cloud development in general, so I can hopefully provide a lot of support, whether directly or through supporting modules that complement Iris.  As I said, I've been looking for a prebuilt solution of this kind for some time, so this is a good find for me.

Kind regards,
Lee
  

Péter Szilágyi

Aug 14, 2014, 7:12:39 AM
to Lee Sylvester, projec...@googlegroups.com, erlang-q...@erlang.org
Hi Lee,

  Help and contributions are always welcome, and there are a lot of places which could use improvement (for example, the Erlang binding is significantly slower than the Go version, the bootstrapper needs a rework, a new load balancer is needed, plus leveled logging, monitoring, etc.). But if you really want to help out, I'd suggest starting to experiment with Iris: you'll find your specific pain points, and we can go from there :)

Cheers,
  Peter


Lee Sylvester

Aug 18, 2014, 1:16:08 PM
to projec...@googlegroups.com, erlang-q...@erlang.org
Sounds like a plan, Péter.  I've already agreed with my colleagues to use it.  We'll let you know what we find, and if we improve it, we'll let you know that, too :-)

Thanks,
Lee 