Eureka vs. Zookeeper


gsy...@gsconsulting.biz

Jun 22, 2013, 4:28:31 AM
to eureka_...@googlegroups.com
Hi!

I'm trying to determine what the advantages of Eureka for discovery are vs. Zookeeper with curator-x-discovery. I can see the advantage if I don't need Zookeeper for anything else; Eureka looks like it's much simpler to get up and running and keep running. However, if I need Zookeeper for other coordination tasks anyway, does adding Eureka into the mix buy me anything? Or is it better to keep my system simpler by using Zookeeper for both discovery and coordination?

Thanks,

Greg

Karthikeyan Ranganathan

Jun 23, 2013, 12:00:02 AM
to eureka_...@googlegroups.com
I have tried to capture the differences here in the FAQ - https://github.com/Netflix/eureka/wiki/FAQ



gsy...@gsconsulting.biz

Jun 24, 2013, 1:04:10 AM
to eureka_...@googlegroups.com
Ah. The impression I got from the FAQ was that it was the rationale for not using Zookeeper as a dependency of Eureka, not in place of Eureka. Rereading it now, it's a little clearer. So the key advantage of Eureka vs. Zookeeper is that it's a lighter-weight dependency for services that only need discovery and none of the coordination or consistency capabilities of Zookeeper, and that things like discovery during service bootstrap (i.e. discovery of the service registry itself) are already handled in the client library. Is that a complete read?
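
To make sure I understand that bootstrap piece, here's a minimal sketch of what I believe the 1.x client setup looks like in Java (this is just my sketch; the vip address is made up, and I'm assuming the Eureka server serviceUrls are supplied via eureka-client.properties):

import com.netflix.appinfo.ApplicationInfoManager;
import com.netflix.appinfo.InstanceInfo;
import com.netflix.appinfo.MyDataCenterInstanceConfig;
import com.netflix.discovery.DefaultEurekaClientConfig;
import com.netflix.discovery.DiscoveryManager;

public class EurekaBootstrapExample {
    public static void main(String[] args) {
        // Reads eureka-client.properties from the classpath; that file carries
        // the serviceUrls of the Eureka servers, so locating the registry
        // itself is handled by the client library, not by my code.
        DiscoveryManager.getInstance().initComponent(
                new MyDataCenterInstanceConfig(),
                new DefaultEurekaClientConfig());

        // Mark this instance UP so other services can discover it.
        ApplicationInfoManager.getInstance()
                .setInstanceStatus(InstanceInfo.InstanceStatus.UP);

        // Look up another service by its vip address (a made-up name here).
        InstanceInfo next = DiscoveryManager.getInstance()
                .getDiscoveryClient()
                .getNextServerFromEureka("someservice.mydomain.net:7001", false);
        System.out.println(next.getHostName() + ":" + next.getPort());
    }
}

If that's roughly right, the "find the registry" step is solved for me, which is exactly the part I'd otherwise have to build on top of Zookeeper.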

Thanks,

Greg

David Trott

Jun 24, 2013, 7:00:31 PM
to eureka_...@googlegroups.com
I think your read is right:

Eureka goes out of its way to provide availability, sometimes at the expense of consistency.

Whereas the Zookeeper server cluster relies on a quorum: a majority of the machines must be accessible (for example, a five-node ensemble keeps working with two nodes down, but not three).
However, if coded correctly, you can ensure an almost** consistent view across all clients.
** There is a time delay (the heartbeat interval) within which the nodes can be out of sync.

Hence Eureka is AP, whereas Zookeeper is either CP or CA, depending on how you look at the quorum requirement.

My read was that Zookeeper was a good choice within a data center (where partitioning is less likely), whereas Eureka was a better choice for spanning data centers.

David


PS: If you are seriously considering Zookeeper, you should take a look at Curator: https://github.com/Netflix/curator
Coding against Zookeeper directly will certainly improve your understanding of distributed event handling, but you may end up pulling out all your hair ;-)
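
For a flavour of it, here's a rough sketch of registering and querying a service with curator-x-discovery (the connect string, base path, and service name are all made up, and I'm using the org.apache.curator package names from the Apache move):

import java.util.Collection;

import org.apache.curator.framework.CuratorFramework;
import org.apache.curator.framework.CuratorFrameworkFactory;
import org.apache.curator.retry.ExponentialBackoffRetry;
import org.apache.curator.x.discovery.ServiceDiscovery;
import org.apache.curator.x.discovery.ServiceDiscoveryBuilder;
import org.apache.curator.x.discovery.ServiceInstance;

public class CuratorDiscoveryExample {
    public static void main(String[] args) throws Exception {
        // Connect to a (made-up) three-node ensemble, retrying with backoff.
        CuratorFramework client = CuratorFrameworkFactory.newClient(
                "zk1:2181,zk2:2181,zk3:2181",
                new ExponentialBackoffRetry(1000, 3));
        client.start();

        // When the connection goes SUSPENDED or LOST, the local view may be
        // stale -- the "almost consistent" caveat above.
        client.getConnectionStateListenable().addListener(
                (c, newState) -> System.out.println("ZK connection: " + newState));

        // Describe this instance; Curator registers it as an ephemeral znode,
        // so it disappears automatically if the session dies.
        ServiceInstance<Void> thisInstance = ServiceInstance.<Void>builder()
                .name("my-service")
                .port(8080)
                .build();

        ServiceDiscovery<Void> discovery = ServiceDiscoveryBuilder.builder(Void.class)
                .client(client)
                .basePath("/services")
                .thisInstance(thisInstance)
                .build();
        discovery.start();

        // Query the currently registered instances of "my-service".
        Collection<ServiceInstance<Void>> instances =
                discovery.queryForInstances("my-service");
        for (ServiceInstance<Void> i : instances) {
            System.out.println(i.getAddress() + ":" + i.getPort());
        }

        discovery.close();
        client.close();
    }
}

Note the ephemeral-node registration: instance liveness is tied to the ZK session, which is exactly where the quorum/partition behaviour above comes into play.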







Andrew Spyker

Dec 17, 2013, 2:59:36 PM
to eureka_...@googlegroups.com
Wrote a quick blog entry on the topic with my current level of knowledge: http://ispyker.blogspot.com/2013/12/zookeeper-as-cloud-native-service.html

Andrew Spyker

Mar 3, 2014, 2:29:06 PM
to eureka_...@googlegroups.com
While this is about using Zookeeper for more than instance registration, the article is useful in pointing out that Zookeeper became enough of a SPOF at Pinterest that they created a daemon to cache Zookeeper data for the times when it wasn't up. I think the caching on top of Eureka is similar, but I think Eureka's behavior under network partitioning mitigates some of the SPOF issues presented.

mic...@fullcontact.com

Mar 7, 2014, 4:06:24 PM
to eureka_...@googlegroups.com
For what it's worth, here at FullContact we're making the switch to Eureka for discovery. A few reasons:

1) Eureka integrates better with other NetflixOSS components (Asgard especially). We've added Zookeeper providers for many of these, but ultimately it's an uphill battle.
2) Eureka is available. ZooKeeper, while tolerant against single node failures, doesn't react well to long partitioning events. For us, it's vastly more important that we maintain an available registry than a necessarily consistent registry. If us-east-1d sees 23 nodes and us-east-1c sees 22 nodes for a little bit, that's OK with us.
3) ZooKeeper is hard. We've gotten pretty good at it, but it requires care and feeding.

Michael

Nitesh Kant

Mar 8, 2014, 4:05:22 AM
to eureka_...@googlegroups.com
Thanks, Michael, for sharing this with us! Any feedback based on your usage is appreciated.

jma...@gmail.com

Jul 11, 2014, 6:43:48 AM
to eureka_...@googlegroups.com
> 2) Eureka is available. ZooKeeper, while tolerant against single node failures, doesn't react well to long partitioning events. For us, it's vastly more important that we maintain an available registry than a necessarily consistent registry. If us-east-1d sees 23 nodes and us-east-1c sees 22 nodes for a little bit, that's OK with us.

Hi folks --

Having done some reading around ZK failure modes during inter-AZ partitions, particularly Andrew Spyker's post at http://ispyker.blogspot.com/2013/12/zookeeper-as-cloud-native-service.html , I think this, and Eureka's position w.r.t. CAP, is a very good point which could do with being expanded on (in the Eureka FAQ, maybe?). Many organisations are using ZK-based service discovery systems and may not have thought this risk through. I certainly hadn't. :(

Also, would it be possible to expand on how Eureka will recover post-partition? Do a Eureka client's updates require merging across multiple Eureka server nodes, and how is this done (simple unordered flooding of updates? timestamp-based ordered streams? vector clocks?)

Finally, have you considered asking @aphyr to run a Jepsen test against Eureka? http://aphyr.com/tags/Jepsen -- could be enlightening ;)

--j.
