Who is using Nameko in production?


abhinav...@gmail.com

Mar 20, 2017, 7:28:36 AM
To: nameko-dev
Hi All,

We have a Django monolith and are planning to move to a microservices architecture. We came across Nameko and found it interesting.
Is there a list of projects using Nameko in production, or any case studies/articles?

It would be great if any of you could share your experience with Nameko. How have you architected it? How have you handled authentication? etc.

Thanks
Abhi


Matt Yule-Bennett

Mar 20, 2017, 1:59:47 PM
To: nameko-dev, abhinav...@gmail.com
Great question! Would love to see this thread grow.

I can kick it off --

We've been using Nameko in production at Student.com for ~2 years.

In our setup we have:

* Domain services, which look after the data and logic related to specific domains in our business (we do student accommodation, so one service manages properties and rooms, another handles prices and availability, and so on)
* Facade services, which aggregate several domain services into APIs for a specific customer (e.g. there is a facade for our website, another for our content management system)

Facade APIs are all RESTful HTTP, and they call the domain services over RPC.
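
In Nameko terms it looks roughly like this (a stripped-down sketch rather than our real code -- the service and method names here are invented for illustration):

    import json

    from nameko.rpc import rpc, RpcProxy
    from nameko.web.handlers import http


    class PropertiesService:
        # domain service: owns property/room data and the logic around it
        name = "properties"

        @rpc
        def get_property(self, property_id):
            # in reality this reads from the service's own datastore
            return {"id": property_id, "name": "Example Hall"}


    class WebsiteGateway:
        # facade service: RESTful HTTP API that fans out to domain services over RPC
        name = "website_gateway"

        properties_rpc = RpcProxy("properties")

        @http("GET", "/properties/<string:property_id>")
        def get_property(self, request, property_id):
            prop = self.properties_rpc.get_property(property_id)
            return json.dumps(prop)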

There's a fledgling example that shows this pattern here: https://github.com/nameko/nameko-examples (note the "gateway"/facade service still in a PR).

For auth, there are two approaches I've taken in the past:

1. Authenticate at the boundary (i.e. facade) and have domain services just trust their callers (simple, but weak security)
2. Also generate a JWT that contains permissions/roles for that identity, and pass that along with every call so each downstream service can perform its own authorization checks.

We should try to add auth into the example app. The JWT approach sounds complex but it's actually quite simple.
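
To give a feel for it, the core of approach 2 is roughly this (a sketch only -- the claims, role names and secret handling are made up, and in reality the secret would come from config or a secret store):

    import jwt  # PyJWT

    SECRET = "not-a-real-secret"

    def issue_token(user_id, roles):
        # done by the facade once the caller has authenticated
        return jwt.encode({"sub": user_id, "roles": roles}, SECRET, algorithm="HS256")

    def require_role(token, role):
        # done by each downstream service before doing privileged work
        claims = jwt.decode(token, SECRET, algorithms=["HS256"])
        if role not in claims.get("roles", []):
            raise PermissionError("missing role: {}".format(role))
        return claims

The token is then passed along with every call (explicitly as an argument, or in the call's context data), so each service can authorize independently.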

I've already written about our ops/hosting in this thread https://groups.google.com/forum/#!topic/nameko-dev/Sfvm9xY6MHE

Hope that helps. I'd love to see some other folks adding their experiences here too.

abhinav...@gmail.com

Mar 23, 2017, 11:18:57 PM
To: nameko-dev, abhinav...@gmail.com
Hi Matt,

Thanks for sharing the details. It has given me a fair idea.
Looking forward to the auth example.


Thanks,
Abhi

Fergus Doyle

Apr 3, 2017, 3:11:02 PM
To: abhinav...@gmail.com, nameko-dev
We use Nameko at Lystable to encapsulate our core business logic.

Our overall architecture is similar to the one Matt described, with a few subtle differences.

We use Flask applications/blueprints to provide the HTTP APIs our client apps consume, which constitutes the `Facade` pattern Matt described. This stems from a decision made quite some time ago, when the HTTP support in Nameko was still in its relative infancy; that wouldn't necessarily be the choice if we were faced with the same decision today.

Our Facades / API Gateways are responsible for managing their own resource definitions and corresponding data. What this means in practice is that they rely less on direct RPC calls to formulate responses to HTTP requests and instead maintain a local data store which is kept up to date by serialising data from lifecycle (state change) events broadcast by the domain services.
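
In Nameko terms the subscription side of that looks roughly like this (a simplified sketch -- the listener could be a small Nameko service, and the service, event and field names below are invented):

    from nameko.events import event_handler

    local_store = {}  # stand-in for the facade's own datastore


    class WebsiteGateway:
        name = "website_gateway"

        @event_handler("properties", "property_updated")
        def on_property_updated(self, payload):
            # serialise the event payload into the local read model so HTTP
            # requests can be answered without an RPC round-trip
            local_store[payload["id"]] = payload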

We also talk quite a lot about Domain services (in the context of Domain Driven Design). Services are considered either Domain or Utility and each one has a clear remit or mission statement to help enforce the service boundaries and mitigate the likelihood of conflating services with logic outside of their responsibility.

Domain services are usually responsible for a number of entities / resource types and the related business logic, and for publishing lifecycle / domain events for any consequential state changes.

Utility services are used for breaking out more functional compute logic that can be shared by other services. Examples here include sending email notifications or centralised scheduling of timed triggers.
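
A utility service in that mould might look something like this (again just a sketch; the interval and the delivery mechanism are placeholders):

    from nameko.rpc import rpc
    from nameko.timer import timer


    class NotificationsService:
        name = "notifications"

        @rpc
        def send_email(self, to, subject, body):
            # hand off to whichever mail provider is configured (placeholder)
            deliver(to, subject, body)

        @timer(interval=3600)
        def hourly_digest(self):
            # a centralised home for timed triggers, instead of duplicating
            # scheduling logic across domain services
            pass


    def deliver(to, subject, body):
        # placeholder for the real delivery mechanism
        print("sending to", to, subject)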

We also terminate authentication at the gateway and trust requests made within the Nameko cluster.

The dependency injection patterns are really powerful and can significantly reduce complexity. They also mean you're not limited to AMQP or HTTP as transport protocols, should you so desire.
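
For anyone who hasn't written one, a custom DependencyProvider is only a handful of lines (a sketch, with a made-up SearchClient standing in for any external resource):

    from nameko.extensions import DependencyProvider


    class SearchClient:
        # hypothetical client for some external system
        def __init__(self, url):
            self.url = url

        def query(self, term):
            pass


    class Search(DependencyProvider):
        def setup(self):
            # runs once when the service container starts
            self.client = SearchClient(self.container.config["SEARCH_URL"])

        def get_dependency(self, worker_ctx):
            # whatever is returned here is what the service method sees as self.search
            return self.client


    class SomeService:
        name = "some_service"

        search = Search()  # injected per the provider above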

Hope this helps.

Cheers,

Fergus 


Conor Seabrook

Apr 12, 2017, 9:13:18 AM
To: nameko-dev, abhinav...@gmail.com
We have been running Nameko in production for a little over a year now with great success. We are a mobile game developer with a number of successful games which are heavily server-based. Since we control both the client and the server, our gateway/API uses HTTP and the "experimental" websocket service for two-way communication.

Our setup:

* Gateway which passes requests directly to services, and sends events out to clients. Nginx handles ssl termination. 

* Currently we have 25 services spanning 15 compute nodes, at least 3 instances of each for HA and more for high volume services. We use Chef for configuration management to quickly scale and deploy.

* We use Redis as our main datastore backed by PostgreSQL. Redis writes flow to our data warehouse via dedicated db services.

* nameko-sentry sends errors to our Sentry servers for analysis.

* Every entrypoint hit is logged to file and shipped to Logstash with Filebeat. We then grok the logs and push them to Elasticsearch for analysis. Having a central logging solution has been invaluable: we can easily see all activity in a player's session, or view all the service calls that made up a single client request via a correlation_id (see the sketch after this list). We use Kibana to query these logs. We generate on average 5 million hits per day, but burst much higher.

* Nagios/Ganglia to monitor everything including growing RabbitMQ queues.

* GitLab continuous integration to easily push service code to production.
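
The entrypoint logging mentioned above is little more than a DependencyProvider along these lines (a rough sketch, not our exact implementation; it uses Nameko's built-in call id stack, and our real correlation_id handling differs in detail):

    import logging

    from nameko.extensions import DependencyProvider

    log = logging.getLogger("entrypoint_audit")


    class EntrypointLogger(DependencyProvider):
        def worker_setup(self, worker_ctx):
            log.info(
                "call %s.%s call_id_stack=%s",
                worker_ctx.service_name,
                worker_ctx.entrypoint.method_name,
                worker_ctx.call_id_stack,  # ties together nested service calls
            )

        def worker_result(self, worker_ctx, result=None, exc_info=None):
            log.info(
                "done %s.%s errored=%s",
                worker_ctx.service_name,
                worker_ctx.entrypoint.method_name,
                exc_info is not None,
            )

        def get_dependency(self, worker_ctx):
            return None  # nothing to inject; this provider only observes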

Overall, moving to microservices was the right decision for us. We looked at rolling our own solution as well as a number of other frameworks, but in the end landed on Nameko, which has been a joy to work with. We had a rather huge Django/Celery/Twisted application which was becoming increasingly difficult to work with, and easy to break. So we moved to microservices and never looked back!