Hi Felipe, how are you doing?
On Sun, Nov 15, 2015 at 1:33 AM, Felipe Santos <
feli...@gmail.com> wrote:
> [...]
>
> I have some questions; perhaps some of them sound silly, but the docs were
> not enough for me.
No problem, I'll try to answer your questions. Please let us know
what we can improve in our docs! :-)
> 1) If we have an environment where we can run only one application (unit)
> per host, because of network and performance requirements, what are the
> advantages of using docker+tsuru? Only easier deployments?
Docker eases the distribution of your application code across a
cluster of VMs. With tsuru, you get a simpler deployment process, but
also automatic scaling. Suppose you want to run just one container per
host: you can configure tsuru's count-based auto scaler to allow at
most 1 container per host, and whenever you add new containers, tsuru
will automatically create new VMs (on EC2, CloudStack or DigitalOcean)
to run them. Keep in mind that the scaling happens through an
auto-scale event, so it's not instantaneous, but after a few minutes
everything should be working as intended.
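As a sketch, the count-based scaler is configured in tsuru.conf; the exact keys below are from memory and may differ between tsuru versions, so check the configuration reference for yours:

```yaml
# tsuru.conf (fragment): enable the counting auto scaler
docker:
  auto-scale:
    enabled: true
    # run the scaler periodically (seconds)
    run-interval: 300
    # allow at most 1 container per node; exceeding this triggers new VMs
    max-container-count: 1
```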
So, to illustrate: suppose you have 4 containers of an application
named "myapp", each running on a dedicated EC2 VM, so you have 4 VMs
as well. Then you run `tsuru unit-add -a myapp 4`, doubling the number
of containers. tsuru will temporarily accommodate these new containers
on the four existing VMs, and the counting auto scaler will detect
that there is more than one container (the configured maximum) running
on each VM. It will then spawn 4 new VMs and, after they're available,
rebalance the 8 containers across the 8 VMs.
> 2) tsuru uses Docker, but could those who are developing the application
> code also benefit from Docker containers? For example, could developers get
> the Docker image and develop on it?
Developers could download the image from tsuru, but since they can't
run the same hooks that tsuru runs, it wouldn't actually benefit them.
What would be nicer is being able to build a Docker image locally and
deploy that image to tsuru, but currently tsuru doesn't support that.
There's an open issue for it
(https://github.com/tsuru/tsuru/issues/1314).
> 3) Is it recommended to have an ELB in front of the routers? Could tsuru
> autoscale routers?
Yes, you can have an ELB in front of the routers, and we recommend
doing so. tsuru doesn't autoscale routers, but since tsuru doesn't
talk directly to the router VMs, you could have an autoscaling group
configured to spawn new VMs running hipache or planb. You'd need an
AMI with the router, and we recommend having a local Redis slave on
each router VM, configured to point to the master to which tsuru
writes the routes. tsuru talks only to the Redis master, and the
router VMs sync with it, so there are three components that can be
scaled independently of each other:
- the tsuru API
- the Redis master (probably using Redis Sentinel for high availability)
- the router (which could be an autoscaling group)
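A minimal sketch of the slave setup on each router VM (the master hostname below is just a placeholder for wherever tsuru writes routes):

```yaml
# redis.conf (fragment) on the router VM:
# replicate the route table from the master that tsuru writes to
slaveof redis-master.internal 6379
slave-read-only yes
```

With this, new router VMs spawned by the autoscaling group come up with a full copy of the routes as soon as replication catches up.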
> 4) Is it possible for the autoscaler to create nodes in multiple Amazon
> regions, spreading them fairly?
Yes, you can, but there are other issues to tackle: you'd still have
the router in a single region. You could have two autoscaling groups
in different regions and use DNS load balancing between the ELBs, but
you'd still need to figure out how to handle Redis synchronization
across regions, for example.
> 5) Is it possible to deploy to only some nodes? For example, I have three
> nodes and I want to deploy to only one of them and route traffic from some
> IPs to that node.
Yes, you can group nodes into pools and, when creating the
application, specify which pool it will use. So if you have three
nodes (A, B and C), you could have two pools, say pool1 with node A,
and pool2 with nodes B and C. If you run the command:
% tsuru app-create myapp java --pool pool1
containers of the app myapp will always run on node A.
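A rough sketch of the whole pool setup follows; the command and flag names are from memory of the tsuru-admin CLI of that era and may differ in your version (the node address is an example), so check `tsuru-admin help` before running:

```shell
# create the pool and register node A in it (address is illustrative)
tsuru-admin pool-add pool1
tsuru-admin docker-node-add --register address=http://node-a:2375 pool=pool1

# apps created in pool1 will only get containers on pool1's nodes
tsuru app-create myapp java --pool pool1
```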
> 6) We have the same application across the cluster. Does tsuru deploy one
> instance at a time, so we have zero downtime? What is tsuru's strategy?
If you have, say, 6 units of an application and run a deployment,
tsuru will do the following steps:
1. Create a container with the old version of the application,
download the new version in the container, install dependencies and
run build hooks
2. Create a new image with the previous container
3. Create 6 new containers with the brand new image
4. Wait for these 6 new containers to become available (using the
healthcheck in the tsuru.yaml file)
5. Add the 6 new containers to the router (at this point, your
application is actually running with 12 containers: 6 with the old
version and 6 with the new version)
6. Remove the 6 old containers from the router
7. Destroy the 6 old containers
So there's almost no downtime. You can see downtime if you don't
configure an application healthcheck, or if, by step 7, the old
containers are still handling some connections (which is rarely the
case if your requests finish quickly).
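For reference, the healthcheck used in step 4 lives in the app's tsuru.yaml; a minimal example (the path is just an illustration, and supported fields may vary by tsuru version):

```yaml
# tsuru.yaml (fragment): new containers only enter the router
# once this check passes
healthcheck:
  path: /healthcheck
  method: GET
  status: 200
```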
I hope these answers shed some light on your questions. Thanks for
your interest in tsuru!
Best regards,
Francisco