defmodule Supervisor.Process.Supervisor do
  use Supervisor.Behaviour

  def start_link() do
    :supervisor.start_link({:local, :process_supervisor}, __MODULE__, [])
  end

  def init(_options) do
    children = [
      supervisor(Supervisor.Process.Single.Supervisor, [])
    ]
    supervise(children, strategy: :simple_one_for_one)
  end

  def start_child(id, data) do
    :supervisor.start_child(:process_supervisor, [id, data])
  end

  def stop_child(pid) do
    :supervisor.terminate_child(:process_supervisor, pid)
  end
end
defmodule Supervisor.Process.Single.Supervisor do
  use Supervisor.Behaviour

  def gen_name(id) do
    binary_to_atom("single.process.supervisor.#{id}")
  end

  def start_link(id, data) do
    :supervisor.start_link({:local, gen_name(id)}, __MODULE__, [id, data])
  end

  def init([id, data]) do
    children = [
      worker(Supervisor.Process.Single.Responsible, [id, data], restart: :transient),
      worker(Supervisor.Process.Single.Rabbit, [id], restart: :transient)
    ]
    supervise(children, strategy: :one_for_one)
  end
end
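For context, here is a hedged sketch of how the top supervisor above might be driven. The id and data values are illustrative; with :simple_one_for_one, the arguments passed to start_child/2 get appended to the child's start_link arguments:

```elixir
# Start the top-level simple_one_for_one supervisor.
{:ok, _sup} = Supervisor.Process.Supervisor.start_link()

# Start one (Responsible + Rabbit) unit; [id, data] is forwarded to
# Supervisor.Process.Single.Supervisor.start_link(id, data).
{:ok, pid} = Supervisor.Process.Supervisor.start_child("order-42", [queue: "orders"])

# Tearing the unit down terminates both of its workers.
:ok = Supervisor.Process.Supervisor.stop_child(pid)
```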
You received this message because you are subscribed to a topic in the Google Groups "elixir-lang-talk" group.
Reacting to some event, I need to start a "Process" that is composed of several distinct Erlang processes. One is dedicated to handling MQ events; another handles TCP / Port interactions. The point was: if the process handling Rabbit went down for some reason, it could get restarted independently without any problem for the rest of the tree, and the same goes for the other part. On the other hand, if the TCP or Port processes were to crash or get terminated for some reason, that would need some other handling, which is why they don't show up in the second supervisor and get monitored by Responsible instead.

The whole (Rabbit + Responsible) pair represents one "unit", so to speak; that made sense to me, at least. Thus I decided to have them supervised together. This supervisor was itself to be watched, just in case the whole tree went down and had to get killed automatically.
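The "monitored by Responsible" part could look like the sketch below, in the same pre-1.0 style as the code above. Everything here is an assumption on my side: Supervisor.Process.Single.Tcp and its start/1 function are hypothetical names, and the state layout is just illustrative. The point is that Process.monitor/1 turns a crash of the TCP/Port process into a :DOWN message, so Responsible can apply its own recovery policy instead of the supervisor's:

```elixir
defmodule Supervisor.Process.Single.Responsible do
  use GenServer.Behaviour

  def start_link(id, data) do
    :gen_server.start_link(__MODULE__, [id, data], [])
  end

  def init([id, data]) do
    # Start the TCP/Port process OUTSIDE the supervision tree and monitor it,
    # so its crash arrives here as a :DOWN message instead of a restart.
    {pid, ref} = start_and_monitor(id)
    {:ok, {id, data, {pid, ref}}}
  end

  def handle_info({:DOWN, ref, :process, _pid, _reason}, {id, data, {_tcp, ref}}) do
    # Custom handling goes here: restart the TCP process, back off, or stop.
    {:noreply, {id, data, start_and_monitor(id)}}
  end

  defp start_and_monitor(id) do
    {:ok, pid} = Supervisor.Process.Single.Tcp.start(id) # hypothetical starter
    {pid, Process.monitor(pid)}
  end
end
```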
If it is necessary to clean up before termination, the shutdown strategy must be a timeout value and the gen_server must be set to trap exit signals in the init function. When ordered to shut down, the gen_server will then call the callback function terminate(shutdown, State):
:shutdown - defines how a child process should be terminated. Defaults to 5000 for a worker and :infinity for a supervisor.
Shutdown values
The following shutdown values are supported:
:brutal_kill - the child process is unconditionally terminated using exit(child, :kill);
:infinity - if the child process is a supervisor, it is a mechanism to give the subtree enough time to shut down. It can also be used with workers, with care.
Finally, it can also be any integer, meaning that the supervisor tells the child process to terminate by calling exit(child, :shutdown) and then waits for an exit signal back. If no exit signal is received within the specified time (in milliseconds), the child process is unconditionally terminated using exit(child, :kill).
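Putting the two quotes together: for terminate/2 to run on shutdown, the worker must trap exits and its child spec needs an integer shutdown value (not :brutal_kill). A minimal sketch in the same pre-1.0 style, with an illustrative module name:

```elixir
defmodule Cleanup.Worker do
  use GenServer.Behaviour

  def init(state) do
    # Without this flag, the supervisor's exit(child, :shutdown) terminates the
    # process immediately and terminate/2 is never invoked.
    Process.flag(:trap_exit, true)
    {:ok, state}
  end

  def terminate(:shutdown, _state) do
    # Clean up here (close sockets, release the queue, ...) within the window
    # granted by the child spec's shutdown value.
    :ok
  end
end

# Child spec granting 10 seconds for cleanup before exit(child, :kill):
# worker(Cleanup.Worker, [], shutdown: 10000)
```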