Note that placement is not automatic or automatically optimal, so if you want to do model parallelism, you have to come up with a parallelization scheme that works for your specific model. That's hard (human) work.

On the other hand, data parallelism can be made to work automatically, and work pretty well, assuming the workers available are similar. Hence we like data parallelism.

Martin
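[Editor's note: a minimal sketch, not from the original thread, of what a hand-crafted model-parallel placement looks like in TF 1.x. The job/task device strings and layer sizes are assumptions; the point is only that the split has to be chosen by hand for the specific model.]

```python
import tensorflow as tf

x = tf.placeholder(tf.float32, shape=[None, 784])

# First half of the model pinned to one worker's GPU (device name is an assumption).
with tf.device("/job:worker/task:0/gpu:0"):
    h1 = tf.layers.dense(x, 512, activation=tf.nn.relu)

# Second half lives on a different worker; the activations cross the network here.
with tf.device("/job:worker/task:1/gpu:0"):
    h2 = tf.layers.dense(h1, 256, activation=tf.nn.relu)
    logits = tf.layers.dense(h2, 10)
```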
On Thu, Nov 2, 2017 at 8:58 AM, Jack DH <tzn...@gmail.com> wrote:
Hello. I'm working on a parallel cluster with TensorFlow, but is there any reason why we need data parallelism with in-graph replication? If a job is running within a single graph ("in-graph"), it seems it could be processed with model parallelism without any pain. I understand that model parallelism is heavily dependent on RDMA, so it can cause latency issues on NUMA architectures, but it would be great if someone could share an opinion.
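[Editor's note: for reference, a hedged sketch of the in-graph data-parallel replication the question refers to, in TF 1.x. One client builds a single graph that splits the batch across worker devices and combines the per-tower losses on one device. The device strings and the toy tower function are assumptions for illustration.]

```python
import tensorflow as tf

def build_tower(x_part):
    # Stand-in for the real model: one dense layer and a dummy loss.
    return tf.reduce_mean(tf.layers.dense(x_part, 10) ** 2)

x = tf.placeholder(tf.float32, shape=[None, 784])
parts = tf.split(x, num_or_size_splits=2, axis=0)  # one slice of the batch per worker

tower_losses = []
for i, part in enumerate(parts):
    # Replicate the model on each worker; reuse the same variables after tower 0.
    with tf.device("/job:worker/task:%d" % i), tf.variable_scope("model", reuse=(i > 0)):
        tower_losses.append(build_tower(part))

# Combine the towers and apply the update in one place (e.g. a parameter-server task).
with tf.device("/job:ps/task:0"):
    loss = tf.add_n(tower_losses) / len(tower_losses)
    train_op = tf.train.GradientDescentOptimizer(0.1).minimize(loss)
```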