Using seldon-server with a custom TensorFlow-based model for recommendations


tyro...@gmail.com

Apr 4, 2018, 9:27:38 PM
to Seldon Users
Hello,

I just started looking at Seldon, and I have to say the whole platform looks very promising. The GitHub page mentions that the system can use TensorFlow models, but I'm not sure how. I understand this is done through the so-called microservices (?), but I'm not sure how the training and serving phases of such a model fit together. For example, is it possible to use a second cluster, separate from the one hosting Seldon, to do the training? Would I be able to use Google Cloud ML Engine to run the training and serving? The TensorFlow model I'm thinking of building is similar to the one described in this video.
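
To make my question more concrete, here is a rough sketch of the kind of wrapper I imagine writing. I'm assuming the Python microservice convention of a class that exposes a predict method; the class name, tensor names, and export directory below are just placeholders, and the exact interface expected by seldon-server's recommendation microservices may well be different.

# Hypothetical sketch only: assumes a class-with-predict wrapper convention
# and a TensorFlow SavedModel exported elsewhere (e.g. after training on
# Cloud ML Engine). Names are placeholders, not Seldon's actual API.
import numpy as np
import tensorflow as tf


class RecommenderModel(object):
    """Wraps a trained TensorFlow model so a microservice can call it."""

    def __init__(self, export_dir="./export"):
        # Load the SavedModel into its own graph/session (TF 1.x style).
        self.session = tf.Session(graph=tf.Graph())
        with self.session.graph.as_default():
            tf.saved_model.loader.load(
                self.session,
                [tf.saved_model.tag_constants.SERVING],
                export_dir)
        # Tensor names are hypothetical; they depend on how the model
        # was exported.
        self.input_tensor = self.session.graph.get_tensor_by_name(
            "user_features:0")
        self.output_tensor = self.session.graph.get_tensor_by_name(
            "scores:0")

    def predict(self, X, feature_names=None):
        # X: numpy array of feature rows; returns one row of scores
        # per input row.
        return self.session.run(
            self.output_tensor,
            feed_dict={self.input_tensor: np.asarray(X)})

Is that roughly the right mental model, or does seldon-server expect something else from a recommendation microservice?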

I know my question may seem very general, but I would appreciate any answers.

Thank you in advance.  