Using Seldon-server with a custom tensorflow based model for recommendations
tyro...@gmail.com
Apr 4, 2018, 9:27:38 PM
to Seldon Users
Hello,
I just started looking at Seldon, and I have to say the whole platform looks very promising. The GitHub page mentions that the system can use TensorFlow models, but I'm not sure how. I understand this is done through the so-called microservices (?), but I'm not clear on how the training and serving phases of the models would work. Is it possible, for example, to use a second cluster, separate from the one hosting Seldon, to do the training? Say, would I be able to use Google Cloud ML Engine to run the training and serving? The TensorFlow model I'm thinking of building is similar to the one described in this video.
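For context, here is a rough sketch of the kind of prediction microservice I imagine Seldon would call into. The class name and the predict(X, feature_names) signature are my assumptions based on the microservice docs, and the numpy stand-in marks where the trained TensorFlow model (exported, say, from a training job on a separate cluster) would actually do inference:

```python
import numpy as np

class RecommenderMicroservice:
    """Hypothetical Seldon-style prediction microservice (a sketch, not
    the actual Seldon API).

    A real implementation would load a trained TensorFlow model, e.g. a
    SavedModel exported by a training job running on Cloud ML Engine,
    instead of the random weights used here.
    """

    def __init__(self, n_features=4, n_items=3, seed=0):
        # Stand-in for loading exported model weights from storage.
        rng = np.random.default_rng(seed)
        self.weights = rng.normal(size=(n_features, n_items))

    def predict(self, X, feature_names=None):
        # Stand-in for TensorFlow inference: score every item for each
        # incoming user feature vector.
        X = np.asarray(X, dtype=float)
        return X @ self.weights

model = RecommenderMicroservice()
scores = model.predict([[1.0, 0.0, 2.0, 1.0]])
print(scores.shape)  # one row of item scores per input row
```

The appeal of this split, as I understand it, is that training could happen anywhere (a second cluster, a managed service) as long as the serving container exposes a predict interface Seldon can route requests to.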
I know my question may seem very general, but I would appreciate any answers.