Hi,
I met Matei and Mani at Spark Summit. We are currently evaluating MLflow on Kubernetes and have some questions about it.
Scenario: We have an MLflow tracking server deployed in Kubernetes, along with a Jupyter notebook from which we run MLflow training.
However, if we don't provide S3 credentials in the Jupyter notebook container, it raises an error. Are S3 credentials required in both the Jupyter notebook container and the MLflow tracking server container?
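For reference, this is roughly what we currently have to set inside the notebook container before artifact logging succeeds (the key values below are placeholders, not our real credentials):

```python
import os

# Standard boto3/AWS environment variables; without these set in the
# notebook container, the client-side artifact upload fails for us.
os.environ["AWS_ACCESS_KEY_ID"] = "ACCESS-KEY-PLACEHOLDER"
os.environ["AWS_SECRET_ACCESS_KEY"] = "SECRET-KEY-PLACEHOLDER"
```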
Apache Spark allows us to specify a non-AWS S3 endpoint, so we can use S3-compatible systems like Ceph to store our models. Will this work with MLflow as well?
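Concretely, the Spark-side configuration I mean is the Hadoop s3a connector's endpoint override; a sketch of what we use today (the Ceph RADOS Gateway host and keys are placeholders):

```python
# Hadoop s3a settings that point Spark at a non-AWS, S3-compatible store.
# These would normally be passed via SparkSession.builder.config(...) or
# spark-defaults.conf; shown as a plain dict here for illustration.
s3a_conf = {
    "spark.hadoop.fs.s3a.endpoint": "http://ceph-rgw.example.svc:8080",
    "spark.hadoop.fs.s3a.access.key": "ACCESS-KEY-PLACEHOLDER",
    "spark.hadoop.fs.s3a.secret.key": "SECRET-KEY-PLACEHOLDER",
    # Ceph RGW typically needs path-style access rather than virtual-host style.
    "spark.hadoop.fs.s3a.path.style.access": "true",
}
```

We are hoping MLflow's artifact store can be pointed at an endpoint in a similar way.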
Thanks,
Zak Hassan
Engineer - Artificial Intelligence - Center of Excellence, CTO Office
http://radanalytics.io/ - Machine Learning on OpenShift