Hello José!
I had a quick question regarding the project. I had mistakenly assumed that the TensorFlow C++ core, which contains all the functionality, was publicly exposed through the C API I mentioned earlier. However, this is not the case: the C API is still under development and currently supports only inference. This means that while it lets us run predictions using models previously trained in Python, we would not be able to train models from scratch. Moreover, the official TensorFlow bindings released by Google for other languages, such as Go, also support inference only. Google states in its documentation that models should be trained in Python and can then be executed in Go apps (source). The C++ core itself is not publicly exposed and is compiled with Bazel every time it runs, so writing bindings directly against it is not possible.
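To make the inference-only workflow concrete, here is a minimal sketch of what loading a Python-trained SavedModel through the C API could look like. This is only illustrative: it assumes libtensorflow is installed and linked, and "model_dir" is a hypothetical path to a model exported from Python with the standard "serve" tag.

```c
#include <stdio.h>
#include <tensorflow/c/c_api.h>

int main(void) {
    /* Confirm the C library is linked and report its version. */
    printf("TensorFlow C library version: %s\n", TF_Version());

    TF_Status* status = TF_NewStatus();
    TF_Graph* graph = TF_NewGraph();
    TF_SessionOptions* opts = TF_NewSessionOptions();

    /* "serve" is the conventional tag on SavedModels exported for inference;
       "model_dir" is a hypothetical path to a model trained in Python. */
    const char* tags[] = {"serve"};
    TF_Session* session = TF_LoadSessionFromSavedModel(
        opts, NULL, "model_dir", tags, 1, graph, NULL, status);

    if (TF_GetCode(status) != TF_OK) {
        fprintf(stderr, "Could not load model: %s\n", TF_Message(status));
    } else {
        /* From here, TF_SessionRun would feed input tensors and fetch
           predictions; training ops are simply not exposed at this layer. */
        TF_CloseSession(session, status);
        TF_DeleteSession(session, status);
    }

    TF_DeleteSessionOptions(opts);
    TF_DeleteGraph(graph);
    TF_DeleteStatus(status);
    return 0;
}
```

An Elixir binding would wrap calls like these (likely via NIFs), which is why inference works today while training does not.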
In my opinion, enabling inference by writing bindings for the C API would still be a challenging project. Also, Google will keep adding code to the C API, eventually enabling training support as well, so once the Elixir project exists, that functionality could be added to it in time.
I wanted to know your thoughts and opinions on this. Would this be an acceptable GSoC project? If not, I could look at other frameworks, but TensorFlow has the fastest-growing API support and would be the best choice for the long term. Also, this might be a bit of a generalization, but for Elixir most ML requirements would be inference, likely serving predictions on the web. I suspect this is the main reason Google did not include training support for Go.
Sorry for the long post!
Thank you,
Anshuman.