Hi,
I started looking into how to build custom ops. For that, I am looking into the word2vec example
here -
which uses the "Skipgram" custom op. I see that the registration of the op is done here
-
and the corresponding kernel implementation here -
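For context, here is a rough sketch of what those two pieces look like. It is heavily simplified: the output/attr names below are made up for illustration, and the real Skipgram op in word2vec_ops.cc / word2vec_kernels.cc has several more outputs and attrs.

#include "tensorflow/core/framework/op.h"
#include "tensorflow/core/framework/op_kernel.h"

using namespace tensorflow;

// Op registration: declares the op's interface (name, outputs, attrs)
// to TensorFlow's global op registry. (Simplified signature; the real
// Skipgram op has more outputs and attrs.)
REGISTER_OP("Skipgram")
    .Output("example_output: string")
    .Attr("filename: string");

// Kernel implementation: the code that actually runs when the op executes.
class SkipgramOp : public OpKernel {
 public:
  explicit SkipgramOp(OpKernelConstruction* ctx) : OpKernel(ctx) {}
  void Compute(OpKernelContext* ctx) override {
    // ... produce the output tensors ...
  }
};

// Kernel registration: ties the implementation to the op name for CPU.
REGISTER_KERNEL_BUILDER(Name("Skipgram").Device(DEVICE_CPU), SkipgramOp);

Both REGISTER_OP and REGISTER_KERNEL_BUILDER are macros from the tensorflow/core/framework headers.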
I am trying to understand how this C++ code gets linked with Python to make this work. For that, I did the following -
1) Built the individual targets required by the py_binary (word2vec):
bazel build -c opt tensorflow/models/embedding:word2vec_ops
bazel build -c opt tensorflow/models/embedding:word2vec_kernels
bazel build -c opt tensorflow/models/embedding:gen_word2vec
The first two of those resulted in shared object files (.lo) being generated in bazel-bin, and the last one resulted in the gen_word2vec
Python file in bazel-genfiles.
2) Built the py_binary word2vec target:
bazel build -c opt tensorflow/models/embedding:word2vec
As expected, this results in a .runfiles folder with the symbolic-link structure required to run the stub Python file word2vec.
3) Ran word2vec. It runs perfectly. Moreover, I am able to run word2vec (in .runfiles) even without installing TensorFlow, either from the binary or from source.
Now, I can understand how it was able to import all the Python code needed to run word2vec, given the symbolic-link structure maintained in the .runfiles folder.
However, I still don't see where and how the op registration and kernel code got linked into this Python binary. That is, how did this custom op get registered in TensorFlow in the first place? I am sure some of you have a clear and detailed understanding of the process happening behind the scenes here. I would very much appreciate it if you could explain that process in detail.
P.S. - I have already read the tutorial on building a custom op.
Thanks.