I agree that a good objective is to stay as close as meaningful to NumPy, which has already thought through and solved lots of these problems.
The use of * for elementwise multiplication was unusual for me at first as well, but one gets used to it. I would consider elementwise products and matmul equally legitimate. The only alternative would be to introduce new operators, such as MATLAB's ".*".
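For illustration, a minimal sketch of how the two products are spelled in xtensor today (the matrix product assumes the xtensor-blas adapter is available; header paths may differ across versions):

    #include <xtensor/xarray.hpp>
    #include <xtensor-blas/xlinalg.hpp>

    int main()
    {
        xt::xarray<double> a = {{1., 2.}, {3., 4.}};
        xt::xarray<double> b = {{5., 6.}, {7., 8.}};

        auto hadamard = a * b;                  // elementwise, like NumPy's a * b
        auto product  = xt::linalg::dot(a, b);  // matrix product, like NumPy's a @ b
        return 0;
    }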
I like that xtensor does not come with its own BLAS implementation, but provides an adapter. A C++ standard implementation should probably come with some default implementation, but it might be good to write the standard in a way that lets users plug in a faster (possibly commercial) implementation.
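As a purely hypothetical sketch of what such a seam could look like (none of these names exist in xtensor or in any proposal): the backend is a template parameter with a portable reference implementation, and an optimized BLAS can be swapped in by supplying another type with the same static interface.

    #include <cstddef>

    struct reference_backend {
        // Naive reference GEMM: C (m x n) = A (m x k) * B (k x n), row-major.
        static void gemm(std::size_t m, std::size_t n, std::size_t k,
                         const double* a, const double* b, double* c)
        {
            for (std::size_t i = 0; i < m; ++i)
                for (std::size_t j = 0; j < n; ++j) {
                    double acc = 0.0;
                    for (std::size_t l = 0; l < k; ++l)
                        acc += a[i * k + l] * b[l * n + j];
                    c[i * n + j] = acc;
                }
        }
    };

    // A vendor or user plugs in a faster BLAS by providing another backend.
    template <class Backend = reference_backend>
    void matmul(std::size_t m, std::size_t n, std::size_t k,
                const double* a, const double* b, double* c)
    {
        Backend::gemm(m, n, k, a, b, c);
    }

    int main()
    {
        const double a[4] = {1., 2., 3., 4.};
        const double b[4] = {5., 6., 7., 8.};
        double c[4] = {};
        matmul(2, 2, 2, a, b, c);   // uses the reference backend by default
        return 0;
    }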
The lazy evaluation (xexpression) might be a basis for auto-diff, as well as for trace-based exporting of ML models as static graphs (e.g. to ONNX).
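As a toy illustration of why laziness helps here (a bare-bones expression template, not xtensor's actual xexpression hierarchy): the expression's type encodes the whole computation, so a library can walk it to evaluate, differentiate, or export a graph.

    #include <cstddef>
    #include <vector>

    struct tensor {
        std::vector<double> data;
        double operator[](std::size_t i) const { return data[i]; }
    };

    // A node in the expression tree; nothing is computed until indexed.
    template <class L, class R>
    struct add_expr {
        L lhs;   // subexpressions stored by value for simplicity; a real
        R rhs;   // library would avoid copying the underlying containers
        double operator[](std::size_t i) const { return lhs[i] + rhs[i]; }
    };

    // Left unconstrained for brevity; a real library would restrict L and R
    // to tensor expression types.
    template <class L, class R>
    add_expr<L, R> operator+(const L& lhs, const R& rhs) { return {lhs, rhs}; }

    int main()
    {
        tensor a{{1., 2.}}, b{{3., 4.}}, c{{5., 6.}};
        // The type of a + b + c is add_expr<add_expr<tensor, tensor>, tensor>:
        // a static graph that an autodiff engine or an ONNX exporter could walk.
        auto e = a + b + c;
        double first = e[0];   // evaluation happens lazily, element by element
        (void)first;
        return 0;
    }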
One question is how xtensor integrates with GPUs, which is the single most critical feature for machine learning. For example, GPU support may severely limit the use of lambdas.
What are the possibilities for extending the language syntax? For example (see the sketch after this list):
· a multi-dimensional [] operator (currently one would probably emulate this with operator())
· syntactic sugar for Python-like index slices, e.g. [2:5] (currently one can use a special data structure, such as [Slice(2, 5)])
· a dot operator for matrix multiplication (Python 3.5 introduced @ for this)
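For reference, a sketch of how these are typically emulated in xtensor today (xt::view, xt::range and xt::zeros are actual xtensor names; builder overloads and header layout may vary across versions):

    #include <xtensor/xarray.hpp>
    #include <xtensor/xbuilder.hpp>
    #include <xtensor/xview.hpp>

    int main()
    {
        xt::xarray<double> t = xt::zeros<double>({6, 5, 4});

        double x = t(1, 2, 3);                    // instead of a hypothetical t[1, 2, 3]
        auto s   = xt::view(t, xt::range(2, 5));  // instead of Python's t[2:5]
        (void)x;
        (void)s;
        return 0;
    }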
> One question is how xtensor integrates with GPUs, which is the single most critical feature for machine learning. For example, GPU support may severely limit the use of lambdas.

xtensor does not have GPU support yet; we will add it this year.

> How is the return type inferred if you have 3 different types? I would assume that you would return the most static possible one to get good performance?

Indeed, when different types are involved in an expression, the return type is the most static one possible.

> Also, are 1D and 2D arrays optimized to remove stride computation?

No, but benchmarks haven't shown significant differences from traditional implementations of 1D and 2D arrays (like Eigen, for instance), so it seems that compilers are smart enough to optimize away the stride computation.
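To make the promotion concrete, a small sketch (assuming, as holds for the containers, that the lazy expression type exposes a value_type):

    #include <xtensor/xarray.hpp>
    #include <type_traits>

    int main()
    {
        xt::xarray<int>    a = {1, 2, 3};
        xt::xarray<float>  b = {1.f, 2.f, 3.f};
        xt::xarray<double> c = {1., 2., 3.};

        auto e = a + b + c;  // lazy expression mixing three value types
        // The expression's value_type is the promoted common type: double.
        static_assert(std::is_same<decltype(e)::value_type, double>::value,
                      "mixed expressions promote to the most general type");
        return 0;
    }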
Cheers,
Johan