Running TensorFlow (XLA) on a custom LLVM build

Annanay Agarwal

Mar 17, 2017, 10:53:33 AM
to XLA development
Hi all,

I am a compiler-optimizations student working on a polyhedral-optimization-enabled compiler for TensorFlow; please see this issue for more details.

I wanted to know whether there is a way to run TensorFlow (XLA) on a custom build of LLVM, which would allow us to modify code on the LLVM side as well to enable further optimizations. Right now, LLVM's pass manager is hardly exploited: very few passes are added, and tools like Polly are not part of the LLVM distribution that TensorFlow uses.

Any kind of help / feedback would be great!

Thanks,
Annanay

Peter Hawkins

Mar 17, 2017, 11:10:51 AM
to Annanay Agarwal, XLA development
Hi...

It should be easy enough. The version of LLVM used by the XLA Bazel build is controlled by workspace.bzl here:

You can point it at a different repository instead. Currently it uses a snapshot of LLVM taken from an LLVM GitHub mirror, but it could just as easily point at your own repository. Bazel supports a number of other kinds of external repository; the "git_repository" rule is one way you might do this.

Peter

Annanay Agarwal

Mar 17, 2017, 2:55:26 PM
to XLA development, cs14bte...@iith.ac.in
Hi sir,

Polly requires GMP (the GNU Multiple Precision library), isl (the integer set library), and a number of other linear-programming tools. This may be a silly question, but does Bazel support compiling these libraries?

Thanks,
Annanay

Brian Retford

Mar 20, 2017, 2:20:30 PM
to XLA development, cs14bte...@iith.ac.in
Annanay,

Bazel can technically compile just about anything, but I haven't seen Bazel build files for GMP, isl, or any of the other libraries Polly uses. TensorFlow has done a ton of work to port other libraries into Bazel (see https://github.com/tensorflow/tensorflow/tree/05d7f793ec5f04cd6b362abfef620a78fefdb35f/third_party); the llvm.BUILD file in particular is an example of a very involved port. We use Bazel extensively and have ported LLVM, CLP, and many other libraries ourselves. I saw the post on the GitHub thread about looking for a co-mentor; I can't take on all of that, but I'm happy to help get Polly's dependencies ported into Bazel, if you'd like.
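Just to give a sense of the shape of such a port, a third-party BUILD file would look roughly like this (a bare-bones, untested sketch with made-up file names; a real isl port also needs its generated config headers, which this omits):

# third_party/isl.BUILD -- hypothetical sketch, not a working port.
# Real isl needs generated headers (isl_config.h, etc.) that this skips.
cc_library(
    name = "isl",
    srcs = glob(["isl_*.c"]),
    hdrs = glob(["include/isl/*.h"]),
    includes = ["include"],
    visibility = ["//visibility:public"],
)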

A better short-term option might be to pre-compile all the Polly libraries and then add the static archives as sources in a cc_library; that should get you to the part you care about faster. Feel free to email me if you'd like more Bazel help.
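For example, something like this might do it (an untested sketch; the archive paths and target name are placeholders for wherever your prebuilt Polly ends up):

# Hypothetical wrapper around prebuilt Polly static archives.
cc_library(
    name = "polly_prebuilt",
    srcs = [
        "lib/libPolly.a",      # placeholder path
        "lib/libPollyISL.a",   # placeholder path
    ],
    hdrs = glob(["include/polly/**/*.h"]),
    includes = ["include"],
    visibility = ["//visibility:public"],
)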

Annanay Agarwal

Mar 21, 2017, 12:18:30 PM
to Brian Retford, XLA development
Hi sir,

Thanks for the great post! And thank you so much for agreeing to be a co-mentor; I hope to pull off a great GSoC.

The temporary fix with cc_library got me excited, but my inexperience with Bazel has me running into a lot of issues linking Polly. I will write to you again soon.

Thanks again,



--
Annanay Agarwal
Department of CSE
IIT Hyderabad

Michael Kruse

Apr 12, 2017, 7:38:32 PM
to XLA development, cs14bte...@iith.ac.in, Tobias Grosser
Dear xla-dev mailing list and Brian Retford,

I am one of the GSoC administrators for Polly Labs. Annanay submitted a promising proposal for a project, and I myself might become one of the mentors. Unfortunately, I don't have any knowledge of XLA or Bazel, so it would be nice to have someone from the XLA/TensorFlow side on board.

We met David Majnemer at EuroLLVM (he gave a presentation about XLA) and asked him about mentorship. He said he was too busy himself, but that he knows somebody from his lab who might do it; we should write him an email and he would give us that contact. We have mailed him three times without a response, so I am trying again on this mailing list; maybe that person is subscribed.

Is anybody available to be a co-mentor for Annanay's GSoC project? We don't need full involvement, but a contact person who knows XLA's style and future direction, and who can approve patches that might be required, would be very helpful.

@Brian: I appreciate your offer, which means we have someone for the Bazel part. Would you like to be added as a mentor? We are still missing someone for the XLA part.

With kind regards,
Michael Kruse

Brian Retford

Apr 12, 2017, 8:11:55 PM
to Michael Kruse, XLA development, cs14bte...@iith.ac.in, Tobias Grosser

Sure, I'm happy to cover Bazel-related things.


Ye Henry Tian

Sep 20, 2017, 2:25:19 AM
to XLA development
You can change the workspace.bzl file as Peter suggested. For example, I used a newer stable version of LLVM by editing workspace.bzl like this:

temp_workaround_http_archive(
    name = "llvm",
    urls = [
        "https://github.com/llvm-mirror/llvm/archive/stable.tar.gz",
    ],
    sha256 = "5491bd0608ca59b94d5fe0a9892494fac263c9e1bd6ab2d4f4dc0e495e29670e",
    strip_prefix = "llvm-stable",
    build_file = str(Label("//third_party/llvm:llvm.BUILD")),
    repository = tf_repo_name,
)

You can also find the LLVM source files under ~/.cache/bazel/username/..., which you can modify (for example by adding printf calls), and you can even use GDB to debug those LLVM .cpp files while running TensorFlow with the XLA JIT enabled.
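If you'd rather iterate on a local LLVM checkout than edit files in the Bazel cache, something like this in workspace.bzl might also work (an untested sketch; the path is a placeholder for your own checkout):

# Hypothetical alternative: point the "llvm" repository at a local checkout
# instead of downloading an archive. The path is a placeholder.
native.new_local_repository(
    name = "llvm",
    path = "/path/to/your/llvm",
    build_file = str(Label("//third_party/llvm:llvm.BUILD")),
)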