TensorFlow Newsletter | July, 2017


Alina Shinkarsky

Jul 27, 2017, 5:31:30 PM
to Dis...@tensorflow.org



TL;DR:

Welcome to your TensorFlow update! We'll be sending periodic newsletters to keep you apprised of what’s coming down the pipe.


Sundar announced Cloud TPUs, TPU pods, and the TensorFlow Research Cloud at Google I/O, and Fei-Fei encouraged the world to sign up to learn more via g.co/tpusignup.


TensorFlow 1.3.0 has been released!

 

High-Level APIs

The first round of canned estimators is moving into core as of TensorFlow 1.3.x; it includes DNNClassifier, DNNRegressor, DNNLinearCombinedClassifier, and DNNLinearCombinedRegressor. The new canned estimators are backward compatible and can be adopted by switching from tf.contrib.learn.DNNRegressor to tf.estimator.DNNRegressor (and likewise for the other classes).
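
As a quick illustration, here is a minimal sketch of using the core canned estimator (assuming TensorFlow 1.3+; the feature name "x" and the random training data are purely illustrative):

    import numpy as np
    import tensorflow as tf

    # Feature columns describe the model's inputs; "x" is a hypothetical numeric feature.
    feature_columns = [tf.feature_column.numeric_column("x")]

    # The canned estimator now lives in core under tf.estimator.
    estimator = tf.estimator.DNNRegressor(
        feature_columns=feature_columns,
        hidden_units=[32, 16])

    # A simple numpy-based input function with toy data.
    train_input_fn = tf.estimator.inputs.numpy_input_fn(
        x={"x": np.random.rand(100, 1).astype(np.float32)},
        y=np.random.rand(100).astype(np.float32),
        batch_size=16,
        num_epochs=None,
        shuffle=True)

    estimator.train(input_fn=train_input_fn, steps=200)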

 

News from Performance

In case you missed it, in May we rolled out a new set of benchmarks, along with a Performance Guide - check out the blog post.


In addition, the following performance improvements were added in the past month:

  • Fused batch norm was added to tf.layers.batch_normalization via the fused argument (default: False); it gives a 20-30% speedup over the non-fused path (see the sketch after this list).

  • Intel MKL was added as a compile option for the open-source build and may provide significant speedups in some cases.
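
Below is a minimal sketch of enabling the fused batch-norm path (TensorFlow 1.x graph-mode API; the input shape is illustrative):

    import tensorflow as tf

    # NHWC activations from a hypothetical conv layer.
    inputs = tf.placeholder(tf.float32, shape=[None, 28, 28, 64])

    # fused=True selects the fused kernel; the default in 1.3 is False.
    normalized = tf.layers.batch_normalization(inputs, fused=True, training=True)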


 

Recent contributions
  • Yahoo contributed IBverbs-based RDMA support for distributed TensorFlow.

  • Minds.ai implemented an MPI-based communication path which complements Yahoo’s work in this space.

  • Our collaboration with Intel produced a PR focused primarily on performance improvements for their high-end CPU families (Knights Landing and Broadwell).

  • Intel also published a blog post about these efforts.

 

Usability

  • Datasets API for input pipelines: more convenient than, and typically faster than (and getting faster still), queue-based input pipelines. Please consider using it going forward; see the sketch below.
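
A minimal sketch of a Datasets-based pipeline (in the 1.3 timeframe the API lives under tf.contrib.data; the toy in-memory data is illustrative):

    import tensorflow as tf

    # Build a dataset from an in-memory range of integers, then shuffle, batch, and repeat.
    dataset = tf.contrib.data.Dataset.from_tensor_slices(tf.range(100))
    dataset = dataset.shuffle(buffer_size=100).batch(16).repeat()

    # One-shot iterators need no explicit initialization.
    iterator = dataset.make_one_shot_iterator()
    next_batch = iterator.get_next()

    with tf.Session() as sess:
        print(sess.run(next_batch))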


 

Events

In addition to those, we've talked to a number of companies and universities!

 


This newsletter was curated by

The TensorFlow Team
