TensorFlow 2.0 is coming


'Martin Wicke' via TensorFlow Announcements

Aug 13, 2018, 12:49:44 PM
to anno...@tensorflow.org

Since the open-source release in 2015, TensorFlow has become the world’s most widely adopted machine learning framework, catering to a broad spectrum of users and use-cases. In this time, TensorFlow has evolved along with rapid developments in computing hardware, machine learning research, and commercial deployment.


Reflecting these rapid changes, we have started work on the next major version of TensorFlow. TensorFlow 2.0 will be a major milestone, with a focus on ease of use. Here are some highlights of what users can expect with TensorFlow 2.0:

  • Eager execution will be a central feature of 2.0. It aligns users’ expectations about the programming model better with TensorFlow practice and should make TensorFlow easier to learn and apply.

  • Support for more platforms and languages, and improved compatibility and parity between these components via standardization on exchange formats and alignment of APIs.

  • We will remove deprecated APIs and reduce the amount of duplication, which has caused confusion for users.
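The eager-execution highlight above is the biggest conceptual shift. As a rough illustration (a plain-Python toy, no TensorFlow required, all names hypothetical), the difference between graph-style deferred execution and eager execution looks like this:

```python
# Toy sketch: deferred ("graph-style") vs. eager execution, in plain Python.

def deferred_add(a, b):
    # Graph mode builds a computation now and runs it later.
    return lambda: a + b

node = deferred_add(2, 3)   # nothing is computed yet
result_graph = node()       # computation happens only at the explicit "run" step

result_eager = 2 + 3        # eager mode: computed immediately, like ordinary Python
print(result_graph, result_eager)
```

Both produce the same value; the difference is *when* the work happens, which is why eager mode is easier to debug step by step.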


We are planning to release a preview version of TensorFlow 2.0 later this year.


Public 2.0 design process

Shortly, we will hold a series of public design reviews covering the planned changes. This process will clarify the features that will be part of TensorFlow 2.0, and allow the community to propose changes and voice concerns. Please join devel...@tensorflow.org if you would like to see announcements of reviews and updates on process. We hope to gather user feedback on the planned changes once we release a preview version later this year.


Compatibility and continuity

TensorFlow 2.0 is an opportunity to correct mistakes and to make improvements which are otherwise forbidden under semantic versioning.


To ease the transition, we will create a conversion tool which updates Python code to use TensorFlow 2.0 compatible APIs, or warns in cases where such a conversion is not possible automatically. A similar tool has helped tremendously in the transition to 1.0.
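The kind of mechanical rewriting such a conversion tool performs can be sketched in a few lines of plain Python. This is only an illustration: the `RENAMES` mapping and naive text replacement below are hypothetical stand-ins for the real tool's AST-based analysis.

```python
# Toy sketch: mechanically mapping old API names to new ones in source text.
RENAMES = {  # illustrative old -> new symbol mapping
    "tf.random_normal": "tf.random.normal",
    "tf.log": "tf.math.log",
}

def upgrade(source):
    # Naive substring replacement -- a real tool would parse the code instead,
    # to avoid false matches (e.g. "tf.log" inside "tf.logging").
    for old, new in RENAMES.items():
        source = source.replace(old, new)
    return source

print(upgrade("x = tf.log(tf.random_normal([2]))"))
```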


Not all changes can be made fully automatically. For example, we will be deprecating APIs, some of which do not have a direct equivalent. For such cases, we will offer a compatibility module (tensorflow.compat.v1) which contains the full TensorFlow 1.x API, and which will be maintained through the lifetime of TensorFlow 2.x.
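The mechanism behind a compatibility module like tensorflow.compat.v1 can be sketched generically: a frozen namespace that keeps the old API surface alive while the main package moves on. The sketch below is plain Python with hypothetical stand-in functions, not the actual TensorFlow implementation.

```python
# Toy sketch: a "compat" namespace freezing a legacy API surface.
import types

def make_compat_module(legacy_api):
    """Build a module-like object exposing a frozen legacy API."""
    return types.SimpleNamespace(**legacy_api)

# Hypothetical v1-style entry points kept alive under compat.v1:
legacy = {
    "placeholder": lambda name: f"<placeholder {name}>",
    "Session": lambda: "<session>",
}
compat_v1 = make_compat_module(legacy)

# Old code keeps working unchanged against the frozen namespace:
print(compat_v1.placeholder("x"))
```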


We do not anticipate any further feature development on TensorFlow 1.x once a final version of TensorFlow 2.0 is released. We will continue to issue security patches for the last TensorFlow 1.x release for one year after TensorFlow 2.0’s release date.


On-disk compatibility

We do not intend to make breaking changes to SavedModels or stored GraphDefs (i.e., we plan to include all current kernels in 2.0). However, the changes in 2.0 will mean that variable names in raw checkpoints might have to be converted before being compatible with new models.
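The checkpoint variable-name conversion mentioned above amounts to applying a rename mapping to the stored variables. A minimal sketch (plain Python; the names and mapping are hypothetical):

```python
# Toy sketch: converting variable names in a checkpoint-like mapping.
old_checkpoint = {"dense/kernel": [1.0, 2.0], "dense/bias": [0.1]}
rename = {"dense/kernel": "layer1/kernel", "dense/bias": "layer1/bias"}

# Variables not listed in the mapping keep their original names.
new_checkpoint = {rename.get(name, name): value
                  for name, value in old_checkpoint.items()}
print(sorted(new_checkpoint))
```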


tf.contrib

TensorFlow’s contrib module has grown beyond what can be maintained and supported in a single repository. Larger projects are better maintained separately, while we will incubate smaller extensions along with the main TensorFlow code. Consequently, as part of releasing TensorFlow 2.0, we will stop distributing tf.contrib. We will work with the respective owners on detailed migration plans in the coming months, including how to publicise your TensorFlow extension in our community pages and documentation. For each of the contrib modules, we will either a) integrate the project into TensorFlow; b) move it to a separate repository; or c) remove it entirely. This does mean that all of tf.contrib will be deprecated, and we will stop adding new tf.contrib projects today. We are looking for owners/maintainers for a number of projects currently in tf.contrib; please contact us (reply to this email) if you are interested.


Next steps

For questions about development of or migration to TensorFlow 2.0, contact us at dis...@tensorflow.org. To stay up to date with the details of 2.0 development, please subscribe to devel...@tensorflow.org, and participate in related design reviews.


On behalf of the TensorFlow team,

Martin


--
You received this message because you are subscribed to the Google Groups "TensorFlow Announcements" group.
To unsubscribe from this group and stop receiving emails from it, send an email to announce+u...@tensorflow.org.
Visit this group at https://groups.google.com/a/tensorflow.org/group/announce/.
To view this discussion on the web visit https://groups.google.com/a/tensorflow.org/d/msgid/announce/CADtzJKOYMjX-GiF1tnmJO3ORBoE9KH%2BOwrgObz8UfzvLOec2XQ%40mail.gmail.com.
For more options, visit https://groups.google.com/a/tensorflow.org/d/optout.

Anthony Dmitriev

Aug 14, 2018, 4:43:15 AM
to Discuss, anno...@tensorflow.org
Hello Martin,

It's exciting to see such good improvements in the development process and in TensorFlow itself. This is a big deal, and thanks to the whole team for a great job.

Regarding the upcoming changes, I have a question. Our team is about to contribute a big module (TensorFlow on Apache Ignite, see this topic); one pull request is currently under review and two more are upcoming. We'd like to align the integration of these parts with the release of Apache Ignite (late September), because some parts of the Apache Ignite code depend on TensorFlow enhancements we are making. So, the question: does it make sense to continue with our current approach (contributing into tf.contrib), or do we need to change it right now?

Best regards,
Anton Dmitriev.


slings...@cavedu.com

Aug 14, 2018, 6:54:22 AM
to Discuss, anno...@tensorflow.org

Could we (CAVEDU education group in Taiwan) translate this e-mail into traditional Chinese and publish it on our tech blog?

Thank you very much

Best regards

Timothy

Lauren C.

Aug 14, 2018, 8:03:18 AM
to slings...@cavedu.com, Discuss, anno...@tensorflow.org
Also simplified Chinese, please.


Martin Wicke

Aug 14, 2018, 11:46:39 AM
to dmitrie...@gmail.com, Discuss
It's ok to push the PR into contrib since it's already in flight (I remember I added a note there as well). We will stop distributing contrib with 2.0, which is still a while away.

However, for the future, you would ideally maintain a separate repo and pip package. That means that users would write

import tensorflow
import tf_ignite_dataset  # or whatever your pip package is called

# use ignite dataset together with TensorFlow.

There will soon be an RFC about the fate of projects in contrib; I'll have more details on what alternatives exist then.


Marc Fawzi

Aug 14, 2018, 12:49:58 PM
to wi...@google.com, dmitrie...@gmail.com, Discuss
Will this get incorporated into core?


AFAIK, without it, some models will exhaust even the most generous memory budget. This seems like a design issue that should be solved in core, not contrib. The question applies in general to all contrib packages that address basic issues in TF.





Alexandre Passos

Aug 14, 2018, 12:52:18 PM
to marc....@gmail.com, Martin Wicke, dmitrie...@gmail.com, dis...@tensorflow.org
Yes, most things in contrib which are intended to be in core but haven't migrated to core yet will migrate to core. More details will be provided when we write the actual RFC for contrib deprecation.

(a lot of things in contrib are not really intended to be in core tf and will be moved to their own separate repos; the rest will end up in a contrib repo which is maintained on a best-effort basis only)



--
 - Alex

Anthony Dmitriev

Aug 15, 2018, 4:07:56 AM
to Discuss
Thank you for the answer, Martin; I think we are on the same page now.

One more question about the future changes in tf.contrib: if we extract this dataset into a separate project, who will be responsible for compilation, testing, and distribution for all platforms? It looks like a big overhead for every sub-project to have separate continuous integration infrastructure. Will it be possible to reuse the "main" TensorFlow CI infrastructure?

Best regards,
Anton Dmitriev


Martin Wicke

Aug 15, 2018, 11:22:35 AM
to Anthony Dmitriev, Discuss
We will help to set up testing infrastructure for projects maintained by an established SIG. We cannot take responsibility for keeping those tests green, writing them, or distribution (which critically depends on keeping the tests, or at least the build, green). 

I will have more details once I go through the list of projects and talk to owners to determine what should happen to them. I agree it probably makes little sense to have a separate repo for each project. Ideally, a SIG-data-input or SIG-contrib-ops would emerge which maintains a repo with several thematically aligned projects inside of it.

Martin

Armando Fandango

Aug 15, 2018, 1:33:59 PM
to Discuss, anno...@tensorflow.org
> We are looking for owners/maintainers for a number of projects currently in tf.contrib, please contact us (reply to this email) if you are interested.
Hi Martin, I am interested. Which projects specifically need an owner?

Sincerely

Armando

Martin Wicke

Aug 15, 2018, 1:57:25 PM
to arm...@neurasights.com, Discuss, anno...@tensorflow.org
I do not have a list (yet), but I am making one. I will remember you and let you know when I have something more concrete.


Armando Fandango

Aug 15, 2018, 4:59:17 PM
to Discuss
Thanks, will be on the lookout for your email.

Stephen Oman

Aug 16, 2018, 4:42:14 PM
to Discuss, anno...@tensorflow.org
Hi Martin,

Do you have a proposed timeline leading up to the release of 2.0? I'm not looking to put you and the team on the hook for an actual delivery date, but knowing roughly when it might happen would be useful for medium term planning.

Thanks,
Stephen.

Martin Wicke

Aug 17, 2018, 7:48:38 PM
to stephe...@yahoo.co.uk, Discuss
We hope to release a preview late this year, and an actual 2.0 in Spring. 

Realistically, I cannot get more specific than that; I believe in releasing software when it's ready, not tying it to an artificial date.


slings...@cavedu.com

Aug 21, 2018, 10:11:54 PM
to Discuss
Hello Martin,

We just translated the announcement about TensorFlow 2.0 into traditional Chinese and simplified Chinese.

Please see the translation:


Thank you and best regards

CAVEDU
Timothy



Martin Wicke

Aug 21, 2018, 11:32:06 PM
to slings...@cavedu.com, Mike Liang, Discuss

Mike Liang (梁信屏)

Aug 23, 2018, 2:06:45 AM
to Martin Wicke, slings...@cavedu.com, dis...@tensorflow.org, Rui Li
+Rui Li FYI - TF China DevRel Ecosystem lead


Aurélien Géron

Aug 26, 2018, 10:40:22 AM
to Discuss, anno...@tensorflow.org
Hi,

What's the plan regarding eager vs. graph mode in TF 2.0? Will eager execution become the default? It is much more intuitive and easier to debug and profile, and graph mode only becomes useful when you want to deploy models and optimize performance. Moreover, we could live without `tf.enable_eager/graph_execution()`: we could rely strictly on functions like `tf.make_template()` or on autograph, and only when we need them. Having two different root modes is painful in many cases, such as when learning TensorFlow, writing tests, or in Jupyter notebooks that need both modes.

Thanks,
Aurélien Géron

Martin Wicke

Aug 27, 2018, 11:58:03 AM
to aurelie...@kiwisoft.io, Discuss, anno...@tensorflow.org
Yes, we are planning to make eager execution the default. RFCs to discuss the design and implementation details required to do so have been published (or will soon be). Unless those throw up some unexpected technical difficulties, I would expect eager execution to be default in 2.0.


Gabriel Perdue

Aug 27, 2018, 12:06:10 PM
to Martin Wicke, Discuss
Martin,

Would you point out where the RFCs are posted? A number of us at Fermilab would like to offer feedback, and we're organizing a coherent response.

Thanks!

pax
Gabe

Gabriel Perdue
Scientist

Scientific Computing Division
Fermi National Accelerator Laboratory
PO Box 500, MS 234, Batavia, IL 60510, USA
Office: 630-840-6499

Connect with us!


Martin Wicke

Aug 27, 2018, 12:11:43 PM
to gnpe...@gmail.com, Discuss
All RFCs are announced on devel...@tensorflow.org, which I'd encourage everyone interested in this to join. They then become PRs to tensorflow/community during the comment period, and are merged into that repo afterwards. The ones relevant for 2.0 will be tagged accordingly. One example is this.

Martin

江宗諭

Sep 19, 2018, 11:22:25 PM
to Discuss
Hello Martin,

How are you?

I would like to ask whether we could translate the article below into traditional Chinese and publish it on our blog.

Or whom should I ask for permission?


Thank you
Best Regards
Timothy Chiang

Stephen Smith

Oct 26, 2018, 1:22:03 PM
to Discuss, anno...@tensorflow.org
Sounds good. Regarding support for other languages, will you move much of the current Python code to C++ (or at least compile it into the main shared object/DLL)? Right now, when you use other languages like Julia and don't want to communicate with a Python server, many features are unavailable, such as eager execution.

Thanks, keep up the good work.

David Pascuzzi

Nov 2, 2018, 3:33:15 PM
to Discuss, anno...@tensorflow.org
Does "more platforms" mean more operating systems?

David-Olivier Pham

Nov 24, 2018, 7:36:32 PM
to Discuss
I read that tf.variable_scope will disappear in favour of tf.layers.

https://github.com/tensorflow/community/blob/master/rfcs/20180817-variables-20.md

May I ask whether tf.keras.models.Model will accept outputs that are not the result of chaining layers? Or will we be forced to use Lambda layers and to subclass the Model object, which is quite verbose and forces us to define all layers up front and then repeat them as self.my_dummy_layer_name_42?

I love the Keras functional API. I personally use tf.layers.[insert any tf.keras.layers name] in combination with tf.variable_scope(scope, reuse=tf.AUTO_REUSE), because it leads to compact definitions of reusable blocks of layers without the restrictions of keras.Model, and I hope some form of this compactness will survive in TF 2.0.

Eugene Brevdo

Nov 25, 2018, 4:10:44 PM
to David-Olivier Pham, Discuss
You can subclass the keras Network object, create or store any sublayers in __init__, and implement call().



David-Olivier Pham

Nov 25, 2018, 4:25:46 PM
to Discuss
Thanks a lot for your swift answer! 

However, from the _init_graph_network method I read that all inputs and outputs should come from a Keras layer. Moreover, I would rather avoid defining my layers in __init__ (self.dense_1 = layers.Dense(...)) and then accessing them again in the call method (self.dense_1(my_input)).

With a keras Model, you can define your layers, call them directly, and then compose them through a Model object, so that you reference each layer only once. (If you need to reuse one, you can assign the layer to a variable.) However, you must use Lambda layers for any operation that doesn't require weights.

Eugene Brevdo

Nov 25, 2018, 4:40:10 PM
to David-Olivier Pham, Discuss
You subclass Network so you control what _init_graph_network sees.

class MyNetwork(network.Network):
  def __init__(self, extra_layers, name=None):
    # Pass only the name up to the base class; the original
    # `__init__(self, name)` passed `self` twice.
    super(MyNetwork, self).__init__(name=name)
    my_layers = [ ... ]
    self._layers = my_layers + extra_layers

  def call(self, inputs):
    for layer in self._layers:
      inputs = layer(inputs)
    # do some non-layer tf math on inputs
    # maybe use some other layers on inputs
    return ...

Yes, you do have to preinstantiate your layers and then use them in call() -- that's the primary way going forward.  But it's not too painful to have multiple layers in a Network.  An alternative option is to use the keras Sequential layer to store a bunch of operations you're going to do in sequence, if you want to avoid storing a list of layers directly.
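The composition pattern that Sequential captures (apply a list of layer-like callables in order) can be sketched in plain Python. This toy class is an illustration of the idea, not the keras implementation:

```python
# Toy sketch: a minimal Sequential-style container for layer-like callables.
class ToySequential:
    def __init__(self, layers):
        self.layers = list(layers)

    def __call__(self, x):
        # Feed the output of each "layer" into the next, in order.
        for layer in self.layers:
            x = layer(x)
        return x

net = ToySequential([lambda x: x + 1, lambda x: x * 2])
print(net(3))  # → 8: (3 + 1) * 2
```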


David-Olivier Pham

Nov 25, 2018, 5:30:20 PM
to Discuss
Thanks a lot for your help. Just to be sure: must all layers that will be trained be contained in self._layers?

Eugene Brevdo

Nov 25, 2018, 5:38:17 PM
to David-Olivier Pham, Discuss
No, you can use any self property or properties. It does look like the symbol self._layers has some additional support code in the base Network class, but we've never needed or made use of that.

David-Olivier Pham

Nov 25, 2018, 5:48:47 PM
to Discuss
Thanks, I will do some experimentation and have fun :-)

Jeffrey Welch

Jan 5, 2019, 6:48:05 PM
to Discuss, anno...@tensorflow.org

Hi all, 

Good evening, 

May I ask whether the coming TensorFlow 2.0 will run on Python 3.7.x?

Why can the current TensorFlow 1.x only run on Python 3.5.x and not on Python 3.7.x?

Please reply, Thank you very much

Jeff



Martin Pecka

Jan 25, 2019, 10:04:44 AM
to Discuss, anno...@tensorflow.org
Will you finally provide a standard "SDK" for C++ builds? That is, so we are not forced to compile the whole of TF just to build a C++ library that uses it? And will you finally build the pip .so file with the C++11 ABI?

Pablo Velazco

Jan 25, 2019, 11:20:14 AM
to Discuss
On Friday, January 25, 2019 at 12:04:44 PM UTC-3, Martin Pecka wrote:
Will you finally provide a standard "SDK" for C++ builds? I.e. to not be forced to compile whole TF if I only want to build a C++ library using it? And will you finally build the pip .so file with C++11 ABI?

That's what I'm really looking forward to: a complete and usable C++ API. I hope in 2.0 we'll see some serious advances in this area.

Martin Wicke

Jan 25, 2019, 12:02:01 PM
to Pablo Velazco, Discuss
This is independent of 2.0 -- most of our work there is on Python (since that is what's covered by semver). We are working on changes to allow easier building of extensions.

By the way, you already don't have to build TensorFlow to use it, but given that this is C++, you have to use the same (or a compatible) compiler.

We will move to a newer ABI as we move to newer compilers, but when we do, that will make TensorFlow incompatible with older distros. That is undesirable, so don't expect dramatic changes there.

Martin


Martin Pecka

Jan 25, 2019, 12:09:21 PM
to dis...@tensorflow.org
Well, how can I link my C++ program against TF without building it? Do you already provide a C++ library? I know there's the _pywrap_tensorflow.so object installed by pip, but I consider linking against it only a hack (though I use it in my project: https://github.com/tradr-project/tensorflow_ros_cpp/blob/799c5f14da4ab07e4ff50ecf3048c7d0865bb107/cmake/detect_tf_pip.cmake#L116 ).

Also, do you know which distros still don't have the C++11 ABI? I know Ubuntu 14.04 doesn't, but that's EOL in 3 months... What other distros are holding back this progress?

--
Martin Pecka

Martin Wicke

Jan 25, 2019, 12:46:40 PM
to Martin Pecka, Discuss
I am sorry, you are right. You can avoid building it, but it involves hackery.

We are building on Ubuntu 14.04 at the moment, and as you correctly point out, it'll be EOL soon, so we will stop doing that and switch to Ubuntu 16.04. So I was a little too pessimistic. However, even Ubuntu 16.04's gcc isn't particularly new (and there are no ABI compatibility guarantees between versions in C++).


Martin Pecka

Jan 25, 2019, 12:54:31 PM
to Discuss
Hmm, a switch to Ubuntu 16.04 should help. The biggest problem I faced is `std::string` vs. `std::__cxx11::string`, which basically prevents compiling any code that uses C++ strings. I know there are no theoretical ABI compatibility guarantees, but in practice it is quite okay.

--
Martin Pecka