Announcing Theano 0.10.0beta1


Steven Bocco

Aug 9, 2017, 10:02:40 AM
to thean...@googlegroups.com, theano...@googlegroups.com

Announcing Theano 0.10.0beta1

This is a beta release for a major version, with lots of new features, bug fixes, and some interface changes (deprecated or potentially misleading features were removed).

The upgrade is recommended for developers who want to help test and report bugs, or want to use new features now.

For those using the bleeding edge version in the git repository, we encourage you to update to the 0.10.0beta1 tag.

What's New

Highlights:
  • Raised the minimum supported Python 3 version from 3.3 to 3.4
  • Replaced the deprecated nose-parameterized package with the up-to-date parameterized package in Theano's requirements
  • Theano now internally uses sha256 instead of md5, so it works on systems that forbid md5 for security reasons
  • Removed the old GPU backend theano.sandbox.cuda; the new backend theano.gpuarray is now the official GPU backend (see the sketch after this list)
  • Support more debuggers for PdbBreakpoint
  • Scan improvements
    • Sped up Theano scan compilation and gradient computation
    • Added a meaningful error message when inputs to scan are missing
  • Sped up the graph toposort algorithm
  • Faster C compilation through extensive use of a new interface for op params
  • Faster optimization step
  • Updated and more complete documentation
  • Many bug fixes, crash fixes and warning improvements
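
Below is a minimal sketch of switching to the new backend. theano.gpuarray is selected with device=cuda* (the old theano.sandbox.cuda backend used device=gpu*); the flag values are the documented ones, and the toy graph is only an illustration.

    # Select the new gpuarray backend before importing theano, e.g.:
    #   THEANO_FLAGS='device=cuda0,floatX=float32' python this_script.py
    import numpy as np
    import theano
    import theano.tensor as T

    x = T.matrix('x')
    f = theano.function([x], T.exp(x).sum())  # runs on the GPU when device=cuda*
    print(f(np.random.rand(3, 3).astype(theano.config.floatX)))
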
Interface changes:
  • Merged duplicated diagonal functions into two ops: ExtractDiag (extract a diagonal into a vector) and AllocDiag (write a vector onto the diagonal of an otherwise-zero array); see the sketch after this list
  • Renamed MultinomialWOReplacementFromUniform to ChoiceFromUniform
  • Removed or deprecated Theano flags:
    • cublas.lib
    • cuda.enabled
    • enable_initial_driver_test
    • gpuarray.sync
    • home
    • lib.cnmem
    • nvcc.* flags
    • pycuda.init
  • Changed the grad() method to L_op() in ops that need their outputs to compute the gradient
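
As a quick illustration of the merged diagonal ops, here is a sketch using the theano.tensor.diag entry point, which dispatches to AllocDiag for a vector input and to ExtractDiag for a matrix input:

    import numpy as np
    import theano
    import theano.tensor as T

    v = T.dvector('v')
    m = T.dmatrix('m')
    make_diag = theano.function([v], T.diag(v))  # AllocDiag: vector -> matrix
    take_diag = theano.function([m], T.diag(m))  # ExtractDiag: matrix -> vector

    print(make_diag(np.arange(3.0)))  # 3x3 matrix with 0, 1, 2 on the diagonal
    print(take_diag(np.eye(4)))       # [ 1.  1.  1.  1.]
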
Convolution updates:
  • Extended Theano flag dnn.enabled with new option no_check to help speed up cuDNN import
  • Implemented separable convolutions
  • Implemented grouped convolutions (see the sketch below)
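
A hedged sketch of a grouped convolution follows; the num_groups keyword is an assumption based on this release's conv2d interface, so check the convolution documentation for the exact name. With two groups, the 8 input channels are split into two independent sets of 4, each convolved with its own half of the filters:

    import theano.tensor as T
    from theano.tensor.nnet import conv2d

    x = T.tensor4('x')  # (batch, 8, rows, cols)
    w = T.tensor4('w')  # (num_filters, 8 // 2, kernel_rows, kernel_cols)
    # Assumed keyword: split the channels into 2 independent groups
    y = conv2d(x, w, num_groups=2)
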
GPU:
  • Prevent GPU initialization when not required
  • Added disk caching option for kernels
  • Added method my_theano_function.sync_shared() to help synchronize shared variables updated by GPU Theano functions (see the sketch after this list)
  • Added useful stats for GPU in profile mode
  • Added Cholesky op based on cusolver backend
  • Added GPU ops based on the magma library: SVD, matrix inverse, QR, Cholesky, and eigh
  • Added GpuCublasTriangularSolve
  • Added atomic addition and exchange for long long values in GpuAdvancedIncSubtensor1_dev20
  • Support log gamma function for all non-complex types
  • Support GPU SoftMax in both OpenCL and CUDA
  • Support offset parameter k for GpuEye
  • CrossentropyCategorical1Hot and its gradient are now lifted to GPU
  • Better cuDNN support
    • Official support for v5.* and v6.*
    • Better support and loading on Windows and Mac
    • Support cuDNN v6 dilated convolutions
    • Support cuDNN v6 reductions
    • Added new Theano flags cuda.include_path, dnn.base_path and dnn.bin_path to help configure Theano when CUDA and cuDNN cannot be found automatically.
  • Updated float16 support
    • Added documentation for GPU float16 ops
    • Support float16 for GpuGemmBatch
    • Started to use float32 precision for computations that don't support float16 on GPU
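
Here is a minimal sketch of the new sync_shared() helper. Because the gpuarray backend can execute asynchronously, reading a shared variable right after calling a function may observe a transfer that is still in flight; sync_shared() blocks until the function's shared-variable updates have completed:

    import numpy as np
    import theano

    counter = theano.shared(np.float32(0), name='counter')
    step = theano.function([], [], updates=[(counter, counter + 1)])

    step()
    step.sync_shared()          # wait for the GPU-side update to land
    print(counter.get_value())  # 1.0
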
New features:
  • Added a wrapper for Baidu's CTC cost and gradient functions
  • Added scalar and elemwise CPU ops for modified Bessel function of order 0 and 1 from scipy.special.
  • Added Scaled Exponential Linear Unit (SELU) activation
  • Added sigmoid_binary_crossentropy function (see the sketch after this list)
  • Added tri-gamma function
  • Added modes half and full for Images2Neibs ops
  • Implemented gradient for AbstractBatchNormTrainGrad
  • Implemented gradient for matrix pseudoinverse op
  • Added new prop replace for ChoiceFromUniform op
  • Added new prop on_error for CPU Cholesky op
  • Added new Theano flag deterministic to help control how Theano optimizes certain ops that have deterministic versions. Currently used for subtensor ops only.
  • Added new Theano flag cycle_detection to speed up the optimization step by reducing time spent in in-place optimizations
  • Added new Theano flag check_stack_trace to help check the stack trace during optimization process
  • Added new Theano flag cmodule.debug to allow a debug mode for Theano C code. Currently used for cuDNN convolutions only.
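
A minimal sketch of two of the new functions, assuming both live under theano.tensor.nnet as the names in these notes suggest:

    import numpy as np
    import theano
    import theano.tensor as T
    from theano.tensor import nnet

    x = T.matrix('x')
    h = nnet.selu(x)  # Scaled Exponential Linear Unit activation

    logits = T.matrix('logits')    # pre-sigmoid activations
    targets = T.matrix('targets')
    # numerically stable form of binary_crossentropy(sigmoid(logits), targets)
    loss = nnet.sigmoid_binary_crossentropy(logits, targets).mean()

    f = theano.function([logits, targets], loss)
    print(f(np.zeros((2, 3), dtype=theano.config.floatX),
            np.ones((2, 3), dtype=theano.config.floatX)))  # ~0.693 = log(2)
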
Others:
  • Added deprecation warning for the softmax and logsoftmax vector case
  • Added a warning announcing that a C++ compiler will become mandatory in the next Theano release (0.11)
Other more detailed changes:
  • Removed useless warning when profile is manually disabled
  • Added tests for abstract conv
  • Added options for disconnected_outputs to Rop
  • Removed theano/compat/six.py
  • Removed COp.get_op_params()
  • Op.c_support_code() now supports returning a list of strings, to help avoid duplicating support code
  • Macro names provided for array properties are now standardized in both CPU and GPU C codes
  • Started to move C code files into separate folder c_code in every Theano module
  • Many improvements for Travis CI tests (with better splitting for faster testing)
  • Many improvements for Jenkins CI tests: daily testing on Mac and Windows in addition to Linux

Download and Install

You can download Theano from http://pypi.python.org/pypi/Theano

Installation instructions are available at http://deeplearning.net/software/theano/install.html
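
Since this is a pre-release, a plain pip install will not pick it up by default; something like the following should work (the --pre flag lets pip consider pre-release versions):

    pip install --upgrade --pre Theano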

Description

Theano is a Python library that allows you to define, optimize, and efficiently evaluate mathematical expressions involving multi-dimensional arrays. It is built on top of NumPy. Theano features:

  • tight integration with NumPy: an interface similar to NumPy's, with numpy.ndarray used internally in Theano-compiled functions.
  • transparent use of a GPU: perform data-intensive computations much faster than on a CPU.
  • efficient symbolic differentiation: Theano can compute derivatives for functions of one or many inputs.
  • speed and stability optimizations: avoid nasty bugs when computing expressions such as log(1 + exp(x)) for large values of x (see the sketch after this list).
  • dynamic C code generation: evaluate expressions faster.
  • extensive unit-testing and self-verification: includes tools for detecting and diagnosing bugs and/or potential problems.
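
For example, the stability optimizations rewrite the naive expression below into the numerically stable softplus form, so large inputs do not overflow:

    import theano
    import theano.tensor as T

    x = T.dscalar('x')
    f = theano.function([x], T.log(1 + T.exp(x)))
    print(f(1000.0))  # ~1000.0 rather than inf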

Theano has been powering large-scale computationally intensive scientific research since 2007, but it is also approachable enough to be used in the classroom (IFT6266 at the University of Montreal).

Resources

About Theano:

http://deeplearning.net/software/theano/

Theano-related projects:

http://github.com/Theano/Theano/wiki/Related-projects

About NumPy:

http://numpy.scipy.org/

About SciPy:

http://www.scipy.org/

Machine Learning Tutorial with Theano on Deep Architectures:

http://deeplearning.net/tutorial/

Acknowledgments

I would like to thank all contributors of Theano. For this particular release, many people have helped, notably (in alphabetical order):

  • Aarni Koskela
  • Adam Becker
  • Adam Geitgey
  • Adrian Keet
  • Adrian Seyboldt
  • Aleksandar Botev
  • Alexander Matyasko
  • amrithasuresh
  • Andrei Costinescu
  • AndroidCloud
  • Anmol Sahoo
  • Arnaud Bergeron
  • Bogdan Budescu
  • Cesar Laurent
  • Chiheb Trabelsi
  • Chong Wu
  • Daren Eiri
  • dareneiri
  • erakra
  • Faruk Ahmed
  • Florian Bordes
  • fo40225
  • Frederic Bastien
  • Gabe Schwartz
  • Ghislain Antony Vaillant
  • Gijs van Tulder
  • Holger Kohr
  • Jan Schlüter
  • Jayanth Koushik
  • Jeff Donahue
  • Jenkins
  • jhelie
  • João Victor Tozatti Risso
  • Juan Camilo Gamboa Higuera
  • Laurent Dinh
  • Lilian Besson
  • lrast
  • Lv Tao
  • Matt Graham
  • Michael Manukyan
  • mila
  • Mohamed Ishmael Diwan Belghazi
  • Mohammed Affan
  • morrme
  • Murugesh Marvel
  • NALEPA
  • Pascal Lamblin
  • Ramana Subramanyam
  • Reyhane Askari
  • Saizheng Zhang
  • Shawn Tan
  • Shubh Vachher
  • Simon Lefrancois
  • Sina Honari
  • Steven Bocco
  • Tegan Maharaj
  • Thomas George
  • Ubuntu
  • Vikram
  • vipulraheja
  • Xavier Bouthillier
  • xiaoqie
  • yikang
  • Zhouhan LIN
  • Zotov Yuriy

Also, thank you to all NumPy and SciPy developers, as Theano builds on their strengths.

All questions/comments are always welcome on the Theano mailing-lists ( http://deeplearning.net/software/theano/#community )

--
Steven Bocco

Frédéric Bastien

Sep 8, 2017, 10:12:08 AM
to theano-users, thean...@googlegroups.com
Yes, we continue to support Python 2.7.

Frédéric

On Sat, Sep 2, 2017 at 12:41 AM Jim Goodwin <jwg...@gmail.com> wrote:
Hi,
Does this new release, Theano 0.10.0beta1, work with Python 2.7.13?

I'm on Win7, using Theano with Keras 2.0.8.

(Yes, I'd rather be using the newer Python, but I also need to stay
compatible with some others who are using old Python on a Mac.)

Thanks
Jim
