Can PyTorch provide and maintain a conda-forge recipe?
This would allow the large and growing conda-forge ecosystem to easily
install PyTorch in a community-supported way.
Are there problems with using conda or another general package manager?
I agree that the machine learning packages are trying to make a
language-specific package manager do more than it was intended to, and
other open-source solutions already exist.
Thanks,
Travis
Hi Manuel,
Adding a couple more folks from Apache Arrow to the thread to make
sure they see this discussion.
On Tue, Jan 22, 2019 at 3:48 AM Manuel Klimek <kli...@google.com> wrote:
>
> Sorry if I'm missing something fundamental, but it seems like a new manylinux standard would come with the same problem of basically being static and growing outdated.
>
> I'd be interested in helping to provide a toolchain wheel, as mentioned in the initial post, at least for libc++ (potentially libstdc++) - it seems like that could be updated on an ongoing basis, use standard dependency management and if necessary be bootstrapped with a statically linked compiler.
>
> What would the requirements for such a toolchain wheel be for it to have a chance to be widely used? (note that I come from a C++ background and don't have a lot of experience with Python outside of modules using C++ under the hood :)
In principle I would think that the requirement would be that we
demonstrate that wheels built with the newer compiler toolchain and
libstdc++ dependency can coexist with manylinux1 / manylinux2010
packages. This is supposed to be the promise of devtoolset-produced
libraries anyhow. A potential problem might be projects that need to
pass std::* objects between shared libraries in their C++ API. For
example, the "turbodbc" package uses the "pyarrow" package's C++ API.
This would just mean that any wheel that needs to depend on a wheel in
the "TF/PyTorch-compatible toolchain" ecosystem would necessarily need
to use the alternative build toolchain instead of manylinux*.
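
To make the hazard concrete, here is a minimal sketch (hypothetical names, not the actual pyarrow/turbodbc API) of the kind of cross-wheel C++ interface that forces both wheels onto the same toolchain: std::string and std::shared_ptr cross the shared-library boundary, so both sides must agree on the standard library's layout and mangling.

    // Hypothetical API exported by one wheel's C++ shared library and consumed
    // by another wheel's extension module. The std::string and std::shared_ptr
    // cross the .so boundary, so both libraries must be built against the same
    // C++ standard library ABI (e.g. the same _GLIBCXX_USE_CXX11_ABI setting).
    #include <memory>
    #include <string>

    namespace arrowlike {  // illustrative namespace, not the real pyarrow API

    class Table {
     public:
      explicit Table(std::string name) : name_(std::move(name)) {}
      const std::string& name() const { return name_; }

     private:
      std::string name_;
    };

    // A consumer such as a turbodbc-style wheel would call this across the
    // shared-library boundary; a mismatched libstdc++ layout here means
    // corrupted strings or unresolved symbols at import time.
    std::shared_ptr<Table> OpenTable(const std::string& name) {
      return std::make_shared<Table>(name);
    }

    }  // namespace arrowlike
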
On 30/01/2019 at 14:30, Manuel Klimek wrote:
> >
> > What would the requirements for such a toolchain wheel be for it
> to have a chance to be widely used? (note that I come from a C++
> background and don't have a lot of experience with Python outside of
> modules using C++ under the hood :)
>
> In principle I would think that the requirement would be that we
> demonstrate that wheels built with the newer compiler toolchain and
> libstdc++ dependency can coexist with manylinux1 / manylinux2010
> packages. This is supposed to be the promise of devtoolset-produced
> libraries anyhow. A potential problem might be projects that need to
> pass std::* objects between shared libraries in their C++ API. For
> example, the "turbodbc" package uses the "pyarrow" package's C++ API.
> This would just mean that any wheel that needs to depend on a wheel in
> the "TF/PyTorch-compatible toolchain" ecosystem would necessarily need
> to use the alternative build toolchain instead of manylinux*
>
> Fundamentally, the C++ dependency chain seems to be solvable with pip
> package deps down to the libstdc++/libc++ level.
> I think we'd basically need to provide:
> a) a toolchain pip package to depend on
> b) a manylinux docker image with those libraries and a compiler
> toolchain targeting them installed so packagers have an easy way to
> build these packages
Am I reading you wrong, or are you actually proposing to package another
libstdc++ version as a Python wheel?
If so, are you going to claim that the given wheel is manylinux-compatible?
Regards
Antoine.
On 30/01/2019 at 15:35, Manuel Klimek wrote:
>
> Am I reading you wrong, or are you actually proposing to package another
> libstdc++ version as a Python wheel?
>
>
> That would be the idea.
>
>
> If so, are you going to claim that the given wheel is
> manylinux-compatible?
>
>
> That is my question :) Why wouldn't it be? (I'd link it against
> manylinux libc and other C-only system libs)
The problem is when you are loading two modules that link against
different libstdc++ versions in the same process. Incidentally, it's
the problem which prompted this discussion.
Regards
Antoine.
> Fundamentally, the C++ dependency chain seems to be solvable with pip package deps down to the libstdc++/libc++ level.
> I think we'd basically need to provide:
> a) a toolchain pip package to depend on

Sounds like Anaconda :)

> b) a manylinux docker image with those libraries and a compiler toolchain targeting them installed so packagers have an easy way to build these packages

Sounds like conda-forge :)

Not to speak for Jonathan and Anaconda, but I suspect they designed things the way they did for much the same reasons as we are discussing here. pip and manylinux standards alone are not/were not good enough.

> Once we have that in a way that folks are happy with it, it sounds like we'd be good to go?
> There are a couple of obvious questions:
> - how to handle updates of that toolchain package / toolchain?
> - what would we want to target as a first step?

I'd also add:
- libc is still a pain, so targeted versions there would need to be spec'd out

> My proposal for something that we could work on would be:
> clang & libc++ @ llvm-8

Not that I disagree, but why clang over gcc? Seems like gcc may have a bit better compatibility across the board, but that might just be momentum talking.
On 30/01/2019 at 16:09, Manuel Klimek wrote:
>
> On Wed, Jan 30, 2019 at 3:51 PM Antoine Pitrou <ant...@python.org
> <mailto:ant...@python.org>> wrote:
>
>
> On 30/01/2019 at 15:35, Manuel Klimek wrote:
> >
> > Am I reading you wrong, or are you actually proposing to
> package another
> > libstdc++ version as a Python wheel?
> >
> >
> > That would be the idea.
> >
> >
> > If so, are you going to claim that the given wheel is
> > manylinux-compatible?
> >
> >
> > That is my question :) Why wouldn't it be? (I'd link it against
> > manylinux libc and other C-only system libs)
>
> The problem is when you are loading two modules that link against
> different libstdc++ versions in the same process. Incidentally, it's
> the problem which prompted this discussion.
>
>
> Sure, I'm aware :) I think as long as the requirement is that all
> libraries that want to exchange runtime-ABI-compatible objects are
> compiled with the same toolchain, we can provide a way to mangle the
> symbols differently.
Ah, I see... Indeed, mangling the symbols may work for this.
That said, if you're looking to create a de facto standard, why can't it
be proposed as a manylinux iteration?
Regards
Antoine.
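
For what it's worth, libc++ already has a hook for exactly this: its symbols live in an inline namespace (configurable via _LIBCPP_ABI_NAMESPACE), so a rebuilt C++ runtime can mangle its symbols differently and coexist with the system libstdc++ in the same process. A rough sketch of the idea, with made-up names:

    // Sketch of the "mangle the symbols differently" idea using an inline
    // namespace. The namespace name becomes part of every mangled symbol, so a
    // toolchain wheel built this way cannot collide with the system C++ runtime
    // loaded into the same process. All names here are illustrative only.
    namespace tfpkg {
    inline namespace __tfabi_v1 {  // baked into the mangled names

    template <typename T>
    class vector {
     public:
      void push_back(const T& value) { (void)value; /* ... */ }
    };

    }  // inline namespace __tfabi_v1
    }  // namespace tfpkg

    int main() {
      // Callers write tfpkg::vector<int>, but the emitted symbols refer to
      // tfpkg::__tfabi_v1::vector<int>, distinct from any other vector.
      tfpkg::vector<int> v;
      v.push_back(42);
      return 0;
    }
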
Just as a heads-up: I would like to also join the meeting but am also located in Europe.

I have spent quite some time with the packaging of wheels for pyarrow and turbodbc, thus I would like to also give input on this. For Apache Arrow, I see the newer manylinux2014 standard as a possible way to go. I'm not so fond of rolling lib(std)c++ packages inside of pip. It's sadly the case that the features of pip don't allow good dependency resolution, also when taking CUDA into account, a dependency resolution that differs between source and binary builds of a package. Conda was developed exactly for this case, because it was considered out-of-scope for the core Python packaging system. I'm not sure whether we can actually fit all the requirements of the packages that take part in this mail thread into pip without simply reimplementing conda inside of pip.
Uwe
On 04/02/2019 at 17:36, Uwe L. Korn wrote:
> I think that problem is whether this would get merged. Conda was created
> after a meeting with Guido van Rossum and other folks at a PyCon quite
> some years ago where the final call was that this is not a problem of
> the core Python ecosystem and that the scientific Python community has
> to roll their own solution.
>
> @Wes McKinney <mailto:wesm...@gmail.com> or someone else: Were you at
> this meeting and can outline why it was declined back then?
I'm not sure anyone in this CC list was at that meeting (I wasn't). If
it's important to have the precise answer, I can try to CC someone.
But I think the general answer is that it's a complex and difficult
endeavour, and the contribution structures inside the Python packaging
ecosystem, where most people are volunteers, didn't allow for it.
There's already enough lag maintaining the current software stack (pip
et al.).
Anaconda then came along, and the rest is history, so to speak.
Regards
Antoine.
>
> Uwe
>
> On Mon, Feb 4, 2019 at 17:33, Manuel Klimek
> <kli...@google.com <mailto:kli...@google.com>> wrote:
>
> On Mon, Feb 4, 2019 at 5:32 PM Uwe L. Korn <xho...@gmail.com
> <mailto:xho...@gmail.com>> wrote:
>
> Just as a heads-up: I would like to also join the meeting but am
> also located in Europe.
>
> I have spent quite some time with the packaging of wheels for
> pyarrow and turbodbc thus I would like to also give input on
> this. For Apache Arrow, I see newer manylinux2014 standard as a
> possible way to go. I'm not so fond of rolling lib(std)c++
> packages inside of pip. It's sadly the case that the features of
> pip don't allow a good dependency resolution, also with taking
> CUDA into account, a dependency resolution that differs between
> source and binary builds of a package. For this case, exactly
> conda was developed because it was considered out-of-scope for
> the core Python packaging system. I'm not sure whether we
> actually can fit all the requirements of the packages that take
> part in this mail thread into pip without simply reimplementing
> conda inside of pip.
>
>
> One question is probably: what would that entail, and why would it
> be bad? :)
>
>
>
> Uwe
>
> On Mon, Feb 4, 2019 at 16:34, Jason Zaman
> <ja...@perfinion.com <mailto:ja...@perfinion.com>> wrote:
>
> yeah that's expected. The timing is complicated with people
> spread all
> over. We will post notes after the meeting on the SIG-Build
> mailing
> list and I'd also be up for organizing a separate call with
> europe
> folks if that would be of interest.
>
> On Mon, 4 Feb 2019 at 19:29, 'Manuel Klimek' via SIG Build
> <bu...@tensorflow.org <mailto:bu...@tensorflow.org>> wrote:
> >
> > +Dmitri Gribenko
> >
> > Dmitri has experience with EasyBuild, which seems to be
> used by the HPC community to solve the bootstrap problem and
> could be used to build a toolchain image & pip package.
> >
> > Unfortunately we'll not be able to join the meeting as
> it's at midnight CEST - looking forward to the conclusions
> from the meeting!
> >
> > On Mon, Feb 4, 2019 at 6:00 AM Jason Zaman
> <ja...@perfinion.com <mailto:ja...@perfinion.com>> wrote:
> >>
> >> Hey all,
> >>
> >> We're having the TensorFlow SIG-Build meeting on 5th Feb
> 3pm PST
> >>
> (https://www.timeanddate.com/worldclock/fixedtime.html?iso=20190205T15&p1=224).
> >> Agenda is linked from:
> >>
> https://groups.google.com/a/tensorflow.org/forum/#!topic/build/akyPcGoBIy4
> >>
> >> I'd like to invite everyone from this thread to join the
> call if at
> >> all possible. The agenda for this meeting will spend most
> of the time
> >> focusing on the manylinux issue and hopefully we can get
> together to
> >> flesh out a decent plan on how to tackle this.
> >>
> >> Thanks,
> >> Jason
> >>
> I think trying to package CUDA is the wrong way to think about it.
> Instead, perhaps you should try to make the package compatible with
> system CUDA installs.
I agree in principle.

The problem fundamentally stems from user expectations.
In my ~6+ years of supporting Torch and PyTorch, installing CUDA on a
system can take days, with a mean of roughly half a day per user. It might
be userland incompetence, or that CUDA is a magical snowflake, but the
reality is that installing CUDA is never great.
So, a huge number of issues reported by userland are side-effects of
broken CUDA installs.
It doesn't help that the PyPI user expectation is "my package should just
work after a pip install".
If we could reliably install an up-to-date CUDA in a standardized way, and
NVIDIA didn't simply sidestep the userland issues by saying "use our
docker" or "our PPA is 100% reliable", we would be in a better state.
Until then, I think it's best that we find a solution for PyPI users that
works out of the box with PyPI.
On Mon, Feb 4, 2019 at 12:52 PM Antoine Pitrou <soli...@pitrou.net> wrote:
> On Tue, 5 Feb 2019 01:45:34 +0800
> Jason Zaman <ja...@perfinion.com> wrote:
> > On Tue, 5 Feb 2019 at 01:30, soumith <sou...@gmail.com> wrote:
> > >
> > > Unfortunately I'll be on a long flight, and cannot make it to the
> SIGBuild meeting.
> > > I'm definitely interested in the meeting notes and any follow-up
> meeting.
> > >
> > > > I think we should leave CUDA out of the
> > > discussion initially and see if we can get the cpu-only wheel working
> > > correctly. Hopefully cpu-only is viable on manylinux2014 then we can
> > > tackle CUDA afterwards.
> > >
> > > 50% of the complexity is in the CUDA packaging.
> > > The other 50% is in shipping a more modern libstdc++.so
> > > I believe we'll make progress if we ignore CUDA, but we'll not address
> half of the issue.
> >
> > Yeah, we'll definitely need both to solve it fully. My thinking is
> > that all packages need at least C++11 but only some need CUDA. Or
> > might we end up where the libstcc++.so is incompatible with CUDA if we
> > don't work on everything together?
>
> I think trying to package CUDA is the wrong way to think about it.
> Instead, perhaps you should try to make the package compatible with
> system CUDA installs.
>
> For example, the Numba pip wheel almost works out-of-the-box with a
> system CUDA install on Ubuntu 18.04. I say "almost" because I had to
> set two environment variables:
> https://github.com/numba/numba/issues/3738
>
> Regards
>
> Antoine.
>
>
>
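
As a rough illustration of "be compatible with the system CUDA install" rather than shipping CUDA inside the wheel, a binary can probe the driver that is already on the machine at runtime. This is a minimal sketch, not how PyTorch or TensorFlow actually load CUDA; it assumes only that libcuda.so.1 is installed by the NVIDIA driver and that the package can fall back to CPU-only code paths when it is absent.

    // Probe the system CUDA driver at runtime instead of bundling the toolkit.
    #include <dlfcn.h>
    #include <cstdio>

    int main() {
      void* handle = dlopen("libcuda.so.1", RTLD_NOW | RTLD_LOCAL);
      if (!handle) {
        std::printf("no system CUDA driver found: %s\n", dlerror());
        return 0;  // CPU-only fallback
      }
      // CUresult cuDriverGetVersion(int*) from the CUDA driver API; 0 == success.
      using DriverGetVersionFn = int (*)(int*);
      auto get_version = reinterpret_cast<DriverGetVersionFn>(
          dlsym(handle, "cuDriverGetVersion"));
      int version = 0;
      if (get_version && get_version(&version) == 0) {
        std::printf("system driver supports CUDA %d.%d\n",
                    version / 1000, (version % 1000) / 10);
      }
      dlclose(handle);
      return 0;
    }
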
Hello Dmitri,

Option "Userspace-2" sounds to me exactly like the thing that conda does. There is already a community around conda-forge that takes care of packaging all native requirements in separate packages, including a modern toolchain that is separate from the host system. I still need to understand why conda is not an option then? We would just be replicating this setup.

As previously mentioned, getting conda functionality into pip would be a valid option, but we may face the same issues as when conda was created. I doubt that the PyPA is more open to this scope expansion than they were then. The personnel situation is still very limited in the packaging space. For the users of Arrow, we have definitely had a much better experience with users working in conda than with those using pip, mainly due to the package manager taking care of all the binary dependencies between different packages like arrow, torch and tensorflow.

Also to reiterate a point raised earlier: C++11 with manylinux1 works smoothly. With gcc 4.8.5, everything we need in Arrow is supported. C++14 and beyond are out of scope and can only be used starting with manylinux{2010/2014}.
Uwe

On Tue, Feb 5, 2019 at 12:19, Dmitri Gribenko <dmi...@google.com> wrote:
On Mon, Feb 4, 2019 at 12:29 PM Manuel Klimek <kli...@google.com> wrote:

Thanks for looping me in, Manuel.

So I wanted to go back to the requirements and enumerate possible solutions. From soumith's email:

1. CUDA support
2. C++11 support

Neither the newest CUDA nor C++11 works on manylinux1 (CentOS 5.11). The original email does not go into detail about why CUDA does not work, but I can imagine it is because of the old userspace libraries (libc, libstdc++, libpthread etc). C++11 does not work because of an old libstdc++ and old GCC.

So what can we do about old userspace libraries?

Option "Userspace-1": The pip package uses libraries installed on the system where the pip package runs. (AKA the current manylinux approach.)
Advantages:
- Smaller download size.
Disadvantages:
- Pip packages have to be built against an old version of userspace libraries to be maximally compatible.
- No nice upgrade path. When we need a specific new feature for something (e.g., today it is modern CUDA and C++11), we have to bump the requirements for the host system. We will always be extremely cautious about not bumping the requirements too much, and therefore we will always be stuck with the oldest possible libraries that can do the job.

Option "Userspace-2": When the pip package runs, ignore the system userspace libraries. Use libraries from somewhere else.
Advantages:
- We control which versions of userspace libraries we use. We can use libraries that are newer than the system ones.
- Complete isolation from the userspace of the system where the pip package runs. The only remaining point of contact with the user's system is the kernel.
Disadvantages:
- We need to figure out where to get these libraries from.
- Bigger download size for users.

So where do we get the userspace libraries from?

Option "Userspace-2a": The pip community owns all userspace libraries that binaries in a pip package can use.
All userspace components defined by manylinux are packaged into a pip package. TensorFlow/PyTorch/... pip packages declare what version of the userspace pip package they depend on.
Advantages:
- The pip community owns all userspace components.
Disadvantages:
- The pip community owns way more stuff than before.

Option "Userspace-2b": Pip takes all userspace libraries from an existing packager.
Same as "Userspace-2a", but instead of owning the build process for the userspace libraries, we take them from an existing packager, for example Debian, CentOS, Anaconda, the Nix package manager, whatever we decide on.
Advantages:
- The pip community controls userspace components.
Disadvantages:
- The pip community owns more stuff than before.

What can we do about the old toolchain?

Option "Toolchain-1": Use a toolchain from a certain old distribution, so that the output is maximally compatible.
This option is compatible with any choice of userspace, as long as the libraries don't require a new compiler or language features.
Disadvantages:
- An ancient toolchain that does not support modern C++.

Option "Toolchain-2": Make a modern toolchain that produces maximally-compatible output.
This option is difficult to implement, since a modern toolchain using a modern C++ version will require using a contemporary C++ standard library (libc++ or libstdc++).

Option "Toolchain-3": Make a modern toolchain that requires a modern C++ library.
AKA what Manuel is proposing. Package a modern libc++ as a wheel, make a Docker container with the corresponding Clang for building binary packages like TensorFlow.

Thoughts?
Dmitri
From the requirements side (Martin will correct me if I'm getting these wrong):
- it seems like from the TF point of view, our users are on pip, so we need to deliver there
- LLVM is going to require C++14 ~in March as far as I can tell
- from trying to find info about manylinux2010 / 14, it seems like these have stalled? (but I'm happy to be proven wrong here :)
On 05/02/2019 at 16:22, Manuel Klimek wrote:
> On Tue, Feb 5, 2019 at 2:01 PM Uwe L. Korn <xho...@gmail.com
> <mailto:xho...@gmail.com>> wrote:
>
> Also to reiterate a point raised earlier: C++11 with manylinux1
> works smoothly. With gcc 4.8.5, everything we need in Arrow
> supported. C++14 and more are out of scope and can only be used
> starting with manylinux{2010/2014}.
>
> From the requirements side (Martin will correct me if I'm getting these
> wrong):
> - it seems like from the TF point of view, our users are on pip, so we
> need to deliver there
> - LLVM is going to require C++14 ~in March as far as I can tell
> - from trying to find info about manylinux2010 / 14, it seems like these
> have stalled? (but I'm happy to be proven wrong here :)
manylinux2010 hasn't stalled, it's been progressing slowly. Apparently
pip 19.0 is out which supports downloading and installing manylinux2010
packages. See status page here:
https://github.com/pypa/manylinux/issues/179#issuecomment-457002180
manylinux2014 is an entirely different question. It needs interested
parties to gather and devise a spec and then get it accepted as a new PEP.
Regards
Antoine.
On 2/5/19 9:29 AM, 'Manuel Klimek' via TensorFlow Developers wrote:
On Tue, Feb 5, 2019 at 4:28 PM Antoine Pitrou <ant...@python.org> wrote:
On 05/02/2019 at 16:22, Manuel Klimek wrote:
> On Tue, Feb 5, 2019 at 2:01 PM Uwe L. Korn <xho...@gmail.com
> <mailto:xho...@gmail.com>> wrote:
>
> Also to reiterate a point raised earlier: C++11 with manylinux1
> works smoothly. With gcc 4.8.5, everything we need in Arrow
> supported. C++14 and more are out of scope and can only be used
> starting with manylinux{2010/2014}.
>
> From the requirements side (Martin will correct me if I'm getting these
> wrong):
> - it seems like from the TF point of view, our users are on pip, so we
> need to deliver there
> - LLVM is going to require C++14 ~in March as far as I can tell
> - from trying to find info about manylinux2010 / 14, it seems like these
> have stalled? (but I'm happy to be proven wrong here :)
manylinux2010 hasn't stalled, it's been progressing slowly. Apparently
pip 19.0 is out which supports downloading and installing manylinux2010
packages. See status page here:
https://github.com/pypa/manylinux/issues/179#issuecomment-457002180
Cool! The problem is that it doesn't solve the C++14 issue, right?
Devtoolset-7 can be installed on RHEL 6/CentOS 6, which is the reference distribution of manylinux2010. Devtoolset-7 includes GCC 7.3.1, which has full support for C++14. On RHEL 6/CentOS 6 the devtoolset compilers target the older GCC C++ ABI (-D_GLIBCXX_USE_CXX11_ABI=0) and will not emit the newer ABI. There is an open pull request to the manylinux repository to create a docker image containing this toolset, which may be of interest:
https://github.com/pypa/manylinux/pull/252
Cheers,
- Jonathan Helmus
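
A quick way to see which libstdc++ ABI a given toolchain targets is to check the macro Jonathan mentions at compile time. Built with devtoolset on CentOS 6 this prints the old ABI; a default modern GCC prints the new one. A small sketch, not taken from any of the packages discussed here:

    #include <iostream>
    #include <string>  // pulls in libstdc++'s ABI configuration macros

    int main() {
    #if defined(_GLIBCXX_USE_CXX11_ABI) && _GLIBCXX_USE_CXX11_ABI
      std::cout << "std::string uses the new C++11 libstdc++ ABI\n";
    #else
      std::cout << "std::string uses the old ABI "
                   "(e.g. -D_GLIBCXX_USE_CXX11_ABI=0, devtoolset on CentOS 6)\n";
    #endif
      return 0;
    }
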
manylinux2014 is an entirely different question. It needs interested
parties to gather and devise a spec and then get it accepted as a new PEP.
Regards
Antoine.
On 06/02/2019 at 01:06, Philipp Moritz wrote:
> Thanks for the meeting! One question concerning a point that is still
> not super clear to me:
>
> Say we define a new manylinux standard based on gcc >=5 (with stable
> c++11 support). There will still be a lot of wheels form the manylinux1
> days that are built against gcc 4.8 that might use the c++11 features
> before they became stable. How do we prevent bugs from that? Is the plan
> to convince everybody who uses these c++11 features to use the new
> manylinux standard?
Yes, that's a bit of a problem.
This discussion arose from the incompatibility between TensorFlow
wheels (compiled with a later toolchain) and other Python wheels
(compiled with a manylinux1-compatible toolchain).
Intuitively, by using the new C++ ABI we may prevent such issues when
installing manylinux1 wheels and manylinux20XX wheels side-by-side. But
it's difficult to say for sure.
Regards
Antoine.
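
The reason the new ABI can help is that libstdc++'s dual ABI changes the mangled names of anything that touches std::string or std::list, so a mismatch between a manylinux1 wheel and a newer wheel tends to surface as an unresolved symbol at import time instead of silent memory corruption. A toy example of an exported function whose symbol differs under the two settings (hypothetical, not from any of the wheels discussed):

    // With -D_GLIBCXX_USE_CXX11_ABI=0 the parameter mangles via
    // std::basic_string<...>; with =1 it mangles via std::__cxx11::basic_string<...>.
    // A caller built with the other setting fails to resolve this symbol rather
    // than passing an incompatible string layout across the wheel boundary.
    #include <string>

    std::size_t byte_length(const std::string& text) {
      return text.size();
    }
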
On 06/02/2019 at 14:27, Manuel Klimek wrote:
> On Wed, Feb 6, 2019 at 12:38 PM Antoine Pitrou <ant...@python.org
> <mailto:ant...@python.org>> wrote:
>
>
> On 06/02/2019 at 01:06, Philipp Moritz wrote:
> > Thanks for the meeting! One question concerning a point that is still
> > not super clear to me:
> >
> > Say we define a new manylinux standard based on gcc >=5 (with stable
> > c++11 support). There will still be a lot of wheels from the
> manylinux1
> > days that are built against gcc 4.8 that might use the c++11 features
> > before they became stable. How do we prevent bugs from that? Is
> the plan
> > to convince everybody who uses these c++11 features to use the new
> > manylinux standard?
>
> Yes, that's a bit of a problem.
>
> This discussion arose from the incompatibility between TensorFlow
> wheels (compiled with a later toolchain) and other Python wheels
> (compiled with a manylinux1-compatible toolchain).
>
>
> Do you know where these communicate with std types? (due to ABI tagging
> loading them into the same process should work, right?)
They don't. I don't remember the specifics, Philipp Moritz might know
more about this.
Regards
Antoine.