MPIR 2.2


Jason Moxham

unread,
Jun 2, 2010, 5:18:37 PM6/2/10
to mpir-...@googlegroups.com
Hi , here are some thoughts about what we should/could do for the next
release.

1) Upgrade yasm to the latest (easy)

2) Upgrade GNU config to the latest (don't know how difficult that is, but it
could fix some niggles we have, and it might simplify our specialisations).

3) Upgrade to the latest autotools/libtool (some distros are moving over to
the latest 2.2, so we may need/want to do the same; again, I don't know what
this involves)

4) A few assembler functions to add

5) Move demos out of the library onto the web page

6) Get rid of ancient CPUs/compilers (we will still work under C, if anyone
cares); this would simplify configure a bit: cray, pyramid,
z8000, lisp, clipper, ...

7) Make configure run faster. I'm sure we can remove some of the tests (I
can't believe they are still needed) and/or share the test results with yasm.

8) Make "make check" parallel; it can be done.

9) Some of the changes we have made have not been finished; finish them.

10) Split configure into two, i.e. standard and MPIR specific; this should
make the maintenance easier. This is fairly ambitious :)

11) Drop support for building/running on FAT file systems (i.e. file names in
8.3 format)

12) Simple command line build for windows (not dependent on vcproj files)

13) Fix some known bugs

14) When we update stuff there are many places where you have to fill in the
same info; make it automatic (autotools can do this, it's just not been set
up that way)

Some of these are trickier than others , but my aim is to simplify the system
(the non-computational parts of it)

Bets now being taken on what % will get done :)

Thoughts?

Jason

jason

unread,
Jun 24, 2010, 1:04:31 AM6/24/10
to mpir-devel
Hi

Hopefully we can release mpir-2.1.1 today; then this is what I plan
to do. I don't have a lot of time in the next month so I'll mainly
concentrate on small items, i.e. bug fixing/cleaning-up, stuff I can do
in a few hours or in fits and starts, nothing where I have to sit
down for a day or two :(

On Jun 2, 10:18 pm, Jason Moxham <ja...@njkfrudils.plus.com> wrote:
> Hi , here are some thoughts about what we should/could do for the next
> release.
>
> 1) Upgrade yasm to the latest (easy)

Yes

>
> 2) Upgrade gnu config to the latest ( dont know how difficult that is , but it
> could fix some niggles we have , and it might simplify our specialisations.
>
> 3) Upgrade to the latest autotools/libtool ( some distros are moving over to
> the latest 2.2 , we may need/want to do the same , again dont know what this
> involves)
>
> 4) A few assembler functions to add
>

Yes

> 5) Move demo's out of the library onto the web page
>

Yes

> 6) Get rid of ancient cpus/compilers ( we will still work under C , if anyone
> cares) , this would simplify configure a bit  , cray,pyramid,
> z8000,list,clipper,....
>

Yes
These chips (a29k, clipper, i960, m88k, ns32k, pyr, z8000, z8000x) are not
supported by the current GCC, so I will remove the asm code and all the
configure bumf that goes with it. Also maybe vax and cray (old non-IEEE
standard); also, the lisp directory should be on the website, not in
the library.
There is also some configure guff for CPUs which have already been
removed; clean all this up.

> 7) Make configure run faster , I'm sure we can remove some of the tests , I
> can't believe they are still needed , and/or share the test results with yasm.
>
> 8) make make check parallel , it can be done.
>
> 9) Some of the changes we have made , have not been finished , finish them.
>
> 10) Split configure into two , ie standard and MPIR specific , should make the
> maintenance easier , this is fairly ambitious :)
>
> 11) drop support for building/running on FAT file systems ( ie file name 8.3
> format)
>
> 12) simple command line build for windows ( not dependant of vcproj files)
>
> 13) fix some known bugs
>

Yes

> 14) When we update stuff , there are many places where you have to fill in the
> same info , make it automatic (autotools can do this , it's just not been set
> up that way)
>

Yes , I'll start , and we will see how it goes

Also remove the pre-build stuff; it is needlessly complicated and
only really needed for nail builds, which we don't support. I'll put
the generated files in the mpn/cpu/ directories, or rather links to the
only two variants we support.

Plenty of other little bits to do :)

Jason

Jason Moxham

unread,
Jun 25, 2010, 1:33:31 PM6/25/10
to mpir-...@googlegroups.com
Hi

I have updated yasm to the latest svn. Unfortunately it was not as easy as
I thought. I was going to take a diff of the yasm svn's and just apply that to
our yasm, so that we have only a small set of changes (about 1MB), but it
didn't apply cleanly: there were a lot of differences in white-space???, and
make check failed (due to white-space, as they use text comparison for
make check, which is not robust against that). The main reason for this is
that many of the files in yasm are auto-generated, but I couldn't get it to
regenerate the broken files.
Anyway, in the end I took our differences and applied them to yasm svn. This
works fine, and I set up a "script" and diff so we can easily upgrade at any
time ("mpir/yasm.diff"). The only downside is that all the yasm files are
considered new, so anyone wishing to get the svn (or see what has changed)
has a larger download (we are not interested in what has changed in yasm
though!!! It is just yasm svn rev 2334, plus our changes in mpir/yasm.diff,
then autogen'ed).

I have removed all support for the following cpu's
a29k
clipper
i960
i960mx
m88000 or m88k
m88110
ns32000 or ns32k
pyr or pyramid
z8000
z8000x

Note: GCC does NOT support them, so clearly they are dead. They could still
possibly be used with a generic C build, but you would need an old enough
compiler, which would probably break elsewhere.

I have removed the demos from the library and I will put them on the webpage
once I get them to work outside of the library; there are some dependencies
on undocumented internals.
There was also an emacs "profile" to help with editing m4'ed asm files; this
was in the mpn directory! I could put it on the website, but I don't think it
is worth it.

There are some more old CPUs for which it may be good to drop all (or
explicit) support; I'll post a list later with some details for feedback.

It would also simplify things if we could drop support for IRIX, which is
different enough to complicate autotools. I will look more closely into it to
see if this is a good idea or not.

Jason

Jason Moxham

unread,
Jun 25, 2010, 3:16:15 PM6/25/10
to mpir-...@googlegroups.com
Hi

These cpu's also have no support from gcc , so again I think we should
certainly remove them

gmicro
i860
ibm032 or 032 or ROMP
uxp or xp fujitsu 32bit vector supercomputer

Note these are only entries in longlong.h , but as we want to get rid of it
someday , all the cpu types in it have to go somewhere or be removed.

Jason

Jason Moxham

unread,
Jun 28, 2010, 6:58:08 AM6/28/10
to mpir-...@googlegroups.com
On Friday 25 June 2010 20:16:15 Jason Moxham wrote:
> Hi
>
> These cpu's also have no support from gcc , so again I think we should
> certainly remove them
>
> gmicro
> i860
> ibm032 or 032 or ROMP
> uxp or xp fujitsu 32bit vector supercomputer
>
> Note these are only entries in longlong.h , but as we want to get rid of it
> someday , all the cpu types in it have to go somewhere or be removed.
>

These have now been removed.

Trac ticket 295: I removed the old gcd stuff and associated functions, and
there is one further point mentioned:
"Also, the function mpn_ngcd (in ngcd.h) seems to be a duplicate of mpn_gcd. I
think we can probably get rid of it. "


here is the diff between the two

2c2
< mpn_ngcd (mp_ptr gp, mp_ptr ap, mp_size_t an, mp_ptr bp, mp_size_t n)
---
> mpn_gcd (mp_ptr gp, mp_ptr ap, mp_size_t an, mp_ptr bp, mp_size_t n)
11a12
> {
12a14
> }
14c16
< init_scratch = MPN_NGCD_MATRIX_INIT_ITCH ((n+1)/2);
---
> init_scratch = MPN_NGCD_MATRIX_INIT_ITCH (n-P_SIZE(n));
20a23,25
> if (scratch < MPN_NGCD_LEHMER_ITCH(n)) /* Space needed by Lehmer GCD */
> scratch = MPN_NGCD_LEHMER_ITCH(n);
>
48c53
< mp_size_t p = n/2;
---
> mp_size_t p = P_SIZE(n);
71,72c76,81
< #if 0
< /* FIXME: We may want to use lehmer on some systems. */
---
>
> if (ap[n-1] < bp[n-1])
> MP_PTR_SWAP (ap, bp);
>
> if (BELOW_THRESHOLD (n, GCD_THRESHOLD))
> {
77,80c86
< #endif
<
< if (ap[n-1] < bp[n-1])
< MP_PTR_SWAP (ap, bp);
---
> }

So it looks like gcd and ngcd are the same, but gcd has been updated to the
latest thresholds, I think?

Jason

Jason Moxham

unread,
Jun 28, 2010, 8:14:48 AM6/28/10
to mpir-...@googlegroups.com
Hi

Now that we have removed the old CPUs, here are some operating systems I
propose we remove explicit support for:

IRIX for mips
OSF/TRU64 for alpha
SunOS <=version 4 (version 5 is called solaris ie on fulvia/mark.skynet)
DJGPP dos
OS2
Unicos cray's unix
pw32 posix on win32

Comments?

Jason


Jason Moxham

unread,
Jun 28, 2010, 9:45:21 AM6/28/10
to mpir-...@googlegroups.com
On Monday 28 June 2010 11:58:08 Jason Moxham wrote:
> On Friday 25 June 2010 20:16:15 Jason Moxham wrote:
> > Hi
> >
> > These cpu's also have no support from gcc , so again I think we should
> > certainly remove them
> >
> > gmicro
> > i860
> > ibm032 or 032 or ROMP
> > uxp or xp fujitsu 32bit vector supercomputer
> >
> > Note these are only entries in longlong.h , but as we want to get rid of
> > it someday , all the cpu types in it have to go somewhere or be removed.
>
> These have now been removed.
>

gcc-4.5.0 has obsoleted support for the old POWER arch (aka RIOS,RIOS2) , so
that is yet another dead directory

Jason

Jason Moxham

unread,
Jun 29, 2010, 3:38:15 PM6/29/10
to mpir-...@googlegroups.com
On Friday 25 June 2010 20:16:15 Jason Moxham wrote:
> Hi
>
> These cpu's also have no support from gcc , so again I think we should
> certainly remove them
>
> gmicro
> i860
> ibm032 or 032 or ROMP
> uxp or xp fujitsu 32bit vector supercomputer
>

I have removed all traces of the above cpus and I will start to chop out the
old OSes. Note: this does not mean that we will not run under these OSes;
it just means that any special conditions for them are removed. Some of these
special conditions are for broken installs or very old versions which were
missing certain crucial header files etc, so later versions may work, but I
would not count on it, and if they don't then tough :)

Thinking about what other changes I would like to make to simplify things a
bit, I realized that most of the other changes would involve Brian making
similar changes to the vcproj files. A few of the simpler name changes I am
sure I can do with a simple text replacement on the vcproj files, but most of
the other stuff would need Brian's involvement. So it seems that the best
course forward to avoid this duplication of effort is to get the windows port
following the unix one automatically.

The justification for simplifying our build system is that 50% of errors are
build system related, and unfortunately autotools is a very poor design: it
requires you to understand its internals to use it.

1) Write our own build system, starting with x86 and working through the
other major cpus/OSes one at a time. This is TOO much work; we are not taking
advantage of other people's work on "boring" stuff, and MPIR is about math,
not build systems.

2) We could write a simple script which does a basic build, but it would make
windows a second class build environment unless we re-implement most features
of a make system. There are two aspects to this: 1) to get the build optimal
and 2) to be able to debug and develop on windows.

3) We could convert to e.g. cmake, which supports unix and windows; this
appears to be the most attractive option, but it would take a fair amount of
time. cmake produces native vcproj files for windows and makefiles for linux,
so both camps would be in their NATIVE elements.

4) Get autotools to run NATIVE on windows with MSVC. What I have in mind is
really a trick; there are two parts to it: 1) get it to use cl.exe instead of
gcc.exe, 2) hide the fact that we are running under another shell.

autotools can run cl.exe, no problem, just like it can run cc or icc; the
options are a little different, but a script can easily take care of that (as
long as everything is one-to-one). Once we have this bit, then we could get
an MSVC compile under cygwin or minGW. The next stage is to have a "hidden
install" of minGW so we can run autotools (just like we do for yasm under
linux).

5) Just leave the system as it is.


My thoughts are these
1) Insane
2) The present configure.bat,make.bat emulate what it would achieve , but would
make development on a windows system awkward.
3) This seems like the best long term solution , to use a build system which
will handle all modern OS'es , but it means a lot of work
4) We should be able to do this with a few weeks hacking
5) Will keep Brian busy :)

Note: this does not address the issue that the assembler code for linux and
windows is different, but I don't believe this to be a major obstacle at the
moment.

When I get my Windows box back, assuming they managed to fix it this time,
then I will try for 4). I'm sure other projects could also benefit from
this (e.g. Sage).

Jason

Cactus

unread,
Jun 29, 2010, 5:25:44 PM6/29/10
to mpir-devel
On Jun 29, 8:38 pm, Jason Moxham <ja...@njkfrudils.plus.com> wrote:
> On Friday 25 June 2010 20:16:15 Jason Moxham wrote:
>
> > Hi
>
> > These cpu's also have no support from gcc , so again I think we should
> > certainly remove them
>
> > gmicro
> > i860
> > ibm032 or 032 or ROMP
> > uxp or xp fujitsu 32bit vector supercomputer
>
> I have removed all traces of the above cpus and I will start to chop out the
> old OS'es . Note: this does not mean that we will not run under these OS'es ,
> it just means that any special conditions for them are removed . Some of these
> special conditions are for broken installs or very old versions which were
> missing certain crucial header files etc , so later versions may work , but I
> would not count on it , and if they dont then tough :)
>
> Thinking about what other changes I would like to make to simplify things a
> bit , I released that most of the other changes would involve Brian making
> similar changes to the vcproj files . A few of the simpler name changes I am
> sure I can do a simple text replacement of the vcproj files, but most of the
> other stuff would need Brian's involvement. So it seems that the best course
> forward to avoid this duplication of effort is to get the windows port
> following the unix one automatically.

The big issue for me here is not making the changes on Windows but in
working out what has happened on Linux and what this means for the
Windows build.

If the changes are properly documented before they are done the effort
would not then be large, albeit a bit tedious.

> The justification for simplifying our build system is that 50% of errors are
> build system related , and unfortunately autotools is a very poor design , it
> requires you to understand it internals to use it.
>
> 1) write our own build system starting with x86 and work thru the other major
> cpu's/OSes one at a time. This is TOO much work , we are not taking advantage
> of other peoples work on "boring" stuff , MPIR is about math not build systems.

Much as I would like to see this, I agree that it would involve a huge
effort.

> 2) We could write a simple script which does a basic build , but it would make
> windows a second class build environment unless we re-implement most features
> of a make system.There are two aspects to this , 1) to get the build optimal
> and 2) be able to debug and develop with windows.

I am not sure about this one as I don't fully understand the
capabilities you envisage.

> 3) We could convert to eg) cmake which supports unix and windows , this
> appears to be the most attractive option , this would take a fair amount of
> time. cmake produces native vcproj files for windows and makefiles for linux ,
> so both camp's would be in their NATIVE elements.

This is attractive as it would maintain a native Windows build
capability.

> 4) Get autotools to run NATIVE in windows with MSVC . What I have in mind is
> really a trick , there are two parts to it , 1) get it use cl.exe instead of
> gcc.exe 2) hide the fact that we are running under another shell.
>
> autotools can run cl.exe , no problem , just like it can run cc or icc , the
> options are a little different , but a script can easily take care of that (as
> long as everything is one-to-one) , once we have this bit , then we could get
> a MSVC compile under cygwin or minGW . The next stage is to have a "hidden
> install" of minGW so we can run autotools, (just like we do for yasm under
> linux).

I am far from convinced that this will provide a decent native Windows
build.

Turning Windows into a poor man's version of Linux does not appeal to
me as it almost always involves abandoning Windows conventions. For
example, non-standard installation directories have to be used to
avoid spaces in paths, which almost invariably kill Unix tools.

> 5) Just the leave the system as it is.
>
> My thoughts are these
> 1) Insane

Agreed.

> 2) The present configure.bat,make.bat emulate what it would achieve , but would
> make development on a windows system awkward.

It would be hard to emulate the FAT build but I have often wondered
whether it would be possible to automate the generation of *.vcproj
files from Unix makefiles.

> 3) This seems like the best long term solution , to use a build system which
> will handle all modern OS'es , but it means a lot of work

I have never used CMAKE but it has a strong following.

> 4) We should be able to do this with a few weeks hacking

Can it be done without turning Windows into a poor man's Linux?

> 5) Will keep Brian busy :)

That depends on what is now planned :-)

> Note: this does not address the issue that the assembler code for linux and
> windows are different , but I dont believe this to be a major obstical at the
> moment.

But the proliferation of assembler is by far the biggest task that I
face as it pretty well always involves a partial rewrite.

Although some conversions are easy, the more 'serious' assembler files
with a lot of macros and repeated code sequences are often very
difficult and very error prone during conversion.

Bill and I started with the intent of having one set of assembler
files to support both Linux and Windows but we gave up as the
constraints imposed by Windows x64 calling conventions are difficult
to accommodate without reducing performance on Linux.

Brian

Jason Moxham

unread,
Jun 30, 2010, 5:32:55 AM6/30/10
to mpir-...@googlegroups.com

I will try to post all relevant changes to the list so you can get a heads
up. Some of the changes I will do when I get a spare hour or two; I wouldn't
do them otherwise as they are not terribly important, but they can make work
for you which could be seen as pretty pointless.

> > The justification for simplifying our build system is that 50% of errors
> > are build system related , and unfortunately autotools is a very poor
> > design , it requires you to understand it internals to use it.
> >
> > 1) write our own build system starting with x86 and work thru the other
> > major cpu's/OSes one at a time. This is TOO much work , we are not taking
> > advantage of other peoples work on "boring" stuff , MPIR is about math
> > not build systems.
>
> Much as I would like to see this, I agree that it would involve a huge
> effort.
>
> > 2) We could write a simple script which does a basic build , but it would
> > make windows a second class build environment unless we re-implement most
> > features of a make system.There are two aspects to this , 1) to get the
> > build optimal and 2) be able to debug and develop with windows.
>
> I am not sure about this one as I don't fully understand the
> capabilities you envisage.
>

I mean a script which would select a code path based on the cpu, and then use
cl.exe to compile and link everything in that path. The resultant library
should be just as good for the user, but for the developer it would be a
pain to use.

I have not considered FAT builds.

> > 3) This seems like the best long term solution , to use a build system
> > which will handle all modern OS'es , but it means a lot of work
>
> I have never used CMAKE but it has a strong following.

I have never used it either :)

>
> > 4) We should be able to do this with a few weeks hacking
>
> Can it be done without turning Windows into a poor man's Linux?
>
> > 5) Will keep Brian busy :)
>
> That depends on what is now planned :-)
>
> > Note: this does not address the issue that the assembler code for linux
> > and windows are different , but I dont believe this to be a major
> > obstical at the moment.
>
> But the proliferation of assembler is by far the biggest task that I
> face as it petty well always involves a partial rewrite.
>

I had always assumed it was fairly painless , as you always manage to convert
them within a day or two :)

Cactus

unread,
Jun 30, 2010, 6:52:33 AM6/30/10
to mpir-devel
Unfortunately it can be quite hard work. On Windows there are two
types of functions - leaf functions and frame functions.

Leaf functions can only use rax, rcx, rdx, r8, r9, r10 and r11 but
offer a major advantage in that they don't require explicit exception
support.

A lot of your assembler code is agonisingly close to being put in this
form, but the fact that Linux has two additional scratch registers (rsi
and rdi) considerably complicates the conversion. In consequence I
usually have to look for ways of reusing registers and this often
involves a lot of code analysis. If I find a way of doing this, I then
have to redesign your code with the fewer registers and then switch
registers to take account of the different calling conventions:

Linux: rdi, rsi, rdx, rcx, r8, r9, 8(rsp), 16(rsp) ....
Windows: rcx, rdx, r8, r9, [rsp+40], [rsp+48] ...

Frame functions can save and restore other registers and hence use
them. But they have to do this only at a single function entry point
(save) and at a single function exit point (restore). Although your
code is pretty good at saving registers at the start, it makes a lot
of use of multiple exit points, all of which I have to change to a
single exit point. When these multiple exits are a part of macro
expansions this remapping becomes difficult and error prone. In
consequence I often have to spend a lot of time in low level debugging
to get this right. I also have to worry about whether any of this
messes up your optimisation efforts (I suspect I do a bad job here).

You are right that I can usually do this in a couple of days. But it
is often an intensive effort. On the other hand it is at least an
intellectual challenge whereas the other build changes are nothing but
pure tedium :-)

However, I think we can do several things that might make the Windows
build much simpler now. First of all, the big differences between
Linux and Windows occur on x64. The libraries built with mingw for
win32 work with Visual Studio and, I assume, use the assembler
support. So I can drop Visual Studio support for win32 without much
of a penalty. This would be a significant simplification.

And, now that we have published Visual Studio 2008 and 2010 support
for MPIR 2.1.1, I can drop support for Visual Studio 2008 in future
MPIR releases.

Brian

Jason Moxham

unread,
Jul 1, 2010, 4:10:12 PM7/1/10
to mpir-...@googlegroups.com

More tedium ahead warning......

The pre-build file fac_ui.h will be incorporated into fac_ui.c, removing the
need to generate it. Same for psqr.h, and probably the other two when I get
around to it.


> However, I think we can do several things that might make the WIndows
> build much simpler now. First of all, the big differences between
> Linux and Windows occur on x64. The libraries built with mingw for
> win32 work with Visual Studio and, I assume, use the assembler
> suppport. So I can drop Visual Studio support for win32 without much
> of a penaalty. This would be a significant simplification.
>

Yep, sounds good, although wasn't there some problem with mixing them (I
don't think this is MPIR specific)? Trac ticket 220 - can't read it at the
mo, trac is down.

> And, now that we have published Visual Studio 2008 and 2010 support
> for MPIR 2.1.1, I can drop support for Visual Studio 2008 in future
> MPIR releases.
>

I personally think this is too soon, but I don't have to maintain them.

> Brian

Cactus

unread,
Jul 1, 2010, 4:37:37 PM7/1/10
to mpir-devel

In an ideal world I would agree with you, but the cost of maintaining
both would be very high because the automated conversion from Visual
Studio 2008 to Visual Studio 2010 does not work well for MPIR. In
consequence I would have to maintain both builds independently of one
another and this would be very costly indeed.

Removing the pre-build steps will be a useful simplification. The
other thing that might be interesting on Windows is to build only mpn
code as a library and then have a single build project that takes this
library and adds it to the remaining code that is the same for all x64
architectures. I used to do it this way but I had terrible problems
ensuring that the mpn library and the related config.h files were
correctly associated (i.e. the HAVE_NATIVE_xxx stuff).

But it would be worth trying this again as it would be a massive
simplification if I could get it working reliably.

Brian

Case Vanhorsen

unread,
Jul 1, 2010, 10:46:14 PM7/1/10
to mpir-...@googlegroups.com

All recent versions of Python (2.6, 2.7, and 3.1) and, I believe, the
next 3.2 release, are all built using VS 2008. I haven't checked if
there are any compatibility issues using gmpy that has been compiled
with VS 2010 and Python that has been compiled with VS 2008. I use
mingw32 to create the 32-bit builds so I really only need the 64-bit
support.

casevh


>>
>> I personally think this is too soon , but I dont have to maintain them.
>>
>>
>>
>
> In an ideal world I would agree withn you but the cost of maintaining
> both would be very high because the automated conversion  from Visual
> Studio 2008 to Visual Studio 2010 does not work well for MPIR.  In
> consequence I would have to maintain both builds independently of one
> another and this would be very costly indeed.
>
> Removing the pre-build steps will be a useful simplification.    The
> other thing that might be interesting on Windows is to build only mpn
> code as a library and then have a single build project that takes this
> library and adds it to the remaining code that is the same for all x64
> architectures.   I used to do it this way but I had terrible problems
> ensuring that the mpn library and the related config.h files were
> correctly associated (i.e. the HAVE_NATIVE_xxx stuff).
>
> But it would be worth trying this again as it would be a massive
> simplification if I could get it working reliably.
>
>    Brian
>


Jason Moxham

unread,
Jul 2, 2010, 9:52:14 AM7/2/10
to mpir-...@googlegroups.com

I have removed all explicit support for the OS'es
pw32
unicos
os2
djgpp
osf/tru64

I have yet to do IRIX/SunOS as there are quite a few simplifications that
can be made.

I have removed the pre-build file fac_ui.h; the constants are now in the .c
file. The program that generated them is in a new directory, devel/, which
holds files for the developers only; they will not appear in any mpir
release. So we have yasm.diff (used for updating yasm) and setversion (used
for changing version numbers) in there.

The windows build will need to reflect the fac_ui changes.

Jason

Jason Moxham

unread,
Jul 2, 2010, 12:00:36 PM7/2/10
to mpir-...@googlegroups.com

I have removed the pre-build files mp_bases.h and fib_table.h and
incorporated them into gmp-impl.h; for windows the only change is that there
is no longer any need to generate them.

Jason

Cactus

unread,
Jul 2, 2010, 1:46:33 PM7/2/10
to mpir-devel

Hi Jason

I have removed the three prebuild steps you have changed and now all
the Windows builds fail with:

error LNK2001: unresolved external symbol __gmpn_bases

Where is this symbol supposed to be defined now?

Brian


Cactus

unread,
Jul 2, 2010, 1:59:41 PM7/2/10
to mpir-devel



Aghhh - it looks like I have misunderstood what I needed to do. I
simply removed the three prebuild steps you listed but I still need
one of them to provide mp_bases.c :-(((((

How on earth do I revert the current SVN trunk to an earlier version
(3051)?

Brian

Jason Moxham

unread,
Jul 2, 2010, 2:04:18 PM7/2/10
to mpir-...@googlegroups.com

You are a bit premature; I have only removed half of them, the other half is
still to do. I should do it by tomorrow. You might as well leave what you
have done for the mo. I left it as autotools didn't like it when I removed
both, so I need a closer look at it.

Thanks
Jason

Cactus

unread,
Jul 2, 2010, 2:14:15 PM7/2/10
to mpir-devel
Thanks Jason - I will wait on your update before trying again.

Do you intend to remove the perfsqr.h prebuild step?

Brian

I assume I will still have prebuild steps

Jason Moxham

unread,
Jul 2, 2010, 2:46:12 PM7/2/10
to mpir-...@googlegroups.com

Yep , they are all going :) , should finish the lot this weekend.

> Brian
>
> I assume I will still haveprebuiid steps

I was thinking of ripping out the path selection code next. This is the code
in configure.in (about 1000 lines) which chooses which mpn asm code to
include and creates symbolic links. We could replace it with a table and a
python? script that we can share between linux and windows. I'm not too sure
how this would fit in with the project files, but it could enable you to do
just one project with the cpu selection done in python. Actually I won't do
it next.

Two more cpu's I propose we drop support for

cray
These are the cray vector machines, which must be old supercomputers that by
now must be slower than most laptops, so I assume no one still uses them.
Some have non-ieee floating point and 48-bit ints, so there is quite a lot of
specific code for them. The latest cray machines are x86_64 (or slightly
older Alphas).

vax
This is the 1970s/80s minicomputer, and according to wikipedia:

The VAX architecture was eventually superseded by RISC technology. In 1989 DEC
introduced a range of workstations and servers that ran Ultrix, the DECstation
and DECsystem respectively, based on processors that implemented the MIPS
architecture. In 1992 DEC introduced their own RISC instruction set
architecture, the Alpha AXP (later renamed Alpha), and their own Alpha-based
microprocessor, the DECchip 21064, a high performance 64-bit design capable of
running OpenVMS.
In August 2000, Compaq announced that the remaining VAX models would be
discontinued by the end of the year.[8] By 2005 all manufacturing of VAX
computers had ceased, but old systems remain in widespread use.

Jason

jason

unread,
Jul 16, 2010, 11:08:59 AM7/16/10
to mpir-devel
I have removed the file mpn/ngcd and the associated function , so
ticket 295 is closed

Jason

Jason

unread,
Jul 16, 2010, 10:39:44 AM7/16/10
to mpir-...@googlegroups.com

Hi

Trac has been offline for an hour or so

Thanks
Jason

jason

unread,
Jul 21, 2010, 3:28:37 PM7/21/10
to mpir-devel
All the pre-build stuff has been removed , and I have also removed the
trunk/perfsqr.h file (that was "generated") and incorporated it into
gmp-impl.h (like the other *.h generated files)

I had to put a dummy file in, temporarily; it's generated but not
linked to the lib. This is to stop autotools complaining.

I've put the new Slackware 13.1 on my "main" machine which has the new
autotools. This conflicts with our current Makefile.am, so that
needs to be sorted out, but it does allow us to run make check in
parallel (or at least that's what it says on the tin).

I got my main windows box back for the n'th time , but it's still
broken , so windows stuff will have to wait

I have removed the cray (vector machines) and VAX models from
configure.

IRIX/SunOS(<=4) OSes to be removed , bit by bit

jason

unread,
Jul 21, 2010, 6:21:11 PM7/21/10
to mpir-devel
On Jun 2, 10:18 pm, Jason Moxham <ja...@njkfrudils.plus.com> wrote:
> Hi , here are some thoughts about what we should/could do for the next
> release.
>
> 1) Upgrade yasm to the latest (easy)
>
> 2) Upgrade gnu config to the latest ( dont know how difficult that is , but it
> could fix some niggles we have , and it might simplify our specialisations.
>
> 3) Upgrade to the latest autotools/libtool ( some distros are moving over to
> the latest 2.2 , we may need/want to do the same , again dont know what this
> involves)
>

Upgrading to the new autotools fixes the issues on my machine; see
branch new-autotools.
It does mean that boxen can not be used for running autotools (i.e. make
dist); eno however is up to date. We can still build mpir on all
machines (of course), but if you are going to run autotools, you need
the 2.2 (not 1.5) versions.
There is a minor problem that yasm fails make check on some machines,
and I will test (full test) it on the whole of skynet etc before
putting it into trunk.

The update should enable a parallel make check , see item 8 below

Jason

unread,
Jul 21, 2010, 8:30:52 PM7/21/10
to mpir-...@googlegroups.com

With the new autotools we can do a make check in parallel,
e.g. on eno:

without
real 1m8.886s
user 0m50.322s
sys 0m6.525s

with

real 0m26.063s
user 0m50.829s
sys 0m7.377s

The yasm tests are not parallel so we don't show all the benefits, but for
cygwin or solaris this will show a much better gain.

jason

unread,
Jul 22, 2010, 10:53:49 AM7/22/10
to mpir-devel
I have tested the new autotools branch with our full mpir test
script, and the only issue over trunk is that make check fails for yasm on
taurus and eno ONLY (releases do not run make check for yasm); this is
a minor thing and can be sorted out later.
In trunk I have managed to break the fat build on sextus (netburst)
ONLY; this is a bit of an oddity.

So I think I should push the new autotools into trunk.

Jason

Jason

unread,
Jul 22, 2010, 10:41:23 AM7/22/10
to mpir-...@googlegroups.com
On Thursday 22 July 2010 15:53:49 jason wrote:
> I have tested the new autotools branch with our full mpir test
> script , and the only issue over trunk is make check fails for yasm on
> taurus and eno ONLY (releases do not run make check for yasm), this is
> a minor thing and can be sorted out later.
> In trunk I have managed to break the fat build on sextus(netburst)
> ONLY , this is a bit of an oddity.

I've fixed that , a fat build requires asm functions , not plain C

Jason

unread,
Jul 22, 2010, 12:29:36 PM7/22/10
to mpir-...@googlegroups.com
On Thursday 22 July 2010 15:41:23 Jason wrote:
> On Thursday 22 July 2010 15:53:49 jason wrote:
> > I have tested the new autotools branch with our full mpir test
> > script , and the only issue over trunk is make check fails for yasm on
> > taurus and eno ONLY (releases do not run make check for yasm), this is
> > a minor thing and can be sorted out later.
> > In trunk I have managed to break the fat build on sextus(netburst)
> > ONLY , this is a bit of an oddity.
>
> I've fixed that , a fat build requires asm functions , not plain C
>
> > So I think I should push the new autotools into trunk.
> >

Done

At the moment we release MPIR as a tar.gz file. We could save some space and
bandwidth by releasing as a tar.bz2 file (ONLY); I can set autotools to make
it the default for "make dist". (The lzma format is even smaller, but I think
it's a bit early for that format.)

Jason

Jason

unread,
Jul 22, 2010, 3:51:16 PM7/22/10
to mpir-...@googlegroups.com

I've added a new generic mpn function (internal only)

void mpn_not (mp_ptr rp, mp_size_t n)

which is basically just an in-place mpn_com.
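
For reference, a minimal generic C sketch of the idea (an illustration of
the semantics only, not necessarily the committed MPIR code):

#include <mpir.h>   /* for mp_ptr, mp_size_t, mp_limb_t */

void
mpn_not (mp_ptr rp, mp_size_t n)
{
  mp_size_t i;
  for (i = 0; i < n; i++)
    rp[i] = ~rp[i];   /* complement each limb in place */
}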

I want to do the same for an in-place mpn_l/rshift1 but I can't think of a
catchy name for them.

Jason


Jason

unread,
Jul 22, 2010, 4:23:51 PM7/22/10
to mpir-...@googlegroups.com

How about ?
mpn_double
mpn_half

Note: these new functions can all be used unconditionally (i.e. they don't
need a HAVE_NATIVE check).
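
For illustration only, an obvious generic way to define them would be on top
of the existing mpn_lshift/mpn_rshift with a shift count of 1 (a sketch, not
necessarily how they would actually be implemented):

#include <mpir.h>

/* rp[0..n-1] *= 2, in place; returns the bit shifted out at the top */
mp_limb_t
mpn_double (mp_ptr rp, mp_size_t n)
{
  return mpn_lshift (rp, rp, n, 1);
}

/* rp[0..n-1] /= 2, in place; returns the bit shifted out at the bottom
   (held in the most significant bit of the returned limb) */
mp_limb_t
mpn_half (mp_ptr rp, mp_size_t n)
{
  return mpn_rshift (rp, rp, n, 1);
}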

>
> Jason

Jason

unread,
Jul 22, 2010, 8:17:53 PM7/22/10
to mpir-...@googlegroups.com
On Thursday 22 July 2010 17:29:36 Jason wrote:
> On Thursday 22 July 2010 15:41:23 Jason wrote:
> > On Thursday 22 July 2010 15:53:49 jason wrote:
> > > I have tested the new autotools branch with our full mpir test
> > > script , and the only issue over trunk is make check fails for yasm on
> > > taurus and eno ONLY (releases do not run make check for yasm), this is
> > > a minor thing and can be sorted out later.
> > > In trunk I have managed to break the fat build on sextus(netburst)
> > > ONLY , this is a bit of an oddity.
> >
> > I've fixed that , a fat build requires asm functions , not plain C
> >
> > > So I think I should push the new autotools into trunk.
>
> Done
>

Note: the new autotools also has

autoupdate

which updates configure.in to the latest spec. There appear to be some
windows dll updates, which may help out on the mingw platform.
Have to try it when I get my windows box back.


Jason

Jason

unread,
Jul 27, 2010, 6:09:09 AM7/27/10
to mpir-...@googlegroups.com
Hi

Just thinking about the next bit of autotools simplifications; these bits
are all interconnected in some way.

Support for fat file systems (8+3 names): e.g. we have a file mpn/dive_1.c
which gives us the function divexact_1. We already don't support fat file
systems, as we already have files with names longer than 8+3 chars, so this
is no great loss. So I propose to change the file names to match the
function names.

Some files, e.g. x86/aors_n.asm or mpn/generic/popham.c, provide two
functions, and the "decision" is made at compile time. I propose we move the
"decision" to "autotools" time.

There are lists of functions that have to be filled in in various
Makefile.am's; with the above changes we should be able to automate it, and I
think the Windows build could benefit from the code that can list the
files/functions. It would be nice if this could handle the function
prototypes in the header files as well.

I need to think about this some more; I don't want to start it, get half way
through, and realize I should have done it a different way :)

Jason

Bill Hart

unread,
Jul 27, 2010, 6:16:25 AM7/27/10
to mpir-...@googlegroups.com
On 27 July 2010 11:09, Jason <ja...@njkfrudils.plus.com> wrote:
> Hi
>
> Just thinking about the next bit of autotools simplifications , then these bits
> are all interconnected in some way.
>
> Support for fat file systems(8+3 names) , ie we have a file mpn/dive_1.c which
> gives us the function divexact_1 . We already dont support fat file systems as
> we already have files with names longer than 8+3 chars , so this is no great
> loss. So I propose to change the file names to match the function names.
>

This definitely sounds like a long overdue improvement.

> Some files ie x86/aors_n.asm or mpn/generic/popham.c provide for two functions
> , and the "decision" is made at compile time , I propose we move the
> "decision" to "autotools" time.

Do you mean have two symbolic links to the same file with different
flags for compilation?

>
> There are lists of functions that have to be filled in various Makefile.am 's ,
> with the above changes we should be able to automate it , and I think the
> Windows build could benefit from the code that can list the files/functions.
> It would nice if this could handle the function prototypes in the header files
> as well.
>

This would be nice.

> I need to think about this some more , dont want to start it and get half way
> through , and realize I should of done it a different way :)
>
> Jason
>

Jason

unread,
Jul 27, 2010, 6:31:55 AM7/27/10
to mpir-...@googlegroups.com
On Tuesday 27 July 2010 11:16:25 Bill Hart wrote:
> On 27 July 2010 11:09, Jason <ja...@njkfrudils.plus.com> wrote:
> > Hi
> >
> > Just thinking about the next bit of autotools simplifications , then
> > these bits are all interconnected in some way.
> >
> > Support for fat file systems(8+3 names) , ie we have a file mpn/dive_1.c
> > which gives us the function divexact_1 . We already dont support fat
> > file systems as we already have files with names longer than 8+3 chars ,
> > so this is no great loss. So I propose to change the file names to match
> > the function names.
>
> This definitely sounds like a long overdue improvement.
>
> > Some files ie x86/aors_n.asm or mpn/generic/popham.c provide for two
> > functions , and the "decision" is made at compile time , I propose we
> > move the "decision" to "autotools" time.
>
> Do you mean have two symbolic links to the same file with different
> flags for compilation?
>

Basically the same setup we have at the moment, but when we run autotools,
we run "our setup script" instead, which runs autotools AND "splits"
aors_n.asm into add_n.asm AND sub_n.asm. That way the build system doesn't
need the compilation FLAGS, i.e. the build system is now one file = one
function. The complications can still exist, but they are confined to our
development machines, so we could write it in python (or whatever, C?).

> > There are lists of functions that have to be filled in various
> > Makefile.am 's , with the above changes we should be able to automate it
> > , and I think the Windows build could benefit from the code that can
> > list the files/functions. It would nice if this could handle the
> > function prototypes in the header files as well.
>
> This would be nice.
>

There are of course files which can have multiple entry points, i.e.
mpn_add_n and mpn_add_nc; we would need to handle them, and I think there are
files which have a few functions in them (for tuning only?). Have to think
about that....

Jason

unread,
Aug 13, 2010, 8:34:42 AM8/13/10
to mpir-...@googlegroups.com
Hi

I'm going to start on these autotools simplifications now, and hopefully the
code is clean enough to finish it.

I appear to have my Windows box back alive and well , and after having some
trouble with installation of Windows 64 (and 32) and MSVC , I should be able
to give the Mingw64 (and 32) a go.

Jason

Jason

unread,
Aug 13, 2010, 9:52:03 AM8/13/10
to mpir-...@googlegroups.com
Hi

I've changed the files
divebyfobm1.* to divexact_byfobm1.*
dive_1.* to divexact_1.*
divebyff.* to divexact_byff.*
diveby3.* to divexact_by3c.*

and I renamed the function divexact_fobm1 to divexact_byfobm1

I have not touched any files in the build.vc* directories, but I did do the
x86w and x86_64w directories.

I've not changed the test file names to match ie we still have t-dive_byff.c
rather than t-divexact_byff.c

More to come

Jason

Jason

unread,
Aug 13, 2010, 11:15:12 AM8/13/10
to mpir-...@googlegroups.com
Hi

I changed the files
pre_divrem_1.* to preinv_divrem_1.*
pre_mod_1.* to preinv_mod_1.*
mode1o.* to modexact_1c_odd.*

and removed the autotools bumf that went with it

This nearly completes the removal of the old fat file system support; there
are a few little bits left, but they are not worth doing at the moment as we
may want to change those bits anyway later.

Jason

Bill Hart

unread,
Aug 13, 2010, 11:28:44 AM8/13/10
to mpir-...@googlegroups.com
It's worth changing the documentation for this too (in the doc/devel
directory I think, in the file configuration or something like that).
That's the one I refer to when adding files to MPIR, so it should be
kept up-to-date.

Bill.

Cactus

unread,
Aug 13, 2010, 12:13:59 PM8/13/10
to mpir-devel
I've updated the Visual Studio 2010 builds to account for these
changes and
tested the nehalem library build. I have not tested the other builds
but I
would be surprised if they didn't work.

I have also updated the Visual Studio 2008 builds in a way that I
think
will work but I no longer have Visual Studio 2008 installed so I have
not
tested these at all.

If people want to continue using the Visual Studio 2008 build files,
we will need a volunteer to maintain them.

Brian

Jason

unread,
Aug 13, 2010, 12:29:53 PM8/13/10
to mpir-...@googlegroups.com

I can test VS2008 , as that is all that I have , and if the changes are simple
enough I can maintain them , but I'm not at all familiar with MSVC , and I
don't want to be :(

Jason

Jason

unread,
Aug 13, 2010, 3:41:04 PM8/13/10
to mpir-...@googlegroups.com
I've changed the files to match the function names of
mpn/perfsqr.c to mpn/perfect_square_p.c
mpn/jacbase.c to mpn/jacobi_base.c
mpn/divis.c to mpn/divisible_p.c

and I've split out
mpn/dc_bdiv_qr_n.c from mpn/dc_bdiv_qr.c
mpn/dc_div_qr_n.c from mpn/dc_div_qr.c

again I've not touched the build.vc* directories. For the first 3 changes, I
know how to do it, but I don't know how to add a file under MSVC.

I don't think I'll do any more (17 more files in the mpn dir to do), as the
vs2008 build will be completely broken otherwise.

Jason

Cactus

unread,
Aug 13, 2010, 5:42:05 PM8/13/10
to mpir-devel


On Aug 13, 8:41 pm, Jason <ja...@njkfrudils.plus.com> wrote:
> I've changed the files to match the function names of
> mpn/perfsqr.c to mpn/perfect_square_p.c
> mpn/jacbase.c to mpn/jacobi_base.c
> mpn/divis.c to mpn/divisible_p.c
>
> and I've split out
> mpn/dc_bdiv_qr_n.c from mpn/dc_bdiv_qr.c
> mpn/dc_div_qr_n.c from mpn/dc_div_qr.c
>
> again I've not touched the build.vc* directorys . For the first 3 changes , I
> know how to do it , but I dont know how to add a file under NSVC .

Hi Jason,

This is much harder than a name change and I might not even be able to
do it now that I don't have Visual Studio 2008 installed.

I can explain the steps but it is intricate and I would not want to do
this until you (or someone else) agrees to take on the Visual Studio
2008 ongoing support and maintenance task.

The Visual Studio 2008 build definitely won't work now and I won't do
any further work on it as it is too time consuming and tedious to do
both this and the 2010 builds.

Brian

Cactus

unread,
Aug 13, 2010, 6:48:08 PM8/13/10
to mpir-devel
I have made the changes to the Visual Studio 2008 build files that I
think are needed to add all the recent changes.

But I have not tested anything.

Brian

jason

unread,
Aug 13, 2010, 7:15:05 PM8/13/10
to mpir-...@googlegroups.com
Thanks, I'll test it now.

I don't think I can agree to take it on, as using a gui is completely
unfamiliar to me, and I won't have the time.
I'm going to look at a cmd line build for it though.
On a similar note, I've just installed mingw64, but for some reason it still
thinks it's 32-bit.
Ha, just worked it out. Of COURSE, long is 32-bit; have to try long long.

Jason


jason

unread,
Aug 13, 2010, 7:31:59 PM8/13/10
to mpir-...@googlegroups.com
On vs2008 configure && make gave us this

prebuild failed , I'll fix my batch file to use the new method

and in make check
Build failure for mpn.divebyff
Build failure for mpn.divebyfobm1

this was on a nehalem

make clean needs to be updated to cope with the new directory structure, as
it leaves a lot of files around.

jason

unread,
Aug 13, 2010, 8:13:00 PM8/13/10
to mpir-...@googlegroups.com
----- Original Message -----
From: "jason" <ja...@njkfrudils.plus.com>
To: <mpir-...@googlegroups.com>
Sent: Saturday, August 14, 2010 12:31 AM
Subject: Re: [mpir-devel] Re: MPIR 2.2

> On vs2008 configure && make gave us this
>
> prebuild failed , I'll fix my batch file to use the new method
>

Done


> and in make check
> Build failure for mpn.divebyff
> Build failure for mpn.divebyfobm1
>

Looks like t-NAME.c must match mpn/NAME.c for your MSVC builds

I've not bothered with this in linux , YET


> this was on a nehalem
>
> make clean needs to be update to cope with new directory structure as it
> leaves a lots of files around
>
> Jason
>
>

On the 64-bit mingw, I'll have to change the autotools to get it to accept a
long long build.

jason

unread,
Aug 13, 2010, 9:41:47 PM8/13/10
to mpir-...@googlegroups.com
Hi Brian
your new changes work , make check passes

cheers

To get mingw64 to work I need to go through the code so that long long int
gets used when we are using mingw64 as well as win64 MSVC.

I'm assuming
_MSC_VER defined <=> using MSVC
_WIN64 defined <=> 64bit
_WIN32 defined <=> 32bit

mingw64 defines _WIN64 but not _MSC_VER , as you would expect


Thanks

jason

unread,
Aug 13, 2010, 11:02:42 PM8/13/10
to mpir-...@googlegroups.com
Hi Brian

I notice that for win64 you have a script gen_mpir_h.bat which defines
LONG_LONG_LIMB 1.

Is there any reason you did it this way, rather than somewhere before
line 194 in gmp-h.in:

#ifdef _WIN64
#define _LONG_LONG_LIMB 1
#endif

because for the mingw64 I need to set it anyway

Cactus

unread,
Aug 14, 2010, 3:21:29 AM8/14/10
to mpir-devel


On Aug 14, 2:41 am, "jason" <ja...@njkfrudils.plus.com> wrote:
> Hi Brian
> your new changes work , make check passes
>
> cheers
>
> To get mingw64 to work I need to go thru the code so that the long long int
> get used when we are using mingw64 as well as win64 MSVC
>
> I'm assuming
> _MSC_VER  defined <=> using MSVC
> _WIN64  defined <=> 64bit

Yes.

> _WIN32 defined <=> 32bit

_WIN32 is defined for both 32 and 64 bit builds so it cannot be used
to detect 32-bit builds.
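
So a 32-bit Windows build has to be detected as "_WIN32 but not _WIN64",
e.g. (a sketch only):

#if defined (_WIN64)
/* 64-bit Windows, whether MSVC or mingw64 */
#elif defined (_WIN32)
/* 32-bit Windows */
#endif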

Brian

Cactus

unread,
Aug 14, 2010, 4:00:18 AM8/14/10
to mpir-devel


On Aug 14, 4:02 am, "jason" <ja...@njkfrudils.plus.com> wrote:
> Hi Brian
>
> I notice that for win64 you have a scipt gen_mpir_h.bat which define's
> LONG_LONG_LIMB 1
>
> Is there any reason you did it this way , rather than at somewhere before
> line194 in gmp-h.in
> #ifdef _WIN64
> #define _LONG_LONG_LIMB 1
> #endif
>
> because for the mingw64  I need to set it anyway.

There are several reasons for this, one being a desire to be able to
define what I need on Windows without changing files in the MPIR
distribution. This is partly historical and comes from my GMP build
at a time when I could not expect TG to allow any Windows changes into
gmp-h.in.

The second, more important, reason is that this aligns with the
intended pre-processing of gmp-h.in, which includes @symbol@ values
that are intended for substitution when gmp-h.in is used to build
gmp.h or mpir.h. I have to either remove or substitute all
these @symbol@ values, so it is natural to do the definitions during
this process (which I assume parallels what happens in the Unix/Linux
builds).
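
To illustrate the kind of substitution meant here (the symbol name is just a
placeholder, not necessarily one that appears in gmp-h.in): a template line
such as

#define SOME_SETTING @SOME_SETTING@

in gmp-h.in would become, after substitution in the generated mpir.h,

#define SOME_SETTING 1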

But it turns out that there are significant benefits for the Visual
Studio IDE, which can be much faster when such conditionals are not
present, because its intellisense database, which enables code and
symbol browsing in the IDE, background compiles both code paths of
conditionals in order to create its database (this is not quite what
it does but it would be hard to explain the full story of how it
avoids the combinatorial explosion in code paths).

But if you need it in gmp-h.in, I think my script will still work
provided all the associated @symbol@ elements in gmp-h.in are removed.

Brian

Cactus

unread,
Aug 14, 2010, 5:03:09 AM8/14/10
to mpir-devel
In fact, provided your definition occurs before line 60 (I think), all
my script will do is to add a redundant define since it is guarded.

If you wish to add it later, all it needs is a guard to avoid multiple
definitions.

But I assume that the Unix/Linux builds have to process gmp-h.in to
produce mpir.h including the relevant substitutions --- shouldn't the
mingw64 build do the same?

Brian

Jason

unread,
Aug 14, 2010, 6:02:18 AM8/14/10
to mpir-...@googlegroups.com
On Saturday 14 August 2010 10:03:09 Cactus wrote:
> On Aug 14, 9:00 am, Cactus <rieman...@gmail.com> wrote:
> > On Aug 14, 4:02 am, "jason" <ja...@njkfrudils.plus.com> wrote:
> > > Hi Brian
> > >
> > > I notice that for win64 you have a scipt gen_mpir_h.bat which define's
> > > LONG_LONG_LIMB 1
> > >
> > > Is there any reason you did it this way , rather than at somewhere
> > > before line194 in gmp-h.in
> > > #ifdef _WIN64
> > > #define _LONG_LONG_LIMB 1
> > > #endif
> > >
> > > because for the mingw64 I need to set it anyway.
> >
> > There are several reasons for this, one being a desire to be able to
> > define what I need on Windows without changing files in the MPIR
> > distribution. This is partly historical and comes from my GMP build
> > at a time when I could not expect TG to allow any Windows changes into
> > gmp_h.in.
> >
> > The second, more important, reason is that this aligns with the
> > intended pre-processing of gmp-h.in, which includes @symbol@ values
> > that are intended for substitution when gmp-h.in is used to build to
> > build gmp.h or mpir.h. I have to either remove or substitute all
> > these @symbol@ values so it is natural to do the definitions during
> > this process (whiich I assume parallels what happens in the Unix/Linux
> > builds).
> >

pretty much

> > But it turns out that the are significant benefits for the Visual
> > Studio IDE, that can be much faster when such conditionals are not
> > present because its intellisense database, which enables code and
> > symbol browsing in the IDE, background compiles both code paths for
> > conditionals in order to create its database (this is not quite what
> > it does but it would be hard to explain the full story of how it
> > avoids the combinatorial explosion in code paths).
> >
> > But if you need it in gmp-h.in, I think my script will still work
> > provided all the associated @symbol@ elements in gmp-h.in are removed.
> >
> > Brian
>

There is another place it is used.
When running configure (this is before we know how to fill in all these
@symbols@), the configure script compiles a short C program to test the size
of an mp_limb_t, i.e. output from configure (linux 64-bit):
checking for assembler byte directive... .byte
checking how to define a 32-bit word... .long
checking if .align assembly directive is logarithmic... no
checking if the .align directive accepts an 0x90 fill in .text... yes
checking size of unsigned short... 2
checking size of unsigned... 4
checking size of unsigned long... 8
checking size of mp_limb_t... 8 #######################
creating config.m4
configure: creating ./config.status
config.status: creating Makefile
config.status: creating mpf/Makefile

I'm not sure it is strictly necessary, but for the mo we need it. As I rip
autotools to bits, it may not survive.

To do this it sets the define __GMP_WITHIN_CONFIGURE and just includes
gmp-h.in.

So I can guard it with the above define, which is not set by MSVC. Although I
may need to set it again for the build; I'll have to try it to find out.
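
A sketch of the guarded definition being discussed (assumed form only,
subject to trying it out):

#if defined (_WIN64) && ! defined (__GMP_WITHIN_CONFIGURE)
#define _LONG_LONG_LIMB 1
#endif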

jason

unread,
Aug 14, 2010, 7:03:40 AM8/14/10
to mpir-devel
The whole business of defining LONG_LONG_LIMB is a bad idea. What I
mean is that good programming is to hide information: once we define
mp_limb_t (as whatever) in mpir.h, then the rest of mpir's code, and
any programs built on libmpir, should not need to know whether it is a
long or long long or whatever. If they want the size, use
GMP_LIMB_BITS.
There are only two places where this may be a problem.
Constants can need LL instead of L, but proper use of the CNST_LIMB(x)
macro covers this.
printf("%ld", (mp_limb_t)x); needs to use the proper gmp_printf instead.
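
For example (a sketch in library-internal style: CNST_LIMB comes from the
internal gmp-impl.h header, and gmp_printf's %M length modifier prints an
mp_limb_t whatever its underlying type):

#include <stdio.h>
#include "gmp-impl.h"   /* internal header: provides CNST_LIMB */

static void
show_top_bit (void)
{
  mp_limb_t x = CNST_LIMB (1) << (GMP_LIMB_BITS - 1); /* correct L/LL suffix either way */
  gmp_printf ("%Mx\n", x);  /* rather than printf ("%lx", ...), which assumes long */
}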

We can still define LONG_LONG_LIMB (after the fact) for all those
badly written libraries out there that depend on it, but I think
internally we should not depend on it.

I think the only place a long long limb is used in anger is Win64, which is
the only modern ABI that needs it.

Anyway , to get mingw64 working , I will stick to how we do it for the
moment.

Jason

Cactus

unread,
Aug 14, 2010, 7:33:08 AM8/14/10
to mpir-devel
I agree, but this is just one of many examples of GMP's lack of
consistency and haphazard overall structure.

And even considering _LONG_LONG_LIMB alone, you can choose between:

#if _LONG_LONG_LIMB
#ifdef _LONG_LONG_LIMB
#if defined _LONG_LONG_LIMB

and other variations in its extensive use throughout the code.

We can go through and remove these by using definitions based on
GMP_LIMB_BITS (along the lines of the sketch below) - I'll help if you want
to do this.
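
Purely as an illustration of the direction (the macro name is invented for
the example , it is not existing MPIR code) - key the conditional on the
limb width rather than on its C type:

#if GMP_LIMB_BITS == 64
#define EXAMPLE_HIGH_BIT CNST_LIMB(0x8000000000000000)
#else  /* GMP_LIMB_BITS == 32 */
#define EXAMPLE_HIGH_BIT CNST_LIMB(0x80000000)
#endif

or , where no conditional is needed at all:

#define EXAMPLE_HIGH_BIT (CNST_LIMB(1) << (GMP_LIMB_BITS - 1))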

Brian

jason

unread,
Aug 14, 2010, 7:36:58 AM8/14/10
to mpir-...@googlegroups.com
trunk now builds under mingw64 (C only) (ie
./configure --build=none-pc-mingw32)
It passes make check (except for t-locale , which fails on MSVC as well)
I've tested vs2008 , and all is OK , I can't imagine any problems with
vs2010

I'll change the t-locale test so that it is skipped for WIN64

There are some cosmetic changes that need to be done.
config.guess reports x86_64-pc-mingw32 , ie it doesn't detect the cpu ; the
pc bit should be w64 and mingw32 should be mingw64 , and an upgrade to GNU
config may fix some of these.
make speed,tune fails
I've only tried a static lib so far , not tried make try , etc......

Jason



jason

unread,
Aug 14, 2010, 8:35:22 AM8/14/10
to mpir-...@googlegroups.com
Well perhaps I wont disable the t-locale test , it actually passes under a
shared lib build . Further I notice that make check can be run on a shared
lib ; I dont know what caused this to happen as we used to fail make check
under mingw32 with a shared lib , perhaps it was the upgrade in autotools ,
so we may be able to run a make check on the 32bit one.


Cactus

unread,
Aug 14, 2010, 8:55:56 AM8/14/10
to mpir-devel
The auxiliary programs (speed, try and tune) all build on Windows x64
so something must be different on mingw64.

Now onto the Windows assembler with mingw64!

As far as I can tell mingw64 will work with the stack unwinding
stuff. Unfortunately YASM dwarf2 debug output fails with GDB so I am
not sure about this as I have not been able to trace any assembler
code.

But my AES code uses prologues and epilogues and it works fine with
mingw64.

Brian



Please discuss any necessary changes to the files in mpn\x86w and
mpn\x86_64w

jason

unread,
Aug 14, 2010, 11:43:26 AM8/14/10
to mpir-...@googlegroups.com


Yeah , we need to change all the *.asm files in x86_64w to *.as , to match
unix where *.asm goes to m4/gas and *.as goes to yasm , is this a problem?

We need to change the include path for yasm_mac.inc in all the *.asm files
in x86_64w to something I will work out in a minute , is this a problem?

Then we will get a compile , will it work ???????
I've also changed the cpuid asm code in the config.guess script to work with
both ABI's , as you can not have two separate ones because (unlike 64 versus 32
bit) you cannot guarantee one will always fail.
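
For illustration only - this is not the actual config.guess probe - something
along these lines builds under both ABIs , because GCC extended asm leaves the
argument passing to the compiler:

#include <stdio.h>
#include <string.h>

static void
cpuid (unsigned level, unsigned *a, unsigned *b, unsigned *c, unsigned *d)
{
  __asm__ volatile ("cpuid"
                    : "=a" (*a), "=b" (*b), "=c" (*c), "=d" (*d)
                    : "a" (level), "c" (0));
}

int
main (void)
{
  unsigned a, b, c, d;
  char vendor[13];

  cpuid (0, &a, &b, &c, &d);    /* leaf 0: vendor string comes back in ebx, edx, ecx */
  memcpy (vendor + 0, &b, 4);
  memcpy (vendor + 4, &d, 4);
  memcpy (vendor + 8, &c, 4);
  vendor[12] = '\0';
  printf ("%s\n", vendor);      /* eg GenuineIntel or AuthenticAMD */
  return 0;
}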


Thanks
Jason

jason

unread,
Aug 14, 2010, 11:53:25 AM8/14/10
to mpir-...@googlegroups.com

----- Original Message -----
From: "jason" <ja...@njkfrudils.plus.com>
To: <mpir-...@googlegroups.com>
Sent: Saturday, August 14, 2010 4:43 PM
Subject: Re: [mpir-devel] Re: MPIR 2.2


>

Yeah , they need to all use the same path ; I notice the path changes (to get
to the same directory) . I would really like the same path as the linux one ,
ie the current directory .
We could also move the windows yasm_mac.inc from mpn/x86_64w/ to the trunk
along with the linux ones.

I also have to switch in the windows yasm_mac as at the moment it's still
using the linux ones.


> Then we will get a compile , will it work ???????
> I've also changed the cpuid asm code in the config.guess script to work
> with both ABI's as you can not have two separate ones as (unlike 64 verses
> 32 bit) you cannot guaratee one will fail always
>
>
> Thanks
> Jason

jason

unread,
Aug 14, 2010, 11:56:33 AM8/14/10
to mpir-...@googlegroups.com

Actually if the definitions in the linux yasm_mac.inc and the windows
yasm_mac.inc don't overlap then we could just have one big one.
The same goes for the fat build.

Cactus

unread,
Aug 14, 2010, 1:39:50 PM8/14/10
to mpir-devel


On Aug 14, 4:53 pm, "jason" <ja...@njkfrudils.plus.com> wrote:

Hi Jason,

The assembler extension on Windows is 'asm' and changing this would be
a big issue for me with a high cost.

I would either have to change all my assembler code (much more than
MPIR) to use the 'as' extension (which would make my code unique on
Windows since pretty well everyone uses 'asm') or I would have to have
special build customisation files for MPIR, which would be a high cost
in maintenance terms.

This is not a path I would want to go down so I suggest that you just
copy the Windows stuff into separate directories for use in mingw64.
This would give you the freedom to do what you want without changing
the Visual Studio builds.

Brian

Jason

unread,
Aug 14, 2010, 2:07:58 PM8/14/10
to mpir-...@googlegroups.com

OK , I asked in case it was easy.
As we have a pre-distro step (ie autoreconf) every time we add a file or
change configure anyway , I can just add it to that so that the "user" never
needs to know.
Perhaps even better , the sym links that configure makes can just "drop the m" .
But I still HAVE to change the include path , as it is taken from where the
sym link is , not the original file location. In a pre-distro step I can easily
automate it and keep it legible , unlike if we did it at build time , where we
have to comply with everything that has been done before on every machine ,
ever.
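
A hedged sketch of the sort of pre-distro rename meant here (the path and
script are assumptions , not the actual MPIR tooling):

# give each windows .asm file an .as alias , ie "drop the m"
for f in mpn/x86_64w/*.asm; do
    ln -sf "$(basename "$f")" "${f%.asm}.as"
done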

I have to change the object file format for yasm as for some reason it does not
default to x64 , this is easy , and I've got to link to x86_64w/yasm_mac.inc
I think it will work then , it should compile anyway :)

Is there much difference between x86w and x86 , they use the same ABI , I
wonder , we could put this into the pre-distro step , it would save
maintenance , and the code base would be notionally smaller.


Jason

Jason

unread,
Aug 14, 2010, 2:23:21 PM8/14/10
to mpir-...@googlegroups.com

I say notionally because , our svn is a bit different to most , if you get the
svn of yasm or pari for example , then you can not just build it with the
usual configure && make , you need to do what I have been calling the "pre-
distro" step , ie for yasm you have to run .autogen (which requires autotools)
and for pari you need bison and some other things . Their advantage is that
only user generated files are in the svn , so it keeps it small. Our advantage
is that the typical user can get it and build it just like a release. Our
disadvantage is that a lot of machine generated files make the svn bigger and
dont truly reflect how simple our code is :)


>
> Jason

Cactus

unread,
Aug 14, 2010, 3:04:09 PM8/14/10
to mpir-devel

Hi Jason

It would be nice to keep all the files together - including
yasm_mac.inc - in either the mpn\x86_64w or mpn\x86w directories as I then
know which directories I have to manage.

I can easily avoid the directory navigation within the asm files by
adding a YASM include path of '..\..\mpn\x86_64w' for x64 and
'..\..\mpn\x86w' for win32 in the Visual Studio invocation of YASM.

Each file would then simply have

%include 'yasm_mac.inc'

as its include.

I assume that you could then either (a) add a relative include path to
YASM to navigate to the 'mpn\x86_64w' or 'mpn\x86w' directories from
wherever the current directory is, or (b) copy yasm_mac.inc in the
prebuild step from one of these directories to where you want it.
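
For illustration , the YASM invocation would then look something like this
(the exact flags , paths and file name are illustrative assumptions , not the
actual project settings):

yasm -f win64 -I ..\..\mpn\x86_64w -o addmul_1.obj addmul_1.asm

with each .asm file containing only the plain %include line above.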

As a first step I can take out the directory navigation if that would
help.

Brian

Cactus

unread,
Aug 14, 2010, 3:26:21 PM8/14/10
to mpir-devel


On Aug 14, 7:07 pm, Jason <ja...@njkfrudils.plus.com> wrote:
I suspect they are significantly different. The x86w files are not
derived in any direct sense from those in x86 but come from my
original GMP port many years ago. Most are, I think, translations of
now very old GMP gas assembler code to Intel format for YASM but I
recall doing some original coding and finding and correcting errors in
the GMP assembler.

Probably the biggest difference is that the x86w files do not use m4,
which is not a Windows tool so I hand edited all this crap out (I
REALLY hate m4, which is just about the worst macro processor I have
ever encountered).

It might not be too hard to use the m4 outputs from the x86 files in
Visual Studio. But invoking m4 as part of the Visual Studio build
would be problematic as m4 is not normally available on Windows (I am
in no hurry to change this :-)

Brian

degski

unread,
Aug 19, 2010, 4:50:09 AM8/19/10
to mpir-...@googlegroups.com
Cactus:

> I've updated the Visual Studio 2010 builds to account for these
> changes and tested the nehalem library build. I have not tested
> the other builds but I would be surprised if they didn't work.
>
> I have also updated the Visual Studio 2008 builds in a way that I
> think will work but I no longer have Visual Studio 2008 installed
> so I have not tested these at all.
>
> If people want to continue using the Visual Studio 2008 build files,
> we will need a volunteer to maintain them.

Isn't this move to VS2010 (dropping support for VS2008 altogether, not
exactly old, I would say) a bit premature?

For the moment, for instance, the 'Intel C++ Compiler' does not "yet"
(it's said it's coming, but when? Will it?) integrate with VS2010.

Are there any real advantages?


degski

--
Eric Schmidt, the chief executive of Google, has issued a stark
warning over the amount of personal data people leave on the internet
and suggested that many of them will be forced one day to change their
names in order to escape their cyber past.

The Independent, 18th August 2010

Cactus

unread,
Aug 19, 2010, 5:29:15 AM8/19/10
to mpir-devel


On Aug 19, 9:50 am, degski <deg...@gmail.com> wrote:
> Cactus:
>
> > I've updated the Visual Studio 2010 builds to account for these
> > changes and tested the nehalem library build.  I have not tested
> > the other builds but I would be surprised if they didn't work.
>
> > I have also updated the Visual Studio 2008 builds in a way that I
> > think will work but I no longer have Visual Studio 2008 installed
> > so I have not tested these at all.
>
> > If people want to continue using the Visual Studio 2008 build files,
> > we will need a volunteer to maintain them.
>
> Isn't this move to VS2010 (dropping support for VS2008 altogether, not
> exactly old, I would say) a bit premature?

Hi Degski,

In an ideal world, yes, but we don't have a volunteer to maintain them
and I don't have the time to do it.

> For the moment, for instance, the 'Intel C++ Compiler' does not "yet"
> (it's said it's coming, but when? Will it?) integrate with VS2010.
>
> Are there any real advantages?

The cost of maintaining both Visual Studio 2008 and Visual Studio 2010
builds is nearly twice that of maintaining the Visual Studio 2010
build alone because the Microsoft 2008 -> 2010 project conversion
tools don't handle the MPIR conversion without significant manual
intervention (it is a complex build with a lot of assembler code).

I am happy to provide advice for anyone who is willing to maintain the
2008 build. But it may involve quite a lot of work since some major
code simplifications are in progress.

Brian

degski

unread,
Aug 19, 2010, 6:15:12 AM8/19/10
to mpir-...@googlegroups.com
Hi Cactus,

> The cost of maintaining both Visual Studio 2008 and Visual Studio 2010
> builds is nearly twice that of maintaining the Visual Studio 2010
> build alone because the Microsoft 2008 -> 2010 project conversion
> tools don't handle the MPIR conversion without significant manual
> intervention (it is a complex build with a lot of assembler code).

And in 2012, the problem will raise its ugly head again! I think one
(sh)/(c)ould question whether an nmake-based approach (or one using one of
the other open source build tools Mr. Hart mentioned a while back) for
building MPIR on Windows platforms wouldn't be easier to maintain going
into the future.

I'm to a small extent (purely the Windows build stuff) involved in the
SWI-Prolog compiler project (www.swi-prolog.org). The build is also
quite intricate and is script based using nmake. The required changes
from VS2008 to VS2010 were minimal (two if I remember correctly and
they were more aesthetic than anything else).

Cheers


degski

Cactus

unread,
Aug 19, 2010, 6:51:17 AM8/19/10
to mpir-devel


On Aug 19, 11:15 am, degski <deg...@gmail.com> wrote:
> Hi Cactus,
>
> > The cost of maintaining both Visual Studio 2008 and Visual Studio 2010
> > builds is nearly twice that of maintaining the Visual Studio 2010
> > build alone because the Microsoft 2008 -> 2010 project conversion
> > tools don't handle the MPIR  conversion without significant manual
> > intervention (it is a complex build with a lot of assembler code).
>
> And in 2012, the problem will raise it's ugly head again! I think one
> (sh)/(c)ould question whether a nmake (or one of the other build (open
> source) tools Mr. Hart mentioned a while back) approach for building
> MPIR on Windows platforms isn't an approach, that will be more/easier
> maintainable going into the future.

The real problem here is that nobody is willing to put in the effort
involved in maintaining the Visual Studio 2008 build.

I don't have any problem in maintaining the build for the current
released version of Visual Studio so if there is a 2012 version, and I
am still around, it will almost certainly be supported.

> I'm to a small extent (purely the Windows build stuff) involved in the
> SWI-Prolog compiler project (www.swi-prolog.org). The build is also
> quite intricate and is script based using nmake. The required changes
> from VS2008 to VS2010 were minimal (two if I remember correctly and
> they were more aesthetic than anything else).

It's essentially the same problem.

There is strong demand for a 'make' based MPIR build on Windows but,
again, nobody has volunteered to develop this.

Brian

degski

unread,
Aug 19, 2010, 7:20:27 AM8/19/10
to mpir-...@googlegroups.com
Hi Cactus,

> The real problem here is that nobody is willing to put in the effort
> involved in maintaining the Visual Studio 2008 build.

Maybe that should read "...is that nobody who feels qualified is willing
to...". I would not mind chipping in, but as that says, do I think I can
deliver?

> Its essentially the same problem.

> There is strong demand for a 'make' based MPIR build on Windows but,
> again, nobody has volunteered to develop this.

Well, yes, but once realized, maintainability will go up a lot and the
move to VS2012 or whatever can be smooth and painless.

I guess it would require somebody with a good understanding of the
build process required, who could give input to identifying the
difficulties (I mean, put on "paper") in creating this Windows build
process. In the SWI-Prolog build process, the main difficulty is/was
dealing with the different build environments, once defined and
(pre-)configuration automated, it works quite well.

Like you say, it seems most agree that the project approach is hard to
maintain going forward and that "There is strong demand for a 'make'
based MPIR build on Windows".

Cheers

Bill Hart

unread,
Aug 19, 2010, 10:08:48 AM8/19/10
to mpir-...@googlegroups.com
Yes, there is strong demand for a 'make' based solution on Windows,
but using MSVC as the compiler.

If you would like to give it a go, I know lots of people would be very
appreciative. Jason has put in a kind of configure/make type thing for
Windows. So it is there already to some extent.

Of course for some people, maintaining the visual IDE solutions is
absolutely imperative, and Brian has been doing this for the latest
version of MSVC for a long time. So that will still be needed. It also
makes writing a 'make' based solution about 1000 times easier.

Bill.

Jason

unread,
Aug 27, 2010, 11:18:48 AM8/27/10
to mpir-...@googlegroups.com
Hi

The mingw64 build mostly works now , but there are a few minor problems still
to sort out. In the x86_64w directory we should mirror the x86_64 directory so
we can have builds for the atom , netburst etc . This should not involve any
assembler conversions as so far I dont think there is any specific code for
these cpu's. The fat build doesn't work yet , and we have duplicate symbols
defined on the shared lib build. The t-locale test fails , and I suspect the
mingw32 build is broken , as our config.guess cant distinguish between mingw32
and mingw64.

Jason

Jason

unread,
Aug 27, 2010, 2:25:27 PM8/27/10
to mpir-...@googlegroups.com
Hi

Hopefully updating configfsf.guess and configfsf.sub will help with the
recognition of mingw64 and possibly some other systems where we dont test.


diff config-5ac8187868dd8955d5051dfb4ff56e69abe80dbd/config.guess
mpir/trunk/configfsf.guess
6c6
< timestamp='2004-03-12'
---
> timestamp='2004-10-07'
20c20
< # Foundation, Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA.
---
> # Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
662c662,663
< if echo __LP64__ | (CCOPTS= $CC_FOR_BUILD -E -) | grep __LP64__ >/dev/null
---
> echo "__LP64__" > $dummy.c
> if (CCOPTS= $CC_FOR_BUILD -E $dummy.c) | grep __LP64__ >/dev/null

This shows we have made no significant changes to configfsf.guess and I can just
replace it with the latest version , which has 6 years !!!! of changes.

configfsf.sub looks more difficult

Jason

Jason

unread,
Aug 27, 2010, 3:19:35 PM8/27/10
to mpir-...@googlegroups.com
Doing the same for configfsf.sub we get


diff config-5ac8187868dd8955d5051dfb4ff56e69abe80dbd/config.sub
mpir/trunk/configfsf.sub
4a5,6
> #
> # Copyright 2008 William Hart
10a13,14
> # In particular, this version of the file has been modified for the MPIR
> # program.
24,25c28,29
< # Foundation, Inc., 59 Temple Place - Suite 330,
< # Boston, MA 02111-1307, USA.
---
> # Foundation, Inc., 51 Franklin Street, Fifth Floor,
> # Boston, MA 02110-1301, USA.
303c307,308
< | clipper-* | cydra-* \
---
> | core2-* \
> | clipper-* | cydra-* \
330c335,336
< | none-* | np1-* | nv1-* | ns16k-* | ns32k-* \
---
> | nocona-* \
> | none-* | np1-* | nv1-* | ns16k-* | ns32k-* \
332,334c338,342
< | pdp10-* | pdp11-* | pj-* | pjl-* | pn-* | power-* \
< | powerpc-* | powerpc64-* | powerpc64le-* | powerpcle-* | ppcbe-* \
< | pyramid-* \
---
> | pdp10-* | pdp11-* | pj-* | pjl-* | pn-* | power-* \
> | pentium4-* \
> | powerpc-* | powerpc64-* | powerpc64le-* | powerpcle-* | ppcbe-* \
> | prescott-* \
> | pyramid-* \
444c452,455
< cray | j90)
---
> core-*)
> basic_machine=i686-`echo $basic_machine | sed 's/^[^-]*-//'`
> ;;
> cray | j90)
793c804
< pentiumpro | p6 | 6x86 | athlon | athlon_*)
---
> pentiumpro | p6 | 6x86 | athlon | k7 | k7_* |athlon_*)
799,801d809
< pentium4)
< basic_machine=i786-pc
< ;;
805c813
< pentiumpro-* | p6-* | 6x86-* | athlon-*)
---
> pentiumpro-* | p6-* | 6x86-* | athlon-* | k7-*)
811,813d818
< pentium4-*)
< basic_machine=i786-`echo $basic_machine | sed 's/^[^-]*-//'`
< ;;
1102c1107
< *-unknown)
---
> *-unknown | *-pc | *-apple)


You can see that except for the very last diff , the only changes are that
specific cpu models have been added , this is the wrong place to add them , and
anyway some other models are missing and we know it works on all of them. So
the only thing we have to worry about is the "unknown | pc | apple " bit.
They dont have this anyway in the latest , so we may need to add it back in.
The pc bit was added in svn rev 1805 following this thread

http://groups.google.com/group/mpir-devel/browse_thread/thread/eeafdd119660cad9/6ecdea0122067d44?lnk=gst&q=*-pc+allready#6ecdea0122067d44

Which was a configure failure on a 32bit ASUS EE PC with atom cpu. I'll leave
out the change for the moment , and see if it still works. If it doesn't then
we should push this change upstream.

Jason

Bill Hart

unread,
Aug 27, 2010, 6:20:55 PM8/27/10
to mpir-...@googlegroups.com
I remember adding some of these thinking that we just needed them to
"pass through". Basically we were using names that it was unfamiliar
with. So it would fail. Adding them here allowed our specialised names
to pass through. That was the reasoning anyhow.

Bill.

Jason

unread,
Aug 27, 2010, 7:22:38 PM8/27/10
to mpir-...@googlegroups.com
Yeah , that's what I thought , but it must have been something else that was
broken , as for example nehalem and atom are not in there and we have no
problems building them. I suppose it could be some broken OS effect.

I think it goes like this for example
configfsf.guess is run to get x86_64-unknown--slackware-linux
configfsf.sub is run to canonicalize it to x86_64-unknown-linux-gnu
config.guess is run to refine the cpu to nehalem-unknown-linux-gnu

So it shouldn't see the cpu type.

Jason
