1) Upgrade yasm to the latest (easy).
2) Upgrade GNU config to the latest (I don't know how difficult that is, but it
could fix some niggles we have, and it might simplify our specialisations).
3) Upgrade to the latest autotools/libtool (some distros are moving over to
the latest 2.2, and we may need/want to do the same; again, I don't know what
this involves).
4) A few assembler functions to add.
5) Move the demos out of the library onto the web page.
6) Get rid of ancient CPUs/compilers (we will still work under C, if anyone
cares); this would simplify configure a bit: cray, pyramid,
z8000, list, clipper, ...
7) Make configure run faster; I'm sure we can remove some of the tests (I
can't believe they are still needed) and/or share the test results with yasm.
8) Make "make check" run in parallel; it can be done.
9) Some of the changes we have made have not been finished; finish them.
10) Split configure into two, i.e. standard and MPIR-specific; this should
make maintenance easier. This one is fairly ambitious :)
11) Drop support for building/running on FAT file systems (i.e. 8.3 file
names).
12) A simple command-line build for Windows (not dependent on vcproj files).
13) Fix some known bugs.
14) When we update stuff there are many places where you have to fill in the
same info; make it automatic (autotools can do this, it's just not been set
up that way).
Some of these are trickier than others, but my aim is to simplify the system
(the non-computational parts of it).
Bets now being taken on what % will get done :)
Thoughts?
Jason
I have updated yasm to the latest svn. Unfortunately it was not as easy as I
thought: I was going to take a diff of the yasm svn revisions and just apply
that to our yasm, so that we carry only a small set of changes (about 1MB),
but it didn't apply cleanly. There were a lot of whitespace differences, and
make check failed (due to the whitespace, as they use text comparison for
make check, which is now unreliable). The main reason for this is that many of
the files in yasm are auto-generated, but I couldn't get it to regenerate the
broken files.
Anyway, in the end I took our differences and applied them to yasm svn. This
works fine, and I set up a "script" and a diff ("mpir/yasm.diff") so we can
easily upgrade at any time. The only downside is that all the yasm files are
considered new, so anyone wishing to get the svn (or see what has changed) has
a larger download. (We are not interested in what has changed in yasm though;
it is just yasm svn rev 2334, plus our changes in mpir/yasm.diff, then
autogen'ed.)
I have removed all support for the following CPUs:
a29k
clipper
i960
i960mx
m88000 or m88k
m88110
ns32000 or ns32k
pyr or pyramid
z8000
z8000x
Note: gcc did NOT support them, so clearly they are dead. They could still
possibly be used with a generic C build, but you would need an old enough
compiler, which would probably break elsewhere.
I have removed the demos from the library and I will put them on the webpage
once I get them to work outside of the library; there are some dependencies
on undocumented internals.
There was also an emacs "profile" to help with editing m4'ed asm files. This
was in the mpn directory! I could put it on the website, but I don't think it
is worth it.
There are some more old CPUs for which it may be good to drop all (or
explicit) support; I'll post a list later with some details for feedback.
It would also simplify things if we could drop support for IRIX, which is
different enough to complicate autotools. I will look more closely into
whether this is a good idea or not.
Jason
These CPUs also have no support from gcc, so again I think we should
certainly remove them:
gmicro
i860
ibm032 (also known as 032 or ROMP)
uxp or xp (Fujitsu 32-bit vector supercomputer)
Note: these are only entries in longlong.h, but as we want to get rid of that
file someday, all the CPU types in it have to go somewhere or be removed.
Jason
These have now been removed.
Trac ticket 295: I removed the old gcd stuff and associated functions. There
is one further point mentioned:
"Also, the function mpn_ngcd (in ngcd.h) seems to be a duplicate of mpn_gcd. I
think we can probably get rid of it."
Here is the diff between the two:
2c2
< mpn_ngcd (mp_ptr gp, mp_ptr ap, mp_size_t an, mp_ptr bp, mp_size_t n)
---
> mpn_gcd (mp_ptr gp, mp_ptr ap, mp_size_t an, mp_ptr bp, mp_size_t n)
11a12
> {
12a14
> }
14c16
< init_scratch = MPN_NGCD_MATRIX_INIT_ITCH ((n+1)/2);
---
> init_scratch = MPN_NGCD_MATRIX_INIT_ITCH (n-P_SIZE(n));
20a23,25
> if (scratch < MPN_NGCD_LEHMER_ITCH(n)) /* Space needed by Lehmer GCD */
> scratch = MPN_NGCD_LEHMER_ITCH(n);
>
48c53
< mp_size_t p = n/2;
---
> mp_size_t p = P_SIZE(n);
71,72c76,81
< #if 0
< /* FIXME: We may want to use lehmer on some systems. */
---
>
> if (ap[n-1] < bp[n-1])
> MP_PTR_SWAP (ap, bp);
>
> if (BELOW_THRESHOLD (n, GCD_THRESHOLD))
> {
77,80c86
< #endif
<
< if (ap[n-1] < bp[n-1])
< MP_PTR_SWAP (ap, bp);
---
> }
So it looks like gcd and ngcd are the same, except that gcd has been updated
to the latest thresholds, I think?
Jason
Now that we have removed the old CPUs, here are some operating systems I
propose we remove explicit support for:
IRIX (for MIPS)
OSF/Tru64 (for Alpha)
SunOS <= version 4 (version 5 is called Solaris, i.e. what runs on
fulvia/mark.skynet)
DJGPP (DOS)
OS/2
Unicos (Cray's Unix)
pw32 (POSIX on win32)
Comments?
Jason
gcc-4.5.0 has obsoleted support for the old POWER arch (aka RIOS, RIOS2), so
that is yet another dead directory.
Jason
I have removed all traces of the above CPUs and I will start to chop out the
old OSes. Note: this does not mean that we will not run under these OSes;
it just means that any special conditions for them are removed. Some of these
special conditions are for broken installs or very old versions which were
missing certain crucial header files etc, so later versions may work, but I
would not count on it, and if they don't then tough :)
Thinking about what other changes I would like to make to simplify things a
bit, I realised that most of the other changes would involve Brian making
similar changes to the vcproj files. A few of the simpler name changes I am
sure I can do with a simple text replacement in the vcproj files, but most of
the other stuff would need Brian's involvement. So it seems that the best
course forward, to avoid this duplication of effort, is to get the Windows
port following the Unix one automatically.
The justification for simplifying our build system is that 50% of errors are
build-system related, and unfortunately autotools is a very poor design: it
requires you to understand its internals in order to use it.
1) Write our own build system, starting with x86 and working through the
other major CPUs/OSes one at a time. This is TOO much work; we would not be
taking advantage of other people's work on "boring" stuff, and MPIR is about
math, not build systems.
2) We could write a simple script which does a basic build, but it would make
Windows a second-class build environment unless we re-implement most features
of a make system. There are two aspects to this: 1) getting the build optimal,
and 2) being able to debug and develop on Windows.
3) We could convert to e.g. cmake, which supports Unix and Windows. This
appears to be the most attractive option, but it would take a fair amount of
time. cmake produces native vcproj files for Windows and makefiles for Linux,
so both camps would be in their NATIVE elements.
4) Get autotools to run NATIVE on Windows with MSVC. What I have in mind is
really a trick; there are two parts to it: 1) get it to use cl.exe instead of
gcc.exe, and 2) hide the fact that we are running under another shell.
autotools can run cl.exe, no problem, just like it can run cc or icc. The
options are a little different, but a script can easily take care of that (as
long as everything is one-to-one). Once we have this bit, we could get an
MSVC compile under cygwin or minGW. The next stage is to have a "hidden
install" of minGW so we can run autotools (just like we do for yasm under
Linux).
5) Just leave the system as it is.
My thoughts are these:
1) Insane.
2) The present configure.bat and make.bat emulate what it would achieve, but
it would make development on a Windows system awkward.
3) This seems like the best long-term solution, i.e. using a build system
which will handle all modern OSes, but it means a lot of work.
4) We should be able to do this with a few weeks' hacking.
5) Will keep Brian busy :)
Note: this does not address the issue that the assembler code for Linux and
Windows is different, but I don't believe this is a major obstacle at the
moment.
When I get my Windows box back, assuming they managed to fix it this time,
I will try for 4). I'm sure other projects could also benefit from this
(e.g. sage).
Jason
I will try to post all relevant changes to the list so you can get a heads
up. Some of the changes I will do when I get a spare hour or two; I wouldn't
do them otherwise as they are not terribly important, but they can make work
for you which could be seen as pretty pointless.
> > The justification for simplifying our build system is that 50% of errors
> > are build system related , and unfortunately autotools is a very poor
> > design , it requires you to understand it internals to use it.
> >
> > 1) write our own build system starting with x86 and work thru the other
> > major cpu's/OSes one at a time. This is TOO much work , we are not taking
> > advantage of other peoples work on "boring" stuff , MPIR is about math
> > not build systems.
>
> Much as I would like to see this, I agree that it would involve a huge
> effort.
>
> > 2) We could write a simple script which does a basic build , but it would
> > make windows a second class build environment unless we re-implement most
> > features of a make system.There are two aspects to this , 1) to get the
> > build optimal and 2) be able to debug and develop with windows.
>
> I am not sure about this one as I don't fully understand the
> capabilities you envisage.
>
I mean a script which would select a code path based on the CPU, and then use
cl.exe to compile and link everything in that path. The resultant library
should be just as good for the user, but for the developer it would be a
pain to use.
I have not considered fat builds.
> > 3) This seems like the best long term solution , to use a build system
> > which will handle all modern OS'es , but it means a lot of work
>
> I have never used CMAKE but it has a strong following.
I have never used it either :)
>
> > 4) We should be able to do this with a few weeks hacking
>
> Can it be done without turning Windows into a poor man's Linux?
>
> > 5) Will keep Brian busy :)
>
> That depends on what is now planned :-)
>
> > Note: this does not address the issue that the assembler code for linux
> > and windows are different , but I dont believe this to be a major
> > obstical at the moment.
>
> But the proliferation of assembler is by far the biggest task that I
> face, as it pretty well always involves a partial rewrite.
>
I had always assumed it was fairly painless, as you always manage to convert
them within a day or two :)
More tedium ahead, warning...
The pre-built file fac_ui.h will be incorporated into fac_ui.c, removing the
need to generate it. The same goes for psqr.h, and probably the other two
when I get around to it.
> However, I think we can do several things that might make the Windows
> build much simpler now. First of all, the big differences between
> Linux and Windows occur on x64. The libraries built with mingw for
> win32 work with Visual Studio and, I assume, use the assembler
> support. So I can drop Visual Studio support for win32 without much
> of a penalty. This would be a significant simplification.
>
Yep, sounds good, although wasn't there some problem with mixing them? (I
don't think this is MPIR-specific.) Trac ticket 220; I can't read it at the
mo, trac is down.
> And, now that we have published Visual Studio 2008 and 2010 support
> for MPIR 2.1.1, I can drop support for Visual Studio 2008 in future
> MPIR releases.
>
I personally think this is too soon, but I don't have to maintain them.
> Brian
All recent versions of Python (2.6, 2.7, and 3.1), and, I believe, the
next 3.2 release, are built using VS 2008. I haven't checked whether
there are any compatibility issues when using a gmpy compiled with VS
2010 with a Python compiled with VS 2008. I use mingw32 to create the
32-bit builds, so I really only need the 64-bit support.
casevh
>
> In an ideal world I would agree with you, but the cost of maintaining
> both would be very high because the automated conversion from Visual
> Studio 2008 to Visual Studio 2010 does not work well for MPIR. In
> consequence I would have to maintain both builds independently of one
> another, and this would be very costly indeed.
>
> Removing the pre-build steps will be a useful simplification. The
> other thing that might be interesting on Windows is to build only mpn
> code as a library and then have a single build project that takes this
> library and adds it to the remaining code that is the same for all x64
> architectures. I used to do it this way but I had terrible problems
> ensuring that the mpn library and the related config.h files were
> correctly associated (i.e. the HAVE_NATIVE_xxx stuff).
>
> But it would be worth trying this again as it would be a massive
> simplification if I could get it working reliably.
>
> Brian
>
> --
> You received this message because you are subscribed to the Google Groups "mpir-devel" group.
> To post to this group, send email to mpir-...@googlegroups.com.
> To unsubscribe from this group, send email to mpir-devel+...@googlegroups.com.
> For more options, visit this group at http://groups.google.com/group/mpir-devel?hl=en.
>
>
I have removed all explicit support for the following OSes:
pw32
unicos
os2
djgpp
osf/tru64
I have yet to do IRIX/SunOS, as there are quite a few simplifications that
can be made there.
I have removed the pre-built file fac_ui.h; the constants are now in the .c
file. The program that generated them is in a new directory, devel/, which
holds files for the developers only; they will not appear in any MPIR
release. So we also have yasm.diff (used for updating yasm) and setversion
(used for changing version numbers) in there.
The Windows build will need to reflect the fac_ui changes.
Jason
I have removed the pre-built files mp_bases.h and fib_table.h and incorporated
them into gmp-impl.h. For Windows, the only thing to do is that there is no
longer any need to generate them.
Jason
You are a bit premature; I have only removed half of them, the other half is
still to do. I should do it by tomorrow, so you might as well leave what you
have done for the mo. I left it because autotools didn't like it when I
removed both, so I need a closer look at it.
Thanks
Jason
Yep, they are all going :) I should finish the lot this weekend.
> Brian
>
> I assume I will still have prebuild steps
I was thinking of ripping out the path selection code next. This is the code
in configure.in (about 1000 lines) which chooses which mpn asm code to include
and creates symbolic links. We could replace it with a table and a Python(?)
script that we can share between Linux and Windows. I'm not too sure how this
would fit in with the project files, but it could enable you to do just one
project, with the CPU selection done in Python. Actually, I won't do it next.
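A table-driven version could be quite small. Here is a rough Python sketch of the idea; the table contents, directory names, and function names below are hypothetical placeholders, not the real configure.in data:

```python
import os

# Hypothetical table: CPU name -> ordered list of mpn source directories
# to search, most specific first. The ~1000 lines of configure.in logic
# would be condensed into entries like these.
MPN_PATH = {
    "nehalem": ["x86_64/nehalem", "x86_64", "generic"],
    "k8":      ["x86_64/k8", "x86_64", "generic"],
    "generic": ["generic"],
}

def select_sources(cpu, mpn_root, wanted):
    """For each wanted function, pick the first matching source file;
    the caller would then create the symbolic links (or copies)."""
    links = {}
    for func in wanted:
        for d in MPN_PATH.get(cpu, ["generic"]):
            for ext in (".asm", ".as", ".c"):
                src = os.path.join(mpn_root, d, func + ext)
                if os.path.exists(src):
                    links[func] = src
                    break
            if func in links:
                break
    return links
```

Because the table is plain data, the same file could drive both the Unix symlink step and a Windows project generator.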
Two more CPUs I propose we drop support for:
cray
These are the Cray vector machines, old supercomputers which by now must be
slower than most laptops, so I assume no one still uses them. Some have
non-IEEE floating point and 48-bit ints, so there is quite a lot of specific
code for them. The latest Cray machines are x86_64 (or slightly older
Alphas).
vax
This is the 1970s/80s minicomputer, and according to Wikipedia:
The VAX architecture was eventually superseded by RISC technology. In 1989 DEC
introduced a range of workstations and servers that ran Ultrix, the DECstation
and DECsystem respectively, based on processors that implemented the MIPS
architecture. In 1992 DEC introduced their own RISC instruction set
architecture, the Alpha AXP (later renamed Alpha), and their own Alpha-based
microprocessor, the DECchip 21064, a high performance 64-bit design capable of
running OpenVMS.
In August 2000, Compaq announced that the remaining VAX models would be
discontinued by the end of the year.[8] By 2005 all manufacturing of VAX
computers had ceased, but old systems remain in widespread use.
Jason
Trac has been offline for an hour or so
Thanks
Jason
With the new autotools we can do a make check in parallel.
E.g. on eno:
without:
real 1m8.886s
user 0m50.322s
sys 0m6.525s
with:
real 0m26.063s
user 0m50.829s
sys 0m7.377s
The yasm tests are not parallel, so we don't see the full benefit, but on
cygwin or solaris this will show a much better gain.
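For reference, the invocation is just the usual parallel make; a command sketch (the job count, and the tree being already configured, are assumptions):

```
# from an already-configured MPIR tree
make -j4 check
```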
I've fixed that; a fat build requires asm functions, not plain C.
Done.
At the moment we release MPIR as a tar.gz file. We could save some space and
bandwidth by releasing as a tar.bz2 file (ONLY); I can set autotools to make
it the default for "make dist". (The lzma format is even smaller, but I think
it's a bit early for that format.)
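As a sketch of how this could be expressed, these are standard automake options, though whether they drop into our configure.in unchanged is an assumption:

```
dnl in configure.in: make "make dist" produce only a .tar.bz2
AM_INIT_AUTOMAKE([dist-bzip2 no-dist-gzip])
```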
Jason
void mpn_not (mp_ptr rp, mp_size_t n)
which is basically just an in-place mpn_com.
I want to do the same for an in-place mpn_l/rshift1, but I can't think of a
catchy name for them.
Jason
How about:
mpn_double
mpn_half
Note: these new functions can all be used unconditionally (i.e. there is no
need to check HAVE_NATIVE).
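The semantics described above could look roughly like this in portable C. This is a sketch only: the real functions would normally be backed by assembler, the typedefs stand in for the ones in gmp.h, and the return convention for mpn_half is a guess:

```c
/* Stand-ins for the gmp.h typedefs; assuming one limb == unsigned long. */
typedef unsigned long mp_limb_t;
typedef mp_limb_t *mp_ptr;
typedef long mp_size_t;

#define LIMB_BITS ((mp_limb_t) (sizeof (mp_limb_t) * 8))

/* In-place one's complement, i.e. an in-place mpn_com. */
void mpn_not (mp_ptr rp, mp_size_t n)
{
  mp_size_t i;
  for (i = 0; i < n; i++)
    rp[i] = ~rp[i];
}

/* In-place right shift by one bit (the proposed "mpn_half").
   Returning the bit shifted out is an assumed convention. */
mp_limb_t mpn_half (mp_ptr rp, mp_size_t n)
{
  mp_limb_t lost = rp[0] & 1;
  mp_size_t i;
  for (i = 0; i < n - 1; i++)
    rp[i] = (rp[i] >> 1) | (rp[i + 1] << (LIMB_BITS - 1));
  rp[n - 1] >>= 1;
  return lost;
}
```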
>
> Jason
Note: the new autotools also has
autoupdate
which updates configure.in to the latest spec. There appear to be some
Windows DLL updates, which may help out on the mingw platform.
I'll have to try it when I get my Windows box back.
Jason
Just thinking about the next bit of autotools simplification; these bits
are all interconnected in some way.
Support for FAT file systems (8+3 names): e.g. we have a file mpn/dive_1.c
which gives us the function divexact_1. We already don't support FAT file
systems, as we already have files with names longer than 8+3 chars, so this
is no great loss. So I propose to change the file names to match the
function names.
Some files, e.g. x86/aors_n.asm or mpn/generic/popham.c, provide two
functions each, and the "decision" is made at compile time. I propose we
move the "decision" to "autotools" time.
There are lists of functions that have to be filled in in various
Makefile.am's. With the above changes we should be able to automate that,
and I think the Windows build could benefit from the code that can list the
files/functions. It would be nice if this could handle the function
prototypes in the header files as well.
I need to think about this some more; I don't want to start it, get half way
through, and realize I should have done it a different way :)
Jason
This definitely sounds like a long overdue improvement.
> Some files ie x86/aors_n.asm or mpn/generic/popham.c provide for two functions
> , and the "decision" is made at compile time , I propose we move the
> "decision" to "autotools" time.
Do you mean having two symbolic links to the same file with different
flags for compilation?
>
> There are lists of functions that have to be filled in various Makefile.am 's ,
> with the above changes we should be able to automate it , and I think the
> Windows build could benefit from the code that can list the files/functions.
> It would nice if this could handle the function prototypes in the header files
> as well.
>
This would be nice.
> I need to think about this some more , dont want to start it and get half way
> through , and realize I should of done it a different way :)
>
> Jason
>
Basically the same setup we have at the moment, but when we run autotools we
run "our setup script" instead, which runs autotools AND "splits"
aors_n.asm into add_n.asm AND sub_n.asm. That way the build system doesn't
need the compilation FLAGS, i.e. the build system is now one file = one
function. The complication can still exist, but it is confined to our
development machines, so we could write it in Python (or whatever; C?).
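As a sketch of that splitting step: the VARIANTS table and the define/include wrapper convention below are assumptions, modelled on the way multi-function asm files are selected with a define today; the real script would have to match whatever each file actually expects:

```python
import os

# Hypothetical table: multi-function asm source -> the entry points it
# provides.
VARIANTS = {
    "aors_n.asm": ["add_n", "sub_n"],
}

def split_variants(src_dir):
    """Write one small wrapper file per function for each multi-entry
    file, so the build system only ever sees one file = one function."""
    generated = []
    for src, funcs in VARIANTS.items():
        if not os.path.exists(os.path.join(src_dir, src)):
            continue
        for func in funcs:
            out = os.path.join(src_dir, func + ".asm")
            with open(out, "w") as fh:
                # The wrapper selects its entry point, then pulls in the
                # shared body via an m4 include.
                fh.write("define(`OPERATION_%s')\n" % func)
                fh.write("include(`%s')\n" % src)
            generated.append(out)
    return generated
```

A C-file analogue (popham.c and friends) would emit #define/#include wrappers instead.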
> > There are lists of functions that have to be filled in various
> > Makefile.am 's , with the above changes we should be able to automate it
> > , and I think the Windows build could benefit from the code that can
> > list the files/functions. It would nice if this could handle the
> > function prototypes in the header files as well.
>
> This would be nice.
>
There are of course files which have multiple entry points, e.g. mpn_add_n
and mpn_add_nc; we would need to handle them, and I think there are files
which have a few functions in them (for tuning only?). Have to think about
that...
I'm going to start on these autotools simplifications now, and hopefully the
code is clean enough to finish it.
I appear to have my Windows box back, alive and well, and after having some
trouble with the installation of Windows 64 (and 32) and MSVC, I should be
able to give Mingw64 (and 32) a go.
Jason
I've changed the files
divebyfobm1.* to divexact_byfobm1.*
dive_1.* to divexact_1.*
divebyff.* to divexact_byff.*
diveby3.* to divexact_by3c.*
and I renamed the function divexact_fobm1 to divexact_byfobm1.
I've not touched any files in the build.vc* directories, but I did do the
x86w and x86_64w directories.
I've not changed the test file names to match, i.e. we still have
t-dive_byff.c rather than t-divexact_byff.c.
More to come.
Jason
I changed the files
pre_divrem_1.* to preinv_divrem_1.*
pre_mod_1.* to preinv_mod_1.*
mode1o.* to modexact_1c_odd.*
and removed the autotools bumf that went with them.
This nearly completes the removal of the old FAT file system support. There
are a few little bits left, but they are not worth doing at the moment, as
we may want to change those bits anyway later.
Jason
Bill.
I can test VS2008, as that is all that I have, and if the changes are simple
enough I can maintain them, but I'm not at all familiar with MSVC, and I
don't want to be :(
Jason
and I've split out
mpn/dc_bdiv_qr_n.c from mpn/dc_bdiv_qr.c
mpn/dc_div_qr_n.c from mpn/dc_div_qr.c
Again, I've not touched the build.vc* directories. For the first 3 changes I
know how to do it, but I don't know how to add a file under MSVC.
I don't think I'll do any more (17 more files in the mpn dir to do), as the
vs2008 build will be completely broken otherwise.
Jason
I don't think I can agree to take it on, as using a GUI is completely
unfamiliar to me, and I won't have the time.
I'm going to look at a command-line build for it though.
On a similar note, I've just installed mingw64, but for some reason it still
thinks it's 32-bit.
Ha, just worked it out. Of COURSE: long is 32-bit; I'll have to try long
long.
Jason
Brian
--
prebuild failed; I'll fix my batch file to use the new method
and in make check:
Build failure for mpn.divebyff
Build failure for mpn.divebyfobm1
This was on a nehalem.
make clean needs to be updated to cope with the new directory structure, as
it leaves a lot of files around.
> On vs2008 configure && make gave us this
>
> prebuild failed , I'll fix my batch file to use the new method
>
Done
> and in make check
> Build failure for mpn.divebyff
> Build failure for mpn.divebyfobm1
>
It looks like t-NAME.c must match mpn/NAME.c for your MSVC builds.
I've not bothered with this in linux, YET.
> this was on a nehalem
>
> make clean needs to be update to cope with new directory structure as it
> leaves a lots of files around
>
> Jason
>
>
On the Mingw64 build, I'll have to change the autotools to get it to accept
a long long build.
cheers
To get mingw64 to work, I need to go through the code so that long long int
gets used when we are using mingw64 as well as win64 MSVC.
I'm assuming:
_MSC_VER defined <=> using MSVC
_WIN64 defined <=> 64-bit
_WIN32 defined <=> 32-bit
mingw64 defines _WIN64 but not _MSC_VER, as you would expect.
Thanks
I notice that for win64 you have a script gen_mpir_h.bat which defines
LONG_LONG_LIMB 1.
Is there any reason you did it this way, rather than somewhere before
line 194 in gmp-h.in:
#ifdef _WIN64
#define _LONG_LONG_LIMB 1
#endif
because for the mingw64 I need to set it anyway
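Put together with a limb typedef, the header could select the type itself; a sketch only (the real gmp-h.in gets this via build-system substitution, and the exact typedef names here are stand-ins):

```c
/* Sketch: pick the limb type in the header itself.  _WIN64 is defined by
   both 64-bit MSVC and mingw64, so one test covers both compilers. */
#if defined (_WIN64) && ! defined (_LONG_LONG_LIMB)
#  define _LONG_LONG_LIMB 1
#endif

#ifdef _LONG_LONG_LIMB
typedef unsigned long long int mp_limb_t;
#else
typedef unsigned long int mp_limb_t;
#endif
```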
pretty much
> > But it turns out that there are significant benefits for the Visual
> > Studio IDE, which can be much faster when such conditionals are not
> > present, because its intellisense database, which enables code and
> > symbol browsing in the IDE, background-compiles both code paths for
> > conditionals in order to create its database (this is not quite what
> > it does, but it would be hard to explain the full story of how it
> > avoids the combinatorial explosion in code paths).
> >
> > But if you need it in gmp-h.in, I think my script will still work
> > provided all the associated @symbol@ elements in gmp-h.in are removed.
> >
> > Brian
>
There is another place it is used.
When running configure (i.e. before we know how to fill in all these
@symbols@), the configure script compiles a short C program to test the size
of an mp_limb_t.
i.e. output from configure (linux 64bit):
checking for assembler byte directive... .byte
checking how to define a 32-bit word... .long
checking if .align assembly directive is logarithmic... no
checking if the .align directive accepts an 0x90 fill in .text... yes
checking size of unsigned short... 2
checking size of unsigned... 4
checking size of unsigned long... 8
checking size of mp_limb_t... 8 #######################
creating config.m4
configure: creating ./config.status
config.status: creating Makefile
config.status: creating mpf/Makefile
I'm not sure it is strictly necessary, but for the mo we need it. As I rip
autotools to bits, it may not survive.
To do this, configure sets the define __GMP_WITHIN_CONFIGURE and just
includes gmp-h.in, so I can guard it with that define, which is not set by
MSVC. Although I may need to set it again for the build; I'll have to try it
to find out.
I'll change the t-locale test so that it is skipped for WIN64.
There are some cosmetic changes that need to be done:
config.guess reports x86_64-pc-mingw32, i.e. it doesn't detect the CPU; the
pc bit should be w64, and mingw32 should be mingw64. An upgrade to GNU
config may fix some of these.
make speed and make tune fail.
I've only tried a static lib so far; not tried make try, etc.
Jason
----- Original Message -----
From: "jason" <ja...@njkfrudils.plus.com>
To: "mpir-devel" <mpir-...@googlegroups.com>
Sent: Saturday, August 14, 2010 12:03 PM
Subject: [mpir-devel] Re: MPIR 2.2
Jason
--
----- Original Message -----
From: "jason" <ja...@njkfrudils.plus.com>
To: <mpir-...@googlegroups.com>
Sent: Saturday, August 14, 2010 12:36 PM
Subject: Re: [mpir-devel] Re: MPIR 2.2
Brian
--
Yeah, we need to change all the *.asm files in x86_64w to *.as, to match
unix, where *.asm go to m4/gas and *.as go to yasm. Is this a problem?
We need to change the include path for yasm_mac.inc in all the *.asm files
in x86_64w to something I will work out in a minute. Is this a problem?
Then we will get a compile; will it work?
I've also changed the cpuid asm code in the config.guess script to work with
both ABIs, as you cannot have two separate ones: (unlike 64- versus 32-bit)
you cannot guarantee that one will always fail.
Thanks
Jason
Jason
>
Yeah, they need to all be the same path; I notice the path changes (to get
to the same directory).
I'd really like the same path as the linux one, i.e. the current directory.
We could also move the windows yasm_mac.inc from mpn/x86_64w/ to the trunk,
along with the linux ones.
I also have to switch in the windows yasm_mac, as at the mo it's still
using the linux one.
> Then we will get a compile , will it work ???????
> I've also changed the cpuid asm code in the config.guess script to work
> with both ABIs, as you cannot have two separate ones: (unlike 64- versus
> 32-bit) you cannot guarantee that one will always fail.
>
>
> Thanks
> Jason
Actually, if the definitions in the linux yasm_mac.inc and the windows
yasm_mac.inc don't overlap, then we could just have one big one.
The same goes for the fat build.
OK, I asked in case it was easy.
As we have a pre-distro step (i.e. autoreconf) every time we add a file or
change configure anyway, I can just add it to that, so that the "user" never
needs to know.
Perhaps even better, the symlinks that configure makes can just "drop the m".
But I still HAVE to change the include path, as it is taken from where the
symlink is, not the original file location. In a pre-distro step I can
easily automate it and keep it legible, unlike if we did it at build time,
where we would have to comply with everything that has been done before, on
every machine, ever.
I have to change the object file format for yasm, as for some reason it does
not default to x64; this is easy. I've also got to link to
x86_64w/yasm_mac.inc.
I think it will work then; it should compile anyway :)
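For the record, the object-format change is a single yasm flag; a command sketch along these lines (the file names and include path are placeholders):

```
# emit 64-bit COFF objects rather than yasm's 32-bit default, and pick
# up the Windows macro file
yasm -f win64 -I mpn/x86_64w -o add_n.obj add_n.asm
```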
Is there much difference between x86w and x86? They use the same ABI. I
wonder; we could put this into the pre-distro step. It would save
maintenance, and the code base would be notionally smaller.
Jason
I say notionally because our svn is a bit different to most. If you get the
svn of yasm or pari, for example, then you cannot just build it with the
usual configure && make; you need to do what I have been calling the "pre-
distro" step, i.e. for yasm you have to run .autogen (which requires
autotools), and for pari you need bison and some other things. Their
advantage is that only user-generated files are in the svn, so it keeps it
small. Our advantage is that the typical user can get it and build it just
like a release. Our disadvantage is that a lot of machine-generated files
make the svn bigger and don't truly reflect how simple our code is :)
>
> Jason
> I've updated the Visual Studio 2010 builds to account for these
> changes and tested the nehalem library build. I have not tested
> the other builds but I would be surprised if they didn't work.
>
> I have also updated the Visual Studio 2008 builds in a way that I
> think will work but I no longer have Visual Studio 2008 installed
> so I have not tested these at all.
>
> If people want to continue using the Visual Studio 2008 build files,
> we will need a volunteer to maintain them.
Isn't this move to VS2010 (dropping support for VS2008 altogether; not
exactly old, I would say) a bit premature?
For the moment, for instance, the Intel C++ Compiler does not "yet"
integrate with VS2010 (it's said to be coming, but when? Will it?).
Are there any real advantages?
degski
> The cost of maintaining both Visual Studio 2008 and Visual Studio 2010
> builds is nearly twice that of maintaining the Visual Studio 2010
> build alone because the Microsoft 2008 -> 2010 project conversion
> tools don't handle the MPIR conversion without significant manual
> intervention (it is a complex build with a lot of assembler code).
And in 2012 the problem will rear its ugly head again! I think one
should/could question whether an nmake approach (or one of the other open
source build tools Mr. Hart mentioned a while back) for building MPIR on
Windows platforms wouldn't be more easily maintainable going into the
future.
I'm involved to a small extent (purely the Windows build stuff) in the
SWI-Prolog compiler project (www.swi-prolog.org). Its build is also quite
intricate and is script-based, using nmake. The required changes from
VS2008 to VS2010 were minimal (two, if I remember correctly, and they were
more aesthetic than anything else).
Cheers
degski
> The real problem here is that nobody is willing to put in the effort
> involved in maintaining the Visual Studio 2008 build.
Maybe that should read "...is that nobody who feels qualified is
willing to...". I wouldn't mind chipping in, but as it says, do I
think I can deliver?
> It's essentially the same problem.
> There is strong demand for a 'make' based MPIR build on Windows but,
> again, nobody has volunteered to develop this.
Well, yes, but once realised, maintainability will go up a lot and the
move to VS2012 or whatever can be smooth and painless.
I guess it would require somebody with a good understanding of the
required build process, who could give input by identifying (I mean,
putting on "paper") the difficulties in creating this Windows build
process. In the SWI-Prolog build process, the main difficulty is/was
dealing with the different build environments; once those are defined
and (pre-)configuration is automated, it works quite well.
Like you say, it seems most agree that the project approach is hard to
maintain going forward and that "There is strong demand for a 'make'
based MPIR build on Windows".
Cheers
If you would like to give it a go, I know lots of people would be very
appreciative. Jason has put in a kind of configure/make type thing for
Windows. So it is there already to some extent.
Of course for some people, maintaining the visual IDE solutions is
absolutely imperative, and Brian has been doing this for the latest
version of MSVC for a long time. So that will still be needed. It also
makes writing a 'make' based solution about 1000 times easier.
Bill.
The mingw64 build mostly works now, but there are a few minor problems still
to sort out. In the x86_64w directory we should mirror the x86_64 directory so
we can have builds for the atom, netburst etc. This should not involve any
assembler conversions, as so far I don't think there is any specific code for
these cpus. The fat build doesn't work yet, and we have duplicate symbols
defined on the shared lib build. The T-local test fails, and the mingw32 build
is, I suspect, broken, as our config.guess can't distinguish between mingw32
and mingw64.
Jason
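One way config.guess could tell the two apart is by the cpu field of the
triple. A minimal sketch (this is my own illustration, not the actual
config.guess logic, and `classify_mingw` is a hypothetical helper name):

```shell
# Hypothetical sketch: classify a machine triple as mingw32 vs mingw64
# by looking at the cpu field of the triple.
classify_mingw() {
  case "$1" in
    x86_64-*-mingw* | amd64-*-mingw* ) echo mingw64 ;;
    i?86-*-mingw*                    ) echo mingw32 ;;
    *                                ) echo unknown ;;
  esac
}
classify_mingw x86_64-w64-mingw32   # prints mingw64
classify_mingw i686-pc-mingw32      # prints mingw32
```

The real fix is subtler because both toolchains can report a mingw32 OS
field, which is presumably why the current script gets confused.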
Hopefully updating configfsf.guess and configfsf.sub will help with the
recognition of mingw64 and possibly some other systems where we don't test.
diff config-5ac8187868dd8955d5051dfb4ff56e69abe80dbd/config.guess
mpir/trunk/configfsf.guess
6c6
< timestamp='2004-03-12'
---
> timestamp='2004-10-07'
20c20
< # Foundation, Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA.
---
> # Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301,
USA.
662c662,663
< if echo __LP64__ | (CCOPTS= $CC_FOR_BUILD -E -) | grep __LP64__
>/dev/null
---
> echo "__LP64__" > $dummy.c
> if (CCOPTS= $CC_FOR_BUILD -E $dummy.c) | grep __LP64__ >/dev/null
This shows we have made no significant changes to configfsf.guess, so I can
just replace it with the latest version, which has 6 years !!!! of changes.
configfsf.sub looks more difficult.
Jason
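The `$dummy.c` change in the diff above is the standard preprocessor probe for
a 64-bit ABI. A hedged sketch of the idea (the file name and the fallback to
`cc` are my own; the real script uses `$CC_FOR_BUILD` and its own temp files):

```shell
# Sketch of the __LP64__ probe from the config.guess diff: feed the bare
# token __LP64__ to the C preprocessor. On an LP64 target the compiler
# predefines __LP64__ (to 1), so the token is expanded away; if it
# survives preprocessing, the target is not LP64.
CC=${CC_FOR_BUILD-${CC-cc}}
echo '__LP64__' > conftest.$$.c
if ($CC -E conftest.$$.c) 2>/dev/null | grep __LP64__ >/dev/null; then
  abi=32          # token passed through untouched: not LP64
else
  abi=64          # token was expanded: LP64 target
fi
rm -f conftest.$$.c
echo "ABI=$abi"
```

The new `$dummy.c` form in the diff avoids piping a here-string into the
compiler, which some older compilers handled badly.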
diff config-5ac8187868dd8955d5051dfb4ff56e69abe80dbd/config.sub
mpir/trunk/configfsf.sub
4a5,6
> #
> # Copyright 2008 William Hart
10a13,14
> # In particular, this version of the file has been modified for the MPIR
> # program.
24,25c28,29
< # Foundation, Inc., 59 Temple Place - Suite 330,
< # Boston, MA 02111-1307, USA.
---
> # Foundation, Inc., 51 Franklin Street, Fifth Floor,
> # Boston, MA 02110-1301, USA.
303c307,308
< | clipper-* | cydra-* \
---
> | core2-* \
> | clipper-* | cydra-* \
330c335,336
< | none-* | np1-* | nv1-* | ns16k-* | ns32k-* \
---
> | nocona-* \
> | none-* | np1-* | nv1-* | ns16k-* | ns32k-* \
332,334c338,342
< | pdp10-* | pdp11-* | pj-* | pjl-* | pn-* | power-* \
< | powerpc-* | powerpc64-* | powerpc64le-* | powerpcle-* | ppcbe-* \
< | pyramid-* \
---
> | pdp10-* | pdp11-* | pj-* | pjl-* | pn-* | power-* \
> | pentium4-* \
> | powerpc-* | powerpc64-* | powerpc64le-* | powerpcle-* | ppcbe-* \
> | prescott-* \
> | pyramid-* \
444c452,455
< cray | j90)
---
> core-*)
> basic_machine=i686-`echo $basic_machine | sed 's/^[^-]*-//'`
> ;;
> cray | j90)
793c804
< pentiumpro | p6 | 6x86 | athlon | athlon_*)
---
> pentiumpro | p6 | 6x86 | athlon | k7 | k7_* |athlon_*)
799,801d809
< pentium4)
< basic_machine=i786-pc
< ;;
805c813
< pentiumpro-* | p6-* | 6x86-* | athlon-*)
---
> pentiumpro-* | p6-* | 6x86-* | athlon-* | k7-*)
811,813d818
< pentium4-*)
< basic_machine=i786-`echo $basic_machine | sed 's/^[^-]*-//'`
< ;;
1102c1107
< *-unknown)
---
> *-unknown | *-pc | *-apple)
You can see that, except for the very last diff, the only changes are that
specific cpu models have been added. This is the wrong place to add them, and
anyway some other models are missing and we know it works on all of them. So
the only thing we have to worry about is the "unknown | pc | apple" bit.
The latest version doesn't have this anyway, so we may need to add it back in.
The pc bit was added in svn rev 1805 following this thread
http://groups.google.com/group/mpir-
devel/browse_thread/thread/eeafdd119660cad9/6ecdea0122067d44?lnk=gst&q=*-
pc+allready#6ecdea0122067d44
That was a configure failure on a 32-bit ASUS EE PC with an atom cpu. I'll
leave out the change for the moment and see if it still works. If it doesn't,
then we should push this change upstream.
Jason
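The core-* rewrite added in the config.sub diff above can be checked in
isolation; the sed call strips the cpu field and prepends i686-:

```shell
# The idiom from the config.sub diff: turn an MPIR-specific cpu name
# such as core-unknown-linux-gnu into a generic i686 triple by
# stripping everything up to the first '-' and prefixing i686-.
basic_machine=core-unknown-linux-gnu
basic_machine=i686-`echo $basic_machine | sed 's/^[^-]*-//'`
echo $basic_machine   # prints i686-unknown-linux-gnu
```

The pentium4-* and prescott-* additions in the diff work the same way: they
just let those names through the big pattern lists instead of rewriting them.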
Bill.
I think it goes like this, for example:
configfsf.guess is run to get x86_64-unknown--slackware-linux
configfsf.sub is run to canonicalize it to x86_64-unknown-linux-gnu
config.guess is run to refine the cpu to nehalem-unknown-linux-gnu
So it shouldn't see the cpu type.
Jason
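The three-stage pipeline Jason describes can be sketched as a toy model (the
function names and the specific mappings below are purely illustrative; the
real scripts handle hundreds of cases):

```shell
# Toy model of the three stages: each function stands in for one script.
fsf_guess() { echo x86_64-unknown-slackware-linux; }    # raw guess
fsf_sub() {                                             # canonicalize
  case "$1" in
    *-slackware-linux) echo "${1%%-*}-unknown-linux-gnu" ;;
    *) echo "$1" ;;
  esac
}
mpir_guess() {                                          # refine the cpu
  case "$1" in
    x86_64-*) echo "nehalem-${1#*-}" ;;  # pretend cpuid said nehalem
    *) echo "$1" ;;
  esac
}
t=$(fsf_guess)
t=$(fsf_sub "$t")
t=$(mpir_guess "$t")
echo "$t"   # prints nehalem-unknown-linux-gnu
```

This illustrates why config.sub should never see the MPIR-specific cpu names:
the refinement to nehalem happens only after canonicalization is done.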