I have the following piece of code, running on a IIsi with System 7 Tune-Up 1.1.1,
using THINK C (I got it last month, so I assume it is the latest version).
malloc is not returning NULL. However, when I step through the code with the
debugger, malloc returns some number like 0xFFFFF*** for both L and m.
When I examine *m or *L in the THINK C debugger, I get "Bus Error".
If I use "Go", the program halts in the for loop (accessing L[0])
with a bus error.
Is there a bug in malloc in THINK C?
#define MSIZE 10000
unsigned int *m;
char **L;
int i;
m = (unsigned *)malloc((unsigned)MSIZE*sizeof(unsigned int));
assert(m!=NULL);
L = (char **)malloc((unsigned)MSIZE*sizeof(char *));
assert(L!=NULL);
for(i=0;i<MSIZE;i++) L[i]=NULL;
-------------------------------------------------------------------------------
Homayoon Akhiani "Turning Ideas into ... Reality"
Digital Equipment Corporation "Alpha, The New Beginning"
77 Reed Rd. Hudson, MA 01701 "All Rights Reserved. Copyright(c)1992"
Email: akh...@ricks.enet.dec.com "It is me speaking, not my company"
-------------------------------------------------------------------------------
If you don't include the header <stdlib.h> in your program, it will still
compile, but malloc will behave just as you describe: it passes back a
non-zero (but garbage) pointer that then bus-errors. This is mentioned in
the manual, but it still seems like weird behavior to me.
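For what it's worth, here is a minimal sketch of the fix; the only change
that matters is the #include:

#include <stdlib.h>   /* declares void *malloc(size_t); without this,
                         THINK C assumes malloc takes and returns a
                         16-bit int, and the pointer comes back mangled */

#define MSIZE 10000

int main(void)
{
    unsigned int *m = (unsigned int *)malloc(MSIZE * sizeof(unsigned int));
    if (m == NULL)
        return 1;     /* a genuine out-of-memory failure */
    m[0] = 42;        /* safe: the pointer survived intact */
    free(m);
    return 0;
}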
-Mike
*****************************************
Mike Bell email: be...@apple.com
68000 High Performance Software Group
MS 60-CS
Apple Computer, Inc.
20525 Mariani Ave.
Cupertino, CA 95014
*****************************************
--- end include text ---
One of the most annoying (and non-portable) features of C is that
unprototyped functions default to int for all input and output
parameters. The size of int is not defined, but it is usually 32 bits
on workstations and 16 bits for most PC implementations, including
THINK C. Pointers on a 68000, however, are 32 bits regardless.
This means code running on a 32-bit workstation will usually get away
with passing pointers to subroutines without declaring those functions'
parameters. Unfortunately, it will not work under THINK C, which assumes
16 bits are passed to malloc unless malloc is declared to take a size_t
(= pointer-sized = 32-bit) parameter. The compiler also needs to know
that malloc is supposed to return a void *; otherwise it assumes an
int (16 bits).
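Spelled out, these are the two declarations from <stdlib.h> that matter
here; a sketch, assuming the usual ANSI headers:

#include <stddef.h>                 /* for size_t */

void *malloc(size_t size);          /* 32-bit argument, pointer result */
void free(void *ptr);

int main(void)
{
    char *p = (char *)malloc((size_t)40000);  /* 40000 needs 32 bits */
    if (p != NULL)
        free(p);
    return 0;
}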
The moral of this is: don't expect any C program to work in THINK C
unless you have checked 'check pointer types' and 'require prototypes'
in your preferences section. These options prevent successful compilation
if any required header file is missing.
If you port a program from Unix, you will probably find that any
%d and %x specifiers in printf and scanf calls need to be changed
to %ld and %lx, and that occurrences of int need to be changed to
long int where necessary.
The compiler will not warn you if the arguments to printf/scanf are
the wrong size.
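A tiny illustration of the trap, assuming 16-bit ints as in THINK C's
default configuration:

#include <stdio.h>

int main(void)
{
    long n = 100000L;
    printf("%d\n", n);    /* wrong: %d pops only 2 bytes, prints garbage
                             and misaligns any arguments that follow */
    printf("%ld\n", n);   /* right: %ld matches the 4-byte long passed */
    return 0;
}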
To test the portability of a program (or the lack of it) using GNU C,
you can use gcc -mshort -Wall; this will generally cause the compilation
to fail and/or the executable not to work, unless the code is carefully
written.
Things would be much nicer if THINK C understood that an int is assumed
to be 32 bits by most programmers, whatever the natural size of 68000
code ought to be ...
>If you port a program from Unix, you will probably find that any
>%d and %x specifiers in printf and scanf calls need to be changed
>to %ld and %lx, and that occurrences of int need to be changed to
>long int where necessary.
>
>Things would be much nicer if THINK C understood that an int is assumed
>to be 32 bits by most programmers, whatever the natural size of 68000
>code ought to be ...
Use THINK C 5.0.x -- it allows either 2-byte or 4-byte ints,
programmer's choice.
--
Matthew T. Russotto russ...@eng.umd.edu russ...@wam.umd.edu
Some news readers expect "Disclaimer:" here.
Just say NO to police searches and seizures. Make them use force.
(not responsible for bodily harm resulting from following above advice)
> Things would be much nicer if THINK C understood that an int is
> assumed to be 32 bits by most programmers, whatever the natural
> size of 68000 code ought to be ...
In THINK C 5.0, there's a compiler option that makes the int type 32 bits
wide. It's called "4-byte ints", and it's described on p. 193, in the
section entitled "Porting Code to THINK C".
If you're using the ANSI library, you must recompile it with the 4-byte
ints option on as well. This will let you format your 32-bit ints using
%d, etc.
-phil
--
Phil Shapiro Software Engineer
Language Products Group Symantec Corporation
Internet: ph...@cs.brandeis.edu
This is a generic problem with C, and it has been pretty well addressed by
the ANSI changes. Of course, it is still up to the developer to write
portable code. Apple and THINK have worked to remove the incompatibilities
between their compilers, and Toolbox routines are no longer declared to use
the "int" type, since it is ambiguous; they use "short" and "long" instead.
Writing portable code is difficult. For a good example of how difficult, take
a look at the book _The_Standard_C_Library_ by, I believe, Plauger (my book
is at home). The code is a good object lesson in how difficult it is.
The "natural size" for a 68000 is sort of ambiguous. Does one go with the
register size, 32 bits, or with the size of the data bus on the original
68000? On a 68000, MOVE.W is quicker than MOVE.L, but on a machine with
a 32-bit data bus (most of the Mac II line) they are (I believe) equivalent
in speed. THINK took one approach, MPW the other. THINK has now been
modified so that all the libraries can be built using 32-bit or 16-bit ints,
and works either way. The upshot of this all is that it works now, and code
is very easily transported between MPW and THINK.
I don't know about most programmers, but I think it is kind of unfair to
say that "int" is generally thought of as 32-bits. All K&R and ANSI say is
that it falls within a certain range compared to the other types. If
programmers are making assumptions about the integer size which are non-
portable, they are writing non-portable code.
I don't want to turn this into a war, I just thought I'd throw my two cents
in. Even the best of us wind up writing non-portable code at times (often
there is no good alternative). However, never using "int" helps.
Your suggestions for requiring prototypes, etc. will also help.
--
Paul Potts - po...@itl.itd.umich.edu
Un damne' descendant sans lampe,/ Au bord d'un gouffre dont l'odeur
Trahit l'humide profondeur,/ D'e'ternels escaliers sans rampe...
-Baudelaire on DOS/Windows programming
>One of the most annoying (and non-portable) features of C is that
>unprototyped functions default to int for all input and output
>parameters. The size of int is not defined, but it is usually 32 bits
>on workstations and 16 bits for most PC implementations, including
>THINK C. Pointers on a 68000, however, are 32 bits regardless.
>This means code running on a 32-bit workstation will usually get away
>with passing pointers to subroutines without declaring those functions'
>parameters. Unfortunately, it will not work under THINK C, which assumes
>16 bits are passed to malloc unless malloc is declared to take a size_t
>(= pointer-sized = 32-bit) parameter. The compiler also needs to know
>that malloc is supposed to return a void *; otherwise it assumes an
>int (16 bits).
THANK YOU! Now I know I'm not crazy. I had a similar type of problem a couple
of weeks ago. People told me I just didn't understand C well enough. I
thought it was behaving differently on the Mac....
>If you port a program from Unix, you will probably find that any
>%d and %x specifiers in printf and scanf calls need to be changed
>to %ld and %lx, and that occurrences of int need to be changed to
>long int where necessary.
>The compiler will not warn you if the arguments to printf/scanf are
>the wrong size.
I had this exact problem and I did what this poster described. That did
indeed fix what I was porting. I think the compiler SHOULD warn you!
>To test the portability of a program (or the lack of it) using GNU C,
>you can use gcc -mshort -Wall; this will generally cause the compilation
>to fail and/or the executable not to work, unless the code is carefully
>written.
>Things would be much nicer if THINK C understood that an int is assumed
>to be 32 bits by most programmers, whatever the natural size of 68000
>code ought to be ...
Hear, Hear!
--
---------------------------------------
"I will not barf unless i'm sick"
- A Chalkboard from "The Simpsons"
In general, the compiler *can't* warn you about this. Whether or not
you think it's appropriate for the compiler, given:

void foo (void)
{
    long n = 17L;
    printf ("n = %d", n);
}

to scan the string "n = %d" and determine that since the first format
specifier calls for an "int", but the first argument is a "long", it
should generate an error (and personally I think that's inappropriate),
consider the following case:

void foo (char *s)
{
    long n = 17L;
    printf (s, n);
}

This is certainly legal and meaningful C code, but there's no way the
compiler can check this. (Of course, it *could* emit code to
dynamically check the arguments at run-time -- not in *my* C compiler,
thank you.)
On the other hand, the alternative, seen in languages like Modula-2,
is to require a separate procedure call for each item to be output, so
that typechecking may be done (along the lines of "WriteString (s);
WriteLong (n);" etc.). You buys your compiler and you takes your
choice.
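In C terms, that alternative amounts to wrappers like these (my sketch;
the names just echo Modula-2's):

#include <stdio.h>

static void WriteString(const char *s) { fputs(s, stdout); }
static void WriteLong(long n)          { printf("%ld", n); }

int main(void)
{
    WriteString("n = ");    /* each call is fully typechecked...    */
    WriteLong(17L);         /* ...because each takes one known type */
    return 0;
}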
Typechecking arguments for functions, when the type and number of
their parameters have not been previously declared (as with "printf"),
is a problem which has not been solved by any programming language I'm
aware of.
-- Russell S. Finn
rsf...@lcs.mit.edu
I spent last Christmas vacation doing a close analysis of code generated by
THINK C, MPW C, and GNU C. MPW was about 10% better than GNU because GNU didn't
optimize long branches into short branches. THINK was about 10% better than MPW
because it used 16-bit ints instead of 32-bit ints. THINK code runs a little
faster, too, because you can use 680x0 instructions for math instead of library
routines. So using 16-bit integers is definitely a win.
Personally, I don't think that most C programmers assume an int is the same as a
long. *I* don't, and I know that everyone I work with is drilled to not make
that mistake, either. Anyone who _does_ make that assumption is quickly burnt.
--
Keith Rollin
Phantom Programmer
Taligent, Inc.
Argh, who assumes anything about ints? They could be anything between an
8-bit micro and a Cray. I never use ints, nope; (un)signed longs and shorts
are the way to go. Even better, typecast everything so it's easier to port
code.
Cheers,
Kent
Actually, someone might ask: ints are the most efficient non-floating-point
type the compiler generates, so why not make use of that? My answer: I would
rather spend less time porting than worry about performance at the initial
stage, without even knowing where the real bottlenecks are.
I hope this explains my point of view.
Cheers,
Kent
>Argh, who assumes anything about ints? They could be anything between an
>8-bit micro and a Cray. I never use ints, nope; (un)signed longs and shorts
>are the way to go. Even better, typecast everything so it's easier to port
>code.
I assume that by "typecasting" our correspondent means typedefining, as in

    typedef unsigned long ul_t;

rather than

    ptr = (char *)malloc((long) bytes);

where the casts (char *) and (long) are definitely not an increase
in portability.
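A sketch of the typedef style, assuming the names live in a
machine-dependent header you tune for each port:

/* sizes.h -- hypothetical machine-dependent header */
typedef unsigned long ul_t;   /* at least 32 bits on every target */

/* Code written against ul_t ports by editing one header, rather
   than by hunting down every int in the program. */
ul_t total_bytes(ul_t elems, ul_t sz)
{
    return elems * sz;        /* 32-bit arithmetic even where int is 16 */
}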
This was not a declaration of war, nor have I anything against Symantec,
whose products I find easy to use.
I was simply saying that one of the defects of the C language itself is
that the default type for parameters and return values of undeclared
functions is the nebulous quantity called int, which is the machine's
natural size for arithmetic and is usually 16 bits or more.
In the days when 64K was a lot, Kernighan & Ritchie would never have
imagined that a range of machines would arise with a larger addressing
range than the machine's natural word length. However, this is exactly
what has happened with Intel's range of processors and with 680x0
compilers that assume int is 16 bits for the sake of efficiency; ANSI C
addresses the issue with its size_t type.
There is a large body of C code, originally written for non-ANSI
compilers, that assumes an int can contain a pointer. If you want to
make any use of such code, it is useful to be able to force your
favourite compiler to use an appropriately large int, and I am glad to
be told that THINK C 5.0.x addresses this issue. If you can't afford the
upgrade and don't want the bother of the 'require prototypes' option,
you could change the name of malloc in the ANSI library to malloc_ and
add to <stdlib.h> the line

    #define malloc malloc_

This would cause your build to fail wherever <stdlib.h> is needed but
not included. It is the kind of approach adopted in <console.h>; applied
to the whole ANSI library, it would have irritated some users but saved
many others from compiling programs with incorrect library call
parameters.
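Concretely, the renaming trick might look like this (a sketch; malloc_
is simply whatever new name you give the library routine):

/* In the ANSI library source: rename malloc() to malloc_() and rebuild. */

/* Added to <stdlib.h>: */
#include <stddef.h>              /* for size_t */
void *malloc_(size_t size);      /* the renamed library routine */
#define malloc malloc_

/* Any file that calls malloc without including <stdlib.h> now emits a
   reference to the symbol "malloc", which no longer exists anywhere,
   so the link fails instead of silently passing a 16-bit argument. */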
Life would be even easier for beginners and hands-on-first, read-manual-
later users if the ANSI function headers were included by default, in a
similar way to the precompiled <MacHeaders> file, but then we might run
the risk of dispelling the mystique that surrounds the C programming
community ...
Has anyone else run into a problem with the member function from
OOPSDEBUG going into an (apparently) infinite loop? This is
definitely compiler-option related. Yesterday, I was working on a
numerically intensive OOP program (i.e. one that should run about 5x
faster with the 6888x option checked, but uses TCL to run the user
interface), and member was semi-consistently going into an infinite
loop. In my code, member would always hang (before it got into the
numerical code), but not always in the same place (it depended on where
breakpoints were set!).
The options I was using were to the effect of:
68881
68020
sizeof(int) == 4
sizeof(double) == 8
native mode floating point
Re-compiling ANSI and OOPSDEBUG with these options did not help.
Resetting the compiler settings to the "factory settings" made the
program work, but meant I couldn't use 68881 :-(
Probably the next step is to systematically figure out which compiler
options cause the problem.
Speaking of odd things with THINKC's (5.0x) compiler settings...
Has anyone else run into a problem with the member function from
OOPSDEBUG going into an (apparently) infinite loop? This is
definitely compiler-option related. Yesterday, I was working on a
numerically intensive OOP program (... text deleted ...)
The options I was using were to the effect of:
68881
68020
sizeof(int) == 4
sizeof(double) == 8
native mode floating point
Re-compiling ANSI and OOPSDEBUG with these options did not help.
Resetting the compiler settings to the "factory settings" made the
program work, but meant I couldn't use 68881 :-(
Probably the next step is to systematically figure out which compiler
options cause the problem.
I too had this same problem a while back, but was only using the ANSI
library. My code was also numerically intensive, and used malloc/free
to allocate temporary storage matrices. When I tried using the PROFILER
and various combinations of 68000/68020/68881 and native floating-point
formats, my code would crash at apparently random places.
I am fairly certain that I didn't even come close to a low-memory
situation or munge the malloc/free arguments and array indices.
Any other data points, or solutions out there?
--
mb...@athena.mit.edu
Michael Bradshaw
"Home of the lame .sig file"
Things would be nicer still if C programmers didn't assume any such
stupid thing, and would code right in the first place: using forward
declarations for function return values, longs instead of ints for things
that need to be larger than 16 bits, and typedefs for things that have
to be particular sizes (so they can be tuned in a machine-dependent
header file). And these days, prototypes. That way, we wouldn't be
having all this trouble in the first place, and we could still get
small, efficient code on smaller machines that would run no slower
on the bigger ones (unlike the other way around).
The biggest example of how to do this wrong was a few years ago, when
I tried to get PISTOL (Portably Implemented Stack Oriented Language --
a FORTH-like language written in C) ported. It was written on a
16-bit-int, 16-bit-pointer Z-80 platform, and I was trying to put it on
a 16-bit-int, 32-bit-pointer 68000 system. Yech. It was the most
non-portable C code I'd ever seen. Buried deeply within the code were
all sorts of assumptions: that ints and pointers were the same size,
that said size was 16 bits, and that said entities were little-endian.
There were more. I gave up. It truly was that thing we've all heard
about: assembly language code that happened to be written in C.
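Typical of the breed (my sketch, not PISTOL's actual code): stuffing a
pointer into an int. It happens to work where the two are the same size,
and silently truncates everywhere else.

#include <stdio.h>

int main(void)
{
    char buf[] = "abc";
    int cell = (int)(char *)buf;   /* assumes an int can hold a pointer */
    char *p = (char *)cell;        /* truncated on a 16-bit-int 68000   */
    printf("%s\n", p);             /* fine on the Z-80, crashes here    */
    return 0;
}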
+----------------+
! II CCCCCC ! Jim Cathey
! II SSSSCC ! ISC-Bunker Ramo
! II CC ! TAF-C8; Spokane, WA 99220
! IISSSS CC ! UUCP: uunet!isc-br!jimc (ji...@isc-br.isc-br.com)
! II CCCCCC ! (509) 927-5757
+----------------+
"PC's --- the junk bonds of the computer industry"
Now, I am fighting with fopen...
basically, I trace the program to the fopen call and then the machine hangs!
While not relating to my original problem (hangs in member), I have
run into various problems with the ANSI library as delivered by
Symantec with THINKC 5.x. The two which "broke the camel's back"
were:
1. [f,s]printf is essentially useless with [l]g, [l]f, [l]e
format items (h format item is non-ANSI).
2. Garbage results in return values of int and double from
ANSI functions.
Both of these problems are corrected by recompiling the source for
ANSI with "useful" compiler options (e.g. 68020, 68881, native mode
floating point, ...). Symantec is nice in that for some libraries,
they give you the source, so people can make their own ANSI lib that
uses their typical compiler options.
In addition, various people in comp.sys.mac.programmer have reported
problems with malloc. For Mac purposes, I essentially don't use
malloc because of relocation problems when calls are made across
segments (remember, ANSI is c.28k bytes and usually gets stuck into
its own segment!). Where I do use malloc, it is for *VERY* temporary
storage, that is not expected to persist to the next function call.
I have not had problems using malloc in this way, and use Handles for
data I need to keep around for any length of time.
Another big problem with malloc for numerical programming is that
many C compilers (including THINKC) won't malloc more than 32K bytes
of space (that's 4K doubles, which is not enough for numerical analysis).
Typically, this fails by allocating 32K bytes and then letting you
overwrite memory beyond the end of the 32K (which causes bus errors
if you're lucky, and really strange behavior if you're not).
-- Mike Webb
At first, I was totally confused by this, but the reference to
recompiling ANSI suggests to me that you were probably trying to use
the supplied ANSI library, which uses 2-byte ints, with code that uses
4-byte ints, and that when you recompiled ANSI, you set the 4-byte int
option on -- which is exactly what the manual tells you to do in this
case. I suspect that the compiler options you mention actually have
little to do with this (in fact, if you look at the ANSI source code,
you'll find it takes great pains to work correctly no matter what the
68881 settings are).
Tell me about it! I'm working on a statistics program that starts off by
declaring several 32 by 32 by 9 arrays of floats. Think C stops the compile at
the first one because it's too big! I normally like to write and debug my
programs on the Mac and then move them to the Sparc for actual running, but now
I'm having to do everything on the Sparc (EMACS, yuck!). Are there any plans to
cure this problem in Think C in any future version? In my opinion, this 32K
segment limit makes Think an inappropriate development environment for any
serious numerical work. Does MPW have this same limitation?
Elliotte Rusty Harold Department of Applied Mathematics
elh...@m.njit.edu New Jersey Institute of Technology
erh...@tesla.njit.edu Newark, NJ 07103
The anecdotes about re-compiling ANSI came from when I had just upgraded
from THINKC 4.x to THINKC 5.x. In 4.x, different ANSI libs were supplied,
while 5.x supplies only one. The 4.x project compiled, but showed the
strange behavior discussed (which sounded like a lot of other THINKC 5.x
problems with ANSI functions that have been discussed on the net over the
last few weeks, and in e-mail relating to this post).
The original thread is that with the indicated compiler options
(specifically 4-byte ints, 68881, 8-byte doubles, native-mode FP, 68020)
the member function (used in the bowels of TCL) goes into an infinite loop
*EVEN AFTER* ANSI and OOPSDebug (which contains member) were re-compiled.
The program only works when "unacceptable" (read: no 68881) compile
options are used. Also, "infinite loop" should probably be taken with a
grain of salt; I suspect it is really a for loop to O(2**31), but I don't
have the patience to let the program prove this. I suspect that this is
an "int which should be short" problem in TCL or some of the OOP run-time
support functions (e.g. member). Why it only shows up in one of my
projects (which is very similar to others -- read: identical at the OOP
level) is mysterious.
Mike Webb
It would, if it existed. (Should this be in the FAQ?)
You have two choices for getting more than 32K of data at a time. The
first is to turn on "Far DATA." This requires ThC 5. The second is to
dynamically allocate it.
Mike Webb is wrong; you can easily allocate more than 32K of data with
malloc. Look at the source if you don't believe me; allocations larger
than a certain size (15K) simply get passed to _NewPtr, and _NewPtr can
get you many megabytes.
Mike's problem is that he's not using function prototypes. malloc
accepts a parameter of type size_t, which is typedef'd as unsigned long.
Without that prototype, the code assumes it's passing an int, which has
nasty and not-entirely-defined results. Turn "prototype checking" on.
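A quick sketch of a well-over-32K allocation working, provided the
prototype is in scope:

#include <stdlib.h>   /* the prototype sends the argument out as a size_t */

int main(void)
{
    /* 10000 doubles = 80000 bytes, well past the mythical 32K limit */
    double *big = (double *)malloc(10000L * sizeof(double));
    if (big == NULL)
        return 1;
    big[9999] = 3.14;  /* the far end of the block is really there */
    free(big);
    return 0;
}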
Elliotte, I think you're confusing static with dynamic allocation.
Declaring an array at the global level in C is static allocation, and
doesn't involve malloc. Turning on Far DATA will assist you here.
If you want to dynamically allocate it, simply declare the array as a
pointer (in your case, "float (*myArray)[32][9]") and assign the
result of a malloc() to it (in your case,
"myArray = malloc(sizeof(float [32][32][9]))").
--
Jamie McCarthy Internet: k04...@kzoo.edu AppleLink: j.mccarthy
"Son, I am able," she said, "though you scare me." "Watch," said I,
"beloved," I said, "watch me scare you though." Said she, "able am I, son."
R.
--
-----------------------------------------------------------------------
Rich Siegel Internet: sie...@world.std.com
Software Engineer, Quickdraw Group
GCC Technologies
Why is this continually propagated in this newsgroup? malloc lets you
allocate more than 32K perfectly fine, provided you use it correctly.
Be sure to include <stdlib.h> and/or cast the size parameter for
malloc to (long).
> Tell me about it! I'm working on a statistics program that starts off by
>declaring several 32 by 32 by 9 arrays of floats. Think C stops the compile at
>the first one because it's too big! I normally like to write and debug my
>programs on the Mac and then move them to the Sparc for actual running, but now
>I'm having to do everything on the Sparc (EMACS, Yuck!) Are there any plans to
>cure this problem in Think C in any future version? In my opinion this 32K
>segment limit makes Think an inappropriate development environment for any
>serious numerical work.
This is a small problem, but using dynamic arrays allocated with
malloc does away with it. I've done "serious" numerical work on the
Mac for years with no problem. My programs run unaltered on the Mac
(in THINK C) and on other workstations.
David Fry f...@math.harvard.EDU
Department of Mathematics f...@huma1.bitnet
Harvard University ...!harvard!huma1!fry
Cambridge, MA 02138
>>
>> Another big problem with malloc for numerical programming is that
>> many C compilers (including THINKC) won't malloc more than 32K bytes
>> of space (that's 4K doubles, which is not enough for numerical analysis).
>> Typically, this fails by allocating 32K bytes and then letting you
>> overwrite memory beyond the end of the 32K (which causes bus errors
>> if you're lucky, and really strange behavior if you're not).
>>
>> -- Mike Webb
> Tell me about it! I'm working on a statistics program that starts off by
>declaring several 32 by 32 by 9 arrays of floats. Think C stops the compile at
>the first one because it's too big! I normally like to write and debug my
>programs on the Mac and then move them to the Sparc for actual running, but now
>I'm having to do everything on the Sparc (EMACS, Yuck!) Are there any plans to
>cure this problem in Think C in any future version? In my opinion this 32K
>segment limit makes Think an inappropriate development environment for any
>serious numerical work. Does MPW have this same limitation?
Code is supplied below, tested under THINK C 5.0.2. I've used similar
array methods on SPARC and PC (yuch!!!).
The code allocates a 3D array on the heap (the method does scale and
seems to be portable). Also, you get to use [][][]-style addressing. I
threw in the "f4" array just to see if it would allocate; it does. I did
not check to see if I could run off the end of memory... as it stands,
"f4" cannot use [][][] addressing because the compiler does not know the
x and y dimensions to build an access function. I know there is a cast
that will condition the access, but I've not tried that yet.
So, as things stand, this code should use a bit more memory than the
flat access space the compiler would generate, and is possibly
slightly more efficient at run-time. Please (if you have time) try it out
and let me know if it hits the spot (or not, as the case may be).
--alen
al...@crash.cts.com
------Cut here----8<-----8=-----8<-----
#include <stdio.h>
#include <stdlib.h>

/* calloc() wrapper that exits on failure instead of returning NULL */
void *ccalloc(size_t elems, size_t sz)
{
    void *mptr = calloc(elems, sz);
    if (mptr == NULL) {
        fprintf(stderr, "calloc failed to allocate %ld of %ld\n",
                (long)elems, (long)sz);
        exit(1);
    }
    return mptr;
}

/* Build an x-by-y-by-z array on the heap: two levels of pointer
 * tables, so arr[i][j][k] addressing works for any element size.
 */
void ***three_d_array(size_t x, size_t y, size_t z, size_t el_size)
{
    size_t i, j;
    void ***arr = (void ***)ccalloc(x, sizeof(void **));
    for (i = 0; i < x; i++) {
        arr[i] = (void **)ccalloc(y, sizeof(void *));
        for (j = 0; j < y; j++)
            arr[i][j] = ccalloc(z, el_size);
    }
    return arr;
}

#define X 32
#define Y 32
#define Z 9

int main(void)
{
    double ***f1;
    float ***f2;
    int x, y, z;
    /* one flat block, just to show a >32K allocation succeeds */
    float *f4 = (float *)ccalloc((size_t)X * Y * Z, sizeof(float));

    f1 = (double ***)three_d_array(X, Y, Z, sizeof(double));
    /* another (unused) for good measure!! */
    f2 = (float ***)three_d_array(X, Y, Z, sizeof(float));

    /* check that each element is addressable... I used double to
     * strain the system a bit more
     */
    fprintf(stdout, "sizeof(float:double) = %ld:%ld\n",
            (long)sizeof(float), (long)sizeof(double));
    for (x = 0; x < X; x++)
        for (y = 0; y < Y; y++)
            for (z = 0; z < Z; z++)
                f1[x][y][z] = (double)x * (double)y * (double)z;
    for (x = 0; x < X; x++) {
        for (y = 0; y < Y; y++) {
            for (z = 0; z < Z; z++)
                fprintf(stdout, "%f ", f1[x][y][z]);
            fprintf(stdout, "\n");
        }
        fprintf(stdout, "\n\n");
    }
    return 0;
}
-------end
In addition, various people in comp.sys.mac.programmer have
reported problems with malloc.
This is true, but none of these problems has been due to a bug in
malloc(). The code for malloc() hasn't been changed in 3 years, since
it was shipped with THINK C 4.0. I expect that all of the programmer
groups on Usenet see a lot of traffic regarding problems with dynamic
allocation (except the LISP ones, of course :-).
For Mac purposes, I essentially don't use malloc because of
relocation problems when calls are made across segments (remember,
ANSI is c.28k bytes and usually gets stuck into its own segment!).
It sounds like you're confusing Mac segments with DOS segment
pointers. There isn't any problem with sharing malloc'd pointers
across Mac CODE segments.
The problem that you ran into originally is, I believe, caused by a
bug in the header file oops.h. When you compile with the 4-byte ints
option on, the Class ID parameter to member() is a short int. However,
member uses a varargs prototype, so this argument is widened to a
4-byte int. This can be fixed by modifying the oops.h header file by
replacing the line:
char __member(...);
with the lines:
#if !__option(int_4)
char __member(...);
#else
char __member(void *, short);
#endif
This is not an official fix, but it should work correctly.
You can't avoid it anyway; malloc calls NewPtr.
Cheers,
Kent