#define cast_to_largest_integral_type(value) \
((LargestIntegralType)((unsigned)(value)))
Removing the spurious (unsigned) resolves many of the warnings, but I think the root of the problem goes deeper. Why does cmockery use the non-standard LargestIntegralType and associated cast macros rather than the C standard intmax_t, uintmax_t, intptr_t, and uintptr_t?
Finally, does LargestIntegralType really need to be uintmax_t, or is it sufficient for it to be uintptr_t? I haven't read enough of the source to tell.
I'm working up a set of patches to clean up the compile on 64-bit platforms; there are a few more individual cases which for some reason don't use the cast_to_largest_integral_type() macro and hence fail on 64-bit platforms. I'll post here when I have the patches ready.
Best regards,
-Steve
--
Steve Byan <stev...@me.com>
Littleton, MA 01460
> Whatever you do, try compiling cmockery using a wide range of compilers.
As wide a range as I can manage; I don't have a Visual C++ compilation environment set up yet, but I intend to do so. I don't have access to much besides variants of gcc, but I do have a variety of 32-bit and 64-bit Unix platforms.
> The unsigned cast tells the compiler not to perform sign extension when converting to LargestIntegralType with Microsoft's Visual C compiler.
It casts the value to an unsigned int before casting it to the largest integral type, and it does that with everyone's compiler, not just Microsoft's.
What's the problem with sign-extending a signed int or whatever passed in to cast_to_largest_integral_type()? The high-order bits will get truncated when it's cast back to its proper type.
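For example, here is a minimal sketch of the round trip I have in mind (assuming a 64-bit LargestIntegralType and a two's-complement target):

#include <assert.h>

typedef unsigned long long LargestIntegralType;

int main(void)
{
    short s = -2;
    /* Sign extension widens -2 to 0xFFFFFFFFFFFFFFFE... */
    LargestIntegralType wide = (LargestIntegralType)s;
    /* ...but the high-order bits are truncated again when the value
     * is cast back to its proper type. */
    short back = (short)wide;
    assert(back == s);
    return 0;
}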
On Feb 25, 2010, at 11:24 AM, Stewart Miles wrote:
> What's the problem with sign-extending a signed int or whatever passed in to cast_to_largest_integral_type()? The high-order bits will get truncated when it's cast back to its proper type.
>
> See sign_extend.c...
Thanks for the explanation.
Unfortunately casting to (unsigned) is equivalent to casting to (unsigned int), which is disastrous on LP64 platforms where a pointer is larger than an int.
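Here is a minimal sketch of the failure (assuming an LP64 platform, where int is 32 bits and pointers are 64 bits):

#include <stdio.h>
#include <stdlib.h>

typedef unsigned long long LargestIntegralType;

/* The current macro: the intermediate (unsigned) cast truncates a
 * 64-bit pointer to 32 bits before widening. */
#define cast_to_largest_integral_type(value) \
    ((LargestIntegralType)((unsigned)(value)))

int main(void)
{
    void *p = malloc(1);
    /* gcc warns "cast from pointer to integer of different size" here,
     * and the high-order 32 bits of p are silently discarded. */
    LargestIntegralType v = cast_to_largest_integral_type(p);
    printf("pointer %p became 0x%llx\n", p, v);
    free(p);
    return 0;
}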
I'll have to dig into this further to figure out a good fix for the problem.
> Notice the compile warnings disappear from gcc as well.
Alas, only on ILP32 and ILP64 platforms, not on LP64 platforms.
> So the Microsoft compiler decides to sign-extend pointers, which is a pain if you want to do...
> void *some_ptr = malloc(some_size);
> expect_value(SomeFunction, SomePtrArg, some_ptr);
I don't understand why this is a problem. The expect_value() macro invokes the expect_value_count() macro, which invokes the cast_to_largest_integral_type() macro on some_ptr. Later, the check_expected() macro invokes the cast_to_largest_integral_type() macro on SomePtrArg, so both values should be sign-extended and therefore compare equal. I agree that the error message on a mismatch might be hard to interpret due to the sign extension, but the same problem arises when the parameter is a short or a signed char.
Is there a code path that calls the _check_expected() function directly instead of through the check_expected() macro?
Perhaps the expect_value() macro should be typed, so we would have expect_integral_value(), expect_ptr_value(), expect_double_value(), etc.? This would allow the correct display of mismatched values.
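A rough sketch of what I mean; the _expect_typed_value() plumbing below is hypothetical, not cmockery's actual internals, and the pointer variant widens through uintptr_t to avoid sign extension:

#include <stdint.h>

typedef enum { EXPECT_INTEGRAL, EXPECT_PTR, EXPECT_DOUBLE } expect_type;

/* Hypothetical back end: records the value with a tag so a mismatch
 * can be reported in the caller's own type. */
void _expect_typed_value(const char *function, const char *parameter,
                         expect_type type, uintmax_t value, double dvalue);

#define expect_integral_value(function, parameter, value) \
    _expect_typed_value(#function, #parameter, EXPECT_INTEGRAL, \
                        (uintmax_t)(value), 0.0)
#define expect_ptr_value(function, parameter, value) \
    _expect_typed_value(#function, #parameter, EXPECT_PTR, \
                        (uintmax_t)(uintptr_t)(value), 0.0)
#define expect_double_value(function, parameter, value) \
    _expect_typed_value(#function, #parameter, EXPECT_DOUBLE, \
                        0, (double)(value))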
======================================
#include <assert.h>
#include <limits.h>
#include <stdarg.h>
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

#define cast_to_largest_integral_type(x) d_cast_to_max(sizeof(x), (x))

/* Expands to an else-if arm that reads the vararg back as the unsigned
 * type whose size matches; pastes onto the if statement below. */
#define DO_TYPE(t) else if (sizeof(t) == size) rst = va_arg(argp, t)

static inline uintmax_t
d_cast_to_max(size_t const size, ...)
{
    va_list argp;
    uintmax_t rst = 0;
    va_start(argp, size);
    /* Anything int-sized or smaller arrives promoted to (unsigned) int;
     * floating-point arguments are not handled. */
    if (size <= sizeof(unsigned int)) rst = va_arg(argp, unsigned int);
#if ULONG_MAX > UINT_MAX
    DO_TYPE(unsigned long);
#endif
#if ULLONG_MAX > UINT_MAX
    DO_TYPE(unsigned long long);
#endif
    else assert(false && "invalid type size!");
    va_end(argp);
    return rst;
}
======================================
"dynamic_cast" may reduce the proformance, so we can use some
"static_cast" optimization.
For example:
#define cast_to_largest_integral_type(x)        \
    (sizeof(x) <= sizeof(uintptr_t)             \
        ? (uintmax_t)((uintptr_t)(x))           \
        : d_cast_to_max(sizeof(x), (x)))
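For instance, with the combined macro above, the common cases never touch the varargs path (a sketch that assumes the d_cast_to_max() definition from earlier in the thread):

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    int n = -1;
    void *p = &n;
    /* sizeof(int) <= sizeof(uintptr_t), so the ternary folds to a
     * plain cast at compile time; d_cast_to_max() is never called. */
    uintmax_t a = cast_to_largest_integral_type(n);
    uintmax_t b = cast_to_largest_integral_type(p);
    /* Only an operand wider than uintptr_t (e.g. unsigned long long
     * on a 32-bit target) falls through to the varargs function. */
    uintmax_t c = cast_to_largest_integral_type(1ULL << 40);
    printf("0x%jx 0x%jx 0x%jx\n", a, b, c);
    return 0;
}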
> Why does cmockery use the non-standard LargestIntegralType and associated cast macros rather than the C standard intmax_t, uintmax_t, intptr_t, and uintptr_t?
Anybody see a problem with replacing the LargestIntegralType macro with a uintmax_t typedef?
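Concretely, something like this (assuming a C99 <stdint.h> is available):

#include <stdint.h>

typedef uintmax_t LargestIntegralType;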
On Mar 2, 2010, at 1:24 PM, Stewart Miles wrote:
> The macro allows a user of a seriously deficient tool chain (for example, no ANSI C headers for the compile target) to easily substitute an appropriate type for the target.
Can't the user (or the cmockery.h header) typedef the appropriate uintmax_t and uintptr_t rather than #defining a macro? Admittedly, I don't think there is a standard way to know if uintmax_t and uintptr_t have already been typedef'd by the toolchain; but then there also isn't a standard way to know the appropriate sizes for them using preprocessor conditionals if one were to #define LargestIntegralType.
Why not adopt the C99 solution and kludge up the appropriate typedef's on platforms that don't support it?
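For example, a sketch of that kludge (the _MSC_VER version check and the __int64 types are my assumption for illustration; a real port would need care not to clash with types the toolchain already defines):

#if defined(_MSC_VER) && _MSC_VER < 1600
/* Pre-VS2010 MSVC ships no <stdint.h>; hand-roll the two typedefs. */
typedef unsigned __int64 uintmax_t;
#ifdef _WIN64
typedef unsigned __int64 uintptr_t;
#else
typedef unsigned __int32 uintptr_t;
#endif
#else
#include <stdint.h>
#endif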