_TCHAR* string_1;
char* string_2;
How do I convert string_1 to string_2?
I tried several methods, but I didn't
succeed.
Thanks,
Marc
--
Norm
To reply, change domain to an adult feline.
M
"Norman Bullen" <no...@BlackKittenAssociates.com> schreef in bericht
news:gOGdndju5Ikgs5vX...@earthlink.com...
"Norman Bullen" <no...@BlackKittenAssociates.com> wrote in message
news:gOGdndju5Ikgs5vX...@earthlink.com...
> That is, of course, assuming that UNICODE is defined and TCHAR has
> resolved to wchar_t. The whole point of TCHAR is to make code compilable
> in Unicode or ANSI. If UNICODE is not #defined, TCHAR becomes "char", and
> no translation is necessary. If you perform WideCharToMultiByte on a
> TCHAR when you're already in ANSI, you'll screw things up. If you're
> programming with TCHAR, it should be because you want your code to work in
> both types of builds. Make sure you use #ifdef UNICODE around code that
> is only intended for that type of build.
If you're using the default code page of the machine, and you're happy to
use the CString classes from the MFC or ATL libraries, then it is easily
done without worrying explicitly about UNICODE being defined:
TCHAR *pszString = _T("Hello world");
CStringA sAnsiString( pszString );
const char *pszAnsiString = sAnsiString;
CStringA has constructors taking either kind of string, and all the
decisions about UNICODE are handled internally. This may not be the most
efficient method but it can sometimes be the neatest!
Dave
--
David Webber
Author of 'Mozart the Music Processor'
http://www.mozart.co.uk
For discussion/support see
http://www.mozart.co.uk/mozartists/mailinglist.htm
Marc:
The first thing you need to do is make sure that you really want to do this in
the first place.
Do you fully understand the difference between an "MBCS" application and a
"Unicode" application, and how the TCHAR concept allows you to write a program
that will compile correctly either way?
If so, why is your program using a mixture of TCHAR and char? This is sometimes
necessary, but usually not.
--
David Wilkinson
Visual C++ MVP
In addition to other answers, have you tried the CT2A helper macros from
ATL?
http://msdn.microsoft.com/en-us/library/87zae4a3(VS.80).aspx
// TCHAR * string_1
CT2A string_2( string_1 );
HTH,
Giovanni
"David Webber" <da...@musical-dot-demon-dot-co.uk> wrote in message
news:eWVkZsU0...@TK2MSFTNGP03.phx.gbl...
> I'm not sure whether you meant that to be a rebuttal to my post,
No - not at all.
> or if you were just offering additional information.
Just an alternative.
> My whole point about Unicode was in response to the OP request to "convert
> TCHAR * to char *". If Unicode is not defined, and OP uses the suggestion
> of WideCharToMultiByte, things will go very wrong.
Yes, absolutely. But the method I pointed out works whether or not UNICODE
is defined because CStringA has the two constructors:
CStringA::CStringA( const char * )
CStringA::CStringA( const wchar_t * )
and if you pass a TCHAR * string, the appropriate one will be called
depending on the definition of TCHAR.
If I were to go for the CT2A variant, would I have to include
atlbase.h and atlconv.h?
M
"Giovanni Dicanio" <giovanniD...@REMOVEMEgmail.com> schreef in bericht
news:%23Ach4iZ...@TK2MSFTNGP04.phx.gbl...
Personally, I do not like these implicit conversions like the CStringA
constructor, because they hide what is going on in your code, and can lead
to hard-to-detect bugs.
I would go for CT2A. If you need headers, then include them.
But again, why do you want to convert TCHAR* to char*?
It depends on your requirements.
CStringA is a robust class, with a rich public interface full of useful
methods, etc. much richer than simple C<X>2<Y> helpers.
On the other side, CX2Y helpers have less overhead than CStringT.
> If I were to go for the CT2A variant, would I have to include
> atlbase.h and atlconv.h?
Yes.
But if you have an MFC project with the "stdafx.h" skeleton created by the
VS wizard, I think that you don't need to explicitly include the
aforementioned header files.
Giovanni
I agree with David. The intention of the programmer is made more explicit
with the CX2Y helpers, IMHO.
> But again, why do you want to convert TCHAR* to char*?
I don't know, but probably the OP needs to call some legacy code based on
char*?
Giovanni
Yes, this is the only valid reason.
But very often the asker of this kind of question has introduced the mixed
string representations in his/her own code.
No, there are others.
For example if you want to write a MIDI file. MIDI events are defined by a
series of bytes, and some are designed to carry text. These are defined as
a couple of bytes starting 0xFF defining what sort of text event it is, then
some bytes representing the length of the text string, and then the text in
single byte characters. It is much easier to do this by manipulating char
* text. I imagine there are other file formats which specify that they
can include monobyte text, but MIDI is the one I use every day.
Internally in my program I use TCHAR text. In MOZART 9 (the current
release) TCHAR=char; in MOZART 10 (under development) TCHAR=wchar_t.
Where it exports a MIDI file, then I can do it via CStringA and *my* code is
identical in the two versions. I regard this as a very valid reason.
(And it is not "legacy code" in that it seems perfectly reasonable to go on
doing this in MOZART 42, by which time all my TCHARs will probably be
explicitly written as wchar_t, but I'll still need char * to write MIDI
files.)
> It depends on your requirements.
> CStringA is a robust class, with a rich public interface full of useful
> methods, etc. much richer than simple C<X>2<Y> helpers.
> On the other side, CX2Y helpers have less overhead than CStringT.
I'll just add that I'm usually doing things with the string I get and so
having CStringA is useful. But CT2A looks fine to me if all you need is
the string.
Aside, for my own education:
I have never used CT2A, but the recommended syntax (from the documentation)
====
LPCWSTR pszW = ...;
CW2A pszA(pszW);
// pszA works like an LPCSTR, and can be used thus:
ExampleFunctionA(pszA);
// Note: pszA will become invalid when it goes out of scope.
====
looks very much like (my paraphrase)
===
LPCWSTR pszW = ...;
CStringA sA(pszW);
// sA works like an LPCSTR, and can be used thus:
ExampleFunctionA(sA);
// Note: sA will become invalid when it goes out of scope.
===
In both cases destruction appears to be assured when it goes out of scope.
In both cases the WideCharToMultiByte-style conversion is done internally on
construction. In both cases an implicit cast to LPCSTR is used when you
pass it to a function. So I don't immediately see that an awful lot more
overhead is involved with one or the other?
Other such files are text files like HTML or XML. There, various encodings
are possible and common with chars, which brings us to two further aspects of
said conversion: firstly, the target encoding is not predefined but has to be
chosen, and secondly, the conversion can sometimes fail. Whether the
mentioned CT2A, CStringA or MBTWC fits or not must be determined by the
requirements.
Uli
--
C++ FAQ: http://parashift.com/c++-faq-lite
Sator Laser GmbH
Geschäftsführer: Thorsten Föcking, Amtsgericht Hamburg HR B62 932
Dave:
Well perhaps I meant "legacy or third party" code. However, I would think that
in this case the parameter type should be unsigned char* rather than char*.
Your code will be identical when you switch to Unicode if you use CT2A() also.
And your code will "speak what it means".
The real danger of the CStringA constructor comes when you decide that you want
to use 8-bit strings to represent UTF-8 values. Now you can have a bug in your
code that is not caught either by the compiler or when you test your code using
ASCII text. This happened to me when I converted my application from an MBCS one
with a back-end that used 8-bit strings in the local code page, to a Unicode one
with the (same) back end using UTF-8 strings.
<Text MIDI events>
> Well perhaps I meant "legacy or third party" code.
Not really third party code, but perhaps a "legacy communication protocol"
and/or "legacy file format" - but MIDI, originally defined with a
compactness which has not been necessary for many years now, will always be
that, and shows no signs of being replaced in any effective manner.
> However, I would think that in this case the parameter type should be
> unsigned char* rather than char*.
The distinction is moot. MIDI has AFAIK no way of specifying which code
page should be used to interpret text, and so sticking to characters below
128 is probably recommended!
> Your code will be identical when you switch to Unicode if you use CT2A()
> also.
As it is with CStringA.
> And your code will "speak what it means".
As indeed does CStringA.
> The real danger of the CStringA constructor comes when you decide that you
> want to use 8-bit strings to represent UTF-8 values.
I assume you mean in the output? But the "A" on the end of CStringA
surely means the same as it does in CT2A. (ANSI according to the
documentation, though at least in the case of CStringA, I believe, it is
ASCII text plus characters 128-255 interpreted according to the code page
of the machine.) So I still can't see any practical difference here.
If you want control over what the 8-bit thingies actually represent, rather
than just using the default code page of the machine, then yes,
WideCharToMultiByte() and its inverse give you control over that. No
argument there. In fact 8-bit text entries in old versions of my music
software have a string and a logfont, and the conversion is now done (when
reading old format files) with MultiByteToWideChar() using the code page
corresponding with the character set entry in the logfont.
But for quick conversions where you have no information about the code page,
or no need to produce anything other than text for the default code page,
then CStringA and CT2A and friends are easy to do and as good as anything
else (and as good as each other).
> Now you can have a bug in your code that is not caught either by the
> compiler or when you test your code using ASCII text. This happened to me
> when I converted my application from an MBCS one with a back-end that used
> 8-bit strings in the local code page, to a Unicode one with the (same)
> back end using UTF-8 strings.
My impression is that under Windows UTF-8 is very much a second class
citizen. Unicode means UTF-16. Of course there are undoubtedly
specialist cases where UTF-8 is required, just as I have a specialist case
where ANSI text is required, and I agree with you 100% that it is not a good
idea to assume anything about either without thinking your code out
carefully :-)
It's not an "awful lot more" of overhead :) , but I think that the CStringT
template has more overhead in constructing the string than the C<X>2<Y>
conversion helpers.
You can read the code of CX2Y in <atlconv.h>, and compare it with the
CStringT source code in <cstringt.h>.
I think that more steps are involved in constructing a CStringT instance.
Just that.
Giovanni