On 2.06.2021 17:11, Dmitry Yemanov wrote:
> 02.06.2021 12:16, Virgo Pärna wrote:
>> bytes, when it is defined as CHAR(10) CHARACTER SET WIN1252, but 40
>> bytes, when it is defined as CHAR(10) CHARACTER SET UTF8?
Ok. So it is a good idea to also specify the character set in the UDR
definition SQL, to avoid a mismatch between the database character set
and the character set the UDR expects.
And unused bytes at the end should be spaces (0x20)? Also in UTF8?
>> But how is VARCHAR represented? Exactly same as CHAR?
> First two bytes represent the 16-bit length. The remaining "length"
> bytes are the string itself.
So the length is the length in bytes, which for a UTF8-encoded string is
the actual byte length of the encoded string (1 for "F", but 2 for "Ä")?