I have written a very basic script (similar to one of your examples) to convert .csv files to .xlsx using Excel::Writer::XLSX. However, I've noticed that when I read in a UTF-8 text/csv file and write it out, the special characters (Spanish accented letters, in this case) aren't getting written properly.
Judging from the docs, it seems like UTF-8 should be used automatically. But when I open the resulting file in Excel, or even if I unzip it and inspect the sharedStrings.xml file, the characters are wrong.
E.g. (hopefully these will paste correctly)
Dirección
becomes
Dirección
If I explicitly open the input file with '<:encoding(UTF-8)', then it does the right thing.
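To illustrate what I mean, here is a small self-contained sketch (core Perl only, no Excel::Writer::XLSX needed, filenames are temporary) showing the difference between reading the file with and without the encoding layer. Without the layer, Perl treats each UTF-8 byte as a separate character, so when the string is later re-encoded to UTF-8 (as Excel::Writer::XLSX does for sharedStrings.xml), the multi-byte characters get double-encoded:

```perl
#!/usr/bin/perl
use strict;
use warnings;
use utf8;                       # this source contains UTF-8 literals
use Encode qw(encode);
use File::Temp qw(tempfile);

# Write "Dirección" to a temp file as raw UTF-8 bytes,
# the same way a UTF-8 CSV file would hold it on disk.
my ( $out, $path ) = tempfile();
binmode $out;
print {$out} encode( 'UTF-8', "Dirección\n" );
close $out;

# 1) Read WITHOUT an encoding layer: the two bytes of "ó" (0xC3 0xB3)
#    come in as two separate characters.
open my $raw, '<', $path or die "Cannot open '$path': $!";
my $bytes = <$raw>;
close $raw;
chomp $bytes;

# Re-encoding that string to UTF-8 double-encodes it, producing the
# kind of mojibake seen in sharedStrings.xml.
my $mojibake = encode( 'UTF-8', $bytes );

# 2) Read WITH the layer: the bytes are decoded into characters, so
#    downstream code can re-encode them correctly.
open my $in, '<:encoding(UTF-8)', $path or die "Cannot open '$path': $!";
my $chars = <$in>;
close $in;
chomp $chars;

print "without layer (re-encoded): $mojibake\n";
print "with layer:                 $chars\n";
```

The same '<:encoding(UTF-8)' open is what I added to the conversion script before passing the rows to the worksheet's write methods.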
What encoding does it assume by default? ASCII? Since UTF-8 is backwards compatible with ASCII, why not assume UTF-8?
I'm running 0.84 (the latest I saw on CPAN) on RHEL. The 'file' utility detects the input as UTF-8, so I had hoped it would "just work" without my having to specify the encoding for every file. For my purposes, specifying the input encoding as UTF-8 should be safe, but I wanted to ask the question.
Thanks.