I have a few comments that might help you. I previously owned a weird
rare Japanese import and more recently I did some sniffing of my
Peugeot's data bus.
On Wed, 2011-10-05 at 12:34 +1100, David Lyon wrote:
> I'm off to Japan next week and trying to break through a wall of
> silence. Need to find information and mr google doesn't want to
> tell me anything.
I assume you don't speak Japanese? Not much information in English for
car stuff in Japan. :(
>
> Near me there's some caryards with lots of these:
>
> - http://www.jlimports.com.au/nissan-skyline-v36-370gt-coupe
>
> I want to translate the screens from japanese to english.
Have you tried talking to some of the car yards? I expect they would be
very excited about being able to sell the cars with English screens, and
could maybe hook you up with their in-country contacts.
That gets you someone who speaks both English and Japanese, and knows
the car scene so they'd be able to point you to wreckers and the like.
The other place to try is Yahoo Japan auctions, which was "the place" to
buy car parts a few years ago. If you have a local (Japanese) shipping
address then you should be able to buy things from there and collect
them when you're in-country, or there are agents who will buy for you
and then ship internationally (not sure who a good agent is any more.) I
found the key was discovering what the part is called in Japanese, then
putting that into the search function. :)
> I'm
> thinking I can make a man-in-the-middle interceptor between
> the computer and the LCD, detect different screens and patch
> the graphic lcd data. So stuff appears in english suddenly.
I've never heard of anyone doing that. It sounds fiddly but I guess
perfectly possible. Presumably you'd need to be letting some of the
display data through so numbers, animations, etc. still play.
To give you some more random other suggestions:
* There is probably a flash chip somewhere in the module with all the
Japanese strings stored in it alongside the program. I have NFI what
encoding they'd be in, but you might be able to reflash with English
strings of the same byte length (probably plus a checksum somewhere.)
Better yet if you can find an English-language equivalent module and
dump its flash for comparison.
I think this could be less pain than intercepting the LCD lines (still
very tricky.) But you'd need to be prepared to maybe brick a few first.
* My Peugeot has a multi-function display which is presumably quite like
the one in the V35, only less complex (2003 model.) Although the display
is its own computer it's also fairly "dumb", mostly taking CAN messages
from the other car components. ie there's a status message with fuel
information (for trip computer display), a message from the stereo with
CD/radio status information, messages when the automatic wipers get
turned on/off, doors open, etc.
The display module itself just responds to these by updating its
internal information, and displaying whatever the user has selected on
the screen. It will occasionally send out CAN messages based on the
selections that the user makes. Apart from this there's not a lot of
onboard "smarts".
Assuming Nissan uses CAN, you could intercept the relevant CAN bus on a
running v35 and dump the messages. Then set up the module on a test
bench and replay & tweak the CAN messages, work out what they all mean.
Then presumably try to guess & inject any that you didn't see in the
dump (like engine failure warnings.) After you're done documenting all
that, you'd be able to replace the monitor unit with any computer with a
CAN transceiver.
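The bench-replay loop above can be sketched in a few lines: load the dumped frames, tweak one byte in one message ID, and watch what changes on the display. The IDs and byte positions below are invented; on the real car you'd find them by diffing dumps taken in different states.

```python
# Sketch of the replay-and-tweak step. Arbitration IDs and the "fuel
# byte" position are made up -- discover the real ones by diffing dumps.

from dataclasses import dataclass

@dataclass
class CanFrame:
    arb_id: int
    data: bytes

def tweak(frames, arb_id, byte_index, new_value):
    """Return a copy of the dump with one byte changed in matching frames."""
    out = []
    for f in frames:
        if f.arb_id == arb_id:
            d = bytearray(f.data)
            d[byte_index] = new_value
            out.append(CanFrame(f.arb_id, bytes(d)))
        else:
            out.append(f)
    return out

# Pretend 0x3E1 is the fuel status message and byte 2 is litres remaining
dump = [CanFrame(0x3E1, bytes([0, 0, 42, 0])), CanFrame(0x1F6, bytes([1]))]
tweaked = tweak(dump, 0x3E1, 2, 99)
```

Replay `tweaked` at the bench module; if the fuel gauge jumps, you've identified the message.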
That's also a ton of work, maybe even more than the low-level LCD hack,
I don't know, but it sprang to mind as an idea.
Hth.
- Angus
> I want to translate the screens from japanese to english. I'm
> thinking I can make a man-in-the-middle interceptor between
> the computer and the LCD, detect different screens and patch
> the graphic lcd data. So stuff appears in english suddenly.
Ummm... that's not going to be easy or cheap... I might even venture to
say impossible.
Not that it's _technically_ difficult to intercept and re-generate screens
- every sampling device with a frame buffer does that. I just don't see
how you could possibly detect arbitrary Japanese text and replace it with
English. *Maybe* if you had a known set of static, pre-canned screens (and
then, wouldn't you just replace the unit itself?), but for a generic
application, I simply don't see it happening.
Hope that's not the only reason you're going to Japan!?!
Regards,
--
Mark McDougall, Engineer
Virtual Logic Pty Ltd, <http://www.vl.com.au>
21-25 King St, Rockdale, 2216
Ph: +612-9599-3255 Fax: +612-9599-3266
There's a fabulous blog writeup out there somewhere explaining
(amongst a lot of other stuff) just how little image recognition you
need to do to write a screenscraping poker-bot. I'd guess the problem
of "recognising a tiny subset of japanese characters as they're
displayed on a car dashboard" is much closer to distinguishing which
cards out of a deck of 52 you can see, rather than the hard AI problem
of actually translating arbitrary japanese text into english.
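The card-recognition analogy boils down to something like this: fingerprint each known screen by the colour of a handful of fixed pixels, no OCR needed. The coordinates and colours below are invented; you'd collect real ones from captures of each screen.

```python
# Toy screen-fingerprinting: identify which pre-canned screen is showing
# by probing a few fixed pixel positions. Probe coordinates and colours
# are invented for illustration.

FINGERPRINTS = {
    "fuel":  {(10, 4): 0xFFFFFF, (300, 4): 0x000000},
    "audio": {(10, 4): 0x000000, (300, 4): 0xFF0000},
}

def identify(frame):
    """frame: dict mapping (x, y) -> RGB value for the sampled pixels.
    Returns the matching screen name, or None if nothing matches."""
    for name, probes in FINGERPRINTS.items():
        if all(frame.get(xy) == rgb for xy, rgb in probes.items()):
            return name
    return None
```

With a fixed menu of screens, a few probes per screen is usually enough to tell them apart unambiguously.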
big
--
"I say we take off and nuke the entire site from orbit.
That's the only way to be sure." - Ellen Ripley
> All the screens are static and have some identifying feature somewhere.
Would it be possible then to simply replace the unit with your own?
> I've done many screenscraping/on-the-fly translations in the past. So
> this is nothing out of the ordinary. Hopefully I won't have to do it this
> way.
Mind if I ask what sort of hardware you'll be using for this?
On 5/10/2011 2:41 PM, David Lyon wrote:
> I've done many screenscraping/on-the-fly translations in the past.
Could you elaborate on the original hardware and solutions involved? I'm
curious to know what sort of stuff you've done in this area.
--
You received this message because you are subscribed to the Google Groups "Robots & Dinosaurs" group.
To post to this group, send email to sydney-h...@googlegroups.com.
To unsubscribe from this group, send email to sydney-hackspa...@googlegroups.com.
For more options, visit this group at http://groups.google.com/group/sydney-hackspace?hl=en.
> I'm not sure what this hardware is exactly, but I got asked to 'do something'
> to get the GPS and media systems working in english. I'm just figuring that
> it can't be too special and is probably made with something not too different
> than is commonly off the shelf. There can't be too many makers of the LCD/TFT
> screens.
All the screen-scraping you've done in the past is - AFAICT - text-based.
I assume you realise that LCD data will comprise only RGB pixel values -
yes? For 640x480 resolution you're looking at roughly 25MHz sample rate of
said pixel values for a purely parallel bus. For LVDS, for example, you
could be looking at upwards of 65MHz.
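For reference, the ~25MHz figure isn't arbitrary: it falls straight out of standard 640x480@60 VGA timing once you include the blanking intervals.

```python
# Where ~25MHz comes from: 640x480@60 VGA timing includes horizontal and
# vertical blanking, giving 800 clocks per line and 525 lines per frame.
total_clocks_per_line = 800   # 640 visible + 160 blanking
total_lines_per_frame = 525   # 480 visible + 45 blanking
refresh_hz = 60

pixel_clock = total_clocks_per_line * total_lines_per_frame * refresh_hz
print(pixel_clock / 1e6)  # 25.2 (the VGA standard figure is 25.175 MHz at 59.94 Hz)
```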
I'm not clear on whether you actually need to decode the content, or
merely recognise various screens and generate a replacement graphic?
I'm also not clear on how your hardware interfaces to the LCD?!? Or are
you planning on hijacking the software running on the unit itself, and
doing all the work there?
> The menu structure isn't anymore deep than 4. I can probably
> read the button selection clicks too to see where the user is navigating.
So you'll be running your own software on the device itself? How do you
plan on hijacking the screen output?
FYI I'm not trying to shoot you down. I'm genuinely interested in your
approach and am curious to know more about it! I also have experience with
designing video mixing hardware so I know a little about video formats...
Plan C, which may never happen now, was to write an ISR on the Arduino
to read parallel data in from the 'puter, and pretend to be the LCD.
Then after a 'screen-load' of pixels went out to the LCD on the output
port, think about what went past, check a few pixels to work out what
function is on the screen, and then over-write some of those pixels
with English.
> So... reading pixels and buffering them and then doing Japanese text
> recognition and replacement... on an Arduino?
I don't believe David is planning on recognising text; just identifying
the screen using fairly simple heuristics and substituting (graphical)
text data where required.
> It's not about MHz because on a parallel data bus there is a strobe.
> Which controls how fast the data goes through.
Umm, no. Graphical VGA-resolution LCD displays have a fixed data rate at
which RGB pixel data must be streamed. You can't pause or otherwise slow
the data rate because the screen needs refreshing regularly.
So you'd need to sample the data in your micro at the full data rate. For
a parallel bus, it's slower (~25MHz) but lots of I/O. For serial (e.g.
LVDS or SDVO) you're looking at faster bit rates but less I/O.
> If I can sample pixels at known points, I can see what
> screen the system is at, then pump out graphics over-writing what is there
> with text in english - in between when the computer isn't updating.
You can't simply time-interleave your own data onto the LCD bus. To update
a certain part of the screen, the data needs to go out at a certain time,
and that would involve regenerating the _entire_ screen where you either
pass data thru or inject your own at certain points in the frame.
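In other words, the interceptor is a per-pixel multiplexer running at the full pixel clock: for every pixel position in the frame, either pass the incoming value through or substitute your own. As a frame-level sketch (in hardware this would be streaming logic, not a Python loop; the overlay region and data are invented):

```python
# Per-pixel mux over a whole frame: pass incoming pixels through, except
# inside a rectangular overlay region where substituted "English" pixels
# go out instead. Region and overlay contents are hypothetical.

def mux_frame(incoming, overlay, region):
    """incoming: list of pixel rows; overlay replaces pixels inside region
    (x0, y0, width, height). Returns the regenerated frame."""
    x0, y0, w, h = region
    out = []
    for y, row in enumerate(incoming):
        new_row = list(row)
        if y0 <= y < y0 + h:
            for x in range(x0, x0 + w):
                new_row[x] = overlay[y - y0][x - x0]
        out.append(new_row)
    return out

frame = [[0] * 8 for _ in range(4)]   # 8x4 all-black incoming frame
english = [[1, 1], [1, 1]]            # 2x2 substituted "text" block
patched = mux_frame(frame, english, (3, 1, 2, 2))
```

Note the whole frame is regenerated every refresh; that's exactly why the micro has to keep up with the full data rate rather than injecting at its leisure.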
To do what you're hoping to do would require interfacing to the *front*
end of the frame buffer itself, rather than the LCD. Even then, there's no
simple mechanism to contend with the CPU and master that bus to inject
your own data. Here, the bus speeds are even faster. And the screen memory
accesses are completely random/arbitrary, and depend wholly on the video
driver.
It could be that I'm missing the point here entirely (and happy to have
that pointed out to me) but I simply don't see how you could achieve this
with any embedded micro. It'd be a big job even for an FPGA.