Make OpenALPR detect Tunisian licence plates


Mohamed Bouguezzi

Jun 2, 2015, 5:08:58 PM
to open...@googlegroups.com
Hello everyone, I need help to make OpenALPR detect Tunisian plates. The good thing about it is that Tunisian licence plates are all similar; only the numbers differ.
A plate has 1, 2 or 3 digits, then "Tunisia" (written in Arabic), then 1, 2, 3 or 4 digits maximum.
Looking forward to your help, it's urgent. Thank you!!
images.jpeg
images1.jpeg
imagexs.jpeg
mtn.jpg
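For reference, the layout described above (1–3 digits, the Arabic word for Tunisia, then 1–4 digits) could be checked with a small regex. This is only a hypothetical sketch, assuming the recognized text keeps the Arabic word between the two digit groups:

```python
import re

# Hypothetical pattern for the layout described above:
# 1-3 digits, the word "Tunisia" in Arabic script, then 1-4 digits.
TUNISIAN_PLATE = re.compile(r"^\d{1,3}\s*تونس\s*\d{1,4}$")

def is_tunisian_plate(text):
    """Return True if `text` matches the described plate layout."""
    return bool(TUNISIAN_PLATE.match(text))

print(is_tunisian_plate("123 تونس 4567"))  # True
print(is_tunisian_plate("12345 تونس 1"))   # False: too many leading digits
```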

Matt

Jun 3, 2015, 8:47:15 AM
to open...@googlegroups.com, mohamedb...@gmail.com
Yes, it should do a good job on those plates once trained.  In order to train OpenALPR for Tunisian plates, you'll need to collect a few thousand image samples.  More info on training is here:

Mohamed Bouguezzi

Jun 14, 2015, 2:06:40 PM
to open...@googlegroups.com, mohamedb...@gmail.com
Hello everyone! Thank you Matt for your answer, but I think you don't check your e-mails anymore:
Hey Matt, I think you are on holiday, lol. Sorry to be annoying, but I really couldn't solve the problem and I look forward to your help! It doesn't detect the whole licence plate.
Screenshot from 2015-06-14 18:58:46.png

Matt

Jun 16, 2015, 10:58:40 PM
to open...@googlegroups.com, mohamedb...@gmail.com
It looks like you're using the US training data with your license plates.  That's probably not going to work too well.  The EU data (alpr -c eu [license plate]) should work better.  If you want really good results, you'll probably want to train your own detector/OCR for Tunisia.

Mohamed Bouguezzi

Jun 17, 2015, 8:15:43 AM
to open...@googlegroups.com, mohamedb...@gmail.com
Hello Matt, thank you again for your help. I already started training my OCR, but when I start the character recognition and identification, I get something I don't understand :/
                    Cube ERROR (ConvNetCharClassifier::RunNets): NeuralNet is NULL
Does it work despite this, or do I have to configure something?
Screenshot from 2015-06-17 13:14:34.png

Matt

Jun 17, 2015, 9:36:33 PM
to open...@googlegroups.com, mohamedb...@gmail.com
That looks like a Tesseract error.  What country are you using in the classifychars program?  Is there a corresponding OCR language?  You can try commenting out the OCR function in the classifychars utility -- I don't think it does anything useful.

Mohamed Bouguezzi

Jun 19, 2015, 10:18:16 PM
to open...@googlegroups.com
Hello Matt, thank you for your help. I ignored that problem and continued the training. I got the tif/box files, changed the names and put them in the /tn/input dir, then I executed ./train.py and got these errors; I think I am in the wrong directory? (PS: I edited the directory in train.py to match the right one, and I also edited the paths of the other executables called from train.py, like unicharset_extractor, mftraining, ..., to match the right ones.) Then I got fewer errors, and I think there is a problem at this step in ./train.py:

    print "Executing: " + train_cmd
    os.system(train_cmd)
    os.system("mv ./" + file_without_ext + ".tr ./tmp/" + file_without_ext + ".tr")
    os.system("mv ./" + file_without_ext + ".txt ./tmp/" + file_without_ext + ".txt")
There are no *.tr and *.txt files in my output! What can I do about that?
Screenshot from 2015-06-20 03:13:50.png
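For anyone hitting the same wall: os.system() in that script fails silently, so missing .tr/.txt files usually mean the training command itself failed. A defensive rewrite of that step could look like the sketch below; run_training_step and the ./tmp layout are illustrative, not taken from the actual train.py:

```python
import os
import shlex
import shutil
import subprocess

def run_training_step(train_cmd, file_without_ext, tmp_dir="./tmp"):
    """Run one training command and move its outputs to tmp_dir,
    failing loudly instead of silently like os.system()."""
    print("Executing: " + train_cmd)
    result = subprocess.run(shlex.split(train_cmd))
    if result.returncode != 0:
        raise RuntimeError("training command failed: " + train_cmd)
    for ext in (".tr", ".txt"):
        src = "./" + file_without_ext + ext
        if not os.path.exists(src):
            raise RuntimeError("expected output missing: " + src)
        shutil.move(src, os.path.join(tmp_dir, file_without_ext + ext))
```

This way the script stops at the first failing command instead of continuing with missing files.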

Mohamed Bouguezzi

Jun 24, 2015, 5:47:06 PM
to open...@googlegroups.com
Hello Matt :)
I fixed the problems and the program works perfectly and detects Tunisian licence plates. What can I do to make it open source, available for everyone, but with my name somewhere in it? :p Thank you for your help, and to everyone who participated!!
Now I want to implement it on a Raspberry Pi. Do you have any documentation to help? :)

Matt

Jun 26, 2015, 7:55:18 PM
to open...@googlegroups.com, mohamedb...@gmail.com
Excellent.  Yes, feel free to put your name all over those trained files.  

I've compiled on Raspberry Pi 2.  I had to compile OpenCV myself since the OpenCV packages were not available for ARM.  Other dependencies were available via APT.  So it was sort of a combination of "The Easy Way" and "The Harder Way":

Mohamed Bouguezzi

Jul 9, 2015, 7:36:54 PM
to open...@googlegroups.com, mohamedb...@gmail.com

It's OK Matt, I fixed it: there is no version 2.4.8, so I changed it to 2.4.9.
Now I tried to make it work with my RPi webcam using this documentation:
http://lukagabric.com/raspberry-pi-license-plate-recognition/

I changed these lines in that PyALPR.py file:
        #webcam subprocess args
        webcam_command = "raspivid -o alpr.jpg"
        self.webcam_command_args = shlex.split(webcam_command)

        #alpr subprocess args
        alpr_command = "alpr -c tn -t hr -n 300 -j alpr.jpg"
        self.alpr_command_args = shlex.split(alpr_command)

    def webcam_subprocess(self):
        return subprocess.Popen(self.webcam_command_args, stdout=subprocess.PIPE, shell=True)

    def alpr_subprocess(self):
        return subprocess.Popen(self.alpr_command_args, stdout=subprocess.PIPE, shell=True)


to give it the right Raspberry camera command and the right "alpr -c tn" command,
and I "corrected" it (added shell=True) in those 2 lines trying to make it work, but!....... I got all of this:



pi@raspberrypi ~/PyALPR-master $ python PyALPR.py

raspivid Camera App v1.3.12

Display camera output to display, and optionally saves an H264 capture at requested bitrate


usage: raspivid [options]

Image parameter commands

-?, --help    : This help information
-w, --width    : Set image width <size>. Default 1920
-h, --height    : Set image height <size>. Default 1080
-b, --bitrate    : Set bitrate. Use bits per second (e.g. 10MBits/s would be -b 10000000)
-o, --output    : Output filename <filename> (to write to stdout, use '-o -')
-v, --verbose    : Output verbose information during run
-t, --timeout    : Time (in ms) to capture for. If not specified, set to 5s. Zero to disable
-d, --demo    : Run a demo mode (cycle through range of camera options, no capture)
-fps, --framerate    : Specify the frames per second to record
-e, --penc    : Display preview image *after* encoding (shows compression artifacts)
-g, --intra    : Specify the intra refresh period (key frame rate/GoP size). Zero to produce an initial I-frame and then just P-frames.
-pf, --profile    : Specify H264 profile to use for encoding
-td, --timed    : Cycle between capture and pause. -cycle on,off where on is record time and off is pause time in ms
-s, --signal    : Cycle between capture and pause on Signal
-k, --keypress    : Cycle between capture and pause on ENTER
-i, --initial    : Initial state. Use 'record' or 'pause'. Default 'record'
-qp, --qp    : Quantisation parameter. Use approximately 10-40. Default 0 (off)
-ih, --inline    : Insert inline headers (SPS, PPS) to stream
-sg, --segment    : Segment output file in to multiple files at specified interval <ms>
-wr, --wrap    : In segment mode, wrap any numbered filename back to 1 when reach number
-sn, --start    : In segment mode, start with specified segment number
-sp, --split    : In wait mode, create new output file for each start event
-c, --circular    : Run encoded data through circular buffer until triggered then save
-x, --vectors    : Output filename <filename> for inline motion vectors
-cs, --camselect    : Select camera <number>. Default 0
-set, --settings    : Retrieve camera settings and write to stdout
-md, --mode    : Force sensor mode. 0=auto. See docs for other modes available
-if, --irefresh    : Set intra refresh type


H264 Profile options :
baseline,main,high


H264 Intra refresh options :
cyclic,adaptive,both,cyclicrows

Preview parameter commands

-p, --preview    : Preview window settings <'x,y,w,h'>
-f, --fullscreen    : Fullscreen preview mode
-op, --opacity    : Preview window opacity (0-255)
-n, --nopreview    : Do not display a preview window

Image parameter commands

-sh, --sharpness    : Set image sharpness (-100 to 100)
-co, --contrast    : Set image contrast (-100 to 100)
-br, --brightness    : Set image brightness (0 to 100)
-sa, --saturation    : Set image saturation (-100 to 100)
-ISO, --ISO    : Set capture ISO
-vs, --vstab    : Turn on video stabilisation
-ev, --ev    : Set EV compensation
-ex, --exposure    : Set exposure mode (see Notes)
-awb, --awb    : Set AWB mode (see Notes)
-ifx, --imxfx    : Set image effect (see Notes)
-cfx, --colfx    : Set colour effect (U:V)
-mm, --metering    : Set metering mode (see Notes)
-rot, --rotation    : Set image rotation (0-359)
-hf, --hflip    : Set horizontal flip
-vf, --vflip    : Set vertical flip
-roi, --roi    : Set region of interest (x,y,w,d as normalised coordinates [0.0-1.0])
-ss, --shutter    : Set shutter speed in microseconds
-awbg, --awbgains    : Set AWB gains - AWB mode must be off
-drc, --drc    : Set DRC Level
-st, --stats    : Force recomputation of statistics on stills capture pass
-a, --annotate    : Enable/Set annotate flags or text
-3d, --stereo    : Select stereoscopic mode
-dec, --decimate    : Half width/height of stereo image
-3dswap, --3dswap    : Swap camera order for stereoscopic
-ae, --annotateex    : Set extra annotation parameters (text size, text colour(hex YUV), bg colour(hex YUV))


Notes

Exposure mode options :
auto,night,nightpreview,backlight,spotlight,sports,snow,beach,verylong,fixedfps,antishake,fireworks

AWB mode options :
off,auto,sun,cloud,shade,tungsten,fluorescent,incandescent,flash,horizon

Image Effect mode options :
none,negative,solarise,sketch,denoise,emboss,oilpaint,hatch,gpen,pastel,watercolour,film,blur,saturation,colourswap,washedout,posterise,colourpoint,colourbalance,cartoon

Metering Mode options :
average,spot,backlit,matrix

Dynamic Range Compression (DRC) options :
off,low,med,high

Warning: You are running an unsupported version of Tesseract.
Expecting version 3.03, your version is: 3.02.02
Error opening data file /usr/local/src/openalpr/tesseract-ocr/tessdata/tessdata/lus.traineddata
Please make sure the TESSDATA_PREFIX environment variable is set to the parent directory of your "tessdata" directory.
Failed loading language 'lus'
Tesseract couldn't load any languages!
^CTraceback (most recent call last):
  File "PyALPR.py", line 64, in <module>
    plate_reader.read_plate()
  File "PyALPR.py", line 40, in read_plate
    alpr_json, alpr_error = self.alpr_json_results()
  File "PyALPR.py", line 26, in alpr_json_results
    alpr_out, alpr_error = self.alpr_subprocess().communicate()
  File "/usr/lib/python2.7/subprocess.py", line 746, in communicate
    stdout = _eintr_retry_call(self.stdout.read)
  File "/usr/lib/python2.7/subprocess.py", line 478, in _eintr_retry_call
    return func(*args)
KeyboardInterrupt




Knowing that it works on the RPi with pictures and videos,
how can I correct this?

Mohamed Bouguezzi

Jul 11, 2015, 1:51:31 PM
to open...@googlegroups.com, mohamedb...@gmail.com

Hello everyone:
I got it working, but it doesn't give any result!
Also, this script needs a small modification, which is to use shell=False in:

subprocess.Popen(self.alpr_command_args, stdout=subprocess.PIPE, shell=False)
and
subprocess.Popen(self.webcam_command_args, stdout=subprocess.PIPE, shell=False)

But I still get no result: when I delete "--quiet" from the "fswebcam" command, the webcam part works and the image is written to "image-name.jpg", but that's all, nothing else.
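For what it's worth, the shell=True version failed because when args is a list and shell=True is set, only the first list element is run by the shell and the rest become positional parameters of the shell itself, which is why raspivid only printed its usage text. With the default shell=False, every element reaches the program as an argument. A minimal sketch, using echo as a stand-in for raspivid/alpr:

```python
import shlex
import subprocess

# Stand-in for the real command line (e.g. "alpr -c tn -n 300 -j alpr.jpg").
command = "echo -c tn alpr.jpg"
args = shlex.split(command)  # ['echo', '-c', 'tn', 'alpr.jpg']

# shell=False (the default): each list element is passed to the
# program as a separate argument, exactly as split above.
proc = subprocess.Popen(args, stdout=subprocess.PIPE)
out, _ = proc.communicate()
print(out.decode().strip())  # -c tn alpr.jpg
```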

Isn't there another script to make it work in streaming, i.e. real-time analysis of the licence plate from the RPi webcam?

Can you help me, Matt? Maybe this will be my last comment, because I have to hand the project to my professor next Tuesday and I'd like to get it working in real time. I hope you can help me with a script that doesn't need modifications, because I have no more time to lose hunting for errors.

Hope you'll answer as fast as you can!

Matt

Jul 11, 2015, 3:08:19 PM
to open...@googlegroups.com, mohamedb...@gmail.com
What is fswebcam?  That program takes a snapshot from your webcam?  I don't know how that program works, so I'm not sure if I can offer much help.

It's probably simpler if you just pull the image directly.  If you like Python, you can use the OpenCV Python bindings to grab the frame from your webcam, then pass the bytes into OpenALPR Python bindings to process it.
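In case it helps a later reader, that suggestion could be sketched roughly as below. The capture loop assumes the OpenCV (cv2) and openalpr Python bindings are installed with a trained "tn" runtime, and the config/runtime paths are placeholders; best_plate is a hypothetical helper, not part of either library:

```python
def best_plate(results, min_confidence=80.0):
    """Hypothetical helper: return the highest-confidence plate string
    from an OpenALPR-style results dict, or None if nothing qualifies."""
    best = None
    for plate in results.get("results", []):
        if plate["confidence"] >= min_confidence:
            if best is None or plate["confidence"] > best["confidence"]:
                best = plate
    return None if best is None else best["plate"]

def capture_loop(config="/etc/openalpr/openalpr.conf",
                 runtime="/usr/share/openalpr/runtime_data"):
    """Sketch (call this on the Pi): grab webcam frames with OpenCV and
    pass the JPEG bytes straight to the OpenALPR bindings -- no temp files."""
    import cv2                  # OpenCV Python bindings
    from openalpr import Alpr   # OpenALPR Python bindings

    alpr = Alpr("tn", config, runtime)
    cam = cv2.VideoCapture(0)   # first attached camera
    try:
        while True:
            ok, frame = cam.read()
            if not ok:
                break
            ok, jpeg = cv2.imencode(".jpg", frame)  # frame -> JPEG bytes
            results = alpr.recognize_array(jpeg.tobytes())
            plate = best_plate(results)
            if plate:
                print("Detected plate:", plate)
    finally:
        cam.release()
        alpr.unload()
```

Processing bytes in memory like this avoids the write-to-disk round trip of the raspivid/fswebcam approach.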

Mohamed Bouguezzi

Jul 11, 2015, 9:53:52 PM
to open...@googlegroups.com, mohamedb...@gmail.com
"fswebcam" is a command-line tool with the same purpose as "raspistill" and "raspivid".
Thank you Matt, but I don't know how to do that!! :/ Can you offer more help?
I don't have any idea how to do it!
Well, it'll be my first time trying Python, but I am willing to do it! :)

youssfi.c...@gmail.com

Apr 22, 2016, 11:21:33 AM
to OpenALPR, mohamedb...@gmail.com

Hello Mohamed,
I am working on the same project. I used OpenCV and OCR (matching, SIFT, Tesseract...) and I didn't get satisfying results. Please help me by putting me on the right path.
Did you publish your work as open source online?
Thanks in advance

abdessalem aymen

Jul 12, 2016, 6:47:25 AM
to OpenALPR, mohamedb...@gmail.com
Hello Mohamed,

I need your expertise with OpenALPR: did you find a good solution for Tunisian plates or not?

med-aziz hadj said

Feb 12, 2018, 3:39:09 PM
to OpenALPR
Good evening Mohamed, I am working on a project on the detection of car licence plates. I tried to work with the OPENALPR library but it didn't work. Doing some research, I came across your comment in a forum where you said you had run into the same problem. I wanted to know whether you managed to find a solution?

Alaeddine Harizi

May 25, 2019, 1:33:21 PM
to OpenALPR
Hey man, did you find any solution?
Can you help me?

Sam Navcom

Oct 3, 2019, 8:21:13 PM
to OpenALPR
agents and hundreds of cameras.  I need someone to assist me in setting up and configuring the OpenALPR open-source agent and camera.  This is a new development effort.  I am offering $65.00 per hour (1099).  This is an unending project, with unlimited funds.  I can be reached at (512) 712-3123. Also, please email me at snavarro@adjacentsolutions

Thank you,

Sam Navarro