java.lang.UnsatisfiedLinkError: no jniopencv_core in java.library.path


Deepali Patel

Aug 11, 2016, 9:07:16 AM
to javacv, nish...@us.ibm.com, jaga...@us.ibm.com, dee...@us.ibm.com
I am trying to execute the Alexnet example for DeepLearning4j available at https://github.com/deeplearning4j/ImageNet-Example/blob/master/src/main/java/imagenet/ImageNetMain.java

I am facing following error:

Exception in thread "Thread-2" java.lang.UnsatisfiedLinkError: no jniopencv_core in java.library.path

        at java.lang.ClassLoader.loadLibrary(ClassLoader.java:1889)

        at java.lang.Runtime.loadLibrary0(Runtime.java:849)

        at java.lang.System.loadLibrary(System.java:1088)

        at org.bytedeco.javacpp.Loader.loadLibrary(Loader.java:726)

        at org.bytedeco.javacpp.Loader.load(Loader.java:501)

        at org.bytedeco.javacpp.Loader.load(Loader.java:418)

        at org.bytedeco.javacpp.opencv_core.<clinit>(opencv_core.java:10)

        at java.lang.Class.forName0(Native Method)

        at java.lang.Class.forName(Class.java:278)

        at org.bytedeco.javacpp.Loader.load(Loader.java:473)

        at org.bytedeco.javacpp.Loader.load(Loader.java:418)

        at org.bytedeco.javacpp.opencv_imgcodecs.<clinit>(opencv_imgcodecs.java:13)

        at org.datavec.image.loader.NativeImageLoader.asMatrix(NativeImageLoader.java:193)

        at imagenet.Utils.ImageNetRecordReader.next(ImageNetRecordReader.java:75)

        at org.deeplearning4j.datasets.datavec.RecordReaderDataSetIterator.next(RecordReaderDataSetIterator.java:171)

        at org.deeplearning4j.datasets.datavec.RecordReaderDataSetIterator.next(RecordReaderDataSetIterator.java:347)

        at org.deeplearning4j.datasets.datavec.RecordReaderDataSetIterator.next(RecordReaderDataSetIterator.java:46)

        at org.deeplearning4j.datasets.iterator.AsyncDataSetIterator$IteratorRunnable.run(AsyncDataSetIterator.java:294)

        at java.lang.Thread.run(Thread.java:745)

Caused by: java.lang.UnsatisfiedLinkError: /tmp/javacpp1176234266740952/libjniopencv_core.so: /tmp/javacpp1176234266740952/libopencv_core.so.3.1: symbol _ZTTNSt7__cxx1118basic_stringstreamIcSt11char_traitsIcESaIcEEE, version GLIBCXX_3.4.21 not defined in file libstdc++.so.6 with link time reference

        at java.lang.ClassLoader$NativeLibrary.load(Native Method)

        at java.lang.ClassLoader.loadLibrary1(ClassLoader.java:1968)

        at java.lang.ClassLoader.loadLibrary0(ClassLoader.java:1893)

        at java.lang.ClassLoader.loadLibrary(ClassLoader.java:1854)

        at java.lang.Runtime.load0(Runtime.java:795)

        at java.lang.System.load(System.java:1062)

        at org.bytedeco.javacpp.Loader.loadLibrary(Loader.java:709)

        ... 15 more

Exception in thread "main" java.lang.IllegalStateException: Unexpected state occurred for AsyncDataSetIterator: runnable died or no data available

        at org.deeplearning4j.datasets.iterator.AsyncDataSetIterator.next(AsyncDataSetIterator.java:226)

        at org.deeplearning4j.datasets.iterator.AsyncDataSetIterator.next(AsyncDataSetIterator.java:35)

        at org.deeplearning4j.datasets.iterator.MultipleEpochsIterator.next(MultipleEpochsIterator.java:99)

        at org.deeplearning4j.datasets.iterator.MultipleEpochsIterator.next(MultipleEpochsIterator.java:122)

        at org.deeplearning4j.datasets.iterator.MultipleEpochsIterator.next(MultipleEpochsIterator.java:36)

        at org.deeplearning4j.nn.multilayer.MultiLayerNetwork.fit(MultiLayerNetwork.java:1048)

        at imagenet.ImageNetStandardExample.trainModel(ImageNetStandardExample.java:81)

        at imagenet.ImageNetStandardExample.initialize(ImageNetStandardExample.java:49)

        at imagenet.ImageNetMain.run(ImageNetMain.java:125)

        at imagenet.ImageNetMain.main(ImageNetMain.java:195)


How can I resolve this issue? 


Regards

Deepali


Ali Can Albayrak

Aug 11, 2016, 4:44:20 PM
to javacv, nish...@us.ibm.com, jaga...@us.ibm.com, dee...@us.ibm.com

Maybe your libstdc++ (GLIBCXX) version is below 3.4.21? Did you check that?

Samuel Audet

Aug 15, 2016, 9:40:34 AM
to jav...@googlegroups.com
On 08/12/2016 05:44 AM, Ali Can Albayrak wrote:
> Maybe your libstdc++ (GLIBCXX) version is below 3.4.21? Did you check that?

Thanks for taking the time to help Ali!

Yes, it looks like that Linux distribution is too old. Deepali, please
upgrade to something like CentOS 6, Ubuntu 14.04, or more recent.

Samuel
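To check which GLIBCXX symbol versions the libstdc++ on the target machine actually provides, here is a quick sketch; the library path is an assumption that varies by distro (e.g. use `gcc -print-file-name=libstdc++.so.6` to locate it):

```shell
# List the GLIBCXX symbol versions exported by the system libstdc++.
# The path below is typical for 64-bit Debian/Ubuntu; adjust for your distro.
LIBSTDCXX=/usr/lib/x86_64-linux-gnu/libstdc++.so.6
if [ -e "$LIBSTDCXX" ]; then
  strings "$LIBSTDCXX" | grep '^GLIBCXX_' | sort -V | tail -n 1
else
  echo "libstdc++.so.6 not found at $LIBSTDCXX"
fi
```

If the highest version printed is below GLIBCXX_3.4.21 (first shipped with GCC 5), the bundled OpenCV binaries will fail to load exactly as in the stack trace above.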

Martin Deinhofer

Sep 6, 2017, 10:57:10 AM
to javacv
Hi,

When I try to execute an example using javacv 1.3.1 on an Ubuntu 16.04 LTS on a Raspberry Pi, I also get the UnsatisfiedLinkError.
Are there any special dependencies I have to install first?

Are you sure the libstdc++ (GLIBCXX) version must be that high (3.4.21)?

Thanks
Martin

Samuel Audet

Sep 6, 2017, 7:46:48 PM
to jav...@googlegroups.com, Martin Deinhofer
What does your pom.xml file look like?

Martin Deinhofer

Sep 11, 2017, 8:01:51 AM
to javacv


On Thursday, September 7, 2017 at 01:46:48 UTC+2, Samuel Audet wrote:
What does your pom.xml file look like?


I did not build it from source with Maven; I just took the JARs of javacv-1.3.1, added them as libraries to my program, and used them.
On Windows it runs without problems.
The strange thing is that when I first tried it on the RPi there was no UnsatisfiedLinkError: the OpenCVFrameGrabber could open the Logitech USB camera, but the frame read command failed with "read() Error: Could not read frame in start()."

Does this indicate that the grabber does not support the camera, or that it maybe needed more time to read the frame?

After the read error I installed Python and OpenCV, because I wanted to test whether the camera is generally supported on the RPi with OpenCV.
So I installed:

sudo apt-get install libopencv-dev python-opencv

The Python examples worked with my camera. But when I then tried my Java program again, I got the UnsatisfiedLinkError!

Any ideas what happened here?

As far as I understand, it should not be necessary to install OpenCV in addition to JavaCV, because its shared libraries are already contained in the JavaCV JARs.

Thanks
Martin
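The point above can be verified directly: the platform JARs ship the native OpenCV libraries inside them. A sketch (the JAR file name below is only an example of the naming scheme; use the one matching your platform):

```shell
# List the native libraries bundled inside a JavaCPP presets platform JAR.
# The file name is an example; substitute the JAR you actually downloaded.
JAR=opencv-3.2.0-1.3.3-linux-x86_64.jar
if [ -f "$JAR" ]; then
  unzip -l "$JAR" | grep '\.so'
else
  echo "JAR not found: $JAR"
fi
```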

Samuel Audet

Sep 11, 2017, 9:02:01 PM
to jav...@googlegroups.com, Martin Deinhofer
Could you try again with the JAR files from JavaCV Platform 1.3.3?



Martin Deinhofer

Sep 13, 2017, 10:54:09 AM
to javacv
OK, regarding the UnsatisfiedLinkError: this was obviously my mistake. I accidentally used an older version.

Now I tested it again with javacv-1.3.3, but unfortunately I get the read error in the OpenCVFrameGrabber:

Exception in thread "main" org.bytedeco.javacv.FrameGrabber$Exception: read() Error: Could not read frame in start().
at org.bytedeco.javacv.OpenCVFrameGrabber.start(OpenCVFrameGrabber.java:222)
at AsTeRICSDemo.main(AsTeRICSDemo.java:103)

I have a Raspberry Pi 3. The Logitech camera is plugged in via USB and can be opened with VLC.
I can also open and grab from the camera with the FFmpegFrameGrabber of JavaCV.

here to 100 seconds. But nevertheless the subsequent read fails.

Any ideas?

Samuel Audet

Sep 13, 2017, 6:02:57 PM
to jav...@googlegroups.com, Martin Deinhofer
If your camera outputs images in some compressed format, the binaries for OpenCV are not built with FFmpeg, so it might not be able to decompress the frames.

We could update the presets to link with FFmpeg without too much trouble...

But FFmpeg usually works better than OpenCV at grabbing from cameras, so if FFmpegFrameGrabber works, I'd recommend continuing to use that anyway.

Samuel

Martin Deinhofer

Sep 14, 2017, 4:20:35 PM
to Samuel Audet, jav...@googlegroups.com



Quoting Samuel Audet <samuel...@gmail.com>:

> If your camera outputs images in some compressed format, the binaries for
> OpenCV are not built with FFmpeg, so it might not be able to decompress
> the frames.


But I don't think it is a compressed format, because then the Python examples using the OpenCV binding would not work either, right?
I am also pretty sure the camera works with the OpenCVFrameGrabber on an x86 Linux machine. I have to test this again.

Generally, I would not mind FFmpeg; the problem is that, as far as I know, the ffmpeg preset also links in the GPL-only parts of FFmpeg.

Thanks
Martin
--
Martin Deinhofer, MSc
Research and Development, International Projects

University of Applied Sciences Technikum Wien
Hoechstaedtplatz 6, 1200 Vienna, Austria, Europe
T: +43 1 333 40 77-297, F: +43 1 333 40 77-99 297
E: martin.d...@technikum-wien.at
I: embsys.technikum-wien.at

Samuel Audet

Sep 14, 2017, 6:13:14 PM
to jav...@googlegroups.com, martin.d...@technikum-wien.at, Vin Baines
Vince (Cc'd) might have something more to say about video capture with OpenCV on ARM. Vince?

Also, looking at how the binaries for Python were built would help us figure
out which flags we need to have, but I'm pretty sure they were built with
FFmpeg enabled...

Samuel

Vin Baines

Sep 15, 2017, 2:14:23 AM
to Samuel Audet, martin.d...@technikum-wien.at, jav...@googlegroups.com
Have you got an example of your code I could use to reproduce the problem? I've got a couple of Logitech cams I can test with too.

I've been using both OpenCV and FFmpeg without trouble, so I'm interested in digging into the problem.

Martin Deinhofer

Sep 15, 2017, 4:02:44 PM
to javacv
Hi,


I created a runnable JAR together with the binaries of javacv-1.3.3 and then started it with:

java -jar <jarname> 8 0 320 240

for OpenCV and camera 0, and for FFmpeg:

java -jar <jarname> 9 /dev/video0 320 240

v.f.b...@gmail.com

Sep 18, 2017, 4:44:23 AM
to javacv
Hi,

Can you try it using

FFmpegFrameGrabber grabber = new FFmpegFrameGrabber("/dev/video0");

changing /dev/video0 to whatever your webcam shows up as on the Pi?

I'm testing with a Pi camera rather than USB right now, but I get the same error as you with the OpenCV grabber (maybe the 0 reference isn't finding a valid source); the line above works fine for me in your code.
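If the 0 index isn't finding a valid source, it can help to check what the camera enumerates as and whether it delivers raw or compressed frames. A sketch assuming the v4l-utils package is installed (/dev/video0 is an example path):

```shell
# Show capture devices and the pixel formats the first one offers.
command -v v4l2-ctl >/dev/null && v4l2-ctl --list-devices || echo "v4l2-ctl not installed"
command -v v4l2-ctl >/dev/null && v4l2-ctl -d /dev/video0 --list-formats || true

# Helper: classify a V4L2 pixel format name as raw or compressed; compressed
# output (e.g. MJPG) is what an OpenCV build without FFmpeg may fail to decode.
classify_format() {
  case "$1" in
    MJPG|JPEG|H264) echo "compressed" ;;
    *)              echo "raw" ;;
  esac
}
classify_format YUYV    # raw
classify_format MJPG    # compressed
```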

Martin Deinhofer

Sep 18, 2017, 11:20:17 AM
to javacv
Hi,

Yes, with FFmpegFrameGrabber and /dev/video0 it worked for me on the Pi as well. The picture is good but the latency is bad. Could this be because the JavaCPP presets are not optimized for the RPi 3 by default?
With the OpenCVFrameGrabber it still does not work. In the meantime I tested it on an Ubuntu 16.04 x64 guest in VirtualBox: the USB camera could be opened, but the images were corrupted. That may be because of VirtualBox, so it was not really a success either.

And again, as I said, the OpenCV Python binding on the Pi works pretty well, with low latency too.


Regarding the RaspiCam, how can you access it within JavaCV? As the RaspiCam does not have a v4l driver, how can it be interfaced?

So what could be the reason now? Is it really because Python also links in some functionality from FFmpeg?

Thanks in advance!
Martin

Vin Baines

Sep 18, 2017, 11:53:55 AM
to jav...@googlegroups.com
Hard to say. I did try a little experiment with enabling a bunch of Pi 3 optimisations, but it didn't make much difference - it might have been a flawed test, though.

There are a lot of questions there; the best way to answer them is to come up with experiments that would find out. Pi functionality is still not very well tested (or if it is, it would be nice to see some results). A year or so ago, I found not much difference between OpenCV and FFmpeg via JavaCV, and the raw framerate I could get was around 20-30 fps, as long as the resolution wasn't too high. That framerate dropped really quickly as soon as any operations were performed on each frame - with face detection it was more like 1-2 fps.

It would be interesting, if someone has time to benchmark, to see where that slowness comes in. Is it just that the Pi is a bit slower (though 1 GHz is still pretty quick for what I grew up using)? If Python can call effectively the same OpenCV methods for face recognition, using the same data set, camera, and device to keep it all fair, that would be pretty useful to know. If you've got time to put together equivalent sample code for Python and Java, it would be a great step toward baselining/improving Pi performance.

For using the RaspiCam, try this:
sudo modprobe bcm2835-v4l2 max_video_width=2592 max_video_height=1944

That should give you a device at /dev/video0, usable as if it were a normal USB camera.
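To confirm the module took effect, a quick check sketch (modprobe needs root; it is harmless if the module is already loaded):

```shell
# Load the V4L2 shim for the Pi camera, then check for the device node.
command -v sudo >/dev/null && \
  sudo -n modprobe bcm2835-v4l2 max_video_width=2592 max_video_height=1944 2>/dev/null || true

DEV=/dev/video0
if [ -e "$DEV" ]; then
  echo "camera available at $DEV"
else
  echo "camera not available at $DEV"
fi
```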


VIKAS SINHA

Sep 22, 2017, 10:01:54 AM
to javacv
Hi Team,

I am using the sample from https://github.com/bytedeco/javacv. I downloaded JavaCV 1.3.3 containing the dependent JARs and added them to the library. After that I downloaded opencv-3.2.0-vc14.exe; running this exe created the opencv folder containing the build and sources folders. Then I changed the VM options to -Djava.library.path="C:\opencv\build\java\x64".

I am still getting the same error below. I am using NetBeans IDE 8.2 on Windows 7. Please help.

Exception in thread "main" java.lang.UnsatisfiedLinkError: no jniopencv_core in java.library.path
at java.lang.ClassLoader.loadLibrary(ClassLoader.java:1867)
at java.lang.Runtime.loadLibrary0(Runtime.java:870)
at java.lang.System.loadLibrary(System.java:1122)
at org.bytedeco.javacpp.Loader.loadLibrary(Loader.java:976)
at org.bytedeco.javacpp.Loader.load(Loader.java:777)
at org.bytedeco.javacpp.Loader.load(Loader.java:684)
at org.bytedeco.javacpp.opencv_core.<clinit>(opencv_core.java:10)
at java.lang.Class.forName0(Native Method)
at java.lang.Class.forName(Class.java:348)
at org.bytedeco.javacpp.Loader.load(Loader.java:739)
at org.bytedeco.javacpp.Loader.load(Loader.java:684)
at org.bytedeco.javacpp.opencv_imgcodecs.<clinit>(opencv_imgcodecs.java:13)
at facerecognition.FaceRecognition.main(FaceRecognition.java:22)
Caused by: java.lang.UnsatisfiedLinkError: no opencv_imgproc320 in java.library.path
at java.lang.ClassLoader.loadLibrary(ClassLoader.java:1867)
at java.lang.Runtime.loadLibrary0(Runtime.java:870)
at java.lang.System.loadLibrary(System.java:1122)
at org.bytedeco.javacpp.Loader.loadLibrary(Loader.java:976)
at org.bytedeco.javacpp.Loader.load(Loader.java:765)
... 8 more


The code that I am trying to run is:

package facerecognition;

import static org.bytedeco.javacpp.opencv_core.*;
import static org.bytedeco.javacpp.opencv_imgproc.*;
import static org.bytedeco.javacpp.opencv_imgcodecs.*;
/**
 *
 * @author 
 */
public class FaceRecognition {

    /**
     * @param args the command line arguments
     */
    public static void main(String[] args) {
        // TODO code application logic here
        IplImage image = cvLoadImage("C:\\Users\\myname\\Desktop\\images\\face\\test.jpg");
        if (image != null) {
            cvSmooth(image, image);
            cvSaveImage("C:\\Users\\myname\\Desktop\\images\\face\\test1.jpg", image);
            cvReleaseImage(image);
        }
    }
    
}


Regards,
Viku

Samuel Audet

Oct 15, 2017, 9:07:42 PM
to javacv
Some JAR files are missing; please try adding all of them.
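For example, with every JAR from the javacv-platform 1.3.3 download collected in a libs directory (the directory name is hypothetical), the program can be launched without any -Djava.library.path setting, since JavaCPP extracts the native libraries from the JARs itself. A sketch that builds the launch command:

```shell
# Classpath entries are separated by ';' on Windows shells and ':' elsewhere.
case "$(uname)" in
  CYGWIN*|MINGW*|MSYS*) SEP=';' ;;
  *)                    SEP=':' ;;
esac

# All JARs from the javacv-platform download are assumed to sit in ./libs
CMD="java -cp \"libs/*${SEP}.\" facerecognition.FaceRecognition"
echo "$CMD"
```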


Vin Baines

Nov 5, 2017, 1:04:08 PM
to jav...@googlegroups.com
I was still curious about this, so I built OpenCV with Python 2.7 bindings on a Pi 3 and tested a pretty simple Python program that grabs the camera via the PiCamera library, passes the frame to OpenCV, and performs face detection on it.

If I keep the loop just grabbing the image, I get about 15 fps. As soon as I do face detection per frame, it drops to about 2.5 fps, so it seems on par with JavaCV. If anyone is getting better performance I'm keen to hear; it could be something to include in the Pi builds for JavaCV.


Martin Deinhofer

Dec 6, 2017, 8:45:07 AM
to javacv
Hi,

Could you also provide the Python script?
I think it really only makes sense to do comparisons if you use the same software and hardware versions. The camera you choose also has a big impact.

Also use the same settings for frame rate and resolution.

I have an RPi 3 with Raspbian Jessie, the Raspi cam, and a Logilink camera (UA0072A).



Vin Baines

Dec 6, 2017, 11:33:54 AM
to jav...@googlegroups.com
Yeah, sure. Hopefully I've not made any obvious error in the below. I'm not sure how much the camera would really make a difference, though. If you comment out the cv2.cvtColor and face_cascade.detectMultiScale lines, you're just grabbing camera frames as quickly as you can and making no further calculations on them, right? So if that's slow, there's an issue with the camera, but that gets me around 20 fps. That's using the RaspiCam; I know the USB bus isn't amazingly fast on the Pi, but I'd be surprised if there's much difference.

Once the frame is grabbed, camera performance goes out of the equation. If you uncomment cvtColor, the frame rate is halved straight away for me, which suggests computation. The cascade then drops me from 10 fps to about 1-2 fps; maybe a different classifier would be quicker.

# Python 2: grab frames from the Pi camera and run Haar-cascade face
# detection, printing the achieved frames-per-second figure once a second.
from picamera.array import PiRGBArray
from picamera import PiCamera
import time
import cv2

camera = PiCamera()
camera.resolution = (640, 480)
camera.framerate = 30
rawCapture = PiRGBArray(camera, size=(640, 480))

face_cascade = cv2.CascadeClassifier('/usr/local/share/OpenCV/haarcascades/haarcascade_frontalface_alt.xml')
start = time.time()
fps = 0

for frame in camera.capture_continuous(rawCapture, format="bgr", use_video_port=True):

  image = frame.array
  gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)  # detection works on grayscale
  faces = face_cascade.detectMultiScale(gray, 1.1, 5)
  print "Found " + str(len(faces)) + " face(s)"

  rawCapture.truncate(0)  # reset the capture buffer for the next frame
  fps = fps + 1

  currentTime = time.time()
  if currentTime - start > 1:
    print fps
    start = time.time()
    fps = 0

