Linking CXX shared library ../../lib/libcaffe.dylib and lib/libcaffe.1.0.0-rc3.dylib


Deniz

Feb 15, 2017, 10:11:22 PM
to Caffe Users
When I try make all, I keep getting the error below. I think there is a linking issue between libcaffe.dylib and libcaffe.1.0.0-rc3.dylib.
I am using OSX El Capitan 10.11.6.

I tried various solutions. The following are some suggestions that seemed relevant, but that I didn't understand how to apply:

1. I don't know how to apply this one. The Makefile I got when I cloned Caffe from git is very different from the one I have now, which is apparently generated by cmake. I am running cmake $CAFFE_ROOT and then make all (see the rough sequence I pasted after this list). Is this wrong?

On OSX El Capitan, I added one line to the Makefile after the existing LDFLAGS. No need to run @dougalsutherland's script:

LDFLAGS += $(foreach librarydir,$(LIBRARY_DIRS),-L$(librarydir)) $(PKG_CONFIG) \
                $(foreach library,$(LIBRARIES),-l$(library))
LDFLAGS += -Wl,-rpath,/usr/local/cuda/lib

I can make a PR, but I'm not sure how this change would affect other OSes.


2. This commit feels relevant; however, it is already part of the code I'm building, and I still get the error.


3. I tried this, but it made no difference:
After make test in $CAFFE_ROOT:
cp -a .build_release/lib/. /usr/local/lib/
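
For reference, this is roughly what I'm running (just a sketch of my own steps; $CAFFE_ROOT is wherever I cloned the repo):

cd $CAFFE_ROOT
mkdir build && cd build   # out-of-source build, as the CMake instructions seem to suggest
cmake ..                  # this generates the Makefile that replaced the original one
make all                  # the link step below is where this fails

# Note: .build_release/lib is where the plain-Makefile build puts its output;
# the cmake build appears to put its libraries under build/lib instead,
# so for suggestion 3 the equivalent would presumably be: cp -a build/lib/. /usr/local/lib/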

Any help would be appreciated.


Linking CXX shared library ../../lib/libcaffe.dylib
Undefined symbols for architecture x86_64:
  "_H5LTfind_dataset", referenced from:
      caffe::SGDSolver<float>::RestoreSolverStateFromHDF5(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&) in sgd_solver.cpp.o
      caffe::SGDSolver<double>::RestoreSolverStateFromHDF5(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&) in sgd_solver.cpp.o
      void caffe::hdf5_load_nd_dataset_helper<float>(long long, char const*, int, int, caffe::Blob<float>*) in hdf5.cpp.o
      void caffe::hdf5_load_nd_dataset_helper<double>(long long, char const*, int, int, caffe::Blob<double>*) in hdf5.cpp.o
  "_H5LTget_dataset_info", referenced from:
      void caffe::hdf5_load_nd_dataset_helper<float>(long long, char const*, int, int, caffe::Blob<float>*) in hdf5.cpp.o
      void caffe::hdf5_load_nd_dataset_helper<double>(long long, char const*, int, int, caffe::Blob<double>*) in hdf5.cpp.o
      caffe::hdf5_load_string(long long, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&) in hdf5.cpp.o
  "_H5LTget_dataset_ndims", referenced from:
      void caffe::hdf5_load_nd_dataset_helper<float>(long long, char const*, int, int, caffe::Blob<float>*) in hdf5.cpp.o
      void caffe::hdf5_load_nd_dataset_helper<double>(long long, char const*, int, int, caffe::Blob<double>*) in hdf5.cpp.o
  "_H5LTmake_dataset_double", referenced from:
      void caffe::hdf5_save_nd_dataset<double>(long long, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, caffe::Blob<double> const&, bool) in hdf5.cpp.o
  "_H5LTmake_dataset_float", referenced from:
      void caffe::hdf5_save_nd_dataset<float>(long long, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, caffe::Blob<float> const&, bool) in hdf5.cpp.o
  "_H5LTmake_dataset_int", referenced from:
      caffe::hdf5_save_int(long long, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int) in hdf5.cpp.o
  "_H5LTmake_dataset_string", referenced from:
      caffe::hdf5_save_string(long long, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&) in hdf5.cpp.o
  "_H5LTread_dataset_double", referenced from:
      void caffe::hdf5_load_nd_dataset<double>(long long, char const*, int, int, caffe::Blob<double>*) in hdf5.cpp.o
  "_H5LTread_dataset_float", referenced from:
      void caffe::hdf5_load_nd_dataset<float>(long long, char const*, int, int, caffe::Blob<float>*) in hdf5.cpp.o
  "_H5LTread_dataset_int", referenced from:
      caffe::hdf5_load_int(long long, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&) in hdf5.cpp.o
  "_H5LTread_dataset_string", referenced from:
      caffe::hdf5_load_string(long long, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&) in hdf5.cpp.o
ld: symbol(s) not found for architecture x86_64
clang: error: linker command failed with exit code 1 (use -v to see invocation)
make[2]: *** [lib/libcaffe.1.0.0-rc3.dylib] Error 1
make[1]: *** [src/caffe/CMakeFiles/caffe.dir/all] Error 2
make: *** [all] Error 2
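
If I understand the error correctly, the missing _H5LT* symbols come from HDF5's high-level library (libhdf5_hl), so it looks like that library is either not installed or not being passed to the linker. A rough way to check (assuming HDF5 was installed with Homebrew under /usr/local; the exact path is a guess on my part):

# does the Homebrew hdf5 package provide the high-level library?
brew list hdf5 | grep hdf5_hl

# does that library actually define one of the symbols the linker wants?
nm -g /usr/local/lib/libhdf5_hl.dylib | grep H5LTfind_dataset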





Makefile.config:

# Contributions simplifying and improving our build system are welcome!

# cuDNN acceleration switch (uncomment to build with cuDNN).
# USE_CUDNN := 1

# CPU-only switch (uncomment to build without GPU support).
CPU_ONLY := 1

# uncomment to disable IO dependencies and corresponding data layers
# USE_OPENCV := 0
# USE_LEVELDB := 0
# USE_LMDB := 0

# uncomment to allow MDB_NOLOCK when reading LMDB files (only if necessary)
# You should not set this flag if you will be reading LMDBs with any
# possibility of simultaneous read and write
# ALLOW_LMDB_NOLOCK := 1

# Uncomment if you're using OpenCV 3
# OPENCV_VERSION := 3

# To customize your choice of compiler, uncomment and set the following.
# N.B. the default for Linux is g++ and the default for OSX is clang++
# CUSTOM_CXX := g++

# CUDA directory contains bin/ and lib/ directories that we need.
# CUDA_DIR := /usr/local/cuda
# On Ubuntu 14.04, if cuda tools are installed via
# "sudo apt-get install nvidia-cuda-toolkit" then use this instead:
# CUDA_DIR := /usr

# CUDA architecture setting: going with all of them.
# For CUDA < 6.0, comment the *_50 lines for compatibility.
#CUDA_ARCH := -gencode arch=compute_20,code=sm_20 \
# -gencode arch=compute_20,code=sm_21 \
# -gencode arch=compute_30,code=sm_30 \
# -gencode arch=compute_35,code=sm_35 \
# -gencode arch=compute_50,code=sm_50 \
# -gencode arch=compute_50,code=compute_50

# BLAS choice:
# atlas for ATLAS (default)
# mkl for MKL
# open for OpenBlas
BLAS := atlas
# Custom (MKL/ATLAS/OpenBLAS) include and lib directories.
# Leave commented to accept the defaults for your choice of BLAS
# (which should work)!
BLAS_INCLUDE := /System/Library/Frameworks/Accelerate.framework/Versions/A/Frameworks/vecLib.framework/Versions/A/Headers
BLAS_LIB := /System/Library/Frameworks/Accelerate.framework/Versions/A/Frameworks/vecLib.framework/Versions/A

# Homebrew puts openblas in a directory that is not on the standard search path
# BLAS_INCLUDE := $(shell brew --prefix openblas)/include
# BLAS_LIB := $(shell brew --prefix openblas)/lib

# This is required only if you will compile the matlab interface.
# MATLAB directory should contain the mex binary in /bin.
# MATLAB_DIR := /usr/local
# MATLAB_DIR := /Applications/MATLAB_R2012b.app

# NOTE: this is required only if you will compile the python interface.
# We need to be able to find Python.h and numpy/arrayobject.h.
PYTHON_INCLUDE := /usr/include/python2.7 \
/usr/lib/python2.7/dist-packages/numpy/core/include
# Anaconda Python distribution is quite popular. Include path:
# Verify anaconda location, sometimes it's in root.
# ANACONDA_HOME := $(HOME)/anaconda
# PYTHON_INCLUDE := $(ANACONDA_HOME)/include \
# $(ANACONDA_HOME)/include/python2.7 \
# $(ANACONDA_HOME)/lib/python2.7/site-packages/numpy/core/include \

# Uncomment to use Python 3 (default is Python 2)
# PYTHON_LIBRARIES := boost_python3 python3.5m
# PYTHON_INCLUDE := /usr/include/python3.5m \
#                 /usr/lib/python3.5/dist-packages/numpy/core/include

# We need to be able to find libpythonX.X.so or .dylib.
PYTHON_LIB := /usr/lib
# PYTHON_LIB := $(ANACONDA_HOME)/lib

# Homebrew installs numpy in a non standard path (keg only)
# PYTHON_INCLUDE += $(dir $(shell python -c 'import numpy.core; print(numpy.core.__file__)'))/include \
# /usr/local/Cellar/python/2.7.12/Frameworks/Python.framework/Versions/2.7/include/python2.7
# PYTHON_LIB += $(shell brew --prefix numpy)/lib \
# /usr/local/Cellar/python/2.7.12/Frameworks/Python.framework/Versions/2.7/lib

# Uncomment to support layers written in Python (will link against Python libs)
WITH_PYTHON_LAYER := 1

# Whatever else you find you need goes here.
INCLUDE_DIRS := $(PYTHON_INCLUDE) /usr/local/include /usr/local/cuda/include /usr/local/cuda/targets/x86_64-linux/include/thrust/system/cuda/detail/
LIBRARY_DIRS := $(PYTHON_LIB) /usr/local/lib /usr/lib /usr/local/cuda/lib



# If Homebrew is installed at a non standard location (for example your home directory) and you use it for general dependencies
# INCLUDE_DIRS += $(shell brew --prefix)/include
# LIBRARY_DIRS += $(shell brew --prefix)/lib

# Uncomment to use `pkg-config` to specify OpenCV library paths.
# (Usually not necessary -- OpenCV libraries are normally installed in one of the above $LIBRARY_DIRS.)
# USE_PKG_CONFIG := 1

# N.B. both build and distribute dirs are cleared on `make clean`
BUILD_DIR := build
DISTRIBUTE_DIR := distribute

# Uncomment for debugging. Does not work on OSX due to https://github.com/BVLC/caffe/issues/171
# DEBUG := 1

# The ID of the GPU that 'make runtest' will use to run unit tests.
TEST_GPUID := 0

# enable pretty build (comment to see full commands)
Q ?= @
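
One more thing I'm not sure about: as far as I can tell, the cmake build does not read Makefile.config at all, so the settings above may only apply to the plain-Makefile build. If that's the case, I assume the equivalent switches go on the cmake command line instead, roughly like this (option names are my best guess from Caffe's CMake files):

cd $CAFFE_ROOT/build
cmake -DCPU_ONLY=ON -DBLAS=Atlas ..
make all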