Build failed at parallel.cpp


wen dai

Nov 17, 2019, 9:09:56 AM
to Caffe Users
Hi there, I've just pulled the latest Caffe repository and intended to compile caffe-cuda. My NVIDIA driver and OpenCV were all set up, but when I tried to compile Caffe, I got an error like:

CXX src/caffe/parallel.cpp
src/caffe/parallel.cpp: In instantiation of ‘caffe::P2PSync<Dtype>::P2PSync(boost::shared_ptr<caffe::Solver<Dtype> >, caffe::P2PSync<Dtype>*, const caffe::SolverParameter&) [with Dtype = float]’:
src/caffe/parallel.cpp:478:1:   required from here
src/caffe/parallel.cpp:257:5: error: invalid new-expression of abstract class type ‘caffe::WorkerSolver<float>’
     solver_.reset(new WorkerSolver<Dtype>(param, root_solver.get()));
     ^
In file included from ./include/caffe/parallel.hpp:50:0,
                 from ./include/caffe/caffe.hpp:50,
                 from src/caffe/parallel.cpp:49:
./include/caffe/solver.hpp:207:7: note:   because the following virtual functions are pure within ‘caffe::WorkerSolver<float>’:
 class WorkerSolver : public Solver<Dtype> {
       ^
./include/caffe/solver.hpp:159:16: note: void caffe::Solver<Dtype>::PrintLearningRate() [with Dtype = float]
   virtual void PrintLearningRate() = 0;
                ^
src/caffe/parallel.cpp: In instantiation of ‘caffe::P2PSync<Dtype>::P2PSync(boost::shared_ptr<caffe::Solver<Dtype> >, caffe::P2PSync<Dtype>*, const caffe::SolverParameter&) [with Dtype = double]’:
src/caffe/parallel.cpp:478:1:   required from here
src/caffe/parallel.cpp:257:5: error: invalid new-expression of abstract class type ‘caffe::WorkerSolver<double>’
     solver_.reset(new WorkerSolver<Dtype>(param, root_solver.get()));
     ^
In file included from ./include/caffe/parallel.hpp:50:0,
                 from ./include/caffe/caffe.hpp:50,
                 from src/caffe/parallel.cpp:49:
./include/caffe/solver.hpp:207:7: note:   because the following virtual functions are pure within ‘caffe::WorkerSolver<double>’:
 class WorkerSolver : public Solver<Dtype> {
       ^
./include/caffe/solver.hpp:159:16: note: void caffe::Solver<Dtype>::PrintLearningRate() [with Dtype = double]
   virtual void PrintLearningRate() = 0;
                ^
Makefile:815: recipe for target '.build_release/src/caffe/parallel.o' failed
make: *** [.build_release/src/caffe/parallel.o] Error 1


It looks like something is wrong with the source code? Could you help me with that?
P.S. My Makefile.config was modified as follows:
# cuDNN acceleration switch (uncomment to build with cuDNN).
# USE_CUDNN := 1

# CPU-only switch (uncomment to build without GPU support).
# CPU_ONLY := 1

# USE_MKL2017_AS_DEFAULT_ENGINE := 1
# or put this at the top of your train_val.prototxt or solver.prototxt file:
# engine: "MKL2017" 
# or use this option with caffe tool:
# -engine "MKL2017"

USE_MKLDNN_AS_DEFAULT_ENGINE := 1
# Put this at the top of your train_val.prototxt or solver.prototxt file:
# engine: "MKLDNN" 
# or use this option with caffe tool:
# -engine "MKLDNN"

# uncomment to disable IO dependencies and corresponding data layers
USE_OPENCV := 1
USE_LEVELDB := 0
# USE_LMDB := 0

# uncomment to allow MDB_NOLOCK when reading LMDB files (only if necessary)
# You should not set this flag if you will be reading LMDBs with any
# possibility of simultaneous read and write
# ALLOW_LMDB_NOLOCK := 1

# Uncomment if you're using OpenCV 3
OPENCV_VERSION := 3

# To customize your choice of compiler, uncomment and set the following.
# N.B. the default for Linux is g++ and the default for OSX is clang++
# CUSTOM_CXX := g++
# If you use the Intel compiler, you can uncomment the following to enable a static build
# ICC_STATIC_BUILD := 1

# If you use Intel compiler define a path to newer boost if not used
# already. 
# BOOST_ROOT := 

# Use remove batch norm optimization to boost inference
DISABLE_BN_FOLDING := 0

# Use Conv + Relu fusion to boost inference
DISABLE_CONV_RELU_FUSION := 0

# Use Bn + ReLU fusion to boost inference
DISABLE_BN_RELU_FUSION := 0

# Use Conv + Concat  fusion to boost inference.
ENABLE_CONCAT_FUSION := 0

# Use Conv + Eltwise + Relu layer fusion to boost inference.
DISABLE_CONV_SUM_FUSION := 0

# Use sparse to boost inference.
DISABLE_SPARSE := 0

# Use fc/relu fusion to boost inference.
DISABLE_FC_RELU_FUSION := 0

# Intel(R) Math Kernel Library for Deep Neural Networks (Intel(R) MKL-DNN) 
# Uncomment to disable MKLDNN download by customized setting
# DISABLE_MKLDNN_DOWNLOAD := 1

# Intel(r) Machine Learning Scaling Library (uncomment to build
# with MLSL for multi-node training)
# USE_MLSL := 1

# CUDA directory contains bin/ and lib/ directories that we need.
CUDA_DIR := /usr/local/cuda
# On Ubuntu 14.04, if cuda tools are installed via
# "sudo apt-get install nvidia-cuda-toolkit" then use this instead:
# CUDA_DIR := /usr

# CUDA architecture setting: going with all of them.
# For CUDA < 6.0, comment the *_50 lines for compatibility.
CUDA_ARCH := -gencode arch=compute_30,code=sm_30 \
     -gencode arch=compute_35,code=sm_35 \
     -gencode arch=compute_50,code=sm_50 \
     -gencode arch=compute_50,code=compute_50

# BLAS choice:
# atlas for ATLAS (default)
# mkl for MKL
# open for OpenBlas
BLAS := mkl
# Custom (MKL/ATLAS/OpenBLAS) include and lib directories.
# Leave commented to accept the defaults for your choice of BLAS
# (which should work)!
# BLAS_INCLUDE := /path/to/your/blas
# BLAS_LIB := /path/to/your/blas

# Homebrew puts openblas in a directory that is not on the standard search path
# BLAS_INCLUDE := $(shell brew --prefix openblas)/include
# BLAS_LIB := $(shell brew --prefix openblas)/lib

# This is required only if you will compile the matlab interface.
# MATLAB directory should contain the mex binary in /bin.
# MATLAB_DIR := /usr/local
# MATLAB_DIR := /Applications/MATLAB_R2012b.app

SERIAL_HDF5_INCLUDE := /usr/include/hdf5/serial/

# NOTE: this is required only if you will compile the python interface.
# We need to be able to find Python.h and numpy/arrayobject.h.
# PYTHON_INCLUDE := /usr/include/python2.7 \
#/usr/lib/python2.7/dist-packages/numpy/core/include
# Anaconda Python distribution is quite popular. Include path:
# Verify anaconda location, sometimes it's in root.
ANACONDA_HOME := /opt/conda
PYTHON_INCLUDE := $(ANACONDA_HOME)/include \
$(ANACONDA_HOME)/include/python3.6m \
$(ANACONDA_HOME)/lib/python3.6/site-packages/numpy/core/include

# Uncomment to use Python 3 (default is Python 2)
PYTHON_LIBRARIES := boost_python3 python3.6
# PYTHON_INCLUDE := /usr/include/python3.5m \
#                 /usr/lib/python3.5/dist-packages/numpy/core/include

# We need to be able to find libpythonX.X.so or .dylib.
PYTHON_LIB := /usr/lib
# PYTHON_LIB := $(ANACONDA_HOME)/lib

# Homebrew installs numpy in a non standard path (keg only)
# PYTHON_INCLUDE += $(dir $(shell python -c 'import numpy.core; print(numpy.core.__file__)'))/include
# PYTHON_LIB += $(shell brew --prefix numpy)/lib

# Uncomment to support layers written in Python (will link against Python libs)
WITH_PYTHON_LAYER := 1

# Whatever else you find you need goes here.
INCLUDE_DIRS := $(PYTHON_INCLUDE) ${SERIAL_HDF5_INCLUDE} /usr/local/include
LIBRARY_DIRS := $(PYTHON_LIB) /usr/local/lib /usr/lib

# If Homebrew is installed at a non standard location (for example your home directory) and you use it for general dependencies
# INCLUDE_DIRS += $(shell brew --prefix)/include
# LIBRARY_DIRS += $(shell brew --prefix)/lib

# Uncomment to use `pkg-config` to specify OpenCV library paths.
# (Usually not necessary -- OpenCV libraries are normally installed in one of the above $LIBRARY_DIRS.)
# USE_PKG_CONFIG := 1

# N.B. both build and distribute dirs are cleared on `make clean`
BUILD_DIR := build
DISTRIBUTE_DIR := distribute

# Uncomment to enable training performance monitoring
# PERFORMANCE_MONITORING := 1

# Uncomment for debugging. Does not work on OSX due to https://github.com/BVLC/caffe/issues/171
# DEBUG := 1

# Uncomment to disable OpenMP support.
# USE_OPENMP := 0

# The ID of the GPU that 'make runtest' will use to run unit tests.
TEST_GPUID := 0

# enable pretty build (comment to see full commands)
Q ?= @
