make all failed, CPU_ONLY, Ubuntu 16.04


Ali Raza

Jul 13, 2018, 2:57:17 PM
to Caffe Users


alirazanaru@alirazanaru-SVE1511MFXS:~/.local/install/caffe-segnet$ make all
PROTOC src/caffe/proto/caffe.proto
CXX .build_release/src/caffe/proto/caffe.pb.cc
CXX src/caffe/layer_factory.cpp
CXX src/caffe/blob.cpp
CXX src/caffe/net.cpp
CXX src/caffe/util/cudnn.cpp
CXX src/caffe/util/benchmark.cpp
CXX src/caffe/util/im2col.cpp
CXX src/caffe/util/insert_splits.cpp
CXX src/caffe/util/io.cpp
CXX src/caffe/util/db.cpp
CXX src/caffe/util/math_functions.cpp
CXX src/caffe/util/upgrade_proto.cpp
CXX src/caffe/layers/silence_layer.cpp
CXX src/caffe/layers/threshold_layer.cpp
CXX src/caffe/layers/slice_layer.cpp
CXX src/caffe/layers/cudnn_conv_layer.cpp
CXX src/caffe/layers/dense_image_data_layer.cpp
CXX src/caffe/layers/im2col_layer.cpp
CXX src/caffe/layers/relu_layer.cpp
CXX src/caffe/layers/upsample_layer.cpp
CXX src/caffe/layers/pooling_layer.cpp
CXX src/caffe/layers/hdf5_output_layer.cpp
CXX src/caffe/layers/deconv_layer.cpp
CXX src/caffe/layers/infogain_loss_layer.cpp
CXX src/caffe/layers/bn_layer.cpp
CXX src/caffe/layers/spp_layer.cpp
CXX src/caffe/layers/cudnn_relu_layer.cpp
CXX src/caffe/layers/cudnn_pooling_layer.cpp
CXX src/caffe/layers/bnll_layer.cpp
CXX src/caffe/layers/memory_data_layer.cpp
CXX src/caffe/layers/cudnn_softmax_layer.cpp
CXX src/caffe/layers/hdf5_data_layer.cpp
CXX src/caffe/layers/prelu_layer.cpp
CXX src/caffe/layers/absval_layer.cpp
CXX src/caffe/layers/mvn_layer.cpp
CXX src/caffe/layers/sigmoid_layer.cpp
CXX src/caffe/layers/dropout_layer.cpp
CXX src/caffe/layers/dummy_data_layer.cpp
CXX src/caffe/layers/eltwise_layer.cpp
CXX src/caffe/layers/reshape_layer.cpp
CXX src/caffe/layers/lrn_layer.cpp
CXX src/caffe/layers/reduction_layer.cpp
CXX src/caffe/layers/hinge_loss_layer.cpp
CXX src/caffe/layers/accuracy_layer.cpp
CXX src/caffe/layers/data_layer.cpp
CXX src/caffe/layers/euclidean_loss_layer.cpp
CXX src/caffe/layers/window_data_layer.cpp
CXX src/caffe/layers/conv_layer.cpp
CXX src/caffe/layers/multinomial_logistic_loss_layer.cpp
CXX src/caffe/layers/power_layer.cpp
CXX src/caffe/layers/filter_layer.cpp
CXX src/caffe/layers/neuron_layer.cpp
CXX src/caffe/layers/sigmoid_cross_entropy_loss_layer.cpp
CXX src/caffe/layers/cudnn_tanh_layer.cpp
CXX src/caffe/layers/concat_layer.cpp
CXX src/caffe/layers/softmax_loss_layer.cpp
CXX src/caffe/layers/split_layer.cpp
CXX src/caffe/layers/exp_layer.cpp
CXX src/caffe/layers/argmax_layer.cpp
CXX src/caffe/layers/contrastive_loss_layer.cpp
src/caffe/layers/contrastive_loss_layer.cpp: In instantiation of ‘void caffe::ContrastiveLossLayer<Dtype>::Forward_cpu(const std::vector<caffe::Blob<Dtype>*>&, const std::vector<caffe::Blob<Dtype>*>&) [with Dtype = float]’:
src/caffe/layers/contrastive_loss_layer.cpp:118:1:   required from here
src/caffe/layers/contrastive_loss_layer.cpp:56:30: error: no matching function for call to ‘max(double, float)’
         Dtype dist = std::max(margin - sqrt(dist_sq_.cpu_data()[i]), Dtype(0.0));
                              ^
In file included from /usr/include/c++/5/algorithm:61:0,
                 from src/caffe/layers/contrastive_loss_layer.cpp:1:
/usr/include/c++/5/bits/stl_algobase.h:219:5: note: candidate: template<class _Tp> const _Tp& std::max(const _Tp&, const _Tp&)
     max(const _Tp& __a, const _Tp& __b)
     ^
/usr/include/c++/5/bits/stl_algobase.h:219:5: note:   template argument deduction/substitution failed:
src/caffe/layers/contrastive_loss_layer.cpp:56:30: note:   deduced conflicting types for parameter ‘const _Tp’ (‘double’ and ‘float’)
         Dtype dist = std::max(margin - sqrt(dist_sq_.cpu_data()[i]), Dtype(0.0));
                              ^
In file included from /usr/include/c++/5/algorithm:61:0,
                 from src/caffe/layers/contrastive_loss_layer.cpp:1:
/usr/include/c++/5/bits/stl_algobase.h:265:5: note: candidate: template<class _Tp, class _Compare> const _Tp& std::max(const _Tp&, const _Tp&, _Compare)
     max(const _Tp& __a, const _Tp& __b, _Compare __comp)
     ^
/usr/include/c++/5/bits/stl_algobase.h:265:5: note:   template argument deduction/substitution failed:
src/caffe/layers/contrastive_loss_layer.cpp:56:30: note:   deduced conflicting types for parameter ‘const _Tp’ (‘double’ and ‘float’)
         Dtype dist = std::max(margin - sqrt(dist_sq_.cpu_data()[i]), Dtype(0.0));
                              ^
Makefile:526: recipe for target '.build_release/src/caffe/layers/contrastive_loss_layer.o' failed
make: *** [.build_release/src/caffe/layers/contrastive_loss_layer.o] Error 1


Please, can someone help me?

Thanks in advance.


Ali Raza

Jul 14, 2018, 12:59:01 PM
to Caffe Users
This error is solved by replacing line 56 of contrastive_loss_layer.cpp with:

Dtype dist = std::max(margin - (float)sqrt(dist_sq_.cpu_data()[i]), Dtype(0.0));

With that change, make all completes successfully.
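For context, the underlying cause is C++ template argument deduction: margin is Dtype (float in this instantiation), while the unqualified sqrt call returns double, so std::max sees (double, float) and cannot deduce a single _Tp. Here is a minimal standalone sketch of the conflict and two equivalent fixes (the variable values are illustrative, not from caffe-segnet):

#include <algorithm>
#include <cmath>

int main() {
  float margin = 1.0f;
  double dist_sq = 0.25;  // stand-in for dist_sq_.cpu_data()[i]
  // float bad = std::max(margin - std::sqrt(dist_sq), 0.0f);   // error: max(double, float)
  float a = std::max(margin - (float)std::sqrt(dist_sq), 0.0f);  // fix 1: cast the operand
  float b = std::max<float>(margin - std::sqrt(dist_sq), 0.0f);  // fix 2: name the type explicitly
  return (a == b) ? 0 : 1;
}

Either form compiles; the explicit template argument avoids the cast and matches the pattern upstream Caffe later adopted for similar lines.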
However, a new error is raised during make runtest:

CXX src/caffe/test/test_power_layer.cpp
In file included from src/caffe/test/test_power_layer.cpp:12:0:
./include/caffe/test/test_gradient_check_util.hpp: In instantiation of ‘void caffe::GradientChecker<Dtype>::CheckGradientSingle(caffe::Layer<Dtype>*, const std::vector<caffe::Blob<Dtype>*>&, const std::vector<caffe::Blob<Dtype>*>&, int, int, int, bool) [with Dtype = float]’:
./include/caffe/test/test_gradient_check_util.hpp:208:26:   required from ‘void caffe::GradientChecker<Dtype>::CheckGradientEltwise(caffe::Layer<Dtype>*, const std::vector<caffe::Blob<Dtype>*>&, const std::vector<caffe::Blob<Dtype>*>&) [with Dtype = float]’
src/caffe/test/test_power_layer.cpp:78:5:   required from ‘void caffe::PowerLayerTest<TypeParam>::TestBackward(caffe::PowerLayerTest<TypeParam>::Dtype, caffe::PowerLayerTest<TypeParam>::Dtype, caffe::PowerLayerTest<TypeParam>::Dtype) [with TypeParam = caffe::CPUDevice<float>; caffe::PowerLayerTest<TypeParam>::Dtype = float]’
src/caffe/test/test_power_layer.cpp:167:3:   required from ‘void caffe::PowerLayerTest_TestPowerTwoScaleHalfGradient_Test<gtest_TypeParam_>::TestBody() [with gtest_TypeParam_ = caffe::CPUDevice<float>]’
src/caffe/test/test_power_layer.cpp:170:1:   required from here
./include/caffe/test/test_gradient_check_util.hpp:167:31: error: no matching function for call to ‘max(const double&, float)’
         Dtype scale = std::max(
                               ^
In file included from /usr/include/c++/5/algorithm:61:0,
                 from src/caffe/test/test_power_layer.cpp:1:
/usr/include/c++/5/bits/stl_algobase.h:219:5: note: candidate: template<class _Tp> const _Tp& std::max(const _Tp&, const _Tp&)
     max(const _Tp& __a, const _Tp& __b)
     ^
/usr/include/c++/5/bits/stl_algobase.h:219:5: note:   template argument deduction/substitution failed:
In file included from src/caffe/test/test_power_layer.cpp:12:0:
./include/caffe/test/test_gradient_check_util.hpp:167:31: note:   deduced conflicting types for parameter ‘const _Tp’ (‘double’ and ‘float’)
         Dtype scale = std::max(
                               ^
In file included from /usr/include/c++/5/algorithm:61:0,
                 from src/caffe/test/test_power_layer.cpp:1:
/usr/include/c++/5/bits/stl_algobase.h:265:5: note: candidate: template<class _Tp, class _Compare> const _Tp& std::max(const _Tp&, const _Tp&, _Compare)
     max(const _Tp& __a, const _Tp& __b, _Compare __comp)
     ^
/usr/include/c++/5/bits/stl_algobase.h:265:5: note:   template argument deduction/substitution failed:
In file included from src/caffe/test/test_power_layer.cpp:12:0:
./include/caffe/test/test_gradient_check_util.hpp:167:31: note:   deduced conflicting types for parameter ‘const _Tp’ (‘double’ and ‘float’)
         Dtype scale = std::max(
                               ^
Makefile:526: recipe for target '.build_release/src/caffe/test/test_power_layer.o' failed
make: *** [.build_release/src/caffe/test/test_power_layer.o] Error 1
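This is the same double/float deduction conflict, this time at line 167 of ./include/caffe/test/test_gradient_check_util.hpp: fabs promotes its result to double, while the other std::max argument is Dtype (float here). The log truncates the expression, but assuming that line matches the gradient checker in upstream Caffe of that era (an assumption; verify against your checkout), the same technique should resolve it:

// Hypothetical patch for ./include/caffe/test/test_gradient_check_util.hpp:167.
// The identifiers computed_gradient/estimated_gradient are taken from upstream
// Caffe and may differ in caffe-segnet.
// Before (inner max of the fabs(...) results is double, Dtype(1.) is float):
//   Dtype scale = std::max(
//       std::max(fabs(computed_gradient), fabs(estimated_gradient)), Dtype(1.));
// After: name the template argument so both operands convert to Dtype.
Dtype scale = std::max<Dtype>(
    std::max(fabs(computed_gradient), fabs(estimated_gradient)), Dtype(1.));

Expect more errors of this shape in other test files when building old Caffe forks with g++ 5; applying the same cast or an explicit std::max<Dtype>/std::min<Dtype> at each reported line should let make runtest compile.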

Ali Raza

Jul 15, 2018, 5:15:49 AM
to Caffe Users
@Nikiforos Pittaras, can you please help me solve this problem?

I am using caffe-segnet from https://github.com/alexgkendall/caffe-segnet.

Please help me.