Caffe Users
Labels
00-classification
1
17
1D
200-classes
2015
23
3D
3D-Caffe
3Dconv
5
572
AMD
ARM
AWS
AlexNet
Apache
Arxiv
Axis
Benchmark
Bu
Buld
C3D
CIFAR
CIFAR-100
CNN
COMPATIBILITY
CPU
Caffe-cpu
CaffeToolbox
Caltech101
CascadeClassifier
Coffe
Completed
ConvLSTM
Cplusplus
CppAPI
CreatorRegistry
CuDNN
D
DARTS
DB
DSO
DataLayer
DataTransformer
Deep-network
DeepDream
DeepLab
Deeplabv2
Dell
Deploy
Detectron
Duplicate
Ensemble
EuclideanLoss
FCN
FCN32
FCRN
FINETUNING
Fedora
FlowNet
FreeBSD
Fully
GNURADIO
GPU_Mode
GSM
GSOC
GTX-570
GTX980
Gaussian
Generate_train_data
Great
H
HDF5Data
HDF5DataLayer
HandHeldSDR
Hi
IDE
ILSVRC13
ILSVRC2013
INRIA
ImageNet
InfogainLossLayer
Initilization
Intel
Interactivity
K40
L1-norm
L2-norm
LD_LIBARAY_PATH
LRCN
LRN
LSTM
LTE
LabView
LibUSRP
MKL
MPI
MemoryData
MemoryDataLayer
MobileNet-SSD
Multiscale
Mutex
NIN
NIST
NLP
NYUD-v2
Net
Net-Surgery
NetSpec
OCR
OS10
OSX
OverFeat
Overfitting
PYTHONPATH
Parallel
PedestrianDetection
Przemek
Pyramid
R
RGBD
RHEL
ResNet
SSD
SVM
SegNet
Serial
SetupParameters
Sharing
Simulink
Simulink-USRP
SliceLayer
SoftmaxWithLoss
SplitLayer
T
USRP1
V2
VOC_dataset
XavierFiller
Xcode
_ULx86_64_step
_caffe
a
aborted
accuracy
action
activation
adadelta
administration
admm
adreno
adversarial
all
am
amazon
amd64
amdgpu-pro
anaconda
android
annotation
application
apt
argmax
atlas
auc
audio
auto-encoder
autoencoder
average
azure
background
backward
balance
balancing
batch
batch_size
batchnorm
beginners
bgr
bias
bin
binaryproto
blas
blob
blobs
blocking_queue
boos
boost
boost_python
bounding-box-image
branch
broadcast
bug
build
buster
c
cafe
caffa
caffe
caffe-16bit
caffe-64bit
caffe-8bit
caffe-binary
caffe-dilation
caffe-fcn
caffe-gemmlowp
caffe-installation
caffe-master
caffe-model
caffe-parallel
caffe-recurrent
caffe-reference
caffe-segnet
caffe-training-log
caffe-users
caffe-windows
caffe2
caffe_root
caffemodel
caffenet
caffezoo
callbacks
camera
camme
cassandra
categories
cats
cblas
centos
chaining
change
channel-pooling
channels
checker
cifar-10
cifar10
class
class5
classification
classifier
cloud
cls
cmake
cmath
cmd
cmdcaffe
code
cold-brew
colorization
commandline
compilation
compile
compiling
compression
compressison
compute_image_mean
concat
confusion-matrix
const
constant
constant_loss
contrastive
control
convergence
convert
convert_annoset
convert_imagenet
convert_imageset
convex
convnet
convo
convolution
convolution_layer
convolutionm
convoluton
copy
correct
cpp_classification
cpu_only
crash
create_imagenet
crfasrnn
crop
crop-layer
cropping
cross-validation
csv
cuda
curl
curve
custom
cvpr
cvpr15
data
data-augmentation
data-imbalance
data-layer
data_transformer
database
dataset
datastax
datum
debian
debian-9
debug
deconvolution
deep
deep-learning
deepVis
deepnetworkscascade
default
demo
denoising
dense
dependencies
deployment
derivated
derivatives
detection
deviation
dies
different
digits
directory
display
distillation
distributed
distributing
divide
dll
dmb-file
dnn
docker
documentation
download
draw
draw_net
drawback
dropout
droput
dual
dyld
dyldLibrary
ec2
efficient
eltwise
eltwise-layer-test
elu
end2end
endian
engine
entropy
err
error
eucli
evaluation
example
examples
exception
excitation-backprop
export
expresso
f
face
face-detection
face-verification
facepoints
failure
fast
fast-rcnn
faster
fasterrcnn
fcn8
feature
feature-extraction
feature-selection
feature_maps
feedforward
feeding-data
figure
files
filler
filter
fine
fine-tuning
finetuen
finetune
fixed
flags
flask
flickr
float
float_labels
floating
food
format
forward_pass
foward
frames
free
fully-connected
fullyConvolutional
gamma
gcc4
gcc5
gcc6
gdb
get_output
gf
gflag
gflags
gist
global
glog
google
googlenet
gou
gpu
gradient
gray
grayscale
groundtruth
grouping
gtx-1080-ti
gui
h5-files
h5f
hackernews
hadoop
hardware
hdf5
hdf5withLMDB
header
hedging
help
heroku
homebrew
hopfield
how-to
hyperparameters
i
ia
ignore_label
im2col
ima
image
image-tagging
imageCLEF
image_data
imagenet_mean
images
imagine
imbalance
implementation
import
import-error
improvement
inception
index
infogain
ingredients
inheritance
init
initial
initialization
inner_product
input
input_output
inputdata
inputfiles
inputs
ins
install
installaion
installation
instance
instruction
integer
intel-caffe
invalid
io
io_inibackup
ip
ipython
isnta
iter_size
iteration
java
jni
job
jupyter
keras
keras-users
kernel
key
killed
kullback-leibler
l
label
labels
landmark
large
large_output_classes
large_vocabulary
lasagne
layer
layer-registry
layerregistry
layers
lboost_thread
ld
ldb
leak
learning
learning-rate
learning_curve
learning_rate
lenet
leveldb
libcaffe
libprotobuf
libsvm
libunwind
license
linking
linux
little
lmbd
lmdb
loading
localization
log
logging
loss
loss-function
loss-layer
loss_big
loss_weight
loss_weights
lr
lr_policy
lsvrc12
mac
macbook
machine
machine-learning
macos
make
makeall
makefile
mammography
manjaro
map
mask
matcaffe
math_functions
matlab
matrices
matrix
matrix_predictions
max-out
maxpool
mdb
mean
memory
memory-layer
memory_data
mentor
mex
mexw64
mirror
missing
missing-dependencies
mlsl
mnist
model
modelselection
modified_model
mojave
mono
motion
multi
multi-channel
multi-core
multi-gpus
multi-label
multi-node
multi-target
multiask
multilabel
multimodal
multiple
multiple-data
multiple-input
multiple-networks
multipleloss
multiprocessing
multitask
multitasking
multithreading
multiview
mutable
muti-hdd
mvn
mxnet
my
name
nan
natural_language
ndk
network
neural
new
newlayer
newnode
ninja
no
noise
non-encoded
non-image
normalization
notebook
novaoznaka
novice
nsight
ntop
num_inputs
num_output
numba
number
numpy
nvidia
nvidia-settings
o
object
openBlas
opencv
optimization
optimus
output
overlapping
oversampling
p
p100
package
padding
parame
parameter
parameter_extraction
params
parser
parsing
pascal
patches
path
patterns
performance
phase
pixel
pixelwise
plateau
please-respond
plot
point
pointcloud
pooling
pooling-dimension
pose
posenet
pre
pre-computed
pre-trained
precision
predicted
prediction
prelu
preprocessing
pretrained
printouts
probabilities
problem
processing
produced
profile
protobuf
protocbuffer
prototxt
pruning
publications
pull
py-faster-rcnn
pycaffe
pyhon
pypy
python
python-layer
python2
python3
pythreadstate_get
qt
qualcomm
quantization
question
quiet
quit
r-cnn
random
raspberrypie
rc3
rcnn
reader
reading
real-time
reboot
recall
recipe
recognition
recurrent
recursive
reductionlayer
redudant
reference
regex
regfreeA
regression
regrs
regularizor
release
request
requirements
rescale
research
reshape
resizing
rest
result
return
rfcn
rgb
ristretto
robotics
roc
roi
ros
rotation
runtest
saliency
same
samples
satellite
save
scale
scaleLayer
scaling
scnn
scope
scource
scripts
seg
segfault
segmentation
select
selective-layers
semantic
semanticsegmentation
sensor
sequence
set
setenv
sgd
sgmentation_fault
shape
shared
shared_libs
shelhamer
shell
showboxes
shuffle
siamese
sierra
sigmoid
similarity_search
simple
single-label
site-packages
size
skimage
skimcaffe
sklearn
slice
slice_size
slicing
slow
snapdragon
snappy
snapshot
softmax
softmax_loss_layer
solved
solver
solverstate
spark
sparse
spatial
specific
speech
speed
spp-net
square
squeezenet
standard
std
step-by-step
studio
style
subtract
subtraction
suffix
support
suppress
symbol_database
synsetword
synthetic
tags
tensorflow
terminal
test
testing
text-spotting
textspotting
textures
theano
theano-users
theory
threshold
time
time-series
top-1
top-5
tracking
train
train1k
train_val
training
transfer
transfer-learning
transform_param
transformer
triplet
trouble
troubleshooting
try
tuning
tutorial
tutorials
u-net
ubuntu
ubuntu1604
uint8
unbalance
undefined_reference
undefined_symbol
underfitting
unexpected
unsupervised
usage
v1layerparameter
val
val1
validating
validation
valueclip
values
variable-timesteps
variables
vector
verbose
version
vgg
vgg16
vgg19
video
video-caffe
viennacl
visual
visualization
voxel
warning
web
web_demo
webcam
webface
website
weight
weight_extraction
weight_sharing
weight_transpose
weights
wiki
windows
winograd
wired
with_python_layer
withcode
workflow
wrong
x86_64
xamarin
xml
yosemite
yosinski
zeiler
zeros
zoo
Conversations (1–30 of 8727)
Shrabani Ghosh, 7/7/19
Check failed: num_axes() <= 4 (5 vs. 4) Cannot use legacy accessors on Blobs with > 4 axes.
I am finding this error in the testing part. The training part went well. For the same net and same
Labels: backward, blobs, caffe, error, loss, prototxt, pycaffe, test
pwj, Przemek D (2 messages), 3/29/18
Backpropagating through Concat layer
1. Yes, this is exactly how it works! 2. No difference at all from the mathematical point of view. It
Labels: backward, blobs, concat, excitation-backprop
Raj, 2/22/18
How to have multiple blob in a single layer.
Hello, Can you please help with an issue I am facing while using caffe . I want to keep data in
Labels: backward, blobs, caffe, data, gpu, help, vector
HARJATIN SINGH, Shubham Juneja (2 messages), 5/23/17
Caffe model gives same output for every image
I am experiencing a similar problem. What I have seen so far is that first image gets stuck in the
Labels: AlexNet, blobs, caffemodel, data, model, pycaffe
Sergius Liu, Mahmoud Badr (2 messages), 4/22/17
Can't understand a statement in an example
argmax arguments are the number of returns you want argmax function to return. for example, if i
Labels: blobs, caffe, data, example, label
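The argmax in question corresponds to Caffe's ArgMax layer, whose top_k parameter sets how many of the highest-scoring class indices are returned. A minimal pycaffe sketch of the same computation done in Python on the output probabilities (the file names and the output blob name "prob" are assumptions):

    import numpy as np
    import caffe

    net = caffe.Net('deploy.prototxt', 'model.caffemodel', caffe.TEST)
    out = net.forward()
    probs = out['prob'][0]               # class scores for the first image in the batch
    top5 = np.argsort(probs)[::-1][:5]   # 5 highest-scoring class indices, like ArgMax with top_k: 5
    print(top5, probs[top5])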
Victor Hugo, 1/30/17
How to resize or concatenate Blobs using the C++ API
I'm using a custom data layer provided in a fork to read video frames. This implementation reads
Labels: blobs, datum, hdf5
xua...@gmail.com, 12/13/16
Is there any method that can be used instead of crop_layer
the input data is n*c*7*7 blob,now I want to simultaneously use some crops of the inputdata wrt
Labels: blobs, crop, inputdata, layer
naranjuelo, Przemek D (5 messages), 11/14/16
Data reshape for fully connected layers
W, H and finally C, for a total of N times, that's it El lunes, 14 de noviembre de 2016, 15:01:33
Labels: blobs, caffe, fully-connected, inner_product, reshape
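The ordering described in the reply reflects the row-major layout of a 4D blob: within each of the N samples, w varies fastest, then h, then c, and an InnerProduct layer flattens each sample's C x H x W volume in exactly that order. A small numpy illustration of the equivalent reshape (the shape values are arbitrary):

    import numpy as np

    x = np.arange(2 * 3 * 4 * 5, dtype=np.float32).reshape(2, 3, 4, 5)  # (N, C, H, W)
    flat = x.reshape(2, -1)  # (N, C*H*W); memory order unchanged: w fastest, then h, then c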
Sharp Weapon, Przemek D (4 messages), 10/7/16
Unknown bottom blob 'data' (layer 'conv1', bottom index 0)
I suppose your data is of the wrong shape. Caffe expects a 4D blob at the input, made of (batch_size)
Labels: blobs, caffe, hdf5, lenet, vector
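As the reply notes, Caffe's image-style layers expect a 4D input blob shaped (batch_size, channels, height, width). A minimal sketch of writing an HDF5 file in that shape for an HDF5Data layer (the dataset names must match the layer's declared tops; the names and sizes here are assumptions):

    import h5py
    import numpy as np

    n, c, h, w = 100, 1, 28, 28
    data = np.random.rand(n, c, h, w).astype(np.float32)         # 4D: N x C x H x W
    label = np.random.randint(0, 10, size=n).astype(np.float32)

    with h5py.File('train.h5', 'w') as f:
        f.create_dataset('data', data=data)
        f.create_dataset('label', data=label)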
Karthik Ganesan, 9/7/16
Incremental neural network calculation
Hi, I am a very new user to Caffe. As part of my research I am doing an experiment to modify a NN to
Labels: blobs, caffe, inputs, novice, weights
springfl...@gmail.com, Evan Shelhamer (2 messages), 7/14/16
how to handle redundant (not used) blobs in the network?
The `Silence` layer type takes bottoms without making any tops to ignore unattached tops like the `
Labels: blobs, caffe, lmdb, redudant
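The Silence layer mentioned in the reply consumes bottom blobs and produces no tops, so a blob the net never uses (for example a label read from an LMDB) is discarded cleanly. A rough pycaffe NetSpec sketch, assuming an existing LMDB; the layer and source names are illustrative:

    import caffe
    from caffe import layers as L, params as P

    n = caffe.NetSpec()
    n.data, n.label = L.Data(source='train_lmdb', backend=P.Data.LMDB,
                             batch_size=32, ntop=2)
    n.silence = L.Silence(n.label, ntop=0)  # consumes the label blob, produces nothing
    print(n.to_proto())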
oranjee...@protonmail.com (3 messages), 7/7/16
C++ getting the net backwards gradients
Just in case someone is interested in this, I finally found out what I was doing wrong. I was really
Labels: backward, blobs, network
Ankit Dhall, 6/21/16
Final net output and calculation of argmax in C++
Hello, I have been trying to get the output of the network using C++. It contains a 40 channel
Labels: argmax, blobs, caffe, cpp_classification, data, ubuntu
Ralph Aeschimann, Anirban Ray (4 messages), 10/17/16
Input data format for Recurrent Layers (PR #2033)
Hey, Thank you so much for sharing. I still have some confusion about setting the delta value of 0 or
Labels: LSTM, blobs, caffe-recurrent, data, layers, recurrent, theory
zic...@ualberta.ca (4 messages), 4/28/16
Does LMDB data layer automatically do mean subtraction?
Just figured it out. In case anyone comes across similar issues, here's the solution: The issue
Labels: blobs, data, layer, lmdb, preprocessing
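As the thread concludes, an LMDB-backed Data layer only subtracts a mean if its transform_param asks for it. A NetSpec sketch of requesting mean subtraction (the LMDB path and mean file name are assumptions):

    from caffe import layers as L, params as P

    data, label = L.Data(
        source='train_lmdb', backend=P.Data.LMDB, batch_size=64, ntop=2,
        transform_param=dict(mean_file='imagenet_mean.binaryproto'))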
Jobs Bill, …, Daniel Moodie (5 messages), 3/14/17
How does caffe work on GPU mode?
Each layer in caffe will call forward_gpu. By default forward_gpu calls forward_cpu as it is expected
Labels: blobs, caffe, gpu
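On the Python side, the mode is chosen once per process before the net is built; layers that implement forward_gpu then run on the selected device, and the rest fall back to their CPU implementation. A minimal sketch (the prototxt and weights file names are placeholders):

    import caffe

    caffe.set_device(0)    # GPU id
    caffe.set_mode_gpu()   # or caffe.set_mode_cpu()
    net = caffe.Net('deploy.prototxt', 'model.caffemodel', caffe.TEST)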
Stefano Lombardi, Lisa (2 messages), 2/27/16
Concat 2 blobs
Do you want a blob that is of size 501x64? If that is the case you should reshape your scale factor
Labels: blobs, concat
Alex Orloff, …, Jan C Peters (5 messages), 1/30/16
image scaling and pooling
Thank you Jan, ad 1) OK, actually It not so important in my task. ad 2) Sure, I understand that I
Labels: blobs, pooling, scale
Tiferet Gazit, Evan Shelhamer (3 messages), 1/12/16
Support for blobs > 2GB
Thank you - this is great news! Being able to use more training examples should greatly improve my
Labels: batch, blobs, bug, caffe
Tianqi Tang, Dana Shavit (2 messages), 12/22/15
How does the weight update work during backpropagation?
The update is performed several stages, first in the solver (eg SGDSolver.cpp) in SGDSolver<Dtype
Labels: backward, blobs, gradient, weight
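The stages described above end in the momentum update that Caffe's SGDSolver applies to each learnable blob after the backward pass. A sketch of the plain-SGD case in Python (weight decay and per-blob lr multipliers are omitted; lr and momentum come from the solver prototxt):

    def sgd_update(W, V, grad, lr=0.01, momentum=0.9):
        V = momentum * V - lr * grad   # V(t+1) = momentum * V(t) - lr * dL/dW(t)
        W = W + V                      # W(t+1) = W(t) + V(t+1)
        return W, V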
Fredrik Skeppstedt, Jan C Peters (6 messages), 10/29/15
Run network forward on single sample, while keeping training batch size large.
Thank you very much for the detailed answer! Very helpful! Den torsdag 29 oktober 2015 kl. 11:05:45
Labels: batch_size, blobs, data, pycaffe
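The usual pycaffe approach is to keep the large batch size in the training prototxt and reshape the deploy net's input blob to a single sample at test time. A rough sketch (the blob name "data", the file names, and the input shape are assumptions):

    import numpy as np
    import caffe

    net = caffe.Net('deploy.prototxt', 'model.caffemodel', caffe.TEST)
    net.blobs['data'].reshape(1, 3, 227, 227)   # batch of one, independent of the training batch size
    net.blobs['data'].data[...] = np.zeros((1, 3, 227, 227), dtype=np.float32)  # your preprocessed image
    out = net.forward()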
Prachi Jain (2 messages), 10/21/15
Using large number of output classes in RNN model gives error "blob size exceeds INT_MAX"
On reducing the vocabulary size (number of output classes) to 82994, training continued smoothly. But
Labels: blobs, caffe-recurrent, inner_product, large_output_classes, large_vocabulary, train
Ben (2 messages), 10/13/15
Problems with lmdb when there're multiple inputs of image and corresponding labels of different size
It's weird that sometimes the mean of label blob is right, sometimes it's wrong. What's
Labels: blobs, data-layer, lmdb, multi, multilabel
lahw...@gmail.com, Thomas Wood (5 messages), 5/9/16
What protobuf should I use to load the trained googlenet.caffemodel from java?
Thank you! Interesting to know. I also found this paper: https://arxiv.org/abs/1511.07376 so I know
Labels: blobs, caffemodel, googlenet, protobuf, snapshot
Nicholas Dufour, 9/10/15
Create a 'masking' blob for new loss layer?
Hi All- Two quick questions: - What is the best way to instantiate a new blob Y for a loss layer,
Labels: blobs, layer, loss
Sara Sabour, …, Anuj Modi (4 messages), 11/11/16
Pickling of "caffe._caffe.Blob" instances is not enabled
Did this work? On Wednesday, 21 October 2015 16:56:15 UTC+5:30, Jan wrote: Pickling is only supported
Labels: blobs, centos, pycaffe
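Since the Blob extension type itself cannot be pickled, a common workaround is to copy its contents into a plain numpy array first and pickle that. A sketch, with the net file names and the blob name "conv1" as assumptions:

    import pickle
    import numpy as np
    import caffe

    net = caffe.Net('deploy.prototxt', 'model.caffemodel', caffe.TEST)
    arr = np.array(net.blobs['conv1'].data, copy=True)  # detach the values from the Blob
    with open('conv1_blob.pkl', 'wb') as f:
        pickle.dump(arr, f)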
Arghavan Arafati, Saeed Izadi (2 messages), 8/27/15
Test Output file Using MATLAB
Arghavan, here is a sample code for working with matlab wrapper: addpath(genpath('matlab'));
Labels: blobs, caffe, caffe-binary, data, input, matcaffe, matlab, test
AJB, 7/9/15
Manipulating blob values in a network via MATLAB
Hi, I have a network which looks like this: data:blob->layer1:layer->blob1:blob->........-
Labels: blobs, caffe, matlab
Philip H, Michael Wilber (3 messages), 7/8/15
BLOB: why use offset rather than computing index with n and c?
Not exactly. It totally does make sense to run the loops this way because that's how data is
Labels: blobs
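The offset accessor being discussed linearizes a 4D index into the blob's flat, row-major storage. The arithmetic behind Blob::offset(n, c, h, w) for an N x C x H x W blob, written out in Python:

    def blob_offset(n, c, h, w, C, H, W):
        # index of element (n, c, h, w) in the flat array backing an N x C x H x W blob
        return ((n * C + c) * H + h) * W + w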
Ziyu Wang, deep.learn...@gmail.com (2 messages), 8/20/15
Shapes of blobs produced by InnerProductLayer and MemoryDataLayer do not match
Hi, I am experiencing with the same error message. Did you find the solution? Thanks. On Thursday,
Labels: blobs, caffe, layer, prototxt