Caffe Users
Conversations
Labels
00-classification
1
17
1D
200-classes
2015
23
3D
3D-Caffe
3Dconv
5
572
AMD
ARM
AWS
AlexNet
Apache
Arxiv
Axis
Benchmark
Bu
Buld
C3D
CIFAR
CIFAR-100
CNN
COMPATIBILITY
CPU
Caffe-cpu
CaffeToolbox
Caltech101
CascadeClassifier
Coffe
Completed
ConvLSTM
Cplusplus
CppAPI
CreatorRegistry
CuDNN
D
DARTS
DB
DSO
DataLayer
DataTransformer
Deep-network
DeepDream
DeepLab
Deeplabv2
Dell
Deploy
Detectron
Duplicate
Ensemble
EuclideanLoss
FCN
FCN32
FCRN
FINETUNING
Fedora
FlowNet
FreeBSD
Fully
GNURADIO
GPU_Mode
GSM
GSOC
GTX-570
GTX980
Gaussian
Generate_train_data
Great
H
HDF5Data
HDF5DataLayer
HandHeldSDR
Hi
IDE
ILSVRC13
ILSVRC2013
INRIA
ImageNet
InfogainLossLayer
Initilization
Intel
Interactivity
K40
L1-norm
L2-norm
LD_LIBARAY_PATH
LRCN
LRN
LSTM
LTE
LabView
LibUSRP
MKL
MPI
MemoryData
MemoryDataLayer
MobileNet-SSD
Multiscale
Mutex
NIN
NIST
NLP
NYUD-v2
Net
Net-Surgery
NetSpec
OCR
OS10
OSX
OverFeat
Overfitting
PYTHONPATH
Parallel
PedestrianDetection
Przemek
Pyramid
R
RGBD
RHEL
ResNet
SSD
SVM
SegNet
Serial
SetupParameters
Sharing
Simulink
Simulink-USRP
SliceLayer
SoftmaxWithLoss
SplitLayer
T
USRP1
V2
VOC_dataset
XavierFiller
Xcode
_ULx86_64_step
_caffe
a
aborted
accuracy
action
activation
adadelta
administration
admm
adreno
adversarial
all
am
amazon
amd64
amdgpu-pro
anaconda
android
annotation
application
apt
argmax
atlas
auc
audio
auto-encoder
autoencoder
average
azure
background
backward
balance
balancing
batch
batch_size
batchnorm
beginners
bgr
bias
bin
binaryproto
blas
blob
blobs
blocking_queue
boos
boost
boost_python
bounding-box-image
branch
broadcast
bug
build
buster
c
cafe
caffa
caffe
caffe-16bit
caffe-64bit
caffe-8bit
caffe-binary
caffe-dilation
caffe-fcn
caffe-gemmlowp
caffe-installation
caffe-master
caffe-model
caffe-parallel
caffe-recurrent
caffe-reference
caffe-segnet
caffe-training-log
caffe-users
caffe-windows
caffe2
caffe_root
caffemodel
caffenet
caffezoo
callbacks
camera
camme
cassandra
categories
cats
cblas
centos
chaining
change
channel-pooling
channels
checker
cifar-10
cifar10
class
class5
classification
classifier
cloud
cls
cmake
cmath
cmd
cmdcaffe
code
cold-brew
colorization
commandline
compilation
compile
compiling
compression
compressison
compute_image_mean
concat
confusion-matrix
const
constant
constant_loss
contrastive
control
convergence
convert
convert_annoset
convert_imagenet
convert_imageset
convex
convnet
convo
convolution
convolution_layer
convolutionm
convoluton
copy
correct
cpp_classification
cpu_only
crash
create_imagenet
crfasrnn
crop
crop-layer
cropping
cross-validation
csv
cuda
curl
curve
custom
cvpr
cvpr15
data
data-augmentation
data-imbalance
data-layer
data_transformer
database
dataset
datastax
datum
debian
debian-9
debug
deconvolution
deep
deep-learning
deepVis
deepnetworkscascade
default
demo
denoising
dense
dependencies
deployment
derivated
derivatives
detection
deviation
dies
different
digits
directory
display
distillation
distributed
distributing
divide
dll
dmb-file
dnn
docker
documentation
download
draw
draw_net
drawback
dropout
droput
dual
dyld
dyldLibrary
ec2
efficient
eltwise
eltwise-layer-test
elu
end2end
endian
engine
entropy
err
error
eucli
evaluation
example
examples
exception
excitation-backprop
export
expresso
f
face
face-detection
face-verification
facepoints
failure
fast
fast-rcnn
faster
fasterrcnn
fcn8
feature
feature-extraction
feature-selection
feature_maps
feedforward
feeding-data
figure
files
filler
filter
fine
fine-tuning
finetuen
finetune
fixed
flags
flask
flickr
float
float_labels
floating
food
format
forward_pass
foward
frames
free
fully-connected
fullyConvolutional
gamma
gcc4
gcc5
gcc6
gdb
get_output
gf
gflag
gflags
gist
global
glog
google
googlenet
gou
gpu
gradient
gray
grayscale
groundtruth
grouping
gtx-1080-ti
gui
h5-files
h5f
hackernews
hadoop
hardware
hdf5
hdf5withLMDB
header
hedging
help
heroku
homebrew
hopfield
how-to
hyperparameters
i
ia
ignore_label
im2col
ima
image
image-tagging
imageCLEF
image_data
imagenet_mean
images
imagine
imbalance
implementation
import
import-error
improvement
inception
index
infogain
ingredients
inheritance
init
initial
initialization
inner_product
input
input_output
inputdata
inputfiles
inputs
ins
install
installaion
installation
instance
instruction
integer
intel-caffe
invalid
io
io_inibackup
ip
ipython
isnta
iter_size
iteration
java
jni
job
jupyter
keras
keras-users
kernel
key
killed
kullback-leibler
l
label
labels
landmark
large
large_output_classes
large_vocabulary
lasagne
layer
layer-registry
layerregistry
layers
lboost_thread
ld
ldb
leak
learning
learning-rate
learning_curve
learning_rate
lenet
leveldb
libcaffe
libprotobuf
libsvm
libunwind
license
linking
linux
little
lmbd
lmdb
loading
localization
log
logging
loss
loss-function
loss-layer
loss_big
loss_weight
loss_weights
lr
lr_policy
lsvrc12
mac
macbook
machine
machine-learning
macos
make
makeall
makefile
mammography
manjaro
map
mask
matcaffe
math_functions
matlab
matrices
matrix
matrix_predictions
max-out
maxpool
mdb
mean
memory
memory-layer
memory_data
mentor
mex
mexw64
mirror
missing
missing-dependencies
mlsl
mnist
model
modelselection
modified_model
mojave
mono
motion
multi
multi-channel
multi-core
multi-gpus
multi-label
multi-node
multi-target
multiask
multilabel
multimodal
multiple
multiple-data
multiple-input
multiple-networks
multipleloss
multiprocessing
multitask
multitasking
multithreading
multiview
mutable
muti-hdd
mvn
mxnet
my
name
nan
natural_language
ndk
network
neural
new
newlayer
newnode
ninja
no
noise
non-encoded
non-image
normalization
notebook
novaoznaka
novice
nsight
ntop
num_inputs
num_output
numba
number
numpy
nvidia
nvidia-settings
o
object
openBlas
opencv
optimization
optimus
output
overlapping
oversampling
p
p100
package
padding
parame
parameter
parameter_extraction
params
parser
parsing
pascal
patches
path
patterns
performance
phase
pixel
pixelwise
plateau
please-respond
plot
point
pointcloud
pooling
pooling-dimension
pose
posenet
pre
pre-computed
pre-trained
precision
predicted
prediction
prelu
preprocessing
pretrained
printouts
probabilities
problem
processing
produced
profile
protobuf
protocbuffer
prototxt
pruning
publications
pull
py-faster-rcnn
pycaffe
pyhon
pypy
python
python-layer
python2
python3
pythreadstate_get
qt
qualcomm
quantization
question
quiet
quit
r-cnn
random
raspberrypie
rc3
rcnn
reader
reading
real-time
reboot
recall
recipe
recognition
recurrent
recursive
reductionlayer
redudant
reference
regex
regfreeA
regression
regrs
regularizor
release
request
requirements
rescale
research
reshape
resizing
rest
result
return
rfcn
rgb
ristretto
robotics
roc
roi
ros
rotation
runtest
saliency
same
samples
satellite
save
scale
scaleLayer
scaling
scnn
scope
scource
scripts
seg
segfault
segmentation
select
selective-layers
semantic
semanticsegmentation
sensor
sequence
set
setenv
sgd
sgmentation_fault
shape
shared
shared_libs
shelhamer
shell
showboxes
shuffle
siamese
sierra
sigmoid
similarity_search
simple
single-label
site-packages
size
skimage
skimcaffe
sklearn
slice
slice_size
slicing
slow
snapdragon
snappy
snapshot
softmax
softmax_loss_layer
solved
solver
solverstate
spark
sparse
spatial
specific
speech
speed
spp-net
square
squeezenet
standard
std
step-by-step
studio
style
subtract
subtraction
suffix
support
suppress
symbol_database
synsetword
synthetic
tags
tensorflow
terminal
test
testing
text-spotting
textspotting
textures
theano
theano-users
theory
threshold
time
time-series
top-1
top-5
tracking
train
train1k
train_val
training
transfer
transfer-learning
transform_param
transformer
triplet
trouble
troubleshooting
try
tuning
tutorial
tutorials
u-net
ubuntu
ubuntu1604
uint8
unbalance
undefined_reference
undefined_symbol
underfitting
unexpected
unsupervised
usage
v1layerparameter
val
val1
validating
validation
valueclip
values
variable-timesteps
variables
vector
verbose
version
vgg
vgg16
vgg19
video
video-caffe
viennacl
visual
visualization
voxel
warning
web
web_demo
webcam
webface
website
weight
weight_extraction
weight_sharing
weight_transpose
weights
wiki
windows
winograd
wired
with_python_layer
withcode
workflow
wrong
x86_64
xamarin
xml
yosemite
yosinski
zeiler
zeros
zoo
1–30 of 8727
dusa, Przemek D
2
1/9/18
Caffe HDF5 Data Error
This particular issue has been answered here. On Tuesday, 26 December 2017 at 16:18:13 UTC+1
Labels: HDF5DataLayer, batch_size, hdf5, prototxt
xiaohu Bill
10/23/17
batch size effectiveness on multi-GPU training
I am confused about some concepts regarding the effectiveness of batch size in multi-GPU training. Please correct it
Labels: batch_size, caffe
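The thread above asks how batch size behaves across multiple GPUs. Caffe's built-in multi-GPU training is data-parallel: each GPU runs the batch size written in the train prototxt, so the effective batch per solver iteration grows with the GPU count (and with `iter_size`). A minimal sketch of that arithmetic in plain Python; the function name is hypothetical, not part of any Caffe API:

```python
def effective_batch_size(prototxt_batch_size, num_gpus=1, iter_size=1):
    """Examples consumed per solver iteration in Caffe.

    Data-parallel multi-GPU training replicates the net, so each GPU
    processes `prototxt_batch_size` examples per pass; `iter_size`
    additionally accumulates gradients over that many passes.
    """
    return prototxt_batch_size * num_gpus * iter_size

# e.g. batch_size: 32 in the prototxt, trained with `caffe train --gpu 0,1`
print(effective_batch_size(32, num_gpus=2))  # 64 examples per iteration
```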
Yawei Lu, Przemek D
3
7/18/17
question about deploy.prototxt
Your reply really answers my question. Thanks a lot! On Monday, 17 July 2017 at 9:35:42 PM UTC+8, Przemek D wrote:
Labels: Deploy, batch_size
xzhong, Kağan İncetan
2
10/18/17
Use Same Data for Training and Validation Gives Inconsistent Accuracy
Hi, I am facing more or less the same issue. Have you ever found any answer for that? Regards 21
Labels: CIFAR, batch_size, debug, training, tuning, validation
Isha Garg
4/11/17
Understanding test accuracy usage over mini-batches
Hi, I've hit a snag for the first time that I can't resolve reading the previous threads. I
Labels: accuracy, batch_size, finetune, testing
S Bald, Patrick McNeil
6
2/16/17
Large Batch-Size, Delaying Backprop for Nonseparable Loss Function
I have not tried to store the results in the past, so I am not sure how exactly that process would
Labels: batch_size, loss, memory
barkın tuncer
2/6/17
Googlenet training error
Hello everyone, I am trying to train GoogLeNet, which is in the models directory, but I am getting the error
Labels: batch_size, caffe, error, googlenet, memory, prototxt, snapshot, testing
Yuanyuan Li
12/10/16
why does "Training Region-based Object Detectors with Online Hard Example Mining" set up two RoI networks?
In the paper there is a paragraph as follows: why is this straightforward way inefficient? why the
Labels: backward, batch_size, loss, memory
MKR
2
11/22/16
Batch Size impact on Batch Normalisation Layer
This is my network: name: "ResNet-50" layer { name: "data" type: "Input
Labels: batch, batch_size, batchnorm
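The thread above asks how batch size interacts with batch normalization. At training time a BatchNorm layer normalizes each channel using the mean and variance of the current mini-batch, so a small batch_size makes those statistics noisy. A minimal sketch of the per-batch statistics in plain Python (the function name is illustrative, not a Caffe API):

```python
def batch_mean_var(batch):
    """Per-batch statistics as a BatchNorm layer computes them in training.

    Normalization uses the *current* mini-batch's mean and (biased)
    variance, so a small batch_size yields noisy estimates -- one reason
    batch size affects BatchNorm quality.
    """
    n = len(batch)
    mean = sum(batch) / n
    var = sum((x - mean) ** 2 for x in batch) / n  # biased variance, as used for normalization
    return mean, var

print(batch_mean_var([1.0, 2.0, 3.0, 4.0]))  # (2.5, 1.25)
```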
jadeh...@gmail.com
11/8/16
The same image data resized to 64*64 and 128*128, but the latter does not converge
I used WebFace image data to train and test Caffe. First, I detected faces. Second, I resized all images to
Labels: accuracy, batch_size, caffe, size
Sharp Weapon, Lemma
2
11/6/16
Training error and validation error are the same - zero accuracy during training
Hi, it seems you are using a high base learning rate, and have set the layers' weight learning rates high.
Labels: batch_size, caffe, caffenet, classification, hdf5, layer, learning_rate, loss
Sharp Weapon, Wilf Rosenbaum
4
11/2/16
Zero accuracy training a neural network using caffe
Oh, ok, in that case the best thing you should check is
Labels: batch_size, caffe, learning_rate, testing, training
Suyog Trivedi, Ketil Malde
5
10/14/16
Caffe: significance of validation (test) loss and training loss
Thanks for your help. I ran the training for 15k iterations. I am getting the output curve as below.
Labels: CIFAR, accuracy, batch_size, caffe, convolutionm, dataset, image_data, learning_rate, lmdb, loss, model, pycaffe, snapshot, testing
alkamid, …, Uday Kusupati
4
5/31/17
Memory requirements for ResNet-50 finetuning
I have the same problem too. But running on single gpu gave no error. I think the problem is in
Labels: AWS, batch_size, caffe, finetune, gpu, memory, novice, ubuntu
Jumabek Alikhanov, shai harel
3
9/8/16
Help, network is not learning. Training bvlc_reference_caffenet WITHOUT PADDING on multi-GPUs.
My solver file is the same as bvlc_reference_caffenet. Since I didn't change the train.prototxt
Labels: ImageNet, batch_size, gpu, loss, padding, plateau, prototxt, training
邰磊, …, HIMANSHU RAI
5
9/7/17
How to change the training batch size in FCN?
And the total number of times that we iterate over the complete data set is (max_iter/total-images in
Labels: FCN, batch, batch_size, caffe
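The truncated reply above relates max_iter to passes over the dataset. Since each solver iteration consumes batch_size × iter_size images, the number of epochs follows directly. A minimal sketch with hypothetical numbers (the function name is illustrative):

```python
def epochs_trained(max_iter, batch_size, dataset_size, iter_size=1):
    """Full passes over the data after max_iter solver iterations.

    Each iteration consumes batch_size * iter_size images, so
    epochs = max_iter * batch_size * iter_size / dataset_size.
    (FCN recipes often use batch_size 1 with a larger iter_size.)
    """
    return max_iter * batch_size * iter_size / dataset_size

print(epochs_trained(max_iter=100000, batch_size=1, dataset_size=10000))  # 10.0 epochs
```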
Jalen Hawkins, Daniel Moodie
4
6/24/16
accuracy=0?
Okay, so I found a few slight mechanical errors, such as the names of some of the pictures not matching
Labels: ImageNet, accuracy, batch_size, caffe, image_data, learning_rate, loss, network, snapshot, training
Mipso, Yossi Biton
3
6/15/16
How to classify multiple images (5000+)
Thank you! Would changing the batch size have an impact on how many images I can pass in?
Labels: batch_size, caffe, classification, data-augmentation, error, example, gpu, images, memory, multiple, pycaffe, python
kareem ahmed
6/13/16
Using iter_size while manually feeding data into the network
I am currently training a triplet network, where each batch consists of 3 images, two of which are
Labels: batch_size, pycaffe, python, solver
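The thread above concerns iter_size, Caffe's gradient-accumulation setting: the solver runs iter_size forward/backward passes, averages the accumulated gradients, then applies a single update, mimicking a batch iter_size times larger than fits in memory. A minimal sketch of that accumulation in plain Python (function name and toy gradients are illustrative):

```python
def accumulate_gradients(per_pass_grads):
    """Average gradients over several passes, as iter_size does.

    `per_pass_grads` is one gradient vector per forward/backward pass;
    the solver sums them, divides by the number of passes, and applies
    one parameter update for the whole group.
    """
    k = len(per_pass_grads)
    return [sum(g) / k for g in zip(*per_pass_grads)]

# three passes, each producing gradients for two parameters
print(accumulate_gradients([[3.0, 6.0], [6.0, 0.0], [0.0, 3.0]]))  # [3.0, 3.0]
```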
Hongtao Yang, …, Eli Gibson
4
5/17/16
Problem with hdf5 data input
Hi Hongtao, When shuffle is false, Caffe will load up hdf5 files in the order listed in the file.
Labels: batch_size, hdf5
dusa, auro tripathy
5
1/10/17
LSTM caffe code for activity recognition by lisa - classification with smaller memory
Just an update to my last post, it does build without the cudnn support. I turned that off and it has
Labels: LSTM, batch_size, caffe, classification, inputdata, memory, python
dusa
5/13/16
LSTM caffe code for activity recognition by lisa - classification with smaller memory
Hi! I am trying to run the LSTM code for activity recognition by lisa - http://www.eecs.berkeley.edu/~
Labels: LSTM, batch_size, caffe, classification, inputdata, memory, python
Lillian Liu, Jan
3
5/10/16
Help on interpreting the log printed by caffe
Thank you so much!! That clears up my confusion!! Lillian On Tuesday, May 10, 2016 at 4:52:49 AM UTC-
Labels: batch_size, log, parsing, testing
Daniela G, Jan
6
4/29/16
Different accuracy with command line and python
Thank you Jan :) On Monday, 25 April 2016 at 14:46:00 UTC+1, Jan wrote: Depends. The
Labels: CNN, accuracy, batch_size, caffe, python
Dimitris, Jan
2
4/18/16
Novice questions on LeNet
See interleaved answers. On Saturday, 9 April 2016 at 22:13:41 UTC+2, Dimitris wrote: Hi, I am a new
Labels: Deploy, batch_size, lenet, memory, novice, prototxt
Chias JaJa, Ahmed Ibrahim
2
3/31/16
How to change image size to 1*128 (height * width) for CaffeNet?
Conv. layers get smaller and smaller; you have to create an architecture that can handle such
Labels: batch_size, caffe, data, error, image_data, label, layer, lmdb, output, test, ubuntu
Alex Orloff, …, Ahmet Selman Bozkır
8
12/25/17
batch size and overfitting
I have read your posts. Thank you. But the comments you have made made me think about the
Labels: Overfitting, batch_size
Caleb Belth
1/26/16
Check failed: error == cudaSuccess (2 vs. 0) out of memory
I'm an undergraduate computer science student beginning to do research in machine learning and I
Labels: ImageNet, batch_size, caffe, crash, cuda, error, gpu, help, memory, novice, training, ubuntu
Fredrik Skeppstedt, Jan C Peters
6
10/29/15
Run network forward on single sample, while keeping training batch size large.
Thank you very much for the detailed answer! Very helpful! On Thursday, 29 October 2015 at 11:05:45
Labels: batch_size, blobs, data, pycaffe
Cuong Duc, Oscar Beijbom
2
8/15/15
meaning of 'test_iter' in solver.prototxt
It means the number of batches to run on the test set. So yeah, if you want the test phase to run on
Labels: batch_size, caffe, solver, training