Caffe Users
Conversations 1–28 of 8728

puren...@gmail.com, Przemek D (2 messages, 3/20/18)
Training the fully convolutional networks (FCN) from scratch for RGB-D data
Tags: RGBD, fullyConvolutional, network, training, vgg
Snippet: I can answer two of your questions, as I've never done any training on RGBD. 1. I'm not sure

Pujan Paudel (9/13/17)
Shape mismatch error training voc-fcn16s when copying parameters from the pretrained model
Tags: FCN, caffe-fcn, copy, fullyConvolutional, segmentation, semanticsegmentation
Snippet: I tried training voc-fcn16s. The network configuration goes well, as the log shows: 4368 net.cpp:255]

Pujan Paudel, …, Jianyuan Shi (5 messages, 1/21/18)
CudaError when trying to train Fully Convolutional Networks
Tags: caffe-fcn, cuda, fullyConvolutional, training
Snippet: Hi, Pujan Paudel. Have you solved the problem? Could you please tell me how? On Wednesday, September 13, 2017 at 2:… AM UTC+8,

Jonathan Balloch (9/3/17)
Finetuning a deep fully convolutional neural network with skip connections
Tags: FCN, ResNet, data-augmentation, fine-tuning, fullyConvolutional, transfer-learning
Snippet: What is the proper procedure for finetuning a deep fully convolutional neural network with skip

Alex Ter-Sarkisov (7/26/17)
Derivatives w.r.t. data: size of layers vs. size of input
Tags: caffe-fcn, derivatives, fcn8, fullyConvolutional
Snippet: So after some extensive hacking I found out that net.blobs[...].diff, i.e. derivatives w.r.t. data

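As the snippet above notes, net.blobs[...].diff holds the derivatives with respect to each blob's data. A minimal pycaffe sketch of how those diffs are obtained; the file names are placeholders, and the deploy prototxt is assumed to set force_backward: true so the gradient actually reaches the data blob:

```python
import caffe

# Hedged sketch: read gradients w.r.t. the input data in pycaffe.
# 'deploy.prototxt' and 'weights.caffemodel' are placeholder file names.
net = caffe.Net('deploy.prototxt', 'weights.caffemodel', caffe.TEST)

net.forward()                        # populate net.blobs[...].data
top = net.outputs[0]                 # name of the last output blob
net.blobs[top].diff[...] = 1.0       # seed the top diff
net.backward()                       # backprop it through the net

# Each blob's diff has the same shape as that blob's data, so the gradient
# at 'data' matches the input size while intermediate layers report their own.
print(net.blobs['data'].diff.shape)
```
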
Marie Nachname, …, Przemek D (14 messages, 7/10/17)
U-Net image segmentation won't converge, loss doesn't change significantly
Tags: convergence, convolution, dataset, deconvolution, error, fullyConvolutional, hdf5, loss, prototxt, segmentation, training
Snippet: Classification networks output as many channels as the number of classes in your dataset. So if you

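The reply quoted above points at a common pitfall: the score layer must output one channel per class. A hedged NetSpec sketch of a tiny segmentation head, assuming HDF5 inputs and made-up layer names and file paths:

```python
import caffe
from caffe import layers as L

def toy_seg_net(num_classes, h5_list='train_h5_list.txt'):
    # Illustrative only; 'train_h5_list.txt' and the layer names are placeholders.
    n = caffe.NetSpec()
    # HDF5Data yields both the image and the per-pixel label map.
    n.data, n.label = L.HDF5Data(batch_size=1, source=h5_list, ntop=2)
    n.conv1 = L.Convolution(n.data, kernel_size=3, pad=1, num_output=64,
                            weight_filler=dict(type='xavier'))
    n.relu1 = L.ReLU(n.conv1, in_place=True)
    # The score layer must have num_output == number of classes:
    # one score map per class, evaluated at every pixel.
    n.score = L.Convolution(n.relu1, kernel_size=1, num_output=num_classes,
                            weight_filler=dict(type='xavier'))
    n.loss = L.SoftmaxWithLoss(n.score, n.label)
    return n.to_proto()

print(toy_seg_net(num_classes=2))
```
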
JunSik CHOI, …, puren...@gmail.com (4 messages, 4/2/18)
Why subtract mean BGR values, and where do those mean values come from in FCN?
Tags: bgr, caffe, caffe-fcn, fullyConvolutional
Snippet: Hello, I would like to get an answer to these questions as well. Also, why do they convert RGB to

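A sketch of the usual pycaffe preprocessing behind this question: Caffe reference models are trained on BGR images (the OpenCV/LMDB convention), and the per-channel mean of the training set is subtracted. File names are placeholders; the mean values are the ILSVRC means used by the reference FCN scripts, shown only as an example:

```python
import numpy as np
import caffe

net = caffe.Net('deploy.prototxt', 'weights.caffemodel', caffe.TEST)  # placeholders

transformer = caffe.io.Transformer({'data': net.blobs['data'].data.shape})
transformer.set_transpose('data', (2, 0, 1))       # HxWxC -> CxHxW
transformer.set_raw_scale('data', 255)             # caffe.io loads images in [0, 1]
transformer.set_channel_swap('data', (2, 1, 0))    # RGB -> BGR
# Example per-channel (BGR) training-set mean; use your own dataset's mean.
transformer.set_mean('data', np.array([104.00699, 116.66877, 122.67892]))

image = caffe.io.load_image('example.jpg')          # placeholder path
net.blobs['data'].data[...] = transformer.preprocess('data', image)
net.forward()
```
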
Hermann Hesse (2 messages, 9/28/16)
How does Caffe handle different input shapes in classification problems by default?
Tags: caffe, fully-connected, fullyConvolutional, image_data, prototxt, pycaffe, shape
Snippet: In summary: when an image (>224x224x3) comes into a trained neural network (224x224x3), it

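By default a fixed-shape deploy net expects its declared input size, so images are usually cropped or resized to fit; a fully convolutional net can instead be reshaped to the incoming image. A minimal sketch, assuming placeholder file names:

```python
import caffe

net = caffe.Net('deploy.prototxt', 'weights.caffemodel', caffe.TEST)  # placeholders

h, w = 384, 512                        # whatever the incoming image happens to be
net.blobs['data'].reshape(1, 3, h, w)  # new input geometry
net.reshape()                          # propagate the new shapes through the net

# After net.reshape(), every blob reports its new spatial size.
for name, blob in net.blobs.items():
    print(name, blob.data.shape)
```
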
Solitarysea, dksa...@gmail.com (2 messages, 2/22/17)
Problem with training FCN for 4 channels
Tags: Deep-network, FCN, FCN32, fullyConvolutional, loss
Snippet: Could you share your prototxt file?

Raúl Gombru, Evan Shelhamer (3 messages, 7/28/16)
Variance in optimum learning rate value to fine-tune FCN in different frameworks
Tags: finetune, fullyConvolutional, learning_rate
Snippet: Thank you for your answer, Evan! It has been really helpful. So I understand that the choice of

PJ (7/6/16)
Net surgery to FCN - am I doing it right?
Tags: FCN, caffe, convolution, deconvolution, fullyConvolutional, pycaffe, testing, training
Snippet: Hi all, I am kind of stuck here; hopefully someone can shed some light on it. To demonstrate my

Tong Shen (2/4/16)
How to reshape the weights of an Inner_product layer in Caffe?
Tags: caffe, fullyConvolutional, inner_product, reshape
Snippet: I want to change a fully connected layer to a fully convolutional layer. But Caffe stores the fully

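The usual answer is the "net surgery" recipe from the Caffe examples: define a second prototxt in which the InnerProduct layers are replaced by equivalent Convolution layers, then flat-copy the weights. A sketch under those assumptions; the prototxt/caffemodel names and the fc6/fc7/fc8 layer pairs are placeholders:

```python
import caffe

net = caffe.Net('original_deploy.prototxt', 'weights.caffemodel', caffe.TEST)
net_full_conv = caffe.Net('fully_conv_deploy.prototxt', 'weights.caffemodel', caffe.TEST)

pairs = [('fc6', 'fc6-conv'), ('fc7', 'fc7-conv'), ('fc8', 'fc8-conv')]
for fc, conv in pairs:
    # FC weights are (num_output, input_dim); the conv layer expects
    # (num_output, channels, kh, kw). The raw values are identical, so a flat
    # copy plus the conv layer's own shape does the "reshape".
    net_full_conv.params[conv][0].data.flat = net.params[fc][0].data.flat  # weights
    net_full_conv.params[conv][1].data[...] = net.params[fc][1].data       # biases

net_full_conv.save('fully_conv.caffemodel')  # placeholder output name
```
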
Etienne Perot, Evan Shelhamer (3 messages, 4/18/16)
Batch Normalization, Fully Convolutional Training & Gradient Accumulation
Tags: CNN, batch, fullyConvolutional, normalization
Snippet: Thanks! That's pretty interesting. I was resizing the labelmap instead, which is dumb since a lot

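On the gradient-accumulation part of this thread: Caffe averages gradients over iter_size forward/backward passes before each solver update, so the effective batch size is batch_size × iter_size. A sketch that writes such a solver file via the protobuf API; every name and value below is illustrative, not the thread's actual settings:

```python
from caffe.proto import caffe_pb2

solver = caffe_pb2.SolverParameter()
solver.net = 'train_val.prototxt'     # placeholder
solver.base_lr = 0.001
solver.lr_policy = 'fixed'
solver.momentum = 0.9
solver.iter_size = 8                  # accumulate 8 mini-batches per weight update
solver.max_iter = 100000
solver.snapshot = 10000
solver.snapshot_prefix = 'snapshots/fcn'  # placeholder

with open('solver.prototxt', 'w') as f:
    f.write(str(solver))              # protobuf text format
```
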
César Salgado (11/14/15)
Fully convolutional inference without subsampling
Tags: fullyConvolutional
Snippet: I want to use a trained convnet to predict labels for every pixel of an image. I have already seen

eran paz, …, Evan Shelhamer (9 messages, 9/30/15)
Fully convolutional classifier
Tags: classifier, fullyConvolutional
Snippet: Hi Evan, thanks! Exactly what I was looking for. I'll give it a try. Thanks, Eran. On Wednesday,

Nicolai Harich, …, Etienne Perot (8 messages, 9/23/15)
Fully Convolutional Network and unbalanced label distribution
Tags: FCN, balancing, fullyConvolutional, segmentation, semantic
Snippet: Hello to you all! Sorry for my late answer, Ben: I was proposing to put some pixels from the

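One common way to counter an unbalanced pixel-label distribution, besides sampling tricks like the one discussed above, is to weight the loss per class, for example with an InfogainLoss matrix whose diagonal holds inverse class frequencies. A rough sketch with made-up class counts; note that per-pixel InfogainLoss support depends on your Caffe version:

```python
import numpy as np
import caffe

# Made-up counts: background dwarfs the foreground classes.
class_counts = np.array([5000000, 120000, 80000], dtype=np.float64)
weights = class_counts.sum() / (len(class_counts) * class_counts)

# H is (1, 1, K, K) with the per-class weights on the diagonal.
K = len(class_counts)
H = np.zeros((1, 1, K, K), dtype=np.float32)
np.fill_diagonal(H[0, 0], weights)

# Serialize H so an InfogainLoss layer can read it via infogain_param { source: ... }.
blob = caffe.io.array_to_blobproto(H)
with open('infogain_H.binaryproto', 'wb') as f:
    f.write(blob.SerializeToString())
```
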
BenG, …, Yang mk (9 messages, 1/29/16)
When testing fully convolutional networks on PASCAL VOC 2011, I can't get the reported result
Tags: FCN, caffemodel, code, fullyConvolutional, help, python, testing
Snippet: Do you get the given mean IU? And as you mentioned above, you got 72 mean IU; can you tell me how

Etienne Perot, …, Carlos Treviño (6 messages, 8/27/15)
Fully convolutional training for bounding box
Tags: fullyConvolutional, hdf5, regression
Snippet: Thanks Etienne! The code is very clear, but I still don't get what pd means. If you can clarify

Ben Gee, …, Wong Fungtion (4 messages, 2/27/17)
FCN: when finetuning from "VGG16.caffemodel", there's a blob shape mismatch at layer "fc6"
Tags: caffemodel, fullyConvolutional, vgg16
Snippet: The VGG16 prototxt uses the old layer definition "layers" instead of "layer"; using it

Mansi Rankawat, …, Youssef (20 messages, 11/23/16)
Fully Convolutional Network (FCN-32) loss remains constant while training
Tags: caffe, finetune, fullyConvolutional, loss
Snippet: Hello Vignesh, the bilinear initialization of the Deconvolution filters and keeping them constant,

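The bilinear initialization mentioned in the reply is the same interpolation-kernel surgery used by the reference FCN code. A sketch of that filler for channel-wise Deconvolution upsamplers; the file and layer names in the usage comment are placeholders:

```python
import numpy as np
import caffe

def upsample_filt(size):
    """Bilinear interpolation kernel of the given side length."""
    factor = (size + 1) // 2
    center = factor - 1 if size % 2 == 1 else factor - 0.5
    og = np.ogrid[:size, :size]
    return ((1 - abs(og[0] - center) / factor) *
            (1 - abs(og[1] - center) / factor))

def interp_surgery(net, layers):
    """Set the listed Deconvolution layers to bilinear upsampling.
    Assumes num_output == channels (channel-wise upsampling), as in FCN."""
    for l in layers:
        m, k, h, w = net.params[l][0].data.shape
        if m != k or h != w:
            raise ValueError('layer %s is not a square, channel-wise upsampler' % l)
        filt = upsample_filt(h)
        net.params[l][0].data[...] = 0
        net.params[l][0].data[range(m), range(k), :, :] = filt  # diagonal fill

# Hedged usage (placeholder file/layer names):
# net = caffe.Net('train.prototxt', caffe.TRAIN)
# interp_surgery(net, [name for name in net.params if 'upscore' in name])
```
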
eran paz, Evan Shelhamer (4 messages, 7/17/15)
Out of memory on K80 with 12 GB - fully convolutional net
Tags: fullyConvolutional, memory
Snippet: Re: #2016, we did run our experiments with shared buffers and the longjon:future branch readme

Gavin Hackeling, …, Majid (23 messages, 11/5/16)
Fully Convolutional Network Only Predicts One Class
Tags: fullyConvolutional
Snippet: Hi, thanks @Evan. I have a problem using the python layer which you mentioned to read the data, as it

eran paz, …, 王勇翔 (7 messages, 8/12/15)
Image segmentation - pixel-wise labeling
Tags: fullyConvolutional, multilabel, pixelwise, segmentation
Snippet: Hi, thank you for the advice and sorry for ... my stupidity XD... I just wrote the wrong lmdb path

Carlos Treviño (6/17/15)
Caffe_Image_Parsing
Tags: caffe, fullyConvolutional, lmdb, parsing, python, segmentation
Snippet: Hi, I'm trying to train a convnet for image parsing/image segmentation, like the one from this

Christopher Catton, …, 15535...@qq.com (24 messages, 9/12/16)
Cublas error trying to train network on GPU
Tags: bug, caffe, fullyConvolutional, gpu
Snippet: Hi Majid, I also experience the same problem. I use caffe for a regression problem. The inputs are images,

Gavin Hackeling, …, Evan Shelhamer (6 messages, 6/23/15)
How to set the "reshape" parameter of FromProto
Tags: blobs, channels, fullyConvolutional, prototxt, segmentation
Snippet: The softmax loss expects the labels to be a single channel where the value for each instance is the

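To make the quoted point concrete: for pixelwise SoftmaxWithLoss the score blob is (N, C, H, W) while the label blob is a single channel, (N, 1, H, W), holding the integer class index of each pixel. A sketch of writing such data for an HDF5Data layer; shapes and file names are illustrative:

```python
import h5py
import numpy as np

N, C, H, W = 4, 3, 256, 256
num_classes = 21

data = np.random.rand(N, C, H, W).astype(np.float32)
# One channel of integer class indices per pixel (stored as float for HDF5Data).
label = np.random.randint(0, num_classes, size=(N, 1, H, W)).astype(np.float32)

with h5py.File('train_0.h5', 'w') as f:
    f.create_dataset('data', data=data)
    f.create_dataset('label', data=label)

# HDF5Data's "source" is a text file listing the .h5 files, one per line.
with open('train_h5_list.txt', 'w') as f:
    f.write('train_0.h5\n')
```
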
Param Rajpura, saeed masoomi (2 messages, 2/13/18)
Doubts about the Net Surgery tutorial
Tags: Net-Surgery, fullyConvolutional, segmentation
Snippet: Hi, I'm stuck on the stride concept. Do you have any idea why the stride is 32?

Ellery R Russell, Lisandro K (2 messages, 7/6/16)
Reduce network memory usage for the forward pass
Tags: caffe, fullyConvolutional, memory
Snippet: Hi Ellery, I am also interested in reducing my model's size in RAM. Have you solved this