c++ - Caffe classification.cpp always returns 100% probability -


I'm trying to use the Caffe C++ classification example (code here) to classify an image of a handwritten digit (I trained the model on the MNIST database), but it always returns probabilities like

[0, 0, 0, 1.000, 0, 0, 0, 0, 0]  (1.000 can on different position) 

even if the image has no number on it at all. I think it should look more like

[0.01, 0.043, ... 0.9834, ... ] 

Also, for example for a '9', it predicts the wrong number.
The only thing I changed in classification.cpp is that I'm using the CPU:

//#ifdef CPU_ONLY
  Caffe::set_mode(Caffe::CPU); // <----- CPU
//#else
//  Caffe::set_mode(Caffe::GPU);
//#endif

This is what deploy.prototxt looks like:

name: "LeNet"
layer {
  name: "data"
  type: "ImageData"
  top: "data"
  top: "label"
  image_data_param {
    source: "d:\\caffe-windows\\examples\\mnist\\test\\file_list.txt"
  }
}
layer {
  name: "conv1"
  type: "Convolution"
  bottom: "data"
  top: "conv1"
  param { lr_mult: 1 }
  param { lr_mult: 2 }
  convolution_param {
    num_output: 20
    kernel_size: 5
    stride: 1
    weight_filler { type: "xavier" }
    bias_filler { type: "constant" }
  }
}
layer {
  name: "pool1"
  type: "Pooling"
  bottom: "conv1"
  top: "pool1"
  pooling_param {
    pool: MAX
    kernel_size: 2
    stride: 2
  }
}
layer {
  name: "conv2"
  type: "Convolution"
  bottom: "pool1"
  top: "conv2"
  param { lr_mult: 1 }
  param { lr_mult: 2 }
  convolution_param {
    num_output: 50
    kernel_size: 5
    stride: 1
    weight_filler { type: "xavier" }
    bias_filler { type: "constant" }
  }
}
layer {
  name: "pool2"
  type: "Pooling"
  bottom: "conv2"
  top: "pool2"
  pooling_param {
    pool: MAX
    kernel_size: 2
    stride: 2
  }
}
layer {
  name: "ip1"
  type: "InnerProduct"
  bottom: "pool2"
  top: "ip1"
  param { lr_mult: 1 }
  param { lr_mult: 2 }
  inner_product_param {
    num_output: 500
    weight_filler { type: "xavier" }
    bias_filler { type: "constant" }
  }
}
layer {
  name: "relu1"
  type: "ReLU"
  bottom: "ip1"
  top: "ip1"
}
layer {
  name: "ip2"
  type: "InnerProduct"
  bottom: "ip1"
  top: "ip2"
  param { lr_mult: 1 }
  param { lr_mult: 2 }
  inner_product_param {
    num_output: 10
    weight_filler { type: "xavier" }
    bias_filler { type: "constant" }
  }
}
layer {
  name: "loss"
  type: "Softmax"
  bottom: "ip2"
  top: "loss"
}

file_list.txt

d:\caffe-windows\examples\mnist\test\test1.jpg 0 

And test1.jpg is this:

[image: the hand-drawn test digit]

(a black-and-white 28*28 image saved in Paint; I have tried different sizes, it doesn't matter since Preprocess() resizes it anyway)

To train the network I used this tutorial; here is its prototxt.

So why does it predict the wrong digits, and always with 100% probability?

(I'm using Windows 7, Visual Studio 2013)

In the "ImageData" layer you should normalize the test1.jpg data from [0, 255] to [0, 1] with "scale", to keep the preprocessing consistent between training and test, like this:

image_data_param {
  source: "d:\\caffe-windows\\examples\\mnist\\test\\file_list.txt"
  scale: 0.00390625
}
