Training MNIST with Caffe

Environment: Ubuntu 12.04, Caffe

cd $CAFFE_ROOT/data/mnist
./

cd $CAFFE_ROOT/examples/mnist
vi lenet_solver.prototxt
Change solver_mode to CPU
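This edit can also be scripted instead of done by hand in vi. A minimal sketch in Python (the `set_cpu_mode` helper is hypothetical, not part of Caffe) that rewrites the solver_mode line:

```python
# Rewrite "solver_mode: GPU" to "solver_mode: CPU" in solver prototxt text.
import re

def set_cpu_mode(solver_text):
    """Return the solver text with solver_mode forced to CPU."""
    return re.sub(r'^solver_mode:[ \t]*GPU[ \t]*$', 'solver_mode: CPU',
                  solver_text, flags=re.M)

sample = 'base_lr: 0.01\nsolver_mode: GPU\nnet: "lenet_train_test.prototxt"\n'
print(set_cpu_mode(sample))
```

To apply it in place, read lenet_solver.prototxt, pass the text through set_cpu_mode, and write it back.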

./train_lenet.sh

I0823 08:11:04.501404 15183 caffe.cpp:90] Starting Optimization
I0823 08:11:04.502498 15183 solver.cpp:32] Initializing solver from parameters:
test_iter: 100
test_interval: 500
base_lr: 0.01
display: 100
max_iter: 10000
lr_policy: "inv"
gamma: 0.0001
power: 0.75
momentum: 0.9
weight_decay: 0.0005
snapshot: 5000
snapshot_prefix: "lenet"
solver_mode: CPU
net: "lenet_train_test.prototxt"
FATAL: Error inserting nvidia_331 (/lib/modules/3.2.0-57-generic/updates/dkms/nvidia_331.ko): No such device
E0823 08:11:04.762663 15183 common.cpp:91] Cannot create Cublas handle. Cublas won't be available.
FATAL: Error inserting nvidia_331 (/lib/modules/3.2.0-57-generic/updates/dkms/nvidia_331.ko): No such device
E0823 08:11:04.982652 15183 common.cpp:98] Cannot create Curand generator. Curand won't be available.
I0823 08:11:04.982898 15183 solver.cpp:72] Creating training net from net file: lenet_train_test.prototxt
I0823 08:11:04.983438 15183 net.cpp:223] The NetState phase (0) differed from the phase (1) specified by a rule in layer mnist
I0823 08:11:04.983516 15183 net.cpp:223] The NetState phase (0) differed from the phase (1) specified by a rule in layer accuracy
I0823 08:11:04.983629 15183 net.cpp:38] Initializing net from parameters:

name: "LeNet"
layers {
  top: "data"
  top: "label"
  name: "mnist"
  type: DATA
  data_param {
    source: "mnist-test-leveldb"
    scale: 0.00390625
    batch_size: 100
  }
  include {
    phase: TEST
  }
}
layers {
  bottom: "data"
  top: "conv1"
  name: "conv1"
  type: CONVOLUTION
  blobs_lr: 1
  blobs_lr: 2
  convolution_param {
    num_output: 20
    kernel_size: 5
    stride: 1
    weight_filler {
      type: "xavier"
    }
    bias_filler {
      type: "constant"
    }
  }
}
layers {
  bottom: "conv1"
  top: "pool1"
  name: "pool1"
  type: POOLING
  pooling_param {
    pool: MAX
    kernel_size: 2
    stride: 2
  }
}
layers {
  bottom: "pool1"
  top: "conv2"
  name: "conv2"
  type: CONVOLUTION
  blobs_lr: 1
  blobs_lr: 2
  convolution_param {
    num_output: 50
    kernel_size: 5
    stride: 1
    weight_filler {
      type: "xavier"
    }
    bias_filler {
      type: "constant"
    }
  }
}
layers {
  bottom: "conv2"
  top: "pool2"
  name: "pool2"
  type: POOLING
  pooling_param {
    pool: MAX
    kernel_size: 2
    stride: 2
  }
}
layers {
  bottom: "pool2"
  top: "ip1"
  name: "ip1"
  type: INNER_PRODUCT
  blobs_lr: 1
  blobs_lr: 2
  inner_product_param {
    num_output: 500
    weight_filler {
      type: "xavier"
    }
    bias_filler {
      type: "constant"
    }
  }
}
layers {
  bottom: "ip1"
  top: "ip1"
  name: "relu1"
  type: RELU
}
layers {
  bottom: "ip1"
  top: "ip2"
  name: "ip2"
  type: INNER_PRODUCT
  blobs_lr: 1
  blobs_lr: 2
  inner_product_param {
    num_output: 10
    weight_filler {
      type: "xavier"
    }
    bias_filler {
      type: "constant"
    }
  }
}
layers {
  bottom: "ip2"
  bottom: "label"
  top: "accuracy"
  name: "accuracy"
  type: ACCURACY
  include {
    phase: TEST
  }
}
layers {
  bottom: "ip2"
  bottom: "label"
  top: "loss"
  name: "loss"
  type: SOFTMAX_LOSS
}
state {
  phase: TEST
}
I0823 08:11:04.524307  2464 net.cpp:66] Creating Layer mnist
I0823 08:11:04.524438  2464 net.cpp:290] mnist -> data
I0823 08:11:04.524711  2464 net.cpp:290] mnist -> label
I0823 08:11:04.524833  2464 data_layer.cpp:179] Opening leveldb mnist-test-leveldb
I0823 08:11:04.617794  2464 data_layer.cpp:262] output data size: 100,1,28,28
I0823 08:11:04.618073  2464 net.cpp:83] Top shape: 100 1 28 28 (78400)
I0823 08:11:04.618237  2464 net.cpp:83] Top shape: 100 1 1 1 (100)
I0823 08:11:04.618285  2464 net.cpp:130] mnist does not need backward computation.
I0823 08:11:04.618414  2464 net.cpp:66] Creating Layer label_mnist_1_split
I0823 08:11:04.618479  2464 net.cpp:329] label_mnist_1_split <- label
I0823 08:11:04.618859  2464 net.cpp:280] label_mnist_1_split -> label (in-place)
I0823 08:11:04.618948  2464 net.cpp:290] label_mnist_1_split -> label_mnist_1_split_1
I0823 08:11:04.618999  2464 net.cpp:83] Top shape: 100 1 1 1 (100)
I0823 08:11:04.619735  2464 net.cpp:83] Top shape: 100 1 1 1 (100)
I0823 08:11:04.619850  2464 net.cpp:130] label_mnist_1_split does not need backward computation.
I0823 08:11:04.619900  2464 net.cpp:66] Creating Layer conv1
I0823 08:11:04.620210  2464 net.cpp:329] conv1 <- data
I0823 08:11:04.620262  2464 net.cpp:290] conv1 -> conv1
I0823 08:11:04.620434  2464 net.cpp:83] Top shape: 100 20 24 24 (1152000)
I0823 08:11:04.620515  2464 net.cpp:125] conv1 needs backward computation.
I0823 08:11:04.620580  2464 net.cpp:66] Creating Layer pool1
I0823 08:11:04.620620  2464 net.cpp:329] pool1 <- conv1
I0823 08:11:04.620663  2464 net.cpp:290] pool1 -> pool1
I0823 08:11:04.621214  2464 net.cpp:83] Top shape: 100 20 12 12 (288000)
I0823 08:11:04.621287  2464 net.cpp:125] pool1 needs backward computation.
I0823 08:11:04.621368  2464 net.cpp:66] Creating Layer conv2
I0823 08:11:04.621604  2464 net.cpp:329] conv2 <- pool1
I0823 08:11:04.621724  2464 net.cpp:290] conv2 -> conv2
I0823 08:11:04.622458  2464 net.cpp:83] Top shape: 100 50 8 8 (320000)
I0823 08:11:04.622563  2464 net.cpp:125] conv2 needs backward computation.
I0823 08:11:04.622607  2464 net.cpp:66] Creating Layer pool2
I0823 08:11:04.622648  2464 net.cpp:329] pool2 <- conv2
I0823 08:11:04.622691  2464 net.cpp:290] pool2 -> pool2
I0823 08:11:04.622730  2464 net.cpp:83] Top shape: 100 50 4 4 (80000)
I0823 08:11:04.623108  2464 net.cpp:125] pool2 needs backward computation.
I0823 08:11:04.623181  2464 net.cpp:66] Creating Layer ip1
I0823 08:11:04.623435  2464 net.cpp:329] ip1 <- pool2
I0823 08:11:04.623749  2464 net.cpp:290] ip1 -> ip1
I0823 08:11:04.628530  2464 net.cpp:83] Top shape: 100 500 1 1 (50000)
I0823 08:11:04.628690  2464 net.cpp:125] ip1 needs backward computation.
I0823 08:11:04.628726  2464 net.cpp:66] Creating Layer relu1
I0823 08:11:04.628751  2464 net.cpp:329] relu1 <- ip1
I0823 08:11:04.628779  2464 net.cpp:280] relu1 -> ip1 (in-place)
I0823 08:11:04.628809  2464 net.cpp:83] Top shape: 100 500 1 1 (50000)
I0823 08:11:04.628835  2464 net.cpp:125] relu1 needs backward computation.
I0823 08:11:04.629266  2464 net.cpp:66] Creating Layer ip2
I0823 08:11:04.629317  2464 net.cpp:329] ip2 <- ip1
I0823 08:11:04.629365  2464 net.cpp:290] ip2 -> ip2
I0823 08:11:04.629861  2464 net.cpp:83] Top shape: 100 10 1 1 (1000)
I0823 08:11:04.629947  2464 net.cpp:125] ip2 needs backward computation.
I0823 08:11:04.629992  2464 net.cpp:66] Creating Layer ip2_ip2_0_split
I0823 08:11:04.630108  2464 net.cpp:329] ip2_ip2_0_split <- ip2
I0823 08:11:04.630190  2464 net.cpp:280] ip2_ip2_0_split -> ip2 (in-place)
I0823 08:11:04.630980  2464 net.cpp:290] ip2_ip2_0_split -> ip2_ip2_0_split_1
I0823 08:11:04.631105  2464 net.cpp:83] Top shape: 100 10 1 1 (1000)
I0823 08:11:04.631145  2464 net.cpp:83] Top shape: 100 10 1 1 (1000)
I0823 08:11:04.631182  2464 net.cpp:125] ip2_ip2_0_split needs backward computation.
I0823 08:11:04.631342  2464 net.cpp:66] Creating Layer accuracy
I0823 08:11:04.631391  2464 net.cpp:329] accuracy <- ip2
I0823 08:11:04.631862  2464 net.cpp:329] accuracy <- label
I0823 08:11:04.631963  2464 net.cpp:290] accuracy -> accuracy
I0823 08:11:04.632132  2464 net.cpp:83] Top shape: 1 1 1 1 (1)
I0823 08:11:04.632175  2464 net.cpp:125] accuracy needs backward computation.
I0823 08:11:04.632494  2464 net.cpp:66] Creating Layer loss
I0823 08:11:04.632750  2464 net.cpp:329] loss <- ip2_ip2_0_split_1
I0823 08:11:04.632804  2464 net.cpp:329] loss <- label_mnist_1_split_1
I0823 08:11:04.632853  2464 net.cpp:290] loss -> loss
I0823 08:11:04.633280  2464 net.cpp:83] Top shape: 1 1 1 1 (1)
I0823 08:11:04.633471  2464 net.cpp:125] loss needs backward computation.
I0823 08:11:04.633826  2464 net.cpp:156] This network produces output accuracy
I0823 08:11:04.633872  2464 net.cpp:156] This network produces output loss
I0823 08:11:04.634106  2464 net.cpp:402] Collecting Learning Rate and Weight Decay.
I0823 08:11:04.634172  2464 net.cpp:167] Network initialization done.
I0823 08:11:04.634213  2464 net.cpp:168] Memory required for data: 0
I0823 08:11:04.634326  2464 solver.cpp:46] Solver scaffolding done.
I0823 08:11:04.634436  2464 solver.cpp:165] Solving LeNet
I0823 08:11:04.634881  2464 solver.cpp:232] Iteration 0, Testing net (#0)
I0823 08:11:19.170075  2464 solver.cpp:270] Test score #0: 0.1059
I0823 08:11:19.170248  2464 solver.cpp:270] Test score #1: 2.30245
I0823 08:11:19.417044  2464 solver.cpp:195] Iteration 0, loss = 2.30231
I0823 08:11:19.417177  2464 solver.cpp:365] Iteration 0, lr = 0.01
I0823 08:11:43.741911  2464 solver.cpp:195] Iteration 100, loss = 0.317127
I0823 08:11:43.742342  2464 solver.cpp:365] Iteration 100, lr = 0.00992565
I0823 08:12:07.532147  2464 solver.cpp:195] Iteration 200, loss = 0.173197
I0823 08:12:07.532258  2464 solver.cpp:365] Iteration 200, lr = 0.00985258
I0823 08:12:31.409700  2464 solver.cpp:195] Iteration 300, loss = 0.247124
I0823 08:12:31.410508  2464 solver.cpp:365] Iteration 300, lr = 0.00978075
I0823 08:12:54.552777  2464 solver.cpp:195] Iteration 400, loss = 0.102047
I0823 08:12:54.552903  2464 solver.cpp:365] Iteration 400, lr = 0.00971013
I0823 08:13:17.605888  2464 solver.cpp:232] Iteration 500, Testing net (#0)

……
I0823 09:10:29.736903  2464 solver.cpp:270] Test score #0: 0.9887
I0823 09:10:29.737015  2464 solver.cpp:270] Test score #1: 0.0369187
I0823 09:10:30.063771  2464 solver.cpp:195] Iteration 9500, loss = 0.00306773
I0823 09:10:30.063874  2464 solver.cpp:365] Iteration 9500, lr = 0.00606002
I0823 09:10:57.213291  2464 solver.cpp:195] Iteration 9600, loss = 0.00250475
I0823 09:10:57.213827  2464 solver.cpp:365] Iteration 9600, lr = 0.00603682
I0823 09:11:26.278821  2464 solver.cpp:195] Iteration 9700, loss = 0.00243088
I0823 09:11:26.279002  2464 solver.cpp:365] Iteration 9700, lr = 0.00601382
I0823 09:11:53.438747  2464 solver.cpp:195] Iteration 9800, loss = 0.0136355
I0823 09:11:53.439350  2464 solver.cpp:365] Iteration 9800, lr = 0.00599102
I0823 09:12:20.007823  2464 solver.cpp:195] Iteration 9900, loss = 0.00696897
I0823 09:12:20.008005  2464 solver.cpp:365] Iteration 9900, lr = 0.00596843
I0823 09:12:46.920634  2464 solver.cpp:287] Snapshotting to lenet_iter_10000
I0823 09:12:46.930307  2464 solver.cpp:294] Snapshotting solver state to lenet_iter_10000.solverstate
I0823 09:12:47.039417  2464 solver.cpp:213] Iteration 10000, loss = 0.00343354
I0823 09:12:47.039518  2464 solver.cpp:232] Iteration 10000, Testing net (#0)
I0823 09:13:02.146388  2464 solver.cpp:270] Test score #0: 0.9909
I0823 09:13:02.146509  2464 solver.cpp:270] Test score #1: 0.0288982
I0823 09:13:02.146543  2464 solver.cpp:218] Optimization Done.
I0823 09:13:02.146564  2464 caffe.cpp:113] Optimization Done.
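The lr values in this log are exactly the solver's `inv` policy: lr = base_lr * (1 + gamma * iter)^(-power), with base_lr = 0.01, gamma = 0.0001, and power = 0.75 from the solver parameters above. A quick sanity check against the logged numbers:

```python
# The 'inv' learning-rate policy used by the solver above.
base_lr, gamma, power = 0.01, 0.0001, 0.75

def inv_lr(iteration):
    return base_lr * (1.0 + gamma * iteration) ** (-power)

# Logged: lr = 0.01 at iter 0, 0.00992565 at iter 100, 0.00596843 at iter 9900.
for it in (0, 100, 9900):
    print(it, round(inv_lr(it), 8))
```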

Training finally produces lenet_iter_10000, a binary protobuf file. Inspect its contents:

cd /u01/caffe/examples/mnist
jerry@hq:/u01/caffe/examples/mnist$ python
Python 2.7.3 (default, Sep 26 2013, 20:03:06)
[GCC 4.6.3] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import caffe
>>> net = caffe.Net('lenet.prototxt', 'lenet_iter_10000')
FATAL: Error inserting nvidia_331 (/lib/modules/3.2.0-57-generic/updates/dkms/nvidia_331.ko): No such device
WARNING: Logging before InitGoogleLogging() is written to STDERR
E0823 10:41:06.040340 16020 common.cpp:91] Cannot create Cublas handle. Cublas won't be available.
FATAL: Error inserting nvidia_331 (/lib/modules/3.2.0-57-generic/updates/dkms/nvidia_331.ko): No such device
E0823 10:41:06.242882 16020 common.cpp:98] Cannot create Curand generator. Curand won't be available.
I0823 10:41:06.243221 16020 net.cpp:38] Initializing net from parameters:
name: "LeNet"
layers {
  bottom: "data"
  top: "conv1"
  name: "conv1"
  type: CONVOLUTION
  blobs_lr: 1
  blobs_lr: 2
  convolution_param {
    num_output: 20
    kernel_size: 5
    stride: 1
    weight_filler {
      type: "xavier"
    }
    bias_filler {
      type: "constant"
    }
  }
}
layers {
  bottom: "conv1"
  top: "pool1"
  name: "pool1"
  type: POOLING
  pooling_param {
    pool: MAX
    kernel_size: 2
    stride: 2
  }
}
layers {
  bottom: "pool1"
  top: "conv2"
  name: "conv2"
  type: CONVOLUTION
  blobs_lr: 1
  blobs_lr: 2
  convolution_param {
    num_output: 50
    kernel_size: 5
    stride: 1
    weight_filler {
      type: "xavier"
    }
    bias_filler {
      type: "constant"
    }
  }
}
layers {
  bottom: "conv2"
  top: "pool2"
  name: "pool2"
  type: POOLING
  pooling_param {
    pool: MAX
    kernel_size: 2
    stride: 2
  }
}
layers {
  bottom: "pool2"
  top: "ip1"
  name: "ip1"
  type: INNER_PRODUCT
  blobs_lr: 1
  blobs_lr: 2
  inner_product_param {
    num_output: 500
    weight_filler {
      type: "xavier"
    }
    bias_filler {
      type: "constant"
    }
  }
}
layers {
  bottom: "ip1"
  top: "ip1"
  name: "relu1"
  type: RELU
}
layers {
  bottom: "ip1"
  top: "ip2"
  name: "ip2"
  type: INNER_PRODUCT
  blobs_lr: 1
  blobs_lr: 2
  inner_product_param {
    num_output: 10
    weight_filler {
      type: "xavier"
    }
    bias_filler {
      type: "constant"
    }
  }
}
layers {
  bottom: "ip2"
  top: "prob"
  name: "prob"
  type: SOFTMAX
}
input: "data"
input_dim: 64
input_dim: 1
input_dim: 28
input_dim: 28
I0823 10:41:06.244067 16020 net.cpp:292] Input 0 -> data
I0823 10:41:06.244173 16020 net.cpp:66] Creating Layer conv1
I0823 10:41:06.244201 16020 net.cpp:329] conv1 <- data
I0823 10:41:06.244228 16020 net.cpp:290] conv1 -> conv1
I0823 10:41:06.245010 16020 net.cpp:83] Top shape: 64 20 24 24 (737280)
I0823 10:41:06.245100 16020 net.cpp:125] conv1 needs backward computation.
I0823 10:41:06.245172 16020 net.cpp:66] Creating Layer pool1
I0823 10:41:06.245210 16020 net.cpp:329] pool1 <- conv1
I0823 10:41:06.245276 16020 net.cpp:290] pool1 -> pool1
I0823 10:41:06.245338 16020 net.cpp:83] Top shape: 64 20 12 12 (184320)
I0823 10:41:06.245378 16020 net.cpp:125] pool1 needs backward computation.
I0823 10:41:06.245426 16020 net.cpp:66] Creating Layer conv2
I0823 10:41:06.245462 16020 net.cpp:329] conv2 <- pool1
I0823 10:41:06.245509 16020 net.cpp:290] conv2 -> conv2
I0823 10:41:06.245834 16020 net.cpp:83] Top shape: 64 50 8 8 (204800)
I0823 10:41:06.245893 16020 net.cpp:125] conv2 needs backward computation.
I0823 10:41:06.245918 16020 net.cpp:66] Creating Layer pool2
I0823 10:41:06.246021 16020 net.cpp:329] pool2 <- conv2
I0823 10:41:06.246088 16020 net.cpp:290] pool2 -> pool2
I0823 10:41:06.246136 16020 net.cpp:83] Top shape: 64 50 4 4 (51200)
I0823 10:41:06.246212 16020 net.cpp:125] pool2 needs backward computation.
I0823 10:41:06.246263 16020 net.cpp:66] Creating Layer ip1
I0823 10:41:06.246296 16020 net.cpp:329] ip1 <- pool2
I0823 10:41:06.246352 16020 net.cpp:290] ip1 -> ip1
I0823 10:41:06.250891 16020 net.cpp:83] Top shape: 64 500 1 1 (32000)
I0823 10:41:06.251027 16020 net.cpp:125] ip1 needs backward computation.
I0823 10:41:06.251073 16020 net.cpp:66] Creating Layer relu1
I0823 10:41:06.251111 16020 net.cpp:329] relu1 <- ip1
I0823 10:41:06.251149 16020 net.cpp:280] relu1 -> ip1 (in-place)
I0823 10:41:06.251196 16020 net.cpp:83] Top shape: 64 500 1 1 (32000)
I0823 10:41:06.251231 16020 net.cpp:125] relu1 needs backward computation.
I0823 10:41:06.251268 16020 net.cpp:66] Creating Layer ip2
I0823 10:41:06.251302 16020 net.cpp:329] ip2 <- ip1
I0823 10:41:06.251461 16020 net.cpp:290] ip2 -> ip2
I0823 10:41:06.251601 16020 net.cpp:83] Top shape: 64 10 1 1 (640)
I0823 10:41:06.251646 16020 net.cpp:125] ip2 needs backward computation.
I0823 10:41:06.251682 16020 net.cpp:66] Creating Layer prob
I0823 10:41:06.251716 16020 net.cpp:329] prob <- ip2
I0823 10:41:06.251757 16020 net.cpp:290] prob -> prob
I0823 10:41:06.252317 16020 net.cpp:83] Top shape: 64 10 1 1 (640)
I0823 10:41:06.252887 16020 net.cpp:125] prob needs backward computation.
I0823 10:41:06.252924 16020 net.cpp:156] This network produces output prob
I0823 10:41:06.252977 16020 net.cpp:402] Collecting Learning Rate and Weight Decay.
I0823 10:41:06.253016 16020 net.cpp:167] Network initialization done.
I0823 10:41:06.253099 16020 net.cpp:168] Memory required for data: 200704
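The Top shape lines above follow directly from the layer definitions: an unpadded convolution produces (in - kernel) / stride + 1 outputs per spatial dimension, and the 2x2, stride-2 pooling halves each dimension. A pure-Python check of the deploy net's shapes (no Caffe needed):

```python
# Reproduce the deploy net's "Top shape" lines from the layer hyperparameters.

def out_size(size, kernel, stride=1):
    # Spatial output size of an unpadded convolution or pooling window.
    return (size - kernel) // stride + 1

n = 64                                   # batch size from input_dim above
shapes = []
h = w = 28
h, w = out_size(h, 5), out_size(w, 5)
shapes.append(('conv1', (n, 20, h, w)))  # 5x5 conv, 20 outputs
h, w = out_size(h, 2, 2), out_size(w, 2, 2)
shapes.append(('pool1', (n, 20, h, w)))  # 2x2 max pool, stride 2
h, w = out_size(h, 5), out_size(w, 5)
shapes.append(('conv2', (n, 50, h, w)))
h, w = out_size(h, 2, 2), out_size(w, 2, 2)
shapes.append(('pool2', (n, 50, h, w)))
shapes += [('ip1', (n, 500, 1, 1)), ('ip2', (n, 10, 1, 1))]

for name, s in shapes:
    print(name, s, s[0] * s[1] * s[2] * s[3])
```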

View the network definition:
>>> print open('lenet.prototxt').read()
name: "LeNet"
input: "data"
input_dim: 64
input_dim: 1
input_dim: 28
input_dim: 28
layers {
  name: "conv1"
  type: CONVOLUTION
  bottom: "data"
  top: "conv1"
  blobs_lr: 1
  blobs_lr: 2
  convolution_param {
    num_output: 20
    kernel_size: 5
    stride: 1
    weight_filler {
      type: "xavier"
    }
    bias_filler {
      type: "constant"
    }
  }
}
layers {
  name: "pool1"
  type: POOLING
  bottom: "conv1"
  top: "pool1"
  pooling_param {
    pool: MAX
    kernel_size: 2
    stride: 2
  }
}
layers {
  name: "conv2"
  type: CONVOLUTION
  bottom: "pool1"
  top: "conv2"
  blobs_lr: 1
  blobs_lr: 2
  convolution_param {
    num_output: 50
    kernel_size: 5
    stride: 1
    weight_filler {
      type: "xavier"
    }
    bias_filler {
      type: "constant"
    }
  }
}
layers {
  name: "pool2"
  type: POOLING
  bottom: "conv2"
  top: "pool2"
  pooling_param {
    pool: MAX
    kernel_size: 2
    stride: 2
  }
}
layers {
  name: "ip1"
  type: INNER_PRODUCT
  bottom: "pool2"
  top: "ip1"
  blobs_lr: 1
  blobs_lr: 2
  inner_product_param {
    num_output: 500
    weight_filler {
      type: "xavier"
    }
    bias_filler {
      type: "constant"
    }
  }
}
layers {
  name: "relu1"
  type: RELU
  bottom: "ip1"
  top: "ip1"
}
layers {
  name: "ip2"
  type: INNER_PRODUCT
  bottom: "ip1"
  top: "ip2"
  blobs_lr: 1
  blobs_lr: 2
  inner_product_param {
    num_output: 10
    weight_filler {
      type: "xavier"
    }
    bias_filler {
      type: "constant"
    }
  }
}
layers {
  name: "prob"
  type: SOFTMAX
  bottom: "ip2"
  top: "prob"
}

View each layer's parameters:
>>> dir(net)
>>> net.params["ip2"][0].data
array([[[[-0.00913269,  0.06095703, -0.09719526, ..., -0.01292357,
          -0.02721527, -0.04921406],
         [ 0.07316435, -0.10016691, -0.00194797, ..., -0.02357075,
          -0.03735601, -0.12467863],
         [ 0.11690015, -0.13771389, -0.04632974, ...,  0.02967362,
          -0.11868649,  0.01114164],
         ...,
         [-0.18345536, -0.01772851,  0.06773216, ..., -0.00851034,
          -0.02590596,  0.01125562],
         [-0.16715027,  0.03873322,  0.03800297, ...,  0.0236346 ,
          -0.01642762,  0.04072023],
         [ 0.10814335, -0.04631414,  0.09708735, ..., -0.0280726 ,
          -0.14074558,  0.14641024]]]], dtype=float32)
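In this interface, net.params[layer][0] holds a layer's weights and net.params[layer][1] its biases (the ip2 weights above form a 10x500 matrix stored in a 4-D blob). The learnable-parameter counts follow from the layer definitions; a small tally in plain Python (pooling and ReLU layers have no parameters):

```python
# Learnable parameter counts (weights + biases) for the LeNet defined above.
params = {
    'conv1': 20 * 1 * 5 * 5 + 20,       # 20 filters over 1 input channel, 5x5
    'conv2': 50 * 20 * 5 * 5 + 50,      # 50 filters over 20 channels, 5x5
    'ip1':   500 * (50 * 4 * 4) + 500,  # fully connected on pool2's 50x4x4 output
    'ip2':   10 * 500 + 10,             # 10-way output layer
}
for name, n in params.items():
    print(name, n)
print('total', sum(params.values()))
```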

More features will be explored later.

Using BVLC Caffe

Environment: Ubuntu 12.04, CUDA 6.0

1. Install prerequisite software

pip install -r /u01/caffe/python/requirements.txt
sudo apt-get install libprotobuf-dev libleveldb-dev libsnappy-dev libopencv-dev libboost-all-dev libhdf5-serial-dev

# gflags
wget https://github.com/schuhschuh/gflags/archive/master.zip
unzip master.zip
cd gflags-master
mkdir build && cd build
CXXFLAGS="-fPIC" cmake .. -DGFLAGS_NAMESPACE=google
make && make install

# glog
wget https://google-glog.googlecode.com/files/glog-0.3.3.tar.gz
tar zxvf glog-0.3.3.tar.gz
cd glog-0.3.3
./configure
make && make install

# lmdb
git clone git://gitorious.org/mdb/mdb.git
cd mdb/libraries/liblmdb
make && make install

2. Configure the build

cp Makefile.config.example Makefile.config
Edit Makefile.config and uncomment the following line (the VM does not support a GPU):
CPU_ONLY := 1

3. Build. It fails with the errors below:

jerry@hq:/u01/caffe$ make
g++ .build_release/tools/convert_imageset.o .build_release/lib/libcaffe.a -o .build_release/tools/convert_imageset.bin -fPIC -DCPU_ONLY -DNDEBUG -O2 -I/usr/include/python2.7 -I/usr/lib/python2.7/dist-packages/numpy/core/include -I/usr/local/include -I.build_release/src -I./src -I./include -Wall -Wno-sign-compare -L/usr/lib -L/usr/local/lib -L/usr/lib -lglog -lgflags -lpthread -lprotobuf -lleveldb -lsnappy -llmdb -lboost_system -lhdf5_hl -lhdf5 -lopencv_core -lopencv_highgui -lopencv_imgproc -lcblas -latlas
.build_release/lib/libcaffe.a(blob.o): In function `caffe::Blob<float>::Update()':
blob.cpp:(.text._ZN5caffe4BlobIfE6UpdateEv[_ZN5caffe4BlobIfE6UpdateEv]+0x43): undefined reference to `void caffe::caffe_gpu_axpy<float>(int, float, float const*, float*)'
.build_release/lib/libcaffe.a(blob.o): In function `caffe::Blob<float>::asum_data() const':
blob.cpp:(.text._ZNK5caffe4BlobIfE9asum_dataEv[_ZNK5caffe4BlobIfE9asum_dataEv]+0x3f): undefined reference to `void caffe::caffe_gpu_asum<float>(int, float const*, float*)'
.build_release/lib/libcaffe.a(blob.o): In function `caffe::Blob<float>::asum_diff() const':
blob.cpp:(.text._ZNK5caffe4BlobIfE9asum_diffEv[_ZNK5caffe4BlobIfE9asum_diffEv]+0x3f): undefined reference to `void caffe::caffe_gpu_asum<float>(int, float const*, float*)'
.build_release/lib/libcaffe.a(blob.o): In function `caffe::Blob<double>::Update()':
blob.cpp:(.text._ZN5caffe4BlobIdE6UpdateEv[_ZN5caffe4BlobIdE6UpdateEv]+0x43): undefined reference to `void caffe::caffe_gpu_axpy<double>(int, double, double const*, double*)'
.build_release/lib/libcaffe.a(blob.o): In function `caffe::Blob<double>::asum_data() const':
blob.cpp:(.text._ZNK5caffe4BlobIdE9asum_dataEv[_ZNK5caffe4BlobIdE9asum_dataEv]+0x3f): undefined reference to `void caffe::caffe_gpu_asum<double>(int, double const*, double*)'
.build_release/lib/libcaffe.a(blob.o): In function `caffe::Blob<double>::asum_diff() const':
blob.cpp:(.text._ZNK5caffe4BlobIdE9asum_diffEv[_ZNK5caffe4BlobIdE9asum_diffEv]+0x3f): undefined reference to `void caffe::caffe_gpu_asum<double>(int, double const*, double*)'
.build_release/lib/libcaffe.a(common.o): In function `caffe::GlobalInit(int*, char***)':
common.cpp:(.text+0x12a): undefined reference to `gflags::ParseCommandLineFlags(int*, char***, bool)'
.build_release/lib/libcaffe.a(common.o): In function `caffe::Caffe::Caffe()':
common.cpp:(.text+0x179): undefined reference to `cublasCreate_v2'
common.cpp:(.text+0x1cb): undefined reference to `curandCreateGenerator'
common.cpp:(.text+0x22d): undefined reference to `curandSetPseudoRandomGeneratorSeed'
.build_release/lib/libcaffe.a(common.o): In function `caffe::Caffe::~Caffe()':
common.cpp:(.text+0x434): undefined reference to `cublasDestroy_v2'
common.cpp:(.text+0x456): undefined reference to `curandDestroyGenerator'
.build_release/lib/libcaffe.a(common.o): In function `caffe::Caffe::DeviceQuery()':
common.cpp:(.text+0x5f8): undefined reference to `cudaGetDevice'
common.cpp:(.text+0x616): undefined reference to `cudaGetDeviceProperties'
common.cpp:(.text+0xd22): undefined reference to `cudaGetErrorString'
.build_release/lib/libcaffe.a(common.o): In function `caffe::Caffe::SetDevice(int)':
common.cpp:(.text+0x1222): undefined reference to `cudaGetDevice'
common.cpp:(.text+0x1247): undefined reference to `cudaSetDevice'
common.cpp:(.text+0x127b): undefined reference to `cublasDestroy_v2'
common.cpp:(.text+0x12a9): undefined reference to `curandDestroyGenerator'
common.cpp:(.text+0x12ce): undefined reference to `cublasCreate_v2'
common.cpp:(.text+0x12fc): undefined reference to `curandCreateGenerator'
common.cpp:(.text+0x1330): undefined reference to `curandSetPseudoRandomGeneratorSeed'
common.cpp:(.text+0x1729): undefined reference to `cudaGetErrorString'
common.cpp:(.text+0x1882): undefined reference to `cudaGetErrorString'
.build_release/lib/libcaffe.a(common.o): In function `caffe::Caffe::set_random_seed(unsigned int)':
common.cpp:(.text+0x1aff): undefined reference to `curandDestroyGenerator'
common.cpp:(.text+0x1b2d): undefined reference to `curandCreateGenerator'
common.cpp:(.text+0x1b5c): undefined reference to `curandSetPseudoRandomGeneratorSeed'
.build_release/lib/libcaffe.a(math_functions.o): In function `void caffe::caffe_copy<double>(int, double const*, double*)':
math_functions.cpp:(.text._ZN5caffe10caffe_copyIdEEviPKT_PS1_[_ZN5caffe10caffe_copyIdEEviPKT_PS1_]+0x6c): undefined reference to `cudaMemcpy'
math_functions.cpp:(.text._ZN5caffe10caffe_copyIdEEviPKT_PS1_[_ZN5caffe10caffe_copyIdEEviPKT_PS1_]+0x160): undefined reference to `cudaGetErrorString'
.build_release/lib/libcaffe.a(math_functions.o): In function `void caffe::caffe_copy<int>(int, int const*, int*)':
math_functions.cpp:(.text._ZN5caffe10caffe_copyIiEEviPKT_PS1_[_ZN5caffe10caffe_copyIiEEviPKT_PS1_]+0x6c): undefined reference to `cudaMemcpy'
math_functions.cpp:(.text._ZN5caffe10caffe_copyIiEEviPKT_PS1_[_ZN5caffe10caffe_copyIiEEviPKT_PS1_]+0x160): undefined reference to `cudaGetErrorString'
.build_release/lib/libcaffe.a(math_functions.o): In function `void caffe::caffe_copy<unsigned int>(int, unsigned int const*, unsigned int*)':
math_functions.cpp:(.text._ZN5caffe10caffe_copyIjEEviPKT_PS1_[_ZN5caffe10caffe_copyIjEEviPKT_PS1_]+0x6c): undefined reference to `cudaMemcpy'
math_functions.cpp:(.text._ZN5caffe10caffe_copyIjEEviPKT_PS1_[_ZN5caffe10caffe_copyIjEEviPKT_PS1_]+0x160): undefined reference to `cudaGetErrorString'
.build_release/lib/libcaffe.a(math_functions.o): In function `void caffe::caffe_copy<float>(int, float const*, float*)':
math_functions.cpp:(.text._ZN5caffe10caffe_copyIfEEviPKT_PS1_[_ZN5caffe10caffe_copyIfEEviPKT_PS1_]+0x6c): undefined reference to `cudaMemcpy'
math_functions.cpp:(.text._ZN5caffe10caffe_copyIfEEviPKT_PS1_[_ZN5caffe10caffe_copyIfEEviPKT_PS1_]+0x160): undefined reference to `cudaGetErrorString'
.build_release/lib/libcaffe.a(syncedmem.o): In function `caffe::SyncedMemory::cpu_data()':
syncedmem.cpp:(.text+0x26): undefined reference to `caffe::caffe_gpu_memcpy(unsigned long, void const*, void*)'
.build_release/lib/libcaffe.a(syncedmem.o): In function `caffe::SyncedMemory::mutable_cpu_data()':
syncedmem.cpp:(.text+0x136): undefined reference to `caffe::caffe_gpu_memcpy(unsigned long, void const*, void*)'
.build_release/lib/libcaffe.a(syncedmem.o): In function `caffe::SyncedMemory::~SyncedMemory()':
syncedmem.cpp:(.text+0x1c1): undefined reference to `cudaFree'
syncedmem.cpp:(.text+0x20f): undefined reference to `cudaGetErrorString'
.build_release/lib/libcaffe.a(syncedmem.o): In function `caffe::SyncedMemory::mutable_gpu_data()':
syncedmem.cpp:(.text+0x29a): undefined reference to `caffe::caffe_gpu_memcpy(unsigned long, void const*, void*)'
syncedmem.cpp:(.text+0x2b9): undefined reference to `cudaMalloc'
syncedmem.cpp:(.text+0x2e5): undefined reference to `cudaMemset'
syncedmem.cpp:(.text+0x321): undefined reference to `cudaGetErrorString'
syncedmem.cpp:(.text+0x379): undefined reference to `cudaMalloc'
syncedmem.cpp:(.text+0x3c2): undefined reference to `cudaGetErrorString'
syncedmem.cpp:(.text+0x435): undefined reference to `cudaGetErrorString'
.build_release/lib/libcaffe.a(syncedmem.o): In function `caffe::SyncedMemory::gpu_data()':
syncedmem.cpp:(.text+0x4ca): undefined reference to `caffe::caffe_gpu_memcpy(unsigned long, void const*, void*)'
syncedmem.cpp:(.text+0x4e9): undefined reference to `cudaMalloc'
syncedmem.cpp:(.text+0x515): undefined reference to `cudaMemset'
syncedmem.cpp:(.text+0x549): undefined reference to `cudaMalloc'
syncedmem.cpp:(.text+0x592): undefined reference to `cudaGetErrorString'
syncedmem.cpp:(.text+0x608): undefined reference to `cudaGetErrorString'
syncedmem.cpp:(.text+0x678): undefined reference to `cudaGetErrorString'
collect2: error: ld returned 1 exit status
make: *** [.build_release/tools/convert_imageset.bin] Error 1

Most of the unresolved references are GPU symbols, yet the build does not pass even with the cpu-only option enabled.

4. Edit Makefile.config: comment out CPU_ONLY := 1, and set CUSTOM_CXX := g++-4.6

sudo apt-get install gcc-4.6 g++-4.6 gcc-4.6-multilib g++-4.6-multilib

Edit these two files:
vi src/caffe/common.cpp
vi tools/caffe.cpp
replacing the gflags namespace with google.

make clean

make

make pycaffe
g++-4.6 -shared -o python/caffe/_caffe.so python/caffe/_caffe.cpp \
.build_release/lib/libcaffe.a -fPIC -DNDEBUG -O2 -I/usr/include/python2.7 -I/usr/lib/python2.7/dist-packages/numpy/core/include -I/usr/local/include -I.build_release/src -I./src -I./include -I/usr/local/cuda/include -Wall -Wno-sign-compare -L/usr/lib -L/usr/local/lib -L/usr/lib -L/usr/local/cuda/lib64 -L/usr/local/cuda/lib -lcudart -lcublas -lcurand -lglog -lgflags -lpthread -lprotobuf -lleveldb -lsnappy -llmdb -lboost_system -lhdf5_hl -lhdf5 -lopencv_core -lopencv_highgui -lopencv_imgproc -lcblas -latlas -lboost_python -lpython2.7

touch python/caffe/proto/__init__.py
protoc --proto_path=src --python_out=python src/caffe/proto/caffe_pretty_print.proto

protoc --proto_path=src --python_out=python src/caffe/proto/caffe.proto

Run: sudo cp /u01/caffe/python/caffe/ /usr/local/lib/python2.7/dist-packages/ -Rf

Using cuda-convnet2

Environment: Ubuntu 12.04, cuda-convnet2, CUDA 6

Installation steps:

1. Install the required libraries

sudo apt-get install python-dev python-numpy python-scipy python-magic python-matplotlib libatlas-base-dev libjpeg-dev libopencv-dev git

2. Install CUDA
Download cuda-repo-ubuntu1204_6.5-14_amd64.deb from http://developer.download.nvidia.com/compute/cuda/repos/ubuntu1204/x86_64/cuda-repo-ubuntu1204_6.5-14_amd64.deb
$ wget http://developer.download.nvidia.com/compute/cuda/repos/ubuntu1204/x86_64/cuda-repo-ubuntu1204_6.5-14_amd64.deb
$ sudo dpkg -i cuda-repo-ubuntu1204_6.5-14_amd64.deb
$ sudo apt-get update
$ sudo apt-get install cuda

3. Configure the CUDA environment variables

vi ~/.bashrc

export CUDA_HOME=/usr/local/cuda-6.0
export LD_LIBRARY_PATH=${CUDA_HOME}/lib64

PATH=${CUDA_HOME}/bin:${PATH}
export PATH

4. Download the cuda-convnet2 source

git clone https://code.google.com/p/cuda-convnet2/

5. Build the source

jerry@hq:/u01/cuda-convnet2$ sh build.sh
mkdir -p ./bin//src
g++  -O3 -c -fPIC   -DNUMPY_INTERFACE -I./include -I/usr/include/python2.7 -I/usr/lib/python2.7/dist-packages/numpy/core/include/numpy/ src/matrix.cpp -o ./bin//src/matrix.o
In file included from /usr/include/python2.7/Python.h:8:0,
from src/../include/matrix.h:22,
from src/matrix.cpp:17:
/usr/include/python2.7/pyconfig.h:1161:0: warning: "_POSIX_C_SOURCE" redefined [enabled by default]
#define _POSIX_C_SOURCE 200112L
^
In file included from /usr/include/stdlib.h:25:0,
from src/../include/matrix_funcs.h:20,
from src/../include/matrix.h:20,
from src/matrix.cpp:17:
/usr/include/features.h:164:0: note: this is the location of the previous definition
# define _POSIX_C_SOURCE 200809L
^
In file included from /usr/include/python2.7/Python.h:8:0,
from src/../include/matrix.h:22,
from src/matrix.cpp:17:
/usr/include/python2.7/pyconfig.h:1183:0: warning: "_XOPEN_SOURCE" redefined [enabled by default]
#define _XOPEN_SOURCE 600
^
In file included from /usr/include/stdlib.h:25:0,
from src/../include/matrix_funcs.h:20,
from src/../include/matrix.h:20,
from src/matrix.cpp:17:
/usr/include/features.h:166:0: note: this is the location of the previous definition
# define _XOPEN_SOURCE 700
^
cd ./bin/ && g++  -O3   -DNUMPY_INTERFACE -shared -Wl,-no-undefined -o libutilpy.so src/matrix.o -L/usr/lib/atlas-base -latlas -lcblas -lpython2.7
ln -sf ./bin//libutilpy.so .
mkdir -p ./bin/release
mkdir -p ./obj/release/src
mkdir -p ./bin/release
mkdir -p ./obj/release/src
mkdir -p ./bin/release
mkdir -p ./obj/release/src
mkdir -p ./bin//src
g++  -O3 -c -fPIC   -I./include -I/usr/include/python2.7  src/pyext.cpp -o ./bin//src/pyext.o
In file included from /usr/include/python2.7/Python.h:8:0,
from src/../include/pyext.h:23,
from src/pyext.cpp:17:
/usr/include/python2.7/pyconfig.h:1161:0: warning: "_POSIX_C_SOURCE" redefined [enabled by default]
#define _POSIX_C_SOURCE 200112L
^
In file included from /usr/include/stdio.h:28:0,
from src/../include/pyext.h:20,
from src/pyext.cpp:17:
/usr/include/features.h:164:0: note: this is the location of the previous definition
# define _POSIX_C_SOURCE 200809L
^
In file included from /usr/include/python2.7/Python.h:8:0,
from src/../include/pyext.h:23,
from src/pyext.cpp:17:
/usr/include/python2.7/pyconfig.h:1183:0: warning: "_XOPEN_SOURCE" redefined [enabled by default]
#define _XOPEN_SOURCE 600
^
In file included from /usr/include/stdio.h:28:0,
from src/../include/pyext.h:20,
from src/pyext.cpp:17:
/usr/include/features.h:166:0: note: this is the location of the previous definition
# define _XOPEN_SOURCE 700
^
cd ./bin/ && g++  -O3   -shared -Wl,-no-undefined -o _MakeDataPyExt.so src/pyext.o -L/usr/local/cuda/lib64 `pkg-config --libs python` `pkg-config --libs opencv` -lpthread
ln -sf ./bin//_MakeDataPyExt.so .

6. Run the script
jerry@hq:/u01/cuda-convnet2$ python convnet.py --data-path=/u01/lisa/data/cifar10/cifar-10-batches-py --save-path=/u01/jerry/tmp --test-range=5 --train-range=1-4 --layer-def=./layers/layers-cifar10-11pct.cfg --layer-params=./layers/layer-params-cifar10-11pct.cfg --data-provider=cifar-cropped --test-freq=13 --epochs=100
Option --gpu (GPU override) not supplied
convnet.py usage:
Option                             Description                                                              Default
[--check-grads <0/1>          ] - Check gradients and quit?                                                [0]
[--color-noise <float>        ] - Add PCA noise to color channels with given scale                         [0]
[--conserve-mem <0/1>         ] - Conserve GPU memory (slower)?                                            [0]
[--conv-to-local <string,...> ] - Convert given conv layers to unshared local                              []
[--epochs <int>               ] - Number of epochs                                                         [50000]
[--feature-path <string>      ] - Write test data features to this path (to be used with --write-features) []
[--force-save <0/1>           ] - Force save before quitting                                               [0]
[--inner-size <int>           ] - Cropped DP: crop size (0 = don't crop)                                   [0]
[--layer-path <string>        ] - Layer file path prefix                                                   []
[--load-file <string>         ] - Load file                                                                []
[--logreg-name <string>       ] - Logreg cost layer name (for --test-out)                                  []
[--mini <int>                 ] - Minibatch size                                                           [128]
[--multiview-test <0/1>       ] - Cropped DP: test on multiple patches?                                    [0]
[--scalar-mean <float>        ] - Subtract this scalar from image (-1 = don't)                             [-1]
[--test-freq <int>            ] - Testing frequency                                                        [57]
[--test-one <0/1>             ] - Test on one batch at a time?                                             [1]
[--test-only <0/1>            ] - Test and quit?                                                           [0]
[--test-out <string>          ] - Output test case predictions to given path                               []
[--unshare-weights <string,...>] - Unshare weight matrices in given layers                                 []
[--write-features <string>    ] - Write test data features from given layer                                []
--data-path <string>            - Data path
--data-provider <string>        - Data provider
--gpu <int,...>                 - GPU override
--layer-def <string>            - Layer definition file
--layer-params <string>         - Layer parameter file
--save-file <string>            - Save file override
--save-path <string>            - Save path
--test-range <int[-int]>        - Data batch range: testing
--train-range <int[-int]>       - Data batch range: training

Since this Ubuntu runs in a virtual machine under Windows and cannot use the host's GPU or an external one, the program could not be run. A bit of a pity.