Installing and Using RocksDB and pyrocksdb

Environment: Ubuntu 12.04, RocksDB, pyrocksdb

RocksDB is a key-value store that Facebook built on top of Google's LevelDB. Like memcached and Redis it stores key-value pairs, and it supports RAM, flash, and disk storage; its writes are reported to be roughly 10x faster than LevelDB's (see https://github.com/facebook/rocksdb/wiki/Performance-Benchmarks). It sounds rather impressive, so let's install it and try it out.

Installation steps:

Installing rocksdb:

sudo git clone https://github.com/facebook/rocksdb.git
cd rocksdb

vi Makefile
Change this line: OPT += -O2 -fno-omit-frame-pointer -momit-leaf-frame-pointer
to this: OPT += -O2 -lrt -fno-omit-frame-pointer -momit-leaf-frame-pointer

Add export LD_PRELOAD=/lib/x86_64-linux-gnu/librt.so.1 to ~/.bashrc, then apply it with source ~/.bashrc

(These two steps fix the error "undefined symbol: clock_gettime".)

sudo git checkout 2.8.fb
sudo make shared_lib

cd ..
sudo chown jerry:jerry rocksdb -Rf
cd rocksdb

sudo cp librocksdb.so /usr/local/lib
sudo mkdir -p /usr/local/include/rocksdb/
sudo cp -r ./include/* /usr/local/include/

(These three steps fix the error "Fatal error: rocksdb/slice.h: No such file or directory".)

Installing pyrocksdb:
sudo pip install "Cython>=0.20"
sudo pip install git+git://github.com/stephan-hof/pyrocksdb.git@v0.2.1

At this point the installation is complete.
Start Python and try pyrocksdb:

jerry@hq:/u01/rocksdb$ python
Python 2.7.3 (default, Sep 26 2013, 20:03:06)
[GCC 4.6.3] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import rocksdb
>>> db = rocksdb.DB("test.db", rocksdb.Options(create_if_missing=True))
>>> db.put(b"key1", b"v1")
>>> db.put(b"key2", b"v2")
>>> db.get(b"key1")

'v1'

An Evaluation of Five Open-Source Deep Learning Packages

Environment: Windows 7, Ubuntu 12.04, H2O, RStudio, Pylearn2, Caffe, cuda-convnet2, Octave

Java version: H2O

See my article: http://blog.itpub.net/16582684/viewspace-1255976/

Pros: runs on CPU clusters, supports parallel and distributed training, and its R integration makes data handling convenient
Cons: no GPU support

C++ versions: Caffe, cuda-convnet2

See my articles: http://blog.itpub.net/16582684/viewspace-1256400/ and http://blog.itpub.net/16582684/viewspace-1254584/

Caffe pros: supports both CPU and GPU, offers Python and MATLAB interfaces, is fast, and currently delivers strong image-classification results
Cons: no cluster support

cuda-convnet2 pros: supports multiple GPUs on a single machine
Cons: no CPU support, and somewhat cumbersome to operate

Python version: Pylearn2

See my article: http://blog.itpub.net/16582684/viewspace-1243187/

Pros: supports CPU and GPU
Cons: no parallelism or cluster support

Octave/MATLAB version: DeepLearnToolbox

See my article: http://blog.itpub.net/16582684/viewspace-1255317/

Pros: concise code, which makes the algorithms easy to understand
Cons: runs only on a single CPU

Summary: each of these packages has scenarios it suits; there is no single best one. Deep learning frameworks are currently evolving in two directions: 1. GPU clusters; 2. mixed CPU/GPU clusters. The open-source packages already cover the first; so far only one or two companies have implemented the second.

A Brief Introduction to Cascalog

Environment: CentOS 5.7, CDH 4.2.0

Cascalog is a DSL defined in Clojure on top of Cascading and Hadoop. Thanks to Clojure's metadata facilities and functional programming paradigm, it expresses functions and queries very cleanly.

A walkthrough of typical usage:

1. Create a project with lein
lein new cascalog_incanter

2. Change into cascalog_incanter and edit project.clj as follows:

(defproject cascalog_incanter "0.1.0-SNAPSHOT"
  :description "FIXME: write description"
  :url "http://example.com/FIXME"
  :license {:name "Eclipse Public License"
            :url "http://www.eclipse.org/legal/epl-v10.html"}
  :dependencies [[org.clojure/clojure "1.6.0"]
                 [cascalog/cascalog-core "2.1.1"]
                 [incanter "1.5.5"]]
  :repositories [["conjars.org" "http://conjars.org/repo"]
                 ["cloudera" "https://repository.cloudera.com/artifactory/cloudera-repos/"]]
  :profiles {:provided
             {:dependencies
              [;[org.apache.hadoop/hadoop-core "1.2.1"] ; Apache Hadoop MapReduce v1
               ;[org.apache.hadoop/hadoop-core "2.0.0-mr1-cdh4.2.0"] ; CDH 4.2.0 MapReduce v1
               [org.apache.hadoop/hadoop-common "2.0.0-cdh4.2.0"] ; Cloudera Hadoop 4.2.0 YARN
               [org.apache.hadoop/hadoop-mapreduce-client-core "2.0.0-cdh4.2.0"] ; Cloudera Hadoop 4.2.0 MapReduce v2
               ]}
             :dev
             {:dependencies
              [[org.apache.hadoop/hadoop-minicluster "2.0.0-cdh4.2.0"] ; Cloudera Hadoop 4.2.0
               ]}})

3. Start an interactive session
lein repl

4. Follow the examples at http://cascalog.org/articles/getting_started.html

Titan-Hadoop, a Distributed Graph Computation Framework

Environment: CentOS 5.7, titan-0.5.0-hadoop2

titan-hadoop is a framework for distributed graph computation on Hadoop. Its predecessor was Faunus, a graph analytics engine that was later merged into the Titan project. The release can be downloaded from http://s3.thinkaurelius.com/downloads/titan/titan-0.5.0-hadoop2.zip; for usage details see http://s3.thinkaurelius.com/docs/titan/0.5.0/hadoop-getting-started.html .

The current release supports Hadoop 2.2.0; building against other Hadoop versions runs into quite a few problems, and I have not yet found titan-hadoop2 sources targeting other Hadoop releases.

Fixing "Too many open files" in titan-hadoop

Environment: CentOS 5.7, Titan-0.5.0-Hadoop2

When running graph traversals from gremlin.sh in titan-hadoop, the error "too many open files" comes up frequently; the root cause is exceeding the system's limit on open files.

The fix:

Check the current system setting:
ulimit -n
The default is 1024.

Raise the limit:

sudo vi /etc/security/limits.conf

Add or modify these two entries, then reboot the system:

* soft nofile 8192
* hard nofile 8192
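After logging back in, it is worth verifying that processes actually inherit the new limit. As an illustration (not part of the original fix), Python's standard resource module reads the per-process nofile limit, which is the same number ulimit -n prints:

```python
import resource

# RLIMIT_NOFILE is the per-process cap on open file descriptors.
# getrlimit returns the (soft, hard) pair; the soft limit is what
# "too many open files" is checked against.
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
print("soft nofile limit:", soft)
print("hard nofile limit:", hard)

# After the limits.conf change takes effect, soft should report 8192;
# with the old default it would report 1024.
```

The soft limit can also be raised at runtime (up to the hard limit) with resource.setrlimit, which is handy for testing before editing limits.conf.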

Reading DBpedia Data with Titan-Hadoop

Environment: CentOS, Titan-0.5.0-Hadoop2

Titan-Hadoop can read RDF in N_TRIPLES format. Download an .nt file from DBpedia (for example: http://data.dws.informatik.uni-mannheim.de/dbpedia/2014/zh/labels_en_uris_zh.nt.bz2) and write an input properties file like this:
[cloudera@localhost titan-0.5.0-hadoop2]$ vi conf/hadoop/rdf-input.properties

# input graph parameters
titan.hadoop.input.format=com.thinkaurelius.titan.hadoop.formats.edgelist.rdf.RDFInputFormat
titan.hadoop.input.location=examples/labels_en_uris_zh.nt
titan.hadoop.input.conf.format=N_TRIPLES
titan.hadoop.input.conf.as-properties=http://www.w3.org/1999/02/22-rdf-syntax-ns#type
titan.hadoop.input.conf.use-localname=true
titan.hadoop.input.conf.literal-as-property=true

# output data parameters
titan.hadoop.output.format=com.thinkaurelius.titan.hadoop.formats.graphson.GraphSONOutputFormat
titan.hadoop.sideeffect.format=org.apache.hadoop.mapreduce.lib.output.TextOutputFormat
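For context, each line of an N_TRIPLES file is just `<subject> <predicate> "literal"@lang .`, which is what RDFInputFormat splits into vertices and properties. The sketch below is purely illustrative Python (not Titan code): it parses one label triple of the kind found in labels_en_uris_zh.nt and derives the local name that the use-localname=true option keeps:

```python
import re

# One rdfs:label triple as it appears in labels_en_uris_zh.nt:
# subject URI, predicate URI, language-tagged literal, closing " ."
line = ('<http://dbpedia.org/resource/Paul_Nurse> '
        '<http://www.w3.org/2000/01/rdf-schema#label> '
        '"\u4fdd\u7f57\u00b7\u7eb3\u65af"@zh .')  # the literal is 保罗·纳斯

# Crude pattern for this exact triple shape; a real parser must also
# handle escape sequences and triples without language tags.
m = re.match(r'<([^>]+)> <([^>]+)> "(.*)"@(\w+) \.', line)
subject, predicate, literal, lang = m.groups()

# use-localname=true keeps only the part of a URI after the last
# "/" or "#"; the same idea in one line:
localname = re.split(r'[/#]', subject)[-1]
print(localname)  # Paul_Nurse
```

This matches the query output further down, where the vertex carries name=[Paul_Nurse], the Chinese label, and the full uri as properties.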

Query the data:

[cloudera@localhost titan-0.5.0-hadoop2]$ gremlin.sh

gremlin> g = HadoopFactory.open("conf/hadoop/rdf-input.properties")

gremlin> g.V.map()

...

17:37:12 INFO  org.apache.hadoop.mapred.LocalJobRunner  - reduce > reduce
17:37:12 INFO  org.apache.hadoop.mapred.Task  - Task 'attempt_local1370056218_0005_r_000000_0' done.
17:37:13 INFO  org.apache.hadoop.mapreduce.Job  - Job job_local1370056218_0005 completed successfully
17:37:13 INFO  org.apache.hadoop.mapreduce.Job  - Counters: 35
File System Counters
FILE: Number of bytes read=2911187173
FILE: Number of bytes written=3038059762
FILE: Number of read operations=0
FILE: Number of large read operations=0
FILE: Number of write operations=0
Map-Reduce Framework
Map input records=405909
Map output records=405909
Map output bytes=65118176
Map output materialized bytes=66297322
Input split bytes=268
Combine input records=405909
Combine output records=405909
Reduce input groups=405909
Reduce shuffle bytes=0
Reduce input records=405909
Reduce output records=0
Spilled Records=811818
Shuffled Maps =0
Failed Shuffles=0
Merged Map outputs=0
GC time elapsed (ms)=5136
CPU time spent (ms)=0
Physical memory (bytes) snapshot=0
Virtual memory (bytes) snapshot=0
Total committed heap usage (bytes)=2091909120
com.thinkaurelius.titan.hadoop.formats.edgelist.EdgeListInputMapReduce$Counters
IN_EDGES_CREATED=0
OUT_EDGES_CREATED=0
VERTEX_PROPERTIES_CREATED=1217727
VERTICES_CREATED=405909
VERTICES_EMITTED=405909
com.thinkaurelius.titan.hadoop.mapreduce.transform.PropertyMapMap$Counters
VERTICES_PROCESSED=405909
com.thinkaurelius.titan.hadoop.mapreduce.transform.VerticesMap$Counters
EDGES_PROCESSED=0
VERTICES_PROCESSED=405909
File Input Format Counters
Bytes Read=54114517
File Output Format Counters
Bytes Written=0
==>47994559900176       {label_=[慾望], _id=[47994559900176], name=[Want], uri=[http://dbpedia.org/resource/Want]}
==>60888991522182       {label_=[无机化学命名法], _id=[60888991522182], name=[IUPAC_nomenclature_of_inorganic_chemistry], uri=[http://dbpedia.org/resource/IUPAC_nomenclature_of_inorganic_chemistry]}
==>78841791384159       {label_=[诺伊斯塔特-格莱韦], _id=[78841791384159], name=[Neustadt-Glewe], uri=[http://dbpedia.org/resource/Neustadt-Glewe]}
==>78961407639797       {label_=[打狗英國領事館文化園區], _id=[78961407639797], name=[Former_British_Consulate_at_Takao], uri=[http://dbpedia.org/resource/Former_British_Consulate_at_Takao]}
==>95522075072286       {label_=[賴琳恩], _id=[95522075072286], name=[Lene_Lai], uri=[http://dbpedia.org/resource/Lene_Lai]}
==>153451821264409      {label_=[唐古韭], _id=[153451821264409], name=[Allium_tanguticum], uri=[http://dbpedia.org/resource/Allium_tanguticum]}
==>154857715280524      {label_=[温带], _id=[154857715280524], name=[Temperate_climate], uri=[http://dbpedia.org/resource/Temperate_climate]}
==>166027168671115      {label_=[GSh-18手槍], _id=[166027168671115], name=[GSh-18], uri=[http://dbpedia.org/resource/GSh-18]}
==>166513572484984      {label_=[WMA], _id=[166513572484984], name=[WMA], uri=[http://dbpedia.org/resource/WMA]}
==>182078824443170      {label_=[保罗·纳斯], _id=[182078824443170], name=[Paul_Nurse], uri=[http://dbpedia.org/resource/Paul_Nurse]}
==>211356647821663      {label_=[克魯克斯頓 (明尼蘇達州)], _id=[211356647821663], name=[Crookston,_Minnesota], uri=[http://dbpedia.org/resource/Crookston,_Minnesota]}
==>222227245802710      {label_=[我的女友是九尾狐], _id=[222227245802710], name=[My_Girlfriend_Is_a_Nine-Tailed_Fox], uri=[http://dbpedia.org/resource/My_Girlfriend_Is_a_Nine-Tailed_Fox]}
==>229972043766751      {label_=[李天荣], _id=[229972043766751], name=[Wilson_Lee_Flores], uri=[http://dbpedia.org/resource/Wilson_Lee_Flores]}
==>247488956381743      {label_=[1,2-双(二异丙基膦)乙烷], _id=[247488956381743], name=[1,2-Bis(diisopropylphosphino)ethane], uri=[http://dbpedia.org/resource/1,2-Bis(diisopropylphosphino)ethane]}
==>264200262547493      {label_=[欽迪龍屬], _id=[264200262547493], name=[Chindesaurus], uri=[http://dbpedia.org/resource/Chindesaurus]}
==>…

Installing a Clojure Runtime on Windows

Environment: Windows 7

Sometimes you need to write Clojure code on Windows, which requires installing a Clojure runtime there.

Method 1:

1. Install the curl tool.

2. Download lein.bat from https://raw.githubusercontent.com/technomancy/leiningen/stable/bin/lein.bat

3. Set environment variables
set HTTP_CLIENT=curl --proxy-ntlm --insecure -f -L -o
set HTTPS_PROXY=

4. Run the self-install: lein.bat self-install

5. Create a project and start a REPL
lein new project_name
lein.bat repl

Method 2:

1. Download the clojure-x.x.x.jar file directly.

2. Start the interactive environment
java -jar clojure-x.x.x.jar

Fixing Garbled Chinese from sqlite3 in a DOS Window

When operating sqlite3 from a DOS window, the window's default code page is GBK while SQLite text is usually UTF-8, so Chinese characters stored in SQLite come out garbled in the DOS window.

Open a DOS window and run chcp 65001 (65001 is the UTF-8 code page; 936 is GBK).

Right-click the DOS window's title bar, choose Properties from the menu, and change the font to Lucida Console.
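The mismatch is easy to reproduce from Python, whose stdlib sqlite3 module always round-trips text as Unicode stored as UTF-8; only the console's rendering differs. A small sketch with an in-memory database and a hypothetical table name:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE demo (name TEXT)")
conn.execute("INSERT INTO demo VALUES (?)", (u"\u4e2d\u6587",))  # "中文"

row = conn.execute("SELECT name FROM demo").fetchone()
# SQLite stores this text as UTF-8 bytes. Printed to a code page 936
# (GBK) console they render as mojibake; under chcp 65001 (UTF-8)
# they display correctly.
utf8_bytes = row[0].encode("utf-8")
print(utf8_bytes)
conn.close()
```

In other words, the data in the database is fine; chcp 65001 only changes how the console decodes the bytes it receives.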

 

Installing psycopg2 under Anaconda

Environment: Ubuntu 12.04, Anaconda-1.9.2-Linux-x86_64

First install binstar:

conda install binstar

Then use binstar to search for conda packages that provide psycopg2:

binstar search -t conda psycopg2
Run 'binstar show <USER/PACKAGE>' to get more details:
Packages:
Name | Access | Package Types | Summary
------------------------- | ------------ | --------------- | --------------------
auto/psycopg2database | published | conda | http://jimmyg.org/work/code/psycopg2database/index.html
bencpeters/psycopg2 | public | conda | Python-PostgreSQL Database Adapter
chuongdo/psycopg2 | public | conda | Python-PostgreSQL Database Adapter
dan_blanchard/psycopg2 | public | conda | http://initd.org/psycopg/
davidbgonzalez/psycopg2 | public | conda | None
deric/psycopg2 | public | conda | None
jonrowland/psycopg2 | public | conda | None
kevincal/psycopg2 | published | conda |
topper/psycopg2-windows | public | conda | PostgreSQL adapter for the Python programming language
trent/psycopg2 | public | conda | None
Found 10 packages

One usable channel is bencpeters/psycopg2.

Install it as follows:

 conda install -c https://conda.binstar.org/bencpeters psycopg2
Fetching package metadata: .Error: unknown host: http://repo.continuum.io/pkgs/pro/linux-64/
.Error: unknown host: http://repo.continuum.io/pkgs/free/linux-64/
.
Solving package specifications: .
Package plan for installation in environment /home/jerry/anaconda:

The following packages will be downloaded:

package | build
---------------------------|-----------------
psycopg2-2.5.3 | py27_0 393 KB

The following packages will be linked:

package | build
---------------------------|-----------------
psycopg2-2.5.3 | py27_0 hard-link

Proceed ([y]/n)? y

Fetching packages …
psycopg2-2.5.3-py27_0.tar.bz2 28% |#######################