There are two ways to kill all postgres processes:
Method 1:
ps ax | grep "postgres" | cut -f2 -d" " | xargs kill
Method 2:
ps ax | grep "postgres" | awk '{print $1}' | xargs kill
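A shorter alternative worth noting (a sketch, assuming pkill is available and the processes are actually named postgres; it also avoids the grep process matching itself):
pkill postgres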
Environment: CentOS 6.4, Postfix
I recently set up a mail server for other services to use, but found that mail could not be sent between accounts in the same domain (for example, user1@360.cn could not send to user2@360.cn). The mail log showed no errors:
sudo tail -n 100 /var/log/maillog
With no solution in sight, the workaround was to send from an account in another domain (e.g. user1@126.cn).
Environment: CentOS 6.4, Postfix
Since reports and error notifications need to be sent by mail, setting up a mail server became necessary.
The steps are as follows:
1. Install the packages
yum install postfix system-switch-mail
2. Change the default MTA to Postfix
/usr/sbin/alternatives --set mta /usr/sbin/sendmail.postfix
3. Verify that the MTA has been changed to Postfix:
alternatives --display mta
4. Configure the Postfix main configuration file /etc/postfix/main.cf
Have Postfix listen on all network interfaces:
inet_interfaces = all
Hostname of the mail host running Postfix (the FQDN, as reported by hostname -f):
myhostname = quickstart.cloudera
Domain of the mail host running Postfix (leave commented out if there is no domain):
#mydomain = xxx.xxx
Address placed in the "mail from" header of every message sent from this host:
myorigin = $mydomain
Hostnames or domains this host accepts mail for; Postfix only accepts a message when the recipient address matches this parameter:
mydestination = $myhostname, localhost.$mydomain, localhost, mail.$mydomain, $mydomain
IP networks whose mail may be relayed:
mynetworks = 127.0.0.0/8, 192.168.10.0/24
Domains whose mail may be relayed:
relay_domains = $mydestination
5. Restart the Postfix service
service postfix restart
6. Test the mail service
telnet localhost 25
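A minimal SMTP session over that telnet connection might look like this (the addresses below are placeholders):
HELO localhost
MAIL FROM:<user1@example.com>
RCPT TO:<user2@example.com>
DATA
Subject: test

test message body
.
QUIT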
Changing the Postfix port
Edit /etc/postfix/master.cf
Comment out this line:
smtp inet n - n - - smtpd
Then add a line like the one below, where 2500 is the new port number:
2500 inet n - n - - smtpd
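After editing master.cf, reload Postfix and confirm it is listening on the new port (a quick check, assuming netstat is installed, as it usually is on CentOS 6):
service postfix reload
netstat -lnt | grep 2500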
The following command computes π (4·arctan(1)) to 5000 decimal places with bc and times how long it takes:
time echo "scale=5000; 4*a(1)" | bc -l -q
Environment: CentOS 6.3
There is often a need to simulate a service on a particular port to send and receive data, and the nc (netcat) utility that ships with Linux turns out to be very convenient. The command is as follows:
nc -l 8888
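To send data to that listener from another terminal or host (the host address and message here are hypothetical):
echo "hello" | nc 192.168.10.100 8888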
https://questions.cms.gov/faq.php?faqId=7977
https://archive.ics.uci.edu/ml/datasets.html
Environment: ElasticSearch 1.4.4, elasticsearch-river-kafka-1.2.1-plugin, Kafka 0.8.1
Install the Kafka river plugin for ElasticSearch:
./bin/plugin -install kafka-river -url https://github.com/mariamhakobyan/elasticsearch-river-kafka/releases/download/v1.2.1/elasticsearch-river-kafka-1.2.1-plugin.zip
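To confirm the plugin was installed, the installed plugins can be listed (assuming the standard ES 1.x plugin script):
./bin/plugin --list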
Add the river metadata:
curl -XPUT 'localhost:9200/_river/kafka-river/_meta' -d '
{
  "type" : "kafka",
  "kafka" : {
    "zookeeper.connect" : "xxx.xxx.xxx.xxx:2181,xxx.xxx.xxx.xxx:2181,xxx.xxx.xxx.xxx:2181",
    "zookeeper.connection.timeout.ms" : 10000,
    "topic" : "flume-topic1",
    "message.type" : "json"
  },
  "index" : {
    "index" : "kafka-index",
    "type" : "status",
    "bulk.size" : 3,
    "concurrent.requests" : 1,
    "action.type" : "index",
    "flush.interval" : "12h"
  }
}'
Restart the ElasticSearch service.
Check the river metadata and the indexed data:
curl -XGET 'http://localhost:9200/_river/kafka-river/_search?pretty'
curl -XGET 'http://localhost:9200/_river/kafka-index/_search?pretty'
To delete the river if it is no longer needed:
curl -XDELETE 'localhost:9200/_river/kafka-river/'
Produce some JSON data in Kafka:
bin/kafka-console-producer.sh --topic flume-topic1 --broker-list xxx.xxx.xxx.xxx:9092,xxx.xxx.xxx.xxx:9092,xxx.xxx.xxx.xxx:9092
{"id":"123", "name":"hq"}
{"id":"123", "name":"hq"}
{"id":"123", "name":"hq"}
{"id":"123", "name":"hq"}
Check the resulting data:
curl -XGET 'http://localhost:9200/kafka-index/_search?pretty'
Environment: CentOS 6.3, Kafka 0.8.1, Flume 1.6, elasticsearch-1.4.4
The configuration file is as follows:
[adadmin@s9 apache-flume-1.6.0-bin]$ vi conf/flume.conf
#define source, sink, channel
a1.sources = r1
a1.sinks = k1
a1.channels = c1
# Describe/configure the source
a1.sources.r1.type = exec
a1.sources.r1.channels = c1
a1.sources.r1.command = tail -F /home/adadmin/.bash_history
# Describe the sink
#only test
#a1.sinks.k1.type = logger
#load to Kafka
#a1.sinks.k1.type = org.apache.flume.sink.kafka.KafkaSink
#a1.sinks.k1.batchSize = 5
#a1.sinks.k1.brokerList = xxx.xxx.xxx.xxx:9092,xxx.xxx.xxx.xxx:9092,xxx.xxx.xxx.xxx:9092
#a1.sinks.k1.topic = flume_topic1
#load to ElasticSearch
a1.sinks.k1.type = org.apache.flume.sink.elasticsearch.ElasticSearchSink
a1.sinks.k1.hostNames = xxx.xxx.xxx.xxx:9300
a1.sinks.k1.clusterName = elasticsearch
a1.sinks.k1.batchSize = 100
a1.sinks.k1.indexName = logstash
a1.sinks.k1.ttl = 5
a1.sinks.k1.serializer = org.apache.flume.sink.elasticsearch.ElasticSearchLogStashEventSerializer
# Use a channel which buffers events in memory
a1.channels.c1.type = memory
a1.channels.c1.capacity = 1000
a1.channels.c1.transactionCapacity = 100
# Bind the source and sink to the channel
a1.sources.r1.channels = c1
a1.sinks.k1.channel = c1
Start the Flume agent:
[adadmin@s9 apache-flume-1.6.0-bin]$ bin/flume-ng agent -c /home/adadmin/apache-flume-1.6.0-bin/conf -f /home/adadmin/apache-flume-1.6.0-bin/conf/flume.conf -n a1 -Dflume.root.logger=INFO,console
(Note: when writing to ElasticSearch, the ElasticSearch jars need to be copied into Flume's plugin directory first, like so:
[adadmin@s9 apache-flume-1.6.0-bin]$ mkdir -p plugins.d/elasticsearch/libext
[adadmin@s9 apache-flume-1.6.0-bin]$ cp /home/adadmin/elasticsearch-1.4.4/lib/*.jar plugins.d/elasticsearch/libext
)
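Once the agent is running, you can check whether events have reached ElasticSearch by searching the index; the ElasticSearchSink appends a date suffix to indexName by default, hence the wildcard (the host is an assumption):
curl -XGET 'http://localhost:9200/logstash-*/_search?pretty'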
Environment: Pentaho 5.3, PostgreSQL 9.3
I recently looked into whether Pentaho Report Designer or CDE has built-in support for year-over-year and period-over-period comparisons; unfortunately there is none, so the problem has to be solved on the database side.
Suppose there are two tables, test and dim_date:
test:
20140818;4
20150817;40
20150818;10
20150819;55
20160817;30
20160818;50
dim_date:
20150104;"2015年01月04日";"2015年";"第01月";"2015-01-04";"第1周"
20150103;"2015年01月03日";"2015年";"第01月";"2015-01-03";"第1周"
20150102;"2015年01月02日";"2015年";"第01月";"2015-01-02";"第1周"
This can be done with the lag OLAP window function; the SQL is as follows:
select * from (
  select date_id, volume, lag(volume, 1) over (order by date_fmt) pre_volume, lag(volume, 2) over (order by date_fmt) pre_365_volume from (
    select b.date_id, b.date_fmt, a.volume from test a, dim_date b where a.date_id = b.date_id and b.date_fmt in (to_date('2016-08-18', 'yyyy-mm-dd') - interval '1 day', '2016-08-18', to_date('2016-08-18', 'yyyy-mm-dd') - interval '1 year')
  ) m
) n where n.date_id = 20160818
This returns, for the query date, the previous day's value and the value for the same date one year earlier, which is exactly what is needed for period-over-period and year-over-year comparisons.
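With the sample data above, the query for date_id = 20160818 should return something like the following (50 for 2016-08-18, 30 for the previous day 2016-08-17, and 10 for 2015-08-18):
 date_id  | volume | pre_volume | pre_365_volume
----------+--------+------------+----------------
 20160818 |     50 |         30 |             10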