Introduction to the ELK log analysis system
Introduction to Elasticsearch
Introduction to Logstash
Introduction to Kibana
Deploying the ELK log analysis system
Deployment plan for the ELK log analysis system:
Host | OS | Hostname | IP | Main software |
---|---|---|---|---|
Server | CentOS 7.4 | node1 | 20.0.0.31 | Elasticsearch, Kibana |
Server | CentOS 7.4 | node2 | 20.0.0.32 | Elasticsearch |
Server | CentOS 7.4 | apache | 20.0.0.33 | Logstash, Apache |
[Configure the Elasticsearch environment]
// Log in to 20.0.0.31: change the hostname, configure name resolution, and check the Java environment
[root@localhost ~]# hostnamectl set-hostname node1
[root@localhost ~]# su
[root@node1 ~]# vim /etc/hosts
20.0.0.31 node1
20.0.0.32 node2
[root@node1 ~]# java -version
openjdk version "1.8.0_131"
OpenJDK Runtime Environment (build 1.8.0_131-b12)
OpenJDK 64-Bit Server VM (build 25.131-b12, mixed mode)
// Log in to 20.0.0.32: change the hostname, configure name resolution, and check the Java environment
[root@localhost ~]# hostnamectl set-hostname node2
[root@localhost ~]# su
[root@node2 ~]# vim /etc/hosts
20.0.0.31 node1
20.0.0.32 node2
[root@node2 ~]# java -version
openjdk version "1.8.0_131"
OpenJDK Runtime Environment (build 1.8.0_131-b12)
OpenJDK 64-Bit Server VM (build 25.131-b12, mixed mode)
[Install the Elasticsearch software]
// Log in to 20.0.0.31, host node1
1. Install the elasticsearch-5.5.0.rpm package
Upload elasticsearch-5.5.0.rpm to /opt
[root@node1 ~]# cd /opt
[root@node1 opt]# rpm -ivh elasticsearch-5.5.0.rpm
2. Load the system service
[root@node1 opt]# systemctl daemon-reload
[root@node1 opt]# systemctl enable elasticsearch.service
3. Edit the main Elasticsearch configuration file
[root@node1 opt]# cp /etc/elasticsearch/elasticsearch.yml /etc/elasticsearch/elasticsearch.yml.bak
[root@node1 opt]# vim /etc/elasticsearch/elasticsearch.yml
:set nu    // show line numbers
17 cluster.name: my-elk-cluster    // cluster name
23 node.name: node1    // node name
33 path.data: /data/elk_data    // data directory
37 path.logs: /var/log/elasticsearch/    // log directory
43 bootstrap.memory_lock: false    // do not lock memory at startup
55 network.host: 0.0.0.0    // IP address the service binds to; 0.0.0.0 means all addresses
59 http.port: 9200    // listening port 9200
68 discovery.zen.ping.unicast.hosts: ["node1", "node2"]    // cluster discovery via unicast
Review the settings just configured:
[root@node1 opt]# grep -v "^#" /etc/elasticsearch/elasticsearch.yml
cluster.name: my-elk-cluster
node.name: node1
path.data: /data/elk_data
path.logs: /var/log/elasticsearch/
bootstrap.memory_lock: false
network.host: 0.0.0.0
http.port: 9200
discovery.zen.ping.unicast.hosts: ["node1", "node2"]
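As an alternative to editing the file by hand in vim, the same eight settings can be written in one step with a here-doc. A minimal sketch, pointed at a temporary file so it is safe to try anywhere; on the real node you would target /etc/elasticsearch/elasticsearch.yml (after making the backup, as above):

```shell
# Write the cluster settings in one shot (temp file stands in for elasticsearch.yml)
conf=$(mktemp)
cat > "$conf" <<'EOF'
cluster.name: my-elk-cluster
node.name: node1
path.data: /data/elk_data
path.logs: /var/log/elasticsearch/
bootstrap.memory_lock: false
network.host: 0.0.0.0
http.port: 9200
discovery.zen.ping.unicast.hosts: ["node1", "node2"]
EOF
# Same sanity check the walkthrough uses
grep -v '^#' "$conf"
```

The quoted `'EOF'` delimiter prevents the shell from expanding anything inside the block, so the settings land verbatim.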
4. Create the data directory and grant ownership
[root@node1 opt]# mkdir -p /data/elk_data
[root@node1 opt]# chown elasticsearch:elasticsearch /data/elk_data/
5. Start Elasticsearch and check that it came up
[root@node1 opt]# systemctl start elasticsearch.service
[root@node1 opt]# netstat -antp | grep 9200
6. Check the node information: on the physical host 20.0.0.1, open http://20.0.0.31:9200 in a browser; the node information below is returned
{
  "name" : "node1",
  "cluster_name" : "my-elk-cluster",
  "cluster_uuid" : "_8fxoBPLQgyVxw8n9DfE9w",
  "version" : {
    "number" : "5.5.0",
    "build_hash" : "260387d",
    "build_date" : "2017-06-30T23:16:05.735Z",
    "build_snapshot" : false,
    "lucene_version" : "6.6.0"
  },
  "tagline" : "You Know, for Search"
}
// Log in to 20.0.0.32, host node2
1. Install the elasticsearch-5.5.0.rpm package
Upload elasticsearch-5.5.0.rpm to /opt
[root@node2 ~]# cd /opt
[root@node2 opt]# rpm -ivh elasticsearch-5.5.0.rpm
2. Load the system service
[root@node2 opt]# systemctl daemon-reload
[root@node2 opt]# systemctl enable elasticsearch.service
3. Edit the main Elasticsearch configuration file
[root@node2 opt]# cp /etc/elasticsearch/elasticsearch.yml /etc/elasticsearch/elasticsearch.yml.bak
[root@node2 opt]# vim /etc/elasticsearch/elasticsearch.yml
:set nu    // show line numbers
17 cluster.name: my-elk-cluster    // cluster name
23 node.name: node2    // node name
33 path.data: /data/elk_data    // data directory
37 path.logs: /var/log/elasticsearch/    // log directory
43 bootstrap.memory_lock: false    // do not lock memory at startup
55 network.host: 0.0.0.0    // IP address the service binds to; 0.0.0.0 means all addresses
59 http.port: 9200    // listening port 9200
68 discovery.zen.ping.unicast.hosts: ["node1", "node2"]    // cluster discovery via unicast
Review the settings just configured:
[root@node2 opt]# grep -v "^#" /etc/elasticsearch/elasticsearch.yml
cluster.name: my-elk-cluster
node.name: node2
path.data: /data/elk_data
path.logs: /var/log/elasticsearch/
bootstrap.memory_lock: false
network.host: 0.0.0.0
http.port: 9200
discovery.zen.ping.unicast.hosts: ["node1", "node2"]
4. Create the data directory and grant ownership
[root@node2 opt]# mkdir -p /data/elk_data
[root@node2 opt]# chown elasticsearch:elasticsearch /data/elk_data/
5. Start Elasticsearch and check that it came up
[root@node2 opt]# systemctl start elasticsearch.service
[root@node2 opt]# netstat -antp | grep 9200
[Check cluster health and state]
On the physical host 20.0.0.1, open http://20.0.0.31:9200/_cluster/health?pretty in a browser    // check cluster health
The page shows:
{
  "cluster_name" : "my-elk-cluster",
  "status" : "green",
  "timed_out" : false,
  "number_of_nodes" : 2,
  "number_of_data_nodes" : 2,
  "active_primary_shards" : 0,
  "active_shards" : 0,
  "relocating_shards" : 0,
  "initializing_shards" : 0,
  "unassigned_shards" : 0,
  "delayed_unassigned_shards" : 0,
  "number_of_pending_tasks" : 0,
  "number_of_in_flight_fetch" : 0,
  "task_max_waiting_in_queue_millis" : 0,
  "active_shards_percent_as_number" : 100.0
}
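When scripting this check, the "status" field is the value that matters (green: all shards allocated; yellow: primaries only; red: some primaries missing). A hedged sketch of a helper that turns the status string into an exit code for monitoring; the commented curl extraction assumes the node at 20.0.0.31 from this walkthrough is reachable:

```shell
# Map an Elasticsearch cluster status to an exit code (green=0, yellow=1, red=2)
status_to_exit() {
  case "$1" in
    green)  return 0 ;;
    yellow) return 1 ;;
    red)    return 2 ;;
    *)      return 3 ;;   # unknown value or cluster unreachable
  esac
}

# Live usage (commented out; needs the cluster built in this walkthrough):
# status=$(curl -s 'http://20.0.0.31:9200/_cluster/health?pretty' \
#            | grep '"status"' | sed 's/.*: "\(.*\)",/\1/')
# status_to_exit "$status"
status_to_exit green; echo $?    # → 0
```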
On the physical host 20.0.0.1, open http://20.0.0.31:9200/_cluster/state?pretty in a browser    // check cluster state information
The page shows:
{
  "cluster_name" : "my-elk-cluster",
  "version" : 3,
  "state_uuid" : "_8y7oIDsSs-qaWagkgWqLg",
  "master_node" : "mwj6qmVlRxSvOvJQCmXHXA",
  "blocks" : { },
  "nodes" : {
    "mwj6qmVlRxSvOvJQCmXHXA" : {
      "name" : "node1",
      "ephemeral_id" : "tJEJ_A_4She_gL7bJu6ZNg",
      "transport_address" : "20.0.0.31:9300",
      "attributes" : { }
    },
    "RqMHtYJ9RHu-QeqfhFqq-g" : {
      "name" : "node2",
      "ephemeral_id" : "d3SmnxtcQCa19iGGSV30DA",
      "transport_address" : "20.0.0.32:9300",
      "attributes" : { }
    }
  },
  "metadata" : {
    "cluster_uuid" : "_8fxoBPLQgyVxw8n9DfE9w",
    "templates" : { },
    "indices" : { },
    "index-graveyard" : {
      "tombstones" : [ ]
    }
  },
  "routing_table" : {
    "indices" : { }
  },
  "routing_nodes" : {
    "unassigned" : [ ],
    "nodes" : {
      "mwj6qmVlRxSvOvJQCmXHXA" : [ ],
      "RqMHtYJ9RHu-QeqfhFqq-g" : [ ]
    }
  }
}
[Install the elasticsearch-head plugin]
Checking the cluster this way is quite inconvenient; installing the elasticsearch-head plugin gives us a management UI for the cluster.
// Log in to 20.0.0.31, host node1
Upload node-v8.2.1.tar.gz to /opt
[root@node1 ~]# yum install gcc gcc-c++ make -y
// Compile and install the Node.js dependency
[root@node1 ~]# cd /opt/
[root@node1 opt]# tar zxvf node-v8.2.1.tar.gz
[root@node1 opt]# cd node-v8.2.1/
[root@node1 node-v8.2.1]# ./configure
[root@node1 node-v8.2.1]# make    (compilation takes a while; make -j4 allows up to 4 parallel compile jobs, e.g. on this 2-core VM; scale the -j value with your core count)
[root@node1 node-v8.2.1]# make install
[Install the PhantomJS front-end framework]
Upload phantomjs-2.1.1-linux-x86_64.tar.bz2 to /usr/local/src/
[root@node1 node-v8.2.1]# cd /usr/local/src/
[root@node1 src]# tar jxvf phantomjs-2.1.1-linux-x86_64.tar.bz2
[root@node1 src]# cd phantomjs-2.1.1-linux-x86_64/bin
[root@node1 bin]# cp phantomjs /usr/local/bin/
[Install the elasticsearch-head data visualization tool]
Upload elasticsearch-head.tar.gz to /usr/local/src
[root@node1 bin]# cd /usr/local/src/
[root@node1 src]# tar zxvf elasticsearch-head.tar.gz
[root@node1 src]# cd elasticsearch-head/
[root@node1 elasticsearch-head]# npm install
[Edit the main configuration file]
[root@node1 elasticsearch-head]# cd ~
[root@node1 ~]# vim /etc/elasticsearch/elasticsearch.yml    // append the following at the end of the file
http.cors.enabled: true    // enable cross-origin access support; default is false
http.cors.allow-origin: "*"    // domains allowed for cross-origin access
[root@node1 ~]# systemctl restart elasticsearch
[Start the elasticsearch-head server]
[root@node1 ~]# cd /usr/local/src/elasticsearch-head/
[root@node1 elasticsearch-head]# npm run start &    // run in the background
(system output follows)
[1] 101081
[root@node1 elasticsearch-head]#
> elasticsearch-head@0.0.0 start /usr/local/src/elasticsearch-head
> grunt server
Running "connect:server" (connect) task
Waiting forever...
Started connect web server on http://localhost:9100
[root@node1 elasticsearch-head]# netstat -lnupt | grep 9100
[root@node1 elasticsearch-head]# netstat -lnupt | grep 9200
// Log in to 20.0.0.32, host node2 (same steps as on node1)
Upload node-v8.2.1.tar.gz to /opt
[root@node2 ~]# yum install gcc gcc-c++ make -y
// Compile and install the Node.js dependency
[root@node2 ~]# cd /opt/
[root@node2 opt]# tar zxvf node-v8.2.1.tar.gz
[root@node2 opt]# cd node-v8.2.1/
[root@node2 node-v8.2.1]# ./configure
[root@node2 node-v8.2.1]# make    (compilation takes a while; make -j4 allows up to 4 parallel compile jobs, e.g. on this 2-core VM; scale the -j value with your core count)
[root@node2 node-v8.2.1]# make install
[Install the PhantomJS front-end framework]
Upload phantomjs-2.1.1-linux-x86_64.tar.bz2 to /usr/local/src/
[root@node2 node-v8.2.1]# cd /usr/local/src/
[root@node2 src]# tar jxvf phantomjs-2.1.1-linux-x86_64.tar.bz2
[root@node2 src]# cd phantomjs-2.1.1-linux-x86_64/bin
[root@node2 bin]# cp phantomjs /usr/local/bin/
[Install the elasticsearch-head data visualization tool]
Upload elasticsearch-head.tar.gz to /usr/local/src
[root@node2 bin]# cd /usr/local/src/
[root@node2 src]# tar zxvf elasticsearch-head.tar.gz
[root@node2 src]# cd elasticsearch-head/
[root@node2 elasticsearch-head]# npm install
[Edit the main configuration file]
[root@node2 elasticsearch-head]# cd ~
[root@node2 ~]# vim /etc/elasticsearch/elasticsearch.yml    // append the following at the end of the file
http.cors.enabled: true    // enable cross-origin access support; default is false
http.cors.allow-origin: "*"    // domains allowed for cross-origin access
[root@node2 ~]# systemctl restart elasticsearch
[Start the elasticsearch-head server]
[root@node2 ~]# cd /usr/local/src/elasticsearch-head/
[root@node2 elasticsearch-head]# npm run start &    // run in the background
(system output follows)
[1] 101081
[root@node2 elasticsearch-head]#
> elasticsearch-head@0.0.0 start /usr/local/src/elasticsearch-head
> grunt server
Running "connect:server" (connect) task
Waiting forever...
Started connect web server on http://localhost:9100
[root@node2 elasticsearch-head]# netstat -lnupt | grep 9100
[root@node2 elasticsearch-head]# netstat -lnupt | grep 9200
On the physical host, open http://20.0.0.31:9100 in a browser
In the field next to Elasticsearch, enter http://20.0.0.31:9200
Click Connect
The cluster health value shows: green (0 of 0)
On the physical host, open http://20.0.0.32:9100 in a browser
In the field next to Elasticsearch, enter http://20.0.0.32:9200
Click Connect
The cluster health value shows: green (0 of 0)
// In the browser on the physical host, open http://20.0.0.31:9100
Create a new index named index-demo with type test; it is created successfully
You can see the index is split into 5 shards by default, each with one replica
Click Data Browse and you will find the index index-demo created on node1, with type test and related information.
// On host 20.0.0.31 (node1), you can also create a document from the command line
[root@node1 ~]# curl -XPUT 'localhost:9200/index-demo/test/1?pretty&pretty' -H 'Content-Type: application/json' -d '{"user":"zhangsan","mesg":"hello world"}'
{
  "_index" : "index-demo",
  "_type" : "test",
  "_id" : "1",
  "_version" : 1,
  "result" : "created",
  "_shards" : {
    "total" : 2,
    "successful" : 2,
    "failed" : 0
  },
  "created" : true
}
The document is then visible under index-demo.
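The same document can be read back over the REST endpoint it was written to, without the head UI. A hedged sketch; the commented curl assumes the node1 address used throughout this walkthrough:

```shell
# A document indexed at /<index>/<type>/<id> is returned by a GET on the same path.
# The URL is built here; run the commented curl on a host that can reach node1.
url='http://20.0.0.31:9200/index-demo/test/1?pretty'
# curl -XGET "$url"
echo "$url"
```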
// Log in to 20.0.0.33
[Install Logstash and collect some logs into Elasticsearch]
(Stop the firewall and kernel-level protection)
systemctl stop firewalld
systemctl disable firewalld
vim /etc/selinux/config
Change:
SELINUX=disabled
1. Change the hostname
[root@localhost ~]# hostnamectl set-hostname apache
[root@localhost ~]# su
2. Install the Apache service (httpd)
[root@apache ~]# yum -y install httpd
[root@apache ~]# systemctl start httpd
3. Install the Java environment
[root@apache ~]# java -version
openjdk version "1.8.0_131"
OpenJDK Runtime Environment (build 1.8.0_131-b12)
OpenJDK 64-Bit Server VM (build 25.131-b12, mixed mode)
(installed by default; nothing more to do)
4. Install Logstash
Upload logstash-5.5.1.rpm to /opt
[root@apache ~]# cd /opt/
[root@apache opt]# rpm -ivh logstash-5.5.1.rpm
[root@apache opt]# systemctl start logstash.service
[root@apache opt]# systemctl enable logstash.service
[root@apache opt]# ln -s /usr/share/logstash/bin/logstash /usr/local/bin/
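The symlink in step 4 is what makes the bare `logstash` command resolvable on PATH. The mechanism can be sketched with throwaway paths (stub script and temp directory are hypothetical, so this runs anywhere):

```shell
# Reproduce the ln -s trick: a stub executable stands in for /usr/share/logstash/bin/logstash
bindir=$(mktemp -d)
printf '#!/bin/sh\necho logstash-stub\n' > "$bindir/logstash-real"
chmod +x "$bindir/logstash-real"
# Symlink under a "PATH" directory, just as step 4 links into /usr/local/bin
ln -s "$bindir/logstash-real" "$bindir/logstash"
"$bindir/logstash"    # → logstash-stub
```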
5. Test that Logstash (on apache) and Elasticsearch (on the nodes) hand off correctly
Options for the logstash test command:
· -f    specify a Logstash configuration file; Logstash is configured from it
· -e    takes a string used as the Logstash configuration (if "", stdin is used as input and stdout as output by default)
· -t    test whether the configuration file is valid, then exit
6. Use standard input as input and standard output as output; log in to 20.0.0.33, on the Apache server
[root@apache opt]# logstash -e 'input { stdin{} } output { stdout{} }'
(wait for it to start)
15:59:16.855 [Api Webserver] INFO logstash.agent - Successfully started Logstash API endpoint {:port=>9600}
This final line means the agent started successfully and opened the API endpoint on port 9600
Type www.baidu.com or www.sina.com.cn and you will see output
7. Use rubydebug for detailed output; a codec is an encoder/decoder
[root@apache opt]# logstash -e 'input { stdin{} } output { stdout { codec=>rubydebug } }'
(wait for it to load)
16:06:00.814 [Api Webserver] INFO logstash.agent - Successfully started Logstash API endpoint {:port=>9600}
When loading finishes with the line above, type the following:
www.baidu.com
{
    "@timestamp" => 2020-10-29T08:06:39.813Z,
      "@version" => "1",
          "host" => "apache",
       "message" => "www.baidu.com"
}
// Use Logstash to write data into Elasticsearch (wiring the input to the output)
[root@apache opt]# logstash -e 'input { stdin{} } output { elasticsearch { hosts=>["20.0.0.31:9200"] } }'
(wait)
16:11:47.276 [Api Webserver] INFO logstash.agent - Successfully started Logstash API endpoint {:port=>9600}
Once the line above appears, type:
www.baidu.com
www.sina.com.cn
www.google.com.cn
8. Log in to the physical host 20.0.0.1
Open http://20.0.0.31:9100 in a browser    // view index information
A new index logstash-2020.10.29 has appeared
Click Data Browse to inspect the corresponding content; the output side now reaches Elasticsearch.
[Log in to the Apache host 20.0.0.33 and configure the hand-off]
Logstash configuration files
A Logstash configuration file has up to three sections: input, output, and filter (as needed)
[root@apache opt]# chmod o+r /var/log/messages
[root@apache opt]# ll /var/log/messages
-rw----r--. 1 root root 697326 Oct 29 16:20 /var/log/messages
[root@apache opt]# vim /etc/logstash/conf.d/system.conf
Add:
input {
    file{
        path => "/var/log/messages"
        type => "system"
        start_position => "beginning"
    }
}
output {
    elasticsearch {
        hosts => ["20.0.0.31:9200"]
        index => "system-%{+YYYY.MM.dd}"
    }
}
[root@apache opt]# systemctl restart logstash.service
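The chmod o+r above matters because the Logstash service typically runs as the unprivileged logstash user, which otherwise cannot read /var/log/messages. A quick way to verify world-readability, sketched against a temp file so it runs anywhere:

```shell
# mktemp creates the file mode 0600; o+r adds only the "other" read bit,
# the same minimal grant the walkthrough applies to /var/log/messages
f=$(mktemp)
chmod o+r "$f"
stat -c %A "$f"    # permission string; the 8th character is the "other" read bit
```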
// Log in to the physical host 20.0.0.1
Open http://20.0.0.31:9100 in a browser    // view index information
A new system-2020.10.29 index has appeared
[Install Kibana on the node1 host]
// Log in to 20.0.0.31 (node1) and install Kibana there
Upload kibana-5.5.1-x86_64.rpm to /usr/local/src
[root@node1 ~]# cd /usr/local/src/
[root@node1 src]# rpm -ivh kibana-5.5.1-x86_64.rpm
[root@node1 src]# cd /etc/kibana/
[root@node1 kibana]# cp kibana.yml kibana.yml.bak
[root@node1 kibana]# vim kibana.yml
:set nu    // show line numbers
2 server.port: 5601    // port Kibana listens on
7 server.host: "0.0.0.0"    // address Kibana listens on
21 elasticsearch.url: "http://20.0.0.31:9200"    // connect to Elasticsearch
30 kibana.index: ".kibana"    // add the .kibana index in Elasticsearch
[root@node1 kibana]# systemctl start kibana.service
[root@node1 kibana]# systemctl enable kibana.service
// Log in to the physical host 20.0.0.1
Open 20.0.0.31:5601 in a browser
On first login, create an index pattern:
Under "index name or pattern", enter: system-*    // this hooks up the system log index
Then click the Create button below
Then click Discover in the upper left and the system-* data appears
Then click add next to host; the chart on the right now shows only the Time and host columns, which is easier to read
[Hook up the Apache host's Apache log files (access and error)]
// On the apache host 20.0.0.33
[root@apache ~]# cd /etc/logstash/conf.d/
[root@apache conf.d]# touch apache_log.conf
[root@apache conf.d]# vim apache_log.conf
Add:
input {
    file{
        path => "/etc/httpd/logs/access_log"
        type => "access"
        start_position => "beginning"
    }
    file{
        path => "/etc/httpd/logs/error_log"
        type => "error"
        start_position => "beginning"
    }
}
output {
    if [type] == "access" {
        elasticsearch {
            hosts => ["20.0.0.31:9200"]
            index => "apache_access-%{+YYYY.MM.dd}"
        }
    }
    if [type] == "error" {
        elasticsearch {
            hosts => ["20.0.0.31:9200"]
            index => "apache_error-%{+YYYY.MM.dd}"
        }
    }
}
[root@apache conf.d]# /usr/share/logstash/bin/logstash -f apache_log.conf
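The conditional output above routes each event type to its own dated index. The naming logic can be sketched in shell (index_for is a hypothetical helper, not part of Logstash; note that Logstash expands %{+YYYY.MM.dd} from the event's @timestamp in UTC, so this local-date sketch can differ around midnight):

```shell
# Map an event type to the dated index name the config above would write to
index_for() {
  case "$1" in
    access) echo "apache_access-$(date +%Y.%m.%d)" ;;
    error)  echo "apache_error-$(date +%Y.%m.%d)" ;;
    *)      return 1 ;;   # no route for unknown types
  esac
}
index_for access    # e.g. apache_access-2020.10.29 on the day of this walkthrough
```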
// On the physical host 20.0.0.1
Open http://20.0.0.31:9100 in a browser    // view index information
You can see:
apache_error-2020.10.29
apache_access-2020.10.29
(If the apache_access-2020.10.29 index does not show up, hit 20.0.0.33 in a browser a few more times and check again)
Then open http://20.0.0.31:5601 in the browser
Click management in the lower left -> index patterns -> create index pattern
Create the apache_error-* and apache_access-* index patterns respectively