Logstash provides a series of filter plugins to process collected log events, splitting out the required fields according to each event's characteristics so that Kibana can visualize them and build dashboards for data analysis. See the full list of filter plugins supported by Logstash here. This article focuses on grok.
```
%{SYNTAX:SEMANTIC}
```
* `SYNTAX` is the type of the value to match: for example, `0.11` can be matched by `NUMBER`, and `10.222.22.25` by `IP`.
* `SEMANTIC` is the name of the field that stores the matched value, much like a column name in a database. It is stored in `elasticsearch`, where `kibana` can search and aggregate on it. For example, you can name an `IP` as the client IP address `client_ip_address`, e.g. `%{IP:client_ip_address}`; the matched value is then stored in the `client_ip_address` field. Numbers in an event log can likewise be stored as numeric values in a named field, e.g. a response time `http_response_time`. Suppose an event log record looks like this:
```
55.3.244.1 GET /index.html 15824 0.043
```
The following grok pattern matches this record:
```
%{IP:client_id_address} %{WORD:method} %{URIPATHPARAM:request} %{NUMBER:bytes} %{NUMBER:http_response_time}
```
Create a filter conf file under Logstash's conf.d directory with the following contents:
```
# /etc/logstash/conf.d/01-filter.conf
filter {
  grok {
    match => {
      "message" => "%{IP:client_id_address} %{WORD:method} %{URIPATHPARAM:request} %{NUMBER:bytes} %{NUMBER:http_response_time}"
    }
  }
}
```
The filter produces the following result:
```
client_id_address: 55.3.244.1
method: GET
request: /index.html
bytes: 15824
http_response_time: 0.043
```
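Under the hood, each grok `SYNTAX` name expands to a regular expression, and the `SEMANTIC` becomes a named capture group. The idea can be sketched in Python; note the pattern definitions below are simplified stand-ins for grok's real built-in regexes, and `expand` is a hypothetical helper, not part of Logstash:

```python
import re

# Simplified stand-ins for grok's built-in IP, WORD, URIPATHPARAM, NUMBER patterns
GROK = {
    "IP": r"\d{1,3}(?:\.\d{1,3}){3}",
    "WORD": r"\w+",
    "URIPATHPARAM": r"\S+",
    "NUMBER": r"\d+(?:\.\d+)?",
}

def expand(pattern):
    # Rewrite each %{SYNTAX:SEMANTIC} as a named capture group (?P<SEMANTIC>regex)
    return re.sub(
        r"%\{(\w+):(\w+)\}",
        lambda m: f"(?P<{m.group(2)}>{GROK[m.group(1)]})",
        pattern,
    )

pattern = "%{IP:client_id_address} %{WORD:method} %{URIPATHPARAM:request} %{NUMBER:bytes} %{NUMBER:http_response_time}"
line = "55.3.244.1 GET /index.html 15824 0.043"
fields = re.fullmatch(expand(pattern), line).groupdict()
print(fields)
```

Each matched group lands under its `SEMANTIC` name, which is exactly the field layout shown above.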
grok ships with many built-in pattern types; see the full list of default patterns. When none of them fits, you can define a custom capture using Oniguruma named-group syntax:
```
(?<field_name>the pattern here)
```
Suppose the text fragment you need to match is a 10- or 11-character hexadecimal value; the following syntax captures that fragment and assigns the value to `queue_id`:
```
(?<queue_id>[0-9A-F]{10,11})
```
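The same named capture works in Python's `re` module, except Python spells the group `(?P<name>…)` rather than Oniguruma's `(?<name>…)`. A minimal sketch:

```python
import re

# Capture a 10- or 11-character uppercase-hex run into a group named queue_id
m = re.search(r"(?P<queue_id>[0-9A-F]{10,11})", "status ABC24C98567 accepted")
print(m.group("queue_id"))
```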
You can also collect custom patterns in a file. Create a directory named `patterns`, and inside it create a file with any name, e.g. `postfix`:
```
# contents of ./patterns/postfix:
POSTFIX_QUEUEID [0-9A-F]{10,11}
```
Suppose the event log record is now:

```
55.3.244.1 GET /index.html 15824 0.043 ABC24C98567
```

Create a filter conf file under Logstash's conf.d directory with the following contents:
```
filter {
  grok {
    patterns_dir => ["./patterns"]
    match => {
      "message" => "%{IP:client_id_address} %{WORD:method} %{URIPATHPARAM:request} %{NUMBER:bytes} %{NUMBER:http_response_time} %{POSTFIX_QUEUEID:queue_id}"
    }
  }
}
```
The match result is:
```
client_id_address: 55.3.244.1
method: GET
request: /index.html
bytes: 15824
http_response_time: 0.043
queue_id: ABC24C98567
```
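A custom pattern file is essentially a name-to-regex alias that grok substitutes before matching. The substitution step can be sketched in Python (the `expand` helper is hypothetical and the mechanism simplified):

```python
import re

# Custom definition, as it would appear in ./patterns/postfix
CUSTOM = {"POSTFIX_QUEUEID": "[0-9A-F]{10,11}"}

def expand(pattern, defs):
    # Replace each %{NAME:field} reference with a named capture group
    return re.sub(
        r"%\{(\w+):(\w+)\}",
        lambda m: f"(?P<{m.group(2)}>{defs[m.group(1)]})",
        pattern,
    )

regex = expand("%{POSTFIX_QUEUEID:queue_id}", CUSTOM)
m = re.search(regex, "55.3.244.1 GET /index.html 15824 0.043 ABC24C98567")
print(m.group("queue_id"))
```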
When writing match patterns, the Grok Debugger is recommended: paste in an event log record, then build up the pattern step by step; the tool shows below the input how your pattern splits it into fields.
add_field:
After a pattern matches and splits an event successfully, you can dynamically modify certain fields or add new ones. Use `%{fieldName}` to reference a field's value:

```
filter {
  grok {
    add_field => { "foo_%{somefield}" => "Hello world, %{somefield}" }
  }
}
```
```
# You can also add multiple fields at once:
filter {
  grok {
    add_field => {
      "foo_%{somefield}" => "Hello world, %{somefield}"
      "new_field" => "new_static_value"
    }
  }
}
```
If `somefield` is `dad`, Logstash adds a new field `foo_dad` to `elasticsearch` and assigns it the value `Hello world, dad`.
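The `%{fieldName}` interpolation in `add_field` can be sketched in Python; `interpolate` is a hypothetical helper standing in for Logstash's event-field substitution:

```python
import re

event = {"somefield": "dad"}

def interpolate(template, event):
    # Replace each %{name} reference with the event's field value
    return re.sub(r"%\{(\w+)\}", lambda m: str(event[m.group(1)]), template)

# Both the field name and its value may reference existing fields
key = interpolate("foo_%{somefield}", event)
value = interpolate("Hello world, %{somefield}", event)
event[key] = value
print(event)
```

With `somefield` set to `dad`, the event gains a `foo_dad` field holding `Hello world, dad`, matching the behavior described above.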
add_tag:
Adds tags to events that pass the filter or match successfully:

```
filter {
  grok {
    add_tag => [ "foo_%{somefield}" ]
  }
}
```
```
# You can also add multiple tags at once:
filter {
  grok {
    add_tag => [ "foo_%{somefield}", "taggedy_tag" ]
  }
}
```