ELK Stack: The Definitive Guide

3.1 Nginx Access Logs

Parsing and analyzing access logs is by far the most common use case for the ELK stack. With the default approach, neither performance nor precision is satisfactory. This section walks through several ways of processing Nginx access logs and explains the trade-offs of each.

3.1.1 The grok Approach

Logstash ships with grok patterns for the standard Apache log format:

COMMONAPACHELOG %{IPORHOST:clientip} %{USER:ident} %{NOTSPACE:auth} \[%{HTTPDATE:
    timestamp}\] "(?:%{WORD:verb} %{NOTSPACE:request}(?: HTTP/%{NUMBER:httpversion})?
    |%{DATA:rawrequest})" %{NUMBER:response} (?:%{NUMBER:bytes}|-)
COMBINEDAPACHELOG %{COMMONAPACHELOG} %{QS:referrer} %{QS:agent}

The standard Nginx log format differs only in one extra variable, $http_x_forwarded_for, appended at the end. So a grok pattern for the standard Nginx log can be defined as:

MAINNGINXLOG %{COMBINEDAPACHELOG} %{QS:x_forwarded_for}

If you use a custom log format, you can adapt the pattern accordingly.
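As a sketch of how the MAINNGINXLOG pattern above might be wired in (the patterns directory path here is an assumption, not from the original), save the pattern definition in a file under a patterns directory and reference it from a grok filter:

```
filter {
    grok {
        # directory containing the file with the MAINNGINXLOG line (assumed path)
        patterns_dir => "/etc/logstash/patterns"
        match        => [ "message", "%{MAINNGINXLOG}" ]
    }
}
```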

3.1.2 The split Approach

Because some Nginx log variables contain embedded spaces, you are often forced to fall back on the %{QS} pattern as a delimiter, which is poor in both performance and granularity. If you instead define a relatively rare character as a custom separator, processing becomes much simpler. Suppose the log format is defined as follows:

log_format main "$http_x_forwarded_for | $time_local | $request | $status | $body_bytes_sent | "
    "$request_body | $content_length | $http_referer | $http_user_agent | $nuid | "
    "$http_cookie | $remote_addr | $hostname | $upstream_addr | $upstream_response_time | $request_time";

Actual log lines look like this:

117.136.9.248 | 08/Apr/2015:16:00:01 +0800 | POST /notice/newmessage?sign=cba4f614e05db285850cadc696fcdad0&token=JAGQ92Mjs3--gik_b_DsPIQHcyMKYGpD&did=b749736ac70f12df700b18cd6d051d5&osn=android&osv=4.0.4&appv=3.0.1&net=460-02-2g&longitude=120.393006&latitude=36.178329&ch=360&lp=1&ver=1&ts=1428479998151&im=869736012353958&sw=0&sh=0&la=zh-CN&lm=weixin&dt=vivoS11t HTTP/1.1 | 200 | 132 | abcd-sign-v1://dd03c57f8cb6fcef919ab5df66f2903f:d51asq5yslwnyz5t/{\x22type\x22:4,\x22uid\x22:7567306} | 89 | - | abcd/3.0.1, Android/4.0.4, vivo S11t | nuid=0C0A0A0A01E02455EA7CF47E02FD072C1428480001.157 | - | 10.10.10.13 | bnx02.abcdprivate.com | 10.10.10.22:9999 | 0.022 | 0.022
59.50.44.53 | 08/Apr/2015:16:00:01 +0800 | POST /feed/pubList?appv=3.0.3&did=89da72550de488328e2aba5d97850e9f&dt=iPhone6%2C2&im=B48C21F3-487E-4071-9742-DC6D61710888&la=cn&latitude=0.000000&lm=weixin&longitude=0.000000&lp=-1.000000&net=0-0-wifi&osn=iOS&osv=8.1.3&sh=568.000000&sw=320.000000&token=7NobA7asg3Jb6n9o4ETdPXyNNiHwMs4J&ts=1428480001275 HTTP/1.1 | 200 | 983 | abcd-sign-v1://b398870a0b25b29aae65cd553addc43d:72214ee85d7cca22/{\x22nextkey\x22:\x22\x22,\x22uid\x22:\x2213062545\x22,\x22token\x22:\x227NobA7asg3Jb6n9o4ETdPXyNNiHwMs4J\x22} | 139 | - | Shopping/3.0.3 (iPhone; iOS 8.1.3; Scale/2.00) | nuid=0C0A0A0A81DF2455017D548502E48E2E1428480001.154 | nuid=CgoKDFUk34GFVH0BLo7kAg== | 10.10.10.11 | bnx02.abcdprivate.com | 10.10.10.35:9999 | 0.025 | 0.026

The request field can then be split further, for example into its URL parameters. Clearly, the order of those parameters is not fixed: in the first line the first field after the question mark is sign, while in the second line it is appv. So the parameters must be split into key/value pairs rather than matched positionally, which the stock grok patterns cannot do. The final Logstash configuration is as follows:

filter {
    ruby {
        init => "@kname = ['http_x_forwarded_for','time_local','request','status',
            'body_bytes_sent','request_body','content_length','http_referer',
            'http_user_agent','nuid','http_cookie','remote_addr','hostname',
            'upstream_addr','upstream_response_time','request_time']"
        code => "event.append(Hash[@kname.zip(event['message'].split(' | '))])"
    }
    if [request] {
        ruby {
            init => "@kname = ['method','uri','verb']"
            code => "event.append(Hash[@kname.zip(event['request'].split(' '))])"
        }
        if [uri] {
            ruby {
                init => "@kname = ['url_path','url_args']"
                code => "event.append(Hash[@kname.zip(event['uri'].split('?'))])"
            }
            kv {
                prefix       => "url_"
                source       => "url_args"
                field_split  => "&"
                remove_field => [ "url_args", "uri", "request" ]
            }
        }
    }
    mutate {
        convert => [
            "body_bytes_sent", "integer",
            "content_length", "integer",
            "upstream_response_time", "float",
            "request_time", "float"
        ]
    }
    date {
        match  => [ "time_local", "dd/MMM/yyyy:HH:mm:ss Z" ]
        locale => "en"
    }
}
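The core trick in the ruby filters above, zipping a fixed list of field names against the split message, can be sketched in plain Ruby outside Logstash (the sample line and field subset here are abbreviated for illustration):

```ruby
# Sketch of the split-and-zip technique used in the ruby filters above.
kname = ['http_x_forwarded_for', 'time_local', 'request', 'status']
message = '117.136.9.248 | 08/Apr/2015:16:00:01 +0800 | POST /notice/newmessage?sign=abc HTTP/1.1 | 200'

# Split on the custom " | " separator and pair each piece with its name.
fields = Hash[kname.zip(message.split(' | '))]
# fields['status'] => "200"

# The request field can then be split again, on spaces and on '?'.
http_method, uri, _verb = fields['request'].split(' ')
url_path, url_args = uri.split('?')
# url_path => "/notice/newmessage", url_args => "sign=abc"
```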

The final result looks like this:

{
    "message" => "1.43.3.188 | 08/Apr/2015:16:00:01 +0800 | POST /search/suggest?appv=3.0.3&did=dfd5629d705d400795f698055806f01d&dt=iPhone7%2C2&im=AC926907-27AA-4A10-9916-C5DC75F29399&la=cn&latitude=-33.903867&lm=sina&longitude=151.208137&lp=-1.000000&net=0-0-wifi&osn=iOS&osv=8.1.3&sh=667.000000&sw=375.000000&token=_ovaPz6Ue68ybBuhXustPbG-xf1WbsPO&ts=1428480001567 HTTP/1.1 | 200 | 353 | abcd-sign-v1://a24b478486d3bb92ed89a901541b60a5:b23e9d2c14fe6755/{\\x22key\\x22:\\x22last\\x22,\\x22offset\\x22:\\x220\\x22,\\x22token\\x22:\\x22_ovaPz6Ue68ybBuhXustPbG-xf1WbsPO\\x22,\\x22limit\\x22:\\x2220\\x22} | 148 | - | abcdShopping/3.0.3 (iPhone; iOS 8.1.3; Scale/2.00) | nuid=0B0A0A0A9A64AF54F97634640230944E1428480001.113 | nuid=CgoKC1SvZJpkNHb5TpQwAg== | 10.10.10.11 | bnx02.abcdprivate.com | 10.10.10.26:9999 | 0.070 | 0.071",
    "@version" => "1", "@timestamp" => "2015-04-08T08:00:01.000Z",
    "type" => "nginxapiaccess", "host" => "blog05.abcdprivate.com", "path" => "/home/nginx/logs/api.access.log",
    "http_x_forwarded_for" => "1.43.3.188", "time_local" => "08/Apr/2015:16:00:01 +0800", "status" => "200",
    "body_bytes_sent" => 353,
    "request_body" => "abcd-sign-v1://a24b478486d3bb92ed89a901541b60a5:b23e9d2c14fe6755/{\\x22key\\x22:\\x22last\\x22,\\x22offset\\x22:\\x220\\x22,\\x22token\\x22:\\x22_ovaPz6Ue68ybBuhXustPbG-xf1WbsPO\\x22,\\x22limit\\x22:\\x2220\\x22}",
    "content_length" => 148, "http_referer" => "-",
    "http_user_agent" => "abcdShopping/3.0.3 (iPhone; iOS 8.1.3; Scale/2.00)",
    "nuid" => "nuid=0B0A0A0A9A64AF54F97634640230944E1428480001.113",
    "http_cookie" => "nuid=CgoKC1SvZJpkNHb5TpQwAg==",
    "remote_addr" => "10.10.10.11", "hostname" => "bnx02.abcdprivate.com",
    "upstream_addr" => "10.10.10.26:9999", "upstream_response_time" => 0.070, "request_time" => 0.071,
    "method" => "POST", "verb" => "HTTP/1.1", "url_path" => "/search/suggest",
    "url_appv" => "3.0.3", "url_did" => "dfd5629d705d400795f698055806f01d", "url_dt" => "iPhone7%2C2",
    "url_im" => "AC926907-27AA-4A10-9916-C5DC75F29399", "url_la" => "cn", "url_latitude" => "-33.903867",
    "url_lm" => "sina", "url_longitude" => "151.208137", "url_lp" => "-1.000000",
    "url_net" => "0-0-wifi", "url_osn" => "iOS", "url_osv" => "8.1.3",
    "url_sh" => "667.000000", "url_sw" => "375.000000",
    "url_token" => "_ovaPz6Ue68ybBuhXustPbG-xf1WbsPO", "url_ts" => "1428480001567"
}

If there are too many URL parameters, you can skip the kv split, or predefine the field as a nested object and store the parameters as an array instead:

if [uri] {
    ruby {
        init => "@kname = ['url_path','url_args']"
        code => "event.append(Hash[@kname.zip(event['uri'].split('?'))])"
    }
    if [url_args] {
        ruby {
            init => "@kname = ['key','value']"
            code => "event['nested_args'] = event['url_args'].split('&').collect {|i| Hash[@kname.zip(i.split('='))]}"
            remove_field => [ "url_args", "uri", "request" ]
        }
    }
}
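The transformation performed by the nested_args ruby filter above can be sketched in plain Ruby (the sample query string is illustrative only):

```ruby
# Sketch of converting a query string into an array of {key, value}
# hashes, as the nested_args filter above does.
kname = ['key', 'value']
url_args = 'appv=3.0.3&net=0-0-wifi&osn=iOS'

# Each "k=v" pair becomes its own small hash, so Elasticsearch can
# index the array as nested objects.
nested_args = url_args.split('&').collect { |i| Hash[kname.zip(i.split('='))] }
# nested_args => [{"key"=>"appv", "value"=>"3.0.3"},
#                 {"key"=>"net",  "value"=>"0-0-wifi"},
#                 {"key"=>"osn",  "value"=>"iOS"}]
```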

For the rationale behind this nested object optimization and how to use nested objects, see the Elasticsearch tuning chapter later in this book.

3.1.3 JSON Format

Custom separators work well, but the configuration is considerably more complex to write. For Logstash there is an even simpler way to handle Nginx logs: define a custom log format that hand-assembles the output directly as JSON:

log_format json '{"@timestamp":"$time_iso8601",'
                '"host":"$server_addr",'
                '"clientip":"$remote_addr",'
                '"size":$body_bytes_sent,'
                '"responsetime":$request_time,'
                '"upstreamtime":"$upstream_response_time",'
                '"upstreamhost":"$upstream_addr",'
                '"http_host":"$host",'
                '"url":"$uri",'
                '"xff":"$http_x_forwarded_for",'
                '"referer":"$http_referer",'
                '"agent":"$http_user_agent",'
                '"status":"$status"}';

Then the following Logstash configuration is all you need:

input {
    file {
        path  => "/var/log/nginx/access.log"
        codec => json
    }
}
filter {
    mutate {
        split => [ "upstreamtime", "," ]
    }
    mutate {
        convert => [ "upstreamtime", "float" ]
    }
}

Two separate mutate plugins are used here because upstreamtime may contain several values (one per upstream tried), so the field is first split into an array and only then is each element converted to a float. Within a single mutate filter, convert executes before split, so the two steps must be written as separate blocks. The execution order of the functions inside mutate is explained in detail in the earlier plugin chapter, which you can refer back to.
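The effect of the two mutate blocks can be sketched in plain Ruby (the sample value is an assumption; Nginx joins multiple upstream times with ", "):

```ruby
# Sketch of the split-then-convert steps, outside Logstash.
upstreamtime = '0.005, 0.022'            # two upstreams were tried (sample value)
parts  = upstreamtime.split(',')         # first mutate: string -> array of strings
floats = parts.collect { |v| v.to_f }    # second mutate: each element -> float
# floats => [0.005, 0.022]
```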

3.1.4 Sending via syslog

Nginx gained syslog support in version 1.7 (Tengine had it even earlier), so logs can be shipped over syslog directly. The Nginx side is configured as follows:

access_log syslog:server=unix:/data0/rsyslog/nginx.sock locallog;

Or send straight to a remote Logstash machine:

access_log syslog:server=192.168.0.2:5140,facility=local6,tag=nginx-access,
    severity=info logstashlog;

By default, Nginx sends the data at the local7.info level with the tag nginx. Note that when shipping logs via syslog, the buffer=16k option cannot be used.
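On the receiving side, a minimal Logstash sketch could use the syslog input (the port matches the Nginx example above; the type name is an assumption):

```
input {
    syslog {
        port => 5140                 # matches the port in the access_log directive above
        type => "nginx-syslog"       # assumed type name
    }
}
```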