Honestly, rsyslog, syslog-ng, and nxlog are all much the same; pick whichever one and you'll be fine.

I like nxlog's route and JSON features, so I use it to push data to Elasticsearch.

Pushing with om_elasticsearch:

    ...
    <Input in>
        Module      im_tcp
        Host        0.0.0.0
        Port        1514
        InputType   Binary
    </Input>

    <Output es>
        Module      om_elasticsearch
        URL         http://localhost:9200/_bulk
        FlushInterval 2
        FlushLimit  100
        # Create an index daily
        Index       strftime($EventTime, "nxlog-%Y%m%d")
        IndexType   "My logs"

        # Use the following if you don't have $EventTime set
        #Index      strftime(now(), "nxlog-%Y%m%d")
    </Output>

    <Route r>
        Path        in => es
    </Route>
    ...
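
For reference, om_elasticsearch builds the bulk requests itself and POSTs them to the _bulk URL above; as I understand the FlushInterval/FlushLimit directives, a batch goes out at least every 2 seconds or as soon as 100 events are queued. With the Index and IndexType settings shown, each event becomes roughly this pair of bulk lines (illustrative, not a real capture):

    { "index" : { "_index" : "nxlog-20160102", "_type" : "My logs" } }
    { "EventTime": "2016-01-02 14:04:07", "Message": "..." }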

Pushing with om_http (om_elasticsearch ships only with NXLog Enterprise Edition, while om_http is also available in the community edition):

    ...
    <Output elasticsearch>
        Module      om_http
        URL         http://elasticsearch:9200
        ContentType application/json
        Exec        set_http_request_path(strftime($EventTime, "/nxlog-%Y%m%d/" + $SourceModuleName)); rename_field("timestamp", "@timestamp"); to_json();
    </Output>
    ...
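
The set_http_request_path() call rewrites the request path per event, so Elasticsearch just sees ordinary index requests. For an event received on 2016-01-02 by an input named in_udp (the input name from the config further down), the effective request would look roughly like this (illustrative only):

    POST /nxlog-20160102/in_udp HTTP/1.1
    Content-Type: application/json

    {"EventTime":"2016-01-02 14:04:07","Message":"..."}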

In production we send the logs from each machine to nxlog via rsyslog, have nxlog load them into Elasticsearch, and then view them in Kibana.
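
On the application machines the rsyslog side is just a forwarding rule. A minimal sketch, assuming nxlog-host is a placeholder for wherever nxlog runs, matching the im_udp input on port 514 shown further down (a single @ means UDP, @@ would be TCP):

    # /etc/rsyslog.conf on each machine shipping logs
    *.* @nxlog-host:514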

A JSON-ified F5 log record looks like this:

    {
     "MessageSourceAddress":"172.1.2.2",
     "EventReceivedTime":"2016-01-02 14:04:07",
     "SourceModuleName":"in_udp",
     "SourceModuleType":"im_udp",
     "SyslogFacilityValue":22,
     "SyslogFacility":"LOCAL6",
     "SyslogSeverityValue":6,
     "SyslogSeverity":"INFO",
     "SeverityValue":2,
     "Severity":"INFO",
     "Hostname":"www",
     "EventTime":"2016-01-02 14:04:07",
     "Message":"info logger: [ssl_req][02/Jun/2016:14:04:07 +0800] 127.0.0.1 TLSv1 AES256-SHA \"/iControl/iControlPortal.cgi\" 656"
    }

The nxlog configuration is as follows:

    ...
    <Extension json>
        Module      xm_json
    </Extension>

    <Input in_udp>
        Module      im_udp
        Host        0.0.0.0
        Port        514
        Exec        parse_syslog(); to_json();
    </Input>

    <Output nxlog_out>
        Module      om_file
        File        "/var/log/nxlog/nxlog.out"
    </Output>

    <Processor buffer_udp>
        Module      pm_buffer
        # 1 MB in-memory buffer
        MaxSize     1024
        Type        Mem
        # warn at 512 KB
        WarnLimit   512
    </Processor>
    ...
    <Output elasticsearch>
        Module      om_http
        URL         http://localhost:9200
        ContentType application/json
        Exec        set_http_request_path(strftime($EventTime, "/logstash-%Y.%m.%d/F5-Log"));
        Exec        delete($EventReceivedTime);
        Exec        delete($SourceModuleName);
        Exec        delete($SourceModuleType);
        Exec        delete($SyslogFacilityValue);
        Exec        delete($SyslogFacility);
        Exec        delete($SyslogSeverityValue);
        #Exec       delete($SyslogSeverity);
        Exec        delete($SeverityValue);
        Exec        delete($Severity);
        Exec        $type = "F5-Log";
        Exec        $t = strftime($EventTime, "%Y-%m-%dT%H:%M:%S%z");
        Exec        rename_field("t", "@timestamp");
        Exec        to_json();
    </Output>

    <Route udp>
        Path        in_udp => buffer_udp => nxlog_out => elasticsearch
    </Route>

Note that the data is already converted to JSON in in_udp, and then passes through buffer_udp, a memory buffer. The buffer is there so that when Elasticsearch goes down and has to be restarted, events pile up in the buffer and are resent once it comes back.
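
pm_buffer sizes are given in kilobytes, so the 1 MB buffer above does not cover much downtime. A sketch of a larger variant, keeping everything else the same:

    <Processor buffer_udp>
        Module      pm_buffer
        # 100 MB in-memory buffer, warn when half full
        MaxSize     102400
        Type        Mem
        WarnLimit   51200
    </Processor>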

nxlog_out exists so we can see exactly which fields come through, which makes debugging easier. In production, seeing a single record is enough; after that, this output can be dropped from the route.
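
Once debugging is done, the route simply loses that hop:

    <Route udp>
        Path        in_udp => buffer_udp => elasticsearch
    </Route>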

In the elasticsearch output we delete the useless fields and add type and @timestamp; without them Kibana cannot distinguish the log type or pick up the event time.
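
After those Exec statements, the F5 record shown earlier reaches Elasticsearch looking roughly like this (SyslogSeverity survives because its delete is commented out):

    {
     "MessageSourceAddress":"172.1.2.2",
     "SyslogSeverity":"INFO",
     "Hostname":"www",
     "EventTime":"2016-01-02 14:04:07",
     "Message":"info logger: [ssl_req][02/Jun/2016:14:04:07 +0800] 127.0.0.1 TLSv1 AES256-SHA \"/iControl/iControlPortal.cgi\" 656",
     "type":"F5-Log",
     "@timestamp":"2016-01-02T14:04:07+0800"
    }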

So, as shown above, we can send JSON data straight to Elasticsearch and then display it in Kibana.
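
A quick way to confirm that documents are landing is to query the daily index directly (using the example date from above):

    curl 'http://localhost:9200/logstash-2016.01.02/_search?pretty&size=1'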
