Parse the fluentd log field into JSON and map key-values for Kibana 4 to display


learning

Dec 6, 2016, 11:23:37 AM12/6/16
to Fluentd Google Group

Hello


The following is the log message I have (I am not sure why '\\\' is showing in the log):


{"message":"{\"log\":\"2016-12-06 15:12:10,690|http-nio-8080-exec-25|INFO|namespace:com-example|com.audit|10.233.87.12|monitordemo-o9jwy|monitordemo| {\\\"nodeName\\\":\\\"nodeName\\\",\\\"applicationId\\\":\\\"User\\\",\\\"uniqueTransactionId\\\":\\\"468be312-bf15-428f-869a-2e754079cee6\\\",\\\"transactionName\\\":\\\"service.hello\\\",\\\"transactionStatus\\\":\\\"C\\\",\\\"responseCode\\\":\\\"200\\\",\\\"responseDescription\\\":\\\"OK\\\",\\\"endTimestamp\\\":\\\"2016-12-06 15:12:10.687\\\",\\\"initiatedTimestamp\\\":\\\"2016-12-06 15:12:10.655\\\",\\\"elapsedTime\\\":\\\"32\\\",\\\"clientIp\\\":\\\"10.233.115.0\\\",\\\"cluster\\\":\\\"cluster\\\",\\\"httpMethod\\\":\\\"GET\\\",\\\"requestURL\\\":\\\"http://10.112:8080/demo/service/hello\\\"}\\n\",\"stream\":\"stdout\",\"time\":\"2016-12-06T15:12:10.693921067Z\"}"






I used the below code to split the message for mapping in Kibana 4:


<filter k9.**>
     @type record_modifier
     enable_ruby yes
     auto_typecast yes
     <record>
      logEventTimestamp ${record["message"].split('|')[0]}
      threadId ${record["message"].split('|')[1]}
      logLevel ${record["message"].split('|')[2]}
      namespace ${record["message"].split('|')[3]}
      logType ${record["message"].split('|')[4]}
      serverIpAddress ${record["message"].split('|')[5]}
      serverName ${record["message"].split('|')[6]}
      podServiceName ${record["message"].split('|')[7]}
      #logrecord_json ${record["message"].split('|')[8]}
      logrecord_json ${record["message"].split('|')[8].delete! '\\\\'}
     </record>
</filter>
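
For reference, here is what the split produces on a line like the one above (a sketch in Ruby; the sample line is abbreviated with "..."):

line = '2016-12-06 15:12:10,690|http-nio-8080-exec-25|INFO|namespace:com-example|com.audit|10.233.87.12|monitordemo-o9jwy|monitordemo| {"nodeName":"nodeName",...}'
parts = line.split('|')
parts[0]  # => "2016-12-06 15:12:10,690"            -> logEventTimestamp
parts[2]  # => "INFO"                               -> logLevel
parts[8]  # => " {\"nodeName\":\"nodeName\",...}"   -> logrecord_json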



logrecord_json looks as below:


logrecord_json{"nodeName":"sampleNode","applicationId":"testuser","uniqueTransactionId":"468be312-bf15-428f-869a-2e754079cee6","transactionName":"service.hello","transactionStatus":"C"



 



I have tried the below code and it worked in ES5 and Kibana 5, but now I need to use Kibana 4 and the following parser is not working:


<filter k9.**>
     @type parser
     format csv
     key_name logrecord_json
     reserve_data true
     #time_parse no
     #hash_value_field logrecord_json
     #hash_value_field parsed
</filter>
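
For reference: fluentd's csv parser would also normally need a keys parameter naming the columns; a guessed sketch (the keys list here is illustrative only):

<filter k9.**>
     @type parser
     format csv
     keys nodeName,applicationId,uniqueTransactionId
     key_name logrecord_json
     reserve_data true
</filter>

And since logrecord_json holds JSON rather than comma-separated values, csv likely cannot match it anyway.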

    

Using 'logrecord_json', how can I map the JSON data in Kibana like this:


nodeName= sampleNode

applicationId= testuser

uniqueTransactionId=468be312-bf15-428f-869a-2e754079cee6


I am not an expert on regex. If anyone can suggest a solution, please do; I have tried a lot :(.


learning

Dec 6, 2016, 11:25:10 AM12/6/16
to Fluentd Google Group
"}" missing in the logrecord_json.

logrecord_json{"nodeName":"sampleNode","applicationId":"testuser","uniqueTransactionId":"468be312-bf15-428f-869a-2e754079cee6","transactionName":"service.hello","transactionStatus":"C"}



learning

Dec 6, 2016, 12:08:43 PM12/6/16
to Fluentd Google Group
The error I am getting in fluentd is as follows:

port=>9200, :scheme=>"http"}

 

2016-12-06T15:13:56.543180040Z 2016-12-06 15:13:56 +0000 [warn]: dump an error event: error_class=Fluent::Plugin::Parser::ParserError error="pattern not match with data ' {\"nodeName\":\"sampleName\",\"applicationId\":\"testUser\",\"uniqueTransactionId\":\"ee28bfc1-9afd-424f-9a20-c84f603512c1\",\"transactionName\":\"service.hello\",\"transactionStatus\":\"C\",\"responseCode\":\"200\",\"responseDescription\":\"OK\",\"endTimestamp\":\"2016-12-06 15:13:55.546\",\"initiatedTimestamp\":\"2016-12-06 15:13:55.505\",\"elapsedTime\":\"41\",\"clientIp\":\"10.233.115.0\",\"cluster\":\"cluster\",\"httpMethod\":\"GET\",\"requestURL\":\"http://10.22.281.27:8080/demo/service/hello\"}n\",\"stream\":\"stdout\",\"time\":\"2016-12-06T15:13:55.550938285Z\"}'" tag="k9.var.log.containers.monitordemo.log" time=#<Fluent::EventTime:0x000000017e6408 @sec=1481037236, @nsec=540396904>

Mr. Fiber

Dec 7, 2016, 2:44:52 AM12/7/16
to Fluentd Google Group
I am not sure why '\\\' is showing in the log.

Your log is JSON in JSON in JSON.
A JSON string escapes the " character, and Ruby's string dump also escapes ".
This is why \\\ is showing.
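
For example, in Ruby (a minimal sketch of the nested encoding):

require 'json'

inner  = { "nodeName" => "nodeName" }.to_json
# inner  => {"nodeName":"nodeName"}
middle = { "log" => inner }.to_json
# middle => {"log":"{\"nodeName\":\"nodeName\"}"}
outer  = { "message" => middle }.to_json
# outer  => {"message":"{\"log\":\"{\\\"nodeName\\\":\\\"nodeName\\\"}\"}"}

Each layer of JSON encoding escapes the quotes (and backslashes) of the layer inside it, so after three layers you see \\\" in the stored log.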

I used the below code to split the message for mapping in Kibana 4

This config assumes JSON in JSON, not JSON in JSON in JSON.
You need to rewrite the configuration.
BTW, record_modifier doesn't have the auto_typecast and enable_ruby parameters.
I assume you used record_transformer parameters with record_modifier.
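
A minimal sketch of such a rewrite (untested; it unwraps the outer JSON first, so the pipe split then runs on the inner log line):

# 1. Parse the outer JSON in "message"; yields "log", "stream", "time" fields.
<filter k9.**>
     @type parser
     format json
     key_name message
     reserve_data true
</filter>

# 2. Split the pipe-delimited "log" field (remaining fields as in your config).
<filter k9.**>
     @type record_modifier
     <record>
      logEventTimestamp ${record["log"].split('|')[0]}
      logrecord_json ${record["log"].split('|')[8]}
     </record>
</filter>

# 3. Parse the trailing JSON payload into key-value pairs.
<filter k9.**>
     @type parser
     format json
     key_name logrecord_json
     reserve_data true
</filter>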


Masahiro


learning

Dec 7, 2016, 2:20:37 PM12/7/16
to Fluentd Google Group
<filter k9.**>
     @type record_modifier
     <record>
      logEventTimestamp ${record["message"].split('|')[0]}
      threadId ${record["message"].split('|')[1]}
      logLevel ${record["message"].split('|')[2]}
      namespace ${record["message"].split('|')[3]}
      logType ${record["message"].split('|')[4]}
      serverIpAddress ${record["message"].split('|')[5]}
      serverName ${record["message"].split('|')[6]}
      podServiceName ${record["message"].split('|')[7]}
      logrecord_json ${record["message"].split('|')[8]}
     </record>
</filter>

<filter k9.**>
     @type parser
     format json
     key_name logrecord_json
     reserve_data true
</filter>


I was using the above filters and everything mapped correctly in Kibana 5 & ES5, but not in Kibana 4.


I am not sure what configuration changes I need to make; sorry, I am new to this.


After splitting the record, the 'logrecord_json' field has JSON data, but the 'parser' filter is not automatically parsing it.


How can I turn this JSON-like string into key-value pairs for Kibana 4?


Mr. Fiber

Dec 7, 2016, 11:49:49 PM12/7/16
to Fluentd Google Group
Checking the stored documents via ES directly, not via Kibana, is better.

BTW, if Kibana 4 doesn't work but Kibana 5 works with the same document,
the problem is Kibana 4.
If Kibana is the problem, please ask about it on the elastic forum.
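
For example (assuming ES runs on localhost:9200; adjust the host and index name to your setup):

curl -s 'http://localhost:9200/logstash-*/_search?size=1&pretty'

This shows a stored document exactly as ES holds it, independent of any Kibana version.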


