Cannot pull pcap files; "Second ES query couldn't find this ID." error

ledin...@gmail.com

Jun 10, 2018, 1:03:25 PM
to security-onion
New install of 14.04.5.13 - cannot pivot from Kibana to CapMe to grab pcap files. Every once in a while it works, but the vast majority of attempts throw the above error.

System has 4 cores, 16GB RAM, and a 450GB HDD dedicated to data only.

This same system ran the pre-ELK version of SO just fine for 6 months - never a single issue.

Both /nsm and /var/lib/mysql have been moved to a secondary disk (/data2) and all modifications were made per the various threads and SO-repo instructions. AppArmor has been triple-checked for any oversights (i.e., the tcpdump and mysql profiles in /etc/apparmor.d and /etc/apparmor.d/local).
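
In case it helps anyone else, the AppArmor piece boils down to something like the lines below (a rough sketch - the /data2 subdirectories are just how my layout looks, so adjust for yours):

  # append the relocated data paths to the local override profiles, then reload AppArmor
  echo '/data2/nsm/** rw,' | sudo tee -a /etc/apparmor.d/local/usr.sbin.tcpdump
  echo '/data2/mysql/** rwk,' | sudo tee -a /etc/apparmor.d/local/usr.sbin.mysqld
  sudo service apparmor reload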

The sostat output is attached, and a previous thread about this issue is linked below.


https://groups.google.com/d/msg/security-onion/-_rjnhgI8pI/gpwlIIsfCAAJ

sostat_180610-0949.txt

ledin...@gmail.com

Jun 10, 2018, 2:38:22 PM
to security-onion
Two more datapoints here:

(1) Every instance where the error follows a failed pivot attempt involves a Kibana log entry that has the required 5-tuple: timestamp, sip, sport, dip, dport.

(2) There is no activity in the /var/log/nsm/[ HOSTNAME ]-[ INTERFACE ]/pcap_agent.log file when the error message is thrown.
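
For reference, I am watching that log with something like this while attempting a pivot (HOSTNAME/INTERFACE are placeholders for the sensor's values):

  tail -f /var/log/nsm/HOSTNAME-INTERFACE/pcap_agent.log    # stays completely silent when the error is thrown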

ledin...@gmail.com

Jun 10, 2018, 3:07:50 PM
to security-onion
Couple more datapoints:

(3) Pivots from Squert also fail, but no error is thrown - they just hang. This is also intermittent: every once in a while one works, but the majority just hang.

(4) The load on this box is not huge - the monitored link is < 50Mbps at peak, with typical sustained traffic of 2-10Mbps.

ledin...@gmail.com

Jun 10, 2018, 6:59:43 PM
to security-onion
OK - last bit of info before I downgrade back to the pre-ELK 14.04 version...

I performed a brand new, truly basic installation of 14.04.5.13 + all SOUP updates, and the Squert/Kibana pcap issues persist. This install does not have any secondary drive mappings or other operational tweaks - it is as default as an install can get.

If anyone has any insights and/or ideas, let me know. I have saved the inoperable VMs so I can easily spin those back up if need be. I would love to get ELK working, but for now it appears to be missing key functionality needed for my environment.

Thanks.

Message has been deleted

ledin...@gmail.com

Jun 10, 2018, 8:06:51 PM
to security-onion
Looks like I was able to resolve the Squert + CapMe part of this by clearing cookies/cached items. I wouldn't have thought something that mundane would cause CapMe to hang, but apparently it can. I could see this maybe if the same pcap were being re-attempted, but it doesn't make sense why different pcap pulls would hang.

Of course, I then tried the Kibana part of this and that is still not working properly...

Wes Lambert

Jun 11, 2018, 8:05:39 AM
to securit...@googlegroups.com
In Kibana, is it specific types of events that you are not able to pull (such as NIDS, Bro, etc.), or all types?

Thanks,
Wes


ledin...@gmail.com

Jun 11, 2018, 4:29:42 PM
to security-onion
Hi Wes,

Was doing some testing and while I have not yet checked all event_types, I can say that the failures thus far have been:

* bro_syslog
* bro_ssl
* snort

Also, it appears these work:

* bro_dns
* bro_conn

I will report back with additional event_types and other info as I discover it.

Thanks.

ledin...@gmail.com

Jun 11, 2018, 4:37:08 PM
to security-onion
OK - I was able to check the others much more quickly this time... I can confirm the following also fail:

* bro_files - this is intermittent, but it actually works at least 50% of the time
* bro_http
* bro_x509
* bro_dns - initially reported as working, but I did get a few errors on re-test - works about 70%+ of the time

Wes Lambert

Jun 12, 2018, 8:24:33 AM
to securit...@googlegroups.com
When CapMe pulls PCAPs for records from ES, it searches ES for a Bro connection record tied/related to that event -- if it cannot find one, then it will fail.  Are you able to find a corresponding bro_conn record in ES (srcip, dstip, srcport, dstport, timestamp) for the event from which you are trying to pivot?

Could it be that the bro_conn logs for those events have not yet made it to ES?  Could it be that the bro_conn log has not yet been written to disk because the session/stream it is watching has not officially closed/terminated?
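
If you want to check outside of Kibana, a query along these lines should show whether a matching conn record exists (the IPs/ports are placeholders, and the index pattern/field names are assumptions based on what shows up in Kibana):

  curl -s 'localhost:9200/logstash-*/_search?size=1&q=event_type:bro_conn+AND+source_ip:192.168.1.100+AND+source_port:52000+AND+destination_ip:10.1.1.1+AND+destination_port:514'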

Thanks,
Wes


ledin...@gmail.com

Jun 15, 2018, 9:17:57 PM
to security-onion
Hi Wes,

Based on your feedback, I did some additional testing and several of the failed PCAP pulls do not have bro_conn log entries. No idea why but that is the situation. I do know that in my pre-ELK SO instance, this same traffic would all be seen and PCAPs available via ELSA.

Also, regarding the session-not-closed aspect, much of the traffic is UDP, so I wouldn't think that would be a factor for those PCAPs.

Let me know if you can think of anything else to check. I thought all flows would generate a bro_conn log, but perhaps not - either way, I am definitely stumped on this...

Doug Burks

Jun 16, 2018, 8:16:27 AM
to securit...@googlegroups.com
Hi ledingtech,

Have you tried adjusting your Kibana time range to search, for example, only yesterday's logs to see if perhaps there is a delay with logs being ingested into Elasticsearch?

Have you checked to see if the logs appear in /nsm/bro/logs?
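
For example (the date below is a placeholder):

  ls /nsm/bro/logs/                      # one dated directory per day, plus "current"
  ls /nsm/bro/logs/2018-06-15/ | head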

Is this a VM?

If so, is it using storage that is shared with other VMs?

What kind of storage is it?  New SSD or old rotating platter?

--
Doug Burks
CEO
Security Onion Solutions, LLC

Leding Tech

Jun 17, 2018, 3:55:59 PM
to securit...@googlegroups.com
Hi Doug,

Thanks for the feedback…

First off, this is an issue that had been seen by at least a couple of users before I encountered it, and the last feedback was that it was going to be fixed in an upcoming version.  I checked the release notes and didn't see anything specifically related to this, so I'm not sure whether it ever got resolved.  This thread discusses what I'm referencing: https://groups.google.com/d/msg/security-onion/6EkxteAgTZQ/-2HXNB06AAAJ

Also, I'm reluctant to start targeting the VM/HW, as this exact same setup worked flawlessly with my pre-ELK SO instance.  Maybe the ELK flavor has greater HW requirements than what is outlined on the Hardware page, but I think you folks already updated it with the new ELK hardware requirements.  In any event, I have reviewed that page several times to ensure I'm not violating any of the HW maxims - I definitely fit into the medium network class, and my VM is spec'd at or above that level of hardware.  The first post in this thread outlines my hardware, but let me also summarize here:

* VM running on ESXi 6.5
* VM has 4 vCPUs across 2 cores, 16GB RAM, and a 50GB OS volume on a spinning HDD shared with 2 other low-intensity VMs.
* 450GB spinning HDD dedicated to data only - MySQL and NSM reside on this drive, and this drive is NOT shared.

In addition, I have done a fair amount of VMware-centric testing of this VM with both the pre-ELK and ELK flavors, and I am getting very similar IOPS and disk latency numbers.  Again, this VM performed fine with pre-ELK.

You asked about limiting my search, and I have tested using log entries less than 5 minutes old as well as log entries that are 1-72 hours old.  When a given ID fails to pull a PCAP, it fails regardless of when I test it.

You also asked about the logs being in /nsm/bro/logs, and this is interesting - I tested a few that are syslog event_types and they're not there!  Here is the test procedure I used (a rough shell version of steps 3-5 follows the list) - please let me know if this is valid, as well as any other thoughts/ideas:

1) Search in Kibana for a log entry that is at least 24h old and that previously failed a PCAP pull.
2) Attempt to re-pull the PCAP by pivoting to CapMe from Kibana - the error is thrown.
3) Retrieve the proper hourly log file from /nsm/bro/logs - in this case, this was a syslog file.
4) grep for the log's _id - NOT FOUND.
5) grep for the log's timestamp (i.e., NOT @timestamp) - NOT FOUND.
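
Roughly, steps 3-5 look like this on the sensor (the date/hour, _id, and timestamp values below are placeholders):

  zcat /nsm/bro/logs/2018-06-16/syslog.14:00:00-15:00:00.log.gz | grep 'AWPexampleESid'       # step 4: the ES _id - not found
  zcat /nsm/bro/logs/2018-06-16/syslog.14:00:00-15:00:00.log.gz | grep '2018-06-16T14:23:'    # step 5: the log's ts - not found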

Also, the log entries I re-tested using this procedure relate to syslog data from my border FW reporting down to my telemetry server via a link that is monitored by SO.  I am able to retrieve many other PCAPs for this exact same traffic but others fail as noted here.

I am going to continue testing and report back any further info.  Please also let me know additional test steps you think make sense.

Thanks.

Doug Burks

Jun 18, 2018, 4:05:06 PM
to securit...@googlegroups.com
Hi Leding Tech,

Replies inline.


On Sun, Jun 17, 2018 at 3:55 PM, Leding Tech <ledin...@gmail.com> wrote:
>
> Hi Doug,
>
> Thanks for the feedback…
>
> First off, this is an issue that had been seen by at least a couple of users before I encountered it and the last feedback was that it was going to be fixed in an upcoming version.  I checked the release notes and didn't see anything specifically related to this so I’m not sure if it ever got resolved.  This link discusses what I reference here: https://groups.google.com/d/msg/security-onion/6EkxteAgTZQ/-2HXNB06AAAJ

That thread is from September of last year.  Since then, we've made lots of improvements to our Elastic integration, including fixes for several different causes of the "Second ES query couldn't find this ID." error.

 
> Also, I’m reluctant to start targeting the VM\HW as this exact same setup worked flawlessly with my pre-ELK SO instance.  Now maybe the ELK-flavor has greater HW specs than what is outlined on the Hardware page but I think you folks already updated it with the new ELK hardware requirements.  In any event, I have reviewed that page several times to ensure I’m not violating any of the HW maxims - I definitely fit into the medium network class and my VM is spec’d at or above that level of hardware.  The first post in this thread outlines my hardware but let me also summarize here:
>
> * VM running on ESXi 6.5
> * VM has 4 vCPUs across 2 cores, 16GB RAM, 50GB OS volume on spinning HDD shared with 2 other low intensity VMs.
> * 450GB spinning HDD dedicated to data only - mySQL and NSM reside on this drive and this drive is NOT SHARED.

What is the average amount of network traffic you are monitoring in Mbps?

> In addition, I have done a fair amount of vmWare centric testing for this VM both with the pre-ELK and ELK flavors and I am getting very similar IOPS and disk latency numbers.  Again, this VM performed fine with pre-ELK.
>
> You asked about limiting my search and I have tested using log entires less than 5m old and then log entries that are 1-72 hours old.  When a given ID fails to pull a PCAP - it fails regardless of when I test it.
>
> You also asked about the logs being in /nsm/bro/logs and this is interesting - I tested a few that are syslog event_types and they’re not there !!!!  Here is the test procedure I used - please let me know if this is valid as well as any other thoughts\ideas:

When you say "syslog", are you looking at the Syslog dashboard under "Bro Hunting"?

> 1) Search in Kibana for a log entry at least 24h old and that previously failed PCAP pull.
> 2) Attempt to re-pull the PCAP by pivoting to capME From Kibana - error is thrown.
> 3) Retrieve the proper hourly log file from /nsm/bro/logs - in this case, this was a syslog file.
> 4) GREP for the log's _id - NOT FOUND.

_id is specific to Elasticsearch and would not be in the original Bro logs.  There is, however, a uid field that should be present in the original Bro logs.

> 5) GREP for the log's timestamp (i.e. NOT @timestamp) - NOT FOUND.
>
> Also, the log entries I re-tested using this procedure relate to syslog data from my border FW reporting down to my telemetry server via a link that is monitored by SO.  I am able to retrieve many other PCAPs for this exact same traffic but others fail as noted here.
>
> I am going to continue testing and report back any further info.  Please also let me know additional test steps you think make sense.

I tried to duplicate your scenario using some of our sample pcaps that include syslog data.  I ran so-test and went to the Bro Syslog dashboard:

[screenshot: Bro Syslog dashboard]

Pivoting from these logs to pcap seems to work fine:

[screenshot: successful pivot to CapMe]

One thing that you might want to do is set up a test system like this, run so-test, and verify that pivoting works for you as shown above.  Then compare the test system to your production system to see if you can spot any differences.
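
On a fresh test box, that amounts to something like:

  sudo so-test    # replays the included sample pcaps to populate the dashboards with known-good data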






Leding Tech

Jun 18, 2018, 4:17:19 PM
to securit...@googlegroups.com
Hi Doug,

Great feedback - I am going to spin up a dedicated testbed and will respond back with any new info.  Here are some answers to your specific questions:

* Traffic: peak at 50Mbps but that is very rare; sustained is between 3-15Mbps.

* I discovered the _id field was missing as well - my error.  However, I also tried to track down the log entries using the 5-tuple and timestamp - still missing.

* In terms of the syslog question, I have accessed these log entries via a variety of different Kibana screens.  The issue was initially encountered when performing searches via the Kibana Discover and Dashboard screens for specific log entries from my FW to my telemetry server.  I only focused on syslog here because that was the event_type of an issue I had handy when starting this thread.  However, I have also seen this occur for other event_types.





Leding Tech

Jun 18, 2018, 5:03:40 PM
to securit...@googlegroups.com
Hi Doug,

First, I want to reiterate - pivot does work in some cases…not all but some…

Next, here are some screenshots of the issue on my end…this is via Bro Hunting —> Syslog, and the CapMe attempt is for the first entry:

 


Message has been deleted

ledin...@gmail.com

Jun 18, 2018, 5:20:07 PM
to security-onion
Trying the re-post one more time - apologies...
so-pivot-issue_kibana-1.jpeg

ledin...@gmail.com

Jun 18, 2018, 7:57:29 PM
to security-onion
Here's an example of one that works fine - same server, source IP, destination IP & port, etc.
Also, this one has entries in both Bro conn.log and syslog.log files.

Bizarre...

so-pivot-issue_capme-2.jpeg
so-pivot-issue_kibana-2.jpeg

Doug Burks

Jun 19, 2018, 7:05:05 AM
to securit...@googlegroups.com
Hi ledingtech,

If Bro is not logging to /nsm/bro/logs/ consistently, then your ability to pivot from Kibana to Capme won't be consistent either.  So it's probably best to focus first on your other thread and getting Bro to log to /nsm/bro/logs/ consistently.

Doug Burks

Jun 20, 2018, 8:46:46 AM
to securit...@googlegroups.com



Hi ledingtech,

Based on the last update in your other thread, let's try an experiment...

Watch out for line wrapping below!

First, make a backup copy of /var/www/so/capme/.inc/callback-elastic.php:
sudo cp /var/www/so/capme/.inc/callback-elastic.php /var/www/so/capme/.inc/callback-elastic.php.orig

When CapMe refers to the "second ES query", it's taking the log that you pivoted from, pulling out the Bro CID, and then querying Elasticsearch for a conn.log that contains that CID.  Currently, that search is limited to a one hour window (30 minutes before the log timestamp and 30 minutes after).  Let's increase the end time to 1 week after:
sudo sed -i 's|$et = $timestamp_epoch + 1800;|$et = $timestamp_epoch + 604800;|g' /var/www/so/capme/.inc/callback-elastic.php

Then try a few of your problematic pivots to see if that makes any difference.
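
To confirm the change took effect (and to make it easy to go back later):

  grep 'timestamp_epoch' /var/www/so/capme/.inc/callback-elastic.php    # the $et line should now show + 604800
  sudo cp /var/www/so/capme/.inc/callback-elastic.php.orig /var/www/so/capme/.inc/callback-elastic.php    # revert if needed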

joeav...@gmail.com

Jan 9, 2019, 10:46:39 AM
to security-onion
Hey Doug,

I too was having this issue. I made the changes you suggested regarding the timestamp and it worked perfectly. I know this is an old post but I didn't see any updates to it so I figured I'd let you know. Thanks for the help!

Joe

Doug Burks

Jan 9, 2019, 11:33:43 AM
to securit...@googlegroups.com
Hi Joe,

Thanks for the feedback.  That experiment was based on the idea that ledingtech was seeing some long flows with the log not being written until the end of the flow.  Were you having trouble pivoting on long flows?


Joe Volpe

Jan 9, 2019, 12:18:47 PM
to securit...@googlegroups.com
Hey Doug,

I've attached a screenshot of the pcap that I was trying to pivot to in CapMe prior to making the changes you suggested.  After I made the changes, it opened right up with no ES error.  I was experiencing this with a few others but can't remember which ones.  If I come across it again, I'll be sure to update the thread and let you know.  If there's anything else I can provide, please let me know.  Thanks.

Joe

Pcap_SO.jpg

Doug Burks

Jan 9, 2019, 12:33:42 PM
to securit...@googlegroups.com
Based on the screenshot, that doesn't really seem like a long flow and it seems a little surprising that the original code didn't work for that flow.  I'd be really curious to see the log you were pivoting from (guessing it was a Bro HTTP log) and its corresponding conn.log (pivot on UID hyperlink and then filter for bro_conn Data Type).  

Joe Volpe

Jan 10, 2019, 9:32:45 AM
to securit...@googlegroups.com
Hi Doug,

I've copied and pasted each of the logs below.  I left the IPs in rather than "sanitizing" the logs (they're pretty innocuous).

Time                             source_ip        source_port  destination_ip  destination_port  uid                 _id
January 9th 2019, 08:11:46.342   192.168.66.107   -            -               -                 CdH1zd1XuilP10abDg  ylgrM2gBKAR-58FSYcX1
January 9th 2019, 08:11:46.281   192.168.66.107   34690        52.216.232.139  80                CdH1zd1XuilP10abDg  zFgrM2gBKAR-58FSYcX4
January 9th 2019, 08:11:46.224   192.168.66.107   34690        52.216.232.139  80                CdH1zd1XuilP10abDg  9FkwM2gBKAR-58FSCw4B

########################################################################################################################################################################################
{
  "_index": "seconion:logstash-bro-2019.01.09",
  "_type": "doc",
  "_id": "ylgrM2gBKAR-58FSYcX1",
  "_version": 1,
  "_score": null,
  "_source": {
    "file_ip": [
      "52.216.232.139"
    ],
    "syslog-host_from": "seconion",
    "destination_ip": [
      "192.168.66.107"
    ],
    "syslog-file_name": "/nsm/bro/logs/current/files.log",
    "is_orig": "false",
    "timed_out": false,
    "uid": [
      "CdH1zd1XuilP10abDg"
    ],
    "local_orig": "false",
    "destination_geo": {},
    "syslog-priority": "notice",
    "duration": 0,
    "syslog-host": "SecOnion",
    "missing_bytes": 0,
    "logstash_time": 0.004214048385620117,
    "mimetype": "text/html",
    "analyzer": [
      "SHA1",
      "MD5"
    ],
    "host": "gateway",
    "source": "HTTP",
    "@timestamp": "2019-01-09T15:11:46.342Z",
    "syslog-tags": ".source.s_bro_files",
    "fuid": "F72QKl2qofIgFqHKt",
    "total_bytes": 419,
    "sha1": "e9a8c695391be81c2ea3eee65a52afa94c222815",
    "@version": "1",
    "depth": 0,
    "tags": [
      "syslogng",
      "bro",
      "_geoip_lookup_failure",
      "external_destination"
    ],
    "event_type": "bro_files",
    "syslog-sourceip": "127.0.0.1",
    "overflow_bytes": 0,
    "timestamp": "2019-01-09T15:11:47.595Z",
    "ips": "192.168.66.107",
    "destination_ips": "192.168.66.107",
    "port": 35850,
    "seen_bytes": 419,
    "md5": "4384cffb0af0f42d033cc1465f016427",
    "message": "{\"ts\":\"2019-01-09T15:11:46.342518Z\",\"fuid\":\"F72QKl2qofIgFqHKt\",\"tx_hosts\":[\"52.216.232.139\"],\"rx_hosts\":[\"192.168.66.107\"],\"conn_uids\":[\"CdH1zd1XuilP10abDg\"],\"source\":\"HTTP\",\"depth\":0,\"analyzers\":[\"SHA1\",\"MD5\"],\"mime_type\":\"text/html\",\"duration\":0.0,\"local_orig\":false,\"is_orig\":false,\"seen_bytes\":419,\"total_bytes\":419,\"missing_bytes\":0,\"overflow_bytes\":0,\"timedout\":false,\"md5\":\"4384cffb0af0f42d033cc1465f016427\",\"sha1\":\"e9a8c695391be81c2ea3eee65a52afa94c222815\"}",
    "syslog-facility": "user"
  },
  "fields": {
    "@timestamp": [
      "2019-01-09T15:11:46.342Z"
    ]
  },
  "highlight": {
    "uid.keyword": [
      "@kibana-highlighted-field@CdH1zd1XuilP10abDg@/kibana-highlighted-field@"
    ],
    "uid": [
      "@kibana-highlighted-field@CdH1zd1XuilP10abDg@/kibana-highlighted-field@"
    ],
    "message": [
      "{\"ts\":\"2019-01-09T15:11:46.342518Z\",\"fuid\":\"F72QKl2qofIgFqHKt\",\"tx_hosts\":[\"52.216.232.139\"],\"rx_hosts\":[\"192.168.66.107\"],\"conn_uids\":[\"@kibana-highlighted-field@CdH1zd1XuilP10abDg@/kibana-highlighted-field@\"],\"source\":\"HTTP\",\"depth\":0,\"analyzers\":[\"SHA1\",\"MD5\"],\"mime_type\":\"text/html\",\"duration\":0.0,\"local_orig\":false,\"is_orig\":false,\"seen_bytes\":419,\"total_bytes\":419,\"missing_bytes\":0,\"overflow_bytes\":0,\"timedout\":false,\"md5\":\"4384cffb0af0f42d033cc1465f016427\",\"sha1\":\"e9a8c695391be81c2ea3eee65a52afa94c222815\"}"
    ]
  },
  "sort": [
    1547046706342
  ]
}

#####################################################################################################################################################################################
{
  "_index": "seconion:logstash-bro-2019.01.09",
  "_type": "doc",
  "_id": "zFgrM2gBKAR-58FSYcX4",
  "_version": 1,
  "_score": null,
  "_source": {
    "destination_geo": {
      "latitude": 39.0481,
      "ip": "52.216.232.139",
      "country_name": "United States",
      "region_code": "VA",
      "longitude": -77.4728,
      "dma_code": 511,
      "location": {
        "lat": 39.0481,
        "lon": -77.4728
      },
      "continent_code": "NA",
      "postal_code": "20149",
      "region_name": "Virginia",
      "country_code3": "US",
      "city_name": "Ashburn",
      "timezone": "America/New_York",
      "country_code2": "US"
    },
    "syslog-priority": "notice",
    "logstash_time": 0.0068209171295166016,
    "source_ip": "192.168.66.107",
    "@timestamp": "2019-01-09T15:11:46.281Z",
    "syslog-tags": ".source.s_bro_http",
    "destination_port": 80,
    "response_body_length": 419,
    "source_port": 34690,
    "@version": "1",
    "resp_mime_types": [
      "text/html"
    ],
    "source_ips": "192.168.66.107",
    "destination_ips": "52.216.232.139",
    "frequency_scores": "5.5638",
    "useragent_length": 57,
    "virtual_host": "52.216.232.139",
    "destination_ip": "52.216.232.139",
    "status_code": 200,
    "syslog-facility": "user",
    "syslog-host_from": "seconion",
    "syslog-file_name": "/nsm/bro/logs/current/http_enp1s0f1.log",
    "uid": "CdH1zd1XuilP10abDg",
    "uri_length": 26,
    "syslog-host": "SecOnion",
    "useragent": "Dalvik/2.1.0 (Linux; U; Android 7.1.2; AFTN Build/NS6258)",
    "version": "1.1",
    "status_message": "OK",
    "resp_fuids": [
      "F72QKl2qofIgFqHKt"
    ],
    "trans_depth": 1,
    "tags": [
      "syslogng",
      "bro",
      "external_destination",
      "internal_source"
    ],
    "virtual_host_length": 14,
    "event_type": "bro_http",
    "syslog-sourceip": "127.0.0.1",
    "method": "GET",
    "timestamp": "2019-01-09T15:11:47.595Z",
    "ips": [
      "192.168.66.107",
      "52.216.232.139"
    ],
    "request_body_length": 0,
    "virtual_host_frequency_score": 5.5638,
    "port": 35850,
    "freq_virtual_host": "52216232139",
    "uri": "/kindle-wifi/wifistub.html",
    "message": "{\"ts\":\"2019-01-09T15:11:46.281106Z\",\"uid\":\"CdH1zd1XuilP10abDg\",\"id.orig_h\":\"192.168.66.107\",\"id.orig_p\":34690,\"id.resp_h\":\"52.216.232.139\",\"id.resp_p\":80,\"trans_depth\":1,\"method\":\"GET\",\"host\":\"52.216.232.139\",\"uri\":\"/kindle-wifi/wifistub.html\",\"version\":\"1.1\",\"user_agent\":\"Dalvik/2.1.0 (Linux; U; Android 7.1.2; AFTN Build/NS6258)\",\"request_body_len\":0,\"response_body_len\":419,\"status_code\":200,\"status_msg\":\"OK\",\"tags\":[],\"resp_fuids\":[\"F72QKl2qofIgFqHKt\"],\"resp_mime_types\":[\"text/html\"]}"
  },
  "fields": {
    "@timestamp": [
      "2019-01-09T15:11:46.281Z"
    ]
  },
  "highlight": {
    "uid.keyword": [
      "@kibana-highlighted-field@CdH1zd1XuilP10abDg@/kibana-highlighted-field@"
    ],
    "uid": [
      "@kibana-highlighted-field@CdH1zd1XuilP10abDg@/kibana-highlighted-field@"
    ],
    "message": [
      "{\"ts\":\"2019-01-09T15:11:46.281106Z\",\"uid\":\"@kibana-highlighted-field@CdH1zd1XuilP10abDg@/kibana-highlighted-field@\",\"id.orig_h\":\"192.168.66.107\",\"id.orig_p\":34690,\"id.resp_h\":\"52.216.232.139\",\"id.resp_p\":80,\"trans_depth\":1,\"method\":\"GET\",\"host\":\"52.216.232.139\",\"uri\":\"/kindle-wifi/wifistub.html\",\"version\":\"1.1\",\"user_agent\":\"Dalvik/2.1.0 (Linux; U; Android 7.1.2; AFTN Build/NS6258)\",\"request_body_len\":0,\"response_body_len\":419,\"status_code\":200,\"status_msg\":\"OK\",\"tags\":[],\"resp_fuids\":[\"F72QKl2qofIgFqHKt\"],\"resp_mime_types\":[\"text/html\"]}"
    ]
  },
  "sort": [
    1547046706281
  ]
}

########################################################################################################################################################################################
{
  "_index": "seconion:logstash-bro-2019.01.09",
  "_type": "doc",
  "_id": "9FkwM2gBKAR-58FSCw4B",
  "_version": 1,
  "_score": null,
  "_source": {
    "respond_bytes": 853,
    "local_orig": "true",
    "destination_geo": {
      "latitude": 39.0481,
      "ip": "52.216.232.139",
      "country_name": "United States",
      "region_code": "VA",
      "longitude": -77.4728,
      "dma_code": 511,
      "location": {
        "lat": 39.0481,
        "lon": -77.4728
      },
      "continent_code": "NA",
      "postal_code": "20149",
      "region_name": "Virginia",
      "country_code3": "US",
      "city_name": "Ashburn",
      "timezone": "America/New_York",
      "country_code2": "US"
    },
    "respond_country_code": "US",
    "syslog-priority": "notice",
    "tunnel_parents": [],
    "logstash_time": 0.004636287689208984,
    "service": "http",
    "source_ip": "192.168.66.107",
    "missed_bytes": 0,
    "@timestamp": "2019-01-09T15:11:46.224Z",
    "syslog-tags": ".source.s_bro_conn",
    "destination_port": 80,
    "source_port": 34690,
    "@version": "1",
    "history": "ShADadfFr",
    "source_ips": "192.168.66.107",
    "original_bytes": 183,
    "destination_ips": "52.216.232.139",
    "local_respond": "false",
    "destination_ip": "52.216.232.139",
    "syslog-facility": "user",
    "syslog-host_from": "seconion",
    "syslog-file_name": "/nsm/bro/logs/current/conn.log",
    "uid": "CdH1zd1XuilP10abDg",
    "original_ip_bytes": 535,
    "respond_ip_bytes": 1564,
    "duration": 300.128771,
    "syslog-host": "SecOnion",
    "respond_packets": 7,
    "protocol": "tcp",
    "host": "gateway",
    "connection_state": "SF",
    "sensor_name": "seconion-enp1s0f1",
    "connection_state_description": "Normal SYN/FIN completion",
    "total_bytes": 1036,
    "tags": [
      "syslogng",
      "bro",
      "external_destination",
      "internal_source"
    ],
    "original_packets": 8,
    "event_type": "bro_conn",
    "syslog-sourceip": "127.0.0.1",
    "timestamp": "2019-01-09T15:16:52.851Z",
    "ips": [
      "192.168.66.107",
      "52.216.232.139"
    ],
    "port": 35850,
    "message": "{\"ts\":\"2019-01-09T15:11:46.224599Z\",\"uid\":\"CdH1zd1XuilP10abDg\",\"id.orig_h\":\"192.168.66.107\",\"id.orig_p\":34690,\"id.resp_h\":\"52.216.232.139\",\"id.resp_p\":80,\"proto\":\"tcp\",\"service\":\"http\",\"duration\":300.128771,\"orig_bytes\":183,\"resp_bytes\":853,\"conn_state\":\"SF\",\"local_orig\":true,\"local_resp\":false,\"missed_bytes\":0,\"history\":\"ShADadfFr\",\"orig_pkts\":8,\"orig_ip_bytes\":535,\"resp_pkts\":7,\"resp_ip_bytes\":1564,\"tunnel_parents\":[],\"resp_cc\":\"US\",\"sensorname\":\"seconion-enp1s0f1\"}"
  },
  "fields": {
    "@timestamp": [
      "2019-01-09T15:11:46.224Z"
    ]
  },
  "highlight": {
    "uid.keyword": [
      "@kibana-highlighted-field@CdH1zd1XuilP10abDg@/kibana-highlighted-field@"
    ],
    "uid": [
      "@kibana-highlighted-field@CdH1zd1XuilP10abDg@/kibana-highlighted-field@"
    ],
    "message": [
      "{\"ts\":\"2019-01-09T15:11:46.224599Z\",\"uid\":\"@kibana-highlighted-field@CdH1zd1XuilP10abDg@/kibana-highlighted-field@\",\"id.orig_h\":\"192.168.66.107\",\"id.orig_p\":34690,\"id.resp_h\":\"52.216.232.139\",\"id.resp_p\":80,\"proto\":\"tcp\",\"service\":\"http\",\"duration\":300.128771,\"orig_bytes\":183,\"resp_bytes\":853,\"conn_state\":\"SF\",\"local_orig\":true,\"local_resp\":false,\"missed_bytes\":0,\"history\":\"ShADadfFr\",\"orig_pkts\":8,\"orig_ip_bytes\":535,\"resp_pkts\":7,\"resp_ip_bytes\":1564,\"tunnel_parents\":[],\"resp_cc\":\"US\",\"sensorname\":\"seconion-enp1s0f1\"}"
    ]
  },
  "sort": [
    1547046706224
  ]
}

Doug Burks

Jan 18, 2019, 7:41:58 AM
to securit...@googlegroups.com
This is really strange.  All 3 logs below show a timestamp of 08:11:46 so I'm not sure why the original code didn't work.  Is your Security Onion box set to UTC timezone or did you manually set it to your local time zone?  When you were running the original code, did all pivots fail or just some?  If you revert to the original code and try the pivot below again, does it continue to succeed or does it now fail?
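
A quick way to check on the SO box itself:

  cat /etc/timezone; date    # shows the configured timezone and the current local time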

don m.

Feb 6, 2019, 12:14:02 PM
to security-onion

Greetings -
Having a similar problem to this thread.

For events that show up in the "Discover" area for "today" (meaning the date is 2/6, or "Today", so the data would be in the 'current' pcap directory structure), when I click on the hyperlink for an _ID, CapMe happily shows me the event data.

For events that show up in Discover for "yesterday" (as in, the date is 2/5), clicking on the _ID field fails w/ the same message "Second ES query could not find ID".

Doug Burks

Feb 6, 2019, 1:01:04 PM
to securit...@googlegroups.com
Hi Don,

Are you able to share a screenshot of the full expanded log that you're trying to pivot from?


don m.

Feb 6, 2019, 2:20:25 PM
to security-onion
You Bet! Hopefully this is enough info.
20190206_SO_Kibana_Yesterday.PNG

Doug Burks

Feb 6, 2019, 2:54:54 PM
to securit...@googlegroups.com
Based on that screenshot, CapMe should try to find the corresponding Bro conn log using an Elastic query like this:
event_type:bro_conn AND 172.16.100.4 AND 49192 AND 192.168.214.4 AND 1433

Are you able to search Kibana for that and find the corresponding Bro conn log?


don m.

Feb 6, 2019, 4:15:18 PM
to security-onion
Alas, no...

The data isn't there. I tried just the relationship event_type:bro_conn AND 172... AND 192... and didn't get results for the past 48 hrs, which gives me great pause because... in my case 172.16.0.0 is the DMZ, and I thought I had that segment under monitoring...

So - I'll check, dot the i's and cross the t's, and then see what I can see. Lots of details to make this work out.
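
I'll start by confirming which interfaces Bro is actually sniffing, along the lines of:

  grep -i interface /opt/bro/etc/node.cfg    # the worker entries should include the interface that sees the 172.16.0.0 segment
  sudo so-status                             # confirm the sensor services on that interface are running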

Thanks!

Joe Volpe

Feb 16, 2019, 12:12:59 PM
to securit...@googlegroups.com
Hi Doug,

Sorry it's taken me so long to respond.  My box is set to local time.  At the time, I think it was a few pivots that failed.

Since then, I've been trying to replicate this but to no avail.  I think I may have jumped the gun and tried to open a pcap that hadn't finished processing yet.  At this point, unless something changes, let's chalk this one up as a PEBCAK error.  Thanks again for your help.

Joe