App_proto Failed


Monica Okane

Aug 3, 2024, 4:39:00 PM
to delryqudce

I'm trying to write a mutate filter to set a service name based on the port given. Suricata is feeding the data to Logstash, which currently doesn't have a very good way of describing services unless the port is in a predefined list. I'd still like to have that information available instead of getting nothing or getting a "failed" entry.

Do I have to do a separate check for whether [app_proto] is set? I'm using replace currently, but I'm not sure if it will create the [app_proto] field if it doesn't already exist. Otherwise, I think I'll have to remove_field first, then add_field.
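For reference, the mutate filter's replace option does create the field if it does not already exist, so a separate remove_field/add_field pair shouldn't be needed. A sketch of the kind of port-to-service mapping described, using the translate filter (field names like [dest_port] and the port dictionary are assumptions for illustration; the option names follow recent versions of the logstash-filter-translate plugin, older versions use field/destination instead of source/target):

```conf
filter {
  # Only fill in app_proto when Suricata couldn't identify the service.
  if [app_proto] == "failed" or ![app_proto] {
    translate {
      source     => "[dest_port]"
      target     => "[app_proto]"
      dictionary => {
        "3306" => "mysql"
        "5432" => "postgresql"
      }
      # Value used when the port is not in the dictionary above.
      fallback   => "unknown"
    }
  }
}
```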

It uses Filebeat to tail Suricata's EVE log and send the data to Logstash, which then performs a lot of processing and enrichment of the data (including the service-name mapping you are looking to achieve). The reason for using Filebeat and Logstash together is to allow multiple Suricata instances to report into a single central collection point.

It also looks like you are trying to figure out the client and server ends of the connection by looking for the lowest port number. The integration above does the same, but with a somewhat more sophisticated method.
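The lowest-port heuristic can be sketched as follows (the port table is a hypothetical minimal example, not the integration's actual mapping):

```python
# Hypothetical minimal port-to-service table; a real deployment would
# use a fuller mapping (e.g. the IANA service-name registry).
PORT_SERVICES = {22: "ssh", 53: "dns", 80: "http", 443: "https"}

def guess_service(src_port: int, dst_port: int) -> str:
    """Guess the service for a connection from its two port numbers.

    Heuristic: the endpoint with the lower port is assumed to be the
    server, since well-known service ports are low-numbered.
    """
    server_port = min(src_port, dst_port)
    return PORT_SERVICES.get(server_port, "unknown")
```

For example, a flow between ports 51514 and 443 is guessed to be HTTPS, while a flow between two ephemeral ports falls back to "unknown".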

I've looked at this, and it does look great, but you only seem to be compatible with Elastic Stack 6.2. We're on 6.5, and we rather like the extra features we get with it over 6.2. If you update to supporting newer Elastic Stack versions, let me know.

I'm working on writing some signatures for malware that uses a custom protocol. In my signatures, I'd like to be able to write "app-layer-protocol:failed;" in order to filter out traffic that has a known protocol associated with it. However, with that logic incorporated into my signatures, some of them fail to fire. Inspecting these rule matches with and without that logic, I found that the rules that consistently fire are matching traffic whose protocol is ALPROTO_FAILED, while the ones that do not fire are not matched at all, because their traffic is labeled as having no app-layer protocol, i.e. ALPROTO_UNKNOWN. There is no support for matching traffic with "app-layer-protocol:unknown;" either. From my perspective, any traffic with an "unknown" protocol should be considered to have failed application-layer protocol inspection. Therefore, I would like "app-layer-protocol:failed;" to match traffic classified as either ALPROTO_UNKNOWN or ALPROTO_FAILED.

I'm not sure if this is considered a feature or a bug... I felt it was a bug, as I expected my traffic to match the signature whether application-layer protocol detection failed or ended up in an unknown state. Those are effectively the same state in my eyes, though I understand they might have different meanings within Suricata's code. Please let me know if I can help by providing more information, clarifying anything, etc. I don't know how to label this in terms of effort or difficulty either; please excuse my ignorance.

I can provide PCAP, and give the specific case I'm talking about, sure!
...However, I'm not sure it will help improve the implementation on the code side. I'm just asking for the keyword "app-layer-protocol" with the argument/value "failed" to match traffic that has been marked as either ALPROTO_UNKNOWN or ALPROTO_FAILED.
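For context, a minimal example of the kind of rule being described (the msg, content bytes, and sid are hypothetical); as discussed, with the current behavior it matches only flows classified as ALPROTO_FAILED, not ALPROTO_UNKNOWN ones:

```
alert tcp any any -> any any (msg:"Possible custom malware protocol"; app-layer-protocol:failed; content:"|de ad be ef|"; sid:1000001; rev:1;)
```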

Hello Splunkers,
I keep getting the error message "Could not load lookup=LOOKUP-app_proto" in multiple apps on multiple dashboards. I have checked the settings, and neither the lookup file nor the lookup definition exists, and I can't figure out what is asking for this lookup. I can't find a reference to a lookup by that name in any documentation or on any of the Splunk sites. I created a lookup with a matching name, but I don't know where to put it: I added it to the Search app and still got the error, then added it to one of the apps showing the error, and that didn't work either. Basic system info is below; let me know what other info you would like and I will provide it as soon as I can. Thanks for reading.

Looking at these automatic lookups, they list the lookup definition as stream_app_lookup. Checking there showed that "stream_app_lookup" was present but listed "supported fields" as none. Next, I checked the lookup table files and found that "stream_app_lookup" was not present. (It should have been created when the Stream app was installed.)

I had actually created a lookup with the same name as an existing lookup, but with different fields. This name collision was causing the error. I changed the name of the new lookup and the errors went away.

hey eliasit,
can you suggest some inputs for integrating splunk_app_stream to get the DNS logs? It doesn't seem to be fetching the data from the DNS servers, even though I installed the Splunk forwarders on the DNS servers via the deployment server.

When you create a lookup definition in Splunk, you sometimes have to refresh the configuration manually, because Splunk does not always recognise the new definition on its own. There are two ways to do it:
1 - Hit the debug/refresh endpoint. This makes Splunk pick up the new lookup definition; this has happened to me several times, and I was only able to get the lookup working properly after running this. It does not restart the Splunk service, only reloads the configuration definitions:
-> :8000/en-GB/debug/refresh
2 - Restart the Splunk service.

The Cloud Spanner Database Admin API can be used to:
* create, drop, and list databases
* update the schema of pre-existing databases
* create, delete, copy, and list backups for a database
* restore a database from an existing backup

The returned long-running operation has a name of the format projects/&lt;project&gt;/instances/&lt;instance&gt;/databases/&lt;database&gt;/operations/&lt;operation_id&gt; and can be used to track execution of the ChangeQuorum. The metadata field type is ChangeQuorumMetadata.
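As a small illustration (not part of the API itself), such an operation name can be split into its resource components, assuming the alternating collection/id layout the documentation describes:

```python
def parse_operation_name(name: str) -> dict:
    """Split a Spanner resource name into its components.

    Assumes the documented alternating collection/id layout, e.g.
    projects/<p>/instances/<i>/databases/<d>/operations/<op>.
    """
    parts = name.split("/")
    if len(parts) % 2 != 0:
        raise ValueError(f"malformed resource name: {name!r}")
    # Pair each collection id (even index) with its resource id (odd index).
    return dict(zip(parts[0::2], parts[1::2]))
```

For example, parsing "projects/p/instances/i/databases/d/operations/op" yields the mapping {"projects": "p", "instances": "i", "databases": "d", "operations": "op"}.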

Starts copying a Cloud Spanner backup. The returned backup long-running operation will have a name of the format projects/&lt;project&gt;/instances/&lt;instance&gt;/backups/&lt;backup&gt;/operations/&lt;operation_id&gt; and can be used to track copying of the backup. The operation is associated with the destination backup. The metadata field type is CopyBackupMetadata. The response field type is Backup, if successful. Cancelling the returned operation will stop the copying and delete the destination backup. Concurrent CopyBackup requests can run on the same source backup.

Starts creating a new Cloud Spanner backup. The returned backup long-running operation will have a name of the format projects/&lt;project&gt;/instances/&lt;instance&gt;/backups/&lt;backup&gt;/operations/&lt;operation_id&gt; and can be used to track creation of the backup. The metadata field type is CreateBackupMetadata. The response field type is Backup, if successful. Cancelling the returned operation will stop the creation and delete the backup. There can be only one pending backup creation per database. Backup creation of different databases can run concurrently.

Creates a new Spanner database and starts to prepare it for serving. The returned long-running operation will have a name of the format &lt;database&gt;/operations/&lt;operation_id&gt; and can be used to track preparation of the database. The metadata field type is CreateDatabaseMetadata. The response field type is Database, if successful.

Drops (aka deletes) a Cloud Spanner database. Completed backups for the database will be retained according to their expire_time. Note: Cloud Spanner might continue to accept requests for a few seconds after the database has been deleted.

Lists the backup long-running operations in the given instance. A backup operation has a name of the form projects/&lt;project&gt;/instances/&lt;instance&gt;/backups/&lt;backup&gt;/operations/&lt;operation_id&gt;. The long-running operation metadata field type metadata.type_url describes the type of the metadata. Operations returned include those that have completed, failed, or been canceled within the last 7 days, and pending operations. Operations are ordered by operation.metadata.value.progress.start_time in descending order, starting from the most recently started operation.
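The documented ordering, most recently started operation first, can be sketched as follows (the in-memory dicts are hypothetical stand-ins for the protobuf operation messages, whose metadata carries progress.start_time):

```python
from datetime import datetime, timezone

# Hypothetical stand-ins for backup operations and their start times.
operations = [
    {"name": "op-a", "start_time": datetime(2024, 8, 1, tzinfo=timezone.utc)},
    {"name": "op-b", "start_time": datetime(2024, 8, 3, tzinfo=timezone.utc)},
    {"name": "op-c", "start_time": datetime(2024, 8, 2, tzinfo=timezone.utc)},
]

# Sort by start time in descending order, matching the documented
# "most recently started first" ordering.
ordered = sorted(operations, key=lambda op: op["start_time"], reverse=True)
```

Here `ordered` lists op-b first, since it started most recently.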

Lists database long-running operations. A database operation has a name of the form projects/&lt;project&gt;/instances/&lt;instance&gt;/databases/&lt;database&gt;/operations/&lt;operation_id&gt;. The long-running operation metadata field type metadata.type_url describes the type of the metadata. Operations returned include those that have completed, failed, or been canceled within the last 7 days, and pending operations.

Creates a new database by restoring from a completed backup. The new database must be in the same project and in an instance with the same instance configuration as the instance containing the backup. The returned database long-running operation has a name of the format projects/&lt;project&gt;/instances/&lt;instance&gt;/databases/&lt;database&gt;/operations/&lt;operation_id&gt;, and can be used to track the progress of the operation and to cancel it. The metadata field type is RestoreDatabaseMetadata. The response type is Database, if successful. Cancelling the returned operation will stop the restore and delete the database. There can be only one database being restored into an instance at a time. Once the restore operation completes, a new restore operation can be initiated without waiting for the optimize operation associated with the first restore to complete.

Attempting this RPC on a non-existent Cloud Spanner database will result in a NOT_FOUND error if the user has spanner.databases.list permission on the containing Cloud Spanner instance. Otherwise returns an empty set of permissions. Calling this method on a backup that does not exist will result in a NOT_FOUND error if the user has spanner.backups.list permission on the containing instance.

The returned long-running operation will have a name of the format projects/&lt;project&gt;/instances/&lt;instance&gt;/databases/&lt;database&gt;/operations/&lt;operation_id&gt; and can be used to track the database modification. The metadata field type is UpdateDatabaseMetadata. The response field type is Database, if successful.
