
[Filebeat - Module Cisco-ASA] Parsing of Cisco Event Message 734001 #16212

Closed
MarcusCaepio opened this issue Feb 10, 2020 · 6 comments · Fixed by #16612
Labels: Filebeat

Comments

@MarcusCaepio
Contributor

MarcusCaepio commented Feb 10, 2020

Hi all,
Describe the enhancement:
We recently switched from Logstash to ingest pipelines with the filebeat cisco/asa module and unfortunately noticed that one important Cisco message is not yet ingested correctly: cisco.asa.message_id: 734001. These messages list the DAP records a user is assigned when connecting via VPN, and they are very useful while troubleshooting VPN problems.
Describe a specific use case for the enhancement or feature:
The ID Pattern given by cisco looks like this:
%ASA-6-734001: DAP: User user, Addr ipaddr , Connection connection : The following DAP records were selected for this connection: DAP record names
https://www.cisco.com/c/en/us/td/docs/security/asa/syslog/b_syslog/syslogs9.html#con_5678113

Here is a sample event:
%ASA-6-734001: DAP: User firstname.lastname@domain.com, Addr 1.2.3.4, Connection AnyConnect: The following DAP records were selected for this connection: dap_name1, dap_name2
The pattern and Logstash filter I used in the past look like this:

Pattern:
CISCOFW734001 DAP: User %{DATA:user}, Addr %{IP:src_ip}, Connection %{GREEDYDATA:connection_type}: The following DAP records were selected for this connection: %{GREEDYDATA:dap_records}

Filter:
    if [dap_records] {
      mutate {
        split => { "dap_records" => ", " }
      }
    }

It would be very nice if this could be ingested by Filebeat, too. At the moment we are not able to use Elastic for this use case: we can search for the message ID, but as soon as we additionally search for a username in the event.original field, the search takes very, very long.

I am currently totally inexperienced in creating modules, so I don't know whether I can give constructive help here. But if somebody can lend me a hand on where to begin (besides the docs), I will try to help.

Also relates to #14151

Thanks in advance!
Cheers,
Marcus

@elasticmachine
Collaborator

Pinging @elastic/siem (Team:SIEM)

@MarcusCaepio
Contributor Author

Maybe @adriansr can help here most?

@andrewkroh
Member

But if somebody can lend me a hand on where to begin (besides the docs), I will try to help.

Hi @MarcusCaepio, the Cisco ASA module uses an Elasticsearch Ingest Node pipeline to do the parsing. This pipeline is bundled into the Filebeat install. You can try modifying your pipeline and testing the results; any time you change the pipeline config on disk, you need to reinstall it to Elasticsearch before indexing any log data.

Your filebeat install will have a module/cisco/shared/ingest/asa-ftd-pipeline.yml file. You probably want to insert a dissect processor for message 734001. For example, this is how message 106001 is parsed:

- dissect:
    if: "ctx._temp_.cisco.message_id == '106001'"
    field: "message"
    pattern: "%{network.direction} %{network.transport} connection %{event.outcome} from %{source.address}/%{source.port} to %{destination.address}/%{destination.port} flags %{} on interface %{_temp_.cisco.source_interface}"

After you make changes to that file, you need to re-install the pipeline to ES with

filebeat setup --pipelines -E filebeat.overwrite_pipelines=true -e

so your changes take effect. Then you can send some test log lines through and check the result. If the changes work, let us know and we can update the module with your changes.

@MarcusCaepio
Contributor Author

MarcusCaepio commented Feb 21, 2020

Hey @andrewkroh ,
Thanks a lot for the tips. Today I tried your suggestion, and I think I have created a working pattern. Maybe you want to have a look at the commit in my fork and tell me whether this is OK for a PR?

Thanks in advance.

@adriansr
Contributor

@MarcusCaepio, go ahead with the PR. You'll have to make sure that the new fields under cisco are added to module/cisco/asa/_meta/fields.yml and module/cisco/ftd/_meta/fields.yml, and maybe you can add an example log line to module/cisco/asa/test.

Instead of cisco.user, I think you can use one of the ECS fields, user.name or user.email.
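
For the remaining cisco.* fields, the fields.yml entries might look roughly like this (a sketch only; names, types, and descriptions are assumptions):

# Sketch only: these would go under the existing cisco field group in
# module/cisco/asa/_meta/fields.yml and module/cisco/ftd/_meta/fields.yml.
- name: connection_type
  type: keyword
  description: >
    The VPN connection type, e.g. AnyConnect.
- name: dap_records
  type: keyword
  description: >
    The DAP records that were selected for the connection.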

@MarcusCaepio
Contributor Author

MarcusCaepio commented Feb 24, 2020

Hi @adriansr ,
PR is done. Please ignore my last comment; I just made a thinking error :)

adriansr pushed a commit to MarcusCaepio/beats that referenced this issue Mar 18, 2020
    Fixes elastic#16212
    The split part is needed because one has to be able to search for an
    explicit dap_record. As the order and number of the records can vary a
    lot, just saving the whole string makes no sense. I chose "user.email"
    and "source.ip" as ECS fields, and "cisco.connection_type" and
    "cisco.dap_records", since the syslog message docs also use those names.
    I ran "make update" in /beats/x-pack/filebeat and /beats/filebeat.
    Hopefully the pipeline succeeds now.
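
The split mentioned in this commit message maps onto an Ingest Node split processor; a minimal sketch, assuming the dissect step has already written the comma-separated list to cisco.dap_records:

- split:
    # Assumption: cisco.dap_records holds a string like "dap_name1, dap_name2".
    if: "ctx.cisco?.dap_records != null"
    field: "cisco.dap_records"
    separator: ",\\s*"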
adriansr pushed a commit that referenced this issue Mar 19, 2020

Fixes #16212

Co-authored-by: MarcusCaepio <7324088+MarcusCaepio@users.noreply.github.com>
adriansr pushed a commit to adriansr/beats that referenced this issue Mar 19, 2020

Fixes elastic#16212

Co-authored-by: MarcusCaepio <7324088+MarcusCaepio@users.noreply.github.com>
(cherry picked from commit ac2b333)
adriansr added a commit that referenced this issue Mar 19, 2020
…essage 734001. (#17128)

Fixes #16212

(cherry picked from commit ac2b333)

Co-authored-by: MarcusCaepio <7324088+MarcusCaepio@users.noreply.github.com>