Data getting lost with sniff() and a callback function? #1789
Comments
Scapy is not able to cope with a high volume of network traffic and, unfortunately, never will be. We try to optimize the core now (#642) and then (#1735), but the improvements are limited overall. You can try to make things slightly faster by using a BPF filter to limit the number of packets Scapy has to process, and by using the pcap module.
Thanks. I was thinking I would have to do that. I am not familiar, however, with how the sniff() function works across multiple concurrent threads. For instance, if I were to construct, say, four different BPF filters that match mutually exclusive sets of packets, could I have four different threads each running sniff() with one of these filters? Also, what is the performance advantage (and are there any programmatic issues?) of the pcap module versus what Scapy currently uses? I was under the impression that the filter used pcap to begin with; I guess I am not very familiar with how it all works. Thanks
Don't call sniff() in threads. Instead, read raw packets, dispatch them to threads, and dissect them there with Scapy.
The pcap module reads packets from C, which might be a little faster than reading them from Python.
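A minimal sketch of that "read raw, dissect in threads" pattern on Linux, assuming Python 3 (the interface name, thread count, and queue size are placeholders, and packetCallback stands in for the poster's existing handler):

```python
import socket
import threading
from queue import Queue  # Python 2, as in this thread, would use Queue.Queue

from scapy.all import Ether

ETH_P_ALL = 0x0003                  # receive frames of every protocol
raw_packets = Queue(maxsize=100000)

def reader(iface):
    # An AF_PACKET/SOCK_RAW socket hands us raw Ethernet frames; the reader
    # thread does nothing but copy bytes into the queue, so it can keep up
    # with far more traffic than a thread that also dissects each packet.
    s = socket.socket(socket.AF_PACKET, socket.SOCK_RAW, socket.htons(ETH_P_ALL))
    s.bind((iface, 0))
    while True:
        raw_packets.put(s.recv(65535))

def worker():
    while True:
        raw = raw_packets.get()
        pkt = Ether(raw)            # Scapy dissection happens only here
        packetCallback(pkt)         # the existing per-packet callback
        raw_packets.task_done()

threading.Thread(target=reader, args=("enp2s1",), daemon=True).start()
for _ in range(4):
    threading.Thread(target=worker, daemon=True).start()
```

Note that CPython's GIL still serializes the dissection work, so the threads mostly buy you buffering rather than parallelism; swapping the workers for multiprocessing would be the next step if dissection itself is the bottleneck.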
Okay, so: read the raw packets (can I apply a filter at this stage to reduce the amount of stuff to dissect? I just need to see TCP traffic from a certain source IP to a certain destination port), put them into a thread-safe queue (e.g. the Queue class in Python 2), and pop them from the queue in a pool of threads dedicated to that? Thanks
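The filter can indeed be applied before the bytes ever reach Python. A hedged sketch, assuming the "pcap module" mentioned above is pypcap and reusing the raw_packets queue from the previous sketch (the host, the port, and the exact pcap.pcap() keyword set are assumptions; constructor arguments vary between pypcap releases):

```python
import pcap  # third-party pypcap

def filtered_reader(iface):
    # libpcap compiles the BPF filter and applies it in C/kernel space,
    # so Python only ever sees packets that already match.
    pc = pcap.pcap(name=iface, promisc=True)
    pc.setfilter("tcp and src host 10.0.0.5 and dst port 9000")
    for ts, raw in pc:              # pypcap yields (timestamp, packet) pairs
        raw_packets.put(bytes(raw))
```

The workers from the previous sketch stay unchanged; only the reader swaps from a raw socket to a filtered pcap handle.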
This was originally a response to another issue that was closed because of "no response"; the issue certainly seems to exist, as I'm having the same problem routinely with lots of traffic (and thus lots of callbacks).
Re: I'm having the same issue with payload loss. Details:
I am capturing all TCP packets with a filter ("ip and tcp") and, with the sniff() function, passing the matched packets to a callback function. This callback converts the packet payload to a string, runs a printable check against each character, and, if the character is printable, appends it to a string buffer. As my protocol is all ASCII, this works fine; however, I am losing some packets. At some point, data gets lost and the next step in the protocol appears appended to the end of a prior, incomplete protocol message, missing its newline and command separator. The client sending the data is not omitting this data.
As for packet fragmentation: I have verified that the MF flag is NOT set on any of these packets; in fact, DF is set on all of them.
When I run tcpdump I see all the data just fine, but in the callback via sniff(), with a simple filter ("ip and tcp") and my simple printable-character check, it doesn't aggregate all of it all of the time; sometimes it works just fine, other times it seems to miss entire packets.
I have a very, VERY high volume of network traffic and a single thread calling sniff() and the callback function. Is there any known problem with a traffic load like that and, if so, is there a way to alleviate it and get everything processed? Is there a chance anything could be dropped?
I'm calling sniff() like this FWIW:
sniff(iface="enp2s1", prn=packetCallback, filter="ip and tcp", store=0)
Thanks in advance!
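For what it's worth, since the poster only needs TCP traffic from one source IP to one destination port (per the comment earlier in the thread), the cheapest mitigation is tightening the BPF filter in that same call, so the kernel drops everything else before the callback ever runs; a sketch with placeholder host and port values:

```python
sniff(iface="enp2s1", prn=packetCallback,
      filter="tcp and src host 10.0.0.5 and dst port 9000",
      store=0)
```

This alone won't fix drops under extreme load (sniff() still dissects every matching packet in one thread), but it shrinks the volume that thread has to handle.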