mirror of https://github.com/OISF/suricata
Until now, TCP session reuse was handled in the TCP stream engine. If the state was TCP_CLOSED, a new SYN packet was received and a few other conditions were met, the flow was 'reset' and reused for the 'new' TCP session. There are a number of problems with this approach:

- it breaks the normal flow lifecycle w.r.t. timeout, detection and logging
- new TCP sessions could come in on different threads due to mismatches in timeouts between Suricata and flow-balancing hardware/NICs/drivers
- the cleanup code was often causing problems
- it complicated locking because of the possible thread mismatch

This patch implements a different solution, in which a new TCP session also gets a new flow. To do this, two main things needed to be done:

1. the flow engine needed to be aware of when the TCP reuse case was happening
2. the flow engine needed to be able to 'skip' the old flow once it was replaced by a new one

To handle (1), a new function, TcpSessionPacketSsnReuse(), is introduced to check for the TCP reuse conditions. It is called from FlowCompare() for TCP packets / TCP flows that are candidates for reuse. FlowCompare() returns FALSE for the 'old' flow in the case of TCP reuse. This in turn leads to the flow engine not finding a flow for the TCP SYN packet, resulting in the creation of a new flow.

To handle (2), FlowCompare() flags the 'old' flow. This flag causes future FlowCompare() calls to always return FALSE on it; in other words, the flow can no longer be found. It can only be accessed by:

1. existing packets with a reference to it
2. flow timeout handling, as this logic gets the flows by walking the hash directly
3. flow timeout pseudo packets, as they are set up by (2)

The old flow will time out normally, as governed by the "tcp closed" flow timeout setting. At timeout, the normal detection, logging and cleanup code will process it.

Flagging a flow to make it 'unfindable' in the flow hash is a bit of a hack. The reason for preferring this approach over, for example, putting the old flow into a forced timeout queue where it could be timed out, is that such a queue could easily become a contention point. The TCP session reuse case can easily be triggered by an attacker; with multiple packet handlers, this could lead to contention on such a flow timeout queue.
Committed 11 years ago.
| Name | Last change |
|---|---|
| benches | 17 years ago |
| contrib | 12 years ago |
| doc | 13 years ago |
| lua | 11 years ago |
| m4 | 16 years ago |
| qa | 11 years ago |
| rules | 11 years ago |
| scripts | 11 years ago |
| src | 11 years ago |
| .gitignore | 13 years ago |
| .travis.yml | 12 years ago |
| COPYING | 17 years ago |
| ChangeLog | 11 years ago |
| LICENSE | 16 years ago |
| Makefile.am | 11 years ago |
| Makefile.cvs | 17 years ago |
| acsite.m4 | 17 years ago |
| autogen.sh | 13 years ago |
| classification.config | 16 years ago |
| config.rpath | 13 years ago |
| configure.ac | 11 years ago |
| doxygen.cfg | 12 years ago |
| reference.config | 11 years ago |
| suricata.yaml.in | 11 years ago |
| threshold.config | 13 years ago |