This adds new functions that will be called through the unix
socket to update and show the memcap value.
The memcap value needs to be handled in a thread safe way, so it
is declared as an atomic variable.
The FlowGetMemuse() function is made public because the memuse
value will be shown through the unix socket.
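A minimal sketch of the idea in plain C11 atomics (Suricata uses its
own atomic wrappers; the names below are illustrative, not the actual
functions):

    #include <stdatomic.h>
    #include <stdint.h>

    /* memcap shared between packet threads and the unix-socket command
     * handler, hence atomic; set from the config at startup */
    static atomic_uint_fast64_t flow_memcap_sketch;

    /* unix-socket 'set memcap' command path */
    static void FlowSetMemcapSketch(uint64_t size)
    {
        atomic_store(&flow_memcap_sketch, size);
    }

    /* unix-socket 'show memcap' command path */
    static uint64_t FlowGetMemcapSketch(void)
    {
        return atomic_load(&flow_memcap_sketch);
    }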
Set flags by default:
-Wmissing-prototypes
-Wmissing-declarations
-Wstrict-prototypes
-Wwrite-strings
-Wcast-align
-Wbad-function-cast
-Wformat-security
-Wno-format-nonliteral
-Wmissing-format-attribute
-funsigned-char
Fix minor compiler warnings for these new flags on gcc and clang.
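As an illustration of the kind of fix involved (not taken from the
actual diff): with -Wwrite-strings a string literal gets a const type,
so a non-const pointer to it now warns.

    /* warns with -Wwrite-strings: the literal is 'const char[]' */
    char *bad = "timeout";
    /* clean: */
    const char *good = "timeout";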
Make the stream engine use the streaming buffer API for its data
storage. This means that the data is stored in a single reassembled
sliding buffer. The subtleties of the reassembly, e.g. overlap
handling, are taken care of at segment insertion.
The TcpSegments now have a StreamingBufferSegment that contains an
offset and a length. Using this the segment data can be retrieved
per segment.
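A simplified sketch of that relation (field and function names are
illustrative, not the exact API):

    #include <stdint.h>

    /* per-segment reference into the single sliding buffer */
    typedef struct SBSegmentSketch_ {
        uint64_t stream_offset;   /* absolute offset in the stream */
        uint32_t segment_len;     /* length of this segment's data */
    } SBSegmentSketch;

    /* get a segment's data back from the reassembled buffer, where
     * 'buf' holds the data starting at stream offset 'buf_offset' */
    static const uint8_t *SBSegmentDataSketch(const uint8_t *buf,
            uint64_t buf_offset, const SBSegmentSketch *seg, uint32_t *len)
    {
        *len = seg->segment_len;
        return buf + (seg->stream_offset - buf_offset);
    }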
Redo segment insertion. The insertion code is moved to its own file
and is simplified a lot.
A major difference with the previous implementation is that the segment
list now contains overlapping segments if the traffic is that way.
Previously there could be more and smaller segments in the memory list
than what was seen on the wire.
Because the in-memory segments now match the segments seen on the
wire, detection of overlaps with different data (potential MOTS
attacks) is much more accurate.
Raw and App reassembly progress is no longer tracked per segment using
flags, but there is now a progress tracker in the TcpStream for each.
When pruning we make sure we don't slide beyond in-use segments. When
both app-layer and raw inspection are beyond the start of the segment
list, the segments might not be freed even though the data in the
streaming buffer is already gone. This is caused by the 'in-use' status
that the segments can implicitly have. This patch accounts for that
when calculating the 'left_edge' of the streaming window.
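A sketch of that pruning rule (names and types are illustrative): the
left edge of the sliding window is the lower of the two progress
trackers, but it may not move past the oldest segment still in use.

    #include <stdint.h>

    typedef struct TcpStreamSketch_ {
        uint64_t app_progress;    /* app-layer inspection progress */
        uint64_t raw_progress;    /* raw/content inspection progress */
        uint64_t oldest_in_use;   /* stream offset of the oldest in-use
                                     segment, UINT64_MAX if none */
    } TcpStreamSketch;

    static uint64_t StreamLeftEdgeSketch(const TcpStreamSketch *s)
    {
        uint64_t left_edge = (s->app_progress < s->raw_progress) ?
                s->app_progress : s->raw_progress;
        /* don't slide beyond segments that are implicitly in use */
        if (s->oldest_in_use < left_edge)
            left_edge = s->oldest_in_use;
        return left_edge;
    }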
Raw reassembly still sets up 'StreamMsg' objects for content
inspection. They are set up based on either the full StreamingBuffer,
or based on the StreamingBufferBlocks if there are gaps in the data.
Reworked 'stream needs work' logic. When a flow times out the flow
engine checks whether a TCP flow still needs work. The
StreamNeedsReassembly function is used to test if a stream still has
unreassembled segments or uninspected stream chunks.
This patch updates the function to consider the app and/or raw
progress. It also cleans the function up and adds more meaningful
debug messages. Finally it makes it non-inline.
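The check then becomes roughly the following (a sketch under assumed
names, not the actual function):

    #include <stdint.h>

    /* a stream still needs work if either inspection tracker lags
     * behind the data that has been buffered */
    static int StreamNeedsWorkSketch(uint64_t app_progress,
            uint64_t raw_progress, uint64_t buffered_right_edge)
    {
        if (app_progress < buffered_right_edge)
            return 1;   /* unprocessed app-layer data */
        if (raw_progress < buffered_right_edge)
            return 1;   /* uninspected raw stream data */
        return 0;
    }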
Unittests have been overhauled, and partly moved into their own files.
Remove lots of dead code.
For flow bypass, the flow timeout handling is triggered which may
create up to 3 pseudo packets that hold a reference to the flow.
However, in the bypass case the code signaled to the timeout logic
that the flow can be freed unconditionally by returning 1. This
led to packets going through the engine with a pointer to a now
freed/recycled flow.
This patch fixes the logic by removing the special bypass case,
which seemed redundant anyway. Effectively reverts 68d9677.
Bug #1928.
This adds a new timeout value for the local bypassed state. For user
simplicity it is called just `bypassed`. The patch also adds an
emergency value so we can clean up bypassed flows a bit faster.
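In suricata.yaml this would look roughly like the following sketch
(values shown are illustrative defaults; exact option names may
differ):

    flow-timeouts:
      default:
        new: 30
        established: 300
        closed: 0
        bypassed: 100
        emergency-new: 10
        emergency-established: 100
        emergency-closed: 0
        emergency-bypassed: 50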
If we see packets for a capture bypassed flow after some time, it
means that the capture method is not handling the bypass correctly,
so it is better to switch to the local bypass method.
As a capture method like nfq will cut both sides of the flow
instantly, we will not see the ACK for most of the data that has
already been received. So it is better to force reassembly to make
sure the entry gets its timeout.
This patch adds two new states to the flow:
* local bypass: suricata-only bypass; packets belonging to a flow in
this state will be discarded quickly
* capture bypass: the capture method is handling the bypass and
suricata will discard packets that are currently queued
A bypassed state is set on the flow when a bypass decision is taken.
In the case of capture bypass this allows the flow entry to be
removed from the flow table faster, instead of waiting for the
"established" timeout.
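For illustration, the resulting flow state set could look like this
(a sketch; the actual enum names may differ). A packet for a flow in
the local bypassed state can be dropped in the packet path without
further inspection.

    /* sketch of the per-flow state including the two bypass states */
    enum FlowStateSketch {
        FLOW_STATE_NEW = 0,
        FLOW_STATE_ESTABLISHED,
        FLOW_STATE_CLOSED,
        FLOW_STATE_LOCAL_BYPASSED,    /* suricata-only bypass */
        FLOW_STATE_CAPTURE_BYPASSED,  /* capture handles the bypass */
    };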
Until now the flow manager would walk the entire flow hash table on an
interval. It would thus touch all flows, leading to a lot of memory
and cache pressure. In scenarios where the number of tracked flows runs
into the hundreds of thousands, and the memory used can run into many
hundreds of megabytes or even gigabytes, this would lead to serious
performance degradation.
This patch introduces a new approach. A timestamp per flow bucket
(hash row) is maintained by the flow manager. It holds the timestamp
of the earliest possible timeout of a flow in the list. The hash walk
skips rows with timestamps beyond the current time.
As the timestamp depends on the flows in the hash row's list, and on
the 'state' of each flow in the list, any addition of a flow or
changing of a flow's state invalidates the timestamp. The flow manager
then has to walk the list again to set a new timestamp.
A utility function FlowUpdateState is introduced to change Flow states,
taking care of the bucket timestamp invalidation while at it.
Empty flow buckets use a special value so that we don't have to take
the flow bucket lock to find out the bucket is empty.
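A sketch of the per-row bookkeeping, using plain C11 atomics and
illustrative names (Suricata has its own atomic wrappers):

    #include <stdatomic.h>
    #include <stdint.h>

    #define ROW_TS_EMPTY   INT32_MAX  /* row has no flows: skip without
                                         taking the row lock */
    #define ROW_TS_INVALID (-1)       /* a flow was added or changed
                                         state: must re-check the row */

    typedef struct FlowBucketSketch_ {
        /* earliest possible timeout (seconds) of any flow in this row */
        atomic_int_least32_t next_ts;
        /* ... row lock and flow list omitted ... */
    } FlowBucketSketch;

    /* flow manager: can this row be skipped in the current pass? */
    static int FlowBucketCanSkipSketch(FlowBucketSketch *fb, int32_t now)
    {
        int32_t next_ts = atomic_load(&fb->next_ts);
        if (next_ts == ROW_TS_EMPTY)
            return 1;
        return (next_ts != ROW_TS_INVALID && next_ts > now);
    }

    /* called when a flow is added to the row or changes state, e.g.
     * from a FlowUpdateState-style helper */
    static void FlowBucketInvalidateSketch(FlowBucketSketch *fb)
    {
        atomic_store(&fb->next_ts, ROW_TS_INVALID);
    }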
This patch also adds more performance counters:
flow_mgr.flows_checked | Total | 929
flow_mgr.flows_notimeout | Total | 391
flow_mgr.flows_timeout | Total | 538
flow_mgr.flows_removed | Total | 277
flow_mgr.flows_timeout_inuse | Total | 261
flow_mgr.rows_checked | Total | 1000000
flow_mgr.rows_skipped | Total | 998835
flow_mgr.rows_empty | Total | 290
flow_mgr.rows_maxlen | Total | 2
flow_mgr.flows_checked: number of flows checked for timeout in the
last pass
flow_mgr.flows_notimeout: number of flows out of flow_mgr.flows_checked
that didn't time out
flow_mgr.flows_timeout: number of flows out of flow_mgr.flows_checked
that did reach the timeout
flow_mgr.flows_removed: number of flows out of flow_mgr.flows_timeout
that were really removed
flow_mgr.flows_timeout_inuse: number of flows out of flow_mgr.flows_timeout
that were still in use or needed work
flow_mgr.rows_checked: hash table rows checked
flow_mgr.rows_skipped: hash table rows skipped because none of the
flows would time out anyway
The counters below only relate to rows that were not skipped.
flow_mgr.rows_empty: empty hash rows
flow_mgr.rows_maxlen: max number of flows per hash row. Best to keep low,
so increase hash-size if needed.
flow_mgr.rows_busy: row skipped because it was locked by another thread
Instead of a single big FlowProto array containing timeouts separately
for normal and emergency cases, plus the 'Free' pointer for the
protoctx, split up these arrays.
An array of FlowProtoTimeout for just the normal timeouts and a
mirror of that for the emergency timeouts are used through a pointer
that is set at init and swapped by the emergency logic. It's swapped
back when the emergency is over.
The free funcs are moved to their own array.
This simplifies the timeout lookup code and shrinks the data that is
commonly used.
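The layout then looks roughly like this (a sketch with illustrative
names and fields):

    #include <stdint.h>

    enum { FLOW_PROTO_MAX_SKETCH = 4 };  /* tcp, udp, icmp, default */

    typedef struct FlowProtoTimeoutSketch_ {
        uint32_t new_timeout;
        uint32_t est_timeout;
        uint32_t closed_timeout;
    } FlowProtoTimeoutSketch;

    typedef struct FlowProtoFreeFuncSketch_ {
        void (*Freefunc)(void *);
    } FlowProtoFreeFuncSketch;

    /* normal and emergency timeouts in separate, identical arrays */
    static FlowProtoTimeoutSketch flow_timeouts_normal[FLOW_PROTO_MAX_SKETCH];
    static FlowProtoTimeoutSketch flow_timeouts_emergency[FLOW_PROTO_MAX_SKETCH];
    /* the protoctx free funcs get their own array */
    static FlowProtoFreeFuncSketch flow_freefuncs[FLOW_PROTO_MAX_SKETCH];

    /* the timeout lookup code only uses this pointer; the emergency
     * logic swaps it between the two arrays and back */
    static FlowProtoTimeoutSketch *flow_timeouts = flow_timeouts_normal;

    static void FlowEnterEmergencySketch(void)
    {
        flow_timeouts = flow_timeouts_emergency;
    }

    static void FlowLeaveEmergencySketch(void)
    {
        flow_timeouts = flow_timeouts_normal;
    }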
[src/detect-engine-loader.c:272]: (error) Buffer is accessed out of bounds.
[src/flow-manager.c:742]: (error) Buffer is accessed out of bounds.
[src/flow-manager.c:906]: (error) Buffer is accessed out of bounds.
Update FlowManager/Recycler to use the global thread name.
Also add '#' to the thread number.
Update af-packet to use global threadnames.
Update pcap to use global threadnames.
Update pfring to use global threadnames.
Update erf-dag to use global threadnames.
Update nflog to use global threadnames.
Update netmap to use global threadnames.
Update napatech to use global threadnames.
- Changed FlowManagerThread to FM-
- Changed FlowRecyclerThread to FR-
- Changed use of strcasecmp to strncasecmp. This was used in the
killing and disabling of FM/FR Threads.
Cppcheck 1.72 gives a warning on the following code pattern:
char blah[32] = "";
snprintf(blah, sizeof(blah), "something");
The warning is:
(error) Buffer is accessed out of bounds.
While this appears to be a false positive, in most cases the
initialization to "" was unnecessary as the snprintf statement
immediately follows the variable declaration.
The flow timeout mechanism called both from the flow manager at run time
and at shutdown creates pseudo packets. For this it has its own packet
pool, which can be depleted if the timeout logic is faster than the packet
processing threads. In this case the flow timeout would enter a wait loop.
The problem however, is that this wait loop would happen while keeping a
flow locked. This could lead to a race condition when the packet thread(s)
are waiting for the lock that the flow manager has.
This patch introduces a new packet pool call 'PacketPoolWaitForN', meant
to make sure that the thread's packet pool has at least N available
packets. The flow timeout paths use this to make sure enough packets are
available *before* grabbing the flow lock. If there aren't enough packets
available yet, the wait happens before the lock as well.
This still means the wait can happen while the flow hash row is locked, so
we do make sure some more packets are available when entering that. But
perhaps in the future we need a more precise logic there as well.
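The usage pattern is roughly as follows (a self-contained sketch with
stand-in types; PacketPoolWaitForNSketch models the new call and its
exact signature is an assumption):

    #include <pthread.h>

    /* stand-in for the real Flow structure */
    typedef struct FlowSketch_ { pthread_mutex_t m; } FlowSketch;

    /* hypothetical helper modelling PacketPoolWaitForN(n): block until
     * the thread's packet pool holds at least n available packets */
    void PacketPoolWaitForNSketch(int n);

    static void FlowTimeoutHandleSketch(FlowSketch *f)
    {
        /* a flow timeout can need up to 3 pseudo packets, so reserve
         * them before taking the flow lock: any waiting happens here */
        PacketPoolWaitForNSketch(3);

        pthread_mutex_lock(&f->m);
        /* ... set up and enqueue the pseudo packets ... */
        pthread_mutex_unlock(&f->m);
    }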
Don't kill flow manager and recyclers before the rest of the threads. The
packet threads may still have packets from their pools. As the flow threads
would destroy their pools the packets would be lost.
This patch doesn't kill the threads, it just pulls them out of their run
loop and into a wait loop. The packet pools won't be cleared until all
threads are killed.
Wait for flow management threads to close before moving on to the
next steps in the shutdown process.
Don't destroy flow force reassembly packet pool too early. Worker
threads may still want to return packets to it.
This replaces the tcp.reused_ssn counter. The flow engine now
enforces the TCP flow reuse logic.
The counter is incremented only when the flow is timed out, so
after the "tcp closed" timeout has expired for the flow.
Until this point, the flow manager would check for timed out flows
by walking the flow hash, locking first the hash row and then each
individual flow to get its state and timestamp. To not be too
intrusive trylocks were used so that a busy flow wouldn't cause the
flow manager to wait for a long time while holding the hash row lock.
Building on the changes in handling of the flow state and lastts
fields, this patch changes the flow manager's behavior.
It can now get a flow's state atomically and the lastts can be safely
read while holding just the flow hash row lock. This allows the flow
manager to do the basic timeout check much more cheaply:
1. it doesn't have to wait to get a lock
2. it doesn't interrupt the packet path
As a consequence the trylock is now also gone. A flow that returns
'true' on timeout is pretty much certainly not going to be busy so
we can safely lock it unconditionally. This also means the flow
manager now walks the entire row unconditionally and is guaranteed
to inspect each flow in the row.
To make sure the functions called before the flow lock don't
accidentally change the flow (which would require a lock), the args
to these functions are changed to const pointers.
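Conceptually the per-flow check becomes something like the sketch
below (illustrative names; only the hash row lock is held here, not
the flow lock):

    #include <stdatomic.h>
    #include <stdint.h>

    /* stand-in for the parts of Flow that the check needs */
    typedef struct FlowTOSketch_ {
        atomic_int flow_state;   /* readable without the flow lock */
        int32_t lastts_sec;      /* safe to read under the row lock */
    } FlowTOSketch;

    /* hypothetical helper: timeout (seconds) for a given state */
    static int32_t TimeoutForStateSketch(int state)
    {
        return state == 0 /* new */ ? 30 : 300;
    }

    /* const pointer: runs before the flow lock is taken and must not
     * modify the flow */
    static int FlowIsTimedOutSketch(const FlowTOSketch *f, int state,
            int32_t now)
    {
        return (f->lastts_sec + TimeoutForStateSketch(state)) < now;
    }

    /* row walk: read the state lock-free, then check; only if the flow
     * timed out is the flow lock taken, unconditionally */
    static int FlowCheckOneSketch(FlowTOSketch *f, int32_t now)
    {
        int state = atomic_load(&f->flow_state);
        return FlowIsTimedOutSketch(f, state, now);
    }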
A flow has 3 states: NEW, ESTABLISHED and CLOSED.
For all protocols except TCP, a flow is in state NEW as long as just one
side of the conversation has been seen. When both sides have been
observed the state is moved to ESTABLISHED.
TCP has a different logic, controlled by the stream engine. Here the TCP
state is leading.
Until now, when parts of the engine needed to know the flow state, it
would invoke a per protocol callback 'GetProtoState'. For TCP this would
return the state based on the TcpSession.
This patch changes this logic. It introduces an atomic variable in the
flow 'flow_state'. It defaults to NEW and is set to ESTABLISHED for non-
TCP protocols when we've seen both sides of the conversation.
For TCP, the state is updated from the TCP engine directly.
The goal is to allow for access to the state without holding the Flow's
main mutex lock. This will later allow the Flow Manager(s) to evaluate
the Flow without interrupting it.
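A sketch of the non-TCP transition (illustrative names; for TCP the
stream engine drives the state instead):

    #include <stdatomic.h>

    enum { FSK_NEW = 0, FSK_ESTABLISHED, FSK_CLOSED };

    typedef struct FlowStateSketch_ {
        atomic_int flow_state;    /* readable without the flow mutex */
        unsigned int todstpktcnt; /* packets seen toward the server */
        unsigned int tosrcpktcnt; /* packets seen toward the client */
    } FlowStateSketch;

    /* non-TCP: once both directions have been seen, the flow is
     * considered established */
    static void FlowUpdateStateNonTcpSketch(FlowStateSketch *f)
    {
        if (atomic_load(&f->flow_state) == FSK_NEW &&
                f->todstpktcnt > 0 && f->tosrcpktcnt > 0) {
            atomic_store(&f->flow_state, FSK_ESTABLISHED);
        }
    }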
Use new management API to run the flow recycler.
Make number of threads configurable:
flow:
  memcap: 64mb
  hash-size: 65536
  prealloc: 10000
  emergency-recovery: 30
  managers: 2
  recyclers: 2
This sets up 2 flow managers and 2 flow recyclers.