This patch adds a parent_id field to the Flow structure that
contains the flow ID of the parent connection, for protocols that
open dynamic parallel connections, such as FTP.
This adds new functions that will be called
through the unix socket and permit updating
and showing the memcap value.
The memcap value needs to be handled in a
thread-safe way, so for this reason it is
declared as an atomic variable.
The FlowGetMemuse() function is made public
because the memuse value will be shown
through the unix socket.
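As a rough illustration, a memcap handled this way could look like the sketch below, using C11 atomics. The names flow_config_memcap_sketch, FlowGetMemcapSketch and FlowSetMemcapSketch are invented for this example; the real code uses Suricata's own atomic wrappers.

    #include <stdatomic.h>
    #include <stdbool.h>
    #include <stdint.h>

    /* illustrative globals; the real code uses Suricata's own atomic wrappers */
    static _Atomic uint64_t flow_config_memcap_sketch = 64ULL * 1024 * 1024;
    static _Atomic uint64_t flow_memuse_sketch = 0;

    /* read the current memcap, e.g. for a 'show memcap' unix-socket command */
    static uint64_t FlowGetMemcapSketch(void)
    {
        return atomic_load(&flow_config_memcap_sketch);
    }

    /* update the memcap from a unix-socket command; refuse values below the
     * memory already in use so existing allocations stay within the cap */
    static bool FlowSetMemcapSketch(uint64_t new_memcap)
    {
        if (new_memcap < atomic_load(&flow_memuse_sketch))
            return false;
        atomic_store(&flow_config_memcap_sketch, new_memcap);
        return true;
    }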
The netflow entry collects the minimum and maximum
time to live seen during the life of the incoming flow.
This adds those fields to the netflow event.
Signed-off-by: Eric Leblond <eric@regit.org>
Add API calls to upgrade to TLS or to request a protocol change
without a specific protocol expectation.
If the HTTP CONNECT session includes a port in the URL, use that to
look up the probing parser during protocol detection. This solves a
missed detection of an SSLv2 session that upgrades to TLSv1. SSLv2
relies on the probing parser, which is limited to certain ports.
In case of STARTTLS in SMTP and FTP, the port is hardcoded to 443.
A new event APPLAYER_UNEXPECTED_PROTOCOL is set if there was a
mismatch.
Support changing the application-level protocol for a flow. This is
needed by STARTTLS and HTTP CONNECT to switch from the original
alproto to tls.
This commit introduces a flag, 'FLOW_CHANGE_PROTO', which triggers
a new protocol detection on the next packet for a flow.
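A minimal sketch of what requesting such a change could look like; the struct, field and flag names below are illustrative only, not the actual flow.h definitions.

    #include <stdint.h>

    #define FLOW_CHANGE_PROTO_SKETCH (1 << 0)  /* illustrative flag value */

    struct FlowSketch {
        uint32_t flags;
        uint16_t alproto;          /* current app-layer protocol */
        uint16_t alproto_expect;   /* protocol expected after the change, 0 = none */
    };

    /* called by e.g. the SMTP/FTP STARTTLS or HTTP CONNECT handling:
     * record the expectation (if any) and flag the flow so that protocol
     * detection runs again on the next packet */
    static void FlowRequestProtoChangeSketch(struct FlowSketch *f, uint16_t expect)
    {
        f->alproto_expect = expect;
        f->flags |= FLOW_CHANGE_PROTO_SKETCH;
    }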
Add negated matches to match list instead of amatch.
Allow matching on 'failed'.
Introduce per-packet flags for proto detection. The flags are used to
inspect only once per direction. Flag the packet on PD failure too.
Flow::data_al_so_far was used for tracking data already
parsed when the protocol for the current direction wasn't known yet. As
this behaviour has changed, the tracking can be removed.
This adds a new timeout value for the local bypassed state. For user
simplification it is simply called `bypassed`. The patch also adds
an emergency value so we can clean bypassed flows a bit faster.
This patch adds two new states to the flow:
* local bypass: Suricata-only bypass; packets belonging to
  a flow in this state will be discarded quickly
* capture bypass: the capture method handles the bypass and Suricata
  will discard packets that are currently queued
A bypassed state is set on the flow when a bypass
decision is taken. In the case of capture bypass this allows
the flow entry to be removed from the flow table faster, instead of
waiting for the "established" timeout.
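A sketch of how the two states could be represented and queried on the packet path; the enum and function names are invented for illustration.

    #include <stdbool.h>

    enum FlowBypassStateSketch {
        FLOW_STATE_SKETCH_DEFAULT = 0,
        FLOW_STATE_SKETCH_LOCAL_BYPASS,    /* Suricata-only bypass: discard packets quickly */
        FLOW_STATE_SKETCH_CAPTURE_BYPASS,  /* capture method bypasses; drain queued packets */
    };

    struct BypassFlowSketch {
        enum FlowBypassStateSketch bypass_state;
    };

    /* return true if a packet on this flow can be discarded without inspection */
    static bool FlowPacketIsBypassedSketch(const struct BypassFlowSketch *f)
    {
        return f->bypass_state == FLOW_STATE_SKETCH_LOCAL_BYPASS ||
               f->bypass_state == FLOW_STATE_SKETCH_CAPTURE_BYPASS;
    }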
Until now the flow manager would walk the entire flow hash table on an
interval. It would thus touch all flows, leading to a lot of memory
and cache pressure. In scenarios where the number of tracked flows runs
into the hundreds of thousands, and the memory used can run into many
hundreds of megabytes or even gigabytes, this would lead to serious
performance degradation.
This patch introduces a new approach. A timestamp per flow bucket
(hash row) is maintained by the flow manager. It holds the timestamp
of the earliest possible timeout of a flow in the list. The hash walk
skips rows with timestamps beyond the current time.
As the timestamp depends on the flows in the hash row's list, and on
the 'state' of each flow in the list, any addition of a flow or
changing of a flow's state invalidates the timestamp. The flow manager
then has to walk the list again to set a new timestamp.
A utility function FlowUpdateState is introduced to change Flow states,
taking care of the bucket timestamp invalidation while at it.
Empty flow buckets use a special value so that we don't have to take
the flow bucket lock to find out the bucket is empty.
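The row-skipping could look roughly like the sketch below; the field name next_ts, the empty-row sentinel and the helper names are assumptions made for this example.

    #include <stdbool.h>
    #include <stdint.h>

    #define BUCKET_EMPTY_TS_SKETCH INT64_MAX  /* sentinel: no flow in this row */

    struct FlowBucketSketch {
        int64_t next_ts;   /* earliest possible timeout of any flow in this row */
    };

    /* flow manager: decide whether this row needs to be walked at all */
    static bool FlowBucketNeedsCheckSketch(const struct FlowBucketSketch *fb, int64_t now)
    {
        if (fb->next_ts == BUCKET_EMPTY_TS_SKETCH)  /* empty row, no lock needed */
            return false;
        return fb->next_ts <= now;                  /* something may have timed out */
    }

    /* packet path: a flow insertion or state change invalidates the cached
     * timestamp, forcing the manager to recompute it on its next pass */
    static void FlowBucketInvalidateSketch(struct FlowBucketSketch *fb)
    {
        fb->next_ts = 0;
    }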
This patch also adds more performance counters:
flow_mgr.flows_checked | Total | 929
flow_mgr.flows_notimeout | Total | 391
flow_mgr.flows_timeout | Total | 538
flow_mgr.flows_removed | Total | 277
flow_mgr.flows_timeout_inuse | Total | 261
flow_mgr.rows_checked | Total | 1000000
flow_mgr.rows_skipped | Total | 998835
flow_mgr.rows_empty | Total | 290
flow_mgr.rows_maxlen | Total | 2
flow_mgr.flows_checked: number of flows checked for timeout in the
last pass
flow_mgr.flows_notimeout: number of flows out of flow_mgr.flows_checked
that didn't time out
flow_mgr.flows_timeout: number of flows out of flow_mgr.flows_checked
that did time out
flow_mgr.flows_removed: number of flows out of flow_mgr.flows_timeout
that were really removed
flow_mgr.flows_timeout_inuse: number of flows out of flow_mgr.flows_timeout
that were still in use or needed work
flow_mgr.rows_checked: hash table rows checked
flow_mgr.rows_skipped: hash table rows skipped because none of the flows
in them would time out anyway
The counters below only relate to rows that were not skipped.
flow_mgr.rows_empty: empty hash rows
flow_mgr.rows_maxlen: max number of flows per hash row. Best to keep low,
so increase hash-size if needed.
flow_mgr.rows_busy: rows skipped because they were locked by another thread
Instead of a single big FlowProto array containing timeouts separately
for normal and emergency cases, plus the 'Free' pointer for the
protoctx, split up these arrays.
An array of FlowProtoTimeout for just the normal timeouts and a
mirror of that for emergency timeouts are used through a pointer that
is set at init and swapped by the emergency logic. It's swapped
back when the emergency is over.
The free funcs are moved to their own array.
This simplifies the timeout lookup code and shrinks the data that is
commonly used.
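A sketch of the resulting layout; the struct fields and array size are simplified for illustration.

    #include <stdint.h>

    #define FLOW_PROTO_MAX_SKETCH 4   /* e.g. TCP, UDP, ICMP, default */

    struct FlowProtoTimeoutSketch {
        uint32_t new_timeout;
        uint32_t est_timeout;
        uint32_t closed_timeout;
    };

    static struct FlowProtoTimeoutSketch flow_timeouts_normal[FLOW_PROTO_MAX_SKETCH];
    static struct FlowProtoTimeoutSketch flow_timeouts_emerg[FLOW_PROTO_MAX_SKETCH];

    /* the timeout lookup only ever dereferences this pointer, so entering and
     * leaving emergency mode is a single pointer swap */
    static struct FlowProtoTimeoutSketch *flow_timeouts = flow_timeouts_normal;

    static void FlowSwitchToEmergencySketch(void)   { flow_timeouts = flow_timeouts_emerg; }
    static void FlowSwitchFromEmergencySketch(void) { flow_timeouts = flow_timeouts_normal; }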
The flow id itself is not stored in the flow, but generated based on
properties that do not change during the lifetime of the flow.
As it's meant for use with the json output, it is limited to a signed
64 bit integer (int64_t) because that is the type json_integer_t uses.
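Roughly, the id could be derived as in the sketch below. The exact combination of fields is an assumption for illustration; the point is that only creation-time properties are used and the result is masked to a non-negative int64_t.

    #include <stdint.h>

    struct FlowIdSketch {
        uint32_t flow_hash;   /* hash over the 5-tuple, computed at flow creation */
        uint32_t start_sec;   /* seconds of the first packet's timestamp */
        uint32_t start_usec;  /* microseconds of the first packet's timestamp */
    };

    /* build an id from properties that never change during the flow's
     * lifetime and keep it positive so it fits json_integer_t (int64_t) */
    static int64_t FlowGetIdSketch(const struct FlowIdSketch *f)
    {
        uint64_t id = ((uint64_t)f->start_sec << 32) |
                      (uint64_t)(f->start_usec ^ f->flow_hash);
        return (int64_t)(id & (uint64_t)INT64_MAX);   /* clear the sign bit */
    }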
Now that the flow lookup is done in the worker threads the flow
queue handlers running after the capture thread(s) no longer have
access to the flow. This limits the options of how flow balancing
can be done.
This patch removes all code that is now useless. The only 2 methods
that still make sense are 'hash' and 'ippair'.
To simplify locking, move all locking out of the individual detect
code. Instead, lock the flow at the start of detection and unlock it
at the end.
The lua code can still be called without a lock (from the output
code paths), so a lock hint is still passed around to take care of this.
Instead of handling the packet update during flow lookup, handle
it in the stream/detect threads. This lowers the load of the
capture thread(s) in autofp mode.
The decoders now set a flag in the packet if the packet needs a
flow lookup. Then the workers will take care of this. The decoders
also already calculate the raw flow hash value. This is so that
this value can be used in flow balancing in autofp.
Because the flow lookup/creation is now done in the worker threads,
the flow balancing can no longer use the flow. It's not yet
available. Autofp load balancing uses raw hash values instead.
Along the same lines, move the UDP AppLayer handling out of the DecodeUDP
module and into the stream/detect threads as well.
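The split between decoder and worker could look like this sketch; the packet fields, flag and function names are placeholders, not the real Packet API.

    #include <stdint.h>

    #define PKT_WANTS_FLOW_SKETCH (1 << 0)

    struct PacketSketch {
        uint32_t flags;
        uint32_t flow_hash;   /* raw hash, usable for autofp load balancing */
        void *flow;           /* set later, in the worker thread */
    };

    /* decoder side (capture thread): only compute the hash and flag the packet */
    static void DecodeMarkFlowLookupSketch(struct PacketSketch *p, uint32_t hash)
    {
        p->flow_hash = hash;
        p->flags |= PKT_WANTS_FLOW_SKETCH;
    }

    /* worker side (stream/detect thread): do the actual lookup/creation */
    static void WorkerHandleFlowSketch(struct PacketSketch *p,
                                       void *(*lookup)(uint32_t hash))
    {
        if (p->flags & PKT_WANTS_FLOW_SKETCH)
            p->flow = lookup(p->flow_hash);
    }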
Handle TCP session reuse inside the flow engine itself. If a looked up
flow matches the packet, but the packet is a TCP stream starter, check
if the ssn needs to be reused. If that is the case handle it within the
lookup function. This simplifies the locking and removes potential race
conditions.
Update Flow lookup functions to get a flow reference during lookup.
This reference is set under the FlowBucket lock.
This paves the way for not taking a flow lock during lookups.
Store the tenant id in the flow and use the stored id when setting
up pseudo packets.
For tunnel and defrag packets, get the tenant from the parent. This will
only pass tenant_ids set at capture time.
For defrag packets, the tenant selector based on vlan id will still
work as the vlan id(s) are stored in the defrag tracker before being
passed on.
Stream GAPs and stream reassembly depth are tracked per direction. In
many cases they will happen in one direction, but not in the other.
Example:
HTTP requests are generally smaller than responses. So on the response
side we may hit the depth limit, but not on the request side.
The asynchronous 'disruption' has a side effect in the transaction
engine. The 'progress' tracking would never mark such transactions
as complete, and thus some inspection and logging wouldn't happen
until the very last moment: when EOF's are passed around.
Especially in proxy environments with _very_ many transactions in a
single TCP connection, this could lead to serious resource issues. The
EOF handling would suddenly have to handle thousands or more
transactions. These transactions would have been stored for a long time.
This patch introduces the concept of disruption flags: flags passed to
the tx progress logic that indicate disruptions in the traffic or
the traffic handling. The idea is that the progress is
marked as complete on disruption, even if a tx is not complete. This
allows the detection and logging engines to process the tx after which
it can be cleaned up.
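In sketch form, with invented flag and function names, the progress check could treat a disrupted direction as complete:

    #include <stdbool.h>
    #include <stdint.h>

    #define DISRUPT_GAP_SKETCH   (1 << 0)   /* a sequence gap was seen */
    #define DISRUPT_DEPTH_SKETCH (1 << 1)   /* reassembly depth was reached */

    /* a transaction counts as done in a direction when the parser reports
     * completion, or when that direction was disrupted, so detection and
     * logging can run and the tx can be cleaned up early */
    static bool TxIsDoneSketch(int progress, int progress_done, uint8_t disrupt_flags)
    {
        if (disrupt_flags & (DISRUPT_GAP_SKETCH | DISRUPT_DEPTH_SKETCH))
            return true;
        return progress >= progress_done;
    }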
Instead, introduce StreamTcpDisableAppLayer to disable app layer
tracking and reassembly. StreamTcpAppLayerIsDisabled can be used
to check it.
Replace all uses of FlowSetSessionNoApplayerInspectionFlag and
the FLOW_NO_APPLAYER_INSPECTION.
Use separate data structures for storing TX and FLOW (AMATCH) detect
state.
- move state storing into util funcs
- remove de_state_m
- simplify reset state logic on reload
For the autofp case, handling TCP reuse in the flow engine didn't work.
The problem is the mismatch between the moment the flow engine looks at
packets and the stream, and the moment the stream engine runs. Flow engine
is invoked in the packet capture thread(s), while the stream engine runs
as part of the stream/detect thread(s). Because of the queues between
those threads the flow engine may already inspect a new SYN while the
stream engine still has to process the previous session.
Moving the flow engine to the stream/detect thread(s) wasn't an option
as the 'autofp' load balancing depends on the flow already being
available in the packet.
The solution here is to add a check for this condition to the stream
engine. At this point the TCP state is up to date. If a TCP reuse case
is encountered, this is the global logic:
- detach packet for old flow
- get a new flow and attach it to the packet
- flag the old flow that it is now obsolete
Additional logic makes sure that the packets already in the queue
between the flow thread(s) and the stream thread are reassigned the
new flow.
Some special handling:
Apply previous 'reuse' before checking for a new reuse. Otherwise we're
tagging the wrong flow in some cases (multiple reuses in the same tuple).
When in a flow/ssn reuse condition, properly remove the packet from
the flow.
Don't 'reuse' if packet is a SYN retransmission.
The old flow is timed out normally by the flow manager.
Until now, TCP session reuse was handled in the TCP stream engine.
If the state was TCP_CLOSED, a new SYN packet was received and a few
other conditions were met, the flow was 'reset' and reused for the
'new' TCP session.
There are a number of problems with this approach:
- it breaks the normal flow lifecycle wrt timeout, detection, logging
- new TCP sessions could come in on different threads due to mismatches
in timeouts between suricata and flow balancing hw/nic/drivers
- cleanup code was often causing problems
- it complicated locking because of the possible thread mismatch
This patch implements a different solution, where a new TCP session also
gets a new flow. To do this 2 main things needed to be done:
1. the flow engine needs to be aware of when the TCP reuse case is
   happening
2. the flow engine needs to be able to 'skip' the old flow once it has
   been replaced by a new one
To handle (1), a new function TcpSessionPacketSsnReuse() is introduced
to check for the TCP reuse conditions. It's called from 'FlowCompare()'
for TCP packets / TCP flows that are candidates for reuse. FlowCompare
returns FALSE for the 'old' flow in the case of TCP reuse.
This in turn will lead to the flow engine not finding a flow for the TCP
SYN packet, resulting in the creation of a new flow.
To handle (2), FlowCompare flags the 'old' flow. This flag causes future
FlowCompare calls to always return FALSE on it. In other words, the flow
can't be found anymore. It can only be accessed by:
1. existing packets with a reference to it
2. flow timeout handling as this logic gets the flows from walking the
hash directly
3. flow timeout pseudo packets, as they are set up by (2)
The old flow will time out normally, as governed by the "tcp closed"
flow timeout setting. At timeout, the normal detection, logging and
cleanup code will process it.
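The FlowCompare logic described above could be sketched as follows; the flag name is invented, and the boolean inputs stand in for the real tuple comparison and the TcpSessionPacketSsnReuse() check.

    #include <stdbool.h>
    #include <stdint.h>

    #define FLOW_TCP_REUSED_SKETCH (1 << 0)

    struct ReuseFlowSketch {
        uint32_t flags;
    };

    static bool FlowCompareSketch(struct ReuseFlowSketch *f,
                                  bool tuple_match, bool tcp_reuse_condition)
    {
        if (f->flags & FLOW_TCP_REUSED_SKETCH)   /* already replaced: never match */
            return false;
        if (!tuple_match)
            return false;
        if (tcp_reuse_condition) {               /* e.g. new SYN on a closed session */
            f->flags |= FLOW_TCP_REUSED_SKETCH;  /* hide it from all future lookups */
            return false;                        /* caller will create a new flow */
        }
        return true;
    }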
The flagging of a flow making it 'unfindable' in the flow hash is a bit
of a hack. The reason for this approach over for example putting the
old flow into a forced timeout queue where it could be timed out, is
that such a queue could easily become a contention point. The TCP
session reuse case can easily be created by an attacker. In case of
multiple packet handlers, this could lead to contention on such a flow
timeout queue.
The flow keyword used flag names that were shared with the
Packet::flowflags field. Some of the flags weren't used by the packet
though. This led to a waste of some 'flag space'.
This patch defines dedicated flags for the flow keyword and removes
the otherwise unused flags from the FLOW_PKT_* space.
The lastts timeval field in the flow records the timestamp of the
last packet that updated the flow. This allows for tracking the timeout
of the flow. So far, this value was updated under the flow lock and also
read under the flow lock.
This patch moves the updating of this field to the FlowGetFlowFromHash
function, where it is updated at the point where both the Flow lock and
the Flow Hash Row lock are held. This guarantees that the field is only
updated when both locks are held.
This makes reading the field safe when either lock is held, which is the
purpose of this patch.
The flow manager, while holding the flow hash row lock, can now safely
read the lastts value. This allows it to do the flow timeout check
without actually locking the flow.
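A sketch of that check; the field and function names are illustrative.

    #include <stdbool.h>
    #include <stdint.h>

    /* lastts is only ever written while both the flow lock and the hash row
     * lock are held, so reading it with just the row lock held is safe */
    struct TimeoutFlowSketch {
        int64_t lastts_sec;
    };

    /* called by the flow manager with only the hash row lock held */
    static bool FlowIsTimedOutSketch(const struct TimeoutFlowSketch *f,
                                     int64_t now_sec, int64_t timeout_sec)
    {
        return (f->lastts_sec + timeout_sec) <= now_sec;
    }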
A flow has 3 states: NEW, ESTABLISHED and CLOSED.
For all protocols except TCP, a flow is in state NEW as long as just one
side of the conversation has been seen. When both sides have been
observed the state is moved to ESTABLISHED.
TCP has a different logic, controlled by the stream engine. Here the TCP
state is leading.
Until now, when parts of the engine needed to know the flow state, it
would invoke a per protocol callback 'GetProtoState'. For TCP this would
return the state based on the TcpSession.
This patch changes this logic. It introduces an atomic variable in the
flow 'flow_state'. It defaults to NEW and is set to ESTABLISHED for non-
TCP protocols when we've seen both sides of the conversation.
For TCP, the state is updated from the TCP engine directly.
The goal is to allow for access to the state without holding the Flow's
main mutex lock. This will later allow the Flow Manager(s) to evaluate
the Flow without interrupting it.
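A sketch of the atomic state, with invented names; the real FlowUpdateState additionally invalidates the hash row timeout timestamp, as described earlier.

    #include <stdatomic.h>

    enum FlowStateSketch {
        FLOW_STATE_SKETCH_NEW = 0,
        FLOW_STATE_SKETCH_ESTABLISHED,
        FLOW_STATE_SKETCH_CLOSED,
    };

    struct StateFlowSketch {
        _Atomic int flow_state;   /* readable without the flow's main mutex */
    };

    /* writer side: packet path or TCP engine updates the state */
    static void FlowUpdateStateSketch(struct StateFlowSketch *f, enum FlowStateSketch s)
    {
        atomic_store(&f->flow_state, s);
    }

    /* reader side: e.g. the flow manager evaluating the flow without its lock */
    static enum FlowStateSketch FlowGetStateSketch(struct StateFlowSketch *f)
    {
        return (enum FlowStateSketch)atomic_load(&f->flow_state);
    }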