Adds a framework for setting exception policies. These are called
when the engine reaches an exception condition, such as hitting
a memcap or encountering a traffic processing error.
The policy gives control over what should happen next: drop the packet,
drop the packet and flow, bypass, etc.
Implements the policy for:
stream: If the stream session or reassembly memcaps are hit, call
the memcap policy on the packet and flow.
flow: Apply policy when memcap is reached and no flow could be
freed up.
defrag: Apply policy when no tracker could be picked up.
app-layer: Apply policy if a parser reaches an error state.
All options default to 'ignore', which means the default behavior
is unchanged.
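As a rough illustration, a dispatch for such a policy might look
like the sketch below; all names here are hypothetical, not
Suricata's actual API:

```c
/* Sketch only: hypothetical names, not Suricata's actual API. */
#include <stddef.h>

enum ExcPolicy {
    EXC_POLICY_IGNORE = 0,   /* default: behavior unchanged */
    EXC_POLICY_DROP_PACKET,
    EXC_POLICY_DROP_FLOW,    /* drop the packet and the flow */
    EXC_POLICY_BYPASS,
};

typedef struct Packet { int drop; } Packet;
typedef struct Flow { int drop; int bypass; } Flow;

static void ExcPolicyApply(enum ExcPolicy policy, Packet *p, Flow *f)
{
    switch (policy) {
        case EXC_POLICY_DROP_FLOW:
            if (f != NULL)
                f->drop = 1;
            /* fall through: dropping the flow also drops this packet */
        case EXC_POLICY_DROP_PACKET:
            p->drop = 1;
            break;
        case EXC_POLICY_BYPASS:
            if (f != NULL)
                f->bypass = 1;
            break;
        case EXC_POLICY_IGNORE:
        default:
            break; /* 'ignore': nothing extra happens */
    }
}
```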
Adds command line options to simulate exception conditions. These
are only exposed if compiled with `--enable-debug`.
Ticket: #5214.
Ticket: #5215.
Ticket: #5216.
Ticket: #5218.
Ticket: #5194.
(cherry picked from commit 8580499ded)
In the case of midstream SYN/ACK pickup, we reverse the flow based on
the SYN/ACK. If we then later get traffic that appears to be in the
reverse direction based on the app-layer, we would reverse it again.
This isn't correct. When we have the SYN/ACK we know the flow's real
direction.
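A minimal sketch of the resulting guard, with hypothetical flag and
helper names: once the SYN/ACK has fixed the direction, later
app-layer hints are ignored.

```c
/* Sketch only: flag and helper names are hypothetical. */
typedef struct Flow { unsigned int flags; } Flow;

#define FLOW_DIR_FROM_SYNACK 0x01 /* direction fixed by midstream SYN/ACK */

static void FlowSwapDirection(Flow *f)
{
    (void)f; /* ... reverse the flow's direction-sensitive state ... */
}

/* Called when the app-layer suspects the flow direction is wrong. */
static void MaybeReverseFlow(Flow *f)
{
    /* The SYN/ACK already told us the real direction: never reverse
     * again based on app-layer hints. */
    if (f->flags & FLOW_DIR_FROM_SYNACK)
        return;
    FlowSwapDirection(f);
}
```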
(cherry picked from commit fea374626a)
Ticket: #4562
As the data that triggered the opposing side was the same
protocol and not another one, the protocol change failed.
This prevents a memory leak in a later call of AppLayerParserParse,
which would allocate a new state and leak the old one.
(cherry picked from commit be617a3c1b)
Take the TCP window into account, and actually bail out if we
received too much data, where the limit is configured by
stream.reassembly.depth.
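A sketch of what such a bail-out condition can look like (names are
illustrative; the real limit comes from the stream.reassembly.depth
setting):

```c
#include <stdint.h>
#include <stdbool.h>

/* Sketch only: would accepting data_len more bytes exceed the
 * configured depth (stream.reassembly.depth)? */
static bool StreamDepthExceeded(uint64_t stream_offset, uint32_t data_len,
                                uint64_t reassembly_depth)
{
    if (reassembly_depth == 0)
        return false; /* 0 means unlimited */
    return stream_offset + data_len > reassembly_depth;
}
```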
(cherry picked from commit f77b027ada)
For size limit checks, consider only the data available at the
stream start and before any gaps.
The old check would consider too much data if there were temporary gaps,
like when a data packet was in-window but (far) ahead of the expected
segment.
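A sketch of the gap-aware accounting, assuming reassembled data is
tracked as an ordered list of contiguous blocks (types and names are
hypothetical):

```c
#include <stdint.h>
#include <stddef.h>

/* Sketch only: one contiguous region of reassembled data. */
typedef struct StreamBlock {
    uint64_t offset;            /* offset from the stream start */
    uint64_t len;
    struct StreamBlock *next;   /* list is ordered by offset */
} StreamBlock;

/* Count only the data available at the stream start, stopping at the
 * first gap. In-window data (far) ahead of the expected segment no
 * longer inflates the size limit checks. */
static uint64_t StreamDataAvailableBeforeGap(const StreamBlock *head)
{
    uint64_t avail = 0;
    for (const StreamBlock *b = head; b != NULL; b = b->next) {
        if (b->offset != avail)
            break; /* gap before this block */
        avail += b->len;
    }
    return avail;
}
```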
(cherry picked from commit 7a114e506a)
Bail out so that protocol detection does not run for too long:
TCPProtoDetectCheckBailConditions somehow relies on its TCP stream
starting from zero, which is not the case on protocol change.
Also adds debug validation checks, such as checking that both
sides are known on protocol change, and only sets alproto_orig
once.
TCPProtoDetect can either set f->alproto and change f->alstate,
or return an error.
When the original alstate gets freed, we must set the pointer
to NULL, as the slot can get reused.
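The underlying pattern is plain C hygiene; a minimal sketch with
hypothetical names:

```c
#include <stddef.h>

/* Sketch only: hypothetical types and names. */
typedef struct Flow { void *alstate; } Flow;

static void AppStateFree(void *state) { (void)state; /* ... */ }

static void FlowFreeAppState(Flow *f)
{
    if (f->alstate != NULL) {
        AppStateFree(f->alstate);
        /* clear the pointer: the slot can be reused by a new parser,
         * and a stale pointer would be freed or used again */
        f->alstate = NULL;
    }
}
```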
When a TCP session is picked up from the response the flow is
reversed by the protocol detection code.
This would lead to duplicate logging of the response. The reason this
happened was that the per stream app progress tracker was not handled
correctly by the direction reversing code. While the streams were
swapped the stream engine would continue to use a now outdated pointer
to what had become the wrong direction.
This patch fixes this by making the stream a ptr-to-ptr that can
be updated by the protocol detection code as well.
In addition, the progress tracking was cleaned up and the GAP error
handling in this case was improved as well.
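A sketch of why a pointer-to-pointer helps here (types are
illustrative, not the real engine structures): when the reversal
code swaps directions, it can also repoint the caller's stream
pointer so progress tracking follows the correct direction.

```c
#include <stdint.h>

/* Sketch only: illustrative types, not the real engine structures. */
typedef struct TcpStream { uint64_t app_progress; } TcpStream;
typedef struct TcpSession { TcpStream client; TcpStream server; } TcpSession;

/* Because the caller passes a TcpStream **, the reversal code can
 * repoint it at the other direction; the caller's progress tracking
 * then keeps following the correct stream. */
static void StreamReverse(TcpSession *ssn, TcpStream **stream)
{
    /* ... swap direction-sensitive state between client and server ... */
    *stream = (*stream == &ssn->client) ? &ssn->server : &ssn->client;
}
```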
When Suricata picks up a flow it assumes the first packet is
toserver. In a perfect world without packet loss and where all
sessions neatly start after Suricata itself started, this would be
true. However, in reality we have to account for packet loss and
Suricata starting to get packets for flows already active before
Suricata was (re)started.
The protocol records on the wire would often be able to tell us more
though. For example in SMB1 and SMB2 records there is a flag that
indicates whether the record is a request or a response. This patch
enables the protocol detection engine to utilize this information
to 'reverse' the flow.
There are three ways in which this is supported in this patch:
1. patterns for detection are registered per direction. If the proto
was not recognized in the traffic direction, and midstream is
enabled, the pattern set for the opposing direction is also
evaluated. If that matches, the flow is considered to be in the
wrong direction and is reversed.
2. probing parsers now have a way to feed back their understanding
of the flow direction. They are now passed the direction as
Suricata sees the traffic when calling the probing parsers. The
parser can then see if its own observation matches that, and
pass back its own view to the caller (see the sketch after
this list).
3. a new pattern + probing parser set up: probing parsers can now
be registered with a pattern, so that when the pattern matches
the probing parser is called as well. The probing parser can
then provide the protocol detection engine with the direction
of the traffic.
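A sketch of what the direction feedback in (2) can look like, using
the SMB1 request/response flag as the example; the signature and
constants are illustrative, not the exact Suricata API:

```c
#include <stdint.h>

/* Sketch only: illustrative signature and constants. */
typedef uint16_t AppProto;
#define ALPROTO_UNKNOWN 0
#define ALPROTO_SMB     1   /* placeholder value */
#define DIR_TOSERVER    0x04
#define DIR_TOCLIENT    0x08

/* The engine passes the direction it currently believes in 'dir'; the
 * parser reports its own view via '*rdir'. If the two disagree, the
 * engine can decide to reverse the flow. */
static AppProto SMB1Probe(const uint8_t *input, uint32_t len,
                          uint8_t dir, uint8_t *rdir)
{
    /* SMB1 header: 0xff 'S' 'M' 'B', command (1), status (4), flags (1) */
    if (len < 10 || input[0] != 0xff || input[1] != 'S' ||
            input[2] != 'M' || input[3] != 'B')
        return ALPROTO_UNKNOWN;
    /* SMB_FLAGS_REPLY (0x80) is only set on responses */
    *rdir = (input[9] & 0x80) ? DIR_TOCLIENT : DIR_TOSERVER;
    (void)dir; /* the caller compares its view against *rdir */
    return ALPROTO_SMB;
}
```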
The process of reversing takes a multi-step approach as well:
a. reverse the current packet's direction
b. reverse most of the flow's direction-sensitive flags
c. tag the flow as 'reversed'. This is because the 5 tuple is
*not* reversed, since it is immutable after the flow's creation.
Most of the currently registered parsers benefit already:
- HTTP/SMTP/FTP/TLS patterns are registered per direction already
so they will benefit from the pattern midstream logic in (1)
above.
- the Rust based SMB parser uses a mix of pattern + probing parser
as described in (3) above.
- the NFS detection is purely done by probing parser and is updated
to consider the direction in that parser.
Other protocols, such as DNS, are still to do.
Ticket: #2572
This patch provides a working expectation system. This gives
Suricata a way to identify parallel connections opened by a
protocol such as FTP.
Expectations are kept in a chained list, and entries are cleaned
up by timeout.
This patch also defines a counter of expectations that is used
to check whether we need to query IPPairs. This way we only query
the IPPairs store if we have an expectation.
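A minimal sketch of such an expectation chain with timeout pruning
and a gating counter (all names hypothetical):

```c
#include <stdint.h>
#include <stdlib.h>

/* Sketch only: hypothetical names. One expected follow-up connection,
 * e.g. an FTP data channel announced on the control channel. */
typedef struct Expectation {
    uint16_t dport;              /* expected destination port */
    uint16_t alproto;            /* protocol to assign on match */
    uint64_t expires;            /* absolute timeout */
    struct Expectation *next;    /* chained list */
} Expectation;

/* Global count of live expectations: the per-packet path only queries
 * the IPPair store when this is non-zero. */
static uint64_t expectation_count;

static void ExpectationPrune(Expectation **head, uint64_t now)
{
    while (*head != NULL) {
        if ((*head)->expires <= now) {
            Expectation *dead = *head;
            *head = dead->next;
            free(dead);
            expectation_count--;
        } else {
            head = &(*head)->next;
        }
    }
}
```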
Check the counter id before updating a counter. In case of a
disabled parser with protocol detection enabled, the id can be 0.
In debug mode this would lead to a BUG_ON.
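A sketch of the guard (StatsIncr stands in for the engine's stats
call):

```c
#include <stdint.h>

/* Sketch only: StatsIncr stands in for the engine's per-thread
 * stats increment. */
typedef struct ThreadVars ThreadVars;
static void StatsIncr(ThreadVars *tv, uint16_t id) { (void)tv; (void)id; }

static void SafeCounterIncr(ThreadVars *tv, uint16_t id)
{
    /* id 0 means "counter not registered" (e.g. parser disabled while
     * protocol detection stays on): skip instead of hitting a BUG_ON */
    if (id > 0)
        StatsIncr(tv, id);
}
```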
Add API calls to upgrade to TLS or to request a protocol change
without a specific protocol expectation.
If the HTTP CONNECT session includes a port on the url, use that to
look up the probing parser during protocol detection. Solves a
missed detection of an SSLv2 session that upgrades to TLSv1. SSLv2
relies on the probing parser which is limited to certain ports.
In case of STARTTLS in SMTP and FTP, the port is hardcoded to 443.
A new event APPLAYER_UNEXPECTED_PROTOCOL is set if there was a
mismatch.
Support changing the application level protocol for a flow. This is
needed by STARTTLS and HTTP CONNECT to switch from the original
alproto to tls.
This commit allows a flag to be set 'FLOW_CHANGE_PROTO', which
triggers a new protocol detection on the next packet for a flow.
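A rough sketch of the mechanism, with hypothetical flag and value
names: a parser that sees STARTTLS marks the flow, and the
per-packet path re-runs detection when the flag is set.

```c
#include <stdint.h>

/* Sketch only: flag and value names are hypothetical. */
#define FLOW_CHANGE_PROTO 0x01

typedef uint16_t AppProto;
#define ALPROTO_TLS 2   /* placeholder value */

typedef struct Flow {
    uint32_t flags;
    AppProto alproto_expect; /* what re-detection should find, if known */
} Flow;

/* e.g. called by the SMTP parser when it sees a STARTTLS exchange */
static void RequestTLSUpgrade(Flow *f)
{
    f->alproto_expect = ALPROTO_TLS;
    f->flags |= FLOW_CHANGE_PROTO;
}

/* In the per-packet path: does this flow want protocol re-detection? */
static int FlowWantsProtoRedetect(const Flow *f)
{
    return (f->flags & FLOW_CHANGE_PROTO) != 0;
}
```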
Set flags by default:
-Wmissing-prototypes
-Wmissing-declarations
-Wstrict-prototypes
-Wwrite-strings
-Wcast-align
-Wbad-function-cast
-Wformat-security
-Wno-format-nonliteral
-Wmissing-format-attribute
-funsigned-char
Fix minor compiler warnings for these new flags on gcc and clang.
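For illustration, the kind of changes these warnings typically
force, shown as a small C sketch:

```c
/* Sketch only: the kind of fixes these warnings force. */

/* -Wmissing-prototypes / -Wmissing-declarations: a non-static function
 * needs a prior prototype (normally from a header); purely local
 * helpers become static instead. */
static int add(int a, int b) { return a + b; }

/* -Wwrite-strings: string literals are const, so declare them const. */
static const char *greeting = "hello";

int use_them(void); /* prototype, typically in a header */
int use_them(void)
{
    return add(1, 2) + (greeting[0] == 'h');
}
```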
Implement the inline mode for raw content inspection. Packets
are leading, and when a packet's payload has been added to the
stream, the packet is inspected in the context of the stream.
Reassembly will return a buffer with the packet data with older
data in front of it and after it, if available.
At flow timeout, we no longer need to first run reassembly in
one dir, then inspection in the other. We can do both in a single
packet now.
Disable pseudo packets when receiving stream end packets. Instead
call the app-layer parser in the packet direction for stream end
packets and flow end packets.
These changes in handling of those stream end packets make the
pseudo packets unnecessary.
Remove the 'StreamMsg' approach from the engine. In this approach the
stream engine would create a list of chunks for inspection by the
detection engine. There were several issues:
1. the messages had a fixed size, so blocks of data bigger than ~4k
would be cut into multiple messages
2. it led to lots of data copying and unnecessary memory use
3. the StreamMsgs used a central pool
The Stream engine switched over to the streaming buffer API, which
means that the reassembled data is always available. This made the
StreamMsg approach even clunkier.
The new approach exposes the streaming buffer data to the detection
engine. It has to pay attention to an important issue though: packet
loss. The data may have gaps. The streaming buffer API tracks the
blocks of continuous data.
To access the data for inspection a callback approach is used. The
'StreamReassembleRaw' function is called with a callback and data.
This way it runs the MPM and individual rule inspection code. At
the end of each detection run the stream engine is notified that it
can move its 'progress' forward.
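A sketch of the callback approach, assuming continuous data is
tracked as blocks with stream offsets; the types and the callback
shape are illustrative, not the exact API:

```c
#include <stdint.h>
#include <stddef.h>

/* Sketch only: illustrative types; the real callback shape differs. */
typedef int (*StreamRawCallback)(void *cb_data, const uint8_t *data,
                                 uint32_t data_len, uint64_t offset);

/* One continuous block of reassembled data in the streaming buffer. */
typedef struct RawBlock {
    uint64_t offset;
    const uint8_t *data;
    uint32_t len;
    struct RawBlock *next;
} RawBlock;

/* Invoke the callback (MPM + per-rule inspection) on each continuous
 * block past 'progress', then return the new progress so the stream
 * engine can slide its window forward. */
static uint64_t StreamInspectRaw(const RawBlock *blocks, uint64_t progress,
                                 StreamRawCallback cb, void *cb_data)
{
    for (const RawBlock *b = blocks; b != NULL; b = b->next) {
        if (b->offset + b->len <= progress)
            continue; /* already inspected */
        cb(cb_data, b->data, b->len, b->offset);
        progress = b->offset + b->len;
    }
    return progress;
}
```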