In preparation for the introduction of more general purpose prefilter
engines, rename PatternMatcherQueue to PrefilterRuleStore. The new
engines will fill this structure in a similar way to the current mpm
prefilters.
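As a rough, hypothetical sketch (field and function names are assumptions,
not necessarily the ones in the tree), the renamed structure is essentially
a growable array of internal signature ids that each prefilter engine
appends its candidate rules to:

```
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

typedef uint32_t SigIntId;

/* assumed layout: candidate rule ids collected by the prefilter engines */
typedef struct PrefilterRuleStore_ {
    SigIntId *rule_id_array;
    uint32_t rule_id_array_cnt;
    uint32_t rule_id_array_size;
} PrefilterRuleStore;

/* append candidate signature ids, growing the array as needed */
static int PrefilterAddSids(PrefilterRuleStore *pmq,
                            const SigIntId *sids, uint32_t cnt)
{
    if (pmq->rule_id_array_cnt + cnt > pmq->rule_id_array_size) {
        uint32_t new_size = pmq->rule_id_array_size ?
                            pmq->rule_id_array_size : 128;
        while (new_size < pmq->rule_id_array_cnt + cnt)
            new_size *= 2;
        SigIntId *tmp = realloc(pmq->rule_id_array,
                                new_size * sizeof(SigIntId));
        if (tmp == NULL)
            return -1;
        pmq->rule_id_array = tmp;
        pmq->rule_id_array_size = new_size;
    }
    memcpy(pmq->rule_id_array + pmq->rule_id_array_cnt,
           sids, cnt * sizeof(SigIntId));
    pmq->rule_id_array_cnt += cnt;
    return 0;
}
```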
In the case of a bypassed flow we add a 'bypass' key that can
be 'local' or 'capture'. This will allow the user to know if the
capture bypass method is failing by looking at the 'bypass' key.
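For illustration only, the logged value could be derived from the flow
state along these lines (the state names and helper are assumptions; the
JSON handling in this sketch uses the jansson API):

```
#include <jansson.h>

/* assumed flow states corresponding to the two bypass methods */
enum {
    FLOW_STATE_LOCAL_BYPASSED,
    FLOW_STATE_CAPTURE_BYPASSED,
};

/* hypothetical helper: add the 'bypass' key to the flow JSON object */
static void JsonAddBypass(json_t *fjs, int flow_state)
{
    if (flow_state == FLOW_STATE_LOCAL_BYPASSED)
        json_object_set_new(fjs, "bypass", json_string("local"));
    else if (flow_state == FLOW_STATE_CAPTURE_BYPASSED)
        json_object_set_new(fjs, "bypass", json_string("capture"));
}
```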
This adds a new timeout value for the local bypassed state. For user
simplicity it is called only `bypassed`. The patch also adds
an emergency value so we can clean bypassed flows a bit faster.
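A minimal sketch of how the per-state timeout lookup could pick up the new
values (struct and field names are assumptions):

```
#include <stdint.h>

/* assumed container for the configured per-state flow timeouts */
typedef struct FlowTimeouts_ {
    uint32_t new_timeout;
    uint32_t est_timeout;
    uint32_t closed_timeout;
    uint32_t bypassed_timeout;           /* the new `bypassed` value */
    uint32_t emergency_bypassed_timeout; /* shorter, for emergency mode */
} FlowTimeouts;

static uint32_t FlowBypassedTimeout(const FlowTimeouts *t, int emergency)
{
    return emergency ? t->emergency_bypassed_timeout : t->bypassed_timeout;
}
```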
If we still see packets for a capture bypassed flow after some time, it
means that the capture method is not handling the bypass correctly,
so it is better to switch to the local bypass method.
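A sketch of that fallback, with hypothetical field and constant names: if a
capture bypassed flow keeps receiving packets past a grace period, downgrade
it to local bypass so Suricata drops the packets itself.

```
#include <stdint.h>

/* hypothetical flow representation for this sketch */
enum FlowState {
    FLOW_STATE_ESTABLISHED,
    FLOW_STATE_LOCAL_BYPASSED,
    FLOW_STATE_CAPTURE_BYPASSED,
};

struct Flow {
    enum FlowState state;
    uint64_t bypass_ts; /* time (seconds) the bypass decision was taken */
};

#define CAPTURE_BYPASS_GRACE 60 /* seconds, illustrative value */

/* called when a packet shows up on a flow that should already be bypassed */
static void FlowCheckCaptureBypass(struct Flow *f, uint64_t now)
{
    if (f->state == FLOW_STATE_CAPTURE_BYPASSED &&
        now - f->bypass_ts > CAPTURE_BYPASS_GRACE) {
        /* the capture method is clearly not dropping the packets for us,
         * so fall back to Suricata-only bypass */
        f->state = FLOW_STATE_LOCAL_BYPASSED;
    }
}
```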
If a packet triggers a rule which contains both the
bypass and filestore keywords,
the matching file won't be stored since the flow is no longer inspected.
To avoid that, when a rule contains the filestore keyword
we make sure that the bypass keyword is not also present.
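A simplified, standalone sketch of the consistency check (flag names and the
validation hook are assumptions): reject a signature that uses both keywords.

```
#include <stdio.h>
#include <stdint.h>

/* hypothetical signature flags for this sketch */
#define SIG_FLAG_FILESTORE (1 << 0)
#define SIG_FLAG_BYPASS    (1 << 1)

struct Signature {
    uint32_t flags;
    uint32_t id;
};

/* a bypassed flow is no longer inspected, so the file could never be
 * stored: refuse the combination at rule load time */
static int SigCheckBypassFilestore(const struct Signature *s)
{
    if ((s->flags & SIG_FLAG_FILESTORE) && (s->flags & SIG_FLAG_BYPASS)) {
        fprintf(stderr, "rule %u: bypass cannot be combined with filestore\n",
                s->id);
        return -1;
    }
    return 0;
}
```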
This adds a new keyword which permits calling the
bypass callback when a sig matches.
The callback must only be called once the match of the sig
is complete.
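A sketch of the intended behaviour (types and the callback name are
placeholders): the callback is only invoked after the whole sig has matched,
not when the keyword itself is evaluated.

```
#include <stdbool.h>

struct Packet;                         /* placeholder */
struct Signature { bool has_bypass; }; /* placeholder */

/* stub for the capture-provided bypass callback (assumed interface) */
static int PacketBypassCallback(struct Packet *p)
{
    (void)p;
    return 1; /* pretend the capture method accepted the bypass */
}

/* run only once every keyword of the signature has matched */
static void SigMatchedPostHook(struct Packet *p, const struct Signature *s)
{
    if (s->has_bypass) {
        /* the rule asked for bypass: hand the flow over now that the
         * match is complete */
        PacketBypassCallback(p);
    }
}
```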
This patch activates bypass for encrypted flows and for flows
that have reached the stream depth on both sides.
For encrypted flows, Suricata stops the inspection, so
we can just get them out of the way via bypass. The same logic applies
to flows that have reached the stream depth.
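A sketch of the trigger condition under assumed flag names: bypass is
requested when the session is flagged as encrypted and no longer inspected,
or when both directions have hit the configured stream depth.

```
#include <stdbool.h>
#include <stdint.h>

/* hypothetical session/stream flags for this sketch */
#define SSN_FLAG_ENCRYPTED_BYPASS (1 << 0) /* inspection stopped, e.g. TLS */
#define STREAM_FLAG_DEPTH_REACHED (1 << 0)

struct TcpStream { uint32_t flags; };
struct TcpSession {
    uint32_t flags;
    struct TcpStream client;
    struct TcpStream server;
};

static bool StreamWantsBypass(const struct TcpSession *ssn)
{
    if (ssn->flags & SSN_FLAG_ENCRYPTED_BYPASS)
        return true;
    if ((ssn->client.flags & STREAM_FLAG_DEPTH_REACHED) &&
        (ssn->server.flags & STREAM_FLAG_DEPTH_REACHED))
        return true;
    return false;
}
```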
For a basic test of the feature, use the following nftables ruleset:
```
table ip filter {
  chain output {
    type filter hook output priority 0; policy accept;
    ct mark 0x1 counter accept
    oif lo counter queue num 0
  }
  chain connmark_save {
    type filter hook output priority 1; policy accept;
    mark 0x1 ct mark set mark counter
    ct mark 0x1 counter
  }
}
```
And use a bypass mark and mask of 1 in the nfq configuration. Then you
can test the system by scp'ing a big file to 127.0.0.1. You can also
use iperf to measure the performance on localhost. It is recommended
to lower the MTU to 1500 to get something more realistic, as this
increases the number of packets.
Call the packet bypass callback if necessary and update the flow
state: we set the capture bypassed state if the callback is successful,
and switch to the local bypassed state in case of failure.
As a capture method like nfq will cut both sides of the flow instantly,
we will not get the ACK for most of the data that has been received, so
it is better to force reassembly to be sure the flow entry reaches
its timeout.
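Roughly, the update step could look like this (types and names assumed):
try the capture callback first and fall back to local bypass if it fails.

```
/* hypothetical types and states for this sketch */
enum FlowState {
    FLOW_STATE_ESTABLISHED,
    FLOW_STATE_LOCAL_BYPASSED,
    FLOW_STATE_CAPTURE_BYPASSED,
};

struct Packet;
struct Flow { enum FlowState state; };

/* stub for the capture method's bypass callback (e.g. nfq);
 * returns non-zero on success */
static int PacketBypassCallback(struct Packet *p)
{
    (void)p;
    return 0; /* pretend the capture method refused or is unavailable */
}

static void FlowUpdateBypass(struct Flow *f, struct Packet *p)
{
    if (PacketBypassCallback(p)) {
        /* the capture method will drop the rest of the flow for us */
        f->state = FLOW_STATE_CAPTURE_BYPASSED;
    } else {
        /* no capture support, or it failed: drop the packets ourselves */
        f->state = FLOW_STATE_LOCAL_BYPASSED;
    }
}
```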
This patch adds two new states to the flow:
* local bypass: Suricata-only bypass; packets belonging to
  a flow in this state will be discarded fast
* capture bypass: the capture method is handling the bypass and Suricata
  will discard the packets that are currently queued
A bypassed state is added to the flow and will be set when a bypass
decision is taken. In the case of capture bypass this will allow the
flow entry to be removed from the flow table faster, instead of
waiting for the "established" timeout.
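To tie this together, a compressed sketch of the resulting states and the
fast path for packets hitting a bypassed flow (names are assumptions):

```
#include <stdbool.h>

/* assumed flow states added by the patch, next to the existing ones */
enum FlowState {
    FLOW_STATE_NEW,
    FLOW_STATE_ESTABLISHED,
    FLOW_STATE_CLOSED,
    FLOW_STATE_LOCAL_BYPASSED,   /* Suricata drops the packets itself */
    FLOW_STATE_CAPTURE_BYPASSED, /* the capture method drops them for us */
};

struct Flow { enum FlowState state; };

/* early check in the packet path: anything on a bypassed flow is
 * discarded without further inspection */
static bool FlowIsBypassed(const struct Flow *f)
{
    return f->state == FLOW_STATE_LOCAL_BYPASSED ||
           f->state == FLOW_STATE_CAPTURE_BYPASSED;
}
```

In the capture bypassed case only the packets already queued inside Suricata
should still reach this check; afterwards the flow entry is just kept around
until the `bypassed` timeout expires.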