Use new management API to run the flow recycler.
Make the number of threads configurable:
flow:
  memcap: 64mb
  hash-size: 65536
  prealloc: 10000
  emergency-recovery: 30
  managers: 2
  recyclers: 2
This sets up 2 flow recyclers.
Use new management API to run the flow manager.
Support multiple flow managers, where each of them works with its
own part of the flow hash.
Make the number of threads configurable:
flow:
  memcap: 64mb
  hash-size: 65536
  prealloc: 10000
  emergency-recovery: 30
  managers: 2
This sets up 2 flow managers.
Handle misc tasks only in instance 1: defrag hash timeout handling,
host hash timeout handling and flow spare queue updating are done
only by the first instance.
Now that we use 'filetype' instead of 'type', we should also
use 'regular' instead of 'file'.
Added a fallback to make sure we stay compatible with old configs.
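For illustration, a minimal sketch of an output using the new value; the
'fast' output and the filename shown here are examples only:
outputs:
  - fast:
      enabled: yes
      filename: fast.log
      filetype: regular   # formerly 'type: file'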
EVE logging is a really good substitute for pcapinfo. Suriwire is
now supports EVE output, so it is no longer necessary to have
pcapinfo in Suricata.
Add profiling to a logfile. Default is $log_dir/pcaplog_stats.log
The counters for open, close, rotate, write and handles are written
to it, as well as:
- total bytes written
- cost per MiB
- cost per GiB
Option is disabled by default.
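A hedged sketch of how this could look in the profiling section of
suricata.yaml; the exact key names (pcap-log, filename, append) are
assumptions based on the description above:
profiling:
  pcap-log:
    enabled: yes
    filename: pcaplog_stats.log
    append: yes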
Instead of the notion of toserver and toclient protocol detection, use
destination port and source port.
Independent of the data direction, the flow's port settings will be used
to find the correct probing parser, where we first try the dest port
and, if that fails, the source port.
Update the configuration file format, where toserver is replaced by 'dp'
and toclient by 'sp'. Toserver is interpreted as 'dp' and toclient as
'sp' for backwards compatibility.
Example for dns:
dns:
  # memcaps. Globally and per flow/state.
  #global-memcap: 16mb
  #state-memcap: 512kb
  # How many unreplied DNS requests are considered a flood.
  # If the limit is reached, app-layer-event:dns.flooded; will match.
  #request-flood: 500
  tcp:
    enabled: yes
    detection-ports:
      dp: 53
  udp:
    enabled: yes
    detection-ports:
      dp: 53
Like before, progress of protocol detection is tracked per flow direction.
Bug #1142.
Add option "profiling.sample-rate":
# Run profiling for every xth packet. The default is 1, which means we
# profile every packet. If set to 1000, one packet is profiled for every
# 1000 received.
#sample-rate: 1000
This allows for configuration of the sample rate.
This patch introduces wrapper functions around allocation functions
to be able to have a global HTP memcap. A simple substitution of
the functions was not enough because the allocated size needs to be
known during freeing and reallocation.
The value of the memcap can be set in the YAML and is left by default
to unlimited (0) to avoid any surprise to users.
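For illustration, a sketch of setting the memcap in the YAML; the
placement under the http app-layer section is an assumption, and the
value is an example only:
app-layer:
  protocols:
    http:
      enabled: yes
      # 0 (the default) means unlimited
      memcap: 64mb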
The app layer API has been rewritten. The main files in question are
app-layer.[ch], app-layer-detect-proto.[ch] and app-layer-parser.[ch].
Things addressed in this commit:
- Brings out a proper separation between protocol detection phase and the
parser phase.
- The dns app layer now is registered such that we don't use "dnstcp" and
"dnsudp" in the rules. A user who previously wrote a rule like this -
"alert dnstcp....." or
"alert dnsudp....."
would now have to use,
alert dns (ipproto:tcp;) or
alert udp (app-layer-protocol:dns;) or
alert ip (ipproto:udp; app-layer-protocol:dns;)
The same rules extend to another such protocol, dcerpc.
- The app layer parser api now takes in the ipproto while registering
callbacks.
- The app inspection/detection engine also takes an ipproto.
- All app layer parser functions now take direction as STREAM_TOSERVER or
STREAM_TOCLIENT, as opposed to 0 or 1, which was taken by some of the
functions.
- FlowInitialize() and FlowRecycle() now reset proto to 0. This is
needed by unittests, which would try to clean the flow, and that would
call the api, AppLayerParserCleanupParserState(), which would try to
clean the app state, but the app layer now needs an ipproto to figure
out which api to internally call to clean the state, and if the ipproto
is 0, it would return without trying to clean the state.
- A lot of unittests are now updated so that if they use a flow and
  need the app layer, they set the flow's ipproto.
- The "app-layer" section in the yaml conf has also been updated as well.
Add two new mPIPE load-balancing configuration options in suricata.yaml.
1) "sticky" which keep sending flows to one CPU, but if that queue is full,
don't drop the packet, move the flow to the least loaded queue.
2) Round-robin, which always picks the least full input queue for each
packet.
Allow configuring the number of packets in the input queue (iqueue) in
suricata.yaml.
For the mPipe.buckets configuration, which must be a power of 2, round
up to the next power of two rather than reporting an error.
Added mpipe.min-buckets, which defaults to 256, so if the requested number
of buckets can't be allocated, Suricata will keep dividing by 2 until either
it succeeds in allocating buckets, or reaches the minimum number of buckets
and fails.
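A hedged sketch of the corresponding mpipe section; load-balance,
iqueue-packets, buckets and min-buckets mirror the options described
above, but the exact key names and values shown are assumptions/examples:
mpipe:
  load-balance: sticky      # or: round-robin
  iqueue-packets: 2048      # number of packets in the input queue
  buckets: 4096             # rounded up to the next power of 2 if needed
  min-buckets: 256          # lower bound when bucket allocation fails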
emerging-virus.rules is not present anymore in the ET ruleset downloaded
by 'make install-rules'. This patch removes it from the list to avoid
an error message.
This patch adds support for checksum-checks in the pcap-file running
mode. This is the same functionality as the one already existing for
live interfaces.
It can be set up in the YAML:
pcap-file:
  checksum-checks: auto
A message is displayed for small pcaps to warn that the rate of invalid
checksums in the file is high and that checksum-checks could be set
to no.
The meta-field-option allows for setting the hard limit of request
and response fields in HTTP. In requests this applies to the request
line and headers, not the body. In responses, this applies to the
response line and headers, not the body.
Libhtp uses a default limit of 18k. If this limit is reached, an event
is raised.
Ticket 986.
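For illustration, a sketch of configuring such a limit; the
meta-field-limit key name and its placement under the libhtp
default-config are assumptions, and 18432 simply mirrors the 18k default
mentioned above:
libhtp:
  default-config:
    personality: IDS
    # hard limit for request/response lines and headers
    meta-field-limit: 18432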
Strip the 'proxy' parts from the normalized uri as inspected by http_uri,
urilen, pcre /U and others.
In a request line like:
GET http://suricata-ids.org/blah/ HTTP/1.1
the normalized URI will now be:
/blah/
This doesn't affect http_raw_uri. So matching the hostname, etc. is still
possible through this keyword.
Additionally, a new per HTTP 'personality' option was added to change
this behavior: "uri-include-all":
uri-include-all: <true|false>
Include all parts of the URI. By default the
'scheme', username/password, hostname and port
are excluded. Setting this option to true adds
all of them to the normalized uri as inspected
by http_uri, urilen, pcre with /U and the other
keywords that inspect the normalized uri.
Note that this does not affect http_raw_uri.
So adding uri-include-all:true to all personalities in the yaml will
restore the old default behavior.
Ticket 1008.
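As an illustration (the surrounding libhtp/default-config structure is
assumed here), restoring the old default behavior for a personality
would look like:
libhtp:
  default-config:
    personality: IDS
    # include scheme, user/pass, hostname and port in the normalized uri
    uri-include-all: true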
Take VLAN IDs into account when re-assembling fragments.
This prevents fragments that would otherwise match, but arrive on
different VLANs, from being reassembled with each other.
This patch updates the reject code to send the packet on the interface
it came from when 'host-mode' is set to 'sniffer-only'. When
'host-mode' is set to 'router', the reject packet is sent via
the routing interface.
This should fix #957.
This variable can be used to indicate to Suricata that the host it is
running on acts as a router or is in sniffing-only mode. It will be
used, at least, to determine which interfaces are used to send reject
messages.
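A minimal sketch of the setting; the 'auto' value shown is an assumption,
while 'router' and 'sniffer-only' come from the description above:
# Whether the host running Suricata acts as a router or only sniffs traffic.
host-mode: auto   # alternatives: router, sniffer-only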
The yaml conf introduces new parameters/sections for app layer
protocols:
1. Proto detection
2. Parsers
libhtp has now been moved to the section under app-layer.protocols.http,
but we still provide backward compatibility with older conf files.
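For illustration, a sketch of the reorganized layout; the keys shown
under http and libhtp are examples only:
app-layer:
  protocols:
    http:
      enabled: yes
      libhtp:
        default-config:
          personality: IDS
          request-body-limit: 3072
          response-body-limit: 3072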
The TILE-Gx processor includes a packet processing engine, called
mPIPE, that can deliver packets directly into user space memory. It
handles buffer allocation and load balancing (either static 5-tuple
hashing or dynamic flow affinity hashing is used here). The new
packet source code is in source-mpipe.c and source-mpipe.h
A new Tile runmode is added that configures the Suricata pipelines in
worker mode, where each thread does the entire packet processing
pipeline. It scales across all the Gx chip sizes of 9, 16, 36 or 72
cores. The new runmode is in runmode-tile.c and runmode-tile.h
The configure script detects the TILE-Gx architecture and defines
HAVE_MPIPE, which is then used to conditionally enable the code to
support mPIPE packet processing. Suricata runs on TILE-Gx even without
mPIPE support enabled.
The Suricata Packet structures are allocated by the mPIPE hardware by
allocating the Suricata Packet structure immediately before the mPIPE
packet buffer and then pushing the mPIPE packet buffer pointer onto
the mPIPE buffer stack. This way, mPIPE writes the packet data into
the buffer, returns the mPIPE packet buffer pointer, which is then
converted into a Suricata Packet pointer for processing inside
Suricata. When the Packet is freed, the buffer is returned to mPIPE's
buffer stack, by setting ReleasePacket to an mPIPE release specific
function.
The code checks for the largest Huge page available in Linux when
Suricata is started. TILE-Gx supports Huge pages sizes of 16MB, 64MB,
256MB, 1GB and 4GB. Suricata then divides one of those pages into
packet buffers for mPIPE.
The code is not yet optimized for high performance. Performance
improvements will follow shortly.
The code was originally written by Tom Decanio and then further
modified by Tilera.
This code has been tested with Tilera's Multicore Development
Environment (MDE) version 4.1.5 on the TILEncore-Gx36 (PCIe card) and
the TILEmpower-Gx (1U rack mount).
In some cases using the vlan id(s) in flow hashing is problematic. Cases
of broken routers have been reported. So this option allows for disabling
the use of vlan id(s) while calculating the flow hash and, in the
future, other hashes.
VLAN tracking for flows is enabled by default.
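A minimal sketch of the corresponding setting; the use-for-tracking key
name is an assumption based on the description above:
vlan:
  # set to false to ignore vlan id(s) when calculating the flow hash
  use-for-tracking: true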