Instead of the notion of toserver and toclient protocol detection, use
destination port and source port.
Independent of the data direction, the flow's port settings are used
to find the correct probing parser: the destination port is tried first,
and if that fails, the source port.
Update the configuration file format, where toserver is replaced by 'dp'
and toclient by 'sp'. Toserver is interpreted as 'dp' and toclient as
'sp' for backwards compatibility.
Example for dns:

  dns:
    # memcaps. Globally and per flow/state.
    #global-memcap: 16mb
    #state-memcap: 512kb

    # How many unreplied DNS requests are considered a flood.
    # If the limit is reached, app-layer-event:dns.flooded; will match.
    #request-flood: 500

    tcp:
      enabled: yes
      detection-ports:
        dp: 53
    udp:
      enabled: yes
      detection-ports:
        dp: 53
Like before, progress of protocol detection is tracked per flow direction.
Bug #1142.
Add option "profiling.sample-rate":
  # Run profiling for every xth packet. The default is 1, which means we
  # profile every packet. If set to 1000, one packet is profiled for every
  # 1000 received.
  #sample-rate: 1000
This allows for configuration of the sample rate.
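As a sketch, the setting sits directly under the profiling section in
suricata.yaml:

  profiling:
    # profile one packet out of every 1000
    sample-rate: 1000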
This patch introduces wrapper functions around allocation functions
to be able to have a global HTP memcap. A simple substitution of the
functions was not enough because the allocated size needs to be known
when freeing and reallocating.
The value of the memcap can be set in the YAML and defaults to
unlimited (0) to avoid any surprise to users.
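A rough sketch of the YAML side (the exact placement under the http
protocol section is an assumption here; per the text above the default
is 0, i.e. unlimited):

  app-layer:
    protocols:
      http:
        enabled: yes
        # 0 (the default) means no limit
        memcap: 64mb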
The app-layer API has been reworked. The main files in question are
app-layer.[ch], app-layer-detect-proto.[ch] and app-layer-parser.[ch].
Things addressed in this commit:
- Brings out a proper separation between the protocol detection phase and
  the parser phase.
- The dns app layer is now registered such that we don't use "dnstcp" and
  "dnsudp" in the rules. A user who previously wrote a rule like
    "alert dnstcp ....." or
    "alert dnsudp ....."
  would now have to use
    alert dns (ipproto:tcp;) or
    alert udp (app-layer-protocol:dns;) or
    alert ip (ipproto:udp; app-layer-protocol:dns;)
  The same change extends to the other such protocol, dcerpc.
- The app layer parser api now takes in the ipproto while registering
callbacks.
- The app inspection/detection engine also takes an ipproto.
- All app layer parser functions now take the direction as STREAM_TOSERVER
  or STREAM_TOCLIENT, as opposed to the 0 or 1 that some of the functions
  used to take.
- FlowInitialize() and FlowRecycle() now reset proto to 0. This is
  needed by unittests, which clean the flow and thereby call
  AppLayerParserCleanupParserState() to clean the app state. The app
  layer now needs an ipproto to figure out which API to call internally
  to clean the state; if the ipproto is 0, it returns without trying to
  clean the state.
- A lot of unittests have been updated so that, if they use a flow and
  need the app layer, the flow's ipproto is set.
- The "app-layer" section in the yaml conf has been updated as well.
Add two new mPIPE load-balancing configuration options in suricata.yaml.
1) "sticky" which keep sending flows to one CPU, but if that queue is full,
don't drop the packet, move the flow to the least loaded queue.
2) Round-robin, which always picks the least full input queue for each
packet.
Allow configuring the number of packets in the input queue (iqueue) in
suricata.yaml.
For the mpipe.buckets configuration, which must be a power of 2, round
up to the next power of two rather than reporting an error.
Added mpipe.min-buckets, which defaults to 256, so if the requested number
of buckets can't be allocated, Suricata will keep dividing by 2 until either
it succeeds in allocating buckets, or reaches the minimum number of buckets
and fails.
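A possible mpipe section combining these options (key names are taken
from the text above and should be treated as an illustration, not the
authoritative schema):

  mpipe:
    # "static", "dynamic", "sticky" or "round-robin"
    load-balance: sticky
    # number of packets in the input queue
    iqueue-packets: 2048
    # rounded up to the next power of two if needed
    buckets: 4096
    min-buckets: 256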
emerging-virus.rules is no longer present in the ET ruleset downloaded
by 'make install-rules'. This patch removes it from the list to avoid
an error message.
This patch adds support for checksum-checks in the pcap-file running
mode. This is the same functionality as the one already existing for
live interfaces.
It can be set up in the YAML:

  pcap-file:
    checksum-checks: auto
A message is displayed for small pcaps to warn that the rate of invalid
checksums in the pcap file is high and that checksum-checks could be
set to no.
The meta-field option allows for setting a hard limit on the size of
request and response fields in HTTP. In requests this applies to the
request line and headers, not the body. In responses, this applies to
the response line and headers, not the body.
Libhtp uses a default limit of 18k. If this is reached an event is
raised.
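For example, assuming the option lives in the libhtp default-config
(placement and the exact byte value are illustrative):

  libhtp:
    default-config:
      # hard limit for request/response line and headers (~18k)
      meta-field-limit: 18432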
Ticket 986.
Strip the 'proxy' parts from the normalized uri as inspected by http_uri,
urilen, pcre /U and others.
In a request line like:

  GET http://suricata-ids.org/blah/ HTTP/1.1

the normalized URI will now be:

  /blah/
This doesn't affect http_raw_uri, so matching the hostname, etc. is still
possible through that keyword.
Additionally, a new per HTTP 'personality' option was added to change
this behavior: "uri-include-all":
  uri-include-all: <true|false>

Include all parts of the URI. By default the 'scheme', username/password,
hostname and port are excluded. Setting this option to true adds all of
them to the normalized uri as inspected by http_uri, urilen, pcre with /U
and the other keywords that inspect the normalized uri. Note that this
does not affect http_raw_uri.
So adding uri-include-all:true to all personalities in the yaml will
restore the old default behavior.
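A sketch of enabling it for the default personality (placement under the
libhtp default-config is assumed):

  libhtp:
    default-config:
      personality: IDS
      # keep scheme, username/password, hostname and port in the
      # normalized uri, i.e. the old behavior
      uri-include-all: true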
Ticket 1008.
Take VLAN IDs into account when re-assembling fragments.
Prevents fragments that would otherwise match, but are on different
VLANs, from being reassembled with each other.
This patch updates the reject code to send the packet on the interface
it came from when 'host-mode' is set to 'sniffer-only'. When
'host-mode' is set to 'router', the reject packet is sent via
the routing interface.
This should fix #957.
This variable can be used to indicate to Suricata that the host it runs
on acts as a router or is in sniffing-only mode.
It will be used, at least, to determine which interfaces are used to
send reject messages.
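For example, in suricata.yaml:

  # "router", "sniffer-only" or "auto"
  host-mode: auto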
Update the YAML configuration to the new app-layer layout, covering:
1. Proto detection
2. Parsers
for app layer protocols.
libhtp has now been moved to the section under app-layer.protocols.http,
but we still provide backward compatibility with older conf files.
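Roughly, the new layout for http looks like this (trimmed):

  app-layer:
    protocols:
      http:
        enabled: yes
        libhtp:
          default-config:
            personality: IDS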
The TILE-Gx processor includes a packet processing engine, called
mPIPE, that can deliver packets directly into user space memory. It
handles buffer allocation and load balancing (either static 5-tuple
hashing or dynamic flow-affinity hashing is used here). The new
packet source code is in source-mpipe.c and source-mpipe.h
A new Tile runmode is added that configures the Suricata pipelines in
worker mode, where each thread does the entire packet processing
pipeline. It scales across all the Gx chip sizes of 9, 16, 36 or 72
cores. The new runmode is in runmode-tile.c and runmode-tile.h
The configure script detects the TILE-Gx architecture and defines
HAVE_MPIPE, which is then used to conditionally enable the code to
support mPIPE packet processing. Suricata runs on TILE-Gx even without
mPIPE support enabled.
The Suricata Packet structures are allocated by the mPIPE hardware by
allocating the Suricata Packet structure immediately before the mPIPE
packet buffer and then pushing the mPIPE packet buffer pointer onto
the mPIPE buffer stack. This way, mPIPE writes the packet data into
the buffer, returns the mPIPE packet buffer pointer, which is then
converted into a Suricata Packet pointer for processing inside
Suricata. When the Packet is freed, the buffer is returned to mPIPE's
buffer stack, by setting ReleasePacket to an mPIPE release specific
function.
The code checks for the largest Huge page available in Linux when
Suricata is started. TILE-Gx supports Huge pages sizes of 16MB, 64MB,
256MB, 1GB and 4GB. Suricata then divides one of those pages into
packet buffers for mPIPE.
The code is not yet optimized for high performance. Performance
improvements will follow shortly.
The code was originally written by Tom Decanio and then further
modified by Tilera.
This code has been tested with Tilera's Multicore Development
Environment (MDE) version 4.1.5 on the TILEncore-Gx36 (PCIe card) and
the TILEmpower-Gx (1U rack mount).
In some cases using the vlan id(s) in flow hashing is problematic. Cases
of broken routers have been reported. So this option allows for disabling
the use of vlan id(s) while calculating the flow hash, and in the future
other hashes.
VLAN tracking for flows is enabled by default.
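Example of turning it off (a sketch, assuming a top-level vlan section):

  vlan:
    # set to false on setups with broken routers reusing vlan ids
    use-for-tracking: false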
Use per-thread pools to store and retrieve SSNs from, using the
PoolThread API.
Remove the max-sessions setting. Pools are set to unlimited, but the TCP
memcap limits the number of sessions.
The prealloc-sessions setting now applies to each thread, so the default
was lowered from 32k to 2k.
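Sketch of the per-thread setting in the stream section (value per the
new default described above):

  stream:
    # now applies per thread
    prealloc-sessions: 2048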
Normally, there is one verdict per packet, i.e., we receive a packet,
process it, and then tell the kernel what to do with that packet (e.g.
DROP or ACCEPT).
  recv(), packet id x
  send verdict v, packet id x
  recv(), packet id x+1
  send verdict v, packet id x+1
  [..]
  recv(), packet id x+n
  send verdict v, packet id x+n
An alternative is to process several packets from the queue, and then send
a batch-verdict.
  recv(), packet id x
  recv(), packet id x+1
  [..]
  recv(), packet id x+n
  send batch verdict v, packet id x+n
A batch verdict affects all previous packets (packet_id <= x+n), so we
only need to remember the last packet_id seen.
Caveats:
- can't modify payload
- verdict is applied to all packets
- nfmark (if set) will be set for all packets
- increases latency (packets remain queued by the kernel
until batch verdict is sent).
To solve this, we only defer the verdict for up to 20 packets and
send the pending batch-verdict immediately if:
- no packets are currently queued
- the current packet should be dropped
- the current packet has a different nfmark
- the payload of the packet was modified
This patch adds configurable batch verdict support for the workers
runmode. Batch verdicts are turned off by default.
The problem is that batch verdicts only work with kernels >= 3.1, i.e.
using a newer libnetfilter_queue with an old kernel means a non-working
Suricata. So the functionality has to be disabled by default.
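A sketch of enabling it in the nfq section (the key name is an
assumption based on this description):

  nfq:
    mode: accept
    # defer verdicts and send them in batches of up to 20 packets
    batchcount: 20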
By randomizing the chunk size around the chosen value, it is possible
to defeat some evasion techniques that rely on knowing the chunk size
to split the attack at the correct place.
This patch activates randomization by default and sets the random
interval to the chunk size value +/- 10%.
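Example stream reassembly settings (a sketch; the chunk size values
shown are just illustrative):

  stream:
    reassembly:
      toserver-chunk-size: 2560
      toclient-chunk-size: 2560
      # on by default, varies the chunk size by +/- 10%
      randomize-chunk-size: yes
      #randomize-chunk-range: 10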
Until now, when processing the TCP 3-way handshake (3whs), retransmissions
of SYN/ACKs were silently accepted, unless they were different somehow. If
the SEQ or ACK values were different, they were considered wrong and events
were set. The stream event rules will match on this.
In some cases, this is wrong. If the client missed the SYN/ACK, the server
may send a different one with a different SEQ. This commit deals with this.
As it is impossible to predict which one the client will accept, each is
added to a list. Then on receiving the final ACK from the 3whs, the list
is checked and the state is updated according to the queued SYN/ACK.
Added a napatech section in the yaml configuration.
hba - host buffer allowance
use-all-streams - whether all streams should be used
streams - list of stream numbers to use when use-all-streams is no
The source-napatech.* files were modified to support the host buffer
allowance configuration. The runmode-napatech.c file was modified to
support both the host buffer allowance and the stream configuration.
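Sketch of the resulting section (stream numbers are just an example):

  napatech:
    # host buffer allowance (-1 disables it)
    hba: -1
    use-all-streams: yes
    # used when use-all-streams is set to no
    streams: [1, 2, 3]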
Signed-off-by: Matt Keeler <mk@npulsetech.com>
This patch introduces 'snaplen', a new YAML variable in the pcap section.
It can be set per interface to force the pcap capture snaplen. If not set,
it defaults to the interface MTU if the MTU can be determined via an ioctl
call, and to full capture otherwise.
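For example (interface name is just a placeholder):

  pcap:
    - interface: eth0
      # capture at most 1518 bytes per packet
      snaplen: 1518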