doesn't need the gpu results and to release the packet for the next run.
Previously the inspection engine wouldn't inform the cuda module if it
didn't need the results. As a consequence, when the packet was next taken
for reuse while still being processed by the cuda module, the engine
would wait until the cuda module freed the packet.
This commit updates this functionality to inform the cuda module to
release the packet in the aforementioned case.
Add an option (disabled by default) to honor pass rules. This means that
when a pass rule matches in a flow, its packets are no longer stored
by the pcap-log module.
SigMatchGetLastSMFromLists() finds the sm with the largest index among
all the values returned by SigMatchGetLastSM() for the set of (list,
type) tuples passed as arguments.
The function used to create an array of the types, then an array of the
results of SigMatchGetLastSM(), sort that whole list, and return only
the first value from it.
The new code gets one pair of arguments at a time from the variable
arguments, calls SigMatchGetLastSM(), and if the returned sm has a
larger index, keeps it as the last sm.
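A minimal sketch of the new approach, under assumed types (a SigMatch
with an idx field) and the existing SigMatchGetLastSM() helper:
#include <stdarg.h>
#include <stddef.h>

typedef struct SigMatch_ {
    int idx;   /* position of the sm in its list */
    /* ... other members ... */
} SigMatch;

SigMatch *SigMatchGetLastSM(SigMatch *list, int type); /* existing helper */

SigMatch *SigMatchGetLastSMFromLists(int args, ...)
{
    SigMatch *last = NULL;
    va_list ap;

    va_start(ap, args);
    for (int i = 0; i < args; i += 2) {
        SigMatch *list = va_arg(ap, SigMatch *);
        int type = va_arg(ap, int);

        SigMatch *sm = SigMatchGetLastSM(list, type);
        /* keep the sm with the largest index seen so far */
        if (sm != NULL && (last == NULL || sm->idx > last->idx))
            last = sm;
    }
    va_end(ap);
    return last;
}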
Allow a script to set the 'stream' buffer type. This will add the
script to the PMATCH list.
Example script:
alert tcp any any -> any any (content:"html"; lua:stream.lua; sid:1;)
function init (args)
    local needs = {}
    needs["stream"] = tostring(true)
    return needs
end

-- return match via table
function match(args)
    local result = {}

    -- the stream data chunk and the offset it starts at
    local b = tostring(args["stream"])
    local o = tonumber(args["offset"])

    local bo = string.sub(b, o)
    print(bo)

    return result
end

return 0
To prevent accidental writes into the original packet buffer, use
const pointers for the extension header pointers used by IPv6. This
will cause compiler warnings in case of writes.
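A sketch of the technique with an illustrative header layout: decoding
through a const overlay makes any write into the packet buffer a
compile-time error:
#include <stdint.h>

typedef struct IPV6GenOptHdr_ {
    uint8_t next;   /* next header type */
    uint8_t len;    /* length in 8-octet units, excluding the first 8 */
} __attribute__((__packed__)) IPV6GenOptHdr;

void DecodeExtHdr(const uint8_t *pkt)
{
    const IPV6GenOptHdr *hdr = (const IPV6GenOptHdr *)pkt;
    /* hdr->len = 0;  <- rejected by the compiler through the const ptr */
    (void)hdr;
}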
A logic error in the IPv6 routing header parsing caused accidental
updating of the original packet buffer: the calculated extension
header length was written into the length field of the routing header,
making it wrong.
This has 2 consequences:
1. defrag failure. As the now modified payload was used in defrag,
the decoding of the reassembled packet now contained a broken length
field for the routing header. This would lead to decoding failure.
The potential here is evasion, although it would trigger:
[1:2200014:1] SURICATA IPv6 truncated extension header
2. in IPS mode, especially the AF_PACKET mode, the modified and now
broken packet would be transmitted on the wire. It's likely that
end hosts and/or routers would reject this packet.
NFQ based IPS mode would be less affected, as it 'verdicts' based on
the packet handle. In case of replacing the packet (replace keyword
or stream normalization) it could broadcast the bad packet.
Additionally, the RH Type 0 address parsing was also broken. It too
would modify the original packet. As the result of this code was not
used anywhere else in the engine, the code is now disabled.
Reported-By: Rafael Schaefer <rschaefer@ernw.de>
AF_PACKET was not setting the engine mode to IPS when some
interfaces were peered and using IPS mode. This was deliberate, as it
is possible to peer two interfaces and run an IPS on them while a
third one runs in normal IDS mode.
That choice turns out to be the wrong one: the unwanted side effects
are that there is no drop log and that stream inline is not used.
To fix this, this patch puts Suricata in IPS mode as soon as two
interfaces are in IPS mode, and displays an error message warning
the user that the accuracy of detection on IDS-only interfaces will
be low.
When using AMATCH, continue detection would fail if the tx part
had already run. This led to start detection rerunning, causing
multiple alerts for the same issue.
As there is no inspection engine for request_line, the sigmatch was
added to the AMATCH list. However, no AppLayerMatch function for
lua scripts was defined.
This patch defines an AppLayerMatch function.
Bug #1273.
I found three somewhat serious IPv6 address bugs within the Suricata 2.0.x source code. Two are in the source module "detect-engine-address.c", and the third is in "util-radix-tree.c".
The first bug occurs within the function DetectAddressParse2(). When parsing an address string and a negated block is encountered (such as when parsing !$HOME_NET, for example), any corresponding IPv6 addresses were not getting added to the Group Heads in the DetectAddressList. Only IPv4 addresses were being added.
I discovered another bug related to IPv6 address ranges in the Signature Match Address Array comparison code for IPv6 addresses. The function DetectAddressMatchIPv6() walks a signature's source or destination match address list comparing each to the current packet's corresponding address value. The match address list consists of value pairs representing a lower and upper IP address range. If the packet's address is within that range (including equal to either the lower or upper bound), then a signature match flag is returned.
The original test of each signature match address to the packet was performed using a set of four compounded AND comparisons looking at each of the four 32-bit blocks that comprise an IPv6 address. The problem with the old comparison is that if ANY of the four 32-bit blocks failed the test, then a "no-match" was returned. This is incorrect. If one or more of the more significant 32-bit blocks met the condition, then it is a match no matter if some of the less significant 32-bit blocks did not meet the condition. Consider this example where Packet represents the packet address being checked, and Target represents the upper bound of a match address pair. We are testing if Packet is less than Target.
Packet -- 2001:0470 : 1f07:00e2 : 1988:01f1 : d468:27ab
Target -- 2001:0470 : 1f07:00e2 : a48c:2e52 : d121:101e
In this example the Packet's address is less than the target and it should give a match. However, the old code would compare each 32-bit block (shown spaced out above for clarity) and logically AND the result with the next least significant block comparison. If any of the four blocks failed the comparison, that kicked out the whole address. The flaw is illustrated above. The first two blocks are 2001:0470 and 1f07:00e2 and yield TRUE; the next less significant block is 1988:01f1 and a48c:2e52, and also yields TRUE (that is, Packet is less than Target); but the last block compare is FALSE (d468:27ab is not less than d121:101e). That last block is the least significant block, though, so its FALSE determination should not invalidate a TRUE from any of the more significant blocks. However, in the previous code using the compound logical AND block, that last least significant block would invalidate the tests done with the more significant blocks.
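A self-contained sketch of a correct "less than" comparison, walking the
four blocks from most to least significant and letting the first unequal
block decide:
#include <stdint.h>

/* Return 1 if address a is less than address b, comparing four 32-bit
 * blocks in host byte order, most significant block first. */
static int IPv6AddrLt(const uint32_t a[4], const uint32_t b[4])
{
    for (int i = 0; i < 4; i++) {
        if (a[i] < b[i])
            return 1;   /* a more significant block decides: less */
        if (a[i] > b[i])
            return 0;   /* a more significant block decides: greater */
        /* equal: defer to the next, less significant block */
    }
    return 0;           /* all blocks equal: not strictly less */
}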
The other bug I found for IPv6 occurs when trying to parse and insert an IPv6 address into a Radix Tree using the function SCRadixAddKeyIPV6String(). The test for min and max values for an IPv6 CIDR mask incorrectly tests the upper limit as 32 when it should be 128 for an IPv6 address. I think this perhaps is an old copy-paste error if the IPv6 version of this function was initially copied from the corresponding IPv4 version directly above it in the code. Without this patch, the function will return null when you attempt to add an IPv6 network whose CIDR mask is larger than 32 (for example, the popular /64 mask will cause the function to return the NULL error condition).
(amended by Victor Julien)
When using the inspection engines, track the current tx_id in the
thread storage the detect thread uses. As 0 is a valid tx_id, add
a simple bool that indicates if the tx_id field is set.
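A sketch of the idea, with illustrative names:
#include <stdbool.h>
#include <stdint.h>

typedef struct DetectThreadCtxTx_ {
    uint64_t tx_id;   /* currently inspected transaction */
    bool tx_id_set;   /* true once tx_id holds a real value; 0 alone
                         cannot signal "unset" as it is a valid id */
} DetectThreadCtxTx;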
Allow use of the Flow Logging API through Lua scripts.
Minimal script:
function init (args)
    local needs = {}
    needs["type"] = "flow"
    return needs
end

function setup (args)
end

function log(args)
    startts = SCFlowTimeString()
    ipver, srcip, dstip, proto, sp, dp = SCFlowTuple()
    print ("Flow IPv" .. ipver .. " src " .. srcip .. " dst " .. dstip ..
           " proto " .. proto .. " sp " .. sp .. " dp " .. dp)
end

function deinit (args)
end
Add SCStreamingBuffer lua function to retrieve the data passed
to the script per streaming API invocation.
Example:
function log(args)
    data = SCStreamingBuffer()
    hex_dump(data)
end
Make normalized body data available to the script through
HttpGetRequestBody and HttpGetResponseBody.
There are no guarantees that all of the body will be available.
Example:
function log(args)
    a, o, e = HttpGetResponseBody()
    --print("offset " .. o .. " end " .. e)
    for n, v in ipairs(a) do
        print(v)
    end
end
Get the host from libhtp's tx->request_hostname, which can either be
the host portion of the url or the host portion of the Host header.
Example:
http_host = HttpGetRequestHost()
if http_host == nil then
    http_host = "<hostname unknown>"
end
SCFlowAppLayerProto: get alproto as string from the flow. If alproto
is not (yet) known, it returns "unknown".
function log(args)
    alproto = SCFlowAppLayerProto()
    if alproto ~= nil then
        print (alproto)
    end
end
A new callback to give access to thread id, name and group name:
SCThreadInfo. It gives: tid (integer), tname (string), tgroup (string)
function log(args)
    tid, tname, tgroup = SCThreadInfo()
end
SCFlowTimeString: returns string form of start time of a flow
Example:
function log(args)
    startts = SCFlowTimeString()
    ts = SCPacketTimeString()
    if ts == startts then
        print("new flow")
    end
end
Add SCPacketTimeString to get the packets time string in the format:
11/24/2009-18:57:25.179869
Example use:
function log(args)
    ts = SCPacketTimeString()
end
SCRuleIds(): returns sid, rev, gid:
function log(args)
    sid, rev, gid = SCRuleIds()
end
SCRuleMsg(): returns msg
function log(args)
    msg = SCRuleMsg()
end
SCRuleClass(): returns class msg and prio:
function log(args)
    class, prio = SCRuleClass()
    if class == nil then
        class = "unknown"
    end
end
Add flow store and retrieval wrappers for accessing the flow through
Lua's lightuserdata method.
The flow functions store/retrieve a lock hint as well.
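A sketch of what such wrappers could look like; the registry-key scheme
and names are illustrative, and the lock hint is left out for brevity:
#include <lua.h>

static const char flow_key = 'f';  /* its address is a unique registry key */

void LuaStateSetFlow(lua_State *luastate, void *flow)
{
    lua_pushlightuserdata(luastate, (void *)&flow_key);
    lua_pushlightuserdata(luastate, flow);
    lua_settable(luastate, LUA_REGISTRYINDEX);
}

void *LuaStateGetFlow(lua_State *luastate)
{
    lua_pushlightuserdata(luastate, (void *)&flow_key);
    lua_gettable(luastate, LUA_REGISTRYINDEX);
    void *flow = lua_touserdata(luastate, -1);
    lua_pop(luastate, 1);
    return flow;
}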
If the script needing a packet doesn't specify a filter, it will
be run against all packets. This patch adds the support for this
mode. It is a packet logger with a condition function that always
returns true.
Add a lua callback for getting Suricata's log path, so that lua scripts
can easily get the logging directory Suricata uses.
Update the Setup logic to register callbacks before the scripts 'setup'
is called.
Example:
name = "fast_lua.log"

function setup (args)
    filename = SCLogPath() .. "/" .. name
    file = assert(io.open(filename, "a"))
end
Add file logger support. The script uses:
function init (args)
    local needs = {}
    needs['type'] = 'file'
    return needs
end
The type is set to file to make it a file logger.
Add utility functions for placing things on the stack for use
by the scripts. Functions for numbers, strings and byte arrays.
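A sketch of such helpers under assumed names; lua_pushlstring copies
exactly len bytes, so byte arrays with embedded NULs survive intact:
#include <lua.h>
#include <stddef.h>
#include <stdint.h>

void LuaPushInteger(lua_State *luastate, lua_Integer n)
{
    lua_pushinteger(luastate, n);
}

void LuaPushString(lua_State *luastate, const char *str)
{
    lua_pushstring(luastate, str);
}

void LuaPushByteArray(lua_State *luastate, const uint8_t *buf, size_t len)
{
    lua_pushlstring(luastate, (const char *)buf, len);
}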
Add callback for returning IP header info: ip version, src ip,
dst ip, proto, sp, dp (or type and code for icmp and icmpv6):
SCPacketTuple
Through 'needs' the script init function can indicate it wants to
see packets and select a condition function. Currently only alerts
is an option:
function init (args)
    local needs = {}
    needs["type"] = "packet"
    needs["filter"] = "alerts"
    return needs
end
Add an argument to the registration to indicate which iterator
needs to be used: Stream or HttpBody
Add HttpBody Iterator, calling the logger(s) for each Http body chunk.
StreamIterator implementation for iterating over ACKed segments.
Flag each segment as logged when the log function has been called for it.
Set an 'OPEN' flag for the first segment in both directions.
Set a 'CLOSE' flag when the stream ends. If the last segment was already
logged, an empty CLOSE call is performed with NULL data.
This patch adds a new Log API for streaming data, such as TCP reassembled
data and HTTP body data. It could also replace the Filedata API.
Each time a new chunk of data is available, the callback will be called.
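As a sketch of the shape such an API could take (the signature and flag
names below are assumptions, not the actual interface):
#include <stdint.h>

typedef struct ThreadVars_ ThreadVars;  /* engine types, forward-declared */
typedef struct Flow_ Flow;

#define STREAMING_FLAG_OPEN  0x01  /* first chunk in a direction */
#define STREAMING_FLAG_CLOSE 0x02  /* stream ends; data may be NULL */

/* called once per newly available chunk of data */
typedef int (*StreamingLogger)(ThreadVars *tv, void *thread_data,
                               const Flow *f, const uint8_t *data,
                               uint32_t data_len, uint8_t flags);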
- Removed unnecessary assignment of the data field
- Removed else condition (same function called for IPv4 and IPV6)
- Fixed constants to be a power of two (used in bitwise operations)
The field ext_pkt was cleaned before calling the release function.
The result was that IPS modes such as AF_PACKET's were no longer
working, because they were not able to send the data initially
pointed to by ext_pkt.
This patch moves the ext_pkt cleaning to the cleaning macro. This
ensures that the cleaning is done for allocated and pool packets.
Call PACKET_RELEASE_REFS from PacketPoolGetPacket() so that
we only access the large packet structure just before actually
using it. Should give better cache behaviour.
The Source Routing Header had routing defined as a char* for a field
of variable size. Since that field was not being used in the code, I
removed the pointer and added a comment.
Structures that are used to cast packet data into fields need to be packed
so that the compiler doesn't add any padding to these fields. This also helps
Tile-Gx to avoid unaligned loads because the compiler will insert code to
handle the possible unaligned load.
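An illustrative example of both points: the struct is packed so it maps
exactly onto the wire bytes, and the variable-size routing data is left
as a comment instead of a char * field (a pointer has no place in a
wire-format overlay):
#include <stdint.h>

typedef struct IPV6SrcRouteHdr_ {
    uint8_t  ip6sr_nxt;      /* next header */
    uint8_t  ip6sr_len;      /* header length in 8-octet units */
    uint8_t  ip6sr_type;     /* routing type */
    uint8_t  ip6sr_segleft;  /* segments left */
    uint32_t ip6sr_reserved;
    /* followed by a variable number of 128-bit addresses */
} __attribute__((__packed__)) IPV6SrcRouteHdr;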
Reported in bug 1238 is an issue where stream reassembly can be
disrupted.
A packet that was in-window, but otherwise unexpected, set the
window to a really low value, causing the next *expected* packet
to be considered out of window. This led to missing data in the
stream reassembly.
The packet was unexpected in various ways:
- it would ack unseen traffic
- its sequence number would not match the expected next_seq
- set a really low window, while not being a proper window update
Detection, however, is greatly hampered by the fact that in case of
packet loss, quite similar packets come in. Alerting in this case
is unwanted; so is ignoring/skipping packets.
The logic used in this patch is as follows. If:
- the packet is not a window update AND
- packet seq > next_seq AND
- packet ack > next_seq (packet acks unseen data) AND
- packet shrinks window more than its own data size
THEN set event and skip the packet in the stream engine.
So in case of a segment with no data, any window shrinking is rejected.
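A self-contained sketch of that check; wraparound-safe sequence
comparisons are omitted for brevity and all names are illustrative:
#include <stdbool.h>
#include <stdint.h>

static bool StreamPktIsBadWindowShrink(bool window_update,
                                       uint32_t seq, uint32_t ack,
                                       uint32_t next_seq,
                                       uint32_t old_win, uint32_t new_win,
                                       uint32_t payload_len)
{
    if (window_update)
        return false;         /* proper window updates pass */
    if (seq <= next_seq)      /* not beyond the expected sequence */
        return false;
    if (ack <= next_seq)      /* does not ack unseen data */
        return false;
    /* window shrinks by more than the segment's own data size; for an
     * empty segment any shrinking at all is rejected */
    return old_win > new_win && (old_win - new_win) > payload_len;
}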
Bug #1238.
Until now the time out handling in defrag was done using a single
uint32_t that tracked seconds. This led to corner cases, where
defrag trackers could be timed out a little too early.
If a next header / protocol is encountered that we can't handle (yet)
set an event. Disabled the rule by default.
decode-event:ipv6.unknown_next_header;
Fix, or rather implement, handling of unfragmentable exthdrs in IPv6.
The exthdr(s) appearing before the frag header were copied into the
reassembled packet correctly; however, the stripping of the frag header
did not work correctly.
Example:
The common case is a frag header directly after the ipv6 header:
[ipv6 header]->[frag header]->[icmpv6 (part1)]
[ipv6 header]->[frag header]->[icmpv6 (part2)]
This would result in:
[ipv6 header]->[icmpv6]
The ipv6 headers 'next header' setting would be updated to point to
whatever the frag header was pointing to.
The same would also happen in this case:
[ipv6 header]->[hop header]->[frag header]->[icmpv6 (part1)]
[ipv6 header]->[hop header]->[frag header]->[icmpv6 (part2)]
The result would be:
[ipv6 header]->[hop header]->[icmpv6]
However, here too the ipv6 header would have been updated to point
to what the frag header pointed at. So it would consider the hop header
as if it was an ICMPv6 header, or whatever the frag header pointed at.
The result is that packets would not be correctly parsed, and thus this
issue can lead to evasion.
This patch implements handling of the unfragmentable part. In the first
segment that is stored in the list for reassembly, this patch detects
unfragmentable headers and updates it to have the last unfragmentable
header point to the layer after the frag header.
Also, the ipv6 headers 'next hdr' is only updated if no unfragmentable
headers are used. If they are used, the original value is correct.
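A compact sketch of the chain fix-up, with length validation omitted
(the fixed IPv6 header is 40 bytes with the next-header field at byte 6;
names are illustrative):
#include <netinet/in.h>   /* IPPROTO_FRAGMENT */
#include <stdint.h>

/* In the first stored fragment: find the next-header field that points
 * at the frag header and make it point past it instead. */
static void SpliceOutFragHeader(uint8_t *pkt)
{
    uint8_t *nh = &pkt[6];    /* fixed header's next-header field */
    uint8_t *hdr = pkt + 40;  /* first extension header */

    while (*nh != IPPROTO_FRAGMENT) {
        nh = &hdr[0];             /* this exthdr's next-header field */
        hdr += (hdr[1] + 1) * 8;  /* exthdr length is in 8-octet units */
    }
    *nh = hdr[0];  /* hdr is the frag header: splice it out of the chain */
}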
Reported-By: Rafael Schaefer <rschaefer@ernw.de>
Bug #1244.
Some tests are already registered via the function
AppLayerParserRegisterProtocolUnittests, so we don't need to
register them during runmode initialization.
Exponentially increase the memory allocated for new states when adding
new states, then at the end resize down to the actual final size so
that no space is wasted.
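A sketch of the pattern with illustrative names: grow geometrically
while adding, shrink to fit at the end:
#include <stdlib.h>

typedef struct StateTable_ {
    void   **states;
    size_t   count;      /* states in use */
    size_t   allocated;  /* states allocated */
} StateTable;

static int StateTableAdd(StateTable *t, void *state)
{
    if (t->count == t->allocated) {
        size_t newsize = t->allocated ? t->allocated * 2 : 256;
        void **tmp = realloc(t->states, newsize * sizeof(*tmp));
        if (tmp == NULL)
            return -1;
        t->states = tmp;
        t->allocated = newsize;
    }
    t->states[t->count++] = state;
    return 0;
}

static void StateTableShrinkToFit(StateTable *t)
{
    if (t->count == 0 || t->count == t->allocated)
        return;
    void **tmp = realloc(t->states, t->count * sizeof(*tmp));
    if (tmp != NULL) {   /* a shrinking realloc should not fail */
        t->states = tmp;
        t->allocated = t->count;
    }
}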
On non-TLS systems, check each time the Thread Local Storage is
requested and, if it has not been initialized for this thread,
initialize it.
This prevents the worker threads going uninitialized in the autofp
run mode.
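A sketch of this lazy fallback, assuming a pthread key created once at
startup; names are illustrative:
#include <pthread.h>
#include <stdlib.h>

typedef struct PktPool_ PktPool;
PktPool *PacketPoolCreate(void);   /* assumed allocator */

static pthread_key_t pool_key;     /* pthread_key_create()d at startup */

static PktPool *GetThreadPacketPool(void)
{
    PktPool *pool = pthread_getspecific(pool_key);
    if (pool == NULL) {
        /* first request from this thread: initialize its storage now */
        pool = PacketPoolCreate();
        pthread_setspecific(pool_key, pool);
    }
    return pool;
}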
Use new management API to run the flow recycler.
Make number of threads configurable:
flow:
  memcap: 64mb
  hash-size: 65536
  prealloc: 10000
  emergency-recovery: 30
  managers: 2
  recyclers: 2
This sets up 2 flow recyclers.
Use new management API to run the flow manager.
Support multiple flow managers, where each of them works with its
own part of the flow hash.
Make number of threads configurable:
flow:
  memcap: 64mb
  hash-size: 65536
  prealloc: 10000
  emergency-recovery: 30
  managers: 2
This sets up 2 flow managers.
Handle misc tasks only in instance 1: defrag hash timeout handling,
host hash timeout handling and flow spare queue updating are done
only by the first instance.
Currently management threads do their own thread setup and handling. This
patch introduces a new way of handling management threads.
Functionality that needs to run as a management thread can now register
itself as a regular 'thread module' (TmModule), where the 'Management'
callback is registered.
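A sketch of what a registration could look like; the module table and
names below are assumptions based on the description:
typedef struct TmModule_ {
    const char *name;
    void *(*Management)(void *arg);  /* management thread main loop */
} TmModule;

enum { TMM_FLOWRECYCLER, TMM_SIZE };
extern TmModule tmm_modules[TMM_SIZE];

void *FlowRecycler(void *arg);       /* the functionality to run */

void TmModuleFlowRecyclerRegister(void)
{
    tmm_modules[TMM_FLOWRECYCLER].name = "FlowRecycler";
    tmm_modules[TMM_FLOWRECYCLER].Management = FlowRecycler;
}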
The flow end flags field is filled by the flow manager or the flow
hash (in case of forced timeout of a flow) to record the timeout
conditions in the flow:
- emergency mode
- state
- reason (timed out or forced)
Add logging to the flow logger.
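A sketch of how those conditions could be encoded; the bit values are
illustrative:
#define FLOW_END_FLAG_EMERGENCY         0x01  /* emergency mode was on */
#define FLOW_END_FLAG_STATE_NEW         0x02  /* flow state at timeout */
#define FLOW_END_FLAG_STATE_ESTABLISHED 0x04
#define FLOW_END_FLAG_STATE_CLOSED      0x08
#define FLOW_END_FLAG_TIMEOUT           0x10  /* reason: timed out */
#define FLOW_END_FLAG_FORCED            0x20  /* reason: forced */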
Thread was killed by the generic TmThreadKillThreads instead of
the FlowKillFlowRecyclerThread. The latter wakes the thread up, so
that shutdown is quite a bit faster.
For netflow logging track TCP flags per stream direction. As the struct
had no more space left without expanding it, the flags and wscale
fields are now compressed.
Most flows are marked for clean up by the flow manager, which then
passes them to the recycler. The recycler logs and cleans up. However,
under resource stress conditions, the packet threads can recycle an
existing flow directly. Here the recycler has no role to play, as
the flow is reused immediately.
For this reason, the packet threads need to be able to invoke the
flow logger directly.
The flow logging thread ctx will be stored in the DecodeThreadVars
structure. Therefore, this patch makes the DecodeThreadVars an argument
to FlowHandlePacket.
Now that we use 'filetype' instead of 'type', we should also
use 'regular' instead of 'file'.
Added a fallback to make sure we stay compatible with old configs.
At -O1 and higher, both GCC and Clang optimized the wait loop in
PacketPoolWait the wrong way. Add a compiler barrier to prevent
this optimization issue.
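A sketch of such a barrier (GCC/Clang inline-asm syntax; the surrounding
names are illustrative):
/* empty asm with a memory clobber: the compiler must assume memory
 * changed, so it re-reads the pool state on every loop iteration */
#define cc_barrier() __asm__ __volatile__("" ::: "memory")

struct PktPool;
int PacketPoolIsEmpty(struct PktPool *pool);  /* assumed */

void PacketPoolWait(struct PktPool *pool)
{
    while (PacketPoolIsEmpty(pool))
        cc_barrier();
}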
Fix pcap packet acquisition methods passing 0 to pcap_dispatch.
Previously they passed the packet pool size, but the packet_q_len
variable had since been hardcoded to 0.
This patch sets packet_q_len to 64. If the packet pool is empty, we
fall back to direct allocation. As the pcap_dispatch function is only
called when the packet pool is not empty, we alloc at most 63 packets.
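For reference, a minimal sketch of the call; pcap_dispatch()'s second
argument caps how many packets one invocation may deliver to the
callback (names other than the libpcap API are illustrative):
#include <pcap.h>
#include <stddef.h>

#define PACKET_Q_LEN 64

static void PcapCallback(u_char *user, const struct pcap_pkthdr *h,
                         const u_char *pkt)
{
    /* hand the packet to the engine; alloc directly if the pool is empty */
}

int PcapReadPackets(pcap_t *handle)
{
    /* processes at most PACKET_Q_LEN packets per call */
    return pcap_dispatch(handle, PACKET_Q_LEN, PcapCallback, NULL);
}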
Better handle the autofp case where one thread allocates the majority
of the packets and other threads free those packets.
Add a list of locally pending packets. The first packet freed goes on the
pending list, then subsequent freed packets for the same Packet Pool are
added to this list until it hits a fixed number of packets, then the
entire list of packets is pushed onto the pool's return stack. If a freed
packet is not for the pending pool, it is freed immediately to its pool's
return stack, as before.
For the autofp case, since there is only one Packet Pool doing all the
allocation, every other thread will keep a list of pending packets for
that pool.
For the worker run mode, most packets are allocated and freed locally. For
the case where packets are being returned to a remote pool, a pending list
will be kept for one of those other threads, all others are returned as before.
The remote pool for which a pending list is kept changes each time the
pending list is returned. Since the pending pool is cleared when the
list is returned, the next packet to be freed chooses the new pending
pool.
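A simplified sketch of this batching; the structures and helpers below
are assumptions, not the actual code:
#define MAX_PENDING_RETURN_PACKETS 32

typedef struct Packet_ {
    struct Packet_ *next;
    struct PktPool_ *pool;          /* the packet's home pool */
} Packet;

typedef struct PktPool_ {
    struct PktPool_ *pending_pool;  /* remote pool we batch for */
    Packet *pending_head;
    int pending_count;
} PktPool;

void PacketPoolPushBatch(PktPool *pool, Packet *head);  /* locked, assumed */
void PacketPoolPushSingle(PktPool *pool, Packet *p);    /* locked, assumed */
PktPool *GetThreadPacketPool(void);

void PacketPoolReturnPacket(Packet *p)
{
    PktPool *my_pool = GetThreadPacketPool();
    PktPool *home = p->pool;

    if (my_pool->pending_pool == NULL)
        my_pool->pending_pool = home;   /* first free picks the pool */

    if (my_pool->pending_pool == home) {
        /* add to the local pending list: no lock taken */
        p->next = my_pool->pending_head;
        my_pool->pending_head = p;
        if (++my_pool->pending_count >= MAX_PENDING_RETURN_PACKETS) {
            /* one locked push returns the whole batch */
            PacketPoolPushBatch(home, my_pool->pending_head);
            my_pool->pending_head = NULL;
            my_pool->pending_count = 0;
            my_pool->pending_pool = NULL;  /* next free picks anew */
        }
    } else {
        PacketPoolPushSingle(home, p);  /* not the pending pool: as before */
    }
}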
Using a stack for free Packet storage causes recently freed Packets to be
reused quickly, while there is more likelihood of the data still being in
cache.
The new structure has a per-thread private stack for allocating Packets
which does not need any locking. Since Packets can be freed by any thread,
there is a second stack (return stack) for freeing packets by other threads.
The return stack is protected by a mutex. Packets are moved from the return
stack to the private stack when the private stack is empty.
Returning packets back to their "home" stack keeps the stacks from getting out
of balance.
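A simplified sketch of the two-stack design; the layout and names are
assumptions:
#include <pthread.h>
#include <stddef.h>

typedef struct Packet_ {
    struct Packet_ *next;
} Packet;

typedef struct PktPool_ {
    Packet *head;                  /* private stack: owner thread, no lock */
    pthread_mutex_t return_mutex;
    Packet *return_head;           /* return stack: other threads, locked */
} PktPool;

Packet *PacketPoolGetPacket(PktPool *pool)
{
    if (pool->head == NULL) {
        /* private stack empty: take over the whole return stack at once */
        pthread_mutex_lock(&pool->return_mutex);
        pool->head = pool->return_head;
        pool->return_head = NULL;
        pthread_mutex_unlock(&pool->return_mutex);
    }
    Packet *p = pool->head;
    if (p != NULL)
        pool->head = p->next;
    return p;   /* NULL: pool empty, caller allocates */
}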
The PacketPoolInit() function is now called by each thread that will be
allocating packets. Each thread allocates max_pending_packets, which is a
change from before, where that was the total number of packets across all
threads.
Whenever DETECT_CONTENT_NOCASE is set for a BoyerMoore matcher, the
function BoyerMooreCtxToNocase() must be called. This call was missing
in AppLayerProtoDetectPMRegisterPattern().
Also created BoyerMooreNocaseCtxInit() that calls BoyerMooreCtxToNocase()
to make some code cleaner and safer.
Rather than converting the search string to lower case while searching,
convert it to lowercase during initialization.
Change the Boyer Moore search API to take a BmCtx
Change the API for BoyerMoore to take a BmCtx rather than the two parts
that are stored in the context, which is how it is mostly used. This
enforces always calling BoyerMooreCtxToNocase() to convert to no-case.
Use CtxInit and CtxDeinit functions to create and destroy the context,
even in unit tests.
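A usage sketch based on the description above; the exact signatures are
assumptions:
#include <stdint.h>

typedef struct BmCtx_ BmCtx;

BmCtx *BoyerMooreCtxInit(uint8_t *needle, uint16_t len);
BmCtx *BoyerMooreNocaseCtxInit(uint8_t *needle, uint16_t len);
void BoyerMooreCtxToNocase(BmCtx *ctx, uint8_t *needle, uint16_t len);
void BoyerMooreCtxDestroy(BmCtx *ctx);

void Example(uint8_t *pattern, uint16_t plen)
{
    /* case sensitive: init alone is enough */
    BmCtx *cs = BoyerMooreCtxInit(pattern, plen);

    /* no-case: the conversion must always follow init... */
    BmCtx *nc1 = BoyerMooreCtxInit(pattern, plen);
    BoyerMooreCtxToNocase(nc1, pattern, plen);

    /* ...or use the combined helper, which is harder to get wrong */
    BmCtx *nc2 = BoyerMooreNocaseCtxInit(pattern, plen);

    BoyerMooreCtxDestroy(cs);
    BoyerMooreCtxDestroy(nc1);
    BoyerMooreCtxDestroy(nc2);
}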
EVE logging is a really good substitute for pcapinfo. Suriwire now
supports EVE output, so it is no longer necessary to have pcapinfo
in Suricata.
When using multi mode, the filename can use a few variables:
%n -- thread number, where the 1st thread has 1, and it increments
%i -- thread id (system thread id, similar to pid)
%t -- timestamp, where seconds or seconds+usecs depends on
the ts-format option.
Example:
filename: pcaps/%n/pcap.%t
This will translate to: pcaps/3/pcap.1256792217 for the 3rd thread.
Note that while it's possible to use directories, they won't be
created. So make sure they exist.
This patch adds a field 'is_private' to PcapLogData, so that the
using thread knows if it needs to lock access to it or not.
Reshuffle PcapLogData to roughly match order of access.
This patch implements a new mode in pcap-logging: 'multi'. It stores
a pcap file per logger thread, instead of just one file globally.
This removes lock contention, so it brings a lot more performance.
The trade off is that there are now multiple files where there would
be one before.
Files have a thread id added to their name: base_name.tid.ts, so
we get something like: "log.pcap.20057.1254500095".
PcapLog uses the global data structure PcapLogData as thread data
as well. This is possible because all operations on it are locked.
This patch introduces PcapLogThreadData. It contains a pointer to
the PcapLogData. Currently to the global instance, but in the future
it may hold a thread-local instance of PcapLogData.
Add profiling to a logfile. Default is $log_dir/pcaplog_stats.log
The counters for open, close, rotate, write and handles are written
to it, as well as:
- total bytes written
- cost per MiB
- cost per GiB
Option is disabled by default.
Tracks: file open, file close, file rotate (which includes open and
close), file write and open handles.
Open handles measures the cost of opening the libpcap handles.
When the config is missing, DefragPolicyGetHostTimeout will default
to returning -1. This will effectively set no timeout at all, leading
to defrag trackers being freed too early.
This patch fixes an issue in the defragmentation code. The
insertion of a fragment into the list of fragments is done with
respect to the offset of the fragment. But the code was using
the original offset of the fragment and not that of the new
reconstructed fragment (which can differ in the case of an
overlapping segment where the left part is trimmed).
This case could enable evasion techniques by causing
Suricata to analyse a different payload.
This patch fixes the following issue reported by valgrind:
31 errors in context 1 of 1:
Conditional jump or move depends on uninitialised value(s)
at 0x8AB2F8: UnixSocketPcapFilesCheck (runmode-unix-socket.c:279)
by 0x97725D: UnixCommandBackgroundTasks (unix-manager.c:368)
by 0x97BC52: UnixManagerThread (unix-manager.c:884)
by 0x6155F6D: start_thread (pthread_create.c:311)
by 0x6E3A9CC: clone (clone.S:113)
The running field in PcapCommand was not initialized.
This patch fixes an issue in unix socket handling. It is possible
that a socket disconnects while a command is being analysed; because
the data treatment is done in a loop over the clients, this led to
an update of the list of clients during the loop. We therefore need
to use TAILQ_FOREACH_SAFE instead of TAILQ_FOREACH.
Reported-by: Luigi Sandon <luigi.sandon@gmail.com>
Fix-suggested-by: Luigi Sandon <luigi.sandon@gmail.com>
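A sketch of the safe iteration; TAILQ_FOREACH_SAFE (a BSD sys/queue.h
extension) keeps a lookahead pointer so the current element can be
removed mid-walk. The client names are illustrative:
#include <sys/queue.h>
#include <stdlib.h>

typedef struct UnixClient_ {
    TAILQ_ENTRY(UnixClient_) next;
} UnixClient;

TAILQ_HEAD(ClientList, UnixClient_);

int UnixClientHandle(UnixClient *c);   /* returns nonzero on disconnect */

void HandleClients(struct ClientList *clients)
{
    UnixClient *client, *tclient;
    TAILQ_FOREACH_SAFE(client, clients, next, tclient) {
        if (UnixClientHandle(client)) {
            /* safe: the loop already holds the next element */
            TAILQ_REMOVE(clients, client, next);
            free(client);
        }
    }
}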
It is possible to have a non-contiguous CPU set, which was not being
handled correctly on the TILE architecture.
Added a "rank" field in the ThreadVar to store the worker's rank separately
from the cpu for this case.
When applying wildcard thresholds (with sid = 0 and/or gid = 0) it's
wrong to exit on the first signature already having an event filter.
Indeed, doing so results in the threshold not being applied to all
subsequent signatures. Change the code to skip signatures with event
filters instead of breaking out of the loop.
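A sketch of the change with illustrative names: skip instead of break:
typedef struct Signature_ {
    struct Signature_ *next;
    void *event_filter;   /* set when the sig already has a filter */
} Signature;

void ApplyThreshold(Signature *s);   /* assumed */

void ApplyWildcardThreshold(Signature *sig_list)
{
    for (Signature *s = sig_list; s != NULL; s = s->next) {
        if (s->event_filter != NULL)
            continue;   /* was: break, which skipped all later sigs */
        ApplyThreshold(s);
    }
}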
If a live reload signal was given before the engine was fully started
up (e.g. pcap file thread waiting for a disk to spin up), a segv could
occur.
This patch only enables live reloads after the threads have been
started up completely.