Error handling was not done before. The implementation now makes the
choice to destroy the socket in case of a parsing error. The same is
done for tpacket_v2.
Remove unused fields from AFPThreadVars. This patch also gets rid
of the bytes counter, as it was only used to display a message at exit.
The information on the livedev and the packet counters is enough.
This patch adds a basic implementation of AF_PACKET tpacket v3. It
is basic in the sense that it only works in 'workers' running mode.
If not in 'workers' mode, there is a fallback to tpacket_v2. The
feature is activated via the tpacket-v3 option in the af-packet
section of the Suricata YAML.
This will handle minimal DetectEngineCtx structures (used in delayed
detect mode) safely, since they don't get SPM global contexts allocated.
Also added BUG_ON checks for valid spm_table entries.
Make the file storage use the streaming buffer API.
As the individual file chunks were not needed by themselves, this
approach uses a chunkless implementation.
Enforce the inspect window also in IDS mode. Always try to get at least
'inspect win' worth of data. In case there is more new data, take
some of the old data as well to make sure there is always some overlap.
This unifies IDS and IPS modes, the only difference left is the start
of inspection. IDS waits until min_size is available, IPS starts right
away.
Convert HTTP body handling to use the Streaming Buffer API. This means
the HtpBodyChunks no longer maintain their own data segments, but
instead add their data to the StreamingBuffer instance in the HtpBody
structure.
In case the HtpBodyChunk needs to access its data, it can still do so
through the Streaming Buffer API.
Updates & simplifies the various users of the reassembled bodies:
multipart parsing and the detection engine.
Add a new API to store data from streaming sources, like HTTP body
processing or TCP data.
Currently most of the code uses a pattern of list of data chunks
(e.g. TcpSegment) that is reassembled into a large buffer on-demand.
The Streaming Buffer API changes the logic to store the data in
reassembled form from the start, with the segments/chunks pointing
to the reassembled data.
The main buffer storing the data slides forward, automatically or
manually. The *NoTrack calls allow for a segmentless mode of
operation.
This approach has two main advantages:
1. accessing the reassembled data is virtually cost-free
2. reduction of allocations and memory management
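To illustrate the pattern (a hedged sketch with made-up names, not the actual StreamingBuffer code), a segment can be reduced to an offset/length pair into a single reassembled buffer:

    #include <stdint.h>
    #include <stdlib.h>
    #include <string.h>

    /* hypothetical sketch: segments are just views into one reassembled buffer */
    typedef struct Segment_ {
        uint64_t offset;
        uint32_t len;
    } Segment;

    typedef struct Buffer_ {
        uint8_t *data;
        uint64_t size;      /* allocated */
        uint64_t used;      /* bytes stored */
    } Buffer;

    /* append new data; the returned Segment only points into the buffer */
    static int BufferAppend(Buffer *b, Segment *seg, const uint8_t *data, uint32_t len)
    {
        if (b->used + len > b->size) {
            uint64_t newsize = b->size ? b->size * 2 : 4096;
            while (newsize < b->used + len)
                newsize *= 2;
            uint8_t *tmp = realloc(b->data, newsize);
            if (tmp == NULL)
                return -1;
            b->data = tmp;
            b->size = newsize;
        }
        memcpy(b->data + b->used, data, len);
        seg->offset = b->used;
        seg->len = len;
        b->used += len;
        return 0;
    }

    /* accessing the reassembled data is just pointer arithmetic */
    static const uint8_t *SegmentData(const Buffer *b, const Segment *seg)
    {
        return b->data + seg->offset;
    }

Reading a segment back is then just pointer arithmetic into the main buffer, which is what makes access virtually cost-free.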
Now that the flow lookup is done in the worker threads the flow
queue handlers running after the capture thread(s) no longer have
access to the flow. This limits the options of how flow balancing
can be done.
This patch removes all code that is now useless. The only 2 methods
that still make sense are 'hash' and 'ippair'.
Initial version of the 'FlowWorker' thread module. This module
combines Flow handling, TCP handling, App layer handling and
Detection in a single module. It does all flow related processing
under a single flow lock.
To simplify locking, move all locking out of the individual detect
code. Instead at the start of detection lock the flow, and at the
end of detection unlock it.
The lua code can be called without a lock still (from the output
code paths), so still pass around a lock hint to take care of this.
When we run on live traffic, time handling is simple. Packets have a
timestamp set by the capture method. Management threads can simply
use 'gettimeofday' to know the current time. There should never be
any serious gap between the two or major differences between the
threads.
In offline mode, things are dramatically different. Here we try to keep
the time from the pcap, which means that if the packets are recorded in
2011 the log output should also reflect this. Multiple issues:
1. merged pcaps might have huge time jumps or time going backward
2. slowly recorded pcaps may be processed much faster than their
'realtime'
3. management threads need a concept of what the 'current' time is for
enforcing timeouts
4. due to (1) individual threads may have very different views on what
the current time is. E.g. T1 processed packet 1 with TS X, while T2
at the very same time processes packet 2 with TS X+100000s.
The changes in flow handling make the problems worse. The capture thread
no longer handles the flow lookup, while it did set the global 'time'.
This meant that a thread may be working on Packet 1 with TS 1, while the
capture thread already saw packet 2 with TS 10000. Management threads
would take TS 10000 as the 'current time', considering a flow created by
the first thread as timed out immediately.
This was less of a problem before the flow changes as the capture thread
would also create a flow reference for a packet, meaning the flow
couldn't time out as easily. Packets in the queues between capture
thread and workers would all hold such references.
The patch updates the time handling to be as follows.
In offline mode we keep the timestamp per thread. If a management thread
needs current time, it will get the minimum of the threads' values. This
is to avoid the problem that T2's time value might already trigger a
flow timeout, as flow lastts + 100000s almost certainly means the
flow would be considered timed out.
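A minimal sketch of the 'minimum of the per-thread timestamps' idea (names and structures here are hypothetical, not the actual implementation):

    #include <sys/time.h>

    #define MAX_THREADS 64

    /* hypothetical per-thread view of the 'current' time in offline mode */
    static struct timeval thread_time[MAX_THREADS];
    static int thread_count;

    static int TimevalEarlier(const struct timeval *a, const struct timeval *b)
    {
        return (a->tv_sec < b->tv_sec) ||
               (a->tv_sec == b->tv_sec && a->tv_usec < b->tv_usec);
    }

    /* management threads take the minimum, so a fast thread cannot
     * prematurely time out flows created by a slower thread */
    static void OfflineGetMinTime(struct timeval *ts)
    {
        *ts = thread_time[0];
        for (int i = 1; i < thread_count; i++) {
            if (TimevalEarlier(&thread_time[i], ts))
                *ts = thread_time[i];
        }
    }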
Instead of handling the packet update during flow lookup, handle
it in the stream/detect threads. This lowers the load of the
capture thread(s) in autofp mode.
The decoders now set a flag in the packet if the packet needs a
flow lookup. Then the workers will take care of this. The decoders
also already calculate the raw flow hash value. This is so that
this value can be used in flow balancing in autofp.
Because the flow lookup/creation is now done in the worker threads,
the flow balancing can no longer use the flow. It's not yet
available. Autofp load balancing uses raw hash values instead.
Along the same lines, move UDP AppLayer handling out of the DecodeUDP
module and also into the stream/detect threads.
Handle TCP session reuse inside the flow engine itself. If a looked up
flow matches the packet, but is a TCP stream starter, check if the
ssn needs to be reused. If that is the case handle it within the
lookup function. This simplifies the locking and removes potential race
conditions.
Update Flow lookup functions to get a flow reference during lookup.
This reference is set under the FlowBucket lock.
This paves the way to not getting a flow lock during lookups.
Match on server name indication (SNI) extension in TLS using tls_sni
keyword, e.g:
alert tls any any -> any any (msg:"SNI test"; tls_sni;
content:"example.com"; sid:12345;)
This new API allows for different SPM implementations, using a function
pointer table like that used for MPM.
This change also switches over the paths that make use of
DetectContentData (which previously used BoyerMoore directly) to the new
API.
Bug #1771.
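For illustration, a function pointer table for single-pattern matchers can look roughly like this (a hedged sketch with hypothetical names, mirroring the MPM-style table the message refers to):

    #include <stdint.h>

    /* hypothetical sketch of a single-pattern-matcher vtable, mirroring the
     * function pointer table approach already used for MPM */
    typedef struct SinglePatternMatcher_ {
        const char *name;
        void *(*InitCtx)(const uint8_t *pattern, uint16_t pattern_len, int nocase);
        void  (*DestroyCtx)(void *ctx);
        const uint8_t *(*Scan)(const void *ctx,
                               const uint8_t *haystack, uint32_t haystack_len);
    } SinglePatternMatcher;

    /* implementations (e.g. Boyer-Moore, Hyperscan) register themselves here */
    static SinglePatternMatcher spm_impls[8];
    static int spm_impl_cnt;

    static void SpmRegister(const SinglePatternMatcher *m)
    {
        if (spm_impl_cnt < 8)
            spm_impls[spm_impl_cnt++] = *m;
    }

Callers that previously invoked Boyer-Moore directly then go through such a table, so alternative matchers can be plugged in.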
Direct leak of 1834 byte(s) in 1 object(s) allocated from:
#0 0x4e2e65 in realloc ??:?
#1 0xcec27b in LogTlsLogPem /home/mats/suricata/src/log-tlsstore.c:130
#2 0xcea4f5 in LogTlsStoreLogger /home/mats/suricata/src/log-tlsstore.c:303
#3 0xd8b99c in OutputPacketLog /home/mats/suricata/src/output-packet.c:104
Sometimes we want to log when we reach a specified state instead of
waiting for the session to end. E.g. for TLS we want to log as soon
as the handshake is done.
To do this, a new logger is added, where it is possible to specify
a custom "ProgressCompletionStatus".
Add function AppLayerParserRegisterLoggerFuncs for registering
a callback function for checking if a specific logger has logged
a transaction, and a callback function for specifying that it has.
Also add functions AppLayerParserGetTxLogged and
AppLayerParserSetTxLogged to invoke these callback functions.
Change AppLayerParserRegisterGetStateProgressCompletionStatus to
only store one ProgressCompletionStatus callback function for each
alproto, instead of storing one for each ipproto.
This enables us to use AppLayerParserGetStateProgressCompletionStatus
in functions where we do not know the ipproto used.
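Conceptually this means the completion-status callback is indexed by alproto only. A hedged sketch with hypothetical names:

    #include <stddef.h>
    #include <stdint.h>

    #define ALPROTO_MAX 32

    /* hypothetical: one completion-status callback per app-layer protocol,
     * no longer one per (ipproto, alproto) pair */
    typedef int (*ProgressCompletionStatusFunc)(uint8_t direction);

    static ProgressCompletionStatusFunc completion_status[ALPROTO_MAX];

    static void RegisterCompletionStatus(uint16_t alproto,
                                         ProgressCompletionStatusFunc fn)
    {
        if (alproto < ALPROTO_MAX)
            completion_status[alproto] = fn;
    }

    /* callable without knowing the ipproto */
    static int GetCompletionStatus(uint16_t alproto, uint8_t direction)
    {
        if (alproto >= ALPROTO_MAX || completion_status[alproto] == NULL)
            return -1;
        return completion_status[alproto](direction);
    }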
In case of a Mask Write Register or Write Single Register request with
no data (malformed packet), app-layer-modbus checked the response
content (data) against request content that was never stored. That
caused a segmentation fault.
Before accessing the request content, app-layer-modbus now checks
whether the content has previously been stored. Four unittests have
been added: two to test the handling of Mask Write Register and Write
Single Register requests, and two to check invalid Mask Write Register
and Write Single Register requests.
For af-packet, pf-ring, netmap, and pcap use a generic
lookup function to find the configuration node for an
interface.
The new lookup function does not depend on the ordering
of the items inside the device configuration.
The NSS library was not deinitialized at exit, resulting in a memory
leak. As it is useless for a config test, the patch updates the code
so it is not initialized in that case.
The patch also calls MagicDeinit to free memory used by libmagic.
[src/detect-engine-loader.c:272]: (error) Buffer is accessed out of bounds.
[src/flow-manager.c:742]: (error) Buffer is accessed out of bounds.
[src/flow-manager.c:906]: (error) Buffer is accessed out of bounds.
Update FlowManager/Recycler to use global name.
Also add # into thread number.
Update af-packet to use global threadnames.
Update pcap to use global threadnames.
Update pfring to use global threadnames.
Update erf-dag to use global threadnames.
Update nflog to use global threadnames.
Update netmap to use global threadnames.
Update napatech to use global threadnames.
- Changed FlowManagerThread to FM-
- Changed FlowRecyclerThread to FR-
- Changed use of strcasecmp to strncasecmp. This was used in the
killing and disabling of FM/FR Threads.
The shortening of the interface names is now dependent on the
size of the destination buffer, so that this can easily be
changed in the future. The process uses snprintf and strlcat.
Also changed the buffer sizes in util-runmodes to 12
so that they can hold 11 chars + a null terminator.
Changed strcpy and strncpy to strlcat and strlcpy. Also added
checks to see in advance whether the shortening would work or fail.
Fixed code in util-device and util-runmodes.
Added function LiveSafeDeviceName in util-device that shortens a
NIC device name if the name is over a given length, turning it
into e.g. longi...eeth1
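A hedged sketch of shortening logic along these lines (not the exact LiveSafeDeviceName code; the helper and input name below are illustrative):

    #include <stdio.h>
    #include <string.h>

    /* illustrative: shorten a device name so it fits a destination buffer of
     * a given size, keeping a head and a tail joined by "..." */
    static void ShortenDeviceName(const char *dev, char *out, size_t outsize)
    {
        size_t devlen = strlen(dev);
        if (devlen + 1 <= outsize || outsize < 8) {
            snprintf(out, outsize, "%s", dev);   /* fits, or too small to split */
            return;
        }
        size_t keep = (outsize - 1 - 3) / 2;     /* chars kept at the front */
        size_t tail = outsize - 1 - 3 - keep;    /* remainder goes to the tail */
        snprintf(out, keep + 1, "%s", dev);
        strncat(out, "...", outsize - strlen(out) - 1);
        strncat(out, dev + devlen - tail, outsize - strlen(out) - 1);
    }

With a 14-byte destination buffer this turns a name like 'longinterfacenameeth1' into 'longi...eeth1', matching the example above.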
When multiple files were in a tx, the first one(s) closed/complete
and a new open one as well, a match in the former could lead to not
inspecting the latter.
This patch adds a workaround for this case, by allowing the file
inspection code to return a special code for 'match, but more files
available in tx'.
The stateful detection engine will then not make this match final for
the tx. It relies on the file pruning to kick in to make sure the
already complete files are removed from the tx before the next time
the detection engine is called on the tx.
The stateful detection engine needs some assistance when inspecting
transactions with multiple files. This patch flags the detect state
(if any) about the availability of new files in http. For http it
should only apply to multipart bodies although the flag is set for
all files.
The stateful detection engine needs some assistance when inspecting
transactions with multiple files. This patch flags the detect state
(if any) about the availability of new files in smtp.
When no rules with 'file content' keywords like filemd5 or filestore
were used, and none of the file outputs would force output (like
'force-md5' and 'force-magic'), the file would not be tracked at all.
This meant that logging wouldn't work and neither would filename and
fileext inspection.
This patch removes the tracking bypass from the SMTP code and leaves
decisions to the file API.
AFL+ASAN found that with certain input we used an uninitialized byte
in the length calculation. Probably harmless as the length was still
validated afterwards.
Add support for AFL PERSISTANT_MODE when Suricata is compiled with
a supported compiler (only afl-clang-fast for now).
This gives a ~10x performance boost when fuzzing.
This patch introduces a new set of commandline options meant for
assisting in fuzz testing the app layer implementations.
Per protocol, 2 commandline options are added:
--afl-http-request=<filename>
--afl-http=<filename>
In the former case, the contents of the file are passed directly to
the HTTP parser as request data.
In the latter case, the data is divided between requests and responses.
First 64 bytes are request, then next 64 are response, next 64 are
request, etc, etc.
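A hedged sketch of that alternating 64-byte split (the parser callbacks here are stand-ins for the real request/response entry points):

    #include <stddef.h>
    #include <stdint.h>

    /* hypothetical stand-ins for the real request/response parser entry points */
    static void ParseRequest(const uint8_t *data, size_t len)  { (void)data; (void)len; }
    static void ParseResponse(const uint8_t *data, size_t len) { (void)data; (void)len; }

    /* feed the fuzz input alternately to request and response, 64 bytes at a time */
    static void FeedAlternating(const uint8_t *buf, size_t len)
    {
        int request = 1;
        size_t offset = 0;
        while (offset < len) {
            size_t chunk = (len - offset > 64) ? 64 : (len - offset);
            if (request)
                ParseRequest(buf + offset, chunk);
            else
                ParseResponse(buf + offset, chunk);
            request = !request;
            offset += chunk;
        }
    }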
When fuzzing, AFL will create lots of malformed rules. We don't want
to error out on those. As we're fuzzing the parser any non-crash
should return 0. Crashes (ASAN or not) will return a non-0 code.
Add regex setup and free util functions. Keywords often use a regex
to parse rule input. Introduce a common function to do this setup.
Also create a list of registered regexes to free at engine shutdown.
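A hedged sketch of the pattern, using POSIX <regex.h> as a stand-in for the PCRE calls the keywords actually use: setup compiles and registers the regex, shutdown walks the list and frees everything:

    #include <regex.h>
    #include <stdlib.h>

    typedef struct RegexNode_ {
        regex_t regex;
        struct RegexNode_ *next;
    } RegexNode;

    static RegexNode *regex_list;

    /* compile the keyword's parse regex and remember it for engine shutdown */
    static regex_t *KeywordRegexSetup(const char *pattern)
    {
        RegexNode *n = calloc(1, sizeof(*n));
        if (n == NULL || regcomp(&n->regex, pattern, REG_EXTENDED) != 0) {
            free(n);
            return NULL;
        }
        n->next = regex_list;
        regex_list = n;
        return &n->regex;
    }

    /* called once at engine shutdown */
    static void KeywordRegexFreeAll(void)
    {
        while (regex_list != NULL) {
            RegexNode *n = regex_list;
            regex_list = n->next;
            regfree(&n->regex);
            free(n);
        }
    }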
Unit testing support macros for failing on expressions,
as well as passing tests on expressions.
If fatal unittests are enabled BUG_ON will be triggered for
an assertion providing the line number of the failure, otherwise
the test will simply fail.
Moved the fatal flag to a global var instead of a configuration
parameter for ease of access from a macro.
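A hedged approximation of what such macros can look like when used inside test functions that return an int (the real macros differ in details; the fatal flag and return conventions below are illustrative):

    #include <stdio.h>
    #include <stdlib.h>

    /* global flag instead of a configuration parameter */
    static int unittests_fatal = 0;

    /* fail the test if the expression is true; abort with the line number
     * when fatal unittests are enabled */
    #define FAIL_IF(expr) do {                                            \
            if (expr) {                                                   \
                if (unittests_fatal) {                                    \
                    fprintf(stderr, "assertion failed at line %d\n", __LINE__); \
                    abort();                                              \
                }                                                         \
                return 0;                                                 \
            }                                                             \
        } while (0)

    /* pass the test if the expression is true */
    #define PASS_IF(expr) do {                                            \
            if (expr)                                                     \
                return 1;                                                 \
        } while (0)

    #define PASS return 1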
Until now, the TCP options would all be stored in the Packet structure.
The commonly used ones (wscale, ts, sack, sackok and mss*) then had a
pointer to the position in the option array. Overall this option array
was large: about 360 bytes on 64-bit systems. Since no part of the engine
would ever access this array other than through the common shortcuts,
this was actually just wasteful.
This patch changes the approach. It stores just the common ones in the
packet. The rest is gone. This shrinks the packet structure by almost
300 bytes.
* even though mss wasn't actually used
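An illustrative before/after sketch (field names are made up):

    #include <stdint.h>

    /* before: Packet held a large array of generic option slots plus
     *         pointers into that array for the handful of options used.
     * after:  Packet holds only small copies of the common options. */
    typedef struct TcpCommonOpts_ {
        uint16_t mss;        /* kept although not actually used yet */
        uint8_t  wscale;
        uint8_t  sack_ok;
        uint32_t ts_val;
        uint32_t ts_ecr;
    } TcpCommonOpts;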
*** CID 1358124: Null pointer dereferences (REVERSE_INULL)
/src/detect-engine-mpm.c: 940 in MpmStoreSetup()
934 PopulateMpmHelperAddPatternToPktCtx(ms->mpm_ctx,
935 cd, s, 0, (cd->flags & DETECT_CONTENT_FAST_PATTERN_CHOP));
936 }
937 }
938 }
939
>>> CID 1358124: Null pointer dereferences (REVERSE_INULL)
>>> Null-checking "ms->mpm_ctx" suggests that it may be null, but it has already been dereferenced on all paths leading to the check.
940 if (ms->mpm_ctx != NULL) {
941 if (ms->mpm_ctx->pattern_cnt == 0) {
942 MpmFactoryReClaimMpmCtx(de_ctx, ms->mpm_ctx);
943 ms->mpm_ctx = NULL;
944 } else {
945 if (ms->sgh_mpm_context == MPM_CTX_FACTORY_UNIQUE_CONTEXT) {
detect.c:3801:13: warning: Value stored to 'tmplist2_tail' is never read
tmplist2_tail = joingr;
^ ~~~~~~
detect.c:3804:13: warning: Value stored to 'tmplist2_tail' is never read
tmplist2_tail = joingr;
^ ~~~~~~
2 warnings generated.
Make the rule grouping dump to rule_group.json configurable.
detect:
  profiling:
    grouping:
      dump-to-disk: false
      include-rules: false       # very verbose
      include-mpm-stats: false
Make the port grouping whitelisting configurable. A whitelisted port
ends up in its own port group.
detect:
  grouping:
    tcp-whitelist: 80, 443
    udp-whitelist: 53, 5060
No port ranges are allowed at this point.
Instead of the old detect-engine setting, which used a list for no good
reason, use a simple map now.
detect:
  profile: medium
  custom-values:
    toclient-groups: 3
    toserver-groups: 25
  sgh-mpm-context: auto
  inspection-recursion-limit: 3000
  # If set to yes, the loading of signatures will be made after the capture
  # is started. This will limit the downtime in IPS mode.
  #delayed-detect: yes
Create a hash table of unique DetectPort objects before trying to
create a unique list of these objects. This saves a lot of cycles
in the creation of the list.
SGH's for tcp and udp are now always only per proto and per direction.
This means we can simply reuse the packet and stream mpm pointers.
The SGH's for the other protocols already used a directionless catch
all mpm pointer.
For all mpm wrapper functions, check minlen vs the input buffer to see
if we can bypass the mpm search.
Next to this, make all the functions inline. Also constify the input and
do other minor cleanups.
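A hedged sketch of the wrapper check (types and names are illustrative): if the buffer can't possibly contain the shortest pattern, the search is skipped entirely:

    #include <stdint.h>

    /* illustrative context: minlen is the length of the shortest pattern */
    typedef struct PatternMatcherCtx_ {
        uint16_t minlen;
        /* ... matcher state ... */
    } PatternMatcherCtx;

    static inline uint32_t PatternSearch(const PatternMatcherCtx *ctx,
                                         const uint8_t *buf, uint32_t buflen)
    {
        (void)buf;
        if (buflen < ctx->minlen)
            return 0;               /* bypass: buffer too short to match */
        /* ... run the actual mpm search on buf/buflen ... */
        return 0;
    }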
Dump a json record containing all sigs that need to be inspected after
prefilter. Part of profiling. Only dump if threshold is met, which is
currently set by:
--set detect.profiling.inspect-logging-threshold=200
A file called packet_inspected_rules.json is created in the default
log dir.
Per rule group tracking of checks, use of lists, mpm matches,
post filter counts.
Logs SGH id so it can be compared with the rule_group.json output.
Implemented both in a human readable text format and a JSON format.
Turn list of mpm_ctx pointers into a union so that we don't waste
space. The sgh's for tcp and udp are in one direction only, so the
ts and tc ones are now in the union.
So far, the patterns as passed to the mpm's would use global id's that
were shared among all buffers and directions. This would lead to a fairly
large pattern id space. As the mpm algo's use the pattern id's to
prevent duplicate matching through a pattern id based bitarray,
shrinking this space will optimize performance.
This patch implements this. It sets a flag before adding the pattern
to the mpm ctx, instructing the mpm to ignore the provided pid and
handle pid management itself. This leads to a shrinking of the
bitarray size.
This is made possible by the previous work that removes the pid logic
from the code.
Next to this, this patch moves the pattern setup stage to common util
functions. This avoids code duplication.
Update ac, ac-bs and ac-ks to use this.
The idea is: if mpm is negated, it's both on mpm and nonmpm sid lists
and we can kick it out in that case during the merge sort.
It only works for patterns that are 'independent'. This means the
rule must not depend on the negated mpm pattern being limited to,
for example, the first 10 bytes.
Or more generally, a negated mpm pattern that has depth, offset,
distance or within settings can't be handled this way. These patterns
are not added to the mpm at all, but just to the non-mpm list. This
makes sense as they will *always* need manual inspection.
Similarly, a pattern that is 'chopped' always needs validation. This
is because in this case we only inspect a part of the final pattern.
Instead of the binary yes/no whitelisting used so far, use different
values for different sorts of whitelist reasons. The port list will
be sorted by whitelist value first, then by rule count.
The goal is to whitelist groups that have weak sigs:
- 1 byte pattern groups
- SYN sigs
Rules that check for SYN packets are mostly scan detection rules.
They will be checked often as SYN packets are very common.
e.g. alert tcp any any -> any 22 (flags:S,12; sid:123;)
This patch adds whitelisting for SYN-sigs, so that these sigs end up
in groups that are as unique as possible.
- negated mpm sigs
Currently negated mpm sigs are inspected often, so they are quite
expensive. For this reason, try to whitelist them.
These values are set during 'stage 1', rule preprocessing.
Since SYN inspecting rules are expensive, this patch splits the
'non-mpm' list (i.e. the rules that are always considered) into
a 'syn' and 'non-syn' list. The SYN list is only inspected if the
packet has the SYN flag set, otherwise the non-syn list is used.
The syn-list contains _all_ rules. The non-syn list contains all
minus the rules requiring the SYN bit in a packet.
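A hedged sketch of the list selection (illustrative types, not the actual signature group head code):

    #include <stdbool.h>
    #include <stdint.h>

    /* illustrative group head: the syn list holds all rules, the non-syn
     * list excludes the rules that require the SYN bit */
    typedef struct SigGroup_ {
        const uint32_t *syn_list;      uint32_t syn_cnt;
        const uint32_t *non_syn_list;  uint32_t non_syn_cnt;
    } SigGroup;

    static void SelectNonMpmList(const SigGroup *sgh, bool pkt_has_syn,
                                 const uint32_t **list, uint32_t *cnt)
    {
        if (pkt_has_syn) {
            *list = sgh->syn_list;
            *cnt = sgh->syn_cnt;
        } else {
            *list = sgh->non_syn_list;
            *cnt = sgh->non_syn_cnt;
        }
    }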
Update port grouping logic. Previously it would create one consistent
list w/o overlap. It largely still does this, except for the 'catch
all' port group at the end of the list. This port group contains all
the sigs that didn't fit into the other groups.
Replace tree based approach for rule grouping with a per port (tcp/udp)
and per protocol approach.
Grouping now looks like:
+----+
|icmp+--->
+----+
|gre +--->
+----+
|esp +--->
+----+
other|... |
+----->-----+
| |N +--->
| +----+
|
| tcp +----+ +----+
+----->+ 80 +-->+ 139+-->
| +----+ +----+
|
| udp +----+ +----+
+---+----->+ 53 +-->+ 135+-->
| +----+ +----+
|toserver
+--->
|toclient
|
+--->
So the first 'split' in the rules is the direction: toserver or toclient.
Rules that don't have a direction, are in both branches.
Then the split is between tcp/udp and the other protocols. For tcp and
udp port lists are used. For the other protocols, grouping is simply per
protocol.
The ports used are the destination ports for toserver sigs and source
ports for toclient sigs.
Introduce 'ac-ks' or the Kenneth Steele AC implementation. It's
actually 'ac-tile' written by Ken for the Tilera platform. This
patch adds support for it on other architectures as well.
Enable ac-tile for other archs as 'ac-ks'.
Fix a bunch of OOB reads in the loops that triggered ASAN.
*** CID 1358023: Null pointer dereferences (REVERSE_INULL)
/src/util-mpm-hs.c: 860 in SCHSDestroyThreadCtx()
854 if (thr_ctx->scratch != NULL) {
855 hs_free_scratch(thr_ctx->scratch);
856 mpm_thread_ctx->memory_cnt--;
857 mpm_thread_ctx->memory_size -= thr_ctx->scratch_size;
858 }
859
>>> CID 1358023: Null pointer dereferences (REVERSE_INULL)
>>> Null-checking "mpm_thread_ctx->ctx" suggests that it may be null, but it has already been dereferenced on all paths leading to the check.
860 if (mpm_thread_ctx->ctx != NULL) {
861 SCFree(mpm_thread_ctx->ctx);
862 mpm_thread_ctx->ctx = NULL;
863 mpm_thread_ctx->memory_cnt--;
864 mpm_thread_ctx->memory_size -= sizeof(SCHSThreadCtx);
865 }
Direct leak of 80 byte(s) in 5 object(s) allocated from:
#0 0x4c673b in __interceptor_malloc (/home/victor/dev/suricata/src/suricata+0x4c673b)
#1 0xb7a425 in DetectEngineSignatureIsDuplicate /home/victor/dev/suricata/src/detect-parse.c:1715:10
#2 0xb79390 in DetectEngineAppendSig /home/victor/dev/suricata/src/detect-parse.c:1836:19
#3 0x86fe56 in DetectLoadSigFile /home/victor/dev/suricata/src/detect.c:357:15
#4 0x815fee in ProcessSigFiles /home/victor/dev/suricata/src/detect.c:419:13
#5 0x8139a8 in SigLoadSignatures /home/victor/dev/suricata/src/detect.c:499:15
#6 0xfe435d in LoadSignatures /home/victor/dev/suricata/src/suricata.c:1979:9
#7 0xfcd87e in main /home/victor/dev/suricata/src/suricata.c:2345:17
#8 0x7fb66bf7cec4 in __libc_start_main /build/eglibc-3GlaMS/eglibc-2.19/csu/libc-start.c:287
By default, hashlittle() will read off the end of the key, up to the
next four-byte boundary, although the data beyond the end of the key
doesn't affect the hash. This read causes uninitialized read warnings
from Valgrind and Address Sanitizer.
Here we add hashlittle_safe(), which avoids reading off the end of the
buffer (using the code inside the VALGRIND-guarded block in the original
hashlittle() implementation).
Capture methods that are non-blocking will still not generate packets
that go through the system if there is no traffic. Some maintenance
tasks, like rule reloads, rely on packets to complete.
This patch introduces a new thread flag, THV_CAPTURE_INJECT_PKT, that
instructs the capture thread to create a fake packet.
The capture implementations can call the TmThreadsCaptureInjectPacket
utility function either with the packet they already got from the pool
or without a packet. In this case the util func will get its own
packet.
Implementations for pcap, AF_PACKET and PF_RING.
Split the wait loop into three steps:
- first, insert pseudo packets
- second, nudge all capture threads to break out of their loop
- third, wait for the detection thread contexts to be used
Interrupt capture more than once if needed.
Move packet injection into a util func.
Simplify handling of the USR2 signal. The SCLogInfo usage could lead to
deadlocks, as the SCLog API can do many complicated things including
memory allocations, syslog calls and libjansson message construction.
If an existing malloc call was interrupted, it could lead to the
following deadlock:
0 __lll_lock_wait_private () at ../nptl/sysdeps/unix/sysv/linux/x86_64/lowlevellock.S:97
1 0x0000003140c7d2df in _L_lock_10176 () from /lib64/libc.so.6
2 0x0000003140c7ab83 in __libc_malloc (bytes=211543457408) at malloc.c:3655
3 0x0000003140c80ec2 in __strdup (s=0x259ca40 "[%i] %t - (%f:%l) <%d> (%n) -- ") at strdup.c:43
4 0x000000000059dd4a in SCLogMessageGetBuffer (tval=0x7fff52b47360, color=1, type=SC_LOG_OP_TYPE_REGULAR, buffer=0x7fff52b47370 "", buffer_size=2048,
log_format=0x259ca40 "[%i] %t - (%f:%l) <%d> (%n) -- ", log_level=SC_LOG_INFO, file=0x63dd00 "suricata.c", line=287, function=0x640f50 "SignalHandlerSigusr2StartingUp", error_code=SC_OK,
message=0x7fff52b47bb0 "Live rule reload only possible after engine completely started.") at util-debug.c:307
5 0x000000000059e940 in SCLogMessage (log_level=SC_LOG_INFO, file=0x63dd00 "suricata.c", line=287, function=0x640f50 "SignalHandlerSigusr2StartingUp", error_code=SC_OK,
message=0x7fff52b47bb0 "Live rule reload only possible after engine completely started.") at util-debug.c:549
6 0x000000000057e374 in SignalHandlerSigusr2StartingUp (sig=12) at suricata.c:287
7 <signal handler called>
8 _int_malloc (av=0x3140f8fe80, bytes=<value optimized out>) at malloc.c:4751
9 0x0000003140c7ab1c in __libc_malloc (bytes=296) at malloc.c:3657
10 0x0000000000504d55 in FlowAlloc () at flow-util.c:60
11 0x00000000004fd909 in FlowInitConfig (quiet=0 '\000') at flow.c:454
12 0x0000000000584c8e in main (argc=6, argv=0x7fff52b4a3b8) at suricata.c:2300
This patch simply sets a variable and lets the main loop act on that.
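The pattern the patch moves to, sketched (illustrative names): the handler only sets a flag, and the main loop does the logging and the actual work.

    #include <signal.h>

    /* async-signal-safe: no allocation, no logging in the handler */
    static volatile sig_atomic_t sigusr2_received = 0;

    static void SignalHandlerSigusr2(int sig)
    {
        (void)sig;
        sigusr2_received = 1;
    }

    /* in the main loop:
     *   if (sigusr2_received) {
     *       sigusr2_received = 0;
     *       ... log and trigger the rule reload ...
     *   }
     */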
If a TCP packet could not get a flow (flow engine out of flows/memory)
and there were *only* TCP inspecting rules with the direction
explicitly set to 'to_server', a NULL pointer deref could happen.
PacketPatternSearchWithStreamCtx would fall through to the 'to_client'
case which was not initialized.
Add "ippair" autofp scheduler to split traffic based on source and
destination IP only (not ports).
- This is useful when using the "xbits" feature to track events
that occur between the same hosts but not necessarily the same
flow (such as exploit kit landings/exploits/payloads)
- The disadvantage is that traffic may be balanced very unevenly
between threads if some host pairs are much more frequently seen
than others, so it may be only practicable for sandbox or pcap
analysis
- not tested for IPv6
See https://redmine.openinfosecfoundation.org/issues/1661
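A simplified, IPv4-only illustration of the idea (the real hash differs; this only shows that ports are excluded and the result is direction-independent):

    #include <stdint.h>

    /* hash only the addresses, not the ports, so both directions of any
     * flow between two hosts map to the same thread */
    static uint32_t IPPairHash(uint32_t src, uint32_t dst, uint32_t nthreads)
    {
        uint32_t lo = src < dst ? src : dst;   /* order-independent */
        uint32_t hi = src < dst ? dst : src;
        uint32_t h = lo * 31 + hi;
        return h % nthreads;
    }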
This possibly fixes:
Indirect leak of 64 byte(s) in 1 object(s) allocated from:
#0 0x4c264b in malloc (/home/victor/qa/buildbot/donkey/z600fuzz/Private/src/.libs/lt-suricata+0x4c264b)
#1 0x7fb09c1e8aaa in json_array (/usr/lib/x86_64-linux-gnu/libjansson.so.4+0x6aaa)
#2 0xd67553 in JsonEmailLogJsonData /home/victor/qa/buildbot/donkey/z600fuzz/Private/src/output-json-email-common.c:290:27
#3 0xd6a272 in JsonEmailLogJson /home/victor/qa/buildbot/donkey/z600fuzz/Private/src/output-json-email-common.c:370:19
#4 0xd956b9 in JsonSmtpLogger /home/victor/qa/buildbot/donkey/z600fuzz/Private/src/output-json-smtp.c:103:9
#5 0xdcedac in OutputTxLog /home/victor/qa/buildbot/donkey/z600fuzz/Private/src/output-tx.c:165:17
#6 0xff6669 in TmThreadsSlotVarRun /home/victor/qa/buildbot/donkey/z600fuzz/Private/src/tm-threads.c:132:17
#7 0xffecc1 in TmThreadsSlotVar /home/victor/qa/buildbot/donkey/z600fuzz/Private/src/tm-threads.c:474:17
#8 0x7fb09bfcc181 in start_thread /build/eglibc-3GlaMS/eglibc-2.19/nptl/pthread_create.c:312
In the JsonEmailLogJsonData function, an invalid state led to an
early exit without properly freeing resources.
This should fix:
Indirect leak of 72 byte(s) in 1 object(s) allocated from:
#0 0x4c264b in malloc (/home/victor/qa/buildbot/donkey/z600fuzz/Private/src/.libs/lt-suricata+0x4c264b)
#1 0x7fb09c1e886a in json_object (/usr/lib/x86_64-linux-gnu/libjansson.so.4+0x686a)
#2 0xd6a272 in JsonEmailLogJson /home/victor/qa/buildbot/donkey/z600fuzz/Private/src/output-json-email-common.c:370:19
#3 0xd956b9 in JsonSmtpLogger /home/victor/qa/buildbot/donkey/z600fuzz/Private/src/output-json-smtp.c:103:9
#4 0xdcedac in OutputTxLog /home/victor/qa/buildbot/donkey/z600fuzz/Private/src/output-tx.c:165:17
#5 0xff6669 in TmThreadsSlotVarRun /home/victor/qa/buildbot/donkey/z600fuzz/Private/src/tm-threads.c:132:17
#6 0xffecc1 in TmThreadsSlotVar /home/victor/qa/buildbot/donkey/z600fuzz/Private/src/tm-threads.c:474:17
#7 0x7fb09bfcc181 in start_thread /build/eglibc-3GlaMS/eglibc-2.19/nptl/pthread_create.c:312
[src/util-ip.c:104]: (error) Shifting a negative value is undefined behaviour
[src/util-radix-tree.c:1160]: (error) Shifting a negative value is undefined behaviour
[src/util-radix-tree.c:1357]: (error) Shifting a negative value is undefined behaviour
[src/util-radix-tree.c:1380]: (error) Shifting a negative value is undefined behaviour
[src/util-radix-tree.c:1438]: (error) Shifting a negative value is undefined behaviour
Reset counters_global_id at ctx destruction. In the unix socket
runmode the lack of this reset would cause the id's to increase with
each pcap, leading to an ever larger stats array.
Denotes the max detection list so that rule validation can
allow post-detection lists to come after base64_data, but
disallow detection lists to come after it.
As SMTP file_data detection uses the file API, the file's inspect
tracker should be considered when pruning files.
This patch sets the FILE_USE_DETECT flag on files tracked by smtp.
It also adds logic to move the inspected tracker ahead if detection
doesn't do it, like when no rules are matching or the detection engine
is disabled.
When the file API is used to do content inspection (currently only
smtp does this), the detection should be considered while pruning
the file chunks.
This patch introduces a new flag for the file API: FILE_USE_DETECT
When it is used, 'FilePrune' will not remove chunks that are (partly)
beyond the File::content_inspected tracker.
When using this flag, it's important to realize that when the detect
engine is disabled or rules are not matching, content_inspected
might not get updated.
Cppcheck 1.72 gives a warning on the following code pattern:
char blah[32] = "";
snprintf(blah, sizeof(blah), "something");
The warning is:
(error) Buffer is accessed out of bounds.
While this appears to be a FP, in most cases the initialization to ""
was unnecessary as the snprintf statement immediately follows the
variable declaration.
We want to add counters in order to track the number of times we hit a
decode event. A decode event is related to an error in the protocol
decoding of a certain packet.
This patch first modifies the decode-event list, reordering it to
separate single-packet events from stream-related events, and adds
the prefix "decoder" to decode events.
The counters are created during the decode setup, and the corresponding
event counter is incremented every time a packet with the flag
PKT_IS_INVALID is finalized in the decode phase.
If the stream.inline setting was missing, it would default to IDS.
This patch changes the default to 'auto', meaning that in IPS mode
the stream engine also uses IPS mode and in IDS mode it's still in
IDS mode.
Bug #1570