Under eve/alert, introduce a new metadata configuration
section. If not provided, or simply set to yes, defaults will be
used. Otherwise this is a map with fields that can be toggled on
and off. The defaults are:
outputs:
  - eve-log:
      types:
        - alert:
            metadata:
              app-layer: true
              flow: true
              rule:
                raw: false
                metadata: true
To enable something that is disabled by default, or to disable
something that is enabled by default, only that key needs to
be changed; everything else will keep its default value.
For SIEM analysis it is often useful to refer to the actual rules to
find out why a specific alert has been triggered when the signature
message does not convey enough information.
Turn on the new rule flag to include the rule text in eve alert output.
The feature is turned off by default.
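For example, to include the raw rule text, only the rule raw key needs
to be changed (a minimal sketch based on the defaults shown above):

outputs:
  - eve-log:
      types:
        - alert:
            metadata:
              rule:
                raw: true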
With a rule like this:
alert dns $HOME_NET any -> 8.8.8.8 any (msg:"Google DNS server contacted"; sid:42;)
The eve alert output might look something like this (pretty-printed for
readability):
{
  "timestamp": "2017-08-14T12:35:05.830812+0200",
  "flow_id": 1919856770919772,
  "in_iface": "eth0",
  "event_type": "alert",
  "src_ip": "10.20.30.40",
  "src_port": 50968,
  "dest_ip": "8.8.8.8",
  "dest_port": 53,
  "proto": "UDP",
  "alert": {
    "action": "allowed",
    "gid": 1,
    "signature_id": 42,
    "rev": 0,
    "signature": "Google DNS server contacted",
    "category": "",
    "severity": 3,
"rule": "alert dns $HOME_NET any -> 8.8.8.8 any (msg:\"Google DNS server contacted\"; sid:43;)"
  },
  "app_proto": "dns",
  "flow": {
    "pkts_toserver": 1,
    "pkts_toclient": 0,
    "bytes_toserver": 81,
    "bytes_toclient": 0,
    "start": "2017-08-14T12:35:05.830812+0200"
  }
}
Feature #2020
Metadata of the signature can now conditionally be put in the alert
events. This allows users to get more context about the events
generated by the alert.
detect-metadata: conditional parsing
Only parse metadata if an output module will use the information.
The patch also adds a unittest to check that metadata is not parsed
if not asked for.
output-json-alert: optional output keys as array
Update the rule metadata configuration to have an option to output
values as arrays. Also add an option to log only a selected set of keys
as arrays. This is useful for rulesets where, for instance,
the `tag` key is used multiple times.
(Jason Ish) rule metadata: always log as lists
After review of rule metadata, we can't make assumptions
on what should be a list or not. So log everything as a list.
This patch updates the Signature structure so it contains the
metadata in key/value form.
A later patch will make that dictionary available in the events.
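With that in place, rule metadata in the alert event ends up looking
along these lines (keys and values are hypothetical; the actual content
depends on the ruleset):

"metadata": {
  "created_at": ["2018_01_01"],
  "tag": ["tag0", "tag1"]
}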
By default log metadata.
Remove toggles for individual protocol types and just use a
single toggle to control including the app-layer with the
alert.
The metadata (currently app-layer and flow) can be disabled
by setting metadata to a falsey value, but it is removed
from the default configuration (it will still be in the docs).
Give traffic/id and traffic/label flowbits special handling
in the eve output. Instead of just logging them as flowbits,
give them their own top level object.
{
  "traffic": {
    "id": ["id0", "id1"],
    "label": ["label0", "label1"]
  }
}
This rule will match on the STREAM_3WHS_ACK_DATA_INJECT event, which is
set if we:
- are in IPS mode
- get a data packet from the server
- that matches the exact SEQ/ACK expectations for the 3whs
The action of the rule is set to drop, as the stream engine will drop
the packet anyway. So the rule action is actually not needed, but for
consistency it is drop.
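A sketch of what such a rule could look like, assuming the stream-event
keyword follows the usual naming for this event (message and sid are
placeholders, not the actual stream-events.rules entry):

drop tcp any any -> any any (msg:"SURICATA STREAM 3whs ACK data injection"; stream-event:3whs_ack_data_inject; sid:1; rev:1;)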
If we have only seen the SYN and SYN/ACK of the 3whs, accept data from
the server if it perfectly matches the SEQ/ACK expectations. This
might happen in 2 scenarios:
1. packet loss: if we lost the final ACK, we may get data that fits
this pattern (e.g. an SMTP EHLO message).
2. MOTS/MITM packet injection: an attacker can send a data packet
together with its SYN/ACK packet. The client due to timing almost
certainly gets the SYN/ACK before considering the data packet,
and will respond with the final ACK before processing the data
packet.
In IDS mode we will accept the data packet and rely on the reassembly
engine to warn us if the packet was indeed injected.
In IPS mode we will drop the packet. In the packet loss case we will
rely on retransmissions to get the session back up and running. For
the injection case we blocked this injection attempt.
The detect engine would bypass packets that are set as dropped. This
seems sane, as these packets are going to be dropped anyway.
However, it led to the following corner case: stream events that
triggered the drop could not be matched on by rules. The packet
with the event wouldn't make it to the detect engine due to the bypass.
This patch changes the logic to not bypass DROP packets anymore.
Packets that are dropped by the stream engine will set the no payload
inspection flag, so needless cost is avoided.
TFTP parsing and logging written in Rust.
Log to eve.json the type of request (read or write), the name of the file
and the mode.
Example of output:
"tftp":{"packet":"read","file":"rfc1350.txt","mode":"octet"}
This commit fixes a leak of an mmap'ed ring buffer that was not
unmapped when a socket was closed. In addition, the leak could
break an inline channel on certain configurations.
Also slightly changed AFPCreateSocket():
1. If an interface is not up, it does not try to apply any
settings to a socket. This reduces the number of error messages
while an interface is down.
2. An interface is considered active if both IFF_UP and IFF_RUNNING
are present.
So far, the suricata command socket suricata-command.socket has the rights
rw-r----- suricata:user.
When suricata is used with restricted access, another application
(suricatasc-like) that also runs with restricted access and needs to
reach the command socket cannot write to the socket since it is not
the owner (e.g. suricata within a container, with a hardened value
for umask and hardened rights for users).
The socket should be set as rw-rw----. Use chmod instead of fchmod
and set it after the socket creation.
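A minimal sketch of the idea (socket path and error handling are
illustrative, not the actual implementation):

#include <sys/stat.h>
#include <stdio.h>

/* after the unix command socket has been created and bound, relax its
 * mode to rw-rw---- so group members can write to it; 'socket_path' is
 * a placeholder for the configured socket filename */
static void relax_socket_permissions(const char *socket_path)
{
    if (chmod(socket_path, S_IRUSR | S_IWUSR | S_IRGRP | S_IWGRP) == -1)
        perror("chmod");
}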
https://redmine.openinfosecfoundation.org/issues/2412
Suricatasc does not support pcap-file processing in continuous mode.
Register a new command pcap-file-continuous in the unix manager to work
with suricatasc. Add defaulted arguments for pcap-file to support
backwards compatibility.
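A usage sketch (paths are placeholders):

suricatasc -c "pcap-file-continuous /data/pcaps /var/log/suricata"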
RAND_MAX is not guaranteed to be a divisor of ULONG_MAX, so take the
necessary precautions to get unbiased random numbers. Although the
bias might be negligible, it's not advisable to rely on it.
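The usual way to achieve this is rejection sampling; a generic sketch of
the technique (not the actual patch):

#include <stdlib.h>

/* return an unbiased value in [0, n) by discarding draws that fall into
 * the incomplete last bucket of the 0..RAND_MAX range
 * (assumes 0 < n <= RAND_MAX + 1) */
static unsigned long random_below(unsigned long n)
{
    unsigned long range = (unsigned long)RAND_MAX + 1;
    unsigned long limit = range - (range % n);
    unsigned long r;
    do {
        r = (unsigned long)rand();
    } while (r >= limit);
    return r % n;
}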
According to its man page, sigprocmask has undefined behavior in
multithreaded environments. Instead of explicitly blocking the handling
of SIGUSR2 in every thread, block the handling of SIGUSR2 before
creating the threads and enable handling of this signal again
afterwards. This way, only the main thread will be able to manage
this signal properly.
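A sketch of the pattern with pthread_sigmask (not the exact Suricata
code):

#include <pthread.h>
#include <signal.h>

static void spawn_threads_with_sigusr2_blocked(void)
{
    sigset_t set, oldset;
    sigemptyset(&set);
    sigaddset(&set, SIGUSR2);
    /* block SIGUSR2 now; threads created below inherit the blocked mask */
    pthread_sigmask(SIG_BLOCK, &set, &oldset);
    /* ... create the packet/management threads here ... */
    /* restore the main thread's mask so only it handles SIGUSR2 */
    pthread_sigmask(SIG_SETMASK, &oldset, NULL);
}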
Add startswith modifier to simplify matching patterns at the start
of a buffer.
Instead of:
content:"abc"; depth:3;
This enables:
content:"abc"; startswith;
Especially with longer patterns this makes the intention of the rule
more clear and eases writing the rules.
Internally it's simply a shorthand for 'depth:<pattern len>;'.
Ticket https://redmine.openinfosecfoundation.org/issues/742
Fix the inspection of multiple files in a single TX, where new files
may be added to the TX after inspection started.
Assign the hard coded id DE_STATE_FLAG_FILE_INSPECT to the file
inspect engine.
Make sure that sigs that do file inspection and don't match on the
current file always store a detailed state. This state will include
the DE_STATE_FLAG_FILE_INSPECT flag.
When the app-layer indicates a new file is available, for each sig
that has the DE_STATE_FLAG_FILE_INSPECT flag set, reset part of the
state so that the sig is evaluated again.
Free txs that are done out of order if we can. Some protocol
implementations have transactions running in parallel, where it is
possible that a tx that started later finishes earlier than other
transactions. Support freeing those.
Also improve handling of asynchronous transactions. If transactions
are unreplied, e.g. in the dns flood case, the parser may at some
point free transactions on its own. Handle this case in
the app-layer engine so that the various tracking id's (inspect, log,
and 'min') are updated accordingly.
Next, free txs much more aggressively. Instead of freeing old txs
at the app-layer parsing stage, free all complete txs at the end
of the flow-worker. This frees txs much sooner in many cases.
When stateless rules are depending on a flowbit being set by a stateful
rule, the inspection order is almost certainly wrong.
Switch stateless rules depending on stateful rules to being stateful.
This is used to turn 'TCP stream' inspecting rules (which are stateless
unless mixed with stateful keywords) into stateful rules.
Use per tx detect_flags to track prefilter. Detect flags are used for 2
things:
1. marking tx as fully inspected
2. tracking already run prefilter (incl mpm) engines
This supersedes the MpmIDs API for directionless tracking
of the prefilter engines.
When we have no SGH we have to flag the txs that are 'complete'
as inspected as well.
Special handling for the stream engine:
If a rule mixes TX inspection and STREAM inspection, we can encounter
the case where the rule is evaluated against multiple transactions
during a single inspection run. As the stream data is exactly the same
for each of those runs, it's wasteful to rerun inspection of the stream
portion of the rule.
This patch enables caching of the stream 'inspect engine' result in
the local 'RuleMatchCandidateTx' array. This is valid only during the
life of a single inspection run.
Remove stateful inspection from 'mask' (SignatureMask). The mask wasn't
used in most cases for those rules anyway, as there we rely on the
prefilter. Add an alproto check to catch the remaining cases.
When building the active non-mpm/non-prefilter list check not just
the mask, but also the alproto. This especially helps stateful rules
with negated mpm.
Simplify AppLayerParserHasDecoderEvents usage in detection to only
return true if protocol detection events are set. Other detection is done
in inspect engines.
Move rule group lookup and handling into its own function. Handle
'post lookup' tasks immediately, instead of after the first detect
run. The tasks were independent of the initial detection.
Many cleanups and much refactoring.
Add API meant to replace the MpmIDs API. It uses a u64 for each direction
in a tx to keep track of 2 things:
1. is inspection done?
2. which prefilter engines (like mpm) are already completed
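Conceptually it looks something like this (names are illustrative, not
the actual ones in the code):

#include <stdint.h>

/* one 64-bit flags field per direction on the tx */
#define TX_DETECT_FLAG_INSPECTED   (1ULL << 63)  /* detection fully done */
/* the remaining bits: one per prefilter/mpm engine that already ran */
uint64_t detect_flags_ts;   /* to-server direction */
uint64_t detect_flags_tc;   /* to-client direction */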
Analyze flowbits to find which bits are only checked.
Track whether they are set and checked on the same level of 'statefulness'
for later use.
Dump flowbits to json including the sids that set/check etc the bit.
There is probably not much harm in enabling both, but
open file counts can get messy with both enabled. And v1
should be scheduled for deprecation soon enough.
This doesn't need to be registered from suricata.c. And moving
it to the init function makes sure it's only registered if
the logger is actually enabled.
Track each type of error warning and only log it once. Also create
a new stat, file_store.fs_errors to count each file system type
error (open, rename, unlink).
Also remove exit stats, they are of limited value.
As fileinfo records are logged to the main eve log, disable
metadata by default. But when enabled, just use the fileinfo
record.
Metadata is stored in a file named:
<sha256>.<seconds>.<file_id>.json
where the sha256 is the same as the file logged, the seconds
is the unix timestamp in seconds for the fileinfo record,
and the file_id is an atomically incremented integer per
Suricata instance.
This should allow for each occurrence of the same file to have
its own metadata file. But a collision is expected when running
Suricata repeatedly over the same pcap, as that would be the
exact same occurrence of a file.
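An example name could look like this (hash shortened here for
readability; the real name contains the full SHA256):

5ab0...93e7.1502706905.1.json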
Even if a file is truncated, force the SHA256 if force sha256
is set to yes.
The new file store requires the sha256 regardless of the file
state if it is to be logged, as the filename is based on the
sha256.
Filestore v2 starts as a copy of log-filestore with the
following changes.
- NSS is required as file names are based on the SHA256.
- Work/tmp files are stored in a temp. directory, then
moved into a directory tree where the directory names
are the first 2 characters of the hex SHA256.
- Removes the need for a waldo file or pid in the filenames.
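The resulting on-disk layout looks roughly like this (directory and file
names are illustrative; the hash is truncated):

filestore/
  temp/                    <- work/tmp files still being written
  f2/
    f2ab3c...9d1e          <- completed file, named after its SHA256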
Split the building of the fileinfo record from the writing
of the record so the building can be called from other code.
Specifically the new filestore output which uses fileinfo
records as the metadata.
Give SCCreateDirectoryTree a new argument, final. If true the
full path will be created as a directory. If false, the last
component will not be created as a directory (current
behaviour).
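A usage sketch (paths and return value handling are assumptions):

/* create every path component, including the last one, as a directory */
SCCreateDirectoryTree("/var/log/suricata/filestore/temp", true);

/* create only the parent directories; the last component is a file name */
SCCreateDirectoryTree("/var/log/suricata/filestore/f2/f2ab.json", false);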
Renames SCLogCreateDirectoryTree to SCCreateDirectoryTree
and moves it into a util module for re-use.
Also moves SCMkDir from suricata-common.h to the more
appropriately named util-path.h.
I would have preferred to use util-file for file related options
but that is already used by file store utilities. util-path
is close enough for file related operations.
The new OutputInitResult is a struct return type that allows
logger init functions to return a NULL context without
raising an error.
Instead of returning NULL to signal error, the "ok" field will
be set to false. If ok, but the ctx is NULL, then silently
move on to the next logger.
Use case: multiple versions of a specific logger, where one
implementation decides the configuration is not for that
implementation. It can then return a NULL ctx with ok set to true.
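The shape of the return type as described above (a sketch; the actual
definition lives in output.h):

typedef struct OutputInitResult_ {
    OutputCtx *ctx;   /* may be NULL even on success */
    bool ok;          /* false signals an error */
} OutputInitResult;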
The flow manager thread (that also runs the host cleanup code) would
sometimes free a host before its thresholds are timed out. This would
lead to misdetection or too many alerts.
This was mostly (only?) visible on slower systems. And was caused by a
mismatch between time concepts of the async flow manager thread and the
packet threads, resulting in the flow manager using a timestamp that
was before the threshold entry creation ts. This would lead to an
integer underflow in the timeout check, leading to an incorrect conclusion
that the threshold entry was timed out.
To address this, check if the 'check' timestamp is not before the creation
timestamp.
If TmThreadDrainPacketThreads would take more than 60 seconds, the wait
loop that follows it would reach 'timeout' condition immediately. This
would lead to a null ptr deref of 'tv'.
Fix by not counting the time spent in TmThreadDrainPacketThreads and
also by not doing the null ptr deref in any case.
In offline mode, if the starting timestamp is 0 suricata will never
initialize the cached_minute_start array. This causes the timestamp to be
ignored when needed (e.g., in fast.log).
This commit forces the initialization of this array.
PrintRawUriFp does not properly escape backslashes. This causes confusion
between a \ character and a hex-encoded character. PrintRawUriBuffer,
instead, correctly does backslash-encoding.
Add proper escaping of backslashes to PrintRawUriFp.
For loggers that register once per direction, use unique id's per
direction.
Reshuffle id's to keep tx log id's low so we can use u32 for tracking
logged loggers.
Avoid looping in transaction output.
Update app-layer API to store the bits in one step
and retrieve the bits in a single step as well.
Update users of the API.
Create a bitmap of the loggers per protocol. This is done at runtime
based on the loggers that are enabled. Take the logger_id for each
logger and store it as a bitmap in the app-layer protocol storage.
Goal is to be able to use it as an expectation later.
usleep on MinGW doesn't behave as expected. Added replacement
wrapper around 'Sleep(msec)'. As that has msec resolution and
not a usec resolution, change the various thread init and stop
functions to test for the actual time waited instead of counting
the usecs passed to usleep.
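A sketch of such a wrapper (name is illustrative):

#include <windows.h>

/* emulate usleep() on MinGW: Sleep() only has millisecond resolution,
 * so round sub-millisecond waits up to 1 ms */
static void sc_usleep(unsigned long usec)
{
    Sleep((usec + 999) / 1000);
}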
On MinGW the result of ntohl needs to be casted to uint32_t and
the result of ntohs to uint16_t. To avoid doing this everywhere
add SCNtohl and SCNtohs macros.
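The macros presumably boil down to something like:

#define SCNtohl(x) ((uint32_t)ntohl(x))
#define SCNtohs(x) ((uint16_t)ntohs(x))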
Many were using AlertJsonThread instead of OutputJsonCtx,
but as the datatypes were similar enough no harm was done.
Now that they are using their proper datatype, removed
AlertJsonThread from output.h as it's no longer used.
According to PF_RING upstream the vlan header should never be stripped
from the packet PF_RING feeds to Suricata. But upstream also indicated
keeping the check would be a good "safety check".
So in addition to the check, add a warning that warns once (per thread
for implementation simplicity) if the vlan hdr does appear to be stripped
after all.
When Suricata was monitoring traffic with a single vlan layer, the stats
and output instead showed 2. This was caused by the fact that the raw
packets PF_RING feeds Suricata still hold the vlan header, while the code
assumed that the header was stripped and the vlan_id passed to Suricata
through PF_RING's extended_hdr.parsed_pkt.
This patch adds the following logic: check the vlan id from the parsed
packet PF_RING prepared. PF_RING sets the vlan_id based on its own parsing or
based on the hardware offload. It gives no indication on where the vlan_id
came from, so we rely on the vlan_offset field. If it's 0, we assume the
PF_RING parser did not see the vlan header and got it from the hardware
offload. In this case we will use this information directly, as we won't
get a raw vlan header later. If PF_RING did set the offset, we do the
parsing in the Suricata decoder so that we have full control.
PF_RING *should* put back the vlan header in all cases, and also set the
vlan_offset field, but as an extra precaution keep the check described
above.
Bug #2355.
This keyword matches on the FTP commands STOR and RETR. It allows
rule writers to select whether the alert has to be on a put or a fetch
operation.
It is now possible to write a signature like:
alert ftp-data any any -> any any (msg:"FTP data get firmware"; ftpdata_command:retr; sid:2; rev:1;)
to alert when a file is retrieved from an FTP server.
Use expectation to be able to identify connections that are
ftp data. It parses the PASV response, STOR message and the
RETR message to provide extraction of files.
An implementation in Rust of FTP message parsing is available.
This patch also changes some variable names prefixed with ssh to ftp.
This patch provides a working expectation system. This will allow
suricata to have a way to identify parallel connections opened by
a protocol such as FTP.
Expectations are kept in a chained list and entries are cleaned up
by timeout.
This patch also defines a counter of expectations that is also
used to check if we need to query IPPairs. This way we only query
the IPPairs store if we have an expectation.
This patch adds a parent_id field to the Flow structure that
contains the flow ID of the parent connection, for protocols with
dynamic parallel connection opening like FTP.
This permits handling memcap values through the
unix socket for:
- stream
- stream-reassembly
- flow
- applayer-proto-http
- defrag
- ippair
- host
It will be possible to show or change a memcap value
for a specified configuration and list all memcap values
available.
The following commands are registered for unix-socket:
- memcap-set
- memcap-show
- memcap-list
Output:
>>> memcap-show flow
Success:
{
"value": "64mb"
}
>>> memcap-set flow 64mb
Success:
"memcap value for 'flow' updated: 67108864"
Command with invalid memcap key:
>>> memcap-set udp 32mb
Error:
"Available config: stream stream-reassembly flow applayer-proto-http defrag ippair host"
Command with an invalid memcap value:
>>> memcap-set http 32mmb
Error:
"error parsing memcap specified, value not changed"
This adds new functions that will be called
through the unix socket and permit updating
and showing the memcap value.
The memcap value needs to be handled in a
thread safe way, so for this reason it is
declared as an atomic var.
Another function is added to get
the memuse value since it will be shown
through the unix socket.
FlowGetMemuse() is made public
because the memuse value will be shown
through the unix socket.
In all memcap check functions,
the size passed as argument is declared as uint64_t,
except for StreamTcpReassembleCheckMemcap where it's
defined as uint32_t.
If the hash function returns an index greater than the array size of the
hash table, the index is not checked. Even if this is the responsibility
of the caller, add a safety check to avoid errors.
We had a race condition when running logrotate with multiple
EVE JSON files. The consequence was one of the files not being reopened
by suricata, which continued to write to the rotated one.
Attempts to fix this in the signal handler failed, so this patch implements
log rotation support by adding a dedicated command to the unix socket
to reopen the log files.
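Assuming the command is named reopen-log-files, a logrotate postrotate
script could then do something like:

suricatasc -c reopen-log-files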
Multiple NULL-pointer dereferences after ConfGet in PostConfLoadedSetup can cause suricata to terminate with segfaults. The ASAN-output:
ASAN:DEADLYSIGNAL =================================================================
5734ERROR: AddressSanitizer: SEGV on unknown address 0x000000000000 (pc 0x7f1a9a3967cc bp 0x7ffdff033ad0 sp 0x7ffdff033250 T0)
0 0x7f1a9a3967cb (/usr/lib/x86_64-linux-gnu/libasan.so.3+0x447cb)
1 0x55ba65f66f27 in PostConfLoadedSetup /root/suricata-1/src/suricata.c:2652
2 0x55ba65f6870e in main /root/suricata-1/src/suricata.c:2898
3 0x7f1a96aeb2b0 in __libc_start_main (/lib/x86_64-linux-gnu/libc.so.6+0x202b0)
4 0x55ba65af9039 in _start (/usr/local/bin/suricata+0xc8039)
AddressSanitizer can not provide additional info.
SUMMARY: AddressSanitizer: SEGV (/usr/lib/x86_64-linux-gnu/libasan.so.3+0x447cb)
This commit fixes Bug #2370 by replacing ConfGet by ConfGetValue
There are multiple NULL-pointer dereferences after calling ConfGetBool in StreamTcpInitConfig. ConfGetBool calls ConfGet which doesn't check the vptr-argument. This is a sample ASAN-output:
1453ERROR: AddressSanitizer: SEGV on unknown address 0x000000000000 (pc 0x7f2969b83a28 bp 0x7ffdbf613a90 sp 0x7ffdbf613210 T0)
0 0x7f2969b83a27 in strcasecmp (/usr/lib/x86_64-linux-gnu/libasan.so.3+0x51a27)
1 0x564185accefd in ConfValIsTrue /root/suricata-1/src/conf.c:559
2 0x564185accb4f in ConfGetBool /root/suricata-1/src/conf.c:512
3 0x564185dcbe05 in StreamTcpInitConfig /root/suricata-1/src/stream-tcp.c:381
4 0x564185e21a88 in PreRunInit /root/suricata-1/src/suricata.c:2264
5 0x564185e24d2c in PostConfLoadedSetup /root/suricata-1/src/suricata.c:2763
6 0x564185e2570e in main /root/suricata-1/src/suricata.c:2898
7 0x7f29662cb2b0 in __libc_start_main (/lib/x86_64-linux-gnu/libc.so.6+0x202b0)
8 0x5641859b6039 in _start (/usr/local/bin/suricata+0xc8039)
AddressSanitizer can not provide additional info.
SUMMARY: AddressSanitizer: SEGV (/usr/lib/x86_64-linux-gnu/libasan.so.3+0x51a27) in strcasecmp
1453ABORTING
This commit replaces ConfGet with ConfGetValue in ConfGetBool. This not only fixes Bug #2368 but might also fix others.
Multiple NULL-pointer dereferences after ConfGet in HostInitConfig can cause suricata to terminate with segfaults. The ASAN-output:
==29747==ERROR: AddressSanitizer: SEGV on unknown address 0x000000000000 (pc 0x7ff937904646 bp
0 0x7ff937904645 in strlen (/lib/x86_64-linux-gnu/libc.so.6+0x80645)
1 0x7ff93b146eec (/usr/lib/x86_64-linux-gnu/libasan.so.3+0x3beec)
2 0x5618387c86a3 in HostInitConfig /root/suricata-1/src/host.c:174
3 0x56183893eccb in PostConfLoadedSetup /root/suricata-1/src/suricata.c:2752
4 0x56183893f70e in main /root/suricata-1/src/suricata.c:2898
5 0x7ff9378a42b0 in __libc_start_main (/lib/x86_64-linux-gnu/libc.so.6+0x202b0)
6 0x5618384d0039 in _start (/usr/local/bin/suricata+0xc8039)
AddressSanitizer can not provide additional info.
SUMMARY: AddressSanitizer: SEGV (/lib/x86_64-linux-gnu/libc.so.6+0x80645) in strlen
This commit fixes Bug #2367
In some cases the rule reload could hang. The pending USR2 signals will
be recognized even with the <2 check. Also the SCLogWarning shouldn't be
used in the handler (see Warning about SCLog* API above in the code).
Check the counter id before updating a counter. In case of a disabled
parser with protocol detection enabled, the id can be 0. In
debug mode this would lead to a BUG_ON.
https://redmine.openinfosecfoundation.org/issues/2353
Command line option pcap-file-continuous was ignoring command line options
passed after its usage. Fixed bug, fixed formatting of help command
regarding pcap-file-continuous.
There are several NULL-pointer derefs in StreamTcpInitConfig. All of them happen because ConfGet returns 1 even if the value is NULL (due to misconfiguration, for example).
This commit introduces a new function "ConfGetValue". It behaves like ConfGet but also returns failure when the value is a NULL pointer, and can be used as a replacement for ConfGet.
Note: simply modifying ConfGet might not be a good idea, because there are some places where ConfGet should return 1 even if "value" is NULL, for example when ConfGet fetches a config leaf in the yaml hierarchy.
Bug: 2354
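A sketch of the idea (the real implementation may check the
configuration node directly instead of wrapping ConfGet):

/* like ConfGet(), but also fail when the node exists yet carries no value */
int ConfGetValue(const char *name, const char **vptr)
{
    int ret = ConfGet(name, vptr);
    if (ret == 1 && *vptr == NULL)
        return 0;
    return ret;
}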
This commit fixes multiple NULL-pointer dereferences in FlowInitConfig after reading in config-values(flow.hash-size, flow.prealloc and flow.memcap) for flow. Here is a sample ASAN-output:
=================================================================
ERROR: AddressSanitizer: SEGV on unknown address 0x000000000000 (pc 0x7fea73456646 bp 0x7fffd70e1ba0 sp 0x7fffd70e1328 T0)
0 0x7fea73456645 in strlen (/lib/x86_64-linux-gnu/libc.so.6+0x80645)
1 0x7fea76c98eec (/usr/lib/x86_64-linux-gnu/libasan.so.3+0x3beec)
2 0x5643efb4c205 in FlowInitConfig /root/suricata-1/src/flow.c:455
3 0x5643efcd1751 in PreRunInit /root/suricata-1/src/suricata.c:2247
4 0x5643efcd49f4 in PostConfLoadedSetup /root/suricata-1/src/suricata.c:2748
5 0x5643efcd5402 in main /root/suricata-1/src/suricata.c:2884
6 0x7fea733f62b0 in __libc_start_main (/lib/x86_64-linux-gnu/libc.so.6+0x202b0)
7 0x5643ef8761a9 in _start (/usr/local/bin/suricata+0xc51a9)
Ticketno: Bug #2349
The "sig_file" argument of DetectLoadCompleteSigPath() is not checked for NULL-values. If this argument is NULL a SEGV occurs because of a dereferenced NULL-pointer in strlen in PathIsAbsolute. This commit fixes bug #2347. Here is the ASAN-output:
==17170==ERROR: AddressSanitizer: SEGV on unknown address 0x000000000000 (pc 0x7fd1afa00646 bp 0x7ffe8398e6d0 sp 0x7ffe8398de58 T0)
0 0x7fd1afa00645 in strlen (/lib/x86_64-linux-gnu/libc.so.6+0x80645)
1 0x7fd1b3242eec (/usr/lib/x86_64-linux-gnu/libasan.so.3+0x3beec)
2 0x5561c8cddf7f in PathIsAbsolute /root/suricata-1/src/util-path.c:40
3 0x5561c8cddfea in PathIsRelative /root/suricata-1/src/util-path.c:65
4 0x5561c89275e4 in DetectLoadCompleteSigPath /root/suricata-1/src/detect.c:264
5 0x5561c8929e75 in SigLoadSignatures /root/suricata-1/src/detect.c:486
6 0x5561c8c0f2b3 in LoadSignatures /root/suricata-1/src/suricata.c:2419
7 0x5561c8c1051d in PostConfLoadedDetectSetup /root/suricata-1/src/suricata.c:2550
8 0x5561c8c12424 in main /root/suricata-1/src/suricata.c:2887
9 0x7fd1af9a02b0 in __libc_start_main (/lib/x86_64-linux-gnu/libc.so.6+0x202b0)
10 0x5561c87b31a9 in _start (/usr/local/bin/suricata+0xc51a9)
AddressSanitizer can not provide additional info.
SUMMARY: AddressSanitizer: SEGV (/lib/x86_64-linux-gnu/libc.so.6+0x80645) in strlen
The value for the configuration-option "unix-command.enabled" is not properly checked in ConfUnixSocketIsEnable. This causes a NULL-pointer dereference in strcmp. This commit fixes bug #2346. The ASAN-output looks like:
ERROR: AddressSanitizer: SEGV on unknown address 0x000000000000 (pc 0x7f03b69737cc bp 0x7ffcef322c10 sp 0x7ffcef322390 T0)
0 0x7f03b69737cb (/usr/lib/x86_64-linux-gnu/libasan.so.3+0x447cb)
1 0x5617a76d3f55 in ConfUnixSocketIsEnable /root/suricata-1/src/util-conf.c:104
2 0x5617a741b6e7 in DetectEngineMultiTenantSetup /root/suricata-1/src/detect-engine.c:2447
3 0x5617a769e0c3 in PostConfLoadedDetectSetup /root/suricata-1/src/suricata.c:2527
4 0x5617a76a0424 in main /root/suricata-1/src/suricata.c:2887
5 0x7f03b30c82b0 in __libc_start_main (/lib/x86_64-linux-gnu/libc.so.6+0x202b0)
6 0x5617a72411a9 in _start (/usr/local/bin/suricata+0xc51a9)
AddressSanitizer can not provide additional info.
SUMMARY: AddressSanitizer: SEGV (/usr/lib/x86_64-linux-gnu/libasan.so.3+0x447cb
There is a memory-leak in DetectAddressTestConfVars. If the program takes the "goto error"-path, the pointers gh and ghn will not be freed. This commit fixes bug #2345. Here is the ASAN-output:
=================================================================
ERROR: LeakSanitizer: detected memory leaks
Direct leak of 24 byte(s) in 1 object(s) allocated from:
0 0x7f4347cb1d28 in malloc (/usr/lib/x86_64-linux-gnu/libasan.so.3+0xc1d28)
1 0x55fe1fc8dcfc in DetectAddressHeadInit /root/suricata-1/src/detect-engine-address.c:1534
2 0x55fe1fc8c50a in DetectAddressTestConfVars /root/suricata-1/src/detect-engine-address.c:1306
3 0x55fe1ff356bd in PostConfLoadedSetup /root/suricata-1/src/suricata.c:2696
4 0x55fe1ff365eb in main /root/suricata-1/src/suricata.c:2884
5 0x7f43443892b0 in __libc_start_main (/lib/x86_64-linux-gnu/libc.so.6+0x202b0)
Direct leak of 24 byte(s) in 1 object(s) allocated from:
0 0x7f4347cb1d28 in malloc (/usr/lib/x86_64-linux-gnu/libasan.so.3+0xc1d28)
1 0x55fe1fc8dcfc in DetectAddressHeadInit /root/suricata-1/src/detect-engine-address.c:1534
2 0x55fe1fc8c524 in DetectAddressTestConfVars /root/suricata-1/src/detect-engine-address.c:1310
3 0x55fe1ff356bd in PostConfLoadedSetup /root/suricata-1/src/suricata.c:2696
4 0x55fe1ff365eb in main /root/suricata-1/src/suricata.c:2884
5 0x7f43443892b0 in __libc_start_main (/lib/x86_64-linux-gnu/libc.so.6+0x202b0)
SUMMARY: AddressSanitizer: 48 byte(s) leaked in 2 allocation(s).
If output log reopen fails, don't try to output the error. This would
lead to a deadlock as reopen was called from a SCLogMessage call. This
call already held the output lock.
Bug #2306.
Propagate inspection limits from anchored keywords to the rest of
a rule.
Examples:
content:"A"; depth:1; is anchored, it can only match in the first byte
content:"A"; depth:1; content:"BC"; distance:0; within:2;
"BC" can only be in the 2nd and 3rd byte of the payload. So effectively
it has an implicit offset of 1 and an implicit depth of 3.
content:"A"; depth:1; content:"BC"; distance:0; can assume offset:1; for
the 2nd content.
content:"A"; depth:1; pcre:"/B/R"; content:"C"; distance:0; can assume
at least offset:1; for content "C". We can't analyze the pcre pattern
(yet), so we assume it matches with 0 bytes.
Add lots of test cases.
Currently, when a live reload is requested through the unix socket but
cannot be executed, suricata prints the following
error message to the console:
"Live rule reload not possible if -s or -S option used at runtime."
while the unix socket client simply gets "done",
even though the live reload was not executed.
This permits printing the invalid rules through
the unix socket.
An example output is the following:
>>> show-failed-rules
Success:
[
  {
    "filename": "/home/eric/git/oisf/benches/tls-store.rules",
    "line": 2,
    "rule": "alert ts any any -> any 334 (msg:\"Store TLS\"; tls.store; sid:2; rev:1;)"
  },
  {
    "filename": "/home/eric/git/oisf/benches/tls-store.rules",
    "line": 3,
    "rule": "alert ls any any -> any 334 (msg:\"Store TLS\"; tls.store; sid:3; rev:1;)"
  }
]
The dump is limited to 20 entries to avoid sending a message too big
for clients that by default don't support large messages.
Add a non-blocking function to reload rules. It will be useful
for remote system management, to avoid blocking the caller while
waiting for the reload. And as we now have a last-reload command we can get the
status of the current reload.
Remove the DONE state to fix a problem with the state not being
changed correctly when multiple reloads were done. As DONE was
not really useful, we can remove it.
This adds the engine stats to the stats event.
If multi-tenancy is enabled, it will add
stats for each tenant.
The following is a snippet of the generated EVE entry:
"detect":{"engines":[{"last_reload":"2015-10-13T09:59:48.044996+0200","rules_loaded":17184,"rules_failed":0}],"alert":28}
Multi-tenancy enabled:
"detect":{"engines":[{"id":1,"last_reload":"2015-10-13T09:56:46.447153+0200","rules_loaded":17084,"rules_failed":0},
{"id":2,"last_reload":"2015-10-13T09:56:36.504877+0200","rules_loaded":3268,"rules_failed":0}],
"alert":28}
This takes the form of an option to add the pid of the process to file
names. Additionally, it adds a suffix to the file name to indicate it is
not finalized.
Adding the pid to the file name reduces the likelihood that a file is
overwritten when suricata is unexpectedly killed. The number in the
waldo file is only written out during a clean shutdown. In the event
of an improper shutdown, extracted files will be written using the old
number and existing files with the same name will be overwritten.
Writes extracted files and their metadata to a temporary file suffixed
with '.tmp'. Renames the files when they are completely done being
written. As-is there is no way to know that a file on disk is still
being written to by suricata.
Don't use pcre for the high level rule parsing, instead
using a tokenizing parser for breaking out the rule
into keywords and options.
Much faster, especially on older CPUs. Should also allow
us to provide better context where a rule parse error
occurs.
Converting the NTP parser to the new registration method is a simple,
3-step process:
- change the extern functions to use generic input parameters (functions
in all parsers must share common types to be generic) and cast them
- declare the Parser structure
- remove the C code and call the registration function
Add a common structure definition to contain all elements
required for registering a parser.
This also reduces the risk of incorrectly registering a parser: the
compiler will type-check functions.
The registration function allows factorization of the code. It can be
used to register parsers, but is not mandatory.
If extra registration code is needed (for functions not in the structure),
it is still possible to add it by calling the C functions after the registration.
Add StringToAppProto to map a protocol name to an AppProto.
Exposing this function is required to let parsers discover their
AppProto identifier constant dynamically.
For example, a parser can request this value, and use it for
registration without knowing the value.
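Usage would then look roughly like this (assuming the function takes the
protocol name and returns the AppProto constant):

AppProto alproto = StringToAppProto("ntp");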
pfring.h brings a different version of likely/unlikely that gives
warnings. So make sure we include our own before.
Make sure pfring.h isn't included globally due to apparent redefinition
of pthread_rwlock_t.
This patch adds support for hw bypass by enabling flow offload in the network
card (when supported) and implementing the BypassPacketsFlow callback.
Hw bypass support is disabled by default, and can be enabled by setting
"bypass: yes" in the pfring interface configuration section in suricata.yaml.
The netflow entry collects the minimum and maximum
time to live during the life of the incoming flow.
This adds those fields to the netflow event.
Signed-off-by: Eric Leblond <eric@regit.org>
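In the eve netflow record this shows up as two extra fields, for example
(values illustrative; field names assumed to be min_ttl and max_ttl):

"netflow": { ..., "min_ttl": 58, "max_ttl": 64 }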
Reimplement the keyword to match on the SHA-1 fingerprint of the TLS
certificate as an mpm keyword.
alert tls any any -> any any (msg:"TLS cert fingerprint test";
tls_cert_fingerprint;
content:"4a:a3:66:76:82:cb:6b:23:bb:c3:58:47:23:a4:63:a7:78:a4:a1:18";
sid:12345;)
https://redmine.openinfosecfoundation.org/issues/2215
Fixes issue with events being dropped since socket was non-blocking for
offline run modes.
Add a method for determining offline from run mode. Make sure SCInstance
offline is set correctly. Use current run mode to set socket flags.
SCLogMapLogLevelToSyslogLevel(): treat SC_LOG_PERF messages as LOG_DEBUG
Previously, when logging to syslog, perf events had a default EMERG priority,
which could be a bit confusing.
An empty value for coredump.max-dump in the config-file leads to a segfault because of a NULL-pointer dereference in CoredumpLoadConfig().
Here is a configuration example:
coredump.max-dump: []
This lets suricata crash with a segfault:
ASAN-output:
==9412==ERROR: AddressSanitizer: SEGV on unknown address 0x000000000000 (pc 0x7f22e851aa28 bp 0x7ffd90006fc0 sp 0x7ffd90006740 T0)
0 0x7f22e851aa27 in strcasecmp (/usr/lib/x86_64-linux-gnu/libasan.so.3+0x51a27)
1 0x5608a7ec0108 in CoredumpLoadConfig /root/suricata-1/src/util-coredump-config.c:52
2 0x5608a7e8bb22 in PostConfLoadedSetup /root/suricata-1/src/suricata.c:2752
3 0x5608a7e8c577 in main /root/suricata-1/src/suricata.c:2892
4 0x7f22e4c622b0 in __libc_start_main (/lib/x86_64-linux-gnu/libc.so.6+0x202b0)
5 0x5608a7a30c59 in _start (/usr/local/bin/suricata+0xc4c59)
Bug #2276
If there are empty values in the config-file where integer values are expected, strtoimax in the ConfGetInt-function will segfault because of NULL-pointer dereference.
Here is a configuration example:
pcre.match-limit: []
This will let suricata crash with a segfault.
ASAN-output:
ASAN:DEADLYSIGNAL =================================================================
16951ERROR: AddressSanitizer: SEGV on unknown address 0x000000000000 (pc 0x7fa690e3ccc5 bp 0x000000000000 sp 0x7ffd0d770ad0 T0)
0 0x7fa690e3ccc4 (/lib/x86_64-linux-gnu/libc.so.6+0x36cc4)
1 0x7fa6946a6534 in strtoimax (/usr/lib/x86_64-linux-gnu/libasan.so.3+0x44534)
2 0x55e0aeba6499 in ConfGetInt /root/suricata-1/src/conf.c:390
3 0x55e0aed2545d in DetectPcreRegister /root/suricata-1/src/detect-pcre.c:99
4 0x55e0aec1b4ce in SigTableSetup /root/suricata-1/src/detect.c:3783
5 0x55e0aeeed58d in PostConfLoadedSetup /root/suricata-1/src/suricata.c:2690
6 0x55e0aeeee4f2 in main /root/suricata-1/src/suricata.c:2892
7 0x7fa690e262b0 in __libc_start_main (/lib/x86_64-linux-gnu/libc.so.6+0x202b0)
8 0x55e0aea92d39 in _start (/usr/local/bin/suricata+0xc7d39)
AddressSanitizer can not provide additional info.
SUMMARY: AddressSanitizer: SEGV (/lib/x86_64-linux-gnu/libc.so.6+0x36cc4)
This commit fixes Ticket #2275
If someone accidentally writes invalid characters in some parts of the suricata.yaml config file, the size parameter of the ParseSizeString function becomes NULL and gets dereferenced. Suricata crashes with SEGV. This commit fixes Ticket #2274
The following config value leads to a Segfault:
app-layer.protocols.smtp.inspected-tracker.content-inspect-window: *4096
If 'raw' content patterns were used in a dns_query rule, the raw
patterns would only be evaluated for TCP, but not for UDP.
This patch adds the inspection for UDP as well.
Bug #2263.
Leak in error path as seen by scan-build:
CC detect-engine-port.o
detect-engine-port.c:1083:13: warning: Potential leak of memory pointed to by 'temp_rule_var_port'
return -1;
^
The older random functions returned random values in the range of
0 - RAND_MAX. This is what the http randomize code was expecting.
Newer methods, based on getrandom (or probably Windows too), return
a much larger range of values, including negative values and values > RAND_MAX.
This patch adds a wrapper to turn the returned value into the expected
range before using it in the http code.
The same is true for the stream engine.
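A sketch of such a wrapper (names are illustrative, not the actual ones):

#include <stdint.h>
#include <stdlib.h>

/* fold an arbitrarily wide (possibly negative) random value back into
 * the [0, RAND_MAX] range the http/stream randomize code expects */
static int random_in_expected_range(long wide_value)
{
    return (int)((uint64_t)wide_value % ((uint64_t)RAND_MAX + 1));
}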
Interface name shortening introduces double periods ('..') as spacers,
which cause issues during JSON stats serialization, as '.'
characters are also used there as separators to define nesting of the JSON
output. This commit makes sure that '..' are skipped during tokenizing.
Fixes Redmine bug #2208.