The headers table from client to server
and the one from server to client
may have different maximum sizes
(even if both endpoints have to keep both tables)
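As a rough illustration (a sketch only; the type and field names here
are hypothetical, not Suricata's), each direction keeps its own dynamic
table with its own maximum size:

    // Sketch only: per-direction HPACK dynamic tables, each with its
    // own maximum size. Names here are hypothetical.
    struct HpackTable {
        entries: Vec<(Vec<u8>, Vec<u8>)>, // (header name, header value)
        max_size: usize,                  // limit for this direction only
    }

    struct Http2DynTables {
        client_to_server: HpackTable,
        server_to_client: HpackTable,
    }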
The scenario is the use of dummy padding in a WriteAndX request
or in other similar commands using a data offset.
Parsing now skips these dummy bytes and generates one event
The evasion scenario is:
- a first dummy write of one byte at offset 0 is done
- a second, full write of EICAR at offset 0 is then done
  and does not trigger detection
The last write holds the final value, and as we cannot "cancel"
the previous write, we set an event which is then transformed into
an app-layer decoder alert
This patch addresses issues discovered in Redmine ticket 3896. With the
approach of finding the latest record, there was a chance that no record
was found at all and consumed + needed fell short of the input length.
e.g.
input_len = 1000
input = 01 05 00 02 00 03 a5 56 00 00 .....
There exists no |05 00| identifier in the rest of the record. After
having parsed |05 00|, there was a search for another record with the
leftover data. The current data length at this point would be 997. Since
the identifier was not found in the data, we calculated the consumed
bytes at this point as consumed = current_data.len() - 1, which would be
996. Needed bytes stay at a constant of 2. So, consumed + needed = 996
+ 2 = 998, which is less than the initial input length of 1000, and
hence the assertion fails.
There were two possible fixes to this problem:
1. Find the latest record, but fall back to the last found record in
   case no new record was found.
2. Always use the earliest record.
This patch takes approach (2). It also makes sure that the gap and the
current direction are the same.
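A minimal sketch of approach (2), using the |05 00| identifier from the
example above (the helper name is illustrative):

    // Sketch: scan for the earliest |05 00| identifier rather than the
    // latest one, so the consumed count can never skip past valid data.
    fn find_earliest_record(input: &[u8]) -> Option<usize> {
        input.windows(2).position(|w| w == [0x05, 0x00])
    }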
When one side of the connection reaches the STREAM_DEPTH condition,
the parser should be aware of this. Otherwise transactions will forever
be waiting for data in that direction.
This parameter is NULL, or a pointer to the previous state
of the previous protocol in the case of a protocol change,
for instance from HTTP1 to HTTP2.
This way, the new protocol can use the old protocol context.
For instance, HTTP2 mimics the HTTP1 request, to have an HTTP2
transaction with both request and response
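A hedged sketch of what the Rust side of such a state constructor could
look like (names and the exact signature are illustrative, not
Suricata's actual API):

    use std::os::raw::c_void;

    // Stand-in state type for this sketch.
    struct Http2State;

    // Illustrative only: a state constructor that receives the previous
    // protocol's state (or NULL) on a protocol change, e.g. HTTP1 -> HTTP2.
    #[no_mangle]
    pub extern "C" fn rs_http2_state_new(
        orig_state: *mut c_void, // NULL, or the previous protocol's state
        _orig_proto: u16,        // previous protocol id (type assumed)
    ) -> *mut c_void {
        if !orig_state.is_null() {
            // The new protocol can use the old context here, e.g. build
            // an HTTP2 transaction mimicking the pending HTTP1 request.
        }
        Box::into_raw(Box::new(Http2State)) as *mut c_void
    }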
To signal incomplete data, we must return the number of
consumed bytes. When we get a banner and some records, we have
to take into account the number of bytes already consumed by
the banner parsing before reaching an incomplete record.
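For instance (a sketch only, with a stand-in for Suricata's
AppLayerResult type): if the banner took some bytes and the record
parser then stops short, the two counts must be added before returning:

    // Minimal stand-in for Suricata's AppLayerResult, for this sketch.
    pub struct AppLayerResult { pub status: i32, pub consumed: u32, pub needed: u32 }
    impl AppLayerResult {
        pub fn incomplete(consumed: u32, needed: u32) -> Self {
            AppLayerResult { status: 1, consumed, needed }
        }
    }

    // Sketch: include what the banner parsing already consumed when an
    // incomplete record is hit afterwards.
    fn signal_incomplete(banner_consumed: usize, record_consumed: usize,
                         needed: u32) -> AppLayerResult {
        AppLayerResult::incomplete((banner_consumed + record_consumed) as u32, needed)
    }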
As documented in RFC 7541, section 6.1:
The index value of 0 is not used. It MUST be treated as a decoding
error if found in an indexed header field representation.
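The corresponding check is tiny; a sketch (error representation
illustrative):

    // Sketch: index 0 in an indexed header field representation is a
    // decoding error per RFC 7541 section 6.1.
    fn check_index(index: u64) -> Result<u64, &'static str> {
        if index == 0 {
            return Err("HPACK decoding error: index 0 is not used");
        }
        Ok(index)
    }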
Added `dns_parse_rdata_soa` to parse SOA fields into a `DNSRDataSOA`
struct.
Added logging for answer and authority SOA records in both versions
1 & 2, as well as in the grouped format.
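The struct could look roughly like this (a sketch; the fields follow
the SOA RDATA layout of RFC 1035, exact types assumed):

    // Sketch of an SOA rdata struct per RFC 1035, section 3.3.13.
    pub struct DNSRDataSOA {
        pub mname: Vec<u8>,   // primary name server for the zone
        pub rname: Vec<u8>,   // mailbox of the responsible person
        pub serial: u32,      // zone version number
        pub refresh: u32,     // seconds between zone refreshes
        pub retry: u32,       // seconds between failed refresh retries
        pub expire: u32,      // upper bound on authoritative lifetime
        pub minimum: u32,     // minimum TTL for records in the zone
    }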
Borrow a macro from https://github.com/popzxc/stdext-rs that
will give us the Rust function name in SCLog messages.
As this trick only works on Rust 1.38 and newer, keep the old
macro around and set a feature based on a Rust version test
done during ./configure.
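The borrowed trick defines a nested function and reads its type name,
roughly like this (it relies on std::any::type_name, stabilized in
Rust 1.38, hence the version test):

    // Roughly the stdext-rs trick: the type name of a nested fn embeds
    // the name of the enclosing function; strip the trailing "::f".
    macro_rules! function {
        () => {{
            fn f() {}
            fn type_name_of<T>(_: T) -> &'static str {
                std::any::type_name::<T>()
            }
            let name = type_name_of(f);
            &name[..name.len() - 3]
        }};
    }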
Functions written in Rust will need to call suricata::plugin::init()
to bootstrap themselves. This bootstrap process sets the log level
within the Rust address space, and hooks up function pointers
that are expected to be set during normal runs of Suricata.
Make sure the log level is set up with a pure Rust function, so
when it is set, it's set within the address space of the caller.
This is important for Rust plugins where the Rust modules are not
in the address space of the Suricata main process.
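A hedged sketch of a plugin entry point doing this bootstrap (the entry
point name is illustrative):

    // Sketch: a Rust plugin bootstrapping itself at load time. The
    // entry point name is illustrative; init() is the bootstrap
    // described above.
    #[no_mangle]
    pub extern "C" fn SCPluginInit() {
        // Sets the log level inside this library's address space and
        // hooks up function pointers normally set by a Suricata run.
        suricata::plugin::init();
    }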
This commit logs http2 as an http event. The idea is to somewhat
normalize http/http2 so common info can be version agnostic.
This puts the http2-specific fields in an "http2" object inside
the "http" object.
HTTP2 headers/values that are in common with HTTP1 are logged
under the "http" object to be compatible with HTTP1 logging.
Only run cbindgen when necessary. This is a bit tricky. When
building a dist we want to unconditionally build the headers.
When going through a "make; sudo make install" type process,
cbindgen should not be run, as the headers already exist and are
valid, and the environment under sudo is more often than not
unable to pick up the Rust toolchain when installed with rustup.
For the normal "make" case we have the gen/rust-bindings.h file
depend on the library file; this will cause it to be rebuilt only
if the code was modified.
For "make dist" we unconditionally create "dist/rust-bindings.h".
This means the generated file could be in 2 locations, so update
configure.ac and the header search logic to find it.
The "gen/rust-bindings.h" should be picked up first if it exists,
for those who develop from a dist archive where "dist/rust-bindings.h"
also exists.
Not completely happy having the same file in 2 locations, but not
sure how else to get the dependency tracking correct.
Elasticsearch didn't accept the 'hassh' and 'hassh.string' keys. It
would see the first 'hassh' as a string, and split the second key into
an object 'hassh' with a string member 'string'. That meant two
different types for 'hassh', so it rejected it.
This patch mimics the ja3(s) logging by creating a 'hassh' object
with 2 members: 'hash', which holds the md5 representation, and
'string' which holds the string representation.
In case of lossy connections the NFS state would properly clean up
transactions, including file transactions. However, for files the
state was never set to 'truncated', leaving the files 'active'.
This would lead to these files staying in the NFS state. In long
running sessions with lots of files this would lead to performance
and memory use issues.
This patch truncates the file that was being transmitted when
a file transaction is being closed.
Based on 65e9a7c31c
The DCERPC parser so far provided support for single transactions only.
Extend that to support multiple transactions.
In order for multiple transactions to work, there must be a
transaction identifier in the protocol's header that lets a
response match the request. In DCERPC over TCP, that param is call_id in
the header, which is a 32 bit field. For UDP, however, since it uses a
different version of RPC (4.x), this is defined by the serial number
field in the header. This field, however, is not contiguous and needs to
be assembled from the provided serial_low and serial_hi fields.
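A sketch of that reassembly (the helper name is illustrative; the two
header bytes are the serial_hi and serial_low fields described above):

    // Sketch: the DCERPC/UDP serial number is split across the header's
    // serial_hi and serial_low bytes; reassemble them into one value.
    fn assemble_serial_no(serial_hi: u8, serial_low: u8) -> u16 {
        ((serial_hi as u16) << 8) | serial_low as u16
    }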
Uses "cargo doc --no-deps" to build the documentation just for
our Suricata package. Without --no-deps, documentation would be
built for all our dependencies as well.
The generated documentation will end up in target/doc as HTML.
Optional callback a parser can register for applying configuration
to the 'transaction'. Most parsers have a bidirectional tx. For those
parsers that have different types of transaction handling, this new
callback can be used to properly apply the config.
AppLayerTxData is a structure each tx should include that will contain
the common fields the engine needs for tracking logging, detection and
possibly other things.
AppLayerTxConfig will be used by the detection engine to configure
the transaction.
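Sketched shapes of the two structures (the fields are illustrative,
not the actual definitions):

    // Illustrative only: common per-tx data the engine tracks.
    pub struct AppLayerTxData {
        pub logged: u32,          // which loggers have handled this tx
        pub detect_flags_ts: u64, // detection progress, to-server side
        pub detect_flags_tc: u64, // detection progress, to-client side
    }

    // Illustrative only: knobs the detection engine sets on a tx.
    pub struct AppLayerTxConfig {
        pub log_flags: u8,
    }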
This module uses the `der-parser` crate to parse ASN.1 objects, in order
to replace src/util-decode-asn1.c.
It also handles the parsing of the asn1 keyword rules and the detection
checks performed in src/detect-asn1.c.
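A minimal usage sketch of the crate (assuming der-parser's parse_der
entry point; the surrounding checks are illustrative):

    use der_parser::parse_der;

    // Sketch: decode a single DER object from a byte slice; the decoded
    // object is what the asn1 keyword checks would then inspect.
    fn decode(input: &[u8]) {
        match parse_der(input) {
            Ok((_rest, obj)) => println!("parsed DER object: {:?}", obj),
            Err(e) => println!("DER parse error: {:?}", e),
        }
    }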
In case of lossy connections the SMB state would properly clean up
transactions, including file transactions. However, for files the
state was never set to 'truncated', leaving the files 'active'.
This would lead to these files staying in the SMB state. In long
running sessions with lots of files this would lead to performance
and memory use issues.
This patch truncates the file that was being transmitted when
a file transaction is being closed.
JsonBuilder is a Rust module for creating JSON output. Unlike
Jansson, the final JSON string is built up as items are added,
instead of building up an object tree and rendering it when
done.
The idea is to create a more efficient JSON serializer instead
of a flexible one.
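A hedged usage sketch of the streaming style (the method names here
are assumptions for illustration, not the confirmed API):

    // Sketch: values are appended to the output buffer as they are
    // set, rather than building a tree first. Method names assumed.
    fn build() -> Result<String, JsonError> {
        let mut js = JsonBuilder::new_object();
        js.set_string("app_proto", "http")?;
        js.open_object("http")?;
        js.set_uint("status", 200)?;
        js.close()?; // closes "http"
        js.close()?; // closes the root object
        Ok(js.buffer().to_string())
    }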
All the C code that became dead after the Rust implementation is hereby
removed. Invalid/migrated tests have also been deleted.
All the function calls in C have been replaced with appropriate calls to
Rust functions. The same has been done for smb/detect.rs as a part of
this migration.
This parser rewrites the DCE/RPC protocol implementation of Suricata
in Rust. More tests have been added to improve the coverage and some
fixes have been made to the tests already written in C. Most of the
valid tests from C have been imported to Rust.
File anatomy
src/dcerpc.rs
This file contains the implementation of single transactions in DCE/RPC
over TCP. It takes care of REQUEST, RESPONSE, BIND and BINDACK business
logic before and after the data parsing. DCERPCState holds the state
corresponding to a particular transaction and handles all important
aspects. It also defines any common structures and constants required
for DCE/RPC parsing irrespective of the carrier protocol.
src/dcerpc_udp.rs
This file contains the implementation of single transactions in DCE/RPC
over UDP. It takes care of REQUEST and RESPONSE parsing. It borrows the
Request and Response structs from src/dcerpc.rs.
src/detect.rs
This file contains the implementation of dce_iface and opnum detect
keywords. Both the parsing and the matching is taken care of by
functions in this file. Tests have been rewritten with the test data
from C.
src/parser.rs
This file contains all the nom parsers written for DCERPCRequest,
DCERPCResponse, DCERPCBind, DCERPCBindAck, DCERPCHeader, DCERPCHdrUdp.
It also implements functions to assemble and convert UUIDs. All the
fields have their endianness defined unless it's an 8 bit field or an
unusable one; in that case it's little endian, but it won't make any
difference.
src/mod.rs
This file contains all the modules of the dcerpc folder which should
be taken into account during compilation.
Function calls
This is a state-wise implementation of the protocol for single
transactions only, i.e. a valid state object is required to parse any
record. Function calls start with the app layer parser in C, which
detects the application layer protocol to be DCE/RPC and calls the
appropriate functions in C which in turn make a call to these functions
in Rust using FFI. All the necessary information is passed from C to the
parsers and handlers in Rust.
Implementation
When a batch of input comes in, there is an analysis of whether the
input header and the direction are appropriate. The next check concerns
the fragment size: if it is as defined by the header, processing goes
through; else the data is buffered and more data is awaited. After this,
the type of record indicated by the header is checked and a call to the
appropriate handler is made. After the handling, the state is updated
with the latest information about whatever record came in.
AppLayerResult::ok() is returned in case all went well; else
AppLayerResult::err() is returned, indicating something went wrong.
This commit adds updated incomplete handling for the RFB parser. If
incomplete data is processed, the successfully consumed position and
the length of the remainder + 1 are returned. If the next packet is not
empty, Suricata will call the parser again.
This commit is a result of discussion on https://github.com/OISF/suricata/pull/4792.
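In code, that return is roughly (a sketch, reusing the AppLayerResult
stand-in from the earlier sketch):

    // Sketch: report the successfully consumed bytes and ask for the
    // remainder plus one, so the parser is only called again once more
    // data actually arrives.
    fn rfb_incomplete(consumed: usize, remainder: usize) -> AppLayerResult {
        AppLayerResult::incomplete(consumed as u32, (remainder + 1) as u32)
    }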
Addresses #3570 by adding extra checks on calculated size requests.
With the given input, the parser eventually arrived at
parser::parse_failure_reason() which parsed from the remaining four
bytes (describing the string length) that the failure string to follow
would be 4294967295 bytes long. While calculating the total size of the
data to request via AppLayerResult::incomplete(), adding the four bytes
for the parsed but not consumed string length caused the u32 length to
overflow, resulting in a much smaller value triggering the bug condition.
This problem was addressed by more carefully checking the values in each
step that could overflow: one subtraction, one addition (which could
overflow the usize length values), and a final check to determine whether
the result still fits into the u32 values required by
AppLayerResult::incomplete(). If so, we safely convert the values and
pass them to the result type. If not, we simply return
AppLayerResult::err() and do not erroneously and silently request the
wrong amount.
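A sketch of that checked sequence (names illustrative):

    use std::convert::TryFrom;

    // Sketch: one checked subtraction, one checked addition, then a fit
    // check into u32. Any failure means the caller should return
    // AppLayerResult::err() instead of requesting a bogus amount.
    fn bytes_to_request(total_len: usize, consumed: usize,
                        hdr_len: usize) -> Option<u32> {
        let missing = total_len.checked_sub(consumed)?; // could underflow
        let needed = missing.checked_add(hdr_len)?;     // could overflow
        u32::try_from(needed).ok()                      // must fit in u32
    }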
This commit adds support for the Remote Framebuffer Protocol (RFB) as
used, for example, by various VNC implementations. It targets the
official versions 3.3, 3.7 and 3.8 of the protocol and provides logging
for the RFB handshake communication for now. Logged events include
endpoint versions, details of the security (i.e. authentication)
exchange as well as metadata about the image transfer parameters.
Detection is enabled using keywords for:
- rfb.name: Session name as sticky buffer
- rfb.sectype: Security type, e.g. VNC-style challenge-response
- rfb.secresult: Result of the security exchange, e.g. OK, FAIL, ...
The latter could be used, for example, to detect brute-force attempts
on open VNC servers, while the name could be used to map unwanted VNC
sessions to the desktop owners or machines.
We also ship example EVE-JSON output and keyword docs as part of the
Sphinx source for Suricata's RTD documentation.
This patch simplifies the return codes app-layer parsers use,
in preparation for a patch set overhauling the return type.
Introduce two macros:
APP_LAYER_OK (value 0)
APP_LAYER_ERROR (value -1)
Update all parsers to use these.
Unfortunately, the transition to nom 5 (and functions instead of macros)
has side effects, one of them being that lots of type annotations are
required when using a parser, for example in a match expression.
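A small example of the kind of annotation now needed (nom 5 style):

    use nom::number::streaming::be_u16;
    use nom::IResult;

    fn demo(input: &[u8]) {
        // Without the explicit IResult annotation, the generic error
        // type of be_u16 cannot be inferred in a match like this.
        let res: IResult<&[u8], u16> = be_u16(input);
        match res {
            Ok((_rem, value)) => println!("parsed: {}", value),
            Err(_) => println!("parse error"),
        }
    }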
Close all prior transactions in the direction of the GAP, except the
file xfers. Those use their own logic described below.
After a GAP all normal transactions are closed. File transactions
are left open as they can handle GAPs in principle. However, the
GAP might have contained the closing of a file and therefore it
may remain active until the end of the flow.
This patch introduces a time based heuristic for these transactions.
After the GAP all file transactions are stamped with the current
timestamp. If 60 seconds later a file has seen no update, it's marked
as closed.
This is meant to fix resource starvation issues observed in long
running SMB sessions where packet loss was causing GAPs. Due to the
similarity of the NFS and SMB parsers, this issue is fixed for NFS
as well in this patch.
Bug #3424.
Bug #3425.
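A sketch of the heuristic (names and the exact bookkeeping are
illustrative):

    // Illustrative sketch: file transactions are stamped at GAP time
    // and closed if they see no update within the next 60 seconds.
    struct FileTx {
        post_gap_ts: u64, // set to the GAP timestamp, 0 if unset
        done: bool,
    }

    fn post_gap_file_housekeeping(txs: &mut [FileTx], now: u64) {
        const POST_GAP_FILE_TIMEOUT: u64 = 60; // seconds
        for tx in txs.iter_mut() {
            if tx.post_gap_ts != 0 && now >= tx.post_gap_ts + POST_GAP_FILE_TIMEOUT {
                tx.done = true; // no update since the GAP: mark closed
            }
        }
    }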
For make clean, only remove gen/ if cbindgen is available.
This prevents make clean from removing gen/ when the headers
were bundled but cbindgen is not available to regenerate them.
Unconditionally remove gen and vendor in maintainerclean.
The modifications made as part of the cbindgen commit caused issues
with distcheck; revert the Makefile to how it was with the Python
generator, but still using cbindgen.
Also, always assume we'll include the generated headers in the
distribution archive, to fix make distcheck from distribution
archives with headers included but no cbindgen.
If sources are vendored, we get the same effect as using 'frozen'
with a lock file, as the Cargo.lock is generated based
on the vendored sources.
This also removes the need to ship a Cargo.lock.
Fixed out-of-source builds with vendored sources.
Rust 1.40 in strict mode will now fail the build on the
presence of unnecessary parentheses.
warning: unnecessary parentheses around type
  --> src/smb/smb2_ioctl.rs:41:12
   |
41 |     -> (&mut SMBTransaction)
   |        ^^^^^^^^^^^^^^^^^^^^^ help: remove these parentheses
   |
   = note: `#[warn(unused_parens)]` on by default
Since ebcc4db84a the flow worker runs
file pruning after parsing, detection and logging. This means we can
simplify the pruning logic. If a file is in state >= CLOSED, we can
prune it. Detection and outputs will have had a final chance to
process it.
Remove the calls to the pruning code from Rust. They are no longer
needed.
If rustup is in use, and a user uses sudo or su for the make
install, the install may fail with a "no default toolchain"
error.
To prevent this, detect at configure time if rustup is being used,
then set RUSTUP_HOME for all calls to cargo.
Add a rule keyword, dns.opcode, to match on the opcode flag
found in the DNS request and response headers.
Only exact matches, with optional negation, are allowed.
Examples:
- dns.opcode:4;
- dns.opcode:!1;