Some alprotos will remain static and defined as constants,
such as ALPROTO_UNKNOWN=0, or ALPROTO_FAILED.
The regular, already-used protocols keep their static
identifiers for now, such as ALPROTO_SNMP, but this could be made more
dynamic in a later commit.
ALPROTO_FAILED was used in comparisons, and these needed to change to
use either ALPROTO_MAX or the standard function AppProtoIsValid.
Ticket: 7465
If a big chunk of data is parsed in one go, we could create many
transactions, even while marking them as complete, and incur
quadratic complexity calling find_request.
The proposed solution is to fail to create a new transaction if too
many already exist.
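A minimal sketch of the approach (names and limit are illustrative,
not the actual implementation):

    #[derive(Default)]
    struct Transaction {
        complete: bool,
    }

    #[derive(Default)]
    struct State {
        transactions: Vec<Transaction>,
    }

    // Illustrative cap; the real limit may differ.
    const MAX_TX: usize = 256;

    impl State {
        fn new_tx(&mut self) -> Option<&mut Transaction> {
            if self.transactions.len() >= MAX_TX {
                return None; // fail instead of growing without bound
            }
            self.transactions.push(Transaction::default());
            self.transactions.last_mut()
        }
    }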
Allow `set_uint` to accept any number value that can be converted to a
u64. Prevents callers from having to do `as u64`.
This required fixing up any callers that used `.into()` to just pass in
their value without the into conversion.
Most calls using `as u64` can have that cast removed, with the exception
of `usize` values, which must still be cast, as the conversion can't
be guaranteed to be infallible.
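Sketched with a standalone function (JsonBuilder's real signature may
differ in details):

    // `impl Into<u64>` admits u8/u16/u32/u64 callers directly.
    fn set_uint(out: &mut String, key: &str, val: impl Into<u64>) {
        let v: u64 = val.into();
        out.push_str(&format!("\"{}\":{},", key, v));
    }

    fn demo() {
        let mut buf = String::new();
        set_uint(&mut buf, "port", 443u16); // no `as u64` needed
        let len: usize = 12;
        // usize still needs a cast: Into<u64> is not implemented for it.
        set_uint(&mut buf, "len", len as u64);
    }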
Add events for the following resource name parsing issues:
- name truncated as it's too long
- maximum number of labels reached
- infinite loop
Currently these events are only registered when encountered and
recoverable, that is, where we are able to return some of the name,
usually in a truncated state.
As name parsing has many code paths, we pass in a pointer to a flag
field that can be updated by the name parser. This is done in
addition to the flags being set on a specific name since, when
logging, we want to designate which fields are truncated, etc., but
for alerts we just care that something happened during the parse. It
also reduces errors, as checking the flags and setting the event can't
be forgotten if some new parser is written that also parses names.
Ticket: #7280
Once a name has gone over 1025 chars it will be truncated to 1025
chars and no more labels will be added to it; however, the name will
continue to be parsed up to the label limit in an attempt to find the
end so parsing can continue.
This introduces a new struct, DNSName, which contains the name and
any flags indicating name parsing errors that should not abort parsing
of the complete message; for example, parsing can continue after
infinite recursion once some labels are parsed, or after truncation of
a name where compression was used, since we then know the start of the
next data to be parsed.
This keeps logged DNS messages from exceeding our maximum size of
10MB in the case of really long names.
Ticket: #7280
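The shape of the struct, sketched with illustrative flag values and
field names:

    const NAME_FLAG_TRUNCATED: u8 = 0b0001;
    const NAME_FLAG_LABEL_LIMIT: u8 = 0b0010;
    const NAME_FLAG_INFINITE_LOOP: u8 = 0b0100;

    pub struct DNSName {
        pub value: Vec<u8>, // the name itself, capped at 1025 bytes
        pub flags: u8,      // parse issues that don't abort the message
    }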
Optimize detection and logging to avoid going through
all the live transactions when only a few were modified.
Two boolean fields are added to the tx data: updated_tc and updated_ts.
The app-layer parsers are now responsible for setting these when
needed, and logging and detection use them to skip
transactions that were not updated.
There may be some more optimization remaining in where we set
both updated_tc and updated_ts in functions returning
a mutable transaction, by checking if all the callers
are called in one direction only (request or response).
Ticket: 7087
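Schematically (simplified, not the actual Suricata code):

    #[derive(Default)]
    struct TxData {
        updated_ts: bool,
        updated_tc: bool,
    }

    // Skip transactions that saw no update in the direction being
    // handled; clear the flag once the tx has been processed.
    fn process(txs: &mut [TxData], to_client: bool) {
        for tx in txs.iter_mut() {
            let updated = if to_client { tx.updated_tc } else { tx.updated_ts };
            if !updated {
                continue; // nothing new to inspect or log for this tx
            }
            // ... detection and logging happen here ...
            if to_client {
                tx.updated_tc = false;
            } else {
                tx.updated_ts = false;
            }
        }
    }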
For example, "requires: foo bar" is an unknown requirement, however
its not tracked, nor an error as it follows the syntax. Instead,
record these unknown keywords, and fail the requirements check if any
are present.
A future version of Suricata may have new requires keywords, for
example a check for keywords.
Ticket: #7418
This prevents the clippy warning:
508 | #[derive(FromPrimitive, Debug)]
| ^------------
| |
| `FromPrimitive` is not local
| move the `impl` block outside of this constant `_IMPL_NUM_FromPrimitive_FOR_IsakmpPayloadType`
509 | pub enum IsakmpPayloadType {
| ----------------- `IsakmpPayloadType` is not local
|
= note: the derive macro `FromPrimitive` defines the non-local `impl`, and may need to be changed
= note: the derive macro `FromPrimitive` may come from an old version of the `num_derive` crate, try updating your dependency with `cargo update -p num_derive`
= note: an `impl` is never scoped, even when it is nested inside an item, as it may impact type checking outside of that item, which can be the case if neither the trait or the self type are at the same nesting level as the `impl`
= note: items in an anonymous const item (`const _: () = { ... }`) are treated as in the same scope as the anonymous const's declaration for the purpose of this lint
= note: this warning originates in the derive macro `FromPrimitive` (in Nightly builds, run with -Z macro-backtrace for more info)
To ensure that all calls to cargo use the same environment variables,
put the environment variables in CARGO_ENV so every call to cargo can
easily use the same vars.
The Cargo build system is smarter than make; it can detect a change
in an environment variable that affects the build, so a change to
SURICATA_LUA_SYS_HEADER_DST could cause a rebuild.
Also update suricata-lua-sys, which is smarter about copying headers. It
will only copy if the destination does not exist, or the source header
is newer than the target, which can also prevent unnecessary rebuilds.
This is mainly to fix an issue where subsequent builds may fail,
especially when running an editor with an LSP enabled:
Update lua crate to 0.1.0-alpha.5. This update will force a rewrite of
the headers if the env var SURICATA_LUA_SYS_HEADER_DST changes. This
fixes the issue where the headers may not be written.
The cause is that Rust dependencies are cached, and if your editor is
using rust-analyzer, it might cache the build without this var being
set, so these headers are not available to Suricata. This crate update
forces the re-run of the Lua build.rs if this env var changes, fixing
this issue.
Generic ssn2vec_map was a HashMap used for mapping session key to
different types of vector data:
- GUID
- filename
- share name
Turn this into a bounded LruCache. Rename to ssn2vec_cache.
Size of the cache is 512 by default, and can be configured using:
`app-layer.protocols.smb.max-session-cache-size`
Ticket: #5672.
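Sketch using the lru crate (the key type is simplified here; the real
key is the SMB session key):

    use lru::LruCache;
    use std::num::NonZeroUsize;

    fn demo() {
        // Bounded replacement for the unbounded HashMap.
        let mut ssn2vec_cache: LruCache<u64, Vec<u8>> =
            LruCache::new(NonZeroUsize::new(512).unwrap());
        ssn2vec_cache.put(1, b"filename".to_vec());
        // At capacity, put() evicts the least recently used entry.
        let _hit = ssn2vec_cache.get(&1);
    }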
Reimplement the ssnguid2vec_map HashMap as a LruCache.
Since this is a DCERPC record cache, name it as such.
Default size is 128. Can be controlled by
`app-layer.protocols.smb.max-dcerpc-frag-cache-size`.
Ticket: #5672.
Turn the map mapping the smb session key to smb tree into a lru cache,
limited to 1024 by default.
Add `app-layer.protocols.smb.max-tree-cache-size` option to control the
limit.
Ticket: #5672.
Don't tag the session as gap'd when the GAP is in a precise location:
1. in "skip" data, where the GAP just fits the skip data
2. in file data, where we pass the GAP on to the file
This reduces the load of GAP post-processing that is unnecessary in
these cases.
This crate lets us instruct it where to copy the header files instead
of our Makefile trying to find the correct ones and copying them into
place.
This can prevent the simultaneous-copy errors sometimes seen when
running make without a clean.
Introduce a common function for mapping names to IDs that performs
bounds checking.
Note: For event IDs in the enum that are larger than a uint8_t, -1
will be returned instead of -4. -4 has a special meaning during
signature parsing: it means requirements were not met. -4 has no
special handling prior to requirements, or the meaning has been lost.
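The idea, sketched in Rust with hypothetical names:

    // Map an event name to its ID, bounds-checking the result.
    fn event_id_from_name(name: &str, table: &[(&str, i32)]) -> i32 {
        for (known, id) in table {
            if name.eq_ignore_ascii_case(known) {
                // IDs must fit a uint8_t; anything larger maps to -1,
                // since -4 is reserved for "requirements not met".
                return if (0..=255).contains(id) { *id } else { -1 };
            }
        }
        -1
    }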
Add a pure rust base64 decoder. This supports 3 modes of operation just
like the C decoder as follows.
1. RFC 2045
2. RFC 4648
3. Strict
One notable change is that "strict" mode is carried out by the Rust
base64 crate instead of our own native code. This crate was already
used for encoding in a few places like datasets of string type. As a
part of this mode, only strings that can be reliably converted back
are now decoded.
The decoder fn is available to C via FFI.
Bug 6280
Ticket 7065
Ticket 7058
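A sketch of the three modes (illustrative; the real decoder differs in
details, but strict mode delegates to the base64 crate):

    use base64::engine::general_purpose::STANDARD;
    use base64::Engine;

    enum Base64Mode {
        Rfc2045,
        Rfc4648,
        Strict,
    }

    fn decode(input: &[u8], mode: Base64Mode) -> Option<Vec<u8>> {
        match mode {
            // Strict: reject anything that does not decode cleanly.
            Base64Mode::Strict => STANDARD.decode(input).ok(),
            // RFC 2045: skip bytes outside the alphabet (e.g. line breaks).
            Base64Mode::Rfc2045 => {
                let kept: Vec<u8> = input
                    .iter()
                    .filter(|&&b| {
                        b.is_ascii_alphanumeric()
                            || matches!(b, b'+' | b'/' | b'=')
                    })
                    .copied()
                    .collect();
                STANDARD.decode(kept).ok()
            }
            // RFC 4648: shown here as plain standard-alphabet decoding.
            Base64Mode::Rfc4648 => STANDARD.decode(input).ok(),
        }
    }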
According to RFC 3261, a single header can be repeated one or more times,
and its name can also be specified using the 'compact form.'
This patch updates the hashmap used for storing headers to accommodate multiple
values instead of just one.
Additionally, if a header name is defined in the compact form, it is expanded
into its long form (i.e., the standard name).
This conversion simplifies the logic for matching a given header
and ensures 1:1 parity with keywords.
Ticket #6374
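Schematically (storage type and a subset of the compact-form table):

    use std::collections::HashMap;

    // One header name can now map to several values.
    type Headers = HashMap<String, Vec<String>>;

    // Expand RFC 3261 compact names to the standard long form.
    fn expand(name: &str) -> &str {
        match name {
            "v" => "Via",
            "f" => "From",
            "t" => "To",
            "i" => "Call-ID",
            "m" => "Contact",
            "l" => "Content-Length",
            _ => name,
        }
    }

    fn insert(headers: &mut Headers, name: &str, value: &str) {
        headers
            .entry(expand(name).to_string())
            .or_default()
            .push(value.to_string());
    }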
Once we are tracking tx progress per-direction for PGSQL, we can trigger
the raw stream reassembly, for detection purposes, as soon as the
transactions are completed in the given direction.
Task #7000
PGSQL's current implementation tracks the transaction progress without
taking into consideration flow direction, and also has indirections
that make it harder to understand how the progress is tracked, as well
as when a request or response is actually complete.
This patch introduces tracking such progress per direction and adds
completion status per direction, too. This will help when triggering
raw stream reassembly or for unidirectional transactions, and may be
useful when we implement sub-protocols that can have multiple requests
per transaction, as well.
CancelRequests and TerminationRequests are examples of unidirectional
transactions. There won't be any responses to those requests, so we can
also mark the response side as done, and set their transactions as
completed.
Bug #7113
If a stream-only rule matches, and we find a tx where we
want to log the app-layer data, store into the tx data that
we already logged, so that we do not log the app-layer metadata again.
Ticket: 7085
DCERPC/TCP tends to return the same values for invalid and incomplete
headers. As a result, an invalid header and any traffic following it
are buffered and processed later on, assumed to be valid DCERPC
traffic.
Fix this by clearly defining error and incomplete data and taking
appropriate actions.
Bug 7230
warning: first doc comment paragraph is too long
--> src/detect/iprep.rs:57:1
|
57 | / /// value matching is done use `DetectUintData` logic.
58 | | /// isset matching is done using special `DetectUintData` value ">= 0"
59 | | /// isnotset matching bypasses `DetectUintData` and is handled directly
60 | | /// in the match function (in C).
| |_
|
= help: for further information visit https://rust-lang.github.io/rust-clippy/master/index.html#too_long_first_doc_paragraph
= note: `#[warn(clippy::too_long_first_doc_paragraph)]` on by default
help: add an empty line
warning: you seem to be trying to use `match` for destructuring a single pattern. Consider using `if let`
--> src/dcerpc/log.rs:36:33
|
36 | DCERPC_TYPE_BIND => match &state.bind {
| _________________________________^
37 | | Some(bind) => {
38 | | jsb.open_array("interfaces")?;
39 | | for uuid in &bind.uuid_list {
... |
51 | | None => {}
52 | | },
| |_____________^
|
= help: for further information visit https://rust-lang.github.io/rust-clippy/master/index.html#single_match
= note: `#[warn(clippy::single_match)]` on by default
In the DCERPC over TCP pcap, logging and rule matching are disrupted by adding a simple rule:
alert tcp any any -> any any (flow:to_server,established; \
dce_iface:5d2b62aa-ee0a-4a95-91ae-b064fdb471fc; dce_opnum:1; \
dce_stub_data; content:"|42 77 4E 6F 64 65 49 50 2E 65 78 65 20|"; \
content:!"|00|"; within:100; distance:97; sid:1; rev:1; )
Works: alert + 3 dcerpc records.
But when adding a trivial rule:
alert tcp any any -> any any (flow:to_server,established; \
dce_iface:5d2b62aa-ee0a-4a95-91ae-b064fdb471fc; dce_opnum:1; \
dce_stub_data; content:"|42 77 4E 6F 64 65 49 50 2E 65 78 65 20|"; \
content:!"|00|"; within:100; distance:97; sid:1; rev:1; )
alert tcp any any -> any any (dsize:3; sid:2; rev:1; )
The alert for sid:1 disappears and also there is one dcerpc event less.
In the single rule case we can aggressively free the transactions, as there
is only an sgh in the toserver direction.
This means that when we encounter the 2nd REQUEST, the first 2 transactions
have already been processed and freed. So for the 2nd REQUEST we open a new
TX and run inspection and logging on it.
When the 2nd rule is added, it adds toclient sgh as well. This means that we
will now slightly delay the freeing of the transactions.
As a consequence we still have the TX for the first REQUEST when the 2nd REQUEST
is parsed. This leads to the 2nd REQUEST re-using the TX. Since the TX is
already marked as inspected, it means the toserver rule now no longer matches.
Also we're not logging this TX correctly now.
This commit fixes the issue by not "finding" a TX that has already
been marked complete in the search direction.
Bug #7187.
base64 crate is updated to the latest version 0.22.1. This came with
several API changes which are applied to the code. The old calls have
been replaced with the newer calls.
This was done following the availability of better fns to directly
decode into slices/vectors as needed, and also because the previous
version was too old.
Along with this change, update the Cargo.lock.in to reflect all changes
in the package versions.
Task 7219
PgsqlTransactionState has a variant named "Init" which is a little too
generic to export to C. Fortunately this variant doesn't need to be
exposed to C; instead, remove it, as it was only used by
rs_pgsql_tx_get_alstate_progress, which also doesn't need to be public
or exposed to C.
Ticket: #7227
Following the same logic as for PGSQL, if there is a gap in an LDAP request or
response, the parser tries to sync up again by checking if the message can be
parsed and effectively parses it on the next call.
Ticket #7176
This introduces a new parser registration function for LDAP/UDP, and
updates the ldap configuration in order to be able to enable/disable a
single parser independently (as is done for dns).
Also, GAPs are accepted only for the TCP parser, not for UDP.
Ticket #7203
warning: can be more succinctly written as a byte str
--> src/mime/smtp.rs:762:37
|
762 | mime_smtp_find_url_strings(ctx, &[b'\n']);
| ^^^^^^^^ help: try: `b"\n"`
|
= help: for further information visit https://rust-lang.github.io/rust-clippy/master/index.html#byte_char_slices
= note: `#[warn(clippy::byte_char_slices)]` on by default
Ticket: 5734
Adds frames for SSH records that come after the banner and before
the data is encrypted.
These records may contain cipher lists for instance.
by making tx parsing and creation more easily available,
without needing a dns state.
Dns event NotResponse is now set on the right tx, and not the one
before.
Also, the debug log for the Z-flag on requests now says "request"
instead of "response".
Also rustfmt dns.rs
This implementation adds types and filters specified in the LDAP RFC to
work with the ldap_parser.
Although using the parser directly would be
best, strange behavior has been observed during transaction logging.
It appears that C pointers are being overwritten, leading to incorrect
output when LDAP fields are logged.
Ticket: 4863
On the way, convert some keywords to use the first-class integer
support, and add helpers for multi-buffer support in pure Rust.
Move the C unit tests for the keyword mqtt.protocol_version
to unit tests for generic integer parsing, and test version 5
instead of testing version 3 twice.
Also iterate all of a tx's messages for the reason code, as is done
for other keywords.
And allow detection on empty topics.
truncate fn is only active and used by dcerpc and smb parsers. In case
stream depth is reached for any side, truncate fn is supposed to set the
tx entity (request/response) in the same direction as complete so the
other side is not forever waiting for data.
However, whether the stream depth is reached is already checked by
AppLayerParserGetStateProgress fn which is called by:
- DetectTx
- DetectEngineInspectBufferGeneric
- AppLayerParserSetTransactionInspectId
- OutputTxLog
- AppLayerParserTransactionsCleanup
and, in such a case, StateGetProgressCompletionStatus is returned for
the respective direction. This fn, following efc9a7a, always returns 1
as long as the direction is valid, meaning that the progress for the
current direction is marked complete. So, there is no need for the additional
callback to mark the entities as done in case of depth or a gap.
Remove all such glue code and callbacks for truncate fns.
Bug 7044
as its functionality is already covered by the generic code.
This removes the APP_LAYER_PARSER_TRUNC_TC and APP_LAYER_PARSER_TRUNC_TS
flags as well, since FlowGetDisruptionFlags sets the STREAM_DEPTH flag
in case the respective stream depth was reached. This flag tells
whether all the open files should be truncated or not.
Bug 7044
DNS v3 logging fixes the discrepancies between request and response
logging with the main difference being queries always being placed in an
array.
Bug: #6281
V3 style DNS logging fixes the discrepancies between request and
response logging, as well as between dns records and alert records.
The main change is that queries and answers are always logged as
arrays, and header fields are not logged in array items.
For alerts this means that answers are now logged as arrays, queries
already were.
DNS records will get this new format as well, but with a configuration
parameter.
Bug: #6281
Feature: 7017
Add DNSRDataOPT struct and DNSRData enum type OPT.
Add OPT parsing function and test function.
Add DNSRData OPT type to lua.rs match.
Log OPT rdata.
Feature: 7011
Add additionals to DNSMessage struct.
Add parsing logic to populate additional section data.
Patch dns tests to account for additional section parsing.
Don't assume the ntlmssp version field is always present if the flag is
set. Instead keep track of the offsets of the data of the various blobs
and see if there is space for the version.
Inspired by how Wireshark does the parsing.
Bug: #7121.
Issue: 6487
To avoid ambiguity, a single definition for base 64 decoding modes will
be used. The Rust base64 transform contains the definitions for the
existing mode types: Strict, RFC2045, RFC4648
Ticket: 6390
This can happen with keyword filestore:both,flow
If one direction does not have a signature group with a filestore,
the file is set to nostore on opening, until a signature in
the other direction tries to set it to store.
Subsequent files will be stored in both directions as flow flags
are now set.
New warning from rustc.
The other option is to allow dead code; however, this is more
explicit, and when they are read, it's obvious they should be renamed.
Ticket: 4863
On the way, convert unit test DetectSNMPCommunityTest to a SV test.
And also, make snmp.pdu_type use a generic uint32 for detection,
allowing operators, instead of just equality.
Implement special "isset" and "isnotset" modes.
"isset" matches if an IP address is part of an iprep category with any
value.
It is internally implemented as ">=,0", which should always be true if
there is a value to evaluate, as valid reputation values are 0-127.
"isnotset" matches if an IP address is not part of an iprep category.
Internally it is implemented outside the uint support.
Ticket: #6857.
For TCP streams, app proto stream reassembly can start earlier, instead
of waiting and queueing up data before doing so.
Task #7018
Related to
Bug #7004
Ticket: 3958
- transactions are now bidirectional
- there is a logger
- gap support is improved with probing for resync
- frames support
- app-layer events
- the enip_command keyword now accepts string enumeration values.
- add enip.status keyword
- add keywords:
enip.product_name, enip.protocol_version, enip.revision,
enip.identity_status, enip.state, enip.serial, enip.product_code,
enip.device_type, enip.vendor_id, enip.capabilities,
enip.cip_attribute, enip.cip_class, enip.cip_instance,
enip.cip_status, enip.cip_extendedstatus
Ticket: 5185
Previously, the message was only looked for in plain text, and base64
encoding was only handled for attachments.
This commit also fixes the buffering of such base64 data streamed
into URL finding, by buffering a beginning non-empty line,
and by ensuring that we run extraction on the last line,
even if it had no EOL.
Expose the raw stream earlier to the detection engine, as Pgsql can have
multiple messages per transaction and usually will have a message
complete within one TCP packet.
Bug #7000
Related to
Bug #7026
Before, the JsonBuilder object for the pgsql event was being created
from the C-side function that actually called the Rust logger.
This meant that if another module - such as the Json Alert - called
the PGSQL logger, we wouldn't have the `pgsql` key present in the log
output - only its inner fields.
Bug #6983
Remove maintainer-clean-local, this is not needed.
In distclean-local, remove "rust/dist" and "rust/vendor" as they are
created during "make dist".
In "clean-local", remove "rust/target" and "rust/gen" as they are
created during a normal "make".
So far, the SANs were available as a part of IssuerDN via the
x509_parser crate, but SANs were not available to the SSLState* to be
directly used to set up and match against a sticky buffer.
Expose it to SSLStateConnp.
Feature 5234
Addresses this warning from the Rust compiler:
warning: `../rust/.cargo/config` is deprecated in favor of `config.toml`
note: if you need to support cargo 1.38 or earlier, you can symlink `config` to `config.toml`
Ticket: 7013
Done consistently for all protocols
This may change some protocols' behaviors which failed early
if they found there was not enough data...
As this triggers, with rustc 1.78:
unsafe precondition(s) violated: slice::from_raw_parts requires
the pointer to be aligned and non-null,
and the total size of the slice not to exceed `isize::MAX`
Ticket: 7013
- Remove no_mangle and pub from FFI functions that are only accessed
with a function pointer.
- Rename all no_mangle FFI functions to our C naming scheme.
This implements a logger for the SDP protocol.
Given that SDP is encapsulated within other protocols (such as SIP),
enabling it separately is not necessary.
Ticket #6627
This implements a parser for the SDP protocol.
Given that SDP is encapsulated within other protocols (such as SIP),
enabling it separately is not necessary.
Ticket #6627.
Ticket: 6892
As HTTP2 hpack header compression allows one single byte to
express a previously seen arbitrary-size header block (name+value),
we should avoid copying the vectors' data, and just point
to the same data, while remaining memory safe, even in the case
of later header eviction from the dynamic table.
The Rust std solution is Rc and the use of clone, so long as the
data is accessed by only one thread.
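The idea in miniature (struct and function names are illustrative):

    use std::rc::Rc;

    // Dynamic-table entries and header blocks share the same bytes.
    struct HeaderEntry {
        name: Rc<Vec<u8>>,
        value: Rc<Vec<u8>>,
    }

    fn reuse(entry: &HeaderEntry) -> HeaderEntry {
        // Rc::clone copies a pointer and bumps a refcount; the vector
        // data is not copied and outlives eviction from the table.
        HeaderEntry {
            name: Rc::clone(&entry.name),
            value: Rc::clone(&entry.value),
        }
    }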
It was brought to my attention by GLongo that the Pgsql parser handled
eof differently for requests and responses, and apparently there isn't
a good reason for such a difference. Therefore, apply the same logic
used for rs_pgsql_parse_request when checking for eof while parsing a
response.
Remove the path.lib parameter that is substituted into the output
Cargo.toml by autoconf. Instead, as part of the build, "cd" into the
source directory. We already set the Rust target directory to the
external build directory.
This makes the Cargo.toml more generic, and in a format suitable for
publishing to crates.io. It also makes it easier to pull in external
crates without needing to patch up their Cargo.toml, for example, it
might make pulling libhtp-rs easier.
When outputting a float, check if it's infinity or not a number, and
output a null instead.
Using a null was chosen as this is what serde_yaml, Firefox, Chrome,
Node, etc. do.
Ticket: #6921
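In essence:

    // NaN and infinities have no JSON representation; emit null like
    // serde_yaml, Firefox, Chrome and Node do.
    fn float_to_json(val: f64) -> String {
        if val.is_finite() {
            format!("{}", val)
        } else {
            "null".to_string()
        }
    }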
error: unnecessary use of `to_vec`
--> src/smb/smb.rs:1048:62
|
1048 | let (name, is_dcerpc) = match self.guid2name_map.get(&guid.to_vec()) {
| ^^^^^^^^^^^^^^ help: replace it with: `guid`
|
= help: for further information visit https://rust-lang.github.io/rust-clippy/master/index.html#unnecessary_to_owned
= note: `#[deny(clippy::unnecessary_to_owned)]` implied by `#[deny(warnings)]`
And also fix other uses of to_vec() on values that are already a Vec.
Ticket: 6883
error: field `0` is never read
--> src/asn1/mod.rs:36:14
|
36 | BerError(Err<der_parser::error::BerError>),
| -------- ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
| |
| field in this variant
|
Ticket: 6799
When we find an overlong banner, we get into the state of just
waiting for the end of line, and we just want to skip the bytes
until then.
Returning AppLayerResult::incomplete made the TCP engine retain
the bytes and grow the buffer that we then parsed again and again...
dns.rcode matches the rcode header field in DNS messages
It's an unsigned integer
valid ranges = [0-15]
Does not support prefilter
Supports matches in both flow directions
Task #6621
Time locked to 0.3.20 to guarantee MSRV of 1.63.
Update snmp-parser to 0.10.0.
Update asn1-rs to 0.6.1.
Update kerberos-parser to 0.8.0.
Update x509-parser 0.16.0.
Update der-parser to 9.0.0.
Remove specific use of der-parser 6.
Ticket: #6817.
Ticket: #6818.
It matches the rrtype field in DNS
It's an unsigned integer match
valid ranges = [0-65535]
Does not support prefilter
Supports flow in both directions
Feature #6666
This permits detecting the SIP protocol using pattern matching instead
of the probing parser.
Since it is no longer used, the respective probing functions have been
removed.
This patch permits setting a direction when a new transaction is
created in order
to avoid 'signature shadowing' as reported by Eric Leblond in commit
5aaf50760f
The `is_version_char` function incorrectly allowed characters that are not
part of the valid SIP version "SIP/2.0".
For instance, 'HTTP/1.1' was mistakenly accepted as a valid SIP version,
although it's not.
This commit fixes the issue by updating the condition to strictly
check for the correct version string.
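One way to express the tightened check (a sketch, not the exact code):

    // Only bytes that can occur in "SIP/2.0" are valid version
    // characters, so "HTTP/1.1" is now rejected.
    fn is_version_char(b: u8) -> bool {
        matches!(b, b'S' | b'I' | b'P' | b'/' | b'2' | b'.' | b'0')
    }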
error: writing `&Vec` instead of `&[_]` involves a new object where a slice will do
--> src/dns/log.rs:371:29
|
371 | pub fn dns_print_addr(addr: &Vec<u8>) -> std::string::String {
| ^^^^^^^^ help: change this to: `&[u8]`
|
= help: for further information visit https://rust-lang.github.io/rust-clippy/master/index.html#ptr_arg
Ticket: 6481
Instead of just setting the old transactions to a drop state so
that they get later cleaned up by Suricata, fail creating new ones.
This is because one call to app-layer parsing can create many
transactions, and quadratic complexity could happen in one
single app-layer parsing because of find_or_create_tx
If the next PDU is already in the slice, do not use it, and
restrict ourselves to the length of this PDU.
This avoids overconsumption of memory through quadratic complexity
when there are many small PDUs in one big chunk being parsed.
Ticket: #6411
Ticket: 5926
HTTP2 continuation frames are defined in RFC 9113.
They allow header blocks to be split over multiple HTTP2 frames.
For Suricata to correctly process these header blocks, it
must reassemble the payload of these HTTP2 frames.
Otherwise, we get incomplete decoding of header names and/or
values while decoding a single frame.
The design is to add a field to the HTTP2 state, as the RFC states
that these continuation frames form a discrete unit:
> Field blocks MUST be transmitted as a contiguous sequence of frames,
> with no interleaved frames of any other type or from any other stream.
So, we do not have to duplicate this reassembly field per stream id.
Another design choice is to wait for the reassembly to be complete
before doing any decoding, to avoid quadratic complexity on partially
decoding of the data.
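Schematically, at the state level (names and flag handling simplified):

    const END_HEADERS: u8 = 0x4; // HTTP2 frame flag (RFC 9113)

    #[derive(Default)]
    struct ReassemblyState {
        // One buffer on the state is enough: continuation frames are a
        // contiguous unit, never interleaved with other streams.
        header_block: Vec<u8>,
    }

    impl ReassemblyState {
        // Buffer fragments; decode only once END_HEADERS is seen,
        // avoiding quadratic decoding of partial data.
        fn add_fragment(&mut self, payload: &[u8], flags: u8) -> Option<Vec<u8>> {
            self.header_block.extend_from_slice(payload);
            if flags & END_HEADERS != 0 {
                return Some(std::mem::take(&mut self.header_block));
            }
            None
        }
    }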
Especially sets transactions to complete when we get a response
without having seen the request, so that the transactions
end up getting cleaned (instead of living/leaking in the state).
Also try to set the event on the relevant transaction, instead
of creating a new transaction just for the purpose of having
the event.
Ticket: #6299
Add a new rule keyword "requires" that allows a rule to require specific
Suricata versions and/or Suricata features to be enabled.
Example:
requires: feature geoip, version >= 7.0.0, version < 8;
requires: version >= 7.0.3 < 8
requires: version >= 7.0.3 < 8 | >= 8.0.3
Feature: #5972
Co-authored-by: Philippe Antoine <pantoine@oisf.net>
A CancelRequest can occur after any query request, and is sent over a
new connection, leading to a new flow. It won't get any reply, but, if
processed by the backend, will lead to an ErrorResponse.
Task #6577
SCDnsTxGetQueryName was introduced to allow for getting the query name
in responses as well as requests, so covers the functionality of
rs_dns_tx_get_query_name.
This sticky buffer will allow content matching on the answer names.
While answers typically only occur in DNS responses, we allow the
buffer to be used in the request context as well, as the request
message format allows it.
Feature: #6496
DNS request and response messages follow the same format so there is
no reason not to use the same data structure for each. While it's
unlikely to see fields like answers in a request, the message format
does not disallow them, so it might be interesting data to have the
ability to log.
With the changes in the probing_ts function, this other one could become
obsolete. Remove it, and directly call `parser::parse_request` when
checking for gaps, instead.
Some non-pgsql traffic seen by Suricata is mistakenly identified as
pgsql, as the probing function is too generic. Now, if the parser sees
an unknown message type, even if it looks like pgsql, it will fail.
Bug #6080
We had an unknown message type for the backend, but not for the frontend
messages. It's important to better identify those to improve pgsql
probing functions.
Related to
Bug #6080
Since the asn1 keyword is processing payload data, move the handling of
the keyword into the PMATCH with content inspection.
Use u32 as buffer length in the Rust FFI
Especially fix setup-app-layer script to not forget this part
This allows, for simple loggers, a single definition
of the actual logging function with the jsonbuilder.
This way, alerts, files, and app-layer events can share the code
to output the same data.
Ticket: #3827
* Log vendor client identifier (dhcp option 60) if extended dhcp
logging is turned on. This required the `vendor_client_identifier` to
be added to the json schema. Validation done using an SV Test
* Added `requested_ip` to the json schema as well, since it was
missed. My SV test failed without it.
Feature #4587
So far, if only the starting request was a DCERPC request, it would be
considered DCERPC traffic. Since ALTER_CONTEXT is a valid request type,
it should be accepted too.
Reported and patch proposed in the following Redmine ticket by
InterNALXz.
Bug 6191
Ticket: #6211
Completes commit 02dece5db5
Once a http2 stream has end of stream flag, we close the file.
If we see new data frames with this stream id, the new_chunk
function should ignore them as the file was already closed.
We only try to parse a small subset of what is possible in
RFB. Currently we only understand some standard auth schemes
and stop parsing when the server-client handshake is complete.
Since in IPS mode returning an error from the parser causes
drops that are likely uncalled for, we do not want to return
errors when we simply do not understand what happens in the
traffic. This addresses Redmine #5912.
Bug: #5912.
Rust 1.70 appears to now link code on both branches of `if cfg!(test)`,
causing Rust unit tests to fail, as that pattern was used to
disable functions only available when linked with the Suricata C code.
To work-around this issue, provide two versions of the `new` function,
one for unit tests and one when running as an application.
and not the one from state.
If a SNMP flow starts with a V2 version transaction,
and then there is a V3 version transaction,
we will now log V3 for the second transaction.
When adding an element to the dynamic headers table, the oldest
ones may get evicted. When multiple elements get evicted, they
should get evicted all at once with drain, instead of one by one,
as otherwise there will be a massive move each time.
Ticket: #6103
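Sketch:

    // Evict the oldest `overflow` entries in one pass; removing them
    // one by one would shift the remaining elements on every removal.
    fn evict_oldest(table: &mut Vec<Vec<u8>>, overflow: usize) {
        table.drain(0..overflow);
    }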
Use try_reserve before growing the internal buffer, and the internal
state vector. This allows allocation errors to be caught and an error
returned instead of just aborting the process.
Ticket: #6057
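Sketch of the pattern:

    use std::collections::TryReserveError;

    // Grow the buffer fallibly: an allocation failure becomes an error
    // the caller can handle instead of an abort.
    fn append(buffer: &mut Vec<u8>, data: &[u8]) -> Result<(), TryReserveError> {
        buffer.try_reserve(data.len())?;
        buffer.extend_from_slice(data);
        Ok(())
    }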
Use backticks for proper markdown processing. As Rust code in
backticks is compiled, and this is a non-complete example, tag the
code sample to be ignored.
This flag is no longer needed as a parser can now create a transaction
as unidirectional.
Setting this flag also doesn't make sense on parsers that may have
request/reply and some unidirectional messaging.
sawp 0.12 is available and addresses future compilation failures in
dependent crates.
Updated modbus test case to expect 12 bytes needed instead of 15. This
aligns with expectations as the test case slices 3 bytes off the end of
a 12 byte message so needing 12 bytes is correct.
Ticket #5989
Minimal modifications required on the Suricata side, mainly for fields
becoming private and needing an accessor instead.
Note: As the kerberos parser still depends on der-parser 6.0, we still
have to depend on that, so it is depended on, but renamed to
der-parser6. There is no updated kerberos-parser yet that uses
der-parser 8.2.0.
Ticket: #5991
Updating snmp-parser required directly depending on the asn1-rs crate
for the Oid type, as snmp-parser does not re-export this type anymore.
Ticket: #5992
Build on Eric's work, but set the direction on transaction creation when
needed. I think this makes it a little more clear, and easier to
document when creating single direction transactions.
This also somewhat abstracts the inner-workings of a directional
transaction from the implementation.
Ticket: #4759
The Stamus team discovered a problem where a signature can shadow
other signatures.
For example, on a PCAP only containing Kerberos protocol and where the
following signature is matching:
alert krb5 $HOME_NET any -> any any (msg:"krb match"; krb5_cname; content:"marlo"; sid:3; rev:1;)
If we add the following signature to the list of signatures
alert ssh $HOME_NET any -> any any (msg:"rr"; content:"rr"; flow:established,to_server; sid:4; rev:2;)
Then the Kerberos signature is not matching anymore.
To understand this case, we need some information:
- The krb5_cname is a to_client keyword
- The signal on ssh is to_server
- Kerberos has unidirectional transaction
- kerberos application state progress is a function always returning 1
As the two signatures are on opposite sides, they end up in separate
sig group heads.
Another fact is that, in the PCAP, the to_server side of the session
is sent first to the detection. It thus hits the sig group head of
the SSH signature. When Suricata runs detection in this direction,
the Kerberos application layer sends the transaction as it exists,
because the alstate progress function just returns 1 if the transaction
exists. So Suricata runs DetectRunTx() and stops when it sees that
sgh->tx_engines is NULL.
But the transaction is consumed by the engine as it has been evaluated
in one direction, and the kerberos transactions are unidirectional, so
there is no need to continue looking at it.
This results in no matching of the kerberos signature as the match
should occur in the evaluation of the other side, but the transaction
with the data has already been handled.
This problem was discovered on this Kerberos signature, but all
application layers with unidirectional transactions are impacted.
This patch introduces a flag that can be used by application layer
to signal that the TX should not be inspected. By using this flag
on the directional detect_flags_[ts|tc] the application layer can
prevent the TX to be consumed in the wrong direction.
Application layers with unidirectional TX will be updated
in separate commits to set the flag on the direction opposite
to the one they are.
Ticket: #5799
Cargo.lock has to be provided as template, Cargo.lock.in so it can
live beside Cargo.lock in out of tree automake builds, like distcheck.
This will pin Rust dependencies even for git builds, updating
Cargo.lock will now be a manual process that we'll have to take care
of periodically.
The latest Rust will automatically "fix" derivable default
implementation, which is nice, but makes changes that don't meet our
current MSRV, so allow derivable impls for now.
If a file (read/write) SMB record has padding/trailing data
after the buffer being read or written, and Suricata falls
into one of the cases where it skips the data, it should skip until
the very end of the NBSS record, meaning it should also skip the
padding/trailing data.
Otherwise, an attacker may smuggle some NBSS/SMB record in this
trailing data, that will be interpreted by Suricata, but not
by the SMB client/server, leading to evasions.
Ticket: #5786
When Suricata handles files over SMB, it does not wait for the
NBSS record to be complete, and can stream the payload to the
file... But it did not check the consistency of the SMB record
length being read or written against the NBSS record length.
This could lead to an evasion where an attacker crafts an SMB
write with a too-big Length field, and then sends its evil
payload, even if the server returned an error for the write request.
Ticket: #5770
An array of interfaces was being logged without creating an array,
resulting in duplicate "interface" objects being logged. Instead put
these interfaces into an array like already done elsewhere.
Issue: 5814
Remove the second occurrence of tree_id logging which appears to
always be a duplicate of the first tree_id logged, even though they
come from different data structures.
Issue: 5811
The Rust time crate used by the x509-parser crate represents dates
before 1970 as negative numbers which do not survive the conversion to
SCTime_t and formatting with the current time formatting functions.
Instead of fixing our formatting functions to handle such dates,
create a Rust function for logging TLS dates directly to JSON using
the time crate that handles such dates properly.
Also add a FFI function for formatting to a provided C buffer for the
legacy tls-log.
Issue: 5817
When deriving AppLayerEvent, allow the event name to be set with the
"name" attribute in cases where the transformed name is not suitable.
This allows us to use enum variant names like
"FtpEventRequestCommandTooLong" for direct use in C, but is also a
name that doesn't transform well to an event name in rules, where we
want to see "request_command_too_long".
Ticket: #5808
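Sketched usage (attribute syntax as described by this commit;
illustrative, and assuming Suricata's AppLayerEvent derive is in scope):

    #[derive(AppLayerEvent)]
    pub enum FtpEvent {
        // The rule-facing name would be derived from the variant name;
        // "name" overrides it.
        #[name("request_command_too_long")]
        FtpEventRequestCommandTooLong,
    }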
May have been introduced by a24d7dc45c
Function http2_range_open expects to be called only when
tx.file_range is nil. One condition to ensure this is to check
that we are beginning the files contents. The filetracker field
file_open is not fit for this, as it may be reset to false.
UDP parsers should never return an error, as an error indicates to
Suricata that something unrecoverable has occurred. UDP being record
based for
the most part is almost always recoverable, at least for protocols like
DNS.
As UDP streams are being probed, a stream that does not appear to be
DNS at first may have a single packet that looks close enough to DNS
to be picked up as DNS, causing every subsequent packet to result in a
parser error.
To mitigate this, probe every incoming DNS message header for validity
before continuing onto the body. If the header doesn't validate as
DNS, just ignore the packet so no parse error is registered.
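A simplified sketch of such a header probe (the real validation checks
more than this):

    // Require a full 12-byte header and a zero reserved (Z) bit per
    // RFC 1035 before attempting to parse the body.
    fn probe_header(input: &[u8]) -> bool {
        if input.len() < 12 {
            return false; // id, flags, and the four section counts
        }
        let flags = u16::from_be_bytes([input[2], input[3]]);
        (flags >> 6) & 1 == 0
    }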
Accept DNS messages with an invalid opcode that are otherwise
valid. Such DNS message will create a parser event.
This is a change of behavior, previously an invalid opcode would cause
the DNS message to not be detected or parsed as DNS.
Issue: #5444
After a gap in a file transaction, the file tracker is truncated. However
this did not clear any stored out-of-order chunks from memory or stop
more chunks from being stored, leading to the accumulation of a large
number of chunks.
This patch fixes this by clearing the stored chunks on trunc. It also
makes sure no more chunks are stored in the tracker after the trunc.
Bug: #5781.