Ticket: 6900
This avoids a DoS when logging a request whose compressed header
block is repeated many times and carries a long value...
(cherry picked from commit 03442c9071)
Ticket: 6892
As HTTP2 HPACK header compression allows a single byte to
express a previously seen arbitrary-size header block (name+value),
we should avoid copying the vectors' data and instead just point
to the same data, while remaining memory safe, even in the case
of later header eviction from the dynamic table.
The Rust std solution is Rc and the use of clone, which is fine so
long as the data is accessed by only one thread.
(cherry picked from commit 390f09692e)
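A minimal sketch of the idea, with illustrative names rather than
Suricata's actual types: the dynamic table and a decoded header both
hold Rc clones of the same allocation, so evicting the table entry
cannot invalidate data still referenced elsewhere, and no byte copy
is made.

    use std::rc::Rc;

    // Hypothetical header block: name and value share ownership via Rc,
    // so "copying" a previously seen header is just a refcount bump.
    #[derive(Clone)]
    struct HeaderBlock {
        name: Rc<Vec<u8>>,
        value: Rc<Vec<u8>>,
    }

    struct DynamicTable {
        entries: Vec<HeaderBlock>,
    }

    impl DynamicTable {
        // Indexed lookup clones the Rc pointers, not the byte vectors.
        fn get(&self, idx: usize) -> Option<HeaderBlock> {
            self.entries.get(idx).cloned()
        }

        // Eviction drops the table's reference; a header already handed
        // out keeps the data alive through its own Rc.
        fn evict_oldest(&mut self) {
            if !self.entries.is_empty() {
                self.entries.remove(0);
            }
        }
    }

    fn main() {
        let mut table = DynamicTable {
            entries: vec![HeaderBlock {
                name: Rc::new(b"x-very-long-header".to_vec()),
                value: Rc::new(vec![b'a'; 1 << 16]),
            }],
        };
        let h = table.get(0).unwrap(); // cheap: two refcount increments
        table.evict_oldest();
        assert_eq!(h.name.as_slice(), b"x-very-long-header"); // still valid
        assert_eq!(h.value.len(), 1 << 16);
    }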
When outputting a float, check whether it is infinity or not a number,
and output a null instead.
Using a null was chosen as this is what serde_yaml, Firefox, Chrome,
Node, etc. do.
Ticket: #6921
(cherry picked from commit 71f59e529c)
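A small sketch of the check with a hypothetical helper (not Suricata's
actual JsonBuilder API):

    // Emit a JSON null for non-finite floats, a number otherwise.
    fn float_to_json(v: f64) -> String {
        if v.is_finite() {
            format!("{}", v)
        } else {
            // NaN and +/- infinity have no JSON representation; follow
            // serde_yaml/Firefox/Chrome/Node and output null instead.
            "null".to_string()
        }
    }

    fn main() {
        assert_eq!(float_to_json(1.5), "1.5");
        assert_eq!(float_to_json(f64::NAN), "null");
        assert_eq!(float_to_json(f64::INFINITY), "null");
    }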
error: creating a mutable reference to mutable static is discouraged
--> src/mqtt/mqtt.rs:752:23
|
752 | let max_msg_len = &mut MAX_MSG_LEN;
| ^^^^^^^^^^^^^^^^ mutable reference to mutable static
|
= note: for more information, see issue #114447 <https://github.com/rust-lang/rust/issues/114447>
= note: this will be a hard error in the 2024 edition
= note: this mutable reference has lifetime `'static`, but if the static gets accessed (read or written) by any other means, or any other reference is created, then any further use of this mutable reference is Undefined Behavior
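A sketch of common ways to address this lint; the exact change made in
mqtt.rs may differ, and the type and default value below are
illustrative. Either read the static through a raw pointer so no
long-lived &mut is created, or replace it with an atomic.

    use std::ptr::addr_of;
    use std::sync::atomic::{AtomicU32, Ordering};

    static mut MAX_MSG_LEN: u32 = 1_048_576;
    static MAX_MSG_LEN_ATOMIC: AtomicU32 = AtomicU32::new(1_048_576);

    fn get_max_msg_len() -> u32 {
        // Read through a raw pointer: no `&mut MAX_MSG_LEN` is created.
        unsafe { *addr_of!(MAX_MSG_LEN) }
    }

    fn main() {
        println!("raw pointer read: {}", get_max_msg_len());
        println!("atomic read: {}", MAX_MSG_LEN_ATOMIC.load(Ordering::Relaxed));
    }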
error: unnecessary use of `to_vec`
--> src/smb/smb.rs:1048:62
|
1048 | let (name, is_dcerpc) = match self.guid2name_map.get(&guid.to_vec()) {
| ^^^^^^^^^^^^^^ help: replace it with: `guid`
|
= help: for further information visit https://rust-lang.github.io/rust-clippy/master/index.html#unnecessary_to_owned
= note: `#[deny(clippy::unnecessary_to_owned)]` implied by `#[deny(warnings)]`
Also fix other uses of to_vec() on values that are already a Vec.
(cherry picked from commit f7cde8f00e)
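A standalone sketch of why the temporary Vec is unnecessary, assuming a
map keyed by Vec<u8> (illustrative types, not the actual SMB state):

    use std::collections::HashMap;

    fn main() {
        let mut guid2name_map: HashMap<Vec<u8>, String> = HashMap::new();
        guid2name_map.insert(vec![1, 2, 3, 4], "srvsvc".to_string());

        let guid: &[u8] = &[1, 2, 3, 4];

        // Before: allocates a temporary Vec just for the lookup.
        let _ = guid2name_map.get(&guid.to_vec());

        // After: a HashMap keyed by Vec<u8> can be queried with a &[u8]
        // directly, because Vec<u8> implements Borrow<[u8]>.
        let name = guid2name_map.get(guid);
        assert_eq!(name.map(String::as_str), Some("srvsvc"));
    }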
Ticket: 6883
error: field `0` is never read
--> src/asn1/mod.rs:36:14
|
36 | BerError(Err<der_parser::error::BerError>),
| -------- ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
| |
| field in this variant
|
(cherry picked from commit 02f2fb8833)
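A sketch of the usual options for this dead_code lint, with illustrative
names (the actual change in src/asn1/mod.rs may differ): drop the unused
payload, or keep it and allow the lint explicitly.

    #[derive(Debug)]
    enum Asn1DecodeError {
        // Option 1: drop the payload nothing reads.
        BerError,
        // Option 2: keep the payload (e.g. for Debug output only) and
        // silence the lint, since derived impls no longer count as a read.
        #[allow(dead_code)]
        BerValueError(u32),
    }

    fn main() {
        println!("{:?}", Asn1DecodeError::BerError);
        println!("{:?}", Asn1DecodeError::BerValueError(7));
    }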
Ticket: 6799
When we find an overlong banner, we enter a state where we just
wait for the end of line, and we simply want to skip the bytes
until then.
Returning AppLayerResult::incomplete made the TCP engine retain
the bytes and grow a buffer that we then parsed again and again...
(cherry picked from commit 271ed2008b)
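A minimal sketch of the consume-and-skip behaviour (simplified, not the
actual parser code): report everything up to and including the newline
as consumed, or the whole input if the line has not ended yet, instead
of asking the TCP engine to buffer and re-deliver it.

    // Returns how many bytes to report as consumed while skipping an
    // overlong banner line.
    fn skip_overlong_banner(input: &[u8]) -> usize {
        match input.iter().position(|&b| b == b'\n') {
            Some(pos) => pos + 1, // banner line finally ends here
            None => input.len(),  // eat everything, wait for more data
        }
    }

    fn main() {
        assert_eq!(skip_overlong_banner(b"AAAA\nrest"), 5);
        assert_eq!(skip_overlong_banner(b"AAAAAAAA"), 8);
    }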
error: writing `&Vec` instead of `&[_]` involves a new object where a slice will do
--> src/dns/log.rs:371:29
|
371 | pub fn dns_print_addr(addr: &Vec<u8>) -> std::string::String {
| ^^^^^^^^ help: change this to: `&[u8]`
|
= help: for further information visit https://rust-lang.github.io/rust-clippy/master/index.html#ptr_arg
(cherry picked from commit 68b0052018)
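Illustration of the suggested signature change; the body below is a
simplified stand-in, only the &[u8] parameter is the point:

    // Taking a slice lets callers pass a Vec, an array, or a sub-slice
    // without building a new Vec.
    fn dns_print_addr(addr: &[u8]) -> String {
        addr.iter()
            .map(|b| b.to_string())
            .collect::<Vec<_>>()
            .join(".")
    }

    fn main() {
        let v: Vec<u8> = vec![192, 168, 0, 1];
        assert_eq!(dns_print_addr(&v), "192.168.0.1"); // &Vec<u8> coerces to &[u8]
        assert_eq!(dns_print_addr(&[10, 0, 0, 1]), "10.0.0.1");
    }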
Ticket: 6481
Instead of just setting the old transactions to a drop state so
that they get cleaned up later by Suricata, fail to create new ones.
This is because one call to app-layer parsing can create many
transactions, and quadratic complexity could then arise within a
single app-layer parsing call because of find_or_create_tx.
(cherry picked from commit 80abc22f64)
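A sketch of the pattern with illustrative names: once the transaction
cap is hit, find_or_create_tx refuses to allocate, so one app-layer
parsing call cannot keep creating transactions and then rescan all of
them for each new one.

    const MAX_TX: usize = 1024; // illustrative cap

    struct Tx {
        id: u64,
    }

    struct State {
        transactions: Vec<Tx>,
    }

    impl State {
        fn find_or_create_tx(&mut self, id: u64) -> Option<&mut Tx> {
            if let Some(pos) = self.transactions.iter().position(|tx| tx.id == id) {
                return self.transactions.get_mut(pos);
            }
            if self.transactions.len() >= MAX_TX {
                // Fail instead of creating yet another transaction.
                return None;
            }
            self.transactions.push(Tx { id });
            self.transactions.last_mut()
        }
    }

    fn main() {
        let mut state = State { transactions: Vec::new() };
        for i in 0..MAX_TX as u64 {
            assert!(state.find_or_create_tx(i).is_some());
        }
        assert!(state.find_or_create_tx(MAX_TX as u64 + 1).is_none());
    }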
The next PDU may already be in the slice being parsed.
Do not skip its parsing, i.e. do not consume the rest of the input,
but take just the length of the PDU.
(cherry picked from commit 86de7cffa7)
If the next PDU is already in the slice, do not consume it;
restrict ourselves to the length of the current PDU.
This avoids over-consumption of memory through quadratic complexity
when many small PDUs arrive in one big chunk being parsed.
Ticket: #6411
(cherry picked from commit f52c033e56)
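A sketch of the consumption fix with a made-up one-byte length field
(not the actual PDU framing): parse exactly the declared length and
report only those bytes as consumed, so a following PDU already present
in the slice is handled in its own pass.

    fn parse_one_pdu(input: &[u8]) -> Option<(usize, &[u8])> {
        if input.len() < 2 {
            return None;
        }
        let len = input[1] as usize; // hypothetical length field
        if input.len() < 2 + len {
            return None;
        }
        // Consumed = header + declared length, NOT the whole input.
        Some((2 + len, &input[2..2 + len]))
    }

    fn main() {
        // Two small PDUs in one chunk: the first pass consumes only the
        // first PDU, leaving the second intact for the next call.
        let chunk = [0x01, 0x03, b'a', b'b', b'c', 0x01, 0x01, b'z'];
        let (consumed, payload) = parse_one_pdu(&chunk).unwrap();
        assert_eq!(consumed, 5);
        assert_eq!(payload, b"abc");
        let (consumed2, payload2) = parse_one_pdu(&chunk[consumed..]).unwrap();
        assert_eq!(consumed2, 3);
        assert_eq!(payload2, b"z");
    }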
Ticket: 5926
HTTP2 continuation frames are defined in RFC 9113.
They allow header blocks to be split over multiple HTTP2 frames.
For Suricata to process these header blocks correctly, it
must reassemble the payloads of these HTTP2 frames.
Otherwise, we get incomplete decoding of header names and/or
values when decoding a single frame.
The design is to add a field to the HTTP2 state, as the RFC states
that these continuation frames form a discrete unit:
> Field blocks MUST be transmitted as a contiguous sequence of frames,
> with no interleaved frames of any other type or from any other stream.
So, we do not have to duplicate this reassembly field per stream id.
Another design choice is to wait for the reassembly to be complete
before doing any decoding, to avoid the quadratic complexity of
repeatedly decoding partial data.
(cherry picked from commit aff54f29f8)
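A minimal sketch of the design, with flags and names simplified: the
state keeps a single reassembly buffer for a header block split across
HEADERS and CONTINUATION frames, and decoding only happens once
END_HEADERS is seen, so partial data is never fed to the HPACK decoder.

    const END_HEADERS: u8 = 0x4;

    struct Http2State {
        // One buffer is enough: continuation frames cannot interleave
        // with frames of any other stream.
        header_block: Vec<u8>,
    }

    impl Http2State {
        fn on_header_frame(&mut self, payload: &[u8], flags: u8) -> Option<Vec<u8>> {
            self.header_block.extend_from_slice(payload);
            if flags & END_HEADERS != 0 {
                // Complete block: hand it to the HPACK decoder in one go.
                return Some(std::mem::take(&mut self.header_block));
            }
            None // wait for the next CONTINUATION frame
        }
    }

    fn main() {
        let mut state = Http2State { header_block: Vec::new() };
        assert!(state.on_header_frame(b"first-part-", 0).is_none());
        let block = state.on_header_frame(b"second-part", END_HEADERS).unwrap();
        assert_eq!(block, b"first-part-second-part".to_vec());
    }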
In particular, set transactions to complete when we get a response
without having seen the request, so that these transactions
end up getting cleaned up (instead of living/leaking in the state).
Also try to set the event on the relevant transaction, instead
of creating a new transaction just for the purpose of having
the event.
Ticket: #6299
(cherry picked from commit 89936b6530)
Add a new rule keyword "requires" that allows a rule to require specific
Suricata versions and/or Suricata features to be enabled.
Example:
requires: feature geoip, version >= 7.0.0, version < 8;
requires: version >= 7.0.3 < 8
requires: version >= 7.0.3 < 8 | >= 8.0.3
Feature: #5972
Co-authored-by: Philippe Antoine <pantoine@oisf.net>
(cherry picked from commit 5d5b0509a5)
A CancelRequest can occur after any query request, and is sent over a
new connection, leading to a new flow. It will not get any reply, but, if
processed by the backend, will lead to an ErrorResponse.
Task #6577
(cherry picked from commit 30ac77ce65)
With the changes in the probing_ts function, this other one could become
obsolete. Remove it, and directly call `parser::parse_request` when
checking for gaps, instead.
(cherry picked from commit 9aeeac532e)
Some non-pgsql traffic seen by Suricata is mistakenly identified as
pgsql, as the probing function is too generic. Now, if the parser sees
an unknown message type, even if it looks like pgsql, it will fail.
Bug #6080
(cherry picked from commit 4f85d06192)
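A sketch of the tightened probe; the function name is illustrative, and
only a few real frontend message-type bytes are listed. The point is
that an unknown type byte now fails the probe instead of being accepted.

    fn probe_frontend_msg_type(type_byte: u8) -> bool {
        matches!(
            type_byte,
            b'Q' | b'P' | b'B' | b'E' | b'X' | b'p' // Query, Parse, Bind, Execute, Terminate, Password
        )
    }

    fn main() {
        assert!(probe_frontend_msg_type(b'Q')); // simple query: looks like pgsql
        assert!(!probe_frontend_msg_type(b'?')); // unknown type: not pgsql
    }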
We had an unknown message type for backend messages, but not for
frontend messages. It's important to better identify those to improve
the pgsql probing functions.
Related to
Bug #6080
(cherry picked from commit 1ac5d97259)
Ticket: #6426
as per RFC 9113
":authority" MUST NOT include the deprecated userinfo subcomponent
for "http" or "https" schemed URIs.
(cherry picked from commit e3cd0d073f)
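A simplified sketch of the check (not the actual detection code): a
":authority" value that still carries the deprecated userinfo
subcomponent contains an '@' before the host, which can be flagged.

    fn authority_has_userinfo(authority: &[u8]) -> bool {
        authority.contains(&b'@')
    }

    fn main() {
        assert!(authority_has_userinfo(b"user:pass@example.com"));
        assert!(!authority_has_userinfo(b"example.com:443"));
    }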
* Log vendor client identifier (dhcp option 60) if extended dhcp
logging is turned on. This required the `vendor_client_identifier` to
be added to the json schema. Validation done using an SV Test
* Added `requested_ip` to the json schema as well, since it was
missed. My SV test failed without it.
Feature #4587
So far, traffic would only be considered DCERPC if the starting
request was a DCERPC request. Since ALTER_CONTEXT is a valid request
type, it should be accepted too.
Reported and patch proposed in the following Redmine ticket by
InterNALXz.
Bug 6191
Ticket: #6211
Completes commit 02dece5db5
Once an HTTP2 stream has the end-of-stream flag, we close the file.
If we see new data frames with this stream id, the new_chunk
function should ignore them, as the file was already closed.
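A sketch of the guard, simplified to a standalone type: once the stream
carrying the file has seen the end-of-stream flag and the file is
closed, later data chunks for that stream are ignored.

    struct FileStream {
        closed: bool,
        bytes: Vec<u8>,
    }

    impl FileStream {
        fn new_chunk(&mut self, data: &[u8], end_of_stream: bool) {
            if self.closed {
                return; // file already closed: drop the stray chunk
            }
            self.bytes.extend_from_slice(data);
            if end_of_stream {
                self.closed = true;
            }
        }
    }

    fn main() {
        let mut fs = FileStream { closed: false, bytes: Vec::new() };
        fs.new_chunk(b"payload", true);
        fs.new_chunk(b"late data", false); // ignored, stream already closed
        assert_eq!(fs.bytes, b"payload".to_vec());
    }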