Frames of the following types have been added for the toserver direction:
1. Pdu: The entire Protocol Data Unit
2. Hdr: Header of the request
3. Data: PDU data
Feature 4904
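As a rough illustration of what such frame types can look like on the Rust side (the enum name, helper and header-size handling below are made up for this sketch, not the actual registration code):

```
/// Illustrative frame types for the toserver direction of a DCERPC PDU.
#[derive(Clone, Copy, Debug, PartialEq, Eq)]
enum DcerpcFrameType {
    Pdu,  // the entire Protocol Data Unit
    Hdr,  // header of the request
    Data, // PDU data
}

/// Given a complete PDU of `len` bytes with a header of `hdr_len` bytes,
/// return the (offset, length) span each frame type would cover.
fn frame_span(ft: DcerpcFrameType, len: usize, hdr_len: usize) -> (usize, usize) {
    match ft {
        DcerpcFrameType::Pdu => (0, len),
        DcerpcFrameType::Hdr => (0, hdr_len.min(len)),
        DcerpcFrameType::Data => (hdr_len.min(len), len.saturating_sub(hdr_len)),
    }
}
```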
With the introduction of the AppLayerResult::incomplete API, fragmented data
is no longer handled fully in the dcerpc code. Given that these code
paths are already covered by the following suricata-verify (s-v) tests, the
corresponding unit tests can now be safely removed.
- dce-gap-handling
- dcerpc-dce-iface-*
Ticket 5699
Instead of its own internal buffering mechanism for fragmented data,
use the AppLayerResult::incomplete API to let the AppLayer Parser take
care of it. This makes memory use more efficient.
Remove any unneeded variables and code with the introduction of this
API.
Ticket 5699
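A rough sketch of the pattern (the record framing and the ParseResult type below are stand-ins made up for illustration; the real constructor is AppLayerResult::incomplete in the suricata crate, and its exact consumed/needed contract should be taken from there):

```
/// Stand-in for Suricata's AppLayerResult, reduced to what the sketch needs.
enum ParseResult {
    Ok,
    /// (bytes consumed so far, additional bytes still missing)
    Incomplete(u32, u32),
}

/// Parse as many complete records as possible from `input`. A record here is
/// a 2-byte big-endian length followed by that many payload bytes (made-up
/// framing; only the consumed/needed accounting is the point).
fn parse(input: &[u8]) -> ParseResult {
    let mut offset = 0usize;
    while offset < input.len() {
        let rem = &input[offset..];
        if rem.len() < 2 {
            // Not even a full length field yet.
            return ParseResult::Incomplete(offset as u32, (2 - rem.len()) as u32);
        }
        let rec_len = u16::from_be_bytes([rem[0], rem[1]]) as usize;
        if rem.len() < 2 + rec_len {
            // Fragmented record: report what was consumed and what is missing,
            // and let the app-layer parser buffer and call us again.
            return ParseResult::Incomplete(offset as u32, (2 + rec_len - rem.len()) as u32);
        }
        // ... handle the complete record here ...
        offset += 2 + rec_len;
    }
    ParseResult::Ok
}
```

No internal buffer is kept between calls, which is where the memory saving comes from.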
TCP data can be presented to the protocol parser in any way, e.g. one
byte at a time, a single complete PDU, a fragmented PDU, or multiple
PDUs at once. A limit of 1MB can easily be reached in some such
scenarios.
Remove the check that rejects data that is more than 1MB.
Commit 2bcc66da58 broke logging from
plugins:
- debug visibility was reduced, making it unusable from an external crate
- the plugin's view of the log level was broken
To fix:
- make debug pub
- minor change to the initialization of the log LEVEL as seen by the
  plugin. I'm not really sure why the previous version wasn't working,
  but this one does
ldap.responses.count matches on the number of LDAP responses
This keyword maps to the eve field len(ldap.responses[])
It is an unsigned 32-bit integer
Doesn't support prefiltering
Ticket: #7453
ldap.responses.operation matches on Lightweight Directory Access Protocol response operations
This keyword maps to the eve field ldap.responses[].operation
It is an unsigned 8-bit integer
Doesn't support prefiltering
Ticket: #7453
ldap.request.operation matches on Lightweight Directory Access Protocol request operations
This keyword maps to the eve field ldap.request.operation
It is an unsigned 8-bit integer
Doesn't support prefiltering
Ticket: #7453
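A rough sketch of the value extraction these three keywords rely on (the transaction layout below is a hypothetical stand-in; only the mapping described above is taken from the commits). In rules they behave like the other unsigned-integer keywords, e.g. something like `ldap.responses.count:>0`.

```
/// Hypothetical, simplified LDAP transaction used only for illustration.
struct LdapTx {
    request_operation: Option<u8>,
    response_operations: Vec<u8>,
}

/// ldap.responses.count: number of responses, i.e. len(ldap.responses[]).
fn tx_responses_count(tx: &LdapTx) -> u32 {
    tx.response_operations.len() as u32
}

/// ldap.request.operation: the request's protocol operation, if present.
fn tx_request_operation(tx: &LdapTx) -> Option<u8> {
    tx.request_operation
}

/// ldap.responses.operation: matches if any response has the given operation.
fn tx_responses_operation_match(tx: &LdapTx, op: u8) -> bool {
    tx.response_operations.iter().any(|&o| o == op)
}
```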
- remove "rs_" prefix from functions that are not exported
- prefix exported functions with "SC"
- don't export functions that are only used by pointer
Ticket: 7498
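A made-up before/after illustration of the convention (function names are hypothetical):

```
// Exported to C: "SC" prefix, stays pub extern "C" with #[no_mangle].
#[no_mangle]
pub extern "C" fn SCExampleStateNew() -> *mut std::os::raw::c_void {
    std::ptr::null_mut()
}

// Internal to Rust (was rs_example_parse_request): plain Rust name, no prefix.
fn example_parse_request(_input: &[u8]) {}

// Only used by pointer during registration: not exported, no pub/#[no_mangle].
extern "C" fn example_state_free(_state: *mut std::os::raw::c_void) {}
```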
Both the macros export_tx_data_get and export_state_data_get can
generate non-pub functions, as the functions they generate are only
used as pointers during registration.
Remove "pub" and "no_mangle" from the generated functions and update
their names to follow Rust rules, as they are no longer exported into
the global C namespace.
Ticket: 7498
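A minimal sketch of the idea behind the change; the real macros live in the suricata crate and are more involved, so the bodies below are illustrative only:

```
struct TxData {}
struct MyTx { tx_data: TxData }

/// Simplified shape of such a macro after the change: it expands to a
/// non-pub getter without #[no_mangle], since the generated function is
/// only ever referenced as a pointer during registration.
macro_rules! export_tx_data_get {
    ($name:ident, $type:ty) => {
        fn $name(tx: *mut std::os::raw::c_void) -> *mut TxData {
            let tx = unsafe { &mut *(tx as *mut $type) };
            &mut tx.tx_data
        }
    };
}

export_tx_data_get!(my_parser_get_tx_data, MyTx);

/// Registration is the only consumer of the generated function.
struct Registration {
    get_tx_data: fn(*mut std::os::raw::c_void) -> *mut TxData,
}

fn register() -> Registration {
    Registration { get_tx_data: my_parser_get_tx_data }
}
```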
Because some alprotos will remain static and defined as constants,
such as ALPROTO_UNKNOWN=0 or ALPROTO_FAILED.
The regular, already used protocols keep their static identifiers for
now, such as ALPROTO_SNMP, but this could be made more dynamic in a
later commit.
ALPROTO_FAILED was used in comparisons, and these needed to change to
use either ALPROTO_MAX or the standard function AppProtoIsValid.
Ticket: 7465
If a big chunk of data is parsed in one go, we could create many
transactions even if marking them as complete, and end up with
quadratic complexity calling find_request.
The proposed solution is to fail to create a new transaction if too
many already exist.
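A rough sketch of such a guard (the cap, types and return convention are illustrative; in practice the limit would likely be configurable and a parser event set):

```
const MAX_TRANSACTIONS: usize = 256; // illustrative cap

struct Tx { id: u64, complete: bool }

struct State { txs: Vec<Tx>, next_id: u64 }

impl State {
    /// Refuse to create a new transaction once too many already exist,
    /// instead of letting find_request-style lookups turn quadratic.
    fn new_tx(&mut self) -> Option<&mut Tx> {
        if self.txs.len() >= MAX_TRANSACTIONS {
            return None; // caller can set a parser event and bail out
        }
        let id = self.next_id;
        self.next_id += 1;
        self.txs.push(Tx { id, complete: false });
        self.txs.last_mut()
    }
}
```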
Allow `set_uint` to accept any number value that can be converted to a
u64. Prevents callers from having to do `as u64`.
This required fixing up any callers that used `.into()` to just pass in
their value without the into conversion.
Most calls using `as u64` can have that cast removed, with the
exception of `usize` values, which must still be cast as the conversion
can't be guaranteed to be infallible.
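A small sketch of the signature change on a JsonBuilder-like type (the builder below is a stand-in, not the real JsonBuilder):

```
struct Builder { out: String }

impl Builder {
    /// Before: fn set_uint(&mut self, key: &str, val: u64)
    /// After: accept anything that converts infallibly into u64.
    fn set_uint(&mut self, key: &str, val: impl Into<u64>) -> &mut Self {
        let val: u64 = val.into();
        self.out.push_str(&format!("\"{}\":{},", key, val));
        self
    }
}

fn demo(len: usize) {
    let mut jb = Builder { out: String::new() };
    jb.set_uint("port", 443u16); // no `as u64` or `.into()` needed
    jb.set_uint("count", 7u32);
    jb.set_uint("len", len as u64); // usize: still an explicit cast, since
                                    // usize -> u64 has no infallible Into impl
}
```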
Add events for the following resource name parsing issues:
- name truncated as it's too long
- maximum number of labels reached
- infinite loop
Currently these events are only registered when encountered but
recoverable, that is, where we are able to return some of the name,
usually in a truncated state.
As name parsing has many code paths, we pass in a pointer to a flag
field that can be updated by the name parser. This is done in addition
to the flags being set on a specific name because, when logging, we
want to designate which fields are truncated, etc., but for alerts we
just care that something happened during the parse. It also reduces
errors, as checking the flags and setting the event can't be forgotten
if some new parser is written that also parses names.
Ticket: #7280
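A rough sketch of the flag-accumulator pattern described above (flag names, values and event strings are made up; only the shape follows the commit):

```
const NAME_TRUNCATED: u8 = 0x01;     // name truncated as it's too long
const NAME_LABEL_LIMIT: u8 = 0x02;   // maximum number of labels reached
const NAME_INFINITE_LOOP: u8 = 0x04; // compression loop detected

/// The name parser sets flags on the individual name and also ORs them into
/// a message-wide accumulator passed in by the caller, so deriving events
/// from them can't be forgotten at a new name-parsing call site.
fn parse_name(input: &[u8], msg_flags: &mut u8) -> (Vec<u8>, u8) {
    let mut name_flags = 0u8;
    let mut name = input.to_vec();
    if name.len() > 1025 {
        name.truncate(1025);
        name_flags |= NAME_TRUNCATED;
    }
    // ... label and compression-pointer handling elided; it would set
    // NAME_LABEL_LIMIT / NAME_INFINITE_LOOP the same way ...
    *msg_flags |= name_flags;
    (name, name_flags)
}

/// Once the whole message is parsed, the accumulated flags drive the events.
fn set_events(msg_flags: u8, events: &mut Vec<&'static str>) {
    if msg_flags & NAME_TRUNCATED != 0 { events.push("name too long"); }
    if msg_flags & NAME_LABEL_LIMIT != 0 { events.push("too many labels"); }
    if msg_flags & NAME_INFINITE_LOOP != 0 { events.push("infinite loop"); }
}
```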
Once a name has gone over 1025 chars it will be truncated to 1025
chars and no more labels will be added to it; however, the name will
continue to be parsed up to the label limit, in an attempt to find the
end so parsing can continue.
This introduces a new struct, DNSName, which contains the name and any
flags indicating name parsing errors that should not fail parsing of
the complete message: for example, infinite recursion hit after some
labels are parsed, or truncation of a name where compression was used,
so we know the start of the next data to be parsed.
This keeps logged DNS messages from exceeding our maximum size of 10MB
in the case of really long names.
Ticket: #7280
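A rough sketch of the DNSName idea: keep the (possibly truncated) name together with its parse flags, stop growing the name at 1025 bytes, but keep consuming labels up to a limit so the parser still finds where the name ends. Compression handling is elided, and the label limit, field names and flag values are illustrative:

```
const MAX_NAME_LEN: usize = 1025; // truncation point from the message above
const MAX_LABELS: usize = 127;    // illustrative label limit

const NAME_TRUNCATED: u8 = 0x01;
const NAME_LABEL_LIMIT: u8 = 0x02;

#[derive(Default)]
struct DnsName {
    value: Vec<u8>, // possibly truncated name
    flags: u8,      // parse flags (truncated, label limit, ...)
}

/// Walk the labels of an (uncompressed, illustrative) encoded name, returning
/// the name and the offset just past it so parsing of the message continues.
fn parse_name(buf: &[u8]) -> Option<(DnsName, usize)> {
    let mut name = DnsName::default();
    let mut off = 0usize;
    for _ in 0..MAX_LABELS {
        let len = *buf.get(off)? as usize;
        off += 1;
        if len == 0 {
            return Some((name, off)); // end of name reached
        }
        let label = buf.get(off..off + len)?;
        off += len;
        if name.value.len() >= MAX_NAME_LEN {
            name.flags |= NAME_TRUNCATED;
            continue; // keep walking, but stop growing the name
        }
        if !name.value.is_empty() {
            name.value.push(b'.');
        }
        let room = MAX_NAME_LEN - name.value.len();
        if label.len() > room {
            name.value.extend_from_slice(&label[..room]);
            name.flags |= NAME_TRUNCATED;
        } else {
            name.value.extend_from_slice(label);
        }
    }
    name.flags |= NAME_LABEL_LIMIT;
    Some((name, off))
}
```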
To optimize detection and logging, avoid going through all the live
transactions when only a few were modified.
Two boolean fields are added to the tx data: updated_tc and updated_ts.
The app-layer parsers are now responsible for setting these when
needed, and logging and detection use them to skip transactions that
were not updated.
There may be some more optimization left where we set both updated_tc
and updated_ts in functions returning a mutable transaction, by
checking whether all the callers are called in one direction only
(request or response).
Ticket: 7087
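A rough sketch of the scheme (types and where exactly the flags get cleared are simplified stand-ins for the real tx data handling):

```
#[derive(Default)]
struct TxData {
    updated_ts: bool, // to-server direction touched since last handled
    updated_tc: bool, // to-client direction touched since last handled
}

struct Tx { id: u64, data: TxData }

/// Parser side: whenever a response modifies a transaction, flag it.
fn on_response_parsed(tx: &mut Tx) {
    // ... apply the parsed response to the tx ...
    tx.data.updated_tc = true;
}

/// Logging/detection side: skip transactions that were not updated in the
/// direction being processed, instead of revisiting every live transaction.
fn handle_toclient(txs: &mut [Tx]) {
    for tx in txs.iter_mut() {
        if !tx.data.updated_tc {
            continue;
        }
        // ... run logging/detection for this tx ...
        tx.data.updated_tc = false;
    }
}
```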