Follow the Rust convention of using a "sys" crate for bindings to C
functions. The bindings don't exist yet, but will be generated by
bindgen and put into this crate.
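For illustration, a build.rs for such a sys crate could look roughly like
the sketch below; the header name and output path are placeholders, not
the actual layout used here.

  // build.rs -- minimal bindgen sketch for a hypothetical sys crate.
  // "wrapper.h" is a placeholder header including the C APIs to bind.
  fn main() {
      let bindings = bindgen::Builder::default()
          .header("wrapper.h")
          .generate()
          .expect("unable to generate bindings");

      // Write the bindings into OUT_DIR so lib.rs can include!() them.
      let out = std::path::PathBuf::from(std::env::var("OUT_DIR").unwrap());
      bindings
          .write_to_file(out.join("bindings.rs"))
          .expect("unable to write bindings");
  }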
Ticket: #7341
In recent warnings reported by scan-build, datasets were found to be
using a blocking call inside a critical section.
datasets.c:187:12: warning: Call to blocking function 'fgets' inside of critical section [unix.BlockInCriticalSection]
187 | while (fgets(line, (int)sizeof(line), fp) != NULL) {
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
datasets.c:292:12: warning: Call to blocking function 'fgets' inside of critical section [unix.BlockInCriticalSection]
292 | while (fgets(line, (int)sizeof(line), fp) != NULL) {
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
datasets.c:368:12: warning: Call to blocking function 'fgets' inside of critical section [unix.BlockInCriticalSection]
368 | while (fgets(line, (int)sizeof(line), fp) != NULL) {
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
datasets.c:442:12: warning: Call to blocking function 'fgets' inside of critical section [unix.BlockInCriticalSection]
442 | while (fgets(line, (int)sizeof(line), fp) != NULL) {
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
datasets.c:512:12: warning: Call to blocking function 'fgets' inside of critical section [unix.BlockInCriticalSection]
512 | while (fgets(line, (int)sizeof(line), fp) != NULL) {
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
5 warnings generated.
These calls are blocking in multi-tenant mode, where several tenants
may be trying to load the same dataset in parallel.
In single-tenant mode, this operation is performed as part of a single
thread before engine startup.
To avoid the warning and simplify the code, this commit moves the
initial file reading to Rust, with much simpler handling of dataset and
datarep.
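For illustration only (not the actual dataset loader), line-by-line file
reading on the Rust side can be as simple as:

  use std::fs::File;
  use std::io::{BufRead, BufReader};

  // Minimal sketch: read a dataset-style file line by line.
  // The path handling and per-line processing are placeholders.
  fn load_lines(path: &str) -> std::io::Result<()> {
      let reader = BufReader::new(File::open(path)?);
      for line in reader.lines() {
          let line = line?;
          // Skip empty lines and comments, as a dataset loader typically would.
          if line.is_empty() || line.starts_with('#') {
              continue;
          }
          // ... hand the value (and optional datarep column) to the dataset here ...
      }
      Ok(())
  }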
Bug 7398
A no-padding option is provided as a mode, as it is a variant suitable
for both encoding and decoding.
A padding-optional function is added that is indifferent to padding when
decoding. This can be useful when you are not sure whether padding is
present and do not really care.
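As a rough illustration of the three behaviours (padded, no padding, and
padding-indifferent decoding), here is a sketch using the public base64
crate; the implementation in this codebase is separate and may differ in
detail.

  use base64::engine::general_purpose::{
      GeneralPurpose, GeneralPurposeConfig, STANDARD, STANDARD_NO_PAD,
  };
  use base64::engine::DecodePaddingMode;
  use base64::{alphabet, Engine as _};

  fn main() {
      // Padded and no-padding modes, suitable for both encoding and decoding.
      assert_eq!(STANDARD.encode(b"fo"), "Zm8=");
      assert_eq!(STANDARD_NO_PAD.encode(b"fo"), "Zm8");

      // Padding-indifferent decoding: accepts input with or without '='.
      let indifferent = GeneralPurpose::new(
          &alphabet::STANDARD,
          GeneralPurposeConfig::new().with_decode_padding_mode(DecodePaddingMode::Indifferent),
      );
      assert_eq!(indifferent.decode("Zm8=").unwrap(), b"fo");
      assert_eq!(indifferent.decode("Zm8").unwrap(), b"fo");
  }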
Frames of the following types have been added for the toserver direction:
1. Pdu: The entire Protocol Data Unit
2. Hdr: Header of the request
3. Data: PDU data
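As a purely illustrative sketch of how these three frames relate to each
other (the header length and types below are made up, not the parser's
actual code):

  // Illustrative only: given a complete PDU and a hypothetical fixed header
  // size, compute the (offset, length) spans the three frame types cover.
  const HDR_LEN: usize = 8; // made-up header size

  fn frame_spans(pdu: &[u8]) -> Option<[(usize, usize); 3]> {
      if pdu.len() < HDR_LEN {
          return None;
      }
      Some([
          (0, pdu.len()),                 // Pdu: the entire Protocol Data Unit
          (0, HDR_LEN),                   // Hdr: header of the request
          (HDR_LEN, pdu.len() - HDR_LEN), // Data: PDU data following the header
      ])
  }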
Feature 4904
With the introduction of the AppLayerResult::incomplete API, fragmented
data is no longer handled fully in the dcerpc code. Given that these code
paths are already covered by the following suricata-verify tests, these
tests can now be safely removed:
- dce-gap-handling
- dcerpc-dce-iface-*
Ticket 5699
Instead of its own internal buffering mechanism for fragmented data, use
the AppLayerResult::incomplete API and let the AppLayer parser take care
of it. This makes memory use more efficient.
Remove variables and code that are no longer needed with the introduction
of this API.
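As a simplified sketch of the pattern, with a stand-in result type rather
than the real one from the parser framework: when a record is only
partially available, the parser reports how much it consumed and how much
it still needs, instead of buffering the fragment itself.

  // Stand-in for the framework's result type, for illustration only.
  enum ParseResult {
      Ok,
      // (consumed, needed): bytes consumed from the input, and bytes needed
      // from that offset before the parser should be called again; the
      // app-layer parser buffers and re-delivers the data accordingly.
      Incomplete(u32, u32),
  }

  fn parse_record(input: &[u8]) -> ParseResult {
      // Hypothetical 4-byte header carrying the record length.
      const HDR_LEN: usize = 4;
      if input.len() < HDR_LEN {
          return ParseResult::Incomplete(0, HDR_LEN as u32);
      }
      let rec_len = u32::from_be_bytes([input[0], input[1], input[2], input[3]]) as usize;
      if input.len() < HDR_LEN + rec_len {
          // Ask for the whole record instead of buffering the fragment here.
          return ParseResult::Incomplete(0, (HDR_LEN + rec_len) as u32);
      }
      // ... process the complete record ...
      ParseResult::Ok
  }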
Ticket 5699
TCP data can be presented to the protocol parser in any way, e.g. one
byte at a time, a single complete PDU, a fragmented PDU, or multiple PDUs
at once. A limit of 1MB can easily be reached in some of these scenarios.
Remove the check that rejects data larger than 1MB.
Commit 2bcc66da58 broke logging from
plugins:
- debug visibility was reduced, making it unusable from an external crate
- the plugin's view of the log level was broken
To fix:
- make debug pub
- minor change to the initialization of the log LEVEL as seen by the
  plugin. I'm not really sure why the previous version wasn't working,
  but this one does.
ldap.responses.count matches on the number of LDAP responses
This keyword maps to the eve field len(ldap.responses[])
It is an unsigned 32-bit integer
Doesn't support prefiltering
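Conceptually the keyword value is just the length of the transaction's
response list; a hypothetical, simplified getter (not the actual
detection code) would be:

  // Hypothetical, simplified types for illustration.
  struct LdapResponse;
  struct LdapTransaction {
      responses: Vec<LdapResponse>,
  }

  // ldap.responses.count: the number of responses in the transaction,
  // i.e. the equivalent of len(ldap.responses[]) in the eve output.
  fn detect_responses_count(tx: &LdapTransaction) -> u32 {
      tx.responses.len() as u32
  }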
Ticket: #7453
ldap.responses.operation matches on Lightweight Directory Access Protocol response operations
This keyword maps to the eve field ldap.responses[].operation
It is an unsigned 8-bit integer
Doesn't support prefiltering
Ticket: #7453
ldap.request.operation matches on Lightweight Directory Access Protocol request operations
This keyword maps to the eve field ldap.request.operation
It is an unsigned 8-bit integer
Doesn't support prefiltering
Ticket: #7453
- remove "rs_" prefix from functions that are not exported
- prefix exported functions with "SC"
- don't export functions that are only used by pointer
Ticket: 7498
Both the macros export_tx_data_get and export_state_data_get can
generate non-pub functions, as the functions they generate are only used
as pointers during registration.
Remove "pub" and "no_mangle" from the generated functions and update
the names of the generated functions to follow Rust rules as they are
no longer exported into the global C namespace.
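A rough sketch of the resulting pattern, with simplified stand-in types
(this is not the actual macro): the generated getter is a private
extern "C" function whose address is handed to the registration code, so
neither pub nor no_mangle is needed.

  use std::os::raw::c_void;

  // Simplified stand-ins for the real types.
  struct AppLayerTxData;
  struct MyTransaction {
      tx_data: AppLayerTxData,
  }

  // What a macro in the spirit of export_tx_data_get might expand to:
  // a non-pub, non-no_mangle extern "C" function.
  macro_rules! tx_data_get {
      ($name:ident, $ty:ty) => {
          unsafe extern "C" fn $name(tx: *mut c_void) -> *mut AppLayerTxData {
              let tx = unsafe { &mut *(tx as *mut $ty) };
              &mut tx.tx_data
          }
      };
  }

  tx_data_get!(my_get_tx_data, MyTransaction);

  fn register_parser() {
      // The generated function is only ever used by pointer during registration.
      let _get_tx_data: unsafe extern "C" fn(*mut c_void) -> *mut AppLayerTxData = my_get_tx_data;
  }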
Ticket: 7498
Because some alprotos will remain static and defined as constants, such
as ALPROTO_UNKNOWN=0 or ALPROTO_FAILED.
The regular, already-used protocols keep their static identifiers for
now, such as ALPROTO_SNMP, but this could be made more dynamic in a
later commit.
ALPROTO_FAILED was used in comparisons, and these needed to change to use
either ALPROTO_MAX or the standard function AppProtoIsValid.
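As a simplified sketch of the kind of check this refers to (stand-in
type, constants and helper; the real ones live in the engine core and may
be defined differently):

  // Stand-ins for illustration only; the values are not the real ones.
  type AppProto = u16;
  const ALPROTO_UNKNOWN: AppProto = 0;
  const ALPROTO_FAILED: AppProto = 1;

  // A validity check in the spirit of AppProtoIsValid: a protocol value is
  // usable only if detection is neither still unknown nor has failed.
  fn app_proto_is_valid(alproto: AppProto) -> bool {
      alproto != ALPROTO_UNKNOWN && alproto != ALPROTO_FAILED
  }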
Ticket: 7465
If a big chunk of data is parsed in one go, we could create many
transactions, even if marking them as complete, and end up with quadratic
complexity calling find_request.
The proposed solution is to fail to create a new transaction if too many
already exist.
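A minimal sketch of the proposed guard, with hypothetical names and an
arbitrary limit:

  // Hypothetical names and limit, for illustration only.
  const MAX_TRANSACTIONS: usize = 256;

  struct Transaction {
      // ...
  }

  struct State {
      transactions: Vec<Transaction>,
  }

  impl State {
      // Refuse to create a new transaction once too many already exist, so a
      // single large input cannot drive quadratic work in find_request.
      fn new_tx(&mut self) -> Option<&mut Transaction> {
          if self.transactions.len() >= MAX_TRANSACTIONS {
              return None;
          }
          self.transactions.push(Transaction {});
          self.transactions.last_mut()
      }
  }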
Allow `set_uint` to accept any number value that can be converted to a
u64. This prevents callers from having to do `as u64`.
This required fixing up callers that used `.into()` so they just pass in
their value without the conversion.
Most calls using `as u64` can have that cast removed, with the exception
of `usize` values, which must still be cast as the conversion can't be
guaranteed to be infallible.
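The change boils down to a generic bound along these lines (sketched with
a stand-in builder, not the actual API):

  // Stand-in builder type, for illustration only.
  struct Builder {
      // ...
  }

  impl Builder {
      // Accept anything that converts losslessly into u64 (u8, u16, u32, u64, ...),
      // so callers no longer need `.into()` or `as u64` at the call site.
      fn set_uint(&mut self, key: &str, value: impl Into<u64>) -> &mut Self {
          let value: u64 = value.into();
          // ... record the key/value pair ...
          let _ = (key, value);
          self
      }
  }

  fn example(jb: &mut Builder, count: u32, size: usize) {
      jb.set_uint("count", count); // no cast needed
      // usize does not implement Into<u64>, so it must still be cast:
      jb.set_uint("size", size as u64);
  }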