Ticket: 7285
As this is the default for websocket, which is bigger than the
default for zlib usage
Also limit the decompressed content to the max-payload-size
configuration parameter, which is also used for non-compressed content.
And use a stateful decoder that remembers the compression
state, so that later messages can be decompressed.
Move flate2.rs to a backend supporting the setting
of window_bits, which is not the case for miniz_oxide.
This will allow WebSocket to use Sec-WebSocket-Extensions
which can set a non-default window_bits
email.received matches on the Received headers of a MIME email
This keyword maps to the EVE field email.received[]
It is a sticky buffer
Supports multiple buffer matching
Supports prefiltering
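An illustrative signature sketch (the smtp protocol keyword, content value and sid are placeholders):
```
# match a Received header mentioning a particular relay (value is an example)
alert smtp any any -> any any (msg:"email.received example"; email.received; content:"mail.example.com"; sid:1;)
```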
Ticket: #7599
email.url matches on URLs extracted from an email
This keyword maps to the EVE field email.url[]
Supports multiple buffer matching
Supports prefiltering
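An illustrative signature sketch (content value and sid are placeholders):
```
# match a URL extracted from the email
alert smtp any any -> any any (msg:"email.url example"; email.url; content:"example.com/download"; sid:1;)
```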
Ticket: #7597
Remove the authors field as it is deprecated.
Update the repository field to point to the Suricata repository.
Remove the homepage field, as it can be found via the repository.
As the "suricata" crate depends on htp, we need to publish htp to
crates.io first, however the "htp" name is already taken. So rename "htp" to
"suricata-htp".
On a normal project where the Cargo.lock is checked in, it would be
normal to see an updated Cargo.lock in git status and the like. As we
use autoconf to generate this file, we should just copy it back to the
input file so we get the same convenience of seeing when it is
updated, which usually means it needs to be checked in.
However, to satisfy "make distcheck", only copy it if the input
template exists; if the input template does not exist we are in an out
of tree build.
ldap.responses.attribute_type matches on LDAP attribute type/description
This keyword maps to the eve field ldap.responses[].search_result_entry.attributes[].type
It is a sticky buffer
Supports multiple buffer matching
Supports prefiltering
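An illustrative signature sketch (attribute name and sid are placeholders):
```
# match an attribute type returned in a search result entry
alert ldap any any -> any any (msg:"ldap.responses.attribute_type example"; ldap.responses.attribute_type; content:"objectClass"; sid:1;)
```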
Ticket: #7533
ldap.request.attribute_type matches on LDAP attribute type/description
This keyword maps to the following eve fields:
ldap.request.search_request.attributes[]
ldap.request.modify_request.changes[].modification.attribute_type
ldap.request.add_request.attributes[].name
ldap.request.compare_request.attribute_value_assertion.description
It is a sticky buffer
Supports multiple buffer matching
Supports prefiltering
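An illustrative signature sketch (attribute name and sid are placeholders):
```
# match an attribute type used in a request
alert ldap any any -> any any (msg:"ldap.request.attribute_type example"; ldap.request.attribute_type; content:"userPassword"; sid:1;)
```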
Ticket: #7533
Move handling of FTP responses to Rust to improve support for FTP
keyword matching. Parsing the response line when encountered
simplifies multi-buffer matching and metadata output.
Issue: 4082
This module uses the sawp-pop3 crate to parse POP3 requests and responses
Features:
- eve logging
- events for parsable but non-RFC-compliant messages
Ticket: 3243
Config:
Firewall rules are like normal rules, with some key differences.
They are loaded separately, and first, from:
```yaml
firewall-rule-path: /etc/suricata/firewall/
firewall-rule-files:
- fw.rules
```
They can also be loaded with --firewall-rules-exclusive, mostly for QA purposes.
Allow -S with --firewall-rules-exclusive, so that firewall and threat detection
rules can be tested together.
Rules:
Differences with regular "threat detection" rules:
1. these rules are evaluated before threat detection rules
2. these rules are evaluated in the order as they appear in the rule file
3. currently only rules specifying an explicit hook are supported
a. as a consequence, no rules will be treated as IP-only, PD-only or
DE-only
Require explicit action scope for firewall rules. Default policy is
drop for the firewall tables.
Actions:
New action "accept" is added to allow traffic in the filter tables.
New scope "accept:tx" is added to allow accepting a transaction.
Tables:
Rulesets are per table.
Table processing order: `packet:filter` -> `packet:td` -> `app:*:*` -> `app:td`.
Each of the tables has some unique properties:
`packet:filter`:
- default policy is `drop:packet`
- rules are processed in order
- action scopes are explicit
- `drop` or `accept` is immediate
- `accept:hook` continues to `packet:td`
`packet:td`:
- default policy is `accept:hook`
- rules are ordered by IDS/IPS ordering logic
- action scopes are implicit
- actions are queued
- continues to `app:*:*` or `alert/action finalize`
`app:*:*`:
- default policy is `drop:flow`
- rules are processed in order
- action scopes are explicit
- `drop` is immediate
- `accept` is conditional on possible `drop` from `packet:td`
- `accept:hook` continues to `app:td`, `accept:packet` or `accept:flow`
continues to `alert/action finalize`
`app:td`:
- default policy is `accept:hook`
- rules are ordered by IDS/IPS ordering logic
- action scopes are implicit
- actions are queued
- continues to `alert/action finalize`
Implementation:
During sigorder, split into packet:filter, app:*:* and general td.
Allow fw rules to work when in pass:flow mode. When firewall mode is enabled,
`pass:flow` will not skip the detection engine anymore, but instead
process the firewall rules and then apply the pass before inspecting threat
detection rules.
If we see a space-like character that is not space 0x20 in the uri,
we set this event, even if the request line finished with a normal
space and protocol
Fixes: 9c324b796e ("http: Use libhtp-rs.")
Ticket: 5053
Move enum OutputJsonLogDirection and struct
EveJsonTxLoggerRegistrationData to a public header used by Rust
thanks to bindgen.
Rename to use the SC prefix on the way.
And make EveJsonSimpleTxLogFunc use a const pointer to the transaction.
suricata.yaml output section for smb now parses a types list
and will restrict logging of transactions to these types.
By default, everything still gets logged
Remove unused rs_smb_log_json_request on the way
Ticket: 7620
This sub-protocol inspects messages exchanged between postgresql backend
and frontend after a 'COPY TO STDOUT' has been processed.
Parses new messages:
- CopyOutResponse -- initiates copy-out mode/sub-protocol
- CopyData -- data transfer messages
- CopyDone -- signals that no more CopyData messages will be seen from
the sender for the current transaction
Task #4854
warning: using `contains()` instead of `iter().any()` is more efficient
--> src/http2/http2.rs:267:20
|
267 | if block.value.iter().any(|&x| x == b'@') {
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ help: try: `block.value.contains(&b'@')`
|
= help: for further information visit https://rust-lang.github.io/rust-clippy/master/index.html#manual_contains
= note: `#[warn(clippy::manual_contains)]` on by default
warning: extern declarations without an explicit ABI are deprecated
--> src/core.rs:72:1
|
72 | extern {
| ^^^^^^ help: explicitly specify the "C" ABI: `extern "C"`
|
= note: `#[warn(missing_abi)]` on by default
Ticket: #2696
There are a lot of changes here, which are described below.
In general these changes are renaming constants to conform to the
libhtp-rs versions (which are generated by cbindgen); making all htp
types opaque and changing struct->member references to
htp_struct_member() function calls; and a handful of changes to offload
functionality onto libhtp-rs from suricata, such as URI normalization
and transaction cleanup.
Functions introduced to handle opaque htp_tx_t:
- tx->parsed_uri => htp_tx_parsed_uri(tx)
- tx->parsed_uri->path => htp_uri_path(htp_tx_parsed_uri(tx))
- tx->parsed_uri->hostname => htp_uri_hostname(htp_tx_parsed_uri(tx))
- htp_tx_get_user_data() => htp_tx_user_data(tx)
- htp_tx_is_http_2_upgrade(tx) convenience function introduced to detect response status 101
and "Upgrade: h2c" header.
Functions introduced to handle opaque htp_tx_data_t:
- d->len => htp_tx_data_len()
- d->data => htp_tx_data_data()
- htp_tx_data_tx(data) function to get the htp_tx_t from the htp_tx_data_t
- htp_tx_data_is_empty(data) convenience function introduced to test if the data is empty.
Other changes:
Build libhtp-rs as a crate inside rust. Update autoconf to no longer
use libhtp as an external dependency. Remove HAVE_HTP feature defines
since they are no longer needed.
Make function arguments and return values const where possible
htp_tx_destroy(tx) will now free an incomplete transaction
htp_time_t replaced with standard struct timeval
Callbacks from libhtp now provide the htp_connp_t and the htp_tx_data_t
as separate arguments. This means the connection parser is no longer
fetched from the transaction inside callbacks.
SCHTPGenerateNormalizedUri() functionality moved inside libhtp-rs, which
now provides normalized URI values.
The normalized URI is available with accessor function: htp_tx_normalized_uri()
Configuration settings added to control the behaviour of the URI normalization:
- htp_config_set_normalized_uri_include_all()
- htp_config_set_plusspace_decode()
- htp_config_set_convert_lowercase()
- htp_config_set_double_decode_normalized_query()
- htp_config_set_double_decode_normalized_path()
- htp_config_set_backslash_convert_slashes()
- htp_config_set_bestfit_replacement_byte()
- htp_config_set_nul_encoded_terminates()
- htp_config_set_nul_raw_terminates()
- htp_config_set_path_separators_compress()
- htp_config_set_path_separators_decode()
- htp_config_set_u_encoding_decode()
- htp_config_set_url_encoding_invalid_handling()
- htp_config_set_utf8_convert_bestfit()
Constants related to configuring uri normalization:
- HTP_URL_DECODE_PRESERVE_PERCENT => HTP_URL_ENCODING_HANDLING_PRESERVE_PERCENT
- HTP_URL_DECODE_REMOVE_PERCENT => HTP_URL_ENCODING_HANDLING_REMOVE_PERCENT
- HTP_URL_DECODE_PROCESS_INVALID => HTP_URL_ENCODING_HANDLING_PROCESS_INVALID
htp_config_set_field_limits(soft_limit, hard_limit) changed to
htp_config_set_field_limit(limit) because libhtp didn't implement soft
limits.
libhtp logging API updated to provide HTP_LOG_CODE constants along with
the message. This eliminates the need to perform string matching on
message text to map log messages to HTTP_DECODER_EVENT values, and the
HTP_LOG_CODE values can be used directly. In support of this,
HTP_DECODER_EVENT values are mapped to their corresponding HTP_LOG_CODE
values.
New log events to describe additional anomalies:
HTP_LOG_CODE_REQUEST_TOO_MANY_LZMA_LAYERS
HTP_LOG_CODE_RESPONSE_TOO_MANY_LZMA_LAYERS
HTP_LOG_CODE_PROTOCOL_CONTAINS_EXTRA_DATA
HTP_LOG_CODE_CONTENT_LENGTH_EXTRA_DATA_START
HTP_LOG_CODE_CONTENT_LENGTH_EXTRA_DATA_END
HTP_LOG_CODE_SWITCHING_PROTO_WITH_CONTENT_LENGTH
HTP_LOG_CODE_DEFORMED_EOL
HTP_LOG_CODE_PARSER_STATE_ERROR
HTP_LOG_CODE_MISSING_OUTBOUND_TRANSACTION_DATA
HTP_LOG_CODE_MISSING_INBOUND_TRANSACTION_DATA
HTP_LOG_CODE_ZERO_LENGTH_DATA_CHUNKS
HTP_LOG_CODE_REQUEST_LINE_UNKNOWN_METHOD
HTP_LOG_CODE_REQUEST_LINE_UNKNOWN_METHOD_NO_PROTOCOL
HTP_LOG_CODE_REQUEST_LINE_UNKNOWN_METHOD_INVALID_PROTOCOL
HTP_LOG_CODE_REQUEST_LINE_NO_PROTOCOL
HTP_LOG_CODE_RESPONSE_LINE_INVALID_PROTOCOL
HTP_LOG_CODE_RESPONSE_LINE_INVALID_RESPONSE_STATUS
HTP_LOG_CODE_RESPONSE_BODY_INTERNAL_ERROR
HTP_LOG_CODE_REQUEST_BODY_DATA_CALLBACK_ERROR
HTP_LOG_CODE_RESPONSE_INVALID_EMPTY_NAME
HTP_LOG_CODE_REQUEST_INVALID_EMPTY_NAME
HTP_LOG_CODE_RESPONSE_INVALID_LWS_AFTER_NAME
HTP_LOG_CODE_RESPONSE_HEADER_NAME_NOT_TOKEN
HTP_LOG_CODE_REQUEST_INVALID_LWS_AFTER_NAME
HTP_LOG_CODE_LZMA_DECOMPRESSION_DISABLED
HTP_LOG_CODE_CONNECTION_ALREADY_OPEN
HTP_LOG_CODE_COMPRESSION_BOMB_DOUBLE_LZMA
HTP_LOG_CODE_INVALID_CONTENT_ENCODING
HTP_LOG_CODE_INVALID_GAP
HTP_LOG_CODE_ERROR
The new htp_log API supports consuming log messages more easily than
walking a list and tracking the current offset. Internally, libhtp-rs
now provides log messages as a queue of htp_log_t, which means the
application can simply call htp_conn_next_log() to fetch the next log
message until the queue is empty. Once the application is done with a
log message, it can call htp_log_free() to dispose of it.
Functions supporting htp_log_t:
htp_conn_next_log(conn) - Get the next log message
htp_log_message(log) - To get the text of the message
htp_log_code(log) - To get the HTP_LOG_CODE value
htp_log_free(log) - To free the htp_log_t
This adds a sticky (multi) buffer to match the "Connection data"
subfield of the "Media description" field in both requests and
responses.
Ticket #7291
This adds a sticky (multi) buffer to match the "Session information"
subfield of the "Media description" field in both requests and
responses.
Ticket #7291
The current parser implementations take a field, such as connection data, and
split it into subfields for a specific structure (e.g., struct ConnectionData).
However, following this approach requires several sticky buffers to match the
whole field, which can make a rule a bit verbose and doesn't offer any advantage
for matching specific parts of a field.
With this patch, a single line is still split into pieces if it makes sense for
parsing purposes, but these pieces are then reassembled into a single string.
This way, only one sticky buffer is needed to match the entire field.
Ticket #7291
This commit adds keyword/build support for the entropy keyword. The
entropy keyword compares a specified entropy value with the Shannon
entropy calculated over the available content.
Issue: 4162
This commit adds
- Parser for the entropy keyword
- Calculation of the Shannon entropy value of the content
Issue: 4162
The entropy keyword syntax is the keyword entropy followed by options
and the entropy value for comparison.
The minimum entropy keyword specification is:
entropy: value <entropy-spec>
This results in the calculated entropy value being compared with
<entropy-spec> with the equality operator.
Calculated entropy values are between 0.0 and 8.0, inclusive.
A match occurs when the values and operator agree. This example matches
if the calculated and specified entropy values are the same.
When entropy keyword options are specified, all options and "value" must
be comma-separated. Options and value may be specified in any order.
Options have default values:
- bytes is equal to the current content length
- offset is 0
- comparison with value is equality
entropy: [bytes <byteval>] [offset <offsetval>] value <entropy-spec>
Using default values:
entropy: bytes 0, offset 0, value =<entropy-spec>
<entropy-spec> is: <operator> (see below) and a value, e.g., "< 4.1"
The following operators are available from the float crate:
- = (default): Match when calculated entropy value equals specified entropy value
- < Match when calculated entropy value is strictly less than specified entropy value
- <= Match when calculated entropy value is less than or equal to specified entropy value
- > Match when calculated entropy value is strictly greater than specified entropy value
- >= Match when calculated entropy value is greater than or equal to specified entropy value
- != Match when calculated entropy value is not equal to specified entropy value
- x-y Match when calculated entropy value is in the range, exclusive
- !x-y Match when calculated entropy value is not in the range, exclusive
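An illustrative signature sketch, assuming the keyword inspects the content of the current buffer or payload (byte count, threshold and sid are placeholders):
```
# alert when the first 64 payload bytes have entropy of at least 7.2
alert tcp any any -> any any (msg:"entropy example"; entropy: bytes 64, offset 0, value >= 7.2; sid:1;)
```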
Reduce per tx space for tracking detection/prefilter progress. Instead
of a per direction u64 of flags, where each bit reflected a progress
value, use a simple u8 to track the linear progression through the
progress values. Use an offset to allow 0 to mean no value.
Add flags field as well to track "skip detect" and "inspect complete".
Deranged v0.4.1 (a dependency of the time crate) has implemented
PartialOrd for some integer types that conflict with the
implementation in the standard library, creating an ambiguity as such
implementations are global. For more info see
https://github.com/jhpratt/deranged/issues/18.
To fix, use "::from" directly, instead of using .into(), which is where
we run into ambiguity.
Notable changes from the previous API:
- rcode will return the rcode as an integer
- rcode_string will return the string representation
Also fixes an issue where an rcode of 0 was returned as nil.
Ticket: #7602
Feature: 7012
Add dns.response sticky buffer to match on dns response fields.
Add rust functions to return dns response packet data.
Unit tests verifying signature matching.
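An illustrative signature sketch (content value and sid are placeholders):
```
# match on data in the DNS response
alert dns any any -> any any (msg:"dns.response example"; dns.response; content:"example.com"; sid:1;)
```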
ldap.responses.message matches on LDAPResult error message
This keyword maps to the following eve fields:
ldap.responses[].bind_response.message
ldap.responses[].search_result_done.message
ldap.responses[].modify_response.message
ldap.responses[].add_response.message
ldap.responses[].del_response.message
ldap.responses[].mod_dn_response.message
ldap.responses[].compare_response.message
ldap.responses[].extended_response.message
It is a sticky buffer
Supports prefiltering
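An illustrative signature sketch (message text and sid are placeholders):
```
# match the LDAPResult error message text
alert ldap any any -> any any (msg:"ldap.responses.message example"; ldap.responses.message; content:"invalid credentials"; nocase; sid:1;)
```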
Ticket: #7532
ldap.responses.result_code matches on LDAP result code
This keyword maps to the following eve fields:
ldap.responses[].bind_response.result_code
ldap.responses[].search_result_done.result_code
ldap.responses[].modify_response.result_code
ldap.responses[].add_response.result_code
ldap.responses[].del_response.result_code
ldap.responses[].mod_dn_response.result_code
ldap.responses[].compare_response.result_code
ldap.responses[].extended_response.result_code
It is an unsigned 32-bit integer
Doesn't support prefiltering
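An illustrative signature sketch (sid is a placeholder; 49 is the standard LDAP invalidCredentials result code):
```
# match responses with result code 49 (invalidCredentials)
alert ldap any any -> any any (msg:"ldap.responses.result_code example"; ldap.responses.result_code:49; sid:1;)
```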
Ticket: #7532
Function ldap_tx_get_responses_dn returns an empty buffer in case
the response doesn't contain the distinguished name field
Fixes: 73ae6e997f ("detect: add ldap.responses.dn")
Change variable name 'req' to 'resp' in function ldap_tx_get_responses_dn and fix documentation nits
Fixes:
73ae6e997f ("detect: add ldap.responses.dn")
16dcee46fc ("detect: add ldap.request.dn")
- remove the rs_ prefix, replace with SC if needed
- remove pub and no_mangle where not needed
- remove some unused functions and fields
Related to ticket: #7498
This uses the cbindgen found during ./configure, and not the one
found on the path during "make"; while they are often the same, they
aren't always.
Ticket: #6384
As references to static mutables are highly discouraged, remove the
global suppression of the compiler warning. Each use case can be
suppressed as needed.
Ticket: #7417
It doesn't appear to be needed. The vec being cleared is only set once
per run, so never needs to be cleared.
Removes one point where we have to suppress the static_mut_refs compiler
warning.
Ticket: #7417
Issue: 4082
Move the configuration file handling to Rust.
With these changes, Suricata will no longer terminate when there is an
invalid value for ftp.memcap. Like earlier Suricata releases, an error
message is logged: "Invalid value <value> for ftp.memcap", but Suricata
will no longer terminate execution. It will use a default value of "0" instead.
Ticket: 7567
After a gap, we search for a new record that may start later than
the beginning of the current stream slice.
If so, consume the first bytes before the start of the record,
so that AppLayerResult::incomplete can be consistent and not
trigger the assertion !((res.needed + res.consumed < input_len))
Ticket: 7556
See RFC 9000 section 17.2.5.2:
After the client has received and processed an Initial
or Retry packet from the server,
it MUST discard any subsequent Retry packets that it receives.
ldap.responses.dn matches on LDAPDN from response operations
This keyword maps to the following eve fields:
ldap.responses[].search_result_entry.base_object
ldap.responses[].bind_response.matched_dn
ldap.responses[].search_result_done.matched_dn
ldap.responses[].modify_response.matched_dn
ldap.responses[].add_response.matched_dn
ldap.responses[].del_response.matched_dn
ldap.responses[].mod_dn_response.matched_dn
ldap.responses[].compare_response.matched_dn
ldap.responses[].extended_response.matched_dn
It is a sticky buffer
Supports prefiltering
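An illustrative signature sketch (DN value and sid are placeholders):
```
# match a distinguished name returned in a response
alert ldap any any -> any any (msg:"ldap.responses.dn example"; ldap.responses.dn; content:"dc=example,dc=com"; sid:1;)
```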
Ticket: #7471
ldap.request.dn matches on LDAPDN from request operations
This keyword maps to the following eve fields:
ldap.request.bind_request.name
ldap.request.add_request.entry
ldap.request.search_request.base_object
ldap.request.modify_request.object
ldap.request.del_request.dn
ldap.request.mod_dn_request.entry
ldap.request.compare_request.entry
It is a sticky buffer
Supports prefiltering
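An illustrative signature sketch (DN value and sid are placeholders):
```
# match a distinguished name used in a request
alert ldap any any -> any any (msg:"ldap.request.dn example"; ldap.request.dn; content:"cn=admin,dc=example,dc=com"; sid:1;)
```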
Ticket: #7471
Ticket: 7556
To do so, we need to add 2 buffers (one for each direction)
to the QuicState structure, so that when parsing the second packet
with a hello/crypto fragment, we still have the data of the first
hello/crypto fragment.
Use a hardcoded limit so that these buffers cannot grow indefinitely,
and set an event when the limit is reached
This may happen in some situations if the app-layer parser only sees
unknown messages and sets an event: there will be an empty transaction,
but nothing to log.
Related to
Task #5566
No state change, but since we added Unknown responses, we should handle
that case -- should we have a specific state for such cases?
Related to
Bug #5524
Task #5566
Some inner parsers were using it, some weren't. Better to standardize
this. Also take the time to avoid magic numbers for representing the
expected lengths for pgsql PDUs.
Also throwing PgsqlParseError and allowing for incomplete results.
Related to
Task #5566
Bug #5524
Some backend messages can be the shortest pgsql length possible,
4 bytes, but the parser expected all messages to be longer than that.
Related to
Bug #5524
Also disable bindgen's generated layout tests. They are valid for the
platform generating the tests, but may not be valid for other
platforms. For example, if the tests are generated on a 64 bit
platform the tests will not be valid when run on a 32 bit platform as
pointers are a different size.
However, the generated bindings are valid for both platforms.
Ticket: #7341
We don't keep bindgen's autogenerated "do not edit" line as it contains
the bindgen version, which could break the CI check for out of date
bindings. So add our own "do not edit" line.
Ticket: #7341
Have bindgen generate bindings for app-layer-protos.h, then use the
generated definitions of AppProto/AppProtoEnum instead of defining
them ourselves.
This header was chosen as it's used by Rust, and it's a simple header
with no circular dependencies.
Ticket: #7341
Bindgen works by processing a header file which includes all other
header files it should generate bindings for. For this I've created
bindgen.h which just includes app-layer-protos.h for now as an
example.
These bindings are then generated and saved in the "suricata-sys"
crate and become available as "suricata_sys::sys".
Ticket: #7341
Follow Rust convention of using a "sys" crate for bindings to C
functions. The bindings don't exist yet, but will be generated by
bindgen and put into this crate.
Ticket: #7341
In a recent warning reported by scan-build, datasets were found to be
using a blocking call in a critical section.
datasets.c:187:12: warning: Call to blocking function 'fgets' inside of critical section [unix.BlockInCriticalSection]
187 | while (fgets(line, (int)sizeof(line), fp) != NULL) {
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
datasets.c:292:12: warning: Call to blocking function 'fgets' inside of critical section [unix.BlockInCriticalSection]
292 | while (fgets(line, (int)sizeof(line), fp) != NULL) {
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
datasets.c:368:12: warning: Call to blocking function 'fgets' inside of critical section [unix.BlockInCriticalSection]
368 | while (fgets(line, (int)sizeof(line), fp) != NULL) {
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
datasets.c:442:12: warning: Call to blocking function 'fgets' inside of critical section [unix.BlockInCriticalSection]
442 | while (fgets(line, (int)sizeof(line), fp) != NULL) {
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
datasets.c:512:12: warning: Call to blocking function 'fgets' inside of critical section [unix.BlockInCriticalSection]
512 | while (fgets(line, (int)sizeof(line), fp) != NULL) {
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
5 warnings generated.
These calls are blocking in the multi tenant mode where several tenants
may be trying to load the same dataset in parallel.
In a single tenant mode, this operation is performed as a part of a
single thread before the engine startup.
In order to avoid the warning and simplify the code, this commit moves
the initial file reading to Rust, with a much simpler handling
of dataset and datarep.
Bug 7398
A no padding option is provided as a mode, as it's a variant suitable
for encoding and decoding.
A padding optional function is added that is indifferent to padding
when decoding. This can be useful when you're not sure if padding
exists, and don't really care.
Frames of the following types have been added for the toserver direction:
1. Pdu: The entire Protocol Data Unit
2. Hdr: Header of the request
3. Data: PDU data
Feature 4904
With the introduction of the AppLayerResult::incomplete API, fragmented data
is no longer handled fully in the dcerpc code. Given that these code
paths are already covered by the following s-v tests, these tests can now be
safely removed.
- dce-gap-handling
- dcerpc-dce-iface-*
Ticket 5699
Instead of our own internal buffering mechanism for fragmented
data, use the AppLayerResult::incomplete API to let the app-layer parser
take care of it. This makes the memory use more efficient.
Remove any unneeded variables and code with the introduction of this
API.
Ticket 5699
TCP data can be presented to the protocol parser in any way e.g. one
byte at a time, single complete PDU, fragmented PDU, multiple PDUs at
once. A limit of 1MB can be easily reached in some of such scenarios.
Remove the check that rejects data that is more than 1MB.
Commit 2bcc66da58 broke logging from
plugins:
- debug visibility was reduced, making it unusable from an external crate
- the plugin's view of the log level was broken
To fix:
- make debug pub
- minor change to the initialization of the log LEVEL as seen by the
plugin. I'm not really sure why the previous version wasn't working
though, but this one does
ldap.responses.count matches on the number of LDAP responses
This keyword maps to the eve field len(ldap.responses[])
It is an unsigned 32-bit integer
Doesn't support prefiltering
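An illustrative signature sketch (threshold and sid are placeholders):
```
# alert when a transaction carries more than 10 responses
alert ldap any any -> any any (msg:"ldap.responses.count example"; ldap.responses.count:>10; sid:1;)
```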
Ticket: #7453
ldap.responses.operation matches on Lightweight Directory Access Protocol response operations
This keyword maps to the eve field ldap.responses[].operation
It is an unsigned 8-bit integer
Doesn't support prefiltering
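An illustrative signature sketch (sid is a placeholder; the numeric value is assumed to follow LDAP's protocolOp numbering, where 1 is bind_response):
```
# match a specific response operation code
alert ldap any any -> any any (msg:"ldap.responses.operation example"; ldap.responses.operation:1; sid:1;)
```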
Ticket: #7453
ldap.request.operation matches on Lightweight Directory Access Protocol request operations
This keyword maps to the eve field ldap.request.operation
It is an unsigned 8-bit integer
Doesn't support prefiltering
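An illustrative signature sketch (sid is a placeholder; the numeric value is assumed to follow LDAP's protocolOp numbering, where 3 is search_request):
```
# match a specific request operation code
alert ldap any any -> any any (msg:"ldap.request.operation example"; ldap.request.operation:3; sid:1;)
```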
Ticket: #7453
- remove "rs_" prefix from functions that are not exported
- prefix exported functions with "SC"
- don't export functions that are only used by pointer
Ticket: 7498
Both the macros export_tx_data_get and export_state_data_get can
generate non-pub functions as the function they generate is only used
as a pointer during registration.
Remove "pub" and "no_mangle" from the generated functions and update
the names of the generated functions to follow Rust rules as they are
no longer exported into the global C namespace.
Ticket: 7498