Ticket: 7658
Suricata does not handle it well if we open a file for this tx,
do not close it, but set the transaction state to completed.
RFC 9113 section 6.1 states:
If a DATA frame is received whose Stream Identifier field is 0x00,
the recipient MUST respond with a connection error (Section 5.4.1)
of type PROTOCOL_ERROR.
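A C-flavored sketch of that check, with a hypothetical frame-header struct
(the actual HTTP/2 parser is Rust; all names here are illustrative only):

    #include <stdint.h>

    /* Hypothetical decoded HTTP/2 frame header (see RFC 9113 section 4.1). */
    typedef struct {
        uint32_t length;    /* 24-bit payload length */
        uint8_t  type;      /* 0x00 == DATA */
        uint8_t  flags;
        uint32_t stream_id; /* 31-bit stream identifier */
    } h2_frame_header_t;

    #define H2_FRAME_TYPE_DATA 0x00

    /* Returns non-zero if the frame must be answered with a connection
     * error of type PROTOCOL_ERROR, per RFC 9113 section 6.1. */
    static int h2_data_frame_on_stream_zero(const h2_frame_header_t *hdr)
    {
        return hdr->type == H2_FRAME_TYPE_DATA && hdr->stream_id == 0;
    }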
This module uses the sawp-pop3 crate to parse POP3 requests and responses
Features:
- eve logging
- events for parsable but non-RFC-compliant messages
Ticket: 3243
Ticket: #2696
There are a lot of changes here, which are described below.
In general these changes rename constants to conform to the
libhtp-rs versions (which are generated by cbindgen), make all htp
types opaque and change struct->member references to
htp_struct_member() function calls, and offload a handful of
pieces of functionality from Suricata onto libhtp-rs, such as URI
normalization and transaction cleanup.
Functions introduced to handle opaque htp_tx_t:
- tx->parsed_uri => htp_tx_parsed_uri(tx)
- tx->parsed_uri->path => htp_uri_path(htp_tx_parsed_uri(tx))
- tx->parsed_uri->hostname => htp_uri_hostname(htp_tx_parsed_uri(tx))
- htp_tx_get_user_data() => htp_tx_user_data(tx)
- htp_tx_is_http_2_upgrade(tx) convenience function introduced to detect response status 101
and "Upgrade: h2c" header.
Functions introduced to handle opaque htp_tx_data_t:
- d->len => htp_tx_data_len()
- d->data => htp_tx_data_data()
- htp_tx_data_tx(data) function to get the htp_tx_t from the htp_tx_data_t
- htp_tx_data_is_empty(data) convenience function introduced to test if the data is empty.
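A sketch of the accessor style for a chunk of transaction data. Only the
accessor names come from the lists above; the htp_uri_t/bstr return types
shown here are assumptions:

    /* Sketch: the information that used to come from direct struct access
     * (d->data, d->len, tx->parsed_uri->hostname, ...) now goes through
     * accessor functions on the opaque types. */
    static void inspect_tx_data(htp_tx_data_t *d)
    {
        htp_tx_t *tx = htp_tx_data_tx(d);              /* tx owning this data */

        if (htp_tx_data_is_empty(d))                   /* was: d->len == 0 */
            return;

        const uint8_t *data = htp_tx_data_data(d);     /* was: d->data */
        size_t len = htp_tx_data_len(d);               /* was: d->len */

        const htp_uri_t *uri = htp_tx_parsed_uri(tx);  /* was: tx->parsed_uri */
        const bstr *host = htp_uri_hostname(uri);      /* was: ...->hostname */
        void *ud = htp_tx_user_data(tx);               /* was: htp_tx_get_user_data(tx) */

        (void)data; (void)len; (void)host; (void)ud;
    }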
Other changes:
Build libhtp-rs as a crate inside rust. Update autoconf to no longer
use libhtp as an external dependency. Remove HAVE_HTP feature defines
since they are no longer needed.
Make function arguments and return values const where possible
htp_tx_destroy(tx) will now free an incomplete transaction
htp_time_t replaced with standard struct timeval
Callbacks from libhtp now provide the htp_connp_t and the htp_tx_data_t
as separate arguments. This means the connection parser is no longer
fetched from the transaction inside callbacks.
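A sketch of the new callback shape; the exact prototypes, const qualifiers
and the htp_config_register_response_body_data() registration helper are
assumed from classic libhtp naming, not confirmed here:

    /* Old shape (sketch): int cb(htp_tx_data_t *d), with the connection
     * parser looked up from the transaction inside the callback.
     * New shape (sketch): the parser arrives as its own argument. */
    static int response_body_cb(htp_connp_t *connp, htp_tx_data_t *d)
    {
        htp_tx_t *tx = htp_tx_data_tx(d);  /* the tx is still reachable from the data */
        (void)connp; (void)tx;
        return 0;
    }

    static void register_callbacks(htp_cfg_t *cfg)
    {
        /* Registration helper name assumed from classic libhtp conventions. */
        htp_config_register_response_body_data(cfg, response_body_cb);
    }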
SCHTPGenerateNormalizedUri() functionality moved inside libhtp-rs, which
now provides normalized URI values.
The normalized URI is available via the accessor function htp_tx_normalized_uri().
Configuration settings added to control the behaviour of the URI normalization
(a configuration sketch follows below):
- htp_config_set_normalized_uri_include_all()
- htp_config_set_plusspace_decode()
- htp_config_set_convert_lowercase()
- htp_config_set_double_decode_normalized_query()
- htp_config_set_double_decode_normalized_path()
- htp_config_set_backslash_convert_slashes()
- htp_config_set_bestfit_replacement_byte()
- htp_config_set_nul_encoded_terminates()
- htp_config_set_nul_raw_terminates()
- htp_config_set_path_separators_compress()
- htp_config_set_path_separators_decode()
- htp_config_set_u_encoding_decode()
- htp_config_set_url_encoding_invalid_handling()
- htp_config_set_utf8_convert_bestfit()
Constants related to configuring URI normalization:
- HTP_URL_DECODE_PRESERVE_PERCENT => HTP_URL_ENCODING_HANDLING_PRESERVE_PERCENT
- HTP_URL_DECODE_REMOVE_PERCENT => HTP_URL_ENCODING_HANDLING_REMOVE_PERCENT
- HTP_URL_DECODE_PROCESS_INVALID => HTP_URL_ENCODING_HANDLING_PROCESS_INVALID
htp_config_set_field_limits(soft_limit, hard_limit) changed to
htp_config_set_field_limit(limit) because libhtp didn't implement soft
limits.
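A sketch of how a subset of this might be wired up. It assumes each setter
takes the htp_cfg_t and a simple flag or enum value; the real signatures
(for example, whether a decoder-context argument is required) may differ,
and the limit value is an arbitrary example:

    static void configure_uri_normalization(htp_cfg_t *cfg)
    {
        /* Enable/disable a subset of the normalization options. */
        htp_config_set_normalized_uri_include_all(cfg, 0);
        htp_config_set_plusspace_decode(cfg, 1);
        htp_config_set_convert_lowercase(cfg, 1);
        htp_config_set_backslash_convert_slashes(cfg, 1);
        htp_config_set_path_separators_compress(cfg, 1);
        htp_config_set_u_encoding_decode(cfg, 1);

        /* Uses one of the renamed handling constants listed above. */
        htp_config_set_url_encoding_invalid_handling(
            cfg, HTP_URL_ENCODING_HANDLING_PRESERVE_PERCENT);

        /* Single limit now; the old soft/hard pair is gone. The value here
         * is an arbitrary example. */
        htp_config_set_field_limit(cfg, 18 * 1024);

        /* Per transaction, the result is later read back with
         * htp_tx_normalized_uri(tx). */
    }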
libhtp logging API updated to provide HTP_LOG_CODE constants along with
the message. This eliminates the need to perform string matching on
message text to map log messages to HTTP_DECODER_EVENT values, and the
HTP_LOG_CODE values can be used directly. In support of this,
HTTP_DECODER_EVENT values are mapped to their corresponding HTP_LOG_CODE
values.
New log events to describe additional anomalies:
HTP_LOG_CODE_REQUEST_TOO_MANY_LZMA_LAYERS
HTP_LOG_CODE_RESPONSE_TOO_MANY_LZMA_LAYERS
HTP_LOG_CODE_PROTOCOL_CONTAINS_EXTRA_DATA
HTP_LOG_CODE_CONTENT_LENGTH_EXTRA_DATA_START
HTP_LOG_CODE_CONTENT_LENGTH_EXTRA_DATA_END
HTP_LOG_CODE_SWITCHING_PROTO_WITH_CONTENT_LENGTH
HTP_LOG_CODE_DEFORMED_EOL
HTP_LOG_CODE_PARSER_STATE_ERROR
HTP_LOG_CODE_MISSING_OUTBOUND_TRANSACTION_DATA
HTP_LOG_CODE_MISSING_INBOUND_TRANSACTION_DATA
HTP_LOG_CODE_ZERO_LENGTH_DATA_CHUNKS
HTP_LOG_CODE_REQUEST_LINE_UNKNOWN_METHOD
HTP_LOG_CODE_REQUEST_LINE_UNKNOWN_METHOD_NO_PROTOCOL
HTP_LOG_CODE_REQUEST_LINE_UNKNOWN_METHOD_INVALID_PROTOCOL
HTP_LOG_CODE_REQUEST_LINE_NO_PROTOCOL
HTP_LOG_CODE_RESPONSE_LINE_INVALID_PROTOCOL
HTP_LOG_CODE_RESPONSE_LINE_INVALID_RESPONSE_STATUS
HTP_LOG_CODE_RESPONSE_BODY_INTERNAL_ERROR
HTP_LOG_CODE_REQUEST_BODY_DATA_CALLBACK_ERROR
HTP_LOG_CODE_RESPONSE_INVALID_EMPTY_NAME
HTP_LOG_CODE_REQUEST_INVALID_EMPTY_NAME
HTP_LOG_CODE_RESPONSE_INVALID_LWS_AFTER_NAME
HTP_LOG_CODE_RESPONSE_HEADER_NAME_NOT_TOKEN
HTP_LOG_CODE_REQUEST_INVALID_LWS_AFTER_NAME
HTP_LOG_CODE_LZMA_DECOMPRESSION_DISABLED
HTP_LOG_CODE_CONNECTION_ALREADY_OPEN
HTP_LOG_CODE_COMPRESSION_BOMB_DOUBLE_LZMA
HTP_LOG_CODE_INVALID_CONTENT_ENCODING
HTP_LOG_CODE_INVALID_GAP
HTP_LOG_CODE_ERROR
The new htp_log API supports consuming log messages more easily than
walking a list and tracking the current offset. Internally, libhtp-rs
now provides log messages as a queue of htp_log_t, which means the
application can simply call htp_conn_next_log() to fetch the next log
message until the queue is empty. Once the application is done with a
log message, it calls htp_log_free() to dispose of it (a draining loop
is sketched after the function list below).
Functions supporting htp_log_t:
htp_conn_next_log(conn) - Get the next log message
htp_log_message(log) - Get the text of the message
htp_log_code(log) - Get the HTP_LOG_CODE value
htp_log_free(log) - Free the htp_log_t
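A sketch of the draining loop. The function names come from the list above;
the htp_conn_t type and the helpers' return types are assumptions:

    /* Drain all pending log messages for a connection. */
    static void drain_htp_logs(htp_conn_t *conn)
    {
        htp_log_t *log;
        while ((log = htp_conn_next_log(conn)) != NULL) {
            int code = htp_log_code(log);            /* one of HTP_LOG_CODE_* */
            const char *msg = htp_log_message(log);  /* message text */

            if (code == HTP_LOG_CODE_DEFORMED_EOL) {
                /* map directly to the matching decoder event */
            }
            (void)msg;

            htp_log_free(log);                       /* done with this entry */
        }
    }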
Ticket: 7556
To reassemble a hello/crypto message split across packets, we need to add
2 buffers (one for each direction) to the QuicState structure, so that
when parsing the second packet with a hello/crypto fragment, we still
have the data of the first hello/crypto fragment.
Use a hardcoded limit so that these buffers cannot grow indefinitely,
and set an event when the limit is reached.
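The actual buffers live in the Rust QuicState; the following C-flavored
sketch only illustrates the capped-append rule described above, with
made-up names and an arbitrary limit:

    #include <stdint.h>
    #include <string.h>

    #define QUIC_FRAG_BUF_LIMIT 65535  /* arbitrary example cap */

    /* Hypothetical per-direction buffer for hello/crypto fragments. */
    typedef struct {
        uint8_t  data[QUIC_FRAG_BUF_LIMIT];
        uint32_t len;
        int      overflow_event;  /* raised once when the cap is hit */
    } quic_frag_buf_t;

    /* Append a fragment, refusing growth past the hardcoded limit so the
     * buffer cannot grow indefinitely; flag an event instead. */
    static int quic_frag_append(quic_frag_buf_t *b, const uint8_t *frag,
                                uint32_t flen)
    {
        if (flen > QUIC_FRAG_BUF_LIMIT - b->len) {
            b->overflow_event = 1;
            return -1;
        }
        memcpy(b->data + b->len, frag, flen);
        b->len += flen;
        return 0;
    }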
TCP urgent handling is a complex topic due to conflicting RFCs and
implementations.
Until now the URG flag and urgent pointer values were simply ignored,
leading to an effective "inline" processing of urgent data. Many
implementations, however, do not default to this behavior.
Many actual implementations use the urgent mechanism to send 1 byte of
data out of band to the application.
Complicating the matter is that the way the urgent logic is handled is
generally configurable at both the OS and the application level. So from
the network it is impossible to know with confidence what the settings are.
This patch adds the following policies (a suricata.yaml sketch follows the lists below):
`stream.reassembly.urgent.policy`:
- drop: drop URG packets before they affect the stream engine
- inline: ignore the urgent pointer and process all data inline
- oob (out of band): treat the last byte as out of band
- gap: skip the last byte, but do not adjust sequence offsets, leading to
gaps in the data
For the `oob` option, tracking of a sequence number offset is required,
as the OOB data does "consume" sequence number space. This is limited to
64k. For this reason, there is a second policy:
`stream.reassembly.urgent.oob-limit-policy`:
- drop: drop URG packets before they affect the stream engine
- inline: ignore the urgent pointer and process all data inline
- gap: skip the last byte, but do not adjust sequence offsets, leading to
gaps in the data
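A minimal suricata.yaml sketch of the two settings; the exact nesting is
assumed from the dotted option names above:

    stream:
      reassembly:
        urgent:
          policy: oob              # drop | inline | oob | gap
          oob-limit-policy: inline # drop | inline | gap, applies once 64k of
                                   # OOB sequence space has been consumed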
Bug: #7411.
Add events for the following resource name parsing issues:
- name truncated as it is too long
- maximum number of labels reached
- infinite loop
Currently these events are only registered when the condition is
encountered but recoverable, that is, when we are able to return some
of the name, usually in a truncated state.
As name parsing has many code paths, we pass in a pointer to a flags
field that can be updated by the name parser. This is done in
addition to the flags being set on a specific name, as when logging we
want to designate which fields are truncated, etc., but for alerts we
just care that something happened during the parse. It also reduces
errors, as checking the flags and setting the event cannot be
forgotten if some new parser is written that also parses names.
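The DNS parser is Rust; this C-flavored sketch only illustrates the shared
flags-pointer pattern described above, with hypothetical names and without
real label/compression handling:

    #include <stdint.h>
    #include <string.h>

    #define NAME_FLAG_TRUNCATED     0x01  /* name truncated as it is too long */
    #define NAME_FLAG_LABEL_LIMIT   0x02  /* maximum number of labels reached */
    #define NAME_FLAG_INFINITE_LOOP 0x04  /* compression pointer loop */

    #define DNS_MAX_NAME 255

    /* Every name-parsing path receives the same `parse_flags` pointer, so a
     * recoverable problem found anywhere is recorded once for the whole
     * message, alongside any per-name flags used by the logger. */
    static int parse_name(const uint8_t *input, uint32_t len,
                          uint8_t *out, uint32_t out_size,
                          uint32_t *parse_flags)
    {
        uint32_t max = out_size < DNS_MAX_NAME ? out_size : DNS_MAX_NAME;
        uint32_t copy = len;
        if (copy > max) {
            copy = max;
            *parse_flags |= NAME_FLAG_TRUNCATED;  /* recoverable: keep partial name */
        }
        memcpy(out, input, copy);
        return (int)copy;
    }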
Ticket: #7280
Ticket: 7191
This avoids quadratic complexity in libhtp.
Make the limit configurable from suricata.yaml,
and set an event when network traffic goes over the limit.
Enable backoff for most rules. The rules looking at session start-up
use a count of 1 and a multiplier of 2.
Post-3whs rules use a count of 1 and a multiplier of 10.
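Assuming these use the threshold keyword's backoff type with count and
multiplier options, an affected rule would look roughly like the placeholder
below (not one of the real bundled rules):

    alert tcp any any -> any any (msg:"EXAMPLE start-of-session anomaly"; \
        flow:to_server; threshold:type backoff, count 1, multiplier 2; \
        sid:1000001; rev:1;)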
Ticket: 3958
- transactions are now bidirectional
- there is a logger
- gap support is improved with probing for resync
- frames support
- app-layer events
- the enip_command keyword now accepts string enumeration values
- add enip.status keyword
- add keywords:
enip.product_name, enip.protocol_version, enip.revision,
enip.identity_status, enip.state, enip.serial, enip.product_code,
enip.device_type, enip.vendor_id, enip.capabilities,
enip.cip_attribute, enip.cip_class, enip.cip_instance,
enip.cip_status, enip.cip_extendedstatus
Ticket: 5926
HTTP2 continuation frames are defined in RFC 9113.
They allow header blocks to be split over multiple HTTP2 frames.
For Suricata to correctly process these header blocks, it
must reassemble the payload of these HTTP2 frames.
Otherwise, we get incomplete decoding of header names and/or
values when decoding a single frame.
The design is to add a field to the HTTP2 state, as the RFC states that
these continuation frames form a discrete unit:
> Field blocks MUST be transmitted as a contiguous sequence of frames,
> with no interleaved frames of any other type or from any other stream.
So, we do not have to duplicate this reassembly field per stream id.
Another design choice is to wait for the reassembly to be complete
before doing any decoding, to avoid quadratic complexity from
partially decoding the data.
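A C-flavored sketch of the accumulate-then-decode design with hypothetical
names and no error events; the real state lives in the Rust HTTP2 parser:

    #include <stdint.h>
    #include <stdlib.h>
    #include <string.h>

    /* One reassembly slot on the connection state: HEADERS/CONTINUATION
     * frames of a field block cannot interleave with other streams, so a
     * single buffer (not one per stream id) is enough. */
    typedef struct {
        uint8_t *buf;
        uint32_t len;
        uint32_t stream_id;  /* stream that opened the field block */
    } h2_field_block_t;

    /* Accumulate HEADERS/CONTINUATION payloads and decode only once
     * END_HEADERS is seen, so the block is decoded exactly once. */
    static int h2_on_header_frame(h2_field_block_t *fb, const uint8_t *payload,
                                  uint32_t plen, int end_headers)
    {
        uint8_t *nbuf = realloc(fb->buf, fb->len + plen);
        if (nbuf == NULL)
            return -1;
        memcpy(nbuf + fb->len, payload, plen);
        fb->buf = nbuf;
        fb->len += plen;

        if (end_headers) {
            /* decode_field_block(fb->buf, fb->len); -- complete block */
            fb->len = 0;
        }
        return 0;
    }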
We only try to parse a small subset of what is possible in
RFB. Currently we only understand some standard auth schemes
and stop parsing when the server-client handshake is complete.
Since in IPS mode returning an error from the parser causes
drops that are likely uncalled for, we do not want to return
errors when we simply do not understand what happens in the
traffic. This addresses Redmine #5912.
Bug: #5912.
Support the case where there are multiple SYN retransmits, where
each has a new timestamp.
Before this patch, Suricata would only accept a SYN/ACK that
matches the last timestamp. However, observed behavior is that
the server may choose to only respond to the first. In IPS mode
this could lead to a connection timing out as Suricata drops
the SYN/ACK it considers wrong, and the server continues to
retransmit it.
This patch reuses the SYN/ACK queuing logic to keep a list
of SYN packets and their window, timestamp, wscale and sackok
settings. Then when the SYN/ACK arrives, it is first evaluated
against the normal session state. But if it fails due to a
timestamp mismatch, it will look for queued SYNs and see if
any of them match the timestamp. If one does, the ssn is updated
to use that SYN and the SYN/ACK is accepted.
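A C-flavored sketch of that lookup, with hypothetical types; the real logic
sits in Suricata's TCP session handling:

    #include <stdint.h>
    #include <stddef.h>

    /* Hypothetical entry in the queued-SYN list described above. */
    typedef struct syn_entry {
        uint32_t ts_value;  /* timestamp carried by that SYN */
        uint16_t win;
        uint8_t  wscale;
        uint8_t  sackok;
        struct syn_entry *next;
    } syn_entry_t;

    /* Accept the SYN/ACK if its timestamp echo matches the session; if not,
     * look for a queued SYN with that timestamp and, on a match, return it
     * so the session can be updated to that SYN's settings. */
    static const syn_entry_t *synack_find_syn(uint32_t ssn_ts,
                                              uint32_t synack_ts_echo,
                                              const syn_entry_t *queue,
                                              int *accepted)
    {
        if (synack_ts_echo == ssn_ts) {
            *accepted = 1;      /* normal path, no update needed */
            return NULL;
        }
        for (const syn_entry_t *q = queue; q != NULL; q = q->next) {
            if (q->ts_value == synack_ts_echo) {
                *accepted = 1;  /* switch the ssn to this SYN's settings */
                return q;
            }
        }
        *accepted = 0;          /* reject as before */
        return NULL;
    }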
Bug: #5856.
When Suricata handles files over SMB, it does not wait for the
NBSS record to be complete, and can stream the payload to the
file... But it did not check the consistency of the SMB record
length being read or written against the NBSS record length.
This could lead to an evasion where an attacker crafts an SMB
write with a too-large Length field, and then sends its evil
payload, even if the server returned an error for the write request.
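A sketch of the added consistency check with illustrative names; the real
check is in the Rust SMB parser:

    #include <stdint.h>

    /* The length claimed by an SMB read/write must fit in what is left of
     * the enclosing NBSS record; a write whose Length exceeds it would let
     * an attacker stream extra "file" data the server never accepted. */
    static int smb_len_fits_nbss(uint32_t smb_data_len, uint32_t nbss_remaining)
    {
        return smb_data_len <= nbss_remaining;
    }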
Ticket: #5770
If the UDP packet is shorter than the IP payload length, we no longer flag it
as an invalid UDP packet. A UDP packet can therefore be shorter than the IP
payload. The keyword "udp.hlen_invalid" became outdated as we no longer flag
short UDP packets as invalid.
Redmine ticket: #5693
Accept DNS messages with an invalid opcode that are otherwise
valid. Such DNS messages will create a parser event.
This is a change of behavior: previously an invalid opcode would cause
the DNS message to not be detected or parsed as DNS.
Issue: #5444