It generates a `compile_commands.json` suitable for clangd.
This is almost mandatory for NixOS users, as tools like bear are not
able to correctly intercept the clang calls due to the use of a
compiler wrapper.
Ticket: #7669
Ticket: #2696
There are a lot of changes here, which are described below.
In general, these changes rename constants to conform to the
libhtp-rs versions (which are generated by cbindgen); make all htp
types opaque, changing struct->member references into
htp_struct_member() function calls; and offload a handful of pieces of
functionality from Suricata onto libhtp-rs, such as URI normalization
and transaction cleanup.
Functions introduced to handle opaque htp_tx_t:
- tx->parsed_uri => htp_tx_parsed_uri(tx)
- tx->parsed_uri->path => htp_uri_path(htp_tx_parsed_uri(tx))
- tx->parsed_uri->hostname => htp_uri_hostname(htp_tx_parsed_uri(tx))
- htp_tx_get_user_data() => htp_tx_user_data(tx)
- htp_tx_is_http_2_upgrade(tx) convenience function introduced to detect
  response status 101 and the "Upgrade: h2c" header.
Functions introduced to handle opaque htp_tx_data_t:
- d->len => htp_tx_data_len()
- d->data => htp_tx_data_data()
- htp_tx_data_tx(data) function to get the htp_tx_t from the htp_tx_data_t
- htp_tx_data_is_empty(data) convenience function introduced to test if the data is empty.
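As an illustration of the call-site migration (a sketch only; the
variable names are invented and the exact accessor signatures,
abbreviated in the lists above, are assumptions):

    /* Before, fields were read directly from the structs:
     *     bstr *path = tx->parsed_uri->path;
     *     size_t len = d->len;
     * After, the opaque types are read through accessors: */
    static void example(htp_tx_t *tx, htp_tx_data_t *d)
    {
        const bstr *path = htp_uri_path(htp_tx_parsed_uri(tx));
        const unsigned char *bytes = htp_tx_data_data(d);
        size_t len = htp_tx_data_len(d);
        if (htp_tx_is_http_2_upgrade(tx)) {
            /* response status was 101 with an "Upgrade: h2c" header */
        }
        (void)path; (void)bytes; (void)len;
    }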
Other changes:
Build libhtp-rs as a crate inside rust/. Update autoconf to no longer
use libhtp as an external dependency. Remove the HAVE_HTP feature
defines since they are no longer needed.
Make function arguments and return values const where possible.
htp_tx_destroy(tx) will now free an incomplete transaction.
htp_time_t has been replaced with the standard struct timeval.
Callbacks from libhtp now provide the htp_connp_t and the htp_tx_data_t
as separate arguments. This means the connection parser is no longer
fetched from the transaction inside callbacks.
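For example, a request body data callback now has this general shape
(a sketch; the callback name and the status constant are assumptions):

    /* The connection parser arrives as its own argument instead of
     * being fetched from the transaction inside the callback. */
    static int request_body_data_cb(htp_connp_t *connp, htp_tx_data_t *d)
    {
        htp_tx_t *tx = htp_tx_data_tx(d); /* tx is still reachable */
        if (htp_tx_data_is_empty(d))
            return HTP_STATUS_OK; /* constant name is an assumption */
        /* ... consume htp_tx_data_data(d) / htp_tx_data_len(d) ... */
        (void)connp; (void)tx;
        return HTP_STATUS_OK;
    }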
SCHTPGenerateNormalizedUri() functionality moved inside libhtp-rs, which
now provides normalized URI values.
The normalized URI is available via the accessor function
htp_tx_normalized_uri().
Configuration settings added to control the behaviour of the URI
normalization (a short sketch follows the list):
- htp_config_set_normalized_uri_include_all()
- htp_config_set_plusspace_decode()
- htp_config_set_convert_lowercase()
- htp_config_set_double_decode_normalized_query()
- htp_config_set_double_decode_normalized_path()
- htp_config_set_backslash_convert_slashes()
- htp_config_set_bestfit_replacement_byte()
- htp_config_set_nul_encoded_terminates()
- htp_config_set_nul_raw_terminates()
- htp_config_set_path_separators_compress()
- htp_config_set_path_separators_decode()
- htp_config_set_u_encoding_decode()
- htp_config_set_url_encoding_invalid_handling()
- htp_config_set_utf8_convert_bestfit()
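A minimal sketch of wiring a few of these together (illustrative only;
the parameter lists are assumptions and the real setters may take
additional arguments):

    /* Configure URI normalization on a config object; the normalized
     * value is later read per transaction with htp_tx_normalized_uri(tx). */
    static void configure_uri_normalization(htp_cfg_t *cfg)
    {
        htp_config_set_normalized_uri_include_all(cfg, 1);
        htp_config_set_plusspace_decode(cfg, 1);
        htp_config_set_convert_lowercase(cfg, 1);
    }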
Constants related to configuring URI normalization:
- HTP_URL_DECODE_PRESERVE_PERCENT => HTP_URL_ENCODING_HANDLING_PRESERVE_PERCENT
- HTP_URL_DECODE_REMOVE_PERCENT => HTP_URL_ENCODING_HANDLING_REMOVE_PERCENT
- HTP_URL_DECODE_PROCESS_INVALID => HTP_URL_ENCODING_HANDLING_PROCESS_INVALID
htp_config_set_field_limits(soft_limit, hard_limit) changed to
htp_config_set_field_limit(limit) because libhtp didn't implement soft
limits.
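The migration at call sites is mechanical (a sketch; passing the
config object as the first argument is an assumption):

    /* Before: htp_config_set_field_limits(cfg, soft_limit, hard_limit);
     * the soft limit never had an effect. After: */
    htp_config_set_field_limit(cfg, limit);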
libhtp logging API updated to provide HTP_LOG_CODE constants along with
the message. This eliminates the need to perform string matching on
message text to map log messages to HTTP_DECODER_EVENT values, and the
HTP_LOG_CODE values can be used directly. In support of this,
HTTP_DECODER_EVENT values are mapped to their corresponding HTP_LOG_CODE
values.
New log events to describe additional anomalies:
HTP_LOG_CODE_REQUEST_TOO_MANY_LZMA_LAYERS
HTP_LOG_CODE_RESPONSE_TOO_MANY_LZMA_LAYERS
HTP_LOG_CODE_PROTOCOL_CONTAINS_EXTRA_DATA
HTP_LOG_CODE_CONTENT_LENGTH_EXTRA_DATA_START
HTP_LOG_CODE_CONTENT_LENGTH_EXTRA_DATA_END
HTP_LOG_CODE_SWITCHING_PROTO_WITH_CONTENT_LENGTH
HTP_LOG_CODE_DEFORMED_EOL
HTP_LOG_CODE_PARSER_STATE_ERROR
HTP_LOG_CODE_MISSING_OUTBOUND_TRANSACTION_DATA
HTP_LOG_CODE_MISSING_INBOUND_TRANSACTION_DATA
HTP_LOG_CODE_ZERO_LENGTH_DATA_CHUNKS
HTP_LOG_CODE_REQUEST_LINE_UNKNOWN_METHOD
HTP_LOG_CODE_REQUEST_LINE_UNKNOWN_METHOD_NO_PROTOCOL
HTP_LOG_CODE_REQUEST_LINE_UNKNOWN_METHOD_INVALID_PROTOCOL
HTP_LOG_CODE_REQUEST_LINE_NO_PROTOCOL
HTP_LOG_CODE_RESPONSE_LINE_INVALID_PROTOCOL
HTP_LOG_CODE_RESPONSE_LINE_INVALID_RESPONSE_STATUS
HTP_LOG_CODE_RESPONSE_BODY_INTERNAL_ERROR
HTP_LOG_CODE_REQUEST_BODY_DATA_CALLBACK_ERROR
HTP_LOG_CODE_RESPONSE_INVALID_EMPTY_NAME
HTP_LOG_CODE_REQUEST_INVALID_EMPTY_NAME
HTP_LOG_CODE_RESPONSE_INVALID_LWS_AFTER_NAME
HTP_LOG_CODE_RESPONSE_HEADER_NAME_NOT_TOKEN
HTP_LOG_CODE_REQUEST_INVALID_LWS_AFTER_NAME
HTP_LOG_CODE_LZMA_DECOMPRESSION_DISABLED
HTP_LOG_CODE_CONNECTION_ALREADY_OPEN
HTP_LOG_CODE_COMPRESSION_BOMB_DOUBLE_LZMA
HTP_LOG_CODE_INVALID_CONTENT_ENCODING
HTP_LOG_CODE_INVALID_GAP
HTP_LOG_CODE_ERROR
The new htp_log API supports consuming log messages more easily than
walking a list and tracking the current offset. Internally, libhtp-rs
now provides log messages as a queue of htp_log_t, which means the
application can simply call htp_conn_next_log() to fetch the next log
message until the queue is empty. Once the application is done with a
log message, it can call htp_log_free() to dispose of it.
Functions supporting htp_log_t:
htp_conn_next_log(conn) - get the next log message
htp_log_message(log) - get the text of the message
htp_log_code(log) - get the HTP_LOG_CODE value
htp_log_free(log) - free the htp_log_t
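Putting these together, draining the queue might look like this
(a sketch; handle_log() is a hypothetical application function):

    /* Fetch each queued log, hand it to the application, then free it. */
    htp_log_t *log;
    while ((log = htp_conn_next_log(conn)) != NULL) {
        handle_log(htp_log_code(log), htp_log_message(log)); /* hypothetical */
        htp_log_free(log);
    }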
Cache Hyperscan serialized databases to disk to avoid recompiling the
same databases when Suricata is run again with the same ruleset.
Hyperscan binary files are stored per rulegroup in the designated
folder, by default in the cached library folder.
Since caching is done per signature group head, part of the ruleset
can change and the caches of the unchanged signature groups can still
be reused.
Loading *fresh* ET Open ruleset: 19 seconds
Loading *cached* ET Open ruleset: 7 seconds
Ticket: 7170
Add EVE documentation for QUIC and Pgsql to their respective sections of
the userguide.
Also add a complete EVE reference as an appendix.
Other protocols can be done, but it's a manual process: document them
in the schema, then add the glue to pull them into the documentation.
The documentation is generated during "make dist", or, if it doesn't
exist, "conf.py" will attempt to generate the EVE documentation when
building on Readthedocs.
Provide an example of an extremely simple application that links
against Suricata. This provides a Makefile integrated with the
Suricata build system for in-tree building, as well as an example
Makefile for building out of tree.
Currently this application just wraps SuricataMain and does nothing
else.
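The whole program amounts to the following (the header name is an
assumption):

    /* Minimal application: delegate everything to Suricata. */
    #include "suricata.h"

    int main(int argc, char **argv)
    {
        return SuricataMain(argc, argv);
    }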
Simplify the Makefile by avoiding automake and providing our own
Makefile.in that is suitable for in-tree builds of the plugin and can
also serve as an example for standalone plugins.
But the bigger benefit of this is to allow building the example plugin
even with --disable-shared provided to configure, as this is just a
phony limitation imposed by automake/libtool.
This is an example of what adding plugin examples to the Suricata repo
could look like.
This plugin is an example plugin for an EVE filetype. It could be
extended to support outputs like Redis, syslog, etc.
There is one issue with adding plugins like this to an autotools
project: the project can't be built with --disable-shared. This is
more of an autotools limitation than a Suricata issue; Suricata built
with --disable-shared will load plugins just fine.
Note that the examples directory was added to DIST_SUBDIRS, as we
don't want normal builds to recurse into it and attempt to build the
plugin; it's just an example, but we still need to keep distcheck
happy.
On installation, make sure the data directory is created. This will
usually be /var/lib/suricata/data, but otherwise follows the
autoconf/automake instructions.
This directory is for runtime state information, which for now is
datasets but may be expanded in the future. Suricata already expects
this directory to exist for "state" and "save" datasets, but it has
been up to the user to create it.
Currently, it seems easier to upload the diagram images to git than
to make the image generation script work with out-of-tree builds and
other corner cases.
This means, however, that one must actively remember to update the
msc diagram files, run the script, and re-add the new png files
whenever they need to be updated. To raise awareness of that, a
watermark was added to the diagram images.
Also removed the configuration steps that added mscgen as a
dependency (locally and for workflow builds and readthedocs).
If suricata-update is not available during "make install-full", don't
exit with 1; instead, give the reason why it's not installed, but
still let the install succeed.
Split the headers and source into two variables. Headers are
marked noinst so they don't get automatically installed on
"make install". Instead they will be installed by a custom
Makefile target, "make install-headers".
Following the pattern of many other libraries, provide a -config
program to output cflags and libs to properly link an application
against the library.
usage: libsuricata-config [--cflags] [--libs] [--static]
--cflags and --libs can be used individually or together.
--static will link against the static libraries instead of the
shared library. Note that if the shared library is not available,
the static libraries will be provided even without this option.
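For example, an application could be built against the library with an
invocation along these lines:

    cc -o myapp myapp.c $(libsuricata-config --cflags --libs)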
As we don't install the libraries by default, provide a make target,
"install-library" to install the libsuricata library files.
If shared library support exists, both the static and shared
libraries will be installed, otherwise only the static libraries
will be installed.
Don't try to run suricata-update if it's not installed.
The 'make install-rules' target would try to run suricata-update when
it detected that suricata-update was bundled, but didn't consider
whether suricata-update was actually installed.
Install classification.config and reference.config to $datadir,
where they can be updated on every upgrade.
This required moving them into a sub-directory for autotools
to do its thing.
Redmine issue:
https://redmine.openinfosecfoundation.org/issues/3209
So far, when the "make install-rules" stage was executed, the libhtp
path was not recognized, as ldconfig has not been run by this stage.
Set "LD_LIBRARY_PATH", since we already know the path where libhtp
will be.
Closes redmine ticket #2669.
Stop falling back to the old method of installing rules into
/etc/suricata/rules if Suricata-Update is not available.
The goal here is to move away from the behaviour of installing
rules to /etc/suricata/rules as part of the default install
process. The engine provided rules are already installed to
/usr/share/suricata/rules, which can then be used as input
to rule management tools such as Suricata-Update.
This does not change the behaviour for Suricata release users
with the bundled Suricata-Update.
Also removes the Oinkmaster and PulledPork suggestions for rule
management.
Will be built and installed as part of the Python code used
for suricatactl, which is intended to be the generic place
for all Python utility code that gets installed with Suricata.
No change to suricatasc code.
This patch introduces the eBPF cluster mode. This mode uses an
extended BPF function that is loaded into the kernel and provides
the load balancing.
An example cluster function is provided in the ebpf subdirectory; it
implements an ippair load balancing function. This function uses the
same method as the one used in autofp ippair to provide a symmetrical
load balancing based on IP addresses (a simplified sketch follows).
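The core idea, as a simplified sketch (not the actual ebpf/ example;
the helper usage, offsets, and queue count are illustrative, and a
real program must check the EtherType and also handle IPv6):

    #include <linux/bpf.h>
    #include <bpf/bpf_helpers.h>

    /* Symmetrical ippair load balancing: addition commutes, so both
     * directions of a flow select the same fanout member. */
    SEC("loadbalancer")
    int lb(struct __sk_buff *skb)
    {
        __u32 saddr = 0, daddr = 0;
        /* IPv4 addresses at fixed offsets past a 14-byte Ethernet header */
        bpf_skb_load_bytes(skb, 14 + 12, &saddr, sizeof(saddr));
        bpf_skb_load_bytes(skb, 14 + 16, &daddr, sizeof(daddr));
        return (saddr + daddr) % 64;
    }

    char _license[] SEC("license") = "GPL";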
A simple filter example that allows dropping IPv6 is added to the
source.
This patch also prepares the infrastructure to be able to load and
use maps inside eBPF files. This will be used later for flow bypass.
Use a new directory, python, to host the Suricata Python modules.
One entry point is suricatactl, a control script for miscellaneous
tasks. Currently only filestore pruning is implemented.
Rust is currently optional, use the --enable-rust configure
argument to enable Rust.
By default Rust will be built in release mode. If debug is enabled
then it will be built in debug mode.
On make dist, "cargo vendor" will be run to make a local copy
of Rust dependencies for the distribution archive file.
Add autoconf checks to test for the vendored source, and, if it
exists, set up the build to use the vendored code instead of fetching
it from the network.
Also, as Cargo requires semantic versioning, the Suricata version
had to change from 4.0dev to 4.0.0-dev.
Decode Modbus request and response messages, and extract the MODBUS
Application Protocol (MBAP) header and the function code.
In the case of read/write functions, extract the message contents
(read/write address, quantity, count, data to write).
Link request and response messages into a transaction according to
the Transaction Identifier (transaction management based on the DNS
source code).
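For reference, the extracted MBAP header has the following layout
(shown as a C struct for clarity; on the wire it is 7 packed
big-endian bytes, immediately followed by the function code):

    #include <stdint.h>

    /* MODBUS Application Protocol (MBAP) header, per the messaging
     * guide referenced below. */
    typedef struct {
        uint16_t transaction_id; /* pairs a response with its request */
        uint16_t protocol_id;    /* always 0 for Modbus */
        uint16_t length;         /* bytes that follow, incl. unit id */
        uint8_t  unit_id;        /* remote slave identifier */
    } ModbusMBAPHeader;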
MODBUS Messaging on TCP/IP Implementation Guide V1.0b
(http://www.modbus.org/docs/Modbus_Messaging_Implementation_Guide_V1_0b.pdf)
MODBUS Application Protocol Specification V1.1b3
(http://www.modbus.org/docs/Modbus_Application_Protocol_V1_1b3.pdf)
Based on DNS source code.
Signed-off-by: David DIALLO <diallo@et.esia.fr>