Instead of keeping a per-detection-engine list of rules that couldn't be
prefiltered, put those rules into special "prefilter" engines.
For packet and frame rules this doesn't change much; it just removes
some hard-coded logic from the detection engine.
The packet non-prefilter rules go into a special "non-prefilter"
prefilter engine that adds additional filtering for the packet variant:
it can prefilter on alproto, dsize and destination port.
The frame non-prefilter rules are added to a single engine that checks
the alproto and the frame type per rule.
For app-layer, there is an engine per progress value, per app-layer
protocol and per direction. This hooks the app-layer non-prefilter rules
into the app-layer inspection logic at the correct "progress" hook.
E.g. a rule like
    dns.query; bsize:1;
or a negated MPM rule, which also falls into this category:
    dns.query; content:!"abc";
is part of a special "generic list" app engine for DNS, at the
same progress hook as `dns.query`.
This all results in a lot fewer checks:
previous:
--------------------------------------------------------------------------
Date: 1/29/2025 -- 10:22:25. Sorted by: number of checks.
--------------------------------------------------------------------------
Num Rule Gid Rev Ticks % Checks Matches Max Ticks Avg Ticks Avg Match Avg No Match
-------- ------------ -------- -------- ------------ ------ -------- -------- ----------- ----------- ----------- --------------
1 20 1 0 181919672 11.85 588808 221 60454 308.96 2691.46 308.07
2 50 1 0 223455914 14.56 453104 418 61634 493.17 3902.59 490.02
3 60 1 0 185990683 12.12 453104 418 60950 410.48 1795.40 409.20
4 51 1 0 192436011 12.54 427028 6084 61223 450.64 2749.12 417.42
5 61 1 0 180401533 11.75 427028 6084 61093 422.46 2177.04 397.10
6 70 1 0 153899099 10.03 369836 0 61282 416.13 0.00 416.13
7 71 1 0 123389405 8.04 369836 12833 44921 333.63 2430.23 258.27
8 41 1 0 63889876 4.16 155824 12568 39138 410.01 1981.97 272.10
9 40 1 0 64149724 4.18 155818 210 39792 411.70 4349.57 406.38
10 10 1 0 70848850 4.62 65558 0 39544 1080.70 0.00 1080.70
11 11 1 0 94743878 6.17 65558 32214 60547 1445.19 2616.14 313.92
this commit:
--------------------------------------------------------------------------
Date: 1/29/2025 -- 10:15:46. Sorted by: number of checks.
--------------------------------------------------------------------------
Num Rule Gid Rev Ticks % Checks Matches Max Ticks Avg Ticks Avg Match Avg No Match
-------- ------------ -------- -------- ------------ ------ -------- -------- ----------- ----------- ----------- --------------
1 50 1 0 138776766 19.23 95920 418 167584 1446.80 3953.11 1435.83
2 60 1 0 97988084 13.58 95920 418 182817 1021.56 1953.63 1017.48
3 51 1 0 105318318 14.60 69838 6084 65649 1508.04 2873.38 1377.74
4 61 1 0 89571260 12.41 69838 6084 164632 1282.56 2208.41 1194.20
5 11 1 0 91132809 12.63 32779 32214 373569 2780.22 2785.58 2474.45
6 10 1 0 66095303 9.16 32779 0 56704 2016.39 0.00 2016.39
7 70 1 0 48107573 6.67 12928 0 42832 3721.19 0.00 3721.19
8 71 1 0 32308792 4.48 12928 12833 39565 2499.13 2510.05 1025.09
9 41 1 0 25546837 3.54 12886 12470 41479 1982.53 1980.84 2033.05
10 40 1 0 26069992 3.61 12886 210 38495 2023.13 4330.05 1984.91
11 20 1 0 639025 0.09 221 221 14750 2891.52 2891.52 0.00
Suricata
Introduction
Suricata is a network IDS, IPS and NSM engine developed by the OISF and the Suricata community.
Resources
Contributing
We're happily taking patches and other contributions. Please see our Contribution Process for how to get started.
Suricata is a complex piece of software dealing with mostly untrusted input. Mishandling this input will have serious consequences:
- in IPS mode a crash may knock a network offline
- in passive mode a compromise of the IDS may lead to loss of critical and confidential data
- missed detection may lead to undetected compromise of the network
In other words, we think the stakes are pretty high, especially since in many common cases the IDS/IPS will be directly reachable by an attacker.
For this reason, we have developed a QA process that is quite extensive. A consequence is that contributing to Suricata can be a somewhat lengthy process.
On a high level, the steps are:
- GitHub-CI based checks. This runs automatically when a pull request is made.
- Review by devs from the team and community
- QA runs from private QA setups. These are private due to the nature of the test traffic.
Overview of Suricata's QA steps
OISF team members are able to submit builds to our private QA setup. It will run a series of build tests and a regression suite to confirm no existing features break.
The final QA run takes a few hours at minimum, and generally runs overnight. It currently runs:
- extensive build tests on different operating systems, compilers, optimization levels and configure features
- static code analysis using cppcheck, scan-build
- runtime code analysis using valgrind, AddressSanitizer, LeakSanitizer
- regression tests for past bugs
- output validation of logging
- unix socket testing
- pcap based fuzz testing using ASAN and LSAN
- traffic replay based IDS and IPS tests
Next to these tests, based on the type of code change further tests can be run manually:
- traffic replay testing (multi-gigabit)
- large pcap collection processing (multi-terabytes)
- fuzz testing (might take multiple days or even weeks)
- pcap based performance testing
- live performance testing
- various other manual tests based on evaluation of the proposed changes
It's important to realize that almost all of the tests above are used as acceptance tests. If something fails, it's up to you to address this in your code.
One step of the QA is currently run post-merge. We submit builds to the Coverity Scan program. Due to limitations of this (free) service, we can submit once a day max. Of course it can happen that after the merge the community will find issues. For both cases we request you to help address the issues as they may come up.
FAQ
Q: Will you accept my PR?
A: That depends on a number of things, including the code quality. With new features it also depends on whether the team and/or the community think the feature is useful, how much it affects other code and features, the risk of performance regressions, etc.
Q: When will my PR be merged?
A: It depends, if it's a major feature or considered a high risk change, it will probably go into the next major version.
Q: Why was my PR closed?
A: As documented in the Suricata GitHub workflow, we expect a new pull request for every change.
Normally, the team (or community) will give feedback on a pull request after which it is expected to be replaced by an improved PR. So look at the comments. If you disagree with the comments we can still discuss them in the closed PR.
If the PR was closed without comments it's likely due to QA failure. If the GitHub-CI checks failed, the PR should be fixed right away. No need for a discussion about it, unless you believe the QA failure is incorrect.
Q: The compiler/code analyser/tool is wrong, what now?
A: To assist in the automation of the QA, we do not accept leaving warnings or errors in place. In some cases this could mean that we add a suppression if the tool supports it (e.g. valgrind, DrMemory). Some warnings can be disabled. In some exceptional cases the only 'solution' is to refactor the code to work around a false positive caused by a static code checker's limitation. While frustrating, we prefer this over leaving warnings in the output: warnings tend to get ignored, which increases the risk of hiding other warnings.
Q: I think your QA test is wrong
A: If you really think it is, we can discuss how to improve it. But don't come to this conclusion too quickly, more often it's the code that turns out to be wrong.
Q: Do you require signing of a contributor license agreement?
A: Yes, we do this to keep the ownership of Suricata in one hand: the Open Information Security Foundation. See http://suricata.io/about/open-source/ and http://suricata.io/about/contribution-agreement/