The HTTP bodies (http_client_body and http_server_body/file_data) use
settings to control how much data must be available before doing the
first inspection:

    request-body-minimal-inspect-size
    response-body-minimal-inspect-size

These settings default to 32k, as quite a few existing rules depend on
this.
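For reference, these options live under the libhtp section of
suricata.yaml; a minimal sketch, using the 32k default mentioned above
(exact layout and defaults may differ per version):

    libhtp:
      default-config:
        # do the first body inspection only once this much data is
        # available, or when the body is complete
        request-body-minimal-inspect-size: 32kb
        response-body-minimal-inspect-size: 32kb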
At the same time, the 'raw stream' inspection uses its own limits. By
default it inspects the data in blocks of about 2.5k.
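The block size comes from the stream reassembly chunk settings in
suricata.yaml; a sketch assuming the stock layout, where the 2560 byte
default matches the 'about 2.5k' above:

    stream:
      reassembly:
        # approximate size of the reassembled chunks handed to raw
        # stream inspection
        toserver-chunk-size: 2560
        toclient-chunk-size: 2560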
This could lead to a situation where rules would not match. For
example, consider 2 rules like this:

    content:"abc"; content:"data="; http_client_body; depth:5; sid:1;
    content:"xyz"; sid:2;
Sid 1 would only be inspected when the POST body reached the 32k limit
or when it was complete. In the observed case the POST body was 18k.
Sid 2 is inspected as soon as the 2.5k limit is reached, and then again
for each 2.5k increment. This moves the raw stream tracker forward.
So by the time sid 1 is inspected, some 18/19k into the stream, the
raw stream tracker has already moved forward by approximately 17.5k.
As a result, the raw stream match ('abc') of sid 1 may no longer match.
Since the body match is at the start of the buffer (depth:5), it makes
sense to inspect the body and the stream together.
The body inspection uses a tracker, 'body_inspected', that keeps track
of how far into the body both the MPM and the per-signature inspection
have moved.
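As a rough sketch of that state (apart from 'body_inspected' the names
below are hypothetical, not the actual Suricata structures):

    #include <stdint.h>

    /* sketch only; all names except 'body_inspected' are made up */
    typedef struct HttpBodyTracker_ {
        uint64_t content_len_so_far; /* body bytes received so far */
        uint64_t body_inspected;     /* offset up to which both MPM and
                                      * per-signature inspection have
                                      * run; data before it is not
                                      * normally revisited */
    } HttpBodyTracker;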
This patch updates the logic in 2 ways (see the sketch below):
1. it triggers earlier HTTP body inspection, matched to the stream
   inspection. When the detection engine finds it has stream data
   available for inspection, it passes the new 'STREAM_FLUSH' flag to
   the HTTP body inspection code, which will then do an early
   inspection, even if still before the min inspect size.
2. to still somewhat adhere to the min inspect size, the body
   tracker is not updated until the min inspect size is reached.
   This will lead to some re-evaluation of the same body data.
If raw stream reassembly is disabled, the 'STREAM_FLUSH' flag is
never set, and the old behavior is used.
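A minimal sketch of the resulting flow, building on the tracker sketch
above; only 'STREAM_FLUSH' and 'body_inspected' are names from this
change, the rest is hypothetical:

    #include <stdbool.h>
    #include <stdint.h>

    #define STREAM_FLUSH 0x01 /* illustrative flag value */

    /* hypothetical helper running MPM + per-signature body inspection */
    void RunBodyInspection(const uint8_t *buf, uint32_t buf_len);

    void InspectHttpBody(HttpBodyTracker *t, const uint8_t *buf,
                         uint32_t buf_len, uint8_t flags,
                         uint32_t min_inspect_size, bool body_complete)
    {
        if (buf_len >= min_inspect_size || body_complete) {
            /* old behavior: enough data (or the full body) available */
            RunBodyInspection(buf, buf_len);
            t->body_inspected = buf_len; /* advance the tracker */
        } else if (flags & STREAM_FLUSH) {
            /* 1. stream data is up for inspection, so inspect the body
             *    early to keep stream and body matching in sync */
            RunBodyInspection(buf, buf_len);
            /* 2. body_inspected is NOT advanced yet; until the min
             *    inspect size is reached the same data is re-evaluated */
        }
        /* with raw stream reassembly disabled STREAM_FLUSH is never
         * set, so the old wait-for-min-inspect-size behavior applies */
    }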
Bug #2522.
README.md
Suricata
Introduction
Suricata is a network IDS, IPS and NSM engine.
Installation
https://redmine.openinfosecfoundation.org/projects/suricata/wiki/Suricata_Installation
User Guide
You can follow the Suricata user guide to get started.
Our deprecated (but still useful) user guide is also available.
Contributing
We're happily taking patches and other contributions. Please see https://redmine.openinfosecfoundation.org/projects/suricata/wiki/Contributing for how to get started.
Suricata is a complex piece of software dealing with mostly untrusted input. Mishandling this input will have serious consequences:
- in IPS mode a crash may knock a network offline;
- in passive mode a compromise of the IDS may lead to loss of critical and confidential data;
- missed detection may lead to undetected compromise of the network.
In other words, we think the stakes are pretty high, especially since in many common cases the IDS/IPS will be directly reachable by an attacker.
For this reason, we have developed a QA process that is quite extensive. A consequence is that contributing to Suricata can be a somewhat lengthy process.
On a high level, the steps are:
- Travis-CI based build & unit testing. This runs automatically when a pull request is made.
- Review by devs from the team and community.
- QA runs.
Overview of Suricata's QA steps
Trusted devs and core team members are able to submit builds to our (semi) public Buildbot instance. It will run a series of build tests and a regression suite to confirm no existing features break.
The final QA run takes a few hours minimally, and is started by Victor. It currently runs:
- extensive build tests on different OSes, compilers, optimization levels and configure features
- static code analysis using cppcheck, scan-build
- runtime code analysis using valgrind, DrMemory, AddressSanitizer, LeakSanitizer
- regression tests for past bugs
- output validation of logging
- unix socket testing
- pcap based fuzz testing using ASAN and LSAN
Next to these tests, further tests can be run manually based on the type of code change:
- traffic replay testing (multi-gigabit)
- large pcap collection processing (multi-terabytes)
- AFL based fuzz testing (might take multiple days or even weeks)
- pcap based performance testing
- live performance testing
- various other manual tests based on evaluation of the proposed changes
It's important to realize that almost all of the tests above are used as acceptance tests. If something fails, it's up to you to address this in your code.
One step of the QA is currently run post-merge. We submit builds to the Coverity Scan program. Due to limitations of this (free) service, we can submit once a day max. Of course it can happen that after the merge the community will find issues. For both cases we request you to help address the issues as they may come up.
FAQ
Q: Will you accept my PR?
A: That depends on a number of things, including the code quality. With new features it also depends on whether the team and/or the community think the feature is useful, how much it affects other code and features, the risk of performance regressions, etc.
Q: When will my PR be merged?
A: It depends; if it's a major feature or considered a high-risk change, it will probably go into the next major version.
Q: Why was my PR closed?
A: As documented in the Suricata Github workflow here https://redmine.openinfosecfoundation.org/projects/suricata/wiki/Github_work_flow, we expect a new pull request for every change.
Normally, the team (or community) will give feedback on a pull request after which it is expected to be replaced by an improved PR. So look at the comments. If you disagree with the comments we can still discuss them in the closed PR.
If the PR was closed without comments it's likely due to QA failure. If the Travis-CI check failed, the PR should be fixed right away. No need for a discussion about it, unless you believe the QA failure is incorrect.
Q: The compiler/code analyser/tool is wrong, what now?
A: To assist in the automation of the QA, we do not accept leaving warnings or errors in place. In some cases this could mean that we add a suppression if the tool supports that (e.g. valgrind, DrMemory). Some warnings can be disabled. In some exceptional cases the only 'solution' is to refactor the code to work around a false positive caused by a static code checker's limitations. While frustrating, we prefer this over leaving warnings in the output. Warnings tend to get ignored and then increase the risk of hiding other warnings.
Q: I think your QA test is wrong
A: If you really think it is, we can discuss how to improve it. But don't come to this conclusion too quickly; more often than not it's the code that turns out to be wrong.
Q: Do you require signing of a contributor license agreement?
A: Yes, we do this to keep the ownership of Suricata in one hand: the Open Information Security Foundation. See http://suricata-ids.org/about/open-source/ and http://suricata-ids.org/about/contribution-agreement/