When a packet comes in from a real ethernet card, the kernel strips
the VLAN header and delivers a modified packet, so we need to insert
the VLAN header back before sending the packet on the wire.
To do so, we pass an option to the raw socket to add a reserve before
the packet data. This gives Suricata some headroom to move the
ethernet addresses in front of their actual place and to insert the
VLAN header in the correct position.
We get the VLAN info from the ring buffer, as the call to AFPWrite is
always done in the release function, so we still have access to the
memory.
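A minimal sketch of the reserve option on the AF_PACKET socket; the
wrapper function and the exact reserve size are illustrative, not the
precise Suricata code:

    #include <linux/if_packet.h>
    #include <stdio.h>
    #include <sys/socket.h>

    /* Ask the kernel to leave head room in front of each packet in
     * the ring so the ethernet addresses can be shifted back and a
     * 4 byte 802.1Q header inserted before transmission. */
    static int AFPSetupReserve(int fd)
    {
        unsigned int reserve = 4; /* room for the VLAN tag (TPID + TCI) */
        if (setsockopt(fd, SOL_PACKET, PACKET_RESERVE,
                       &reserve, sizeof(reserve)) < 0) {
            perror("setsockopt(PACKET_RESERVE)");
            return -1;
        }
        return 0;
    }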
The JSON logger had already been updated to handle
transactions without a response. Apply the same logic
to the older dns-log where a logger is registered
for each direction.
Fixes issue 2012.
If the app-layer parsing is given very long content, it exceeds the
maximum defined for "alproto_name". This adds a check for too-long
content before it is passed to "strlcpy", and logs an error.
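A hedged sketch of the kind of check described; the buffer size and
function name are illustrative, not Suricata's actual values:

    #include <stdio.h>
    #include <string.h>

    #define ALPROTO_NAME_MAX 64  /* illustrative maximum, not Suricata's */

    /* Reject over-long protocol names before copying them into the
     * fixed-size buffer, instead of letting the copy truncate silently.
     * The caller must pass a buffer of at least ALPROTO_NAME_MAX bytes. */
    static int SetAlprotoName(char *alproto_name, const char *content)
    {
        if (strlen(content) >= ALPROTO_NAME_MAX) {
            fprintf(stderr, "content too long for alproto_name (max %d)\n",
                    ALPROTO_NAME_MAX);
            return -1;
        }
        /* strlcpy is assumed to be available in the codebase;
         * snprintf works just as well for the sketch. */
        snprintf(alproto_name, ALPROTO_NAME_MAX, "%s", content);
        return 0;
    }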
The new Hyperscan 4.4 API provides a function to check for SSSE3
presence at runtime. This allows us to fall back to non-Hyperscan
matchers on systems without SSSE3 even when the suricata executable
is built with Hyperscan support. Addresses Redmine issue #2010.
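Roughly, the runtime check boils down to the following; the wrapper
function and the fallback handling around it are assumptions, only
the hs_valid_platform() call comes from the Hyperscan 4.4 API:

    #include <hs.h>
    #include <stdio.h>

    /* Returns 1 if Hyperscan can run on this CPU (SSSE3 present),
     * 0 if we should fall back to a non-Hyperscan matcher. */
    static int MpmHSCheckSupport(void)
    {
        if (hs_valid_platform() != HS_SUCCESS) {
            fprintf(stderr, "SSSE3 not available, disabling Hyperscan\n");
            return 0;
        }
        return 1;
    }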
Signed-off-by: Sascha Steinbiss <sascha@steinbiss.name>
Tested-by: Arturo Borrero Gonzalez <arturo@debian.org>
If the segments section in the yaml is omitted (the default) or the
pool size is set to 'from_mtu', the size of the pool will be the MTU
minus 40. If the MTU couldn't be determined, it's assumed to be 1500,
so the segment size for the pool will be 1460.
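The computation amounts to the following minimal sketch; the function
name, and reading the 40 byte overhead as IPv4 + TCP headers, are
assumptions:

    #include <stdint.h>

    #define DEFAULT_MTU     1500
    #define MTU_TO_SEGSIZE  40   /* presumably IPv4 (20) + TCP (20) headers */

    /* mtu == 0 means it could not be determined for the interface. */
    static uint32_t SegmentPoolSizeFromMtu(uint32_t mtu)
    {
        if (mtu == 0)
            mtu = DEFAULT_MTU;
        return mtu - MTU_TO_SEGSIZE;   /* 1500 - 40 = 1460 */
    }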
At shutdown, all flows that still need work are handled by the flow
force reassembly logic. This means one or more flow end pseudo packets
are generated and pushed through the engine for final detection and
logging.
In some cases this would not work correctly. This was caused by the
flow timeout logic kicking in before all the 'live' packets were
processed. Before the flow timeout handling runs, the receive threads
are disabled; however, the engine did not wait for the in-flight
packets to be fully processed. In autofp mode, packets could still
be in the queue between the receive thread(s) and the flow worker(s).
This patch adds a new function that 'drains' all the packet threads
of any in-progress packets before moving on to the flow timeout
logic.
Bug #1946.
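Conceptually the drain step looks like the sketch below; the type and
field names are hypothetical, not the actual Suricata API, and only
illustrate the idea of waiting until every packet thread has an empty
input queue and no packet in flight:

    #include <stdbool.h>
    #include <unistd.h>

    /* Hypothetical thread descriptor: each packet thread exposes the
     * number of packets in its input queue plus an "idle" flag set
     * when it is not processing a packet. */
    typedef struct PacketThread_ {
        volatile int in_queue_len;
        volatile bool idle;
    } PacketThread;

    /* Block until every packet thread has no queued and no in-flight
     * packets, so the flow timeout logic only sees a quiet engine. */
    static void DrainPacketThreads(PacketThread *threads, int nthreads)
    {
        for (;;) {
            bool drained = true;
            for (int i = 0; i < nthreads; i++) {
                if (threads[i].in_queue_len > 0 || !threads[i].idle) {
                    drained = false;
                    break;
                }
            }
            if (drained)
                return;
            usleep(1000); /* back off briefly and re-check */
        }
    }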
When running ./setup-app-layer.sh, require the protocol name to
start with a capital letter so it looks somewhat like a proper
name. This will help give better function names.
For example:
./setup-app-layer.sh IRC
./setup-app-layer.sh Irc
will create function names starting with IRC or Irc. But we do
not want function names to start with "irc".
Add SCFlowHasAlerts() to check if a flow has alerts. Returns true
on alerts, false otherwise.
Example:
has_alerts = SCFlowHasAlerts()
if has_alerts then
-- do something
end
This patch introduces the FileDataSize and FileTrackedSize functions.
The first one is just a renaming of the initial FileSize function,
whereas the other one uses the newly introduced size field as its
value.
The file size returned by FileSize is invalid if file store is not
used, so we introduce a new size field in the File structure that is
used to store the size.
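A hedged sketch of the split; the File struct shown here is a
simplified assumption about the internals, not the real definition:

    #include <stddef.h>
    #include <stdint.h>

    /* Simplified File record: 'size' is the newly introduced field
     * tracking the full file size even when the data is not stored. */
    typedef struct File_ {
        uint64_t content_len;   /* bytes of file data actually stored */
        uint64_t size;          /* total tracked size of the file */
    } File;

    /* Former FileSize(): amount of file data we have. */
    static uint64_t FileDataSize(const File *file)
    {
        return (file != NULL) ? file->content_len : 0;
    }

    /* New accessor: size tracked independently of file storing. */
    static uint64_t FileTrackedSize(const File *file)
    {
        return (file != NULL) ? file->size : 0;
    }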
The code was failing on a NULL return value, which can be returned
when there was nothing to do rather than being an error. Instead,
check the errbuf for a non-NULL value to determine whether an error
occurred.
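The pattern amounts to something like this; some_call() is a
placeholder for the actual call, and the point is that NULL together
with an untouched errbuf means 'nothing to do', not an error:

    #include <stddef.h>
    #include <stdio.h>

    /* Illustrative only: some_call() stands in for a call that returns
     * NULL both when there is nothing to do and when it fails; it sets
     * *errbuf to an error message only in the failure case. */
    extern void *some_call(const char **errbuf);

    static void *CallAndCheck(void)
    {
        const char *errbuf = NULL;
        void *result = some_call(&errbuf);
        if (result == NULL && errbuf != NULL) {
            fprintf(stderr, "call failed: %s\n", errbuf);
            return NULL;
        }
        /* NULL with a NULL errbuf just means there was nothing to do. */
        return result;
    }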
Prevents the case where the logged id is incremented if a newer
transaction is complete and an older one is still outstanding.
For example, dns request0, unsolicited dns response, dns response0
would result in the valid response0 never being logged.
Similarly, this could happen for:
request0, request1, response1, response0
which would end up having request0, request1 and response1 logged,
but response0 would not be logged.
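A simplified sketch of the intended behaviour, with placeholder
transaction and state types: the logged id only advances while the
oldest transactions have been logged, so a newer completed
transaction cannot skip an older outstanding one:

    #include <stdbool.h>
    #include <stdint.h>

    /* Placeholder transaction record; txs are assumed ordered by id. */
    typedef struct Tx_ {
        uint64_t id;
        bool complete;  /* has a response (or is otherwise done) */
        bool logged;
    } Tx;

    /* Advance the per-flow logged id only past transactions that have
     * actually been logged; stop at the first outstanding one so it
     * can still be logged when its response arrives. */
    static uint64_t UpdateLoggedId(const Tx *txs, int ntx, uint64_t logged_id)
    {
        for (int i = 0; i < ntx; i++) {
            if (txs[i].id != logged_id)
                continue;
            if (txs[i].complete && txs[i].logged)
                logged_id++;
            else
                break;
        }
        return logged_id;
    }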
This patch fixes an issue with hash computation resulting in at
least one hash being invalid when at least two different hash
functions were used.
The impact was that a setting such as `force-hash: [md5, sha256]`
did not produce valid hashes. It could also lead to false negatives
if two different hash functions had to be used on a single file due
to signatures.