When Suricata handles files over SMB, it does not wait for the
NBSS record to be complete before streaming the payload to the
file. However, it did not check the consistency of the SMB record
length being read or written against the NBSS record length.
This could lead to an evasion where an attacker crafts an SMB
write with an overly large length field and then sends the
malicious payload, even if the server returned an error for the
write request.
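A minimal sketch of the implied check, with made-up names rather than
Suricata's actual internals: the bytes streamed for an SMB read or
write should be capped by what the enclosing NBSS record can still
carry.
```
// Hypothetical sketch: clamp the SMB transfer length to the bytes the
// enclosing NBSS record can actually carry, so an oversized length
// field cannot smuggle extra payload past the record boundary.
fn effective_transfer_len(nbss_remaining: u32, smb_len: u32) -> u32 {
    std::cmp::min(smb_len, nbss_remaining)
}
```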
Ticket: #5770
Fuzzing highlighted a case where a command sequence on the same file
id led to broken logging:
- file data for id N
- close id N
- file data for id N
If this happened in a single blob of data passed to the parser, the
existing file tx would be reused and the file "reopened", confusing the
file logging logic and triggering a debug assert.
This patch makes sure a new file tx is created for the file data
coming in after the first file tx is closed.
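A rough sketch of the fix idea, with illustrative types rather than the
actual Suricata structures: look up an open file tx for the id, and
create a fresh one if the previous tx was already closed.
```
struct FileTx { id: u32, closed: bool }

// Reuse an open file tx for this id; once it has been closed, start a
// new tx instead of "reopening" the old one.
fn file_tx_for_id(txs: &mut Vec<FileTx>, id: u32) -> &mut FileTx {
    if let Some(pos) = txs.iter().position(|t| t.id == id && !t.closed) {
        return &mut txs[pos];
    }
    txs.push(FileTx { id, closed: false });
    txs.last_mut().unwrap()
}
```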
Bug: #5567.
Use .is_some() and .is_none() instead of comparing against None.
Comparing against None requires the value to implement PartialEq;
is_none() and is_some() do not, and are more idiomatic.
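For illustration, a type without a PartialEq impl shows the difference:
```
struct Opaque; // deliberately no PartialEq impl

fn has_value(v: &Option<Opaque>) -> bool {
    // `*v != None` would not compile here, as Opaque lacks PartialEq
    v.is_some()
}
```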
This patch updates the NT status code definitions to use the status
definitions from the Microsoft documentation website. A first Python
script builds a JSON object with the code definitions:
```
import json

import requests
from bs4 import BeautifulSoup

# fetch the NTSTATUS table from the MS-ERREF documentation page
ntstatus = requests.get('https://docs.microsoft.com/en-us/openspecs/windows_protocols/ms-erref/596a1078-e883-4972-9bbc-49e60bebca55')
ntstatus_parsed = BeautifulSoup(ntstatus.text, 'html.parser')
ntstatus_parsed = ntstatus_parsed.find('tbody')

ntstatus_dict = {}
for item in ntstatus_parsed.find_all('tr'):
    cell = item.find_all('td')
    if len(cell) == 0:
        continue
    # first cell holds the numeric code and its name, second the description
    code = cell[0].find_all('p')
    description_ps = cell[1].find_all('p')
    description_list = []
    if len(description_ps):
        for desc in description_ps:
            if desc.string is not None:
                description_list.append(desc.string.replace('\n ', ''))
    else:
        description_list = ['Description not available']
    if code[0].string.lower() not in ntstatus_dict:
        ntstatus_dict[code[0].string.lower()] = {"text": code[1].string, "desc": ' '.join(description_list)}

print(json.dumps(ntstatus_dict))
```
The second script generates code that is ready to be inserted into the
source file:
```
import json

with open('ntstatus.json', 'r') as ntstatus_file:
    ntstatus = json.loads(ntstatus_file.read())

# templates for the Rust constant declarations and the match arms
declaration_format = 'pub const SMB_NT%s:%su32 = %s;\n'
resolution_format = '        SMB_NT%s%s=> "%s",\n'
declaration = ""
resolution = ""
# longest status name, used to align the generated columns
text_max = len(max([ntstatus[x]['text'] for x in ntstatus.keys()], key=len))
for code in ntstatus.keys():
    text = ntstatus[code]['text']
    text_spaces = ' ' * (4 + text_max - len(text))
    declaration += declaration_format % (text, text_spaces, code)
    resolution += resolution_format % (text, text_spaces, text)

print(declaration)
print('\n')
print('''
pub fn smb_ntstatus_string(c: u32) -> String {
    match c {
''')
print(resolution)
print('''
        _ => { return (c).to_string(); },
    }.to_string()
}
''')
```
Bug #5412.
Update APIs to store files in transactions instead of the per-flow state.
The goal is to avoid the overhead of matching up files and transactions
in cases where there are many of both.
Update all protocol implementations to support this.
Update the file logging logic to account for files being stored in
transactions. Instead of acting separately on file containers, logging
is now tied into the transaction logging.
Update the filestore keyword to consider a match even if the filestore
output is not enabled.
Instead of closing files in both directions when receiving a close
request, close only toserver files on the request and toclient files on
receiving the response.
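A rough sketch of the ownership change, with illustrative types: each
transaction carries its own per-direction file, so logging no longer
has to pair flow-level file containers with transactions.
```
struct FileInfo { name: Vec<u8>, closed: bool }

struct Tx {
    id: u64,
    file_ts: Option<FileInfo>, // toserver file: closed on the request
    file_tc: Option<FileInfo>, // toclient file: closed on the response
}
```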
SMB1 record parsing code simplification.
Frames:
- nbss.pdu
- nbss.hdr
- nbss.data
- smb1.pdu
- smb1.hdr
- smb1.data
- smb2.pdu
- smb2.hdr
- smb2.data
- smb3.pdu
- smb3.hdr
- smb3.data
The smb* frames are created for valid SMB records.
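As a rough sketch (illustrative names, not the real frame registration
API), each valid record yields a pdu frame spanning the whole record,
split into hdr and data sub-frames at the end of the fixed header:
```
struct FrameSpan { name: &'static str, start: usize, end: usize }

fn smb2_frames(rec_len: usize, hdr_len: usize) -> Vec<FrameSpan> {
    vec![
        FrameSpan { name: "smb2.pdu", start: 0, end: rec_len },
        FrameSpan { name: "smb2.hdr", start: 0, end: hdr_len },
        FrameSpan { name: "smb2.data", start: hdr_len, end: rec_len },
    ]
}
```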
Before, even if there were no outputs, all the arguments were
evaluated, which could become expensive.
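A sketch of the pattern with stand-in names: guard argument evaluation
behind the output check, so expensive formatting only happens when
something will consume it.
```
fn output_enabled() -> bool { false } // stand-in for the real check

macro_rules! log_if_enabled {
    ($($arg:tt)*) => {
        if output_enabled() {
            eprintln!($($arg)*); // arguments only evaluated here
        }
    };
}
```
With this shape, a hypothetical call like
`log_if_enabled!("{:?}", expensive_summary())` no longer runs
`expensive_summary()` when no output is enabled.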
All variables that are used only in certain build configurations are
now prefixed with an underscore to avoid compiler warnings.
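For example (feature name is illustrative):
```
fn handle(input: &[u8]) {
    // read only under the debug-validate feature; the underscore keeps
    // other build configurations free of unused-variable warnings
    let _len = input.len();
    #[cfg(feature = "debug-validate")]
    assert!(_len < 1_000_000);
}
```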
The evasion scenario is:
- a first dummy write of one byte at offset 0 is done
- a second full write of EICAR at offset 0 is then done and does not
  trigger detection
The last write has the final content, and as we cannot "cancel" the
previous write, we set an event which is then transformed into an
app-layer decoder alert.
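A minimal sketch of that approach, with made-up names: a write starting
below the already-tracked size rewrites earlier data, which may already
have been streamed, so an event is raised instead.
```
struct FileState { tracked: u64, events: Vec<&'static str> }

fn on_write(f: &mut FileState, offset: u64, len: u64) {
    if offset < f.tracked {
        // cannot undo already-streamed bytes: record an event that the
        // engine can surface as an app-layer decoder alert
        f.events.push("smb.file_overlap");
    }
    f.tracked = f.tracked.max(offset + len);
}
```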
If we missed the tree connect, we can't know for sure whether we're
reading from a (DCERPC) pipe or not. In this case, probe the data to
see if it looks like DCERPC.
If the detection succeeds, use a special 'suricata::dcerpc' service
in the TX.
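A minimal sketch of such a probe, based on the public DCE/RPC
connection-oriented header layout (major version 5, minor version 0 or
1 in the first two bytes); the real heuristic may check more fields.
```
fn probe_dcerpc(data: &[u8]) -> bool {
    data.len() >= 2 && data[0] == 5 && data[1] <= 1
}
```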
Simplify handling of DCERPC records that cross record boundaries.
Update logging for response-only TXs.
When skipping records, the skip tracker could underflow if the record
parsing had more data than expected.
Enforce the calculation by moving it into a method and making the
actual fields private.
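A sketch of that encapsulation: the counter stays private and every
update goes through one method that saturates rather than underflowing.
```
pub struct SkipTracker {
    left: u32, // private: callers cannot corrupt the count directly
}

impl SkipTracker {
    pub fn consume(&mut self, n: u32) {
        // saturate instead of wrapping below zero
        self.left = self.left.saturating_sub(n);
    }
    pub fn done(&self) -> bool {
        self.left == 0
    }
}
```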