The current file storing approach uses an open file, write data,
close file logic. While this technique fixes the problem of having
too many open files in Suricata, it is not optimal.
Tests in a loop show that doing open, write, close for each write to
a single file is twice as slow as a single open, a loop of writes,
and a single close.
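
A minimal sketch of the two patterns being compared, assuming plain
POSIX I/O; the file name, buffer and iteration count are arbitrary:

#include <fcntl.h>
#include <stddef.h>
#include <stdint.h>
#include <unistd.h>

#define ROUNDS 100000

/* Pattern 1: open, write, close for every chunk. */
static void write_reopen(const uint8_t *buf, size_t len)
{
    for (int i = 0; i < ROUNDS; i++) {
        int fd = open("testfile", O_CREAT | O_WRONLY | O_APPEND, 0644);
        if (fd == -1)
            return;
        (void)write(fd, buf, len);
        close(fd);
    }
}

/* Pattern 2: open once, loop of writes, close once. */
static void write_cached(const uint8_t *buf, size_t len)
{
    int fd = open("testfile", O_CREAT | O_WRONLY | O_APPEND, 0644);
    if (fd == -1)
        return;
    for (int i = 0; i < ROUNDS; i++) {
        (void)write(fd, buf, len);
    }
    close(fd);
}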
This patch updates the logic by storing the fd in the File structure.
This is done for a limited number of files; if that limit is
exceeded, the previous logic is used.
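
A sketch of the resulting write path; the fd member, the global
counters and the limit value are assumptions for illustration:

#include <fcntl.h>
#include <stdint.h>
#include <unistd.h>

/* Stand-in for the relevant part of Suricata's File structure; the
 * real structure has more members and may name things differently. */
typedef struct File_ {
    int fd;   /* cached descriptor, -1 when the file is not open */
} File;

static uint32_t open_files = 0;          /* currently open files */
static uint32_t open_files_limit = 1000; /* assumed cap, illustrative */

static int FileStoreWriteChunk(File *ff, const char *path,
        const uint8_t *data, uint32_t len)
{
    if (ff->fd == -1) {
        if (open_files < open_files_limit) {
            /* Below the cap: open once and keep the fd in the File. */
            ff->fd = open(path, O_CREAT | O_WRONLY | O_APPEND, 0644);
            if (ff->fd == -1)
                return -1;
            open_files++;
        } else {
            /* Cap exceeded: fall back to open, write, close. */
            int fd = open(path, O_CREAT | O_WRONLY | O_APPEND, 0644);
            if (fd == -1)
                return -1;
            ssize_t r = write(fd, data, len);
            close(fd);
            return (r == (ssize_t)len) ? 0 : -1;
        }
    }
    ssize_t r = write(ff->fd, data, len);
    return (r == (ssize_t)len) ? 0 : -1;
}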
This patch also adds two counters. The first is the number of
currently open files. The second is the number of times the
open, write, close sequence has been used because too many files
were already open.
In EVE, the entries are:
stats {file_store: {"open_files_max_hit":0,"open_files":5}}
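
A sketch of how these counters can be kept up to date with the stats
API; StatsRegisterCounter, StatsIncr and StatsSetUI64 are existing
Suricata calls, but the function names, call sites and the use of
statics below are illustrative only (registration normally happens in
the logger's thread init, with ids kept in thread data):

#include <stdint.h>
#include "threadvars.h"
#include "counters.h"

static uint16_t counter_open_files = 0;
static uint16_t counter_open_files_max_hit = 0;

static void FilestoreRegisterStats(ThreadVars *tv)
{
    counter_open_files = StatsRegisterCounter("file_store.open_files", tv);
    counter_open_files_max_hit =
            StatsRegisterCounter("file_store.open_files_max_hit", tv);
}

/* Mirror the open-file count into the stats counter after each change. */
static void FilestoreSyncOpenFiles(ThreadVars *tv, uint64_t open_files)
{
    StatsSetUI64(tv, counter_open_files, open_files);
}

/* Bump the fallback counter when the open, write, close path is taken. */
static void FilestoreMaxHit(ThreadVars *tv)
{
    StatsIncr(tv, counter_open_files_max_hit);
}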
As the logging modules are no longer threading modules, rename
them so they don't look like they are being registered as
threading modules.
Also, move the registration to output.c, which will handle
registration of the loggers.
Make the file storage use the streaming buffer API.
As the individual file chunks were not needed by themselves, this
approach uses a chunkless implementation.
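
The chunkless idea can be sketched generically: the file keeps one
growing buffer plus a "logged up to" offset, and loggers are handed
whatever lies past that offset. This is an illustration of the
pattern, not Suricata's actual streaming buffer API:

#include <stdint.h>
#include <stdlib.h>
#include <string.h>

/* Illustrative chunkless buffer, not the real API. */
typedef struct FileBuffer {
    uint8_t *buf;
    uint64_t len;      /* bytes appended so far */
    uint64_t logged;   /* bytes already handed to loggers */
} FileBuffer;

static int FileBufferAppend(FileBuffer *fb, const uint8_t *data,
        uint32_t dlen)
{
    uint8_t *nbuf = realloc(fb->buf, fb->len + dlen);
    if (nbuf == NULL)
        return -1;
    memcpy(nbuf + fb->len, data, dlen);
    fb->buf = nbuf;
    fb->len += dlen;
    return 0;
}

/* Hand the not-yet-logged region to a logger, then advance the offset. */
static void FileBufferDrain(FileBuffer *fb,
        void (*log)(const uint8_t *, uint64_t))
{
    if (fb->len > fb->logged) {
        log(fb->buf + fb->logged, fb->len - fb->logged);
        fb->logged = fb->len;
    }
}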
This patch adds a new logger API for registering file storage
handlers. Whereas the FileLog handler is called once per file, this
handler is called for each data chunk so that storing the entire
file is possible.
The logger call in the API is as follows:
typedef int (*FiledataLogger)(ThreadVars *, void *thread_data,
const Packet *, const File *, const FileData *, uint8_t flags);
All data is const and thus should be treated as read only. The final
flags field is used to indicate to the handler that the file is new,
or that it is being closed.
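
A sketch of a handler matching this typedef; the
OUTPUT_FILEDATA_FLAG_OPEN/CLOSE flag names and the FileData data/len
members follow the description above but should be treated as
assumptions, and the Example* helpers are hypothetical placeholders:

/* Sketch only: flag names, FileData members and helpers are assumed. */
static int ExampleFiledataLogger(ThreadVars *tv, void *thread_data,
        const Packet *p, const File *ff, const FileData *ffd,
        uint8_t flags)
{
    ExampleLogThread *aft = thread_data;

    if (flags & OUTPUT_FILEDATA_FLAG_OPEN) {
        /* First chunk of a new file: create the on-disk file. */
        ExampleOpenStoredFile(aft, ff);
    }

    /* Called once per data chunk, so the whole file can be stored. */
    ExampleAppendChunk(aft, ffd->data, ffd->len);

    if (flags & OUTPUT_FILEDATA_FLAG_CLOSE) {
        /* File is being closed: finalize the stored copy. */
        ExampleCloseStoredFile(aft, ff);
    }
    return 0;
}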
Files use an internal unique id, 'file_id', which can be used by the
loggers to create unique file names. This id can be persisted across
restarts through the 'waldo' feature of the log-filestore module.
This patch moves the waldo loading and storing logic into this API's
implementation. A new configuration directive,
'file-store-waldo: <filename>', is added, but the existing waldo
settings will also continue to work.
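
The waldo mechanic itself is small: the current file_id is persisted
to a file and reloaded at startup so ids stay unique across restarts.
A minimal sketch, with illustrative function names and on-disk format:

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

/* Load the persisted file_id; start from 0 if no waldo file exists. */
static uint64_t WaldoLoad(const char *path)
{
    uint64_t id = 0;
    FILE *fp = fopen(path, "r");
    if (fp != NULL) {
        if (fscanf(fp, "%" SCNu64, &id) != 1)
            id = 0;
        fclose(fp);
    }
    return id;
}

/* Persist the current file_id so ids stay unique across restarts. */
static void WaldoStore(const char *path, uint64_t id)
{
    FILE *fp = fopen(path, "w");
    if (fp != NULL) {
        fprintf(fp, "%" PRIu64 "\n", id);
        fclose(fp);
    }
}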