TmThreadCreate now copies the string provided as the thread name, to
avoid any issue if a non-allocated string is used.
This patch also introduces the TmThreadSetGroupName function. It
ensures the thread group name is allocated when it is assigned, so the
allocated memory can be freed at exit.
Both changes required fixes in different parts of the code to stay in
sync with the new API.
A nice side effect of these changes is that they fix an inconsistency
where some names were allocated and some were not.
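A sketch of the caller side under the new API (the management function name and the "MGMT" group are illustrative, and the TmThreadCreateMgmtThread signature is assumed from the backtrace below; only the copy behaviour and TmThreadSetGroupName come from this patch):

    /* 'tname' can live on the stack now, because TmThreadCreate duplicates it */
    char tname[16];
    snprintf(tname, sizeof(tname), "StatsWakeup");
    ThreadVars *tv = TmThreadCreateMgmtThread(tname, StatsWakeupThread, 0);
    if (tv == NULL)
        exit(EXIT_FAILURE);
    /* the group name is duplicated as well, so it can be freed at exit */
    TmThreadSetGroupName(tv, "MGMT");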
Fix thread cleanup: the mutex and condition variable were not freed.
This fixes:
352 (192 direct, 160 indirect) bytes in 4 blocks are definitely lost in loss record 301 of 327
at 0x4C29C0F: malloc (in /usr/lib/valgrind/vgpreload_memcheck-amd64-linux.so)
by 0x909404: TmThreadInitMC (tm-threads.c:1764)
by 0x908DE7: TmThreadCreate (tm-threads.c:1120)
by 0x90A326: TmThreadCreateMgmtThread (tm-threads.c:1183)
by 0x4CA0AD: StatsSpawnThreads (counters.c:856)
by 0x87F254: UnixSocketPcapFilesCheck (runmode-unix-socket.c:396)
by 0x910330: UnixCommandBackgroundTasks (unix-manager.c:430)
by 0x9140DD: UnixManager (unix-manager.c:980)
by 0x9077F3: TmThreadsManagement (tm-threads.c:600)
by 0x68DE283: start_thread (pthread_create.c:333)
by 0x80A6A4C: clone (in /lib/x86_64-linux-gnu/libc-2.21.so)
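The shape of the fix, as a standalone sketch using plain pthread types (Suricata wraps these in its own SC* macros and the field names here are assumptions):

    #include <pthread.h>
    #include <stdlib.h>

    /* illustrative stand-in for the control mutex/condition that
     * TmThreadInitMC allocates per thread */
    typedef struct MgmtCtrl_ {
        pthread_mutex_t *ctrl_mutex;
        pthread_cond_t *ctrl_cond;
    } MgmtCtrl;

    static void MgmtCtrlFree(MgmtCtrl *mc)
    {
        if (mc == NULL)
            return;
        if (mc->ctrl_mutex != NULL) {
            pthread_mutex_destroy(mc->ctrl_mutex); /* was missing: destroy... */
            free(mc->ctrl_mutex);                  /* ...and free the allocation */
        }
        if (mc->ctrl_cond != NULL) {
            pthread_cond_destroy(mc->ctrl_cond);
            free(mc->ctrl_cond);
        }
        free(mc);
    }

    int main(void)
    {
        MgmtCtrl *mc = calloc(1, sizeof(*mc));
        mc->ctrl_mutex = malloc(sizeof(*mc->ctrl_mutex));
        mc->ctrl_cond = malloc(sizeof(*mc->ctrl_cond));
        pthread_mutex_init(mc->ctrl_mutex, NULL);
        pthread_cond_init(mc->ctrl_cond, NULL);
        MgmtCtrlFree(mc);  /* with the destroy+free calls valgrind reports no leak */
        return 0;
    }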
This patch adds a new callback, PktAcqBreakLoop(), in TmModule to let
packet acquisition modules define "break-loop" functions that terminate
the capture loop. This is useful for blocking capture functions that
need special actions in order to stop execution.
Implement this for PF_RING.
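A sketch of how a capture module can wire this up; the PfringThreadVars struct, its pd ring handle and the registration body are assumptions, only the new PktAcqBreakLoop callback comes from this patch (pfring_breakloop() is PF_RING's own call for interrupting a blocking receive):

    /* hypothetical break-loop callback for a PF_RING style receive module */
    static TmEcode ReceivePfringBreakLoop(ThreadVars *tv, void *data)
    {
        PfringThreadVars *ptv = (PfringThreadVars *)data;
        if (ptv == NULL || ptv->pd == NULL)
            return TM_ECODE_FAILED;
        pfring_breakloop(ptv->pd);  /* unblocks the blocking receive call */
        return TM_ECODE_OK;
    }

    void TmModuleReceivePfringRegister(void)
    {
        /* other module fields omitted from this sketch */
        tmm_modules[TMM_RECEIVEPFRING].name = "ReceivePfring";
        tmm_modules[TMM_RECEIVEPFRING].PktAcqLoop = ReceivePfringLoop;
        tmm_modules[TMM_RECEIVEPFRING].PktAcqBreakLoop = ReceivePfringBreakLoop;
    }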
If TmThreadsUnregisterThread was called with an out-of-range 'id', a lock
would not be released when returning from the function.
** CID 1264421: Missing unlock (LOCK)
/src/tm-threads.c: 2186 in TmThreadsUnregisterThread()
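The fix follows the usual unlock-before-early-return pattern; here is a standalone sketch with a generic pthread lock (the real code uses Suricata's thread store and its SCMutex wrappers):

    #include <pthread.h>

    #define MAX_THREADS 256                 /* illustrative bound */
    static pthread_mutex_t thread_store_lock = PTHREAD_MUTEX_INITIALIZER;
    static void *thread_store[MAX_THREADS];

    void ThreadsUnregisterThread(const int id)
    {
        pthread_mutex_lock(&thread_store_lock);
        if (id <= 0 || id > MAX_THREADS) {
            /* before the fix, this early return skipped the unlock */
            pthread_mutex_unlock(&thread_store_lock);
            return;
        }
        thread_store[id - 1] = NULL;        /* ids are 1-based in this sketch */
        pthread_mutex_unlock(&thread_store_lock);
    }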
Don't kill the flow manager and recyclers before the rest of the threads. The
packet threads may still have packets from their pools. As the flow threads
would destroy their pools, those packets would be lost.
This patch doesn't kill the threads; it just pulls them out of their run
loop and into a wait loop. The packet pools won't be cleared until all
threads are killed.
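Conceptually the management threads now behave like this standalone sketch (generic flag names, not the actual flow manager code):

    #include <stdatomic.h>
    #include <unistd.h>

    static atomic_int thread_done = 0;  /* set to pull the thread out of its run loop */
    static atomic_int thread_kill = 0;  /* set only once all packet threads are stopped */

    void *FlowManagerLike(void *arg)
    {
        (void)arg;
        while (!atomic_load(&thread_done)) {
            /* normal run loop: time out flows, recycle, ... */
            usleep(1000);
        }
        /* wait loop: don't tear down packet pools yet, packet threads may
         * still need to return packets to them */
        while (!atomic_load(&thread_kill)) {
            usleep(1000);
        }
        /* only now is it safe to clean up the pools */
        return NULL;
    }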
Wait for flow management threads to close before moving on to the
next steps in the shutdown process.
Don't destroy flow force reassembly packet pool too early. Worker
threads may still want to return packets to it.
Update pktacq loop to process flow timeouts in a running engine.
Add a new step to the shutdown phase of packet acquisition loop
threads (pktacqloop).
The shutdown code lets the pktacqloop break out of its packet
acquisition loop. The thread then enters a flow timeout loop, where
it processes packets from its tv->stream_pq queue until the queue is
empty _and_ the KILL flag is set.
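In rough pseudo-C the timeout phase looks like this (a sketch: the helper names are recalled from the Suricata tree and should be treated as assumptions, and the locking/wait details are simplified):

    /* drain flow timeout packets injected into this thread's queue;
     * 'slot' is the thread's slot list */
    while (1) {
        SCMutexLock(&tv->stream_pq->mutex_q);
        Packet *p = PacketDequeue(tv->stream_pq);
        SCMutexUnlock(&tv->stream_pq->mutex_q);

        if (p != NULL) {
            /* run the timeout packet through the remaining slots (detect, outputs) */
            TmThreadsSlotProcessPkt(tv, slot, p);
        } else if (TmThreadsCheckFlag(tv, THV_KILL)) {
            break;      /* queue empty AND kill flag set: done */
        } else {
            usleep(1);  /* queue empty but not killed yet: wait for more */
        }
    }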
Make sure receive threads are done before moving on to flow hash
cleanup (recycle all). Without this the flow recycler could start
its unconditional hash cleanup while detect threads are still
running on the flows.
Update unix socket to match live modes.
Add the function TmThreadsInjectPacketById, which is used to inject flow
timeout packets into a thread's stream_pq queue.
TmThreadsInjectPacketById will also wake up listening threads if
applicable.
All packets are passed together in a NULL-terminated array
to reduce locking overhead.
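Caller side, as a sketch (p1, p2 and thread_id are illustrative; only TmThreadsInjectPacketById and the NULL-terminated array convention come from this change):

    /* batch of flow timeout packets for the thread that owns the flows */
    Packet *pkts[3] = { p1, p2, NULL };   /* NULL terminates the array */
    /* enqueues all packets on that thread's stream_pq under one lock and
     * wakes the thread up if it is waiting */
    TmThreadsInjectPacketById(pkts, thread_id);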
Create a thread registration and unregistration API for assigning unique
thread ids.
ThreadVars is static even if a thread restarts, so we can do the
registration before the threads start.
A thread is unregistered when the ThreadVars are freed.
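A sketch of the intended use; TmThreadsRegisterThread, its signature and the tv->id field are assumptions, only TmThreadsUnregisterThread appears elsewhere in these notes:

    static void ThreadVarsSetup(ThreadVars *tv)
    {
        /* ThreadVars survives a thread restart, so the id assigned here
         * stays valid even if the underlying pthread is restarted */
        tv->id = TmThreadsRegisterThread(tv, tv->type);
    }

    static void ThreadVarsFree(ThreadVars *tv)
    {
        /* unregister when the ThreadVars is freed */
        TmThreadsUnregisterThread(tv->id);
        SCFree(tv);
    }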
Currently management threads do their own thread setup and handling. This
patch introduces a new way of handling management threads.
Functionality that needs to run as a management thread can now register
itself as a regular 'thread module' (TmModule), where the 'Management'
callback is registered.
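Registration then looks like any other thread module; in this sketch the flow manager is used as an example and the function names are assumptions, only the 'Management' callback itself comes from this change:

    void TmModuleFlowManagerRegister(void)
    {
        /* other module fields omitted from this sketch */
        tmm_modules[TMM_FLOWMANAGER].name = "FlowManager";
        tmm_modules[TMM_FLOWMANAGER].ThreadInit = FlowManagerThreadInit;
        tmm_modules[TMM_FLOWMANAGER].ThreadDeinit = FlowManagerThreadDeinit;
        /* the function that runs as the management thread */
        tmm_modules[TMM_FLOWMANAGER].Management = FlowManager;
    }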
Make sure we register the detect.alerts counter before the packet runtime
starts, even in delayed detect mode. Registering new counters at packet
runtime is not supported by the counters API and might lead to crashes, as
there is no proper locking to allow for this operation.
This changes how delayed detect works a bit. Now we call the ThreadInit
callback twice. The first call will only register the counter. The second
call will do all the other setup. This way the counter is registered before
the counters API starts operating in the packet runtime.
Fixes the segv reported in ticket #1018.
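Schematically, the two-pass init works like this sketch (variable names are illustrative; only the "call ThreadInit twice" behaviour is from this change):

    /* at startup, before the packet runtime: first call only registers
     * the detect.alerts counter */
    tm_module->ThreadInit(tv, initdata, &det_ctx);

    /* later, once the delayed detect engine is loaded: second call does
     * the remaining per-thread setup */
    tm_module->ThreadInit(tv, initdata, &det_ctx);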
On Tile, replace pthread_mutex_locks with queued spin locks (ticket
locks) for dataplane processing code. This is safe when running on
dataplane cores with one thread per core. The condition variables are
no-ops when the thread is spinning anyway.
For control plane threads (unix-manager, stats-logs, thread startup),
use pthread_mutex_locks. For these locks, SCMutex was replaced with
SCCtrlMutex and SCCond with SCCtrlCond.
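The split can be pictured as two macro families; this is only an illustration with simplified definitions, and the tmc_spin_queued_mutex names are recalled from the Tilera TMC library rather than taken from this patch:

    #include <pthread.h>

    /* control plane: always real mutexes and condition variables */
    #define SCCtrlMutex           pthread_mutex_t
    #define SCCtrlMutexLock(m)    pthread_mutex_lock(m)
    #define SCCtrlMutexUnlock(m)  pthread_mutex_unlock(m)
    #define SCCtrlCond            pthread_cond_t
    #define SCCtrlCondWait(c, m)  pthread_cond_wait((c), (m))

    /* dataplane: queued spin (ticket) locks on Tile, pthread mutexes elsewhere;
     * safe because dataplane cores run exactly one thread each */
    #ifdef __tilegx__
    #define SCMutex               tmc_spin_queued_mutex_t
    #define SCMutexLock(m)        tmc_spin_queued_mutex_lock(m)
    #define SCMutexUnlock(m)      tmc_spin_queued_mutex_unlock(m)
    #else
    #define SCMutex               pthread_mutex_t
    #define SCMutexLock(m)        pthread_mutex_lock(m)
    #define SCMutexUnlock(m)      pthread_mutex_unlock(m)
    #endif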
Bug #939: thread name buffers are sized inconsistently
These buffers are now all fixed at 16 bytes.
Bug #914: Having a high number of pickup queues (216+) makes Suricata crash
Fixed so that we can now have 256 pickup queues, which is the current built-in
maximum. Improved the error reporting.
Bug #928: Max number of threads
Error reporting improved. Issue was the same as #914.
This patch introduces a unix command socket. JSON-formatted messages
can be exchanged between Suricata and a program connecting to a
dedicated socket.
The protocol is the following:
* Client connects to the socket
* It sends a version message: { "version": "$VERSION_ID" }
* Server answers with { "return": "OK|NOK" }
If the server returns OK, the client is allowed to send commands.
The format of a command is the following:
{
"command": "pcap-file",
"arguments": { "filename": "smtp-clean.pcap", "output-dir": "/tmp/out" }
}
The server will try to execute the "command" specified with the
(optional) provided "arguments".
The answer from the server is the following:
{
"return": "OK|NOK",
"message": JSON_OBJECT or information string
}
A simple script is provided and is available under scripts/suricatasc. It
is not intended to be an enterprise-grade tool; it is more of a proof of
concept/example code. The first command line argument of suricatasc is
used to specify the socket to connect to.
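A minimal client in C that follows the handshake described above (a sketch: the socket path and the "0.1" version string are placeholders, there is no real JSON parsing, and error handling is kept to a minimum):

    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <sys/socket.h>
    #include <sys/un.h>

    int main(int argc, char **argv)
    {
        /* socket path is illustrative; it lives in the state directory */
        const char *path = argc > 1 ? argv[1] : "/var/run/suricata/custom.socket";
        int fd = socket(AF_UNIX, SOCK_STREAM, 0);
        if (fd < 0) { perror("socket"); return 1; }

        struct sockaddr_un addr;
        memset(&addr, 0, sizeof(addr));
        addr.sun_family = AF_UNIX;
        strncpy(addr.sun_path, path, sizeof(addr.sun_path) - 1);
        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
            perror("connect"); return 1;
        }

        char buf[4096];
        ssize_t n;

        /* 1. version handshake */
        const char *hello = "{\"version\": \"0.1\"}";
        if (write(fd, hello, strlen(hello)) < 0) { perror("write"); return 1; }
        n = read(fd, buf, sizeof(buf) - 1);
        if (n <= 0) return 1;
        buf[n] = '\0';
        printf("handshake answer: %s\n", buf);   /* expect {"return": "OK"} */

        /* 2. send a command, e.g. queue a pcap file */
        const char *cmd =
            "{\"command\": \"pcap-file\","
            " \"arguments\": {\"filename\": \"smtp-clean.pcap\","
            " \"output-dir\": \"/tmp/out\"}}";
        if (write(fd, cmd, strlen(cmd)) < 0) { perror("write"); return 1; }
        n = read(fd, buf, sizeof(buf) - 1);
        if (n <= 0) return 1;
        buf[n] = '\0';
        printf("command answer: %s\n", buf);

        close(fd);
        return 0;
    }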
Configuration of the feature is made in the YAML under the 'unix-command'
section:
unix-command:
enabled: yes
filename: custom.socket
The path specified in 'filename' is not absolute and is relative to the
state directory.
A new running mode called 'unix-socket' is also added.
When starting in this mode, only a unix socket manager
is started. When it receives a 'pcap-file' command, the manager
starts a 'pcap-file' running mode which does not shut down Suricata
at the end of the file but simply exits. The manager is then able to
start a new running mode with a new file.
To start this mode, Suricata must be started with the --unix-socket
option, which takes an optional argument that sets the file name of the
socket. The path is not absolute and is relative to the state directory.
The 'pcap-file' command adds a file to the list of files to process.
For each pcap file, a pcap file running mode is started and the output
directory is changed to the one specified in the command. The running
mode specified in the 'runmode' YAML setting is used to select which
running mode processes the pcap file.
This requires modifications in suricata.c, where initialisation code
is now conditional on 'unix-socket' mode not being used.
Two other commands exist to get info on the remaining tasks:
* pcap-file-number: return the number of files in the waiting queue
* pcap-file-list: return the list of waiting files
'pcap-file-list' returns a structured object as message. The
structure is the following:
{
'count': 2,
'files': ['file1.pcap', 'file2.pcap']
}