If an IPv6 address is in non-recommended form and is followed by a 5-digit
port number, it is not anonymized.
A reproducer for this is: 1a00:c820:1180:c84c::ad3f:d991:ec2e:49255
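The rule at the heart of the fix can be illustrated like this (hypothetical
code, not mmanon's actual parser): an IPv6 hextet may hold at most 4 hex
digits, so a trailing 5-digit group such as ":49255" must terminate the
address match and be left alone as a port number:
```c
#include <ctype.h>
#include <stddef.h>

/* returns the length of the leading IPv6-address portion of buf;
 * a group of more than 4 hex digits cannot belong to the address */
static size_t
ipv6AddrLen(const char *buf)
{
    size_t i = 0, groupDigits = 0, lastGroupStart = 0;

    while (buf[i] != '\0') {
        if (isxdigit((unsigned char)buf[i])) {
            if (++groupDigits > 4) {
                /* 5th digit: drop the whole current group, it is a
                 * port (or other number), not an address hextet */
                return lastGroupStart > 0 ? lastGroupStart - 1 : 0;
            }
        } else if (buf[i] == ':') {
            groupDigits = 0;
            lastGroupStart = i + 1;
        } else {
            break; /* end of address-like characters */
        }
        ++i;
    }
    return i;
}
```
Applied to the reproducer above, the match would end after `ec2e`, leaving
`:49255` untouched as the port.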
closes https://github.com/rsyslog/rsyslog/issues/4856
The `failedmsg_entry` expects a null-terminated string in `key`, but
here we allocate with malloc and copy a string of length n into only
the first n bytes, without a terminating NUL. If the final byte happens
to be NUL, it is by coincidence only.
We observed this as random binary data appended to keys submitted to
Kafka, apparently at random, and this looks like a smoking gun.
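A minimal sketch of the fix as described (names approximate): allocate one
extra byte and NUL-terminate, so the key stored in `failedmsg_entry` is a
valid C string regardless of what follows it in memory:
```c
#include <stdlib.h>
#include <string.h>

/* copy a length-counted string into a freshly allocated,
 * NUL-terminated buffer */
static char *
copyKey(const char *key, size_t n)
{
    char *k = malloc(n + 1);  /* was: malloc(n) -- no room for the NUL */
    if (k == NULL)
        return NULL;
    memcpy(k, key, n);
    k[n] = '\0';              /* previously terminated by luck only */
    return k;
}
```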
There was a rare possibility that the E_AGAIN/E_INTERRUPT handling
could cause an infinite loop (100% CPU usage), for example when a TLS
handshake is interrupted at a certain stage.
- After gnutls_record_recv is called and an E_AGAIN/E_INTERRUPT error
  occurs, we need to do additional read/write direction handling
  with gnutls_record_get_direction (see the sketch below).
- After the second call of gnutls_record_recv (buffer expansion),
  we also need to check the error codes for E_AGAIN/E_INTERRUPT
  to do proper error handling.
- Add extra debug output based on the ossl driver.
- Potential fix for the 100% CPU loop in the receive loop after
  gtlsRecordRecv in the doRetry call.
see also: https://github.com/rsyslog/rsyslog/issues/4818
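A minimal sketch of the retry pattern described above (not rsyslog's actual
gtls driver code; names are illustrative). The key point is that on E_AGAIN
the caller must ask GnuTLS which I/O direction to wait for instead of
blindly re-reading, which is what could spin at 100% CPU:
```c
#include <gnutls/gnutls.h>
#include <sys/types.h>

/* returns bytes read, 0 on EOF, or a negative GnuTLS error code;
 * on GNUTLS_E_AGAIN, *pWantWrite tells the caller which direction
 * to wait for before retrying */
static ssize_t
recvRecord(gnutls_session_t session, void *buf, size_t len, int *pWantWrite)
{
    ssize_t r;

    do {    /* a plain retry is fine for "interrupted by signal" */
        r = gnutls_record_recv(session, buf, len);
    } while (r == GNUTLS_E_INTERRUPTED);

    if (r == GNUTLS_E_AGAIN) {
        /* 0 = wait until readable, 1 = wait until writable (e.g. a
         * re-handshake may need to send data before it can proceed) */
        *pWantWrite = gnutls_record_get_direction(session);
    }
    return r;
}
```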
do_inotify: block only once for events, with timeouts. There is a possible
scenario where no data is available and read() blocks even though poll()
has already returned.
The change introduced here makes the racy reading the same for the
triggering file as for the rest of the files:
current:
- Inotify is triggered
- The triggering file is read for new data
- The rest of the files are read on timer expiration
- More inotify events are processed; these may already be at the end of the file as well
new:
- Inotify is triggered
- All files are read on timer expiration, which includes the triggering file
- The triggering file is read for data; it may already be at the end of the file
- More inotify events are processed; these may already be at the end of the file as well
Therefore the change harmonises how the triggering file and future triggers
are handled, by not treating the triggering one as exceptional, which also
makes the changes easier to test.
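The new ordering can be sketched as follows (hypothetical helper names, not
imfile's real functions); the point is simply that the triggering file goes
through the same timer-expiration pass as every other file before its
event-driven read:
```c
#include <stdbool.h>
#include <stddef.h>

struct fileEntry {
    struct fileEntry *next;
    /* ... per-file state elided ... */
};

struct fileTable {
    struct fileEntry *head;
};

/* hypothetical helpers standing in for imfile internals */
bool timerExpired(struct fileEntry *f);
void readNewData(struct fileEntry *f); /* no-op at EOF */

static void
handleInotifyEvent(struct fileTable *tab, struct fileEntry *triggered)
{
    /* step 1: read ALL files on timer expiration, including 'triggered' */
    for (struct fileEntry *f = tab->head; f != NULL; f = f->next) {
        if (timerExpired(f))
            readNewData(f);
    }
    /* step 2: read the triggering file for data; it may already be at
     * EOF because step 1 consumed it, which is fine */
    readNewData(triggered);
}
```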
There was a more or less cosmetic data race which could happen when child
processes died in quick succession. Even then, no real harm was done, as
all children were reaped eventually.
A similar data race exists for HUP processing.
However, these races polluted TSAN test runs, so we fixed them.
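One common way to make such signal-related races TSAN-clean is to reduce
the signal handlers to setting a flag and do all real work in the main
loop; this is only an illustration of the general pattern, not the actual
rsyslog fix:
```c
#include <signal.h>
#include <sys/wait.h>

static volatile sig_atomic_t bChildDied = 0;
static volatile sig_atomic_t bGotHUP = 0;

static void sigchldHdlr(int sig) { (void)sig; bChildDied = 1; }
static void sighupHdlr(int sig)  { (void)sig; bGotHUP = 1; }

static void
mainLoopIteration(void)
{
    if (bChildDied) {
        bChildDied = 0;
        /* reap everything that is ready; a child dying while we reap
         * simply re-sets the flag via the handler */
        while (waitpid(-1, NULL, WNOHANG) > 0)
            ;
    }
    if (bGotHUP) {
        bGotHUP = 0;
        /* ... HUP processing ... */
    }
}
```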
When the action.errorfile.maxsize configuration option is enabled and the
error file already has a size smaller than the configured maximum, the file
can grow beyond the configured maximum size, because its size is assumed to
be zero in the code.
This fix reads the current error file size and limits the file to the
configured maximum size.
fixes #4821
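The idea of the fix can be sketched like this (hypothetical names, not the
actual action/error-file code): seed the size counter from the existing
file instead of assuming zero, and enforce the limit on every write:
```c
#include <sys/stat.h>
#include <unistd.h>

/* at open time: start the counter at the file's current size,
 * instead of the previous implicit assumption of 0 */
static off_t
errFileOpenSize(int fd)
{
    struct stat sb;
    return (fstat(fd, &sb) == 0) ? sb.st_size : 0;
}

/* before each write: enforce action.errorfile.maxsize */
static int
errFileWrite(int fd, const char *rec, size_t len,
             off_t *pCurrSize, off_t maxSize)
{
    if (maxSize > 0 && *pCurrSize + (off_t)len > maxSize)
        return -1; /* would exceed the configured maximum: skip record */
    if (write(fd, rec, len) != (ssize_t)len)
        return -1;
    *pCurrSize += (off_t)len;
    return 0;
}
```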
Signed-off-by: Sergio Arroutbi <sarroutb@redhat.com>
After doHUP() is called, there is probably an internal log message in the
list. However, it will not be written out immediately, because the main
loop will block in pselect() inside wait_timeout() until a long timeout
expires or the next message arrives.
Worse, the message may be lost if the daemon exits unexpectedly.
We might as well call processImInternal() after doHUP(), so that the
message is flushed out immediately.
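A simplified sketch of the reordering (the declarations stand in for the
real rsyslogd internals):
```c
/* stand-in declarations for rsyslogd internals */
extern volatile int bHadHUP;
extern void doHUP(void);
extern void processImInternal(void);
extern void wait_timeout(void);

static void
mainloopIteration(void)
{
    if (bHadHUP) {
        doHUP();
        bHadHUP = 0;
        /* doHUP() may have queued internal log messages; flush them now
         * instead of leaving them stuck behind the pselect() timeout */
        processImInternal();
    }
    wait_timeout(); /* blocks in pselect() until timeout or next message */
}
```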
Fixes: 723f6fdfa6 (rsyslogd: Fix race between signals and main loop timeout)
Signed-off-by: Yun Zhou <yun.zhou@windriver.com>