fix spelling errors in testbench

Authored by Josh Soref on 2019-12-27 13:57:25 +01:00, committed by Rainer Gerhards
parent 2cdddf3cda
commit af19128573
64 changed files with 104 additions and 104 deletions

View File

@@ -18,12 +18,12 @@
 # wait. However, the invalid signaling did not take into account that it did not
 # signal the async writer to shut down. So the main thread went into a condition
 # wait - and thus we had a deadlock. That situation occured only under very specific
-# cirumstances. As far as the analysis goes, the following need to happen:
+# circumstances. As far as the analysis goes, the following need to happen:
 # 1. buffers on that file are being flushed
 # 2. no new data arrives
 # 3. the inactivity timeout has not yet expired
 # 4. *then* (and only then) the stream is closed or destructed
-# In that, 1 to 4 are prequisites for the deadlock which will happen in 4. However,
+# In that, 1 to 4 are prerequisites for the deadlock which will happen in 4. However,
 # for it to happen, we also need the right "timing". There is a race between the
 # main thread and the async writer thread. The deadlock will only happen under
 # the "right" circumstances, which basically means it will not happen always.
@@ -42,10 +42,10 @@
 # is still being enqueued, but at a slow rate. So if one is patient enough, the load
 # generator will be able to finish. However, rsyslogd will never process the data
 # it received because it is locked in the deadlock caused by #4 above.
-# Note that "$OMFileFlushOnTXEnd on" is not causing this behaviour. We just use it
+# Note that "$OMFileFlushOnTXEnd on" is not causing this behavior. We just use it
 # to (quite) reliably cause the failure condition. The failure described above
 # (in version 4.6.1) was also present when the setting was set to "off", but its
-# occurence was very much less probable - because the perquisites are then much
+# occurrence was very much less probable - because the perquisites are then much
 # harder to hit. without it, the test may need to run for several hours before
 # we hit all failure conditions.
 #

View File

@@ -7,7 +7,7 @@
 # it quickly. As such the hope is the test will be useful in future again.
 #
 # NOTE WELL: The rsyslog shutdown condition is hard to get 100% right
-# as due to not flushing at transaction end we cannot rely on the oputput
+# as due to not flushing at transaction end we cannot rely on the output
 # file count as we usually do. However, we cannot avoid this as otherwise
 # we loose an important trigger condition.
 # added 2019-10-23 by Rgerhards

View File

@@ -3,7 +3,7 @@
 # shall result in data staying in buffers until shutdown, what
 # then will trigger some somewhat complex logic in the stream
 # writer (open, write, close all during the stream close
-# opertion). It is vital that only few messages be sent.
+# operation). It is vital that only few messages be sent.
 #
 # The main effort of this test is not (only) to see if we
 # receive the data, but rather to see if we get into an abort

View File

@@ -28,7 +28,7 @@ tcpflood -m $NUMMESSAGES
 printf 'waiting for timeout to occur\n'
 sleep 6 # GOOD SLEEP - we wait for the timeout!
 printf 'timeout should now have occurred - check file state\n'
-seq_check # mow everthing MUST be persisted
+seq_check # mow everything MUST be persisted
 shutdown_when_empty
 wait_shutdown
 seq_check # just a double-check that nothing is added twice
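For reference, the script excerpted above follows the general shape that most testbench scripts touched by this commit share. A minimal sketch of that skeleton (the message count is a placeholder; the config snippet reuses lines visible elsewhere in this commit):

#!/bin/bash
. ${srcdir:=.}/diag.sh init
export NUMMESSAGES=1000        # placeholder message count
generate_conf
add_conf 'module(load="../plugins/imtcp/.libs/imtcp")
input(type="imtcp" port="0" listenPortFileName="'$RSYSLOG_DYNNAME'.tcpflood_port")
template(name="outfmt" type="string" string="%msg:F,58:2%\n")
action(type="omfile" file=`echo $RSYSLOG_OUT_LOG` template="outfmt")
'
startup
tcpflood -m $NUMMESSAGES       # inject the test messages
shutdown_when_empty            # ask rsyslogd to stop once its queues are drained
wait_shutdown                  # wait for the instance to terminate
seq_check                      # verify every message number arrived exactly once
exit_test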

View File

@@ -1,7 +1,7 @@
 #!/bin/bash
 # This is a simple shell script that carries out some checks against
 # configurations we expect from some provided config files. We use
-# rsyslogd's verifcation function. Note that modifications to the
+# rsyslogd's verification function. Note that modifications to the
 # config elements, or even simple text changes, cause these checks to
 # fail. However, it should be fairly easy to adapt them to the changed
 # environment. And while nothing changed, they permit is to make sure
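The "verification function" referred to here is rsyslogd's config-check mode (-N1), which also appears near the end of this commit in validation-run.sh. A rough sketch of such a check, with a hypothetical config file name:

# verify a config parses cleanly; a zero exit code means the check passed
../tools/rsyslogd -N1 -f$srcdir/testsuites/some-valid.conf -M../runtime/.libs:../.libs
if [ $? -ne 0 ]; then
	echo "unexpected config verification failure"
	exit 1
fi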

View File

@@ -14,7 +14,7 @@ template(name="outfmt" type="string" string="%msg:::compressSPACE%\n")
 startup
 # we need to generate a file, because otherwise our multiple spaces
-# do not survive the execution pathes through the shell
+# do not survive the execution paths through the shell
 echo "<165>1 2003-08-24T05:14:15.000003-07:00 192.0.2.1 tcpflood 8710 - - msgnum:0000000 test test test" >$RSYSLOG_DYNNAME.tmp
 tcpflood -I $RSYSLOG_DYNNAME.tmp
 rm $RSYSLOG_DYNNAME.tmp

View File

@@ -17,7 +17,7 @@ template(name="outfmt" type="list") {
 startup
 # we need to generate a file, because otherwise our multiple spaces
-# do not survive the execution pathes through the shell
+# do not survive the execution paths through the shell
 echo "<165>1 2003-08-24T05:14:15.000003-07:00 192.0.2.1 tcpflood 8710 - - msgnum:0000000 test test test" >$RSYSLOG_DYNNAME.tmp
 tcpflood -I $RSYSLOG_DYNNAME.tmp
 rm $RSYSLOG_DYNNAME.tmp

View File

@@ -6,7 +6,7 @@
 # at least the queue is kind of readable.
 # To simulate the error condition, we create a DA queue with a large memory
 # part and fill it via injectmsg (do NOT use tcpflood, as this would add
-# complexity of TCP window etc to the reception of messages - injecmsg is
+# complexity of TCP window etc to the reception of messages - injectmsg is
 # synchronous, so we do not have anything in flight after it terminates).
 # We have a blocking action which prevents actual processing of any of the
 # injected messages. We then inject a large number of messages, but only
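injectmsg, mentioned above, pushes messages into the running instance through imdiag instead of over the network, so the call returns only after all messages have been handed over. A sketch of its typical use (the start-number/count argument order is assumed here, mirroring the injectmsg2 call that appears later in this commit):

startup
injectmsg 0 $NUMMESSAGES    # assumed signature: <start number> <count>
shutdown_when_empty
wait_shutdown
seq_check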

View File

@@ -60,7 +60,7 @@ export TB_ERR_TIMEOUT=101
 # diag system internal environment variables
 # these variables are for use by test scripts - they CANNOT be
-# overriden by the user
+# overridden by the user
 # TCPFLOOD_EXTRA_OPTS enables to set extra options for tcpflood, usually
 # used in tests that have a common driver where it
 # is too hard to set these options otherwise
@@ -81,7 +81,7 @@ export ZOOPIDFILE="$(pwd)/zookeeper.pid"
 TB_TIMEOUT_STARTSTOP=400 # timeout for start/stop rsyslogd in tenths (!) of a second 400 => 40 sec
 # note that 40sec for the startup should be sufficient even on very slow machines. we changed this from 2min on 2017-12-12
 TB_TEST_TIMEOUT=90 # number of seconds after which test checks timeout (eg. waits)
-TB_TEST_MAX_RUNTIME=${TEST_MAX_RUNTIME:-580} # maximum runtuime in seconds for a test;
+TB_TEST_MAX_RUNTIME=${TEST_MAX_RUNTIME:-580} # maximum runtime in seconds for a test;
 # default TEST_MAX_RUNTIME e.g. for long-running tests or special
 # testbench use. Testbench will abort test
 # after that time (iff it has a chance to, not strictly enforced)
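Because the default comes from ${TEST_MAX_RUNTIME:-580}, a caller can raise the limit for a known slow test before starting it, for example (test name is illustrative):

export TEST_MAX_RUNTIME=900    # allow up to 15 minutes instead of the 580s default
./some-long-running-test.sh
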
@@ -94,7 +94,7 @@ if [ "$TESTTOOL_DIR" == "" ]; then
 export TESTTOOL_DIR="${srcdir:-.}"
 fi
-# newer functionality is preferrably introduced via bash functions
+# newer functionality is preferably introduced via bash functions
 # rgerhards, 2018-07-03
 rsyslog_testbench_test_url_access() {
 local missing_requirements=
@@ -169,7 +169,7 @@ test_status() {
 setvar_RS_HOSTNAME() {
-printf '### Obtaining HOSTNAME (prequisite, not actual test) ###\n'
+printf '### Obtaining HOSTNAME (prerequisite, not actual test) ###\n'
 generate_conf ""
 add_conf 'module(load="../plugins/imtcp/.libs/imtcp")
 input(type="imtcp" port="0" listenPortFileName="'$RSYSLOG_DYNNAME'.tcpflood_port")
@@ -275,7 +275,7 @@ startup_common() {
 # we need to remove the imdiag port file as there are some
 # tests that start multiple times. These may get the old port
 # number if the file still exists AND timing is bad so that
-# imdiag does not genenrate the port file quickly enough on
+# imdiag does not generate the port file quickly enough on
 # startup.
 rm -f $RSYSLOG_DYNNAME.imdiag$instance.port
 if [ ! -f $CONF_FILE ]; then
@@ -388,7 +388,7 @@ wait_file_exists() {
 # a generic check function and must only used with those kafka tests
 # that actually need it.
 kafka_wait_group_coordinator() {
-echo We are waiting for kafka/zookeper being ready to deliver messages
+echo We are waiting for kafka/zookeeper being ready to deliver messages
 wait_file_exists $RSYSLOG_OUT_LOG "
 Non-existence of $RSYSLOG_OUT_LOG can be caused
@@ -913,7 +913,7 @@ await_lookup_table_reload() {
 # $1 filename, default $RSYSLOG_OUT_LOG
 # $2 expected nbr of lines, default $NUMMESSAGES
-# $3 timout in seconds
+# $3 timeout in seconds
 # options (need to be specified in THIS ORDER if multiple given):
 # --delay ms -- if given, delay to use between retries
 # --abort-on-oversize -- error_exit if more lines than expected are present
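Based on the parameter description above, typical calls look like this (the explicit-argument form is a sketch derived from the comment; the bare form appears later in this commit):

wait_file_lines                              # defaults: $RSYSLOG_OUT_LOG must reach $NUMMESSAGES lines
wait_file_lines "$RSYSLOG_OUT_LOG" 5000      # wait for an explicit line count instead
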
@@ -1159,7 +1159,7 @@ do_cleanup() {
 # our $1 is the to-be-used exit code. if $2 is "stacktrace", call gdb.
 #
 # NOTE: if a function test_error_exit_handler is defined, error_exit will
-# call it immeditely before termination. This may be used to cleanup
+# call it immediately before termination. This may be used to cleanup
 # some things or emit additional diagnostic information.
 error_exit() {
 if [ $1 -eq $TB_ERR_TIMEOUT ]; then
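A test that wants extra diagnostics on failure can therefore define the hook before the framework calls error_exit, e.g. (the body is entirely up to the individual test):

test_error_exit_handler() {
	# called by error_exit immediately before the test terminates
	printf 'dumping receiver output for diagnosis:\n'
	cat "$RSYSLOG_OUT_LOG"
}
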
@@ -1214,7 +1214,7 @@ error_exit() {
 RSYSLOG_DEBUG=$RSYSLOG_DEBUG_SAVE
 rm IN_AUTO_DEBUG
 fi
-# output listening ports as a temporay debug measure (2018-09-08 rgerhards), now disables, but not yet removed (2018-10-22)
+# output listening ports as a temporary debug measure (2018-09-08 rgerhards), now disables, but not yet removed (2018-10-22)
 #if [ $(uname) == "Linux" ]; then
 # netstat -tlp
 #else
@@ -1277,7 +1277,7 @@ error_stats() {
 }
 # do the usual sequence check to see if everything was properly received.
-# $4... are just to have the abilit to pass in more options...
+# $4... are just to have the ability to pass in more options...
 # add -v to chkseq if you need more verbose output
 # argument --check-only can be used to simply do a check without abort in fail case
 # env var SEQ_CHECK_FILE permits to override file name to check
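Put together, the options documented above are used roughly like this (a sketch based on the comment; argument placement follows the description, not a verified call site):

seq_check --check-only                    # probe only: report a mismatch without aborting the test
export SEQ_CHECK_FILE="$RSYSLOG2_OUT_LOG" # point the check at a secondary output file
seq_check 0 999                           # explicit start/end sequence numbers
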
@@ -1351,7 +1351,7 @@ seq_check() {
 # do the usual sequence check to see if everything was properly received. This is
 # a duplicateof seq-check, but we could not change its calling conventions without
 # breaking a lot of exitings test cases, so we preferred to duplicate the code here.
-# $4... are just to have the abilit to pass in more options...
+# $4... are just to have the ability to pass in more options...
 # add -v to chkseq if you need more verbose output
 seq_check2() {
 $RS_SORTCMD $RS_SORT_NUMERIC_OPT < ${RSYSLOG2_OUT_LOG} | ./chkseq -s$1 -e$2 $3 $4 $5 $6 $7
@@ -1363,7 +1363,7 @@ seq_check2() {
 # do the usual sequence check, but for gzip files
-# $4... are just to have the abilit to pass in more options...
+# $4... are just to have the ability to pass in more options...
 gzip_seq_check() {
 if [ "$1" == "" ]; then
 if [ "$NUMMESSAGES" == "" ]; then
@@ -1445,7 +1445,7 @@ exit_test() {
 # Extended Exit handling for kafka / zookeeper instances
 kafka_exit_handling "true"
-printf '%s Test %s SUCCESFUL (took %s seconds)\n' "$(tb_timestamp)" "$0" "$(( $(date +%s) - TB_STARTTEST ))"
+printf '%s Test %s SUCCESSFUL (took %s seconds)\n' "$(tb_timestamp)" "$0" "$(( $(date +%s) - TB_STARTTEST ))"
 echo -------------------------------------------------------------------------------
 exit 0
 }
@@ -1530,7 +1530,7 @@ if [ -z "$ES_DOWNLOAD" ]; then
 fi
 dep_es_cached_file="$dep_cache_dir/$ES_DOWNLOAD"
-# kafaka (including Zookeeper)
+# kafka (including Zookeeper)
 dep_kafka_dir_xform_pattern='s#^[^/]\+#kafka#g'
 dep_zk_dir_xform_pattern='s#^[^/]\+#zk#g'
 dep_es_dir_xform_pattern='s#^[^/]\+#es#g'
@@ -1648,7 +1648,7 @@ stop_kafka() {
 done
 if [[ "$2" == 'true' ]]; then
-# Prozess shutdown, do cleanup now
+# Process shutdown, do cleanup now
 cleanup_kafka $1
 fi
 fi
@@ -1713,7 +1713,7 @@ stop_zookeeper() {
 fi
 if [[ "$2" == 'true' ]]; then
-# Prozess shutdown, do cleanup now
+# Process shutdown, do cleanup now
 cleanup_zookeeper $1
 fi
 rm "$ZOOPIDFILE"
@@ -1732,10 +1732,10 @@ cleanup_zookeeper() {
 start_zookeeper() {
 if [ "$KEEP_KAFKA_RUNNING" == "YES" ] && [ -f "$ZOOPIDFILE" ]; then
 if kill -0 "$(cat "$ZOOPIDFILE")"; then
-printf 'zookeeper already runing, no need to start\n'
+printf 'zookeeper already running, no need to start\n'
 return
 else
-printf 'INFO: zookeper pidfile %s exists, but zookeeper not runing\n' "$ZOOPIDFILE"
+printf 'INFO: zookeeper pidfile %s exists, but zookeeper not running\n' "$ZOOPIDFILE"
 printf 'deleting pid file\n'
 rm -f "$ZOOPIDFILE"
 fi
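The KEEP_KAFKA_RUNNING checks above let a series of kafka tests share one zookeeper/kafka instance instead of restarting it per test. A caller enables that by exporting the flag before the first test (script names below are illustrative):

export KEEP_KAFKA_RUNNING=YES
./first-kafka-test.sh && ./second-kafka-test.sh
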
@@ -1784,7 +1784,7 @@ start_kafka() {
 # shellcheck disable=SC2009 - we do not grep on the process name!
 kafkapid=$(ps aux | grep -i $dep_work_kafka_config | grep java | grep -v grep | awk '{print $2}')
 if [ "$KEEP_KAFKA_RUNNING" == "YES" ] && [ "$kafkapid" != "" ]; then
-printf 'kafka already runing, no need to start\n'
+printf 'kafka already running, no need to start\n'
 return
 fi
@@ -1979,7 +1979,7 @@ download_elasticsearch() {
 }
-# prepare eleasticsearch execution environment
+# prepare elasticsearch execution environment
 # this also stops any previous elasticsearch instance, if found
 prepare_elasticsearch() {
 stop_elasticsearch # stop if it is still running
@@ -2062,7 +2062,7 @@ start_elasticsearch() {
 }
 # read data from ES to a local file so that we can process
-# $1 - number of records (ES does not return all records unless you tell it explicitely).
+# $1 - number of records (ES does not return all records unless you tell it explicitly).
 # $2 - ES port
 es_getdata() {
 curl --silent -XPUT --show-error -H 'Content-Type: application/json' "http://localhost:${2:-$ES_PORT}/rsyslog_testbench/_settings" -d '{ "index" : { "max_result_window" : '${1:-$NUMMESSAGES}' } }'
@@ -2225,7 +2225,7 @@ mysql_prep_for_test() {
 # get data from mysql DB so that we can do seq_check on it.
 mysql_get_data() {
-# note "-s" is requried to suppress the select "field header"
+# note "-s" is required to suppress the select "field header"
 mysql -s --user=rsyslog --password=testbench --database $RSYSLOG_DYNNAME \
 -e "select substring(Message,9,8) from SystemEvents;" \
 > $RSYSLOG_OUT_LOG
@@ -2365,7 +2365,7 @@ case $1 in
 fi
 if [ "$RSYSLOG_DYNNAME" != "" ]; then
 echo "FAIL: \$RSYSLOG_DYNNAME already set in init"
-echo "hint: was init accidently called twice?"
+echo "hint: was init accidentally called twice?"
 exit 2
 fi
 export RSYSLOG_DYNNAME="rstb_$(./test_id $(basename $0))"
@@ -2422,7 +2422,7 @@ case $1 in
 # happens in chained test scripts. Delete on exit is fine,
 # though.
 # note: TCPFLOOD_EXTRA_OPTS MUST NOT be unset in init, because
-# some tests need to set it BEFORE calling init to accomodate
+# some tests need to set it BEFORE calling init to accommodate
 # their generic test drivers.
 if [ "$TCPFLOOD_EXTRA_OPTS" != '' ] ; then
 echo TCPFLOOD_EXTRA_OPTS set: $TCPFLOOD_EXTRA_OPTS

View File

@@ -4,7 +4,7 @@
 # Triggering condition: "json" property (message variables) are present
 # and "structured-data" property is also present. Caused rsyslog to
 # thrash the queue file, getting messages stuck in it and loosing all
-# after the initial problem occurence.
+# after the initial problem occurrence.
 # add 2017-02-08 by Rainer Gerhards, released under ASL 2.0
 uname

View File

@@ -7,7 +7,7 @@
 # happen in our test. So the DA worker pool thread is, depending on
 # timing, started and shut down multiple times. This is not a problem
 # indication!
-# The pstats disply is for manual review - it helps to see how many
+# The pstats display is for manual review - it helps to see how many
 # messages actually went to the DA queue.
 # Copyright (C) 2019-10-28 by Rainer Gerhards
 # This file is part of the rsyslog project, released under ASL 2.0

View File

@@ -1,6 +1,6 @@
 #!/bin/bash
 # we test the execonly if previous is suspended directive. This is the
-# most basic test which soley tests a singel case but no dependencies within
+# most basic test which solely tests a single case but no dependencies within
 # the ruleset.
 # rgerhards, 2010-06-23
 echo =====================================================================================

View File

@@ -2,7 +2,7 @@
 # we test the execonly if previous is suspended directive. For this,
 # we have an action that is suspended for all messages but the second.
 # we write two files: one only if the output is suspended and the other one
-# in all cases. This should thouroughly check the logic involved.
+# in all cases. This should thoroughly check the logic involved.
 # rgerhards, 2010-06-23
 echo ===============================================================================
 echo \[execonlywhenprevsuspended2.sh\]: test execonly...suspended functionality

View File

@@ -1,7 +1,7 @@
 #!/bin/bash
 # addd 2017-03-06 by RGerhards, released under ASL 2.0
-# Note: we need to inject a somewhat larger nubmer of messages in order
+# Note: we need to inject a somewhat larger number of messages in order
 # to ensure that we receive some messages in the actual output file,
 # as batching can (validly) cause a larger loss in the non-writable
 # file

View File

@@ -4,7 +4,7 @@
 # importantly, it checks that error messages can be issued very early
 # during startup.
 # Note that we use the override of the hostname to ensure we do not
-# accidentely get an acceptable FQDN-type hostname during testing.
+# accidentally get an acceptable FQDN-type hostname during testing.
 #
 # IMPORTANT: We cannot use the regular plumbing here, as our preload
 # interferes with socket operations (we cannot bind the port for some

View File

@@ -27,7 +27,7 @@ ruleset(name="ruleset") {
 echo ' [START=1552143924 KSH="MYBATCH.sh"'
 echo ' DURATION=120] '
 } > $RSYSLOG_DYNNAME.dsd.done
-echo "Batch report to consumme ${RSYSLOG_DYNNAME}.dsd.done for 2019-03-09T15:05:24"
+echo "Batch report to consume ${RSYSLOG_DYNNAME}.dsd.done for 2019-03-09T15:05:24"
 startup
 shutdown_when_empty
 wait_shutdown

View File

@@ -33,7 +33,7 @@ case $(uname) in
 datelog=$(date "+%Y-%m-%dT%H:%M:%S" -ud @$(stat -c "%Y" $RSYSLOG_DYNNAME.dsu.done))
 ;;
 esac
-echo "Batch report to consumme ${RSYSLOG_DYNNAME}.dsu.done for ${datelog}"
+echo "Batch report to consume ${RSYSLOG_DYNNAME}.dsu.done for ${datelog}"
 startup
 shutdown_when_empty
 wait_shutdown

View File

@@ -33,7 +33,7 @@ case $(uname) in
 datelog=$(date -ud @$(stat -c "%Y" $RSYSLOG_DYNNAME.dtl.done) "+%Y-%m-%dT%H:%M:%S")
 ;;
 esac
-echo "Batch report to consumme ${RSYSLOG_DYNNAME}.dtl.done for ${datelog}"
+echo "Batch report to consume ${RSYSLOG_DYNNAME}.dtl.done for ${datelog}"
 startup
 shutdown_when_empty
 wait_shutdown

View File

@@ -33,7 +33,7 @@ case $(uname) in
 datelog=$(date "+%Y-%m-%dT%H:%M:%S" -ud @$(stat -c "%Y" $RSYSLOG_DYNNAME.rsu.done))
 ;;
 esac
-echo "Batch report to consumme ${RSYSLOG_DYNNAME}.rsu.done for ${datelog}"
+echo "Batch report to consume ${RSYSLOG_DYNNAME}.rsu.done for ${datelog}"
 startup
 shutdown_when_empty
 wait_shutdown

View File

@@ -33,7 +33,7 @@ case $(uname) in
 datelog=$(date "+%Y-%m-%dT%H:%M:%S" -ud @$(stat -c "%Y" $RSYSLOG_DYNNAME.rtl.done))
 ;;
 esac
-echo "Batch report to consumme ${RSYSLOG_DYNNAME}.rtl.done for ${datelog}"
+echo "Batch report to consume ${RSYSLOG_DYNNAME}.rtl.done for ${datelog}"
 startup
 shutdown_when_empty
 wait_shutdown

View File

@@ -22,7 +22,7 @@ printf 'msgnum:0
 msgnum:1' > $RSYSLOG_DYNNAME.input
 printf '\nmsgnum:2' >> $RSYSLOG_DYNNAME.input
-# sleep a little to give rsyslog a chance to process unterminated linet
+# sleep a little to give rsyslog a chance to process unterminated lines
 ./msleep 500
 # write some more lines (see https://github.com/rsyslog/rsyslog/issues/144)

View File

@@ -104,7 +104,7 @@ echo Starting receiver instance [imkafka]
 startup
 # ---
-# Messure Starttime
+# Measure Starttime
 TIMESTART=$(date +%s.%N)
 # --- Fill Kafka Server with messages
@@ -125,7 +125,7 @@ echo Stopping sender instance [omkafka]
 shutdown_when_empty
 wait_shutdown
-# Messure Endtime
+# Measure Endtime
 TIMEEND=$(date +%s.%N)
 TIMEDIFF=$(echo "$TIMEEND - $TIMESTART" | bc)
 echo "*** imkafka time to process all data: $TIMEDIFF seconds!"

View File

@@ -23,7 +23,7 @@ template(name="outfmt" type="string" string="%msg:F,58:2%\n")
 template="outfmt"
 file=`echo $RSYSLOG_OUT_LOG`)
 '
-# Begin actuall testcase
+# Begin actual testcase
 startup
 tcpflood -p'$TCPFLOOD_PORT' -m$NUMMESSAGES -Ttls -x$srcdir/tls-certs/ca.pem -Z$srcdir/tls-certs/cert.pem -z$srcdir/tls-certs/key.pem
 wait_file_lines

View File

@@ -35,5 +35,5 @@ sleep 5 # due to large messages, we need this time for the tcp receiver to settl
 shutdown_when_empty # shut down rsyslogd when done processing messages
 wait_shutdown # and wait for it to terminate
 seq_check 0 49999 -E
-# content_check 'XXXXX' # Not really a check if it worked, but in TLS stuff in unfished TLS Packets gets lost, so we can't use seq-check.
+# content_check 'XXXXX' # Not really a check if it worked, but in TLS stuff in unfinished TLS Packets gets lost, so we can't use seq-check.
 exit_test

View File

@@ -1,5 +1,5 @@
 #!/bin/bash
-# Copyright (C) 2016 by Rainer Gerhardds
+# Copyright (C) 2016 by Rainer Gerhards
 # This file is part of the rsyslog project, released under ASL 2.0
 . ${srcdir:=.}/diag.sh init

View File

@@ -5,7 +5,7 @@
 # whether or not we have a leak, not any other functionality. Most
 # importantly, we do not care if the error message appears or not. This
 # is because it is not so easy to pick it up from the system log and other
-# tests already cover this szenario.
+# tests already cover this scenario.
 # add 2017-05-10 by Rainer Gerhards, released under ASL 2.0
 uname

View File

@@ -1,7 +1,7 @@
 #!/bin/bash
 # added 2015-11-17 by rgerhards
 # This file is part of the rsyslog project, released under ASL 2.0
-# Note: the aim of this test is to test against misadressing, so we do
+# Note: the aim of this test is to test against misaddressing, so we do
 # not actually check the output
 uname
@@ -11,7 +11,7 @@ if [ $(uname) = "FreeBSD" ] ; then
 fi
 echo ===============================================================================
-echo \[json_null.sh\]: test for json containung \"null\" value
+echo \[json_null.sh\]: test for json containing \"null\" value
 . ${srcdir:=.}/diag.sh init
 generate_conf
 add_conf '

View File

@@ -1,10 +1,10 @@
 #!/bin/bash
 # added 2015-11-17 by rgerhards
 # This file is part of the rsyslog project, released under ASL 2.0
-# Note: the aim of this test is to test against misadressing, so we do
+# Note: the aim of this test is to test against misaddressing, so we do
 # not actually check the output
 echo ===============================================================================
-echo \[json_null.sh\]: test for json containung \"null\" value
+echo \[json_null.sh\]: test for json containing \"null\" value
 . ${srcdir:=.}/diag.sh init
 generate_conf
 add_conf '

View File

@@ -9,7 +9,7 @@ if [ $(uname) = "FreeBSD" ] ; then
 fi
 echo ===============================================================================
-echo \[json_null_array.sh\]: test for json containung \"null\" value
+echo \[json_null_array.sh\]: test for json containing \"null\" value
 . ${srcdir:=.}/diag.sh init
 generate_conf
 add_conf '

View File

@@ -2,7 +2,7 @@
 # added 2015-11-17 by rgerhards
 # This file is part of the rsyslog project, released under ASL 2.0
 echo ===============================================================================
-echo \[json_null_array.sh\]: test for json containung \"null\" value
+echo \[json_null_array.sh\]: test for json containing \"null\" value
 . ${srcdir:=.}/diag.sh init
 generate_conf
 add_conf '

View File

@@ -27,7 +27,7 @@ set $.garply = "";
 foreach ($.quux in $!foo) do {
 if ($.quux!key == "str2") then {
 set $.quux!random_key = $.quux!key;
-unset $!foo; #because it is deep copied, the foreach loop will continue to work, but the action to print "post_sucide_foo" will not see $!foo
+unset $!foo; #because it is deep copied, the foreach loop will continue to work, but the action to print "post_suicide_foo" will not see $!foo
 }
 action(type="omfile" file=`echo $RSYSLOG_OUT_LOG` template="quux")
 foreach ($.corge in $.quux!value) do {
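The comment being corrected here documents the interesting semantics: the foreach iterator works on a deep copy, so unsetting $!foo inside the loop does not disturb the iteration itself - it only hides $!foo from actions that run afterwards. A trimmed sketch of the same construct, reusing the statements shown above:

foreach ($.quux in $!foo) do {
	unset $!foo;    # safe for the loop: $.quux is a deep copy of the element
	action(type="omfile" file=`echo $RSYSLOG_OUT_LOG` template="quux")
}
# any later action that references $!foo will no longer see it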

View File

@@ -1,9 +1,9 @@
 #!/bin/bash
 # this is a small helper script used to run testbench tests
-# repetetively. It is meant for manual use and not included
+# repetitively. It is meant for manual use and not included
 # in any testbench functionality.
 # There are some options commented out, e.g. exit on first
-# failuere. If that option is desired, it must be "commented
+# failure. If that option is desired, it must be "commented
 # in" -- we don't think it's worth to add proper options for that.
 # Copyright (2015) by Rainer Gerhards, released under ASL 2.0
 RUN=1

View File

@@ -1,8 +1,8 @@
 #!/bin/bash
 # This file is part of the rsyslog project, released under ASL 2.0
 . ${srcdir:=.}/diag.sh init
-# we libmaxmindb, in packaged versions, has a small cosmetic memory leak,
-# thus we need a supressions file:
+# we libmaxminddb, in packaged versions, has a small cosmetic memory leak,
+# thus we need a suppressions file:
 export RS_TESTBENCH_VALGRIND_EXTRA_OPTS="$RS_TESTBENCH_VALGRIND_EXTRA_OPTS --suppressions=$srcdir/libmaxmindb.supp"
 generate_conf
 add_conf '

View File

@@ -1,8 +1,8 @@
 #!/bin/bash
 # This file is part of the rsyslog project, released under ASL 2.0
 . ${srcdir:=.}/diag.sh init
-# we libmaxmindb, in packaged versions, has a small cosmetic memory leak,
-# thus we need a supressions file:
+# we libmaxminddb, in packaged versions, has a small cosmetic memory leak,
+# thus we need a suppressions file:
 export RS_TESTBENCH_VALGRIND_EXTRA_OPTS="$RS_TESTBENCH_VALGRIND_EXTRA_OPTS --suppressions=$srcdir/libmaxmindb.supp"
 generate_conf
 add_conf '

View File

@@ -1,7 +1,7 @@
 #!/bin/bash
 # added 2018-04-06 by richm, released under ASL 2.0
 #
-# Note: on buidbot VMs (where there is no environment cleanup), the
+# Note: on buildbot VMs (where there is no environment cleanup), the
 # kubernetes test server may be kept running if the script aborts or
 # is aborted (buildbot master failure!) for some reason. As such we
 # execute it under "timeout" control, which ensure it always is

View File

@@ -1,7 +1,7 @@
 #!/bin/bash
 # added 2018-04-06 by richm, released under ASL 2.0
 #
-# Note: on buidbot VMs (where there is no environment cleanup), the
+# Note: on buildbot VMs (where there is no environment cleanup), the
 # kubernetes test server may be kept running if the script aborts or
 # is aborted (buildbot master failure!) for some reason. As such we
 # execute it under "timeout" control, which ensure it always is

View File

@@ -1,7 +1,7 @@
 #!/bin/bash
 # added 2018-04-06 by richm, released under ASL 2.0
 #
-# Note: on buidbot VMs (where there is no environment cleanup), the
+# Note: on buildbot VMs (where there is no environment cleanup), the
 # kubernetes test server may be kept running if the script aborts or
 # is aborted (buildbot master failure!) for some reason. As such we
 # execute it under "timeout" control, which ensure it always is

View File

@@ -1,7 +1,7 @@
 #!/bin/bash
 # added 2018-04-06 by richm, released under ASL 2.0
 #
-# Note: on buidbot VMs (where there is no environment cleanup), the
+# Note: on buildbot VMs (where there is no environment cleanup), the
 # kubernetes test server may be kept running if the script aborts or
 # is aborted (buildbot master failure!) for some reason. As such we
 # execute it under "timeout" control, which ensure it always is

View File

@@ -37,7 +37,7 @@ done
 shutdown_when_empty
 wait_shutdown
-# note "-s" is requried to suppress the select "field header"
+# note "-s" is required to suppress the select "field header"
 mysql -s --user=rsyslog --password=testbench < ${srcdir}/testsuites/mysql-select-msg.sql > $RSYSLOG_OUT_LOG
 seq_check
 mysql_cleanup_test

View File

@@ -2,7 +2,7 @@
 # addd 2016-06-16 by RGerhards, released under ASL 2.0
 messages=20000 # how many messages to inject?
-# Note: we need to inject a somewhat larger nubmer of messages in order
+# Note: we need to inject a somewhat larger number of messages in order
 # to ensure that we receive some messages in the actual output file,
 # as batching can (validly) cause a larger loss in the non-writable
 # file

View File

@@ -20,7 +20,7 @@ wait_shutdown
 journalctl -r -t rsyslogd: |grep "RsysLoG-TESTBENCH $COOKIE"
 if [ $? -ne 1 ]; then
 echo "error: cookie $COOKIE not found. Head of journal:"
-journalctrl -r -t rsyslogd: | head
+journalctl -r -t rsyslogd: | head
 exit 1
 fi
 exit_test

View File

@@ -26,7 +26,7 @@ wait_shutdown
 journalctl -r -t rsyslogd: |grep "RsysLoG-TESTBENCH $COOKIE"
 if [ $? -ne 1 ]; then
 echo "error: cookie $COOKIE not found. Head of journal:"
-journalctrl -r -t rsyslogd: | head
+journalctl -r -t rsyslogd: | head
 exit 1
 fi
 exit_test

View File

@@ -2,7 +2,7 @@
 # This file is part of the rsyslog project, released under ASL 2.0
 # Similar to the 'omprog-output-capture.sh' test, with multiple worker
-# threads on high load. Checks that the lines concurrently emmitted to
+# threads on high load. Checks that the lines concurrently emitted to
 # stdout/stderr by the various program instances are not intermingled in
 # the output file (i.e., are captured atomically by omprog) when 1) the
 # lines are less than PIPE_BUF bytes long and 2) the program writes the

View File

@@ -63,9 +63,9 @@ while IFS= read -r line; do
 #
 # TODO: Issue #2420: Deferred messages within a transaction are
 # not retried by rsyslog.
-# If that's the expected behaviour, what's then the difference
+# If that's the expected behavior, what's then the difference
 # between the RS_RET_OK and the RS_RET_DEFER_COMMIT return codes?
-# If that's not the expected behaviour, the following lines must
+# If that's not the expected behavior, the following lines must
 # be removed when the bug is solved.
 #
 # (START OF CODE THAT WILL POSSIBLY NEED TO BE REMOVED)

View File

@@ -1,7 +1,7 @@
 #!/bin/bash
 # The sole point of this test is that omusrmsg does not abort.
 # We cannot check the actual outcome, as we would need to be running
-# under root to do this. This test has explicitely been added to ensure
+# under root to do this. This test has explicitly been added to ensure
 # we can do some basic testing even when not running as root. Additional
 # tests may be added for the root case.
 # addd 2018-08-05 by RGerhards, released under ASL 2.0

View File

@@ -16,12 +16,12 @@ ruleset(name="ruleset1") {
 '
 startup
-tcpflood -m1 -T "udp" -M "\"<27>xapi: [error|xen3|15|Guest liveness monitor D:bca30ab3f1c1|master_connection] Connection to master died. I will continue to retry indefinitely (supressing future logging of this message)\""
+tcpflood -m1 -T "udp" -M "\"<27>xapi: [error|xen3|15|Guest liveness monitor D:bca30ab3f1c1|master_connection] Connection to master died. I will continue to retry indefinitely (suppressing future logging of this message)\""
 tcpflood -m1 -T "udp" -M "\"This is a message!\""
 shutdown_when_empty
 wait_shutdown
-export EXPECTED="27,daemon,err,$RS_HOSTNAME,xapi,xapi:, [error|xen3|15|Guest liveness monitor D:bca30ab3f1c1|master_connection] Connection to master died. I will continue to retry indefinitely (supressing future logging of this message)
+export EXPECTED="27,daemon,err,$RS_HOSTNAME,xapi,xapi:, [error|xen3|15|Guest liveness monitor D:bca30ab3f1c1|master_connection] Connection to master died. I will continue to retry indefinitely (suppressing future logging of this message)
 13,user,notice,This,is,is, a message!"
 cmp_exact $RSYSLOG_OUT_LOG

View File

@@ -16,12 +16,12 @@ ruleset(name="ruleset1") {
 '
 startup
-tcpflood -m1 -M "\"<27>xapi: [error|xen3|15|Guest liveness monitor D:bca30ab3f1c1|master_connection] Connection to master died. I will continue to retry indefinitely (supressing future logging of this message)\""
+tcpflood -m1 -M "\"<27>xapi: [error|xen3|15|Guest liveness monitor D:bca30ab3f1c1|master_connection] Connection to master died. I will continue to retry indefinitely (suppressing future logging of this message)\""
 tcpflood -m1 -M "\"This is a message!\""
 shutdown_when_empty
 wait_shutdown
-export EXPECTED="27,daemon,err,$RS_HOSTNAME,xapi,xapi:, [error|xen3|15|Guest liveness monitor D:bca30ab3f1c1|master_connection] Connection to master died. I will continue to retry indefinitely (supressing future logging of this message)
+export EXPECTED="27,daemon,err,$RS_HOSTNAME,xapi,xapi:, [error|xen3|15|Guest liveness monitor D:bca30ab3f1c1|master_connection] Connection to master died. I will continue to retry indefinitely (suppressing future logging of this message)
 13,user,notice,This,is,is, a message!"
 cmp_exact $RSYSLOG_OUT_LOG
 exit_test

View File

@@ -33,5 +33,5 @@ tcpflood -m1000 -d500
 shutdown_when_empty # shut down rsyslogd when done processing messages
 wait_shutdown # and wait for it to terminate
 # NO need to check seqno -- see header comment
-echo we did not loop, so the test is sucessfull
+echo we did not loop, so the test is successful
 exit_test

View File

@@ -1,5 +1,5 @@
 #!/bin/bash
-# Test if rsyslog survives sending truely random data to it...
+# Test if rsyslog survives sending truly random data to it...
 #
 # added 2010-04-01 by Rgerhards
 # This file is part of the rsyslog project, released under ASL 2.0

View File

@@ -13,7 +13,7 @@ fi
 # STEP1: start both instances and send 1000 messages.
 # Note: receiver is instance 1, sender instance 2.
 #
-# start up the instances. Note that the envrionment settings can be changed to
+# start up the instances. Note that the environment settings can be changed to
 # set instance-specific debugging parameters!
 #export RSYSLOG_DEBUG="debug nostdout"
 #export RSYSLOG_DEBUGLOG="log2"
@@ -96,7 +96,7 @@ echo file size to expect is $OLDFILESIZE
 #
 # Step 4: send new data. Queue files are not permitted to grow now
-# (but one file continous to exist).
+# (but one file continuous to exist).
 #
 echo step 4
 injectmsg2 11001 10

View File

@@ -1,5 +1,5 @@
 #!/bin/bash
-# tests 'config.enabled="on"' -- default value is implicitely check
+# tests 'config.enabled="on"' -- default value is implicitly check
 # in all testbench tests and does not need its individual test
 # (actually it is here tested via template() and action() as well...
 # added 2018-01-22 by Rainer Gerhards; Released under ASL 2.0

View File

@@ -13,7 +13,7 @@ template(name="outfmt" type="list") {
 /* tcpflood uses local4.=debug */
 if prifilt("syslog.*") then
-stop # it actually doesn`t matter what we do here
+stop # it actually does not matter what we do here
 else
 action(type="omfile" file=`echo $RSYSLOG_OUT_LOG` template="outfmt")
 '

View File

@@ -13,7 +13,7 @@ template(name="outfmt" type="list") {
 # we deliberately include continue/stop to make sure we have more than
-# one statement. This catches grammar erorrs
+# one statement. This catches grammar errors
 ruleset(name="rs2") {
 continue
 action(type="omfile" file=`echo $RSYSLOG_OUT_LOG` template="outfmt")

View File

@@ -1,6 +1,6 @@
 #!/bin/bash
 # Test for the getenv() rainerscript function
-# this is a quick test, but it gurantees that the code path is
+# this is a quick test, but it guarantees that the code path is
 # at least progressed (but we do not check for unset envvars!)
 # added 2009-11-03 by Rgerhards
 # This file is part of the rsyslog project, released under GPLv3
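For context, getenv() is called from RainerScript inside the generated config; a minimal sketch of the kind of statement such a test exercises (the environment variable name is illustrative):

generate_conf
add_conf '
set $!from_env = getenv("MY_TEST_ENVVAR");
action(type="omfile" file=`echo $RSYSLOG_OUT_LOG`)
'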

View File

@@ -59,7 +59,7 @@ tcpflood -c3 -p'$RSYSLOG_PORT2' -m20000 -i20000
 tcpflood -c3 -p'$RSYSLOG_PORT3' -m20000 -i40000
 # in this version of the imdiag, we do not have the capability to poll
-# all queues for emptyness. So we do a sleep in the hopes that this will
+# all queues for emptiness. So we do a sleep in the hopes that this will
 # sufficiently drain the queues. This is race, but the best we currently
 # can do... - rgerhards, 2009-11-05
 sleep 2

View File

@@ -59,7 +59,7 @@ tcpflood -c3 -p'$RSYSLOG_PORT2' -m20000 -i20000
 tcpflood -c3 -p'$RSYSLOG_PORT3' -m20000 -i40000
 # in this version of the imdiag, we do not have the capability to poll
-# all queues for emptyness. So we do a sleep in the hopes that this will
+# all queues for emptiness. So we do a sleep in the hopes that this will
 # sufficiently drain the queues. This is race, but the best we currently
 # can do... - rgerhards, 2009-11-05
 shutdown_when_empty # shut down rsyslogd when done processing messages

View File

@@ -10,16 +10,16 @@
 # config file name ($2). From that name, the sender and receiver config file
 # names are automatically generated.
 # So: $1 config file name, $2 number of messages
-# environmet variable TCPFLOOD_EXTRA_OPTIONS is used to slowdown sending when
+# environment variable TCPFLOOD_EXTRA_OPTIONS is used to slowdown sending when
 # using UDP (we've seen problems due to UDP message loss if sending with full
 # speed)
 #
 # A note on TLS testing: the current testsuite (in git!) already contains
 # TLS test cases. However, getting these test cases correct is not simple.
-# That's not a problem with the code itself, but rater a problem with
-# synchronization in the test environment. So I have deciced to keep the
+# That's not a problem with the code itself, but rather a problem with
+# synchronization in the test environment. So I have decided to keep the
 # TLS tests in, but not yet actually utilize them. This is most probably
-# left as an excercise for future (devel) releases. -- rgerhards, 2009-11-11
+# left as an exercise for future (devel) releases. -- rgerhards, 2009-11-11
 #
 # added 2009-11-11 by Rgerhards
 # This file is part of the rsyslog project, released under ASL 2.0

View File

@@ -8,7 +8,7 @@
 relp_port=$(./omrelp_dflt_port)
 if [ $relp_port -lt 1024 ]; then
 if [ "$EUID" -ne 0 ]; then
-echo relp default port $relp_port is priviledged
+echo relp default port $relp_port is privileged
 echo need to be root to run this test - skipping
 exit 77
 fi

View File

@@ -54,7 +54,7 @@ wait_shutdown 2
 shutdown_when_empty
 wait_shutdown
-# IMPORTANT: this test will generate many error messsages. This is exactly it's
+# IMPORTANT: this test will generate many error messages. This is exactly it's
 # intent. So do not think something is wrong. The content_check below checks
 # these error codes.

View File

@@ -2,7 +2,7 @@
 # added 2015-05-22 by singh.janmejay
 # This file is part of the rsyslog project, released under ASL 2.0
 echo ===============================================================================
-echo \[stop_when_array_has_element.sh\]: loop detecting presense of an element and stopping ruleset execution
+echo \[stop_when_array_has_element.sh\]: loop detecting presence of an element and stopping ruleset execution
 . ${srcdir:=.}/diag.sh init stop_when_array_has_element.sh
 generate_conf
 add_conf '

View File

@@ -3,7 +3,7 @@
 . ${srcdir:=.}/diag.sh init
 messages=20000 # how many messages to inject?
-# Note: we need to inject a somewhat larger nubmer of messages in order
+# Note: we need to inject a somewhat larger number of messages in order
 # to ensure that we receive some messages in the actual output file,
 # as batching can (validly) cause a larger loss in the non-writable
 # file
@@ -33,7 +33,7 @@ wait_shutdown
 # we still try to kill it in case the test did not connect to it! Note that we
 # do not need an extra wait, as the rsyslog shutdown process should have taken
 # far long enough.
-echo wating on background process
+echo waiting on background process
 kill $BGPROCESS &> /dev/null
 wait $BGPROCESS

View File

@@ -3,7 +3,7 @@
 # we send 100,000 messages in the hopes that his puts at least a little bit
 # of pressure on the threading subsystem. To really prove it, we would need to
 # push messages for several minutes, but that takes too long during the
-# automatted tests (hint: do this manually after suspect changes). Thankfully,
+# automated tests (hint: do this manually after suspect changes). Thankfully,
 # in practice many threading bugs result in an abort rather quickly and these
 # should be covered by this test here.
 # rgerhards, 2009-06-26

View File

@@ -3,7 +3,7 @@
 # we send 100,000 messages in the hopes that his puts at least a little bit
 # of pressure on the threading subsystem. To really prove it, we would need to
 # push messages for several minutes, but that takes too long during the
-# automatted tests (hint: do this manually after suspect changes). Thankfully,
+# automated tests (hint: do this manually after suspect changes). Thankfully,
 # in practice many threading bugs result in an abort rather quickly and these
 # should be covered by this test here.
 # rgerhards, 2009-06-26

View File

@@ -7,7 +7,7 @@
 #
 # This file is part of rsyslog.
 # Released under ASL 2.0
-echo \[validation-run.sh\]: testing configuraton validation
+echo \[validation-run.sh\]: testing configuration validation
 echo "testing a failed configuration verification run"
 ../tools/rsyslogd -u2 -N1 -f$srcdir/testsuites/invalid.conf -M../runtime/.libs:../.libs
 if [ $? -ne 1 ]; then