5736d1ddacc4019101e7a5170dd25efbc63b622a tracing: pass if replaced by tx/pkg to tracepoint (0xb10c)
a4ec07f1944999c2eead41d08d7dd4fc3aa71243 doc: add comments for CTxMemPool::ChangeSet (Suhas Daftuar)
83f814b1d1100baac9dca9c176f89b0ec2555dbc Remove m_all_conflicts from SubPackageState (Suhas Daftuar)
d3c8e7dfb63f7986a1f9654ea2393aabe3cd78da Ensure that we don't add duplicate transactions in rbf fuzz tests (Suhas Daftuar)
d7dc9fd2f7bc675256687b9c55fdbec9cc8ac781 Move CalculateChunksForRBF() to the mempool changeset (Suhas Daftuar)
284a1d33f1dcbc3b3404ea40a948ff6600239613 Move prioritisation into changeset (Suhas Daftuar)
446b08b599bc492bbec10ccc2292aee6f90c58e7 Don't distinguish between direct conflicts and all conflicts when doing cluster-size-2-rbf checks (Suhas Daftuar)
b53041021abc4f9ee7203341413e8676e2d5a7ca Duplicate transactions are not permitted within a changeset (Suhas Daftuar)
b447416fddcb8c8647391502cca3dbfd1552e02e Public mempool removal methods Assume() no changeset is outstanding (Suhas Daftuar)
2b30f4d36c86f775ac637b171d27d42a02309c5b Make RemoveStaged() private (Suhas Daftuar)
18829194ca68152ac0b38d34e94b9265ee74c410 Enforce that there is only one changeset at a time (Suhas Daftuar)
7fb62f7db60c7d793828ae45f87bc3f5c63cc989 Apply mempool changeset transactions directly into the mempool (Suhas Daftuar)
34b6c5833d11ea84fbd4b891e06408f6f4ca6fac Clean up FinalizeSubpackage to avoid workspace-specific information (Suhas Daftuar)
57983b8add72a04721d3f2050c063a3c4d8683ed Move LimitMempoolSize to take place outside FinalizeSubpackage (Suhas Daftuar)
01e145b9758f1df14a7ea18058ba9577bf88e459 Move changeset from workspace to subpackage (Suhas Daftuar)
802214c0832de00f24268183f7763fa984ba7903 Introduce mempool changesets (Suhas Daftuar)
87d92fa340195d9c87be3d023ca133b90b3b7d4e test: Add unit test coverage of package rbf + prioritisetransaction (Suhas Daftuar)
15d982f91e6b0f145c9dd4edf29827cfabb37a3f Add package hash to package-rbf log message (Suhas Daftuar)
Pull request description:
part of cluster mempool: #30289
It became clear while working on cluster mempool that it would be helpful for transaction validation if we could consider a full set of proposed changes to the mempool -- consisting of a set of transactions to add, and a set of transactions (i.e. conflicts) to simultaneously remove -- and perform calculations on what the mempool would look like if the proposed changes were to be applied. Two specific examples of where we'd like to do this:
- Determining if ancestor/descendant/TRUC limits would be violated (in the future, cluster limits) if either a single transaction or a package of transactions were to be accepted
- Determining if an RBF would make the mempool "better", however that idea is defined, both in the single-transaction and package-of-transactions cases
In preparation for cluster mempool, I have pulled this reworking of the mempool interface out of #28676 so it can be reviewed on its own. I have not re-implemented ancestor/descendant limits to be run through the changeset, since with cluster mempool those limits will be going away, so this seems like wasted effort. However, I have rebased #28676 on top of this branch so reviewers can see what the new mempool interface could look like in the cluster mempool setting.
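To make the changeset idea concrete, here is a heavily simplified toy sketch of what such an interface looks like conceptually. This is illustrative only: the class and method names below are invented for this example and are not the actual CTxMemPool::ChangeSet API.

// Toy illustration of the changeset concept (hypothetical names, not the real API):
// stage additions and removals, reason about the hypothetical result, then apply atomically.
#include <set>
#include <string>

class ToyChangeSet
{
    std::set<std::string>& m_pool;      // the "mempool" being modified
    std::set<std::string> m_to_add;     // staged additions (new transactions)
    std::set<std::string> m_to_remove;  // staged removals (e.g. RBF conflicts)

public:
    explicit ToyChangeSet(std::set<std::string>& pool) : m_pool(pool) {}

    void StageAddition(const std::string& txid) { m_to_add.insert(txid); }
    void StageRemoval(const std::string& txid) { m_to_remove.insert(txid); }

    // Size the pool would have if the staged changes were applied
    // (assumes staged additions are new and staged removals are currently present).
    size_t HypotheticalSize() const { return m_pool.size() + m_to_add.size() - m_to_remove.size(); }

    // Commit all staged additions and removals in one step.
    void Apply()
    {
        for (const auto& txid : m_to_remove) m_pool.erase(txid);
        for (const auto& txid : m_to_add) m_pool.insert(txid);
    }
};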
There are some minor behavior changes here, which I believe are inconsequential:
- In the package validation setting, transactions would be added to the mempool before `ConsensusScriptChecks()` is run. In theory, `ConsensusScriptChecks()` should always pass if the `PolicyScriptChecks()` have passed and it's just a belt-and-suspenders check for us, but if somehow they were to diverge then there could be some small behavior change from adding transactions and then removing them, versus never adding them at all.
- The error reporting on `CheckConflictTopology()` has slightly changed due to no longer distinguishing between direct conflicts and indirect conflicts. I believe this should be entirely inconsequential because there shouldn't be a logical difference between those two ideas from the perspective of this function, but I did have to update some error strings in some tests.
- Because, in a package setting, RBFs now happen as part of the entire package being accepted, the logging has changed slightly because we do not know which transaction specifically evicted a given removed transaction.
- Specifically, the "package hash" is now used to reference the set of transactions that are being accepted, rather than any single txid. The log message relating to package RBF that happen in the `TXPACKAGES` category has been updated as well to include the package hash, so that it's possible to see which specific set of transactions are being referenced by that package hash.
- Relatedly, the tracepoint logging in the package RBF case has been updated as well to reference the package hash, rather than a transaction hash.
ACKs for top commit:
naumenkogs:
ACK 5736d1ddac
instagibbs:
ACK 5736d1ddacc4019101e7a5170dd25efbc63b622a
ismaelsadeeq:
reACK 5736d1ddacc4019101e7a5170dd25efbc63b622a
glozow:
ACK 5736d1ddacc
Tree-SHA512: 21810872e082920d337c89ac406085aa71c5f8e5151ab07aedf41e6601f60a909b22fbf462ef3b735d5d5881e9b76142c53957158e674dd5dfe6f6aabbdf630b
Unit tests
The sources in this directory are unit test cases. Boost includes a unit testing framework, and since Bitcoin Core already uses Boost, it makes sense to simply use this framework rather than require developers to configure some other framework (we want as few impediments to creating unit tests as possible).
The build system is set up to compile an executable called test_bitcoin
that runs all of the unit tests. The main source file for the test library is found in
util/setup_common.cpp.
The examples in this document assume the build directory is named
build. You'll need to adapt them if you named it differently.
Compiling/running unit tests
Unit tests will be automatically compiled if dependencies were met during the generation of the Bitcoin Core build system and tests weren't explicitly disabled.
The unit tests can be run with ctest --test-dir build, which includes unit
tests from subtrees.
Run test_bitcoin --list_content for the full list of tests.
To run the unit tests manually, launch build/src/test/test_bitcoin. To recompile
after a test file was modified, run cmake --build build and then run the test again. If you
modify a non-test file, use cmake --build build --target test_bitcoin to recompile only what's needed
to run the unit tests.
To add more unit tests, add BOOST_AUTO_TEST_CASE functions to the existing
.cpp files in the test/ directory or add new .cpp files that
implement new BOOST_AUTO_TEST_SUITE sections.
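For instance, a minimal new test case added inside an existing suite might look like the following sketch (the case name is just a placeholder):

BOOST_AUTO_TEST_CASE(my_new_test)
{
    const int result{1 + 1};
    BOOST_CHECK_EQUAL(result, 2); // fails the test if the two values differ
    BOOST_CHECK(result > 0);      // fails the test if the condition is false
}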
To run the GUI unit tests manually, launch build/src/qt/test/test_bitcoin-qt.
To add more GUI unit tests, add them to the src/qt/test/ directory and
the src/qt/test/test_main.cpp file.
Running individual tests
The test_bitcoin runner accepts command line arguments from the Boost
framework. To see the list of arguments that may be passed, run:
test_bitcoin --help
For example, to run only the tests in the getarg_tests file, with full logging:
build/src/test/test_bitcoin --log_level=all --run_test=getarg_tests
or
build/src/test/test_bitcoin -l all -t getarg_tests
or to run only the doubledash test in getarg_tests:
build/src/test/test_bitcoin --run_test=getarg_tests/doubledash
The --log_level= (or -l) argument controls the verbosity of the test output.
The test_bitcoin runner also accepts some of the command line arguments accepted by
bitcoind. Use -- to separate these sets of arguments:
build/src/test/test_bitcoin --log_level=all --run_test=getarg_tests -- -printtoconsole=1
The -printtoconsole=1 after the two dashes sends debug logging, which
normally goes only to debug.log within the data directory, to the
standard terminal output as well.
Running test_bitcoin creates a temporary working (data) directory with a randomly
generated pathname within test_common bitcoin/, which in turn is within
the system's temporary directory (see
temp_directory_path).
This data directory looks like a simplified form of the standard bitcoind data
directory. Its content will vary depending on the test, but it will always
have a debug.log file, for example.
The location of the temporary data directory can be specified with the
-testdatadir option. This can make debugging easier. The directory
path used is the argument path appended with
/test_common bitcoin/<test-name>/datadir.
The directory path is created if necessary.
Specifying this argument also causes the data directory
not to be removed after the last test. This is useful for looking at
what the test wrote to debug.log after it completes, for example.
(The directory is removed at the start of the next test run,
so no leftover state is used.)
$ build/src/test/test_bitcoin --run_test=getarg_tests/doubledash -- -testdatadir=/somewhere/mydatadir
Test directory (will not be deleted): "/somewhere/mydatadir/test_common bitcoin/getarg_tests/doubledash/datadir"
Running 1 test case...
*** No errors detected
$ ls -l '/somewhere/mydatadir/test_common bitcoin/getarg_tests/doubledash/datadir'
total 8
drwxrwxr-x 2 admin admin 4096 Nov 27 22:45 blocks
-rw-rw-r-- 1 admin admin 1003 Nov 27 22:45 debug.log
If you run an entire test suite, such as --run_test=getarg_tests, or all the test suites
(by not specifying --run_test), a separate directory
will be created for each individual test.
Adding test cases
To add a new unit test file to our test suite, add the file to src/test/CMakeLists.txt (or to src/wallet/test/CMakeLists.txt for wallet-related tests). The pattern is to create
one test file for each class or source file for which you want to create
unit tests. The file naming convention is <source_filename>_tests.cpp
and such files should wrap their tests in a test suite
called <source_filename>_tests. For an example of this pattern,
see uint256_tests.cpp.
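A brand-new test file following this convention might start out roughly as follows (foo_tests is a placeholder name; most existing files use the BasicTestingSetup fixture from test/util/setup_common.h, which gives each case a fresh temporary data directory). Remember to also add the new file to the appropriate CMakeLists.txt as described above.

// src/test/foo_tests.cpp -- illustrative skeleton for a new unit test file
#include <test/util/setup_common.h>

#include <boost/test/unit_test.hpp>

BOOST_FIXTURE_TEST_SUITE(foo_tests, BasicTestingSetup)

BOOST_AUTO_TEST_CASE(foo_basic_behavior)
{
    BOOST_CHECK_EQUAL(2 + 2, 4);
}

BOOST_AUTO_TEST_SUITE_END()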
Logging and debugging in unit tests
ctest --test-dir build will write to the log file build/Testing/Temporary/LastTest.log. You can
additionally use the --output-on-failure option to display logs of the failed tests automatically
on failure. For running individual tests verbosely, refer to the section
above.
To write to logs from unit tests you need to use specific message methods
provided by Boost. The simplest is BOOST_TEST_MESSAGE.
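For example (an illustrative snippet; BOOST_TEST_MESSAGE output only appears when the test is run with a sufficiently verbose log level, such as --log_level=all or --log_level=message):

BOOST_AUTO_TEST_CASE(logging_example)
{
    const int height{100};
    BOOST_TEST_MESSAGE("Checking state at height " << height); // written to the test log
    BOOST_CHECK_EQUAL(height, 100);
}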
For debugging you can launch the test_bitcoin executable with gdb or lldb and
start debugging, just like you would with any other program:
gdb build/src/test/test_bitcoin
Segmentation faults
If you hit a segmentation fault during a test run, you can diagnose where the fault
is happening by running gdb ./build/src/test/test_bitcoin and then using the bt command
within gdb.
Another tool that can be used to resolve segmentation faults is valgrind.
If for whatever reason you want to produce a core dump file for this fault, you can do
that as well. By default, the boost test runner will intercept system errors and not
produce a core file. To bypass this, add --catch_system_errors=no to the
test_bitcoin arguments and ensure that your ulimits are set properly (e.g. ulimit -c unlimited).
Running the tests and hitting a segmentation fault should now produce a file called core
(on Linux platforms, the file name will likely depend on the contents of
/proc/sys/kernel/core_pattern).
You can then explore the core dump using
gdb build/src/test/test_bitcoin core
(gdb) bt # produce a backtrace for where a segfault occurred