f5883286e32b625aab3dd80c74d6adb4f37f0a80 Add a fuzz test for Num3072 multiplication and inversion (Pieter Wuille)
a26ce628942243fc9848a63bfdfa5e61f5e936f3 Safegcd based modular inverse for Num3072 (Pieter Wuille)
91ce8cef2d8955d980ab7e89fbf74e8b29adf178 Add benchmark for MuHash finalization (Pieter Wuille)
Pull request description:
This implements a safegcd-based modular inverse for MuHash3072. It is a fairly straightforward translation of [the libsecp256k1 implementation](https://github.com/bitcoin-core/secp256k1/pull/831), with the following changes:
* Generic for 32-bit and 64-bit
* Specialized for the specific MuHash3072 modulus (2^3072 - 1103717).
* A bit more C++ish
* Far fewer sanity checks
A benchmark is also included for MuHash3072::Finalize. The new implementation is around 100x faster on x86_64 for me (from 5.8 ms to 57 μs); for 32-bit code the factor is likely even larger.
For more information:
* [Original paper](https://gcd.cr.yp.to/papers.html) by Daniel J. Bernstein and Bo-Yin Yang
* [Implementation](https://github.com/bitcoin-core/secp256k1/pull/767) for libsecp256k1 by Peter Dettman; and the [final](https://github.com/bitcoin-core/secp256k1/pull/831) version
* [Explanation](https://github.com/bitcoin-core/secp256k1/blob/master/doc/safegcd_implementation.md) of the algorithm using Python snippets
* [Analysis](https://github.com/sipa/safegcd-bounds) of the maximum number of iterations the algorithm needs
* [Formal proof in Coq](https://medium.com/blockstream/a-formal-proof-of-safegcd-bounds-695e1735a348) by Russell O'Connor (for the 256-bit version of the algorithm; here we use a 3072-bit one).
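For readers who want a feel for the algorithm's core step, here is a minimal sketch of the "divstep" transition from the paper, translated to C++ on small integers. This is purely illustrative, not the code added in this PR: the real Num3072 code batches many divsteps into transition matrices applied to 32-/64-bit limbs.
```cpp
#include <cstdint>
#include <tuple>

// One divstep on (delta, f, g), following the safegcd paper's definition; f stays
// odd throughout, and repeating this step (a precomputed number of times, per the
// bounds analysis linked above) drives g to 0 while f converges to +/- gcd(f, g).
std::tuple<int64_t, int64_t, int64_t> DivStep(int64_t delta, int64_t f, int64_t g)
{
    if (delta > 0 && (g & 1)) return {1 - delta, g, (g - f) >> 1};
    if (g & 1) return {1 + delta, f, (g + f) >> 1};
    return {1 + delta, f, g >> 1};
}
```
The modular inverse falls out of applying the same transformations to an auxiliary pair of values modulo the modulus, which is where most of the implementation complexity (and the specialization for 2^3072 - 1103717) lives.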
ACKs for top commit:
achow101:
ACK f5883286e32b625aab3dd80c74d6adb4f37f0a80
TheCharlatan:
Re-ACK f5883286e32b625aab3dd80c74d6adb4f37f0a80
dergoegge:
tACK f5883286e32b625aab3dd80c74d6adb4f37f0a80
Tree-SHA512: 275872c61d30817a82901dee93fc7153afca55c32b72a95b8768f3fd464da1b09b36f952f30e70225e766b580751cfb9b874b2feaeb73ffaa6943c8062aee19a
18619b473255786b80f27dd33b46eb26a63fffc7 wallet: remove BDB dependency from wallet migration benchmark (furszy)
Pull request description:
Part of the legacy wallet removal work, tracked in #20160.
Stops creating a bdb database in the wallet migration benchmark.
Instead, the benchmark now creates the db in memory and re-uses it for the migration process.
ACKs for top commit:
achow101:
ACK 18619b473255786b80f27dd33b46eb26a63fffc7
brunoerg:
code review ACK 18619b473255786b80f27dd33b46eb26a63fffc7
theStack:
Code-review ACK 18619b473255786b80f27dd33b46eb26a63fffc7
Tree-SHA512: a107deee3d2c00b980e3606be07d038ca524b98251442956d702a7996e2ac5e2901f656482018cacbac8ef6a628ac1fb03f677d1658aeaded4036d834a95d7e0
a4df12323c4e9230bca58562ba17ecee4233f8fe doc: add release notes (Sjors Provoost)
c75872ffdd98ce9f04fb8489515d1b63853f03b4 test: use DIFF_1_N_BITS in tool_signet_miner (tdb3)
4131f322ac0abe43639065362cd3c4ea36d2c5c3 test: check difficulty adjustment using alternate mainnet (Sjors Provoost)
c4f68c12e228818f655352d17d038dcc7ba1db3a Use OP_0 for BIP34 padding in signet and tests (Sjors Provoost)
cf0a62878be214cd4ec779aab214221b27b769b6 rpc: add next to getmininginfo (Sjors Provoost)
2d18a078a2d9eaa53b8b7acc7600141c69f0d742 rpc: add target and bits to getchainstates (Sjors Provoost)
f153f57acc9f9a6f84af161d5bed9aa9965abaa3 rpc: add target and bits to getblockchaininfo (Sjors Provoost)
baa504fdfaff4a9f61bc939035df5d5f2978cfd7 rpc: add target to getmininginfo result (Sjors Provoost)
2a7bfebd5e788e1d9e7e07a9f1b8e3625a0301cd Add target to getblock(header) in RPC and REST (Sjors Provoost)
341f93251677fee66c822f414b75499e8b3b31f6 rpc: add GetTarget helper (Sjors Provoost)
d20d96fa41ce706ccc480b4f3143438ce0720348 test: use REGTEST_N_BITS in feature_block (tdb3)
7ddbed4f9fc0c90bfed244a71194740a4a1fa1be rpc: add nBits to getmininginfo (Sjors Provoost)
ba7b9f3d7bf5a1ad395262b080e832f5c9958e4d build: move pow and chain to bitcoin_common (Sjors Provoost)
c4cc9e3e9df2733260942e0513dd8478d2a104da consensus: add DeriveTarget() to pow.h (Sjors Provoost)
Pull request description:
**tl;dr for consensus-code-only reviewers**: the first commit splits `CheckProofOfWorkImpl()` in order to create a `DeriveTarget()` helper. The rest of this PR does not touch consensus code.
There are three ways to represent the proof-of-work in a block:
1. nBits
2. Difficulty
3. Target
The latter notation is useful when you want to compare share work against either the pool target (to get paid) or the network difficulty (i.e. you found an actual block). E.g. for difficulty 1, which corresponds to an nBits value of `0x00ffff`:
```
share hash: f6b973257df982284715b0c7a20640dad709d22b0b1a58f2f88d35886ea5ac45
target: 7fffff0000000000000000000000000000000000000000000000000000000000
```
It's immediately clear that the share is invalid because the hash is above the target.
This type of logging is mostly done by the pool software. It's a nice extra convenience, but not very important. It impacts the following RPC calls:
1. `getmininginfo` displays the `target` for the tip block
2. `getblock` and `getblockheader` display the `target` for a specific block (ditto for their REST equivalents)
The `getdifficulty` method is a bit useless in its current state, because what miners really want to know is the difficulty for the _next_ block. So I added a boolean argument `next` to `getdifficulty`. (These values are typically the same, except for the first block in a retarget period. On testnet3 / testnet4 they change when no block is found after 20 minutes).
Similarly, I added a `next` object to `getmininginfo` which shows `bit`, `difficulty` and `target` for the next block.
In order to test the difficulty transition, an alternate mainnet chain with 2016 blocks was generated and used in `mining_mainnet.py`. The chain is deterministic except for its timestamp and nonce values, which are stored in `mainnet_alt.json`.
As described at the top, this PR introduces a helper method `DeriveTarget()` which is split out from `CheckProofOfWorkImpl`. The proposed `checkblock` RPC in #31564 needs this helper method internally to figure out the consensus target.
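A rough sketch of what such a helper boils down to (the name `TargetFromBits` here is illustrative, not the PR's exact code; it relies on the existing `arith_uint256::SetCompact`):
```cpp
#include <arith_uint256.h>
#include <cstdint>

// Expand the compact nBits encoding into the full 256-bit target; a valid header
// never encodes a negative or overflowing target (CheckProofOfWork rejects those).
arith_uint256 TargetFromBits(uint32_t nBits)
{
    bool negative{false};
    bool overflow{false};
    arith_uint256 target;
    target.SetCompact(nBits, &negative, &overflow);
    return target;
}
```
A hash then meets the target when `UintToArith256(hash) <= target`, which is exactly the share-vs-target comparison shown above.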
Finally, this PR moves `pow.cpp` and `chain.cpp` from `bitcoin_node` to `bitcoin_common`, in order to give `rpc/util.cpp` (which lives in `bitcoin_common`) access to `pow.h`.
ACKs for top commit:
ismaelsadeeq:
re-ACK a4df12323c4e9230bca58562ba17ecee4233f8fe
tdb3:
code review re ACK a4df12323c4e9230bca58562ba17ecee4233f8fe
ryanofsky:
Code review ACK a4df12323c4e9230bca58562ba17ecee4233f8fe. Only overall changes since last review were dropping new `gettarget` method and dropping changes to `getdifficulty`, but there were also various internal changes splitting and rearranging commits.
Tree-SHA512: edef5633590379c4be007ac96fd1deda8a5b9562ca6ff19fe377cb552b5166f3890d158554c249ab8345977a06da5df07866c9f42ac43ee83dfe3830c61cd169
Done in separate commit to simplify review.
Also renames benchmarks, since they're not strictly tests.
Co-authored-by: Hodlinator <172445034+hodlinator@users.noreply.github.com>
Stops creating a bdb database in the wallet migration benchmark.
Instead, the benchmark now creates the db in memory and re-uses
it for the migration process.
The check type function now needs to return a std::optional<R> for some type R,
and the check queue overall will return std::nullopt if all individual checks
return that, or one of the non-nullopt values if there is at least one.
For most tests, we use R=int, but for the actual validation code, we make it return
the ScriptError.
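A minimal single-threaded sketch of that contract (illustrative only, not the actual `CCheckQueue` code):
```cpp
#include <optional>
#include <vector>

// Run every check; return std::nullopt only if all checks returned std::nullopt,
// otherwise return one of the non-nullopt results (e.g. R = ScriptError in validation).
template <typename R, typename Check>
std::optional<R> RunAllChecks(std::vector<Check>& checks)
{
    std::optional<R> failure;
    for (auto& check : checks) {
        if (auto result = check(); result && !failure) failure = std::move(result);
    }
    return failure;
}
```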
11f3bc229ccd4b20191855fb1df882cfa6145264 refactor: Reserve vectors in fuzz tests (Lőrinc)
152fefe7a22b7da3cfe2815083634bece9c5654e refactor: Preallocate PrevectorFillVector(In)Direct without vector resize (Lőrinc)
a774c7a339c26b1409c9a9572d2b52810ee64062 refactor: Fix remaining clang-tidy performance-inefficient-vector errors (Lőrinc)
Pull request description:
PR inspired by https://github.com/bitcoin/bitcoin/pull/29608#issuecomment-2437847307 (and https://github.com/bitcoin/bitcoin/pull/29458, https://github.com/bitcoin/bitcoin/pull/29606, https://github.com/bitcoin/bitcoin/pull/29607, https://github.com/bitcoin/bitcoin/pull/30093).
The `clang-tidy` check can be run via:
```bash
cmake -B build -DCMAKE_C_COMPILER=clang -DCMAKE_CXX_COMPILER=clang++ -DCMAKE_EXPORT_COMPILE_COMMANDS=ON -DBUILD_BENCH=ON -DBUILD_FUZZ_BINARY=ON -DBUILD_FOR_FUZZING=ON && cmake --build build -j$(nproc)
run-clang-tidy -quiet -p build -j $(nproc) -checks='-*,performance-inefficient-vector-operation' | grep -v 'clang-tidy'
```
which revealed warnings in 3 tests and 1 production file (plus fuzz and bench code, found by hebasto).
Even though the tests aren't performance critical, the checks behind these warnings were already enabled via https://github.com/bitcoin/bitcoin/blob/master/src/.clang-tidy#L18 (see below), and getting rid of them was quite simple (a sketch of the typical fix follows the example output).
<details>
<summary>clang-tidy -list-checks</summary>
```bash
cd src && clang-tidy -list-checks | grep 'vector'
performance-inefficient-vector-operation
```
</details>
<details>
<summary>Output before the change</summary>
```
src/test/rpc_tests.cpp:434:9: error: 'emplace_back' is called inside a loop; consider pre-allocating the container capacity before the loop [performance-inefficient-vector-operation,-warnings-as-errors]
433 | for (int64_t i = 0; i < 100; i++) {
434 | feerates.emplace_back(1 ,1);
| ^
src/test/checkqueue_tests.cpp:366:13: error: 'emplace_back' is called inside a loop; consider pre-allocating the container capacity before the loop [performance-inefficient-vector-operation,-warnings-as-errors]
365 | for (size_t i = 0; i < 3; ++i) {
366 | tg.emplace_back(
| ^
src/test/cuckoocache_tests.cpp:231:9: error: 'emplace_back' is called inside a loop; consider pre-allocating the container capacity before the loop [performance-inefficient-vector-operation,-warnings-as-errors]
228 | for (uint32_t x = 0; x < 3; ++x)
229 | /** Each thread is emplaced with x copy-by-value
230 | */
231 | threads.emplace_back([&, x] {
| ^
src/rpc/output_script.cpp:127:17: error: 'push_back' is called inside a loop; consider pre-allocating the container capacity before the loop [performance-inefficient-vector-operation,-warnings-as-errors]
126 | for (unsigned int i = 0; i < keys.size(); ++i) {
127 | pubkeys.push_back(HexToPubKey(keys[i].get_str()));
| ^
```
And the fuzz and benchmarks, noticed by hebasto: https://github.com/bitcoin/bitcoin/pull/31305#issuecomment-2483124499
</details>
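The shape of the fixes is the same everywhere; a sketch mirroring the `rpc_tests.cpp` warning above (illustrative, not the literal diff):
```cpp
#include <policy/feerate.h>

#include <cstdint>
#include <vector>

std::vector<CFeeRate> feerates;
feerates.reserve(100);  // pre-allocate, so the loop no longer triggers performance-inefficient-vector-operation
for (int64_t i = 0; i < 100; ++i) {
    feerates.emplace_back(1, 1);
}
```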
ACKs for top commit:
maflcko:
review ACK 11f3bc229ccd4b20191855fb1df882cfa6145264 🎦
achow101:
ACK 11f3bc229ccd4b20191855fb1df882cfa6145264
theuni:
ACK 11f3bc229ccd4b20191855fb1df882cfa6145264
hebasto:
ACK 11f3bc229ccd4b20191855fb1df882cfa6145264, tested with clang 19.1.5 + clang-tidy.
Tree-SHA512: 41691c19f35c63b922a95407617a54f9bff1af3f95f99d15642064f321df038aeb1ae5f061f854ed913f69036807cc28fa6222b2ff4c24ef43b909027fa0f9b3
5736d1ddacc4019101e7a5170dd25efbc63b622a tracing: pass if replaced by tx/pkg to tracepoint (0xb10c)
a4ec07f1944999c2eead41d08d7dd4fc3aa71243 doc: add comments for CTxMemPool::ChangeSet (Suhas Daftuar)
83f814b1d1100baac9dca9c176f89b0ec2555dbc Remove m_all_conflicts from SubPackageState (Suhas Daftuar)
d3c8e7dfb63f7986a1f9654ea2393aabe3cd78da Ensure that we don't add duplicate transactions in rbf fuzz tests (Suhas Daftuar)
d7dc9fd2f7bc675256687b9c55fdbec9cc8ac781 Move CalculateChunksForRBF() to the mempool changeset (Suhas Daftuar)
284a1d33f1dcbc3b3404ea40a948ff6600239613 Move prioritisation into changeset (Suhas Daftuar)
446b08b599bc492bbec10ccc2292aee6f90c58e7 Don't distinguish between direct conflicts and all conflicts when doing cluster-size-2-rbf checks (Suhas Daftuar)
b53041021abc4f9ee7203341413e8676e2d5a7ca Duplicate transactions are not permitted within a changeset (Suhas Daftuar)
b447416fddcb8c8647391502cca3dbfd1552e02e Public mempool removal methods Assume() no changeset is outstanding (Suhas Daftuar)
2b30f4d36c86f775ac637b171d27d42a02309c5b Make RemoveStaged() private (Suhas Daftuar)
18829194ca68152ac0b38d34e94b9265ee74c410 Enforce that there is only one changeset at a time (Suhas Daftuar)
7fb62f7db60c7d793828ae45f87bc3f5c63cc989 Apply mempool changeset transactions directly into the mempool (Suhas Daftuar)
34b6c5833d11ea84fbd4b891e06408f6f4ca6fac Clean up FinalizeSubpackage to avoid workspace-specific information (Suhas Daftuar)
57983b8add72a04721d3f2050c063a3c4d8683ed Move LimitMempoolSize to take place outside FinalizeSubpackage (Suhas Daftuar)
01e145b9758f1df14a7ea18058ba9577bf88e459 Move changeset from workspace to subpackage (Suhas Daftuar)
802214c0832de00f24268183f7763fa984ba7903 Introduce mempool changesets (Suhas Daftuar)
87d92fa340195d9c87be3d023ca133b90b3b7d4e test: Add unit test coverage of package rbf + prioritisetransaction (Suhas Daftuar)
15d982f91e6b0f145c9dd4edf29827cfabb37a3f Add package hash to package-rbf log message (Suhas Daftuar)
Pull request description:
part of cluster mempool: #30289
It became clear while working on cluster mempool that it would be helpful for transaction validation if we could consider a full set of proposed changes to the mempool -- consisting of a set of transactions to add, and a set of transactions (ie conflicts) to simultaneously remove -- and perform calculations on what the mempool would look like if the proposed changes were to be applied. Two specific examples of where we'd like to do this:
- Determining if ancestor/descendant/TRUC limits would be violated (in the future, cluster limits) if either a single transaction or a package of transactions were to be accepted
- Determining if an RBF would make the mempool "better", however that idea is defined, both in the single transaction and package of transaction cases
In preparation for cluster mempool, I have pulled this reworking of the mempool interface out of #28676 so it can be reviewed on its own. I have not re-implemented ancestor/descendant limits to be run through the changeset, since with cluster mempool those limits will be going away, so this seems like wasted effort. However, I have rebased #28676 on top of this branch so reviewers can see what the new mempool interface could look like in the cluster mempool setting.
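To make the concept concrete, here is a deliberately tiny toy model of the staging pattern; the types and names are hypothetical, not the interface added by this PR:
```cpp
#include <cstddef>
#include <set>
#include <string>

// Toy mempool with a changeset: stage additions/removals, reason about the
// hypothetical result, then apply everything atomically (or discard the changeset).
struct ToyMempool {
    std::set<std::string> txs;

    struct ChangeSet {
        ToyMempool& pool;
        std::set<std::string> to_add;
        std::set<std::string> to_remove;  // assumed to all be present in pool.txs

        // Size of the mempool if this changeset were applied.
        size_t HypotheticalSize() const { return pool.txs.size() + to_add.size() - to_remove.size(); }

        void Apply()
        {
            for (const auto& tx : to_remove) pool.txs.erase(tx);
            for (const auto& tx : to_add) pool.txs.insert(tx);
        }
    };

    ChangeSet MakeChangeSet() { return ChangeSet{*this}; }
};
```
The real changeset carries full mempool entries and supports the RBF feerate/chunk calculations mentioned in the commit list, but the control flow is the same: validation builds up one changeset per (sub)package and only applies it after all policy checks pass.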
There are some minor behavior changes here, which I believe are inconsequential:
- In the package validation setting, transactions would be added to the mempool before the `ConsensusScriptChecks()` are run. In theory, `ConsensusScriptChecks()` should always pass if the `PolicyScriptChecks()` have passed and it's just a belt-and-suspenders for us, but if somehow they were to diverge then there could be some small behavior change from adding transactions and then removing them, versus never adding them at all.
- The error reporting on `CheckConflictTopology()` has slightly changed due to no longer distinguishing between direct conflicts and indirect conflicts. I believe this should be entirely inconsequential because there shouldn't be a logical difference between those two ideas from the perspective of this function, but I did have to update some error strings in some tests.
- Because, in a package setting, RBFs now happen as part of the entire package being accepted, the logging has changed slightly because we do not know which transaction specifically evicted a given removed transaction.
- Specifically, the "package hash" is now used to reference the set of transactions that are being accepted, rather than any single txid. The log message relating to package RBF that happen in the `TXPACKAGES` category has been updated as well to include the package hash, so that it's possible to see which specific set of transactions are being referenced by that package hash.
- Relatedly, the tracepoint logging in the package rbf case has been updated as well to reference the package hash, rather than a transaction hash.
ACKs for top commit:
naumenkogs:
ACK 5736d1ddac
instagibbs:
ACK 5736d1ddacc4019101e7a5170dd25efbc63b622a
ismaelsadeeq:
reACK 5736d1ddacc4019101e7a5170dd25efbc63b622a
glozow:
ACK 5736d1ddacc
Tree-SHA512: 21810872e082920d337c89ac406085aa71c5f8e5151ab07aedf41e6601f60a909b22fbf462ef3b735d5d5881e9b76142c53957158e674dd5dfe6f6aabbdf630b
42066f45ff5d48e78a317eda63c035809bd657c6 Refactor SipHash_32b benchmark to improve accuracy and avoid optimization issues (Lőrinc)
Pull request description:
This PR stems from the discussions in https://github.com/bitcoin/bitcoin/pull/30317#discussion_r1649187336
The previous benchmark for `SipHash` was slightly less accurate in representing real-world usage and allowed for potential compiler optimizations that could invalidate the benchmark.
This change aims to ensure the benchmark produces more realistic results.
By modifying the initial values and only incrementing the bytes of `val`, the benchmark should reflect a more typical usage pattern - and prevent the compiler from optimizing away the calculations.
-------
On my M1 processor the benchmark's speed changed significantly (but the CI seems to produce the same result as before):
> cmake -B build -DCMAKE_BUILD_TYPE=Release -DBUILD_BENCH=ON && cmake --build build -j10 &&
./build/src/bench/bench_bitcoin --filter=SipHash_32b --min-time=1000
Before:
| ns/op | op/s | err% | total | benchmark
|--------------------:|--------------------:|--------:|----------:|:----------
| 35.15 | 28,445,856.66 | 0.2% | 1.10 | `SipHash_32b`
After (note that only the benchmark changed):
| ns/op | op/s | err% | total | benchmark
|--------------------:|--------------------:|--------:|----------:|:----------
| 22.05 | 45,350,886.64 | 0.3% | 1.10 | `SipHash_32b`
ACKs for top commit:
maflcko:
review ACK 42066f45ff5d48e78a317eda63c035809bd657c6
achow101:
ACK 42066f45ff5d48e78a317eda63c035809bd657c6
hodlinator:
ACK 42066f45ff5d48e78a317eda63c035809bd657c6
Tree-SHA512: 6bbe9d725d4c3396642e55ce48c31baa5339e56838d6d5fb377fb1069daa9292375e7020ceff7da0d78befffc1e984f717b5232217fe911989613480adaa937e
fa66e0887ca1a1445d8b18ba1fadb12b2d911048 bench: add support for custom data directory (furszy)
ad9c2cceda9cd893c0f754e49f7fca6e417ee95f test, bench: specialize working directory name (furszy)
Pull request description:
Expands the benchmark framework with the existing `-testdatadir` arg,
making it possible to change the benchmark data directory.
This is useful for running benchmarks on different storage devices, and
not just under the OS `/tmp/` directory.
A good use case is #28574, where we are benchmarking the wallet
migration process on an HDD.
ACKs for top commit:
maflcko:
re-ACK fa66e0887ca1a1445d8b18ba1fadb12b2d911048
achow101:
ACK fa66e0887ca1a1445d8b18ba1fadb12b2d911048
tdb3:
re ACK fa66e0887ca1a1445d8b18ba1fadb12b2d911048
hodlinator:
re-ACK fa66e0887ca1a1445d8b18ba1fadb12b2d911048
pablomartin4btc:
re-ACK fa66e0887ca1a1445d8b18ba1fadb12b2d911048
Tree-SHA512: 4e87206c07e26fe193c07074ae9eb0cc9c70a58aeea8cf27d18fb5425d77e4b00dbe0e6d6a75c17b427744e9066458b9a84e5ef7b0420f02a4fccb9c5ef4dacc
Rather than individually calling addUnchecked for each transaction added in a
changeset (after removing all the to-be-removed transactions), instead we can
take advantage of boost::multi_index's splicing features to extract and insert
entries directly from the staging multi_index into mapTx.
This has the immediate advantage of saving allocation overhead for mempool
entries which have already been allocated once. This also means that the memory
locations of mempool entries will not change when transactions go from staging
to the main mempool.
Additionally, eliminate addUnchecked and require all new transactions to enter
the mempool via a CTxMemPoolChangeSet.
Expands the benchmark framework with the existing '-testdatadir' arg,
making it possible to change the benchmark data directory.
This is useful for running benchmarks on different storage devices, and
not just under the OS /tmp/ directory.
Since G_TEST_GET_FULL_NAME is not initialized in the benchmark framework,
benchmarks using the unit test setup run in the same directory without
any clear distinction between them.
This poses an extra complication for locating any specific benchmark
directory during a failure.
In master, unit tests and benchmarks run in the following path:
/<OS_tmp_dir>/test_common bitcoin/<random_uint256>/
After this commit, unit tests and benchmarks are contained within its
own directory:
/<OS_tmp_dir>/test_common bitcoin/<test_name>/<time_in_nanoseconds>/
This makes it easier to find any benchmark run when a failure occurs.
- Modify `SipHash_32b` benchmark to use `FastRandomContext` for generating initial values.
- Cycle through and modify each byte of the `uint256` value to ensure no part of it can be optimized away.
The lack of "recursion" (where the method call overwrites the used inputs partially) and the systematic modification of each input byte make the benchmark more reliable and thorough.
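A hedged sketch of the resulting benchmark shape (assumed from the description above, not the exact code):
```cpp
#include <bench/bench.h>
#include <crypto/siphash.h>
#include <random.h>
#include <uint256.h>

static void SipHash_32b(benchmark::Bench& bench)
{
    FastRandomContext rng{/*fDeterministic=*/true};
    const uint64_t k0{rng.rand64()}, k1{rng.rand64()};
    uint256 val{rng.rand256()};
    uint32_t i{0};
    bench.run([&] {
        ankerl::nanobench::doNotOptimizeAway(SipHashUint256(k0, k1, val));
        // Mutate one input byte per iteration so the hash cannot be precomputed
        // or hoisted out of the measured loop.
        val.begin()[++i % val.size()] ^= static_cast<uint8_t>(i);
    });
}
```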
0b3ec8c59b2b6d598531cd4e688eb276e597c825 clusterlin: remove Cluster type (Pieter Wuille)
1c24c625105cd62518d2eee7e95c3b1c74ea9923 clusterlin: merge two DepGraph fuzz tests into simulation test (Pieter Wuille)
0606e66fdbb914f984433d8950f0c32b5fb871f3 clusterlin: add DepGraph::RemoveTransactions and support for holes in DepGraph (Pieter Wuille)
75b5d42419ed1286fc9557bc97ec5bac3f9be837 clusterlin: make DepGraph::AddDependency support multiple dependencies at once (Pieter Wuille)
abf50649d13018bf40c5803730a03053737efeee clusterlin: simplify DepGraphFormatter::Ser (Pieter Wuille)
eaab55ffc8102140086297877747b7fa07b419eb clusterlin: rework DepGraphFormatter::Unser (Pieter Wuille)
5901cf7100a75c8131223e23b6c90e0e93611eae clusterlin: abstract out DepGraph::GetReduced{Parents,Children} (Pieter Wuille)
Pull request description:
Part of cluster mempool: #30289
This adds:
* `DepGraph::AddDependencies` to add 0 or more dependencies to a single transaction at once (identical to calling `DepGraph::AddDependency` once for each, but more efficient).
* `DepGraph::RemoveTransactions` to remove 0 or more transactions from a depgraph.
* `DepGraph::GetReducedParents` (and `DepGraph::GetReducedChildren`) to get the (reduced) direct parents and children of a transaction in a depgraph.
After which, the `Cluster` type is removed.
This is the result of fleshing out the design for the "intermediate layer" ("TxGraph", no PR yet) between the cluster linearization layer and the mempool layer. My earlier thinking was that TxGraph would store `Cluster` objects (vectors of pairs of `FeeFrac`s and sets of parents), and convert them to `DepGraph` on the fly whenever needed. However, after more consideration, it seems better to have TxGraph store `DepGraph` objects, and manipulate them directly without constantly re-creating them. This requires `DepGraph` to have some additional functionality.
The bulk of the complexity here is the addition of `DepGraph::RemoveTransactions`, which leaves the remaining transactions' positions within the `DepGraph` untouched (we want existing identifiers to remain valid). This implies that graphs can now have "holes" (positions that are unused, but followed by positions that are used). To enable that, an extension of the fuzz/test serialization format `DepGraphFormatter` is included to deal with such holes.
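As a toy illustration of why the batched form matters (hypothetical types; the real `DepGraph` also tracks descendant sets and supports holes): each update ORs bitsets, so one `AddDependencies`-style call covers all of a transaction's parents in a single O(N) pass, keeping full-graph construction at O(N^2) rather than O(N^3).
```cpp
#include <bitset>
#include <cstddef>
#include <vector>

constexpr size_t MAX_TX{64};
using TxSet = std::bitset<MAX_TX>;

struct ToyDepGraph {
    std::vector<TxSet> ancestors;  // ancestors[i] always contains i itself

    explicit ToyDepGraph(size_t n) : ancestors(n)
    {
        for (size_t i = 0; i < n; ++i) ancestors[i].set(i);
    }

    // Register every transaction in `parents` as a parent of `child` in one pass.
    void AddDependencies(const TxSet& parents, size_t child)
    {
        TxSet gained;
        for (size_t p = 0; p < ancestors.size(); ++p) {
            if (parents.test(p)) gained |= ancestors[p];
        }
        // child and all of its existing descendants inherit the new ancestors.
        for (size_t tx = 0; tx < ancestors.size(); ++tx) {
            if (ancestors[tx].test(child)) ancestors[tx] |= gained;
        }
    }
};
```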
ACKs for top commit:
sdaftuar:
reACK 0b3ec8c59b2b6d598531cd4e688eb276e597c825
instagibbs:
reACK 0b3ec8c59b2b6d598531cd4e688eb276e597c825
ismaelsadeeq:
reACK 0b3ec8c59b2b6d598531cd4e688eb276e597c825
glozow:
ACK 0b3ec8c59b2b6d598531cd4e688eb276e597c825, reviewed range-diff from aab53ddcd8fcbc3c0be0da9383f8e06abe5badda and `clusterlin_depgraph_sim`
Tree-SHA512: a804b7f26d544c5cb0847322e235c810525cb0607737be6116c3156d582da3ba3352af8ea48e74eed5268f9c3eca63b30181d01b23a6dd0be1b99191f81cceb0
This changes DepGraph::AddDependency into DepGraph::AddDependencies, which takes
in a single child, but a set of parent transactions, making them all dependencies
at once.
This is important for performance. N transactions can have O(N^2) parents combined,
so constructing a full DepGraph using just AddDependency (which is O(N) on its own)
could take O(N^3) time, while doing the same with AddDependencies (also O(N) on its
own) only takes O(N^2).
Notably, this matters for DepGraphFormatter::Unser, which goes from O(N^3) to O(N^2).
Co-Authored-By: Greg Sanders <gsanders87@gmail.com>
a240e150e837b5a95ed19765a2e8b7c5b6013f35 streams: remove AutoFile::Get() entirely (Pieter Wuille)
e624a9bef16b6335fd119c10698352b59bf2930a streams: cache file position within AutoFile (Pieter Wuille)
Pull request description:
Fixes #30833.
Instead of relying on frequent `ftell` calls (which appear to cause a significant slowdown on some systems) in XOR-enabled `AutoFile`s, cache the file position within `AutoFile` itself.
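A hedged sketch of the caching idea (illustrative, not the actual `AutoFile` code): keep the logical position alongside every read/write/seek so the XOR obfuscation offset can be computed without an `ftell` per operation, and fall back to "unknown" if a seek fails.
```cpp
#include <cstdint>
#include <cstdio>
#include <optional>

class PositionCachingFile
{
    std::FILE* m_file;
    std::optional<int64_t> m_position{0};  // nullopt when the position is no longer known

public:
    explicit PositionCachingFile(std::FILE* file) : m_file{file} {}

    size_t Read(void* buf, size_t len)
    {
        const size_t ret{std::fread(buf, 1, len, m_file)};
        if (m_position) *m_position += static_cast<int64_t>(ret);  // advance the cached position
        return ret;
    }

    bool Seek(int64_t pos)
    {
        // A real implementation would use fseeko for large files; plain fseek keeps the sketch short.
        if (std::fseek(m_file, static_cast<long>(pos), SEEK_SET) != 0) {
            m_position.reset();  // cache is no longer trustworthy
            return false;
        }
        m_position = pos;
        return true;
    }

    std::optional<int64_t> Tell() const { return m_position; }  // no ftell() syscall needed
};
```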
ACKs for top commit:
achow101:
ACK a240e150e837b5a95ed19765a2e8b7c5b6013f35
davidgumberg:
untested reACK a240e150e8
theStack:
Code-review ACK a240e150e837b5a95ed19765a2e8b7c5b6013f35
Tree-SHA512: fd3681edc018afaf955dc7a41a0c953ca80d46c1129e3c5b306c87c95aae93b2fe7b900794eb8b6f10491f9211645e7939918a28838295e6873eb226fca7006f
e4e3b44e9cc7227b3ad765397c884999f57bac2e net: call `Select` with reachable networks in `ThreadOpenConnections` (brunoerg)
829becd990b504a2e8a57fa8a6ff6ac6ae8ff900 addrman: change `Select` to support multiple networks (brunoerg)
f698636ec86c004ab331994559c163b7319e6423 net: add `All()` in `ReachableNets` (brunoerg)
Pull request description:
This PR changes addrman's `Select` to support multiple networks and changes `ThreadOpenConnections` to call it with reachable networks. It can avoid unnecessary `Select` calls and avoid exceeding the max number of tries (100), especially when turning a clearnet + Tor/I2P/CJDNS node to Tor/I2P/CJDNS. Compared to #29330, this approach is "less aggressive". It does not add a new init flag and does not impact address relay.
I did an experiment of calling `Select` without passing a network until it finds an address from a network that composes 20% ~ 25% of the addrman (limited to 100 tries).
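A toy model of the intent (hypothetical types, and a plain filter rather than addrman's bucket-based sampling): only addresses on reachable networks are ever candidates, so no retry budget is spent on unusable ones.
```cpp
#include <optional>
#include <random>
#include <unordered_set>
#include <vector>

enum class Net { IPV4, IPV6, TOR, I2P, CJDNS };

struct ToyAddr { Net net; /* endpoint details omitted */ };

std::optional<ToyAddr> SelectReachable(const std::vector<ToyAddr>& table,
                                       const std::unordered_set<Net>& reachable,
                                       std::mt19937& rng)
{
    std::vector<const ToyAddr*> candidates;
    for (const auto& addr : table) {
        if (reachable.count(addr.net) > 0) candidates.push_back(&addr);  // skip unreachable networks
    }
    if (candidates.empty()) return std::nullopt;
    std::uniform_int_distribution<size_t> pick{0, candidates.size() - 1};
    return *candidates[pick(rng)];
}
```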

ACKs for top commit:
achow101:
ACK e4e3b44e9cc7227b3ad765397c884999f57bac2e
vasild:
ACK e4e3b44e9cc7227b3ad765397c884999f57bac2e
naumenkogs:
ACK e4e3b44e9cc7227b3ad765397c884999f57bac2e
Tree-SHA512: e8466b72b85bbc2ad8bfb14471eb27d2c50d4e84218f5ede2c15a6fa3653af61b488cde492dbd398f7502bd847e95bfee1abb7e01092daba2236d3ce3d6d2268
Empirically, this approach seems to be more efficient in common real-life
clusters, and does not change the worst case.
Co-Authored-By: Suhas Daftuar <sdaftuar@gmail.com>
Automatically add topologically-valid subsets of the potential set pot
to inc. It can be proven that these must be part of the best reachable
topologically-valid set from that work item.
This is a crucial optimization that (apparently) reduces the maximum
number of iterations from ~2^(N-1) to ~sqrt(2^N).
Co-Authored-By: Suhas Daftuar <sdaftuar@gmail.com>
fadbcd51fc77a3f4e877851463f3c7425fb751d2 bench: Remove redundant logging benchmarks (MarcoFalke)
fa8dd952e279a87f6027ddd2e2119bf2ae2f9943 bench: Use LogInfo instead of the deprecated alias LogPrintf (MarcoFalke)
Pull request description:
`LogPrint*ThreadNames` is redundant with `LogWith(out)ThreadNames`,
because they all measure toggling the thread names (and check that it
has no effect on performance).
Fix it by removing the redundant ones. This also allows dropping a deprecated logging alias.
ACKs for top commit:
stickies-v:
ACK fadbcd51fc77a3f4e877851463f3c7425fb751d2
Tree-SHA512: 4fe137f374aa4ee1aa0e1da4a1f9839c0e52c23dbb93198ecafee98de39d311cc47304bba4191f3807aa00c51b1eae543e3f270f03d341c84910e5e341a1d475
This change allows dropping brittle sizeof calls in favor of the
std::span::size method.
Other improvements include:
* Use of a namespace to mark test and bench data
* Use of the modern std::byte
* Drop of a no longer used std::vector copy and the bench/data module
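A small illustration (hypothetical names, not the actual bench data) of the sizeof-to-std::span::size switch described above: the span carries its own length, and std::byte marks the data as raw bytes rather than arithmetic values.
```cpp
#include <cstddef>
#include <span>

namespace benchdata {
inline constexpr std::byte block_data[]{std::byte{0x01}, std::byte{0x02}, std::byte{0x03}};
inline constexpr std::span<const std::byte> block{block_data};
} // namespace benchdata

static_assert(benchdata::block.size() == 3);  // no brittle sizeof(block_data) expression needed
```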
LogPrint*ThreadNames is redundant with LogWith(out)ThreadNames, because
they all measure toggling the thread names (and check that it has no
effect on performance).
This also allows removing unused and deprecated macros.
fa09cb41f58d0483ffe134eb274b9048c5260faa refactor: Remove unused LogPrint (MarcoFalke)
333341589010b1d9b21b68ae6649992fd2653756 scripted-diff: LogPrint -> LogDebug (MarcoFalke)
Pull request description:
`LogPrint` has many issues:
* It seems to indicate that something is being "printed", however config options such as `-printtoconsole` actually control what and where something is logged.
* It does not mention the log severity (debug).
* It is a deprecated alias for `LogDebug`, according to the dev notes.
* It wastes review cycles, because reviewers sometimes point out that it is deprecated.
* It makes the code inconsistent when both are used, possibly even in lines right next to each other (like in `InitHTTPServer`).
Fix all issues by removing the deprecated alias.
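For illustration, the scripted diff turns a call like the first line below into the second (the category and message here are made up, not taken from the diff):
```cpp
LogPrint(BCLog::NET, "received: %s peer=%d\n", msg_type, peer_id);  // old: deprecated alias
LogDebug(BCLog::NET, "received: %s peer=%d\n", msg_type, peer_id);  // new: severity is explicit
```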
I checked all conflicting pull requests and at the time of writing there are no conflicts, except in pull requests that are marked as draft, are yet unreviewed, or are blocked on feedback for other reasons. So I think it is fine to do now.
ACKs for top commit:
stickies-v:
ACK fa09cb41f58d0483ffe134eb274b9048c5260faa
danielabrozzoni:
utACK fa09cb41f58d0483ffe134eb274b9048c5260faa
TheCharlatan:
ACK fa09cb41f58d0483ffe134eb274b9048c5260faa
Tree-SHA512: 14270f4cfa3906025a0b994cbb5b2e3c8c2427c0beb19c717a505a2ccbfb1fd1ecf2fd03f6c52d22cde69a8d057e50d2207119fab2c2bc8228db3f10d4288d0f
faa382ae7642da0e436ea2c7f7eac67386280a7e ci, doc: Drop reference to `src/.bear-tidy-config` (Hennadii Stepanov)
d71ac768424333b65a6d88c9752cc9c7fdb276f3 build: Remove Autotools-based build system (Hennadii Stepanov)
e268b48419b802857c329a7ae27d3dbe4c1a9a4b doc: Adjust `doc/design/libraries.md` (Hennadii Stepanov)
d209e4f1566f9240f105bb93ed61bda9b4bb272b doc: Drop mentions of `share/genbuild.sh` (Hennadii Stepanov)
Pull request description:
This PR deletes the Autotools-based build system.
The MSVC build system is deleted in https://github.com/bitcoin/bitcoin/pull/30731.
ACKs for top commit:
maflcko:
re-ACK faa382ae7642da0e436ea2c7f7eac67386280a7e 🍦
TheCharlatan:
ACK faa382ae7642da0e436ea2c7f7eac67386280a7e
fanquake:
ACK faa382ae7642da0e436ea2c7f7eac67386280a7e
Tree-SHA512: 53df977b5b199a1c38f7f61a042a62b24831c559ba65a461b4ac1c96a1a56e2dfd676df79f1358fd1cc1749ff27e7b548086157f337d4f596c1054cb3d2d5739
Ideally all call sites should accept std::byte instead of uint8_t but those transformations are left to future PRs.
-BEGIN VERIFY SCRIPT-
sed -i --regexp-extended 's/\bParseHex\(("[^"]*")\)/\1_hex_u8/g' $(git grep -l ParseHex -- :src ':(exclude)src/test/util_tests.cpp')
sed -i --regexp-extended 's/\bParseHex<std::byte>\(("[^"]*")\)/\1_hex/g' $(git grep -l ParseHex -- :src ':(exclude)src/test/util_tests.cpp')
sed -i --regexp-extended 's/\bScriptFromHex\(("[^"]*")\)/ToScript(\1_hex)/g' src/test/script_tests.cpp
-END VERIFY SCRIPT-
Co-Authored-By: MarcoFalke <*~=`'#}+{/-|&$^_@721217.xyz>
Co-Authored-By: Ryan Ofsky <ryan@ofsky.org>
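As an illustration of the conversion the script performs (the hex string is made up; the literal operators are the codebase's hex-literal helpers, assumed to yield compile-time byte arrays):
```cpp
// before: const std::vector<unsigned char> data{ParseHex("0102ff")};
const auto data_u8{"0102ff"_hex_u8};   // bytes as uint8_t
const auto data_bytes{"0102ff"_hex};   // bytes as std::byte
```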