mirror of
https://github.com/bitcoin/bitcoin.git
synced 2026-03-14 23:43:58 +00:00
Merge bitcoin/bitcoin#31829: p2p: improve TxOrphanage denial of service bounds
50024620b909fc30b68a3715680e963f048482a5 [bench] worst case LimitOrphans and EraseForBlock (glozow)
45c7a4b56d28c75bb9c48f0a9e7f3a73a7899328 [functional test] orphan resolution works in the presence of DoSy peers (glozow)
835f5c77cdee36eb72088ea39e4d0435a0d11819 [prep/test] restart instead of bumpmocktime between p2p_orphan_handling subtests (glozow)
b113877545a1c83b470a380402b4409aa02c8282 [fuzz] Add simulation fuzz test for TxOrphanage (Pieter Wuille)
03aaaedc6daf304c708aad93b64d78412a348580 [prep] Return the made-reconsiderable announcements in AddChildrenToWorkSet (Pieter Wuille)
ea29c4371e86a418f357c19c50e562e8a67cb5fd [p2p] bump DEFAULT_MAX_ORPHANAGE_LATENCY_SCORE to 3,000 (glozow)
24afee8d8f94e5f5a03c4f497dc6a2e4e3d82605 [fuzz] TxOrphanage protects peers that don't go over limit (glozow)
a2878cfb4ae260ca8bb87072e6948ca422f9b71d [unit test] strengthen GetChildrenFromSamePeer tests: results are in recency order (glozow)
7ce3b7ee579c6d1b43b7fa1dacc5bc1c8e1ab1b3 [unit test] basic TxOrphanage eviction and protection (glozow)
4d23d1d7e7fac0e622d7e88be9fe40210bb2f68c [cleanup] remove unused rng param from LimitOrphans (glozow)
067365d2a8a421a074bb54394118beccb3f775c2 [p2p] overhaul TxOrphanage with smarter limits (glozow)
1a41e7962db364b7abf1eb37901c3455ddc3e2bb [refactor] create aliases for TxOrphanage Count and Usage (glozow)
b50bd72c42bc664478c325a7e606cb36826973b1 [prep] change return type of EraseTx to bool (glozow)
3da6d7f8f6fc7599c769d7521610272f8e373d2c [prep/refactor] make TxOrphanage a virtual class implemented by TxOrphanageImpl (glozow)
77ebe8f2801215162fe7c00f2dfd35366c4a91f7 [prep/test] have TxOrphanage remember its own limits in LimitOrphans (glozow)
d0af4239b7f04278123a2ca192e05f29f739b28f [prep/refactor] move DEFAULT_MAX_ORPHAN_TRANSACTIONS to txorphanage.h (glozow)
51365225b898d2f5cefa2fec28e712baf7a70e05 [prep/config] remove -maxorphantx (glozow)
8dd24c29aec819d9247f57439fd6bbaa092e8e54 [prep/test] modify test to not access TxOrphanage internals (glozow)
44f532782445d467e0dc42b15fd8aceed1230d9c [fuzz] add SeedRandomStateForTest(SeedRand::ZEROS) to txorphan (glozow)
15a4ec906976e0728224cc37cf78b997c88550d5 [prep/rpc] remove entry and expiry time from getorphantxs (glozow)
08e58fa91198afda6f894c20026b64f239938e03 [prep/refactor] move txorphanage to node namespace and directory (glozow)
bb91d23fa95678d03c711be84894efc7656e847c [txorphanage] change type of usage to int64_t (glozow)

Pull request description:

This PR is part of the orphan resolution project, see #27463. This design came from collaboration with sipa - thanks.

We want to limit the CPU work and memory used by `TxOrphanage` to avoid denial of service attacks. On master, this is achieved by limiting the number of transactions in this data structure to 100, and the weight of each transaction to 400KWu (the largest standard tx) [0]. We always allow new orphans, but if the addition causes us to exceed 100, we evict one randomly. This is dead simple, but has problems:

- It makes the orphanage trivially churnable: any one peer can render it useless by spamming us with lots of orphans. It's possible this is happening: "Looking at data from node alice on 2024-09-14 shows that we’re sometimes removing more than 100k orphans per minute. This feels like someone flooding us with orphans." [1]
- Effectively, opportunistic 1p1c is useless in the presence of adversaries: it is *opportunistic* and pairs a low feerate tx with a child that happens to be in the orphanage. So if nothing is able to stay in orphanages, we can't expect 1p1cs to propagate.
- This number is also often insufficient for the volume of orphans we handle: historical data show that overflows are pretty common, and there are times where "it seems like [the node] forgot about the orphans and re-requested them multiple times."
[1]

Just jacking up the `-maxorphantx` number is not a good enough solution, because it doesn't solve the churnability problem, and the effective resource bounds scale poorly. This PR introduces numbers for {global, per-peer} {memory usage, announcements + number of inputs}, representing resource limits:

- The (constant) **global latency score limit** is the number of unique (wtxid, peer) pairs in the orphanage plus the number of inputs spent by those (deduplicated) transactions floor-divided by 10 [2]. This represents a cap on CPU or latency for any given operation, and does not change with the number of peers we have. Evictions must happen whenever this limit is reached. The primary goal of this limit is to ensure we do not spend more than a few ms on any call to `LimitOrphans` or `EraseForBlock`.
- The (variable) **per-peer latency score limit** is the global latency score limit divided by the number of peers. Peers are allowed to exceed this limit provided the global announcement limit has not been reached. The per-peer announcement limit decreases with more peers.
- The (constant) **per-peer memory usage reservation** is the amount of orphan weight [3] reserved per peer [4]. Reservation means that peers are effectively guaranteed this amount of space. Peers are allowed to exceed this limit provided the global usage limit is not reached. The primary goal of this limit is to ensure we don't OOM.
- The (variable) **global memory usage limit** is the number of peers multiplied by the per-peer reservation [5]. As such, the global memory usage limit scales up with the number of peers we have. Evictions must happen whenever this limit is reached.
- We introduce a "Peer DoS Score", which is the maximum of its "CPU Score" and "Memory Score". The CPU score is the ratio between the number of orphans announced by this peer and the per-peer announcement limit. The memory score is the ratio between the total usage of all orphans announced by this peer and the per-peer usage reservation.
Eviction changes in a few ways:

- It is triggered if either limit is exceeded.
- On each iteration of the loop, instead of selecting a random orphan, we select a peer and delete 1 of its announcements. Specifically, we select the peer with the highest DoS score, which is the maximum between its CPU DoS score (based on announcements) and its Memory DoS score (based on tx weight). After the peer has been selected, we evict its oldest announcement (non-reconsiderable sorted before reconsiderable).
- Instead of evicting orphans, we evict announcements. An orphan stays in the orphanage as long as it has at least 1 peer announcer. Of course, over the course of several iterations of the loop, we may erase all announcers, thus erasing the orphan itself. The purpose of this change is to prevent a peer from being able to trigger eviction of another peer's orphans.

This PR also:

- Reimplements `TxOrphanage` as a single multi-index container.
- Effectively bounds the number of transactions that can be in a peer's work set by ensuring it is a subset of the peer's announcements.
- Removes the `-maxorphantx` config option, as the orphanage no longer limits by unique orphans.

This means we can receive 1p1c packages in the presence of spammy peers. It also makes the orphanage more useful and increases our download capacity without drastically increasing orphanage resource usage.

[0]: This means the effective memory limit in orphan weight is 100 * 400KWu = 40MWu
[1]: https://delvingbitcoin.org/t/stats-on-orphanage-overflows/1421
[2]: Limit is 3000, which is equivalent to one max size ancestor package (24 transactions can be missing inputs) for each peer (default max connections is 125).
[3]: Orphan weight is used in place of actual memory usage because something like "one maximally sized standard tx" is easier to reason about than "considering the bytes allocated for vin and vout vectors, it needs to be within N bytes..." etc.
We can also consider a different formula to encapsulate more of the memory overhead but still have an interface that is easy to reason about.

[4]: The limit is 404KWu, which is the maximum size of an ancestor package.
[5]: With 125 peers, this is 50.5MWu, which is a small increase from the existing limit of 40MWu. While the actual memory usage limit is higher (this number does not include the other memory used by `TxOrphanage` to store the outpoints map, etc.), this is within the same ballpark as the old limit.

ACKs for top commit:
  marcofleon: ReACK 50024620b909fc30b68a3715680e963f048482a5
  achow101: light ACK 50024620b909fc30b68a3715680e963f048482a5
  instagibbs: ACK 50024620b909fc30b68a3715680e963f048482a5
  theStack: Code-review ACK 50024620b909fc30b68a3715680e963f048482a5

Tree-SHA512: 270c11a2d116a1bf222358a1b4e25ffd1f01e24da958284fa8c4678bee5547f9e0554e87da7b7d5d5d172ca11da147f54a69b3436cc8f382debb6a45a90647fd
This commit is contained in: commit 80067ac111
@@ -277,6 +277,7 @@ add_library(bitcoin_node STATIC EXCLUDE_FROM_ALL
   node/timeoffsets.cpp
   node/transaction.cpp
   node/txdownloadman_impl.cpp
+  node/txorphanage.cpp
   node/txreconciliation.cpp
   node/utxo_snapshot.cpp
   node/warnings.cpp
@@ -308,7 +309,6 @@ add_library(bitcoin_node STATIC EXCLUDE_FROM_ALL
   txdb.cpp
   txgraph.cpp
   txmempool.cpp
-  txorphanage.cpp
   txrequest.cpp
   validation.cpp
   validationinterface.cpp
@@ -49,6 +49,7 @@ add_executable(bench_bitcoin
   streams_findbyte.cpp
   strencodings.cpp
   txgraph.cpp
+  txorphanage.cpp
   util_time.cpp
   verify_script.cpp
   xor.cpp
src/bench/txorphanage.cpp (new file, 270 lines)
@@ -0,0 +1,270 @@
// Copyright (c) The Bitcoin Core developers
// Distributed under the MIT software license, see the accompanying
// file COPYING or http://www.opensource.org/licenses/mit-license.php.

#include <bench/bench.h>
#include <consensus/amount.h>
#include <net.h>
#include <policy/policy.h>
#include <primitives/transaction.h>
#include <pubkey.h>
#include <script/sign.h>
#include <test/util/setup_common.h>
#include <node/txorphanage.h>
#include <util/check.h>
#include <test/util/transaction_utils.h>

#include <cstdint>
#include <memory>
#include <numeric>

static constexpr node::TxOrphanage::Usage TINY_TX_WEIGHT{240};
static constexpr int64_t APPROX_WEIGHT_PER_INPUT{200};

// Creates a transaction with num_inputs inputs and 1 output, padded to target_weight.
// Use this function to maximize m_outpoint_to_orphan_it operations.
// If num_inputs is 0, we maximize the number of inputs.
static CTransactionRef MakeTransactionBulkedTo(unsigned int num_inputs, int64_t target_weight, FastRandomContext& det_rand)
{
    CMutableTransaction tx;
    assert(target_weight >= 40 + APPROX_WEIGHT_PER_INPUT);
    if (!num_inputs) num_inputs = (target_weight - 40) / APPROX_WEIGHT_PER_INPUT;
    for (unsigned int i = 0; i < num_inputs; ++i) {
        tx.vin.emplace_back(Txid::FromUint256(det_rand.rand256()), 0);
    }
    assert(GetTransactionWeight(*MakeTransactionRef(tx)) <= target_weight);

    tx.vout.resize(1);

    // If necessary, pad the transaction to the target weight.
    if (GetTransactionWeight(*MakeTransactionRef(tx)) < target_weight - 4) {
        BulkTransaction(tx, target_weight);
    }
    return MakeTransactionRef(tx);
}

// Constructs a transaction using a subset of inputs[start_input : start_input + num_inputs], up to the weight_limit.
static CTransactionRef MakeTransactionSpendingUpTo(const std::vector<CTxIn>& inputs, unsigned int start_input, unsigned int num_inputs, int64_t weight_limit)
{
    CMutableTransaction tx;
    for (unsigned int i{start_input}; i < start_input + num_inputs; ++i) {
        if (GetTransactionWeight(*MakeTransactionRef(tx)) + APPROX_WEIGHT_PER_INPUT >= weight_limit) break;
        tx.vin.emplace_back(inputs.at(i % inputs.size()));
    }
    assert(tx.vin.size() > 0);
    return MakeTransactionRef(tx);
}

static void OrphanageSinglePeerEviction(benchmark::Bench& bench)
{
    FastRandomContext det_rand{true};

    // Fill up announcement slots with tiny txns, followed by a single large one
    unsigned int NUM_TINY_TRANSACTIONS((node::DEFAULT_MAX_ORPHANAGE_LATENCY_SCORE));

    // Construct transactions to submit to orphanage: 1-in-1-out tiny transactions
    std::vector<CTransactionRef> tiny_txs;
    tiny_txs.reserve(NUM_TINY_TRANSACTIONS);
    for (unsigned int i{0}; i < NUM_TINY_TRANSACTIONS; ++i) {
        tiny_txs.emplace_back(MakeTransactionBulkedTo(1, TINY_TX_WEIGHT, det_rand));
    }
    auto large_tx = MakeTransactionBulkedTo(1, MAX_STANDARD_TX_WEIGHT, det_rand);
    assert(GetTransactionWeight(*large_tx) <= MAX_STANDARD_TX_WEIGHT);

    const auto orphanage{node::MakeTxOrphanage(/*max_global_ann=*/node::DEFAULT_MAX_ORPHANAGE_LATENCY_SCORE, /*reserved_peer_usage=*/node::DEFAULT_RESERVED_ORPHAN_WEIGHT_PER_PEER)};

    // Populate the orphanage. To maximize the number of evictions, first fill up with tiny transactions, then add a huge one.
    NodeId peer{0};
    // Add tiny transactions until we are just about to hit the memory limit, up to the max number of announcements.
    // We use the same tiny transactions for all peers to minimize their contribution to the usage limit.
    int64_t total_weight_to_add{0};
    for (unsigned int txindex{0}; txindex < NUM_TINY_TRANSACTIONS; ++txindex) {
        const auto& tx{tiny_txs.at(txindex)};

        total_weight_to_add += GetTransactionWeight(*tx);
        if (total_weight_to_add > orphanage->MaxGlobalUsage()) break;

        assert(orphanage->AddTx(tx, peer));

        // Sanity check: we should always be exiting at the point of hitting the weight limit.
        assert(txindex < NUM_TINY_TRANSACTIONS - 1);
    }

    // In the real world, we always trim after each new tx.
    // If we need to trim already, that means the benchmark is not representative of what LimitOrphans may do in a single call.
    assert(orphanage->TotalOrphanUsage() <= orphanage->MaxGlobalUsage());
    assert(orphanage->TotalLatencyScore() <= orphanage->MaxGlobalLatencyScore());
    assert(orphanage->TotalOrphanUsage() + TINY_TX_WEIGHT > orphanage->MaxGlobalUsage());

    bench.epochs(1).epochIterations(1).run([&]() NO_THREAD_SAFETY_ANALYSIS {
        // Lastly, add the large transaction.
        const auto num_announcements_before_trim{orphanage->CountAnnouncements()};
        assert(orphanage->AddTx(large_tx, peer));
        orphanage->LimitOrphans();

        // If there are multiple peers, note that they all have the same DoS score. We will evict only 1 item at a time for each new DoSiest peer.
        const auto num_announcements_after_trim{orphanage->CountAnnouncements()};
        const auto num_evicted{num_announcements_before_trim - num_announcements_after_trim};

        // The number of evictions is the same regardless of the number of peers. In both cases, we can exceed the
        // usage limit using 1 maximally-sized transaction.
        assert(num_evicted == MAX_STANDARD_TX_WEIGHT / TINY_TX_WEIGHT);
    });
}

static void OrphanageMultiPeerEviction(benchmark::Bench& bench)
{
    // Best number is just below sqrt(DEFAULT_MAX_ORPHANAGE_LATENCY_SCORE)
    static constexpr unsigned int NUM_PEERS{39};
    // All peers will have the same transactions. We want to be just under the weight limit, so divide the max usage limit by the number of unique transactions.
    static constexpr node::TxOrphanage::Count NUM_UNIQUE_TXNS{node::DEFAULT_MAX_ORPHANAGE_LATENCY_SCORE / NUM_PEERS};
    static constexpr node::TxOrphanage::Usage TOTAL_USAGE_LIMIT{node::DEFAULT_RESERVED_ORPHAN_WEIGHT_PER_PEER * NUM_PEERS};
    // Subtract 4 because BulkTransaction rounds up and we must avoid going over the weight limit early.
    static constexpr node::TxOrphanage::Usage LARGE_TX_WEIGHT{TOTAL_USAGE_LIMIT / NUM_UNIQUE_TXNS - 4};
    static_assert(LARGE_TX_WEIGHT >= TINY_TX_WEIGHT * 2, "Tx is too small, increase NUM_PEERS");
    // The orphanage does not permit any transactions larger than 400'000, so this test will not work if the large tx is much larger.
    static_assert(LARGE_TX_WEIGHT <= MAX_STANDARD_TX_WEIGHT, "Tx is too large, decrease NUM_PEERS");

    FastRandomContext det_rand{true};
    // Construct large transactions
    std::vector<CTransactionRef> shared_txs;
    shared_txs.reserve(NUM_UNIQUE_TXNS);
    for (unsigned int i{0}; i < NUM_UNIQUE_TXNS; ++i) {
        shared_txs.emplace_back(MakeTransactionBulkedTo(9, LARGE_TX_WEIGHT, det_rand));
    }
    std::vector<size_t> indexes;
    indexes.resize(NUM_UNIQUE_TXNS);
    std::iota(indexes.begin(), indexes.end(), 0);

    const auto orphanage{node::MakeTxOrphanage(/*max_global_ann=*/node::DEFAULT_MAX_ORPHANAGE_LATENCY_SCORE, /*reserved_peer_usage=*/node::DEFAULT_RESERVED_ORPHAN_WEIGHT_PER_PEER)};
    // Every peer sends the same transactions, all from shared_txs.
    // Each peer has 1 or 2 assigned transactions, which they must place in the last and second-to-last positions.
    // The assignments ensure that every transaction is in some peer's last 2 transactions, and thus remains in the orphanage until the end of LimitOrphans.
    static_assert(NUM_UNIQUE_TXNS <= NUM_PEERS * 2);

    // We need each peer to send some transactions so that the global limit (which is a function of the number of peers providing at least 1 announcement) rises.
    for (unsigned int i{0}; i < NUM_UNIQUE_TXNS; ++i) {
        for (NodeId peer{0}; peer < NUM_PEERS; ++peer) {
            const CTransactionRef& reserved_last_tx{shared_txs.at(peer)};
            CTransactionRef reserved_second_to_last_tx{peer < NUM_UNIQUE_TXNS - NUM_PEERS ? shared_txs.at(peer + NUM_PEERS) : nullptr};

            const auto& tx{shared_txs.at(indexes.at(i))};
            if (tx == reserved_last_tx) {
                // Skip
            } else if (reserved_second_to_last_tx && tx == reserved_second_to_last_tx) {
                // Skip
            } else {
                orphanage->AddTx(tx, peer);
            }
        }
    }

    // Now add the final reserved transactions.
    for (NodeId peer{0}; peer < NUM_PEERS; ++peer) {
        const CTransactionRef& reserved_last_tx{shared_txs.at(peer)};
        CTransactionRef reserved_second_to_last_tx{peer < NUM_UNIQUE_TXNS - NUM_PEERS ? shared_txs.at(peer + NUM_PEERS) : nullptr};
        // Add the final reserved transactions.
        if (reserved_second_to_last_tx) {
            orphanage->AddTx(reserved_second_to_last_tx, peer);
        }
        orphanage->AddTx(reserved_last_tx, peer);
    }

    assert(orphanage->CountAnnouncements() == NUM_PEERS * NUM_UNIQUE_TXNS);
    const auto total_usage{orphanage->TotalOrphanUsage()};
    const auto max_usage{orphanage->MaxGlobalUsage()};
    assert(max_usage - total_usage <= LARGE_TX_WEIGHT);
    assert(orphanage->TotalLatencyScore() <= orphanage->MaxGlobalLatencyScore());

    auto last_tx = MakeTransactionBulkedTo(0, max_usage - total_usage + 1, det_rand);

    bench.epochs(1).epochIterations(1).run([&]() NO_THREAD_SAFETY_ANALYSIS {
        const auto num_announcements_before_trim{orphanage->CountAnnouncements()};
        // There is a small gap between the total usage and the max usage. Add a transaction to fill it.
        assert(orphanage->AddTx(last_tx, 0));
        orphanage->LimitOrphans();

        // If there are multiple peers, note that they all have the same DoS score. We will evict only 1 item at a time for each new DoSiest peer.
        const auto num_evicted{num_announcements_before_trim - orphanage->CountAnnouncements() + 1};
        // The trimming happens as a round robin. In the first NUM_UNIQUE_TXNS - 2 rounds for each peer, only duplicates are evicted.
        // Once each peer has 2 transactions left, it's possible to select a peer whose oldest transaction is unique.
        assert(num_evicted >= (NUM_UNIQUE_TXNS - 2) * NUM_PEERS);
    });
}

static void OrphanageEraseAll(benchmark::Bench& bench, bool block_or_disconnect)
{
    FastRandomContext det_rand{true};
    const auto orphanage{node::MakeTxOrphanage(/*max_global_ann=*/node::DEFAULT_MAX_ORPHANAGE_LATENCY_SCORE, /*reserved_peer_usage=*/node::DEFAULT_RESERVED_ORPHAN_WEIGHT_PER_PEER)};
    // This is an unrealistically large number of inputs for a block, as there is almost no room given to witness data,
    // outputs, and overhead for individual transactions. The entire block is 1 transaction with 20,000 inputs.
    constexpr unsigned int NUM_BLOCK_INPUTS{MAX_BLOCK_WEIGHT / APPROX_WEIGHT_PER_INPUT};
    const auto block_tx{MakeTransactionBulkedTo(NUM_BLOCK_INPUTS, MAX_BLOCK_WEIGHT - 4000, det_rand)};
    CBlock block;
    block.vtx.push_back(block_tx);

    // Transactions with 9 inputs maximize the computation / LatencyScore ratio.
    constexpr unsigned int INPUTS_PER_TX{9};
    constexpr unsigned int NUM_PEERS{125};
    constexpr unsigned int NUM_TXNS_PER_PEER = node::DEFAULT_MAX_ORPHANAGE_LATENCY_SCORE / NUM_PEERS;

    // Divide the block's inputs evenly among the peers.
    constexpr unsigned int INPUTS_PER_PEER = NUM_BLOCK_INPUTS / NUM_PEERS;
    static_assert(INPUTS_PER_PEER > 0);
    // All the block inputs are spent by the orphanage transactions. Each peer is assigned 160 of them.
    // Each peer has 24 transactions spending 9 inputs each, so jumping by 7 ensures we cover all of the inputs.
    static_assert(7 * NUM_TXNS_PER_PEER + INPUTS_PER_TX - 1 >= INPUTS_PER_PEER);

    for (NodeId peer{0}; peer < NUM_PEERS; ++peer) {
        int64_t weight_left_for_peer{node::DEFAULT_RESERVED_ORPHAN_WEIGHT_PER_PEER};
        for (unsigned int txnum{0}; txnum < node::DEFAULT_MAX_ORPHANAGE_LATENCY_SCORE / NUM_PEERS; ++txnum) {
            // Transactions must be unique since they use different (though overlapping) inputs.
            const unsigned int start_input = peer * INPUTS_PER_PEER + txnum * 7;

            // Note that we shouldn't be able to hit the weight limit with these small transactions.
            const int64_t weight_limit{std::min<int64_t>(weight_left_for_peer, MAX_STANDARD_TX_WEIGHT)};
            auto ptx = MakeTransactionSpendingUpTo(block_tx->vin, /*start_input=*/start_input, /*num_inputs=*/INPUTS_PER_TX, /*weight_limit=*/weight_limit);

            assert(GetTransactionWeight(*ptx) <= MAX_STANDARD_TX_WEIGHT);
            assert(!orphanage->HaveTx(ptx->GetWitnessHash()));
            assert(orphanage->AddTx(ptx, peer));

            weight_left_for_peer -= GetTransactionWeight(*ptx);
            if (weight_left_for_peer < TINY_TX_WEIGHT * 2) break;
        }
    }

    // If these fail, it means this benchmark is not realistic because the orphanage would have been trimmed already.
    assert(orphanage->TotalLatencyScore() <= orphanage->MaxGlobalLatencyScore());
    assert(orphanage->TotalOrphanUsage() <= orphanage->MaxGlobalUsage());

    // 3000 announcements (and unique transactions) in the orphanage.
    // They spend a total of 27,000 inputs (20,000 unique ones)
    assert(orphanage->CountAnnouncements() == NUM_PEERS * NUM_TXNS_PER_PEER);
    assert(orphanage->TotalLatencyScore() == orphanage->CountAnnouncements());

    bench.epochs(1).epochIterations(1).run([&]() NO_THREAD_SAFETY_ANALYSIS {
        if (block_or_disconnect) {
            // Erase everything through EraseForBlock.
            // Every tx conflicts with this block.
            orphanage->EraseForBlock(block);
            assert(orphanage->CountAnnouncements() == 0);
        } else {
            // Erase everything through EraseForPeer.
            for (NodeId peer{0}; peer < NUM_PEERS; ++peer) {
                orphanage->EraseForPeer(peer);
            }
            assert(orphanage->CountAnnouncements() == 0);
        }
    });
}

static void OrphanageEraseForBlock(benchmark::Bench& bench)
{
    OrphanageEraseAll(bench, /*block_or_disconnect=*/true);
}
static void OrphanageEraseForPeer(benchmark::Bench& bench)
{
    OrphanageEraseAll(bench, /*block_or_disconnect=*/false);
}

BENCHMARK(OrphanageSinglePeerEviction, benchmark::PriorityLevel::LOW);
BENCHMARK(OrphanageMultiPeerEviction, benchmark::PriorityLevel::LOW);
BENCHMARK(OrphanageEraseForBlock, benchmark::PriorityLevel::LOW);
BENCHMARK(OrphanageEraseForPeer, benchmark::PriorityLevel::LOW);
@@ -490,7 +490,6 @@ void SetupServerArgs(ArgsManager& argsman, bool can_listen_ipc)
     argsman.AddArg("-allowignoredconf", strprintf("For backwards compatibility, treat an unused %s file in the datadir as a warning, not an error.", BITCOIN_CONF_FILENAME), ArgsManager::ALLOW_ANY, OptionsCategory::OPTIONS);
     argsman.AddArg("-loadblock=<file>", "Imports blocks from external file on startup", ArgsManager::ALLOW_ANY, OptionsCategory::OPTIONS);
     argsman.AddArg("-maxmempool=<n>", strprintf("Keep the transaction memory pool below <n> megabytes (default: %u)", DEFAULT_MAX_MEMPOOL_SIZE_MB), ArgsManager::ALLOW_ANY, OptionsCategory::OPTIONS);
-    argsman.AddArg("-maxorphantx=<n>", strprintf("Keep at most <n> unconnectable transactions in memory (default: %u)", DEFAULT_MAX_ORPHAN_TRANSACTIONS), ArgsManager::ALLOW_ANY, OptionsCategory::OPTIONS);
     argsman.AddArg("-mempoolexpiry=<n>", strprintf("Do not keep transactions in the mempool longer than <n> hours (default: %u)", DEFAULT_MEMPOOL_EXPIRY_HOURS), ArgsManager::ALLOW_ANY, OptionsCategory::OPTIONS);
     argsman.AddArg("-minimumchainwork=<hex>", strprintf("Minimum work assumed to exist on a valid chain in hex (default: %s, testnet3: %s, testnet4: %s, signet: %s)", defaultChainParams->GetConsensus().nMinimumChainWork.GetHex(), testnetChainParams->GetConsensus().nMinimumChainWork.GetHex(), testnet4ChainParams->GetConsensus().nMinimumChainWork.GetHex(), signetChainParams->GetConsensus().nMinimumChainWork.GetHex()), ArgsManager::ALLOW_ANY | ArgsManager::DEBUG_ONLY, OptionsCategory::OPTIONS);
     argsman.AddArg("-par=<n>", strprintf("Set the number of script verification threads (0 = auto, up to %d, <0 = leave that many cores free, default: %d)",
@@ -35,6 +35,7 @@
 #include <node/protocol_version.h>
 #include <node/timeoffsets.h>
 #include <node/txdownloadman.h>
+#include <node/txorphanage.h>
 #include <node/txreconciliation.h>
 #include <node/warnings.h>
 #include <policy/feerate.h>
@@ -53,7 +54,6 @@
 #include <sync.h>
 #include <tinyformat.h>
 #include <txmempool.h>
-#include <txorphanage.h>
 #include <uint256.h>
 #include <util/check.h>
 #include <util/strencodings.h>
@@ -533,7 +533,7 @@ public:
     std::optional<std::string> FetchBlock(NodeId peer_id, const CBlockIndex& block_index) override
         EXCLUSIVE_LOCKS_REQUIRED(!m_peer_mutex);
     bool GetNodeStateStats(NodeId nodeid, CNodeStateStats& stats) const override EXCLUSIVE_LOCKS_REQUIRED(!m_peer_mutex);
-    std::vector<TxOrphanage::OrphanTxBase> GetOrphanTransactions() override EXCLUSIVE_LOCKS_REQUIRED(!m_tx_download_mutex);
+    std::vector<node::TxOrphanage::OrphanTxBase> GetOrphanTransactions() override EXCLUSIVE_LOCKS_REQUIRED(!m_tx_download_mutex);
     PeerManagerInfo GetInfo() const override EXCLUSIVE_LOCKS_REQUIRED(!m_peer_mutex);
     void SendPings() override EXCLUSIVE_LOCKS_REQUIRED(!m_peer_mutex);
     void RelayTransaction(const Txid& txid, const Wtxid& wtxid) override EXCLUSIVE_LOCKS_REQUIRED(!m_peer_mutex);
@@ -1754,7 +1754,7 @@ bool PeerManagerImpl::GetNodeStateStats(NodeId nodeid, CNodeStateStats& stats) c
     return true;
 }

-std::vector<TxOrphanage::OrphanTxBase> PeerManagerImpl::GetOrphanTransactions()
+std::vector<node::TxOrphanage::OrphanTxBase> PeerManagerImpl::GetOrphanTransactions()
 {
     LOCK(m_tx_download_mutex);
     return m_txdownloadman.GetOrphanTransactions();
@@ -1925,7 +1925,7 @@ PeerManagerImpl::PeerManagerImpl(CConnman& connman, AddrMan& addrman,
       m_banman(banman),
       m_chainman(chainman),
       m_mempool(pool),
-      m_txdownloadman(node::TxDownloadOptions{pool, m_rng, opts.max_orphan_txs, opts.deterministic_rng}),
+      m_txdownloadman(node::TxDownloadOptions{pool, m_rng, opts.deterministic_rng}),
       m_warnings{warnings},
       m_opts{opts}
 {
@@ -8,9 +8,9 @@

 #include <consensus/amount.h>
 #include <net.h>
+#include <node/txorphanage.h>
 #include <protocol.h>
 #include <threadsafety.h>
-#include <txorphanage.h>
 #include <validationinterface.h>

 #include <atomic>
@@ -36,8 +36,6 @@ class Warnings;

 /** Whether transaction reconciliation protocol should be enabled by default. */
 static constexpr bool DEFAULT_TXRECONCILIATION_ENABLE{false};
-/** Default for -maxorphantx, maximum number of orphan transactions kept in memory */
-static const uint32_t DEFAULT_MAX_ORPHAN_TRANSACTIONS{100};
 /** Default number of non-mempool transactions to keep around for block reconstruction. Includes
     orphan, replaced, and rejected transactions. */
 static const uint32_t DEFAULT_BLOCK_RECONSTRUCTION_EXTRA_TXN{100};
@@ -78,8 +76,6 @@ public:
     bool ignore_incoming_txs{DEFAULT_BLOCKSONLY};
     //! Whether transaction reconciliation protocol is enabled
     bool reconcile_txs{DEFAULT_TXRECONCILIATION_ENABLE};
-    //! Maximum number of orphan transactions kept in memory
-    uint32_t max_orphan_txs{DEFAULT_MAX_ORPHAN_TRANSACTIONS};
     //! Number of non-mempool transactions to keep around for block reconstruction. Includes
     //! orphan, replaced, and rejected transactions.
     uint32_t max_extra_txs{DEFAULT_BLOCK_RECONSTRUCTION_EXTRA_TXN};
@@ -113,7 +109,7 @@ public:
     /** Get statistics from node state */
     virtual bool GetNodeStateStats(NodeId nodeid, CNodeStateStats& stats) const = 0;

-    virtual std::vector<TxOrphanage::OrphanTxBase> GetOrphanTransactions() = 0;
+    virtual std::vector<node::TxOrphanage::OrphanTxBase> GetOrphanTransactions() = 0;

     /** Get peer manager info. */
     virtual PeerManagerInfo GetInfo() const = 0;
@@ -16,10 +16,6 @@ void ApplyArgsManOptions(const ArgsManager& argsman, PeerManager::Options& optio
 {
     if (auto value{argsman.GetBoolArg("-txreconciliation")}) options.reconcile_txs = *value;

-    if (auto value{argsman.GetIntArg("-maxorphantx")}) {
-        options.max_orphan_txs = uint32_t((std::clamp<int64_t>(*value, 0, std::numeric_limits<uint32_t>::max())));
-    }
-
     if (auto value{argsman.GetIntArg("-blockreconstructionextratxn")}) {
         options.max_extra_txs = uint32_t((std::clamp<int64_t>(*value, 0, std::numeric_limits<uint32_t>::max())));
     }
@@ -6,9 +6,8 @@
 #define BITCOIN_NODE_TXDOWNLOADMAN_H

 #include <net.h>
+#include <node/txorphanage.h>
 #include <policy/packages.h>
-#include <txorphanage.h>
 #include <util/transaction_identifier.h>

 #include <cstdint>
 #include <memory>
@@ -42,8 +41,6 @@ struct TxDownloadOptions {
     const CTxMemPool& m_mempool;
     /** RNG provided by caller. */
     FastRandomContext& m_rng;
-    /** Maximum number of transactions allowed in orphanage. */
-    const uint32_t m_max_orphan_txs;
     /** Instantiate TxRequestTracker as deterministic (used for tests). */
     bool m_deterministic_txrequest{false};
 };
@@ -97,7 +97,7 @@ void TxDownloadManagerImpl::ActiveTipChange()

 void TxDownloadManagerImpl::BlockConnected(const std::shared_ptr<const CBlock>& pblock)
 {
-    m_orphanage.EraseForBlock(*pblock);
+    m_orphanage->EraseForBlock(*pblock);

     for (const auto& ptx : pblock->vtx) {
         RecentConfirmedTransactionsFilter().insert(ptx->GetHash().ToUint256());
@@ -137,7 +137,7 @@ bool TxDownloadManagerImpl::AlreadyHaveTx(const GenTxid& gtxid, bool include_rec
     // While we won't query by txid, we can try to "guess" what the wtxid is based on the txid.
     // A non-segwit transaction's txid == wtxid. Query this txhash "casted" to a wtxid. This will
     // help us find non-segwit transactions, saving bandwidth, and should have no false positives.
-    if (m_orphanage.HaveTx(Wtxid::FromUint256(hash))) return true;
+    if (m_orphanage->HaveTx(Wtxid::FromUint256(hash))) return true;

     if (include_reconsiderable && RecentRejectsReconsiderableFilter().contains(hash)) return true;

@@ -157,7 +157,7 @@ void TxDownloadManagerImpl::ConnectedPeer(NodeId nodeid, const TxDownloadConnect

 void TxDownloadManagerImpl::DisconnectedPeer(NodeId nodeid)
 {
-    m_orphanage.EraseForPeer(nodeid);
+    m_orphanage->EraseForPeer(nodeid);
     m_txrequest.DisconnectedPeer(nodeid);

     if (auto it = m_peer_info.find(nodeid); it != m_peer_info.end()) {
@@ -174,7 +174,7 @@ bool TxDownloadManagerImpl::AddTxAnnouncement(NodeId peer, const GenTxid& gtxid,
     // - exists in orphanage
     // - peer can be an orphan resolution candidate
     if (const auto* wtxid = std::get_if<Wtxid>(&gtxid)) {
-        if (auto orphan_tx{m_orphanage.GetTx(*wtxid)}) {
+        if (auto orphan_tx{m_orphanage->GetTx(*wtxid)}) {
             auto unique_parents{GetUniqueParents(*orphan_tx)};
             std::erase_if(unique_parents, [&](const auto& txid) {
                 return AlreadyHaveTx(txid, /*include_reconsiderable=*/false);
@@ -187,7 +187,8 @@ bool TxDownloadManagerImpl::AddTxAnnouncement(NodeId peer, const GenTxid& gtxid,
             }

             if (MaybeAddOrphanResolutionCandidate(unique_parents, *wtxid, peer, now)) {
-                m_orphanage.AddAnnouncer(orphan_tx->GetWitnessHash(), peer);
+                m_orphanage->AddAnnouncer(orphan_tx->GetWitnessHash(), peer);
+                m_orphanage->LimitOrphans();
             }

             // Return even if the peer isn't an orphan resolution candidate. This would be caught by AlreadyHaveTx.
@@ -227,7 +228,7 @@ bool TxDownloadManagerImpl::MaybeAddOrphanResolutionCandidate(const std::vector<
 {
     auto it_peer = m_peer_info.find(nodeid);
     if (it_peer == m_peer_info.end()) return false;
-    if (m_orphanage.HaveTxFromPeer(wtxid, nodeid)) return false;
+    if (m_orphanage->HaveTxFromPeer(wtxid, nodeid)) return false;

     const auto& peer_entry = m_peer_info.at(nodeid);
     const auto& info = peer_entry.m_connection_info;
@@ -305,7 +306,7 @@ std::optional<PackageToValidate> TxDownloadManagerImpl::Find1P1CPackage(const CT
     // children instead of the real one provided by the honest peer. Since we track all announcers
     // of an orphan, this does not exclude parent + orphan pairs that we happened to request from
     // different peers.
-    const auto cpfp_candidates_same_peer{m_orphanage.GetChildrenFromSamePeer(ptx, nodeid)};
+    const auto cpfp_candidates_same_peer{m_orphanage->GetChildrenFromSamePeer(ptx, nodeid)};

     // These children should be sorted from newest to oldest. In the (probably uncommon) case
     // of children that replace each other, this helps us accept the highest feerate (probably the
@@ -327,9 +328,9 @@ void TxDownloadManagerImpl::MempoolAcceptedTx(const CTransactionRef& tx)
     m_txrequest.ForgetTxHash(tx->GetHash());
     m_txrequest.ForgetTxHash(tx->GetWitnessHash());

-    m_orphanage.AddChildrenToWorkSet(*tx, m_opts.m_rng);
+    m_orphanage->AddChildrenToWorkSet(*tx, m_opts.m_rng);
     // If it came from the orphanage, remove it. No-op if the tx is not in txorphanage.
-    m_orphanage.EraseTx(tx->GetWitnessHash());
+    m_orphanage->EraseTx(tx->GetWitnessHash());
 }

 std::vector<Txid> TxDownloadManagerImpl::GetUniqueParents(const CTransaction& tx)
@@ -398,7 +399,7 @@ node::RejectedTxTodo TxDownloadManagerImpl::MempoolRejectedTx(const CTransaction
     const auto& wtxid = ptx->GetWitnessHash();
     // Potentially flip add_extra_compact_tx to false if tx is already in orphanage, which
     // means it was already added to vExtraTxnForCompact.
-    add_extra_compact_tx &= !m_orphanage.HaveTx(wtxid);
+    add_extra_compact_tx &= !m_orphanage->HaveTx(wtxid);

     // If there is no candidate for orphan resolution, AddTx will not be called. This means
     // that if a peer is overloading us with invs and orphans, they will eventually not be
@@ -411,7 +412,7 @@ node::RejectedTxTodo TxDownloadManagerImpl::MempoolRejectedTx(const CTransaction

         for (const auto& nodeid : orphan_resolution_candidates) {
             if (MaybeAddOrphanResolutionCandidate(unique_parents, ptx->GetWitnessHash(), nodeid, now)) {
-                m_orphanage.AddTx(ptx, nodeid);
+                m_orphanage->AddTx(ptx, nodeid);
             }
         }

@@ -420,9 +421,7 @@ node::RejectedTxTodo TxDownloadManagerImpl::MempoolRejectedTx(const CTransaction
         m_txrequest.ForgetTxHash(tx.GetWitnessHash());

         // DoS prevention: do not allow m_orphanage to grow unbounded (see CVE-2012-3789)
-        // Note that, if the orphanage reaches capacity, it's possible that we immediately evict
-        // the transaction we just added.
-        m_orphanage.LimitOrphans(m_opts.m_max_orphan_txs, m_opts.m_rng);
+        m_orphanage->LimitOrphans();
     } else {
         unique_parents.clear();
         LogDebug(BCLog::MEMPOOL, "not keeping orphan with rejected parents %s (wtxid=%s)\n",
@@ -491,7 +490,7 @@ node::RejectedTxTodo TxDownloadManagerImpl::MempoolRejectedTx(const CTransaction

     // If the tx failed in ProcessOrphanTx, it should be removed from the orphanage unless the
     // tx was still missing inputs. If the tx was not in the orphanage, EraseTx does nothing and returns 0.
-    if (state.GetResult() != TxValidationResult::TX_MISSING_INPUTS && m_orphanage.EraseTx(ptx->GetWitnessHash()) > 0) {
+    if (state.GetResult() != TxValidationResult::TX_MISSING_INPUTS && m_orphanage->EraseTx(ptx->GetWitnessHash())) {
         LogDebug(BCLog::TXPACKAGES, "   removed orphan tx %s (wtxid=%s)\n", ptx->GetHash().ToString(), ptx->GetWitnessHash().ToString());
     }

@@ -561,28 +560,28 @@ std::pair<bool, std::optional<PackageToValidate>> TxDownloadManagerImpl::Receive

 bool TxDownloadManagerImpl::HaveMoreWork(NodeId nodeid)
 {
-    return m_orphanage.HaveTxToReconsider(nodeid);
+    return m_orphanage->HaveTxToReconsider(nodeid);
 }

 CTransactionRef TxDownloadManagerImpl::GetTxToReconsider(NodeId nodeid)
 {
-    return m_orphanage.GetTxToReconsider(nodeid);
+    return m_orphanage->GetTxToReconsider(nodeid);
 }

 void TxDownloadManagerImpl::CheckIsEmpty(NodeId nodeid)
 {
     assert(m_txrequest.Count(nodeid) == 0);
-    assert(m_orphanage.UsageByPeer(nodeid) == 0);
+    assert(m_orphanage->UsageByPeer(nodeid) == 0);
 }
 void TxDownloadManagerImpl::CheckIsEmpty()
 {
-    assert(m_orphanage.TotalOrphanUsage() == 0);
-    assert(m_orphanage.Size() == 0);
+    assert(m_orphanage->TotalOrphanUsage() == 0);
+    assert(m_orphanage->Size() == 0);
     assert(m_txrequest.Size() == 0);
     assert(m_num_wtxid_peers == 0);
 }
 std::vector<TxOrphanage::OrphanTxBase> TxDownloadManagerImpl::GetOrphanTransactions() const
 {
-    return m_orphanage.GetOrphanTransactions();
+    return m_orphanage->GetOrphanTransactions();
 }
 } // namespace node

@@ -10,9 +10,9 @@
 #include <consensus/validation.h>
 #include <kernel/chain.h>
 #include <net.h>
+#include <node/txorphanage.h>
 #include <primitives/transaction.h>
 #include <policy/packages.h>
-#include <txorphanage.h>
 #include <txrequest.h>

 class CTxMemPool;
@@ -22,7 +22,7 @@ public:
     TxDownloadOptions m_opts;

     /** Manages unvalidated tx data (orphan transactions for which we are downloading ancestors). */
-    TxOrphanage m_orphanage;
+    std::unique_ptr<TxOrphanage> m_orphanage;
     /** Tracks candidates for requesting and downloading transaction data. */
     TxRequestTracker m_txrequest;

@@ -128,7 +128,7 @@ public:
         return *m_lazy_recent_confirmed_transactions;
     }

-    TxDownloadManagerImpl(const TxDownloadOptions& options) : m_opts{options}, m_txrequest{options.m_deterministic_txrequest} {}
+    TxDownloadManagerImpl(const TxDownloadOptions& options) : m_opts{options}, m_orphanage{MakeTxOrphanage()}, m_txrequest{options.m_deterministic_txrequest} {}

     struct PeerInfo {
         /** Information relevant to scheduling tx requests. */

720
src/node/txorphanage.cpp
Normal file
@@ -0,0 +1,720 @@
// Copyright (c) 2021-2022 The Bitcoin Core developers
// Distributed under the MIT software license, see the accompanying
// file COPYING or http://www.opensource.org/licenses/mit-license.php.

#include <node/txorphanage.h>

#include <consensus/validation.h>
#include <logging.h>
#include <policy/policy.h>
#include <primitives/transaction.h>
#include <util/feefrac.h>
#include <util/time.h>
#include <util/hasher.h>

#include <boost/multi_index/indexed_by.hpp>
#include <boost/multi_index/ordered_index.hpp>
#include <boost/multi_index/tag.hpp>
#include <boost/multi_index_container.hpp>

#include <cassert>
#include <cmath>
#include <unordered_map>

namespace node {
/** Minimum NodeId for lower_bound lookups (in practice, NodeIds start at 0). */
static constexpr NodeId MIN_PEER{std::numeric_limits<NodeId>::min()};
/** Maximum NodeId for upper_bound lookups. */
static constexpr NodeId MAX_PEER{std::numeric_limits<NodeId>::max()};
class TxOrphanageImpl final : public TxOrphanage {
    // Type alias for sequence numbers
    using SequenceNumber = uint64_t;
    /** Global sequence number, incremented each time an announcement is added. */
    SequenceNumber m_current_sequence{0};

    /** One orphan announcement. Each announcement (i.e. combination of wtxid, nodeid) is unique. There may be multiple
     * announcements for the same tx, and multiple transactions with the same txid but different wtxid are possible. */
    struct Announcement
    {
        const CTransactionRef m_tx;
        /** Which peer announced this tx */
        const NodeId m_announcer;
        /** What order this transaction entered the orphanage. */
        const SequenceNumber m_entry_sequence;
        /** Whether this tx should be reconsidered. Always starts out false. A peer's workset is the collection of all
         * announcements with m_reconsider=true. */
        bool m_reconsider{false};

        Announcement(const CTransactionRef& tx, NodeId peer, SequenceNumber seq) :
            m_tx{tx}, m_announcer{peer}, m_entry_sequence{seq}
        { }

        /** Get an approximation for "memory usage". The total memory is a function of the memory used to store the
         * transaction itself, each entry in m_orphans, and each entry in m_outpoint_to_orphan_it. We use weight because
         * it is often higher than the actual memory usage of the transaction. This metric conveniently encompasses
         * m_outpoint_to_orphan_it usage since input data does not get the witness discount, and makes it easier to
         * reason about each peer's limits using well-understood transaction attributes. */
        TxOrphanage::Usage GetMemUsage() const {
            return GetTransactionWeight(*m_tx);
        }

        /** Get an approximation of how much this transaction contributes to latency in EraseForBlock and EraseForPeer.
         * The computation time is a function of the number of entries in m_orphans (thus 1 per announcement) and the
         * number of entries in m_outpoint_to_orphan_it (thus an additional 1 for every 10 inputs). Transactions with a
         * small number of inputs (9 or fewer) are counted as 1 to make it easier to reason about each peer's limits in
         * terms of "normal" transactions. */
        TxOrphanage::Count GetLatencyScore() const {
            return 1 + (m_tx->vin.size() / 10);
        }
    };

    // Index by wtxid, then peer
    struct ByWtxid {};
    using ByWtxidView = std::tuple<Wtxid, NodeId>;
    struct WtxidExtractor
    {
        using result_type = ByWtxidView;
        result_type operator()(const Announcement& ann) const
        {
            return ByWtxidView{ann.m_tx->GetWitnessHash(), ann.m_announcer};
        }
    };

    // Sort by peer, then by whether it is ready to reconsider, then by recency.
    struct ByPeer {};
    using ByPeerView = std::tuple<NodeId, bool, SequenceNumber>;
    struct ByPeerViewExtractor {
        using result_type = ByPeerView;
        result_type operator()(const Announcement& ann) const
        {
            return ByPeerView{ann.m_announcer, ann.m_reconsider, ann.m_entry_sequence};
        }
    };

    struct OrphanIndices final : boost::multi_index::indexed_by<
        boost::multi_index::ordered_unique<boost::multi_index::tag<ByWtxid>, WtxidExtractor>,
        boost::multi_index::ordered_unique<boost::multi_index::tag<ByPeer>, ByPeerViewExtractor>
    >{};

    using AnnouncementMap = boost::multi_index::multi_index_container<Announcement, OrphanIndices>;
    template<typename Tag>
    using Iter = typename AnnouncementMap::index<Tag>::type::iterator;
    AnnouncementMap m_orphans;

    const TxOrphanage::Count m_max_global_latency_score{DEFAULT_MAX_ORPHANAGE_LATENCY_SCORE};
    const TxOrphanage::Usage m_reserved_usage_per_peer{DEFAULT_RESERVED_ORPHAN_WEIGHT_PER_PEER};

    /** Number of unique orphans by wtxid. Less than or equal to the number of entries in m_orphans. */
    TxOrphanage::Count m_unique_orphans{0};

    /** Memory used by orphans (see Announcement::GetMemUsage()), deduplicated by wtxid. */
    TxOrphanage::Usage m_unique_orphan_usage{0};

    /** The sum of each unique transaction's latency scores including the inputs only (see Announcement::GetLatencyScore
     * but subtract 1 for the announcements themselves). The total orphanage's latency score is given by this value +
     * the number of entries in m_orphans. */
    TxOrphanage::Count m_unique_rounded_input_scores{0};

    /** Index from the parents' outputs to wtxids that exist in m_orphans. Used to find children of
     * a transaction that can be reconsidered and to remove entries that conflict with a block. */
    std::unordered_map<COutPoint, std::set<Wtxid>, SaltedOutpointHasher> m_outpoint_to_orphan_it;

    struct PeerDoSInfo {
        TxOrphanage::Usage m_total_usage{0};
        TxOrphanage::Count m_count_announcements{0};
        TxOrphanage::Count m_total_latency_score{0};
        bool operator==(const PeerDoSInfo& other) const
        {
            return m_total_usage == other.m_total_usage &&
                   m_count_announcements == other.m_count_announcements &&
                   m_total_latency_score == other.m_total_latency_score;
        }
        void Add(const Announcement& ann)
        {
            m_total_usage += ann.GetMemUsage();
            m_total_latency_score += ann.GetLatencyScore();
            m_count_announcements += 1;
        }
        bool Subtract(const Announcement& ann)
        {
            Assume(m_total_usage >= ann.GetMemUsage());
            Assume(m_total_latency_score >= ann.GetLatencyScore());
            Assume(m_count_announcements >= 1);

            m_total_usage -= ann.GetMemUsage();
            m_total_latency_score -= ann.GetLatencyScore();
            m_count_announcements -= 1;
            return m_count_announcements == 0;
        }
        /** There are 2 DoS scores:
         * - Latency score (ratio of total latency score / max allowed latency score)
         * - Memory score (ratio of total memory usage / max allowed memory usage).
         *
         * If the peer is using more than allowed for either resource, its DoS score is > 1.
         * A peer having a DoS score > 1 does not necessarily mean that something is wrong, since we
         * do not trim unless the orphanage exceeds global limits, but it means that this peer will
         * be selected for trimming sooner. If the global latency score or global memory usage
         * limits are exceeded, it must be that there is a peer whose DoS score > 1. */
        FeeFrac GetDosScore(TxOrphanage::Count max_peer_latency_score, TxOrphanage::Usage max_peer_bytes) const
        {
            assert(max_peer_latency_score > 0);
            assert(max_peer_bytes > 0);
            const FeeFrac cpu_score(m_total_latency_score, max_peer_latency_score);
            const FeeFrac mem_score(m_total_usage, max_peer_bytes);
            return std::max<FeeFrac>(cpu_score, mem_score);
        }
    };
    /** Store per-peer statistics. Used to determine each peer's DoS score. The size of this map is used to determine the
     * number of peers and thus global {latency score, memory} limits. */
    std::unordered_map<NodeId, PeerDoSInfo> m_peer_orphanage_info;

    /** Erase from m_orphans and update m_peer_orphanage_info. */
    template<typename Tag>
    void Erase(Iter<Tag> it);

    /** Check if there is exactly one announcement with the same wtxid as it. */
    bool IsUnique(Iter<ByWtxid> it) const;

    /** Check if the orphanage needs trimming. */
    bool NeedsTrim() const;
public:
    TxOrphanageImpl() = default;
    TxOrphanageImpl(Count max_global_ann, Usage reserved_peer_usage) :
        m_max_global_latency_score{max_global_ann},
        m_reserved_usage_per_peer{reserved_peer_usage}
    {}
    ~TxOrphanageImpl() noexcept override = default;

    TxOrphanage::Count CountAnnouncements() const override;
    TxOrphanage::Count CountUniqueOrphans() const override;
    TxOrphanage::Count AnnouncementsFromPeer(NodeId peer) const override;
    TxOrphanage::Count LatencyScoreFromPeer(NodeId peer) const override;
    TxOrphanage::Usage UsageByPeer(NodeId peer) const override;

    TxOrphanage::Count MaxGlobalLatencyScore() const override;
    TxOrphanage::Count TotalLatencyScore() const override;
    TxOrphanage::Usage ReservedPeerUsage() const override;

    /** Maximum allowed (deduplicated) latency score for all transactions (see Announcement::GetLatencyScore()). Dynamic
     * based on number of peers. Each peer has an equal amount, but the global maximum latency score stays constant. The
     * number of peers times MaxPeerLatencyScore() (rounded) adds up to MaxGlobalLatencyScore(). As long as every peer's
     * m_total_latency_score / MaxPeerLatencyScore() < 1, MaxGlobalLatencyScore() is not exceeded. */
    TxOrphanage::Count MaxPeerLatencyScore() const override;

    /** Maximum allowed (deduplicated) memory usage for all transactions (see Announcement::GetMemUsage()). Dynamic based
     * on number of peers. More peers means more allowed memory usage. The number of peers times ReservedPeerUsage()
     * adds up to MaxGlobalUsage(). As long as every peer's m_total_usage / ReservedPeerUsage() < 1, MaxGlobalUsage() is
     * not exceeded. */
    TxOrphanage::Usage MaxGlobalUsage() const override;

    bool AddTx(const CTransactionRef& tx, NodeId peer) override;
    bool AddAnnouncer(const Wtxid& wtxid, NodeId peer) override;
    CTransactionRef GetTx(const Wtxid& wtxid) const override;
    bool HaveTx(const Wtxid& wtxid) const override;
    bool HaveTxFromPeer(const Wtxid& wtxid, NodeId peer) const override;
    CTransactionRef GetTxToReconsider(NodeId peer) override;
    bool EraseTx(const Wtxid& wtxid) override;
    void EraseForPeer(NodeId peer) override;
    void EraseForBlock(const CBlock& block) override;
    void LimitOrphans() override;
    std::vector<std::pair<Wtxid, NodeId>> AddChildrenToWorkSet(const CTransaction& tx, FastRandomContext& rng) override;
    bool HaveTxToReconsider(NodeId peer) override;
    std::vector<CTransactionRef> GetChildrenFromSamePeer(const CTransactionRef& parent, NodeId nodeid) const override;
    size_t Size() const override { return m_unique_orphans; }
    std::vector<OrphanTxBase> GetOrphanTransactions() const override;
    TxOrphanage::Usage TotalOrphanUsage() const override;
    void SanityCheck() const override;
};

template<typename Tag>
void TxOrphanageImpl::Erase(Iter<Tag> it)
{
    // Update m_peer_orphanage_info and clean up entries if they point to an empty struct.
    // This means peers that are not storing any orphans do not have an entry in
    // m_peer_orphanage_info (they can be added back later if they announce another orphan) and
    // ensures disconnected peers are not tracked forever.
    auto peer_it = m_peer_orphanage_info.find(it->m_announcer);
    Assume(peer_it != m_peer_orphanage_info.end());
    if (peer_it->second.Subtract(*it)) m_peer_orphanage_info.erase(peer_it);

    if (IsUnique(m_orphans.project<ByWtxid>(it))) {
        m_unique_orphans -= 1;
        m_unique_rounded_input_scores -= it->GetLatencyScore() - 1;
        m_unique_orphan_usage -= it->GetMemUsage();

        // Remove references in m_outpoint_to_orphan_it
        const auto& wtxid{it->m_tx->GetWitnessHash()};
        for (const auto& input : it->m_tx->vin) {
            auto it_prev = m_outpoint_to_orphan_it.find(input.prevout);
            if (it_prev != m_outpoint_to_orphan_it.end()) {
                it_prev->second.erase(wtxid);
                // Clean up keys if they point to an empty set.
                if (it_prev->second.empty()) {
                    m_outpoint_to_orphan_it.erase(it_prev);
                }
            }
        }
    }
    m_orphans.get<Tag>().erase(it);
}

bool TxOrphanageImpl::IsUnique(Iter<ByWtxid> it) const
{
    // Iterators ByWtxid are sorted by wtxid, so check if neighboring elements have the same wtxid.
    auto& index = m_orphans.get<ByWtxid>();
    if (it == index.end()) return false;
    if (std::next(it) != index.end() && std::next(it)->m_tx->GetWitnessHash() == it->m_tx->GetWitnessHash()) return false;
    if (it != index.begin() && std::prev(it)->m_tx->GetWitnessHash() == it->m_tx->GetWitnessHash()) return false;
    return true;
}

TxOrphanage::Usage TxOrphanageImpl::UsageByPeer(NodeId peer) const
{
    auto it = m_peer_orphanage_info.find(peer);
    return it == m_peer_orphanage_info.end() ? 0 : it->second.m_total_usage;
}

TxOrphanage::Count TxOrphanageImpl::CountAnnouncements() const { return m_orphans.size(); }

TxOrphanage::Usage TxOrphanageImpl::TotalOrphanUsage() const { return m_unique_orphan_usage; }

TxOrphanage::Count TxOrphanageImpl::CountUniqueOrphans() const { return m_unique_orphans; }

TxOrphanage::Count TxOrphanageImpl::AnnouncementsFromPeer(NodeId peer) const {
    auto it = m_peer_orphanage_info.find(peer);
    return it == m_peer_orphanage_info.end() ? 0 : it->second.m_count_announcements;
}

TxOrphanage::Count TxOrphanageImpl::LatencyScoreFromPeer(NodeId peer) const {
    auto it = m_peer_orphanage_info.find(peer);
    return it == m_peer_orphanage_info.end() ? 0 : it->second.m_total_latency_score;
}

bool TxOrphanageImpl::AddTx(const CTransactionRef& tx, NodeId peer)
{
    const auto& wtxid{tx->GetWitnessHash()};
    const auto& txid{tx->GetHash()};

    // Ignore transactions above max standard size to avoid a send-big-orphans memory exhaustion attack.
    TxOrphanage::Usage sz = GetTransactionWeight(*tx);
    if (sz > MAX_STANDARD_TX_WEIGHT) {
        LogDebug(BCLog::TXPACKAGES, "ignoring large orphan tx (size: %u, txid: %s, wtxid: %s)\n", sz, txid.ToString(), wtxid.ToString());
        return false;
    }

    // We will return false if the tx already exists under a different peer.
    const bool brand_new{!HaveTx(wtxid)};

    auto [iter, inserted] = m_orphans.get<ByWtxid>().emplace(tx, peer, m_current_sequence);
    // If the announcement (same wtxid, same peer) already exists, emplacement fails. Return false.
    if (!inserted) return false;

    ++m_current_sequence;
    auto& peer_info = m_peer_orphanage_info.try_emplace(peer).first->second;
    peer_info.Add(*iter);

    // Add links in m_outpoint_to_orphan_it
    if (brand_new) {
        for (const auto& input : tx->vin) {
            auto& wtxids_for_prevout = m_outpoint_to_orphan_it.try_emplace(input.prevout).first->second;
            wtxids_for_prevout.emplace(wtxid);
        }

        m_unique_orphans += 1;
        m_unique_orphan_usage += iter->GetMemUsage();
        m_unique_rounded_input_scores += iter->GetLatencyScore() - 1;

        LogDebug(BCLog::TXPACKAGES, "stored orphan tx %s (wtxid=%s), weight: %u (mapsz %u outsz %u)\n",
                 txid.ToString(), wtxid.ToString(), sz, m_orphans.size(), m_outpoint_to_orphan_it.size());
        Assume(IsUnique(iter));
    } else {
        LogDebug(BCLog::TXPACKAGES, "added peer=%d as announcer of orphan tx %s (wtxid=%s)\n",
                 peer, txid.ToString(), wtxid.ToString());
        Assume(!IsUnique(iter));
    }
    return brand_new;
}

bool TxOrphanageImpl::AddAnnouncer(const Wtxid& wtxid, NodeId peer)
{
    auto& index_by_wtxid = m_orphans.get<ByWtxid>();
    auto it = index_by_wtxid.lower_bound(ByWtxidView{wtxid, MIN_PEER});

    // Do nothing if this transaction isn't already present. We can't create an entry if we don't
    // have the tx data.
    if (it == index_by_wtxid.end()) return false;
    if (it->m_tx->GetWitnessHash() != wtxid) return false;

    // Add another announcement, copying the CTransactionRef from one that already exists.
    const auto& ptx = it->m_tx;
    auto [iter, inserted] = index_by_wtxid.emplace(ptx, peer, m_current_sequence);
    // If the announcement (same wtxid, same peer) already exists, emplacement fails. Return false.
    if (!inserted) return false;

    ++m_current_sequence;
    auto& peer_info = m_peer_orphanage_info.try_emplace(peer).first->second;
    peer_info.Add(*iter);

    const auto& txid = ptx->GetHash();
    LogDebug(BCLog::TXPACKAGES, "added peer=%d as announcer of orphan tx %s (wtxid=%s)\n",
             peer, txid.ToString(), wtxid.ToString());

    Assume(!IsUnique(iter));
    return true;
}

bool TxOrphanageImpl::EraseTx(const Wtxid& wtxid)
{
    auto& index_by_wtxid = m_orphans.get<ByWtxid>();

    auto it = index_by_wtxid.lower_bound(ByWtxidView{wtxid, MIN_PEER});
    if (it == index_by_wtxid.end() || it->m_tx->GetWitnessHash() != wtxid) return false;

    auto it_end = index_by_wtxid.upper_bound(ByWtxidView{wtxid, MAX_PEER});
    unsigned int num_ann{0};
    const auto txid = it->m_tx->GetHash();
    while (it != it_end) {
        Assume(it->m_tx->GetWitnessHash() == wtxid);
        Erase<ByWtxid>(it++);
        num_ann += 1;
    }

    LogDebug(BCLog::TXPACKAGES, "removed orphan tx %s (wtxid=%s) (%u announcements)\n", txid.ToString(), wtxid.ToString(), num_ann);

    return true;
}

/** Erase all entries by this peer. */
void TxOrphanageImpl::EraseForPeer(NodeId peer)
{
    auto& index_by_peer = m_orphans.get<ByPeer>();
    auto it = index_by_peer.lower_bound(ByPeerView{peer, false, 0});
    if (it == index_by_peer.end() || it->m_announcer != peer) return;

    unsigned int num_ann{0};
    while (it != index_by_peer.end() && it->m_announcer == peer) {
        // Delete item, cleaning up m_outpoint_to_orphan_it iff this entry is unique by wtxid.
        Erase<ByPeer>(it++);
        num_ann += 1;
    }
    Assume(!m_peer_orphanage_info.contains(peer));

    if (num_ann > 0) LogDebug(BCLog::TXPACKAGES, "Erased %d orphan transaction(s) from peer=%d\n", num_ann, peer);
}

/** If the data structure needs trimming, evicts announcements by selecting the DoSiest peer and evicting its oldest
|
||||
* announcement (sorting non-reconsiderable orphans first, to give reconsiderable orphans a greater chance of being
|
||||
* processed). Does nothing if no global limits are exceeded. This eviction strategy effectively "reserves" an
|
||||
* amount of announcements and space for each peer. The reserved amount is protected from eviction even if there
|
||||
* are peers spamming the orphanage.
|
||||
*/
|
||||
void TxOrphanageImpl::LimitOrphans()
|
||||
{
|
||||
if (!NeedsTrim()) return;
|
||||
|
||||
const auto original_unique_txns{CountUniqueOrphans()};
|
||||
|
||||
// Even though it's possible for MaxPeerLatencyScore to increase within this call to LimitOrphans
|
||||
// (e.g. if a peer's orphans are removed entirely, changing the number of peers), use consistent limits throughout.
|
||||
const auto max_ann{MaxPeerLatencyScore()};
|
||||
const auto max_mem{ReservedPeerUsage()};

    // We have exceeded the global limit(s). Now, identify who is using too much and evict their orphans.
    // Create a heap of pairs (NodeId, DoS score), sorted by descending DoS score.
    std::vector<std::pair<NodeId, FeeFrac>> heap_peer_dos;
    heap_peer_dos.reserve(m_peer_orphanage_info.size());
    for (const auto& [nodeid, entry] : m_peer_orphanage_info) {
        // Performance optimization: only consider peers with a DoS score > 1.
        const auto dos_score = entry.GetDosScore(max_ann, max_mem);
        if (dos_score >> FeeFrac{1, 1}) {
            heap_peer_dos.emplace_back(nodeid, dos_score);
        }
    }
    static constexpr auto compare_score = [](const auto& left, const auto& right) {
        if (left.second != right.second) return left.second < right.second;
        // Tiebreak by considering the more recent peer (higher NodeId) to be worse.
        return left.first < right.first;
    };
    std::make_heap(heap_peer_dos.begin(), heap_peer_dos.end(), compare_score);

    unsigned int num_erased{0};
    // This outer loop finds the peer with the highest DoS score, which is a fraction of {usage, announcements} used
    // over the respective allowances. We continue until the orphanage is within global limits. That means some peers
    // might still have a DoS score > 1 at the end.
    // Note: if ratios are the same, FeeFrac tiebreaks by denominator. In practice, since the CPU denominator (number of
    // announcements) is always lower, this means that a peer with only a high number of announcements will be targeted
    // before a peer using a lot of memory, even if they have the same ratios.
    do {
        Assume(!heap_peer_dos.empty());
        // This is a max-heap, so the worst peer is at the front. pop_heap()
        // moves it to the back, and the next worst peer is moved to the front.
        std::pop_heap(heap_peer_dos.begin(), heap_peer_dos.end(), compare_score);
        const auto [worst_peer, dos_score] = std::move(heap_peer_dos.back());
        heap_peer_dos.pop_back();

        // If trimming is needed, then at least one peer has a DoS score higher than 1.
        Assume(dos_score >> (FeeFrac{1, 1}));

        auto it_worst_peer = m_peer_orphanage_info.find(worst_peer);

        // This inner loop trims until this peer is no longer the DoSiest one or has a score within 1. The score 1 is
        // just a conservative fallback: once the last peer goes below ratio 1, NeedsTrim() will return false anyway.
        // We evict the oldest announcement(s) from this peer, sorting non-reconsiderable before reconsiderable.
        // The number of inner loop iterations is bounded by the total number of announcements.
        const auto dos_threshold = heap_peer_dos.empty() ? FeeFrac{1, 1} : heap_peer_dos.front().second;
        auto it_ann = m_orphans.get<ByPeer>().lower_bound(ByPeerView{worst_peer, false, 0});
        while (NeedsTrim()) {
            if (!Assume(it_ann != m_orphans.get<ByPeer>().end())) break;
            if (!Assume(it_ann->m_announcer == worst_peer)) break;

            Erase<ByPeer>(it_ann++);
            num_erased += 1;

            // If we erased the last orphan from this peer, it_worst_peer will be invalidated.
            it_worst_peer = m_peer_orphanage_info.find(worst_peer);
            if (it_worst_peer == m_peer_orphanage_info.end() || it_worst_peer->second.GetDosScore(max_ann, max_mem) <= dos_threshold) break;
        }

        if (!NeedsTrim()) break;

        // Unless this peer is empty, put it back in the heap so we continue to consider evicting its orphans.
        // We may select this peer for evictions again if there are multiple DoSy peers.
        if (it_worst_peer != m_peer_orphanage_info.end() && it_worst_peer->second.m_count_announcements > 0) {
            heap_peer_dos.emplace_back(worst_peer, it_worst_peer->second.GetDosScore(max_ann, max_mem));
            std::push_heap(heap_peer_dos.begin(), heap_peer_dos.end(), compare_score);
        }
    } while (true);

    const auto remaining_unique_orphans{CountUniqueOrphans()};
    LogDebug(BCLog::TXPACKAGES, "orphanage overflow, removed %u tx (%u announcements)\n", original_unique_txns - remaining_unique_orphans, num_erased);
}

std::vector<std::pair<Wtxid, NodeId>> TxOrphanageImpl::AddChildrenToWorkSet(const CTransaction& tx, FastRandomContext& rng)
{
    std::vector<std::pair<Wtxid, NodeId>> ret;
    auto& index_by_wtxid = m_orphans.get<ByWtxid>();
    for (unsigned int i = 0; i < tx.vout.size(); i++) {
        const auto it_by_prev = m_outpoint_to_orphan_it.find(COutPoint(tx.GetHash(), i));
        if (it_by_prev != m_outpoint_to_orphan_it.end()) {
            for (const auto& wtxid : it_by_prev->second) {
                // Belt and suspenders: each entry in m_outpoint_to_orphan_it should always have at least 1 announcement.
                auto it = index_by_wtxid.lower_bound(ByWtxidView{wtxid, MIN_PEER});
                if (!Assume(it != index_by_wtxid.end())) continue;

                // Select a random peer to assign orphan processing, reducing wasted work if the orphan is still missing
                // inputs. However, we don't want to create an issue in which the assigned peer can purposefully stop us
                // from processing the orphan by disconnecting.
                auto it_end = index_by_wtxid.upper_bound(ByWtxidView{wtxid, MAX_PEER});
                const auto num_announcers{std::distance(it, it_end)};
                if (!Assume(num_announcers > 0)) continue;
                std::advance(it, rng.randrange(num_announcers));

                if (!Assume(it->m_tx->GetWitnessHash() == wtxid)) break;

                // Mark this orphan as ready to be reconsidered.
                static constexpr auto mark_reconsidered_modifier = [](auto& ann) { ann.m_reconsider = true; };
                if (!it->m_reconsider) {
                    index_by_wtxid.modify(it, mark_reconsidered_modifier);
                    ret.emplace_back(wtxid, it->m_announcer);
                }

                LogDebug(BCLog::TXPACKAGES, "added %s (wtxid=%s) to peer %d workset\n",
                         it->m_tx->GetHash().ToString(), it->m_tx->GetWitnessHash().ToString(), it->m_announcer);
            }
        }
    }
    return ret;
}

bool TxOrphanageImpl::HaveTx(const Wtxid& wtxid) const
{
    auto it_lower = m_orphans.get<ByWtxid>().lower_bound(ByWtxidView{wtxid, MIN_PEER});
    return it_lower != m_orphans.get<ByWtxid>().end() && it_lower->m_tx->GetWitnessHash() == wtxid;
}

CTransactionRef TxOrphanageImpl::GetTx(const Wtxid& wtxid) const
{
    auto it_lower = m_orphans.get<ByWtxid>().lower_bound(ByWtxidView{wtxid, MIN_PEER});
    if (it_lower != m_orphans.get<ByWtxid>().end() && it_lower->m_tx->GetWitnessHash() == wtxid) return it_lower->m_tx;
    return nullptr;
}

bool TxOrphanageImpl::HaveTxFromPeer(const Wtxid& wtxid, NodeId peer) const
{
    return m_orphans.get<ByWtxid>().count(ByWtxidView{wtxid, peer}) > 0;
}

/** If there is a tx that can be reconsidered, return it and set it back to
 * non-reconsiderable. Otherwise, return a nullptr. */
CTransactionRef TxOrphanageImpl::GetTxToReconsider(NodeId peer)
{
    auto it = m_orphans.get<ByPeer>().lower_bound(ByPeerView{peer, true, 0});
    if (it != m_orphans.get<ByPeer>().end() && it->m_announcer == peer && it->m_reconsider) {
        // Flip m_reconsider. Even if this transaction stays in the orphanage, it shouldn't be
        // reconsidered again until there is a new reason to do so.
        static constexpr auto unmark_reconsidered_modifier = [](auto& ann) { ann.m_reconsider = false; };
        m_orphans.get<ByPeer>().modify(it, unmark_reconsidered_modifier);
        return it->m_tx;
    }
    return nullptr;
}

/** Return whether there is a tx that can be reconsidered. */
bool TxOrphanageImpl::HaveTxToReconsider(NodeId peer)
{
    auto it = m_orphans.get<ByPeer>().lower_bound(ByPeerView{peer, true, 0});
    return it != m_orphans.get<ByPeer>().end() && it->m_announcer == peer && it->m_reconsider;
}

void TxOrphanageImpl::EraseForBlock(const CBlock& block)
{
    std::set<Wtxid> wtxids_to_erase;
    for (const CTransactionRef& ptx : block.vtx) {
        const CTransaction& block_tx = *ptx;

        // Which orphan pool entries must we evict?
        for (const auto& input : block_tx.vin) {
            auto it_prev = m_outpoint_to_orphan_it.find(input.prevout);
            if (it_prev != m_outpoint_to_orphan_it.end()) {
                // Copy all wtxids to wtxids_to_erase.
                std::copy(it_prev->second.cbegin(), it_prev->second.cend(), std::inserter(wtxids_to_erase, wtxids_to_erase.end()));
            }
        }
    }

    unsigned int num_erased{0};
    for (const auto& wtxid : wtxids_to_erase) {
        num_erased += EraseTx(wtxid) ? 1 : 0;
    }

    if (num_erased != 0) {
        LogDebug(BCLog::TXPACKAGES, "Erased %u orphan transaction(s) included or conflicted by block\n", num_erased);
    }
    Assume(wtxids_to_erase.size() == num_erased);
}

/** Get all children that spend from this tx and were received from nodeid. Sorted from most
 * recent to least recent. */
std::vector<CTransactionRef> TxOrphanageImpl::GetChildrenFromSamePeer(const CTransactionRef& parent, NodeId peer) const
{
    std::vector<CTransactionRef> children_found;
    const auto& parent_txid{parent->GetHash()};

    // Iterate through all orphans from this peer, in reverse order, so that more recent
    // transactions are added first. Doing so helps avoid work when one of the orphans replaced
    // an earlier one. Since we require the NodeId to match, one peer's announcement order does
    // not bias how we process other peers' orphans.
    auto& index_by_peer = m_orphans.get<ByPeer>();
    auto it_upper = index_by_peer.upper_bound(ByPeerView{peer, true, std::numeric_limits<uint64_t>::max()});
    auto it_lower = index_by_peer.lower_bound(ByPeerView{peer, false, 0});

    while (it_upper != it_lower) {
        --it_upper;
        if (!Assume(it_upper->m_announcer == peer)) break;
        // Check if this tx spends from parent.
        for (const auto& input : it_upper->m_tx->vin) {
            if (input.prevout.hash == parent_txid) {
                children_found.emplace_back(it_upper->m_tx);
                break;
            }
        }
    }
    return children_found;
}

std::vector<TxOrphanage::OrphanTxBase> TxOrphanageImpl::GetOrphanTransactions() const
{
    std::vector<TxOrphanage::OrphanTxBase> result;
    result.reserve(m_unique_orphans);

    auto& index_by_wtxid = m_orphans.get<ByWtxid>();
    auto it = index_by_wtxid.begin();
    std::set<NodeId> this_orphan_announcers;
    while (it != index_by_wtxid.end()) {
        this_orphan_announcers.insert(it->m_announcer);
        // If this is the last entry, or the next entry has a different wtxid, build an OrphanTxBase.
        if (std::next(it) == index_by_wtxid.end() || std::next(it)->m_tx->GetWitnessHash() != it->m_tx->GetWitnessHash()) {
            result.emplace_back(it->m_tx, std::move(this_orphan_announcers));
            this_orphan_announcers.clear();
        }

        ++it;
    }
    Assume(m_unique_orphans == result.size());

    return result;
}

void TxOrphanageImpl::SanityCheck() const
{
    std::unordered_map<NodeId, PeerDoSInfo> reconstructed_peer_info;
    std::map<Wtxid, std::pair<TxOrphanage::Usage, TxOrphanage::Count>> unique_wtxids_to_scores;
    std::set<COutPoint> all_outpoints;

    for (auto it = m_orphans.begin(); it != m_orphans.end(); ++it) {
        for (const auto& input : it->m_tx->vin) {
            all_outpoints.insert(input.prevout);
        }
        unique_wtxids_to_scores.emplace(it->m_tx->GetWitnessHash(), std::make_pair(it->GetMemUsage(), it->GetLatencyScore() - 1));

        auto& peer_info = reconstructed_peer_info[it->m_announcer];
        peer_info.m_total_usage += it->GetMemUsage();
        peer_info.m_count_announcements += 1;
        peer_info.m_total_latency_score += it->GetLatencyScore();
    }
    assert(reconstructed_peer_info.size() == m_peer_orphanage_info.size());

    // All outpoints exist in m_outpoint_to_orphan_it, all keys in m_outpoint_to_orphan_it correspond to some
    // orphan, and all wtxids referenced in m_outpoint_to_orphan_it are also in m_orphans.
    // This ensures m_outpoint_to_orphan_it is cleaned up.
    assert(all_outpoints.size() == m_outpoint_to_orphan_it.size());
    for (const auto& [outpoint, wtxid_set] : m_outpoint_to_orphan_it) {
        assert(all_outpoints.contains(outpoint));
        for (const auto& wtxid : wtxid_set) {
            assert(unique_wtxids_to_scores.contains(wtxid));
        }
    }

    // Cached m_unique_orphans value is correct.
    assert(m_orphans.size() >= m_unique_orphans);
    assert(m_orphans.size() <= m_peer_orphanage_info.size() * m_unique_orphans);
    assert(unique_wtxids_to_scores.size() == m_unique_orphans);

    const auto calculated_dedup_usage = std::accumulate(unique_wtxids_to_scores.begin(), unique_wtxids_to_scores.end(),
        TxOrphanage::Usage{0}, [](TxOrphanage::Usage sum, const auto& pair) { return sum + pair.second.first; });
    assert(calculated_dedup_usage == m_unique_orphan_usage);

    // Global usage is deduplicated, so it should be less than or equal to the sum of all per-peer usages.
    const auto summed_peer_usage = std::accumulate(m_peer_orphanage_info.begin(), m_peer_orphanage_info.end(),
        TxOrphanage::Usage{0}, [](TxOrphanage::Usage sum, const auto& pair) { return sum + pair.second.m_total_usage; });
    assert(summed_peer_usage >= m_unique_orphan_usage);

    // Cached m_unique_rounded_input_scores value is correct.
    const auto calculated_total_latency_score = std::accumulate(unique_wtxids_to_scores.begin(), unique_wtxids_to_scores.end(),
        TxOrphanage::Count{0}, [](TxOrphanage::Count sum, const auto& pair) { return sum + pair.second.second; });
    assert(calculated_total_latency_score == m_unique_rounded_input_scores);

    // Global latency score is deduplicated, so it should be less than or equal to the sum of all per-peer latency scores.
    const auto summed_peer_latency_score = std::accumulate(m_peer_orphanage_info.begin(), m_peer_orphanage_info.end(),
        TxOrphanage::Count{0}, [](TxOrphanage::Count sum, const auto& pair) { return sum + pair.second.m_total_latency_score; });
    assert(summed_peer_latency_score >= m_unique_rounded_input_scores + m_orphans.size());
}

TxOrphanage::Count TxOrphanageImpl::MaxGlobalLatencyScore() const { return m_max_global_latency_score; }
TxOrphanage::Count TxOrphanageImpl::TotalLatencyScore() const { return m_unique_rounded_input_scores + m_orphans.size(); }
TxOrphanage::Usage TxOrphanageImpl::ReservedPeerUsage() const { return m_reserved_usage_per_peer; }
TxOrphanage::Count TxOrphanageImpl::MaxPeerLatencyScore() const { return m_max_global_latency_score / std::max<unsigned int>(m_peer_orphanage_info.size(), 1); }
TxOrphanage::Usage TxOrphanageImpl::MaxGlobalUsage() const { return m_reserved_usage_per_peer * std::max<int64_t>(m_peer_orphanage_info.size(), 1); }

bool TxOrphanageImpl::NeedsTrim() const
{
    return TotalLatencyScore() > MaxGlobalLatencyScore() || TotalOrphanUsage() > MaxGlobalUsage();
}

std::unique_ptr<TxOrphanage> MakeTxOrphanage() noexcept
{
    return std::make_unique<TxOrphanageImpl>();
}

std::unique_ptr<TxOrphanage> MakeTxOrphanage(TxOrphanage::Count max_global_ann, TxOrphanage::Usage reserved_peer_usage) noexcept
{
    return std::make_unique<TxOrphanageImpl>(max_global_ann, reserved_peer_usage);
}
} // namespace node

src/node/txorphanage.h (new file, 157 lines)
@@ -0,0 +1,157 @@
// Copyright (c) 2021-2022 The Bitcoin Core developers
// Distributed under the MIT software license, see the accompanying
// file COPYING or http://www.opensource.org/licenses/mit-license.php.

#ifndef BITCOIN_NODE_TXORPHANAGE_H
#define BITCOIN_NODE_TXORPHANAGE_H

#include <consensus/validation.h>
#include <net.h>
#include <primitives/block.h>
#include <primitives/transaction.h>
#include <sync.h>
#include <util/time.h>

#include <map>
#include <set>

namespace node {
/** Default value for TxOrphanage::m_reserved_usage_per_peer. Helps limit the total amount of memory used by the orphanage. */
static constexpr int64_t DEFAULT_RESERVED_ORPHAN_WEIGHT_PER_PEER{404'000};
/** Default value for TxOrphanage::m_max_global_latency_score. Helps limit the maximum latency for operations like
 * EraseForBlock and LimitOrphans. */
static constexpr unsigned int DEFAULT_MAX_ORPHANAGE_LATENCY_SCORE{3000};

/** A class to track orphan transactions (failed on TX_MISSING_INPUTS).
 * Since we cannot distinguish orphans from bad transactions with non-existent inputs, we heavily limit the number of
 * announcements (unique (NodeId, wtxid) pairs), the number of inputs, and the size of the orphans stored (both
 * individually and summed). We also try to prevent adversaries from churning this data structure: once global limits
 * are reached, we continuously evict the oldest announcement (sorting non-reconsiderable orphans before reconsiderable
 * ones) from the most resource-intensive peer until we are back within limits.
 * - Peers can exceed their individual limits (e.g. because they are very useful transaction relay peers) as long as the
 *   global limits are not exceeded.
 * - As long as an orphan has at least 1 announcer, it remains in the orphanage.
 * - No peer can trigger the eviction of another peer's orphans.
 * - Peers' orphans are effectively protected from eviction as long as they don't exceed their limits.
 * Not thread-safe. Requires external synchronization.
 */
class TxOrphanage {
public:
    using Usage = int64_t;
    using Count = unsigned int;

    /** Allows providing orphan information externally */
    struct OrphanTxBase {
        CTransactionRef tx;
        /** Peers added with AddTx or AddAnnouncer. */
        std::set<NodeId> announcers;

        // Constructor with moved announcers
        OrphanTxBase(CTransactionRef tx, std::set<NodeId>&& announcers) :
            tx(std::move(tx)),
            announcers(std::move(announcers))
        {}
    };

    virtual ~TxOrphanage() = default;

    /** Add a new orphan transaction */
    virtual bool AddTx(const CTransactionRef& tx, NodeId peer) = 0;

    /** Add an additional announcer to an orphan if it exists. Otherwise, do nothing. */
    virtual bool AddAnnouncer(const Wtxid& wtxid, NodeId peer) = 0;

    /** Get a transaction by its witness txid */
    virtual CTransactionRef GetTx(const Wtxid& wtxid) const = 0;

    /** Check if we already have an orphan transaction (by wtxid only) */
    virtual bool HaveTx(const Wtxid& wtxid) const = 0;

    /** Check if a {tx, peer} pair exists in the orphanage. */
    virtual bool HaveTxFromPeer(const Wtxid& wtxid, NodeId peer) const = 0;

    /** Extract a transaction from a peer's work set, and flip it back to non-reconsiderable.
     * Returns nullptr if there are no transactions to work on.
     * Otherwise returns the transaction reference, and removes
     * it from the work set.
     */
    virtual CTransactionRef GetTxToReconsider(NodeId peer) = 0;

    /** Erase an orphan by wtxid, including all announcements if there are multiple.
     * Returns true if an orphan was erased, false if no tx with this wtxid exists. */
    virtual bool EraseTx(const Wtxid& wtxid) = 0;

    /** Maybe erase all orphans announced by a peer (e.g. after that peer disconnects). If an orphan
     * has been announced by another peer, don't erase, just remove this peer from the list of announcers. */
    virtual void EraseForPeer(NodeId peer) = 0;

    /** Erase all orphans included in or invalidated by a new block */
    virtual void EraseForBlock(const CBlock& block) = 0;

    /** Limit the orphanage to MaxGlobalLatencyScore and MaxGlobalUsage. */
    virtual void LimitOrphans() = 0;

    /** Add any orphans that list a particular tx as a parent into the from peer's work set */
    virtual std::vector<std::pair<Wtxid, NodeId>> AddChildrenToWorkSet(const CTransaction& tx, FastRandomContext& rng) = 0;

    /** Does this peer have any work to do? */
    virtual bool HaveTxToReconsider(NodeId peer) = 0;

    /** Get all children that spend from this tx and were received from nodeid. Sorted from most
     * recent to least recent. */
    virtual std::vector<CTransactionRef> GetChildrenFromSamePeer(const CTransactionRef& parent, NodeId nodeid) const = 0;

    /** Return how many entries exist in the orphanage */
    virtual size_t Size() const = 0;

    /** Get all orphan transactions */
    virtual std::vector<OrphanTxBase> GetOrphanTransactions() const = 0;

    /** Get the total usage (weight) of all orphans. If an orphan has multiple announcers, its usage is
     * only counted once within this total. */
    virtual Usage TotalOrphanUsage() const = 0;

    /** Total usage (weight) of orphans for which this peer is an announcer. If an orphan has multiple
     * announcers, its weight will be accounted for in each PeerOrphanInfo, so the total of all
     * peers' UsageByPeer() may be larger than TotalOrphanUsage(). Similarly, UsageByPeer() may be far higher than
     * ReservedPeerUsage(), particularly if many peers have provided the same orphans. */
    virtual Usage UsageByPeer(NodeId peer) const = 0;

    /** Check consistency between PeerOrphanInfo and m_orphans. Recalculate counters and ensure they
     * match what is cached. */
    virtual void SanityCheck() const = 0;

    /** Number of announcements, i.e. total size of m_orphans. Entries for the same wtxid are not de-duplicated.
     * Not the same as TotalLatencyScore(). */
    virtual Count CountAnnouncements() const = 0;

    /** Number of unique orphans (by wtxid). */
    virtual Count CountUniqueOrphans() const = 0;

    /** Number of announcements stored from this peer. */
    virtual Count AnnouncementsFromPeer(NodeId peer) const = 0;

    /** Latency score of transactions announced by this peer. */
    virtual Count LatencyScoreFromPeer(NodeId peer) const = 0;

    /** Get the maximum global latency score allowed */
    virtual Count MaxGlobalLatencyScore() const = 0;

    /** Get the total latency score of all orphans */
    virtual Count TotalLatencyScore() const = 0;

    /** Get the reserved usage per peer */
    virtual Usage ReservedPeerUsage() const = 0;

    /** Get the maximum latency score allowed per peer */
    virtual Count MaxPeerLatencyScore() const = 0;

    /** Get the maximum global usage allowed */
    virtual Usage MaxGlobalUsage() const = 0;
};

/** Create a new TxOrphanage instance */
std::unique_ptr<TxOrphanage> MakeTxOrphanage() noexcept;
std::unique_ptr<TxOrphanage> MakeTxOrphanage(TxOrphanage::Count max_global_ann, TxOrphanage::Usage reserved_peer_usage) noexcept;
} // namespace node
#endif // BITCOIN_NODE_TXORPHANAGE_H

@@ -832,8 +832,6 @@ static std::vector<RPCResult> OrphanDescription()
             RPCResult{RPCResult::Type::NUM, "bytes", "The serialized transaction size in bytes"},
             RPCResult{RPCResult::Type::NUM, "vsize", "The virtual transaction size as defined in BIP 141. This is different from actual serialized size for witness transactions as witness data is discounted."},
             RPCResult{RPCResult::Type::NUM, "weight", "The transaction weight as defined in BIP 141."},
-            RPCResult{RPCResult::Type::NUM_TIME, "entry", "The entry time into the orphanage expressed in " + UNIX_EPOCH_TIME},
-            RPCResult{RPCResult::Type::NUM_TIME, "expiration", "The orphan expiration time expressed in " + UNIX_EPOCH_TIME},
             RPCResult{RPCResult::Type::ARR, "from", "",
                 {
                     RPCResult{RPCResult::Type::NUM, "peer_id", "Peer ID"},
@@ -841,7 +839,7 @@ static std::vector<RPCResult> OrphanDescription()
     };
 }
 
-static UniValue OrphanToJSON(const TxOrphanage::OrphanTxBase& orphan)
+static UniValue OrphanToJSON(const node::TxOrphanage::OrphanTxBase& orphan)
 {
     UniValue o(UniValue::VOBJ);
     o.pushKV("txid", orphan.tx->GetHash().ToString());
@@ -849,8 +847,6 @@ static UniValue OrphanToJSON(const TxOrphanage::OrphanTxBase& orphan)
     o.pushKV("bytes", orphan.tx->GetTotalSize());
     o.pushKV("vsize", GetVirtualTransactionSize(*orphan.tx));
     o.pushKV("weight", GetTransactionWeight(*orphan.tx));
-    o.pushKV("entry", int64_t{TicksSinceEpoch<std::chrono::seconds>(orphan.nTimeExpire - ORPHAN_TX_EXPIRE_TIME)});
-    o.pushKV("expiration", int64_t{TicksSinceEpoch<std::chrono::seconds>(orphan.nTimeExpire)});
     UniValue from(UniValue::VARR);
     for (const auto fromPeer: orphan.announcers) {
         from.push_back(fromPeer);
@@ -899,7 +895,7 @@ static RPCHelpMan getorphantxs()
 {
     const NodeContext& node = EnsureAnyNodeContext(request.context);
     PeerManager& peerman = EnsurePeerman(node);
-    std::vector<TxOrphanage::OrphanTxBase> orphanage = peerman.GetOrphanTransactions();
+    std::vector<node::TxOrphanage::OrphanTxBase> orphanage = peerman.GetOrphanTransactions();
 
     int verbosity{ParseVerbosity(request.params[0], /*default_verbosity=*/0, /*allow_bool*/false)};
 
@@ -173,9 +173,8 @@ FUZZ_TARGET(txdownloadman, .init = initialize)
     // Initialize txdownloadman
     bilingual_str error;
     CTxMemPool pool{MemPoolOptionsForTest(g_setup->m_node), error};
-    const auto max_orphan_count = fuzzed_data_provider.ConsumeIntegralInRange<unsigned int>(0, 300);
     FastRandomContext det_rand{true};
-    node::TxDownloadManager txdownloadman{node::TxDownloadOptions{pool, det_rand, max_orphan_count, true}};
+    node::TxDownloadManager txdownloadman{node::TxDownloadOptions{pool, det_rand, true}};
 
     std::chrono::microseconds time{244466666};
 
@@ -278,14 +277,9 @@ FUZZ_TARGET(txdownloadman, .init = initialize)
 // peer without tracking anything (this is only for the txdownload_impl target).
 static bool HasRelayPermissions(NodeId peer) { return peer == 0; }
 
-static void CheckInvariants(const node::TxDownloadManagerImpl& txdownload_impl, size_t max_orphan_count)
+static void CheckInvariants(const node::TxDownloadManagerImpl& txdownload_impl)
 {
-    const TxOrphanage& orphanage = txdownload_impl.m_orphanage;
-
-    // Orphanage usage should never exceed what is allowed
-    Assert(orphanage.Size() <= max_orphan_count);
-    txdownload_impl.m_orphanage.SanityCheck();
-
+    txdownload_impl.m_orphanage->SanityCheck();
     // We should never have more than the maximum in-flight requests out for a peer.
     for (NodeId peer = 0; peer < NUM_PEERS; ++peer) {
         if (!HasRelayPermissions(peer)) {
@@ -304,9 +298,8 @@ FUZZ_TARGET(txdownloadman_impl, .init = initialize)
     // Initialize a TxDownloadManagerImpl
     bilingual_str error;
     CTxMemPool pool{MemPoolOptionsForTest(g_setup->m_node), error};
-    const auto max_orphan_count = fuzzed_data_provider.ConsumeIntegralInRange<unsigned int>(0, 300);
     FastRandomContext det_rand{true};
-    node::TxDownloadManagerImpl txdownload_impl{node::TxDownloadOptions{pool, det_rand, max_orphan_count, true}};
+    node::TxDownloadManagerImpl txdownload_impl{node::TxDownloadOptions{pool, det_rand, true}};
 
     std::chrono::microseconds time{244466666};
 
@@ -350,7 +343,7 @@ FUZZ_TARGET(txdownloadman_impl, .init = initialize)
                 block.vtx.push_back(rand_tx);
                 txdownload_impl.BlockConnected(std::make_shared<CBlock>(block));
                 // Block transactions must be removed from orphanage
-                Assert(!txdownload_impl.m_orphanage.HaveTx(rand_tx->GetWitnessHash()));
+                Assert(!txdownload_impl.m_orphanage->HaveTx(rand_tx->GetWitnessHash()));
             },
             [&] {
                 txdownload_impl.BlockDisconnected();
@@ -402,7 +395,7 @@ FUZZ_TARGET(txdownloadman_impl, .init = initialize)
                 const auto& package = maybe_package->m_txns;
                 // Parent is in m_lazy_recent_rejects_reconsiderable and child is in m_orphanage
                 Assert(txdownload_impl.RecentRejectsReconsiderableFilter().contains(rand_tx->GetWitnessHash().ToUint256()));
-                Assert(txdownload_impl.m_orphanage.HaveTx(maybe_package->m_txns.back()->GetWitnessHash()));
+                Assert(txdownload_impl.m_orphanage->HaveTx(maybe_package->m_txns.back()->GetWitnessHash()));
                 // Package has not been rejected
                 Assert(!txdownload_impl.RecentRejectsReconsiderableFilter().contains(GetPackageHash(package)));
                 // Neither is in m_lazy_recent_rejects
@@ -437,7 +430,7 @@ FUZZ_TARGET(txdownloadman_impl, .init = initialize)
             if (fuzzed_data_provider.ConsumeBool()) time_skip *= -1;
             time += time_skip;
         }
-        CheckInvariants(txdownload_impl, max_orphan_count);
+        CheckInvariants(txdownload_impl);
         // Disconnect everybody, check that all data structures are empty.
         for (NodeId nodeid = 0; nodeid < NUM_PEERS; ++nodeid) {
             txdownload_impl.DisconnectedPeer(nodeid);
 
@@ -6,6 +6,7 @@
 #include <consensus/validation.h>
 #include <net_processing.h>
 #include <node/eviction.h>
+#include <node/txorphanage.h>
 #include <policy/policy.h>
 #include <primitives/transaction.h>
 #include <script/script.h>
@@ -14,12 +15,16 @@
 #include <test/fuzz/fuzz.h>
 #include <test/fuzz/util.h>
 #include <test/util/setup_common.h>
-#include <txorphanage.h>
 #include <uint256.h>
 #include <util/check.h>
 #include <util/feefrac.h>
 #include <util/time.h>
 
 #include <algorithm>
 #include <bitset>
 #include <cmath>
 #include <cstdint>
 #include <iostream>
 #include <memory>
 #include <set>
 #include <utility>
@@ -32,11 +37,12 @@ void initialize_orphanage()
 
 FUZZ_TARGET(txorphan, .init = initialize_orphanage)
 {
     SeedRandomStateForTest(SeedRand::ZEROS);
     FuzzedDataProvider fuzzed_data_provider(buffer.data(), buffer.size());
-    FastRandomContext orphanage_rng{/*fDeterministic=*/true};
+    FastRandomContext orphanage_rng{ConsumeUInt256(fuzzed_data_provider)};
     SetMockTime(ConsumeTime(fuzzed_data_provider));
 
-    TxOrphanage orphanage;
+    auto orphanage = node::MakeTxOrphanage();
     std::vector<COutPoint> outpoints; // Duplicates are tolerated
     outpoints.reserve(200'000);
 
@@ -49,7 +55,7 @@ FUZZ_TARGET(txorphan, .init = initialize_orphanage)
 
     std::vector<CTransactionRef> tx_history;
 
-    LIMITED_WHILE(outpoints.size() < 200'000 && fuzzed_data_provider.ConsumeBool(), 10 * DEFAULT_MAX_ORPHAN_TRANSACTIONS)
+    LIMITED_WHILE(outpoints.size() < 200'000 && fuzzed_data_provider.ConsumeBool(), 1000)
     {
         // construct transaction
         const CTransactionRef tx = [&] {
@@ -85,11 +91,11 @@ FUZZ_TARGET(txorphan, .init = initialize_orphanage)
         // previous loop and potentially the parent of this tx.
         if (ptx_potential_parent) {
             // Set up future GetTxToReconsider call.
-            orphanage.AddChildrenToWorkSet(*ptx_potential_parent, orphanage_rng);
+            orphanage->AddChildrenToWorkSet(*ptx_potential_parent, orphanage_rng);
 
             // Check that all txns returned from GetChildrenFrom* are indeed a direct child of this tx.
             NodeId peer_id = fuzzed_data_provider.ConsumeIntegral<NodeId>();
-            for (const auto& child : orphanage.GetChildrenFromSamePeer(ptx_potential_parent, peer_id)) {
+            for (const auto& child : orphanage->GetChildrenFromSamePeer(ptx_potential_parent, peer_id)) {
                 assert(std::any_of(child->vin.cbegin(), child->vin.cend(), [&](const auto& input) {
                     return input.prevout.hash == ptx_potential_parent->GetHash();
                 }));
@@ -97,63 +103,63 @@ FUZZ_TARGET(txorphan, .init = initialize_orphanage)
         }
 
         // trigger orphanage functions
-        LIMITED_WHILE(fuzzed_data_provider.ConsumeBool(), 10 * DEFAULT_MAX_ORPHAN_TRANSACTIONS)
+        LIMITED_WHILE(fuzzed_data_provider.ConsumeBool(), 1000)
         {
             NodeId peer_id = fuzzed_data_provider.ConsumeIntegral<NodeId>();
-            const auto total_bytes_start{orphanage.TotalOrphanUsage()};
-            const auto total_peer_bytes_start{orphanage.UsageByPeer(peer_id)};
+            const auto total_bytes_start{orphanage->TotalOrphanUsage()};
+            const auto total_peer_bytes_start{orphanage->UsageByPeer(peer_id)};
             const auto tx_weight{GetTransactionWeight(*tx)};
 
             CallOneOf(
                 fuzzed_data_provider,
                 [&] {
                     {
-                        CTransactionRef ref = orphanage.GetTxToReconsider(peer_id);
+                        CTransactionRef ref = orphanage->GetTxToReconsider(peer_id);
                         if (ref) {
-                            Assert(orphanage.HaveTx(ref->GetWitnessHash()));
+                            Assert(orphanage->HaveTx(ref->GetWitnessHash()));
                         }
                     }
                 },
                 [&] {
-                    bool have_tx = orphanage.HaveTx(tx->GetWitnessHash());
+                    bool have_tx = orphanage->HaveTx(tx->GetWitnessHash());
                     // AddTx should return false if tx is too big or already have it
                     // tx weight is unknown, we only check when tx is already in orphanage
                     {
-                        bool add_tx = orphanage.AddTx(tx, peer_id);
+                        bool add_tx = orphanage->AddTx(tx, peer_id);
                         // have_tx == true -> add_tx == false
                         Assert(!have_tx || !add_tx);
 
                         if (add_tx) {
-                            Assert(orphanage.UsageByPeer(peer_id) == tx_weight + total_peer_bytes_start);
-                            Assert(orphanage.TotalOrphanUsage() == tx_weight + total_bytes_start);
+                            Assert(orphanage->UsageByPeer(peer_id) == tx_weight + total_peer_bytes_start);
+                            Assert(orphanage->TotalOrphanUsage() == tx_weight + total_bytes_start);
                             Assert(tx_weight <= MAX_STANDARD_TX_WEIGHT);
                         } else {
                             // Peer may have been added as an announcer.
-                            if (orphanage.UsageByPeer(peer_id) == tx_weight + total_peer_bytes_start) {
-                                Assert(orphanage.HaveTxFromPeer(wtxid, peer_id));
+                            if (orphanage->UsageByPeer(peer_id) == tx_weight + total_peer_bytes_start) {
+                                Assert(orphanage->HaveTxFromPeer(wtxid, peer_id));
                             } else {
                                 // Otherwise, there must not be any change to the peer byte count.
-                                Assert(orphanage.UsageByPeer(peer_id) == total_peer_bytes_start);
+                                Assert(orphanage->UsageByPeer(peer_id) == total_peer_bytes_start);
                             }
 
                             // Regardless, total bytes should not have changed.
-                            Assert(orphanage.TotalOrphanUsage() == total_bytes_start);
+                            Assert(orphanage->TotalOrphanUsage() == total_bytes_start);
                         }
                     }
-                    have_tx = orphanage.HaveTx(tx->GetWitnessHash());
+                    have_tx = orphanage->HaveTx(tx->GetWitnessHash());
                     {
-                        bool add_tx = orphanage.AddTx(tx, peer_id);
+                        bool add_tx = orphanage->AddTx(tx, peer_id);
|
||||
// if have_tx is still false, it must be too big
|
||||
Assert(!have_tx == (tx_weight > MAX_STANDARD_TX_WEIGHT));
|
||||
Assert(!have_tx || !add_tx);
|
||||
}
|
||||
},
|
||||
[&] {
|
||||
bool have_tx = orphanage.HaveTx(tx->GetWitnessHash());
|
||||
bool have_tx_and_peer = orphanage.HaveTxFromPeer(tx->GetWitnessHash(), peer_id);
|
||||
bool have_tx = orphanage->HaveTx(tx->GetWitnessHash());
|
||||
bool have_tx_and_peer = orphanage->HaveTxFromPeer(tx->GetWitnessHash(), peer_id);
|
||||
// AddAnnouncer should return false if tx doesn't exist or we already HaveTxFromPeer.
|
||||
{
|
||||
bool added_announcer = orphanage.AddAnnouncer(tx->GetWitnessHash(), peer_id);
|
||||
bool added_announcer = orphanage->AddAnnouncer(tx->GetWitnessHash(), peer_id);
|
||||
// have_tx == false -> added_announcer == false
|
||||
Assert(have_tx || !added_announcer);
|
||||
// have_tx_and_peer == true -> added_announcer == false
|
||||
@ -161,43 +167,43 @@ FUZZ_TARGET(txorphan, .init = initialize_orphanage)
|
||||
|
||||
// Total bytes should not have changed. If peer was added as announcer, byte
|
||||
// accounting must have been updated.
|
||||
Assert(orphanage.TotalOrphanUsage() == total_bytes_start);
|
||||
Assert(orphanage->TotalOrphanUsage() == total_bytes_start);
|
||||
if (added_announcer) {
|
||||
Assert(orphanage.UsageByPeer(peer_id) == tx_weight + total_peer_bytes_start);
|
||||
Assert(orphanage->UsageByPeer(peer_id) == tx_weight + total_peer_bytes_start);
|
||||
} else {
|
||||
Assert(orphanage.UsageByPeer(peer_id) == total_peer_bytes_start);
|
||||
Assert(orphanage->UsageByPeer(peer_id) == total_peer_bytes_start);
|
||||
}
|
||||
}
|
||||
},
|
||||
[&] {
|
||||
bool have_tx = orphanage.HaveTx(tx->GetWitnessHash());
|
||||
bool have_tx_and_peer{orphanage.HaveTxFromPeer(wtxid, peer_id)};
|
||||
bool have_tx = orphanage->HaveTx(tx->GetWitnessHash());
|
||||
bool have_tx_and_peer{orphanage->HaveTxFromPeer(wtxid, peer_id)};
|
||||
// EraseTx should return 0 if m_orphans doesn't have the tx
|
||||
{
|
||||
auto bytes_from_peer_before{orphanage.UsageByPeer(peer_id)};
|
||||
Assert(have_tx == orphanage.EraseTx(tx->GetWitnessHash()));
|
||||
auto bytes_from_peer_before{orphanage->UsageByPeer(peer_id)};
|
||||
Assert(have_tx == orphanage->EraseTx(tx->GetWitnessHash()));
|
||||
if (have_tx) {
|
||||
Assert(orphanage.TotalOrphanUsage() == total_bytes_start - tx_weight);
|
||||
Assert(orphanage->TotalOrphanUsage() == total_bytes_start - tx_weight);
|
||||
if (have_tx_and_peer) {
|
||||
Assert(orphanage.UsageByPeer(peer_id) == bytes_from_peer_before - tx_weight);
|
||||
Assert(orphanage->UsageByPeer(peer_id) == bytes_from_peer_before - tx_weight);
|
||||
} else {
|
||||
Assert(orphanage.UsageByPeer(peer_id) == bytes_from_peer_before);
|
||||
Assert(orphanage->UsageByPeer(peer_id) == bytes_from_peer_before);
|
||||
}
|
||||
} else {
|
||||
Assert(orphanage.TotalOrphanUsage() == total_bytes_start);
|
||||
Assert(orphanage->TotalOrphanUsage() == total_bytes_start);
|
||||
}
|
||||
}
|
||||
have_tx = orphanage.HaveTx(tx->GetWitnessHash());
|
||||
have_tx_and_peer = orphanage.HaveTxFromPeer(wtxid, peer_id);
|
||||
have_tx = orphanage->HaveTx(tx->GetWitnessHash());
|
||||
have_tx_and_peer = orphanage->HaveTxFromPeer(wtxid, peer_id);
|
||||
// have_tx should be false and EraseTx should fail
|
||||
{
|
||||
Assert(!have_tx && !have_tx_and_peer && !orphanage.EraseTx(wtxid));
|
||||
Assert(!have_tx && !have_tx_and_peer && !orphanage->EraseTx(wtxid));
|
||||
}
|
||||
},
|
||||
[&] {
|
||||
orphanage.EraseForPeer(peer_id);
|
||||
Assert(!orphanage.HaveTxFromPeer(tx->GetWitnessHash(), peer_id));
|
||||
Assert(orphanage.UsageByPeer(peer_id) == 0);
|
||||
orphanage->EraseForPeer(peer_id);
|
||||
Assert(!orphanage->HaveTxFromPeer(tx->GetWitnessHash(), peer_id));
|
||||
Assert(orphanage->UsageByPeer(peer_id) == 0);
|
||||
},
|
||||
[&] {
|
||||
// Make a block out of txs and then EraseForBlock
|
||||
@ -207,18 +213,16 @@ FUZZ_TARGET(txorphan, .init = initialize_orphanage)
|
||||
auto& tx_to_remove = PickValue(fuzzed_data_provider, tx_history);
|
||||
block.vtx.push_back(tx_to_remove);
|
||||
}
|
||||
orphanage.EraseForBlock(block);
|
||||
orphanage->EraseForBlock(block);
|
||||
for (const auto& tx_removed : block.vtx) {
|
||||
Assert(!orphanage.HaveTx(tx_removed->GetWitnessHash()));
|
||||
Assert(!orphanage.HaveTxFromPeer(tx_removed->GetWitnessHash(), peer_id));
|
||||
Assert(!orphanage->HaveTx(tx_removed->GetWitnessHash()));
|
||||
Assert(!orphanage->HaveTxFromPeer(tx_removed->GetWitnessHash(), peer_id));
|
||||
}
|
||||
},
|
||||
[&] {
|
||||
// test mocktime and expiry
|
||||
SetMockTime(ConsumeTime(fuzzed_data_provider));
|
||||
auto limit = fuzzed_data_provider.ConsumeIntegral<unsigned int>();
|
||||
orphanage.LimitOrphans(limit, orphanage_rng);
|
||||
Assert(orphanage.Size() <= limit);
|
||||
orphanage->LimitOrphans();
|
||||
});
|
||||
|
||||
}
|
||||
@ -228,9 +232,645 @@ FUZZ_TARGET(txorphan, .init = initialize_orphanage)
|
||||
ptx_potential_parent = tx;
|
||||
}
|
||||
|
||||
const bool have_tx{orphanage.HaveTx(tx->GetWitnessHash())};
|
||||
const bool get_tx_nonnull{orphanage.GetTx(tx->GetWitnessHash()) != nullptr};
|
||||
const bool have_tx{orphanage->HaveTx(tx->GetWitnessHash())};
|
||||
const bool get_tx_nonnull{orphanage->GetTx(tx->GetWitnessHash()) != nullptr};
|
||||
Assert(have_tx == get_tx_nonnull);
|
||||
}
|
||||
orphanage.SanityCheck();
|
||||
orphanage->SanityCheck();
|
||||
}
+
+FUZZ_TARGET(txorphan_protected, .init = initialize_orphanage)
+{
+    SeedRandomStateForTest(SeedRand::ZEROS);
+    FuzzedDataProvider fuzzed_data_provider(buffer.data(), buffer.size());
+    FastRandomContext orphanage_rng{ConsumeUInt256(fuzzed_data_provider)};
+    SetMockTime(ConsumeTime(fuzzed_data_provider));
+
+    // We have num_peers peers. Some subset of them will never exceed their reserved weight or announcement count, and
+    // should therefore never have any orphans evicted.
+    const unsigned int MAX_PEERS = 125;
+    const unsigned int num_peers = fuzzed_data_provider.ConsumeIntegralInRange<unsigned int>(1, MAX_PEERS);
+    // Generate a vector of bools for whether each peer is protected from eviction
+    std::bitset<MAX_PEERS> protected_peers;
+    for (unsigned int i = 0; i < num_peers; i++) {
+        protected_peers.set(i, fuzzed_data_provider.ConsumeBool());
+    }
+
+    // Params for orphanage.
+    const unsigned int global_latency_score_limit = fuzzed_data_provider.ConsumeIntegralInRange<unsigned int>(num_peers, 6'000);
+    const int64_t per_peer_weight_reservation = fuzzed_data_provider.ConsumeIntegralInRange<int64_t>(1, 4'040'000);
+    auto orphanage = node::MakeTxOrphanage(global_latency_score_limit, per_peer_weight_reservation);
+
+    // The actual limit, MaxPeerLatencyScore(), may be higher, since TxOrphanage only counts peers
+    // that have announced an orphan. The honest peer will not experience evictions if it never
+    // exceeds this.
+    const unsigned int honest_latency_limit = global_latency_score_limit / num_peers;
+    // Honest peer will not experience evictions if it never exceeds this.
+    const int64_t honest_mem_limit = per_peer_weight_reservation;
+
+    std::vector<COutPoint> outpoints; // Duplicates are tolerated
+    outpoints.reserve(400);
+
+    // initial outpoints used to construct transactions later
+    for (uint8_t i = 0; i < 4; i++) {
+        outpoints.emplace_back(Txid::FromUint256(uint256{i}), 0);
+    }
+
+    // These are the honest peer's live announcements. We expect them to be protected from eviction.
+    std::set<Wtxid> protected_wtxids;
+
+    LIMITED_WHILE(outpoints.size() < 400 && fuzzed_data_provider.ConsumeBool(), 1000)
+    {
+        // construct transaction
+        const CTransactionRef tx = [&] {
+            CMutableTransaction tx_mut;
+            const auto num_in = fuzzed_data_provider.ConsumeIntegralInRange<uint32_t>(1, outpoints.size());
+            const auto num_out = fuzzed_data_provider.ConsumeIntegralInRange<uint32_t>(1, 256);
+            // pick outpoints from outpoints as input. We allow input duplicates on purpose, given we are not
+            // running any transaction validation logic before adding transactions to the orphanage
+            tx_mut.vin.reserve(num_in);
+            for (uint32_t i = 0; i < num_in; i++) {
+                auto& prevout = PickValue(fuzzed_data_provider, outpoints);
+                // try making transactions unique by setting a random nSequence, but allow duplicate transactions if they happen
+                tx_mut.vin.emplace_back(prevout, CScript{}, fuzzed_data_provider.ConsumeIntegralInRange<uint32_t>(0, CTxIn::SEQUENCE_FINAL));
+            }
+            // output amount or spendability will not affect txorphanage
+            tx_mut.vout.reserve(num_out);
+            for (uint32_t i = 0; i < num_out; i++) {
+                const auto payload_size = fuzzed_data_provider.ConsumeIntegralInRange<unsigned int>(0, 100000);
+                if (payload_size) {
+                    tx_mut.vout.emplace_back(0, CScript() << OP_RETURN << std::vector<unsigned char>(payload_size));
+                } else {
+                    tx_mut.vout.emplace_back(0, CScript{});
+                }
+            }
+            auto new_tx = MakeTransactionRef(tx_mut);
+            // add newly constructed outpoints to the coin pool
+            for (uint32_t i = 0; i < num_out; i++) {
+                outpoints.emplace_back(new_tx->GetHash(), i);
+            }
+            return new_tx;
+        }();
+
+        const auto wtxid{tx->GetWitnessHash()};
+
+        // orphanage functions
+        LIMITED_WHILE(fuzzed_data_provider.remaining_bytes(), 10 * global_latency_score_limit)
+        {
+            NodeId peer_id = fuzzed_data_provider.ConsumeIntegralInRange<NodeId>(0, num_peers - 1);
+            const auto tx_weight{GetTransactionWeight(*tx)};
+
+            // This protected peer will never send orphans that would
+            // exceed their own personal allotment, so is never evicted.
+            const bool peer_is_protected{protected_peers[peer_id]};
+
+            CallOneOf(
+                fuzzed_data_provider,
+                [&] { // AddTx
+                    bool have_tx_and_peer = orphanage->HaveTxFromPeer(wtxid, peer_id);
+                    if (peer_is_protected && !have_tx_and_peer &&
+                        (orphanage->UsageByPeer(peer_id) + tx_weight > honest_mem_limit ||
+                         orphanage->LatencyScoreFromPeer(peer_id) + (tx->vin.size() / 10) + 1 > honest_latency_limit)) {
+                        // We never want our protected peer oversized or over-announced
+                    } else {
+                        orphanage->AddTx(tx, peer_id);
+                        if (peer_is_protected && orphanage->HaveTxFromPeer(wtxid, peer_id)) {
+                            protected_wtxids.insert(wtxid);
+                        }
+                    }
+                },
+                [&] { // AddAnnouncer
+                    bool have_tx_and_peer = orphanage->HaveTxFromPeer(tx->GetWitnessHash(), peer_id);
+                    // AddAnnouncer should return false if tx doesn't exist or we already HaveTxFromPeer.
+                    {
+                        if (peer_is_protected && !have_tx_and_peer &&
+                            (orphanage->UsageByPeer(peer_id) + tx_weight > honest_mem_limit ||
+                             orphanage->LatencyScoreFromPeer(peer_id) + (tx->vin.size()) + 1 > honest_latency_limit)) {
+                            // We never want our protected peer oversized
+                        } else {
+                            orphanage->AddAnnouncer(tx->GetWitnessHash(), peer_id);
+                            if (peer_is_protected && orphanage->HaveTxFromPeer(wtxid, peer_id)) {
+                                protected_wtxids.insert(wtxid);
+                            }
+                        }
+                    }
+                },
+                [&] { // EraseTx
+                    if (protected_wtxids.count(tx->GetWitnessHash())) {
+                        protected_wtxids.erase(wtxid);
+                    }
+                    orphanage->EraseTx(wtxid);
+                    Assert(!orphanage->HaveTx(wtxid));
+                },
+                [&] { // EraseForPeer
+                    if (!protected_peers[peer_id]) {
+                        orphanage->EraseForPeer(peer_id);
+                        Assert(orphanage->UsageByPeer(peer_id) == 0);
+                        Assert(orphanage->LatencyScoreFromPeer(peer_id) == 0);
+                        Assert(orphanage->AnnouncementsFromPeer(peer_id) == 0);
+                    }
+                },
+                [&] { // LimitOrphans
+                    // Assert that protected peers are never affected by LimitOrphans.
+                    unsigned int protected_count = 0;
+                    unsigned int protected_bytes = 0;
+                    for (unsigned int peer = 0; peer < num_peers; ++peer) {
+                        if (protected_peers[peer]) {
+                            protected_count += orphanage->LatencyScoreFromPeer(peer);
+                            protected_bytes += orphanage->UsageByPeer(peer);
+                        }
+                    }
+                    orphanage->LimitOrphans();
+                    Assert(orphanage->TotalLatencyScore() <= global_latency_score_limit);
+                    Assert(orphanage->TotalOrphanUsage() <= per_peer_weight_reservation * num_peers);
+
+                    // Number of announcements and usage should never differ before and after since
+                    // we've never exceeded the per-peer reservations.
+                    for (unsigned int peer = 0; peer < num_peers; ++peer) {
+                        if (protected_peers[peer]) {
+                            protected_count -= orphanage->LatencyScoreFromPeer(peer);
+                            protected_bytes -= orphanage->UsageByPeer(peer);
+                        }
+                    }
+                    Assert(protected_count == 0);
+                    Assert(protected_bytes == 0);
+                });
+
+        }
+    }
+
+    orphanage->SanityCheck();
+    // All of the honest peer's announcements are still present.
+    for (const auto& wtxid : protected_wtxids) {
+        Assert(orphanage->HaveTx(wtxid));
+    }
+}
+
+FUZZ_TARGET(txorphanage_sim)
+{
+    SeedRandomStateForTest(SeedRand::ZEROS);
+    // This is a comprehensive simulation fuzz test, which runs through a scenario involving up to
+    // 16 transactions (which may have simple or complex topology, and may have duplicate txids
+    // with distinct wtxids) and up to 16 peers. The scenario is performed on a real TxOrphanage
+    // object, whose behavior is compared with a naive reimplementation (just a vector of
+    // announcements) where possible, and tested for desired properties where not possible.
+
+    //
+    // 1. Setup.
+    //
+
+    /** The total number of transactions this simulation uses (not all of which will necessarily
+     *  be present in the orphanage at once). */
+    static constexpr unsigned NUM_TX = 16;
+    /** The number of peers this simulation uses (not all of which will necessarily be present in
+     *  the orphanage at once). */
+    static constexpr unsigned NUM_PEERS = 16;
+    /** The maximum number of announcements this simulation uses (which may be higher than the
+     *  number permitted inside the orphanage). */
+    static constexpr unsigned MAX_ANN = 64;
+
+    FuzzedDataProvider provider(buffer.data(), buffer.size());
+    /** Local RNG. Only used for topology/sizes of the transaction set, the order of transactions
+     *  in EraseForBlock, and for the randomness passed to AddChildrenToWorkSet. */
+    InsecureRandomContext rng(provider.ConsumeIntegral<uint64_t>());
+
+    //
+    // 2. Construct an interesting set of 16 transactions.
+    //
+
+    // - Pick a topological order among the transactions.
+    std::vector<unsigned> txorder(NUM_TX);
+    std::iota(txorder.begin(), txorder.end(), unsigned{0});
+    std::shuffle(txorder.begin(), txorder.end(), rng);
+    // - Pick a set of dependencies (pair<child_index, parent_index>).
+    std::vector<std::pair<unsigned, unsigned>> deps;
+    deps.reserve((NUM_TX * (NUM_TX - 1)) / 2);
+    for (unsigned p = 0; p < NUM_TX - 1; ++p) {
+        for (unsigned c = p + 1; c < NUM_TX; ++c) {
+            deps.emplace_back(c, p);
+        }
+    }
+    std::shuffle(deps.begin(), deps.end(), rng);
+    deps.resize(provider.ConsumeIntegralInRange<unsigned>(0, NUM_TX * 4 - 1));
+    // - Construct the actual transactions.
+    std::set<Wtxid> wtxids;
+    std::vector<CTransactionRef> txn(NUM_TX);
+    node::TxOrphanage::Usage total_usage{0};
+    for (unsigned t = 0; t < NUM_TX; ++t) {
+        CMutableTransaction tx;
+        if (t > 0 && rng.randrange(4) == 0) {
+            // Occasionally duplicate the previous transaction, so that repetitions of the same
+            // txid are possible (with different wtxid).
+            tx = CMutableTransaction(*txn[txorder[t - 1]]);
+        } else {
+            tx.version = 1;
+            tx.nLockTime = 0xffffffff;
+            // Construct 1 to 16 outputs.
+            auto num_outputs = rng.randrange<unsigned>(1 << rng.randrange<unsigned>(5)) + 1;
+            for (unsigned output = 0; output < num_outputs; ++output) {
+                CScript scriptpubkey;
+                scriptpubkey.resize(provider.ConsumeIntegralInRange<unsigned>(20, 34));
+                tx.vout.emplace_back(CAmount{0}, std::move(scriptpubkey));
+            }
+            // Construct inputs (one for each dependency).
+            for (auto& [child, parent] : deps) {
+                if (child == t) {
+                    auto& partx = txn[txorder[parent]];
+                    assert(partx->version == 1);
+                    COutPoint outpoint(partx->GetHash(), rng.randrange<size_t>(partx->vout.size()));
+                    tx.vin.emplace_back(outpoint);
+                    tx.vin.back().scriptSig.resize(provider.ConsumeIntegralInRange<unsigned>(16, 200));
+                }
+            }
+            // Construct fallback input in case there are no dependencies.
+            if (tx.vin.empty()) {
+                COutPoint outpoint(Txid::FromUint256(rng.rand256()), rng.randrange<size_t>(16));
+                tx.vin.emplace_back(outpoint);
+                tx.vin.back().scriptSig.resize(provider.ConsumeIntegralInRange<unsigned>(16, 200));
+            }
+        }
+        // Optionally modify the witness (allowing wtxid != txid), and certainly when the wtxid
+        // already exists.
+        while (wtxids.contains(CTransaction(tx).GetWitnessHash()) || rng.randrange(4) == 0) {
+            auto& input = tx.vin[rng.randrange(tx.vin.size())];
+            if (rng.randbool()) {
+                input.scriptWitness.stack.resize(1);
+                input.scriptWitness.stack[0].resize(rng.randrange(100));
+            } else {
+                input.scriptWitness.stack.resize(0);
+            }
+        }
+        // Convert to CTransactionRef.
+        txn[txorder[t]] = MakeTransactionRef(std::move(tx));
+        wtxids.insert(txn[txorder[t]]->GetWitnessHash());
+        auto weight = GetTransactionWeight(*txn[txorder[t]]);
+        assert(weight < MAX_STANDARD_TX_WEIGHT);
+        total_usage += GetTransactionWeight(*txn[txorder[t]]);
+    }
+
+    //
+    // 3. Initialize real orphanage
+    //
+
+    auto max_global_ann = provider.ConsumeIntegralInRange<node::TxOrphanage::Count>(NUM_PEERS, MAX_ANN);
+    auto reserved_peer_usage = provider.ConsumeIntegralInRange<node::TxOrphanage::Usage>(1, total_usage);
+    auto real = node::MakeTxOrphanage(max_global_ann, reserved_peer_usage);
+
+    //
+    // 4. Functions and data structures for the simulation.
+    //
+
+    /** Data structure representing one announcement (pair of (tx, peer)), plus whether it's
+     *  reconsiderable or not. */
+    struct SimAnnouncement
+    {
+        unsigned tx;
+        NodeId announcer;
+        bool reconsider{false};
+        SimAnnouncement(unsigned tx_in, NodeId announcer_in, bool reconsider_in) noexcept :
+            tx(tx_in), announcer(announcer_in), reconsider(reconsider_in) {}
+    };
+    /** The entire simulated orphanage is represented by this list of announcements, in
+     *  announcement order (unlike TxOrphanageImpl, which uses a sequence number to represent
+     *  announcement order). New announcements are added to the back. */
+    std::vector<SimAnnouncement> sim_announcements;
+
+    /** Consume a transaction (index into txn) from provider. */
+    auto read_tx_fn = [&]() -> unsigned { return provider.ConsumeIntegralInRange<unsigned>(0, NUM_TX - 1); };
+    /** Consume a NodeId from provider. */
+    auto read_peer_fn = [&]() -> NodeId { return provider.ConsumeIntegralInRange<unsigned>(0, NUM_PEERS - 1); };
+    /** Consume both a transaction (index into txn) and a NodeId from provider. */
+    auto read_tx_peer_fn = [&]() -> std::pair<unsigned, NodeId> {
+        auto code = provider.ConsumeIntegralInRange<unsigned>(0, NUM_TX * NUM_PEERS - 1);
+        return {code % NUM_TX, code / NUM_TX};
+    };
+    /** Determine if we have any announcements of the given transaction in the simulation. */
+    auto have_tx_fn = [&](unsigned tx) -> bool {
+        for (auto& ann : sim_announcements) {
+            if (ann.tx == tx) return true;
+        }
+        return false;
+    };
+    /** Count the number of peers in the simulation. */
+    auto count_peers_fn = [&]() -> unsigned {
+        std::bitset<NUM_PEERS> mask;
+        for (auto& ann : sim_announcements) {
+            mask.set(ann.announcer);
+        }
+        return mask.count();
+    };
+    /** Determine if we have any reconsiderable announcements of a given transaction. */
+    auto have_reconsiderable_fn = [&](unsigned tx) -> bool {
+        for (auto& ann : sim_announcements) {
+            if (ann.reconsider && ann.tx == tx) return true;
+        }
+        return false;
+    };
+    /** Determine if a peer has any transactions to reconsider. */
+    auto have_reconsider_fn = [&](NodeId peer) -> bool {
+        for (auto& ann : sim_announcements) {
+            if (ann.reconsider && ann.announcer == peer) return true;
+        }
+        return false;
+    };
+    /** Get an iterator to an existing (wtxid, peer) pair in the simulation. */
+    auto find_announce_wtxid_fn = [&](const Wtxid& wtxid, NodeId peer) -> std::vector<SimAnnouncement>::iterator {
+        for (auto it = sim_announcements.begin(); it != sim_announcements.end(); ++it) {
+            if (txn[it->tx]->GetWitnessHash() == wtxid && it->announcer == peer) return it;
+        }
+        return sim_announcements.end();
+    };
+    /** Get an iterator to an existing (tx, peer) pair in the simulation. */
+    auto find_announce_fn = [&](unsigned tx, NodeId peer) {
+        for (auto it = sim_announcements.begin(); it != sim_announcements.end(); ++it) {
+            if (it->tx == tx && it->announcer == peer) return it;
+        }
+        return sim_announcements.end();
+    };
+    /** Compute a peer's DoS score according to simulation data. */
+    auto dos_score_fn = [&](NodeId peer, int32_t max_count, int32_t max_usage) -> FeeFrac {
+        int64_t count{0};
+        int64_t usage{0};
+        for (auto& ann : sim_announcements) {
+            if (ann.announcer != peer) continue;
+            count += 1 + (txn[ann.tx]->vin.size() / 10);
+            usage += GetTransactionWeight(*txn[ann.tx]);
+        }
+        return std::max(FeeFrac{count, max_count}, FeeFrac{usage, max_usage});
+    };
+
+    //
+    // 5. Run through a scenario of mutators on both real and simulated orphanage.
+    //
+
+    LIMITED_WHILE(provider.remaining_bytes() > 0, 200) {
+        int command = provider.ConsumeIntegralInRange<uint8_t>(0, 15);
+        while (true) {
+            if (sim_announcements.size() < MAX_ANN && command-- == 0) {
+                // AddTx
+                auto [tx, peer] = read_tx_peer_fn();
+                bool added = real->AddTx(txn[tx], peer);
+                bool sim_have_tx = have_tx_fn(tx);
+                assert(added == !sim_have_tx);
+                if (find_announce_fn(tx, peer) == sim_announcements.end()) {
+                    sim_announcements.emplace_back(tx, peer, false);
+                }
+                break;
+            } else if (sim_announcements.size() < MAX_ANN && command-- == 0) {
+                // AddAnnouncer
+                auto [tx, peer] = read_tx_peer_fn();
+                bool added = real->AddAnnouncer(txn[tx]->GetWitnessHash(), peer);
+                bool sim_have_tx = have_tx_fn(tx);
+                auto sim_it = find_announce_fn(tx, peer);
+                assert(added == (sim_it == sim_announcements.end() && sim_have_tx));
+                if (added) {
+                    sim_announcements.emplace_back(tx, peer, false);
+                }
+                break;
+            } else if (command-- == 0) {
+                // EraseTx
+                auto tx = read_tx_fn();
+                bool erased = real->EraseTx(txn[tx]->GetWitnessHash());
+                bool sim_have = have_tx_fn(tx);
+                assert(erased == sim_have);
+                std::erase_if(sim_announcements, [&](auto& ann) { return ann.tx == tx; });
+                break;
+            } else if (command-- == 0) {
+                // EraseForPeer
+                auto peer = read_peer_fn();
+                real->EraseForPeer(peer);
+                std::erase_if(sim_announcements, [&](auto& ann) { return ann.announcer == peer; });
+                break;
+            } else if (command-- == 0) {
+                // EraseForBlock
+                auto pattern = provider.ConsumeIntegralInRange<uint64_t>(0, (uint64_t{1} << NUM_TX) - 1);
+                CBlock block;
+                std::set<COutPoint> spent;
+                for (unsigned tx = 0; tx < NUM_TX; ++tx) {
+                    if ((pattern >> tx) & 1) {
+                        block.vtx.emplace_back(txn[tx]);
+                        for (auto& txin : block.vtx.back()->vin) {
+                            spent.insert(txin.prevout);
+                        }
+                    }
+                }
+                std::shuffle(block.vtx.begin(), block.vtx.end(), rng);
+                real->EraseForBlock(block);
+                std::erase_if(sim_announcements, [&](auto& ann) {
+                    for (auto& txin : txn[ann.tx]->vin) {
+                        if (spent.count(txin.prevout)) return true;
+                    }
+                    return false;
+                });
+                break;
+            } else if (command-- == 0) {
+                // AddChildrenToWorkSet
+                auto tx = read_tx_fn();
+                FastRandomContext rand_ctx(rng.rand256());
+                auto added = real->AddChildrenToWorkSet(*txn[tx], rand_ctx);
+                /** Map of all child wtxids, with value whether they already have a reconsiderable
+                 *  announcement from some peer. */
+                std::map<Wtxid, bool> child_wtxids;
+                for (unsigned child_tx = 0; child_tx < NUM_TX; ++child_tx) {
+                    if (!have_tx_fn(child_tx)) continue;
+                    bool child_of = false;
+                    for (auto& txin : txn[child_tx]->vin) {
+                        if (txin.prevout.hash == txn[tx]->GetHash()) {
+                            child_of = true;
+                            break;
+                        }
+                    }
+                    if (child_of) {
+                        child_wtxids[txn[child_tx]->GetWitnessHash()] = have_reconsiderable_fn(child_tx);
+                    }
+                }
+                for (auto& [wtxid, peer] : added) {
+                    // Wtxid must be a child of tx.
+                    auto child_wtxid_it = child_wtxids.find(wtxid);
+                    assert(child_wtxid_it != child_wtxids.end());
+                    // Announcement must exist.
+                    auto sim_ann_it = find_announce_wtxid_fn(wtxid, peer);
+                    assert(sim_ann_it != sim_announcements.end());
+                    // Announcement must not yet be reconsiderable.
+                    assert(sim_ann_it->reconsider == false);
+                    // Make reconsiderable.
+                    sim_ann_it->reconsider = true;
+                }
+                for (auto& [wtxid, peer] : added) {
+                    // Remove from child_wtxids map, so we can check that only already-reconsiderable
+                    // ones are missing from the result.
+                    child_wtxids.erase(wtxid);
+                }
+                // Verify that AddChildrenToWorkSet does not select announcements that were already
+                // reconsiderable: check that all child wtxids which did not occur at least once in
+                // the result were already reconsiderable due to a previous AddChildrenToWorkSet.
+                for (auto& [wtxid, already_reconsider] : child_wtxids) {
+                    assert(already_reconsider);
+                }
+                break;
+            } else if (command-- == 0) {
+                // GetTxToReconsider.
+                auto peer = read_peer_fn();
+                auto result = real->GetTxToReconsider(peer);
+                if (result) {
+                    // A transaction was found. It must have a corresponding reconsiderable
+                    // announcement from peer.
+                    auto sim_ann_it = find_announce_wtxid_fn(result->GetWitnessHash(), peer);
+                    assert(sim_ann_it != sim_announcements.end());
+                    assert(sim_ann_it->announcer == peer);
+                    assert(sim_ann_it->reconsider);
+                    // Make it non-reconsiderable.
+                    sim_ann_it->reconsider = false;
+                } else {
+                    // No reconsiderable transaction was found from peer. Verify that it does not
+                    // have any.
+                    assert(!have_reconsider_fn(peer));
+                }
+                break;
+            } else if (command-- == 0) {
+                // LimitOrphans
+                const auto max_ann = max_global_ann / std::max<unsigned>(1, count_peers_fn());
+                const auto max_mem = reserved_peer_usage;
+                while (true) {
+                    // Count global usage and number of peers.
+                    node::TxOrphanage::Usage total_usage{0};
+                    node::TxOrphanage::Count total_latency_score = sim_announcements.size();
+                    for (unsigned tx = 0; tx < NUM_TX; ++tx) {
+                        if (have_tx_fn(tx)) {
+                            total_usage += GetTransactionWeight(*txn[tx]);
+                            total_latency_score += txn[tx]->vin.size() / 10;
+                        }
+                    }
+                    auto num_peers = count_peers_fn();
+                    bool oversized = (total_usage > reserved_peer_usage * num_peers) ||
+                                     (total_latency_score > real->MaxGlobalLatencyScore());
+                    if (!oversized) break;
+                    // Find worst peer.
+                    FeeFrac worst_dos_score{0, 1};
+                    unsigned worst_peer = unsigned(-1);
+                    for (unsigned peer = 0; peer < NUM_PEERS; ++peer) {
+                        auto dos_score = dos_score_fn(peer, max_ann, max_mem);
+                        // Use >= so that the more recent peer (higher NodeId) wins in case of
+                        // ties.
+                        if (dos_score >= worst_dos_score) {
+                            worst_dos_score = dos_score;
+                            worst_peer = peer;
+                        }
+                    }
+                    assert(worst_peer != unsigned(-1));
+                    assert(worst_dos_score >> FeeFrac(1, 1));
+                    // Find oldest announcement from worst_peer, preferring non-reconsiderable ones.
+                    bool done{false};
+                    for (int reconsider = 0; reconsider < 2; ++reconsider) {
+                        for (auto it = sim_announcements.begin(); it != sim_announcements.end(); ++it) {
+                            if (it->announcer != worst_peer || it->reconsider != reconsider) continue;
+                            sim_announcements.erase(it);
+                            done = true;
+                            break;
+                        }
+                        if (done) break;
+                    }
+                    assert(done);
+                }
+                real->LimitOrphans();
+                // We must now be within limits; otherwise LimitOrphans should have continued further.
+                // We don't check the contents of the orphanage until the end to make fuzz runs faster.
+                assert(real->TotalLatencyScore() <= real->MaxGlobalLatencyScore());
+                assert(real->TotalOrphanUsage() <= real->MaxGlobalUsage());
+                break;
+            }
+        }
+    }
+
+    //
+    // 6. Perform a full comparison between the real orphanage's inspectors and the simulation.
+    //
+
+    real->SanityCheck();
+
+    auto all_orphans = real->GetOrphanTransactions();
+    node::TxOrphanage::Usage orphan_usage{0};
+    std::vector<node::TxOrphanage::Usage> usage_by_peer(NUM_PEERS);
+    node::TxOrphanage::Count unique_orphans{0};
+    std::vector<node::TxOrphanage::Count> count_by_peer(NUM_PEERS);
+    node::TxOrphanage::Count total_latency_score = sim_announcements.size();
+    for (unsigned tx = 0; tx < NUM_TX; ++tx) {
+        bool sim_have_tx = have_tx_fn(tx);
+        if (sim_have_tx) {
+            orphan_usage += GetTransactionWeight(*txn[tx]);
+            total_latency_score += txn[tx]->vin.size() / 10;
+        }
+        unique_orphans += sim_have_tx;
+        auto orphans_it = std::find_if(all_orphans.begin(), all_orphans.end(), [&](auto& orph) { return orph.tx->GetWitnessHash() == txn[tx]->GetWitnessHash(); });
+        // GetOrphanTransactions (OrphanBase existence)
+        assert((orphans_it != all_orphans.end()) == sim_have_tx);
+        // HaveTx
+        bool have_tx = real->HaveTx(txn[tx]->GetWitnessHash());
+        assert(have_tx == sim_have_tx);
+        // GetTx
+        auto txref = real->GetTx(txn[tx]->GetWitnessHash());
+        assert(!!txref == sim_have_tx);
+        if (sim_have_tx) assert(txref->GetWitnessHash() == txn[tx]->GetWitnessHash());
+
+        for (NodeId peer = 0; peer < NUM_PEERS; ++peer) {
+            auto it_sim_ann = find_announce_fn(tx, peer);
+            bool sim_have_ann = it_sim_ann != sim_announcements.end();
+            if (sim_have_ann) usage_by_peer[peer] += GetTransactionWeight(*txn[tx]);
+            count_by_peer[peer] += sim_have_ann;
+            // GetOrphanTransactions (announcers presence)
+            if (sim_have_ann) assert(sim_have_tx);
+            if (sim_have_tx) assert(orphans_it->announcers.count(peer) == sim_have_ann);
+            // HaveTxFromPeer
+            bool have_ann = real->HaveTxFromPeer(txn[tx]->GetWitnessHash(), peer);
+            assert(sim_have_ann == have_ann);
+            // GetChildrenFromSamePeer
+            auto children_from_peer = real->GetChildrenFromSamePeer(txn[tx], peer);
|
||||
auto it = children_from_peer.rbegin();
|
||||
for (int phase = 0; phase < 2; ++phase) {
|
||||
// First expect all children which have reconsiderable announcement from peer, then the others.
|
||||
for (auto& ann : sim_announcements) {
|
||||
if (ann.announcer != peer) continue;
|
||||
if (ann.reconsider != (phase == 1)) continue;
|
||||
bool matching_parent{false};
|
||||
for (const auto& vin : txn[ann.tx]->vin) {
|
||||
if (vin.prevout.hash == txn[tx]->GetHash()) matching_parent = true;
|
||||
}
|
||||
if (!matching_parent) continue;
|
||||
// Found an announcement from peer which is a child of txn[tx].
|
||||
assert(it != children_from_peer.rend());
|
||||
assert((*it)->GetWitnessHash() == txn[ann.tx]->GetWitnessHash());
|
||||
++it;
|
||||
}
|
||||
}
|
||||
assert(it == children_from_peer.rend());
|
||||
}
|
||||
}
|
||||
// TotalOrphanUsage
|
||||
assert(orphan_usage == real->TotalOrphanUsage());
|
||||
for (NodeId peer = 0; peer < NUM_PEERS; ++peer) {
|
||||
bool sim_have_reconsider = have_reconsider_fn(peer);
|
||||
// HaveTxToReconsider
|
||||
bool have_reconsider = real->HaveTxToReconsider(peer);
|
||||
assert(have_reconsider == sim_have_reconsider);
|
||||
// UsageByPeer
|
||||
assert(usage_by_peer[peer] == real->UsageByPeer(peer));
|
||||
// AnnouncementsFromPeer
|
||||
assert(count_by_peer[peer] == real->AnnouncementsFromPeer(peer));
|
||||
}
|
||||
// CountAnnouncements
|
||||
assert(sim_announcements.size() == real->CountAnnouncements());
|
||||
// CountUniqueOrphans
|
||||
assert(unique_orphans == real->CountUniqueOrphans());
|
||||
// MaxGlobalLatencyScore
|
||||
assert(max_global_ann == real->MaxGlobalLatencyScore());
|
||||
// ReservedPeerUsage
|
||||
assert(reserved_peer_usage == real->ReservedPeerUsage());
|
||||
// MaxPeerLatencyScore
|
||||
auto present_peers = count_peers_fn();
|
||||
assert(max_global_ann / std::max<unsigned>(1, present_peers) == real->MaxPeerLatencyScore());
|
||||
// MaxGlobalUsage
|
||||
assert(reserved_peer_usage * std::max<unsigned>(1, present_peers) == real->MaxGlobalUsage());
|
||||
// TotalLatencyScore.
|
||||
assert(real->TotalLatencyScore() == total_latency_score);
|
||||
}
|
||||

@@ -4,6 +4,7 @@

#include <arith_uint256.h>
#include <consensus/validation.h>
#include <node/txorphanage.h>
#include <policy/policy.h>
#include <primitives/transaction.h>
#include <pubkey.h>
@@ -12,7 +13,6 @@
#include <test/util/random.h>
#include <test/util/setup_common.h>
#include <test/util/transaction_utils.h>
#include <txorphanage.h>

#include <array>
#include <cstdint>
@@ -21,28 +21,6 @@

BOOST_FIXTURE_TEST_SUITE(orphanage_tests, TestingSetup)

class TxOrphanageTest : public TxOrphanage
{
public:
    TxOrphanageTest(FastRandomContext& rng) : m_rng{rng} {}

    inline size_t CountOrphans() const
    {
        return m_orphans.size();
    }

    CTransactionRef RandomOrphan()
    {
        std::map<Wtxid, OrphanTx>::iterator it;
        it = m_orphans.lower_bound(Wtxid::FromUint256(m_rng.rand256()));
        if (it == m_orphans.end())
            it = m_orphans.begin();
        return it->second.tx;
    }

    FastRandomContext& m_rng;
};

static void MakeNewKeyWithFastRandomContext(CKey& key, FastRandomContext& rand_ctx)
{
    std::vector<unsigned char> keydata;
@@ -94,6 +72,368 @@ static bool EqualTxns(const std::set<CTransactionRef>& set_txns, const std::vect
    return true;
}

unsigned int CheckNumEvictions(node::TxOrphanage& orphanage)
{
    const auto original_total_count{orphanage.CountAnnouncements()};
    orphanage.LimitOrphans();
    assert(orphanage.TotalLatencyScore() <= orphanage.MaxGlobalLatencyScore());
    assert(orphanage.TotalOrphanUsage() <= orphanage.MaxGlobalUsage());
    return original_total_count - orphanage.CountAnnouncements();
}

BOOST_AUTO_TEST_CASE(peer_dos_limits)
{
    FastRandomContext det_rand{true};

    // Construct transactions to use. They must all be the same size.
    static constexpr unsigned int NUM_TXNS_CREATED = 100;
    static constexpr int64_t TX_SIZE{469};
    static constexpr int64_t TOTAL_SIZE = NUM_TXNS_CREATED * TX_SIZE;

    std::vector<CTransactionRef> txns;
    txns.reserve(NUM_TXNS_CREATED);
    // All transactions are the same size.
    for (unsigned int i{0}; i < NUM_TXNS_CREATED; ++i) {
        auto ptx = MakeTransactionSpending({}, det_rand);
        txns.emplace_back(ptx);
        BOOST_CHECK_EQUAL(TX_SIZE, GetTransactionWeight(*ptx));
    }

    // Single peer: eviction is triggered if either limit is hit
    {
        // Test announcement limits
        NodeId peer{8};
        auto orphanage_low_ann = node::MakeTxOrphanage(/*max_global_ann=*/1, /*reserved_peer_usage=*/TX_SIZE * 10);
        auto orphanage_low_mem = node::MakeTxOrphanage(/*max_global_ann=*/10, /*reserved_peer_usage=*/TX_SIZE);

        BOOST_CHECK_EQUAL(CheckNumEvictions(*orphanage_low_mem), 0);
        BOOST_CHECK_EQUAL(CheckNumEvictions(*orphanage_low_ann), 0);

        // Add the first transaction
        orphanage_low_ann->AddTx(txns.at(0), peer);
        orphanage_low_mem->AddTx(txns.at(0), peer);

        // Add more. One of the limits is exceeded, so LimitOrphans evicts 1.
        orphanage_low_ann->AddTx(txns.at(1), peer);
        BOOST_CHECK(orphanage_low_ann->TotalLatencyScore() > orphanage_low_ann->MaxGlobalLatencyScore());
        BOOST_CHECK(orphanage_low_ann->TotalOrphanUsage() <= orphanage_low_ann->MaxGlobalUsage());

        orphanage_low_mem->AddTx(txns.at(1), peer);
        BOOST_CHECK(orphanage_low_mem->TotalLatencyScore() <= orphanage_low_mem->MaxGlobalLatencyScore());
        BOOST_CHECK(orphanage_low_mem->TotalOrphanUsage() > orphanage_low_mem->MaxGlobalUsage());

        BOOST_CHECK_EQUAL(CheckNumEvictions(*orphanage_low_mem), 1);
        BOOST_CHECK_EQUAL(CheckNumEvictions(*orphanage_low_ann), 1);

        // The older transaction is evicted.
        BOOST_CHECK(!orphanage_low_ann->HaveTx(txns.at(0)->GetWitnessHash()));
        BOOST_CHECK(!orphanage_low_mem->HaveTx(txns.at(0)->GetWitnessHash()));
        BOOST_CHECK(orphanage_low_ann->HaveTx(txns.at(1)->GetWitnessHash()));
        BOOST_CHECK(orphanage_low_mem->HaveTx(txns.at(1)->GetWitnessHash()));
    }

    // Single peer: latency score includes inputs
    {
        // Test latency score limits
        NodeId peer{10};
        auto orphanage_low_ann = node::MakeTxOrphanage(/*max_global_ann=*/5, /*reserved_peer_usage=*/TX_SIZE * 1000);

        BOOST_CHECK_EQUAL(CheckNumEvictions(*orphanage_low_ann), 0);

        // Add the first transaction
        orphanage_low_ann->AddTx(txns.at(0), peer);

        // Add 1 more transaction with 45 inputs. Even though there are only 2 announcements, this pushes the
        // orphanage above its maximum latency score.
        std::vector<COutPoint> outpoints_45;
        for (unsigned int j{0}; j < 45; ++j) {
            outpoints_45.emplace_back(Txid::FromUint256(det_rand.rand256()), j);
        }
        auto ptx = MakeTransactionSpending(outpoints_45, det_rand);
        orphanage_low_ann->AddTx(ptx, peer);

        BOOST_CHECK(orphanage_low_ann->TotalLatencyScore() > orphanage_low_ann->MaxGlobalLatencyScore());
        BOOST_CHECK(orphanage_low_ann->LatencyScoreFromPeer(peer) > orphanage_low_ann->MaxPeerLatencyScore());

        BOOST_CHECK_EQUAL(CheckNumEvictions(*orphanage_low_ann), 1);

        // The older transaction is evicted.
        BOOST_CHECK(!orphanage_low_ann->HaveTx(txns.at(0)->GetWitnessHash()));
        BOOST_CHECK(orphanage_low_ann->HaveTx(ptx->GetWitnessHash()));
    }

    // Single peer: eviction order is FIFO on non-reconsiderable, then reconsiderable orphans.
    {
        // Construct parent + child pairs
        std::vector<CTransactionRef> parents;
        std::vector<CTransactionRef> children;
        for (unsigned int i{0}; i < 10; ++i) {
            CTransactionRef parent = MakeTransactionSpending({}, det_rand);
            CTransactionRef child = MakeTransactionSpending({{parent->GetHash(), 0}}, det_rand);
            parents.emplace_back(parent);
            children.emplace_back(child);
        }

        // Test announcement limits
        NodeId peer{9};
        auto orphanage = node::MakeTxOrphanage(/*max_global_ann=*/3, /*reserved_peer_usage=*/TX_SIZE * 10);

        // First add a tx which will be made reconsiderable.
        orphanage->AddTx(children.at(0), peer);

        // Then add 2 more orphans... not oversize yet.
        orphanage->AddTx(children.at(1), peer);
        orphanage->AddTx(children.at(2), peer);

        // Make child0 ready to reconsider
        const std::vector<std::pair<Wtxid, NodeId>> expected_set_c0{std::make_pair(children.at(0)->GetWitnessHash(), peer)};
        BOOST_CHECK(orphanage->AddChildrenToWorkSet(*parents.at(0), det_rand) == expected_set_c0);
        BOOST_CHECK(orphanage->HaveTxToReconsider(peer));

        // Add 1 more orphan, causing the orphanage to be oversize. child1 is evicted.
        orphanage->AddTx(children.at(3), peer);
        BOOST_CHECK_EQUAL(CheckNumEvictions(*orphanage), 1);
        BOOST_CHECK(orphanage->HaveTx(children.at(0)->GetWitnessHash()));
        BOOST_CHECK(!orphanage->HaveTx(children.at(1)->GetWitnessHash()));
        BOOST_CHECK(orphanage->HaveTx(children.at(2)->GetWitnessHash()));
        BOOST_CHECK(orphanage->HaveTx(children.at(3)->GetWitnessHash()));

        // Add 1 more... child2 is evicted.
        orphanage->AddTx(children.at(4), peer);
        BOOST_CHECK_EQUAL(CheckNumEvictions(*orphanage), 1);
        BOOST_CHECK(orphanage->HaveTx(children.at(0)->GetWitnessHash()));
        BOOST_CHECK(!orphanage->HaveTx(children.at(2)->GetWitnessHash()));
        BOOST_CHECK(orphanage->HaveTx(children.at(3)->GetWitnessHash()));
        BOOST_CHECK(orphanage->HaveTx(children.at(4)->GetWitnessHash()));

        // Eviction order is FIFO within the orphans that are ready to be reconsidered.
        const std::vector<std::pair<Wtxid, NodeId>> expected_set_c4{std::make_pair(children.at(4)->GetWitnessHash(), peer)};
        BOOST_CHECK(orphanage->AddChildrenToWorkSet(*parents.at(4), det_rand) == expected_set_c4);
        const std::vector<std::pair<Wtxid, NodeId>> expected_set_c3{std::make_pair(children.at(3)->GetWitnessHash(), peer)};
        BOOST_CHECK(orphanage->AddChildrenToWorkSet(*parents.at(3), det_rand) == expected_set_c3);

        // child5 is evicted immediately because it is the only non-reconsiderable orphan.
        orphanage->AddTx(children.at(5), peer);
        BOOST_CHECK_EQUAL(CheckNumEvictions(*orphanage), 1);
        BOOST_CHECK(orphanage->HaveTx(children.at(0)->GetWitnessHash()));
        BOOST_CHECK(orphanage->HaveTx(children.at(3)->GetWitnessHash()));
        BOOST_CHECK(orphanage->HaveTx(children.at(4)->GetWitnessHash()));
        BOOST_CHECK(!orphanage->HaveTx(children.at(5)->GetWitnessHash()));

        // Transactions are marked non-reconsiderable again when returned through GetTxToReconsider
        BOOST_CHECK_EQUAL(orphanage->GetTxToReconsider(peer), children.at(0));
        orphanage->AddTx(children.at(6), peer);
        BOOST_CHECK_EQUAL(CheckNumEvictions(*orphanage), 1);
        BOOST_CHECK(!orphanage->HaveTx(children.at(0)->GetWitnessHash()));
        BOOST_CHECK(orphanage->HaveTx(children.at(3)->GetWitnessHash()));
        BOOST_CHECK(orphanage->HaveTx(children.at(4)->GetWitnessHash()));
        BOOST_CHECK(orphanage->HaveTx(children.at(6)->GetWitnessHash()));

        // The first transaction returned from GetTxToReconsider is the older one, not the one that was marked for
        // reconsideration earlier.
        BOOST_CHECK_EQUAL(orphanage->GetTxToReconsider(peer), children.at(3));
        BOOST_CHECK_EQUAL(orphanage->GetTxToReconsider(peer), children.at(4));
    }

    // Multiple peers: when limit is exceeded, we choose the DoSiest peer and evict their oldest transaction.
    {
        NodeId peer_dosy{0};
        NodeId peer1{1};
        NodeId peer2{2};

        unsigned int max_announcements = 60;
        // Set a high per-peer reservation so the announcement limit is always hit first.
        auto orphanage = node::MakeTxOrphanage(max_announcements, TOTAL_SIZE * 10);

        // No evictions happen before the global limit is reached.
        for (unsigned int i{0}; i < max_announcements; ++i) {
            orphanage->AddTx(txns.at(i), peer_dosy);
            BOOST_CHECK_EQUAL(CheckNumEvictions(*orphanage), 0);
        }
        BOOST_CHECK_EQUAL(orphanage->AnnouncementsFromPeer(peer_dosy), max_announcements);
        BOOST_CHECK_EQUAL(orphanage->AnnouncementsFromPeer(peer1), 0);
        BOOST_CHECK_EQUAL(orphanage->AnnouncementsFromPeer(peer2), 0);

        // Add 10 unique transactions from peer1.
        // LimitOrphans should evict from peer_dosy, because that's the one exceeding announcement limits.
        unsigned int num_from_peer1 = 10;
        for (unsigned int i{0}; i < num_from_peer1; ++i) {
            orphanage->AddTx(txns.at(max_announcements + i), peer1);
            // The announcement limit per peer has halved, but LimitOrphans does not evict beyond what is necessary to
            // bring the total announcements within its global limit.
            BOOST_CHECK_EQUAL(CheckNumEvictions(*orphanage), 1);
            BOOST_CHECK(orphanage->AnnouncementsFromPeer(peer_dosy) > orphanage->MaxPeerLatencyScore());

            BOOST_CHECK_EQUAL(orphanage->AnnouncementsFromPeer(peer1), i + 1);
            BOOST_CHECK_EQUAL(orphanage->AnnouncementsFromPeer(peer_dosy), max_announcements - i - 1);

            // Evictions are FIFO within a peer, so the ith transaction sent by peer_dosy is the one that was evicted.
            BOOST_CHECK(!orphanage->HaveTx(txns.at(i)->GetWitnessHash()));
        }
        // Add 10 transactions that are duplicates of the ones sent by peer_dosy. We need to add 10 because the first 10
        // were just evicted in the previous block of additions.
        for (unsigned int i{num_from_peer1}; i < num_from_peer1 + 10; ++i) {
            // Tx has already been sent by peer_dosy
            BOOST_CHECK(orphanage->HaveTxFromPeer(txns.at(i)->GetWitnessHash(), peer_dosy));
            orphanage->AddTx(txns.at(i), peer2);

            // Announcement limit is by entry, not by unique orphans
            BOOST_CHECK_EQUAL(CheckNumEvictions(*orphanage), 1);

            // peer_dosy is still the only one getting evicted
            BOOST_CHECK_EQUAL(orphanage->AnnouncementsFromPeer(peer_dosy), max_announcements - i - 1);
            BOOST_CHECK_EQUAL(orphanage->AnnouncementsFromPeer(peer1), num_from_peer1);
            BOOST_CHECK_EQUAL(orphanage->AnnouncementsFromPeer(peer2), i + 1 - num_from_peer1);

            // Evictions are FIFO within a peer, so the ith transaction sent by peer_dosy is the one that was evicted.
            BOOST_CHECK(!orphanage->HaveTxFromPeer(txns.at(i)->GetWitnessHash(), peer_dosy));
            BOOST_CHECK(orphanage->HaveTx(txns.at(i)->GetWitnessHash()));
        }

        // With 6 peers, each can add 10, and still only peer_dosy's orphans are evicted.
        const unsigned int max_per_peer{max_announcements / 6};
        for (NodeId peer{3}; peer < 6; ++peer) {
            for (unsigned int i{0}; i < max_per_peer; ++i) {
                orphanage->AddTx(txns.at(peer * max_per_peer + i), peer);
                BOOST_CHECK_EQUAL(CheckNumEvictions(*orphanage), 1);
            }
        }
        for (NodeId peer{0}; peer < 6; ++peer) {
            BOOST_CHECK_EQUAL(orphanage->AnnouncementsFromPeer(peer), max_per_peer);
        }
    }

    // Limits change as more peers are added.
    {
        auto orphanage{node::MakeTxOrphanage()};
        // These stay the same regardless of number of peers
        BOOST_CHECK_EQUAL(orphanage->MaxGlobalLatencyScore(), node::DEFAULT_MAX_ORPHANAGE_LATENCY_SCORE);
        BOOST_CHECK_EQUAL(orphanage->ReservedPeerUsage(), node::DEFAULT_RESERVED_ORPHAN_WEIGHT_PER_PEER);

        // These change with number of peers
        BOOST_CHECK_EQUAL(orphanage->MaxGlobalUsage(), node::DEFAULT_RESERVED_ORPHAN_WEIGHT_PER_PEER);
        BOOST_CHECK_EQUAL(orphanage->MaxPeerLatencyScore(), node::DEFAULT_MAX_ORPHANAGE_LATENCY_SCORE);

        // Number of peers = 1
        orphanage->AddTx(txns.at(0), 0);
        BOOST_CHECK_EQUAL(orphanage->MaxGlobalLatencyScore(), node::DEFAULT_MAX_ORPHANAGE_LATENCY_SCORE);
        BOOST_CHECK_EQUAL(orphanage->ReservedPeerUsage(), node::DEFAULT_RESERVED_ORPHAN_WEIGHT_PER_PEER);
        BOOST_CHECK_EQUAL(orphanage->MaxGlobalUsage(), node::DEFAULT_RESERVED_ORPHAN_WEIGHT_PER_PEER);
        BOOST_CHECK_EQUAL(orphanage->MaxPeerLatencyScore(), node::DEFAULT_MAX_ORPHANAGE_LATENCY_SCORE);

        // Number of peers = 2
        orphanage->AddTx(txns.at(1), 1);
        BOOST_CHECK_EQUAL(orphanage->MaxGlobalLatencyScore(), node::DEFAULT_MAX_ORPHANAGE_LATENCY_SCORE);
        BOOST_CHECK_EQUAL(orphanage->ReservedPeerUsage(), node::DEFAULT_RESERVED_ORPHAN_WEIGHT_PER_PEER);
        BOOST_CHECK_EQUAL(orphanage->MaxGlobalUsage(), node::DEFAULT_RESERVED_ORPHAN_WEIGHT_PER_PEER * 2);
        BOOST_CHECK_EQUAL(orphanage->MaxPeerLatencyScore(), node::DEFAULT_MAX_ORPHANAGE_LATENCY_SCORE / 2);

        // Number of peers = 3
        orphanage->AddTx(txns.at(2), 2);
        BOOST_CHECK_EQUAL(orphanage->MaxGlobalLatencyScore(), node::DEFAULT_MAX_ORPHANAGE_LATENCY_SCORE);
        BOOST_CHECK_EQUAL(orphanage->ReservedPeerUsage(), node::DEFAULT_RESERVED_ORPHAN_WEIGHT_PER_PEER);
        BOOST_CHECK_EQUAL(orphanage->MaxGlobalUsage(), node::DEFAULT_RESERVED_ORPHAN_WEIGHT_PER_PEER * 3);
        BOOST_CHECK_EQUAL(orphanage->MaxPeerLatencyScore(), node::DEFAULT_MAX_ORPHANAGE_LATENCY_SCORE / 3);

        // Number of peers didn't change.
        orphanage->AddTx(txns.at(3), 2);
        BOOST_CHECK_EQUAL(orphanage->MaxGlobalLatencyScore(), node::DEFAULT_MAX_ORPHANAGE_LATENCY_SCORE);
        BOOST_CHECK_EQUAL(orphanage->ReservedPeerUsage(), node::DEFAULT_RESERVED_ORPHAN_WEIGHT_PER_PEER);
        BOOST_CHECK_EQUAL(orphanage->MaxGlobalUsage(), node::DEFAULT_RESERVED_ORPHAN_WEIGHT_PER_PEER * 3);
        BOOST_CHECK_EQUAL(orphanage->MaxPeerLatencyScore(), node::DEFAULT_MAX_ORPHANAGE_LATENCY_SCORE / 3);

        // Once a peer has no orphans, it is not considered in the limits.
        // Number of peers = 2
        orphanage->EraseForPeer(2);
        BOOST_CHECK_EQUAL(orphanage->MaxGlobalLatencyScore(), node::DEFAULT_MAX_ORPHANAGE_LATENCY_SCORE);
        BOOST_CHECK_EQUAL(orphanage->ReservedPeerUsage(), node::DEFAULT_RESERVED_ORPHAN_WEIGHT_PER_PEER);
        BOOST_CHECK_EQUAL(orphanage->MaxGlobalUsage(), node::DEFAULT_RESERVED_ORPHAN_WEIGHT_PER_PEER * 2);
        BOOST_CHECK_EQUAL(orphanage->MaxPeerLatencyScore(), node::DEFAULT_MAX_ORPHANAGE_LATENCY_SCORE / 2);

        // Number of peers = 1
        orphanage->EraseTx(txns.at(0)->GetWitnessHash());
        BOOST_CHECK_EQUAL(orphanage->MaxGlobalLatencyScore(), node::DEFAULT_MAX_ORPHANAGE_LATENCY_SCORE);
        BOOST_CHECK_EQUAL(orphanage->ReservedPeerUsage(), node::DEFAULT_RESERVED_ORPHAN_WEIGHT_PER_PEER);
        BOOST_CHECK_EQUAL(orphanage->MaxGlobalUsage(), node::DEFAULT_RESERVED_ORPHAN_WEIGHT_PER_PEER);
        BOOST_CHECK_EQUAL(orphanage->MaxPeerLatencyScore(), node::DEFAULT_MAX_ORPHANAGE_LATENCY_SCORE);
    }

    // Test eviction of multiple transactions at a time
    {
        // Create a large transaction that is 10 times larger than the normal size transaction.
        CMutableTransaction tx_large;
        tx_large.vin.resize(1);
        BulkTransaction(tx_large, 10 * TX_SIZE);
        auto ptx_large = MakeTransactionRef(tx_large);

        const auto large_tx_size = GetTransactionWeight(*ptx_large);
        BOOST_CHECK(large_tx_size > 10 * TX_SIZE);
        BOOST_CHECK(large_tx_size < 11 * TX_SIZE);

        auto orphanage = node::MakeTxOrphanage(20, large_tx_size);
        // One peer sends 10 normal size transactions. The other peer sends 10 normal transactions and 1 very large one.
        NodeId peer_normal{0};
        NodeId peer_large{1};
        for (unsigned int i = 0; i < 20; i++) {
            orphanage->AddTx(txns.at(i), i < 10 ? peer_normal : peer_large);
        }
        BOOST_CHECK(orphanage->TotalLatencyScore() <= orphanage->MaxGlobalLatencyScore());
        BOOST_CHECK(orphanage->TotalOrphanUsage() <= orphanage->MaxGlobalUsage());
        BOOST_CHECK_EQUAL(CheckNumEvictions(*orphanage), 0);

        // Add the large transaction. This should cause evictions of all the previous 10 transactions from that peer.
        orphanage->AddTx(ptx_large, peer_large);
        BOOST_CHECK_EQUAL(CheckNumEvictions(*orphanage), 10);

        // peer_normal should still have 10 transactions, and peer_large should have 1.
        BOOST_CHECK_EQUAL(orphanage->AnnouncementsFromPeer(peer_normal), 10);
        BOOST_CHECK_EQUAL(orphanage->AnnouncementsFromPeer(peer_large), 1);
        BOOST_CHECK(orphanage->HaveTxFromPeer(ptx_large->GetWitnessHash(), peer_large));
        BOOST_CHECK_EQUAL(orphanage->CountAnnouncements(), 11);
    }

    // Test that latency score includes number of inputs.
    {
        auto orphanage = node::MakeTxOrphanage();

        // Add 10 transactions with 9 inputs each.
        std::vector<COutPoint> outpoints_9;
        for (unsigned int j{0}; j < 9; ++j) {
            outpoints_9.emplace_back(Txid::FromUint256(m_rng.rand256()), j);
        }
        for (unsigned int i{0}; i < 10; ++i) {
            auto ptx = MakeTransactionSpending(outpoints_9, m_rng);
            orphanage->AddTx(ptx, 0);
        }
        BOOST_CHECK_EQUAL(orphanage->CountAnnouncements(), 10);
        BOOST_CHECK_EQUAL(orphanage->TotalLatencyScore(), 10);

        // Add 10 transactions with 50 inputs each.
        std::vector<COutPoint> outpoints_50;
        for (unsigned int j{0}; j < 50; ++j) {
            outpoints_50.emplace_back(Txid::FromUint256(m_rng.rand256()), j);
        }

        for (unsigned int i{0}; i < 10; ++i) {
            CMutableTransaction tx;
            std::shuffle(outpoints_50.begin(), outpoints_50.end(), m_rng);
            auto ptx = MakeTransactionSpending(outpoints_50, m_rng);
            BOOST_CHECK(orphanage->AddTx(ptx, 0));
            if (i < 5) BOOST_CHECK(!orphanage->AddTx(ptx, 1));
        }
        // 10 of the 9-input transactions + 10 of the 50-input transactions + 5 more announcements of the 50-input transactions
        BOOST_CHECK_EQUAL(orphanage->CountAnnouncements(), 25);
        // Base of 25 announcements, plus 10 * 5 for the 50-input transactions (counted just once)
        BOOST_CHECK_EQUAL(orphanage->TotalLatencyScore(), 25 + 50);

        // Peer 0 sent all 20 transactions
        BOOST_CHECK_EQUAL(orphanage->AnnouncementsFromPeer(0), 20);
        BOOST_CHECK_EQUAL(orphanage->LatencyScoreFromPeer(0), 20 + 10 * 5);

        // Peer 1 sent 5 of the 10 transactions with many inputs
        BOOST_CHECK_EQUAL(orphanage->AnnouncementsFromPeer(1), 5);
        BOOST_CHECK_EQUAL(orphanage->LatencyScoreFromPeer(1), 5 + 5 * 5);
    }
}
BOOST_AUTO_TEST_CASE(DoS_mapOrphans)
{
    // This test had non-deterministic coverage due to
@@ -104,7 +444,7 @@ BOOST_AUTO_TEST_CASE(DoS_mapOrphans)
    // signature's R and S values have leading zeros.
    m_rng.Reseed(uint256{33});

    TxOrphanageTest orphanage{m_rng};
    std::unique_ptr<node::TxOrphanage> orphanage{node::MakeTxOrphanage()};
    CKey key;
    MakeNewKeyWithFastRandomContext(key, m_rng);
    FillableSigningProvider keystore;
@@ -114,6 +454,8 @@ BOOST_AUTO_TEST_CASE(DoS_mapOrphans)
    auto now{GetTime<std::chrono::seconds>()};
    SetMockTime(now);

    std::vector<CTransactionRef> orphans_added;

    // 50 orphan transactions:
    for (int i = 0; i < 50; i++)
    {
@@ -126,13 +468,15 @@ BOOST_AUTO_TEST_CASE(DoS_mapOrphans)
        tx.vout[0].nValue = i*CENT;
        tx.vout[0].scriptPubKey = GetScriptForDestination(PKHash(key.GetPubKey()));

        orphanage.AddTx(MakeTransactionRef(tx), i);
        auto ptx = MakeTransactionRef(tx);
        orphanage->AddTx(ptx, i);
        orphans_added.emplace_back(ptx);
    }

    // ... and 50 that depend on other orphans:
    for (int i = 0; i < 50; i++)
    {
        CTransactionRef txPrev = orphanage.RandomOrphan();
        const auto& txPrev = orphans_added[m_rng.randrange(orphans_added.size())];

        CMutableTransaction tx;
        tx.vin.resize(1);
@@ -144,13 +488,15 @@ BOOST_AUTO_TEST_CASE(DoS_mapOrphans)
        SignatureData empty;
        BOOST_CHECK(SignSignature(keystore, *txPrev, tx, 0, SIGHASH_ALL, empty));

        orphanage.AddTx(MakeTransactionRef(tx), i);
        auto ptx = MakeTransactionRef(tx);
        orphanage->AddTx(ptx, i);
        orphans_added.emplace_back(ptx);
    }

    // This really-big orphan should be ignored:
    for (int i = 0; i < 10; i++)
    {
        CTransactionRef txPrev = orphanage.RandomOrphan();
        const auto& txPrev = orphans_added[m_rng.randrange(orphans_added.size())];

        CMutableTransaction tx;
        tx.vout.resize(1);
@@ -169,61 +515,29 @@ BOOST_AUTO_TEST_CASE(DoS_mapOrphans)
        for (unsigned int j = 1; j < tx.vin.size(); j++)
            tx.vin[j].scriptSig = tx.vin[0].scriptSig;

        BOOST_CHECK(!orphanage.AddTx(MakeTransactionRef(tx), i));
        BOOST_CHECK(!orphanage->AddTx(MakeTransactionRef(tx), i));
    }

    size_t expected_num_orphans = orphanage.CountOrphans();
    size_t expected_num_orphans = orphanage->Size();

    // Non-existent peer; nothing should be deleted
    orphanage.EraseForPeer(/*peer=*/-1);
    BOOST_CHECK_EQUAL(orphanage.CountOrphans(), expected_num_orphans);
    orphanage->EraseForPeer(/*peer=*/-1);
    BOOST_CHECK_EQUAL(orphanage->Size(), expected_num_orphans);

    // Each of first three peers stored
    // two transactions each.
    for (NodeId i = 0; i < 3; i++)
    {
        orphanage.EraseForPeer(i);
        orphanage->EraseForPeer(i);
        expected_num_orphans -= 2;
        BOOST_CHECK(orphanage.CountOrphans() == expected_num_orphans);
        BOOST_CHECK(orphanage->Size() == expected_num_orphans);
    }

    // Test LimitOrphanTxSize() function, nothing should timeout:
    FastRandomContext rng{/*fDeterministic=*/true};
    orphanage.LimitOrphans(/*max_orphans=*/expected_num_orphans, rng);
    BOOST_CHECK_EQUAL(orphanage.CountOrphans(), expected_num_orphans);
    expected_num_orphans -= 1;
    orphanage.LimitOrphans(/*max_orphans=*/expected_num_orphans, rng);
    BOOST_CHECK_EQUAL(orphanage.CountOrphans(), expected_num_orphans);
    assert(expected_num_orphans > 40);
    orphanage.LimitOrphans(40, rng);
    BOOST_CHECK_EQUAL(orphanage.CountOrphans(), 40);
    orphanage.LimitOrphans(10, rng);
    BOOST_CHECK_EQUAL(orphanage.CountOrphans(), 10);
    orphanage.LimitOrphans(0, rng);
    BOOST_CHECK_EQUAL(orphanage.CountOrphans(), 0);

    // Add one more orphan, check timeout logic
    auto timeout_tx = MakeTransactionSpending(/*outpoints=*/{}, rng);
    orphanage.AddTx(timeout_tx, 0);
    orphanage.LimitOrphans(1, rng);
    BOOST_CHECK_EQUAL(orphanage.CountOrphans(), 1);

    // One second shy of expiration
    SetMockTime(now + ORPHAN_TX_EXPIRE_TIME - 1s);
    orphanage.LimitOrphans(1, rng);
    BOOST_CHECK_EQUAL(orphanage.CountOrphans(), 1);

    // Jump one more second, orphan should be timed out on limiting
    SetMockTime(now + ORPHAN_TX_EXPIRE_TIME);
    BOOST_CHECK_EQUAL(orphanage.CountOrphans(), 1);
    orphanage.LimitOrphans(1, rng);
    BOOST_CHECK_EQUAL(orphanage.CountOrphans(), 0);
}

BOOST_AUTO_TEST_CASE(same_txid_diff_witness)
{
    FastRandomContext det_rand{true};
    TxOrphanage orphanage;
    std::unique_ptr<node::TxOrphanage> orphanage{node::MakeTxOrphanage()};
    NodeId peer{0};

    std::vector<COutPoint> empty_outpoints;
@@ -237,31 +551,31 @@ BOOST_AUTO_TEST_CASE(same_txid_diff_witness)
    const auto& mutated_wtxid = child_mutated->GetWitnessHash();
    BOOST_CHECK(normal_wtxid != mutated_wtxid);

    BOOST_CHECK(orphanage.AddTx(child_normal, peer));
    BOOST_CHECK(orphanage->AddTx(child_normal, peer));
    // EraseTx fails as transaction by this wtxid doesn't exist.
    BOOST_CHECK_EQUAL(orphanage.EraseTx(mutated_wtxid), 0);
    BOOST_CHECK(orphanage.HaveTx(normal_wtxid));
    BOOST_CHECK(orphanage.GetTx(normal_wtxid) == child_normal);
    BOOST_CHECK(!orphanage.HaveTx(mutated_wtxid));
    BOOST_CHECK(orphanage.GetTx(mutated_wtxid) == nullptr);
    BOOST_CHECK_EQUAL(orphanage->EraseTx(mutated_wtxid), 0);
    BOOST_CHECK(orphanage->HaveTx(normal_wtxid));
    BOOST_CHECK(orphanage->GetTx(normal_wtxid) == child_normal);
    BOOST_CHECK(!orphanage->HaveTx(mutated_wtxid));
    BOOST_CHECK(orphanage->GetTx(mutated_wtxid) == nullptr);

    // Must succeed. Both transactions should be present in orphanage.
    BOOST_CHECK(orphanage.AddTx(child_mutated, peer));
    BOOST_CHECK(orphanage.HaveTx(normal_wtxid));
    BOOST_CHECK(orphanage.HaveTx(mutated_wtxid));
    BOOST_CHECK(orphanage->AddTx(child_mutated, peer));
    BOOST_CHECK(orphanage->HaveTx(normal_wtxid));
    BOOST_CHECK(orphanage->HaveTx(mutated_wtxid));

    // Outpoints map should track all entries: check that both are returned as children of the parent.
    std::set<CTransactionRef> expected_children{child_normal, child_mutated};
    BOOST_CHECK(EqualTxns(expected_children, orphanage.GetChildrenFromSamePeer(parent, peer)));
    BOOST_CHECK(EqualTxns(expected_children, orphanage->GetChildrenFromSamePeer(parent, peer)));

    // Erase by wtxid: mutated first
    BOOST_CHECK_EQUAL(orphanage.EraseTx(mutated_wtxid), 1);
    BOOST_CHECK(orphanage.HaveTx(normal_wtxid));
    BOOST_CHECK(!orphanage.HaveTx(mutated_wtxid));
    BOOST_CHECK_EQUAL(orphanage->EraseTx(mutated_wtxid), 1);
    BOOST_CHECK(orphanage->HaveTx(normal_wtxid));
    BOOST_CHECK(!orphanage->HaveTx(mutated_wtxid));

    BOOST_CHECK_EQUAL(orphanage.EraseTx(normal_wtxid), 1);
    BOOST_CHECK(!orphanage.HaveTx(normal_wtxid));
    BOOST_CHECK(!orphanage.HaveTx(mutated_wtxid));
    BOOST_CHECK_EQUAL(orphanage->EraseTx(normal_wtxid), 1);
    BOOST_CHECK(!orphanage->HaveTx(normal_wtxid));
    BOOST_CHECK(!orphanage->HaveTx(mutated_wtxid));
}
|
||||
@ -286,39 +600,47 @@ BOOST_AUTO_TEST_CASE(get_children)
    // Spends the same outpoint as previous tx. Should still be returned; don't assume outpoints are unique.
    auto child_p1n0_p2n0 = MakeTransactionSpending({{parent1->GetHash(), 0}, {parent2->GetHash(), 0}}, det_rand);

    const NodeId node0{0};
    const NodeId node1{1};
    const NodeId node2{2};
    const NodeId node3{3};

    // All orphans provided by node1
    {
        TxOrphanage orphanage;
        BOOST_CHECK(orphanage.AddTx(child_p1n0, node1));
        BOOST_CHECK(orphanage.AddTx(child_p2n1, node1));
        BOOST_CHECK(orphanage.AddTx(child_p1n0_p1n1, node1));
        BOOST_CHECK(orphanage.AddTx(child_p1n0_p2n0, node1));
        auto orphanage{node::MakeTxOrphanage()};
        BOOST_CHECK(orphanage->AddTx(child_p1n0, node1));
        BOOST_CHECK(orphanage->AddTx(child_p2n1, node1));
        BOOST_CHECK(orphanage->AddTx(child_p1n0_p1n1, node1));
        BOOST_CHECK(orphanage->AddTx(child_p1n0_p2n0, node1));

        std::set<CTransactionRef> expected_parent1_children{child_p1n0, child_p1n0_p2n0, child_p1n0_p1n1};
        std::set<CTransactionRef> expected_parent2_children{child_p2n1, child_p1n0_p2n0};
        // Also add some other announcers for the same transactions
        BOOST_CHECK(!orphanage->AddTx(child_p1n0_p1n1, node0));
        BOOST_CHECK(!orphanage->AddTx(child_p2n1, node0));
        BOOST_CHECK(!orphanage->AddTx(child_p1n0, node3));

        BOOST_CHECK(EqualTxns(expected_parent1_children, orphanage.GetChildrenFromSamePeer(parent1, node1)));
        BOOST_CHECK(EqualTxns(expected_parent2_children, orphanage.GetChildrenFromSamePeer(parent2, node1)));

        std::vector<CTransactionRef> expected_parent1_children{child_p1n0_p2n0, child_p1n0_p1n1, child_p1n0};
        std::vector<CTransactionRef> expected_parent2_children{child_p1n0_p2n0, child_p2n1};

        BOOST_CHECK(expected_parent1_children == orphanage->GetChildrenFromSamePeer(parent1, node1));
        BOOST_CHECK(expected_parent2_children == orphanage->GetChildrenFromSamePeer(parent2, node1));

        // The peer must match
        BOOST_CHECK(orphanage.GetChildrenFromSamePeer(parent1, node2).empty());
        BOOST_CHECK(orphanage.GetChildrenFromSamePeer(parent2, node2).empty());
        BOOST_CHECK(orphanage->GetChildrenFromSamePeer(parent1, node2).empty());
        BOOST_CHECK(orphanage->GetChildrenFromSamePeer(parent2, node2).empty());

        // There shouldn't be any children of this tx in the orphanage
        BOOST_CHECK(orphanage.GetChildrenFromSamePeer(child_p1n0_p2n0, node1).empty());
        BOOST_CHECK(orphanage.GetChildrenFromSamePeer(child_p1n0_p2n0, node2).empty());
        BOOST_CHECK(orphanage->GetChildrenFromSamePeer(child_p1n0_p2n0, node1).empty());
        BOOST_CHECK(orphanage->GetChildrenFromSamePeer(child_p1n0_p2n0, node2).empty());
    }

    // Orphans provided by node1 and node2
    {
        TxOrphanage orphanage;
        BOOST_CHECK(orphanage.AddTx(child_p1n0, node1));
        BOOST_CHECK(orphanage.AddTx(child_p2n1, node1));
        BOOST_CHECK(orphanage.AddTx(child_p1n0_p1n1, node2));
        BOOST_CHECK(orphanage.AddTx(child_p1n0_p2n0, node2));
        std::unique_ptr<node::TxOrphanage> orphanage{node::MakeTxOrphanage()};
        BOOST_CHECK(orphanage->AddTx(child_p1n0, node1));
        BOOST_CHECK(orphanage->AddTx(child_p2n1, node1));
        BOOST_CHECK(orphanage->AddTx(child_p1n0_p1n1, node2));
        BOOST_CHECK(orphanage->AddTx(child_p1n0_p2n0, node2));

        // +----------------+---------------+----------------------------------+
        // |                | sender=node1  |           sender=node2           |
@@ -331,53 +653,58 @@ BOOST_AUTO_TEST_CASE(get_children)
        {
            std::set<CTransactionRef> expected_parent1_node1{child_p1n0};

            BOOST_CHECK(EqualTxns(expected_parent1_node1, orphanage.GetChildrenFromSamePeer(parent1, node1)));
            BOOST_CHECK_EQUAL(orphanage->GetChildrenFromSamePeer(parent1, node1).size(), 1);
            BOOST_CHECK(orphanage->HaveTxFromPeer(child_p1n0->GetWitnessHash(), node1));
            BOOST_CHECK(EqualTxns(expected_parent1_node1, orphanage->GetChildrenFromSamePeer(parent1, node1)));
        }

        // Children of parent2 from node1:
        {
            std::set<CTransactionRef> expected_parent2_node1{child_p2n1};

            BOOST_CHECK(EqualTxns(expected_parent2_node1, orphanage.GetChildrenFromSamePeer(parent2, node1)));
            BOOST_CHECK(EqualTxns(expected_parent2_node1, orphanage->GetChildrenFromSamePeer(parent2, node1)));
        }

        // Children of parent1 from node2:
        // Children of parent1 from node2: newest returned first.
        {
            std::set<CTransactionRef> expected_parent1_node2{child_p1n0_p1n1, child_p1n0_p2n0};

            BOOST_CHECK(EqualTxns(expected_parent1_node2, orphanage.GetChildrenFromSamePeer(parent1, node2)));
            std::vector<CTransactionRef> expected_parent1_node2{child_p1n0_p2n0, child_p1n0_p1n1};
            BOOST_CHECK(orphanage->HaveTxFromPeer(child_p1n0_p1n1->GetWitnessHash(), node2));
            BOOST_CHECK(orphanage->HaveTxFromPeer(child_p1n0_p2n0->GetWitnessHash(), node2));
            BOOST_CHECK(expected_parent1_node2 == orphanage->GetChildrenFromSamePeer(parent1, node2));
        }

        // Children of parent2 from node2:
        {
            std::set<CTransactionRef> expected_parent2_node2{child_p1n0_p2n0};

            BOOST_CHECK(EqualTxns(expected_parent2_node2, orphanage.GetChildrenFromSamePeer(parent2, node2)));
            BOOST_CHECK_EQUAL(1, orphanage->GetChildrenFromSamePeer(parent2, node2).size());
            BOOST_CHECK(orphanage->HaveTxFromPeer(child_p1n0_p2n0->GetWitnessHash(), node2));
            BOOST_CHECK(EqualTxns(expected_parent2_node2, orphanage->GetChildrenFromSamePeer(parent2, node2)));
        }
    }
}

BOOST_AUTO_TEST_CASE(too_large_orphan_tx)
{
    TxOrphanage orphanage;
    std::unique_ptr<node::TxOrphanage> orphanage{node::MakeTxOrphanage()};
    CMutableTransaction tx;
    tx.vin.resize(1);

    // check that txs larger than MAX_STANDARD_TX_WEIGHT are not added to the orphanage
    BulkTransaction(tx, MAX_STANDARD_TX_WEIGHT + 4);
    BOOST_CHECK_EQUAL(GetTransactionWeight(CTransaction(tx)), MAX_STANDARD_TX_WEIGHT + 4);
    BOOST_CHECK(!orphanage.AddTx(MakeTransactionRef(tx), 0));
    BOOST_CHECK(!orphanage->AddTx(MakeTransactionRef(tx), 0));

    tx.vout.clear();
    BulkTransaction(tx, MAX_STANDARD_TX_WEIGHT);
    BOOST_CHECK_EQUAL(GetTransactionWeight(CTransaction(tx)), MAX_STANDARD_TX_WEIGHT);
    BOOST_CHECK(orphanage.AddTx(MakeTransactionRef(tx), 0));
    BOOST_CHECK(orphanage->AddTx(MakeTransactionRef(tx), 0));
}

BOOST_AUTO_TEST_CASE(process_block)
{
    FastRandomContext det_rand{true};
    TxOrphanageTest orphanage{det_rand};
    std::unique_ptr<node::TxOrphanage> orphanage{node::MakeTxOrphanage()};

    // Create outpoints that will be spent by transactions in the block
    std::vector<COutPoint> outpoints;
@@ -392,10 +719,10 @@ BOOST_AUTO_TEST_CASE(process_block)
    const NodeId node{0};

    auto control_tx = MakeTransactionSpending({}, det_rand);
    BOOST_CHECK(orphanage.AddTx(control_tx, node));
    BOOST_CHECK(orphanage->AddTx(control_tx, node));

    auto bo_tx_same_txid = MakeTransactionSpending({outpoints.at(0)}, det_rand);
    BOOST_CHECK(orphanage.AddTx(bo_tx_same_txid, node));
    BOOST_CHECK(orphanage->AddTx(bo_tx_same_txid, node));
    block.vtx.emplace_back(bo_tx_same_txid);

    // 2 transactions with the same txid but different witness
@@ -403,30 +730,30 @@ BOOST_AUTO_TEST_CASE(process_block)
    block.vtx.emplace_back(b_tx_same_txid_diff_witness);

    auto o_tx_same_txid_diff_witness = MakeMutation(b_tx_same_txid_diff_witness);
    BOOST_CHECK(orphanage.AddTx(o_tx_same_txid_diff_witness, node));
    BOOST_CHECK(orphanage->AddTx(o_tx_same_txid_diff_witness, node));

    // 2 different transactions that spend the same input.
    auto b_tx_conflict = MakeTransactionSpending({outpoints.at(2)}, det_rand);
    block.vtx.emplace_back(b_tx_conflict);

    auto o_tx_conflict = MakeTransactionSpending({outpoints.at(2)}, det_rand);
    BOOST_CHECK(orphanage.AddTx(o_tx_conflict, node));
    BOOST_CHECK(orphanage->AddTx(o_tx_conflict, node));

    // 2 different transactions that have 1 overlapping input.
    auto b_tx_conflict_partial = MakeTransactionSpending({outpoints.at(3), outpoints.at(4)}, det_rand);
    block.vtx.emplace_back(b_tx_conflict_partial);

    auto o_tx_conflict_partial_2 = MakeTransactionSpending({outpoints.at(4), outpoints.at(5)}, det_rand);
    BOOST_CHECK(orphanage.AddTx(o_tx_conflict_partial_2, node));
    BOOST_CHECK(orphanage->AddTx(o_tx_conflict_partial_2, node));

    orphanage.EraseForBlock(block);
    orphanage->EraseForBlock(block);
    for (const auto& expected_removed : {bo_tx_same_txid, o_tx_same_txid_diff_witness, o_tx_conflict, o_tx_conflict_partial_2}) {
        const auto& expected_removed_wtxid = expected_removed->GetWitnessHash();
        BOOST_CHECK(!orphanage.HaveTx(expected_removed_wtxid));
        BOOST_CHECK(!orphanage->HaveTx(expected_removed_wtxid));
    }
    // Only remaining tx is control_tx
    BOOST_CHECK_EQUAL(orphanage.Size(), 1);
    BOOST_CHECK(orphanage.HaveTx(control_tx->GetWitnessHash()));
    BOOST_CHECK_EQUAL(orphanage->Size(), 1);
    BOOST_CHECK(orphanage->HaveTx(control_tx->GetWitnessHash()));
}

BOOST_AUTO_TEST_CASE(multiple_announcers)
@@ -436,60 +763,60 @@ BOOST_AUTO_TEST_CASE(multiple_announcers)
    const NodeId node2{2};
    size_t expected_total_count{0};
    FastRandomContext det_rand{true};
    TxOrphanageTest orphanage{det_rand};
    std::unique_ptr<node::TxOrphanage> orphanage{node::MakeTxOrphanage()};

    // Check accounting per peer.
    // Check that EraseForPeer works with multiple announcers.
    {
        auto ptx = MakeTransactionSpending({}, det_rand);
        const auto& wtxid = ptx->GetWitnessHash();
        BOOST_CHECK(orphanage.AddTx(ptx, node0));
        BOOST_CHECK(orphanage.HaveTx(wtxid));
        BOOST_CHECK(orphanage->AddTx(ptx, node0));
        BOOST_CHECK(orphanage->HaveTx(wtxid));
        expected_total_count += 1;
        BOOST_CHECK_EQUAL(orphanage.Size(), expected_total_count);
        BOOST_CHECK_EQUAL(orphanage->Size(), expected_total_count);

        // Adding again should do nothing.
        BOOST_CHECK(!orphanage.AddTx(ptx, node0));
        BOOST_CHECK_EQUAL(orphanage.Size(), expected_total_count);
        BOOST_CHECK(!orphanage->AddTx(ptx, node0));
        BOOST_CHECK_EQUAL(orphanage->Size(), expected_total_count);

        // We can add another tx with the same txid but different witness.
        auto ptx_mutated{MakeMutation(ptx)};
        BOOST_CHECK(orphanage.AddTx(ptx_mutated, node0));
        BOOST_CHECK(orphanage.HaveTx(ptx_mutated->GetWitnessHash()));
        BOOST_CHECK(orphanage->AddTx(ptx_mutated, node0));
        BOOST_CHECK(orphanage->HaveTx(ptx_mutated->GetWitnessHash()));
        expected_total_count += 1;

        BOOST_CHECK(!orphanage.AddTx(ptx, node0));
        BOOST_CHECK(!orphanage->AddTx(ptx, node0));

        // Adding a new announcer should not change overall accounting.
        BOOST_CHECK(orphanage.AddAnnouncer(ptx->GetWitnessHash(), node2));
        BOOST_CHECK_EQUAL(orphanage.Size(), expected_total_count);
        BOOST_CHECK(orphanage->AddAnnouncer(ptx->GetWitnessHash(), node2));
        BOOST_CHECK_EQUAL(orphanage->Size(), expected_total_count);

        // If we already have this announcer, AddAnnouncer returns false.
        BOOST_CHECK(orphanage.HaveTxFromPeer(ptx->GetWitnessHash(), node2));
        BOOST_CHECK(!orphanage.AddAnnouncer(ptx->GetWitnessHash(), node2));
        BOOST_CHECK(orphanage->HaveTxFromPeer(ptx->GetWitnessHash(), node2));
        BOOST_CHECK(!orphanage->AddAnnouncer(ptx->GetWitnessHash(), node2));

        // Same with using AddTx for an existing tx, which is equivalent to using AddAnnouncer
        BOOST_CHECK(!orphanage.AddTx(ptx, node1));
        BOOST_CHECK_EQUAL(orphanage.Size(), expected_total_count);
        BOOST_CHECK(!orphanage->AddTx(ptx, node1));
        BOOST_CHECK_EQUAL(orphanage->Size(), expected_total_count);

        // if EraseForPeer is called for an orphan with multiple announcers, the orphanage should only
        // erase that peer from the announcers set.
        orphanage.EraseForPeer(node0);
        BOOST_CHECK(orphanage.HaveTx(ptx->GetWitnessHash()));
        BOOST_CHECK(!orphanage.HaveTxFromPeer(ptx->GetWitnessHash(), node0));
        orphanage->EraseForPeer(node0);
        BOOST_CHECK(orphanage->HaveTx(ptx->GetWitnessHash()));
        BOOST_CHECK(!orphanage->HaveTxFromPeer(ptx->GetWitnessHash(), node0));
        // node0 is the only one that announced ptx_mutated
        BOOST_CHECK(!orphanage.HaveTx(ptx_mutated->GetWitnessHash()));
        BOOST_CHECK(!orphanage->HaveTx(ptx_mutated->GetWitnessHash()));
        expected_total_count -= 1;
        BOOST_CHECK_EQUAL(orphanage.Size(), expected_total_count);
        BOOST_CHECK_EQUAL(orphanage->Size(), expected_total_count);

        // EraseForPeer should delete the orphan if it's the only announcer left.
        orphanage.EraseForPeer(node1);
        BOOST_CHECK_EQUAL(orphanage.Size(), expected_total_count);
        BOOST_CHECK(orphanage.HaveTx(ptx->GetWitnessHash()));
        orphanage.EraseForPeer(node2);
        orphanage->EraseForPeer(node1);
        BOOST_CHECK_EQUAL(orphanage->Size(), expected_total_count);
        BOOST_CHECK(orphanage->HaveTx(ptx->GetWitnessHash()));
        orphanage->EraseForPeer(node2);
        expected_total_count -= 1;
        BOOST_CHECK_EQUAL(orphanage.Size(), expected_total_count);
        BOOST_CHECK(!orphanage.HaveTx(ptx->GetWitnessHash()));
        BOOST_CHECK_EQUAL(orphanage->Size(), expected_total_count);
        BOOST_CHECK(!orphanage->HaveTx(ptx->GetWitnessHash()));
    }

    // Check that erasure for blocks removes for all peers.
@@ -497,18 +824,18 @@ BOOST_AUTO_TEST_CASE(multiple_announcers)
        CBlock block;
        auto tx_block = MakeTransactionSpending({}, det_rand);
        block.vtx.emplace_back(tx_block);
        BOOST_CHECK(orphanage.AddTx(tx_block, node0));
        BOOST_CHECK(!orphanage.AddTx(tx_block, node1));
        BOOST_CHECK(orphanage->AddTx(tx_block, node0));
        BOOST_CHECK(!orphanage->AddTx(tx_block, node1));

        expected_total_count += 1;

        BOOST_CHECK_EQUAL(orphanage.Size(), expected_total_count);
        BOOST_CHECK_EQUAL(orphanage->Size(), expected_total_count);

        orphanage.EraseForBlock(block);
        orphanage->EraseForBlock(block);

        expected_total_count -= 1;

        BOOST_CHECK_EQUAL(orphanage.Size(), expected_total_count);
        BOOST_CHECK_EQUAL(orphanage->Size(), expected_total_count);
    }
}
BOOST_AUTO_TEST_CASE(peer_worksets)
@@ -517,7 +844,7 @@ BOOST_AUTO_TEST_CASE(peer_worksets)
    const NodeId node1{1};
    const NodeId node2{2};
    FastRandomContext det_rand{true};
    TxOrphanageTest orphanage{det_rand};
    std::unique_ptr<node::TxOrphanage> orphanage{node::MakeTxOrphanage()};
    // AddChildrenToWorkSet should pick an announcer randomly
    {
        auto tx_missing_parent = MakeTransactionSpending({}, det_rand);
@@ -525,18 +852,19 @@ BOOST_AUTO_TEST_CASE(peer_worksets)
        const auto& orphan_wtxid = tx_orphan->GetWitnessHash();

        // All 3 peers are announcers.
        BOOST_CHECK(orphanage.AddTx(tx_orphan, node0));
        BOOST_CHECK(!orphanage.AddTx(tx_orphan, node1));
        BOOST_CHECK(orphanage.AddAnnouncer(orphan_wtxid, node2));
        BOOST_CHECK(orphanage->AddTx(tx_orphan, node0));
        BOOST_CHECK(!orphanage->AddTx(tx_orphan, node1));
        BOOST_CHECK(orphanage->AddAnnouncer(orphan_wtxid, node2));
        for (NodeId node = node0; node <= node2; ++node) {
            BOOST_CHECK(orphanage.HaveTxFromPeer(orphan_wtxid, node));
            BOOST_CHECK(orphanage->HaveTxFromPeer(orphan_wtxid, node));
        }

        // Parent accepted: child is added to 1 of 3 worksets.
        orphanage.AddChildrenToWorkSet(*tx_missing_parent, det_rand);
        int node0_reconsider = orphanage.HaveTxToReconsider(node0);
        int node1_reconsider = orphanage.HaveTxToReconsider(node1);
        int node2_reconsider = orphanage.HaveTxToReconsider(node2);
        auto newly_reconsiderable = orphanage->AddChildrenToWorkSet(*tx_missing_parent, det_rand);
        BOOST_CHECK_EQUAL(newly_reconsiderable.size(), 1);
        int node0_reconsider = orphanage->HaveTxToReconsider(node0);
        int node1_reconsider = orphanage->HaveTxToReconsider(node1);
        int node2_reconsider = orphanage->HaveTxToReconsider(node2);
        BOOST_CHECK_EQUAL(node0_reconsider + node1_reconsider + node2_reconsider, 1);

        NodeId assigned_peer;
@@ -550,15 +878,15 @@ BOOST_AUTO_TEST_CASE(peer_worksets)
        }

        // EraseForPeer also removes that tx from the workset.
        orphanage.EraseForPeer(assigned_peer);
        BOOST_CHECK_EQUAL(orphanage.GetTxToReconsider(node0), nullptr);
        orphanage->EraseForPeer(assigned_peer);
        BOOST_CHECK_EQUAL(orphanage->GetTxToReconsider(node0), nullptr);

        // Delete this tx, clearing the orphanage.
        BOOST_CHECK_EQUAL(orphanage.EraseTx(orphan_wtxid), 1);
        BOOST_CHECK_EQUAL(orphanage.Size(), 0);
        BOOST_CHECK_EQUAL(orphanage->EraseTx(orphan_wtxid), 1);
        BOOST_CHECK_EQUAL(orphanage->Size(), 0);
        for (NodeId node = node0; node <= node2; ++node) {
            BOOST_CHECK_EQUAL(orphanage.GetTxToReconsider(node), nullptr);
            BOOST_CHECK(!orphanage.HaveTxFromPeer(orphan_wtxid, node));
            BOOST_CHECK_EQUAL(orphanage->GetTxToReconsider(node), nullptr);
            BOOST_CHECK(!orphanage->HaveTxFromPeer(orphan_wtxid, node));
        }
    }
}

@@ -80,7 +80,7 @@ static bool CheckOrphanBehavior(node::TxDownloadManagerImpl& txdownload_impl, co
        return false;
    }

    if (expect_orphan != txdownload_impl.m_orphanage.HaveTx(tx->GetWitnessHash())) {
    if (expect_orphan != txdownload_impl.m_orphanage->HaveTx(tx->GetWitnessHash())) {
        err_msg = strprintf("unexpectedly %s tx in orphanage", expect_orphan ? "did not find" : "found");
        return false;
    }
@@ -114,7 +114,7 @@ BOOST_FIXTURE_TEST_CASE(tx_rejection_types, TestChain100Setup)
{
    CTxMemPool& pool = *Assert(m_node.mempool);
    FastRandomContext det_rand{true};
    node::TxDownloadOptions DEFAULT_OPTS{pool, det_rand, DEFAULT_MAX_ORPHAN_TRANSACTIONS, true};
    node::TxDownloadOptions DEFAULT_OPTS{pool, det_rand, true};

    // A new TxDownloadManagerImpl is created for each tx so we can just reuse the same one.
    TxValidationState state;
@@ -162,7 +162,7 @@ BOOST_FIXTURE_TEST_CASE(tx_rejection_types, TestChain100Setup)
            BOOST_CHECK_EQUAL(parent_txid_rejected, txdownload_impl.RecentRejectsFilter().contains(child_wtxid.ToUint256()));

            // Unless rejected, the child should be in orphanage.
            BOOST_CHECK_EQUAL(!parent_txid_rejected, txdownload_impl.m_orphanage.HaveTx(ptx_child->GetWitnessHash()));
            BOOST_CHECK_EQUAL(!parent_txid_rejected, txdownload_impl.m_orphanage->HaveTx(ptx_child->GetWitnessHash()));
        }
    }
}
@@ -172,7 +172,7 @@ BOOST_FIXTURE_TEST_CASE(handle_missing_inputs, TestChain100Setup)
{
    CTxMemPool& pool = *Assert(m_node.mempool);
    FastRandomContext det_rand{true};
    node::TxDownloadOptions DEFAULT_OPTS{pool, det_rand, DEFAULT_MAX_ORPHAN_TRANSACTIONS, true};
    node::TxDownloadOptions DEFAULT_OPTS{pool, det_rand, true};
    NodeId nodeid{1};
    node::TxDownloadConnectionInfo DEFAULT_CONN{/*m_preferred=*/false, /*m_relay_permissions=*/false, /*m_wtxid_relay=*/true};

@@ -1,363 +0,0 @@
// Copyright (c) 2021-2022 The Bitcoin Core developers
// Distributed under the MIT software license, see the accompanying
// file COPYING or http://www.opensource.org/licenses/mit-license.php.

#include <txorphanage.h>

#include <consensus/validation.h>
#include <logging.h>
#include <policy/policy.h>
#include <primitives/transaction.h>
#include <util/time.h>

#include <cassert>

bool TxOrphanage::AddTx(const CTransactionRef& tx, NodeId peer)
{
    const Txid& hash = tx->GetHash();
    const Wtxid& wtxid = tx->GetWitnessHash();
    if (auto it{m_orphans.find(wtxid)}; it != m_orphans.end()) {
        AddAnnouncer(wtxid, peer);
        // No new orphan entry was created. An announcer may have been added.
        return false;
    }

    // Ignore big transactions, to avoid a
    // send-big-orphans memory exhaustion attack. If a peer has a legitimate
    // large transaction with a missing parent then we assume
    // it will rebroadcast it later, after the parent transaction(s)
    // have been mined or received.
    // 100 orphans, each of which is at most 100,000 bytes big is
    // at most 10 megabytes of orphans and somewhat more byprev index (in the worst case):
    unsigned int sz = GetTransactionWeight(*tx);
    if (sz > MAX_STANDARD_TX_WEIGHT)
    {
        LogDebug(BCLog::TXPACKAGES, "ignoring large orphan tx (size: %u, txid: %s, wtxid: %s)\n", sz, hash.ToString(), wtxid.ToString());
        return false;
    }

    auto ret = m_orphans.emplace(wtxid, OrphanTx{{tx, {peer}, Now<NodeSeconds>() + ORPHAN_TX_EXPIRE_TIME}, m_orphan_list.size()});
    assert(ret.second);
    m_orphan_list.push_back(ret.first);
    for (const CTxIn& txin : tx->vin) {
        m_outpoint_to_orphan_it[txin.prevout].insert(ret.first);
    }
    m_total_orphan_usage += sz;
    m_total_announcements += 1;
    auto& peer_info = m_peer_orphanage_info.try_emplace(peer).first->second;
    peer_info.m_total_usage += sz;

    LogDebug(BCLog::TXPACKAGES, "stored orphan tx %s (wtxid=%s), weight: %u (mapsz %u outsz %u)\n", hash.ToString(), wtxid.ToString(), sz,
             m_orphans.size(), m_outpoint_to_orphan_it.size());
    return true;
}

bool TxOrphanage::AddAnnouncer(const Wtxid& wtxid, NodeId peer)
{
    const auto it = m_orphans.find(wtxid);
    if (it != m_orphans.end()) {
        Assume(!it->second.announcers.empty());
        const auto ret = it->second.announcers.insert(peer);
        if (ret.second) {
            auto& peer_info = m_peer_orphanage_info.try_emplace(peer).first->second;
            peer_info.m_total_usage += it->second.GetUsage();
            m_total_announcements += 1;
            LogDebug(BCLog::TXPACKAGES, "added peer=%d as announcer of orphan tx %s\n", peer, wtxid.ToString());
            return true;
        }
    }
    return false;
}

int TxOrphanage::EraseTx(const Wtxid& wtxid)
{
    std::map<Wtxid, OrphanTx>::iterator it = m_orphans.find(wtxid);
    if (it == m_orphans.end())
        return 0;
    for (const CTxIn& txin : it->second.tx->vin)
    {
        auto itPrev = m_outpoint_to_orphan_it.find(txin.prevout);
        if (itPrev == m_outpoint_to_orphan_it.end())
            continue;
        itPrev->second.erase(it);
        if (itPrev->second.empty())
            m_outpoint_to_orphan_it.erase(itPrev);
    }

    const auto tx_size{it->second.GetUsage()};
    m_total_orphan_usage -= tx_size;
    m_total_announcements -= it->second.announcers.size();
    // Decrement each announcer's m_total_usage
    for (const auto& peer : it->second.announcers) {
        auto peer_it = m_peer_orphanage_info.find(peer);
        if (Assume(peer_it != m_peer_orphanage_info.end())) {
            peer_it->second.m_total_usage -= tx_size;
        }
    }

    size_t old_pos = it->second.list_pos;
    assert(m_orphan_list[old_pos] == it);
    if (old_pos + 1 != m_orphan_list.size()) {
        // Unless we're deleting the last entry in m_orphan_list, move the last
        // entry to the position we're deleting.
        auto it_last = m_orphan_list.back();
        m_orphan_list[old_pos] = it_last;
        it_last->second.list_pos = old_pos;
    }
    const auto& txid = it->second.tx->GetHash();
    // Time spent in orphanage = difference between current and entry time.
    // Entry time is equal to ORPHAN_TX_EXPIRE_TIME earlier than entry's expiry.
    LogDebug(BCLog::TXPACKAGES, " removed orphan tx %s (wtxid=%s) after %ds\n", txid.ToString(), wtxid.ToString(),
             Ticks<std::chrono::seconds>(NodeClock::now() + ORPHAN_TX_EXPIRE_TIME - it->second.nTimeExpire));
    m_orphan_list.pop_back();

    m_orphans.erase(it);
    return 1;
}

void TxOrphanage::EraseForPeer(NodeId peer)
{
    // Zeroes out this peer's m_total_usage.
    m_peer_orphanage_info.erase(peer);

    int nErased = 0;
    std::map<Wtxid, OrphanTx>::iterator iter = m_orphans.begin();
    while (iter != m_orphans.end())
    {
        // increment to avoid iterator becoming invalid after erasure
        auto& [wtxid, orphan] = *iter++;
        auto orphan_it = orphan.announcers.find(peer);
        if (orphan_it != orphan.announcers.end()) {
            orphan.announcers.erase(peer);
            m_total_announcements -= 1;

            // No remaining announcers: clean up entry
            if (orphan.announcers.empty()) {
                nErased += EraseTx(orphan.tx->GetWitnessHash());
            }
        }
    }
    if (nErased > 0) LogDebug(BCLog::TXPACKAGES, "Erased %d orphan transaction(s) from peer=%d\n", nErased, peer);
}

void TxOrphanage::LimitOrphans(unsigned int max_orphans, FastRandomContext& rng)
{
    unsigned int nEvicted = 0;
    auto nNow{Now<NodeSeconds>()};
    if (m_next_sweep <= nNow) {
        // Sweep out expired orphan pool entries:
        int nErased = 0;
        auto nMinExpTime{nNow + ORPHAN_TX_EXPIRE_TIME - ORPHAN_TX_EXPIRE_INTERVAL};
        std::map<Wtxid, OrphanTx>::iterator iter = m_orphans.begin();
        while (iter != m_orphans.end())
        {
            std::map<Wtxid, OrphanTx>::iterator maybeErase = iter++;
            if (maybeErase->second.nTimeExpire <= nNow) {
                nErased += EraseTx(maybeErase->first);
            } else {
                nMinExpTime = std::min(maybeErase->second.nTimeExpire, nMinExpTime);
            }
        }
        // Sweep again 5 minutes after the next entry that expires in order to batch the linear scan.
        m_next_sweep = nMinExpTime + ORPHAN_TX_EXPIRE_INTERVAL;
        if (nErased > 0) LogDebug(BCLog::TXPACKAGES, "Erased %d orphan tx due to expiration\n", nErased);
    }
    while (m_orphans.size() > max_orphans)
    {
        // Evict a random orphan:
        size_t randompos = rng.randrange(m_orphan_list.size());
        EraseTx(m_orphan_list[randompos]->first);
        ++nEvicted;
    }
    if (nEvicted > 0) LogDebug(BCLog::TXPACKAGES, "orphanage overflow, removed %u tx\n", nEvicted);
}

void TxOrphanage::AddChildrenToWorkSet(const CTransaction& tx, FastRandomContext& rng)
{
    for (unsigned int i = 0; i < tx.vout.size(); i++) {
        const auto it_by_prev = m_outpoint_to_orphan_it.find(COutPoint(tx.GetHash(), i));
        if (it_by_prev != m_outpoint_to_orphan_it.end()) {
            for (const auto& elem : it_by_prev->second) {
                // Belt and suspenders, each orphan should always have at least 1 announcer.
                if (!Assume(!elem->second.announcers.empty())) continue;

                // Select a random peer to assign orphan processing, reducing wasted work if the orphan is still missing
                // inputs. However, we don't want to create an issue in which the assigned peer can purposefully stop us
                // from processing the orphan by disconnecting.
                auto announcer_iter = std::begin(elem->second.announcers);
                std::advance(announcer_iter, rng.randrange(elem->second.announcers.size()));
                auto announcer = *(announcer_iter);

                // Get this source peer's work set, emplacing an empty set if it didn't exist
                // (note: if this peer wasn't still connected, we would have removed the orphan tx already)
                std::set<Wtxid>& orphan_work_set = m_peer_orphanage_info.try_emplace(announcer).first->second.m_work_set;
                // Add this tx to the work set
                orphan_work_set.insert(elem->first);
                LogDebug(BCLog::TXPACKAGES, "added %s (wtxid=%s) to peer %d workset\n",
                         tx.GetHash().ToString(), tx.GetWitnessHash().ToString(), announcer);
            }
        }
    }
}

bool TxOrphanage::HaveTx(const Wtxid& wtxid) const
{
    return m_orphans.count(wtxid);
}

CTransactionRef TxOrphanage::GetTx(const Wtxid& wtxid) const
{
    auto it = m_orphans.find(wtxid);
    return it != m_orphans.end() ? it->second.tx : nullptr;
}

bool TxOrphanage::HaveTxFromPeer(const Wtxid& wtxid, NodeId peer) const
{
    auto it = m_orphans.find(wtxid);
    return (it != m_orphans.end() && it->second.announcers.contains(peer));
}

CTransactionRef TxOrphanage::GetTxToReconsider(NodeId peer)
{
    auto peer_it = m_peer_orphanage_info.find(peer);
    if (peer_it == m_peer_orphanage_info.end()) return nullptr;

    auto& work_set = peer_it->second.m_work_set;
    while (!work_set.empty()) {
        Wtxid wtxid = *work_set.begin();
        work_set.erase(work_set.begin());

        const auto orphan_it = m_orphans.find(wtxid);
        if (orphan_it != m_orphans.end()) {
            return orphan_it->second.tx;
        }
    }
    return nullptr;
}

bool TxOrphanage::HaveTxToReconsider(NodeId peer)
{
    auto peer_it = m_peer_orphanage_info.find(peer);
    if (peer_it == m_peer_orphanage_info.end()) return false;

    auto& work_set = peer_it->second.m_work_set;
    return !work_set.empty();
}

void TxOrphanage::EraseForBlock(const CBlock& block)
{
    std::vector<Wtxid> vOrphanErase;

    for (const CTransactionRef& ptx : block.vtx) {
        const CTransaction& tx = *ptx;

        // Which orphan pool entries must we evict?
        for (const auto& txin : tx.vin) {
            auto itByPrev = m_outpoint_to_orphan_it.find(txin.prevout);
            if (itByPrev == m_outpoint_to_orphan_it.end()) continue;
            for (auto mi = itByPrev->second.begin(); mi != itByPrev->second.end(); ++mi) {
                const CTransaction& orphanTx = *(*mi)->second.tx;
                vOrphanErase.push_back(orphanTx.GetWitnessHash());
            }
        }
    }

    // Erase orphan transactions included or precluded by this block
    if (vOrphanErase.size()) {
        int nErased = 0;
        for (const auto& orphanHash : vOrphanErase) {
            nErased += EraseTx(orphanHash);
        }
        LogDebug(BCLog::TXPACKAGES, "Erased %d orphan transaction(s) included or conflicted by block\n", nErased);
    }
}

std::vector<CTransactionRef> TxOrphanage::GetChildrenFromSamePeer(const CTransactionRef& parent, NodeId nodeid) const
{
    // First construct a vector of iterators to ensure we do not return duplicates of the same tx
    // and so we can sort by nTimeExpire.
    std::vector<OrphanMap::iterator> iters;

    // For each output, get all entries spending this prevout, filtering for ones from the specified peer.
    for (unsigned int i = 0; i < parent->vout.size(); i++) {
        const auto it_by_prev = m_outpoint_to_orphan_it.find(COutPoint(parent->GetHash(), i));
        if (it_by_prev != m_outpoint_to_orphan_it.end()) {
            for (const auto& elem : it_by_prev->second) {
                if (elem->second.announcers.contains(nodeid)) {
                    iters.emplace_back(elem);
                }
            }
        }
    }

    // Sort by address so that duplicates can be deleted. At the same time, sort so that more recent
    // orphans (which expire later) come first. Break ties based on address, as nTimeExpire is
    // quantified in seconds and it is possible for orphans to have the same expiry.
    std::sort(iters.begin(), iters.end(), [](const auto& lhs, const auto& rhs) {
        if (lhs->second.nTimeExpire == rhs->second.nTimeExpire) {
            return &(*lhs) < &(*rhs);
        } else {
            return lhs->second.nTimeExpire > rhs->second.nTimeExpire;
        }
    });
    // Erase duplicates
    iters.erase(std::unique(iters.begin(), iters.end()), iters.end());

    // Convert to a vector of CTransactionRef
    std::vector<CTransactionRef> children_found;
    children_found.reserve(iters.size());
    for (const auto& child_iter : iters) {
        children_found.emplace_back(child_iter->second.tx);
    }
    return children_found;
}

std::vector<TxOrphanage::OrphanTxBase> TxOrphanage::GetOrphanTransactions() const
|
||||
{
|
||||
std::vector<OrphanTxBase> ret;
|
||||
ret.reserve(m_orphans.size());
|
||||
for (auto const& o : m_orphans) {
|
||||
ret.push_back({o.second.tx, o.second.announcers, o.second.nTimeExpire});
|
||||
}
|
||||
return ret;
|
||||
}
|
||||
|
||||
void TxOrphanage::SanityCheck() const
|
||||
{
|
||||
// Check that cached m_total_announcements is correct
|
||||
unsigned int counted_total_announcements{0};
|
||||
// Check that m_total_orphan_usage is correct
|
||||
unsigned int counted_total_usage{0};
|
||||
|
||||
// Check that cached PeerOrphanInfo::m_total_size is correct
|
||||
std::map<NodeId, unsigned int> counted_size_per_peer;
|
||||
|
||||
for (const auto& [wtxid, orphan] : m_orphans) {
|
||||
counted_total_announcements += orphan.announcers.size();
|
||||
counted_total_usage += orphan.GetUsage();
|
||||
|
||||
Assume(!orphan.announcers.empty());
|
||||
for (const auto& peer : orphan.announcers) {
|
||||
auto& count_peer_entry = counted_size_per_peer.try_emplace(peer).first->second;
|
||||
count_peer_entry += orphan.GetUsage();
|
||||
}
|
||||
}
|
||||
|
||||
Assume(m_total_announcements >= m_orphans.size());
|
||||
Assume(counted_total_announcements == m_total_announcements);
|
||||
Assume(counted_total_usage == m_total_orphan_usage);
|
||||
|
||||
// There must be an entry in m_peer_orphanage_info for each peer
|
||||
// However, there may be m_peer_orphanage_info entries corresponding to peers for whom we
|
||||
// previously had orphans but no longer do.
|
||||
Assume(counted_size_per_peer.size() <= m_peer_orphanage_info.size());
|
||||
|
||||
for (const auto& [peerid, info] : m_peer_orphanage_info) {
|
||||
auto it_counted = counted_size_per_peer.find(peerid);
|
||||
if (it_counted == counted_size_per_peer.end()) {
|
||||
Assume(info.m_total_usage == 0);
|
||||
} else {
|
||||
Assume(it_counted->second == info.m_total_usage);
|
||||
}
|
||||
}
|
||||
}
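The invariants that SanityCheck() asserts above can be stated compactly outside C++. The sketch below uses plain Python dicts in place of the C++ maps; all names are illustrative, not part of the codebase. The key point is that an orphan's usage counts once toward the global total but once per announcer toward the per-peer totals.

```python
# Minimal sketch of the invariants SanityCheck() recomputes (names hypothetical).
def sanity_check(orphans, peer_info):
    """orphans: wtxid -> (usage, set of announcer peers)
    peer_info: peer -> cached total usage of orphans that peer announced"""
    counted_announcements = 0
    counted_usage = 0
    counted_per_peer = {}
    for wtxid, (usage, announcers) in orphans.items():
        assert announcers  # every orphan has at least one announcer
        counted_announcements += len(announcers)
        counted_usage += usage  # counted once, regardless of announcer count
        for peer in announcers:
            counted_per_peer[peer] = counted_per_peer.get(peer, 0) + usage
    # Per-peer cached totals must match; stale entries must be zero.
    for peer, cached in peer_info.items():
        assert counted_per_peer.get(peer, 0) == cached
    return counted_announcements, counted_usage

orphans = {
    "aa": (400, {0, 1}),  # announced by two peers: counted per-peer twice
    "bb": (300, {1}),
}
peer_info = {0: 400, 1: 700, 2: 0}  # peer 2 no longer has orphans
assert sanity_check(orphans, peer_info) == (3, 700)
```

This mirrors why `counted_size_per_peer.size() <= m_peer_orphanage_info.size()` is only an inequality: entries for peers with no remaining orphans may linger with zero usage.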

@@ -1,166 +0,0 @@
// Copyright (c) 2021-2022 The Bitcoin Core developers
// Distributed under the MIT software license, see the accompanying
// file COPYING or http://www.opensource.org/licenses/mit-license.php.

#ifndef BITCOIN_TXORPHANAGE_H
#define BITCOIN_TXORPHANAGE_H

#include <consensus/validation.h>
#include <net.h>
#include <primitives/block.h>
#include <primitives/transaction.h>
#include <sync.h>
#include <util/time.h>

#include <map>
#include <set>

/** Expiration time for orphan transactions */
static constexpr auto ORPHAN_TX_EXPIRE_TIME{20min};
/** Minimum time between orphan transactions expire time checks */
static constexpr auto ORPHAN_TX_EXPIRE_INTERVAL{5min};

/** A class to track orphan transactions (failed on TX_MISSING_INPUTS)
 * Since we cannot distinguish orphans from bad transactions with
 * non-existent inputs, we heavily limit the number of orphans
 * we keep and the duration we keep them for.
 * Not thread-safe. Requires external synchronization.
 */
class TxOrphanage {
public:
    /** Add a new orphan transaction */
    bool AddTx(const CTransactionRef& tx, NodeId peer);

    /** Add an additional announcer to an orphan if it exists. Otherwise, do nothing. */
    bool AddAnnouncer(const Wtxid& wtxid, NodeId peer);

    CTransactionRef GetTx(const Wtxid& wtxid) const;

    /** Check if we already have an orphan transaction (by wtxid only) */
    bool HaveTx(const Wtxid& wtxid) const;

    /** Check if a {tx, peer} pair exists in the orphanage. */
    bool HaveTxFromPeer(const Wtxid& wtxid, NodeId peer) const;

    /** Extract a transaction from a peer's work set
     * Returns nullptr if there are no transactions to work on.
     * Otherwise returns the transaction reference, and removes
     * it from the work set.
     */
    CTransactionRef GetTxToReconsider(NodeId peer);

    /** Erase an orphan by wtxid */
    int EraseTx(const Wtxid& wtxid);

    /** Maybe erase all orphans announced by a peer (eg, after that peer disconnects). If an orphan
     * has been announced by another peer, don't erase, just remove this peer from the list of announcers. */
    void EraseForPeer(NodeId peer);

    /** Erase all orphans included in or invalidated by a new block */
    void EraseForBlock(const CBlock& block);

    /** Limit the orphanage to the given maximum */
    void LimitOrphans(unsigned int max_orphans, FastRandomContext& rng);

    /** Add any orphans that list a particular tx as a parent into the from peer's work set */
    void AddChildrenToWorkSet(const CTransaction& tx, FastRandomContext& rng);

    /** Does this peer have any work to do? */
    bool HaveTxToReconsider(NodeId peer);

    /** Get all children that spend from this tx and were received from nodeid. Sorted from most
     * recent to least recent. */
    std::vector<CTransactionRef> GetChildrenFromSamePeer(const CTransactionRef& parent, NodeId nodeid) const;

    /** Return how many entries exist in the orphanage */
    size_t Size() const
    {
        return m_orphans.size();
    }

    /** Allows providing orphan information externally */
    struct OrphanTxBase {
        CTransactionRef tx;
        /** Peers added with AddTx or AddAnnouncer. */
        std::set<NodeId> announcers;
        NodeSeconds nTimeExpire;

        /** Get the weight of this transaction, an approximation of its memory usage. */
        unsigned int GetUsage() const {
            return GetTransactionWeight(*tx);
        }
    };

    std::vector<OrphanTxBase> GetOrphanTransactions() const;

    /** Get the total usage (weight) of all orphans. If an orphan has multiple announcers, its usage is
     * only counted once within this total. */
    unsigned int TotalOrphanUsage() const { return m_total_orphan_usage; }

    /** Total usage (weight) of orphans for which this peer is an announcer. If an orphan has multiple
     * announcers, its weight will be accounted for in each PeerOrphanInfo, so the total of all
     * peers' UsageByPeer() may be larger than TotalOrphanUsage(). */
    unsigned int UsageByPeer(NodeId peer) const {
        auto peer_it = m_peer_orphanage_info.find(peer);
        return peer_it == m_peer_orphanage_info.end() ? 0 : peer_it->second.m_total_usage;
    }

    /** Check consistency between PeerOrphanInfo and m_orphans. Recalculate counters and ensure they
     * match what is cached. */
    void SanityCheck() const;

protected:
    struct OrphanTx : public OrphanTxBase {
        size_t list_pos;
    };

    /** Total usage (weight) of all entries in m_orphans. */
    unsigned int m_total_orphan_usage{0};

    /** Total number of <peer, tx> pairs. Can be larger than m_orphans.size() because multiple peers
     * may have announced the same orphan. */
    unsigned int m_total_announcements{0};

    /** Map from wtxid to orphan transaction record. Limited by
     * -maxorphantx/DEFAULT_MAX_ORPHAN_TRANSACTIONS */
    std::map<Wtxid, OrphanTx> m_orphans;

    struct PeerOrphanInfo {
        /** List of transactions that should be reconsidered: added to in AddChildrenToWorkSet,
         * removed from one-by-one with each call to GetTxToReconsider. The wtxids may refer to
         * transactions that are no longer present in the orphanage; these are lazily removed in
         * GetTxToReconsider. */
        std::set<Wtxid> m_work_set;

        /** Total weight of orphans for which this peer is an announcer.
         * If an orphan is provided by multiple peers, its weight will be accounted for in each
         * PeerOrphanInfo, so the total of all peers' m_total_usage may be larger than
         * m_total_orphan_usage. If a peer is removed as an announcer, even if the orphan still
         * remains in the orphanage, this number will be decremented. */
        unsigned int m_total_usage{0};
    };
    std::map<NodeId, PeerOrphanInfo> m_peer_orphanage_info;

    using OrphanMap = decltype(m_orphans);

    struct IteratorComparator
    {
        template<typename I>
        bool operator()(const I& a, const I& b) const
        {
            return a->first < b->first;
        }
    };

    /** Index from the parents' COutPoint into the m_orphans. Used
     * to remove orphan transactions from the m_orphans */
    std::map<COutPoint, std::set<OrphanMap::iterator, IteratorComparator>> m_outpoint_to_orphan_it;

    /** Orphan transactions in vector for quick random eviction */
    std::vector<OrphanMap::iterator> m_orphan_list;

    /** Timestamp for the next scheduled sweep of expired orphans */
    NodeSeconds m_next_sweep{0s};
};

#endif // BITCOIN_TXORPHANAGE_H
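The multi-announcer bookkeeping documented in this (now removed) header — usage counted once globally via TotalOrphanUsage() but once per announcing peer via UsageByPeer() — can be illustrated with a toy model. This is a hedged sketch, not the real implementation; all names are made up for illustration.

```python
# Toy model of AddTx/AddAnnouncer/EraseForPeer-style accounting (illustrative only).
class TinyOrphanage:
    def __init__(self):
        self.orphans = {}     # wtxid -> (usage, set of announcer peers)
        self.total_usage = 0  # analogous to m_total_orphan_usage
        self.peer_usage = {}  # analogous to PeerOrphanInfo::m_total_usage

    def add_tx(self, wtxid, usage, peer):
        if wtxid in self.orphans:
            return self.add_announcer(wtxid, peer)
        self.orphans[wtxid] = (usage, {peer})
        self.total_usage += usage
        self.peer_usage[peer] = self.peer_usage.get(peer, 0) + usage
        return True

    def add_announcer(self, wtxid, peer):
        if wtxid not in self.orphans:
            return False
        usage, announcers = self.orphans[wtxid]
        if peer in announcers:
            return False
        announcers.add(peer)
        # total_usage is NOT incremented: the orphan is already counted once.
        self.peer_usage[peer] = self.peer_usage.get(peer, 0) + usage
        return True

    def erase_for_peer(self, peer):
        for wtxid in list(self.orphans):
            usage, announcers = self.orphans[wtxid]
            if peer in announcers:
                announcers.discard(peer)
                self.peer_usage[peer] -= usage
                if not announcers:  # last announcer gone: erase the orphan
                    del self.orphans[wtxid]
                    self.total_usage -= usage

o = TinyOrphanage()
o.add_tx("aa", 400, peer=0)
o.add_announcer("aa", peer=1)
assert o.total_usage == 400 and o.peer_usage == {0: 400, 1: 400}
o.erase_for_peer(0)
assert o.total_usage == 400  # peer 1 still announces "aa"
o.erase_for_peer(1)
assert o.total_usage == 0 and not o.orphans
```

Note how the sum of per-peer usages (800) exceeds the global total (400) while two peers announce the same orphan, which is exactly the caveat in the UsageByPeer() doc comment.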
@@ -143,14 +143,14 @@ class InvalidTxRequestTest(BitcoinTestFramework):
        self.wait_until(lambda: 1 == len(node.getpeerinfo()), timeout=12)  # p2ps[1] is no longer connected
        assert_equal(expected_mempool, set(node.getrawmempool()))

        self.log.info('Test orphan pool overflow')
        self.log.info('Test orphanage can store more than 100 transactions')
        orphan_tx_pool = [CTransaction() for _ in range(101)]
        for i in range(len(orphan_tx_pool)):
            orphan_tx_pool[i].vin.append(CTxIn(outpoint=COutPoint(i, 333)))
            orphan_tx_pool[i].vout.append(CTxOut(nValue=11 * COIN, scriptPubKey=SCRIPT_PUB_KEY_OP_TRUE))

        with node.assert_debug_log(['orphanage overflow, removed 1 tx']):
            node.p2ps[0].send_txs_and_test(orphan_tx_pool, node, success=False)
        node.p2ps[0].send_txs_and_test(orphan_tx_pool, node, success=False)
        self.wait_until(lambda: len(node.getorphantxs()) >= 101)

        self.log.info('Test orphan with rejected parents')
        rejected_parent = CTransaction()
@@ -160,8 +160,8 @@ class InvalidTxRequestTest(BitcoinTestFramework):
        node.p2ps[0].send_txs_and_test([rejected_parent], node, success=False)

        self.log.info('Test that a peer disconnection causes erase its transactions from the orphan pool')
        with node.assert_debug_log(['Erased 100 orphan transaction(s) from peer=26']):
            self.reconnect_p2p(num_connections=1)
        self.reconnect_p2p(num_connections=1)
        self.wait_until(lambda: len(node.getorphantxs()) == 0)

        self.log.info('Test that a transaction in the orphan pool is included in a new tip block causes erase this transaction from the orphan pool')
        tx_withhold_until_block_A = CTransaction()

@@ -7,12 +7,20 @@ Test opportunistic 1p1c package submission logic.
"""

from decimal import Decimal
import random
import time

from test_framework.blocktools import MAX_STANDARD_TX_WEIGHT
from test_framework.mempool_util import (
    create_large_orphan,
    fill_mempool,
)
from test_framework.messages import (
    CInv,
    COutPoint,
    CTransaction,
    CTxIn,
    CTxOut,
    CTxInWitness,
    MAX_BIP125_RBF_SEQUENCE,
    MSG_WTX,
@@ -21,12 +29,20 @@ from test_framework.messages import (
    tx_from_hex,
)
from test_framework.p2p import (
    NONPREF_PEER_TX_DELAY,
    P2PInterface,
    TXID_RELAY_DELAY,
)
from test_framework.script import (
    CScript,
    OP_NOP,
    OP_RETURN,
)
from test_framework.test_framework import BitcoinTestFramework
from test_framework.util import (
    assert_equal,
    assert_greater_than,
    assert_greater_than_or_equal,
)
from test_framework.wallet import (
    MiniWallet,
@@ -373,6 +389,164 @@ class PackageRelayTest(BitcoinTestFramework):
        result_missing_parent = node.submitpackage(package_hex_missing_parent)
        assert_equal(result_missing_parent["package_msg"], "package-not-child-with-unconfirmed-parents")

    def create_small_orphan(self):
        """Create small orphan transaction"""
        tx = CTransaction()
        # Nonexistent UTXO
        tx.vin = [CTxIn(COutPoint(random.randrange(1 << 256), random.randrange(1, 100)))]
        tx.wit.vtxinwit = [CTxInWitness()]
        tx.wit.vtxinwit[0].scriptWitness.stack = [CScript([OP_NOP] * 5)]
        tx.vout = [CTxOut(100, CScript([OP_RETURN, b'a' * 3]))]
        return tx

    @cleanup
    def test_orphanage_dos_large(self):
        self.log.info("Test that the node can still resolve orphans when peers use lots of orphanage space")
        node = self.nodes[0]
        node.setmocktime(int(time.time()))

        peer_normal = node.add_p2p_connection(P2PInterface())
        peer_doser = node.add_p2p_connection(P2PInterface())

        self.log.info("Create very large orphans to be sent by DoSy peers (may take a while)")
        large_orphans = [create_large_orphan() for _ in range(100)]
        # Check to make sure these are orphans, within max standard size (to be accepted into the orphanage)
        for large_orphan in large_orphans:
            assert_greater_than_or_equal(100000, large_orphan.get_vsize())
            assert_greater_than(MAX_STANDARD_TX_WEIGHT, large_orphan.get_weight())
            assert_greater_than_or_equal(3 * large_orphan.get_vsize(), 2 * 100000)
            testres = node.testmempoolaccept([large_orphan.serialize().hex()])
            assert not testres[0]["allowed"]
            assert_equal(testres[0]["reject-reason"], "missing-inputs")

        num_individual_dosers = 30
        self.log.info(f"Connect {num_individual_dosers} peers and send a very large orphan from each one")
        # This test assumes that unrequested transactions are processed (skipping inv and
        # getdata steps because they require going through request delays)
        # Connect num_individual_dosers peers and have each of them send a large orphan.
        for large_orphan in large_orphans[:num_individual_dosers]:
            peer_doser_individual = node.add_p2p_connection(P2PInterface())
            peer_doser_individual.send_and_ping(msg_tx(large_orphan))
            node.bumpmocktime(NONPREF_PEER_TX_DELAY + TXID_RELAY_DELAY)
            peer_doser_individual.wait_for_getdata([large_orphan.vin[0].prevout.hash])

        # Make sure that these transactions are going through the orphan handling codepaths.
        # Subsequent rounds will not wait for getdata because the time mocking will cause the
        # normal package request to time out.
        self.wait_until(lambda: len(node.getorphantxs()) == num_individual_dosers)

        self.log.info("Send an orphan from a non-DoSy peer. Its orphan should not be evicted.")
        low_fee_parent = self.create_tx_below_mempoolminfee(self.wallet)
        high_fee_child = self.wallet.create_self_transfer(
            utxo_to_spend=low_fee_parent["new_utxo"],
            fee_rate=200*FEERATE_1SAT_VB,
            target_vsize=100000
        )

        # Announce
        orphan_tx = high_fee_child["tx"]
        orphan_inv = CInv(t=MSG_WTX, h=orphan_tx.wtxid_int)

        # Wait for getdata
        peer_normal.send_and_ping(msg_inv([orphan_inv]))
        node.bumpmocktime(NONPREF_PEER_TX_DELAY)
        peer_normal.wait_for_getdata([orphan_tx.wtxid_int])
        peer_normal.send_and_ping(msg_tx(orphan_tx))

        # Wait for parent request
        parent_txid_int = int(low_fee_parent["txid"], 16)
        node.bumpmocktime(NONPREF_PEER_TX_DELAY + TXID_RELAY_DELAY)
        peer_normal.wait_for_getdata([parent_txid_int])

        self.log.info("Send another round of very large orphans from a DoSy peer")
        for large_orphan in large_orphans[num_individual_dosers:]:
            peer_doser.send_and_ping(msg_tx(large_orphan))

        # Something was evicted; the orphanage does not contain all large orphans + the 1p1c child
        self.wait_until(lambda: len(node.getorphantxs()) < len(large_orphans) + 1)

        self.log.info("Provide the orphan's parent. This 1p1c package should be successfully accepted.")
        peer_normal.send_and_ping(msg_tx(low_fee_parent["tx"]))
        assert_equal(node.getmempoolentry(orphan_tx.txid_hex)["ancestorcount"], 2)

    @cleanup
    def test_orphanage_dos_many(self):
        self.log.info("Test that the node can still resolve orphans when peers are sending tons of orphans")
        node = self.nodes[0]
        node.setmocktime(int(time.time()))

        peer_normal = node.add_p2p_connection(P2PInterface())

        # 2 sets of peers: the first set all send the same batch_size orphans. The second set each
        # sends batch_size distinct orphans.
        batch_size = 51
        num_peers_shared = 60
        num_peers_unique = 40

        # 60 peers * 51 orphans = 3060 announcements
        shared_orphans = [self.create_small_orphan() for _ in range(batch_size)]
        self.log.info(f"Send the same {batch_size} orphans from {num_peers_shared} DoSy peers (may take a while)")
        peer_doser_shared = [node.add_p2p_connection(P2PInterface()) for _ in range(num_peers_shared)]
        for i in range(num_peers_shared):
            for orphan in shared_orphans:
                peer_doser_shared[i].send_without_ping(msg_tx(orphan))

        # We sync peers to make sure we have processed as many orphans as possible. Ensure at least
        # one of the orphans was processed.
        for peer_doser in peer_doser_shared:
            peer_doser.sync_with_ping()
        self.wait_until(lambda: any([tx.txid_hex in node.getorphantxs() for tx in shared_orphans]))

        self.log.info("Send an orphan from a non-DoSy peer. Its orphan should not be evicted.")
        low_fee_parent = self.create_tx_below_mempoolminfee(self.wallet)
        high_fee_child = self.wallet.create_self_transfer(
            utxo_to_spend=low_fee_parent["new_utxo"],
            fee_rate=200*FEERATE_1SAT_VB,
        )

        # Announce
        orphan_tx = high_fee_child["tx"]
        orphan_inv = CInv(t=MSG_WTX, h=orphan_tx.wtxid_int)

        # Wait for getdata
        peer_normal.send_and_ping(msg_inv([orphan_inv]))
        node.bumpmocktime(NONPREF_PEER_TX_DELAY)
        peer_normal.wait_for_getdata([orphan_tx.wtxid_int])
        peer_normal.send_and_ping(msg_tx(orphan_tx))

        # Orphan has been entered and evicted something else
        self.wait_until(lambda: high_fee_child["txid"] in node.getorphantxs())

        # Wait for parent request
        parent_txid_int = low_fee_parent["tx"].txid_int
        node.bumpmocktime(NONPREF_PEER_TX_DELAY + TXID_RELAY_DELAY)
        peer_normal.wait_for_getdata([parent_txid_int])

        # Each of the num_peers_unique peers creates a distinct set of orphans
        many_orphans = [self.create_small_orphan() for _ in range(batch_size * num_peers_unique)]

        self.log.info(f"Send sets of {batch_size} orphans from {num_peers_unique} DoSy peers (may take a while)")
        for peernum in range(num_peers_unique):
            peer_doser_batch = node.add_p2p_connection(P2PInterface())
            this_batch_orphans = many_orphans[batch_size*peernum : batch_size*(peernum+1)]
            for tx in this_batch_orphans:
                # Don't wait for responses, because it dramatically increases the runtime of this test.
                peer_doser_batch.send_without_ping(msg_tx(tx))

            # Ensure at least one of the peer's orphans shows up in getorphantxs. Since each peer is
            # reserved a portion of orphanage space, this must happen as long as the orphans are not
            # rejected for some other reason.
            peer_doser_batch.sync_with_ping()
            self.wait_until(lambda: any([tx.txid_hex in node.getorphantxs() for tx in this_batch_orphans]))

        self.log.info("Check that orphan from normal peer still exists in orphanage")
        assert high_fee_child["txid"] in node.getorphantxs()

        self.log.info("Provide the orphan's parent. This 1p1c package should be successfully accepted.")
        peer_normal.send_and_ping(msg_tx(low_fee_parent["tx"]))
        assert orphan_tx.txid_hex in node.getrawmempool()
        assert_equal(node.getmempoolentry(orphan_tx.txid_hex)["ancestorcount"], 2)

    def run_test(self):
        node = self.nodes[0]
        # To avoid creating transactions with the same txid (can happen if we set the same feerate
@@ -407,6 +581,9 @@ class PackageRelayTest(BitcoinTestFramework):
        self.test_multiple_parents()
        self.test_other_parent_in_mempool()

        self.test_orphanage_dos_large()
        self.test_orphanage_dos_many()


if __name__ == '__main__':
    PackageRelayTest(__file__).main()

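Both DoS tests above rely on the same property: eviction pressure falls on the peers consuming the most orphanage resources, so a peer staying within its reserved share is never evicted. The sketch below illustrates that selection rule only conceptually; the function name, the equal-share policy, and the numbers are assumptions for illustration, and the actual LimitOrphans() logic in this PR is more involved.

```python
# Hedged sketch: evict from whichever peer most exceeds its per-peer share.
# This is NOT the real LimitOrphans() algorithm, just the idea the tests exercise.
def choose_eviction_peer(peer_usage, capacity):
    """peer_usage: peer -> orphanage usage; capacity: global limit."""
    share = capacity / max(1, len(peer_usage))  # equal reservation per peer
    overage = {p: u - share for p, u in peer_usage.items() if u > share}
    if not overage:
        return None  # nobody exceeds their reservation; nobody is evicted
    return max(overage, key=overage.get)  # the "DoSiest" peer pays first

# 3 peers sharing capacity 300 -> each reserved 100 units of usage.
usage = {"doser": 250, "normal": 40, "quiet": 10}
assert choose_eviction_peer(usage, 300) == "doser"
# When everyone is within their share, no eviction target is chosen.
assert choose_eviction_peer({"a": 50, "b": 60}, 300) is None
```

This is why the honest peer's 1p1c child survives even while DoSy peers overflow the orphanage in both tests.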
@@ -5,10 +5,14 @@

import time

from test_framework.mempool_util import tx_in_orphanage
from test_framework.mempool_util import (
    create_large_orphan,
    tx_in_orphanage,
)
from test_framework.messages import (
    CInv,
    CTxInWitness,
    DEFAULT_ANCESTOR_LIMIT,
    MSG_TX,
    MSG_WITNESS_TX,
    MSG_WTX,
@@ -43,14 +47,7 @@ from test_framework.wallet import (
# for one peer and y seconds for another, use specific values instead.
TXREQUEST_TIME_SKIP = NONPREF_PEER_TX_DELAY + TXID_RELAY_DELAY + OVERLOADED_PEER_TX_DELAY + 1

DEFAULT_MAX_ORPHAN_TRANSACTIONS = 100

def cleanup(func):
    # Time to fast-forward (using setmocktime) in between subtests to ensure they do not interfere with
    # one another, in seconds. Equal to 12 hours, which is enough to expire anything that may exist
    # (though nothing should since state should be cleared) in p2p data structures.
    LONG_TIME_SKIP = 12 * 60 * 60

    def wrapper(self):
        try:
            func(self)
@@ -58,10 +55,13 @@ def cleanup(func):
            # Clear mempool
            self.generate(self.nodes[0], 1)
            self.nodes[0].disconnect_p2ps()
            self.nodes[0].bumpmocktime(LONG_TIME_SKIP)
            # Check that mempool and orphanage have been cleared
            self.wait_until(lambda: len(self.nodes[0].getorphantxs()) == 0)
            assert_equal(0, len(self.nodes[0].getrawmempool()))

            self.restart_node(0, extra_args=["-persistmempool=0"])
            # Allow use of bumpmocktime again
            self.nodes[0].setmocktime(int(time.time()))
            self.wallet.rescan_utxos(include_mempool=True)
    return wrapper

@@ -593,46 +593,6 @@ class OrphanHandlingTest(BitcoinTestFramework):
        assert_equal(node.getmempoolentry(tx_child["txid"])["wtxid"], tx_child["wtxid"])
        self.wait_until(lambda: len(node.getorphantxs()) == 0)

    @cleanup
    def test_max_orphan_amount(self):
        self.log.info("Check that we never exceed our storage limits for orphans")

        node = self.nodes[0]
        self.generate(self.wallet, 1)
        peer_1 = node.add_p2p_connection(P2PInterface())

        self.log.info("Check that orphanage is empty on start of test")
        assert len(node.getorphantxs()) == 0

        self.log.info("Filling up orphanage with " + str(DEFAULT_MAX_ORPHAN_TRANSACTIONS) + "(DEFAULT_MAX_ORPHAN_TRANSACTIONS) orphans")
        orphans = []
        parent_orphans = []
        for _ in range(DEFAULT_MAX_ORPHAN_TRANSACTIONS):
            tx_parent_1 = self.wallet.create_self_transfer()
            tx_child_1 = self.wallet.create_self_transfer(utxo_to_spend=tx_parent_1["new_utxo"])
            parent_orphans.append(tx_parent_1["tx"])
            orphans.append(tx_child_1["tx"])
            peer_1.send_without_ping(msg_tx(tx_child_1["tx"]))

        peer_1.sync_with_ping()
        orphanage = node.getorphantxs()
        self.wait_until(lambda: len(node.getorphantxs()) == DEFAULT_MAX_ORPHAN_TRANSACTIONS)

        for orphan in orphans:
            assert tx_in_orphanage(node, orphan)

        self.log.info("Check that we do not add more than the max orphan amount")
        tx_parent_1 = self.wallet.create_self_transfer()
        tx_child_1 = self.wallet.create_self_transfer(utxo_to_spend=tx_parent_1["new_utxo"])
        peer_1.send_and_ping(msg_tx(tx_child_1["tx"]))
        parent_orphans.append(tx_parent_1["tx"])
        orphanage = node.getorphantxs()
        assert_equal(len(orphanage), DEFAULT_MAX_ORPHAN_TRANSACTIONS)

        self.log.info("Clearing the orphanage")
        for index, parent_orphan in enumerate(parent_orphans):
            peer_1.send_and_ping(msg_tx(parent_orphan))
        self.wait_until(lambda: len(node.getorphantxs()) == 0)

    @cleanup
    def test_orphan_handling_prefer_outbound(self):
@@ -671,6 +631,72 @@ class OrphanHandlingTest(BitcoinTestFramework):
        peer_inbound.sync_with_ping()
        peer_inbound.wait_for_parent_requests([parent_tx.txid_int])

    @cleanup
    def test_maximal_package_protected(self):
        self.log.info("Test that a node only announcing a maximally sized ancestor package is protected in orphanage")
        self.nodes[0].setmocktime(int(time.time()))
        node = self.nodes[0]

        peer_normal = node.add_p2p_connection(P2PInterface())
        peer_doser = node.add_p2p_connection(P2PInterface())

        # Each of the num_peers peers creates a distinct set of orphans
        large_orphans = [create_large_orphan() for _ in range(60)]

        # Check to make sure these are orphans, within max standard size (to be accepted into the orphanage)
        for large_orphan in large_orphans:
            testres = node.testmempoolaccept([large_orphan.serialize().hex()])
            assert not testres[0]["allowed"]
            assert_equal(testres[0]["reject-reason"], "missing-inputs")

        num_individual_dosers = 20
        self.log.info(f"Connect {num_individual_dosers} peers and send a very large orphan from each one")
        # This test assumes that unrequested transactions are processed (skipping inv and
        # getdata steps because they require going through request delays)
        # Connect 20 peers and have each of them send a large orphan.
        for large_orphan in large_orphans[:num_individual_dosers]:
            peer_doser_individual = node.add_p2p_connection(P2PInterface())
            peer_doser_individual.send_and_ping(msg_tx(large_orphan))
            node.bumpmocktime(NONPREF_PEER_TX_DELAY + TXID_RELAY_DELAY + 1)
            peer_doser_individual.wait_for_getdata([large_orphan.vin[0].prevout.hash])

        # Make sure that these transactions are going through the orphan handling codepaths.
        # Subsequent rounds will not wait for getdata because the time mocking will cause the
        # normal package request to time out.
        self.wait_until(lambda: len(node.getorphantxs()) == num_individual_dosers)

        # Now the honest peer sends a maximally sized ancestor package of 24 orphans chaining
        # off of a single missing transaction, with a total weight of 404,000WU
        ancestor_package = self.wallet.create_self_transfer_chain(chain_length=DEFAULT_ANCESTOR_LIMIT - 1)
        sum_ancestor_package_vsize = sum([tx["tx"].get_vsize() for tx in ancestor_package])
        final_tx = self.wallet.create_self_transfer(utxo_to_spend=ancestor_package[-1]["new_utxo"], target_vsize=101000 - sum_ancestor_package_vsize)
        ancestor_package.append(final_tx)

        # Peer sends all but the first tx to fill up the orphanage with their orphans
        for orphan in ancestor_package[1:]:
            peer_normal.send_and_ping(msg_tx(orphan["tx"]))

        orphan_set = node.getorphantxs()
        for orphan in ancestor_package[1:]:
            assert orphan["txid"] in orphan_set

        # Wait for ultimate parent request (the root ancestor transaction)
        parent_txid_int = ancestor_package[0]["tx"].txid_int
        node.bumpmocktime(NONPREF_PEER_TX_DELAY + TXID_RELAY_DELAY)

        self.wait_until(lambda: "getdata" in peer_normal.last_message and parent_txid_int in [inv.hash for inv in peer_normal.last_message.get("getdata").inv])

        self.log.info("Send another round of very large orphans from a DoSy peer")
        for large_orphan in large_orphans[num_individual_dosers:]:
            peer_doser.send_and_ping(msg_tx(large_orphan))

        self.log.info("Provide the top ancestor. The whole package should be re-evaluated after enough time.")
        peer_normal.send_and_ping(msg_tx(ancestor_package[0]["tx"]))

        # Wait until all transactions have been processed. When the last tx is accepted, it's
        # guaranteed to have all ancestors.
        self.wait_until(lambda: node.getmempoolentry(final_tx["txid"])["ancestorcount"] == DEFAULT_ANCESTOR_LIMIT)

    @cleanup
    def test_announcers_before_and_after(self):
        self.log.info("Test that the node uses all peers who announced the tx prior to realizing it's an orphan")
@@ -820,10 +846,10 @@ class OrphanHandlingTest(BitcoinTestFramework):
        self.test_same_txid_orphan()
        self.test_same_txid_orphan_of_orphan()
        self.test_orphan_txid_inv()
        self.test_max_orphan_amount()
        self.test_orphan_handling_prefer_outbound()
        self.test_announcers_before_and_after()
        self.test_parents_change()
        self.test_maximal_package_protected()


if __name__ == '__main__':

@ -4,10 +4,7 @@
|
||||
# file COPYING or http://www.opensource.org/licenses/mit-license.php.
|
||||
"""Tests for orphan related RPCs."""
|
||||
|
||||
import time
|
||||
|
||||
from test_framework.mempool_util import (
|
||||
ORPHAN_TX_EXPIRE_TIME,
|
||||
tx_in_orphanage,
|
||||
)
|
||||
from test_framework.messages import (
|
||||
@ -101,8 +98,6 @@ class OrphanRPCsTest(BitcoinTestFramework):
|
||||
        tx_child_2 = self.wallet.create_self_transfer(utxo_to_spend=tx_parent_2["new_utxo"])
        peer_1 = node.add_p2p_connection(P2PInterface())
        peer_2 = node.add_p2p_connection(P2PInterface())
        entry_time = int(time.time())
        node.setmocktime(entry_time)
        peer_1.send_and_ping(msg_tx(tx_child_1["tx"]))
        peer_2.send_and_ping(msg_tx(tx_child_2["tx"]))

@@ -128,9 +123,6 @@
        assert_equal(len(node.getorphantxs()), 1)
        orphan_1 = orphanage[0]
        self.orphan_details_match(orphan_1, tx_child_1, verbosity=1)
        self.log.info("Checking orphan entry/expiration times")
        assert_equal(orphan_1["entry"], entry_time)
        assert_equal(orphan_1["expiration"], entry_time + ORPHAN_TX_EXPIRE_TIME)

        self.log.info("Checking orphan details (verbosity 2)")
        orphanage = node.getorphantxs(verbosity=2)
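The entry/expiration relationship asserted above is simple arithmetic. A minimal sketch (plain Python, no node required; the `expected_expiration` helper is illustrative and not part of the patch):

```python
# Hedged sketch: getorphantxs reports expiration == entry + ORPHAN_TX_EXPIRE_TIME,
# where the orphan expiry window is 20 minutes (1200 seconds).
ORPHAN_TX_EXPIRE_TIME = 20 * 60  # seconds

def expected_expiration(entry_time: int) -> int:
    """Expiration timestamp the RPC should report for an orphan entered at entry_time."""
    return entry_time + ORPHAN_TX_EXPIRE_TIME

entry_time = 1_700_000_000  # example mocktime, as set via setmocktime above
print(expected_expiration(entry_time))  # 1700001200
```

Because the test pins `entry_time` with `setmocktime`, both fields are fully deterministic and can be compared with `assert_equal`.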
@@ -4,11 +4,22 @@
# file COPYING or http://www.opensource.org/licenses/mit-license.php.
"""Helpful routines for mempool testing."""
from decimal import Decimal
import random

from .blocktools import (
    COINBASE_MATURITY,
)
from .messages import (
    COutPoint,
    CTransaction,
    CTxIn,
    CTxInWitness,
    CTxOut,
)
from .script import (
    CScript,
    OP_RETURN,
)
from .util import (
    assert_equal,
    assert_greater_than,
@@ -19,8 +30,6 @@ from .wallet import (
    MiniWallet,
)

ORPHAN_TX_EXPIRE_TIME = 1200


def assert_mempool_contents(test_framework, node, expected=None, sync=True):
    """Assert that all transactions in expected are in the mempool,
    and no additional ones exist. 'expected' is an array of
@@ -106,3 +115,13 @@ def tx_in_orphanage(node, tx: CTransaction) -> bool:
"""Returns true if the transaction is in the orphanage."""
|
||||
found = [o for o in node.getorphantxs(verbosity=1) if o["txid"] == tx.txid_hex and o["wtxid"] == tx.wtxid_hex]
|
||||
return len(found) == 1
|
||||
|
||||
def create_large_orphan():
|
||||
"""Create huge orphan transaction"""
|
||||
tx = CTransaction()
|
||||
# Nonexistent UTXO
|
||||
tx.vin = [CTxIn(COutPoint(random.randrange(1 << 256), random.randrange(1, 100)))]
|
||||
tx.wit.vtxinwit = [CTxInWitness()]
|
||||
tx.wit.vtxinwit[0].scriptWitness.stack = [CScript(b'X' * 390000)]
|
||||
tx.vout = [CTxOut(100, CScript([OP_RETURN, b'a' * 20]))]
|
||||
return tx
|
||||
|
||||
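For context on why these orphans are "huge": under BIP 141, each witness byte counts as one weight unit, so the 390,000-byte witness element dominates the transaction's weight while staying under the 400,000 WU standard-transaction ceiling. A minimal sketch of that arithmetic (plain Python; `MAX_STANDARD_TX_WEIGHT` is the Bitcoin Core policy constant, `compact_size_len` is an illustrative helper, not from the patch):

```python
# Hedged sketch: weight contribution of a single large witness stack item.
MAX_STANDARD_TX_WEIGHT = 400_000  # Bitcoin Core standardness ceiling, in weight units

def compact_size_len(n: int) -> int:
    # Bytes needed for a CompactSize length prefix in Bitcoin's serialization.
    return 1 if n < 253 else 3 if n < 0x10000 else 5 if n < 0x1_0000_0000 else 9

item = 390_000  # bytes of b'X' pushed as the witness element above
witness_element_weight = item + compact_size_len(item)
print(witness_element_weight)                           # 390005
print(witness_element_weight < MAX_STANDARD_TX_WEIGHT)  # True
```

This is why a handful of such orphans from one DoSy peer can quickly saturate the orphanage's memory-usage budget, which is exactly the pressure the new per-peer limits are designed to absorb.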