Mirror of https://github.com/bitcoin/bitcoin.git, synced 2026-02-03 20:21:10 +00:00
86d7135e36efd39781cf4c969011df99f0cbb69d [p2p] only attempt 1p1c when both txns provided by the same peer (glozow)
f7658d9b1475ecaa5cb8e543e5c66a3a3a2dc1fb [cleanup] remove p2p_inv from AddTxAnnouncement (glozow)
063c1324c143d98e6d5108bb51b3ca59b45f9b85 [functional test] getorphantxs reflects multiple announcers (glozow)
0da693f7e129fccaecf9a2c177083d2e80d37781 [functional test] orphan handling with multiple announcers (glozow)
b6ea4a9afe2d8bbf49b6b6c42f0a3ce4390c4535 [p2p] try multiple peers for orphan resolution (glozow)
1d2e1d709ce3d95d409254c860347bc3fedf30e1 [refactor] move creation of unique_parents to helper function (glozow)
c6893b0f0b7b205c8da4b9d281a55c9eb843b582 [txdownload] remove unique_parents that we already have (glozow)
163aaf285af91b49c2d788463dc1e1654c88ade6 [fuzz] orphanage multiple announcer functions (glozow)
22b023b09da3e2fe00467c77b105a61c1961081f [unit test] multiple orphan announcers (glozow)
96c1a822a274689611f409246ef1573906b0083e [unit test] TxOrphanage EraseForBlock (glozow)
04448ce32a3bc4b6d12293f71e40333abe35c224 [txorphanage] add GetTx so that orphan vin can be read (glozow)
e810842acda6fe56e0536ebecfbb9d17d26e1513 [txorphanage] support multiple announcers (glozow)
62a9ff187076686b39dca64ad4f2f439da0875d1 [refactor] change type of unique_parents to Txid (glozow)
6951ddcefd9e05f31ee7634bbfbf1d19e04ec00e [txrequest] GetCandidatePeers (glozow)

Pull request description:

Part of #27463.

(Transaction) **orphan resolution** is the process that kicks off when we are missing UTXOs needed to validate an unconfirmed transaction. We currently request missing parents by txid; BIP 331 also defines a way to [explicitly request ancestors](https://github.com/bitcoin/bips/blob/master/bip-0331.mediawiki#handle-orphans-better).

Currently, when we find that a transaction is an orphan, we only try to resolve it with the peer who provided the `tx`. If that doesn't work out (e.g. they send a `notfound` or don't respond), we do not try again. We actually can't, because we've already forgotten who else could resolve this orphan (i.e. all the other peers who announced the transaction).

What is wrong with this? It makes transaction download less reliable, particularly for 1p1c packages, which must go through orphan resolution in order to be downloaded.

Can we fix this with BIP 331 instead, i.e. is this just "duct tape" before the real solution? BIP 331 (receiver-initiated ancestor package relay) is also based on the idea that there is an orphan needing resolution; it is just a new way of communicating the information. It is not inherently more honest: you can request ancestor package information and still get a `notfound`. So ancestor package relay still requires some procedure for retrying when an orphan resolution attempt fails. See the #27742 implementation, which builds on this orphan resolution tracker to keep track of what packages to download (it just isn't rebased on this exact branch). The difference when using BIP 331 is that we request `ancpkginfo` and then `pkgtxns` instead of the parent txids.

Zooming out, we'd like orphan handling to be:

- Bandwidth-efficient: don't have too many requests out at once. As already implemented today, transaction requests for orphan parents and regular download both go through the `TxRequestTracker` so that we don't have duplicate requests out.
- Not vulnerable to censorship: don't give up too easily; use all candidate peers. See e.g. https://bitcoincore.org/en/2024/07/03/disclose_already_asked_for/
- Load-balanced between peers: don't overload peers; use all peers available. This is also useful for when we introduce per-peer orphan protection, since each peer will have limited slots.

The approach taken in this PR is to think of each peer who announces an orphan as a potential "orphan resolution candidate." These candidates include:

- the peer who sent us the orphan tx
- any peers who announced the orphan prior to us downloading it
- any peers who subsequently announce the orphan after we have started trying to resolve it

For each orphan resolution candidate, we treat them as having "announced" all of the missing parents to us at the time we received this orphan transaction (or at the time they announced the tx, if they do so after we've already started tracking it as an orphan). We add the missing parents as entries to `m_txrequest`, reusing the typical txrequest logic, which means we prefer outbounds, try not to have duplicate requests in flight, don't overload peers, etc. A short illustrative sketch of this bookkeeping follows the ACKs below.

ACKs for top commit:
  marcofleon: Code review ACK 86d7135e36efd39781cf4c969011df99f0cbb69d
  instagibbs: reACK 86d7135e36efd39781cf4c969011df99f0cbb69d
  dergoegge: Code review ACK 86d7135e36efd39781cf4c969011df99f0cbb69d
  mzumsande: ACK 86d7135e36efd39781cf4c969011df99f0cbb69d

Tree-SHA512: 618d523b86e60c3ea039e88326d50db4e55e8e18309c6a20e8f2b10ed9e076f1de0315c335fd3b8abdabcc8b53cbceb66fb59147d05470ea25b83a2b4bd9c877
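To make the candidate bookkeeping concrete, here is a minimal sketch (not the code from this PR; the helper name `AddOrphanResolutionCandidate` and its parameters are hypothetical, and the real logic lives inside `TxDownloadManagerImpl`). It shows how each announcer of an orphan can be registered in the existing `TxRequestTracker` as if it had announced every missing parent, so the tracker's usual policies decide whom to ask and when:

```cpp
// Illustrative sketch only. Assumes the caller has already computed the orphan's missing,
// deduplicated parent txids (this PR moves that into a unique_parents helper) and knows
// whether the announcing peer is preferred (e.g. outbound) and what request time to schedule.
#include <net.h>                    // NodeId
#include <primitives/transaction.h> // Txid, GenTxid
#include <txrequest.h>              // TxRequestTracker

#include <chrono>
#include <vector>

void AddOrphanResolutionCandidate(TxRequestTracker& txrequest,
                                  const std::vector<Txid>& missing_parents,
                                  NodeId announcer,
                                  bool preferred,
                                  std::chrono::microseconds reqtime)
{
    for (const Txid& parent_txid : missing_parents) {
        // Treat the announcer as having announced this parent by txid. TxRequestTracker then
        // applies its normal rules: prefer outbound peers, avoid duplicate in-flight requests,
        // and respect per-peer limits.
        txrequest.ReceivedInv(announcer, GenTxid::Txid(parent_txid), preferred, reqtime);
    }
}
```

The same call would be made for every later peer that announces the orphan, so if the first resolution attempt fails (`notfound`, timeout, or disconnect), the tracker still has other candidates to fall back to.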
451 lines, 21 KiB, C++
// Copyright (c) 2023 The Bitcoin Core developers
// Distributed under the MIT software license, see the accompanying
// file COPYING or http://www.opensource.org/licenses/mit-license.php.

#include <consensus/validation.h>
#include <node/context.h>
#include <node/mempool_args.h>
#include <node/miner.h>
#include <node/txdownloadman.h>
#include <node/txdownloadman_impl.h>
#include <test/fuzz/FuzzedDataProvider.h>
#include <test/fuzz/fuzz.h>
#include <test/fuzz/util.h>
#include <test/fuzz/util/mempool.h>
#include <test/util/mining.h>
#include <test/util/script.h>
#include <test/util/setup_common.h>
#include <test/util/txmempool.h>
#include <util/hasher.h>
#include <util/rbf.h>
#include <util/time.h>
#include <txmempool.h>
#include <validation.h>
#include <validationinterface.h>

namespace {

const TestingSetup* g_setup;

constexpr size_t NUM_COINS{50};
COutPoint COINS[NUM_COINS];

static TxValidationResult TESTED_TX_RESULTS[] = {
    // Skip TX_RESULT_UNSET
    TxValidationResult::TX_CONSENSUS,
    TxValidationResult::TX_INPUTS_NOT_STANDARD,
    TxValidationResult::TX_NOT_STANDARD,
    TxValidationResult::TX_MISSING_INPUTS,
    TxValidationResult::TX_PREMATURE_SPEND,
    TxValidationResult::TX_WITNESS_MUTATED,
    TxValidationResult::TX_WITNESS_STRIPPED,
    TxValidationResult::TX_CONFLICT,
    TxValidationResult::TX_MEMPOOL_POLICY,
    // Skip TX_NO_MEMPOOL
    TxValidationResult::TX_RECONSIDERABLE,
    TxValidationResult::TX_UNKNOWN,
};

// Precomputed transactions. Some may conflict with each other.
std::vector<CTransactionRef> TRANSACTIONS;

// Limit the total number of peers because we don't expect coverage to change much with lots more peers.
constexpr int NUM_PEERS = 16;

// Precomputed random durations (positive and negative, each ~exponentially distributed).
std::chrono::microseconds TIME_SKIPS[128];

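// Create a transaction spending the given outpoints, with num_outputs outputs paying CENT to
// P2WSH_OP_TRUE. If add_witness is set, a dummy witness item is pushed onto the first input so
// that the resulting wtxid differs from the txid.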
static CTransactionRef MakeTransactionSpending(const std::vector<COutPoint>& outpoints, size_t num_outputs, bool add_witness)
{
    CMutableTransaction tx;
    // Spend all of the provided outpoints (callers always pass at least one).
    for (const auto& outpoint : outpoints) {
        tx.vin.emplace_back(outpoint);
    }
    if (add_witness) {
        tx.vin[0].scriptWitness.stack.push_back({1});
    }
    for (size_t o = 0; o < num_outputs; ++o) tx.vout.emplace_back(CENT, P2WSH_OP_TRUE);
    return MakeTransactionRef(tx);
}

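// Pick at least one outpoint (possibly with duplicates) from the global COINS array.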
static std::vector<COutPoint> PickCoins(FuzzedDataProvider& fuzzed_data_provider)
{
    std::vector<COutPoint> ret;
    ret.push_back(fuzzed_data_provider.PickValueInArray(COINS));
    LIMITED_WHILE(fuzzed_data_provider.ConsumeBool(), 10) {
        ret.push_back(fuzzed_data_provider.PickValueInArray(COINS));
    }
    return ret;
}

void initialize()
{
    static const auto testing_setup = MakeNoLogFileContext<const TestingSetup>();
    g_setup = testing_setup.get();
    for (uint32_t i = 0; i < uint32_t{NUM_COINS}; ++i) {
        COINS[i] = COutPoint{Txid::FromUint256((HashWriter() << i).GetHash()), i};
    }
    size_t outpoints_index = 0;
    // 2 transactions same txid different witness
    {
        auto tx1{MakeTransactionSpending({COINS[outpoints_index]}, /*num_outputs=*/5, /*add_witness=*/false)};
        auto tx2{MakeTransactionSpending({COINS[outpoints_index]}, /*num_outputs=*/5, /*add_witness=*/true)};
        Assert(tx1->GetHash() == tx2->GetHash());
        TRANSACTIONS.emplace_back(tx1);
        TRANSACTIONS.emplace_back(tx2);
        outpoints_index += 1;
    }
    // 2 parents 1 child
    {
        auto tx_parent_1{MakeTransactionSpending({COINS[outpoints_index++]}, /*num_outputs=*/1, /*add_witness=*/true)};
        TRANSACTIONS.emplace_back(tx_parent_1);
        auto tx_parent_2{MakeTransactionSpending({COINS[outpoints_index++]}, /*num_outputs=*/1, /*add_witness=*/false)};
        TRANSACTIONS.emplace_back(tx_parent_2);
        TRANSACTIONS.emplace_back(MakeTransactionSpending({COutPoint{tx_parent_1->GetHash(), 0}, COutPoint{tx_parent_2->GetHash(), 0}},
                                                          /*num_outputs=*/1, /*add_witness=*/true));
    }
    // 1 parent 2 children
    {
        auto tx_parent{MakeTransactionSpending({COINS[outpoints_index++]}, /*num_outputs=*/2, /*add_witness=*/true)};
        TRANSACTIONS.emplace_back(tx_parent);
        TRANSACTIONS.emplace_back(MakeTransactionSpending({COutPoint{tx_parent->GetHash(), 0}},
                                                          /*num_outputs=*/1, /*add_witness=*/true));
        TRANSACTIONS.emplace_back(MakeTransactionSpending({COutPoint{tx_parent->GetHash(), 1}},
                                                          /*num_outputs=*/1, /*add_witness=*/true));
    }
    // chain of 5 segwit
    {
        COutPoint& last_outpoint = COINS[outpoints_index++];
        for (auto i{0}; i < 5; ++i) {
            auto tx{MakeTransactionSpending({last_outpoint}, /*num_outputs=*/1, /*add_witness=*/true)};
            TRANSACTIONS.emplace_back(tx);
            last_outpoint = COutPoint{tx->GetHash(), 0};
        }
    }
    // chain of 5 non-segwit
    {
        COutPoint& last_outpoint = COINS[outpoints_index++];
        for (auto i{0}; i < 5; ++i) {
            auto tx{MakeTransactionSpending({last_outpoint}, /*num_outputs=*/1, /*add_witness=*/false)};
            TRANSACTIONS.emplace_back(tx);
            last_outpoint = COutPoint{tx->GetHash(), 0};
        }
    }
    // Also create a loose tx for each outpoint. Some of these transactions conflict with the above
    // or have the same txid.
    for (const auto& outpoint : COINS) {
        TRANSACTIONS.emplace_back(MakeTransactionSpending({outpoint}, /*num_outputs=*/1, /*add_witness=*/true));
    }

    // Create random-looking time jumps
    int i = 0;
    // TIME_SKIPS[N] for N=0..15 is just N microseconds.
    for (; i < 16; ++i) {
        TIME_SKIPS[i] = std::chrono::microseconds{i};
    }
    // TIME_SKIPS[N] for N=16..127 has randomly-looking but roughly exponentially increasing values up to
    // 198.416453 seconds.
    for (; i < 128; ++i) {
        int diff_bits = ((i - 10) * 2) / 9;
        uint64_t diff = 1 + (CSipHasher(0, 0).Write(i).Finalize() >> (64 - diff_bits));
        TIME_SKIPS[i] = TIME_SKIPS[i - 1] + std::chrono::microseconds{diff};
    }
}

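// Sanity checks for a PackageToValidate returned by ReceivedTx: it must be a 1-parent-1-child
// package, the peer that just provided the transaction must be recorded as the first sender, and
// the second sender must be a valid peer id.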
void CheckPackageToValidate(const node::PackageToValidate& package_to_validate, NodeId peer)
{
    Assert(package_to_validate.m_senders.size() == 2);
    Assert(package_to_validate.m_senders.front() == peer);
    Assert(package_to_validate.m_senders.back() < NUM_PEERS);

    // Package is a 1p1c
    const auto& package = package_to_validate.m_txns;
    Assert(IsChildWithParents(package));
    Assert(package.size() == 2);
}

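// Fuzz target exercising the public TxDownloadManager interface: random peers connect,
// disconnect, announce and relay transactions, and report validation results, interleaved with
// random forward/backward clock jumps. At the end, every peer is disconnected and the manager is
// checked to be empty.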
FUZZ_TARGET(txdownloadman, .init = initialize)
{
    SeedRandomStateForTest(SeedRand::ZEROS);
    FuzzedDataProvider fuzzed_data_provider(buffer.data(), buffer.size());
    SetMockTime(ConsumeTime(fuzzed_data_provider));

    // Initialize txdownloadman
    bilingual_str error;
    CTxMemPool pool{MemPoolOptionsForTest(g_setup->m_node), error};
    const auto max_orphan_count = fuzzed_data_provider.ConsumeIntegralInRange<unsigned int>(0, 300);
    FastRandomContext det_rand{true};
    node::TxDownloadManager txdownloadman{node::TxDownloadOptions{pool, det_rand, max_orphan_count, true}};

    std::chrono::microseconds time{244466666};

    LIMITED_WHILE(fuzzed_data_provider.ConsumeBool(), 10000)
    {
        NodeId rand_peer = fuzzed_data_provider.ConsumeIntegralInRange<int64_t>(0, NUM_PEERS - 1);

        // Transaction can be one of the premade ones or a randomly generated one
        auto rand_tx = fuzzed_data_provider.ConsumeBool() ?
            MakeTransactionSpending(PickCoins(fuzzed_data_provider),
                                    /*num_outputs=*/fuzzed_data_provider.ConsumeIntegralInRange(1, 500),
                                    /*add_witness=*/fuzzed_data_provider.ConsumeBool()) :
            TRANSACTIONS.at(fuzzed_data_provider.ConsumeIntegralInRange<unsigned>(0, TRANSACTIONS.size() - 1));

        CallOneOf(
            fuzzed_data_provider,
            [&] {
                node::TxDownloadConnectionInfo info{
                    .m_preferred = fuzzed_data_provider.ConsumeBool(),
                    .m_relay_permissions = fuzzed_data_provider.ConsumeBool(),
                    .m_wtxid_relay = fuzzed_data_provider.ConsumeBool()
                };
                txdownloadman.ConnectedPeer(rand_peer, info);
            },
            [&] {
                txdownloadman.DisconnectedPeer(rand_peer);
                txdownloadman.CheckIsEmpty(rand_peer);
            },
            [&] {
                txdownloadman.ActiveTipChange();
            },
            [&] {
                CBlock block;
                block.vtx.push_back(rand_tx);
                txdownloadman.BlockConnected(std::make_shared<CBlock>(block));
            },
            [&] {
                txdownloadman.BlockDisconnected();
            },
            [&] {
                txdownloadman.MempoolAcceptedTx(rand_tx);
            },
            [&] {
                TxValidationState state;
                state.Invalid(fuzzed_data_provider.PickValueInArray(TESTED_TX_RESULTS), "");
                bool first_time_failure{fuzzed_data_provider.ConsumeBool()};

                node::RejectedTxTodo todo = txdownloadman.MempoolRejectedTx(rand_tx, state, rand_peer, first_time_failure);
                Assert(first_time_failure || !todo.m_should_add_extra_compact_tx);
            },
            [&] {
                GenTxid gtxid = fuzzed_data_provider.ConsumeBool() ?
                    GenTxid::Txid(rand_tx->GetHash()) :
                    GenTxid::Wtxid(rand_tx->GetWitnessHash());
                txdownloadman.AddTxAnnouncement(rand_peer, gtxid, time);
            },
            [&] {
                txdownloadman.GetRequestsToSend(rand_peer, time);
            },
            [&] {
                const auto& [should_validate, maybe_package] = txdownloadman.ReceivedTx(rand_peer, rand_tx);
                // The only possible results should be:
                // - Don't validate the tx, no package.
                // - Don't validate the tx, package.
                // - Validate the tx, no package.
                // The only combination that doesn't make sense is validate both tx and package.
                Assert(!(should_validate && maybe_package.has_value()));
                if (maybe_package.has_value()) CheckPackageToValidate(*maybe_package, rand_peer);
            },
            [&] {
                txdownloadman.ReceivedNotFound(rand_peer, {rand_tx->GetWitnessHash()});
            },
            [&] {
                const bool expect_work{txdownloadman.HaveMoreWork(rand_peer)};
                const auto ptx = txdownloadman.GetTxToReconsider(rand_peer);
                // expect_work=true doesn't necessarily mean the next item from the workset isn't a
                // nullptr, as the transaction could have been removed from orphanage without being
                // removed from the peer's workset.
                if (ptx) {
                    // However, if there was a non-null tx in the workset, HaveMoreWork should have
                    // returned true.
                    Assert(expect_work);
                }
            }
        );
        // Jump forwards or backwards
        auto time_skip = fuzzed_data_provider.PickValueInArray(TIME_SKIPS);
        if (fuzzed_data_provider.ConsumeBool()) time_skip *= -1;
        time += time_skip;
    }
    // Disconnect everybody, check that all data structures are empty.
    for (NodeId nodeid = 0; nodeid < NUM_PEERS; ++nodeid) {
        txdownloadman.DisconnectedPeer(nodeid);
        txdownloadman.CheckIsEmpty(nodeid);
    }
    txdownloadman.CheckIsEmpty();
}

// Give node 0 relay permissions, and nobody else. This helps us remember who is a RelayPermissions
// peer without tracking anything (this is only for the txdownload_impl target).
static bool HasRelayPermissions(NodeId peer) { return peer == 0; }

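// Invariants that must hold after every simulated event in the txdownloadman_impl target: the
// orphanage stays within its size limit and peers without relay permissions never have more than
// MAX_PEER_TX_ANNOUNCEMENTS entries in m_txrequest.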
static void CheckInvariants(const node::TxDownloadManagerImpl& txdownload_impl, size_t max_orphan_count)
{
    const TxOrphanage& orphanage = txdownload_impl.m_orphanage;

    // Orphanage usage should never exceed what is allowed
    Assert(orphanage.Size() <= max_orphan_count);

    // We should never have more than the maximum in-flight requests out for a peer.
    for (NodeId peer = 0; peer < NUM_PEERS; ++peer) {
        if (!HasRelayPermissions(peer)) {
            Assert(txdownload_impl.m_txrequest.Count(peer) <= node::MAX_PEER_TX_ANNOUNCEMENTS);
        }
    }
    txdownload_impl.m_txrequest.SanityCheck();
}

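// Same event loop as the txdownloadman target above, but driving TxDownloadManagerImpl directly
// so that internal state (orphanage, recent-rejects filters, m_txrequest) can be asserted on
// after each call, and CheckInvariants can run after every iteration.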
FUZZ_TARGET(txdownloadman_impl, .init = initialize)
{
    SeedRandomStateForTest(SeedRand::ZEROS);
    FuzzedDataProvider fuzzed_data_provider(buffer.data(), buffer.size());
    SetMockTime(ConsumeTime(fuzzed_data_provider));

    // Initialize a TxDownloadManagerImpl
    bilingual_str error;
    CTxMemPool pool{MemPoolOptionsForTest(g_setup->m_node), error};
    const auto max_orphan_count = fuzzed_data_provider.ConsumeIntegralInRange<unsigned int>(0, 300);
    FastRandomContext det_rand{true};
    node::TxDownloadManagerImpl txdownload_impl{node::TxDownloadOptions{pool, det_rand, max_orphan_count, true}};

    std::chrono::microseconds time{244466666};

    LIMITED_WHILE(fuzzed_data_provider.ConsumeBool(), 10000)
    {
        NodeId rand_peer = fuzzed_data_provider.ConsumeIntegralInRange<int64_t>(0, NUM_PEERS - 1);

        // Transaction can be one of the premade ones or a randomly generated one
        auto rand_tx = fuzzed_data_provider.ConsumeBool() ?
            MakeTransactionSpending(PickCoins(fuzzed_data_provider),
                                    /*num_outputs=*/fuzzed_data_provider.ConsumeIntegralInRange(1, 500),
                                    /*add_witness=*/fuzzed_data_provider.ConsumeBool()) :
            TRANSACTIONS.at(fuzzed_data_provider.ConsumeIntegralInRange<unsigned>(0, TRANSACTIONS.size() - 1));

        CallOneOf(
            fuzzed_data_provider,
            [&] {
                node::TxDownloadConnectionInfo info{
                    .m_preferred = fuzzed_data_provider.ConsumeBool(),
                    .m_relay_permissions = HasRelayPermissions(rand_peer),
                    .m_wtxid_relay = fuzzed_data_provider.ConsumeBool()
                };
                txdownload_impl.ConnectedPeer(rand_peer, info);
            },
            [&] {
                txdownload_impl.DisconnectedPeer(rand_peer);
                txdownload_impl.CheckIsEmpty(rand_peer);
            },
            [&] {
                txdownload_impl.ActiveTipChange();
                // After a block update, nothing should be in the rejection caches
                for (const auto& tx : TRANSACTIONS) {
                    Assert(!txdownload_impl.RecentRejectsFilter().contains(tx->GetWitnessHash().ToUint256()));
                    Assert(!txdownload_impl.RecentRejectsFilter().contains(tx->GetHash().ToUint256()));
                    Assert(!txdownload_impl.RecentRejectsReconsiderableFilter().contains(tx->GetWitnessHash().ToUint256()));
                    Assert(!txdownload_impl.RecentRejectsReconsiderableFilter().contains(tx->GetHash().ToUint256()));
                }
            },
            [&] {
                CBlock block;
                block.vtx.push_back(rand_tx);
                txdownload_impl.BlockConnected(std::make_shared<CBlock>(block));
                // Block transactions must be removed from orphanage
                Assert(!txdownload_impl.m_orphanage.HaveTx(rand_tx->GetWitnessHash()));
            },
            [&] {
                txdownload_impl.BlockDisconnected();
                Assert(!txdownload_impl.RecentConfirmedTransactionsFilter().contains(rand_tx->GetWitnessHash().ToUint256()));
                Assert(!txdownload_impl.RecentConfirmedTransactionsFilter().contains(rand_tx->GetHash().ToUint256()));
            },
            [&] {
                txdownload_impl.MempoolAcceptedTx(rand_tx);
            },
            [&] {
                TxValidationState state;
                state.Invalid(fuzzed_data_provider.PickValueInArray(TESTED_TX_RESULTS), "");
                bool first_time_failure{fuzzed_data_provider.ConsumeBool()};

                bool reject_contains_wtxid{txdownload_impl.RecentRejectsFilter().contains(rand_tx->GetWitnessHash().ToUint256())};

                node::RejectedTxTodo todo = txdownload_impl.MempoolRejectedTx(rand_tx, state, rand_peer, first_time_failure);
                Assert(first_time_failure || !todo.m_should_add_extra_compact_tx);
                if (!reject_contains_wtxid) Assert(todo.m_unique_parents.size() <= rand_tx->vin.size());
            },
            [&] {
                GenTxid gtxid = fuzzed_data_provider.ConsumeBool() ?
                    GenTxid::Txid(rand_tx->GetHash()) :
                    GenTxid::Wtxid(rand_tx->GetWitnessHash());
                txdownload_impl.AddTxAnnouncement(rand_peer, gtxid, time);
            },
            [&] {
                const auto getdata_requests = txdownload_impl.GetRequestsToSend(rand_peer, time);
                // TxDownloadManager should not be telling us to request things we already have.
                // Exclude m_lazy_recent_rejects_reconsiderable because it may request low-feerate parent of orphan.
                for (const auto& gtxid : getdata_requests) {
                    Assert(!txdownload_impl.AlreadyHaveTx(gtxid, /*include_reconsiderable=*/false));
                }
            },
            [&] {
                const auto& [should_validate, maybe_package] = txdownload_impl.ReceivedTx(rand_peer, rand_tx);
                // The only possible results should be:
                // - Don't validate the tx, no package.
                // - Don't validate the tx, package.
                // - Validate the tx, no package.
                // The only combination that doesn't make sense is validate both tx and package.
                Assert(!(should_validate && maybe_package.has_value()));
                if (should_validate) {
                    Assert(!txdownload_impl.AlreadyHaveTx(GenTxid::Wtxid(rand_tx->GetWitnessHash()), /*include_reconsiderable=*/true));
                }
                if (maybe_package.has_value()) {
                    CheckPackageToValidate(*maybe_package, rand_peer);

                    const auto& package = maybe_package->m_txns;
                    // Parent is in m_lazy_recent_rejects_reconsiderable and child is in m_orphanage
                    Assert(txdownload_impl.RecentRejectsReconsiderableFilter().contains(rand_tx->GetWitnessHash().ToUint256()));
                    Assert(txdownload_impl.m_orphanage.HaveTx(maybe_package->m_txns.back()->GetWitnessHash()));
                    // Package has not been rejected
                    Assert(!txdownload_impl.RecentRejectsReconsiderableFilter().contains(GetPackageHash(package)));
                    // Neither is in m_lazy_recent_rejects
                    Assert(!txdownload_impl.RecentRejectsFilter().contains(package.front()->GetWitnessHash().ToUint256()));
                    Assert(!txdownload_impl.RecentRejectsFilter().contains(package.back()->GetWitnessHash().ToUint256()));
                }
            },
            [&] {
                txdownload_impl.ReceivedNotFound(rand_peer, {rand_tx->GetWitnessHash()});
            },
            [&] {
                const bool expect_work{txdownload_impl.HaveMoreWork(rand_peer)};
                const auto ptx{txdownload_impl.GetTxToReconsider(rand_peer)};
                // expect_work=true doesn't necessarily mean the next item from the workset isn't a
                // nullptr, as the transaction could have been removed from orphanage without being
                // removed from the peer's workset.
                if (ptx) {
                    // However, if there was a non-null tx in the workset, HaveMoreWork should have
                    // returned true.
                    Assert(expect_work);
                    Assert(txdownload_impl.AlreadyHaveTx(GenTxid::Wtxid(ptx->GetWitnessHash()), /*include_reconsiderable=*/false));
                    // Presumably we have validated this tx. Use "missing inputs" to keep it in the
                    // orphanage longer. Later iterations might call MempoolAcceptedTx or
                    // MempoolRejectedTx with a different error.
                    TxValidationState state_missing_inputs;
                    state_missing_inputs.Invalid(TxValidationResult::TX_MISSING_INPUTS, "");
                    txdownload_impl.MempoolRejectedTx(ptx, state_missing_inputs, rand_peer, fuzzed_data_provider.ConsumeBool());
                }
            }
        );

        auto time_skip = fuzzed_data_provider.PickValueInArray(TIME_SKIPS);
        if (fuzzed_data_provider.ConsumeBool()) time_skip *= -1;
        time += time_skip;
        CheckInvariants(txdownload_impl, max_orphan_count);
    }
    // Disconnect everybody, check that all data structures are empty.
    for (NodeId nodeid = 0; nodeid < NUM_PEERS; ++nodeid) {
        txdownload_impl.DisconnectedPeer(nodeid);
        txdownload_impl.CheckIsEmpty(nodeid);
    }
    txdownload_impl.CheckIsEmpty();
}

} // namespace