Mirror of https://github.com/bitcoin/bitcoin.git (synced 2026-01-31 10:41:08 +00:00)
Merge bitcoin/bitcoin#33591: Cluster mempool followups
b8d279a81c16fe9f5b6d422e518c77344e217d4f doc: add comment to explain correctness of GatherClusters() (Suhas Daftuar)
aba7500a30eecf742c56e292e9a385ca57066a6c Fix parameter name in getmempoolcluster rpc (Suhas Daftuar)
6c1325a0913e22258ab6b62f381e56c7bebbd462 Rename weight -> clusterweight in RPC output, and add doc explaining mempool terminology (Suhas Daftuar)
bc2eb931da30bd98670528c0b96f6ca05f14f8b9 Require mempool lock to be held when invoking TRUC checks (Suhas Daftuar)
957ae232414b38adcf9358e198fded42f7c1feea Improve comments for getTransactionAncestry to reference cluster counts instead of descendants (Suhas Daftuar)
d97d6199ce506cda858afa867f2582c8138953a5 Fix comment to reference cluster limits, not chain limits (Suhas Daftuar)
a1b341ef9875a8a160464f320886f8dac7491237 Sanity check feerate diagram in CTxMemPool::check() (Suhas Daftuar)
23d6f457c4c06e405464594c7a2be1a11e9bcc1b rpc: improve getmempoolcluster output (Suhas Daftuar)
d2dcd37aac1e723a4103f2d6fefaa492141f5d42 Avoid using mapTx.modify() to update modified fees (Suhas Daftuar)
d84ffc24d2dc35642864924aaf7466fa17ac5875 doc: add release notes snippet for cluster mempool (Suhas Daftuar)
b0417ba94437d8bb23a7b66a3641ee8f3682a2dc doc: Add design notes for cluster mempool and explain new mempool limits (Suhas Daftuar)
2d88966e43c6c6323d8af5272ab7841f5c896f12 miner: replace "package" with "chunk" (Suhas Daftuar)
6f3e8eb3001a87d0a6d9ec8662ddb40ce7a673f4 Add a GetFeePerVSize() accessor to CFeeRate, and use it in the BlockAssembler (Suhas Daftuar)
b5f245f6f2193a3c19bea3eed7ceda1e80b83160 Remove unused DEFAULT_ANCESTOR_SIZE_LIMIT_KVB and DEFAULT_DESCENDANT_SIZE_LIMIT_KVB (Suhas Daftuar)
1dac54d506b5765f3d86a6efc30538931305b000 Use cluster size limit instead of ancestor size limit in txpackage unit test (Suhas Daftuar)
04f65488ca3e8e8eb7d290982e55e70be96491bb Use cluster size limit instead of ancestor/descendant size limits when sanity checking TRUC policy limits (Suhas Daftuar)
634291a7dc4485942cc9cbde510b92f9580d5c5e Use cluster limits instead of ancestor/descendant limits when sanity checking package policy limits (Suhas Daftuar)
fc18ef1f3f333dd28d8cc7e3571d76a985d90240 Remove ancestor and descendant vsize limits from MemPoolLimits (Suhas Daftuar)
ed8e819121d7065c6e34a6ae422842369c4a1659 Warn user if using -limitancestorsize/-limitdescendantsize that the options have no effect (Suhas Daftuar)
80d8df2d47c25851b51fe3319605fe41c34ca9f8 Invoke removeUnchecked() directly in removeForBlock() (Suhas Daftuar)
9292570f4cb85fc6690dfeeb55ea867d575ebba3 Rewrite GetChildren without sets (Suhas Daftuar)
3e39ea8c307010bc0132615ecef55b39851f7437 Rewrite removeForReorg to avoid using sets (Suhas Daftuar)
a3c31dfd71def7ce4414c627261fa4516f943547 scripted-diff: rename AddToMempool -> TryAddToMempool (Suhas Daftuar)
a5a7905d83dfa8a5173f886f7007132e18b53e3a Simplify removeRecursive (Suhas Daftuar)
01d8520038eafa0e00eeddcea29cba2b1b87917e Remove unused argument to RemoveStaged (Suhas Daftuar)
bc64013e6fad2d054bc5a31630c09f33a62b8f4f Remove unused variable (cacheMap) in mempool (Suhas Daftuar)
Pull request description:
As suggested in the main cluster mempool PR (https://github.com/bitcoin/bitcoin/pull/28676#pullrequestreview-3177119367), I've pulled out some of the non-essential optimizations and cleanups into this separate PR.
Will continue to add more commits here to address non-blocking suggestions/improvements as they come up.
ACKs for top commit:
  instagibbs:
    ACK b8d279a81c
  sipa:
    ACK b8d279a81c16fe9f5b6d422e518c77344e217d4f
Tree-SHA512: 1a05e99eaf8db2e274a1801307fed5d82f8f917e75ccb9ab0e1b0eb2f9672b13c79d691d78ea7cd96900d0e7d5031a3dd582ebcccc9b1d66eb7455b1d3642235
This commit is contained in: e0ba6bbed9
@@ -9,7 +9,7 @@ contents. Policy is *not* applied to transactions in blocks.
 
 This documentation is not an exhaustive list of all policy rules.
 
-- [Mempool Limits](mempool-limits.md)
+- [Mempool Design and Limits](mempool-design.md)
 - [Mempool Replacements](mempool-replacements.md)
 - [Packages](packages.md)
104  doc/policy/mempool-design.md  (new file)
@@ -0,0 +1,104 @@
# Mempool design and limits

## Definitions

We view the unconfirmed transactions in the mempool as a directed graph,
with an edge from transaction B to transaction A if B spends an output created
by A (i.e., B is a **child** of A, and A is a **parent** of B).

A transaction's **ancestors** include, recursively, its parents, the parents of
its parents, etc. A transaction's **descendants** include, recursively, its
children, the children of its children, etc.

A **cluster** is a connected component of the graph, i.e., a set of
transactions where each transaction is reachable from any other transaction in
the set by following edges in either direction. The cluster corresponding to a
given transaction consists of that transaction, its ancestors and descendants,
and the ancestors and descendants of those transactions, and so on.
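To make the cluster definition concrete, here is a small sketch (Python, with hypothetical txids; not Bitcoin Core's implementation) that groups transactions into clusters using a union-find over parent/child edges:

```python
class DisjointSet:
    """Minimal union-find used to group transactions into clusters."""
    def __init__(self):
        self.parent = {}

    def find(self, x):
        self.parent.setdefault(x, x)
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]  # path halving
            x = self.parent[x]
        return x

    def union(self, a, b):
        self.parent[self.find(a)] = self.find(b)

def clusters(spends):
    """Group txids into clusters, given (child, parent) spending pairs.

    Transactions that appear in no spending pair form singleton clusters
    and are omitted here for brevity.
    """
    ds = DisjointSet()
    for child, parent in spends:
        ds.union(child, parent)
    groups = {}
    for tx in ds.parent:
        groups.setdefault(ds.find(tx), set()).add(tx)
    return sorted(map(frozenset, groups.values()), key=min)

# B and C both spend A; E spends D: two clusters, {A, B, C} and {D, E}.
```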
Each cluster is **linearized**, or sorted, in a topologically valid order (i.e.,
no transaction appears before any of its ancestors). Our goal is to construct a
linearization where the highest-feerate subset of a cluster appears first,
followed by the next highest-feerate subset of the remaining transactions, and
so on [1]. We call these subsets **chunks**, and the chunks of a linearization
have the property that they are always in monotonically decreasing feerate
order.
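The chunking of a given linearization can be sketched as follows (a toy model with fees in satoshis and sizes in vbytes; not Core's code). Each transaction starts as its own chunk, and a chunk is merged into its predecessor whenever its feerate is higher:

```python
def chunk(linearization):
    """Split a topologically ordered list of (fee, size) pairs into chunks
    whose feerates are monotonically decreasing.

    Whenever the newest chunk has a higher feerate than the chunk before it,
    the two are merged; merging repeats until the decreasing-feerate property
    holds for the whole list.
    """
    chunks = []
    for fee, size in linearization:
        chunks.append((fee, size))
        # Merge while the last chunk's feerate exceeds its predecessor's.
        while len(chunks) >= 2:
            f2, s2 = chunks[-1]
            f1, s1 = chunks[-2]
            if f2 * s1 > f1 * s2:  # f2/s2 > f1/s1, compared without division
                chunks[-2:] = [(f1 + f2, s1 + s2)]
            else:
                break
    return chunks

# A 0.5 sat/vB parent followed by a 5 sat/vB child collapses into a single
# chunk; in the reverse order the two transactions remain separate chunks.
```

Note that finding the linearization itself (the highest-feerate subsets) is the hard part and is much more involved in Bitcoin Core; this only shows how chunks fall out of a given ordering.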
Given two or more linearized clusters, we can construct a linearization of the
union by simply merge-sorting the chunks of each cluster by feerate.
For any set of linearized clusters, then, we can define the **feerate diagram**
of the set by plotting the cumulative fee (y-axis) against the cumulative size
(x-axis) as we progress from chunk to chunk. Given two linearizations for the
same set of transactions, we can compare their feerate diagrams by
comparing their cumulative fees at each size value. Two diagrams may be
**incomparable** if neither contains the other (i.e., there exist size values at
which each one has a greater cumulative fee than the other). Or, they may be
**equivalent** if they have identical cumulative fees at every size value; or
one may be **strictly better** than the other if they are comparable and there
exists at least one size value for which the cumulative fee is strictly higher
in one of them.

For more background and rationale, see [2] and [3] below.
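The diagram comparison described above can be sketched as follows (a toy model using exact arithmetic via `Fraction`; not Core's implementation). Each input is a list of (fee, size) chunks, and both lists must cover the same total fee and size:

```python
from fractions import Fraction

def diagram(chunks):
    """Cumulative (size, fee) points of a chunked linearization, from (0, 0)."""
    pts, fee, size = [(0, 0)], 0, 0
    for f, s in chunks:
        fee += f
        size += s
        pts.append((size, fee))
    return pts

def fee_at(pts, x):
    """Cumulative fee at size x, interpolating linearly within a chunk."""
    for (x0, y0), (x1, y1) in zip(pts, pts[1:]):
        if x0 <= x <= x1:
            return y0 + Fraction(y1 - y0, x1 - x0) * (x - x0)
    return pts[-1][1]

def compare(a, b):
    """Return 'better', 'worse', 'equivalent', or 'incomparable' for a vs b."""
    da, db = diagram(a), diagram(b)
    assert da[-1] == db[-1], "diagrams must have the same total size and fee"
    # It suffices to compare at the breakpoints of either diagram.
    xs = sorted({x for x, _ in da} | {x for x, _ in db})
    signs = {(fee_at(da, x) > fee_at(db, x)) - (fee_at(da, x) < fee_at(db, x))
             for x in xs}
    if signs == {0}:
        return "equivalent"
    if 1 in signs and -1 in signs:
        return "incomparable"
    return "better" if 1 in signs else "worse"
```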
## Mining/eviction

As described above, the linearization of each cluster gives us a linearization
of the entire mempool. We use this ordering for both block building and
eviction, by selecting chunks at the front of the linearization when
constructing a block template, and by evicting chunks from the back of the
linearization when we need to free up space in the mempool.
## Replace-by-fee

Prior to the cluster mempool implementation, it was possible for replacements
to be prevented even if they would make the mempool more profitable for miners,
and it was possible for replacements to be permitted even if the newly accepted
transaction was less desirable to miners than the transactions it was
replacing. With the ability to construct linearizations of the mempool, we're
now able to compare the feerate diagram of the mempool before and after a
proposed replacement, and only accept the replacement if it makes the feerate
diagram strictly better.
In simple cases, the intuition is that a replacement should have a higher
feerate and fee than the transaction(s) it replaces. But for more complex cases
(where some transactions may have unconfirmed parents), there may not be a
simple way to describe the fee that is needed to successfully replace a set of
transactions, other than to say that the overall feerate diagram of the
resulting mempool must improve somewhere and not be worse anywhere.
## Mempool limits

### Motivation

Selecting chunks in decreasing feerate order when building a block template
will be close to optimal when the maximum size of any chunk is small compared
to the block size. And for mempool eviction, we don't wish to evict too much of
the mempool at once when a single (potentially small) transaction arrives that
takes us over our mempool size limit. For both of these reasons, it's desirable
to limit the maximum size of a cluster and thereby limit the maximum size of
any chunk (as a cluster may consist entirely of one chunk).

The computation required to linearize a cluster grows (in polynomial time)
with the number of transactions in the cluster, so limiting the number of
transactions in a cluster is necessary to ensure that we're able to find good
(ideally, optimal) linearizations in a reasonable amount of time.
### Limits

Transactions submitted to the mempool must not result in clusters that would
exceed the cluster limits (64 transactions and 101 kvB total per cluster).
## References/Notes

[1] This is an instance of the maximal-ratio closure problem, which is closely
related to the maximal-weight closure problem, as found in the field of mineral
extraction for open-pit mining.

[2] See
https://delvingbitcoin.org/t/an-overview-of-the-cluster-mempool-proposal/393
for a high-level overview of the cluster mempool implementation (PR#33629,
since v31.0) and its design rationale.

[3] See https://delvingbitcoin.org/t/mempool-incentive-compatibility/553 for an
explanation of why and how we use feerate diagrams for mining, eviction, and
evaluating transaction replacements.
65  doc/policy/mempool-limits.md  (deleted)
@@ -1,65 +0,0 @@
# Mempool Limits

## Definitions

Given any two transactions Tx0 and Tx1 where Tx1 spends an output of Tx0,
Tx0 is a *parent* of Tx1 and Tx1 is a *child* of Tx0.

A transaction's *ancestors* include, recursively, its parents, the parents of its parents, etc.
A transaction's *descendants* include, recursively, its children, the children of its children, etc.

A mempool entry's *ancestor count* is the total number of in-mempool (unconfirmed) transactions in
its ancestor set, including itself.
A mempool entry's *descendant count* is the total number of in-mempool (unconfirmed) transactions in
its descendant set, including itself.

A mempool entry's *ancestor size* is the aggregated virtual size of in-mempool (unconfirmed)
transactions in its ancestor set, including itself.
A mempool entry's *descendant size* is the aggregated virtual size of in-mempool (unconfirmed)
transactions in its descendant set, including itself.

Transactions submitted to the mempool must not exceed the ancestor and descendant limits (aka
mempool *package limits*) set by the node (see `-limitancestorcount`, `-limitancestorsize`,
`-limitdescendantcount`, `-limitdescendantsize`).

## Exemptions

### CPFP Carve Out

**CPFP Carve Out**: if a transaction candidate for submission to the
mempool would cause some mempool entry to exceed its descendant limits, an exemption is made if all
of the following conditions are met:

1. The candidate transaction is no more than 10,000 virtual bytes.

2. The candidate transaction has an ancestor count of 2 (itself and exactly 1 ancestor).

3. The in-mempool transaction's descendant count, including the candidate transaction, would only
   exceed the limit by 1.

*Rationale*: this rule was introduced to prevent pinning by domination of a transaction's descendant
limits in two-party contract protocols such as LN. Also see the [mailing list
post](https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2018-November/016518.html).

This rule was introduced in [PR #15681](https://github.com/bitcoin/bitcoin/pull/15681).

### Single-Conflict RBF Carve Out

When a candidate transaction for submission to the mempool would replace mempool entries, it may
also decrease the descendant count of other mempool entries. Since ancestor/descendant limits are
calculated prior to removing the would-be-replaced transactions, they may be overestimated.

An exemption is given for a candidate transaction that would replace mempool transactions and meets
all of the following conditions:

1. The candidate transaction has exactly 1 directly conflicting transaction.

2. The candidate transaction does not spend any unconfirmed inputs that are not also spent by the
   directly conflicting transaction.

The following discounts are given to account for the would-be-replaced transaction(s):

1. The descendant count limit is temporarily increased by 1.

2. The descendant size limit is temporarily increased by the virtual size of the to-be-replaced
   directly conflicting transaction.
19  doc/policy/mempool-terminology.md  (new file)
@@ -0,0 +1,19 @@
## Fee and Size Terminology in Mempool Policy

* Each transaction has a **weight** and virtual size as defined in BIP 141 (different from serialized size for witness transactions, as witness data is discounted and the value is rounded up to the nearest integer).

* In the RPCs, "weight" refers to the weight as defined in BIP 141.

* A transaction has a **sigops size**, defined as its sigop cost multiplied by the node's `-bytespersigop`, an adjustable policy setting.

* A transaction's **virtual size (vsize)** refers to its **sigops-adjusted virtual size**: the maximum of its BIP 141 virtual size and its sigops size. This virtual size is used to simplify the process of building blocks that satisfy both the maximum weight limit and the sigop limit.

* In the RPCs, "vsize" refers to this sigops-adjusted virtual size.

* Mempool entry data with the suffix "-size" (e.g. "ancestorsize") refers to the cumulative sigops-adjusted virtual size of the transactions in the associated set.

* A transaction can also have a **sigops-adjusted weight**, defined similarly as the maximum of its BIP 141 weight and 4 times the sigops size. This value is used internally by the mempool to avoid losing precision, and mempool entry data with the suffix "-weight" (e.g. "chunkweight", "clusterweight") refers to this sigops-adjusted weight.

* A transaction's **base fee** is the difference between its input and output values.

* A transaction's **modified fee** is its base fee added to any **fee delta** introduced by using the `prioritisetransaction` RPC. The modified fee is used internally for all fee-related mempool policies and block building.
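The sigops-adjusted vsize rule above can be sketched as follows (a toy illustration, not Core's code; the default of 20 bytes per sigop is an assumption here, and is configurable via `-bytespersigop`):

```python
def bip141_vsize(weight):
    """BIP 141 virtual size: weight divided by 4, rounded up."""
    return (weight + 3) // 4

def sigops_adjusted_vsize(weight, sigop_cost, bytes_per_sigop=20):
    """Policy vsize: the maximum of the BIP 141 vsize and the sigops size
    (sigop cost times the -bytespersigop setting)."""
    return max(bip141_vsize(weight), sigop_cost * bytes_per_sigop)

# An 800-weight tx with 4 sigops keeps its BIP 141 vsize of 200 vB, while the
# same tx with 20 sigops is treated as 400 vB for mempool and mining purposes.
```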
43  doc/release-notes-33629.md  (new file)
@@ -0,0 +1,43 @@
Mempool
=======

The mempool has been reimplemented with a new design ("cluster mempool"), to
facilitate better decision-making when constructing block templates, evicting
transactions, relaying transactions, and validating replacement transactions
(RBF). Most changes should be transparent to users, but some behavior changes
are noted:

- The mempool no longer enforces ancestor or descendant size/count limits.
  Instead, two new default policy limits are introduced governing connected
  components, or clusters, in the mempool, limiting clusters to 64 transactions
  and up to 101 kvB in total virtual size. Transactions are considered to be in
  the same cluster if they are connected to each other via any combination of
  parent/child relationships in the mempool. These limits can be overridden
  using command line arguments; see the extended help (`-help-debug`)
  for more information.

- Within the mempool, transactions are ordered based on the feerate at which
  they are expected to be mined, which takes into account the full set, or
  "chunk", of transactions that would be included together (e.g., a parent and
  its child, or more complicated subsets of transactions). This ordering is
  utilized by the algorithms that implement transaction selection for
  constructing block templates; eviction from the mempool when it is full; and
  transaction relay announcements to peers.

- The replace-by-fee validation logic has been updated so that transaction
  replacements are only accepted if the resulting mempool's feerate diagram is
  strictly better than before the replacement. This eliminates all known cases
  of replacements occurring that make the mempool worse off, which was possible
  under previous RBF rules. For singleton transactions (that are in clusters by
  themselves) it's sufficient for a replacement to have a higher fee and
  feerate than the original. See the
  [delvingbitcoin.org post](https://delvingbitcoin.org/t/an-overview-of-the-cluster-mempool-proposal/393#rbf-can-now-be-made-incentive-compatible-for-miners-11)
  for more information.

- Two new RPCs have been added: `getmempoolcluster` will provide the set of
  transactions in the same cluster as the given transaction, along with the
  ordering of those transactions and grouping into chunks; and
  `getmempoolfeeratediagram` will return the feerate diagram of the entire
  mempool.

- Chunk size and chunk fees are now also included in the output of `getmempoolentry`.
@@ -22,7 +22,7 @@
 static void AddTx(const CTransactionRef& tx, const CAmount& fee, CTxMemPool& pool) EXCLUSIVE_LOCKS_REQUIRED(cs_main, pool.cs)
 {
     LockPoints lp;
-    AddToMempool(pool, CTxMemPoolEntry(TxGraph::Ref(), tx, fee, /*time=*/0, /*entry_height=*/1, /*entry_sequence=*/0, /*spends_coinbase=*/false, /*sigops_cost=*/4, lp));
+    TryAddToMempool(pool, CTxMemPoolEntry(TxGraph::Ref(), tx, fee, /*time=*/0, /*entry_height=*/1, /*entry_sequence=*/0, /*spends_coinbase=*/false, /*sigops_cost=*/4, lp));
 }
 
 namespace {
@@ -29,7 +29,7 @@ static void AddTx(const CTransactionRef& tx, CTxMemPool& pool) EXCLUSIVE_LOCKS_R
     unsigned int sigOpCost{4};
     uint64_t fee{0};
     LockPoints lp;
-    AddToMempool(pool, CTxMemPoolEntry(TxGraph::Ref(),
+    TryAddToMempool(pool, CTxMemPoolEntry(TxGraph::Ref(),
                                        tx, fee, nTime, nHeight, sequence,
                                        spendsCoinbase, sigOpCost, lp));
 }
@@ -27,7 +27,7 @@ static void AddTx(const CTransactionRef& tx, const CAmount& nFee, CTxMemPool& po
     bool spendsCoinbase = false;
     unsigned int sigOpCost = 4;
     LockPoints lp;
-    AddToMempool(pool, CTxMemPoolEntry(TxGraph::Ref(),
+    TryAddToMempool(pool, CTxMemPoolEntry(TxGraph::Ref(),
                                        tx, nFee, nTime, nHeight, sequence,
                                        spendsCoinbase, sigOpCost, lp));
 }
@@ -29,7 +29,7 @@ static void AddTx(const CTransactionRef& tx, CTxMemPool& pool, FastRandomContext
     bool spendsCoinbase = false;
     unsigned int sigOpCost = 4;
     LockPoints lp;
-    AddToMempool(pool, CTxMemPoolEntry(TxGraph::Ref(), tx, det_rand.randrange(10000)+1000, nTime, nHeight, sequence, spendsCoinbase, sigOpCost, lp));
+    TryAddToMempool(pool, CTxMemPoolEntry(TxGraph::Ref(), tx, det_rand.randrange(10000)+1000, nTime, nHeight, sequence, spendsCoinbase, sigOpCost, lp));
 }
 
 struct Available {
@@ -22,7 +22,7 @@
 static void AddTx(const CTransactionRef& tx, const CAmount& fee, CTxMemPool& pool) EXCLUSIVE_LOCKS_REQUIRED(cs_main, pool.cs)
 {
     LockPoints lp;
-    AddToMempool(pool, CTxMemPoolEntry(TxGraph::Ref(), tx, fee, /*time=*/0, /*entry_height=*/1, /*entry_sequence=*/0, /*spends_coinbase=*/false, /*sigops_cost=*/4, lp));
+    TryAddToMempool(pool, CTxMemPoolEntry(TxGraph::Ref(), tx, fee, /*time=*/0, /*entry_height=*/1, /*entry_sequence=*/0, /*spends_coinbase=*/false, /*sigops_cost=*/4, lp));
 }
 
 static void RpcMempool(benchmark::Bench& bench)
17  src/init.cpp
@@ -633,10 +633,13 @@ void SetupServerArgs(ArgsManager& argsman, bool can_listen_ipc)
     argsman.AddArg("-deprecatedrpc=<method>", "Allows deprecated RPC method(s) to be used", ArgsManager::ALLOW_ANY | ArgsManager::DEBUG_ONLY, OptionsCategory::DEBUG_TEST);
     argsman.AddArg("-stopafterblockimport", strprintf("Stop running after importing blocks from disk (default: %u)", DEFAULT_STOPAFTERBLOCKIMPORT), ArgsManager::ALLOW_ANY | ArgsManager::DEBUG_ONLY, OptionsCategory::DEBUG_TEST);
     argsman.AddArg("-stopatheight", strprintf("Stop running after reaching the given height in the main chain (default: %u)", DEFAULT_STOPATHEIGHT), ArgsManager::ALLOW_ANY | ArgsManager::DEBUG_ONLY, OptionsCategory::DEBUG_TEST);
-    argsman.AddArg("-limitancestorcount=<n>", strprintf("Do not accept transactions if number of in-mempool ancestors is <n> or more (default: %u)", DEFAULT_ANCESTOR_LIMIT), ArgsManager::ALLOW_ANY | ArgsManager::DEBUG_ONLY, OptionsCategory::DEBUG_TEST);
-    argsman.AddArg("-limitancestorsize=<n>", strprintf("Do not accept transactions whose size with all in-mempool ancestors exceeds <n> kilobytes (default: %u)", DEFAULT_ANCESTOR_SIZE_LIMIT_KVB), ArgsManager::ALLOW_ANY | ArgsManager::DEBUG_ONLY, OptionsCategory::DEBUG_TEST);
-    argsman.AddArg("-limitdescendantcount=<n>", strprintf("Do not accept transactions if any ancestor would have <n> or more in-mempool descendants (default: %u)", DEFAULT_DESCENDANT_LIMIT), ArgsManager::ALLOW_ANY | ArgsManager::DEBUG_ONLY, OptionsCategory::DEBUG_TEST);
-    argsman.AddArg("-limitdescendantsize=<n>", strprintf("Do not accept transactions if any ancestor would have more than <n> kilobytes of in-mempool descendants (default: %u).", DEFAULT_DESCENDANT_SIZE_LIMIT_KVB), ArgsManager::ALLOW_ANY | ArgsManager::DEBUG_ONLY, OptionsCategory::DEBUG_TEST);
+    argsman.AddArg("-limitancestorcount=<n>", strprintf("Deprecated setting to not accept transactions if number of in-mempool ancestors is <n> or more (default: %u); replaced by cluster limits (see -limitclustercount) and only used by wallet for coin selection", DEFAULT_ANCESTOR_LIMIT), ArgsManager::ALLOW_ANY | ArgsManager::DEBUG_ONLY, OptionsCategory::DEBUG_TEST);
+    // Ancestor and descendant size limits were removed. We keep
+    // -limitancestorsize/-limitdescendantsize as hidden args to display a more
+    // user friendly error when set.
+    argsman.AddArg("-limitancestorsize", "", ArgsManager::ALLOW_ANY, OptionsCategory::HIDDEN);
+    argsman.AddArg("-limitdescendantsize", "", ArgsManager::ALLOW_ANY, OptionsCategory::HIDDEN);
+    argsman.AddArg("-limitdescendantcount=<n>", strprintf("Deprecated setting to not accept transactions if any ancestor would have <n> or more in-mempool descendants (default: %u); replaced by cluster limits (see -limitclustercount) and only used by wallet for coin selection", DEFAULT_DESCENDANT_LIMIT), ArgsManager::ALLOW_ANY | ArgsManager::DEBUG_ONLY, OptionsCategory::DEBUG_TEST);
     argsman.AddArg("-test=<option>", "Pass a test-only option. Options include : " + Join(TEST_OPTIONS_DOC, ", ") + ".", ArgsManager::ALLOW_ANY | ArgsManager::DEBUG_ONLY, OptionsCategory::DEBUG_TEST);
     argsman.AddArg("-limitclustercount=<n>", strprintf("Do not accept transactions into mempool which are directly or indirectly connected to <n> or more other unconfirmed transactions (default: %u, maximum: %u)", DEFAULT_CLUSTER_LIMIT, MAX_CLUSTER_COUNT_LIMIT), ArgsManager::ALLOW_ANY | ArgsManager::DEBUG_ONLY, OptionsCategory::DEBUG_TEST);
     argsman.AddArg("-limitclustersize=<n>", strprintf("Do not accept transactions whose virtual size with all in-mempool connected transactions exceeds <n> kilobytes (default: %u)", DEFAULT_CLUSTER_SIZE_LIMIT_KVB), ArgsManager::ALLOW_ANY | ArgsManager::DEBUG_ONLY, OptionsCategory::DEBUG_TEST);
@@ -901,6 +904,12 @@ bool AppInitParameterInteraction(const ArgsManager& args)
     if (args.IsArgSet("-checkpoints")) {
         InitWarning(_("Option '-checkpoints' is set but checkpoints were removed. This option has no effect."));
     }
+    if (args.IsArgSet("-limitancestorsize")) {
+        InitWarning(_("Option '-limitancestorsize' is given but ancestor size limits have been replaced with cluster size limits (see -limitclustersize). This option has no effect."));
+    }
+    if (args.IsArgSet("-limitdescendantsize")) {
+        InitWarning(_("Option '-limitdescendantsize' is given but descendant size limits have been replaced with cluster size limits (see -limitclustersize). This option has no effect."));
+    }
 
     // Error if network-specific options (-addnode, -connect, etc) are
     // specified in default section of config file, but not overridden
@@ -221,8 +221,8 @@ public:
                          node::TxBroadcast broadcast_method,
                          std::string& err_string) = 0;
 
-    //! Calculate mempool ancestor and descendant counts for the given transaction.
-    virtual void getTransactionAncestry(const Txid& txid, size_t& ancestors, size_t& descendants, size_t* ancestorsize = nullptr, CAmount* ancestorfees = nullptr) = 0;
+    //! Calculate mempool ancestor and cluster counts for the given transaction.
+    virtual void getTransactionAncestry(const Txid& txid, size_t& ancestors, size_t& cluster_count, size_t* ancestorsize = nullptr, CAmount* ancestorfees = nullptr) = 0;
 
     //! For each outpoint, calculate the fee-bumping cost to spend this outpoint at the specified
     // feerate, including bumping its ancestors. For example, if the target feerate is 10sat/vbyte
@@ -80,7 +80,7 @@ private:
     const unsigned int entryHeight; //!< Chain height when entering the mempool
     const bool spendsCoinbase;      //!< keep track of transactions that spend a coinbase
     const int64_t sigOpCost;        //!< Total sigop cost
-    CAmount m_modified_fee;         //!< Used for determining the priority of the transaction for mining in a block
+    mutable CAmount m_modified_fee; //!< Used for determining the priority of the transaction for mining in a block
     mutable LockPoints lockPoints;  //!< Track the height and time at which tx was final
 
 public:
@@ -124,7 +124,7 @@ public:
     const LockPoints& GetLockPoints() const { return lockPoints; }
 
     // Updates the modified fees with descendants/ancestors.
-    void UpdateModifiedFee(CAmount fee_diff)
+    void UpdateModifiedFee(CAmount fee_diff) const
     {
         m_modified_fee = SaturatingAdd(m_modified_fee, fee_diff);
     }
@@ -22,12 +22,8 @@ struct MemPoolLimits {
     int64_t cluster_size_vbytes{DEFAULT_CLUSTER_SIZE_LIMIT_KVB * 1'000};
     //! The maximum allowed number of transactions in a package including the entry and its ancestors.
     int64_t ancestor_count{DEFAULT_ANCESTOR_LIMIT};
-    //! The maximum allowed size in virtual bytes of an entry and its ancestors within a package.
-    int64_t ancestor_size_vbytes{DEFAULT_ANCESTOR_SIZE_LIMIT_KVB * 1'000};
     //! The maximum allowed number of transactions in a package including the entry and its descendants.
     int64_t descendant_count{DEFAULT_DESCENDANT_LIMIT};
-    //! The maximum allowed size in virtual bytes of an entry and its descendants within a package.
-    int64_t descendant_size_vbytes{DEFAULT_DESCENDANT_SIZE_LIMIT_KVB * 1'000};
 
     /**
      * @return MemPoolLimits with all the limits set to the maximum
@@ -35,7 +31,7 @@ struct MemPoolLimits {
     static constexpr MemPoolLimits NoLimits()
     {
         int64_t no_limit{std::numeric_limits<int64_t>::max()};
-        return {std::numeric_limits<unsigned>::max(), no_limit, no_limit, no_limit, no_limit, no_limit};
+        return {std::numeric_limits<unsigned>::max(), no_limit, no_limit, no_limit};
     }
 };
 } // namespace kernel
@@ -38,11 +38,7 @@ void ApplyArgsManOptions(const ArgsManager& argsman, MemPoolLimits& mempool_limi
 
     mempool_limits.ancestor_count = argsman.GetIntArg("-limitancestorcount", mempool_limits.ancestor_count);
 
-    if (auto vkb = argsman.GetIntArg("-limitancestorsize")) mempool_limits.ancestor_size_vbytes = *vkb * 1'000;
-
     mempool_limits.descendant_count = argsman.GetIntArg("-limitdescendantcount", mempool_limits.descendant_count);
-
-    if (auto vkb = argsman.GetIntArg("-limitdescendantsize")) mempool_limits.descendant_size_vbytes = *vkb * 1'000;
 }
 }
@@ -193,12 +193,12 @@ std::unique_ptr<CBlockTemplate> BlockAssembler::CreateNewBlock()
     return std::move(pblocktemplate);
 }
 
-bool BlockAssembler::TestPackage(FeePerWeight package_feerate, int64_t packageSigOpsCost) const
+bool BlockAssembler::TestChunkBlockLimits(FeePerWeight chunk_feerate, int64_t chunk_sigops_cost) const
 {
-    if (nBlockWeight + package_feerate.size >= m_options.nBlockMaxWeight) {
+    if (nBlockWeight + chunk_feerate.size >= m_options.nBlockMaxWeight) {
         return false;
     }
-    if (nBlockSigOpsCost + packageSigOpsCost >= MAX_BLOCK_SIGOPS_COST) {
+    if (nBlockSigOpsCost + chunk_sigops_cost >= MAX_BLOCK_SIGOPS_COST) {
         return false;
     }
     return true;
@@ -206,7 +206,7 @@ bool BlockAssembler::TestPackage(FeePerWeight package_feerate, int64_t packageSi
 
 // Perform transaction-level checks before adding to block:
 // - transaction finality (locktime)
-bool BlockAssembler::TestPackageTransactions(const std::vector<CTxMemPoolEntryRef>& txs) const
+bool BlockAssembler::TestChunkTransactions(const std::vector<CTxMemPoolEntryRef>& txs) const
 {
     for (const auto tx : txs) {
         if (!IsFinalTx(tx.get().GetTx(), nHeight, m_lock_time_cutoff)) {
@@ -252,18 +252,18 @@ void BlockAssembler::addChunks()
 
     while (selected_transactions.size() > 0) {
         // Check to see if min fee rate is still respected.
-        if (chunk_feerate.fee < m_options.blockMinFeeRate.GetFee(chunk_feerate_vsize.size)) {
+        if (chunk_feerate_vsize << m_options.blockMinFeeRate.GetFeePerVSize()) {
             // Everything else we might consider has a lower feerate
             return;
         }
 
-        int64_t package_sig_ops = 0;
+        int64_t chunk_sig_ops = 0;
         for (const auto& tx : selected_transactions) {
-            package_sig_ops += tx.get().GetSigOpCost();
+            chunk_sig_ops += tx.get().GetSigOpCost();
         }
 
         // Check to see if this chunk will fit.
-        if (!TestPackage(chunk_feerate, package_sig_ops) || !TestPackageTransactions(selected_transactions)) {
+        if (!TestChunkBlockLimits(chunk_feerate, chunk_sig_ops) || !TestChunkTransactions(selected_transactions)) {
             // This chunk won't fit, so we skip it and will try the next best one.
             m_mempool->SkipBuilderChunk();
             ++nConsecutiveFailed;
@ -110,13 +110,12 @@ private:
|
||||
void addChunks() EXCLUSIVE_LOCKS_REQUIRED(m_mempool->cs);
|
||||
|
||||
// helper functions for addChunks()
|
||||
/** Test if a new package would "fit" in the block */
|
||||
bool TestPackage(FeePerWeight package_feerate, int64_t packageSigOpsCost) const;
|
||||
/** Perform checks on each transaction in a package:
|
||||
* locktime, premature-witness, serialized size (if necessary)
|
||||
* These checks should always succeed, and they're here
|
||||
* only as an extra check in case of suboptimal node configuration */
|
||||
bool TestPackageTransactions(const std::vector<CTxMemPoolEntryRef>& txs) const;
|
||||
/** Test if a new chunk would "fit" in the block */
|
||||
bool TestChunkBlockLimits(FeePerWeight chunk_feerate, int64_t chunk_sigops_cost) const;
|
||||
/** Perform locktime checks on each transaction in a chunk:
|
||||
* This check should always succeed, and is here
|
||||
* only as an extra check in case of a bug */
|
||||
bool TestChunkTransactions(const std::vector<CTxMemPoolEntryRef>& txs) const;
|
||||
};
|
||||
|
||||
/**
|
||||
|
@@ -57,6 +57,8 @@ public:
      */
     CAmount GetFee(int32_t virtual_bytes) const;
 
+    FeePerVSize GetFeePerVSize() const { return m_feerate; }
+
     /**
     * Return the fee in satoshis for a vsize of 1000 vbytes
     */
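The `GetFeePerVSize()` accessor above lets the block assembler compare chunk feerates as exact fee/size fractions (as the `chunk_feerate_vsize << ...` check in `addChunks()` does) instead of rounding through `CFeeRate::GetFee()`. As a minimal standalone sketch — a hypothetical stand-in, not Bitcoin Core's actual `FeePerVSize`/`FeeFrac` types — comparing two fractions by cross-multiplication looks like this:

```cpp
#include <cstdint>

// Hypothetical fee/size fraction type: fee in satoshis, size in vbytes
// (or weight units). The real types carry more machinery.
struct FeeFracSketch {
    int64_t fee;
    int32_t size;
};

// True if a's feerate is strictly lower than b's:
//   a.fee / a.size < b.fee / b.size  <=>  a.fee * b.size < b.fee * a.size
// Cross-multiplying (sizes are positive) avoids integer division, so equal
// feerates never spuriously compare as lower due to rounding.
inline bool StrictlyLowerFeerate(const FeeFracSketch& a, const FeeFracSketch& b)
{
    return a.fee * b.size < b.fee * a.size;
}
```

Division-free comparison is why a dedicated feerate accessor is preferable to computing a fee for a given size and comparing the results.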
@@ -24,15 +24,10 @@ static constexpr uint32_t MAX_PACKAGE_COUNT{25};
 static constexpr uint32_t MAX_PACKAGE_WEIGHT = 404'000;
 static_assert(MAX_PACKAGE_WEIGHT >= MAX_STANDARD_TX_WEIGHT);
 
-// If a package is to be evaluated, it must be at least as large as the mempool's ancestor/descendant limits,
-// otherwise transactions that would be individually accepted may be rejected in a package erroneously.
-// Since a submitted package must be child-with-parents (all of the transactions are a parent
-// of the child), package limits are ultimately bounded by mempool package limits. Ensure that the
-// defaults reflect this constraint.
-static_assert(DEFAULT_DESCENDANT_LIMIT >= MAX_PACKAGE_COUNT);
-static_assert(DEFAULT_ANCESTOR_LIMIT >= MAX_PACKAGE_COUNT);
-static_assert(MAX_PACKAGE_WEIGHT >= DEFAULT_ANCESTOR_SIZE_LIMIT_KVB * WITNESS_SCALE_FACTOR * 1000);
-static_assert(MAX_PACKAGE_WEIGHT >= DEFAULT_DESCENDANT_SIZE_LIMIT_KVB * WITNESS_SCALE_FACTOR * 1000);
+// Packages are part of a single cluster, so ensure that the package limits are
+// set within the mempool's cluster size limits.
+static_assert(DEFAULT_CLUSTER_LIMIT >= MAX_PACKAGE_COUNT);
+static_assert(MAX_PACKAGE_WEIGHT <= DEFAULT_CLUSTER_SIZE_LIMIT_KVB * WITNESS_SCALE_FACTOR * 1000);
 
 /** A "reason" why a package was invalid. It may be that one or more of the included
  * transactions is invalid or the package itself violates our rules.
@@ -71,12 +71,8 @@ static constexpr unsigned int DEFAULT_CLUSTER_LIMIT{64};
 static constexpr unsigned int DEFAULT_CLUSTER_SIZE_LIMIT_KVB{101};
 /** Default for -limitancestorcount, max number of in-mempool ancestors */
 static constexpr unsigned int DEFAULT_ANCESTOR_LIMIT{25};
-/** Default for -limitancestorsize, maximum kilobytes of tx + all in-mempool ancestors */
-static constexpr unsigned int DEFAULT_ANCESTOR_SIZE_LIMIT_KVB{101};
 /** Default for -limitdescendantcount, max number of in-mempool descendants */
 static constexpr unsigned int DEFAULT_DESCENDANT_LIMIT{25};
-/** Default for -limitdescendantsize, maximum kilobytes of in-mempool descendants */
-static constexpr unsigned int DEFAULT_DESCENDANT_SIZE_LIMIT_KVB{101};
 /** Default for -datacarrier */
 static const bool DEFAULT_ACCEPT_DATACARRIER = true;
 /**
@@ -59,6 +59,7 @@ std::optional<std::string> PackageTRUCChecks(const CTxMemPool& pool, const CTran
                                              const Package& package,
                                              const std::vector<CTxMemPoolEntry::CTxMemPoolEntryRef>& mempool_parents)
 {
+    AssertLockHeld(pool.cs);
     // This function is specialized for these limits, and must be reimplemented if they ever change.
     static_assert(TRUC_ANCESTOR_LIMIT == 2);
     static_assert(TRUC_DESCENDANT_LIMIT == 2);
@@ -173,7 +174,7 @@ std::optional<std::pair<std::string, CTransactionRef>> SingleTRUCChecks(const CT
                                                                         const std::set<Txid>& direct_conflicts,
                                                                         int64_t vsize)
 {
-    LOCK(pool.cs);
+    AssertLockHeld(pool.cs);
     // Check TRUC and non-TRUC inheritance.
     for (const auto& entry_ref : mempool_parents) {
         const auto& entry = &entry_ref.get();
@@ -32,9 +32,8 @@ static constexpr int64_t TRUC_MAX_WEIGHT{TRUC_MAX_VSIZE * WITNESS_SCALE_FACTOR};
 /** Maximum sigop-adjusted virtual size of a tx which spends from an unconfirmed TRUC transaction. */
 static constexpr int64_t TRUC_CHILD_MAX_VSIZE{1000};
 static constexpr int64_t TRUC_CHILD_MAX_WEIGHT{TRUC_CHILD_MAX_VSIZE * WITNESS_SCALE_FACTOR};
-// These limits are within the default ancestor/descendant limits.
-static_assert(TRUC_MAX_VSIZE + TRUC_CHILD_MAX_VSIZE <= DEFAULT_ANCESTOR_SIZE_LIMIT_KVB * 1000);
-static_assert(TRUC_MAX_VSIZE + TRUC_CHILD_MAX_VSIZE <= DEFAULT_DESCENDANT_SIZE_LIMIT_KVB * 1000);
+// These limits are within the default cluster limits.
+static_assert(TRUC_MAX_VSIZE + TRUC_CHILD_MAX_VSIZE <= DEFAULT_CLUSTER_SIZE_LIMIT_KVB * 1000);
 
 /** Must be called for every transaction, even if not TRUC. Not strictly necessary for transactions
  * accepted through AcceptMultipleTransactions.
@@ -67,7 +66,7 @@ static_assert(TRUC_MAX_VSIZE + TRUC_CHILD_MAX_VSIZE <= DEFAULT_DESCENDANT_SIZE_L
 std::optional<std::pair<std::string, CTransactionRef>> SingleTRUCChecks(const CTxMemPool& pool, const CTransactionRef& ptx,
                                                                         const std::vector<CTxMemPoolEntry::CTxMemPoolEntryRef>& mempool_parents,
                                                                         const std::set<Txid>& direct_conflicts,
-                                                                        int64_t vsize);
+                                                                        int64_t vsize) EXCLUSIVE_LOCKS_REQUIRED(pool.cs);
 
 /** Must be called for every transaction that is submitted within a package, even if not TRUC.
  *
@@ -92,6 +91,6 @@ std::optional<std::pair<std::string, CTransactionRef>> SingleTRUCChecks(const CT
  * */
 std::optional<std::string> PackageTRUCChecks(const CTxMemPool& pool, const CTransactionRef& ptx, int64_t vsize,
                                              const Package& package,
-                                             const std::vector<CTxMemPoolEntry::CTxMemPoolEntryRef>& mempool_parents);
+                                             const std::vector<CTxMemPoolEntry::CTxMemPoolEntryRef>& mempool_parents) EXCLUSIVE_LOCKS_REQUIRED(pool.cs);
 
 #endif // BITCOIN_POLICY_TRUC_POLICY_H
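The `EXCLUSIVE_LOCKS_REQUIRED(pool.cs)` annotations added above (paired with `AssertLockHeld` in the implementation) let Clang's `-Wthread-safety` analysis check at compile time that callers already hold the mempool lock. A minimal self-contained illustration of the same pattern, with simplified macros standing in for Bitcoin Core's real thread-safety macros:

```cpp
#include <mutex>

// Simplified thread-safety attribute macro: a no-op on non-clang compilers,
// and enforced as warnings under clang with -Wthread-safety.
#if defined(__clang__)
#define TS_ATTR(x) __attribute__((x))
#else
#define TS_ATTR(x)
#endif

// A mutex wrapper that declares itself as a lockable "capability".
class TS_ATTR(capability("mutex")) SketchMutex {
public:
    void lock() TS_ATTR(acquire_capability()) { m_mutex.lock(); }
    void unlock() TS_ATTR(release_capability()) { m_mutex.unlock(); }
private:
    std::mutex m_mutex;
};

SketchMutex g_cs;                            // stands in for pool.cs
int g_value TS_ATTR(guarded_by(g_cs)) = 0;   // data protected by g_cs

// Analogous to the annotated TRUC helpers: the attribute documents (and,
// under clang, enforces) that the caller must already hold g_cs.
int ReadValue() TS_ATTR(requires_capability(g_cs)) { return g_value; }
```

Calling `ReadValue()` without holding `g_cs` still compiles everywhere, but produces a `-Wthread-safety` warning under clang — which is how annotations like these catch a missing lock at a call site rather than at runtime.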
@@ -267,14 +267,15 @@ static RPCHelpMan testmempoolaccept()
 static std::vector<RPCResult> ClusterDescription()
 {
     return {
-        RPCResult{RPCResult::Type::NUM, "weight", "total sigops-adjusted weight (as defined in BIP 141 and modified by '-bytespersigop'"},
+        RPCResult{RPCResult::Type::NUM, "clusterweight", "total sigops-adjusted weight (as defined in BIP 141 and modified by '-bytespersigop')"},
         RPCResult{RPCResult::Type::NUM, "txcount", "number of transactions"},
-        RPCResult{RPCResult::Type::ARR, "txs", "transactions in this cluster in mining order",
-            {RPCResult{RPCResult::Type::OBJ, "txentry", "",
+        RPCResult{RPCResult::Type::ARR, "chunks", "chunks in this cluster (in mining order)",
+            {RPCResult{RPCResult::Type::OBJ, "chunk", "",
             {
-                RPCResult{RPCResult::Type::STR_HEX, "txid", "the transaction id"},
-                RPCResult{RPCResult::Type::NUM, "chunkfee", "fee of the chunk containing this tx"},
-                RPCResult{RPCResult::Type::NUM, "chunkweight", "sigops-adjusted weight of the chunk containing this transaction"}
+                RPCResult{RPCResult::Type::NUM, "chunkfee", "fees of the transactions in this chunk"},
+                RPCResult{RPCResult::Type::NUM, "chunkweight", "sigops-adjusted weight of all transactions in this chunk"},
+                RPCResult{RPCResult::Type::ARR, "txs", "transactions in this chunk in mining order",
+                    {RPCResult{RPCResult::Type::STR_HEX, "txid", "transaction id"}}},
             }
         }}
     }
@@ -311,6 +312,19 @@ static std::vector<RPCResult> MempoolEntryDescription()
     };
 }
 
+void AppendChunkInfo(UniValue& all_chunks, FeePerWeight chunk_feerate, std::vector<const CTxMemPoolEntry *> chunk_txs)
+{
+    UniValue chunk(UniValue::VOBJ);
+    chunk.pushKV("chunkfee", ValueFromAmount((int)chunk_feerate.fee));
+    chunk.pushKV("chunkweight", chunk_feerate.size);
+    UniValue chunk_txids(UniValue::VARR);
+    for (const auto& chunk_tx : chunk_txs) {
+        chunk_txids.push_back(chunk_tx->GetTx().GetHash().ToString());
+    }
+    chunk.pushKV("txs", std::move(chunk_txids));
+    all_chunks.push_back(std::move(chunk));
+}
+
 static void clusterToJSON(const CTxMemPool& pool, UniValue& info, std::vector<const CTxMemPoolEntry *> cluster) EXCLUSIVE_LOCKS_REQUIRED(pool.cs)
 {
     AssertLockHeld(pool.cs);
@@ -318,18 +332,31 @@ static void clusterToJSON(const CTxMemPool& pool, UniValue& info, std::vector<co
     for (const auto& tx : cluster) {
         total_weight += tx->GetAdjustedWeight();
     }
-    info.pushKV("weight", total_weight);
+    info.pushKV("clusterweight", total_weight);
     info.pushKV("txcount", (int)cluster.size());
-    UniValue txs(UniValue::VARR);
 
+    // Output the cluster by chunk. This isn't handed to us by the mempool, but
+    // we can calculate it by looking at the chunk feerates of each transaction
+    // in the cluster.
+    FeePerWeight current_chunk_feerate = pool.GetMainChunkFeerate(*cluster[0]);
+    std::vector<const CTxMemPoolEntry *> current_chunk;
+    current_chunk.reserve(cluster.size());
+
+    UniValue all_chunks(UniValue::VARR);
     for (const auto& tx : cluster) {
-        UniValue txentry(UniValue::VOBJ);
-        auto feerate = pool.GetMainChunkFeerate(*tx);
-        txentry.pushKV("txid", tx->GetTx().GetHash().ToString());
-        txentry.pushKV("chunkfee", ValueFromAmount((int)feerate.fee));
-        txentry.pushKV("chunkweight", feerate.size);
-        txs.push_back(txentry);
+        if (current_chunk_feerate.size == 0) {
+            // We've iterated all the transactions in the previous chunk; so
+            // append it to the output.
+            AppendChunkInfo(all_chunks, pool.GetMainChunkFeerate(*current_chunk[0]), current_chunk);
+            current_chunk.clear();
+            current_chunk_feerate = pool.GetMainChunkFeerate(*tx);
+        }
+        current_chunk.push_back(tx);
+        current_chunk_feerate.size -= tx->GetAdjustedWeight();
     }
-    info.pushKV("txs", txs);
+    AppendChunkInfo(all_chunks, pool.GetMainChunkFeerate(*current_chunk[0]), current_chunk);
+    current_chunk.clear();
+    info.pushKV("chunks", std::move(all_chunks));
 }
 
 static void entryToJSON(const CTxMemPool& pool, UniValue& info, const CTxMemPoolEntry& e) EXCLUSIVE_LOCKS_REQUIRED(pool.cs)
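The chunk-grouping walk in `clusterToJSON()` above relies on each transaction reporting the feerate (and hence total weight) of the chunk it belongs to: subtracting each transaction's weight from the current chunk's remaining weight reaches zero exactly at a chunk boundary. A standalone sketch of that partitioning logic, using hypothetical types rather than the mempool's own:

```cpp
#include <cstdint>
#include <vector>

// Hypothetical stand-ins: each entry knows its own weight and the total
// weight of the chunk it belongs to (the role GetMainChunkFeerate() plays
// for real mempool entries).
struct EntrySketch {
    int64_t weight;
    int64_t chunk_weight;
};

// Split a cluster, given in mining (linearization) order, into its chunks.
std::vector<std::vector<EntrySketch>> PartitionIntoChunks(const std::vector<EntrySketch>& cluster)
{
    std::vector<std::vector<EntrySketch>> chunks;
    if (cluster.empty()) return chunks;
    int64_t remaining{cluster.front().chunk_weight};
    std::vector<EntrySketch> current;
    for (const auto& entry : cluster) {
        if (remaining == 0) {
            // The previous chunk's weight is fully consumed: boundary reached.
            chunks.push_back(current);
            current.clear();
            remaining = entry.chunk_weight;
        }
        current.push_back(entry);
        remaining -= entry.weight;
    }
    chunks.push_back(current); // flush the final chunk
    return chunks;
}
```

Because chunks arrive in mining order and every transaction in a chunk reports the same chunk weight, no extra bookkeeping from the mempool is needed to recover the boundaries.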
@@ -668,12 +695,18 @@ static RPCHelpMan getmempoolcluster()
         },
         [&](const RPCHelpMan& self, const JSONRPCRequest& request) -> UniValue
 {
-    uint256 hash = ParseHashV(request.params[0], "parameter 1");
+    uint256 hash = ParseHashV(request.params[0], "txid");
 
     const CTxMemPool& mempool = EnsureAnyMemPool(request.context);
     LOCK(mempool.cs);
 
-    auto cluster = mempool.GetCluster(Txid::FromUint256(hash));
+    auto txid = Txid::FromUint256(hash);
+    const auto entry{mempool.GetEntry(txid)};
+    if (entry == nullptr) {
+        throw JSONRPCError(RPC_INVALID_ADDRESS_OR_KEY, "Transaction not in mempool");
+    }
+
+    auto cluster = mempool.GetCluster(txid);
 
     UniValue info(UniValue::VOBJ);
     clusterToJSON(mempool, info, cluster);
@@ -67,7 +67,7 @@ BOOST_AUTO_TEST_CASE(SimpleRoundTripTest)
     CBlock block(BuildBlockTestCase(rand_ctx));
 
     LOCK2(cs_main, pool.cs);
-    AddToMempool(pool, entry.FromTx(block.vtx[2]));
+    TryAddToMempool(pool, entry.FromTx(block.vtx[2]));
     BOOST_CHECK_EQUAL(pool.get(block.vtx[2]->GetHash()).use_count(), SHARED_TX_OFFSET + 0);
 
     // Do a simple ShortTxIDs RT
@@ -151,7 +151,7 @@ BOOST_AUTO_TEST_CASE(NonCoinbasePreforwardRTTest)
     CBlock block(BuildBlockTestCase(rand_ctx));
 
     LOCK2(cs_main, pool.cs);
-    AddToMempool(pool, entry.FromTx(block.vtx[2]));
+    TryAddToMempool(pool, entry.FromTx(block.vtx[2]));
     BOOST_CHECK_EQUAL(pool.get(block.vtx[2]->GetHash()).use_count(), SHARED_TX_OFFSET + 0);
 
     Txid txhash;
@@ -222,7 +222,7 @@ BOOST_AUTO_TEST_CASE(SufficientPreforwardRTTest)
     CBlock block(BuildBlockTestCase(rand_ctx));
 
     LOCK2(cs_main, pool.cs);
-    AddToMempool(pool, entry.FromTx(block.vtx[1]));
+    TryAddToMempool(pool, entry.FromTx(block.vtx[1]));
     BOOST_CHECK_EQUAL(pool.get(block.vtx[1]->GetHash()).use_count(), SHARED_TX_OFFSET + 0);
 
     Txid txhash;
@@ -322,7 +322,7 @@ BOOST_AUTO_TEST_CASE(ReceiveWithExtraTransactions) {
     extra_txn.resize(10);
 
     LOCK2(cs_main, pool.cs);
-    AddToMempool(pool, entry.FromTx(block.vtx[2]));
+    TryAddToMempool(pool, entry.FromTx(block.vtx[2]));
     BOOST_CHECK_EQUAL(pool.get(block.vtx[2]->GetHash()).use_count(), SHARED_TX_OFFSET + 0);
     // Ensure the non_block_tx is actually not in the block
     for (const auto &block_tx : block.vtx) {
@@ -67,7 +67,7 @@ FUZZ_TARGET(mini_miner, .init = initialize_miner)
         TestMemPoolEntryHelper entry;
         const CAmount fee{ConsumeMoney(fuzzed_data_provider, /*max=*/MAX_MONEY/100000)};
         assert(MoneyRange(fee));
-        AddToMempool(pool, entry.Fee(fee).FromTx(tx));
+        TryAddToMempool(pool, entry.Fee(fee).FromTx(tx));
 
         // All outputs are available to spend
         for (uint32_t n{0}; n < num_outputs; ++n) {
@@ -122,9 +122,7 @@ std::unique_ptr<CTxMemPool> MakeMempool(FuzzedDataProvider& fuzzed_data_provider
 
     // ...override specific options for this specific fuzz suite
     mempool_opts.limits.ancestor_count = fuzzed_data_provider.ConsumeIntegralInRange<unsigned>(0, 50);
-    mempool_opts.limits.ancestor_size_vbytes = fuzzed_data_provider.ConsumeIntegralInRange<unsigned>(0, 202) * 1'000;
     mempool_opts.limits.descendant_count = fuzzed_data_provider.ConsumeIntegralInRange<unsigned>(0, 50);
-    mempool_opts.limits.descendant_size_vbytes = fuzzed_data_provider.ConsumeIntegralInRange<unsigned>(0, 202) * 1'000;
     mempool_opts.max_size_bytes = fuzzed_data_provider.ConsumeIntegralInRange<unsigned>(0, 200) * 1'000'000;
     mempool_opts.expiry = std::chrono::hours{fuzzed_data_provider.ConsumeIntegralInRange<unsigned>(0, 999)};
     // Only interested in 2 cases: sigop cost 0 or when single legacy sigop cost is >> 1KvB
@@ -81,7 +81,7 @@ FUZZ_TARGET(partially_downloaded_block, .init = initialize_pdb)
 
         if (add_to_mempool && !pool.exists(tx->GetHash())) {
             LOCK2(cs_main, pool.cs);
-            AddToMempool(pool, ConsumeTxMemPoolEntry(fuzzed_data_provider, *tx));
+            TryAddToMempool(pool, ConsumeTxMemPoolEntry(fuzzed_data_provider, *tx));
             available.insert(i);
         }
     }
@@ -75,14 +75,14 @@ FUZZ_TARGET(rbf, .init = initialize_rbf)
         }
         LOCK2(cs_main, pool.cs);
         if (!pool.GetIter(another_tx.GetHash())) {
-            AddToMempool(pool, ConsumeTxMemPoolEntry(fuzzed_data_provider, another_tx));
+            TryAddToMempool(pool, ConsumeTxMemPoolEntry(fuzzed_data_provider, another_tx));
         }
     }
     const CTransaction tx{*mtx};
     if (fuzzed_data_provider.ConsumeBool()) {
         LOCK2(cs_main, pool.cs);
         if (!pool.GetIter(tx.GetHash())) {
-            AddToMempool(pool, ConsumeTxMemPoolEntry(fuzzed_data_provider, tx));
+            TryAddToMempool(pool, ConsumeTxMemPoolEntry(fuzzed_data_provider, tx));
         }
     }
     {
@@ -143,7 +143,7 @@ FUZZ_TARGET(package_rbf, .init = initialize_package_rbf)
             break;
         }
         assert(!pool.GetIter(parent_entry.GetTx().GetHash()));
-        AddToMempool(pool, parent_entry);
+        TryAddToMempool(pool, parent_entry);
 
         // It's possible that adding this to the mempool failed due to cluster
         // size limits; if so bail out.
@@ -162,7 +162,7 @@ FUZZ_TARGET(package_rbf, .init = initialize_package_rbf)
             break;
         }
         if (!pool.GetIter(child_entry.GetTx().GetHash())) {
-            AddToMempool(pool, child_entry);
+            TryAddToMempool(pool, child_entry);
             // Adding this transaction to the mempool may fail due to cluster
             // size limits; if so bail out.
             if(!pool.GetIter(child_entry.GetTx().GetHash())) {
@@ -72,17 +72,17 @@ BOOST_AUTO_TEST_CASE(MempoolRemoveTest)
     BOOST_CHECK_EQUAL(testPool.size(), poolSize);
 
     // Just the parent:
-    AddToMempool(testPool, entry.FromTx(txParent));
+    TryAddToMempool(testPool, entry.FromTx(txParent));
     poolSize = testPool.size();
     testPool.removeRecursive(CTransaction(txParent), REMOVAL_REASON_DUMMY);
     BOOST_CHECK_EQUAL(testPool.size(), poolSize - 1);
 
     // Parent, children, grandchildren:
-    AddToMempool(testPool, entry.FromTx(txParent));
+    TryAddToMempool(testPool, entry.FromTx(txParent));
     for (int i = 0; i < 3; i++)
     {
-        AddToMempool(testPool, entry.FromTx(txChild[i]));
-        AddToMempool(testPool, entry.FromTx(txGrandChild[i]));
+        TryAddToMempool(testPool, entry.FromTx(txChild[i]));
+        TryAddToMempool(testPool, entry.FromTx(txGrandChild[i]));
     }
     // Remove Child[0], GrandChild[0] should be removed:
     poolSize = testPool.size();
@@ -104,8 +104,8 @@ BOOST_AUTO_TEST_CASE(MempoolRemoveTest)
     // Add children and grandchildren, but NOT the parent (simulate the parent being in a block)
     for (int i = 0; i < 3; i++)
     {
-        AddToMempool(testPool, entry.FromTx(txChild[i]));
-        AddToMempool(testPool, entry.FromTx(txGrandChild[i]));
+        TryAddToMempool(testPool, entry.FromTx(txChild[i]));
+        TryAddToMempool(testPool, entry.FromTx(txGrandChild[i]));
     }
     // Now remove the parent, as might happen if a block-re-org occurs but the parent cannot be
     // put into the mempool (maybe because it is non-standard):
@@ -127,7 +127,7 @@ BOOST_AUTO_TEST_CASE(MempoolSizeLimitTest)
     tx1.vout.resize(1);
     tx1.vout[0].scriptPubKey = CScript() << OP_1 << OP_EQUAL;
     tx1.vout[0].nValue = 10 * COIN;
-    AddToMempool(pool, entry.Fee(1000LL).FromTx(tx1));
+    TryAddToMempool(pool, entry.Fee(1000LL).FromTx(tx1));
 
     CMutableTransaction tx2 = CMutableTransaction();
     tx2.vin.resize(1);
@@ -135,7 +135,7 @@ BOOST_AUTO_TEST_CASE(MempoolSizeLimitTest)
     tx2.vout.resize(1);
     tx2.vout[0].scriptPubKey = CScript() << OP_2 << OP_EQUAL;
     tx2.vout[0].nValue = 10 * COIN;
-    AddToMempool(pool, entry.Fee(500LL).FromTx(tx2));
+    TryAddToMempool(pool, entry.Fee(500LL).FromTx(tx2));
 
     pool.TrimToSize(pool.DynamicMemoryUsage()); // should do nothing
     BOOST_CHECK(pool.exists(tx1.GetHash()));
@@ -145,7 +145,7 @@ BOOST_AUTO_TEST_CASE(MempoolSizeLimitTest)
     BOOST_CHECK(pool.exists(tx1.GetHash()));
     BOOST_CHECK(!pool.exists(tx2.GetHash()));
 
-    AddToMempool(pool, entry.FromTx(tx2));
+    TryAddToMempool(pool, entry.FromTx(tx2));
     CMutableTransaction tx3 = CMutableTransaction();
     tx3.vin.resize(1);
     tx3.vin[0].prevout = COutPoint(tx2.GetHash(), 0);
@@ -153,7 +153,7 @@ BOOST_AUTO_TEST_CASE(MempoolSizeLimitTest)
     tx3.vout.resize(1);
     tx3.vout[0].scriptPubKey = CScript() << OP_3 << OP_EQUAL;
     tx3.vout[0].nValue = 10 * COIN;
-    AddToMempool(pool, entry.Fee(2000LL).FromTx(tx3));
+    TryAddToMempool(pool, entry.Fee(2000LL).FromTx(tx3));
 
     pool.TrimToSize(pool.DynamicMemoryUsage() * 3 / 4); // tx3 should pay for tx2 (CPFP)
     BOOST_CHECK(!pool.exists(tx1.GetHash()));
@@ -216,11 +216,11 @@ BOOST_AUTO_TEST_CASE(MempoolSizeLimitTest)
     tx7.vout[1].scriptPubKey = CScript() << OP_7 << OP_EQUAL;
     tx7.vout[1].nValue = 10 * COIN;
 
-    AddToMempool(pool, entry.Fee(700LL).FromTx(tx4));
+    TryAddToMempool(pool, entry.Fee(700LL).FromTx(tx4));
     auto usage_with_tx4_only = pool.DynamicMemoryUsage();
-    AddToMempool(pool, entry.Fee(100LL).FromTx(tx5));
-    AddToMempool(pool, entry.Fee(110LL).FromTx(tx6));
-    AddToMempool(pool, entry.Fee(900LL).FromTx(tx7));
+    TryAddToMempool(pool, entry.Fee(100LL).FromTx(tx5));
+    TryAddToMempool(pool, entry.Fee(110LL).FromTx(tx6));
+    TryAddToMempool(pool, entry.Fee(900LL).FromTx(tx7));
 
     // From the topology above, tx7 must be sorted last, so it should
     // definitely evicted first if we must trim. tx4 should definitely remain
@@ -234,10 +234,10 @@ BOOST_AUTO_TEST_CASE(MempoolSizeLimitTest)
     // tx7, but this behavior need not be guaranteed.
 
     if (!pool.exists(tx5.GetHash()))
-        AddToMempool(pool, entry.Fee(100LL).FromTx(tx5));
+        TryAddToMempool(pool, entry.Fee(100LL).FromTx(tx5));
     if (!pool.exists(tx6.GetHash()))
-        AddToMempool(pool, entry.Fee(110LL).FromTx(tx6));
-    AddToMempool(pool, entry.Fee(900LL).FromTx(tx7));
+        TryAddToMempool(pool, entry.Fee(110LL).FromTx(tx6));
+    TryAddToMempool(pool, entry.Fee(900LL).FromTx(tx7));
 
     // If we trim sufficiently, everything but tx4 should be removed.
     pool.TrimToSize(usage_with_tx4_only + 1);
@@ -246,9 +246,9 @@ BOOST_AUTO_TEST_CASE(MempoolSizeLimitTest)
     BOOST_CHECK(!pool.exists(tx6.GetHash()));
     BOOST_CHECK(!pool.exists(tx7.GetHash()));
 
-    AddToMempool(pool, entry.Fee(100LL).FromTx(tx5));
-    AddToMempool(pool, entry.Fee(110LL).FromTx(tx6));
-    AddToMempool(pool, entry.Fee(900LL).FromTx(tx7));
+    TryAddToMempool(pool, entry.Fee(100LL).FromTx(tx5));
+    TryAddToMempool(pool, entry.Fee(110LL).FromTx(tx6));
+    TryAddToMempool(pool, entry.Fee(900LL).FromTx(tx7));
 
     std::vector<CTransactionRef> vtx;
     SetMockTime(42);
@@ -307,7 +307,7 @@ BOOST_AUTO_TEST_CASE(MempoolAncestryTests)
     // [tx1]
     //
     CTransactionRef tx1 = make_tx(/*output_values=*/{10 * COIN});
-    AddToMempool(pool, entry.Fee(10000LL).FromTx(tx1));
+    TryAddToMempool(pool, entry.Fee(10000LL).FromTx(tx1));
 
     // Ancestors / clustersize should be 1 / 1 (itself / itself)
     pool.GetTransactionAncestry(tx1->GetHash(), ancestors, clustersize);
@@ -319,7 +319,7 @@ BOOST_AUTO_TEST_CASE(MempoolAncestryTests)
     // [tx1].0 <- [tx2]
     //
     CTransactionRef tx2 = make_tx(/*output_values=*/{495 * CENT, 5 * COIN}, /*inputs=*/{tx1});
-    AddToMempool(pool, entry.Fee(10000LL).FromTx(tx2));
+    TryAddToMempool(pool, entry.Fee(10000LL).FromTx(tx2));
 
     // Ancestors / clustersize should be:
     // transaction ancestors clustersize
@@ -338,7 +338,7 @@ BOOST_AUTO_TEST_CASE(MempoolAncestryTests)
     // [tx1].0 <- [tx2].0 <- [tx3]
     //
     CTransactionRef tx3 = make_tx(/*output_values=*/{290 * CENT, 200 * CENT}, /*inputs=*/{tx2});
-    AddToMempool(pool, entry.Fee(10000LL).FromTx(tx3));
+    TryAddToMempool(pool, entry.Fee(10000LL).FromTx(tx3));
 
     // Ancestors / clustersize should be:
     // transaction ancestors clustersize
@@ -363,7 +363,7 @@ BOOST_AUTO_TEST_CASE(MempoolAncestryTests)
    //  \---1 <- [tx4]
     //
     CTransactionRef tx4 = make_tx(/*output_values=*/{290 * CENT, 250 * CENT}, /*inputs=*/{tx2}, /*input_indices=*/{1});
-    AddToMempool(pool, entry.Fee(10000LL).FromTx(tx4));
+    TryAddToMempool(pool, entry.Fee(10000LL).FromTx(tx4));
 
     // Ancestors / clustersize should be:
     // transaction ancestors clustersize
@@ -400,13 +400,13 @@ BOOST_AUTO_TEST_CASE(MempoolAncestryTests)
         CTransactionRef& tyi = *ty[i];
         tyi = make_tx(/*output_values=*/{v}, /*inputs=*/i > 0 ? std::vector<CTransactionRef>{*ty[i - 1]} : std::vector<CTransactionRef>{});
         v -= 50 * CENT;
-        AddToMempool(pool, entry.Fee(10000LL).FromTx(tyi));
+        TryAddToMempool(pool, entry.Fee(10000LL).FromTx(tyi));
         pool.GetTransactionAncestry(tyi->GetHash(), ancestors, clustersize);
         BOOST_CHECK_EQUAL(ancestors, i+1);
         BOOST_CHECK_EQUAL(clustersize, i+1);
     }
     CTransactionRef ty6 = make_tx(/*output_values=*/{5 * COIN}, /*inputs=*/{tx3, ty5});
-    AddToMempool(pool, entry.Fee(10000LL).FromTx(ty6));
+    TryAddToMempool(pool, entry.Fee(10000LL).FromTx(ty6));
 
     // Ancestors / clustersize should be:
     // transaction ancestors clustersize
@@ -472,10 +472,10 @@ BOOST_AUTO_TEST_CASE(MempoolAncestryTestsDiamond)
     tb = make_tx(/*output_values=*/{5 * COIN, 3 * COIN}, /*inputs=*/ {ta});
     tc = make_tx(/*output_values=*/{2 * COIN}, /*inputs=*/{tb}, /*input_indices=*/{1});
     td = make_tx(/*output_values=*/{6 * COIN}, /*inputs=*/{tb, tc}, /*input_indices=*/{0, 0});
-    AddToMempool(pool, entry.Fee(10000LL).FromTx(ta));
-    AddToMempool(pool, entry.Fee(10000LL).FromTx(tb));
-    AddToMempool(pool, entry.Fee(10000LL).FromTx(tc));
-    AddToMempool(pool, entry.Fee(10000LL).FromTx(td));
+    TryAddToMempool(pool, entry.Fee(10000LL).FromTx(ta));
+    TryAddToMempool(pool, entry.Fee(10000LL).FromTx(tb));
+    TryAddToMempool(pool, entry.Fee(10000LL).FromTx(tc));
+    TryAddToMempool(pool, entry.Fee(10000LL).FromTx(td));
 
     // Ancestors / descendants should be:
     // transaction ancestors descendants
@@ -149,21 +149,21 @@ void MinerTestingSetup::TestPackageSelection(const CScript& scriptPubKey, const
     // This tx has a low fee: 1000 satoshis
     Txid hashParentTx = tx.GetHash(); // save this txid for later use
     const auto parent_tx{entry.Fee(1000).Time(Now<NodeSeconds>()).SpendsCoinbase(true).FromTx(tx)};
-    AddToMempool(tx_mempool, parent_tx);
+    TryAddToMempool(tx_mempool, parent_tx);
 
     // This tx has a medium fee: 10000 satoshis
     tx.vin[0].prevout.hash = txFirst[1]->GetHash();
     tx.vout[0].nValue = 5000000000LL - 10000;
     Txid hashMediumFeeTx = tx.GetHash();
     const auto medium_fee_tx{entry.Fee(10000).Time(Now<NodeSeconds>()).SpendsCoinbase(true).FromTx(tx)};
-    AddToMempool(tx_mempool, medium_fee_tx);
+    TryAddToMempool(tx_mempool, medium_fee_tx);
 
     // This tx has a high fee, but depends on the first transaction
     tx.vin[0].prevout.hash = hashParentTx;
     tx.vout[0].nValue = 5000000000LL - 1000 - 50000; // 50k satoshi fee
     Txid hashHighFeeTx = tx.GetHash();
     const auto high_fee_tx{entry.Fee(50000).Time(Now<NodeSeconds>()).SpendsCoinbase(false).FromTx(tx)};
-    AddToMempool(tx_mempool, high_fee_tx);
+    TryAddToMempool(tx_mempool, high_fee_tx);
 
     block_template = mining->createNewBlock(options);
     BOOST_REQUIRE(block_template);
@@ -192,7 +192,7 @@ void MinerTestingSetup::TestPackageSelection(const CScript& scriptPubKey, const
     tx.vin[0].prevout.hash = hashHighFeeTx;
     tx.vout[0].nValue = 5000000000LL - 1000 - 50000; // 0 fee
     Txid hashFreeTx = tx.GetHash();
-    AddToMempool(tx_mempool, entry.Fee(0).FromTx(tx));
+    TryAddToMempool(tx_mempool, entry.Fee(0).FromTx(tx));
     uint64_t freeTxSize{::GetSerializeSize(TX_WITH_WITNESS(tx))};
 
     // Calculate a fee on child transaction that will put the package just
@@ -202,7 +202,7 @@ void MinerTestingSetup::TestPackageSelection(const CScript& scriptPubKey, const
     tx.vin[0].prevout.hash = hashFreeTx;
     tx.vout[0].nValue = 5000000000LL - 1000 - 50000 - feeToUse;
     Txid hashLowFeeTx = tx.GetHash();
-    AddToMempool(tx_mempool, entry.Fee(feeToUse).FromTx(tx));
+    TryAddToMempool(tx_mempool, entry.Fee(feeToUse).FromTx(tx));
 
     // waitNext() should return nullptr because there is no better template
     should_be_nullptr = block_template->waitNext({.timeout = MillisecondsDouble{0}, .fee_threshold = 1});
@@ -221,7 +221,7 @@ void MinerTestingSetup::TestPackageSelection(const CScript& scriptPubKey, const
     tx_mempool.removeRecursive(CTransaction(tx), MemPoolRemovalReason::REPLACED);
     tx.vout[0].nValue -= 2; // Now we should be just over the min relay fee
     hashLowFeeTx = tx.GetHash();
-    AddToMempool(tx_mempool, entry.Fee(feeToUse + 2).FromTx(tx));
+    TryAddToMempool(tx_mempool, entry.Fee(feeToUse + 2).FromTx(tx));
 
     // waitNext() should return if fees for the new template are at least 1 sat up
     block_template = block_template->waitNext({.fee_threshold = 1});
@@ -243,7 +243,7 @@ void MinerTestingSetup::TestPackageSelection(const CScript& scriptPubKey, const
     // hashFreeTx2 + hashLowFeeTx2.
     BulkTransaction(tx, 4000);
     Txid hashFreeTx2 = tx.GetHash();
-    AddToMempool(tx_mempool, entry.Fee(0).SpendsCoinbase(true).FromTx(tx));
+    TryAddToMempool(tx_mempool, entry.Fee(0).SpendsCoinbase(true).FromTx(tx));
 
     // This tx can't be mined by itself
     tx.vin[0].prevout.hash = hashFreeTx2;
@@ -251,7 +251,7 @@ void MinerTestingSetup::TestPackageSelection(const CScript& scriptPubKey, const
     feeToUse = blockMinFeeRate.GetFee(freeTxSize);
     tx.vout[0].nValue = 5000000000LL - 100000000 - feeToUse;
     Txid hashLowFeeTx2 = tx.GetHash();
-    AddToMempool(tx_mempool, entry.Fee(feeToUse).SpendsCoinbase(false).FromTx(tx));
+    TryAddToMempool(tx_mempool, entry.Fee(feeToUse).SpendsCoinbase(false).FromTx(tx));
     block_template = mining->createNewBlock(options);
     BOOST_REQUIRE(block_template);
     block = block_template->getBlock();
@@ -266,7 +266,7 @@ void MinerTestingSetup::TestPackageSelection(const CScript& scriptPubKey, const
     // as well.
     tx.vin[0].prevout.n = 1;
     tx.vout[0].nValue = 100000000 - 10000; // 10k satoshi fee
-    AddToMempool(tx_mempool, entry.Fee(10000).FromTx(tx));
+    TryAddToMempool(tx_mempool, entry.Fee(10000).FromTx(tx));
     block_template = mining->createNewBlock(options);
     BOOST_REQUIRE(block_template);
     block = block_template->getBlock();
@@ -350,7 +350,7 @@ void MinerTestingSetup::TestBasicMining(const CScript& scriptPubKey, const std::
     for (auto& t : txs) {
         // If we don't set the number of sigops in the CTxMemPoolEntry,
         // template creation fails during sanity checks.
-        AddToMempool(tx_mempool, entry.Fee(LOWFEE).Time(Now<NodeSeconds>()).SpendsCoinbase(true).FromTx(t));
+        TryAddToMempool(tx_mempool, entry.Fee(LOWFEE).Time(Now<NodeSeconds>()).SpendsCoinbase(true).FromTx(t));
         legacy_sigops += GetLegacySigOpCount(*t);
         BOOST_CHECK(tx_mempool.GetIter(t->GetHash()).has_value());
     }
@@ -375,7 +375,7 @@ void MinerTestingSetup::TestBasicMining(const CScript& scriptPubKey, const std::
 
     int64_t legacy_sigops = 0;
     for (auto& t : txs) {
-        AddToMempool(tx_mempool, entry.Fee(LOWFEE).Time(Now<NodeSeconds>()).SpendsCoinbase(true).SigOpsCost(GetLegacySigOpCount(*t)*WITNESS_SCALE_FACTOR).FromTx(t));
+        TryAddToMempool(tx_mempool, entry.Fee(LOWFEE).Time(Now<NodeSeconds>()).SpendsCoinbase(true).SigOpsCost(GetLegacySigOpCount(*t)*WITNESS_SCALE_FACTOR).FromTx(t));
         legacy_sigops += GetLegacySigOpCount(*t);
         BOOST_CHECK(tx_mempool.GetIter(t->GetHash()).has_value());
     }
@@ -408,7 +408,7 @@ void MinerTestingSetup::TestBasicMining(const CScript& scriptPubKey, const std::
         tx.vout[0].nValue -= LOWFEE;
         hash = tx.GetHash();
         bool spendsCoinbase = i == 0; // only first tx spends coinbase
-        AddToMempool(tx_mempool, entry.Fee(LOWFEE).Time(Now<NodeSeconds>()).SpendsCoinbase(spendsCoinbase).FromTx(tx));
+        TryAddToMempool(tx_mempool, entry.Fee(LOWFEE).Time(Now<NodeSeconds>()).SpendsCoinbase(spendsCoinbase).FromTx(tx));
         BOOST_CHECK(tx_mempool.GetIter(hash).has_value());
         tx.vin[0].prevout.hash = hash;
     }
@@ -421,7 +421,7 @@ void MinerTestingSetup::TestBasicMining(const CScript& scriptPubKey, const std::
 
         // orphan in tx_mempool, template creation fails
        hash = tx.GetHash();
-        AddToMempool(tx_mempool, entry.Fee(LOWFEE).Time(Now<NodeSeconds>()).FromTx(tx));
+        TryAddToMempool(tx_mempool, entry.Fee(LOWFEE).Time(Now<NodeSeconds>()).FromTx(tx));
         BOOST_CHECK_EXCEPTION(mining->createNewBlock(options), std::runtime_error, HasReason("bad-txns-inputs-missingorspent"));
     }
 
@@ -434,7 +434,7 @@ void MinerTestingSetup::TestBasicMining(const CScript& scriptPubKey, const std::
     tx.vin[0].prevout.hash = txFirst[1]->GetHash();
     tx.vout[0].nValue = BLOCKSUBSIDY - HIGHFEE;
     hash = tx.GetHash();
-    AddToMempool(tx_mempool, entry.Fee(HIGHFEE).Time(Now<NodeSeconds>()).SpendsCoinbase(true).FromTx(tx));
+    TryAddToMempool(tx_mempool, entry.Fee(HIGHFEE).Time(Now<NodeSeconds>()).SpendsCoinbase(true).FromTx(tx));
     tx.vin[0].prevout.hash = hash;
     tx.vin.resize(2);
     tx.vin[1].scriptSig = CScript() << OP_1;
@@ -442,7 +442,7 @@ void MinerTestingSetup::TestBasicMining(const CScript& scriptPubKey, const std::
     tx.vin[1].prevout.n = 0;
     tx.vout[0].nValue = tx.vout[0].nValue + BLOCKSUBSIDY - HIGHERFEE; // First txn output + fresh coinbase - new txn fee
     hash = tx.GetHash();
-    AddToMempool(tx_mempool, entry.Fee(HIGHFEE).Time(Now<NodeSeconds>()).SpendsCoinbase(true).FromTx(tx));
+    TryAddToMempool(tx_mempool, entry.Fee(HIGHERFEE).Time(Now<NodeSeconds>()).SpendsCoinbase(true).FromTx(tx));
|
||||
BOOST_REQUIRE(mining->createNewBlock(options));
|
||||
}
|
||||
|
||||
@ -457,7 +457,7 @@ void MinerTestingSetup::TestBasicMining(const CScript& scriptPubKey, const std::
|
||||
tx.vout[0].nValue = 0;
|
||||
hash = tx.GetHash();
|
||||
// give it a fee so it'll get mined
|
||||
AddToMempool(tx_mempool, entry.Fee(LOWFEE).Time(Now<NodeSeconds>()).SpendsCoinbase(false).FromTx(tx));
|
||||
TryAddToMempool(tx_mempool, entry.Fee(LOWFEE).Time(Now<NodeSeconds>()).SpendsCoinbase(false).FromTx(tx));
|
||||
// Should throw bad-cb-multiple
|
||||
BOOST_CHECK_EXCEPTION(mining->createNewBlock(options), std::runtime_error, HasReason("bad-cb-multiple"));
|
||||
}
|
||||
@@ -472,10 +472,10 @@ void MinerTestingSetup::TestBasicMining(const CScript& scriptPubKey, const std::
tx.vout[0].nValue = BLOCKSUBSIDY - HIGHFEE;
tx.vout[0].scriptPubKey = CScript() << OP_1;
hash = tx.GetHash();
-AddToMempool(tx_mempool, entry.Fee(HIGHFEE).Time(Now<NodeSeconds>()).SpendsCoinbase(true).FromTx(tx));
+TryAddToMempool(tx_mempool, entry.Fee(HIGHFEE).Time(Now<NodeSeconds>()).SpendsCoinbase(true).FromTx(tx));
tx.vout[0].scriptPubKey = CScript() << OP_2;
hash = tx.GetHash();
-AddToMempool(tx_mempool, entry.Fee(HIGHFEE).Time(Now<NodeSeconds>()).SpendsCoinbase(true).FromTx(tx));
+TryAddToMempool(tx_mempool, entry.Fee(HIGHFEE).Time(Now<NodeSeconds>()).SpendsCoinbase(true).FromTx(tx));
BOOST_CHECK_EXCEPTION(mining->createNewBlock(options), std::runtime_error, HasReason("bad-txns-inputs-missingorspent"));
}

@@ -518,12 +518,12 @@ void MinerTestingSetup::TestBasicMining(const CScript& scriptPubKey, const std::
CScript script = CScript() << OP_0;
tx.vout[0].scriptPubKey = GetScriptForDestination(ScriptHash(script));
hash = tx.GetHash();
-AddToMempool(tx_mempool, entry.Fee(LOWFEE).Time(Now<NodeSeconds>()).SpendsCoinbase(true).FromTx(tx));
+TryAddToMempool(tx_mempool, entry.Fee(LOWFEE).Time(Now<NodeSeconds>()).SpendsCoinbase(true).FromTx(tx));
tx.vin[0].prevout.hash = hash;
tx.vin[0].scriptSig = CScript() << std::vector<unsigned char>(script.begin(), script.end());
tx.vout[0].nValue -= LOWFEE;
hash = tx.GetHash();
-AddToMempool(tx_mempool, entry.Fee(LOWFEE).Time(Now<NodeSeconds>()).SpendsCoinbase(false).FromTx(tx));
+TryAddToMempool(tx_mempool, entry.Fee(LOWFEE).Time(Now<NodeSeconds>()).SpendsCoinbase(false).FromTx(tx));
BOOST_CHECK_EXCEPTION(mining->createNewBlock(options), std::runtime_error, HasReason("block-script-verify-flag-failed"));

// Delete the dummy blocks again.
@@ -559,7 +559,7 @@ void MinerTestingSetup::TestBasicMining(const CScript& scriptPubKey, const std::
tx.vout[0].scriptPubKey = CScript() << OP_1;
tx.nLockTime = 0;
hash = tx.GetHash();
-AddToMempool(tx_mempool, entry.Fee(HIGHFEE).Time(Now<NodeSeconds>()).SpendsCoinbase(true).FromTx(tx));
+TryAddToMempool(tx_mempool, entry.Fee(HIGHFEE).Time(Now<NodeSeconds>()).SpendsCoinbase(true).FromTx(tx));
BOOST_CHECK(CheckFinalTxAtTip(*Assert(m_node.chainman->ActiveChain().Tip()), CTransaction{tx})); // Locktime passes
BOOST_CHECK(!TestSequenceLocks(CTransaction{tx}, tx_mempool)); // Sequence locks fail

@@ -573,7 +573,7 @@ void MinerTestingSetup::TestBasicMining(const CScript& scriptPubKey, const std::
tx.vin[0].nSequence = CTxIn::SEQUENCE_LOCKTIME_TYPE_FLAG | (((m_node.chainman->ActiveChain().Tip()->GetMedianTimePast()+1-m_node.chainman->ActiveChain()[1]->GetMedianTimePast()) >> CTxIn::SEQUENCE_LOCKTIME_GRANULARITY) + 1); // txFirst[1] is the 3rd block
prevheights[0] = baseheight + 2;
hash = tx.GetHash();
-AddToMempool(tx_mempool, entry.Time(Now<NodeSeconds>()).FromTx(tx));
+TryAddToMempool(tx_mempool, entry.Time(Now<NodeSeconds>()).FromTx(tx));
BOOST_CHECK(CheckFinalTxAtTip(*Assert(m_node.chainman->ActiveChain().Tip()), CTransaction{tx})); // Locktime passes
BOOST_CHECK(!TestSequenceLocks(CTransaction{tx}, tx_mempool)); // Sequence locks fail

@@ -596,7 +596,7 @@ void MinerTestingSetup::TestBasicMining(const CScript& scriptPubKey, const std::
prevheights[0] = baseheight + 3;
tx.nLockTime = m_node.chainman->ActiveChain().Tip()->nHeight + 1;
hash = tx.GetHash();
-AddToMempool(tx_mempool, entry.Time(Now<NodeSeconds>()).FromTx(tx));
+TryAddToMempool(tx_mempool, entry.Time(Now<NodeSeconds>()).FromTx(tx));
BOOST_CHECK(!CheckFinalTxAtTip(*Assert(m_node.chainman->ActiveChain().Tip()), CTransaction{tx})); // Locktime fails
BOOST_CHECK(TestSequenceLocks(CTransaction{tx}, tx_mempool)); // Sequence locks pass
BOOST_CHECK(IsFinalTx(CTransaction(tx), m_node.chainman->ActiveChain().Tip()->nHeight + 2, m_node.chainman->ActiveChain().Tip()->GetMedianTimePast())); // Locktime passes on 2nd block
@@ -611,7 +611,7 @@ void MinerTestingSetup::TestBasicMining(const CScript& scriptPubKey, const std::
prevheights.resize(1);
prevheights[0] = baseheight + 4;
hash = tx.GetHash();
-AddToMempool(tx_mempool, entry.Time(Now<NodeSeconds>()).FromTx(tx));
+TryAddToMempool(tx_mempool, entry.Time(Now<NodeSeconds>()).FromTx(tx));
BOOST_CHECK(!CheckFinalTxAtTip(*Assert(m_node.chainman->ActiveChain().Tip()), CTransaction{tx})); // Locktime fails
BOOST_CHECK(TestSequenceLocks(CTransaction{tx}, tx_mempool)); // Sequence locks pass
BOOST_CHECK(IsFinalTx(CTransaction(tx), m_node.chainman->ActiveChain().Tip()->nHeight + 2, m_node.chainman->ActiveChain().Tip()->GetMedianTimePast() + 1)); // Locktime passes 1 second later
@@ -675,7 +675,7 @@ void MinerTestingSetup::TestPrioritisedMining(const CScript& scriptPubKey, const
tx.vout.resize(1);
tx.vout[0].nValue = 5000000000LL; // 0 fee
Txid hashFreePrioritisedTx = tx.GetHash();
-AddToMempool(tx_mempool, entry.Fee(0).Time(Now<NodeSeconds>()).SpendsCoinbase(true).FromTx(tx));
+TryAddToMempool(tx_mempool, entry.Fee(0).Time(Now<NodeSeconds>()).SpendsCoinbase(true).FromTx(tx));
tx_mempool.PrioritiseTransaction(hashFreePrioritisedTx, 5 * COIN);

tx.vin[0].prevout.hash = txFirst[1]->GetHash();
@@ -683,20 +683,20 @@ void MinerTestingSetup::TestPrioritisedMining(const CScript& scriptPubKey, const
tx.vout[0].nValue = 5000000000LL - 1000;
// This tx has a low fee: 1000 satoshis
Txid hashParentTx = tx.GetHash(); // save this txid for later use
-AddToMempool(tx_mempool, entry.Fee(1000).Time(Now<NodeSeconds>()).SpendsCoinbase(true).FromTx(tx));
+TryAddToMempool(tx_mempool, entry.Fee(1000).Time(Now<NodeSeconds>()).SpendsCoinbase(true).FromTx(tx));

// This tx has a medium fee: 10000 satoshis
tx.vin[0].prevout.hash = txFirst[2]->GetHash();
tx.vout[0].nValue = 5000000000LL - 10000;
Txid hashMediumFeeTx = tx.GetHash();
-AddToMempool(tx_mempool, entry.Fee(10000).Time(Now<NodeSeconds>()).SpendsCoinbase(true).FromTx(tx));
+TryAddToMempool(tx_mempool, entry.Fee(10000).Time(Now<NodeSeconds>()).SpendsCoinbase(true).FromTx(tx));
tx_mempool.PrioritiseTransaction(hashMediumFeeTx, -5 * COIN);

// This tx also has a low fee, but is prioritised
tx.vin[0].prevout.hash = hashParentTx;
tx.vout[0].nValue = 5000000000LL - 1000 - 1000; // 1000 satoshi fee
Txid hashPrioritsedChild = tx.GetHash();
-AddToMempool(tx_mempool, entry.Fee(1000).Time(Now<NodeSeconds>()).SpendsCoinbase(false).FromTx(tx));
+TryAddToMempool(tx_mempool, entry.Fee(1000).Time(Now<NodeSeconds>()).SpendsCoinbase(false).FromTx(tx));
tx_mempool.PrioritiseTransaction(hashPrioritsedChild, 2 * COIN);

// Test that transaction selection properly updates ancestor fee calculations as prioritised
@@ -708,19 +708,19 @@ void MinerTestingSetup::TestPrioritisedMining(const CScript& scriptPubKey, const
tx.vin[0].prevout.hash = txFirst[3]->GetHash();
tx.vout[0].nValue = 5000000000LL; // 0 fee
Txid hashFreeParent = tx.GetHash();
-AddToMempool(tx_mempool, entry.Fee(0).SpendsCoinbase(true).FromTx(tx));
+TryAddToMempool(tx_mempool, entry.Fee(0).SpendsCoinbase(true).FromTx(tx));
tx_mempool.PrioritiseTransaction(hashFreeParent, 10 * COIN);

tx.vin[0].prevout.hash = hashFreeParent;
tx.vout[0].nValue = 5000000000LL; // 0 fee
Txid hashFreeChild = tx.GetHash();
-AddToMempool(tx_mempool, entry.Fee(0).SpendsCoinbase(false).FromTx(tx));
+TryAddToMempool(tx_mempool, entry.Fee(0).SpendsCoinbase(false).FromTx(tx));
tx_mempool.PrioritiseTransaction(hashFreeChild, 1 * COIN);

tx.vin[0].prevout.hash = hashFreeChild;
tx.vout[0].nValue = 5000000000LL; // 0 fee
Txid hashFreeGrandchild = tx.GetHash();
-AddToMempool(tx_mempool, entry.Fee(0).SpendsCoinbase(false).FromTx(tx));
+TryAddToMempool(tx_mempool, entry.Fee(0).SpendsCoinbase(false).FromTx(tx));

auto block_template = mining->createNewBlock(options);
BOOST_REQUIRE(block_template);
@@ -84,7 +84,7 @@ BOOST_FIXTURE_TEST_CASE(miniminer_negative, TestChain100Setup)
const CAmount negative_modified_fees{positive_base_fee + negative_fee_delta};
BOOST_CHECK(negative_modified_fees < 0);
const auto tx_mod_negative = make_tx({COutPoint{m_coinbase_txns[4]->GetHash(), 0}}, /*num_outputs=*/1);
-AddToMempool(pool, entry.Fee(positive_base_fee).FromTx(tx_mod_negative));
+TryAddToMempool(pool, entry.Fee(positive_base_fee).FromTx(tx_mod_negative));
pool.PrioritiseTransaction(tx_mod_negative->GetHash(), negative_fee_delta);
const COutPoint only_outpoint{tx_mod_negative->GetHash(), 0};

@@ -114,21 +114,21 @@ BOOST_FIXTURE_TEST_CASE(miniminer_1p1c, TestChain100Setup)

// Create a parent tx0 and child tx1 with normal fees:
const auto tx0 = make_tx({COutPoint{m_coinbase_txns[0]->GetHash(), 0}}, /*num_outputs=*/2);
-AddToMempool(pool, entry.Fee(med_fee).FromTx(tx0));
+TryAddToMempool(pool, entry.Fee(med_fee).FromTx(tx0));
const auto tx1 = make_tx({COutPoint{tx0->GetHash(), 0}}, /*num_outputs=*/1);
-AddToMempool(pool, entry.Fee(med_fee).FromTx(tx1));
+TryAddToMempool(pool, entry.Fee(med_fee).FromTx(tx1));

// Create a low-feerate parent tx2 and high-feerate child tx3 (cpfp)
const auto tx2 = make_tx({COutPoint{m_coinbase_txns[1]->GetHash(), 0}}, /*num_outputs=*/2);
-AddToMempool(pool, entry.Fee(low_fee).FromTx(tx2));
+TryAddToMempool(pool, entry.Fee(low_fee).FromTx(tx2));
const auto tx3 = make_tx({COutPoint{tx2->GetHash(), 0}}, /*num_outputs=*/1);
-AddToMempool(pool, entry.Fee(high_fee).FromTx(tx3));
+TryAddToMempool(pool, entry.Fee(high_fee).FromTx(tx3));

// Create a parent tx4 and child tx5 where both have low fees
const auto tx4 = make_tx({COutPoint{m_coinbase_txns[2]->GetHash(), 0}}, /*num_outputs=*/2);
-AddToMempool(pool, entry.Fee(low_fee).FromTx(tx4));
+TryAddToMempool(pool, entry.Fee(low_fee).FromTx(tx4));
const auto tx5 = make_tx({COutPoint{tx4->GetHash(), 0}}, /*num_outputs=*/1);
-AddToMempool(pool, entry.Fee(low_fee).FromTx(tx5));
+TryAddToMempool(pool, entry.Fee(low_fee).FromTx(tx5));
const CAmount tx5_delta{CENT/100};
// Make tx5's modified fee much higher than its base fee. This should cause it to pass
// the fee-related checks despite being low-feerate.
@@ -137,9 +137,9 @@ BOOST_FIXTURE_TEST_CASE(miniminer_1p1c, TestChain100Setup)

// Create a high-feerate parent tx6, low-feerate child tx7
const auto tx6 = make_tx({COutPoint{m_coinbase_txns[3]->GetHash(), 0}}, /*num_outputs=*/2);
-AddToMempool(pool, entry.Fee(high_fee).FromTx(tx6));
+TryAddToMempool(pool, entry.Fee(high_fee).FromTx(tx6));
const auto tx7 = make_tx({COutPoint{tx6->GetHash(), 0}}, /*num_outputs=*/1);
-AddToMempool(pool, entry.Fee(low_fee).FromTx(tx7));
+TryAddToMempool(pool, entry.Fee(low_fee).FromTx(tx7));

std::vector<COutPoint> all_unspent_outpoints({
COutPoint{tx0->GetHash(), 1},
@@ -405,23 +405,23 @@ BOOST_FIXTURE_TEST_CASE(miniminer_overlap, TestChain100Setup)

// Create 3 parents of different feerates, and 1 child spending outputs from all 3 parents.
const auto tx0 = make_tx({COutPoint{m_coinbase_txns[0]->GetHash(), 0}}, /*num_outputs=*/2);
-AddToMempool(pool, entry.Fee(low_fee).FromTx(tx0));
+TryAddToMempool(pool, entry.Fee(low_fee).FromTx(tx0));
const auto tx1 = make_tx({COutPoint{m_coinbase_txns[1]->GetHash(), 0}}, /*num_outputs=*/2);
-AddToMempool(pool, entry.Fee(med_fee).FromTx(tx1));
+TryAddToMempool(pool, entry.Fee(med_fee).FromTx(tx1));
const auto tx2 = make_tx({COutPoint{m_coinbase_txns[2]->GetHash(), 0}}, /*num_outputs=*/2);
-AddToMempool(pool, entry.Fee(high_fee).FromTx(tx2));
+TryAddToMempool(pool, entry.Fee(high_fee).FromTx(tx2));
const auto tx3 = make_tx({COutPoint{tx0->GetHash(), 0}, COutPoint{tx1->GetHash(), 0}, COutPoint{tx2->GetHash(), 0}}, /*num_outputs=*/3);
-AddToMempool(pool, entry.Fee(high_fee).FromTx(tx3));
+TryAddToMempool(pool, entry.Fee(high_fee).FromTx(tx3));

// Create 1 grandparent and 1 parent, then 2 children.
const auto tx4 = make_tx({COutPoint{m_coinbase_txns[3]->GetHash(), 0}}, /*num_outputs=*/2);
-AddToMempool(pool, entry.Fee(high_fee).FromTx(tx4));
+TryAddToMempool(pool, entry.Fee(high_fee).FromTx(tx4));
const auto tx5 = make_tx({COutPoint{tx4->GetHash(), 0}}, /*num_outputs=*/3);
-AddToMempool(pool, entry.Fee(low_fee).FromTx(tx5));
+TryAddToMempool(pool, entry.Fee(low_fee).FromTx(tx5));
const auto tx6 = make_tx({COutPoint{tx5->GetHash(), 0}}, /*num_outputs=*/2);
-AddToMempool(pool, entry.Fee(med_fee).FromTx(tx6));
+TryAddToMempool(pool, entry.Fee(med_fee).FromTx(tx6));
const auto tx7 = make_tx({COutPoint{tx5->GetHash(), 1}}, /*num_outputs=*/2);
-AddToMempool(pool, entry.Fee(high_fee).FromTx(tx7));
+TryAddToMempool(pool, entry.Fee(high_fee).FromTx(tx7));

std::vector<CTransactionRef> all_transactions{tx0, tx1, tx2, tx3, tx4, tx5, tx6, tx7};
std::vector<int64_t> tx_vsizes;
@@ -608,7 +608,7 @@ BOOST_FIXTURE_TEST_CASE(calculate_cluster, TestChain100Setup)
lasttx = m_coinbase_txns[cluster_count];
for (auto i{0}; i < 50; ++i) {
const auto tx = make_tx({COutPoint{lasttx->GetHash(), 0}}, /*num_outputs=*/1);
-AddToMempool(pool, entry.Fee(CENT).FromTx(tx));
+TryAddToMempool(pool, entry.Fee(CENT).FromTx(tx));
chain_txids.push_back(tx->GetHash());
lasttx = tx;
}
@@ -622,7 +622,7 @@ BOOST_FIXTURE_TEST_CASE(calculate_cluster, TestChain100Setup)

// GatherClusters stops at 500 transactions.
const auto tx_501 = make_tx({COutPoint{lasttx->GetHash(), 0}}, /*num_outputs=*/1);
-AddToMempool(pool, entry.Fee(CENT).FromTx(tx_501));
+TryAddToMempool(pool, entry.Fee(CENT).FromTx(tx_501));
const auto cluster_501 = pool.GatherClusters(last_txs);
BOOST_CHECK_EQUAL(cluster_501.size(), 0);

@@ -635,12 +635,12 @@ BOOST_FIXTURE_TEST_CASE(calculate_cluster, TestChain100Setup)
std::vector<Txid> zigzag_txids;
for (auto p{0}; p < 32; ++p) {
const auto txp = make_tx({COutPoint{Txid::FromUint256(GetRandHash()), 0}}, /*num_outputs=*/2);
-AddToMempool(pool, entry.Fee(CENT).FromTx(txp));
+TryAddToMempool(pool, entry.Fee(CENT).FromTx(txp));
zigzag_txids.push_back(txp->GetHash());
}
for (auto c{0}; c < 31; ++c) {
const auto txc = make_tx({COutPoint{zigzag_txids[c], 1}, COutPoint{zigzag_txids[c+1], 0}}, /*num_outputs=*/1);
-AddToMempool(pool, entry.Fee(CENT).FromTx(txc));
+TryAddToMempool(pool, entry.Fee(CENT).FromTx(txc));
zigzag_txids.push_back(txc->GetHash());
}
const auto vec_iters_zigzag = pool.GetIterVec(zigzag_txids);
@@ -63,9 +63,9 @@ BOOST_AUTO_TEST_CASE(BlockPolicyEstimates)
tx.vin[0].prevout.n = 10000*blocknum+100*j+k; // make transaction unique
{
LOCK2(cs_main, mpool.cs);
-AddToMempool(mpool, entry.Fee(feeV[j]).Time(Now<NodeSeconds>()).Height(blocknum).FromTx(tx));
+TryAddToMempool(mpool, entry.Fee(feeV[j]).Time(Now<NodeSeconds>()).Height(blocknum).FromTx(tx));
// Since TransactionAddedToMempool callbacks are generated in ATMP,
-// not AddToMempool, we cheat and create one manually here
+// not TryAddToMempool, we cheat and create one manually here
const int64_t virtual_size = GetVirtualTransactionSize(*MakeTransactionRef(tx));
const NewMempoolTransactionInfo tx_info{NewMempoolTransactionInfo(MakeTransactionRef(tx),
feeV[j],
@@ -163,9 +163,9 @@ BOOST_AUTO_TEST_CASE(BlockPolicyEstimates)
tx.vin[0].prevout.n = 10000*blocknum+100*j+k;
{
LOCK2(cs_main, mpool.cs);
-AddToMempool(mpool, entry.Fee(feeV[j]).Time(Now<NodeSeconds>()).Height(blocknum).FromTx(tx));
+TryAddToMempool(mpool, entry.Fee(feeV[j]).Time(Now<NodeSeconds>()).Height(blocknum).FromTx(tx));
// Since TransactionAddedToMempool callbacks are generated in ATMP,
-// not AddToMempool, we cheat and create one manually here
+// not TryAddToMempool, we cheat and create one manually here
const int64_t virtual_size = GetVirtualTransactionSize(*MakeTransactionRef(tx));
const NewMempoolTransactionInfo tx_info{NewMempoolTransactionInfo(MakeTransactionRef(tx),
feeV[j],
@@ -226,9 +226,9 @@ BOOST_AUTO_TEST_CASE(BlockPolicyEstimates)
tx.vin[0].prevout.n = 10000*blocknum+100*j+k;
{
LOCK2(cs_main, mpool.cs);
-AddToMempool(mpool, entry.Fee(feeV[j]).Time(Now<NodeSeconds>()).Height(blocknum).FromTx(tx));
+TryAddToMempool(mpool, entry.Fee(feeV[j]).Time(Now<NodeSeconds>()).Height(blocknum).FromTx(tx));
// Since TransactionAddedToMempool callbacks are generated in ATMP,
-// not AddToMempool, we cheat and create one manually here
+// not TryAddToMempool, we cheat and create one manually here
const int64_t virtual_size = GetVirtualTransactionSize(*MakeTransactionRef(tx));
const NewMempoolTransactionInfo tx_info{NewMempoolTransactionInfo(MakeTransactionRef(tx),
feeV[j],
@@ -47,7 +47,7 @@ static CTransactionRef add_descendants(const CTransactionRef& tx, int32_t num_de
auto tx_to_spend = tx;
for (int32_t i{0}; i < num_descendants; ++i) {
auto next_tx = make_tx(/*inputs=*/{tx_to_spend}, /*output_values=*/{(50 - i) * CENT});
-AddToMempool(pool, entry.FromTx(next_tx));
+TryAddToMempool(pool, entry.FromTx(next_tx));
BOOST_CHECK(pool.GetIter(next_tx->GetHash()).has_value());
tx_to_spend = next_tx;
}
@@ -67,40 +67,40 @@ BOOST_FIXTURE_TEST_CASE(rbf_helper_functions, TestChain100Setup)

// Create a parent tx1 and child tx2 with normal fees:
const auto tx1 = make_tx(/*inputs=*/ {m_coinbase_txns[0]}, /*output_values=*/ {10 * COIN});
-AddToMempool(pool, entry.Fee(normal_fee).FromTx(tx1));
+TryAddToMempool(pool, entry.Fee(normal_fee).FromTx(tx1));
const auto tx2 = make_tx(/*inputs=*/ {tx1}, /*output_values=*/ {995 * CENT});
-AddToMempool(pool, entry.Fee(normal_fee).FromTx(tx2));
+TryAddToMempool(pool, entry.Fee(normal_fee).FromTx(tx2));

// Create a low-feerate parent tx3 and high-feerate child tx4 (cpfp)
const auto tx3 = make_tx(/*inputs=*/ {m_coinbase_txns[1]}, /*output_values=*/ {1099 * CENT});
-AddToMempool(pool, entry.Fee(low_fee).FromTx(tx3));
+TryAddToMempool(pool, entry.Fee(low_fee).FromTx(tx3));
const auto tx4 = make_tx(/*inputs=*/ {tx3}, /*output_values=*/ {999 * CENT});
-AddToMempool(pool, entry.Fee(high_fee).FromTx(tx4));
+TryAddToMempool(pool, entry.Fee(high_fee).FromTx(tx4));

// Create a parent tx5 and child tx6 where both have very low fees
const auto tx5 = make_tx(/*inputs=*/ {m_coinbase_txns[2]}, /*output_values=*/ {1099 * CENT});
-AddToMempool(pool, entry.Fee(low_fee).FromTx(tx5));
+TryAddToMempool(pool, entry.Fee(low_fee).FromTx(tx5));
const auto tx6 = make_tx(/*inputs=*/ {tx5}, /*output_values=*/ {1098 * CENT});
-AddToMempool(pool, entry.Fee(low_fee).FromTx(tx6));
+TryAddToMempool(pool, entry.Fee(low_fee).FromTx(tx6));
// Make tx6's modified fee much higher than its base fee. This should cause it to pass
// the fee-related checks despite being low-feerate.
pool.PrioritiseTransaction(tx6->GetHash(), 1 * COIN);

// Two independent high-feerate transactions, tx7 and tx8
const auto tx7 = make_tx(/*inputs=*/ {m_coinbase_txns[3]}, /*output_values=*/ {999 * CENT});
-AddToMempool(pool, entry.Fee(high_fee).FromTx(tx7));
+TryAddToMempool(pool, entry.Fee(high_fee).FromTx(tx7));
const auto tx8 = make_tx(/*inputs=*/ {m_coinbase_txns[4]}, /*output_values=*/ {999 * CENT});
-AddToMempool(pool, entry.Fee(high_fee).FromTx(tx8));
+TryAddToMempool(pool, entry.Fee(high_fee).FromTx(tx8));

// Will make these two parents of single child
const auto tx11 = make_tx(/*inputs=*/ {m_coinbase_txns[7]}, /*output_values=*/ {995 * CENT});
-AddToMempool(pool, entry.Fee(normal_fee).FromTx(tx11));
+TryAddToMempool(pool, entry.Fee(normal_fee).FromTx(tx11));
const auto tx12 = make_tx(/*inputs=*/ {m_coinbase_txns[8]}, /*output_values=*/ {995 * CENT});
-AddToMempool(pool, entry.Fee(normal_fee).FromTx(tx12));
+TryAddToMempool(pool, entry.Fee(normal_fee).FromTx(tx12));

// Will make two children of this single parent
const auto tx13 = make_tx(/*inputs=*/ {m_coinbase_txns[9]}, /*output_values=*/ {995 * CENT, 995 * CENT});
-AddToMempool(pool, entry.Fee(normal_fee).FromTx(tx13));
+TryAddToMempool(pool, entry.Fee(normal_fee).FromTx(tx13));

const auto entry1_normal = pool.GetIter(tx1->GetHash()).value();
const auto entry2_normal = pool.GetIter(tx2->GetHash()).value();
@@ -179,8 +179,8 @@ BOOST_FIXTURE_TEST_CASE(rbf_conflicts_calculator, TestChain100Setup)

const auto parent_tx_1 = make_tx(/*inputs=*/ {m_coinbase_txns[0]}, /*output_values=*/ output_values);
const auto parent_tx_2 = make_tx(/*inputs=*/ {m_coinbase_txns[1]}, /*output_values=*/ output_values);
-AddToMempool(pool, entry.Fee(normal_fee).FromTx(parent_tx_1));
-AddToMempool(pool, entry.Fee(normal_fee).FromTx(parent_tx_2));
+TryAddToMempool(pool, entry.Fee(normal_fee).FromTx(parent_tx_1));
+TryAddToMempool(pool, entry.Fee(normal_fee).FromTx(parent_tx_2));

std::vector<CTransactionRef> direct_children;

@@ -190,7 +190,7 @@ BOOST_FIXTURE_TEST_CASE(rbf_conflicts_calculator, TestChain100Setup)
auto pretx = make_tx(/*inputs=*/ {parent_tx}, /*output_values=*/ {995 * CENT});
CMutableTransaction tx(*pretx);
tx.vin[0].prevout.n = i;
-AddToMempool(pool, entry.Fee(normal_fee).FromTx(tx));
+TryAddToMempool(pool, entry.Fee(normal_fee).FromTx(tx));
BOOST_CHECK(pool.GetIter(tx.GetHash()).has_value());
direct_children.push_back(MakeTransactionRef(tx));
}
@@ -257,9 +257,9 @@ BOOST_FIXTURE_TEST_CASE(improves_feerate, TestChain100Setup)

// low feerate parent with normal feerate child
const auto tx1 = make_tx(/*inputs=*/ {m_coinbase_txns[0], m_coinbase_txns[1]}, /*output_values=*/ {10 * COIN});
-AddToMempool(pool, entry.Fee(low_fee).FromTx(tx1));
+TryAddToMempool(pool, entry.Fee(low_fee).FromTx(tx1));
const auto tx2 = make_tx(/*inputs=*/ {tx1}, /*output_values=*/ {995 * CENT});
-AddToMempool(pool, entry.Fee(normal_fee).FromTx(tx2));
+TryAddToMempool(pool, entry.Fee(normal_fee).FromTx(tx2));

const auto entry1 = pool.GetIter(tx1->GetHash()).value();
const auto tx1_fee = entry1->GetModifiedFee();
@@ -323,7 +323,7 @@ BOOST_FIXTURE_TEST_CASE(improves_feerate, TestChain100Setup)

// Adding a grandchild makes the cluster size 3, which is also calculable
const auto tx5 = make_tx(/*inputs=*/ {tx2}, /*output_values=*/ {995 * CENT});
-AddToMempool(pool, entry.Fee(normal_fee).FromTx(tx5));
+TryAddToMempool(pool, entry.Fee(normal_fee).FromTx(tx5));
const auto entry5 = pool.GetIter(tx5->GetHash()).value();

changeset = pool.GetChangeSet();
@@ -348,7 +348,7 @@ BOOST_FIXTURE_TEST_CASE(calc_feerate_diagram_rbf, TestChain100Setup)
// low -> high -> medium fee transactions that would result in two chunks together since they
// are all same size
const auto low_tx = make_tx(/*inputs=*/ {m_coinbase_txns[0]}, /*output_values=*/ {10 * COIN});
-AddToMempool(pool, entry.Fee(low_fee).FromTx(low_tx));
+TryAddToMempool(pool, entry.Fee(low_fee).FromTx(low_tx));

const auto entry_low = pool.GetIter(low_tx->GetHash()).value();
const auto low_size = entry_low->GetAdjustedWeight();
@@ -384,7 +384,7 @@ BOOST_FIXTURE_TEST_CASE(calc_feerate_diagram_rbf, TestChain100Setup)

// Add a second transaction to the cluster that will make a single chunk, to be evicted in the RBF
const auto high_tx = make_tx(/*inputs=*/ {low_tx}, /*output_values=*/ {995 * CENT});
-AddToMempool(pool, entry.Fee(high_fee).FromTx(high_tx));
+TryAddToMempool(pool, entry.Fee(high_fee).FromTx(high_tx));
const auto entry_high = pool.GetIter(high_tx->GetHash()).value();
const auto high_size = entry_high->GetAdjustedWeight();

@@ -416,12 +416,12 @@ BOOST_FIXTURE_TEST_CASE(calc_feerate_diagram_rbf, TestChain100Setup)

// Make a size 2 cluster that is itself two chunks; evict both txns
const auto high_tx_2 = make_tx(/*inputs=*/ {m_coinbase_txns[1]}, /*output_values=*/ {10 * COIN});
-AddToMempool(pool, entry.Fee(high_fee).FromTx(high_tx_2));
+TryAddToMempool(pool, entry.Fee(high_fee).FromTx(high_tx_2));
const auto entry_high_2 = pool.GetIter(high_tx_2->GetHash()).value();
const auto high_size_2 = entry_high_2->GetAdjustedWeight();

const auto low_tx_2 = make_tx(/*inputs=*/ {high_tx_2}, /*output_values=*/ {9 * COIN});
-AddToMempool(pool, entry.Fee(low_fee).FromTx(low_tx_2));
+TryAddToMempool(pool, entry.Fee(low_fee).FromTx(low_tx_2));
const auto entry_low_2 = pool.GetIter(low_tx_2->GetHash()).value();
const auto low_size_2 = entry_low_2->GetAdjustedWeight();

@@ -440,15 +440,15 @@ BOOST_FIXTURE_TEST_CASE(calc_feerate_diagram_rbf, TestChain100Setup)

// You can have more than two direct conflicts
const auto conflict_1 = make_tx(/*inputs=*/ {m_coinbase_txns[2]}, /*output_values=*/ {10 * COIN});
-AddToMempool(pool, entry.Fee(low_fee).FromTx(conflict_1));
+TryAddToMempool(pool, entry.Fee(low_fee).FromTx(conflict_1));
const auto conflict_1_entry = pool.GetIter(conflict_1->GetHash()).value();

const auto conflict_2 = make_tx(/*inputs=*/ {m_coinbase_txns[3]}, /*output_values=*/ {10 * COIN});
-AddToMempool(pool, entry.Fee(low_fee).FromTx(conflict_2));
+TryAddToMempool(pool, entry.Fee(low_fee).FromTx(conflict_2));
const auto conflict_2_entry = pool.GetIter(conflict_2->GetHash()).value();

const auto conflict_3 = make_tx(/*inputs=*/ {m_coinbase_txns[4]}, /*output_values=*/ {10 * COIN});
-AddToMempool(pool, entry.Fee(low_fee).FromTx(conflict_3));
+TryAddToMempool(pool, entry.Fee(low_fee).FromTx(conflict_3));
const auto conflict_3_entry = pool.GetIter(conflict_3->GetHash()).value();

{
@@ -465,7 +465,7 @@ BOOST_FIXTURE_TEST_CASE(calc_feerate_diagram_rbf, TestChain100Setup)

// Add a child transaction to conflict_1 and make it cluster size 2, two chunks due to same feerate
const auto conflict_1_child = make_tx(/*inputs=*/{conflict_1}, /*output_values=*/ {995 * CENT});
-AddToMempool(pool, entry.Fee(low_fee).FromTx(conflict_1_child));
+TryAddToMempool(pool, entry.Fee(low_fee).FromTx(conflict_1_child));
const auto conflict_1_child_entry = pool.GetIter(conflict_1_child->GetHash()).value();

{
@@ -239,7 +239,7 @@ BOOST_AUTO_TEST_CASE(package_validation_tests)
}
// A single, giant transaction submitted through ProcessNewPackage fails on single tx policy.
CTransactionRef giant_ptx = create_placeholder_tx(999, 999);
-BOOST_CHECK(GetVirtualTransactionSize(*giant_ptx) > DEFAULT_ANCESTOR_SIZE_LIMIT_KVB * 1000);
+BOOST_CHECK(GetVirtualTransactionSize(*giant_ptx) > DEFAULT_CLUSTER_SIZE_LIMIT_KVB * 1000);
Package package_single_giant{giant_ptx};
auto result_single_large = ProcessNewPackage(m_node.chainman->ActiveChainstate(), *m_node.mempool, package_single_giant, /*test_accept=*/true, /*client_maxfeerate=*/{});
if (auto err_single_large{CheckPackageMempoolAcceptResult(package_single_giant, result_single_large, /*expect_valid=*/false, nullptr)}) {
@@ -225,7 +225,7 @@ BOOST_FIXTURE_TEST_CASE(ephemeral_tests, RegTestingSetup)
     BOOST_CHECK_EQUAL(child_wtxid, Wtxid());

     // Add first grandparent to mempool and fetch entry
-    AddToMempool(pool, entry.FromTx(grandparent_tx_1));
+    TryAddToMempool(pool, entry.FromTx(grandparent_tx_1));

     // Ignores ancestors that aren't direct parents
     BOOST_CHECK(CheckEphemeralSpends({child_no_dust}, dustrelay, pool, child_state, child_wtxid));
@@ -248,7 +248,7 @@ BOOST_FIXTURE_TEST_CASE(ephemeral_tests, RegTestingSetup)
     BOOST_CHECK_EQUAL(child_wtxid, Wtxid());

     // Add second grandparent to mempool
-    AddToMempool(pool, entry.FromTx(grandparent_tx_2));
+    TryAddToMempool(pool, entry.FromTx(grandparent_tx_2));

     // Only spends single dust out of two direct parents
     BOOST_CHECK(!CheckEphemeralSpends({dust_non_spend_both_parents}, dustrelay, pool, child_state, child_wtxid));
@@ -263,7 +263,7 @@ BOOST_FIXTURE_TEST_CASE(ephemeral_tests, RegTestingSetup)
     BOOST_CHECK_EQUAL(child_wtxid, Wtxid());

     // Now add dusty parent to mempool
-    AddToMempool(pool, entry.FromTx(parent_with_dust));
+    TryAddToMempool(pool, entry.FromTx(parent_with_dust));

     // Passes dust checks even with non-parent ancestors
     BOOST_CHECK(CheckEphemeralSpends({child_no_dust}, dustrelay, pool, child_state, child_wtxid));
@@ -281,9 +281,9 @@ BOOST_FIXTURE_TEST_CASE(version3_tests, RegTestingSetup)
     std::vector<CTxMemPoolEntry::CTxMemPoolEntryRef> empty_parents;

     auto mempool_tx_v3 = make_tx(random_outpoints(1), /*version=*/3);
-    AddToMempool(pool, entry.FromTx(mempool_tx_v3));
+    TryAddToMempool(pool, entry.FromTx(mempool_tx_v3));
     auto mempool_tx_v2 = make_tx(random_outpoints(1), /*version=*/2);
-    AddToMempool(pool, entry.FromTx(mempool_tx_v2));
+    TryAddToMempool(pool, entry.FromTx(mempool_tx_v2));

     // Cannot spend from an unconfirmed TRUC transaction unless this tx is also TRUC.
     {
@@ -389,7 +389,7 @@ BOOST_FIXTURE_TEST_CASE(version3_tests, RegTestingSetup)
     package_multi_parents.emplace_back(mempool_tx_v3);
     for (size_t i{0}; i < 2; ++i) {
         auto mempool_tx = make_tx(random_outpoints(i + 1), /*version=*/3);
-        AddToMempool(pool, entry.FromTx(mempool_tx));
+        TryAddToMempool(pool, entry.FromTx(mempool_tx));
         mempool_outpoints.emplace_back(mempool_tx->GetHash(), 0);
         package_multi_parents.emplace_back(mempool_tx);
     }
@@ -414,7 +414,7 @@ BOOST_FIXTURE_TEST_CASE(version3_tests, RegTestingSetup)
     auto last_outpoint{random_outpoints(1)[0]};
     for (size_t i{0}; i < 2; ++i) {
         auto mempool_tx = make_tx({last_outpoint}, /*version=*/3);
-        AddToMempool(pool, entry.FromTx(mempool_tx));
+        TryAddToMempool(pool, entry.FromTx(mempool_tx));
         last_outpoint = COutPoint{mempool_tx->GetHash(), 0};
         package_multi_gen.emplace_back(mempool_tx);
         if (i == 1) middle_tx = mempool_tx;
@@ -501,7 +501,7 @@ BOOST_FIXTURE_TEST_CASE(version3_tests, RegTestingSetup)
     BOOST_CHECK(GetTransactionWeight(*tx_mempool_v3_child) <= TRUC_CHILD_MAX_VSIZE * WITNESS_SCALE_FACTOR);
     auto parents{pool.GetParents(entry.FromTx(tx_mempool_v3_child))};
     BOOST_CHECK(SingleTRUCChecks(pool, tx_mempool_v3_child, parents, empty_conflicts_set, GetVirtualTransactionSize(*tx_mempool_v3_child)) == std::nullopt);
-    AddToMempool(pool, entry.FromTx(tx_mempool_v3_child));
+    TryAddToMempool(pool, entry.FromTx(tx_mempool_v3_child));

     Package package_v3_1p1c{mempool_tx_v3, tx_mempool_v3_child};
     BOOST_CHECK(PackageTRUCChecks(pool, tx_mempool_v3_child, GetVirtualTransactionSize(*tx_mempool_v3_child), package_v3_1p1c, empty_parents) == std::nullopt);
@@ -528,7 +528,7 @@ BOOST_FIXTURE_TEST_CASE(version3_tests, RegTestingSetup)
     expected_error_str);

     // Configuration where parent already has 2 other children in mempool (no sibling eviction allowed). This may happen as the result of a reorg.
-    AddToMempool(pool, entry.FromTx(tx_v3_child2));
+    TryAddToMempool(pool, entry.FromTx(tx_v3_child2));
     auto tx_v3_child3 = make_tx({COutPoint{mempool_tx_v3->GetHash(), 24}}, /*version=*/3);
     auto entry_mempool_parent = pool.GetIter(mempool_tx_v3->GetHash()).value();
     BOOST_CHECK_EQUAL(pool.GetDescendantCount(entry_mempool_parent), 3);
@@ -547,9 +547,9 @@ BOOST_FIXTURE_TEST_CASE(version3_tests, RegTestingSetup)
     auto tx_mempool_nibling = make_tx({COutPoint{tx_mempool_sibling->GetHash(), 0}}, /*version=*/3);
     auto tx_to_submit = make_tx({COutPoint{tx_mempool_grandparent->GetHash(), 1}}, /*version=*/3);

-    AddToMempool(pool, entry.FromTx(tx_mempool_grandparent));
-    AddToMempool(pool, entry.FromTx(tx_mempool_sibling));
-    AddToMempool(pool, entry.FromTx(tx_mempool_nibling));
+    TryAddToMempool(pool, entry.FromTx(tx_mempool_grandparent));
+    TryAddToMempool(pool, entry.FromTx(tx_mempool_sibling));
+    TryAddToMempool(pool, entry.FromTx(tx_mempool_nibling));

     auto parents_3gen{pool.GetParents(entry.FromTx(tx_to_submit))};
     const auto expected_error_str{strprintf("tx %s (wtxid=%s) would exceed descendant count limit",
@@ -212,7 +212,7 @@ void CheckMempoolTRUCInvariants(const CTxMemPool& tx_pool)
     }
 }

-void AddToMempool(CTxMemPool& tx_pool, const CTxMemPoolEntry& entry)
+void TryAddToMempool(CTxMemPool& tx_pool, const CTxMemPoolEntry& entry)
 {
     LOCK2(cs_main, tx_pool.cs);
     auto changeset = tx_pool.GetChangeSet();
@@ -65,7 +65,7 @@ void CheckMempoolTRUCInvariants(const CTxMemPool& tx_pool);

 /** One-line wrapper for creating a mempool changeset with a single transaction
  * and applying it if the policy limits are respected. */
-void AddToMempool(CTxMemPool& tx_pool, const CTxMemPoolEntry& entry);
+void TryAddToMempool(CTxMemPool& tx_pool, const CTxMemPoolEntry& entry);

 /** Mock the mempool minimum feerate by adding a transaction and calling TrimToSize(0),
  * simulating the mempool "reaching capacity" and evicting by descendant feerate. Note that
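The AddToMempool → TryAddToMempool rename above reflects that the helper builds a single-entry changeset and only applies it when the policy (cluster) limits pass. A minimal Python sketch of that try-add pattern, using an invented `MiniMempool` model (none of these names are the actual C++ API):

```python
# Illustrative sketch of a "try add" mempool helper: stage a single-entry
# change, check a cluster-count limit, and apply only when the limit passes.
class MiniMempool:
    def __init__(self, max_cluster_count):
        self.max_cluster_count = max_cluster_count
        self.txs = {}          # txid -> set of in-mempool parent txids
        self.cluster_of = {}   # txid -> frozenset of txids in its cluster

    def _cluster_with(self, txid, parents):
        # The new tx joins the union of its parents' clusters.
        members = {txid}
        for p in parents:
            if p in self.txs:
                members |= set(self.cluster_of[p])
        return frozenset(members)

    def try_add(self, txid, parents=()):
        """Return True and apply the change iff cluster limits hold."""
        cluster = self._cluster_with(txid, parents)
        if len(cluster) > self.max_cluster_count:
            return False  # staged change is discarded, mempool unchanged
        self.txs[txid] = set(parents)
        for member in cluster:
            self.cluster_of[member] = cluster
        return True
```

With `max_cluster_count=2`, adding `a`, then `b` spending `a`, succeeds; a third chained tx would grow the cluster to three and is rejected while leaving the pool untouched.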
@@ -58,13 +58,12 @@ std::vector<CTxMemPoolEntry::CTxMemPoolEntryRef> CTxMemPool::GetChildren(const C
 {
     LOCK(cs);
     std::vector<CTxMemPoolEntry::CTxMemPoolEntryRef> ret;
-    setEntries children;
+    WITH_FRESH_EPOCH(m_epoch);
     auto iter = mapNextTx.lower_bound(COutPoint(entry.GetTx().GetHash(), 0));
     for (; iter != mapNextTx.end() && iter->first->hash == entry.GetTx().GetHash(); ++iter) {
-        children.insert(iter->second);
-    }
-    for (const auto& child : children) {
-        ret.emplace_back(*child);
+        if (!visited(iter->second)) {
+            ret.emplace_back(*(iter->second));
+        }
     }
     return ret;
 }
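The rewritten loop above relies on epoch-based `visited()` marking to deduplicate children in a single pass, instead of first collecting them into a sorted set. A hedged sketch of the epoch pattern (names here are illustrative, not the C++ API):

```python
# Epoch-marking visitor: deduplicates items in one pass. Starting a fresh
# epoch invalidates all previous marks in O(1), with no container to clear.
class EpochVisitor:
    def __init__(self):
        self.epoch = 0
        self.marks = {}  # item -> last epoch in which it was visited

    def fresh_epoch(self):
        self.epoch += 1

    def visited(self, item):
        """Return True if already seen this epoch; otherwise mark it."""
        if self.marks.get(item) == self.epoch:
            return True
        self.marks[item] = self.epoch
        return False

def children_of(txid, spends, visitor):
    # spends: (funding_txid, spender_txid) pairs, like mapNextTx entries.
    # A child spending two outputs of txid appears twice but is kept once.
    visitor.fresh_epoch()
    return [spender for parent, spender in spends
            if parent == txid and not visitor.visited(spender)]
```

Because each lookup is a dictionary probe, the dedup costs O(1) per entry rather than the O(log n) insert of an ordered set.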
@@ -198,7 +197,7 @@ void CTxMemPool::Apply(ChangeSet* changeset)
     AssertLockHeld(cs);
     m_txgraph->CommitStaging();

-    RemoveStaged(changeset->m_to_remove, false, MemPoolRemovalReason::REPLACED);
+    RemoveStaged(changeset->m_to_remove, MemPoolRemovalReason::REPLACED);

     for (size_t i=0; i<changeset->m_entry_vec.size(); ++i) {
         auto tx_entry = changeset->m_entry_vec[i];
@@ -308,35 +307,41 @@ CTxMemPool::txiter CTxMemPool::CalculateDescendants(const CTxMemPoolEntry& entry
     return mapTx.iterator_to(entry);
 }

+void CTxMemPool::removeRecursive(CTxMemPool::txiter to_remove, MemPoolRemovalReason reason)
+{
+    AssertLockHeld(cs);
+    Assume(!m_have_changeset);
+    auto descendants = m_txgraph->GetDescendants(*to_remove, TxGraph::Level::MAIN);
+    for (auto tx: descendants) {
+        removeUnchecked(mapTx.iterator_to(static_cast<const CTxMemPoolEntry&>(*tx)), reason);
+    }
+}
+
 void CTxMemPool::removeRecursive(const CTransaction &origTx, MemPoolRemovalReason reason)
 {
     // Remove transaction from memory pool
     AssertLockHeld(cs);
     Assume(!m_have_changeset);
-    setEntries txToRemove;
-    txiter origit = mapTx.find(origTx.GetHash());
-    if (origit != mapTx.end()) {
-        txToRemove.insert(origit);
-    } else {
-        // When recursively removing but origTx isn't in the mempool
-        // be sure to remove any children that are in the pool. This can
-        // happen during chain re-orgs if origTx isn't re-accepted into
-        // the mempool for any reason.
-        for (unsigned int i = 0; i < origTx.vout.size(); i++) {
-            auto it = mapNextTx.find(COutPoint(origTx.GetHash(), i));
-            if (it == mapNextTx.end())
-                continue;
-            txiter nextit = it->second;
-            assert(nextit != mapTx.end());
-            txToRemove.insert(nextit);
-        }
-        setEntries setAllRemoves;
-        for (txiter it : txToRemove) {
-            CalculateDescendants(it, setAllRemoves);
-        }
-        RemoveStaged(setAllRemoves, false, reason);
+    txiter origit = mapTx.find(origTx.GetHash());
+    if (origit != mapTx.end()) {
+        removeRecursive(origit, reason);
+    } else {
+        // When recursively removing but origTx isn't in the mempool
+        // be sure to remove any descendants that are in the pool. This can
+        // happen during chain re-orgs if origTx isn't re-accepted into
+        // the mempool for any reason.
+        auto iter = mapNextTx.lower_bound(COutPoint(origTx.GetHash(), 0));
+        std::vector<const TxGraph::Ref*> to_remove;
+        while (iter != mapNextTx.end() && iter->first->hash == origTx.GetHash()) {
+            to_remove.emplace_back(&*(iter->second));
+            ++iter;
+        }
+        auto all_removes = m_txgraph->GetDescendantsUnion(to_remove, TxGraph::Level::MAIN);
+        for (auto ref : all_removes) {
+            auto tx = mapTx.iterator_to(static_cast<const CTxMemPoolEntry&>(*ref));
+            removeUnchecked(tx, reason);
+        }
     }
 }

 void CTxMemPool::removeForReorg(CChain& chain, std::function<bool(txiter)> check_final_and_mature)
@@ -346,15 +351,19 @@ void CTxMemPool::removeForReorg(CChain& chain, std::function<bool(txiter)> check
     AssertLockHeld(::cs_main);
     Assume(!m_have_changeset);

-    setEntries txToRemove;
-    for (indexed_transaction_set::const_iterator it = mapTx.begin(); it != mapTx.end(); it++) {
-        if (check_final_and_mature(it)) txToRemove.insert(it);
+    std::vector<const TxGraph::Ref*> to_remove;
+    for (txiter it = mapTx.begin(); it != mapTx.end(); it++) {
+        if (check_final_and_mature(it)) {
+            to_remove.emplace_back(&*it);
+        }
     }
-    setEntries setAllRemoves;
-    for (txiter it : txToRemove) {
-        CalculateDescendants(it, setAllRemoves);
+
+    auto all_to_remove = m_txgraph->GetDescendantsUnion(to_remove, TxGraph::Level::MAIN);
+
+    for (auto ref : all_to_remove) {
+        auto it = mapTx.iterator_to(static_cast<const CTxMemPoolEntry&>(*ref));
+        removeUnchecked(it, MemPoolRemovalReason::REORG);
     }
-    RemoveStaged(setAllRemoves, false, MemPoolRemovalReason::REORG);
     for (indexed_transaction_set::const_iterator it = mapTx.begin(); it != mapTx.end(); it++) {
         assert(TestLockPointValidity(chain, it->GetLockPoints()));
     }
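GetDescendantsUnion collapses the old per-transaction CalculateDescendants loop into a single query for the union of all descendants of a set of roots. A sketch of the equivalent set computation over a parent→children map (illustrative only, not the TxGraph API):

```python
from collections import deque

def descendants_union(roots, children):
    """Union of each root and all of its descendants in a DAG.

    children maps txid -> iterable of direct child txids. Each node is
    visited once even when reachable from several roots.
    """
    seen = set()
    todo = deque(roots)
    while todo:
        tx = todo.popleft()
        if tx in seen:
            continue
        seen.add(tx)
        todo.extend(children.get(tx, ()))
    return seen
```

Computing the union once avoids re-walking shared descendants for every root, which is what the repeated CalculateDescendants calls previously did.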
@@ -372,7 +381,7 @@ void CTxMemPool::removeConflicts(const CTransaction &tx)
             if (Assume(txConflict.GetHash() != tx.GetHash()))
             {
                 ClearPrioritisation(txConflict.GetHash());
-                removeRecursive(txConflict, MemPoolRemovalReason::CONFLICT);
+                removeRecursive(it->second, MemPoolRemovalReason::CONFLICT);
             }
         }
     }
@@ -389,10 +398,8 @@ void CTxMemPool::removeForBlock(const std::vector<CTransactionRef>& vtx, unsigne
     for (const auto& tx : vtx) {
         txiter it = mapTx.find(tx->GetHash());
         if (it != mapTx.end()) {
-            setEntries stage;
-            stage.insert(it);
             txs_removed_for_block.emplace_back(*it);
-            RemoveStaged(stage, true, MemPoolRemovalReason::BLOCK);
+            removeUnchecked(it, MemPoolRemovalReason::BLOCK);
         }
         removeConflicts(*tx);
         ClearPrioritisation(tx->GetHash());
@@ -418,6 +425,8 @@ void CTxMemPool::check(const CCoinsViewCache& active_coins_tip, int64_t spendhei

     uint64_t checkTotal = 0;
     CAmount check_total_fee{0};
+    CAmount check_total_modified_fee{0};
+    int64_t check_total_adjusted_weight{0};
     uint64_t innerUsage = 0;

     assert(!m_txgraph->IsOversized(TxGraph::Level::MAIN));
@@ -429,13 +438,28 @@ void CTxMemPool::check(const CCoinsViewCache& active_coins_tip, int64_t spendhei

     // Number of chunks is bounded by number of transactions.
     const auto diagram{GetFeerateDiagram()};
-    Assume(diagram.size() <= score_with_topo.size() + 1);
+    assert(diagram.size() <= score_with_topo.size() + 1);
+    assert(diagram.size() >= 1);

     std::optional<Wtxid> last_wtxid = std::nullopt;
+    auto diagram_iter = diagram.cbegin();

     for (const auto& it : score_with_topo) {
+        // GetSortedScoreWithTopology() contains the same chunks as the feerate
+        // diagram. We do not know where the chunk boundaries are, but we can
+        // check that there are points at which they match the cumulative fee
+        // and weight.
+        // The feerate diagram should never get behind the current transaction
+        // size totals.
+        assert(diagram_iter->size >= check_total_adjusted_weight);
+        if (diagram_iter->fee == check_total_modified_fee &&
+            diagram_iter->size == check_total_adjusted_weight) {
+            ++diagram_iter;
+        }
         checkTotal += it->GetTxSize();
+        check_total_adjusted_weight += it->GetAdjustedWeight();
         check_total_fee += it->GetFee();
+        check_total_modified_fee += it->GetModifiedFee();
         innerUsage += it->DynamicMemoryUsage();
         const CTransaction& tx = it->GetTx();
@@ -502,8 +526,13 @@ void CTxMemPool::check(const CCoinsViewCache& active_coins_tip, int64_t spendhei
         assert(it2 != mapTx.end());
     }

+    ++diagram_iter;
+    assert(diagram_iter == diagram.cend());
+
     assert(totalTxSize == checkTotal);
     assert(m_total_fee == check_total_fee);
+    assert(diagram.back().fee == check_total_modified_fee);
+    assert(diagram.back().size == check_total_adjusted_weight);
     assert(innerUsage == cachedInnerUsage);
 }
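The new check() logic verifies that the feerate diagram's cumulative (fee, size) points line up with running totals over the chunk-ordered transactions: the diagram may never fall behind the totals, and every diagram point must be hit exactly by some prefix. The same invariant in a standalone sketch, with plain tuples standing in for the C++ FeeFrac points (illustrative only):

```python
def check_diagram(txs, diagram):
    """Sanity-check a feerate diagram against per-tx totals.

    txs: (fee, weight) per transaction, in chunk order.
    diagram: cumulative (fee, size) points, starting at (0, 0), one
    additional point per chunk boundary.
    """
    total_fee = total_size = 0
    i = 0
    for fee, weight in txs:
        # The diagram should never get behind the running size total.
        assert diagram[i][1] >= total_size
        # At a chunk boundary the totals match a diagram point exactly.
        if diagram[i] == (total_fee, total_size):
            i += 1
        total_fee += fee
        total_size += weight
    # The final diagram point is the grand total, and nothing is left over.
    assert diagram[i] == (total_fee, total_size)
    assert i == len(diagram) - 1
```

With two transactions forming one chunk the diagram is [(0,0), (grand total)]; with two chunks an intermediate point appears after the first transaction.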
@@ -591,7 +620,7 @@ void CTxMemPool::PrioritiseTransaction(const Txid& hash, const CAmount& nFeeDelt
     if (it != mapTx.end()) {
         // PrioritiseTransaction calls stack on previous ones. Set the new
         // transaction fee to be current modified fee + feedelta.
-        mapTx.modify(it, [&nFeeDelta](CTxMemPoolEntry& e) { e.UpdateModifiedFee(nFeeDelta); });
+        it->UpdateModifiedFee(nFeeDelta);
         m_txgraph->SetTransactionFee(*it, it->GetModifiedFee());
         ++nTransactionsUpdated;
     }
@@ -744,7 +773,7 @@ void CTxMemPool::RemoveUnbroadcastTx(const Txid& txid, const bool unchecked) {
     }
 }

-void CTxMemPool::RemoveStaged(setEntries &stage, bool updateDescendants, MemPoolRemovalReason reason) {
+void CTxMemPool::RemoveStaged(setEntries &stage, MemPoolRemovalReason reason) {
     AssertLockHeld(cs);
     for (txiter it : stage) {
         removeUnchecked(it, reason);
@@ -754,7 +783,7 @@ void CTxMemPool::RemoveStaged(setEntries &stage, bool updateDescendants, MemPool
 bool CTxMemPool::CheckPolicyLimits(const CTransactionRef& tx)
 {
     LOCK(cs);
-    // Use ChangeSet interface to check whether the chain
+    // Use ChangeSet interface to check whether the cluster count
     // limits would be violated. Note that the changeset will be destroyed
     // when it goes out of scope.
     auto changeset = GetChangeSet();
@@ -776,7 +805,7 @@ int CTxMemPool::Expire(std::chrono::seconds time)
     for (txiter removeit : toremove) {
         CalculateDescendants(removeit, stage);
     }
-    RemoveStaged(stage, false, MemPoolRemovalReason::EXPIRY);
+    RemoveStaged(stage, MemPoolRemovalReason::EXPIRY);
     return stage.size();
 }
@@ -928,6 +957,9 @@ std::vector<CTxMemPool::txiter> CTxMemPool::GatherClusters(const std::vector<Txi
     for (auto txid : txids) {
         auto it = mapTx.find(txid);
         if (it != mapTx.end()) {
+            // Note that TxGraph::GetCluster will return results in graph
+            // order, which is deterministic (as long as we are not modifying
+            // the graph).
            auto cluster = m_txgraph->GetCluster(*it, TxGraph::Level::MAIN);
            if (unique_cluster_representatives.insert(static_cast<const CTxMemPoolEntry*>(&(**cluster.begin()))).second) {
                for (auto tx : cluster) {
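The comment added above explains why deduplicating by the cluster's first element is correct: GetCluster returns a deterministic ordering, so any two txids in the same cluster produce the same representative. A sketch of that dedup scheme (illustrative names, not the C++ API):

```python
def gather_clusters(txids, cluster_of):
    """Collect whole clusters for a list of txids, each cluster once.

    cluster_of maps txid -> ordered tuple of all txids in its cluster;
    the same tuple (same order) is returned for every member, so the
    first element is a stable cluster representative.
    """
    seen_representatives = set()
    out = []
    for txid in txids:
        cluster = cluster_of.get(txid)
        if cluster is None:
            continue  # txid not in the mempool
        rep = cluster[0]  # deterministic first element
        if rep not in seen_representatives:
            seen_representatives.add(rep)
            out.extend(cluster)
    return out
```

If the ordering were not deterministic, two members of one cluster could yield different "first" elements and the cluster would be emitted twice.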
@@ -968,7 +1000,7 @@ CTxMemPool::ChangeSet::TxHandle CTxMemPool::ChangeSet::StageAddition(const CTran
     TxGraph::Ref ref(m_pool->m_txgraph->AddTransaction(FeePerWeight(fee, GetSigOpsAdjustedWeight(GetTransactionWeight(*tx), sigops_cost, ::nBytesPerSigOp))));
     auto newit = m_to_add.emplace(std::move(ref), tx, fee, time, entry_height, entry_sequence, spends_coinbase, sigops_cost, lp).first;
     if (delta) {
-        m_to_add.modify(newit, [&delta](CTxMemPoolEntry& e) { e.UpdateModifiedFee(delta); });
+        newit->UpdateModifiedFee(delta);
         m_pool->m_txgraph->SetTransactionFee(*newit, newit->GetModifiedFee());
     }
@@ -281,9 +281,6 @@ public:
     std::vector<CTxMemPoolEntry::CTxMemPoolEntryRef> GetParents(const CTxMemPoolEntry &entry) const;

 private:
-    typedef std::map<txiter, setEntries, CompareIteratorByHash> cacheMap;
-
-
     std::vector<indexed_transaction_set::const_iterator> GetSortedScoreWithTopology() const EXCLUSIVE_LOCKS_REQUIRED(cs);

     /**
@@ -323,6 +320,11 @@ public:
      */
     void check(const CCoinsViewCache& active_coins_tip, int64_t spendheight) const EXCLUSIVE_LOCKS_REQUIRED(::cs_main);

+    /**
+     * Remove a transaction from the mempool along with any descendants.
+     * If the transaction is not already in the mempool, find any descendants
+     * and remove them.
+     */
     void removeRecursive(const CTransaction& tx, MemPoolRemovalReason reason) EXCLUSIVE_LOCKS_REQUIRED(cs);
     /** After reorg, filter the entries that would no longer be valid in the next block, and update
      * the entries' cached LockPoints if needed. The mempool does not have any knowledge of
@@ -581,10 +583,11 @@ private:
      * If a transaction is in this set, then all in-mempool descendants must
      * also be in the set, unless this transaction is being removed for being
      * in a block.
-     * Set updateDescendants to true when removing a tx that was in a block, so
-     * that any in-mempool descendants have their ancestor state updated.
      */
-    void RemoveStaged(setEntries& stage, bool updateDescendants, MemPoolRemovalReason reason) EXCLUSIVE_LOCKS_REQUIRED(cs);
+    void RemoveStaged(setEntries& stage, MemPoolRemovalReason reason) EXCLUSIVE_LOCKS_REQUIRED(cs);
+
+    /* Helper for the public removeRecursive() */
+    void removeRecursive(txiter to_remove, MemPoolRemovalReason reason) EXCLUSIVE_LOCKS_REQUIRED(cs);

     /** Before calling removeUnchecked for a given transaction,
      * UpdateForRemoveFromMempool must be called on the entire (dependent) set
@@ -173,7 +173,7 @@ class MempoolClusterTest(BitcoinTestFramework):
         target_vsize_per_tx = int((max_cluster_size_vbytes - 500) / num_txns)
         cluster_submitted = self.add_chain_cluster(node, num_txns, target_vsize_per_tx)

-        vsize_remaining = max_cluster_size_vbytes - weight_to_vsize(node.getmempoolcluster(cluster_submitted[0]["txid"])['weight'])
+        vsize_remaining = max_cluster_size_vbytes - weight_to_vsize(node.getmempoolcluster(cluster_submitted[0]["txid"])['clusterweight'])
         self.log.info("Test that cluster size limit is enforced")
         self.test_limit_enforcement(cluster_submitted, target_vsize_per_tx=vsize_remaining + 4)

@@ -298,12 +298,87 @@ class MempoolClusterTest(BitcoinTestFramework):
         assert tx_replacer_sponsor["txid"] in node.getrawmempool()
         assert_equal(node.getmempoolcluster(tx_replacer["txid"])['txcount'], 2)

+    @cleanup
+    def test_getmempoolcluster(self):
+        node = self.nodes[0]
+
+        self.log.info("Testing getmempoolcluster")
+
+        assert_equal(node.getrawmempool(), [])
+
+        # Not in-mempool
+        not_mempool_tx = self.wallet.create_self_transfer()
+        assert_raises_rpc_error(-5, "Transaction not in mempool", node.getmempoolcluster, not_mempool_tx["txid"])
+
+        # Test that chunks are being recomputed properly
+
+        # One chunk with one tx
+        first_chunk_tx = self.wallet.send_self_transfer(from_node=node)
+        first_chunk_info = node.getmempoolcluster(first_chunk_tx["txid"])
+        assert_equal(first_chunk_info, {'clusterweight': first_chunk_tx["tx"].get_weight(), 'txcount': 1, 'chunks': [{'chunkfee': first_chunk_tx["fee"], 'chunkweight': first_chunk_tx["tx"].get_weight(), 'txs': [first_chunk_tx["txid"]]}]})
+
+        # Another unconnected tx, nothing should change
+        self.wallet.send_self_transfer(from_node=node)
+        first_chunk_info = node.getmempoolcluster(first_chunk_tx["txid"])
+        assert_equal(first_chunk_info, {'clusterweight': first_chunk_tx["tx"].get_weight(), 'txcount': 1, 'chunks': [{'chunkfee': first_chunk_tx["fee"], 'chunkweight': first_chunk_tx["tx"].get_weight(), 'txs': [first_chunk_tx["txid"]]}]})
+
+        # Second connected tx, makes one chunk still with high enough fee
+        second_chunk_tx = self.wallet.send_self_transfer(from_node=node, utxo_to_spend=first_chunk_tx["new_utxo"], fee_rate=Decimal("0.01"))
+        first_chunk_info = node.getmempoolcluster(first_chunk_tx["txid"])
+        # output is same across same cluster transactions
+        assert_equal(first_chunk_info, node.getmempoolcluster(second_chunk_tx["txid"]))
+        chunkweight = first_chunk_tx["tx"].get_weight() + second_chunk_tx["tx"].get_weight()
+        chunkfee = first_chunk_tx["fee"] + second_chunk_tx["fee"]
+        assert_equal(first_chunk_info, {'clusterweight': chunkweight, 'txcount': 2, 'chunks': [{'chunkfee': chunkfee, 'chunkweight': chunkweight, 'txs': [first_chunk_tx["txid"], second_chunk_tx["txid"]]}]})
+
+        # Third connected tx, makes one chunk still with high enough fee
+        third_chunk_tx = self.wallet.send_self_transfer(from_node=node, utxo_to_spend=second_chunk_tx["new_utxo"], fee_rate=Decimal("0.1"))
+        first_chunk_info = node.getmempoolcluster(first_chunk_tx["txid"])
+        # output is same across same cluster transactions
+        assert_equal(first_chunk_info, node.getmempoolcluster(third_chunk_tx["txid"]))
+        chunkweight = first_chunk_tx["tx"].get_weight() + second_chunk_tx["tx"].get_weight() + third_chunk_tx["tx"].get_weight()
+        chunkfee = first_chunk_tx["fee"] + second_chunk_tx["fee"] + third_chunk_tx["fee"]
+        assert_equal(first_chunk_info, {'clusterweight': chunkweight, 'txcount': 3, 'chunks': [{'chunkfee': chunkfee, 'chunkweight': chunkweight, 'txs': [first_chunk_tx["txid"], second_chunk_tx["txid"], third_chunk_tx["txid"]]}]})
+
+        # Now test single cluster with each tx being its own chunk
+
+        # One chunk with one tx
+        first_chunk_tx = self.wallet.send_self_transfer(from_node=node)
+        first_chunk_info = node.getmempoolcluster(first_chunk_tx["txid"])
+        assert_equal(first_chunk_info, {'clusterweight': first_chunk_tx["tx"].get_weight(), 'txcount': 1, 'chunks': [{'chunkfee': first_chunk_tx["fee"], 'chunkweight': first_chunk_tx["tx"].get_weight(), 'txs': [first_chunk_tx["txid"]]}]})
+
+        # Second connected tx, lower fee
+        second_chunk_tx = self.wallet.send_self_transfer(from_node=node, utxo_to_spend=first_chunk_tx["new_utxo"], fee_rate=Decimal("0.000002"))
+        first_chunk_info = node.getmempoolcluster(first_chunk_tx["txid"])
+        # output is same across same cluster transactions
+        assert_equal(first_chunk_info, node.getmempoolcluster(second_chunk_tx["txid"]))
+        first_chunkweight = first_chunk_tx["tx"].get_weight()
+        second_chunkweight = second_chunk_tx["tx"].get_weight()
+        assert_equal(first_chunk_info, {'clusterweight': first_chunkweight + second_chunkweight, 'txcount': 2, 'chunks': [{'chunkfee': first_chunk_tx["fee"], 'chunkweight': first_chunkweight, 'txs': [first_chunk_tx["txid"]]}, {'chunkfee': second_chunk_tx["fee"], 'chunkweight': second_chunkweight, 'txs': [second_chunk_tx["txid"]]}]})
+
+        # Third connected tx, even lower fee
+        third_chunk_tx = self.wallet.send_self_transfer(from_node=node, utxo_to_spend=second_chunk_tx["new_utxo"], fee_rate=Decimal("0.000001"))
+        first_chunk_info = node.getmempoolcluster(first_chunk_tx["txid"])
+        # output is same across same cluster transactions
+        assert_equal(first_chunk_info, node.getmempoolcluster(third_chunk_tx["txid"]))
+        first_chunkweight = first_chunk_tx["tx"].get_weight()
+        second_chunkweight = second_chunk_tx["tx"].get_weight()
+        third_chunkweight = third_chunk_tx["tx"].get_weight()
+        chunkfee = first_chunk_tx["fee"] + second_chunk_tx["fee"] + third_chunk_tx["fee"]
+        assert_equal(first_chunk_info, {'clusterweight': first_chunkweight + second_chunkweight + third_chunkweight, 'txcount': 3, 'chunks': [{'chunkfee': first_chunk_tx["fee"], 'chunkweight': first_chunkweight, 'txs': [first_chunk_tx["txid"]]}, {'chunkfee': second_chunk_tx["fee"], 'chunkweight': second_chunkweight, 'txs': [second_chunk_tx["txid"]]}, {'chunkfee': third_chunk_tx["fee"], 'chunkweight': third_chunkweight, 'txs': [third_chunk_tx["txid"]]}]})
+
+        # If we prioritise the last transaction it can join the second transaction's chunk.
+        node.prioritisetransaction(third_chunk_tx["txid"], 0, int(third_chunk_tx["fee"]*COIN) + 1)
+        first_chunk_info = node.getmempoolcluster(first_chunk_tx["txid"])
+        assert_equal(first_chunk_info, {'clusterweight': first_chunkweight + second_chunkweight + third_chunkweight, 'txcount': 3, 'chunks': [{'chunkfee': first_chunk_tx["fee"], 'chunkweight': first_chunkweight, 'txs': [first_chunk_tx["txid"]]}, {'chunkfee': second_chunk_tx["fee"] + 2*third_chunk_tx["fee"] + Decimal("0.00000001"), 'chunkweight': second_chunkweight + third_chunkweight, 'txs': [second_chunk_tx["txid"], third_chunk_tx["txid"]]}]})
+
     def run_test(self):
         node = self.nodes[0]
         self.wallet = MiniWallet(node)
         self.generate(self.wallet, 400)

+        self.test_getmempoolcluster()
+
         self.test_cluster_limit_rbf(DEFAULT_CLUSTER_LIMIT)

         for cluster_size_limit_kvb in [10, 20, 33, 100, DEFAULT_CLUSTER_SIZE_LIMIT_KVB]:
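The chunk patterns exercised by the test above — one merged chunk when a child pays a higher feerate, separate chunks when fees fall, and two chunks at equal feerates — follow from chunking a fixed linearization: a group is merged into its predecessor whenever it pays a strictly higher feerate, leaving chunk feerates non-increasing. A sketch for a simple chain of transactions (illustrative, not the actual txgraph implementation; the strict inequality matches the "two chunks due to same feerate" behavior seen earlier in this diff):

```python
from fractions import Fraction

def chunk_linearization(txs):
    """Chunk a chain of txs given as (fee, weight) in linearization order.

    Returns chunks as (total_fee, total_weight) tuples whose feerates
    are non-increasing from first chunk to last.
    """
    chunks = []  # list of [fee, weight]
    for fee, weight in txs:
        chunks.append([fee, weight])
        # Merge while the newest chunk pays a strictly higher feerate
        # than the chunk before it (exact rational comparison).
        while len(chunks) > 1 and (
                Fraction(chunks[-1][0], chunks[-1][1])
                > Fraction(chunks[-2][0], chunks[-2][1])):
            fee2, w2 = chunks.pop()
            chunks[-1][0] += fee2
            chunks[-1][1] += w2
    return [tuple(c) for c in chunks]
```

A high-fee child pulls its low-fee parent into one chunk (CPFP), a low-fee child stays in its own trailing chunk, and an equal-feerate child keeps a separate chunk under this strict-inequality convention.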
@@ -164,7 +164,7 @@ class MempoolCoinbaseTest(BitcoinTestFramework):
         assert_raises_rpc_error(-26, "non-final", self.nodes[0].sendrawtransaction, timelock_tx)

         self.log.info("Broadcast and mine spend_2 and spend_3")
-        wallet.sendrawtransaction(from_node=self.nodes[0], tx_hex=spend_2['hex'])
+        spend_2_id = wallet.sendrawtransaction(from_node=self.nodes[0], tx_hex=spend_2['hex'])
         wallet.sendrawtransaction(from_node=self.nodes[0], tx_hex=spend_3['hex'])
         self.log.info("Generate a block")
         self.generate(self.nodes[0], 1)
@@ -172,7 +172,7 @@ class MempoolCoinbaseTest(BitcoinTestFramework):
         assert_raises_rpc_error(-26, 'non-final', self.nodes[0].sendrawtransaction, timelock_tx)

         self.log.info("Create spend_2_1 and spend_3_1")
-        spend_2_1 = wallet.create_self_transfer(utxo_to_spend=spend_2["new_utxo"])
+        spend_2_1 = wallet.create_self_transfer(utxo_to_spend=spend_2["new_utxo"], version=1)
         spend_3_1 = wallet.create_self_transfer(utxo_to_spend=spend_3["new_utxo"])

         self.log.info("Broadcast and mine spend_3_1")
@@ -211,6 +211,24 @@ class MempoolCoinbaseTest(BitcoinTestFramework):
         self.log.info("spend_3_1 has been re-orged out of the chain and is back in the mempool")
         assert_equal(set(self.nodes[0].getrawmempool()), {spend_1_id, spend_2_1_id, spend_3_1_id})

+        self.log.info("Reorg out enough blocks to get spend_2 back in the mempool, along with its child")
+
+        while (spend_2_id not in self.nodes[0].getrawmempool()):
+            b = self.nodes[0].getbestblockhash()
+            for node in self.nodes:
+                node.invalidateblock(b)
+
+        assert(spend_2_id in self.nodes[0].getrawmempool())
+        assert(spend_2_1_id in self.nodes[0].getrawmempool())
+
+        # Chain 10 more transactions off of spend_2_1
+        self.log.info("Give spend_2 some more descendants by creating a chain of 10 transactions spending from it")
+        parent_utxo = spend_2_1["new_utxo"]
+        for i in range(10):
+            tx = wallet.create_self_transfer(utxo_to_spend=parent_utxo, version=1)
+            self.nodes[0].sendrawtransaction(tx['hex'])
+            parent_utxo = tx["new_utxo"]
+
         self.log.info("Use invalidateblock to re-org back and make all those coinbase spends immature/invalid")
         b = self.nodes[0].getblockhash(first_block + 100)