de4242f47476769d0a7f3e79e8297ed2dd60d9a4 refactor: Use reference for chain_start in HeadersSyncState (Daniela Brozzoni)
e37555e5401f9fca39ada0bd153e46b2c7ebd095 refactor: Use initializer list in CompressedHeader (Daniela Brozzoni)
0488bdfefe92b2c9a924be9244c91fe472462aab refactor: Remove unused parameter in ReportHeadersPresync (Daniela Brozzoni)
256246a9fa5b05141c93aeeb359394b9c7a80e49 refactor: Remove redundant parameter from CheckHeadersPoW (Daniela Brozzoni)
ca0243e3a6d77d2b218749f1ba113b81444e3f4a refactor: Remove useless CBlock::GetBlockHeader (Pieter Wuille)
45686522224598bed9923e60daad109094d7bc29 refactor: Use std::span in HasValidProofOfWork (Daniela Brozzoni)
4066bfe561a45f61a3c9bf24bec7f600ddcc7467 refactor: Compute work from headers without CBlockIndex (Daniela Brozzoni)
0bf6139e194f355d121bb2aea74715d1c4099598 p2p: Avoid an IsAncestorOfBestHeaderOrTip call (Pieter Wuille)
Pull request description:
This is a partial* revival of #25968
It contains a list of mostly unrelated simplifications and optimizations to the code merged in #25717:
- Avoid an IsAncestorOfBestHeaderOrTip call: Just don't call this function when it won't have any effect.
- Compute work from headers without CBlockIndex: Avoid the need to construct a CBlockIndex object just to compute work for a header, when its nBits value suffices for that. Also use some Spans where possible.
- Remove useless CBlock::GetBlockHeader: There is no need for a function to convert a CBlock to a CBlockHeader, as it's a child class of it.
It also contains the following code cleanups, which were suggested by reviewers in #25968:
- Remove redundant parameter from CheckHeadersPoW: No need to pass consensusParams, as CheckHeadersPoW already has access to m_chainparams.GetConsensus()
- Remove unused parameter in ReportHeadersPresync
- Use initializer list in CompressedHeader, also make GetFullHeader const
- Use reference for chain_start in HeadersSyncState: chain_start can never be null, so it's better to pass it as a reference rather than a raw pointer (see the sketch at the end of this description)
*I decided to leave out three commits that were in #25968 (4e7ac7b94d04e056e9994ed1c8273c52b7b23931, ab52fb4e95aa2732d1a1391331ea01362e035984, 7f1cf440ca1a9c86085716745ca64d3ac26957c0), since they're a bit more involved, and I'm a new contributor. If this PR gets merged, I'll comment under #25968 to note that these three commits are still up for grabs :)
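A minimal, self-contained sketch of the last cleanup in the list above (the names are illustrative only, not the PR's actual HeadersSyncState signature): taking a never-null parameter by reference moves the invariant into the type and removes the null check.

```cpp
struct ChainStart { int height{0}; }; // stand-in for the real chain-start index

// Before: a raw pointer admits nullptr, so the callee has to guard against it.
int StartHeightFromPointer(const ChainStart* chain_start)
{
    return chain_start ? chain_start->height : 0;
}

// After: a reference rules out null at the interface, so no guard is needed.
int StartHeightFromReference(const ChainStart& chain_start)
{
    return chain_start.height;
}
```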
ACKs for top commit:
l0rinc:
ACK de4242f47476769d0a7f3e79e8297ed2dd60d9a4
polespinasa:
re-ACK de4242f47476769d0a7f3e79e8297ed2dd60d9a4
sipa:
ACK de4242f47476769d0a7f3e79e8297ed2dd60d9a4
achow101:
ACK de4242f47476769d0a7f3e79e8297ed2dd60d9a4
hodlinator:
re-ACK de4242f47476769d0a7f3e79e8297ed2dd60d9a4
Tree-SHA512: 1de4f3ce0854a196712505f2b52ccb985856f5133769552bf37375225ea8664a3a7a6a9578c4fd461e935cd94a7cbbb08f15751a1da7651f8962c866146d9d4b
Avoid the need to construct a CBlockIndex object just to compute work for a header,
when its nBits value suffices for that.
Co-Authored-By: Pieter Wuille <pieter@wuille.net>
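A hedged sketch of the computation this enables (the function name is illustrative; it mirrors what GetBlockProof() does for a CBlockIndex, but takes the header's nBits directly):

```cpp
#include <arith_uint256.h>

#include <cstdint>

// Work contributed by a header with compact target nBits, without building a CBlockIndex.
arith_uint256 WorkFromCompactTarget(uint32_t nBits)
{
    bool fNegative, fOverflow;
    arith_uint256 bnTarget;
    bnTarget.SetCompact(nBits, &fNegative, &fOverflow);
    if (fNegative || fOverflow || bnTarget == 0) return 0;
    // Work is 2^256 / (target + 1); 2^256 does not fit in 256 bits, so compute
    // it as (~target / (target + 1)) + 1, which is equivalent.
    return (~bnTarget / (bnTarget + 1)) + 1;
}
```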
By using the pskip pointer, which regularly allows jumping back much further
than pprev, the forking point between two CBlockIndex entries can be found
much faster.
A simulation shows that no more than 136 steps are needed to jump anywhere
within the first 2^20 block heights, and on average 65 jumps for uniform
forking points around that height.
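A hedged sketch of the kind of fork-point search this speeds up (the helper name is illustrative; it mirrors Bitcoin Core's LastCommonAncestor, where GetAncestor() internally follows pskip):

```cpp
const CBlockIndex* ForkPoint(const CBlockIndex* pa, const CBlockIndex* pb)
{
    // Bring both entries to the same height using the skiplist-based GetAncestor().
    if (pa->nHeight > pb->nHeight) {
        pa = pa->GetAncestor(pb->nHeight);
    } else if (pb->nHeight > pa->nHeight) {
        pb = pb->GetAncestor(pa->nHeight);
    }
    // Then walk back in lockstep until the two branches meet.
    while (pa != pb && pa && pb) {
        pa = pa->pprev;
        pb = pb->pprev;
    }
    return pa; // equal to pb here: the forking point (or nullptr if none)
}
```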
Also removed CChain::GetLocator() and replaced its call
with GetLocator() which uses LocatorEntries instead.
Co-authored-by: ryanofsky <ryan@ofsky.org>
Co-authored-by: l0rinc <l0rinc@users.noreply.github.com>
This introduces an insignificant performance penalty, as it means locator
construction needs to use the skiplist-based CBlockIndex::GetAncestor()
function instead of the lookup-based CChain, but avoids the need for
callers to have access to a relevant CChain object.
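A hedged sketch of locator construction from a CBlockIndex alone, in the spirit of LocatorEntries() (the name and reserve size are illustrative): each step back uses the skiplist-based GetAncestor() instead of a CChain lookup.

```cpp
#include <algorithm>
#include <vector>

std::vector<uint256> LocatorHashes(const CBlockIndex* index)
{
    std::vector<uint256> have;
    if (!index) return have;
    have.reserve(32);
    int step = 1;
    while (index) {
        have.emplace_back(index->GetBlockHash());
        if (index->nHeight == 0) break; // genesis reached
        // Exponentially larger steps back, without skipping past the genesis block.
        const int height = std::max(index->nHeight - step, 0);
        index = index->GetAncestor(height); // skiplist jump, no CChain needed
        if (have.size() > 10) step *= 2;
    }
    return have;
}
```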
As suggested in #14711, pass height to CChain::FindEarliestAtLeast to
simplify the Chain interface by combining findFirstBlockWithTime and
findFirstBlockWithTimeAndHeight into one.
Extend findearliestatleast_edge_test accordingly.
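A hedged sketch of the combined lookup (close to, but not necessarily identical to, the final implementation): a single call with both a time bound and a height bound, where height = 0 recovers the old time-only behaviour.

```cpp
CBlockIndex* CChain::FindEarliestAtLeast(int64_t nTime, int height) const
{
    const std::pair<int64_t, int> blockparams = std::make_pair(nTime, height);
    const auto lower = std::lower_bound(vChain.begin(), vChain.end(), blockparams,
        [](const CBlockIndex* block, const std::pair<int64_t, int>& params) {
            // "Too early" if either the running-max time or the height is below the target.
            return block->GetBlockTimeMax() < params.first || block->nHeight < params.second;
        });
    return (lower == vChain.end() ? nullptr : *lower);
}
```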
b4058ed Fix code constness in CBlockIndex::GetAncestor() overloads (Dan Raviv)
Pull request description:
Make the non-const overload of `CBlockIndex::GetAncestor()` reuse the const overload implementation instead of the other way around. This way, the constness of the const overload implementation is guaranteed. The other way around, it was possible to implement the non-const overload in a way which mutates the object, and since that implementation would be called even for const objects (due to the reuse), we would get undefined behavior.
Tree-SHA512: 545a8639bc52502ea06dbd924e8fabec6274fa69b43e3b8966a7987ce4dae6fb2498f623730fde7ed0e47478941c7f8baa2e76a12018134ff7c14c0dfa25ba3a
A few "a->an" and "an->a".
"Shows, if the supplied default SOCKS5 proxy" -> "Shows if the supplied default SOCKS5 proxy". Change made on 3 occurrences.
"without fully understanding the ramification of a command" -> "without fully understanding the ramifications of a command".
Removed duplicate words such as "the the".
In spite of the name, FindLatestBefore used std::lower_bound to try
to find the earliest block with an nTime greater than or equal to
the requested value. But lower_bound uses bisection and requires
the input to be ordered with respect to the comparison operation.
Block times are not well ordered.
When the data is not sufficiently ordered, lower_bound's behaviour is
undefined, which is certainly not good.
(One could construct an implementation which would loop forever...)
To resolve the issue this commit introduces a maximum-so-far to the
block indexes and searches that.
For clarity the function is renamed to reflect what it actually does.
An issue that remains is that there is no grace period in importmulti:
If an address is created at time T and a send is immediately broadcast
and included by a miner with a slow clock, there may not yet have been
any block with a time of at least T.
The normal rescan has a grace period of 7200 seconds, but importmulti
does not.
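A minimal, self-contained sketch of the maximum-so-far described above (the struct is a stand-in for the relevant CBlockIndex fields): the running maximum is monotone along the chain, so bisecting over it with lower_bound is well defined even though raw block times are not ordered.

```cpp
#include <algorithm>
#include <cstdint>

struct BlockIndex {
    BlockIndex* pprev{nullptr};
    int64_t nTime{0};    // header timestamp; not monotone along the chain
    int64_t nTimeMax{0}; // max timestamp up to and including this block; monotone
};

// Called when appending a new index entry; keeps nTimeMax monotone so that a
// lower_bound search over nTimeMax (instead of nTime) is valid.
void SetTimeMax(BlockIndex& index)
{
    index.nTimeMax = index.pprev ? std::max(index.pprev->nTimeMax, index.nTime)
                                 : index.nTime;
}
```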
This replaces using inv messages to announce new blocks when a peer requests
(via the new "sendheaders" message) that blocks be announced with headers
instead of invs.
Since headers-first was introduced, peers send getheaders messages in response
to an inv, which requires generating a block locator that is large compared to
the size of the header being requested, and requires an extra round-trip before
a reorg can be relayed. Save time by tracking headers that a peer is likely to
know about, and send a headers chain that would connect to a peer's known
headers, unless the chain would be too big, in which case we revert to sending
an inv instead.
Based off of @sipa's commit to announce all blocks in a reorg via inv,
which has been squashed into this commit.
Rebased-by: Pieter Wuille
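A self-contained, purely illustrative sketch of the announcement decision described above (the types and the limit are stand-ins, not the actual net_processing logic):

```cpp
#include <cstddef>
#include <vector>

struct Header { /* 80-byte block header fields elided */ };

enum class Announce { Headers, Inv };

constexpr std::size_t MAX_HEADERS_TO_ANNOUNCE{8}; // illustrative limit

// Announce with a "headers" message only when the peer asked for it ("sendheaders"),
// the new headers connect to headers the peer is known to have, and the chain of
// new headers is short enough; otherwise fall back to the old inv announcement.
Announce ChooseAnnouncement(bool peer_prefers_headers,
                            bool connects_to_peer_known_headers,
                            const std::vector<Header>& to_announce)
{
    if (peer_prefers_headers &&
        connects_to_peer_known_headers &&
        to_announce.size() <= MAX_HEADERS_TO_ANNOUNCE) {
        return Announce::Headers;
    }
    return Announce::Inv; // peer will respond with getheaders as before
}
```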
Instead of only checking height to decide whether to disable script checks,
actually check whether a block is an ancestor of a checkpoint, up to which
headers have been validated. This means that we don't have to prevent
accepting a side branch anymore: it will be safe, just slower to do.
We still need to prevent being fed a multitude of low-difficulty headers
filling up our memory. The mechanism for that is unchanged for now: once
a checkpoint is reached with headers, no headers chain branching off before
that point is allowed anymore.
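A hedged sketch of the ancestor check described above (the helper name is illustrative), reusing the skiplist-based GetAncestor():

```cpp
// Script checks may only be skipped when the block is an actual ancestor of the
// checkpoint up to which headers have been validated, not merely below its height.
bool IsUnderValidatedCheckpoint(const CBlockIndex* block, const CBlockIndex* checkpoint)
{
    return checkpoint != nullptr &&
           checkpoint->GetAncestor(block->nHeight) == block;
}
```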