- Use getmocktime instead of time.sleep to prevent races
- Enforce success of all mocktime operations
- Do not increment mocktime by 0 seconds
- Use fresh incoming peers for each test
- Perform test teardown explicitly
- Remove TXID_RELAY_DELAY as it no longer exists in the dogecoind
implementation
- Add the inflight throttling test to replace the removed inflight
limit test
- Add an expiry fallback test
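A minimal sketch of the kind of helper this implies, assuming the
setmocktime RPC and the getmocktime call referenced above (the
helper name and structure are illustrative, not the actual test
code):

    def bump_mocktime(node, seconds, current_mocktime):
        # Advance the node's mock clock instead of sleeping in real
        # time, and verify the operation took effect.
        assert seconds > 0, "never bump mocktime by 0 seconds"
        new_time = current_mocktime + seconds
        node.setmocktime(new_time)
        assert node.getmocktime() == new_time
        return new_time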
Cherry-picked from: a5a4e4b6
Github Pull Request: #3577
Maintaining up to 100000 INVs per peer is excessive. A Dogecoin
Core node will never send more than 7 INVs per second.
Original Author: Pieter Wuille <pieter@wuille.net>
Cherry-picked from: 8eb52142
Github Pull Request: #3577
The major changes are:
* Announcements from outbound (and whitelisted) peers are now
always preferred over those from inbound peers. This used to be
the case only for the first request (by delaying the first
request from inbound peers), with just a bias afterwards. The 2s
delay for requests from inbound peers still exists, but after
that, if viable outbound peers remain for any given transaction,
they will always be tried first.
* No more hard cap of 100 in-flight transactions per peer, as
there is less need for it (memory usage is linear in the number
of announcements but independent of the number in flight, and
CPU usage isn't affected by it). Furthermore, if only one peer
announces a transaction and it already has over 100 requests in
flight and requestable, we still want to request the transaction
from that peer. The cap is replaced with an additional 2s delay
(possibly combined with the existing 2s delay for inbound
connections).
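A rough restatement of the resulting delay logic (the constant
names below are placeholders that mirror the description above;
this is not the actual C++ implementation):

    INBOUND_DELAY = 2           # extra seconds for inbound peers
    OVERLOADED_DELAY = 2        # extra seconds once a peer is loaded
    IN_FLIGHT_SOFT_LIMIT = 100  # throttling threshold, not a hard cap

    def request_delay(is_inbound_peer, requests_in_flight):
        delay = 0
        if is_inbound_peer:
            delay += INBOUND_DELAY      # outbound/whitelisted go first
        if requests_in_flight >= IN_FLIGHT_SOFT_LIMIT:
            delay += OVERLOADED_DELAY   # delays can combine
        return delay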
Original Author: Pieter Wuille <pieter@wuille.net>
Cherry-picked from: 3a700cde
Github Pull Request: #3577
1. Allow more time for notifications to be delivered under load
2. Ensure that in a non-hostile reorg the tracked transaction
isn't mined prematurely, by mining 80-byte blocks
3. Test all 4 messages for the doublespend scenario at once, so
the test doesn't error out when the conflict races the new
transaction
This allows us to test the reorg behavior more strictly while
reducing race issues
Cherry-picked from: 3a1519a9
Github Pull Request: #3572
Adds a copyright line for the Dogecoin Core Developers, up to the
last year found in the git log, when no such line exists but the
file has been edited by us, or extends the year range on an
existing line when the file has been modified since the year
previously listed.
Excludes subtrees.
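A sketch of how the last-modified year could be read from git for
this purpose (the helper name and exact invocation are
illustrative, not the script's actual code):

    import subprocess

    def last_git_change_year(filename):
        # Year of the most recent commit touching the file, used to
        # add or extend the Dogecoin Core copyright line.
        out = subprocess.check_output(
            ["git", "log", "-1", "--format=%ad",
             "--date=format:%Y", "--", filename])
        return int(out.decode().strip())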
Before this change, if you made a typo such as `-parellel-8` when trying to run
tests, you'd get a backtrace that's difficult to interpret. With this change,
you'll get a better error message and a non-zero exit code.
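A minimal sketch of the kind of check this adds (the flag handling
below is illustrative, not the actual runner code):

    import sys

    def reject_unknown_options(argv, known_flags):
        for arg in argv[1:]:
            flag = arg.split("=")[0]
            if flag.startswith("-") and flag not in known_flags:
                # Readable error and non-zero exit, not a backtrace.
                print("Unknown option: %s" % arg)
                sys.exit(1)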
Tests 5 scenarios:
1. Notifying for a new transaction into mempool
2. Notifying again for a mined known transaction
3. Notifying for a reversal and subsequent remine
4. Notifying for a reversal and subsequent doublespend
5. Notifying for a transaction that wasn't in mempool first
When using --tracerpc, this logs positional arguments, or named
arguments if no positional arguments exist, to allow deep
examination of calls that use named arguments.
The __call__ function rejects any call made with both positional
and named arguments, so we can print either one in this
construction.
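A sketch of the logging path this describes (function and variable
names below are illustrative):

    import json
    import logging

    log = logging.getLogger("BitcoinRPC")

    def trace_rpc_call(method, args, named_args):
        # __call__ already rejects mixing positional and named
        # arguments, so at most one of the two is non-empty here.
        if args and named_args:
            raise ValueError(
                "Cannot handle both named and positional arguments")
        payload = named_args if named_args else list(args)
        log.debug("-> %s %s" % (method, json.dumps(payload)))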
This updates the fee rate in p2p-feefilter.py to the recommended
minimum transaction fee of 0.01 DOGE/kB, reflecting the fee rate
changes made in the 1.14.4 release. Linked to issue #3201.
This adds two helper functions. One function gets a height parameter from the
incoming RPC request. The other performs the scanning. We can use both
functions for reducing code in other RPC calls that can/should take height
parameters and perform rescanning.
- update install-deps.sh so it cleans up after itself and can be
invoked from the root directory.
- add python3-pip and python3-setuptools, which are needed to
install ltc-scrypt, to the CI matrix jobs that run qa and to
qa/README.md.
- update the archive source to dogecoin/ltc-scrypt:v1.0.1.
- update qa/README.md to prefix the apt-get install directive with
sudo and add instructions to invoke the install-deps.sh script
from the root directory.
This allows users to avoid rescanning the entire chain when importing a
new private key, if they provide the height of the block from which to
start. Note that any transactions to or from the corresponding wallet
will only be indexed if they occur at or after the given height.
The new argument is named `height`, consistent with the `height`
argument to `rescan`.
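A hypothetical usage sketch; the position of the new `height`
argument in importprivkey below is an assumption, not taken from
the source:

    # privkey is a WIF-encoded key (placeholder). Only blocks from
    # import_height onward are scanned, so transactions involving
    # this key that confirmed earlier are not indexed.
    import_height = node.getblockcount() - 1000
    node.importprivkey(privkey, "imported", True, import_height)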
Tests 5 scenarios for transaction download scheduling:
1. Whether eventually, after a series of timeouts, all our peers
that announced a transaction are sent a getdata request
2. Whether outbound peers are prioritized over inbound peers when
a getdata request takes longer than optimal
3. That we honor the maximum in-flight capacity, that this is on
a per-peer basis and that it resets itself after timeout
4. That when a peer disconnects while we have an in-flight
getdata request to it, we recover after the initial 30 second
timeout and fetch the transaction from another peer
5. That we recover after a peer sends us a notfound message for
a tx we had an in-flight getdata request for.
The p2p-policy test has a number of issues: it is a real-time
relay test that currently cannot be mocked, and it is
insufficiently hardened against signature length variation.
This makes 2 changes to the test to make it more reliable:
1. Increase the maximum wait time for transactions to be relayed to
2 minutes instead of 30 seconds. This gives us more certainty
that the PoissonNextSend() function doesn't schedule outside of
our window.
2. Whenever we sign a transaction with an unexpected signature
length, retry constructing the transaction (with a different
input). Fees are changed to be 100% exact.
Note that this makes the test potentially take a longer time to
complete, so we move it up in the list of the test runner, to
be triggered early.
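A rough sketch of change (2), assuming the 1.14-era
createrawtransaction/signrawtransaction RPCs (the helper and its
size check are illustrative):

    def build_tx_with_exact_size(node, candidate_inputs, outputs,
                                 expected_size):
        # Retry with a different input whenever the signature length
        # differs from the one the exact fee was computed for.
        for inputs in candidate_inputs:
            raw = node.createrawtransaction(inputs, outputs)
            signed = node.signrawtransaction(raw)["hex"]
            if len(signed) // 2 == expected_size:
                return signed
        raise AssertionError("no transaction of the expected size")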
Each node keeps a registry of manually added nodes (through the
addnode parameter, RPC call or UI), but there are currently no
limits imposed on that usage. This is a bit sloppy and can lead
to situations where memory is used to store addresses that are
never connected to, because the maximum number of connections
used for addnode entries is hardcoded as 8. This could prevent
smaller systems that host nodes (like those running on an ARM
SoC) from optimally using the available memory.
This enhancement limits the addnode functionality as follows:
1. Once more than 799 nodes are in the registry, require the
user to remove an entry before a new one can be added
2. Disallow very long addresses (more than 256 characters).
This limit provides for at least 4 levels of subdomains as
specified under RFC 1035.
See https://datatracker.ietf.org/doc/html/rfc1035#section-2.3.1
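An illustrative functional-test check of the new limits (the error
handling below is a sketch; exact error codes and messages are
assumptions, and JSONRPCException is the test framework's RPC
exception):

    # Addresses longer than 256 characters must be rejected.
    too_long = "a" * 257
    try:
        node.addnode(too_long, "add")
        raise AssertionError("addnode accepted a 257-char address")
    except JSONRPCException:
        pass
    # Once the registry holds 800 entries, a further "add" should
    # fail until an entry is removed with addnode ... remove.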
Verification levels must be between 0 and 4 inclusive, and block heights should
always be positive. While code in the scan process handles the latter case with
a default value, it's better to verify and reject invalid input where it first
enters the system.
This adds more defensiveness around dumping or backing up wallets, so
that the directory and filepaths are always available (even if they were
on transient storage that was removed), and that they never overwrite
other files.
Tests that depend on enable_mocktime() because they use the
cached chain are testing things that need actual time. Because
we now mock time more often while processing messages, these
tests fail (to the node, no time passes).
This rewrites these tests to no longer use the cached chain.
Tests affected:
- listtransactions.py
- receivedby.py
Adds the time field to addr messages from protocol version 31402
onward, but serializes/deserializes without it for version
messages. This allows us to test p2p addr messages.
See src/version.h and c891967b
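A minimal sketch of the wire-format difference (not the actual
mininode.py change): addresses in addr messages carry a 4-byte
timestamp from protocol version 31402 onward, while the address
fields inside a version message are serialized without it.

    import struct

    def serialize_net_address(services, ip16, port,
                              timestamp=None, with_time=True):
        r = b""
        if with_time and timestamp is not None:
            r += struct.pack("<I", timestamp)  # uint32, addr msgs only
        r += struct.pack("<Q", services)       # uint64 service bits
        r += ip16                              # 16-byte IP (IPv4-mapped)
        r += struct.pack(">H", port)           # port is big-endian
        return r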
The algorithm here is important and took some time to get right. Instead
of comparing whether the current number of connected nodes minus the
number of unevictable nodes is greater than the number of max
connections, check that:
* there are any evictable nodes (connected nodes minus unevictable
nodes)
* there are more nodes connected than requested (connected nodes minus
max connections)
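A rough restatement of that check (placeholder names, not the
actual C++ code):

    # Whitelisted and protected inbound peers are unevictable.
    evictable = connected_nodes - unevictable_nodes
    excess = connected_nodes - max_connections
    if evictable > 0 and excess > 0:
        run_eviction()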
While we could wait for nodes to disconnect organically, it's more
important to run the eviction logic frequently enough that we can tell
when it will have an effect.
Whitelisted connections and protected inbound connections are
unevictable, and max connections should account for inbound connections.
Because the evictor will never evict protected inbound connections, the
maximum connection count should always be at least as large as the
protected connection count.
Note that the tests for this use a delay and test that the delay has not
expired. This helps improve determinism in the testing. Otherwise, a
strict test for a fixed number of disconnections is susceptible to
things like CPU jitter, especially when running through CI.
Patrick ran this test for 1000 runs on busy CPUs and saw no failures.
This uses the constant in src/net.h for the minimum allowed number of
connections.
The limit of new max connections is silently capped to the number of
available file descriptors. This value is not exposed in the UI or
RPC messages so as not to leak interesting or important system details.
The floor of maximum connections is set to the number of connections
required for this node to operate.
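A conceptual sketch of the clamping described above (names are
placeholders):

    def effective_maxconnections(requested, available_fds,
                                 required_minimum):
        # Silently capped to the file descriptor limit, with a floor
        # at the connections the node itself needs to operate.
        capped = min(requested, available_fds)
        return max(capped, required_minimum)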