
rpctest: integration test harness fixes #2071

Merged

5 commits merged into btcsuite:master from the integration-harness-fixes branch on Dec 22, 2023

Conversation

@guggero (Collaborator) commented Dec 15, 2023

This fixes two issues with the current RPC integration test framework:

  • Generated test temp directories were created using a non-mutex-protected harness ID and a deprecated way of creating temp directories; both are now fixed.
  • Because Go creates a separate subprocess for each package under test, a package-global variable that tracks used TCP ports doesn't work reliably, especially when multiple packages run in parallel. This PR adds a new way to issue unique TCP ports per process ID, which needs to be the parent process ID of each subprocess to work as expected (see the sketch below).
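Roughly how the new allocator is keyed (a sketch, not the PR's actual call sites; the test name and import path are assumptions, but NextAvailablePortForProcess(pid int) is the function this PR adds): every per-package test binary started by `go test ./...` is a child of the same go process, so os.Getppid() gives all packages the same key and therefore the same shared port counter.

```go
package rpctest_test

import (
	"os"
	"testing"

	"github.com/btcsuite/btcd/integration/rpctest"
)

func TestUniquePorts(t *testing.T) {
	// Key the allocator by the parent PID so parallel test packages don't
	// hand out the same port twice.
	pid := os.Getppid()

	p2pPort := rpctest.NextAvailablePortForProcess(pid)
	rpcPort := rpctest.NextAvailablePortForProcess(pid)

	if p2pPort == rpcPort {
		t.Fatalf("expected distinct ports, got %d twice", p2pPort)
	}
}
```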

@coveralls commented Dec 15, 2023

Pull Request Test Coverage Report for Build 7298210334

  • 17 of 98 (17.35%) changed or added relevant lines in 3 files are covered.
  • 49 unchanged lines in 6 files lost coverage.
  • Overall coverage decreased (-0.2%) to 56.428%

Changes Missing Coverage:

| File | Covered Lines | Changed/Added Lines | % |
| --- | --- | --- | --- |
| integration/rpctest/node.go | 8 | 10 | 80.0% |
| integration/rpctest/rpc_harness.go | 8 | 87 | 9.2% |

Files with Coverage Reduction:

| File | New Missed Lines | % |
| --- | --- | --- |
| connmgr/connmanager.go | 1 | 86.27% |
| integration/rpctest/rpc_harness.go | 1 | 44.93% |
| integration/rpctest/node.go | 2 | 70.44% |
| peer/peer.go | 9 | 73.49% |
| wire/msgaddrv2.go | 16 | 51.72% |
| wire/netaddressv2.go | 20 | 74.45% |

Totals (Coverage Status):
  • Change from base Build 7278303266: -0.2%
  • Covered Lines: 28552
  • Relevant Lines: 50599

💛 - Coveralls

integration/rpctest/rpc_harness.go (review thread, outdated, resolved)
integration/rpctest/rpc_harness.go (review thread, resolved)
integration/rpctest/rpc_harness.go (review thread, resolved)
integration/rpctest/rpc_harness.go (review thread, resolved)
// if no port is found and the maximum available TCP port is reached.
func NextAvailablePortForProcess(pid int) int {
	lockFile := fmt.Sprintf(
		"%s/rpctest-port-pid-%d.lock", os.TempDir(), pid,
Collaborator
Curious about whether this would behave the same on Windows?

Collaborator Author (guggero)
Oh, good point. I think we need to explicitly close a file before we can delete it. And we should probably use filepath.Join() instead of fmt.Sprintf() to create the path. Will fix.
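A minimal sketch of the two fixes mentioned above (the helper name acquireAndReleaseLock and the exact lock-file handling are assumptions, not the PR's actual code): build the path with filepath.Join so it uses the platform's path separator, and close the file handle before deleting the file, since Windows won't remove a file that is still open.

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// acquireAndReleaseLock is a hypothetical helper showing the Windows-friendly
// pattern: filepath.Join for the path and Close before Remove.
func acquireAndReleaseLock(pid int) error {
	lockFile := filepath.Join(
		os.TempDir(), fmt.Sprintf("rpctest-port-pid-%d.lock", pid),
	)

	// O_EXCL makes creation fail if the lock file already exists; a real
	// implementation would back off and retry instead of returning.
	f, err := os.OpenFile(lockFile, os.O_CREATE|os.O_EXCL|os.O_RDWR, 0644)
	if err != nil {
		return err
	}

	// ... critical section: read and bump the last used port ...

	// Close the handle before removing the file; Windows refuses to delete
	// a file that still has open handles.
	if err := f.Close(); err != nil {
		return err
	}

	return os.Remove(lockFile)
}

func main() {
	if err := acquireAndReleaseLock(os.Getppid()); err != nil {
		fmt.Println("error:", err)
	}
}
```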

Collaborator Author (guggero)
Updated things to be more Windows friendly.


	// We take the next one.
	lastPort++
	for lastPort < 65535 {
Collaborator
I think if we just listen on port 0 it will give us a random free port? Something like this:
https://github.com/phayes/freeport/blob/master/freeport.go#L8

or this:
https://gist.github.com/sevkin/96bdae9274465b2d09191384f86ef39d
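For reference, the port-0 trick the links describe boils down to something like this (freePort is just an illustrative helper, not code from this PR):

```go
package main

import (
	"fmt"
	"net"
)

// freePort asks the OS for an ephemeral port by listening on port 0, then
// reports which port was actually assigned.
func freePort() (int, error) {
	l, err := net.Listen("tcp", "127.0.0.1:0")
	if err != nil {
		return 0, err
	}
	defer l.Close()

	return l.Addr().(*net.TCPAddr).Port, nil
}

func main() {
	port, err := freePort()
	if err != nil {
		panic(err)
	}
	fmt.Println("free port:", port)
}
```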

Collaborator Author (guggero)
Yes, but we do want ports in order so we can incrementally distribute them. Otherwise we'd need to keep a list of already used ports in the file instead of just the last one we used.
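A simplified sketch of that incremental scheme (nextPort is an illustrative stand-in, and the lock-file synchronization described in the commit message further down is omitted for brevity): the temp file only stores the last port handed out, so the next caller bumps it and probes upward until a port actually binds.

```go
package main

import (
	"fmt"
	"net"
	"os"
	"path/filepath"
	"strconv"
	"strings"
)

// nextPort returns the next free TCP port above the last one recorded in a
// per-process temp file, then records the port it handed out.
func nextPort(pid int) (int, error) {
	portFile := filepath.Join(
		os.TempDir(), fmt.Sprintf("rpctest-port-pid-%d", pid),
	)

	lastPort := 1024
	if data, err := os.ReadFile(portFile); err == nil {
		if p, err := strconv.Atoi(strings.TrimSpace(string(data))); err == nil {
			lastPort = p
		}
	}

	for port := lastPort + 1; port < 65535; port++ {
		l, err := net.Listen("tcp", fmt.Sprintf("127.0.0.1:%d", port))
		if err != nil {
			// Port is already in use, try the next one.
			continue
		}
		_ = l.Close()

		// Remember the port we handed out for the next caller.
		return port, os.WriteFile(portFile, []byte(strconv.Itoa(port)), 0644)
	}

	return 0, fmt.Errorf("no free TCP port found for pid %d", pid)
}

func main() {
	fmt.Println(nextPort(os.Getppid()))
}
```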

@Roasbeef (Member) left a comment
LGTM ⚙️

Merge pending that Windows fix.

@guggero guggero force-pushed the integration-harness-fixes branch from 0ac03ae to 4bf8b11 on December 21, 2023 07:16
@guggero guggero force-pushed the integration-harness-fixes branch from 4bf8b11 to 203c984 on December 21, 2023 07:20
@guggero (Collaborator Author) commented Dec 21, 2023

I updated things a bit with the node's dataDir and logDir; it should now be clearer where things are stored and who's in charge of cleaning up.

@guggero guggero requested a review from yyforyongyu December 21, 2023 07:23
@guggero (Collaborator Author) commented Dec 21, 2023

It looks like #1955 broke the compilation on 32-bit systems. I added a commit to fix that; @kcalvinalvin, could you take a look at the last commit in this PR, please?

This commit adds a new NextAvailablePortForProcess function that takes a
process ID and then ensures unique (non-occupied) port numbers are
returned per process.
This uses a temporary file that contains the latest used port and a
secondary temporary lock file to ensure only a single goroutine can
request a new port at a time.

The GenerateProcessUniqueListenerAddresses function is intended to be used as a
package-level override for the ListenAddressGenerator variable. We don't
use it by default to make sure we don't break any existing assumptions.
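A hypothetical way for a test package to opt into the override (the assignment assumes GenerateProcessUniqueListenerAddresses matches ListenAddressGenerator's signature, which this excerpt doesn't show, and the import path is assumed to be the standard btcd one):

```go
package rpctest_test

import (
	"os"
	"testing"

	"github.com/btcsuite/btcd/integration/rpctest"
)

func TestMain(m *testing.M) {
	// Opt in to process-unique listener addresses; the default generator is
	// left untouched so existing assumptions keep holding.
	rpctest.ListenAddressGenerator = rpctest.GenerateProcessUniqueListenerAddresses

	os.Exit(m.Run())
}
```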
@guggero guggero force-pushed the integration-harness-fixes branch from 38c904b to b308ab4 on December 21, 2023 08:42
@yyforyongyu (Collaborator) left a comment
LGTM🏖

@guggero guggero force-pushed the integration-harness-fixes branch from b308ab4 to 7644d14 on December 22, 2023 09:14
@guggero guggero merged commit 8766bfd into btcsuite:master Dec 22, 2023
3 checks passed
@guggero guggero deleted the integration-harness-fixes branch December 22, 2023 10:46