*: improve the linter and fix some bugs #8015
Conversation
[REVIEW NOTIFICATION] This pull request has been approved by:
To complete the pull request process, please ask the reviewers in the list to review. The full list of commands accepted by this bot can be found here. Reviewers can indicate their review by submitting an approval review.
```diff
@@ -386,8 +386,7 @@ func (c *ResourceGroupsController) Start(ctx context.Context) {
 			}
 
 		case gc := <-c.tokenBucketUpdateChan:
-			now := gc.run.now
-			go gc.handleTokenBucketUpdateEvent(c.loopCtx, now)
+			go gc.handleTokenBucketUpdateEvent(c.loopCtx)
```
`now` is not used, we need to confirm if it is expected.
It's okay to remove it.
```diff
@@ -797,6 +797,7 @@ func (c *tsoClient) processRequests(
 	stream tsoStream, dcLocation string, tbc *tsoBatchController,
 ) error {
 	requests := tbc.getCollectedRequests()
+	// nolint
```
The `defer` is in the loop, not sure if it is right.
```diff
@@ -108,7 +108,8 @@ type ControllerConfig struct {
 }
 
 // Adjust adjusts the configuration and initializes it with the default value if necessary.
-func (rmc *ControllerConfig) Adjust(meta *configutil.ConfigMetaData) {
+// FIXME: is it expected?
+func (rmc *ControllerConfig) Adjust(_ *configutil.ConfigMetaData) {
```
`Adjust` doesn't use `meta`, is it expected?
IMO, it's better to use `meta` for the method `Adjust` of `RequestUnitConfig`.
```diff
@@ -347,7 +347,7 @@ func (c *RuleChecker) fixLooseMatchPeer(region *core.RegionInfo, fit *placement.
 	if region.GetLeader().GetId() != peer.GetId() && rf.Rule.Role == placement.Leader {
 		ruleCheckerFixLeaderRoleCounter.Inc()
 		if c.allowLeader(fit, peer) {
-			return operator.CreateTransferLeaderOperator("fix-leader-role", c.cluster, region, region.GetLeader().GetStoreId(), peer.GetStoreId(), []uint64{}, 0)
+			return operator.CreateTransferLeaderOperator("fix-leader-role", c.cluster, region, peer.GetStoreId(), []uint64{}, 0)
```
Transfer leader doesn't need a source store ID.
It isn't necessary for the source.
```diff
@@ -32,7 +32,7 @@ type Request interface {
 	getCount() uint32
 	// process sends request and receive response via stream.
 	// count defines the count of timestamps to retrieve.
-	process(forwardStream stream, count uint32, tsoProtoFactory ProtoFactory) (tsoResp, error)
+	process(forwardStream stream, count uint32) (tsoResp, error)
```
We don't use the factory in `process`.
```diff
@@ -556,7 +556,7 @@ func (s *GrpcServer) Tso(stream pdpb.PD_TsoServer) error {
 
 	if errCh == nil {
 		doneCh = make(chan struct{})
-		defer close(doneCh)
+		defer close(doneCh) // nolint
```
The `defer` here is not right.
```diff
@@ -100,7 +100,8 @@ func ReadGetJSONWithBody(re *require.Assertions, client *http.Client, url string
 	if err != nil {
 		return err
 	}
-	return checkResp(resp, StatusOK(re), ExtractJSON(re, data))
+	checkOpts = append(checkOpts, StatusOK(re), ExtractJSON(re, data))
```
Previously this function was not right: it ignored the caller-supplied `checkOpts`.
```diff
@@ -118,7 +119,7 @@ func (r *RegionSplitter) splitRegionsByKeys(parCtx context.Context, splitKeys []
 			r.handler.ScanRegionsByKeyRange(groupKeys, results)
 		}
 	case <-ctx.Done():
-		break
+		break outerLoop
```
A bare `break` won't exit the `for` loop.
Codecov Report

```
@@            Coverage Diff             @@
##           master    #8015      +/-   ##
==========================================
+ Coverage   77.27%   77.32%   +0.04%
==========================================
  Files         468      468
  Lines       60890    60868      -22
==========================================
+ Hits        47055    47068      +13
+ Misses      10291    10261      -30
+ Partials     3544     3539       -5
```
```diff
@@ -76,7 +76,7 @@ func (suite *regionSplitterTestSuite) SetupSuite() {
 	suite.ctx, suite.cancel = context.WithCancel(context.Background())
 }
 
-func (suite *regionSplitterTestSuite) TearDownTest() {
+func (suite *regionSplitterTestSuite) TearDownSuite() {
```
We need to use `TearDownSuite` instead of `TearDownTest`.
```diff
@@ -181,9 +181,6 @@ static: install-tools pre-build
 	@ gofmt -s -l -d $(PACKAGE_DIRECTORIES) 2>&1 | awk '{ print } END { if (NR > 0) { exit 1 } }'
 	@ echo "golangci-lint ..."
 	@ golangci-lint run --verbose $(PACKAGE_DIRECTORIES) --allow-parallel-runners
-	@ echo "revive ..."
-	@ revive -formatter friendly -config revive.toml $(PACKAGES)
```
Do we need to remove `revive.toml` also?
Done.
```diff
@@ -797,6 +797,7 @@ func (c *tsoClient) processRequests(
 	stream tsoStream, dcLocation string, tbc *tsoBatchController,
 ) error {
 	requests := tbc.getCollectedRequests()
+	// nolint
```
Maybe we can create a method to handle batch trace operations.
PR needs rebase. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
Signed-off-by: Ryan Leung <[email protected]>
/merge
@HuSharp: It seems you want to merge this PR, I will help you trigger all the tests: /run-all-tests You only need to trigger
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the ti-community-infra/tichi repository.
This pull request has been accepted and is ready to merge. Commit hash: 14503e3
@rleungx: Your PR was out of date, I have automatically updated it for you. If the CI test fails, you just re-trigger the test that failed and the bot will merge the PR for you after the CI passes. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the ti-community-infra/tichi repository.
What problem does this PR solve?
Issue Number: Close #8019.
What is changed and how does it work?
Check List
Tests
Release note