RFC: Bitbucket support #143
base: main
Conversation
- from: https://www.svgrepo.com/svg/349308/bitbucket - MIT license
- cloud-only at present
- this is pretty ugly
- server support has only been tested with a prism mock
- watcher counting is not implemented yet
- still needs screenshots and proofreading
Wow this is awesome!! Thanks for the contribution 😃 Will take a look at this later today. Will also take a look at getting a Bitbucket data center setup for testing. Also @gburiola, would you be willing to give this a try?
```
"countMisc": {
    "type": "boolean",
    "default": false,
    "description": "Count watchers and forks"
```
Still reading, but curious - what does this setting do?
I added these counts for parity with the other services, although bitbucket doesn't seem to do stars the way github does.
Against bitbucket cloud I was seeing rate limiting of API calls when watchers and forks were counted for a large number of repos. I made it configurable for the time being, since I don't need the info.
In the server API, I don't see these counts, so they will be zero in that case.
ah kk gotcha - I think we can probably just remove fetching stars / fork count altogether, at least for now.
For certain hosts (e.g., GitHub), zoekt will use the git config `zoekt.github-stars` and `zoekt.github-forks` to influence search rankings. It looks like this isn't plumbed through for bitbucket (via `zoekt.bitbucket-stars`). If we find there is a need for relevance ranking via stars/forks for bitbucket, then we can do that plumbing then.
```
isArchived: project.isArchived,
gitConfigMetadata: {
    // using 'bitbucket-server' lets 'zoekt' generate the correct commit
```
To confirm, the URL pattern for bitbucket cloud is the same as bitbucket server?
Looking at the zoekt code, it looks like the URL pattern defined is:
```
// Commit URL:
// https://<bitbucketserver-host>/projects/<project>/repos/<repo>/commits/5be7ca73b898bf17a08e607918accfdeafe1e0bc
// File URL:
// https://<bitbucketserver-host>/projects/<project>/repos/<repo>/browse/<file>?at=5be7ca73b898bf17a08e607918accfdeafe1e0bc
```
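For reference, those two patterns can be sketched as plain template helpers (the host, project, and repo values below are placeholders, not anything from this PR):

```typescript
// Sketch of the URL shapes zoekt documents for the 'bitbucket-server' style.
const commitUrl = (host: string, project: string, repo: string, sha: string): string =>
    `https://${host}/projects/${project}/repos/${repo}/commits/${sha}`;

const fileUrl = (host: string, project: string, repo: string, file: string, sha: string): string =>
    `https://${host}/projects/${project}/repos/${repo}/browse/${file}?at=${sha}`;
```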
```
if (config.countMisc) {
    // not sure the suggested mapping of watchers to stars makes much sense
    const { durationMs: starMs, data: foundStars } = await measure(() => client.countWatchers(client, repo));
    stars = foundStars;
    logger.info(`  Found ${stars} watchers for ${repo.name} in ${starMs} ms`);
    const { durationMs: forkMs, data: foundForks } = await measure(() => client.countForks(client, repo));
    forks = foundForks;
    logger.info(`  Found ${forks} forks for ${repo.name} in ${forkMs} ms`);
}
```
Based on other thread - we can probably just remove this for now
```
"default": false,
"description": "Count watchers and forks"
},
"url": {
```
Does it make sense to make this a required field? That way `url` will be of type `string` instead of `string | undefined`?
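To illustrate the type-level effect (these type names are made up for the sketch, not from the PR): once `url` is required in the schema, the derived config type can carry `url: string`, and the placeholder fallback goes away.

```typescript
// Illustrative only: with `url` required, the config type no longer needs
// the `string | undefined` handling downstream.
type ConfigWithOptionalUrl = { url?: string };
type ConfigWithRequiredUrl = { url: string };

function serverBaseUrl(config: ConfigWithRequiredUrl): string {
    // no `?? 'https://junk'` placeholder required
    return config.url;
}
```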
```
function serverClient(config: BitbucketConfig, ctx: AppContext): BitbucketClient {
    const token = config.token ? getTokenFromConfig(config.token, ctx) : undefined;
    const url = config.url ? config.url : 'https://junk';
```
(see other comment about making `url` required)
```
logger.debug('PAGE ' + JSON.stringify(page));
if (page.error != null) {
    // rate limit?
    await delay(RETRY_TIME);
```
Is there a way to determine whether the error returned was a rate limit vs. some other error? Could we check for status code 429? Also, had you run into rate limiting issues in your testing?
Currently, this looks like it will enter an infinite loop if requests never succeed (e.g., if the token is incorrectly scoped and we are hitting 401s).
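One possible shape for this, sketched from scratch rather than taken from the PR: bound the retries, back off only on 429, and fail fast on anything else. The `status`/`data` shape of the page result is an assumption for illustration.

```typescript
// Hedged sketch: bounded retry distinguishing HTTP 429 (retry with backoff)
// from other errors (fail fast, e.g. a 401 from a mis-scoped token).
const MAX_ATTEMPTS = 5;

async function fetchPageWithRetry<T>(
    fetchPage: () => Promise<{ status: number; data?: T }>,
    sleep: (ms: number) => Promise<void> = (ms) => new Promise((r) => setTimeout(r, ms)),
): Promise<T | undefined> {
    for (let attempt = 0; attempt < MAX_ATTEMPTS; attempt++) {
        const page = await fetchPage();
        if (page.status === 429) {
            // rate limited: exponential backoff, then retry
            await sleep(1000 * 2 ** attempt);
            continue;
        }
        if (page.status >= 400) {
            // non-retryable error: do not loop forever
            throw new Error(`request failed with status ${page.status}`);
        }
        return page.data;
    }
    throw new Error(`still rate limited after ${MAX_ATTEMPTS} attempts`);
}
```

The injectable `sleep` is just there so the backoff can be skipped in tests.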
```
logger.debug('PAGE ' + JSON.stringify(page));
if (page.error != null) {
    // rate limit?
    await delay(RETRY_TIME);
```
Same rate limit comment as on serverGetPaginatedResult
I didn't get a chance to go over everything since I've been out the last few days, so I'm just going to release the feedback I have so far.

A general code-quality nit: we should consider making things more strongly typed. For example, it would be great if we could type the response bodies s.t. we have more guarantees when we reach for certain properties. I was hacking around a little bit, and maybe something along these lines could work:

```typescript
import type { ClientOptions, ClientPathsWithMethod } from "openapi-fetch";
import { createBitbucketCloudClient } from "@coderabbitai/bitbucket/cloud";
import { SchemaProject as CloudProject } from "@coderabbitai/bitbucket/cloud/openapi";

// afaik, this is the only way of extracting the client API type
type CloudAPI = ReturnType<typeof createBitbucketCloudClient>;

// Defines a type that is a union of all API paths that have a GET method in the
// client api.
type CloudGetRequestPath = ClientPathsWithMethod<CloudAPI, "get">;

/**
 * Defines a generic response body for any paginated api.
 **/
type PaginatedResponse<T> = {
    readonly next?: string;
    readonly page?: number;
    readonly pagelen?: number;
    readonly previous?: string;
    readonly size?: number;
    readonly values?: readonly T[];
};

/**
 * We need to do `V extends CloudGetRequestPath` since we will need to call `apiClient.GET(url, ...)`, which
 * expects `url` to be of type `CloudGetRequestPath`. See example.
 **/
const getPaginatedCloud = async <T, V extends CloudGetRequestPath>(baseUrl: V, get: (url: V) => Promise<PaginatedResponse<T>>) => {
    const results: T[] = [];
    let url = baseUrl;
    while (true) {
        const response = await get(url);
        if (!response.values || response.values.length === 0) {
            break;
        }
        results.push(...response.values);
        if (!response.next) {
            break;
        }
        // cast required here since response.next is a string.
        url = response.next as V;
    }
    return results;
}
```

Usage example:

```typescript
async function cloudGetProjects(client: CloudAPI, workspace: string): Promise<string[]> {
    const results: string[] = [];
    const baseUrl = '/workspaces/{workspace}/projects';
    // `projects` is of type `CloudProject[]`.
    const projects = await getPaginatedCloud<CloudProject, typeof baseUrl>(baseUrl, async (url) => {
        // This response body will be typed to `/workspaces/{workspace}/projects`
        const response = await client.GET(url, {
            params: {
                path: {
                    workspace,
                }
            }
        });
        const { data, error } = response;
        if (error) {
            throw new Error(`Failed to fetch projects for workspace ${workspace}: ${error.type}`);
        }
        return data;
    });
    // --> All of this is now typed <--
    for (const project of projects) {
        if (project.key) {
            results.push(project.key);
        }
    }
    logger.debug(`  FOUND ${results.length} PROJECTS IN ${workspace}`);
    return results;
}
```

There are probably simpler ways of doing this, so open to suggestions.
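One possibly simpler shape, sketched under the same assumptions about the paginated response body (`next`/`values`), is an async generator that yields values page by page and leaves URL handling to the caller-supplied fetch function:

```typescript
// Minimal sketch, not the PR's code: the response shape mirrors the
// PaginatedResponse suggestion above.
type Page<T> = {
    readonly next?: string;
    readonly values?: readonly T[];
};

async function* paginate<T>(
    firstUrl: string,
    get: (url: string) => Promise<Page<T>>,
): AsyncGenerator<T> {
    let url: string | undefined = firstUrl;
    while (url) {
        const page = await get(url);
        yield* page.values ?? [];
        url = page.next;
    }
}
```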
For issue #14
I've tested this against bitbucket.org.
I was only able to test the server/data center support against a prism mock based on the OpenAPI spec. I don't have
an instance to test against, so it may not work in real usage. Hopefully, any fixes will be small.
The README file needs some screenshot updates for adding app passwords. I may be able to sanitize one for Bitbucket Cloud, but a screenshot for the server version will need to come from someone who has it available.
This addresses my personal need, so I don't know how much more time I'm going to spend on it at present. Maybe it will be useful to others, though.