test: skip reporting CanaryOnly failures for stable version tests #2698

Merged · 6 commits · Oct 24, 2024
8 changes: 4 additions & 4 deletions .github/workflows/test-e2e.yml
```diff
@@ -56,12 +56,12 @@ jobs:
       run: |
         if [ "${{ github.event_name }}" == "workflow_dispatch" ]; then
           VERSION_SELECTORS=[${{ github.event.inputs.versions }}]
-          echo "group=[1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12]" >> $GITHUB_OUTPUT
-          echo "total=12" >> $GITHUB_OUTPUT
+          echo "group=[1, 2, 3, 4]" >> $GITHUB_OUTPUT
+          echo "total=4" >> $GITHUB_OUTPUT
         elif [ "${{ github.event_name }}" == "pull_request" ]; then
           VERSION_SELECTORS=[\"latest\"]
-          echo "group=[1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12]" >> $GITHUB_OUTPUT
-          echo "total=12" >> $GITHUB_OUTPUT
+          echo "group=[1, 2, 3, 4]" >> $GITHUB_OUTPUT
+          echo "total=4" >> $GITHUB_OUTPUT
         else
           VERSION_SELECTORS=[\"latest\",\"canary\",\"14.2.15\",\"13.5.1\"]
           echo "group=[1, 2, 3, 4]" >> $GITHUB_OUTPUT
```
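For readers unfamiliar with the matrix setup: the `group`/`total` outputs describe how the e2e suite is sharded across parallel jobs, with each job claiming one group out of `total`. A minimal TypeScript sketch of the idea, using a hypothetical round-robin `selectGroup` helper (the actual runner in this repo may slice tests differently):

```ts
// Hypothetical illustration of what the "group"/"total" outputs represent: each
// parallel job receives one group number and runs only its share of the test files.
// The actual slicing in the e2e runner may differ from this round-robin sketch.
function selectGroup<T>(items: T[], group: number, total: number): T[] {
  if (group < 1 || group > total) {
    throw new Error(`group must be between 1 and ${total}`)
  }
  return items.filter((_, index) => index % total === group - 1)
}

// Example: with total=4, group 2 picks every 4th file starting from the 2nd.
const files = ['a.test.ts', 'b.test.ts', 'c.test.ts', 'd.test.ts', 'e.test.ts', 'f.test.ts']
console.log(selectGroup(files, 2, 4)) // [ 'b.test.ts', 'f.test.ts' ]
```

With this change, `workflow_dispatch` and `pull_request` runs use 4 shards (matching the scheduled run) instead of 12.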
2 changes: 1 addition & 1 deletion run-local-test.sh
```diff
@@ -15,7 +15,7 @@ export NEXT_TEST_MODE=deploy
 export RUNTIME_DIR=$(pwd)
 cp tests/netlify-deploy.ts ../next.js/test/lib/next-modes/netlify-deploy.ts
 cd ../next.js/
-git apply ../opennextjs-netlify/tests/e2e-utils.patch || git apply ../opennextjs-netlify/tests/e2e-utils-v2.patch
+git apply $RUNTIME_DIR/tests/e2e-utils.patch || git apply $RUNTIME_DIR/tests/e2e-utils-v2.patch
 node run-tests.js --type e2e --debug --test-pattern $1
 git checkout -- test/lib/e2e-utils.ts
```

35 changes: 19 additions & 16 deletions tests/netlify-deploy.ts
```diff
@@ -6,15 +6,6 @@ import { tmpdir } from 'node:os'
 import path from 'path'
 import { NextInstance } from './base'
 
-type NetlifyDeployResponse = {
-  name: string
-  site_id: string
-  site_name: string
-  deploy_id: string
-  deploy_url: string
-  logs: string
-}
-
 async function packNextRuntimeImpl() {
   const runtimePackDir = await fs.mkdtemp(path.join(tmpdir(), 'opennextjs-netlify-pack'))
@@ -133,7 +124,7 @@ export class NextDeployInstance extends NextInstance {
 
     const deployRes = await execa(
       'npx',
-      ['netlify', 'deploy', '--build', '--json', '--message', deployTitle ?? ''],
+      ['netlify', 'deploy', '--build', '--message', deployTitle ?? ''],
       {
         cwd: this.testDir,
         reject: false,
```
Review thread on the removal of the `--json` flag:

Contributor: there's a weird bug in `ntl deploy --json` that makes debugging failures annoying anyway, so I like this change :) (I should write it up, but I don't know how to reproduce it 😞)

Contributor (author): I think we are just not getting any output at all on failures with the `--json` flag. At least that was my experience here, and why I wanted to change this in the first place: to see the actual logs on deploy failures.

Contributor: yeah, I think that's right... maybe it's by design?
```diff
@@ -142,17 +133,29 @@
 
     if (deployRes.exitCode !== 0) {
       throw new Error(
-        `Failed to deploy project ${deployRes.stdout} ${deployRes.stderr} (${deployRes.exitCode})`,
+        `Failed to deploy project (${deployRes.exitCode}) ${deployRes.stdout} ${deployRes.stderr} `,
       )
     }
 
     try {
-      const data: NetlifyDeployResponse = JSON.parse(deployRes.stdout)
-      this._url = data.deploy_url
+      const [url] = new RegExp(/https:.+\.netlify\.app/gm).exec(deployRes.stdout) || []
+      if (!url) {
+        throw new Error('Could not extract the URL from the build logs')
+      }
+      const [deployID] = new URL(url).host.split('--')
+      this._url = url
       this._parsedUrl = new URL(this._url)
-      this._deployId = data.deploy_id
-      require('console').log(`Deployment URL: ${data.deploy_url}`)
-      require('console').log(`Logs: ${data.logs}`)
+      this._deployId = deployID
+      this._cliOutput = deployRes.stdout + deployRes.stderr
+      require('console').log(`Deployment URL: ${this._url}`)
+
+      const [buildLogsUrl] =
+        new RegExp(/https:\/\/app\.netlify\.com\/sites\/.+\/deploys\/[0-9a-f]+/gm).exec(
+          deployRes.stdout,
+        ) || []
+      if (buildLogsUrl) {
+        require('console').log(`Logs: ${buildLogsUrl}`)
+      }
     } catch (err) {
       console.error(err)
       throw new Error(`Failed to parse deploy output: ${deployRes.stdout}`)
```
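Since `--json` is gone, the deploy URL and deploy ID are now recovered from the human-readable CLI output instead of a parsed JSON payload. The extraction leans on Netlify deploy permalinks having the form `https://<deploy-id>--<site-name>.netlify.app`, so the deploy ID is whatever precedes the `--` in the host. A self-contained sketch of that parsing against a made-up sample of CLI output:

```ts
// Standalone sketch of the extraction logic above, run against a fabricated
// sample of `netlify deploy` output (real output contains much more text).
const sampleOutput = `
Build logs:        https://app.netlify.com/sites/example-site/deploys/66f2a1b3c4d5e6f7a8b9c0d1
Website draft URL: https://66f2a1b3c4d5e6f7a8b9c0d1--example-site.netlify.app
`

const [url] = new RegExp(/https:.+\.netlify\.app/gm).exec(sampleOutput) || []
if (!url) {
  throw new Error('Could not extract the URL from the build logs')
}

// Deploy permalinks look like https://<deploy-id>--<site-name>.netlify.app,
// so the deploy ID is the part of the host before the "--".
const [deployID] = new URL(url).host.split('--')

console.log(url)      // https://66f2a1b3c4d5e6f7a8b9c0d1--example-site.netlify.app
console.log(deployID) // 66f2a1b3c4d5e6f7a8b9c0d1
```

The separate `app.netlify.com/sites/.../deploys/...` regex then pulls out the build-logs URL for the console message when it is present.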
8 changes: 7 additions & 1 deletion tests/test-config.json
```diff
@@ -124,7 +124,9 @@
     {
       "file": "test/e2e/app-dir/app-static/app-static.test.ts",
       "reason": "Uses CLI output",
-      "tests": ["app-dir static/dynamic handling should warn for too many cache tags"]
+      "tests": [
+        "app-dir static/dynamic handling should warn for too many cache tags"
+      ]
     },
     {
       "file": "test/e2e/app-dir/parallel-routes-and-interception/parallel-routes-and-interception.test.ts",
@@ -358,6 +360,10 @@
     {
       "file": "test/e2e/app-dir/dynamic-io-request-apis/dynamic-io-request-apis.test",
       "reason": "Uses CLI output"
+    },
+    {
+      "file": "test/e2e/next-config-warnings/esm-externals-false/esm-externals-false.test.ts",
+      "reason": "Uses CLI output"
     }
   ],
   "failures": [
```
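For context, the entries added here follow the skip-list shape that `tools/deno/junit2json.ts` consumes. Inferring from the diff, an entry with a `tests` array skips only the named cases, while one without it appears to skip reporting for the whole file. A rough TypeScript shape for that (the name `SkippedEntry` is my own label for illustration, not something defined in the repo):

```ts
// Inferred shape of a "skipped" entry in tests/test-config.json.
type SkippedEntry = {
  file: string      // Next.js e2e test file the entry applies to
  reason: string    // why it cannot be reported, e.g. "Uses CLI output"
  tests?: string[]  // specific test names to skip; omit to skip the whole file
}
```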
7 changes: 7 additions & 0 deletions tools/deno/junit2json.ts
```diff
@@ -117,6 +117,13 @@ function junitToJson(xmlData: { testsuites: JUnitTestSuites }): Array<TestSuite>
     if (skippedTestsForFile?.some(({ name }) => name === testCase['@name'])) {
       continue
     }
+
+    // skip reporting on tests that even fail to deploy because they rely on experiments not available
+    // in currently tested version
+    if (testCase.failure?.includes('CanaryOnlyError')) {
+      continue
+    }
+
     const status = testCase.failure ? 'failed' : 'passed'
     const test: TestCase = {
       name: testCase['@name'],
```
Review thread on lines +121 to +125:

Contributor: Doing this here is a little funky because we report test results in 3(?) different places and this is just one of them. So now we'll have a mismatch between the Travis CI summary thing, the actual raw test output, and the e2e report built from this JSON (previously we also had a fourth, which was the Slack notification). Maybe this is fine for now, but I wonder if there's a more direct way to configure the test runner the same way Vercel's CI does? I would imagine it's some slight env var change we need to make or something 🤷🏼.

Contributor (author): https://github.com/vercel/next.js/blob/a8de8730d75625e3bd068787121561cf6ab5eaac/packages/next/src/server/config.ts#L244 - setting this env var might prevent the build failure and allow actual usage of those features on stable versions (so the deploys wouldn't be failing, but the features themselves very well might not work correctly for us yet). I wasn't sure what can of worms I could open when playing with this env var, so I went with "skipping".

Overall, as we are producing a report for users to see and we are generating it for stable versions, I think it's fair to skip or ignore tests for features that can't be used on stable versions (without getting into "test mode"). The change here might not be enough overall, but I think the direction is correct: skip/ignore them instead of trying to run them?

Those tests would be displayed when we run tests against canary, because we wouldn't be hitting the CanaryOnly error then.

Contributor (author): In any case, this is something we can change, but we need something here, and I think this is good enough at least for now, until we figure out exactly how we should handle this?
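To make the new branch concrete, here is a trimmed-down sketch of the reporting loop with the `CanaryOnlyError` filter applied; the shapes and the failure message are simplified stand-ins for what the real `junitToJson` sees:

```ts
// Simplified model of the test-case objects parsed from the JUnit XML; the real
// junit2json.ts handles full suites plus the skip config from test-config.json.
type ParsedTestCase = {
  '@name': string
  failure?: string
}

type ReportedTest = { name: string; status: 'passed' | 'failed' }

function toReportedTests(testCases: ParsedTestCase[]): ReportedTest[] {
  const reported: ReportedTest[] = []
  for (const testCase of testCases) {
    // Tests whose deploy failed only because the feature is canary-only are
    // dropped from the stable-version report instead of showing up as failures.
    if (testCase.failure?.includes('CanaryOnlyError')) {
      continue
    }
    reported.push({
      name: testCase['@name'],
      status: testCase.failure ? 'failed' : 'passed',
    })
  }
  return reported
}

// Example (illustrative failure text): the canary-only case is omitted entirely.
console.log(
  toReportedTests([
    { '@name': 'ppr page renders', failure: 'CanaryOnlyError: this feature requires a canary version of Next.js' },
    { '@name': 'static page renders' },
  ]),
)
// -> [ { name: 'static page renders', status: 'passed' } ]
```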