
test(container): add container e2e UI test #65

Merged
merged 1 commit into opentiny:dev on Sep 26, 2024

Conversation

Youyou-smiles
Contributor

@Youyou-smiles Youyou-smiles commented Sep 25, 2024

PR

PR Checklist

Please check if your PR fulfills the following requirements:

  • The commit message follows our Commit Message Guidelines
  • Tests for the changes have been added (for bug fixes / features)
  • Docs have been added / updated (for bug fixes / features)

PR Type

What kind of change does this PR introduce?

  • Bugfix
  • Feature
  • Code style update (formatting, local variables)
  • Refactoring (no functional changes, no api changes)
  • Build related changes
  • CI related changes
  • Documentation content changes
  • Other... Please describe:

What is the current behavior?

Issue Number: N/A

What is the new behavior?

Does this PR introduce a breaking change?

  • Yes
  • No

Other information

Summary by CodeRabbit

  • New Features

    • Introduced new test suites for the container, hrapprover, and layout components to validate UI functionality.
    • Implemented Playwright for end-to-end testing, ensuring error handling and screenshot capture for key UI elements.
  • Tests

    • Added multiple test cases to check basic usage, custom dimensions, and layout specifications, enhancing overall test coverage.


coderabbitai bot commented Sep 25, 2024

Walkthrough

A new set of test suites has been introduced for various components, including container, hrapprover, and layout. Each suite employs Playwright for end-to-end testing, focusing on UI functionality and visual validation through screenshot captures. The tests ensure that demo elements are visible in the viewport and handle page errors gracefully.

Changes

File Path Change Summary
tests/container/xdesign.spec.ts Introduced a test suite for the container component with two test cases validating UI functionality and screenshots.
tests/hrapprover/xdesign.spec.ts Added a test suite for the hrapprover component containing a test case that verifies UI functionality and captures screenshots.
tests/layout/xdesign.spec.ts Created a test suite for the layout component with a test case that checks UI functionality and captures a screenshot.

Possibly related PRs

Suggested labels

e2e

Suggested reviewers

  • zzcr

🐰 In a world of tests so bright,
Each component shines with delight.
With screenshots captured, errors no more,
UI validated, we all can explore!
Hooray for changes, let’s give a cheer,
For a rabbit's joy is always near! 🥕✨


@github-actions github-actions bot added the e2e playwright label Sep 25, 2024

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 2

🧹 Outside diff range and nitpick comments (8)
tests/layout/xdesign.spec.ts (3)

3-3: Verify naming convention and consider translating to English.

The test suite description is in Chinese. To ensure consistency and improve maintainability:

  1. Verify if this naming convention aligns with the project's standards.
  2. Consider translating the description to English if the project primarily uses English for code and comments.

For example: test.describe('Layout Component XDesign Specification', () => { ... })


4-10: Good test structure, with suggestions for improvement.

The test case is well-structured and covers important aspects of UI testing. Here are some suggestions for improvement:

  1. Consider translating the test title to English for consistency.
  2. The URL navigation uses a fragment. Consider using a more robust method to ensure the correct page is loaded, such as checking the page title or a unique element on the page.
  3. The screenshot filename is hardcoded. Consider using a dynamic filename based on the test description to improve maintainability.

Example improvement for point 3:

const screenshotName = `${test.info().titlePath.join('-')}.png`.toLowerCase().replace(/\s+/g, '-');
await expect(demo).toHaveScreenshot(screenshotName);
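That one-liner can also be pulled into a small pure helper so the naming rule is unit-testable on its own. A minimal sketch (the helper name `toScreenshotName` is hypothetical, not code from this PR):

```typescript
// Hypothetical helper: derive a screenshot filename from a Playwright title path.
// Mirrors the inline expression above: join segments, lowercase, replace spaces with dashes.
function toScreenshotName(titlePath: string[]): string {
  return `${titlePath.join('-')}.png`.toLowerCase().replace(/\s+/g, '-');
}

// Example: toScreenshotName(['Layout', 'Basic Usage']) → 'layout-basic-usage.png'
```

Inside a test it could then be used as `await expect(demo).toHaveScreenshot(toScreenshotName(test.info().titlePath))`.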

1-12: Consider expanding test coverage.

The current test suite includes only a basic usage test case. To ensure comprehensive coverage of the layout component, consider adding more test cases, such as:

  1. Different layout configurations (e.g., with/without header, footer, sidebar)
  2. Responsive behavior tests
  3. Interaction tests (if applicable)
  4. Edge cases (e.g., very large or small content)

Example additional test case:

test('Responsive layout - mobile view', async ({ page }) => {
  await page.setViewportSize({ width: 375, height: 667 });
  // ... (similar steps as the basic usage test)
  await expect(demo).toHaveScreenshot('responsive-mobile.png');
});
tests/hrapprover/xdesign.spec.ts (2)

3-4: Consider improving test suite structure and naming.

  1. The test suite and test case names are in Chinese. For better international collaboration, consider using English names or adding English translations as comments.
  2. There's only one test case in this suite. Consider adding more test cases to cover different scenarios and improve test coverage.

Example of improved naming:

test.describe('HRApprover component XDesign standards', () => {
  test('Custom service - UI screenshot', async ({ page }) => {
    // Test implementation...
  });
  
  // Additional test cases...
});

7-15: LGTM: Test logic and assertions are well-structured.

The test case follows a clear structure:

  1. Navigation to the correct page
  2. Verification of element visibility
  3. Screenshot capture
  4. Interaction with UI elements
  5. Re-verification and another screenshot capture

This approach effectively tests the UI functionality and appearance.

However, consider adding more specific assertions about the UI state after the interaction. For example:

await prover.click();
await expect(someElementThatShouldChangeAfterClick).toHaveText('Expected text');
tests/container/xdesign.spec.ts (3)

4-10: LGTM: Well-structured test case with good practices.

The test case effectively covers the basic usage of the container component. It includes error handling, viewport checking, and screenshot comparison, which are all good practices for UI testing.

Consider using English for test case names to improve readability for international contributors. For example:

test('Basic Usage - UI Screenshot', async ({ page }) => {
  // ... (rest of the test case)
})

12-18: LGTM: Consistent test structure, but consider refactoring.

The test case for custom width and height follows the same good practices as the first test case, which is excellent for consistency.

To reduce code duplication and improve maintainability, consider refactoring both test cases into a single parameterized test. Here's an example of how you could do this:

interface TestCase {
  name: string;
  url: string;
  selector: string;
  screenshotName: string;
}

const testCases: TestCase[] = [
  {
    name: 'Basic Usage - UI Screenshot',
    url: 'container#basic-usage',
    selector: '#basic-usage .pc-demo',
    screenshotName: 'basic-usage.png',
  },
  {
    name: 'Custom Width and Height - UI Screenshot',
    url: 'container#custom-with-height',
    selector: '#custom-with-height .pc-demo',
    screenshotName: 'custom-with-height.png',
  },
];

testCases.forEach(({ name, url, selector, screenshotName }) => {
  test(name, async ({ page }) => {
    page.on('pageerror', (exception) => expect(exception).toBeNull());
    await page.goto(url);
    const demo = page.locator(selector);
    await expect(demo).toBeInViewport();
    await expect(demo).toHaveScreenshot(screenshotName);
  });
});

This approach makes it easier to add new test cases in the future and reduces the likelihood of inconsistencies between test cases.
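Since every field in that table is derived from a single demo anchor id, the table itself could be generated by a small pure helper. A sketch under that assumption (the `makeCase` helper is hypothetical, not code from this PR):

```typescript
interface TestCase {
  name: string;
  url: string;
  selector: string;
  screenshotName: string;
}

// Hypothetical helper: derive a full TestCase from a demo anchor id.
// The URL, selector, and screenshot name all follow the same convention
// seen in the existing specs: '<component>#<anchor>', '#<anchor> .pc-demo', '<anchor>.png'.
function makeCase(name: string, anchor: string, component = 'container'): TestCase {
  return {
    name,
    url: `${component}#${anchor}`,
    selector: `#${anchor} .pc-demo`,
    screenshotName: `${anchor}.png`,
  };
}

const testCases: TestCase[] = [
  makeCase('Basic Usage - UI Screenshot', 'basic-usage'),
  makeCase('Custom Width and Height - UI Screenshot', 'custom-with-height'),
];
```

This keeps the convention in one place, so adding a new demo section is a one-line change.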


1-19: Consider adding more test cases for comprehensive coverage.

The current test suite covers basic usage and custom width/height scenarios, which is a good start. To ensure more comprehensive coverage of the container component, consider adding the following test cases:

  1. Test with different content types (text, images, nested components).
  2. Test with extreme dimensions (very small or very large).
  3. Test responsive behavior if applicable.
  4. Test any interactive features of the container (if any).
  5. Test with different themes or styles if supported.

These additional test cases will help ensure the container component behaves correctly under various conditions and use cases.

📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL

📥 Commits

Files that changed from the base of the PR, between commits 0d61e51 and 113737e.

🔇 Files ignored due to path filters (5)
  • tests/container/xdesign.spec.ts-snapshots/basic-usage-chromium-win32.png is excluded by !**/*.png
  • tests/container/xdesign.spec.ts-snapshots/custom-with-height-chromium-win32.png is excluded by !**/*.png
  • tests/hrapprover/xdesign.spec.ts-snapshots/custom-chromium-win32.png is excluded by !**/*.png
  • tests/hrapprover/xdesign.spec.ts-snapshots/custom-service-chromium-win32.png is excluded by !**/*.png
  • tests/layout/xdesign.spec.ts-snapshots/basic-usage-chromium-win32.png is excluded by !**/*.png
📒 Files selected for processing (3)
  • tests/container/xdesign.spec.ts (1 hunks)
  • tests/hrapprover/xdesign.spec.ts (1 hunks)
  • tests/layout/xdesign.spec.ts (1 hunks)
🔇 Additional comments not posted (4)
tests/layout/xdesign.spec.ts (1)

1-1: LGTM: Imports are correct and necessary.

The import statement is appropriate for writing Playwright tests, importing only the necessary functions expect and test.

tests/hrapprover/xdesign.spec.ts (2)

1-1: LGTM: Imports are correct and concise.

The necessary Playwright testing utilities are imported correctly.


1-17: Overall assessment: Good start, but room for improvement.

This new e2e UI test for the hrapprover component is a positive addition to the project, aligning well with the PR objectives. The test structure and use of Playwright are appropriate. However, there are several areas for improvement:

  1. Internationalization: Consider using English for test names or providing translations.
  2. Test coverage: Add more test cases to cover various scenarios.
  3. Error handling: Refine the approach to avoid suppressing all errors indiscriminately.
  4. Assertions: Include more specific assertions about UI state changes after interactions.

These enhancements would significantly improve the robustness and maintainability of the test suite.

tests/container/xdesign.spec.ts (1)

1-3: LGTM: Imports and test structure are well-organized.

The imports from '@playwright/test' are correct, and the test structure using test.describe provides a clear context for the container component tests.

Comment on lines +5 to +6
// The backing service causes console errors; the exception capture can be commented out when running locally
page.on('pageerror', (exception) => expect(exception).toBeNull())

⚠️ Potential issue

Reconsider the error handling approach.

The current error handling suppresses all page errors, which might hide important issues. Consider logging the errors or only suppressing specific known errors.

Suggested improvement:

page.on('pageerror', (exception) => {
  console.warn('Page error:', exception);
  // Only ignore specific errors if necessary
  if (exception.message.includes('Known error to ignore')) {
    return;
  }
  expect(exception).toBeNull();
});
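The allow-list check in that handler can likewise be factored into a pure predicate, which keeps the suppression policy testable outside Playwright. A sketch (the name `isIgnorableError` and the pattern string are placeholders, not errors this project actually emits):

```typescript
// Hypothetical predicate: decide whether a page error may be suppressed.
// Only errors matching an explicit allow-list are ignored; everything else should fail the test.
const IGNORABLE_PATTERNS = ['Known error to ignore'];

function isIgnorableError(message: string): boolean {
  return IGNORABLE_PATTERNS.some((pattern) => message.includes(pattern));
}
```

The handler would then reduce to something like `page.on('pageerror', (e) => { if (!isIgnorableError(e.message)) throw e; })`, making the suppressed cases explicit and reviewable.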

await page.goto('hrapprover#custom-service')
const demo = page.locator('#custom-service .pc-demo')
await expect(demo).toBeInViewport()
await expect(demo).toHaveScreenshot('custom-service.png')

💡 Codebase verification

Visual Regression Testing Not Configured in CI/CD Pipeline

The shell script indicates that no visual regression testing configuration was found in the CI/CD pipeline. Please ensure that visual regression tests are properly set up to handle screenshot comparisons effectively.

🔗 Analysis chain

Good use of visual regression testing.

The use of screenshots for visual comparison is a good practice for catching unexpected UI changes. However, ensure that your CI/CD pipeline is set up to handle these visual comparisons effectively.

To ensure the screenshot comparison is set up correctly in the CI/CD pipeline, you can run the following command:

Also applies to: 15-15

🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Check if visual regression testing is configured in CI/CD

# Look for configuration files that might include visual regression setup
fd -e yml -e yaml -e json | xargs grep -i "screenshot\|visual\|image.?comparison" || echo "No visual regression testing configuration found in CI/CD files."

Length of output: 222

@zzcr zzcr merged commit 5033dcb into opentiny:dev Sep 26, 2024
2 checks passed
Labels
e2e playwright
Projects
None yet
Development

Successfully merging this pull request may close these issues.

2 participants