[core] Setup testing to work with CSS imports #17214


Merged
68 commits merged into mui:master from refactor-css-breaking-change on Apr 11, 2025

Conversation

romgrk
Contributor

@romgrk romgrk commented Apr 1, 2025

Split from #16677

In preparation for the datagrid styling solution that will use CSS stylesheets, this adds a placeholder stylesheet and migrates the test setup to vite to get CSS working.
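As a rough illustration of why the Vite migration matters here (a minimal sketch under assumed file names; the PR's actual config may differ), a Vitest/Vite setup can process CSS imports natively instead of having them fail at import time:

```ts
// vitest.config.ts — illustrative sketch only, not the exact config from this PR
import { defineConfig } from 'vitest/config';

export default defineConfig({
  test: {
    environment: 'jsdom', // DOM environment so injected styles can be asserted on
    css: true, // process `import './styles.css'` instead of stubbing it out
    setupFiles: ['test/setup.ts'], // hypothetical setup entry
  },
});
```

Under a plain Node/mocha setup, a bare `import './placeholder.css'` in a component would error out; under Vite the CSS goes through the transform pipeline like any other module.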

@romgrk romgrk added core Covers general upkeep, such as minor refactoring, typo fixes, or structural changes. breaking change Introduces changes that are not backward compatible. scope: data grid Changes or issues related to the data grid product type: enhancement This is not a bug, nor a new feature labels Apr 1, 2025
@cherniavskii cherniavskii added this to the 8.0.0 milestone Apr 1, 2025
@mui-bot

mui-bot commented Apr 1, 2025

Deploy preview: https://deploy-preview-17214--material-ui-x.netlify.app/

Generated by 🚫 dangerJS against f72a797


github-actions bot commented Apr 1, 2025

This pull request has conflicts, please resolve those before we can evaluate the pull request.

@github-actions github-actions bot added the PR: out-of-date The pull request has merge conflicts and can't be merged label Apr 1, 2025
@@ -43,7 +47,7 @@ default-job: &default-job
COREPACK_ENABLE_DOWNLOAD_PROMPT: '0'
working_directory: /tmp/mui
docker:
- image: cimg/node:20.19
- image: mcr.microsoft.com/playwright:v1.51.0-noble
Member

Why switch to a different image?

Contributor Author

The image is already used in some of the tests; this change only makes us use it consistently. Otherwise, some of the run steps would need to be adapted to run on different environments.

Member

We're not running our scripts on this image locally either, right? Why not keep them minimally portable between the two images we use? Now we pull an image with one Node version and override it with another, with no layer caching possible.

Member

To add insult to injury: the runtime of every CI step is increased by a not insignificant amount. 🙈
Spinup of cimg/node is ~3s, while the Playwright image is ~40s. 🤔

I propose we keep using cimg/node images (just updated to Node 23) for steps that do not need Playwright.

As for tests requiring Playwright, WDYT about manually installing it on top of the cimg/node image? 🤔

Member

As for tests requiring Playwright, WDYT about manually installing it over the cimg/node image?

AFAIK Playwright needs some packages that are not present in cimg/node; IMO two images are really the best option for our CI.

Member

Can we avoid this huge Argos diff somehow?
Frankly, it's not a great time for a bulk screenshot update, as it will cause some friction for other PRs ahead of the stable release 😅

I agree with Rom on this one.
If that's the best compromise now, then I don't see a big problem accepting the diff and moving forward with it. 🤷

Member

Manually installing Playwright plus system deps takes around 50s.

The startup of the Playwright image also takes a wildly varied amount of time... 🤷

So, it seems that the current setup can also take up to ~80s for env setup in some cases.

But if we want to optimize that, we should probably publish our own Docker image (based on cimg/node) where we set up Playwright plus ffmpeg (another ~40s); we could maybe save around 60s on each job that requires Playwright.

That is a great suggestion. 👍
However, that would mean that we inherit one more thing that we have to maintain (keep updated) for our "infra". 🙈

I have also set up pnpm package caching, which should save an additional ~50s on all jobs.

I think that when we discussed this, we agreed that the difference between a fresh install every time vs a cache pull and restore is up to ~10s, hence every repo agreed to ditch caching to save on storage credits and keep pulling fresh. 🤔

Contributor Author

I removed pnpm caching.

Contributor Author

I noticed that Playwright bundles an ffmpeg version; we could save time by using that instead of installing a system version.

I also looked a bit into the cimg/node:*-browsers images to avoid installing Playwright browsers, but it looks like those still need to download browsers (done via CircleCI tools), so I'm not sure it would save time. The test:regressions job could save time by only installing Chromium, though (right now all 3 browsers are installed).
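For reference, the Chromium-only install mentioned above is supported directly by the Playwright CLI; a sketch of what that CI step could look like (the exact step used in this repo's config is an assumption on my part):

```sh
# Install only Chromium and its system dependencies,
# instead of the default Chromium + Firefox + WebKit set.
npx playwright install --with-deps chromium
```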

Member

I also looked a bit into the cimg/node:*-browsers images to avoid installing Playwright browsers, but it looks like those still need to download browsers, so I'm not sure it would save time.

It looks like they support only Google Chrome and Firefox, whereas we use the Playwright defaults (Chromium, Firefox and WebKit).
So, no luck, I guess... 🙈 🤷

The test:regressions job could save time by only installing Chromium, though (right now all 3 browsers are installed).

Yeah, that could be a nice improvement. 👍
Or we could look into caching at least the Playwright installs, since they change much less frequently than the pnpm-lock.yaml file. 🤔
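A sketch of what caching the Playwright browser installs could look like in CircleCI config. The cache key and paths here are my assumptions (keying on the installed playwright-core version, and the default Linux browser location), not this repo's actual config:

```yml
- restore_cache:
    keys:
      - playwright-browsers-{{ checksum "node_modules/playwright-core/package.json" }}
- run: npx playwright install --with-deps
- save_cache:
    key: playwright-browsers-{{ checksum "node_modules/playwright-core/package.json" }}
    paths:
      - ~/.cache/ms-playwright # default Playwright browser location on Linux
```

The key only changes when the Playwright dependency is bumped, so the ~50s install would be skipped on most runs.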

@github-actions github-actions bot removed the PR: out-of-date The pull request has merge conflicts and can't be merged label Apr 2, 2025
@romgrk romgrk force-pushed the refactor-css-breaking-change branch from 26f3f98 to c0036e6 on April 2, 2025 22:19
@github-actions github-actions bot added the PR: out-of-date The pull request has merge conflicts and can't be merged label Apr 2, 2025

github-actions bot commented Apr 2, 2025

This pull request has conflicts, please resolve those before we can evaluate the pull request.

Member

@LukasTy LukasTy left a comment

LGTM. 👍
Thank you for the massive effort addressing this. 🙏

However, it looks like the e2e and regression tests run slightly slower after these changes. 🙈 🤷

Comment on lines +5 to +6
import { setupFakeClock, restoreFakeClock } from '../utils/setupFakeClock'; // eslint-disable-line
import { generateTestLicenseKey, setupTestLicenseKey } from '../utils/testLicense'; // eslint-disable-line
Member

Nitpick: What are these eslint disables for? 🤔 🤷

Contributor Author

test and test/regressions each have a package.json; ESLint doesn't want us to import files across packages.

Member

Gotcha, thanks for clarifying. 👍

Member

eslint doesn't want us to import files across packages.

One of the reasons is that we have observed before that it sometimes creates more of the ESM tech debt that you've already run into. Most tools and runtimes assume the package boundary is never breached, as that would make no sense in an installed environment.
This is one of the ESLint rules that is definitely not there for preference reasons, and if possible we should always avoid disabling it. Would it be possible to refactor the code to make this unnecessary? Perhaps these helpers belong in the test utils package?
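To illustrate the explicit-dependency alternative being suggested here: if the helpers lived in a dedicated workspace package, the regression workspace could declare it in its manifest. Purely hypothetical sketch; both package names are invented:

```json
{
  "name": "@mui/x-test-regressions",
  "private": true,
  "dependencies": {
    "@mui/x-internal-test-utils": "workspace:*"
  }
}
```

With that in place, the helpers would be imported through the package name, and the cross-package relative imports (and their eslint-disable comments) would presumably no longer be needed.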

Member

I think that this improvement could be done in a follow-up. 🙈

Contributor Author

For actual packages it would make sense to be strict with that rule, but this is test code that is only run as-is and never installed in any other configuration than this one. I'm not modifying the layout of the test folder; importing from ./test/utils in ./test/regressions is already present on master: https://github.com/mui/mui-x/blob/master/test/regressions/index.tsx#L7. This PR is already big enough, and I don't want to move more stuff around for the moment.

Member

@Janpot Janpot Apr 11, 2025

For actual packages

The last time we had a problem around this was in the Pigment regression tests that imported the docs, with change detection not running the regression tests when the docs changed. It made the assumption that it could rely on package dependencies to know which dependent code to run, but such a dependency didn't exist for the regression tests, as they just reached out of the folder. This made some regressions go unnoticed for a while. Both of these workspaces are not installable packages.

is already present on master

But that is not crossing a workspace boundary; adding a package.json file changes things. Contributors and tooling make different assumptions around isolation between workspaces. They often expect explicit dependencies to be defined, while reaching out of the folder essentially creates an implicit dependency.

This PR is already big enough

My recommendation in such cases would always be to split up the PR; for example, the regression test migration is not a prerequisite for adding the CSS imports. The advantage of smaller, better-scoped PRs is that a blocker on a single change doesn't block all the changes. It allows us to not rush some parts while still advancing others. In a collaborative context, it's also much more ergonomic for PR reviewers to review and re-review when a PR is scoped to a single concern.

I think that this improvement could be done in a follow-up.

Could, but will? 😄 Anyway, not a hard blocker no.

@github-actions github-actions bot added the PR: out-of-date The pull request has merge conflicts and can't be merged label Apr 10, 2025

This pull request has conflicts, please resolve those before we can evaluate the pull request.

@github-actions github-actions bot removed the PR: out-of-date The pull request has merge conflicts and can't be merged label Apr 10, 2025
@LukasTy
Member

LukasTy commented Apr 10, 2025

I've gone through all the Argos changes.
Looks like we have a few actual Argos diffs:

@romgrk
Contributor Author

romgrk commented Apr 10, 2025

@LukasTy
Member

LukasTy commented Apr 10, 2025

Thanks for the replies on the Argos diff.
Yeah, it sounds like we could accept the PrintExport diff as is and look into it in a follow-up. 👌
This looks good to merge from my side, but I'd be keen to hear other thoughts. 😃

@JCQuintas
Member

@bernardobelchior this might affect your changes in the export func. Just to be aware 👍

Member

@cherniavskii cherniavskii left a comment

Assuming we'll address discussions in follow-ups, LGTM

@bernardobelchior
Member

bernardobelchior commented Apr 11, 2025

@bernardobelchior this might affect your changes in the export func. Just to be aware 👍

Thanks for the heads up, @JCQuintas, I'll take a look!

@romgrk romgrk merged commit 230234e into mui:master Apr 11, 2025
21 checks passed
@romgrk romgrk deleted the refactor-css-breaking-change branch April 11, 2025 17:12
"docs:build": "pnpm --filter docs build",
"docs:build": "pnpm run release:build && pnpm --filter docs build",
Member

@oliviertassinari oliviertassinari Apr 13, 2025

Important regression. This change means that the watermark now shows in the docs:

[Screenshot SCR-20250414-bvxp: the watermark showing in the docs]

https://deploy-preview-17214--material-ui-x.netlify.app/x/react-data-grid/#pro-version

Before, we would set MTU5NjMxOTIwMDAwMA== for the release info; now it gets generated at each docs deploy.

It also seems that it takes about 15% more time now to deploy Netlify: https://app.netlify.com/sites/material-ui-x/deploys?page=1&filter=master.

Do we really need to make this change? What's wrong with:

Suggested change
"docs:build": "pnpm --filter docs build",
"docs:build": "pnpm run release:build && pnpm --filter docs build",
"docs:build": "pnpm --filter docs build",

Contributor Author

Before, we would set MTU5NjMxOTIwMDAwMA== for the release info, now it gets generated at each docs deploy.

IIUC, we're setting the package release timestamp to an older value to make it work with the timestamp of the (now expired) license stored in the NEXT_PUBLIC_MUI_LICENSE secret? Should we instead be updating that secret?

Do we really need to do this change

The prod build of the docs now uses the modules as they are published on npm rather than path aliases. It allows us to catch build/export issues like these two, which we caught thanks to this PR:
https://mui-org.slack.com/archives/C042YB5RB3N/p1742407588778789
https://mui-org.slack.com/archives/C05SBSU8NPR/p1741040016736319
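As an aside on that MTU5NjMxOTIwMDAwMA== value (my own check, not something stated in the thread): the release info appears to be a base64-encoded millisecond timestamp, so it can be decoded to see which date was pinned:

```typescript
// Decode a release-info string, assuming it is base64 of a millisecond epoch
// timestamp (illustrative helper; the license package has its own logic).
function decodeReleaseInfo(releaseInfo: string): Date {
  const ms = Number(Buffer.from(releaseInfo, 'base64').toString('utf8'));
  return new Date(ms);
}

console.log(decodeReleaseInfo('MTU5NjMxOTIwMDAwMA==').toISOString());
// → 2020-08-01T22:00:00.000Z, i.e. the pinned release date is in August 2020
```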

Member

@oliviertassinari oliviertassinari Apr 14, 2025

Should we instead be updating that secret?

It's another solution that we could use, but then people could copy it from our docs. Today, they get a license key that expired in 2023.

The prod build of the docs now use the modules as they are published on NPM rather than path aliases. It allows us to catch build/export issues like these two, that we caught thanks to this PR:

So, the question then is: is this worth the CI slowdown? Personally, I'm more an advocate of "CI as fast as possible, and fix bugs quickly as they get reported in issues" 😄. So more on the side of using the raw source if needed.

Member

Before, we would set MTU5NjMxOTIwMDAwMA== for the release info, now it gets generated at each docs deploy.

Can we keep doing that for the packages built specifically for the docs?

Contributor Author

The license discussion can be continued here: #17356 (comment)

So, the question is then, is this worth the CI slowdown? I personally, I'm more the advocate of: "CI as fast as possible, and fix bugs quickly as they get reported in issues"

IMO it's easier to debug and understand an issue when it's reported by the CI directly in the PR that causes it than when we need to read and debug a user report later on. I would be more receptive to that argument if the CI offered a quick feedback loop, but it takes 5-10 minutes to run right now. If I need to debug something, I'll run the commands locally rather than wait that long for the CI to report issues.

Labels
breaking change Introduces changes that are not backward compatible. core Covers general upkeep, such as minor refactoring, typo fixes, or structural changes. scope: data grid Changes or issues related to the data grid product test type: enhancement This is not a bug, nor a new feature