[core] Setup testing to work with CSS imports #17214
Conversation
Deploy preview: https://deploy-preview-17214--material-ui-x.netlify.app/
This pull request has conflicts, please resolve those before we can evaluate the pull request.
.circleci/config.yml (outdated)

```diff
@@ -43,7 +47,7 @@ default-job: &default-job
     COREPACK_ENABLE_DOWNLOAD_PROMPT: '0'
   working_directory: /tmp/mui
   docker:
-    - image: cimg/node:20.19
+    - image: mcr.microsoft.com/playwright:v1.51.0-noble
```
Why switch to a different image?
The image is already used in some of the tests; this change just makes us use it consistently. Otherwise, some of the run steps would need to be adapted to run on different environments.
We're not running our scripts on this image locally either, right? Why not keep them minimally portable between the two images we use? Now we pull in an image with one Node version and override it with another, with no layer caching possible.
To add insult to injury: the runtime of every CI step is increased by a not-insignificant amount. 🙈 Spin-up of `cimg/node` is ~3s, while the Playwright image is ~40s. 🤔
I propose we keep using `cimg/node` images (just updated to Node 23) for steps that do not need Playwright.
As for tests requiring Playwright, WDYT about manually installing it over the `cimg/node` image? 🤔
> As for tests requiring Playwright, WDYT about manually installing it over the cimg/node image?

AFAIK Playwright needs some packages that are not present in `cimg/node`; IMO two images really is the best option for our CI.
Can we avoid this huge Argos diff somehow?
Frankly, it's not a great time for a bulk screenshot update, as it will cause some friction for other PRs ahead of the stable release 😅
I agree with Rom on this one.
If that's the best compromise now, then I don't see a big problem accepting the diff and moving forward with it. 🤷
Manually installing Playwright plus system deps takes ~50s.
The startup of the Playwright image also takes a wildly varying amount of time... 🤷
- Sometimes it's cached: 2s, plus 13s for the ffmpeg install
- Other times a fresh spin-up takes 70s, plus 13s for the ffmpeg install
- Or even 36s for spin-up and 32s for the ffmpeg install

So it seems that the current setup can also take up to ~80s of env setup in some cases.
But if we want to optimize that, we should probably publish our own Docker image (based on `cimg/node`) with Playwright plus `ffmpeg` set up (another ~40s); we could maybe save ~60s on each job that requires Playwright.
That is a great suggestion. 👍
However, it would mean we inherit one more thing to maintain (keep updated) in our "infra". 🙈
I have also set up pnpm package caching, which should save an additional ~50s on all jobs.
I think that when we discussed this, we agreed the difference between a fresh install every time and a cache pull-and-restore is up to ~10s, hence every repo agreed to ditch caching to save on storage credits and keep pulling fresh. 🤔
I removed pnpm caching.
I noticed that Playwright bundles an `ffmpeg` version; we could save time by using that instead of installing a system version.
I also looked a bit into the `cimg/node:*-browsers` images to avoid installing the Playwright browsers, but it looks like those still need to download browsers (done via CircleCI tools), so I'm not sure it would save time. The `test:regressions` job could save time by only installing Chromium, though (right now all 3 browsers are installed).
> I also looked a bit into the cimg/node:*-browsers images to avoid installing playwright browsers, but it looks like those still need downloading browsers (but done via circleci tools) so I'm not sure it would save time.

It looks like they support only Google Chrome and Firefox, whereas we use the Playwright defaults (Chromium, Firefox, and WebKit).
So, no luck, I guess... 🙈 🤷
> The test:regressions could save time by only installing chromium though (right now all 3 browsers are installed).

Yeah, that could be a nice improvement. 👍
Or we could look into caching at least the Playwright installs, since they change much less frequently than the `pnpm-lock.yaml` file. 🤔
Co-authored-by: Andrew Cherniavskii <[email protected]>
Signed-off-by: Rom Grk <[email protected]>
Force-pushed from 26f3f98 to c0036e6.
This pull request has conflicts, please resolve those before we can evaluate the pull request.
LGTM. 👍
Thank you for the massive effort addressing this. 🙏
However, it looks like the `e2e` and `regression` tests run slightly slower after these changes. 🙈 🤷
```ts
import { setupFakeClock, restoreFakeClock } from '../utils/setupFakeClock'; // eslint-disable-line
import { generateTestLicenseKey, setupTestLicenseKey } from '../utils/testLicense'; // eslint-disable-line
```
Nitpick: What are these `eslint-disable` comments for? 🤔 🤷
`test` and `test/regressions` each have a `package.json`; eslint doesn't want us to import files across packages.
Gotcha, thanks for clarifying. 👍
> eslint doesn't want us to import files across packages.

One of the reasons is that we have observed it sometimes causing more of the ESM tech debt that you've run into before. Most tools and runtimes assume the package boundary is never breached, as breaching it would make no sense in an installed environment.
This is one of the eslint rules that is definitely not there for preference reasons, and if possible we should always avoid disabling it. Would it be possible to refactor the code to make this unnecessary? Perhaps these helpers belong in the test utils package?
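A minimal sketch of what that refactor could look like, assuming the helpers move into a dedicated test-utils workspace package (the package name below is hypothetical, not one that exists in the repo):

```ts
// Hypothetical: with the helpers in a shared workspace package, each consumer
// declares an explicit dependency in its package.json
// ("@mui/x-internal-test-utils": "workspace:*") and imports by package name
// instead of reaching across the package boundary with a relative path.
import { setupFakeClock, restoreFakeClock } from '@mui/x-internal-test-utils/setupFakeClock';
import { generateTestLicenseKey, setupTestLicenseKey } from '@mui/x-internal-test-utils/testLicense';
```

This would keep the dependency explicit, so tooling (eslint, change detection) can see it.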
I think that this improvement could be done in a follow-up. 🙈
For actual packages it would make sense to be strict with that rule, but this is test code that is only run as-is and is never installed in any other configuration than this one. I'm not modifying the layout of the test folder; importing from `./test/utils` in `./test/regressions` is already present on master: https://github.com/mui/mui-x/blob/master/test/regressions/index.tsx#L7. This PR is already big enough; I don't want to move more stuff around for the moment.
> For actual packages

The last time we had a problem around this was in the Pigment regression tests, which imported the docs, with change detection not running the regression tests when the docs changed. It made the assumption that it could rely on package dependencies to know which dependent code to run, but no such dependency existed for the regression tests, as they just reached out of the folder. This made some regressions go unnoticed for a while. Both of those workspaces are not installable packages.

> is already present on master

But that is not crossing a workspace boundary; adding a package.json file changes things. Contributors and tooling make different assumptions about isolation between workspaces. They often expect explicit dependencies to be defined, while reaching out of the folder essentially creates an implicit dependency.

> This PR is already big enough

My recommendation in such cases would always be to split up the PR; for example, the regression test migration is not a prerequisite for adding the CSS imports. The advantage of smaller, better-scoped PRs is that a blocker on a single change doesn't block all the changes. It allows us to not rush some parts while still advancing others. In a collaborative context, it's also much more ergonomic for reviewers to review and re-review a PR that is scoped to a single concern.

> I think that this improvement could be done in a follow-up.

Could, but will it? 😄 Anyway, not a hard blocker, no.
This pull request has conflicts, please resolve those before we can evaluate the pull request.
I've gone through all the Argos changes.
Thanks for the replies on the Argos diff.
@bernardobelchior this might affect your changes in the export func. Just to be aware 👍
Assuming we'll address discussions in follow-ups, LGTM
Thanks for the heads up, @JCQuintas, I'll take a look!
"docs:build": "pnpm --filter docs build", | ||
"docs:build": "pnpm run release:build && pnpm --filter docs build", |
Important regression. This change means that the watermark now shows in the docs:

https://deploy-preview-17214--material-ui-x.netlify.app/x/react-data-grid/#pro-version
Before, we would set `MTU5NjMxOTIwMDAwMA==` for the release info; now it gets generated at each docs deploy.
It also seems that the Netlify deploy now takes about 15% more time: https://app.netlify.com/sites/material-ui-x/deploys?page=1&filter=master.
Do we really need this change? What's wrong with the following?
"docs:build": "pnpm --filter docs build", | |
"docs:build": "pnpm run release:build && pnpm --filter docs build", | |
"docs:build": "pnpm --filter docs build", |
> Before, we would set MTU5NjMxOTIwMDAwMA== for the release info, now it gets generated at each docs deploy.

IIUC, we're setting the package release timestamp to an older value to make it work with the timestamp of the (now expired) license stored in the `NEXT_PUBLIC_MUI_LICENSE` secret? Should we instead be updating that secret?
> Do we really need to do this change?

The prod build of the docs now uses the modules as they are published on npm rather than path aliases. It allows us to catch build/export issues like these two, which we caught thanks to this PR:
https://mui-org.slack.com/archives/C042YB5RB3N/p1742407588778789
https://mui-org.slack.com/archives/C05SBSU8NPR/p1741040016736319
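For readers less familiar with the setup, the idea is roughly the following (a sketch only, with assumed file and path names, not the actual docs configuration):

```ts
// docsAliases.ts: illustration only; package and path names are assumptions.
import path from 'path';

// In dev, alias the package name to its TypeScript source for fast iteration.
// In a prod docs build, return no aliases, so imports resolve to the built
// packages in node_modules, the same artifacts users install from npm.
export function docsAliases(env: 'development' | 'production') {
  if (env === 'production') {
    return {};
  }
  return {
    '@mui/x-data-grid': path.resolve(__dirname, '../packages/x-data-grid/src'),
  };
}
```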
> Should we instead be updating that secret?

It's another solution we could use, but then people could copy the key from our docs. Today, they get a license key that expired in 2023.
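For context, the release info string appears to be just a base64-encoded Unix timestamp in milliseconds; a quick Node sketch (assuming that format) decodes the pinned value:

```ts
import { Buffer } from 'node:buffer';

// Assumption: the release info is base64(String(<release time in ms>)).
const releaseInfo = 'MTU5NjMxOTIwMDAwMA==';
const timestampMs = Number(Buffer.from(releaseInfo, 'base64').toString());
console.log(new Date(timestampMs).toISOString()); // 2020-08-01T22:00:00.000Z
```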
> The prod build of the docs now uses the modules as they are published on npm rather than path aliases. It allows us to catch build/export issues like these two, which we caught thanks to this PR:

So the question then is: is this worth the CI slowdown? Personally, I'm more an advocate of "CI as fast as possible, and fix bugs quickly as they get reported in issues" 😄, so more on the side of using the raw source if needed.
> Before, we would set MTU5NjMxOTIwMDAwMA== for the release info, now it gets generated at each docs deploy.

Can we keep doing that for the packages built specifically for the docs?
The license discussion can be continued here: #17356 (comment)

> So the question then is: is this worth the CI slowdown? Personally, I'm more an advocate of "CI as fast as possible, and fix bugs quickly as they get reported in issues"

IMO it's easier to debug and understand an issue when the CI reports it directly in the PR that causes it than when we need to read and debug a user report later on. I would be more receptive to that argument if the CI offered a quick feedback loop, but it takes 5-10 minutes to run right now. If I need to debug something, I'll run the commands locally rather than wait that long for the CI to report issues.
Split from #16677
In preparation for the Data Grid styling solution that will use CSS stylesheets, this adds a placeholder stylesheet and migrates the test setup to Vite to get CSS imports working.
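For readers following along, the general shape of such a setup looks something like this minimal sketch, assuming Vitest on top of Vite (not the PR's actual configuration):

```ts
// vitest.config.mts: minimal sketch, not the PR's actual config.
import { defineConfig } from 'vitest/config';

export default defineConfig({
  test: {
    environment: 'jsdom',
    // Process `import './styles.css'` statements instead of stubbing them out,
    // so components are exercised against real stylesheets.
    css: true,
  },
});
```

With this, a test importing a component that does `import './placeholder.css'` resolves the stylesheet through Vite instead of failing on an unknown file type.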