Investigate test differences on the bots #18928


Closed

timvandermeij opened this issue Oct 20, 2024 · 5 comments

timvandermeij commented Oct 20, 2024

In PR #18921 we noticed lots of reference test failures on Windows, but found that they already occurred two days earlier in PR #18918. The last working run was found to be in PR #18905 from five days ago, so something changed in Firefox in the meantime even though we used Firefox 133 for all runs.

Moreover, in PR #18921 we noticed Chrome on Linux timing out in the logs, which caused errors to be reported: TEST-UNEXPECTED-FAIL | test failed chrome has not responded in 120s.
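For automated log triage, a minimal sketch that picks out these browser-timeout failures; the log format is taken directly from the line quoted above, while the helper name is hypothetical:

```python
import re

# Matches timeout failures of the form quoted above, e.g.
# "TEST-UNEXPECTED-FAIL | test failed chrome has not responded in 120s"
TIMEOUT_RE = re.compile(
    r"TEST-UNEXPECTED-FAIL \| test failed (?P<browser>\S+) "
    r"has not responded in (?P<seconds>\d+)s"
)

def find_timeouts(log_text):
    """Return (browser, seconds) pairs for every timeout failure in a log."""
    return [(m.group("browser"), int(m.group("seconds")))
            for m in TIMEOUT_RE.finditer(log_text)]

log = "TEST-UNEXPECTED-FAIL | test failed chrome has not responded in 120s"
print(find_timeouts(log))  # → [('chrome', 120)]
```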

To verify that the problems were not caused by PR #18921 I made a dummy PR that shows the same two issues: #18926.

We should investigate what changed to cause all differences on the Windows bot and what happened with Chrome on the Linux bot. @calixteman Could you perhaps have a look at this?

calixteman commented

A few days ago (Thursday) I deleted the Firefox binary on the two bots in order to get an updated Nightly containing a fix I wrote that removes a failure which was polluting the logs:

JavaScript error: chrome://remote/content/shared/listeners/CachedResourceListener.sys.mjs, line 65: NS_NOINTERFACE: Component returned failure code: 0x80004002 (NS_NOINTERFACE)
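The "delete the binary so the bot picks up a fresh Nightly" step could look roughly like the sketch below. This is a hypothetical illustration: the cache path, the helper name, and the assumption that the bot re-downloads a missing browser on its next run are not taken from the actual bot setup.

```python
import shutil
from pathlib import Path

def remove_cached_browser(cache_dir):
    """Delete a cached browser directory so the next test run fetches
    a fresh build; returns True if anything was removed."""
    cache_dir = Path(cache_dir)
    if cache_dir.exists():
        shutil.rmtree(cache_dir)
        return True
    return False

# Example (assumed location of the bot's Firefox Nightly cache):
nightly = Path.home() / ".cache" / "bot-browsers" / "firefox-nightly"
if remove_cached_browser(nightly):
    print("removed cached Nightly; next run will download a fresh build")
else:
    print("no cached Nightly found")
```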


timvandermeij commented Oct 22, 2024

Would you say the movement in the tests is expected? If so, we should probably just run makeref on the Windows bot (especially with the upcoming image decoder PRs we need to be able to tell if anything changed unexpectedly).

That would then leave us with the Linux bot where Chrome times out at the end. Are there any left-over processes there perhaps, or should we restart the bot?
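The left-over-processes question could be answered with a quick scan on the Linux bot, along the lines of this minimal sketch (Linux-only, since it reads /proc; the process names to look for are assumptions):

```python
import os

def leftover_processes(names=("chrome", "firefox")):
    """Scan /proc for running processes whose name contains any of the
    given substrings; returns (pid, name) pairs."""
    found = []
    for pid in os.listdir("/proc"):
        if not pid.isdigit():
            continue
        try:
            with open(f"/proc/{pid}/comm") as f:
                comm = f.read().strip()
        except OSError:
            continue  # process exited while we were scanning
        if any(name in comm for name in names):
            found.append((int(pid), comm))
    return found

for pid, comm in leftover_processes():
    print(f"left-over process: {comm} (pid {pid})")
```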


Snuffleupagus commented Oct 22, 2024

> If so, we should probably just run makeref on the Windows bot (especially with the upcoming image decoder PRs we need to be able to tell if anything changed unexpectedly).

Whoops, I ran makeref earlier today after landing a PR. Maybe that was a mistake, since I didn't really check the Windows "failures" very carefully beforehand.

calixteman commented

> Would you say the movement in the tests is expected? If so, we should probably just run makeref on the Windows bot (especially with the upcoming image decoder PRs we need to be able to tell if anything changed unexpectedly).

Yeah, some changes have been made around font handling on Windows, so I think it's fine.


Snuffleupagus commented Oct 29, 2024

> Would you say the movement in the tests is expected? If so, we should probably just run makeref on the Windows bot (especially with the upcoming image decoder PRs we need to be able to tell if anything changed unexpectedly).

We've successfully run makeref a number of times since then, so I think we can regard this part as fixed.

> That would then leave us with the Linux bot where Chrome times out at the end. Are there any left-over processes there perhaps, or should we restart the bot?

Looking at recent test-runs, it seems that browsertest generally works in Chrome now; let's close this for now since things appear to be working again.
