live-tests: add regression tests suite #35837
Conversation
Looks great @alafanechere! I only reviewed the files changed in the last two commits because it looks like the rest won't have a diff once the merge conflict is fixed.
One main comment, about the use of fixtures.
return ConnectorRunner(dagger_client, **spec_target_execution_inputs.to_dict())

@pytest.fixture
Declaring this (and the execution results for other commands) as a pytest fixture has the effect of executing them before any tests run. Is there a reason you prefer to do it this way rather than getting the result as part of the test? I have a slight preference for the latter - to me, executing the calls feels like part of the test behavior, so it should fail during the execution of a specific test.
Also, will declaring these as fixtures end up running all commands, even if we only want to run a single test? Or will they be executed lazily?
I think it's indeed a fair point of discussion.
As these fixtures are lazily evaluated on test request, I found it useful to declare this logic in fixtures.
IMO it avoids a lot of code duplication if we create multiple tests performing assertions on command execution results, and it makes sure the execution report is created.
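For context, pytest instantiates a fixture only when a test requests it, and caches the result for the fixture's scope, so the command runs at most once per scope. A minimal sketch of that behavior is below; the `run_command` helper and the fixture/test names are hypothetical illustrations, not the actual fixtures defined in this PR.

```python
import pytest

# Hypothetical helper: runs a connector command and returns its output.
# The real suite drives connectors through dagger / ConnectorRunner; this
# placeholder only exists to illustrate fixture laziness and reuse.
def run_command(command: str) -> dict:
    return {"command": command, "output": f"<{command} output>"}

@pytest.fixture(scope="session")
def spec_control_execution_result() -> dict:
    # Only evaluated when a test requests this fixture, then cached for the
    # rest of the session, so the spec command executes at most once.
    return run_command("spec")

def test_spec_has_output(spec_control_execution_result):
    assert spec_control_execution_result["output"]

def test_spec_command_name(spec_control_execution_result):
    # Reuses the cached result; the spec command is not executed again.
    assert spec_control_execution_result["command"] == "spec"
```

With this layout, selecting a single test (e.g. `pytest -k test_spec_has_output`) means fixtures requested only by deselected tests never execute, which is the lazy behavior described above.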
Preemptively approving since we can address the fixture question whenever we'd like.
What
Relates to https://github.com/airbytehq/airbyte-internal-issues/issues/6483
This PR bootstraps a regression test suite.
It declares fixtures with the execution results of commands run on the control and target connector versions.
It implements a basic expected records test for demonstration; we will port CAT's expected records test later.
The test execution persists test artifacts that can be useful for debugging test failures.
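For illustration only (this is not the code added by the PR), a regression test in this spirit could compare the records produced by the control and target connector versions and persist both outputs as debugging artifacts; the `read_records` helper, image tags, and fixture names below are hypothetical.

```python
import json
from pathlib import Path

import pytest

# Hypothetical helper returning the records emitted by a connector `read`
# command for a given image. The real suite runs the connectors through
# dagger; this stub only exists to make the comparison logic runnable.
def read_records(connector_image: str) -> list[dict]:
    return [{"stream": "users", "data": {"id": 1}}]

@pytest.fixture(scope="session")
def control_records() -> list[dict]:
    return read_records("airbyte/source-faker:latest")  # control version (assumed tag)

@pytest.fixture(scope="session")
def target_records() -> list[dict]:
    return read_records("airbyte/source-faker:dev")  # target version (assumed tag)

def test_expected_records(control_records, target_records, tmp_path: Path):
    # Persist both outputs as artifacts so failures can be inspected later.
    (tmp_path / "control_records.json").write_text(json.dumps(control_records))
    (tmp_path / "target_records.json").write_text(json.dumps(target_records))
    assert target_records == control_records
```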
Recommended reading order
airbyte-ci/connectors/live-tests/src/live_tests/regression_tests/conftest.py
airbyte-ci/connectors/live-tests/src/live_tests/regression_tests/test_expected_records.py
Test output example