avoid setting debug property if not needed #18872
Conversation
thanks for the PR. please follow step 3 to update the samples so that the CI can verify the change.

updated the samples

Hi.

can you please review the test failures when you've time?

Oh, it was the fact that

that's ok. all tests passed. thanks again for the PR. if no one has further feedback on this PR, i'll merge it later this week.
Not setting `self.debug` to a specific value will not set `self.__debug`, which may create `AttributeError` because the `Configuration` object is not correctly initialized.
I would also suggest to add a few tests to actually show this behavior:

- `Configuration` created without passing `debug`
- `Configuration` created with `debug=False`
- `Configuration` created with `debug=True`

In all three cases, `configuration.debug` should be called to verify the expected behavior. With this change, the first test would fail.
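The three cases above can be sketched with a minimal stand-in class (hypothetical, not the real `petstore_api.Configuration`) that mirrors the discussed change, showing why the first case would fail:

```python
class Configuration:
    """Hypothetical stand-in for the generated Configuration class."""

    def __init__(self, debug=None):
        # Mirrors the PR change: the property is only assigned when a
        # value is passed, so __debug may never be initialized.
        if debug is not None:
            self.debug = debug

    @property
    def debug(self):
        return self.__debug  # raises AttributeError if never set

    @debug.setter
    def debug(self, value):
        self.__debug = value


# The three cases the review asks to cover:
for kwargs, expected in [({}, False), ({"debug": False}, False), ({"debug": True}, True)]:
    try:
        assert Configuration(**kwargs).debug is expected
        print(f"Configuration(**{kwargs}) -> debug={expected}")
    except AttributeError:
        print(f"Configuration(**{kwargs}) raised AttributeError")
```

The first iteration raises `AttributeError` because the name-mangled `_Configuration__debug` attribute was never created; the other two pass.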
```python
if debug is not None:
    self.debug = debug
```
If this `self.debug` property is not always set, then the underlying `__debug` attribute will also not always be set, which may produce `AttributeError`.

I would suggest at least setting `self.__debug` to `False` by default, so that the `debug` setter/getter cannot fail, and all the code that also uses this attribute will not fail either.
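The suggested default can be sketched like this (a hypothetical simplification, not the generated code): initialize the private attribute first, then honor an explicit constructor value.

```python
class Configuration:
    """Hypothetical sketch of the suggested fix."""

    def __init__(self, debug=None):
        # Give the private attribute a default so the getter can never
        # raise AttributeError, then honor an explicit value if given.
        self.__debug = False
        if debug is not None:
            self.debug = debug

    @property
    def debug(self):
        return self.__debug

    @debug.setter
    def debug(self, value):
        self.__debug = value


print(Configuration().debug)            # False: safe default
print(Configuration(debug=True).debug)  # True: explicit value wins
```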
I can easily add a default value for `self.__debug`, but I'm not sure how to write the tests.
Can you point me to a sample I can adapt?
You can check here for instance: https://github.com/OpenAPITools/openapi-generator/blob/master/samples/openapi3/client/petstore/python/tests/test_configuration.py

There are several tests around this class; you should be able to create a test that fails with your current change and make it pass by setting an initial value for `__debug`.

There may be other places with this kind of tests (they are written manually), but I didn't check.
Uhm... I was starting to write some tests, but it seems there are some unrelated failures.
```
$ cd samples/openapi3/client/petstore/python
$ poetry install
[...]
$ poetry run pytest
[...]
================================================= short test summary info ==================================================
FAILED tests/test_model.py::ModelTests::test_valdiator - AttributeError: 'ModelTests' object has no attribute 'assertEquals'. Did you mean: 'assertEqual'?
FAILED tests/test_pet_api.py::PetApiTests::test_add_pet_and_get_pet_by_id - urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=80): Max retries exceeded with url: /v2/pet...
FAILED tests/test_pet_api.py::PetApiTests::test_add_pet_and_get_pet_by_id_with_http_info - urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=80): Max retries exceeded with url: /v2/pet...
FAILED tests/test_pet_api.py::PetApiTests::test_add_pet_and_get_pet_by_id_without_preload_content_flag - urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=80): Max retries exceeded with url: /v2/pet...
FAILED tests/test_pet_api.py::PetApiTests::test_delete_pet - urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=80): Max retries exceeded with url: /v2/pet...
FAILED tests/test_pet_api.py::PetApiTests::test_find_pets_by_status - urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=80): Max retries exceeded with url: /v2/pet...
FAILED tests/test_pet_api.py::PetApiTests::test_find_pets_by_tags - urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=80): Max retries exceeded with url: /v2/pet...
FAILED tests/test_pet_api.py::PetApiTests::test_get_pet_by_id_serialize - urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=80): Max retries exceeded with url: /v2/pet...
FAILED tests/test_pet_api.py::PetApiTests::test_update_pet - urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=80): Max retries exceeded with url: /v2/pet...
FAILED tests/test_pet_api.py::PetApiTests::test_update_pet_with_form - urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=80): Max retries exceeded with url: /v2/pet...
FAILED tests/test_pet_api.py::PetApiTests::test_upload_file - urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=80): Max retries exceeded with url: /v2/pet...
================================== 11 failed, 264 passed, 3 skipped, 9 warnings in 14.25s ==================================
```
I'm normally using Python 3.12, where `assertEquals` was removed.
Should I also update the unrelated tests?
- For `assertEquals`, this should be migrated in another PR I think. Can you test with 3.11 instead?
- For the URL failures, there's an explanation of how to start a test container to support the tests at the top of `tests/test_pet_api.py`
(oh, and for the various `PetApiTests` failed tests, it's easy to follow the instructions in the tests)

openapi-generator/samples/openapi3/client/petstore/python/tests/test_pet_api.py, lines 6 to 11 in 793aba7:

```
Run the tests.
$ docker pull swaggerapi/petstore
$ docker run -d -e SWAGGER_HOST=http://petstore.swagger.io -e SWAGGER_BASE_PATH=/v2 -p 80:8080 swaggerapi/petstore
$ pip install -U pytest
$ cd petstore_api-python
$ pytest
```
Updated the PR to add explicit checks of the behavior for the various values you can pass to the constructor.
```diff
@@ -137,6 +137,7 @@ conf = {{{packageName}}}.Configuration(
     """

     _default = None
+    __debug = False
```
There's no reason to make it a class attribute; can you move this to an instance attribute at line 217?
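The class-attribute vs instance-attribute distinction behind this review comment can be illustrated with a small sketch (hypothetical class names; for an immutable bool like `__debug` both behave the same once the setter rebinds it on the instance, but a class attribute is shared default state, which is surprising with mutable values and scatters the object's state outside `__init__`):

```python
class WithClassDefault:
    _flag = []  # class attribute: one list shared by every instance

class WithInstanceDefault:
    def __init__(self):
        self._flag = []  # instance attribute: each object owns its list

a, b = WithClassDefault(), WithClassDefault()
a._flag.append("x")
print(b._flag)  # ['x'] -- the mutation is visible through every instance

c, d = WithInstanceDefault(), WithInstanceDefault()
c._flag.append("x")
print(d._flag)  # [] -- unaffected, each instance has its own list
```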
done
```diff
@@ -76,5 +76,14 @@ def test_get_host_from_settings(self):
         self.assertEqual("http://petstore.swagger.io:8080/v2", self.config.get_host_from_settings(0, {'port': '8080'}))
         self.assertEqual("http://dev-petstore.swagger.io:8080/v2", self.config.get_host_from_settings(0, {'server': 'dev-petstore', 'port': '8080'}))

+    def testConfigurationDebug(self):
+        for debug, expected in [(True, True), (False, False), (None, False)]:
+            with self.subTest(debug=debug, expected=expected):
```
TIL about unittest's `subTest` 🙇
```python
with self.subTest(expected=False):
    c = petstore_api.Configuration()
    self.assertEqual(expected, c.debug)
```
I think this works a bit by chance, since `expected` in the assert is the one taken from the for-loop at line 80.
Can you make the test a bit more explicit?
```diff
-with self.subTest(expected=False):
-    c = petstore_api.Configuration()
-    self.assertEqual(expected, c.debug)
+expected = False
+with self.subTest(debug="unset", expected=expected):
+    c = petstore_api.Configuration()
+    self.assertEqual(expected, c.debug)
```
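The "works a bit by chance" point comes from a Python scoping detail: a for-loop's variables stay bound after the loop ends, so a bare assertion written below the loop silently reuses the last iteration's value. A tiny sketch mirroring the test's loop values:

```python
for debug, expected in [(True, True), (False, False), (None, False)]:
    pass  # pretend each pair ran its own subTest here

# After the loop, both names remain bound to the last iteration's values,
# so a later assert on `expected` happens to compare against False.
print(debug, expected)  # None False
```

Rebinding `expected = False` explicitly before the extra `subTest`, as in the suggestion above, removes the accidental dependency on loop order.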
d'oh, you're right!
Rewrote them and added an explicit message.
Thanks for the tests and fixes!
I was just wondering if there is anything else to do on my side.

@wing328 can you check this pull request?

Thanks!
This PR adds an optional boolean flag to enable, disable, or keep as-is the debug property.
fixes #18871
@cbornet @tomplus @krjakbrjak @fa0311 @multani
PR checklist

- Commit all changed files. This is important, as CI jobs will verify all generator outputs of your HEAD commit as it would merge with master. These must match the expectations made by your contribution. You may regenerate an individual generator by passing the relevant config(s) as an argument to the script, for example `./bin/generate-samples.sh bin/configs/java*`. IMPORTANT: Do NOT purge/delete any folders/files (e.g. tests) when regenerating the samples as manually written tests may be removed.
- Target branch: `master` (upcoming 7.6.0 minor release - breaking changes with fallbacks), `8.0.x` (breaking changes without fallbacks)