Test suite #528
base: master
Conversation
Add comprehensive testing plan
Implement test plan phase 1
Implement test plan phase 2
Implement test plan phase 3
Implement Test Plan Phase 4
Add Python 3.2 env to tox and CI
Implement Test Plan Phase 6
Pull Request Overview
Adds a full pytest-based test suite, updates test tooling/config to use pytest, and adds support for Python 3.2.
- Introduce modularized tests under `tests/` and a test helper in `tests/util.py`
- Update `tox.ini`, `setup.py`, `.travis.yml`, and `pytest.ini` to add Python 3.2 and switch to `pytest`
- Enhance `conftest.py` for custom `.docopt` test collection (a sketch of the compatibility shim follows this list) and include a Test-Plan document
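For readers unfamiliar with the `from_parent` issue the overview mentions: pytest 5.4 deprecated constructing collection nodes directly in favor of a `from_parent` classmethod, while older pytest versions lack that classmethod entirely. The PR's actual `conftest.py` is not reproduced on this page; a minimal sketch of a shim covering both APIs might look like the following, where `make_node` is an illustrative name, not the PR's code:

```python
# Illustrative shim, not the PR's conftest.py; only the
# from_parent-vs-constructor split is the point.
def make_node(cls, parent, name):
    if hasattr(cls, 'from_parent'):
        # pytest >= 5.4: direct construction of nodes is deprecated.
        return cls.from_parent(parent, name=name)
    # Older pytest: nodes are constructed directly.
    return cls(name, parent=parent)
```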
Reviewed Changes
Copilot reviewed 19 out of 19 changed files in this pull request and generated 2 comments.
File | Description
---|---
`tox.ini` | Add `py32` to `envlist`, set `skip_missing_interpreters`, switch to `pytest`
`setup.py` | Add `Programming Language :: Python :: 3.2` classifier
`.travis.yml` | Include Python 3.2 in the CI matrix
`pytest.ini` | Configure pytest options (`addopts = -ra`)
`conftest.py` | Support both old and new pytest `from_parent` APIs for custom tests
`tests/util.py` | Add `run_docopt` helper for invoking docopt in tests
`tests/test_*.py` | Add comprehensive feature-focused test modules
`doc/python-3.2.5/Test-Plan.md` | New test-plan document detailing suite organization
`README.rst` | Update testing instructions and supported Python versions
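The review excerpts below call `run_docopt` from `tests/util.py`. The PR's actual helper is not reproduced on this page; a minimal sketch consistent with how the excerpts use it, assuming it simply splits a string of arguments and hands them to docopt, might be:

```python
# Hypothetical sketch of the tests/util.py helper; the real implementation
# may differ (e.g. in how it handles SystemExit on bad usage).
from docopt import docopt


def run_docopt(doc, argv):
    # docopt accepts argv as a list of tokens, excluding the program name.
    return docopt(doc, argv=argv.split())
```

With that assumption, the first excerpt builds a usage line with 20 repeated `ARG` positionals and checks that they are collected in order.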
```python
doc = 'Usage: prog ' + 'ARG ' * 20
argv = ' '.join('v{}'.format(i) for i in range(20))
result = run_docopt(doc, argv)
assert all(result['ARG'][i] == 'v{}'.format(i) for i in range(20))
```
[nitpick] Use a direct equality assertion (e.g. `assert result['ARG'] == ['v{}'.format(i) for i in range(20)]`) for clearer failure messages.

Suggested change:
```diff
- assert all(result['ARG'][i] == 'v{}'.format(i) for i in range(20))
+ assert result['ARG'] == ['v{}'.format(i) for i in range(20)]
```
```python
for i in range(depth):
    assert result['cmd{}'.format(i)] == (i == depth - 1)
```
[nitpick] Consider comparing the entire `result` dict against an expected dict (e.g. `{'cmd{}'.format(i): i == depth - 1 for i in range(depth)}`) to simplify and clarify the test.

Suggested change:
```diff
- for i in range(depth):
-     assert result['cmd{}'.format(i)] == (i == depth - 1)
+ expected = {'cmd{}'.format(i): i == depth - 1 for i in range(depth)}
+ assert result == expected
```
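For context on this second excerpt: a self-contained test in that shape, reusing the hypothetical `run_docopt` above, could build a flat alternatives pattern and pass only the last command on argv. The usage string and `depth` value here are assumptions for illustration, not the PR's actual test; only the assertion matches the excerpt:

```python
def test_command_alternatives():
    depth = 20
    # Hypothetical usage string: prog (cmd0|cmd1|...|cmd19).
    doc = 'Usage: prog (' + '|'.join('cmd{}'.format(i) for i in range(depth)) + ')'
    result = run_docopt(doc, 'cmd{}'.format(depth - 1))
    for i in range(depth):
        # Only the command actually given on argv should be True.
        assert result['cmd{}'.format(i)] == (i == depth - 1)
```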