fix: update links to point to our repo files #1388
Conversation
👍 Looks good to me! Reviewed everything up to 90c299c in 38 seconds
More details
- Looked at 15 lines of code in 1 file
- Skipped 0 files when reviewing
- Skipped posting 4 drafted comments based on config settings
1. `tests/llm/test_openai/test_multimodal.py:11`
- Draft comment: Ensure the new audio URL points to an existing, accessible file in the repo.
- Reason this comment was not posted: Comment did not seem useful. Confidence it is useful = 0% <= threshold 85%. This comment is asking the PR author to ensure that the new audio URL points to an existing, accessible file in the repo. This is similar to asking the author to double-check something, which is against the rules. The comment does not provide a specific suggestion or point out a specific issue with the code.
2. `tests/llm/test_openai/test_multimodal.py:12`
- Draft comment: Confirm the updated image URL is correct and the image file is hosted in the repo.
- Reason this comment was not posted: Comment did not seem useful. Confidence it is useful = 0% <= threshold 85%. This comment is asking the PR author to confirm the correctness of an image URL and its hosting, which violates the rule against asking for confirmation or verification. It does not provide a specific suggestion or point out a clear issue.
3. `tests/llm/test_openai/test_multimodal.py:11`
- Draft comment: Updated `audio_url` to point to our repo asset. Consider pinning the commit hash or a tag instead of `main` for long-term stability.
- Reason this comment was not posted: Confidence changes required: 80% <= threshold 85%.
4. `tests/llm/test_openai/test_multimodal.py:12`
- Draft comment: Updated `image_url` to point to our repo asset. Consider pinning the commit hash or tag instead of using `main` to ensure asset stability.
- Reason this comment was not posted: Confidence changes required: 80% <= threshold 85%.
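The pinning suggestion in the two draft comments above can be sketched as follows. This is a hedged illustration, not the PR's actual diff: the commit SHA is taken from the review header, but the asset path is hypothetical.

```python
# Sketch: pin a raw.githubusercontent.com asset URL to an immutable commit
# SHA (or release tag) instead of the mutable "main" branch, so the test
# keeps working even if the file later moves on main.
COMMIT_SHA = "90c299c"  # any SHA or tag; "90c299c" is the reviewed commit

# NOTE: the file path below is illustrative, not the repo's real asset path.
audio_url = (
    "https://raw.githubusercontent.com/instructor-ai/instructor/"
    f"{COMMIT_SHA}/tests/assets/sample-audio.wav"
)
```

Pinning trades auto-updating assets for reproducibility: the URL never changes meaning, at the cost of needing a manual bump when the asset is replaced.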
Workflow ID: wflow_UgSWAtfhCdTJLPAX
You can customize Ellipsis with 👍 / 👎 feedback, review rules, user-specific overrides, quiet mode, and more.
Deploying instructor-py with Cloudflare Pages.

- Latest commit: c9a930f
- Status: ✅ Deploy successful!
- Preview URL: https://8aa0e34e.instructor-py.pages.dev
- Branch Preview URL: https://fix-tests.instructor-py.pages.dev
❌ Changes requested. Incremental review on 8ac8cde in 1 minute and 10 seconds
More details
- Looked at 12 lines of code in 1 file
- Skipped 0 files when reviewing
- Skipped posting 1 drafted comment based on config settings
1. `tests/llm/test_perplexity/test_modes.py:63`
- Draft comment: Add a reason to the pytest.skip marker (e.g., `@pytest.mark.skip(reason='...')`) to clarify why this test is being skipped.
- Reason this comment was not posted: Marked as duplicate.
Workflow ID: wflow_iGdHRktfPUqXThUK
Want Ellipsis to fix these issues? Tag @ellipsis-dev in a comment. You can customize Ellipsis with 👍 / 👎 feedback, review rules, user-specific overrides, quiet mode, and more.
👍 Looks good to me! Incremental review on 99e7cce in 36 seconds
More details
- Looked at 25 lines of code in 2 files
- Skipped 0 files when reviewing
- Skipped posting 4 drafted comments based on config settings
1. `.github/workflows/test.yml:28`
- Draft comment: Added 'and not perplexity' to test filter. Verify that tests tagged with 'perplexity' are correctly isolated.
- Reason this comment was not posted: Comment did not seem useful. Confidence it is useful = 0% <= threshold 85%. The comment is asking the author to verify that tests tagged with 'perplexity' are correctly isolated. This falls under asking the author to ensure something is tested or verified, which is against the rules.
2. `.github/workflows/test_docs.yml:36`
- Draft comment: Switched from 'poetry run pytest' to 'uv run pytest' to align with project tooling. Confirm consistency with other test jobs.
- Reason this comment was not posted: Comment did not seem useful. Confidence it is useful = 50% <= threshold 85%. The comment suggests a change in the command used for running tests, which is a specific suggestion. However, it asks to confirm consistency with other test jobs, which is against the rules. The comment could be useful if it only pointed out the change and suggested ensuring consistency without asking for confirmation.
3. `.github/workflows/test.yml:27`
- Draft comment: Ensure the exclusion of perplexity tests is intentional. A comment might help clarify why 'and not perplexity' was added.
- Reason this comment was not posted: Confidence changes required: 66% <= threshold 85%.
4. `.github/workflows/test_docs.yml:36`
- Draft comment: Standardize the test command by replacing 'poetry run pytest' with 'uv run pytest'. Confirm that this change aligns with workflow consistency.
- Reason this comment was not posted: Confidence changes required: 66% <= threshold 85%.
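The 'not perplexity' expression the draft comments discuss is how pytest deselects a group of tests. A hedged sketch of the mechanism, assuming the workflow uses a marker expression (`-m "not perplexity"`); if it actually uses a keyword filter (`-k`), the same expression matches on test names instead. Marker registration in `pyproject.toml`/`pytest.ini` is assumed and the test name below is hypothetical.

```python
import pytest

# Tests carrying this marker are deselected when pytest runs with
# `-m "not perplexity"`, which is how CI can skip a whole provider's
# tests when its API key is absent.
@pytest.mark.perplexity  # assumed custom marker, registered in pytest config
def test_requires_perplexity_key():
    ...  # placeholder body
```

This keeps provider-specific tests runnable locally (where a key may exist) while excluding them from CI with one flag, rather than skipping each test individually.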
Workflow ID: wflow_2TMG0KZKcO95P0KM
You can customize Ellipsis with 👍 / 👎 feedback, review rules, user-specific overrides, quiet mode, and more.
👍 Looks good to me! Incremental review on c9a930f in 35 seconds
More details
- Looked at 22 lines of code in 1 file
- Skipped 0 files when reviewing
- Skipped posting 4 drafted comments based on config settings
1. `.github/workflows/test.yml:28`
- Draft comment: Inconsistent perplexity exclusion: Removed 'and not perplexity' filter in the test run step compared to the coverage step. Confirm if this is intentional.
- Reason this comment was not posted: Confidence changes required: 66% <= threshold 85%.
2. `.github/workflows/test.yml:43`
- Draft comment: Coverage step now adds 'and not perplexity' exclusion. Ensure the filtering strategy is consistent across all test steps.
- Reason this comment was not posted: Confidence changes required: 66% <= threshold 85%.
3. `.github/workflows/test.yml:27`
- Draft comment: Inconsistent filter: The 'not perplexity' clause was removed here compared to before. Confirm that perplexity tests should run in non-coverage runs.
- Reason this comment was not posted: Confidence changes required: 80% <= threshold 85%.
4. `.github/workflows/test.yml:43`
- Draft comment: Inconsistent filter: The coverage command now excludes 'perplexity' tests while the test run does not. Ensure consistent filtering for perplexity tests.
- Reason this comment was not posted: Confidence changes required: 80% <= threshold 85%.
Workflow ID: wflow_LEiNpw4BLyW3vykx
You can customize Ellipsis with 👍 / 👎 feedback, review rules, user-specific overrides, quiet mode, and more.
@jxnl the perplexity tests were failing because we don't have an API key set in CI. Also fixed the multimodal tests so they reference assets in our pipeline itself.
Important

Update links to repo files, modify test commands in workflows, and skip a test case.

- Update `audio_url` and `image_url` in `test_multimodal.py` to point to files in the `instructor-ai/instructor` repo.
- Add `not perplexity` to the coverage report command in `test.yml`.
- Change `test_docs.yml` from `poetry run` to `uv run`.
- Add `@pytest.mark.skip()` to `test_complex_nested_model` in `test_modes.py`.

This description was created by Ellipsis for c9a930f. It will automatically update as commits are pushed.