Refactor tests in test_check.py to use pytest.mark.parametrize for better test isolation #1172


Closed
wants to merge 11 commits

Conversation


@Karl-Michaud Karl-Michaud commented May 15, 2025

Proposed Changes

Made 2 key changes:

  1. Added the pytest.mark.parametrize annotation to pull the inputs out of similar unit test functions
  2. Optimized the test input list

As explained above, I used the pytest.mark.parametrize annotation. This in turn meant that I could delete 3 unit tests and modify 2 others. For the main test function test_check_behaviour(), I used the cartesian product feature to test all inputs against both a default config and a special config.

However, this turned out to be highly inefficient: it took pytest around 1min50s to run test_check.py on my computer. After further digging, I realized the reasons behind this were:

  1. There were a lot of redundant test inputs
  2. The cartesian product, coupled with these redundant inputs, resulted in around 80 tests being run

With that in mind, for the second key change, I simply optimized the test inputs by:

  • Removing redundant input files
  • Removing directories (there was an excess of directories, which increased the runtime significantly)

This brought pytest down to around 50s to run test_check.py on my computer; for reference, the master branch takes around 45s on my computer. This was the best I could do, since any further change affected the coverage.

Final thoughts:

  • Using pytest.mark.parametrize definitely makes the file cleaner and removes redundant code. However, even after optimization, pytest takes slightly longer (around 5 seconds more) to run test_check.py.

Type of Change

(Write an X or a brief description next to the type or types that best describe your changes.)

Type Applies?
🚨 Breaking change (fix or feature that would cause existing functionality to change)
New feature (non-breaking change that adds functionality)
🐛 Bug fix (non-breaking change that fixes an issue)
♻️ Refactoring (internal change to codebase, without changing functionality)
🚦 Test update (change that only adds or modifies tests) X
📚 Documentation update (change that only updates documentation)
📦 Dependency update (change that updates a dependency)
🔧 Internal (change that only affects developers or continuous integration)

Checklist

Before opening your pull request:

  • I have performed a self-review of my changes.
    • Check that all changed files included in this pull request are intentional changes.
    • Check that all changes are relevant to the purpose of this pull request, as described above.
  • I have added tests for my changes, if applicable.
    • Not applicable
  • I have updated the project documentation, if applicable.
    • Not applicable
  • I have updated the project Changelog (this is required for all changes).
  • If this is my first contribution, I have added myself to the list of contributors.

After opening your pull request:

  • I have verified that the pre-commit.ci checks have passed.
  • I have verified that the CI tests have passed.
  • I have reviewed the test coverage changes reported by Coveralls.
  • I have requested a review from a project maintainer.

Questions and Comments

1)

I believe that the optimization was directly related to the refactoring I was doing. I may be mistaken though...
Should I have created a new PR for this issue?

2)

Also, this piece of code (although ugly) saves around 30s when running the test file:

@pytest.mark.parametrize("input_files", _TEST_FILE_INPUTS + ["examples/nodes"])

Is this problematic? Should I make a separate variable for _TEST_FILE_INPUTS + ["examples/nodes",]?

3)

I tried isolating different files from examples/nodes but no matter what I did, there was always a coverage problem.

4)

Finally, as usual, any tips/criticism is highly appreciated and welcome. I'm always looking to improve!

@Karl-Michaud Karl-Michaud self-assigned this May 15, 2025
@coveralls

coveralls commented May 15, 2025

Pull Request Test Coverage Report for Build 15059060495

Details

  • 0 of 0 changed or added relevant lines in 0 files are covered.
  • No unchanged relevant lines lost coverage.
  • Overall coverage remained the same at 93.395%

Totals Coverage Status
Change from base Build 15008912119: 0.0%
Covered Lines: 3436
Relevant Lines: 3679

💛 - Coveralls

@Karl-Michaud Karl-Michaud requested a review from david-yz-liu May 16, 2025 01:56