
polish & clean code #566


Merged: 15 commits, Jun 3, 2025

Conversation

till-m (Member)

@till-m till-m commented May 22, 2025

This PR contains various improvements:

  • Makes random_state an argument to .suggest rather than a property of the acquisition function
    • Previously, when choosing a non-default acquisition function and wanting deterministic behaviour, one had to set many states manually. Now the optimizer's state is used for the acquisition function.
  • Renames n_l_bfgs_b to n_smart, since we now have DE optimization as well
  • Fixes another bug related to acquisition maximization, which caused seeding of the smart optimization to use suboptimal points
    • This also caused a test to pass that should've failed; that test is now fixed.
  • Makes constraints work properly with typed optimization
    • This required a slight refactor: the constraint model is now constructed in TargetSpace.__init__ instead of BayesianOptimizer.__init__
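The first bullet amounts to moving the RNG from the constructor to the call site. A minimal sketch of that pattern (class and method names here are illustrative stand-ins, not the real bayes_opt.acquisition classes):

```python
import numpy as np

class AcquisitionSketch:
    """Stand-in for an acquisition function under the new calling pattern."""

    def suggest(self, random_state: np.random.RandomState) -> float:
        # The RNG is supplied per call instead of being stored on the
        # instance, so the optimizer's own state drives all sampling.
        return float(random_state.uniform(-1.0, 1.0))

rng = np.random.RandomState(0)
acq = AcquisitionSketch()           # no random_state in the constructor anymore
x1 = acq.suggest(random_state=rng)
x2 = acq.suggest(random_state=rng)  # continues the same random stream
```

With a single shared RandomState, every acquisition call advances one deterministic stream, so no per-acquisition seeding is needed for reproducible runs.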

Summary by CodeRabbit

  • New Features

    • Random state must now be provided explicitly when making suggestions with acquisition functions, improving clarity in randomness control.
    • Added a method to generate random parameter samples from the target space.
  • Bug Fixes

    • Improved error handling in optimization routines, now raising errors if differential evolution fails instead of continuing silently.
    • Enhanced determinism in tests by explicitly controlling random state usage.
    • Relaxed assertion tolerances in constraint tests for greater robustness.
  • Refactor

    • Simplified random state management by removing internal storage and requiring it as an argument for relevant methods.
    • Updated method and parameter names for consistency (e.g., replaced n_l_bfgs_b with n_smart).
    • Adjusted constraint handling in target space initialization for improved reliability.
    • Unified and simplified initialization logic related to constraints and acquisition functions.
  • Tests

    • Updated tests to pass random state explicitly, ensuring reproducible and consistent test results.
    • Removed deprecated constructor parameters related to random state in acquisition function tests.


coderabbitai bot commented May 22, 2025

Walkthrough

The changes refactor random state management across acquisition functions in the Bayesian optimization package. Random state is now passed explicitly to suggestion and optimization methods rather than stored internally. Constructors and tests are updated accordingly. Target space constraint handling is modified to always wrap constraints in a ConstraintModel. Test tolerances and parameter passing are adjusted for consistency and determinism.

Changes

File(s) and change summary:
  • bayes_opt/acquisition.py: Removed internal random state storage; random state passed explicitly to methods; deprecated constructor argument; updated method signatures; improved error handling; removed random state serialization.
  • bayes_opt/bayesian_optimization.py: Acquisition functions instantiated without random state; random state passed explicitly in suggest; added random_sample method; simplified constraint and target space initialization; unified random sampling.
  • bayes_opt/target_space.py: Constructor now expects NonlinearConstraint instead of ConstraintModel; always wraps constraint in ConstraintModel; updated preallocation of constraint values.
  • tests/test_acquisition.py: Removed random state from constructors; passed random state explicitly to methods; fixed random seed for determinism; updated parameter names and assertions; removed one test.
  • tests/test_constraint.py: Relaxed precision tolerance in constraint test assertion; removed redundant assertion.
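The target_space.py change (wrapping the raw constraint inside TargetSpace rather than in the optimizer) can be sketched as follows. Both classes here are hypothetical stand-ins; a SimpleNamespace mimics the fun/lb/ub attributes of scipy.optimize.NonlinearConstraint:

```python
from types import SimpleNamespace

class ConstraintModelSketch:
    """Stand-in for bayes_opt.constraint.ConstraintModel (illustrative only)."""
    def __init__(self, fun, lb, ub):
        self.fun, self.lb, self.ub = fun, lb, ub

class TargetSpaceSketch:
    """After this PR, the target space receives the raw constraint object and
    wraps it itself, instead of the optimizer's __init__ doing the wrapping."""
    def __init__(self, constraint=None):
        self._constraint = (
            ConstraintModelSketch(constraint.fun, constraint.lb, constraint.ub)
            if constraint is not None
            else None
        )

# Mimics a scipy.optimize.NonlinearConstraint with bounds [-2, 2].
nc = SimpleNamespace(fun=lambda p1, p2: p1 - p2, lb=-2, ub=2)
space = TargetSpaceSketch(constraint=nc)
```

Centralizing the wrapping in one place means every code path that constructs a target space gets a ConstraintModel, which is what makes typed optimization with constraints reliable.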

Sequence Diagram(s)

sequenceDiagram
    participant User
    participant BayesianOptimizer
    participant AcquisitionFunction
    participant TargetSpace

    User->>BayesianOptimizer: suggest()
    BayesianOptimizer->>AcquisitionFunction: suggest(..., random_state)
    AcquisitionFunction->>TargetSpace: optimization routines (with random_state)
    TargetSpace-->>AcquisitionFunction: candidate points
    AcquisitionFunction-->>BayesianOptimizer: suggested point
    BayesianOptimizer-->>User: result

Poem

A hop, a skip, a random seed,
Now passed along with every need.
No more hidden state inside—
Explicit flows, no tricks to hide!
Constraints are wrapped up neat and tight,
Tests are clearer, bugs take flight.
🐇✨ Code grows strong, and runs just right!


📜 Recent review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between db97f2f and 21d9aeb.

📒 Files selected for processing (2)
  • bayes_opt/bayesian_optimization.py (3 hunks)
  • bayes_opt/target_space.py (4 hunks)
🚧 Files skipped from review as they are similar to previous changes (2)
  • bayes_opt/target_space.py
  • bayes_opt/bayesian_optimization.py
⏰ Context from checks skipped due to timeout of 90000ms (9)
  • GitHub Check: Python 3.13 - numpy >=2
  • GitHub Check: Python 3.12 - numpy >=2
  • GitHub Check: Python 3.11 - numpy >=1.25,<2
  • GitHub Check: Python 3.12 - numpy >=1.25,<2
  • GitHub Check: Python 3.10 - numpy >=2
  • GitHub Check: Python 3.11 - numpy >=2
  • GitHub Check: Python 3.10 - numpy >=1.25,<2
  • GitHub Check: Python 3.9 - numpy >=2
  • GitHub Check: Python 3.9 - numpy >=1.25,<2


@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 1

🧹 Nitpick comments (3)
bayes_opt/acquisition.py (3)

155-156: Passing an int seed here will silently recreate a fresh RNG every call

ensure_rng(random_state) converts an int into a new RandomState each time, meaning two consecutive suggest() calls with the same integer seed will always sample the exact same “random” candidates.
If that is unintended, cache the converted RandomState once per suggest() invocation:

-        random_state = ensure_rng(random_state)
+        random_state = ensure_rng(random_state)         # convert once
+        # Keep a reference so children & helpers re-use the same generator
+        rng = random_state
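The behaviour the comment describes is easy to reproduce with a minimal re-implementation of ensure_rng (a sketch of bayes_opt.util.ensure_rng, not the actual source):

```python
import numpy as np

def ensure_rng_sketch(random_state=None):
    # Minimal re-implementation of ensure_rng for illustration.
    if random_state is None:
        return np.random.RandomState()
    if isinstance(random_state, int):
        return np.random.RandomState(random_state)
    return random_state  # already an RNG: reuse it

# An int seed recreates the RNG on every call, so repeated calls replay
# the same "random" candidates -- the behaviour the comment flags.
a = ensure_rng_sketch(7).uniform(size=3)
b = ensure_rng_sketch(7).uniform(size=3)

# A shared RandomState instance keeps advancing between calls instead.
rng = np.random.RandomState(7)
c = ensure_rng_sketch(rng).uniform(size=3)
d = ensure_rng_sketch(rng).uniform(size=3)
```

Here a and b are identical while c and d differ, which is why converting once per suggest() invocation (or passing the RandomState object itself) matters.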

230-233: Docstring now mentions differential-evolution but examples still talk about “warm-up points”

Lines 231-233 updated the wording, but the rest of the paragraph still references the old n_warmup constant. Consider updating the whole block for consistency and to avoid confusion.


276-280: Return-type annotation out of sync with actual return value

_random_sample_minimize returns three values (x_min, min_acq, x_seeds) but the type hint says tuple[NDArray | None, float]. Update the annotation to reflect the extra element.
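A corrected signature might look like the following sketch (the function body is a hypothetical toy, not the real _random_sample_minimize; only the three-element return shape mirrors the review point):

```python
import numpy as np
from numpy.typing import NDArray

def random_sample_minimize_sketch(
    n: int, random_state: np.random.RandomState
) -> "tuple[NDArray | None, float, NDArray]":
    # The third element (the candidate seeds) is what the original
    # two-element annotation omitted.
    x_seeds = random_state.uniform(-1.0, 1.0, size=(n, 2))
    acq = (x_seeds ** 2).sum(axis=1)   # toy acquisition: squared norm
    best = int(np.argmin(acq))
    return x_seeds[best], float(acq[best]), x_seeds

x_min, min_acq, seeds = random_sample_minimize_sketch(8, np.random.RandomState(1))
```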

📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 2c78f7c and 4479243.

📒 Files selected for processing (3)
  • bayes_opt/acquisition.py (27 hunks)
  • bayes_opt/bayesian_optimization.py (2 hunks)
  • tests/test_acquisition.py (12 hunks)
⏰ Context from checks skipped due to timeout of 90000ms (9)
  • GitHub Check: Python 3.13 - numpy >=2
  • GitHub Check: Python 3.11 - numpy >=2
  • GitHub Check: Python 3.12 - numpy >=1.25,<2
  • GitHub Check: Python 3.11 - numpy >=1.25,<2
  • GitHub Check: Python 3.10 - numpy >=2
  • GitHub Check: Python 3.12 - numpy >=2
  • GitHub Check: Python 3.10 - numpy >=1.25,<2
  • GitHub Check: Python 3.9 - numpy >=1.25,<2
  • GitHub Check: Python 3.9 - numpy >=2
🔇 Additional comments (32)
bayes_opt/bayesian_optimization.py (1)

252-254: Random state propagation looks good

BayesianOptimization.suggest() now forwards its internal RNG to the acquisition function, so determinism is preserved even after the refactor. No issues spotted here.

tests/test_acquisition.py (31)

45-46: New random_state fixture provides consistent test behavior

Adding a fixed RandomState fixture with seed 0 is a good practice that ensures tests are deterministic across different runs.


73-74: Constructor simplified to align with new random state management

The MockAcquisition constructor no longer accepts or stores a random_state parameter, which aligns with the PR's objective of making random_state a method parameter rather than a class property.


99-99: Parameter renamed from n_l_bfgs_b to n_smart

This rename better reflects the purpose of the parameter, especially now that Differential Evolution optimization has been introduced alongside L-BFGS-B.


102-106: Random state now passed explicitly to suggest method

The test has been updated to pass the random_state explicitly to the suggest method, consistent with the PR's objective of making random_state a method parameter rather than a class property.


109-114: New test validates acquisition function maximization

This addition helps verify the fix for the bug mentioned in the PR objectives - ensuring that the best random sample is correctly included in the seeds for smart optimization.


120-120: Parameter renamed from n_l_bfgs_b to n_smart

Consistent with other changes, the parameter name has been updated to better reflect its purpose in the optimization process.


125-125: UpperConfidenceBound constructor no longer accepts random_state

The constructor has been simplified to remove the random_state parameter, aligned with the PR's goal of centralizing random state management.


135-137: Random state now passed explicitly to suggest method

Acquisition functions now require random_state as a parameter to the suggest method, making the stochastic behavior more explicit and controlled.


142-142: UpperConfidenceBound constructor simplified

Constructor no longer accepts random_state, consistent with the new approach to random state management.


150-152: Random state passed explicitly to _smart_minimize method

The internal optimization method now receives random_state as a parameter, ensuring deterministic behavior during testing.


157-157: UpperConfidenceBound constructor simplified for constraint test

Constructor no longer accepts random_state, consistent with the new approach to random state management.


165-165: ProbabilityOfImprovement constructor simplified

The constructor no longer accepts random_state, aligned with the refactored random state management.


171-173: Random state now passed explicitly to suggest method

The suggest method now receives random_state as a parameter, ensuring deterministic behavior in tests.


177-181: Consistent pattern for passing random_state to methods

The constructor no longer accepts random_state, and it's now passed explicitly to the suggest method, maintaining the consistent pattern throughout the codebase.


186-192: ProbabilityOfImprovement with constraints follows new pattern

The constructor no longer accepts random_state, and the suggest method now requires it as a parameter, consistent with other acquisition functions.


196-199: Consistent pattern for passing random_state to methods

Random state is passed explicitly to suggest method calls, ensuring deterministic behavior in constraint-related tests.


203-220: ExpectedImprovement follows new random state pattern

All instances of ExpectedImprovement initialization and suggest method calls have been updated to follow the new pattern: no random_state in constructor, explicit random_state in method calls.


224-237: ExpectedImprovement with constraints follows new pattern

The constructor no longer accepts random_state, and the suggest method now requires it as a parameter, consistent with other acquisition functions.


242-244: ConstantLiar constructor simplified

The base acquisition and ConstantLiar constructors no longer accept random_state, aligned with the refactored random state management.


252-253: Random state passed explicitly to ConstantLiar suggest method

The suggest method now receives random_state as a parameter, ensuring deterministic behavior in tests.


266-266: Consistent pattern for passing random_state to methods

Random state is passed explicitly to the suggest method call, maintaining consistency throughout the test suite.


277-281: ConstantLiar with constraints follows new pattern

The constructor no longer accepts random_state, and the suggest method now requires it as a parameter, consistent with other acquisition functions.


285-285: Consistent pattern for passing random_state to methods

Random state is passed explicitly to suggest method calls, ensuring deterministic behavior in constraint-related tests.


293-293: GPHedge constructor simplified

The constructor no longer requires base acquisitions to have random_state parameters, aligned with the refactored random state management.


311-315: Base acquisitions for GPHedge follow new pattern

All base acquisition constructors have been simplified to remove the random_state parameter, consistent with the refactored approach.


335-339: Base acquisitions for softmax sampling follow new pattern

All base acquisition constructors have been simplified to remove the random_state parameter, consistent with the refactored approach.


355-355: Random state passed explicitly to _sample_idx_from_softmax_gains method

The internal method now receives random_state as a parameter, ensuring deterministic behavior during testing.


359-366: GPHedge integration test follows new pattern

Base acquisitions no longer receive random_state in constructors, and the suggest method now requires it as a parameter, consistent with the refactored approach.


370-371: Random state passed explicitly to suggest method in loop

The suggest method consistently receives random_state as a parameter, ensuring deterministic behavior across multiple iterations.


597-598: Custom acquisition constructors simplified

The constructor for custom acquisition functions has been simplified to no longer accept random_state, aligned with the refactored random state management.


618-619: Custom acquisition without set params follows new pattern

The constructor has been simplified to remove the random_state parameter, consistent with the refactored approach throughout the codebase.


codecov bot commented May 22, 2025

Codecov Report

Attention: Patch coverage is 90.00000% with 6 lines in your changes missing coverage. Please review.

Project coverage is 97.76%. Comparing base (2c78f7c) to head (21d9aeb).
Report is 1 commits behind head on master.

Files with missing lines: bayes_opt/acquisition.py (patch 86.95%, 6 lines missing ⚠️)
Additional details and impacted files
@@            Coverage Diff             @@
##           master     #566      +/-   ##
==========================================
- Coverage   97.95%   97.76%   -0.20%     
==========================================
  Files          10       10              
  Lines        1175     1164      -11     
==========================================
- Hits         1151     1138      -13     
- Misses         24       26       +2     


till-m (Member Author)

till-m commented May 22, 2025

@CodeRabbit review


coderabbitai bot commented May 22, 2025

✅ Actions performed

Review triggered.


@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 4

🔭 Outside diff range comments (1)
bayes_opt/acquisition.py (1)

230-272: ⚠️ Potential issue

_acq_min can return None when both optimisers fail

If n_random > 0 succeeds but every smart optimisation run fails (min_acq_s == np.inf) and min_acq_r == np.inf (e.g. because n_random==0), the function falls through and returns x_min_r, which is None.
Down-stream callers expect a NumPy array and will crash.

-        if n_smart:
+        if n_smart:
             ...
-                return x_min_s
-        return x_min_r
+                return x_min_s
+        # Fallbacks
+        if x_min_r is None:
+            return x_min_s
+        if x_min_s is None:
+            return x_min_r
+        return x_min_r
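The fallback logic the diff suggests can be condensed into a standalone helper (the function name is hypothetical; it only compresses the selection step at the end of _acq_min):

```python
import numpy as np

def pick_minimum_sketch(x_min_r, min_acq_r, x_min_s, min_acq_s):
    # Never return None when the other optimiser produced a usable point.
    if x_min_r is None:
        return x_min_s
    if x_min_s is None:
        return x_min_r
    return x_min_s if min_acq_s < min_acq_r else x_min_r

# Random sampling was skipped (n_random == 0) but smart optimisation worked:
result = pick_minimum_sketch(None, np.inf, np.array([0.3, 0.7]), -1.2)
```

Without the None guards, the n_random == 0 path would hand a None back to callers that expect a NumPy array.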
🧹 Nitpick comments (4)
tests/test_target_space.py (2)

101-105: Lambda signature does not match TargetSpace parameter names

ConstraintModel expects the constraint function to accept named parameters identical to the optimization variables (p1, p2).
Using lambda x: x will raise a TypeError if constraint.eval(**kwargs) is ever invoked (e.g. when calling probe()).

-constraint = ConstraintModel(lambda x: x, -2, 2, transform=None)
+constraint = ConstraintModel(lambda p1, p2: p1 - p2, -2, 2, transform=None)

Even though the current test registers explicit constraint_values and therefore never calls eval, using a correctly-typed lambda future-proofs the test and documents intent.


199-206: Redundant transform=None argument

ConstraintModel’s transform parameter already defaults to None.
Unless the test explicitly verifies that the explicit None is propagated, the extra argument is superfluous and makes the call longer than necessary.

-constraint = ConstraintModel(lambda p1, p2: p1 - p2, -2, 2, transform=None)
+constraint = ConstraintModel(lambda p1, p2: p1 - p2, -2, 2)
bayes_opt/acquisition.py (2)

66-73: Deprecation warning is good – update docstring too

random_state is now deprecated at construction time but the class-level docstring still advertises it as an active parameter. Updating the docstring will avoid confusing library users.


1187-1203: Type hint mismatch in _sample_idx_from_softmax_gains

The function signature now specifies RandomState, but ensure_rng may supply a Generator once the earlier SciPy fix (see above) is applied.
Consider typing against np.random.RandomState | np.random.Generator or simply Any to avoid mypy/IDE warnings.

📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between acd4c54 and 6e59e96.

📒 Files selected for processing (5)
  • bayes_opt/acquisition.py (29 hunks)
  • bayes_opt/bayesian_optimization.py (2 hunks)
  • bayes_opt/target_space.py (1 hunks)
  • tests/test_acquisition.py (12 hunks)
  • tests/test_target_space.py (4 hunks)
🚧 Files skipped from review as they are similar to previous changes (2)
  • bayes_opt/bayesian_optimization.py
  • tests/test_acquisition.py
🧰 Additional context used
🧬 Code Graph Analysis (2)
tests/test_target_space.py (3)
bayes_opt/target_space.py (3)
  • constraint (203-210)
  • TargetSpace (34-713)
  • set_constraint (102-117)
bayes_opt/constraint.py (1)
  • ConstraintModel (23-263)
tests/test_bayesian_optimization.py (1)
  • target_func (19-21)
bayes_opt/acquisition.py (5)
tests/test_acquisition.py (9)
  • random_state (45-46)
  • target_space (55-56)
  • fun (144-148)
  • gp (50-51)
  • pbounds (30-31)
  • constraint (35-36)
  • base_acq (82-83)
  • base_acq (600-601)
  • base_acq (621-622)
bayes_opt/util.py (1)
  • ensure_rng (8-30)
bayes_opt/target_space.py (8)
  • max (623-651)
  • TargetSpace (34-713)
  • random_sample (564-602)
  • bounds (193-200)
  • continuous_dimensions (223-234)
  • res (653-684)
  • set_constraint (102-117)
  • constraint (203-210)
tests/test_bayesian_optimization.py (1)
  • random_sample (500-511)
bayes_opt/parameter.py (4)
  • random_sample (68-87)
  • random_sample (262-281)
  • random_sample (359-381)
  • bounds (59-61)
⏰ Context from checks skipped due to timeout of 90000ms (9)
  • GitHub Check: Python 3.12 - numpy >=2
  • GitHub Check: Python 3.10 - numpy >=1.25,<2
  • GitHub Check: Python 3.13 - numpy >=2
  • GitHub Check: Python 3.12 - numpy >=1.25,<2
  • GitHub Check: Python 3.11 - numpy >=1.25,<2
  • GitHub Check: Python 3.11 - numpy >=2
  • GitHub Check: Python 3.10 - numpy >=2
  • GitHub Check: Python 3.9 - numpy >=2
  • GitHub Check: Python 3.9 - numpy >=1.25,<2

till-m (Member Author)

till-m commented May 26, 2025

@fmfn since you asked me to tag you, here you can have a look.

What's nice about this feature is that it does catch some things that are hard for humans to catch (e.g. this commit was actually a result of me testing the feature on a PR of my fork and getting this problem pointed out to me). OTOH, sometimes it misreports things, e.g. this comment is wrong, since the .lb property of the ConstraintModel is always an array and thus always permits .size calls.

In the end, it's a good tool to point out potential problems but one should probably not mindlessly "fix" them.

coderabbitai bot added a commit that referenced this pull request May 30, 2025
Docstrings generation was requested by @till-m.

* #566 (comment)

The following files were modified:

* `bayes_opt/acquisition.py`
* `bayes_opt/bayesian_optimization.py`
* `bayes_opt/target_space.py`
* `tests/test_acquisition.py`
* `tests/test_constraint.py`
* `tests/test_target_space.py`

coderabbitai bot commented May 30, 2025

Note

Generated docstrings for this pull request at #568

@till-m till-m changed the title polish code/spring clean polish & clean code Jun 3, 2025
@till-m till-m merged commit f6cde7f into bayesian-optimization:master Jun 3, 2025
13 of 15 checks passed