[WIP] Avoid R API calls within OpenMP region #11448
base: release_1.7.0
Conversation
Would it perhaps be better to backport the PR that fixed the same issue in the master branch?
Is that PR after the massive rewrite of the R package? If so, it probably won't apply to the 1.7 branch.
I think it was before the changes to the .R files, but I don't remember. The thing is that it also fixed other issues along the way.
Actually, looking at the PR: it might not be easy to backport, so let's stick with this one.
@@ -6,10 +6,10 @@
   <parent>
     <groupId>ml.dmlc</groupId>
     <artifactId>xgboost-jvm_2.12</artifactId>
-    <version>1.7.11</version>
+    <version>1.7.12</version>
Since this is R-only, I guess changes here are not needed.
expect_equal(importance_from_dump(), importance, tolerance = 1e-6)

## decision stump
m <- xgboost::xgboost(
I'm not sure it'd be correct to use xgboost:: in tests. It's certainly not needed, as the whole namespace is imported.
n_threads <- 2

test_that("training with feature weights works", {
I'd say it's better not to add extra tests for this short-lived release, since they increase the chances of rejections and of extra fixes being needed.
This is merely restoring the tests removed in the previous commit, per the CRAN maintainer's request.
R-package/src/xgboost_R.cc
@@ -247,15 +247,18 @@ XGB_DLL SEXP XGDMatrixSetInfo_R(SEXP handle, SEXP field, SEXP array) {
   auto ctx = DMatrixCtx(R_ExternalPtrAddr(handle));
   if (!strcmp("group", name)) {
     std::vector<unsigned> vec(len);
+    const int* array_ptr = INTEGER_RO(array);
Didn't notice that you have Depends: R (>= 3.3.0). INTEGER_RO and REAL_RO appeared in R 3.5.0. You can either redefine them on older versions of R or use INTEGER and REAL.
Thank you for working on this! Please let me know if there's anything I can help with (testing, submitting packages, etc.).
@trivialfis Can you help with submitting the package with the fix?
Is this ready? (It's still marked as draft.) Since 1.7.11 never went through, do you think we need to bump to 1.7.12? Lastly, did you manage to tackle the issues with the changed OpenBLAS? There seem to be floating-point-accuracy-related test failures.
No, I was never able to reproduce the issue.
Per CRAN recommendations:
I don't think that's the case. The failures looked like completely different results than would be expected, not small numerical inaccuracies.
Perhaps one way could be by building R with flag
I already tried it and could not reproduce the other test failures. @trivialfis Has the maintainer gotten back to you about reproducing the OpenBLAS environment? If not, let's ask for permission to remove the failing tests.
We need to separate the issues here. I'm not sure whether the issue fixed by this PR is the cause of all the test failures. How many test failures did you reproduce using the
Are you using Docker to build the R variant? If so, I can run the tests as well.
Only the tests that were failing due to the memory protection error, e.g. test_feature_weights.R. These tests crash immediately when run with --enable-strict-barrier. I will get back to you with the full list of tests affected by the memory issue.
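For anyone else trying to reproduce this, the strict-barrier build is roughly the standard R-from-source procedure with one extra configure flag (paths and the tarball name below are illustrative, not taken from this thread):

```shell
# Build R with the strict write barrier enabled, then check the
# package under that build.
./configure --enable-strict-barrier
make -j
bin/R CMD check xgboost_1.7.12.tar.gz
```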
In the case of memory corruption, since all the tests are executed in the same process, the problems might not necessarily come from the specific test that fails, nor would the failures necessarily be deterministically reproducible.
They seldom answer emails, so I wouldn't count on getting a reply in due time.
@trivialfis Only the test in
And of course, I am not able to reproduce the other failing tests, e.g.
I'd suggest the following course of action:
Asked for help again on the public mailing list.
It looks like the CRAN maintainers accepted my previous package that removed the tests.
#11431 (comment)