Clipping filter error with xgboost>=1.5.0 · Issue #301 · NREL/rdtools

Closed
kandersolar opened this issue Nov 30, 2021 · 3 comments · Fixed by #304

@kandersolar (Member)

Describe the bug
The new clipping filter function uses an XGBoost model bundled with rdtools. Recent CI runs in #300 and #297 are failing due to some error related to that model (example): AttributeError: 'XGBModel' object has no attribute 'enable_categorical'

I suspect it is caused by some incompatibility between the version of xgboost that stored the model file and the version of xgboost used to read it back into memory. The last passing CI run used xgboost==1.4.2 (link) and the failing runs are using 1.5.1. I can reproduce the same behavior locally (fails on xgboost==1.5.0 and 1.5.1, works on 1.4.2).
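
For illustration (not part of the original report), a minimal sketch of the suspected mechanism, assuming xgboost>=1.5 and scikit-learn are installed: an estimator pickled under xgboost 1.4 has no enable_categorical entry in its __dict__, because that constructor parameter was only added in 1.5, so get_params() under 1.5 fails when it looks the attribute up. Deleting the attribute from a freshly constructed estimator simulates that state without needing two environments:

import xgboost as xgb

# Simulate an estimator unpickled from xgboost 1.4: the `enable_categorical`
# constructor parameter (new in 1.5) is missing from the instance's __dict__.
model = xgb.XGBClassifier()
del model.enable_categorical

# get_params() walks the 1.5 constructor signature and getattr()s each name,
# which raises the AttributeError seen in the traceback below.
model.get_params()
# AttributeError: 'XGBModel' object has no attribute 'enable_categorical'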

Full error message and traceback

$ pytest rdtools\test\filtering_test.py::test_xgboost_clip_filter
================================================= test session starts =================================================
platform win32 -- Python 3.7.7, pytest-6.2.3, py-1.10.0, pluggy-0.13.1 -- c:\users\kanderso\software\anaconda3\envs\rdtools-dev\python.exe
cachedir: .pytest_cache
rootdir: C:\Users\KANDERSO\projects\rdtools, configfile: setup.cfg
plugins: nbval-0.9.6, mock-3.6.1
collected 1 item

rdtools/test/filtering_test.py::test_xgboost_clip_filter FAILED                                                  [100%]

====================================================== FAILURES =======================================================
______________________________________________ test_xgboost_clip_filter _______________________________________________

generate_power_time_series_no_clipping = (0       1
1       2
2       3
3       4
4       5
     ...
95     96
96     97
97     98
98     99
99    100
Length:...0:00     97
2016-12-06 12:00:00     98
2016-12-06 13:00:00     99
2016-12-06 14:00:00    100
Length: 100, dtype: int32)
generate_power_time_series_clipping = (0      2
1      4
2      6
3      8
4     10
      ..
95    10
96     8
97     6
98     4
99     2
Length: 100, dtype...2:00:00+00:00     6
2016-12-06 13:00:00+00:00     4
2016-12-06 14:00:00+00:00     2
Freq: H, Length: 100, dtype: int32)
generate_power_time_series_one_min_intervals = 2016-12-02 11:00:00+00:00    1
2016-12-02 11:01:00+00:00    2
2016-12-02 11:02:00+00:00    3
2016-12-02 11:03:00+00:00...02 12:37:00+00:00    3
2016-12-02 12:38:00+00:00    2
2016-12-02 12:39:00+00:00    1
Freq: T, Length: 100, dtype: int32
generate_power_time_series_irregular_intervals = 2016-12-02 11:00:00+00:00      1
2016-12-02 11:01:00+00:00      2
2016-12-02 11:02:00+00:00      3
2016-12-02 11:03:00...12-03 20:50:00+00:00    102
2016-12-03 20:55:00+00:00    101
2016-12-03 21:00:00+00:00    100
Length: 259, dtype: int64

    def test_xgboost_clip_filter(generate_power_time_series_no_clipping,
                                 generate_power_time_series_clipping,
                                 generate_power_time_series_one_min_intervals,
                                 generate_power_time_series_irregular_intervals):
        ''' Unit tests for XGBoost clipping filter.'''
        # Test the time series where the data isn't clipped
        power_no_datetime_index_nc, power_datetime_index_nc, power_nc_tz_naive = \
            generate_power_time_series_no_clipping
        # Test that a Type Error is raised when a pandas series
        # without a datetime index is used.
        pytest.raises(TypeError,  xgboost_clip_filter,
                      power_no_datetime_index_nc)
        # Test that an error is thrown when we don't include the correct
        # mounting configuration input
        pytest.raises(ValueError,  xgboost_clip_filter,
                      power_datetime_index_nc, 'not_fixed')
        # Test that an error is thrown when there are 10 or fewer readings
        # in the time series
        pytest.raises(Exception,  xgboost_clip_filter,
                      power_datetime_index_nc[:9])
        # Test that a warning is thrown when the time series is tz-naive
        warnings.simplefilter("always")
        with warnings.catch_warnings(record=True) as w:
>           xgboost_clip_filter(power_nc_tz_naive)

rdtools\test\filtering_test.py:227:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
..\..\software\anaconda3\envs\rdtools-dev\lib\site-packages\rdtools\filtering.py:697: in xgboost_clip_filter
    power_ac_df).astype(bool))
..\..\software\anaconda3\envs\rdtools-dev\lib\site-packages\xgboost\sklearn.py:1290: in predict
    iteration_range=iteration_range,
..\..\software\anaconda3\envs\rdtools-dev\lib\site-packages\xgboost\sklearn.py:879: in predict
    if self._can_use_inplace_predict():
..\..\software\anaconda3\envs\rdtools-dev\lib\site-packages\xgboost\sklearn.py:811: in _can_use_inplace_predict
    predictor = self.get_params().get("predictor", None)
..\..\software\anaconda3\envs\rdtools-dev\lib\site-packages\xgboost\sklearn.py:505: in get_params
    params.update(cp.__class__.get_params(cp, deep))
..\..\software\anaconda3\envs\rdtools-dev\lib\site-packages\xgboost\sklearn.py:502: in get_params
    params = super().get_params(deep)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self = <[AttributeError("'XGBModel' object has no attribute 'enable_categorical'") raised in repr()] XGBModel object at 0x1d6eac40648>
deep = True

    def get_params(self, deep=True):
        """
        Get parameters for this estimator.

        Parameters
        ----------
        deep : bool, default=True
            If True, will return the parameters for this estimator and
            contained subobjects that are estimators.

        Returns
        -------
        params : dict
            Parameter names mapped to their values.
        """
        out = dict()
        for key in self._get_param_names():
>           value = getattr(self, key)
E           AttributeError: 'XGBModel' object has no attribute 'enable_categorical'

..\..\software\anaconda3\envs\rdtools-dev\lib\site-packages\sklearn\base.py:195: AttributeError

To Reproduce

$ pip install xgboost==1.5.1
$ pytest rdtools\test\filtering_test.py::test_xgboost_clip_filter

Additional context
dmlc/xgboost#7423 seems relevant. It seems like we can't rely on a pickled model to work across different versions of xgboost.

Possible solutions:

  • Save the model in some other way, e.g. the JSON format described here: https://xgboost.readthedocs.io/en/latest/tutorials/saving_model.html (a minimal sketch of this approach follows after this list)
  • Restrict the allowed range of xgboost versions to those that are known to work with our model. This might cause problems down the road, for example if the aging version of xgboost we require doesn't work on new versions of python.
  • Something else?
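
For the first option above, a minimal sketch of the save_model / load_model JSON round trip described in the linked tutorial (the toy training data and the clipping_model.json file name are placeholders, not the actual rdtools model):

import numpy as np
import xgboost as xgb

# Train a small model and export it in xgboost's JSON format instead of pickling it.
X = np.random.rand(50, 3)
y = np.random.randint(0, 2, 50)
model = xgb.XGBClassifier(n_estimators=5)
model.fit(X, y)
model.save_model("clipping_model.json")

# Later, potentially under a different xgboost version: construct an empty
# estimator and load the saved booster back from the JSON file.
restored = xgb.XGBClassifier()
restored.load_model("clipping_model.json")
predictions = restored.predict(X)
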
@mdeceglie (Collaborator)

Using save_model to save in JSON sounds appealing. The article notes that this is experimental, but perhaps worth a shot? What do you think, @kperrynrel?

@kperrynrel (Collaborator)

@mdeceglie @kanderso-nrel I do like the sound of the JSON, since otherwise I'd have to regenerate the model each time we change the xgboost version. I can do some digging on how well this works.

kandersolar added a commit to kandersolar/rdtools that referenced this issue Nov 30, 2021
kperrynrel linked a pull request Dec 1, 2021 that will close this issue
@jsparadacelis commented Feb 1, 2022

For anyone trying to load the XGBoost model from a JSON file, this is how I did it:

import xgboost as xgb

xgb_classifier = xgb.XGBClassifier()
xgb_classifier.load_model(json_file_name)  # json_file_name: path to the saved .json model
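
This is the loading half of the JSON round trip sketched above: a freshly constructed XGBClassifier picks up the saved booster from the file, so nothing depends on unpickling an object written under an older xgboost release. Per the tutorial linked earlier in the thread, the JSON format is intended to be more stable across xgboost versions than pickling, though it was still described as experimental at the time.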
