Fail at import: dependencies not installed · Issue #11 · intel/analytics-zoo · GitHub
Fail at import: dependencies not installed #11
Open
@mbrhd

Description

Hi,

I'm exploring Chronos for time series and decided to start from this example notebook.

When running
from zoo.chronos.data import TSDataset
I got the following error message:

Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/opt/anaconda3/envs/analytics-zoo-test/lib/python3.8/site-packages/zoo/__init__.py", line 17, in <module>
    from zoo.common.nncontext import *
  File "/opt/anaconda3/envs/analytics-zoo-test/lib/python3.8/site-packages/zoo/common/__init__.py", line 17, in <module>
    from .utils import *
  File "/opt/anaconda3/envs/analytics-zoo-test/lib/python3.8/site-packages/zoo/common/utils.py", line 16, in <module>
    from bigdl.util.common import Sample as BSample, JTensor as BJTensor,\
  File "/opt/anaconda3/envs/analytics-zoo-test/lib/python3.8/site-packages/bigdl/__init__.py", line 18, in <module>
    prepare_env()
  File "/opt/anaconda3/envs/analytics-zoo-test/lib/python3.8/site-packages/bigdl/util/engine.py", line 155, in prepare_env
    __prepare_spark_env()
  File "/opt/anaconda3/envs/analytics-zoo-test/lib/python3.8/site-packages/bigdl/util/engine.py", line 53, in __prepare_spark_env
    if exist_pyspark():
  File "/opt/anaconda3/envs/analytics-zoo-test/lib/python3.8/site-packages/bigdl/util/engine.py", line 26, in exist_pyspark
    import pyspark
  File "/opt/anaconda3/envs/analytics-zoo-test/lib/python3.8/site-packages/pyspark/__init__.py", line 51, in <module>
    from pyspark.context import SparkContext
  File "/opt/anaconda3/envs/analytics-zoo-test/lib/python3.8/site-packages/pyspark/context.py", line 31, in <module>
    from pyspark import accumulators
  File "/opt/anaconda3/envs/analytics-zoo-test/lib/python3.8/site-packages/pyspark/accumulators.py", line 97, in <module>
    from pyspark.serializers import read_int, PickleSerializer
  File "/opt/anaconda3/envs/analytics-zoo-test/lib/python3.8/site-packages/pyspark/serializers.py", line 72, in <module>
    from pyspark import cloudpickle
  File "/opt/anaconda3/envs/analytics-zoo-test/lib/python3.8/site-packages/pyspark/cloudpickle.py", line 145, in <module>
    _cell_set_template_code = _make_cell_set_template_code()
  File "/opt/anaconda3/envs/analytics-zoo-test/lib/python3.8/site-packages/pyspark/cloudpickle.py", line 126, in _make_cell_set_template_code
    return types.CodeType(
TypeError: an integer is required (got type bytes)

I fixed this by installing PySpark with: conda install pyspark. The TypeError above appears to come from an older PySpark's cloudpickle, which is incompatible with Python 3.8; the conda package pulled in a newer version.
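
For reference, a quick sanity check along these lines shows whether the interpreter and PySpark versions in the environment actually match (a minimal sketch; the printed versions depend on your setup):

```python
# Minimal environment check: the cloudpickle TypeError above is typical of an
# older PySpark (2.x) running on Python 3.8, so printing both versions makes
# the mismatch visible before reinstalling anything.
import sys

print("Python:", sys.version.split()[0])

try:
    import pyspark
    print("PySpark:", pyspark.__version__)
except ImportError:
    print("PySpark is not installed")
```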

After that, the same import (from zoo.chronos.data import TSDataset) still failed because pandas, packaging, and tsfresh were not installed. I fixed this by installing those three packages as well.
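
A small loop like this (just a sketch; the package list is taken from the errors I hit) confirms whether all of the runtime dependencies are present before retrying the import:

```python
# Report which of the packages the import chain needs are installed, and at
# which version, so a missing dependency is obvious at a glance.
import importlib

for pkg in ("pyspark", "pandas", "packaging", "tsfresh"):
    try:
        module = importlib.import_module(pkg)
        print(pkg, getattr(module, "__version__", "version unknown"))
    except ImportError:
        print(pkg, "is NOT installed")
```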
