pittloo/qlib

Qlib is an AI-oriented quantitative investment platform that aims to realize the potential, empower the research, and create the value of AI technologies in quantitative investment.

With Qlib, you can easily apply your favorite model to create a better Quant investment strategy.

Framework of Qlib

(Figure: Qlib framework diagram)

At the module level, Qlib is a platform that consists of the components above. Each component is loosely coupled and can be used stand-alone.

  • Data layer: DataServer focuses on providing a high-performance infrastructure for users to retrieve raw data. DataEnhancement preprocesses the data and provides the best dataset to be fed into the models.
  • Interday Model: The interday model focuses on producing forecasting signals (aka alpha). Models are trained by Model Creator and managed by Model Manager. Users can choose one or multiple models for forecasting. Multiple models can be combined with the Ensemble module.
  • Interday Strategy: Portfolio Generator takes forecasting signals as input and, based on the current position, outputs the orders needed to reach the target portfolio.
  • Intraday Trading: Order Executor is responsible for executing the orders produced by Interday Strategy and returning the executed results.
  • Analysis: Users can get a detailed analysis report of the forecasting signal and the portfolio in this part.
  • The modules drawn in a hand-drawn style are under development and will be released in the future.
  • The modules with dashed borders are highly user-customizable and extensible.

Quick start

Installation

To install Qlib from source, you need Cython in addition to the normal dependencies:

pip install numpy
pip install --upgrade cython

Clone the repository and then run:

python setup.py install
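
If the build succeeds, the package should import cleanly. A minimal sanity check (not part of the original instructions; it assumes the package exposes __version__ as upstream Qlib does):

    import qlib                # should import without errors after `python setup.py install`
    print(qlib.__version__)    # prints the installed package version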

Get Data

  • Load and prepare the data: execute the following command to load the stock data (a short Python sketch for reading it back in Qlib follows):
    python scripts/get_data.py qlib_data_cn --target_dir ~/.qlib/qlib_data/cn_data
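
Once the data is in place, Qlib can be initialized against the local directory and queried through its data API. The snippet below is a minimal sketch that assumes the upstream Qlib interfaces of this era (`qlib.init` with `provider_uri`, the `REG_CN` region constant, and `D.features`); the instrument code and date range are arbitrary examples.

    import qlib
    from qlib.config import REG_CN
    from qlib.data import D

    # Point Qlib at the data downloaded by scripts/get_data.py (offline mode).
    qlib.init(provider_uri="~/.qlib/qlib_data/cn_data", region=REG_CN)

    # Fetch raw daily close price and volume for one instrument as a pandas DataFrame.
    df = D.features(
        instruments=["SH600000"],
        fields=["$close", "$volume"],
        start_time="2017-01-01",
        end_time="2017-12-31",
    )
    print(df.head())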

Auto Quant research workflow with estimator

Qlib provides a tool named estimator to run the whole workflow automatically (including building the dataset, training models, backtesting, and analysis).

  1. Run estimator with the example config file estimator/estimator_config.yaml:

    cd examples  # Avoid running the program under a directory that contains `qlib`
    estimator -c estimator/estimator_config.yaml

    Estimator result:

                          risk
    sub_bench mean    0.000662
              std     0.004487
              annual  0.166720
              sharpe  2.340526
              mdd    -0.080516
    sub_cost  mean    0.000577
              std     0.004482
              annual  0.145392
              sharpe  2.043494
              mdd    -0.083584

    See the full documentation for Use Estimator to Start An Experiment. (A short pandas sketch of these risk metrics appears after the analysis steps below.)

  2. Analysis

    Run examples/estimator/analyze_from_estimator.ipynb in Jupyter Notebook.

    1. forecasting signal analysis

      • Cumulative Return (figure: cumulative long-short return)
      • Information Coefficient (IC) (figures: Information Coefficient, Monthly IC, IC)
      • Auto Correlation (figure: signal auto-correlation)

    2. portfolio analysis

      • Report (figure: portfolio report)
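
The risk figures in the estimator output (mean, std, annualized return, Sharpe, max drawdown) and the Information Coefficient plotted by the notebook are standard quantities. The sketch below is not Qlib's implementation, just a pandas illustration of how such numbers are derived; the 252 trading-day annualization factor is an assumption that roughly matches the sample output above.

    import numpy as np
    import pandas as pd

    def risk_metrics(daily_return: pd.Series, periods_per_year: int = 252) -> pd.Series:
        """Summary statistics comparable to the estimator's `risk` table (illustrative only)."""
        mean = daily_return.mean()
        std = daily_return.std()
        annual = mean * periods_per_year
        sharpe = mean / std * np.sqrt(periods_per_year)
        # Maximum drawdown of the compounded cumulative return curve
        # (compounding conventions may differ from Qlib's internal analysis).
        curve = (1 + daily_return).cumprod()
        mdd = (curve / curve.cummax() - 1).min()
        return pd.Series({"mean": mean, "std": std, "annual": annual, "sharpe": sharpe, "mdd": mdd})

    def daily_ic(scores: pd.DataFrame) -> pd.Series:
        """Information Coefficient: per-day cross-sectional correlation between the
        forecasting signal and the realized next-period return.

        `scores` is assumed to be indexed by (datetime, instrument) with columns
        'score' (model output) and 'label' (realized return).
        """
        return scores.groupby(level="datetime").apply(lambda day: day["score"].corr(day["label"]))

Averaging the daily IC series over time gives the commonly reported mean IC.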

Customized Quant research workflow by code

The automatic workflow may not suit the research workflow of every Quant researcher. To support flexible Quant research workflows, Qlib also provides a modularized interface that allows researchers to build their own workflow. Here is a demo of a customized Quant research workflow by code.
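
The linked demo is not reproduced here. As a rough, hedged illustration of the idea, the sketch below uses Qlib only for data retrieval (its expression engine builds the features and the label) and an ordinary scikit-learn model for the forecasting step; the market name, field expressions, and date range are arbitrary examples, and a real workflow would hand the resulting scores to Qlib's strategy and backtest modules.

    import qlib
    from qlib.config import REG_CN
    from qlib.data import D
    from sklearn.linear_model import Ridge

    qlib.init(provider_uri="~/.qlib/qlib_data/cn_data", region=REG_CN)

    # Build a tiny factor dataset with Qlib's expression engine:
    # two simple price-based features and the next-day return as the label.
    fields = [
        "Ref($close, 1) / $close - 1",   # 1-day momentum
        "Mean($close, 5) / $close - 1",  # deviation from the 5-day mean
        "Ref($close, -1) / $close - 1",  # next-day return (label)
    ]
    instruments = D.instruments(market="csi300")
    df = D.features(instruments, fields, start_time="2015-01-01", end_time="2017-12-31").dropna()
    df.columns = ["mom_1d", "rev_5d", "label"]

    # Train a plain linear model as the "Interday Model" stage; a real workflow
    # would split train/test by date and use Qlib's own model classes instead.
    model = Ridge(alpha=1.0)
    model.fit(df[["mom_1d", "rev_5d"]], df["label"])
    df["score"] = model.predict(df[["mom_1d", "rev_5d"]])  # forecasting signal (alpha)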

More About Qlib

The detailed documents are organized under docs. Sphinx and the readthedocs theme are required to build the documentation in HTML format.

cd docs/
conda install sphinx sphinx_rtd_theme -y
# Otherwise, you can install them with pip
# pip install sphinx sphinx_rtd_theme
make html

You can also view the latest documentation online directly.

The roadmap is managed as a GitHub project.

Offline mode and online mode

The data server of Qlib can be deployed in either offline mode or online mode. The default is offline mode.

Under offline mode, the data will be deployed locally.

Under online mode, the data will be deployed as a shared data service. The data and their cache will be shared by clients. Data retrieval performance is expected to improve due to a higher cache-hit rate, and less disk space will be used. The documentation for the online mode can be found in Qlib-Server. The online mode can be deployed automatically with Azure-CLI-based scripts.

Performance of Qlib Data Server

The performance of data processing is important to data-driven methods like AI technologies. As an AI-oriented platform, Qlib provides a solution for data storage and data processing. To demonstrate the performance of Qlib, we compare it with several other solutions.

We evaluate the performance of several solutions by completing the same task, which creates a dataset (14 features/factors) from the basic OHLCV daily data of a stock market (800 stocks each day from 2007 to 2020). The task involves data queries and processing.

                           HDF5       MySQL      MongoDB    InfluxDB   Qlib -E -D  Qlib +E -D  Qlib +E +D
Total (1CPU) (seconds)     184.4±3.7  365.3±7.5  253.6±6.7  368.2±3.6  147.0±8.8   47.6±1.0    7.4±0.3
Total (64CPU) (seconds)    —          —          —          —          —           8.8±0.6     4.2±0.2
  • +(-)E indicates with(out) ExpressionCache
  • +(-)D indicates with(out) DatasetCache

Most general-purpose databases spend too much time loading data. After looking into the underlying implementations, we find that in general-purpose database solutions data pass through too many layers of interfaces and unnecessary format transformations. Such overhead greatly slows down the data loading process. Qlib data are stored in a compact format that is efficient to combine into arrays for scientific computation.
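
The `+E +D` column corresponds to enabling both caches when Qlib is initialized. Below is a hedged sketch of that configuration, assuming the cache class names from upstream Qlib's `qlib.data.cache` module; check this repository's config module for the exact spelling.

    import qlib
    from qlib.config import REG_CN

    # "+E +D": turn on the on-disk expression cache and dataset cache.
    # The class names below are assumptions based on upstream Qlib.
    qlib.init(
        provider_uri="~/.qlib/qlib_data/cn_data",
        region=REG_CN,
        expression_cache="DiskExpressionCache",
        dataset_cache="DiskDatasetCache",
    )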

Contributing

This project welcomes contributions and suggestions. Most contributions require you to agree to a Contributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us the rights to use your contribution. For details, visit https://cla.opensource.microsoft.com.

When you submit a pull request, a CLA bot will automatically determine whether you need to provide a CLA and decorate the PR appropriately (e.g., status check, comment). Simply follow the instructions provided by the bot. You will only need to do this once across all repos using our CLA.

This project has adopted the Microsoft Open Source Code of Conduct. For more information see the Code of Conduct FAQ or contact opencode@microsoft.com with any additional questions or comments.
