Add docs for qlib.rl #1322
Conversation
docs/component/rl.rst
Outdated
In QlibRL, EnvWrapper is a subclass of gym.Env, so it implements all necessary interfaces of gym.Env. Any classes or pipelines that accept gym.Env should also accept EnvWrapper. Developers do not need to implement their own EnvWrapper to build their own environment. Instead, they only need to implement 4 components of the EnvWrapper:

- `Simulator`
The simulator is the core component responsible for the environment simulation. Developers could implement all the logic that is directly related to the environment simulation in the Simulator in any way they like. In QlibRL, there are already two implementations of Simulator: 1) ``SingleAssetOrderExecution``, which is built based on Qlib's backtest toolkits. 2) ``SimpleSingleAssetOrderExecution``, which is built based on naive simulation logic.
is built based on naive simulation logic
a simplified trading simulator, which ignores a lot of details (e.g. trading limitations, rounding) but is quite fast.
docs/component/rl.rst
Outdated
In QlibRL, EnvWrapper is a subclass of gym.Env, so it implements all necessary interfaces of gym.Env. Any classes or pipelines that accept gym.Env should also accept EnvWrapper. Developers do not need to implement their own EnvWrapper to build their own environment. Instead, they only need to implement 4 components of the EnvWrapper:

- `Simulator`
The simulator is the core component responsible for the environment simulation. Developers could implement all the logic that is directly related to the environment simulation in the Simulator in any way they like. In QlibRL, there are already two implementations of Simulator: 1) ``SingleAssetOrderExecution``, which is built based on Qlib's backtest toolkits. 2) ``SimpleSingleAssetOrderExecution``, which is built based on naive simulation logic.
already two implementations of Simulator
already two implementations of Simulator for single asset trading.
docs/component/rl.rst
Outdated
In QlibRL, EnvWrapper is a subclass of gym.Env, so it implements all necessary interfaces of gym.Env. Any classes or pipelines that accept gym.Env should also accept EnvWrapper. Developers do not need to implement their own EnvWrapper to build their own environment. Instead, they only need to implement 4 components of the EnvWrapper:

- `Simulator`
The simulator is the core component responsible for the environment simulation. Developers could implement all the logic that is directly related to the environment simulation in the Simulator in any way they like. In QlibRL, there are already two implementations of Simulator: 1) ``SingleAssetOrderExecution``, which is built based on Qlib's backtest toolkits. 2) ``SimpleSingleAssetOrderExecution``, which is built based on naive simulation logic.
which is built based on Qlib's backtest toolkits
which is built based on Qlib's backtest toolkits and hence considers a lot of practical trading details but is slow.
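For readers following along, here is a minimal sketch of the customization pattern the quoted paragraph describes: subclassing the simulator and implementing its hooks. This assumes the base class is the `qlib.rl.simulator.Simulator` referenced later in this PR and that it exposes `step`/`get_state`/`done`; the toy state and execution logic are purely illustrative, not the library's actual API.

```python
from dataclasses import dataclass

from qlib.rl.simulator import Simulator  # base class referenced elsewhere in this PR


@dataclass
class ExecutionState:
    """Illustrative state: how much of the order remains and the current step."""
    remaining: float
    step: int


class ToySingleAssetSimulator(Simulator):
    """A toy order-execution simulator (sketch only, not the library's API).

    The action at each step is the amount to trade.
    """

    def __init__(self, order_amount: float, max_step: int = 8) -> None:
        super().__init__(order_amount)
        self._state = ExecutionState(remaining=order_amount, step=0)
        self._max_step = max_step

    def step(self, action: float) -> None:
        # Fill part of the order and advance the simulated clock.
        traded = min(action, self._state.remaining)
        self._state.remaining -= traded
        self._state.step += 1

    def get_state(self) -> ExecutionState:
        return self._state

    def done(self) -> bool:
        return self._state.remaining <= 0 or self._state.step >= self._max_step
```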
docs/component/rl.rst
Outdated
Portfolio Construction
------------
Portfolio construction is a process of selecting securities optimally by taking a minimum risk to achieve maximum returns. With an RL-based solution, an agent allocates stocks at every time step by obtaining information for each stock and the market. The key is to develop of policy for building a portfolio and make the policy able to pick the optimal portfolio.
RL-based portfolio construction learning will be released in the future.
docs/component/rl.rst
Outdated
------------
As a fundamental problem in algorithmic trading, order execution aims at fulfilling a specific trading order, either liquidation or acquirement, for a given instrument. Essentially, the goal of order execution is twofold: it not only requires to fulfill the whole order but also targets a more economical execution with maximizing profit gain (or minimizing capital loss). The order execution with only one order of liquidation or acquirement is called single-asset order execution.

Considering stock investment always aim to pursue long-term maximized profits, is usually behaved in the form of a sequential process of continuously adjusting the asset portfolio, execution for multiple orders, including order of liquidation and acquirement, brings more constraints and making the sequence of execution for different orders should be considered, e.g. before executing an order to buy some stocks, we have to sell at least one stock. The order execution with multiple assets is called multi-asset order execution.
is usually behaved?
weird grammar
docs/component/rl.rst
Outdated
According to the order execution’s trait of sequential decision making, an RL-based solution could be applied to solve the order execution. With an RL-based solution, an agent optimizes execution strategy through interacting with the market environment.

With QlibRL, the RL algorithm in the above scenarios can be easily implemented.
I think we can add an extra section for nested Portfolio Construction & Order Execution
and emphasize the difference from traditional methods.
docs/component/rl.rst
Outdated
Example
============
QlibRL provides a set of APIs for developers to further simplify their development. For example, if developers have already defined their simulator / interpreters / reward function / policy, they could launch the training pipeline by simply running:
I think we can link each part to the example instead of only introducing how to call the training API
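For context, the training call being discussed looks roughly like the following. The `reward=PAPenaltyReward()` and `vessel_kwargs={"episode_per_iter": 100}` arguments appear in the hunk quoted further below; the `train` entry point and the remaining argument names are assumptions sketched from that snippet, and the component objects are placeholders the developer is expected to have defined already.

```python
from qlib.rl.order_execution.reward import PAPenaltyReward
from qlib.rl.trainer import train  # assumed entry point; verify against the docs under review

# simulator_fn, state_interpreter, action_interpreter, policy and orders are
# placeholders for components the developer has already implemented.
train(
    simulator_fn=simulator_fn,
    state_interpreter=state_interpreter,
    action_interpreter=action_interpreter,
    initial_states=orders,
    policy=policy,
    reward=PAPenaltyReward(),
    vessel_kwargs={"episode_per_iter": 100},
    trainer_kwargs={"max_iters": 10},
)
```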
docs/component/highfreq.rst
Outdated
@@ -15,15 +15,17 @@ In order to support the joint backtest strategies in multiple levels, a correspo

Besides backtesting, the optimization of strategies from different levels is not standalone and can be affected by each other.
For example, the best portfolio management strategy may change with the performance of order executions(e.g. a portfolio with higher turnover may becomes a better choice when we improve the order execution strategies).
To achieve the overall good performance , it is necessary to consider the interaction of strategies in different level.
To achieve the overall good performance , it is necessary to consider the interaction of strategies in different level.
Please remove the extra useless blank.
docs/component/highfreq.rst
Outdated
The frequency of trading algorithm, decision content and execution environment can be customized by users (e.g. intraday trading, daily-frequency trading, weekly-frequency trading), and the execution environment can be nested with finer-grained trading algorithm and execution environment inside (i.e. sub-workflow in the figure, e.g. daily-frequency orders can be turned into finer-grained decisions by splitting orders within the day). The flexibility of nested decision execution framework makes it easy for users to explore the effects of combining different levels of trading strategies and break down the optimization barriers between different levels of trading algorithm.

The optimization for the nested decision execution framework can be implemented with an RL-based method, which can be supported by `qlib.rl<https://github.com/microsoft/qlib/tree/main/examples/rl>`_.
I think the reference to the docs will be better than an example.
I think keeping the example will also be helpful
docs/component/rl.rst
Outdated
@@ -79,7 +99,7 @@ QlibRL provides a set of APIs for developers to further simplify their developme
policy=policy,
reward=PAPenaltyReward(),
vessel_kwargs={
"episode_per_iter": 100,
"episode_per_iter": 100, 6
What does the `6` mean here?
docs/component/highfreq.rst
Outdated
The optimization for the nested decision execution framework can be implemented with an RL-based method, which can be supported by `qlib.rl<https://github.com/microsoft/qlib/tree/main/examples/rl>`_.
The optimization for the nested decision execution framework can be implemented with the support of QlibRL. To know more about how to use the QlibRL, go to API Reference: `RL API <../reference/api.html#rl>`_.
Reference to the RL docs will be better instead of RL API.
It has been fixed.
Co-authored-by: you-n-g <[email protected]>
docs/component/rl.rst
Outdated
As demonstrated in the following figure, an RL system consists of four elements, 1)the agent 2) the environment the agent interacts with 3) the policy that the agent follows to take actions on the environment and 4)the reward signal from the environment to the agent.
In general, the agent can perceive and interpret its environment, take actions and learn through reward, to seek long-term and maximum overall reward to achieve an optimal solution.

.. image:: ../_static/img/RL_framework.png
I think the image might be too small. Have you checked it in the rendered document?
docs/component/rl.rst
Outdated
Reinforcement Learning in Quantitative Trading
========================================================================
.. currentmodule:: qlib
Suggest adding a summary upfront to describe what kind of problem we intend to solve.
docs/component/rl.rst
Outdated
According to the order execution’s trait of sequential decision-making, an RL-based solution could be applied to solve the order execution. With an RL-based solution, an agent optimizes execution strategy by interacting with the market environment.

With ``QlibRL``, the RL algorithm in the above scenarios can be easily implemented.
Is `QlibRL` a term?
docs/component/rl.rst
Outdated
``QlibRL`` makes it possible to jointly optimize different levels of strategies/models/agents. Take `Nested Decision Execution Framework <https://github.com/microsoft/qlib/blob/main/examples/nested_decision_execution>`_ as an example, the optimization of order execution strategy and portfolio management strategies can interact with each other to maximize returns.

Quick Start
Suggest putting quick start into another separate file. Otherwise the file would look too long.
docs/component/rl.rst
Outdated
buy: ["current", "$close"]
sell: ["current", "$close"]
strategies:
30min:
I think the indent is wrong?
docs/component/rl.rst
Outdated
data_dim: 6
data_ticks: 240
max_step: 8
processed_data_provider:
Suggest adding per line explanation for what each configuration means.
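For instance, the keys quoted above could be annotated along these lines; the meanings in the comments are my reading of the configuration, not taken from the PR, so they should be double-checked against the implementation:

```yaml
data_dim: 6               # dimension of the feature vector observed at each tick (assumed meaning)
data_ticks: 240           # number of ticks per trading day, e.g. 240 one-minute bars (assumed meaning)
max_step: 8               # maximum number of decision steps in one episode (assumed meaning)
processed_data_provider:  # component that supplies the pre-processed feature data (assumed meaning)
```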
docs/component/rl.rst
Outdated
In QlibRL, EnvWrapper is a subclass of gym.Env, so it implements all necessary interfaces of gym.Env. Any classes or pipelines that accept gym.Env should also accept EnvWrapper. Developers do not need to implement their own EnvWrapper to build their own environment. Instead, they only need to implement 4 components of the EnvWrapper:

- `Simulator`
Link to class reference with :class:`~qlib.rl.Simulator`.
docs/component/rl.rst
Outdated
$ python qlib/rl/contrib/backtest.py --config_path backtest_config.yml

In that case, `qlib.rl.order_execution.simulator_qlib.SingleAssetOrderExecution <https://github.com/microsoft/qlib/blob/main/qlib/rl/order_execution/simulator_qlib.py>`_ and `qlib.rl.order_execution.simulator_simple.SingleAssetOrderExecutionSimple <https://github.com/microsoft/qlib/blob/main/qlib/rl/order_execution/simulator_simple.py>`_ as examples for simulator, `StateInterpreter <https://github.com/microsoft/qlib/blob/main/qlib/rl/order_execution/interpreter.py>`_ and `ActionInterpreter <https://github.com/microsoft/qlib/blob/main/qlib/rl/order_execution/interpreter.py>`_ as examples for interpreter, and `qlib.rl.order_execution.reward.PAPenaltyReward <https://github.com/microsoft/qlib/blob/main/qlib/rl/order_execution/reward.py>`_ as an example for reward.
Use `:class:` to reference classes.
docs/component/rl.rst
Outdated
============
``Qlib`` provides a set of APIs for developers to further simplify their development such as base classes for Interpreter, Simulator, Reward.

.. automodule:: qlib.rl
I imagine this will be very long. Put it into another file please.
docs/component/rl/api.rst
Outdated
``Qlib`` provides a set of APIs for developers to further simplify their development such as base classes for Interpreter, Simulator and Reward.

.. autoclass:: qlib.rl.simulator.Simulator
Could we use automodule?
Have you checked the rendered results?
docs/component/rl/framework.rst
Outdated
As you may have noticed, a training vessel itself holds all the required components to build an EnvWrapper rather than holding an instance of EnvWrapper directly. This allows the training vessel to create duplicates of EnvWrapper dynamically when necessary (for example, under parallel training).

With a training vessel, the trainer could finally launch the training pipeline by simple, Scikit-learn-like interfaces (i.e., `trainer.fit()`).
Use double backtick for inline code-block.
fixed
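To make the `trainer.fit()` remark above concrete, here is a hedged sketch of the vessel-plus-trainer pattern the quoted paragraph describes, assuming `Trainer` and `TrainingVessel` live under `qlib.rl.trainer`; the constructor arguments and the placeholder components are assumptions, not the verified API.

```python
from qlib.rl.trainer import Trainer, TrainingVessel  # assumed import locations

# The vessel holds everything needed to build EnvWrapper instances (simulator
# factory, interpreters, policy, reward), so the trainer can create fresh
# environments on demand, e.g. for parallel rollouts.
vessel = TrainingVessel(
    simulator_fn=simulator_fn,              # placeholder: callable that builds a simulator
    state_interpreter=state_interpreter,    # placeholder
    action_interpreter=action_interpreter,  # placeholder
    policy=policy,                          # placeholder
    reward=reward,                          # placeholder
    train_initial_states=orders,            # placeholder: iterable of initial states
    episode_per_iter=100,
)

trainer = Trainer(max_iters=10)
trainer.fit(vessel)  # the Scikit-learn-like entry point mentioned in the quoted doc
```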
docs/component/rl/quickstart.rst
Outdated
QlibRL provides an example of an implementation of a single asset order execution task and the following is an example of the config file to train with QlibRL.

.. code-block:: text
Use `yaml` as the code-block language here instead of `text`.
fixed
docs/component/rl/quickstart.rst
Outdated
.. code-block:: console

   $ python qlib/rl/contrib/train_onpolicy.py --config_path train_config.yml
Use `python -m qlib.rl.contrib.train_onpolicy`. Otherwise users must clone qlib to run this.
fixed
kwargs:
lr: 1.0e-4
# the path for the latest model in the training process
weight_file: ./checkpoints/latest.pth
How do I download this?
`latest.pth` is generated during training so there is no need to download it. The comment has already talked about this, but maybe we could make it more clear. @lwwang1995
I suggest commenting out this line by default.
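Concretely, the shipped config could then look something like this (a sketch of the suggestion, not the final wording):

```yaml
kwargs:
  lr: 1.0e-4
# Path for the latest model checkpoint. latest.pth is produced during training,
# so nothing needs to be downloaded; uncomment to resume from a checkpoint.
# weight_file: ./checkpoints/latest.pth
```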
docs/component/rl.rst
Outdated
@@ -0,0 +1,278 @@
.. _rl: |
This file can be deleted?
fixed
docs/component/rl/quickstart.rst
Outdated
.. code-block:: console

   $ python -m qlib/rl/contrib/train_onpolicy.py --config_path train_config.yml
`python -m qlib.rl.contrib.train_onpolicy`
fixed
examples/rl/README.md
Outdated
@@ -49,7 +49,7 @@ After training, checkpoints will be stored under `checkpoints/`.
## Run backtest

```
python ../../qlib/rl/contrib/backtest.py --config_path ./experiment_config/backtest/config.py
python ../../qlib/rl/contrib/backtest.py --config_path ./experiment_config/backtest/config.yml
Same here
fixed
* Add docs for qlib.rl
* Update docs for qlib.rl
* Add homepage introduct to RL framework
* Update index Link
* Fix Icon
* typo
* Update catelog
* Update docs for qlib.rl
* Update docs for qlib.rl
* Update figure
* Update docs for qlib.rl
* Update setup.py
* FIx setup.py
* Update docs and fix some typos
* Fix the reference to RL docs
* Update framework.svg
* Update framework.svg
* Update framework.svg
* Update docs for qlibrl.
* Update docs for qlibrl.
* Update docs for Qlibrl.
* Update docs for qlibrl.
* Update docs for qlibrl.
* Update docs for qlibrl.
* Add new framework
* Update jpg
* Update framework.svg
* Update framework.svg
* Update Qlib framework and description
* Update grammar
* Update README.md
* Update README.md
* Update docs/component/rl.rst (Co-authored-by: you-n-g <[email protected]>)
* Update docs/component/rl.rst (Co-authored-by: you-n-g <[email protected]>)
* Update docs for qlib.rl
* Change theme for docs.
* Update docs for qlib.rl
* Update docs for qlib.rl
* Update docs for qlib.rl
* Update docs for qlib.rl.
* Update docs for qlib.rl
* Update docs for qlib.rl
* Update docs for qlib.rl

Co-authored-by: Young <[email protected]>
Co-authored-by: you-n-g <[email protected]>