Commit dfc0ed3

fix_typo (microsoft#790)
Signed-off-by: unknown <[email protected]>
1 parent f59cfe5 commit dfc0ed3

56 files changed: +92 / -92 lines. (Large commits have some content hidden by default; only a subset of the changed files is shown below.)

CHANGES.rst (4 additions, 4 deletions)

@@ -30,7 +30,7 @@ Version 0.2.1
 --------------------
 - Support registering user-defined ``Provider``.
 - Support use operators in string format, e.g. ``['Ref($close, 1)']`` is valid field format.
-- Support dynamic fields in ``$some_field`` format. And exising fields like ``Close()`` may be deprecated in the future.
+- Support dynamic fields in ``$some_field`` format. And existing fields like ``Close()`` may be deprecated in the future.

 Version 0.2.2
 --------------------
@@ -78,7 +78,7 @@ Version 0.3.5
 - Support multi-label training, you can provide multiple label in ``handler``. (But LightGBM doesn't support due to the algorithm itself)
 - Refactor ``handler`` code, dataset.py is no longer used, and you can deploy your own labels and features in ``feature_label_config``
 - Handler only offer DataFrame. Also, ``trainer`` and model.py only receive DataFrame
-- Change ``split_rolling_data``, we roll the data on market calender now, not on normal date
+- Change ``split_rolling_data``, we roll the data on market calendar now, not on normal date
 - Move some date config from ``handler`` to ``trainer``

 Version 0.4.0
@@ -167,11 +167,11 @@ Version 0.8.0
 - There are lots of changes for daily trading, it is hard to list all of them. But a few important changes could be noticed
 - The trading limitation is more accurate;
 - In `previous version <https://github.com/microsoft/qlib/blob/v0.7.2/qlib/contrib/backtest/exchange.py#L160>`_, longing and shorting actions share the same action.
-- In `current verison <https://github.com/microsoft/qlib/blob/7c31012b507a3823117bddcc693fc64899460b2a/qlib/backtest/exchange.py#L304>`_, the trading limitation is different between loging and shorting action.
+- In `current version <https://github.com/microsoft/qlib/blob/7c31012b507a3823117bddcc693fc64899460b2a/qlib/backtest/exchange.py#L304>`_, the trading limitation is different between logging and shorting action.
 - The constant is different when calculating annualized metrics.
 - `Current version <https://github.com/microsoft/qlib/blob/7c31012b507a3823117bddcc693fc64899460b2a/qlib/contrib/evaluate.py#L42>`_ uses more accurate constant than `previous version <https://github.com/microsoft/qlib/blob/v0.7.2/qlib/contrib/evaluate.py#L22>`_
 - `A new version <https://github.com/microsoft/qlib/blob/7c31012b507a3823117bddcc693fc64899460b2a/qlib/tests/data.py#L17>`_ of data is released. Due to the unstability of Yahoo data source, the data may be different after downloading data again.
-- Users could chec kout the backtesting results between `Current version <https://github.com/microsoft/qlib/tree/7c31012b507a3823117bddcc693fc64899460b2a/examples/benchmarks>`_ and `previous version <https://github.com/microsoft/qlib/tree/v0.7.2/examples/benchmarks>`_
+- Users could check out the backtesting results between `Current version <https://github.com/microsoft/qlib/tree/7c31012b507a3823117bddcc693fc64899460b2a/examples/benchmarks>`_ and `previous version <https://github.com/microsoft/qlib/tree/v0.7.2/examples/benchmarks>`_


 Other Versions
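
The changelog hunks above mention Qlib's string field expressions (operators such as ``Ref($close, 1)`` and dynamic ``$some_field`` fields). Below is a minimal sketch of how such expressions are typically queried through Qlib's data API, assuming Qlib is initialized against a locally downloaded dataset; the provider path, instrument pool, and date range are illustrative placeholders, not values from this commit.

```python
import qlib
from qlib.data import D

# Assumed local data location -- adjust to wherever the example data was downloaded.
qlib.init(provider_uri="~/.qlib/qlib_data/cn_data", region="cn")

# Operators and dynamic "$field" references are written directly inside the
# string field expressions.
fields = ["$close", "Ref($close, 1)", "$close / Ref($close, 1) - 1"]

df = D.features(
    D.instruments("csi300"),   # an instrument pool shipped with the example data
    fields,
    start_time="2019-01-01",
    end_time="2019-12-31",
)
print(df.head())
```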

docs/component/highfreq.rst (1 addition, 1 deletion)

@@ -14,7 +14,7 @@ To get the join trading performance of daily and intraday trading, they must int
 In order to support the joint backtest strategies in multiple levels, a corresponding framework is required. None of the publicly available high-frequency trading frameworks considers multi-level joint trading, which make the backtesting aforementioned inaccurate.

 Besides backtesting, the optimization of strategies from different levels is not standalone and can be affected by each other.
-For example, the best portfolio management strategy may change with the performance of order executions(e.g. a portfolio with higher turnover may becomes a better choice when we imporve the order execution strategies).
+For example, the best portfolio management strategy may change with the performance of order executions(e.g. a portfolio with higher turnover may becomes a better choice when we improve the order execution strategies).
 To achieve the overall good performance , it is necessary to consider the interaction of strategies in different level.

 Therefore, building a new framework for trading in multiple levels becomes necessary to solve the various problems mentioned above, for which we designed a nested decision execution framework that consider the interaction of strategies.

docs/component/recorder.rst (1 addition, 1 deletion)

@@ -37,7 +37,7 @@ Here is a general view of the structure of the system:

 This experiment management system defines a set of interface and provided a concrete implementation ``MLflowExpManager``, which is based on the machine learning platform: ``MLFlow`` (`link <https://mlflow.org/>`_).

-If users set the implementation of ``ExpManager`` to be ``MLflowExpManager``, they can use the command `mlflow ui` to visualize and check the experiment results. For more information, pleaes refer to the related documents `here <https://www.mlflow.org/docs/latest/cli.html#mlflow-ui>`_.
+If users set the implementation of ``ExpManager`` to be ``MLflowExpManager``, they can use the command `mlflow ui` to visualize and check the experiment results. For more information, please refer to the related documents `here <https://www.mlflow.org/docs/latest/cli.html#mlflow-ui>`_.

 Qlib Recorder
 ===================
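
The recorder hunk above describes viewing experiment results with `mlflow ui` when ``MLflowExpManager`` is in use. Here is a hedged sketch of the typical recording workflow through Qlib's workflow API; the experiment name, parameters, and metric values are made up for illustration.

```python
from qlib.workflow import R  # Qlib's global recorder wrapper

# Record an experiment; with the default MLflowExpManager this is backed by MLflow.
with R.start(experiment_name="typo_fix_demo"):
    R.log_params(learning_rate=0.01, num_boost_round=100)
    R.log_metrics(ic=0.042, rank_ic=0.051)
    # Artifacts (models, prediction frames, ...) can be saved as well, e.g.:
    # R.save_objects(trained_model=model)

# Afterwards, running `mlflow ui` from the working directory (where the
# ./mlruns store was created) serves the results at http://localhost:5000 by default.
```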

docs/hidden/tuner.rst (5 additions, 5 deletions)

@@ -31,7 +31,7 @@ Let's see an example,

 First make sure you have the latest version of `qlib` installed.

-Then, you need to privide a configuration to setup the experiment.
+Then, you need to provide a configuration to setup the experiment.
 We write a simple configuration example as following,

 .. code-block:: YAML
@@ -217,13 +217,13 @@ The tuner pipeline contains different tuners, and the `tuner` program will proce
 Each part represents a tuner, and its modules which are to be tuned. Space in each part is the hyper-parameters' space of a certain module, you need to create your searching space and modify it in `/qlib/contrib/tuner/space.py`. We use `hyperopt` package to help us to construct the space, you can see the detail of how to use it in https://github.com/hyperopt/hyperopt/wiki/FMin .

 - model
-You need to provide the `class` and the `space` of the model. If the model is user's own implementation, you need to privide the `module_path`.
+You need to provide the `class` and the `space` of the model. If the model is user's own implementation, you need to provide the `module_path`.

 - trainer
-You need to proveide the `class` of the trainer. If the trainer is user's own implementation, you need to privide the `module_path`.
+You need to provide the `class` of the trainer. If the trainer is user's own implementation, you need to provide the `module_path`.

 - strategy
-You need to provide the `class` and the `space` of the strategy. If the strategy is user's own implementation, you need to privide the `module_path`.
+You need to provide the `class` and the `space` of the strategy. If the strategy is user's own implementation, you need to provide the `module_path`.

 - data_label
 The label of the data, you can search which kinds of labels will lead to a better result. This part is optional, and you only need to provide `space`.
@@ -273,7 +273,7 @@ You need to use the same dataset to evaluate your different `estimator` experime
 About the data and backtest
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~

-`data` and `backtest` are all same in the whole `tuner` experiment. Different `estimator` experiments must use the same data and backtest method. So, these two parts of config are same with that in `estimator` configuration. You can see the precise defination of these parts in `estimator` introduction. We only provide an example here.
+`data` and `backtest` are all same in the whole `tuner` experiment. Different `estimator` experiments must use the same data and backtest method. So, these two parts of config are same with that in `estimator` configuration. You can see the precise definition of these parts in `estimator` introduction. We only provide an example here.

 .. code-block:: YAML
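
The tuner hunks above say that search spaces are built with the `hyperopt` package in `/qlib/contrib/tuner/space.py`. A minimal sketch of what such a space definition could look like follows; the parameter names and ranges are illustrative, not the actual contents of `space.py`, and note that this document lives under `docs/hidden` and may describe a deprecated module.

```python
from hyperopt import hp

# An illustrative hyper-parameter space for a GBDT-style model, written in the
# dictionary form consumed by hyperopt's fmin/suggest machinery.
GBDTSpace = {
    "num_leaves": hp.choice("num_leaves", [31, 63, 127]),
    "learning_rate": hp.loguniform("learning_rate", -5, -1),  # roughly e^-5 .. e^-1
    "colsample_bytree": hp.uniform("colsample_bytree", 0.6, 1.0),
    "lambda_l2": hp.uniform("lambda_l2", 0.0, 10.0),
}
```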

docs/introduction/quick.rst (2 additions, 2 deletions)

@@ -31,7 +31,7 @@ Users can easily intsall ``Qlib`` according to the following steps:
 git clone https://github.com/microsoft/qlib.git && cd qlib
 python setup.py install

-To kown more about `installation`, please refer to `Qlib Installation <../start/installation.html>`_.
+To known more about `installation`, please refer to `Qlib Installation <../start/installation.html>`_.

 Prepare Data
 ==============
@@ -44,7 +44,7 @@ Load and prepare data by running the following code:

 This dataset is created by public data collected by crawler scripts in ``scripts/data_collector/``, which have been released in the same repository. Users could create the same dataset with it.

-To kown more about `prepare data`, please refer to `Data Preparation <../component/data.html#data-preparation>`_.
+To known more about `prepare data`, please refer to `Data Preparation <../component/data.html#data-preparation>`_.

 Auto Quant Research Workflow
 ====================================
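
The quick-start hunks above cover installation and data preparation, and the changelog earlier links to ``qlib/tests/data.py`` for the released dataset. A hedged sketch of downloading the community data through that helper is shown below; the argument names follow my recollection of the ``GetData`` interface and the target path is a common default, so both should be checked against the repository.

```python
from qlib.tests.data import GetData

# Download the community-provided daily CN dataset into the usual data folder
# (path and region are assumptions for illustration).
GetData().qlib_data(target_dir="~/.qlib/qlib_data/cn_data", region="cn")
```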

examples/benchmarks/TFT/data_formatters/base.py (1 addition, 1 deletion)

@@ -32,7 +32,7 @@
 import enum


-# Type defintions
+# Type definitions
 class DataTypes(enum.IntEnum):
 """Defines numerical types of each column."""

examples/benchmarks/TFT/libs/hyperparam_opt.py (3 additions, 3 deletions)

@@ -254,9 +254,9 @@ def __init__(
 param_ranges: Discrete hyperparameter range for random search.
 fixed_params: Fixed model parameters per experiment.
 root_model_folder: Folder to store optimisation artifacts.
-worker_number: Worker index definining which set of hyperparameters to
+worker_number: Worker index defining which set of hyperparameters to
 test.
-search_iterations: Maximum numer of random search iterations.
+search_iterations: Maximum number of random search iterations.
 num_iterations_per_worker: How many iterations are handled per worker.
 clear_serialised_params: Whether to regenerate hyperparameter
 combinations.
@@ -330,7 +330,7 @@ def load_serialised_hyperparam_df(self):
 if os.path.exists(self.serialised_ranges_folder):
 df = pd.read_csv(self.serialised_ranges_path, index_col=0)
 else:
-print("Unable to load - regenerating serach ranges instead")
+print("Unable to load - regenerating search ranges instead")
 df = self.update_serialised_hyperparam_df()

 return df

examples/benchmarks/TFT/libs/tft_model.py (3 additions, 3 deletions)

@@ -342,7 +342,7 @@ def get(cls, key):

 @classmethod
 def contains(cls, key):
-"""Retuns boolean indicating whether key is present in cache."""
+"""Returns boolean indicating whether key is present in cache."""

 return key in cls._data_cache

@@ -1120,10 +1120,10 @@ def predict(self, df, return_targets=False):
 Args:
 df: Input dataframe
 return_targets: Whether to also return outputs aligned with predictions to
-faciliate evaluation
+facilitate evaluation

 Returns:
-Input dataframe or tuple of (input dataframe, algined output dataframe).
+Input dataframe or tuple of (input dataframe, aligned output dataframe).
 """

 data = self._batch_data(df)

examples/benchmarks/TFT/tft.py (1 addition, 1 deletion)

@@ -295,7 +295,7 @@ def finetune(self, dataset: DatasetH):
 def to_pickle(self, path: Union[Path, str]):
 """
 Tensorflow model can't be dumped directly.
-So the data should be save seperatedly
+So the data should be save separately

 **TODO**: Please implement the function to load the files

examples/benchmarks/TRA/README.md (1 addition, 1 deletion)

@@ -57,7 +57,7 @@ And here are two ways to run the model:
 python example.py --config_file configs/config_alstm.yaml
 ```

-Here we trained TRA on a pretrained backbone model. Therefore we run `*_init.yaml` before TRA's scipts.
+Here we trained TRA on a pretrained backbone model. Therefore we run `*_init.yaml` before TRA's scripts.

 ### Results
