The machine learning toolkit for time series analysis in Python
| Section | Description |
|---|---|
| Installation | Installing the dependencies and tslearn |
| Getting started | A quick introduction on how to use tslearn |
| Available features | An extensive overview of tslearn's functionalities |
| Documentation | A link to our API reference and a gallery of examples |
| Contributing | A guide for heroes willing to contribute |
| Citation | A citation for tslearn for scholarly articles |
There are different alternatives to install tslearn:

- PyPI:

  ```shell
  python -m pip install tslearn
  ```

- Conda:

  ```shell
  conda install -c conda-forge tslearn
  ```

- Git:

  ```shell
  python -m pip install https://github.com/tslearn-team/tslearn/archive/main.zip
  ```
In order for the installation to be successful, the required dependencies must be installed. For a more detailed guide on how to install tslearn, please see the Documentation.
tslearn expects a time series dataset to be formatted as a 3D numpy array. The three dimensions correspond to the number of time series, the number of measurements per time series, and the number of dimensions, respectively (`n_ts, max_sz, d`). In order to get the data in the right format, different solutions exist:
- You can use utility functions such as `to_time_series_dataset`.
- You can convert from other popular time series toolkits in Python.
- You can load any of the UCR datasets in the required format.
- You can generate synthetic data using the `generators` module.
It should further be noted that tslearn supports variable-length time series.
```python
>>> from tslearn.utils import to_time_series_dataset
>>> my_first_time_series = [1, 3, 4, 2]
>>> my_second_time_series = [1, 2, 4, 2]
>>> my_third_time_series = [1, 2, 4, 2, 2]
>>> X = to_time_series_dataset([my_first_time_series,
                                my_second_time_series,
                                my_third_time_series])
>>> y = [0, 1, 1]
```
tslearn also provides several utilities to preprocess the data. In order to facilitate the convergence of different algorithms, you can scale the time series. Alternatively, in order to speed up training times, you can resample the data or apply a piecewise transformation.
```python
>>> from tslearn.preprocessing import TimeSeriesScalerMinMax
>>> X_scaled = TimeSeriesScalerMinMax().fit_transform(X)
>>> print(X_scaled)
[[[0.] [0.667] [1.] [0.333] [nan]]
 [[0.] [0.333] [1.] [0.333] [nan]]
 [[0.] [0.333] [1.] [0.333] [0.333]]]
```
After getting the data in the right format, a model can be trained. Depending on the use case, tslearn supports different tasks: classification, clustering and regression. For an extensive overview of possibilities, check out our gallery of examples.
```python
>>> from tslearn.neighbors import KNeighborsTimeSeriesClassifier
>>> knn = KNeighborsTimeSeriesClassifier(n_neighbors=1)
>>> knn.fit(X_scaled, y)
>>> print(knn.predict(X_scaled))
[0 1 1]
```
As can be seen, the models in tslearn follow the same API as those of the well-known scikit-learn. Moreover, they are fully compatible with it, allowing you to use scikit-learn utilities such as hyper-parameter tuning and pipelines.
tslearn further allows you to perform other types of analysis, such as calculating barycenters of a group of time series or computing distances between time series using a variety of distance metrics.
| data | processing | clustering | classification | regression | metrics |
|---|---|---|---|---|---|
| UCR Datasets | Scaling | TimeSeriesKMeans | KNN Classifier | KNN Regressor | Dynamic Time Warping |
| Generators | Piecewise | KShape | TimeSeriesSVC | TimeSeriesSVR | Global Alignment Kernel |
| Conversion (1, 2) | | KernelKMeans | LearningShapelets | MLP | Barycenters |
| | | | Early Classification | | Matrix Profile |
The documentation is hosted at readthedocs. It includes an API reference, a gallery of examples, and a user guide.
If you would like to contribute to tslearn, please have a look at our contribution guidelines. A list of interesting TODOs can be found here. If you want other ML methods for time series to be added to this list, do not hesitate to open an issue!
If you use tslearn in a scientific publication, we would appreciate citations:
```bibtex
@article{JMLR:v21:20-091,
  author  = {Romain Tavenard and Johann Faouzi and Gilles Vandewiele and
             Felix Divo and Guillaume Androz and Chester Holtz and
             Marie Payne and Roman Yurchak and Marc Ru{\ss}wurm and
             Kushal Kolar and Eli Woods},
  title   = {Tslearn, A Machine Learning Toolkit for Time Series Data},
  journal = {Journal of Machine Learning Research},
  year    = {2020},
  volume  = {21},
  number  = {118},
  pages   = {1-6},
  url     = {http://jmlr.org/papers/v21/20-091.html}
}
```
The authors would like to thank Mathieu Blondel for providing code for Kernel k-means and Soft-DTW, and Mehran Maghoumi for his `torch`-compatible implementation of Soft-DTW.