Backend selection and use
=========================

`tslearn` provides different backends (`NumPy` and `PyTorch`)
to compute time series metrics such as `DTW` and `Soft-DTW`.
The `PyTorch` backend can be used to compute gradients of
metric functions thanks to automatic differentiation.

Backend selection
-----------------

A backend can be instantiated using the function ``instantiate_backend``.
To specify which backend should be instantiated (`NumPy` or `PyTorch`),
this function accepts four different kinds of input parameters:

* a string equal to ``"numpy"`` or ``"pytorch"``.
* a `NumPy` array or a `Torch` tensor.
* a Backend instance. The input backend is then returned.
* ``None`` or any other input not mentioned previously. The `NumPy` backend is then instantiated.

Examples
~~~~~~~~

If the input is the string ``"numpy"``, the ``NumPyBackend`` is instantiated.

.. code-block:: python

    >>> from tslearn.backend import instantiate_backend
    >>> be = instantiate_backend("numpy")
    >>> print(be.backend_string)
    numpy

If the input is the string ``"pytorch"``, the ``PyTorchBackend`` is instantiated.

.. code-block:: python

    >>> be = instantiate_backend("pytorch")
    >>> print(be.backend_string)
    pytorch

If the input is a `NumPy` array, the ``NumPyBackend`` is instantiated.

.. code-block:: python

    >>> import numpy as np
    >>> be = instantiate_backend(np.array([0]))
    >>> print(be.backend_string)
    numpy

If the input is a `Torch` tensor, the ``PyTorchBackend`` is instantiated.

.. code-block:: python

    >>> import torch
    >>> be = instantiate_backend(torch.tensor([0]))
    >>> print(be.backend_string)
    pytorch

If the input is a Backend instance, the input backend is returned.

.. code-block:: python

    >>> print(be.backend_string)
    pytorch
    >>> be = instantiate_backend(be)
    >>> print(be.backend_string)
    pytorch

If the input is ``None``, the ``NumPyBackend`` is instantiated.

.. code-block:: python

    >>> be = instantiate_backend(None)
    >>> print(be.backend_string)
    numpy

If the input is anything else, the ``NumPyBackend`` is instantiated.

.. code-block:: python

    >>> be = instantiate_backend("Hello, World!")
    >>> print(be.backend_string)
    numpy

The function ``instantiate_backend`` accepts any number of input parameters, including zero.
To select which backend should be instantiated (`NumPy` or `PyTorch`),
the inputs are examined in order, and the first input that matches one of the
rules above determines the backend.

.. code-block:: python

    >>> be = instantiate_backend(1, None, "Hello, World!", torch.tensor([0]), "numpy")
    >>> print(be.backend_string)
    pytorch

If none of the inputs are related to `NumPy` or `PyTorch`, the ``NumPyBackend`` is instantiated.

.. code-block:: python

    >>> be = instantiate_backend(1, None, "Hello, World!")
    >>> print(be.backend_string)
    numpy

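Calling ``instantiate_backend`` with no argument at all is expected to behave the
same way: since there is no input to inspect, the fallback rule applies and the
``NumPyBackend`` should be instantiated.

.. code-block:: python

    >>> be = instantiate_backend()
    >>> print(be.backend_string)
    numpy
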
Use the backends
----------------

The names of the attributes and methods of the backends
are inspired by those of `NumPy`.

Examples
~~~~~~~~

Create backend objects.

.. code-block:: python

    >>> be = instantiate_backend("pytorch")
    >>> mat = be.array([[0, 1], [2, 3]], dtype=float)
    >>> print(mat)
    tensor([[0., 1.],
            [2., 3.]], dtype=torch.float64)

Use backend functions.

.. code-block:: python

    >>> norm = be.linalg.norm(mat)
    >>> print(norm)
    tensor(3.7417, dtype=torch.float64)

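Since both backends expose the same `NumPy`-style names, code written against a
backend object runs unchanged whichever backend is selected. The helper below is
a hypothetical sketch (it is not part of `tslearn`) that mimics the dispatch
pattern used by `tslearn`'s metric functions, described in the next section.

.. code-block:: python

    from tslearn.backend import instantiate_backend

    def frobenius_norm(mat, be=None):
        """Hypothetical backend-agnostic helper."""
        # Select the backend from the explicit argument or from the input type.
        be = instantiate_backend(be, mat)
        # The same NumPy-style calls work on both backends.
        mat = be.array(mat, dtype=float)
        return be.linalg.norm(mat)

Called on a `Torch` tensor, such a helper would return a `Torch` tensor; called
on a `NumPy` array, a `NumPy` value.
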
Choose the backend used by metric functions
-------------------------------------------

`tslearn`'s metric functions have an optional input parameter ``be`` to specify the
backend used to compute the metric.

Examples
~~~~~~~~

.. code-block:: python

    >>> import torch
    >>> from tslearn.metrics import dtw
    >>> s1 = torch.tensor([[1.0], [2.0], [3.0]], requires_grad=True)
    >>> s2 = torch.tensor([[3.0], [4.0], [-3.0]])
    >>> sim = dtw(s1, s2, be="pytorch")
    >>> print(sim)
    tensor(6.4807, grad_fn=<SqrtBackward0>)

By default, the optional input parameter ``be`` is equal to ``None``.
Note that the first line of the function ``dtw`` is:

.. code-block:: python

    be = instantiate_backend(be, s1, s2)

Therefore, even if ``be=None``, the ``PyTorchBackend`` is instantiated and used to compute the
DTW metric since ``s1`` and ``s2`` are `Torch` tensors.

.. code-block:: python

    >>> sim = dtw(s1, s2)
    >>> print(sim)
    tensor(6.4807, grad_fn=<SqrtBackward0>)

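Conversely, if ``s1`` and ``s2`` are `NumPy` arrays and ``be=None``, the same
dispatch rule should select the ``NumPyBackend`` and return a `NumPy` value
(the result is rounded here for readability).

.. code-block:: python

    >>> import numpy as np
    >>> s1_np = np.array([[1.0], [2.0], [3.0]])
    >>> s2_np = np.array([[3.0], [4.0], [-3.0]])
    >>> sim = dtw(s1_np, s2_np)
    >>> print(round(sim, 4))
    6.4807
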
Automatic differentiation
-------------------------

The `PyTorch` backend can be used to compute the gradients of the metric functions thanks to automatic differentiation.

Examples
~~~~~~~~

Compute the gradient of the Dynamic Time Warping similarity measure.

.. code-block:: python

    >>> s1 = torch.tensor([[1.0], [2.0], [3.0]], requires_grad=True)
    >>> s2 = torch.tensor([[3.0], [4.0], [-3.0]])
    >>> sim = dtw(s1, s2, be="pytorch")
    >>> sim.backward()
    >>> d_s1 = s1.grad
    >>> print(d_s1)
    tensor([[-0.3086],
            [-0.1543],
            [ 0.7715]])

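These gradients can be fed to a standard `PyTorch` optimizer, for instance to
deform a series so that it moves closer to a target under DTW. The loop below is
an illustrative sketch; the learning rate and the number of iterations are
arbitrary choices.

.. code-block:: python

    >>> series = torch.tensor([[1.0], [2.0], [3.0]], requires_grad=True)
    >>> target = torch.tensor([[3.0], [4.0], [-3.0]])
    >>> optimizer = torch.optim.SGD([series], lr=0.1)
    >>> for _ in range(100):
    ...     optimizer.zero_grad()  # reset accumulated gradients
    ...     loss = dtw(series, target, be="pytorch")
    ...     loss.backward()        # gradients of DTW w.r.t. ``series``
    ...     optimizer.step()       # gradient descent step
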
Compute the gradient of the Soft-DTW similarity measure.

.. code-block:: python

    >>> from tslearn.metrics import soft_dtw
    >>> ts1 = torch.tensor([[1.0], [2.0], [3.0]], requires_grad=True)
    >>> ts2 = torch.tensor([[3.0], [4.0], [-3.0]])
    >>> sim = soft_dtw(ts1, ts2, gamma=1.0, be="pytorch", compute_with_backend=True)
    >>> print(sim)
    tensor(41.1876, dtype=torch.float64, grad_fn=<SelectBackward0>)
    >>> sim.backward()
    >>> d_ts1 = ts1.grad
    >>> print(d_ts1)
    tensor([[-4.0001],
            [-2.2852],
            [10.1643]])