@@ -97,30 +97,30 @@ class Kernel(Module):
     .. note::

-        The :attr:`lengthscale` parameter is parameterized on a log scale to constrain it to be positive.
-        You can set a prior on this parameter using the :attr:`lengthscale_prior` argument.
+        The lengthscale parameter is parameterized on a log scale to constrain it to be positive.
+        You can set a prior on this parameter using the lengthscale_prior argument.

-    Base Args:
-        :attr:`ard_num_dims` (int, optional):
+    Args:
+        ard_num_dims (int, optional):
             Set this if you want a separate lengthscale for each input
-            dimension. It should be `d` if :attr:`x1` is a `n x d` matrix. Default: `None`
-        :attr:`batch_shape` (torch.Size, optional):
+            dimension. It should be `d` if x1 is a `n x d` matrix. Default: `None`
+        batch_shape (torch.Size, optional):
             Set this if you want a separate lengthscale for each batch of input
-            data. It should be `b1 x ... x bk` if :attr:`x1` is a `b1 x ... x bk x n x d` tensor.
-        :attr:`active_dims` (tuple of ints, optional):
+            data. It should be `b1 x ... x bk` if x1 is a `b1 x ... x bk x n x d` tensor.
+        active_dims (tuple of ints, optional):
             Set this if you want to compute the covariance of only a few input dimensions. The ints
             correspond to the indices of the dimensions. Default: `None`.
-        :attr:`lengthscale_prior` (Prior, optional):
+        lengthscale_prior (Prior, optional):
             Set this if you want to apply a prior to the lengthscale parameter. Default: `None`
-        :attr:`lengthscale_constraint` (Constraint, optional):
+        lengthscale_constraint (Constraint, optional):
             Set this if you want to apply a constraint to the lengthscale parameter. Default: `Positive`.
-        :attr:`eps` (float):
+        eps (float):
             The minimum value that the lengthscale can take (prevents divide by zero errors). Default: `1e-6`.

-    Base Attributes:
-        :attr:`lengthscale` (Tensor):
+    Attributes:
+        lengthscale (Tensor):
             The lengthscale parameter. Size/shape of parameter depends on the
-            :attr:`ard_num_dims` and :attr:`batch_shape` arguments.
+            ard_num_dims and batch_shape arguments.

     Example:
         >>> covar_module = gpytorch.kernels.LinearKernel()
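
For readers skimming the diff, here is a minimal sketch of how the constructor arguments documented above combine in practice. It assumes a recent GPyTorch install; `RBFKernel`, `GammaPrior`, and `GreaterThan` are stand-ins for whatever concrete kernel, prior, and constraint you actually use, since the base `Kernel` class is not instantiated directly.

```python
import torch
import gpytorch

# Illustrative only: a concrete kernel (RBF) is used because the base Kernel is abstract.
# ard_num_dims=3 gives one lengthscale per input dimension; batch_shape=torch.Size([2])
# gives a separate lengthscale for each of two batches of data.
kernel = gpytorch.kernels.RBFKernel(
    ard_num_dims=3,
    batch_shape=torch.Size([2]),
    lengthscale_prior=gpytorch.priors.GammaPrior(3.0, 6.0),
    lengthscale_constraint=gpytorch.constraints.GreaterThan(1e-4),
)

# The lengthscale attribute reflects both arguments; the expected shape here is
# torch.Size([2, 1, 3]), i.e. batch_shape x 1 x ard_num_dims.
print(kernel.lengthscale.shape)
```

The constraint machinery is what implements the positivity mentioned in the note: the raw parameter lives on an unconstrained scale and is transformed before use.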
@@ -188,13 +188,13 @@ def forward(self, x1, x2, diag=False, last_dim_is_batch=False, **params):
         This method should be implemented by all Kernel subclasses.

         Args:
-            :attr:`x1` (Tensor `n x d` or `b x n x d`):
+            x1 (Tensor `n x d` or `b x n x d`):
                 First set of data
-            :attr:`x2` (Tensor `m x d` or `b x m x d`):
+            x2 (Tensor `m x d` or `b x m x d`):
                 Second set of data
-            :attr:`diag` (bool):
+            diag (bool):
                 Should the Kernel compute the whole kernel, or just the diag?
-            :attr:`last_dim_is_batch` (tuple, optional):
+            last_dim_is_batch (tuple, optional):
                 If this is true, it treats the last dimension of the data as another batch dimension.
                 (Useful for additive structure over the dimensions). Default: False
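
A short sketch of what these `forward` arguments look like from the caller's side. Calling the kernel module (its `__call__`) eventually dispatches to `forward`; the shapes below assume an `RBFKernel` and a recent GPyTorch version, and the full kernel is returned as a lazily evaluated matrix.

```python
import torch
import gpytorch

kernel = gpytorch.kernels.RBFKernel()
x1 = torch.randn(10, 5)  # n x d
x2 = torch.randn(20, 5)  # m x d

# Full kernel: an n x m (lazily evaluated) covariance matrix.
print(kernel(x1, x2).shape)                          # torch.Size([10, 20])

# diag=True: only the diagonal, so both inputs must be the same points.
print(kernel(x1, x1, diag=True).shape)               # torch.Size([10])

# last_dim_is_batch=True: each input dimension becomes a batch dimension,
# giving a d x n x m output (useful for additive structure over dimensions).
print(kernel(x1, x2, last_dim_is_batch=True).shape)  # torch.Size([5, 10, 20])
```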
@@ -284,15 +284,15 @@ def covar_dist(
         all pairs of points in x1 and x2.

         Args:
-            :attr:`x1` (Tensor `n x d` or `b1 x ... x bk x n x d`):
+            x1 (Tensor `n x d` or `b1 x ... x bk x n x d`):
                 First set of data.
-            :attr:`x2` (Tensor `m x d` or `b1 x ... x bk x m x d`):
+            x2 (Tensor `m x d` or `b1 x ... x bk x m x d`):
                 Second set of data.
-            :attr:`diag` (bool):
+            diag (bool):
                 Should we return the whole distance matrix, or just the diagonal? If True, we must have `x1 == x2`.
-            :attr:`last_dim_is_batch` (tuple, optional):
+            last_dim_is_batch (tuple, optional):
                 Is the last dimension of the data a batch dimension or not?
-            :attr:`square_dist` (bool):
+            square_dist (bool):
                 Should we square the distance matrix before returning?

         Returns:
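
Since `covar_dist` is the helper most custom kernels lean on, here is a minimal sketch of a subclass using it. `MyRBFKernel` is a hypothetical name, and the body assumes the `diag`, `last_dim_is_batch`, and `square_dist` keywords documented in this hunk.

```python
import torch
import gpytorch

class MyRBFKernel(gpytorch.kernels.Kernel):
    # Opt in to the lengthscale machinery documented in the class docstring above.
    has_lengthscale = True

    def forward(self, x1, x2, diag=False, last_dim_is_batch=False, **params):
        # Scale inputs by the lengthscale, then let covar_dist compute the
        # (batched) pairwise squared distances, honoring diag/last_dim_is_batch.
        x1_ = x1.div(self.lengthscale)
        x2_ = x2.div(self.lengthscale)
        sq_dist = self.covar_dist(
            x1_, x2_, diag=diag, last_dim_is_batch=last_dim_is_batch,
            square_dist=True, **params,
        )
        return torch.exp(-0.5 * sq_dist)

kernel = MyRBFKernel(ard_num_dims=2)
x = torch.randn(8, 2)
print(kernel(x, x).shape)  # torch.Size([8, 8])
```

Delegating the distance computation to `covar_dist` keeps the subclass free of any batching logic, which is the point of the method.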