[Docs] GP Regression With Uncertain Inputs #2175

Open
corwinjoy opened this issue Oct 24, 2022 · 1 comment

@corwinjoy (Contributor)

📚 Documentation/Examples

Thanks for this awesome library! I'm really enjoying learning it, but I did get a little confused while
reading the documentation and would like to suggest an improvement.

In the example with uncertain inputs, the final graph is confusing.
Specifically, I am talking about this notebook:

https://github.com/cornellius-gp/gpytorch/blob/master/examples/01_Exact_GPs/GP_Regression_DistributionalKernel.ipynb

The setup is that we have a sine function whose input noise decreases as x increases:

# Training data is 20 points in [0, 1] inclusive, regularly spaced
train_x_mean = torch.linspace(0, 1, 20)

# We'll assume the input standard deviation shrinks the closer we get to 1
train_x_stdv = torch.linspace(0.03, 0.01, 20)
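For reference, the notebook then packs each training point's mean and log noise scale into a single two-column distributional input. Below is a minimal sketch of that step, assuming the same stack((mean, noise.log()), dim=1) packing that the final cell uses for the test input; the train_y line is likewise an assumed form, not copied from the notebook:

import math
import torch

# Same tensors as above: 20 means in [0, 1], noise scale shrinking from 3e-2 to 1e-2
train_x_mean = torch.linspace(0, 1, 20)
train_x_stdv = torch.linspace(0.03, 0.01, 20)

# Pack (mean, log noise scale) into a 20 x 2 distributional input,
# mirroring the test cell's convention
train_x_distributional = torch.stack((train_x_mean, train_x_stdv.log()), dim=1)

# Noisy sine targets (assumed shape of the training signal)
train_y = torch.sin(train_x_mean * (2 * math.pi)) + 0.1 * torch.randn_like(train_x_mean)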

But when we look at the final graph, the confidence bands become much wider as the training noise decreases!
This is the opposite of what I would expect!

This seems strange, but I think it is caused by the noise level assumed for the test set.

In the final cell you have:
...
test_x_distributional = torch.stack((test_x, (1e-2 * torch.ones_like(test_x)).log()), dim=1)
...

This level of noise for the test data has a huge impact on the final graph.

  1. If you set the test noise level to e.g. 1e-3, you get what I would have expected:
    wider bands on the left, then narrower bands on the right as the training noise decreases.
  2. If you set the test noise level to e.g. 1e-1, everything flattens out and you get a single wide band.
  3. For the existing noise level of 1e-2, I'm not quite sure why the bands are wider on the right,
    but I think the logic is this: a test noise of 1e-2 is easy to explain on the left, where the
    training noise is 3e-2, but harder to explain on the right, where the training noise is only 1e-2.
    (A sketch comparing these three settings follows this list.)
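To see all three regimes side by side, you can rebuild the distributional test input with each candidate noise level and compare the band widths at both ends of the input range. A minimal sketch, assuming model and likelihood are the trained objects from the notebook:

import torch

model.eval()
likelihood.eval()

test_x = torch.linspace(0, 1, 100)

for noise in (1e-3, 1e-2, 1e-1):
    # Rebuild the distributional test input with this noise level
    test_x_distributional = torch.stack(
        (test_x, (noise * torch.ones_like(test_x)).log()), dim=1
    )
    with torch.no_grad():
        pred = likelihood(model(test_x_distributional))
        lower, upper = pred.confidence_region()
    # Compare the confidence-band width at the two ends of [0, 1]
    print(f"noise={noise:.0e}  "
          f"width at x=0: {(upper[0] - lower[0]).item():.3f}  "
          f"width at x=1: {(upper[-1] - lower[-1]).item():.3f}")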

Recommendations:
I would recommend changing the test noise to 1e-3. Alternatively, if the intent is really to show the interaction between similar training and test noise levels, perhaps emphasize this and discuss how training and test noise interact.
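Concretely, the recommended change is a one-line edit to the final cell, using the 1e-3 level suggested above:

test_x_distributional = torch.stack((test_x, (1e-3 * torch.ones_like(test_x)).log()), dim=1)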

@gpleiss (Member)

gpleiss commented Nov 7, 2022

I think changing test noise to 1e-3 makes sense. Feel free to open a PR making this change.

corwinjoy pushed a commit to corwinjoy/gpytorch that referenced this issue Nov 23, 2022
…Inputs'.

Set the test noise level to 1e-3, smaller than the training noise, so we get the output graph you would expect: wider bands on the left, then narrower bands on the right as the training noise decreases.