src/05_Interpolation.jl: 43 additions & 15 deletions
@@ -184,7 +184,7 @@ is uniquely defined by the data.
 
 # ╔═╡ 20a76496-f1a0-4690-8522-a13cf4656e6d
 md"""
-!!! danger "Indexing conventions: Starting at $0$ or $1$"
+!!! danger "Indexing conventions: Starting at 0 or 1"
     Note that in the definition of the polynomial (Equation (3)) the sum now starts from $j=0$ whereas in equation (1) it started from $j=1$.
 
     When discussing numerical methods (such as the interpolations here) it is sometimes more convenient to start indexing from $0$ and sometimes from $1$. Please be aware of this and read sums in this notebook carefully. Occasionally we use color to highlight the start of a sum explicitly.
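A minimal sketch (not part of this diff) of how a sum starting at $j=0$ maps onto Julia's 1-based arrays; the helper `evaluate_poly` and the coefficient vector `c` are hypothetical illustrations:

```julia
# A sum starting at j = 0 maps onto Julia's 1-based vectors via the shift c[j + 1]:
# p(x) = Σ_{j=0}^{n} c[j+1] x^j, with coefficients stored as c[1], …, c[n+1].
evaluate_poly(x, c) = sum(c[j + 1] * x^j for j in 0:length(c) - 1)

c = [1.0, 2.0, 3.0]                    # represents p(x) = 1 + 2x + 3x²
@assert evaluate_poly(2.0, c) == 17.0  # 1 + 2·2 + 3·4
```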
@@ -453,9 +453,29 @@ to obtain an **accurate reconstruction** of the original $f$.
 
 """
 
+# ╔═╡ 669f818c-8f7c-4e6d-9d89-ea979e69ce66
+md"""
+A standard metric to measure how well $p_n$ approximates $f$
+is to check the largest deviation between the function values
+on the domain of our data. Assuming we want to approximate $f$ on the interval
+$[a, b]$, this is
+```math
+\max_{x \in [a,b]} |f(x) - p_n(x)|,
+```
+which is the so-called **infinity norm** of the difference $f - p_n$.
+More generally the **infinity norm** $\|\phi \|_\infty$ for a function $\phi : D \to \mathbb{R}$ is the expression
+```math
+\|\phi \|_\infty = \max_{x \in D} |\phi(x)|,
+```
+i.e. the maximal absolute value the function takes over its input domain $D$.
+
+Note that the error $\|f - p_n\|_\infty$ effectively **measures** how well our **polynomial interpolation model** $p_n$ **generalises to unseen datapoints** $(x_{n+1}, f(x_{n+1}))$ with $x_{n+1} \in D$: If this error $\|f - p_n\|_\infty$ is small, $p_n$ is a very good model for $f$. If this error is large, it is a rather inaccurate model.
+"""
+
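The infinity-norm error introduced in this added cell can be estimated numerically by taking the maximum deviation over a dense sampling grid. A minimal sketch, assuming Polynomials.jl for the interpolant; the node count 13 and the grid resolution are illustrative choices, not values from the notebook:

```julia
using Polynomials  # `fit` constructs an interpolating polynomial through the nodes

fsin(x) = sin(5x)                          # the notebook's example function
nodes = collect(range(-1, 1; length=13))   # 13 equispaced nodes (illustrative)
p = fit(Polynomial, nodes, fsin.(nodes))   # degree-12 interpolant pₙ

# Approximate ‖f - pₙ‖∞ by the maximum deviation over a dense grid on [-1, 1].
xfine = range(-1, 1; length=10_000)
inf_norm_error = maximum(abs.(fsin.(xfine) .- p.(xfine)))
```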
 
 # ╔═╡ 5f1abde6-f4fc-4c5e-9a19-abf28ca3584d
 md"""
-For illustration we contrast two cases, namely the construction of a polynomial interpolation of the functions
+We **return to the interpolation problem**.
+For illustration in this section we contrast two cases, namely the construction of a polynomial interpolation of the functions
@@ -535,15 +558,16 @@ To understand this behaviour the following error estimate is useful:
 \|\phi \|_\infty = \max_{x \in D} |\phi(x)|,
 ```
 i.e. the maximal absolute value the function takes.
-
-Note that in this theorem the error $\|f - p_n\|_\infty$ effectively measures how badly our polynomial interpolation model $p_n$ generalises for unseen datapoints $(x_{n+1}, f(x_{n+1}))$ with $x \in D$: If this error $\|f - p_n\|_\infty$ is small, $p_n$ is a very good statistical model for $f$. If this error is large, it is a rather inaccurate model.
 """
 
 # ╔═╡ 5f63abce-f843-4c33-9db6-0770323b55ac
 md"""
 The **key conclusion of the previous theorem** is that
-the error $\|f - p_n\|_\infty$ only decreases as $n$ increases
-when the right-hand side (RHS) of (8) is bounded by a constant. So let's check this for our functions.
+if the right-hand side (RHS) of (8) goes to zero,
+then the error $\|f - p_n\|_\infty$ necessarily vanishes
+as $n$ increases.
+
+So let's check this for our functions.
 - For $f_\text{sin}(x) = \sin(5x)$ we can easily verify
   $|f_\text{sin}^{(n+1)}(x)| = 5 |f_\text{sin}^{(n)}(x)|$ as well as $\max_{x\in[-1,1]} |f_\text{sin}(x)| = \max_{x\in[-1,1]} |\sin(5x)| = 1$, such that
 ```math
@@ -609,12 +633,12 @@ This visual result is also confirmed by a more detailed analysis,
 which reveals that the origin is our choice of a *regular* spacing between the sampling points, an effect known as Runge's phenomenon.
 * If the error scales as $C K^{-n}$ where $n$ is some accuracy parameter (with larger $n$ giving more accurate results), then we say the scheme has **exponential convergence**.
-* For approximation schemes it is often more convenient to instead formulate the method using an approximation parameter $h = O(1/n)$, i.e. which scales inversely to $n$. In this case *smaller* $h$ give more accurate results. In this case exponential convergence is characterised by an error scaling as $C K^h$.
 """
 
-# ╔═╡ 1d61d925-1ceb-439b-aff6-c5e8483fdc31
+# ╔═╡ a15750a3-3507-4ee1-8b9a-b7d6a3dcea46
 md"""
 ### Stability of polynomial interpolation
 
@@ -730,7 +753,10 @@ Instead of producing an interpolating polynomial $p_n$
 using the exact data $(x_i, y_i)$,
 our procedure only has access to the noisy data $(x_i, \tilde{y}_i)$,
 thus producing the polynomial $\tilde{p}_n$.
+"""
 
+# ╔═╡ 7f855423-72ac-4e6f-92bc-73c12e5007eb
+md"""
 In **stability analysis** we now ask the question:
 How different are $p_n$ and $\tilde{p}_n$ given a measurement noise
 of order $\varepsilon$?
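A minimal sketch of this noise model, mirroring the `noise = [10^η_log_poly * (-1+2rand()) for _ in values]` pattern that appears later in the notebook; the magnitude `ε`, the node count, and the reuse of $f_\text{sin}$ are illustrative assumptions:

```julia
# Perturb exact samples yᵢ = f(xᵢ) by uniform noise of order ε.
ε = 1e-3                                           # illustrative noise magnitude
x = collect(range(-1, 1; length=13))
y = sin.(5 .* x)                                   # exact data (xᵢ, yᵢ)
y_noisy = y .+ ε .* (-1 .+ 2 .* rand(length(x)))   # noisy data (xᵢ, ỹᵢ)
# Interpolating y_noisy instead of y yields p̃ₙ; stability analysis asks how far
# p̃ₙ can stray from pₙ when the perturbation has size ε.
```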
@@ -833,16 +859,16 @@ let
 for (i, values) in enumerate((values_equispaced, values_chebyshev))
     noise = [10^η_log_poly * (-1+2rand()) for _ in values]
 
-    fsin_accurate = fsin.(values)  # = [fsin(x) for x in values]
+    # fsin_accurate = fsin.(values)  # = [fsin(x) for x in values]