
Commit a14ac33

Prerelease v24.3.0 (#2053)
* Update change log
* Update install instructions in readme
* Docs updates
1 parent 6c17ad5 commit a14ac33

File tree: 8 files changed, +41 / −45 lines


CHANGELOG.md

Lines changed: 14 additions & 13 deletions
```diff
@@ -1,37 +1,38 @@
-* 24.x.x
+* 24.3.0
+  - New features:
+    - Added `FluxNormaliser` processor (#1878)
   - Bug fixes:
-    - Fix bug with 'median' and 'mean' methods in Masker averaging over the wrong axes.
+    - Fix bug with 'median' and 'mean' methods in Masker averaging over the wrong axes (#1548)
     - `SPDHG` `gamma` parameter is now applied correctly so that the product of the dual and primal step sizes remains constant as `gamma` varies (#1644)
     - Allow MaskGenerator to be run on DataContainers (#2001)
-    - Make Paganin Processor work with AcquistionData with one angle (#1920)
+    - Make `PaganinProcessor` work with `AcquistionData` with one angle (#1920)
     - Fix bug passing `kwargs` to PDHG (#2010)
-    - Show1D correctly applies slices to N-dimensional data (#2022)
-    - BlockOperator direct and adjoint methods: can pass out as a DataContainer instead of a (1,1) BlockDataContainer where geometry permits (#1926)
+    - `show1D` correctly applies slices to N-dimensional data (#2022)
+    - `BlockOperator` direct and adjoint methods: can pass out as a `DataContainer` instead of a (1,1) `BlockDataContainer` where geometry permits (#1926)
     - Add path to python executable to cmake commands to fix issue with cmake retrieving wrong python (#2044)
   - Enhancements:
-    - Removed multiple exits from numba implementation of KullbackLeibler divergence (#1901)
+    - Removed multiple exits from numba implementation of `KullbackLeibler` divergence (#1901)
     - Updated the `SPDHG` algorithm to take a stochastic `Sampler`(#1644)
     - Updated the `SPDHG` algorithm to include setters for step sizes (#1644)
-    - Add FluxNormaliser processor (#1878)
-    - SAPBY for the BlockDataContainer now does not require an `out` to be passed (#2008)
+    - SAPBY for the `BlockDataContainer` now does not require an `out` to be passed (#2008)
     - Fixed the rendering of the SAG/SAGA documentation (#2011)
     - Set aliases: ISTA=PGD, FISTA=APGD (#2007)
   - Dependencies:
-    - Added scikit-image to CIL-Demos conda install command as needed for new Callbacks notebook.
+    - Added scikit-image to CIL-Demos conda install command as needed for new Callbacks notebook (#1955)
     - Replaced matplotlib dependency with matplotlib-base (#2031)
     - Remove CIL-Data from build requirements, update version to >=22 in run requirements (#2046)
   - Changes that break backwards compatibility:
-    - show1D argument renamed `label`->`dataset_labels`, default plot size has changed. (#2022)
-    - show1D Default behaviour for displaying and labeling multiple plots has changed. Each slice requested will be displayed on a new subplot comparing all datasets at that position. (#2022)
-    - Deprecated `norms` and `prob` in the `SPDHG` algorithm to be set in the `BlockOperator` and `Sampler` respectively (#1644)
+    - `show1D` argument renamed `label`->`dataset_labels`, default plot size has changed. (#2022)
+    - `show1D` Default behaviour for displaying and labeling multiple plots has changed. Each slice requested will be displayed on a new subplot comparing all datasets at that position. (#2022)
     - The `run` method in the cil algorithm class will no longer run if a number of iterations is not passed (#1940)
     - Paganin processor now requires the CIL data order (#1920)
     - The gradient descent algorithm now takes `f` instead of `objective_function` to match with ISTA and FISTA (#2006)
+  - Deprecated code
+    - Deprecated `norms` and `prob` in the `SPDHG` algorithm to be set in the `BlockOperator` and `Sampler` respectively (#1644)
     - Deprecated `rtol` and `atol` from GD so that it does not stop iterating automatically - for this functionality users should use a callback (#1944)
   - Testing
     - Added a new test file `test_algorithm_convergence` that will hold our algorithm tests that run to convergence (#2019)
 
-
 * 24.2.0
   - New Features:
     - Added SVRG and LSVRG stochastic functions (#1625)
```
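
To illustrate two of the interface changes listed above (`f` replacing `objective_function` in gradient descent, and `run` requiring an explicit iteration count), here is a minimal, hypothetical sketch; the operator, data and any signatures beyond those two points are assumptions, not taken from this commit:

```python
# Hedged sketch of the 24.3.0 GD interface changes noted in the changelog;
# everything except `f=` and `run(iterations)` is an illustrative assumption.
from cil.framework import ImageGeometry
from cil.optimisation.operators import IdentityOperator
from cil.optimisation.functions import LeastSquares
from cil.optimisation.algorithms import GD

ig = ImageGeometry(10, 11)
A = IdentityOperator(ig)            # stand-in operator for the example
b = A.direct(ig.allocate(1))        # synthetic data
f = LeastSquares(A, b)

algo = GD(initial=ig.allocate(0), f=f)   # 24.3.0: `f` replaces `objective_function`
algo.run(100)                            # an explicit iteration count is now required
```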

README.md

Lines changed: 2 additions & 2 deletions
````diff
@@ -21,13 +21,13 @@ We recommend using either [`miniconda`](https://docs.conda.io/projects/miniconda
 Install a new environment using:
 
 ```sh
-conda create --name cil -c conda-forge -c https://software.repos.intel.com/python/conda -c ccpi cil=24.2.0 ipp=2021.12
+conda create --name cil -c conda-forge -c https://software.repos.intel.com/python/conda -c ccpi cil=24.3.0 ipp=2021.12
 ```
 
 To install CIL and the additional packages and plugins needed to run the [CIL demos](https://github.com/TomographicImaging/CIL-Demos) install the environment with:
 
 ```sh
-conda create --name cil -c conda-forge -c https://software.repos.intel.com/python/conda -c ccpi cil=24.2.0 ipp=2021.12 astra-toolbox=*=cuda* tigre ccpi-regulariser tomophantom ipykernel ipywidgets scikit-image
+conda create --name cil -c conda-forge -c https://software.repos.intel.com/python/conda -c ccpi cil=24.3.0 ipp=2021.12 astra-toolbox=*=cuda* tigre ccpi-regulariser tomophantom ipykernel ipywidgets scikit-image
 ```
 
 where:
````

Wrappers/Python/cil/framework/acquisition_geometry.py

Lines changed: 1 addition & 1 deletion
```diff
@@ -1596,7 +1596,7 @@ def __eq__(self, other):
 class AcquisitionGeometry(object):
 """This class holds the AcquisitionGeometry of the system.
 
-Please initialise the AcquisitionGeometry using the using the static methods:
+Please initialise the AcquisitionGeometry using the static methods:
 
 `AcquisitionGeometry.create_Parallel2D()`
 
```
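
For context, a hedged usage sketch of the static factory method the corrected docstring points to; the `set_angles`/`set_panel` calls and their arguments are assumptions based on the wider CIL API, not part of this diff:

```python
# Hypothetical example of initialising an AcquisitionGeometry via a static method,
# as the docstring above recommends; argument choices are illustrative only.
import numpy as np
from cil.framework import AcquisitionGeometry

ag = AcquisitionGeometry.create_Parallel2D()
ag.set_angles(np.linspace(0, 180, 180, endpoint=False))   # projection angles in degrees
ag.set_panel(256)                                          # number of detector pixels
```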

Wrappers/Python/cil/framework/block.py

Lines changed: 2 additions & 3 deletions
```diff
@@ -319,9 +319,8 @@ def sapyb(self, a, y, b, out=None, num_threads = NUM_THREADS):
 out : BlockDataContainer, optional
 Provides a placeholder for the result
 
-Example:
---------
-
+Example
+-------
 >>> a = 2
 >>> b = 3
 >>> ig = ImageGeometry(10,11)
```
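
Following on from the docstring excerpt above, a hedged sketch of calling `sapyb` without `out` (optional as of this release, see #2008 in the changelog); the fill values are arbitrary:

```python
# Minimal sketch: sapyb computes a*self + b*y; `out` is optional per the docstring.
from cil.framework import ImageGeometry, BlockDataContainer

ig = ImageGeometry(10, 11)
x = BlockDataContainer(ig.allocate(1), ig.allocate(2))
y = BlockDataContainer(ig.allocate(3), ig.allocate(4))

res = x.sapyb(2, y, 3)   # 2*x + 3*y, returned as a new BlockDataContainer
```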

Wrappers/Python/cil/optimisation/algorithms/ADMM.py

Lines changed: 8 additions & 7 deletions
@@ -24,25 +24,26 @@
2424

2525
class LADMM(Algorithm):
2626

27-
'''
27+
r'''
2828
LADMM is the Linearized Alternating Direction Method of Multipliers (LADMM)
2929
30-
General form of ADMM : min_{x} f(x) + g(y), subject to Ax + By = b
30+
General form of ADMM : :math:`min_{x} f(x) + g(y)`, subject to :math:`Ax + By = b`
3131
32-
Case: A = Id, B = -K, b = 0 ==> min_x f(Kx) + g(x)
32+
Case: :math:`A = Id, B = -K, b = 0 ==> min_x f(Kx) + g(x)`
3333
3434
The quadratic term in the augmented Lagrangian is linearized for the x-update.
3535
3636
Main algorithmic difference is that in ADMM we compute two proximal subproblems,
3737
where in the PDHG a proximal and proximal conjugate.
3838
3939
Reference (Section 8) : https://link.springer.com/content/pdf/10.1007/s10107-018-1321-1.pdf
40+
41+
42+
.. math:: x^{k} = prox_{\tau f } (x^{k-1} - \tau/\sigma A^{T}(Ax^{k-1} - z^{k-1} + u^{k-1} )
4043
41-
x^{k} = prox_{\tau f } (x^{k-1} - tau/sigma A^{T}(Ax^{k-1} - z^{k-1} + u^{k-1} )
42-
43-
z^{k} = prox_{\sigma g} (Ax^{k} + u^{k-1})
44+
.. math:: z^{k} = prox_{\sigma g} (Ax^{k} + u^{k-1})
4445
45-
u^{k} = u^{k-1} + Ax^{k} - z^{k}
46+
.. math:: u^{k} = u^{k-1} + Ax^{k} - z^{k}
4647
4748
'''
4849
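
The three update equations now rendered with `.. math::` above can be read as the following illustrative pseudo-implementation (not the CIL code); `prox_f`, `prox_g`, the operator `A` and the step sizes are assumed inputs:

```python
# Illustrative sketch of one LADMM iteration as written in the docstring above.
def ladmm_step(x, z, u, A, prox_f, prox_g, tau, sigma):
    # x^{k} = prox_{tau f}( x^{k-1} - tau/sigma * A^T (A x^{k-1} - z^{k-1} + u^{k-1}) )
    x = prox_f(x - (tau / sigma) * A.adjoint(A.direct(x) - z + u), tau)
    Ax = A.direct(x)
    # z^{k} = prox_{sigma g}( A x^{k} + u^{k-1} )
    z = prox_g(Ax + u, sigma)
    # u^{k} = u^{k-1} + A x^{k} - z^{k}
    u = u + Ax - z
    return x, z, u
```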

Wrappers/Python/cil/optimisation/algorithms/SPDHG.py

Lines changed: 3 additions & 4 deletions
@@ -31,9 +31,8 @@
3131

3232
class SPDHG(Algorithm):
3333
r'''Stochastic Primal Dual Hybrid Gradient (SPDHG) solves separable optimisation problems of the type:
34-
.. math::
35-
36-
\min_{x} f(Kx) + g(x) = \min_{x} \sum f_i(K_i x) + g(x)
34+
35+
.. math:: \min_{x} f(Kx) + g(x) = \min_{x} \sum f_i(K_i x) + g(x)
3736
3837
where :math:`f_i` and the regulariser :math:`g` need to be proper, convex and lower semi-continuous.
3938
@@ -282,7 +281,7 @@ def set_step_sizes_from_ratio(self, gamma=1.0, rho=0.99):
282281
gamma : Positive float
283282
parameter controlling the trade-off between the primal and dual step sizes
284283
rho : Positive float
285-
parameter controlling the size of the product :math: \sigma\tau :math:
284+
parameter controlling the size of the product :math:`\sigma\tau`
286285
287286
288287
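
As a plain-Python illustration of the `gamma`/`rho` behaviour described above (`gamma` trades primal against dual step sizes, `rho` controls their product), under an assumed PDHG-style convention; this is not the CIL implementation:

```python
# Assumed convention: sigma = gamma*rho/||K||, tau = rho/(gamma*||K||),
# so sigma*tau = rho**2/||K||**2 is independent of gamma, matching the changelog note.
def step_sizes_from_ratio(norm_K, gamma=1.0, rho=0.99):
    sigma = gamma * rho / norm_K
    tau = rho / (gamma * norm_K)
    return sigma, tau

for gamma in (0.1, 1.0, 10.0):
    sigma, tau = step_sizes_from_ratio(norm_K=2.0, gamma=gamma)
    assert abs(sigma * tau - 0.99**2 / 2.0**2) < 1e-12   # product constant as gamma varies
```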

Wrappers/Python/cil/optimisation/functions/Function.py

Lines changed: 1 addition & 1 deletion
```diff
@@ -583,7 +583,7 @@ def convex_conjugate(self, x):
 return self.function.convex_conjugate(x) - self.constant
 
 def proximal(self, x, tau, out=None):
-""" Returns the proximal operator of :math:`F+scalar`
+r""" Returns the proximal operator of :math:`F+scalar`
 
 .. math:: \text{prox}_{\tau (F+scalar)}(x) = \text{prox}_{\tau F}
 
```
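
A quick numerical check, independent of CIL, of the identity in the corrected docstring: adding a scalar constant to a function does not change its proximal operator.

```python
import numpy as np

def prox_numeric(f, x, tau, grid):
    # brute-force prox: argmin_v f(v) + (1/(2*tau)) * (v - x)**2 over a 1-D grid
    return grid[np.argmin(f(grid) + (grid - x) ** 2 / (2 * tau))]

grid = np.linspace(-5.0, 5.0, 20001)
f = np.abs                           # any proper convex function works here
g = lambda v: np.abs(v) + 3.0        # f + scalar
x, tau = 2.5, 0.7
assert np.isclose(prox_numeric(f, x, tau, grid), prox_numeric(g, x, tau, grid))
```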

Wrappers/Python/cil/optimisation/functions/SVRGFunction.py

Lines changed: 10 additions & 14 deletions
@@ -31,7 +31,7 @@
3131
class SVRGFunction(ApproximateGradientSumFunction):
3232

3333
r"""
34-
The Stochastic Variance Reduced Gradient (SVRG) function calculates the approximate gradient of :math:`\sum_{i=1}^{n-1}f_i`. For this approximation, every `snapshot_update_interval` number of iterations, a full gradient calculation is made at this "snapshot" point. Intermediate gradient calculations update this snapshot by taking a index :math:`i_k` and calculating the gradient of :math:`f_{i_k}`s at the current iterate and the snapshot, updating the approximate gradient to be:
34+
The Stochastic Variance Reduced Gradient (SVRG) function calculates the approximate gradient of :math:`\sum_{i=1}^{n-1}f_i`. For this approximation, every `snapshot_update_interval` number of iterations, a full gradient calculation is made at this "snapshot" point. Intermediate gradient calculations update this snapshot by taking a index :math:`i_k` and calculating the gradient of :math:`f_{i_k}`'s at the current iterate and the snapshot, updating the approximate gradient to be:
3535
3636
.. math ::
3737
n*\nabla f_{i_k}(x_k) - n*\nabla f_{i_k}(\tilde{x}) + \nabla \sum_{i=0}^{n-1}f_i(\tilde{x}),
@@ -60,7 +60,7 @@ class SVRGFunction(ApproximateGradientSumFunction):
6060
snapshot_update_interval : positive int or None, optional
6161
The interval for updating the full gradient (taking a snapshot). The default is 2*len(functions) so a "snapshot" is taken every 2*len(functions) iterations. If the user passes `0` then no full gradient snapshots will be taken.
6262
store_gradients : bool, default: `False`
63-
Flag indicating whether to store an update a list of gradients for each function :math:`f_i` or just to store the snapshot point :math:` \tilde{x}` and its gradient :math:`\nabla \sum_{i=0}^{n-1}f_i(\tilde{x})`.
63+
Flag indicating whether to store an update a list of gradients for each function :math:`f_i` or just to store the snapshot point :math:`\tilde{x}` and its gradient :math:`\nabla \sum_{i=0}^{n-1}f_i(\tilde{x})`.
6464
6565
6666
"""
@@ -212,28 +212,24 @@ def _update_full_gradient_and_return(self, x, out=None):
212212

213213

214214
class LSVRGFunction(SVRGFunction):
215-
"""""
215+
r"""
216216
A class representing a function for Loopless Stochastic Variance Reduced Gradient (SVRG) approximation. This is similar to SVRG, except the full gradient at a "snapshot" is calculated at random intervals rather than at fixed numbers of iterations.
217-
218-
219-
Reference
220-
----------
221-
222-
Kovalev, D., Horváth, S. &; Richtárik, P.. (2020). Don’t Jump Through Hoops and Remove Those Loops: SVRG and Katyusha are Better Without the Outer Loop. Proceedings of the 31st International Conference on Algorithmic Learning Theory, in Proceedings of Machine Learning Research 117:451-467 Available from https://proceedings.mlr.press/v117/kovalev20a.html.
223-
224-
225-
217+
226218
Parameters
227219
----------
228-
functions : `list` of functions
220+
functions : `list` of functions
229221
A list of functions: :code:`[f_{0}, f_{1}, ..., f_{n-1}]`. Each function is assumed to be smooth with an implemented :func:`~Function.gradient` method. All functions must have the same domain. The number of functions `n` must be strictly greater than 1.
230222
sampler: An instance of a CIL Sampler class ( :meth:`~optimisation.utilities.sampler`) or of another class which has a `next` function implemented to output integers in {0,...,n-1}.
231223
This sampler is called each time gradient is called and sets the internal `function_num` passed to the `approximate_gradient` function. Default is `Sampler.random_with_replacement(len(functions))`.
232224
snapshot_update_probability: positive float, default: 1/n
233225
The probability of updating the full gradient (taking a snapshot) at each iteration. The default is :math:`1./n` so, in expectation, a snapshot will be taken every :math:`n` iterations.
234226
store_gradients : bool, default: `False`
235-
Flag indicating whether to store an update a list of gradients for each function :math:`f_i` or just to store the snapshot point :math:` \tilde{x}` and it's gradient :math:`\nabla \sum_{i=0}^{n-1}f_i(\tilde{x})`.
227+
Flag indicating whether to store an update a list of gradients for each function :math:`f_i` or just to store the snapshot point :math:`\tilde{x}` and its gradient :math:`\nabla \sum_{i=0}^{n-1}f_i(\tilde{x})`.
236228
229+
230+
Reference
231+
---------
232+
Kovalev, D., Horváth, S. &; Richtárik, P.. (2020). Don’t Jump Through Hoops and Remove Those Loops: SVRG and Katyusha are Better Without the Outer Loop. Proceedings of the 31st International Conference on Algorithmic Learning Theory, in Proceedings of Machine Learning Research 117:451-467 Available from https://proceedings.mlr.press/v117/kovalev20a.html.
237233
238234
Note
239235
----
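
A short, CIL-independent sketch of the variance-reduced gradient estimate quoted in the SVRG docstring above; the function and variable names are illustrative:

```python
import numpy as np

def svrg_gradient_estimate(grad_fns, x, snapshot, full_grad_at_snapshot, i_k):
    # n*grad f_{i_k}(x) - n*grad f_{i_k}(x_snapshot) + grad (sum_i f_i)(x_snapshot)
    n = len(grad_fns)
    return n * grad_fns[i_k](x) - n * grad_fns[i_k](snapshot) + full_grad_at_snapshot

# toy example with quadratics f_i(x) = 0.5*(x - c_i)**2, so grad f_i(x) = x - c_i
c = np.array([0.0, 1.0, 4.0])
grad_fns = [lambda x, ci=ci: x - ci for ci in c]
snapshot = 2.0
full_grad = sum(g(snapshot) for g in grad_fns)
est = svrg_gradient_estimate(grad_fns, 2.0, snapshot, full_grad, i_k=1)
assert np.isclose(est, full_grad)   # at the snapshot the estimate equals the full gradient
```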
