Release -> main #150


Merged
merged 121 commits into from
May 5, 2022

Conversation

gomezzz
Collaborator

@gomezzz gomezzz commented May 5, 2022

Description

Bringing main up-to-date with next release as part of #147

danielkelshaw and others added 30 commits August 11, 2021 11:19
…id.py

By using `torch.ravel()`, a view of the tensor is used rather than a copy.
It is also possible to stack along a dimension rather than transposing
the tensor after the fact.
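The two optimizations above can be sketched with NumPy, whose `ravel` shares `torch.ravel`'s view semantics for contiguous arrays; the grid names below are illustrative, not torchquad's actual code.

```python
import numpy as np

# ravel returns a view of a contiguous array instead of a copy,
# so no extra memory is allocated for the flattened grid.
grid = np.arange(6.0).reshape(2, 3)
flat = grid.ravel()
assert np.shares_memory(grid, flat)

# Stacking along a chosen axis builds the points in the desired layout
# directly, instead of stacking and then transposing after the fact.
xs, ys = np.array([0.0, 1.0]), np.array([2.0, 3.0])
points = np.stack([xs, ys], axis=-1)  # shape (2, 2), no transpose needed
assert np.array_equal(points, np.stack([xs, ys]).T)
```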
…fficiency

Minor performance improvement in integration_grid.py
- Fix output and PyTorch version number from enable_cuda.py
- Update docstring in utils.py
…pdate-utils.py

Fix output from enable_cuda and update utils
release -> develop for Release 0.2.3
- Updated env in .readthedocs.yml
…-avoid-problems-with-cudatoolkit-incompatibilities-(latest)

Use different environment for rtd to avoid problems with cudatoolkit incompatibilities (latest)
…-Type-In-Docstring

Updated return types in docstrings of .integrate functions according …
NumPy is now supported as a numerical backend in Trapezoid, Simpson, Boole and MonteCarlo.
The numerical backend is determined by the type of the integration_domain argument.
JAX and TensorFlow now work with the Newton-Cotes rules and MonteCarlo.
Since JAX has a special syntax for in-place changes, I replaced the zeros-array initialisation with stacking.
To avoid breaking the torch gradients, I used stack instead of array for the h values.
MonteCarlo no longer initialises a tensor full of zeros; I haven't tested the runtime and memory impact of this change yet.
For TensorFlow I replaced the way ravel is called: TensorFlow's ravel is only available when NumPy behaviour is enabled and is otherwise missing in the latest TensorFlow version.
This is used if integration_domain is, for example, a list.
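A minimal sketch of that type-based backend selection (torchquad does this via autoray; the function below is illustrative only):

```python
import numpy as np

def infer_backend(integration_domain):
    # A plain list falls back to a default backend; otherwise the
    # backend is read off the argument's defining module, e.g.
    # "numpy" for np.ndarray or "torch" for torch.Tensor.
    if isinstance(integration_domain, list):
        return "numpy"  # hypothetical default for this sketch
    return type(integration_domain).__module__.split(".")[0]

assert infer_backend([[0.0, 1.0]]) == "numpy"
assert infer_backend(np.array([[0.0, 1.0]])) == "numpy"
```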
I added an RNG helper class to maintain a state for random number generation if the backend supports it. This should reduce problems when an integrand function itself changes a global RNG state.
For consistency between backends, if seed is None, the RNG is initialized with a random state where possible. For the torch backend this means that a previous call to torch.random.manual_seed no longer affects future random number generation in MonteCarlo when the seed argument is unused or None.
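The behaviour described above can be sketched as follows (a NumPy stand-in for the actual helper; class and method names are assumptions):

```python
import numpy as np

class RNG:
    """Holds its own random state so that an integrand mutating the
    global RNG cannot perturb the integrator's sample points."""

    def __init__(self, seed=None):
        # seed=None draws a fresh random state, so earlier global
        # seeding (e.g. a manual_seed call) does not fix the samples.
        self._rng = np.random.default_rng(seed)

    def uniform(self, size):
        # Backend-specific in the real helper; NumPy here.
        return self._rng.uniform(size=size)

# Two helpers constructed with the same seed reproduce the same points.
a = RNG(seed=42).uniform(size=3)
b = RNG(seed=42).uniform(size=3)
assert np.array_equal(a, b)
```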
* Add the default argument for the seed and adjust the parameter order
* Add a uniform dummy method as a place to document this backend-specific function, which is defined in the constructor
I moved the imports into the functions so that set_precision can work if torch is not installed.
…ntation mistake

* Fix _linspace_with_grads doc: N is int

* Cast the integration_domain to float in _setup_integration_domain so that TensorFlow cannot use integers for the domain tensor
  This is required if integration_domain is a list and the backend argument is specified.

* Change _linspace_with_grads so that it does not fail with TensorFlow when requires_grad is set to True
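The float cast can be sketched like this (a NumPy stand-in; the real function operates on the selected backend's tensors):

```python
import numpy as np

def setup_integration_domain(integration_domain):
    # Casting to float ensures that an integer input such as [[0, 1]]
    # cannot yield an integer-typed domain tensor, which would break
    # linspace-style point generation in some backends.
    domain = np.asarray(integration_domain, dtype=np.float64)
    if domain.ndim != 2 or domain.shape[1] != 2:
        raise ValueError("integration_domain must have shape [dim, 2]")
    return domain

assert setup_integration_domain([[0, 1]]).dtype == np.float64
```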
…DME and docs

I used "conda" and a wildcard in build versions in environment.yml to install numerical backends with CUDA support
jaxlib with CUDA support seems to be only available with pip
With the pip installation of TensorFlow, the ~/miniconda3/envs/torchquad/lib/python3.9/site-packages/tests folder appears and breaks the imports in torchquad's tests.
I tried prepending "../" to sys.path instead of appending it, but this did not fix the problem.
I also added a run_example_functions function which, in comparison to compute_test_errors, additionally returns the functions.
Furthermore, the example functions are no longer generated on import but when calling run_example_functions. The test runtime difference due to this change is negligible compared to the time required to import a numerical backend.
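The deferred generation can be illustrated with a small sketch; the function name mirrors the text, but the body and the cached list are stand-ins:

```python
_example_fns = None

def run_example_functions():
    # The example functions are built on the first call rather than at
    # import time, so importing the test helpers stays cheap; the real
    # helper also evaluates integration errors for each function.
    global _example_fns
    if _example_fns is None:
        _example_fns = [lambda x: x, lambda x: x * x]
    errors = [f(2.0) for f in _example_fns]  # placeholder evaluation
    return errors, _example_fns
```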
FHof and others added 29 commits March 14, 2022 17:38
…pe argument

The new enable_cuda should be backwards-compatible if no argument or only the data_type argument is passed.
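That backwards compatibility amounts to giving every new parameter a default; a sketch of the idea, where everything beyond data_type is hypothetical:

```python
# Illustrative signature extension: old call sites enable_cuda() and
# enable_cuda(data_type=...) keep working because new parameters
# (here the hypothetical "backend") have defaults.
def enable_cuda(data_type="float32", backend="torch"):
    return {"data_type": data_type, "backend": backend}

assert enable_cuda()["data_type"] == "float32"
assert enable_cuda(data_type="float64")["backend"] == "torch"
```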
…usage during gradient calculation

The memory usage can be visualized over time with the TensorBoard profiler.
The scope of chi2 and other variables included the whole while-loop body,
so PyTorch could not free points from the five previously deleted iterations: these variables still referred to those points in the autograd graph.

I also added an int conversion to the calculation of a new value for self._starting_N.
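The scoping fix can be sketched in plain Python; the names are stand-ins for the actual VEGAS loop variables:

```python
# Dropping references at the end of each iteration lets the framework
# free the previous iteration's points, which would otherwise stay
# reachable through the loop-scoped names (and, with autograd, through
# the recorded graph).
def run_iterations(make_points, n_iterations):
    results = []
    for _ in range(n_iterations):
        points = make_points()
        chi2 = sum(points) / len(points)  # stand-in for the real statistic
        results.append(chi2)
        del points, chi2  # release the references explicitly
    return results

assert run_iterations(lambda: [1.0, 2.0, 3.0], 2) == [2.0, 2.0]
```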
* Replace torch with autoray in the dependencies
* Move environment.yml to environment_all_backends.yml for the installation of all backends
* Create a new environment.yml for the installation of minimally required packages
These trailing whitespaces were not used for line breaks (two spaces at the end of a line produce a line break in Markdown).
* Mention autoray and the support of other numerical backends in the README, index.rst and install.rst.
  Also mention backend versions which work with torchquad and show installation instructions
* Add a section to tutorial.rst about other backends
* Add a speedups section to tutorial.rst about compiling and executing separate parts of the integration
* Explain the meaning of the "backend tensor" type
* Mention where PyTorch is used as example numerical backend in tutorial.rst and update the Outline
* Format the Python3 code in tutorial.rst, e.g. execute black and change some comments
* Show NewtonCotes and BaseIntegrator methods in autodoc.rst
* Show only the integrate method in integration_methods.rst
* Trailing whitespaces were removed by my text editor.
* Replace Numpy with NumPy
* Change the RNG docstring so that autodoc supports it properly
* Mention the low accuracy for integrands whose output is far away from zero in a VEGAS docstring
With this the user can change the RNG seed in a VEGAS integrand without changing the sample points.
Measurements showed that the uniform function is about 4 µs slower on CPU and 30 µs slower with CUDA when the state is saved and restored, and changing the state may interfere with asynchronous execution, so I disabled the option by default.
…od docstrings

The return value depends on the backend; it is usually a zero-dimensional tensor and, depending on the integrand, may be a complex number.
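A self-contained 1D trapezoid rule makes that concrete: the result is a single number whose type follows the integrand, so a complex integrand yields a complex value. This is only an illustration, not torchquad's integrate implementation.

```python
import numpy as np

def trapezoid_1d(f, a, b, n):
    # Composite trapezoid rule on n equally spaced points.
    xs = np.linspace(a, b, n)
    ys = f(xs)
    h = (b - a) / (n - 1)
    return h * (ys[0] / 2 + ys[1:-1].sum() + ys[-1] / 2)

# The exact integral of e^{ix} over [0, pi] is 2i.
result = trapezoid_1d(lambda x: np.exp(1j * x), 0.0, np.pi, 1001)
assert abs(result - 2j) < 1e-4
```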
This avoids the 'Cannot update the VEGASMap.' warning, which was shown because there were too few points for the warmup.
* Move the get_jit_compiled_integrate methods to the bottom
* Move the calculate_result methods below the integrate methods
Support other backends than PyTorch using autoray
Removing outdated figure
For now removing out-of-date figure
Removing some accidental white space changes
main -> develop to sync docs / hotfixes
Develop -> Release as part of release process for 0.3
@gomezzz gomezzz merged commit bbbb378 into main May 5, 2022
5 participants