Release -> main #150
Merged
Conversation
…id.py By using `torch.ravel()`, a view of the tensor is used rather than a copy. It is also possible to stack along a dimension directly rather than transposing the tensor after the fact.
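A small, hedged illustration of both points (not the repository's code; shapes are arbitrary):

```python
import torch

# torch.ravel returns a view when the input is contiguous, so the
# flattened tensor shares storage with the original instead of copying it.
x = torch.arange(6).reshape(2, 3)
flat = torch.ravel(x)
print(flat.data_ptr() == x.data_ptr())  # True: same underlying storage

# Stacking along dim=1 directly produces the desired layout, instead of
# stacking along dim=0 and transposing the result after the fact.
a, b = torch.zeros(4), torch.ones(4)
direct = torch.stack([a, b], dim=1)    # shape (4, 2)
via_transpose = torch.stack([a, b]).T  # same values, one extra step
print(torch.equal(direct, via_transpose))  # True
```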
…fficiency Minor performance improvement in integration_grid.py
- Fix output and PyTorch version number from enable_cuda.py - Update docstring in utils.py
…pdate-utils.py Fix output from enable_cuda and update utils
release -> develop for Release 0.2.3
- Fixed warnings in docs
- Updated env in .readthedocs.yml
…-avoid-problems-with-cudatoolkit-incompatibilities-(latest) Use different environment for rtd to avoid problems with cudatoolkit incompatibilities (latest)
…-Type-In-Docstring Updated return types in docstrings of .integrate functions according …
NumPy as a numerical backend is now supported in Trapezoid, Simpson, Boole and MonteCarlo. The numerical backend is determined by the type of the integration_domain argument.
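A hedged usage sketch of this selection mechanism (the integrand and point counts are illustrative, not from the PR):

```python
import numpy as np
from torchquad import Simpson

simp = Simpson()

# A NumPy array as integration_domain selects the NumPy backend;
# passing a torch.Tensor here would select PyTorch instead.
domain = np.array([[0.0, 1.0]])
result = simp.integrate(lambda x: x[:, 0] ** 2, dim=1, N=101,
                        integration_domain=domain)
print(result)  # ~1/3, returned as a NumPy value
```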
JAX and TensorFlow now work with the Newton-Cotes rules and MonteCarlo. Since JAX has a special syntax for in-place changes, I replaced the zeros-array initialisation with stacking. To not break the torch gradients, I used stack instead of array for the h values. MonteCarlo no longer initialises a tensor full of zeros; I haven't tested the runtime and memory impact of this change yet. For TensorFlow I replaced the way ravel is called: TensorFlow's ravel only works if NumPy behaviour is enabled; otherwise it is missing in the latest TensorFlow version.
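A minimal sketch of the in-place versus stacking difference in JAX (illustrative shapes, not the repository's code):

```python
import jax.numpy as jnp

n, dim = 5, 2
rows = [jnp.linspace(0.0, 1.0, n) for _ in range(dim)]

# In-place style: JAX arrays are immutable, so filling a zeros array
# requires the JAX-specific functional .at[...].set(...) syntax.
filled = jnp.zeros((dim, n))
for d in range(dim):
    filled = filled.at[d].set(rows[d])

# Stacking style: build the array in one step. This needs no special
# syntax and, done with torch.stack, also keeps torch gradients intact.
stacked = jnp.stack(rows)

assert jnp.allclose(filled, stacked)
```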
This is used if integration_domain is, for example, a list.
I added an RNG helper class to maintain a state for random number generation if the backend supports it. This should reduce problems when an integrand function itself changes a global RNG state. For consistency between backends, if seed is None, the RNG is initialized with a random state where possible. For the torch backend this means that a previous call to torch.random.manual_seed no longer affects future random number generation in MonteCarlo when the seed argument is unused or None.
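A hedged sketch of the seed handling for the torch backend (the actual helper class in torchquad may differ):

```python
import torch

class RNG:
    # Hypothetical sketch: with seed=None the global generator is reseeded
    # from a nondeterministic source, so a previous torch.random.manual_seed
    # call no longer fixes the sample points.
    def __init__(self, backend="torch", seed=None):
        assert backend == "torch", "sketch covers only the torch backend"
        if seed is None:
            torch.random.seed()          # random initial state
        else:
            torch.random.manual_seed(seed)

    def uniform(self, size):
        # Backend-specific sampling; other backends would define their own.
        return torch.rand(size)
```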
* Add the default argument for the seed and adjust the parameter order
* Add a uniform dummy method as a place to document this backend-specific function, which is defined in the constructor
I moved the imports into the functions so that set_precision can work if torch is not installed.
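A hedged sketch of the deferred-import pattern (simplified signature):

```python
def set_precision(data_type="float32", backend="torch"):
    # Importing torch inside the function instead of at module level means
    # this module stays importable, and other backends remain usable,
    # when torch is not installed.
    if backend == "torch":
        import torch
        torch.set_default_dtype(getattr(torch, data_type))
```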
…level if it exists
…ntation mistake
* Fix _linspace_with_grads doc: N is int
* Cast the integration_domain to float in _setup_integration_domain so that TensorFlow cannot use integers for the domain tensor. This is required if integration_domain is a list and the backend argument is specified.
* Change _linspace_with_grads so that it does not fail with TensorFlow and requires_grad set to True
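A hedged illustration of why the cast matters with TensorFlow (assuming tf.linspace rejects integer endpoints):

```python
import tensorflow as tf

# From an integer list, TensorFlow infers an int32 tensor ...
domain = tf.convert_to_tensor([[0, 1]])        # dtype=int32
# ... and tf.linspace expects floating-point endpoints, so the domain
# is cast to float before grid points are created.
domain = tf.cast(domain, tf.float64)
grid = tf.linspace(domain[0][0], domain[0][1], 5)  # float64 grid points
```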
…DME and docs I used "conda" and a wildcard in build versions in environment.yml to install numerical backends with CUDA support; jaxlib with CUDA support seems to be available only with pip.
With the pip installation of TensorFlow, the ~/miniconda3/envs/torchquad/lib/python3.9/site-packages/tests folder appears and breaks the imports in torchquad's tests. I tried prepending "../" to sys.path instead of appending it, but this did not fix the problem.
I also added a run_example_functions function which, in comparison to compute_test_errors, additionally returns the functions. Furthermore, the example functions are not generated on import but when calling run_example_functions. The difference in test runtime due to this change is negligible in comparison to the time required to import a numerical backend.
…pe argument The new enable_cuda should be backwards-compatible if no argument or only the data_type argument is passed.
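A hypothetical signature sketch of the compatibility guarantee (the real function has more parameters):

```python
def enable_cuda(data_type="float32"):
    # The new keyword has a default value, so both pre-existing
    # call styles keep working unchanged.
    ...

enable_cuda()                     # old call style: no arguments
enable_cuda(data_type="float64")  # old call style: only data_type
```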
…usage during gradient calculation The memory usage can be visualized over time with the TensorBoard profiler. The scope of chi2 and other variables included the whole while-loop body, so PyTorch could not free points from the five previously deleted iterations because these variables still referred to the points in the autograd graph. I also added a conversion to int to the calculation of the new value of self._starting_N.
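A self-contained, hedged demonstration of the scoping effect (illustrative sizes; not the VEGAS code):

```python
import torch

results = []
weights = torch.ones(3, requires_grad=True)

for it in range(5):
    points = torch.randn(100_000, 3) * weights   # large tensor in the graph
    chi2 = (points ** 2).mean()
    results.append(chi2.detach())
    # Without this del, `points` and `chi2` stay bound until the next
    # iteration rebinds them, keeping a full extra iteration's tensors
    # (and their autograd graph) alive; deleting them frees memory sooner.
    del points, chi2
```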
* Replace torch with autoray in the dependencies
* Move environment.yml to environment_all_backends.yml for the installation of all backends
* Create a new environment.yml for the installation of minimally required packages
These trailing whitespaces were not used for line breaks (two spaces at the end of a line produce a line break in Markdown).
* Mention autoray and the support of other numerical backends in the Readme, index.rst and install.rst. Also mention backend versions which work with torchquad and show installation instructions
* Add a section to tutorial.rst about other backends
* Add a speedups section about compilation and the execution of separate parts of the integration in tutorial.rst
* Explain the meaning of the "backend tensor" type
* Mention where PyTorch is used as example numerical backend in tutorial.rst and update the Outline
* Format the Python3 code in tutorial.rst, e.g. execute black and change some comments
* Show NewtonCotes and BaseIntegrator methods in autodoc.rst
* Show only the integrate method in integration_methods.rst
* Trailing whitespaces were removed by my text editor
* Replace Numpy with NumPy
* Change the RNG docstring so that autodoc supports it properly
* Mention low accuracy with integrands with an output far away from zero in a VEGAS docstring
With this the user can change the RNG seed in a VEGAS integrand without changing the sample points. Measurements showed that the uniform function is ca. 4 µs slower on CPU and 30 µs slower with CUDA when the state is saved and restored, and changing the state may interfere with asynchronous execution, so I disabled the option by default.
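Extending the RNG sketch above, a hedged illustration of the optional state saving (disabled by default; names are hypothetical):

```python
import torch

class RNG:
    # Hypothetical sketch: keep a private copy of the global torch RNG state
    # so that an integrand calling torch.random.manual_seed between samplings
    # does not change the generated sample points.
    def __init__(self, seed=42, save_state=False):
        torch.random.manual_seed(seed)
        self._save_state = save_state
        self._state = torch.random.get_rng_state()

    def uniform(self, size):
        if self._save_state:
            torch.random.set_rng_state(self._state)     # undo integrand reseeding
        sample = torch.rand(size)
        if self._save_state:
            self._state = torch.random.get_rng_state()  # remember for next call
        return sample
```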
…od docstrings The return value depends on the backend, is usually a tensor of empty shape and, depending on the integrand, may be a complex number.
… on GNU/Linux Co-authored-by: Pablo Gómez <[email protected]>
This avoids the 'Cannot update the VEGASMap.' warning, which was shown because there were too few points for the warmup.
… be inferred from user-provided arguments
* Move the get_jit_compiled_integrate methods to the bottom
* Move the calculate_result methods below the integrate methods
Support other backends than PyTorch using autoray
Removing outdated figure
For now removing out-of-date figure
Removing some accidental white space changes
main -> develop to sync docs / hotfixes
Develop -> Release as part of release process for 0.3
Description
Bringing main up to date with the next release as part of #147