 booktitle={Second International Meeting for Applied Geoscience \& Energy},
 pages={1482--1486},
 year={2022},
+doi={10.1190/image2022-3750561.1},
 organization={Society of Exploration Geophysicists and American Association of Petroleum~…}
 }

@@ -29,6 +30,7 @@ @article{alemohammad2023self
 title={Self-consuming generative models go mad},
 author={Alemohammad, Sina and Casco-Rodriguez, Josue and Luzi, Lorenzo and Humayun, Ahmed Imtiaz and Babaei, Hossein and LeJeune, Daniel and Siahkoohi, Ali and Baraniuk, Richard G},
 journal={arXiv preprint arXiv:2307.01850},
+doi={10.52591/lxai202312101},
 year={2023}
 }

@@ -50,6 +52,7 @@ @article{bezanson2017julia
 number={1},
 pages={65--98},
 year={2017},
+doi={10.1137/141000671},
 publisher={SIAM},
 url={https://doi.org/10.1137/141000671}
 }

@@ -58,13 +61,15 @@ @article{peters2019symmetric
 title={Symmetric block-low-rank layers for fully reversible multilevel neural networks},
 author={Peters, Bas and Haber, Eldad and Lensink, Keegan},
 journal={arXiv preprint arXiv:1912.12137},
+doi={10.48550/arXiv.1912.12137},
 year={2019}
 }

 @article{orozco2022memory,
 title={Memory Efficient Invertible Neural Networks for 3D Photoacoustic Imaging},
 author={Orozco, Rafael and Louboutin, Mathias and Herrmann, Felix J},

 title={A differentiable programming system to bridge machine learning and scientific computing},
 author={Innes, Mike and Edelman, Alan and Fischer, Keno and Rackauckas, Chris and Saba, Elliot and Shah, Viral B and Tebbutt, Will},
 journal={arXiv preprint arXiv:1907.07587},
+doi={10.48550/arXiv.1907.07587},
 year={2019}
 }

@@ -146,10 +152,15 @@ @software{nflows
 url = {https://doi.org/10.5281/zenodo.4296287}
 }

-@article{paszke2017automatic,
-title={Automatic differentiation in pytorch},
-author={Paszke, Adam and Gross, Sam and Chintala, Soumith and Chanan, Gregory and Yang, Edward and DeVito, Zachary and Lin, Zeming and Desmaison, Alban and Antiga, Luca and Lerer, Adam},
-year={2017}
+
+
+@article{paszke2019pytorch,
+title={Pytorch: An imperative style, high-performance deep learning library},
+author={Paszke, Adam and Gross, Sam and Massa, Francisco and Lerer, Adam and Bradbury, James and Chanan, Gregory and Killeen, Trevor and Lin, Zeming and Gimelshein, Natalia and Antiga, Luca and others},
+journal={Advances in neural information processing systems},
docs/paper/paper.md: 1 addition & 1 deletion
@@ -47,7 +47,7 @@ The package we present, InvertibleNetworks.jl, is a pure Julia [@bezanson2017jul
 # Statement of need
 
 
-This software package focuses on memory efficiency. The promise of neural networks is in learning high-dimensional distributions from examples thus normalizing flow packages should allow easy application to large dimensional inputs such as images or 3D volumes. Interestingly, the invertibility of normalizing flows naturally alleviates memory concerns since intermediate network activations can be recomputed instead of saved in memory, greatly reducing the memory needed during backpropagation. The problem is that directly implementing normalizing flows in automatic differentiation frameworks such as PyTorch [@paszke2017automatic] will not automatically exploit this invertibility. The available packages for normalizing flows such as Bijectors.jl [@fjelde2020bijectors], NormalizingFlows.jl [@NormalizingFlows.jl], nflows [@nflows], normflows [@stimper2023normflows] and FrEIA [@freia] are built depending on automatic differentiation frameworks and thus do not exploit invertibility for memory efficiently.
+This software package focuses on memory efficiency. The promise of neural networks is in learning high-dimensional distributions from examples thus normalizing flow packages should allow easy application to large dimensional inputs such as images or 3D volumes. Interestingly, the invertibility of normalizing flows naturally alleviates memory concerns since intermediate network activations can be recomputed instead of saved in memory, greatly reducing the memory needed during backpropagation. The problem is that directly implementing normalizing flows in automatic differentiation frameworks such as PyTorch [@paszke2019pytorch] will not automatically exploit this invertibility. The available packages for normalizing flows such as Bijectors.jl [@fjelde2020bijectors], NormalizingFlows.jl [@NormalizingFlows.jl], nflows [@nflows], normflows [@stimper2023normflows] and FrEIA [@freia] are built depending on automatic differentiation frameworks and thus do not exploit invertibility for memory efficiently.
 
 We chose to write this package in Julia since it was built with a commitment to facilitate interoperability with other packages for workflows in scientific machine learning [@louboutin2022accelerating]. The interoperability was facilitated by the multiple dispatch system of Julia. Our goal is to provide solutions for imaging problems with high degrees of freedom, where computational speed is crucial. We have found that this software significantly benefits from Julia's Just-In-Time compilation technology.
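
The paragraph changed in this hunk argues that invertibility lets backpropagation recompute intermediate activations instead of storing them. The sketch below illustrates that idea only; it is not the InvertibleNetworks.jl API, and the `AdditiveCoupling` type and the `forward`/`inverse` functions are hypothetical names introduced for the example.

```julia
# Minimal sketch (hypothetical names, not the InvertibleNetworks.jl API):
# an additive coupling layer is exactly invertible, so its input can be
# recomputed from its output during backpropagation instead of being stored.
struct AdditiveCoupling{F}
    t::F    # arbitrary map applied to the first half of the channels
end

# Forward pass: keep the first half, shift the second half by t(x1).
function forward(layer::AdditiveCoupling, x1, x2)
    y1 = x1
    y2 = x2 .+ layer.t(x1)
    return y1, y2
end

# Inverse pass: recover the input from the output by undoing the shift.
function inverse(layer::AdditiveCoupling, y1, y2)
    x1 = y1
    x2 = y2 .- layer.t(y1)
    return x1, x2
end

# Toy usage: the inputs are recovered (up to floating-point round-off),
# so a backward pass can regenerate them layer by layer instead of
# caching every intermediate activation.
layer = AdditiveCoupling(x -> 2 .* x)
y1, y2 = forward(layer, [1.0, 2.0], [3.0, 4.0])
x1, x2 = inverse(layer, y1, y2)   # ≈ ([1.0, 2.0], [3.0, 4.0])
```

Because `inverse` reconstructs each layer's input from its output, a chain of such layers only needs to keep the final state in memory during the backward pass, which is the memory saving the paragraph refers to.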