Commit 988cb91 (parent 717ee17)
Documentation update

1 file changed: +12, -5 lines

docs/source/guide/guide_part_i.rst

Lines changed: 12 additions & 5 deletions
@@ -288,8 +288,17 @@ For example, spikes can be stored efficiently using a sparse monitor:
         network.layers[layer], state_vars=["s"], time=int(time / dt), device=device, sparse=True
     )
-Note that using sparse tensors is advantageous only when the percentage of non-zero values is less than 4% of the total values.
-The table below compares memory consumption between sparse and dense tensors:
+
+Performance Considerations:
+
+While sparse tensors reduce memory usage when the percentage of non-zero values is below 4% (see table below),
+there is a trade-off in computational speed. Benchmarks on an RTX 3070 GPU show:
+
+* Sparse runtime: 1.2 seconds
+* Dense runtime: 0.5 seconds
+
+In this configuration, the dense implementation runs roughly 2x faster (0.5 s vs. 1.2 s) than the sparse one.

 =======================  ======================  ====================  ====================
 Sparse (megabytes used)  Dense (megabytes used)  Ratio (Sparse/Dense)  % of non zero values
@@ -315,9 +324,7 @@ Sparse (megabytes used) Dense (megabytes used) Ratio (Sparse/Dense) % of non zero values
 283                      119                     2.38                  9.5
 =======================  ======================  ====================  ====================

-The tensor size does not affect the values in the third column.
-This table was generated by :code:`examples/benchmark/sparse_vs_dense_tensors.py`
+This table and the performance metrics were generated by :code:`examples/benchmark/sparse_vs_dense_tensors.py`.

 Running Simulations
 -------------------
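The memory trade-off in the table above can be approximated from first principles. The following is a minimal, hypothetical sketch (not the `sparse_vs_dense_tensors.py` benchmark itself): it models dense storage as one float32 value per element, and COO sparse storage as one float32 value plus one int64 index per dimension per non-zero. Real PyTorch sparse tensors carry additional framework overhead beyond this model, which is one reason the measured break-even density (around 4%) is lower than a bare COO count would suggest.

```python
# Simplified memory model: dense vs. COO sparse storage.
# Assumptions (not taken from the benchmark script): float32 values
# (4 bytes), int64 indices (8 bytes), a 2-D tensor, and plain COO
# layout with no framework overhead.

VALUE_BYTES = 4   # float32 value
INDEX_BYTES = 8   # int64 index
NDIM = 2          # 2-D tensor -> NDIM index entries per stored non-zero


def dense_bytes(numel: int) -> int:
    """Dense storage: one value slot per element, zero or not."""
    return numel * VALUE_BYTES


def sparse_coo_bytes(numel: int, density: float) -> int:
    """COO storage: each non-zero stores its value plus one index per dim."""
    nnz = int(numel * density)
    return nnz * (VALUE_BYTES + NDIM * INDEX_BYTES)


if __name__ == "__main__":
    numel = 10_000 * 10_000
    for density in (0.01, 0.04, 0.095):
        ratio = sparse_coo_bytes(numel, density) / dense_bytes(numel)
        print(f"density {density:>5.1%}: sparse/dense ratio = {ratio:.2f}")
```

Under these assumptions each non-zero costs 20 bytes versus 4 bytes per dense element, so the modeled ratio grows linearly with density; the measured ratios in the table grow faster because of per-tensor bookkeeping that this sketch deliberately omits.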
