Commit 28203fe

added pretty print of elapsed time to main.rs and fixed minor indentation errors in doc
1 parent af48d3b commit 28203fe

5 files changed: +72 −26 lines

README.md

Lines changed: 17 additions & 17 deletions
@@ -16,34 +16,34 @@ provides simple ways to manage very large graphs, exploiting modern compression
 techniques. More precisely, it is currently made of:

 - A set of simple codes, called ζ _codes_, which are particularly suitable for
-storing web graphs (or, in general, integers with a power-law distribution in a
-certain exponent range).
+  storing web graphs (or, in general, integers with a power-law distribution in a
+  certain exponent range).

 - Algorithms for compressing web graphs that exploit gap compression and
-differential compression (à la
-[LINK](http://www.hpl.hp.com/techreports/Compaq-DEC/SRC-RR-175.html)),
-intervalisation, and ζ codes to provide a high compression ratio (see [our
-datasets](http://law.di.unimi.it/datasets.php)). The algorithms are controlled
-by several parameters, which provide different tradeoffs between access speed
-and compression ratio.
+  differential compression (à la
+  [LINK](http://www.hpl.hp.com/techreports/Compaq-DEC/SRC-RR-175.html)),
+  intervalisation, and ζ codes to provide a high compression ratio (see [our
+  datasets](http://law.di.unimi.it/datasets.php)). The algorithms are controlled
+  by several parameters, which provide different tradeoffs between access speed
+  and compression ratio.

 - Algorithms for accessing a compressed graph without actually decompressing
-it, using lazy techniques that delay the decompression until it is actually
-necessary.
+  it, using lazy techniques that delay the decompression until it is actually
+  necessary.

 - Algorithms for analysing very large graphs, such as {@link
-it.unimi.dsi.webgraph.algo.HyperBall}, which has been used to show that
-Facebook has just [four degrees of
-separation](http://vigna.di.unimi.it/papers.php#BBRFDS).
+  it.unimi.dsi.webgraph.algo.HyperBall}, which has been used to show that
+  Facebook has just [four degrees of
+  separation](http://vigna.di.unimi.it/papers.php#BBRFDS).

 - A [Java implementation](http://webgraph.di.unimi.it/) of the algorithms above,
   now in maintenance mode.

 - This crate, providing a complete, documented implementation of the algorithms
   above in Rust. It is free software distributed under either the [GNU Lesser
-General Public License
-2.1+](https://www.gnu.org/licenses/old-licenses/lgpl-2.1.html) or the [Apache
-Software License 2.0](https://www.apache.org/licenses/LICENSE-2.0).
+  General Public License
+  2.1+](https://www.gnu.org/licenses/old-licenses/lgpl-2.1.html) or the [Apache
+  Software License 2.0](https://www.apache.org/licenses/LICENSE-2.0).

 - [Data sets](http://law.di.unimi.it/datasets.php) for large graph (e.g.,
   billions of links).

@@ -107,7 +107,7 @@ for_!((src, succ) in graph {
 ## More Options

 - By starting from the [`BVGraphSeq`] class you can obtain an instance that does
-not need the `BASENAME.ef` file, but provides only [iteration].
+  not need the `BASENAME.ef` file, but provides only [iteration].

 - Graphs can be labeled by [zipping] them together with a [labeling]. In fact,
   graphs are just labelings with `usize` labels.
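
As context for the "More Options" hunk above: a minimal sketch of the sequential-only access path it describes, assuming the crate exposes a `BVGraphSeq::with_basename(…).load()` builder and the `for_!` iteration macro visible in the hunk header. The loading API is not part of this diff, so treat the names below as illustrative rather than definitive:

```rust
use anyhow::Result;
use webgraph::prelude::*;

fn main() -> Result<()> {
    // Sequential-only view of the graph: no BASENAME.ef offsets file is
    // needed, but only iteration (no random access) is available.
    let graph = BVGraphSeq::with_basename("BASENAME").load()?;
    for_!((src, succ) in graph {
        for dst in succ {
            println!("{} -> {}", src, dst);
        }
    });
    Ok(())
}
```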

src/algo/llp/mod.rs

Lines changed: 2 additions & 2 deletions
@@ -67,9 +67,9 @@ pub mod preds;
 /// [par_apply](crate::traits::SequentialLabeling::par_apply).
 /// * `gammas` - The ɣ values to use in the LLP algorithm.
 /// * `num_threads` - The number of threads to use. If `None`, the number of
-/// threads is set to [`num_cpus::get`].
+///   threads is set to [`num_cpus::get`].
 /// * `chunk_size` - The chunk size used to randomize the permutation. This is
-/// an advanced option: see
+///   an advanced option: see
 /// [par_apply](crate::traits::SequentialLabeling::par_apply).
 /// * `granularity` - The granularity of the parallel processing expressed as
 /// the number of arcs to process at a time. If `None`, the granularity is
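
As a side note on the `gammas` argument documented above: a hypothetical sketch of one common way to pick the ɣ values (zero plus a few negative powers of two, as in the layered label propagation literature). The actual call into the LLP entry point is not shown in this diff, so only the value construction is illustrated:

```rust
fn main() {
    // Zero plus 2^0 … 2^-7: a typical ɣ schedule for LLP (illustrative only).
    let gammas: Vec<f64> = std::iter::once(0.0)
        .chain((0..8).map(|i| 0.5f64.powi(i)))
        .collect();
    println!("{:?}", gammas);

    // Per the documentation above, passing `None` for `num_threads`
    // falls back to num_cpus::get().
    let _num_threads: Option<usize> = None;
}
```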

src/graphs/bvgraph/codecs/factories.rs

Lines changed: 1 addition & 1 deletion
@@ -14,7 +14,7 @@ Implementations of the [`BitReaderFactory`] trait can be used to create
 bit readers accessing a graph data using different techniques.
 - [`FileFactory`] uses a [std::fs::File] to create a bit reader.
 - [`MemoryFactory`] creates bit readers from a slice of memory,
-either [allocated](MemoryFactory::new_mem) or [mapped](MemoryFactory::new_mmap).
+  either [allocated](MemoryFactory::new_mem) or [mapped](MemoryFactory::new_mmap).
 - [`MmapHelper`] can be used to create a bit reader from a memory-mapped file.

 Any factory can be plugged either into a

src/graphs/bvgraph/random_access.rs

Lines changed: 5 additions & 5 deletions
@@ -57,14 +57,14 @@ where
 ///
 /// # Arguments
 /// - `reader_factory`: backend that can create objects that allows
-/// us to read the bitstream of the graph to decode the edges.
+///   us to read the bitstream of the graph to decode the edges.
 /// - `offsets`: the bit offset at which we will have to start for decoding
-/// the edges of each node. (This is needed for the random accesses,
-/// [`BVGraphSeq`] does not need them)
+///   the edges of each node. (This is needed for the random accesses,
+///   [`BVGraphSeq`] does not need them)
 /// - `min_interval_length`: the minimum size of the intervals we are going
-/// to decode.
+///   to decode.
 /// - `compression_window`: the maximum distance between two nodes that
-/// reference each other.
+///   reference each other.
 /// - `number_of_nodes`: the number of nodes in the graph.
 /// - `number_of_arcs`: the number of arcs in the graph.
 ///

src/main.rs

Lines changed: 47 additions & 1 deletion
@@ -11,6 +11,7 @@ use clap_complete::shells::Shell;
 use webgraph::{build_info, cli};

 pub fn main() -> Result<()> {
+    let start = std::time::Instant::now();
     env_logger::builder()
         .filter_level(log::LevelFilter::Debug)
         .try_init()?;

@@ -92,5 +93,50 @@ pub fn main() -> Result<()> {
         simplify,
         to_csv,
         transpose
-    )
+    )?;
+
+    log::info!(
+        "The command took {}",
+        pretty_print_elapsed(start.elapsed().as_secs_f64())
+    );
+
+    Ok(())
+}
+
+/// Pretty print the elapsed seconds in a human readable format.
+fn pretty_print_elapsed(elapsed: f64) -> String {
+    let mut result = String::new();
+    let mut elapsed_seconds = elapsed as u64;
+    let weeks = elapsed_seconds / (60 * 60 * 24 * 7);
+    elapsed_seconds %= 60 * 60 * 24 * 7;
+    let days = elapsed_seconds / (60 * 60 * 24);
+    elapsed_seconds %= 60 * 60 * 24;
+    let hours = elapsed_seconds / (60 * 60);
+    elapsed_seconds %= 60 * 60;
+    let minutes = elapsed_seconds / 60;
+    //elapsed_seconds %= 60;
+
+    match weeks {
+        0 => {}
+        1 => result.push_str("1 week "),
+        _ => result.push_str(&format!("{} weeks ", weeks)),
+    }
+    match days {
+        0 => {}
+        1 => result.push_str("1 day "),
+        _ => result.push_str(&format!("{} days ", days)),
+    }
+    match hours {
+        0 => {}
+        1 => result.push_str("1 hour "),
+        _ => result.push_str(&format!("{} hours ", hours)),
+    }
+    match minutes {
+        0 => {}
+        1 => result.push_str("1 minute "),
+        _ => result.push_str(&format!("{} minutes ", minutes)),
+    }
+
+    result.push_str(&format!("{:.3} seconds ({}s)", elapsed % 60.0, elapsed));
+    result
 }
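
For reference, a quick sanity check of the formatting helper added above (a hypothetical test sketch; it assumes `pretty_print_elapsed` is in scope in the same module):

```rust
#[test]
fn pretty_print_elapsed_formats_units() {
    // 1 hour + 1 minute + 1.5 seconds = 3661.5 seconds total.
    assert_eq!(
        pretty_print_elapsed(3661.5),
        "1 hour 1 minute 1.500 seconds (3661.5s)"
    );
    // Sub-minute durations print only the seconds part.
    assert_eq!(pretty_print_elapsed(2.0), "2.000 seconds (2s)");
}
```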
