==============================================================================
*xpypact*: FISPACT output to Polars or DuckDB converter
==============================================================================

Description
-----------

The module loads FISPACT JSON output files and converts them to Polars dataframes
with minor data normalization.
This allows efficient data extraction and aggregation.
Multiple JSON files can be combined using a simple additional identification of the
different FISPACT runs. So far we use just a two-dimensional identification by
material and case. The case usually identifies a certain neutron flux.
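
A minimal sketch of that idea (the file name and the material and case numbers are
illustrative placeholders; see the Examples section below for the complete workflow):

.. code-block::

    from pathlib import Path

    from xpypact import FullDataCollector, Inventory

    collector = FullDataCollector()
    # "run1.json", material_id=1 and case_id=1 are placeholders for illustration
    collector.append(Inventory.from_json(Path("run1.json")), material_id=1, case_id=1)
    collected = collector.get_result()  # collected data, ready to be saved or queried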

Implemented functionality
-------------------------

- export to DuckDB
- export to parquet files

.. note::

    The currently available FISPACT v.5 API uses a rather old Python version (3.6).
    That prevents direct use of their API in our package (which requires >=3.10).
    Check whether our own Python integration with FISPACT is reasonable and feasible,
    or provide our own FISPACT Python binding.

Installation
------------

Examples
--------

.. code-block::

    from pathlib import Path

    from xpypact import FullDataCollector, Inventory

    def get_material_id(p: Path) -> int:
        ...

    def get_case_id(p: Path) -> int:
        ...

    # paths to the FISPACT JSON output files to combine
    jsons = [path1, path2, ...]
    material_ids = {p: get_material_id(p) for p in jsons}
    case_ids = {p: get_case_id(p) for p in jsons}

    collector = FullDataCollector()

    for json in jsons:
        inventory = Inventory.from_json(json)
        collector.append(inventory, material_id=material_ids[json], case_id=case_ids[json])

    collected = collector.get_result()

    # save to parquet files

    collected.save_to_parquets(Path.cwd() / "parquets")

    # or use DuckDB database

    from xpypact.dao import save
    import duckdb as db

    con = db.connect()
    save(con, collected)

    gamma_from_db = con.sql(
        """
        select g, rate
        from timestep_gamma
        where material_id = 1 and case_id = 54 and time_step_number = 7
        order by g
        """,
    ).fetchall()
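
The parquet files written by ``save_to_parquets`` can also be queried directly with
Polars. A minimal sketch, assuming the output directory contains one file per table
and that the gamma spectrum table is stored as ``timestep_gamma.parquet`` (the file
name here is an assumption, not a documented guarantee):

.. code-block::

    import polars as pl

    gamma_from_parquet = (
        pl.scan_parquet("parquets/timestep_gamma.parquet")
        .filter(
            (pl.col("material_id") == 1)
            & (pl.col("case_id") == 54)
            & (pl.col("time_step_number") == 7)
        )
        .select(["g", "rate"])
        .sort("g")
        .collect()
    )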

Contributing
------------