Upgrade ProblemReductions, improve documentation #94

Merged
merged 5 commits into from Mar 13, 2025

12 changes: 6 additions & 6 deletions .github/workflows/CI.yml
@@ -36,12 +36,12 @@ jobs:
arch:
- x64
steps:
- uses: actions/checkout@v2
- uses: julia-actions/setup-julia@v1
- uses: actions/checkout@v4
- uses: julia-actions/setup-julia@v2
with:
version: ${{ matrix.version }}
arch: ${{ matrix.arch }}
- uses: actions/cache@v1
- uses: actions/cache@v4
env:
cache-name: cache-artifacts
with:
@@ -54,16 +54,16 @@ jobs:
- uses: julia-actions/julia-buildpkg@v1
- uses: julia-actions/julia-runtest@v1
- uses: julia-actions/julia-processcoverage@v1
- uses: codecov/codecov-action@v1
- uses: codecov/codecov-action@v5
with:
file: lcov.info

docs:
name: Documentation
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v2
- uses: julia-actions/setup-julia@v1
- uses: actions/checkout@v4
- uses: julia-actions/setup-julia@v2
with:
version: '1'
- run: |
2 changes: 1 addition & 1 deletion Makefile
@@ -19,7 +19,7 @@ coverage:
$(JL) -e 'using Pkg; Pkg.test("GenericTensorNetworks"; coverage=true)'

serve:
$(JL) -e 'using Pkg; Pkg.activate("docs"); using LiveServer; servedocs(;skip_dirs=["docs/src/assets", "docs/src/generated"], literate_dir="examples")'
$(JL) -e 'using Pkg; Pkg.activate("docs"); using LiveServer; servedocs(;skip_dirs=["docs/build", "docs/src/assets", "docs/src/generated"], literate_dir="examples")'

clean:
rm -rf docs/build
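
The `serve` target above just wraps a Julia call; if you prefer to run it straight from a Julia REPL at the repository root, a minimal sketch (assuming LiveServer is available in the `docs` environment) looks like:

```julia
# Equivalent of `make serve`: live-preview the documentation locally.
using Pkg
Pkg.activate("docs")                 # use the docs/ project environment
using LiveServer
servedocs(;
    skip_dirs = ["docs/build", "docs/src/assets", "docs/src/generated"],  # ignore generated output
    literate_dir = "examples",       # Literate.jl example sources
)
```
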
2 changes: 1 addition & 1 deletion Project.toml
@@ -41,7 +41,7 @@ LuxorGraphPlot = "0.5"
OMEinsum = "0.8"
Polynomials = "4"
Primes = "0.5"
ProblemReductions = "0.2"
ProblemReductions = "0.3"
Random = "1"
SIMDTypes = "0.1"
Serialization = "1"
122 changes: 79 additions & 43 deletions docs/src/index.md
@@ -4,57 +4,93 @@ CurrentModule = GenericTensorNetworks

# GenericTensorNetworks

This package implements generic tensor networks to compute *solution space properties* of a class of hard combinatorial problems.
The *solution space properties* include
* The maximum/minimum solution sizes,
* The number of solutions at certain sizes,
* The enumeration of solutions at certain sizes,
* The direct sampling of solutions at certain sizes.

The solvable problems include [Independent set problem](@ref), [Maximal independent set problem](@ref), [Spin-glass problem](@ref), [Cutting problem](@ref), [Vertex matching problem](@ref), [Binary paint shop problem](@ref), [Coloring problem](@ref), [Dominating set problem](@ref), [Satisfiability problem](@ref), [Set packing problem](@ref) and [Set covering problem](@ref).

## Background knowledge

Please check our paper ["Computing properties of independent sets by generic programming tensor networks"](https://arxiv.org/abs/2205.03718).
If you find our paper or software useful in your work, we would be grateful if you could cite our work. The [CITATION.bib](https://github.com/QuEraComputing/GenericTensorNetworks.jl/blob/master/CITATION.bib) file in the root of this repository lists the relevant papers.

## Quick start

You can find a setup guide in our [README](https://github.com/QuEraComputing/GenericTensorNetworks.jl).
To get started, open a Julia REPL and type the following code.

```@repl
using GenericTensorNetworks, Graphs#, CUDA
solve(
    GenericTensorNetwork(IndependentSet(
            Graphs.random_regular_graph(20, 3),
            UnitWeight(20));        # default: uniform weight 1
        optimizer = TreeSA(),
        openvertices = (),          # default: no open vertices
        fixedvertices = Dict()      # default: no fixed vertices
    ),
    GraphPolynomial();
    usecuda=false                   # the default value
)
```
## Overview
GenericTensorNetworks is a high-performance package that uses tensor network algorithms to solve challenging combinatorial optimization problems. This approach allows us to efficiently compute various solution space properties that would be intractable with traditional methods.

## Key Capabilities
Our package can compute a wide range of solution space properties, each corresponding to a property type passed to `solve` (see the sketch after this list):

* Maximum and minimum solution sizes
* Solution counts at specific sizes
* Complete enumeration of solutions
* Statistical sampling from the solution space
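
As a rough illustration of how these capabilities map to the API (a sketch that assumes the `solve`/`GenericTensorNetwork` interface introduced in the Basic Example below; exact values depend on the random graph):

```julia
using GenericTensorNetworks, Graphs

# One tensor network, queried for several solution space properties.
net = GenericTensorNetwork(IndependentSet(Graphs.random_regular_graph(20, 3), UnitWeight(20)))

solve(net, SizeMax())       # maximum independent-set size
solve(net, CountingMax())   # number of maximum-size independent sets
solve(net, ConfigsMax())    # enumerate the maximum-size independent sets
```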

## Supported Problem Classes
GenericTensorNetworks can solve many important combinatorial problems:

* [Independent Set Problem](@ref)
* [Maximal Independent Set Problem](@ref)
* [Spin-Glass Problem](@ref)
* [Maximum Cut Problem](@ref)
* [Vertex Matching Problem](@ref)
* [Binary Paint Shop Problem](@ref)
* [Graph Coloring Problem](@ref)
* [Dominating Set Problem](@ref)
* [Boolean Satisfiability Problem](@ref)
* [Set Packing Problem](@ref)
* [Set Covering Problem](@ref)

## Scientific Background
For the theoretical foundation and algorithmic details, please refer to our paper:
["Computing properties of independent sets by generic programming tensor networks"](https://arxiv.org/abs/2205.03718)

If you find our package useful in your research, please cite our work using the references in [CITATION.bib](https://github.com/QuEraComputing/GenericTensorNetworks.jl/blob/master/CITATION.bib).

## Getting Started

### Installation
Installation instructions are available in our [README](https://github.com/QuEraComputing/GenericTensorNetworks.jl).
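
GenericTensorNetworks is a registered Julia package, so the usual one-liner should work (add `CUDA` as well if you plan to use a GPU):

```julia
using Pkg
Pkg.add("GenericTensorNetworks")   # from the Julia General registry
# Pkg.add("CUDA")                  # optional, for GPU acceleration
```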

### Basic Example
Here's a simple example that computes the independence polynomial of a random regular graph:

```julia
using GenericTensorNetworks, Graphs # Add CUDA for GPU acceleration

# Create and solve a problem instance
result = solve(
    GenericTensorNetwork(
        IndependentSet(
            Graphs.random_regular_graph(20, 3),  # Graph to analyze
            UnitWeight(20)                       # Uniform vertex weights
        );
        optimizer = TreeSA(),        # Contraction order optimizer
        openvertices = (),           # No open vertices
        fixedvertices = Dict()       # No fixed vertices
    ),
    GraphPolynomial();               # Property to compute
    usecuda = false                  # Use CPU (set true for GPU)
)
```
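
Because no `openvertices` are specified, `solve` returns a zero-dimensional array; a small usage sketch for reading out the computed polynomial:

```julia
poly = result[]   # extract the independence polynomial from the 0-dimensional result array
```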

Here the main function [`solve`](@ref) takes three input arguments: the problem instance of type [`IndependentSet`](@ref), the property instance of type [`GraphPolynomial`](@ref), and an optional keyword argument `usecuda` that decides whether to use the GPU.
To accelerate the computation with a GPU, uncomment the `, CUDA` part of the `using` statement.
### Understanding the API

The main function `solve` takes three components:

An [`IndependentSet`](@ref) instance takes two positional arguments to initialize: the graph instance to solve and the weights for each vertex. Here, we use a random regular graph with 20 vertices and degree 3, and the default uniform weight 1.
1. **Problem Instance**: Created with `GenericTensorNetwork`, which wraps problem types like `IndependentSet`
- The first argument defines the problem (graph and weights)
- Optional arguments control the tensor network construction:
- `optimizer`: Algorithm for finding efficient contraction orders
- `openvertices`: Degrees of freedom to leave uncontracted
- `fixedvertices`: Variables with fixed assignments

The [`GenericTensorNetwork`](@ref) function is a constructor for the problem instance, which takes the problem instance as the first argument and optional keyword arguments. The keyword argument `optimizer` specifies the tensor network optimization algorithm.
The keyword argument `openvertices` is a tuple of labels specifying the degrees of freedom not summed over, and `fixedvertices` is a label-value dictionary specifying the fixed values of the degrees of freedom.
Here, we use the [`TreeSA`](@ref) method as the tensor network optimizer, and leave `openvertices` at its default value.
The [`TreeSA`](@ref) method finds the best contraction order in most of our applications, while the default [`GreedyMethod`](@ref) runs the fastest.
2. **Property to Compute**: Such as `GraphPolynomial`, `SizeMax`, or `ConfigsAll`

3. **Computation Options**: Like `usecuda` to enable GPU acceleration

Note: The first execution will be slower due to Julia's just-in-time compilation. Subsequent runs will be much faster.
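
To see how these pieces compose, here is a hedged sketch that reuses the graph from the Basic Example but changes the property and the network options (the exact result depends on the random graph; `GreedyMethod` and `SizeMax` are the optimizer and property types referenced elsewhere in these docs):

```julia
# Same problem, different property and options:
# compute the maximum independent-set size with vertex 1 forced out of the set.
problem = IndependentSet(Graphs.random_regular_graph(20, 3), UnitWeight(20))
net = GenericTensorNetwork(problem;
    optimizer = GreedyMethod(),    # fast default contraction-order optimizer
    fixedvertices = Dict(1 => 0),  # pin the variable of vertex 1 to 0
)
solve(net, SizeMax())[]            # maximum size, returned as a tropical number
```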

### API Structure
The following diagram illustrates the possible combinations of inputs:

The first execution of this function will be a bit slow due to Julia's just-in-time compilation.
The subsequent runs will be fast.
The following diagram lists possible combinations of the input arguments, where functions in the `Graph` box are mainly defined in the package [Graphs](https://github.com/JuliaGraphs/Graphs.jl), and the rest can be found in this package.
```@raw html
<div align=center>
<img src="assets/fig7.svg" width="75%"/>
</div>
```
You can find many examples in this documentation; a good one to start with is the [Independent set problem](@ref).

Functions in the `Graph` box are primarily from the [Graphs](https://github.com/JuliaGraphs/Graphs.jl) package, while the rest are defined in GenericTensorNetworks.

## Next Steps
For a deeper understanding, we recommend starting with the [Independent Set Problem](@ref) example, which demonstrates the core functionality of the package.