
Use Cholesky decomposition for correlated_values #99

Open
@ces42

Description


While trying to do by hand exactly what correlated_values(nom_values, covariance_mat) does, I came up with a different approach from the one used in the current implementation:

from numpy import dot, ones, zeros
from numpy.linalg import cholesky
from uncertainties.unumpy import uarray

n = len(nom_values)
L = cholesky(covariance_mat)  # lower-triangular L with L @ L.T == covariance_mat
return nom_values + dot(L, uarray(zeros(n), ones(n)))
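Wrapped up as a self-contained helper, the idea could look roughly like this (the name correlated_values_cholesky and the 2x2 example covariance matrix are made up for illustration, not part of the package):

import numpy as np
from uncertainties import covariance_matrix, unumpy

def correlated_values_cholesky(nom_values, cov_mat):
    # Hypothetical helper: build correlated uncertain values from a covariance
    # matrix via its Cholesky factor L (cov_mat == L @ L.T).
    n = len(nom_values)
    L = np.linalg.cholesky(cov_mat)
    # n independent variables with nominal value 0 and standard deviation 1
    indep = unumpy.uarray(np.zeros(n), np.ones(n))
    return nom_values + np.dot(L, indep)

# Example (made-up covariance matrix): the covariance matrix recovered from
# the resulting values should match the input up to rounding.
cov = np.array([[1.0, 0.5], [0.5, 2.0]])
x, y = correlated_values_cholesky(np.array([10.0, 20.0]), cov)
print(np.array(covariance_matrix([x, y])) - cov)  # ~0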

This does essentially the same thing as correlated_values (except for the tags, and it returns an array instead of a tuple). It should be faster than diagonalizing the covariance matrix, and some quick tests suggest that it might be numerically more precise:
For nom_values = ones(10) and covariance_mat = scipy.linalg.hilbert(10), the covariance matrix reconstructed with covariance_matrix differs from the input in only 15 entries with the Cholesky approach, while with the current implementation it differs in 78 entries.
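A rough sketch of that comparison, assuming numpy, scipy and the installed uncertainties package (the 15-vs-78 counts are the numbers reported above; they are not guaranteed to be identical on every setup):

import numpy as np
from scipy.linalg import hilbert
from uncertainties import correlated_values, covariance_matrix, unumpy

nom = np.ones(10)
cov = hilbert(10)  # ill-conditioned but positive definite

# Current implementation (diagonalization of the covariance matrix)
via_eig = correlated_values(nom, cov)

# Cholesky-based construction from the snippet above
L = np.linalg.cholesky(cov)
via_chol = nom + np.dot(L, unumpy.uarray(np.zeros(10), np.ones(10)))

for name, values in [("diagonalization", via_eig), ("cholesky", via_chol)]:
    reconstructed = np.array(covariance_matrix(values))
    # Count entries that are not reproduced exactly; the numbers above
    # correspond to 78 for the current implementation vs. 15 for Cholesky.
    print(name, np.count_nonzero(reconstructed != cov))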

Is there a reason why the current implementation uses diagonalization instead of Cholesky? (If there is no such reason, I would be happy to write a pull request.)

P.S. I really like this package and it's saving me a ton of time in my lab course :)
