BUG: pytensor config lost in pm.sample(mp_ctx='spawn') #7790

Open
velochy opened this issue May 19, 2025 · 4 comments
Comments

@velochy
Contributor

velochy commented May 19, 2025

Describe the issue:

When calling pm.sample(mp_ctx='spawn'), pytensor.config settings set in the parent process are lost in the worker processes, at least when compiling some functions.

Reproducible code example:

-
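A hypothetical minimal sketch of the setup described in the context below (not part of the original report; this toy model will not actually hit the compiler limit, it only illustrates the call pattern):

import pymc as pm
import pytensor

# Override set in the parent process only
pytensor.config.gcc__cxxflags = '-fbracket-depth=4096'

with pm.Model():
    x = pm.Normal('x')
    # Under mp_ctx='spawn', worker processes re-import pytensor and
    # compile with the default flags, losing the override above.
    idata = pm.sample(chains=2, mp_ctx='spawn')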

Error message:

PyMC version information:

pymc 5.22.0

Context for the issue:

The situation is as follows: we are working with rather large models, and on OSX we run into the native compiler's bracket-nesting limit. The solution we've used so far is to just add

import platform
import pytensor

if platform.system() == 'Darwin':
    pytensor.config.gcc__cxxflags = '-fbracket-depth=4096'

into our framework, which works when using a single chain. However, when using mp_ctx='spawn', compilation fails, complaining that it reaches the default bracket depth of 256, i.e. it is clear this flag is not passed on to the worker processes.

The easy workaround is to just set this value in .pytensorrc.
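For reference, that looks something like the snippet below, assuming the usual section__option mapping of pytensor config flags; because both the file and the PYTENSOR_FLAGS environment variable are read at import time, each spawned process picks the value up (run_model.py is a placeholder for your entry point):

# ~/.pytensorrc
[gcc]
cxxflags = -fbracket-depth=4096

# or, equivalently, via the environment:
PYTENSOR_FLAGS='gcc__cxxflags=-fbracket-depth=4096' python run_model.py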

I am, however, wondering why compilation is not shared, i.e. done before the process gets split?

@velochy velochy added the bug label May 19, 2025
@ricardoV94
Member

ricardoV94 commented May 20, 2025

I am, however, wondering why compilation is not shared i.e. done before the process gets split?

It is when you fork. The step samplers, which hold the compiled sampling functions, are pickled and sent to their processes, iirc.

Spawn will re-import stuff / re-execute the code that compiles the functions, I presume.
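The difference can be seen with plain multiprocessing (a minimal sketch, not PyMC internals): module-level state mutated after import is inherited by forked children, while spawned children re-import the module and reset it.

import multiprocessing as mp

CONFIG = {'cxxflags': 'default'}  # stands in for pytensor.config

def report(label):
    print(label, CONFIG['cxxflags'])

if __name__ == '__main__':
    CONFIG['cxxflags'] = '-fbracket-depth=4096'  # mutated in the parent only

    for method in ('fork', 'spawn'):  # 'fork' is unavailable on Windows
        ctx = mp.get_context(method)
        p = ctx.Process(target=report, args=(method,))
        p.start()
        p.join()
    # fork prints the mutated value; spawn prints 'default', because the
    # spawned child re-imports the module and re-runs its top level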

@ricardoV94
Member

I am, however, wondering why compilation is not shared i.e. done before the process gets split?

Code complexity is my guess.

@velochy
Contributor Author

velochy commented May 20, 2025

I figured this bug is worth reporting, as it might be a canary for an issue you may actually want to look into.

For our purposes, the easy workaround of using .pytensorrc solves the issue, so feel free to close it if you believe it does not warrant any action.

@ricardoV94
Member

I don't know why the pytensor config flags don't get redefined in the spawned context.
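One conceivable direction, sketched with plain multiprocessing rather than PyMC's actual machinery (_init_worker and overrides are hypothetical names): capture the parent's overrides and reapply them in each spawned worker via an initializer.

import multiprocessing as mp

def _init_worker(flag_overrides):
    # Runs inside each spawned worker, after its fresh import of pytensor
    import pytensor
    for name, value in flag_overrides.items():
        setattr(pytensor.config, name, value)

if __name__ == '__main__':
    overrides = {'gcc__cxxflags': '-fbracket-depth=4096'}
    ctx = mp.get_context('spawn')
    with ctx.Pool(processes=2, initializer=_init_worker,
                  initargs=(overrides,)) as pool:
        pass  # workers now compile with the forwarded flags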
