
Multithreading Architecture Improvements in FreeCAD #36


Open
tritao opened this issue Feb 28, 2025 · 22 comments
Labels
funded The FPA voted to fund this proposal

Comments

@tritao

tritao commented Feb 28, 2025

Proposal description

This proposal aims to improve performance by offloading heavy OCCT computations and ensuring better UI responsiveness. The work focuses on enhancing the base infrastructure to enable asynchronous operations, setting the stage for eventual parallel processing of the document DAG (which is not included in this proposal).

Deliverables

  • Async Document Recompute:
    Develop and deliver pull requests (PRs) that enable asynchronous recomputation of the document and its objects. This includes supporting recomputations for Part and PartDesign features.

  • Multithreaded Signal System:
    Refactor the existing synchronous Boost signal system to support multithreaded operation, thereby improving responsiveness and scalability.

  • Python Async Support:
    Implement Python asynchronous support to facilitate non-blocking operations and improve integration with the new multithreading architecture.

  • UI Improvements for Background Tasks:
    Deliver a PR that introduces UI enhancements to manage modal or background tasks, ensuring that long-running processes do not freeze the interface.

Timeline

The project is estimated to take 3 months overall, with the time divided equally among the main deliverables:

  1. Month 1:

    • Focus on developing asynchronous document recompute capabilities and implementing asynchronous recompute for document objects, including Part/PartDesign features.
  2. Month 2:

    • Refactor the existing synchronous Boost signal system for multithreaded operation.
    • Begin preliminary work on Python async support.
  3. Month 3:

    • Complete Python async support.
    • Develop and integrate UI improvements to support background and modal tasks, ensuring a smooth user experience.

Risks and mitigation

Multithreading in FreeCAD is challenging due to the current architectural issues, but discussions and planning have paved a clear path forward. While there is a known PR by a community member (James Stanley) addressing part of the problem (which has some architectural issues), we plan to resolve those challenges with our approach, and will try to work together with the community to integrate such work where possible. We have already implemented a working proof of concept for a subset of this work, which gives us confidence in managing technical risks.

Compensation

The total compensation requested for the project is 2000 EUR, to be paid at the end of the project.

About you

Joao Matos (tritao), main grant applicant:

  • Several years of experience working on multithreaded systems, including 3D game/graphics engines and servers.
  • 60 commits accepted in FreeCAD.

Benjamin Nauck (hyarion):

  • 7 years of professional game development experience, 2 years building graphical simulation systems, and 8 months as a research assistant working on HPC.
  • 100 commits accepted in FreeCAD.
@kadet1090
Member

We need more proposals like this one. It is well structured, the deliverables are easy to understand and have quite clear definitions of done. It is reasonably priced (if anything, you guys should ask for more!) and the feature is much needed.

I've seen preliminary work on this topic from the authors, and it looks very promising and clearly achievable by them. Hope this passes. Good luck, guys!

@chennes chennes added the under committee review Currently being reviewed by the FPA Grant Review Committee label Mar 1, 2025
@pieterhijma

I had a meeting today with @tritao and @hyarion and I promised to summarize the points that I made in the meeting for this project. In my experience, any form of multithreading makes things much more complex, and the performance benefits are often limited as well. This doesn't mean we shouldn't do it, but it would be good to take the following things into account:

My first point is to take Amdahl's law and Gustafson's law into account. Amdahl's law states that speedup is limited by the fraction of the computation that is sequential: if there is a relatively high proportion of sequential computation, the expected speedup is limited. Gustafson's law is more optimistic; it essentially states that you can increase the relative amount of parallelism by increasing the problem size, thus improving the expected speedup.
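Amdahl's bound can be made concrete with a small calculation (the parallel fractions below are illustrative numbers, not measured FreeCAD figures):

```python
def amdahl_speedup(p: float, n: int) -> float:
    """Amdahl's law: upper bound on speedup when a fraction p of the
    work is parallelizable and it runs on n workers."""
    return 1.0 / ((1.0 - p) + p / n)

# With half the work sequential, even unlimited workers cap out near 2x.
print(round(amdahl_speedup(0.50, 4), 2))         # 1.6
print(round(amdahl_speedup(0.50, 1_000_000), 2)) # 2.0
print(round(amdahl_speedup(0.95, 8), 2))         # 5.93
```

Gustafson's point is that p itself can grow with problem size; the concern raised above is that typical FreeCAD models keep p small.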

It is important to note here that synchronization performed as part of the parallel execution should be regarded as sequential computation. So, if the synchronization required by parallelism doesn't scale, speedup will also be limited.

Given this, it is important to understand whether FreeCAD is in the Amdahl's law case or the Gustafson's law case. I believe we are more in the Amdahl's law case, because even with parallel processing, users aren't going to create larger, more parallel models. Additionally, because FreeCAD in general has many dependencies between objects, the execute phase of these dependent objects needs to run sequentially (see this account of the dependency checking for an explanation of the recompute phase).

Because of this, the point I wanted to make is that the amount of speedup that we can expect from adding more multithreading is probably going to be limited in the general case.

Additionally, if this project is granted, I think it is important to take the following things into account:

  1. Make sure that synchronization does not add too much cost and -- more importantly -- that it scales as the amount of parallelism increases.
  2. Have a good understanding of whether the solution prevents unlocking more parallelism later, for example by making it very complex to improve the dependency-check mechanism (e.g. making it property-based instead of object-based).
  3. Make sure that the solution still scales if the sequential code is heavily improved. Synchronization costs can increase (relatively) as sequential code improves, and I think there are many cases in which the sequential code can be improved as well.
  4. Expect that the complexity of the code increases in general. Challenges that I foresee are issues with callbacks into Python, Python Global Interpreter Lock (GIL) problems, and calling Python code from string commands (for displaying it in the Python console).
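On the last point, a minimal pure-Python sketch (illustrative only) of the GIL constraint: CPU-bound work spread over threads stays correct but is serialized by the interpreter, which is why the win here is responsiveness rather than raw Python throughput.

```python
from concurrent.futures import ThreadPoolExecutor

def busy_sum(n: int) -> int:
    # Pure-Python CPU-bound work; the GIL lets only one thread run it at a time.
    return sum(i * i for i in range(n))

with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(busy_sum, [10_000] * 4))

# Correctness is preserved, but wall-clock time stays close to sequential;
# real parallel speedups require releasing the GIL in C++ (e.g. around OCCT calls).
assert all(r == busy_sum(10_000) for r in results)
```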

@pieterhijma

The second point I wanted to make is that, in my experience parallelizing code, it is very important to do it on the basis of measurements. So, I would recommend obtaining or creating a representative data set of FreeCAD models with low performance, covering general FreeCAD execution but also more favorable cases. A second step is to always be able to measure the performance increase or decrease, preferably in a way that can distinguish "useful work" from "overhead".
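One way to make the "useful work" vs. "overhead" split measurable is a small per-category timer. Everything below is a hypothetical sketch, not existing FreeCAD instrumentation:

```python
import time
from collections import defaultdict
from contextlib import contextmanager

timings = defaultdict(float)       # seconds accumulated per category

@contextmanager
def timed(category: str):
    start = time.perf_counter()
    try:
        yield
    finally:
        timings[category] += time.perf_counter() - start

with timed("useful_work"):
    total = sum(range(100_000))    # stands in for an OCCT computation
with timed("overhead"):
    pass                           # stands in for locking/queueing cost

# A report like this makes regressions caused by synchronization visible.
print(sorted(timings))  # ['overhead', 'useful_work']
```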

@pieterhijma

We've also discussed the problems with Python's GIL, and I would recommend installing a Python debug build, because it will give you very useful information on GIL problems.

Examples of GIL issues:

@pieterhijma

By the way, @tritao and @hyarion had very good answers to all this but it's better that they post them themselves :)

@chennes
Member

chennes commented Mar 5, 2025

Thank you again for your grant proposal: it was submitted in time to be evaluated as part of the Q1 2025 grant cycle, and is currently under committee review. In this quarter we received 12 grant requests totaling approximately 61.800 EUR in requests, and we expect to award approximately 15.000 EUR in grants. As you can tell from these numbers we expect the approval process to be highly competitive! We appreciate your participation in the program, and you can expect to hear the results of the technical review committee's deliberations in two weeks.

@kadet1090
Member

> My first point is to take Amdahl's law and Gustafson's law into account. Amdahl's law states that speedup is limited by the fraction of the computation that is sequential: if there is a relatively high proportion of sequential computation, the expected speedup is limited. Gustafson's law is more optimistic; it essentially states that you can increase the relative amount of parallelism by increasing the problem size, thus improving the expected speedup.

While this is absolutely true, I don't think that speedup is the most important thing this grant will bring. From my perspective, even if the actual recompute workload stays sequential, running it on a different thread and adjusting the architecture so that some tasks can later be split and done in parallel is a huge win. Even simply unblocking the main UI thread to do UI work, so the application no longer appears frozen, is enough of a win.

An important aspect of this work is to ensure that the architecture is flexible and allows us to do more work in the future, even if the speedup coming from this phase is negligible.
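The pattern described here, sequential recompute moved off the main thread, can be sketched with a plain worker queue. All names are illustrative; none of this is FreeCAD API:

```python
import queue
import threading

tasks: queue.Queue = queue.Queue()
results = []

def worker() -> None:
    while True:
        job = tasks.get()
        if job is None:            # sentinel: shut the worker down
            break
        results.append(job())      # the "recompute" runs here, off the main thread

t = threading.Thread(target=worker, daemon=True)
t.start()

# The main ("UI") thread only enqueues work and stays free to handle events.
tasks.put(lambda: sum(range(1000)))
tasks.put(None)
t.join()
print(results)  # [499500]
```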

@hyarion

hyarion commented Mar 6, 2025

> I had a meeting today with @tritao and @hyarion and I promised to summarize the points that I made in the meeting for this project. In my experience, any form of multithreading makes things much more complex, and the performance benefits are often limited as well. This doesn't mean we shouldn't do it, but it would be good to take the following things into account:

We completely agree that multithreading introduces inherent complexity. Our approach is intentionally incremental - we aim to first address the user experience by decoupling heavy computations from the UI thread. This initial phase will provide essential data on performance gains versus added synchronization costs and serve as a foundation for future, more comprehensive parallelization efforts.

> My first point is to take Amdahl's law and Gustafson's law into account. Amdahl's law states that speedup is limited by the fraction of the computation that is sequential: if there is a relatively high proportion of sequential computation, the expected speedup is limited. Gustafson's law is more optimistic; it essentially states that you can increase the relative amount of parallelism by increasing the problem size, thus improving the expected speedup.

We acknowledge that FreeCAD’s dependency-heavy design currently aligns more with Amdahl's scenario, as the sequential parts (especially in the recompute phase) restrict the overall speedup. While larger, more parallel problems could benefit from Gustafson's law, our target user models typically do not scale in that direction. This understanding informs our decision to focus initially on the UX and base asynchronous operations rather than a full DAG overhaul.

> It is important to note here that synchronization performed as part of the parallel execution should be regarded as sequential computation. So, if the synchronization required by parallelism doesn't scale, speedup will also be limited.

Exactly. We are keenly aware that synchronization overhead can become a bottleneck. That’s why our strategy includes profiling and performance measurements at every iteration, ensuring that any additional synchronization is justified by tangible benefits.

> Given this, it is important to understand whether FreeCAD is in the Amdahl's law case or the Gustafson's law case. I believe we are more in the Amdahl's law case, because even with parallel processing, users aren't going to create larger, more parallel models. Additionally, because FreeCAD in general has many dependencies between objects, the execute phase of these dependent objects needs to run sequentially (see this account of the dependency checking for an explanation of the recompute phase).

We agree that users won’t change how they create models. That said, there are situations where we can run some parts in parallel when data dependencies allow it. Our main focus is reducing the delays caused by the unavoidable sequential steps that affect UI responsiveness. By moving heavy computations to background threads, we aim to make the interface smoother - even if the overall performance boost is somewhat limited.

> Because of this, the point I wanted to make is that the amount of speedup that we can expect from adding more multithreading is probably going to be limited in the general case.

We share this concern. Our plan is not to promise dramatic speedup across the board, but to ensure that the application remains responsive during heavy computations. This is critical to avoiding scenarios where the UI freezes, potentially leading to data loss if the application is force-closed.

> Additionally, if this project is granted, I think it is important to take the following things into account:

>   1. Make sure that synchronization does not add too much cost and -- more importantly -- that it scales as the amount of parallelism increases.

We anticipate minimal overhead since our focus is not on overhauling the entire DAG. Nonetheless, we will measure performance using a dataset to validate our approach with the help of the community.

>   2. Have a good understanding of whether the solution prevents unlocking more parallelism later, for example by making it very complex to improve the dependency-check mechanism (e.g. making it property-based instead of object-based).

We intentionally keep the scope narrow for this grant phase. Our goal is to lay the architectural groundwork for asynchronous operations without overhauling the entire dependency management system. This allows us to evaluate the impact and challenges first-hand before considering any extensive changes to the dependency check mechanism.

>   3. Make sure that the solution still scales if the sequential code is heavily improved. Synchronization costs can increase (relatively) as sequential code improves, and I think there are many cases in which the sequential code can be improved as well.

We acknowledge that improvements in sequential code can shift the balance between overhead and performance gains. This is another reason why we are keeping the scope narrow for this phase, ensuring that any enhancements in multithreading are carefully measured and justified.

>   4. Expect that the complexity of the code increases in general. Challenges that I foresee are issues with callbacks into Python, Python Global Interpreter Lock (GIL) problems, and calling Python code from string commands (for displaying it in the Python console).

The complexity introduced by multithreading and interactions with Python (especially regarding the GIL) is well noted. Our strategy is to implement these changes iteratively, allowing us to isolate and address such issues step by step. This staged approach minimizes risk and ensures that challenges with Python callbacks or GIL constraints are managed effectively.

> The second point I wanted to make is that, in my experience parallelizing code, it is very important to do it on the basis of measurements. So, I would recommend obtaining or creating a representative data set of FreeCAD models with low performance, covering general FreeCAD execution but also more favorable cases. A second step is to always be able to measure the performance increase or decrease, preferably in a way that can distinguish "useful work" from "overhead".

We fully agree with you. Measuring performance and comparing "useful work" against any "overhead" is important. We plan to collaborate with the community to build a comprehensive model library for testing. This data will guide our iterations and ensure that our multithreading improvements yield net positive benefits without introducing significant additional complexity.

@pieterhijma

Great, sounds really good. Indeed, as @kadet1090 and @hyarion mention, getting computation away from the UI thread would be very useful. I'm happy that all the above concerns are taken into account!

@yorikvanhavre
Member

This is a can of worms you guys want to open 😅 But it would IMHO be very welcome.

Indeed I would also love to see good metrics and analyses there, I think that can foster many other ideas outside the scope of this project.

@Reqrefusion
Member

I have a lot of question marks in my mind about big work on the multithreading architecture. I'm not sure what the overall benefit will be relative to the amount of work done. However, there is a very well-drawn framework here, a very wonderful scope. It's an effort that should definitely be supported.

@shaise
Collaborator

shaise commented Mar 8, 2025

First of all, I like this proposal; it is very well structured and indeed beneficial.
If I understand correctly (correct me if I'm wrong), this will not speed up computation, but will make the UI responsive while the computation is done in the background.
This is indeed beneficial, but I think we need some kind of visual feedback that something is being done in the background. Also, even though the UI keeps being responsive, we should find a way to block the user from continuing to add features to the model while the previous operation is still in progress.

@hyarion

hyarion commented Mar 11, 2025

> First of all, I like this proposal; it is very well structured and indeed beneficial. If I understand correctly (correct me if I'm wrong), this will not speed up computation, but will make the UI responsive while the computation is done in the background.

Any potential speedup is considered a bonus, as it isn't in the limited scope of this grant application. You are correct that the aim is mostly to make the UI more responsive while computations are done in the background. The work needed to accomplish this in a clean way will be a stepping stone to further work on multithreaded computations.

Our work might allow for speedups if existing code is changed to use async methods for operations. But as Yorik mentions, this work is a bit of a can of worms and we just aim to loosen the lid a bit, not open it fully, which is why we don't want to overpromise anything, as that would increase the scope and risks.

> This is indeed beneficial, but I think we need some kind of visual feedback that something is being done in the background. Also, even though the UI keeps being responsive, we should find a way to block the user from continuing to add features to the model while the previous operation is still in progress.

We don't want to define how this needs to be implemented too early, as working on this project might give us further insights on what could be done. But I agree with you, allowing full interaction without any feedback would be bad.

The MVP would probably be to open a modal that can be canceled (if OCCT 7.6.0+ is used). Further work in this project might be to lock all feature creation as you suggest, or to lock only features/objects based on the DAG. But we don't completely know at this stage what would be (and feel) best, which is why we don't want to lock ourselves into one option.
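A cancelable background task of this kind, with the modal's Cancel button setting a flag that the worker checks between chunks, could be sketched as follows. The names are purely illustrative; a real implementation would hook into OCCT's progress/abort mechanism:

```python
import threading

cancel = threading.Event()
progress = []

def long_recompute(steps: int) -> str:
    for step in range(steps):
        if cancel.is_set():        # the modal's Cancel button would set this
            return "cancelled"
        progress.append(step)      # one chunk of real work per iteration
    return "done"

cancel.set()                       # simulate the user cancelling immediately
assert long_recompute(1000) == "cancelled"

cancel.clear()
assert long_recompute(3) == "done"
print(progress)  # [0, 1, 2]
```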

We are actively discussing other projects with the DWG, CWG, and other developers. This is something we believe is important especially on bigger projects like this.


Here are two screencasts from @jes which show a PoC implementation of how this could look, with a modal showing progress and with progress in the status bar:

Expand for video of modal progress bar 🎥

Screencast.from.2025-03-01.09-56-51.mp4

Expand for video of progress bar in status bar 🎥

Screencast.from.2025-03-01.12-02-54.mp4

@grd

grd commented Mar 14, 2025

This is indeed a very big can of worms.

I have two questions:

  1. How many OCCT features are involved?
  2. This question is probably very dumb since I am not fluent in C++, but what about concurrency? Is it possible to introduce concurrency? What would it bring, and what are the downsides?

Overall, I like the proposal.

@chennes chennes added voting in progress The grant is currently being voted on by FPA members and removed under committee review Currently being reviewed by the FPA Grant Review Committee labels Mar 15, 2025
@chennes chennes added funded The FPA voted to fund this proposal and removed voting in progress The grant is currently being voted on by FPA members labels Mar 31, 2025
@chennes
Member

chennes commented Mar 31, 2025

The FPA's vote on the Quarter 1 2025 grant proposals is complete, and this grant was selected for funding. Congratulations, @tritao and @hyarion.

Selected comments from reviewers:

  • "Very good and useful proposal; it probably won't reach a usable state, but it is definitely useful to fund some research in that area."
  • "A very nice to have feature even if it's only for UI responsiveness. I do have concerns about it being risky where many parts of the code might rely on operations being done in sequence. However the compensation asked for is very low, and perhaps it is worth the try."
  • "This proposal certainly has merit and would be a boon to FreeCAD. I am skeptical if it can be accomplished in the 3 months defined in the proposal. I agree with others that clear metrics of the current state as a baseline is a must."

In general reviewers supported the FPA funding this important work, though there was some skepticism that it can be accomplished in the timeframe specified.

@ickby

ickby commented Apr 2, 2025

Please consider also the developer experience: it must stay possible to develop functionality in FreeCAD without considering multithreading. FreeCAD is not only OCC and Python; it also integrates many other libraries in its core functionality, and those libraries are often not thread-safe in general, e.g. VTK for FEM postprocessing. Changing the FreeCAD architecture to have thread safety as a requirement is way too limiting for the choice of external libraries, as well as too heavy on the effort required to implement new features.

@KeithSloan

Qt provides robust support for inter-process communication (IPC) and shared memory using classes like QSharedMemory, QSystemSemaphore, and QEvent, enabling applications to exchange data and synchronize operations across different processes.

I hope that in the future FreeCAD will support IPC facilities, e.g. when communicating with other applications. I just hope that any change to a multithreading architecture will still allow IPC facilities to be implemented.

Here's a breakdown of how Qt facilitates shared message passing and related concepts:

  1. Shared Memory with QSharedMemory:
    Purpose:
    QSharedMemory allows multiple processes to access the same memory segment, enabling efficient data sharing.
    Mechanism:
    It provides a cross-platform interface to the operating system's shared memory implementation.
    Usage:
    Applications can create, attach to, and detach from shared memory segments using QSharedMemory.
    Important Note:
    Shared memory segments are located at potentially different addresses in each process's memory space, so applications must share only position-independent data (e.g., primitive types or arrays of such types).
  2. Synchronization with QSystemSemaphore:
    Purpose:
    QSystemSemaphore enables synchronization between processes, ensuring that access to shared resources is properly controlled.
    Mechanism:
    It provides a mechanism for processes to wait on or signal events, preventing race conditions and ensuring data consistency.
    Usage:
    Semaphores can be used to control access to shared memory segments or other resources, ensuring that only one process accesses a resource at a time.
  3. Event Passing with QEvent:
    Purpose:
    Custom QEvent subclasses allow decoupled, event-driven communication between objects and threads.
    Mechanism:
    Applications can create custom event types by subclassing QEvent and post them with QCoreApplication::postEvent(); note that this delivers events within a single process (including across its threads), not between processes.
    Usage:
    The receiving object can override customEvent() to handle the custom event; for true inter-process messaging, Qt offers mechanisms such as QLocalSocket/QLocalServer or D-Bus.
  4. Other IPC Mechanisms:
    Signals and Slots:
    While primarily used for inter-object communication within a single process, Qt's signal and slot mechanism can be extended to facilitate IPC by using a shared object as a message bus.
    Memory-Mapped Files:
    QFile can be used to create memory-mapped files, which are another way to share data between processes.
  5. Examples and Use Cases:
    Shared Data:
    Sharing data between a server and multiple clients, or between different components of a multi-process application.
    Synchronization:
    Coordinating the actions of multiple processes, such as ensuring that a file is only accessed by one process at a time.
    Event-Driven Communication:
    Sending commands or notifications between processes, such as a notification that a file has been updated.
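For readers more at home in Python, the QSharedMemory idea, including the position-independent-data caveat from point 1, has a close stdlib analog in `multiprocessing.shared_memory`. This is a sketch of the analog, not of Qt itself:

```python
from multiprocessing import shared_memory

# Create a 16-byte segment; another process could attach to it by name.
shm = shared_memory.SharedMemory(create=True, size=16)
try:
    shm.buf[:5] = b"hello"                              # "writer" side
    reader = shared_memory.SharedMemory(name=shm.name)  # "reader" attaches by name
    data = bytes(reader.buf[:5])                        # share only raw, position-independent bytes
    reader.close()
finally:
    shm.close()
    shm.unlink()                                        # free the segment

print(data)  # b'hello'
```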

@tritao
Author

tritao commented Apr 9, 2025

> Qt provides robust support for inter-process communication (IPC) and shared memory using classes like QSharedMemory, QSystemSemaphore, and QEvent, enabling applications to exchange data and synchronize operations across different processes.
>
> I hope that in the future FreeCAD will support IPC facilities i.e. when communicating with other applications. Just hope that any change to a multi threading architecture will still allow IPC facilities to be implemented.

I don't think this work will impact the ability to implement IPC; if anything, it should make it easier due to the async architecture, depending on the kind of IPC mechanisms we are talking about.

I think @mnesarco has been doing some work in this area recently by the way.

@KeithSloan

KeithSloan commented Apr 10, 2025

The sort of thing I am thinking of is a workbench that sends data via scp, along with a request to a message queue like Apache ActiveMQ (maybe one of three queues: short | medium | long). A remote server running analysis software works away at the queue, and when a job completes it would be good to be able to signal back to the FreeCAD machine and have a process there raise a Qt semaphore that the workbench code could handle to update the workbench info.
This would require the workbench to fork a process that listens for the Qt semaphore.

The advantage of a semaphore is that it avoids the application having to keep polling to see if the remote server has completed the work.
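The blocking-wait-instead-of-polling idea can be sketched with a plain semaphore. Threading stands in here for the cross-process case, and all names are illustrative:

```python
import threading

job_done = threading.Semaphore(0)   # starts at 0, so acquire() blocks
updates = []

def listener() -> None:
    job_done.acquire()              # blocks here; no polling loop needed
    updates.append("workbench info refreshed")

t = threading.Thread(target=listener)
t.start()

job_done.release()                  # the "remote job finished" signal arrives
t.join(timeout=5)
print(updates)  # ['workbench info refreshed']
```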

I started some work on this but it is currently in a private repo.

@mnesarco

mnesarco commented Apr 10, 2025

@KeithSloan I am already working on a workbench using the TCP stack and event queues to have two-way async communication with external tools. I see a lot of overlap here. Having multiple workbenches doing the same thing is not a problem, but it could be a waste of resources for one of us.

@KeithSloan

"I see a lot of overlap here. Having multiple workbenches doing the same is not a problem but it could be a waste of resources for one of us."

Fully agree

tritao added a commit to tritao/FreeCAD that referenced this issue May 12, 2025
Introduce optional asynchronous recomputation of documents and features
to keep the UI responsive during heavy operations. When enabled,
recompute requests are processed by a background worker thread, allowing
the main GUI thread to continue rendering and handling user input.

Errors such as dependency cycles are reported back on the UI thread via
callbacks, and the classic synchronous recompute path remains available
when the feature is turned off.

This adds the base infrastructure, which will be used by the following PRs.

Background worker:

On startup, Application spawns a `_recomputeThread` that waits on a
`std::condition_variable`.

Requests are enqueued via `queueRecomputeRequest()`, protected by a
mutex, and the thread cleanly shuts down in the destructor by signaling
`_stopRecomputeThread`.

Request/Result types:

RecomputeRequest holds pointers to a `Document` or `DocumentObject`, a
recursion flag, and a callback.

`RecomputeResult` captures success or exception state.

Preference toggle:

Added "Enable async document recomputation" option to
`DlgSettingsDocument.ui` (Document preferences), persisted under `User
parameter:BaseApp/Preferences/Document`.

`isAsyncRecomputeEnabled()` reads this flag to choose async vs. sync.

Integration points:

In `DocumentPyImp.cpp`, Python's `recompute()` dispatches to the worker
when async is enabled.

`StdCmdRefresh` now enqueues a RecomputeRequest with a UI‐thread
callback that shows the dependency‐cycle warning if needed.

`ViewProviderTransformed::recomputeFeature()` similarly defers to the
worker or falls back to direct `recomputeFeature()`.

Deferred signaling:

Document gains `queueRecomputedObject()` and `processPendingSignals()`
to buffer and later emit recompute signals for individual objects.

This enhancement ensures long recompute operations no longer block the
interface, improving user experience while retaining full backwards
compatibility.

This work is done as part of an FPA grant:
FreeCAD/FPA-grant-proposals#36
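The commit describes the worker in C++ (`_recomputeThread`, a mutex, a `std::condition_variable`). For illustration, the same shape in Python, with hypothetical names loosely mirroring `queueRecomputeRequest()` and `RecomputeResult`; this is a sketch of the pattern, not the actual FreeCAD code:

```python
import threading
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class RecomputeResult:
    ok: bool
    error: Optional[str] = None

class RecomputeWorker:
    """Single background thread draining a queue of recompute requests."""

    def __init__(self) -> None:
        self._cv = threading.Condition()   # pairs a mutex with wait/notify
        self._queue: list = []
        self._stop = False
        self._thread = threading.Thread(target=self._run, daemon=True)
        self._thread.start()

    def queue_recompute_request(self, fn: Callable, callback: Callable) -> None:
        with self._cv:
            self._queue.append((fn, callback))
            self._cv.notify()              # wake the worker

    def shutdown(self) -> None:
        with self._cv:
            self._stop = True
            self._cv.notify()
        self._thread.join()

    def _run(self) -> None:
        while True:
            with self._cv:
                while not self._queue and not self._stop:
                    self._cv.wait()        # like waiting on the condition_variable
                if self._stop and not self._queue:
                    return
                fn, callback = self._queue.pop(0)
            try:                           # run the request outside the lock
                fn()
                callback(RecomputeResult(ok=True))
            except Exception as exc:       # e.g. a dependency cycle
                callback(RecomputeResult(ok=False, error=str(exc)))

results = []
worker = RecomputeWorker()
worker.queue_recompute_request(lambda: None, results.append)
worker.shutdown()
assert results and results[0].ok
```

In the real C++ version, the callback marshals errors such as dependency cycles back to the UI thread rather than invoking them on the worker.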
tritao added a commit to tritao/FreeCAD that referenced this issue May 13, 2025
tritao added a commit to tritao/FreeCAD that referenced this issue May 14, 2025
tritao added a commit to tritao/FreeCAD that referenced this issue May 14, 2025
tritao added a commit to tritao/FreeCAD that referenced this issue May 14, 2025
tritao added a commit to tritao/FreeCAD that referenced this issue May 14, 2025
tritao added a commit to tritao/FreeCAD that referenced this issue May 14, 2025
tritao added a commit to tritao/FreeCAD that referenced this issue May 16, 2025
tritao added a commit to tritao/FreeCAD that referenced this issue May 17, 2025

tritao commented May 22, 2025

A little update: work has been ongoing, as can be seen in the referenced PRs.

A FEP (FreeCAD Enhancement Proposal) has been published at FreeCAD/FreeCAD-Enhancement-Proposals#14 and is currently under further discussion.

tritao added a commit to tritao/FreeCAD that referenced this issue May 26, 2025