fix: ensure RunEvent::Exit is triggered on restart #12313
Conversation
…xit` event before exiting the application.
Package Changes Through 1c58985

There are 12 changes: tauri (minor), tauri-runtime (minor), tauri-runtime-wry (minor), tauri-utils (minor), tauri-cli (minor), @tauri-apps/cli (minor), @tauri-apps/api (minor), tauri-bundler (patch), tauri-build (minor), tauri-codegen (minor), tauri-macros (minor), tauri-plugin (minor).

Planned Package Versions: the following package releases are planned based on the context of changes in this pull request.

Add another change file through the GitHub UI by following this link. Read about change files or the docs at github.com/jbolda/covector
I've fixed the format error, sorry.
Any updates to this PR?
I'm waiting for review.
The CI failure is a corepack issue (nodejs/corepack#625), so it's not my fault and there's nothing I can do about it.
Maybe we can just store a …

Edit: actually, no, since we have marked it as a diverging function; maybe we can do #10171 (comment), by introducing …
Yes, I think we can add … I may make a separate PR for this new approach.
It should be fine; I think only the process plugin (the restart command) is affected among all the official plugins?
I searched for use cases, and the restart command in the process plugin and all of its use cases (even in JS) would also need migration.
This is now becoming a really complicated solution 😂 I had to push a change to make it not deadlock when restart() is called on the main thread.
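The main-thread hazard mentioned above can be sketched in plain Rust (the helper name and return values here are illustrative, not Tauri's actual API): blocking until the event loop exits is only safe from a thread other than the one driving the loop, because the loop thread cannot make progress while it is blocked on itself.

```rust
use std::thread::{self, ThreadId};

// Hypothetical sketch: decide whether it is safe to block waiting for the
// event loop to exit. If we are ON the event-loop thread, blocking would
// mean waiting on ourselves, i.e. a deadlock, so we must exit directly.
fn exit_strategy(event_loop_thread: ThreadId) -> &'static str {
    if thread::current().id() == event_loop_thread {
        "exit-directly" // never block the thread that runs the loop
    } else {
        "block-until-loop-exits" // safe: the loop can still make progress
    }
}

fn main() {
    let loop_id = thread::current().id();
    // called from the event-loop (main) thread:
    assert_eq!(exit_strategy(loop_id), "exit-directly");
    // called from a worker thread:
    let handle = thread::spawn(move || exit_strategy(loop_id));
    assert_eq!(handle.join().unwrap(), "block-until-loop-exits");
    println!("ok");
}
```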
// the value is set to true when the event loop has already exited
event_loop_exit_mutex: Mutex<bool>,
event_loop_exit_condvar: std::sync::Condvar,
I'm not super familiar with Condvar; is there an advantage in using this over e.g. a multi-producer multi-consumer channel?
AFAIK, an mpmc channel can only wake one thread at a time, but a condvar can wake multiple threads at once, so I used a condvar.
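A minimal standalone sketch of the Mutex&lt;bool&gt; + Condvar pattern from the snippet above (the function name is illustrative): a single notify_all wakes every thread waiting for the "event loop exited" flag, which is the property a single channel receive does not give you.

```rust
use std::sync::{Arc, Condvar, Mutex};
use std::thread;

// Spawn `n` threads that all wait on the same flag, flip the flag once,
// and wake them all with a single notify_all. Returns how many threads
// woke up and finished.
fn wake_all_waiters(n: usize) -> usize {
    let pair = Arc::new((Mutex::new(false), Condvar::new()));
    let handles: Vec<_> = (0..n)
        .map(|_| {
            let pair = Arc::clone(&pair);
            thread::spawn(move || {
                let (lock, cvar) = &*pair;
                let mut exited = lock.lock().unwrap();
                // guard against spurious wakeups by re-checking the flag
                while !*exited {
                    exited = cvar.wait(exited).unwrap();
                }
            })
        })
        .collect();
    let (lock, cvar) = &*pair;
    *lock.lock().unwrap() = true; // "the event loop has exited"
    cvar.notify_all(); // one call wakes every waiter
    handles.into_iter().map(|h| h.join().is_ok() as usize).sum()
}

fn main() {
    assert_eq!(wake_all_waiters(3), 3);
    println!("all 3 waiters woke");
}
```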
Force-pushed from 0b9faf6 to 055ffd7.
I don't have enough time to implement request_restart, so I'm keeping this PR as-is, but should we push forward with this complex solution instead of reimplementing it with request_restart?
I'm considering going forward with both - fix the restart() behavior but also deprecate it and implement request_restart.
I'm thinking if something like this would just work:

diff --git a/crates/tauri/src/app.rs b/crates/tauri/src/app.rs
index 728183d69..7813097ca 100644
--- a/crates/tauri/src/app.rs
+++ b/crates/tauri/src/app.rs
@@ -10,6 +10,7 @@ use crate::{
},
manager::{webview::UriSchemeProtocol, AppManager, Asset},
plugin::{Plugin, PluginStore},
+ process::restart,
resources::ResourceTable,
runtime::{
window::{WebviewEvent as RuntimeWebviewEvent, WindowEvent as RuntimeWindowEvent},
@@ -42,7 +43,7 @@ use std::{
borrow::Cow,
collections::HashMap,
fmt,
- sync::{mpsc::Sender, Arc, MutexGuard},
+ sync::{atomic, mpsc::Sender, Arc, MutexGuard},
};
use crate::{event::EventId, runtime::RuntimeHandle, Event, EventTarget};
@@ -477,6 +478,18 @@ impl<R: Runtime> AppHandle<R> {
crate::process::restart(&self.env());
}
+ /// Restarts the app by triggering [`RunEvent::ExitRequested`] with code [`RESTART_EXIT_CODE`] and [`RunEvent::Exit`].
+ pub fn request_restart(&self) {
+ self
+ .manager
+ .restart_on_exit
+ .store(true, atomic::Ordering::Relaxed);
+ if self.runtime_handle.request_exit(RESTART_EXIT_CODE).is_err() {
+ self.cleanup_before_exit();
+ crate::process::restart(&self.env());
+ }
+ }
+
/// Sets the activation policy for the application. It is set to `NSApplicationActivationPolicyRegular` by default.
///
/// # Examples
@@ -1084,6 +1097,9 @@ impl<R: Runtime> App<R> {
let event = on_event_loop_event(&app_handle, RuntimeRunEvent::Exit, &manager);
callback(&app_handle, event);
app_handle.cleanup_before_exit();
+ if self.manager.restart_on_exit.load(atomic::Ordering::Relaxed) {
+ restart(&self.env());
+ }
}
_ => {
let event = on_event_loop_event(&app_handle, event, &manager);
diff --git a/crates/tauri/src/manager/mod.rs b/crates/tauri/src/manager/mod.rs
index 882b141ca..ec6610864 100644
--- a/crates/tauri/src/manager/mod.rs
+++ b/crates/tauri/src/manager/mod.rs
@@ -6,7 +6,7 @@ use std::{
borrow::Cow,
collections::HashMap,
fmt,
- sync::{Arc, Mutex, MutexGuard},
+ sync::{atomic::AtomicBool, Arc, Mutex, MutexGuard},
};
use serde::Serialize;
@@ -221,6 +221,8 @@ pub struct AppManager<R: Runtime> {
pub(crate) invoke_key: String,
pub(crate) channel_interceptor: Option<ChannelInterceptor<R>>,
+
+ pub(crate) restart_on_exit: AtomicBool,
}
impl<R: Runtime> fmt::Debug for AppManager<R> {
@@ -322,6 +324,7 @@ impl<R: Runtime> AppManager<R> {
resources_table: Arc::default(),
invoke_key,
channel_interceptor,
+ restart_on_exit: AtomicBool::new(false),
}
}
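The restart_on_exit flow in the diff above can be modeled with a small standalone stand-in (the Manager type and string return values here are illustrative, not Tauri's real types): request_restart sets an AtomicBool, and the Exit branch of the event loop checks it after cleanup to decide between restarting and plainly exiting.

```rust
use std::sync::atomic::{AtomicBool, Ordering};

// Hypothetical stand-in for AppManager: request_restart only flips a flag;
// the actual restart happens later, in the Exit branch of the event loop.
struct Manager {
    restart_on_exit: AtomicBool,
}

impl Manager {
    fn request_restart(&self) {
        // in the real patch this is followed by
        // runtime_handle.request_exit(RESTART_EXIT_CODE)
        self.restart_on_exit.store(true, Ordering::Relaxed);
    }

    // mirrors the RuntimeRunEvent::Exit branch: after cleanup_before_exit,
    // restart only if the flag was set beforehand
    fn on_exit(&self) -> &'static str {
        if self.restart_on_exit.load(Ordering::Relaxed) {
            "restart"
        } else {
            "exit"
        }
    }
}

fn main() {
    let m = Manager { restart_on_exit: AtomicBool::new(false) };
    assert_eq!(m.on_exit(), "exit"); // a normal exit does not restart
    m.request_restart();
    assert_eq!(m.on_exit(), "restart"); // the flag turns exit into restart
    println!("flag flow ok");
}
```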
That should work - I'm fixing some other bugs but I can test this later.
leads to a deadlock currently
@lucasfernog the deadlock around the event loop seems to be a recurring theme now, could you tell where the deadlock happens? I only know about the plugin lock, which can be solved by not having the callbacks be FnMut so we don't need a lock to call them (a breaking change, so it needs to wait for v3). Also, I think this main-thread exit implementation should probably be moved to tauri-runtime-wry? Using UnsafeCell here feels so wrong to me.
In this case we have a deadlock because we're running tauri/crates/tauri-runtime-wry/src/lib.rs line 2377 (at 3fb8d7c).
I'll see if there's a way to rewrite it in a better way, but I'm not super confident.
I would probably need to use the same mechanism to store the event loop callback and use it in a thread-safe struct (while ensuring it only gets called on the main thread).
Got it now, so the main problem is that the function is blocking the event loop while waiting for the event loop to move on and receive the next message.
I wonder why this is, though; if we're not blocking on the main thread, this should be fine?
Yes, you're right.
I see, so maybe we should update the comments to include where it would deadlock if we did it on the main thread? I was quite confused by it (still somewhat confused right now 😂) And just one more thing, since the …
Yeah, that's something I was thinking about too... maybe we should crash instead (so it's neither a deadlock nor a crazy UnsafeCell usage).
In this case I'm leaning towards just leaving this function as-is and deprecating it, then introducing … Since this function has never triggered the …
But that's not the only problem this PR tries to solve... it also fixes the race condition for restart/exit.
Wait, what race condition?
I see, but I still feel like we could just remove …
Fixes #12310
Fixes #11392
Fixes tauri-apps/plugins-workspace#1692
Fixes tauri-apps/plugins-workspace#2256