allow interactive worldspace uis #14523
Conversation
What are the criteria for adding features and examples? I can't help but feel we are bloating the examples past the point of usefulness, and relying on them instead of documenting features.
On its own, `ManualCursorPosition` doesn't explain how it should be used, or motivate why it needs to be first party.
World space UI is generally useful, but this addition seems like it can't stand on its own without a lot of supporting code in an example.
crates/bevy_ui/src/focus.rs (Outdated)

```rust
/// manual control of interactivity for world-space uis, or custom cursor position control for window-based uis.
/// For optimal latency the contents should be updated in [`bevy_app::PreUpdate`], before [`crate::UiSystem::Focus`] and after [`bevy_input::InputSystem`].
#[derive(Component, Default)]
pub struct ManualCursorPosition(pub Option<Vec2>);
```
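As a rough illustration of how a user might feed this component each frame, here is a minimal sketch of converting a raycast hit's UV on a world-space UI quad into a cursor position in the render target's pixel space. This is plain Rust with no Bevy types, and `uv_to_cursor_position` is a hypothetical helper, not part of this PR:

```rust
// Hypothetical helper: convert a UV hit on a world-space UI quad into a
// cursor position in the render target's pixel coordinates (top-left origin).
fn uv_to_cursor_position(uv: (f32, f32), target_size: (f32, f32)) -> Option<(f32, f32)> {
    let (u, v) = uv;
    // reject hits that fall outside the quad
    if !(0.0..=1.0).contains(&u) || !(0.0..=1.0).contains(&v) {
        return None;
    }
    // UI cursor positions use a top-left origin; assuming the quad's UVs
    // also have v = 0 at the top, scale directly into pixel space
    Some((u * target_size.0, v * target_size.1))
}
```

A PreUpdate system would then write this value into `ManualCursorPosition` on the camera rendering to that texture, before `UiSystem::Focus` runs.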
I don't really love adding more complexity to the focus system, especially if we are in the middle of upstreaming mod_picking, which can handle this case more generally.
I think the complexity is minimal, and I suspect GPU picking would be unable to return the UV coords of an intersection, which is required for determining the cursor position. It also seems like it would be inefficient for me when I have collision infrastructure already.
edit: I guess GPU picking may be able to resolve the triangle? Then one could use the mesh data to extract UVs, which is the same thing I'm doing with my collision lib...
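If the hit triangle can be resolved, recovering the UV from mesh data amounts to barycentric interpolation of the three vertex UVs. A minimal sketch in plain Rust (the helper name is hypothetical):

```rust
// Given the UVs of a triangle's three vertices and the barycentric
// coordinates of a hit point, interpolate to recover the hit UV.
fn interpolate_uv(uvs: [(f32, f32); 3], bary: (f32, f32, f32)) -> (f32, f32) {
    let (a, b, c) = bary;
    (
        a * uvs[0].0 + b * uvs[1].0 + c * uvs[2].0,
        a * uvs[0].1 + b * uvs[1].1 + c * uvs[2].1,
    )
}
```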
For the mod_picking stuff being upstreamed, there are a few ways to tackle this, but they are agnostic to what backend you are using to do your hit tests.
The input mapping is separate from the actual hit testing. You could chain these together (hit test against UI quads, pipe that as input into the UI picking backend), or you could take the simpler approach (without a frame of lag) that this PR takes, which is to just compute the pointer position on any world space UI quads in the input system, and let the picking backends consume that.
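The "hit test against UI quads" step mentioned above can be as simple as a ray/plane intersection in the quad's local space. A sketch in plain Rust with no engine types, under the assumption of an axis-aligned quad centered at the origin in the z = 0 plane:

```rust
// A ray in the quad's local space.
struct Ray {
    origin: [f32; 3],
    dir: [f32; 3],
}

// Intersect the ray with a quad of the given half-extents in the z = 0
// plane, returning the hit's UV coordinates (v = 0 at the top edge).
fn hit_quad_uv(ray: &Ray, half_size: (f32, f32)) -> Option<(f32, f32)> {
    // ray parallel to the quad's plane: no hit
    if ray.dir[2].abs() < 1e-6 {
        return None;
    }
    let t = -ray.origin[2] / ray.dir[2];
    if t < 0.0 {
        return None; // quad is behind the ray origin
    }
    let x = ray.origin[0] + t * ray.dir[0];
    let y = ray.origin[1] + t * ray.dir[1];
    if x.abs() > half_size.0 || y.abs() > half_size.1 {
        return None; // hit point lies outside the quad
    }
    // map local coordinates to UVs
    Some((0.5 + x / (2.0 * half_size.0), 0.5 - y / (2.0 * half_size.1)))
}
```

The resulting UV would then be scaled by the render target size and handed to whichever system consumes pointer positions.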
Either way, `ManualCursorPosition` won't be needed.
Yeah, I was on the fence about the example; I'm happy to remove it. I added it because a game-ready solution would involve a collision library, which we don't have as first party. A simple code block on the struct seemed insufficient, so I felt like this was a reasonable compromise to demonstrate the intention without writing a whole game.
If it's not first party, then you have to duplicate/replace the whole focus system. Excluding the example, this PR adds about 20 lines of code, 15 of which fix an existing ambiguity wrt overlapping UIs on multiple cameras, and it opens up functionality that I personally need. If consensus is against it then I'll just continue to patch locally, but I do generally try to upstream things that I need and that are more broadly useful.
My preference would be, instead of making this a special case, to use this component as the only interface for getting inputs into the focus system. The default bevy_ui input plugin would drive this with the pointer position on the window, and for this use case, where the render target is not a window but a texture, you would need to supply it yourself - like you are doing in the example.
In other words, decouple the focus system from the input system. The input system sets the pointer position on the camera's render target, and the focus system just consumes that; it doesn't care how it got there. This also makes it possible to drive the UI with virtual pointers, and would be more in line with how things will work with mod_picking.
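The proposed decoupling can be sketched in miniature: the focus logic reads a per-camera pointer position and never asks where it came from. All names below are hypothetical illustrations in plain Rust, not actual bevy_ui API:

```rust
// Per-camera pointer position on the camera's render target, in pixels.
// Any input "driver" may write it; the focus logic only reads it.
#[derive(Default)]
struct CameraCursorPosition(Option<(f32, f32)>);

// Default driver: copies the OS window cursor onto the camera.
// A world-space UI setup would instead write a raycast-derived position.
fn window_input_driver(window_cursor: Option<(f32, f32)>, cam: &mut CameraCursorPosition) {
    cam.0 = window_cursor;
}

// Focus logic consumes the position without caring about its source:
// here, a simple point-in-rect test against a UI node's bounds.
fn focus(cam: &CameraCursorPosition, node_rect: ((f32, f32), (f32, f32))) -> bool {
    match cam.0 {
        Some((x, y)) => {
            let ((min_x, min_y), (max_x, max_y)) = node_rect;
            x >= min_x && x <= max_x && y >= min_y && y <= max_y
        }
        None => false,
    }
}
```

A virtual pointer would just be another writer of `CameraCursorPosition`; the focus side is unchanged.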
Agreed, and updated. Shall I drop the example as well?
The example is useful IMO: it's good to teach, and it'll be valuable during refactoring.
```rust
let cursor_position = maybe_window_ref
    .and_then(|w| windows.get(w.entity()).ok())
    .and_then(Window::cursor_position)
    .or_else(|| touches_input.first_pressed_position())
```
Just noting there's an assumption here that with touch devices there's only one window. Not sure that's guaranteed.
(there are also up to 12 pointers at a time)
I'm not clear why this was moved to waiting on author. It looks like I didn't get any notification at the time, and it stops anyone else from looking into it, which is a bit unfortunate.
It looks like your PR has been selected for a highlight in the next release blog post, but you didn't provide a release note. Please review the instructions for writing release notes, then expand or revise the content in the release notes directory to showcase your changes.
Agreed, and I didn't get a notification either. Please leave a comment when triaging these :)
Just wanted to drop by with an example I made that doesn't require any changes to bevy. It's about a 180 LOC diff to the existing render ui to world example: #19199, most of which is adding a missing feature to the raycasting. This existing PR is essentially duplicating functionality that already exists in bevy_picking.
Objective
allow use of the built-in focus/interaction systems for non-window-based uis
Solution
- a new `ManualCursorPosition` component that users can add to cameras to specify the cursor position in `ui_focus_system`
- sort the `UiStack` by camera order as well as z-order, so that blocking elements are not ambiguously resolved between cameras
- an `interactive_ui_texture` example
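The camera-order disambiguation boils down to a compound sort key. A minimal sketch in plain Rust; the tuple layout here is illustrative only, not the actual `UiStack` representation:

```rust
// Order UI nodes by camera order first, then by z within that camera.
// Sorting descending means iteration visits top-most nodes first, so the
// first hit under the cursor can unambiguously block nodes beneath it,
// even when cameras overlap.
fn sort_ui_nodes(nodes: &mut Vec<(i32 /* camera order */, u32 /* z index */)>) {
    nodes.sort_by(|a, b| b.cmp(a));
}
```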
exampleTesting
see example:
Recording.2024-07-29.181258.mp4