Concurrency control in multi-user environments #1069
Comments
Concurrency issues are handled by the database. I wrote an article about the different kinds of concurrency problems and how PostgreSQL handles them: https://begriffs.com/posts/2017-08-01-practical-guide-sql-isolation.html. See which of those issues might apply in your situation.

PostgREST uses the "Read Committed" level of transaction isolation. Your question makes me think maybe this level should be configurable from the PostgREST server configuration file.

Another thing is that we don't yet have a way to specify that multiple client requests happen in a single transaction. Some ideas for the interface are proposed in #286. Currently, to execute several queries in one transaction you must create a stored procedure and have the client execute it via the RPC interface.
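To make the stored-procedure route concrete, here is a minimal sketch; the `accounts` table and `transfer` function are made-up names, not anything PostgREST ships. A function called through the RPC endpoint runs all of its statements inside the single transaction PostgREST opens for that request:

```sql
-- Hypothetical example: both UPDATEs run (and commit or roll back) together,
-- inside the one transaction PostgREST wraps around the /rpc/transfer request.
CREATE FUNCTION transfer(from_id int, to_id int, amount numeric)
RETURNS void LANGUAGE plpgsql AS $$
BEGIN
  UPDATE accounts SET balance = balance - amount WHERE id = from_id;
  UPDATE accounts SET balance = balance + amount WHERE id = to_id;
END;
$$;
```

A client would then call it with `POST /rpc/transfer` and a JSON body carrying `from_id`, `to_id` and `amount`.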
One more note: if we turn the isolation level higher we can run into serialization failures that clients would have to retry.
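For context, this is plain PostgreSQL behaviour rather than anything PostgREST-specific: under REPEATABLE READ or SERIALIZABLE, one of two conflicting transactions is aborted with SQLSTATE 40001 (`serialization_failure`) and the caller is expected to retry it. The `tbl` table below is just a placeholder:

```sql
-- Two transactions like this, run concurrently against the same row, can end with
-- one of them failing with SQLSTATE 40001; the client then has to retry it.
BEGIN ISOLATION LEVEL SERIALIZABLE;
UPDATE tbl SET value = value + 1 WHERE id = 1;
COMMIT;
```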
@begriffs HTTP ETags and conditional requests would be a good way to expose retryable concurrent update failures to clients. See the `If-Match` and `If-Unmodified-Since` conditional request headers.
My concern really wasn't about database transactions (although it would be neat if we could control them for PostgREST, too!) - it is usually strongly discouraged anyway to hold a database connection with a transaction open during the time a user (a human being in front of the computer :-)) takes to edit fields of some entity (and maybe goes away for any amount of time - maybe even forever).

@jackfirth I don't think exponential backoff (i.e., just retrying the very same request later) applies to the case where an update is rejected because the entity has been updated by somebody else in the meantime (which is the meaning of "If-Unmodified-Since"). Something that is NOT unmodified since time X will never again be unmodified since the same time X, I think.
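For what it's worth, the database-side counterpart of `If-Unmodified-Since`/`If-Match` is usually optimistic concurrency with a version column. A sketch, assuming a hypothetical `documents` table (none of these names come from PostgREST):

```sql
-- The client reads the row together with its version, edits locally, and sends the
-- version back with the update. If someone else changed the row in the meantime,
-- the WHERE clause matches zero rows and the stale update changes nothing.
ALTER TABLE documents ADD COLUMN version integer NOT NULL DEFAULT 1;

UPDATE documents
SET    body = 'new text', version = version + 1
WHERE  id = 42
  AND  version = 7;   -- the version this client originally read
```

Through PostgREST the same idea maps to filtering the update on both columns, e.g. `PATCH /documents?id=eq.42&version=eq.7`; how a zero-row update is reported to the client (status code, empty representation) depends on the PostgREST version, so check that against your deployment.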
Raising +1 for making the isolation level configurable.
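Until that exists in the PostgREST configuration, plain PostgreSQL can already set a default isolation level on the role PostgREST logs in as. The role name `authenticator` below is only the conventional choice, and whether this default survives PostgREST's own transaction setup is worth verifying against your version:

```sql
-- Every new transaction started by sessions of this role defaults to REPEATABLE READ,
-- unless the client (here: PostgREST) requests an explicit isolation level itself.
ALTER ROLE authenticator SET default_transaction_isolation TO 'repeatable read';
```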
Once #1176 is done, this could be solved with an …
So I've been checking the WebDAV RFC and it has the LOCK/UNLOCK HTTP methods, which could solve this use case. These can be mapped to PostgreSQL advisory locks, which can be used for distributed locking (this is also related to this idea on #286). No need to leave a transaction open in this case.

So basically, we could do:

```
LOCK /tbl?id=eq.1

200 OK
Lock-Token: <token>
```

The user can then store the Lock-Token locally and then use it for:

```
UNLOCK /tbl?id=eq.1
Lock-Token: <token>

200 OK
```

Whenever other users try to PATCH/DELETE the same resource, they'd get an error:

```
PATCH /tbl?id=eq.1

423 LOCKED
```

Seems this should work in theory. We'd have to somehow restrict PATCH/DELETE to only be based on PK equality, similar to how PUT is restricted right now; otherwise we wouldn't be able to prevent a request with an arbitrary filter from touching locked rows.

Since releasing session-level advisory locks is up to the application, we'd need a way to do this. We would definitely need a timeout. One lazy idea would be to leave it to the pool idle timeout, but it's not guaranteed that a connection would be idle. A connection lifetime (nikita-volkov/hasql-pool#28) could also be too long. So we would need to maintain some state in PostgREST, namely the advisory lock ids, and use another thread that periodically frees them.
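For reference, the PostgreSQL primitives such a LOCK/UNLOCK mapping would rest on are the advisory-lock functions; a bare sketch, where deriving the two lock keys from the table name and the row id is just one made-up convention:

```sql
-- Try to take a session-level advisory lock for row id 1 of tbl without blocking:
-- returns true if acquired, false if another session already holds it (-> 423 Locked).
SELECT pg_try_advisory_lock(hashtext('tbl'), 1);

-- Release it again when the client sends UNLOCK (or when a timeout gives up on it).
SELECT pg_advisory_unlock(hashtext('tbl'), 1);
```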
Sorry for resurrecting this old ticket; I was just reminded of it by a discussion over on the Supabase client repo.

Steve, for all the reasons you mention, I do not think that using advisory locks is a good idea. In addition, session-level advisory locks require holding the session open. This can be a problem in load-balanced production environments where multiple instances of PostgREST access the same database: if one of the instances is holding an advisory lock but then crashes for whatever reason, its session may get cleaned up by Postgres and the other instances will see its lock as released, even though the application that sent the … ETag and …

Edit: It does make assumptions on the order in which rows are returned, however, which can be solved on the PostgREST level, but may require some thought and trickery.
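On the ordering point: if an ETag is derived from the matched rows on the database side, pinning the aggregation to an explicit ORDER BY removes the dependency on whatever order the rows happen to come back in. A sketch, with a made-up `tbl` and helper name:

```sql
-- Hypothetical helper: a digest over the rows a filter matches, ordered by primary key
-- so the result does not depend on physical row order.
CREATE FUNCTION tbl_etag(_id int) RETURNS text
LANGUAGE sql STABLE AS $$
  SELECT md5(coalesce(string_agg(t::text, ',' ORDER BY t.id), ''))
  FROM tbl t
  WHERE t.id = _id;
$$;
```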
Dear PostgREST community
Is there an "idiomatic" way (i.e., in keeping with PostgREST principles) to handle concurrent updates in a multi-user environment? That is, with PostgREST as the "backend" and a frontend (say, a web app) for users to retrieve, edit, and then store changes to entities (rows in tables): how would you handle two users attempting to edit the very same entity (row) concurrently, while maintaining integrity?
There are some suggestions over here: https://blog.4psa.com/rest-best-practices-managing-concurrent-updates/ - how would this translate to PostgREST?
How do you handle this in real-world applications?
Sebastian