MUC messages duplicated as DMs #137
Using additional logging (https://github.com/Fishbowler/openfire-monitoring-plugin/tree/137_more-logs) and some database snooping, I found that the extra message has no stanza stored in ofMessageArchive, so one gets recreated (although seemingly before it reaches the line of code that should do this here, so that code isn't used). user3 should get a stanza about a message from user1 that looks like this:
But instead gets one that looks like this:
The stanza should be reconstructed more faithfully, which would prevent this issue. There are 2 separate issues to be logged:
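As an aside, here is a minimal sketch of why a rebuild from bare archive columns tends to read as a one-on-one message rather than a MUC broadcast. It uses the org.xmpp.packet classes that Openfire plugins typically work with, but it is not the plugin's actual reconstruction code, and the JIDs, nicknames and method names are made up for illustration:

```java
import org.xmpp.packet.JID;
import org.xmpp.packet.Message;

public class StanzaReconstructionSketch {

    // Roughly the shape of the original MUC broadcast: 'from' is the room's
    // occupant JID (room@service/nick) and the type is 'groupchat'.
    static Message originalShape(JID roomJid, String senderNick, JID recipient, String body) {
        Message message = new Message();
        message.setFrom(new JID(roomJid.getNode(), roomJid.getDomain(), senderNick));
        message.setTo(recipient);
        message.setType(Message.Type.groupchat);
        message.setBody(body);
        return message;
    }

    // A naive rebuild from archived columns (sender JID + body): the sender's
    // real JID and a non-groupchat type make clients render this as a
    // one-on-one message instead of a room message.
    static Message naiveRebuild(JID senderRealJid, JID recipient, String body) {
        Message message = new Message();
        message.setFrom(senderRealJid);
        message.setTo(recipient);
        message.setType(Message.Type.chat);
        message.setBody(body);
        return message;
    }
}
```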
I've replaced the usage of 'DM' with 'one-on-one messages' in the above comments, as in XMPP context, 'DM' can cause some confusion. DM is typically reserved in scenarios where private messages are sent 'through' a MUC room (https://xmpp.org/extensions/xep-0045.html#privatemessage). I've also briefly discussed this with the author of Conversations, to see if they're interested in changing behavior on their end. Conversations currently ignores the 'to' address, which is why these messages show.
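To illustrate the terminology point, a small hedged sketch (again using org.xmpp.packet types; all JIDs here are invented) of the addressing difference between a MUC private message per XEP-0045 and a plain one-on-one message:

```java
import org.xmpp.packet.JID;
import org.xmpp.packet.Message;

public class AddressingSketch {

    // A MUC private message (XEP-0045 'privatemessage'): type 'chat', but
    // addressed to the *occupant* JID of the recipient inside the room.
    static Message mucPrivateMessage(String body) {
        Message message = new Message();
        message.setTo(new JID("mucOne", "conference.example.org", "user1"));
        message.setType(Message.Type.chat);
        message.setBody(body);
        return message;
    }

    // A plain one-on-one message: type 'chat', addressed to the user's own JID.
    static Message oneOnOneMessage(String body) {
        Message message = new Message();
        message.setTo(new JID("user1", "example.org", null));
        message.setType(Message.Type.chat);
        message.setBody(body);
        return message;
    }
}
```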
I'm assuming that messages sent to a MUC room by users that are connected to a cluster node other than the senior node do not get logged. The content of the table shown below (which probably renders badly, apologies) seems to corroborate that, at least for the first time I tried the scenario. The message content describes what cluster node the sender was connected to.
I'm not having luck reproducing this. In a completely new environment, I've created a scenario in which I'm connecting user1 on server xmpp1 (which happens to be the senior cluster member) and sharing one message in room mucOne. I'm also connecting user2 on server xmpp2 (junior cluster member) and sharing another message in room mucOne. Next, I'm using the Smack debugger that's part of Spark to send off a MAM query, using user3 (I've done that twice, connecting to each cluster node; a rough programmatic equivalent of that query is sketched after this comment). The responses seemed valid to me. Next, I joined the room (using user3 again), also twice: once on each cluster node. The chat history that gets sent seems fine. Database content prior to starting to interact with the message archive:
Query, using user3 connected to xmpp1 / senior cluster node:
Responses:
Query, using user3 connected to xmpp2 / junior cluster node:
Responses:
Joining (through the UI) mucOne, using user3 connected to xmpp2 / junior cluster node:
Responses:
Joining (through the UI) mucOne, using user3 connected to xmpp1 / senior cluster node:
Responses:
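For reference, and purely as an assumption about how the manual query could be expressed in code (this assumes Smack 4.4's MamManager API and uses placeholder JIDs for this environment), a rough programmatic equivalent of the MAM query sent through Spark's Smack debugger might look like this:

```java
import java.util.List;

import org.jivesoftware.smack.XMPPConnection;
import org.jivesoftware.smack.packet.Message;
import org.jivesoftware.smackx.mam.MamManager;
import org.jxmpp.jid.EntityBareJid;
import org.jxmpp.jid.impl.JidCreate;

public class MamQuerySketch {

    // Ask the room's archive for (up to) a page of messages, while connected as user3.
    static List<Message> queryRoomArchive(XMPPConnection connection) throws Exception {
        EntityBareJid room = JidCreate.entityBareFrom("mucOne@conference.example.org");

        // Address the query at the room (XEP-0313 against the MUC service's archive).
        MamManager mamManager = MamManager.getInstanceFor(connection, room);
        MamManager.MamQuery query = mamManager.queryArchive(
                MamManager.MamQueryArgs.builder()
                        .setResultPageSizeTo(50)
                        .build());
        return query.getMessages();
    }
}
```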
I was using the same MUC both times, but doubt that's the deciding factor. I wonder if I need some traffic interception? Perhaps the request sent by the Conversations client differs slightly from yours? 🤷
I did that too, but wrote it down incorrectly. I have now modified my comment above.
Let's try that. The xmldebugger plugin should help us capture that traffic.
Here's a log using the xmldebugger.
This is a snippet of log from an Openfire server running the modified Monitoring plugin with additional logging.
We have found various factors contributing to this. The primary cause seems to be that the stanza that triggers a MUC event on a junior cluster member does not get transferred to the senior cluster member (which takes responsibility for processing the event).
As the senior node is responsible for processing cluster-related events, that node needs to have all available data to operate on. By making the original stanza available, it can be persisted in the database. This in turn prevents an imprecise reconstruction from being used when the corresponding message archive is queried at some later point.
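A hedged sketch of that general idea follows. It is not the actual fix; the task class name and the persistence hook are invented, and Openfire's ClusterTask/ExternalizableUtil types are assumed to be available to the plugin:

```java
import java.io.IOException;
import java.io.ObjectInput;
import java.io.ObjectOutput;

import org.dom4j.DocumentException;
import org.dom4j.DocumentHelper;
import org.jivesoftware.util.cache.ClusterTask;
import org.jivesoftware.util.cache.ExternalizableUtil;
import org.xmpp.packet.Message;

// Ship the original stanza's XML along with the cluster task, so the senior
// member can persist it verbatim instead of reconstructing it later.
public class ArchiveMucMessageTask implements ClusterTask<Void> {

    private String stanzaXml;

    public ArchiveMucMessageTask() {
        // No-arg constructor required for externalization.
    }

    public ArchiveMucMessageTask(Message originalStanza) {
        this.stanzaXml = originalStanza.toXML();
    }

    @Override
    public void run() {
        try {
            // On the senior member: rebuild the stanza from the shipped XML and persist it.
            Message original = new Message(DocumentHelper.parseText(stanzaXml).getRootElement());
            persistOriginalStanza(original); // hypothetical persistence hook
        } catch (DocumentException e) {
            throw new IllegalStateException("Could not parse shipped stanza", e);
        }
    }

    @Override
    public Void getResult() {
        return null;
    }

    @Override
    public void writeExternal(ObjectOutput out) throws IOException {
        ExternalizableUtil.getInstance().writeSafeUTF(out, stanzaXml);
    }

    @Override
    public void readExternal(ObjectInput in) throws IOException {
        stanzaXml = ExternalizableUtil.getInstance().readSafeUTF(in);
    }

    private void persistOriginalStanza(Message original) {
        // Placeholder: hand the stanza to whatever writes the ofMessageArchive rows.
    }
}
```

In this sketch, the junior member would construct the task with the original stanza and hand it to the senior member through whatever task-execution mechanism the cluster offers.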
Tested on Guus' branch, and can no longer reproduce the issue
…cluster
It is important to assign a message ID, which is used for ordering messages in a conversation, soon after the message arrives, as opposed to just before the message gets written to the database. In the latter scenario, the message ID values might no longer reflect the order of the messages in a conversation, as database writes are batched together for performance reasons. Using these batches won't affect the database-insertion order (as compared to the order of messages in the conversation) on a single Openfire server, but when running in a cluster, these batches have a good chance of messing up the order of things.
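The ordering concern can be sketched as follows. This is illustrative only, not the plugin's code; a clustered deployment would additionally need a cluster-wide sequence rather than the local counter used here:

```java
import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.atomic.AtomicLong;

// If the archive ID is taken from a counter at database-write time, batched writes
// can hand out IDs in an order that no longer matches the conversation. Assigning the
// ID the moment the message arrives keeps IDs aligned with arrival order, regardless
// of when the batch is flushed.
public class ArchiveQueueSketch {

    static final class QueuedMessage {
        final long messageId;
        final String stanzaXml;

        QueuedMessage(long messageId, String stanzaXml) {
            this.messageId = messageId;
            this.stanzaXml = stanzaXml;
        }
    }

    private final AtomicLong sequence = new AtomicLong();
    private final Queue<QueuedMessage> pendingWrites = new ConcurrentLinkedQueue<>();

    // Called as soon as the message arrives: the ID is fixed here.
    public void onMessageArrived(String stanzaXml) {
        pendingWrites.add(new QueuedMessage(sequence.incrementAndGet(), stanzaXml));
    }

    // Called later by a periodic flush task: writing in batches no longer affects
    // the IDs, because they were assigned at arrival time.
    public void flushBatch() {
        QueuedMessage queued;
        while ((queued = pendingWrites.poll()) != null) {
            // INSERT INTO ofMessageArchive (... messageID ...) VALUES (queued.messageId, ...)
        }
    }
}
```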
Turns out that this issue is pretty much a duplicate of #19
…gniterealtime#137)
With issue igniterealtime#137 fixed, issue igniterealtime#19 can be closed.
Environment:
With users in the roster and in the MUC, the Conversations user sometimes sees MUC messages duplicated and appearing as one-on-one messages from the message author.
Launch cluster using https://github.com/surevine/openfire-docker-compose/tree/deleteme_all-the-branches
Log in to Spark1 as user1
Log in to Spark2 as user2
Open MUC1 as user1 and send a message
Open MUC1 as user2 and send a message
Log in to Conversations as user3
Open MUC1 as user3. See messages.
Open one-on-one messages with user1. See message.
Open one-on-one messages with user2. See no message.
(Sometimes user1/user2 are switched. Maybe it's the user on the node that user3 isn't on?)