
Prompt user with TLC options before running #207


Merged
9 commits merged on Apr 16, 2021

Conversation

klinvill
Contributor

Addresses #172.

The idea behind this PR is to pop up an input box prompt with the TLC options when running the "Check model with TLC" command. This both shows the user what options TLC will run with and gives them a convenient place to change the options. The PR then saves the user-provided options to the workspace settings if a workspace is open, or to the global settings if not, so that the same options can be pre-populated in the prompt for the next run, even across sessions.

This PR adds some user friction, but in most cases it just requires the user to press enter since the options are pre-populated from the previous run.
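
For illustration, a minimal sketch of this prompt-and-persist flow (getTlcOptions is the function named in this PR, but the configuration key, default value, and exact signature below are assumptions, not the actual implementation):

import * as vscode from 'vscode';

// Sketch only: prompt for TLC options, pre-populated from the previous run,
// then persist the answer to workspace settings, falling back to global
// settings when no workspace folder is open.
async function getTlcOptions(): Promise<string | undefined> {
    const config = vscode.workspace.getConfiguration('tlaplus.tlc');   // assumed section name
    const previous = config.get<string>('modelCheckerOptions', '-coverage 1');
    const input = await vscode.window.showInputBox({
        value: previous,
        prompt: 'TLC options',
        ignoreFocusOut: true
    });
    if (input === undefined) {
        return undefined;   // the user dismissed the prompt; cancel the run
    }
    const target = vscode.workspace.workspaceFolders
        ? vscode.ConfigurationTarget.Workspace
        : vscode.ConfigurationTarget.Global;
    await config.update('modelCheckerOptions', input, target);
    return input;
}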

Outstanding questions/concerns I have with this PR are:

  • Is the added user friction annoying enough to warrant adding a checkbox to the settings to disable the prompt?
  • I omit the -config argument from the pre-populated options by default since I want users to be able to re-use the same options across multiple modules; however, I do include the -coverage argument. This means that default options are now defined in two places: coverage in getTlcOptions() and config in buildTlcOptions().
  • TLC options are saved to workspace settings when possible, and global settings when not. Should they instead always be saved to the global settings?
  • Tests for getTlcOptions() would require UI interaction. This would mean integrating a framework like vscode-extension-tester or puppeteer/selenium. This would add complexity and a maintenance burden. Would it be worth it?

This allows the user-provided options to be persisted across sessions for a workspace. However, it does not persist options across workspaces.
Provides fallback behavior for the case where a user opens a file in vscode without opening a folder (which would otherwise open a workspace).
@klinvill marked this pull request as draft on April 12, 2021 17:29
@klinvill
Contributor Author

@lemmy how has the TLC options prompt been working for you? Are there changes you think should be made?

I made a few updates here to persist the user-entered TLC options to the workspace configurations by default (or global if a workspace is not open). When you get the chance, can you take a look and let me know what you think?

@lemmy
Member

lemmy commented Apr 12, 2021

Not to comment on the technical aspects, but I've been using this feature (an earlier commit) for a few weeks now and it has significantly reduced the number of times I ran TLC with bogus parameters (because command-line parameters are stored as extension settings). Unless there is a better place to show TLC parameters before model-checking, I suggest merging this PR asap.

@klinvill marked this pull request as ready for review on April 12, 2021 18:04
@lemmy
Member

lemmy commented Apr 12, 2021

FWIW: I suggest not storing the settings at the workspace or global level; they are model-level settings. This extension has no corresponding concept (config \cup parameters \cup JVM tuning make up a Toolbox's model).
Examples:

  • Spec A deadlocks while spec B does not
  • It's feasible to visualize the state space of Spec A with config C but infeasible with config D
  • ...

For now, the input box should just be pre-populated with a user's previous input.

@klinvill
Contributor Author

FWIW: I suggest not storing the settings at the workspace or global level; they are model-level settings

While this is true, I tend to pass the same few options to TLC. For my usage, it seems that the friction of typing in the options again when you (re-)open a window is more than that of clearing the options when you switch models. Is this not the case in your experience?

@lemmy
Member

lemmy commented Apr 13, 2021

My use of TLA+ might not be representative because I work more on TLA+ than with TLA+. That said, I would assume that the average user has one vscode workspace per spec, and thus would want parameters to be stored at the workspace level (if at all).
I suggest gathering more feedback.

@klinvill
Contributor Author

@alygin, @hwayne, do you have any thoughts on if TLC options should be:

  • Only persisted for a session. When you close the window, the options are reset to default (-coverage 1 in this case).
  • Persisted for a workspace. When you reopen a workspace, the last options you used for that workspace are pre-populated.
  • Persisted globally. When you open a new workspace, or reopen the same workspace, the last options you used anywhere are pre-populated.
  • Combination of the above two: persisted for a workspace when a workspace is open, otherwise persisted globally.

For reference, the clip below shows what the implementation currently looks like. By default it prompts with the options -coverage 1. The provided options are then saved in the workspace settings. These saved settings are used to pre-populate the TLC options for the next run.

tlc_options.mp4

@lemmy
Member

lemmy commented Apr 13, 2021

/cc @Calvin-L @will62794 (please ignore if you don't use the TLA+ vscode extension)

@Alexander-N

  • Is the added user friction annoying enough to warrant adding a checkbox to the settings to disable the prompt?

I think such an option would be great.

Persisting the options for a workspace seems fine, but I have no strong opinion on that.

Contributor

@alygin left a comment


This PR adds some user friction, but in most cases it just requires the user to press enter since the options are pre-populated from the previous run.

There's the "Run last model check again" command (handler) that should eliminate this friction for most cases by running the last check with no options dialog. Unfortunately, now it triggers the prompt.

Could you, please, make it always use the last options without asking the user? This will bring model checking back to one shortcut (or a single mouse click) distance from the user.

@alygin
Contributor

alygin commented Apr 13, 2021

As for the user preferences destination, I think the "Workspace first" attitude is reasonable.

But I'm afraid the extension behavior will be confusing in the following scenario:

  1. The user runs a model check, provides some options.
  2. The user goes to global settings and changes the default options for TLC.
  3. The user runs a model check, and the extension shows the prompt with the same options he or she provided in step 1, with no regard to the new global settings.

I have no idea how to handle that correctly. Leave it this way? Reset the options provided by the user?

@klinvill
Contributor Author

There's the "Run last model check again" command (handler) that should eliminate this friction for most cases by running the last check with no options dialog. Unfortunately, now it triggers the prompt.

Could you, please, make it always use the last options without asking the user? This will bring model checking back to one shortcut (or a single mouse click) distance from the user.

Sure thing. I just pushed a change to show the prompt for the checkModel() and checkModelCustom() handlers, but not for the runLastCheckAgain() handler.

I didn't realize that the "Evaluate expression" command also runs the model checker. I opted to not prompt for that command. Do you agree with this decision or do you think it should also prompt for "Evaluate expression"?
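
For illustration, a rough sketch of how the prompt can be gated per handler (doCheckModel, checkModel, checkModelCustom, and runLastCheckAgain are named in this thread; the signatures and helper functions below are assumptions, not the actual implementation):

// Hypothetical helpers standing in for the extension's real machinery.
declare function getSavedTlcOptions(): string;
declare function promptForTlcOptions(current: string): Promise<string | undefined>;
declare function runTlc(options: string): Promise<void>;

// The calling handler decides whether the options prompt is shown.
async function doCheckModel(showOptionsPrompt: boolean): Promise<void> {
    let options: string | undefined = getSavedTlcOptions();
    if (showOptionsPrompt) {
        options = await promptForTlcOptions(options);
        if (options === undefined) {
            return; // the user dismissed the prompt; cancel the run
        }
    }
    await runTlc(options);
}

// "Check model with TLC" prompts; "Run last model check again" and
// "Evaluate expression" reuse the saved options without asking.
const checkModel = () => doCheckModel(true);
const runLastCheckAgain = () => doCheckModel(false);
const evaluateExpression = () => doCheckModel(false);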

@klinvill
Contributor Author

I'm afraid the extension behavior will be confusing in the following scenario:

  1. The user runs a model check, provides some options.
  2. The user goes to global settings and changes the default options for TLC.
  3. The user runs a model check, and the extension shows the prompt with the same options he or she provided in step 1, with no regard to the new global settings.

I have no idea how to handle that correctly. Leave it this way? Reset the options provided by the user?

I'm using vscode's default configuration behavior, which means that the workspace settings will take precedence over the global settings.

It looks like the settings show a (small) message when a setting is set at both the global and workspace level:
(screenshot of the settings UI)

But maybe we could add a warning message (that the user can silence) to warn the user when both global and workspace settings are set? Something like this:
(screenshot of the proposed warning message)

I can also fix the behavior to unset the workspace settings when any prompt is given. This would ensure the next prompt is pre-populated from the global settings (if present).
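
For illustration, one way to surface such a warning would be roughly the following; the configuration key and wording are assumptions, and vscode's inspect() is used only to ask which scopes define the value:

import * as vscode from 'vscode';

// Sketch of the warning idea above: detect when the TLC options are defined at
// both the global and the workspace level and show a warning message.
function warnIfTlcOptionsShadowed(): void {
    const config = vscode.workspace.getConfiguration('tlaplus.tlc');   // assumed section name
    const info = config.inspect<string>('modelCheckerOptions');
    if (info?.globalValue !== undefined && info?.workspaceValue !== undefined) {
        vscode.window.showWarningMessage(
            'TLC options are set in both your user and workspace settings; '
            + 'the workspace value takes precedence.'
        );
    }
}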

@sonarqubecloud

Kudos, SonarCloud Quality Gate passed!

Bugs: 0 (A)
Vulnerabilities: 0 (A)
Security Hotspots: 0 (A)
Code Smells: 0 (A)
Coverage: no coverage information
Duplication: 0.0%

@klinvill
Contributor Author

  • Is the added user friction annoying enough to warrant adding a checkbox to the settings to disable the prompt?

I think such an option would be great.

The latest commit adds an option to disable the prompt:
(screenshot of the new setting)
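
For illustration, reading such a setting could look roughly like this (the setting name below is an assumption, not necessarily what the commit uses):

import * as vscode from 'vscode';

// Sketch only: a boolean setting that lets the user turn the options prompt off.
function shouldShowTlcOptionsPrompt(): boolean {
    return vscode.workspace
        .getConfiguration('tlaplus.tlc')                         // assumed section name
        .get<boolean>('promptForOptionsBeforeCheck', true);      // assumed setting key
}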

@lemmy
Member

lemmy commented Apr 14, 2021

I didn't realize that the "Evaluate expression" command also runs the model checker. I opted to not prompt for that command. Do you agree with this decision or do you think it should also prompt for "Evaluate expression"?

Off the top of my head, I can't think of parameters that I would want to pass when evaluating (constant) expressions.

Contributor

@alygin left a comment


Looks good, thank you!

@lemmy
Member

lemmy commented May 20, 2021

This PR interferes with the functionality to evaluate the selected expression from the editor's right-click context menu.

@klinvill
Contributor Author

klinvill commented May 20, 2021

This PR interferes with the functionality to evaluate the selected expression from the editor's right-click context menu.

@lemmy is this just in your fork, or also in the main repo? I added a showOptionsPrompt arg to the doCheckModel() function that disables the prompt. It looks like it's properly disabled in the master branch: https://github.com/alygin/vscode-tlaplus/blob/917305bffb75fb82fd77218c1d8c0dead66689fd/src/commands/evaluateExpression.ts#L98 (this appears to be the call site for both the TLA+: Evaluate expression... command and the Evaluate selected expression command)

@lemmy
Member

lemmy commented May 20, 2021

@klinvill Thanks, that might very well be it. Will check!

Btw. I realized that the dialog could be used to support evaluating expressions in a spec that declares variables and/or constants, but lacks a model:

---------------------- MODULE ATD ---------------------
EXTENDS Naturals

CONSTANT N

ASSUME NIsPosNat == N \in Nat \ {0}

Node == 0 .. N-1
=============================================================================

If a user tries to evaluate Node, one gets:

Error evaluating expression:
The constant parameter N is not assigned a value by the configuration file.

It would be nice if one could type WITH N <- 42 in the prompt (suffix of regular TLA+ expression INSTANCE ATD WITH N <- 42).

@lemmy
Member

lemmy commented May 21, 2021

@klinvill Can you suggest how to logically merge your changes with the ones related to the TLC debugger? Both of us changed APIs in tla2tools.ts, but more fundamentally the TLC debugger integrates TLC with the VSCode launch API.
