Description
The goal is for the GUI to handle tens of thousands of datasets in a reasonably usable manner.
- Fix uploading of large collections
- Bundle uploads so the upload form scales to hundreds or thousands of datasets (see the sketch below).
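A minimal sketch of the bundling idea, assuming the upload form can batch many selected files into one request; `submit_upload` and the payload shape are hypothetical, not Galaxy's actual upload API:

```python
from itertools import islice


def bundle_uploads(files, batch_size=100):
    """Yield batches of files so N selected files turn into roughly
    N / batch_size upload requests instead of N separate ones."""
    it = iter(files)
    while batch := list(islice(it, batch_size)):
        yield batch


# Hypothetical usage: one request per batch instead of one per dataset.
# for batch in bundle_uploads(selected_files):
#     submit_upload({"files": batch})
```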
- Rework the collection builder for scale: redo the paired collection builder to try different matching patterns, make a more aggressive initial guess when it can match everything, and offer no initial guess when it can match nothing (see the sketch below).
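A sketch of that guessing behavior, assuming the builder tries common forward/reverse suffix conventions and only proposes an initial pairing when a single pattern pairs every file; the pattern list is illustrative, not the builder's actual set:

```python
# Common forward/reverse suffix conventions to try, most specific first
# (illustrative; not the builder's real pattern set).
PAIR_PATTERNS = [("_R1", "_R2"), ("_1", "_2"), (".1", ".2")]


def guess_pairs(names):
    """Return {base: (forward, reverse)} when one pattern pairs every
    name; return None (i.e. offer no initial guess) otherwise."""
    for fwd, rev in PAIR_PATTERNS:
        pairs, matched = {}, 0
        for name in names:
            if fwd in name:
                key, slot = name.replace(fwd, "", 1), 0
            elif rev in name:
                key, slot = name.replace(rev, "", 1), 1
            else:
                break  # this pattern cannot match everything; try the next
            pairs.setdefault(key, [None, None])[slot] = name
            matched += 1
        if matched == len(names) and all(f and r for f, r in pairs.values()):
            return {k: tuple(v) for k, v in pairs.items()}
    return None


# guess_pairs(["a_R1.fq", "a_R2.fq", "b_R1.fq", "b_R2.fq"])
# -> {'a.fq': ('a_R1.fq', 'a_R2.fq'), 'b.fq': ('b_R1.fq', 'b_R2.fq')}
# guess_pairs(["a_R1.fq", "notes.txt"]) -> None
```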
- Add pagination and search to the collection view within the history panel (see the sketch below).
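One way a paginated, searchable collection view could query elements, sketched with SQLAlchemy; the `DatasetCollectionElement` model below is a minimal stand-in for the real mapping, not its actual schema:

```python
from sqlalchemy import Column, Integer, String, select
from sqlalchemy.orm import declarative_base

Base = declarative_base()


class DatasetCollectionElement(Base):
    """Minimal stand-in for the real collection element mapping."""
    __tablename__ = "dataset_collection_element"
    id = Column(Integer, primary_key=True)
    collection_id = Column(Integer, index=True)
    element_index = Column(Integer)
    element_identifier = Column(String)


def page_of_elements(session, collection_id, query=None, offset=0, limit=50):
    """Fetch one page of a collection's elements, optionally filtered by a
    case-insensitive identifier search, instead of loading all of them."""
    stmt = select(DatasetCollectionElement).where(
        DatasetCollectionElement.collection_id == collection_id
    )
    if query:
        stmt = stmt.where(
            DatasetCollectionElement.element_identifier.ilike(f"%{query}%")
        )
    stmt = (
        stmt.order_by(DatasetCollectionElement.element_index)
        .offset(offset)
        .limit(limit)
    )
    return session.scalars(stmt).all()
```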
- Scale up tool execution and workflow scheduling:
  - Intra-step recovery of workflow steps. (Workflow scheduling doesn't keep track of progress within a step. #3883)
  - Multi-threaded job creation during workflow scheduling. (Multi-threaded job scheduling in workflows. #3903; see the sketch after this list)
  - General database optimizations for scheduling. (Increase Job Throughput #4959)
  - Optimize memory consumption for workflow scheduling. (Optimize Memory Consumption when Evaluating Large Workflows #5044)
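A sketch of the multi-threaded job creation idea from #3903, using a thread pool so a step that maps over a large collection doesn't create its jobs serially; `create_job` stands in for the per-job database and command-line setup the scheduler actually performs:

```python
from concurrent.futures import ThreadPoolExecutor, as_completed


def create_job(params):
    """Stand-in for per-job setup work (DB rows, command line, etc.)."""
    return {"params": params, "state": "new"}


def create_jobs_threaded(param_sets, max_workers=4):
    """Create jobs for all parameter sets concurrently, collecting
    per-job failures instead of aborting the whole step."""
    jobs, errors = [], []
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        futures = {pool.submit(create_job, p): p for p in param_sets}
        for future in as_completed(futures):
            try:
                jobs.append(future.result())
            except Exception as exc:  # record the failure; keep scheduling
                errors.append((futures[future], exc))
    return jobs, errors
```

Collecting per-job errors rather than raising also dovetails with the intra-step recovery item above: the scheduler can persist which elements already have jobs and resume from there.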
- Fix tool form performance issues: database and datatype matching optimizations. (Tool UI Unusably and Unreasonably Slow #3865; see the sketch below)
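One plausible shape for the datatype-matching part of that fix: memoize the repeated "does this dataset's extension satisfy the input's accepted formats" check so re-rendering the tool form doesn't recompute it for every dataset. Memoization is an illustrative technique here, not necessarily the change that landed:

```python
from functools import lru_cache


@lru_cache(maxsize=4096)
def matches_input(dataset_ext: str, accepted_exts: frozenset) -> bool:
    # Real matching would also consult the datatype subclass
    # hierarchy, elided in this sketch.
    return dataset_ext in accepted_exts


def compatible_datasets(datasets, accepted_exts):
    """Filter a history's datasets down to those a tool input accepts,
    hitting the cache instead of recomputing the match each render."""
    accepted = frozenset(accepted_exts)
    return [d for d in datasets if matches_input(d["ext"], accepted)]
```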
- Increase robustness and the user's ability to respond intelligently to failures:
  - Improve failed job re-submission to include tool-indicated errors. (Run re-submission handling logic in response to tool indicated failures. #3320; see the sketch after this list)
  - Allow re-running mapped jobs in the history panel.
  - Allow re-running only failed jobs within mapped jobs in the history panel. (Rerunning a failed dataset collection element should substitute the failed element #2235)
  - Fix workflow "resume" to work with the two re-run fixes above.
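A sketch of re-submission driven by tool-indicated errors (#3320), assuming a failed job's stderr can be matched against tool-declared error patterns and routed through the existing resubmit path; the patterns and the `resubmit` hook are assumptions for illustration:

```python
import re

# Hypothetical tool-declared patterns that mark a failure as recoverable.
TOOL_ERROR_PATTERNS = [
    re.compile(r"out of memory", re.IGNORECASE),
    re.compile(r"walltime.*exceeded", re.IGNORECASE),
]


def should_resubmit(stderr: str) -> bool:
    """Treat tool-indicated failures like runner failures for resubmission."""
    return any(p.search(stderr) for p in TOOL_ERROR_PATTERNS)


def handle_failed_job(job: dict, resubmit) -> bool:
    """Resubmit once when the tool's own output indicates a recoverable
    error; `resubmit` is the existing resubmission hook (assumed)."""
    if job.get("resubmits", 0) < 1 and should_resubmit(job.get("stderr", "")):
        job["resubmits"] = job.get("resubmits", 0) + 1
        resubmit(job)
        return True
    return False
```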