Commit bc9e4c8

Merge pull request #76 from IQSS/develop
Update from IQSS develop
2 parents: 2fb9106 + 3a2f2cc

File tree: 69 files changed (+2284 / -1140 lines)


.gitignore

Lines changed: 1 addition & 0 deletions
@@ -34,6 +34,7 @@ oauth-credentials.md
 
 /src/main/webapp/oauth2/newAccount.html
 scripts/api/setup-all.sh*
+scripts/api/setup-all.*.log
 
 # ctags generated tag file
 tags

conf/solr/7.7.2/schema_dv_mdb_copies.xml

Lines changed: 5 additions & 1 deletion
@@ -133,9 +133,13 @@
 <copyField source="studyAssayOtherMeasurmentType" dest="_text_" maxChars="3000"/>
 <copyField source="studyAssayOtherOrganism" dest="_text_" maxChars="3000"/>
 <copyField source="studyAssayPlatform" dest="_text_" maxChars="3000"/>
+<copyField source="studyAssayOtherPlatform" dest="_text_" maxChars="3000"/>
 <copyField source="studyAssayTechnologyType" dest="_text_" maxChars="3000"/>
+<copyField source="studyAssayOtherTechnologyType" dest="_text_" maxChars="3000"/>
 <copyField source="studyDesignType" dest="_text_" maxChars="3000"/>
+<copyField source="studyOtherDesignType" dest="_text_" maxChars="3000"/>
 <copyField source="studyFactorType" dest="_text_" maxChars="3000"/>
+<copyField source="studyOtherFactorType" dest="_text_" maxChars="3000"/>
 <copyField source="subject" dest="_text_" maxChars="3000"/>
 <copyField source="subtitle" dest="_text_" maxChars="3000"/>
 <copyField source="targetSampleActualSize" dest="_text_" maxChars="3000"/>
@@ -154,4 +158,4 @@
 <copyField source="universe" dest="_text_" maxChars="3000"/>
 <copyField source="weighting" dest="_text_" maxChars="3000"/>
 <copyField source="westLongitude" dest="_text_" maxChars="3000"/>
-</schema>
+</schema>

conf/solr/7.7.2/schema_dv_mdb_fields.xml

Lines changed: 5 additions & 1 deletion
@@ -133,9 +133,13 @@
 <field name="studyAssayOtherMeasurmentType" type="text_en" multiValued="true" stored="true" indexed="true"/>
 <field name="studyAssayOtherOrganism" type="text_en" multiValued="true" stored="true" indexed="true"/>
 <field name="studyAssayPlatform" type="text_en" multiValued="true" stored="true" indexed="true"/>
+<field name="studyAssayOtherPlatform" type="text_en" multiValued="true" stored="true" indexed="true"/>
 <field name="studyAssayTechnologyType" type="text_en" multiValued="true" stored="true" indexed="true"/>
+<field name="studyAssayOtherTechnologyType" type="text_en" multiValued="true" stored="true" indexed="true"/>
 <field name="studyDesignType" type="text_en" multiValued="true" stored="true" indexed="true"/>
+<field name="studyOtherDesignType" type="text_en" multiValued="true" stored="true" indexed="true"/>
 <field name="studyFactorType" type="text_en" multiValued="true" stored="true" indexed="true"/>
+<field name="studyOtherFactorType" type="text_en" multiValued="true" stored="true" indexed="true"/>
 <field name="subject" type="text_en" multiValued="true" stored="true" indexed="true"/>
 <field name="subtitle" type="text_en" multiValued="false" stored="true" indexed="true"/>
 <field name="targetSampleActualSize" type="text_en" multiValued="false" stored="true" indexed="true"/>
@@ -154,4 +158,4 @@
 <field name="universe" type="text_en" multiValued="true" stored="true" indexed="true"/>
 <field name="weighting" type="text_en" multiValued="false" stored="true" indexed="true"/>
 <field name="westLongitude" type="text_en" multiValued="true" stored="true" indexed="true"/>
-</fields>
+</fields>

Lines changed: 99 additions & 0 deletions
@@ -0,0 +1,99 @@

# Dataverse 5.1

This release brings new features, enhancements, and bug fixes to Dataverse. Thank you to all of the community members who contributed code, suggestions, bug reports, and other assistance across the project.

## Release Highlights

### Large File Upload for Installations Using AWS S3

The added support for multipart upload through the API and UI (Issue #6763) will allow files larger than 5 GB to be uploaded to Dataverse when an installation is running on AWS S3. Previously, only non-AWS S3 storage configurations would allow uploads larger than 5 GB.

### Dataset-Specific Stores

In previous releases, configuration options were added that allow each dataverse to have a specific store enabled. This release adds even more granularity, with the ability to set a dataset-level store.

## Major Use Cases

Newly-supported use cases in this release include:

- Users can now upload files larger than 5 GB on installations running AWS S3 (Issue #6763, PR #6995)
- Administrators will now be able to specify a store at the dataset level in addition to the Dataverse level (Issue #6872, PR #7272)
- Users will have their dataset's directory structure retained when uploading a dataset with shapefiles (Issue #6873, PR #7279)
- Users will now be able to download zip files through the experimental Zipper service when the set of downloaded files has duplicate names (Issue [#80](https://github.com/IQSS/dataverse.harvard.edu/issues/80), PR #7276)
- Users will now be able to download zip files with the proper file structure through the experimental Zipper service (Issue #7255, PR #7258)
- Administrators will be able to use new APIs to keep the Solr index and the DB in sync, allowing easier resolution of an issue that would occasionally cause stale search results to not load (Issue #4225, PR #7211)

## Notes for Dataverse Installation Administrators

### New API for setting a Dataset-level Store

- This release adds a new API for setting a dataset-specific store. Learn more in the Managing Dataverses and Datasets section of the [Admin Guide](http://guides.dataverse.org/en/5.1/admin/dataverses-datasets.html).
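
For example (a sketch based on the commands documented in the guide changes included in this commit; `$SERVER`, `$API_TOKEN`, `$storageDriverLabel`, and `$dataset-id` are placeholders), a superuser can assign a store to a dataset and then check which driver it uses:

`curl -H "X-Dataverse-key: $API_TOKEN" -X PUT -d $storageDriverLabel http://$SERVER/api/datasets/$dataset-id/storageDriver`

`curl http://$SERVER/api/datasets/$dataset-id/storageDriver`
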
### Multipart Upload Storage Monitoring, Recommended Use for Multipart Upload

Charges may be incurred for storage reserved for multipart uploads that are not completed or cancelled. Administrators may want to do periodic manual or automated checks for open multipart uploads. Learn more in the Big Data Support section of the [Developers Guide](http://guides.dataverse.org/en/5.1/developers/big-data-support.html).
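
For example, open multipart uploads can be listed with the standard AWS CLI (a sketch, not a Dataverse-specific command; replace the bucket name with your own):

`aws s3api list-multipart-uploads --bucket my-dataverse-bucket`

Individual lingering uploads can then be cancelled with `aws s3api abort-multipart-upload`, using the key and upload ID reported by the command above.
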
While multipart uploads can support much larger files, and can have advantages in terms of robust transfer and speed, they are more complex than single part direct uploads. Administrators should consider taking advantage of the options to limit use of multipart uploads to specific users by using multiple stores and configuring access to stores with high file size limits to specific Dataverses (added in 4.20) or Datasets (added in this release).

### New APIs for keeping Solr records in sync

This release adds new APIs to keep the Solr index and the DB in sync, allowing easier resolution of an issue that would occasionally cause search results to not load. Learn more in the Solr section of the [Admin Guide](http://guides.dataverse.org/en/5.1/admin/solr-search-index.html).
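
For example (assuming Dataverse is listening on localhost port 8080), the two endpoints added to that guide section in this commit are:

`curl http://localhost:8080/api/admin/index/status`

`curl http://localhost:8080/api/admin/index/clear-orphans`

The first reports database objects missing from Solr (and Solr documents missing from the database); the second removes orphaned Solr documents.
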
### Documentation for Purging the Ingest Queue

At times, it may be necessary to cancel long-running Ingest jobs in the interest of system stability. The Troubleshooting section of the [Admin Guide](http://guides.dataverse.org/en/5.1/admin/) now has specific steps.
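
For example (a sketch assuming a default Payara 5 location, as shown in the new Troubleshooting section; the password matches the username unless you have changed it):

`/usr/local/payara5/mq/bin/imqcmd -u admin purge dst -t q -n DataverseIngest`
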
### Biomedical Metadata Block Updated

The Life Science Metadata block (biomedical.tsv) was updated. "Other Design Type", "Other Factor Type", "Other Technology Type", and "Other Technology Platform" boxes were added. See the "Additional Upgrade Steps" below if you use this block in your installation.

## Notes for Tool Developers and Integrators

### Spaces in File Names

Dataverse installations using S3 storage will no longer replace spaces in the file names of downloaded files with the + character. If your tool or integration has any special handling around this, you may need to make further adjustments to maintain backwards compatibility while also supporting Dataverse installations on 5.1+.

## Complete List of Changes

For the complete list of code changes in this release, see the [5.1 Milestone](https://github.com/IQSS/dataverse/milestone/90?closed=1) in GitHub.

For help with upgrading, installing, or general questions please post to the [Dataverse Google Group](https://groups.google.com/forum/#!forum/dataverse-community) or email [email protected].

## Installation

If this is a new installation, please see our [Installation Guide](http://guides.dataverse.org/en/5.1/installation/).

## Upgrade Instructions

0. These instructions assume that you've already successfully upgraded from Dataverse 4.x to Dataverse 5 following the instructions in the [Dataverse 5 Release Notes](https://github.com/IQSS/dataverse/releases/tag/v5.0).

1. Undeploy the previous version.

   `<payara install path>/payara/bin/asadmin list-applications`

   `<payara install path>/payara/bin/asadmin undeploy dataverse`

2. Stop Payara, remove the generated directory, then start Payara again.

   - `service payara stop`
   - remove the generated directory: `rm -rf <payara install path>/payara/domains/domain1/generated`
   - `service payara start`

3. Deploy this version.

   `<payara install path>/payara/bin/asadmin deploy <path>dataverse-5.1.war`

4. Restart Payara.

### Additional Upgrade Steps

1. Update the Biomedical Metadata Block (if used), reload Solr, and run ReExportAll.

   `wget https://github.com/IQSS/dataverse/releases/download/v5.1/biomedical.tsv`

   `curl http://localhost:8080/api/admin/datasetfield/load -X POST --data-binary @biomedical.tsv -H "Content-type: text/tab-separated-values"`

   - Copy schema_dv_mdb_fields.xml and schema_dv_mdb_copies.xml to the Solr server, for example into the /usr/local/solr/solr-7.7.2/server/solr/collection1/conf/ directory.
   - Restart Solr, or tell Solr to reload its configuration:

   `curl "http://localhost:8983/solr/admin/cores?action=RELOAD&core=collection1"`

   - Run ReExportAll to update JSON exports. See <http://guides.dataverse.org/en/5.1/admin/metadataexport.html?highlight=export#batch-exports-through-the-api>.
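
For example (a sketch of the batch re-export call described in the guide linked above, assuming Dataverse is listening on localhost port 8080):

`curl http://localhost:8080/api/admin/metadata/reExportAll`
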
Lines changed: 59 additions & 0 deletions
@@ -0,0 +1,59 @@

# Dataverse 5.1.1

This minor release adds important scaling improvements for installations running on AWS S3. It is recommended that 5.1.1 be used in production instead of 5.1.

## Release Highlights

### Connection Pool Size Configuration Option, Connection Optimizations

Dataverse 5.1 improved the efficiency of making S3 connections through use of an HTTP connection pool. This release adds optimizations around closing streams and channels that may hold S3 HTTP connections open and exhaust the connection pool. In parallel, this release increases the default pool size from 50 to 256 and adds the ability to increase the size of the connection pool, so a larger pool can be configured if needed.

## Major Use Cases

Newly-supported use cases in this release include:

- Administrators of installations using S3 will be able to define the connection pool size, allowing better resource scaling for larger installations (Issue #7309, PR #7313)

## Notes for Dataverse Installation Administrators

### 5.1.1 vs. 5.1 for Production Use

As mentioned above, we encourage 5.1.1 instead of 5.1 for production use.

### New JVM Option for Connection Pool Size

Larger installations may want to increase the number of open S3 connections allowed (the default is 256). For example, to set the value to 4096:

`./asadmin create-jvm-options "-Ddataverse.files.<id>.connection-pool-size=4096"`

The JVM Options section of the [Configuration Guide](http://guides.dataverse.org/en/5.1.1/installation/config/) has more information.

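
To confirm the option was set (a sketch, run from the same directory that contains `asadmin`; `<id>` is your store's identifier):

`./asadmin list-jvm-options | grep connection-pool-size`
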
## Complete List of Changes

For the complete list of code changes in this release, see the [5.1.1 Milestone](https://github.com/IQSS/dataverse/milestone/91?closed=1) in GitHub.

For help with upgrading, installing, or general questions please post to the [Dataverse Google Group](https://groups.google.com/forum/#!forum/dataverse-community) or email [email protected].

## Installation

If this is a new installation, please see our [Installation Guide](http://guides.dataverse.org/en/5.1.1/installation/).

## Upgrade Instructions

0. These instructions assume that you've already successfully upgraded to Dataverse 5.1 following the instructions in the [Dataverse 5.1 Release Notes](https://github.com/IQSS/dataverse/releases/tag/v5.1).

1. Undeploy the previous version.

   `<payara install path>/payara/bin/asadmin list-applications`

   `<payara install path>/payara/bin/asadmin undeploy dataverse`

2. Stop Payara, remove the generated directory, then start Payara again.

   - `service payara stop`
   - remove the generated directory: `rm -rf <payara install path>/payara/domains/domain1/generated`
   - `service payara start`

3. Deploy this version.

   `<payara install path>/payara/bin/asadmin deploy <path>dataverse-5.1.1.war`

4. Restart Payara.

doc/release-notes/6763-multipart-uploads.md

Lines changed: 0 additions & 3 deletions
This file was deleted.

Lines changed: 12 additions & 0 deletions
@@ -0,0 +1,12 @@

## Google Cloud Archiver

Dataverse Bags can now be sent to a bucket in Google Cloud, including buckets in the 'Coldline' storage class, which provides less expensive but slower access.

## Use Cases

- As an Administrator I can set up a regular export to Google Cloud so that my users' data is preserved.

## New Settings

:GoogleCloudProject - the name of the project managing the bucket.

:GoogleCloudBucket - the name of the bucket to use.
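
For example (a sketch using the standard database settings API; the project and bucket names are placeholders, and Dataverse is assumed to be listening on localhost port 8080):

`curl -X PUT -d my-gcp-project http://localhost:8080/api/admin/settings/:GoogleCloudProject`

`curl -X PUT -d my-archive-bucket http://localhost:8080/api/admin/settings/:GoogleCloudBucket`
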

doc/release-notes/7184-spaces-in-filenames.md

Lines changed: 0 additions & 7 deletions
This file was deleted.

doc/sphinx-guides/source/admin/dataverses-datasets.rst

Lines changed: 22 additions & 0 deletions
@@ -59,6 +59,8 @@ The available drivers can be listed with::
 
     curl -H "X-Dataverse-key: $API_TOKEN" http://$SERVER/api/admin/dataverse/storageDrivers
 
+(Individual datasets can be configured to use specific file stores as well. See the "Datasets" section below.)
+
 
 Datasets
 --------
@@ -130,3 +132,23 @@ Diagnose Constraint Violations Issues in Datasets
 
 To identify invalid data values in specific datasets (if, for example, an attempt to edit a dataset results in a ConstraintViolationException in the server log), or to check all the datasets in the Dataverse for constraint violations, see :ref:`Dataset Validation <dataset-validation-api>` in the :doc:`/api/native-api` section of the User Guide.
 
+Configure a Dataset to store all new files in a specific file store
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+Configure a dataset to use a specific file store (this API can only be used by a superuser) ::
+
+    curl -H "X-Dataverse-key: $API_TOKEN" -X PUT -d $storageDriverLabel http://$SERVER/api/datasets/$dataset-id/storageDriver
+
+The current driver can be seen using::
+
+    curl http://$SERVER/api/datasets/$dataset-id/storageDriver
+
+It can be reset to the default store as follows (only a superuser can do this) ::
+
+    curl -H "X-Dataverse-key: $API_TOKEN" -X DELETE http://$SERVER/api/datasets/$dataset-id/storageDriver
+
+The available drivers can be listed with::
+
+    curl -H "X-Dataverse-key: $API_TOKEN" http://$SERVER/api/admin/dataverse/storageDrivers
+
+
doc/sphinx-guides/source/admin/solr-search-index.rst

Lines changed: 13 additions & 1 deletion
@@ -14,6 +14,18 @@ There are two ways to perform a full reindex of the Dataverse search index. Star
 Clear and Reindex
 +++++++++++++++++
 
+
+Index and Database Consistency
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Get a list of all database objects that are missing in Solr, and Solr documents that are missing in the database:
+
+``curl http://localhost:8080/api/admin/index/status``
+
+Remove all Solr documents that are orphaned (ie not associated with objects in the database):
+
+``curl http://localhost:8080/api/admin/index/clear-orphans``
+
 Clearing Data from Solr
 ~~~~~~~~~~~~~~~~~~~~~~~
 
@@ -81,4 +93,4 @@ If you suspect something isn't indexed properly in solr, you may bypass the Data
 
 ``curl "http://localhost:8983/solr/collection1/select?q=dsPersistentId:doi:10.15139/S3/HFV0AO"``
 
-to see the JSON you were hopefully expecting to see passed along to Dataverse.
+to see the JSON you were hopefully expecting to see passed along to Dataverse.

doc/sphinx-guides/source/admin/troubleshooting.rst

Lines changed: 20 additions & 0 deletions
@@ -43,6 +43,26 @@ A User Needs Their Account to Be Converted From Institutional (Shibboleth), ORCI
 
 See :ref:`converting-shibboleth-users-to-local` and :ref:`converting-oauth-users-to-local`.
 
+.. _troubleshooting-ingest:
+
+Ingest
+------
+
+Long-Running Ingest Jobs Have Exhausted System Resources
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Ingest is both CPU- and memory-intensive, and depending on your system resources and the size and format of tabular data files uploaded, may render Dataverse unresponsive or nearly inoperable. It is possible to cancel these jobs by purging the ingest queue.
+
+``/usr/local/payara5/mq/bin/imqcmd -u admin query dst -t q -n DataverseIngest`` will query the DataverseIngest destination. The password, unless you have changed it, matches the username.
+
+``/usr/local/payara5/mq/bin/imqcmd -u admin purge dst -t q -n DataverseIngest`` will purge the DataverseIngest queue, and prompt for your confirmation.
+
+Finally, list destinations to verify that the purge was successful::
+
+``/usr/local/payara5/mq/bin/imqcmd -u admin list dst``
+
+If you are still running Glassfish, substitute glassfish4 for payara5 above. If you have installed Dataverse in some other location, adjust the above paths accordingly.
+
 .. _troubleshooting-payara:
 
 Payara
