Merge branch 'develop' into 10554-avoid-solr-join-guest #10554
pdurbin committed May 17, 2024
2 parents 6baef6a + da3dd95 commit 070b3c0
Showing 28 changed files with 482 additions and 177 deletions.
@@ -0,0 +1 @@
Fixed a bug where the ``incomplete metadata`` label was shown for published datasets with incomplete metadata in certain scenarios. The label is now shown for draft versions of such datasets and for published datasets that the user can edit. The label can also be hidden for published datasets (regardless of edit rights) by setting the new option ``dataverse.ui.show-validity-label-when-published`` to ``false``.
@@ -0,0 +1,3 @@
Changed ``api/dataverses/{id}/metadatablocks`` so that setting the query parameter ``onlyDisplayedOnCreate=true`` also returns metadata blocks whose dataset field types have input levels configured as required on the General Information page of the collection, in addition to the metadata blocks and fields with the property ``displayOnCreate=true`` (the original behavior).

A new endpoint ``api/dataverses/{id}/inputLevels`` has been created for updating the dataset field type input levels of a collection via API.
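For illustration only (not part of this diff), a minimal call using the changed query parameter might look like the sketch below; the server URL and collection alias are placeholders, while the endpoint and parameter names come from the note above:

```
export SERVER_URL=https://demo.dataverse.org
export ID=root

# List the metadata blocks (and their fields) shown on dataset creation,
# now also including blocks whose fields are required via the collection's input levels.
curl "$SERVER_URL/api/dataverses/$ID/metadatablocks?onlyDisplayedOnCreate=true"
```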
12 changes: 8 additions & 4 deletions doc/release-notes/6.2-release-notes.md
@@ -417,12 +417,16 @@ In the following commands we assume that Payara 6 is installed in `/usr/local/pa

As noted above, deployment of the war file might take several minutes due to a database migration script required for the new storage quotas feature.

6\. Restart Payara
6\. For installations with internationalization:

- Please remember to update translations via [Dataverse language packs](https://github.com/GlobalDataverseCommunityConsortium/dataverse-language-packs).

7\. Restart Payara

- `service payara stop`
- `service payara start`

7\. Update the following Metadata Blocks to reflect the incremental improvements made to the handling of core metadata fields:
8\. Update the following Metadata Blocks to reflect the incremental improvements made to the handling of core metadata fields:

```
wget https://github.com/IQSS/dataverse/releases/download/v6.2/geospatial.tsv
@@ -442,7 +446,7 @@ wget https://github.com/IQSS/dataverse/releases/download/v6.2/biomedical.tsv
curl http://localhost:8080/api/admin/datasetfield/load -H "Content-type: text/tab-separated-values" -X POST --upload-file scripts/api/data/metadatablocks/biomedical.tsv
```

8\. For installations with custom or experimental metadata blocks:
9\. For installations with custom or experimental metadata blocks:

- Stop Solr instance (usually `service solr stop`, depending on Solr installation/OS, see the [Installation Guide](https://guides.dataverse.org/en/6.2/installation/prerequisites.html#solr-init-script))

@@ -455,7 +459,7 @@ curl http://localhost:8080/api/admin/datasetfield/load -H "Content-type: text/ta
- Restart Solr instance (usually `service solr restart` depending on solr/OS)

9\. Reindex Solr:
10\. Reindex Solr:

For details, see https://guides.dataverse.org/en/6.2/admin/solr-search-index.html but here is the reindex command:
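The reindex command itself is elided by the diff truncation below; as a reference sketch only (verify against the linked guide), the admin reindex API is typically invoked like this:

```
curl http://localhost:8080/api/admin/index
```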

41 changes: 40 additions & 1 deletion doc/sphinx-guides/source/api/native-api.rst
@@ -898,7 +898,46 @@ The following attributes are supported:
* ``filePIDsEnabled`` ("true" or "false") Restricted to use by superusers and only when the :ref:`:AllowEnablingFilePIDsPerCollection <:AllowEnablingFilePIDsPerCollection>` setting is true. Enables or disables registration of file-level PIDs in datasets within the collection (overriding the instance-wide setting).

.. _collection-storage-quotas:


Update Collection Input Levels
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Updates the dataset field type input levels in a collection.

Please note that this endpoint overwrites all the input levels of the collection, so if you want to keep the existing ones, you will need to include them in the JSON request body.

If an input level corresponds to a dataset field type belonging to a metadata block that is not yet part of the collection, that metadata block will be added to the collection.

This endpoint expects a JSON with the following format::

  [
    {
      "datasetFieldTypeName": "datasetFieldTypeName1",
      "required": true,
      "include": true
    },
    {
      "datasetFieldTypeName": "datasetFieldTypeName2",
      "required": true,
      "include": true
    }
  ]

.. code-block:: bash

  export API_TOKEN=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
  export SERVER_URL=https://demo.dataverse.org
  export ID=root
  export JSON='[{"datasetFieldTypeName":"geographicCoverage", "required":true, "include":true}, {"datasetFieldTypeName":"country", "required":true, "include":true}]'

  curl -X PUT -H "X-Dataverse-key: $API_TOKEN" -H "Content-Type:application/json" "$SERVER_URL/api/dataverses/$ID/inputLevels" -d "$JSON"

The fully expanded example above (without environment variables) looks like this:

.. code-block:: bash

  curl -X PUT -H "X-Dataverse-key:xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" -H "Content-Type:application/json" "https://demo.dataverse.org/api/dataverses/root/inputLevels" -d '[{"datasetFieldTypeName":"geographicCoverage", "required":true, "include":true}, {"datasetFieldTypeName":"country", "required":true, "include":true}]'

Collection Storage Quotas
~~~~~~~~~~~~~~~~~~~~~~~~~

20 changes: 20 additions & 0 deletions doc/sphinx-guides/source/installation/config.rst
@@ -2945,6 +2945,24 @@ Defaults to ``false``.
Can also be set via any `supported MicroProfile Config API source`_, e.g. the environment variable
``DATAVERSE_API_ALLOW_INCOMPLETE_METADATA``. Will accept ``[tT][rR][uU][eE]|1|[oO][nN]`` as "true" expressions.

.. _dataverse.ui.show-validity-label-when-published:

dataverse.ui.show-validity-label-when-published
+++++++++++++++++++++++++++++++++++++++++++++++

Even if you do not allow incomplete metadata to be saved in Dataverse, some metadata may still end up incomplete, e.g., after a metadata field is made mandatory. Datasets in which that field is not filled out become incomplete and can therefore be labeled with the ``incomplete metadata`` label. Note that you need to reindex the datasets after changing the metadata definitions; reindexing updates the labels and other dataset information according to the new situation.

By default (``true``), the label is shown for draft datasets and for published datasets with incomplete metadata, but for published datasets only to users who can edit them. When this option is set to ``false``, only draft datasets with incomplete metadata carry the label; published datasets never do, regardless of edit rights. You can list the affected datasets, for example, with the metadata validity filter on the "My Data" page, which can be turned on by enabling the :ref:`dataverse.ui.show-validity-filter` option.

Defaults to ``true``.

Can also be set via any `supported MicroProfile Config API source`_, e.g. the environment variable
``DATAVERSE_UI_SHOW_VALIDITY_LABEL_WHEN_PUBLISHED``. Will accept ``[tT][rR][uU][eE]|1|[oO][nN]`` as "true" expressions.
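
As an illustrative sketch only (not part of this change), and assuming the standard MicroProfile Config mappings used elsewhere in this guide, the label could be hidden for published datasets like this:

.. code-block:: bash

  # as a JVM system property (e.g., via Payara's asadmin)
  ./asadmin create-jvm-options "-Ddataverse.ui.show-validity-label-when-published=false"

  # or via the equivalent environment variable
  export DATAVERSE_UI_SHOW_VALIDITY_LABEL_WHEN_PUBLISHED=false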

.. _dataverse.signposting.level1-author-limit:

dataverse.signposting.level1-author-limit
@@ -3142,6 +3160,8 @@ Defaults to ``false``.
Can also be set via any `supported MicroProfile Config API source`_, e.g. the environment variable
``DATAVERSE_UI_ALLOW_REVIEW_FOR_INCOMPLETE``. Will accept ``[tT][rR][uU][eE]|1|[oO][nN]`` as "true" expressions.

.. _dataverse.ui.show-validity-filter:

dataverse.ui.show-validity-filter
+++++++++++++++++++++++++++++++++

8 changes: 3 additions & 5 deletions src/main/java/edu/harvard/iq/dataverse/DatasetPage.java
@@ -2296,13 +2296,11 @@ private void displayPublishMessage(){

public boolean isValid() {
if (valid == null) {
DatasetVersion version = dataset.getLatestVersion();
if (!version.isDraft()) {
if (workingVersion.isDraft() || (canUpdateDataset() && JvmSettings.UI_SHOW_VALIDITY_LABEL_WHEN_PUBLISHED.lookupOptional(Boolean.class).orElse(true))) {
valid = workingVersion.isValid();
} else {
valid = true;
}
DatasetVersion newVersion = version.cloneDatasetVersion();
newVersion.setDatasetFields(newVersion.initDatasetFields());
valid = newVersion.isValid();
}
return valid;
}
31 changes: 30 additions & 1 deletion src/main/java/edu/harvard/iq/dataverse/DatasetVersion.java
@@ -1728,7 +1728,36 @@ public List<ConstraintViolation<DatasetField>> validateRequired() {
}

public boolean isValid() {
return validate().isEmpty();
// first clone to leave the original untouched
final DatasetVersion newVersion = this.cloneDatasetVersion();
// initDatasetFields
newVersion.setDatasetFields(newVersion.initDatasetFields());
// remove special "N/A" values and empty values
newVersion.removeEmptyValues();
// check validity of present fields and detect missing mandatory fields
return newVersion.validate().isEmpty();
}

private void removeEmptyValues() {
if (this.getDatasetFields() != null) {
for (DatasetField dsf : this.getDatasetFields()) {
removeEmptyValues(dsf);
}
}
}

private void removeEmptyValues(DatasetField dsf) {
if (dsf.getDatasetFieldType().isPrimitive()) { // primitive
final Iterator<DatasetFieldValue> i = dsf.getDatasetFieldValues().iterator();
while (i.hasNext()) {
final String v = i.next().getValue();
if (StringUtils.isBlank(v) || DatasetField.NA_VALUE.equals(v)) {
i.remove();
}
}
} else {
dsf.getDatasetFieldCompoundValues().forEach(cv -> cv.getChildDatasetFields().forEach(v -> removeEmptyValues(v)));
}
}

public Set<ConstraintViolation> validate() {
8 changes: 8 additions & 0 deletions src/main/java/edu/harvard/iq/dataverse/Dataverse.java
@@ -411,6 +411,14 @@ public List<DataverseFieldTypeInputLevel> getDataverseFieldTypeInputLevels() {
return dataverseFieldTypeInputLevels;
}

public boolean isDatasetFieldTypeRequiredAsInputLevel(Long datasetFieldTypeId) {
for(DataverseFieldTypeInputLevel dataverseFieldTypeInputLevel : dataverseFieldTypeInputLevels) {
if (dataverseFieldTypeInputLevel.getDatasetFieldType().getId().equals(datasetFieldTypeId) && dataverseFieldTypeInputLevel.isRequired()) {
return true;
}
}
return false;
}

public Template getDefaultTemplate() {
return defaultTemplate;
16 changes: 11 additions & 5 deletions src/main/java/edu/harvard/iq/dataverse/FilePage.java
@@ -34,6 +34,7 @@
import edu.harvard.iq.dataverse.makedatacount.MakeDataCountLoggingServiceBean;
import edu.harvard.iq.dataverse.makedatacount.MakeDataCountLoggingServiceBean.MakeDataCountEntry;
import edu.harvard.iq.dataverse.privateurl.PrivateUrlServiceBean;
import edu.harvard.iq.dataverse.settings.JvmSettings;
import edu.harvard.iq.dataverse.settings.SettingsServiceBean;
import edu.harvard.iq.dataverse.util.BundleUtil;
import edu.harvard.iq.dataverse.util.FileUtil;
@@ -314,13 +315,18 @@ private void displayPublishMessage(){
}
}

Boolean valid = null;

public boolean isValid() {
if (!fileMetadata.getDatasetVersion().isDraft()) {
return true;
if (valid == null) {
final DatasetVersion workingVersion = fileMetadata.getDatasetVersion();
if (workingVersion.isDraft() || (canUpdateDataset() && JvmSettings.UI_SHOW_VALIDITY_LABEL_WHEN_PUBLISHED.lookupOptional(Boolean.class).orElse(true))) {
valid = workingVersion.isValid();
} else {
valid = true;
}
}
DatasetVersion newVersion = fileMetadata.getDatasetVersion().cloneDatasetVersion();
newVersion.setDatasetFields(newVersion.initDatasetFields());
return newVersion.isValid();
return valid;
}

private boolean canViewUnpublishedDataset() {
@@ -58,10 +58,18 @@ public List<MetadataBlock> listMetadataBlocksDisplayedOnCreate(Dataverse ownerDa

if (ownerDataverse != null) {
Root<Dataverse> dataverseRoot = criteriaQuery.from(Dataverse.class);
Join<Dataverse, DataverseFieldTypeInputLevel> datasetFieldTypeInputLevelJoin = dataverseRoot.join("dataverseFieldTypeInputLevels", JoinType.LEFT);

Predicate requiredPredicate = criteriaBuilder.and(
datasetFieldTypeInputLevelJoin.get("datasetFieldType").in(metadataBlockRoot.get("datasetFieldTypes")),
criteriaBuilder.isTrue(datasetFieldTypeInputLevelJoin.get("required")));

Predicate unionPredicate = criteriaBuilder.or(displayOnCreatePredicate, requiredPredicate);

criteriaQuery.where(criteriaBuilder.and(
criteriaBuilder.equal(dataverseRoot.get("id"), ownerDataverse.getId()),
metadataBlockRoot.in(dataverseRoot.get("metadataBlocks")),
displayOnCreatePredicate
unionPredicate
));
} else {
criteriaQuery.where(displayOnCreatePredicate);
3 changes: 3 additions & 0 deletions src/main/java/edu/harvard/iq/dataverse/Shib.java
@@ -59,6 +59,8 @@ public class Shib implements java.io.Serializable {
SettingsServiceBean settingsService;
@EJB
SystemConfig systemConfig;
@EJB
UserServiceBean userService;

HttpServletRequest request;

@@ -259,6 +261,7 @@ else if (ShibAffiliationOrder.equals("firstAffiliation")) {
state = State.REGULAR_LOGIN_INTO_EXISTING_SHIB_ACCOUNT;
logger.fine("Found user based on " + userPersistentId + ". Logging in.");
logger.fine("Updating display info for " + au.getName());
userService.updateLastLogin(au);
authSvc.updateAuthenticatedUser(au, displayInfo);
logInUserAndSetShibAttributes(au);
String prettyFacesHomePageString = getPrettyFacesHomePageString(false);