Database partition mode Part 2 #6409

Open · wants to merge 16 commits into base: master
@@ -0,0 +1,11 @@
---
type: perf
issue: 6409
title: "The JPA server will no longer use the HFJ_RES_VER_PROV table to store and index values from
the `Resource.meta.source` element. Beginning in HAPI FHIR 6.8.0 (and Smile CDR 2023.08.R01), a
new pair of columns has been used to store data for this element, so this change only affects
data which was stored in HAPI FHIR prior to version 6.8.0 (released August 2023). If you have
FHIR resources which were stored in a JPA server prior to this version, and you use the
Resource.meta.source element and/or the `_source` search parameter, you should perform a complete
reindex of your server to ensure that data is not lost. See the upgrade notes for more information.
"
@@ -0,0 +1,16 @@
# Upgrade Notes

The JPA server stores values for the field `Resource.meta.source` in dedicated columns in its database so that they can be indexed and searched as needed, using the `_source` Search Parameter.
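
For example, a client can locate resources by their declared source via the `_source` parameter. A minimal sketch using the HAPI FHIR generic client (the server base URL and source URI are illustrative placeholders, not values taken from this change):

```java
import ca.uhn.fhir.context.FhirContext;
import ca.uhn.fhir.rest.client.api.IGenericClient;
import org.hl7.fhir.r4.model.Bundle;

public class SourceSearchExample {
    public static void main(String[] args) {
        FhirContext ctx = FhirContext.forR4();
        // Placeholder base URL - point this at your own server
        IGenericClient client = ctx.newRestfulGenericClient("http://localhost:8080/fhir");

        // Find Patient resources whose Resource.meta.source matches the
        // given URI, using the _source search parameter
        Bundle results = client.search()
                .byUrl("Patient?_source=https://example.org/source-system")
                .returnBundle(Bundle.class)
                .execute();
    }
}
```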

Prior to HAPI FHIR 6.8.0 (and Smile CDR 2023.08.R01), these values were stored in a dedicated table called `HFJ_RES_VER_PROV`. Beginning in HAPI FHIR 6.8.0 (Smile CDR 2023.08.R01), two new columns were added to the `HFJ_RES_VER`
table, which store the same data and make it available for searching.

As of HAPI FHIR 8.0.0, the legacy table is no longer searched by default. If you do not have `Resource.meta.source` data stored in HAPI FHIR that was last created/updated prior to version 6.8.0, this change will not affect you and no action needs to be taken.

If you do have such data, follow these steps:

* Enable the `JpaStorageSettings` setting `setAccessMetaSourceInformationFromProvenanceTable(true)` to configure the server to continue using the legacy table (see the sketch after these steps).

* Perform a server resource reindex by invoking the [$reindex Operation (server)](https://smilecdr.com/docs/fhir_repository/search_parameter_reindexing.html#reindex-server) with the `optimizeStorage` parameter set to `ALL_VERSIONS`.

* When this reindex operation has successfully completed, the setting above can be disabled. Disabling this setting avoids an extra database round-trip when loading data, so this change will have a positive performance impact on your server.
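
The following is a minimal sketch of these steps using the HAPI FHIR Java API. It assumes an R4 server; the base URL is a placeholder, and in a real deployment the `JpaStorageSettings` bean would come from your existing server configuration rather than being constructed inline:

```java
import ca.uhn.fhir.context.FhirContext;
import ca.uhn.fhir.jpa.api.config.JpaStorageSettings;
import ca.uhn.fhir.rest.client.api.IGenericClient;
import org.hl7.fhir.r4.model.CodeType;
import org.hl7.fhir.r4.model.Parameters;

public class MetaSourceMigrationSketch {
    public static void main(String[] args) {
        // Step 1: keep reading Resource.meta.source from the legacy
        // HFJ_RES_VER_PROV table until the reindex below has completed
        JpaStorageSettings storageSettings = new JpaStorageSettings();
        storageSettings.setAccessMetaSourceInformationFromProvenanceTable(true);

        // Step 2: invoke the server-level $reindex operation, asking the
        // job to rewrite storage for all resource versions
        FhirContext ctx = FhirContext.forR4();
        // Placeholder base URL - point this at your own server
        IGenericClient client = ctx.newRestfulGenericClient("http://localhost:8080/fhir");

        Parameters inParams = new Parameters();
        inParams.addParameter("optimizeStorage", new CodeType("ALL_VERSIONS"));

        client.operation()
                .onServer()
                .named("$reindex")
                .withParameters(inParams)
                .execute();

        // Step 3: the reindex runs as an asynchronous batch job; once it
        // reports successful completion, disable the setting to avoid the
        // extra database round-trip when loading data
        storageSettings.setAccessMetaSourceInformationFromProvenanceTable(false);
    }
}
```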
@@ -119,11 +119,12 @@
 import ca.uhn.fhir.jpa.search.builder.predicate.NumberPredicateBuilder;
 import ca.uhn.fhir.jpa.search.builder.predicate.QuantityNormalizedPredicateBuilder;
 import ca.uhn.fhir.jpa.search.builder.predicate.QuantityPredicateBuilder;
+import ca.uhn.fhir.jpa.search.builder.predicate.ResourceHistoryPredicateBuilder;
+import ca.uhn.fhir.jpa.search.builder.predicate.ResourceHistoryProvenancePredicateBuilder;
 import ca.uhn.fhir.jpa.search.builder.predicate.ResourceIdPredicateBuilder;
 import ca.uhn.fhir.jpa.search.builder.predicate.ResourceLinkPredicateBuilder;
 import ca.uhn.fhir.jpa.search.builder.predicate.ResourceTablePredicateBuilder;
 import ca.uhn.fhir.jpa.search.builder.predicate.SearchParamPresentPredicateBuilder;
-import ca.uhn.fhir.jpa.search.builder.predicate.SourcePredicateBuilder;
 import ca.uhn.fhir.jpa.search.builder.predicate.StringPredicateBuilder;
 import ca.uhn.fhir.jpa.search.builder.predicate.TagPredicateBuilder;
 import ca.uhn.fhir.jpa.search.builder.predicate.TokenPredicateBuilder;
@@ -680,8 +681,15 @@ public TokenPredicateBuilder newTokenPredicateBuilder(SearchQueryBuilder theSear

     @Bean
     @Scope("prototype")
-    public SourcePredicateBuilder newSourcePredicateBuilder(SearchQueryBuilder theSearchBuilder) {
-        return new SourcePredicateBuilder(theSearchBuilder);
+    public ResourceHistoryPredicateBuilder newResourceHistoryPredicateBuilder(SearchQueryBuilder theSearchBuilder) {
+        return new ResourceHistoryPredicateBuilder(theSearchBuilder);
     }
 
+    @Bean
+    @Scope("prototype")
+    public ResourceHistoryProvenancePredicateBuilder newResourceHistoryProvenancePredicateBuilder(
+            SearchQueryBuilder theSearchBuilder) {
+        return new ResourceHistoryProvenancePredicateBuilder(theSearchBuilder);
+    }
+
     @Bean
@@ -30,7 +30,6 @@
 import ca.uhn.fhir.jpa.api.svc.ISearchCoordinatorSvc;
 import ca.uhn.fhir.jpa.dao.ISearchBuilder;
 import ca.uhn.fhir.jpa.dao.SearchBuilderFactory;
-import ca.uhn.fhir.jpa.dao.data.IResourceSearchViewDao;
 import ca.uhn.fhir.jpa.dao.data.IResourceTagDao;
 import ca.uhn.fhir.jpa.dao.tx.HapiTransactionService;
 import ca.uhn.fhir.jpa.model.config.PartitionSettings;
@@ -90,9 +89,6 @@ public class SearchConfig {
     @Autowired
     private DaoRegistry myDaoRegistry;
 
-    @Autowired
-    private IResourceSearchViewDao myResourceSearchViewDao;
-
     @Autowired
     private FhirContext myContext;
 
@@ -172,7 +168,6 @@ public ISearchBuilder newSearchBuilder(
                 myInterceptorBroadcaster,
                 myResourceTagDao,
                 myDaoRegistry,
-                myResourceSearchViewDao,
                 myContext,
                 myIdHelperService,
                 theResourceType);
@@ -57,7 +57,6 @@
 import ca.uhn.fhir.jpa.model.entity.BaseHasResource;
 import ca.uhn.fhir.jpa.model.entity.BaseTag;
 import ca.uhn.fhir.jpa.model.entity.ResourceEncodingEnum;
-import ca.uhn.fhir.jpa.model.entity.ResourceHistoryProvenanceEntity;
 import ca.uhn.fhir.jpa.model.entity.ResourceHistoryTable;
 import ca.uhn.fhir.jpa.model.entity.ResourceLink;
 import ca.uhn.fhir.jpa.model.entity.ResourceTable;
@@ -741,8 +740,8 @@ protected EncodedResource populateResourceIntoEntity(
         } else {
             ResourceHistoryTable currentHistoryVersion = theEntity.getCurrentVersionEntity();
             if (currentHistoryVersion == null) {
-                currentHistoryVersion = myResourceHistoryTableDao.findForIdAndVersionAndFetchProvenance(
-                        theEntity.getId(), theEntity.getVersion());
+                currentHistoryVersion =
+                        myResourceHistoryTableDao.findForIdAndVersion(theEntity.getId(), theEntity.getVersion());
             }
             if (currentHistoryVersion == null || !currentHistoryVersion.hasResource()) {
                 changed = true;
@@ -1480,8 +1479,8 @@ private void createHistoryEntry(
          * this could return null if the current resourceVersion has been expunged
          * in which case we'll still create a new one
          */
-        historyEntry = myResourceHistoryTableDao.findForIdAndVersionAndFetchProvenance(
-                theEntity.getResourceId(), resourceVersion - 1);
+        historyEntry =
+                myResourceHistoryTableDao.findForIdAndVersion(theEntity.getResourceId(), resourceVersion - 1);
         if (historyEntry != null) {
             reusingHistoryEntity = true;
             theEntity.populateHistoryEntityVersionAndDates(historyEntry);
@@ -1539,29 +1538,12 @@ private void createHistoryEntry(
         boolean haveSource = isNotBlank(source) && shouldStoreSource;
         boolean haveRequestId = isNotBlank(requestId) && shouldStoreRequestId;
         if (haveSource || haveRequestId) {
-            ResourceHistoryProvenanceEntity provenance = null;
-            if (reusingHistoryEntity) {
-                /*
-                 * If version history is disabled, then we may be reusing
-                 * a previous history entity. If that's the case, let's try
-                 * to reuse the previous provenance entity too.
-                 */
-                provenance = historyEntry.getProvenance();
-            }
-            if (provenance == null) {
-                provenance = historyEntry.toProvenance();
-            }
-            provenance.setResourceHistoryTable(historyEntry);
-            provenance.setResourceTable(theEntity);
-            provenance.setPartitionId(theEntity.getPartitionId());
             if (haveRequestId) {
                 String persistedRequestId = left(requestId, Constants.REQUEST_ID_LENGTH);
-                provenance.setRequestId(persistedRequestId);
                 historyEntry.setRequestId(persistedRequestId);
             }
             if (haveSource) {
                 String persistedSource = left(source, ResourceHistoryTable.SOURCE_URI_LENGTH);
-                provenance.setSourceUri(persistedSource);
                 historyEntry.setSourceUri(persistedSource);
             }
             if (theResource != null) {
@@ -1571,8 +1553,6 @@
                         shouldStoreRequestId ? requestId : null,
                         theResource);
             }
-
-            myEntityManager.persist(provenance);
         }
     }
 
@@ -43,6 +43,7 @@
 import ca.uhn.fhir.jpa.api.model.ExpungeOutcome;
 import ca.uhn.fhir.jpa.api.model.LazyDaoMethodOutcome;
 import ca.uhn.fhir.jpa.api.svc.IIdHelperService;
+import ca.uhn.fhir.jpa.dao.data.IResourceHistoryProvenanceDao;
 import ca.uhn.fhir.jpa.dao.tx.HapiTransactionService;
 import ca.uhn.fhir.jpa.delete.DeleteConflictUtil;
 import ca.uhn.fhir.jpa.model.cross.IBasePersistedResource;
@@ -51,6 +52,7 @@
 import ca.uhn.fhir.jpa.model.entity.BaseTag;
 import ca.uhn.fhir.jpa.model.entity.PartitionablePartitionId;
 import ca.uhn.fhir.jpa.model.entity.ResourceEncodingEnum;
+import ca.uhn.fhir.jpa.model.entity.ResourceHistoryProvenanceEntity;
 import ca.uhn.fhir.jpa.model.entity.ResourceHistoryTable;
 import ca.uhn.fhir.jpa.model.entity.ResourceTable;
 import ca.uhn.fhir.jpa.model.entity.TagDefinition;
@@ -204,6 +206,9 @@ public abstract class BaseHapiFhirResourceDao<T extends IBaseResource> extends B
     @Autowired
     private IJobCoordinator myJobCoordinator;
 
+    @Autowired
+    private IResourceHistoryProvenanceDao myResourceHistoryProvenanceDao;
+
     private IInstanceValidatorModule myInstanceValidator;
     private String myResourceName;
     private Class<T> myResourceType;
@@ -1000,7 +1005,7 @@ public void beforeCommit(boolean readOnly) {

     protected ResourceTable updateEntityForDelete(
             RequestDetails theRequest, TransactionDetails theTransactionDetails, ResourceTable theEntity) {
-        myResourceSearchUrlSvc.deleteByResId(theEntity.getId());
+        myResourceSearchUrlSvc.deleteByResId(JpaPid.fromId(theEntity.getId()));
         Date updateTime = new Date();
         return updateEntity(theRequest, null, theEntity, updateTime, true, true, theTransactionDetails, false, true);
     }
@@ -1245,7 +1250,7 @@ public IBundleProvider history(
         return myPersistedJpaBundleProviderFactory.history(
                 theRequest,
                 myResourceName,
-                entity.getId(),
+                JpaPid.fromId(entity.getId()),
                 theSince,
                 theUntil,
                 theOffset,
@@ -1275,7 +1280,7 @@ public IBundleProvider history(
         return myPersistedJpaBundleProviderFactory.history(
                 theRequest,
                 myResourceName,
-                entity.getId(),
+                JpaPid.fromId(entity.getId()),
                 theHistorySearchDateRangeParam.getLowerBoundAsInstant(),
                 theHistorySearchDateRangeParam.getUpperBoundAsInstant(),
                 theHistorySearchDateRangeParam.getOffset(),
@@ -1375,8 +1380,8 @@ protected <MT extends IBaseMetaType> void doMetaAddOperation(
             doMetaAdd(theMetaAdd, latestVersion, theRequest, transactionDetails);
 
             // Also update history entry
-            ResourceHistoryTable history = myResourceHistoryTableDao.findForIdAndVersionAndFetchProvenance(
-                    entity.getId(), entity.getVersion());
+            ResourceHistoryTable history =
+                    myResourceHistoryTableDao.findForIdAndVersion(entity.getId(), entity.getVersion());
             doMetaAdd(theMetaAdd, history, theRequest, transactionDetails);
         }
 
@@ -1423,8 +1428,8 @@ public <MT extends IBaseMetaType> void doMetaDeleteOperation(
         } else {
             doMetaDelete(theMetaDel, latestVersion, theRequest, transactionDetails);
             // Also update history entry
-            ResourceHistoryTable history = myResourceHistoryTableDao.findForIdAndVersionAndFetchProvenance(
-                    entity.getId(), entity.getVersion());
+            ResourceHistoryTable history =
+                    myResourceHistoryTableDao.findForIdAndVersion(entity.getId(), entity.getVersion());
             doMetaDelete(theMetaDel, history, theRequest, transactionDetails);
         }
 
@@ -1694,7 +1699,7 @@ private void reindexOptimizeStorage(
             int pageSize = 100;
             for (int page = 0; ((long) page * pageSize) < entity.getVersion(); page++) {
                 Slice<ResourceHistoryTable> historyEntities =
-                        myResourceHistoryTableDao.findForResourceIdAndReturnEntitiesAndFetchProvenance(
+                        myResourceHistoryTableDao.findAllVersionsExceptSpecificForResourcePid(
                                 PageRequest.of(page, pageSize), entity.getId(), historyEntity.getVersion());
                 for (ResourceHistoryTable next : historyEntities) {
                     reindexOptimizeStorageHistoryEntity(entity, next);
@@ -1716,11 +1721,18 @@ private void reindexOptimizeStorageHistoryEntity(ResourceTable entity, ResourceH
                 }
             }
         }
-        if (isBlank(historyEntity.getSourceUri()) && isBlank(historyEntity.getRequestId())) {
-            if (historyEntity.getProvenance() != null) {
-                historyEntity.setSourceUri(historyEntity.getProvenance().getSourceUri());
-                historyEntity.setRequestId(historyEntity.getProvenance().getRequestId());
-                changed = true;
+        if (myStorageSettings.isAccessMetaSourceInformationFromProvenanceTable()) {
+            if (isBlank(historyEntity.getSourceUri()) && isBlank(historyEntity.getRequestId())) {
+                Long id = historyEntity.getId();
+                Optional<ResourceHistoryProvenanceEntity> provenanceEntityOpt =
+                        myResourceHistoryProvenanceDao.findById(id);
+                if (provenanceEntityOpt.isPresent()) {
+                    ResourceHistoryProvenanceEntity provenanceEntity = provenanceEntityOpt.get();
+                    historyEntity.setSourceUri(provenanceEntity.getSourceUri());
+                    historyEntity.setRequestId(provenanceEntity.getRequestId());
+                    myResourceHistoryProvenanceDao.delete(provenanceEntity);
+                    changed = true;
+                }
             }
         }
         if (changed) {
@@ -2457,7 +2469,7 @@ protected DaoMethodOutcome doUpdateForUpdateOrPatch(
             ResourceTable entity = (ResourceTable) theEntity;
             if (entity.isSearchUrlPresent()) {
                 myResourceSearchUrlSvc.deleteByResId(
-                        (Long) theEntity.getPersistentId().getId());
+                        JpaPid.fromId((Long) theEntity.getPersistentId().getId()));
                 entity.setSearchUrlPresent(false);
             }
 
@@ -206,7 +206,7 @@ public <P extends IResourcePersistentId> void preFetchResources(
          * However, for realistic average workloads, this should reduce the number of round trips.
          */
         if (idChunk.size() >= 2) {
-            List<ResourceTable> entityChunk = prefetchResourceTableHistoryAndProvenance(idChunk);
+            List<ResourceTable> entityChunk = prefetchResourceTableAndHistory(idChunk);
 
             if (thePreFetchIndexes) {
 
@@ -244,14 +244,13 @@
     }
 
     @Nonnull
-    private List<ResourceTable> prefetchResourceTableHistoryAndProvenance(List<Long> idChunk) {
+    private List<ResourceTable> prefetchResourceTableAndHistory(List<Long> idChunk) {
         assert idChunk.size() < SearchConstants.MAX_PAGE_SIZE : "assume pre-chunked";
 
         Query query = myEntityManager.createQuery("select r, h "
                 + " FROM ResourceTable r "
                 + " LEFT JOIN fetch ResourceHistoryTable h "
                 + "     on r.myVersion = h.myResourceVersion and r.id = h.myResourceId "
-                + " left join fetch h.myProvenance "
                 + " WHERE r.myId IN ( :IDS ) ");
         query.setParameter("IDS", idChunk);
 
@@ -218,7 +218,7 @@ private ISearchQueryExecutor doSearch(

         // indicate param was already processed, otherwise queries DB to process it
         theParams.setOffset(null);
-        return SearchQueryExecutors.from(longs);
+        return SearchQueryExecutors.from(JpaPid.fromLongList(longs));
     }
 
     private int getMaxFetchSize(SearchParameterMap theParams, Integer theMax) {
private int getMaxFetchSize(SearchParameterMap theParams, Integer theMax) {
@@ -385,7 +385,6 @@ public List<IResourcePersistentId> search(
     @SuppressWarnings("rawtypes")
     private List<IResourcePersistentId> toList(ISearchQueryExecutor theSearchResultStream, long theMaxSize) {
         return StreamSupport.stream(Spliterators.spliteratorUnknownSize(theSearchResultStream, 0), false)
-                .map(JpaPid::fromId)
                 .limit(theMaxSize)
                 .collect(Collectors.toList());
     }
@@ -40,14 +40,16 @@
 import jakarta.persistence.criteria.CriteriaBuilder;
 import jakarta.persistence.criteria.CriteriaQuery;
 import jakarta.persistence.criteria.Expression;
-import jakarta.persistence.criteria.JoinType;
 import jakarta.persistence.criteria.Predicate;
 import jakarta.persistence.criteria.Root;
 import jakarta.persistence.criteria.Subquery;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 import org.springframework.beans.factory.annotation.Autowired;
 
+import java.time.Instant;
+import java.time.ZoneId;
+import java.time.ZonedDateTime;
 import java.util.ArrayList;
 import java.util.Date;
 import java.util.List;
@@ -122,8 +124,6 @@ public List<ResourceHistoryTable> fetchEntities(

         addPredicatesToQuery(cb, thePartitionId, criteriaQuery, from, theHistorySearchStyle);
 
-        from.fetch("myProvenance", JoinType.LEFT);
-
         /*
          * The sort on myUpdated is the important one for _history operations, but there are
          * cases where multiple pages of results all have the exact same myUpdated value (e.g.
@@ -242,8 +242,23 @@ private void addPredicateForAtQueryParameter(
         Subquery<Date> pastDateSubQuery = theQuery.subquery(Date.class);
         Root<ResourceHistoryTable> subQueryResourceHistory = pastDateSubQuery.from(ResourceHistoryTable.class);
         Expression myUpdatedMostRecent = theCriteriaBuilder.max(subQueryResourceHistory.get("myUpdated"));
+
+        /*
+         * This conversion from the Date in myRangeEndInclusive into a ZonedDateTime is an experiment -
+         * There is an intermittent test failure in testSearchHistoryWithAtAndGtParameters() that I can't
+         * figure out. But I've added a ton of logging to the error it fails with and I noticed that
+         * we emit SQL along the lines of
+         *     select coalesce(max(rht2_0.RES_UPDATED), timestamp with time zone '2024-10-05 18:24:48.172000000Z')
+         * for this date, and all other dates are in GMT so this is an experiment. If nothing changes,
+         * we can roll this back to
+         *     theCriteriaBuilder.literal(myRangeStartInclusive)
+         * JA 20241005
+         */
+        ZonedDateTime rangeStart =
+                ZonedDateTime.ofInstant(Instant.ofEpochMilli(myRangeStartInclusive.getTime()), ZoneId.of("GMT"));
+
         Expression myUpdatedMostRecentOrDefault =
-                theCriteriaBuilder.coalesce(myUpdatedMostRecent, theCriteriaBuilder.literal(myRangeStartInclusive));
+                theCriteriaBuilder.coalesce(myUpdatedMostRecent, theCriteriaBuilder.literal(rangeStart));
 
         pastDateSubQuery
                 .select(myUpdatedMostRecentOrDefault)
@@ -19,15 +19,15 @@
  */
 package ca.uhn.fhir.jpa.dao;
 
+import ca.uhn.fhir.jpa.model.dao.JpaPid;
 import ca.uhn.fhir.jpa.model.entity.BaseTag;
 import ca.uhn.fhir.jpa.model.entity.IBaseResourceEntity;
-import ca.uhn.fhir.jpa.model.entity.ResourceTag;
 import jakarta.annotation.Nullable;
 import org.hl7.fhir.instance.model.api.IBaseResource;
 
 import java.util.Collection;
 
-public interface IJpaStorageResourceParser extends IStorageResourceParser {
+public interface IJpaStorageResourceParser extends IStorageResourceParser<JpaPid> {
 
     /**
      * Convert a storage entity into a FHIR resource model instance. This method may return null if the entity is not
@@ -36,7 +36,7 @@ public interface IJpaStorageResourceParser extends IStorageResourceParser {
     <R extends IBaseResource> R toResource(
             Class<R> theResourceType,
             IBaseResourceEntity theEntity,
-            Collection<ResourceTag> theTagList,
+            Collection<BaseTag> theTagList,
             boolean theForHistoryOperation);
 
     /**