
[BACKLOG-39705] Remove from Region Cache not removing 1st level cache… #5511

Merged (1 commit, Jan 31, 2024)

Conversation

tkafalas
Contributor

… entry.

@tkafalas tkafalas requested a review from a team as a code owner January 30, 2024 16:54
@NJtwentyone
Contributor

NJtwentyone commented Jan 30, 2024

I normally don't care for updating commit messages, but I feel in this instance it makes sense to add [PPP-4826], since there isn't really any direct code change to anything related to Manage Data Sources in PUC. Also, it might prevent any issues with backporting this change to previous releases.

@tkafalas
Contributor Author

tkafalas commented Jan 30, 2024

Adding [PPP-4826] is probably a good idea. I leave the proof to the QA team; I have seen it work locally. Please feel free to run it through the IT tests, since you are in a good position to do that.

@NJtwentyone
Contributor

@tkafalas Can you provide any proof that the code change works? Either the steps to reproduce noted in https://hv-eng.atlassian.net/browse/BACKLOG-39705, or some execution of the integration tests mentioned here: https://hv-eng.atlassian.net/browse/BACKLOG-39705?focusedCommentId=1940152

For my POC fix (https://hv-eng.atlassian.net/browse/BACKLOG-39705?focusedCommentId=1940968) I had to update a couple more functions to get the integration test to pass, like CacheManager#clearRegionCache( String region ).

@@ -400,7 +400,10 @@ public Set getAllEntriesFromRegionCache( String region ) {
  public void removeFromRegionCache( String region, Object key ) {
    if ( checkRegionEnabled( region ) ) {
      HvCache hvcache = (HvCache) regionCache.get( region );
      hvcache.evictEntityData( (String) key );
      try ( SessionImpl session = (SessionImpl) hvcache.getSessionFactory().openSession() ) {
Contributor

@NJtwentyone NJtwentyone Jan 30, 2024


Not a blocker

For CacheManager#removeFromRegionCache, shouldn't there be a catch statement and at least a log line here, instead of swallowing the error?
The log level can be [error, debug, trace], just so people can debug it in the field without a code change.
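A minimal sketch of the pattern suggested above: keep the try-with-resources (so the session always closes) but catch, log, and rethrow rather than letting the failure propagate silently. The Session class, openSession, and removeFromCache below are self-contained stand-ins for the real HvCache/SessionImpl types, not the actual Hibernate API; java.util.logging stands in for the platform's logger.

```java
import java.util.logging.Level;
import java.util.logging.Logger;

public class RemoveFromCacheSketch {
  private static final Logger LOG = Logger.getLogger( RemoveFromCacheSketch.class.getName() );

  // Stand-in for SessionImpl: AutoCloseable so try-with-resources applies.
  static class Session implements AutoCloseable {
    boolean closed;
    @Override public void close() { closed = true; }
  }

  static Session lastSession; // exposed only so the demo can verify closing

  static void removeFromRegionCache( String region, Object key ) {
    try ( Session session = openSession() ) {
      lastSession = session;
      removeFromCache( key, session ); // may throw
    } catch ( RuntimeException e ) {
      // Log at a level support can raise in the field, then rethrow:
      // the error still bubbles up, but now leaves a trace.
      LOG.log( Level.FINE, "removeFromRegionCache failed for region "
          + region + ", key " + key, e );
      throw e;
    }
  }

  static Session openSession() { return new Session(); }

  static void removeFromCache( Object key, Session s ) {
    if ( key == null ) {
      throw new IllegalArgumentException( "null key" );
    }
  }

  public static void main( String[] args ) {
    removeFromRegionCache( "reportCache", "entry-1" );
    System.out.println( "session closed: " + lastSession.closed );
    boolean rethrown = false;
    try {
      removeFromRegionCache( "reportCache", null );
    } catch ( IllegalArgumentException e ) {
      rethrown = true;
    }
    System.out.println( "exception rethrown: " + rethrown );
    System.out.println( "session closed after failure: " + lastSession.closed );
  }
}
```

Note that try-with-resources already guarantees the session is closed on both paths; the catch block only adds observability.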

Contributor Author


The try is there to guarantee that the open session gets closed; an exception will still bubble up.

hvcache.evictEntityData( (String) key );
try ( SessionImpl session = (SessionImpl) hvcache.getSessionFactory().openSession() ) {
hvcache.getStorageAccess().removeFromCache( key, session );
hvcache.evictEntityData( (String) key );
Contributor

@NJtwentyone NJtwentyone Jan 30, 2024


Not a blocker.

similar comment for https://github.com/pentaho/pentaho-platform/pull/5511/files#r1471722021

shouldn't we add a log statement with level [error, debug, trace, ...] for MappingException in the class that handles hvcache.evictEntityData( (String) key );

  @Override public void evictEntityData( String entityName ) {
    try {
      evictEntityData( getSessionFactory().getMetamodel().entityPersister( entityName ) );
    } catch ( MappingException e ) {
      // Nothing to do if the entry is not there.
    }
  }

The MappingException was being swallowed, and there was no way to find this out except for a developer attaching a debugger.
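A sketch of what that suggestion could look like: the missing mapping is still treated as non-fatal, but a debug-level log line records it so support can diagnose it in the field without a debugger. MappingException is simulated with a local exception class here, and entityPersister with a stub; all names are illustrative stand-ins, not the real Hibernate types.

```java
import java.util.logging.Level;
import java.util.logging.Logger;

public class EvictSketch {
  private static final Logger LOG = Logger.getLogger( EvictSketch.class.getName() );

  // Stand-in for org.hibernate.MappingException.
  static class MappingException extends RuntimeException {
    MappingException( String msg ) { super( msg ); }
  }

  static int swallowed; // exposed only so the demo can count catches

  static void evictEntityData( String entityName ) {
    try {
      entityPersister( entityName ); // throws if the entity has no mapping
    } catch ( MappingException e ) {
      // Still nothing to evict, but record why, instead of a bare swallow.
      swallowed++;
      LOG.log( Level.FINE, "No 2nd-level cache mapping for entity " + entityName, e );
    }
  }

  // Stub: only one entity name is "mapped" in this sketch.
  static void entityPersister( String entityName ) {
    if ( !"CarteStatus".equals( entityName ) ) {
      throw new MappingException( "Unknown entity: " + entityName );
    }
  }

  public static void main( String[] args ) {
    evictEntityData( "CarteStatus" ); // mapped: nothing caught or logged
    evictEntityData( "ReportEntry" ); // unmapped: caught and logged, not rethrown
    System.out.println( "swallowed: " + swallowed );
  }
}
```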

Contributor Author


I don't believe so. That exception happens when trying to remove the 2nd-level cache entry, which is only used by the carteStatusCache to persist the log XML to a file. The fact that there is no 2nd-level cache persister for these entries is not an error, and logging it would just produce spam.

Contributor


The use case would be if we need to debug a customer's environment that we don't have access to, but they can give us log files. That way we don't have to deploy custom jars, and we can't attach debuggers there anyway. Customer support, or whoever, would simply have to adjust logging levels in log4j.

Whether you find it useful and address it or not, I'm fine either way.
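The point about adjusting log levels in the field can be illustrated with a small self-contained demo (using java.util.logging as a stand-in for log4j): a debug-level line is invisible at the default production level and costs nothing, but becomes visible purely by raising the logger's level in configuration, with no code change or custom jar. The logger name and messages below are hypothetical.

```java
import java.util.logging.Handler;
import java.util.logging.Level;
import java.util.logging.LogRecord;
import java.util.logging.Logger;

public class LogLevelSketch {
  public static void main( String[] args ) {
    Logger log = Logger.getLogger( "cache" );
    log.setUseParentHandlers( false ); // keep the demo's output deterministic

    // Capture whatever the logger actually emits.
    final StringBuilder seen = new StringBuilder();
    Handler capture = new Handler() {
      @Override public void publish( LogRecord r ) { seen.append( r.getMessage() ).append( '\n' ); }
      @Override public void flush() {}
      @Override public void close() {}
    };
    capture.setLevel( Level.ALL );
    log.addHandler( capture );

    log.setLevel( Level.INFO );         // typical production default
    log.fine( "evict miss (hidden)" );  // filtered out, costs nothing

    log.setLevel( Level.FINE );         // support raises the level via config
    log.fine( "evict miss (visible)" ); // now emitted

    System.out.println( seen.toString().trim() );
  }
}
```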

@buildguy
Collaborator

👍 Frogbot scanned this pull request and found that it did not add vulnerable dependencies.

Note:

Frogbot also supports Contextual Analysis, Secret Detection, IaC and SAST Vulnerabilities Scanning. These features are included as part of the JFrog Advanced Security package, which isn't enabled on your system.


@buildguy
Collaborator

⚠️ Build finished in 1h 30m 51s

Build command:

mvn clean verify -B -e -Daudit -amd -pl extensions

⛔ Failed Tests

⛈️ 1 test(s) failed:

org.pentaho.platform.plugin.services.exporter.PentahoPlatformExporterTest.testExportSchedules_SchedulereThrowsException (click to expand)


Wanted but not invoked:
iScheduler.getJobs(null);
-> at org.pentaho.platform.plugin.services.exporter.PentahoPlatformExporterTest.testExportSchedules_SchedulereThrowsException(PentahoPlatformExporterTest.java:154)
Actually, there were zero interactions with this mock.

Tests run: 1544, Failures: 1, Skipped: 6    Test Results


ℹ️ This is an automatic message

@peterrinehart peterrinehart merged commit f6a8432 into pentaho:master Jan 31, 2024
3 of 5 checks passed