
Cyclic detach bug is still present #2368

Open
chicko12345 opened this issue Feb 19, 2025 · 1 comment

@chicko12345

Hi Community,

we still have the same problem as described here:
#1577

A bug fix for that issue was provided here:
#2037

Thank you very much! However, when bigger datasets are used, a StackOverflowError is still thrown.

Our environment:

  • Glassfish 7.0.22
  • EclipseLink 4.0.5
  • Java/JDK version JavaSE 21
  • Oracle 19C

I've experimentally reverted:

  1. Revert "detach() fixes (#2037) (#2225 - backport from master to 4.0)"
    This reverts commit 8e9cb33.
  2. Revert "Bug541873 - ENTITYMANAGER.DETACH() TRIGGERS LAZY LOADING INTO THE PERSISTENCE CONTEXT (#348)"
    This reverts commit b8a573d.

With these reverts, the big datasets are processed without problems in our case.

Since I was asked by Mr. Felcman to be more specific about the "bigger datasets" and the data constellation that causes the error, I took the code already provided for issue #1577
jpa-bug-1577-StackOverflowErrorBecauseOfCyclicDETACH.tar.gz

and modified it so that the error can be reproduced:

jpa-bug-1577-StackOverflowErrorBecauseOfCyclicDETACH.tar.gz

The class
com.oracle.jpa.bugtest.TestBug

contains a field
private static final long NUMBER_OF_GROUP_ITEMS
which is set to 1000. With this value the bug is already reproducible. If it is set to 500, the StackOverflowError isn't thrown and the test succeeds.
Note: The reverted version can deal with 100k group items without any problems.
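
For readers without the archive, a rough sketch of the reproducer's shape follows. The entity and field names come from the test described above; the mappings, annotations, and the test outline are my assumptions, and the attached archive remains authoritative.

import java.util.ArrayList;
import java.util.List;
import jakarta.persistence.*;

// Assumed shape of the test entities: a bidirectional (hence cyclic)
// relation, which is what the detach traversal follows.
@Entity
@Table(name = "TEST_GROUP") // GROUP is a reserved word in Oracle
class Group {
    @Id @GeneratedValue Long id;
    @OneToMany(mappedBy = "group", cascade = CascadeType.ALL)
    List<GroupItem> items = new ArrayList<>();
}

@Entity
class GroupItem {
    @Id @GeneratedValue Long id;
    @ManyToOne Group group;
}

class TestBugOutline {
    static final long NUMBER_OF_GROUP_ITEMS = 1000; // 500 passes, 1000 overflows

    // Assumed outline of the failing part of TestBug: after persisting one
    // Group with NUMBER_OF_GROUP_ITEMS items and committing, the group and
    // each of its items are detached.
    static void detachAll(EntityManager em, Group group) {
        em.detach(group);                    // "Started detaching group"
        for (GroupItem item : group.items) { // "Started detaching groupitems"
            em.detach(item);
        }
    }
}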

What I also noticed is that the current implementation can cause performance problems, even with 500 group items:

----Reverted version of eclipse.persistence.core, 500 group items----
Started Insert data
Finished Insert data 0.192 seconds
Started Commit
Finished Commit 0.586 seconds
Started detaching group
Finished detaching group 0.001 seconds
Started detaching groupitems
Finished detaching groupitems 0.052 seconds
Started Commit
Finished Commit 0.0 seconds

----Original version (4.0.5) of eclipse.persistence.core, 500 group items----
Started Insert data
Finished Insert data 0.187 seconds
Started Commit
Finished Commit 0.684 seconds
Started detaching group
Finished detaching group 0.175 seconds
Started detaching groupitems
Finished detaching groupitems 38.107 seconds
Started Commit
Finished Commit 0.004 seconds

I hope this helps.

Kind Regards

@chicko12345 (Author)

Hi Community,

@rfelcman
I took a look at the discussion in #1577.
@mister48 stated that, from his point of view, the problem lies in
org.eclipse.persistence.internal.descriptors.DescriptorIterator and the content of the IdentityHashMap visitedObjects:
different objects representing the same entity are treated as different entities.
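
To illustrate that diagnosis with a small standalone example of my own (not EclipseLink code): an IdentityHashMap compares keys by reference, so a second instance representing the same row is not recognized as already visited, while an equality-based lookup would recognize it.

import java.util.HashMap;
import java.util.IdentityHashMap;
import java.util.Map;

public class VisitedObjectsDemo {
    // stands in for an entity with an id-based equals()/hashCode()
    record EntityKey(Class<?> type, Long id) {}

    public static void main(String[] args) {
        Object a = new EntityKey(String.class, 1L);
        Object b = new EntityKey(String.class, 1L); // same "entity", different instance

        Map<Object, Object> byIdentity = new IdentityHashMap<>();
        byIdentity.put(a, a);
        System.out.println(byIdentity.containsKey(b)); // false: b is traversed again

        Map<Object, Object> byEquality = new HashMap<>();
        byEquality.put(a, a);
        System.out.println(byEquality.containsKey(b)); // true: b is recognized as visited
    }
}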
So I tried to verify this and extended the DescriptorIterator code on the official 4.0 branch (without any reverts) to check whether he's right.

//+
// second map, keyed by hashCode() instead of object identity
protected Map visitedObjects2;
//...
protected DescriptorIterator() {
    //...
    //+
    this.visitedObjects2 = new HashMap();
    //...
}
//...
//+
public Map getVisitedObjects2() {
    return visitedObjects2;
}
//...
public void iterateReferenceObjectForMapping(Object referenceObject, DatabaseMapping mapping) {
    //...
    // Added a check on visitedObjects2 next to the existing check on visitedObjects.
    if (this.usesGroup && this.shouldTrackCurrentGroup) {
        Set visited = (Set) getVisitedObjects().get(referenceObject);
        Set visited2 = (Set) getVisitedObjects2().get(referenceObject.hashCode());
        if (visited == null && visited2 == null) {
            visited = new HashSet(1);
            visited.add(this.currentItem.getGroup());
            getVisitedObjects().put(referenceObject, visited);

            visited2 = new HashSet(1);
            visited2.add(this.currentItem.getGroup());
            getVisitedObjects2().put(referenceObject.hashCode(), visited2);
        } else {
            // Null-safe: only one of the two maps may contain an entry when a
            // different instance with an equal hashCode was visited before.
            if ((visited != null && visited.contains(this.currentItem.getGroup()))
                    || (visited2 != null && visited2.contains(this.currentItem.getGroup()))) {
                // the source object has already been visited with an equal group
                return;
            }
            if (visited == null) {
                visited = new HashSet(1);
                getVisitedObjects().put(referenceObject, visited);
            }
            if (visited2 == null) {
                visited2 = new HashSet(1);
                getVisitedObjects2().put(referenceObject.hashCode(), visited2);
            }
            visited.add(this.currentItem.getGroup());
            visited2.add(this.currentItem.getGroup());
        }
    } else {
        // Check if already processed.
        if (getVisitedObjects().containsKey(referenceObject)
                || getVisitedObjects2().containsKey(referenceObject.hashCode())) {
            return;
        }
        getVisitedObjects().put(referenceObject, referenceObject);
        getVisitedObjects2().put(referenceObject.hashCode(), referenceObject);
    }
    //...
}
//...
public void startIterationOn(Object sourceObject, AttributeGroup group) {
    //...
    // Same additional check on visitedObjects2 as above.
    if (this.usesGroup && this.shouldTrackCurrentGroup) {
        Set visited = (Set) getVisitedObjects().get(sourceObject);
        Set visited2 = (Set) getVisitedObjects2().get(sourceObject.hashCode());
        if (visited == null && visited2 == null) {
            visited = new HashSet(1);
            visited.add(group);
            getVisitedObjects().put(sourceObject, visited);

            visited2 = new HashSet(1);
            visited2.add(group);
            getVisitedObjects2().put(sourceObject.hashCode(), visited2);
        } else {
            if ((visited != null && visited.contains(group))
                    || (visited2 != null && visited2.contains(group))) {
                // the source object has already been visited with an equal group
                return;
            }
            if (visited == null) {
                visited = new HashSet(1);
                getVisitedObjects().put(sourceObject, visited);
            }
            if (visited2 == null) {
                visited2 = new HashSet(1);
                getVisitedObjects2().put(sourceObject.hashCode(), visited2);
            }
            visited.add(group);
            visited2.add(group);
        }
    } else {
        if (getVisitedObjects().containsKey(sourceObject)
                || getVisitedObjects2().containsKey(sourceObject.hashCode())) {
            return;
        }
        getVisitedObjects().put(sourceObject, sourceObject);
        getVisitedObjects2().put(sourceObject.hashCode(), sourceObject);
    }
    //...
}

Unfortunately, I cannot build the whole EclipseLink repo with tests: something is wrong with the bundles/tests project, even when using the jdk11+ profile; as far as I understand, pax.exam cannot be transferred via HTTPS, so I've excluded the bundles/tests project from the test run.
All other tests pass. In addition, I've tested one of our realistic "bigger datasets" in our environment with the patched version: no stack overflows any more. The dataset can be processed without problems, but with a performance decrease of 7% compared to the reverted version.
Our regression tests are also successful.

However, this change doesn't help at all, and even reduces performance further, if the entities aren't implemented correctly.
In the provided
jpa-bug-1577-StackOverflowErrorBecauseOfCyclicDETACH.tar.gz
hashCode() isn't implemented in Group and GroupItem.

If the entities implement hashCode(), the change helps a lot.

I changed the test in the following manner:

  1. changed the id of Group and GroupItem from long to Long
  2. implemented a simple hashCode():
@Override
public int hashCode() {
    if (id == null) {
        // not persisted yet: fall back to the identity-based hash
        return super.hashCode();
    }
    return java.util.Objects.hash(getClass(), id);
}
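
For consistency, entities whose hashCode() is id-based would normally also override equals() the same way. The archive may already do this; here is a sketch of such a companion, using GroupItem as an example (not part of the change above):

@Override
public boolean equals(Object obj) {
    if (this == obj) {
        return true;
    }
    // unpersisted entities (id == null) keep identity semantics,
    // matching the super.hashCode() fallback above
    if (obj == null || getClass() != obj.getClass() || id == null) {
        return false;
    }
    return id.equals(((GroupItem) obj).id);
}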

jpa-bug-1577-StackOverflowErrorBecauseOfCyclicDETACH.tar.gz

And compared it to the test results described above:

----Patched version (4.0.5) of eclipse.persistence.core, 500 group items----
Started Insert data
Finished Insert data 0.165 seconds
Started Commit
Finished Commit 0.621 seconds
Started detaching group
Finished detaching group 0.056 seconds
Started detaching groupitems
Finished detaching groupitems 0.266 seconds (was 38.107 seconds)
Started Commit
Finished Commit 0.0 seconds

----Patched version (4.0.5) of eclipse.persistence.core, 10k group items (1k did not work before the patch)----
Started Insert data
Finished Insert data 16.925 seconds
Started Commit
Finished Commit 4.845 seconds
Started detaching group
Finished detaching group 0.225 seconds
Started detaching groupitems
Finished detaching groupitems 48.932 seconds
Started Commit
Finished Commit 0.001 seconds

----Patched version (4.0.5) of eclipse.persistence.core, 100k group items (1k did not work before the patch)----
Started Insert data
Finished Insert data 3470.195 seconds
Started Commit
Finished Commit 34.4 seconds
Started detaching group
Finished detaching group 1.185 seconds
Started detaching groupitems
Finished detaching groupitems 7861.501 seconds (Performance decrease of 5% compared to the reverted version)
Started Commit
Finished Commit 0.002 seconds

----Reverted version of eclipse.persistence.core, 100k group items----
Started Insert data
Finished Insert data 3442.749 seconds
Started Commit
Finished Commit 35.936 seconds
Started detaching group
Finished detaching group 1.273 seconds
Started detaching groupitems
Finished detaching groupitems 7456.642 seconds
Started Commit
Finished Commit 0.001 seconds

Kind Regards.
