
Possible deadlock in ImmutableList #1977

Closed
kronar opened this issue Feb 19, 2015 · 10 comments
kronar commented Feb 19, 2015

Since version 15.0, ImmutableList has contained the following code:

private static final ImmutableList EMPTY =
new RegularImmutableList(ObjectArrays.EMPTY_ARRAY);
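The snippet above creates a circular static-initialization dependency: ImmutableList's <clinit> constructs a RegularImmutableList, while RegularImmutableList extends ImmutableList, so its <clinit> must first initialize ImmutableList. The following is a minimal standalone sketch of that shape using hypothetical Parent/Child classes (not Guava's actual code); run single-threaded it completes, but with two threads racing on Parent vs. Child each can hold one class's initialization lock while waiting for the other's (JLS 12.4.2).

```java
// Hypothetical Parent/Child, mirroring the ImmutableList/RegularImmutableList shape.
class Parent {
    // Parent's <clinit> requires Child to be initialized...
    static final Parent EMPTY = new Child();
}

class Child extends Parent {
    // ...and Child's <clinit> requires its superclass Parent to be initialized.
}

public class ClinitCycle {
    public static void main(String[] args) {
        // Single-threaded, the JVM treats the recursive initialization request
        // as already in progress and proceeds. With one thread touching Parent
        // first and another touching Child first, both can block forever.
        System.out.println(Parent.EMPTY.getClass().getSimpleName());
    }
}
```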

cpovirk commented Feb 19, 2015

Can you elaborate on the deadlock? I guess the idea is that one thread starts to load ImmutableList while the other starts to load RegularImmutableList? That's probably not good, though we may have this problem in a lot of places, and I haven't heard reports of trouble... yet...

apangin commented Feb 19, 2015

Here is the minimal test case I could create. It hangs intermittently.

public static void main(String[] args) throws Exception {
    // Thread 1: starts initializing ImmutableList, whose <clinit>
    // references RegularImmutableList.
    new Thread() {
        public void run() {
            ImmutableList.of();
        }
    }.start();

    Thread.sleep(1);

    // Main thread: starts initializing RegularImmutableList, whose
    // superclass is ImmutableList -- each thread now waits on the
    // initialization lock the other holds.
    Class.forName("com.google.common.collect.RegularImmutableList");
}

The problem is not just theoretical - we've hit the real deadlock in a large production deployment.

cpovirk self-assigned this Mar 6, 2015

cpovirk commented Mar 6, 2015

Sorry, I missed the last part of your message (about how you've hit this in prod). That makes me much more worried about this.

Part of the reason I'm worried is that we might have these cycles in lots of places. I'll fix this one, but we may have others. Maybe we should write a test that loads each of our classes in a new, instrumented ClassLoader. I'm always in over my head with ClassLoader, but I'm hoping that maybe we can override loadClass to record all classes loaded during the loading of the class. Then we can build a graph and look for cycles. If they're all over, we may just wait for reports to come in.... Hopefully there would be a manageable number.
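The instrumented-ClassLoader idea above could be sketched roughly as below. Names (RecordingClassLoader, edges) are illustrative, not anything in Guava. One caveat the sketch makes visible: loadClass tracks class *loading*, not *initialization* (<clinit>), and loads delegated to the JDK's loaders never re-enter this loader, so a real tool would have to define the library's classes itself in findClass to see nested loads.

```java
import java.util.*;

public class ClinitGraphDemo {
    /** Records, for each class whose load is in progress, which classes get loaded during it. */
    static class RecordingClassLoader extends ClassLoader {
        final Map<String, Set<String>> edges = new HashMap<>();
        private final Deque<String> inProgress = new ArrayDeque<>();

        RecordingClassLoader(ClassLoader parent) { super(parent); }

        @Override
        protected Class<?> loadClass(String name, boolean resolve)
                throws ClassNotFoundException {
            // Record an edge from the class currently being loaded to `name`.
            if (!inProgress.isEmpty()) {
                edges.computeIfAbsent(inProgress.peek(), k -> new HashSet<>()).add(name);
            }
            inProgress.push(name);
            try {
                return super.loadClass(name, resolve);
            } finally {
                inProgress.pop();
            }
        }
    }

    public static void main(String[] args) throws Exception {
        RecordingClassLoader cl =
                new RecordingClassLoader(ClinitGraphDemo.class.getClassLoader());
        // Delegated JDK loads produce no nested calls here, so `edges` stays
        // empty for this demo; the loader itself works as plain delegation.
        System.out.println(cl.loadClass("java.util.ArrayList").getName());
    }
}
```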


cpovirk commented Mar 11, 2015

ClassLoader is probably the wrong approach. @lukesandberg suggests that ASM may help. For now, though, I'm dropping this.

(The specific problem of RegularImmutableList has been fixed internally, so it will be mirrored out soon.)

cpovirk added a commit that referenced this issue Mar 20, 2015
This was requested in #1977 to avoid circular static-init dependencies.
I fear that we have other such problems. We should consider adding a test for them.
cgdecker added this to the 19.0 milestone Apr 7, 2015

cpovirk commented Oct 8, 2015

Thought about this a little more today.

One subtlety here is how the app starts to initialize RegularImmutableList without first initializing ImmutableList. How does the app get at package-private RegularImmutableList except through ImmutableList? Well, it turns out that Guava calls directly into RegularImmutableList in places outside ImmutableList. For example, ImmutableSortedMap.fromEntries does this.

Until then, I had hoped that I could restrict my search to classes that involve clinit from at least 2 public (or protected) classes. (It turns out, as expected, that Guava has a lot of cycles that involve non-public classes.) But if I restrict myself to cases in which at least 2 of the classes involved are public, I miss the ImmutableList problem.

But if I turn off the restriction, again, I get many results, most of which are false positives. But to actually determine which are true positives, I'd have to soup up the analyzer. I think I need to find cases where we can get from a public/protected API into a clinit for a class? This sounds like a mess.
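Once edges like the ones discussed above are collected into a graph, the cycle check itself is a standard three-color depth-first search. A small self-contained sketch (class names in the sample graph are illustrative, not output of any real analyzer):

```java
import java.util.*;

public class InitCycleFinder {
    // Nodes on the current DFS path live in `onPath`; revisiting one means a cycle.
    static boolean hasCycle(Map<String, Set<String>> deps) {
        Set<String> visited = new HashSet<>(), onPath = new HashSet<>();
        for (String n : deps.keySet()) {
            if (dfs(n, deps, visited, onPath)) return true;
        }
        return false;
    }

    static boolean dfs(String n, Map<String, Set<String>> deps,
                       Set<String> visited, Set<String> onPath) {
        if (onPath.contains(n)) return true;   // back edge: cycle found
        if (!visited.add(n)) return false;     // already fully explored
        onPath.add(n);
        for (String m : deps.getOrDefault(n, Set.of())) {
            if (dfs(m, deps, visited, onPath)) return true;
        }
        onPath.remove(n);
        return false;
    }

    public static void main(String[] args) {
        // A two-node cycle shaped like the ImmutableList problem...
        Map<String, Set<String>> cyclic = Map.of(
            "ImmutableList", Set.of("RegularImmutableList"),
            "RegularImmutableList", Set.of("ImmutableList"));
        // ...and an acyclic dependency for contrast.
        Map<String, Set<String>> acyclic = Map.of(
            "ImmutableSet", Set.of("ObjectArrays"));
        System.out.println(hasCycle(cyclic) + " " + hasCycle(acyclic));
    }
}
```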

Given that, I think I'll just go after the high-visibility classes with potential problems: ImmutableSortedMultiset, ImmutableSortedSet, and ImmutableTable. We should probably follow the ImmutableList pattern there if only to set a good example for ourselves.
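The "ImmutableList pattern" referenced above, as I understand it, amounts to keeping the shared singleton out of the public superclass's <clinit>. A hedged sketch with hypothetical Parent/Child classes (not Guava's actual fix, which lives in RegularImmutableList):

```java
// The singleton lives in the subclass, so Parent's <clinit> never triggers
// Child's. Only the ordinary subclass -> superclass initialization edge
// remains, which cannot form a cycle by itself.
class Parent {
    static Parent of() {
        // Child is initialized lazily on the first call,
        // not during Parent's own <clinit>.
        return Child.EMPTY;
    }
}

class Child extends Parent {
    static final Child EMPTY = new Child();
}

public class BrokenCycleDemo {
    public static void main(String[] args) {
        // The same instance is returned on every call.
        System.out.println(Parent.of() == Parent.of());
    }
}
```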

cpovirk added a commit that referenced this issue Oct 8, 2015
- ImmutableSortedMultiset: I couldn't figure out a way for this to actually deadlock, but I figure it's best for it to follow the pattern of the others.
- ImmutableSortedSet: This might be able to deadlock if one thread creates a RegularImmutableMap (which then calls into RegularImmutableSortedSet and then ImmutableSet) while another initializes ImmutableSortedSet (which then calls into RegularImmutableSortedSet).
- ImmutableTable: This might be able to deadlock if one thread GWT-deserializes a SparseImmutableTable (which then calls into ImmutableTable) while another initializes ImmutableTable (which calls into SparseImmutableTable).

cpovirk commented Oct 1, 2020

To detect any remaining or new problems, we could look at ArchUnit for its Cycle Checks (mentioned in #4011 (comment)).

[edit: Oops, sorry, I think ArchUnit is talking about a different kind of cycle, so it's unlikely to help here.]


vlsi commented Nov 12, 2020

I've requested an ArchUnit sample via TNG/ArchUnit#470.
An alternative option is https://github.com/jQAssistant/jqassistant, which loads the model into Neo4j; assertions could then be written as Cypher queries.

carterkozak commented:

@cpovirk I've built an Error Prone check that detects precisely this kind of deadlock: palantir/gradle-baseline#1598

I'd be more than happy to transfer the code upstream into error-prone if there's interest.


cpovirk commented Jan 11, 2021

Thanks! I am interested in principle. My fear has been that such a checker may be prone to false positives, but if they turn out to be easy enough to work around, we'd probably enable a check anyway as a precaution.

carterkozak commented:

I've added an error-prone ticket to discuss upstreaming the check: google/error-prone#2062
