Odd locking strategy in AreInternalsVisibleToDynamicProxy #377

Closed
TimLovellSmith opened this issue Jun 13, 2018 · 8 comments

@TimLovellSmith (Contributor) commented Jun 13, 2018:

I can't figure out:
a) why it tests internalsVisibleToDynamicProxy.ContainsKey() again after upgrading the lock;
b) why it doesn't do the first code path under a non-upgradeable read lock, similar to BaseProxyGenerator (although a reasonable answer could be that the two have different expectations about how likely they are to need to upgrade the lock).

internal static bool AreInternalsVisibleToDynamicProxy(Assembly asm)
{
    using (var locker = internalsVisibleToDynamicProxyLock.ForReadingUpgradeable())
    {
        if (internalsVisibleToDynamicProxy.ContainsKey(asm))
        {
            return internalsVisibleToDynamicProxy[asm];
        }

        locker.Upgrade();

        if (internalsVisibleToDynamicProxy.ContainsKey(asm))
        {
            return internalsVisibleToDynamicProxy[asm];
        }

@TimLovellSmith (Contributor, Author) commented:

InvocationHelper's GetMethodOnType() is another case where it rechecks the cache after upgrading the lock, though I don't know why it is needed.

@stakx (Member) commented Jun 14, 2018:

using (var locker = @lock.ForReadingUpgradeable())
{
    var methodOnTarget = GetFromCache(proxiedMethod, type);
    if (methodOnTarget != null)
    {
        return methodOnTarget;
    }

    locker.Upgrade();

    methodOnTarget = GetFromCache(proxiedMethod, type);
    if (methodOnTarget != null)
    {
        return methodOnTarget;
    }

    methodOnTarget = ObtainMethod(proxiedMethod, type);
    PutToCache(proxiedMethod, type, methodOnTarget);
    return methodOnTarget;
}

@TimLovellSmith: This might be an instance of "double-checked locking", a pattern that tries to avoid taking the (relatively expensive) lock by performing a (comparatively cheap) check outside it; if the lock cannot be avoided, the check has to be repeated because things might have changed in the meantime. This was a very popular pattern in Java before it got its current bad reputation.

(I am not sufficiently knowledgeable to judge whether it is implemented safely here; IIRC there should be some kind of memory barrier between the first and second check to prevent the compiler / runtime / processor from caching the first result or reordering execution. I'm not sure the above code plays along well with .NET's memory model.)
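
For reference, the classic shape of the pattern looks like the following minimal sketch (hypothetical ExpensiveResource type, not Castle's code); the volatile field is what provides the ordering guarantee worried about above:

internal sealed class ExpensiveResource { /* hypothetical expensive-to-create type */ }

internal static class ResourceCache
{
    private static readonly object syncRoot = new object();
    private static volatile ExpensiveResource instance;

    public static ExpensiveResource GetInstance()
    {
        if (instance == null)              // cheap check, no lock taken
        {
            lock (syncRoot)                // slow path: serializes initializers
            {
                if (instance == null)      // re-check: another thread may have won the race
                {
                    instance = new ExpensiveResource();
                }
            }
        }
        return instance;
    }
}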

What is your goal here? Are you seeking an understanding of how locking in Castle Core works, or are you suspecting there might be a bug with it, or do you want to optimize, ...?

@ghost commented Jun 14, 2018:

@stakx I think these are reader/writer locks. There is also a slim version (ReaderWriterLockSlim), but I am not sure we are using it.

@TimLovellSmith (Contributor, Author) commented:

@stakx Sorry, the point of the issue is not just that it looks funny; there seems to be a good chance this could be rewritten for better performance. Either the redundant call to GetFromCache after the upgrade can be eliminated (since nobody else should be able to write to the cache while you hold an upgradeable read lock), or contention on the cache lock can be reduced by taking a non-upgradeable read lock for the first pass over the cache.
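
A sketch of the shape I have in mind (illustrative only, reusing the method names from the snippet stakx quoted, and assuming Castle's Lock abstraction exposes a plain ForReading() the way BaseProxyGenerator's path does):

using (var read = @lock.ForReading())
{
    // Plain read lock: many threads can probe the cache concurrently.
    var cached = GetFromCache(proxiedMethod, type);
    if (cached != null)
    {
        return cached;
    }
}

using (var locker = @lock.ForReadingUpgradeable())
{
    // Re-check here: another thread may have populated the cache between
    // releasing the read lock above and acquiring the upgradeable lock.
    var methodOnTarget = GetFromCache(proxiedMethod, type);
    if (methodOnTarget != null)
    {
        return methodOnTarget;
    }

    locker.Upgrade();

    // No further re-check needed: only one thread can hold the upgradeable
    // lock at a time, and all writers go through this same path.
    methodOnTarget = ObtainMethod(proxiedMethod, type);
    PutToCache(proxiedMethod, type, methodOnTarget);
    return methodOnTarget;
}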

@stakx (Member) commented Jun 15, 2018:

P.S.: I originally posted the following in your PR. I'm reposting it here because it is meant more for further discussion than as a suggestion to implement right away.

Your PR looks good to me. Sorry for initially missing the point of your enquiry.

The following is just an additional idea; feel free to disregard. (I'm not a multi-threading expert at all, so perhaps this is naïve.)

P.P.S.: Some quick experiments show that applying the following suggestion would cause quite a few changes to the public API. So it's probably not very feasible. I'll leave this post as is, let's say... as a thought experiment.

It seems that the locks are mostly used in combination with Dictionary<,>. I was wondering whether it might make sense to push the locking that's currently strewn all over the place down into a new dictionary-like type, say:

// using System.Collections.Generic;
// using System.Threading;

internal sealed class SynchronizedDictionary<TKey, TValue> : IDisposable
{
    private Dictionary<TKey, TValue> dictionary;
    private ReaderWriterLockSlim readerWriterLock;

    // constructor and Dispose omitted for brevity

    public TValue GetOrAdd(TKey key, Func<TKey, TValue> valueFactory)
    {
        TValue value;

        this.readerWriterLock.EnterReadLock();
        try
        {
            if (this.dictionary.TryGetValue(key, out value))
            {
                return value;
            }
        }
        finally
        {
            this.readerWriterLock.ExitReadLock();
        }

        this.readerWriterLock.EnterUpgradeableReadLock();
        try
        {
            if (this.dictionary.TryGetValue(key, out value))
            {
                return value;
            }

            this.readerWriterLock.EnterWriteLock();
            try
            {
                value = valueFactory.Invoke(key);
                this.dictionary.Add(key, value);
                return value;
            }
            finally
            {
                this.readerWriterLock.ExitWriteLock();
            }
        }
        finally
        {
            this.readerWriterLock.ExitUpgradeableReadLock();
        }
    }
}
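
A call site could then shrink to something like this (field and helper names are made up purely for illustration):

private static readonly SynchronizedDictionary<Assembly, bool> internalsVisibleToDynamicProxy =
    new SynchronizedDictionary<Assembly, bool>();

internal static bool AreInternalsVisibleToDynamicProxy(Assembly asm)
{
    // All locking now lives inside SynchronizedDictionary; ComputeVisibility is a
    // hypothetical helper that inspects the assembly's InternalsVisibleTo attributes.
    return internalsVisibleToDynamicProxy.GetOrAdd(asm, ComputeVisibility);
}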

This could have two advantages:

  1. It raises the level of abstraction in the remaining code, which then only needs a SynchronizedDictionary<,> instead of a dictionary plus a separate lock that everyone must remember to use together.
  2. It reduces overhead by using .NET's ReaderWriterLockSlim directly instead of Core's Lock abstraction that is built on top of it.

You'll notice that the GetOrAdd API is identical to that of the BCL's ConcurrentDictionary<,>. That, however, doesn't mean ConcurrentDictionary<,> would be a suitable alternative (besides the fact that it isn't available for .NET 3.5): using a completely lock-free collection might be the wrong thing to do in BaseProxyGenerator.ObtainProxyType, since there should be no concurrent Reflection.Emit-ting to the same dynamic module and Reflection.Emit's official thread-safety guarantees are very weak.
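
To make that last point concrete: ConcurrentDictionary<,>.GetOrAdd does not run the value factory under any lock, so two threads that miss at the same time can both invoke the factory, and only one of the results is kept. With Reflection.Emit inside the factory that would mean emitting the same proxy type twice. A small stand-alone sketch (not Castle code):

using System;
using System.Collections.Concurrent;
using System.Threading;
using System.Threading.Tasks;

internal static class GetOrAddRace
{
    private static readonly ConcurrentDictionary<string, int> cache =
        new ConcurrentDictionary<string, int>();

    private static int factoryCalls;

    private static int CreateValue(string key)
    {
        Interlocked.Increment(ref factoryCalls);
        return key.Length;
    }

    public static void Main()
    {
        // Two concurrent misses for the same key: both may invoke CreateValue,
        // even though only one result ends up stored in the dictionary.
        var first = Task.Run(() => cache.GetOrAdd("key", CreateValue));
        var second = Task.Run(() => cache.GetOrAdd("key", CreateValue));
        Task.WaitAll(first, second);

        Console.WriteLine($"factory ran {factoryCalls} time(s)"); // may print 1 or 2
    }
}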

@jonorossi added this to the vNext milestone Jun 15, 2018
@jonorossi (Member) commented:

> I can't figure out
> a) why it tests internalsVisibleToDynamicProxy.ContainsKey() again after Upgrading the lock.
> b) why it doesn't do the first codepath with a non-upgradeable read lock, similar to BaseProxyGenerator,

Thanks @TimLovellSmith. As I mentioned in the pull request, your change has got to be what the author of the code wanted but didn't write (or they just stuffed it up 😉); this code was simply written wrong. The InvocationHelper change should definitely reduce contention in heavily loaded web apps.

Cheers 🍺

@TimLovellSmith (Contributor, Author) commented:

@stakx Maybe replacing these with something like ConcurrentDictionary would be good? It's not actually lock-free for all operations (e.g. Count, enumeration, and writes use fine-grained locking rather than being truly lock-free), but I believe plain lookups are lock-free. It has some other overheads, though.

@ghost commented Jun 15, 2018:

@TimLovellSmith

ConcurrentDictionary is most definitely not lock-free. It optimises its locking by letting you state your intent along with the lookup, GetOrAdd being an example.

I still like your approach, though, and I think threading in general needs a slight update in Castle. You are right in what you are saying. 👍
