bpo-43452: Microoptimizations to PyType_Lookup #24804

Merged · 3 commits · Mar 20, 2021
@@ -0,0 +1 @@
+Added micro-optimizations to ``_PyType_Lookup()`` to improve cache lookup performance in the common case of cache hits.
51 changes: 23 additions & 28 deletions Objects/typeobject.c
@@ -32,8 +32,7 @@ class object "PyObject *" "&PyBaseObject_Type"
          & ((1 << MCACHE_SIZE_EXP) - 1))
 
 #define MCACHE_HASH_METHOD(type, name) \
-        MCACHE_HASH((type)->tp_version_tag, \
-                    ((PyASCIIObject *)(name))->hash)
+        MCACHE_HASH((type)->tp_version_tag, ((Py_ssize_t)(name)) >> 3)
Member commented:
I suspect that id(s) >> 3 is a much worse hash function than hash(s).
Is the cost of an extra memory read lower than the cost of the additional collisions?

Contributor (author) replied:
I think this cache is so wildly successful that it doesn't actually matter a whole lot one way or another. It does do a little bit worse on collisions, but it doesn't seem to impact the hit rate much. This is running python -m test.regrtest and logging the stats at the end of the run:

>>3:
{'total_slots': 4096, 'occupied_slots': 4096, 'num_hits': 20931459, 'num_misses': 4292, 'num_collisions': 127068, 'num_uncacheable': 1, 'num_mro_steps': 1170327}

((PyASCIIObject *)(name))->hash:
{'total_slots': 4096, 'occupied_slots': 4096, 'num_hits': 20922787, 'num_misses': 4304, 'num_collisions': 107251, 'num_uncacheable': 1, 'num_mro_steps': 1133300}

method_cache_hits is bumped in the hit case
method_cache_collisions is bumped when adding a new entry and replacing the old one
method_cache_misses is bumped when adding a new entry and there isn't an existing one

Member commented:
Do you have any evidence as to which is faster?
If not, then the status quo wins.

My intuition is that the better spread from using the hash is likely to have better worst-case performance, but in general there would be no measurable difference.

Contributor (author) replied:
Per @methane, running the benchmark suite does look better with the object hash. I've also set up a small micro-benchmark and put it in https://bugs.python.org/issue43452, and it shows a slight win there as well. I had to write it in C to get any clear signal one way or another, though; just doing the tight loop in Python seemed to have too much noise.

#define MCACHE_CACHEABLE_NAME(name) \
PyUnicode_CheckExact(name) && \
PyUnicode_IS_READY(name) && \
@@ -333,6 +332,7 @@ PyType_Modified(PyTypeObject *type)
         }
     }
     type->tp_flags &= ~Py_TPFLAGS_VALID_VERSION_TAG;
+    type->tp_version_tag = 0; /* 0 is not a valid version tag */
 }

static void
@@ -391,6 +391,7 @@ type_mro_modified(PyTypeObject *type, PyObject *bases) {
     Py_XDECREF(type_mro_meth);
     type->tp_flags &= ~(Py_TPFLAGS_HAVE_VERSION_TAG|
                         Py_TPFLAGS_VALID_VERSION_TAG);
+    type->tp_version_tag = 0; /* 0 is not a valid version tag */
 }

static int
@@ -3346,18 +3347,15 @@ _PyType_Lookup(PyTypeObject *type, PyObject *name)
     PyObject *res;
     int error;
 
-    if (MCACHE_CACHEABLE_NAME(name) &&
-        _PyType_HasFeature(type, Py_TPFLAGS_VALID_VERSION_TAG)) {
-        /* fast path */
-        unsigned int h = MCACHE_HASH_METHOD(type, name);
-        struct type_cache *cache = get_type_cache();
-        struct type_cache_entry *entry = &cache->hashtable[h];
-        if (entry->version == type->tp_version_tag && entry->name == name) {
+    unsigned int h = MCACHE_HASH_METHOD(type, name);
+    struct type_cache *cache = get_type_cache();
+    struct type_cache_entry *entry = &cache->hashtable[h];
+    if (entry->version == type->tp_version_tag &&
+        entry->name == name) {
 #if MCACHE_STATS
-            cache->hits++;
+        cache->hits++;
 #endif
-            return entry->value;
-        }
+        return entry->value;
     }

/* We may end up clearing live exceptions below, so make sure it's ours. */
@@ -3380,24 +3378,21 @@ _PyType_Lookup(PyTypeObject *type, PyObject *name)
         return NULL;
     }
 
-    if (MCACHE_CACHEABLE_NAME(name)) {
-        struct type_cache *cache = get_type_cache();
-        if (assign_version_tag(cache, type)) {
-            unsigned int h = MCACHE_HASH_METHOD(type, name);
-            struct type_cache_entry *entry = &cache->hashtable[h];
-            entry->version = type->tp_version_tag;
-            entry->value = res; /* borrowed */
-            assert(((PyASCIIObject *)(name))->hash != -1);
+    if (MCACHE_CACHEABLE_NAME(name) && assign_version_tag(cache, type)) {
+        h = MCACHE_HASH_METHOD(type, name);
+        struct type_cache_entry *entry = &cache->hashtable[h];
+        entry->version = type->tp_version_tag;
+        entry->value = res; /* borrowed */
+        assert(((PyASCIIObject *)(name))->hash != -1);
 #if MCACHE_STATS
-            if (entry->name != Py_None && entry->name != name) {
-                cache->collisions++;
-            }
-            else {
-                cache->misses++;
-            }
-#endif
-            Py_SETREF(entry->name, Py_NewRef(name));
-        }
+        if (entry->name != Py_None && entry->name != name) {
+            cache->collisions++;
+        }
+        else {
+            cache->misses++;
+        }
+#endif
+        Py_SETREF(entry->name, Py_NewRef(name));
     }
     return res;
 }