Allow renaming child nodes in _ready #78706

Merged
merged 1 commit on Jul 21, 2023
34 changes: 34 additions & 0 deletions core/templates/hash_map.h
@@ -353,6 +353,40 @@ class HashMap {
		return true;
	}

	// Replace the key of an entry in-place, without invalidating iterators or changing the entry's position during iteration.
	// p_old_key must exist in the map and p_new_key must not, unless it is equal to p_old_key.
	bool replace_key(const TKey &p_old_key, const TKey &p_new_key) {
		if (p_old_key == p_new_key) {
			return true;
		}
		uint32_t pos = 0;
		ERR_FAIL_COND_V(_lookup_pos(p_new_key, pos), false);
		ERR_FAIL_COND_V(!_lookup_pos(p_old_key, pos), false);
		HashMapElement<TKey, TValue> *element = elements[pos];

		// Delete the old entries in hashes and elements.
		const uint32_t capacity = hash_table_size_primes[capacity_index];
		const uint64_t capacity_inv = hash_table_size_primes_inv[capacity_index];
		uint32_t next_pos = fastmod((pos + 1), capacity_inv, capacity);
		while (hashes[next_pos] != EMPTY_HASH && _get_probe_length(next_pos, hashes[next_pos], capacity, capacity_inv) != 0) {
			SWAP(hashes[next_pos], hashes[pos]);
			SWAP(elements[next_pos], elements[pos]);
Member commented on lines +372 to +373:

AFAICT only the other entries need to be "shifted by one"; there's no point in repeatedly moving the element we're removing (the last entry in the chain is cleared right after this loop anyway).

Suggested change:
-			SWAP(hashes[next_pos], hashes[pos]);
-			SWAP(elements[next_pos], elements[pos]);
+			hashes[pos] = hashes[next_pos];
+			elements[pos] = elements[next_pos];

(under the assumption that pos != next_pos; which should be the case as next_pos = fastmod((pos + 1), capacity_inv, capacity))


I see in HashMap::erase it's done like that too:

			SWAP(hashes[next_pos], hashes[pos]);
			SWAP(elements[next_pos], elements[pos]);

It could be changed the same way if the element being removed were stored up front, just like here (HashMapElement<TKey, TValue> *element = elements[pos]), before doing all the "shifting", and then used instead of elements[pos] in:

		if (elements[pos]->prev) {
			elements[pos]->prev->next = elements[pos]->next;
		}
		if (elements[pos]->next) {
			elements[pos]->next->prev = elements[pos]->prev;
		}
		element_alloc.delete_allocation(elements[pos]);
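For reference, assembling that suggestion into a full function gives roughly the sketch below. This is an editorial illustration pieced together from the quoted fragments and the replace_key code in this diff, not the actual HashMap::erase; among other things, the real implementation also has to maintain the insertion-order list's head and tail pointers, which is omitted here.

	// Sketch only: HashMap::erase with the removed element stored up front and
	// the chain shifted by plain assignment instead of SWAP.
	bool erase(const TKey &p_key) {
		uint32_t pos = 0;
		if (!_lookup_pos(p_key, pos)) {
			return false;
		}

		// Keep the element being removed so elements[pos] can be overwritten freely.
		HashMapElement<TKey, TValue> *element = elements[pos];

		const uint32_t capacity = hash_table_size_primes[capacity_index];
		const uint64_t capacity_inv = hash_table_size_primes_inv[capacity_index];
		uint32_t next_pos = fastmod((pos + 1), capacity_inv, capacity);
		while (hashes[next_pos] != EMPTY_HASH && _get_probe_length(next_pos, hashes[next_pos], capacity, capacity_inv) != 0) {
			// Shift the following chain entries back by one slot.
			hashes[pos] = hashes[next_pos];
			elements[pos] = elements[next_pos];
			pos = next_pos;
			next_pos = fastmod((pos + 1), capacity_inv, capacity);
		}
		hashes[pos] = EMPTY_HASH;
		elements[pos] = nullptr;

		// Unlink and free the stored element instead of reading elements[pos].
		if (element->prev) {
			element->prev->next = element->next;
		}
		if (element->next) {
			element->next->prev = element->prev;
		}
		element_alloc.delete_allocation(element);
		num_elements--;

		return true;
	}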

Member Author replied:

This is copied as-is from erase, and I'd rather keep it simple for this PR; same for the duplicate hash computation below. (Also, some operations compute the hash / look up the element 3 or 4 times, so HashMap really deserves a dedicated optimization PR with measurements, etc.)

Member replied:

Agree. As mentioned, these are just some notes/thoughts. 🙃

			pos = next_pos;
			next_pos = fastmod((pos + 1), capacity_inv, capacity);
		}
		hashes[pos] = EMPTY_HASH;
		elements[pos] = nullptr;
		// _insert_with_hash will increment this again.
		num_elements--;

		// Update the HashMapElement with the new key and reinsert it.
		const_cast<TKey &>(element->data.key) = p_new_key;
		uint32_t hash = _hash(p_new_key);
Member commented:

This hash is already calculated in _lookup_pos(p_new_key, pos), so there's room to optimize it.
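One possible shape for that optimization (editorial sketch; _lookup_pos_hashed is a hypothetical helper, not an existing HashMap member):

	// Hypothetical: same probing as _lookup_pos, but also reports the hash it
	// computed for p_key so callers can reuse it.
	bool _lookup_pos_hashed(const TKey &p_key, uint32_t &r_pos, uint32_t &r_hash) const {
		r_hash = _hash(p_key);
		// A real version would run the probing loop once using r_hash; delegating
		// here just keeps the sketch short (it still hashes twice).
		return _lookup_pos(p_key, r_pos);
	}

	// replace_key could then keep the hash from its first lookup of p_new_key:
	//   uint32_t new_hash = 0;
	//   ERR_FAIL_COND_V(_lookup_pos_hashed(p_new_key, pos, new_hash), false);
	//   ...
	//   _insert_with_hash(new_hash, element); // No second _hash(p_new_key) call.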

		_insert_with_hash(hash, element);

		return true;
	}

	// Reserves space for a number of elements, useful to avoid many resizes and rehashes.
	// If adding a known (possibly large) number of elements at once, must be larger than old capacity.
	void reserve(uint32_t p_new_capacity) {
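For reference, a minimal usage sketch of the new method follows. This is an editorial illustration, not part of the PR; it assumes Godot's HashMap, String, print_line and itos.

#include "core/string/print_string.h"
#include "core/string/ustring.h"
#include "core/templates/hash_map.h"

// Minimal usage sketch: replace_key() swaps an entry's key in place, keeping
// its value and its position in insertion-order iteration.
void replace_key_demo() {
	HashMap<String, int> map;
	map.insert("a", 1);
	map.insert("b", 2);
	map.insert("c", 3);

	map.replace_key("b", "b_renamed"); // Returns true: "b" exists, "b_renamed" does not.
	map.replace_key("x", "y"); // Returns false (and prints an error): "x" is not in the map.

	// Still iterates as a, b_renamed, c: the renamed entry kept its place.
	for (const KeyValue<String, int> &kv : map) {
		print_line(kv.key + " = " + itos(kv.value));
	}
}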
5 changes: 2 additions & 3 deletions scene/main/node.cpp
@@ -1137,7 +1137,6 @@ void Node::_set_name_nocheck(const StringName &p_name) {

 void Node::set_name(const String &p_name) {
 	ERR_FAIL_COND_MSG(data.inside_tree && !Thread::is_main_thread(), "Changing the name to nodes inside the SceneTree is only allowed from the main thread. Use `set_name.call_deferred(new_name)`.");
-	ERR_FAIL_COND_MSG(data.parent && data.parent->data.blocked > 0, "Parent node is busy setting up children, `set_name(new_name)` failed. Consider using `set_name.call_deferred(new_name)` instead.");
 	String name = p_name.validate_node_name();
 
 	ERR_FAIL_COND(name.is_empty());
@@ -1149,9 +1148,9 @@ void Node::set_name(const String &p_name) {
 	data.name = name;
 
 	if (data.parent) {
-		data.parent->data.children.erase(old_name);
 		data.parent->_validate_child_name(this, true);
-		data.parent->data.children.insert(data.name, this);
+		bool success = data.parent->data.children.replace_key(old_name, data.name);
+		ERR_FAIL_COND_MSG(!success, "Renaming child in hashtable failed, this is a bug.");
 	}
 
 	if (data.unique_name_in_owner && data.owner) {
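To illustrate the user-facing effect (editorial note, not taken from the PR): the removed check made set_name() fail whenever the renamed node's parent still had data.blocked > 0, which is typically the case while that node's own NOTIFICATION_READY is being delivered during scene setup. A hypothetical Node subclass renaming itself in _ready:

// MyNode is a hypothetical example class, not part of the PR or the engine.
void MyNode::_notification(int p_what) {
	if (p_what == NOTIFICATION_READY) {
		// Before this PR: failed with "Parent node is busy setting up children, ..."
		// and required set_name.call_deferred(). Now the parent's children map is
		// updated in place via replace_key(), so the rename succeeds.
		set_name("RenamedInReady");
	}
}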