forked from git-for-windows/git
Fast track bug fix to merge-recursive #2
Closed
Conversation
Signed-off-by: Ben Peart <benpeart@microsoft.com>
Signed-off-by: Elijah Newren <newren@gmail.com>

update test case

merge-recursive takes any files marked as unmerged by unpack_trees, tries to figure out whether they can be resolved (e.g. using renames or a file-level merge), and then, if they can be, it deletes the old cache entries and writes new ones. This means that any ce_flags for those cache entries are essentially cleared when merging.

Unfortunately, if a file was marked as skip_worktree and it needs a file-level merge, but the merge results in the same version of the file that was found in HEAD, we skip updating the worktree (because the file was unchanged) but clear the skip_worktree bit (because of the delete-cache-entry-and-write-a-new-one). This makes git treat the file as having a local change in the working copy, namely a delete, when it should appear as unchanged despite not being present. Avoid this problem by copying the skip_worktree flag in this case.

Signed-off-by: Elijah Newren <newren@gmail.com>
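As a rough illustration of the fix described above, here is a minimal sketch with a hypothetical helper name (not the literal upstream patch); struct cache_entry and CE_SKIP_WORKTREE come from Git's cache.h:

    /*
     * Sketch: when the merged result matches HEAD and the worktree update
     * is skipped, carry the skip-worktree bit from the old cache entry
     * over to the newly written one so the file does not look deleted.
     */
    static void preserve_skip_worktree(struct cache_entry *new_ce,
                                       const struct cache_entry *old_ce)
    {
        if (old_ce && (old_ce->ce_flags & CE_SKIP_WORKTREE))
            new_ce->ce_flags |= CE_SKIP_WORKTREE;
    }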
Closed in favor of one coming from a branch in my fork
I think we actually need to override https://github.com/Microsoft/git/blob/gvfs-2.18.0/.github/PULL_REQUEST_TEMPLATE.md in the
kewillford pushed a commit to kewillford/git that referenced this pull request on Nov 4, 2019
l10n: it.po: update the Italian translation for Git 2.24.0 round #2

Signed-off-by: Alessandro Menti <alessandro.menti@alessandromenti.it>
kewillford pushed a commit to kewillford/git that referenced this pull request on Nov 4, 2019
* 'update-italian-translation' of github.com:AlessandroMenti/git-po:
  l10n: it.po: update the Italian translation for Git 2.24.0 round #2
kewillford pushed a commit to kewillford/git that referenced this pull request on Nov 4, 2019
l10n-2.24.0-rnd2

* tag 'l10n-2.24.0-rnd2' of https://github.com/git-l10n/git-po:
  l10n: zh_CN: for git v2.24.0 l10n round 1~2
  l10n: de.po: Update German translation
  l10n: sv.po: Update Swedish translation (4695t0f0u)
  l10n: bg.po: Updated Bulgarian translation (4694)
  l10n: vi(4694t): Updated translation for v2.24.0
  l10n: es: 2.24.0 round 2
  l10n: it.po: update the Italian translation for Git 2.24.0 round #2
  l10n: fr v2.24.0 rnd2
  l10n: git.pot: v2.24.0 round 2 (1 new)
  l10n: it.po: update the Italian translation for Git 2.24.0
  l10n: fr 2.24.0 rnd 1
  l10n: git.pot: v2.24.0 round 1 (35 new, 16 removed)
  l10n: bg.po: Updated Bulgarian translation (4693)
  l10n: sv.po: Update Swedish translation (4674t0f0u)
  l10n: Update Catalan translation
derrickstolee pushed a commit that referenced this pull request on Dec 30, 2019
…ev()

In 'builtin/name-rev.c' in the name_rev() function there is a loop iterating over all parents of the given commit, and the loop body looks like this:

    if (parent_number > 1) {
        if (generation > 0)
            // branch #1
            new_name = ...
        else
            // branch #2
            new_name = ...
        name_rev(parent, new_name, ...);
    } else {
        // branch #3
        name_rev(...);
    }

These conditions are not covered properly in the test suite. As far as purely test coverage goes, they are all executed several times over in 't6120-describe.sh'. However, they don't directly influence the command's output, because the repository used in that test script contains several branches and tags pointing somewhere into the middle of the commit DAG, and thus result in a better name for the to-be-named commit. This can hide bugs: e.g. by replacing the 'new_name' parameter of the first recursive name_rev() call with 'tip_name' (effectively making both branch #1 and #2 a noop) 'git name-rev --all' shows thousands of bogus names in the Git repository, but the whole test suite still passes successfully. In an early version of a later patch in this series I managed to mess up all three branches (at once!), but the test suite still passed.

So add a new test case that operates on the following history:

      A--------------master
       \            /
        \----------M2
         \        /
          \---M1-C
           \ /
            B

and names the commit 'B' to make sure that all three branches are crucial to determine 'B's name:

  - There is only a single ref, so all names are based on 'master', without any undesired interference from other refs.

  - Each time name_rev() follows the second parent of a merge commit, it appends "^2" to the name. Following 'master's second parent right at the start ensures that all commits on the ancestry path from 'master' to 'B' have a different base name from the original 'tip_name' of the very first name_rev() invocation. Currently, while name_rev() is recursive, it doesn't matter, but it will be necessary to properly cover all three branches after the recursion is eliminated later in this series.

  - Following 'M2's second parent makes sure that branch #2 (i.e. when 'generation = 0') affects 'B's name.

  - Following the only parent of the non-merge commit 'C' ensures that branch #3 affects 'B's name, and that it increments 'generation'.

  - Coming from 'C' 'generation' is 1, thus following 'M1's second parent makes sure that branch #1 affects 'B's name.

Signed-off-by: SZEDER Gábor <szeder.dev@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
derrickstolee pushed a commit that referenced this pull request on Feb 19, 2020
Recent versions of the gcc and clang Address Sanitizer produce test failures related to regexec(). This triggers with gcc-10 and clang-8 (but not gcc-9 nor clang-7). Running: make CC=gcc-10 SANITIZE=address test results in failures in t4018, t3206, and t4062. The cause seems to be that when built with ASan, we use a different version of regexec() than normal. And this version doesn't understand the REG_STARTEND flag. Here's my evidence supporting that. The failure in t4062 is an ASan warning: expecting success of 4062.2 '-G matches': git diff --name-only -G "^(0{64}){64}$" HEAD^ >out && test 4096-zeroes.txt = "$(cat out)" ================================================================= ==672994==ERROR: AddressSanitizer: heap-buffer-overflow on address 0x7fa76f672000 at pc 0x7fa7726f75b6 bp 0x7ffe41bdda70 sp 0x7ffe41bdd220 READ of size 4097 at 0x7fa76f672000 thread T0 #0 0x7fa7726f75b5 (/lib/x86_64-linux-gnu/libasan.so.6+0x4f5b5) #1 0x562ae0c9c40e in regexec_buf /home/peff/compile/git/git-compat-util.h:1117 #2 0x562ae0c9c40e in diff_grep /home/peff/compile/git/diffcore-pickaxe.c:52 #3 0x562ae0c9cc28 in pickaxe_match /home/peff/compile/git/diffcore-pickaxe.c:166 [...] In this case we're looking in a buffer which was mmap'd via reuse_worktree_file(), and whose size is 4096 bytes. But libasan's regex tries to look at byte 4097 anyway! If we tweak Git like this: diff --git a/diff.c b/diff.c index 8e2914c031..cfae60c120 100644 --- a/diff.c +++ b/diff.c @@ -3880,7 +3880,7 @@ static int reuse_worktree_file(struct index_state *istate, */ if (ce_uptodate(ce) || (!lstat(name, &st) && !ie_match_stat(istate, ce, &st, 0))) - return 1; + return 0; return 0; } to use a regular buffer (with a trailing NUL) instead of an mmap, then the complaint goes away. The other failures are actually diff output with an incorrect funcname header. If I instrument xdiff to show the funcname matching like so: diff --git a/xdiff-interface.c b/xdiff-interface.c index 8509f9ea22..f6c3dc1986 100644 --- a/xdiff-interface.c +++ b/xdiff-interface.c @@ -197,6 +197,7 @@ struct ff_regs { struct ff_reg { regex_t re; int negate; + char *printable; } *array; }; @@ -218,7 +219,12 @@ static long ff_regexp(const char *line, long len, for (i = 0; i < regs->nr; i++) { struct ff_reg *reg = regs->array + i; - if (!regexec_buf(®->re, line, len, 2, pmatch, 0)) { + int ret = regexec_buf(®->re, line, len, 2, pmatch, 0); + warning("regexec %s:\n regex: %s\n buf: %.*s", + ret == 0 ? 
"matched" : "did not match", + reg->printable, + (int)len, line); + if (!ret) { if (reg->negate) return -1; break; @@ -264,6 +270,7 @@ void xdiff_set_find_func(xdemitconf_t *xecfg, const char *value, int cflags) expression = value; if (regcomp(®->re, expression, cflags)) die("Invalid regexp to look for hunk header: %s", expression); + reg->printable = xstrdup(expression); free(buffer); value = ep + 1; } then when compiling with ASan and gcc-10, running the diff from t4018.66 produces this: $ git diff -U1 cpp-skip-access-specifiers warning: regexec did not match: regex: ^[ ]*[A-Za-z_][A-Za-z_0-9]*:[[:space:]]*($|/[/*]) buf: private: warning: regexec matched: regex: ^((::[[:space:]]*)?[A-Za-z_].*)$ buf: private: diff --git a/cpp-skip-access-specifiers b/cpp-skip-access-specifiers index 4d4a9db..ebd6f42 100644 --- a/cpp-skip-access-specifiers +++ b/cpp-skip-access-specifiers @@ -6,3 +6,3 @@ private: void DoSomething(); int ChangeMe; }; void DoSomething(); - int ChangeMe; + int IWasChanged; }; That first regex should match (and is negated, so it should be telling us _not_ to match "private:"). But it wouldn't if regexec() is looking at the whole buffer, and not just the length-limited line we've fed to regexec_buf(). So this is consistent again with REG_STARTEND being ignored. The correct output (compiling without ASan, or gcc-9 with Asan) looks like this: warning: regexec matched: regex: ^[ ]*[A-Za-z_][A-Za-z_0-9]*:[[:space:]]*($|/[/*]) buf: private: [...more lines that we end up not using...] warning: regexec matched: regex: ^((::[[:space:]]*)?[A-Za-z_].*)$ buf: class RIGHT : public Baseclass diff --git a/cpp-skip-access-specifiers b/cpp-skip-access-specifiers index 4d4a9db..ebd6f42 100644 --- a/cpp-skip-access-specifiers +++ b/cpp-skip-access-specifiers @@ -6,3 +6,3 @@ class RIGHT : public Baseclass void DoSomething(); - int ChangeMe; + int IWasChanged; }; So it really does seem like libasan's regex engine is ignoring REG_STARTEND. We should be able to work around it by compiling with NO_REGEX, which would use our local regexec(). But to make matters even more interesting, this isn't enough by itself. Because ASan has support from the compiler, it doesn't seem to intercept our call to regexec() at the dynamic library level. It actually recognizes when we are compiling a call to regexec() and replaces it with ASan-specific code at that point. And unlike most of our other compat code, where we might have git_mmap() or similar, the actual symbol name in the compiled compat/regex code is regexec(). So just compiling with NO_REGEX isn't enough; we still end up in libasan! We can work around that by having the preprocessor replace regexec with git_regexec (both in the callers and in the actual implementation), and we truly end up with a call to our custom regex code, even when compiling with ASan. That's probably a good thing to do anyway, as it means anybody looking at the symbols later (e.g., in a debugger) would have a better indication of which function is which. So we'll do the same for the other common regex functions (even though just regexec() is enough to fix this ASan problem). Signed-off-by: Jeff King <peff@peff.net> Signed-off-by: Junio C Hamano <gitster@pobox.com>
derrickstolee pushed a commit that referenced this pull request on Jun 1, 2020
Signed-off-by: Alessandro Menti <alessandro.menti@alessandromenti.it>
derrickstolee pushed a commit that referenced this pull request on Jun 1, 2020
* 'update-italian-translation' of github.com:AlessandroMenti/git-po: l10n: it.po: update for Git 2.27.0 round #2
derrickstolee pushed a commit that referenced this pull request on Jun 1, 2020
l10n-2.27.0-rnd2

* tag 'l10n-2.27.0-rnd2' of git://github.com/git-l10n/git-po: (23 commits)
  l10n: zh_TW.po: v2.27.0 round 2 (0 untranslated)
  l10n: zh_TW.po: v2.27.0 round 1 (0 untranslated)
  l10n: de.po: Fix typo in the German translation of octopus
  l10n: de.po: Update German translation for Git 2.27.0
  l10n: it.po: update for Git 2.27.0 round #2
  l10n: tr: v2.27.0 round 2
  l10n: fr.po v2.27.0 rnd 2
  l10n: bg.po: Updated Bulgarian translation (4875t)
  l10n: Update Catalan translation
  l10n: sv.po: Update Swedish translation (4875t0f0u)
  l10n: vi(4875t): Updated Vietnamses translation for 2.27.0rd2
  l10n: zh_CN: for git v2.27.0 l10n round 1~2
  l10n: git.pot: v2.27.0 round 2 (+1)
  l10n: Update Catalan translation
  l10n: vi(4874t): Updated Vietnamses translation for 2.27.0
  l10n: es: 2.27.0 round 1
  l10n: bg.po: Updated Bulgarian translation (4868t)
  l10n: fr v2.27.0 rnd 1
  l10n: sv.po: Update Swedish translation (4839t0f0u)
  l10n: tr: v2.27.0 round 1
  ...
derrickstolee pushed a commit that referenced this pull request on Aug 21, 2020
The Chunk Lookup table stores the chunks' starting offset in the commit-graph file, not their sizes. Consequently, the size of a chunk can only be calculated by subtracting its offset from the offset of the subsequent chunk (or that of the terminating label).

This is currently implemented in a bit complicated way: as we iterate over the entries of the Chunk Lookup table, we check the id of each chunk and store its starting offset, then we check the id of the last seen chunk and calculate its size using its previously saved offset. At the moment there is only one chunk for which we calculate its size, but this patch series will add more, and the repeated chunk id checks are not that pretty.

Instead let's read ahead the offset of the next chunk on each iteration, so we can calculate the size of each chunk right away, right where we store its starting offset.

Signed-off-by: SZEDER Gábor <szeder.dev@gmail.com>
Signed-off-by: Derrick Stolee <dstolee@microsoft.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
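A rough sketch of that read-ahead loop, under assumed names (CHUNK_LOOKUP_WIDTH and the recording step are placeholders; get_be32()/get_be64() are the byte-order helpers from git-compat-util.h):

    /*
     * Sketch only: each Chunk Lookup entry is a 4-byte id followed by an
     * 8-byte offset, and a terminating-label entry follows the last
     * chunk, so peeking at entry i + 1 is always safe.
     */
    #define CHUNK_LOOKUP_WIDTH 12   /* assumed name for the entry stride */

    static void scan_chunks(const unsigned char *table, uint32_t num_chunks)
    {
        uint32_t i;

        for (i = 0; i < num_chunks; i++) {
            const unsigned char *entry = table + (size_t)i * CHUNK_LOOKUP_WIDTH;
            uint32_t id = get_be32(entry);
            uint64_t offset = get_be64(entry + 4);
            uint64_t size = get_be64(entry + CHUNK_LOOKUP_WIDTH + 4) - offset;

            /* ... record (id, offset, size) for chunk i right here ... */
        }
    }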
derrickstolee pushed a commit that referenced this pull request on Aug 21, 2020
Unify the 'chunk_ids' and 'chunk_sizes' arrays into an array of 'struct chunk_info'. This will allow more cleanups in the following patches. Signed-off-by: SZEDER Gábor <szeder.dev@gmail.com> Signed-off-by: Derrick Stolee <dstolee@microsoft.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
derrickstolee pushed a commit that referenced this pull request on Aug 21, 2020
The early part of t9100 creates an unusual "doubled" history in the "git-svn" ref. When we get to t9100.17, it looks like this: $ git log --oneline --graph git-svn [...] * efd0303 detect node change from file to directory #2 |\ * | 3e727c0 detect node change from file to directory #2 |/ * 3b00468 try a deep --rmdir with a commit |\ * | b4832d8 try a deep --rmdir with a commit |/ * f0d7bd5 import for git svn Each commit we make with "git commit" is paired with one from "git svn set-tree", with the latter as a merge of the first and its grandparent. Later, t9100.17 wants to check that "git svn fetch" gets the same trees. And it does, but just one copy of each. So it uses rev-list to get the tree of each commit and pipes it to "uniq" to drop the duplicates. Our input isn't sorted, but it will find adjacent duplicates. This works reliably because the order of commits from rev-list always shows the duplicates next to each other. For any one of those merges, we could choose to show its duplicate or the grandparent first. But barring clocks running backwards, the duplicate will always have a time equal to or greater than the grandparent. Even if equal, we break ties by showing the first-parent first, so the duplicates remain adjacent. But this would break if the timestamps stopped moving in chronological order. Normally we would rely on test_tick for this, but we have _two_ sources of time here: - "git commit" creates one commit based on GIT_COMMITTER_DATE (which respects test_tick) - the "svn set-tree" one is based on subversion, which does not have an easy way to specify a timestamp So using test_tick actually breaks the test, because now the duplicates are far in the past, and we'll show the grandparent before the duplicate. And likewise, a proposed change to set GIT_COMMITTER_DATE in all scripts will break it. We _could_ fix this by sorting before removing duplicates, but presumably it's a useful part of the test to make sure the trees appear in the same order in both spots. Likewise, we could use something like: perl -ne 'print unless $seen{$_}++' to remove duplicates without impacting the order. But that doesn't work either, because there are actually multiple (non-duplicate) commits with the same trees (we change a file mode and then change it back). So we'd actually have to de-duplicate the combination of subject and tree. Which then further throws off t9100.18, which compares the tree hashes exactly; we'd have to strip the result back down. Since this test _isn't_ buggy, the simplest thing is to just work around the proposed change by documenting our expectation that git-created commits are correctly interleaved using the current time. Signed-off-by: Jeff King <peff@peff.net> Signed-off-by: Junio C Hamano <gitster@pobox.com>
derrickstolee pushed a commit that referenced this pull request on Aug 21, 2020
The changed-path Bloom filter is improved using ideas from an independent implementation.

* sg/commit-graph-cleanups:
  commit-graph: simplify write_commit_graph_file() #2
  commit-graph: simplify write_commit_graph_file() #1
  commit-graph: simplify parse_commit_graph() #2
  commit-graph: simplify parse_commit_graph() #1
  commit-graph: clean up #includes
  diff.h: drop diff_tree_oid() & friends' return value
  commit-slab: add a function to deep free entries on the slab
  commit-graph-format.txt: all multi-byte numbers are in network byte order
  commit-graph: fix parsing the Chunk Lookup table
  tree-walk.c: don't match submodule entries for 'submod/anything'
mjcheetham pushed a commit that referenced this pull request on Dec 15, 2020
In demultiplex_sideband(), there are two oddities when we check an incoming packet: - if it has zero length, then we assume it's a flush packet. This means we fail to notice the difference between a real flush and a true zero-length packet that's missing its sideband designator. It's not a huge problem in practice because we'd never send a zero-length data packet (even our keepalives are otherwise-empty sideband-1 packets). But it would be nice to detect and report the error, since it's likely to cause other confusion (we think the other side flushed, but they do not). - we try to detect packets missing their designator by checking for "if (len < 1)". But this will never trigger for "len == 0"; we've already detected that and left the function before then. It _could_ detect a negative "len" parameter. But in that case, the error message is wrong. The issue is not "no sideband" but rather "eof while reading the packet". However, this can't actually be triggered in practice, because neither of the two callers uses pkt_read's GENTLE_ON_EOF flag. Which means they'd die with "the remote end hung up unexpectedly" before we even get here. So this truly is dead code. We can improve these cases by passing in a pkt-line status to the demultiplexer, and by having recv_sideband() use GENTLE_ON_EOF. This gives us two improvements: - we can now reliably detect flush packets, and will report a normal packet missing its sideband designator as an error - we'll report an eof with a more detailed "protocol error: eof while reading sideband packet", rather than the generic "the remote end hung up unexpectedly" - when we see an eof, we'll flush the sideband scratch buffer, which may provide some hints from the remote about why they hung up (though note we already flush on newlines, so it's likely that most such messages already made it through) In some sense this patch goes against fbd76cd (sideband: reverse its dependency on pkt-line, 2019-01-16), which caused the sideband code not to depend on the pkt-line code. But that commit was really just trying to deal with the circular header dependency. The two modules are conceptually interlinked, and it was just trying to keep things compiling. And indeed, there's a sticking point in this patch: because pkt-line.h includes sideband.h, we can't add the reverse include we need for the sideband code to have an "enum packet_read_status" parameter. Nor can we forward declare it, because you can't forward declare an enum in C. However, C does guarantee that enums fit in an int, so we can just use that type. One alternative would be for the callers to check themselves that they got something sane from the pkt-line code. But besides duplicating logic, this gets quite tricky. Any error condition requires flushing the sideband #2 scratch buffer, which only demultiplex_sideband() knows how to do. Signed-off-by: Jeff King <peff@peff.net> Signed-off-by: Junio C Hamano <gitster@pobox.com>
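A sketch of the resulting interface, written from the description above rather than copied from sideband.h (assumes sideband.h, which defines enum sideband_type, and strbuf.h are included); the point is that the pkt-line status arrives as a plain int because the enum cannot be forward-declared:

    /*
     * Sketch of the demultiplexer entry point: 'status' carries an
     * enum packet_read_status value, passed as int since C guarantees
     * an enum fits in an int. Returns non-zero once the other side is
     * done (a real flush or an EOF reported by the pkt-line reader).
     */
    int demultiplex_sideband(const char *me, int status,
                             char *buf, int len,
                             int die_on_error,
                             struct strbuf *scratch,
                             enum sideband_type *sideband_type);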
mjcheetham pushed a commit that referenced this pull request on Dec 15, 2020
Test 5572.63 ("branch has no merge base with remote-tracking counterpart") was introduced in 4d36f88 (submodule: do not pass null OID to setup_revisions, 2018-05-24), as a regression test for the bug this commit was fixing (preventing a 'fatal: bad object' error when the current branch and the remote-tracking branch we are pulling have no merge-base). However, the commit message for 4d36f88 does not describe in which real-life situation this bug was encountered. The brief discussion on the mailing list [1] does not either. The regression test is not really representative of a real-life scenario: both the local repository and its upstream have only a single commit, and the "no merge-base" scenario is simulated by recreating this root commit in the local repository using 'git commit-tree' before calling 'git pull --rebase --recurse-submodules'. The rebase succeeds and results in the local branch being reset to the same root commit as the upstream branch. The fix in 4d36f88 modifies 'submodule.c::submodule_touches_in_range' so that if 'excl_oid' is null, which is the case when the 'git merge-base --fork-point' invocation in 'builtin/pull.c::get_rebase_fork_point' errors (no fork-point), then instead of 'incl_oid --not excl_oid' being passed to setup_revisions, only 'incl_oid' is passed, and 'submodule_touches_in_range' examines 'incl_oid' and all its ancestors to verify that they do not touch the submodule. In test 5572.63, the recreated lone root commit in the local repository is thus the only commit being examined by 'submodule_touches_in_range', and this commit *adds* the submodule. However, 'submodule_touches_in_range' *succeeds* because 'combine-diff.c::diff_tree_combined' (see the backtrace below) returns early since this commit is the root commit and has no parents. #0 diff_tree_combined at combine-diff.c:1494 #1 0x0000000100150cbe in diff_tree_combined_merge at combine-diff.c:1649 #2 0x00000001002c7147 in collect_changed_submodules at submodule.c:869 #3 0x00000001002c7d6f in submodule_touches_in_range at submodule.c:1268 #4 0x00000001000ad58b in cmd_pull at builtin/pull.c:1040 In light of all this, add a note in t5572 documenting this peculiar test. [1] https://lore.kernel.org/git/20180524204729.19896-1-jonathantanmy@google.com/t/#u Signed-off-by: Philippe Blain <levraiphilippeblain@gmail.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
neerajsi-msft pushed a commit to neerajsi-msft/git that referenced this pull request on Feb 17, 2021
The previous change reduced time spent in strlen() while comparing consecutive paths in verify_cache(), but we can do better. The conditional checks the existence of a directory separator at the correct location, but only after doing a string comparison. Swap the order to be logically equivalent but perform fewer string comparisons.

To test the effect on performance, I used a repository with over three million paths in the index. I then ran the following command on repeat:

    git -c index.threads=1 commit --amend --allow-empty --no-edit

Here are the measurements over 10 runs after a 5-run warmup:

    Benchmark #1: v2.30.0
      Time (mean ± σ):   854.5 ms ± 18.2 ms
      Range (min … max): 825.0 ms … 892.8 ms

    Benchmark #2: Previous change
      Time (mean ± σ):   833.2 ms ± 10.3 ms
      Range (min … max): 815.8 ms … 849.7 ms

    Benchmark #3: This change
      Time (mean ± σ):   815.5 ms ± 18.1 ms
      Range (min … max): 795.4 ms … 849.5 ms

This change is 2% faster than the previous change and 5% faster than v2.30.0.

Signed-off-by: Derrick Stolee <dstolee@microsoft.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
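The reordering boils down to short-circuiting on the cheap byte test before the memcmp(); a simplified, self-contained illustration (the function name is an assumption, not the verify_cache() code itself):

    #include <stddef.h>
    #include <string.h>

    /*
     * Simplified illustration of the swapped conditional: test the
     * single-byte '/' check first so the expensive string comparison
     * runs only when it can still matter.
     */
    static int is_dir_prefix(const char *prev, size_t prev_len,
                             const char *next, size_t next_len)
    {
        return next_len > prev_len &&
               next[prev_len] == '/' &&          /* cheap check first */
               !memcmp(prev, next, prev_len);    /* expensive check last */
    }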
neerajsi-msft pushed a commit to neerajsi-msft/git that referenced this pull request on Feb 17, 2021
This is modelled on the version of handle_directory_level_conflicts() from merge-recursive.c, but is massively simplified due to the following factors:

  * strmap API provides simplifications over using direct hashmap

  * we have a dirs_removed field in struct rename_info that we have an easy way to populate from collect_merge_info(); this was already used in compute_rename_counts() and thus we do not need to check for condition #2.

  * The removal of condition #2 by handling it earlier in the code also obviates the need to check for condition #3 -- if both sides renamed a directory, meaning that the directory no longer exists on either side, then neither side could have added any new files to that directory, and thus there are no files whose locations we need to move due to such a directory rename.

In fact, the same logic that makes condition #3 irrelevant means condition #1 is also irrelevant so we could drop this function. However, it is cheap to check if both sides rename the same directory, and doing so can save future computation. So, simply remove any directories that both sides renamed from the list of directory renames.

Signed-off-by: Elijah Newren <newren@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
neerajsi-msft pushed a commit to neerajsi-msft/git that referenced this pull request on Feb 17, 2021
Add some timing instrumentation for both merge-ort and diffcore-rename; I used these to measure and optimize performance in both, and several future patch series will build on these to reduce the timings of some select testcases. === Setup === The primary testcase I used involved rebasing a random topic in the linux kernel (consisting of 35 patches) against an older version. I added two variants, one where I rename a toplevel directory, and another where I only rebase one patch instead of the whole topic. The setup is as follows: $ git clone git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable.git $ git branch hwmon-updates fd8bdb23b91876ac1e624337bb88dc1dcc21d67e $ git branch hwmon-just-one fd8bdb23b91876ac1e624337bb88dc1dcc21d67e~34 $ git branch base 4703d9119972bf586d2cca76ec6438f819ffa30e $ git switch -c 5.4-renames v5.4 $ git mv drivers pilots # Introduce over 26,000 renames $ git commit -m "Rename drivers/ to pilots/" $ git config merge.renameLimit 30000 $ git config merge.directoryRenames true === Testcases === Now with REBASE standing for either "git rebase [--merge]" (using merge-recursive) or "test-tool fast-rebase" (using merge-ort), the testcases are: Testcase #1: no-renames $ git checkout v5.4^0 $ REBASE --onto HEAD base hwmon-updates Note: technically the name is misleading; there are some renames, but very few. Rename detection only takes about half the overall time. Testcase microsoft#2: mega-renames $ git checkout 5.4-renames^0 $ REBASE --onto HEAD base hwmon-updates Testcase microsoft#3: just-one-mega $ git checkout 5.4-renames^0 $ REBASE --onto HEAD base hwmon-just-one === Timing results === Overall timings, using hyperfine (1 warmup run, 3 runs for mega-renames, 10 runs for the other two cases): merge-recursive merge-ort no-renames: 18.912 s ± 0.174 s 14.263 s ± 0.053 s mega-renames: 5964.031 s ± 10.459 s 5504.231 s ± 5.150 s just-one-mega: 149.583 s ± 0.751 s 158.534 s ± 0.498 s A single re-run of each with some breakdowns: --- no-renames --- merge-recursive merge-ort overall runtime: 19.302 s 14.257 s inexact rename detection: 7.603 s 7.906 s everything else: 11.699 s 6.351 s --- mega-renames --- merge-recursive merge-ort overall runtime: 5950.195 s 5499.672 s inexact rename detection: 5746.309 s 5487.120 s everything else: 203.886 s 17.552 s --- just-one-mega --- merge-recursive merge-ort overall runtime: 151.001 s 158.582 s inexact rename detection: 143.448 s 157.835 s everything else: 7.553 s 0.747 s === Timing observations === 0) Maximum speedup The "everything else" row represents the maximum speedup we could achieve if we were to somehow infinitely parallelize inexact rename detection, but leave everything else alone. The fact that this is so much smaller than the real runtime (even in the case with virtually no renames) makes it clear just how overwhelmingly large the time spent on rename detection can be. 1) no-renames 1a) merge-ort is faster than merge-recursive, which is nice. However, this still should not be considered good enough. Although the "merge" backend to rebase (merge-recursive) is sometimes faster than the "apply" backend, this is one of those cases where it is not. In fact, even merge-ort is slower. The "apply" backend can complete this testcase in 6.940 s ± 0.485 s which is about 2x faster than merge-ort and 3x faster than merge-recursive. One goal of the merge-ort performance work will be to make it faster than git-am on this (and similar) testcases. 
2) mega-renames 2a) Obviously rename detection is a huge cost; it's where most the time is spent. We need to cut that down. If we could somehow infinitely parallelize it and drive its time to 0, the merge-recursive time would drop to about 204s, and the merge-ort time would drop to about 17s. I think this particular stat shows I've subtly baked a couple performance improvements into merge-ort and into fast-rebase already. 3) just-one-mega 3a) not much to say here, it just gives some flavor for how rebasing only one patch compares to rebasing 35. === Goals === This patch is obviously just the beginning. Here are some of my goals that this measurement will help us achieve: * Drive the cost of rename detection down considerably for merges * After the above has been achieved, see if there are other slowness factors (which would have previously been overshadowed by rename detection costs) which we can then focus on and also optimize. * Ensure our rebase testcase that requires little rename detection is noticeably faster with merge-ort than with apply-based rebase. Signed-off-by: Elijah Newren <newren@gmail.com> Acked-by: Taylor Blau <ttaylorr@github.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
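The instrumentation itself is ordinary trace2 region markers around the expensive phases; a hedged sketch (the region names and the wrapped function are assumptions, not the exact merge-ort/diffcore-rename hooks):

    #include "git-compat-util.h"
    #include "repository.h"
    #include "trace2.h"

    /* hypothetical stand-in for the expensive rename-detection phase */
    static void detect_renames_for_merge(void)
    {
        /* ... rename detection work ... */
    }

    static void timed_rename_detection(void)
    {
        trace2_region_enter("merge", "renames", the_repository);
        detect_renames_for_merge();
        trace2_region_leave("merge", "renames", the_repository);
    }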
neerajsi-msft pushed a commit to neerajsi-msft/git that referenced this pull request on Feb 17, 2021
Specify the format of the on-disk reverse index 'pack-*.rev' file, as well as prepare the code for the existence of such files. The reverse index maps from pack relative positions (i.e., an index into the array of object which is sorted by their offsets within the packfile) to their position within the 'pack-*.idx' file. Today, this is done by building up a list of (off_t, uint32_t) tuples for each object (the off_t corresponding to that object's offset, and the uint32_t corresponding to its position in the index). To convert between pack and index position quickly, this array of tuples is radix sorted based on its offset. This has two major drawbacks: First, the in-memory cost scales linearly with the number of objects in a pack. Each 'struct revindex_entry' is sizeof(off_t) + sizeof(uint32_t) + padding bytes for a total of 16. To observe this, force Git to load the reverse index by, for e.g., running 'git cat-file --batch-check="%(objectsize:disk)"'. When asking for a single object in a fresh clone of the kernel, Git needs to allocate 120+ MB of memory in order to hold the reverse index in memory. Second, the cost to sort also scales with the size of the pack. Luckily, this is a linear function since 'load_pack_revindex()' uses a radix sort, but this cost still must be paid once per pack per process. As an example, it takes ~60x longer to print the _size_ of an object as it does to print that entire object's _contents_: Benchmark #1: git.compile cat-file --batch <obj Time (mean ± σ): 3.4 ms ± 0.1 ms [User: 3.3 ms, System: 2.1 ms] Range (min … max): 3.2 ms … 3.7 ms 726 runs Benchmark microsoft#2: git.compile cat-file --batch-check="%(objectsize:disk)" <obj Time (mean ± σ): 210.3 ms ± 8.9 ms [User: 188.2 ms, System: 23.2 ms] Range (min … max): 193.7 ms … 224.4 ms 13 runs Instead, avoid computing and sorting the revindex once per process by writing it to a file when the pack itself is generated. The format is relatively straightforward. It contains an array of uint32_t's, the length of which is equal to the number of objects in the pack. The ith entry in this table contains the index position of the ith object in the pack, where "ith object in the pack" is determined by pack offset. One thing that the on-disk format does _not_ contain is the full (up to) eight-byte offset corresponding to each object. This is something that the in-memory revindex contains (it stores an off_t in 'struct revindex_entry' along with the same uint32_t that the on-disk format has). Omit it in the on-disk format, since knowing the index position for some object is sufficient to get a constant-time lookup in the pack-*.idx file to ask for an object's offset within the pack. This trades off between the on-disk size of the 'pack-*.rev' file for runtime to chase down the offset for some object. Even though the lookup is constant time, the constant is heavier, since it can potentially involve two pointer walks in v2 indexes (one to access the 4-byte offset table, and potentially a second to access the double wide offset table). Consider trying to map an object's pack offset to a relative position within that pack. In a cold-cache scenario, more page faults occur while switching between binary searching through the reverse index and searching through the *.idx file for an object's offset. 
Sure enough, with a cold cache (writing '3' into '/proc/sys/vm/drop_caches' after 'sync'ing), printing out the entire object's contents is still marginally faster than printing its size: Benchmark #1: git.compile cat-file --batch-check="%(objectsize:disk)" <obj >/dev/null Time (mean ± σ): 22.6 ms ± 0.5 ms [User: 2.4 ms, System: 7.9 ms] Range (min … max): 21.4 ms … 23.5 ms 41 runs Benchmark microsoft#2: git.compile cat-file --batch <obj >/dev/null Time (mean ± σ): 17.2 ms ± 0.7 ms [User: 2.8 ms, System: 5.5 ms] Range (min … max): 15.6 ms … 18.2 ms 45 runs (Numbers taken in the kernel after cheating and using the next patch to generate a reverse index). There are a couple of approaches to improve cold cache performance not pursued here: - We could include the object offsets in the reverse index format. Predictably, this does result in fewer page faults, but it triples the size of the file, while simultaneously duplicating a ton of data already available in the .idx file. (This was the original way I implemented the format, and it did show `--batch-check='%(objectsize:disk)'` winning out against `--batch`.) On the other hand, this increase in size also results in a large block-cache footprint, which could potentially hurt other workloads. - We could store the mapping from pack to index position in more cache-friendly way, like constructing a binary search tree from the table and writing the values in breadth-first order. This would result in much better locality, but the price you pay is trading O(1) lookup in 'pack_pos_to_index()' for an O(log n) one (since you can no longer directly index the table). So, neither of these approaches are taken here. (Thankfully, the format is versioned, so we are free to pursue these in the future.) But, cold cache performance likely isn't interesting outside of one-off cases like asking for the size of an object directly. In real-world usage, Git is often performing many operations in the revindex (i.e., asking about many objects rather than a single one). The trade-off is worth it, since we will avoid the vast majority of the cost of generating the revindex that the extra pointer chase will look like noise in the following patch's benchmarks. This patch describes the format and prepares callers (like in pack-revindex.c) to be able to read *.rev files once they exist. An implementation of the writer will appear in the next patch, and callers will gradually begin to start using the writer in the patches that follow after that. Signed-off-by: Taylor Blau <me@ttaylorr.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
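A minimal sketch of the lookup the body of a '.rev' file enables, with header handling omitted and names assumed (get_be32() and die() come from Git's compat layer):

    /*
     * Sketch: the .rev body is one network-order uint32_t per object,
     * indexed by pack position, yielding that object's position in the
     * pack-*.idx file.
     */
    static uint32_t rev_pack_pos_to_index(const unsigned char *rev_body,
                                          uint32_t nr_objects,
                                          uint32_t pack_pos)
    {
        if (pack_pos >= nr_objects)
            die("pack position out of range");
        return get_be32(rev_body + (size_t)pack_pos * sizeof(uint32_t));
    }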
derrickstolee pushed a commit that referenced this pull request on Mar 29, 2021
…sponse query_result can be be an empty strbuf (STRBUF_INIT) - in that case trying to read 3 bytes triggers a buffer overflow read (as query_result.buf = '\0'). Therefore we need to check query_result's length before trying to read 3 bytes. This overflow was introduced in: 940b94f (fsmonitor: log invocation of FSMonitor hook to trace2, 2021-02-03) It was found when running the test-suite against ASAN, and can be most easily reproduced with the following command: make GIT_TEST_OPTS="-v" DEFAULT_TEST_TARGET="t7519-status-fsmonitor.sh" \ SANITIZE=address DEVELOPER=1 test ==2235==ERROR: AddressSanitizer: global-buffer-overflow on address 0x0000019e6e5e at pc 0x00000043745c bp 0x7fffd382c520 sp 0x7fffd382bcc8 READ of size 3 at 0x0000019e6e5e thread T0 #0 0x43745b in MemcmpInterceptorCommon(void*, int (*)(void const*, void const*, unsigned long), void const*, void const*, unsigned long) /home/abuild/rpmbuild/BUILD/llvm-11.0.0.src/build/../projects/compiler-rt/lib/asan/../sanitizer_common/sanitizer_common_interceptors.inc:842:7 #1 0x43786d in bcmp /home/abuild/rpmbuild/BUILD/llvm-11.0.0.src/build/../projects/compiler-rt/lib/asan/../sanitizer_common/sanitizer_common_interceptors.inc:887:10 #2 0x80b146 in fsmonitor_is_trivial_response /home/ahunt/oss-fuzz/git/fsmonitor.c:192:10 #3 0x80b146 in query_fsmonitor /home/ahunt/oss-fuzz/git/fsmonitor.c:175:7 #4 0x80a749 in refresh_fsmonitor /home/ahunt/oss-fuzz/git/fsmonitor.c:267:21 #5 0x80bad1 in tweak_fsmonitor /home/ahunt/oss-fuzz/git/fsmonitor.c:429:4 #6 0x90f040 in read_index_from /home/ahunt/oss-fuzz/git/read-cache.c:2321:3 #7 0x8e5d08 in repo_read_index_preload /home/ahunt/oss-fuzz/git/preload-index.c:164:15 #8 0x52dd45 in prepare_index /home/ahunt/oss-fuzz/git/builtin/commit.c:363:6 #9 0x52a188 in cmd_commit /home/ahunt/oss-fuzz/git/builtin/commit.c:1588:15 #10 0x4ce77e in run_builtin /home/ahunt/oss-fuzz/git/git.c:453:11 #11 0x4ccb18 in handle_builtin /home/ahunt/oss-fuzz/git/git.c:704:3 #12 0x4cb01c in run_argv /home/ahunt/oss-fuzz/git/git.c:771:4 #13 0x4cb01c in cmd_main /home/ahunt/oss-fuzz/git/git.c:902:19 #14 0x6aca8d in main /home/ahunt/oss-fuzz/git/common-main.c:52:11 #15 0x7fb027bf5349 in __libc_start_main (/lib64/libc.so.6+0x24349) #16 0x4206b9 in _start /home/abuild/rpmbuild/BUILD/glibc-2.26/csu/../sysdeps/x86_64/start.S:120 0x0000019e6e5e is located 2 bytes to the left of global variable 'strbuf_slopbuf' defined in 'strbuf.c:51:6' (0x19e6e60) of size 1 'strbuf_slopbuf' is ascii string '' 0x0000019e6e5e is located 126 bytes to the right of global variable 'signals' defined in 'sigchain.c:11:31' (0x19e6be0) of size 512 SUMMARY: AddressSanitizer: global-buffer-overflow /home/abuild/rpmbuild/BUILD/llvm-11.0.0.src/build/../projects/compiler-rt/lib/asan/../sanitizer_common/sanitizer_common_interceptors.inc:842:7 in MemcmpInterceptorCommon(void*, int (*)(void const*, void const*, unsigned long), void const*, void const*, unsigned long) Shadow bytes around the buggy address: 0x000080334d70: f9 f9 f9 f9 00 f9 f9 f9 f9 f9 f9 f9 00 00 00 00 0x000080334d80: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 0x000080334d90: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 0x000080334da0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 0x000080334db0: 00 00 00 00 00 00 00 00 00 00 00 00 f9 f9 f9 f9 =>0x000080334dc0: f9 f9 f9 f9 f9 f9 f9 f9 f9 f9 f9[f9]01 f9 f9 f9 0x000080334dd0: f9 f9 f9 f9 03 f9 f9 f9 f9 f9 f9 f9 02 f9 f9 f9 0x000080334de0: f9 f9 f9 f9 00 f9 f9 f9 f9 f9 f9 f9 04 f9 f9 f9 0x000080334df0: f9 f9 f9 f9 01 f9 f9 f9 f9 f9 f9 f9 00 00 00 00 
0x000080334e00: f9 f9 f9 f9 00 00 00 00 f9 f9 f9 f9 01 f9 f9 f9 0x000080334e10: f9 f9 f9 f9 04 f9 f9 f9 f9 f9 f9 f9 00 f9 f9 f9 Shadow byte legend (one shadow byte represents 8 application bytes): Addressable: 00 Partially addressable: 01 02 03 04 05 06 07 Heap left redzone: fa Freed heap region: fd Stack left redzone: f1 Stack mid redzone: f2 Stack right redzone: f3 Stack after return: f5 Stack use after scope: f8 Global redzone: f9 Global init order: f6 Poisoned by user: f7 Container overflow: fc Array cookie: ac Intra object redzone: bb ASan internal: fe Left alloca redzone: ca Right alloca redzone: cb Shadow gap: cc Signed-off-by: Andrzej Hunt <ajrhunt@google.com> Acked-by: Jeff Hostetler <jeffhost@microsoft.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
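The shape of the fix is a length guard before the three-byte comparison; a simplified sketch assuming strbuf.h and string.h are included (the literal token below is a placeholder, not necessarily the hook's exact trivial reply):

    /*
     * Sketch: refuse to memcmp() three bytes out of a strbuf shorter
     * than three bytes, so an empty STRBUF_INIT buffer (whose buf
     * points at the shared slopbuf) is never over-read.
     */
    static int is_trivial_response(const struct strbuf *query_result)
    {
        return query_result->len >= 3 &&
               !memcmp(query_result->buf + query_result->len - 3,
                       "\0/\0", 3);
    }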
dscho pushed a commit that referenced this pull request on May 22, 2021
Commit 878f988 (t/test-lib: teach --chain-lint to detect broken &&-chains in subshells, 2018-07-11) introduced additional chain-lint tests which add an extra "sed" pipeline to each test we run. This has a measurable impact on runtime. Here are timings with and without a new environment variable (added by this patch) that lets you disable just the additional sed-based chain-lint tests: Benchmark #1: GIT_TEST_CHAIN_LINT_HARDER=1 make test Time (mean ± σ): 64.202 s ± 1.030 s [User: 622.469 s, System: 301.402 s] Range (min … max): 61.571 s … 65.662 s 10 runs Benchmark #2: GIT_TEST_CHAIN_LINT_HARDER=0 make test Time (mean ± σ): 57.591 s ± 0.333 s [User: 529.368 s, System: 270.618 s] Range (min … max): 57.143 s … 58.309 s 10 runs Summary 'GIT_TEST_CHAIN_LINT_HARDER=0 make test' ran 1.11 ± 0.02 times faster than 'GIT_TEST_CHAIN_LINT_HARDER=1 make test' Of course those extra lint checks are doing something useful, so paying a few extra seconds (at least on Linux) isn't so bad (though note the CPU time; we're bounded in our parallel run here by the slowest test, so it really is ~120s of CPU improvement). But we can observe that there are some test scripts where they produce a much stronger effect, and provide less value. In t0027 and t3070 we run a very large number of small tests, all driven by a series of functions/loops which are filling in the test bodies. There we get much less bang for our buck in terms of bug-finding versus CPU cost. This patch introduces a mechanism for controlling when those extra lint checks are run, at two levels: - a user can ask to disable or to force-enable the checks by setting GIT_TEST_CHAIN_LINT_HARDER - if the user hasn't specified a preference, individual scripts can disable the checks by setting GIT_TEST_CHAIN_LINT_HARDER_DEFAULT; scripts which don't set that get the current behavior of enabling them. In addition, this patch flips the default for t0027 and t3070's mass-generated sections to disable the extra checks. Here are the timing results for t0027: Benchmark #1: GIT_TEST_CHAIN_LINT_HARDER=1 ./t0027-auto-crlf.sh Time (mean ± σ): 17.078 s ± 0.848 s [User: 14.878 s, System: 7.075 s] Range (min … max): 15.952 s … 18.421 s 10 runs Benchmark #2: GIT_TEST_CHAIN_LINT_HARDER=0 ./t0027-auto-crlf.sh Time (mean ± σ): 9.063 s ± 0.759 s [User: 7.890 s, System: 3.362 s] Range (min … max): 7.747 s … 10.619 s 10 runs Benchmark #3: ./t0027-auto-crlf.sh Time (mean ± σ): 9.186 s ± 0.881 s [User: 7.957 s, System: 3.427 s] Range (min … max): 7.796 s … 10.498 s 10 runs Summary 'GIT_TEST_CHAIN_LINT_HARDER=0 ./t0027-auto-crlf.sh' ran 1.01 ± 0.13 times faster than './t0027-auto-crlf.sh' 1.88 ± 0.18 times faster than 'GIT_TEST_CHAIN_LINT_HARDER=1 ./t0027-auto-crlf.sh' We can see that disabling the checks for the whole script buys us an almost 2x speedup. But the new default behavior, disabling them only for the mass-generated part, gets us most of that speedup (but still leaves the checks on for further manual tests people might write). As a side note, I'd caution about comparing runtimes and CPU seconds between this timing and the earlier "make test" one. In "make test", we're running a lot of scripts in parallel, so the CPU is throttling down (and thus a CPU second saved here would count for more during a parallel run; the same work takes more CPU seconds there). 
We get similar results for t3070: Benchmark #1: GIT_TEST_CHAIN_LINT_HARDER=1 ./t3070-wildmatch.sh Time (mean ± σ): 20.054 s ± 3.967 s [User: 16.003 s, System: 8.286 s] Range (min … max): 11.891 s … 23.671 s 10 runs Benchmark #2: GIT_TEST_CHAIN_LINT_HARDER=0 ./t3070-wildmatch.sh Time (mean ± σ): 12.399 s ± 2.256 s [User: 7.542 s, System: 5.342 s] Range (min … max): 9.606 s … 15.727 s 10 runs Benchmark #3: ./t3070-wildmatch.sh Time (mean ± σ): 10.726 s ± 3.476 s [User: 6.790 s, System: 4.365 s] Range (min … max): 5.444 s … 15.376 s 10 runs Summary './t3070-wildmatch.sh' ran 1.16 ± 0.43 times faster than 'GIT_TEST_CHAIN_LINT_HARDER=0 ./t3070-wildmatch.sh' 1.87 ± 0.71 times faster than 'GIT_TEST_CHAIN_LINT_HARDER=1 ./t3070-wildmatch.sh' Again, we get almost a 2x speedup disabling these. In this case, there are no tests not covered by the script's "default to disable" behavior, so the second two benchmarks should be the same (and while they do differ, you can see the variance is quite high but they're within one standard deviation). So it seems like for these two scripts, at least, disabling the extra checks is a reasonable tradeoff. Sadly, the overall runtime of "make test" on my system doesn't get much faster. But that's because we're mostly limited by the cost of the single biggest test. Here are the top-5 tests by wall-clock time from a parallel run, before my patch: 57.9192368984222 t9001-send-email.sh 45.6329638957977 t0027-auto-crlf.sh 32.5278220176697 t3070-wildmatch.sh 22.2701289653778 t7610-mergetool.sh 20.8635759353638 t1701-racy-split-index.sh And after: 57.1476998329163 t9001-send-email.sh 33.776211977005 t0027-auto-crlf.sh 21.3116669654846 t7610-mergetool.sh 20.7748689651489 t1701-racy-split-index.sh 19.6957249641418 t7112-reset-submodule.sh We dropped 12s from t0027, and t3070 dropped off our list entirely at around 16s. In both cases we're bound by t9001, but its slowness is due to the actual tests, so we'll have to deal with it in a different way. But this reduces overall CPU, and means that dealing with t9001 (by improving the speed of send-email or splitting it apart) will let us reduce our overall runtime even on multi-core machines. Signed-off-by: Jeff King <peff@peff.net> Signed-off-by: Junio C Hamano <gitster@pobox.com>
ldennington pushed a commit to ldennington/git that referenced this pull request on Jun 2, 2021
shorten_unambiguous_ref() returns an allocated string. We have to track it separately from the const refname. This leak has existed since: 9ab55da (git symbolic-ref --delete $symref, 2012-10-21) This leak was found when running t0001 with LSAN, see also LSAN output below: Direct leak of 19 byte(s) in 1 object(s) allocated from: #0 0x486514 in strdup /home/abuild/rpmbuild/BUILD/llvm-11.0.0.src/build/../projects/compiler-rt/lib/asan/asan_interceptors.cpp:452:3 microsoft#1 0x9ab048 in xstrdup /home/ahunt/oss-fuzz/git/wrapper.c:29:14 microsoft#2 0x8b452f in refs_shorten_unambiguous_ref /home/ahunt/oss-fuzz/git/refs.c microsoft#3 0x8b47e8 in shorten_unambiguous_ref /home/ahunt/oss-fuzz/git/refs.c:1287:9 microsoft#4 0x679fce in check_symref /home/ahunt/oss-fuzz/git/builtin/symbolic-ref.c:28:14 microsoft#5 0x679ad8 in cmd_symbolic_ref /home/ahunt/oss-fuzz/git/builtin/symbolic-ref.c:70:9 microsoft#6 0x4cd60d in run_builtin /home/ahunt/oss-fuzz/git/git.c:453:11 microsoft#7 0x4cb2da in handle_builtin /home/ahunt/oss-fuzz/git/git.c:704:3 microsoft#8 0x4ccc37 in run_argv /home/ahunt/oss-fuzz/git/git.c:771:4 microsoft#9 0x4cac29 in cmd_main /home/ahunt/oss-fuzz/git/git.c:902:19 microsoft#10 0x69cc6e in main /home/ahunt/oss-fuzz/git/common-main.c:52:11 microsoft#11 0x7f98388a4349 in __libc_start_main (/lib64/libc.so.6+0x24349) Signed-off-by: Andrzej Hunt <ajrhunt@google.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
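The fix pattern is simply to keep the allocated short name in its own non-const pointer so it can be freed; a hedged sketch assuming refs.h and the usual system headers (the wrapper function is hypothetical):

    /*
     * Sketch: shorten_unambiguous_ref() returns newly allocated memory,
     * so hold it separately from the const refname and free it when done.
     */
    static void report_short_name(const char *refname)
    {
        char *shortname = shorten_unambiguous_ref(refname, 0);

        printf("%s\n", shortname);
        free(shortname);   /* we own this allocation */
    }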
ldennington pushed a commit to ldennington/git that referenced this pull request on Jun 2, 2021
dwim_ref() allocs a new string into ref. Instead of setting to NULL to discard it, we can FREE_AND_NULL. This leak appears to have been introduced in: 4cf76f6 (builtin/reset: compute checkout metadata for reset, 2020-03-16) This leak was found when running t0001 with LSAN, see also LSAN output below: Direct leak of 5 byte(s) in 1 object(s) allocated from: #0 0x486514 in strdup /home/abuild/rpmbuild/BUILD/llvm-11.0.0.src/build/../projects/compiler-rt/lib/asan/asan_interceptors.cpp:452:3 microsoft#1 0x9a7108 in xstrdup /home/ahunt/oss-fuzz/git/wrapper.c:29:14 microsoft#2 0x8add6b in expand_ref /home/ahunt/oss-fuzz/git/refs.c:670:12 microsoft#3 0x8ad777 in repo_dwim_ref /home/ahunt/oss-fuzz/git/refs.c:644:22 microsoft#4 0x6394af in dwim_ref /home/ahunt/oss-fuzz/git/./refs.h:162:9 microsoft#5 0x637e5c in cmd_reset /home/ahunt/oss-fuzz/git/builtin/reset.c:426:4 microsoft#6 0x4cd60d in run_builtin /home/ahunt/oss-fuzz/git/git.c:453:11 microsoft#7 0x4cb2da in handle_builtin /home/ahunt/oss-fuzz/git/git.c:704:3 microsoft#8 0x4ccc37 in run_argv /home/ahunt/oss-fuzz/git/git.c:771:4 microsoft#9 0x4cac29 in cmd_main /home/ahunt/oss-fuzz/git/git.c:902:19 microsoft#10 0x69c5ce in main /home/ahunt/oss-fuzz/git/common-main.c:52:11 microsoft#11 0x7f57ebb9d349 in __libc_start_main (/lib64/libc.so.6+0x24349) Signed-off-by: Andrzej Hunt <ajrhunt@google.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
ldennington pushed a commit to ldennington/git that referenced this pull request on Jun 2, 2021
Most of these pointers can safely be freed when cmd_clone() completes, therefore we make sure to free them. The one exception is that we have to UNLEAK(repo) because it can point either to argv[0], or a malloc'd string returned by absolute_pathdup(). We also have to free(path) in the middle of cmd_clone(): later during cmd_clone(), path is unconditionally overwritten with a different path, triggering a leak. Freeing the first path immediately after use (but only in the case where it contains data) seems like the cleanest solution, as opposed to freeing it unconditionally before path is reused for another path. This leak appears to have been introduced in: f38aa83 (use local cloning if insteadOf makes a local URL, 2014-07-17) These leaks were found when running t0001 with LSAN, see also an excerpt of the LSAN output below (the full list is omitted because it's far too long, and mostly consists of indirect leakage of members of the refs we are freeing). Direct leak of 178 byte(s) in 1 object(s) allocated from: #0 0x49a53d in malloc /home/abuild/rpmbuild/BUILD/llvm-11.0.0.src/build/../projects/compiler-rt/lib/asan/asan_malloc_linux.cpp:145:3 microsoft#1 0x9a6ff4 in do_xmalloc /home/ahunt/oss-fuzz/git/wrapper.c:41:8 microsoft#2 0x9a6fca in xmalloc /home/ahunt/oss-fuzz/git/wrapper.c:62:9 microsoft#3 0x8ce296 in copy_ref /home/ahunt/oss-fuzz/git/remote.c:885:8 microsoft#4 0x8d2ebd in guess_remote_head /home/ahunt/oss-fuzz/git/remote.c:2215:10 microsoft#5 0x51d0c5 in cmd_clone /home/ahunt/oss-fuzz/git/builtin/clone.c:1308:4 microsoft#6 0x4cd60d in run_builtin /home/ahunt/oss-fuzz/git/git.c:453:11 microsoft#7 0x4cb2da in handle_builtin /home/ahunt/oss-fuzz/git/git.c:704:3 microsoft#8 0x4ccc37 in run_argv /home/ahunt/oss-fuzz/git/git.c:771:4 microsoft#9 0x4cac29 in cmd_main /home/ahunt/oss-fuzz/git/git.c:902:19 microsoft#10 0x69c45e in main /home/ahunt/oss-fuzz/git/common-main.c:52:11 microsoft#11 0x7f6a459d5349 in __libc_start_main (/lib64/libc.so.6+0x24349) Direct leak of 165 byte(s) in 1 object(s) allocated from: #0 0x49a53d in malloc /home/abuild/rpmbuild/BUILD/llvm-11.0.0.src/build/../projects/compiler-rt/lib/asan/asan_malloc_linux.cpp:145:3 microsoft#1 0x9a6fc4 in do_xmalloc /home/ahunt/oss-fuzz/git/wrapper.c:41:8 microsoft#2 0x9a6f9a in xmalloc /home/ahunt/oss-fuzz/git/wrapper.c:62:9 microsoft#3 0x8ce266 in copy_ref /home/ahunt/oss-fuzz/git/remote.c:885:8 microsoft#4 0x51e9bd in wanted_peer_refs /home/ahunt/oss-fuzz/git/builtin/clone.c:574:21 microsoft#5 0x51cfe1 in cmd_clone /home/ahunt/oss-fuzz/git/builtin/clone.c:1284:17 microsoft#6 0x4cd60d in run_builtin /home/ahunt/oss-fuzz/git/git.c:453:11 microsoft#7 0x4cb2da in handle_builtin /home/ahunt/oss-fuzz/git/git.c:704:3 microsoft#8 0x4ccc37 in run_argv /home/ahunt/oss-fuzz/git/git.c:771:4 microsoft#9 0x4cac29 in cmd_main /home/ahunt/oss-fuzz/git/git.c:902:19 microsoft#10 0x69c42e in main /home/ahunt/oss-fuzz/git/common-main.c:52:11 microsoft#11 0x7f8fef0c2349 in __libc_start_main (/lib64/libc.so.6+0x24349) Direct leak of 178 byte(s) in 1 object(s) allocated from: #0 0x49a53d in malloc /home/abuild/rpmbuild/BUILD/llvm-11.0.0.src/build/../projects/compiler-rt/lib/asan/asan_malloc_linux.cpp:145:3 microsoft#1 0x9a6ff4 in do_xmalloc /home/ahunt/oss-fuzz/git/wrapper.c:41:8 microsoft#2 0x9a6fca in xmalloc /home/ahunt/oss-fuzz/git/wrapper.c:62:9 microsoft#3 0x8ce296 in copy_ref /home/ahunt/oss-fuzz/git/remote.c:885:8 microsoft#4 0x8d2ebd in guess_remote_head /home/ahunt/oss-fuzz/git/remote.c:2215:10 microsoft#5 0x51d0c5 in cmd_clone 
/home/ahunt/oss-fuzz/git/builtin/clone.c:1308:4 microsoft#6 0x4cd60d in run_builtin /home/ahunt/oss-fuzz/git/git.c:453:11 microsoft#7 0x4cb2da in handle_builtin /home/ahunt/oss-fuzz/git/git.c:704:3 microsoft#8 0x4ccc37 in run_argv /home/ahunt/oss-fuzz/git/git.c:771:4 microsoft#9 0x4cac29 in cmd_main /home/ahunt/oss-fuzz/git/git.c:902:19 microsoft#10 0x69c45e in main /home/ahunt/oss-fuzz/git/common-main.c:52:11 microsoft#11 0x7f6a459d5349 in __libc_start_main (/lib64/libc.so.6+0x24349) Direct leak of 165 byte(s) in 1 object(s) allocated from: #0 0x49a6b2 in calloc /home/abuild/rpmbuild/BUILD/llvm-11.0.0.src/build/../projects/compiler-rt/lib/asan/asan_malloc_linux.cpp:154:3 microsoft#1 0x9a72f2 in xcalloc /home/ahunt/oss-fuzz/git/wrapper.c:140:8 microsoft#2 0x8ce203 in alloc_ref_with_prefix /home/ahunt/oss-fuzz/git/remote.c:867:20 microsoft#3 0x8ce1a2 in alloc_ref /home/ahunt/oss-fuzz/git/remote.c:875:9 microsoft#4 0x72f63e in process_ref_v2 /home/ahunt/oss-fuzz/git/connect.c:426:8 microsoft#5 0x72f21a in get_remote_refs /home/ahunt/oss-fuzz/git/connect.c:525:8 microsoft#6 0x979ab7 in handshake /home/ahunt/oss-fuzz/git/transport.c:305:4 microsoft#7 0x97872d in get_refs_via_connect /home/ahunt/oss-fuzz/git/transport.c:339:9 microsoft#8 0x9774b5 in transport_get_remote_refs /home/ahunt/oss-fuzz/git/transport.c:1388:4 microsoft#9 0x51cf80 in cmd_clone /home/ahunt/oss-fuzz/git/builtin/clone.c:1271:9 microsoft#10 0x4cd60d in run_builtin /home/ahunt/oss-fuzz/git/git.c:453:11 microsoft#11 0x4cb2da in handle_builtin /home/ahunt/oss-fuzz/git/git.c:704:3 microsoft#12 0x4ccc37 in run_argv /home/ahunt/oss-fuzz/git/git.c:771:4 microsoft#13 0x4cac29 in cmd_main /home/ahunt/oss-fuzz/git/git.c:902:19 microsoft#14 0x69c45e in main /home/ahunt/oss-fuzz/git/common-main.c:52:11 microsoft#15 0x7f6a459d5349 in __libc_start_main (/lib64/libc.so.6+0x24349) Direct leak of 105 byte(s) in 1 object(s) allocated from: #0 0x49a859 in realloc /home/abuild/rpmbuild/BUILD/llvm-11.0.0.src/build/../projects/compiler-rt/lib/asan/asan_malloc_linux.cpp:164:3 microsoft#1 0x9a71f6 in xrealloc /home/ahunt/oss-fuzz/git/wrapper.c:126:8 microsoft#2 0x93622d in strbuf_grow /home/ahunt/oss-fuzz/git/strbuf.c:98:2 microsoft#3 0x937a73 in strbuf_addch /home/ahunt/oss-fuzz/git/./strbuf.h:231:3 microsoft#4 0x939fcd in strbuf_add_absolute_path /home/ahunt/oss-fuzz/git/strbuf.c:911:4 microsoft#5 0x69d3ce in absolute_pathdup /home/ahunt/oss-fuzz/git/abspath.c:261:2 microsoft#6 0x51c688 in cmd_clone /home/ahunt/oss-fuzz/git/builtin/clone.c:1021:10 microsoft#7 0x4cd60d in run_builtin /home/ahunt/oss-fuzz/git/git.c:453:11 microsoft#8 0x4cb2da in handle_builtin /home/ahunt/oss-fuzz/git/git.c:704:3 microsoft#9 0x4ccc37 in run_argv /home/ahunt/oss-fuzz/git/git.c:771:4 microsoft#10 0x4cac29 in cmd_main /home/ahunt/oss-fuzz/git/git.c:902:19 microsoft#11 0x69c45e in main /home/ahunt/oss-fuzz/git/common-main.c:52:11 microsoft#12 0x7f6a459d5349 in __libc_start_main (/lib64/libc.so.6+0x24349) Signed-off-by: Andrzej Hunt <ajrhunt@google.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
ldennington pushed a commit to ldennington/git that referenced this pull request on Jun 2, 2021
Make sure that we release the temporary strbuf during dwim_branch() for all codepaths (and not just for the early return). This leak appears to have been introduced in: f60a7b7 (worktree: teach "add" to check out existing branches, 2018-04-24) Note that UNLEAK(branchname) is still needed: the returned result is used in add(), and is stored in a pointer which is used to point at one of: - a string literal ("HEAD") - member of argv (whatever the user specified in their invocation) - or our newly allocated string returned from dwim_branch() Fixing the branchname leak isn't impossible, but does not seem worthwhile given that add() is called directly from cmd_main(), and cmd_main() returns immediately thereafter - UNLEAK is good enough. This leak was found when running t0001 with LSAN, see also LSAN output below: Direct leak of 60 byte(s) in 1 object(s) allocated from: #0 0x49a859 in realloc /home/abuild/rpmbuild/BUILD/llvm-11.0.0.src/build/../projects/compiler-rt/lib/asan/asan_malloc_linux.cpp:164:3 microsoft#1 0x9ab076 in xrealloc /home/ahunt/oss-fuzz/git/wrapper.c:126:8 microsoft#2 0x939fcd in strbuf_grow /home/ahunt/oss-fuzz/git/strbuf.c:98:2 microsoft#3 0x93af53 in strbuf_splice /home/ahunt/oss-fuzz/git/strbuf.c:239:3 microsoft#4 0x83559a in strbuf_check_branch_ref /home/ahunt/oss-fuzz/git/object-name.c:1593:2 microsoft#5 0x6988b9 in dwim_branch /home/ahunt/oss-fuzz/git/builtin/worktree.c:454:20 microsoft#6 0x695f8f in add /home/ahunt/oss-fuzz/git/builtin/worktree.c:525:19 microsoft#7 0x694a04 in cmd_worktree /home/ahunt/oss-fuzz/git/builtin/worktree.c:1036:10 microsoft#8 0x4cd60d in run_builtin /home/ahunt/oss-fuzz/git/git.c:453:11 microsoft#9 0x4cb2da in handle_builtin /home/ahunt/oss-fuzz/git/git.c:704:3 microsoft#10 0x4ccc37 in run_argv /home/ahunt/oss-fuzz/git/git.c:771:4 microsoft#11 0x4cac29 in cmd_main /home/ahunt/oss-fuzz/git/git.c:902:19 microsoft#12 0x69caee in main /home/ahunt/oss-fuzz/git/common-main.c:52:11 microsoft#13 0x7f7b7dd10349 in __libc_start_main (/lib64/libc.so.6+0x24349) Signed-off-by: Andrzej Hunt <ajrhunt@google.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
ldennington pushed a commit to ldennington/git that referenced this pull request on Jun 2, 2021
The primary goal of this change is to stop leaking init_db_template_dir. This leak can happen because:

1. git_init_db_config() allocates new memory into init_db_template_dir without first freeing the existing value.
2. init_db_template_dir might already contain data, either because:
   2.1 git_config() can be invoked twice with this callback in a single process - at least 2 allocations are likely.
   2.2 A single git_config() allocation can invoke the callback multiple times for a given key (see further explanation in the function docs) - each of those calls will trigger another leak.

The simplest fix for the leak would be to free(init_db_template_dir) before overwriting it. Instead we choose to convert to fetching init.templatedir via git_config_get_value() as that is more explicit, more efficient, and avoids allocations (the returned result is owned by the config cache, so we aren't responsible for freeing it).

If we remove init_db_template_dir, git_init_db_config() ends up being responsible only for forwarding core.* config values to platform_core_config(). However platform_core_config() already ignores non-core.* config values, so we can safely remove git_init_db_config() and invoke git_config() directly with platform_core_config() as the callback.

The platform_core_config forwarding was originally added in:
2878533 (mingw: respect core.hidedotfiles = false in git-init again, 2019-03-11)

And I suspect the potential for a leak existed since the original implementation of git_init_db_config in:
90b4518 (Add `init.templatedir` configuration variable., 2010-02-17)

LSAN output from t0001:

Direct leak of 73 byte(s) in 1 object(s) allocated from:
    #0 0x49a859 in realloc /home/abuild/rpmbuild/BUILD/llvm-11.0.0.src/build/../projects/compiler-rt/lib/asan/asan_malloc_linux.cpp:164:3
    #1 0x9a7276 in xrealloc /home/ahunt/oss-fuzz/git/wrapper.c:126:8
    #2 0x9362ad in strbuf_grow /home/ahunt/oss-fuzz/git/strbuf.c:98:2
    #3 0x936eaa in strbuf_add /home/ahunt/oss-fuzz/git/strbuf.c:295:2
    #4 0x868112 in strbuf_addstr /home/ahunt/oss-fuzz/git/./strbuf.h:304:2
    #5 0x86a8ad in expand_user_path /home/ahunt/oss-fuzz/git/path.c:758:2
    #6 0x720bb1 in git_config_pathname /home/ahunt/oss-fuzz/git/config.c:1287:10
    #7 0x5960e2 in git_init_db_config /home/ahunt/oss-fuzz/git/builtin/init-db.c:161:11
    #8 0x7255b8 in configset_iter /home/ahunt/oss-fuzz/git/config.c:1982:7
    #9 0x7253fc in repo_config /home/ahunt/oss-fuzz/git/config.c:2311:2
    #10 0x725ca7 in git_config /home/ahunt/oss-fuzz/git/config.c:2399:2
    #11 0x593e8d in create_default_files /home/ahunt/oss-fuzz/git/builtin/init-db.c:225:2
    #12 0x5935c6 in init_db /home/ahunt/oss-fuzz/git/builtin/init-db.c:449:11
    #13 0x59588e in cmd_init_db /home/ahunt/oss-fuzz/git/builtin/init-db.c:714:9
    #14 0x4cd60d in run_builtin /home/ahunt/oss-fuzz/git/git.c:453:11
    #15 0x4cb2da in handle_builtin /home/ahunt/oss-fuzz/git/git.c:704:3
    #16 0x4ccc37 in run_argv /home/ahunt/oss-fuzz/git/git.c:771:4
    #17 0x4cac29 in cmd_main /home/ahunt/oss-fuzz/git/git.c:902:19
    #18 0x69c4de in main /home/ahunt/oss-fuzz/git/common-main.c:52:11
    #19 0x7f23552d6349 in __libc_start_main (/lib64/libc.so.6+0x24349)

Signed-off-by: Andrzej Hunt <ajrhunt@google.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
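A small sketch of the approach the commit message describes, assuming git's config-cache API from `config.h` (the commit message itself names git_config_get_value()); the surrounding function is illustrative, not the actual init-db code:

```c
#include "git-compat-util.h"
#include "config.h"

/*
 * Fetch init.templatedir on demand instead of caching it in a config
 * callback. The string returned via git_config_get_value() is owned by
 * the config cache, so there is nothing for the caller to free (or leak).
 */
static const char *template_dir_sketch(void)
{
	const char *dir = NULL;

	/* returns 0 and sets 'dir' only if init.templatedir is configured */
	if (git_config_get_value("init.templatedir", &dir))
		return NULL;
	return dir;	/* borrowed pointer; do not free */
}
```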
vdye pushed a commit that referenced this pull request on May 13, 2022
…ckout` builtin

This integrates the `sparse-checkout` builtin with the sparse index. The tricky part here is that we need to partially expand the index when we are modifying the sparse-checkout definition.

Note that we modify the pattern list in a careful way: we create a `struct pattern_list` in-memory in `builtin/sparse-checkout.c` then apply those patterns to the index before writing the patterns to the sparse-checkout file.

The `update_sparsity()` method does the work to assign the `SKIP_WORKTREE` bit appropriately, but this doesn't work if the files that are within the new sparse-checkout cone are still hidden behind a sparse directory. The new `expand_to_pattern_list()` method does the hard work of expanding the sparse directories that are now within the new patterns. This expands only as far as needed, possibly creating new sparse directory entries.

This method does not contract existing files to sparse directories, and a big reason why is because of the check for ignored files as we delete those directories. The `clean_tracked_sparse_directories()` method is called after `update_sparsity()`, but we need to read the `A/B/.gitignore` file (or lack thereof) before we can delete `A/B/`. If we convert to sparse too quickly, then we lose this information and cause a full expansion.

Most of the correctness is handled by existing tests in `t1092`, but I add checks for `ensure_not_expanded` in some hopefully interesting cases.

As for performance, `git sparse-checkout set` can be slow if it needs to move a lot of files. However, no-op `git sparse-checkout set` (i.e. set the sparse-checkout cone to only include files at root, and do this on repeat) has these performance results on Linux in a monorepo with 2+ million files at `HEAD`:

```
Benchmark #1: baseline
  Time (mean ± σ):     10.465 s ±  0.018 s    [User: 9.885 s, System: 0.573 s]
  Range (min … max):   10.450 s … 10.497 s    5 runs

Benchmark #2: new code
  Time (mean ± σ):      68.9 ms ±   2.9 ms    [User: 45.8 ms, System: 17.1 ms]
  Range (min … max):    63.4 ms …  74.0 ms    41 runs

Summary
  'new code' ran 151.89 ± 6.30 times faster than 'baseline'
```
dscho pushed a commit that referenced this pull request on Jun 17, 2022
dscho pushed a commit that referenced this pull request on Jun 17, 2022
dscho pushed a commit that referenced this pull request on Jun 17, 2022
dscho pushed a commit that referenced this pull request on Jun 17, 2022
dscho pushed a commit that referenced this pull request on Jun 17, 2022
dscho pushed a commit that referenced this pull request on Jun 17, 2022
dscho pushed a commit that referenced this pull request on Jun 17, 2022
dscho pushed a commit that referenced this pull request on Jun 18, 2022
dscho pushed a commit that referenced this pull request on Jun 22, 2022
dscho pushed a commit that referenced this pull request on Jun 27, 2022
derrickstolee added a commit that referenced this pull request on Jun 27, 2022
dscho pushed a commit that referenced this pull request on Jul 12, 2022
ldennington pushed a commit to ldennington/git that referenced this pull request on Jul 15, 2022

"dst" can legitimately be "0\{40\}" for a creation patch, e.g. when the stat information is stale, but it falls into "look at work tree" case. The original description in b6d8f30 ([PATCH] diff-raw format update take #2., 2005-05-23) forgot that deletion also makes the "dst" 0* SHA-1.

Signed-off-by: Philippe Blain <levraiphilippeblain@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
ldennington pushed a commit to ldennington/git that referenced this pull request on Jul 15, 2022
Near the end of the "Raw output format" section, an example shows the output of 'git diff-files' for a tracked file modified on disk but not yet added to the index. However the wording is:

    <sha1> is shown as all 0's if a file is new on the filesystem
    and it is out of sync with the index.

which is confusing since it can be understood to mean that 'file' is a new, yet untracked file, in which case 'git diff-files' does not care about it at all.

When this example was introduced all the way back in c64b9b8 (Reference documentation for the core git commands., 2005-05-05), 'old' and 'new' referred to the two entities being compared, depending on the command being used (diff-index, diff-tree or diff-files - which at the time were diff-cache, diff-tree and show-diff). The wording used at the time was:

    <new-sha1> is shown as all 0's if new is a file on the
    filesystem and it is out of sync with the cache.

This section was reworked in 81e50ea ([PATCH] The diff-raw format updates., 2005-05-21) and the mention of the meaning of 'new' and 'old' was removed. Then in f73ae1f (Some typos and light editing of various manpages, 2005-10-05), the wording was changed to what it is now.

In addition, in b6d8f30 ([PATCH] diff-raw format update take #2., 2005-05-23), the section was further reworked and did not use '<sha1>' anymore, making the example the sole user of this token.

Rework the introductory sentence of the example to instead refer to 'sha1 for "dst"', which is what the text description above it uses, and fix the wording so that we do not mention a "new file". While at it, also tweak the wording used in the description of the raw format to explicitly state that all 0's are used for the destination hash if the working tree is out of sync with the index, instead of the more vague "look at worktree".

Signed-off-by: Philippe Blain <levraiphilippeblain@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
ldennington pushed a commit to ldennington/git that referenced this pull request on Jul 15, 2022
The two examples in the doc for 'git diff-index' were not updated when the raw output format was changed in 81e50ea ([PATCH] The diff-raw format updates., 2005-05-21) (first example) and in b6d8f30 ([PATCH] diff-raw format update take #2., 2005-05-23) and 7cb6ac1 (diff: diff_aligned_abbrev: remove ellipsis after abbreviated SHA-1 value, 2017-12-03) (second example).

Update the output, inventing some characters to complete the source hash in the second example. Also correct the destination mode in the second example, which was wrongly '100664' since the addition of the example in c64b9b8 (Reference documentation for the core git commands., 2005-05-05).

Signed-off-by: Philippe Blain <levraiphilippeblain@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
ldennington pushed a commit to ldennington/git that referenced this pull request on Jul 15, 2022
Git uses zlib for its own object store, but calls gzip when creating tgz archives. Add an option to perform the gzip compression for the latter using zlib, without depending on the external gzip binary.

Plug it in by making write_block a function pointer and switching to a compressing variant if the filter command has the magic value "git archive gzip". Does that indirection slow down tar creation? Not really, at least not in this test:

  $ hyperfine -w3 -L rev HEAD,origin/main -p 'git checkout {rev} && make' \
    './git -C ../linux archive --format=tar HEAD # {rev}'
  Benchmark #1: ./git -C ../linux archive --format=tar HEAD # HEAD
    Time (mean ± σ):      4.044 s ±  0.007 s    [User: 3.901 s, System: 0.137 s]
    Range (min … max):    4.038 s …  4.059 s    10 runs

  Benchmark #2: ./git -C ../linux archive --format=tar HEAD # origin/main
    Time (mean ± σ):      4.047 s ±  0.009 s    [User: 3.903 s, System: 0.138 s]
    Range (min … max):    4.038 s …  4.066 s    10 runs

How does tgz creation perform?

  $ hyperfine -w3 -L command 'gzip -cn','git archive gzip' \
    './git -c tar.tgz.command="{command}" -C ../linux archive --format=tgz HEAD'
  Benchmark #1: ./git -c tar.tgz.command="gzip -cn" -C ../linux archive --format=tgz HEAD
    Time (mean ± σ):     20.404 s ±  0.006 s    [User: 23.943 s, System: 0.401 s]
    Range (min … max):   20.395 s … 20.414 s    10 runs

  Benchmark #2: ./git -c tar.tgz.command="git archive gzip" -C ../linux archive --format=tgz HEAD
    Time (mean ± σ):     23.807 s ±  0.023 s    [User: 23.655 s, System: 0.145 s]
    Range (min … max):   23.782 s … 23.857 s    10 runs

  Summary
    './git -c tar.tgz.command="gzip -cn" -C ../linux archive --format=tgz HEAD' ran
    1.17 ± 0.00 times faster than './git -c tar.tgz.command="git archive gzip" -C ../linux archive --format=tgz HEAD'

So the internal implementation takes 17% longer on the Linux repo, but uses 2% less CPU time. That's because the external gzip can run in parallel on its own processor, while the internal one works sequentially and avoids the inter-process communication overhead.

What are the benefits? Only an internal sequential implementation can offer this eco mode, and it allows avoiding the gzip(1) requirement.

This implementation uses the helper functions from our zlib.c instead of the convenient gz* functions from zlib, because the latter doesn't give the control over the generated gzip header that the next patch requires.

Original-patch-by: Rohit Ashiwal <rohit.ashiwal265@gmail.com>
Signed-off-by: René Scharfe <l.s.r@web.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
derrickstolee added a commit that referenced this pull request on Aug 31, 2022
dscho pushed a commit that referenced this pull request on Sep 16, 2022
Since commit fcc07e9 (is_promisor_object(): free tree buffer after parsing, 2021-04-13), we'll always free the buffers attached to a "struct tree" after searching them for promisor links. But there's an important case where we don't want to do so: if somebody else is already using the tree!

This can happen during a "rev-list --missing=allow-promisor" traversal in a partial clone that is missing one or more trees or blobs. The backtrace for the free looks like this:

    #1 free_tree_buffer tree.c:147
    #2 add_promisor_object packfile.c:2250
    #3 for_each_object_in_pack packfile.c:2190
    #4 for_each_packed_object packfile.c:2215
    #5 is_promisor_object packfile.c:2272
    #6 finish_object__ma builtin/rev-list.c:245
    #7 finish_object builtin/rev-list.c:261
    #8 show_object builtin/rev-list.c:274
    #9 process_blob list-objects.c:63
    #10 process_tree_contents list-objects.c:145
    #11 process_tree list-objects.c:201
    #12 traverse_trees_and_blobs list-objects.c:344
    [...]

We're in the middle of walking through the entries of a tree object via process_tree_contents(). We see a blob (or it could even be another tree entry) that we don't have, so we call is_promisor_object() to check it. That function loops over all of the objects in the promisor packfile, including the tree we're currently walking. When we're done with it there, we free the tree buffer. But as we return to the walk in process_tree_contents(), it's still holding on to a pointer to that buffer, via its tree_desc iterator, and it accesses the freed memory.

Even a trivial use of "--missing=allow-promisor" triggers this problem, as the included test demonstrates (it's just a vanilla --blob:none clone).

We can detect this case by only freeing the tree buffer if it was allocated on our behalf. This is a little tricky since that happens inside parse_object(), and it doesn't tell us whether the object was already parsed, or whether it allocated the buffer itself. But by checking for an already-parsed tree beforehand, we can distinguish the two cases.

That feels a little hacky, and does incur an extra lookup in the object-hash table. But that cost is fairly minimal compared to actually loading objects (and since we're iterating the whole pack here, we're likely to be loading most objects, rather than reusing cached results).

It may also be a good direction for this function in general, as there are other possible optimizations that rely on doing some analysis before parsing:

  - we could detect blobs and avoid reading their contents; they can't link to other objects, but parse_object() doesn't know that we don't care about checking their hashes.

  - we could avoid allocating object structs entirely for most objects (since we really only need them in the oidset), which would save some memory.

  - promisor commits could use the commit-graph rather than loading the object from disk

This commit doesn't do any of those optimizations, but I think it argues that this direction is reasonable, rather than relying on parse_object() and trying to teach it to give us more information about whether it parsed.

The included test fails reliably under SANITIZE=address just when running "rev-list --missing=allow-promisor". Checking the output isn't strictly necessary to detect the bug, but it seems like a reasonable addition given the general lack of coverage for "allow-promisor" in the test suite.

Reported-by: Andrew Olsen <andrew.olsen@koordinates.com>
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
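A sketch of the "only free what we loaded ourselves" idea described above, assuming git's object API (`object.h`, `tree.h`) and an in-tree build; this is not the literal hunk from packfile.c, it only shows the shape of the check: look the tree up first, remember whether it was already parsed, and free its buffer only if parse_object() had to load it for us.

```c
#include "git-compat-util.h"
#include "object.h"
#include "tree.h"

static void scan_tree_for_links_sketch(struct repository *r,
				       const struct object_id *oid)
{
	struct tree *tree = lookup_tree(r, oid);
	int already_parsed = tree && tree->object.parsed;

	if (!tree || !parse_object(r, oid))
		return;

	/* ... walk the tree entries and record promisor links here ... */

	if (!already_parsed)
		free_tree_buffer(tree);	/* we loaded the buffer, so we free it */
}
```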
dscho pushed a commit that referenced this pull request on Sep 16, 2022
Fix a memory leak occurring in case of pathspec copy in preload_index.

Direct leak of 8 byte(s) in 8 object(s) allocated from:
    #0 0x7f0a353ead47 in __interceptor_malloc (/usr/lib/gcc/x86_64-pc-linux-gnu/11.3.0/libasan.so.6+0xb5d47)
    #1 0x55750995e840 in do_xmalloc /home/anthony/src/c/git/wrapper.c:51
    #2 0x55750995e840 in xmalloc /home/anthony/src/c/git/wrapper.c:72
    #3 0x55750970f824 in copy_pathspec /home/anthony/src/c/git/pathspec.c:684
    #4 0x557509717278 in preload_index /home/anthony/src/c/git/preload-index.c:135
    #5 0x55750975f21e in refresh_index /home/anthony/src/c/git/read-cache.c:1633
    #6 0x55750915b926 in cmd_status builtin/commit.c:1547
    #7 0x5575090e1680 in run_builtin /home/anthony/src/c/git/git.c:466
    #8 0x5575090e1680 in handle_builtin /home/anthony/src/c/git/git.c:720
    #9 0x5575090e284a in run_argv /home/anthony/src/c/git/git.c:787
    #10 0x5575090e284a in cmd_main /home/anthony/src/c/git/git.c:920
    #11 0x5575090dbf82 in main /home/anthony/src/c/git/common-main.c:56
    #12 0x7f0a348230ab (/lib64/libc.so.6+0x290ab)

Signed-off-by: Anthony Delannoy <anthony.2lannoy@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
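A minimal sketch of the fix pattern for this leak, assuming git's pathspec API from `pathspec.h`: a pathspec copied with copy_pathspec() owns its own allocations, so the copy must be released with clear_pathspec() once it is no longer needed. Everything other than those two API calls is illustrative:

```c
#include "git-compat-util.h"
#include "pathspec.h"

static void use_pathspec_copy_sketch(const struct pathspec *orig)
{
	struct pathspec copy;

	copy_pathspec(&copy, orig);	/* deep copy: allocates items and strings */

	/* ... hand 'copy' to worker threads, as preload_index() does ... */

	clear_pathspec(&copy);		/* frees what copy_pathspec() allocated */
}
```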
ldennington pushed a commit to ldennington/git that referenced this pull request on Feb 17, 2023
When "read_strategy_opts()" is called we may have populated the "opts->strategy" before, so we'll need to free() it to avoid leaking memory. We populate it before because we cal get_replay_opts() from within "rebase.c" with an already populated "opts", which we then copy. Then if we're doing a "rebase -i" the sequencer API itself will promptly clobber our alloc'd version of it with its own. If this code is changed to do, instead of the added free() here a: if (opts->strategy) opts->strategy = xstrdup("another leak"); We get a couple of stacktraces from -fsanitize=leak showing how we ended up clobbering the already allocated value, i.e.: Direct leak of 6 byte(s) in 1 object(s) allocated from: #0 0x7f2e8cd45545 in __interceptor_malloc ../../../../src/libsanitizer/lsan/lsan_interceptors.cpp:75 microsoft#1 0x7f2e8cb0fcaa in __GI___strdup string/strdup.c:42 microsoft#2 0x6c4778 in xstrdup wrapper.c:39 microsoft#3 0x66bcb8 in read_strategy_opts sequencer.c:2902 microsoft#4 0x66bf7b in read_populate_opts sequencer.c:2969 microsoft#5 0x6723f9 in sequencer_continue sequencer.c:5063 microsoft#6 0x4a4f74 in run_sequencer_rebase builtin/rebase.c:348 microsoft#7 0x4a64c8 in run_specific_rebase builtin/rebase.c:753 microsoft#8 0x4a9b8b in cmd_rebase builtin/rebase.c:1824 microsoft#9 0x407a32 in run_builtin git.c:466 microsoft#10 0x407e0a in handle_builtin git.c:721 microsoft#11 0x40803d in run_argv git.c:788 microsoft#12 0x40850f in cmd_main git.c:923 microsoft#13 0x4eee79 in main common-main.c:57 microsoft#14 0x7f2e8ca9f209 in __libc_start_call_main ../sysdeps/nptl/libc_start_call_main.h:58 microsoft#15 0x7f2e8ca9f2bb in __libc_start_main_impl ../csu/libc-start.c:389 microsoft#16 0x405fd0 in _start (git+0x405fd0) Direct leak of 4 byte(s) in 1 object(s) allocated from: #0 0x7f2e8cd45545 in __interceptor_malloc ../../../../src/libsanitizer/lsan/lsan_interceptors.cpp:75 microsoft#1 0x7f2e8cb0fcaa in __GI___strdup string/strdup.c:42 microsoft#2 0x6c4778 in xstrdup wrapper.c:39 microsoft#3 0x4a3c31 in xstrdup_or_null git-compat-util.h:1169 microsoft#4 0x4a447a in get_replay_opts builtin/rebase.c:163 microsoft#5 0x4a4f5b in run_sequencer_rebase builtin/rebase.c:346 microsoft#6 0x4a64c8 in run_specific_rebase builtin/rebase.c:753 microsoft#7 0x4a9b8b in cmd_rebase builtin/rebase.c:1824 microsoft#8 0x407a32 in run_builtin git.c:466 microsoft#9 0x407e0a in handle_builtin git.c:721 microsoft#10 0x40803d in run_argv git.c:788 microsoft#11 0x40850f in cmd_main git.c:923 microsoft#12 0x4eee79 in main common-main.c:57 microsoft#13 0x7f2e8ca9f209 in __libc_start_call_main ../sysdeps/nptl/libc_start_call_main.h:58 microsoft#14 0x7f2e8ca9f2bb in __libc_start_main_impl ../csu/libc-start.c:389 microsoft#15 0x405fd0 in _start (git+0x405fd0) This can be seen in e.g. the 4th test of "t3404-rebase-interactive.sh". In the larger picture the ownership of the "struct replay_opts" is quite a mess, e.g. in this case rebase.c's static "get_replay_opts()" function partially creates it, but nothing in rebase.c will free() it. The structure is "mostly owned" by the sequencer API, but it also expects to get these partially populated versions of it. It would be better to have rebase keep track of what it allocated, and free() that, and to pass that as a "const" to the sequencer API, which would copy what it needs to its own version, and to free() that. But doing so is a much larger change, and however messy the ownership boundary is here is consistent with what we're doing already, so let's just free() this to fix the leak. 
Signed-off-by: Ævar Arnfjörð Bjarmason <avarab@gmail.com> Signed-off-by: Taylor Blau <me@ttaylorr.com>
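The fix itself reduces to a tiny, general pattern: when a callback may run more than once for the same field, free the previous value before storing a new one. A self-contained sketch (the struct and function only loosely mirror the sequencer code):

```c
#include "git-compat-util.h"

struct replay_opts_sketch {
	char *strategy;
};

static void set_strategy_sketch(struct replay_opts_sketch *opts, const char *value)
{
	free(opts->strategy);		/* may already hold an earlier allocation */
	opts->strategy = xstrdup(value);
}
```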
ldennington added a commit to ldennington/git that referenced this pull request on Oct 31, 2023
This commit contains two temporary workflows:

1. windows-full: This workflow was designed to use the same key used for Linux signing for GPG signing. It also uses the Azure Code Signing action to sign Windows installers.
2. windows-minimal: This workflow was intended to test the minimal requirements for Windows builds to see whether it is possible to use Azure Code Signing as the signtool alias for exe signing. This workflow is already set up with tmate for testing on GitHub Actions runners (since the issue described below does not repro locally).

While we were unable to get past an issue with the second workflow hanging instead of executing signing, hopefully this will provide a solid starting point if someone would like to re-attempt Azure Code Signing/GPG signing for microsoft/git in the future.

Note: If signing is successfully implemented with the alias used in workflow #2, this alias should also be used for installer signing rather than the Azure Code Signing action so that the embedded uninstaller is signed in addition to the installers.
ldennington added a commit to ldennington/git that referenced this pull request on Oct 31, 2023
dscho pushed a commit that referenced this pull request on Nov 3, 2023
When "update-index --unresolve $path" cannot find the resolve-undo record for the path the user requested to unresolve, it stuffs the blobs from HEAD and MERGE_HEAD to stage #2 and stage #3 as a fallback. For this reason, the operation does not even start unless both "HEAD" and "MERGE_HEAD" exist. This is suboptimal in a few ways: * It does not recreate stage #1. Even though it is a correct design decision not to do so (because it is impossible to recreate in general cases, without knowing how we got there, including what merge strategy was used), it is much less useful not to have that information in the index. * It limits the "unresolve" operation only during a conflicted "git merge" and nothing else. Other operations like "rebase", "cherry-pick", and "switch -m" may result in conflicts, and the user may want to unresolve the conflict that they incorrectly resolved in order to redo the resolution, but the fallback would not kick in. * Most importantly, the entire "unresolve" operation is disabled after a conflicted merge is committed and MERGE_HEAD is removed, even though the index has perfectly usable resolve-undo records. By lazily reading the HEAD and MERGE_HEAD only when we need to go to the fallback codepath, we will allow cases where resolve-undo records are available (which is 100% of the time, unless the user is reading from an index file created by Git more than 10 years ago) to proceed even after a conflicted merge was committed, during other mergy operations that do not use MERGE_HEAD, or after the result of such mergy operations has been committed. Signed-off-by: Junio C Hamano <gitster@pobox.com>
dscho pushed a commit that referenced this pull request on Nov 3, 2023
When t5583-push-branches.sh was originally introduced via 425b4d7 (push: introduce '--branches' option, 2023-05-06), it was not leak-free. In fact, the test did not even run correctly until 022fbb6 (t5583: fix shebang line, 2023-05-12), but after applying that patch, we see a failure at t5583.8:

    ==2529087==ERROR: LeakSanitizer: detected memory leaks

    Direct leak of 384 byte(s) in 1 object(s) allocated from:
    #0 0x7fb536330986 in __interceptor_realloc ../../../../src/libsanitizer/lsan/lsan_interceptors.cpp:98
    #1 0x55e07606cbf9 in xrealloc wrapper.c:140
    #2 0x55e075fb6cb3 in prio_queue_put prio-queue.c:42
    #3 0x55e075ec81cb in get_reachable_subset commit-reach.c:917
    #4 0x55e075fe9cce in add_missing_tags remote.c:1518
    #5 0x55e075fea1e4 in match_push_refs remote.c:1665
    #6 0x55e076050a8e in transport_push transport.c:1378
    #7 0x55e075e2eb74 in push_with_options builtin/push.c:401
    #8 0x55e075e2edb0 in do_push builtin/push.c:458
    #9 0x55e075e2ff7a in cmd_push builtin/push.c:702
    #10 0x55e075d8aaf0 in run_builtin git.c:452
    #11 0x55e075d8af08 in handle_builtin git.c:706
    #12 0x55e075d8b12c in run_argv git.c:770
    #13 0x55e075d8b6a0 in cmd_main git.c:905
    #14 0x55e075e81f07 in main common-main.c:60
    #15 0x7fb5360ab6c9 in __libc_start_call_main ../sysdeps/nptl/libc_start_call_main.h:58
    #16 0x7fb5360ab784 in __libc_start_main_impl ../csu/libc-start.c:360
    #17 0x55e075d88f40 in _start (git+0x1ff40) (BuildId: 38ad998b85a535e786129979443630d025ec2453)

    SUMMARY: LeakSanitizer: 384 byte(s) leaked in 1 allocation(s).

This leak was addressed independently via 68b5117 (commit-reach: fix memory leak in get_reachable_subset(), 2023-06-03), which makes t5583 leak-free. But t5583 was not in the tree when 68b5117 was written, and the two only met after the latter was merged back in via 693bde4 (Merge branch 'mh/commit-reach-get-reachable-plug-leak', 2023-06-20).

At that point, t5583 was leak-free. Let's mark it as such accordingly.

Signed-off-by: Taylor Blau <me@ttaylorr.com>
Acked-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
dscho added a commit that referenced this pull request on Nov 5, 2023
The `linux-leaks` job has become a lot more aggressive, failing the following test cases:

- t0001.52 extensions.objectFormat is not allowed with repo version 0
- t1302.3 gitdir selection on unsupported repo
- t1302.4 gitdir not required mode
- t1302.5 gitdir required mode
- t1302.9 abort version=1 no-such-extension
- t1302.12 abort version=0 noop-v1

The reason is that the `commondir` strbuf _is_ sometimes initialized _even if_ `discover_git_directory()` fails.

The symptom:

Direct leak of 24 byte(s) in 1 object(s) allocated from:
    #0 0x7f2e80ed8293 in __interceptor_realloc ../../../../src/libsanitizer/lsan/lsan_interceptors.cpp:98
    #1 0x5603ff6ae674 in xrealloc wrapper.c:138
    #2 0x5603ff661801 in strbuf_grow strbuf.c:101
    #3 0x5603ff662417 in strbuf_add strbuf.c:300
    #4 0x5603ff653e2b in strbuf_addstr strbuf.h:310
    #5 0x5603ff65703b in setup_git_directory_gently_1 setup.c:1329
    #6 0x5603ff65731b in discover_git_directory_reason setup.c:1388
    #7 0x5603ff55b6c0 in discover_git_directory setup.h:79
    #8 0x5603ff55b7ce in hook_path_early hook.c:35
    #9 0x5603ff55b97d in find_hook hook.c:77
    #10 0x5603ff55bcd3 in run_hooks_opt hook.c:189
    #11 0x5603ff37ca05 in run_pre_command_hook git.c:457
    #12 0x5603ff37ce0a in run_builtin git.c:532
    #13 0x5603ff37d37e in handle_builtin git.c:798
    #14 0x5603ff37d65a in run_argv git.c:867
    #15 0x5603ff37dc8e in cmd_main git.c:1007
    #16 0x5603ff48f4ee in main common-main.c:62
    #17 0x7f2e80cabd8f in __libc_start_call_main ../sysdeps/nptl/libc_start_call_main.h:58

The fix is easy: always release the `strbuf`s, even when the discovery of the Git directory failed.

Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
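A sketch of the fix pattern, assuming git's strbuf API and a discover_git_directory() that fills the two strbufs (as in the trace above); the return-value convention differs between versions, so it is deliberately not interpreted here. The key point is that the buffers are released unconditionally, because they may hold data even when discovery fails:

```c
#include "git-compat-util.h"
#include "strbuf.h"
#include "setup.h"

static void probe_git_dir_sketch(void)
{
	struct strbuf commondir = STRBUF_INIT, gitdir = STRBUF_INIT;

	/* may fail, but can still have appended to either strbuf */
	discover_git_directory(&commondir, &gitdir);

	/* released on every path, success or failure */
	strbuf_release(&commondir);
	strbuf_release(&gitdir);
}
```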
mjcheetham pushed a commit that referenced this pull request on Jul 23, 2024
Signed-off-by: Junio C Hamano <gitster@pobox.com>
dscho added a commit that referenced this pull request on Oct 9, 2024
An internal customer reported a segfault when running `git sparse-checkout set` with the `index.sparse` config enabled. I was unable to reproduce it locally, but with their help we debugged into the failing process and discovered the following stacktrace:

```
#0  0x00007ff6318fb7b0 in rehash (map=0x3dfb00d0440, newsize=1048576) at hashmap.c:125
#1  0x00007ff6318fbc66 in hashmap_add (map=0x3dfb00d0440, entry=0x3dfb5c58bc8) at hashmap.c:247
#2  0x00007ff631937a70 in hash_index_entry (istate=0x3dfb00d0400, ce=0x3dfb5c58bc8) at name-hash.c:122
#3  0x00007ff631938a2f in add_name_hash (istate=0x3dfb00d0400, ce=0x3dfb5c58bc8) at name-hash.c:638
#4  0x00007ff631a064de in set_index_entry (istate=0x3dfb00d0400, nr=8291, ce=0x3dfb5c58bc8) at sparse-index.c:255
#5  0x00007ff631a06692 in add_path_to_index (oid=0x5ff130, base=0x5ff580, path=0x3dfb4b725da "<redacted>", mode=33188, context=0x5ff570) at sparse-index.c:307
#6  0x00007ff631a3b48c in read_tree_at (r=0x7ff631c026a0 <the_repo>, tree=0x3dfb5b41f60, base=0x5ff580, depth=2, pathspec=0x5ff5a0, fn=0x7ff631a064e5 <add_path_to_index>, context=0x5ff570) at tree.c:46
#7  0x00007ff631a3b60b in read_tree_at (r=0x7ff631c026a0 <the_repo>, tree=0x3dfb5b41e80, base=0x5ff580, depth=1, pathspec=0x5ff5a0, fn=0x7ff631a064e5 <add_path_to_index>, context=0x5ff570) at tree.c:80
#8  0x00007ff631a3b60b in read_tree_at (r=0x7ff631c026a0 <the_repo>, tree=0x3dfb5b41ac8, base=0x5ff580, depth=0, pathspec=0x5ff5a0, fn=0x7ff631a064e5 <add_path_to_index>, context=0x5ff570) at tree.c:80
#9  0x00007ff631a06a95 in expand_index (istate=0x3dfb00d0100, pl=0x0) at sparse-index.c:422
#10 0x00007ff631a06cbd in ensure_full_index (istate=0x3dfb00d0100) at sparse-index.c:456
#11 0x00007ff631990d08 in index_name_stage_pos (istate=0x3dfb00d0100, name=0x3dfb0020080 "algorithm/levenshtein", namelen=21, stage=0, search_mode=EXPAND_SPARSE) at read-cache.c:556
#12 0x00007ff631990d6c in index_name_pos (istate=0x3dfb00d0100, name=0x3dfb0020080 "algorithm/levenshtein", namelen=21) at read-cache.c:566
#13 0x00007ff63180dbb5 in sanitize_paths (argc=185, argv=0x3dfb0030018, prefix=0x0, skip_checks=0) at builtin/sparse-checkout.c:756
#14 0x00007ff63180de50 in sparse_checkout_set (argc=185, argv=0x3dfb0030018, prefix=0x0) at builtin/sparse-checkout.c:860
#15 0x00007ff63180e6c5 in cmd_sparse_checkout (argc=186, argv=0x3dfb0030018, prefix=0x0) at builtin/sparse-checkout.c:1063
#16 0x00007ff6317234cb in run_builtin (p=0x7ff631ad9b38 <commands+2808>, argc=187, argv=0x3dfb0030018) at git.c:548
#17 0x00007ff6317239c0 in handle_builtin (argc=187, argv=0x3dfb0030018) at git.c:808
#18 0x00007ff631723c7d in run_argv (argcp=0x5ffdd0, argv=0x5ffd78) at git.c:877
#19 0x00007ff6317241d1 in cmd_main (argc=187, argv=0x3dfb0030018) at git.c:1017
#20 0x00007ff631838b60 in main (argc=190, argv=0x3dfb0030000) at common-main.c:64
```

The very bottom of the stack being the `rehash()` method from `hashmap.c` as called within the `name-hash` API made me look at where these hashmaps were being used in the sparse index logic. These were being copied across indexes, which seems dangerous. Indeed, clearing these hashmaps and setting them as not initialized fixes the segfault.

The second commit is a response to a test failure that happens in `t1092-sparse-checkout-compatibility.sh` where `git stash pop` starts to fail because the underlying `git checkout-index` process fails due to colliding files. Passing the `-f` flag appears to work, but it's unclear why this name-hash change causes that change in behavior.
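A sketch of the idea described above, assuming an `index_state` layout with `name_hash`, `dir_hash`, and a `name_hash_initialized` bit as in recent versions (the exact header name varies across releases; `read-cache-ll.h` is assumed here). After one index_state is copied into another wholesale, the copy still points at the original's name-hash tables; dropping those borrowed pointers and marking the hash as uninitialized forces the copy to rebuild its own tables lazily instead of rehashing behind the original's back:

```c
#include "git-compat-util.h"
#include "read-cache-ll.h"	/* struct index_state; "cache.h" in older versions */

static void detach_name_hash_sketch(struct index_state *copy)
{
	/* do not free: the source index still owns these tables */
	memset(&copy->name_hash, 0, sizeof(copy->name_hash));
	memset(&copy->dir_hash, 0, sizeof(copy->dir_hash));
	copy->name_hash_initialized = 0;	/* rebuilt on next lookup */
}
```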
dscho added a commit that referenced this pull request Oct 9, 2024
mjcheetham pushed a commit that referenced this pull request Dec 3, 2024
Thanks for taking the time to contribute to Git!
Those seeking to contribute to the Git for Windows fork should see
http://gitforwindows.org/#contribute on how to contribute Windows-specific enhancements.
If your contribution is for the core Git functions and documentation,
please be aware that the Git community does not use the github.com issues
or pull request mechanism for its contributions.
Instead, we use the Git mailing list (git@vger.kernel.org) for code and
documentation submissions, code reviews, and bug reports. The
mailing list is plain text only (anything with HTML is sent directly
to the spam folder).
Nevertheless, you can use submitGit to conveniently send your Pull
Request's commits to our mailing list.
Please read the "guidelines for contributing" linked above!