
Post-mortem debugging support inside V8/Node.js #227

Closed
mmarchini opened this issue Aug 30, 2018 · 22 comments

@mmarchini
Contributor

I'm working on a new approach to post-mortem debugging to make it easier for tools to keep up to date with V8 changes. As discussed in #202, keeping llnode up to date today is a tortuous and tricky process, and sometimes llnode doesn't work with the latest Node.js releases (for example, it's not working with Node.js v10.x at the moment).

This new approach relies on V8 exposing an API that makes it easier to navigate core dumps. V8 doesn't need to know how to handle core dumps itself, as long as it provides a way for tools to navigate the heap without reimplementing everything and relying on several heuristics.

I created a design document for this proposal and I'm working on a prototype:

Would love to get some feedback from @nodejs/v8 and @nodejs/post-mortem on this proposal :)

@mcollina
Member

I like the approach if it does not introduce any overhead.

@mmarchini
Contributor Author

Ok, here are the results for the first prototype:

Binary size growth was ~6% (~2.5 MB), less than I was expecting, which is good.

The es benchmarks were worse than I was expecting, which is bad.

Benchmarks
                                                                                     confidence improvement accuracy (*)   (**)   (***)
 es/defaultparams-bench.js n=100000000 method='withdefaults'                                        -0.09 %       ±2.20% ±2.92%  ±3.81%
 es/defaultparams-bench.js n=100000000 method='withoutdefaults'                                      0.22 %       ±2.11% ±2.81%  ±3.66%
 es/destructuring-bench.js n=100000000 method='destructure'                                         -0.45 %       ±0.51% ±0.68%  ±0.89%
 es/destructuring-bench.js n=100000000 method='swap'                                                 0.24 %       ±1.83% ±2.44%  ±3.17%
 es/destructuring-object-bench.js n=100000000 method='destructureObject'                            -0.88 %       ±1.72% ±2.28%  ±2.97%
 es/destructuring-object-bench.js n=100000000 method='normal'                                       -0.30 %       ±1.63% ±2.16%  ±2.81%
 es/foreach-bench.js n=5000000 count=10 method='for-in'                                     ***    -20.20 %       ±0.24% ±0.32%  ±0.42%
 es/foreach-bench.js n=5000000 count=10 method='for-of'                                              0.08 %       ±0.82% ±1.09%  ±1.42%
 es/foreach-bench.js n=5000000 count=10 method='for'                                                 0.88 %       ±1.92% ±2.55%  ±3.33%
 es/foreach-bench.js n=5000000 count=10 method='forEach'                                            -0.67 %       ±1.93% ±2.56%  ±3.34%
 es/foreach-bench.js n=5000000 count=100 method='for-in'                                    ***    -21.65 %       ±1.09% ±1.47%  ±1.95%
 es/foreach-bench.js n=5000000 count=100 method='for-of'                                             0.15 %       ±0.33% ±0.44%  ±0.57%
 es/foreach-bench.js n=5000000 count=100 method='for'                                               -0.78 %       ±1.16% ±1.56%  ±2.05%
 es/foreach-bench.js n=5000000 count=100 method='forEach'                                            0.39 %       ±0.68% ±0.91%  ±1.19%
 es/foreach-bench.js n=5000000 count=20 method='for-in'                                     ***    -20.74 %       ±0.36% ±0.47%  ±0.62%
 es/foreach-bench.js n=5000000 count=20 method='for-of'                                             -0.85 %       ±0.86% ±1.15%  ±1.50%
 es/foreach-bench.js n=5000000 count=20 method='for'                                                -0.23 %       ±4.03% ±5.36%  ±7.00%
 es/foreach-bench.js n=5000000 count=20 method='forEach'                                             0.31 %       ±2.36% ±3.14%  ±4.09%
 es/foreach-bench.js n=5000000 count=5 method='for-in'                                      ***    -23.26 %       ±0.37% ±0.49%  ±0.64%
 es/foreach-bench.js n=5000000 count=5 method='for-of'                                              -0.07 %       ±1.77% ±2.35%  ±3.06%
 es/foreach-bench.js n=5000000 count=5 method='for'                                                  1.21 %       ±1.93% ±2.58%  ±3.36%
 es/foreach-bench.js n=5000000 count=5 method='forEach'                                             -1.00 %       ±1.52% ±2.02%  ±2.64%
 es/map-bench.js n=1000000 method='fakeMap'                                                 ***     -8.13 %       ±0.37% ±0.50%  ±0.65%
 es/map-bench.js n=1000000 method='map'                                                     ***     -3.59 %       ±0.57% ±0.76%  ±1.00%
 es/map-bench.js n=1000000 method='nullProtoLiteralObject'                                  ***     -7.02 %       ±0.32% ±0.43%  ±0.56%
 es/map-bench.js n=1000000 method='nullProtoObject'                                         ***     -7.12 %       ±0.38% ±0.51%  ±0.66%
 es/map-bench.js n=1000000 method='object'                                                  ***     -8.05 %       ±0.44% ±0.60%  ±0.78%
 es/map-bench.js n=1000000 method='storageObject'                                           ***     -7.81 %       ±0.37% ±0.49%  ±0.64%
 es/restparams-bench.js n=100000000 method='arguments'                                              -0.00 %       ±0.67% ±0.89%  ±1.16%
 es/restparams-bench.js n=100000000 method='copy'                                            **     -0.56 %       ±0.38% ±0.51%  ±0.66%
 es/restparams-bench.js n=100000000 method='rest'                                                    0.00 %       ±0.46% ±0.61%  ±0.80%
 es/spread-assign.js n=1000000 count=10 method='_extend'                                             0.15 %       ±0.77% ±1.03%  ±1.34%
 es/spread-assign.js n=1000000 count=10 method='assign'                                             -0.84 %       ±0.92% ±1.23%  ±1.60%
 es/spread-assign.js n=1000000 count=10 method='spread'                                     ***    -34.19 %       ±0.47% ±0.62%  ±0.81%
 es/spread-assign.js n=1000000 count=20 method='_extend'                                    ***    -15.87 %       ±0.66% ±0.88%  ±1.14%
 es/spread-assign.js n=1000000 count=20 method='assign'                                     ***    -24.58 %       ±0.75% ±1.00%  ±1.30%
 es/spread-assign.js n=1000000 count=20 method='spread'                                     ***    -24.84 %       ±0.82% ±1.10%  ±1.44%
 es/spread-assign.js n=1000000 count=5 method='_extend'                                             -0.60 %       ±0.85% ±1.14%  ±1.48%
 es/spread-assign.js n=1000000 count=5 method='assign'                                               0.62 %       ±1.33% ±1.77%  ±2.32%
 es/spread-assign.js n=1000000 count=5 method='spread'                                      ***    -31.73 %       ±0.42% ±0.56%  ±0.74%
 es/spread-bench.js n=5000000 rest=0 context='context' count=10 method='apply'                *      1.52 %       ±1.18% ±1.57%  ±2.05%
 es/spread-bench.js n=5000000 rest=0 context='context' count=10 method='call-spread'         **     -1.53 %       ±0.99% ±1.31%  ±1.71%
 es/spread-bench.js n=5000000 rest=0 context='context' count=10 method='spread'                     -0.36 %       ±1.38% ±1.84%  ±2.40%
 es/spread-bench.js n=5000000 rest=0 context='context' count=20 method='apply'               **     -1.67 %       ±1.08% ±1.44%  ±1.88%
 es/spread-bench.js n=5000000 rest=0 context='context' count=20 method='call-spread'          *     -0.69 %       ±0.62% ±0.83%  ±1.08%
 es/spread-bench.js n=5000000 rest=0 context='context' count=20 method='spread'                     -0.11 %       ±1.15% ±1.53%  ±1.99%
 es/spread-bench.js n=5000000 rest=0 context='context' count=5 method='apply'                       -0.04 %       ±1.33% ±1.77%  ±2.30%
 es/spread-bench.js n=5000000 rest=0 context='context' count=5 method='call-spread'                 -0.71 %       ±0.98% ±1.31%  ±1.70%
 es/spread-bench.js n=5000000 rest=0 context='context' count=5 method='spread'                      -0.66 %       ±1.38% ±1.83%  ±2.39%
 es/spread-bench.js n=5000000 rest=0 context='null' count=10 method='apply'                   *      1.62 %       ±1.23% ±1.63%  ±2.13%
 es/spread-bench.js n=5000000 rest=0 context='null' count=10 method='call-spread'             *      1.26 %       ±1.18% ±1.57%  ±2.04%
 es/spread-bench.js n=5000000 rest=0 context='null' count=10 method='spread'                         1.73 %       ±7.23% ±9.62% ±12.53%
 es/spread-bench.js n=5000000 rest=0 context='null' count=20 method='apply'                         -0.83 %       ±1.12% ±1.48%  ±1.93%
 es/spread-bench.js n=5000000 rest=0 context='null' count=20 method='call-spread'                   -0.21 %       ±0.75% ±1.00%  ±1.31%
 es/spread-bench.js n=5000000 rest=0 context='null' count=20 method='spread'                         0.35 %       ±1.05% ±1.39%  ±1.81%
 es/spread-bench.js n=5000000 rest=0 context='null' count=5 method='apply'                    *     -2.86 %       ±2.46% ±3.29%  ±4.34%
 es/spread-bench.js n=5000000 rest=0 context='null' count=5 method='call-spread'                     0.26 %       ±1.02% ±1.36%  ±1.77%
 es/spread-bench.js n=5000000 rest=0 context='null' count=5 method='spread'                   *     -1.62 %       ±1.44% ±1.92%  ±2.49%
 es/spread-bench.js n=5000000 rest=1 context='context' count=10 method='apply'                      -0.92 %       ±1.51% ±2.01%  ±2.63%
 es/spread-bench.js n=5000000 rest=1 context='context' count=10 method='call-spread'                -0.59 %       ±0.95% ±1.26%  ±1.64%
 es/spread-bench.js n=5000000 rest=1 context='context' count=10 method='spread'                     -0.81 %       ±1.05% ±1.39%  ±1.81%
 es/spread-bench.js n=5000000 rest=1 context='context' count=20 method='apply'                      -0.05 %       ±1.21% ±1.61%  ±2.09%
 es/spread-bench.js n=5000000 rest=1 context='context' count=20 method='call-spread'                 0.40 %       ±0.86% ±1.14%  ±1.49%
 es/spread-bench.js n=5000000 rest=1 context='context' count=20 method='spread'                     -0.45 %       ±0.96% ±1.28%  ±1.67%
 es/spread-bench.js n=5000000 rest=1 context='context' count=5 method='apply'                       -0.94 %       ±1.27% ±1.70%  ±2.21%
 es/spread-bench.js n=5000000 rest=1 context='context' count=5 method='call-spread'                 -0.07 %       ±1.10% ±1.46%  ±1.91%
 es/spread-bench.js n=5000000 rest=1 context='context' count=5 method='spread'                       0.35 %       ±1.38% ±1.84%  ±2.41%
 es/spread-bench.js n=5000000 rest=1 context='null' count=10 method='apply'                         -1.05 %       ±1.35% ±1.80%  ±2.35%
 es/spread-bench.js n=5000000 rest=1 context='null' count=10 method='call-spread'                   -0.96 %       ±1.19% ±1.59%  ±2.07%
 es/spread-bench.js n=5000000 rest=1 context='null' count=10 method='spread'                  *     -1.37 %       ±1.29% ±1.71%  ±2.23%
 es/spread-bench.js n=5000000 rest=1 context='null' count=20 method='apply'                         -0.44 %       ±0.96% ±1.28%  ±1.66%
 es/spread-bench.js n=5000000 rest=1 context='null' count=20 method='call-spread'                   -0.31 %       ±0.72% ±0.96%  ±1.25%
 es/spread-bench.js n=5000000 rest=1 context='null' count=20 method='spread'                        -0.56 %       ±0.97% ±1.29%  ±1.69%
 es/spread-bench.js n=5000000 rest=1 context='null' count=5 method='apply'                    *     -2.06 %       ±1.90% ±2.54%  ±3.33%
 es/spread-bench.js n=5000000 rest=1 context='null' count=5 method='call-spread'                    -0.17 %       ±1.14% ±1.52%  ±1.98%
 es/spread-bench.js n=5000000 rest=1 context='null' count=5 method='spread'                   *      1.35 %       ±1.16% ±1.55%  ±2.02%
 es/string-concatenations.js mode='multi-concat' n=1000                                             -1.38 %       ±5.09% ±6.77%  ±8.83%
 es/string-concatenations.js mode='multi-join' n=1000                                       ***     -3.82 %       ±1.99% ±2.65%  ±3.45%
 es/string-concatenations.js mode='multi-template' n=1000                                           -1.46 %       ±3.95% ±5.25%  ±6.84%
 es/string-concatenations.js mode='to-string-concat' n=1000                                   *     -4.02 %       ±3.64% ±4.89%  ±6.45%
 es/string-concatenations.js mode='to-string-string' n=1000                                         -2.15 %       ±2.76% ±3.68%  ±4.83%
 es/string-concatenations.js mode='to-string-template' n=1000                                 *     -7.94 %       ±6.69% ±8.98% ±11.84%
 es/string-repeat.js size=10 encoding='ascii' mode='Array' n=1000                                   -1.64 %       ±2.70% ±3.60%  ±4.71%
 es/string-repeat.js size=10 encoding='ascii' mode='repeat' n=1000                            *     -5.03 %       ±3.86% ±5.16%  ±6.78%
 es/string-repeat.js size=10 encoding='utf8' mode='Array' n=1000                             **     -3.51 %       ±2.25% ±2.99%  ±3.90%
 es/string-repeat.js size=10 encoding='utf8' mode='repeat' n=1000                                   -2.73 %       ±3.38% ±4.50%  ±5.86%
 es/string-repeat.js size=1000 encoding='ascii' mode='Array' n=1000                         ***     -8.22 %       ±2.76% ±3.68%  ±4.81%
 es/string-repeat.js size=1000 encoding='ascii' mode='repeat' n=1000                                -0.01 %       ±4.01% ±5.34%  ±6.97%
 es/string-repeat.js size=1000 encoding='utf8' mode='Array' n=1000                          ***     -6.42 %       ±2.99% ±4.00%  ±5.24%
 es/string-repeat.js size=1000 encoding='utf8' mode='repeat' n=1000                                 -3.67 %       ±5.91% ±7.87% ±10.26%
 es/string-repeat.js size=1000000 encoding='ascii' mode='Array' n=1000                      ***     -2.23 %       ±0.32% ±0.43%  ±0.56%
 es/string-repeat.js size=1000000 encoding='ascii' mode='repeat' n=1000                             -1.22 %       ±3.93% ±5.24%  ±6.83%
 es/string-repeat.js size=1000000 encoding='utf8' mode='Array' n=1000                       ***     -1.77 %       ±0.25% ±0.33%  ±0.43%
 es/string-repeat.js size=1000000 encoding='utf8' mode='repeat' n=1000                              -0.77 %       ±5.02% ±6.69%  ±8.71%

Be aware that when doing many comparisons the risk of a false-positive
result increases. In this case there are 94 comparisons, you can thus
expect the following amount of false-positive results:
  4.70 false positives, when considering a   5% risk acceptance (*, **, ***),
  0.94 false positives, when considering a   1% risk acceptance (**, ***),
  0.09 false positives, when considering a 0.1% risk acceptance (***)

I'm not sure we can optimize this solution much more than it already is (the prototype uses V8_UNLIKELY and V8_INLINE to check whether postmortem mode is enabled with as few instructions as possible).

If anyone is interested, the code is here: nodejs/node@4ce744a...mmarchini:v8-postmortem-analyzer-api

I want to play with other ideas next week and I'm open to suggestions :)

@mcollina
Member

In the document, @hashseed recommended to use a config flag.

@mmarchini
Contributor Author

Yeah, but then we'd need a custom build just to run postmortem tools built on this API (which would raise the barrier to using these tools).

I'll try @hashseed's suggestion of loading the heap spaces at the same addresses where they were originally allocated, but other suggestions would be appreciated as well.

@misterdjules

@mmarchini How would that new API be consumed by debugging tools? I don't think I understand all the subtleties of that. Do you have code somewhere (maybe as changes to llnode?) that does that?

@davepacheco

This is exciting! @bcantrill and I spoke with several members of the V8 team years ago about built-in VM support for postmortem debugging, but it didn't seem like there was enough interest to build and maintain that support.

One thing I don't fully understand: in what context does this API execute? Does a debugger (e.g., lldb) somehow load a copy of v8 and execute API calls into it? (If so, there are lots of follow-up questions, but I'm not sure how else this is intended to work.)

@mmarchini
Contributor Author

@misterdjules @davepacheco the code I've been using to test the API is a Node.js native module which uses the LLDB API to load and read a core dump. The current proposal wouldn't run as an LLDB plugin, but we would be able to implement a REPL in JavaScript, which would lower the barrier for contributions on the front-end and provide a JavaScript API by default for automated post-mortem analysis. Here's an example of how to use the API: https://gist.github.com/mmarchini/647720e08468b8b96a7922f79c20c87e

Just to emphasize: I'm open to any suggestions which would help us maintain post-mortem tools in the long term. If we come up with something that works and is entirely different from the proposed solution, I'll be happy to implement it :)

@sethbrenith

I tried @hashseed's suggestion of loading the dump content into memory at the same addresses, and it works nicely once everything is initialized properly. It requires only minor changes in V8, and introduces no runtime overhead. I wrote a design document describing the approach; feedback is welcome!

@mmarchini
Contributor Author

This looks amazing @sethbrenith! Thank you for working on this!

I will try to implement a lldb extension using this API next week.

@hashseed
Member

@sethbrenith very impressive!

Looks like we currently have two options:
A) introduce an API to redirect memory accesses to lldb, prototype here.
B) introduce a way to resurrect V8 from core dump, design doc here.

Comparing the two, (A) adds a lot more API surface and is a bigger cross-cutting change. The V8 team is currently working on pointer compression and possibly on defining object layout in a DSL. Both would conflict with (A).

It's probably no surprise that I like (B) better. It also offers value to V8 and Chromium, which is why the chances that it is continuously maintained are much higher.

@mhdawson
Member

@sethbrenith looks like you've made some good progress. I'll agree with @hashseed that something that has the best likelihood of ongoing maintenance is important.

One question I have is around the second version of V8 that needs to be loaded. That sounds like it may take some effort to manage, in that we'd have to make those builds available and find the matching version for a given core dump. @hashseed how much overhead would it be to include the extra code needed for this approach so that it's in core? I understand it might not be possible to get just that with the current flags/options, but I wonder if it might be something we'd want to do if the subset we need has a small enough overhead.

@davepacheco

Thanks so much for sharing this @sethbrenith! As I said, I'm really excited about the possibility of building postmortem debugging into V8 itself. (The only reason we went the way we did in the first place with mdb_v8 was that we felt we had no other option at the time.)

I think I don't fully understand the approach here, so I apologize in advance if these questions or comments aren't applicable.

One downside of this approach is that by calling into V8 to do the work, there are classes of problems (mostly in V8 itself) that likely cannot be debugged this way. Namely, if V8 itself crashes (e.g., because some internal data structure is invalid, or because some pointer is NULL), you can use mdb_v8 to debug that problem. On the other hand, if you load another copy of V8 and use it to inspect its state, it seems likely the debugger would crash for the same reason as the original program. A more important case may be where the original program crashed because it ran out of memory. Wouldn't it be likely that the debugger would also run out of memory when trying to, say, print objects? This case seems pretty important, though maybe I've misunderstood or there's some way to work around it. Aside from running out of memory, the next most common case might be debugging issues involving add-ons, which might also corrupt C/C++-level structures.

Relatedly, how does memory management work in the postmortem host? Does V8 continue allocating from its heap? Does it run garbage collection?

The list of static data that v8_postmortem_debugger requests is based entirely on trial and error, and could break at any time.
Likewise, initialization of embedder code loaded by the postmortem host could be brittle.

It seems like the main advantage of this approach is to eliminate the brittleness of putting all the implementation details in the debugger. Are these problems easier?

@sethbrenith

Thanks everybody for the feedback, I really appreciate it.

In response to @davepacheco's questions:

if you load another copy of V8 and use it to inspect its state, it seems likely the debugger would crash for the same reason as the original program.

Yes, this is a risk. I don't know how often this problem would arise because I haven't done extensive testing on real-world crash dumps, but Jaroslav mentioned in a comment on the document that it is sometimes a problem with the existing gdb tools. I hope that this approach would still be able to provide some value in that case (for example, maybe you can't print a single corrupt object, but you can print the stack and some related objects that help you understand how you got to the current execution point).

Would it not be likely that the debugger would also run out of memory when trying to, say, print objects?

I haven't actually tried it, but I'll speculate anyway. If the dump was from a 32-bit process, then it might have run up against the total addressable memory limit, so moving its content into a new process isn't going to help anything. Perhaps the postmortem host could choose some pages to not map so it has enough room for its own execution. In a 64-bit process, on the other hand, the address space is huge so any OOM crash was probably the process running out of physical memory. In that case, the postmortem host could map the memory from a file using copy-on-write to avoid requiring as much physical memory (or it could just run on a machine with more memory or page file available).

Does V8 continue allocating from its heap? Does it run garbage collection?

Isolate::PrintStack executes within a DisallowHeapAllocation block, so it doesn't allocate anything or run GC. I haven't investigated printing for every object type, so I'm less sure about them, but I'd be surprised if they allocated things in the JS heap or triggered GC.

It seems like the main advantage of this approach is to eliminate the brittleness of putting all the implementation details in the debugger. Are these problems easier?

I hope so. I like the idea that fixing a problem in this tool is likely just finding some variable that needs to be initialized and adding it to a list, rather than substantially rewriting code that has to match the internal logic of V8.

@mmarchini
Contributor Author

I have a somewhat working lldb extension which uses the API proposed by @sethbrenith to debug Node.js core dumps. It's still extremely rough around the edges, though; I'll share it after doing some cleanup.

Overall I like the API, it's simple to use and gives you the results you want. I left a few suggestions (along with possible implementations) in the CL.

My main concern for now is the ThreadLocalAccessFunction implementation. Thread-local storage is extremely platform- and architecture-dependent, and debuggers (at least on *nix platforms) don't know how to access it. This will make it harder to write portable extensions. Unfortunately, I don't have a proposed solution for mitigating this issue.

That sounds like it may take some effort to manage, in that we'd have to make those builds available and find the matching version for a given core dump. @hashseed how much overhead would it be to include the extra code needed for this approach so that it's in core? I understand it might not be possible to get just that with the current flags/options, but I wonder if it might be something we'd want to do if the subset we need has a small enough overhead.

In my current prototype the postmortem host code is included in the node binary, and it is activated by the flag --experimental-postmortem-host.

@mmarchini
Contributor Author

@sethbrenith feel free to join our bi-weekly working group meeting to discuss this topic if you want. We'll have one today at 09:30 PST (#274).

We'll also have a summit early in March (#203) in Munich if you want to discuss it in person :)

@hashseed
Member

hashseed commented Feb 6, 2019

I would imagine that we would have a very limited set of supported features that we can perform with the debugging host:

  • Print object content
  • Print stack trace
  • Print disassembly for code objects
  • Collect heap statistics
  • Collect heap snapshot
  • Verify heap content

I don't actually expect any continued execution, nor GC. In fact, that would be a security risk.

Regarding testing, I was hoping to have code in V8 upstream, and also include tests to ensure it works.

@mmarchini
Contributor Author

Could we have an option to print object contents and stack traces in a JSON format? A JSON format would make it easier for debuggers to extend functionality without requiring changes to V8. For example, a debugger could get the JSON-formatted stack trace from V8 and interleave it with the native stack trace to output a complete stack trace with all C++ and JavaScript frames (I don't think we can get this complete stack trace today with the gdb macros).
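As a purely hypothetical illustration of what such output could look like (every field name here is invented, not part of any proposal), mixed-frame JSON might let the debugger merge JS and native frames by program counter:

```json
{
  "stack": [
    {
      "type": "javascript",
      "function": "processTicksAndRejections",
      "script": "internal/process/task_queues.js",
      "line": 85,
      "this": "0x3f5a8c0d2e11"
    },
    {
      "type": "native",
      "symbol": "v8::internal::Execution::Call",
      "pc": "0x55d1a2b3c4d5"
    }
  ]
}
```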

@sethbrenith

Thanks @mmarchini for the prototyping work! It's really valuable to know that thread local storage is a problem area on *nix debuggers. That said, I'd recommend holding off on further coding for now because the discussion in the doc is still pretty far from consensus on whether this is the right approach.

I agree that an option for machine-parseable output would be useful; my WinDbg prototype was using regex matching on the output to find things that look like object pointers and make them clickable, which is pretty delightful but also error-prone. In the long term, when printing is based on Torque metadata and not hand-coded, it should be relatively easy to support two separate printing format options.

Unfortunately I can't make it to Munich this time, but I'll call in to this morning's meeting. Thanks for the invitations!

@davepacheco

@hashseed wrote:

I don't actually expect any continued execution, nor GC. In fact, that would be a security risk.

I agree that we probably don't want that, but when I asked that, I wasn't clear on what would prevent that. I'm still not sure how we know that attempting to print an object or one of these other operations wouldn't try to allocate memory from the VM's heap. It seems like an intrinsic challenge with using the VM code itself inside the debugger without the VM really knowing that it's running in a restricted context?

@mmarchini wrote:

Could we have an option to print the object's content and stack traces in a JSON format?

I think this would be great. It's trickier than it might seem for objects, since they can contain cycles and other complex structures. A few years ago we proposed a JSON format that I believe could represent heap objects, stacks, and other metadata, but I don't think anybody's implemented most of it.

@mmarchini
Contributor Author

If anyone is interested, I implemented a proof-of-concept lldb plugin for Node.js using the API described here. The plugin is intended for analyzing Linux core dumps on Linux environments. The code is available here and implementation details as well as challenges and limitations are described here.

@mmarchini
Contributor Author

Based on the meeting we had last Thursday, the most likely path moving forward is to use Torque to generate meta description of objects, and then use those meta descriptions to navigate objects using debuggers. It might take some time to get there though, because Torquification of heap objects in V8 is still in its early stages.

@github-actions

This issue is stale because it has been open many days with no activity. It will be closed soon unless the stale label is removed or a comment is made.
