Adding allocated and reserved memory values to memory timeline view. (#107056)

Summary: This diff adds the max allocated and max reserved memory values to the memory timeline plot.
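
For context, a minimal sketch of how the changed code path is reached from the public profiler API; the model, input shapes, and output filename below are illustrative assumptions, not taken from this diff:

```python
# Sketch: driving the memory timeline export from the public profiler API.
# The model, input, and output filename are illustrative assumptions.
import torch
from torch.profiler import ProfilerActivity, profile

model = torch.nn.Linear(1024, 1024).cuda()
inputs = torch.randn(64, 1024, device="cuda")

with profile(
    activities=[ProfilerActivity.CPU, ProfilerActivity.CUDA],
    profile_memory=True,  # needed to build the memory timeline
    with_stack=True,      # mirrors --with_stack in the test plan
    record_shapes=True,   # mirrors --record_shapes in the test plan
) as prof:
    for _ in range(5):
        model(inputs).sum().backward()

# Writes the HTML plot whose title now carries max allocated/reserved values.
prof.export_memory_timeline("memory_timeline.html", device="cuda:0")
```

For `.html` output paths, `export_memory_timeline` delegates to the `export_memory_timeline_html` method modified below.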

Test Plan:
Executed the following on my devgpu:

`buck run mode/dev-nosan kineto/libkineto/fb/integration_tests:pytorch_resnet_integration_test -- --enable_profiling --profile_memory --trace_handler=auto_trace --with_stack --record_shapes`

The generated output is at
https://www.internalfb.com/manifold/explorer/ai_efficiency/tree/traces/dynocli/devgpu020.odn1.facebook.com/rank-0/rank-0.Aug_10_16_50_50.236946.pt.memorytl.html

{F1067885545}
Screenshot of the HTML above.
{F1067886350}

Reviewed By: aaronenyeshi

Differential Revision: D48251791

Pull Request resolved: #107056
Approved by: https://github.com/aaronenyeshi, https://github.com/davidberard98
anupambhatnagar authored and pytorchmergebot committed Aug 21, 2023
1 parent da76599 commit 3336aa1
Showing 1 changed file with 7 additions and 1 deletion.
torch/profiler/_memory_profiler.py:

```diff
@@ -1151,6 +1151,8 @@ def export_memory_timeline_html(
         mt = self._coalesce_timeline(device)
         times, sizes = np.array(mt[0]), np.array(mt[1])
         stacked = np.cumsum(sizes, axis=1) / 1024**3
+        max_memory_allocated = torch.cuda.max_memory_allocated()
+        max_memory_reserved = torch.cuda.max_memory_reserved()
 
         # Plot memory timeline as stacked data
         fig = plt.figure(figsize=figsize, dpi=80)
@@ -1164,7 +1166,11 @@ def export_memory_timeline_html(
         axes.set_xlabel("Time (us)")
         axes.set_ylabel("Memory (GB)")
         title = "\n\n".join(
-            ([title] if title else []) + [f"Max: {stacked[:, -1].max():.2f} GB"]
+            ([title] if title else [])
+            + [
+                f"Max memory allocated: {max_memory_allocated/(10**9):.2f} GB \n"
+                f"Max memory reserved: {max_memory_reserved/(10**9):.2f} GB"
+            ]
         )
         axes.set_title(title)
 
```
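
For readers outside the diff: `torch.cuda.max_memory_allocated()` reports the peak bytes occupied by live tensors since program start (or the last peak-stat reset), while `torch.cuda.max_memory_reserved()` reports the peak bytes held by the CUDA caching allocator, which is always at least as large because freed segments stay cached. A minimal standalone sketch of the distinction (the tensor size and device are illustrative assumptions, not part of this diff):

```python
# Sketch of the two peak counters used above (illustrative, standalone).
import torch

torch.cuda.reset_peak_memory_stats()

x = torch.empty(256 * 1024 * 1024, dtype=torch.uint8, device="cuda")  # ~256 MB
del x  # the tensor is freed, but the caching allocator keeps the segment

# Peak tensor usage vs. peak allocator footprint: reserved >= allocated.
print(f"max allocated: {torch.cuda.max_memory_allocated() / 10**9:.2f} GB")
print(f"max reserved:  {torch.cuda.max_memory_reserved() / 10**9:.2f} GB")
```

Note that the title values introduced here are scaled by 10**9 (decimal GB), whereas the stacked series plotted beneath them is scaled by 1024**3 (binary GiB).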
