
[VM] Memory Manager moved up to runtime #15833

Merged
merged 11 commits from vm_mem_planner into apache:main on Oct 3, 2023

Conversation

@srkreddy1238 (Contributor)

Now the graph runtime also uses the same memory manager. This accommodates a common memory manager with pooled and naive support.

As a follow-up we can move the WorkspacePool to use this common memory manager.

This is a prerequisite to accommodate a common two-stage memory allocation as described in #15058.
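
For context, here is a minimal sketch (not taken from the PR diff) of how a runtime component could request the shared allocator after this change. It assumes the `include/tvm/runtime/memory/memory_manager.h` header path and the `tvm::runtime::memory` namespace referenced in the review threads below; treat the exact signatures as assumptions rather than the committed API.

```cpp
// Hedged sketch: requesting the common runtime-level allocator.
// Assumes the post-PR header path and namespace; signatures may differ.
#include <tvm/runtime/memory/memory_manager.h>
#include <tvm/runtime/ndarray.h>

using tvm::runtime::DataType;
using tvm::runtime::Device;
using tvm::runtime::NDArray;
using tvm::runtime::ShapeTuple;
using namespace tvm::runtime::memory;

NDArray AllocThroughCommonManager() {
  Device dev{kDLCPU, 0};
  // Ask the runtime-level manager for a pooled allocator; kNaive would defer
  // each allocation directly to the device instead of reusing buffers.
  Allocator* alloc = MemoryManager::GetOrCreateAllocator(dev, AllocatorType::kPooled);
  // Allocate a small float tensor through the common path now shared by the
  // VM and the graph runtime.
  return alloc->Empty(ShapeTuple({2, 3}), DataType::Float(32), dev);
}
```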

@srkreddy1238 changed the title from "[VM] memory Manager moved up to runtime" to "[VM] Memory Manager moved up to runtime" on Sep 27, 2023
@srkreddy1238 force-pushed the vm_mem_planner branch 2 times, most recently from 6562464 to 3f8b8ae on September 27, 2023 10:43
Three review comments on include/tvm/runtime/memory_manager.h (outdated, resolved)
@yongwww (Member) left a comment


Thanks for the work. Overall it looks good to me; I have left some comments.

Review comments on include/tvm/runtime/memory/memory_manager.h (outdated, resolved) and src/runtime/memory/memory_manager.cc (resolved)
@yongwww (Member) left a comment


LGTM

@echuraev (Contributor) left a comment


In general LGTM. Thank you for your PR! There is one comment that should be addressed.

Review comment on src/runtime/memory/memory_manager.cc (outdated, resolved)
@srkreddy1238 force-pushed the vm_mem_planner branch 2 times, most recently from 36b9112 to 98fa479 on September 29, 2023 15:35
srkreddy1238 and others added 11 commits October 3, 2023 15:29
Now the graph runtime also uses the same memory manager.
This accommodates a common memory manager with pooled and naive support.

As a follow-up we can move the WorkspacePool to use this common memory manager.
Co-authored-by: Egor Churaev <egor.churaev@gmail.com>
Using the available allocator instead of the requested one led to an unexpected crash
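
The last commit above notes that relying on whichever allocator happened to be available caused a crash. The sketch below illustrates, under assumed signatures, the resulting behaviour: a naive and a pooled allocator can coexist for the same device, and callers look up the specific allocator type they need rather than any available one.

```cpp
// Hedged sketch of per-device allocator coexistence; names follow the PR's
// header, but exact signatures are assumptions, not quotes from the diff.
#include <tvm/runtime/logging.h>
#include <tvm/runtime/memory/memory_manager.h>

using tvm::runtime::Device;
using namespace tvm::runtime::memory;

void CoexistingAllocators() {
  Device dev{kDLCPU, 0};
  Allocator* naive = MemoryManager::GetOrCreateAllocator(dev, AllocatorType::kNaive);
  Allocator* pooled = MemoryManager::GetOrCreateAllocator(dev, AllocatorType::kPooled);
  // Two distinct allocator objects now back the same device.
  ICHECK_NE(naive, pooled);
  // Requesting a specific type returns that allocator, which avoids freeing a
  // buffer through an allocator that did not create it.
  ICHECK_EQ(MemoryManager::GetAllocator(dev, AllocatorType::kPooled), pooled);
}
```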
@yongwww merged commit b8abff9 into apache:main on Oct 3, 2023
5 checks passed
@yongwww (Member) commented on Oct 3, 2023

This was merged, thanks for the effort!

yongwww pushed a commit to yongwww/tvm that referenced this pull request Oct 6, 2023
* [VM] memory Manager moved up to runtime

Now the graph runtime also uses the same memory manager.
This accommodates a common memory manager with pooled and naive support.

As a follow-up we can move the WorkspacePool to use this common memory manager.

* * update dependents with new file addition.

* *  define memory_manager under new namespace

* * use ShapeTuple across vm executor and memory_manager

* * ShapeTuple across the Allocators

* * GetDataSize is moved to DeviceAPI and memory_manager uses this interface.

* * review comments

* * Make compiler happy with unused variables

* * lint

* Update src/runtime/memory/memory_manager.cc

Co-authored-by: Egor Churaev <egor.churaev@gmail.com>

* * allow multiple allocators to coexist for the same device.
Using the available allocator instead of the requested one led to an unexpected crash

---------

Co-authored-by: Egor Churaev <egor.churaev@gmail.com>
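
One item in the squashed commit message above moves GetDataSize to DeviceAPI, so the memory manager queries the per-device API for tensor byte sizes. The following hedged sketch shows how that interface could be used; the GetDataSize signature is an assumption based on the commit summary, not copied from the diff.

```cpp
// Hedged sketch: asking the per-device API for a tensor's byte size.
#include <tvm/runtime/device_api.h>
#include <tvm/runtime/ndarray.h>

using tvm::runtime::DeviceAPI;
using tvm::runtime::NDArray;

size_t TensorBytes(const NDArray& arr) {
  // NDArray::operator-> exposes the underlying DLTensor, whose device field
  // selects which DeviceAPI implementation answers the query.
  const DLTensor* tensor = arr.operator->();
  return DeviceAPI::Get(tensor->device)->GetDataSize(*tensor);
}
```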