
Uplift third_party/tt-metal to f8fe02d9f591f4c7f929131630ce5147cfe88f8e 2025-02-01 #2060

Merged
merged 3 commits into main from uplift
Feb 2, 2025

Conversation

@vmilosevic commented Feb 1, 2025

This PR uplifts third_party/tt-metal to f8fe02d9f591f4c7f929131630ce5147cfe88f8e.

  • Updates all MLIR code for the TTNN Shape-related refactoring (see the sketch after this list):
    • ttnn::SimpleShape renamed to ttnn::Shape in tt-metal 2df6e60cd9
    • Tensor get_shape() replaced with get_logical_shape() due to tt-metal 8063f0ab02
    • Removed code in the EmitC path that selected between Shape and SimpleShape; it now just uses Shape (previously called SimpleShape).
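A minimal sketch of how a typical call site changes under those two tt-metal commits (illustrative only, not taken from this PR's diff; assumes a ttnn::Tensor value named tensor):

    // Before 2df6e60cd9 / 8063f0ab02 there were two shape flavors and two accessors:
    //   ttnn::SimpleShape logical_shape = tensor.get_logical_shape();
    //   ttnn::Shape padded_shape = tensor.get_shape();   // old accessor, replaced per 8063f0ab02

    // After: SimpleShape is now just ttnn::Shape, and callers use get_logical_shape().
    ttnn::Shape logical_shape = tensor.get_logical_shape();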

@kmabeeTT commented Feb 1, 2025

Pushed updates to match tt-metal's renaming of ttnn::SimpleShape to ttnn::Shape in some code, and to replace get_shape() with get_logical_shape() on Tensor objects; the default build now works locally. I know a few more updates are still missing (EmitC, gutting the usage of both shape types to just use ttnn::Shape, and removing some LegacyShape usage), but I wanted a baseline CI run to make sure that code is covered. Edit: Passing after the EmitC-related updates. I didn't bother removing LegacyShape usage this time, since nothing was failing and CI was already passing here and in the 3 FEs, so merging this while it's still hot.
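A rough illustration of the EmitC simplification (hypothetical variable names, not the actual tt-mlir code): where the conversion previously chose between the two shape type names when emitting C++, it can now emit ttnn::Shape unconditionally.

    // Before: pick the emitted C++ type string based on which shape flavor the op used.
    //   std::string shape_type = uses_simple_shape ? "ttnn::SimpleShape" : "ttnn::Shape";

    // After: only one shape type remains, so it is always emitted.
    std::string shape_type = "ttnn::Shape";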

@kmabeeTT commented Feb 2, 2025

Brings in 27 tt-metal commits. CI is passing for all 3 FE projects:

tt-torch: link
tt-forge-fe: link
tt-xla: link

git l 9d69fb143bac50983dff914c5348539d0a7d2021..f8fe02d9f591f4c7f929131630ce5147cfe88f8e
2025-01-31 f8fe02d9f5 by GitHub (Author afuller@tenstorrent.com) : Extra cleanup.  With emphasis (#17466)
2025-01-31 4078e26b7f by GitHub (Author omilyutin@tenstorrent.com) : #17203: Resolve a deadlock resulting in graph tracker tracking allocation before it was completed  (#17436)
2025-01-31 f30bfcb0c6 by GitHub (Author dmakoviichuk@tenstorrent.com) : Optimized launch_op compilation (#17448) Tested locally only :(
2025-01-31 70206b9cf1 by GitHub (Author 159165450+skhorasganiTT@users.noreply.github.com) : [skip ci] Update vLLM commit for deepseek-distill-llama70b in README.md (#17459)
2025-01-31 2df6e60cd9 by GitHub (Author sminakov@tenstorrent.com) : Rename ttnn::SimpleShape to ttnn::Shape (#17454)
2025-01-31 b6d72ad545 by GitHub (Author afuller@tenstorrent.com) : Update the .rst to match code (#17455)
2025-01-31 8063f0ab02 by GitHub (Author sminakov@tenstorrent.com) : Remove ttnn::Shape and LegacyShape from the codebase (#17413)
2025-01-31 f9b52ff317 by Aditya Saigal (Author asaigal@tenstorrent.com) : #0: Modify DeviceRange semantics to match CoreRange
2025-01-31 c7825a1e62 by GitHub (Author 132708568+kpaigwar@users.noreply.github.com) : #0: fix bug in createqkv (#17452)
2025-01-31 e106618f7c by GitHub (Author williamly@tenstorrent.com) : [skip ci] #0: Add failure signature for runner shutdown (#17439)
2025-01-31 2b29d50289 by GitHub (Author 159165450+skhorasganiTT@users.noreply.github.com) : [Llama3.2-11b-vision] Add max_cross_attn_tokens property to vLLM generator class (#17401)
2025-01-31 12efa1f54f by GitHub (Author moconnor@tenstorrent.com) : [skip ci] Add DeepSeek perf and instructions (#17445)
2025-01-31 ed0d2e14ac by GitHub (Author afuller@tenstorrent.com) : Step 1 of normalizing Boost as a dependency (#17446)
2025-01-31 41d4b368f5 by GitHub (Author 159165450+skhorasganiTT@users.noreply.github.com) : Add Qwen2.5 vLLM generator (based on LlamaGenerator), fix batch 1 issue with generator's decode forward (#17422)
2025-01-31 47fe29f879 by GitHub (Author afuller@tenstorrent.com) : Address clang tidy error (#17437)
2025-01-31 c109516f95 by GitHub (Author 164946524+bbradelTT@users.noreply.github.com) : #16380: add nD support for generic reductions (#17399)
2025-01-31 6c87f4c91a by John Bauman (Author jbauman@tenstorrent.com) : #0: Update pgm_dispatch_golden.json
2025-01-31 d354e26cac by Austin Ho (Author aho@tenstorrent.com) : #0: Change noc_inline_dw_write to use write_at_cmd_buf to avoid overwriting cached register values of write_reg_cmd_buf
2025-01-31 e099072861 by GitHub (Author williamly@tenstorrent.com) : #15414: Read annotation data to determine job-level failure signature and reason (#17410)
2025-01-31 a53f8dd024 by GitHub (Author caixunshiren@gmail.com) : Xuncai/all gather fixes (#17155)
2025-01-31 9760f02266 by GitHub (Author 129077594+johanna-rock-tt@users.noreply.github.com) : Post all gather layernorm/rms norm with resharding (#17377)
2025-01-31 afe387ac62 by GitHub (Author 132708568+kpaigwar@users.noreply.github.com) : #0: Fuse batch slicing with create_qkv_heads (#16416)
2025-01-31 c9ddf8478b by GitHub (Author nardo@tenstorrent.com) : Page size assertion error (#16717)
2025-01-31 783d35aea7 by GitHub (Author nardo@tenstorrent.com) : Delete tilize cpp tests (#17329)
2025-01-31 25ee758217 by GitHub (Author skrstic@tenstorrent.com) : Removing skip_for_blackhole and fix some minor issues for blackhole conv2d (#17222)
2025-01-31 2d624db973 by GitHub (Author skrstic@tenstorrent.com) : Expanded im2col documentation (#16846)
2025-01-30 f708212d6c by Joseph Chu (Author jchu@tenstorrent.com) : #0: Add programming examples for TT-Metalium multi-device native APIs

@kmabeeTT merged commit a64c145 into main Feb 2, 2025
30 checks passed
@kmabeeTT deleted the uplift branch February 2, 2025 00:33