AssertionError: The dataloader must be a torch_xla.distributed.parallel_loader.MpDeviceLoader
#30091
Closed
System Info

transformers: v4.39.3
torch: 2.3.0
torch_xla: 2.3.0+gite385c2f
peft: 0.10.0
trl: 0.8.1
Following the discussion in #29659, where @shub-kris provided a script in #29659 (comment), I ran into this issue.

The error asks me to pass an accelerate.DataLoaderConfiguration, but I am not sure where to do that: Accelerate is invoked internally by transformers somewhere.
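For reference, this is where that option lives when using Accelerate directly, and the closest hook I can find on the Trainer side — a minimal sketch, assuming accelerate >= 0.28 (which introduced `DataLoaderConfiguration`) and the `accelerator_config` argument of `TrainingArguments` (added in transformers 4.38); I am not sure either of these is the intended fix here:

```python
from accelerate import Accelerator, DataLoaderConfiguration
from transformers import TrainingArguments

# Standalone Accelerate: the config is passed to the Accelerator itself.
accelerator = Accelerator(
    dataloader_config=DataLoaderConfiguration(dispatch_batches=False)
)

# Trainer builds its own Accelerator internally; accelerator_config
# forwards dataloader-related options to it (dict form shown here).
args = TrainingArguments(
    output_dir="out",
    accelerator_config={"dispatch_batches": False},
)
```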
Who can help?

@ArthurZucker
@muellerzr
@shub-kris
Information

Tasks

An officially supported task in the examples folder (such as GLUE/SQuAD, ...)

Reproduction
I ran the example on a TPU v4 (2x2x4 topology) on Kubernetes.
The Dockerfile used to build the image is given below, where demo.py is the script from the issue comment.
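In essence it installs the pinned libraries on top of a PyTorch/XLA TPU image and copies the script in. A minimal sketch along those lines — the base image tag and the pip pins are assumptions based on the versions listed above, not the verbatim file:

```dockerfile
# Sketch only: base image and pins are assumptions from the versions above.
FROM us-central1-docker.pkg.dev/tpu-pytorch-releases/docker/xla:r2.3.0_3.10_tpuvm

RUN pip install transformers==4.39.3 peft==0.10.0 trl==0.8.1

WORKDIR /app
COPY demo.py .

CMD ["python", "demo.py"]
```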
Expected behavior

The model fine-tunes and prints output.
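Additional context: the type named in the assertion is torch_xla's per-device loader. In plain torch_xla code the DataLoader gets wrapped like this before iteration (a minimal sketch, independent of Trainer's internals; the toy dataset is just a stand-in):

```python
import torch
import torch_xla.core.xla_model as xm
import torch_xla.distributed.parallel_loader as pl
from torch.utils.data import DataLoader, TensorDataset

# Toy data stands in for the tokenized training set.
dataset = TensorDataset(torch.randn(32, 8), torch.randint(0, 2, (32,)))
loader = DataLoader(dataset, batch_size=8)

device = xm.xla_device()
# MpDeviceLoader moves each batch onto the XLA device as it is yielded;
# this is the type the failing assertion expects the dataloader to be.
device_loader = pl.MpDeviceLoader(loader, device)

for features, labels in device_loader:
    break  # a real training step would go here
```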