-
It's probably going to be very slow and expensive, but it's definitely an exciting prospect that a single model can do all this.
-
Is anyone else excited to see how we can use GPT-4's visual inputs for perception-action loops? Perhaps we will no longer need to describe the scene to the drone or robotic arm, and instead they will be able to identify their surroundings and nearby objects on their own.
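The idea above can be sketched as a minimal perception-action loop where a vision-language model replaces the hand-written scene description. This is purely illustrative: `describe_scene` and `choose_action` are hypothetical stand-ins (a real system would send the camera frame to a multimodal model such as GPT-4 with image input and run an actual control policy).

```python
from dataclasses import dataclass

@dataclass
class Observation:
    image: bytes  # raw camera frame from the drone or robotic arm

def describe_scene(obs: Observation) -> str:
    # Hypothetical stub: a real implementation would send obs.image to a
    # vision-language model and receive a textual scene description back.
    return "a red block on the table, gripper 10 cm above it"

def choose_action(description: str) -> str:
    # Hypothetical policy: map the model's textual description to an action.
    if "block" in description:
        return "lower gripper"
    return "scan surroundings"

def step(obs: Observation) -> str:
    """One loop iteration: perceive -> describe -> act."""
    return choose_action(describe_scene(obs))

print(step(Observation(image=b"")))  # -> "lower gripper"
```

The key point is that the text interface between perception and control stays the same; only the source of the description changes, from a human operator to the model itself.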