
First try rusb-async #143

Open · a1ien wants to merge 1 commit into master
Conversation

@a1ien (Owner) commented Sep 4, 2022

rusb-async is also published on crates.io: https://crates.io/crates/rusb-async

Co-authored-by: Ryan Butler <thebutlah@gmail.com>
Co-authored-by: Kevin Mehall <km@kevinmehall.net>
@JJTech0130

Does this work for isochronous transfers? https://github.com/JJTech0130/easycap-rs/blob/master/src/streaming.rs is my code, but it's just printing 0s

@a1ien (Owner, Author) commented Sep 14, 2022

Hi. Thank you for trying the current API. I haven't tested isochronous endpoints. What operating system are you trying it on?

@JJTech0130

macOS. I’m trying to port https://github.com/JJTech0130/easycap to Rust; it used libusb from Python.
I’m fairly new to Rust, so it might just be me doing something wrong.

@a1ien (Owner, Author) commented Sep 14, 2022

Oh, it looks like I've implemented the isochronous API incorrectly. I'll try to fix it as soon as I have time, possibly this week.

@a1ien (Owner, Author) commented Sep 14, 2022

I completely forgot that isochronous transfers are different from the other types. I need more time to think about how to implement a better API for them, because for iso transfers we should return a list of packets.
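
To make that concrete, here is a purely hypothetical sketch (not the crate's actual API) of the shape such an iso completion result might take:

/// Hypothetical sketch only — not rusb-async's API. Each iso packet
/// completes independently, with its own status and payload.
pub struct IsoPacket {
    /// Per-packet libusb completion status code.
    pub status: i32,
    /// The `actual_length` bytes received for this packet.
    pub data: Vec<u8>,
}

/// An isochronous transfer would then resolve to a list of packets
/// rather than to a single flat buffer.
pub type IsoResult = Vec<IsoPacket>;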

pub unsafe fn submit_control_raw(&mut self, buffer: Vec<u8>) -> Result<()> {
    // Safety: If transfer is submitted, it is pushed onto `pending` where it will be
    // dropped before `device` is freed.
    unsafe {
@JJTech0130 commented Sep 14, 2022

I'm not really that knowledgeable about unsafe Rust, but when I patched in a modified version while trying to fix isochronous transfers, rustc warned me that the `unsafe {` block is unnecessary because it's already an `unsafe fn`.
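
For reference, a minimal reproduction of that warning (the lint names are standard rustc; the function itself is made up):

// On current editions the body of an `unsafe fn` is itself an unsafe
// context, so rustc flags the inner block as `unused_unsafe` unless the
// `unsafe_op_in_unsafe_fn` lint is enabled crate-wide.
#![warn(unsafe_op_in_unsafe_fn)]

/// # Safety
/// `p` must point to a valid, readable byte.
pub unsafe fn read_byte(p: *const u8) -> u8 {
    // With the lint above enabled this block is required; without it,
    // rustc warns that it is unnecessary.
    unsafe { *p }
}

fn main() {
    let x = 42u8;
    // Safety: `&x` is a valid pointer to a live byte.
    println!("{}", unsafe { read_byte(&x) });
}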


@JJTech0130

Turns out it's called a "struct hack": to make the last member of the struct have a variable size.
Seems very unsafe lol.
https://www.geeksforgeeks.org/struct-hack/
Not sure how one would implement it in Rust

@a1ien (Owner, Author) commented Sep 15, 2022

Yes, it's the C struct hack, and it works in Rust for `#[repr(C)]` structs. Example below:

use libusb1_sys as ffi;
use std::ptr::NonNull;

fn main() {
    unsafe {
        // Allocate a transfer with room for 5 iso packet descriptors.
        let ptr =
            NonNull::new(ffi::libusb_alloc_transfer(5)).expect("Could not allocate transfer!");

        let transfer: *mut ffi::libusb_transfer = ptr.as_ptr();
        (*transfer).num_iso_packets = 5;

        // Split a 200-byte buffer evenly: 40 bytes per packet.
        ffi::libusb_set_iso_packet_lengths(ptr.as_ptr(), (200 / 5) as u32);
        for i in 0..(*transfer).num_iso_packets {
            // `iso_packet_desc` is the zero-sized trailing array; indexing past
            // it reaches the extra memory allocated by libusb_alloc_transfer.
            let len = (*transfer)
                .iso_packet_desc
                .get_unchecked_mut(i as usize)
                .length;
            println!("len of #{i} is {len}")
        }

        // Don't leak the allocation.
        ffi::libusb_free_transfer(ptr.as_ptr());
    }
}

@JJTech0130 commented Sep 15, 2022

This is what I came up with yesterday, forgot to add it here:

    /// Prerequisite: must be an isochronous transfer
    fn iso_packet_descriptors(&self) -> &[ffi::libusb_iso_packet_descriptor] {
        assert!(self.transfer().transfer_type == LIBUSB_TRANSFER_TYPE_ISOCHRONOUS);
        // The packet descriptors are stored using the "C99 struct hack":
        // there is a zero-sized array at the end of the struct, and the
        // struct is allocated with extra memory at the end; that extra
        // memory holds the descriptors addressed through the last member.
        unsafe {
            let addr = addr_of!(self.transfer().iso_packet_desc)
                .cast::<ffi::libusb_iso_packet_descriptor>();
            slice::from_raw_parts(addr, self.transfer().num_iso_packets as usize)
        }
    }
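
A companion helper could then expose the packet payloads. This is a sketch mirroring libusb's C inline `libusb_get_iso_packet_buffer_simple`, assuming all packets were submitted with the same length:

use libusb1_sys as ffi;
use std::slice;

/// Sketch only. Safety: `transfer` must point to a completed isochronous
/// transfer, and the transfer's buffer must outlive the returned slice.
unsafe fn iso_packet_data<'a>(transfer: *const ffi::libusb_transfer, i: usize) -> &'a [u8] {
    let t = &*transfer;
    assert!((i as i32) < t.num_iso_packets);
    let desc = &*t.iso_packet_desc.as_ptr().add(i);
    // With equal-sized packets, packet `i` starts at `i * length` inside
    // the transfer buffer; only `actual_length` bytes of it are valid.
    let offset = i * desc.length as usize;
    slice::from_raw_parts(t.buffer.add(offset), desc.actual_length as usize)
}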

@a1ien (Owner, Author) commented Sep 15, 2022

Looks good. Did you try this with iso? I don't have a device with an iso endpoint, so I can't test.
Also, I found another misconception in the current API. I'll try to do some rework over the weekend.

@JJTech0130

Yes, the reason I want to use rusb in the first place is this device called “EasyCap”, which allows capturing composite audio + video. I wanted a Rust implementation, as the Python version was too slow. It uses isochronous transfers for streaming video.

@JJTech0130

I’m wondering how you want to implement this: should isochronous transfers have their own poll function, with a different return type, or should there be just one function that can return multiple types?
I’m leaning towards having multiple functions: that way the return type is guaranteed at compile time.
I’m also wondering if the transfer type should be stored in the struct somehow? Or even have multiple structs implementing common traits?

@a1ien (Owner, Author) commented Sep 15, 2022

I am still not sure. Maybe an enum, or something that contains an enum inside. Also, should a pool process only one endpoint, or should we pass the transfer type in as user data?


@JJTech0130

Also, I think the Transfer structs should be cleaned up and given safe wrappers... I don't like the whole tie-it-into-a-pool thing... imho the user should be free to use Tokio or another library.
And it looks like Futures are the right thing to do in this scenario.

@a1ien (Owner, Author) commented Sep 25, 2022

> imho the user should be free to use Tokio or another library.
> And it looks like Futures are the right thing to do in this scenario.

It's really hard to do. Because libusb is completion-based, it's really hard to write a safe wrapper. If you haven't seen this post, https://without.boats/blog/io-uring/, I strongly recommend reading it first.
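
The crux of that post, applied here: with a completion-based backend, the safest shape is for submission to take ownership of the buffer and for completion to hand it back. A toy illustration (not rusb-async's real types):

use std::collections::VecDeque;

/// Toy stand-in for a completion-based pool, for illustration only.
struct ToyPool {
    in_flight: VecDeque<Vec<u8>>,
}

impl ToyPool {
    fn new() -> Self {
        Self { in_flight: VecDeque::new() }
    }

    /// Submission takes ownership: while the transfer is in flight, the
    /// kernel/libusb writes into the buffer, so no Rust borrow may exist.
    /// A `&mut [u8]` argument would be unsound, because the borrowing
    /// future could be dropped (cancelled) while the write is ongoing.
    fn submit(&mut self, buf: Vec<u8>) {
        self.in_flight.push_back(buf);
    }

    /// Completion returns ownership of the (now filled) buffer.
    fn poll(&mut self) -> Option<Vec<u8>> {
        self.in_flight.pop_front()
    }
}

fn main() {
    let mut pool = ToyPool::new();
    pool.submit(vec![0u8; 64]);
    let buf = pool.poll().expect("one transfer in flight");
    println!("got back {} bytes", buf.len());
}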


@JJTech0130

Y’know what, I’m going to stop trying to restructure it. I’m realizing that I can’t keep using OOP principles here… not sure what the alternative is though…

@JJTech0130

So, I'm seeing several things. One is that there isn't actually a way to tell which transfer is which, is there?
My use case has an audio stream and a video stream; the audio stream is bulk and the video isochronous. I had a simple workaround where it would simply get the total length of all the packets and dump them into the buffer, thinking I could parse them on the other end. The problem I'm seeing is that there is no good way to separate my audio and video responses, is there? Other than just guessing based on the length of the buffer. And there's no nice "resubmit" like there is in the Python API.

@JJTech0130

Maybe an enum like you said:

enum TransferType {
    Bulk { endpoint: u8 },
    Control { request_type: u8, request: u8, value: u16, index: u16 },
    Isochronous { endpoint: u8, num_packets: u32 },
}

If we passed that as the user data, it would be available on the other end, and we could keep track of which transfer it was. It would also allow for a quick re-submit.
I'm not sure if the buffer should be part of the enum or just something in the actual transfer.
You would then pass this "metadata" into the actual transfer.
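
Riffing on that, a hypothetical completion type (a sketch, not the crate's API) could pair the metadata with the completed data, so the caller can both dispatch on it and resubmit:

/// Hypothetical completion type, assuming the `TransferType` metadata above
/// is carried through libusb's user data and returned on completion.
pub enum CompletedTransfer {
    Bulk { endpoint: u8, data: Vec<u8> },
    Control { data: Vec<u8> },
    /// Iso completions keep per-packet boundaries instead of one flat buffer.
    Isochronous { endpoint: u8, packets: Vec<Vec<u8>> },
}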

let length = data.len() as u16;

ffi::libusb_fill_control_setup(
    buf.as_mut_ptr() as *mut u8,


What's with the `as *mut u8`? Shouldn't `.as_mut_ptr()` return that already?

    index: u16,
    data: &[u8],
) -> Self {
    let mut buf = Vec::with_capacity(data.len() + LIBUSB_CONTROL_SETUP_SIZE);


Sooo... how is data actually used? You create the buffer with its length, and you pass its length to control_setup, but where do you actually copy it into the buffer?


I'm working on the other part of my refactor for iso transfers right now, but someone on the Rust Discord suggested boxed slices instead of this weird Vec stuff.
BTW, I don't think the length is ever updated either, so all the data in the buffer is in the reserved portion (as it's only written to by the FFI function, which doesn't know about Vecs).
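
For what it's worth, here is a sketch of a fill that avoids both problems, copying the payload in after the 8-byte setup packet and keeping the Vec's length honest (the function name is illustrative, not the PR's code):

use libusb1_sys as ffi;

// A USB control setup packet is always 8 bytes (USB 2.0 spec §9.3).
const LIBUSB_CONTROL_SETUP_SIZE: usize = 8;

fn control_buffer(request_type: u8, request: u8, value: u16, index: u16, data: &[u8]) -> Vec<u8> {
    // Zero-initialize instead of only reserving capacity, so the Vec's
    // length actually covers everything the FFI call and the copy write.
    let mut buf = vec![0u8; LIBUSB_CONTROL_SETUP_SIZE + data.len()];
    unsafe {
        ffi::libusb_fill_control_setup(
            buf.as_mut_ptr(),
            request_type,
            request,
            value,
            index,
            data.len() as u16,
        );
    }
    // The step the original code skipped: copy the payload in after the setup.
    buf[LIBUSB_CONTROL_SETUP_SIZE..].copy_from_slice(data);
    buf
}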

@JJTech0130

Also, oops, I totally forgot about implementing Interrupt transfers...

@brandonros commented Oct 15, 2022

@a1ien can you make rusb_async::Error public so we can match on PollTimeout errors please?

error[E0603]: module `error` is private
   --> src/main.rs:181:37
    |
181 |                         rusb_async::error::Error::PollTimeout => {
    |                                     ^^^^^ private module
    |
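
A minimal sketch of what that could look like on the library side, assuming the crate keeps the module private and just re-exports the type (the `Result` alias is a guess based on the signatures above):

// rusb-async/src/lib.rs
mod error;
pub use error::{Error, Result};

Callers could then match on `rusb_async::Error::PollTimeout` directly.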

@a1ien (Owner, Author) commented Oct 16, 2022

Hi. Sure, but I want to rewrite this approach a little. I found a few more omissions. I'll try to find time and make a new release at the end of October.

@JJTech0130

@a1ien Sorry to bother you, but I just wondered if you were going to work on this soon?

@a1ien (Owner, Author) commented Jan 18, 2023

Sorry for the delay. I'm a little busy at my new job. I will try to find time within a month.

@mcuee commented May 1, 2023

Just wondering what the test device used is. I am trying to modify the example to suit the Cypress EZ-USB FX3 bulksrcsink example. Now read_device seems to work, but not read_async.

PS C:\work\libusb\rust\rusb-async> git diff
diff --git a/examples/read_device.rs b/examples/read_device.rs
index bf1d943..079ee86 100644
--- a/examples/read_device.rs
+++ b/examples/read_device.rs
@@ -105,11 +105,6 @@ fn read_device<T: UsbContext>(
         );
     }

-    match find_readable_endpoint(device, device_desc, TransferType::Interrupt) {
-        Some(endpoint) => read_endpoint(handle, endpoint, TransferType::Interrupt),
-        None => println!("No readable interrupt endpoint"),
-    }
-
     match find_readable_endpoint(device, device_desc, TransferType::Bulk) {
         Some(endpoint) => read_endpoint(handle, endpoint, TransferType::Bulk),
         None => println!("No readable bulk endpoint"),
@@ -169,18 +164,10 @@ fn read_endpoint<T: UsbContext>(

     match configure_endpoint(handle, &endpoint) {
         Ok(_) => {
-            let mut buf = [0; 256];
+            let mut buf = [0; 1024];
             let timeout = Duration::from_secs(1);

             match transfer_type {
-                TransferType::Interrupt => {
-                    match handle.read_interrupt(endpoint.address, &mut buf, timeout) {
-                        Ok(len) => {
-                            println!(" - read: {:?}", &buf[..len]);
-                        }
-                        Err(err) => println!("could not read from endpoint: {}", err),
-                    }
-                }
                 TransferType::Bulk => match handle.read_bulk(endpoint.address, &mut buf, timeout) {
                     Ok(len) => {
                         println!(" - read: {:?}", &buf[..len]);
diff --git a/rusb-async/examples/read_async.rs b/rusb-async/examples/read_async.rs
index 7685f08..c7fabf9 100644
--- a/rusb-async/examples/read_async.rs
+++ b/rusb-async/examples/read_async.rs
@@ -33,7 +33,7 @@ fn main() {
     );

     const NUM_TRANSFERS: usize = 32;
-    const BUF_SIZE: usize = 64;
+    const BUF_SIZE: usize = 1024;

     let mut async_pool = TransferPool::new(device).expect("Failed to create async pool!");

diff --git a/rusb-async/examples/read_write_async.rs b/rusb-async/examples/read_write_async.rs
index 4ca8119..fddddd3 100644
--- a/rusb-async/examples/read_write_async.rs
+++ b/rusb-async/examples/read_write_async.rs
@@ -33,7 +33,7 @@ fn main() {

             loop {
                 let mut buf = if write_pool.pending() < 8 {
-                    Vec::with_capacity(64)
+                    Vec::with_capacity(1024)
                 } else {
                     write_pool
                         .poll(Duration::from_secs(5))
@@ -42,7 +42,7 @@ fn main() {

                 buf.clear();
                 buf.push(i);
-                buf.resize(64, 0x2);
+                //buf.resize(64, 0x2);

                 write_pool
                     .submit_bulk(out_endpoint, buf)
PS C:\work\libusb\rust\rusb-async> .\target\debug\examples\read_device 0x04b4 0x00f1
Active configuration: 1
Languages: [Language { raw: 1033 }]
Manufacturer: Some("Cypress")
Product: Some("FX3")
Serial Number: None
Reading from endpoint: Endpoint { config: 1, iface: 0, setting: 0, address: 129 }
 - kernel driver? false
 - read: [170, 170, 170, 170, 170, 170, 170, 170, 170, 170, 170, 170, 170, 170, 170, 170, …] (output truncated: every byte read is 170)
 - 
PS C:\work\libusb\rust\rusb-async> .\target\debug\examples\read_async 0x04b4 0x00f1 0x81
thread 'main' panicked at 'called `Result::unwrap()` on an `Err` value: ParseIntError { kind: InvalidDigit }', rusb-async\examples\read_async.rs:27:60
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace

PS C:\work\libusb\rust\rusb-async> .\target\debug\examples\read_write_async 0x04b4 0x00f1 0x81 0x01
thread 'main' panicked at 'called `Result::unwrap()` on an `Err` value: ParseIntError { kind: InvalidDigit }', rusb-async\examples\read_write_async.rs:16:57
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace

PS C:\work\libusb\rust\rusb-async> .\target\debug\examples\libusb_info.exe
libusb v1.0.26.11791
has capability? true
has hotplug? false
has HID access? true
supports detach kernel driver? false
PS C:\work\libusb\rust\rusb-async> .\target\debug\examples\list_devices.exe
Bus 001 Device 014 ID 04b4:00f1  480 Mbps
Device Descriptor:
  bcdUSB              2.10
  bDeviceClass        0x00
  bDeviceSubClass     0x00
  bDeviceProtocol     0x00
  bMaxPacketSize0       64
  idVendor          0x04b4 Cypress Semiconductor Corp.
  idProduct         0x00f1 Unknown product
  bcdDevice           0.00
  iManufacturer          1 Cypress
  iProduct               2 FX3
  iSerialNumber          0
  bNumConfigurations     1
  Config Descriptor:
    bNumInterfaces         1
    bConfigurationValue    1
    iConfiguration         0
    bmAttributes:
      Self Powered     false
      Remote Wakeup    false
    bMaxPower            100mW
    no extra data
    Interface Descriptor:
      bInterfaceNumber       0
      bAlternateSetting      0
      bNumEndpoints          2
      bInterfaceClass     0xff
      bInterfaceSubClass  0x00
      bInterfaceProtocol  0x00
      iInterface             0
    []
      Endpoint Descriptor:
        bEndpointAddress    0x01 EP 1 Out
        bmAttributes:
          Transfer Type          Bulk
          Synch Type             NoSync
          Usage Type             Data
        wMaxPacketSize    0x0200
        bInterval              0
      Endpoint Descriptor:
        bEndpointAddress    0x81 EP 1 In
        bmAttributes:
          Transfer Type          Bulk
          Synch Type             NoSync
          Usage Type             Data
        wMaxPacketSize    0x0200
        bInterval              0
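
On the ParseIntError panics: a guess is that the examples feed the `0x`-prefixed arguments straight to `from_str_radix(.., 16)`, which rejects the prefix. Passing `04b4 00f1 81 01` without the prefix would sidestep it; a hypothetical helper that tolerates both forms:

use std::num::ParseIntError;

// Hypothetical helper: `from_str_radix` does not accept a `0x` prefix,
// so strip it before parsing.
fn parse_hex_u16(s: &str) -> Result<u16, ParseIntError> {
    let s = s.trim_start_matches("0x").trim_start_matches("0X");
    u16::from_str_radix(s, 16)
}

fn main() {
    assert_eq!(parse_hex_u16("0x04b4"), Ok(0x04b4));
    assert_eq!(parse_hex_u16("00f1"), Ok(0x00f1));
    println!("ok");
}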
