
Sinara FMC and VHDCI support #148

Closed · jbqubit opened this issue Feb 1, 2017 · 38 comments

Comments

@jbqubit (Collaborator) commented Feb 1, 2017

Recap of discussion in this Issue.

Physical resources

I/O options offered by ARTIQ

  • TTL_PHY (~8 ns resolution, 1 pin/ea)
  • SERDES_TTL_PHY (~1 ns resolution, 1 pin/ea)
    • SERDES_TTL_PHY is a superset of TTL_PHY
  • SPI_PHY ([SCLK, MOSI, MISO, CS], 4 pins/ea)
    • An SPI master PHY and a SERDES TTL PHY can't easily be muxed without incurring an SPI speed penalty
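As a quick illustration of the pin budget implied by this list (pin counts taken from the options above; the helper is hypothetical, not ARTIQ code):

```python
# Pin cost per PHY instance, as listed above (hypothetical helper, not ARTIQ code).
PINS_PER_PHY = {
    "TTL_PHY": 1,          # ~8 ns resolution, 1 pin each
    "SERDES_TTL_PHY": 1,   # ~1 ns resolution, 1 pin each
    "SPI_PHY": 4,          # SCLK, MOSI, MISO, CS
}

def pins_used(config):
    """Total pins consumed by a configuration, e.g. {"TTL_PHY": 4, "SPI_PHY": 1}."""
    return sum(PINS_PER_PHY[phy] * count for phy, count in config.items())

# An 8-pin extension group can hold either 8 TTLs or 4 TTLs plus one SPI bus.
assert pins_used({"SERDES_TTL_PHY": 8}) == 8
assert pins_used({"TTL_PHY": 4, "SPI_PHY": 1}) == 8
```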

default support

to be determined by this Issue...

@jbqubit (Collaborator, Author) commented Feb 1, 2017

I'm pulling together the list of features that are expected to be supported in v0.1 software and gateware for Sinara #139. This is intended to be a minimal list of what /will/ be implemented, not a wish-list. One decision to be made is how to use the FMC on PCB_Metlino and PCB_Sayma_AMC.

Proposal:

@jbqubit (Collaborator, Author) commented Feb 1, 2017

@dtcallcock says

Using the FMC-VHDCI card and VHDCI carrier would presumably allow reuse of Kasli extensions and gateware for some of the above (TTL/ADC/DAC/CameraLink etc).

I like this suggestion. FMC-VHDCI is more extensible than the Creotech board and fits better with the Kasli 3U ecosystem.

@jbqubit jbqubit added this to the 0.1 board support milestone Feb 1, 2017
@jordens (Member) commented Feb 1, 2017

Using the FMC-VHDCI card and VHDCI carrier would presumably allow reuse of Kasli extensions and gateware for some of the above (TTL/ADC/DAC/CameraLink etc).

It is not so much about gateware reuse. It is about limiting the number of bitstreams: there are a bunch of problems with supporting dynamic reconfiguration of what e.g. one extension connector on the VHDCI board does. E.g. we can't really switch between a TTL and a SPI without having separate bitstreams.

@hartytp (Collaborator) commented Feb 1, 2017

@jordens Good point. I'd naively assumed that I'd be able to run the DAC extension board from the VHDCI carrier if I need to. Forgive me if it's a stupid question, but what prevents us from muxing the SPI and TTL in gateware?

@hartytp (Collaborator) commented Feb 1, 2017

answering my own question: given the number of different ways one could wire up multiple SPI busses, anything sufficiently flexible to be genuinely useful is too messy/complex to be practical to implement...

@jordens (Member) commented Feb 1, 2017

It is not impossible, but there are many unknowns.
Let's assume we restrict the possible functionalities of a given pin to one of "a TTL I/O, a fixed SPI signal, a fixed I2C signal, a fixed CameraLink signal, [insert more here]". Then yes, we could mux it, assuming that (a) it still works timing-wise without introducing registers there and (b) we have the logic resources to have all those RTIO PHYs around. Muxing with the high-resolution SERDES-based TTLs would be trickier again (quite some gateware and clocking; it definitely increases latency, which can be problematic for SPI).
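As a toy behavioral model of this (plain Python, not migen gateware; all names are hypothetical): per-pin muxing means every candidate function must exist in the design, while a selector chooses which one actually drives the pin.

```python
# Toy model of per-pin function muxing (illustrative only, not gateware).
class Pin:
    def __init__(self, functions):
        # All candidate PHYs must be instantiated in the bitstream,
        # even though only one is selected at any time.
        self.functions = dict(functions)      # name -> output callable
        self.selected = next(iter(functions)) # default selection

    def select(self, name):
        self.selected = name                  # runtime switch, no recompile

    def drive(self, *args):
        return self.functions[self.selected](*args)

pin = Pin({"ttl": lambda level: level, "spi_sclk": lambda phase: phase & 1})
assert pin.drive(1) == 1        # behaves as a TTL output
pin.select("spi_sclk")
assert pin.drive(3) == 1        # now behaves as an SPI clock phase
# Resource cost: len(pin.functions) PHYs built, only 1 usable at a time.
assert len(pin.functions) == 2
```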

@jbqubit (Collaborator, Author) commented Feb 1, 2017

@gkasprow Please confirm the extent to which the FMC is VITA 57.1 compliant?
It's a High Pin Count (HPC) connector.

@dtcallcock (Member)

How about if we fixed a pinout for a couple of combinations like 'all TTLs' and '1xSPI + TTLs' and just supported muxing in gateware between those? That would cover most use cases and only add a modest number of PHYs. Then we'd just accept that CameraLink, SERDES TTLs, and non-standard pinouts would need a different bitstream.

@gkasprow (Member) commented Feb 1, 2017 via email

@jordens (Member) commented Feb 1, 2017

@dtcallcock sure. we could consider restricting the extension usage the following way:
EXT0: 8xSERDES-TTL I/O or 1xCameraLink
EXT1: 8xSERDES-TTL I/O or 1xCameraLink
EXT2: 2x(4xTTL I/O or 1xSPI)
EXT3: 2x(4xTTL I/O or 1xSPI)
...
(I hope the syntax is intuitive)

It still doubles the number of PHYs and requires agreement about exactly how we pin-out an SPI bank.
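The doubling can be made concrete with a small sketch (the bank/option table is transcribed from the comment above; the counting is illustrative, not actual gateware):

```python
# Each bank offers two mutually exclusive configurations; both PHY flavours
# must be present in the bitstream, but only one is active at runtime.
BANKS = {
    "EXT0":   ["8xSERDES-TTL I/O", "1xCameraLink"],
    "EXT1":   ["8xSERDES-TTL I/O", "1xCameraLink"],
    "EXT2.a": ["4xTTL I/O", "1xSPI"],   # EXT2 = 2x(4xTTL I/O or 1xSPI)
    "EXT2.b": ["4xTTL I/O", "1xSPI"],
}

built = sum(len(alts) for alts in BANKS.values())   # PHY blocks in the bitstream
active = len(BANKS)                                 # PHY blocks usable at once
assert built == 8 and active == 4                   # exactly double
```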

@hartytp (Collaborator) commented Feb 1, 2017

@jordens How bad is doubling the number of PHYs? Where are we on FPGA resources?

@jordens (Member) commented Feb 1, 2017

For what I outlined you need twice as many PHYs as you can actually use: every pin can be used by just one of two PHYs.
I don't know where we are on resources. One would have to find some time and funding to explore this.

@dtcallcock (Member)

@jordens Is 2x(1xSPI) compatible with the plan from #142 of driving two SPI DACs with 2xCLK, 2xMOSI, 1xMISO, 2xSYNC, 1xLDAC? I'm thinking the shared MISO would break this.

@hartytp (Collaborator) commented Feb 1, 2017

@jordens Is 2x(1xSPI) compatible with the plan from #142 of driving two SPI DACs with 2xCLK, 2xMOSI, 1xMISO, 2xSYNC, 1xLDAC? I'm thinking the shared MISO would break this.

and, more generally, if an SPI PHY uses 4 pins, how is LDAC handled?

@jordens (Member) commented Feb 1, 2017

@dtcallcock Yes. That would not work. It is one reason why we should not try to bend "standard SPI" too much. 4xTTL (for LDAC and a couple of additional CS) + 1xSPI would work.

@hartytp (Collaborator) commented Feb 1, 2017

@jordens point taken. The extra DAC isn't worth the pain of having to maintain a custom bitstream. We'll go to one DAC then.

@hartytp (Collaborator) commented Feb 6, 2017

@jordens

Let's assume we restrict the possible functionalities of a given pin to one of "a TTL I/O, a fixed SPI signal, a fixed I2C signal, a fixed CameraLink signal, [insert more here]"

EXT0: 8xSERDES-TTL I/O or 1xCameraLink
EXT1: 8xSERDES-TTL I/O or 1xCameraLink
EXT2: 2x(4xTTL I/O or 1xSPI)
EXT3: 2x(4xTTL I/O or 1xSPI)

It still doubles the number of PHYs

  • To clarify: the proposal is to only support SPI, TTL & CameraLink directly from Kasli, right? I2C etc are supported by extension boards that communicate with Kasli via SPI.

  • I'm happy with this proposal (2 CameraLink per Kasli is plenty IMO).

  • Will we support something similar for Metlino VHDCI? (e.g. it would be nice to be able to run some SPI busses directly from the Metlino VHDCI carrier).

  • How hard would it be to support multiplexed SERDES-TTL & SPI (rather than standard TTL and SPI)? For Metlino VHDCI, it would be nice to have lots of SERDES-TTL, but still have the option of running a few SPI busses.

and requires agreement about exactly how we pin-out an SPI bank.

This is something we should agree on before any SPI extension boards get made. What would your preferred pin-out be?

@jordens (Member) commented Feb 6, 2017

To clarify: the proposal is to only support SPI, TTL & CameraLink directly from Kasli, right? I2C etc are supported by extension boards that communicate with Kasli via SPI.

This was arbitrary. And just for four extensions. But yes, if there are good SPI-to-I2C converters (SC18IS600), then why not. Or one could look at making the SPIMaster core so generic that it can just as well talk I2C. Like the serial protocol engines in modern µCs. But it's quite some work.

I'm happy with this proposal (2 CameraLink per Kasli is plenty IMO).

One CameraLink would likely occupy two extension headers.

Will we support something similar for Metlino VHDCI? (e.g. it would be nice to be able to run some SPI busses directly from the Metlino VHDCI carrier).

Sure. Same problem there.

How hard would it be to support multiplexed SERDES-TTL & SPI (rather than standard TTL and SPI)? For Metlino VHDCI, it would be nice to have lots of SERDES-TTL, but still have the option of running a few SPI busses.

It costs a handful of cycles of latency, i.e. SPI reads would be even slower than they are now. The coding required is not gigantic. It is more about tedious corner cases and attempting to get resource usage and timing closure lined up for these highly multiplexed designs.

This is something we should agree on before any SPI extension boards get made. What would your preferred pin-out be?

The exact order doesn't matter much. SCLK, MOSI, MISO, CS seems to be frequent.
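For reference, the role of each of the four pins in a transfer can be shown with a minimal software model of an SPI mode-0 byte exchange (plain Python; this illustrates the protocol only, not how the SPI_PHY gateware is implemented):

```python
def spi_transfer(mosi_byte, slave_byte):
    """Bit-bang one full-duplex SPI mode-0 byte exchange, MSB first (software model)."""
    miso_byte = 0
    cs = 0                                  # CS asserted (active low) for the whole frame
    for bit in range(7, -1, -1):
        mosi = (mosi_byte >> bit) & 1       # master sets up MOSI while SCLK is low
        miso = (slave_byte >> bit) & 1      # slave sets up MISO
        # rising SCLK edge: both sides sample the data lines
        miso_byte = (miso_byte << 1) | miso
    cs = 1                                  # deassert CS at end of frame
    return miso_byte

# Full-duplex: the master clocks one byte out while clocking one byte in.
assert spi_transfer(0xA5, 0x3C) == 0x3C
```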

@hartytp (Collaborator) commented Feb 6, 2017

But yes, if there are good SPI-to-I2C converters (SC18IS600), then why not. Or one could look at making the SPIMaster core so generic that it can just as well talk I2C. Like the serial protocol engines in modern µCs. But it's quite some work.

Let's not bother with I2C for now. If it becomes a problem then we can look into upgrading SPIMaster later.

One CameraLink would likely occupy two extension headers.

Sorry my mistake. Thanks for clarifying.

It costs a handful of cycles of latency, i.e. SPI reads would be even slower than they are now. The coding required is not gigantic. It is more about tedious corner cases and attempting to get resource usage and timing closure lined up for these highly multiplexed designs.

Okay, let's not bother with this for now, and stick to something along the lines of your original suggestion.

Will we support something similar for Metlino VHDCI? (e.g. it would be nice to be able to run some SPI busses directly from the Metlino VHDCI carrier).

Sure. Same problem there.

Maybe, use one VHDCI exclusively for SERDES IO, and multiplex the other one?

@jbqubit jbqubit changed the title Sinara FMC support for v0.1 Sinara FMC and VHDCI support for v0.1 Feb 6, 2017
@jbqubit (Collaborator, Author) commented Feb 6, 2017

Issue title updated to reflect need to decide on support for both FMC and VHDCI. I started to summarize conclusions about this Issue in the leading post.

Metlino VHDCI anticipates applications that a) require a very low-latency, high-bandwidth link to the ARTIQ core device (Metlino) or b) benefit from convenient reuse of 3U peripherals via the VHDCI Carrier. CameraLink does not fit this profile for the following reasons.

  • The maximum pixel clock rate for CameraLink is 85 MHz, i.e. 255 MB/s (single link)
  • CameraLink is pin-hungry (≥16 pins)
  • The fanciest camera on the market (that I know of) is the Andor Ultra 888
    • its readout rate is still slow relative to DRTIO: 697 frames per second for a 128x128 sub-array
    • at best ~10 ns timing accuracy
  • Gateware-based image processing is likely required for feedback within the qubit coherence time. As this gateware is likely to be application-specific, it shouldn't be part of the standard Metlino or Sayma .bit files.
    Better to design a CameraLink 3U peripheral for Kasli, do the image processing on Kasli, and transmit results back to the ARTIQ core device using DRTIO.
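The 255 MB/s figure follows from the base (single-link) configuration carrying 24 data bits per pixel clock; a quick check of the numbers quoted above (arithmetic only; the frame-period bound is illustrative):

```python
# Sanity check of the CameraLink numbers quoted above.
pixel_clock_hz = 85e6
bits_per_clock = 24                                   # base/single-link configuration
assert pixel_clock_hz * bits_per_clock / 8 == 255e6   # 255 MB/s

# At 697 fps the frame period is ~1.43 ms, i.e. slow compared with DRTIO latencies.
frame_period_ms = 1e3 / 697
assert 1.4 < frame_period_ms < 1.5
```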

A broader comment is that we don't have to anticipate all future uses of FMC/VHDCI this winter. Let's settle on something simple for now. Based on discussion thus far and assuming CameraLink is dropped I propose the following. Call it jbqubit_p1.

gateware configuration options

Configuration options for groups of 8 I/O.

  • let c1 be 8 of SERDES_TTL_PHY
  • let c2 be 4 of TTL_PHY and 1 of SPI_PHY
  • let c3 be whatever is needed to enable 4 channel SFP breakout

VHDCI

PCB_Metlino VHDCI 1
- EXT0, EXT1, EXT2, EXT3 each independently configurable as c1 or c2
- I2C
PCB_Metlino VHDCI 2
- EXT0, EXT1, EXT2, EXT3 each independently configurable as c1 or c2
- I2C

FMC

PCB_Metlino HPC FMC

  • configurable as d1 or d2
  • I2C

PCB_Sayma_AMC LPC FMC

  • configured as d2
  • I2C
  • NOTE: d1 is not supported since SFP breakout requires HPC FMC
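Tallying the VHDCI side of this proposal as a sanity check (group size and counts taken from the definitions above; the helper names are hypothetical):

```python
# jbqubit_p1 channel budget for the Metlino VHDCI connectors (sketch).
GROUP_SIZE = 8            # I/O per EXT group
EXT_PER_VHDCI = 4         # EXT0..EXT3
VHDCI_PER_METLINO = 2     # VHDCI 1 and VHDCI 2

io_per_vhdci = GROUP_SIZE * EXT_PER_VHDCI
assert io_per_vhdci == 32
# With every group in c1 (8x SERDES_TTL_PHY): 64 fast TTL channels in total.
assert io_per_vhdci * VHDCI_PER_METLINO == 64
# With every group in c2 (4x TTL_PHY + 1x SPI_PHY): 8 SPI buses + 32 TTLs in total.
spi_buses = EXT_PER_VHDCI * VHDCI_PER_METLINO
assert spi_buses == 8
```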

@jbqubit (Collaborator, Author) commented Feb 6, 2017

@hartytp Would c2 permit operation of Zotino using VHDCI Carrier? I've not followed exactly how Zotino uses the SPI LDAC line.

5xLVDS lines used for DAC SPI bus: CLK, MOSI, MISO, SYNC, LDAC

@jordens (Member) commented Feb 6, 2017

Let's not bother with I2C for now. If it becomes a problem then we can look into upgrading SPIMaster later.

I'll still bother with I2C. We need it.

Maybe, use one VHDCI exclusively for SERDES IO, and multiplex the other one?

One should probably collect use cases and determine priorities...

@hartytp (Collaborator) commented Feb 6, 2017

@jbqubit : Would c2 permit operation of Zotino using VHDCI Carrier? I've not followed exactly how Zotino uses the SPI LDAC line.

yes

@jordens : I'll still bother with I2C. We need it.
One should probably collect use cases and determine priorities...

That sounds like a good idea...

@jbqubit (Collaborator, Author) commented Feb 6, 2017

Does somebody have an imminent use case not covered by the proposal I made earlier today? We don't have to anticipate all future uses of FMC/VHDCI, just enough to get Sinara off the ground. It's gateware, so accommodating additional use cases later remains possible.

@jmizrahi @jasonamini is what I proposed compatible with your many-channel PMT-readout scheme?

@gkasprow (Member) commented Feb 7, 2017 via email

@jbqubit (Collaborator, Author) commented Feb 7, 2017

Let's move CameraLink discussion to #156.

@jbqubit (Collaborator, Author) commented Feb 7, 2017

@jordens said

I'll still bother with I2C. We need it.

How is I2C used?

@jbqubit (Collaborator, Author) commented Feb 13, 2017

How is I2C used?

The wiki answers my question. "I2C from each VHDCI is routed to TCA9548 8-ch multiplexer on VHDCI Carrier."
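A small helper (hypothetical, for illustration) shows how routing is selected on that multiplexer: the TCA9548 takes a single control byte over I2C in which bit n enables channel n.

```python
def tca9548_control_byte(channels):
    """Control-register value for a TCA9548 I2C mux: bit n enables channel n.
    (Illustrative helper; the device receives this byte in a plain I2C write.)"""
    byte = 0
    for ch in channels:
        if not 0 <= ch <= 7:
            raise ValueError("TCA9548 has channels 0-7")
        byte |= 1 << ch
    return byte

assert tca9548_control_byte([3]) == 0x08       # route I2C to channel 3 only
assert tca9548_control_byte([0, 7]) == 0x81    # channels may be enabled together
assert tca9548_control_byte([]) == 0x00        # all channels disconnected
```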

@jbqubit jbqubit changed the title Sinara FMC and VHDCI support for v0.1 Sinara FMC and VHDCI support Feb 16, 2017
@jbqubit (Collaborator, Author) commented Feb 16, 2017

@jordens @sbourdeauducq Is my proposed specification from 10 days ago acceptable? If so please close Issue.

@jordens (Member) commented Feb 16, 2017

@jbqubit It is unwise and unsound IMO but certainly acceptable. Closing.
Your description of and reasoning about CameraLink are flawed, though. If you don't want to use it or pay for it, that's fine. But apart from the technical errors, I don't see how you can seriously explain to us how we would want to use CameraLink and argue against us doing so.

@jordens jordens closed this as completed Feb 16, 2017
@jbqubit (Collaborator, Author) commented Feb 16, 2017

It might help to reiterate the intended scope of this Issue.

I'm pulling together the list of features that are expected to be supported in v0.1 software and gateware for Sinara #139. This is intended to be a minimal list of what /will/ be implemented not a wish-list.

I don't want to discourage others from developing other interfaces including CameraLink (#156). The present discussion is intended to come up with a simple baseline specification for the FMC and VHDCI IO.

It is unwise and unsound IMO but certainly acceptable.

What aspects of what I proposed (jbqubit_p1) cause you concern?

@jordens (Member) commented Feb 16, 2017

@jbqubit Who is your target audience here? Is this a specification that you plan to have implemented or would you like someone else to fund this?

Please consolidate and clean this up w.r.t. the proposals and data elsewhere (e.g. in #129, #149).

From the top of my head:

  • As explained elsewhere, we can't easily mux the SPI master PHY and a SERDES TTL PHY without incurring an SPI speed penalty.
  • SERDES TTL PHY is a superset of TTL PHY. One would be wasting user, FPGA, development, funding resources.
  • FMC HPC is a superset of FMC LPC. Distinguishing them means wasting resources.
  • Runtime support for different extensions and FMC cards requires the same IO standard and type (differential, single ended). Someone needs to verify compatibility.

@jbqubit (Collaborator, Author) commented Feb 21, 2017

@jbqubit Who is your target audience here?

My target audience is M-Labs, who are implementing v0.1 support for the Sinara hardware.

As explained elsewhere, we can't easily mux the SPI Master PHY and a SERDES TTL PHY without incurring a SPI speed penalty.

I don't recall where this conversation is at the moment. Can you link to it? Is there an estimate as to the speed penalty?

FMC HPC is a superset of FMC LPC. Distinguishing them means wasting resources.

PCB_Metlino and PCB_Sayma_AMC are distinct targets and will already have their own .bit files. I see no harm in supporting FMC HPC for PCB_Metlino and FMC LPC for PCB_Sayma_AMC.

SERDES TTL PHY is a superset of TTL PHY. One would be wasting user, FPGA, development, funding resources.

OK. The VHDCI on PCB_Metlino is directly connected to the Metlino FPGA. This is the primary low-latency (tens of ns) IO in the Sinara system, so it is desirable to use SERDES_TTL_PHY for the VHDCI IO. The [VHDCI Carrier](https://github.com/m-labs/sinara/wiki/VHDCI%20carrier) is intended to provide IO breakout for use with [EEM](https://github.com/m-labs/sinara/wiki/EEM) peripherals.

Runtime support for different extensions and FMC cards requires the same IO standard and type (differential, single ended). Someone needs to verify compatibility.

Good point. I provided two concrete examples of FMC cards: one for PCB_Metlino and one for PCB_Sayma_AMC. Please plan support for each in the respective .bit files. Accommodation of other FMC cards can be addressed down the line.

In light of your comments I propose a variation as follows labeled: jbqubit_p2. Define configuration options for groups of 8 I/O.

  • let c1 be 8 of SERDES_TTL_PHY
  • let c2 be 4 of TTL_PHY and 1 of SPI_PHY

PCB_Metlino

VHDCI 1

  • supported hardware: VHDCI Carrier
  • EXT0, EXT1, EXT2, EXT3 configured to toggle between c1 and c2
  • I2C

VHDCI 2

  • supported hardware: VHDCI Carrier
  • EXT0, EXT1, EXT2, EXT3 configured to toggle between c1 and c2
  • I2C

FMC

  • supported hardware: 4 channel SFP breakout FMC board
  • allocate remaining IO as EXT0, ..., EXTN configured to toggle between c1 and c2

PCB_Sayma_AMC

FMC

  • supported hardware: FMC LPC Creotech FMC to VHDCI adapter with gateware
  • supported hardware: VHDCI link to VHDCI Carrier
  • EXT0, EXT1, EXT2, EXT3 configured to toggle between c1 and c2
  • I2C

@jbqubit jbqubit reopened this Feb 21, 2017
@gkasprow (Member) commented Feb 21, 2017 via email

@jordens (Member) commented Feb 23, 2017

Again: SERDES_TTL_PHY is a superset of TTL_PHY. Toggling between them is idiotic.
Toggling between SPI and SERDES TTL will cost at least a handful of cycles of latency for input and output. That's two handfuls of cycles of slowdown per SPI bit when reading. And again, it is unclear whether we have the FPGA resources.
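To put rough numbers on this cost (both the 8 ns cycle time and the size of a "handful" are assumptions for illustration, not measured ARTIQ figures):

```python
# Rough SPI read-time estimate with and without mux latency (illustrative numbers).
CYCLE_NS = 8            # assumed RTIO coarse clock period
HANDFUL = 5             # assumed extra cycles each way through the mux

def spi_read_ns(bits, cycles_per_bit, extra_per_bit=0):
    """Total read time for `bits` bits at the given per-bit cycle counts."""
    return bits * (cycles_per_bit + extra_per_bit) * CYCLE_NS

plain = spi_read_ns(8, cycles_per_bit=2)
muxed = spi_read_ns(8, cycles_per_bit=2, extra_per_bit=2 * HANDFUL)  # out + back per bit
assert muxed > plain
# e.g. 128 ns vs 768 ns for one 8-bit read with these assumed numbers
assert (plain, muxed) == (128, 768)
```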

@jbqubit (Collaborator, Author) commented Feb 23, 2017

Toggling between SPI and SERDES TTL will at least give you a handful of cycles latency for input and output.

The intent of supporting "toggling" between c1 and c2 is a) to permit attachment of either fast TTL hardware or TTL+SPI hardware and b) to compile only a single .bit covering a range of use cases. The duty cycle for switching any given EXTn from c1 to c2 is weeks or months, so if I understand your statement about latency, it's OK if toggling takes many microseconds.

@jordens This is what you proposed several weeks ago.

@dtcallcock sure. we could consider restricting the extension usage the following way:
EXT0: 8xSERDES-TTL I/O or 1xCameraLink
EXT1: 8xSERDES-TTL I/O or 1xCameraLink
EXT2: 2x(4xTTL I/O or 1xSPI)
EXT3: 2x(4xTTL I/O or 1xSPI)

And again, it is unclear whether we have the FPGA resources.

@jordens Do you mean the Sayma or Metlino FPGA? Do you have

@sbourdeauducq What are the resource requirements of SERDES_TTL_PHY and SPI_PHY with a FIFO depth of, say, 128?

Since the intent is for the Kasli Carrier and VHDCI Carrier to be interchangeable, we need to square this away. I expect the Kasli implementation will also support multiple IO configurations in a single .bit for each of the 8 EEM ports. Configurations based on planned peripherals include 8x SERDES_TTL_PHY (e.g. BNC DIO breakout) and xx TTL_PHY + SPI_PHY (e.g. Zotino, Novogorny).

@jordens (Member) commented Feb 24, 2017

No Joe. Being able to toggle is a speed penalty for SPI (always).
Either FPGA.

@jbqubit (Collaborator, Author) commented Feb 24, 2017

--Updated at 7:40pm GMT--
Based on discussion and conversation with Robert.

  1. Supporting the ability to toggle between SERDES_TTL_PHY and TTL_PHY/SPI_PHY incurs a latency penalty. Toggling between TTL_PHY/SPI_PHY does not incur a latency penalty.

  2. SERDES_TTL_PHY is somewhat more FPGA-resource intensive. Metlino has more available FPGA resources than Sayma.

  3. The ARTIQ core device is Metlino, and at present all experiment branching occurs on the core device, so low-latency Sinara IO is available only on Metlino.

  4. Thirty-two channels of low-latency, ~1 ns-resolution IO are sufficient for near-term use cases.

  5. Implementation of Kasli Carrier IO interface requires few FPGA resources since Kasli FPGA has few resources by design.

  6. Based on existing EEM peripheral designs, the Kasli Carrier IO interface takes two forms. <!-- @jordens: create a new Issue for this -->

  • 8 TTL_PHY plus 1 I2C_PHY
    • BNC DIO breakout
    • SMA DIO breakout
    • RJ45 LVDS breakout
  • 4 TTL_PHY, 1 SPI_PHY plus 1 I2C_PHY
    • Zotino, Novogorny, Mirny, Urukul, Sary, Berdenish
  7. PCB_Sayma_AMC FMC may be populated by a "dumb" FMC-VHDCI adapter. A Sinara system has at least one PCB_Sayma_AMC, so this approach permits Sinara to drive the VHDCI Carrier.
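The two interface forms in point 6 can be captured as a small lookup (a sketch; the board-to-form mapping is transcribed from the list above, and the helper name is hypothetical):

```python
# The two Kasli EEM interface forms from point 6 (sketch).
EEM_FORMS = {
    "dio": {"TTL_PHY": 8, "I2C_PHY": 1},                 # BNC/SMA/RJ45 breakouts
    "spi": {"TTL_PHY": 4, "SPI_PHY": 1, "I2C_PHY": 1},   # Zotino, Novogorny, ...
}

def form_for(board):
    """Pick the interface form for a peripheral board named in the list above."""
    dio_boards = {"BNC DIO breakout", "SMA DIO breakout", "RJ45 LVDS breakout"}
    return "dio" if board in dio_boards else "spi"

assert form_for("BNC DIO breakout") == "dio"
assert form_for("Zotino") == "spi"
```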

Call the following variation jbqubit_p3.

PCB_Metlino

VHDCI 1 and VHDCI 2

  • EXT0, EXT1, EXT2, EXT3 configured as 8 of SERDES_TTL_PHY
  • I2C
  • supported hardware via VHDCI Carrier:
    • BNC DIO breakout
    • SMA DIO breakout
    • RJ45 LVDS breakout

FMC

PCB_Sayma_AMC

FMC

  • EXT0, EXT1, EXT2, EXT3 configured to permit toggling of each EXTn between the following configurations
    • 8 TTL_PHY
    • 4 TTL_PHY, 1 SPI_PHY
  • I2C
  • supported hardware

@hartytp hartytp closed this as completed May 14, 2017
5 participants