"Smart" backplane for EEM/Kasli ecosystem #1

Closed · dhslichter opened this issue Jul 12, 2017 · 75 comments

@dhslichter

Moving my comment from #204 to a new issue:

Might it be better in the long haul to have Kasli's functionality integrated directly on the backplane? You would have a slim "dumb" card that brings the SFP and coax connections to the front panel and connects them to the backplane via a high-performance but lower-pin-count connector suitable for gigabit signals. Then the Artix FPGA, clock distribution, etc. all live on the backplane, and you don't need crazy fat connectors just to route all the EEM signals and clocks off the Kasli and onto the backplane. This would also free up more front-panel space, because adding 96-DIN mezzanines to Kasli will obviously make it wider, which may be undesirable for some.

The only downside I can see is that if the FPGA on the backplane goes bad for some reason, it's more work to swap it out, but this seems like it should not be a very common occurrence.

@gkasprow

gkasprow commented Jul 12, 2017

@dhslichter Take into account that to place such a big FPGA on the backplane, you need at least 12 layers, and such a big 19" board would be hellishly expensive. Moreover, DIN connectors are THT, which means you cannot place a BGA underneath them, so some slots would be wasted (at least 12-14 HP).
It's much cheaper and more convenient (for maintenance) to keep the backplane simple and make Kasli complex.
We don't have to use a DIN96 connector for Kasli. We can easily use any type of backplane connector, e.g. the 170-pin AMC one we use in Sayma/Metlino. We can stack them to get 340 or even 680 pins, and they are designed for high-speed signalling. These backplane connectors are cheap, and on the Kasli side you don't need any connector at all (just PCB edge fingers for the 170 pins).
We can make the current Kasli compatible with them.
Look here.

@dhslichter

ack @gkasprow, sounds very logical.

@gkasprow

gkasprow commented Nov 10, 2017

There is an interesting initiative here: http://easy-phi.ch/
They use a different approach, based on an active backplane with USB as the main interface.
They have already done plenty of work, and it would be nice to find a common denominator between our EEM and their backplane. An example board schematic is here; on page 2, in the upper right corner, there is the DIN connector pinout.
I see potential in marrying our solution with theirs. In the case of the backplane, we can simply use the pins which are grounded for our LVDS lanes.
That way the backplane would support both DRTIO boards and non-DRTIO boards with a USB interface, where a simple controller based on a USB hub or some RPi/Beaglebone does the job. For DRTIO extensions we simply plug in a Kasli. Or we can mix the two kinds of boards within one crate.
We could build all planned hardware to be compliant with this approach.
It would be nice to combine our efforts with theirs and not duplicate the work.
5HP is not an issue; one can simply place 1HP dummy panels in between ours.
This is just an idea, nothing more.

gkasprow reopened this Nov 10, 2017
@jordens

jordens commented Nov 10, 2017

Pretty sure that's been dead for a while. I chatted with them back in '15 but haven't seen any movement since then.

@gkasprow

@jordens at least we can reuse some ideas for slow control things :)

@jordens

jordens commented Nov 10, 2017

Yep.

@gkasprow

@jordens I met someone from UniGe at the quantum engineering workshop in London who is using this Easy-phi stuff. They contacted me and it seems the project is still alive.

@jordens

jordens commented Jan 18, 2018

The connection between the panel board and the smart backplane could be 4x SATA plus a ribbon for all of USB/power/SFP management/LEDs, and maybe an MMCX-SMA pigtail for the clock.

@gkasprow

The question is whether we really want a smart backplane.
It is the part that is most difficult to replace and also the most expensive.
It's much easier to break an active backplane, while a passive one can only be broken mechanically.
I'd keep the backplane dumb and put all the smartness into Kasli or another controller.
Today we had a discussion with @marmeladapk and he asked a simple question: why do we really need this backplane at all? I know it looks nice and we get rid of the cable nest, but we add cost: all boards must be bigger to reach the backplane, so they will be more expensive; the backplane itself does not come for free; and we limit slot availability if we use 8HP boards. We can of course make some slots 8HP and some 4HP.
Another aspect is crate size. Users have different crates: some use table-top crates, others prefer 19" rack-mount ones. We would need dedicated backplanes for these as well.

If we really want to go for a backplane, it would be nice to keep some compatibility with existing EEM boards.
For the short boards it's easy: just add an extender with an IDC socket and a backplane connector.

We can do the same for the existing Kasli: extend the IDC cables to a backplane connector attached directly to the other end of each IDC cable. That connector has 64 pins (the 96-pin version also has only 64 connected pins), so it would split across 2 EEMs. But this means we need 4 additional such backplane IDC connectors!

If we go for the 96-pin connectors, we would need only two of them, but a dedicated adapter board would be needed to translate eight 30-pin IDCs to two straight DIN 41612 connectors.
Another issue is clock routing.

Yet another option, which does not use any cables, is to add two mezzanines to Kasli, with clock distribution and female connectors, such that:

  • the first one mates with 4 EEMs and the second backplane connector,
  • the second one mates with the next 4 EEMs and the third backplane connector.

In this way we keep compatibility with all existing boards.
Urukul, Sampler and Mirny would need to use 8 HP slots and dedicated mezzanines (EEM-to-backplane adapters).

@jordens

jordens commented Jan 19, 2018

I agree with that.

@hartytp

hartytp commented Jan 19, 2018

@gkasprow I agree completely: while the idea of a BP sounds nice, I'm not sure we'd use one even if it were available:

  • Maybe I'll change my mind once I've used these boards more, but I don't think the ribbon cables will be that much of a pain. They should be fine from an SI/PI perspective.
  • Even a passive BP will be more expensive, and more of a fiddle to mount and stock, than the current solution.
  • Kasli is already quite limited in terms of IO, so we cannot afford to waste any. However, we have boards with different widths and different numbers of EEM connectors required. I can't see any efficient way of ensuring that all Kasli IO can be used from the BP in a sensible way (e.g. not involving lots of unused space in the rack).
  • A BP is unlikely to be compatible with many case sizes. A lot of our Kaslis will be in relatively small crates which are unlikely to have a BP, so we will still have the IDC cables there.
  • Boards would have to be bigger to mate with the BP connector.

I'm going to (perhaps optimistically) close this issue for now. My reason is that there does not currently seem to be a clear plan for the BP, or even necessarily a clear use case for it. Moreover, no one is currently taking the lead on it, and I'm not sure we have the resources to think about it properly. My suggestion is that we keep this closed until someone has the time/motivation to take an active lead on it.

hartytp closed this as completed Jan 19, 2018
@dhslichter

Following on, I agree with the above: at this point the backplane does not seem to provide important benefits, and it introduces a lot of extra complication and cost. The big selling points of the Kasli ecosystem are low cost, flexibility, adaptability, and simplicity, and the backplane idea runs counter to all of them.

@gkasprow

For future reference: there is a COTS backplane that seems to fit our needs perfectly, the CompactPCI Serial standard. In its most basic configuration it has 8 LVDS pairs, I2C, 12V and a management power rail, and it is a 3U Euro crate. But it supports only 8 cards. The modules are 4 HP.
A short introduction is here.
Of course it will be more expensive than the existing solution, mainly due to the connectors.

@jordens

jordens commented Dec 4, 2018

Let's continue the cPCI discussion here.
A couple more pieces of info from Schroff and Hartmann.

And some general info on backplane design: article

Connectors and presentation

CERN DIOT project with two presentations and the DIOT 24 IO board design

gkasprow transferred this issue from sinara-hw/sinara Dec 4, 2018
gkasprow reopened this Dec 4, 2018
@gkasprow

gkasprow commented Dec 4, 2018

> Management/Standby power seems to be 5V instead of 3.3V.

That's not a big deal; a tiny 3.3V regulator does the job.

> @gkasprow do I understand that correctly that you want to use the 4 Ethernet, 2 USB3 and 2 SATA pairs as the second EEM connector on the "dual EEMs"? You would not use the fat pipe on the first two slots?

If we want to keep the same pinout between slots, it's a must.
We won't use this backplane for PCIe anyway, so who cares.
If we use only the first two slots with the double-width fat pipe, we still have no use for a 2-EEM slot.

> I think there is one "slot 9: dual EEM" missing.

I didn't count slot 0 with the controller :)
Let's number them as in the specification, starting from 1.

> How much would a custom backplane cost? Is that a 20 layer board in the 5k range or more like 1k?

It depends strongly on quantity, but I don't see a strong reason to do it.
We don't need a full-mesh topology, only a single-star configuration. FR4 would also work; there is no need for 10G support. The price would be dominated by the connectors, not the PCB. 10 layers would be sufficient.

> IMO if we go down this route, we should just settle on 4 HP and try really hard to stick with the number of diff pairs easily available. That's 12 diff pairs (4 lane PCIe, 1 USB3, 1 SATA) for ports 3-8 and 20 pairs for ports 1-2.

Add the 4 Ethernet pairs to every slot, plus 1 USB 2.0.
I'm in favour of such a 4HP approach. If someone wants Sampler with BNCs, they can add an IDC-to-BNC converter and use 2 neighbouring slots.
I wouldn't change the number of LVDS lanes too much, and I would keep signal compatibility with existing boards.
If we use only 4HP modules, we can simply assign dual EEM to slots 2-5 and single EEM to slots 6-9.
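
As a quick sanity check of that pair budget, here is a minimal back-of-the-envelope script. It only uses numbers quoted in this thread (8 LVDS pairs per EEM, the per-slot pair groups listed above, the 4 reusable Ethernet pairs, and the proposed dual/single EEM split); the variable names and the slot loop are illustrative assumptions, not anything defined by the CPCI Serial specification.

```python
# Rough per-slot pair-budget check for the proposed 4HP scheme.
# All counts are taken from the discussion above; nothing here comes from the spec.

PAIRS_PER_EEM = 8             # LVDS pairs used by one EEM connector
BASE_PAIRS = 8 + 2 + 2        # 4-lane PCIe + USB3 + SATA pairs per peripheral slot
ETHERNET_PAIRS = 4            # Ethernet pairs reusable for a second EEM

for slot in range(2, 10):     # peripheral slots; the controller sits in slot 1
    eems = 2 if slot <= 5 else 1          # proposal: dual EEM in slots 2-5, single in 6-9
    need = eems * PAIRS_PER_EEM
    have = BASE_PAIRS + (ETHERNET_PAIRS if eems == 2 else 0)
    print(f"slot {slot}: {eems} EEM(s), need {need} pairs, "
          f"have {have} -> {'ok' if have >= need else 'short'}")
```

With these numbers the dual-EEM slots come out exactly even (16 pairs needed, 16 available), which is why reusing the Ethernet pairs is a must if the pinout is to stay the same between slots.
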
cPCI Serial also has one interesting feature: the same boards, equipped with shells that dissipate heat, can be used in a space environment, plugged into a CPCI Serial Space backplane. That standard adds a second, redundant controller and dual CAN support, and all connectivity is dual star. I have the spec but haven't analysed it in detail.

> Sidenote: interestingly, Schroff supports connecting the chassis to the backplane ground and isolating backplane and chassis gnd. I.e. connecting chassis and signal/power ground is not at all required or the only option.

In satellites and high-reliability systems, isolation is a must.

@jordens

jordens commented Dec 4, 2018

The rear IO stuff would also fit perfectly with the idea of having BNC breakouts towards the back, e.g. for Sampler, DIO BNC, Zotino or Banker.

I am OK with using the Ethernet star from Kasli for four more diff pairs. The only downside is that it requires another connector and is physically far away from J1.

@gkasprow

gkasprow commented Dec 4, 2018

We would use it only on boards that require a second EEM.
I'm a bit sceptical about using rear IO for analogue signals. There are other high-density connector standards that fit within 4HP: SMB, SMC or micro BNC. We can always use BNC pigtails, which can be cheap when ordered in large quantities.
@jordens
Any idea about cPCI pricing for a typical crate with a power supply?

@dtcallcock

> There are other high density connector standards that fit within 4HP: SMB, SMC or micro BNC. We can always use BNC pigtails, which can be cheap when ordered in large quantities.

My preference would be to scrap BNC across the whole of Sinara, as it's just too big for Eurocard panels IMHO. If we do that, then I think we should pick one connector to replace it (and also to replace SMA in places where high performance isn't required).

MMCX would be an obvious choice, as we are already using it for the Sayma v2 AFE analog inputs and the Stabilizer aux digital inputs. However, MMCX is a bit on the small side and easy to disengage compared to SMB/MCX, so one might worry about mechanical robustness. SMC, micro BNC and LEMO all seem somewhat less ubiquitous and more expensive. My slight preference would be for MCX, but it would be good to hear from people with more experience.

I'm not totally sure how you would isolate this kind of connector from the front panel, though (just a clearance hole?).

Perhaps this should be a separate issue ('[RFC] Death to BNC'?).

@dtcallcock

One thing I worry about is whether there will be broad support and funding from the current user base to convert 10-20 boards from EEM to cPCI. Whilst it looks like it could be a nicer system in many ways, it doesn't add any killer new features, and the lack of backwards compatibility with already-purchased hardware would be painful to swallow.

If the project were well funded by its own grant, that would probably make it more palatable. There does seem to be a market: for example, Innsbruck's Quantum Flagship effort specifically has "control electronics developed to commercial level" as one of its milestones.

@sbourdeauducq

> My preference would be to scrap BNC across the whole of Sinara as it's just too big for Eurocard panels IMHO.

I disagree; many groups have small systems and there is plenty of space for BNCs in an 84 HP crate.

Yes, the backplane isn't very interesting and the backward-compatibility problems are tough (and what if we need to replace or add a board in an existing EEM system?).

@gkasprow

gkasprow commented Dec 5, 2018

Backward compatibility would be ensured by installing both EEM and cPCI connectors; the EEM connectors would not be mounted by default. Of course one would not be able to use previous-generation modules with the backplane.
To continue supporting 8HP panels, an LVDS bus mux on Kasli would do the job (to some extent) without losing the EEM signals of unused slots.
The nice thing about the existing system is that one can entirely fill a 19" rack with a mix of 8 and 4HP modules and utilise all 12 EEM channels without losing a single one. I'm not sure if that is really important.
We have some ideas about adapting existing Sinara HW to space applications, and a backplane is a must in that case, so funding would be provided. And if the whole community could benefit from that, it would be great.

@gkasprow

gkasprow commented Dec 5, 2018

The cPCI connectors are press-fit mounted, so they can be installed after the boards are assembled; the user can choose which connectivity they want when ordering.

@sbourdeauducq

> Backward compatibility would be ensured by installing both EEM and cPCI connectors.

There is still the issue with Kasli.

> The nice thing about the existing system is that one can entirely fill a 19" rack with a mix of 8 and 4HP modules and utilise all 12 EEM channels without losing a single one. I'm not sure if that is really important.

Yes, that's very nice.
Also, the cables work with all enclosures and without the frustrating issues that tend to pop up at every occasion in mechanical systems (for example: when mounting Schroff front-panel screws on some other brands, due to their length they cannot be inserted fully and do not hold the panel properly).

@jordens

jordens commented Dec 5, 2018

@dtcallcock @sbourdeauducq That line of argument doesn't carry much weight for me. It's like comparing pdq and Shuttler. Shuttler also doesn't add any killer feature over pdq and is fully backwards-incompatible. But Shuttler is a much better design that addresses the problems with pdq.
The backplane is extremely interesting and, AFAICT, the only way forward, for the numerous reasons mentioned.
BNC can go. There would be patch and breakout panels for people wanting low-density connectors.

@sbourdeauducq

You cannot compare one NIST-internal design with a full line of products that is deployed in many dozens of labs.
The backplane is not the only way forward, as demonstrated by the successful deployment of the ribbon-cable systems.

@jordens

jordens commented Dec 7, 2018

Yes. Using an existing backplane, whether CERN's or a standard one, is very valuable.

What minimum number of layers does the connector require on Kasli and on the EEMs?

@gkasprow

gkasprow commented Dec 7, 2018

In our case 4 layers would do the job.

@gkasprow

gkasprow commented Jan 3, 2019

CPCIS uses a shared I2C bus that goes through all modules. In the Kasli ecosystem we use a different approach, with an I2C mux/switch. We want to keep the backplane passive, so we have to choose a different approach.
The CPCIS standard defines a few control lines:

  • WAKE_OUT# - a common, open-collector wired-AND signal used to generate interrupts
  • RST# - an active-low global reset signal
  • PCIE_EN# - a module-present/PCIe-capable signal, active low, dedicated to each module

I propose using PCIE_EN# to enable the I2C bus switch. Normally this signal is pulled low by a 220R resistor on the module. The controller detects module presence using a weak pull-up; it then drives the line high, enabling the I2C bus switch and connecting the EEM I2C resources to the common I2C bus for module identification.
CPCIS also defines a common SPI bus with geographical addressing, but so far I see no use case for it.
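
A minimal sketch of that discovery sequence as described above; the Slot class, function names and slot numbering are illustrative assumptions for this thread, not an existing Sinara or CPCIS API:

```python
# Hypothetical model of the PCIE_EN#-based module discovery proposed above.
# Nothing here is an existing API; it only illustrates the proposed logic.

class Slot:
    """One peripheral slot. A plugged-in module ties PCIE_EN# to GND
    through its 220R resistor; an empty slot leaves the line floating."""

    def __init__(self, occupied):
        self.occupied = occupied
        self.i2c_switch_enabled = False

    def sense_with_weak_pullup(self):
        # The controller's weak pull-up loses against the module's 220R
        # pull-down, so a present module reads low (False) and an empty
        # slot reads high (True).
        return not self.occupied

    def drive_high(self):
        # The controller then actively drives PCIE_EN# high, enabling the
        # module's I2C bus switch and attaching its EEM I2C resources to
        # the common backplane I2C bus.
        if self.occupied:
            self.i2c_switch_enabled = True


def enumerate_modules(slots):
    """Return the numbers of occupied slots, enabling each module's I2C
    switch so it can then be identified over the common bus."""
    present = []
    for number, slot in sorted(slots.items()):
        if not slot.sense_with_weak_pullup():   # line reads low: module present
            slot.drive_high()                   # connect it to the common I2C bus
            present.append(number)
    return present


if __name__ == "__main__":
    crate = {n: Slot(occupied=n in (2, 3, 6)) for n in range(2, 10)}
    print("modules detected in slots:", enumerate_modules(crate))
```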

@gkasprow

gkasprow commented Jan 3, 2019

I sketched a schematic of the cPCIS adapter with the proposed signal assignment:
CPCIS_EEM_Adapter.PDF

@jordens

jordens commented Jan 3, 2019

Nice! New repository?
Aren't the global addressing (GA) signals driven by the backplane? Why do you ground them here?
Is it 3.3 V I2C even though it is 5 V MP?

@gkasprow

gkasprow commented Jan 3, 2019

True!
I copied it from the CERN design.
They are driven by the backplane.
I2C is 3.3V according to the specification.

@jordens

jordens commented Jan 3, 2019

I meant why do you short GA2 and GA3 to GND if the backplane drives them?

@gkasprow

gkasprow commented Jan 3, 2019

This is what I mean. They should be inputs.

gkasprow transferred this issue from sinara-hw/meta Jan 3, 2019
@gkasprow

I've ordered this chassis. I will make sure the connectors are symmetrical and produce the adapter board. I will also switch the card depth from 160 to 220mm. The guys at CERN claim that it is possible.

@jordens

jordens commented Jan 11, 2019

Nice. The chassis core is just the plain old standard; the front and back rails come with the longer cPCI lips, and the middle rails can be moved in 60mm steps. Get the longer plastic rails; they seem to be available in 220 mm with cPCI coding. I guess you won't have much space in the back anymore, which could be a problem for the power connector; the 160 mm power supply might need some hackery to connect to the backplane, and the cooling will focus on the front 160 mm.

@gkasprow

I have 80mm at the back, so there is space for an RTM option.
The supply has a dedicated backplane, so we can mount two sets of rails and leave the existing plastic rails.
In this way, the supply backplane and plastic rails would stay on the 160mm rails, while the rest would be attached to additional rails installed at 220mm depth.

@gkasprow

gkasprow commented Jan 11, 2019

Anyway, I will verify it with real HW. Delivery will be within 4 weeks.

@gkasprow

When I look at the crate rear side, the holes do not seem to be placed evenly every 60mm.
[photo: crate rear side]
Anyway, who says we have to use 220mm length? :)

@gkasprow

Some ideas came to my mind. Together with CERN, we plan to develop a CPCIS controller based on Kasli, but with an additional FMC interface and a Zynq. With a quad-SFP FMC plugged in, we get the essential Kasli functionality. But we will have to switch to the FFG900 package, which means we will have more transceivers (16). Eight GTX will be routed to the FMC to support the existing 8x SFP FMC. The other eight GTX can be routed to the backplane, so we can reserve some CPCIS slots as DRTIO-capable, and such slots can be populated with DRTIO-connected modules as well.
We will also develop an 84HP backplane with 8HP slots, and we can place DRTIO-capable slots between the 8HP slots. In this way, one could combine 8HP modules like Sampler with advanced 4HP DRTIO modules and standard 4HP EEM modules. Up to 16 different modules could be installed.
Just an idea to discuss.

@gkasprow

@jordens @dtcallcock What do you think about this idea?
I specified it here.

@gkasprow

gkasprow commented Mar 26, 2019

Update:

  • I agreed with CERN on a common specification for the open-source backplane. It will be a simple design with the signals necessary for EEM cards. Boards can be 4HP (with spacers) or 6HP. Two standard power supplies are supported; CERN will design the radiation-hard power supply.
  • The backplane can be installed at 160 or 220mm depth.
  • My CPCIS crate is on its way; I should get it in a few days.
  • I'm thinking about a slightly different adapter design that would enable the backplane while keeping the 160mm card length. The idea is to make 4 adapters:
  1. standard, with EEM signals
  2. active, with a PCIe or USB bridge, so the whole ecosystem of modules could be used natively in a CPCIS crate with a standard x86 CPU
  3. as above, but with a SpaceWire bridge
  4. a simple passive adapter compatible with the 220mm assembly

@gkasprow

gkasprow commented May 27, 2019

After a long discussion, we agreed with CERN on a specification for the backplane that fulfills all needs.
The contract for the backplane design is almost in place.
The board will be designed in KiCad. It will be a simple, low-cost board that fits low-cost chassis but is fully compatible with CPCIS. The slot spacing is 6HP; if there is enough interest, we can also make a 4 HP or 8 HP variant, since the schematics will be the same.
Btw, a few weeks ago I received my CPCIS crate.

@gkasprow

More or less final component placement. The adapter is mounted on top of Urukul.
[photos: adapter mounted on top of Urukul]

@gkasprow

Some photos of the real HW. The front panel is 3D-printed for the moment.
One detail needs fixing: the slot between the upper and lower shield.
[photos of the assembled hardware, 2020-06-16]

@gkasprow

The entire shield/cover is low-cost laser-cut; just a few EUR for the set.

@hartytp

hartytp commented Jun 17, 2020

Looks lovely! What's the cost of retrofitting that to an EEM? How does that compare to redoing the EEMs as native CPCIS (not suggesting we should do that right now, just curious)?

@hartytp

hartytp commented Jun 17, 2020

Oops, never mind, I just saw your previous post. Is all that really just a few EUR? That seems very cheap.

@gkasprow

If we do it as native CPCIS, we would get rid of 4 connectors and 2 standoffs. The NRE cost would also be slightly reduced, because it would be only one board instead of two. But if we produce the adapters and shields in high quantities, the difference will be negligible. The most expensive parts are the CPCIS connectors anyway (10 + 9 EUR). I will improve the shield design to eliminate the nasty-looking slots and reduce the stress on the mezzanine connectors during extraction; the entire stress during insertion is already redirected by the shield.

@hartytp

hartytp commented Jun 17, 2020

Cool! It would be awesome to see an ARTIQ crate up and running with CPCIS.

@gkasprow

I'm back to business so expect it soon.

@gkasprow

We finished the design of the completely open-source CPCIS backplane and chassis. It was made with KiCad. It supports the RTM option as well.

@gkasprow

The backplane can easily be tailored to specific use cases, and the same goes for the chassis.

@marmeladapk

Closing - no action needed.
