"Smart" backplane for EEM/Kasli ecosystem #1
@dhslichter Take into account that to place such a big FPGA on the backplane, you need at least 12 layers. And such a big 19" board would be hellishly expensive. Moreover, DIN connectors are THT, which means you cannot place a BGA under them, so some of the slots would be wasted. At least 12..14 HP. |
ack @gkasprow, sounds very logical. |
There is interesting initiative here: http://easy-phi.ch/ |
Pretty sure that's been dead for a while. I chatted with them back in '15 but haven't seen any movement since then. |
@jordens at least we can reuse some ideas for slow control things :) |
Yep. |
@jordens I met a man from Unige at the quantum engineering workshop in London, who is using this Easy-phi stuff. They contacted me and it seems the project is still alive. |
The connection between the panel board and the smart backplane could be 4x SATA plus a ribbon for all of USB/Power/SFP-Management/LEDs, and maybe an MMCX-SMA pigtail for the clock. |
The question is whether we really want a smart backplane. If we really want to go for a backplane, it would be nice to keep some compatibility with existing EEM boards. We could do the same with the existing Kasli: extend the IDC cables to a backplane connector attached directly to the other end of each IDC cable. It has 64 pins (the 96-pin version also has 64 connected pins), so it would split into 2 EEMs. But this means we need 4 additional such backplane IDC connectors! If we go for the 96-pin ones, we would need only two of them, but a dedicated adapter board would be needed to translate 8 30-pin IDCs to two straight DIN 41612 connectors. Yet another option that does not use any cables is to add to Kasli two mezzanines with clock distribution and female connectors that:
In this way we keep compatibility with all existing boards |
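As a sanity check on the connector counts mentioned above, here is a quick pin-budget sketch. It assumes each EEM port is a 30-pin IDC (as on existing Kasli); the constants are taken from the thread, the partitioning arithmetic is just illustrative.

```python
# Rough pin-budget check for the IDC-to-backplane adapter idea.
# Assumptions: each EEM port is a 30-pin IDC; the backplane IDC
# connector has 64 usable pins (the 96-pin DIN 41612 variant also
# has only 64 connected pins, per the discussion above).

EEM_PORTS = 8        # a fully populated Kasli
PINS_PER_EEM = 30    # 30-pin IDC per EEM port
DIN64_PINS = 64      # usable pins per backplane connector

# How many EEMs fit on one 64-pin connector, and how many
# connectors a full Kasli would therefore need.
eems_per_conn = DIN64_PINS // PINS_PER_EEM   # 2 EEMs per connector
conns_needed = EEM_PORTS // eems_per_conn    # 4 connectors total

print(eems_per_conn, conns_needed)  # -> 2 4
```

This matches the comment above: two EEMs per 64-pin connector, hence four extra backplane connectors for a full Kasli.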
I agree with that. |
@gkasprow I agree completely: while the idea of a BP sounds nice, I'm not sure we'd use one even if it were available:
I'm going to (perhaps optimistically) close this issue for now. My reason for doing this is that currently there does not seem to be a clear plan for the BP, or even necessarily a clear use case for it. Moreover, no one is currently taking the lead on it and I'm not sure we have the resources to think about it properly. My suggestion is that we keep this closed until someone has the time/motivation to take an active lead on this. |
Following on, I agree with the above that the backplane does not seem to provide important benefits, and introduces a lot of extra complication and cost, at this point. The big points of the Kasli ecosystem are low cost, flexibility, adaptability, and simplicity, and the backplane idea runs counter to all of them. |
For future reference - there is a COTS backplane that seems to fit our needs perfectly - the CompactPCI Serial standard. In its most basic configuration it has 8x LVDS pairs, I2C, a 12V rail and a management power rail. And it is a 3U Euro crate. But it supports only 8 cards. The modules are 4 HP. |
Let's continue the cPCI discussion here. Also some general info: a backplane design article, the CERN DIOT project with two presentations, and the DIOT 24 IO board design
that's not a big deal, a tiny 3.3V regulator does the job.
If we want to keep the same pinout between slots, it's a must.
I didn't count slot 0 with the controller :)
it depends strongly on quantity, but I don't see a strong reason to do it.
add 4 Ethernet pairs to every slot + 1 USB 2.0
In satellites and high reliability systems isolation is a must. |
The rear IO stuff would also perfectly fit the idea of having BNC breakouts towards the back for e.g. Sampler. Or DIO BNC or Zotino or Banker. I am OK with using the Ethernet star from Kasli for four more diff pairs. The only downside is that it requires another connector and is physically far away from J1. |
we would use it only on boards that require second EEM. |
My preference would be to scrap BNC across the whole of Sinara as it's just too big for Eurocard panels IMHO. If we do that then I think we should pick one connector to replace it (and also replace SMA in some places where high performance isn't required). MMCX would be an obvious choice as we are already using it for the Sayma v2 AFE analog inputs and Stabilizer aux digital inputs. However, they are a bit on the small side and easy to disengage compared to SMB/MCX, so one might worry about mechanical robustness. SMC/Micro BNC/LEMO all seem somewhat less ubiquitous and more expensive. My slight preference would be for MCX, but it would be good to hear from people with more experience. I'm not totally sure how you would isolate this kind of connector from the front panel though (just a clearance hole?). Perhaps this should be a separate issue ('[RFC] Death to BNC'?). |
One thing I worry about is whether there will be broad support and funding from the current user base to convert 10-20 boards from EEM to cPCI. Whilst it looks like it could be a nicer system in many ways, it doesn't add any killer new features, and the lack of backwards compatibility with already-purchased hardware would be painful to swallow. If the project were well funded by its own grant, that would probably make it more palatable. There seems to be a market as, for example, Innsbruck's Quantum Flagship effort specifically has "control electronics developed to commercial level" as one of its milestones. |
I disagree; many groups have small systems and there is plenty of space for BNCs in an 84 HP crate. Yes, the backplane isn't very interesting and the backward compatibility problems are tough (and what if we need to replace or add a board in an existing EEM system?). |
backward compatibility would be ensured by installing both EEM and cPCI connectors. The EEM connectors would not be mounted by default. Of course, one would not be able to use previous-generation modules with the backplane. |
The cPCI connectors are press-fit mounted, so they can be installed after the boards are assembled; the user can choose which connectivity they want when ordering. |
There is still the issue with Kasli.
Yes, that's very nice. |
@dtcallcock @sbourdeauducq That line of argument doesn't carry much weight for me. It's like comparing pdq and Shuttler. Shuttler also doesn't add any killer feature over pdq and is fully backwards incompatible. But Shuttler is a much better design that addresses the problems with pdq. |
You cannot compare one NIST internal design with a full line of products that is deployed in many dozens of labs. |
Yes. Using an existing backplane - CERN or standard - is very valuable. What minimum number of layers does the connector require on Kasli and on the EEMs? |
In our case 4 layers would do the job. |
The CPCIS standard uses a shared I2C bus that goes through all modules. In the Kasli ecosystem we use a different approach, with an I2C mux/switch. We want to keep the backplane passive, so we have to choose a different approach.
I propose using PCIE_EN to enable the I2C bus switch. Normally this signal is pulled low by a 220R resistor. The controller detects module presence using a weak pull-up. Then it pulls the line up, enabling the I2C bus switch and connecting the EEM I2C resources to the common I2C bus for module identification. |
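The proposed PCIE_EN sequence can be sketched as follows. This is a pure-Python simulation of the electrical logic, not firmware for real hardware; the resistor values other than the 220R pull-down, and all function names, are hypothetical.

```python
# Sketch of the proposed presence-detect / I2C-switch-enable scheme,
# simulated in plain Python. On the module side, PCIE_EN is pulled
# low through a 220R resistor. The controller senses the line via a
# weak pull-up: with a module plugged in, the strong 220R pull-down
# wins and the line reads low. The controller then drives the line
# high to enable the module's I2C bus switch, attaching the EEM I2C
# resources to the shared bus for identification.

WEAK_PULLUP_R = 10_000   # controller-side weak pull-up (hypothetical)
MODULE_PULLDOWN_R = 220  # module-side pull-down, from the proposal

def module_present(plugged: bool) -> bool:
    """Presence detect: 220R pull-down overpowers the weak pull-up."""
    if not plugged:
        return False  # line floats high through the weak pull-up
    # Resistor divider: line sits near GND when 220R << pull-up.
    level = MODULE_PULLDOWN_R / (MODULE_PULLDOWN_R + WEAK_PULLUP_R)
    return level < 0.5  # reads as logic low -> module present

def drive_pcie_en(plugged: bool) -> bool:
    """Drive PCIE_EN high (enable the I2C switch) only on detection."""
    return module_present(plugged)

print(module_present(True), module_present(False))  # True False
```

The point of the scheme is that the backplane itself stays passive: all the intelligence (sense, then drive) lives in the controller, and an unpowered or absent module leaves the shared I2C bus untouched.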
I sketched a schematic of the cPCIS adapter with the proposed signal assignment. |
Nice! New repository? |
True! |
I meant why do you short GA2 and GA3 to GND if the backplane drives them? |
This is what I mean. They should be inputs. |
I've ordered this chassis. I will make sure the connectors are symmetrical and produce the adapter board. I will also switch the card depth from 160 to 220mm. The guys at CERN claim that it is possible. |
Nice. The chassis core is just the plain old standard; the front and back rails have the longer cPCI lips, and the middle rails are movable in 60mm steps. Get the longer plastic rails - they seem to be available in 220mm with cPCI coding. I guess you won't have much space in the back anymore, which could be a problem for the power connector; the 160mm power supply might need some hackery to connect to the backplane, and the cooling will focus on the front 160mm. |
I have 80mm on the back - there is space for an RTM option. |
Anyway, I will verify it with real HW. Delivery will be within 4 weeks. |
Some ideas came to my mind. We plan to develop, together with CERN, a CPCIS controller based on Kasli, but with an additional FMC interface and a Zynq. With a quad-SFP FMC plugged in, we will get the essential Kasli functionality. But we will have to switch to the FFG900 package, which means we will have more transceivers (16). 8 GTX will be routed to the FMC to enable the 8x SFP FMC that already exists. The other 8 GTX can be routed to the backplane. We can reserve some CPCIS slots as DRTIO-capable. And such slots can be equipped with DRTIO-connected modules as well. |
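The transceiver budget described above is easy to tabulate. A minimal sketch (the totals come from the comment; the slot naming is hypothetical):

```python
# Transceiver budget for the proposed FFG900-based CPCIS controller.
# 16 GTX total: 8 routed to the FMC (existing 8x SFP FMC), 8 routed
# to the backplane as DRTIO-capable CPCIS slots.

TOTAL_GTX = 16
gtx_to_fmc = 8        # drives the existing 8x SFP FMC
gtx_to_backplane = 8  # one per DRTIO-capable slot

# Hypothetical slot map: backplane GTX lanes assigned to slots 1..8.
drtio_slots = {f"slot{n}": f"GTX{gtx_to_fmc + n - 1}"
               for n in range(1, gtx_to_backplane + 1)}

assert gtx_to_fmc + gtx_to_backplane == TOTAL_GTX
print(len(drtio_slots))  # 8 DRTIO-capable slots
```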
@jordens @dtcallcock what do you think about such an idea? |
Update:
After a long discussion, we agreed with CERN on the specification for the backplane that fulfills all needs. |
The entire shield/cover is low-cost laser cut. Just a few EUR for a set. |
Looks lovely! What's the cost of retrofitting that to an EEM? How does that compare to redoing the EEMs as native CPCIS (not suggesting we should do that right now, just curious)? |
oops, never mind, just saw your previous post. Is all that really just a few EUR? That seems very cheap. |
If we do it as native CPCIS, we would get rid of 4 connectors and 2 standoffs. NRE cost would also be slightly reduced because it would be only one board instead of two. But if we produce the adapters and shields in high quantities, the difference would be negligible. The most expensive parts are the CPCIS connectors anyway (10 + 9 EUR). I will improve the shield design to eliminate the nasty-looking slots and reduce the stress on the mezzanine connectors during extraction. The entire stress during insertion is already redirected by the shield. |
Cool! It would be awesome to see an ARTIQ crate up and running with CPCIS. |
I'm back to business so expect it soon. |
We finished the design of the completely open-source CPCIS backplane and chassis. It was made with KiCad. It supports the RTM option as well. |
The backplane can easily be tailored to specific use cases. The same goes for the chassis. |
Closing - no action needed. |
Moving my comment from #204 to a new issue:
Might it be better in the long haul to have Kasli's functionality actually just integrated on the backplane? You would have a slim "dumb" card that brings the SFP and coax connections to the front panel and connects them to the backplane via some high-performance but lower-pin-count connector suitable for gigabit signals. Then the Artix FPGA, clock distribution, etc. all live on the backplane, and you don't need crazy fat connectors just to route all the EEM signals and clocks off the Kasli and onto the backplane. This would also free up more front panel space, because if you are adding 96-pin DIN mezzanines to Kasli, this will obviously make it wider, which may be undesirable for some.
The only downside I can see is that if the FPGA on the backplane goes bad for some reason, it's more work to swap it out... but this seems like it should not be a very common occurrence.