(RTL) Schematic view
See (RTL) IDE
As discussed in 111#issuecomment-230159008, GHDL itself can be used to generate a tree structure describing the hierarchy of the design, but it is not so straightforward to get the connections (which are required to generate a netlist). I don't know whether pyVHDLParser would be better at this task.
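To make the tree-vs-netlist distinction concrete, here is a minimal sketch of the data model such a parser's output could be mapped into. All names (`Instance`, `Connection`, the example entities) are hypothetical, not part of GHDL's or pyVHDLParser's actual API: the point is that the hierarchy alone (names plus children) is the easy part, while the per-port connections are the extra information a netlist needs.

```python
# Hypothetical data model for a design hierarchy. The tree (name,
# entity, children) is what GHDL can already produce; the Connection
# entries are the missing piece needed for a netlist.
from dataclasses import dataclass, field

@dataclass
class Connection:
    port: str      # formal port name on the instance
    signal: str    # actual signal it is wired to

@dataclass
class Instance:
    name: str
    entity: str
    connections: list[Connection] = field(default_factory=list)
    children: list["Instance"] = field(default_factory=list)

# Toy example: a top level containing a single counter instance.
top = Instance("top", "counter_top", children=[
    Instance("u_cnt", "counter", connections=[
        Connection("clk", "clk_i"),
        Connection("q", "count_s"),
    ]),
])

def netlist_edges(inst: Instance):
    """Flatten the hierarchy into (instance, port, signal) edges."""
    for c in inst.connections:
        yield (inst.name, c.port, c.signal)
    for child in inst.children:
        yield from netlist_edges(child)
```

Whatever tool ends up extracting the connections, something shaped like `netlist_edges` is what a drawing library (ELK, OGDF, ...) would consume.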
Even if the netlist is retrieved, drawing is an issue in itself. @Nic30 mentioned ELK (Eclipse Layout Kernel). @flip111 mentioned a known library, OGDF.
@flip111 In both the IDE setting and the documentation setting, I think it would be useful to build visualizations in such a way that they allow you to explore the design.
I quite disagree here. I think it'd be very useful for PDF versions of the HTML documentation to be predictable, so images/schematics in the documentation should be static. Indeed, kevinpt/symbolator can be a good reference. I think the interactivity in the documentation should be limited to cross-references, and maybe some fancy effects: expandable/collapsible elements (trees, navbars, sidebars), tabs, modals, dialogs...
The visualization in the IDE would be the really interactive (and editable) one, which would require JavaScript. This could be similar to Vivado's Block Design, RTLVision, etc.
Of course, a link can be added to the symbol in the documentation in order to open the interactive version.
@flip111 I imagine looking at parts of your design and then clicking on the edges to start showing the other parts they are connected to. If you want to do this, the visualization library needs to be fast enough that you don't have to wait long when your design gets big. I'm not sure which techniques have good hardware acceleration. All the libraries use either WebGL, canvas, SVG or HTML (in the order I expect them to be, from fastest to slowest). For the moment, though, it would be a waste of time to cook up a custom WebGL solution or something like that. Better to go with an already existing solution (d3.js, as Nic30 suggested, or mxGraph, or whatever). But it's interesting to see how these libraries respond when you load a big design into them.
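The "click an edge to reveal connected parts" idea is independent of the rendering library: it only needs a backend that can answer neighbour queries per block. A minimal sketch, with a made-up in-memory model (`DesignGraph`, `connect`, `neighbours` are illustrative names, not any real API):

```python
# The full connection graph stays in the backend; the viewer only asks
# for the neighbours of a block when the user clicks one of its edges,
# so the per-click payload stays small no matter how big the design is.
from collections import defaultdict

class DesignGraph:
    def __init__(self):
        self._adj = defaultdict(set)

    def connect(self, a: str, b: str):
        # Undirected connection between two blocks.
        self._adj[a].add(b)
        self._adj[b].add(a)

    def neighbours(self, block: str) -> set:
        # The only query the frontend needs per click.
        return self._adj[block]

g = DesignGraph()
g.connect("cpu", "bus")
g.connect("bus", "uart")
g.connect("bus", "ddr_ctrl")
# Clicking an edge of "bus" reveals its three neighbours.
```

Whether the frontend then draws with WebGL, canvas or SVG is a separate choice; this query shape works with any of them.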
We can learn tricks from digital graphics design (demoscene, animation films, video games...): you don't need to draw reality on the screen, just fake it well enough to make it look real. Any hardware design has far more details than a human can 'see' at a glance. That is, if a block is collapsed, or the zoom is far enough out, there is no need to draw it. So the full data structure should be handled in the backend, and the frontend can request/remove details as blocks are expanded/collapsed, added/removed, etc. Of course, some caching can be implemented so you can expand/collapse the same block multiple times without feeling the possible latency each time. But I think you get the idea.
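The caching part of this can be sketched in a few lines. Everything here (`BlockView`, `fetch_details`, the geometry string) is made up for illustration; the point is just that collapsing and re-expanding the same block should never hit the backend twice:

```python
# Frontend-side cache for block details: the first expansion fetches
# from the backend, later expansions reuse the cached payload, and
# collapsing a block merely stops drawing it.
class BlockView:
    def __init__(self, fetch_details):
        self._fetch = fetch_details   # backend call (assumed expensive)
        self._cache = {}
        self.fetches = 0              # counter, to show the effect

    def expand(self, block: str):
        if block not in self._cache:
            self.fetches += 1
            self._cache[block] = self._fetch(block)
        return self._cache[block]

    def collapse(self, block: str):
        # Keep the cached details; just stop drawing them.
        pass

view = BlockView(lambda b: f"<geometry of {b}>")
view.expand("alu")
view.collapse("alu")
view.expand("alu")
# Two expansions, but only one backend fetch.
```

A real implementation would also want cache eviction for very large designs, but the expand/collapse round-trip saving is the core of the idea.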