
Decouple Triangulation objects from integration loop (WIP) #732

Closed
amartinhuertas opened this issue Dec 14, 2021 · 4 comments
amartinhuertas commented Dec 14, 2021

Discussion resulting from meeting 14/12/2021 @santiagobadia - @amartinhuertas

Please do not read yet; this is work in progress until the issue becomes more definitive.

Note: Draft issue description to be completed/reviewed by @santiagobadia. Once completed, it has to be discussed carefully with @fverdugo, who has devoted profound thought to this in a variety of different scenarios. This will most probably lead to breaking changes in Gridap.jl's kernel if we decide to go ahead.

After some discussion, we have the impression that the current approach for transforming the high-level API integrals into low-level lazy_map operation trees is too rigid/not flexible enough to express more general discretization methods, e.g., HDG methods.

The first concern we have with the current approach is related to how we currently conceive Triangulation objects in Gridap.jl. They are not just "geometrical" objects; they also implicitly encode information required to support the implementation of the integration loop. For example, a SkeletonTriangulation is more than a triangulation of the internal facets of the model; it also provides the necessary mechanisms to implement DG methods assuming a facet-wise integration loop, e.g., how facets are glued to neighbouring cells.

To be more general, we believe it is key to segregate two concepts that are now somehow mixed into Triangulation objects.

  1. The geometrical objects on which we want to evaluate the integrals. We refer to the topological dimension of these objects as
    D_M. These objects are conceptually independent of the integration loop. For example, a DG method requires evaluating facet integrals, regardless of whether
    one implements a loop over the facets or over the cells. The same applies to HDG methods, although in practice one wants
    to perform a loop over the cells so that static condensation can be performed.

  2. The geometrical objects that we want to traverse during the integration loop. We refer to the topological dimension of these objects as D_L.

In most cases, the objects of 1. and 2. match, but not necessarily. One can always evaluate integrals over objects of dimension D_M while walking through objects of dimension D_L, with D_M <= D_L. (The question, of course, is whether such flexibility is
advantageous; it turns out that it is in some cases, e.g., for HDG methods, where D_M = 1 and D_L = 2, assuming we are in 2D.)
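As a language-agnostic illustration of the D_M < D_L idea (plain Julia, not Gridap code), the same set of facet integrals can be accumulated either by looping directly over the facets, or by looping over the cells and visiting each cell's local facets, halving the contribution of interior facets since they are seen from two cells:

```julia
# Toy illustration: integrate f over the interior vertical facets of an
# n-by-n grid on the unit square, in two ways.
f(x, y) = x + y

# (a) D_L == D_M == 1: loop directly over the interior vertical facets,
# approximating each facet integral with midpoint quadrature.
function facet_loop(n)
    h = 1 / n
    s = 0.0
    for i in 1:n-1, j in 1:n      # facet at x = i*h, y in ((j-1)h, jh)
        s += f(i * h, (j - 0.5) * h) * h
    end
    s
end

# (b) D_L == 2 > D_M == 1: loop over the cells; each cell visits its
# left/right facets, so interior facets are seen twice and their
# contributions are weighted by 1/2.
function cell_loop(n)
    h = 1 / n
    s = 0.0
    for i in 1:n, j in 1:n
        for xf in ((i - 1) * h, i * h)  # left/right facet of cell (i,j)
            0 < xf < 1 || continue      # skip boundary facets
            s += 0.5 * f(xf, (j - 0.5) * h) * h
        end
    end
    s
end

facet_loop(5) ≈ cell_loop(5)  # both traversals yield the same sum
```

The cell-wise traversal (b) is what HDG-style methods want, since all facet contributions of a given cell become available together for static condensation.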

In the high-level API, we have terms of the form ∫(integrand)*dΩ, where dΩ is of type Measure and integrand
"is-a" CellDatum instance, typically of a type extending CellField (although not necessarily). dΩ is not actually a CellDatum, but it holds a CellQuadrature instance inside, which "is-a" CellDatum as well. The elements in a CellDatum (e.g., integration rule points or finite element basis functions) are associated with the "cells" of a Triangulation{Dc,Dp} instance,
i.e., the geometrical objects of topological dimension Dc of the model from which the triangulation was built (or a subset of them).

A possible approach to address the concern above is to transfer from Triangulation to Measure the responsibility/logic associated with the loop traversal and the required transformation operations. At present, Measure is a light-weight object that is basically there for syntactic-sugar purposes. To this end, we may need to add additional traits to Measure, to be defined in the constructor, e.g., the topological dimension and the set of geometrical entities over which we want to loop (to think about in more detail).
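A minimal sketch of what such traits could look like (these names are NOT existing Gridap API; just a hypothetical illustration in plain Julia) would encode both dimensions as type parameters of the measure:

```julia
# Hypothetical sketch: a Measure that carries as traits both the
# topological dimension of the integrated objects (DM) and that of the
# objects traversed by the integration loop (DL).
struct SketchMeasure{DM,DL,Q}
    quad::Q  # quadrature data, one rule per traversed entity
end

integrated_dim(::SketchMeasure{DM,DL}) where {DM,DL} = DM
loop_dim(::SketchMeasure{DM,DL}) where {DM,DL} = DL

# DG-style facet measure in 2D: facet integrals, facet-wise loop.
dΓ_dg = SketchMeasure{1,1,Nothing}(nothing)

# HDG-style facet measure in 2D: facet integrals, cell-wise loop,
# so that static condensation can be performed cell by cell.
dΓ_hdg = SketchMeasure{1,2,Nothing}(nothing)

(integrated_dim(dΓ_hdg), loop_dim(dΓ_hdg))  # (1, 2)
```

With such traits, the lowering of ∫(integrand)*dΩ to a lazy_map tree could dispatch on DM and DL instead of on the kind of Triangulation alone.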

Note that, apart from the geometrical objects in 1. and 2., we have the geometrical objects on which integrand is defined. Let us label these objects as 3. In the general case, integrand might be a complex operation tree involving CellDatums defined over geometrical objects of different topological dimensions. This has to be taken care of (and is indeed currently being taken care of) in order to correctly transform ∫(integrand)*dΩ into a low-level lazy_map operation tree that leads to the local matrices/vectors. This transformation clearly depends on the objects in 1., 2., and 3.

The second concern we have is related to the way this transformation is currently performed. Whenever we have an operation among two CellField objects (e.g., the multiplication of test and trial functions), the code builds a new OperationCellField out of a transformation of them into a common domain, as far as this transformation is possible according to a set of rules (see _to_common_domain(a::CellField...) for full details). This transformation occurs bottom-up (from the leaves to the root of the operation tree), each time a new OperationCellField instance is built out of CellField operands. A final to-common-domain transformation is applied at the root of the tree, taking into account the Triangulation object associated with the Measure object and that of the root OperationCellField (see function _to_common_domain(f::CellField,x::CellPoint) for full details).

We believe that it could be at least advantageous (if not mandatory in more general scenarios) to defer these "to common domain" transformations until the operation tree has been fully built and we combine it with the Measure. We could perform these transformations as we evaluate the different fields bottom-up, in combination with the Measure. Currently, the transformations are performed on the fields prior to evaluation, not during evaluation.
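A minimal sketch of the deferred alternative (plain Julia, not Gridap internals): an operation node only records the operation and its operands, and nothing is checked or transformed until the tree is evaluated, i.e., when the quadrature points (and hence the target domain) are known.

```julia
# LazyOp records an operation tree without eagerly combining domains.
struct LazyOp
    op::Function
    args::Tuple
end

# Evaluation walks the tree bottom-up at the supplied points; this is
# the place where a per-node to-common-domain transformation could be
# inserted, driven by the Measure.
evaluate_at(f::Function, x) = f(x)
evaluate_at(t::LazyOp, x) = t.op(map(a -> evaluate_at(a, x), t.args)...)

u = x -> 2x      # stands in for a field on one domain
v = x -> x + 1   # stands in for a field on another domain
tree = LazyOp(*, (u, v))  # built without any common-domain check

evaluate_at(tree, 3.0)  # 2*3 * (3+1) = 24.0
```

In this scheme, building `u*v` is always cheap and always succeeds; well-definedness is only decided at evaluation time, when the points come from the Measure.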

To think:

  • With the current approach, at each node of the tree we check with a try-catch block whether the combination of the fields at hand is well defined, and raise an Exception otherwise. To this end, the fields are evaluated at the triangulation nodal points (see function OperationCellField(op::Operation,args::CellField...) for more details). I guess that such a technique can also be implemented with the alternative approach, indeed using the actual points on which the fields are to be evaluated.
  • Can this alternative approach prevent the application of some optimizations?
  • With this approach we won't be able to evaluate operations among CellFields without the intervention of a Measure; e.g., we can currently write (uh*vh)(x), with uh and vh defined on different domains and x a CellPoint object. Not sure whether this is actually an issue or not.

Raw black-board illustrations, to be curated: IMG20211214164156, IMG20211214172356, IMG20211214172407

@fverdugo (Member) commented:
Hi @amartinhuertas @santiagobadia,

The refactoring done for v0.17 should make it possible to implement hybrid methods with minor changes (minor at least in comparison with what is exposed above). When I designed v0.17 I was aware that at some point we were going to implement hybrid methods, and I prepared several parts of the code for this purpose (even though I have not had time to implement hybrid methods completely).

More than changing current functions/objects (I think they are pretty OK), I think we need to implement new ones for the cases not yet accounted for in the library. Essentially: 1) a new object that represents the local faces of all cells in the mesh, and 2) new "glue" types that allow combining fields defined on the faces and on the cells.

The key part introduced in v0.17 is the possibility of doing 2). I think I have never shown you in detail the main new ideas behind v0.17. Perhaps we can start with this.

@fverdugo (Member) commented:
Hi @amartinhuertas @santiagobadia

I have implemented an (incomplete) draft about how to deal with quantities defined on the cell boundaries. The additions to the library are here (at a later stage we can reorganise the code): https://github.com/gridap/Gridap.jl/blob/cell_boundary_draft/src/cell_boundary_draft.jl

With this draft, I am able to evaluate and visualise, on the cell boundaries, fields defined on the cells and on the faces. This is the first step (and the difficult part) towards numerical integration. Here is an example:

using Gridap

domain = (0,1,0,1)  # bounding box of the unit square
cells = (5,5)       # number of cells per direction
model = CartesianDiscreteModel(domain,cells)

D = num_cell_dims(model)
Ω = Triangulation(ReferenceFE{D},model)    # the cells
Γ = Triangulation(ReferenceFE{D-1},model)  # the faces
∂K = CellBoundary(model)                   # the cell boundaries (draft object)

v = CellField(x->x[1]+x[2],Ω)    # field defined on the cells
q = CellField(x->x[1]-x[2]+1,Γ)  # field defined on the faces

writevtk(Ω,"draft_Ω",cellfields=["v"=>v])
writevtk(Γ,"draft_Γ",cellfields=["q"=>q])
writevtk(∂K,"draft_∂K",offset=0.15,cellfields=["q"=>q,"v"=>v])

v and q are defined on different domains; see here:

draft_1

∂K represents the cell boundaries, and I can visualise it. Note that I am applying a little offset towards the cell centre; otherwise ∂K and Γ would overlap.

draft_2

Now, visualise q on ∂K and Γ:

draft_3

Now, visualise v on ∂K and Ω:
draft_4

This is a very quick draft, but I think it is a good starting point.

@amartinhuertas (Member, Author) commented:
For the record, please note the following comment: 936ebd3#r62267265

@amartinhuertas (Member, Author) commented:
I think we can close this issue. We have more or less agreed on which path to follow in the meantime.
