
Adaptive Methods for Partial Differential Equations

Current Research Topics:


A posteriori error estimation


With the exception of trivial cases, the solution of partial differential equations via the finite element method generates some degree of error. The usefulness of the method as a design tool largely depends upon the ability to quantify this error for a given analysis. Given the true solution for a posed problem, this quantification is readily accomplished. However, the finite element method is typically employed to solve problems for which the analytical solution is not known. In that case the error cannot be determined exactly and must instead be approximated.
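
As a deliberately simple illustration of measuring discretization error against a known true solution, the sketch below solves a 1D Poisson model problem with a manufactured solution and evaluates the energy-norm error exactly. The model problem, mesh, and helper names are assumptions made for illustration and are not taken from this page.

```python
# Minimal sketch, assuming a 1D Poisson model problem with a manufactured
# solution (not taken from this page):  -u'' = f on (0,1), u(0) = u(1) = 0,
# with u(x) = sin(pi x) and f(x) = pi^2 sin(pi x).  Because the true solution
# is known, the energy-norm discretization error can be evaluated exactly.
import numpy as np

def solve_poisson_1d(x):
    """Linear finite elements on the (possibly nonuniform) node array x."""
    f = lambda s: np.pi**2 * np.sin(np.pi * s)
    h = np.diff(x)
    n = len(x) - 1
    A = np.zeros((n - 1, n - 1))
    b = np.zeros(n - 1)
    for i in range(1, n):                     # loop over interior nodes
        A[i - 1, i - 1] = 1.0 / h[i - 1] + 1.0 / h[i]
        if i > 1:
            A[i - 1, i - 2] = -1.0 / h[i - 1]
        if i < n - 1:
            A[i - 1, i] = -1.0 / h[i]
        b[i - 1] = 0.5 * (h[i - 1] + h[i]) * f(x[i])   # trapezoidal-rule load
    u = np.zeros(n + 1)                       # homogeneous Dirichlet values
    u[1:-1] = np.linalg.solve(A, b)
    return u

def energy_norm_error(x, u):
    """Exact ||u' - u_h'||_L2 via two-point Gauss quadrature per element."""
    g = np.array([-1.0, 1.0]) / np.sqrt(3.0)
    err2 = 0.0
    for e in range(len(x) - 1):
        h = x[e + 1] - x[e]
        duh = (u[e + 1] - u[e]) / h           # constant FE gradient
        xg = 0.5 * (x[e] + x[e + 1]) + 0.5 * h * g
        err2 += 0.5 * h * np.sum((np.pi * np.cos(np.pi * xg) - duh) ** 2)
    return np.sqrt(err2)

for n in (8, 16, 32):
    x = np.linspace(0.0, 1.0, n + 1)
    print(n, energy_norm_error(x, solve_poisson_1d(x)))   # error ~ O(h)
```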


Discretization errors in a finite element analysis

A posteriori error estimation is one method of approximating the discretization error present in a finite element solution. In this procedure, knowledge of the governing partial differential equation and the finite element solution data are utilized to form an accurate estimate of the amount of error in the problem domain. A posteriori error estimators are commonly element-based or patch-based, and as such also provide information regarding the location of discretization errors within the problem domain. This information is crucial for the correction indication step in an adaptive analysis, where decisions are made concerning the appropriate locations and amounts of discretization refinement.
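
The page does not single out a particular estimator, so as one hedged example the sketch below implements a simple gradient-recovery (Zienkiewicz-Zhu type) element indicator for the 1D solution from the previous sketch. Real element- or patch-based estimators for PDE systems are considerably more involved; this is meant only to make the element-level idea concrete.

```python
# Hedged illustration only: a gradient-recovery (Zienkiewicz-Zhu type)
# element indicator for the 1D solution produced by solve_poisson_1d above.
# The recovered gradient averages the piecewise-constant element gradients
# at the nodes; the indicator measures the gap between the two fields.
import numpy as np

def element_error_indicators(x, u):
    h = np.diff(x)
    grad = np.diff(u) / h                        # constant gradient per element
    g_nodal = np.empty(len(x))                   # recovered (smoothed) gradient
    g_nodal[0], g_nodal[-1] = grad[0], grad[-1]
    g_nodal[1:-1] = 0.5 * (grad[:-1] + grad[1:])
    eta = np.empty(len(h))
    for e in range(len(h)):
        # ||recovered - FE gradient||_L2 on element e; the difference is
        # linear (values a, b at the end nodes), so the integral is analytic.
        a = g_nodal[e] - grad[e]
        b = g_nodal[e + 1] - grad[e]
        eta[e] = np.sqrt(h[e] * (a * a + a * b + b * b) / 3.0)
    return eta      # eta[e] approximates the energy-norm error in element e
```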

Once the error in the finite element solution has been estimated, the discretization of the problem domain may be altered accordingly to improve accuracy. A subsequent finite element analysis and error estimation are then performed to approximate the discretization error in the refined mesh. If the solution accuracy is not within user-specified tolerances, the discretization may be refined again and the process repeated.
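
A minimal version of this solve / estimate / refine loop, reusing the hypothetical 1D helpers from the sketches above, might look as follows. The marking strategy (refine elements whose indicator exceeds half the largest one) and the bisection refinement are illustrative assumptions, not a description of any particular production code.

```python
# Sketch of the loop just described, reusing solve_poisson_1d and
# element_error_indicators from the earlier sketches.
import numpy as np

def adaptive_solve(tol, max_iter=20, mark_fraction=0.5):
    x = np.linspace(0.0, 1.0, 5)                    # coarse initial mesh
    for _ in range(max_iter):
        u = solve_poisson_1d(x)                     # analysis
        eta = element_error_indicators(x, u)        # a posteriori estimation
        if np.sqrt(np.sum(eta ** 2)) <= tol:        # global estimate vs. tolerance
            break
        marked = eta > mark_fraction * eta.max()    # correction indication
        mids = 0.5 * (x[:-1] + x[1:])[marked]
        x = np.sort(np.concatenate([x, mids]))      # bisect the marked elements
    return x, u
```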

Correction Indication


After element-level and global errors have been estimated in an adaptive analysis, the discretization must be altered appropriately. These alterations are made so that the error in subsequent analyses is reduced to a level deemed acceptable by the user. It is, however, not sufficient to merely reduce the error to the specified level. The new mesh must also be an efficient discretization, that is, one that produces the least error with the fewest degrees of freedom.

In the case of h-version adaptivity, the determination of the appropriate element sizes in the new discretization is based on the premise that the most efficient mesh for a given problem is the one that equally distributes the error among the elements. The theory utilizes information about the existing discretization geometry, the element-level (i.e., local) and global error data evaluated in the energy norm, and a prescribed value specifying the amount of error that is acceptable to the user for the given problem.
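
One common way to turn this premise into element sizes (an assumed form, not quoted from this page) is to give each element an equal share of the prescribed global error and rescale its size using the local convergence rate of the energy-norm error:

```python
# Assumed size-calculation rule for illustration: each of the N elements is
# asked to carry an equal share e_bar of the prescribed global error e_target,
# and element sizes are rescaled using the local energy-norm convergence rate
# h^p (p = element polynomial order).
import numpy as np

def new_element_sizes(h, eta, e_target, p=1):
    """h, eta: current sizes and energy-norm error indicators, one per element."""
    e_bar = e_target / np.sqrt(len(h))      # equal share: sqrt(N) * e_bar = e_target
    eta = np.maximum(eta, 1e-30)            # guard against zero indicators
    return h * (e_bar / eta) ** (1.0 / p)   # shrink where eta > e_bar, enlarge elsewhere
```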

Unfortunately, the theory of basing new element sizes on local errors evaluated in norms composed of integrals over a given element and its boundary contains an inherent limitation. Specifically, this theory uses a single scalar measure of the approximate error in an element to calculate a single scalar value for the size of the elements that will replace the given element in the new discretization. Without further embellishment, this provides information for the distribution of elements within the new discretization only in terms of discrete patches. The theory gives no direct information regarding a continuous distribution of new element sizes when the new sizes are significantly smaller than the existing elements. Note that, in general, a continuous distribution of new element sizes in the problem domain necessitates a distribution of new element sizes within the elements of the existing discretization, and that this information cannot be supplied by a single scalar value per element.

It is possible to use model topology, mesh topology, and element level error information to infer a continuous distribution of new element sizes within the elements comprising the existing discretization. A continuous distribution of element sizes is designed by first determining appropriate new element sizes at specific locations in the problem domain. Interpolation functions of suitable continuity are then defined to describe the variation of element sizes throughout the problem domain. The topology of the geometric model is then employed to tailor the discretization for specific problem classes.
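
In one dimension the idea reduces to attaching desired sizes to the mesh nodes and interpolating between them. The sketch below is only meant to make the notion of a continuous size field concrete; the nodal averaging rule and the piecewise-linear interpolation are assumptions made for illustration.

```python
# Minimal 1D sketch of a continuous mesh size field.  Desired sizes are first
# attached to the mesh nodes, then interpolated so that a target size can be
# queried anywhere in the domain, including inside existing elements.
import numpy as np

def nodal_size_field(x, h_new):
    """Attach a desired size to each node: average of the adjacent element values."""
    s = np.empty(len(x))
    s[0], s[-1] = h_new[0], h_new[-1]
    s[1:-1] = 0.5 * (h_new[:-1] + h_new[1:])
    return s

def size_at(x, s, points):
    """Evaluate the continuous (piecewise-linear) size field at arbitrary points."""
    return np.interp(points, x, s)
```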

Bracket example



Figures: bracket geometry, loads, and boundary conditions; initial mesh; adaptively refined mesh; detail of the corner area with an overlay of the original mesh.

Adaptive analysis environments (h-, hp-, hpr-, and s-techniques)

Mesh enrichment


The application of adaptive finite element or finite volume techniques on unstructured 3D meshes requires the ability to locally alter the size of the elements as dictated by the error indication procedures. One approach to obtain the desired distribution of element sizes is to regenerate the entire mesh using an automatic mesh generator controlled by the given mesh size distributions over the domain. This approach is computationally expensive and introduces the complexities of mapping solution fields from one mesh to another. An alternative approach to obtaining the desired distribution of elements is to locally refine and/or coarsen the mesh. Such local operations can be performed efficiently and can effectively address the issues associated with the transfer of solution information.
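
The sketch below illustrates the local alternative in the same assumed 1D setting used earlier (again an assumption for illustration, not SCOREC code): marked elements are bisected in place and the current solution is carried to the new nodes by interpolation, so no global mapping between unrelated meshes is required; coarsening would follow the same pattern in reverse.

```python
# Local refinement with solution transfer, assumed 1D setting.
import numpy as np

def refine_locally(x, u, marked):
    """Bisect marked elements; transfer u to the new nodes by interpolation."""
    mids = 0.5 * (x[:-1] + x[1:])[marked]
    x_new = np.sort(np.concatenate([x, mids]))
    u_new = np.interp(x_new, x, u)   # exact on the old piecewise-linear field
    return x_new, u_new
```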

Examples of locally refined three-dimensional meshes

Edge-based refinement and coarsening procedures allow for general anisotropic refinement with no over-refinement. Local retriangulation tools that do not alter the local element sizes are used to improve triangulation quality. These tools improve the quality of the elements involved without the degree of over-refinement that other procedures may require; this matters because the over-refinement caused by procedures that implicitly guarantee element quality can be very large. The retriangulation procedures are also critical for properly refining elements on curved domain boundaries.
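
As a hedged 2D illustration of one such local retriangulation operation, the sketch below decides whether to swap the shared edge of two triangles based on a common dimensionless shape measure (area over the sum of squared edge lengths). The measure and the swap criterion are assumptions chosen for clarity; a production implementation would also verify that the swap yields valid, non-inverted triangles.

```python
# Edge swap on the pair of triangles (a, b, c) and (a, c, d) sharing edge a-c.
import numpy as np

def quality(p, q, r):
    """Dimensionless shape quality of triangle pqr; 1 for an equilateral triangle."""
    area = 0.5 * abs((q[0] - p[0]) * (r[1] - p[1]) - (q[1] - p[1]) * (r[0] - p[0]))
    lengths2 = ((q - p) ** 2).sum() + ((r - q) ** 2).sum() + ((p - r) ** 2).sum()
    return 4.0 * np.sqrt(3.0) * area / lengths2

def should_swap(a, b, c, d):
    """True if replacing edge a-c by edge b-d raises the worst triangle quality."""
    before = min(quality(a, b, c), quality(a, c, d))
    after = min(quality(a, b, d), quality(b, c, d))
    return after > before

pts = [np.array(p, float) for p in [(0, 0), (1, -0.1), (2, 0), (1, 1.5)]]
print(should_swap(*pts))   # True: the swapped diagonal gives far better shapes
```

In three dimensions the analogous operations act on mesh edges and faces and involve more configurations, but the quality-driven decision is the same in spirit.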

Triangulations can also be over-refined when triangulation quality degrades. The application of refinement on meshes of curved geometric domains, where at each step the mesh must provide its best approximation to the geometric domain, introduces a number of complexities that are directly addressed by the procedures developed at SCOREC. This is an important addition since the more straightforward procedures, including all those with a priori control of element shapes, are limited to domains with planar faces.

Since the local mesh modification procedures are invoked directly from within the analysis procedures, and since the analyses of large problems are carried out in parallel, it is critical that these procedures run in parallel and scale well. All aspects of this adaptation procedure have therefore been parallelized and show good speed-ups on large meshes.

Extraction Techniques


A class of extraction formulations was developed to recover solution quantities with superconvergent accuracy. Building on this work, the boundary stress extraction method was extended to boundary locations that are not traction-free. Furthermore, the convergence characteristics of the extracted pointwise stresses were investigated in an h-adaptive process using different adaptive error control schemes. Observations from these numerical experiments provide initial insight toward an adaptive approach that can obtain a prescribed pointwise solution accuracy with optimal computational efficiency.
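
For context, the duality identity that underlies extraction procedures of this kind can be stated abstractly as follows (one common form, assumed here; the specific boundary-stress formulations mentioned above involve additional problem-dependent terms that are not reproduced). Let B(u, v) = F(v) be the weak problem, u_h its Galerkin approximation in the space V_h, and Q a bounded linear functional represented by an extraction function w:

```latex
B(v, w) = Q(v) \quad \forall v
\qquad\text{(definition of the extraction function } w\text{)},

Q(u) - B(u_h, w) \;=\; B(u - u_h,\, w - w_h)
\qquad \forall\, w_h \in V_h
\quad\text{(Galerkin orthogonality)},

\lvert Q(u) - B(u_h, w) \rvert \;\le\; M\,\lVert u - u_h \rVert_E\,\lVert w - w_h \rVert_E .
```

Because the bound is a product of two approximation errors, the extracted value B(u_h, w) converges at roughly twice the energy-norm rate, which is the superconvergence referred to above.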