
Finite Element Method

The finite element method (FEM) is a widely used technique for numerically solving the differential equations that arise in engineering and mathematical modelling. Typical problem areas of interest include the traditional fields of structural analysis, heat transfer, fluid flow, mass transport, and electromagnetic potential.

The FEM is a general numerical method for solving boundary value problems involving partial differential equations in two or three spatial variables. To solve a problem, the FEM subdivides a large system into smaller, simpler parts called finite elements.

This is achieved by a particular space discretization in the space dimensions: a mesh of the object is constructed, giving a numerical domain for the solution with a finite number of points. The finite element formulation of a boundary value problem finally results in a system of algebraic equations.

The method approximates the unknown function over the domain. The simple equations that model these finite elements are then assembled into a larger system of equations that models the entire problem. The FEM then approximates a solution by minimizing an associated error function via the calculus of variations.
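As a minimal sketch of this pipeline, the following solves -u''(x) = 2 on (0, 1) with zero boundary values using piecewise-linear ("hat") elements. The mesh size and right-hand side are illustrative assumptions, chosen so the exact solution u(x) = x(1 - x) is known:

```python
import numpy as np

# Sketch: solve -u''(x) = 2 on (0, 1), u(0) = u(1) = 0,
# with piecewise-linear finite elements on a uniform mesh.
n_elems = 8                       # assumed mesh size for the demo
n_nodes = n_elems + 1
x = np.linspace(0.0, 1.0, n_nodes)
h = x[1] - x[0]

# Global stiffness matrix and load vector, assembled element by element.
K = np.zeros((n_nodes, n_nodes))
F = np.zeros(n_nodes)
k_local = (1.0 / h) * np.array([[1.0, -1.0], [-1.0, 1.0]])  # element stiffness
f_local = h * np.array([1.0, 1.0])                          # element load for f = 2

for e in range(n_elems):
    dofs = [e, e + 1]
    K[np.ix_(dofs, dofs)] += k_local
    F[dofs] += f_local

# Apply homogeneous Dirichlet conditions by reducing to interior nodes.
interior = slice(1, n_nodes - 1)
u = np.zeros(n_nodes)
u[interior] = np.linalg.solve(K[interior, interior], F[interior])

exact = x * (1.0 - x)
print(np.max(np.abs(u - exact)))  # nodal values are exact for this 1D problem
```

The assembled system is small here, but the same scatter-and-add pattern scales to meshes with millions of elements.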

Fundamental ideas

  • Accurate representation of complex geometry
  • Inclusion of dissimilar material properties
  • Easy representation of the total solution
  • Capture of local effects

A typical application of the method involves:

  • Dividing the domain of the problem into a collection of subdomains, each represented by a set of element equations for the original problem
  • Systematically recombining all sets of element equations into a global system of equations for the final calculation.

The global system of equations can be calculated from the known initial values of the original problem to obtain a numerical answer.

In the first step above, the element equations are simple equations that locally approximate the original complex equations under study, which are often partial differential equations (PDEs).

To explain the approximation in this process, the finite element method is commonly introduced as a special case of the Galerkin method.

In mathematical terms, the procedure constructs an integral of the inner product of the residual and the weight functions and sets the integral to zero. Simply put, it is a procedure that minimizes the approximation error by fitting trial functions into the PDE.

The residual is the error caused by the trial functions, and the weight functions are polynomial approximation functions that project the residual.
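In symbols, the weighted-residual condition can be stated as follows (a generic formulation; the symbols here are chosen for illustration):

```latex
% Trial function built from basis (shape) functions \varphi_j:
u_h(x) = \sum_{j=1}^{n} u_j \, \varphi_j(x)

% Residual of a PDE written abstractly as L(u) = f:
R(u_h) = L(u_h) - f

% Weighted-residual condition: project the residual onto each weight
% function w_i and require the projection to vanish. The Galerkin
% choice is w_i = \varphi_i, i.e. weights equal to the basis functions.
\int_{\Omega} w_i \, R(u_h) \, d\Omega = 0, \qquad i = 1, \dots, n
```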

The procedure eliminates all the spatial derivatives from the PDE, thus approximating the PDE locally with:

  • a set of algebraic equations for steady-state problems
  • a set of ordinary differential equations for transient problems.

These equation sets are the element equations. They are linear if the underlying PDE is linear and vice versa.

Algebraic equation sets that arise in steady-state problems are solved using numerical linear algebra methods, while ordinary differential equation sets that arise in transient problems are solved by numerical integration using standard techniques such as Euler's method or the Runge-Kutta method.
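The transient case can be sketched with the 1D heat equation u_t = u_xx on (0, 1) with zero boundary values: a linear-element spatial discretization leaves a system of ODEs in time, which forward Euler then advances. The mesh size, lumped (diagonal) mass matrix, and time step below are illustrative assumptions:

```python
import numpy as np

n_elems = 16
n_nodes = n_elems + 1
x = np.linspace(0.0, 1.0, n_nodes)
h = x[1] - x[0]

# Interior-node stiffness matrix from 1D linear elements for -u_xx.
m = n_nodes - 2
K = (2.0 * np.eye(m) - np.eye(m, k=1) - np.eye(m, k=-1)) / h
M = h * np.eye(m)              # lumped mass matrix (simplifying assumption)

u = np.sin(np.pi * x[1:-1])    # initial condition; decays like exp(-pi^2 t)
dt = 0.25 * h * h              # forward Euler step within the stability limit
n_steps = int(round(0.1 / dt))

# Semi-discrete system M du/dt = -K u, advanced by forward Euler.
Minv_K = np.linalg.solve(M, K)
for _ in range(n_steps):
    u = u - dt * (Minv_K @ u)

exact = np.exp(-np.pi**2 * n_steps * dt) * np.sin(np.pi * x[1:-1])
print(np.max(np.abs(u - exact)))   # small discretization error
```

Replacing the forward Euler update with a Runge-Kutta step, or with an implicit scheme for stiff systems, changes only the time loop.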

In step (2) above, a global system of equations is generated from the element equations by transforming coordinates from the subdomains' local nodes to the domain's global nodes.

This spatial transformation includes the appropriate orientation adjustments relative to the reference coordinate system. FEM software often uses coordinate data generated from the subdomains to carry out this step.
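The local-to-global step can be sketched as a connectivity table that maps each element's local node numbers to global ones, with element matrices scattered into the global system. The mesh (a unit square split into two linear triangles) and the operator (Laplace) are illustrative assumptions:

```python
import numpy as np

# Unit square meshed with two linear triangles (assumed example mesh).
nodes = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])
triangles = np.array([[0, 1, 2], [0, 2, 3]])   # connectivity: local -> global

def local_stiffness(p):
    """Element stiffness of a linear triangle for the Laplace operator."""
    # Shape-function gradient coefficients from the vertex coordinates p (3x2).
    b = np.array([p[1, 1] - p[2, 1], p[2, 1] - p[0, 1], p[0, 1] - p[1, 1]])
    c = np.array([p[2, 0] - p[1, 0], p[0, 0] - p[2, 0], p[1, 0] - p[0, 0]])
    area = 0.5 * abs((p[1, 0] - p[0, 0]) * (p[2, 1] - p[0, 1])
                     - (p[2, 0] - p[0, 0]) * (p[1, 1] - p[0, 1]))
    return (np.outer(b, b) + np.outer(c, c)) / (4.0 * area)

# Scatter each element matrix into the global matrix via the connectivity.
n_nodes = len(nodes)
K = np.zeros((n_nodes, n_nodes))
for tri in triangles:
    Ke = local_stiffness(nodes[tri])
    for i_loc, i_glob in enumerate(tri):
        for j_loc, j_glob in enumerate(tri):
            K[i_glob, j_glob] += Ke[i_loc, j_loc]

print(K)   # symmetric; rows sum to zero (constants lie in the nullspace)
```

Real FEM codes use the same scatter pattern but store K in a sparse format and may add per-element coordinate rotations for shell or beam elements.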

Finite element analysis (FEA) is the term used for the practical application of FEM. In this context, FEA is used to perform engineering analysis computationally. It uses mesh generation techniques to divide a complex problem into small elements, together with software coded with a FEM algorithm.

In applying FEA, the complex problem is usually a physical system whose underlying physics is described by equations such as the Navier-Stokes equations, the heat equation, or the Euler-Bernoulli beam equation, expressed either as PDEs or as integral equations. The small elements into which the complex problem is divided represent different regions of the physical system.

FEA may be used to analyze problems over complicated domains (such as cars and oil pipelines), when the domain changes (as during a solid-state reaction with a moving boundary), when the desired precision varies over the entire domain, or when the solution lacks smoothness. FEA simulations are a valuable resource, as they remove many iterations of building and testing hard prototypes for various high-fidelity situations.

For instance, it is possible to increase prediction accuracy in "important" areas such as the front of a car and reduce it in the rear (thus lowering the cost of the simulation). Another example is numerical weather prediction, where it is more important to have accurate predictions over developing highly nonlinear phenomena (such as tropical cyclones in the atmosphere or eddies in the ocean) than over relatively calm areas.


History

The finite element method originated from the need to solve complex elasticity and structural analysis problems in civil and aeronautical engineering, although the exact date of its conception is difficult to pinpoint. Its development can be traced back to work by A. Hrennikoff and R. Courant in the early 1940s.

Another pioneer was Ioannis Argyris. In the USSR, the introduction of the method's practical application is usually connected with the name of Leonard Oganesyan. It was also independently rediscovered in China by Feng Kang in the late 1950s and early 1960s, based on computations for dam constructions, where it was called the finite difference method based on the variation principle.

Although the approaches used by these pioneers varied, they share one essential characteristic: the mesh discretization of a continuous domain into a set of discrete sub-domains, usually called elements.

Hrennikoff's work discretizes the domain by using a lattice analogy, while Courant's approach divides the domain into finite triangular subregions to solve the second-order elliptic partial differential equations that arise from the problem of torsion of a cylinder.

Courant's contribution was evolutionary, drawing on a large body of earlier results for PDEs developed by Rayleigh, Ritz, and Galerkin.

The finite element method gained its real impetus in the 1960s and 1970s through the work of J. H. Argyris and co-workers at the University of Stuttgart, R. W. Clough and co-workers at UC Berkeley, O. C. Zienkiewicz and co-workers at Swansea University, Philippe G. Ciarlet and co-workers at the University of Paris 6, and Richard Gallagher and co-workers at Cornell University. Further impetus was provided in these years by the open-source finite element programs that became available.

NASA sponsored the original version of NASTRAN, and UC Berkeley made the finite element program SAP IV widely available. In Norway, the ship classification society Det Norske Veritas (now DNV GL) developed Sesam in 1969 for use in the analysis of ships.

A rigorous mathematical basis for the finite element method was provided in 1973 by the publication of Strang and Fix.[11] The method has since been generalized for the numerical modelling of physical systems in a wide variety of engineering disciplines, including fluid dynamics, heat transfer, and electromagnetics.

Technical discussion

The structure of finite element methods:

A finite element method is characterized by a variational formulation, a discretization strategy, one or more solution algorithms, and post-processing procedures.

Examples of the variational formulation are the Galerkin method, the discontinuous Galerkin method, mixed methods, etc. A discretization strategy is understood as a clearly defined set of procedures that cover (a) the creation of finite element meshes, (b) the definition of basis functions on reference elements (also called shape functions), and (c) the mapping of reference elements onto the elements of the mesh. Examples of discretization strategies are the h-version, p-version, hp-version, XFEM, and isogeometric analysis.

Each discretization strategy has certain advantages and disadvantages. A reasonable criterion for selecting a discretization strategy is to realize nearly optimal performance for the broadest set of mathematical models in a particular model class.

Various numerical solution algorithms can be classified into two broad categories: direct and iterative solvers. These algorithms are designed to exploit the sparsity of the matrices, which depends on the choices of variational formulation and discretization strategy.
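The two solver families can be sketched on the 1D system from earlier sections: the assembled FEM matrix is symmetric and positive definite, so a direct factorization and a conjugate gradient iteration both apply. The hand-written CG below is illustrative only; production codes use sparse storage and library solvers throughout:

```python
import numpy as np

n = 50
h = 1.0 / (n + 1)
# Stiffness matrix from 1D linear elements for -u'' = f, zero boundary values.
K = (2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h
f = 2.0 * h * np.ones(n)            # consistent load for f(x) = 2

u_direct = np.linalg.solve(K, f)    # direct solver (factorization)

def conjugate_gradient(A, b, tol=1e-10, max_iter=1000):
    """Basic conjugate gradient for symmetric positive definite A."""
    x = np.zeros_like(b)
    r = b - A @ x
    p = r.copy()
    rs = r @ r
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

u_iter = conjugate_gradient(K, f)   # iterative solver
print(np.max(np.abs(u_direct - u_iter)))   # the two solutions agree
```

CG only ever touches the matrix through products A @ p, which is exactly what makes iterative solvers attractive for large sparse FEM systems.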

Postprocessing procedures are designed to extract the data of interest from a finite element solution. To meet the requirements of solution verification, postprocessors need to provide a posteriori error estimation in terms of the quantities of interest.

When the approximation errors are larger than what is considered acceptable, the discretization has to be changed, either manually by the analyst or by an automated adaptive process. Some very efficient postprocessors provide for the realization of superconvergence.


Extended finite element method (XFEM)

The extended finite element method (XFEM) is a numerical technique based on the generalized finite element method (GFEM) and the partition of unity method (PUM). It extends the classical finite element method by enriching the solution space for solutions to differential equations with discontinuous functions.

By enriching the approximation space, extended finite element methods can naturally reproduce the challenging features of the problem of interest, such as boundary layers, singularities, and discontinuities. It has been shown that for some problems, such an embedding of the problem's features into the approximation space can significantly improve convergence rates and accuracy.

Moreover, treating discontinuity-related problems with XFEM suppresses the need to mesh and re-mesh the discontinuity surfaces, thus alleviating the computational costs and projection errors associated with conventional finite element methods, in which discontinuities are restricted to mesh edges.

This method is implemented in various research codes to varying degrees:

  • GetFEM++
  • xfem++
  • openxfem++

XFEM has also been implemented in codes such as ASTER, Morfeo, Abaqus, and Altair Radioss. It is increasingly being adopted by other commercial finite element software, with a few plugins and actual core implementations available (ANSYS, SAMCEF, OOFELIE, etc.).

Scaled boundary finite element method (SBFEM)

The scaled boundary finite element method (SBFEM) was first introduced by Song and Wolf (1997). It has been one of the most successful contributions to the numerical analysis of fracture mechanics problems.

It is a semi-analytical, fundamental-solutionless method that combines the advantages of finite element formulations and procedures with those of boundary element discretization. Unlike the boundary element method, however, no fundamental differential solution is required.


Smoothed finite element method (S-FEM)

The smoothed finite element method (S-FEM) is a particular class of numerical simulation algorithms for the simulation of physical phenomena. It was developed by combining the finite element method with meshfree techniques.

Crystal plasticity finite element method (CPFEM)

The crystal plasticity finite element method (CPFEM) is an advanced numerical technique developed by Franz Roters. Metals can be regarded as aggregates of crystals that behave anisotropically under deformation, exhibiting, for example, abnormal localization of stress and strain.

To account for crystal anisotropy during the simulation, CPFEM, based on slip (shear strain rate), can calculate dislocation, crystal orientation, and other texture information. It is now used in the numerical study of material deformation, surface roughness, fractures, and other phenomena.

Method of virtual elements (VEM)

The virtual element method (VEM), introduced by Beirão da Veiga et al. (2013) as an extension of mimetic finite difference (MFD) methods, is a generalization of the standard finite element method for arbitrary element geometries.

This allows admitting general polygons (or polyhedra in 3D) that are highly irregular and non-convex in shape. The name "virtual" derives from the fact that knowledge of the local shape function basis is not required and is, in fact, never explicitly calculated.

Link with the gradient discretization method

Some types of finite element methods (conforming, nonconforming, and mixed finite element methods) are particular cases of the gradient discretization method (GDM).

Hence the convergence properties of the GDM, which are established for a series of problems (linear and nonlinear elliptic problems; linear, nonlinear, and degenerate parabolic problems), hold as well for these particular FEMs.

Comparison with the finite difference method

The finite difference method (FDM) is an alternative way of approximating solutions of PDEs. The differences between FEM and FDM are:

  • The most attractive feature of the FEM is its ability to handle complicated geometries (and boundaries) with relative ease. While the handling of geometries in FEM is theoretically straightforward, FDM in its basic form is restricted to rectangular shapes and simple alterations thereof.
  • FDM is more frequently utilized for models that are rectangular or block-shaped rather than those with irregular CAD geometries.
  • In general, FEM permits mesh adaptivity that is more flexible than FDM.
  • The most attractive feature of finite differences is that they are very easy to implement.
  • There are several ways one could consider the FDM a special case of the FEM approach. For example, first-order FEM is identical to FDM for Poisson's equation if the problem is discretized by a regular rectangular mesh with each rectangle divided into two triangles.
  • There are reasons to consider the mathematical foundation of the finite element approximation more sound, for instance, because the quality of the approximation between grid points is low in FDM.
  • The quality of a FEM approximation is often higher than in the corresponding FDM approach, but this is extremely problem-dependent, and several examples to the contrary can be provided.
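The equivalence noted above between first-order FEM and FDM for Poisson's equation can be checked numerically in 1D: on a uniform mesh with zero boundary values, the two discretizations of -u'' = f produce the same linear system up to a factor of h, provided the FEM load vector is evaluated with trapezoid-rule quadrature (an assumption of this sketch):

```python
import numpy as np

n = 9                               # interior nodes (assumed example size)
h = 1.0 / (n + 1)
x = np.linspace(h, 1.0 - h, n)
f = np.sin(np.pi * x)               # any right-hand side works here

# Shared tridiagonal matrix tridiag(-1, 2, -1).
T = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)

u_fdm = np.linalg.solve(T / h**2, f)     # second-order FD stencil
u_fem = np.linalg.solve(T / h, h * f)    # linear FEM, trapezoid-rule load

print(np.max(np.abs(u_fdm - u_fem)))     # identical up to round-off
```

With exact (consistent) integration of the FEM load vector, the two systems differ slightly for non-constant f, which is one source of the accuracy differences mentioned above.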
