
CAE Practically - Notes

CAE (computer-aided engineering) uses numerical methods to simulate physical phenomena through mathematical models. While all models are imperfect abstractions of reality, some models can provide useful insights if the right aspects are modeled. CAE allows engineers to evaluate design performance through simulation without building prototypes, supporting design decisions and enabling optimization before manufacturing. Common numerical methods used in CAE include the finite element method, finite volume method, and finite difference method. CAE benefits engineering by determining design performance beforehand to reduce expenses, though physical testing is still needed to fully validate models.

Uploaded by

Enr Guz
Copyright
© All Rights Reserved

Part 1: CAE Essentials

Slide 4: CAE Essentials


• Let’s begin by reading this quote, often attributed to the famous statistician George P.
Box, who authored an incredible body of knowledge in statistical modelling and
design of experiments.
• He used to say that “all models are wrong; some models are useful”. This captures very
well the essence of numerical simulation: it is impossible to capture the exact nature of all
aspects of reality in a single mathematical or abstract formulation, and it falls to the
engineer’s own experience and judgement to decide which aspects can be modelled and when
they give us a useful glimpse into the mechanisms of how nature works. That is the art
of carrying out a simulation successfully.

Slide 5: What is CAE?


• CAE stands for Computer-Aided Engineering, and it is the common denomination for
software tools dedicated to supporting engineers in their design decisions by evaluating
design performance without needing to build an actual prototype.

• Together with CAD and CAM, these tools share the common CAx denomination, which
summarizes software tools aimed at supporting people involved in the design, analysis,
and manufacture of products.

• CAE is essentially about using numerical methods to simulate real physical phenomena
under different conditions by means of their mathematical models.
• I will try to explain each of the terms written in bold, since they are fundamental concepts
for understanding the goal of CAE.

So, What is a model?


• One can say that it is the abstraction of reality into forms and quantities that our brains
can play with.
• From its Latin root, meaning measure, a model can be understood as a pattern or a figure
to imitate, which is somehow what a simulation does.

• For example, on the right you can see the Navier-Stokes equations for incompressible fluids,
which are basically an expression of the physical reality of flow around an obstacle, inside a
tube, or through a turbine. As can be seen, the equations link, through mathematical
operations, different physical magnitudes like speed, pressure, flow rate, etc. For instance,
these equations, using numerical modelling techniques, can help us apprehend the physical
reality of a Pelton turbine, which maybe some of you are familiar with, through its
corresponding model. From the model we can perform simulations that will help us understand
the physics involved.
• The same can be said about many other laws and models: the diffusion equation, the heat
equation, Maxwell’s equations for electromagnetism, etc.

• Sometimes models can also represent realities other than physical ones, like financial
realities (for instance a business valuation) or social realities (the evolution of the population
in a given area, the parameters that drive it, etc.).
Slide 6: What is a simulation?
• The simulation of a physical phenomenon is its virtual replication. We can consider, for
example, an apple falling off a tree, a process like an accelerating car, or a state, like the
internal stresses of a part manufactured by injection molding.

• If we consider our definition of a mathematical model from the previous slide: to simulate
something, we use the model established to explain the physics of a problem, and virtually
replicate the experiment, changing the conditions to observe the outcomes without needing
to recreate the experiment in real life. Simulating normally involves solving the
mathematical equations engendered by the model.

• The solution is then typically displayed using a visual representation (an image, a video),
but it can also be a sound, for example, that helps us relate intuitively to what is happening
in the corresponding real-world experiment.

What is a numerical method?


• A mathematical model can exhibit several degrees of complexity.
• In order to have a basic understanding of a physical reality, most models are based on
assumptions that simplify the model to make it easily “solvable”. In this case analytical
solutions with simple mathematical expressions can be found, as for example with a beam. A
beam is one of the basic structural elements, which has been extensively studied by scientists
and engineers to understand the mechanics of structures. This theory has been essential to the
development of civil engineering landmarks and of most artifacts in our modern
world: cars, airplanes, trains, furniture, consumer electronics. It even helps in
understanding structures in the natural world.
• You can see on the right the Euler-Bernoulli equation, which is the basis of the classical beam
theory used by mechanical and civil engineers to estimate the deformation of a structure under
a load. In this equation, the deflection is the magnitude of interest, and it is linked to an
external load.
• The result can be expressed as mathematical functions for the deflection and the internal
efforts. You can see them written here as mathematical functions.
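As a tiny illustration of such a closed-form solution, here is a Python sketch of the Euler-Bernoulli deflection of a cantilever beam with a point load at its free end. The beam, section and numbers are hypothetical, chosen only for the example; the slide's beam and load case may differ.

```python
# Closed-form Euler-Bernoulli deflection of a cantilever with a point
# load F at the free end: w(x) = F x^2 (3L - x) / (6 E I), 0 <= x <= L.
# (Illustrative example; dimensions and load are assumed, not from the slide.)
def cantilever_deflection(x, F, L, E, I):
    return F * x**2 * (3 * L - x) / (6 * E * I)

E = 210e9            # Young's modulus of steel [Pa] (typical value)
b, h = 0.02, 0.03    # rectangular cross-section, width x height [m]
I = b * h**3 / 12    # second moment of area [m^4]
F, L = 500.0, 1.0    # tip load [N], beam length [m]

tip = cantilever_deflection(L, F, L, E, I)   # equals F L^3 / (3 E I)
print(f"tip deflection: {tip * 1000:.2f} mm")
```

Evaluating the analytical function is essentially free; this is the kind of "simple mathematical expression" that more complex models no longer admit.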

However, to mirror reality accurately, increased complexity in the models is inevitable.
This is because reality has many non-negligible aspects:
• It happens in a three-dimensional space. Linear (1-D) or plane (2-D) representations, like the
beam, are just projections.
• It is different under different environmental conditions.
• It is usually transient and dynamic.
• It is multiphysical, meaning it combines many different aspects involving the interaction
between bodies.
• It is never ideal, perfect, infinite, punctual, etc.

• When this is taken into account, we can easily see the evolution of the mathematical
formulations of these models: linear equations become non-linear equations, ordinary
differential equations become partial differential equations, explicit functions become
implicit functions.

• In these conditions, it is almost always impossible to find an analytical solution.

• This is where numerical methods come into play: they cope with the complexity problem
by discretizing the problem with respect to time and/or space. It is then possible to
translate the complexity of a global problem into a large structured collection of small
simple problems that can be solved simply and locally.

Slide 7: What kinds of methods exist?


• Many numerical methods were developed by mathematicians in the 20th century to tackle
this problem. In the frame of CAE, these are the most common and widely used. These methods
aim to solve the same discretization problem using different approaches.

• The FEM is currently the most frequently used method. It discretizes the space, so the
problem is solved locally and then integrated to obtain an approximate solution. This gives
FEM enough flexibility to solve a broad range of physics problems: structural analysis,
dynamics and vibration analysis, heat transfer, chemical engineering, electromagnetics
(including electrostatics, magnetostatics, low-frequency electromagnetics, and
frequency-domain high-frequency electromagnetic waves), multiphysics, and fluid dynamics.
The method owes its success to the flexibility of its elements.

• The FVM is focused on conservation laws. For FVM, the division of the geometry
accommodates the conservation laws so they are satisfied exactly, somehow sacrificing the
flexibility present in the FEM. It is however a natural choice for fluid mechanics problems:
CFD, heat transfer, and chemical engineering.

• The finite-difference method is the easiest to implement, but its discretization is not
ideal for problems with more geometrical complexity than squares and rectangles. Currently it
is commonly used for weather calculations, in astrophysics, and in seismology.
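To make the discretization idea concrete, here is a minimal finite-difference sketch in Python. The model problem (a 1D Poisson equation with a manufactured right-hand side) is a toy of my own choosing, not from the slides; it shows how a continuous equation becomes a structured collection of small local equations.

```python
import numpy as np

# Solve -u''(x) = f(x) on (0, 1) with u(0) = u(1) = 0 using the
# central-difference stencil (-u[i-1] + 2 u[i] - u[i+1]) / h^2 = f(x_i).
# f is chosen so the exact solution u(x) = sin(pi x) is known.
n = 50                                  # number of interior grid points
h = 1.0 / (n + 1)
x = np.linspace(h, 1 - h, n)            # interior nodes
f = np.pi**2 * np.sin(np.pi * x)

# Assemble the tridiagonal system A u = h^2 f and solve it
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
u = np.linalg.solve(A, h**2 * f)

err = np.max(np.abs(u - np.sin(np.pi * x)))
print(f"max error with {n} points: {err:.2e}")
```

Each row of the matrix is one "small simple problem" coupling a node only to its immediate neighbours, which is exactly the local structure the lecture describes.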

Why should I care about CAE?


• Let’s spend some time here talking about the pros and cons of spending time and effort in
setting up numerical models for our designs.

• Pros: As we saw, CAE tools are only an extension of the classic equation solving used to
design the first machines at the beginning of the industrial era.
• CAE therefore exhibits the same pros: determining the performance of a design before actually
building it, thereby lowering expenses since less lab testing and prototyping are required.
• Optimization is possible before actually producing something, allowing faster and
cheaper engineering iterations before building actual prototypes.
• Building a model also helps us get a better global picture of the consequences of our design
decisions. And since we are working with a computer, data management becomes easier.

• Of course, it is not the perfect solution to all of our problems. Among the cons, as said
before, reality cannot be simulated perfectly, and as a result the last word always goes to
the experimental tests. An experiment, a demonstrator or a prototype is needed at some point
to validate the model.

• Also, it is incumbent on the engineer to judge when the use of CAE tools is going to have a
positive payback for the effort of introducing them during the development of a product. But
thanks to the rapid increase in available computational power and the rise of data science and
AI, the potential market for CAE can be expected to grow steadily in the coming decades.
Slide 8-9: Main domains
• To finish this introduction, let me present a bit of the different domains hiding under this
common CAE denomination:
• Let’s start with FEA, or finite element analysis, which basically encompasses structural and
dynamic analyses. FEA is about studying the behavior of structures under loads while being
subjected to different conditions. Frequent situations include bending, traction, buckling,
fatigue, shock, vibrations, etc., where the stress or the deformation of a structure is to be
evaluated.
• CFD or computational fluid dynamics, is a similar field specifically applied to fluids, liquids
or gas, and their behavior under loads of mechanical nature like pressure, and often of thermal
nature, like temperature gradients and heat. Pressure, temperature or speed distributions of
fluid particles are often evaluated using CFD.
• Multibody dynamics, or MBD, analyzes the kinematics of assemblies composed of rigid
bodies, that is, without considering deformations due to the loads. It is mainly used in
applications involving systems with complex motion equations, the idea being to simulate
kinematic and dynamic quantities like velocities, accelerations, ranges of motion, contact
forces, torques, etc.
• Thermal analysis focuses on heat transfer, and, when combined with CFD, also sometimes on
mass transfer. Usually thermal analysis problems aim to optimize the cooling or the heating of
the analyzed system, whether it means to maximize it, like in the case of heat sinks, or
minimize it, like in the case of building insulation.
• Other less known CAE domains include
• Computational Electromagnetics, which simulates the behavior of parts and assemblies when
influenced by electromagnetic fields, taking into account their own EM properties. Of course,
this is of big interest in all electronic applications.
• Simulations of metal forming for the manufacturing of geometrically complex parts, an
environment where structures are loaded well beyond their elastic domain, implying non-
linear behaviors.
• Fluid-solid interactions, where the main interest is the interfaces. This also includes the
simulation of sound emissions and their propagation.

I would close this introduction chapter here.


Do you have any questions?

Part 2: FEM
For this second part, I would like to introduce you to the practical aspects of the simulation process.
What does a simulation look like? What are the steps?
For those who are already familiar with numerical simulation it will be a good refresher, while
for those who are seeing this for the first time, it will give you a more concrete view of what
we have been talking about.
As suggested by the title, for this chapter and the next I will narrow the scope to the
particular case of FEM, since, as mentioned before, FEM is the most common resolution method
for CAE problems. However, most of the steps in the FEM simulation process can be translated
without too much effort to other numerical methods like FVM and FDM.
Slide 11: What is FEM?
• So, what is FEM? As said before, the FEM is a numerical method that discretizes a highly
complex problem, usually represented by a differential equation, and solves it locally for the
state variable, which is the physical magnitude for which we try to solve the equation. It can
be temperature, deformation, electric field, etc.

• FEM is part of a broader family of numerical methods known as the Galerkin methods,
developed by mathematicians to solve differential equations.
• For example, you can see on the right how a structure is divided into hexahedral and
tetrahedral elements which, combined, build up the complete solid to be analyzed.

• The discretization is performed by dividing the region of interest (a curve, a surface or a
volume) into very small but finite elements, which is where the name comes from. This
discretization is also applied to time in the case of time-dependent equations.

Formulations
• I will try to explain shortly the notion of formulation and its importance in the understanding
of the FEM method.
• Initially, the model, in its mathematical form, exhibits the form of a partial differential
equation, which is called the “strong formulation”. Its exact solution cannot be found
analytically. The solution will depend on the initial conditions of the problem and the
conditions at the limits of the system, also called boundary conditions.

• Luckily, as engineers, not having the exact solution isn’t an obstacle to finding the
information we are looking for in the model. “All models are wrong, but some are useful.”

• To get around this obstacle, it was proposed (this can be traced back to the 1940s and 50s;
see Turner and Clough, “Stiffness and deflection analysis of complex structures”, 1956) to
solve the problem locally in each one of these elements.

• With localization, the PDE is approximated by sets of ordinary differential equations or
algebraic equations which are easy to solve. The mathematical model is said to take a “weak
formulation”, since the solution is locally integrated. I will not go too much further into
detail about how the integration guarantees that the “weak formulation” does find the best
solution possible to the problem of the “strong formulation”, but you can search the
literature for the notion of “minimum energy” if you are interested in going deeper.

• What is important to remember is that the essential characteristic of the “weak
formulation” is that its integration depends only on the geometry and its discretization. This
means that problems sharing the same pre-established geometry and discretization share a
similar “first part” of the solution. After integration of the weak formulation, we have what
we call the “discrete formulation”. At this point the original problem has become a set of
ordinary differential equations (ODEs) or algebraic equations that can be solved by different
mathematical techniques.
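The strong-to-weak-to-discrete pipeline can be sketched end to end on a 1D toy problem. The code below (an illustrative example of my own, not from any particular FEM code) uses linear "hat" elements: the element stiffness matrix comes from integrating the weak form over one element, and assembly sums those local contributions into the global system K u = b.

```python
import numpy as np

# Toy 1D FEM: solve -u'' = f on (0, 1) with u(0) = u(1) = 0 using linear
# elements. Element stiffness k_e = (1/h) [[1, -1], [-1, 1]] is the
# integrated weak form over one element; the load uses midpoint quadrature.
def fem_1d(n_el, f):
    n_nodes = n_el + 1
    h = 1.0 / n_el
    K = np.zeros((n_nodes, n_nodes))
    b = np.zeros(n_nodes)
    ke = np.array([[1.0, -1.0], [-1.0, 1.0]]) / h
    for e in range(n_el):
        idx = [e, e + 1]
        K[np.ix_(idx, idx)] += ke          # assemble local into global
        xm = (e + 0.5) * h                 # element midpoint
        b[idx] += f(xm) * h / 2.0          # midpoint quadrature for the load
    u = np.zeros(n_nodes)
    # Impose u(0) = u(1) = 0 by solving on the interior nodes only
    u[1:-1] = np.linalg.solve(K[1:-1, 1:-1], b[1:-1])
    return u

f = lambda x: np.pi**2 * np.sin(np.pi * x)   # exact solution: sin(pi x)
u = fem_1d(40, f)
x = np.linspace(0, 1, 41)
err = np.max(np.abs(u - np.sin(np.pi * x)))
print(f"max nodal error with 40 elements: {err:.2e}")
```

Note how, as the lecture says, the matrix K depends only on the geometry and its discretization; changing the load f only changes the right-hand side b.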

Slide 12: Convergence


• To understand the notion of convergence, a visual aid can be seen on the left. If the blue
curve represents the solution of a differential equation, it can be approximated by an
increasing number of linear functions, represented by the red straight lines. Both solutions
are identical at the xi points. Intuitively, the accuracy of the approximation increases with
the number of elements. This asymptotic trend towards a result is what we call convergence,
which is a condition to be verified if the FEM approximation is to be correct.
• However, it can happen that the approximation converges towards values which are not
accurate with respect to the real solution, and which sometimes could even be deprived of
physical sense, although mathematically correct. This can happen for several reasons, which
will be explained later. The important thing to be aware of is that convergence is not
equivalent to accuracy.
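The refinement picture on the slide can be reproduced numerically. The short Python sketch below (a toy of my own, matching the figure's idea) approximates a smooth curve by piecewise-linear segments that coincide with it at the nodes, and shows the maximum error shrinking as elements are added.

```python
import numpy as np

# Piecewise-linear approximation of a smooth curve, matching it at the
# nodes x_i, as in the slide's figure. Watch the max error shrink as the
# number of elements grows: that asymptotic trend is "convergence".
def pw_linear_error(func, n_seg, n_probe=2001):
    xs = np.linspace(0.0, 1.0, n_seg + 1)       # nodes x_i
    xp = np.linspace(0.0, 1.0, n_probe)
    approx = np.interp(xp, xs, func(xs))        # piecewise-linear interpolant
    return np.max(np.abs(approx - func(xp)))

f = lambda x: np.sin(2 * np.pi * x)
for n in (4, 8, 16, 32):
    print(f"{n:3d} elements -> max error {pw_linear_error(f, n):.4f}")
```

Doubling the number of elements cuts the error by roughly a factor of four, the O(h²) behaviour expected of linear interpolation of a smooth function; note that this says nothing about whether the converged value is physically meaningful, which is the slide's caveat.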

Discretization
• The local integration of the “weak formulation” requires two things:
▪ A geometrical division of the analyzed system, which serves as support or basis,
▪ And the local imposition of a mathematical form on the solution, called the ansatz.
This ansatz usually takes the form of a polynomial, although it is theoretically
possible to choose any continuous and continuously differentiable function inside the
element. As far as the engineer is concerned, nowadays FEM codes use first-degree
polynomials (linear elements) and second-degree polynomials (quadratic elements),
which suffice to solve any problem.
▪ Both of these elements, the division and the ansatz, constitute the “mesh”, because of
the image conveyed by the action of dividing the system into small pieces, as you can see
in the figure on the left.
• The mesh is then the concrete expression of the “discretization” notion.
Slide 13: Process
• Let’s talk about the process itself.
• We consider three phases: pre-processing, solving and post-processing, which are basically
the “before, during, and after” of the computational effort.
• I will explain the details in the next slides, but what is important to take in from this
slide is that most of the work of the engineer, hopefully, is carried out during the
pre-processing phase. In this phase, the input parameters need to be understood and applied
correctly, the engineer using their own experience and skills to ensure that:
o The simulation gives an accurate convergent solution,
o The use of the resources is optimized.
• The solving part is carried out by the computer, and normally it is invisible to the average
user, although normally FEM codes keep track of the solving process in an automated report.
• During the post-processing, the engineer takes back control of the simulation. In this part,
they critically assess the results and, by verifying the quality of the solution, can extract
the information required by the other stakeholders.
• In these conditions, it is important to remember that many aspects of the simulations are
either given by the engineering problem itself (as requested by the stakeholders) or are
automated.

Slide 14: Pre-Processing


• Let’s dive into some aspects of the pre-processing.
• As you see, there is a big checklist of parameters one needs to be aware of before running
a simulation. And this list is not exhaustive.
• The art and the skill of the engineer is that, by being aware of these parameters, he or she
avoids “garbage in/garbage out”: we will produce high-quality and useful data only by setting
the right inputs for the computer to solve the problem.
• Although inputs like the geometry, the initial and boundary conditions and the contacts are
introduced by the user as part of the problem inputs, other parameters like the mesh size or
element shapes can more and more frequently be controlled automatically by the solver
algorithm. As much as this is a useful support, it is important that the engineer is aware of
this and makes sure he or she understands what it implies.

Slide 15: Basic parameters


• Among the things that are often controlled by the system, I would like to mention two that I
find important to be aware of, since the wrong selection of these is often a source of errors.
o First of all, the solver type. This refers to the approach the algorithm is going to take
to solve the discrete formulation of our problem, which depends on the nature of the
matrix equation. An explicit solution implies that, by means of the appropriate
transformations, it is possible to find an expression that solves the matrix equation
exactly, for example using the Euler central-difference method. This often requires
that the problem is simple, almost exclusively linear, so these transformations can
actually be performed despite the huge matrix inversions that are required, and an
explicit expression for the solution can actually be obtained. This is why it is almost
only applicable to linear problems. If the problem is relatively ill-defined, for
example by having insufficient information on the nature of contacts or boundary
conditions, the algorithm can quickly become unstable and diverge.
o On the other hand, an implicit solver will take an iterative approach, like the one from
Newton-Raphson. The advantage of this approach is its stability, since it will always
converge to a solution. The solution scheme is independent of the nature of the discrete
formulation, so it is applicable to the whole range of complex problems, including
non-linearities. By their iterative nature, implicit solvers are also time-consuming. For
this reason, they are better adapted to static or quasi-static problems.
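The Newton-Raphson iteration mentioned above can be sketched on a scalar equation. This is a hedged toy: the cubic "stiffness" residual below is hypothetical, and real implicit solvers iterate on the residual of the whole discrete system, updating a tangent stiffness matrix at each step.

```python
# Newton-Raphson on a scalar residual r(x): x_{k+1} = x_k - r(x_k)/r'(x_k).
def newton_raphson(residual, tangent, x0, tol=1e-10, max_iter=50):
    x = x0
    for i in range(max_iter):
        r = residual(x)
        if abs(r) < tol:
            return x, i            # converged: residual below tolerance
        x -= r / tangent(x)        # Newton update
    raise RuntimeError("did not converge")

# Hypothetical nonlinear "spring" problem: find u with k*u + c*u**3 = F
k, c, F = 1000.0, 5000.0, 150.0
root, iters = newton_raphson(
    residual=lambda u: k * u + c * u**3 - F,
    tangent=lambda u: k + 3 * c * u**2,
    x0=0.0,
)
print(f"u = {root:.4f} after {iters} iterations")
```

The iteration count stays small because convergence is quadratic near the solution, but each step requires evaluating (and in the matrix case, factorizing) the tangent, which is where the cost of implicit solvers comes from.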

• Now, let’s talk about the order of the element. As said in the previous slides, most solvers
use exclusively these element orders to approach the target solution.
To understand the differences between linear and quadratic, you can use the visual support on
the right:
o Intuitively, you can see that the approximation using quadratic elements gives a
solution somewhat closer to the exact one, because of the capacity of quadratic
functions to define curved lines. Problems where the mathematical formulation includes
higher-order derivatives can then be more accurately solved using a quadratic ansatz
rather than a linear one. Of course, this comes at a price: the increased computing
power required to run the solver.
o This is why linear elements are preferred when the quadratic approximation is not
expected to bring real added value with respect to the linear solution, typically in
non-structural problems. Also, linear elements have been proven to be more appropriate
for certain non-linear problems (not all), like non-linear contact problems, strongly
localized deformations or wave propagation.
o Quadratic elements can also help to mitigate two purely numerical artifacts of the
application of FEM: hour-glassing (introduced by reduced integration) and locking
(shear locking in bending problems).
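The linear-versus-quadratic comparison can be made quantitative with a small Python sketch. It is a toy of my own: a single hypothetical "element" spanning [0, 1], with a curved exact solution interpolated by a linear ansatz (end nodes only) versus a quadratic one (end nodes plus a midside node).

```python
import numpy as np

# Interpolate a curved "exact" solution over one element with a linear
# vs a quadratic ansatz and compare the maximum errors.
def interp_error(u, degree, n_probe=1001):
    # nodes: element ends for linear, plus a midside node for quadratic
    xn = np.linspace(0.0, 1.0, degree + 1)
    coeffs = np.polyfit(xn, u(xn), degree)      # polynomial through the nodes
    xp = np.linspace(0.0, 1.0, n_probe)
    return np.max(np.abs(np.polyval(coeffs, xp) - u(xp)))

u = lambda x: np.sin(np.pi * x)                 # curved exact solution
e_lin = interp_error(u, 1)
e_quad = interp_error(u, 2)
print(f"linear element error:    {e_lin:.3f}")
print(f"quadratic element error: {e_quad:.3f}")
```

The quadratic ansatz follows the curvature and cuts the error dramatically, at the cost of an extra midside node per element, which is exactly the accuracy/computing-power trade-off described above.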

• Many other parameters usually controlled by the algorithm are useful to be aware of, but
that is already the topic of a more advanced course.

Slide 16: Meshing


• A word about meshing:
• When working with 3D geometries, it is evident that elements will be three-dimensional,
these little parts being a more or less deformed variant of a hexahedral, pentahedral or
tetrahedral element, as shown in the bottom-right corner of the table. These are generally
called solid elements.
• However, elements can be greatly degenerated depending on the geometry, raising the question
of whether they can still be optimally represented by one of these shapes. This is typically
the case when analyzing thin and slim structures like sheet metal, pipes, tubes or beams. In
these cases, special cases of the solid elements can be applied, among which we have the shell
elements (when two dimensions are much larger than the last one) and the beam elements (when
two comparable dimensions are much smaller than the last one). In both cases, the negligible
dimensions are represented by a single parameter, like the thickness of the shell in the case
of shell elements. The objective is mostly the optimization of computing resources.

• Also, cases where deformations are locked to a single plane or line, and therefore exist in
a dimensionally reduced space, will use lower-dimension elements, like the plane or truss
elements.

• Other aspects to consider during meshing, in order to optimize the structure and the running
time of the algorithm, are: the size of the elements, the type of integration, the meshing
method, etc. Seeing these in detail is also out of scope for this introductory course.

Slide 17: Post-processing


• Finally, we come to the post-processing, which is the other end of the black box that is the
solving part.
• In this phase, the results are available. Available FEM codes almost always have a GUI where
the user can display and visually inspect the results of the calculations, usually using a
color code to represent the magnitudes, as can be seen in the examples on the slide. By
default, the color red is almost universally used to designate critical or “hot” spots, while
blue marks the least critical.
• At this stage, the engineer must use their critical sense to perform sanity checks on the
results, to verify they are coherent and truly represent the situation that is being analyzed.
As said before, convergence is a necessary condition, but not a sufficient one. Usually a
convergence report is available.
• Another inspection that can be done is the continuity of the solution. On the right you can
see examples of discontinuities that can be a sign of hidden problems coming from the
numerical resolution of the problem.
• Also, most codes provide useful information in the form of warning and error messages with
specific statements that could raise red flags.
• When the sanity checks are done, it is then necessary to ask oneself other questions in
order to further validate the results obtained. Depending on the context, it might be required
to confront the simulation with test results, hand calculations, or other validated past
simulations, or maybe to repeat the same simulation under other conditions to check the
coherence.
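As an example of the hand calculations mentioned above, here is a hedged back-of-envelope check in Python for a C-frame-like geometry. The 100 kN force is taken from this lecture's example; every cross-section dimension below is a hypothetical placeholder, not taken from the actual model, so the number is only an order-of-magnitude reference.

```python
# Back-of-envelope beam-theory estimate of the nominal stress at the
# throat of a C-frame: sigma = F/A + M*c/I, with M = F*e the bending
# moment from the load eccentricity e. All dimensions are assumed.
F = 100e3                 # press force [N] (from the slide)
e = 0.40                  # eccentricity to the section centroid [m] (assumed)
b, h = 0.05, 0.30         # throat cross-section width x depth [m] (assumed)

A = b * h                 # area [m^2]
I = b * h**3 / 12         # second moment of area [m^4]
M = F * e                 # bending moment at the section [N*m]
sigma = F / A + M * (h / 2) / I    # direct + bending stress [Pa]
print(f"nominal throat stress ~ {sigma / 1e6:.0f} MPa")
```

If the simulated peak differs from such an estimate by an order of magnitude, something in the setup likely deserves a second look; local concentrations at corners will of course exceed the nominal value.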

Part 3: Walkthrough of an example


Slide 19: introduction
Let me show you an example to illustrate this simulation process. This simulation was performed
using Ansys, which I find has a nice interface to organize and parametrize your simulations.
Let’s consider a C-shape frame for a machine press. Typically these machines are used for sheet
metal manufacturing processes like bending, punching, stamping, deep drawing, etc. This is a
typical example where you want to evaluate the resistance of a structural design to static
loads, in order to guarantee the structural integrity of the frame. We then want to calculate
the structural stress on this frame under the required operational conditions.
Let us consider the geometry you see on the left: the frame’s global body has been split into
several smaller units, which somehow makes sense given the way you assemble such a frame,
normally made of welded or bolted components.
The objective of the simulation is to see where the stresses are the highest and how we can
optimize them.
The requirements of the simulation are described as follows:
1. Let’s suppose the press exerts a vertical force of 100 kN, and we want to see what the
effect on the frame is.
2. The frame is fixed to the floor by four large screws that will be considered undeformable.
This is usually the case, relatively speaking, since screws are pre-stressed and produced to
have superior properties compared with normal construction steel.
3. In such a structure, the highest stress is expected around typical concentration points
like the R20 mm corner you see. The structure will be optimized in this sense at the end of
the chapter.

Slide 20: pre-processing: elements


Let’s then start with the pre-processing:
1. Step 1: we import the geometry as a STEP file. We see that the geometry includes a table
and the frame made of structural steel. In this particular case, the geometry was already
divided into several bodies. We also define the material here, which is structural steel.
2. Multibody simulations imply interactions between bodies, so defining the correct contacts
is necessary. In this particular case, since we consider the frame a rigid structure, the
bodies in contact will share a rigid contact type.
3. The next important thing is the external forces. We have our two forces of 100 kN in this
case, which are simply opposite reactions with respect to the manufacturing tool. Look at the
direction and the support of the forces.
4. Last, we define the supports, which are on the bottom. We have four threads which will be
fixed features. That is, the algorithm will not move the faces of the elements in these
regions. How these are defined is essential to the resolution of the algorithm.

Slide 21: pre-processing: Mesh and elements


Now, let’s spend some time looking at the mesh. As said before, the highest stress is expected
around the corners. At these critical points, which exhibit the smallest characteristic
dimensions, we will try to introduce many elements in order to “catch” the stress and
deformations. However, it is not optimal to do this everywhere, since a high number of
elements inevitably increases the resources required to solve the problem. We then use bigger
elements, or elements of a lower order, where nothing interesting is expected to come up.
Let’s analyze the different bodies one-by-one to decide what are the most appropriate meshing
parameters in each case.

1. First, these two plates in the front. No real stresses are expected, since the load path
rather crosses the parts at the back of the frame. Linear shell elements should be enough.
2. The side walls are the most interesting. They are rather large pieces with small radii at
the corners, so it is necessary to use small elements around the corners but rather big ones
on the rest of the parts. Normally, algorithms can automatically select a method to do this.
It is just important to set a sufficient resolution around the corners. In our case, let’s
consider 10 mm as the element size there, the rest being controlled by the computer. We will
later inspect whether this resolution was sufficient. Also, tetrahedral elements will be
applied, although a mix of tetrahedral and hexahedral elements would be possible.
3. The bottom base frame and the overhead holder of the tool are rather thick plates with
holes. In a similar way, the mesh is refined around the holes, but since the problem is not
that critical in these regions, it is not necessary to go into a deep analysis.
4. The base plate is rather a solid block with no small details, so it can take the easiest
selection: solid hexahedra.
5. Finally, the back plate is a thick plate without particular features, which is expected to
be bent. Quadratic shell elements are used in this case.
We can see on the right the resulting mesh. These meshes are usually generated automatically
by the computer, with some parameters user-defined and the others controlled by the
computer when the user hasn’t defined them explicitly. In our case, for example, we have a global
resolution (calculated by the computer) of around 100 mm. Following the constraints we introduced,
the final mesh has a total of around 15000 nodes and 5000 elements.
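The per-body meshing decisions above can be summarized as a small configuration sketch. The body names and the dictionary layout are purely illustrative (this is not a real Ansys API); only the element choices and sizes come from the discussion.

```python
# Hypothetical summary of the per-body meshing choices described above.
# The structure is illustrative, not an actual Ansys setting object.
mesh_settings = {
    "front_plates": {"element": "shell", "order": "linear"},
    "side_walls":   {"element": "tetrahedral", "local_size_mm": 10},
    "base_frame":   {"element": "solid", "refine": "around_holes"},
    "tool_holder":  {"element": "solid", "refine": "around_holes"},
    "base_plate":   {"element": "hexahedral"},
    "back_plate":   {"element": "shell", "order": "quadratic"},
}
global_size_mm = 100  # global resolution, calculated by the mesher itself

for body, settings in mesh_settings.items():
    print(f"{body}: {settings}")
```

Everything not set explicitly here would be left to the automatic mesher, exactly as described above.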
With the mesh created and the parameters set up, we can run the simulation.
Slide 22: Post-processing: results overview
Let’s then check the first results. The interpretation of simulation results is relatively straightforward,
since they are presented as a color code: usually red represents the highest values and blue the lowest.
1. Let’s start by looking at the internal stresses created by the press force of 100 kN. The maximum
local stress is around 85 MPa and, as expected, it is located around these corners.
2. Most of the frame is rather blue, which means the stress is rather low overall except in this
place.
3. It is also worth mentioning that the solution came after the first iteration, which most probably
means that a direct (non-iterative) solution algorithm was selected by the computer to solve the
simulation.
4. Let us now check the deformations. Normally, deformations of the press matter when
accuracy in the manufacturing process is required.
5. Ansys lets us select specific geometries to observe the simulated magnitudes along them.
We can see a maximum deformation of 0.6 mm on the upper part, while the
base plate for the machining is much less deformed, around 0.01 mm, which is practically
negligible taking into account the dimensions in play.

Slide 23: Post-processing: accuracy improvement


Let us now go a bit deeper. As said before, it is essential for the engineer to keep a critical eye on the
results to avoid the “garbage in/garbage out” scenario. The corners being the most critical part, let’s
zoom in to see in more detail what happens locally.
As you can see on the left, the distribution of the local stress is rather irregular, with isolines following
the borders of the elements rather than showing the smooth continuity expected from the
solid mechanics equations. This is usually a symptom of an insufficient density of elements and/or
nodes. Ansys usually allows a quick local refinement of the mesh.
After we change the mesh size parameter from 10 to 5 mm and recalculate, we see that the stress
distribution looks much better. The price of accuracy is a higher number of elements, this time going
up to 16300 nodes, which is not an extremely high price to pay for a better result. The new
maximum is meanwhile around 82 MPa.
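This refinement step is essentially a mesh-convergence check: refine, recompute, and accept the result once the quantity of interest stops changing significantly. A minimal sketch, where the 5% tolerance is an assumed engineering choice, not something fixed by the example:

```python
# Minimal mesh-convergence check: compare the peak stress between two
# refinement levels and accept once the relative change drops below a
# chosen tolerance (5% here, an assumption).
def is_converged(stress_coarse_mpa, stress_fine_mpa, tol=0.05):
    """True when refining the mesh changed the peak stress by less than tol."""
    return abs(stress_fine_mpa - stress_coarse_mpa) / abs(stress_fine_mpa) < tol

# Values from the example: 85 MPa with 10 mm elements, 82 MPa with 5 mm.
print(is_converged(85.0, 82.0))  # relative change ~3.7%, below the 5% tolerance
```

In practice one would keep halving the element size until this check passes, balancing accuracy against node count.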

Slide 24: Post-processing: structural optimization


Let us try a variant with a larger fillet, R100 instead of R20. We know from our solid mechanics
courses that this will relieve the stress concentration.
As predicted, a huge difference can be seen: the maximum stress is now around 43 MPa,
almost 50% lower, in around the same place.
Let us now look more closely, in order to assess the quality of the solution.
The resolution around the corner is still 10 mm. In contrast to the previous case, the
distribution is rather smooth, which can of course be attributed to the change in the geometry. In
these circumstances, the 10 mm resolution is well adapted to the 100 mm radius. Indeed, it can be
seen in the figure on the right that refining the mesh further doesn’t really bring added value, since
the new maximum stress is still close to 43 MPa, but the simulation used 25700 nodes instead of 16300
to obtain more or less the same result.
Hence the importance of analyzing the results in order to subsequently improve the simulation.
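The benefit of the larger fillet can be anticipated with the classic elliptical-notch approximation from solid mechanics, K_t ≈ 1 + 2·sqrt(t/r). This is only a first-order textbook estimate, not the actual press geometry, and the notch depth t used below is a hypothetical value; the real peak stress still comes from the FEM, as above.

```python
import math

# Textbook first-order estimate of a stress-concentration factor at a
# fillet/notch, K_t ~ 1 + 2*sqrt(t/r) (elliptical-notch analogy).
# The depth t is a hypothetical characteristic dimension, for illustration.
def kt_estimate(depth_mm, radius_mm):
    return 1.0 + 2.0 * math.sqrt(depth_mm / radius_mm)

t = 50.0  # assumed characteristic depth, mm
print(kt_estimate(t, 20.0))   # sharp R20 fillet
print(kt_estimate(t, 100.0))  # generous R100 fillet: markedly lower K_t
```

Even this crude formula predicts the qualitative trend seen in the simulation: increasing the radius from 20 to 100 mm cuts the stress concentration substantially.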

Part 4: Future of CAE


Slide 27: Cloud based FEM
• In commercial CAE software, one of the parameters one can normally set before starting a
simulation is how many processors the solver should use, since the resolution algorithms
are capable of dividing the solution domain and distributing the task between several
compute nodes.

• This way, what we call High-Performance Computing (or HPC) uses locally installed server
networks to perform simulations that demand a lot of computing power. For example, as a student I
used the cluster we had available to start tens of CFD calculations automatically, each one
containing hundreds of thousands of elements. That was possible 10 years ago, but the cost of
having a cluster was something that only big institutes, research centers or large companies
could afford.

• More recently, cloud computing platforms like Microsoft Azure or Amazon Web
Services go a step further and propose remote calculation. CAE software providers
like Ansys or COMSOL have also started teaming up with them to provide integrated applications and
new tools that open even more possibilities than just reducing the required infrastructure.

• Cloud computing opens the door to the massive data exchange necessary for product
interconnection (IoT), the acceleration and optimization of design cycles, virtual
prototyping, etc.
• This way, not only does HPC FEM become more accessible to small companies, but it also
permits the mining of the massive amounts of data necessary to implement true data-driven
design techniques like machine learning and deep learning.

Slide 28: Digital Twins and Virtual prototyping


• Speaking of which, we can already take a look at the concept of the digital twin.
• The concept was actually introduced in the 2000s, when it was proposed to integrate several
models of the same product at the system level. Of course, this was not only in the sense of
multiphysics, but also covered aspects like control software, costs, life cycle, etc.

• In this sense, CAE software companies such as Ansys have developed their own concepts.
You can see on the left some schematics explaining the combination of the product as
a physical entity and the product as an asset. Altogether, this leads us to the introduction of
virtual prototypes, which are considered one of the essential future tools for innovation.
• Intuitively, the possibility of manipulating a virtual model of our product to observe the effect of
design changes is an opportunity to drastically reduce the costs of prototyping and testing to
the strict minimum. As seen in the description of the process simulation, virtual prototyping
can also provide masses of easily available information through what are called virtual sensors,
which allows introducing the concepts of data-driven design and IoT.
• Another advantage of digital twins is performing what-if scenario analyses.
• For the optimization of the design from the multiphysical point of view: maximizing the
performance, minimizing the drawbacks, introducing new features, etc.
• This can also be done on the operational side of the product, providing information to reduce
failure risk when performing an FMEA for example, predicting operational life, finding optimal
operating points, predicting the optimal maintenance frequency, etc.

Slide 29: Data- and simulation-driven R&D


• Another topic: the increasing relevance of data-driven approaches.

• From my personal experience, I can give the example of one of the main topics of my PhD,
which was to understand the structural performance of composite materials typically used in
aerospace, under the different atmospheric conditions typically encountered by aircraft. We developed a
mixed numerical-experimental method that trained FEM models of composite structures
based only on their vibrational behavior. The deltas between the outputs of experiment and
simulation allowed us to correct the parameters of a model so as to improve its accuracy. This
approach was the only acceptable one given the complexity of predicting the final elastic
properties of a composite layup with the complex shapes usually found in airplane fuselages
and wings. This example gives you a hint of the potential of combining simulation and
data to improve design.
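The core of such a mixed numerical-experimental approach can be sketched in a toy form: the gap between a measured and a simulated natural frequency is used to correct a model parameter. Since a natural frequency scales with the square root of stiffness, one correction step is E_new = E_old · (f_measured / f_simulated)². The model and all numbers below are illustrative, not the actual composite models from the PhD work.

```python
import math

# Toy model-updating step: use the measured/simulated frequency ratio to
# correct an effective stiffness parameter E. In the toy model f = C*sqrt(E),
# where C lumps together geometry and mass effects (illustrative values).
def simulated_frequency(E, C=2.0):
    return C * math.sqrt(E)

def update_stiffness(E, f_measured):
    f_sim = simulated_frequency(E)
    return E * (f_measured / f_sim) ** 2

E = 100.0          # initial stiffness guess
f_measured = 24.0  # "experimental" frequency
E = update_stiffness(E, f_measured)
print(simulated_frequency(E))  # now matches the measurement: 24.0
```

A realistic version would update many parameters against many modes at once, typically via least squares, but the feedback loop from experimental deltas to model parameters is the same.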

• Using CAE techniques to design a product is rather oriented towards the simulation-based
paradigm, where a model is the starting point.
• However, as we saw during this course, a model is not exactly reality, and
making our simulations closer to reality requires exponentially increasing amounts of
computing power.
• On the other hand, when we take a data-driven approach to design, we face the problems of
having to develop and deploy expensive prototype iterations and of obtaining vast
amounts of data.
• Several companies, like Ansys as you can see on the left, try to combine both approaches in
order to use the best of each to accelerate product design.
• This combined approach introduces machine-learning techniques that use data obtained from
early prototypes to train our basic models and make them perform better, the idea
leading towards the development of a true digital twin. It also provides a gateway to the
introduction of IoT in the digital factory, since digital twins can be used to interact with
physical products, but also to create complete models of bigger systems.

Slide 30: Materials & Manufacturing R&D

• On a more research-oriented note, the new possibilities given by cloud computing and machine
learning can also stimulate research.
• Let us take, for example, materials research, a topic which is closely related to numerical
simulations. CAE can provide an ideal framework for smart materials manufacturing, since
numerical modelling offers a flexible frame where materials can be attributed almost any
kind of property. This can help in understanding some particular behaviors of materials.
• It would also enable simulation-based design for manufacturing, predictive maintenance,
failure prediction, etc.
• The same goes for bio-inspired materials. Many materials observed in nature, especially
those observed at the microscopic level, exhibit exceptional properties such as extremely
high elastic moduli, high impact energy absorption, or permeability, and advanced
numerical simulations are nowadays a precious tool to understand the intricate physics behind
them, since simulation techniques can operate at different space and time scales, linking
macroscopic properties to microscopic elements. This is very useful for studying nano-materials,
for example.
• Another field is additive manufacturing: topology optimization algorithms have existed for
many years and are commercially available. Also, the 3D printing process itself
can be simulated, at least for simple materials like metals and plastic composites. However,
there is still great potential in the additive manufacturing of parts with
complex microstructures, taking inspiration precisely from biology. This is a field
with many promises for the future.
