CAE Practically - Notes
• Together with CAD and CAM, CAE shares the common CAx denomination of software tools. This umbrella term covers software tools aimed at supporting people involved in the design, analysis, and manufacture of products.
• CAE is essentially about using numerical methods to simulate real physical phenomena
under different conditions by means of their mathematical models.
• I will try to explain each one of these terms written in bold, since they are fundamental concepts for understanding the goal of CAE.
• For example, on the right you can see the Navier-Stokes equations for incompressible fluids (reproduced below for reference), which are basically an expression of the physical reality of flow around an obstacle, inside a tube, or through a turbine. As can be seen, the equations link, through mathematical operations, different physical magnitudes like speed, pressure, flow rate, etc. For instance, these equations, using numerical modelling techniques, can help us apprehend the physical reality of a Pelton turbine, which maybe some of you are familiar with, through its corresponding model. From the model we can perform simulations that will help us understand the physics involved.
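For reference, a standard textbook form of the incompressible Navier-Stokes equations (the notation on the slide may differ slightly):

```latex
% Incompressible Navier-Stokes equations:
% momentum balance and mass conservation (incompressibility)
\rho\left(\frac{\partial \mathbf{u}}{\partial t}
        + (\mathbf{u}\cdot\nabla)\,\mathbf{u}\right)
   = -\nabla p + \mu\,\nabla^{2}\mathbf{u} + \mathbf{f},
\qquad
\nabla\cdot\mathbf{u} = 0
```

Here u is the velocity field, p the pressure, ρ the density, μ the dynamic viscosity and f the body forces: exactly the kind of physical magnitudes the equations link together.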
• Same can be said about many other laws and models: the diffusion equation, the heat equation, Maxwell's equations for electromagnetism, etc.
• Sometimes models can also represent realities other than physical ones, like financial realities (for instance a business valuation) or social realities (the evolution of the population in a given area, the parameters that drive it, etc.).
Slide 6: What is a simulation?
• The simulation of a physical phenomenon is its virtual replication. We can consider, for example, an apple falling off a tree, a process like an accelerating car, or a state, like the internal stresses of a part manufactured by injection molding.
• The solution is then displayed, typically using a visual representation (an image, a video), but it can also be a sound, for example, that helps us relate intuitively to what is happening in the real world in the corresponding experiment.
However, to mirror reality accurately, an increased complexity of the models is inevitable.
This is because reality has many non-negligible aspects: it
• happens in a three-dimensional space; linear (1-D) or plane (2-D) representations, like a bar or a beam, are just projections
• is different under different environmental conditions
• is usually transient and dynamic
• is multiphysical, meaning it combines many different aspects involving the interaction between bodies
• is never ideal, perfect, infinite, point-like, etc.
• When this is taken into account, we can easily see the evolution of the mathematical formulations of these models: linear equations become non-linear equations, ordinary differential equations become partial differential equations, explicit functions become implicit functions; a simple illustration follows below.
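As a simple illustration of this evolution (my own examples, not taken from the slides): a linear constitutive law can become non-linear once the stiffness depends on the strain, and a lumped ODE model can become a PDE once the spatial variation of the field matters.

```latex
% Linear law -> non-linear law (stiffness depends on strain):
\sigma = E\,\varepsilon
\quad\longrightarrow\quad
\sigma = E(\varepsilon)\,\varepsilon

% Lumped cooling of a body (ODE) -> heat equation (PDE):
\frac{dT}{dt} = -\frac{hA}{mc}\,(T - T_\infty)
\quad\longrightarrow\quad
\frac{\partial T}{\partial t} = \alpha\,\nabla^{2} T
```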
• The FEM: it is currently the most frequently used method. Basically, it discretizes the space, so the problem is solved locally and then integrated to obtain an approximate solution. This gives FEM enough flexibility to solve a broad range of physics problems, in structural analysis, dynamics and vibration analysis, heat transfer, chemical engineering, electromagnetics (including electrostatics, magnetostatics, low-frequency electromagnetics, and frequency-domain high-frequency electromagnetic waves), multiphysics, and fluid dynamics. The method owes its success to the flexibility of its elements.
• FVM: it is rather focused on conservation laws. In FVM, the division of the geometry allows the conservation laws to be satisfied exactly at the discrete level, somewhat sacrificing the flexibility present in FEM. It is however a natural choice for fluid mechanics problems, CFD, heat transfer, and chemical engineering; a minimal sketch follows below.
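To make the conservation idea concrete, here is a minimal finite-volume sketch in Python (my own illustration, not course material), solving 1-D linear advection with upwind fluxes; because each cell is updated from the fluxes through its faces, the total quantity is conserved by construction:

```python
import numpy as np

# 1-D linear advection du/dt + a du/dx = 0 on a periodic domain,
# discretized with first-order upwind finite volumes.
a, L, n = 1.0, 1.0, 100              # advection speed, domain length, cells
dx = L / n
dt = 0.5 * dx / a                    # CFL-stable time step
x = (np.arange(n) + 0.5) * dx        # cell centers
u = np.exp(-200.0 * (x - 0.3) ** 2)  # initial bump (cell averages)

mass0 = u.sum() * dx                 # total "mass" before time stepping
for _ in range(100):
    f_right = a * u                  # upwind flux at each right face (a > 0)
    f_left = np.roll(f_right, 1)     # left-face flux = neighbor's right flux
    u = u - dt / dx * (f_right - f_left)

print(u.sum() * dx - mass0)          # ~0: conservation holds to round-off
```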
• Finite-difference method (FDM): it is the easiest to implement, but its discretization is not ideal for problems with more geometrical complexity than squares and rectangles. Currently it is commonly used for weather calculations, in astrophysics, and in seismology; see the sketch below.
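A minimal finite-difference sketch in Python (my own illustration, assuming the 1-D heat equation on a rod with fixed end temperatures) shows why the method is easy to implement on a regular grid:

```python
import numpy as np

# Explicit FTCS scheme for dT/dt = alpha * d2T/dx2 on a uniform grid;
# the regular grid is exactly the "squares and rectangles" kind of
# geometry where FDM is at its best.
alpha, L, n = 1e-4, 1.0, 51     # diffusivity [m^2/s], length [m], grid points
dx = L / (n - 1)
dt = 0.4 * dx**2 / alpha        # stability requires dt <= dx^2 / (2*alpha)
T = np.zeros(n)
T[0], T[-1] = 100.0, 0.0        # boundary conditions: hot and cold ends

for _ in range(5000):
    # centered second difference in space, forward Euler step in time
    T[1:-1] += alpha * dt / dx**2 * (T[2:] - 2.0 * T[1:-1] + T[:-2])

print(T[n // 2])                # ~50.0: approaching the linear steady state
```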
• Pros: as we saw, CAE tools are only an extension of the classic equation solving used to design the first machines at the beginning of the industrial era.
• CAE therefore exhibits the same pros: determining the performance of a design before actually building it, thereby lowering expenses since less lab testing and prototyping are required.
• Optimization is possible before actually producing something, therefore allowing faster and
cheaper engineering iterations before building actual prototypes.
• Building a model also helps us get a better global picture of the consequences of our design decisions. And since we are working with a computer, data management becomes easier.
• Of course, CAE is not the perfect solution to all of our problems. Among the cons, as said before, reality cannot be perfectly simulated, and as a result the last word always goes to experimental tests. An experiment, a demonstrator or a prototype is needed at some point to validate the model.
• Also, it is incumbent on the engineer to judge when the use of CAE tools will have a positive payback for the effort of introducing them during the development of a product. But thanks to the rapid increase in available computational power and the rise of data science and AI, the potential market for CAE can be expected to grow steadily in the coming decades.
Slide 8-9: Main domains
• To finish this introduction, let me present the different domains hiding under the common CAE denomination:
• Let's start with FEA, or finite element analysis, which basically encompasses structural and dynamic analyses. FEA is about studying the behavior of structures under loads while being subjected to different conditions. Frequent situations include bending, traction, buckling, fatigue, shock, vibrations, etc., where the stress or the deformation of a structure is to be evaluated.
• CFD, or computational fluid dynamics, is a similar field specifically applied to fluids, whether liquids or gases, and their behavior under loads of a mechanical nature, like pressure, and often of a thermal nature, like temperature gradients and heat. Pressure, temperature or velocity distributions of fluid particles are often evaluated using CFD.
• Multibody dynamics, or MBD, analyzes the kinematics of assemblies composed of rigid bodies, that is, without considering deformations due to the loads. It is mainly used for systems with complex equations of motion, the idea being to simulate kinematic and dynamic quantities like velocities, accelerations, ranges of motion, contact forces, torques, etc.
• Thermal analysis focuses on heat transfer, and, when combined with CFD, also sometimes on
mass transfer. Usually thermal analysis problems aim to optimize the cooling or the heating of
the analyzed system, whether it means to maximize it, like in the case of heat sinks, or
minimize it, like in the case of building insulation.
• Other lesser-known CAE domains include:
• Computational Electromagnetics, which simulates the behavior of parts and assemblies when
influenced by electromagnetic fields, taking into account their own EM properties. Of course,
this is of big interest in all electronic applications.
• Simulations of metal forming for the manufacturing of geometrically complex parts, an
environment where structures are loaded well beyond their elastic domain, implying non-
linear behaviors.
• Fluid-solid interactions, where the main interest is the interfaces. This also includes the simulation of sound emissions and their propagation.
Part 2: FEM
For this second part, I would like to introduce you to the practical aspects of the simulation process.
What does a simulation look like? What are the steps?
For those who are already familiar with numerical simulation it will be a good refresher, while for those who are seeing this for the first time, it will give a more concrete view of what we have been talking about.
As suggested in the title, for this chapter and the next I will narrow the scope to the particular case of FEM since, as mentioned before, FEM is the most common resolution method for CAE problems. However, most of the steps in the FEM simulation process can be translated without too much effort to other numerical methods like FVM and FDM.
Slide 11: What is FEM?
• So, what is FEM? As said before, FEM is a numerical method that discretizes a highly complex problem, usually represented by a differential equation, and solves it locally for the state variable, which is the physical magnitude for which we try to solve the equation. It can be temperature, deformation, electric field, etc.
• FEM is part of a broader family of numerical methods known as Galerkin methods, developed by mathematicians to solve differential equations.
• For example, you can see on the right how a structure is divided into hexahedral and tetrahedral elements which, combined, build up the complete solid to be analyzed.
Formulations
• I will try to explain briefly the notion of formulation and its importance for understanding the FEM method.
• Initially, the model, in its mathematical form, takes the form of a partial differential equation, which is called the "strong formulation". Its exact solution cannot be found analytically. The solution will depend on the initial conditions of the problem and the conditions on the limits of the system, also called boundary conditions.
• Luckily, as engineers, not having the exact solution isn't an obstacle to finding the information we are looking for in the model. "All models are wrong, but some are useful."
• To work around this obstacle, it was proposed (this can be traced back to the 1940s and 50s; see Turner and Clough, "Stiffness and Deflection Analysis of Complex Structures", 1956) to solve the problem locally in each one of these elements.
• So, what is important to remember is that the essential characteristic of the "weak formulation" is that its integration depends only on the geometry and its discretization. This means that problems sharing the same geometry and discretization have a similar "first part" of the solution. After integration of the weak formulation, we obtain what we call the "discrete formulation". At this point the original problem has become a set of ordinary differential equations (ODEs) or algebraic equations that can be solved by different mathematical techniques, as illustrated below.
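As a concrete illustration (my own example, using the 1-D Poisson problem rather than anything from the slides), the three formulations look like this:

```latex
% Strong formulation: the PDE plus boundary conditions
-u''(x) = f(x) \quad \text{on } (0,1), \qquad u(0) = u(1) = 0

% Weak formulation: multiply by a test function v and integrate by parts;
% the integrals depend only on the geometry and its discretization
\int_0^1 u'(x)\,v'(x)\,dx = \int_0^1 f(x)\,v(x)\,dx
\quad \text{for all admissible } v

% Discrete formulation: with the ansatz u \approx \sum_j u_j \phi_j(x),
% the problem becomes an algebraic system
K\mathbf{u} = \mathbf{f}, \qquad
K_{ij} = \int_0^1 \phi_i'(x)\,\phi_j'(x)\,dx, \qquad
f_i = \int_0^1 f(x)\,\phi_i(x)\,dx
```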
Discretization
• The local integration of the "weak formulation" requires two things:
▪ a geometrical division of the analyzed system, which serves as support or basis,
▪ and the local imposition of a mathematical form on the solution, called the ansatz. This ansatz usually takes the form of a polynomial, although it is theoretically possible to choose any continuous and continuously differentiable function inside the element. As far as the engineer is concerned, nowadays FEM codes use first-degree polynomials (linear elements) and second-degree polynomials (quadratic elements), which suffice to solve practically any problem.
▪ Both of these, the division and the ansatz, constitute the "mesh", so called because of the image conveyed by the action of dividing the system into small pieces, as you can see in the figure on the left.
• The mesh is then the concrete expression of the "discretization" notion; a minimal sketch of how mesh and ansatz combine is given below.
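To make this concrete, here is a minimal sketch in Python (my own illustration, assuming the 1-D Poisson problem -u'' = 1 with fixed ends, not course material) of how a mesh of linear elements and its ansatz turn into the algebraic system K u = f:

```python
import numpy as np

# 1-D FEM: uniform mesh of linear elements for -u'' = 1 on (0,1),
# with u(0) = u(1) = 0.
n_el = 10                           # number of elements (the "mesh")
n_nodes = n_el + 1
h = 1.0 / n_el                      # uniform element size
K = np.zeros((n_nodes, n_nodes))    # global stiffness matrix
f = np.zeros(n_nodes)               # global load vector

# Integrating the weak form with linear "hat" shape functions gives,
# per element, the stiffness (1/h)[[1,-1],[-1,1]] and a load of h/2
# at each of its two nodes (for the constant body load f(x) = 1).
ke = np.array([[1.0, -1.0], [-1.0, 1.0]]) / h
for e in range(n_el):
    dofs = [e, e + 1]               # the two nodes of element e
    K[np.ix_(dofs, dofs)] += ke     # assemble the element contribution
    f[dofs] += h / 2.0

# Impose u(0) = u(1) = 0 by solving only for the interior nodes.
u = np.zeros(n_nodes)
u[1:-1] = np.linalg.solve(K[1:-1, 1:-1], f[1:-1])

print(u.max())                      # ~0.125, the exact max of x(1-x)/2
```

Note how the element matrix depends only on the geometry (the element size h), echoing the point above about the weak formulation.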
Slide 13: Process
• Let’s talk about the process itself.
• We consider three phases: pre-processing, solving and post-processing, which are basically the "before, during, and after" of the computational effort.
• I will explain the details in the next slides, but what is important to take in from this slide is that most of the work of the engineer, hopefully, is carried out during the pre-processing phase. In this phase, the input parameters need to be understood and applied correctly, the engineer using their own experience and skills to ensure that:
o the simulation gives an accurate, convergent solution,
o the use of the resources is optimized.
• The solving part is carried out by the computer, and normally it is invisible to the average user, although FEM codes normally keep track of the solving process in an automated report.
• During post-processing, the engineer takes back control of the simulation. In this part, they critically assess the results and, by verifying the quality of the solution, can extract the information required by the other stakeholders.
• With this in mind, it is important to remember that many aspects of the simulation are either given by the engineering problem itself (as requested by the stakeholders) or are automated.
• Now, let's talk about the order of the elements. As said in the previous slides, most solvers use exclusively these element orders to approach the target solution.
To understand the differences between linear and quadratic, you can use the visual support on the right:
o Intuitively, you can see that the approximation using quadratic elements gives a solution somewhat closer to the exact one, because of the capacity of quadratic functions to describe curved lines. Problems whose mathematical formulation includes higher-order derivatives can then be solved more accurately using a quadratic ansatz rather than a linear one. Of course, this comes at a price: the increased computing power required to run the solver.
o This is why linear elements are preferred when the quadratic approximation is not expected to bring real added value with respect to the linear solution, typically in non-structural problems. Also, linear elements have proven to be more appropriate for certain non-linear problems (not all), like non-linear contact problems, strongly localized deformations or wave propagation.
o Quadratic elements can also help to mitigate two purely numerical artifacts of the application of FEM, which are hour-glassing (introduced by reduced integration) and locking (shear locking in bending problems). A small numerical comparison of the two element orders is sketched below.
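A quick numerical way to see the linear-versus-quadratic difference (my own sketch, not course material): interpolate a curved function with both element orders on the same 4-element mesh and compare the worst-case errors.

```python
import numpy as np

# Piecewise-linear vs piecewise-quadratic ansatz for sin(pi*x) on [0, 1].
def exact(x):
    return np.sin(np.pi * x)

x_fine = np.linspace(0.0, 1.0, 1001)

# Linear elements: nodes at the element ends only.
nodes_lin = np.linspace(0.0, 1.0, 5)
u_lin = np.interp(x_fine, nodes_lin, exact(nodes_lin))

# Quadratic elements: end nodes plus one mid-side node per element,
# combined with the three 1-D Lagrange shape functions on [-1, 1].
u_quad = np.zeros_like(x_fine)
for e in range(4):
    a, b = e / 4.0, (e + 1) / 4.0
    m = 0.5 * (a + b)                          # mid-side node
    mask = (x_fine >= a) & (x_fine <= b)
    xi = (x_fine[mask] - m) / (0.5 * (b - a))  # local coordinate in [-1, 1]
    u_quad[mask] = (0.5 * xi * (xi - 1.0) * exact(a)
                    + (1.0 - xi**2) * exact(m)
                    + 0.5 * xi * (xi + 1.0) * exact(b))

print(np.abs(u_lin - exact(x_fine)).max())   # ~7e-2 (linear)
print(np.abs(u_quad - exact(x_fine)).max())  # ~4e-3 (quadratic)
```

With the same mesh, the quadratic ansatz is roughly an order of magnitude more accurate here, at the cost of more nodes (and hence more computing power) per element.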
• Many other parameters usually controlled by the algorithm are useful to be aware of, but that is already the topic of a more advanced course.
• Also, cases where deformations are confined to a single plane or line, and therefore exist in a dimensionally reduced space, will use lower-dimensional elements, like plane or truss elements.
• Other aspects to consider during meshing, in order to optimize the structure and the running time of the algorithm, are the size of the elements, the type of integration, the meshing method, etc. Seeing these in detail is also out of scope for this introductory course.
1. First, the two plates in the front. No significant stresses are expected there, since the load path rather crosses the parts at the back of the frame. Linear shell elements should be enough.
2. The side walls are the most interesting. They are rather large pieces with small radii at the corners, so it is necessary to use small elements around the corners but rather big ones on the rest of the parts. Normally, algorithms can automatically select a method to do this; it is just important to set a sufficient resolution around the corners. In our case, let's consider 10 mm as the element size around the corners, the rest being controlled by the computer. It will be inspected later whether the resolution was sufficient. Also, tetrahedral elements will be applied, although a mix of tetrahedral + hexahedral would be possible.
3. The bottom base frame and the overhead holder of the tool are rather thick plates with holes. In a similar way, the mesh is refined around the holes, but since the problem is not that critical in these regions, a deep analysis is not needed.
4. The base plate is rather a solid block with no small details, so it can take the easiest selection: solid hexahedral elements.
5. Finally, the back plate is a thick plate without particular features, which is expected to bend. Quadratic shell elements are used in this case.
We can see on the right the resulting mesh. Usually these meshes are automatically generated by the computer with the user-defined parameters, plus others that are controlled by the computer in case the user hasn't defined them explicitly. In our case, for example, we have a global resolution (calculated by the computer) of around 100 mm. Following the impositions we introduced, the final mesh has a total of around 15,000 nodes and 5,000 elements.
With the mesh created and the parameters set up, we can run the simulation.
Slide 22: Post-processing: results overview
Let's then check the first results. The interpretation of simulations is relatively straightforward, since results are presented as a color code: usually red represents the highest values and blue the lowest.
1. Let's start by looking at the internal stresses created by the press forces of 100 kN. The maximum local stress is around 85 MPa, which, as expected, occurs around these corners.
2. Most of the frame is rather blue, which means the stress is rather low overall except in this place.
3. It is also worth mentioning that the solution came after the first iteration, which most probably
means that a direct or explicit resolution algorithm was selected by the computer to solve the
simulation.
4. Let us now check the deformations. Normally, deformations in the press are important when accuracy in the manufacturing process is required.
5. Ansys lets us select specific geometries to observe the simulated magnitudes along them. We can see that a maximum deformation of 0.6 mm can be spotted on the upper part, while the base plate for the machining is much less deformed, around 0.01 mm, which is practically negligible taking into account the dimensions in play.
• This way, what we call High-Performance Computing (or HPC) uses locally installed server networks to perform computationally demanding simulations. For example, as a student I used the cluster we had available to start tens of CFD calculations automatically, each one containing hundreds of thousands of elements. That was possible 10 years ago, but the cost of having a cluster was something that only big institutes, research centers or large companies could afford.
• Recently, new cloud computing platforms like Microsoft Azure or Amazon Web Services go a step further and propose remote computation. Also, CAE software providers like Ansys or COMSOL have started teaming up with them to provide integrated applications and new tools that open even more possibilities beyond reducing the needed infrastructure.
• Cloud computing opens the door to the massive data exchange necessary for product interconnection (IoT), acceleration and optimization of design cycles, virtual prototyping, etc.
• This way, not only does HPC FEM become more accessible to small companies, but it also permits the mining of the massive amounts of data necessary to implement true data-driven design techniques like machine learning and deep learning.
• In this sense, CAE software companies such as Ansys have developed their own concepts. You can see on the left some schematics explaining the combination of the product as a physical entity and the product as an asset. Altogether this leads us to the introduction of virtual prototypes, which are considered one of the essential future tools for innovation.
• Intuitively, the possibility of manipulating a virtual model of our product to observe the effect of design changes is an opportunity to drastically reduce the costs of prototyping and testing to the strict minimum. As seen in the description of the simulation process, virtual prototyping can also provide masses of easily available information through what are called virtual sensors, enabling the concepts of data-driven design and IoT.
• Another advantage of digital twins is performing what-if scenario analyses: optimizing the design from the multiphysical point of view, maximizing the performance and minimizing the drawbacks, introducing new features, etc.
• This can also be done on the operational side of the product, providing information to reduce failure risk when doing an FMEA for example, or predicting operational life, finding optimal operating points, predicting the optimal maintenance frequency, etc.
• From my personal experience I can give the example of one of the main topics of my PhD, which was to understand the structural performance of composite materials typically used in aerospace under the different atmospheric conditions typically encountered by aircraft. We developed a mixed numerical-experimental method that trained FEM models of composite structures based only on their vibrational behavior. The deltas between the outputs of experiment and simulation allowed us to correct the parameters of the model to improve its accuracy. This approach was the only acceptable one given the complexity of predicting the final elastic properties of a composite layup with complex forms like those usually found in airplane fuselages and wings. This example gives you a hint of the potential of combining simulation and data to improve design.
• This is because using CAE techniques to design a product is rather oriented towards the simulation-based paradigm, where a model is the starting point.
• However, as we have seen during this course, a model is not exactly reality, and making our simulations closer to reality requires exponentially increasing amounts of computing power.
• On the other hand, when we take a data-driven approach to design, we have the problem of having to develop and deploy expensive prototype iterations, and also of needing to obtain vast amounts of data.
• Several companies, like Ansys as you can see on the left, try to combine both approaches in order to use the best of each to accelerate product design.
• This combined approach introduces machine-learning techniques that use data obtained from early prototypes to train our basic models and make them perform better, the idea leading towards the development of a true digital twin. It also provides a gateway to the introduction of IoT in the digital factory, since digital twins can be used to interact with physical products, but also to create complete models of bigger systems.
• On a more research-oriented note, the new possibilities offered by cloud computing and machine learning can also stimulate research.
• Let us take for example materials research, a topic which is closely related to numerical simulation. CAE can provide an ideal framework for smart materials manufacturing, since numerical modelling provides a flexible setting where materials can be attributed almost any kind of property. This can allow understanding some particular behaviors of materials.
• Such models would also allow simulation-based design for manufacturing, predictive maintenance, failure prediction, etc.
• The same goes for bio-inspired materials. Many materials observed in nature, especially those observed at the microscopic level, exhibit exceptional properties, for example extremely high elastic moduli, high impact energy absorption, permeability, etc., and advanced numerical simulations are nowadays a precious tool to understand the intricate physics behind them, since simulation techniques can operate at different space and time scales, linking macroscopic properties to microscopic features. This is very useful to study nano-materials, for example.
• Another field is additive manufacturing: topology optimization algorithms have existed for many years and are commercially available. The 3D printing process itself can also be simulated, at least for simple materials like metals and plastic composites. However, there is still big potential in the additive manufacturing of parts with complex microstructures, for example taking inspiration from biology. This is a field with many promises for the future.