Optimisation
Glossary Of Terms
- Adjoint Method
-
The adjoint method computes sensitivities of a scalar objective with respect to many design variables at a cost roughly independent of the number of variables. After the primary simulation (e.g. a flow or structural analysis) is solved, a single adjoint equation is solved, yielding gradients of the objective with respect to all design variables at once. This makes high‑dimensional optimisation (e.g., shape, topology) tractable. The method is particularly powerful in CFD, where gradient‑based search over hundreds of shape parameters would otherwise be infeasible.
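As a minimal numerical sketch (all matrices and sensitivities below are illustrative placeholders, not a real model): for a linear system K(p)u = f and objective J = cᵀu, one adjoint solve Kᵀλ = c yields the gradient with respect to every design variable.

```python
import numpy as np

# J = c^T u with K(p) u = f; one adjoint solve K^T lam = c gives
# dJ/dp_i = -lam^T (dK/dp_i) u for every design variable p_i.
K = np.array([[2.0, -1.0], [-1.0, 2.0]])    # stiffness matrix at the current design
f = np.array([1.0, 0.0])                    # load vector
c = np.array([0.0, 1.0])                    # objective J = c^T u

u = np.linalg.solve(K, f)                   # primary (state) solve
lam = np.linalg.solve(K.T, c)               # single adjoint solve, reused for all variables

# Assumed sensitivities dK/dp_i for two illustrative design variables.
dK = [np.array([[1.0, 0.0], [0.0, 0.0]]),
      np.array([[0.0, 0.0], [0.0, 1.0]])]
grad = np.array([-lam @ (dKi @ u) for dKi in dK])
print("J =", c @ u, " dJ/dp =", grad)
```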
- Curve Fitting
-
Curve fitting is the process of constructing a mathematical function that best fits a set of data points. It uses methods such as interpolation, where the function passes exactly through the points, or smoothing, where the function only approximates them.
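A brief sketch of both flavours, using SciPy's least-squares fitter on assumed noisy data (the exponential model and sample values are purely illustrative):

```python
import numpy as np
from scipy.optimize import curve_fit

# Noisy samples of an assumed exponential decay.
rng = np.random.default_rng(0)
x = np.linspace(0.0, 4.0, 20)
y = 2.5 * np.exp(-1.3 * x) + 0.05 * rng.standard_normal(x.size)

# Smoothing (approximate) fit: least-squares estimate of the model parameters.
def model(x, a, b):
    return a * np.exp(-b * x)

(a, b), _ = curve_fit(model, x, y, p0=(1.0, 1.0))
print(f"fitted a={a:.3f}, b={b:.3f}")       # close to the true 2.5 and 1.3

# Exact (interpolating) fit: piecewise-linear interpolation through the points.
print(np.interp(2.0, x, y))
```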
- Design Of Experiments
-
Design Of Experiments (DOE) is a technique used to scientifically determine the location of a number of sampling points in order to obtain good insight into the response of a system. Each sampling point is a combination of input variable values, each bounded by a minimum and a maximum. When multiple input variables are considered, the number of possible combinations rapidly becomes too large to analyse exhaustively. DOE reduces the number of sampling points to a more manageable level. Results of analyses based on DOE are often presented as a response surface, which predicts the behaviour of the system without the need for additional analyses.
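A minimal sketch using SciPy's Latin hypercube sampler; the two input variables and their bounds are illustrative assumptions:

```python
from scipy.stats import qmc

# Latin hypercube DOE: 12 sampling points over two bounded input variables
# (illustrative bounds, e.g. a thickness in mm and a load in kN).
sampler = qmc.LatinHypercube(d=2, seed=1)
unit_samples = sampler.random(n=12)                        # points in [0, 1)^2
samples = qmc.scale(unit_samples, l_bounds=[1.0, 10.0],    # thickness min, load min
                    u_bounds=[5.0, 50.0])                  # thickness max, load max
print(samples)
```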
- DOE
-
see Design Of Experiments
- Deterministic Optimisation
-
Deterministic optimisation assumes all inputs, models, and responses are exact and repeatable, so running the same problem twice yields identical results. Classical techniques (linear programming, nonlinear programming, convex optimisation) leverage this certainty to guarantee convergence properties and efficient search. In contrast to stochastic or probabilistic optimisation, which explicitly address randomness, deterministic methods treat uncertainty separately or via post‑processing. They excel when models are precise and computational cost is critical, but may produce designs sensitive to real‑world variability unless combined with robustness or reliability analyses.
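For instance, a small linear programme (all coefficients are illustrative) is fully deterministic: the same inputs always return the same optimum:

```python
from scipy.optimize import linprog

# Minimise cost c^T x subject to A_ub x <= b_ub and x >= 0.
c = [2.0, 3.0]                      # exact, repeatable cost coefficients
A_ub = [[-1.0, -1.0], [1.0, -2.0]]  # constraint rows
b_ub = [-4.0, 2.0]
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
print(res.x, res.fun)               # identical result on every run
```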
- Evolutionary algorithms (EAs)
-
Evolutionary algorithms (EAs) are population‑based, stochastic optimisers inspired by natural evolution. A set of candidate solutions (“population”) undergoes selection (fitter individuals chosen), crossover (recombining design variables), and mutation (random perturbations). Over generations, the population evolves toward better solutions. EAs excel at global search, handling complex, discontinuous, or noisy objectives, but typically require many function evaluations.
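A short sketch using SciPy's differential evolution, one readily available EA, on the Rastrigin benchmark (the bounds and seed are illustrative):

```python
import numpy as np
from scipy.optimize import differential_evolution

# Population-based global search on a multimodal test function.
def rastrigin(x):
    return 10 * x.size + np.sum(x**2 - 10 * np.cos(2 * np.pi * x))

result = differential_evolution(rastrigin, bounds=[(-5.12, 5.12)] * 2, seed=3)
print(result.x, result.fun)   # should approach the global minimum at the origin
```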
- Genetic Algorithm (GA)
-
Genetic algorithms are a class of evolutionary algorithms where solutions are encoded as “chromosomes”—strings of bits, integers, or real values. Operators mimic biological genetics: crossover exchanges segments between parents; mutation flips bits or perturbs values. Selection pressures guide the population toward higher fitness. GAs handle discrete, mixed, and highly nonlinear problems well, but parameter tuning (population size, crossover/mutation rates) is critical for efficiency.
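A minimal hand-rolled, real-coded GA sketch with tournament selection, one-point crossover, and Gaussian mutation; the toy objective and all rates are assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(4)

def fitness(x):                      # toy objective: minimise a shifted sphere
    return -np.sum((x - 1.5) ** 2)   # negated so higher fitness is better

pop = rng.uniform(-5, 5, size=(30, 2))          # population of real-coded chromosomes
for generation in range(100):
    fit = np.array([fitness(ind) for ind in pop])

    def select():                                # tournament: fitter of two random picks
        i, j = rng.integers(len(pop), size=2)
        return pop[i] if fit[i] > fit[j] else pop[j]

    children = []
    for _ in range(len(pop) // 2):
        p1, p2 = select(), select()
        point = rng.integers(1, p1.size)         # one-point crossover
        children.append(np.concatenate([p1[:point], p2[point:]]))
        children.append(np.concatenate([p2[:point], p1[point:]]))
    pop = np.array(children)
    # Gaussian mutation applied to ~20% of the genes.
    pop += rng.normal(0, 0.1, pop.shape) * (rng.random(pop.shape) < 0.2)

best = pop[np.argmax([fitness(ind) for ind in pop])]
print(best)                                      # should approach (1.5, 1.5)
```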
- Gradient Descent
-
Gradient descent is a first‑order iterative method: at each step, design variables move opposite the gradient of the objective by a step size (learning rate). It’s straightforward and memory‑efficient but can converge slowly in narrow or ill‑conditioned valleys. Variants—momentum, adaptive step‑size (AdaGrad, RMSProp)—ameliorate these issues. In engineering, line‑search or trust‑region enhancements often replace naive constant‑step gradient descent for robustness.
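A naive constant-step sketch on an illustrative ill-conditioned quadratic; the learning rate is an assumption and must stay below the stability limit of the stiffest axis:

```python
import numpy as np

def grad(x):                     # gradient of f(x) = x1^2 + 10*x2^2
    return np.array([2 * x[0], 20 * x[1]])

x = np.array([5.0, 2.0])
learning_rate = 0.05             # a larger step would diverge on the stiff axis
for step in range(200):
    x = x - learning_rate * grad(x)
print(x)                         # approaches the minimum at the origin
```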
- Gradient‑Based Method
-
Gradient‑based methods use derivatives of the objective and constraints to guide the search. First‑order methods (steepest descent, conjugate gradient) use gradient vectors; second‑order methods (Newton, quasi‑Newton) also use Hessian information for curvature. When objective and constraints are smooth and differentiable, these methods converge rapidly near optima. They struggle when derivatives are noisy (as with Monte Carlo simulations) or discontinuous (e.g., topology changes), and may require accurate finite‑difference or adjoint sensitivities.
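A brief sketch: quasi-Newton (BFGS) search with an analytic gradient on the Rosenbrock function, a classic smooth test problem:

```python
import numpy as np
from scipy.optimize import minimize

def f(x):
    return (1 - x[0])**2 + 100 * (x[1] - x[0]**2)**2

def jac(x):                      # analytic gradient supplied to the optimiser
    return np.array([-2 * (1 - x[0]) - 400 * x[0] * (x[1] - x[0]**2),
                     200 * (x[1] - x[0]**2)])

res = minimize(f, x0=[-1.2, 1.0], jac=jac, method="BFGS")
print(res.x)                     # converges to the optimum at (1, 1)
```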
- Metamodel
-
see Surrogate Model
- Multi‑Objective Optimisation
-
Multi‑objective optimisation involves simultaneous optimisation of two or more conflicting objectives. Instead of a single scalar objective, algorithms generate a population of solutions approximating the Pareto front. Techniques include weighted‑sum methods (combining objectives into one scalar), Pareto‑based evolutionary algorithms (NSGA‑II, SPEA2), and goal‑attainment methods. This paradigm is common in engineering when trade‑offs (cost vs. performance, weight vs. strength) must be explicitly evaluated.
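A minimal weighted-sum sketch on two illustrative conflicting objectives; sweeping the weight traces an approximation of the Pareto front:

```python
import numpy as np
from scipy.optimize import minimize_scalar

# f1 = x^2 and f2 = (x - 2)^2 cannot both be zero: a genuine trade-off.
f1 = lambda x: x**2
f2 = lambda x: (x - 2.0)**2

front = []
for w in np.linspace(0.0, 1.0, 11):
    res = minimize_scalar(lambda x: w * f1(x) + (1 - w) * f2(x))
    front.append((f1(res.x), f2(res.x)))
print(front)   # trade-off points approximating the Pareto front
```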
- Objective Function
-
The objective function (also called cost or fitness function) is a mathematical expression that quantifies how “good” a given design or solution is, typically as a single scalar value (though vector-valued objectives occur in multi‑objective problems). In engineering, it might represent weight, compliance, drag, or energy consumption. During optimisation, algorithms seek to minimise or maximise this function subject to constraints. A well‑formulated objective clearly captures the performance metric of interest and often balances competing priorities (e.g., stiffness versus weight) by combining them into a single measure.
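As a toy illustration (the formulas and weights below are assumed placeholders, not a real structural model), a single scalar can combine a stiffness proxy and weight:

```python
# Combine two competing metrics into one scalar via weighting factors.
def objective(thickness):
    compliance = 1.0 / thickness**3     # thicker section -> stiffer (toy model)
    weight = 2.0 * thickness            # thicker section -> heavier (toy model)
    return 0.7 * compliance + 0.3 * weight

print(objective(1.0), objective(2.0))   # compare two candidate designs
```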
- Parametric Optimisation
-
With parametric optimisation a number of input parameters, such as geometry, material properties, and loads, are varied to examine the response of a structure. Techniques such as DOE and response surfaces are used in combination with parametric optimisation to avoid executing a separate analysis for every possible combination of input values.
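A minimal full-factorial sweep sketch; the parameter values and the response formula are illustrative placeholders for a real analysis:

```python
import itertools

thicknesses = [1.0, 2.0, 3.0]          # mm (illustrative levels)
loads = [10.0, 20.0]                   # kN (illustrative levels)

def deflection(t, p):                  # placeholder response, not a real model
    return p / (200.0 * t**3)

for t, p in itertools.product(thicknesses, loads):
    print(f"t={t} mm, P={p} kN -> deflection={deflection(t, p):.4f}")
```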
- Probabilistic Optimisation
-
Probabilistic optimisation is a specialised branch of stochastic optimisation that models uncertain inputs with explicit probability distributions and formulates objectives or constraints in probabilistic terms—such as “limit failure probability to 1 %” or “maximise probability of meeting a performance threshold.” By contrast, deterministic optimisation treats all inputs as exact and handles uncertainty only via post‑processing or robust‑design overlays. Probabilistic methods often employ reliability‑based techniques (e.g. FORM/SORM, surrogate‑based RBDO) to ensure designs satisfy chance‑constraints under variability, directly embedding reliability within the optimisation loop.
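A small chance-constrained sizing sketch, assuming a normally distributed load and toy mass and strength models (none of which come from a real problem):

```python
from scipy.optimize import minimize
from scipy.stats import norm

# Minimise a toy mass while keeping P(load > capacity) below 1%.
LOAD_MEAN, LOAD_STD = 10.0, 1.5         # uncertain load (assumed distribution)

def mass(d):
    return d[0]**2                      # toy cost: minimise section area

def failure_probability(d):
    capacity = 15.0 * d[0]              # toy strength model
    return norm.sf(capacity, loc=LOAD_MEAN, scale=LOAD_STD)  # P(load > capacity)

cons = {"type": "ineq", "fun": lambda d: 0.01 - failure_probability(d)}
res = minimize(mass, x0=[1.0], constraints=[cons], method="SLSQP")
print(res.x, failure_probability(res.x))   # chance constraint active at ~1%
```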
- Particle Swarm Optimisation (PSO)
-
PSO is a population‑based method where each “particle” adjusts its position in the design space based on its own best-known position and the global (or neighbourhood) best. Particles balance exploration and exploitation via velocity updates influenced by cognitive and social components. PSO is simple to implement, has few parameters, and works well for continuous problems, but can suffer premature convergence and struggles with discrete or highly constrained spaces.
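A minimal PSO sketch on an illustrative sphere objective; the inertia and cognitive/social weights are common textbook values, not tuned settings:

```python
import numpy as np

rng = np.random.default_rng(11)

def f(x):                                  # objective to minimise (toy sphere)
    return np.sum(x**2, axis=-1)

n, dim = 20, 2
pos = rng.uniform(-5, 5, (n, dim))
vel = np.zeros((n, dim))
pbest = pos.copy()                         # personal best positions
gbest = pos[np.argmin(f(pos))]             # global best position

w, c1, c2 = 0.7, 1.5, 1.5                  # inertia, cognitive, social weights
for it in range(100):
    r1, r2 = rng.random((n, dim)), rng.random((n, dim))
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = pos + vel
    better = f(pos) < f(pbest)             # update personal bests
    pbest[better] = pos[better]
    gbest = pbest[np.argmin(f(pbest))]     # update global best
print(gbest)                               # approaches the origin
```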
- Response Surface
-
A response surface is a best-fit surface through a set of data points involving multiple variables. It predicts or approximates an output variable as a function of two or more input variables, based on a limited number of calculated or measured data points.
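A brief sketch fitting a full quadratic response surface in two variables by least squares; the sample points and “measured” responses are synthetic stand-ins:

```python
import numpy as np

# Fit y ~ b0 + b1*x1 + b2*x2 + b3*x1^2 + b4*x2^2 + b5*x1*x2 to sampled points.
rng = np.random.default_rng(2)
X = rng.uniform(-1, 1, (15, 2))                          # sampled input points
y = 3 + 2*X[:, 0] - X[:, 1] + 0.5*X[:, 0]*X[:, 1]        # "measured" responses

A = np.column_stack([np.ones(len(X)), X[:, 0], X[:, 1],
                     X[:, 0]**2, X[:, 1]**2, X[:, 0]*X[:, 1]])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
predict = lambda x1, x2: coef @ [1, x1, x2, x1**2, x2**2, x1*x2]
print(predict(0.5, -0.5))    # cheap prediction without a new analysis
```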
- Robust Design Optimisation
-
Robust design optimisation seeks solutions that maintain performance despite variability (manufacturing tolerances, material properties, loading conditions). Objectives may include minimising the mean and variance of performance metrics. Techniques involve probabilistic modelling, worst‑case scenarios, or using surrogate models to approximate statistical moments. Robust designs trade peak performance for stability, ensuring reliability in real‑world conditions.
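A minimal sketch, assuming a toy response model and a normally distributed manufacturing tolerance: rank nominal designs by mean plus three standard deviations of the response:

```python
import numpy as np

rng = np.random.default_rng(5)
perturb = rng.normal(0.0, 0.05, 2000)       # common tolerance scatter (assumed)

def response(d):                            # placeholder performance model
    return (d - 2.0) ** 2 + 0.5 * np.sin(5.0 * d)

def robust_objective(nominal):
    r = response(nominal + perturb)
    return r.mean() + 3.0 * r.std()         # penalise variability, not just the mean

candidates = np.linspace(1.0, 3.0, 41)
best = min(candidates, key=robust_objective)
print(best)                                 # robust nominal, not the nominal optimum
```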
- Simulated Annealing
-
Simulated annealing is a probabilistic technique inspired by metallurgical annealing. Starting at a high “temperature,” the algorithm accepts both improving and, with a probability that decays over time, worsening moves—allowing escape from local optima. As temperature lowers according to a cooling schedule, the probability of uphill moves decreases, focusing on exploitation. SA is easy to implement and can handle discrete spaces, but convergence speed depends heavily on the cooling schedule.
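A minimal SA sketch on an illustrative multimodal function, using a geometric cooling schedule (all constants are assumptions):

```python
import numpy as np

rng = np.random.default_rng(9)

def f(x):                                   # multimodal toy objective
    return x**2 + 10 * np.sin(3 * x)

x, fx = 4.0, f(4.0)
T, cooling = 5.0, 0.995                     # initial temperature and schedule
for step in range(2000):
    x_new = x + rng.normal(0, 0.5)          # random neighbouring move
    f_new = f(x_new)
    # Always accept improvements; accept uphill moves with Boltzmann probability.
    if f_new < fx or rng.random() < np.exp(-(f_new - fx) / T):
        x, fx = x_new, f_new
    T *= cooling                            # geometric cooling
print(x, fx)
```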
- Six Sigma Analysis
-
A typical analysis assumes input parameters (material, geometry, loads, etc.) to have a fixed value. To account for the uncertainty around these fixed values, a safety factor is often applied. This approach is called deterministic.
Designing for Six Sigma provides a mechanism that takes the statistical variation of those input variables into account. The output of a Six Sigma analysis is a statistical distribution of the response of the system. This approach is called probabilistic. A product achieves Six Sigma quality if no more than 3.4 parts per million fail.
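A minimal Monte Carlo sketch of the probabilistic approach: propagate assumed input distributions through a toy stress model and count failures per million (the allowable limit is illustrative):

```python
import numpy as np

rng = np.random.default_rng(6)
n = 1_000_000
load = rng.normal(10.0, 0.5, n)              # kN, statistical scatter
area = rng.normal(2.0, 0.05, n)              # cm^2, manufacturing scatter
stress = load / area                         # response is a distribution, not one number

limit = 6.0                                  # illustrative allowable stress
failures_per_million = np.mean(stress > limit) * 1e6
print(f"mean={stress.mean():.2f}, std={stress.std():.2f}, "
      f"failures/million={failures_per_million:.0f}")
```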
- Stochastic Optimisation
-
Stochastic optimisation encompasses methods that explicitly handle randomness in objective functions or constraints by incorporating sampling—often via Monte Carlo, stochastic approximation, or noisy gradient estimates (e.g. SGD). These algorithms seek solutions that perform well in expectation or whose performance distributions meet certain criteria. Unlike deterministic optimisation, which assumes exact, repeatable models, stochastic approaches embrace variability during the search. They overlap with probabilistic optimisation, but are broader: probabilistic methods focus on chance‑constraints and failure‑probability targets, whereas stochastic methods also include noise‑tolerant or sampling‑based techniques without explicit reliability formulations.
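A brief stochastic-approximation sketch in the Robbins-Monro style: only noisy gradient estimates are available, and a decaying step size still drives convergence in expectation (the target value 3 is an assumption of the toy problem):

```python
import numpy as np

rng = np.random.default_rng(8)

# Minimise E[(x - A)^2] when each evaluation yields only a noisy observation of A.
def noisy_grad(x):
    sample = 3.0 + rng.normal(0.0, 1.0)     # one random observation of A = 3
    return 2.0 * (x - sample)               # unbiased gradient estimate

x = 0.0
for t in range(1, 5001):
    x -= (0.5 / t) * noisy_grad(x)          # decaying step size
print(x)                                    # converges towards 3 in expectation
```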
- Surrogate Model (Metamodel)
-
Surrogate models or metamodels approximate expensive simulations (FEA/CFD) with cheap, analytical or statistical functions—polynomials, kriging (Gaussian processes), radial basis functions, neural networks. They predict objective and constraint values at new points and provide uncertainty estimates for exploration. By replacing costly simulations, they enable global search or sensitivity analysis. Accuracy depends on sample quality; therefore, adaptive sampling (sequential DOE) is often employed.
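A minimal radial-basis-function surrogate sketch using SciPy; the “expensive simulation” is a cheap stand-in so the example runs self-contained:

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

def expensive_simulation(x):                 # placeholder for an FEA/CFD solver
    return np.sin(x[:, 0]) * np.cos(x[:, 1])

rng = np.random.default_rng(10)
X_train = rng.uniform(-2, 2, (30, 2))        # design of experiments samples
y_train = expensive_simulation(X_train)

surrogate = RBFInterpolator(X_train, y_train)
X_new = np.array([[0.5, -1.0], [1.0, 1.0]])
print(surrogate(X_new), expensive_simulation(X_new))   # cheap vs "true" values
```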
- Topology Optimisation
-
Topology optimisation is a mathematical method that optimises the layout of material within a given design space, for a given set of loads, constraints, and boundary conditions. The goal of topology optimisation is to maximise the performance of a system.
In contrast to parametric optimisation, topology optimisation can result in any possible shape within the design space. These organic shapes, typical of topology optimisation, are often difficult to manufacture with traditional production methods and are therefore more suitable for additive manufacturing or advanced casting techniques.
- What-if Scenario
-
What-if scenarios quantify the influence of selected design variables on the performance of a product or process. See also Parametric Optimisation.
Our courses
If you want to learn more about how to use the Finite Element Method more efficiently in your designs, then you might want to take a look at our course Practical Introduction to the Finite Element Method or our course Introduction to Fatigue Analysis with FEA.