The algorithm presented here, P3GA, is an efficient method for what we term “parameterized optimization”: solving an optimization problem as a function of one or more parameters.

\section*{Definition of Parameterized Optimization}

\subsection*{Standard Optimization}

In order to formulate the parameterized search problem mathematically, it is helpful to consider first the non-parameterized case. We will use maximization throughout this article without loss of generality since minimizing \(f\) is equivalent to maximizing \(-f\). Using the notation from the previous subsection, we formulate the standard optimization problem as

\begin{equation}
J^* = \underset{x \in \Omega}{\text{max}} \ \ f(x)
\end{equation}

The set \(\Omega \subset R^n\) is typically defined by geometric constraints, physical laws, and practical considerations. The aim is to find the maximum value of the objective function \(f(x)\) over all \(x \in \Omega\); this optimal value is denoted \(J^*\).
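To make the formulation concrete, the following sketch solves a standard maximization problem by brute-force grid search over a box-shaped feasible set. This is purely illustrative (the function, bounds, and resolution are arbitrary choices, not from the paper), and a real application would use a proper optimizer.

```python
import itertools

def grid_maximize(f, bounds, steps=200):
    """Brute-force maximization of f over a box-shaped feasible set Omega.

    bounds: list of (lo, hi) pairs, one per design variable.
    Returns (J_star, x_star). A simple stand-in for a real optimizer,
    used here only to illustrate the standard (non-parameterized) problem.
    """
    axes = [
        [lo + (hi - lo) * i / (steps - 1) for i in range(steps)]
        for lo, hi in bounds
    ]
    best_x, best_J = None, float("-inf")
    for x in itertools.product(*axes):
        J = f(*x)
        if J > best_J:
            best_J, best_x = J, x
    return best_J, best_x

# Illustrative example: maximize f(x) = -(x - 1)^2 over Omega = [-2, 2],
# whose true optimum is J* = 0 at x = 1.
J_star, x_star = grid_maximize(lambda x: -(x - 1.0) ** 2, [(-2.0, 2.0)])
```

The grid search only approximates the optimum to the grid resolution, which is exactly the kind of cost-accuracy trade-off a genetic algorithm such as P3GA is designed to manage in higher dimensions.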

\subsection*{Parameterized Optimization}

Intuitively, the parameterized search problem can be thought of as an application of standard optimization at every combination of parameter values. We extend Eq. 1 to the general parameterized case as

\begin{equation}
\begin{aligned}
J^*(\theta) = & \ \underset{x \in \Omega}{\text{max}} \ \ f(x,\theta) \\
& \ \theta \in \Theta \subset R^m
\end{aligned}
\end{equation}

The \(m\) parameters are denoted \(\theta\), and \(\Theta\) is the parameter space. The objective function is optimized as a function of the parameters. In other words, the solution to Eq. 2 is the maximum of \(f(x,\theta)\) for every value of \(\theta\) in the parameter space. Thus, the solution, rather than being a single point, is a (potentially infinite) set, denoted \(J^*(\theta)\).
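The definition above can be sketched directly: sample the parameter space, and solve a standard inner maximization over \(x\) for each sampled \(\theta\). The test function below is a hypothetical choice for illustration; an exhaustive sweep like this is exactly what P3GA is designed to avoid.

```python
def parameterized_maximize(f, x_bounds, theta_values, steps=200):
    """Approximate J*(theta) by solving a standard maximization of
    f(x, theta) over x for each sampled parameter value theta.

    x_bounds: (lo, hi) for a single design variable, kept scalar for
    simplicity. Returns a list of (theta, J*(theta)) pairs approximating
    the solution frontier.
    """
    lo, hi = x_bounds
    xs = [lo + (hi - lo) * i / (steps - 1) for i in range(steps)]
    frontier = []
    for theta in theta_values:
        frontier.append((theta, max(f(x, theta) for x in xs)))
    return frontier

# Hypothetical example: f(x, theta) = theta - (x - theta)^2, for which the
# inner maximum is attained at x = theta, so J*(theta) = theta.
frontier = parameterized_maximize(
    lambda x, t: t - (x - t) ** 2, (-2.0, 2.0), [0.0, 0.5, 1.0]
)
```

Each entry of `frontier` is one point on the solution set \(J^*(\theta)\); the bold line in Figure 1 is the continuum limit of such a sweep.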


Figure 1: Notional illustration of a solution set to a parameterized optimization problem

Figure 1 illustrates a solution in the two-dimensional case. The shaded region is the feasible region in the objective space, and the bold line is the maximum of the objective function for each \(\theta\). Note that the \(x\) dimension, not shown, would extend into the page.

In Eq. 2 the parameter space is some set in \(R^m\). This set may be defined simply by upper and lower bounds, as in Figure 1, or by more general (nonlinear) constraints. In some applications, it may be useful to define the parameter space as the image of a function
\(\theta:R^n \rightarrow R^m\), such that \(\Theta=\{\theta \mid \theta=\theta(x),\, x \in \Omega\}\). This formulation is useful when the parameter is a function of the design variables rather than a design variable directly. For example, two analysis models may share an interface property: a \underline{suspension model} and a \underline{chassis model} may share a \underline{spring constant} in their analysis. The spring constant is a function of design variables, e.g., wire thickness, number of coils, etc. In this case, the parameter space is the set of feasible spring constants, and the parameterized optimum is the best suspension/chassis design possible for every feasible spring constant.

\subsection*{Example}

Consider the standard optimization problem

\begin{equation}
J^* = \underset{-2 \leq x_1,x_2 \leq 2}{\text{max}} \ \ f(x_1,x_2)
\end{equation}


where \(f(x_1,x_2)=0.5 + x_1 e^{-x_1^2 - x_2^2}\). It is easy to see that \(J^*=\frac{2 + \sqrt{2e}}{2\sqrt{2e}}\) and occurs at \((x_1,x_2)=\big(\sqrt{\frac{1}{2}},0\big)\). See Fig. 2 for a graphical representation of the optimal solution.

Figure 2: Comparison between standard and parameterized optimization.

Say that we wish to “parameterize” this simple search problem along \(x_1\) as

\begin{equation}
\begin{aligned}
J^*(x_1) = & \ \underset{-2 \leq x_2 \leq 2}{\text{max}} \ \ f(x_1,x_2) \\
& \ x_1 \in (-2,2)
\end{aligned}
\end{equation}

In other words, we want to find the \(x_2\) that maximizes the objective function \(f(x_1,x_2)\) for every possible value of \(x_1\). See Fig. 2 for a graphical representation of the parameterized solution. In this case, the parameterized solution is an infinite set, a 1-dimensional “frontier.”
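The worked example can be checked numerically. The sketch below approximates \(J^*(x_1)\) by a grid search over \(x_2\) for a fixed \(x_1\) (the grid resolution is an arbitrary choice); at \(x_1=\sqrt{1/2}\) the frontier passes through the standard optimum derived above.

```python
import math

def f(x1, x2):
    """Objective from the example: f = 0.5 + x1 * exp(-x1^2 - x2^2)."""
    return 0.5 + x1 * math.exp(-x1**2 - x2**2)

def J_star(x1, steps=401):
    """Approximate the inner maximization over x2 in [-2, 2] for a fixed
    x1 by grid search (illustrative only)."""
    xs2 = [-2.0 + 4.0 * i / (steps - 1) for i in range(steps)]
    return max(f(x1, x2) for x2 in xs2)

# At x1 = sqrt(1/2), J*(x1) should equal the standard optimum
# (2 + sqrt(2e)) / (2 sqrt(2e)) derived in the text.
x1_opt = math.sqrt(0.5)
analytic = (2 + math.sqrt(2 * math.e)) / (2 * math.sqrt(2 * math.e))
```

Sweeping `J_star` over a grid of \(x_1\) values traces out the 1-dimensional frontier shown in Fig. 2.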
\section*{P3GA}

In the proposed algorithm, we adapt the general multiobjective genetic algorithm (MOGA) strategy to be compatible with parameterized optimization problems. Without loss of generality, suppose designer preferences are monotonically increasing in each objective. The multiobjective search problem is formulated as

\begin{equation}
\underset{x \in \Omega}{\text{max}} \ \ f_i(x), \quad i = 1,2,\ldots,m.
\end{equation}

The set of solutions to Eq. 5 is termed the Pareto frontier in the literature. A MOGA typically begins its search with a random set of solutions, called the population. Each member of the population is evaluated and assigned a fitness value relative to two meta-objectives:

  1. Find solutions close to the Pareto frontier.
  2. Maintain diversity in the solution set.

Most MOGAs rely on the concept of \textit{dominance} to assign higher fitness values to members closer to the Pareto frontier. A Pareto dominated alternative is one for which there exists another alternative that is at least as preferred in all objectives and strictly more preferred in at least one objective; see Fig. 3a, where alternatives \((a)\) and \((c)\) are nondominated. We adapt the MOGA scheme to suit our needs by replacing the concept of Pareto dominance with parameterized Pareto dominance (PPD). Under the PPD rule, a dominated alternative is one for which there exists an alternative with a more preferred objective value and an equal parameter value (this definition is compatible with Eq. 2). Thus, under the PPD rule a nondominated alternative is necessarily closer to the solution of the parameterized optimization problem.
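The PPD rule can be written as a short predicate. The encoding below (objectives first, then parameters, larger objectives preferred) is an assumption made for illustration, not the paper's data structure.

```python
def pp_dominates(a, b, n_obj):
    """True if alternative a parametrically Pareto dominates b.

    Each alternative is a tuple whose first n_obj entries are objective
    values (larger preferred) and whose remaining entries are parameter
    values. Under PPD, domination additionally requires *equal*
    parameter values.
    """
    obj_a, par_a = a[:n_obj], a[n_obj:]
    obj_b, par_b = b[:n_obj], b[n_obj:]
    if par_a != par_b:
        # Different parameter values: no dominance relation under PPD.
        return False
    return all(u >= v for u, v in zip(obj_a, obj_b)) and any(
        u > v for u, v in zip(obj_a, obj_b)
    )

# One objective, one parameter: (2.0, 1.0) dominates (1.0, 1.0) because
# the parameter values match, but (3.0, 0.5) dominates neither.
```

Note the strict-equality requirement on the parameter values; the next subsection explains why this makes naive PPD ineffective inside a randomized search.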


Figure 3: Illustration of dominance analysis on randomly generated data using (a) the Pareto dominance rule and (b) the parameterized Pareto rule.

\subsection*{Predictive Dominance}

The complicating factor in the practical application of PPD is that alternatives must have equal parameter values for dominance to occur. Because genetic algorithms are randomized search algorithms, exact equality is unlikely; a straightforward application of PPD to the population members is therefore unlikely to classify any member as dominated. Fig. 3b illustrates this difficulty: none of the members can be said to be dominated, since PPD can only occur along the dashed lines (parameter value equality). If no members are classified as dominated, there is no selection bias toward the solution frontier, resulting in poor search performance.

To overcome this difficulty, we instead perform dominance analysis using points that are predicted to be feasible rather than current members of the population. A nondominated member is one that is not parametrically Pareto dominated by any point that is predicted to be feasible. We refer to this concept as \textit{p-dominance}.

In P3GA, we take the attribute-space image of the current population as training data for a machine learning technique: support vector domain description (SVDD). The resulting SVDD model defines the predicted feasible set, i.e., the shaded region illustrated in Fig. 4. Under this approach, the p-nondominated members are those that are not dominated by any point in the predicted feasible region.
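The sketch below illustrates the role the learned boundary plays in p-dominance. It deliberately replaces SVDD with a crude distance-based feasibility predictor (a stated simplification, not the paper's method), and checks candidates at the member's exact parameter value against that predictor.

```python
import math

def make_feasibility_predictor(training_points, radius):
    """Toy stand-in for SVDD: a point is predicted feasible if it lies
    within `radius` of any training point (the attribute-space image of
    the population). P3GA fits an actual SVDD model; this only mimics
    the learned feasible-region boundary of Fig. 4."""
    def predicted_feasible(p):
        return any(math.dist(p, q) <= radius for q in training_points)
    return predicted_feasible

def p_dominated(member, candidates, predicted_feasible):
    """member and candidates are (objective, parameter) pairs with the
    objective maximized. A member is p-dominated if some *predicted
    feasible* candidate at the same parameter value has a strictly
    better objective value."""
    obj, par = member
    return any(
        t == par and o > obj and predicted_feasible((o, t))
        for o, t in candidates
    )

# Hypothetical data: two observed points define the predicted region.
pred = make_feasibility_predictor([(1.0, 0.0), (2.0, 1.0)], radius=0.5)
```

Because dominance is now tested against predicted-feasible points rather than actual population members, the strict parameter-equality requirement of PPD no longer starves the selection operator of dominated members.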


Figure 4: Illustration of p-dominance

\subsection*{Algorithm Overview}


Figure 5: Flow chart of the Predictive Parametrized Pareto Genetic Algorithm (P3GA). The shaded processes correspond to the novel concepts implemented in P3GA.

Fig. 5 is a flow chart of the proposed algorithm. The novelty of P3GA lies in the use of PPD as the dominance criterion together with the concept of p-dominance. A detailed description of the algorithm can be found in [Galvan, Malak 2015].

An implementation of the algorithm is available on GitHub [under-construction].