Research Article Open Access
On The Frank FREs and Its Application in Optimization Problems
Amin Ghodousian*
Faculty of Engineering Science, College of Engineering, University of Tehran, Tehran, Iran
*Corresponding author: Amin Ghodousian, Faculty of Engineering Science, College of Engineering, University of Tehran, P.O. Box 11365-4563, Tehran, Iran, E-mail: @
Received: June 17, 2018; Accepted: June 23, 2018; Published: June 29, 2018
Citation: Ghodousian A (2018) On The Frank FREs and Its Application in Optimization Problems. J Comp Sci Appl Inform Technol. 3(2): 1-14. DOI: 10.15226/2474-9257/3/2/00130
Abstract
Frank t-norms are a parametric family of continuous Archimedean t-norms, which are also strict when the parameter is positive and finite. Very often, this family of t-norms is also called the family of fundamental t-norms because of the role it plays in several applications. In this paper, we study a nonlinear optimization problem with a special system of Fuzzy Relational Equations (FRE) as its constraints. We firstly investigate the resolution of the feasible solutions set when it is defined with max-Frank composition and present some necessary and sufficient conditions for determining the feasibility, as well as some procedures for simplifying the problem. Since the feasible solutions set of an FRE is non-convex and finding all minimal solutions is an NP-hard problem, conventional nonlinear programming methods may involve high computational complexity. Based on the obtained theoretical properties of the problem, a genetic algorithm is used, which preserves the feasibility of newly generated solutions and does not need to initially find the minimal solutions. Moreover, a method is presented to generate feasible max-Frank FREs as test problems for evaluating the performance of our algorithm. The presented method has been compared with some related works. The obtained results confirm the high performance of the current method in solving such nonlinear problems.

Keywords: Fuzzy relational equations; Nonlinear optimization; Genetic algorithm;
Introduction
In this paper, we study the following nonlinear problem in which the constraints are formed as fuzzy relational equations defined by Frank t-norm:
$$\begin{array}{ll}\mathrm{min} & f\left(x\right)\\ & A\phi x=b\\ & x\in {\left[0,1\right]}^{n}\end{array}\qquad (1)$$
where $I=\left\{1,2,...,m\right\},$ $J=\left\{1,2,...,n\right\},$ $A={\left({a}_{ij}\right)}_{m\times n}$ ($0\le {a}_{ij}\le 1,$ $\forall i\in I$ and $\forall j\in J$) is a fuzzy matrix, $b={\left({b}_{i}\right)}_{m\times 1}$ ($0\le {b}_{i}\le 1,$ $\forall i\in I$) is an $m$-dimensional fuzzy vector, and "$\phi$" is the max-Frank composition, that is, $\phi \left(x,y\right)={T}_{F}^{s}\left(x,y\right)={\mathrm{log}}_{s}\left(1+\frac{\left({s}^{x}-1\right)\left({s}^{y}-1\right)}{s-1}\right)$ in which $s>0$ and $s\ne 1.$
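For concreteness, the composition operator ${T}_{F}^{s}$ can be computed directly. The following Python sketch (the function name is ours) checks the t-norm boundary laws ${T}_{F}^{s}\left(x,1\right)=x$ and ${T}_{F}^{s}\left(x,0\right)=0$:

```python
import math

def frank_tnorm(x, y, s):
    """Frank t-norm T_F^s(x, y) = log_s(1 + (s^x - 1)(s^y - 1)/(s - 1)), s > 0, s != 1."""
    return math.log(1.0 + (s**x - 1.0) * (s**y - 1.0) / (s - 1.0), s)

# Identity law T(x, 1) = x and annihilator T(x, 0) = 0 hold for any admissible s:
assert abs(frank_tnorm(0.7, 1.0, 2.0) - 0.7) < 1e-12
assert frank_tnorm(0.7, 0.0, 2.0) == 0.0
```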

If ${a}_{i}$ is the i’th row of matrix A, then problem (1) can be expressed as follows:
$$\begin{array}{ll}\mathrm{min} & f\left(x\right)\\ & \phi \left({a}_{i},x\right)={b}_{i},\quad i\in I\\ & x\in {\left[0,1\right]}^{n}\end{array}$$
where the constraints mean: $\phi \left({a}_{i},x\right)=\underset{j\in J}{\mathrm{max}}\left\{\phi \left({a}_{ij},{x}_{j}\right)\right\}=\underset{j\in J}{\mathrm{max}}\left\{{T}_{F}^{s}\left({a}_{ij},{x}_{j}\right)\right\}=\underset{j\in J}{\mathrm{max}}\left\{{\mathrm{log}}_{s}\left(1+\frac{\left({s}^{{a}_{ij}}-1\right)\left({s}^{{x}_{j}}-1\right)}{s-1}\right)\right\}={b}_{i},\quad \forall i\in I.$
The above definition can be extended for $s=0,$ $s=1$ and $s=\infty$ by taking limits. So, it is easy to verify that ${T}_{F}^{0}\left(x,y\right)=\mathrm{min}\left\{x,y\right\},$ ${T}_{F}^{1}\left(x,y\right)=xy$ and ${T}_{F}^{\infty }\left(x,y\right)=\mathrm{max}\left\{x+y-1,0\right\},$ that is, the Frank t-norm is converted to the minimum, product and Lukasiewicz t-norms, respectively. The Frank family of t-norms plays a central role in the investigation of the contraposition law for QL-implications [8].
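These limiting cases can be verified numerically. The sketch below approximates the limits with extreme finite values of $s$, so the tolerances are deliberately loose:

```python
import math

def frank_tnorm(x, y, s):
    # Frank t-norm T_F^s(x, y) = log_s(1 + (s^x - 1)(s^y - 1)/(s - 1))
    return math.log(1.0 + (s**x - 1.0) * (s**y - 1.0) / (s - 1.0), s)

x, y = 0.6, 0.8
# s -> 0+ approaches min(x, y)
assert abs(frank_tnorm(x, y, 1e-9) - min(x, y)) < 5e-3
# s -> 1 approaches the product x*y
assert abs(frank_tnorm(x, y, 1.0 + 1e-9) - x * y) < 5e-3
# s -> infinity approaches max(x + y - 1, 0) (Lukasiewicz t-norm)
assert abs(frank_tnorm(x, y, 1e9) - max(x + y - 1.0, 0.0)) < 5e-3
```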

The theory of fuzzy relational equations was firstly proposed by Sanchez [52], who introduced an FRE with max-min composition and applied the model to medical diagnosis in Brouwerian logic. Nowadays, it is well known that many issues associated with a body of knowledge can be treated as FRE problems [44]. In addition to such applications, FRE theory has been applied in many fields, including fuzzy control, discrete dynamic systems, prediction of fuzzy systems, fuzzy decision making, fuzzy pattern recognition, fuzzy clustering, image compression and reconstruction, and so on. Pedrycz [45] categorized and extended two ways of generalizing FRE, in terms of the sets under discussion and the various operations taken into account. Since then, many theoretical improvements have been investigated and many applications have been presented [2,3,5,11,24,28,32,37,38,41,43,46,48,49,57,59,65].

The solvability of FREs and the characterization of their solution sets are the primary (and the most fundamental) subjects concerning FRE problems. Many studies have considered fuzzy relational equations with max-min and max-product compositions. Both compositions are special cases of the max-triangular-norm (max-t-norm) composition. Di Nola, et al. proved that the solution set of an FRE defined by a continuous max-t-norm composition is often a non-convex set that is completely determined by one maximum solution and a finite number of minimal solutions [6]. Over the last decades, the solvability of FREs defined with different max-t compositions has been investigated by many researchers [36,47,50,51,53,55,56,60,64,68].

Optimizing an objective function subject to a system of fuzzy relational equations or inequalities (FRI) is one of the most interesting and ongoing topics among the problems related to FRE (or FRI) theory [1,9,19-27,33,34,39,54,61,66]. By far the most frequently studied aspect is the determination of a minimizer of a linear objective function under the max-min composition [1,20]. A standard approach is to translate this type of problem into a corresponding 0-1 integer linear programming problem, which is then solved using a branch-and-bound method [10,62]. The linear optimization problem was also investigated with the max-product operation [19,26,40]. Moreover, some studies have considered more general operators for linear optimization, replacing the max-min and max-product compositions with a max-t-norm composition, max-average composition or max-star composition [22,25,27,31,34,54,61].

Recently, many interesting generalizations of linear and non-linear programming problems constrained by FRE or FRI have been introduced and developed, based on the composite operations and fuzzy relations used in the definition of the constraints, as well as on developments of the objective function of the problems [4,7,12,20,35,39,58,63,66]. For instance, the linear optimization of bipolar FRE was studied by some researchers where the FRE was defined with max-min composition and max-Lukasiewicz composition [12,35,39]. Ghodousian and Khorram [21] focused on the algebraic structure of two fuzzy relational inequalities $A\phi x\le {b}^{1}$ and $D\phi x\ge {b}^{2}$, and studied a mixed fuzzy system formed by the two preceding FRIs, where $\phi$ is an operator with (closed) convex solutions. Yang [67] studied the optimal solution of minimizing a linear objective function subject to fuzzy relational inequalities where the constraints are defined as ${a}_{i1}\wedge {x}_{1}+{a}_{i2}\wedge {x}_{2}+...+{a}_{in}\wedge {x}_{n}\ge {b}_{i}$ for $i=1,...,m$ and $a\wedge b=\mathrm{min}\left\{a,b\right\}.$ In [20], the authors introduced the FRI-FC problem $\mathrm{min}\left\{{c}^{T}x\ :\ A\phi x\preccurlyeq b,\ x\in {\left[0,1\right]}^{n}\right\},$ where $\phi$ is the max-min composition and "$\preccurlyeq$" denotes the relaxed or fuzzy version of the ordinary inequality "$\le$". Another interesting generalization of such optimization problems is related to the objective function. Wu, et al. [63] presented an efficient method to optimize a linear fractional programming problem under FRE with max-Archimedean t-norm composition.
Dempe and Ruziyeva [4] generalized the fuzzy linear optimization problem by considering fuzzy coefficients. Dubey, et al. [7] studied linear programming problems involving interval uncertainty modeled using intuitionistic fuzzy sets. On the other hand, Lu and Fang [42] considered a single non-linear objective function subject to FRE constraints with the max-min operator, and proposed a genetic algorithm for solving the problem. In [29], the authors used the same method for the max-product operator. Also, Ghodousian, et al. [15,16,18] presented genetic algorithms to solve non-linear problems with FRE constraints defined by the Lukasiewicz, Dubois-Prade and Sugeno-Weber operators.

Generally, there are three important difficulties related to FRE or FRI problems. Firstly, in order to completely determine the solution set of an FRE or FRI, we must initially find all the minimal solutions, and this is an NP-hard problem. Secondly, a feasible region formed as an FRE or FRI is often a non-convex set [21]. Finally, FREs and FRIs as feasible regions lead to optimization problems with highly non-linear constraints. Due to these difficulties, although analytical methods can find exact optimal solutions, they may involve high computational complexity for high-dimensional problems (especially if the simplification processes cannot considerably reduce the problem).

In this paper, we use the genetic algorithm proposed in [21] for solving problem (1), which keeps the search inside the feasible region without finding any minimal solution or checking the feasibility of newly generated solutions. Since the feasibility of problem (1) essentially depends on the t-norm (here, the Frank t-norm) used in the definition of the constraints, a method is also presented to construct feasible test problems. More precisely, we construct a feasible problem by randomly generating a fuzzy matrix and a fuzzy vector according to some criteria resulting from the necessary and sufficient conditions. It is proved that the max-Frank fuzzy relational equations constructed by this method have a non-empty solution set. Moreover, a comparison is made between the current method and the algorithms presented in [42] and [29].

The remainder of the paper is organized as follows. Section 2 takes a brief look at some basic results on the feasible solutions set of problem (1). In Section 3, the GA is briefly described. Finally, the experimental results are presented in Section 4 and a conclusion is provided in Section 5.
Basic Properties of Max-Frank FRE
Characterization of feasible solutions set
This section describes the basic definitions and structural properties concerning problem (1) that are used throughout the paper. For the sake of simplicity, let ${S}_{{T}_{F}^{s}}\left({a}_{i},{b}_{i}\right)$ denote the feasible solutions set of i‘th equation, that is, ${S}_{{T}_{F}^{s}}\left({a}_{i},{b}_{i}\right)=\left\{x\in {\left[0,1\right]}^{n}\text{\hspace{0.17em}}\text{\hspace{0.17em}}:\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\underset{j=1}{\overset{n}{\mathrm{max}}}\text{\hspace{0.17em}}\left\{{T}_{F}^{s}\left({a}_{ij},{x}_{j}\right)\right\}={b}_{i}\right\}$. Also, let ${S}_{{T}_{F}^{s}}\left(A,b\right)$ denote the feasible solutions set of problem (1). Based on the foregoing notations, it is clear that ${S}_{{T}_{F}^{s}}\left(A,b\right)=\underset{i\in I}{\cap }{S}_{{T}_{F}^{s}}\left({a}_{i},{b}_{i}\right)$.
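For intuition, membership in ${S}_{{T}_{F}^{s}}\left(A,b\right)$ can be tested numerically. A minimal Python sketch (helper names and the toy system in the assertions are ours):

```python
import math

def frank_tnorm(x, y, s):
    # Frank t-norm T_F^s(x, y) = log_s(1 + (s^x - 1)(s^y - 1)/(s - 1))
    return math.log(1.0 + (s**x - 1.0) * (s**y - 1.0) / (s - 1.0), s)

def satisfies(A, b, x, s, tol=1e-9):
    """Check x ∈ S_{T_F^s}(A, b): max_j T_F^s(a_ij, x_j) = b_i for every row i."""
    return all(abs(max(frank_tnorm(aij, xj, s) for aij, xj in zip(row, x)) - bi) <= tol
               for row, bi in zip(A, b))

# One equation with s = 2: T(1, 0.5) = 0.5 attains b_1, T(0.3, 0) = 0 stays below it.
assert satisfies([[1.0, 0.3]], [0.5], [0.5, 0.0], 2.0)
assert not satisfies([[1.0, 0.3]], [0.5], [1.0, 0.0], 2.0)
```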

Definition 1: For each $i\in I,$ we define ${J}_{i}=\left\{j\in J\text{\hspace{0.17em}}\text{\hspace{0.17em}}:\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}{a}_{ij}\ge {b}_{i}\right\}.$

According to definition 1, we have the following lemmas, which are easily proved by the monotonicity and identity law of t-norms, definition 1 and the definition of Frank t-norm.

Lemma 1: Let $i\in I.$ If $j\notin {J}_{i},$ then ${T}_{F}^{s}\left({a}_{ij},{x}_{j}\right)<{b}_{i}$ for all ${x}_{j}\in \left[0,1\right].$

Lemma 2: Let $i\in I$ and $j\in {J}_{i}.$

(a) If ${x}_{j}>{\mathrm{log}}_{s}\left(1+\frac{\left({s}^{{b}_{i}}-1\right)\left(s-1\right)}{{s}^{{a}_{ij}}-1}\right)$ and ${b}_{i}\ne 0,$ then ${T}_{F}^{s}\left({a}_{ij},{x}_{j}\right)>{b}_{i}.$

(b) If ${x}_{j}={\mathrm{log}}_{s}\left(1+\frac{\left({s}^{{b}_{i}}-1\right)\left(s-1\right)}{{s}^{{a}_{ij}}-1}\right)$ and ${b}_{i}\ne 0,$ then ${T}_{F}^{s}\left({a}_{ij},{x}_{j}\right)={b}_{i}.$

(c) If ${x}_{j}<{\mathrm{log}}_{s}\left(1+\frac{\left({s}^{{b}_{i}}-1\right)\left(s-1\right)}{{s}^{{a}_{ij}}-1}\right)$ and ${b}_{i}\ne 0,$ then ${T}_{F}^{s}\left({a}_{ij},{x}_{j}\right)<{b}_{i}.$

(d) If ${a}_{ij}={b}_{i}=0,$ then ${T}_{F}^{s}\left({a}_{ij},{x}_{j}\right)={b}_{i},\text{\hspace{0.17em}}\forall {x}_{j}\in \left[0,1\right].$

(e) If ${a}_{ij}>{b}_{i}=0,$ then ${T}_{F}^{s}\left({a}_{ij},{x}_{j}\right)={b}_{i}$ for ${x}_{j}=0,$ and ${T}_{F}^{s}\left({a}_{ij},{x}_{j}\right)>{b}_{i}$ for $0<{x}_{j}\le 1.$
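Parts (a)-(c) say that, for $j\in {J}_{i}$ and ${b}_{i}\ne 0,$ the equation ${T}_{F}^{s}\left({a}_{ij},{x}_{j}\right)={b}_{i}$ has the unique solution ${x}_{j}={\mathrm{log}}_{s}\left(1+\left({s}^{{b}_{i}}-1\right)\left(s-1\right)/\left({s}^{{a}_{ij}}-1\right)\right).$ A quick numerical check (the values below are chosen arbitrarily, and the function names are ours):

```python
import math

def frank_tnorm(x, y, s):
    # Frank t-norm T_F^s(x, y) = log_s(1 + (s^x - 1)(s^y - 1)/(s - 1))
    return math.log(1.0 + (s**x - 1.0) * (s**y - 1.0) / (s - 1.0), s)

def x_hat(a, b, s):
    """Unique x with T_F^s(a, x) = b when a >= b > 0 (lemma 2, part (b))."""
    return math.log(1.0 + (s**b - 1.0) * (s - 1.0) / (s**a - 1.0), s)

s, a, b = 2.0, 0.9, 0.4
xj = x_hat(a, b, s)
assert abs(frank_tnorm(a, xj, s) - b) < 1e-12
# Strict monotonicity around the solution (parts (a) and (c)):
assert frank_tnorm(a, xj + 0.05, s) > b > frank_tnorm(a, xj - 0.05, s)
```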

Lemma 3 below gives a necessary and sufficient condition for the feasibility of the sets ${S}_{{T}_{F}^{s}}\left({a}_{i},{b}_{i}\right)$ $\left(i\in I\right).$

Lemma 3: For a fixed $i\in I,$ ${S}_{{T}_{F}^{s}}\left({a}_{i},{b}_{i}\right)\ne \varnothing$ if and only if ${J}_{i}\ne \varnothing .$

Proof: The proof is similar to the proof of Lemma 3 in [15].

Definition 2: Suppose that $i\in I$ and ${S}_{{T}_{F}^{s}}\left({a}_{i},{b}_{i}\right)\ne \varnothing$ (hence, ${J}_{i}\ne \varnothing$ from lemma 3). Let ${\stackrel{^}{x}}_{i}=\left[{\left({\stackrel{^}{x}}_{i}\right)}_{1},{\left({\stackrel{^}{x}}_{i}\right)}_{2},...,{\left({\stackrel{^}{x}}_{i}\right)}_{n}\right]\in {\left[0,1\right]}^{n}$ where the components are defined as follows:
$${\left({\stackrel{^}{x}}_{i}\right)}_{j}=\begin{cases}{\mathrm{log}}_{s}\left(1+\dfrac{\left({s}^{{b}_{i}}-1\right)\left(s-1\right)}{{s}^{{a}_{ij}}-1}\right) & j\in {J}_{i}\ \text{and}\ {b}_{i}\ne 0,\\ 0 & j\in {J}_{i}\ \text{and}\ {a}_{ij}>{b}_{i}=0,\\ 1 & \text{otherwise.}\end{cases}$$
Also, for each $j\in {J}_{i},$ we define ${\stackrel{⌣}{x}}_{i}\left(j\right)=\left[{\stackrel{⌣}{x}}_{i}{\left(j\right)}_{1},{\stackrel{⌣}{x}}_{i}{\left(j\right)}_{2},...,{\stackrel{⌣}{x}}_{i}{\left(j\right)}_{n}\right]\in {\left[0,1\right]}^{n}$ such that
$${\stackrel{⌣}{x}}_{i}{\left(j\right)}_{k}=\begin{cases}{\mathrm{log}}_{s}\left(1+\dfrac{\left({s}^{{b}_{i}}-1\right)\left(s-1\right)}{{s}^{{a}_{ij}}-1}\right) & k=j\ \text{and}\ {b}_{i}\ne 0,\\ 0 & \text{otherwise.}\end{cases}$$
The following theorem characterizes the feasible region of the i‘th relational equation $\left(i\in I\right).$

Theorem 1: Let $i\in I.$ If ${S}_{{T}_{F}^{s}}\left({a}_{i},{b}_{i}\right)\ne \varnothing ,$ then ${S}_{{T}_{F}^{s}}\left({a}_{i},{b}_{i}\right)=\underset{j\in {J}_{i}}{\cup }\left[\text{\hspace{0.17em}}{\stackrel{⌣}{x}}_{i}\left(j\right)\text{\hspace{0.17em}},\text{\hspace{0.17em}}{\stackrel{^}{x}}_{i}\right].$

Proof: For a more general case, see Corollary 2.3 in [21].

From theorem 1, ${\stackrel{^}{x}}_{i}$ is the unique maximum solution and ${\stackrel{⌣}{x}}_{i}\left(j\right)$‘s $\left(j\in {J}_{i}\right)$ are the minimal solutions of ${S}_{{T}_{F}^{s}}\left({a}_{i},{b}_{i}\right).$
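Theorem 1 can be illustrated numerically for a single equation: any point of a box $\left[{\stackrel{⌣}{x}}_{i}\left(j\right),{\stackrel{^}{x}}_{i}\right]$ solves it. A sketch with an arbitrary row ${a}_{i}$ and right-hand side ${b}_{i}$ of our own choosing (helper names ours):

```python
import math
import random

def frank_tnorm(x, y, s):
    # Frank t-norm T_F^s(x, y) = log_s(1 + (s^x - 1)(s^y - 1)/(s - 1))
    return math.log(1.0 + (s**x - 1.0) * (s**y - 1.0) / (s - 1.0), s)

def cap(a, b, s):
    # log_s(1 + (s^b - 1)(s - 1)/(s^a - 1)): the unique solution of T_F^s(a, x) = b for a >= b > 0
    return math.log(1.0 + (s**b - 1.0) * (s - 1.0) / (s**a - 1.0), s)

s, b = 2.0, 0.5
a = [0.9, 0.7, 0.2]                                        # one equation; J_i = {1, 2} (1-based)
x_max = [cap(aj, b, s) if aj >= b else 1.0 for aj in a]    # maximum solution x̂_i
x_min = [cap(a[0], b, s), 0.0, 0.0]                        # minimal solution x̆_i(1)

# Any point of the box [x̆_i(1), x̂_i] solves the equation (theorem 1):
rng = random.Random(1)
x = [rng.uniform(lo, hi) for lo, hi in zip(x_min, x_max)]
assert abs(max(frank_tnorm(aj, xj, s) for aj, xj in zip(a, x)) - b) < 1e-9
```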

Definition 3: Let ${\stackrel{^}{x}}_{i}$ $\left(i\in I\right)$ be the maximum solution of ${S}_{{T}_{F}^{s}}\left({a}_{i},{b}_{i}\right).$ We define $\overline{X}=\underset{i\in I}{\mathrm{min}}\left\{{\stackrel{^}{x}}_{i}\right\}.$

Definition 4: Let $e:I\to J$ be a function such that $e\left(i\right)\in {J}_{i}$ for each $i\in I,$ and let $E$ be the set of all such vectors $e$. For the sake of convenience, we represent each $e\in E$ as an $m$-dimensional vector $e=\left[{j}_{1},{j}_{2},...,{j}_{m}\right]$ in which ${j}_{k}=e\left(k\right).$

Definition 5: Let $e=\left[{j}_{1},{j}_{2},...,{j}_{m}\right]\in E.$ We define $\underset{_}{X}\left(e\right)=\left[\underset{_}{X}{\left(e\right)}_{1},\underset{_}{X}{\left(e\right)}_{2},...,\underset{_}{X}{\left(e\right)}_{n}\right]\in {\left[0,1\right]}^{n},$ where $\underset{_}{X}{\left(e\right)}_{k}=\underset{i\in I}{\mathrm{max}}\left\{{\stackrel{⌣}{x}}_{i}{\left({j}_{i}\right)}_{k}\right\}$ for each $k\in J.$

From the relation ${S}_{{T}_{F}^{s}}\left(A,b\right)=\underset{i\in I}{\cap }{S}_{{T}_{F}^{s}}\left({a}_{i},{b}_{i}\right)$ and Theorem 1, the following theorem is easily attained.

Theorem 2: ${S}_{{T}_{F}^{s}}\left(A,b\right)=\underset{e\in E}{\cup }\left[\text{\hspace{0.17em}}\underset{_}{X}\left(e\right)\text{\hspace{0.17em}},\text{\hspace{0.17em}}\overline{X}\right].$

As a consequence, it turns out that $\overline{X}$ is the unique maximum solution and $\underset{_}{X}\left(e\right)$'s $\left(e\in E\right)$ are the minimal solutions of ${S}_{{T}_{F}^{s}}\left(A,b\right).$ Moreover, we have the following corollary that is directly resulted from theorem 2.

Corollary 1(first necessary and sufficient condition): ${S}_{{T}_{F}^{s}}\left(A,b\right)\ne \varnothing$ if and only if $\overline{X}\in {S}_{{T}_{F}^{s}}\left(A,b\right)$ .
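Corollary 1 yields a simple feasibility test: build $\overline{X}$ from definitions 2 and 3 and plug it into the system. A sketch on a toy system (the matrices below are ours, not the paper's example; function names ours):

```python
import math

def frank_tnorm(x, y, s):
    # Frank t-norm T_F^s(x, y) = log_s(1 + (s^x - 1)(s^y - 1)/(s - 1))
    return math.log(1.0 + (s**x - 1.0) * (s**y - 1.0) / (s - 1.0), s)

def max_solution(A, b, s):
    """Candidate maximum solution X̄ = min_i x̂_i (definitions 2 and 3)."""
    n = len(A[0])
    Xbar = [1.0] * n
    for row, bi in zip(A, b):
        for j, aij in enumerate(row):
            if aij >= bi:                    # j ∈ J_i
                if bi > 0:
                    cap = math.log(1.0 + (s**bi - 1.0) * (s - 1.0) / (s**aij - 1.0), s)
                elif aij > 0:                # a_ij > b_i = 0 forces x_j = 0
                    cap = 0.0
                else:                        # a_ij = b_i = 0: no restriction
                    cap = 1.0
                Xbar[j] = min(Xbar[j], cap)
    return Xbar

def feasible(A, b, s, tol=1e-9):
    """Corollary 1: S(A, b) is non-empty iff X̄ solves the system."""
    Xbar = max_solution(A, b, s)
    return all(abs(max(frank_tnorm(aij, xj, s) for aij, xj in zip(row, Xbar)) - bi) <= tol
               for row, bi in zip(A, b))

assert feasible([[0.9, 0.2], [0.5, 0.8]], [0.4, 0.6], 2.0)
assert not feasible([[0.3]], [0.5], 2.0)     # J_1 = ∅, so the system is infeasible
```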

Example 1: Consider the problem below with Frank t-norm
where $\phi \left(x,y\right)={T}_{F}^{2}\left(x,y\right)={\mathrm{log}}_{2}\left(1+\left({2}^{x}-1\right)\left({2}^{y}-1\right)\right)$ (i.e., $s=2$). By definition 1, we have ${J}_{1}=\left\{1,4\right\},$ ${J}_{2}=\left\{1,5\right\},$ ${J}_{3}=\left\{1,4,5\right\},$ ${J}_{4}=\left\{4,5\right\}$ and ${J}_{5}=\left\{1,2,3,4,5,6\right\}.$ The unique maximum solution and the minimal solutions of each equation are obtained by definition 2 as follows, where 0.783316030456544 has been rounded to 0.7833:

Therefore, by theorem 1 we have

${S}_{{T}_{F}^{s}}\left({a}_{1},{b}_{1}\right)=\left[\text{\hspace{0.17em}}{\stackrel{⌣}{x}}_{1}\left(1\right)\text{\hspace{0.17em}},\text{\hspace{0.17em}}{\stackrel{^}{x}}_{1}\right]\cup \left[\text{\hspace{0.17em}}{\stackrel{⌣}{x}}_{1}\left(4\right)\text{\hspace{0.17em}},\text{\hspace{0.17em}}{\stackrel{^}{x}}_{1}\right],$

${S}_{{T}_{F}^{s}}\left({a}_{2},{b}_{2}\right)=\left[\text{\hspace{0.17em}}{\stackrel{⌣}{x}}_{2}\left(1\right)\text{\hspace{0.17em}},\text{\hspace{0.17em}}{\stackrel{^}{x}}_{2}\right]\cup \left[\text{\hspace{0.17em}}{\stackrel{⌣}{x}}_{2}\left(5\right)\text{\hspace{0.17em}},\text{\hspace{0.17em}}{\stackrel{^}{x}}_{2}\right],$

${S}_{{T}_{F}^{s}}\left({a}_{3},{b}_{3}\right)=\left[\text{\hspace{0.17em}}{\stackrel{⌣}{x}}_{3}\left(1\right)\text{\hspace{0.17em}},\text{\hspace{0.17em}}{\stackrel{^}{x}}_{3}\right]\cup \left[\text{\hspace{0.17em}}{\stackrel{⌣}{x}}_{3}\left(4\right)\text{\hspace{0.17em}},\text{\hspace{0.17em}}{\stackrel{^}{x}}_{3}\right]\cup \left[\text{\hspace{0.17em}}{\stackrel{⌣}{x}}_{3}\left(5\right)\text{\hspace{0.17em}},\text{\hspace{0.17em}}{\stackrel{^}{x}}_{3}\right]$ and

${S}_{{T}_{F}^{s}}\left({a}_{4},{b}_{4}\right)=\left[\text{\hspace{0.17em}}{\stackrel{⌣}{x}}_{4}\left(4\right)\text{\hspace{0.17em}},\text{\hspace{0.17em}}{\stackrel{^}{x}}_{4}\right]\cup \left[\text{\hspace{0.17em}}{\stackrel{⌣}{x}}_{4}\left(5\right)\text{\hspace{0.17em}},\text{\hspace{0.17em}}{\stackrel{^}{x}}_{4}\right]$ and

${S}_{{T}_{F}^{s}}\left({a}_{5},{b}_{5}\right)=\left[{0}_{1×6},{\stackrel{^}{x}}_{5}\right]$ where ${0}_{1×6}$ is a zero vector. From definition 3, it is easy to verify that $\overline{X}\in {S}_{{T}_{F}^{s}}\left(A,b\right).$ Therefore, the above problem is feasible by corollary 1. Finally, the cardinality of set $E$ is equal to 24 (definition 4). So, we have 24 solutions $\underset{_}{X}\left(e\right)$ associated to the 24 vectors $e$. For example, for $e=\left[1,5,1,5,6\right],$ we obtain $\underset{_}{X}\left(e\right)=\mathrm{max}\left\{{\stackrel{⌣}{x}}_{1}\left(1\right),{\stackrel{⌣}{x}}_{2}\left(5\right),{\stackrel{⌣}{x}}_{3}\left(1\right),{\stackrel{⌣}{x}}_{4}\left(5\right),{\stackrel{⌣}{x}}_{5}\left(6\right)\right\}$ from definition 5, which means
Simplification Processes
In practice, there are often some components of matrix $A$ that have no effect on the solutions to problem (1). Therefore, we can simplify the problem by changing the values of these components to zeros. For this reason, various simplification processes have been proposed by researchers. We refer the interested reader to [21], where a brief review of such processes is given. Here, we present two simplification techniques based on the Frank t-norm.

Definition 6: If a value changing in an element, say ${a}_{ij}$ , of a given fuzzy relation matrix $A$ has no effect on the solutions of problem (1), this value changing is said to be an equivalence operation.

Corollary 2: Suppose that ${T}_{F}^{s}\left({a}_{i{j}_{0}},{x}_{{j}_{0}}\right)<{b}_{i}$ for every $x\in {S}_{{T}_{F}^{s}}\left(A,b\right),$ where $i\in I$ and ${j}_{0}\in J.$ In this case, it is obvious that $\underset{j=1}{\overset{n}{\mathrm{max}}}\left\{{T}_{F}^{s}\left({a}_{ij},{x}_{j}\right)\right\}={b}_{i}$ is equivalent to $\underset{\begin{array}{l}j=1\\ j\ne {j}_{0}\end{array}}{\overset{n}{\mathrm{max}}}\left\{{T}_{F}^{s}\left({a}_{ij},{x}_{j}\right)\right\}={b}_{i},$ that is, "resetting ${a}_{i{j}_{0}}$ to zero" has no effect on the solutions of problem (1) (since component ${a}_{i{j}_{0}}$ only appears in the i'th constraint of problem (1)). Therefore, if ${T}_{F}^{s}\left({a}_{i{j}_{0}},{x}_{{j}_{0}}\right)<{b}_{i},\ \forall x\in {S}_{{T}_{F}^{s}}\left(A,b\right),$ then "resetting ${a}_{i{j}_{0}}$ to zero" is an equivalence operation.

Lemma 4 (first simplification): Suppose that ${j}_{0}\notin {J}_{i},$ for some $i\in I$ and ${j}_{0}\in J.$ Then, “resetting ${a}_{i{j}_{0}}$ to zero” is an equivalence operation.

Proof: From corollary 2, it is sufficient to show that ${T}_{F}^{s}\left({a}_{i{j}_{0}},{x}_{{j}_{0}}\right)<{b}_{i}$ for each $x\in {S}_{{T}_{F}^{s}}\left(A,b\right).$ But, from lemma 1 we have ${T}_{F}^{s}\left({a}_{i{j}_{0}},{x}_{{j}_{0}}\right)<{b}_{i}$ for all ${x}_{{j}_{0}}\in \left[0,1\right].$ Thus the result follows.

Lemma 5 (second simplification): Suppose that ${j}_{0}\in {J}_{{i}_{1}}$ and ${b}_{{i}_{1}}\ne 0,$ where ${i}_{1}\in I$ and ${j}_{0}\in J.$ If at least one of the following conditions hold, then “resetting ${a}_{{i}_{1}{j}_{0}}$ to zero” is an equivalence operation:

(a) There exists some ${i}_{2}\in I$ such that ${j}_{0}\in {J}_{{i}_{2}},$ ${b}_{{i}_{2}}\ne 0$ and ${\mathrm{log}}_{s}\left(1+\left({s}^{{b}_{{i}_{2}}}-1\right)\left(s-1\right)/\left({s}^{{a}_{{i}_{2}{j}_{0}}}-1\right)\right)<{\mathrm{log}}_{s}\left(1+\left({s}^{{b}_{{i}_{1}}}-1\right)\left(s-1\right)/\left({s}^{{a}_{{i}_{1}{j}_{0}}}-1\right)\right).$

(b) There exists some ${i}_{2}\in I$ such that ${b}_{{i}_{2}}=0$ and ${a}_{{i}_{2}{j}_{0}}>0.$

Proof: (a) Similar to the proof of lemma 4, we show that ${T}_{F}^{s}\left({a}_{{i}_{1}{j}_{0}},{x}_{{j}_{0}}\right)<{b}_{{i}_{1}}$ for each $x\in {S}_{{T}_{F}^{s}}\left(A,b\right).$ Consider an arbitrary feasible solution $x\in {S}_{{T}_{F}^{s}}\left(A,b\right).$ Since $x\in {S}_{{T}_{F}^{s}}\left(A,b\right),$ it turns out that ${T}_{F}^{s}\left({a}_{{i}_{1}{j}_{0}},{x}_{{j}_{0}}\right)>{b}_{{i}_{1}}$ never holds. So, assume that ${T}_{F}^{s}\left({a}_{{i}_{1}{j}_{0}},{x}_{{j}_{0}}\right)={b}_{{i}_{1}},$ that is, ${\mathrm{log}}_{s}\left(1+\left({s}^{{a}_{{i}_{1}{j}_{0}}}-1\right)\left({s}^{{x}_{{j}_{0}}}-1\right)/\left(s-1\right)\right)={b}_{{i}_{1}}.$ Since ${b}_{{i}_{1}}\ne 0,$ from lemma 2 we conclude that ${x}_{{j}_{0}}={\mathrm{log}}_{s}\left(1+\left({s}^{{b}_{{i}_{1}}}-1\right)\left(s-1\right)/\left({s}^{{a}_{{i}_{1}{j}_{0}}}-1\right)\right).$ So, by the assumption, we have ${\mathrm{log}}_{s}\left(1+\left({s}^{{b}_{{i}_{2}}}-1\right)\left(s-1\right)/\left({s}^{{a}_{{i}_{2}{j}_{0}}}-1\right)\right)<{x}_{{j}_{0}}.$ Therefore, lemma 2 (part (a)) implies ${T}_{F}^{s}\left({a}_{{i}_{2}{j}_{0}},{x}_{{j}_{0}}\right)>{b}_{{i}_{2}},$ which contradicts $x\in {S}_{{T}_{F}^{s}}\left(A,b\right).$

(b) By the assumption, we have ${j}_{0}\in {J}_{{i}_{2}}.$ Now, the result similarly follows by a simpler argument.
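Lemmas 4 and 5 can be applied mechanically to a matrix. The following sketch resets components per lemma 4 and parts (a) and (b) of lemma 5 (one pass only; function names and the toy data in the assertions are ours):

```python
import math

def threshold(a, b, s):
    # log_s(1 + (s^b - 1)(s - 1)/(s^a - 1)): the unique solution of T_F^s(a, x) = b for a >= b > 0
    return math.log(1.0 + (s**b - 1.0) * (s - 1.0) / (s**a - 1.0), s)

def simplify(A, b, s):
    """Return à by resetting components of A to zero per lemmas 4 and 5 (single pass)."""
    m, n = len(A), len(A[0])
    At = [row[:] for row in A]
    for i1 in range(m):
        for j in range(n):
            if At[i1][j] < b[i1]:                      # lemma 4: j ∉ J_{i1}
                At[i1][j] = 0.0
            elif b[i1] > 0:                            # lemma 5: j ∈ J_{i1}, b_{i1} ≠ 0
                for i2 in range(m):
                    if i2 == i1:
                        continue
                    if b[i2] == 0 and A[i2][j] > 0:    # part (b)
                        At[i1][j] = 0.0
                        break
                    if (b[i2] > 0 and A[i2][j] >= b[i2]
                            and threshold(A[i2][j], b[i2], s) < threshold(A[i1][j], b[i1], s)):
                        At[i1][j] = 0.0                # part (a)
                        break
    return At

# Lemma 4 zeroes entries below the right-hand side; lemma 5(b) zeroes a_11 via the zero row.
assert simplify([[0.9, 0.2], [0.5, 0.8]], [0.4, 0.6], 2.0) == [[0.9, 0.0], [0.0, 0.8]]
assert simplify([[0.9], [0.5]], [0.4, 0.0], 2.0) == [[0.0], [0.5]]
```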

Example 2: Consider the problem presented in example 1. From the first simplification (lemma 4), "resetting the components ${a}_{ij}$ with $j\notin {J}_{i}$ to zeros" are equivalence operations; in all of these cases, ${a}_{ij}<{b}_{i}.$ Also, from the second simplification (lemma 5, part (a)), we can change the value of component ${a}_{21}$ to zero; because ${a}_{21}={b}_{2}$ (i.e., $1\in {J}_{2}$), ${a}_{11}\ge {b}_{1}$ (i.e., $1\in {J}_{1}$), ${b}_{1}\ne 0$ and $0.7833={\mathrm{log}}_{s}\left(1+\left({s}^{{b}_{1}}-1\right)\left(s-1\right)/\left({s}^{{a}_{11}}-1\right)\right)<{\mathrm{log}}_{s}\left(1+\left({s}^{{b}_{2}}-1\right)\left(s-1\right)/\left({s}^{{a}_{21}}-1\right)\right)=1$. Moreover, from lemma 5 (part (b)), we can also change the values of some components, such as ${a}_{44},$ to zeros with no effect on the solutions set of the problem (since ${b}_{5}=0$ and ${a}_{54}>0$).

In addition to simplifying the problem, a necessary and sufficient feasibility condition is also derived from lemma 5. Before formally presenting the condition, some useful notations are introduced. Let $\stackrel{˜}{A}$ denote the simplified matrix resulting from $A$ after applying the simplification processes (lemmas 4 and 5). Also, similar to definition 1, let ${\stackrel{˜}{J}}_{i}=\left\{j\in J:{\stackrel{˜}{a}}_{ij}\ge {b}_{i}\right\},$ where ${\stackrel{˜}{a}}_{ij}$ denotes the $\left(i,j\right)$'th component of matrix $\stackrel{˜}{A}$. The following theorem gives a necessary and sufficient condition for the feasibility of problem (1).

Theorem 3 (second necessary and sufficient condition): ${S}_{{T}_{F}^{s}}\left(A,b\right)\ne \varnothing$ if and only if ${\stackrel{˜}{J}}_{i}\ne \varnothing$ for each $i\in I.$

Proof: Since ${S}_{{T}_{F}^{s}}\left(A,b\right)={S}_{{T}_{F}^{s}}\left(\stackrel{˜}{A},b\right)$ from lemmas 4 and 5, it is sufficient to show that ${S}_{{T}_{F}^{s}}\left(\stackrel{˜}{A},b\right)\ne \varnothing$ if and only if ${\stackrel{˜}{J}}_{i}\ne \varnothing ,$ $\forall i\in I.$ Let ${S}_{{T}_{F}^{s}}\left(\stackrel{˜}{A},b\right)\ne \varnothing .$ Therefore, ${S}_{{T}_{F}^{s}}\left({\stackrel{˜}{a}}_{i},{b}_{i}\right)\ne \varnothing ,$ $\forall i\in I,$ where ${\stackrel{˜}{a}}_{i}$ denotes the i'th row of matrix $\stackrel{˜}{A}.$ Now, lemma 3 implies ${\stackrel{˜}{J}}_{i}\ne \varnothing ,$ $\forall i\in I.$ Conversely, suppose that ${\stackrel{˜}{J}}_{i}\ne \varnothing ,$ $\forall i\in I.$ Again, by using lemma 3 we have ${S}_{{T}_{F}^{s}}\left({\stackrel{˜}{a}}_{i},{b}_{i}\right)\ne \varnothing ,$ $\forall i\in I.$ By contradiction, suppose that ${S}_{{T}_{F}^{s}}\left(\stackrel{˜}{A},b\right)=\varnothing .$ Therefore, $\overline{X}\notin {S}_{{T}_{F}^{s}}\left(\stackrel{˜}{A},b\right)$ from corollary 1, and then there exists ${i}_{0}\in I$ such that $\overline{X}\notin {S}_{{T}_{F}^{s}}\left({\stackrel{˜}{a}}_{{i}_{0}},{b}_{{i}_{0}}\right).$ Since $\underset{j\notin {\stackrel{˜}{J}}_{{i}_{0}}}{\mathrm{max}}\left\{{T}_{F}^{s}\left({\stackrel{˜}{a}}_{{i}_{0}j},{\overline{X}}_{j}\right)\right\}<{b}_{{i}_{0}}$ (from lemma 1), we must have either $\underset{j\in {\stackrel{˜}{J}}_{{i}_{0}}}{\mathrm{max}}\left\{{T}_{F}^{s}\left({\stackrel{˜}{a}}_{{i}_{0}j},{\overline{X}}_{j}\right)\right\}>{b}_{{i}_{0}}$ or $\underset{j\in {\stackrel{˜}{J}}_{{i}_{0}}}{\mathrm{max}}\left\{{T}_{F}^{s}\left({\stackrel{˜}{a}}_{{i}_{0}j},{\overline{X}}_{j}\right)\right\}<{b}_{{i}_{0}}.$ Anyway, since $\overline{X}\le {\stackrel{^}{x}}_{{i}_{0}}$ (i.e., ${\overline{X}}_{j}\le {\left({\stackrel{^}{x}}_{{i}_{0}}\right)}_{j},$ $\forall j\in J$), we have $\underset{j\in {\stackrel{˜}{J}}_{{i}_{0}}}{\mathrm{max}}\left\{{T}_{F}^{s}\left({\stackrel{˜}{a}}_{{i}_{0}j},{\overline{X}}_{j}\right)\right\}\le \underset{j\in {\stackrel{˜}{J}}_{{i}_{0}}}{\mathrm{max}}\left\{{T}_{F}^{s}\left({\stackrel{˜}{a}}_{{i}_{0}j},{\left({\stackrel{^}{x}}_{{i}_{0}}\right)}_{j}\right)\right\}={b}_{{i}_{0}},$ and then the former case never holds. Therefore, $\underset{j\in {\stackrel{˜}{J}}_{{i}_{0}}}{\mathrm{max}}\left\{{T}_{F}^{s}\left({\stackrel{˜}{a}}_{{i}_{0}j},{\overline{X}}_{j}\right)\right\}<{b}_{{i}_{0}},$ which implies ${b}_{{i}_{0}}\ne 0$ and ${T}_{F}^{s}\left({\stackrel{˜}{a}}_{{i}_{0}j},{\overline{X}}_{j}\right)<{b}_{{i}_{0}},$ $\forall j\in {\stackrel{˜}{J}}_{{i}_{0}}.$ Hence, by lemma 2, we must have ${\overline{X}}_{j}<{\mathrm{log}}_{s}\left(1+\left({s}^{{b}_{{i}_{0}}}-1\right)\left(s-1\right)/\left({s}^{{\stackrel{˜}{a}}_{{i}_{0}j}}-1\right)\right),$ $\forall j\in {\stackrel{˜}{J}}_{{i}_{0}}.$ On the other hand, ${\left({\stackrel{^}{x}}_{{i}_{0}}\right)}_{j}={\mathrm{log}}_{s}\left(1+\left({s}^{{b}_{{i}_{0}}}-1\right)\left(s-1\right)/\left({s}^{{\stackrel{˜}{a}}_{{i}_{0}j}}-1\right)\right)$ for each $j\in {\stackrel{˜}{J}}_{{i}_{0}}.$ Therefore, ${\overline{X}}_{j}<{\left({\stackrel{^}{x}}_{{i}_{0}}\right)}_{j},$ $\forall j\in {\stackrel{˜}{J}}_{{i}_{0}},$ and then from definitions 2 and 3, for each $j\in {\stackrel{˜}{J}}_{{i}_{0}}$ there must exist ${i}_{j}\in I$ such that either $j\in {\stackrel{˜}{J}}_{{i}_{j}}$ and ${\overline{X}}_{j}={\left({\stackrel{^}{x}}_{{i}_{j}}\right)}_{j}={\mathrm{log}}_{s}\left(1+\left({s}^{{b}_{{i}_{j}}}-1\right)\left(s-1\right)/\left({s}^{{\stackrel{˜}{a}}_{{i}_{j}j}}-1\right)\right)$ or $j\in {\stackrel{˜}{J}}_{{i}_{j}}$ and ${\stackrel{˜}{a}}_{{i}_{j}j}>{b}_{{i}_{j}}=0.$ Until now, we have proved that ${b}_{{i}_{0}}\ne 0$ and, for each $j\in {\stackrel{˜}{J}}_{{i}_{0}},$ there exists ${i}_{j}\in I$ such that either $j\in {\stackrel{˜}{J}}_{{i}_{j}}$ and ${\mathrm{log}}_{s}\left(1+\left({s}^{{b}_{{i}_{j}}}-1\right)\left(s-1\right)/\left({s}^{{\stackrel{˜}{a}}_{{i}_{j}j}}-1\right)\right)<{\mathrm{log}}_{s}\left(1+\left({s}^{{b}_{{i}_{0}}}-1\right)\left(s-1\right)/\left({s}^{{\stackrel{˜}{a}}_{{i}_{0}j}}-1\right)\right)$ (because ${\mathrm{log}}_{s}\left(1+\left({s}^{{b}_{{i}_{j}}}-1\right)\left(s-1\right)/\left({s}^{{\stackrel{˜}{a}}_{{i}_{j}j}}-1\right)\right)={\overline{X}}_{j}<{\mathrm{log}}_{s}\left(1+\left({s}^{{b}_{{i}_{0}}}-1\right)\left(s-1\right)/\left({s}^{{\stackrel{˜}{a}}_{{i}_{0}j}}-1\right)\right)$) or ${b}_{{i}_{j}}=0$ and ${\stackrel{˜}{a}}_{{i}_{j}j}>0.$ But in both cases, "resetting ${\stackrel{˜}{a}}_{{i}_{0}j}$ to zero" would be an equivalence operation from the parts (a) and (b) of lemma 5, respectively, so the simplification process would have already set ${\stackrel{˜}{a}}_{{i}_{0}j}=0<{b}_{{i}_{0}},$ i.e., $j\notin {\stackrel{˜}{J}}_{{i}_{0}},$ that is a contradiction.

Remark 1: Since ${S}_{{T}_{F}^{s}}\left(A,b\right)={S}_{{T}_{F}^{s}}\left(\stackrel{˜}{A},b\right)$ (from lemmas 4 and 5), we can rewrite all the previous definitions and results in a simpler manner by replacing $A$ and ${J}_{i}$ with $\stackrel{˜}{A}$ and ${\stackrel{˜}{J}}_{i},$ respectively.
A summary of the GA
In this section, the genetic algorithm proposed in [15] is briefly discussed. Since the feasible region of problem (1) is non-convex, a convex subset of the feasible region is firstly introduced. Consequently, the proposed GA can easily generate the initial population by randomly choosing individuals from this convex feasible subset. In the last part of this section, a method is presented to generate random feasible max-Frank fuzzy relational equations.
Initialization
The initial population is given by randomly generating the individuals inside the feasible region. For this purpose, we firstly find a convex subset of the feasible solutions set, that is, we find a set $F$ such that $F\subseteq {S}_{{T}_{F}^{s}}\left(A,b\right)$ and $F$ is convex. Then, the initial population is generated by randomly selecting individuals from set $F$.

Definition 7: Suppose that $S_{T_F^s}(\tilde{A},b)\neq \varnothing$. For each $i\in I$, let $\breve{x}_i=\left[(\breve{x}_i)_1,(\breve{x}_i)_2,\dots,(\breve{x}_i)_n\right]\in [0,1]^n$, where the components are defined as follows:
Also, we define $\underline{X}=\max_{i\in I}\{\breve{x}_i\}$.

Remark 2: According to definition 2 and remark 1, it is clear that for a fixed $i\in I$ and Therefore, from definitions 5 and 7 we have and $\forall e\in E.$ Thus

Example 3: Consider the problem presented in example 1. According to example 2, the simplified matrix $\tilde{A}$ is
From definition 7, we compute the vectors $\breve{x}_i$ and then $\underline{X}$. Therefore, the set $F=[\underline{X}\,,\,\overline{X}]$ is obtained as a collection of intervals:
By generating random numbers in the corresponding intervals, we acquire one initial individual:

The algorithm for generating the initial population is simply obtained as follows:

Algorithm 1 (Initial Population):
Selection Strategy
Suppose that the individuals in the population are sorted by rank from best to worst; that is, individual $pop\left(r\right)$ has rank $r$. The probability ${P}_{r}$ of choosing the $r$-th individual is given by the following formula:
where the weight is the value of a Gaussian function with argument $r$, mean 1, and standard deviation $q\,{S}_{pop}$, where $q$ is a parameter of the algorithm.
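A sketch of this selection scheme, under the assumption that relation (2) simply normalizes the Gaussian weights so the probabilities sum to one (the exact normalization used in the paper is not reproduced in this section):

```python
import math

def rank_probabilities(s_pop, q):
    """Probability P_r of selecting the rank-r individual, using Gaussian
    weights with mean 1 and standard deviation q * s_pop."""
    sigma = q * s_pop
    w = [math.exp(-((r - 1) ** 2) / (2 * sigma ** 2)) for r in range(1, s_pop + 1)]
    total = sum(w)
    return [wr / total for wr in w]

p = rank_probabilities(s_pop=50, q=0.1)
assert abs(sum(p) - 1.0) < 1e-12
# better-ranked individuals are never less likely to be chosen
assert all(p[r] >= p[r + 1] for r in range(len(p) - 1))
```

With $q=0.1$ and $S_{pop}=50$ (the values used in section 4), the standard deviation is 5, so selection pressure is concentrated on roughly the best ten individuals.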
Mutation Operator
As usual, suppose that $S_{T_F^s}(A,b)\neq \varnothing$. So, from theorem 3 we have where (see definition 1 and remark 1).

Definition 8: Let $I^{+}=\left\{\,i\in I \,:\, b_i\neq 0\,\right\}$. So, we define where $|\tilde{J}_i|$ denotes the cardinality of the set $\tilde{J}_i$.

The mutation operator is defined as follows:
Algorithm 2 (Mutation operator):
Crossover operator
In section 2, it was proved that $\overline{X}$ is the unique maximum solution of ${S}_{{T}_{F}^{s}}\left(A,b\right).$ By using this result, the crossover operator is stated as follows:
Algorithm 3 (Crossover operator):
Construction of Test Problems
There are usually several ways to generate a feasible FRE defined with different t-norms. In what follows, we present a procedure to generate random feasible max-Frank fuzzy relational equations:
Algorithm 4 (construction of feasible Max-Frank FRE):

By the following theorem, it is proved that algorithm 4 always generates random feasible max-Frank fuzzy relational equations.
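One standard way to see why such a construction can always be made feasible is to build the right-hand side from a known solution: draw $A$ and a vector $x^{*}$, then set $b_i=\max_j T_F^s(a_{ij},x^{*}_j)$, so that $x^{*}$ satisfies the system by construction. The sketch below follows this idea in the spirit of algorithm 4, whose exact steps are not reproduced here:

```python
import math, random

def frank_t(s, x, y):
    """Frank t-norm for s > 1."""
    return math.log(1 + (s**x - 1) * (s**y - 1) / (s - 1), s)

def random_feasible_fre(m, n, s=2.0, rng=random.Random(3)):
    """Build a feasible m-by-n max-Frank system: draw A and a solution
    x_star, then define b by the max-T_F^s composition itself."""
    A = [[rng.random() for _ in range(n)] for _ in range(m)]
    x_star = [rng.random() for _ in range(n)]
    b = [max(frank_t(s, A[i][j], x_star[j]) for j in range(n)) for i in range(m)]
    return A, b, x_star

A, b, x_star = random_feasible_fre(4, 6)
# x_star satisfies every equation of the generated system by construction
assert all(abs(max(frank_t(2.0, A[i][j], x_star[j]) for j in range(6)) - b[i]) < 1e-12
           for i in range(4))
```

The resulting system is feasible regardless of the random draws, which is what makes such constructions convenient for generating test problems.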

Theorem 4: The solutions set $S_{T_F^s}(A,b)$ of the FRE (with Frank t-norm) constructed by algorithm 4 is not empty. Proof. According to step 3 of the algorithm, Therefore, To complete the proof, we show that By contradiction, suppose that the second simplification process resets $a_{i j_i}$ to zero for some $i\in I$. So, $b_i\neq 0$ and there must exist some $k\in I$ such that either $b_k\neq 0$ and $\log_s\left(1+(s^{b_k}-1)(s-1)/(s^{a_{k j_i}}-1)\right)<\log_s\left(1+(s^{b_i}-1)(s-1)/(s^{a_{i j_i}}-1)\right)$, or $b_k=0$ and $a_{k j_i}>0$. In the former case, we note that $a_{k j_i}>\log_s\left(1+(s^{b_k}-1)(s^{a_{i j_i}}-1)/(s^{b_i}-1)\right)$. In either case, we contradict step 4.
Experimental Results and Comparison with Related Works
In this section, we present experimental results for evaluating the performance of our algorithm. First, we apply our algorithm to 8 test problems described in Appendix A. The test problems were randomly generated in different sizes by algorithm 4 given in section 3. Since the objective function is an ordinary nonlinear function, we take the objective functions from the well-known source Test Examples for Nonlinear Programming Codes [30]. In section 4.2, we make a comparison against the algorithms proposed in [42] and [29]. To perform a fair comparison, we follow the same experimental setup for the parameters, with $\gamma =1.005$ as suggested by the authors of [29] and [42]. Since the authors did not explicitly report the size of the population, we consider ${S}_{pop}=50$ for all three GAs. As mentioned before, we set $q=0.1$ in relation (2) for the current GA. Moreover, in order to compare our algorithm with the max-min GA [42] (max-product GA [29]), we modified all the definitions used in the current GA based on the minimum (product) t-norm. For example, we used the simplification process presented in [42] for the minimum t-norm, and the simplification process given in [19,29] for the product t-norm. Finally, 30 experiments were performed for all the GAs on the eight test problems reported in Appendix B; that is, each of the preceding GAs was executed 30 times for each test problem. All the test problems included in Appendix A are defined with $s=2$ in ${T}_{F}^{s}$. Also, the maximum number of iterations is 100 for all the methods.
Performance of the Max-Frank GA
To verify the solutions found by the max-Frank GA, the optimal solutions of the test problems are also needed. Since $S_{T_F^s}(A,b)$ is formed as the union of a finite number of closed convex cells (theorem 2), the optimal solutions are acquired by the following procedure:

1. Computing all the convex cells of the Frank FRE.
2. Searching the optimal solution for each convex cell.
3. Finding the global optimum by comparing these local optimal solutions.

The computational results for the eight test problems (see Appendix A) are shown in Table 1 and Figures 1-8. In Table 1, the results are averaged over 30 runs, and the average best-so-far solution, the average mean fitness, and the median of the best solution in the last iteration are reported.

Table 2 includes the best results found by the max-Frank GA and by the above procedure. According to Table 2, the optimal solutions computed by the max-Frank GA and the optimal solutions obtained by the above procedure match very well. Tables 1 and 2 demonstrate the ability of the max-Frank GA to detect the optimal solutions of problem (1). Also, the good convergence rate of the max-Frank GA can be concluded from Table 1 and Figures 1-8.
Table 1: Results of applying the max-Frank GA to the eight test problems of Appendix A. The results have been averaged over 30 runs. Maximum number of iterations=100.
Test problems | Average best-so-far | Median best-so-far | Average mean fitness
A.1 | 1.73375 | 1.73375 | 1.74575
A.2 | -2.5903 | -2.5907 | -2.5885
A.3 | -0.10266 | -0.10266 | -0.10249
A.4 | 2.677598 | 2.677598 | 2.677661
A.5 | 68.60251 | 68.60251 | 68.60253
A.6 | -0.46007 | -0.46313 | -0.45953
A.7 | 0.001365 | 0.001365 | 0.001372
A.8 | 105.996 | 105.996 | 105.996
Table 2: Comparison of the solutions found by Max-Frank GA and the optimal values of the test problems described in Appendix A
Test problems | Solutions of max-Frank GA | Optimal values
A.1 | 1.73375 | 1.73372
A.2 | -2.5907 | -2.5908
A.3 | -0.10266 | -0.10266
A.4 | 2.677598 | 2.677598
A.5 | 68.60251 | 68.60251
A.6 | 0.001365 | 0.001364
A.7 | -1.19221 | -1.19221
A.8 | 105.996 | 105.9953
Figure 1: The performance of the max-Frank GA on test problem A.1.
Figure 2: The performance of the max-Frank GA on test problem A.2.
Figure 3: The performance of the max-Frank GA on test problem A.3.
Figure 4: The performance of the max-Frank GA on test problem A.4.
Figure 5: The performance of the max-Frank GA on test problem A.5.
Figure 6: The performance of the max-Frank GA on test problem A.6.
Figure 7: The performance of the max-Frank GA on test problem A.7.
Figure 8: The performance of the max-Frank GA on test problem A.8.
Comparisons with Other Works
As mentioned before, we can make a comparison between the current GA, max-min GA and max-product GA [42,29]. For this purpose, all the test problems described in Appendix B have been designed in such a way that they are feasible for both the minimum and product t-norms.

The first comparison is against the max-min GA: we apply our algorithm (modified for the minimum t-norm) to the test problems, taking the minimum as the t-norm. The results are shown in Table 3, including the optimal objective values found by the current GA and the max-min GA. As shown in this table, the current GA finds better solutions for test problems 1, 5 and 6, and the same solutions for the other test problems.

Table 4 shows that the current GA finds the optimal values faster than the max-min GA and hence has a higher convergence rate, even when the solutions are the same. The only exception is test problem 8, for which all the results are identical. In all the tables, results marked with "*" indicate the better cases.

The second comparison is against the max-product GA. In this case, we apply our algorithm (modified for the product t-norm) to the same test problems, taking the product as the t-norm (Tables 5 and 6).

The results in Tables 5 and 6 demonstrate that the current GA produces better solutions (or the same solutions with a higher convergence rate) than the max-product GA for all the test problems.
Table 3: Best results found by our algorithm and max-min GA
Test problems | Lu and Fang | Our algorithm
B.1 | 8.429676 | 8.4296754*
B.2 | -1.3888 | -1.3888
B.3 | 0 | 0
B.4 | 5.0909 | 5.0909
B.5 | 71.1011 | 71.0968*
B.6 | -0.3291 | -0.4175*
B.7 | -0.6737 | -0.6737
B.8 | 93.9796 | 93.9796
Table 4: A Comparison between the results found by the current GA and max-min GA
Test problems | Measure | Max-min GA | Our GA
B.1 | Average best-so-far | 8.429701 | 8.4296796*
B.1 | Median best-so-far | 8.429676 | 8.429676
B.1 | Average mean fitness | 8.430887 | 8.4298745*
B.2 | Average best-so-far | -1.3888 | -1.3888
B.2 | Median best-so-far | -1.3888 | -1.3888
B.2 | Average mean fitness | -1.3877 | -1.3886*
B.3 | Average best-so-far | 0 | 0
B.3 | Median best-so-far | 0 | 0
B.3 | Average mean fitness | 7.15E-07 | 0*
B.4 | Average best-so-far | 5.0909 | 5.0909
B.4 | Median best-so-far | 5.0909 | 5.0909
B.4 | Average mean fitness | 5.091 | 5.0908*
B.5 | Average best-so-far | 71.1011 | 71.0969*
B.5 | Median best-so-far | 71.1011 | 71.0968*
B.5 | Average mean fitness | 71.1327 | 71.1216*
B.6 | Average best-so-far | -0.3291 | -0.4175*
B.6 | Median best-so-far | -0.3291 | -0.4175*
B.6 | Average mean fitness | -0.3287 | -0.4162*
B.7 | Average best-so-far | -0.6737 | -0.6737
B.7 | Median best-so-far | -0.6737 | -0.6737
B.7 | Average mean fitness | -0.6736 | -0.6737*
B.8 | Average best-so-far | 93.9796 | 93.9796
B.8 | Median best-so-far | 93.9796 | 93.9796
B.8 | Average mean fitness | 93.9796 | 93.9796
Table 5: Best results found by our algorithm and max-product GA
Test problems | Hassanzadeh et al. | Our algorithm
B.1 | 13.6174 | 13.61740246*
B.2 | -1.5557 | -1.5557
B.3 | 0 | 0
B.4 | 5.8816 | 5.8816
B.5 | 45.065 | 45.0314*
B.6 | -0.3671 | -0.4622*
B.7 | -2.47023 | -2.47023
B.8 | 38.0195 | 38.0150*
Table 6: A Comparison between the results found by the current GA and max-product GA
Test problems | Measure | Max-product GA | Our GA
B.1 | Average best-so-far | 13.61745 | 13.61740502*
B.1 | Median best-so-far | 13.6174 | 13.61740260*
B.1 | Average mean fitness | 13.61786 | 13.61781613*
B.2 | Average best-so-far | -1.5557 | -1.5557
B.2 | Median best-so-far | -1.5557 | -1.5557
B.2 | Average mean fitness | -1.5524 | -1.5557*
B.3 | Average best-so-far | 0 | 0
B.3 | Median best-so-far | 0 | 0
B.3 | Average mean fitness | 1.54E-05 | 0*
B.4 | Average best-so-far | 5.8816 | 5.8816
B.4 | Median best-so-far | 5.8816 | 5.8816
B.4 | Average mean fitness | 5.8823 | 5.8816*
B.5 | Average best-so-far | 45.065 | 45.0315*
B.5 | Median best-so-far | 45.065 | 45.0314*
B.5 | Average mean fitness | 45.1499 | 45.0460*
B.6 | Average best-so-far | -0.3671 | -0.4622*
B.6 | Median best-so-far | -0.3671 | -0.4622*
B.6 | Average mean fitness | -0.3668 | -0.4614*
B.7 | Average best-so-far | -2.47023 | -2.47023
B.7 | Median best-so-far | -2.47023 | -2.47023
B.7 | Average mean fitness | -2.47018 | -2.470213*
B.8 | Average best-so-far | 38.0195 | 38.0150*
B.8 | Median best-so-far | 38.0195 | 38.0150*
B.8 | Average mean fitness | 38.0292 | 38.0171*
In [42], the proposed mutation operator decreases one variable of the vector $x$ to a random number in $\left[0\,,\,{x}_{j}\right)$ each time (the same mutation operator is used in [29]). With this operator, decreasing one variable must often be followed by increasing several other variables to guarantee the feasibility of the new solution. In the current GA, however, the feasibility of the new solution ${x}^{\prime }$ is obtained directly by decreasing a suitable variable to zero; there is no need to revise the new solution to make it feasible. Moreover, since the proposed mutation operator decreases the selected variables to zero, the new individuals are likely to lie farther from the maximum solution $\overline{X}$; in particular, ${x}^{\prime }$ may even be a minimal solution (see remark 4). This strategy increases the ability of the algorithm to explore the search space for new individuals.
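The mutation mechanics described above can be sketched as follows. The rule for choosing which variable may safely be reset to zero (algorithm 2) is not reproduced here, so the index below is drawn at random purely for illustration:

```python
import random

def mutate(x, rng=random.Random(1)):
    """Sketch of the mutation mechanics: pick one variable and reset it to
    zero. (In the paper, the index is chosen so that feasibility is
    preserved; that selection rule is not reproduced in this sketch.)"""
    j = rng.randrange(len(x))
    x_new = list(x)
    x_new[j] = 0.0
    return x_new

x = [0.4, 0.7, 0.2]
xm = mutate(x)
assert len(xm) == len(x) and 0.0 in xm
```

Resetting a component to zero pushes the offspring away from the maximum solution $\overline{X}$, which is exactly the exploration effect described in the paragraph above.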

Finally, the authors of both [29] and [42] used the same "three-point" crossover operator. The three-point crossover is defined by three points (the two parents and the maximum solution $\overline{X}$) and two operators called "contraction" and "extraction". Both the contraction and extraction operators are applied between ${x}_{1}$ and ${x}_{2}$, and between each parent and $\overline{X}$. However, of the four resulting cases, only one certainly produces a feasible offspring (namely, the contraction toward $\overline{X}$). Therefore, for the other three cases, the feasibility of the newly generated solutions must be checked by substituting them into the fuzzy relational equations as well as the remaining constraints. In contrast, the current crossover operator uses only one parent at a time. Offspring ${x}_{new1}$ is obtained as a random point on the line segment between ${x}^{\prime }$ and $\overline{X}$, while offspring ${x}_{new2}$ lies close to its parent. This difference between ${x}_{new1}$ and ${x}_{new2}$ provides a suitable tradeoff between exploration and exploitation. Also, as stated in remark 6, the new solutions ${x}_{new1}$ and ${x}_{new2}$ are always feasible.
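The one-parent crossover described above can be sketched as follows. The exact rule by which $x_{new2}$ is kept close to its parent is not given in this section, so the small perturbation toward $\overline{X}$ used below is our assumption:

```python
import random

def crossover(parent, x_bar, rng=random.Random(2), eps=0.05):
    """Sketch of the one-parent crossover: x_new1 is a random point on the
    segment between the parent and the maximum solution x_bar; x_new2 is a
    small step from the parent toward x_bar (assumed rule, for illustration).
    Both offspring stay in any convex feasible set containing both points."""
    t = rng.random()
    x_new1 = [p + t * (xb - p) for p, xb in zip(parent, x_bar)]
    u = rng.uniform(0.0, eps)
    x_new2 = [p + u * (xb - p) for p, xb in zip(parent, x_bar)]
    return x_new1, x_new2

p = [0.1, 0.5, 0.3]
xb = [0.8, 0.9, 0.6]
a, b = crossover(p, xb)
# each offspring component lies between the parent and x_bar
assert all(min(pi, xi) <= ai <= max(pi, xi) for pi, xi, ai in zip(p, xb, a))
```

Since both offspring lie on the segment joining two feasible points, convexity of the cell containing them is what makes the feasibility check unnecessary.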
Conclusion
In this paper, we investigated the resolution of FREs defined by the Frank family of t-norms and introduced a nonlinear problem constrained by max-Frank fuzzy relational equations. In order to determine the feasibility of the problem, two necessary and sufficient conditions were presented. Also, two simplification approaches (depending on the Frank t-norm) were proposed to simplify the problem. A genetic algorithm was designed for solving the nonlinear optimization problems constrained by the max-Frank FRE. Moreover, we presented a method for generating feasible max-Frank FREs as test problems for the performance evaluation of the proposed algorithm. Experiments were performed with the proposed method on the generated feasible test problems. We conclude that the proposed GA can find the optimal solutions in all the cases with a fast convergence rate. Moreover, a comparison was made between the proposed method and the max-min and max-product GAs, which solve nonlinear optimization problems subject to FREs defined by max-min and max-product compositions, respectively. The results showed that the proposed method (modified for the minimum and product t-norms) finds better solutions than those obtained by the other algorithms.

As future work, we aim to test our algorithm on other types of nonlinear optimization problems whose constraints are defined as FREs or FRIs with other well-known t-norms.
Acknowledgment
We are very grateful to the anonymous referees and the editor in chief for their comments and suggestions, which were very helpful in improving the paper.
Appendix A
Test Problem A.1:
$f(x)=(x_1+10x_2)^2+5(x_3-x_4)^2+(x_2-2x_3)^4+10(x_1-x_4)^4$
Test Problem A.2:
$f(x)=x_1-x_2-x_3-x_1x_3+x_1x_4+x_2x_3-x_2x_4-x_3x_5$
Test Problem A.3:
$f(x)=x_1x_2-\ln(1+x_4x_5)+x_3$
Test Problem A.4:
$f(x)=x_1+2x_2+4x_5+e^{x_1x_4}$
Test Problem A.5:
$f(x)=\sum_{k=1}^{5}\left[100(x_{k+1}-x_k^2)^2+(1-x_k)^2\right]$
Test Problem A.6:
$f(x)=-0.5(x_1x_4-x_2x_3+x_2x_6-x_5x_6+x_5x_4-x_6x_3)$
Test Problem A.7:
$f(x)=e^{x_1x_2x_3x_4x_5}-0.5(x_2^3+x_6^3+x_7^3+1)^2$
Test Problem A.8:
$f(x)=(x_1-1)^2+(x_7-1)^2+10\sum_{k=1}^{6}(10-k)(x_k^2-x_{k+1})^2$
Appendix B
Test Problem B.1:
$f(x)=(x_1+10x_2)^2+5(x_3-x_4)^2+(x_2-2x_3)^4+10(x_1-x_4)^4$
Test Problem B.2:
$f(x)=x_1-x_2-x_3-x_1x_3+x_1x_4+x_2x_3-x_2x_4$
Test Problem B.3:
$f(x)=x_1x_2x_3x_4x_5$
Test Problem B.4:
$f(x)=x_1+2x_2+4x_5+e^{x_1x_4}$
Test Problem B.5:
$f(x)=\sum_{k=1}^{6}\left[100(x_{k+1}-x_k^2)^2+(1-x_k)^2\right]$
Test Problem B.6:
$f(x)=-0.5(x_1x_4-x_2x_3+x_2x_6-x_5x_6+x_5x_4-x_6x_7)$
Test Problem B.7:
$f(x)=e^{x_1x_2x_3x_4x_5}-0.5(x_1^3+x_2^3+x_6^3+1)^2$
Test Problem B.8:
$f(x)=(x_1-1)^2+(x_7-1)^2+10\sum_{k=1}^{6}(10-k)(x_k^2-x_{k+1})^2$
References
