Keywords: Fuzzy relational equations; Nonlinear optimization; Genetic algorithm;
If ${a}_{i}$ is the i’th row of matrix A, then problem (1) can be expressed as follows:
The theory of fuzzy relational equations was first proposed by Sanchez [52], who introduced an FRE with max-min composition and applied the model to medical diagnosis in Brouwerian logic. Nowadays, it is well known that many issues associated with a body of knowledge can be treated as FRE problems [44]. In addition to such applications, FRE theory has been applied in many fields, including fuzzy control, discrete dynamic systems, prediction of fuzzy systems, fuzzy decision making, fuzzy pattern recognition, fuzzy clustering, image compression and reconstruction, and so on. Pedrycz [45] categorized and extended two ways of generalizing FRE in terms of the sets under discussion and the various operations taken into account. Since then, many theoretical improvements have been investigated and many applications have been presented [2,3,5,11,24,28,32,37,38,41,43,46,48,49,57,59,65].
The solvability of FRE problems and the characterization of their solution sets are the primary (and most fundamental) subjects in this area. Many studies have considered fuzzy relational equations with max-min and max-product compositions; both are special cases of the max-triangular-norm (max-t-norm) composition. Di Nola et al. proved that the solution set of an FRE defined by a continuous max-t-norm composition is often a non-convex set that is completely determined by one maximum solution and a finite number of minimal solutions [6]. Over the last decades, the solvability of FRE defined with different max-t compositions has been investigated by many researchers [36,47,50,51,53,55,56,60,64,68].
Optimizing an objective function subject to a system of fuzzy relational equations or inequalities (FRI) is one of the most interesting and active topics among the problems related to FRE (or FRI) theory [1,9,19–27,33,34,39,54,61,66]. By far the most frequently studied case is the minimization of a linear objective function under the max-min composition [1,20]. The standard approach is to translate this type of problem into a corresponding 0–1 integer linear programming problem, which is then solved by a branch-and-bound method [10,62]. The linear optimization problem has also been investigated with the max-product operation [19,26,40]. Moreover, some studies have considered more general operators in linear optimization, replacing the max-min and max-product compositions with a max-t-norm, max-average or max-star composition [22,25,27,31,34,54,61].
Recently, many interesting generalizations of linear and nonlinear programming problems constrained by FRE or FRI have been introduced and developed, based on the composite operations and fuzzy relations used in the definition of the constraints, as well as on modifications of the objective function [4,7,12,20,35,39,58,63,66]. For instance, the linear optimization of bipolar FRE was studied by several researchers for FRE defined with max-min composition and max-Łukasiewicz composition [12,35,39]. Ghodousian and Khorram [21] focused on the algebraic structure of two fuzzy relational inequalities $A\varphi x\le {b}^{1}$ and $D\varphi x\ge {b}^{2}$, and studied the mixed fuzzy system formed by these two FRIs, where $\varphi$ is an operator with (closed) convex solutions. Yang [67] studied the minimization of a linear objective function subject to fuzzy relational inequalities whose constraints are defined as ${a}_{i1}\wedge {x}_{1}+{a}_{i2}\wedge {x}_{2}+\cdots +{a}_{in}\wedge {x}_{n}\ge {b}_{i}$ for $i=1,\mathrm{...},m,$ where $a\wedge b=\mathrm{min}\{a,b\}.$ In [20], the authors introduced the FRI-FC problem $\mathrm{min}\{\,{c}^{T}x\;:\;A\varphi x\preccurlyeq b,\;x\in {[0,1]}^{n}\},$ where $\varphi$ is the max-min composition and “$\preccurlyeq$” denotes the relaxed or fuzzy version of the ordinary inequality “$\le$”. Other interesting generalizations of such optimization problems concern the objective function: Wu et al. [63] presented an efficient method to optimize a linear fractional programming problem under FRE with max-Archimedean-t-norm composition.
Dempe and Ruziyeva [4] generalized the fuzzy linear optimization problem by considering fuzzy coefficients, and Dubey et al. [7] studied linear programming problems involving interval uncertainty modeled using intuitionistic fuzzy sets. On the other hand, Lu and Fang considered a single nonlinear objective function subject to FRE constraints with the max-min operator, and proposed a genetic algorithm for solving the problem [42]. In [29], the authors used the same method for the max-product operator. Also, Ghodousian et al. [15,16,18] presented genetic algorithms to solve the nonlinear problem with FRE constraints defined by the Łukasiewicz, Dubois–Prade and Sugeno–Weber operators.
Generally, there are three important difficulties related to FRE and FRI problems. First, in order to completely determine an FRE or FRI, we must initially find all of its minimal solutions, which is an NP-hard problem. Second, a feasible region formed by an FRE or FRI is often a non-convex set [21]. Third, FREs and FRIs as feasible regions lead to optimization problems with highly nonlinear constraints. Due to these difficulties, although analytical methods find exact optimal solutions, they may involve high computational complexity for high-dimensional problems (especially if the simplification processes cannot considerably reduce the problem).
In this paper, we use the genetic algorithm proposed in [21] for solving problem (1); it keeps the search inside the feasible region without finding any minimal solution or checking the feasibility of newly generated solutions. Since the feasibility of problem (1) essentially depends on the t-norm (here, the Frank t-norm) used in the definition of the constraints, a method is also presented to construct feasible test problems. More precisely, we construct a feasible problem by randomly generating a fuzzy matrix and a fuzzy vector according to criteria derived from the necessary and sufficient conditions. It is proved that the solution set of the max-Frank fuzzy relational equations constructed by this method is not empty. Moreover, a comparison is made between the current method and the algorithms presented in [42] and [29].
The remainder of the paper is organized as follows. Section 2 takes a brief look at some basic results on the feasible solution set of problem (1). In Section 3, the GA is briefly described. The experimental results are presented in Section 4, and a conclusion is provided in Section 5.
Definition 1: For each $i\in I,$ we define ${J}_{i}=\left\{j\in J\text{\hspace{0.17em}}\text{\hspace{0.17em}}:\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}{a}_{ij}\ge {b}_{i}\right\}.$
According to Definition 1, we have the following lemmas, which are easily proved by the monotonicity and identity laws of t-norms, Definition 1 and the definition of the Frank t-norm.
Lemma 1: Let $i\in I.$ If $j\notin {J}_{i},$ then ${T}_{F}^{s}({a}_{ij},{x}_{j})<{b}_{i},\text{}\forall {x}_{j}\in [0,1].$
Lemma 2: Let $i\in I$ and $j\in {J}_{i}.$
(a) If ${x}_{j}>{\mathrm{log}}_{s}\left(1+\frac{({s}^{{b}_{i}}-1)(s-1)}{{s}^{{a}_{ij}}-1}\right)$ and ${b}_{i}\ne 0,$ then ${T}_{F}^{s}({a}_{ij},{x}_{j})>{b}_{i}.$
(b) If ${x}_{j}={\mathrm{log}}_{s}\left(1+\frac{({s}^{{b}_{i}}-1)(s-1)}{{s}^{{a}_{ij}}-1}\right)$ and ${b}_{i}\ne 0,$ then ${T}_{F}^{s}({a}_{ij},{x}_{j})={b}_{i}.$
(c) If ${x}_{j}<{\mathrm{log}}_{s}\left(1+\frac{({s}^{{b}_{i}}-1)(s-1)}{{s}^{{a}_{ij}}-1}\right)$ and ${b}_{i}\ne 0,$ then ${T}_{F}^{s}({a}_{ij},{x}_{j})<{b}_{i}.$
(d) If ${a}_{ij}={b}_{i}=0,$ then ${T}_{F}^{s}({a}_{ij},{x}_{j})={b}_{i},\text{\hspace{0.17em}}\forall {x}_{j}\in [0,1].$
(e) If ${a}_{ij}>{b}_{i}=0,$ then ${T}_{F}^{s}({a}_{ij},{x}_{j})={b}_{i}$ for ${x}_{j}=0,$ and ${T}_{F}^{s}({a}_{ij},{x}_{j})>{b}_{i}$ for $0<{x}_{j}\le 1.$
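The Frank t-norm and the threshold appearing in Lemma 2 can be checked numerically. The following Python sketch is ours (the function names `frank_t` and `threshold` are not from the paper); it evaluates ${T}_{F}^{s}(a,x)={\mathrm{log}}_{s}\left(1+({s}^{a}-1)({s}^{x}-1)/(s-1)\right)$ and verifies parts (a)–(c) of the lemma at a sample point:

```python
import math

def frank_t(a, x, s):
    """Frank t-norm T_F^s(a, x), defined for s > 0, s != 1."""
    return math.log(1 + (s ** a - 1) * (s ** x - 1) / (s - 1), s)

def threshold(a_ij, b_i, s):
    """The value log_s(1 + (s^b_i - 1)(s - 1)/(s^a_ij - 1)) from Lemma 2;
    requires a_ij >= b_i > 0 so that the quotient is well defined."""
    return math.log(1 + (s ** b_i - 1) * (s - 1) / (s ** a_ij - 1), s)

s, a_ij, b_i = 2.0, 0.9, 0.6
x_star = threshold(a_ij, b_i, s)
assert abs(frank_t(a_ij, x_star, s) - b_i) < 1e-9        # part (b): equality
assert frank_t(a_ij, min(1.0, x_star + 0.05), s) > b_i   # part (a): above
assert frank_t(a_ij, x_star - 0.05, s) < b_i             # part (c): below
```

Note that substituting the threshold into the t-norm collapses the inner fraction to ${s}^{{b}_{i}}-1$, which is why equality holds exactly in part (b).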
Lemma 3 below gives a necessary and sufficient condition for the nonemptiness of each set ${S}_{{T}_{F}^{s}}({a}_{i},{b}_{i}),\ i\in I.$
Lemma 3: For a fixed $i\in I,\text{}{S}_{{T}_{F}^{s}}({a}_{i},{b}_{i})\ne \varnothing $ if and only if ${J}_{i}\ne \varnothing .$
Proof: The proof is similar to the proof of Lemma 3 in [15].
Definition 2: Suppose that $i\in I$ and ${S}_{{T}_{F}^{s}}({a}_{i},{b}_{i})\ne \varnothing $ (hence, ${J}_{i}\ne \varnothing $ from lemma 3). Let ${\widehat{x}}_{i}=[{({\widehat{x}}_{i})}_{1},{({\widehat{x}}_{i})}_{2},\mathrm{...},{({\widehat{x}}_{i})}_{n}]\in {[0,1]}^{n}$ where the components are defined as follows:
Theorem 1: Let $i\in I.$ If ${S}_{{T}_{F}^{s}}({a}_{i},{b}_{i})\ne \varnothing ,$ then ${S}_{{T}_{F}^{s}}({a}_{i},{b}_{i})={\displaystyle \underset{j\in {J}_{i}}{\cup}[\,{\breve{x}}_{i}(j)\,,\,{\widehat{x}}_{i}]}.$
Proof: For a more general case, see Corollary 2.3 in [21].
From Theorem 1, ${\widehat{x}}_{i}$ is the unique maximum solution and the ${\breve{x}}_{i}(j)$'s $(j\in {J}_{i})$ are the minimal solutions of ${S}_{{T}_{F}^{s}}({a}_{i},{b}_{i}).$
Definition 3: Let ${\widehat{x}}_{i}\text{}(i\in I)$ be the maximum solution of ${S}_{{T}_{F}^{s}}({a}_{i},{b}_{i}).$ We define $\overline{X}=\underset{i\in I}{\mathrm{min}}\left\{\text{\hspace{0.17em}}{\widehat{x}}_{i}\right\}.$
Definition 4: Let $e:I\to {J}_{i}$ so that $e(i)=j\in {J}_{i},\text{}\forall i\in I,$ and let $E$ be the set of all vectors $e$ . For the sake of convenience, we represent each $e\in E$ as an $m$ –dimensional vector $e=[{j}_{1},{j}_{2},\mathrm{...},{j}_{m}]$ in which ${j}_{k}=e(k).$
Definition 5: Let $e=[{j}_{1},{j}_{2},\mathrm{...},{j}_{m}]\in E.$ We define $\underset{\_}{X}(e)=[\underset{\_}{X}{(e)}_{1},\underset{\_}{X}{(e)}_{2},\mathrm{...},\underset{\_}{X}{(e)}_{n}]\in {[0,1]}^{n},$ where $\underset{\_}{X}{(e)}_{j}=\underset{i\in I}{\mathrm{max}}\left\{{\breve{x}}_{i}{(e(i))}_{j}\right\}=\underset{i\in I}{\mathrm{max}}\left\{{\breve{x}}_{i}{({j}_{i})}_{j}\right\},\text{}\forall j\in J.$
From the relation ${S}_{{T}_{F}^{s}}(A,b)={\displaystyle \underset{i\in I}{\cap}{S}_{{T}_{F}^{s}}({a}_{i},{b}_{i})}$ and Theorem 1, the following theorem is easily obtained.
Theorem 2: ${S}_{{T}_{F}^{s}}(A,b)={\displaystyle \underset{e\in E}{\cup}[\text{\hspace{0.17em}}\underset{\_}{X}(e)\text{\hspace{0.17em}},\text{\hspace{0.17em}}\overline{X}]}.$
As a consequence, $\overline{X}$ is the unique maximum solution and the $\underset{\_}{X}(e)$'s $(e\in E)$ are the minimal solutions of ${S}_{{T}_{F}^{s}}(A,b).$ Moreover, we have the following corollary, which follows directly from Theorem 2.
Corollary 1(first necessary and sufficient condition): ${S}_{{T}_{F}^{s}}(A,b)\ne \varnothing $ if and only if $\overline{X}\in {S}_{{T}_{F}^{s}}(A,b)$ .
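Corollary 1 yields a direct feasibility test: build the candidate maximum solution $\overline{X}$ componentwise and substitute it back into the system. The following Python sketch is ours (helper names `max_solution` and `feasible` are assumptions, not the paper's notation); the componentwise formula follows Lemma 2, with $({\widehat{x}}_{i})_{j}$ taken as the largest $x_j$ satisfying ${T}_{F}^{s}({a}_{ij},{x}_{j})\le {b}_{i}$:

```python
import math

def frank_t(a, x, s):
    """Frank t-norm T_F^s(a, x) for s > 0, s != 1."""
    return math.log(1 + (s ** a - 1) * (s ** x - 1) / (s - 1), s)

def max_solution(A, b, s):
    """Candidate maximum solution X_bar = min_i x_hat_i (Definition 3)."""
    m, n = len(A), len(A[0])
    X = [1.0] * n
    for i in range(m):
        for j in range(n):
            if A[i][j] > b[i]:          # otherwise any x_j in [0,1] works
                if b[i] == 0:
                    xij = 0.0           # Lemma 2(e): only x_j = 0 keeps T = 0
                else:                   # Lemma 2(b): the exact threshold
                    xij = math.log(1 + (s ** b[i] - 1) * (s - 1)
                                   / (s ** A[i][j] - 1), s)
                X[j] = min(X[j], xij)
    return X

def feasible(A, b, s, tol=1e-9):
    """Corollary 1: the system is consistent iff X_bar solves it."""
    X = max_solution(A, b, s)
    return all(abs(max(frank_t(A[i][j], X[j], s) for j in range(len(X))) - b[i]) < tol
               for i in range(len(A)))

# A consistent toy system: define b as the composition of A with x = [0.5, 0.8].
s = 2.0
A = [[0.9, 0.3], [0.4, 0.7]]
x = [0.5, 0.8]
b = [max(frank_t(A[i][j], x[j], s) for j in range(2)) for i in range(2)]
assert feasible(A, b, s)
```

Since the toy $b$ is constructed as the composition of $A$ with a known point, that point is a solution by construction and the corollary's test succeeds.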
Example 1: Consider the problem below with the Frank t-norm
${\widehat{x}}_{1}={\widehat{x}}_{3}=[\text{0}\text{.7833,1,1,1,1,1}],\text{}{\widehat{x}}_{2}={\widehat{x}}_{4}=[\text{1,1,1,1,1,1}],\text{\hspace{0.17em}}{\widehat{x}}_{5}=[\text{1,0,1,0,1,1}].$
${\breve{x}}_{1}(1)=[0.7833,0,0,0,0,0],\ {\breve{x}}_{1}(4)=[0,0,0,1,0,0]$
${\breve{x}}_{2}(1)=[1,0,0,0,0,0],\ {\breve{x}}_{2}(5)=[0,0,0,0,1,0]$
${\breve{x}}_{3}(1)=[0.7833,0,0,0,0,0],\ {\breve{x}}_{3}(4)=[0,0,0,1,0,0],\ {\breve{x}}_{3}(5)=[0,0,0,0,1,0]$
${\breve{x}}_{4}(4)=[0,0,0,1,0,0],\ {\breve{x}}_{4}(5)=[0,0,0,0,1,0]$
${\breve{x}}_{5}(j)=[0,0,0,0,0,0],\ j\in \left\{1,2,3,4,5,6\right\}$
Therefore, by Theorem 1 we have
${S}_{{T}_{F}^{s}}({a}_{1},{b}_{1})=[\,{\breve{x}}_{1}(1),{\widehat{x}}_{1}]\cup [\,{\breve{x}}_{1}(4),{\widehat{x}}_{1}],$
${S}_{{T}_{F}^{s}}({a}_{2},{b}_{2})=[\,{\breve{x}}_{2}(1),{\widehat{x}}_{2}]\cup [\,{\breve{x}}_{2}(5),{\widehat{x}}_{2}],$
${S}_{{T}_{F}^{s}}({a}_{3},{b}_{3})=[\,{\breve{x}}_{3}(1),{\widehat{x}}_{3}]\cup [\,{\breve{x}}_{3}(4),{\widehat{x}}_{3}]\cup [\,{\breve{x}}_{3}(5),{\widehat{x}}_{3}],$
${S}_{{T}_{F}^{s}}({a}_{4},{b}_{4})=[\,{\breve{x}}_{4}(4),{\widehat{x}}_{4}]\cup [\,{\breve{x}}_{4}(5),{\widehat{x}}_{4}]$ and
${S}_{{T}_{F}^{s}}({a}_{5},{b}_{5})=[\,{0}_{1\times 6},{\widehat{x}}_{5}],$ where ${0}_{1\times 6}$ is the zero vector. From Definition 3, $\overline{X}=[0.7833,0,1,0,1,1].$ It is easy to verify that $\overline{X}\in {S}_{{T}_{F}^{s}}(A,b);$ therefore, the above problem is feasible by Corollary 1. Finally, the cardinality of the set $E$ is equal to 24 (Definition 4), so we have 24 solutions $\underset{\_}{X}(e)$ associated with the 24 vectors $e$. For example, for $e=[1,5,1,5,6],$ Definition 5 gives $\underset{\_}{X}(e)=\mathrm{max}\left\{{\breve{x}}_{1}(1),{\breve{x}}_{2}(5),{\breve{x}}_{3}(1),{\breve{x}}_{4}(5),{\breve{x}}_{5}(6)\right\},$ that is, $\underset{\_}{X}(e)=[0.7833,0,0,0,1,0].$
Definition 6: If changing the value of an element ${a}_{ij}$ of a given fuzzy relation matrix $A$ has no effect on the solution set of problem (1), this change is said to be an equivalence operation.
Corollary 2: Suppose that ${T}_{F}^{s}({a}_{i{j}_{0}},{x}_{{j}_{0}})<{b}_{i},\text{}\forall x\in {S}_{{T}_{F}^{s}}(A,b).$ In this case, it is obvious that $\underset{j=1}{\overset{n}{\mathrm{max}}}\text{\hspace{0.17em}}\left\{{T}_{F}^{s}({a}_{ij},{x}_{j})\right\}={b}_{i}$ is equivalent to $\underset{\begin{array}{l}j=1\\ j\ne {j}_{0}\end{array}}{\overset{n}{\mathrm{max}}}\text{\hspace{0.17em}}\left\{{T}_{F}^{s}({a}_{ij},{x}_{j})\right\}={b}_{i},$ that is, “resetting ${a}_{i{j}_{0}}$ to zero” has no effect on the solutions of problem (1) (since component ${a}_{i{j}_{0}}$ only appears in the i‘th constraint of problem (1)). Therefore, if ${T}_{F}^{s}({a}_{i{j}_{0}},{x}_{{j}_{0}})<{b}_{i},\text{\hspace{0.17em}}\forall x\in {S}_{{T}_{F}^{s}}(A,b),$ then “resetting ${a}_{i{j}_{0}}$ to zero” is an equivalence operation.
Lemma 4 (first simplification): Suppose that ${j}_{0}\notin {J}_{i},$ for some $i\in I$ and ${j}_{0}\in J.$ Then, “resetting ${a}_{i{j}_{0}}$ to zero” is an equivalence operation.
Proof: From corollary 2, it is sufficient to show that ${T}_{F}^{s}({a}_{i{j}_{0}},{x}_{{j}_{0}})<{b}_{i},\text{}\forall x\in {S}_{{T}_{F}^{s}}(A,b).$ But, from lemma 1 we have ${T}_{F}^{s}({a}_{i{j}_{0}},{x}_{{j}_{0}})<{b}_{i},\text{}\forall {x}_{{j}_{0}}\in [0,1].$ Thus ${T}_{F}^{s}({a}_{i{j}_{0}},{x}_{{j}_{0}})<{b}_{i},\text{}\forall x\in {S}_{{T}_{F}^{s}}(A,b).$
Lemma 5 (second simplification): Suppose that ${j}_{0}\in {J}_{{i}_{1}}$ and ${b}_{{i}_{1}}\ne 0,$ where ${i}_{1}\in I$ and ${j}_{0}\in J.$ If at least one of the following conditions holds, then “resetting ${a}_{{i}_{1}{j}_{0}}$ to zero” is an equivalence operation:
(a) There exists some ${i}_{2}\in I\ ({i}_{1}\ne {i}_{2})$ such that ${j}_{0}\in {J}_{{i}_{2}},\ {b}_{{i}_{2}}\ne 0$ and ${\mathrm{log}}_{s}\left(1+({s}^{{b}_{{i}_{2}}}-1)(s-1)/({s}^{{a}_{{i}_{2}{j}_{0}}}-1)\right)<{\mathrm{log}}_{s}\left(1+({s}^{{b}_{{i}_{1}}}-1)(s-1)/({s}^{{a}_{{i}_{1}{j}_{0}}}-1)\right).$
(b) There exists some ${i}_{2}\in I\ ({i}_{1}\ne {i}_{2})$ such that ${b}_{{i}_{2}}=0$ and ${a}_{{i}_{2}{j}_{0}}>0.$
Proof: (a) Similar to the proof of Lemma 4, we show that ${T}_{F}^{s}({a}_{{i}_{1}{j}_{0}},{x}_{{j}_{0}})<{b}_{{i}_{1}},\ \forall x\in {S}_{{T}_{F}^{s}}(A,b).$ Consider an arbitrary feasible solution $x\in {S}_{{T}_{F}^{s}}(A,b).$ Since $x\in {S}_{{T}_{F}^{s}}(A,b),$ it turns out that ${T}_{F}^{s}({a}_{{i}_{1}{j}_{0}},{x}_{{j}_{0}})>{b}_{{i}_{1}}$ never holds. So, assume that ${T}_{F}^{s}({a}_{{i}_{1}{j}_{0}},{x}_{{j}_{0}})={b}_{{i}_{1}},$ that is, ${\mathrm{log}}_{s}\left(1+({s}^{{a}_{{i}_{1}{j}_{0}}}-1)({s}^{{x}_{{j}_{0}}}-1)/(s-1)\right)={b}_{{i}_{1}}.$ Since ${b}_{{i}_{1}}\ne 0,$ from Lemma 2 we conclude that ${x}_{{j}_{0}}={\mathrm{log}}_{s}\left(1+({s}^{{b}_{{i}_{1}}}-1)(s-1)/({s}^{{a}_{{i}_{1}{j}_{0}}}-1)\right).$ So, by the assumption we have ${\mathrm{log}}_{s}\left(1+({s}^{{b}_{{i}_{2}}}-1)(s-1)/({s}^{{a}_{{i}_{2}{j}_{0}}}-1)\right)<{x}_{{j}_{0}}.$ Therefore, Lemma 2 (part (a)) implies ${T}_{F}^{s}({a}_{{i}_{2}{j}_{0}},{x}_{{j}_{0}})>{b}_{{i}_{2}},$ which contradicts $x\in {S}_{{T}_{F}^{s}}(A,b).$
(b) By the assumption, we have ${j}_{0}\in {J}_{{i}_{2}}.$ The result now follows by a similar but simpler argument.
Example 2: Consider the problem presented in Example 1. From the first simplification (Lemma 4), “resetting the following components ${a}_{ij}$ to zero” are equivalence operations: ${a}_{12},\ {a}_{13},\ {a}_{15},\ {a}_{16};\ {a}_{22},\ {a}_{23},\ {a}_{24},\ {a}_{26};\ {a}_{32},\ {a}_{33},\ {a}_{36};\ {a}_{41},\ {a}_{42},\ {a}_{43},\ {a}_{46};$ in all of these cases, ${a}_{ij}<{b}_{i},$ that is, $j\notin {J}_{i}.$ Also, from the second simplification (Lemma 5, part (a)), we can change the value of component ${a}_{21}$ to zero, because ${a}_{21}={b}_{2}$ (i.e., $1\in {J}_{2}$), ${b}_{2}\ne 0,$ ${a}_{11}\ge {b}_{1}$ (i.e., $1\in {J}_{1}$), ${b}_{1}\ne 0$ and $0.7833={\mathrm{log}}_{s}\left(1+({s}^{{b}_{1}}-1)(s-1)/({s}^{{a}_{11}}-1)\right)<{\mathrm{log}}_{s}\left(1+({s}^{{b}_{2}}-1)(s-1)/({s}^{{a}_{21}}-1)\right)=1.$ Moreover, from Lemma 5 (part (b)), we can also change the values of components ${a}_{14},\ {a}_{34}$ and ${a}_{44}$ to zero with no effect on the solution set of the problem (since $4\in {J}_{1}\cap {J}_{3}\cap {J}_{4},\ {b}_{i}\ne 0\ (i=1,3,4),$ ${b}_{5}=0$ and ${a}_{54}>0$).
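The two simplification processes can be sketched in Python as follows. This is our own illustrative implementation (the names `thr` and `simplify` are assumptions); it zeroes an entry by Lemma 4 when ${a}_{ij}<{b}_{i},$ and by Lemma 5 when another row dominates the entry in the sense of parts (a) or (b):

```python
import math

def thr(a, b, s):
    """Threshold log_s(1 + (s^b - 1)(s - 1)/(s^a - 1)); requires a >= b > 0."""
    return math.log(1 + (s ** b - 1) * (s - 1) / (s ** a - 1), s)

def simplify(A, b, s):
    """Apply Lemmas 4 and 5 to A, returning the simplified copy (a sketch)."""
    m, n = len(A), len(A[0])
    S = [row[:] for row in A]
    for i in range(m):
        for j in range(n):
            if A[i][j] < b[i]:                  # Lemma 4: j not in J_i
                S[i][j] = 0.0
            elif b[i] != 0:                     # Lemma 5: j in J_i, b_i != 0
                for k in range(m):
                    if k == i:
                        continue
                    if b[k] == 0 and A[k][j] > 0:                 # part (b)
                        S[i][j] = 0.0
                        break
                    if A[k][j] >= b[k] > 0 and \
                       thr(A[k][j], b[k], s) < thr(A[i][j], b[i], s):  # part (a)
                        S[i][j] = 0.0
                        break
    return S

# Tiny instance: row 1 forces x_1 = 0 (b_2 = 0, a_21 > 0), so a_11 is removable.
print(simplify([[0.6, 0.4], [0.7, 0.0]], [0.5, 0.0], 2.0))
```

In the tiny instance, both entries of the first row are reset to zero, so $\tilde{J}_1=\varnothing$ and the instance is detected as infeasible in the sense of the necessary and sufficient condition that follows below.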
In addition to simplifying the problem, a necessary and sufficient condition is also derived from Lemma 5. Before formally presenting the condition, some useful notations are introduced. Let $\tilde{A}$ denote the simplified matrix obtained from $A$ after applying the simplification processes (Lemmas 4 and 5). Also, similar to Definition 1, let ${\tilde{J}}_{i}=\left\{j\in J:{\tilde{a}}_{ij}\ge {b}_{i}\right\}\ (i\in I),$ where ${\tilde{a}}_{ij}$ denotes the $(i,j)$‘th component of matrix $\tilde{A}$. The following theorem gives a necessary and sufficient condition for the feasibility of problem (1).
Theorem 3 (second necessary and sufficient condition): ${S}_{{T}_{F}^{s}}(A,b)\ne \varnothing $ if and only if ${\tilde{J}}_{i}\ne \varnothing ,\text{}\forall i\in I.$
Proof: Since ${S}_{{T}_{F}^{s}}(A,b)={S}_{{T}_{F}^{s}}(\tilde{A},b)$ from Lemmas 4 and 5, it is sufficient to show that ${S}_{{T}_{F}^{s}}(\tilde{A},b)\ne \varnothing$ if and only if ${\tilde{J}}_{i}\ne \varnothing ,\ \forall i\in I.$ Let ${S}_{{T}_{F}^{s}}(\tilde{A},b)\ne \varnothing .$ Then ${S}_{{T}_{F}^{s}}({\tilde{a}}_{i},{b}_{i})\ne \varnothing ,\ \forall i\in I,$ where ${\tilde{a}}_{i}$ denotes the $i$‘th row of matrix $\tilde{A},$ and Lemma 3 implies ${\tilde{J}}_{i}\ne \varnothing ,\ \forall i\in I.$ Conversely, suppose that ${\tilde{J}}_{i}\ne \varnothing ,\ \forall i\in I;$ then, by Lemma 3, ${S}_{{T}_{F}^{s}}({\tilde{a}}_{i},{b}_{i})\ne \varnothing ,\ \forall i\in I.$ By contradiction, suppose that ${S}_{{T}_{F}^{s}}(\tilde{A},b)=\varnothing .$ Then $\overline{X}\notin {S}_{{T}_{F}^{s}}(\tilde{A},b)$ from Corollary 1, and hence there exists ${i}_{0}\in I$ such that $\overline{X}\notin {S}_{{T}_{F}^{s}}({\tilde{a}}_{{i}_{0}},{b}_{{i}_{0}}).$ Since $\underset{j\notin {\tilde{J}}_{{i}_{0}}}{\mathrm{max}}\left\{{T}_{F}^{s}({\tilde{a}}_{{i}_{0}j},{\overline{X}}_{j})\right\}<{b}_{{i}_{0}}$ (from Lemma 1), we must have either $\underset{j\in {\tilde{J}}_{{i}_{0}}}{\mathrm{max}}\left\{{T}_{F}^{s}({\tilde{a}}_{{i}_{0}j},{\overline{X}}_{j})\right\}>{b}_{{i}_{0}}$ or $\underset{j\in {\tilde{J}}_{{i}_{0}}}{\mathrm{max}}\left\{{T}_{F}^{s}({\tilde{a}}_{{i}_{0}j},{\overline{X}}_{j})\right\}<{b}_{{i}_{0}}.$ However, since $\overline{X}\le {\widehat{x}}_{{i}_{0}}$ (i.e., ${\overline{X}}_{j}\le {({\widehat{x}}_{{i}_{0}})}_{j},\ \forall j\in J),$ we have $\underset{j\in {\tilde{J}}_{{i}_{0}}}{\mathrm{max}}\left\{{T}_{F}^{s}({\tilde{a}}_{{i}_{0}j},{\overline{X}}_{j})\right\}\le \underset{j\in {\tilde{J}}_{{i}_{0}}}{\mathrm{max}}\left\{{T}_{F}^{s}({\tilde{a}}_{{i}_{0}j},{({\widehat{x}}_{{i}_{0}})}_{j})\right\}={b}_{{i}_{0}},$ and therefore the former case never holds. Thus $\underset{j\in {\tilde{J}}_{{i}_{0}}}{\mathrm{max}}\left\{{T}_{F}^{s}({\tilde{a}}_{{i}_{0}j},{\overline{X}}_{j})\right\}<{b}_{{i}_{0}},$ which implies ${b}_{{i}_{0}}\ne 0$ and ${T}_{F}^{s}({\tilde{a}}_{{i}_{0}j},{\overline{X}}_{j})<{b}_{{i}_{0}},\ \forall j\in {\tilde{J}}_{{i}_{0}}.$ Hence, by Lemma 2, we must have ${\overline{X}}_{j}<{\mathrm{log}}_{s}\left(1+({s}^{{b}_{{i}_{0}}}-1)(s-1)/({s}^{{\tilde{a}}_{{i}_{0}j}}-1)\right),\ \forall j\in {\tilde{J}}_{{i}_{0}}.$ On the other hand, ${\mathrm{log}}_{s}\left(1+({s}^{{b}_{{i}_{0}}}-1)(s-1)/({s}^{{\tilde{a}}_{{i}_{0}j}}-1)\right)\le 1,\ \forall j\in {\tilde{J}}_{{i}_{0}}.$ Therefore, ${\overline{X}}_{j}<1,\ \forall j\in {\tilde{J}}_{{i}_{0}},$ and then, from Definitions 2 and 3, for each $j\in {\tilde{J}}_{{i}_{0}}$ there must exist ${i}_{j}\in I$ such that either $j\in {\tilde{J}}_{{i}_{j}}$ and ${\overline{X}}_{j}={({\widehat{x}}_{{i}_{j}})}_{j}={\mathrm{log}}_{s}\left(1+({s}^{{b}_{{i}_{j}}}-1)(s-1)/({s}^{{\tilde{a}}_{{i}_{j}j}}-1)\right),$ or $j\in {\tilde{J}}_{{i}_{j}}$ and ${\tilde{a}}_{{i}_{j}j}>{b}_{{i}_{j}}=0.$ So far, we have proved that ${b}_{{i}_{0}}\ne 0$ and, for each $j\in {\tilde{J}}_{{i}_{0}},$ there exists ${i}_{j}\in I$ such that either $j\in {\tilde{J}}_{{i}_{j}}$ and ${\mathrm{log}}_{s}\left(1+({s}^{{b}_{{i}_{j}}}-1)(s-1)/({s}^{{\tilde{a}}_{{i}_{j}j}}-1)\right)<{\mathrm{log}}_{s}\left(1+({s}^{{b}_{{i}_{0}}}-1)(s-1)/({s}^{{\tilde{a}}_{{i}_{0}j}}-1)\right)$ (because ${\mathrm{log}}_{s}\left(1+({s}^{{b}_{{i}_{j}}}-1)(s-1)/({s}^{{\tilde{a}}_{{i}_{j}j}}-1)\right)={\overline{X}}_{j}<{\mathrm{log}}_{s}\left(1+({s}^{{b}_{{i}_{0}}}-1)(s-1)/({s}^{{\tilde{a}}_{{i}_{0}j}}-1)\right)$), or ${b}_{{i}_{j}}=0$ and ${\tilde{a}}_{{i}_{j}j}>0.$ But in both cases, the second simplification (parts (a) and (b) of Lemma 5, respectively) would have reset ${\tilde{a}}_{{i}_{0}j}$ to zero, so ${\tilde{a}}_{{i}_{0}j}=0\ (\forall j\in {\tilde{J}}_{{i}_{0}}).$ Therefore, ${\tilde{a}}_{{i}_{0}j}<{b}_{{i}_{0}}\ne 0\ (\forall j\in {\tilde{J}}_{{i}_{0}}),$ which is a contradiction.
Remark 1: Since ${S}_{{T}_{F}^{s}}(A,b)={S}_{{T}_{F}^{s}}(\tilde{A},b)$ (from lemmas 4 and 5), we can rewrite all the previous definitions and results in a simpler manner by replacing ${\tilde{J}}_{i}$ with ${J}_{i}\text{}(i\in I).$
Definition 7: Suppose that ${S}_{{T}_{F}^{s}}(\tilde{A},b)\ne \varnothing .$ For each $i\in I,$ let ${\breve{x}}_{i}=[{({\breve{x}}_{i})}_{1},{({\breve{x}}_{i})}_{2},\mathrm{...},{({\breve{x}}_{i})}_{n}]\in {[0,1]}^{n},$ where the components are defined as follows:
Remark 2: According to Definition 2 and Remark 1, it is clear that for a fixed $i\in I$ and $j\in {\tilde{J}}_{i},$ ${\breve{x}}_{i}{(j)}_{k}\le {({\breve{x}}_{i})}_{k}\ (\forall k\in J).$ Therefore, from Definitions 5 and 7 we have $\underset{\_}{X}{(e)}_{k}=\underset{i\in I}{\mathrm{max}}\left\{{\breve{x}}_{i}{(e(i))}_{k}\right\}=\underset{i\in I}{\mathrm{max}}\left\{{\breve{x}}_{i}{({j}_{i})}_{k}\right\}\le \underset{i\in I}{\mathrm{max}}\left\{{({\breve{x}}_{i})}_{k}\right\}={\underset{\_}{X}}_{k},\ \forall k\in J$ and $\forall e\in E.$ Thus $\underset{\_}{X}(e)\le \underset{\_}{X},\ \forall e\in E.$
Example 3: Consider the problem presented in example 1, where $\overline{X}=[\text{0}\text{.7833,0,1,0,1,1}].$ Also, according to example 2, the simplified matrix $\tilde{A}$ is
The algorithm for generating the initial population is simply obtained as follows:
Algorithm 1 (Initial Population):
1. Get fuzzy matrix $A$, fuzzy vector $b$ and population size ${S}_{pop}$.
2. If $\overline{X}\notin {S}_{{T}_{F}^{s}}(A,b),$ then stop; the problem is infeasible (Corollary 1).
3. For $i=1,2,\mathrm{...},{S}_{pop}$: generate a random $n$-dimensional solution $pop(i)$ in the interval $[\underset{\_}{X},\overline{X}]$.
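Assuming the bounds $\underset{\_}{X}$ and $\overline{X}$ have already been computed, step 3 can be sketched in Python as follows (the helper name and the seeding parameter are our own choices; by Remark 2 and Theorem 2, every point of the box $[\underset{\_}{X},\overline{X}]$ is feasible, so no feasibility check is needed on the samples):

```python
import random

def initial_population(x_low, x_high, pop_size, seed=None):
    """Algorithm 1, step 3: sample pop_size points uniformly in the box
    [x_low, x_high] (the infeasibility check of step 2 is assumed passed)."""
    rng = random.Random(seed)
    return [[rng.uniform(lo, hi) for lo, hi in zip(x_low, x_high)]
            for _ in range(pop_size)]

# Bounds borrowed from Example 1's X_bar-style vectors, for illustration only.
pop = initial_population([0.0, 0.2], [0.7833, 1.0], pop_size=5, seed=1)
assert len(pop) == 5
assert all(0.0 <= p[0] <= 0.7833 and 0.2 <= p[1] <= 1.0 for p in pop)
```

Each component is drawn independently, so the population fills the feasible box uniformly rather than clustering on its faces.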
Definition 8: Let ${I}^{+}=\left\{i\in I:{b}_{i}\ne 0\right\}.$ We define $D=\left\{j\in J:\text{if }\exists i\in {I}^{+}\text{ such that }j\in {\tilde{J}}_{i},\text{ then }\left|{\tilde{J}}_{i}\right|>1\right\},$ where $\left|{\tilde{J}}_{i}\right|$ denotes the cardinality of the set ${\tilde{J}}_{i}.$
The mutation operator is defined as follows:
The following theorem proves that Algorithm 4 always generates random feasible max-Frank fuzzy relational equations.
Theorem 4: The solution set ${S}_{{T}_{F}^{s}}(A,b)$ of an FRE (with Frank t-norm) constructed by Algorithm 4 is not empty.
Proof: According to step 3 of the algorithm, ${j}_{i}\in {J}_{i},\ \forall i\in I.$ Therefore, ${J}_{i}\ne \varnothing ,\ \forall i\in I.$ To complete the proof, we show that ${j}_{i}\in {\tilde{J}}_{i},\ \forall i\in I.$ By contradiction, suppose that the second simplification process resets ${a}_{i{j}_{i}}$ to zero for some $i\in I.$ Then ${b}_{i}\ne 0$ and there must exist some $k\in I\ (k\ne i)$ such that either ${j}_{i}\in {J}_{k},\ {b}_{k}\ne 0$ and ${\mathrm{log}}_{s}\left(1+({s}^{{b}_{k}}-1)(s-1)/({s}^{{a}_{k{j}_{i}}}-1)\right)<{\mathrm{log}}_{s}\left(1+({s}^{{b}_{i}}-1)(s-1)/({s}^{{a}_{i{j}_{i}}}-1)\right),$ or ${b}_{k}=0$ and ${a}_{k{j}_{i}}>0.$ In the former case, we note that ${a}_{k{j}_{i}}>{\mathrm{log}}_{s}\left(1+({s}^{{b}_{k}}-1)({s}^{{a}_{i{j}_{i}}}-1)/({s}^{{b}_{i}}-1)\right).$ In either case, we obtain a contradiction with step 4.
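Algorithm 4 itself is not reproduced in this excerpt. As an illustration of why such test problems can be made feasible by design, the following Python sketch uses a standard alternative construction (not the paper's Algorithm 4): draw $A$ and a point $x$ at random and define $b$ as the max-Frank composition of $A$ with $x$, so that $x$ solves the system by construction:

```python
import math
import random

def frank_t(a, x, s):
    """Frank t-norm T_F^s(a, x) for s > 0, s != 1."""
    return math.log(1 + (s ** a - 1) * (s ** x - 1) / (s - 1), s)

def random_feasible_fre(m, n, s, seed=None):
    """Generate (A, b) whose max-Frank system A o x = b is feasible:
    b is the composition of a random A with a random point x, so x is a
    solution by construction. A generic sketch, not the paper's Algorithm 4."""
    rng = random.Random(seed)
    A = [[rng.random() for _ in range(n)] for _ in range(m)]
    x = [rng.random() for _ in range(n)]
    b = [max(frank_t(A[i][j], x[j], s) for j in range(n)) for i in range(m)]
    return A, b

A, b = random_feasible_fre(4, 6, s=2.0, seed=7)
# Since T(a, x) <= a, each right-hand side is bounded by its row maximum.
assert all(b[i] <= max(A[i]) + 1e-12 for i in range(4))
```

The paper's Algorithm 4, by contrast, generates $A$ and $b$ directly under criteria derived from the necessary and sufficient conditions, which is what Theorem 4 certifies.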
1. Computing all the convex cells of the Frank FRE.
2. Searching the optimal solution for each convex cell.
3. Finding the global optimum by comparing these local optimal solutions.
The computational results for the eight test problems (see Appendix A) are shown in Table 1 and Figures 1–8. In Table 1, the results are averaged over 30 runs, and the average best-so-far solution, the average mean fitness and the median of the best solution in the last iteration are reported.
Table 2 includes the best results found by the max-Frank GA and by the above procedure. According to Table 2, the optimal solutions computed by the max-Frank GA match the optimal solutions obtained by the above procedure very well. Tables 1 and 2 demonstrate the ability of the max-Frank GA to detect the optimal solutions of problem (1). Also, the good convergence rate of the max-Frank GA can be concluded from Table 1 and Figures 1–8.
Table 1. Results of the max-Frank GA on test problems A.1–A.8 (averaged over 30 runs).

Test problem | Average best-so-far | Median best-so-far | Average mean fitness
A.1 | 1.73375 | 1.73375 | 1.74575
A.2 | 2.5903 | 2.5907 | 2.5885
A.3 | 0.10266 | 0.10266 | 0.10249
A.4 | 2.677598 | 2.677598 | 2.677661
A.5 | 68.60251 | 68.60251 | 68.60253
A.6 | 0.46007 | 0.46313 | 0.45953
A.7 | 0.001365 | 0.001365 | 0.001372
A.8 | 105.996 | 105.996 | 105.996
Table 2. Best solutions found by the max-Frank GA versus the optimal values.

Test problem | Solution of max-Frank GA | Optimal value
A.1 | 1.73375 | 1.73372
A.2 | 2.5907 | 2.5908
A.3 | 0.10266 | 0.10266
A.4 | 2.677598 | 2.677598
A.5 | 68.60251 | 68.60251
A.6 | 0.001365 | 0.001364
A.7 | 1.19221 | 1.19221
A.8 | 105.996 | 105.9953
The first comparison is against the max-min GA: we apply our algorithm to the test problems with the Frank t-norm replaced by the minimum t-norm. The results, including the optimal objective values found by the current GA and the max-min GA, are shown in Table 3. As the table shows, the current GA finds better solutions for test problems 1, 5 and 6, and the same solutions for the other test problems.
Table 4 shows that the current GA finds the optimal values faster than the max-min GA and hence has a higher convergence rate, even when the solutions are the same. The only exception is test problem 8, in which all the results are the same. In all the tables, results marked with “*” indicate the better cases.
The second comparison is against the max-product GA. In this case, we apply our algorithm to the same test problems with the Frank t-norm replaced by the product t-norm (Tables 5 and 6).
The results in Tables 5 and 6 demonstrate that the current GA produces better solutions (or the same solutions with a higher convergence rate) than the max-product GA for all the test problems.
Table 3. Optimal objective values found by Lu and Fang's max-min GA and by our algorithm with the minimum t-norm ("*" marks the better result).

Test problem   Lu and Fang   Our algorithm
B.1            8.429676      8.4296754*
B.2            1.3888        1.3888
B.3            0             0
B.4            5.0909        5.0909
B.5            71.1011       71.0968*
B.6            0.3291        0.4175*
B.7            0.6737        0.6737
B.8            93.9796       93.9796
Table 4. Convergence comparison between the max-min GA and our GA ("*" marks the better result).

Test problem   Metric                 Max-min GA   Our GA
B.1            Average best-so-far    8.429701     8.4296796*
               Median best-so-far     8.429676     8.429676
               Average mean fitness   8.430887     8.4298745*
B.2            Average best-so-far    1.3888       1.3888
               Median best-so-far     1.3888       1.3888
               Average mean fitness   1.3877       1.3886*
B.3            Average best-so-far    0            0
               Median best-so-far     0            0
               Average mean fitness   7.15E-07     0*
B.4            Average best-so-far    5.0909       5.0909
               Median best-so-far     5.0909       5.0909
               Average mean fitness   5.091        5.0908*
B.5            Average best-so-far    71.1011      71.0969*
               Median best-so-far     71.1011      71.0968*
               Average mean fitness   71.1327      71.1216*
B.6            Average best-so-far    0.3291       0.4175*
               Median best-so-far     0.3291       0.4175*
               Average mean fitness   0.3287       0.4162*
B.7            Average best-so-far    0.6737       0.6737
               Median best-so-far     0.6737       0.6737
               Average mean fitness   0.6736       0.6737*
B.8            Average best-so-far    93.9796      93.9796
               Median best-so-far     93.9796      93.9796
               Average mean fitness   93.9796      93.9796
Table 5. Optimal objective values found by Hassanzadeh et al.'s max-product GA and by our algorithm with the product t-norm ("*" marks the better result).

Test problem   Hassanzadeh et al.   Our algorithm
B.1            13.6174              13.61740246*
B.2            1.5557               1.5557
B.3            0                    0
B.4            5.8816               5.8816
B.5            45.065               45.0314*
B.6            0.3671               0.4622*
B.7            2.47023              2.47023
B.8            38.0195              38.0150*
Table 6. Convergence comparison between the max-product GA and our GA ("*" marks the better result).

Test problem   Metric                 Max-product GA   Our GA
B.1            Average best-so-far    13.61745         13.61740502*
               Median best-so-far     13.6174          13.61740260*
               Average mean fitness   13.61786         13.61781613*
B.2            Average best-so-far    1.5557           1.5557
               Median best-so-far     1.5557           1.5557
               Average mean fitness   1.5524           1.5557*
B.3            Average best-so-far    0                0
               Median best-so-far     0                0
               Average mean fitness   1.54E-05         0*
B.4            Average best-so-far    5.8816           5.8816
               Median best-so-far     5.8816           5.8816
               Average mean fitness   5.8823           5.8816*
B.5            Average best-so-far    45.065           45.0315*
               Median best-so-far     45.065           45.0314*
               Average mean fitness   45.1499          45.0460*
B.6            Average best-so-far    0.3671           0.4622*
               Median best-so-far     0.3671           0.4622*
               Average mean fitness   0.3668           0.4614*
B.7            Average best-so-far    2.47023          2.47023
               Median best-so-far     2.47023          2.47023
               Average mean fitness   2.47018          2.470213*
B.8            Average best-so-far    38.0195          38.0150*
               Median best-so-far     38.0195          38.0150*
               Average mean fitness   38.0292          38.0171*
Finally, the authors of both [29] and [42] used the same "three-point" crossover operator. The three-point crossover is defined by three points (two parents ${x}_{1},\text{}{x}_{2},$ and the maximum solution $\overline{X})$ and two operators called "contraction" and "extraction". Both the contraction and the extraction operators are applied between ${x}_{1}$ and ${x}_{2},$ and between ${x}_{i}\text{}(i=1,2)$ and $\overline{X}.$ However, of the four resulting cases, only one is guaranteed to produce a feasible offspring (namely, the contraction between ${x}_{i}\text{}(i=1,2)$ and $\overline{X}).$ Therefore, in the other three cases, the feasibility of the newly generated solutions must be checked by substituting them into the fuzzy relational equations as well as the constraints ${x}_{j}\in [0,1],\text{}\forall j\in J.$ In contrast, the current crossover operator uses only one parent at a time. Offspring ${x}_{new1}$ is obtained as a random point on the line segment between ${x}^{\prime}$ and $\overline{X},$ whereas offspring ${x}_{new2}$ lies close to its parent. This difference between ${x}_{new1}$ and ${x}_{new2}$ provides a suitable trade-off between exploration and exploitation. Moreover, as stated in Remark 6, the new solutions ${x}_{new1}$ and ${x}_{new2}$ are always feasible.
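To make the contrast concrete, the single-parent crossover can be sketched as below. The line-segment rule for ${x}_{new1}$ follows the description above; the small-perturbation rule for ${x}_{new2}$ is only an illustrative guess, since the text states merely that it lies close to its parent (the names x_prime, x_max and the step sigma are ours, not the paper's):

```python
import random

def single_parent_crossover(x_prime, x_max, sigma=0.01):
    """Generate two offspring from one parent x_prime and the maximum
    solution x_max (componentwise upper bound of the feasible region)."""
    lam = random.random()
    # x_new1: a random point on the line segment between x_prime and x_max
    # (exploration toward the maximum solution).
    x_new1 = [p + lam * (m - p) for p, m in zip(x_prime, x_max)]
    # x_new2: a small perturbation of the parent, clipped to [0, x_max]
    # (exploitation near the parent); this rule is illustrative only.
    x_new2 = [min(m, max(0.0, p + random.uniform(-sigma, sigma)))
              for p, m in zip(x_prime, x_max)]
    return x_new1, x_new2
```

Because both offspring stay in the box $[0,\overline{X}]$, no feasibility check against the fuzzy relational equations is sketched here; in the actual algorithm that guarantee comes from the structure of the feasible set (Remark 6).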
As future work, we aim to test our algorithm on other types of nonlinear optimization problems whose constraints are defined as FRE or FRI with other well-known t-norms.
1. Chang CW, Shieh BS. Linear optimization problem constrained by fuzzy max–min relation equations. Information Sciences. 2013;234:71–79.
2. Chen L, Wang PP. Fuzzy relation equations (I): the general and specialized solving algorithms. Soft Computing. 2002;6(6):428–435.
3. Chen L, Wang PP. Fuzzy relation equations (II): the branch-point-solutions and the categorized minimal solutions. Soft Computing. 2007;11(1):33–40.
4. Dempe S, Ruziyeva A. On the calculation of a membership function for the solution of a fuzzy linear optimization problem. Fuzzy Sets and Systems. 2012;188(1):58–67.
5. Di Martino F, Sessa S. Digital watermarking in coding/decoding processes with fuzzy relation equations. Soft Computing. 2006;10(3):238–243.
6. Di Nola A, Sessa S, Pedrycz W, Sanchez E. Fuzzy Relational Equations and Their Applications in Knowledge Engineering. Dordrecht: Kluwer Academic Press; 1989.
7. Dubey D, Chandra S, Mehra A. Fuzzy linear programming under interval uncertainty based on IFS representation. Fuzzy Sets and Systems. 2012;188(1):68–87.
8. Dubois D, Prade H. Fundamentals of Fuzzy Sets. Boston: Kluwer; 2000.
9. Fan YR, Huang GH, Yang AL. Generalized fuzzy linear programming for decision making under uncertainty: feasibility of fuzzy solutions and solving approach. Information Sciences. 2013;241:12–27.
10. Fang SC, Li G. Solving fuzzy relation equations with a linear objective function. Fuzzy Sets and Systems. 1999;103(1):107–113.
11. Fernandez MJ, Gil P. Some specific types of fuzzy relation equations. Information Sciences. 2004;164(1–4):189–195.
12. Freson S, De Baets B, De Meyer H. Linear optimization with bipolar max–min constraints. Information Sciences. 2013;234:3–15.
13. Ghodousian A. Optimization of the reducible objective functions with monotone factors subject to FRI constraints defined with continuous t-norms. Archives of Industrial Engineering. 2018;1(1):1–19.
14. Ghodousian A, Raeisian Parvari M. A modified PSO algorithm for linear optimization problem subject to the generalized fuzzy relational inequalities with fuzzy constraints (FRI-FC). Information Sciences. 2017;418–419:317–345.
15. Ghodousian A, Babalhavaeji A. An efficient genetic algorithm for solving nonlinear optimization problems defined with fuzzy relational equations and max-Lukasiewicz composition. Applied Soft Computing. 2018;69:475–492.
16. Ghodousian A, Naeeimi M, Babalhavaeji A. Nonlinear optimization problem subjected to fuzzy relational equations defined by Dubois–Prade family of t-norms. Computers & Industrial Engineering. 2018;119:167–180.
17. Ghodousian A, Zarghani R. Linear optimization on the intersection of two fuzzy relational inequalities defined with Yager family of t-norms. Journal of Algorithms and Computation. 2017;49(1):55–82.
18. Ghodousian A, Ahmadi A, Dehghani. Solving a non-convex nonlinear optimization problem constrained by fuzzy relational equations and Sugeno–Weber family of t-norms. Journal of Algorithms and Computation. 2017;49(2):63–101.
19. Ghodousian A, Khorram E. An algorithm for optimizing the linear function with fuzzy relation equation constraints regarding max-prod composition. Applied Mathematics and Computation. 2006;178(2):502–509.
20. Ghodousian A, Khorram E. Fuzzy linear optimization in the presence of the fuzzy relation inequality constraints with max-min composition. Information Sciences. 2008;178(2):501–519.
21. Ghodousian A, Khorram E. Linear optimization with an arbitrary fuzzy relational inequality. Fuzzy Sets and Systems. 2012;206:89–102.
22. Ghodousian A, Khorram E. Solving a linear programming problem with the convex combination of the max-min and the max-average fuzzy relation equations. Applied Mathematics and Computation. 2006;180(1):411–418.
23. Guo FF, Pang LP, Meng D, Xia ZQ. An algorithm for solving optimization problems with fuzzy relational inequality constraints. Information Sciences. 2013;252:20–31.
24. Guo FF, Xia ZQ. An algorithm for solving optimization problems with one linear objective function and finitely many constraints of fuzzy relation inequalities. Fuzzy Optimization and Decision Making. 2006;5(1):33–47.
25. Guu SM, Wu YK. Minimizing a linear objective function under a max-t-norm fuzzy relational equation constraint. Fuzzy Sets and Systems. 2010;161(2):285–297.
26. Guu SM, Wu YK. Minimizing a linear objective function with fuzzy relation equation constraints. Fuzzy Optimization and Decision Making. 2002;1(4):347–360.
27. Guu SM, Wu YK. Minimizing a linear objective function under a max-t-norm fuzzy relational equation constraint. Fuzzy Sets and Systems. 2010;161(2):285–297.
28. Han SC, Li HX. Notes on pseudo-t-norms and implication operators on a complete Brouwerian lattice and pseudo-t-norms and implication operators: direct products and direct product decompositions. Fuzzy Sets and Systems. 2005;153(2):289–294.
29. Hassanzadeh R, Khorram E, Mahdavi I, Mahdavi-Amiri N. A genetic algorithm for optimization problems with fuzzy relation constraints using max-product composition. Applied Soft Computing. 2011;11(1):551–560.
30. Hock W, Schittkowski K. Test Examples for Nonlinear Programming Codes. Lecture Notes in Economics and Mathematical Systems. 1981;187.
31. Khorram E, Ghodousian A. Linear objective function optimization with fuzzy relation equation constraints regarding max-av composition. Applied Mathematics and Computation. 2006;173(2):872–886.
32. Klement EP, Mesiar R, Pap E. Triangular norms. Position paper I: basic analytical and algebraic properties. Fuzzy Sets and Systems. 2004;143(1):5–26.
33. Lee HC, Guu SM. On the optimal three-tier multimedia streaming services. Fuzzy Optimization and Decision Making. 2002;2(1):31–39.
34. Li P, Fang SC. On the resolution and optimization of a system of fuzzy relational equations with sup-T composition. Fuzzy Optimization and Decision Making. 2008;7(2):169–214.
35. Li P, Liu Y. Linear optimization with bipolar fuzzy relational equation constraints using Lukasiewicz triangular norm. Soft Computing. 2014;18(7):1399–1404.
36. Li P, Fang SC. A survey on fuzzy relational equations, part I: classification and solvability. Fuzzy Optimization and Decision Making. 2009;8(2):179–229.
37. Lin JL. On the relation between fuzzy max-Archimedean t-norm relational equations and the covering problem. Fuzzy Sets and Systems. 2009;160(16):2328–2344.
38. Lin JL, Wu YK, Guu SM. On fuzzy relational equations and the covering problem. Information Sciences. 2011;181(14):2951–2963.
39. Liu C, Lur YY, Wu YK. Linear optimization of bipolar fuzzy relational equations with max-Łukasiewicz composition. Information Sciences. 2016;360:149–162.
40. Loetamonphong J, Fang SC. Optimization of fuzzy relation equations with max-product composition. Fuzzy Sets and Systems. 2001;118(3):509–517.
41. Loia V, Sessa S. Fuzzy relation equations for coding/decoding processes of images and videos. Information Sciences. 2005;171(1–3):145–172.
42. Lu J, Fang SC. Solving nonlinear optimization problems with fuzzy relation equation constraints. Fuzzy Sets and Systems. 2001;119(1):1–20.
43. Markovskii V. On the relation between equations with max-product composition and the covering problem. Fuzzy Sets and Systems. 2005;153(2):261–273.
44. Pedrycz W. Granular Computing: Analysis and Design of Intelligent Systems. Boca Raton: CRC Press; 2013.
45. Pedrycz W. Fuzzy relational equations with generalized connectives and their applications. Fuzzy Sets and Systems. 1983;10(1–3):185–201.
46. Pedrycz W, Vasilakos AV. Modularization of fuzzy relational equations. Soft Computing. 2002;6(1):33–37.
47. Peeva K. Resolution of fuzzy relational equations: methods, algorithm and software with applications. Information Sciences. 2013;234:44–63.
48. Perfilieva I. Fuzzy function as an approximate solution to a system of fuzzy relation equations. Fuzzy Sets and Systems. 2004;147(3):363–383.
49. Perfilieva I, Novak V. System of fuzzy relation equations model of IF-THEN rules. Information Sciences. 2007;177(16):3218–3227.
50. Perfilieva I. Finitary solvability conditions for systems of fuzzy relation equations. Information Sciences. 2013;234:29–43.
51. Qu XB, Wang XP, Lei MH. Conditions under which the solution sets of fuzzy relational equations over complete Brouwerian lattices form lattices. Fuzzy Sets and Systems. 2014;234:34–45.
52. Sanchez E. Resolution of composite fuzzy relation equations. Information and Control. 1976;30(1):38–48.
53. Shieh BS. Infinite fuzzy relation equations with continuous t-norms. Information Sciences. 2008;178(8):1961–1967.
54. Shieh BS. Minimizing a linear objective function under a fuzzy max-t-norm relation equation constraint. Information Sciences. 2011;181(4):832–841.
55. Sun F. Conditions for the existence of the least solution and minimal solutions to fuzzy relation equations over complete Brouwerian lattices. Information Sciences. 2012;205:86–92.
56. Sun F, Wang XP, Qu XB. Minimal join decompositions and their applications to fuzzy relation equations over complete Brouwerian lattices. Information Sciences. 2013;224(1):143–151.
57. Wang S, Fang SC, Nuttle HLW. Solution sets of interval-valued fuzzy relational equations. Fuzzy Optimization and Decision Making. 2003;2(1):41–60.
58. Wang PZ, Zhang DZ, Sanchez E, Lee ES. Latticized linear programming and fuzzy relation inequalities. Journal of Mathematical Analysis and Applications. 1991;159(1):72–87.
59. Wu YK, Guu SM. A note on fuzzy relation programming problems with max-strict-t-norm composition. Fuzzy Optimization and Decision Making. 2004;3(3):271–278.
60. Wu YK, Guu SM. An efficient procedure for solving a fuzzy relation equation with max-Archimedean t-norm composition. IEEE Transactions on Fuzzy Systems. 2008;16(1):73–84.
61. Wu YK. Optimization of fuzzy relational equations with max-av composition. Information Sciences. 2007;177(19):4216–4229.
62. Wu YK, Guu SM. Minimizing a linear function under a fuzzy max-min relational equation constraint. Fuzzy Sets and Systems. 2005;150(1):147–162.
63. Wu YK, Guu SM, Liu JY. Reducing the search space of a linear fractional programming problem under fuzzy relational equations with max-Archimedean t-norm composition. Fuzzy Sets and Systems. 2008;159(24):3347–3359.
64. Xiong QQ, Wang XP. Fuzzy relational equations on complete Brouwerian lattices. Information Sciences. 2012;193:141–152.
65. Xiong QQ, Wang XP. Some properties of sup-min fuzzy relational equations on infinite domains. Fuzzy Sets and Systems. 2005;151(2):393–402.
66. Yang XP, Zhou XG, Cao BY. Latticized linear programming subject to max-product fuzzy relation inequalities with application in wireless communication. Information Sciences. 2016;358–359:44–55.
67. Yang SJ. An algorithm for minimizing a linear objective function subject to the fuzzy relation inequalities with addition-min composition. Fuzzy Sets and Systems. 2014;255:41–51.
68. Yeh T. On the minimal solutions of max-min fuzzy relation equations. Fuzzy Sets and Systems. 2008;159(1):23–39.