Research Article
Open Access
On The Frank FREs and Its Application in
Optimization Problems
Amin Ghodousian*
Faculty of Engineering Science, College of Engineering, University of Tehran, Tehran, Iran
*Corresponding author: Amin Ghodousian, Faculty of Engineering Science, College of Engineering, University of Tehran, P.O. Box 11365-4563,
Tehran, Iran
Received: June 17, 2018; Accepted: June 23, 2018; Published: June 29, 2018
Citation: Ghodousian A (2018) On The Frank FREs and Its Application in Optimization Problems. J Comp Sci Appl Inform Technol. 3(2): 1-14. DOI:
10.15226/2474-9257/3/2/00130
Abstract
Frank t-norms are a parametric family of continuous Archimedean
t-norms, which are also strict when the parameter is positive and finite.
Very often, this family of t-norms is also called the family of fundamental
t-norms because of the role it plays in several applications. In this
paper, we study a nonlinear optimization problem with a special
system of Fuzzy Relational Equations (FRE) as its constraints. We
firstly investigate the resolution of the feasible solutions set when it
is defined with max-Frank composition and present some necessary
and sufficient conditions for determining the feasibility and some
procedures for simplifying the problem. Since the feasible solutions
set of FREs is non-convex and the finding of all minimal solutions is
an NP-hard problem, conventional nonlinear programming methods
may involve high computation complexity. Based on the obtained
theoretical properties of the problem, a genetic algorithm is used
that preserves the feasibility of newly generated solutions and does
not need to find the minimal solutions in advance. Moreover, a method
is presented to generate feasible max-Frank FREs as test problems for
evaluating the performance of our algorithm. The presented method
has been compared with some related works. The obtained results
confirm the high performance of the current method in solving such
nonlinear problems.
Keywords: Fuzzy relational equations; Nonlinear optimization;
Genetic algorithm
Introduction
In this paper, we study the following nonlinear problem in
which the constraints are formed as fuzzy relational equations
defined by the Frank t-norm:

min f(x)
subject to A ∘ x = b, x ∈ [0, 1]^n,     (1)

where A = (a_ij)_{m×n} is a fuzzy matrix, b = (b_1, ..., b_m) is an m-dimensional
fuzzy vector, and "∘" is the max-Frank composition, that is,

T_s(x, y) = log_s( 1 + (s^x − 1)(s^y − 1) / (s − 1) ),

in which s > 0 and s ≠ 1.
If a_i denotes the i'th row of matrix A, then problem (1) can be
expressed as follows:

min f(x)
subject to a_i ∘ x = b_i (i = 1, ..., m), x ∈ [0, 1]^n,

where the constraints mean max_{j=1,...,n} T_s(a_ij, x_j) = b_i for each i = 1, ..., m.
The above definition can be extended to the cases s = 0, s = 1 and s = ∞
by taking limits. So, it is easy to verify that

lim_{s→0} T_s(x, y) = min{x, y},  lim_{s→1} T_s(x, y) = x·y,  lim_{s→∞} T_s(x, y) = max{0, x + y − 1},

that is, the Frank t-norm is converted to the minimum, product and
Lukasiewicz t-norms, respectively. The Frank family of t-norms plays a
central role in the investigation of the contraposition law for QL-implications
[8].
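Because the displayed formulas are lost in the extracted text, the following Python sketch restates the standard Frank t-norm and the max-Frank composition used in problem (1); the function names are mine, and the limiting cases are only checked numerically.

import math

def frank_t(x, y, s):
    """Frank t-norm T_s(x, y) = log_s(1 + (s^x - 1)(s^y - 1)/(s - 1)), s > 0, s != 1."""
    if s <= 0:
        raise ValueError("the Frank parameter s must be positive")
    if s == 1.0:
        return x * y  # the limiting case s -> 1 is the product t-norm
    return math.log(1.0 + (s ** x - 1.0) * (s ** y - 1.0) / (s - 1.0), s)

def max_frank_compose(A, x, s):
    """Max-Frank composition: (A o x)_i = max_j T_s(a_ij, x_j)."""
    return [max(frank_t(a_ij, x_j, s) for a_ij, x_j in zip(row, x)) for row in A]

# Numerical check of the limiting behaviour described above.
a, b = 0.6, 0.8
print(round(frank_t(a, b, 1e-8), 4))   # ~0.60 = min(a, b)
print(round(frank_t(a, b, 1.0), 4))    #  0.48 = a*b
print(round(frank_t(a, b, 1e8), 4))    # ~0.40 = max(0, a + b - 1)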
The theory of fuzzy relational equations was first proposed
by Sanchez [52], who introduced an FRE with max-min composition
and applied the model to medical diagnosis in Brouwerian logic.
Nowadays, it is well known that many issues associated with a
body of knowledge can be treated as FRE problems [44]. In addition
to such applications, FRE theory has been applied in many fields
including fuzzy control, discrete dynamic systems, prediction of
fuzzy systems, fuzzy decision making, fuzzy pattern recognition,
fuzzy clustering, image compression and reconstruction, and
so on. Pedrycz [45] categorized and extended two ways of
generalizing FRE, in terms of the sets under discussion and of the
various operations taken into account. Since then,
many theoretical improvements have been investigated and
many applications have been presented [2,3,5,11,24,28,32,37,38,
41,43,46,48,49,57,59,65].
The solvability and the determination of the solution set are the
primary (and most fundamental) subjects concerning FRE
problems. Many studies have reported fuzzy relational equations
with max-min and max-product compositions. Both compositions
are special cases of the max-triangular-norm (max-t-norm).
Di Nola, et al. proved that the solution set of FRE defined by
continuous max-t-norm composition is often a non-convex set
that is completely determined by one maximum solution and a
finite number of minimal solutions [6]. Over the last decades, the
solvability of FRE defined with different max-t compositions has
been investigated by many researchers [36,47,50,51,53,55,56,60,
64,68].
Optimizing an objective function subjected to a system of
fuzzy relational equations or inequalities (FRI) is one of the most
interesting and on-going topics among the problems related to
the FRE (or FRI) theory [1,9,19-27,33,34,39,54,61,66]. By
far the most frequently studied aspect is the minimization of a
linear objective function subject to FRE defined by the max-min
composition [1,20]. An almost standard approach is to
translate this type of problem into a corresponding 0-1 integer
linear programming problem, which is then solved using a branch-
and-bound method [10,62]. The topic of the linear optimization
problem was also investigated with max-product operation
[19,26,40]. Moreover, some studies have considered a more
general setting in which the max-min and max-product compositions
are replaced with a max-t-norm composition, a max-average
composition or a max-star composition
[22,25,27,31,34,54,61].
Recently, many interesting generalizations of the linear
and non-linear programming problems constrained by FRE or
FRI have been introduced and developed, based on the composite
operations and fuzzy relations used in the definition of the
constraints and on modifications of the objective function of
the problems [4,7,12,20,35,39,58,63,66]. For instance, the linear
optimization of bipolar FREs defined with the max-min and max-
Lukasiewicz compositions was studied in [12,35,39]. Ghodousian and Khorram
[21] focused on the algebraic structure of two fuzzy relational
inequalities A φ x ≤ b^1 and D φ x ≥ b^2, and studied a mixed fuzzy
system formed by the two preceding FRIs, where φ is an
operator with (closed) convex solutions. Yang [67] studied
the optimal solution of minimizing a linear objective function
subject to fuzzy relational inequalities with addition-min
composition, that is, constraints of the form
min(a_i1, x_1) + ... + min(a_in, x_n) ≥ b_i for i = 1, ..., m and x ∈ [0, 1]^n.
In [20], the authors introduced the FRI-FC problem, in which the
composition is max-min and "⪯" denotes the relaxed or fuzzy
version of the ordinary inequality "≤". Another interesting
generalization of such optimization problems is related to the
objective function. Wu, et al. [63] presented an efficient method
to optimize a linear fractional programming problem under
FRE with max-Archimedean t-norm composition. Dempe and
Ruziyeva [4] generalized the fuzzy linear optimization problem
by considering fuzzy coefficients. Dubey, et al. [7] studied linear
programming problems involving interval uncertainty modeled
using intuitionistic fuzzy sets. On the other hand, Lu and
Fang [42] considered a single non-linear objective function
subject to FRE constraints with the max-min operator and
proposed a genetic algorithm for solving the problem. In [29], the
authors used the same method for the max-product operator. Also,
Ghodousian, et al. [15,16,18] presented genetic algorithms to solve
non-linear problems with FRE constraints defined by the Lukasiewicz,
Dubois-Prade and Sugeno-Weber operators.
Generally, there are three important difficulties related to FRE
or FRI problems. Firstly, in order to completely determine the
solution set of an FRE or FRI, we must initially find all the minimal
solutions, which is an NP-hard problem.
Secondly, a feasible region formed as an FRE or FRI is often a non-convex
set [21]. Finally, FREs and FRIs as feasible regions lead to
optimization problems with highly non-linear constraints. Due to
the above mentioned difficulties, although the analytical methods
are efficient to find exact optimal solutions, they may also involve
high computational complexity for high-dimensional problems
(especially, if the simplification processes cannot considerably
reduce the problem).
In this paper, we use the genetic algorithm proposed in [21]
for solving problem (1); the algorithm keeps the search inside the
feasible region without finding any minimal solution or checking
the feasibility of newly generated solutions. Since the feasibility
of problem (1) essentially depends on the t-norm (the Frank
t-norm) used in the definition of the constraints, a method is also
presented to construct feasible test problems. More precisely,
we construct a feasible problem by randomly generating a fuzzy
matrix and a fuzzy vector according to some criteria resulting
from the necessary and sufficient conditions. It is proved that the
max-Frank fuzzy relational equations constructed by this method
have a nonempty solution set. Moreover, a comparison is made between the
current method and the algorithms presented in [42] and [29].
The remainder of the paper is organized as follows. Section 2
takes a brief look at some basic results on the feasible solutions set
of problem (1). In Section 3, the genetic algorithm is briefly described.
Finally, in Section 4 the experimental results are demonstrated
and a conclusion is provided in Section 5.
Basic Properties of Max-Frank FRE
Characterization of feasible solutions set
This section describes the basic definitions and
structural properties concerning problem (1) that are used
throughout the paper. For the sake of simplicity, let S(a_i, b_i)
denote the feasible solutions set of the i'th equation, that is,
S(a_i, b_i) = { x ∈ [0, 1]^n : max_{j=1,...,n} T_s(a_ij, x_j) = b_i }.
Also, let S(A, b) denote the feasible solutions set of problem (1).
Based on the foregoing notations, it is clear that
S(A, b) = ∩_{i=1}^{m} S(a_i, b_i).
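As an illustration of these sets (and reusing frank_t from the earlier sketch), a direct feasibility check of a candidate vector x against the system A ∘ x = b can be written as follows; the tolerance argument is my addition for floating-point comparisons.

def satisfies_equation(a_row, x, b_i, s, tol=1e-9):
    """x belongs to S(a_i, b_i) iff max_j T_s(a_ij, x_j) = b_i (up to tol)."""
    lhs = max(frank_t(a_ij, x_j, s) for a_ij, x_j in zip(a_row, x))
    return abs(lhs - b_i) <= tol

def is_feasible(A, b, x, s, tol=1e-9):
    """x belongs to S(A, b) iff it lies in every S(a_i, b_i)."""
    return all(satisfies_equation(row, x, b_i, s, tol) for row, b_i in zip(A, b))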
Definition 1: For each
we define
According to definition 1, we have the following lemmas, which are easily proved by definition 1, the monotonicity and identity laws of t-norms, and the definition of the Frank t-norm.
Lemma 1: Let
If
then
Lemma 2: Let
and
(a) If
and
then
(b) If
and
then
(c) If
and
then
(d) If
then
(e) If
then
for
and
for
Lemma 3 below gives a necessary and sufficient condition for the
feasibility of sets
Lemma 3: For a fixed
if and only if
Proof: The proof is similar to the proof of Lemma 3 in [15].
Definition 2: Suppose that
and
(hence,
from lemma 3). Let
where the components are defined as follows:
Also, for each
we define
such that
The following theorem characterizes the feasible region of the
i‘th relational equation
Theorem 1: Let
If
then
Proof: For a more general case, see Corollary 2.3 in [21].
From theorem 1,
is the unique maximum solution and
‘s
are the minimal solutions of
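The explicit formulas of Definition 2 are not legible in the extracted text. Independently of their exact form, for a continuous t-norm the candidate maximum solution can be built componentwise from the residual implication [6]; the sketch below follows that standard construction for the Frank t-norm and is not claimed to reproduce the paper's notation.

import math

def frank_residuum(a, b, s):
    """I_s(a, b) = sup{x in [0,1] : T_s(a, x) <= b}; it equals 1 when a <= b,
    and log_s(1 + (s - 1)(s^b - 1)/(s^a - 1)) otherwise."""
    if a <= b:
        return 1.0
    return min(1.0, math.log(1.0 + (s - 1.0) * (s ** b - 1.0) / (s ** a - 1.0), s))

def candidate_max_solution(A, b, s):
    """Componentwise candidate x_hat_j = min_i I_s(a_ij, b_i); the system
    A o x = b is solvable iff x_hat itself satisfies it."""
    n = len(A[0])
    return [min(frank_residuum(row[j], b_i, s) for row, b_i in zip(A, b))
            for j in range(n)]

Feasibility of a system can then be decided by passing candidate_max_solution(A, b, s) to is_feasible from the earlier sketch.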
Definition 3: Let
be the maximum solution of
We define
Definition 4: Let
so that
and
let
be the set of all vectors
. For the sake of convenience,
we represent each
as an
–dimensional vector
in which
Definition 5: Let
We define
where
From the relation
and Theorem 1, the
following theorem is easily attained.
Theorem 2:
As a consequence, it turns out that
is the unique maximum solution and
's
are the minimal solutions of
Moreover, we have the following corollary, which follows
directly from theorem 2.
Corollary 1(first necessary and sufficient condition):
if and only if
.
Example 1: Consider the problem below with Frank t-norm
where
(i.e.,
). By definition 1, we have
and
The unique maximum solution and the
minimal solutions of each equation are obtained by definition 2
as follows, where 0.783316030456544 has been rounded to 0.7833:
Therefore, by theorem 1 we have
and
and
where
is a zero vector. From
definition 3,
It is easy to verify
that
Therefore, the above problem is feasible
by corollary 1. Finally, the cardinality of set
is equal to 24
(definition 4). So, we have 24 solutions
associated with 24
vectors
. For example, for
we obtain
from definition 5 that means
Simplification Processes
In practice, there are often some components of matrix
that
have no effect on the solutions to problem (1). Therefore, we can
simplify the problem by changing the values of these components
to zeros. For this reason, various simplification processes have
been proposed by researchers. We refer the interested reader to
[21], where a brief review of such processes is given. Here,
we present two simplification techniques based on the Frank
t-norm.
Definition 6: If a value changing in an element, say
, of a given
fuzzy relation matrix
has no effect on the solutions of problem
(1), this value changing is said to be an equivalence operation.
Corollary 2: Suppose that
In
this case, it is obvious that
is equivalent to
that is, “resetting
to zero” has no effect on the
solutions of problem (1) (since component
only appears in
the i‘th constraint of problem (1)). Therefore, if
then “resetting
to zero” is an equivalence
operation.
Lemma 4 (first simplification): Suppose that
for some
and
Then, “resetting
to zero” is an equivalence
operation.
Proof: From corollary 2, it is sufficient to show that
But, from lemma 1 we have
Thus
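The precise condition of Lemma 4 is not recoverable from the extracted text; as a hedged substitute, the sketch below applies a weaker criterion that is valid for every t-norm: since T_s(a_ij, x_j) ≤ min(a_ij, x_j), an entry with a_ij < b_i can never raise the i'th maximum to b_i, so resetting it to zero is an equivalence operation.

def conservative_simplify(A, b):
    """Zero out entries that can never attain the right-hand side; this is a
    conservative pass and may reset fewer entries than the paper's lemmas 4-5."""
    return [[0.0 if a_ij < b_i else a_ij for a_ij in row]
            for row, b_i in zip(A, b)]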
Lemma 5 (second simplification): Suppose that
and
where
and
If at least one of the following
conditions hold, then “resetting
to zero” is an equivalence
operation:
(a) There exists some
such that
and
(b) There exists some
such that
Proof: (a) Similar to the proof of lemma 4, we show that
Consider an arbitrary feasible
solution
Since
it turns out that
never holds. So, assume that
that is
Since
from
lemma 2 we conclude that
So, by the assumption we have
Therefore, lemma 2 (part (a)) implies
that
contradicts
(b) By the assumption, we have
Now, the result similarly
follows by a simpler argument.
Example 2: Consider the problem presented in example 1.
From the first simplification (lemma 4), “resetting the following
components
to zeros” are equivalence operations:
in all of these cases,
that is
Also, from the second simplification (lemma 5, part (a)), we can change the value of component
to zero;
because
(i.e.,
),
(i.e.
),
and
. Moreover, from
lemma 5 (part (b)), we can also change the values of components
and
to zeros with no effect on the solutions set of the
problem (since
and
In addition to simplifying the problem, a necessary and
sufficient condition is also derived from lemma 5. Before formally
presenting the condition, some useful notations are introduced. Let
denote the simplified matrix resulting from
after applying
the simplification processes (lemmas 4 and 5). Also, similar
to definition 1, assume that
where
denotes
‘th component of matrix
. The following theorem
gives a necessary and sufficient condition for the feasibility of
problem (1).
Theorem 3 (second necessary and sufficient condition):
if and only if
Proof: Since
from lemmas 4 and 5, it
is sufficient to show that
if and only if
Let
Therefore,
where
denotes the i'th row of matrix
Now, lemma
3 implies
Conversely, suppose that
Again, by using lemma 3 we have
By contradiction, suppose that
Therefore,
from corollary 1, and then there exists
such that
Since
(from
lemma 1)
, we must have either
or
Anyway, since
(i.e.,
we have
and
then the former case
(i.e.,
) never holds.
Therefore,
that implies
and
Hence, by lemma 2, we must have
On the other hand,
Therefore,
and then from definitions 2 and 3, for each
there must exist
such that either
and
or
and
So far, we have proved that
and for
each
there exist
such that either
and
(because,
) or
But in both cases, we must have
from the parts (a) and (b) of lemma 5, respectively.
Therefore,
that is a contradiction.
Remark 1: Since
(from lemmas 4 and 5), we can rewrite all the previous
definitions and results in a simpler manner by replacing
with
A summary of the GA
In this section, the genetic algorithm proposed in [15] is
briefly discussed. Since the feasible region of problem (1) is
non-convex, a convex subset of the feasible region is firstly
introduced. Consequently, the proposed GA can easily generate
the initial population by randomly choosing individuals from this
convex feasible subset. In the last part of this section, a method
is presented to generate random feasible max-Frank fuzzy
relational equations.
Initialization
The initial population is given by randomly generating the
individuals inside the feasible region. For this purpose, we firstly
find a convex subset of the feasible solutions set, that is, we find
set
such that
and
is convex. Then, the initial
population is generated by randomly selecting individuals from
set
.
Definition 7: Suppose that
For each
let
where the components are defined
as follows:
Also, we define
Remark 2: According to definition 2 and remark 1, it
is clear that for a fixed
and
Therefore, from definitions 5 and 7 we have
and
Thus
Example 3: Consider the problem presented in example 1, where
Also, according to example 2, the
simplified matrix
is
From definition 7, we have
, and then
Therefore, set
is obtained as a collection of intervals:
By generating random numbers in the corresponding
intervals, we acquire one initial individual:
The algorithm for generating the initial population is simply
obtained as follows:
Algorithm 1 (Initial Population):
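The listing of Algorithm 1 is not reproduced in the extracted text. A minimal sketch of the sampling step it describes is given below, where lower and upper are placeholder names for the endpoints of the intervals describing the convex feasible subset of Definition 7.

import random

def initial_population(lower, upper, pop_size, rng=random.Random(0)):
    """Draw each individual componentwise and uniformly at random from the
    intervals [lower_j, upper_j] of the convex feasible subset."""
    return [[rng.uniform(lo, up) for lo, up in zip(lower, upper)]
            for _ in range(pop_size)]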
Selection Strategy
Suppose that the individuals in the population are sorted
according to their ranks from the best to worst, that is, individual
has rank
The probability
of choosing the
‘th
individual is given by the following formulas:
where the weight assigned to an individual is the value of the Gaussian
function with argument equal to its rank, mean 1, and a standard
deviation determined by a parameter of the algorithm.
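The displayed selection formulas are lost in extraction. The sketch below is consistent with the description above (a Gaussian weight whose argument is the rank, with mean 1 and standard deviation sigma); normalizing the weights into probabilities is my assumption.

import math

def selection_probabilities(pop_size, sigma):
    """Rank-based selection: rank k (1 = best) gets weight exp(-(k-1)^2/(2*sigma^2));
    the selection probabilities are the normalized weights."""
    weights = [math.exp(-((k - 1) ** 2) / (2.0 * sigma ** 2))
               for k in range(1, pop_size + 1)]
    total = sum(weights)
    return [w / total for w in weights]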
Mutation Operator
As usual, suppose that
So, from theorem 3 we
have
where
(see
definition 1 and remark 1).
Definition 8: Let
So, we define
where
denotes the cardinality of set
The mutation operator is defined as follows:
Algorithm 2 (Mutation operator):
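The listing of Algorithm 2 is not reproduced here, so the following is only a schematic stand-in: it resets a randomly chosen component to zero and, for illustration, re-verifies feasibility with is_feasible from the earlier sketch; the paper's operator instead selects the component so that no such check is required.

import random

def mutate(x, A, b, s, rng=random.Random(0), max_tries=20):
    """Try to reset one randomly chosen component of x to zero while staying
    inside S(A, b); return the parent unchanged if no such reset is found."""
    for _ in range(max_tries):
        j = rng.randrange(len(x))
        y = list(x)
        y[j] = 0.0
        if is_feasible(A, b, y, s):
            return y
    return list(x)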
Crossover operator
In section 2, it was proved that
is the unique maximum
solution of
By using this result, the crossover operator
is stated as follows:
Algorithm 3 (Crossover operator):
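The listing of Algorithm 3 is likewise missing; based on the discussion in the comparison section (the offspring is a random point on the line segment between the parent and the maximum solution), a minimal sketch is:

import random

def crossover(parent, x_max, rng=random.Random(0)):
    """Return a random point on the segment between the parent and the maximum
    solution x_max; it stays in the box [parent, x_max] and so remains feasible
    whenever the parent is feasible (by the cell structure of Theorem 2)."""
    lam = rng.random()
    return [p + lam * (xm - p) for p, xm in zip(parent, x_max)]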
Construction of Test Problems
There are usually several ways to generate a feasible FRE
defined with different t-norms. In what follows, we present
a procedure to generate random feasible max-Frank fuzzy
relational equations:
Algorithm 4 (construction of feasible Max-Frank FRE):
By the following theorem, it is proved that algorithm 4 always
generates random feasible max-Frank fuzzy relational equations.
Theorem 4: The solutions set
of FRE (with Frank t-norm)
constructed by algorithm 4 is not empty.
Proof. According to
step 3 of the algorithm,
Therefore,
To complete the proof, we show that
By contradiction, suppose that the second simplification process
reset
to zero, for some
So,
and there must
exist some
such that either
and
or
and
In the former case, we note that
Anyway, both cases
contradict step 4.
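The steps of Algorithm 4 are not reproduced in the extracted text. As a hedged illustration of the same goal (and not necessarily the paper's construction), one simple way to obtain a feasible max-Frank system is to fix a reference solution first and define b from it, reusing max_frank_compose from the earlier sketch:

import random

def random_feasible_fre(m, n, s, rng=random.Random(0)):
    """Draw A and a reference solution x_star at random and set
    b_i = max_j T_s(a_ij, x_star_j); then x_star solves A o x = b by
    construction, so the solution set is nonempty."""
    A = [[rng.random() for _ in range(n)] for _ in range(m)]
    x_star = [rng.random() for _ in range(n)]
    b = max_frank_compose(A, x_star, s)
    return A, b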
Experimental Results and Comparison with
Related Works
In this section, we present the experimental results for
evaluating the performance of our algorithm. Firstly, we apply
our algorithm to 8 test problems described in Appendix A. The
test problems have been randomly generated in different sizes
by algorithm 4 given in section 3. Since the objective function is
an ordinary nonlinear function, we take some objective functions
from the well-known source: Test Examples for Nonlinear
Programming Codes [30]. In section 4.2, we make a comparison
against the algorithms proposed in [42] and [29]. To perform
a fair comparison, we follow the same experimental setup for
the parameters
and
as suggested by the authors in [29] and [42]. Since the authors
did not explicitly report the size of the population, we consider
for all the three GAs. As mentioned before, we set
in relation (2) for the current GA. Moreover, in order to
compare our algorithm with max-min GA (max-product GA [29]),
we modified all the definitions used in the current GA based on
the minimum t-norm (product t-norm) [42]. For example, we
used the simplification process presented in [42] for minimum,
and the simplification process given in [19,29] for product.
Finally, 30 experiments are performed for all the GAs and for
eight test problems reported in Appendix B, that is, each of the
preceding GAs is executed 30 times for each test problem. All
the test problems included in Appendix A have been defined
by considering
in
Also, the maximum number of
iterations is equal to 100 for all the methods.
Performance of the Max-Frank GA
To verify the solutions found by the max-Frank GA, the
optimal solutions of the test problems are also needed. Since the
feasible region is formed as the union of a finite number of closed convex cells
(theorem 2), the optimal solutions are also acquired by the
following procedure (a code sketch is given after the list):
1. Computing all the convex cells of the Frank FRE.
2. Searching the optimal solution for each convex cell.
3. Finding the global optimum by comparing these local optimal
solutions.
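A sketch of this verification procedure is given below, assuming each convex cell is the box between one minimal solution and the maximum solution (Theorem 2); the use of SciPy's L-BFGS-B as the per-cell solver is my choice and, for non-convex objectives, it only returns local minima within each cell.

import numpy as np
from scipy.optimize import minimize

def global_optimum_over_cells(f, minimal_solutions, x_max):
    """Minimize f over the union of boxes [x_min, x_max]: solve one
    box-constrained problem per cell and keep the best local result."""
    best_val, best_x = np.inf, None
    for x_min in minimal_solutions:
        bounds = list(zip(x_min, x_max))
        x0 = np.array([(lo + up) / 2.0 for lo, up in bounds])  # start at the cell centre
        res = minimize(f, x0, bounds=bounds, method="L-BFGS-B")
        if res.fun < best_val:
            best_val, best_x = res.fun, res.x
    return best_x, best_val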
The computational results of the eight test problems (see
Appendix A) are shown in Table 1 and Figures 1-8. In Table 1,
the results are averaged over 30 runs, and the average best-so-far
solution, the average mean fitness and the median of the best
solution in the last iteration are reported.
Table 2 includes the best results found by the max-Frank
GA and the above procedure. According to Table 2, the optimal
solutions computed by the max-Frank GA and the optimal
solutions obtained by the above procedure match very well.
Tables 1 and 2 demonstrate the attractive ability of the max-
Frank GA to detect the optimal solutions of problem (1). Also, the
good convergence rate of the max-Frank GA can be concluded
from Table 1 and Figures 1-8.
Table 1: Results of applying the max-Frank GA to the eight test
problems of Appendix A. The results have been averaged over 30 runs.
Maximum number of iterations=100.
Test problems | Average best-so-far | Median best-so-far | Average mean fitness
A.1 | 1.73375 | 1.73375 | 1.74575
A.2 | -2.5903 | -2.5907 | -2.5885
A.3 | -0.10266 | -0.10266 | -0.10249
A.4 | 2.677598 | 2.677598 | 2.677661
A.5 | 68.60251 | 68.60251 | 68.60253
A.6 | -0.46007 | -0.46313 | -0.45953
A.7 | 0.001365 | 0.001365 | 0.001372
A.8 | 105.996 | 105.996 | 105.996
Table 2: Comparison of the solutions found by Max-Frank GA and the
optimal values of the test problems described in Appendix A
Test problems | Solutions of max-Frank GA | Optimal values
A.1 | 1.73375 | 1.73372
A.2 | -2.5907 | -2.5908
A.3 | -0.10266 | -0.10266
A.4 | 2.677598 | 2.677598
A.5 | 68.60251 | 68.60251
A.6 | 0.001365 | 0.001364
A.7 | -1.19221 | -1.19221
A.8 | 105.996 | 105.9953
Figure 1: The performance of the max-Frank GA on test problem A.1.
Figure 2: The performance of the max-Frank GA on test problem A.2.
Figure 3: The performance of the max-Frank GA on test problem A.3.
Figure 4: The performance of the max-Frank GA on test problem A.4.
Figure 5: The performance of the max-Frank GA on test problem A.5.
Figure 6: The performance of the max-Frank GA on test problem A.6.
Figure 7: The performance of the max-Frank GA on test problem A.7.
Figure 8: The performance of the max-Frank GA on test problem A.8.
Comparisons with Other Works
As mentioned before, we can make a comparison between
the current GA, max-min GA and max-product GA [42,29]. For
this purpose, all the test problems described in Appendix B have
been designed in such a way that they are feasible for both the
minimum and product t-norms.
The first comparison is against the max-min GA: we apply
our algorithm (modified for the minimum t-norm) to the test
problems with the t-norm taken as the minimum. The results
are shown in Table 3 including the optimal objective values found
by the current GA and max-min GA. As is shown in this table, the
current GA finds better solutions for test problems 1, 5 and 6, and
the same solutions for the other test problems.
Table 4 shows that the current GA finds the optimal values
faster than max-min GA and hence has a higher convergence
rate, even for the same solutions. The only exception is test
problem B.8, in which all the results are the same. In all the cases,
results marked with "*" indicate the better result.
The second comparison is against the max-product GA. In this
case, we apply our algorithm (modified for the product t-norm)
to the same test problems with the t-norm taken as the product
(Tables 5 and 6).
The results, in Tables 5 and 6, demonstrate that the current
GA produces better solutions (or the same solutions with a higher
convergence rate) when compared against the max-product GA for
all the test problems.
Table 3: Best results found by our algorithm and max-min GA
Test problems | Lu and Fang | Our algorithm
B.1 | 8.429676 | 8.4296754*
B.2 | -1.3888 | -1.3888
B.3 | 0 | 0
B.4 | 5.0909 | 5.0909
B.5 | 71.1011 | 71.0968*
B.6 | -0.3291 | -0.4175*
B.7 | -0.6737 | -0.6737
B.8 | 93.9796 | 93.9796
Table 4: A Comparison between the results found by the current GA
and max-min GA
Test problems | Measure | Max-min GA | Our GA
B.1 | Average best-so-far | 8.429701 | 8.4296796*
B.1 | Median best-so-far | 8.429676 | 8.429676
B.1 | Average mean fitness | 8.430887 | 8.4298745*
B.2 | Average best-so-far | -1.3888 | -1.3888
B.2 | Median best-so-far | -1.3888 | -1.3888
B.2 | Average mean fitness | -1.3877 | -1.3886*
B.3 | Average best-so-far | 0 | 0
B.3 | Median best-so-far | 0 | 0
B.3 | Average mean fitness | 7.15E-07 | 0*
B.4 | Average best-so-far | 5.0909 | 5.0909
B.4 | Median best-so-far | 5.0909 | 5.0909
B.4 | Average mean fitness | 5.091 | 5.0908*
B.5 | Average best-so-far | 71.1011 | 71.0969*
B.5 | Median best-so-far | 71.1011 | 71.0968*
B.5 | Average mean fitness | 71.1327 | 71.1216*
B.6 | Average best-so-far | -0.3291 | -0.4175*
B.6 | Median best-so-far | -0.3291 | -0.4175*
B.6 | Average mean fitness | -0.3287 | -0.4162*
B.7 | Average best-so-far | -0.6737 | -0.6737
B.7 | Median best-so-far | -0.6737 | -0.6737
B.7 | Average mean fitness | -0.6736 | -0.6737*
B.8 | Average best-so-far | 93.9796 | 93.9796
B.8 | Median best-so-far | 93.9796 | 93.9796
B.8 | Average mean fitness | 93.9796 | 93.9796
Table 5: Best results found by our algorithm and max-product GA
Test problems | Hassanzadeh et al. | Our algorithm
B.1 | 13.6174 | 13.61740246*
B.2 | -1.5557 | -1.5557
B.3 | 0 | 0
B.4 | 5.8816 | 5.8816
B.5 | 45.065 | 45.0314*
B.6 | -0.3671 | -0.4622*
B.7 | -2.47023 | -2.47023
B.8 | 38.0195 | 38.0150*
Table 6: A Comparison between the results found by the current GA
and max-product GA
Test problems | Measure | Max-product GA | Our GA
B.1 | Average best-so-far | 13.61745 | 13.61740502*
B.1 | Median best-so-far | 13.6174 | 13.61740260*
B.1 | Average mean fitness | 13.61786 | 13.61781613*
B.2 | Average best-so-far | -1.5557 | -1.5557
B.2 | Median best-so-far | -1.5557 | -1.5557
B.2 | Average mean fitness | -1.5524 | -1.5557*
B.3 | Average best-so-far | 0 | 0
B.3 | Median best-so-far | 0 | 0
B.3 | Average mean fitness | 1.54E-05 | 0*
B.4 | Average best-so-far | 5.8816 | 5.8816
B.4 | Median best-so-far | 5.8816 | 5.8816
B.4 | Average mean fitness | 5.8823 | 5.8816*
B.5 | Average best-so-far | 45.065 | 45.0315*
B.5 | Median best-so-far | 45.065 | 45.0314*
B.5 | Average mean fitness | 45.1499 | 45.0460*
B.6 | Average best-so-far | -0.3671 | -0.4622*
B.6 | Median best-so-far | -0.3671 | -0.4622*
B.6 | Average mean fitness | -0.3668 | -0.4614*
B.7 | Average best-so-far | -2.47023 | -2.47023
B.7 | Median best-so-far | -2.47023 | -2.47023
B.7 | Average mean fitness | -2.47018 | -2.470213*
B.8 | Average best-so-far | 38.0195 | 38.0150*
B.8 | Median best-so-far | 38.0195 | 38.0150*
B.8 | Average mean fitness | 38.0292 | 38.0171*
In [42], the proposed mutation operator decreases one
variable of the current vector to a random number (within certain bounds)
each time (the same mutation operator has been used in [29]).
In this mutation operator, a decrease in one variable often has to be followed
by increases in several other variables to guarantee the feasibility
of the new solution. However, in the current GA, the feasibility of
the new solution is obtained simultaneously by decreasing a
proper variable to zero. Therefore, we have no need to revise the
new solution to make it feasible. Moreover, since the proposed
mutation operator decreases the selected variables to zeros, the
new individuals are more likely to have greater distances from
the maximum solution; in particular, a new individual may even be a minimal
solution (see remark 4). This strategy increases the ability of the
algorithm to expand the search space for finding new individuals.
Finally, the authors in both [29] and [42] used the same "three-point"
crossover operator. The three-point crossover is defined
by three points (two parents
and the maximum solution
and two operators called “contraction” and “extraction”. Both
contraction and extraction operators are employed between
and
and between
and
However, from
the four mentioned cases, only one case certainly results in a
feasible offspring (i.e., the contraction between
and
Therefore, for the other three cases, the feasibility of the
newly generated solutions must be checked by substituting them
into the fuzzy relational equations as well as the constraints
In contrast, the current crossover operator
uses only one parent each time. One offspring is obtained
as a random point on the line segment between the parent and
the maximum solution, while the other offspring lies close to its parent.
This difference between the two offspring provides a suitable tradeoff between
exploration and exploitation. Also, as is stated in remark 6, both
new solutions are always feasible.
Conclusion
In this paper, we investigated the resolution of FRE defined by
Frank family of t-norms and introduced a nonlinear problem with
the max-Frank fuzzy relational equations. In order to determine
the feasibility of the problem, two necessary and sufficient
conditions were presented. Also, two simplification approaches
(depending on the Frank t-norm) were proposed to simplify
the problem. A genetic algorithm was designed for solving the
nonlinear optimization problems constrained by the max-Frank
FRE. Moreover, we presented a method for generating feasible
max-Frank FREs as test problems for the performance evaluation
of the proposed algorithm. Experiments were performed with
the proposed method in the generated feasible test problems.
We conclude that the proposed GA can find the optimal solutions
for all the cases with a great convergence rate. Moreover, a
comparison was made between the proposed method and max-min
and max-product GAs, which solve the nonlinear optimization
problems subjected to FREs defined by max-min and max-product
compositions, respectively. The results showed that the
proposed method (modified by minimum and product t-norms)
finds better solutions compared with the solutions obtained by
the other algorithms.
As future work, we aim to test our algorithm on other types
of nonlinear optimization problems whose constraints are
defined as FREs or FRIs with other well-known t-norms.
Acknowledgment
We are very grateful to the anonymous referees and the
editor in chief for their comments and suggestions, which were
very helpful in improving the paper.
Appendix A
Test Problem A.1:
Test Problem A.2:
Test Problem A.3:
Test Problem A.4:
Test Problem A.5:
Test Problem A.6:
Test Problem A.7:
Test Problem A.8:
Appendix B
Test Problem B.1:
Test Problem B.2:
Test Problem B.3:
Test Problem B.4:
Test Problem B.5:
Test Problem B.6:
Test Problem B.7:
Test Problem B.8:
- Chang CW, Shieh BS. Linear optimization problem constrained by fuzzy max–min relation equations. Information Sciences. 2013;234:71-79.
- Chen L, Wang PP. Fuzzy relation equations (i): the general and specialized solving algorithms. Soft Computing. 2002;6(6):428-435.
- Chen L, Wang PP. Fuzzy relation equations (ii): the branch-point-solutions and the categorized minimal solutions. Soft Computing. 2007;11(1):33-40.
- Dempe S, Ruziyeva A. On the calculation of a membership function for the solution of a fuzzy linear optimization problem. Fuzzy Sets and Systems. 2012;188(1):58-67.
- Di Martino F, Sessa S. Digital watermarking in coding/decoding processes with fuzzy relation equations. Soft Computing. 2006;10(3):238-243.
- Di Nola A, Sessa S, Pedrycz W, Sanchez E. Fuzzy relational equations and their applications in knowledge engineering. Dordrecht: Kluwer academic press; 1989.
- Dubey D, Chandra S, Mehra A. Fuzzy linear programming under interval uncertainty based on IFS representation. Fuzzy Sets and Systems. 2012;188(1):68-87.
- Dubois D, Prade H. Fundamentals of Fuzzy Sets. Boston: Kluwer; 2000.
- Fan YR, Huang GH, Yang AL. Generalized fuzzy linear programming for decision making under uncertainty: Feasibility of fuzzy solutions and solving approach. Information Sciences. 2013;241:12-27.
- Fang SC, Li G. Solving fuzzy relation equations with a linear objective function. Fuzzy Sets and Systems. 1999;103(1):107-113.
- Fernandez MJ, Gil P. Some specific types of fuzzy relation equations. Information Sciences. 2004;164(1-4):189-195.
- Freson S, De Baets B, De Meyer H. Linear optimization with bipolar max–min constraints. Information Sciences. 2013;234:3-15.
- Ghodousian A. Optimization of the reducible objective functions with monotone factors subject to FRI constraints defined with continuous t-norms. Archives of Industrial Engineering. 2018;1(1):1-19.
- Ghodousian A, Raeisian Parvari M. A modified PSO algorithm for linear optimization problem subject to the generalized fuzzy relational inequalities with fuzzy constraints (FRI-FC). Information Sciences. 2017;418–419:317-345.
- Ghodousian A, Babalhavaeji A. An efficient genetic algorithm for solving nonlinear optimization problems defined with fuzzy relational equations and max-Lukasiewicz composition. Applied Soft Computing. 2018;69:475-492.
- Ghodousian A, Naeeimib M, Babalhavaeji A. Nonlinear optimization problem subjected to fuzzy relational equations defined by Dubois-Prade family of t-norms. Computers & Industrial Engineering. 2018;119:167-180.
- Ghodousian A, Zarghani R. Linear optimization on the intersection of two fuzzy relational inequalities defined with Yager family of t-norms. Journal of Algorithms and Computation. 2017;49(1):55-82.
- Ghodousian A. Ahmadi A. Dehghani. Solving a non-convex non-linear optimization problem constrained by fuzzy relational equations and Sugeno-Weber family of t-norms. Journal of Algorithms and Computation. 2017;49(2):63-101.
- Ghodousian A, Khorram E. An algorithm for optimizing the linear function with fuzzy relation equation constraints regarding max-prod composition. Applied Mathematics and Computation. 2006;178(2):502-509.
- Ghodousian A, Khorram E. Fuzzy linear optimization in the presence of the fuzzy relation inequality constraints with max-min composition. Information Sciences. 2008;178(2):501-519.
- Ghodousian A, Khorram E. Linear optimization with an arbitrary fuzzy relational inequality. Fuzzy Sets and Systems. 2012;206:89-102.
- Ghodousian A, Khorram E. Solving a linear programming problem with the convex combination of the max-min and the max-average fuzzy relation equations. Applied Mathematics and computation. 2006;180(1):411-418.
- Guo FF, Pang LP, Meng D, Xia ZQ. An algorithm for solving optimization problems with fuzzy relational inequality constraints. Information Sciences. 2013;252:20-31.
- Guo FF, Xia ZQ. An algorithm for solving optimization problems with one linear objective function and finitely many constraints of fuzzy relation inequalities. Fuzzy Optimization and Decision Making. 2006;5(1):33-47.
- Guu SM, Wu YK. Minimizing a linear objective function under a max-t-norm fuzzy relational equation constraint. Fuzzy Sets and Systems. 2010;161(2):285-297.
- Guu SM, Wu YK. Minimizing a linear objective function with fuzzy relation equation constraints. Fuzzy Optimization and Decision Making. 2002;1(4):347-360.
- Guu SM, Wu YK. Minimizing a linear objective function under a max-t-norm fuzzy relational equation constraint. Fuzzy Sets and Systems. 2010;161(2):285-297.
- Han SC, Li HX. Notes on pseudo-t-norms and implication operators on a complete Brouwerian lattice and pseudo-t-norms and implication operators: direct products and direct product decompositions. Fuzzy Sets and Systems. 2005;153(2):289-294.
- Hassanzadeh R, Khorram E, Mahdavi I, Mahdavi-Amiri N. A genetic algorithm for optimization problems with fuzzy relation constraints using max-product composition. Applied Soft Computing. 2011;11(1):551-560.
- Hock W, Schittkowski K. Test examples for nonlinear programming codes. Lecture Notes in Economics and Mathematical Systems. 1981;187.
- Khorram E, Ghodousian A. Linear objective function optimization with fuzzy relation equation constraints regarding max-av composition. Applied Mathematics and Computation. 2006;173(2):872-886.
- Klement EP, Mesiar R, Pap E. Triangular norms. Position paper I: Basic analytical and algebraic properties. Fuzzy Sets and Systems. 2004;143(1):5-26.
- Lee HC, Guu SM. On the optimal three-tier multimedia streaming services. Fuzzy Optimization and Decision Making. 2002;2(1):31-39.
- Li P, Fang SC. On the resolution and optimization of a system of fuzzy relational equations with sup-T composition. Fuzzy Optimization and Decision Making. 2008;7(2):169-214.
- Li P, Liu Y. Linear optimization with bipolar fuzzy relational equation constraints using lukasiewicz triangular norm. Soft Computing. 2014;18(7):1399-1404.
- Li P, Fang SC. A survey on fuzzy relational equations, part I: classification and solvability. Fuzzy Optimization and Decision Making. 2009;8(2):179-229.
- Lin JL. On the relation between fuzzy max-archimedean t-norm relational equations and the covering problem. Fuzzy Sets and Systems. 2009;160(16):2328-2344.
- Lin JL, Wu YK, Guu SM. On fuzzy relational equations and the covering problem. Information Sciences. 2011;181(14):2951-2963.
- Liu C, Lur YY, Wu YK. Linear optimization of bipolar fuzzy relational equations with max-Łukasiewicz composition. Information Sciences. 2016;360:149–162.
- Loetamonphong J, Fang SC. Optimization of fuzzy relation equations with max-product composition. Fuzzy Sets and Systems. 2001;118(3):509-517.
- Loia V, Sessa S. Fuzzy relation equations for coding/decoding processes of images and videos. Information Sciences. 2005;171(1-3):145-172.
- Lu J, Fang SC. Solving nonlinear optimization problems with fuzzy relation equations constraints. Fuzzy Sets and Systems. 2001;119(1):1-20.
- Markovskii V. On the relation between equations with max-product composition and the covering problem. Fuzzy Sets and Systems. 2005;153(2):261-273.
- Pedrycz W. Granular Computing: Analysis and Design of Intelligent Systems. Boca Raton: CRC Press; 2013.
- Pedrycz W. Fuzzy relational equations with generalized connectives and their applications. Fuzzy Sets and Systems. 1983;10(1-3):185-201.
- Pedrycz W, Vasilakos AV. Modularization of fuzzy relational equations. Soft Computing. 2002;6(1):33-37.
- Peeva K. Resolution of fuzzy relational equations-methods, algorithm and software with applications. Information Sciences. 2013;234:44-63.
- Perfilieva I. Fuzzy function as an approximate solution to a system of fuzzy relation equations. Fuzzy Sets and Systems. 2004;147(3):363-383.
- Perfilieva I, Novak V. System of fuzzy relation equations model of IF-THEN rules. Information Sciences. 2007;177(16):3218-3227.
- Perfilieva I. Finitary solvability conditions for systems of fuzzy relation equations. Information Sciences. 2013;234:29-43.
- Qu XB, Wang XP, Lei MH. Conditions under which the solution sets of fuzzy relational equations over complete Brouwerian lattices form lattices. Fuzzy Sets and Systems. 2014;234:34-45.
- Sanchez E. Resolution of composite fuzzy relation equations. Information and Control. 1976;30(1):38-48.
- Shieh BS. Infinite fuzzy relation equations with continuous t-norms. Information Sciences. 2008;178(8):1961-1967.
- Shieh BS. Minimizing a linear objective function under a fuzzy max-t-norm relation equation constraint. Information Sciences. 2011;181(4):832-841.
- Sun F. Conditions for the existence of the least solution and minimal solutions to fuzzy relation equations over complete Brouwerian lattices. Information Sciences. 2012;205:86-92.
- Sun F, Wang XP, Qu XB. Minimal join decompositions and their applications to fuzzy relation equations over complete Brouwerian lattices. Information Sciences. 2013;224(1):143-151.
- Wang S, Fang SC, Nuttle HLW. Solution sets of interval-valued fuzzy relational equations. Fuzzy Optimization and Decision Making. 2003;2(1):41-60.
- Wang PZ, Zhang DZ, Sanchez E, Lee ES. Latticized linear programming and fuzzy relation inequalities. Journal of Mathematical Analysis and Applications. 1991;159(1):72-87.
- Wu YK, Guu SM. A note on fuzzy relation programming problems with max-strict-t-norm composition. Fuzzy Optimization and Decision Making. 2004;3(3):271-278.
- Wu YK, Guu SM. An efficient procedure for solving a fuzzy relation equation with max-Archimedean t-norm composition. IEEE Transactions on Fuzzy Systems. 2008;16(1):73-84.
- Wu YK. Optimization of fuzzy relational equations with max-av composition. Information Sciences. 2007;177(19):4216-4229.
- Wu YK, Guu SM. Minimizing a linear function under a fuzzy max-min relational equation constraints. Fuzzy Sets and Systems. 2005;150(1):147-162.
- Wu YK, Guu SM, Liu JY. Reducing the search space of a linear fractional programming problem under fuzzy relational equations with max-Archimedean t-norm composition. Fuzzy Sets and Systems. 2008;159(24):3347-3359.
- Xiong QQ, Wang XP. Fuzzy relational equations on complete Brouwerian lattices. Information Sciences. 2012;193:141-152.
- Xiong QQ, Wang XP. Some properties of sup-min fuzzy relational equations on infinite domains. Fuzzy Sets and Systems. 2005;151(2):393-402.
- Yang XP, Zhou XG, Cao BY. Latticized linear programming subject to max-product fuzzy relation inequalities with application in wireless communication. Information Sciences. 2016;358–359:44–55.
- Yang SJ. An algorithm for minimizing a linear objective function subject to the fuzzy relation inequalities with addition-min composition. Fuzzy Sets and Systems. 2014;255:41-51.
- Yeh T. On the minimal solutions of max-min fuzzy relation equations. Fuzzy Sets and Systems. 2008;159(1):23-39.