Research article Open Access
Cascade Neuro-Fuzzy Architecture Based Mobile-Robot Navigation and Obstacle Avoidance in Static and Dynamic Environments
Anish Pandey*, and Kavita Burse
Oriental College of Technology, Bhopal, India
*Corresponding author: Anish Pandey, Oriental College of Technology, Bhopal, Ganga Nagar SEC-2 Mangla Road Bilaspur, Chhattisgarh, India, Tel: +919861932338; E-mail: @
Received: September 16, 2016; Accepted: September 28, 2016; Published: October 05, 2016
Citation: Pandey A, Burse K (2016) Cascade Neuro-Fuzzy Architecture Based Mobile-Robot Navigation and Obstacle Avoidance in Static and Dynamic Environments. Int J Adv Robot Automn 1(3): 1-9.
Abstract
Real-time navigation in a partially unknown environment is an interesting task for mobile robotics. This article presents a cascade neuro-fuzzy (CN-Fuzzy) architecture for intelligent mobile robot navigation and obstacle avoidance in static and dynamic environments. An array of ultrasonic range-finder sensors and Sharp infrared range sensors is used to read the front, left, and right obstacle distances. A cascade neural network is used to train the robot to reach the goal; its inputs are the obstacle distances received from the sensors, and its output is the turning angle between the robot and the goal. A fuzzy architecture is integrated with the cascade neural network to control the velocities of the robot. Successful simulation and experimental results verify the effectiveness of the proposed architecture in both static and dynamic environments. Moreover, the proposed CN-Fuzzy architecture gives better results, in terms of path length, than previously developed techniques.

Keywords: Cascade neuro-fuzzy; Fuzzy logic; Neural networks; Mobile robots; Obstacle avoidance; Velocity
Introduction
The applications of intelligent robots in fields such as industry, space, agriculture, defense, transportation, and other social sectors are growing day by day. Mobile robots perform many tasks, such as rescue operations, patrolling, underwater exploration, disaster relief, and planetary exploration. Therefore, the authors direct their effort toward an intelligent robot based on the CN-Fuzzy architecture, which can avoid obstacles autonomously and reach the goal safely in a given environment. Autonomous mobile robot navigation is a challenging task for any soft computing technique. Fuzzy logic and neural networks have been widely used for mobile robot navigation and control because these methods are capable of handling system uncertainty. Generally, fuzzy logic is a combination of fuzzy rules and membership functions (for inputs and outputs) constructed from human knowledge, whereas a neural network may be applied to a linear or nonlinear system and can solve real system problems using an empirical data set (experimental or predicted). Combining a neural network with fuzzy logic [1] improves the decision speed of the mobile robot for target seeking and obstacle avoidance.

Target seeking and obstacle avoidance are the two important tasks for any mobile robot in an environment. Godjevac and Steele [2] integrated a Takagi-Sugeno-type fuzzy controller with a Radial Basis Function Neural Network (RBFNN) to solve mobile robot path planning, where the fuzzy logic handles the uncertainty of the environment and the neural network tunes the parameters of the membership functions. Rai and Rai [3] designed an Arduino UNO microcontroller-based DC motor speed control system using a multilayer neural network and a Proportional Integral Derivative (PID) controller. Yang and Meng [4] applied a biologically inspired neural network to generate a collision-free path in a nonstationary environment. In [5], the authors designed the Reinforcement Ant Optimized Fuzzy Controller (RAOFC) and applied it to wheeled mobile robot wall-following control under reinforcement learning environments; the inputs of the controller are range-finding sonar sensors, and the output is the robot steering angle. Algabri et al. [6] combined fuzzy logic with other soft computing techniques, such as Genetic Algorithms (GA), Neural Networks (NN), and Particle Swarm Optimization (PSO), to optimize the membership function parameters of a fuzzy controller and improve the navigation performance of the mobile robot. Fuzzy reinforcement learning sensor-based mobile robot navigation was presented by Beom and Cho [7] for complex environments. In [8], the authors constructed a behaviour-based neuro-fuzzy control architecture for mobile robot navigation in an unstructured environment. Rossomando and Soria [9] designed an adaptive neural network PID controller to solve the trajectory tracking control problem of a mobile robot.
In [10], the authors developed a genetic algorithm to choose the best membership parameters of a fuzzy inference system and implemented it to control the steering angle of a mobile robot in a partially unknown environment. In [11], the authors presented a navigation method for two robots (a leader and a follower) using Fuzzy Controllers (FC). In [15-16], the authors designed sensor-based adaptive neuro-fuzzy inference controllers for mobile robot navigation and obstacle avoidance in various environments.

A Cascade Neural Network (CNN) is similar to a Feedforward Neural Network (FNN); both use the backpropagation algorithm to update their weights and biases [12]. This article describes a cascade neural network based fuzzy architecture for mobile-robot navigation and obstacle avoidance in static and dynamic environments. The cascade neural network is used to train the robot to reach the goal; its inputs are the obstacle distances received from the sensors, and its output is the turning angle between the robot and the goal. The fuzzy logic architecture controls the right and left motor velocities of the mobile robot. In the last two decades, many researchers have implemented different neuro-fuzzy techniques to solve the navigation problem of the mobile robot. Motivated by the above literature survey, the primary objective of this paper is to improve the navigation accuracy and efficiency of the mobile robot using the cascade neuro-fuzzy controller. The remainder of this article is structured as follows: Section 2 introduces the design and implementation of the CN-Fuzzy architecture for mobile robot navigation and obstacle avoidance in various environments. Section 3 demonstrates the computer simulation results in different unknown environments. Section 4 compares the simulation results with previously developed techniques. Section 5 presents the experimental results and discussion validating the proposed controller. Finally, Section 6 summarizes the paper.
Cascade Neuro-Fuzzy (CN-Fuzzy) Architecture
This section introduces the design and implementation of the CN-Fuzzy architecture for mobile robot navigation and obstacle avoidance in various environments. The cascade neural network is used to train the robot to reach the goal in the environment, and the fuzzy logic architecture is used to control the right and left motor velocities of the mobile robot. Figure 1 shows the proposed CN-Fuzzy architecture for mobile robot navigation and obstacle avoidance in unknown environments.
Cascade neural network for goal reaching
The neural network is one of the important techniques for mobile robot navigation. In this section, the Cascade Neural Network (CNN) is used to train the robot to reach the goal in the environment. A neural network is a combination of layers, namely an input layer, hidden (intermediate) layers, and an output layer, connected to each other through neurons. The CNN is similar to the Feedforward Neural Network (FNN); both use the backpropagation algorithm to update the weights and biases. Two backpropagation algorithms, namely Levenberg-Marquardt (LM) and Bayesian Regularization (BR), are used to adjust the network weights and biases. Figure 2 illustrates the general structure of a Cascade Neural Network (CNN). In Figure 2, $u$, $w$, $b$, and $v$ denote the input variables, synaptic weights, neuron bias, and output variable, respectively.

The inputs of the CNN are the obstacle distances received from the various sensors, and its output is the turning angle between the robot and the goal. Table 1 describes the different training patterns for the cascade neural network, which help the robot reach the goal in the environment. The proposed CNN uses three inputs, two hidden layers (with six and four neurons, respectively), and a single output layer for mobile robot navigation. The three inputs are the F.O.D. (Front Obstacle Distance), L.O.D. (Left Obstacle Distance), and R.O.D. (Right Obstacle Distance). The output of the CNN is the Turning Angle (T.A.) between the robot and the goal. The input and output of the CNN can be written as follows:
Figure 1: The cascade neuro-fuzzy architecture for navigation of mobile robot and obstacle avoidance in unknown environments
Figure 2: The general structure of the Cascade Neural Network (CNN)
Input layer (first layer):

$$q_i^{[1]} = u_i \qquad (1)$$

where $i = 1, 2, 3$ (the three inputs F.O.D., L.O.D., and R.O.D., respectively).

Two hidden layers (second and third):

$$q_t^{[s]} = \varphi\left(NET_t^{[s]}\right) \qquad (2)$$

$$NET_t^{[s]} = \sum_i \left( w_{ti}^{[s]} \cdot q_i^{[s-1]} + b_t^{[s]} \right) \qquad (3)$$

where $s = 2, 3$ (second and third layers).

Output layer (fourth layer):

$$v_{(p)} = q^{[4]} = \varphi\left(NET^{[4]}\right) \qquad (4)$$

$$NET^{[4]} = \sum_i \left( w_i^{[4]} \cdot q_i^{[3]} + b_i^{[4]} \right) \qquad (5)$$

where $u_i$ is the input variable and $v_{(p)}$ is the predicted output variable (turning angle); $w_{ti}^{[s]}$ is the synaptic weight on the connection joining the $i$-th neuron in layer $[s-1]$ to the $t$-th neuron in layer $[s]$; $b_t^{[s]}$ is the bias of the $t$-th neuron in layer $[s]$; and $\varphi(d)$ is the log-sigmoid transfer function:

$$\varphi(d) = \frac{1}{1 + \exp(-d)} \qquad (6)$$
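Equations (1)-(6) can be sketched as a minimal Python forward pass. The 3-6-4-1 topology follows the text, but the weights below are random placeholders for illustration, and the extra cascade connections (inputs feeding every later layer) that distinguish a CNN from a plain feedforward network are omitted for brevity:

```python
import math
import random

def logsig(d):
    # Log-sigmoid transfer function, Eq. (6): phi(d) = 1 / (1 + exp(-d))
    return 1.0 / (1.0 + math.exp(-d))

def forward(u, layers):
    """Forward pass per Eqs. (1)-(5). `layers` is a list of (weights, biases);
    weights[t][i] joins neuron i of the previous layer to neuron t."""
    q = list(u)                                   # Eq. (1): input layer passes u through
    for weights, biases in layers:
        q = [logsig(sum(w * qi for w, qi in zip(w_t, q)) + b_t)  # Eqs. (2)-(5)
             for w_t, b_t in zip(weights, biases)]
    return q[0]                                   # single turning-angle output

# Hypothetical random weights for the 3-6-4-1 topology (illustration only)
random.seed(0)
def rand_layer(n_out, n_in):
    return ([[random.uniform(-1, 1) for _ in range(n_in)] for _ in range(n_out)],
            [random.uniform(-1, 1) for _ in range(n_out)])

net = [rand_layer(6, 3), rand_layer(4, 6), rand_layer(1, 4)]
angle = forward([0.2, 1.15, 0.2], net)  # normalized F.O.D., L.O.D., R.O.D.
print(0.0 < angle < 1.0)                # log-sigmoid output lies in (0, 1)
```

Since the final log-sigmoid confines the raw output to (0, 1), the network output would presumably be rescaled to the turning-angle range of Table 1; that scaling is not shown here.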
The proposed CNN is verified through the mean squared error (MSE) and root mean square error (RMSE) methods:

$$MSE(\%) = \left[ \sum_{1}^{r} \left( \frac{v_{(a)} - v_{(p)}}{r} \right)^{2} \right] \times 100 \qquad (7)$$
Table 1: The different training patterns for mobile robot navigation

| F.O.D. (cm) | L.O.D. (cm) | R.O.D. (cm) | T.A. (degree) | Turning Direction |
|---|---|---|---|---|
| 20 | 115 | 20 | 74.3 | Left |
| 20 | 20 | 150 | -65.9 | Right |
| 125 | 25 | 150 | -70.4 | Right |
| 25 | 75 | 50 | 55 | Left |
| 40 | 120 | 60 | 59.4 | Left |
| 25 | 150 | 100 | 72.8 | Left |
| 25 | 50 | 120 | -22.9 | Right |
| 22 | 25 | 22 | 73.4 | Left |
| 50 | 25 | 25 | 0 | Straight |
| 20 | 27 | 27 | 77 | Left |
| 100 | 28 | 25 | 0 | Straight |
| 25 | 21 | 22 | 77.2 | Left |
| 150 | 25 | 115 | -70.5 | Right |
| 150 | 20 | 25 | 0 | Straight |
| 150 | 100 | 100 | -70.4 | Right |
$$RMSE(\%) = \sqrt{ \frac{1}{r} \left[ \sum_{1}^{r} \left( \frac{v_{(a)} - v_{(p)}}{v_{(a)}} \right)^{2} \right] } \times 100 \qquad (8)$$

where $v_{(a)}$ is the actual output variable, $v_{(p)}$ is the predicted (network) output variable, and $r$ is the number of observations.
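Equations (7)-(8) as written above can be sketched directly in Python. The target angles below are taken from Table 1, while the predicted values are hypothetical network outputs chosen only to exercise the formulas:

```python
import math

def mse_percent(actual, predicted):
    # Eq. (7): MSE(%) = [sum(((v_a - v_p) / r)^2)] * 100
    r = len(actual)
    return sum(((a - p) / r) ** 2 for a, p in zip(actual, predicted)) * 100

def rmse_percent(actual, predicted):
    # Eq. (8): RMSE(%) = sqrt((1/r) * sum(((v_a - v_p) / v_a)^2)) * 100
    r = len(actual)
    return math.sqrt(sum(((a - p) / a) ** 2
                         for a, p in zip(actual, predicted)) / r) * 100

actual    = [74.3, -65.9, 55.0]   # target turning angles from Table 1
predicted = [73.9, -66.4, 54.2]   # hypothetical network outputs
print(round(mse_percent(actual, predicted), 3))
print(round(rmse_percent(actual, predicted), 3))
```

Note that Eq. (7) divides each error by $r$ before squaring, as the paper states, rather than averaging the squared errors; the sketch reproduces the formula as given.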
Fuzzy Logic Architecture (FLA) for obstacle avoidance
This section describes the design of a Mamdani-type fuzzy logic [17] architecture for mobile robot navigation and obstacle avoidance in unknown environments. The Fuzzy Logic Architecture (FLA) is used to control the right and left motor velocities of the mobile robot. The proposed FLA has four inputs and two outputs. The FLA receives its first three inputs (obstacle distances) from the various sensors of the mobile robot; these inputs are denoted F.O.D., L.O.D., and R.O.D., respectively. The fourth input is the turning angle (goal angle) between the robot and the goal, which is received from the CNN. The outputs of the FLA are the velocities of the robot's motors, denoted RMV (Right Motor Velocity) and LMV (Left Motor Velocity), respectively. The range of each of the first three inputs lies between 20 cm and 150 cm and is divided into two linguistic variables, namely CLOSE and AWAY. Two linguistic variables, NEGATIVE and POSITIVE, are used for the turning angle. The range of each output is divided into two linguistic variables, namely LOW and HIGH. Two generalized bell-shaped (Gbell) membership functions are used for each input and output. Figure 3 shows the input and output variables of the FLA. Figure 4 illustrates the fuzzy logic architecture. The fuzzy rule set of the FLA is described in Table 2. The FLA is composed of Mamdani-type fuzzy rules of the following form:
$$Rule_m: \text{IF } x_1 \text{ is } A_{j1},\ x_2 \text{ is } A_{j2},\ x_3 \text{ is } A_{j3},\ \text{and } x_4 \text{ is } A_{j4} \text{ THEN } y_1 \text{ is } B_{j1} \text{ and } y_2 \text{ is } B_{j2} \qquad (9)$$
where $m = 1, 2, 3, \ldots, 12$ (twelve rules); $x_1$, $x_2$, $x_3$, and $x_4$ are the input variables; $y_1$ and $y_2$ are the output variables; $A_{j1}$, $A_{j2}$, $A_{j3}$, and $A_{j4}$ are the fuzzy sets of the input variables; $B_{j1}$ and $B_{j2}$ are the fuzzy sets of the output variables; and $j = 1, 2$ because each input and output has two Gbell membership functions.
The fuzzy sets (inputs and outputs) use the following Gbell membership function:

$$\mu_{jk}(x_k; a, b, c) = \frac{1}{1 + \left| \frac{x_k - c_{jk}}{a_{jk}} \right|^{2b_{jk}}} \qquad (10)$$
$$\mu_{jl}(y_l; a, b, c) = \frac{1}{1 + \left| \frac{y_l - c_{jl}}{a_{jl}} \right|^{2b_{jl}}} \qquad (11)$$
where $k = 1, \ldots, 4$ (four inputs) and $l = 1, 2$ (two outputs). The symbols $a$, $b$, and $c$ are the adjustable parameters of the Gbell membership function, called the half width, slope control, and centre, respectively.
The defuzzification of the output variables (y_1 and y_2) is accomplished by the weighted average method:

y_1 = \frac{\sum_{m=1}^{12} \left( \mu_{j1}(x_1) \cdot \mu_{j2}(x_2) \cdot \mu_{j3}(x_3) \cdot \mu_{j4}(x_4) \right) y_1}{\sum_{m=1}^{12} \left( \mu_{j1}(x_1) \cdot \mu_{j2}(x_2) \cdot \mu_{j3}(x_3) \cdot \mu_{j4}(x_4) \right)}          (12)
Figure 3: Membership functions for (i) obstacle distances (F.O.D., L.O.D., and R.O.D., respectively), (ii) turning angle (TA), and (iii) motor velocities (right and left, respectively)
Table 2: Fuzzy rule sets for navigation of mobile robot and obstacle avoidance

| F.O.D. (cm) | L.O.D. (cm) | R.O.D. (cm) | T.A. (degree) | Turning Direction |
|-------------|-------------|-------------|---------------|-------------------|
| 20          | 115         | 20          | 74.3          | Left              |
| 20          | 20          | 150         | -65.9         | Right             |
| 125         | 25          | 150         | -70.4         | Right             |
| 25          | 75          | 50          | 55            | Left              |
| 40          | 120         | 60          | 59.4          | Left              |
| 25          | 150         | 100         | 72.8          | Left              |
| 25          | 50          | 120         | -22.9         | Right             |
| 22          | 25          | 22          | 73.4          | Left              |
| 50          | 25          | 25          | 0             | Straight          |
| 20          | 27          | 27          | 77            | Left              |
| 100         | 28          | 25          | 0             | Straight          |
| 25          | 21          | 22          | 77.2          | Left              |
| 150         | 25          | 115         | -70.5         | Right             |
| 150         | 20          | 25          | 0             | Straight          |
| 150         | 100         | 100         | -70.4         | Right             |

y_2 = \frac{\sum_{m=1}^{12} \left( \mu_{j1}(x_1) \cdot \mu_{j2}(x_2) \cdot \mu_{j3}(x_3) \cdot \mu_{j4}(x_4) \right) y_2}{\sum_{m=1}^{12} \left( \mu_{j1}(x_1) \cdot \mu_{j2}(x_2) \cdot \mu_{j3}(x_3) \cdot \mu_{j4}(x_4) \right)}          (13)
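The weighted-average defuzzification of Equations (12) and (13) can be sketched as follows; the rule count and the numeric firing strengths below are hypothetical examples, not values from the paper:

```python
def defuzzify(rule_outputs, firing_strengths):
    """Weighted-average defuzzification (Eqs. 12 and 13): each rule's crisp
    output is weighted by its firing strength, i.e. the product of the four
    input membership values mu_j1(x1)...mu_j4(x4)."""
    num = sum(w * y for w, y in zip(firing_strengths, rule_outputs))
    den = sum(firing_strengths)
    return num / den

# Hypothetical example with three active rules (the paper sums over twelve):
# crisp motor-velocity suggestions (cm/s) and their firing strengths.
velocities = [6.7, 12.0, 16.7]
strengths = [0.2, 0.5, 0.3]
print(defuzzify(velocities, strengths))  # ~12.35 cm/s
```

Rules that fire strongly dominate the crisp output, while the denominator normalizes away the overall activation level.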
Computer Simulation Results
This section presents the computer simulation results of the CN-Fuzzy architecture in different unknown static and dynamic environments. The simulations have been carried out using MATLAB
Figure 4: Fuzzy logic architecture
Figure 5: Flowchart of the mobile robot navigation and obstacle avoidance based on CN-Fuzzy architecture
Figure 6: Mobile robot navigation in an environment without obstacle using CN-Fuzzy architecture
Figure 7: Mobile robot navigation in an unknown environment using CN-Fuzzy architecture
Figure 8: Mobile robot navigation in the cluttered environment using CN-Fuzzy architecture
software on an HP 3.40 GHz processor. Figure 5 illustrates the flowchart of mobile robot navigation and obstacle avoidance based on the CN-Fuzzy architecture. Figures 6 to 9 show the mobile robot navigation trajectories in different static and dynamic environments. In the simulation results, it is assumed that the positions of the start point and the goal point are known, but the positions of all the obstacles in the environment are unknown to the robot. Each environment is 300 cm wide and 300 cm high. A minimum threshold distance is fixed between the robot and the obstacle; if the robot detects an obstacle within this threshold range, the proposed architecture estimates the desired turning direction of the mobile robot. Table 3 lists the navigation path length and the time taken by the robot in the various unknown environments.
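The decision loop of the flowchart (seek the goal; switch to obstacle avoidance when a sensor reading falls inside the threshold range) can be illustrated as below. The function name, the 30 cm threshold value, and the string return values are assumptions for illustration; the paper fixes a threshold but does not state its value:

```python
import math

THRESHOLD = 30.0  # assumed minimum safe obstacle distance (cm)

def step_heading(robot, goal, front_dist):
    """One decision step: head towards the goal unless the front obstacle
    distance is inside the threshold range, in which case the CN-Fuzzy
    architecture would supply an avoidance turning angle instead."""
    if front_dist < THRESHOLD:
        return "avoid"
    goal_angle = math.degrees(math.atan2(goal[1] - robot[1],
                                         goal[0] - robot[0]))
    return ("seek_goal", goal_angle)

# Start and goal of Figure 17: the goal-seeking angle comes out near the
# paper's reported starting angle of 21.8 degrees.
print(step_heading((50.0, 100.0), (250.0, 180.0), front_dist=150.0))
```

With the Figure 17 start (50, 100) cm and goal (250, 180) cm, the computed heading agrees with the 21.8° starting angle reported in the experimental section.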
Comparison with Previously Developed Techniques
This section describes the computer simulation result comparisons between the previously developed techniques [13, 14] and the proposed CN-Fuzzy architecture in the same environments.
First Comparison with Developed Technique
In article [13], the authors designed goal-seeking, obstacle-avoidance, and other behaviors for mobile robot navigation using a fuzzy controller. Figures 10 and 11 illustrate the mobile robot navigation in the same obstacle-free environment using the fuzzy controller [13] and the CN-Fuzzy architecture, respectively. From the simulation results, it can be clearly seen that the robot covers a shorter distance to reach the goal using the proposed architecture than with the previous technique [13]. Table 4 shows the path covered by the robot to reach the goal using the fuzzy controller [13] and the proposed CN-Fuzzy architecture. The centimetre measurements are taken on a proportional basis.
Figure 9: Mobile robot navigation in the dynamic environment using CN-Fuzzy architecture
Figure 10: Mobile robot navigation in an environment without obstacle using fuzzy controller [13]
Figure 11: Mobile robot navigation in an environment without obstacle using CN-Fuzzy architecture
Figure 12: Mobile robot navigation in an environment with obstacles using artificial neural network
Second Comparison with Developed Technique
In this section, the simulation results are compared between the previous technique [14] and the proposed CN-Fuzzy architecture in the same environment with obstacles. In [14], the authors discussed the motion and path planning of a car-like wheeled mobile robot among stationary obstacles using a backpropagation artificial neural network. Figure 12 shows the mobile robot navigation in an environment with obstacles using the artificial neural network [14]. Figure 13 presents the path covered by the robot using the proposed CN-Fuzzy architecture in the same environment. From Figures 12 and 13, it is observed that the proposed architecture avoids the obstacles with a shorter path and less steering than the previous model [14]. Table 5 lists the path traced (in cm) by the robot to reach the goal using the proposed architecture and the previous model [14]. The centimetre measurements are taken on a proportional basis.
Experimental Results
Experimental Mobile Robot Description
This section describes the characteristics of the experimental mobile robot (Figure 14). The robot has two front wheels, each powered by a separate DC geared motor. A motor driver controls the velocity and direction of the robot. The width of the robot plate is 23 cm, and the track width and height of the robot are 30 cm and 8 cm, respectively. The mobile robot is equipped with one Sharp infrared range sensor on the front side and two ultrasonic range finder sensors fitted on the left and right sides, as shown in Figure 15. Each sensor can detect obstacles from approximately 20 cm to 150 cm. The minimum and maximum velocities of the experimental mobile robot are approximately 6.7 cm/s and 16.7 cm/s, respectively.
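The turning behaviour described later (slowing one motor relative to the other) follows from standard differential-drive kinematics, which the paper does not spell out. The sketch below uses the 30 cm track width and the stated wheel-speed limits from the robot description:

```python
TRACK_WIDTH = 30.0  # cm, from the robot description

def body_rates(v_right, v_left):
    """Standard differential-drive kinematics: the linear velocity is the
    mean of the wheel speeds (cm/s); the angular velocity (rad/s) is their
    difference divided by the track width. Positive omega turns left."""
    v = (v_right + v_left) / 2.0
    omega = (v_right - v_left) / TRACK_WIDTH
    return v, omega

# Right motor at the maximum speed and left at the minimum: the robot
# turns left while still moving forward.
v, omega = body_rates(16.7, 6.7)
print(v, omega)  # about 11.7 cm/s forward, 0.33 rad/s to the left
```

Equal wheel speeds give zero angular velocity (straight-line motion), matching the "Straight" rows of Table 2.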
Experiments
This section presents the experimental results of the mobile robot using the CN-Fuzzy architecture in different environments. The experiments have been performed on an Arduino-microcontroller-based mobile robot programmed in C/C++. The proposed architecture controls the motor velocities (right and left) of the robot during navigation by interpreting the sensor data. Figures 16 to 18 show the real-time navigation of the experimental mobile robot in the different environments. The width and height of the platform are 250 cm and 250 cm, respectively. In
Table 3: Simulation results of mobile robot navigation in the different environments using CN-Fuzzy architecture

| Figure No. | Environment Type      | Travelling Path Length (cm) | Navigation Time (sec) |
|------------|-----------------------|-----------------------------|-----------------------|
| Figure 6   | Without obstacle      | 103                         | 11.6                  |
| Figure 7   | Unknown environment   | 89                          | 10.1                  |
| Figure 8   | Cluttered environment | 120                         | 13.4                  |
| Figure 9   | Dynamic environment   | 77                          | 8.6                   |

Table 4: The simulation result comparison between the fuzzy controller [13] and proposed CN-Fuzzy architecture

| Figure No. | Method                  | Navigation Path Length (cm) |
|------------|-------------------------|-----------------------------|
| Figure 10  | Fuzzy [13]              | 51                          |
| Figure 11  | CN-Fuzzy architecture   | 46                          |

Figure 13: Mobile robot navigation in an environment with obstacles using CN-Fuzzy architecture
Table 5: The simulation result comparison between the artificial neural network [14] and proposed CN-Fuzzy architecture

| Figure No. | Method                         | Navigation Path Length (cm) |
|------------|--------------------------------|-----------------------------|
| Figure 12  | Artificial neural network [14] | 87                          |
| Figure 13  | CN-Fuzzy architecture          | 80                          |
80

Figure 14: Experimental mobile robot
Figure 15: Sensor distribution of the experimental mobile robot
the experimental results, it is assumed that the positions of the start point and goal point are known, but the positions of all the obstacles in the environment are unknown to the robot. First, the robot moves towards the goal, and if a sensor detects an obstacle within the threshold range, the proposed architecture adjusts the velocities of the mobile robot. In Figure 17, the start position of the robot is (50, 100) cm and the goal position is (250, 180) cm; the starting angle between the robot and the goal is 21.8°. In Figure 18, the start position of the robot is (45, 125) cm and the goal position is (130, 40) cm; the starting angle between the robot and the goal is 45°. In Figures 17 and 18, if the left obstacle is near the mobile robot, the robot turns right, i.e., the velocity of the right motor is less than that of the left motor. Similarly, if the right obstacle is near the mobile robot, the robot turns left, i.e., the velocity of the right motor is greater than that of the left motor. The average moving speed of the robot is 0.09 m/s. The experimental results in the different snapshots verify
Figure 16: Experimental result of mobile robot navigation same as a simulation result (shown in Figure 6)
Figure 17: Experimental result of mobile robot navigation same as a simulation result (shown in Figure 7)
Figure 18: Experimental result of mobile robot navigation same as a simulation result (shown in Figure 13)
Table 6: Experimental results of mobile robot navigation in the different environments using CN-Fuzzy architecture

| Figure No. | Environment Type    | Travelling Path Length (cm) | Navigation Time (sec) |
|------------|---------------------|-----------------------------|-----------------------|
| Figure 16  | Without obstacle    | 109                         | 12.4                  |
| Figure 17  | Unknown environment | 94                          | 10.8                  |
| Figure 18  | Unknown environment | 85                          | 10.1                  |

Table 7: Travelling path length comparison between simulation and experimental results

| Figure No. (Simulation and Experimental) | Simulation Path Length (cm) | Experimental Path Length (cm) | Error |
|------------------------------------------|-----------------------------|-------------------------------|-------|
| Figures 6 and 16                         | 103                         | 109                           | 5.5%  |
| Figures 7 and 17                         | 89                          | 94                            | 5.32% |
| Figures 13 and 18                        | 80                          | 85                            | 5.88% |

Table 8: Navigation time comparison between simulation and experimental results

| Figure No. (Simulation and Experimental) | Simulation Time (sec) | Experimental Time (sec) | Error |
|------------------------------------------|-----------------------|-------------------------|-------|
| Figures 6 and 16                         | 11.6                  | 12.4                    | 6.45% |
| Figures 7 and 17                         | 10.1                  | 10.8                    | 6.48% |
| Figures 13 and 18                        | 9.4                   | 10.1                    | 6.93% |

the effectiveness of the proposed architecture. Table 6 shows the real-time navigation path length and the time taken by the robot in the various unknown environments. Tables 7 and 8 illustrate the travelling path length and navigation time comparisons between the simulation and experimental results, respectively. In this comparison, some errors are observed between simulation and experiment; these arise from wheel slippage and friction during the real-time experiments.
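The error percentages in Tables 7 and 8 (and the averages quoted in the conclusion) can be reproduced by taking the simulation-versus-experiment difference relative to the experimental value; the helper name below is arbitrary:

```python
# (simulation, experiment) pairs: path lengths (cm) from Table 7 and
# navigation times (sec) from Table 8.
path_pairs = [(103, 109), (89, 94), (80, 85)]
time_pairs = [(11.6, 12.4), (10.1, 10.8), (9.4, 10.1)]

def pct_error(sim, exp):
    """Difference of simulation from experiment, relative to experiment."""
    return (exp - sim) / exp * 100.0

path_errors = [pct_error(s, e) for s, e in path_pairs]
time_errors = [pct_error(s, e) for s, e in time_pairs]
print([round(p, 2) for p in path_errors])  # [5.5, 5.32, 5.88]
print(round(sum(path_errors) / 3, 2),
      round(sum(time_errors) / 3, 2))      # 5.57 6.62
```

The computed averages match the 5.57% (path length) and 6.62% (navigation time) figures reported in the conclusion.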
Conclusion And Future Scope
In this paper, the CN-Fuzzy architecture has been applied to the intelligent navigation of a mobile robot in unknown environments filled with obstacles. The major contributions of this paper are summarized as follows:

1) The cascade neural network is designed to train the robot to reach the goal in the environment. The inputs of cascade neural network are the obstacle distances, and the output is the turning angle between the robot and goal.
2) The fuzzy logic controller helps the robot to control the right motor velocity and left motor velocity in the environments for obstacle avoidance.
3) The proposed CN-Fuzzy architecture gives better results (in terms of path length) than the previously developed techniques [13] and [14], which demonstrates the effectiveness of the proposed architecture.
4) Moreover, the simulation and experimental results in the different environments show the effectiveness of the proposed architecture in both static and dynamic environments. The average errors between the simulation and experimental studies are within 5.57% in terms of travelling path length and 6.62% in terms of navigation time.

The proposed technique has been developed and tested for the navigation of a single robot in static and dynamic environments. In future research, the proposed architecture can be extended to multiple-mobile-robot navigation and obstacle avoidance.
References
  1. Ma X, Li X, Qiao H. Fuzzy neural network-based real-time self-reaction of mobile robot in unknown environments. 2001;11 (8):1039–1052.
  2. Godjevac J, Steele N. Neuro-Fuzzy control of a mobile robot. 1999;28(1):127–143.
  3. Rai N, Rai B. Neural network based closed loop speed control of dc motor using Arduino UNO. 2013;4(2): 137–140.
  4. Yang SX, Meng M. Neural network approaches to dynamic collision-free trajectory generation. 2001;31(3):302–318.
  5. Juang C F, Hsu C H. Reinforcement ant optimized fuzzy controller for mobile-robot wall-following control. 2009;56(10):3931–3940.
  6. Algabri M, Mathkour H, Ramdane H, Alsulaiman M. Comparative study of soft computing techniques for mobile robot navigation in an unknown environment. 2015;50: 42–56.
  7. Beom H R, Cho K S. A sensor-based navigation for a mobile robot using fuzzy logic and reinforcement learning. 1995;25(3):464–477.
  8. Li W, Ma C, Wahl F M. A neuro-fuzzy system architecture for behavior-based control of a mobile robot in unknown environments. 1997;87(2):133–140.
  9. Rossomando F G, Soria C M. Design and implementation of adaptive neural PID for nonlinear dynamics in mobile robots. 2015;13(4):913–918.
  10. Ming L, Zailin G, Shuzi Y. Mobile robot fuzzy control optimization using genetic algorithm. 1996;10(4):293–298.
  11. Juang C F, Lai M G, Zeng W T. Evolutionary fuzzy control and navigation for two wheeled robots cooperatively carrying an object in unknown environments. 2015;45(9):1731–1743.
  12. Chayjan R A, Ashari M E. Modeling Isosteric heat of soya bean for desorption energy estimation using neural network approach. 2010;70(4):616–625.
  13. Qing-yong B, Shun-ming L, Wei-yan S, Mu-jin A. A fuzzy behavior-based architecture for mobile robot navigation in unknown environments. 2009:257–261.
  14. Engedy I, Horvath G. Artificial neural network based local motion planning of a wheeled mobile robot. 2010:213–218.
  15. Mohanty P K and Parhi D R. Navigation of autonomous mobile robot using adaptive network based fuzzy inference system. 2015;28(7):2861-2868.
  16. Pothal J K and Parhi D R. Navigation of Multiple Mobile Robots in a Highly Clutter Terrains using Adaptive Neuro-Fuzzy Inference System. 2015;72:48-58.
  17. Mamdani E H and Assilian S. An Experiment in Linguistic Synthesis with a Fuzzy Logic Controller. 1975;7(1):1-13.