Waning Of Real Power Loss By Modified Particle Swarm Optimization

This paper presents a Modified Particle Swarm Optimization (MPSO) algorithm for solving the reactive power dispatch problem. A nonlinearly decreasing weight factor is used to modify the velocity update of standard Particle Swarm Optimization (PSO). To exploit the function-approximation capability of the Back Propagation (BP) neural network while overcoming its main shortcoming, a tendency to fall into local extrema, this study combines the modified PSO with the BP network: the PSO jointly optimizes the network's initial weights and threshold values, establishing a Modified PSO-BP network system. The proposed MPSO improves both convergence speed and the ability to locate the optimal value. To evaluate the proposed algorithm, it has been tested on the IEEE 30-bus system and compared with other reported standard algorithms; simulation results show that MPSO is more efficient than the other algorithms in reducing real power loss while keeping the voltage profile within limits.


Introduction
The reactive power dispatch problem is one of the difficult optimization problems in power systems. Various mathematical techniques have been adopted to solve this optimal reactive power dispatch problem, including the gradient method [1,2], Newton's method [3], and linear programming [4-7]. The gradient and Newton methods suffer from difficulty in handling inequality constraints. To apply linear programming, the input-output function must be expressed as a set of linear functions, which may lead to loss of accuracy. Recently, global optimization techniques such as genetic algorithms have been proposed to solve the reactive power flow problem [8,9]. In recent years, voltage stability and voltage collapse have become major concerns in power system planning and operation; voltage magnitudes alone are not a reliable indicator of how far an operating point is from the collapse point [10]. The problem of reactive power support and voltage control is intrinsically to minimize the active power loss, which can be written as:

$$F = P_L = \sum_{k \in N_{br}} g_k \left( V_i^2 + V_j^2 - 2 V_i V_j \cos\theta_{ij} \right)$$

where F is the objective function, $P_L$ is the real power loss, $g_k$ is the conductance of branch k connecting buses i and j, $V_i$ and $V_j$ are the voltage magnitudes at buses i and j, $\theta_{ij}$ is the voltage angle difference between buses i and j, and $N_{br}$ is the total number of transmission lines in the power system.
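The loss objective above can be evaluated directly from branch and bus data. The following sketch assumes a hypothetical per-unit data layout (branch tuples of end buses and conductance); the values are illustrative only, not the IEEE 30-bus data used in the paper.

```python
import math

def real_power_loss(branches, V, theta):
    """P_L = sum over branches of g_k*(V_i^2 + V_j^2 - 2*V_i*V_j*cos(theta_i - theta_j))."""
    loss = 0.0
    for i, j, g in branches:
        loss += g * (V[i] ** 2 + V[j] ** 2
                     - 2.0 * V[i] * V[j] * math.cos(theta[i] - theta[j]))
    return loss

branches = [(0, 1, 0.5), (1, 2, 0.4)]   # (bus i, bus j, conductance g_k) in p.u.
V = [1.05, 1.00, 0.98]                  # bus voltage magnitudes (p.u.)
theta = [0.0, -0.05, -0.08]             # bus voltage angles (rad)
print(real_power_loss(branches, V, theta))
```

Note that when all voltages are equal in magnitude and angle, every branch term vanishes and the loss is zero, as the formula requires.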

Voltage profile improvement
To minimize the voltage deviation at the PQ (load) buses, the objective function F can be extended as:

$$F = P_L + \omega_v \times VD$$

where VD is the voltage deviation and $\omega_v$ is a weighting factor for the voltage deviation.
The voltage deviation is given by:

$$VD = \sum_{i=1}^{N_{pq}} \left| V_i - 1 \right|$$

where $N_{pq}$ is the number of load buses.

Equality Constraint
The equality constraint of the problem is the power balance equation:

$$P_G = P_D + P_L$$

where $P_G$ is the total power generation and $P_D$ is the total power demand.
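In a numerical solver the balance equation is checked within a tolerance; the values below are illustrative per-unit numbers, not data from the paper.

```python
def power_balance_ok(P_G, P_D, P_L, tol=1e-6):
    """Check the equality constraint P_G = P_D + P_L within a tolerance."""
    return abs(P_G - (P_D + P_L)) <= tol

print(power_balance_ok(2.900, 2.834, 0.066))  # illustrative p.u. values
```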

Inequality Constraints
The inequality constraints impose limits on components in the power system, in addition to limits created to ensure system security. Upper and lower bounds on the active power of the slack bus ($P_{gslack}$) and on the reactive power of the generators ($Q_g$) are written as:

$$P_{gslack}^{min} \le P_{gslack} \le P_{gslack}^{max}$$
$$Q_{gi}^{min} \le Q_{gi} \le Q_{gi}^{max}, \quad i \in N_g$$

Upper and lower bounds on the bus voltage magnitudes ($V_i$) are given by:

$$V_i^{min} \le V_i \le V_i^{max}, \quad i \in N$$

Upper and lower bounds on the transformer tap ratios ($T_i$) are given by:

$$T_i^{min} \le T_i \le T_i^{max}, \quad i \in N_T$$

Upper and lower bounds on the shunt compensators ($Q_c$) are given by:

$$Q_c^{min} \le Q_c \le Q_c^{max}, \quad c \in N_C$$

where N is the total number of buses, $N_g$ the total number of generators, $N_T$ the total number of transformers, and $N_C$ the total number of shunt reactive compensators.
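A common way to enforce such box constraints inside a swarm-based search is to clip each control variable to its bounds after every update; the bounds below are illustrative, not the paper's limits.

```python
def clip(value, lo, hi):
    """Project a control variable onto its [lo, hi] bound."""
    return max(lo, min(hi, value))

V = [1.12, 0.93, 1.00]          # candidate bus voltage magnitudes (p.u.)
V_min, V_max = 0.95, 1.10       # illustrative voltage limits
V = [clip(v, V_min, V_max) for v in V]
print(V)
```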

Particle Swarm Optimization
Particle Swarm Optimization (PSO) is a population-based optimization tool in which the system is initialized with a population of random particles and the algorithm searches for optima by updating generations. In a D-dimensional search space, a swarm is composed of N particles. The i-th particle is represented by a D-dimensional vector $X_i = (x_{i1}, x_{i2}, \ldots, x_{iD})$, which gives its position in the space; every position X is a potential solution, and evaluating the objective function at X yields the particle's fitness value, from which we can judge how close X is to the optimum. The velocity of a particle is also a D-dimensional vector, recorded as $V_i = (v_{i1}, v_{i2}, \ldots, v_{iD})$.

Let $p_i$ denote the best position found by particle i up to iteration h, and $p_g$ the best position found by the whole swarm. The velocity and position updates are:

$$v_{id} = w \, v_{id} + c_1 r_1 (p_{id} - x_{id}) + c_2 r_2 (p_{gd} - x_{id})$$
$$x_{id} = x_{id} + v_{id}$$

where $c_1$ and $c_2$ are acceleration coefficients that bound the maximum step toward the swarm's global best particle and the individual best particle, respectively; appropriate values of $c_1$ and $c_2$ speed up convergence and help avoid falling into local optima. $r_1$ and $r_2$ are random numbers between 0 and 1 that control the weight of the attraction terms. W is the inertia factor, oriented toward global search; it is usually initialized at 0.9 and reduced to 0.1 over the iterations. A large w mainly serves the global search, contracting the search toward a promising region, after which a solution of high accuracy can be obtained by local refinement. As the dimensionality of the problem increases, however, the basic PSO algorithm easily falls into local extrema, which degrades its performance, so improved variants have been proposed. Much research [11-15] shows that w has a great influence on the particle swarm algorithm: when w is larger, the algorithm has a strong global search ability, and when w is smaller, it is better at local search. Accordingly, several weighting schemes have been proposed in recent years. The Linearly Decreasing Inertia Weight (LDW) is given by

$$w = w_{max} - (w_{max} - w_{min}) \frac{t}{t_{max}} \qquad (12)$$

where $w_{max}$ and $w_{min}$ are the maximum and minimum values of w, t is the current iteration, and $t_{max}$ is the maximum number of iterations.
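The update rules above translate directly into a basic PSO loop. This is a minimal sketch with the LDW inertia schedule on a simple sphere test function; swarm size, bounds, and coefficients are illustrative defaults, not the paper's settings.

```python
import random

def pso_ldw(f, dim, n_particles=20, t_max=100,
            w_max=0.9, w_min=0.1, c1=2.0, c2=2.0):
    """Minimize f over R^dim with PSO using a linearly decreasing inertia weight."""
    X = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    Vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [x[:] for x in X]                  # personal best positions
    pbest_val = [f(x) for x in X]
    g = pbest_val.index(min(pbest_val))
    gbest, gbest_val = pbest[g][:], pbest_val[g]   # global best
    for t in range(t_max):
        w = w_max - (w_max - w_min) * t / t_max    # LDW schedule, eq. (12)
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                Vel[i][d] = (w * Vel[i][d]
                             + c1 * r1 * (pbest[i][d] - X[i][d])
                             + c2 * r2 * (gbest[d] - X[i][d]))
                X[i][d] += Vel[i][d]
            val = f(X[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = X[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = X[i][:], val
    return gbest, gbest_val

best, best_val = pso_ldw(lambda x: sum(xi * xi for xi in x), dim=3)
print(best_val)
```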
However, equation (12) still has problems. In the early period of the run, if the algorithm detects the neighborhood of the optimal point, it should converge to it promptly, but the linear reduction of w slows down the convergence of the algorithm. In the later period, the continued reduction of w weakens the global search ability and the diversity of the swarm, so the algorithm may easily fall into a local optimum. In this paper we therefore use a PSO method with a nonlinearly varying weight with momentum. Writing the momentum term as $\theta = 1 - t/t_{max}$, the weight is taken as

$$w = w_{min} + (w_{max} - w_{min}) \, \theta^2$$

When t is small, $\theta^2$ is near 1 and w is near $w_{max}$, which preserves the global search ability. As t increases, w decreases nonlinearly, ensuring the search ability in local regions; at $t = t_{max}$, $w = w_{min}$, avoiding the problems caused by the linear decrease of w, namely the reduced global search ability and the loss of diversity.
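The contrast between the two schedules can be seen numerically. The quadratic form below follows the behavior described above (w near w_max early, nonlinear decay, w = w_min at t = t_max); the exact expression is an assumption reconstructed from that description.

```python
def w_linear(t, t_max, w_max=0.9, w_min=0.1):
    """LDW: linearly decreasing inertia weight, eq. (12)."""
    return w_max - (w_max - w_min) * t / t_max

def w_nonlinear(t, t_max, w_max=0.9, w_min=0.1):
    """Nonlinear schedule with momentum theta = 1 - t/t_max (assumed quadratic form)."""
    theta = 1.0 - t / t_max
    return w_min + (w_max - w_min) * theta ** 2

# Mid-run, the nonlinear schedule has already shifted toward local search:
print(w_linear(50, 100), w_nonlinear(50, 100))
```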

Back Propagation (BP) Neural Network
The standard Back Propagation (BP) neural network consists of an input layer, one or more hidden layers, and an output layer.
The node activation function of a BP neural network is generally an "S"-shaped function; the common choice is the differentiable sigmoid:

$$f(x) = \frac{1}{1 + e^{-x}}$$

The error function R is

$$R = \frac{1}{2} \sum_{j} \left( Y_j - Y_{mj} \right)^2$$

where $Y_j$ is the expected output, $Y_{mj}$ is the actual output, and the sum runs over the n training samples.
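The activation and error functions above are a few lines of code:

```python
import math

def sigmoid(x):
    """Differentiable S-shaped activation: f(x) = 1 / (1 + e^(-x))."""
    return 1.0 / (1.0 + math.exp(-x))

def error(expected, actual):
    """R = (1/2) * sum_j (Y_j - Y_mj)^2 over the paired outputs."""
    return 0.5 * sum((y - ym) ** 2 for y, ym in zip(expected, actual))
```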
The weight-update rule of the BP algorithm takes the standard gradient-descent form

$$w(t+1) = w(t) - \eta \frac{\partial R}{\partial w}$$

where $\eta$ is the learning rate. The specific process of the BP algorithm can be summarized as follows:

a. Select n samples as the training set.
b. Initialize the weights and bias values of the network; the initial values are random numbers in (-1, 1).
c. For every sample in the training set, weight the input-layer data according to the connection weights and pass them through the hidden-layer activation function to obtain new values; then weight these values and pass them through the output-layer activation function to compute the output of the output layer.
d. If an error exists between the output result and the desired result, the current weights are in error.
e. Adjust the weights and bias values.
f. Recalculate the output layer with the new weights and biases. The calculation does not stop until the training set meets the stopping condition.

Modified Particle Swarm Optimization (MPSO)

We apply Particle Swarm Optimization (PSO) to train the BP network by optimizing its initial weights and threshold values; when the algorithm ends, it locates a point near the global optimum. In the swarm, every particle's position represents one complete set of weights of the BP network in the current iteration, and the dimension of every particle is determined by the number of weights and threshold values connecting the layers.

The concrete process of MPSO is as follows:

a. Initialization. With $n_i$ neurons in the hidden layer and $n_o$ neurons in the input layer, the dimension D of the particle swarm equals the total number of weights and threshold values in the network.
b. Set the fitness function of the particle swarm; in this paper, the mean square error of the BP neural network is chosen as the fitness function:

$$F = \frac{1}{M} \sum_{k} \left( y_{kj} - \hat{y}_{kj} \right)^2$$

where $y_{kj}$ is the theoretical (expected) output for sample k, $\hat{y}_{kj}$ is the actual output for sample k, and M is the number of samples.
c. Use the improved particle swarm algorithm to optimize the weights and threshold values of the BP network, i.e., the weights between the input layer and the hidden layer and between the hidden layer and the output layer.
d. Take the optimal weights and threshold values found by MPSO as the original weights, then put them into the neural network for training, adjusting the weights and threshold values with the BP algorithm until the network's Mean Square Error satisfies MSE < e, where e is the preset precision.

Table 2: Generator power limits

Table 3: Values of the control variables after optimization

Table 4: Performance of the MPSO algorithm