Bisection Algorithm for Solving the Linear Bilevel Programming Problem
Eghbal Hosseini^{1, *}, Isa Nakhai Kamalabadi^{2}
^{1}Department of Mathematics, Payame Noor University of Tehran, Tehran, Iran
^{2}Department of Industry, University of Kurdistan, Sanandaj, Iran
Abstract
The bilevel programming problem (BLPP) is significant because of its applications in several areas such as transportation, finance, management, and computer science, and it is an appropriate tool for modeling such real problems. It has been proven that the general BLPP is an NP-hard problem, so solving it efficiently is of practical importance. In this paper we develop an algorithm based on the bisection method to solve the linear BLPP. Using the Karush-Kuhn-Tucker conditions, the bilevel programming problem is converted into a nonsmooth single-level problem, which is then smoothed by a heuristic method so that the proposed bisection algorithm can be applied. The smoothed problem is solved by the bisection algorithm, which is an exact method. The presented approach achieves an efficient and feasible solution in an appropriate time, as evaluated by solving test problems.
Keywords
Bisection Algorithm, Linear Bilevel Programming Problem, Karush-Kuhn-Tucker Conditions
Received: April 17, 2015
Accepted: May 8, 2015
Published online: June 14, 2015
© 2015 The Authors. Published by American Institute of Science. This Open Access article is under the CC BY-NC license. http://creativecommons.org/licenses/by-nc/4.0/
1. Introduction
It has been proven that the bilevel programming problem (BLPP) is an NP-hard problem [1,2]. Several algorithms have been proposed to solve the BLPP [11,12,13,21,25,26,27,28,29,31,32,33]. These algorithms fall into the following classes: global techniques, enumeration methods, transformation methods [3,4,22,23], metaheuristic approaches, fuzzy methods [5,6,7,8,24], and primal-dual interior methods [13]. In the following, these techniques are briefly introduced.
1.1. Global Techniques
All optimization methods can be divided into two distinct classes: local and global algorithms. Local algorithms depend on the initial point and on properties of the objective function such as continuity and differentiability. They search only for a local solution, a point at which the objective function is smaller than at all other feasible points in its vicinity, and therefore do not always find the best minimum, that is, the global solution. Global methods, on the other hand, can reach the global optimal solution and are independent of the initial point as well as of the continuity and differentiability of the objective function [9,10,11,12,34,35].
1.2. Enumeration Methods
Branch and bound is an optimization algorithm based on enumeration, but it employs clever techniques for calculating upper and lower bounds on the objective function, thereby reducing the number of search steps. The main idea in these methods is that the vertex points of the feasible region of the BLPP are basic feasible solutions of the problem and the optimal solution is among them [14].
1.3. Meta Heuristic Approaches
Metaheuristic approaches have been proposed by many researchers for solving complex combinatorial optimization problems. Although these methods are fast and are known as suitable techniques for optimization problems, they can only produce near-optimal solutions. They are generally appropriate for searching very large spaces for global optimal solutions, whether the feasible region is convex or nonconvex. In these approaches, the BLPP is transformed into a single-level problem by transformation methods, and metaheuristics are then used to find the optimal solution [15,16,17,18,19,25,36,40].
Although there are several approaches for solving the BLPP, none of them is an exact classical method. In this paper, the authors propose a bisection-based algorithm, which is a convergent approach, to solve the linear BLPP.
The remainder of the paper is structured as follows: the problem formulation and the smoothing method for the BLPP are introduced in Section 2. The algorithm based on bisection is proposed in Section 3. Computational results for our approach are presented in Section 4. Finally, Section 5 presents the concluding remarks.
2. Problem Formulation and Smoothing Method
The BLPP is used frequently by problems with decentralized planning structure. It is defined as [20]:
$$
\begin{aligned}
\min_{x}\ \ & F(x,y)=c_{1}^{T}x+d_{1}^{T}y\\
\text{s.t.}\ \ & \min_{y}\ f(x,y)=c_{2}^{T}x+d_{2}^{T}y\\
& \ \ \text{s.t.}\ \ Ax+By\le b,\qquad x\ge 0,\ y\ge 0
\end{aligned}
\tag{1}
$$
Definition 2.1:
Every point (x*, y*) is an optimal solution of the bilevel problem if
$$
F(x^{*},y^{*})\le F(x,y)\qquad \forall\, (x,y)\in IR,
\tag{2}
$$
where IR is the feasible region (inducible region) of the BLPP.
Using the KKT conditions, problem (1) can be converted into the following single-level problem:
$$
\begin{aligned}
\min_{x,y,\lambda,\mu}\ \ & c_{1}^{T}x+d_{1}^{T}y\\
\text{s.t.}\ \ & Ax+By\le b,\\
& d_{2}+B^{T}\lambda-\mu=0,\\
& \lambda^{T}(b-Ax-By)=0,\qquad \mu^{T}y=0,\\
& x,y,\lambda,\mu\ge 0
\end{aligned}
\tag{3}
$$
To convert the inequality constraint to an equality constraint, a positive slack variable s is added:
$$
\begin{aligned}
\min_{x,y,\lambda,\mu,s}\ \ & c_{1}^{T}x+d_{1}^{T}y\\
\text{s.t.}\ \ & Ax+By+s=b,\\
& d_{2}+B^{T}\lambda-\mu=0,\\
& \lambda^{T}s=0,\qquad \mu^{T}y=0,\\
& x,y,\lambda,\mu,s\ge 0
\end{aligned}
\tag{4}
$$
3. Bisection Algorithm (BA)
Most nonlinear equations are very difficult to solve, and some cannot be solved analytically. Therefore, several methods have been proposed to approximate the roots of such equations; one of the most important is the bisection algorithm. A necessary condition for using this method is that the root of the nonlinear equation be unique in the interval considered [41,42]. We first present the necessary definitions and theorems, then explain the bisection algorithm, and finally solve several examples to illustrate the method.
Definition 3.1:
The closure of a set A is Ā = A ∪ A′, where A′ is the set of limit points of A.
Definition 3.2:
Two nonempty sets A and B are separated if Ā ∩ B = ∅ and A ∩ B̄ = ∅.
Definition 3.3:
A set is connected if it is not the union of two separated sets.
The following theorems guarantee that the equation has exactly one root.
Theorem 1 [43]
If f is a continuous function on the closed interval [a,b] and f(a)f(b) < 0, then f(x) = 0 has at least one root in (a,b).
Proof:
Since f is continuous, E = f([a,b]) is connected [3]. Suppose there is no x with f(x) = 0; then 0 ∉ E. Let A = {y ∈ E : y > 0} and B = {y ∈ E : y < 0}. Because f(a)f(b) < 0, both A and B are nonempty, E = A ∪ B, and A, B are separated. Hence E is not connected, which contradicts the assumption. This completes the proof.
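Theorem 1 can be checked numerically. The function f below is a hypothetical example of our own choosing (the paper's worked functions did not survive extraction); any continuous function with a sign change on [a, b] behaves the same way.

```python
# Numerical illustration of Theorem 1: a sign change f(a)f(b) < 0
# for a continuous f forces a root inside (a, b).
# f is a hypothetical example, not taken from the paper.

def f(x):
    return x**3 + x - 1  # continuous; f(0) = -1 < 0 < f(1) = 1

a, b = 0.0, 1.0
print(f(a) * f(b) < 0)  # True: the hypotheses of Theorem 1 hold on [0, 1]

# scan a fine grid: some subinterval must contain the sign change,
# and hence (by Theorem 1) a root
n = 10_000
found = any(f(a + i*(b-a)/n) * f(a + (i+1)*(b-a)/n) <= 0 for i in range(n))
print(found)  # True: a bracketed root exists in [0, 1]
```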
Now we state a theorem without proof.
Theorem 2 [43]
If f is continuous on the closed interval [a,b], differentiable on (a,b), and f′(x) ≠ 0 for all x in (a,b), then f(x) = 0 has at most one root in (a,b).
Example 1:
Let f be a continuous function on a closed interval with f(a)f(b) < 0. We check the two theorems for this function: since f is continuous and f(a)f(b) < 0, by Theorem 1 the equation f(x) = 0 has at least one root in the given interval. Moreover, since f′ does not vanish on the open interval, by Theorem 2 f has at most one root there.
The bisection method is a simple method for finding a root of an equation in an interval. Suppose f is continuous on [a,b] and f(a)f(b) < 0, and we want to approximate the root of f(x) = 0. The root in [a,b] should be unique; that is, both Theorem 1 and Theorem 2 should apply to f(x) = 0 on [a,b]. In this method, the interval [a,b] is first bisected at c = (a+b)/2. At the next step we check the signs of f(a)f(c) and f(c)f(b): if f(a)f(c) < 0, the subinterval [a,c] is selected and we set b = c; if f(c)f(b) < 0, the subinterval [c,b] is selected and we set a = c. In both cases the sign of f(a)f(b) remains negative, so the bisection algorithm is applicable to the new interval [a,b]. This procedure is continued until one of the following conditions is true.
1. |x_{n+1} − p| < ε, where x_{n+1} and p are the midpoint of the interval at step (n+1) and the root, respectively.
2. |x_{n+1} − x_n| < ε, where x_{n+1} and x_n are the midpoints of the interval at steps (n+1) and n.
3. |f(x_{n+1})| < ε. In all conditions, ε is a given very small positive number.
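The three stopping rules above can be sketched as a single predicate. This is our own sketch (the function name and parameters are ours): since the true root p is usually unknown, rules 2 and 3 are the ones actually tested in practice, and rule 1 is included only for completeness.

```python
# Sketch of the three bisection stopping rules; eps is the small
# positive tolerance the text calls a given very small positive number.

def stop(x_next, x_prev, f, eps, p=None):
    """Return True when any of the three stopping rules holds."""
    if p is not None and abs(x_next - p) < eps:  # rule 1: |x_{n+1} - p| < eps
        return True
    if abs(x_next - x_prev) < eps:               # rule 2: |x_{n+1} - x_n| < eps
        return True
    if abs(f(x_next)) < eps:                     # rule 3: |f(x_{n+1})| < eps
        return True
    return False
```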
Although the bisection algorithm is simple, it converges very slowly. Therefore, an approximation of the solution is often obtained by this method and then used as a starting point for other methods. In the bisection method a sequence is constructed whose limit equals the root of the given equation.
Steps
Step 1: Input a, b.
Step 2: Let x = (a+b)/2; print x.
Step 3: If ABS(f(x)) < EPS then END.
Step 4: If f(a)f(x) < 0 then let b = x, else let a = x.
Step 5: GOTO Step 2.
Step 6: END.
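The steps above can be transcribed directly into a runnable sketch; the GOTO loop becomes a bounded while/for loop, and EPS becomes a tolerance parameter. The sample equation in the usage line is a hypothetical one of ours, not from the paper.

```python
# Direct transcription of Steps 1-6 of the bisection algorithm.

def bisection(f, a, b, eps=1e-6, max_iter=100):
    """Approximate a root of f in [a, b], assuming f(a)*f(b) < 0."""
    if f(a) * f(b) >= 0:
        raise ValueError("need a sign change: f(a)*f(b) < 0")
    for _ in range(max_iter):       # Step 5: repeat
        x = (a + b) / 2.0           # Step 2: midpoint
        if abs(f(x)) < eps:         # Step 3: stopping test
            return x
        if f(a) * f(x) < 0:         # Step 4: keep the half with the sign change
            b = x
        else:
            a = x
    return (a + b) / 2.0            # Step 6: best estimate found

# usage: the hypothetical equation x^3 + x - 1 = 0 on [0, 1]
print(bisection(lambda x: x**3 + x - 1, 0.0, 1.0))
```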
Theorem 3
The bisection method is convergent in the interval [a,b] if f(a)f(b)<0 and f is continuous.
Proof: If x_i is the midpoint of the interval at the i-th step and p is the solution, then the absolute error after n iterations satisfies
$$
|x_{n}-p|\le \frac{b-a}{2^{n}}.
$$
As we know, (b − a)/2^n → 0 as n → ∞; consequently
$$
\lim_{n\to\infty}x_{n}=p.
$$
Therefore the sequence composed by the bisection algorithm converges to the root of f.
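The error bound in the proof can be checked empirically. We use a hypothetical function with a known root, p = √2 on [1, 2], chosen by us for illustration only.

```python
# Empirical check of the Theorem 3 bound: at step n the midpoint x_n
# satisfies |x_n - p| <= (b - a) / 2**n.

f = lambda x: x * x - 2.0   # hypothetical function with known root sqrt(2)
p = 2.0 ** 0.5
a, b = 1.0, 2.0
width0 = b - a              # initial interval length (b - a)

for n in range(1, 30):
    x = (a + b) / 2.0                    # x_n, midpoint at step n
    assert abs(x - p) <= width0 / 2**n   # the Theorem 3 error bound
    if f(a) * f(x) < 0:
        b = x
    else:
        a = x

print("bound holds for 29 steps")
```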
Example 2
Suppose we are going to solve a simple equation by the bisection algorithm. To solve it, we first rearrange the equation so that the right-hand side is zero; equivalently, the goal is to find a root of the resulting function f. Now we guess two numbers a, b such that f(a)f(b) < 0. Let a = 0 and b = 1; then f(a) = −1 and f(b) = 1, so f(a)f(b) < 0. Because f is a polynomial, it is continuous on every interval of real numbers, in particular on [0, 1]. Therefore f satisfies the conditions of Theorem 1, so it has at least one root in [0, 1].
The derivative of f is positive in (0, 1), so f satisfies the conditions of Theorem 2 as well; namely, f(x) = 0 has at most one root in (0, 1). According to Theorems 1 and 2, f(x) = 0 has exactly one root in (0, 1). Now we can use the bisection algorithm to find the root in the interval [0, 1].
The following table summarizes the first five iterations of the bisection method for this example:

Iteration   a        b       c = (a+b)/2   sign of f(a)f(c)
1           0        1       0.5           +
2           0.5      1       0.75          -
3           0.5      0.75    0.625         -
4           0.5      0.625   0.5625        +
5           0.5625   0.625   0.5937

According to the table, the root of f(x) = 0 in [0, 1] is approximately x ≈ 0.5937.
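A table of this kind can be generated for any equation. Since the specific function of this example did not survive extraction, the usage line below applies the generator to a hypothetical increasing polynomial with f(0) < 0 < f(1); the column layout mirrors the table above.

```python
# Generate a bisection iteration table: (iteration, a, b, midpoint c,
# sign of f(a)f(c)). The sample f in the usage loop is hypothetical.

def bisection_table(f, a, b, steps=5):
    rows = []
    for i in range(1, steps + 1):
        c = (a + b) / 2.0
        sign = '+' if f(a) * f(c) > 0 else '-'   # sign of f(a)f(c)
        rows.append((i, a, b, c, sign))
        if f(a) * f(c) < 0:    # root in [a, c]
            b = c
        else:                  # root in [c, b]
            a = c
    return rows

for i, a, b, c, sign in bisection_table(lambda x: x**3 + x - 1, 0, 1):
    print(f"{i}  {a:.4f}  {b:.4f}  {c:.4f}  {sign}")
```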
Example 3
Suppose we are going to approximate the root of an equation by the bisection algorithm until |f(x_{n+1})| < ε. In fact, we want to find a root of the function f. We guess two numbers a, b such that f(a)f(b) < 0. Let a = 0 and b = 1; then f(a) = −1 and f(b) = 1, so f(a)f(b) < 0. Because f is a polynomial, it is continuous on every interval of real numbers, in particular on [0, 1]; therefore f satisfies the conditions of Theorem 1 and has at least one root in [0, 1].
The derivative of f is positive in (0, 1), so f satisfies the conditions of Theorem 2 as well; namely, f(x) = 0 has at most one root in (0, 1). According to Theorems 1 and 2, f(x) = 0 has exactly one root in (0, 1). Now we can use the bisection algorithm to find the root in the interval [0, 1].
The following table summarizes the first five iterations of the bisection method for this example:

Iteration   a        b       c = (a+b)/2   sign of f(a)f(c)   |f(c)|
1           0        1       0.5           -                  0.2167
2           0        0.5     0.25          +                  0.1748
3           0.25     0.5     0.375         -                  0.0452
4           0.25     0.375   0.3125        +                  0.0559
5           0.3125   0.375   0.3437        +                  0.0035

According to the table, the root of f(x) = 0 in [0, 1] is approximately x ≈ 0.3437.
4. Computational Results
To illustrate the bisection algorithm, two standard examples are solved in this section.
Example 4 [17]
Consider the following linear bilevel programming problem:
Using KKT conditions, the following problem is obtained:
This problem has been solved using the bisection algorithm; the optimal solution is presented in Table 3.
Table 3. Results for Example 4.
Best solution by our method: (x1, x2) = (4, 4), F = 12
Best solution according to reference [17]: (x1, x2) = (3.9, 4), F = 12.1
Optimal solution: (x1, x2) = (4, 4), F = 12
Example 5 [17]
Consider the following linear bilevel programming problem.
After applying the KKT conditions and the smoothing method, the above problem is transformed into a single-level problem, which is solved using the bisection algorithm. The optimal solution for this example is presented in Table 4.
More problems of different sizes have been solved, and the computational results are reported in Table 5. The programming was performed in MATLAB 7.1 on a personal computer (CPU: Intel(R) Celeron(R) 1000M @ 1.8 GHz, RAM: 4 GB). According to Table 5, the proposed algorithm compares well with the reference approaches.
Table 4. Results for Example 5.
Best solution by our method: (x1, x2, x3) = (1.88, 0.79, 0.09), F = 8.40
Best solution according to reference [17]: (x1, x2, x3) = (1.83, 0.89, 0.004), F = 8.21
Optimal solution: F = 8.44
Table 5. Best solutions for further test problems: by our method with different values of ε, and according to references [3,7,26,27].

            Our method                              References [3,7,26,27]
Example 6:  (1.33, 1.30, 0.03, 0.37, 1.30, 0.91)   (1.32, 1.28, 0, 0.33, 1.25, 0.92)
Example 7:  (2, 0, 0, 0)                           (2, 0, 0, 0)
Example 8:  (0, 1, 0.2, 0.6, 0.4)                  (0, 0.9, 0, 0.6, 0.4)
Example 9:  (17, 11.04)                            (17.45, 10.90)
5. Conclusion and Future Work
The main difficulty of the bilevel programming problem is that nonlinear constraints appear after applying the KKT conditions. In this paper, we attempted to handle these constraints using slack variables and a smoothing method, and then solved the resulting problem with the proposed bisection algorithm, which is an exact and convergent approach. According to the tables, the proposed method finds the optimal solution in an appropriate time and number of iterations. In future work, the following should be researched:
(1) Examples in the larger sizes can be supplied to illustrate the efficiency of the proposed algorithm.
(2) Showing the efficiency of the proposed algorithm for solving other kinds of BLPP such as quadratic and nonlinear.
(3) Solving other kinds of multilevel programming problem such as trilevel programming problem.
Nomenclature
 Objective function of the first level in the TLPP 
 Objective function of the second level in the TLPP 
 Objective function of the third level in the TLPP 
 Constraints in the TLPP 
 Slack variable 
 Slack variable 
 Objective function of the first level in the BLPP 
 Objective function of the second level in the BLPP 
 Constraints in the BLPP 
 A nonempty convex set 
 Lagrange function 
 Lagrange Coefficient 
 Lagrange Coefficient 
 Lagrange Coefficient 
 Initial population 
 Crossover population 
 Mutation population 
 Set of chromosomes in the current generation 
 Optimal solution for the TLPP 
 Optimal solution for the BLPP 
References