OPTIMIZATION METHODS AND QUADRATIC PROGRAMMING

A THESIS

Submitted in partial fulfillment of the requirements for the award of the degree of

MASTER OF SCIENCE in Mathematics

By

Ms. Richa Singh

Under the supervision of

Prof. Anil Kumar

MAY, 2012

DEPARTMENT OF MATHEMATICS

NATIONAL INSTITUTE OF TECHNOLOGY, ROURKELA

ODISHA, INDIA

NATIONAL INSTITUTE OF TECHNOLOGY, ROURKELA

DECLARATION

I declare that the thesis entitled "Optimization Methods and Quadratic Programming", submitted for my M.Sc. degree, has not been submitted to any other institution or university for the award of any other degree.

Place:

Date:

Ms. Richa Singh

Roll No. 410MA2112

Department of Mathematics

National Institute of Technology

Rourkela-769008

NATIONAL INSTITUTE OF TECHNOLOGY, ROURKELA

CERTIFICATE

This is to certify that the project thesis entitled "Optimization Methods and Quadratic Programming", submitted by Ms. Richa Singh, Roll No. 410MA2112, in partial fulfilment of the requirements of the M.Sc. degree in Mathematics from National Institute of Technology, Rourkela, is an authentic record of review work carried out by her under my supervision and guidance. The content of this dissertation has not been submitted to any other institute or university for the award of any degree.

Dr. Anil Kumar

Associate Professor

Department of Mathematics

National Institute of Technology

Rourkela-769008

Odisha, India

ACKNOWLEDGEMENT

I wish to express my deep sense of gratitude to my supervisor, Prof. Anil Kumar, Department of

Mathematics, National Institute of Technology, Rourkela for his inspiring guidance and assistance in the preparation of this thesis.

I am grateful to Prof. S. K. Sarangi, Director, National Institute of Technology, Rourkela for providing excellent facilities in the Institute for carrying out research.

I also take the opportunity to acknowledge quite explicitly, with gratitude, my debt to the Head of the Department, Prof. G.K. Panda, and all the professors and staff of the Department of Mathematics, National Institute of Technology, Rourkela, for their encouragement and valuable suggestions during the preparation of this thesis.

I am extremely grateful to my parents, my engineer brother and my friends, who are a constant source of inspiration for me.

(Richa Singh)

E-Mail: [email protected], [email protected]

ABSTRACT

Optimization is the process of maximizing or minimizing an objective function while satisfying the given constraints. There are two types of optimization problems: linear and nonlinear. Linear optimization problems have a wide range of applications, but not every realistic problem can be modeled as a linear program, and this is where nonlinear programming gains its importance. In the present work I have tried to find the solution of the nonlinear quadratic programming problem under different conditions: when constraints are absent, and when constraints are present in the form of equalities or inequalities.

The graphical method is also highly efficient for solving problems in two dimensions. Wolfe's modified simplex method helps in solving the quadratic programming problem by converting it in successive stages to a linear programming problem, which can be solved easily by applying the two-phase simplex method. A variety of problems arising in engineering, management and other areas are modeled as optimization problems, making optimization an important branch of modern applied mathematics.

TABLE OF CONTENTS

Declaration

Certificate

Acknowledgement

Abstract

Table of contents

Chapter 1  Introduction to optimization methods & quadratic programming

Chapter 2  Pre-requisites to quadratic programming

Chapter 3  Convex functions and their properties

Chapter 4  Unconstrained problems of optimization

Chapter 5  Constrained problems with equality constraints (Lagrangian method)

Chapter 6  Constraints in the form of inequalities (Kuhn-Tucker conditions)

Chapter 7  Graphical method

Chapter 8  Quadratic programming (Wolfe's method)

Conclusion

References

Chapter-1

INTRODUCTION TO OPTIMIZATION METHODS & QUADRATIC PROGRAMMING

Optimization constitutes a very important branch of modern applied mathematics. A variety of problems arising in engineering design, operations research, management science, computer science, financial engineering and economics can be modeled as optimization problems, which makes optimization useful in real life.

It was the development of the simplex method for linear programming by G.B. Dantzig in the mid-1940s which, in a sense, started the subject of mathematical optimization. Another major development was due to H.W. Kuhn and A.W. Tucker, who in 1951 gave necessary/sufficient optimality conditions for the nonlinear programming problem, now known as the Karush-Kuhn-Tucker (KKT) conditions. In 1939, W. Karush had already developed conditions similar to those given by Kuhn and Tucker.

The presence of a linearity structure in an optimization problem gives beautiful mathematical results and also helps greatly in its algorithmic development. However, most real-world applications lead to optimization problems which are inherently nonlinear. Fortunately, most often this nonlinearity is of 'parabola' type, leading to a convexity structure which can be used to understand convex optimization problems such as the quadratic programming problem.


Chapter-2

PRE-REQUISITES TO QUADRATIC PROGRAMMING

Some definitions:

Vector: A vector in n-space is an ordered set of n real numbers; e.g., a = (a₁, a₂, ..., aₙ) is a vector of n elements or components.

Null vector: The null vector is a vector whose elements are all zero: 0 = (0, 0, ..., 0). The null vector corresponds to the origin.

Sum vector: The sum vector is a vector whose elements are all one: 1 = (1, 1, ..., 1).

Unit vector (eᵢ): The unit vector eᵢ is a vector whose i-th element is one and all other elements are zero; e.g., e₁ = (1, 0, ..., 0). In E² there are two unit vectors; in Eⁿ there are n unit vectors.

Orthogonal vectors: Two vectors a and b are said to be orthogonal if a·b = 0.

Linear independence: A set of vectors a₁, a₂, ..., aₖ is linearly independent if the equation λ₁a₁ + λ₂a₂ + ... + λₖaₖ = 0 is satisfied only when λ₁ = λ₂ = ... = λₖ = 0.

Linear dependence: A set of vectors which is not linearly independent is called linearly dependent.

Spanning set: The set of vectors a₁, a₂, ..., aₖ in Eⁿ is a spanning set of Eⁿ if every vector in Eⁿ can be expressed as a linear combination of a₁, a₂, ..., aₖ (this requires k ≥ n).

Basis set: A set of vectors a₁, a₂, ..., aₖ in Eⁿ is a basis set if i) it is a linearly independent set and ii) it is a spanning set of Eⁿ. If it is a basis, then k = n.

Standard basis: The set of unit vectors e₁, e₂, e₃, ..., eₙ is called the standard basis for Eⁿ.

Matrix: A matrix is a rectangular array of ordered numbers, arranged into rows and columns: A = [aᵢⱼ]ₘₓₙ. The elements aᵢⱼ with i = j, i.e. a₁₁, a₂₂, a₃₃ and so on, are called principal diagonal elements; the others are called off-diagonal elements.

Square matrix: Any matrix in which number of rows is equal to number of columns is known as square matrix (m=n).

Diagonal matrix: A square matrix in which all off-diagonal elements are zero, i.e. aᵢⱼ = 0 for i ≠ j, is called a diagonal matrix.

Identity or unit matrix: A diagonal matrix whose principal diagonal elements are all 1 is called an identity or unit matrix, denoted simply by I.

Transpose matrix: The transpose of a matrix A = [aᵢⱼ], denoted by Aᵀ, is the matrix obtained by interchanging the rows and columns of A.

Symmetric matrix: A square matrix A is said to be symmetric if A remains the same on interchanging its rows and columns (i.e. aᵢⱼ = aⱼᵢ, or Aᵀ = A).

Row matrix: A matrix having only a single row is called a row matrix; it is a 1×n matrix.

Column matrix: A matrix having only a single column is called a column matrix; it is an m×1 matrix.

Null matrix: A matrix whose all elements are zero is called a null matrix.

Rank of a matrix: A positive integer r is said to be the rank of a matrix A, denoted by ρ(A), if
i) A possesses at least one r-rowed minor which is not zero, and
ii) A does not possess any nonzero (r+1)-rowed minor.

Equivalent matrices: Two matrices A and B are said to be equivalent, denoted A ~ B, if and only if ρ(A) = ρ(B).

Quadratic forms: Let x = (x₁, x₂, ..., xₙ) and let A = [aᵢⱼ] be an n×n matrix. Then the function of n variables denoted by f(x₁, x₂, ..., xₙ) or Q(x), given by Q(x) = xᵀAx = Σᵢ Σⱼ aᵢⱼxᵢxⱼ, is called a quadratic form in n-space.

Properties of quadratic forms:
i) Positive definite: A quadratic form Q(x) is positive definite iff Q(x) > 0 for all x ≠ 0.
ii) Positive semi-definite: A quadratic form Q(x) is positive semi-definite iff Q(x) ≥ 0 for all x and there exists an x ≠ 0 for which Q(x) = 0.
iii) Negative definite: A quadratic form Q(x) is negative definite iff −Q(x) is positive definite.
iv) Negative semi-definite: A quadratic form Q(x) is negative semi-definite iff −Q(x) is positive semi-definite.
v) Indefinite: A quadratic form Q(x) is indefinite if Q(x) is positive for some x and negative for some other.
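The definiteness of a quadratic form Q(x) = xᵀAx can be checked numerically from the eigenvalues of the symmetric matrix A (all positive: positive definite; all non-negative with at least one zero: positive semi-definite, and so on). The following Python/NumPy sketch is an illustration added here, not part of the original development; the tolerance tol is an arbitrary choice.

```python
import numpy as np

def classify_quadratic_form(A, tol=1e-10):
    """Classify Q(x) = x^T A x by the eigenvalues of the symmetric matrix A."""
    w = np.linalg.eigvalsh(np.asarray(A, dtype=float))  # eigenvalues, ascending
    if np.all(w > tol):
        return "positive definite"
    if np.all(w >= -tol):
        return "positive semi-definite"
    if np.all(w < -tol):
        return "negative definite"
    if np.all(w <= tol):
        return "negative semi-definite"
    return "indefinite"

print(classify_quadratic_form([[2, 0], [0, 3]]))   # positive definite
print(classify_quadratic_form([[1, 0], [0, -1]]))  # indefinite
```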

Difference equation: An equation relating the values of a function y and one or more of its differences Δy, Δ²y, ... for each value of a set of numbers is called a difference equation.

Order of a difference equation: The difference between the highest and the lowest suffix appearing in the equation is called the order of the difference equation. For example, in yₖ₊₁ + 3yₖ = 0 the order is (k+1) − k = 1.

Feasible solution: Solution values of the decision variables xⱼ (j = 1, 2, ..., n) which satisfy the constraints and the non-negativity conditions are known as a feasible solution.

Basic feasible solution: The collection of all feasible solutions to a problem constitutes a convex set whose extreme points correspond to the basic feasible solutions.

Extreme point of a convex set: A point x in a convex set C is called an extreme point if x cannot be expressed as a convex combination of any two distinct points x⁽¹⁾ and x⁽²⁾ in C.

Chapter-3

CONVEX FUNCTIONS AND THEIR PROPERTIES

Definition

Convex functions: Let S ⊆ Rⁿ be a convex set and f: S → R. Then f is called a convex function if for all x, u ∈ S and for all 0 ≤ λ ≤ 1 we have
f(λx + (1 − λ)u) ≤ λf(x) + (1 − λ)f(u).

Some examples of convex functions are:
i) f(x) = x², x ∈ R
ii) f(x) = |x|, x ∈ R
iii) f(x) = eˣ, x ∈ R
iv) f(x) = e⁻ˣ, x ∈ R
v) f(x) = −√(1 − x²), −1 ≤ x ≤ 1

Concave functions: Let S ⊆ Rⁿ be a convex set and f: S → R. Then f is called a concave function if for all x, u ∈ S and for all 0 ≤ λ ≤ 1 we have
f(λx + (1 − λ)u) ≥ λf(x) + (1 − λ)f(u).

Some examples of concave functions are:
i) f(x) = log x, x > 0
ii) f(x) = −|x|, x ∈ R
iii) f(x) = √(1 − x²), −1 ≤ x ≤ 1

Properties

1. If a function is both convex and concave, then it has to be a linear function.

2. A function may be neither convex nor concave, e.g. f(x) = sin x, −π/2 ≤ x ≤ π/2, or f(x) = x³, x ∈ R.

3. The domain of a convex function has to be a convex set.

4. A convex/concave function need not be differentiable, e.g. f(x) = |x|, x ∈ R, is convex but not differentiable at x = 0.

5. Convex functions need not even be continuous, e.g. f(x) = x² for −1 ≤ x < 1 together with f(1) = 2 is convex but not continuous at x = 1. However, convex functions are always continuous in the interior of their domain.

6. If f and g are two convex functions defined over a convex set S ⊆ Rⁿ, then
i) f + g,
ii) αf (α ≥ 0), and
iii) h(x) = max(f(x), g(x))
are convex functions.

7. If f and g are two concave functions defined over a convex set S ⊆ Rⁿ, then
i) f + g,
ii) αf (α ≥ 0), and
iii) h(x) = min(f(x), g(x))
are concave functions.
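Definitions like the one above can be spot-checked numerically by sampling pairs of points and values of λ and testing the defining inequality f(λx + (1 − λ)u) ≤ λf(x) + (1 − λ)f(u). A minimal sketch follows (an illustration added here, not part of the thesis; the sample sizes and tolerance are arbitrary choices).

```python
import numpy as np

def looks_convex(f, a, b, n_pairs=200, n_lambdas=11, tol=1e-9):
    """Test the convexity inequality for f on [a, b] at sampled points.
    A True result only suggests convexity; a False result disproves it."""
    rng = np.random.default_rng(0)
    xs = rng.uniform(a, b, size=(n_pairs, 2))
    lams = np.linspace(0.0, 1.0, n_lambdas)
    for x, u in xs:
        for lam in lams:
            lhs = f(lam * x + (1 - lam) * u)
            rhs = lam * f(x) + (1 - lam) * f(u)
            if lhs > rhs + tol:
                return False
    return True

print(looks_convex(lambda x: x**2, -5, 5))           # True: x^2 is convex
print(looks_convex(lambda x: np.sin(x), -1.5, 1.5))  # False: sin is not convex there
```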


Chapter-4

UNCONSTRAINED PROBLEMS OF OPTIMIZATION

Some important results:

• A necessary condition for a continuous function f(x) with continuous first and second partial derivatives to have an extreme point at x₀ is that each first partial derivative of f(x), evaluated at x₀, vanishes, i.e. ∇f(x₀) = 0, where ∇f is the gradient vector.

• A sufficient condition for a stationary point x₀ to be an extreme point is that the Hessian matrix H evaluated at x₀ is i) negative definite, in which case x₀ is a maximum point, and ii) positive definite, in which case x₀ is a minimum point.

Example: Find the maximum or minimum of the function
f(x) = x₁² + x₂² + x₃² − 4x₁ − 8x₂ − 12x₃.

Applying the necessary condition ∇f = 0 gives
∂f/∂x₁ = 2x₁ − 4 = 0, ∂f/∂x₂ = 2x₂ − 8 = 0, ∂f/∂x₃ = 2x₃ − 12 = 0.

The solution of these simultaneous equations is x₀ = (2, 4, 6), the only point that satisfies the necessary condition.

Now, by checking the sufficiency condition, we determine whether this point is a maximum or a minimum. The Hessian matrix evaluated at (2, 4, 6) is

H = | 2 0 0 |
    | 0 2 0 |
    | 0 0 2 |

The principal minor determinants of H have the values 2, 4, 8 respectively; thus each principal minor determinant is positive. Hence H is positive definite and the point (2, 4, 6) yields a minimum of f(x).
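The computation above can be reproduced mechanically: solve ∇f = 0 as a linear system and evaluate the leading principal minors of the Hessian. A short NumPy sketch for this example (an illustration, not part of the thesis):

```python
import numpy as np

# f(x) = x1^2 + x2^2 + x3^2 - 4x1 - 8x2 - 12x3
# grad f = 2x - (4, 8, 12); Hessian = 2I
H = 2 * np.eye(3)
b = np.array([4.0, 8.0, 12.0])

x0 = np.linalg.solve(H, b)            # stationary point: grad f = Hx - b = 0
minors = [np.linalg.det(H[:k, :k]) for k in range(1, 4)]

print("stationary point:", x0)        # [2. 4. 6.]
print("principal minors:", minors)    # [2.0, 4.0, 8.0] -> all positive => minimum
```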

Chapter-5

CONSTRAINED OPTIMIZATION WITH EQUALITY CONSTRAINTS

Lagrangian method

In a nonlinear programming problem, if the objective function is differentiable and the constraints are equalities, optimization can be achieved by the use of Lagrange multipliers.

Formulation

Consider the problem of maximizing or minimizing z = f(x₁, x₂) subject to the constraints g(x₁, x₂) = c and x₁, x₂ ≥ 0, where c is a constant. We assume that f(x₁, x₂) and g(x₁, x₂) are differentiable with respect to x₁ and x₂.

Let us introduce a function h(x₁, x₂), differentiable with respect to x₁ and x₂, defined by h(x₁, x₂) ≡ g(x₁, x₂) − c. The problem is then restated as: maximize z = f(x₁, x₂) subject to the constraints h(x₁, x₂) = 0 and x₁, x₂ ≥ 0.

To find the necessary conditions for a maximum (or minimum) value of z, a new function is formed by introducing a Lagrange multiplier λ:
L(x₁, x₂, λ) = f(x₁, x₂) − λh(x₁, x₂).
The number λ is an unknown constant and the function L(x₁, x₂, λ) is called the Lagrangian function with Lagrange multiplier λ.

Necessary condition

The necessary conditions for a maximum or minimum of f(x₁, x₂) subject to h(x₁, x₂) = 0 are
∂L/∂x₁ = 0, ∂L/∂x₂ = 0, ∂L/∂λ = 0.
The partial derivatives are given by
∂L/∂x₁ = ∂f/∂x₁ − λ ∂h/∂x₁,
∂L/∂x₂ = ∂f/∂x₂ − λ ∂h/∂x₂,
∂L/∂λ = −h,
where L, f and h stand for the functions defined above. The necessary conditions for a maximum or minimum of f(x₁, x₂) are therefore
f₁ = λh₁, f₂ = λh₂ and h(x₁, x₂) = 0.

Sufficient condition

Let the Lagrangian function for n variables and one constraint be
L(x, λ) = f(x) − λh(x).
The necessary conditions for a stationary point to be a maximum or minimum are
∂L/∂xⱼ = ∂f/∂xⱼ − λ ∂h/∂xⱼ = 0 (j = 1, 2, ..., n) and ∂L/∂λ = −h(x) = 0.
The value of λ is obtained from
λ = (∂f/∂xⱼ) / (∂h/∂xⱼ) (for j = 1, 2, ..., n).
The sufficient conditions for a maximum or minimum require the evaluation, at each stationary point, of the n − 1 principal minors Δ₃, Δ₄, ..., Δₙ₊₁ of the bordered determinant

Δₙ₊₁ = | 0   h₁   h₂  ...  hₙ  |
       | h₁  L₁₁  L₁₂ ...  L₁ₙ |
       | h₂  L₂₁  L₂₂ ...  L₂ₙ |
       | ...                   |
       | hₙ  Lₙ₁  Lₙ₂ ...  Lₙₙ |

where hⱼ = ∂h/∂xⱼ and Lᵢⱼ = ∂²L/∂xᵢ∂xⱼ. If Δ₃ > 0, Δ₄ < 0, Δ₅ > 0, ..., the signs alternating, the stationary point is a local maximum. If Δ₃ < 0, Δ₄ < 0, ..., Δₙ₊₁ < 0, the signs being always negative, the stationary point is a local minimum.
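The Lagrangian conditions above can also be generated and solved symbolically rather than by hand. A sketch using SymPy (assuming it is available); the objective and constraint below are small illustrative choices, not taken from the thesis:

```python
import sympy as sp

x1, x2, lam = sp.symbols("x1 x2 lam", real=True)

f = x1**2 + x2**2          # illustrative objective (not from the thesis)
h = x1 + x2 - 4            # illustrative constraint h(x) = 0

L = f - lam*h
eqs = [sp.diff(L, v) for v in (x1, x2, lam)]
print(sp.solve(eqs, (x1, x2, lam), dict=True))   # -> x1 = 2, x2 = 2, lam = 4
```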

Example: Obtain the set of necessary and sufficient conditions for the following NLPP:

Minimize z = 2x₁² − 24x₁ + 2x₂² − 8x₂ + 2x₃² − 12x₃ + 200
subject to the constraints x₁ + x₂ + x₃ = 11; x₁, x₂, x₃ ≥ 0.

Solution: We formulate the Lagrangian function as
L(x₁, x₂, x₃, λ) = 2x₁² − 24x₁ + 2x₂² − 8x₂ + 2x₃² − 12x₃ + 200 − λ(x₁ + x₂ + x₃ − 11).

The necessary conditions for the stationary point are
∂L/∂x₁ = 4x₁ − 24 − λ = 0,
∂L/∂x₂ = 4x₂ − 8 − λ = 0,
∂L/∂x₃ = 4x₃ − 12 − λ = 0,
∂L/∂λ = −(x₁ + x₂ + x₃ − 11) = 0.

The solution of these simultaneous equations yields the stationary point
x₀ = (x₁, x₂, x₃) = (6, 2, 3), with λ = 0.

The sufficient condition for the stationary point to be a minimum is that both the minors Δ₃ and Δ₄ should be negative:

Δ₃ = | 0 1 1 |
     | 1 4 0 |
     | 1 0 4 |  = −8,

Δ₄ = | 0 1 1 1 |
     | 1 4 0 0 |
     | 1 0 4 0 |
     | 1 0 0 4 |  = −48.

Since Δ₃ and Δ₄ are both negative, the stationary point x₀ = (6, 2, 3) is a local minimum. Thus x₀ = (6, 2, 3) provides the solution to the NLPP.
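The minors Δ₃ and Δ₄ can be evaluated directly by building the bordered matrix from the constraint gradient and the Hessian of L. A NumPy sketch for this example (illustrative only):

```python
import numpy as np

# L = 2x1^2 - 24x1 + 2x2^2 - 8x2 + 2x3^2 - 12x3 + 200 - lam*(x1 + x2 + x3 - 11)
HL = np.diag([4.0, 4.0, 4.0])        # Hessian of L w.r.t. x (the linear term drops out)
h = np.array([1.0, 1.0, 1.0])        # gradient of the constraint h(x)

B = np.zeros((4, 4))                  # bordered matrix [[0, h^T], [h, HL]]
B[0, 1:] = h
B[1:, 0] = h
B[1:, 1:] = HL

d3 = np.linalg.det(B[:3, :3])         # Delta_3
d4 = np.linalg.det(B)                 # Delta_4
print(round(d3, 6), round(d4, 6))     # -8.0, -48.0 -> both negative => local minimum
```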

Sufficient conditions for an NLPP with more than one equality constraint

Optimize z = f(x), x ∈ Rⁿ, subject to gᵢ(x) = 0, i = 1, 2, ..., m, and x ≥ 0. The Lagrangian is
L(x, λ) = f(x) − Σᵢ₌₁ᵐ λᵢgᵢ(x)  (m ≤ n),
where m is the number of equality constraints (equal to the number of Lagrange multipliers) and n is the number of unknowns. The conditions
∂L/∂xⱼ = 0 for j = 1, 2, ..., n and ∂L/∂λᵢ = 0 for i = 1, 2, ..., m
provide the necessary conditions for stationary points. The functions L(x, λ), f(x) and g(x) are all assumed to possess partial derivatives of first and second order with respect to the decision variables.

Let M = [∂²L(x, λ)/∂xᵢ∂xⱼ]ₙₓₙ be the matrix of second-order partial derivatives of L(x, λ) with respect to the decision variables, and let V = [∂gᵢ(x)/∂xⱼ]ₘₓₙ, where i = 1, 2, ..., m and j = 1, 2, ..., n. Define the square matrix

H_B = | O   V |
      | Vᵀ  M |  of order (m + n) × (m + n),

where O is an m × m null matrix. The matrix H_B is called the bordered Hessian matrix. The sufficient conditions for a maximum or minimum are then: let (x*, λ*) be a stationary point of the Lagrangian function L(x, λ) and let H_B* be the value of the corresponding bordered Hessian matrix. Then
i) x* is a maximum point if, starting with the principal minor of order (2m + 1), the last (n − m) principal minors of H_B* form an alternating sign pattern starting with (−1)^(m+1);
ii) x* is a minimum point if, starting with the principal minor of order (2m + 1), the last (n − m) principal minors of H_B* have the sign of (−1)ᵐ.

Example: Optimize z = f(x) = 4x₁² + 2x₂² + x₃² − 4x₁x₂ subject to
x₁ + x₂ + x₃ = 15; 2x₁ − x₂ + 2x₃ = 20; x₁, x₂, x₃ ≥ 0.

Solution: Here
f(x) = 4x₁² + 2x₂² + x₃² − 4x₁x₂,
g₁(x) = x₁ + x₂ + x₃ − 15 = 0,
g₂(x) = 2x₁ − x₂ + 2x₃ − 20 = 0.

The Lagrangian function is given by
L(x, λ) = f(x) − λ₁g₁(x) − λ₂g₂(x)
        = 4x₁² + 2x₂² + x₃² − 4x₁x₂ − λ₁(x₁ + x₂ + x₃ − 15) − λ₂(2x₁ − x₂ + 2x₃ − 20).

The stationary point (x*, λ*) can be obtained from the following necessary conditions:
∂L/∂x₁ = 8x₁ − 4x₂ − λ₁ − 2λ₂ = 0 ..........(i)
∂L/∂x₂ = 4x₂ − 4x₁ − λ₁ + λ₂ = 0 ..........(ii)
∂L/∂x₃ = 2x₃ − λ₁ − 2λ₂ = 0 ..........(iii)
∂L/∂λ₁ = −(x₁ + x₂ + x₃ − 15) = 0 ..........(iv)
∂L/∂λ₂ = −(2x₁ − x₂ + 2x₃ − 20) = 0 ..........(v)

Solving equations (i), (ii) and (iii) we get
x₁ = λ₁/2 + λ₂/4, x₂ = 3λ₁/4, x₃ = λ₁/2 + λ₂.

Substituting these values of x₁, x₂, x₃ in equations (iv) and (v) we get
7λ₁ + 5λ₂ = 60 ..........(vi)
5λ₁ + 10λ₂ = 80 ..........(vii)

Solving equations (vi) and (vii) we get
λ₁ = 40/9, λ₂ = 52/9, x₁ = 33/9, x₂ = 10/3, x₃ = 8,
so that
x* = (x₁, x₂, x₃) = (33/9, 10/3, 8) and λ* = (λ₁, λ₂) = (40/9, 52/9).


For this stationary point (x*, λ*) the bordered Hessian matrix is given by

H_B* = |  0   0   1   1   1 |
       |  0   0   2  −1   2 |
       |  1   2   8  −4   0 |
       |  1  −1  −4   4   0 |
       |  1   2   0   0   2 |

Here n − m = 3 − 2 = 1 and 2m + 1 = 2×2 + 1 = 5, so only one principal minor, the determinant of H_B* itself, needs to be checked. Its value is 90, which has the sign of (−1)², i.e. positive. Therefore x* is a minimum point.
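The bordered Hessian test for this two-constraint example can likewise be checked numerically (a sketch, not part of the thesis):

```python
import numpy as np

M = np.array([[ 8.0, -4.0, 0.0],      # second partials of L w.r.t. x
              [-4.0,  4.0, 0.0],
              [ 0.0,  0.0, 2.0]])
V = np.array([[1.0,  1.0, 1.0],       # rows: grad g1, grad g2
              [2.0, -1.0, 2.0]])

HB = np.block([[np.zeros((2, 2)), V],
               [V.T,              M]])

det = np.linalg.det(HB)               # the single (n - m = 1) minor to check
print(round(det, 6))                  # 90.0; sign of (-1)^m = +1 => minimum
```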

Chapter-6

CONSTRAINTS IN THE FORM OF INEQUALITIES
(Kuhn-Tucker necessary conditions)

Maximize f(x), x = (x₁, x₂, ..., xₙ), subject to m inequality constraints gᵢ(x) ≤ bᵢ, i = 1, 2, ..., m, including the non-negativity constraints x ≥ 0, which are written as −x ≤ 0. The necessary conditions for a local maximum or stationary point at x̄ are
i) ∂L/∂xⱼ (x̄, λ, s) = 0, j = 1, 2, ..., n;
ii) λᵢ[gᵢ(x̄) − bᵢ] = 0;
iii) gᵢ(x̄) ≤ bᵢ;
iv) λᵢ ≥ 0, i = 1, 2, ..., m.

(Kuhn-Tucker sufficient conditions)

The Kuhn-Tucker conditions, which are necessary, are also sufficient if f(x) is concave and the feasible space is convex, i.e. if f(x) is strictly concave and the gᵢ(x), i = 1, 2, ..., m, are convex.

Example: Determine x₁, x₂, x₃ so as to maximize
z = −x₁² − x₂² − x₃² + 4x₁ + 6x₂
subject to the constraints
x₁ + x₂ ≤ 2; 2x₁ + 3x₂ ≤ 12; x₁, x₂ ≥ 0.

Solution: Here
f(x) = −x₁² − x₂² − x₃² + 4x₁ + 6x₂,
g₁(x) = x₁ + x₂ ≤ 2; g₂(x) = 2x₁ + 3x₂ ≤ 12; x₁, x₂, x₃ ≥ 0.

First we decide about the concavity or convexity of f(x). Its Hessian matrix is

H = | −2   0   0 |
    |  0  −2   0 |
    |  0   0  −2 |

which is negative definite. Thus f(x) is concave. Clearly g₁(x) and g₂(x), being linear, are convex in x. Thus the Kuhn-Tucker conditions will be the necessary and sufficient conditions for a maximum. These conditions are obtained from the partial derivatives of the Lagrangian function.

L(x, λ, s) = f(x) − λ₁[g₁(x) − 2 + s₁²] − λ₂[g₂(x) − 12 + s₂²]
           = −x₁² − x₂² − x₃² + 4x₁ + 6x₂ − λ₁(x₁ + x₂ − 2 + s₁²) − λ₂(2x₁ + 3x₂ − 12 + s₂²),
where s = (s₁, s₂) and λ = (λ₁, λ₂); s₁, s₂ are slack variables and λ₁, λ₂ are Lagrange multipliers. The Kuhn-Tucker conditions are given by

a) i) ∂L/∂x₁ = −2x₁ + 4 − λ₁ − 2λ₂ = 0
   ii) ∂L/∂x₂ = −2x₂ + 6 − λ₁ − 3λ₂ = 0
   iii) ∂L/∂x₃ = −2x₃ = 0
b) i) λ₁(x₁ + x₂ − 2) = 0
   ii) λ₂(2x₁ + 3x₂ − 12) = 0
c) i) x₁ + x₂ ≤ 2
   ii) 2x₁ + 3x₂ ≤ 12
d) λ₁ ≥ 0, λ₂ ≥ 0

Now, four different cases may arise:

Case 1: (λ₁ = 0, λ₂ = 0)

In this case, the system (a) of equations gives x₁ = 2, x₂ = 3, x₃ = 0. However, this solution violates both of the inequalities in (c) above.

Case 2: (λ₁ = 0, λ₂ ≠ 0)

In this case, (b) gives 2x₁ + 3x₂ = 12, and a(i) and a(ii) give −2x₁ + 4 = 2λ₂ and −2x₂ + 6 = 3λ₂. The solution of these simultaneous equations is x₁ = 24/13, x₂ = 36/13, λ₂ = 2/13 > 0, and equation a(iii) gives x₃ = 0. This solution violates c(i), so it is discarded.

Case 3: (λ₁ ≠ 0, λ₂ ≠ 0)

In this case, b(i) and b(ii) give x₁ + x₂ = 2 and 2x₁ + 3x₂ = 12, whence x₁ = −6 and x₂ = 8. Then a(iii) yields x₃ = 0, and a(i), a(ii) give λ₁ = 68, λ₂ = −26. Since λ₂ = −26 violates condition (d), this solution is discarded.

Case 4: (λ₁ ≠ 0, λ₂ = 0)

In this case b(i) gives x₁ + x₂ = 2. This, together with a(i) and a(ii), gives x₁ = 1/2, x₂ = 3/2, λ₁ = 3 > 0. Further, from a(iii), x₃ = 0. This solution does not violate any of the Kuhn-Tucker conditions. Hence the optimum (maximum) solution to the given problem is x₁ = 1/2, x₂ = 3/2, x₃ = 0, with λ₁ = 3, λ₂ = 0, and the maximum value of the objective function is z = 17/2.
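The case analysis above, where each multiplier is taken to be zero or nonzero, can be automated: enumerate the active constraint sets, solve the resulting linear system, and keep the candidates satisfying all Kuhn-Tucker conditions. A sketch for this example (illustrative only; tolerances are arbitrary):

```python
import numpy as np
from itertools import product

# max f = -x1^2 - x2^2 - x3^2 + 4x1 + 6x2,  g1: x1+x2 <= 2,  g2: 2x1+3x2 <= 12
G = np.array([[1.0, 1.0], [2.0, 3.0]])   # constraint gradients in (x1, x2)
b = np.array([2.0, 12.0])

for active in product([False, True], repeat=2):
    # Stationarity: 2*x + sum(lam_i * grad g_i) = (4, 6); x3 = 0 always.
    idx = [i for i in range(2) if active[i]]
    k = len(idx)
    A = np.zeros((2 + k, 2 + k)); rhs = np.zeros(2 + k)
    A[:2, :2] = 2 * np.eye(2); rhs[:2] = [4.0, 6.0]
    for j, i in enumerate(idx):
        A[:2, 2 + j] = G[i]                       # multiplier column
        A[2 + j, :2] = G[i]; rhs[2 + j] = b[i]    # active: g_i(x) = b_i
    sol = np.linalg.solve(A, rhs)
    x, lam = sol[:2], sol[2:]
    feasible = np.all(G @ x <= b + 1e-9) and np.all(x >= -1e-9)
    if feasible and np.all(lam >= -1e-9):
        print("optimum:", x, "multipliers:", dict(zip(idx, lam)))
        # -> optimum: [0.5 1.5] multipliers: {0: 3.0}
```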

Chapter-7

GRAPHICAL METHOD (Non-linear objective function and linear constraints)

Example: Minimize the distance of the origin from the convex region bounded by the constraints x₁ + x₂ ≥ 4, 2x₁ + x₂ ≥ 5 and x₁, x₂ ≥ 0. Verify that the Kuhn-Tucker necessary conditions hold at the point of minimum distance.

Solution: Minimizing the distance of the origin from the convex region is equivalent to finding the minimum radius r of a circle centred at the origin which just touches the convex region bounded by the given constraints, i.e.

min (z = r²) = x₁² + x₂²
subject to x₁ + x₂ ≥ 4; 2x₁ + x₂ ≥ 5; x₁, x₂ ≥ 0.

The feasible region lies in the first quadrant, since x₁, x₂ ≥ 0. We plot the lines x₁ + x₂ = 4 and 2x₁ + x₂ = 5. The region shaded by the lines is the unbounded convex feasible region. We have to search for a point (x₁, x₂) which gives a minimum value of x₁² + x₂² and lies in the feasible region.

The slope (gradient) of the tangent to the circle x₁² + x₂² = k is obtained by differentiating:
2x₁ + 2x₂ (dx₂/dx₁) = 0, i.e. dx₂/dx₁ = −x₁/x₂.
The slope of the line x₁ + x₂ = 4 is −1 and the slope of the line 2x₁ + x₂ = 5 is −2.

Case 1: If the line x₁ + x₂ = 4 is tangent to the circle x₁² + x₂² = k, then dx₂/dx₁ = −x₁/x₂ = −1, i.e. x₁ = x₂. Solving x₁ + x₂ = 4 and x₁ = x₂, we get x₁ = 2 and x₂ = 2. The line touches the circle at the point (2, 2).

Case 2: If the line 2x₁ + x₂ = 5 is tangent to the circle x₁² + x₂² = k, then dx₂/dx₁ = −x₁/x₂ = −2, i.e. x₁ = 2x₂. Solving 2x₁ + x₂ = 5 and x₁ = 2x₂, we get x₁ = 2 and x₂ = 1. The line touches the circle at the point (2, 1).

Of these two points, (2, 1) lies outside the feasible region, but (2, 2) lies in the feasible region. So
min z = x₁² + x₂² = 2² + 2² = 8 at x₁ = 2, x₂ = 2.

Verification of the Kuhn-Tucker conditions: To verify that (2, 2) satisfies the Kuhn-Tucker conditions, write
f(x) = x₁² + x₂²; g₁(x) = x₁ + x₂ ≥ 4; g₂(x) = 2x₁ + x₂ ≥ 5; x₁, x₂ ≥ 0.

L(x, λ, s) = f(x) − λ₁[g₁(x) − 4 − s₁²] − λ₂[g₂(x) − 5 − s₂²]
           = x₁² + x₂² − λ₁(x₁ + x₂ − 4 − s₁²) − λ₂(2x₁ + x₂ − 5 − s₂²),
where s = (s₁, s₂) and λ = (λ₁, λ₂); s₁, s₂ are surplus variables and λ₁, λ₂ are Lagrange multipliers. The Kuhn-Tucker conditions are given by

a) i) ∂L/∂x₁ = 2x₁ − λ₁ − 2λ₂ = 0
   ii) ∂L/∂x₂ = 2x₂ − λ₁ − λ₂ = 0
b) i) λ₁(x₁ + x₂ − 4) = 0
   ii) λ₂(2x₁ + x₂ − 5) = 0
c) i) x₁ + x₂ ≥ 4
   ii) 2x₁ + x₂ ≥ 5
d) λ₁ ≥ 0, λ₂ ≥ 0

At (2, 2), solving a(i) and a(ii) we get λ₁ = 4, λ₂ = 0.

Thus (2, 2) satisfies conditions a), b), c) and d) of the Kuhn-Tucker conditions for a minimum. Hence min z = 8 at x₁ = 2, x₂ = 2 is the solution, and it satisfies the Kuhn-Tucker conditions.
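The same answer can be reached numerically by minimizing x₁² + x₂² over the feasible region with an off-the-shelf solver, e.g. scipy.optimize.minimize (a sketch assuming SciPy is available; the solver choice and starting point are arbitrary):

```python
from scipy.optimize import minimize

# min z = x1^2 + x2^2  s.t.  x1 + x2 >= 4,  2x1 + x2 >= 5,  x >= 0
res = minimize(
    lambda x: x[0]**2 + x[1]**2,
    x0=[3.0, 3.0],
    method="SLSQP",
    bounds=[(0, None), (0, None)],
    constraints=[
        {"type": "ineq", "fun": lambda x: x[0] + x[1] - 4},    # >= 0
        {"type": "ineq", "fun": lambda x: 2*x[0] + x[1] - 5},
    ],
)
print(res.x, res.fun)   # approximately [2. 2.] and 8
```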

Chapter-8

QUADRATIC PROGRAMMING (Wolfe’s method)

Consider the quadratic programming problem

Max z = f(x) = Σⱼ₌₁ⁿ cⱼxⱼ + (1/2) Σⱼ₌₁ⁿ Σₖ₌₁ⁿ cⱼₖxⱼxₖ

subject to the constraints

Σⱼ₌₁ⁿ aᵢⱼxⱼ ≤ bᵢ, xⱼ ≥ 0 (i = 1, 2, ..., m; j = 1, 2, ..., n),

where cⱼₖ = cₖⱼ for all j and k, and bᵢ ≥ 0 for all i = 1, 2, ..., m. Also assume that the quadratic form Σⱼ₌₁ⁿ Σₖ₌₁ⁿ cⱼₖxⱼxₖ is negative semi-definite.

Outline of the iterative procedure:

Step 1: First we convert the inequality constraints into equations by introducing the slack variable qᵢ² in the i-th constraint (i = 1, 2, ..., m) and the slack variable rⱼ² in the j-th non-negativity constraint (j = 1, 2, ..., n).

Step 2: Then we construct the Lagrangian function

L(x, q, r, λ, µ) = f(x) − Σᵢ₌₁ᵐ λᵢ[Σⱼ₌₁ⁿ aᵢⱼxⱼ − bᵢ + qᵢ²] − Σⱼ₌₁ⁿ µⱼ[−xⱼ + rⱼ²],

where x = (x₁, x₂, ..., xₙ), q = (q₁², ..., qₘ²), r = (r₁², ..., rₙ²), λ = (λ₁, ..., λₘ) and µ = (µ₁, ..., µₙ). Differentiating L partially with respect to the components of x, q, r, λ, µ and equating the first-order partial derivatives to zero, the Kuhn-Tucker conditions are obtained.

Step 3: We introduce the non-negative artificial variables vⱼ, j = 1, 2, ..., n, in the Kuhn-Tucker conditions

cⱼ + Σₖ₌₁ⁿ cⱼₖxₖ − Σᵢ₌₁ᵐ aᵢⱼλᵢ + µⱼ = 0 for j = 1, 2, ..., n,

and construct the objective function z_v = v₁ + v₂ + ... + vₙ.

Step 4: We obtain an initial basic feasible solution to the linear programming problem:

Min z_v = v₁ + v₂ + ... + vₙ
subject to the constraints
Σₖ₌₁ⁿ cⱼₖxₖ − Σᵢ₌₁ᵐ aᵢⱼλᵢ + µⱼ + vⱼ = −cⱼ (j = 1, 2, ..., n),
Σⱼ₌₁ⁿ aᵢⱼxⱼ + qᵢ² = bᵢ (i = 1, 2, ..., m),
vⱼ, λᵢ, µⱼ, xⱼ ≥ 0 (i = 1, 2, ..., m; j = 1, 2, ..., n),
and satisfying the complementary slackness condition
Σⱼ₌₁ⁿ µⱼxⱼ + Σᵢ₌₁ᵐ λᵢsᵢ = 0 (where sᵢ = qᵢ²), i.e. λᵢsᵢ = 0 and µⱼxⱼ = 0.

Step 5: Now we apply the two-phase simplex method to find an optimum solution of the linear programming problem of Step 4. The solution must satisfy the above complementary slackness condition.

Step 6: The optimum solution thus obtained in Step 5 is the optimal solution of the given quadratic programming problem (QPP).

Example:

Max z = 8x₁ + 10x₂ − 2x₁² − x₂²
subject to 3x₁ + 2x₂ ≤ 6 and x₁, x₂ ≥ 0.

Solution: We first write all the constraints with ≤ signs:
3x₁ + 2x₂ ≤ 6, −x₁ ≤ 0, −x₂ ≤ 0.
Now we introduce slack variables:
3x₁ + 2x₂ + q₁² = 6, −x₁ + r₁² = 0, −x₂ + r₂² = 0.

So the problem now becomes:
Max z = 8x₁ + 10x₂ − 2x₁² − x₂²
subject to
3x₁ + 2x₂ + q₁² = 6, −x₁ + r₁² = 0, −x₂ + r₂² = 0.

To obtain the Kuhn-Tucker conditions we construct the Lagrangian function

L(x₁, x₂, λ₁, µ₁, µ₂, q₁, r₁, r₂) = (8x₁ + 10x₂ − 2x₁² − x₂²) − λ₁(3x₁ + 2x₂ + q₁² − 6) − µ₁(−x₁ + r₁²) − µ₂(−x₂ + r₂²).

The necessary and sufficient conditions are

∂L/∂x₁ = 8 − 4x₁ − 3λ₁ + µ₁ = 0,
∂L/∂x₂ = 10 − 2x₂ − 2λ₁ + µ₂ = 0.

Defining s₁ = q₁², we have
3x₁ + 2x₂ + s₁ = 6,
λ₁s₁ = 0, µ₁x₁ = 0, µ₂x₂ = 0,
x₁, x₂, λ₁, µ₁, µ₂, s₁ ≥ 0.

Modified linear programming problem

Introducing the artificial variables v₁ and v₂, we have

Max z_v = −v₁ − v₂
subject to
4x₁ + 3λ₁ − µ₁ + v₁ = 8,
2x₂ + 2λ₁ − µ₂ + v₂ = 10,
3x₁ + 2x₂ + s₁ = 6,
with all variables non-negative.

Table 1

BV   C_B   X_B   x₁(0)  x₂(0)  λ₁(0)  µ₁(0)  µ₂(0)  v₁(−1)  v₂(−1)  s₁(0)
v₁   −1     8      4      0      3     −1      0      1       0       0
v₂   −1    10      0      2      2      0     −1      0       1       0
s₁    0     6      3      2      0      0      0      0       0       1
z_v = −18

λ₁ cannot be the entering variable, since s₁ is a basic variable and λ₁s₁ = 0 must hold. x₁ can enter, since µ₁ is not a basic variable (x₂ can also enter, as µ₂ is not basic). For x₁ the minimum ratio (8/4, 6/3) gives a tie, so we take x₂ as the entering variable; its minimum ratio (10/2, 6/2) makes s₁ the leaving variable.

Table 2

BV   C_B   X_B   x₁(0)  x₂(0)  λ₁(0)  µ₁(0)  µ₂(0)  v₁(−1)  v₂(−1)  s₁(0)
v₁   −1     8      4      0      3     −1      0      1       0       0
v₂   −1     4     −3      0      2      0     −1      0       1      −1
x₂    0     3     3/2     1      0      0      0      0       0      1/2
z_v = −12

Now λ₁ can enter, as s₁ is no longer a basic variable. The minimum ratio (8/3, 4/2) makes v₂ the leaving variable.

Table 3

BV   C_B   X_B   x₁(0)  x₂(0)  λ₁(0)  µ₁(0)  µ₂(0)  v₁(−1)  v₂(−1)  s₁(0)
v₁   −1     2    17/2     0      0     −1     3/2     1     −3/2    3/2
λ₁    0     2    −3/2     0      1      0    −1/2     0      1/2   −1/2
x₂    0     3     3/2     1      0      0      0      0       0     1/2
z_v = −2

Now x₁ enters (µ₁ is not basic); the minimum ratio (4/17, 2) makes v₁ the leaving variable.

Table 4

BV   C_B   X_B     x₁(0)  x₂(0)  λ₁(0)  µ₁(0)   µ₂(0)   v₁(−1)  v₂(−1)  s₁(0)
x₁    0    4/17     1      0      0    −2/17    3/17    2/17   −3/17    3/17
λ₁    0   40/17     0      0      1    −3/17   −4/17    3/17    4/17   −4/17
x₂    0   45/17     0      1      0     3/17   −9/34   −3/17    9/34    4/17
z_v = 0

The optimum solution is x₁ = 4/17, x₂ = 45/17, λ₁ = 40/17, with v₁ = v₂ = µ₁ = µ₂ = s₁ = 0. This satisfies the conditions λ₁s₁ = 0, µ₁x₁ = 0, µ₂x₂ = 0 and the sign restrictions on the Lagrange multipliers. So the maximum value of z is max z = 6137/289.
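The optimum found by Wolfe's method can be double-checked by substituting it back into the Kuhn-Tucker system and the complementary slackness conditions. A small verification sketch in exact rational arithmetic (illustrative, not part of the thesis):

```python
from fractions import Fraction as F

x1, x2 = F(4, 17), F(45, 17)
lam1, mu1, mu2 = F(40, 17), F(0), F(0)
s1 = 6 - 3*x1 - 2*x2                      # slack of 3x1 + 2x2 <= 6

# Kuhn-Tucker stationarity for f = 8x1 + 10x2 - 2x1^2 - x2^2
assert 8 - 4*x1 - 3*lam1 + mu1 == 0
assert 10 - 2*x2 - 2*lam1 + mu2 == 0
# Complementary slackness and feasibility
assert lam1*s1 == 0 and mu1*x1 == 0 and mu2*x2 == 0
assert s1 >= 0 and x1 >= 0 and x2 >= 0 and lam1 >= 0

z = 8*x1 + 10*x2 - 2*x1**2 - x2**2
print(z)                                   # 6137/289
```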

Conclusion

The present work demonstrates methods for solving optimization problems which are quadratic in nature. As discussed earlier, the concept of convex functions has been used to solve these optimization problems.

Three different cases are considered: i) unconstrained problems, ii) problems with equality constraints, and iii) problems with inequality constraints.

The graphical method has proved very efficient in solving problems in two dimensions. Wolfe's method converts the quadratic program in successive steps to a linear program, which can be solved easily by the two-phase simplex method. Thus quadratic programming problems can be handled easily.

References

(1) Mokhtar S. Bazaraa, Hanif D. Sherali, C. M. Shetty, Nonlinear Programming: Theory and Algorithms, John Wiley and Sons, third edition.

(2) Suresh Chandra, Jayadeva, Aparna Mehra, Numerical Optimization with Applications, Narosa Publishing House, reprint 2011.

(3) S.D. Sharma, Himanshu Sharma, Operations Research, Kedar Nath Ram Nath, 1972.

(4) Kanti Swarup, P.K. Gupta, Man Mohan, Operations Research, Sultan Chand & Sons, 1978.
