Handling Vectors and Matrices in MATLAB

All MATLAB variables are matrices by default; a variable given a single value is simply a 1x1 matrix (a scalar). Consider a matrix A of size (3x3). One method of filling the matrix is by entering its elements one by one:

>> A(1,1)=2; A(1,2)=3; A(1,3)=4;   % first row
>> A(2,1)=5; A(2,2)=6; A(2,3)=7;   % second row
>> A(3,1)=1; A(3,2)=8; A(3,3)=9;   % third row

This is a laborious way of filling the matrix.

In MATLAB the rows are more easily entered together:

>> A = [2 3 4; 5 6 7; 1 8 9];

Rows are separated by semicolons; a blank or a comma separates the columns within a row, so the same matrix can be written as:

>> A = [2, 3, 4; 5, 6, 7; 1, 8, 9];

Typing the variable name displays it:

>> A
A =
     2     3     4
     5     6     7
     1     8     9

We can also assemble submatrices to create a full matrix. E.g.:

>> a=[2 3 4];
>> b=[5 6 7];
>> c=[1 8 9];
>> A=[a; b; c];

gives the same A matrix as above.

Generation of row and column vectors (for solving equations): a row vector is entered with blanks or commas, and a column vector with semicolons:

>> r = [1 2 3];    % row vector
>> c = [1; 2; 3];  % column vector
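Row vectors are also commonly generated with the colon operator or linspace, and the transpose operator (') converts between row and column form. A short sketch (the values here are illustrative, not from the notes):

```matlab
r = 1:5;               % row vector [1 2 3 4 5] (colon operator, unit step)
t = 0:0.5:2;           % row vector [0 0.5 1.0 1.5 2.0] (step of 0.5)
s = linspace(0,1,5);   % row vector of 5 equally spaced points from 0 to 1
c = r';                % transpose: column vector [1;2;3;4;5]
```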

Some built-in matrix functions:
>> det(A) – determinant of A.
>> eig(A) – eigenvalues and eigenvectors of A.
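For the 3x3 matrix A entered above, these can be called directly. A short sketch (the [V,D] form, which also returns the eigenvectors, is standard MATLAB but not shown in the notes):

```matlab
A = [2 3 4; 5 6 7; 1 8 9];
d = det(A);        % determinant of A (a scalar; here d = 18)
lambda = eig(A);   % column vector of the eigenvalues of A
[V,D] = eig(A);    % V: eigenvectors as columns, D: diagonal matrix of eigenvalues
```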


Note: matrices are multiplied with the same * operator as scalar variables, but make sure the dimensions match: the number of columns of the first matrix must equal the number of rows of the second. (For element-by-element multiplication, use the .* operator instead.)
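The dimension rule can be illustrated with a small sketch contrasting the matrix product * with the element-wise product .* :

```matlab
A = [1 2; 3 4];
B = [5 6; 7 8];
C = A*B;     % matrix product (inner dimensions 2x2 * 2x2 agree)
             % C = [19 22; 43 50]
D = A.*B;    % element-wise product: D = [5 12; 21 32]
% A*[1;2;3] would fail: "Inner matrix dimensions must agree."
```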

Initialization of matrices to zero, or to the identity matrix: zeros(M,N) produces a zero matrix of order (MxN); eye(N) produces an identity matrix of order NxN.

A=zeros(4,3);   % 4x3 matrix of zeros
R=zeros(3,1);   % column vector of zeros
Q=zeros(1,3);   % row vector of zeros

Example of creating an identity matrix:

>> A=eye(3);   % identity matrix of order 3
>> A
A =
     1     0     0
     0     1     0
     0     0     1

Matlab Optimization Toolbox

Contains routines for:
1. Linear programming
2. Constrained non-linear optimization
3. Unconstrained non-linear optimization
4. Goal programming
5. Least squares minimization

Basics:
1. For all optimization functions, the parameters are set using the optimset() function (e.g. number of iterations, tolerance, display options, i.e. whether to show all iterations or not, etc.).
2. A subroutine (.m file) which returns the objective function when supplied with the design variable vector {x} may be required. It will be called by the main program.
3. A subroutine (.m file) which returns the equality and inequality constraint coefficient matrices may be required. For equality constraints it must return [Aeq] and {beq}, where [Aeq]{x} = {beq}. For inequality constraints it must return [A] and {b}, where [A]{x} ≤ {b}.

Linear programming: subroutine linprog().

The problem is specified by: minimize f(x) (where {x} is a vector of design variables) subject to:
[Aeq]{x} = {beq};
[A]{x} ≤ {b};
{lb} ≤ {x} ≤ {ub}   (i.e. lower and upper bounds on each design variable)

Set the default options of linprog using optimset:

options = optimset('linprog');

We can also give individual customized options:

options = optimset('Display','off','MaxIter',100,'TolFun',1e-5);
or
options.MaxIter = 100;

(refer to the MATLAB help command on optimset). Then call linprog:

x = linprog(f,A,b,Aeq,beq,lb,ub,x0,options)

where f is the vector of coefficients of the objective function; A and b are the coefficients of the inequality constraints; Aeq and beq are the coefficients of the equality constraints; lb and ub are the lower and upper bounds on x; x0 holds the starting values of the variables; and options is the structure specified via optimset.

Example: minimize f(x) = −5x1 − 4x2 − 6x3. The coefficients of the objective function are:

f = [-5 -4 -6];

subject to (only inequality constraints):

x1 −  x2 +  x3 ≤ 20
3x1 + 2x2 + 4x3 ≤ 42
3x1 + 2x2 + 0x3 ≤ 30

i.e. [A]{x} ≤ {b} with

[A] = [1 -1 1; 3 2 4; 3 2 0];
{b} = [20; 42; 30];
[Aeq] = []; and {beq} = [];   (null — no equality constraints)

Variable bounds? Each variable is bounded below by zero, {0; 0; 0} ≤ {x1; x2; x3} ≤ ub, and no upper bound is defined, so:

lb = [0; 0; 0]; ub = [];   (not defined, null)

The starting values of the variables {x0} = {x1,0; x2,0; x3,0} are set as null: x0 = [];

% Listing of the linprog program.
options=optimset('linprog');
f=[-5 -4 -6];
A=[1 -1 1; 3 2 4; 3 2 0];
b=[20; 42; 30];
lb=zeros(3,1); ub=[]; x0=[];
Aeq=[]; beq=[];
[x,fval] = linprog(f,A,b,Aeq,beq,lb,ub,x0,options);
%---------------- output ----------------
x =
    0.0000
   15.0000
    3.0000
fval =
  -78.0000

Let us try a goal programming problem with linprog. There are two objective functions:

f1 = −5x1 − 4x2 − 6x3
f2 = −3x1 + 2x2 − x3

(the other 3 constraints are the same as in the previous problem). Both f1 and f2 are set to goals: f1 is set to −65, and f2 is set to 20.

Two new equality constraints are created. Now there are 7 variables: x1, x2, x3, η1, δ1, η2 and δ2.

−5x1 − 4x2 − 6x3 + η1 − δ1 + 0·η2 + 0·δ2 = −65   (f1)
−3x1 + 2x2 −  x3 + 0·η1 + 0·δ1 + η2 − δ2 = 20   (f2)

[Aeq] = [-5 -4 -6 1 -1 0 0; -3 2 -1 0 0 1 -1];
{beq} = [-65; 20];

The previous 3 inequality constraints are also padded with η1, δ1, η2 and δ2 with zero coefficients, e.g.:

x1 − x2 + x3 + 0·η1 + 0·δ1 + 0·η2 + 0·δ2 ≤ 20

What is the objective function? Minimize the sum of the slack variables:

0·x1 + 0·x2 + 0·x3 + η1 + δ1 + η2 + δ2, i.e. f = [0 0 0 1 1 1 1];

% effect of slack variables and goal programming
options=optimset('linprog');
f=[0 0 0 1 1 1 1];
A=[1 -1 1 0 0 0 0; 3 2 4 0 0 0 0; 3 2 0 0 0 0 0];
b=[20; 42; 30];
lb=zeros(7,1); ub=[]; x0=[];
Aeq=[-5 -4 -6 1 -1 0 0; -3 2 -1 0 0 1 -1];
beq=[-65; 20];
[x,fval] = linprog(f,A,b,Aeq,beq,lb,ub,x0,options)
%---------------- output ----------------
x =
    0.2091
   13.1511
    1.8917
    0.0000     % slack variables are all zero;
    0.0000     % the goal is attainable
    0.0000
    0.0000
fval =         % objective function value
  2.3784e-015

Single variable unconstrained optimization (minimization).

This subroutine is based on the golden section method.

[x,fval]=fminbnd('FUN',x1,x2,options);

It returns 'x', the solution, and 'fval', the minimum value of the objective function. f=FUN(x) has to be separately written as a subroutine (FUN.m) which returns 'f' when supplied with 'x' (say, a single-variable function like f = −2 sin x − x²/20). x1 and x2 are the variable bounds (lower and upper).

options=optimset('fminbnd');   (all default values for that function)
or
options=optimset('TolX',1e-3,'MaxIter',100);

You can look up these parameters using the "help" command.

%fminbnd minimizes a single variable within bounds,
%based on the golden section method with parabolic interpolation.
%Set options to the default values of fminbnd using options=optimset('fminbnd')
%OR set them to non-default values chosen by you.
%Here maximum iterations is 100 and the termination tolerance is 1e-3.
options=optimset('MaxIter',100,'TolX',1e-3);
x1=0; x2=4;
[x,fval]=fminbnd('FUN',x1,x2,options)
%-----------------------------------------------------------
% Define the objective function 'FUN' in a file FUN.m.
% In that file, write:
function f=FUN(x)
f=-2*sin(x)-x^2/20;
%---------------- solution ----------------
x =
    1.6536
fval =
   -2.1299

Multi-variable unconstrained optimization:
1. fminunc() — uses a gradient (calculus-based) method.
2. fminsearch() — uses a direct search algorithm (derivative-free; no gradients required).

The function fminunc():

[x,fval]=fminunc('FUN',x0,options)

FUN.m has to be written such that, when supplied with the vector of input variables {x} = {x1, x2, x3, ...}, it returns the objective function value. {x0} is the vector of initial values for {x}.

options=optimset('fminunc'), or the user can set each parameter individually.

We will try the example: minimize f = e^(x1)·(4x1² + 2x2² + 4x1·x2 + 2x2 + 1). The local minimum is zero, at (x1, x2) = (0.5, −1). We will try two initial values, (10,10) and (1,1), and see the solution in each case.

options=optimset('fminunc');
x0=[10 10];
[x,fval]=fminunc('FUN1',x0,options)
%Description of the FUN1.m file. It must contain this script:
function f = FUN1(x)
f=exp(x(1))*(4*x(1)^2+2*x(2)^2+4*x(1)*x(2)+2*x(2)+1);
%---------------- output ----------------
x =
   -2.1394    8.9839
fval =
   14.3402
%This is a wrong optimum. Try another initial guess.
x0=[1 1];
[x,fval]=fminunc('FUN1',x0,options)
x =
    0.5000   -1.0000
fval =
  2.7215e-012
%This is the right value. It can be verified with other initial guesses.

To see more detailed output we can request the 'output' argument from the function. The 'exitflag' output is also useful to indicate any errors.

options=optimset('fminunc');
x0=[1 1];
[x,fval,exitflag,output]=fminunc('FUN1',x0,options)
x =
    0.5000   -1.0000
fval =
  2.7215e-012
exitflag =
     1
output =
       iterations: 17
        funcCount: 57
         stepsize: 1
    firstorderopt: 3.0955e-006
        algorithm: 'medium-scale: Quasi-Newton line search'
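fminunc can also be given an analytic gradient instead of approximating it by finite differences. The sketch below is not from the notes: FUN1g.m is a hypothetical variant of FUN1.m that returns the gradient as a second output, enabled through the 'GradObj' option of optimset:

```matlab
% Contents of the hypothetical file FUN1g.m:
function [f,g] = FUN1g(x)
e = exp(x(1));
f = e*(4*x(1)^2 + 2*x(2)^2 + 4*x(1)*x(2) + 2*x(2) + 1);
g = [f + e*(8*x(1) + 4*x(2));   % df/dx1 (product rule)
     e*(4*x(1) + 4*x(2) + 2)];  % df/dx2

% In the calling script:
% options = optimset('fminunc');
% options = optimset(options,'GradObj','on');
% [x,fval] = fminunc('FUN1g',[1 1],options)
```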

The function fminsearch() — direct search method:

[x,fval]=fminsearch('FUN',x0,options)

FUN.m has to be written such that, when supplied with the vector of input variables {x} = {x1, x2, x3, ...}, it returns the objective function value. {x0} is the vector of initial values for {x}.

options=optimset('fminsearch'), or the user can set each parameter individually.

Try the previous example using direct search: minimize f = e^(x1)·(4x1² + 2x2² + 4x1·x2 + 2x2 + 1); the actual minimum is zero, at (x1, x2) = (0.5, −1).

options=optimset('fminsearch');
x0=[1 1];
[x,fval]=fminsearch('FUN1',x0,options)
%---------------- output ----------------
x =
 -763.1739  120.6751
fval =
     0
%This is not right: the search has run off towards x1 -> -inf,
%where the factor e^(x1) drives f to zero.
%Let us try a guess close to the actual optimum (0.5, -1):
x0=[1 -2];
[x,fval]=fminsearch('FUN1',x0,options)
%---------------- output: now it is OK ----------------
x =
    0.5000   -1.0000
fval =
  1.5367e-009

In this example the direct search algorithm's performance is inferior to that of the gradient-based method. However, a direct search algorithm may perform better than a gradient-based one when there are a large number of variables.

Quadratic Programming: if the objective function can be expressed in quadratic form (a second-order Taylor expansion about the current point (x0, y0)):

f(x) ≈ {∂f/∂x  ∂f/∂y}|(x0,y0) {d1; d2} + (1/2) {d1 d2} [∇²f] {d1; d2}
     = {f}ᵀ{d} + (1/2) {d}ᵀ[Q]{d}

({f} and [Q] have to be supplied to the algorithm), and we have sets of equality and inequality constraints:

[Aeq]{d} = {beq};   [A]{d} ≤ {b}

then we can use the function:

[x,fval]=quadprog(Q,f,A,b,Aeq,beq,lb,ub,x0,options)

Example (ref. previous lecture on constrained optimization):

Minimize f(x,y) = 6x/y + y/x²
subject to: xy − 2 = 0;   x + y − 1 ≥ 0.

At the starting point (2,1), the objective function has been converted to quadratic form:

f(x,y) ≈ {23/4  −47/4} {d1; d2} + (1/2) {d1 d2} [3/8  −25/4; −25/4  24] {d1; d2}

so {f} = [23/4; −47/4] and [Q] = [3/8 −25/4; −25/4 24].

The constraints are expressed in terms of {d}:

xy − 2 = 0 becomes {1 2} {d1; d2} = 0, so Aeq = [1 2]; beq = [0].

x + y − 1 ≥ 0 becomes {1 1} {d1; d2} + 2 ≥ 0, or −{1 1} {d1; d2} ≤ 2, so A = [−1 −1]; b = [2].

%Illustration of quadratic programming with 'quadprog'
options=optimset('quadprog');
f=[23/4; -47/4];
Q=[3/8 -25/4; -25/4 24];
Aeq=[1 2]; beq=[0];
A=[-1 -1]; b=[2];
x0=[2 1]; lb=[]; ub=[];
[x,fval]=quadprog(Q,f,A,b,Aeq,beq,lb,ub,x0,options)
%---------------- output ----------------
x =
   -0.9208
    0.4604
fval =
   -5.3521
%correct solution

Note: for sequential quadratic programming, we have to call the subroutine again with {f} and [Q] recalculated at the updated design point (x0, y0) + {d} = (2, 1) + (−0.9208, 0.4604) = (1.0792, 1.4604), and repeat until {d} becomes negligibly small.
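The repeated calls can be sketched as a loop. This is an illustrative sketch, not from the notes: it assumes the objective is f(x,y) = 6x/y + y/x² (whose gradient and Hessian at (2,1) reproduce the f and Q values used above); grad_f and hess_f are helper handles written for this sketch, not toolbox functions.

```matlab
% Sketch of a sequential quadratic programming loop (illustrative only).
grad_f = @(z) [6/z(2) - 2*z(2)/z(1)^3; -6*z(1)/z(2)^2 + 1/z(1)^2];
hess_f = @(z) [6*z(2)/z(1)^4,         -6/z(2)^2 - 2/z(1)^3; ...
               -6/z(2)^2 - 2/z(1)^3,  12*z(1)/z(2)^3];
zk = [2; 1];                            % starting point (x0, y0)
options = optimset('Display','off');
for iter = 1:20
    Q   = hess_f(zk);     f = grad_f(zk);
    Aeq = [zk(2) zk(1)];  beq = -(zk(1)*zk(2) - 2);  % linearized xy - 2 = 0
    A   = [-1 -1];        b   = zk(1) + zk(2) - 1;   % linearized x + y - 1 >= 0
    d   = quadprog(Q,f,A,b,Aeq,beq,[],[],[],options);
    zk  = zk + d;                       % move to the new design point
    if norm(d) < 1e-6, break; end
end
zk   % iterates approach the constrained optimum near (x,y) = (1, 2)
```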

Gembicki’s Goal Programming method (Multi Objective Optimization)

If there are many objective functions Fi(x), with goals Fg,i, then Gembicki's method tries to minimize the slack variable γ subject to:

Fi(x) − weight_i · γ ≤ Fg,i

with optimization variables (x, γ). γ must be driven to a small value to satisfy the goals, subject to the equality, inequality and non-linear constraints. The function fgoalattain is defined as follows:

[x,fval,attainfactor,exitflag] = fgoalattain('fun',x0,goal,weight,A,b,Aeq,beq,lb,ub,nonlcon);

fun.m — this subroutine must return the Fi values when supplied with the 'x' values.

Other algorithm parameters:

a) goal: vector of goals to be attained by the objective functions.

b) weight: the weights are recommended to be set equal to the goals, to ensure that the same relative difference exists in attaining all goals; typically weight is set to abs(goal). When the weights and goals are positive, the algorithm tries to make the objectives less than the goal values; when a weight is negative, it tries to make that objective more than its goal value.

c) "GoalsExactAchieve" parameter: if we want to exactly achieve the goals (not less than), set the "GoalsExactAchieve" parameter equal to an integer, which is the number of objective functions that must exactly achieve their goals. These objective functions must be ranked at the beginning of the objective function vector.

options = optimset('GoalsExactAchieve',3);
or
options.GoalsExactAchieve = 3;
(This means the first 3 objective functions are set to exactly achieve their goals.)

d) attainfactor: the algorithm returns an attainment factor, the amount of over- or under-achievement of the goals. If the weights were set equal in magnitude to the goals, the attainment factor gives the smallest percentage over- or under-attainment among all the objective functions: attainfactor = (set goal − attained goal)/set goal. E.g. if attainfactor is returned as −0.1, the goals have been overachieved by 10% for one objective function.

Example: consider the multi-objective example in the lecture:

Min: f1 = 4x − y        (try a goal of −0.5)
     f2 = −0.5x + y     (try a goal of 3.5)

subject to: 2x + y ≤ 8;  x ≥ 1;  y ≤ 5;  (x, y) ≥ 0.
(Note: no equality constraints or non-linear constraints.)

goal = [-0.5 3.5];  weight = abs(goal);

Written in the standard [A]{x} ≤ {b} form (x ≥ 1 becomes −x ≤ −1):

[A] = [2 1; -1 0; 0 1];   {b} = [8; -1; 5];

Aeq=[]; beq=[]; nonlcon=[];   (i.e. no equality constraints or non-linear constraints)

Write fun4.m, which returns f(1) and f(2) when supplied with x(1), x(2):

function f=fun4(x)
f(1)=4*x(1)-x(2);
f(2)=-0.5*x(1)+x(2);

options=optimset('fgoalattain');
% Define coefficients of the inequality constraints;
A=[2 1; -1 0; 0 1];
b=[8 -1 5];
% All others set to NULL;
Aeq=[]; beq=[]; nonlcon=[];
% Lower bounds of x and y set to 0, 0; upper bounds undefined;
lb=[0 0]; ub=[];
% Initial values of x, y
x0=[0 0];
% Goals and weights defined
goal=[-0.5 3.5];
weight=abs(goal);
% Call the function;
[x,fval,attainfactor,exitflag]=fgoalattain('fun4',x0,goal,weight,A,b,Aeq,beq,lb,ub,nonlcon)

Results of the run:

x =                  % solution
    1.0000    4.4375
fval =               % attained f1, f2 values
   -0.4375    3.9375
attainfactor =       % 12.5% discrepancy in the goal for f1 (-0.4375 attained
    0.1250           % against -0.5); the same 12.5% discrepancy for f2
exitflag =           % a positive exit flag means the iterations have converged
     4
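The reported attainment factor can be checked by hand from the defining inequalities Fi(x) − weight_i·γ ≤ goal_i, i.e. γ = max_i (Fi − goal_i)/weight_i. A quick sketch using the numbers above:

```matlab
goal = [-0.5 3.5];
weight = abs(goal);
f_attained = [-0.4375 3.9375];            % fval returned by fgoalattain
gamma = max((f_attained - goal)./weight)  % gamma = 0.1250, the attainfactor
```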

Let us try new goals and see the attainment factor. Try goals of [-1 4.5]:

%Goals and weights re-defined
goal=[-1 4.5];
weight=abs(goal);

x =
     1     5         % f1, f2 perfectly attained
fval =
   -1.0000    4.5000
attainfactor =
     0               % attainment factor = 0
exitflag =
     4

General constrained optimization with Linear & Non-linear constraints

[x,fval]=fmincon('FUN',x0,A,b,Aeq,beq,lb,ub,'NONLCON',options)

Here:
1. A, b, Aeq, beq represent the coefficients of the linear (equality and inequality) constraints.
2. FUN.m is a subroutine which returns the objective function.
3. NONLCON.m is a subroutine which specifies the non-linear constraints. It returns the values of C and Ceq when supplied with the variables x = x1, x2, x3, ...:

[C,Ceq]=NONLCON(x)
Ceq=h(x)   (the equality constraint is defined as h(x)=0; Ceq is the constraint violation);
C=g(x)     (the inequality constraint is defined as g(x)≤0; C is the deviation from zero).

Example. Minimize f(x) = x1³ − 6x1² + 11x1 + x3

Constraints:
a) Non-linear inequalities:
x1² + x2² − x3² ≤ 0
4 − x1² − x2² − x3² ≤ 0
b) Linear inequalities (they can also be given as variable bounds):
x3 ≤ 5
−xi ≤ 0, i = 1, 2, 3

function [c,ceq] = NONLCON(x)
% Nonlinear inequality constraints
c = [x(1)^2+x(2)^2-x(3)^2;
     4-x(1)^2-x(2)^2-x(3)^2];
% There are no nonlinear equality constraints
ceq=[];

Linear constraints? Only 4 inequality constraints are given:

A=[0 0 1; -1 0 0; 0 -1 0; 0 0 -1];
b=[5; 0; 0; 0];
Aeq=[]; beq=[];

Definition of the objective function:

function f=FUN(x)
f=x(1)^3-6*x(1)^2+11*x(1)+x(3);

% Program listing
options = optimset('fmincon');
x0 = [0.1, 0.1, 3.0];   % starting guess
A=[0 0 1; -1 0 0; 0 -1 0; 0 0 -1];
b=[5; 0; 0; 0];
Aeq=[]; beq=[];
lb=[]; ub=[];
[x,fval]=fmincon('FUN',x0,A,b,Aeq,beq,lb,ub,'NONLCON',options)
%---------------- solution ----------------
x =
         0    1.4142    1.4142

%After running the program, check for constraint violation:
[c,ceq] = NONLCON(x)
%It will print the values of the [c] and [ceq] vectors.
%The [c] values must all be less than or equal to zero.

A Machine Design problem (from S.S. Rao's text)

Design vector X = {x1; x2; x3; x4} = {h; l; t; b}

Min: f(X) = 1.10471·x1²·x2 + 0.04811·x3·x4·(14.0 + x2)

7 inequality constraints:
g1(X) = τ(X) − τmax ≤ 0
g2(X) = σ(X) − σmax ≤ 0
g3(X) = x1 − x4 ≤ 0
g4(X) = 0.10471·x1² + 0.04811·x3·x4·(14.0 + x2) − 5.0 ≤ 0
g5(X) = 0.125 − x1 ≤ 0
g6(X) = δ(X) − δmax ≤ 0
g7(X) = P − Pc(X) ≤ 0

4 variable bounds, g8(X) to g11(X):
0.1 ≤ xi ≤ 2.0,  i = 1, 4
0.1 ≤ xi ≤ 10.0, i = 2, 3

Definitions of the parameters used in the constraint equations:

τ = sqrt( (τ')² + 2·τ'·τ''·cosθ + (τ'')² ),  with cosθ = x2/(2R)
τ' = P / (√2·x1·x2)
τ'' = M·R / J
M = P·(L + x2/2)
R = sqrt( x2²/4 + ((x1 + x3)/2)² )
J = 2·{ √2·x1·x2·[ x2²/12 + ((x1 + x3)/2)² ] }
σ(X) = 6·P·L / (x4·x3²)
δ(X) = 4·P·L³ / (E·x3³·x4)
Pc(X) = ( 4.013·sqrt(E·G·x3²·x4⁶/36) / L² )·( 1 − (x3/(2L))·sqrt(E/(4G)) )

Data: P = 6000 lb, L = 14 in, E = 30×10⁶ psi, G = 12×10⁶ psi,
τmax = 13,600 psi, σmax = 30,000 psi, δmax = 0.25 in.

The objective function is defined in FUN.m:

function f=FUN(x)
f=1.10471*x(1)*x(1)*x(2)+0.04811*x(3)*x(4)*(14.0+x(2));

function [c,ceq] = NONLCON(x)
P=6000; E=30e6; G=12e6;
smax=30000; deltamax=0.25; shearmax=13600; L=14;
j1=2*1.414*x(1)*x(2);
j2=(x(2)^2)/12+((x(1)+x(3))/2)^2;
J=j1*j2;
S=6*P*L/(x(4)*x(3)*x(3));
K=(4*P*L^3)/(E*(x(3)^3)*x(4));
pc1=1-(x(3)/(2*L))*(E/(4*G))^(1/2);
pc2=4.013*(E*G*(x(3)^2)*(x(4)^6)/36)^(1/2);
Pc=pc1*pc2/(L*L);
M=P*(L+x(2)/2);
R=((x(2)^2)/4+((x(1)+x(3))/2)^2)^0.5;
A=P/(x(1)*x(2)*1.414);
B=M*R/J;
T=(A^2+2*A*B*(x(2)/(2*R))+B^2)^0.5;
ceq=[];   % no equality constraints
c=[T-shearmax; S-smax; x(1)-x(4); ...
   0.10471*x(1)*x(1)+0.04811*x(3)*x(4)*(14.0+x(2))-5.0; ...
   0.125-x(1); K-deltamax; P-Pc];   % 7 inequality constraints

% Executing the optimization routine
x0=[0.2455; 6.196; 8.273; 0.2455];   % starting guess
% Define variable bounds
lb=[0.1; 0.1; 0.1; 0.1];
ub=[2; 10; 10; 2];
% Note: A, b, Aeq and beq are all set to NULL.
options = optimset('fmincon');
[x,fval]=fmincon('FUN',x0,[],[],[],[],lb,ub,'NONLCON',options)
x =
    0.1865
    4.3184
    8.2915
    0.2444
fval =
    1.9516   % value of the objective function at the minimum

% As a check, we can look at the constraint values at {x}:
[c,ceq] = NONLCON(x)
c =
    0.0000
         0
   -0.0579
   -3.2107
   -0.0615
   -0.2342
         0
ceq =
     []

All constraint values are ≤ 0, so the solution is feasible.