NDcost - generic external for optim, computing the gradient by finite differences

Calling Sequence

[f,g,ind]=NDcost(x,ind,fun,varargin)

Parameters

x : real column vector, the point where the criterion is evaluated.

ind : integer, the communication flag used by optim (see optim).

fun : Scilab function computing the criterion, with calling sequence f=fun(x,varargin).

varargin : optional arguments p1,...,pn passed on to fun.

f : value of the criterion at x.

g : gradient of the criterion at x, approximated by finite differences.

Description

This function can be used as an external for optim to minimize problems whose gradient is too complicated to program. Only the function fun, which computes the criterion, is required.

This function should be used as follows: [f,xopt,gopt]=optim(list(NDcost,fun,p1,...,pn),x0,...)
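
For instance, here is a minimal sketch (the quadratic criterion myquad and its extra parameter a are illustrative, not part of NDcost itself):

// minimize f(x) = sum((x-a).^2) without coding its gradient
function f=myquad(x, varargin)
   a = varargin(1)   // extra parameter forwarded through the list
   f = sum((x-a).^2)
endfunction

a = [1;2;3]; x0 = zeros(3,1);
[f, xopt, gopt] = optim(list(NDcost, myquad, a), x0)
// xopt should be close to a, and gopt close to zero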

Examples

// example #1 (a simple one)
// function to minimize
function f=rosenbrock(x,varargin)
   p = varargin(1)   // the Rosenbrock parameter, passed through optim's list
   f = 1 + sum( p*(x(2:$)-x(1:$-1).^2).^2 + (1-x(2:$)).^2 )
endfunction

x0=[1;2;3;4];
[f,xopt,gopt]=optim(list(NDcost,rosenbrock,200),x0)
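
// NDcost can also be called directly to inspect the finite-difference
// gradient it returns (an added check; ind=4 is the optim external
// convention for requesting both the criterion and its gradient):
[f0, g0] = NDcost(x0, 4, rosenbrock, 200);
// cross-check against derivative, which also accepts an external given
// as list(fun,p1,...,pn):
g1 = derivative(list(rosenbrock, 200), x0)';
mprintf("\n difference between the two gradients: %g\n", norm(g0-g1))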

// example #2: this example (from Rainer von Seggern) shows a quick (*) Scilab way
//             to identify the parameters of a linear differential equation.
//             The model is a simple damped (linear) oscillator:
//
//               x''(t) + c x'(t) + k x(t) = 0
// 
// and we write it as a system of (2) differential equations of first
// order with y(1) = x, and y(2) = x':
//
//     dy1/dt = y(2)
//     dy2/dt = -c*y(2) -k*y(1)
//
// Suppose we have m measurements of x (that is, of y(1)) at times
// t_obs(1), ..., t_obs(m), called x_obs(1), ..., x_obs(m) (in this example
// these measurements are simulated), and we want to find the parameters
// c and k by minimizing the sum of squared errors between x_obs and y1(t_obs,p).
// 
// (*) This method is not the most efficient but is easy to implement.
 
function dy = DEQ(t,y,p)
   // the right-hand side of our first-order differential equation system
   c = p(1); k = p(2)
   dy = [y(2); -c*y(2)-k*y(1)]
endfunction

function y=uN(p, t, t0, y0)
   // Numerical solution obtained with ode (in this case an exact analytic
   // solution is easily found, but ode works for general equations...)
   // Note: the ode output is an approximation of the solution at the
   //       times given in the vector t=[t(1),...,t($)]
   y = ode(y0,t0,t,list(DEQ,p))
endfunction
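
// quick sanity check of uN (an added spot-check, not part of the original
// example): with c=0 and k=1, y0=[1;0] and t0=0, the exact solution is
// y(1) = cos(t)
tcheck = linspace(0,1,5);
ycheck = uN([0;1], tcheck, 0, [1;0]);
mprintf("\n uN spot-check, max error vs cos(t): %g\n", max(abs(ycheck(1,:)-cos(tcheck))))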

function r = cost_func(p, t_obs, x_obs, t0, y0)
   // the function to be minimized: the sum of the squared
   // errors between the model output and the measurements
   sol = uN(p, t_obs, t0, y0)
   e = sol(1,:) - x_obs
   r = sum(e.*e)
endfunction

// Data
t0 = 0; y0 = [10;0]; // initial time and initial conditions 
T = 30;  // final time for measurements

// Here we simulate experimental data (from which the parameters
// are to be identified)
pe = [0.2;3];  // exact parameters
m = 60;  t_obs = linspace(t0+2,T,m); // observation times
sigma = 2;   // noise level
y_obs = uN(pe, t_obs, t0, y0);
x_obs = y_obs(1,:) + grand(1,m,"nor",0, sigma);

// initial guess parameters
p0 = [0.5 ; 5];  

// for information, the cost function value before optimization:
cost0 = cost_func(p0, t_obs, x_obs, t0, y0);
mprintf("\n cost function before optimization = %g", cost0)

// Solution with optim
[costopt,popt]=optim(list(NDcost,cost_func, t_obs, x_obs, t0, y0),p0); //,'ar',40,40,1e-3);

mprintf("\n\r cost function after optimization = %g",costopt)
mprintf("\n\r values of the parameters found: c = %g, k = %g",popt(1),popt(2))

// a small plot
t = linspace(0,T,400);
y = uN(popt, t, t0, y0);
clf();
plot2d(t',y(1,:)',style=2)
plot2d(t_obs',y_obs(1,:)',style=-2)
legend(["model","measures"]);
xtitle("least square fit to identify ode parameters")
 
  

See Also

optim, external, derivative