nlm
Non-Linear Minimization
Description
This function carries out a minimization of the function f
using a Newton-type algorithm. See the references for details.
Usage
nlm(f, p, ..., hessian = FALSE, typsize = rep(1, length(p)),
    fscale = 1, print.level = 0, ndigit = 12, gradtol = 1e-6,
    stepmax = max(1000 * sqrt(sum((p/typsize)^2)), 1000),
    steptol = 1e-6, iterlim = 100, check.analyticals = TRUE)
Arguments
f | the function to be minimized, returning a single numeric value. This should be a function with first argument a vector of the length of p followed by any other arguments specified by the ... argument. If the function value has an attribute called gradient or both gradient and hessian attributes, these will be used in the calculation of updated parameter values; otherwise numerical derivatives are used (a small illustration follows this table). |
p | starting parameter values for the minimization. |
... | additional arguments to be passed to f. |
hessian | if TRUE, the hessian of f at the minimum is returned. |
typsize | an estimate of the size of each parameter at the minimum. |
fscale | an estimate of the size of f at the minimum. |
print.level | this argument determines the level of printing which is done during the minimization process. The default value of 0 means that no printing occurs, a value of 1 means that initial and final details are printed and a value of 2 means that full tracing information is printed. |
ndigit | the number of significant digits in the function f. |
gradtol | a positive scalar giving the tolerance at which the scaled gradient is considered close enough to zero to terminate the algorithm. The scaled gradient is a measure of the relative change in f in each direction p[i] divided by the relative change in p[i]. |
stepmax | a positive scalar which gives the maximum allowable scaled step length. |
steptol | A positive scalar providing the minimum allowable relative step length. |
iterlim | a positive integer specifying the maximum number of iterations to be performed before the program is terminated. |
check.analyticals | a logical scalar specifying whether the analytic gradients and Hessians, if they are supplied, should be checked against numerical derivatives at the initial parameter values. This can help detect incorrectly formulated gradients or Hessians. |
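As noted for f above, analytic derivatives can be supplied as attributes of the returned function value. A minimal sketch (the helper fgh is illustrative, not part of the package):

fgh <- function(x, a) {
  res <- sum((x - a)^2)
  attr(res, "gradient") <- 2 * (x - a)          # analytic gradient, same length as p
  attr(res, "hessian") <- 2 * diag(length(x))   # analytic hessian, length(p) x length(p) matrix
  res
}
nlm(fgh, c(10, 10), a = c(3, 5))                # uses the supplied derivatives instead of numerical ones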
Details
Note that arguments after ... must be matched exactly.
If a gradient or hessian is supplied but evaluates to the wrong mode or length, it will be ignored if check.analyticals = TRUE
(the default) with a warning. The hessian is not even checked unless the gradient is present and passes the sanity checks.
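For instance (a sketch only; the exact warning text may differ across R versions), a gradient attribute of the wrong length is dropped and numerical derivatives are used instead:

fbad <- function(x) {
  res <- sum((x - 1)^2)
  attr(res, "gradient") <- 1          # wrong length: should be length(x)
  res
}
nlm(fbad, c(10, 10))                  # warns about the analytic gradient, then ignores it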
The C code for the “perturbed” Cholesky, choldc(), has had a bug in all R versions before 3.4.1.
Of the three methods available in the original source, we always use method “1”, which is line search.
The functions supplied should always return finite (including not NA and not NaN) values: for the function value itself, non-finite values are replaced by the maximum positive value with a warning.
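One common way to keep the objective finite (a sketch with an illustrative likelihood, not taken from this help page) is to reparameterize so that invalid parameter values cannot be reached:

negloglik <- function(logsigma, x) {
  sigma <- exp(logsigma)                           # always positive, so the density stays finite
  -sum(dnorm(x, mean = 0, sd = sigma, log = TRUE))
}
set.seed(1)
x <- rnorm(50, sd = 2)
nlm(negloglik, p = 0, x = x)                       # the estimate is on the log(sigma) scale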
Value
A list containing the following components:
minimum | the value of the estimated minimum of f. |
estimate | the point at which the minimum value of f is obtained. |
gradient | the gradient at the estimated minimum of f. |
hessian | the hessian at the estimated minimum of f (if requested). |
code | an integer indicating why the optimization process terminated:
1: relative gradient is close to zero, current iterate is probably solution.
2: successive iterates within tolerance, current iterate is probably solution.
3: last global step failed to locate a point lower than estimate. Either estimate is an approximate local minimum of the function or steptol is too small.
4: iteration limit exceeded.
5: maximum step size stepmax exceeded five consecutive times. Either the function is unbounded below, becomes asymptotic to a finite value from above in some direction or stepmax is too small. |
iterations | the number of iterations performed. |
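In practice the termination code should be inspected before the estimate is trusted; a minimal sketch (the objective here is illustrative only):

res <- nlm(function(x) sum((x - c(1, 2))^2), p = c(10, 10))
if (res$code %in% 1:2) {
  res$estimate            # codes 1 and 2 indicate a probable solution
} else {
  warning("nlm stopped with code ", res$code, "; see the table above")
}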
Source
The current code is by Saikat DebRoy and the R Core team, using a C translation of Fortran code by Richard H. Jones.
References
Dennis, J. E. and Schnabel, R. B. (1983). Numerical Methods for Unconstrained Optimization and Nonlinear Equations. Prentice-Hall, Englewood Cliffs, NJ.
Schnabel, R. B., Koontz, J. E. and Weiss, B. E. (1985). A modular system of algorithms for unconstrained minimization. ACM Transactions on Mathematical Software, 11, 419–440. doi: 10.1145/6187.6192.
See Also
constrOptim
for constrained optimization, optimize
for one-dimensional minimization and uniroot
for root finding. deriv
to calculate analytical derivatives.
For nonlinear regression, nls
may be better.
Examples
f <- function(x) sum((x-1:length(x))^2)
nlm(f, c(10,10))
nlm(f, c(10,10), print.level = 2)
utils::str(nlm(f, c(5), hessian = TRUE))

f <- function(x, a) sum((x-a)^2)
nlm(f, c(10,10), a = c(3,5))

f <- function(x, a) {
    res <- sum((x-a)^2)
    attr(res, "gradient") <- 2*(x-a)
    res
}
nlm(f, c(10,10), a = c(3,5))

## more examples, including the use of derivatives.
## Not run: demo(nlm)