Optimization

A point where the gradient of the function is zero (a stationary point) could be a local minimum, a local maximum, or a saddle point:

When the eigenvalues of the function’s Hessian matrix at the zero-gradient position are all positive, we have a local minimum for the function.

When the eigenvalues of the function’s Hessian matrix at the zero-gradient position are all negative, we have a local maximum for the function.

When the eigenvalues of the function’s Hessian matrix at the zero-gradient position include both negative and positive values, we have a saddle point for the function. (When some eigenvalues are zero, this test is inconclusive.)
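The three cases above can be sketched in a few lines of NumPy. This is a minimal illustration; the function name and the example Hessians are my own, chosen so that each case is easy to verify by hand.

```python
import numpy as np

def classify_critical_point(hessian):
    """Classify a zero-gradient point by the eigenvalues of its Hessian."""
    eigvals = np.linalg.eigvalsh(hessian)  # Hessian is symmetric, so eigvalsh applies
    if np.all(eigvals > 0):
        return "local minimum"
    if np.all(eigvals < 0):
        return "local maximum"
    if np.any(eigvals > 0) and np.any(eigvals < 0):
        return "saddle point"
    return "inconclusive"  # some eigenvalue is zero

# f(x, y) = x^2 + y^2: Hessian diag(2, 2), local minimum at the origin
print(classify_critical_point(np.array([[2.0, 0.0], [0.0, 2.0]])))   # local minimum
# f(x, y) = x^2 - y^2: Hessian diag(2, -2), saddle point at the origin
print(classify_critical_point(np.array([[2.0, 0.0], [0.0, -2.0]])))  # saddle point
```

For a function of one variable the Hessian is a 1×1 matrix, and the test reduces to the familiar second-derivative test.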

References

https://d2l.ai/chapter_optimization/optimization-intro.html

Author

s-serenity

Posted on

2024-10-22

Updated on

2024-10-23

