Welcome to dsy_numberical_optimization

Introduction

This is a numerical optimization library developed by a mathematics student. If you are interested in this project, feel free to send suggestions to my email: madsy@mail.scut.edu.cn.

Usage

To use this project, you define your own target function and optimizer by creating objects of the classes "target_function" and "Optimizer".

Let's introduce the important classes first:

class target_function:
--dimension: type:list
if your function maps R_n to R, input [1,n].
--function: type:function
your target function; all computation must be based on numpy.mat.
--grad_operator: type:function
your target function's gradient operator; all computation must be based on numpy.mat.
--Hessian: type:function
(optional, default=(lambda x: 0))
your target function's Hessian operator; all computation must be based on numpy.mat.
class Optimizer:
--target_function: type:target_function
your target_function instance.
--Interpolate: type:str
(default="bisection", options: "bisection", "quadratic")
--error_end: type:float
(default=1e-6)
stopping condition: the iteration ends when |f(x_k)-f(x_{k-1})| < error_end.
--alpha_logs: type:bool
whether to print the step length at every step.
--steps_logs: type:bool
whether to print the target function's value at every step.
--require_time: type:bool
whether to print the time spent on the whole optimization process.
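
The constructor options above could be combined like the following. This is only a minimal sketch: it assumes the keyword-argument names match the parameter names listed above and has not been checked against the source.

from dsy_numberical_optimization import Optimizer

# hedged sketch: keyword names are assumed to match the documented parameters
opt=Optimizer(tf,                      # a target_function instance (defined further below)
              Interpolate="quadratic", # line-search interpolation: "bisection" or "quadratic"
              error_end=1e-8,          # stop when |f(x_k)-f(x_{k-1})| < 1e-8
              alpha_logs=True,         # print the step length at every iteration
              steps_logs=True,         # print the target function value at every iteration
              require_time=True)       # report the total optimization time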

The method you will use most often is Optimizer.GD_optmize().

Optimizer.GD_optmize():
--start: type:numpy.mat
your starting point.
--Method: type:str
(default="steepest_descent",
options: "steepest_descent", "linear_conjugate_gradient", "FR_conjugate_gradient", "PR_conjugate_gradient", "HR_conjugate_gradient")
your optimization method.
--A: type:numpy.mat
(optional, default=np.mat([]))
**Note:** A is needed only with Method "linear_conjugate_gradient"; it is the matrix of a quadratic objective of the form 0.5*x^T*A*x - b^T*x.

First, you need to instantiate the target function using a function and its gradient operator. For example:

import numpy as np
from dsy_numberical_optimization import target_funtion

# quadratic objective f(x) = 0.5*x^T*A*x - b^T*x
A=np.mat([[2,0,0],
          [0,3,0],
          [0,0,5]])
b=np.mat([[4],
          [1],
          [5]])

def f(x):
    y=0.5*x.T*A*x-b.T*x
    return y
def grad_f(x):
    # gradient of the quadratic: A*x - b
    y=A*x-b
    return y
tf=target_funtion([1,3],f,grad_f)
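
For this quadratic the Hessian is simply the constant matrix A. If you want to supply it, here is a minimal sketch; it assumes the Hessian is passed as the fourth positional argument, following the order of the parameter list above, which has not been checked against the source.

def hess_f(x):
    # the Hessian of 0.5*x^T*A*x - b^T*x is the constant matrix A
    return A
tf=target_funtion([1,3],f,grad_f,hess_f)  # assumed argument order: dimension, function, grad_operator, Hessian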

Then, you should instantiate your Optimizer with your target function.

from dsy_numberical_optimization import Optimizer
opt=Optimizer(tf)

Now we can run the optimization (default Method: "steepest_descent"):

x_min=opt.GD_optmize(start=tf.start_point())
print("optmal x:\n{x}".format(x=x_min))

result:

Method: steepest_descent
whole optmization take 0.010001420974731445s
optmal x:
[[1.99999999]
 [0.33333347]
 [0.99999983]]
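
Since f is the quadratic 0.5*x^T*A*x - b^T*x, its minimizer solves A*x = b, so the result can be checked independently of this library with plain NumPy:

import numpy as np

A=np.array([[2,0,0],
            [0,3,0],
            [0,0,5]],dtype=float)
b=np.array([4.0,1.0,5.0])
# the minimizer of 0.5*x^T*A*x - b^T*x satisfies A*x = b
print(np.linalg.solve(A,b))   # [2.         0.33333333 1.        ], matching the output above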

Example for Method "linear_conjugate_gradient"

x_min=opt.GD_optmize(start=tf.start_point(),A=A,Method="linear_conjugate_gradient")
print("optmal x:\n{x}".format(x=x_min))

result:

Method: linear_conjugate_gradient
waring: this method can only optmize funtion form like (0.5x.T*A*x-b.T*x)
whole optmization take 0.0010037422180175781s
optmal x:
[[2.        ]
 [0.33333333]
 [1.        ]]

Example for Method "FR_conjugate_gradient"

x_min=opt.GD_optmize(start=tf.start_point(),A=A,Method="FR_conjugate_gradient")
print("optmal x:\n{x}".format(x=x_min))

result:

Method: FR_conjugate_gradient
whole optmization take 0.039859771728515625s
optmal x:
[[2.00000041]
 [0.33333318]
 [0.99999995]]

Example for Method "PR_conjugate_gradient"

x_min=opt.GD_optmize(start=tf.start_point(),A=A,Method="PR_conjugate_gradient")
print("optmal x:\n{x}".format(x=x_min))

result:

Method: PR_conjugate_gradient
whole optmization take 0.02293872833251953s
optmal x:
[[1.99999999]
 [0.33333335]
 [1.00000008]]

Example for Method "HR_conjugate_gradient"

x_min=opt.GD_optmize(start=tf.start_point(),A=A,Method="HR_conjugate_gradient")
print("optmal x:\n{x}".format(x=x_min))

result:

Method: HR_conjugate_gradient
whole optmization take 0.024933576583862305s
optmal x:
[[2.        ]
 [0.33333352]
 [0.99999993]]
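
The nonlinear conjugate-gradient variants do not need the matrix A, so they also apply to non-quadratic objectives. Below is a hedged sketch on the simple convex function f(x) = (x_1-1)^4 + (x_2+2)^2, whose minimizer is (1, -2); the argument names follow the descriptions above and are assumptions, not verified behaviour of the library.

import numpy as np
from dsy_numberical_optimization import target_funtion, Optimizer

def g(x):
    # x is a 2x1 numpy.mat; return the value as a 1x1 mat, like the quadratic example above
    return np.mat([[(x[0,0]-1)**4+(x[1,0]+2)**2]])
def grad_g(x):
    # gradient: [4*(x_1-1)^3, 2*(x_2+2)]^T
    return np.mat([[4*(x[0,0]-1)**3],
                   [2*(x[1,0]+2)]])

tf2=target_funtion([1,2],g,grad_g)
opt2=Optimizer(tf2)
x_min=opt2.GD_optmize(start=np.mat([[0.0],[0.0]]),Method="PR_conjugate_gradient")
print(x_min)   # should approach [[1.], [-2.]]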
