Nirupam Gupta
{first_name}.{last_name}@georgetown.edu

Home Page

Google Scholar profile

Expediting the Gradient-Descent Method using Iterative Pre-Conditioning

The gradient-descent method is perhaps the oldest iterative algorithm we know, thanks to Cauchy, who is said to have proposed it in 1847. However old it may be, the method is still just as relevant as it was in the second half of the 19th century, if not more so. For instance, our modern machine learning and artificially intelligent systems rely heavily on this simple and elegant iterative method.

Now, anyone who has studied the convergence of the gradient-descent method knows that its rate of convergence depends largely on the conditioning of the optimization problem being solved. In short, the method requires a large number of iterations to converge (to a solution) when the problem is ill-conditioned. More importantly, implementing the gradient-descent method on a digital machine, which can only process and store real values with bounded precision, is quite unstable when the optimization problem is ill-conditioned.
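
To see this dependence concretely, recall the standard textbook bound (a classical result, not specific to our work): for a quadratic cost f(x) = (1/2) x^T H x - b^T x with positive-definite Hessian H, gradient descent with the optimal constant step size satisfies

\[
  \lVert x_t - x^\star \rVert \;\le\; \left( \frac{\kappa - 1}{\kappa + 1} \right)^{t} \lVert x_0 - x^\star \rVert,
  \qquad \kappa = \frac{\lambda_{\max}(H)}{\lambda_{\min}(H)},
\]

so reaching an \epsilon-accurate solution takes on the order of \kappa \log(1/\epsilon) iterations, and the larger the condition number \kappa, the slower the convergence.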

Motivated by the reasons above, we proposed and studied a pre-conditioning approach, details of which can be found in our publications listed below. The proposed pre-conditioning approach is iterative and can be executed on distributed server-agent architectures. So far, we have rigorously shown that the proposed iterative pre-conditioning technique mitigates the negative influence of the conditioning of the optimization problem on the rate of convergence of the gradient-descent method. Importantly, in certain cases the resulting pre-conditioned gradient-descent method provably achieves a superlinear rate of convergence, and therefore requires exponentially fewer iterations to converge to the solution than the traditional gradient-descent method.
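
For intuition, here is a minimal single-machine sketch of iterative pre-conditioning for the linear least-squares problem. It illustrates the idea only and is not the exact distributed algorithm from the papers below: the pre-conditioner K is itself refined by a gradient-descent-like update toward the inverse of a regularized Hessian, while x takes pre-conditioned gradient steps. The step sizes alpha and delta, the regularization beta, and the iteration count are illustrative assumptions.

    import numpy as np

    # Sketch of iteratively pre-conditioned gradient descent (IPG) for
    #   minimize_x (1/2) * ||A x - b||^2.
    # Illustrative only: the papers below give the exact (distributed)
    # algorithm and step-size conditions; alpha, beta, delta and the
    # iteration count here are assumptions chosen for this toy problem.

    rng = np.random.default_rng(seed=0)
    A = rng.standard_normal((100, 20))
    b = rng.standard_normal(100)
    n = A.shape[1]

    H = A.T @ A                          # Hessian of the least-squares cost
    beta = 1e-3                          # small regularization of the pre-conditioner
    M = H + beta * np.eye(n)
    alpha = 1.0 / np.linalg.norm(M, 2)   # makes the K-update below a contraction
    delta = 1.0                          # step size for the x-update

    K = np.zeros((n, n))                 # pre-conditioner; converges to inv(M)
    x = np.zeros(n)

    for _ in range(500):
        # Refine the pre-conditioner: one gradient-descent-like step
        # driving M @ K toward the identity matrix.
        K = K - alpha * (M @ K - np.eye(n))
        # Pre-conditioned gradient step on the least-squares cost.
        x = x - delta * K @ (A.T @ (A @ x - b))

    print("residual norm:", np.linalg.norm(A @ x - b))

Once K approaches the inverse of the regularized Hessian, the x-update behaves like a (regularized) Newton step, which is what removes the dependence of the convergence rate on the problem's conditioning.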

Besides the improved rate of convergence, the iteratively pre-conditioned gradient-descent method is more robust to system noise than existing state-of-the-art optimization algorithms. Our pre-conditioning technique and the resulting improvements are presented in detail in the articles listed below.

  1. Iterative Pre-Conditioning for Expediting the Gradient-Descent Method: The Distributed Linear Least-Squares Problem, co-authored with Kushal Chakraborty and Nikhil Chopra.
  2. Iterative Pre-Conditioning to Expedite the Gradient-Descent Method (pdf), co-authored with Kushal Chakraborty and Nikhil Chopra. American Control Conference (2020, IEEE).
  3. On Distributed Solution of Ill-Conditioned System of Linear Equations under Communication Delays (pdf), co-authored with Kushal Chakraborty and Nikhil Chopra. Indian Control Conference (Dec. 2019, IEEE).