निरुपम गुप्ता / Nirupam Gupta
Hello! Thanks for dropping by.
I am a Postdoctoral Scientist in the Distributed Computing Laboratory (DCL) at EPFL, sponsored by Rachid Guerraoui. Here's my CV.
Research Work: I create (and sometimes solve) problems in distributed optimization, machine learning, and control systems, with an emphasis on fault-tolerance, robustness, and privacy preservation. A list of my research projects:
Teaching Experience: At Georgetown University, I taught a seminar course on Algorithms for Distributed Machine Learning. The course introduced contemporary algorithms and challenges in distributed machine learning, and was offered to Computer Science PhD students in the spring semester of 2020. Details on the course can be found here.
Some of my recent research work -
Approximate Byzantine Fault-Tolerance in Distributed Optimization (arXiv, Jan'21),
with Shuo Liu and Nitin Vaidya.
An important extension to our prior work on exact Byzantine fault-tolerance, which appeared in the proceedings of PODC'20. In this paper, as the name suggests, we study the problem of approximate Byzantine fault-tolerance, a generalization of exact fault-tolerance. The results presented here have much wider applicability than the results on exact fault-tolerance. We present generic fault-tolerance properties that apply directly to contemporary real-world distributed optimization problems, such as federated learning, multi-sensor networks, and network resource allocation.
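For a rough sense of the distinction (the notation below is illustrative shorthand, not necessarily the paper's): with H denoting the set of non-faulty agents and Q_i the cost function of agent i, exact fault-tolerance asks the algorithm's output to minimize the honest aggregate exactly, whereas approximate fault-tolerance only asks it to land within a distance ε of such a minimum point.

```latex
% Illustrative notation (mine, not necessarily the paper's): H = non-faulty agents,
% Q_i = cost function of agent i, \hat{x} = the algorithm's output.
% Exact fault-tolerance:
\hat{x} \in \arg\min_{x} \sum_{i \in H} Q_i(x)
% (f, \epsilon)-approximate fault-tolerance, despite up to f Byzantine agents:
\mathrm{dist}\Big(\hat{x}, \; \arg\min_{x} \sum_{i \in H} Q_i(x)\Big) \le \epsilon
```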
Byzantine Fault-Tolerance in Decentralized Optimization under 2f-Redundancy (arXiv, Sept'20),
with Thinh T. Doan and Nitin Vaidya.
In this work, we extend our prior results on Byzantine fault-tolerant distributed optimization for the server-based system architecture to the decentralized peer-to-peer system architecture. The paper presents the first decentralized optimization algorithm with provable exact Byzantine fault-tolerance for high-dimensional optimization problems, and it has been accepted to the proceedings of the 2021 American Control Conference.
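Roughly, the 2f-redundancy condition (paraphrased here with illustrative notation, not necessarily the paper's exact statement) says that any sufficiently large subset of the agents' cost functions already pins down the same minimum point as the full aggregate:

```latex
% 2f-redundancy (illustrative paraphrase): for every subset S of the n agents
% with |S| >= n - 2f,
\arg\min_{x} \sum_{i \in S} Q_i(x) \;=\; \arg\min_{x} \sum_{i=1}^{n} Q_i(x).
```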
Byzantine Fault-Tolerant Distributed Machine Learning Using Stochastic Gradient Descent (SGD) and Norm-Based Comparative Gradient Elimination (CGE) (arXiv, Aug'20), with Shuo Liu and Nitin Vaidya.
In this work, we show the applicability of a norm-based gradient elimination technique (proposed by us) to Byzantine fault-tolerance in the distributed stochastic gradient-descent method for distributed machine learning.
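A minimal sketch of the norm-based elimination idea in a server-based setting, assuming n workers of which at most f may be Byzantine; the function and variable names below are chosen for illustration only, not taken from the paper:

```python
import numpy as np

def cge_aggregate(gradients, f):
    """Comparative Gradient Elimination (sketch): sort the received gradients by
    Euclidean norm, discard the f largest-norm ones, and average the rest."""
    order = np.argsort([np.linalg.norm(g) for g in gradients])  # ascending by norm
    kept = [gradients[i] for i in order[: len(gradients) - f]]
    return np.mean(kept, axis=0)

# Hypothetical usage in a toy, single-process stand-in for a server-based SGD loop:
rng = np.random.default_rng(0)
x = np.zeros(3)  # model parameters held by the server
for _ in range(100):
    honest = [2 * (x - np.array([1.0, 2.0, 3.0])) + 0.1 * rng.normal(size=3)
              for _ in range(8)]                                # honest stochastic gradients
    byzantine = [100.0 * rng.normal(size=3) for _ in range(2)]  # arbitrary faulty gradients
    x = x - 0.1 * cge_aggregate(honest + byzantine, f=2)
```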
Iterative Pre-Conditioning for Expediting the Gradient-Descent Method: The Distributed Linear Least-Squares Problem (arXiv, Aug'20),
with Kushal Chakrabarti and Nikhil Chopra.
In this work, we propose the first distributed iterative optimization method with a superlinear rate of convergence. Specifically, we show that the traditional gradient-descent method, when coupled with an iteratively updated pre-conditioner matrix, can achieve a superlinear convergence rate, superior to state-of-the-art accelerated methods, namely Nesterov's accelerated method, the heavy-ball method, and the quasi-Newton method BFGS. (Variants of this work, showing improved robustness to system noise, have been published in the proceedings of the 2020 American Control Conference and in the IEEE Control Systems Letters, 2021.)
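As a rough single-machine illustration of the pre-conditioning idea for the linear least-squares problem: alongside the usual iterate, a pre-conditioner matrix is itself refined at every iteration, and the gradient step is multiplied by it. The refinement rule below is an assumed Richardson-type iteration toward the inverse Hessian, not necessarily the exact update from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.normal(size=(50, 5))
b = rng.normal(size=50)
H = A.T @ A                                # Hessian of 0.5 * ||A x - b||^2

x = np.zeros(5)                            # parameter estimate
K = np.eye(5)                              # pre-conditioner, refined each iteration
alpha = beta = 1.0 / np.linalg.norm(H, 2)  # illustrative step sizes

for _ in range(500):
    grad = A.T @ (A @ x - b)               # gradient of the least-squares cost
    x = x - alpha * (K @ grad)             # pre-conditioned gradient step
    K = K - beta * (H @ K - np.eye(5))     # assumed refinement of K toward H^{-1}
```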
Preserving Statistical Privacy in Distributed Optimization, with Shripad Gade, Nikhil Chopra, and Nitin H. Vaidya.
We present a distributed optimization protocol that preserves the statistical privacy of agents' local cost functions against a passive adversary corrupting some of the agents in a peer-to-peer network. Unlike the more widely used differential-privacy protocols, it does so without affecting the accuracy of the solution. The work has been published in the IEEE Control Systems Letters, 2021.
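One construction consistent with this kind of guarantee (stated here as an assumption about the general function-sharing approach, not as the paper's exact protocol): neighboring agents exchange random functions that cancel in the aggregate, so each agent's effective cost is masked while the sum, and hence the solution, is unchanged.

```latex
% Illustrative function-sharing construction (an assumption, not necessarily the
% paper's protocol): agent i sends a random function R_{ij} to each neighbor j
% and thereafter works with the obfuscated cost
\tilde{Q}_i(x) \;=\; Q_i(x) \;+\; \sum_{j} R_{ji}(x) \;-\; \sum_{j} R_{ij}(x),
% so that individual costs are masked while the aggregate is preserved:
\sum_{i} \tilde{Q}_i(x) \;=\; \sum_{i} Q_i(x).
```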
Some highlights of my education -
Besides literacy in three languages, namely Hindi (native), English, and Gujarati, I have also managed to obtain a couple of academic degrees: