Some of my recent research work -
Byzantine Fault-Tolerance in Decentralized Optimization under Minimal Redundancy (arXiv, Sept'20),
co-authored with Thinh T. Doan and Nitin Vaidya.
In this paper, we extend our prior work on Byzantine fault-tolerant distributed optimization from a server-based system architecture to a peer-to-peer system architecture. In short, the paper presents the first provably correct decentralized optimization algorithm for solving a general class of distributed optimization problems in the presence of Byzantine agents in a peer-to-peer multi-agent system.
Byzantine Fault-Tolerant Distributed Machine Learning Using Stochastic Gradient Descent (SGD) and Norm-Based Comparative Gradient Elimination (CGE) (arXiv, Aug'20), co-authored with Shuo Liu and Nitin Vaidya.
A distributed machine learning problem is a special case of the more general problem of distributed optimization. Thus, a distributed optimization algorithm that can tolerate some Byzantine agents in the system should also be applicable to distributed learning. In practice, however, an optimization algorithm cannot be applied directly to a learning problem without stochastic approximation. This practical limitation makes it difficult to extend the fault-tolerance property of a distributed optimization algorithm to distributed learning. In this paper, we present formal guarantees and empirical results for the correctness of our existing Byzantine fault-tolerant distributed optimization algorithm when applied to distributed learning.
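To give a flavor of the gradient filter named in the title, here is a minimal sketch of norm-based Comparative Gradient Elimination in a server-based setting with n agents, of which at most f may be Byzantine; the exact aggregation rule, step sizes, and guarantees are in the paper, so treat this only as an illustration.

```python
import numpy as np

def cge_aggregate(gradients, f):
    """CGE sketch: drop the f gradients with the largest Euclidean norms
    and average the remaining n - f. (Illustrative; see the paper for the
    precise rule and its fault-tolerance guarantees.)"""
    norms = [np.linalg.norm(g) for g in gradients]
    keep = np.argsort(norms)[: len(gradients) - f]  # indices of the n - f smallest norms
    return np.mean([gradients[i] for i in keep], axis=0)

def cge_sgd_step(x, gradients, f, lr=0.01):
    """One SGD step taken along the CGE-filtered gradient."""
    return x - lr * cge_aggregate(gradients, f)
```

The intuition is that gradients with abnormally large norms can move the iterate arbitrarily far, so discarding the f largest-norm gradients bounds the damage any f Byzantine agents can do in a single step.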
Iterative Pre-Conditioning for Expediting the Gradient-Descent Method: The Distributed Linear Least-Squares Problem (arXiv, Aug'20),
co-authored with Kushal Chakraborty and Nikhil Chopra.
We present the idea of iterative pre-conditioning for improving the convergence speed of the traditional gradient-descent method when applied to a distributed setting. Through rigorous analysis and experiments, we show that our iteratively pre-conditioned gradient-descent method converges faster than state-of-the-art acceleration techniques, such as Nesterov's method and the heavy-ball method, as well as quasi-Newton methods such as BFGS.
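As a rough, centralized sketch of the idea (the paper's distributed update, parameter choices, and convergence analysis differ), the pre-conditioner itself can be refined every iteration toward the inverse of a regularized Hessian, so that the pre-conditioned gradient step increasingly resembles a Newton step:

```python
import numpy as np

def ipg_least_squares(A, b, beta=1e-3, iters=500):
    """Sketch of iteratively pre-conditioned gradient descent for the
    least-squares problem min_x ||Ax - b||^2.  The pre-conditioner K is
    refined every iteration toward the inverse of the regularized Hessian
    H = A^T A + beta*I, and the gradient step is taken along K @ grad.
    (Illustrative only; parameter names and tuning are assumptions.)"""
    n = A.shape[1]
    H = A.T @ A + beta * np.eye(n)
    delta = 1.0 / np.linalg.norm(H, 2)       # small enough that K -> H^{-1}
    x, K = np.zeros(n), np.zeros((n, n))
    for _ in range(iters):
        K += delta * (np.eye(n) - H @ K)     # refine the pre-conditioner
        grad = A.T @ (A @ x - b)             # least-squares gradient
        x -= K @ grad                        # pre-conditioned gradient step
    return x
```

Once K is close to H^{-1}, each step behaves like a regularized Newton step, which is where the speed-up over plain gradient descent comes from, while K itself is obtained without ever inverting a matrix explicitly.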
Preserving Statistical Privacy in Distributed Optimization, co-authored with Shripad Gade, Nikhil Chopra, and Nitin H. Vaidya. The IEEE Control Systems Letters (L-CSS 2021).
We present a distributed optimization protocol that preserves the statistical privacy of agents' local cost functions against a passive adversary that corrupts some agents in a peer-to-peer network. Unlike the more widely used differential-privacy protocols, it does so without affecting the correctness of the computed solution. The protocol has a wide range of applications, including privacy in distributed state estimation, resource allocation, and machine learning.
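For intuition, one common way to obtain statistical (rather than differential) privacy in such settings is through correlated, zero-sum perturbations agreed upon by neighboring agents; the following toy sketch illustrates only that general idea and is not the protocol analyzed in the paper.

```python
import numpy as np

def zero_sum_masks(edges, n, dim, rng=None):
    """Toy illustration of correlated perturbations: for every edge (i, j)
    draw a random vector r; agent i adds +r and agent j adds -r to its
    local quantity, so the masks cancel in the network-wide sum.
    (Not the paper's exact protocol.)"""
    rng = np.random.default_rng() if rng is None else rng
    masks = [np.zeros(dim) for _ in range(n)]
    for i, j in edges:
        r = rng.standard_normal(dim)
        masks[i] += r
        masks[j] -= r
    return masks

# Example: a 4-agent ring; the masks sum to (numerically) zero.
masks = zero_sum_masks([(0, 1), (1, 2), (2, 3), (3, 0)], n=4, dim=3)
print(np.sum(masks, axis=0))  # ~ [0. 0. 0.]
```

Because the masks cancel when summed over all agents, a consensus-based optimization run on the masked quantities can still recover the true aggregate minimizer, while an adversary that observes only a subset of agents cannot, under suitable connectivity conditions, reconstruct the unmasked local costs of the remaining agents.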
Fault-tolerance in Distributed Optimization: The Case of Redundancy (pdf), co-authored with Nitin Vaidya. The 39th ACM Symposium on Principles of Distributed Computing (PODC'20) [presentation video].
In this paper, we first present a key impossibility result for Byzantine fault-tolerance in distributed optimization, and then present the first provably correct Byzantine fault-tolerant distributed optimization algorithm. Details omitted from this paper can be found in our two technical reports: (1) Resilience in Collaborative Optimization: Redundant and Independent Cost Functions (arXiv, March'20), and (2) Byzantine Fault Tolerant Distributed Linear Regression (arXiv, March'19).
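To state the goal concretely, in notation standard for this line of work (not copied verbatim from the paper): with n agents, of which at most f may be Byzantine, and \(\mathcal{H}\) denoting the set of non-faulty agents with local costs \(f_i\), a fault-tolerant algorithm must output

```latex
x^* \in \arg\min_{x \in \mathbb{R}^d} \sum_{i \in \mathcal{H}} f_i(x),
\qquad |\mathcal{H}| \geq n - f,
```

despite arbitrary behavior by the remaining agents. Roughly, the impossibility result says that this cannot be achieved exactly unless the non-faulty agents' cost functions carry sufficient redundancy, and the proposed algorithm exploits exactly such a redundancy property.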
Some highlights of my education -
I am humbly grateful for having received some decent education in this life. Besides literacy in three languages, namely Hindi (native), English, and Gujarati, I have also managed to obtain some academic degrees. 🙂