DistGD

[Figure: three-node network example]

The goal of DistGD (Distributed Gradient Descent) is to efficiently optimize a global objective function expressed as the sum of local objective functions belonging to agents situated in a network, using a cluster architecture such as Spark. You supply the list of local objective functions, the weights of the connections between the agents, and a vector of initial values; DistGD takes care of the computations and returns the optimal values. See the GitHub page for a vignette.
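To make the idea concrete, here is a minimal plain-R sketch of the distributed gradient descent update that underlies this setup. This is not the DistGD package API (its actual function names may differ); it only illustrates the standard consensus-plus-gradient step, where each agent mixes its iterate with its neighbors' through the weight matrix W and then takes a local gradient step. The three-agent objectives, weights, and step size below are all illustrative assumptions.

```r
# Distributed gradient descent update (illustrative, not the DistGD API):
#   x_i <- sum_j W[i, j] * x_j - alpha * grad_f_i(x_i)

# Three agents, each with a local quadratic f_i(x) = (x - b_i)^2;
# the global objective sum_i f_i is minimized at mean(b).
b <- c(1, 4, 7)
grads <- lapply(b, function(bi) function(x) 2 * (x - bi))

# Doubly stochastic connection weights for a three-node network
W <- matrix(c(1/2, 1/4, 1/4,
              1/4, 1/2, 1/4,
              1/4, 1/4, 1/2), nrow = 3, byrow = TRUE)

x <- c(0, 0, 0)   # vector of initial values, one per agent
alpha <- 0.05     # step size

for (k in 1:500) {
  mixed <- as.vector(W %*% x)   # consensus step: average with neighbors
  x <- mixed - alpha * mapply(function(g, xi) g(xi), grads, x)
}

x         # each agent's iterate, close to the global optimum
mean(b)   # true minimizer of sum_i (x - b_i)^2, here 4
```

With a constant step size the agents converge to a neighborhood of the global minimizer; in the package, the per-agent updates are what get farmed out to the cluster.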

Benjamin Osafo Agyare, PhD Student in Statistics