This code allows reproducing the results from the following paper (available on arXiv):
[1] Hadi Abbaszadehpeivasti, Etienne de Klerk, Adrien Taylor (2025). "On the convergence rate of the boosted Difference-of-Convex Algorithm (DCA)."
Requirements:
- Files in the PEPit folder rely on Python and require the installation of PEPit.
- Files in the PESTO folder rely on Matlab and require the installation of PESTO.
(1) Execution in Python (requires installation of PEPit)
Boosted_DCA.ipynb
: full Jupyter notebook to re-obtain the numerical study (and comparison with the rates provided in [1]) of the boosted DCA. The scripts also allow direct study of the algorithm under Lojasiewicz-type properties.
Gradient_descent.ipynb
: full Jupyter notebook to re-obtain the numerical study (and comparison with the rates provided in [1]) of gradient descent applied to a smooth function satisfying a Lojasiewicz-type property (a minimal PEPit sketch of this kind of analysis follows below).
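As an indication of how these notebooks use PEPit, here is a minimal worst-case analysis of gradient descent on an L-smooth convex function. This is a sketch in the spirit of the PEPit quickstart, not code from the repository; in particular, the Lojasiewicz-type conditions studied in [1] are not imposed here.

    from PEPit import PEP
    from PEPit.functions import SmoothConvexFunction

    L, n = 1., 5                      # smoothness constant and number of steps
    gamma = 1. / L                    # standard step size 1/L

    problem = PEP()

    # Declare an L-smooth convex function together with a minimizer.
    f = problem.declare_function(SmoothConvexFunction, L=L)
    xs = f.stationary_point()

    # Initial point at distance at most 1 from the minimizer.
    x0 = problem.set_initial_point()
    problem.set_initial_condition((x0 - xs) ** 2 <= 1)

    # Run n steps of gradient descent.
    x = x0
    for _ in range(n):
        x = x - gamma * f.gradient(x)

    # Worst-case value of f(x_n) - f(x*).
    problem.set_performance_metric(f(x) - f(xs))
    tau = problem.solve(verbose=0)
    print(tau)  # known tight bound for this setting: L / (4 * n + 2)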
(2) Execution in Matlab (requires installation of PESTO)
Boosted_DCA_script.m
: main script (calling the other functions) that allows the numerical study (and comparison with the rates provided in [1]) and reproduces Figure 1, Figure 2, and Figure 3 of [1].
Boosted_DCA.m
: PESTO code to perform the worst-case analysis of the boosted DCA (the iteration in question is sketched after this list).
Shifted_minimization.m
: helper function that allows a simpler implementation of the boosted DCA within the PESTO framework.
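For orientation, the iteration analyzed by these scripts is a standard DCA step followed by a line search along the resulting direction. The following Python sketch of one boosted-DCA iteration is only illustrative, not a transcription of the PESTO code: dca_subproblem, alpha, beta, and lam0 are hypothetical names, and the exact line-search rule of [1] may differ.

    def boosted_dca_step(x, f, grad_f2, dca_subproblem,
                         alpha=0.1, beta=0.5, lam0=1.0):
        """One boosted-DCA iteration for f = f1 - f2 with f1, f2 convex.

        Illustrative assumptions: x is a NumPy array, f evaluates the
        objective, grad_f2 evaluates the gradient of f2, and
        dca_subproblem(g) returns argmin_x f1(x) - <g, x>, i.e., the
        standard DCA step for g = grad_f2(x).
        """
        y = dca_subproblem(grad_f2(x))   # plain DCA step
        d = y - x                        # boosting direction
        d2 = float((d * d).sum())        # squared norm of the direction
        if d2 == 0.0:                    # no progress: x is a critical point
            return y
        lam = lam0                       # backtracking line search
        while f(y + lam * d) > f(y) - alpha * lam ** 2 * d2:
            lam *= beta                  # shrink the trial step
            if lam < 1e-12:              # safeguard: keep the DCA point
                return y
        return y + lam * d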
(3) One-iteration analysis
Boosted_DCA_script.m
: Mathematica notebook for verifying the reformulations (computer-algebra version).
Boosted_DCA_script.m
: Jupyter notebook for verifying the reformulations (numerical version, with PEPit); a minimal sketch of the verification pattern follows below.
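Both verification notebooks follow the same pattern: state the two sides of a claimed reformulation and check that their difference vanishes, either symbolically or numerically. Below is a minimal symbolic sketch of that pattern, using Python with SymPy rather than Mathematica; the placeholder identity stands in for the actual reformulations of [1].

    import sympy as sp

    # Generic reformulation check: verify that lhs - rhs simplifies to zero.
    # The identity below is a placeholder, not a reformulation from [1].
    a, b = sp.symbols('a b', real=True)
    lhs = (a + b) ** 2
    rhs = a ** 2 + 2 * a * b + b ** 2
    assert sp.simplify(lhs - rhs) == 0
    print("identity verified")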