Enhancing the efficiency of stochastic methods for non-smooth and non-convex optimization

Authors

Anjani Kumar Singha, Research Scholar, and Swaleha Zubair, Assistant Professor
Aligarh Muslim University, Uttar Pradesh, India.

Abstract

In this paper, we analyse advanced stochastic methods for optimizing non-convex, non-smooth finite-sum problems. Despite the widespread use and importance of non-convex models, our understanding of their non-smooth, non-convex counterparts remains very limited. In the setting considered here, the smooth part of the objective is non-convex while the non-smooth part is convex. Surprisingly, it is not known whether the proximal stochastic gradient method provably converges to a stationary point with constant mini-batches. This paper develops fast stochastic algorithms that provably converge to a stationary point with constant mini-batches, thereby narrowing a fundamental gap in our understanding of non-smooth, non-convex problems. We establish non-asymptotic convergence rates for these stochastic methods on non-convex, non-smooth problems with mini-batches, and extend the analysis to mini-batch variants, showing a theoretical linear speed-up from mini-batching in parallel settings. Variants of these algorithms achieve faster convergence rates than batch proximal gradient descent. This paper also experimentally highlights an interesting subclass of non-smooth, non-convex functions for which global linear convergence rates extend. Finally, the exposition of this approach concentrates on topics related to the experimental ideas, although in certain aspects it is also pertinent to analogous issues in combinatorial optimization.
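To make the problem class concrete, the following is a minimal Python sketch of mini-batch proximal stochastic gradient descent on a finite-sum objective whose smooth components f_i are non-convex and whose non-smooth term is a convex L1 penalty. The toy data, step size, batch size, and the soft-thresholding proximal operator are illustrative assumptions, not the algorithms or experiments of this paper.

```python
import numpy as np

def soft_threshold(x, tau):
    # Proximal operator of tau * ||x||_1, the convex non-smooth part.
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def prox_sgd(grad_batch, prox, x0, n, step=0.05, batch_size=16, epochs=30, seed=0):
    # Mini-batch proximal stochastic gradient descent:
    #   x_{t+1} = prox_{step*h}( x_t - step * average gradient over a mini-batch ).
    rng = np.random.default_rng(seed)
    x = x0.copy()
    for _ in range(epochs):
        for _ in range(n // batch_size):
            idx = rng.choice(n, size=batch_size, replace=False)
            x = prox(x - step * grad_batch(x, idx), step)
    return x

# Toy instance (illustrative): smooth non-convex components
# f_i(x) = 1 - exp(-(a_i @ x - b_i)^2 / 2) plus an L1 penalty lam * ||x||_1.
rng = np.random.default_rng(1)
n, d, lam = 512, 20, 0.01
A, b = rng.standard_normal((n, d)), rng.standard_normal(n)

def grad_batch(x, idx):
    # Average gradient of the selected f_i: a_i * r_i * exp(-r_i^2 / 2).
    r = A[idx] @ x - b[idx]
    return A[idx].T @ (r * np.exp(-r ** 2 / 2.0)) / len(idx)

x_hat = prox_sgd(grad_batch, lambda z, eta: soft_threshold(z, eta * lam),
                 np.zeros(d), n)
print("nonzeros in solution:", int(np.count_nonzero(x_hat)))
```

This sketch only fixes the problem class and the basic proximal update; the faster stochastic algorithms referred to above would replace the plain mini-batch gradient with a more refined stochastic estimate.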