
A Proximal Method of Stochastic Gradient for Convex Optimization

Article title: A Proximal Method of Stochastic Gradient for Convex Optimization
National article ID (COI): JR_COAM-8-1_002
Published in: 1402 (Iranian calendar)
Authors:

Zeinab Saeidian - Department of Mathematics, University of Kashan, Kashan, Iran.
Maryam Mahmoudoghli - K.N. Toosi University of Technology, Tehran, Iran.

Abstract:
The Proximal Stochastic Average Gradient (Prox-SAG+) method addresses optimization problems whose objective is the sum of two convex functions. Problems of this kind commonly arise in machine learning, where the component functions are built from a large dataset of training examples. A proximal operation is applied to obtain the optimal value because of its favorable properties. The Prox-SAG+ algorithm is faster than several existing methods and simpler than earlier algorithms, and the use of this operator helps ensure that the obtained result is optimal. It is also proven that the proposed method has an approximately geometric rate of convergence. Implementing the proposed operator makes the method more practical than other algorithms in the literature, and numerical experiments confirm the efficiency of the proposed scheme.
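For context, the composite problem described in the abstract has the form min_x (1/n) * sum_i f_i(x) + h(x), where each smooth convex component f_i comes from one training example and the nonsmooth convex term h is handled through its proximal operator. The Python sketch below illustrates a generic proximal stochastic average gradient iteration on an l1-regularized least-squares instance. It is only an illustration of the general Prox-SAG idea under assumptions made here, not the paper's Prox-SAG+ algorithm; the function names, step-size rule, and soft-thresholding prox are choices for this example.

import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t*||.||_1 (soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def prox_sag_l1(A, b, lam, step, n_iter=30000, seed=0):
    """Illustrative sketch: minimize (1/n) * sum_i 0.5*(a_i^T x - b_i)^2 + lam*||x||_1
    with a proximal stochastic average gradient iteration."""
    rng = np.random.default_rng(seed)
    n, d = A.shape
    x = np.zeros(d)
    grad_table = np.zeros((n, d))   # last gradient computed for each component f_i
    grad_sum = np.zeros(d)          # running sum of the stored gradients
    for _ in range(n_iter):
        i = rng.integers(n)
        g_i = (A[i] @ x - b[i]) * A[i]   # fresh gradient of the sampled component
        grad_sum += g_i - grad_table[i]  # update the stored average incrementally
        grad_table[i] = g_i
        # gradient step on the averaged smooth part, then prox step on lam*||.||_1
        x = soft_threshold(x - step * grad_sum / n, step * lam)
    return x

if __name__ == "__main__":
    # small synthetic example: recover a sparse vector from noisy linear measurements
    rng = np.random.default_rng(1)
    A = rng.standard_normal((200, 10))
    x_true = np.zeros(10)
    x_true[:3] = [1.0, -2.0, 0.5]
    b = A @ x_true + 0.01 * rng.standard_normal(200)
    L_max = np.max(np.sum(A**2, axis=1))          # per-component Lipschitz constant
    x_hat = prox_sag_l1(A, b, lam=0.1, step=1.0 / (3.0 * L_max))
    print(np.round(x_hat, 2))

In this sketch, the table of stored component gradients supplies the averaged search direction that only one fresh gradient per iteration keeps up to date, and the soft-thresholding step plays the role of the proximal operation on the nonsmooth term.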

Keywords:
Proximal stochastic average gradient, Convergence property, Training examples, Machine learning

Article page and full-text download: https://civilica.com/doc/1672766/