Consider a stochastic gradient iteration:

\(\theta_{k+1} = \theta_k - \gamma_k F(\theta_k)\)

where $F(\theta_k)$ is a noisy estimate of the gradient $\nabla f(\theta_k)$.
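For concreteness, here is a minimal simulation of the iteration on a toy problem. The objective $f(\theta) = \theta^2/2$ (so $\nabla f(\theta) = \theta$), the additive Gaussian noise, and the step sizes $\gamma_k = 1/(k+1)$ are all my own illustrative assumptions, not taken from the book:

```python
import random

# Sketch of theta_{k+1} = theta_k - gamma_k * F(theta_k) with
# f(theta) = theta^2 / 2, so grad f(theta) = theta, and F = grad f + noise.
# Step sizes gamma_k = 1/(k+1) satisfy the usual Robbins-Monro conditions:
# sum gamma_k = infinity, sum gamma_k^2 < infinity.
random.seed(0)

theta = 5.0
for k in range(100_000):
    gamma = 1.0 / (k + 1)
    F = theta + random.gauss(0.0, 1.0)  # noisy gradient estimate
    theta -= gamma * F

# Here grad f(theta_k) = theta_k, so |theta| being small is exactly the
# "gradient converges to zero" statement from the book.
print(abs(theta))
```

On this strongly convex example the iterates themselves converge as well, so it does not by itself exhibit the gap the question is about; it only shows the mechanics of the recursion.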

Now, a book says that it converges in the following sense: $f(\theta_k)$ converges and $\nabla f(\theta_k)$ converges to zero, and it then claims that this is the strongest possible result for gradient-related stochastic approximation.

What does this mean? Why doesn't it show convergence of the iterates themselves?

asked 15 Aug '14, 09:27 by sosha

edited 15 Aug '14, 09:28


OR-Exchange! Your site for questions, answers, and announcements about operations research.