The BP algorithm has been used for a wide range of applications. One of its most important limitations is its low rate of convergence, and an important reason for this is the saturation property of its activation functions. Once the output of a unit lies in the saturation region, the corresponding descent gradient takes a very small value, resulting in very little progress in the weight adjustment if a fixed small learning rate parameter is used. To avoid this undesired phenomenon, one may consider a relatively large learning rate. Unfortunately, this is risky, because it may cause the algorithm to diverge, especially when the weight adjustment happens to fall into surface regions with a large steepness. We therefore require algorithms capable of dynamically tuning the learning rate according to changes in the gradient values. In this paper, different methods of dynamically changing the learning rate are considered: the Variable Learning Rate (VLR) algorithm and learning-automata-based learning rate adaptation algorithms are studied and compared with each other. Because the VLR parameters have an important influence on its performance, we use learning automata to adjust them. In the proposed algorithm, called the Adaptive Variable Learning Rate (AVLR) algorithm, the VLR parameters are adapted dynamically by learning automata according to error changes. Simulation results on various problems highlight the merit of the proposed AVLR.
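The variable-learning-rate idea described above can be sketched as follows. This is a minimal illustration of the widely used VLR heuristic, in which the learning rate grows when the error decreases and shrinks (with the step rejected) when the error grows too much; the parameter names `lr_inc`, `lr_dec`, and `err_ratio`, and their concrete values, are illustrative assumptions, not necessarily those used in the paper.

```python
def vlr_step(lr, prev_error, new_error,
             lr_inc=1.05, lr_dec=0.7, err_ratio=1.04):
    """One VLR decision for a single training epoch.

    Returns (new_lr, accept_step). These are the VLR parameters the
    paper's AVLR would adapt via learning automata; the values here
    are common illustrative defaults, not the paper's settings.
    """
    if new_error > err_ratio * prev_error:
        # Error grew too much: reject the weight update, shrink the rate.
        return lr * lr_dec, False
    if new_error < prev_error:
        # Error decreased: accept the update and grow the rate.
        return lr * lr_inc, True
    # Error changed only slightly: accept the update, keep the rate.
    return lr, True


# Example: the error halved, so the step is accepted and the rate grows.
lr, accepted = vlr_step(0.1, prev_error=1.0, new_error=0.5)
```

In a full training loop, rejecting a step means restoring the previous weights before the next epoch; the AVLR extension would additionally let learning automata tune `lr_inc`, `lr_dec`, and `err_ratio` in response to the observed error changes.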