The backpropagation algorithm performs gradient descent only in the weight space of a network with a fixed topology. A very small network cannot learn the problem well, while a very large network leads to overfitting and poor generalization. Algorithms that can find an appropriate network architecture automatically are thus highly desirable. The algorithms introduced by researchers can be classified into five major groups: pruning algorithms, constructive algorithms, hybrid algorithms, evolutionary algorithms, and learning-automata-based algorithms. Meybodi and Beigy introduced the first learning-automata-based algorithm, called the survival algorithm, which produces networks with low complexity and high generalization. By turning weights on and off, the survival algorithm tries to identify the most important weights. At the beginning, all weights of the network are on and contribute to learning. On weights whose absolute values are smaller than one threshold are penalized, and those whose absolute values are larger than another threshold are rewarded. On weights whose absolute values lie between these two thresholds are neither rewarded nor penalized. The values of these two thresholds have a considerable effect on the performance of the survival algorithm. Determining them is not an easy task; they are usually set by trial and error or from past experience. In this paper, we propose a method for adapting these two threshold values. The proposed method has been tested on a number of problems, and simulations show that the network generated by the survival algorithm with adapted thresholds has fewer weights and neurons than the network generated by the first version of the algorithm reported earlier. Experimentation shows that the adaptive survival algorithm has nearly the same degree of generalization as the non-adaptive version.
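
To make the two-threshold rule concrete, the following is a minimal sketch of the reward/penalize decision as described above. It is not the authors' implementation; the function name reward_penalize, the parameters theta_low and theta_high, and the example values are hypothetical, and the full learning-automata update of the on/off state is omitted.

    import numpy as np

    def reward_penalize(weights, on_mask, theta_low, theta_high):
        """Return a per-weight action for the currently 'on' weights.

        +1 = reward, -1 = penalize, 0 = neither. Off weights get 0.
        """
        mags = np.abs(weights)
        actions = np.zeros(weights.shape, dtype=int)
        # Penalize on weights whose magnitude falls below the lower threshold.
        actions[on_mask & (mags < theta_low)] = -1
        # Reward on weights whose magnitude exceeds the upper threshold.
        actions[on_mask & (mags > theta_high)] = +1
        # Weights with theta_low <= |w| <= theta_high stay at 0 (no action).
        return actions

    # Example usage with arbitrary weight values:
    w = np.array([0.01, -0.5, 0.2, -0.02, 0.9])
    on = np.ones_like(w, dtype=bool)  # all weights start 'on'
    print(reward_penalize(w, on, theta_low=0.05, theta_high=0.4))
    # -> [-1  1  0 -1  1]

In the actual algorithm, these reward and penalty signals would drive a learning automaton attached to each weight, updating the probability that the weight stays on; the adaptation method proposed in this paper adjusts theta_low and theta_high during training instead of fixing them in advance.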