The easiest method of communication between humans and machines is speech, and one of the essential aspects of this relationship is the machine's perception of human emotion. As a result, modeling the patterns of speech and building systems on such models has challenged researchers in recent years. Although the emotion expressed in speech varies over a wide spectrum owing to accent, culture, and environment, fixed patterns can still be found in the emotional content of people's speech. In this paper, a new algorithm for detecting emotion in the human voice is presented. In the proposed algorithm, features inspired by human hearing are extracted from the audio signal, and the optimal features are then selected from the extracted set by an intelligent method, with the aim of increasing the speed and accuracy of recognition. Classification is performed by combining a set of classifiers; the designed intelligent system then distinguishes the patterns of anger, joy, fear, boredom, disgust, and sadness. Simulation results for the implemented algorithm are presented on two databases, Farsi and German, and compared with the results of other algorithms on the same databases. The results indicate that the proposed algorithm can predict the emotions of anger, joy, fear, boredom, disgust, and sadness with good accuracy. The algorithm could be used in designing control systems and in robot guidance. In addition, an emotion recognition system could be utilized in psychology, medicine, behavioral science, and security applications such as polygraph testing.
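The pipeline described above (feature extraction, feature selection, classifier combination) can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes scikit-learn, uses random values as stand-ins for the hearing-inspired acoustic features (e.g. MFCC-like coefficients), and substitutes a generic univariate selector and soft-voting ensemble for the paper's unspecified intelligent selection and classifier-combination methods.

```python
import numpy as np
from sklearn.pipeline import Pipeline
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.ensemble import VotingClassifier
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier

# The six emotion classes targeted by the system.
EMOTIONS = ["anger", "joy", "fear", "boredom", "disgust", "sadness"]

rng = np.random.default_rng(0)
# Placeholder data: 200 utterances x 40 acoustic features. In a real
# system these would be hearing-inspired features extracted from audio.
X = rng.normal(size=(200, 40))
y = rng.integers(0, len(EMOTIONS), size=200)

pipeline = Pipeline([
    # Stage 1 stand-in: keep the 10 most discriminative features.
    ("select", SelectKBest(f_classif, k=10)),
    # Stage 2 stand-in: combine several classifiers by soft voting.
    ("vote", VotingClassifier(
        estimators=[
            ("svm", SVC(probability=True, random_state=0)),
            ("knn", KNeighborsClassifier()),
            ("tree", DecisionTreeClassifier(random_state=0)),
        ],
        voting="soft",
    )),
])

pipeline.fit(X, y)
predictions = pipeline.predict(X[:5])  # class indices into EMOTIONS
```

The pipeline form keeps selection and classification coupled, so the same feature subset chosen during training is applied at prediction time.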