Introduction: Speech is among the most effective ways humans exchange information. A speaker's voice carries information beyond the lexical and grammatical content of an utterance, e.g., the speaker's age, gender, and emotional state. Many studies have approached the emotional content of speech from various directions, and they show that emotion in speech is dynamic in nature; this dynamic character makes the emotion hidden in an utterance difficult to extract. This study aimed to evaluate the implicit emotion in a spoken message through emotional speech processing using Mel-Frequency Cepstral Coefficient (MFCC) and Short-Time Fourier Transform (STFT) features.

Methods: The input data come from the Berlin Emotional Speech Database, which covers seven emotional states: anger, boredom, disgust, anxiety/fear, happiness, sadness, and neutral. The audio files of the database were read into MATLAB, and the MFCC and STFT features were extracted. The feature vector for each method was built from seven statistical values computed over the features, i.e., minimum, maximum, mean, standard deviation, median, skewness, and kurtosis. These vectors were then used as input to an Artificial Neural Network (ANN), and the emotional states were recognized by training the network with functions based on different algorithms.

Results: The results revealed that the average recognition accuracy obtained with STFT features was higher and more robust than that obtained with MFCC features. In addition, anger and sadness were recognized at higher rates than the other emotional states.

Conclusion: STFT features proved better suited than MFCC features for extracting the implicit emotion in speech.
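
The following is a minimal MATLAB sketch of the feature-extraction stage described in the Methods, assuming the Audio, Signal Processing, and Statistics Toolboxes. The file name, window length, and overlap are illustrative assumptions, not the configuration used in the study.

    % Read one Berlin (EMO-DB) utterance; the file name is hypothetical
    [x, fs] = audioread('03a01Wa.wav');
    x = x(:, 1);                          % keep a single channel

    % MFCC features: frames-by-coefficients matrix
    coeffs = mfcc(x, fs);

    % STFT features: log-magnitude spectrogram (window/overlap assumed)
    s = stft(x, fs, 'Window', hamming(512, 'periodic'), 'OverlapLength', 256);
    mag = log(abs(s) + eps);              % frequency bins along rows

    % Seven statistics per feature dimension, computed over time
    stats7 = @(M) [min(M); max(M); mean(M); std(M); ...
                   median(M); skewness(M); kurtosis(M)];
    fvMFCC = reshape(stats7(coeffs), [], 1);   % MFCC feature vector
    fvSTFT = reshape(stats7(mag.'),  [], 1);   % STFT feature vector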
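
A comparably hedged sketch of the classification stage follows: a pattern-recognition ANN trained with one of MATLAB's backpropagation variants. The hidden-layer size and the choice of 'trainscg' are assumptions; the study compares several training functions, which it does not name here.

    % featureMatrix: features-by-observations, assembled per utterance as above
    % labels: integer class indices 1..7 for the seven emotional states
    X = featureMatrix;
    T = full(ind2vec(labels));            % 7-by-observations one-hot targets

    net = patternnet(20, 'trainscg');     % 20 hidden units (illustrative)
    net = train(net, X, T);               % train with the chosen algorithm
    Y = net(X);
    acc = mean(vec2ind(Y) == vec2ind(T)); % overall recognition accuracy

Swapping the second argument of patternnet (e.g., 'trainlm' or 'trainrp') is one way to reproduce the comparison of training functions based on different algorithms.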