Communication between humans and computers has become an important aspect of interaction with the surrounding environment. If a computer could sense human emotion, it would be easier to establish a connection between the computer and the human; the extraction of emotions is therefore a key topic in human-computer communication. Various biological signals are used for this purpose. One simple and high-precision way to acquire such data is eye tracking combined with measuring concentration on the screen. In this paper, the eye tracking technique is used to extract emotions for human-computer communication. From the data acquired from participants and videos, several signal characteristics are extracted, including gaze focus areas, pupil diameter, statistical features, and video features. In addition, combining these features is proposed to improve the results. Then, for the two distinct outputs, i.e. Arousal and Valence, features are selected separately by applying a linear combination and reducing the dimensionality. Finally, to classify each of the Arousal and Valence axes, whose 0-to-9 range is divided into three equal parts, variants of the KNN and SVM methods combined with an AdaBoost classifier are used. Numerical studies show an average extraction accuracy of 68.66% for the Arousal axis and 74.66% for the Valence axis, an improvement of 5.5% in overall accuracy compared with previous works.
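
To make the classification stage concrete, the following is a minimal sketch, not the authors' implementation, of the three-class discretization of the 0-to-9 scale and of an SVM boosted with AdaBoost alongside a plain KNN baseline. It assumes scikit-learn (the `estimator` argument of `AdaBoostClassifier`), and the feature matrices and ratings are randomly generated placeholders rather than the eye-tracking features described above.

```python
# Sketch of the classification stage: 0-9 ratings binned into three equal
# classes, then classified with AdaBoost-boosted SVM and a KNN baseline.
# X_*/y_* below are synthetic placeholders, not the paper's data.
import numpy as np
from sklearn.svm import SVC
from sklearn.ensemble import AdaBoostClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import accuracy_score

def to_three_classes(ratings):
    """Split the 0-9 scale into three equal parts: [0, 3), [3, 6), [6, 9]."""
    return np.digitize(ratings, bins=[3.0, 6.0])  # labels 0, 1, 2

rng = np.random.default_rng(0)
X_train, X_test = rng.normal(size=(120, 10)), rng.normal(size=(30, 10))
y_train = to_three_classes(rng.uniform(0, 9, size=120))
y_test = to_three_classes(rng.uniform(0, 9, size=30))

# SVM combined with AdaBoost (one of the combinations named in the abstract).
boosted_svm = AdaBoostClassifier(
    estimator=SVC(kernel="rbf", probability=True),
    n_estimators=20,
    algorithm="SAMME",
)
boosted_svm.fit(X_train, y_train)
print("boosted SVM accuracy:", accuracy_score(y_test, boosted_svm.predict(X_test)))

# Plain KNN baseline on the same three-class task.
knn = KNeighborsClassifier(n_neighbors=5).fit(X_train, y_train)
print("KNN accuracy:", accuracy_score(y_test, knn.predict(X_test)))
```

The same pipeline would be run twice, once with Arousal labels and once with Valence labels, since the abstract reports separate accuracies for the two axes.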