Machine learning is an application of artificial intelligence in which models automatically learn and improve from experience without being explicitly programmed. Most machine learning algorithms assume that the training set (source domain) and the test set (target domain) are drawn from the same probability distribution. In most real-world applications, however, this assumption is violated because the source and target distributions differ, an issue known as domain shift. Transfer learning and domain adaptation therefore generalize a model so that it can handle target data with a different distribution. In this paper, we propose a domain adaptation method, IMage Alignment via KErnelized feature learning (IMAKE), that preserves the general and geometric information of the source and target domains. IMAKE finds a common subspace across domains to reduce the distribution discrepancy between them, adapting the general and geometric distributions simultaneously, and transfers both domains into a shared low-dimensional subspace in an unsupervised manner. The proposed method minimizes the marginal and conditional probability distribution differences between the source and target data via maximum mean discrepancy, and uses manifold alignment for geometric distribution adaptation. By mapping the input data into a common latent subspace via manifold alignment, samples with the same class label are gathered around their class means while samples from different classes are pushed apart. Moreover, IMAKE preserves the source and target domain manifolds, maintaining the original data positions and domain structure. Finally, the use of kernels to map the data into a Hilbert space yields a more accurate separation between classes and suits data with complex and imbalanced structures.
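To make the distribution-matching criterion concrete, the following sketch computes a biased empirical estimate of squared maximum mean discrepancy (MMD) between source and target samples with an RBF kernel. This is a generic illustration of the MMD statistic, not the authors' IMAKE implementation; the function names, the kernel bandwidth `gamma`, and the synthetic Gaussian data are all illustrative assumptions.

```python
import numpy as np

def rbf_kernel(A, B, gamma=1.0):
    # Pairwise squared Euclidean distances, then the Gaussian (RBF) kernel.
    sq = np.sum(A**2, axis=1)[:, None] + np.sum(B**2, axis=1)[None, :] - 2.0 * A @ B.T
    return np.exp(-gamma * sq)

def mmd2(Xs, Xt, gamma=1.0):
    # Biased empirical estimate of squared MMD between source Xs and target Xt:
    # mean(K(Xs,Xs)) + mean(K(Xt,Xt)) - 2*mean(K(Xs,Xt)).
    Kss = rbf_kernel(Xs, Xs, gamma)
    Ktt = rbf_kernel(Xt, Xt, gamma)
    Kst = rbf_kernel(Xs, Xt, gamma)
    return Kss.mean() + Ktt.mean() - 2.0 * Kst.mean()

rng = np.random.default_rng(0)
# Same distribution: MMD estimate should be close to zero.
same = mmd2(rng.normal(0, 1, (200, 5)), rng.normal(0, 1, (200, 5)))
# Mean-shifted target: MMD estimate should be clearly larger (domain shift).
shifted = mmd2(rng.normal(0, 1, (200, 5)), rng.normal(2, 1, (200, 5)))
```

A domain adaptation method in this family would seek a projection of both domains under which such an MMD term (for both marginal and class-conditional distributions) is minimized.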
The proposed method is evaluated on a variety of benchmark visual databases through 36 experiments. The results show significant performance improvements over other machine learning and transfer learning approaches.