In traditional speech processing, feature extraction and classification were performed as separate steps. The advent of deep neural networks has enabled end-to-end methods that jointly model the relationship between the acoustic and phonetic characteristics of speech while classifying it directly from the raw waveform. In these networks, the first convolutional layer acts as a filter bank. To improve interpretability and reduce the number of parameters, researchers have explored parametric filters, with the SincNet architecture being a notable advancement. Instead of fully trainable filters, SincNet's initial convolutional layer learns rectangular bandpass filters, each parameterized only by its low and high cutoff frequencies. This compact parameterization improves the network's convergence speed and accuracy, and analyzing the learned filter bank provides valuable insight into the model's behavior. The reduction in parameters, together with the gains in accuracy and interpretability, has led to the adoption of a variety of parametric filters and deep architectures across diverse speech processing applications. This paper introduces the different types of parametric filters, discusses their integration into various deep architectures, and examines the speech processing applications in which these filters have proven effective.
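As a concrete illustration of the rectangular-bandpass idea, the sketch below builds SincNet-style filter kernels in NumPy: each bandpass filter is the difference of two windowed sinc low-pass filters, so it is fully determined by two cutoff frequencies rather than by every tap of the kernel. The function name `sinc_bandpass_kernels` and the default kernel size and sample rate are illustrative choices, not part of any particular implementation.

```python
import numpy as np

def sinc_bandpass_kernels(f_low, f_high, kernel_size=101, sample_rate=16000):
    """Build SincNet-style bandpass kernels from per-filter cutoff
    frequencies in Hz (illustrative sketch, not the reference code).
    Each kernel is the difference of two windowed sinc low-pass
    filters, so only two parameters define each filter."""
    f_low = np.asarray(f_low, dtype=float) / sample_rate    # normalized cutoffs
    f_high = np.asarray(f_high, dtype=float) / sample_rate
    n = np.arange(kernel_size) - (kernel_size - 1) / 2      # symmetric time axis
    # np.sinc(x) = sin(pi x)/(pi x); 2f*sinc(2fn) is an ideal low-pass response
    low_pass_1 = 2 * f_low[:, None] * np.sinc(2 * f_low[:, None] * n)
    low_pass_2 = 2 * f_high[:, None] * np.sinc(2 * f_high[:, None] * n)
    band_pass = low_pass_2 - low_pass_1                     # rectangular band in frequency
    band_pass *= np.hamming(kernel_size)                    # smooth truncation ripple
    return band_pass

# Two example filters: 100-1000 Hz and 2000-4000 Hz at a 16 kHz sample rate
kernels = sinc_bandpass_kernels([100.0, 2000.0], [1000.0, 4000.0])
```

In a trained network the cutoff frequencies would be learnable parameters updated by backpropagation; the kernels are recomputed from them at each forward pass, which is what keeps the parameter count so low.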