Scientific Information Database (SID) - Trusted Source for Research and Academic Resources

Journal Issue Information


Issue Info: 
  • Year: 2023
  • Volume: 16
  • Issue: 2
  • Pages: 253-273
Measures: 
  • Citations: 0
  • Views: 100
  • Downloads: 0
Abstract: 

Introduction: Redundancy is a commonly used technique for increasing the reliability of a system. However, because of limitations such as high cost and limited space, it cannot always be used. These constraints can be overcome with a reduction method, which improves the system's reliability by reducing the failure rates of some of its components by a constant factor 0 < ρ < 1. Based on this reduction factor, Rade (1993) introduced the concept of reliability equivalence factors. The reliability equivalence factor (REF) is a factor 0 < ρ < 1 by which the failure rates of some system components are reduced so that the system reliability reaches that of a system improved by some other method. The REF is thus a valuable tool for comparing different ways of improving a system. Consider a coherent system of order n with component lifetimes T1, …, Tn. If P(Ti > t) = F̄^λi(t) for some λi > 0, i = 1, …, n, then the mutually s-independent lifetimes T = (T1, …, Tn) follow the proportional hazard rates (PHR) model, where F̄ is the baseline survival function and λ = (λ1, …, λn) is the proportional hazard vector. In this paper, we apply the reduction method to series and parallel systems under the PHR model and discuss the relation between the REF and λ.

Material and Methods: We show that the reduction method can be viewed as a special case of the PHR model and then, based on the REF, compare the homogeneous and heterogeneous improvement strategies in the PHR model.

Results and Discussion: This paper gives conditions under which the lifetimes of two series or parallel systems are stochastically ordered. We then discuss how the REF can be used to improve systems and to make system lifetimes equivalent. In the literature, REFs are usually obtained by numerical methods and mathematical packages. In this paper, based on survival and mean reliability equivalence factors, the equivalence between the reduction method and the heterogeneous strategy in the PHR model is investigated for parallel and series systems with independent components. We present closed formulas for the REF of series and parallel systems when the component lifetimes follow the PHR model. Sufficient conditions for relative ageing comparisons of the improved series and parallel systems under the PHR model and the reduction method are also developed.

Conclusion: There is a close relationship between the PHR model and the reduction method. We apply this relation to find conditions for the equivalence of the lifetimes of two series or parallel systems. We also compare the lifetimes of two series systems under the PHR model and the reduction method based on the ageing-faster orders in terms of the hazard and the reversed hazard rates. Studying the reduction method and the heterogeneous strategy in the PHR model for the series system with component reliability vector F̄λ(t) = (F̄^λ(t), …, F̄^λ(t)), we find that the systems improved by the reduction method and by the heterogeneous strategy in the PHR model are equivalent in the sense of the ageing-faster order in the hazard rate. For parallel and series systems with the same component reliability vector, we also find a sufficient condition under which the systems improved by the reduction method age faster than those improved by the heterogeneous strategy in the PHR model.
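As a concrete illustration of the survival REF idea, the sketch below finds, for a series system with exponential component lifetimes (a PHR model with exponential baseline), the factor ρ by which one component's failure rate must be reduced so that the system matches the reliability of an improved system at a mission time t. The function names and the bisection approach are illustrative, not taken from the paper.

```python
import math

def series_survival(t, lams):
    # Series system under the PHR model with exponential baseline:
    # S(t) = exp(-t * sum(lambda_i))
    return math.exp(-t * sum(lams))

def survival_ref(t, lams, improved_lams, component=0):
    # Survival reliability equivalence factor (sketch): find rho in (0, 1)
    # such that reducing the failure rate of one component by rho makes the
    # series system as reliable as the improved system at mission time t.
    target = series_survival(t, improved_lams)

    def gap(rho):
        reduced = list(lams)
        reduced[component] *= rho
        return series_survival(t, reduced) - target

    # gap() is decreasing in rho, positive near 0 and negative at 1 when the
    # improved system is genuinely better, so plain bisection suffices.
    lo, hi = 1e-9, 1.0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if gap(lo) * gap(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)
```

With λ = (1, 1, 1) and an improved system with rates (0.5, 1, 1), reducing the first component's rate by ρ = 0.5 reproduces the improved reliability exactly, as the closed form for series systems predicts.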

Author(s): 

AFSHARI R.

Issue Info: 
  • Year: 2023
  • Volume: 16
  • Issue: 2
  • Pages: 275-294
Measures: 
  • Citations: 0
  • Views: 75
  • Downloads: 0
Abstract: 

Introduction: Acceptance sampling plans (ASPs) are among the statistical tools widely used by inspectors to evaluate the quality of production. Depending on whether the quality characteristic of interest can be measured on a numerical scale, the corresponding ASP is called a variable or an attribute ASP, respectively. Most ASPs use only the current lot's information to judge the quality of manufactured products; one drawback of such plans is that they need a large sample size to inspect the lot. To solve this problem, new sampling methods for ASPs were developed and conditional plans were presented (Montgomery, 2020), in which past information on the process is used in decision-making in addition to the current information. Although the variable multiple dependent state sampling (MDS) plan is preferred over other conditional plans because of the small sample size it requires, it cannot be used when the quality of manufactured products depends on more than one quality characteristic. In this study, to improve the performance of the mentioned method, an S_Tpk-based MDS plan is proposed that uses both current and past knowledge of the process and can inspect products with independent or dependent characteristics following a multivariate normal distribution.

Material and Methods: Let X = (X1, …, Xv)′ be a random vector of independent quality characteristics following a multivariate normal distribution. The proposed variable MDS multivariate (VMDSM) plan, designed under the process capability index S_Tpk, has four parameters: m, n, kr, and ka. Optimal values of the plan parameters are obtained by solving a nonlinear optimization problem. Also, to extend the VMDSM plan to dependent variables, the principal component analysis technique is used. The performance of the proposed plan is compared with the variable single sampling (VSS), variable repetitive group sampling (VRGS), and modified VRGS (Modified-VRGS) plans based on the required sample size and the operating characteristic (OC) curve, and an industrial example shows how to use the introduced plan.

Results and Discussion: To study the impact of the values contracted between customer and producer on the plan parameters, we consider several combinations of the consumer's risk (β) and the manufacturer's risk (α), as well as different numbers of defective items in parts per million. The results demonstrate that the required sample size decreases as α and β become larger. Moreover, the findings indicate that the OC curve of the proposed method is the best among the VSS, VRGS, and Modified-VRGS plans. Also, compared to the VSS and VRGS plans, the introduced plan needs a smaller sample size and is more economical.

Conclusion: Today it is clear that production quality depends on many quality characteristics, so designing a multivariate ASP with a small sample size, a strong OC curve, and simple implementation is unavoidable. In this study, we proposed a plan suitable for situations where the quality characteristics are mutually independent or dependent with a multivariate normal distribution. The results showed that the introduced plan is better than the existing VSS and VRGS plans in terms of the required sample size and OC curve, and it is preferable to the Modified-VRGS method based on the OC curve.
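The MDS decision logic the abstract describes (judge the current lot by its estimated index, falling back on the acceptance history of the previous m lots) can be sketched as below. The threshold names ka and kr follow the abstract's plan parameters; the function name and the accept/reject strings are illustrative.

```python
def mds_decision(index_hat, history, m, k_a, k_r):
    # Variable multiple dependent state (MDS) sampling decision rule (sketch):
    #   - accept outright if the estimated capability index >= k_a,
    #   - reject outright if it is below k_r,
    #   - otherwise accept only if the previous m lots were all accepted.
    # `history` is a list of booleans for earlier lots (True = accepted);
    # m, k_a, k_r are plan parameters found by the paper's optimization.
    if index_hat >= k_a:
        return "accept"
    if index_hat < k_r:
        return "reject"
    recent = history[-m:]
    if len(recent) == m and all(recent):
        return "accept"
    return "reject"
```

The conditional middle branch is what lets the plan operate with a smaller sample size than single-sampling plans: borderline lots are resolved using process history rather than extra measurements.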

Issue Info: 
  • Year: 2023
  • Volume: 16
  • Issue: 2
  • Pages: 295-308
Measures: 
  • Citations: 0
  • Views: 63
  • Downloads: 0
Abstract: 

Introduction: The two-parameter Burr type XII (Burr(c, k)) distribution has been proposed as a lifetime model and as a model for accelerated life test data representing times to breakdown of an insulating fluid. A classical method for estimating the parameter of interest is based on sample information, for example the maximum likelihood estimator (MLE). A Bayesian approach requires a prior distribution over the parameter space and a loss function. Many Bayesians believe that a single prior can be elicited; in practice, prior knowledge is vague, and any elicited prior distribution is only an approximation to the true one. Various solutions to this problem, such as robust Bayesian estimation, have been proposed. Another is E-Bayesian estimation, introduced by Han (1997), obtained by taking the expectation of a Bayes estimate of the unknown parameter over the hyperparameters.

Material and Methods: Suppose that n devices are placed on test simultaneously and the test finishes immediately after r components have failed; r is fixed, and the length of the experiment is a random variable. This is type-II censoring, and the data consist of the r smallest lifetimes x = (x1, …, xr). In some estimation problems, an unbounded loss function is inappropriate: for example, in estimating the mean life of the components of an aircraft, the loss incurred by an estimator is essentially bounded. We consider the Burr type XII model and obtain the Bayesian and E-Bayesian estimates of the shape parameter θ from censored data under a reflected gamma loss function. The novelty of this study is that, thus far, no attempt has been made to use the E-Bayesian method for estimation in the Burr type XII model under the reflected gamma loss function. E-Bayesian estimation takes the expectation of the Bayesian estimate over the hyperparameters of the prior distribution. The properties of the E-Bayesian estimates and their asymptotic relations are derived, and a simulation study is conducted to compare the performance of the proposed estimators.

Results and Discussion: A simulation study is performed to compare the proposed estimators. Samples are generated from the Burr(1, 0.1) distribution for selected values of n, and censored samples are obtained for selected values of r. The Bayesian and E-Bayesian estimates are computed for selected values of c, u, and v, and the estimated bias and risk are calculated over 10000 repetitions. The simulation study shows that the performance of the E-Bayesian estimates improves as n (and r) increases. Moreover, the estimator θ̂EB3 has the smallest estimated bias and risk and is therefore proposed for estimating θ.
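To show the E-Bayesian mechanics (a Bayes estimate averaged over a hyperparameter), here is a minimal sketch for the Burr XII shape parameter under type-II censoring. It deliberately uses squared-error loss with a Gamma(a, b) prior and b ~ Uniform(0, v), where both the posterior mean and its hyperparameter average have closed forms; the paper's reflected gamma loss would replace the posterior mean with the corresponding Bayes rule.

```python
import math

def burr_censored_stat(xs, n, c):
    # Type-II censored Burr XII sufficient statistic with known c:
    # T = sum log(1 + x_i^c) over the r observed failures, plus (n - r)
    # copies of the term for the largest observed lifetime.
    r = len(xs)
    t = sum(math.log(1.0 + x ** c) for x in xs)
    t += (n - r) * math.log(1.0 + xs[-1] ** c)
    return t

def bayes_theta(T, r, a, b):
    # Bayes estimate of the shape parameter under a Gamma(a, b) prior and
    # squared-error loss (a simplification of the paper's reflected gamma
    # loss): the posterior is Gamma(a + r, b + T), so the posterior mean is
    return (a + r) / (b + T)

def e_bayes_theta(T, r, a, v):
    # E-Bayesian estimate: expectation of the Bayes estimate over the
    # hyperparameter b ~ Uniform(0, v), available here in closed form:
    # (1/v) * integral_0^v (a + r)/(b + T) db
    return (a + r) / v * math.log((v + T) / T)
```

The closed form makes the E-Bayes idea visible: instead of committing to one b, the estimator integrates the ordinary Bayes estimate over a range of plausible hyperparameter values.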

Author(s): 

Bazyari A.

Issue Info: 
  • Year: 2023
  • Volume: 16
  • Issue: 2
  • Pages: 309-330
Measures: 
  • Citations: 0
  • Views: 66
  • Downloads: 0
Abstract: 

Introduction: For risk models, problems associated with calculating ruin probabilities have received considerable attention in recent years. These include studies of the finite and infinite time ruin probabilities, the surplus before ruin, the deficit at ruin, and the moments of these variables. One essential criterion in an insurance company's risk model is the exact or approximate calculation of the ruin probability. In the present paper, we consider the individual risk model of an insurance company with dependent claims and assume that the bivariate vectors of claim-size random variables are independent with a common joint distribution function. A recursive formula for the infinite time ruin probability is derived in terms of the initial reserve and the joint probability density function of the claim sizes, using probability inequalities and induction. Numerical examples and simulation studies are presented to check the results for the light-tailed bivariate Poisson and the heavy-tailed Log-Normal and Pareto distributions. The results are compared for the Farlie–Gumbel–Morgenstern and bivariate Frank copula functions, and the effect of heavy-tailed claim distributions on the ruin probability is also investigated.

Material and Methods: We compute the infinite time ruin probability in the individual risk model with a dependence structure for light- and heavy-tailed claim distributions. Although many authors have investigated ruin probabilities, the present paper considers a specific type of dependency. Numerical examples with heavy- and light-tailed distributions are presented to show the application of the results.

Results and Discussion: The risk process models the accumulation of the insurer's capital and premium income over time; it is one of the most important stochastic processes for an insurance company and can be defined in discrete or continuous time in actuarial risk theory. Problems associated with calculating infinite time ruin probabilities for the individual risk model have recently received considerable attention. In this article, we showed that the type of statistical distribution is essential in calculating the ruin probabilities; the results hold for classes of heavy- and light-tailed distributions. If the claim sizes are correlated, the insurance company must be cautious in preparing and adjusting the insurance policy.

Conclusion: For insurance companies to be able to compensate policyholders' claims, they must organize the company's insurance model based on precise mathematical rules and calculations. The statistical distribution of claim sizes and the number of insurance premiums must be selected correctly. Also, as the level of correlation between claims increases, the ruin probability increases, and the effect of heavy-tailed distributions on the ruin probability is pronounced. We assume the insurance company is interested in computing ruin probabilities depending on the quantities and conditions of the risk model; the approaches considered in this paper are implementable, and one can find proper applications in insurance practice.
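A simulation check of the kind the abstract mentions can be sketched as follows: FGM-dependent claim pairs are drawn by the conditional-distribution method and fed into a discrete-time surplus process. The exponential claim marginals, premium structure, and parameter values are illustrative assumptions, not the paper's model.

```python
import math
import random

def fgm_pair(theta, rng):
    # Sample (u1, u2) from a Farlie-Gumbel-Morgenstern copula, |theta| <= 1,
    # via the conditional-distribution method: solve the conditional CDF
    # w = v * (1 + theta*(1 - 2*u1)*(1 - v)) for v in closed form.
    u1, w = rng.random(), rng.random()
    a = 1.0 + theta * (1.0 - 2.0 * u1)
    b = math.sqrt(a * a - 4.0 * (a - 1.0) * w)
    u2 = 2.0 * w / (a + b)
    return u1, u2

def ruin_probability(u0, premium, n_periods, theta, scale, n_sim=20000, seed=1):
    # Monte Carlo estimate of the finite-time ruin probability (sketch):
    # each period brings a pair of FGM-dependent exponential claims with the
    # given scale; ruin occurs when the surplus first drops below zero.
    rng = random.Random(seed)
    ruined = 0
    for _ in range(n_sim):
        surplus = u0
        for _ in range(n_periods):
            v1, v2 = fgm_pair(theta, rng)
            # Inverse-CDF transform of each uniform into an Exp(scale) claim
            claims = -scale * math.log(1.0 - v1) - scale * math.log(1.0 - v2)
            surplus += premium - claims
            if surplus < 0:
                ruined += 1
                break
    return ruined / n_sim
```

Varying theta in such a simulation illustrates the abstract's conclusion: as the dependence between the paired claims strengthens, the estimated ruin probability rises.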

Author(s): 

Chaji A.

Issue Info: 
  • Year: 2023
  • Volume: 16
  • Issue: 2
  • Pages: 331-348
Measures: 
  • Citations: 0
  • Views: 83
  • Downloads: 0
Abstract: 

Introduction: A decision tree is a flowchart-like graph of decision nodes, branches, and leaf nodes, starting from the root and ending at the leaves. Each internal node tests an independent variable, the root node is placed at the top of the tree, and each leaf holds the dependent variable (answer) or class label, the last node of the tree. The decision at each stage of tree construction depends on the previous branching operation, which is crucial to the predictive capability of the tree. The information-gain branching method uses the concept of entropy to measure how much information a feature provides about a class and decides how to split the tree at each node.

Material and Methods: The first dataset, describing the diagnosis of cardiac Single Proton Emission Computed Tomography (SPECT) images, contains 267 SPECT image sets (patients) and 22 attributes, with each patient classified as normal or abnormal. The second dataset contains 1024 binary attributes (molecular fingerprints) used to classify 1687 chemicals into two classes (binder to androgen receptor/positive, non-binder to androgen receptor/negative). A real-world dataset is also used, containing 90 instances and 7 attributes (gender, blood pressure, blood sugar, cholesterol, smoking, weight, and occupation of the patient), collected to predict the treatment method (medical treatment or angiography). A new approach is proposed to produce a decision tree based on the T-entropy criterion. The method is applied to the three datasets, examined by 11 evaluation criteria, and compared with the well-known Gini index, Shannon, Tsallis, and Renyi entropy splitting methods, with 75% of the data for training and 25% for testing over 300 executions (each execution produces one decision tree). A comparison is also made between T-entropy and the other discussed splitting measures in terms of the area under the ROC curve (AUC).

Results and Discussion: The performance of the Gini index, Shannon, Tsallis, and Renyi entropies and T-entropy as splitting methods is examined. The evaluation criteria of accuracy (ACC), sensitivity, specificity, positive predictive value (PPV, or precision), F-score (F1), negative predictive value (NPV), false discovery rate (FDR), false positive rate (FPR), false negative rate (FNR), and mean square error were calculated for the three datasets. The maximum values for the first six criteria and the minimum values for the last four indicate the better performance of the decision tree based on the introduced method. Also, the AUC of the T-entropy method is higher than that of the other methods for all three datasets, indicating that the true positive rate exceeds the false positive rate more strongly under T-entropy than under the other methods.

Conclusion: The results suggest that the proposed node-splitting method based on T-entropy behaves better than the other discussed methods for both small and large numbers of samples. Given the growth of big data and the superiority of the T-entropy splitting method over the other investigated methods across dataset sizes, the benefit of this method is twofold.
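The "pluggable impurity" idea behind comparing Gini, Shannon, Tsallis, Renyi, and T-entropy splits can be sketched with a generic information-gain routine. Shannon and Tsallis entropies are shown; the paper's T-entropy (whose formula is not reproduced here) would slot in as just another entropy argument.

```python
import math
from collections import Counter

def shannon_entropy(labels):
    # Shannon entropy of a label multiset, in bits.
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def tsallis_entropy(labels, q=2.0):
    # Tsallis entropy, one of the alternative impurity measures compared
    # against T-entropy in the paper.
    n = len(labels)
    return (1.0 - sum((c / n) ** q for c in Counter(labels).values())) / (q - 1.0)

def information_gain(feature, labels, entropy=shannon_entropy):
    # Gain from splitting on a binary feature under a pluggable entropy:
    # parent impurity minus the size-weighted impurity of the two children.
    n = len(labels)
    left = [y for x, y in zip(feature, labels) if x == 0]
    right = [y for x, y in zip(feature, labels) if x == 1]
    child = sum(len(s) / n * entropy(s) for s in (left, right) if s)
    return entropy(labels) - child
```

At each node, the tree builder evaluates `information_gain` for every candidate feature under the chosen entropy and splits on the maximizer; swapping the entropy function is the only change needed to reproduce the paper's comparison framework.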

Issue Info: 
  • Year: 2023
  • Volume: 16
  • Issue: 2
  • Pages: 349-372
Measures: 
  • Citations: 0
  • Views: 83
  • Downloads: 0
Abstract: 

Introduction: The stochastic comparison of coherent systems is an essential task in reliability theory. Several results have been provided for systems with independent and identically distributed (IID) components. System signatures are helpful tools for studying and comparing coherent systems in the IID case, but they have some restrictions for comparing systems with merely identically distributed (ID) components. Recently, Navarro et al. (2013) showed that the system reliability function can be expressed as a distorted function of the common reliability function of the components. This representation allows different comparison results to be presented in a unified way in the ID case. In this paper, using the concept of a distortion function, we propose a new representation of the reversed mean residual lifetime (RML) of a coherent system with ID component lifetimes. We provide sufficient conditions for the RML ordering of two coherent systems. Comparison results are also presented based on the ageing-faster order in the reversed mean residual lifetime.

Material and Methods: We use the copula function to model the structural dependency of the components and then present comparison results based on the representation of the system distribution as a distorted version of the common components' distribution. In some practical situations, we may observe crossing reversed mean residual lifetimes or crossing variance residual lifetimes; the corresponding lifetimes then cannot be compared in the usual stochastic orders. Relative ageing orders enable us to compare lifetimes in these cases.

Results and Discussion: We aim to order coherent systems with ID components in the RML order. A new representation of the mean inactivity time of a coherent system with ID components is obtained and used to compare the RMLs of two coherent systems. Sufficient conditions under which one coherent system dominates another with respect to the ageing-faster order in the reversed mean and variance residual life orders are also discussed. These results are derived from the representation of the system reliability function as a distorted function of the common component reliability function. Examples are given to illustrate the results.

Conclusion: In this paper, we study the RML ordering of coherent systems based on the distortion function. We show that the distortion function of a coherent system and its integral play an important role in the RML comparison of coherent systems, and we find a sufficient condition for the RML ordering of two coherent systems. We study the RML orderings of all coherent systems with 1-3 IID and DID components under the FGM survival copula; the effect of the dependency on the RML comparisons is also examined. Finally, we focus on the ageing-faster order in terms of RML and introduce a new ageing-faster order in terms of the reversed variance residual lifetime, providing sufficient conditions under which one coherent system ages faster than another with respect to the proposed orders.
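The distortion-function representation can be made concrete with a small numeric sketch: the mean inactivity time (reversed mean residual lifetime) of a lifetime whose CDF is a distortion H(F(t)) of a common component CDF F is computed by simple quadrature. The exponential component and the 2-component parallel distortion H(u) = u² are illustrative choices, not the paper's examples.

```python
import math

def mit(system_cdf, t, steps=2000):
    # Mean inactivity time (reversed mean residual lifetime) of a lifetime
    # with CDF `system_cdf`:  m(t) = (1/F(t)) * integral_0^t F(u) du,
    # computed with a plain trapezoidal rule.
    h = t / steps
    xs = [i * h for i in range(steps + 1)]
    ys = [system_cdf(x) for x in xs]
    integral = h * (sum(ys) - 0.5 * (ys[0] + ys[-1]))
    return integral / system_cdf(t)

def distorted_cdf(base_cdf, distortion):
    # A coherent system with ID components has CDF H(F(t)) for a distortion
    # H determined by the structure; e.g. H(u) = u**2 for a 2-component
    # parallel system with independent components.
    return lambda t: distortion(base_cdf(t))

unit_exp = lambda t: 1.0 - math.exp(-t)            # component CDF (Exp(1))
parallel2 = distorted_cdf(unit_exp, lambda u: u * u)
```

Comparing `mit(parallel2, t)` with `mit(unit_exp, t)` over a grid of t values is exactly the kind of pointwise comparison that underlies the RML ordering results in the paper.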

Issue Info: 
  • Year: 2023
  • Volume: 16
  • Issue: 2
  • Pages: 373-395
Measures: 
  • Citations: 0
  • Views: 203
  • Downloads: 0
Abstract: 

Introduction: Among forecasting methods, singular spectrum analysis (SSA) is a powerful nonparametric technique with both filtering and forecasting capabilities. SSA breaks the observed series into two components, noise and signal, using the eigenvalues and eigenvectors of the trajectory matrix; it then forecasts by reconstructing the original series from the signal component and applying a recursive linear relationship. Since real data are not noise-free, the linear recurrent relation (LRR) coefficients obtained from the eigenvectors of the trajectory matrix are also contaminated with noise, which reduces forecast accuracy. Several approaches have therefore been proposed to improve recursive forecasting. In this paper, a hybrid method is proposed that improves the recursive forecasting performance of SSA with the Kalman filter algorithm. Its effectiveness is compared with the reconstructed SSA-R, the SSA weighting algorithm, and the basic SSA method using the root mean square error criterion.

Material and Methods: For a time series yt with fixed window length L, the L−1 SSA-R coefficients φ1, …, φ(L−1) are obtained from the eigenvectors of the trajectory matrix. Therefore, if the observed series contains noise, the estimated coefficients φ1, …, φ(L−1) are inaccurate and reduce the prediction accuracy. We use the state-space equations and the Kalman filter algorithm to reduce the noise, improve the prediction, and define the KF-SSA-R method. Another approach for improving the recursive prediction of SSA is reconstructed SSA, in which the prediction coefficients are obtained by re-running SSA on the reconstructed series. The last method uses a weighting algorithm, with 2/3 of the observations used to calculate the weights.

Results and Discussion: To investigate the effect of data refinement, the proposed methods are compared in simulation studies and on gas consumption data for England. The impact of noise on the performance of the methods is studied through the signal-to-noise ratio for different window lengths and prediction horizons, and the methods are compared using the RRMSE.

Conclusion: The simulation studies and real data results show that the KF-SSA-R method performs better than basic SSA, reconstructed SSA, and weighted SSA, especially when the window length is small. When the window length is large, the weighted SSA method is more efficient than basic SSA for data generated from the structural model. Comparing the reconstructed and original SSA methods at small window lengths, the reconstructed SSA method performs better.
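Basic recurrent SSA forecasting, the baseline that all the variants above modify, can be sketched in a few lines of numpy: Hankel embedding, rank truncation, diagonal averaging, and extrapolation by the LRR built from the leading left singular vectors. The window length, rank, and test series below are illustrative.

```python
import numpy as np

def ssa_r_forecast(series, L, rank, steps):
    # Basic recurrent SSA forecasting (SSA-R):
    # 1) embed the series into an L x K trajectory (Hankel) matrix,
    # 2) keep the leading `rank` singular triples as the signal,
    # 3) reconstruct the series by diagonal averaging,
    # 4) build the linear recurrent relation (LRR) from the left singular
    #    vectors and extrapolate `steps` values ahead.
    x = np.asarray(series, dtype=float)
    N = len(x)
    K = N - L + 1
    X = np.column_stack([x[i:i + L] for i in range(K)])
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    U = U[:, :rank]
    # Diagonal averaging of the rank-truncated trajectory matrix
    Xr = U @ np.diag(s[:rank]) @ Vt[:rank]
    rec = np.zeros(N)
    cnt = np.zeros(N)
    for j in range(K):
        rec[j:j + L] += Xr[:, j]
        cnt[j:j + L] += 1.0
    rec /= cnt
    # LRR coefficients from the eigenvectors: with pi_i the last entry of
    # the i-th left singular vector, R = sum_i pi_i * U_i[:-1] / (1 - nu^2).
    pi = U[-1, :]
    nu2 = float(pi @ pi)
    R = (U[:-1, :] @ pi) / (1.0 - nu2)
    out = list(rec)
    for _ in range(steps):
        out.append(float(np.dot(R, out[-(L - 1):])))
    return np.array(out[N:])
```

On a noiseless series that satisfies a low-order LRR, the extrapolation is exact; the point of the KF-SSA-R hybrid is that, with noise, the coefficients R computed this way degrade, and filtering recovers some of the lost accuracy.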

Issue Info: 
  • Year: 2023
  • Volume: 16
  • Issue: 2
  • Pages: 397-416
Measures: 
  • Citations: 0
  • Views: 60
  • Downloads: 0
Abstract: 

Introduction: In this paper, the reliability of the multicomponent stress-strength model is studied. The system components may experience the same or different stress levels; in some cases, several stresses are imposed on a system simultaneously, and the system remains intact if its strength exceeds the stresses. This article considers a multicomponent system with n2 components on which n1 stresses are imposed simultaneously, with all stresses and strengths independent. The main subject of this model is the study of Rr,k = P(Xr:n1 < Yk:n2), where Xr:n1 is the r-th ordered stress variable and Yk:n2 is the k-th ordered strength variable. The stress and strength variables are taken to be inverse Exponential with unknown scale parameters, and Rr,k is obtained under this distribution. The k-out-of-n2:F system and its special cases, the series and parallel systems, are studied. A k-out-of-n2:F system fails if at least k components fail, so the reliability of the system is Rn1,k = P(at least n2 − k + 1 of the Yi exceed Xn1:n1). Its special cases are the series and parallel systems, whose stress-strength reliabilities are Rn1,1 and Rn1,n2, respectively. Rn1,k is the probability that the maximum stress is less than the k-th strength, Rn1,1 the probability that the maximum stress is less than the minimum strength, and Rn1,n2 the probability that the maximum stress is less than the maximum strength.

Material and Methods: One of the most important topics in stress-strength models is the estimation of the reliability parameter. We take a random sample from each stress and strength distribution. The scale parameters are estimated by maximum likelihood and, by the invariance property of this estimator, Rr,k is estimated; the maximum likelihood estimators of Rn1,k, Rn1,1, and Rn1,n2 are also provided. Using the Delta method, the asymptotic distribution of the estimator of Rr,k and an asymptotic confidence interval for Rr,k are obtained.

Results and Discussion: A simulation study for the n1 = 5 stress, n2 = 7 strength model is performed, estimating the stress-strength reliability of the 3-out-of-7:F system and of the series and parallel systems. The simulation results show that as the sample size increases, the absolute bias of the maximum likelihood estimator and the mean square error always decrease. Two real data sets are also considered, to which the Exponential and inverse Exponential distributions were fitted. In the n1 = 5 stress, n2 = 7 strength model, when r = 5 and k = 3, 7 the inverse Exponential distribution fits better than the Exponential, while for r = 5 and k = 1 the Exponential fits better.

Conclusion: In this article, we considered the n1-stress, n2-strength model when the stress and strength variables are inverse Exponential with different parameters. Using maximum likelihood, Rr,k is estimated and its asymptotic confidence interval derived. The simulation results show that the absolute biases of the estimator of Rn1,k are small; as the sample size increases, the absolute biases trend downward and the mean square error decreases steadily. The paper's results can be used for stress-strength models in which several stresses are applied to the system components simultaneously and each component has its own strength. Further research can consider other probability distributions.
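A quick Monte Carlo check of Rr,k under inverse Exponential stresses and strengths can be sketched as follows. The paper derives Rr,k in closed form and estimates it by maximum likelihood; the simulation here only illustrates the probability being estimated, with illustrative parameter values.

```python
import math
import random

def inv_exp_sample(lam, rng):
    # Inverse Exponential draw via inversion: CDF F(x) = exp(-lam / x),
    # so x = -lam / log(u) for u uniform on (0, 1).
    return -lam / math.log(rng.random())

def stress_strength_mc(r, k, n1, n2, lam_x, lam_y, n_sim=20000, seed=7):
    # Monte Carlo estimate of R_{r,k} = P(X_{r:n1} < Y_{k:n2}): the r-th
    # ordered stress falls below the k-th ordered strength.
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_sim):
        xs = sorted(inv_exp_sample(lam_x, rng) for _ in range(n1))
        ys = sorted(inv_exp_sample(lam_y, rng) for _ in range(n2))
        if xs[r - 1] < ys[k - 1]:
            hits += 1
    return hits / n_sim
```

For n1 = n2 = 1 and equal scale parameters, P(X < Y) = 1/2 by symmetry, which gives a quick sanity check; raising k toward n2 moves Rn1,k from the series case Rn1,1 up to the parallel case Rn1,n2.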

Author(s): 

ZANDI Z. | BEVRANI H.

Issue Info: 
  • Year: 

    2023
  • Volume: 

    16
  • Issue: 

    2
  • Pages: 

    417-434
Measures: 
  • Citations: 

    0
  • Views: 

    102
  • Downloads: 

    0
Abstract: 

Introduction In this study, we addressed parameter estimation in the linear regression model in the presence of multicollinearity when there is some prior information about the predictor variables that appears as a linear restriction on the model parameters. We estimated the parameters based on Liu-type linear shrinkage, preliminary test, Stein, and positive Stein strategies. The performance of the proposed estimators is compared to the Liu-type estimator in terms of relative efficiency via a Monte Carlo simulation study and a real data set. Material and Methods In the linear regression model, the ordinary least squares (OLS) estimator is the best linear unbiased estimator of the model parameters when the predictor variables are independent. The multicollinearity problem arises when there is near linear dependence among the predictor variables. This problem inflates the variance of the OLS estimator, so interpretations based on it are unreliable. The ridge and Liu-type estimators are two methods to combat multicollinearity. The Liu-type estimator is more efficient than the ridge estimator when there is a strong correlation between the predictor variables. We suppose that there is prior information about the parameter vector β in the form of a linear restriction Rβ = r, where R is a p2 × p matrix and r is a p2 × 1 vector. The restricted estimator of β is obtained by maximizing the log-likelihood function of the linear regression model under the linear restriction. The Liu-type restricted estimator can be defined in the presence of multicollinearity under the linear restriction. We propose the Liu-type shrinkage estimators using the Liu-type and Liu-type restricted estimators to improve the estimation of the parameters. We compare the performance of the Liu-type shrinkage estimators and the Liu-type estimator in terms of relative efficiency using a Monte Carlo simulation study. 
The simulation is conducted under different sample sizes n = 30, 50, correlation levels between the predictor variables ρ = 0.80, 0.90, 0.95, p1 = 5, and p2 = 3, 5, 7. To investigate the behaviour of the proposed estimators, we define Δ = ∥β − β0∥², where ∥·∥ is the Euclidean norm, β is the parameter vector in the simulated model and β0 is the true parameter vector in the candidate sub-model. We also apply the proposed estimation methods to a real data set. Results and Discussion The simulation results show that the performance of all estimators improves when p2 and ρ increase for fixed n. For all combinations of p2, ρ, and n, the Liu-type restricted estimator performs best at Δ = 0. As Δ moves away from zero, the simulated relative efficiencies (SREs) of all estimators decrease. As ρ approaches one, the performance of the Liu-type linear shrinkage estimator improves. Conclusion This paper suggested Liu-type shrinkage estimators for the linear regression model in the presence of multicollinearity under subspace information. A Monte Carlo simulation was conducted to compare the performance of the proposed estimators with the Liu-type estimator. The simulation results confirm that the proposed estimators perform better than the Liu-type estimator at Δ = 0 and near it for all p2, ρ, and n.
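As a rough illustration of the ingredients involved, the sketch below implements the classical Liu estimator and the restricted least squares estimator in Python. The paper's Liu-type shrinkage estimators combine such components, but the exact formulas used there differ; the toy data, the hypothetical restriction matrix R, and all variable names here are assumptions, not the authors' code:

```python
import numpy as np

rng = np.random.default_rng(42)

def ols(X, y):
    # Ordinary least squares: (X'X)^{-1} X'y
    return np.linalg.solve(X.T @ X, X.T @ y)

def liu_estimator(X, y, d):
    # Classical Liu estimator: (X'X + I)^{-1} (X'X + d I) beta_OLS, 0 < d < 1
    p = X.shape[1]
    XtX = X.T @ X
    b = np.linalg.solve(XtX, X.T @ y)
    return np.linalg.solve(XtX + np.eye(p), (XtX + d * np.eye(p)) @ b)

def restricted_ols(X, y, R, r):
    # Least squares corrected to satisfy the linear restriction R beta = r
    XtX_inv = np.linalg.inv(X.T @ X)
    b = XtX_inv @ X.T @ y
    A = XtX_inv @ R.T @ np.linalg.inv(R @ XtX_inv @ R.T)
    return b - A @ (R @ b - r)

# toy data; the restriction beta_3 = beta_4 = 0 encodes a candidate sub-model
X = rng.normal(size=(40, 5))
y = X @ np.array([1.0, 2.0, 0.0, 0.0, 3.0]) + 0.5 * rng.normal(size=40)
R = np.array([[0.0, 0.0, 1.0, 0.0, 0.0],
              [0.0, 0.0, 0.0, 1.0, 0.0]])
r = np.zeros(2)
b_liu = liu_estimator(X, y, d=0.5)
b_restricted = restricted_ols(X, y, R, r)
```

Note that at d = 1 the Liu estimator reduces exactly to OLS, and the restricted estimator satisfies Rβ̂ = r by construction; both properties make convenient sanity checks.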

Issue Info: 
  • Year: 

    2023
  • Volume: 

    16
  • Issue: 

    2
  • Pages: 

    435-448
Measures: 
  • Citations: 

    0
  • Views: 

    93
  • Downloads: 

    0
Abstract: 

Introduction Studying crime data has become one of the essential topics in the world due to its connection with human security. Analyzing this type of data can effectively prevent future crimes and identify spatial patterns and factors that facilitate the commission of crimes, so that crime-prone areas can be controlled. Most of the time, crime data have a spatio-temporal structure that gives rise to different spatio-temporal patterns. Therefore, spatio-temporal monitoring of crime data is essential for identifying factors that cause crime and for preventing it. Crime events are an important issue in many cities, and the spatio-temporal Bayesian approach helps identify crime patterns and hotspots. In Bayesian analysis of spatio-temporal crime data, there is no closed form for the posterior distribution because of its non-Gaussian form and the existence of latent variables. In this case, we face challenges such as high-dimensional parameters, extensive simulation and time-consuming computation when applying MCMC methods. Material and Methods In this paper, we apply the integrated nested Laplace approximation (INLA) to analyze crime data in Colombia. To describe the above concepts, a three-stage hierarchical model is considered. The advantages of this method include the estimation of criminal events at a specific time and location and the exploration of unusual patterns across places. Results and Discussion The Bayesian analysis of crime data is usually performed as Bayesian inference on purely spatial or purely temporal patterns. However, such spatial-only or temporal-only Bayesian analyses are not suitable for crime data. In this article, in a case study, a Bayesian hierarchical spatio-temporal analysis of crime data in Colombia was carried out using the INLA approach, which accounts for spatio-temporal dependence and makes the model more flexible in detecting unusual patterns. Exploratory data analysis is also discussed, detecting areas with unusual behaviour over time. 
Four different models were fitted to the data, and the best model, which includes spatio-temporal interaction, was selected using the DIC criterion. The results identify the most important centre of crime in the Kennedy area of Bogotá, as well as the period with the highest crime rate. A hierarchical spatio-temporal Bayesian analysis of these data was then carried out with the INLA approach. Conclusion The advantage of this Bayesian approach is that it includes spatio-temporal correlation effects in the model and makes the model flexible in detecting areas with abnormal behaviour over time and in different places. For this purpose, four different models, including main effects and spatio-temporal combinations, were fitted to the crime data. The best model, which includes the spatio-temporal interaction effect, was selected using the deviance information criterion. A comprehensive and scientific comparison of the two Bayesian approaches, INLA and MCMC, in terms of accuracy, speed, accessibility and ease of use requires independent research: the various sampling schemes of MCMC algorithms, and the different variants of INLA, make accuracy hard to compare, and how parallel computation is used in each method also affects any speed comparison, so simply comparing outputs cannot establish the advantage of one method over the other.
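INLA itself is provided by the R-INLA package, but the deviance information criterion used above for model selection is easy to illustrate on its own. The following Python sketch is an assumption-laden toy: Poisson counts and simulated gamma draws stand in for real data and for actual INLA or MCMC posterior samples of the mean vector, and it computes DIC = D̄ + pD with pD = D̄ − D(posterior mean):

```python
import numpy as np
from math import lgamma

def poisson_deviance(y, mu):
    # Deviance D = -2 log-likelihood for Poisson counts y with mean vector mu
    log_lik = np.sum(y * np.log(mu) - mu - np.array([lgamma(v + 1) for v in y]))
    return -2.0 * log_lik

def dic(y, mu_samples):
    # DIC = Dbar + pD, where pD = Dbar - D(posterior mean of mu)
    devs = np.array([poisson_deviance(y, mu) for mu in mu_samples])
    dbar = devs.mean()
    dhat = poisson_deviance(y, mu_samples.mean(axis=0))
    p_d = dbar - dhat
    return dbar + p_d, p_d

# toy "posterior": gamma draws standing in for INLA/MCMC output
rng = np.random.default_rng(7)
y = np.array([3, 1, 4, 2, 0, 5])
mu_samples = rng.gamma(shape=2.0, scale=1.5, size=(500, 6))
dic_value, p_d = dic(y, mu_samples)
```

Since the Poisson deviance is convex in the mean vector, Jensen's inequality guarantees pD ≥ 0, so the effective number of parameters is always non-negative here; smaller DIC indicates the preferred model.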

Author(s): 

Moghimbeygi M.

Issue Info: 
  • Year: 

    2023
  • Volume: 

    16
  • Issue: 

    2
  • Pages: 

    449-468
Measures: 
  • Citations: 

    0
  • Views: 

    83
  • Downloads: 

    0
Abstract: 

Introduction Statistical shape analysis is one of the fields of multivariate statistics whose main focus is the geometric structure of objects. This method of analysis is widely used in many scientific fields, such as medicine and morphology. One of the tools for diagnosing diseases or determining animal species is images and the shapes extracted from them. Introducing methods for classifying shapes can be a way to determine the class of each observation. Usually, in regression modelling, the explanatory and dependent variables are quantitative. However, one may want to measure the relationship between an explanatory variable (with continuous values) and a dependent variable with qualitative values. One option is the multinomial logistic regression model. Therefore, a semiparametric multinomial logistic regression model for classifying shape data is introduced in this paper. Material and Methods The power-divergence criterion is a measure for hypothesis testing in multinomial data. This criterion is used to define the kernel function of the explanatory variables. The model is a multinomial logistic regression model based on a kernel function of the explanatory variables and an intercept. Since the geometric structure and size of the shapes play a key role in their classification, the kernel function is determined based on shape distances. The smoothing parameter was estimated using the least squares cross-validation method, and the model parameters were estimated using a neural network. Results and Discussion The shape space is a manifold, but most of the methods presented in the literature for classifying shapes work in the shape tangent space or use linear transformations. Since mapping from the manifold to a linear space loses information, using tangent and linear spaces reduces classification accuracy. Therefore, the shape space itself is used to classify the shape data. 
The performance of the model was investigated in a simulation study and on two real data sets. The two real data sets used in this paper are taken from the shape package in R software. The first data set concerns schizophrenia patients and a control group, and the second concerns the skulls of three species of apes of two sexes. The classification of these data showed accuracies of 82% and 84%, respectively. A comparison was also made with previous methods on a real data set, which showed the good performance of our approach relative to the other two techniques. Conclusion Since suitable shape-space distances are used in the nonparametric kernel function, the introduced method performs better than those based on Euclidean spaces. The ability to use other shape distances, such as the partial Procrustes, full Procrustes and Riemannian distances, also makes the model more flexible for classifying different types of shape data. On the other hand, the size-and-shape distance can be used in the kernel function to classify data whose size plays a key role in their geometric structure. Furthermore, since few statistical distributions have been introduced on the shape space, nonparametric methods can be helpful in the analysis of shape data. However, using nonparametric methods in the shape space is computationally time-consuming.
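To give a flavour of distance-based kernel classification, the sketch below implements a simple Nadaraya-Watson-style kernel classifier in Python. This is a deliberate simplification of the paper's semiparametric multinomial logistic model: the Euclidean distance and Gaussian kernel here are stand-ins for the Procrustes/Riemannian shape distances and the power-divergence-based kernel actually used, and all names and data are illustrative assumptions:

```python
import numpy as np

def kernel_classify(x_new, X_train, labels, h):
    # Weight each training observation by a Gaussian kernel of its distance to
    # x_new, sum the weights per class, and normalise into class "probabilities".
    d = np.linalg.norm(X_train - x_new, axis=1)  # Euclidean stand-in for a shape distance
    w = np.exp(-0.5 * (d / h) ** 2)
    classes = np.unique(labels)
    scores = np.array([w[labels == c].sum() for c in classes])
    return classes[np.argmax(scores)], scores / scores.sum()

# toy "shapes" as coordinate vectors: two well-separated groups
X_train = np.vstack([np.zeros((5, 2)), 10.0 * np.ones((5, 2))])
labels = np.array([0] * 5 + [1] * 5)
pred, probs = kernel_classify(np.array([0.2, -0.1]), X_train, labels, h=1.0)
```

The smoothing parameter h plays the same role as the paper's cross-validated bandwidth: small h makes the classifier local (nearest-neighbour-like), while large h flattens the class weights.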

Issue Info: 
  • Year: 

    2023
  • Volume: 

    16
  • Issue: 

    2
  • Pages: 

    469-492
Measures: 
  • Citations: 

    0
  • Views: 

    93
  • Downloads: 

    0
Abstract: 

Introduction In real-life phenomena, we often encounter discrete data sets, such as counts of cancer cells, numbers of rainy days, stock volumes, etc. Statistical inference based on correlated count data is also frequently performed in biomedical studies, for example on counts of cancer cells, numbers of patients with contagious diseases, and numbers of successful trials. Hence, convenient models for count time series are needed. The literature on integer-valued time series models has been built on the binomial thinning operator and its generalizations. As pioneers of integer-valued autoregressive (INAR) models, we can cite McKenzie (1986) and Al-Osh and Alzaid (1987). This paper proposes a new flexible discrete Exponential-Weibull (DEW) distribution and a new INAR(1) model based on DEW innovations. The new DEW distribution has several appealing statistical properties, handling all forms of dispersion, skewness, and kurtosis, which in turn allows the modeling of count time series observations. In this context, the INAR(1) process is developed with the binomial thinning operator. The efficiency and superiority of the process in fitting counts of deaths due to COVID-19 are compared with other competing models. Material and Methods This paper concentrates on the new discrete distribution and its corresponding INAR(1) model. Several parametric and non-parametric estimation approaches are considered, including the conditional maximum likelihood (parametric), conditional least squares (non-parametric), and Yule-Walker (non-parametric) estimation methods. Several INAR(1) processes are considered for fitting the COVID-19 data sets. The goodness-of-fit measures include the Akaike information criterion (AIC), Bayesian information criterion (BIC), consistent Akaike information criterion (CAIC), and Hannan-Quinn information criterion (HQIC). 
Results and Discussion The simulation comparison is conducted in terms of mean square error (MSE) and shows the superiority of the conditional maximum likelihood estimation method. To evaluate the performance of the estimators in terms of MSE, 100 iterations are considered for different sample sizes. Based on the goodness-of-fit measures, it is concluded that the DEW-INAR(1) process is preferred among the INAR(1) processes considered. Conclusion The main focus of the manuscript is to introduce the discrete version of the Exponential-Weibull distribution and to model its corresponding integer-valued autoregressive process. The flexibility and comprehensiveness of the discrete Exponential-Weibull distribution in fitting different types of count data are deduced by examining its statistical characteristics. Parameter estimation methods and Monte Carlo simulation studies are also presented. According to the simulation results, the conditional maximum likelihood parametric estimation method performs better than the non-parametric approaches. Using the COVID-19 data, the efficiency of the new process is confirmed in comparison with classical INAR(1) models.
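The INAR(1) recursion with binomial thinning, X_t = α ∘ X_{t−1} + ε_t with α ∘ X ~ Binomial(X, α), and its Yule-Walker estimates can be sketched directly. In the snippet below, Poisson innovations stand in for the paper's DEW innovations (an assumption for simplicity), and all function and variable names are illustrative; it exploits the standard facts that the lag-1 autocorrelation of an INAR(1) process equals α and that the stationary mean is μ_ε / (1 − α):

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_inar1(alpha, n, innov_mean, x0=0):
    # INAR(1): X_t = alpha o X_{t-1} + eps_t, binomial thinning of the past count
    # plus a fresh innovation (Poisson here, DEW in the paper)
    x = np.empty(n, dtype=np.int64)
    x[0] = x0
    for t in range(1, n):
        x[t] = rng.binomial(x[t - 1], alpha) + rng.poisson(innov_mean)
    return x

def yule_walker_inar1(x):
    # Yule-Walker: alpha_hat is the sample lag-1 autocorrelation;
    # the innovation mean follows from mean(X) = innov_mean / (1 - alpha)
    xc = x - x.mean()
    alpha_hat = (xc[1:] @ xc[:-1]) / (xc @ xc)
    innov_hat = x.mean() * (1.0 - alpha_hat)
    return alpha_hat, innov_hat

x = simulate_inar1(alpha=0.5, n=20_000, innov_mean=2.0, x0=4)
alpha_hat, innov_hat = yule_walker_inar1(x)
```

With a long simulated series, the Yule-Walker estimates land close to the true α = 0.5 and innovation mean 2.0, mirroring the non-parametric estimation route the abstract compares against conditional maximum likelihood.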
