Archive

Scientific Information Database (SID) - Trusted Source for Research and Academic Resources
Author(s): 

SEDAGHAT A. | MOHAMMADI N.

Issue Info: 
  • Year: 

    2018
  • Volume: 

    7
  • Issue: 

    3
  • Pages: 

    1-15
Measures: 
  • Citations: 

    0
  • Views: 

    855
  • Downloads: 

    168
Abstract: 

Reliable image matching is a vital step in many photogrammetric processes. Most image matching methods are based on local feature algorithms because of their robustness to significant geometric and radiometric differences. A local feature is generally defined as a distinct structure whose properties differ from its immediate neighbourhood. Local-feature-based image matching methods generally consist of three main steps: feature detection, feature description and feature correspondence. In the feature detection step, distinctive structures are extracted from the images. In the feature description step, the extracted features are represented with descriptors that characterize them. Finally, in the correspondence step, the features extracted from the two images are matched using particular similarity measures. In this paper, an automatic image matching approach based on affine invariant features is proposed for wide-baseline images with significant viewpoint differences. The proposed approach consists of three main steps. In the first step, the well-known Hessian-affine feature detector is used to extract local affine invariant features in the image pair. The Hessian-affine detector uses a multi-scale representation and an iterative affine shape adaptation to deal with significant viewpoint differences, including large scale changes. To improve the Hessian-affine detector's capability, an advanced strategy based on the well-known UR-SIFT (uniform robust scale invariant feature transform) algorithm is applied to extract effective, robust, reliable and uniformly distributed elliptical local features. For this purpose, a selection strategy based on stability and distinctiveness constraints is applied across the full distribution of location and scale. In the second step, a distinctive descriptor based on the MROGH (Multi-Support Region Order-Based Gradient Histogram) method, which is robust to significant geometrical distortions, is generated for each extracted feature.
The main idea of the MROGH method is to pool rotation invariant local features based on intensity orders. Instead of assigning a main orientation to each feature, a locally rotation invariant scheme is used: a rotation invariant coordinate system is used to compute the pixel gradients. To compute the descriptor, the pixels in the feature region are partitioned into several groups based on their intensity orders. Then, a specific histogram based on the pixel gradient magnitudes and orientations is calculated for each group. Finally, the MROGH descriptor is generated by combining the values of the gradient histograms from all groups into a single feature vector. Lastly, feature correspondence and blunder detection are performed using epipolar geometry based on the fundamental matrix. Initial matched features that are not consistent with the estimated fundamental matrix are identified as false matches and eliminated; a distance threshold TE = 1 pixel between each feature point and its epipolar line is used as the elimination condition. Experimental results on six close-range images show that the proposed method improves matching performance compared with several state-of-the-art methods, including MSER-SIFT, UR-SIFT and A-SIFT, in terms of the number of correctly matched features, recall and positional accuracy. Based on the matching results, the proposed integrated method can be readily applied to a variety of photogrammetric and computer vision applications such as relative orientation, bundle adjustment, structure from motion and simultaneous localization and mapping (SLAM).
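The epipolar blunder-detection step described above can be sketched in a few lines of numpy; this is a minimal illustration, assuming the fundamental matrix F has already been estimated and that the matched points are given as (N, 2) pixel arrays (the function and parameter names are illustrative, not the authors' code):

```python
import numpy as np

def epipolar_distances(F, pts1, pts2):
    """Distance from each point in image 2 to the epipolar line
    l2 = F @ x1 induced by its match in image 1."""
    ones = np.ones((pts1.shape[0], 1))
    x1 = np.hstack([pts1, ones])          # homogeneous coordinates
    x2 = np.hstack([pts2, ones])
    lines = (F @ x1.T).T                  # epipolar lines in image 2
    num = np.abs(np.sum(lines * x2, axis=1))
    den = np.hypot(lines[:, 0], lines[:, 1])
    return num / den

def filter_matches(F, pts1, pts2, t_e=1.0):
    """Keep matches whose epipolar distance is below the threshold
    (TE = 1 pixel in the paper)."""
    return epipolar_distances(F, pts1, pts2) < t_e
```

For a rectified pair with a horizontal baseline, F = [[0,0,0],[0,0,-1],[0,1,0]] and the distance reduces to the row difference |y1 - y2|, which makes the threshold easy to sanity-check.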

Issue Info: 
  • Year: 

    2018
  • Volume: 

    7
  • Issue: 

    3
  • Pages: 

    17-28
Measures: 
  • Citations: 

    0
  • Views: 

    773
  • Downloads: 

    658
Abstract: 

The history of the inertial measurement unit (IMU) dates back to 1930. At that time, because of serious limitations in size, cost and power consumption, the technology was not applicable to small consumer devices. Recent developments in MEMS inertial sensors, which are lighter, smaller and cheaper than other types of inertial systems, have made it possible to place inertial sensors in a variety of small devices such as today's mobile phones. In this article, the various types of inertial systems are first introduced, with a focus on MEMS-based systems, and the mathematical error model comprising bias, scale factor and axis non-orthogonality is presented. In summary, each of these errors can be described as follows. Bias is a measurement error independent of the forces or rates applied to the sensor. The scale factor expresses the relationship between the sensor output and the force or rate applied to the sensor; in other words, this error is the difference between the ideal observation and the sensor output, and it produces linear drifts in the observations of inertial navigation systems. Axis non-orthogonality is a manufacturing defect, namely the misalignment of the three measurement axes of the accelerometer or gyroscope. The following approaches are presented for static calibration of inertial systems: the six-position static (SPS), multi-position (MP) and recursive least squares (RLS) methods, used to determine the IMU error parameters. In this research, the OARS software is used, a data collection application developed for the Android operating system. It provides accelerometer, gyroscope and magnetometer observations at 100 Hz and GPS observations at 1 Hz. The sensor observations are stored by the software in csv format in the phone's memory. Only the gyroscope and accelerometer sensors are used in this study.
The results showed, first, that the 12-parameter mathematical model exhibits higher dependency between the parameters than the 9-parameter model. Second, averaging the raw observations collected in each static position yields better results than entering the raw observations directly into the equations or using observations denoised by the discrete wavelet transform; the measurement unit error values for the multi-position method in these three modes were 0.082345, 0.083140 and 0.082952, respectively. Third, the error values obtained by the three proposed methods were 0.146245, 0.161520 and 0.082345, respectively; this, together with the special data collection requirements of the other methods (exact alignment of the inertial system) for determining the calibration parameters, shows that the multi-position method is better than the other two.
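The 9-parameter error model summarized above (a per-axis bias, a per-axis scale factor, and three small non-orthogonality angles) can be sketched as a correction applied to a raw triad reading; the matrix layout and names below are assumptions for illustration, not the OARS software:

```python
import numpy as np

def correct_accel(raw, bias, scale, nonorth):
    """Invert the 9-parameter static error model: subtract the bias,
    undo the per-axis scale factors, and undo the small
    non-orthogonality angles between the measurement axes.

    raw     : (3,) measured specific force
    bias    : (3,) per-axis biases
    scale   : (3,) per-axis scale-factor errors
    nonorth : (3,) small misalignment angles (rad), small-angle model
    """
    # Lower-triangular non-orthogonality matrix (small-angle model).
    T = np.array([[1.0,        0.0,        0.0],
                  [nonorth[0], 1.0,        0.0],
                  [nonorth[1], nonorth[2], 1.0]])
    S = np.diag(1.0 + scale)              # scale-factor matrix
    return np.linalg.inv(T @ S) @ (raw - bias)
```

With all nine parameters at zero the correction is the identity, which is a convenient check when fitting the parameters from static positions.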

Issue Info: 
  • Year: 

    2018
  • Volume: 

    7
  • Issue: 

    3
  • Pages: 

    29-38
Measures: 
  • Citations: 

    0
  • Views: 

    1022
  • Downloads: 

    200
Abstract: 

Determination of the displacement vectors due to the landslide phenomenon is the initial stage in generating landslide inventory and susceptibility maps. The main objective of this paper is to evaluate the achievable accuracy of a simple two-dimensional localized geometric transformation method for determining landslide displacement vectors. For this purpose, two IRS P5 backward images taken at two different revisit epochs over Ardabil Province are used, and the image-based displacement vectors are compared with ground measurements. The 2D localized transformation approach may be utilized to produce small- and medium-scale country-wide landslide inventory maps and for continuous monitoring of areas susceptible to large landslides. Given that the coverage of the landslide zones in satellite imagery is usually small compared with the entire image frame, the achieved accuracy figures suggest that the localized transformation approach may fulfill the required demands. Onsite displacement measurements performed with GPS receivers, conducted by the ISTA SANJ DAGHIGH Co., were utilized to assess the accuracy of the localized transformation method. To compare the displacement vectors generated from the images with the ground observations, spatial registration between the points measured on the image and those measured on the ground was conducted using the satellite-supplied RPCs, through which the ground coordinates of the measured image points are calculated. However, because the supplied RPCs have a systematic shift error, this shift must be eliminated before generating the ground coordinates of the measured image points. This is achieved by identifying fixed feature points (outside the landslide zone) on the stereo P5 images for which ground observations were available from the large-scale map of the area.
The RPC shift error is then calculated by comparing the ground coordinates of the fixed points generated by the RPCs and their corresponding ground coordinates extracted from the large scale map. On the other hand, due to the lack of coincidence between the dates of the image acquisition and the ground observations, a temporal interpolation was also necessary. The temporal registration was performed using a linear interpolation approach assuming a linear land displacement. The average landslide speed on the ground was adopted as the amount of linear displacement trend for the period over which the assessment was conducted. The outline of the evaluation approach and a detailed description of the test results and the final accuracy figures are presented in this paper.
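The temporal registration described above can be sketched as a simple rescaling of the image-derived displacement to the GPS survey period, under the paper's assumption of a constant (linear) landslide speed; the dates and names below are illustrative:

```python
import datetime as dt
import numpy as np

def interpolate_displacement(d_total, t_img1, t_img2, t_gps1, t_gps2):
    """Scale a displacement vector measured between the two image
    acquisition dates to the (non-coincident) GPS survey period,
    assuming a constant landslide speed over the whole interval."""
    image_days = (t_img2 - t_img1).days
    gps_days = (t_gps2 - t_gps1).days
    return np.asarray(d_total, dtype=float) * gps_days / image_days
```

If the images span 20 days and the GPS survey spans the first 10 of them, the linear model simply halves the image-derived vector.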

Issue Info: 
  • Year: 

    2018
  • Volume: 

    7
  • Issue: 

    3
  • Pages: 

    39-55
Measures: 
  • Citations: 

    0
  • Views: 

    1121
  • Downloads: 

    505
Abstract: 

Urban land-use allocation is a complicated problem due to the diversity of land-uses, the large number of parcels, and the different stakeholders with various and conflicting interests. A variety of studies present many criteria in this regard, and different methods have been proposed for the optimal allocation of urban land-uses. The outputs of these methods are near-optimum layouts that offer a suitable land-use for every land unit. However, because of limitations such as stakeholders' disagreement with a specific land-use or the high cost of conversion to a certain land-use, planners cannot always assign the most desirable land-use to every parcel and, as a result, have to fall back on the next priorities among the land-uses. Thus, prioritizing land-uses for parcels, alongside optimal land-use allocation, can be essential in urban land-use planning. Furthermore, due to the approximate nature of land-use evaluation criteria, fuzzy calculations can be more compatible with urban land-use allocation models. Therefore, in this study, a parcel-level urban land-use prioritization model based on fuzzy calculation is presented. In the proposed model, the evaluation criteria are first estimated by fuzzy calculations for each parcel. The urban land-use evaluation criteria include neighborhood effects (i.e. compatibility, dependency, and proximity), physical suitability, and per capita standards. The compatibility and dependency factors depend on the service level of each land-use: each land-use is defined at three service levels (local, district and regional), and a different radius of effect is considered for each level. Furthermore, the suitability criterion is calculated from the characteristics and physical properties of the land units for each land-use as a fuzzy number. The per-capita criterion is calculated as the per capita violation in a fuzzy manner and is considered in the land-use prioritization.
After the fuzzy calculation of the criteria, the importance of each criterion must be determined. In the proposed model, the criterion weights are estimated by both subjective and objective weighting approaches: expert knowledge is used for the subjective weights, and Shannon's entropy method is applied for the objective weights. The fuzzy TOPSIS technique is then used to prioritize land-uses for each parcel. In fuzzy TOPSIS, after normalization and applying the weights to each criterion, the positive and negative ideal points are calculated from the best and worst values of the criteria. Finally, by calculating the distance of each land-use (as an alternative) from the worst and best ideal points, the land-uses are ranked for each parcel. This procedure is repeated for all parcels in the study area, so that all land-uses are ranked for all parcels. The proposed model was implemented on spatial data of region 7, district 1 of Tehran. Ranking the urban land-uses for the parcels in the study area showed that 77.2 percent of the current land-uses have the first priority for their own parcels. The remaining 22.8 percent of parcels were not allocated their first-priority land-use, and the land-use of these parcels can be susceptible to change. Among the land-uses of the study area, in terms of susceptibility to change, residential units are in the best situation and industrial units in the worst. As future research, different scenarios can be proposed for the optimal allocation of urban land-uses based on the proposed model, taking stakeholders' preferences into account; approaches such as multi-agent systems and game theory can be used to model these preferences.
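The ranking step above can be sketched with a minimal crisp TOPSIS; the paper's fuzzy variant replaces the scalar scores with fuzzy numbers and fuzzy distance measures, which this sketch deliberately omits:

```python
import numpy as np

def topsis_rank(decision, weights, benefit):
    """Rank alternatives (rows) against criteria (columns).

    decision : (m, n) score matrix
    weights  : (n,) criterion weights summing to 1
    benefit  : (n,) bool, True where larger values are better
    Returns closeness coefficients; higher means higher priority.
    """
    norm = decision / np.linalg.norm(decision, axis=0)   # vector-normalize
    v = norm * weights                                   # weighted matrix
    ideal = np.where(benefit, v.max(axis=0), v.min(axis=0))
    anti = np.where(benefit, v.min(axis=0), v.max(axis=0))
    d_pos = np.linalg.norm(v - ideal, axis=1)            # distance to best
    d_neg = np.linalg.norm(v - anti, axis=1)             # distance to worst
    return d_neg / (d_pos + d_neg)
```

An alternative that dominates on every benefit criterion gets a closeness of exactly 1, which makes small examples easy to verify by hand.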

Issue Info: 
  • Year: 

    2018
  • Volume: 

    7
  • Issue: 

    3
  • Pages: 

    57-73
Measures: 
  • Citations: 

    0
  • Views: 

    927
  • Downloads: 

    219
Abstract: 

There are more than 200 types of zoonotic diseases in the world, and leptospirosis is the most important. Leptospirosis occurs mostly in areas with a tropical climate and abundant rainfall. There are no precise worldwide statistics for this disease, and records are underestimated for several reasons. Hence, the World Health Organization (WHO) has named leptospirosis a neglected tropical disease, and more research is needed in this field. When the paddy season begins in the north of Iran, the disease spreads and in severe cases leads to death. Leptospirosis is recognized globally as a multi-faceted disease, and failure to recognize or treat it in time can lead to the death of patients. The main cause of the spread of this disease is a bacterium present in the bodies of domestic and wild animals, especially mice and dogs (as reservoirs of the disease), which is transmitted through urine or feces to the environment. As a result, the bacteria can enter the human body through skin injuries or contact with contaminated soil and water. Environment and occupation are very influential in the spread of the disease, which is recognized as a work-related illness and can be dangerous in both urban and rural areas. The disease can emerge through agriculture, livestock keeping, butchery, recreational activities and water sports, poverty, travel to tropical areas, and any activity that leads to contact with contaminated water, soil or environment. It is more prevalent among fishermen and farmers, especially sugar cane farmers and workers, and it matters greatly because it causes problems such as inability to work during the planting and harvesting seasons, as well as medical costs and even mortality. Compared with the other provinces, Guilan province has recently had the highest rate of leptospirosis. Therefore, studying and modeling this disease in the province is of great importance.
In this paper, the disease statistics in rural areas during 2009-2011 were taken as the dependent variable, and five environmental variables were considered as independent variables for modeling the spatial distribution. Considering the important effects of the bandwidth and the weighting function on the modeling results, the efficiency of fixed and adaptive kernels with Bi-Square and Gaussian weighting functions was investigated. Two criteria, the MSE and the coefficient of determination, were utilized to evaluate the results. The results showed that the adaptive kernel and the Bi-Square function performed better than the fixed kernel and the Gaussian function, respectively. Among the bandwidth selection criteria, AIC, CV and BIC played the most meaningful roles, in that order. Among the environmental variables, temperature, humidity and evaporation showed a positive relationship with the disease, while elevation and slope showed a negative relationship. The disease distribution maps indicated that the central regions of Guilan province are more prone to this disease than the other areas, so management and control of the disease in these areas is very important. Finally, all results were assessed with validation criteria; decision makers can use this information for prevention programs and for allocating budget to the at-risk areas.
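The abstract does not name the regression model explicitly, but the kernel, bandwidth and AIC/CV/BIC discussion matches geographically weighted regression (GWR); under that assumption, a local bi-square-weighted fit at one regression point can be sketched as:

```python
import numpy as np

def bisquare_weights(d, bandwidth):
    """Bi-square kernel: (1 - (d/b)^2)^2 inside the bandwidth, 0 outside."""
    return np.where(d < bandwidth, (1.0 - (d / bandwidth) ** 2) ** 2, 0.0)

def gwr_at_point(X, y, dists, bandwidth):
    """Weighted least-squares fit at one regression point:
    beta = (X' W X)^-1 X' W y, with W from the bi-square kernel.

    X     : (n, p) design matrix (first column of ones for an intercept)
    y     : (n,) responses
    dists : (n,) distances from the observations to the regression point
    """
    W = np.diag(bisquare_weights(dists, bandwidth))
    XtW = X.T @ W
    return np.linalg.solve(XtW @ X, XtW @ y)
```

Repeating the fit at every location, with the bandwidth chosen by AIC, CV or BIC, yields the spatially varying coefficient surfaces the paper maps.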

Issue Info: 
  • Year: 

    2018
  • Volume: 

    7
  • Issue: 

    3
  • Pages: 

    75-88
Measures: 
  • Citations: 

    0
  • Views: 

    888
  • Downloads: 

    557
Abstract: 

Change detection (CD) is considered an important issue among researchers due to its applications in different fields such as urban management, environmental monitoring and damage assessment. Different methods and techniques have been proposed for the CD process. One of the most common categorizations in the field of CD is into supervised and unsupervised techniques. Unsupervised CD techniques are based on image information alone and do not require any additional information such as training samples. Several unsupervised change detection methods have been proposed. Some of them are based on clustering and consider two cluster centers for the entire image. This can be problematic because changed and unchanged pixels might not behave consistently across the entire image, which can lead to misclassification. Other methods perform the clustering at the block level of the image; however, changed or unchanged pixels might not be present in every block simultaneously, which can also lead to misclassification. In this paper, a novel unsupervised CD method based on the K-Means clustering algorithm improved by particle swarm optimization (PSO) is proposed to solve the above-mentioned problems. The proposed method comprises five main steps: 1) preprocessing (radiometric and geometric correction); 2) generation of the difference image and feature extraction (neighborhood pixel information); 3) splitting the difference image into non-overlapping blocks (block analysis); 4) the proposed PSO-K-Means clustering and creation of the binary change map; and 5) accuracy assessment (i.e. absolute and relative accuracy assessment). The main goal of the proposed PSO-K-Means method is automatic detection of the changed areas between bi-temporal remote sensing images.
In most regions there is a spectrum of different changes, so the aim of the proposed method is to detect this spectrum of changed areas at the block level while also preserving global image information. To achieve this aim, a novel cost function that considers K-Means clustering at the block level and over the entire image simultaneously is presented in this paper. To find the optimum cluster centers with the minimum cost, in other words the optimum feature vector corresponding to the optimum changed and unchanged cluster centers, the PSO method is employed. Three cost function comparisons were implemented to verify the necessity of the proposed cost function. Moreover, a maximum voting method was applied to combine the change maps of the different bands and improve the CD result. Finally, a sensitivity analysis was carried out to confirm the validity of the proposed PSO-K-Means method, covering different block sizes and different initial populations and iteration counts in the optimization process. The results show the stability of the proposed method with respect to the initial population and iteration parameters, while it is sensitive to the image block size. The experiments were applied to two data sets (Alaska and Lake Urmia), both acquired by the Landsat satellite with seven spectral bands, a 30 m pixel size and the same image size (400×400 pixels). The ground truth image was created manually by an expert through visual analysis of the input images. The proposed method improved the change detection accuracy by 8-12% compared with common methods (i.e. FCM, Otsu thresholding, K-Means and K-Medoids) on both the Alaska and Lake Urmia data sets. The optimum block size was determined experimentally.
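A plain two-centre K-Means pass over a difference image, as in step 4 of the pipeline but without the PSO refinement and the block-level cost term, can be sketched as:

```python
import numpy as np

def kmeans_change_map(diff, iters=20, seed=0):
    """Cluster difference-image magnitudes into 'changed' / 'unchanged'
    with plain two-centre K-Means. (The paper couples this with a PSO
    search and a combined block/global cost function.)"""
    x = diff.ravel().astype(float)
    rng = np.random.default_rng(seed)
    centres = rng.choice(x, size=2, replace=False)   # initial centres
    for _ in range(iters):
        # assign each pixel to its nearest centre, then update centres
        labels = np.abs(x[:, None] - centres[None, :]).argmin(axis=1)
        for k in range(2):
            if np.any(labels == k):
                centres[k] = x[labels == k].mean()
    changed = centres.argmax()            # larger difference = changed
    return (labels == changed).reshape(diff.shape)
```

In the full method, PSO searches over candidate centre pairs to minimize the combined cost rather than iterating the update rule above.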

Issue Info: 
  • Year: 

    2018
  • Volume: 

    7
  • Issue: 

    3
  • Pages: 

    89-107
Measures: 
  • Citations: 

    0
  • Views: 

    1056
  • Downloads: 

    234
Abstract: 

Nowadays, the urban sprawl phenomenon can be seen in many cities in both developing and developed countries. Urban sprawl is a particular kind of urban growth that brings many negative effects. The analysis of urban growth using spatial and attribute data of the past and present is regarded as one of the basic requirements of urban geographical studies, future planning, and the establishment of policies for urban development. Various methods have been used to investigate the physical growth of cities; remote sensing and geographical information systems are the most up-to-date, precise and economical among them. Since physical expansion occurs on temporal and spatial scales, the built-up land use can be extracted from multi-temporal remote sensing data. Then, by comparing these data across different time periods using statistical and spatial analyses in a geographical information system, the amount and rate of change were evaluated and the trend was modeled for use in urban planning. In this study, six temporal satellite images spanning 44 years (1972, 1984, 1992, 2000, 2008 and 2016) were classified to determine the urban extent and growth of Tehran in 8 geographical directions within a circular region. To analyze the data, Pearson's chi-square statistic, Shannon's entropy model and the degree-of-goodness index were utilized. Pearson's chi-square statistic quantifies the difference in urban growth across time periods and can be used along with Shannon's entropy model to determine changes and scattering in the expansion of the urban boundaries, while the degree-of-goodness index can be used to assess urban growth quality. In previous studies, these models have been used to analyze spatial phenomena of the city, such as trends in the city's structure and shape with respect to spatial expansion and land use changes.
In this study, the method of analyzing the spatial data differs substantially from the literature: the CA-Markov model is used to predict urban growth in the year 2024, and statistical parameters such as Shannon's entropy and Pearson's chi-square are used to analyze the manner, amount and goodness of urban growth in the past and future. In this way, it was found that the city of Tehran exhibits a high degree of freedom, high sprawl, and negative goodness in its urban growth. The total sprawl value equals 4.71, which is dramatically higher than half of the maximum value. As a result, it can be inferred that the city experienced high sprawl during 1972-2016 and that this trend will continue in the coming years. The results indicate that the city did not experience good urban growth in any of the time periods. The degree-of-goodness index across the various sectors follows the same trend as across the time periods; nonetheless, the sectors and time periods can be compared with each other. Compared with the other sectors, the North and North-East sectors show the best and the worst urban growth, respectively. This study establishes the foundation for an in-depth understanding of urban expansion in Tehran and for optimizing future urban planning.
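Shannon's entropy over the directional sectors, used above to quantify sprawl, can be sketched as follows; the paper does not state its exact normalization, so this follows the common relative-share form, where values near log(n) indicate dispersed growth and half of log(n) is the usual sprawl threshold:

```python
import numpy as np

def shannon_entropy(builtup_areas):
    """Shannon entropy of the built-up area shares across n zones
    (here: directional sectors). 0 = fully concentrated growth;
    log(n) = perfectly even (maximally dispersed) growth."""
    p = np.asarray(builtup_areas, dtype=float)
    p = p / p.sum()                       # convert areas to shares
    p = p[p > 0]                          # convention: 0 * log 0 = 0
    return float(-(p * np.log(p)).sum())
```

With 8 sectors the maximum is log(8) ≈ 2.08; growth concentrated in a single sector gives exactly 0.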

Issue Info: 
  • Year: 

    2018
  • Volume: 

    7
  • Issue: 

    3
  • Pages: 

    109-125
Measures: 
  • Citations: 

    0
  • Views: 

    854
  • Downloads: 

    268
Abstract: 

The ionosphere is the upper part of the atmosphere, extending from 80 to 1200 km above the Earth's surface. The free electrons and ions in the ionosphere affect the propagation speed of signals such as satellite positioning and satellite altimetry signals. Although dual-frequency measurements can remove the ionospheric delay effect, dual-frequency observations from permanent GPS stations can also be utilized to produce ionosphere maps of vertical total electron content (VTEC) values. For instance, the International GNSS Service (IGS) analysis centers produce daily global ionosphere maps (GIMs) from GNSS data. The spatial resolution of the GIMs is 2.5 degrees in latitude and 5.0 degrees in longitude, and their temporal resolution is 2 hours. One of the IGS analysis centers, CODE, produces GIMs based on spherical harmonic basis functions up to degree and order 15. The aim of this research is to develop a local ionosphere model based on B-spline basis functions using combined GPS and satellite altimetry observations over Iran. Accordingly, the potential of the B-spline basis functions for local ionosphere modeling was studied first. For this purpose, a local ionosphere model (LIM) was produced using B-spline basis functions and observation data from 16 Iranian permanent GPS stations and 5 IGS stations. The assumptions in this modeling are as follows: first, the ionosphere is a thin shell located 450 km above the Earth's surface; second, the smoothed code observations obtained with the Bernese 5.0 software form the observation vector; third, the weight matrix elements are proportional to the satellite elevation angle; fourth, the differential code biases (DCBs) for all satellites, obtained from the IGS precise products, are considered known parameters in the equations.
The last assumption was that a simple cosine mapping function is used to convert the slant total electron content (STEC) to VTEC. The comparison between the LIM and the GIM showed that the B-spline basis functions were more efficient than the spherical harmonic ones for local ionosphere modeling. Following this result, a new B-spline-based LIM was produced by integrating permanent GPS station and Jason-2 satellite altimetry observations. The GPS and satellite altimetry observations were chosen from day 107 of year 2014, corresponding to the most recent solar activity maximum. The weight matrices of the GPS and satellite altimetry observations were determined with the least-squares variance component estimation (LS-VCE) method. The results showed that the local ionosphere model derived from the combination of GPS and satellite altimetry observations was more accurate than the model derived from the GPS observations only. This is because the dual-frequency radar altimetry data are the main source of ionospheric observations at sea, where there are no permanent GPS stations, and can be used to improve the GIMs and LIMs. Finally, as by-products, the DCB values of the permanent GPS receivers and the bias term between the GPS and satellite altimetry observations were determined.
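The single-layer cosine mapping function named in the last assumption can be sketched as follows, with the 450 km shell height taken from the text (the Earth-radius constant is an assumption for illustration):

```python
import numpy as np

def vtec_from_stec(stec, elevation, h_ion=450e3, r_earth=6371e3):
    """Single-layer (thin-shell) cosine mapping function: project the
    slant TEC onto the vertical at the ionospheric pierce point, for a
    shell at height h_ion (450 km in the paper).

    elevation : satellite elevation angle in radians
    """
    # sine of the zenith angle at the pierce point
    sin_zp = (r_earth / (r_earth + h_ion)) * np.cos(elevation)
    return stec * np.sqrt(1.0 - sin_zp ** 2)   # = stec * cos(z')
```

At zenith the mapping is the identity (VTEC = STEC), and it shrinks the slant value increasingly at lower elevations, which also motivates the elevation-dependent weight matrix mentioned above.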

Yearly Impact: مرکز اطلاعات علمی Scientific Information Database (SID) - Trusted Source for Research and Academic Resources

View 854

Download 268 | Citation 0 | Reference 0
Author(s): 

HAJI AGHAJANI S. | AMERIAN Y.

Issue Info: 
  • Year: 

    2018
  • Volume: 

    7
  • Issue: 

    3
  • Pages: 

    127-138
Measures: 
  • Citations: 

    0
  • Views: 

    1039
  • Downloads: 

    194
Abstract: 

The Earth’s atmosphere can be described by a layered model. Although there are also horizontal gradients of the meteorological parameters, it is sufficient for a general model to divide the atmosphere into vertical layers, since the vertical gradients of the meteorological parameters are significantly larger than the horizontal ones. Furthermore, due to the influence of gravity, regional horizontal differences become smoothed out towards higher altitudes. The atmosphere can be divided into the troposphere, stratosphere, mesosphere, thermosphere and exosphere. The tropospheric path delay is one of the errors in GNSS observations and reduces the accuracy of GNSS positioning. Accurate estimation of the tropospheric path delay in GNSS signals is necessary for meteorological applications. The tropospheric delay is divided into dry and wet parts. The dry tropospheric delay depends on the pressure variations between the satellite and the Earth’s surface and can be determined accurately using the Saastamoinen and Hopfield models. The wet delay can be determined by subtracting the dry delay from the total GPS-derived delay. In this paper, the effects of radiosonde and ERA-Interim data on the accuracy of positioning are compared. The European Centre for Medium-Range Weather Forecasts (ECMWF) currently publishes ERA-Interim, a global reanalysis of meteorological data. This reanalysis provides values of several meteorological parameters on a global grid of roughly 75 km resolution; the vertical stratification is described on 37 pressure levels. The piecewise-linear (PWL) technique is a simple and powerful 2D ray tracing technique that is fast and accurate. The refined piecewise-linear (RPWL) technique is another 2D ray tracing technique. The 3D ray tracing technique based on the Eikonal equation is the strongest and newest ray tracing method; these equations are solved in order to obtain the ray path and the optical path length. 
The Eikonal equation itself is the solution of the so-called Helmholtz equation for electromagnetic waves. In this method, the ray paths are not limited to a certain azimuthally fixed vertical plane. Tropospheric corrections were calculated from both types of data using the 3D ray tracing method at the Bandar Abbas and Birjand stations. The station coordinates were determined in two ways: (1) with the tropospheric error treated as an unknown, and (2) without considering this error. The tropospheric corrections obtained from the ray tracing method were then applied to the GPS observations and positioning was carried out. The Bernese GPS software version 5.2 was used to process the GPS data; it forms the ionosphere-free linear combination of dual-frequency GPS observations from each site within the regional network. The ZTD was calculated with this software from the raw data of the GPS observation network, and the ZHD was calculated according to the Saastamoinen model. The results indicate the importance of tropospheric correction in precise positioning. In addition, they indicate that the results obtained using radiosonde corrections at Bandar Abbas are more accurate than those obtained using ERA-Interim data, whereas at the Birjand station the two types of data do not differ much. This can be attributed to the small variations of water vapor and other atmospheric parameters at the Birjand station.
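The dry/wet split described above can be sketched with the standard Saastamoinen zenith hydrostatic delay formula; the wet delay then follows by subtraction from the total delay. This is a textbook-level sketch, not the paper's processing chain.

```python
import math

def saastamoinen_zhd(pressure_hpa, lat_deg, height_m):
    """Zenith hydrostatic (dry) delay in metres from the Saastamoinen model.

    ZHD = 0.0022768 * P / (1 - 0.00266 cos(2*lat) - 0.28e-6 * h),
    with P in hPa and h the station height in metres.
    """
    f = 1.0 - 0.00266 * math.cos(2.0 * math.radians(lat_deg)) - 0.28e-6 * height_m
    return 0.0022768 * pressure_hpa / f

def zenith_wet_delay(ztd_m, zhd_m):
    """Wet delay as total (GPS-derived) zenith delay minus the hydrostatic part."""
    return ztd_m - zhd_m
```

At sea level and standard pressure the hydrostatic delay is around 2.3 m, which is why it dominates the total tropospheric delay.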

Yearly Impact: 

View 1039

Download 194 | Citation 0 | Reference 0
Issue Info: 
  • Year: 

    2018
  • Volume: 

    7
  • Issue: 

    3
  • Pages: 

    139-150
Measures: 
  • Citations: 

    0
  • Views: 

    1528
  • Downloads: 

    708
Abstract: 

Nowadays, the three-dimensional presentation of real-world features is very important and useful, and has attracted researchers in various branches such as photogrammetry and geographic information systems, as well as those interested in the three-dimensional reconstruction of buildings. Buildings are the most important part of a three-dimensional city model; therefore, extracting and modeling buildings from remote sensing data are important steps in building an urban digital model. Although many efforts to reconstruct three-dimensional buildings from LiDAR data have been made in recent years, challenges still exist in this area, especially in urban regions. In previous studies, dense vegetation and tall trees in the vicinity of buildings caused difficulties in the building extraction process and reduced the accuracy of the modeling results. The aim of this article is the extraction and reconstruction of buildings from LiDAR point clouds in urban areas with high vegetation. In this study, factors such as the LiDAR return pulse, point heights, and the area of each region are used to separate the non-building parts. Ground points are extracted with a segment-based method that changes the segment size in each iteration, using the mean and standard deviation of the point heights within each segment. The vegetation points are identified and extracted using the LiDAR return pulse and a new method called "three-dimensional projection of points onto two-dimensional surfaces". The projection is performed onto the XY, XZ and YZ planes. Visualizing the points and changing the viewing angle allows the point cloud to be evaluated in different directions. A region-growing algorithm and length constraints imposed in the different planes play an important role in separating dense vegetation. Buildings are modeled using break lines and the important vertices of the building roof within its height layers. 
The extraction of building edge points and roof height layers is done separately for each building. These points are isolated by height analysis of the roof points. In the line approximation, grouping the points in each height layer, line fitting, and adjustment of the line directions ensure that the break lines and roof points are created correctly. In roof modeling, the basic structure of the roof is modeled first and then the parts on the roof are added to the model. The overall structure of the roof is built from the roof vertices and the normal vectors of the generated planes. Finally, by calculating the distances of points from the roof plane, the roof parts are identified and the models of these components are added to the roof. The proposed method is evaluated on LiDAR point clouds of an area of Stuttgart, Germany, with a density of 4 points per square meter. The accuracy of the proposed method is evaluated by visual interpretation and quantitative comparison with information extracted by a human operator. The accuracy of the proposed method is about 96 percent in extracting building points, and the modeling error at the building corners is approximately 44 cm. Overall, the results demonstrate the success of the proposed method in extracting and modeling buildings in areas with dense vegetation.
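The point-to-roof-plane distance test used above to identify roof parts can be sketched as a least-squares plane fit followed by a distance computation. This is a generic illustration (SVD plane fit), not necessarily the authors' exact formulation.

```python
import numpy as np

def fit_plane(points):
    """Least-squares plane through a roof segment (N x 3 array).
    Returns the unit normal and the centroid of the points."""
    centroid = points.mean(axis=0)
    # The right singular vector with the smallest singular value is the normal.
    _, _, vt = np.linalg.svd(points - centroid)
    return vt[-1], centroid

def point_plane_distances(points, normal, centroid):
    """Absolute orthogonal distance of each point from the fitted plane."""
    return np.abs((points - centroid) @ normal)
```

Points whose distance exceeds a threshold (e.g. a multiple of the point-cloud noise level) would then be treated as roof superstructures rather than part of the base roof plane.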

Yearly Impact: 

View 1528

Download 708 | Citation 0 | Reference 0
Issue Info: 
  • Year: 

    2018
  • Volume: 

    7
  • Issue: 

    3
  • Pages: 

    151-160
Measures: 
  • Citations: 

    0
  • Views: 

    1222
  • Downloads: 

    794
Abstract: 

Site selection, as one of the key principles of urban planning, plays an important role and has a huge impact on urban development and citizen satisfaction. Urban site selection is a public process that creates the most satisfaction when it incorporates the views of citizens, experts, specialists and managers of the relevant fields. The purpose of urban site selection is to determine the best possible site for an urban facility within a specific region. Site selection has a significant impact on the success or failure of a project; especially in developing countries, failures of public urban service centers are caused by unsuitable siting of the service. One of the main reasons for this challenge is a decision-making approach based solely on expert knowledge. In this regard, the application of spatial group decision-making methods by means of a geo-social network can be effective in promoting urban development and the site selection of public services. The purpose of this study is to develop an algorithm for the site selection of public service centers via a geo-social network platform, applied to shopping centers in District 6 of the city of Tehran. The proposed approach also attempts to integrate the opinions of different groups of stakeholders, including managers, experts and especially the citizens of the case study region. Initially, after examining the characteristics of urban land use in the study area and the relevant environmental parameters, the related factors were identified and evaluated. The corresponding spatial information was then collected and prepared for use in the system. The major criteria are population, traffic congestion, slope, land value, and accessibility. The accessibility criterion includes the sub-criteria of distance from business centers, distance from tourist centers, distance from main roads, distance from public transportation centers, and distance from parking lots. 
The software development environment is the Telegram social network, with different graphical user interfaces for collecting opinions and showing the results. The decision-making process is carried out in two steps. First, the weighting is done with the Analytical Hierarchy Process (AHP) by all groups of stakeholders; then the fuzzy majority approach is used to integrate the results of the first step. To assess this decision-making scenario, a statistical population of about 50 people was considered (15 experts, 10 managers and 25 citizens of the region). The results showed that in group decisions the dispersion of the selected options and their compliance with the criteria are more appropriate, whereas in individual decision making users tend to pay attention to only one or two criteria, and the spatial distribution of the final decisions was not fair. Experts' satisfaction with the result of group decision making is 85%, while for individual decision making it decreases to 60%. Geo-social spatial group decision making is thus one of the most important tools available to urban planners and allows site selection for various public utility centers. The designed methodology demonstrated the efficiency of geo-social networks for solving the urban public service site selection problem.
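The AHP weighting step mentioned above derives criterion weights from a pairwise-comparison matrix; a common way to do this is via the principal eigenvector. The sketch below shows that standard computation (the example matrix values are illustrative, not from the paper).

```python
import numpy as np

def ahp_weights(pairwise):
    """Criterion weights from an AHP pairwise-comparison matrix.

    Uses the principal eigenvector (eigenvector of the largest eigenvalue),
    normalized so the weights sum to 1.
    """
    a = np.asarray(pairwise, dtype=float)
    vals, vecs = np.linalg.eig(a)
    w = np.real(vecs[:, np.argmax(np.real(vals))])
    return w / w.sum()
```

For a perfectly consistent 2x2 matrix where criterion A is judged 3 times as important as B, the weights come out as 0.75 and 0.25.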

Yearly Impact: 

View 1222

Download 794 | Citation 0 | Reference 0
Issue Info: 
  • Year: 

    2018
  • Volume: 

    7
  • Issue: 

    3
  • Pages: 

    161-175
Measures: 
  • Citations: 

    0
  • Views: 

    651
  • Downloads: 

    228
Abstract: 

In the first moments after a crisis takes place, quick emergency responses can be improved by updated spatial information and online map services of the affected places. Efficient use of GIS in the response phase of crisis management requires access to reliable data related to the crisis. Considering the critical situations in the initial moments of all disasters, including earthquakes, floods, and accidents, as well as the great significance of geographic data in relief and in providing the injured with appropriate care, the necessity of such data becomes apparent. Real-time information acquired from crowdsourcing can update the base maps on the GIS server. Therefore, this study combines the capabilities of smartphone sensors, GPS, Web 2.0, VGI and server-based technologies to design and develop a system for collecting hazard information from volunteers. Users can send information about the hazard location, its images, and other explanations to a central server through web technology and GPRS. This information, together with other crowdsourced information, can be viewed by all users in real time through an updated map on the GIS server. This online information can be used by relief groups so that they can hurry to the rescue of the injured with minimal loss of time. One of the most important contributions of this system's design is improving the positional accuracy of targets with respect to the position of the mobile device. Several approaches are recommended for this purpose: the use of online map services, the use of geocoding services, and the use of arithmetic methods based on the measurements of sensors embedded in a smartphone. If a volunteer in a relief operation is persuaded to determine the location using the various required methods and then send the results, the accuracy of the information can be verified with greater certainty. 
The evaluation by a sample group of mobile users indicated that the system was easy to use and could be a substitute for telephone-based incident reporting systems: fifty-three and twenty-four percent of the respondents agreed and completely agreed with this claim, respectively. Based on the users’ views, if such a system is developed and used on a large scale, it will be widely employed to help crisis management and relief operations. Fifty-three percent of users agreed and thirty-five percent completely agreed with the claim that the system has a simple user interface and can be easily used. It was found that the initial location determined by the mobile positioning system could be improved through the use of online map services and/or by utilizing the mobile sensors in the arithmetic method. The results showed that, from the positional accuracy perspective, the arithmetic method based on the device's embedded sensors yields the best result; however, geocoding is more economical in terms of time. The system's evaluation also showed that this method can quickly compile information on hazardous areas and provide guidance to relief groups.
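The "arithmetic method" of projecting a target position from the device position using sensor measurements can be sketched with the standard spherical direct-geodesy formula: given the device's GPS fix, a compass bearing, and an estimated distance, compute the target coordinates. This is a generic sketch under a spherical-Earth assumption, not the paper's exact algorithm.

```python
import math

def offset_position(lat_deg, lon_deg, bearing_deg, distance_m):
    """Project a target position from the device position using a compass
    bearing and an estimated distance (spherical Earth, radius 6371 km)."""
    R = 6371000.0
    lat1, lon1, brg = map(math.radians, (lat_deg, lon_deg, bearing_deg))
    d = distance_m / R  # angular distance
    lat2 = math.asin(math.sin(lat1) * math.cos(d)
                     + math.cos(lat1) * math.sin(d) * math.cos(brg))
    lon2 = lon1 + math.atan2(math.sin(brg) * math.sin(d) * math.cos(lat1),
                             math.cos(d) - math.sin(lat1) * math.sin(lat2))
    return math.degrees(lat2), math.degrees(lon2)
```

Heading from the magnetometer and distance from, e.g., the camera or a user estimate would feed the `bearing_deg` and `distance_m` arguments.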

Yearly Impact: 

View 651

Download 228 | Citation 0 | Reference 0
Issue Info: 
  • Year: 

    2018
  • Volume: 

    7
  • Issue: 

    3
  • Pages: 

    177-188
Measures: 
  • Citations: 

    0
  • Views: 

    765
  • Downloads: 

    575
Abstract: 

Three-dimensional measurement is of primary interest in various fields such as quality control, documentation of cultural artifacts, and medical image processing. In recent years, in contrast to contact-based mechanical methods, active image-based methods using laser light or white light to recover the surface of an object have gained considerable attention because of their non-contact nature. Among these techniques, phase-shifting digital fringe projection (DFP) is well established due to its advantageous characteristics, such as full-resolution and high-accuracy measurement. However, the extremely dense point clouds obtained from structured-light 3D scanners lead to many problems in further processing steps: processing, modeling and visualizing a huge number of points is very hard for conventional computers. Most simplification methods aim to preserve details; as a result, the high-frequency noise in the data is also kept during the simplification process, which decreases the signal-to-noise ratio (SNR) of the simplified data. Although the goal of the simplification step is to reduce further complexity in point cloud processing, the simplification process itself still involves high computational complexity, such as searching for neighboring points in three-dimensional space, curve fitting, and extracting surface normals. All existing simplification methods are applied after point cloud generation, in a post-processing step, so computation is wasted on data that need not have been generated during the three-dimensional measurement at all. 
Therefore, a method that intelligently generates only the minimum points required to reconstruct the object well can efficiently decrease the post-processing cost. A hybrid scanner system is proposed in this paper that prevents the production of unnecessary points during measurement by the DFP technique, using geometric characteristics of the surface obtained from the Photometric Stereo (PS) technique. The PS method produces the surface normals of the object, from which the surface curvature can be obtained for each image pixel. The curvature image is classified and assigned to different density levels, which are defined in the image of the stereo camera of the scanner system; the density of each level is obtained by removing pixels at regular intervals. Hence, the first density level equals the maximum resolution of the camera, and the next levels are equal to 50, 33, 25, and 20 percent of all pixels, obtained respectively by sampling every other pixel, one pixel from every two, from every three, and from every four pixels. The question then arises as to what the criterion is for determining the curvature intervals assigned to the density levels. The basis for determining the curvature intervals for point simplification is the distance between the simplified points and the surface computed from the original dense point cloud. This distance is chosen by the user; in this paper it is set equal to the measurement system accuracy. Thus, the unnecessary points are removed, based on the curvature obtained via PS, before the calculation of 3D coordinates using the DFP technique. The surface normals obtained from PS contain little high-frequency noise, so noisy data are not transferred to the simplified points. The simple hardware setup of the PS technique provides an efficient tool for the simplified measurement of the DFP scanner. 
In addition, the extraction and classification of the geometric features of the object are performed in two-dimensional space, with lower complexity compared to the equivalent operations in three-dimensional space. Adding the PS method only adds the cost of a number of light sources to the scanner system and several images to the measurement process; on the other hand, it reduces the calculation time by 50% to 75% and the volume of data by 50% to 80%, depending on the complexity of the object's geometry. The density reduction was performed assuming a maximum separation between the simplified model and the original model of 0.01 mm and 0.02 mm. The principal idea of the proposed method is to measure surfaces with the minimum number of points required to preserve the geometry. Contrary to most simplification methods, the proposed method performs simplification while measuring the surface, so no post-processing simplification step is required. First, surface normals are calculated by the PS technique. Then surface curvatures are computed for each pixel of the camera image from the normal vectors and classified; each class represents a point density level. The surface slope is also considered, to correct the foreshortening effect caused by the projective geometry. The output of the previous step is a 2D simplification guidance map, which is used to measure the 3D object surface with the DFP technique. In summary, the measurement steps of the proposed system are: (1) computing surface normals and curvature; (2) assigning curvature ranges to point density levels; and (3) 3D measurement by DFP based on the simplification map.
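The curvature-to-density-level assignment described above can be sketched as a simple thresholding of the per-pixel curvature. The threshold values below are hypothetical placeholders; in the paper they are derived from the user-chosen distance tolerance, not fixed constants.

```python
import numpy as np

# Hypothetical curvature thresholds separating the five density levels
# (in the paper these intervals are derived from the distance tolerance).
THRESHOLDS = np.array([0.20, 0.10, 0.05, 0.02])  # descending curvature

def density_level(curvature):
    """Map per-pixel curvature to a sampling level: level 0 keeps every pixel
    (100%), levels 1-4 keep every 2nd, 3rd, 4th and 5th pixel (50/33/25/20%).
    Flatter regions (lower curvature) get sparser sampling."""
    curvature = np.asarray(curvature, dtype=float)
    # A pixel's level is the number of thresholds its curvature falls below.
    return (curvature[..., None] < THRESHOLDS).sum(axis=-1)
```

Applying this to the full curvature image yields the 2D simplification guidance map consumed by the DFP measurement step.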

Yearly Impact: 

View 765

Download 575 | Citation 0 | Reference 0
Issue Info: 
  • Year: 

    2018
  • Volume: 

    7
  • Issue: 

    3
  • Pages: 

    189-212
Measures: 
  • Citations: 

    0
  • Views: 

    1395
  • Downloads: 

    553
Abstract: 

Today, urban land use planning and management is an essential need in many developing countries. Many multi-objective optimization models for land use allocation have been developed. These models provide a set of non-dominated solutions, all of which simultaneously optimize conflicting social, economic and ecological objective functions, making it difficult for urban planners to choose the best solution. An issue that is often left unnoticed is the incorporation of the spatial patterns and structures of urban growth into such models. Clearly, solutions that correspond with urban spatial patterns are of higher priority for planners, and quantifying the spatial patterns and structures of the city requires the use of spatial metrics. Thus, the main objective of this study is to support decision-making by using multi-objective meta-heuristic algorithms for land use optimization and sorting the solutions with respect to the spatial pattern of urban growth. In the first step, we applied the non-dominated sorting genetic algorithm II (NSGA-II) and multi-objective particle swarm optimization (MOPSO) to optimize land use allocation in the case study. The four objective functions of the proposed model were maximizing the compatibility of adjacent land uses, maximizing physical land suitability, maximizing the accessibility of each land use to main roads, and minimizing the cost of land use change. In the next step, the two optimization models were compared and the solutions were sorted with respect to the spatial patterns of the city, quantified with spatial metrics. A case study of Tehran, the largest city in Iran, was conducted. Six land use classes (industrial, residential, green areas, wetlands, barren, and other uses) were acquired from satellite imagery for the period between 2000 and 2012. 
Three scenarios were predicted for the urban growth spatial structure in 2018: the continuation of the existing trend from 2000 to 2018, fragmented growth, and aggregated growth of the patches. The convergence and repeatability of the two algorithms were at acceptable levels, and the results clearly show the ability of the selected set of spatial metrics to quantify and forecast the structure of urban growth in the case study. In the resulting land use arrangements, the values of the objective functions were improved in comparison with the present arrangement. In conclusion, planners will be able to better sort the outputs of the proposed algorithms using spatial metrics, allowing more reliable decisions regarding the spatial structure of the city. This achievement also indicates the ability of the proposed model to simulate different scenarios in urban land use planning.
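The non-dominated (Pareto) set underlying both NSGA-II and MOPSO can be sketched with a plain dominance check; this is the textbook definition used for the first front in non-dominated sorting, not the full algorithms.

```python
def dominates(a, b):
    """True if solution a Pareto-dominates b (all objectives to be maximized):
    a is at least as good in every objective and strictly better in one."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

def pareto_front(solutions):
    """Non-dominated subset of a list of objective-value tuples."""
    return [s for s in solutions
            if not any(dominates(o, s) for o in solutions if o is not s)]
```

NSGA-II repeats this idea recursively (peeling off front after front) and adds crowding distance to spread solutions along each front; the spatial metrics proposed in the paper then provide an extra criterion for ranking solutions within the front.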

Yearly Impact: 

View 1395

Download 553 | Citation 0 | Reference 0
Issue Info: 
  • Year: 

    2018
  • Volume: 

    7
  • Issue: 

    3
  • Pages: 

    213-232
Measures: 
  • Citations: 

    0
  • Views: 

    856
  • Downloads: 

    566
Abstract: 

Land surface temperature plays an important role in the physics of surface-atmosphere interactions; it is at the same time a driver and a signature of the energy and mass exchanges over land. Land surface temperature is highly variable in both space and time, mainly as a result of the heterogeneity of the meteorological forcing, land cover, soil water availability, surface radiative properties and topography. Therefore, satellite-derived land surface temperature is widely used in a variety of applications, including evapotranspiration monitoring, climate change studies, soil moisture estimation, vegetation monitoring, urban climate studies and forest fire detection. The normalization of the surface temperature with respect to environmental parameters is essential in scientific studies and in management decisions for urban and non-urban areas. For the first time, a normalization method for topography-induced variations of instantaneous solar radiation and air temperature has been applied to satellite land surface temperature. While land surface temperature data are widely used over relatively flat areas, this new approach offers the opportunity for new applications over mountainous areas. Such a normalization method could potentially be used in conjunction with land surface temperature-based evapotranspiration methods over agricultural and complex terrain, soil moisture disaggregation methods, and forest fire prediction models, among others. In practice, when applying the normalized land surface temperature as input to energy balance models, the energy balance would be driven by the mean (instead of the spatially variable) downward radiation within the study area, as is commonly done over flat areas. The aim of the current study is to use a physical model based on the soil and vegetation energy balance equations to normalize the land surface temperature with respect to environmental parameters. 
For this purpose, Landsat 7 satellite bands, the AST08 surface kinetic temperature product, the MODIS water vapor product, the ASTER digital elevation model, and meteorological and climatic data sets were used. In this work, topographic factors, the radiation reaching the surface, albedo, the environmental lapse rate, and vegetation were considered as environmental parameters. The surface temperature was calculated with the single-channel algorithm. The downward radiation at the surface, the surface albedo, the lapse rate and the vegetation were derived, respectively, from the direct and diffuse radiation of the sun and neighboring surfaces, a combination of Landsat 8 reflective bands, the digital elevation model, and the NDVI index. Finally, by forming the energy balance equations for dry bare soil, wet bare soil, fully stressed vegetation and unstressed vegetation, the temperature of each cover type was extracted using Newton's method; by optimizing the model parameters in both global and local modes and combining the resulting temperatures, the modeled and normalized surface temperature was obtained. The environmental parameter normalization model is calibrated in two main steps using Landsat land surface temperature observations. The first step minimizes the mean difference between the observed and modeled land surface temperature. The second step adjusts the environmental lapse rate, the surface soil dryness index and the vegetation water stress index by minimizing the RMSE between the Landsat land surface temperature and the model-derived land surface temperature. To evaluate the accuracy of the proposed model, the correlation coefficient and RMSE between the modeled and observed surface temperature values, as well as the variance of the normalized surface temperature values, were used. 
The results of this study indicate that, in global optimization, the correlation coefficient, RMSE and variance were 0.89, 2.6 and 6.44 for the AST08 data and 0.93, 2.08 and 1.1 for the Landsat 7 data, respectively; in local optimization mode, these criteria were 0.962, 1.61 and 0.71 for the AST08 data and 0.977, 1.2 and 0.13 for the Landsat 7 data. The results showed that, in both global and local optimization, Landsat 7 performs better than ASTER for normalizing the land surface temperature. Also, using the local optimization method instead of the global one to estimate the optimal values of the missing parameters increased the accuracy of the normalization results. Examination of the relation between the surface temperature and the considered environmental parameters before and after normalization indicates that the effect of the environmental parameters on the surface temperature is noticeably reduced after normalization. Overall, the results demonstrate the high efficiency of the proposed model for normalizing the surface temperature with respect to environmental parameters.
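The use of Newton's method to solve an energy balance equation for surface temperature can be sketched generically as below. The toy residual in the usage note (absorbed radiation minus Stefan-Boltzmann emission) is only an illustration; the paper's balance equations include additional turbulent and ground heat flux terms.

```python
def solve_surface_temperature(residual, d_residual, t0=300.0, tol=1e-6, max_iter=50):
    """Newton's method for the temperature T that closes an energy balance,
    i.e. the root of residual(T) = 0, starting from an initial guess t0 (K)."""
    t = t0
    for _ in range(max_iter):
        step = residual(t) / d_residual(t)
        t -= step
        if abs(step) < tol:
            break
    return t
```

For example, balancing 400 W/m2 of absorbed radiation against blackbody emission sigma*T^4 converges in a few iterations to roughly 290 K.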

Yearly Impact: 

View 856

Download 566 | Citation 0 | Reference 0
Author(s): 

RAHIMI M.M. | HAKIMPOUR F.

Issue Info: 
  • Year: 

    2018
  • Volume: 

    7
  • Issue: 

    3
  • Pages: 

    233-251
Measures: 
  • Citations: 

    0
  • Views: 

    664
  • Downloads: 

    580
Abstract: 

These days, Floating Car Data (FCD) is one of the major data sources in Intelligent Transportation System (ITS) applications such as route suggestion, traffic monitoring and traffic flow analysis. Due to the limited accuracy and noise of GPS and errors in the road network, using FCD in ITS applications requires an efficient and accurate map matching framework. Map matching is a well-established problem that deals with mapping raw time-stamped location traces to edges of the road network graph. Along with a high success rate, novel map matching applications face several challenges, including variable sampling frequency and the processing speed of FCD big data. In this paper we propose a general, efficient, accurate and distributed map matching framework that can handle data with variable sampling frequency accurately. Although the framework does not depend on additional data beyond the road network and GPS, the achieved high success rate shows the effectiveness of our system. We use spatial proximity, heading difference, bearing difference and shortest path as our matching criteria, and employ dynamic weights for each criterion to make the framework independent of local parameters. We also employ confidence levels to improve the matching success rate. To address the challenges of low-frequency data, we present an extra criterion based on the A* shortest path method with dynamic weighting, using HDOP to weight the shortest path criterion. When we are not confident enough about a point match, we use the shortest path criterion to improve the success rate, and in this way we keep the overhead low. For the evaluation we studied New York City (NYC) OSM trajectories as the case study, with the OSM NYC road network as the base map. The evaluation results indicate a 95.2% map matching success rate in high-frequency sampling mode (10 s) and an 89.5% success rate at low sampling frequency (120 s). 
We compared our method with a well-known map matching method; in the case of low sampling frequency, our method improves the matching accuracy by up to 9.7%. We also evaluated the effect of the shortest path criterion in the low-frequency scenario: using the shortest path improves our results by up to 3.5%. One of the major challenges in using FCD is the storage, management, analysis and batch processing of this big data. To face this challenge, our framework uses cloud computing technologies along with the MapReduce paradigm based on the Hadoop framework. The proposed cloud-based framework can answer the technical challenges of efficient and real-time storage, management, processing and analysis of traffic big data. Our evaluation results indicate that we matched 7000 points/second on a cluster with 5 processing nodes, and processed 5 million records in 530 seconds on the same cluster. The main contributions are as follows: (1) we propose a general, distributed map matching framework using cloud computing technologies to serve upstream ITS applications; (2) we improve an efficient and accurate map matching algorithm that can handle different sampling frequencies using the shortest path method and confidence levels; (3) we use a dynamic method for weighting the geometric, directional and shortest path constraints.
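Combining spatial proximity, heading difference and bearing difference into a single candidate-edge score could be sketched as a weighted sum, as below. Both the weights and the score shapes here are illustrative assumptions; the paper uses dynamic weights rather than the fixed values shown.

```python
import math

def matching_score(dist_m, heading_diff_deg, bearing_diff_deg,
                   w_dist=0.5, w_head=0.25, w_bear=0.25, sigma_m=20.0):
    """Combine spatial proximity, heading difference and bearing difference
    into one candidate-edge score in [0, 1]; weights and sigma are illustrative."""
    s_dist = math.exp(-0.5 * (dist_m / sigma_m) ** 2)       # Gaussian proximity
    s_head = math.cos(math.radians(heading_diff_deg)) ** 2  # heading agreement
    s_bear = math.cos(math.radians(bearing_diff_deg)) ** 2  # bearing agreement
    return w_dist * s_dist + w_head * s_head + w_bear * s_bear
```

The candidate edge with the highest score would be selected; when the best score falls below a confidence threshold, the shortest-path criterion described above would be consulted as a tie-breaker.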

Yearly Impact: 

View 664

Download 580 | Citation 0 | Reference 0