Archive: Year 2016, Volume 5, Issue 3
Issue Info:
  • Year: 2016
  • Volume: 5
  • Issue: 3
  • Pages: 1-13
Measures:
  • Citations: 0
  • Views: 706
  • Downloads: 0
Abstract: 

Full Polarimetry (FP) imaging systems have proven their potential in various applications, but they suffer from an increased Pulse Repetition Frequency (PRF) and data rate compared to single-polarization systems. Recently, there has been growing interest in a Dual Polarimetry (DP) imaging mode called Compact Polarimetry (CP). CP is a new DP mode with several important advantages over the FP mode, such as reduced complexity, cost, mass, and data rate of a Synthetic Aperture Radar (SAR) system. Moreover, this mode not only captures more information than standard DP modes, but also covers a much greater swath width than the FP mode. For these reasons, CP data can be critical for monitoring applications such as forest monitoring. Forests are an attractive research area for radar remote sensing researchers because of their effective role in climate regulation; forest cover classification is therefore essential for managing natural resources, the environment, land use plans, and land potential. Despite the significant number of works on CP SAR applications, very little research has investigated the capability of CP data for forest cover classification. In this paper, the potential of CP data over forest areas is investigated using the complex Wishart classifier in two ways. First, we use the 2×2 covariance matrices of the pi/4 and Circular Transmit-Linear Receive (CTLR) CP modes, simulated from RADARSAT-2 FP data acquired over the Petawawa research forest; second, the 3×3 covariance matrices reconstructed from these modes are exploited. The results are then compared with the FP mode. They show that the pi/4 mode provides better overall accuracy in forest cover classification than the CTLR mode, and that extending the CTLR mode via Souyris's iteration model to generate the PQ-CTLR mode does not significantly affect the Wishart classification overall. However, constructing the PQ mode permits a direct comparison of the average scattering mechanisms. Therefore, in the next step, the Cloude-Pottier alpha angle is calculated for the PQ and FP modes. The PQ-pi/4 mode proves better at estimating the alpha angle than the PQ-CTLR mode. This study shows that although CP modes do not yield classification accuracies as good as the FP mode, they are an effective strategy when polarimetric system resources are limited or unavailable, and they are compatible as an optional mode for an FP SAR system. Moreover, circular polarizations, e.g. the CTLR mode, are less sensitive to the Faraday rotation effects attached to low-frequency propagation in the ionosphere. At present, several earth observation satellites that provide CP modes are already in orbit or due to be launched in the next few years: the Indian RISAT-1, the Japanese Advanced Land Observing Satellite-2 (ALOS-2), the Canadian RADARSAT Constellation Mission (RCM), and the Argentinian SAR Observation & Communications Satellite (SAOCOM), which will be able to collect CP data in the CTLR mode.
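As a rough illustration of the simulation step described above, the sketch below (Python, assuming NumPy and SciPy) builds pi/4 and CTLR scattering vectors from full-polarimetric scattering-matrix elements, forms a multilooked 2×2 covariance, and evaluates the complex Wishart distance that the classifier minimizes. The pi/4 and CTLR formulations follow the commonly cited Souyris and Raney definitions, not necessarily the paper's exact conventions, and the boxcar window size is an assumed value.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def cp_scattering_vector(shh, shv, svv, mode="pi/4"):
    """Simulate a compact-polarimetry scattering vector from full-pol data.

    shh, shv, svv: complex arrays of FP scattering-matrix elements.
    Sign/phase conventions should be checked against the processor used.
    """
    if mode == "pi/4":            # 45-degree linear transmit, H/V receive
        k1 = (shh + shv) / np.sqrt(2)
        k2 = (svv + shv) / np.sqrt(2)
    elif mode == "ctlr":          # right-circular transmit, linear receive
        k1 = (shh - 1j * shv) / np.sqrt(2)
        k2 = (shv - 1j * svv) / np.sqrt(2)
    return np.stack([k1, k2])     # shape (2, H, W)

def covariance_2x2(k, win=7):
    """Multilooked 2x2 covariance C = <k k^H> with a boxcar window."""
    C = np.empty(k.shape[1:] + (2, 2), dtype=complex)
    for i in range(2):
        for j in range(2):
            prod = k[i] * np.conj(k[j])
            C[..., i, j] = (uniform_filter(prod.real, win)
                            + 1j * uniform_filter(prod.imag, win))
    return C

def wishart_distance(C, sigma_c):
    """Complex Wishart distance ln|Sigma| + tr(Sigma^-1 C) of pixel
    covariances C to a class center sigma_c; pixels go to the nearest class."""
    inv = np.linalg.inv(sigma_c)
    return (np.log(np.abs(np.linalg.det(sigma_c)))
            + np.real(np.trace(inv @ C, axis1=-2, axis2=-1)))
```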

Issue Info:
  • Year: 2016
  • Volume: 5
  • Issue: 3
  • Pages: 15-34
Measures:
  • Citations: 0
  • Views: 1176
  • Downloads: 0
Abstract: 

In the past few decades, as a result of urban population growth, the spatial development of urban areas has been growing fast, leading to environmental changes in these areas. Hence, detecting changes over different time periods in urban areas is of great importance. Conventional change detection (CD) methods partition the observation space linearly or rely on a linear combination of the multitemporal data. As a result, they can be inefficient for images corrupted by noise or by radiometric differences that cannot be normalized. Another main challenge in producing change maps of urban areas is the limited spectral separability of bare land and built-up areas. Therefore, in this paper, an automatic kernel-based change detection method that can use a combination of spectral data and spectral indices is proposed. First, spectral indices for separating the land cover classes of the urban area are extracted from the multi-temporal images. In the next step, a difference image is generated via two approaches in a high-dimensional Hilbert space. Using change vector analysis and an automatically determined threshold, pseudo-training samples of the change and no-change classes are extracted. These samples are used to initialize the parameters of kernel C-means clustering. Then, a cost function capturing geometrical and spectral similarity in the kernel space is optimized in order to estimate the kernel C-means clustering parameters and to select precise training samples. These training samples are used to train a kernel-based minimum distance (KBMD) classifier, which finally assigns a class label to each unknown pixel. To assess the accuracy and efficiency of the proposed change detection algorithm, it was applied to multi-spectral Landsat 5 TM images of the city of Karaj from 1987 and 2011. With respect to the features used, a sensitivity analysis of the proposed method was carried out using five different feature sets. To assess the performance of the proposed automatic kernel-based CD algorithm with the DFSS (accuracy: 86.40, kappa: 0.83) and DFHS (accuracy: 85.54, kappa: 0.82) differencing methods, we compared it with well-known CD methods, namely the MNF-based (Minimum Noise Fraction) CD method (accuracy: 77.42, kappa: 0.76), the SAM (Spectral Angle Mapper) CD method (accuracy: 64.60, kappa: 0.60), and simple image differencing (accuracy: 73.44, kappa: 0.70). The comparative analysis of the proposed method and the classical CD techniques shows that the accuracy of the obtained change map can be considerably improved.
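The KBMD classifier at the end of this pipeline can be sketched with the kernel trick: the squared distance of a pixel to a class mean in the implicit Hilbert space is computable from kernel evaluations alone. The snippet below is a minimal sketch assuming an RBF kernel; the kernel choice and its gamma parameter are illustrative, not the paper's calibrated settings.

```python
import numpy as np

def rbf(X, Y, gamma=0.5):
    """RBF kernel matrix k(x, y) = exp(-gamma * ||x - y||^2)."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def kbmd_classify(X, train_sets, gamma=0.5):
    """Kernel-based minimum distance (KBMD) classifier.

    Squared distance of phi(x) to a class mean in feature space:
      d^2 = k(x,x) - (2/n) sum_i k(x, x_i) + (1/n^2) sum_ij k(x_i, x_j)
    train_sets: list of arrays, one per class (e.g. the change and
    no-change pseudo-training samples).
    """
    dists = []
    for T in train_sets:
        n = len(T)
        kxx = np.ones(len(X))                 # k(x, x) = 1 for the RBF kernel
        kxt = rbf(X, T, gamma).sum(1)
        ktt = rbf(T, T, gamma).sum()
        dists.append(kxx - 2.0 * kxt / n + ktt / n**2)
    return np.argmin(np.stack(dists), axis=0)   # class label per sample
```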

Issue Info:
  • Year: 2016
  • Volume: 5
  • Issue: 3
  • Pages: 35-47
Measures:
  • Citations: 0
  • Views: 1733
  • Downloads: 0
Abstract: 

Urban areas have always been under the influence of population growth and human activities, a process that drives changes in land use/cover. Thus, for optimal management of resources, it is necessary to be aware of these changes. Satellite remote sensing has several advantages for monitoring land use/cover, especially in urban areas; at the same time, classifying urban areas over time presents additional challenges for correctly analyzing remote sensing imagery. Nowadays, integrating different kinds of data and images acquired by different remote sensing sensors is known as a suitable solution for extracting more useful information. Passive optical sensors have been used extensively in mapping horizontal structures. Radar data can serve as complementary data, since they can be gathered in different climatic conditions at any hour of the day, and some natural and man-made structures have a specific response at radar frequencies. Furthermore, LiDAR data provide precise measurements of vertical structures. Hence, by integrating optical, radar, and LiDAR data, more features and information become available for different kinds of applications. In this research, we used these data sets to detect buildings, roads, and trees in a complex city scene, i.e., San Francisco, by generating 141 features: from the high-spatial-resolution WorldView-2 optical imagery (image bands, vegetation indices, IHS color space, YIQ color space, YCbCr color space, and first- and second-order statistical features); from the LiDAR data (first and last pulse, first and last intensity, nDSM, NDI, 4-neighbor slope, 8-neighbor slope, aspect, roughness, smoothness, surface curvature, profile curvature, variance, and Laplacian); and from the RADARSAT-2 radar data (amplitude, phase, intensity, incidence angle, imaginary part, real part, radar cross section, HH, VH, HV, and VV polarizations, ratio elements of the scattering matrix, alpha, beta, Pauli coefficients, Krogager decomposition coefficients, Freeman decomposition coefficients, Yamaguchi decomposition coefficients, entropy, eigenvalues, and anisotropy). We divided the merged data set into four regions: the first includes building features; the second, building and vegetation features; the third, building and road features; and the fourth, vegetation features. All these features were merged to produce a data cube of dimension 141. Then, the dimensionality of these data was reduced using the principal component analysis (PCA) feature extraction method, together with the well-known intrinsic dimension (ID) estimators, namely second moment linear (SML) and noise-whitened HFC (NWHFC). Finally, the supervised k-nearest neighbour (k-NN) classifier was used to detect buildings, roads, and trees, and features were grouped according to the achieved accuracies. Thirty percent of the ground truth data was used for training and the remaining seventy percent for testing. The fusion of these data sets for buildings, roads, and trees reveals the superiority of the implemented method, which classifies the map with an overall accuracy of nearly 90% and supports our analyses.
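A minimal sketch of the reduction-plus-classification stage follows, assuming scikit-learn and NumPy. The file names and the number of retained components are placeholders standing in for the paper's SML/NWHFC-estimated intrinsic dimension; the 30/70 train/test split matches the abstract.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import train_test_split

# X: (n_pixels, 141) stacked optical/LiDAR/radar features; y: class labels
# (buildings, roads, trees). File names are hypothetical.
X, y = np.load("features_141d.npy"), np.load("labels.npy")

# Reduce the 141-D cube to the intrinsic dimension estimated by SML/NWHFC
# (20 components is a placeholder value).
X_red = PCA(n_components=20).fit_transform(X)

# 30% of the ground truth for training, 70% for testing, as in the paper.
X_tr, X_te, y_tr, y_te = train_test_split(X_red, y, train_size=0.3,
                                          stratify=y, random_state=0)
knn = KNeighborsClassifier(n_neighbors=5).fit(X_tr, y_tr)
print("overall accuracy:", knn.score(X_te, y_te))
```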

Author(s): 

VAHEDI B. | ALESHEIKH A.A.

Issue Info:
  • Year: 2016
  • Volume: 5
  • Issue: 3
  • Pages: 49-63
Measures:
  • Citations: 0
  • Views: 995
  • Downloads: 0
Abstract: 

Since the emergence of the concept of Volunteered Geographic Information (VGI), the quality of this type of information has been presented as its biggest problem. The issue has therefore been addressed frequently in the literature, and scientists have tried to evaluate the quality of VGI. However, attribute accuracy, despite its important role in a variety of spatial analyses and applications of VGI, has received less attention than the other quality elements. Positional accuracy, completeness, lineage, resolution, and temporal accuracy are among the most important elements of spatial data quality. In this study, using a novel method that leverages the Levenshtein algorithm along with text pre-processing, the attribute accuracy of volunteered geographic features is examined by comparing the data with reference data. The Levenshtein algorithm calculates the difference between two strings by counting the number of edits necessary to change one word into another, and is thus often referred to as the Levenshtein distance. The first step of the proposed method is to find corresponding features in the two data sets on which to base the comparison. This is done by an automatic data matching algorithm consisting of five stages, each applied to either the reference data set or the VGI data set. After data matching, each VGI feature is compared with its corresponding match in the reference data set and the Levenshtein distance between the "name" attributes of the two features is calculated. Features are then categorized as having correct (accurate), approximately correct, or incorrect names based on the Levenshtein distance, assuming that the names of the reference features are correct. For VGI features without a match in the reference data set, a search distance is defined, inside which reference features with the exact same name as the VGI feature are sought. The study area of this research is the city of Tehran, Iran. A data set produced by the municipality of Tehran is used as the reference data set and OpenStreetMap data as the VGI data set. According to the results, 47 percent of VGI features have a name attribute; among these, 33 percent have a correct name, 44 percent an approximately correct name, and the remaining 23 percent incorrect names. The overall attribute accuracy of the VGI data set used in this study is thus 77 percent, indicating that among the features that have a name attribute, 77 percent have either correct or approximately correct names. A future line of research, based on the findings of this paper, could be to develop methods for evaluating the attribute accuracy of a data set without having to compare it with a reference data set.
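The distance itself is easy to state precisely. Below is a standard dynamic-programming implementation plus a small categorization helper; the tolerance separating "approximately correct" from "incorrect" is an assumed threshold, since the paper's exact cut-offs are not reproduced here.

```python
def levenshtein(a: str, b: str) -> int:
    """Minimum number of single-character edits turning a into b."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (ca != cb)))    # substitution
        prev = cur
    return prev[-1]

def categorize(vgi_name: str, ref_name: str, approx_tol: int = 2) -> str:
    """Label a VGI feature name against its matched reference name.
    approx_tol is an assumed threshold, not the paper's exact rule."""
    d = levenshtein(vgi_name.strip().lower(), ref_name.strip().lower())
    if d == 0:
        return "correct"
    return "approximately correct" if d <= approx_tol else "incorrect"
```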

Author(s): 

MADADI A. | KARIMIPOUR F.

Issue Info:
  • Year: 2016
  • Volume: 5
  • Issue: 3
  • Pages: 65-76
Measures:
  • Citations: 0
  • Views: 623
  • Downloads: 0
Abstract: 

The effects of human activities on surface water quality have been exhaustively investigated. The results show that the impact of land use/land cover (LULC) on surface water quality varies from one location to another. However, the temporal characteristics of the problem are poorly understood. This paper examines the hypothesis that the impact of land use/land cover on surface water quality also varies with time, i.e., that the linkage is spatio-temporal. Spatio-temporal data mining analyses were deployed, as they are capable of extracting novel patterns and correlations hidden in the data. The water quality parameter As (arsenic) was considered for 12 water quality monitoring stations across Seattle, Washington. This parameter was examined against five land use/land cover types (urban, cultivated, hay, forest, and wetland) over 9 years, from 1998 to 2006, to study how land use/land cover influences it. Due to the non-stationary nature of the problem, as well as the spatial autocorrelation between values observed at the monitoring stations, ordinary least squares regression produces unwanted bias in the results. Hence, geographically weighted regression (GWR) was used to model the spatially varying characteristics of the problem. Furthermore, to incorporate time-varying effects, the model was calibrated separately for values collected at the stations in wet and dry seasons. The linkage between LULC and the amount of arsenic (as the case water quality parameter) at each station was extracted using the temporal geographically weighted regression (Equation 6 in the paper) separately for the years 1998 to 2006. Figures 6 and 7 (in the paper) show the results for the wet and dry seasons of the year 2006, respectively, classified based on the residual square (R2), which is a measure of goodness of fit; larger values indicate a stronger linkage between the LULC class and the water quality parameter. (Because of space limitations, the results for other years are not shown.) We also applied a significance test on the extracted linkage at the 95% confidence level. The results illustrate that, except for cultivated lands, the classes could reasonably show the spatial variation of the linkage. Comparing the results of the wet and dry seasons shows that the model could extract the linkage more efficiently in wet seasons than in dry seasons. For example, while there is a negative relation between urban land use and the amount of arsenic in wet seasons at most of the stations, there is no significant relation between them in dry seasons. The reason could be the effect of seasonal rainfall on the water quality parameters. On the other hand, there is a positive linkage between forests and the amount of arsenic in both wet and dry seasons; however, this linkage follows completely different patterns in the two seasons, possibly because of land cover change between them. These two examples certify that seasonal changes have considerable effects on the linkage between LULC and water quality parameters, and so the seasons must be treated separately.
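A minimal GWR sketch, assuming NumPy: each station gets its own weighted-least-squares fit, with Gaussian kernel weights decaying with distance from that station. Calling it separately on wet-season and dry-season observations reproduces the seasonal calibration described above; the kernel form and fixed bandwidth are illustrative choices, not the paper's calibrated settings.

```python
import numpy as np

def gwr_fit(coords, X, y, bandwidth):
    """Geographically weighted regression with a Gaussian kernel.

    coords: (n, 2) station locations; X: (n, p) LULC fractions (with an
    intercept column); y: (n,) water-quality values (e.g. As concentration).
    Returns one local coefficient vector per station.
    """
    betas = np.empty((len(y), X.shape[1]))
    for i, c in enumerate(coords):
        d = np.linalg.norm(coords - c, axis=1)
        w = np.exp(-0.5 * (d / bandwidth) ** 2)          # Gaussian weights
        W = np.diag(w)
        # weighted least squares: beta_i = (X^T W X)^-1 X^T W y
        betas[i] = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)
    return betas

# Seasonal calibration: fit the model once per season.
# betas_wet = gwr_fit(coords, X_wet, y_wet, bandwidth)
# betas_dry = gwr_fit(coords, X_dry, y_dry, bandwidth)
```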

Issue Info:
  • Year: 2016
  • Volume: 5
  • Issue: 3
  • Pages: 77-97
Measures:
  • Citations: 0
  • Views: 1753
  • Downloads: 0
Abstract: 

Link travel time is the most important variable for determining the travel route and start time of a journey, and it is the basis of navigation and routing systems. Time-dependent algorithms in Geographic Information Systems (GIS) calculate fastest routes based on the travel duration of network links at different times or dates; in fact, estimating future travel times is the basis of time-dependent routing. Various means of monitoring traffic flow, such as traffic cameras or electromagnetic sensors, currently exist [1]. However, these methods cannot estimate link travel time efficiently; high cost, low accuracy, and dependency on human agents are their major problems. With the spread of portable positioning receivers, researchers have become more interested in exploiting the data produced by such equipment for monitoring traffic flow. Nowadays most public transit buses are equipped with AVL systems for monitoring purposes. In this research, travel durations of arterial links are estimated in real time from transit bus data. Corrections for the complications caused by the buses' slower speeds and their leaving the traffic flow at bus stops are modeled and applied for a better assessment [5, 6]. Determining link travel durations in time intervals requires analysis of spatio-temporal data. We estimate travel duration for 15-minute intervals (7 am to 9 pm), assuming invariant parameters within each interval. In our approach, we first calculate the bus travel duration and compensate for the delays caused by bus stops, including the acceleration and deceleration time at each stop. Simultaneously, the timing of traffic light control signals is incorporated in the computations to improve the accuracy of the bus travel duration estimate. We use historical data to find the required parameters for calculating bus travel duration. In addition to historical data, we integrate real-time data and time series analysis to improve the estimation; in this research we use Holt-Winters analysis [14] for short-term prediction of travel time. Finally, we obtain a set of observation equations that is solved by an optimization method. Movement data of buses on two different routes over five days (6th to 10th December 2014) are used to estimate the travel duration of three links on Motahari Street. The position of each bus is provided every two minutes, plus the time and position every time the bus doors open and close. On the fifth day (10th December), three test vehicles equipped with GPS receivers collected validation movement data every second. The drivers of the test vehicles were instructed to avoid extremely low or high speeds and to drive with the flow of traffic in the middle lanes insofar as possible. Finally, the calculated travel times are compared with the results of two well-known methods, namely the baseline estimation algorithm [8] and the Hellinga method [4]. The RMSE of the proposed approach indicates a 22 percent improvement over the Hellinga approach and a 30 percent improvement over the baseline algorithm. This improvement shows that information obtained from a public bus monitoring system can be used efficiently for arterial link travel time estimation.
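The short-term prediction step can be illustrated with a plain additive Holt-Winters recursion; this is a sketch, not the paper's calibrated model. The smoothing constants and the season length of 56 fifteen-minute intervals per 7 am to 9 pm day are assumed values.

```python
def holt_winters_additive(x, season_len=56, alpha=0.3, beta=0.05, gamma=0.1):
    """Additive Holt-Winters smoothing for short-term travel-time prediction.

    x: historical link travel times, one value per 15-minute interval
    (len(x) must be at least season_len). Returns the one-step-ahead
    forecast for the next interval.
    """
    level = x[0]
    trend = x[1] - x[0]
    season = [x[i] - level for i in range(season_len)]   # crude initialization
    for i, v in enumerate(x):
        last_level = level
        s = season[i % season_len]
        level = alpha * (v - s) + (1 - alpha) * (level + trend)
        trend = beta * (level - last_level) + (1 - beta) * trend
        season[i % season_len] = gamma * (v - level) + (1 - gamma) * s
    return level + trend + season[len(x) % season_len]
```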

Issue Info:
  • Year: 2016
  • Volume: 5
  • Issue: 3
  • Pages: 99-109
Measures:
  • Citations: 0
  • Views: 1006
  • Downloads: 0
Abstract: 

In recent years, smartphones have substantially evolved in terms of software and hardware. In addition, the ubiquity of these devices has increased their use in ubiquitous geographic information systems (UBI-GIS), especially for data visualization. The accessibility of modern mobile phones allows users to overcome the limits of time and space, providing the most important element of ubiquitous computing. In fact, smartphones combine various tracking techniques such as GPS and IMU sensors with high-resolution cameras, allowing data visualization anytime and anywhere. In ubiquitous computing, augmented reality (AR) has been discussed as a ubiquitous user interface to the real world. AR is an excellent user interface for mobile computing applications because it allows intuitive browsing of location-referenced information. In an AR environment, the user's perception of the real world is enhanced by computer-generated entities such as 3D objects and spatialized audio. An AR system requires at least three significant components: a model of the environment in which the system is deployed, a real-time capable method for tracking the user, and an ergonomically acceptable mobile hardware setup. In this paper, these three core components are chosen as follows: a 3D model of the area in a 3D GIS as the model of the environment; MEMS inertial sensors, a MEMS compass, and GPS as the tracking hardware; and a smartphone as the mobile hardware setup. There are many methods for data visualization in AR systems. Geo-labeling is one such method in ubiquitous computing, and it is tightly connected to Geospatial Information Systems (GIS). GIS can play an important role in improving AR systems: it can be exploited for content creation, and geospatial analyses such as visibility analysis can improve the AR view. QR codes and RFID are two traditional methods for geo-labeling and data retrieval, but they require special equipment and a large budget. Another way to identify the surrounding environment is through georeferenced objects that can be retrieved by their location; in this approach, however, the AR view shows too much content because there is no filtering method. In this paper, the proposed system integrates 3D GIS with tools for 3D visibility analysis and an AR view in order to establish an improved geo-labeling system. The principal components of the proposed system are explained, and smartphones are employed as the platform for creating the ubiquitous environment that conducts the geo-labeling process. Haft-e Tir Square in Tehran was selected for implementing the proposed method, and an Android application was developed. Quantitative assessment of the system results indicates the high performance of the AR technology for data visualization, with high accuracy and applicability in an urban environment; the geo-labels are linked to their corresponding objects with acceptable accuracy. Finally, the efficiency of the system for geo-labeling and AR-based data visualization is analyzed through quantitative and qualitative methods, which supports the development of systems with improved functionality.
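The core geometric step of geo-labeling, placing a label for a georeferenced object in the camera view from the GPS position and compass heading, can be sketched as below. This is a flat-earth, horizontal-only approximation with assumed field-of-view and screen-width parameters; the paper's visibility analysis and vertical placement are omitted.

```python
import math

def screen_position(user, poi, heading_deg, hfov_deg=60.0, screen_w=1080):
    """Place a geo-label on the phone screen (horizontal axis only).

    user, poi: (lat, lon) in degrees; heading_deg: compass azimuth of the
    camera. A flat-earth approximation, adequate at street scale.
    Returns the pixel column, or None if outside the field of view.
    """
    dlat = math.radians(poi[0] - user[0])                      # northing
    dlon = math.radians(poi[1] - user[1]) * math.cos(math.radians(user[0]))
    bearing = math.degrees(math.atan2(dlon, dlat)) % 360.0     # azimuth to POI
    off = (bearing - heading_deg + 180.0) % 360.0 - 180.0      # [-180, 180)
    if abs(off) > hfov_deg / 2:
        return None                      # outside the camera's field of view
    return int(screen_w * (0.5 + off / hfov_deg))              # pixel column
```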

Issue Info:
  • Year: 2016
  • Volume: 5
  • Issue: 3
  • Pages: 111-128
Measures:
  • Citations: 0
  • Views: 701
  • Downloads: 0
Abstract: 

Producing maps and usable information in a geospatial information system incurs notable cost and time, and such information ultimately underpins further decisions and activities, especially in urban areas. Updating data is part of both developing and using a geospatial information system. The change detection process provides the context for updating information, with recent and challenging applications in many fields, including urban planning, the environment, and other earth sciences. Common change detection techniques are usually pixel-based. In this study, a binary mask and a post-classification comparison method were used in combination, and the results were compared with the post-classification comparison method alone. The binary mask combines fuzzy thresholding with an automatic thresholding method such as Otsu; classifiers including maximum likelihood, support vector machines, nearest neighbor, and neural networks were then compared. The data set consists of a pair of very high resolution images of the Azadshahr region, District 22 of Tehran (Iran), acquired by the QuickBird and GeoEye sensors in 2006 and 2011, respectively, with three visible spectral bands (blue, green, and red). The results show that the proposed method is more accurate than post-classification comparison in both qualitative and quantitative terms: using neural networks, the overall accuracy and kappa coefficient of the resulting change map are 73.32 and 68.38, respectively, against 65.61 and 48.96 for post-classification comparison.
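The automatic-thresholding half of the binary mask can be sketched in a few lines, assuming scikit-image; the fuzzy thresholding stage and the subsequent classifier comparison of the paper's hybrid method are not reproduced here.

```python
import numpy as np
from skimage.filters import threshold_otsu

def binary_change_mask(img_t1, img_t2):
    """Binary change mask from bi-temporal images via Otsu thresholding.

    img_t1, img_t2: co-registered float arrays with bands on the last axis.
    The magnitude of the difference image is thresholded automatically.
    """
    diff = np.linalg.norm(img_t2.astype(float) - img_t1.astype(float), axis=-1)
    t = threshold_otsu(diff)
    return diff > t    # True = changed pixel, passed on to classification
```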

Author(s): 

GOLI M.

Issue Info:
  • Year: 2016
  • Volume: 5
  • Issue: 3
  • Pages: 129-137
Measures:
  • Citations: 0
  • Views: 635
  • Downloads: 0
Abstract: 

Solving the third geodetic boundary value problem requires gravity anomalies to be continued from the surface of the Earth down to their mean values on the geoid. Downward continuation (DC) is the most challenging part of precise geoid determination. The inverse of Poisson's integral is frequently used by researchers for DC. In this paper, the planar approximation of Poisson's integral is used, which provides the same accuracy as higher approximations such as the spherical or ellipsoidal ones. The DC problem is inherently ill-posed, being highly sensitive to the high-frequency part of the gravity signal; the process is ill-posed in its continuous form. For numerical evaluation, the process as a linear system (Ax = b) is well-posed if the Hadamard conditions are fulfilled: existence of a solution, uniqueness, and stability. Existence and uniqueness are guaranteed physically, but the process may be unstable, i.e., the solution does not depend continuously on the data (b). Continuous problems must be discretized for numerical evaluation, and the discretized form of an ill-posed problem may turn out well-posed depending on the discretization step. In the DC process, the spacing of the gravity anomalies is a major factor in the onset of instability. The discretization of Poisson's integral equation can be done in two schemes: mean (grid) and point (scatter). Usually, gravity data are observed at scattered points, such as levelling benchmarks, and mean gravity anomalies are then predicted/averaged on a regular mesh. DC of gridded gravity anomalies is much easier to implement and more stable, because averaging attenuates the high frequencies; in addition, the stability of the linear equation system is increased by removing very close observations. However, the useful local part of the gravity signal is lost by averaging, and mean anomalies are unavoidably affected by prediction error, particularly in regions of poor data coverage. The mean gravity anomalies on the geoid can be computed directly from DC of the observed anomalies, but this leads to an ill-conditioned linear system in most cases; hence appropriate regularization methods are needed to obtain the desired accuracy. DC of scattered data has some advantages: there is no prediction error, and the data contain all frequencies of the gravity field. In this study, the accuracy and stability of DC of scattered and gridded anomalies are investigated. The discrete Picard condition is utilized to study the ill-posedness and instability of the DC linear equation system. Numerical examination is done in two mountainous test areas: one in Iran with poor gravity data coverage and one in the USA with dense gravity observations. Numerical results in both test areas show that DC of scattered anomalies is ill-posed, due to the closeness of point anomalies in some areas such as along levelling lines, whereas DC of 5′×5′ gridded anomalies is a well-posed and stable problem. DC of EGM08 synthetic gravity anomalies indicates that, despite the prediction error in gridded anomalies and the removal of some useful high frequencies, their results are more accurate than those of scattered anomalies.
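The discrete Picard condition used here has a compact numerical form: compare the decay of the SVD coefficients |u_i^T b| with that of the singular values. A minimal diagnostic sketch, assuming NumPy:

```python
import numpy as np

def picard_terms(A, b):
    """Diagnose the discrete Picard condition for the DC system Ax = b.

    A: discretized Poisson downward-continuation matrix; b: observed
    anomalies. The condition holds (and the discretized problem behaves
    stably) when |u_i^T b| decays faster than the singular values s_i.
    """
    U, s, _ = np.linalg.svd(A, full_matrices=False)
    coeffs = np.abs(U.T @ b)            # Fourier coefficients |u_i^T b|
    return s, coeffs, coeffs / s        # plot all three against the index i
```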

Author(s): 

MOBASHERI M.R. | AMRAEI E.

Issue Info:
  • Year: 2016
  • Volume: 5
  • Issue: 3
  • Pages: 139-149
Measures:
  • Citations: 0
  • Views: 718
  • Downloads: 0
Abstract: 

The CCD Camera is a multi-spectral pushbroom sensor on the CBERS-02 satellite platform. Images captured by this sensor exhibit vertical stripe noise caused by mismatches among the adjacent detectors of the CCD array, internal changes in the detectors, mis-calibration, and low signal-to-noise ratios. These stripes are more pronounced over homogeneous surfaces in Level-2 products, and their presence makes correct interpretation and information extraction difficult. In this study, a noise correction method based on spatial moment adjustment is introduced. In the proposed method, statistical moments such as the mean and standard deviation of each column in each band are used to restore the statistical characteristics of the detector array to their reference values. On an image with simulated vertical noise, 97% denoising accuracy was achieved. Moreover, image quality increased by 16% in band 1 and 19% in band 2, showing the acceptable performance of the method. By applying the method to the band 1 and band 2 images, the standard deviation decreased from 9.47 to 9.01 and from 5.72 to 5.25, respectively, confirming the method's success.
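A generic column-wise moment-matching destriper is sketched below: it rescales each detector column to reference first and second moments. Matching to the whole-image statistics is an assumed choice here; the paper's reference values may differ.

```python
import numpy as np

def destripe_moment_matching(img):
    """Column-wise moment matching for pushbroom stripe noise.

    img: 2D array of one band (rows x detector columns). Each column is
    shifted and scaled so its mean and standard deviation match the
    whole-image reference statistics.
    """
    col_mean = img.mean(axis=0)
    col_std = img.std(axis=0)
    ref_mean, ref_std = img.mean(), img.std()
    gain = ref_std / np.where(col_std == 0, 1, col_std)   # avoid divide-by-0
    return (img - col_mean) * gain + ref_mean
```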

Issue Info:
  • Year: 2016
  • Volume: 5
  • Issue: 3
  • Pages: 151-164
Measures:
  • Citations: 1
  • Views: 807
  • Downloads: 0
Abstract: 

An important issue in spatial science research is the development and use of a suitable framework for the implementation, assessment, and continuous improvement of a Spatial Data Infrastructure (SDI). With the development of a National SDI (NSDI) in Iran, the need to justify the resources used and to verify the efficiency of the implemented NSDI is felt. Creating SDIs at the global, regional, national, state, local, and organizational levels is a challenging issue for the sustainable development of states. SDI assessment can be used to check whether the results of implementing an SDI are effective; it can also increase common knowledge of SDI to improve future performance. The main objective of this study is to evaluate the Iranian SDI from different views. The research seeks to answer the question of how to evaluate the performance of an SDI at the national level; detecting the strengths and weaknesses of the implemented SDI is another objective. For this purpose, various performance assessment models were studied, and an appropriate one was selected and modified to match the NSDI criteria. In this research, the Balanced ScoreCard (BSC) was selected for NSDI assessment. BSC assesses performance from different perspectives: financial, customer, internal processes, and learning and growth. To fulfill the requirements, different views were considered. The most important points for an NSDI assessment include the financial metrics, the strategies used, available technologies, users, employees, management, human resources, and the policy practiced. By adapting these criteria to the BSC perspectives, an appropriate method for NSDI performance assessment is proposed. The stability of an SDI assessment framework rests on the multiplicity of assessment views; the benefit of this method is the flexibility of the framework, which permits adding new assessment perspectives and adjusting or removing existing ones. The evaluation of the Iranian NSDI using the BSC model was performed for the first time in the country. To achieve the objectives of the study, several steps were taken. First, different views were considered, and a questionnaire containing 62 five-option questions was prepared to cover all aspects of NSDI assessment. After determining the sample size, the questionnaires were completed by experts during visits to centers and organizations in the field of geomatics. A reliability test was performed to ensure that the questionnaire was able to assess the NSDI. Then the weight of each question was calculated, and each questionnaire was weighted based on the respondent's degree and field of study, work experience in geomatics, organization, and the correlation between the respondent's answers; the weights were then normalized. After adapting the SDI assessment criteria to the BSC method, the criteria, sub-criteria, and weights of each sub-criterion were estimated.
The final results for the four BSC perspectives showed performance ratios of 34.56% for the financial perspective, 41.08% for the customer perspective, 49.67% for internal processes, and 39.75% for learning and growth. The results show that the current situation of the NSDI in Iran is not satisfactory, and more effort is needed in all perspectives. Most of the weaknesses were seen in the financial field, while performance in internal processes was better than in the others. Over all four perspectives, the performance value of the NSDI in Iran was estimated at 41.27%, which is below average. Therefore, despite the efforts of managers and experts in the field of Geospatial Information Systems (GIS) and SDI, the results indicate a lack of satisfactory performance. The assessment results obtained from the BSC model present an overview of the current situation of the NSDI in Iran. In addition to the results for each criterion, future research can examine each criterion in more detail; to this end, the situation of the related sub-criteria can be studied to assess the strengths and weaknesses of the NSDI, and other researchers can add more criteria and sub-criteria. The main reason for the unsatisfactory results can be assigned to the restrictions on releasing the national geoportal. It should be noted that the results depend on the definition of the sub-criteria and their weights.
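The aggregation of weighted questionnaires into a per-perspective performance ratio can be illustrated as a doubly weighted mean. This is an assumed formula for illustration; the paper's exact aggregation rule is not reproduced here.

```python
import numpy as np

def perspective_score(answers, q_weights, resp_weights):
    """Weighted BSC performance ratio (percent) for one perspective.

    answers: (n_respondents, n_questions) Likert responses scaled to [0, 1];
    q_weights: per-question weights; resp_weights: respondent weights
    (degree, experience, organization). Both weight vectors are normalized
    inside the function.
    """
    per_person = answers @ (q_weights / q_weights.sum())
    return 100.0 * per_person @ (resp_weights / resp_weights.sum())
```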

Issue Info:
  • Year: 2016
  • Volume: 5
  • Issue: 3
  • Pages: 165-173
Measures:
  • Citations: 0
  • Views: 1699
  • Downloads: 0
Abstract: 

Nowadays, various pollutants appear in the air due to human and biological activities, threatening citizens' lives, especially in metropolises, with direct effects on their health. Tehran, as the capital city and a metropolis, is constantly exposed to these risks. In recent years, one of the greatest threats for Tehran has been suspended particles with a diameter of 2.5 microns (PM2.5), which have caused most of the recent unhealthy days. The particles may be of natural origin (e.g., pollen, protozoa, fungi, plant fibers, and dust caused by volcanic activity) or human origin (e.g., combustion fumes, smoke, metal oxides, salts, oil or tar droplets, silicates, and dense metal smoke). Health studies have shown a significant association between exposure to dust and premature death from heart and lung diseases. In this respect, pollutant concentration has become a major management challenge in Tehran. Knowing the spatial distribution of pollution enables managers to take appropriate actions proportionate to the dangers and to reduce risks. On the other hand, measuring pollutant concentrations is costly and is performed at points, whereas regional analyses require these point measurements to be generalized and distributed over the whole city area. Generally, two classes of methods are used to identify and zone pollutants: spatial interpolation and dispersion models. In recent years, with the development of statistical and geostatistical models, multiple spatial interpolation models have become available and been used. Interpolation methods estimate unknown values from the known values around them, but the way known values influence unknown ones divides interpolation methods into two broad categories: global methods fit a surface to all the data, while local methods work with nearby points. In spatial studies, we often face data whose behavior shifts from one region to another: the studied parameter depends on explanatory variables whose influence changes across regions. In such situations, a global method assigns the same weight, or final dependence, to each independent variable across all regions, which in many cases disagrees with reality, since the dependency shifts with location. In contrast, a local method considers a limited area around each sample and estimates the weights and the relationship between the independent and dependent variables there; the weights and dependency ratios are not constant but vary regionally, so that nearby, more similar observations show higher spatial dependence than distant ones. This study used geographically weighted regression (GWR), one of the local methods, for zoning the pollutant PM2.5. Land use, population, elevation, main roads, freeways, temperature, wind speed and direction, and pollutant concentrations were used as input data to the model. In the general approach, the weight matrix was estimated from the known concentration values by GWR; after applying this matrix to the grid, the concentrations were evaluated. Finally, using a kriging model and the concentrations, the PM2.5 surface was fitted over Tehran.
This research ultimately produces a PM2.5 map of the city of Tehran, which is useful for identifying risk areas in the city and applying measures to reduce pollution there. Comparing the produced maps with the observed data and reviewing statistical parameters such as the coefficient of determination (R2 = 0.75-0.80) and root mean square error (RMSE = 7.1-8.5) showed that the proposed model has a high ability to estimate concentrations in various areas of the city of Tehran.
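The final surface-fitting step can be sketched with ordinary kriging, for instance via the pykrige package; the variable names, file names, variogram model, and grid size below are illustrative assumptions.

```python
import numpy as np
from pykrige.ok import OrdinaryKriging   # pip install pykrige

# x, y: monitoring-station coordinates; z: GWR-estimated PM2.5 values at
# the stations. File names are hypothetical.
x = np.load("stn_x.npy")
y = np.load("stn_y.npy")
z = np.load("pm25_gwr.npy")

ok = OrdinaryKriging(x, y, z, variogram_model="spherical")
gridx = np.linspace(x.min(), x.max(), 200)
gridy = np.linspace(y.min(), y.max(), 200)
surface, variance = ok.execute("grid", gridx, gridy)   # PM2.5 map + variance
```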

Issue Info:
  • Year: 2016
  • Volume: 5
  • Issue: 3
  • Pages: 175-191
Measures:
  • Citations: 0
  • Views: 1226
  • Downloads: 0
Abstract: 

Timely and accurate detection of land cover/use changes is one of the most important issues in land planning and management. Remote sensing (RS) images have become an important data source for change detection (CD) in recent decades, and thresholding of a difference image (DI) is a prevalent approach to RS-based CD. It can be shown that environmental changes occur in such a way that the different spectral changes of a phenomenon are detectable in different parts of the electromagnetic spectrum; hence, utilizing several spectral bands can offer higher accuracy in the CD process. However, prevalent thresholding techniques were developed for a one-dimensional space and are not appropriate for CD in the multi-dimensional space of RS images. The common way to overcome this deficiency is to fuse data at the feature and/or decision level, and some methods have already been developed for this purpose. While it is difficult to decide which data fusion technique is the most appropriate, a common particularity of all these approaches (except voting and Bayesian) is their supervised nature: the analyst must determine parameters that best fit a certain application and dataset. Unsupervised approaches, on the other hand, generally have low accuracy in the CD process. To extend the thresholding technique to multi-spectral images, a simple yet effective data fusion approach is proposed in this paper: a fusion-based linear combination of the multi-spectral change image, whose weights are optimized using the Particle Swarm Optimization (PSO) algorithm. The proposed approach consists of two major steps. In the first step, a multi-spectral change image is generated. Several methods can be used for this purpose; in this research, we chose the difference image operation, as it is simple to implement and easy to interpret, consisting of a straightforward arithmetic difference between the digital values of two images obtained on different dates. In the next step, PSO is initialized with arbitrary weights and the weighted image fusion is carried out as FCI = Σi wi·DIi, where wi denotes the weight associated with the i-th band of the multi-spectral difference image (DIi), such that Σi wi = 1. Afterwards, the Otsu thresholding technique is applied to produce a binary change mask (BCM) and evaluate the fitness of the fused change index (FCI). If either termination condition (optimum fitness or maximum number of iterations) is satisfied, the current weights are saved as the optimum weights of the weighted linear combination; otherwise they are updated by the PSO algorithm to reach the optimum values. The performance of the developed technique is evaluated on bi-temporal multispectral images acquired by the Landsat-5 Thematic Mapper (TM) sensor in July 2000 and July 2009. This data set has a spatial resolution of 30 m × 30 m and 7 spectral bands ranging from blue light to shortwave infrared (0.45-2.35 μm); the 6th band (thermal infrared) is not utilized due to its low spatial resolution.
The selected area consists of co-registered subsets of size 470×830 pixels of two full scenes, including the Khodafarin Dam (an earth-fill embankment dam on the Aras River straddling the border between Iran and Azerbaijan). In addition to visual assessment of the CD results, quantitative analysis was carried out on 2799 samples of changed regions and 5168 samples of unchanged regions, selected according to field work and image interpretation. The proposed fusion-based linear combination of multispectral difference images, which extends the thresholding technique to multi-spectral images, achieves better CD accuracy than the individual spectral bands of the DI and the other state-of-the-art image fusion algorithms at the feature and/or decision level. An overall accuracy of 90.68% with the proposed method, compared to overall accuracies of 79.06% and 70.81% for the prevalent voting algorithms (data fusion at the decision level) and 80.77% for the Bayesian algorithm (data fusion at the feature level), confirms the effectiveness of the proposed method for unsupervised CD in multi-spectral, multi-temporal RS images.
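A compact sketch of the optimization loop, assuming NumPy and scikit-image: particles are candidate weight vectors, the fused change index is Otsu-thresholded, and the fitness is taken here as Otsu's between-class variance of the FCI. That fitness is an assumed stand-in for the paper's criterion, and the PSO hyperparameters are illustrative.

```python
import numpy as np
from skimage.filters import threshold_otsu

def fitness(w, DI):
    """Between-class variance of the Otsu-thresholded fused change index.
    DI: (H, W, B) multi-spectral difference image; w: weights over bands."""
    w = np.abs(w) + 1e-12
    w = w / w.sum()                                 # enforce sum(w_i) = 1
    fci = np.tensordot(DI, w, axes=([-1], [0]))     # FCI = sum_i w_i * DI_i
    m = fci > threshold_otsu(fci)
    p1 = m.mean()
    if p1 == 0.0 or p1 == 1.0:
        return 0.0                                  # degenerate split
    return p1 * (1 - p1) * (fci[m].mean() - fci[~m].mean()) ** 2

def pso_weights(DI, n_part=20, iters=50, inertia=0.7, c1=1.5, c2=1.5):
    """Minimal global-best PSO over the fusion weights."""
    rng = np.random.default_rng(0)
    d = DI.shape[-1]
    pos = rng.random((n_part, d))
    vel = np.zeros((n_part, d))
    pbest = pos.copy()
    pfit = np.array([fitness(p, DI) for p in pos])
    for _ in range(iters):
        g = pbest[pfit.argmax()]                    # global best particle
        vel = (inertia * vel
               + c1 * rng.random((n_part, d)) * (pbest - pos)
               + c2 * rng.random((n_part, d)) * (g - pos))
        pos += vel
        fit = np.array([fitness(p, DI) for p in pos])
        better = fit > pfit
        pbest[better], pfit[better] = pos[better], fit[better]
    best = np.abs(pbest[pfit.argmax()])
    return best / best.sum()                        # normalized optimum weights
```

Applying the returned weights and thresholding the resulting FCI once more with Otsu yields the final binary change mask.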

Author(s): 

MABOUDI M. | AMINI J. | SAATI M.

Issue Info:
  • Year: 2016
  • Volume: 5
  • Issue: 3
  • Pages: 193-202
Measures:
  • Citations: 0
  • Views: 857
  • Downloads: 0
Abstract: 

Road extraction from remotely sensed imagery is a rapid and cost-effective method for acquiring transportation information and updating GIS (Geographic Information System) systems. Fast and continuous changes in the urban environment increase the need for regularly updating or revising road network layers in GIS. The difficulty in designing an automated road network extraction system from remotely sensed imagery lies in the fact that the image characteristics of road features vary with sensor type, spectral and spatial resolution, ground characteristics, and so on. Even within an image taken over a particular urban area, different parts of the road network exhibit different characteristics. In the real world, a road network is too complex to be modeled by a mathematical formulation or an abstract structural model, and other objects (e.g., buildings and trees) cast shadows that occlude road features, further complicating the extraction process. In this research, a general object-based framework for road extraction is implemented, and the effect of the choice of segmentation method on road extraction is analyzed. Image segmentation is the first and crucial step of object-based image analysis; it aims to obtain homogeneous segments for subsequent feature extraction, classification, and higher-level image analysis. Extensive research has been conducted on image segmentation; the major categories of current state-of-the-art RS image segmentation methods are: 1) point/pixel-based; 2) feature-based; 3) edge-based; 4) region-based; 5) texture-based; and 6) hybrid. Preprocessing, the first step of the proposed method, is designed to improve the quality of the image and identify the image pixels relevant for further processing. Then, object-based segmentation is used to extract the initial road segments, and the segmented objects are classified into a binary image of road and non-road classes. Next, the skeletons of the road objects are extracted. After skeletonization, a compact approximation of line segments and curves in vector format is produced in the vectorization step. Small branches of the road network produced so far that are not actually roads are removed in the pruning step. Finally, the proposed method is evaluated by comparison with a reference road network (ground truth), generated either from GIS road vector data or by manual extraction. For the evaluation, real WorldView-2 data over the Shushtar area in Khuzestan province, Iran, is used, and three different segmentation methods implemented in the eCognition software are tested. Two popular quality metrics from the literature are adopted: completeness and correctness. The best quality is achieved with the multi-resolution segmentation method, and pruning the extracted road network improves the results by over 20%. The final results, after multi-resolution segmentation and pruning, show 88% correctness and 85% completeness. In addition, the selection of the multi-resolution segmentation parameters is appraised and their effects are assessed. This paper generally emphasizes the role of image segmentation quality in the effectiveness of further processing; future work could compare other state-of-the-art segmentation algorithms with the results of the multi-resolution algorithm.
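The buffer-based completeness/correctness evaluation can be sketched on rasterized centerlines, assuming SciPy; the buffer width is an assumed value, not the paper's setting.

```python
import numpy as np
from scipy.ndimage import binary_dilation

def completeness_correctness(extracted, reference, buf=3):
    """Buffer-overlay road evaluation metrics on rasterized centerlines.

    extracted, reference: boolean skeleton rasters. A pixel counts as
    matched if it falls within `buf` pixels of the other network.
    Correctness = matched extraction / all extraction;
    completeness = matched reference / all reference.
    """
    ref_buf = binary_dilation(reference, iterations=buf)
    ext_buf = binary_dilation(extracted, iterations=buf)
    correctness = (extracted & ref_buf).sum() / max(extracted.sum(), 1)
    completeness = (reference & ext_buf).sum() / max(reference.sum(), 1)
    return completeness, correctness
```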

Author(s): 

MASJEDI A. | KHAZAEI S.

Issue Info:
  • Year: 2016
  • Volume: 5
  • Issue: 3
  • Pages: 203-215
Measures:
  • Citations: 0
  • Views: 919
  • Downloads: 0
Abstract: 

A hyperspectral image contains hundreds of narrow, contiguous spectral bands. Because of this high spectral resolution, hyperspectral images provide valuable information about earth surface materials and objects, and target detection (TD) is therefore a key issue in processing such data. The aim of TD algorithms is to find specific targets with known spectral signatures. From another point of view, the enormous amount of information in hyperspectral images increases both the computational burden and the correlation among spectral bands. Besides, even the best TD algorithms exhibit a large number of false alarms due to the spectral similarity between target and background, especially at the subpixel level, where the target of interest is smaller than the ground pixel size of the image. Thus, dimensionality reduction is often conducted as one of the most important steps before target detection, both to maximize detection performance and to minimize the computational burden. This paper presents a method to improve the efficiency of subpixel TD by selecting appropriate bands using a genetic algorithm (GA). Two similar fitness functions are proposed for the band selection. The first is introduced for cases in which the position of the target is known: the output values of the TD algorithm on the target pixels are maximized, which is roughly equivalent to minimizing the false alarm rate. The main problem with the first fitness function is that it requires the correct positions of the target pixels in the image; hence, the second function is proposed to solve this problem by maximizing the output of the TD algorithm on simulated targets. In this study, the adaptive coherence estimator (ACE), a well-known subpixel TD algorithm, is used in its local form for the evaluations, and the target detection blind test data set is employed for the experiments. The data set includes a HyMap reflectance image of Cooke City, Montana, USA, with a ground resolution of approximately 3 m. In the HyMap image, 12 targets at full and subpixel sizes were located in an open grass region, including six fabric panels for the self-test and six for the blind test. The local ACE algorithm is implemented using inner and outer detection windows of 3×3 and 5×5 pixels, respectively. The GA is run with a population of 100, a mutation probability of 0.2, a crossover probability of 0.8, and a maximum of 100 generations. Experimental results for detecting the 10 subpixel targets show that the number of false alarms produced when reducing the dimension with the GA is far lower than when using all bands: the GA with the first and second fitness functions reduces the false alarm rate by 95% and 75%, respectively, compared to using all bands. For a fair comparison, the GA-contrast method was also run on the same data set; compared to it, the GA with the first and second fitness functions reduces the false alarm rate by 94% and 70%, respectively.
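The global (non-windowed) form of the ACE statistic is easy to state; the paper's local dual-window variant replaces the global background statistics with ones estimated around each pixel. A sketch, assuming NumPy; the diagonal loading term is an assumed numerical safeguard. In the GA wrapper, each chromosome would select a band subset of X and s before evaluating this statistic.

```python
import numpy as np

def ace(X, s, mu, cov):
    """Adaptive coherence estimator for target spectrum s.

    X: (n_pixels, n_bands) test pixels; mu, cov: background mean and
    covariance. Larger scores indicate a better match to the target:
      ACE(x) = (s~^T C^-1 x~)^2 / ((s~^T C^-1 s~)(x~^T C^-1 x~)),
    with x~ = x - mu and s~ = s - mu.
    """
    Xc, sc = X - mu, s - mu
    inv = np.linalg.inv(cov + 1e-6 * np.eye(cov.shape[0]))  # diagonal loading
    num = (Xc @ inv @ sc) ** 2
    den = (sc @ inv @ sc) * np.einsum("ij,jk,ik->i", Xc, inv, Xc)
    return num / den
```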

Issue Info: 
  • Year: 

    2016
  • Volume: 

    5
  • Issue: 

    3
  • Pages: 

    217-231
Measures: 
  • Citations: 

    0
  • Views: 

    608
  • Downloads: 

    0
Abstract: 

Geosensor networks consist of a large number of nodes, each of which is a small sensor-enabled computer. A geosensor network can be imagined as an environmental microscope that collects and processes environmental information at a specified spatio-temporal resolution, in high detail and in real time. One important application of these networks is extracting the topological relations between regions in environmental phenomena, for example when investigating the causes of forest fires, where the topological relation between very hot air, flammable materials, and the forest region changes from "disjoint" to "overlap" and "inside". Because environmental phenomena can contain cavities, such as marshes or mountains in some regions, these regions must be modeled as regions with holes in geosensor networks. In this research, regions with holes are monitored by a geosensor network, and an algorithm was designed to extract the topological relations between them. Theoretical models such as the 4-intersection, the 9-intersection, and RCC only handle topological relations between regions without holes and cannot distinguish the different topological relations between regions with holes; the designed algorithm therefore uses the modified 9-intersection model to derive the topological relation between a region and a region with a hole. To calculate this 9-intersection model and extract the relations between the two regions, it suffices to determine the topological relation between the region without a hole and each of the hole and generalized-region elements of the region with a hole. In the first step, the 4-intersection model is used, and the topological relations between the two regions are then determined by calculating the modified 9-intersection model. Given the environmental conditions of the network, positioning the nodes by GPS may not be possible; hence, the algorithm is designed so that nodes without position information derive the topological relation between the two regions based only on one-hop neighborhood information. The designed algorithm is decentralized, and its implementation is evaluated in a simulation. Implementing it requires modeling the regions, the geosensor network, and the communication between nodes in the simulation program. After modeling the regions, the distribution of nodes over them is modeled randomly. Communication between nodes must be possible through a neighborhood structure; the most basic one is the unit disk graph, which is used as the default structure in this research. A coverage structure is also required for merging and integrating information in the network, for which a rooted tree is used: one network node is selected as the root, and the other nodes send their local information toward it along the tree branches. Local processing is done at each network node, where the topological relation is calculated locally; the local information is then integrated across network nodes and sent to the root node. Finally, the topological relation between the two regions is determined at the root node based on the modified 9-intersection model.
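The 9-intersection relation itself can be illustrated centrally (outside the network) with shapely's DE-9IM support; the in-network, one-hop decentralized computation described above is not reproduced here, and the coordinates are toy values.

```python
from shapely.geometry import Polygon

# Region A: a simple region; region B: a region with a hole (toy coordinates).
a = Polygon([(2, 2), (5, 2), (5, 5), (2, 5)])
b = Polygon(shell=[(0, 0), (10, 0), (10, 10), (0, 10)],
            holes=[[(3, 3), (4, 3), (4, 4), (3, 4)]])

# relate() returns the DE-9IM string: the 3x3 matrix of intersection
# dimensions between the interiors, boundaries, and exteriors of A and B.
print(a.relate(b))   # expected '2121FF212': A straddles B's hole
print(a.within(b))   # False: part of A lies in the hole, i.e. in B's exterior
```

Ordinary predicates such as within report only one relation; the full DE-9IM string is what lets a region containing a hole be told apart from one that does not, which is the distinction the modified 9-intersection model targets.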

Issue Info: 
  • Year: 

    2016
  • Volume: 

    5
  • Issue: 

    3
  • Pages: 

    233-245
Measures: 
  • Citations: 

    0
  • Views: 

    1049
  • Downloads: 

    0
Abstract: 

Nowadays, most urban societies experience urban traffic congestion, caused by too many vehicles crossing the same transportation infrastructure at the same time. Traffic congestion has various consequences, such as air pollution, reduced speed, increased travel time, higher fuel consumption, and even accidents. One feasible way to cope with growing transportation demand is to improve the existing infrastructure by means of intelligent traffic control systems. From a traffic engineering point of view, a traffic control system consists of the physical network, control devices (traffic signals, variable message signs, and so forth), a model of transportation demand, and a control strategy. The focus of this paper is on the latter, especially traffic signal control.

Traffic signal control can be modeled well by multi-agent systems because of its distributed and autonomous nature. In this context, drivers and traffic signals are considered distributed, autonomous, and intelligent agents. Moreover, owing to the high complexity of urban traffic patterns and the non-stationarity of the traffic environment, developing an optimized multi-agent system with preprogrammed agent behavior is largely impractical. The agents must, instead, discover their knowledge through a learning mechanism by interacting with the environment.

Reinforcement learning (RL) is a promising approach for training an agent that optimizes its behavior by interacting with the environment. Each time the agent receives information on the current state of the environment, it performs an action, which may change the state of the environment, and receives a scalar reward that reflects how appropriate its past behavior has been. The function that indicates the action to take in a certain state is called the policy, and the goal of RL is to find a policy that maximizes the long-term reward. The RL algorithms introduced so far can be divided into three groups: actor-only, critic-only, and actor-critic methods.

Actor-only methods typically work with a parameterized family of policies over which optimization procedures can be applied directly; often the gradient of the value of a policy with respect to the policy parameters is estimated and then used to improve the policy. Their drawback is that the increase in performance is harder to estimate when no value function is learned. Critic-only methods first find the optimal value function and then derive an optimal policy from it; this approach undermines the ability to use continuous actions and thus to find the true optimum. In this research, actor-critic reinforcement learning is applied as the learning method for truly adaptive traffic signal control. The actor-critic method is a temporal-difference method with a separate memory structure that explicitly represents the policy independently of the value function: the policy structure is known as the actor, because it is used to select actions, and the critic is a state-value function.

In this paper, AIMSUN, a microscopic traffic simulator, is used to model the traffic environment. AIMSUN models stochastic vehicle flow using car-following, lane-changing, and gap-acceptance models. The AIMSUN API was used to construct the state, execute the action, and calculate the reward at each traffic light. The state of each agent is represented by a vector of 1 + P components, where the first component is the phase number and P is the number of approach streets entering the intersection. The action of the agent is the duration of the current phase. The immediate reward is defined as the reduction in the total number of cars waiting in all approach streets; in fact, the difference between the total number of cars at two successive decision points is used as the reward signal. The reinforcement learning controller is benchmarked against optimized pretimed control. The results indicate that the actor-critic controller decreases queue length, travel time, fuel consumption, and air pollution compared to the optimized pretimed controller.
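A minimal tabular actor-critic loop, with the AIMSUN environment replaced by a random stub, might look like the sketch below; the state/action sizes, learning rates, and softmax actor are assumptions, not the paper's exact configuration.

```python
import numpy as np

rng = np.random.default_rng(1)
n_states, n_actions = 20, 4           # assumed discretized states / phase durations
V = np.zeros(n_states)                # critic: state-value table
H = np.zeros((n_states, n_actions))   # actor: action preferences (softmax policy)
gamma, alpha_c, alpha_a = 0.95, 0.1, 0.01

def policy(s):
    """Softmax over the actor's action preferences in state s."""
    p = np.exp(H[s] - H[s].max())
    return p / p.sum()

def env_step(s, a):
    """Stand-in for the AIMSUN API: returns the next state and the reward,
    i.e. the reduction in queued vehicles between two decision points."""
    return int(rng.integers(n_states)), float(rng.normal())

s = 0
for _ in range(10_000):
    a = rng.choice(n_actions, p=policy(s))
    s2, r = env_step(s, a)
    delta = r + gamma * V[s2] - V[s]   # temporal-difference error from the critic
    V[s] += alpha_c * delta            # critic update
    H[s, a] += alpha_a * delta         # actor update along the TD error
    s = s2
```

The separation is visible in the two tables: the critic V judges states, while the actor H is the explicit policy structure that selects phase durations, exactly the division the actor-critic description above draws.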

Issue Info: 
  • Year: 

    2016
  • Volume: 

    5
  • Issue: 

    3
  • Pages: 

    247-264
Measures: 
  • Citations: 

    0
  • Views: 

    2811
  • Downloads: 

    0
Abstract: 

The occurrence of earthquakes has made human beings consider fundamental plans to reduce the consequent danger and destruction. The only means to reduce vulnerability is specific urban crisis management in construction; moreover, this aim cannot be achieved unless the city's resilience against earthquakes is considered a major purpose at all stages of urban planning. Proper allocation of urban land uses greatly helps earthquake-related urban crisis management; accordingly, the principal aims of this paper are to recognize the variables that affect the vulnerability of urban areas from the land use point of view, to define their relations with vulnerability, to analyze them, and finally to prepare optimized land use maps with a lower percentage of vulnerability.

To optimize urban land use allocation with the goal of reducing earthquake-induced vulnerability based on physical factors, the multi-objective optimization algorithm NSGA-II was used for modeling, with the 12th district of Tehran as the study area. The main objectives of the algorithm are: maximizing the compatibility of adjacent land uses, maximizing the accessibility of land uses, maximizing the access of sanitary-medical and residential land uses to the road network, minimizing susceptibility at the time of an earthquake, and minimizing land use change. Since NSGA-II is multi-objective, the decision maker faces many different solutions on the Pareto-optimal front, which complicates the process; accordingly, to aid decision making and to present scenarios corresponding to the decision makers' priorities, clustering analysis with the K-means approach was used. To study the variation of results across different runs of the algorithm and the stability of the optimization, convergence-trend and repeatability tests were carried out.

In the resulting optimized land use arrangements, the values of the objective functions are much better than in the previous arrangement. The accessibility objective function improved the most under optimization (27%), and the average improvement of the objective functions was 19%. In the repeatability test, the average overlap of the algorithm's solutions across different runs was 76%, which can be regarded as a suitable value and indicates good repeatability. The results were also found acceptable with respect to the convergence trend, as the objective function values stabilized after a certain number of iterations.

Several factors demonstrate the efficiency of the model: an optimization method compatible with the problem; objective functions defined realistically, covering the main aspects of earthquake vulnerability; involvement of the decision makers' opinions in the research process; and a final stage for selecting the optimal arrangement through analysis and clustering of the scenarios. The results of this research can serve as a decision-support tool for planners and urban management policy makers facing earthquakes, helping them plan urban spaces appropriately.
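At the core of NSGA-II is fast non-dominated sorting, which partitions candidate land use arrangements into successive Pareto fronts. A small self-contained sketch follows, with random stand-ins for the five objective values and every objective cast as minimization (maximized objectives assumed already negated).

```python
import numpy as np

def dominates(f1, f2):
    """True if f1 Pareto-dominates f2 (all objectives minimized)."""
    return bool(np.all(f1 <= f2) and np.any(f1 < f2))

def nondominated_fronts(F):
    """Fast non-dominated sorting: split solutions into Pareto fronts."""
    n = len(F)
    dominated_by_me = [[] for _ in range(n)]
    counts = np.zeros(n, dtype=int)          # how many solutions dominate i
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            if dominates(F[i], F[j]):
                dominated_by_me[i].append(j)
            elif dominates(F[j], F[i]):
                counts[i] += 1
    fronts, current = [], [i for i in range(n) if counts[i] == 0]
    while current:
        fronts.append(current)
        nxt = []
        for i in current:
            for j in dominated_by_me[i]:
                counts[j] -= 1
                if counts[j] == 0:
                    nxt.append(j)
        current = nxt
    return fronts

# Random stand-ins for the five objective values of 50 candidate arrangements.
F = np.random.default_rng(2).random((50, 5))
print([len(front) for front in nondominated_fronts(F)])
```

The first front returned is the Pareto-optimal set the decision maker faces; the K-means clustering step described above then groups these front solutions into a handful of representative scenarios.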

Issue Info: 
  • Year: 

    2016
  • Volume: 

    5
  • Issue: 

    3
  • Pages: 

    265-277
Measures: 
  • Citations: 

    0
  • Views: 

    709
  • Downloads: 

    0
Abstract: 

Recognizing damaged and blocked roads and determining their degree of damage is vital when an area is affected by a natural disaster such as an earthquake. The damage and obstruction are largely a consequence of debris from buildings collapsed adjacent to the roads, and assessing them is essential given the emergency nature of such events. This study uses a new approach for semi-automatic detection and assessment of damaged roads in an affected urban area, utilizing a pre-event road vector map together with pre- and post-disaster QuickBird satellite images. A definition of damage is needed here: damage or obstruction is anything that disturbs the functionality of the road network, such as guiding rescuers, retrieving survivors, and reconstruction operations. In most urban areas, particularly in developing countries, roads are not very wide, so even a trivial obstacle or extra object can noticeably disturb transportation. Damage is therefore defined as debris generated by collapsed buildings or other urban structures, together with cars parked on the road surface in devastated areas. The method consists of two main steps: damage detection (by classification) and damage assessment. Many different features are considered for classifying the road surface, such as a shadow index, NDVI, and GLCM-based features. A genetic algorithm (GA) is designed and used to find the best set of optimal features, since defective bands or correlation among features feed useless information to the classifier, increase computation time, and decrease accuracy. Using this optimal feature set, and after comparing two well-known classifiers (SVM and MLL), the supervised support vector machine classifier was selected because it yielded higher overall accuracy and consequently better damage detection; SVM is thus applied to the optimal features in the damage detection step. Since a road is a slender object, its obstruction must be analyzed at a finer level, so each road is divided into smaller, equal partitions. After this division, a designed Mamdani fuzzy inference system (FIS) is used for the road damage assessment step, with three damage levels: low, medium, and high. The assessment is based on each small partition: each partition enters the fuzzy inference system as a point, and the output is the partition's damage-level index. Some statistical criteria are then applied to the numbers of partitions at the different damage levels, and the damage level is generalized to each individual road; each road thus receives a damage label, yielding a road damage-level map. The proposed method was tested on a QuickBird pan-sharpened image from the Bam earthquake. The results indicate an overall accuracy of 92% and a kappa coefficient of 0.9 for the damage detection step, and 82% of the roads were labeled correctly in the road damage assessment step, demonstrating the efficiency and accuracy of the proposed algorithm.
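A Mamdani FIS of this kind can be sketched with scikit-fuzzy's control API; the input variables (per-partition debris and parked-car percentages), membership functions, and rules below are hypothetical stand-ins, since the abstract does not spell out the paper's rule base.

```python
import numpy as np
import skfuzzy as fuzz
from skfuzzy import control as ctrl

# Hypothetical inputs per road partition: percentage covered by debris and by
# parked cars; output: a crisp damage index later binned into low/medium/high.
debris = ctrl.Antecedent(np.arange(0, 101, 1), 'debris_pct')
cars = ctrl.Antecedent(np.arange(0, 101, 1), 'car_pct')
damage = ctrl.Consequent(np.arange(0, 101, 1), 'damage')

for var in (debris, cars, damage):
    var['low'] = fuzz.trimf(var.universe, [0, 0, 50])
    var['medium'] = fuzz.trimf(var.universe, [25, 50, 75])
    var['high'] = fuzz.trimf(var.universe, [50, 100, 100])

rules = [  # illustrative rules, not the paper's
    ctrl.Rule(debris['high'] | cars['high'], damage['high']),
    ctrl.Rule(debris['medium'] & cars['low'], damage['medium']),
    ctrl.Rule(debris['low'] & cars['medium'], damage['medium']),
    ctrl.Rule(debris['low'] & cars['low'], damage['low']),
]
sim = ctrl.ControlSystemSimulation(ctrl.ControlSystem(rules))
sim.input['debris_pct'] = 70.0
sim.input['car_pct'] = 20.0
sim.compute()
print(sim.output['damage'])   # crisp damage index for this partition
```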

Author(s): 

RAJABI A. | MOMENI M.

Issue Info: 
  • Year: 

    2016
  • Volume: 

    5
  • Issue: 

    3
  • Pages: 

    279-292
Measures: 
  • Citations: 

    0
  • Views: 

    730
  • Downloads: 

    0
Abstract: 

Nowadays, satellite imagery is used for producing and updating maps because of its capabilities. In recent years, IRS-P5 images have been used for updating maps at the 1:25,000 scale, and VHR images such as IKONOS2 and QuickBird2 can be used for updating cadastral maps based on manual transformation. At present, easier access to these images, the appearance of VHR imagery such as GeoEye1 and WorldView2, and the development of advanced algorithms create a good opportunity for producing large-scale maps and speeding up their updates.

With VHR images, updating maps works better than producing maps from scratch, so updating takes priority. However, using satellite images and processing algorithms to produce and update large-scale maps has difficulties in preparing the layers required for such maps; even GeoEye1 images, with 50 cm spatial resolution, cannot provide all of the required layers. The main purpose of this paper is updating 1:2,000-scale maps using GeoEye1 stereo images. We study the performance of these data for map updating by creating a feature vector for image pixels instead of using gray values alone, and by using GeoEye1 stereo images instead of a single vertical image. Our first assumption is that using GeoEye1 stereo imagery as the new image, instead of a single vertical image, not only yields higher precision for updating large-scale maps but also lets us manage height-related errors and cast shadows. Our second assumption is that, for updating large-scale maps, differencing of gray values alone is no longer effective, because the subject concerns the geometry of the phenomena; therefore, all features are first extracted from the image and fed into the differencing step, and then the most effective features in three groups are chosen and arranged by trial and error to build a feature vector with independent members.

At the beginning of the work, the horizontal and vertical accuracies required for large-scale maps were reviewed, the largest map scale that can be prepared from these satellite images was selected (here, 1:2,000), and the performance of GeoEye1 stereo images acquired between 2006 and 2010 for building change detection and updating 1:2,000-scale maps was examined. The updating strategy used here has five stages: choosing and pre-processing the data, change detection, post-processing the detected changes, assessing the detected changes, and finally applying the results to the maps. For the middle three stages, we wrote an algorithm based on differencing the feature vectors of image pixels, which detected building changes in three study regions; extraneous pixels were eliminated, and the changes detected by the algorithm were compared with the actual changes using a confusion matrix, with the results reported as overall accuracy, producer's accuracy, and user's accuracy. In the best case, for the second region, an area with low building density, the accuracies obtained for the change class were 3.11%, 68.60%, and 64.29%; in the third region, an area with high building density, they were 95.07%, 4.81%, and 5.22%. Based on these results, the proposed change detection algorithm using GeoEye1 stereo images performs adequately in areas with low building density.
The results also show that gray-level differencing, or any other single image feature alone, does not perform well in change detection with VHR images, whereas using a feature vector in the differencing step is quite effective. In addition, using stereo images, we were able to manage errors caused by height differences and shadows and to flag these areas for the operator.
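A bare-bones version of the feature-vector differencing and its confusion-matrix evaluation might look as follows; the feature stacks, distance threshold, and reference change mask are synthetic placeholders, not the paper's data.

```python
import numpy as np

def detect_changes(feat_t1, feat_t2, threshold):
    """Flag a pixel as changed when its feature vectors at the two epochs
    differ by more than `threshold` in Euclidean distance."""
    return np.linalg.norm(feat_t2 - feat_t1, axis=-1) > threshold

def change_class_accuracies(pred, truth):
    """Overall, producer's, and user's accuracy for the 'change' class."""
    tp = np.sum(pred & truth)
    fp = np.sum(pred & ~truth)
    fn = np.sum(~pred & truth)
    tn = np.sum(~pred & ~truth)
    overall = (tp + tn) / pred.size
    producer = tp / max(tp + fn, 1)   # share of real changes detected
    user = tp / max(tp + fp, 1)       # share of detections that are real
    return overall, producer, user

# Synthetic per-pixel feature stacks (rows, cols, n_features) at two epochs.
rng = np.random.default_rng(3)
f1 = rng.random((100, 100, 8))
f2 = f1 + rng.normal(0, 0.05, f1.shape)
truth = rng.random((100, 100)) < 0.02          # synthetic reference change mask
pred = detect_changes(f1, f2, threshold=0.3)
print(change_class_accuracies(pred, truth))
```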

Author(s): 

AKBARI M. | SAMADZADEGAN F.

Issue Info: 
  • Year: 

    2016
  • Volume: 

    5
  • Issue: 

    3
  • Pages: 

    293-303
Measures: 
  • Citations: 

    0
  • Views: 

    751
  • Downloads: 

    0
Abstract: 

Air pollution in cities is one of the main problems affecting human health, the environment, the economy, and urban management. To overcome it, urban managers should determine the influential parameters and how they affect air pollution, in order to arrange the necessary plans. Various studies have assessed the impact of parameters such as meteorology, traffic, and topography on air pollution. Identifying the parameters that affect air pollution in urban regions using co-location pattern mining can help solve this problem. A co-location pattern represents a subset of spatial features whose instances are usually found in close proximity. Existing methods have shortcomings, such as supporting only one feature type, requiring spatial relationships explicitly as input data, and extracting patterns without any emphasis on a specific feature, and are therefore not suitable for applications such as air pollution. Hence, this research develops a new co-location pattern mining model that can handle these shortcomings, and it tries to consider the effects of all three aforementioned parameters on air pollution simultaneously by extracting prevalent patterns. To develop the model, we define a framework for the data mining problem. Since the existing literature leaves a gap in handling different feature types, new metrics are defined to compute the participation ratio for point, line, and polygon data: the metric applied to point data is the existing one, while those for line and polygon data extend it based on neighborhoods. As air pollution is a serious problem for Tehran, the developed model was implemented and tested on part of Tehran's data. To apply the proposed method, each of the studied parameters was classified into three classes (low, normal, high) based on its physical characteristics. Data from four days, in the months of Farvardin, Tir, Mehr, and Dey, were selected, first to check the repeatability of the results and second, given the seasonal changes, to validate the proposed model. The input neighborhood radius is 1,500 meters and the prevalence threshold is 0.5; the radius was chosen based on the average distances between air pollution stations and the meaningfulness of parameter changes, and the threshold was chosen so as to find patterns in which at least half of the instances participate. The assessment of the extracted patterns first shows the capability and correctness of the proposed model and second indicates that medium and high air pollution forms meaningful patterns with low traffic volume, low wind speed, and low topography, concentrated toward the central part of the study area, District 6 of Tehran. Finally, it should be noted that air pollution is a spatio-temporal problem, and in addition to the spatial dimension, the temporal aspect should be considered; in this research, however, the emphasis is on the spatial extension of the model to all feature types. Extending the proposed model to mine spatial and temporal patterns simultaneously is the goal of future research.
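For point features, the standard participation index that the paper extends to lines and polygons can be sketched as below, using the paper's 1,500 m radius and 0.5 prevalence threshold; the coordinates are synthetic, and the extended line/polygon metrics are not reproduced.

```python
import numpy as np

def participation_index(pts_a, pts_b, radius):
    """Participation index of the pattern {A, B} for point features: the
    minimum, over the two feature types, of the fraction of instances that
    have a neighbor of the other type within `radius`."""
    d = np.linalg.norm(pts_a[:, None, :] - pts_b[None, :, :], axis=-1)
    pr_a = np.mean((d <= radius).any(axis=1))   # A instances near some B
    pr_b = np.mean((d <= radius).any(axis=0))   # B instances near some A
    return min(pr_a, pr_b)

# Synthetic coordinates (meters): high-pollution stations vs. low-traffic sites.
rng = np.random.default_rng(4)
high_pollution = rng.random((40, 2)) * 5000
low_traffic = rng.random((60, 2)) * 5000
pi = participation_index(high_pollution, low_traffic, radius=1500.0)
print('prevalent' if pi >= 0.5 else 'not prevalent', round(float(pi), 2))
```

Taking the minimum over the feature types ensures a pattern counts as prevalent only when every participating feature, not just one, frequently co-occurs with the others.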
