<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>GPS World &#187; Tech Talk</title>
	<atom:link href="http://www.gpsworld.com/category/blogs/tech-talk/feed/" rel="self" type="application/rss+xml" />
	<link>http://www.gpsworld.com</link>
	<description>The Business and Technology of Global Navigation and Positioning</description>
	<lastBuildDate>Tue, 11 Jun 2013 20:37:58 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>hourly</sy:updatePeriod>
	<sy:updateFrequency>1</sy:updateFrequency>
	<generator>http://wordpress.org/?v=3.5.1</generator>
		<item>
		<title>The Kinematic GPS Challenge: First Gravity Comparison Results</title>
		<link>http://www.gpsworld.com/the-kinematic-gps-challenge-first-gravity-comparison-results-2/?utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=the-kinematic-gps-challenge-first-gravity-comparison-results-2</link>
		<comments>http://www.gpsworld.com/the-kinematic-gps-challenge-first-gravity-comparison-results-2/#comments</comments>
		<pubDate>Wed, 14 Mar 2012 17:53:36 +0000</pubDate>
		<dc:creator>GPS World staff</dc:creator>
				<category><![CDATA[Algorithms & Methods]]></category>
		<category><![CDATA[Opinions]]></category>
		<category><![CDATA[Survey News]]></category>
		<category><![CDATA[Tech Talk]]></category>
		<category><![CDATA[GRAV-D]]></category>
		<category><![CDATA[National Geodetic Survey]]></category>

		<guid isPermaLink="false">http://www.gpsworld.com/?p=497</guid>
		<description><![CDATA[By Theresa Diehl The National Geodetic Survey (NGS) has issued a “Kinematic GPS Challenge” to the community in support of NGS’ airborne gravity data collection program, called Gravity for the Redefinition of the American Vertical Datum (GRAV-D). The “Challenge” is meant to provide a unique benchmarking opportunity for the kinematic GPS community by making available [...]]]></description>
				<content:encoded><![CDATA[<p><em>By Theresa Diehl</em></p>
<p>The National Geodetic Survey (NGS) has issued a “Kinematic GPS Challenge” to the community in support of NGS’ airborne gravity data collection program, called Gravity for the Redefinition of the American Vertical Datum (<a href="http://www.ngs.noaa.gov/GRAV-D">GRAV-D</a>). The “Challenge” is meant to provide a unique benchmarking opportunity for the kinematic GPS community by making available two flights of data from GRAV-D’s airborne program for their processing. By comparing the gravity products that are derived from a wide variety of kinematic GPS processing products, a unique quality assessment is possible.</p>
<p>GRAV-D has made available two flights over three data lines (one line was flown twice) from the Louisiana 2008 survey. For more information on the announcement of the Challenge and descriptions of the data provided, see <a href="http://www.gpsworld.com/tech-talk-blog/the-kinematic-gps-challenge-supporting-airborne-gravimetry-missions-12350">Gerald Mader’s blog on November 29, 2011</a>. The GRAV-D program routinely operates at long baselines (up to 600 km), high altitudes (20,000 to 35,000 ft), and high speeds (up to 280 knots): a challenging data set from a GPS perspective. As of December 2011, ten groups of kinematic GPS processors have provided a total of sixteen position solutions for each flight. At two data lines per flight, this yielded 64 total position solutions. Only a portion of the December 2011 data is discussed here, but all test results will be available when the <a href="http://www.ngs.noaa.gov/GRAV-D/gpschallenge.shtml">Challenge website</a> is completed.</p>
<p>Why use airborne gravity to investigate the quality of kinematic GPS processing solutions? Because the gravity measurement itself is an acceleration, recorded by a sensor on a moving platform, inside a moving aircraft, in a rotating reference frame (the Earth). The gravity results are completely reliant on our ability to calculate the motion of the aircraft: position, velocity, and acceleration. These values are used in several corrections that must be applied to the raw gravimeter measurement in order to recover a gravity value (Table 1). The corrections in Table 1 are simplified to assume that the GPS antenna and gravimeter sensor are co-located horizontally and offset vertically by a constant, known distance.</p>
<p><img src="http://www.gpsworld.com/files/gpsworld/nodes/2012/12754/table1." alt="" /><br />
<em><strong>Table 1.</strong> GPS-Derived Values that are used in the Calculation of Free-Air Gravity Disturbances</em></p>
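<p>Table 1 is provided as an image. As an illustration of how GPS-derived velocity feeds into such corrections, the following is a minimal sketch of the classic Eötvös correction, one of the standard motion corrections in airborne gravimetry. This is not the NGS processing code; the function name and the simple spherical-Earth constants are illustrative.</p>

```python
import math

OMEGA = 7.2921e-5    # Earth rotation rate (rad/s)
R_EARTH = 6371000.0  # mean Earth radius (m)
MS2_TO_MGAL = 1e5    # 1 m/s^2 = 100,000 mGal

def eotvos_correction(v_north, v_east, lat_deg):
    """Eotvos correction (mGal) for a sensor moving over the rotating Earth.

    v_north, v_east: GPS-derived velocity components (m/s);
    lat_deg: geodetic latitude (degrees).
    """
    lat = math.radians(lat_deg)
    # Coriolis-type term from eastward motion plus a centripetal term
    e = 2.0 * OMEGA * v_east * math.cos(lat) + (v_north**2 + v_east**2) / R_EARTH
    return e * MS2_TO_MGAL

# 280 knots due east at 30 degrees N -- the correction is on the order of
# 2,000 mGal, so small GPS velocity errors translate into large gravity errors.
print(round(eotvos_correction(0.0, 280 * 0.514444, 30.0), 1))
```

<p>At GRAV-D survey speeds this single correction reaches a few thousand mGal, which underlines why the quality of the kinematic GPS solution dominates the recoverable gravity signal.</p>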
<p>All Challenge solutions are presented anonymously here, with f## designations. For each flight of data, the software that made the f01 solution is the same as for f16, f02 the same as f17, and so on.</p>
<p><strong>Test #1: Are the solutions precise and accurate?</strong></p>
<p>The first Challenge test compares each free-air gravity result against the unweighted average of all the results, here called the ensemble average solution (Figure 1). This comparison highlights any GPS solutions whose gravity result is significantly different from the others, and groups together solutions that are similar to one another (precise). Precision is easy to test this way, but in order to tell which gravity results are accurate calculations of the gravity field, a “truth” solution is necessary. So, the Challenge data are also plotted alongside data from a global gravity model (<a href="http://earth-info.nga.mil/GandG/wgs84/gravitymod/egm2008/anomalies_dov.html">EGM08</a>) that is reliable, though not perfect, in this area.</p>
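<p>The ensemble comparison is straightforward to reproduce. Below is a sketch, assuming all solutions have been resampled onto a common along-track grid; the names and data layout are hypothetical, not the Challenge's actual processing scripts.</p>

```python
import numpy as np

def ensemble_stats(solutions):
    """solutions: dict of name -> 1-D array of free-air disturbances (mGal),
    all resampled onto the same along-track grid."""
    stack = np.vstack(list(solutions.values()))
    ensemble = stack.mean(axis=0)                       # unweighted average
    diffs = {k: v - ensemble for k, v in solutions.items()}
    two_sigma = {k: 2.0 * d.std() for k, d in diffs.items()}
    return ensemble, diffs, two_sigma

# Hypothetical two-solution example on a three-point grid:
sols = {"f01": np.array([1.0, 2.0, 3.0]), "f02": np.array([3.0, 2.0, 1.0])}
ens, diffs, two_sigma = ensemble_stats(sols)
```

<p>A solution then meets the GRAV-D precision criterion when its 2-standard-deviation difference from the ensemble stays within +/- 1 mGal.</p>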
<p>Figure 1 shows two of the four data lines processed for the Challenge; these two data lines are actually the same planned data line, which was reflown (F15 L206, flight 15 Line 206) due to poor quality on the first pass (F06 L106, flight 6 Line 106). The 5-10 mGal amplitude spikes of medium frequency along L106 are due to turbulence experienced by the aircraft, turbulence that the GPS and gravity processing could not remove from the gravity signal.</p>
<p><img src="http://www.gpsworld.com/files/gpsworld/nodes/2012/12754/Fig1.jpg" alt="" /><br />
<em><strong>Figure 1</strong></em>.</p>
<p><img src="http://www.gpsworld.com/files/gpsworld/nodes/2012/12754/Fig3.jpg" alt="" /><br />
<em><strong>Figure 2</strong></em>.</p>
<p><em>Data from Flight 6, Line 106 (F06 L106, top) and Flight 15, Line 206 (F15, L206, bottom) for all Challenge solutions (anonymously labeled with f## designators). <strong>Figures 1 and 2.</strong> Comparison of Challenge free-air gravity disturbances (FAD) to the ensemble average gravity disturbance (dotted black line) and comparison to a reliable global gravity model, EGM08 (dotted red line). </em></p>
<p><img src="http://www.gpsworld.com/files/gpsworld/nodes/2012/12754/Fig2_1.jpg" alt="" /><br />
<strong><em>Figure 3.</em></strong></p>
<p><img src="http://www.gpsworld.com/files/gpsworld/nodes/2012/12754/Fig4.jpg" alt="" /><br />
<strong><em>Figure 4.</em></strong></p>
<p><em><strong>Figures 3 and 4.</strong> Difference between the individual Challenge gravity disturbances and the ensemble average. The thin black lines mark the 2-standard deviation levels for the differences. For F15 L206, one solution (f23) was removed from the difference plot and statistics because it was an outlier. For both lines, the ensemble’s difference with EGM08 is not plotted because it is too large to fit easily on the plot.</em></p>
<p>&nbsp;</p>
<p>The results of test #1 are surprising in several ways:</p>
<ul>
<li>The data using the PPP technique (precise point positioning, which uses no base station data) and the data using the differential technique (which uses base stations) produce equivalent gravity results; the differences between the two methods are virtually indistinguishable.</li>
<li>There was one outlier solution (f23) that was removed from the difference plots and is still under investigation. Also, on F15 L206, solution f28 had an unusually large difference from the average, though it performed predictably on the other lines. Of the remaining solutions, four stand out as the most different from all the others: f03/f18, f04/f19, f05/f20, and f07/f22.</li>
<li>The solutions on the difference plots (right panels) cluster closely together, with 2-standard deviation values shown as thin horizontal lines on the plots. The Challenge solutions meet the precision requirements for the GRAV-D program: +/- 1 mGal for 2-standard deviations.</li>
<li>However, the large differences between the Challenge gravity solutions and the EGM08 “truth” gravity (left panels) mean that none of the solutions come close to meeting the GRAV-D accuracy requirement, which is the more important criterion for this exercise.</li>
</ul>
<p><strong>Test #2: Does adding inertial measurements to the position solution improve results?</strong></p>
<p>NGS operates an inertial measurement unit (IMU) on the aircraft for all survey flights. The IMU records the aircraft’s orientation (pitch, roll, yaw, and heading). Including the orientation information in the calculation of the position solution should yield a better position solution than GPS-only calculations, but it was not expected to be significantly better. Figure 2 shows the NGS best loosely-coupled GPS/IMU free-air gravity result versus the Challenge GPS-only results and Table 2 shows the related statistics.</p>
<p><img src="http://www.gpsworld.com/files/gpsworld/nodes/2012/12754/Fig5.jpg" alt="" /><br />
<em><strong>Figure 5.</strong></em></p>
<p><img src="http://www.gpsworld.com/files/gpsworld/nodes/2012/12754/Fig6.jpg" alt="" /><br />
<strong><em>Figure 6.</em></strong></p>
<p><em><strong>Figures 5 and 6.</strong> F06 L105. (Figure 5) Comparison of Challenge FAD gravity solutions (ensemble=black dotted line) with EGM08 (red dotted line); (Figure 6) comparison of Challenge gravity solutions (all GPS-only; ensemble=black dotted line) with NGS’ coupled GPS/IMU gravity solution (red dotted line).</em></p>
<p><img src="http://www.gpsworld.com/files/gpsworld/nodes/2012/12754/table2.jpg" alt="" /><br />
<em><strong>Table 2.</strong> Statistics for Comparison of GPS-only Challenge Ensemble Gravity and NGS GPS/IMU Gravity</em>.</p>
<p>&nbsp;</p>
<p>For all data lines, the GPS/IMU solution matches the EGM08 “truth” gravity solution more closely than any of the Challenge GPS-only solutions. In fact, the more motion the aircraft experiences, the more adding IMU information improves the solution. One conclusion from this test is that coupling IMU data with GPS data is a requirement, not an option, for obtaining the best free-air gravity solutions.</p>
<p><strong>Additional Testing and Future Research</strong></p>
<p>Other testing has already been completed on the Challenge data and the results will be available on the Challenge website soon. Important results are:</p>
<ul>
<li>Two Challenge participants’ solutions perform better than the rest, two perform worse, and one is a low quality outlier. The reasons for these differences are still under investigation.</li>
<li>A small-magnitude sawtooth pattern in the latitude-based gravity correction (normal gravity correction) is the result of a periodic clock reset in the Trimble GPS unit in the aircraft. This clock reset is uncorrected in the majority of Challenge solutions. It causes an instantaneous small change in apparent position, which produces a spurious 1-2 mGal spike in the gravity tilt correction at each epoch with a clock reset.</li>
<li>There are significant differences, as noted by Gerry Mader, in the ellipsoidal heights of the Challenge solutions and the differences result in unusual patterns and magnitude differences in the free-air gravity correction.</li>
</ul>
<p>In order to further explore these Challenge results, IMU data will be released to the GPS Challenge participants in the spring of 2012 and GPS/IMU coupled solutions solicited in return. Additionally, basic information about the Challenge participants’ software and calculation methodologies will be collected and will form the basis of the benchmarking study.</p>
<p>We will still accept new Challenge participants through the end of February, when we will close participation in order to complete final analyses. Please contact Theresa Diehl and visit the <a href="http://www.ngs.noaa.gov/GRAV-D/gpschallenge.shtml">Challenge website</a> for data if you’re interested in participating.</p>
]]></content:encoded>
			<wfw:commentRss>http://www.gpsworld.com/the-kinematic-gps-challenge-first-gravity-comparison-results-2/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>A Comparison of Lidar and Camera-Based Lane Detection Systems</title>
		<link>http://www.gpsworld.com/a-comparison-of-lidar-and-camera-based-lane-detection-systems/?utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=a-comparison-of-lidar-and-camera-based-lane-detection-systems</link>
		<comments>http://www.gpsworld.com/a-comparison-of-lidar-and-camera-based-lane-detection-systems/#comments</comments>
		<pubDate>Fri, 03 Feb 2012 17:55:07 +0000</pubDate>
		<dc:creator>GPS World staff</dc:creator>
				<category><![CDATA[GNSS]]></category>
		<category><![CDATA[Tech Talk]]></category>
		<category><![CDATA[highway fatalities]]></category>
		<category><![CDATA[LiDAR]]></category>

		<guid isPermaLink="false">http://www.gpsworld.com/?p=499</guid>
		<description><![CDATA[By Jordan Britt, David Bevly, and Christopher Rose Nearly half of all highway fatalities occur from unintended lane departures, which comprise approximately 20,000 deaths annually in the United States.  Studies have shown great promise in reducing unintended lane departures by alerting the driver when they are drifting out of the lane. At the core of [...]]]></description>
				<content:encoded><![CDATA[<p><em>By Jordan Britt, David Bevly, and Christopher Rose</em></p>
<p>Nearly half of all highway fatalities occur from unintended lane departures, which account for approximately 20,000 deaths annually in the United States. Studies have shown great promise in reducing unintended lane departures by alerting drivers when they are drifting out of the lane. At the core of these systems is a lane detection method typically based around the use of a vision sensor, such as a lidar (light detection and ranging) or a camera, which attempts to detect the lane markings and determine the position of the vehicle in the lane. Lidar-based lane detection attempts to detect the lane markings based on an increase in reflectivity of the lane markings when compared to the road surface reflectivity. Cameras, however, attempt to detect lane markings by detecting the edges of the lane markings in the image. This project seeks to compare two different lane detection techniques: one using a lidar and the other using a camera. Specifically, this project will analyze the two sensors’ ability to detect lane markings in varying weather scenarios, assess which sensor is best suited for lane detection, and determine scenarios where a camera or a lidar is better suited, so that some optimal blending of the two sensors can improve the estimate of the position of the vehicle over a single sensor.</p>
<p><strong>Lidar-based lane detection</strong></p>
<p>The specific lidar-based lane detection algorithm for this project is based on fitting an ideal lane model to actual road data, where the ideal lane model is updated with each lidar scan to reflect the current road conditions. Ideally, a lane takes on a profile similar to the average of 100 lidar reflectivity scans seen in Figure 1, with the corresponding road segment.<br />
<em>Figure 1. Lidar reflectivity scan with corresponding lane markings</em>.</p>
<p>Note that this profile has a relatively constant area bordered by peaks in the data, where the peaks represent the lane markings and the constant area represents the surface of the road.  An ideal lane model is generated with each lidar scan to mimic this averaged data, where averaging the reflectivity directly in front of the vehicle generates the constant portion and increasing the average road surface reflectivity by 75 percent mimics the lane markings.  This model is then stretched over a range from some minimum expected lane width to some maximum expected lane width, and the minimum RMSE between the ideal lane and the lidar data is assumed to mark the area where the lane resides. For additional information on this method, see Britt, Rose &amp; Bevly, September 2011.</p>
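<p>The template-matching idea described above can be sketched in a few lines. This is a simplified stand-in for the actual algorithm: the 75 percent reflectivity boost comes from the text, while the brute-force search and array layout are assumptions for illustration.</p>

```python
import numpy as np

def fit_lane(reflectivity, min_w, max_w, boost=1.75):
    """Slide a synthetic lane template over one lidar reflectivity scan and
    return the (start index, width) that minimises the RMSE."""
    road = reflectivity.mean()          # flat road-surface level of the template
    best = (None, None, np.inf)
    for w in range(min_w, max_w + 1):
        template = np.full(w, road)
        template[0] = template[-1] = boost * road   # lane-marking peaks
        for i in range(len(reflectivity) - w + 1):
            rmse = np.sqrt(np.mean((reflectivity[i:i + w] - template) ** 2))
            if rmse < best[2]:
                best = (i, w, rmse)
    return best[:2]

# Synthetic scan: flat road with two markings ten samples apart
scan = np.ones(30)
scan[10] = scan[19] = 1.75
```

<p>On the synthetic scan, the minimum-RMSE fit lands on the window whose ends coincide with the two reflectivity peaks.</p>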
<p><strong>Camera-based lane detection</strong></p>
<p>The camera-based method for this project was built in-house and uses line extraction techniques from the image to detect lane markings and calculate a lateral distance from a second-order polynomial model for the lane marking in image space. A threshold is chosen from the histogram of the image to compensate for differences in lighting, weather, or other non-ideal scenarios for extracting the lane markings. The thresholding operation converts the image into a binary image, which is followed by Canny edge detection. The Hough transform is then used to extract the lines from the image, fill in holes in the lane marking edges, and exclude erroneous edges. Using the slope of the lines, the lines are divided into left or right lane markings. Two criteria, based on the assumption that the lane markings do not move significantly within the image from frame to frame, are used to further exclude non-lane-marking lines in the image. The first test checks that the slope of the line is within a threshold of the slope of the near region of the last frame’s second-order polynomial model. The second test uses boundary lines from the last frame’s second-order polynomial to exclude lines that are not near the current estimate of the polynomial. Second-order polynomial interpolation is then used on the selected lines’ midpoints and endpoints to determine the coefficients of the polynomial model, and a Kalman filter is used to filter the model to decrease the effect of erroneous polynomial coefficient estimates. Finally, the lateral distance is calculated using the polynomial model on the lowest measurable row of the image (for greater resolution) and a real-distance-to-pixel factor. For more information on this camera-based method, see Britt, et al.</p>
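<p>The two frame-to-frame gating criteria can be sketched as follows. The threshold values and the model convention (x expressed as a second-order polynomial in the image row y) are assumptions for illustration, not the actual in-house parameters.</p>

```python
def keep_line(line, prev_poly, slope_tol=0.3, dist_tol=25.0):
    """Decide whether a candidate Hough line is a plausible lane-marking line.

    line: ((x0, y0), (x1, y1)) endpoints in image coordinates;
    prev_poly: (a, b, c) of last frame's model x = a*y**2 + b*y + c.
    """
    (x0, y0), (x1, y1) = line
    if y1 == y0:
        return False                          # horizontal edges are not lane lines
    slope = (x1 - x0) / (y1 - y0)             # dx/dy, to match the model below
    a, b, c = prev_poly
    y_near = max(y0, y1)                      # image row nearest the vehicle
    prev_slope = 2 * a * y_near + b           # dx/dy of last frame's polynomial
    if abs(slope - prev_slope) > slope_tol:
        return False                          # criterion 1: slope gate
    x_model = a * y_near**2 + b * y_near + c
    x_line = x0 if y0 > y1 else x1            # line x at the near row
    return abs(x_line - x_model) <= dist_tol  # criterion 2: boundary gate
```

<p>Lines surviving both gates would then feed the polynomial fit and the Kalman filter described above.</p>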
<p><em></em><br />
<em>Figure 2. Camera-based lane detection (green: detected lanes, blue: extracted lane lines, red: rejected lines).</em></p>
<p><strong>Testing</strong></p>
<p>Testing was performed at the NCAT (National Center for Asphalt Technology) in Opelika, Alabama, as seen in Figure 3.  This test track is very representative of highway driving and consists of two lanes bordered by solid lane markings and divided by dashed lane markings.  The 1.7-mile track is divided into 200-foot segments of differing types of asphalt with some areas of missing lane markings and other areas where the lanes are additionally divided by patches of different types and colors of asphalt.</p>
<p>&nbsp;</p>
<p><em></em><br />
<em>Figure 3. NCAT Test Facility in Opelika, Alabama.</em></p>
<p>A precision survey of each lane marking of the test track, as well as precise vehicle positions from RTK GPS, was used in order to have a highly accurate measurement of the ability of the lidar and camera to determine the position of the vehicle in the lane. Testing occurred only on the straights, and performance was analyzed on the ability of the lidar and camera to determine the position of the lane using the metrics of mean absolute error (MAE), mean square error (MSE), standard deviation of error (σ<sub>error</sub>), and detection rate. The specific scenarios analyzed included varying speeds, varying lighting conditions (noon and dusk/dawn), rain, and oncoming traffic. Table 1 summarizes the results for these scenarios. For additional results, please see Britt, Rose &amp; Bevly, September 2011.</p>
<table border="1" cellspacing="0" cellpadding="0">
<tbody>
<tr>
<td></td>
<td>
<p align="center"><strong>Scenario</strong></p>
</td>
<td>
<p align="center"><strong>MAE(m)</strong></p>
</td>
<td>
<p align="center"><strong>MSE(m)</strong></p>
</td>
<td>
<p align="center"><strong>σ­<sub>error </sub>(m)</strong></p>
</td>
<td>
<p align="center"><strong>%Det</strong></p>
</td>
</tr>
<tr>
<td>
<p align="center">Lidar</p>
</td>
<td>
<p align="center">Noon Weaving</p>
</td>
<td>
<p align="center">0.1818</p>
</td>
<td>
<p align="center">0.1108</p>
</td>
<td>
<p align="center">0.3076</p>
</td>
<td>
<p align="center">98</p>
</td>
</tr>
<tr>
<td>
<p align="center">Camera</p>
</td>
<td>
<p align="center">Noon Weaving</p>
</td>
<td>
<p align="center">0.1077</p>
</td>
<td>
<p align="center">0.0511</p>
</td>
<td>
<p align="center">0.2246</p>
</td>
<td>
<p align="center">80</p>
</td>
</tr>
<tr>
<td>
<p align="center">Lidar</p>
</td>
<td>
<p align="center">Dusk 45mph</p>
</td>
<td>
<p align="center">0.0967</p>
</td>
<td>
<p align="center">0.0176</p>
</td>
<td>
<p align="center">0.1245</p>
</td>
<td>
<p align="center">100</p>
</td>
</tr>
<tr>
<td>
<p align="center">Camera</p>
</td>
<td>
<p align="center">Dusk 45mph</p>
</td>
<td>
<p align="center">0.2021</p>
</td>
<td>
<p align="center">0.0592</p>
</td>
<td>
<p align="center">0.2433</p>
</td>
<td>
<p align="center">57</p>
</td>
</tr>
<tr>
<td>
<p align="center">Lidar</p>
</td>
<td>
<p align="center">Medium Rain</p>
</td>
<td>
<p align="center">0.1046</p>
</td>
<td>
<p align="center">0.0177</p>
</td>
<td>
<p align="center">0.1314</p>
</td>
<td>
<p align="center">65</p>
</td>
</tr>
<tr>
<td>
<p align="center">Camera</p>
</td>
<td>
<p align="center">Medium Rain</p>
</td>
<td>
<p align="center">0.0885</p>
</td>
<td>
<p align="center">0.0101</p>
</td>
<td>
<p align="center">0.0635</p>
</td>
<td>
<p align="center">91</p>
</td>
</tr>
<tr>
<td>
<p align="center">Lidar</p>
</td>
<td>
<p align="center">Low Beam, Night</p>
</td>
<td>
<p align="center">0.0966</p>
</td>
<td>
<p align="center">0.0159</p>
</td>
<td>
<p align="center">0.1215</p>
</td>
<td>
<p align="center">99</p>
</td>
</tr>
<tr>
<td>
<p align="center">Camera</p>
</td>
<td>
<p align="center">Low Beam, Night</p>
</td>
<td>
<p align="center">0.1182</p>
</td>
<td>
<p align="center">0.0185</p>
</td>
<td>
<p align="center">0.0762</p>
</td>
<td>
<p align="center">84</p>
</td>
</tr>
</tbody>
</table>
<p><em>Table 1. Lidar and camera results for various environments.</em></p>
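<p>For reference, the four metrics in Table 1 can be computed from the per-frame lateral errors as follows. This is a generic sketch, not the evaluation code used in the study.</p>

```python
import math

def lane_metrics(errors, n_frames):
    """errors: lateral-error samples (m) for frames where the lane was detected;
    n_frames: total number of frames driven."""
    n = len(errors)
    mae = sum(abs(e) for e in errors) / n            # mean absolute error (m)
    mse = sum(e * e for e in errors) / n             # mean square error (m^2)
    mean = sum(errors) / n
    sigma = math.sqrt(sum((e - mean) ** 2 for e in errors) / n)
    det = 100.0 * n / n_frames                       # detection rate (%)
    return mae, mse, sigma, det
```

<p>Note that the mean square error carries units of m², and the detection rate is simply the fraction of frames for which a lane solution was reported.</p>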
<p>Additional testing on the effects of oncoming traffic at night was conducted by parking a vehicle on the test track at a known location with its headlights on. Figure 4 shows the lateral error with respect to closing distance, where a positive closing distance indicates driving toward the parked vehicle and a negative closing distance indicates driving away from it. Note that the camera does not report a solution at -200 m, which is due to track conditions and not the parked vehicle.</p>
<p><em><br />
Figure 4. Error vs. Closing Distance.</em></p>
<p>Based on these findings, it appears that the camera provided slightly more accurate measurements than the lidar, though with a lower detection rate. Additionally, the camera performed well in the rain, where the lidar experienced decreased detection rates.</p>
<p><strong>References</strong></p>
<p>Frank S. Barickman. Lane departure warning system research and test development. Transportation Research Center Inc., (07-0495), 2007.</p>
<p>J. Kibbel, W. Justus, and K. Fürstenberg, “Lane estimation and departure warning using multilayer laserscanner,” in Proc. IEEE Intelligent Transportation Systems, September 13–15, 2005, pp. 607–611.</p>
<p>P. Lindner, E. Richter, G. Wanielik, K. Takagi, and A. Isogai, “Multi-channel lidar processing for lane detection and estimation,” in Proc. 12th International IEEE Conference on Intelligent Transportation Systems (ITSC &#8217;09), October 4–7, 2009, pp. 1–6.</p>
<p>K. Dietmayer, N. Kämpchen, K. Fürstenberg, J. Kibbel, W. Justus, and R. Schulz. Advanced Microsystems for Automotive Applications 2005. Heidelberg, 2005.</p>
<p>C. R. Jung and C. R. Kelber, “A lane departure warning system based on a linear-parabolic lane model,” in Proc. IEEE Intelligent Vehicles Symp, 2004, pp. 891–895.</p>
<p>C. Jung and C. Kelber, “A lane departure warning system using lateral offset with uncalibrated camera,” in Proc. IEEE Intelligent Transportation Systems, Sept. 2005, pp. 102–107.</p>
<p>A. Takahashi and Y. Ninomiya, “Model-based lane recognition,” in Proc. IEEE Intelligent Vehicles Symp., 1996, pp. 201–206.</p>
<p>Jordan Britt, C. Rose, &amp; D. Bevly, &#8220;A Comparative Study of Lidar and Camera-based Lane Departure Warning Systems,&#8221; <em>Proceedings of ION GNSS 2011</em>, Portland, OR, September 2011.</p>
]]></content:encoded>
			<wfw:commentRss>http://www.gpsworld.com/a-comparison-of-lidar-and-camera-based-lane-detection-systems/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>On-Site Geo-Referencing of 3D Static Terrestrial Laser Scans</title>
		<link>http://www.gpsworld.com/on-site-geo-referencing-of-3d-static-terrestrial-laser-scans/?utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=on-site-geo-referencing-of-3d-static-terrestrial-laser-scans</link>
		<comments>http://www.gpsworld.com/on-site-geo-referencing-of-3d-static-terrestrial-laser-scans/#comments</comments>
		<pubDate>Wed, 29 Jun 2011 17:59:26 +0000</pubDate>
		<dc:creator>GPS World staff</dc:creator>
				<category><![CDATA[Augmentation & Assistance]]></category>
		<category><![CDATA[Tech Talk]]></category>
		<category><![CDATA[geo-referencing]]></category>
		<category><![CDATA[Jens-André Paffenholz]]></category>

		<guid isPermaLink="false">http://www.gpsworld.com/?p=501</guid>
		<description><![CDATA[By Jens-André Paffenholz This blog presents an efficient procedure for directly geo-referencing static 3D laser scans. This is a worthwhile way to obtain the required transformation parameters from the local sensor-defined coordinate system to a global system. To this end, a multi-sensor system (MSS) is designed with a phase-measuring laser scanner and 3D positional sensors (see Figure 1). [...]]]></description>
				<content:encoded><![CDATA[<p><em>By Jens-André Paffenholz</em></p>
<p>This blog presents an efficient procedure for directly geo-referencing static 3D laser scans. It is a worthwhile way to obtain the required transformation parameters from the local, sensor-defined coordinate system to a global system. To this end, a multi-sensor system (MSS) is designed with a phase-measuring laser scanner and 3D positional sensors (see Figure 1). By means of at least one eccentrically mounted GNSS antenna on top of the rotating laser scanner, one obtains a 3D trajectory of the antenna reference point (ARP). The analysis of the resulting trajectory within a recursive state-space filtering approach (e.g., a Kalman filter) yields the transformation parameters (position and orientation) and their full variance-covariance matrix. Apart from the geo-referencing of single laser scans, the propagation of the transformation parameter variances to the point clouds is possible. Moreover, the direct geo-referencing results can be further improved by matching algorithms (such as the Iterative Closest Point (ICP) algorithm) that take into account the stochastic information of each single 3D point.</p>
<p><img src="http://www.gpsworld.com/wp-content/uploads/2012/08/Fig-1.jpg" alt="" width="396" height="358" /><br />
<em>Figure 1. Sketch of the MSS (at the Geodetic Institute of the Leibniz Universität Hannover) composed of a phase-measuring laser scanner, GNSS equipment and two single-axis inclinometers.</em></p>
<p>&nbsp;</p>
<p><strong>Overview of the sensors, their specifications, and the algorithm for transformation parameter estimation</strong></p>
<p>The main characteristic of the terrestrial laser scanning (TLS) technique for engineering geodesy is the immediate data acquisition in 3D space. This is realised with a high spatial resolution (a few millimetres for mean distances of approx. 25 m), as well as with a very high frequency (up to 50 profiles per second) in a relative or local sensor-defined coordinate system. The TLS technique can be used in a static or a kinematic mode. Static scanning is characterised by one single fixed translation and orientation of the laser scanner in relation to an absolute or global coordinate system. For kinematic scanning, where the data acquisition is commonly reduced to 2D profiles, the translations and orientations are time-dependent. Hence, the transformation parameters for each profile are different in relation to each other as well as to an absolute or global coordinate system. When a combination of several static scans from different stations into one coordinate system (registration) is required, the transformation parameters for each scan have to be determined. For an additional link to an absolute or global coordinate system (geo-referencing), typically control points in a known geodetic datum are necessary. By the direct observation of the required transformation parameters by means of GNSS equipment and arbitrary navigation sensors, one can solve the registration and geo-referencing in one single step without the need of additional control points.</p>
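<p>Once the transformation parameters are estimated, applying them to a scan is a rigid-body transformation. Below is a minimal sketch, assuming a levelled scanner so that only the azimuthal rotation and the translation vector are needed; the numpy point-cloud layout is an assumption for illustration.</p>

```python
import numpy as np

def georeference(points, t, azimuth):
    """Transform an (N, 3) local scanner point cloud into the global frame.

    t: estimated translation (3,); azimuth: estimated azimuthal orientation
    (rad), i.e. a rotation about the vertical axis of a levelled scanner.
    """
    c, s = np.cos(azimuth), np.sin(azimuth)
    Rz = np.array([[c, -s, 0.0],
                   [s,  c, 0.0],
                   [0.0, 0.0, 1.0]])
    return points @ Rz.T + t
```

<p>With the full variance-covariance matrix of the parameters, the same linear relation also lets the parameter uncertainties be propagated to each transformed point.</p>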
<p>At the present developmental stage of the MSS (at the Geodetic Institute of the Leibniz Universität Hannover), it is composed of a phase-measuring laser scanner, one eccentrically mounted GNSS antenna and two inclinometers on top of the rotating laser scanner (cf. Figure 1). Hereby, the horizontal rotation of the laser scanner of at least 360 degrees is suitable to derive the position as well as the azimuthal orientation of the laser scanner.</p>
<p>Currently, the GNSS data processing is done in post-processing. In general, real-time processing is possible within the proposed geo-referencing procedure, but its practicability has to be investigated in the future, because higher variances are expected for the trajectory points of the ARP. The short, high-rate trajectory of the ARP makes the GNSS analysis a challenging problem: the overall duration is about 15 min with up to a 20 Hz data rate. In this approach, the alternating antenna orientation with respect to an Earth-centred, Earth-fixed coordinate system contributes to the error budget due to the right-hand circular polarisation of the satellite signals and the azimuthally varying phase centre corrections (PCC). In addition, near-field effects caused by the antenna adaption (made from aluminium) on the laser scanner, or possibly multipath from the vicinity surrounding the scanner, may contribute to the error budget. Investigations of these GNSS-related errors show no significant impact of the used antenna adaption within a double-difference analysis in the observation domain. As expected, the rotated PCC against the original PCC has an effect of up to 5 mm in the observation domain, which corresponds to the horizontal offset components of the used GNSS antenna. The analysis in the coordinate domain also indicates an effect of up to 5 mm and shows that the PCC effect is dominated by the phase centre offset components. One can conclude that, within the currently applied epoch-wise GNSS analysis, the effect of rotated PCC has no significant impact on the transformation parameters in the geo-referencing procedure. For further details about these investigations, please refer to Paffenholz et al. (2011).</p>
<p>The analysis of the 3D ARP trajectory (cf. Figure 2) is performed within an adaptive extended Kalman filter (aEKF). This yields the transformation parameters (position and orientation) along with their full variance-covariance matrix. The benefits of using a closed-form algorithm on the basis of a Kalman filter (KF) are the following: firstly, the KF allows real-time data processing, and secondly, the parameter estimation is less sensitive to outliers. To deal with non-linearities in the system and measurement equations, an extended KF (EKF) is used to estimate the transformation parameters of the MSS. Another promising approach for non-linear state estimation is the combination of sequential Monte Carlo filtering (also known as particle filtering) and an EKF, proposed by Alkhatib et al. (2011); its main benefit is better performance in the case of highly non-linear state-space equations. The dynamic model of the EKF can be improved by augmenting it with adaptive parameters, which are time-invariant and system-specific with well-known initial values. For further details please refer to Paffenholz et al. (2010).</p>
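<p>To make the filtering idea concrete, the circular antenna motion can be tracked with a plain EKF whose state comprises the scanner centre, the azimuth angle, and the angular rate. This is only a minimal sketch, not the authors' aEKF: the adaptive parameters, the 3D case, and the inclinometer data are omitted, and all names and noise values are illustrative.</p>

```python
import numpy as np

def ekf_circle(zs, r, dt, sigma_z=0.005):
    """Track the eccentric antenna positions zs (N x 2, lever arm r)
    with an EKF; state = [centre_x, centre_y, azimuth, angular rate]."""
    x = np.array([zs[0, 0] - r, zs[0, 1], 0.0, 2 * np.pi / (len(zs) * dt)])
    P = np.diag([1.0, 1.0, 1.0, 0.1])              # initial uncertainty
    Q = np.diag([1e-8, 1e-8, 1e-6, 1e-8])          # process noise (illustrative)
    R = np.eye(2) * sigma_z**2                     # GNSS position noise
    F = np.eye(4); F[2, 3] = dt                    # constant angular rate model
    for z in zs:
        x = F @ x                                  # predict
        P = F @ P @ F.T + Q
        cx, cy, th, _ = x
        # measurement model: antenna = centre + lever arm on the circle
        h = np.array([cx + r * np.cos(th), cy + r * np.sin(th)])
        H = np.array([[1.0, 0.0, -r * np.sin(th), 0.0],
                      [0.0, 1.0,  r * np.cos(th), 0.0]])
        K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
        x = x + K @ (z - h)                        # update
        P = (np.eye(4) - K @ H) @ P
    return x, P
```

<p>Feeding the filter one simulated antenna revolution recovers the scanner centre, while the azimuth state tracks the rotation; the returned covariance P plays the role of the variance-covariance matrix of the transformation parameters mentioned above.</p>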
<p><img src="http://www.gpsworld.com/wp-content/uploads/2012/08/Fig-2.jpg" alt="" width="540" height="307" /><br />
<em>Figure 2. Sample ARP trajectory of a 360-degree rotation of the laser scanner around its vertical axis. Red indicates the original 10 Hz measurements with a Javad Delta GNSS receiver and Javad GrAnt-G3T antenna. Blue and green indicate the predicted and filtered trajectory within the aEKF approach, respectively.</em></p>
<p><strong>Performing the direct geo-referencing by applying the transformation parameters and calculation of the uncertainty measures of the 3D point cloud</strong></p>
<p>The final step of the proposed direct geo-referencing procedure is to apply the transformation parameters (the translation vector and at least the azimuthal orientation) to the 3D point cloud. The three spatial rotation parameters can be reduced to the azimuthal orientation if the sensor is sufficiently aligned with the direction of gravity. The left part of Figure 3 shows the transformation result from the local, sensor-defined to an absolute coordinate system for two 3D point clouds, each from a different static scanner station (red and blue). The radial distance between the scanner and the object is 15 m and 20 m, respectively. It is obvious that the two geo-referenced point clouds have a slight misalignment of a few centimetres. Given the known absolute coordinates of the pillar on the roof of the building (middle part of the figure), one can conclude that the geo-referencing of the blue point cloud is inaccurate. Moreover, the variances of the transformation parameters for the blue station are higher than those for the red station, which leads to the conclusion that the estimated transformation parameters for the blue station are not reliable. Nevertheless, this direct geo-referencing can serve as an adequate pre-registration for matching algorithms.</p>
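<p>Applying the transformation parameters is a one-line operation once the scanner is levelled: only the rotation about the vertical axis and the translation remain. The following numpy sketch (function name and arguments are illustrative, not from the authors' software) shows this step.</p>

```python
import numpy as np

def georeference(points_local, translation, azimuth):
    """Rotate a levelled local point cloud (N x 3) by the azimuthal
    orientation and shift it by the translation vector. Assumes the
    scanner is aligned with gravity, so only the rotation about the
    vertical axis is needed."""
    c, s = np.cos(azimuth), np.sin(azimuth)
    Rz = np.array([[c, -s, 0.0],
                   [s,  c, 0.0],
                   [0.0, 0.0, 1.0]])
    return points_local @ Rz.T + np.asarray(translation)
```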
<p>To overcome this misalignment, the application of a matching algorithm such as the iterative closest point (ICP) algorithm is worthwhile. The pre-registered 3D point clouds are used as input for the ICP algorithm. Their a-priori alignment (within a few centimetres) is sufficient for the ICP algorithm to find an adequate number of corresponding points for a reliable estimation of the transformation parameters. The ICP result is shown in the right part of Figure 3; one can clearly see that the matching of the two point clouds was successful. A current topic of ongoing research is the consideration of the uncertainties of each point cloud within the ICP algorithm, to further improve the matching results.<br />
<img src="http://www.gpsworld.com/wp-content/uploads/2012/08/Fig-3A.jpg" alt="" width="382" height="455" />     <img src="http://www.gpsworld.com/files/gpsworld/nodes/2011/11941/Fig-3B.jpg" alt="" /><br />
<em>Figure 3. Left: Applied transformation parameters to two scans from different stations (red and blue). Right: Result after running the ICP algorithm on the pre-registered 3D point clouds (shown in the left part of this figure).</em></p>
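<p>The matching step described above can be illustrated with a bare-bones point-to-point ICP: alternately match nearest neighbours and solve the rigid transform in closed form (Kabsch/SVD). This is a sketch only; production implementations add nearest-neighbour acceleration (k-d trees) and outlier rejection, and the authors' variant additionally weights points by their uncertainties.</p>

```python
import numpy as np

def icp(src, dst, iters=20):
    """Point-to-point ICP for two pre-registered (N x 3) point clouds.
    Returns R, t such that src @ R.T + t approximates dst."""
    R, t = np.eye(3), np.zeros(3)
    for _ in range(iters):
        moved = src @ R.T + t
        # brute-force nearest neighbours (a k-d tree would scale better)
        d2 = ((moved[:, None, :] - dst[None, :, :]) ** 2).sum(-1)
        match = dst[d2.argmin(axis=1)]
        mu_s, mu_m = moved.mean(0), match.mean(0)
        U, _, Vt = np.linalg.svd((moved - mu_s).T @ (match - mu_m))
        dR = Vt.T @ U.T
        if np.linalg.det(dR) < 0:      # guard against a reflection solution
            Vt[-1] *= -1
            dR = Vt.T @ U.T
        dt = mu_m - dR @ mu_s
        R, t = dR @ R, dR @ t + dt     # compose with the running transform
    return R, t
```

<p>The pre-registration matters here exactly as stated in the text: the nearest-neighbour step only finds correct correspondences when the initial misalignment is small compared to the point spacing.</p>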
<p>In the current research work, uncertainties for each single point cloud are calculated by variance propagation, combining the uncertainties of the scanner measurements (e.g., manufacturer values for the angle and range measurement accuracy) with the uncertainties of the direct geo-referencing procedure (the variance-covariance matrix of the transformation parameters obtained within the aEKF). As mentioned before, these uncertainties should be considered in the ICP algorithm in ongoing work, to further improve the matching results. Bae et al. (2009) already stated that considering positional uncertainties in point cloud matching is a worthwhile approach to improve both the matching and the interpretation of 3D point clouds. An example of the result of the variance propagation of the scanner and direct geo-referencing uncertainties is illustrated in Figure 4. The figure depicts a stochastic point cloud of the red station (a 3D point cloud similar to that shown in Figure 3). As a measure of the uncertainty, the mean of the coordinate uncertainty, ranging from 5 mm up to 30 mm, is shown.</p>
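<p>A simplified version of this variance propagation for a single point looks as follows: the polar measurement uncertainties (range and two angles) are mapped through the Jacobian of the polar-to-Cartesian conversion, and the covariance of the geo-referencing translation is added. This sketch ignores the correlation with the rotation-parameter uncertainties, which the full procedure would carry along; all names and values are illustrative.</p>

```python
import numpy as np

def point_covariance(d, hz, v, sig_d, sig_ang, cov_georef):
    """3x3 Cartesian covariance of one scanned point from range d,
    horizontal angle hz, elevation angle v, their standard deviations,
    plus the 3x3 covariance of the geo-referencing translation."""
    ch, sh, cv, sv = np.cos(hz), np.sin(hz), np.cos(v), np.sin(v)
    # x = d*cv*ch, y = d*cv*sh, z = d*sv  ->  Jacobian w.r.t. (d, hz, v)
    J = np.array([[cv * ch, -d * cv * sh, -d * sv * ch],
                  [cv * sh,  d * cv * ch, -d * sv * sh],
                  [sv,       0.0,          d * cv]])
    Sig_polar = np.diag([sig_d**2, sig_ang**2, sig_ang**2])
    return J @ Sig_polar @ J.T + cov_georef
```

<p>Note how the angular uncertainty scales with the range d, which is why point uncertainties grow toward the edge of the scan, as visible in Figure 4.</p>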
<p><img src="http://www.gpsworld.com/wp-content/uploads/2012/08/Fig-4_0.jpg" alt="" width="450" height="335" /></p>
<p><em>Figure 4. Stochastic point cloud of red station resulting from variance propagation for the uncertainties of the scanner measurements and the direct geo-referencing procedure. Depicted is the mean of the coordinate uncertainty.</em></p>
<p><strong>Conclusions and Future Work</strong></p>
<p>This article describes an on-site direct geo-referencing of 3D static laser scans by means of tracking the circular motion of the laser scanner around its vertical axis with 3D positioning sensors. The required transformation parameters from the local to an absolute coordinate system are estimated within a Kalman filter approach. The results show a misalignment between two different static laser scanner stations in the range of a few centimetres; nevertheless, this is an adequate pre-registration for matching algorithms. Besides the geo-referencing, the uncertainties of the 3D point clouds are calculated by variance propagation. Future work focuses on the consideration of the stochastic point cloud information within matching algorithms (e.g., ICP) for an optimal fusion of different (pre-)registered point clouds into one solution.</p>
<p><strong>References</strong></p>
<p>Alkhatib, Hamza; Paffenholz, Jens-André; Kutterer, Hansjörg (2011): <em>Sequential Monte Carlo Filtering for nonlinear GNSS trajectories.</em> In: Sneeuw; Novák; Crespi and Sansò (Eds.): <em>Proceedings of the VII Hotine-Marussi Symposium on Mathematical Geodesy, </em>Rome, 6-10 June 2009. International Association of Geodesy (IAG). 1st Edition. Berlin, Heidelberg: Springer, (in press).</p>
<p>Bae, Kwang-Ho; Belton, David; Lichti, Derek D. (2009): <em>A Closed-Form Expression of the Positional Uncertainty for 3D Point Clouds. </em>In IEEE Trans. Pattern Analysis and Machine Intelligence 31 (4), pp. 577–590.</p>
<p>Paffenholz, Jens-André; Kersten, Tobias; Schön, Steffen; Kutterer, Hansjörg (2011): <em>Analysis of the Impact of Rotating GNSS Antennae in Kinematic Terrestrial Applications.</em> In: <em>Proceedings of the FIG Working Week 2011.</em> FIG. Marrakech, published on CD only / also available via <a href="http://www.fig.net" target="_blank">www.fig.net</a>.</p>
<p>Paffenholz, Jens-André; Alkhatib, Hamza; Kutterer, Hansjörg (2010): <em>Direct geo-referencing of a static terrestrial laser scanner. </em>In JAG 4 (3), 115–126.</p>
<hr />
<p><em>Jens-André Paffenholz received his Dipl.-Ing. in Geodesy and Geoinformatics at the Leibniz Universität Hannover. Since 2006 he has been a research assistant, and since 2008 also a PhD candidate, at the Geodetic Institute of the Leibniz Universität Hannover. His current interests are terrestrial laser scanning, industrial measurement systems, and process automation of measurement systems. His present research focus is precise direct geo-referencing in terrestrial laser scanning applications.</em></p>
]]></content:encoded>
			<wfw:commentRss>http://www.gpsworld.com/on-site-geo-referencing-of-3d-static-terrestrial-laser-scans/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>Where Time and Space Meet</title>
		<link>http://www.gpsworld.com/transportationaviationwhere-time-and-space-meet-10708/?utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=transportationaviationwhere-time-and-space-meet-10708</link>
		<comments>http://www.gpsworld.com/transportationaviationwhere-time-and-space-meet-10708/#comments</comments>
		<pubDate>Mon, 01 Nov 2010 21:03:40 +0000</pubDate>
		<dc:creator>GPS World staff</dc:creator>
				<category><![CDATA[Aviation]]></category>
		<category><![CDATA[Tech Talk]]></category>

		<guid isPermaLink="false">http://www.gpsworld.com/transportationaviationwhere-time-and-space-meet-10708/</guid>
		<description><![CDATA[Increasing availability and performance of state-of-the-art navigation sensors motivates the need for a highly accurate reference system commonly referred to as a time-space position information (TSPI) device. The Advanced Navigation Center at the Air Force Institute of Technology is working with the Air Force Flight Test Center to develop a next generation time-space position information (TSPI) system to be used for test and evaluation of modern navigation devices.]]></description>
				<content:encoded><![CDATA[<h4 class="body-first">Sensor Modeling and Sensitivity Analysis for a Next-Generation Time-Space Position Information System</h4>
<p><em>By Mark Smearcheck and Michael Veth, Air Force Institute of Technology</em></p>
<p class="body-first"><span class="drop-cap">I</span>ncreasing availability and performance of state-of-the-art navigation sensors motivates the need for a highly accurate reference system commonly referred to as a time-space position information (TSPI) device. The Advanced Navigation Center at the Air Force Institute of Technology is working with the Air Force Flight Test Center to develop a next generation time-space position information (TSPI) system to be used for test and evaluation of modern navigation devices.</p>
<p class="body">TSPI systems such as the GPS Aided Inertial Navigation Reference (GAINR) or Advanced Range Data System (ARDS) accompany navigation sensors during flight testing to collect the precise position, velocity, and attitude. Current GAINR TSPI performance levels include 1.0 m of position uncertainty, 0.1 m/s of velocity uncertainty, and 1.75 mrad of attitude uncertainty. Goal performance levels for next-generation TSPI call for an order of magnitude improvement over current systems.</p>
<p class="body">A more accurate test and evaluation device will likely require fusion of multiple sensors of varying modalities such as GPS, inertial, electro-optical and infrared cameras, laser range sensors, barometric altimeters, ground-based theodolites, and ground-based tracking radar. This research aims to identify an integrated sensing package and the sensing techniques required to achieve the next generation TSPI accuracy.</p>
<p class="body">In order to accomplish this task, a sensitivity analysis is performed that predicts the quality of the navigation solution attainable using various external sensor combinations. The sensitivity analysis requires sensor characterization and modeling in addition to development of a software simulated world (the flight test range) that the sensors are able to observe. Issues also investigated in this research include vision-aiding techniques, optical feature deployment, and testing in GPS-denied scenarios.</p>
<div id="attachment_17731" class="wp-caption alignnone" style="width: 410px"><a href="http://www.gpsworld.com/wp-content/uploads/2010/11/PHOTODEVICE.jpg"><img class="size-full wp-image-17731" alt="PHOTODEVICE" src="http://www.gpsworld.com/wp-content/uploads/2010/11/PHOTODEVICE.jpg" width="400" height="371" /></a><p class="wp-caption-text">The GPS Aided Inertial Navigation Reference (GAINR) system consists of a Honeywell 764-G embedded GPS/INS with a custom control and recording unit. The data are post-processed using an optimal smoother and differential GPS measurements.</p></div>
<h3 class="a-head">Sensors and Simulated World</h3>
<p class="body-first">The Air Force Flight Test Center currently obtains TSPI using the GAINR, which includes a navigation grade inertial measurement unit (IMU) and dual-frequency code-based differential GPS (DGPS). Carrier-phase GPS, if available, could be implemented to increase position accuracy.</p>
<p class="body">When integrated into a highly dynamic platform, such as a tactical fighter, a kinematic solution may not always be obtainable due to difficulty resolving integer ambiguities and cycle slips experienced in the receiver’s tracking loops. The sensitivity of both code and carrier-phase differential GPS is included in this research due to the uncertain availability of a kinematic solution.</p>
<p><!--pagebreak--></p>
<p class="body">Scenarios of GPS denial are always an area of concern for the warfighter, and thus GPS-independent test-platforms must be examined. Other positioning sensors, useful in GPS-denied testing, include ground-based theodolites and radars. These devices are installed at surveyed locations on the test range and are used to track the test aircraft. Theodolites are pivoting platforms that may contain various sensors and provide range, azimuth angle, and elevation angle measurements. Radars are also used to provide the same type of measurements, along with an additional velocity measurement (<span class="figure-callout">Figure 1</span>).</p>
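<p>For reference, the basic geometry of these ground-based measurements is simple: a range/azimuth/elevation triple from a surveyed station maps to a target position by a polar-to-Cartesian conversion. The sketch below (names and frame conventions are illustrative, not from the TSPI software) assumes a local East-North-Up frame with azimuth measured clockwise from north.</p>

```python
import numpy as np

def raze_to_enu(rng, az, el, station_enu):
    """Convert a ground sensor's range/azimuth/elevation measurement to
    the target's East-North-Up position on the range. Azimuth is taken
    clockwise from north, elevation up from the horizontal plane."""
    e = rng * np.cos(el) * np.sin(az)   # east component
    n = rng * np.cos(el) * np.cos(az)   # north component
    u = rng * np.sin(el)                # up component
    return np.asarray(station_enu) + np.array([e, n, u])
```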
<div id="attachment_17730" class="wp-caption alignnone" style="width: 550px"><a href="http://www.gpsworld.com/wp-content/uploads/2010/11/overview.jpg"><img class="size-full wp-image-17730" alt="overview" src="http://www.gpsworld.com/wp-content/uploads/2010/11/overview.jpg" width="540" height="340" /></a><p class="wp-caption-text">Figure 1. Overview of possible TSPI sensors. The sensors consist of both aircraft-based and ground-based devices.</p></div>
<p class="body">Onboard optical sensors including high-resolution digital cameras and laser range finders have also been investigated for TSPI use. This research proposes to install surveyed targets on the test range that are easily identifiable through feature extraction and tracking methods such as the scale-invariant feature transform (SIFT).</p>
<p class="body">Cameras are able to observe position and attitude through homogeneous pixel-location measurements of image features (<span class="figure-callout">FIGURE 2</span>).</p>
<div id="attachment_17724" class="wp-caption alignnone" style="width: 550px"><a href="http://www.gpsworld.com/wp-content/uploads/2010/11/FIG2.jpg"><img class="size-full wp-image-17724" alt="FIG2" src="http://www.gpsworld.com/wp-content/uploads/2010/11/FIG2.jpg" width="540" height="439" /></a><p class="wp-caption-text">Figure 2. Simulated test range at Edwards AFB that includes optical targets, ground sensors, and a flight test profile. Optical landmarks are randomly spread within the field of view of the optical sensor over the trajectory.</p></div>
<p class="body">An objective of this sensitivity analysis is to show the attitude performance achievable through feature tracking of surveyed targets. When image-aiding of an IMU is implemented in a navigation filter, such as the extended Kalman filter (EKF), next-generation TSPI-level attitude accuracy should be reached.</p>
<p class="body">The other optical sensor investigated, the laser range finder, is used to augment the navigation solution by measuring distance to the surveyed targets detected by the camera.</p>
<p class="body">For the sensitivity analysis a simulated world is generated for the sensors to make observations. The world simulation includes GPS ephemeris, a digital terrain elevation database (DTED), gravity models, natural terrain landmarks/targets, manmade targets, a ground sensor deployment map, simulated flight test profile, and vehicle sensor installation lever-arms.</p>
<p><!--pagebreak--></p>
<h3 class="a-head">Sensitivity Analysis</h3>
<p class="body-first">The goal of the sensitivity analysis is to determine the minimal set of sensors that will meet next generation TSPI performance requirements. Sensor models and world characteristics are used to calculate expected position, velocity, and attitude uncertainty given a particular trajectory, sensor package, and feature set. The aircraft’s state vector, <a href="http://www.gpsworld.com/wp-content/uploads/2010/11/x.jpg"><img class="alignnone size-full wp-image-17729" alt="x" src="http://www.gpsworld.com/wp-content/uploads/2010/11/x.jpg" width="17" height="24" /></a>, as a function of the measurement, <em>z,</em> and uncertainty matrix, <em>R</em>, is represented as</p>
<p class="body-first"><a href="http://www.gpsworld.com/wp-content/uploads/2010/11/EQ1.jpg"><img class="alignnone size-full wp-image-17728" alt="EQ1" src="http://www.gpsworld.com/wp-content/uploads/2010/11/EQ1.jpg" width="240" height="22" /></a></p>
<p class="body">where <em>H</em> is the observation matrix. The observation matrix is a Jacobian made up of partial derivatives of each sensor’s measurements with respect to position, velocity, and attitude. Example <em>H</em> matrix elements include the partial derivatives describing the camera measurements with respect to position and attitude. The partial derivative of the pixel coordinate, <em>z<sub>i</sub></em>, of an image feature with respect to position, <em>p<sup>n</sup></em>, is</p>
<p class="body"><a href="http://www.gpsworld.com/wp-content/uploads/2010/11/EQ-2.jpg"><img class="alignnone size-full wp-image-17727" alt="EQ-2" src="http://www.gpsworld.com/wp-content/uploads/2010/11/EQ-2.jpg" width="400" height="63" /></a></p>
<p class="body-first">where T<sub>c</sub><sup>pix</sup> is the camera-frame-to-pixel-frame transformation matrix made up of calibration parameters, s<sup>c</sup> is the line-of-sight vector from the camera to the target expressed in the camera frame, C<sub>n</sub><sup>b</sup> and C<sub>b</sub><sup>c</sup> are direction cosine matrices, and the subscript <em>z</em> denotes the <em>z</em> dimension of the indicated navigation frame. The partial derivative of the pixel coordinate of an image feature with respect to attitude, α, is calculated as</p>
<p class="body-first"><a href="http://www.gpsworld.com/wp-content/uploads/2010/11/eq3.jpg"><img class="alignnone size-full wp-image-17725" alt="eq3" src="http://www.gpsworld.com/wp-content/uploads/2010/11/eq3.jpg" width="400" height="114" /></a></p>
<p class="body">The <em>H</em> matrix’s partial derivatives describing observations from other navigation sensors are derived in our previous<br />
work, “Sensor Modeling and Sensitivity Analysis for a Next Generation Time-Space Position Information (TSPI) System,” <em>Proceedings of the ION International Technical Meeting</em>, 2010. The <em>a posteriori</em> uncertainty of the state or sensitivity,<em> P</em>, at time<em> k</em> is calculated as</p>
<p class="body"><a href="http://www.gpsworld.com/wp-content/uploads/2010/11/eq4.jpg"><img class="alignnone size-full wp-image-17726" alt="eq4" src="http://www.gpsworld.com/wp-content/uploads/2010/11/eq4.jpg" width="400" height="40" /></a></p>
<p class="body-first">where <em>P<sub>0</sub></em> is the initial uncertainty.</p>
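<p>The equation images above are rendered as graphics in this feed; the accumulation they describe can be sketched in the standard information form, P<sub>k</sub> = (P<sub>0</sub><sup>-1</sup> + Σ<sub>i</sub> H<sub>i</sub><sup>T</sup>R<sub>i</sub><sup>-1</sup>H<sub>i</sub>)<sup>-1</sup>. This is a generic sketch under that assumption, not the authors' simulation code; the function name is illustrative.</p>

```python
import numpy as np

def sensitivity(P0, H_list, R_list):
    """A-posteriori state uncertainty from accumulated measurement
    geometry: P = (P0^-1 + sum_i H_i^T R_i^-1 H_i)^-1. Each H_i maps
    the state (e.g. position/attitude errors) to one sensor's
    measurements with noise covariance R_i."""
    info = np.linalg.inv(P0)                 # prior information
    for H, R in zip(H_list, R_list):
        info = info + H.T @ np.linalg.inv(R) @ H
    return np.linalg.inv(info)
```

<p>A state component that no sensor observes (an all-zero column of every H<sub>i</sub>) keeps its prior variance, which is the behaviour behind the "unobservable attitude" entries in Figure 3.</p>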
<h3 class="a-head">Results</h3>
<p class="body-first">Results show the three sigma median uncertainty of position and attitude for various sensor combinations over a common flight profile through the test range (<span class="figure-callout">Figure 3</span>).</p>
<div id="attachment_17723" class="wp-caption alignnone" style="width: 550px"><a href="http://www.gpsworld.com/wp-content/uploads/2010/11/Smearcheck-Fig3.jpg"><img class="size-full wp-image-17723" alt="Smearcheck-Fig3" src="http://www.gpsworld.com/wp-content/uploads/2010/11/Smearcheck-Fig3.jpg" width="540" height="289" /></a><p class="wp-caption-text">Figure 3. Sensitivity analysis results of position and attitude with various sensor combinations. Scenarios of unobservable attitude are designated by the infinity symbol.</p></div>
<h3 class="a-head">Conclusions</h3>
<p class="body-first">The sensitivity analysis indicates that the most practical sensor package that meets next-generation TSPI performance is the combination of carrier-phase GPS and a high-resolution camera tracking ten SIFT features per image.</p>
<p class="body">In this example, tracking only two SIFT features per image does not provide the necessary level of attitude accuracy, although incorporating inertial measurements is expected to reduce the overall number of features required per image.</p>
<p><!--pagebreak--></p>
<p class="body">In the absence of GPS, theodolites, when coupled with a camera, can function as a reasonable alternative. It should be noted that, since the sensitivity analysis relies on a simulated world, the feature-tracking performance and target-surveying accuracy may change during operational testing.</p>
<p class="body">The next phase of this research is to integrate the sensors with an IMU using an extended Kalman filter. Fusion with a navigation-grade INS is expected to improve position, velocity, and attitude accuracy.</p>
<p class="body">If simulated results are promising, the next phase of the effort will focus on collecting flight test data to validate the simulation and further increase the fidelity of the simulation.</p>
<h3 class="a-head">Acknowledgment</h3>
<p class="body-first">The authors would like to thank the Air Force Flight Test Center for supporting this research.</p>
<hr />
<p><em><span class="normal">MARK SMEARCHECK is a research engineer with the Advanced Navigation Technology Center at the Air Force Institute of Technology (AFIT) at Wright Patterson Air Force Base in Dayton, Ohio. He received his B.S. in electrical engineering in 2006 and his M.S. in electrical engineering in 2008, both from Ohio University. His research topics include micro-air vehicles, indoor navigation, image-aided navigation, pseudolites, and test range instrumentation.</span></em></p>
<p><em><span class="normal">LT. COL. MICHAEL VETH is an assistant professor of electrical engineering at AFIT and deputy director of the Advanced Navigation Technology Center. He received his Ph.D. and M.S. in electrical engineering from AFIT and his B.S. in electrical engineering from Purdue University. He is a graduate of Air Force Test Pilot School.</span></em></p>
]]></content:encoded>
			<wfw:commentRss>http://www.gpsworld.com/transportationaviationwhere-time-and-space-meet-10708/feed/</wfw:commentRss>
		<slash:comments>4</slash:comments>
		</item>
		<item>
		<title>Integer ambiguity validation: Still an open problem?</title>
		<link>http://www.gpsworld.com/integer-ambiguity-validation-still-an-open-problem/?utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=integer-ambiguity-validation-still-an-open-problem</link>
		<comments>http://www.gpsworld.com/integer-ambiguity-validation-still-an-open-problem/#comments</comments>
		<pubDate>Mon, 30 Jul 2007 18:13:27 +0000</pubDate>
		<dc:creator>GPS World staff</dc:creator>
				<category><![CDATA[Algorithms & Methods]]></category>
		<category><![CDATA[Tech Talk]]></category>
		<category><![CDATA[GNSS]]></category>
		<category><![CDATA[Integer ambiguity]]></category>

		<guid isPermaLink="false">http://www.gpsworld.com/?p=516</guid>
		<description><![CDATA[By Sandra Verhagen High-precision Global Navigation Satellite System (GNSS) positioning results are obtained with carrier phase measurements, once the integer cycle ambiguities have been successfully resolved. The position solution is obtained in four steps: 1. Float solution:least-squares, discarding integer nature. 2. Integer solution: real-valued float ambiguities mapped to integer-valued ambiguities.Examples of integer estimators (Teunissen, 1998a): [...]]]></description>
				<content:encoded><![CDATA[<p id="post_message_68"><em>By Sandra Verhagen</em></p>
<p>High-precision Global Navigation Satellite System (GNSS) positioning results are obtained with carrier phase measurements, once the integer cycle ambiguities have been successfully resolved.</p>
<p id="post_message_68">The position solution is obtained in four steps:</p>
<p><em>1. Float solution:</em> least-squares, discarding the integer nature of the ambiguities.</p>
<p><em>2. Integer solution:</em> real-valued float ambiguities mapped to integer-valued ambiguities. Examples of integer estimators (Teunissen, 1998a):</p>
<p id="post_message_68">• Integer Least-Squares: optimal, requires search to obtain solution.</p>
<p>• Integer Bootstrapping: may perform close to optimal (decorrelating ambiguity transformation required), no search required (e.g. widelaning, CIR, TCAR).</p>
<p>• Integer Rounding: the simplest of all methods.</p>
<p><em>3. Integer acceptance test:</em> decision whether or not to accept the integer ambiguity solution. Examples: ratio test, distance test, projector test.</p>
<p><em>4. Fixed solution:</em> if the integer solution is accepted, the fixed baseline is computed.</p>
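<p>Steps 2 and 3 can be sketched for a small problem by a brute-force candidate search: rank nearby integer vectors by their weighted squared distance to the float solution, then apply a ratio test between the best and second-best candidates. This is illustrative only (names and the acceptance form are assumptions); practical integer least-squares uses a decorrelating transformation and an efficient search, e.g. LAMBDA.</p>

```python
import numpy as np
from itertools import product

def resolve(a_hat, Qa, delta=3.0, width=2):
    """Toy integer resolution: enumerate integer candidates within
    `width` cycles of the rounded float ambiguities a_hat, rank them by
    the squared norm (a_hat - a)^T Qa^-1 (a_hat - a), and accept the
    best candidate only if the second-best is at least `delta` times
    worse (a ratio-test-style decision)."""
    Qinv = np.linalg.inv(Qa)
    base = np.round(a_hat).astype(int)
    cands = [base + np.array(off) for off in
             product(range(-width, width + 1), repeat=len(a_hat))]
    d2 = sorted((float((a_hat - a) @ Qinv @ (a_hat - a)), tuple(a))
                for a in cands)
    best, second = d2[0], d2[1]
    accepted = second[0] / max(best[0], 1e-12) >= delta
    return np.array(best[1]), accepted
```

<p>A float solution sitting near an integer grid point is accepted, while one halfway between two candidates fails the test, mirroring the pull-in-region picture in the figure below.</p>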
<p>The third step is often referred to as the ‘integer validation’ problem. In Verhagen (2004) this problem was addressed, and different approaches were compared.</p>
<p>As an example, we will now consider the popular ratio test, which is defined as:</p>
<p><a title="july07-formula.jpg" href="http://www.gpsworld.com/files/gpsworld/nodes/2007/11316/july07-formula.jpg"><img src="http://www.gpsworld.com/files/gpsworld/nodes/2007/11316/july07-formula.jpg" alt="july07-formula.jpg" border="0" /></a></p>
<p>where <em>ȃ</em> is the float solution, with Q<sub>ȃ</sub> the corresponding variance matrix; <em>ă</em> and <em>ă</em>&#8217; are the corresponding integer estimate and the second-best integer candidate, respectively; and δ is the critical value. <em>Note:</em> in practice, often the reciprocal of the ratio test as specified here is used.</p>
<p>The underlying principle of the ratio test can be explained with a 2-dimensional example, see the figure below. Assume we have two ambiguities in our model. The black hexagons are the so-called integer least-squares pull-in regions: if the float ambiguity estimate falls inside a certain hexagon, the integer solution is equal to the grid point in the center of this pull-in region. Applying the ratio test, however, implies that this integer solution is only accepted if it falls inside one of the red regions. Otherwise, the float ambiguity is considered to be too close to the boundary of a pull-in region, such that the integer solution is not sufficiently more likely than the second-best integer candidate.</p>
<p><a title="rtia-web.jpg" href="http://www.gpsworld.com/files/gpsworld/nodes/2007/11316/rtia-web.jpg"><img src="http://www.gpsworld.com/files/gpsworld/nodes/2007/11316/rtia-web.jpg" alt="rtia-web.jpg" border="0" /></a></p>
<p>Note that the size of the regions is controlled by the critical value, δ; see Verhagen and Teunissen (2006) and Teunissen and Verhagen (2007), where it is described how this value should be chosen.</p>
<p>It can be seen that the acceptance regions are invariant for translations with an integer value. As such, the ratio test is invariant to integer biases. In fact, the ratio test is not suitable for testing the correctness of the solution. A model error, such as a bias in the observations, will propagate into the float ambiguities, but it does not necessarily mean that the float ambiguity will be close to the boundary of a pull-in region.</p>
<p>Hence, the ratio test is not a model validation test, and should only be applied in order to test whether or not the integer solution can be regarded sufficiently more likely than any other integer candidate.</p>
<p>With regard to GNSS model validation, we can make the following remarks:</p>
<p>1. Classical testing theory based on statistical hypothesis testing is not applicable due to the integer nature of the carrier-phase ambiguities (Teunissen, 1998b).</p>
<p>2. Testing theory for testing the presence/absence of a model error is not yet available.</p>
<p>3. Questions that need to be answered are:</p>
<blockquote><p>• What are the appropriate test statistics?</p>
<p>• How are they distributed under the null hypothesis and the alternative hypothesis?</p>
<p>• What are the appropriate acceptance/rejection regions?</p></blockquote>
<p><strong>References</strong></p>
<p>Teunissen, P.J.G. (1998a). &#8220;A class of unbiased integer GPS ambiguity estimators.&#8221; <em>Artificial Satellites, 33</em>(1): 4-10.</p>
<p>Teunissen, P.J.G. (1998b). &#8220;GPS carrier phase ambiguity fixing concepts.&#8221; In: Teunissen, P.J.G. and A Kleusberg. <em>GPS for Geodesy,</em> Springer-Verlag, Berlin.</p>
<p>Teunissen, P.J.G. and Verhagen, S. (2007). &#8220;GNSS phase ambiguity validation: a review.&#8221; <em>Proceedings Space, Aeronautical and Navigational Electronics Symposium SANE2007</em>, The Institute of Electronics, Information and Communication Engineers (IEICE), Japan, 107(2): 1-6.</p>
<p>Verhagen, S. (2004). &#8220;Integer ambiguity validation: an open problem?&#8221; <em>GPS Solutions, 8</em>(1): 36-43.</p>
<p>Verhagen, S. and Teunissen, P.J.G. (2006). &#8220;New global navigation satellite system ambiguity resolution method compared to existing approaches.&#8221; <em>Journal of Guidance, Control and Dynamics, 29</em>(4): 981-991.</p>
<p><strong>Dr.ir. Sandra Verhagen </strong></p>
<p><strong>DEOS-MGP, TU Delft </strong></p>
]]></content:encoded>
			<wfw:commentRss>http://www.gpsworld.com/integer-ambiguity-validation-still-an-open-problem/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>Antenna-induced biases in GNSS receivers May 17, 2007</title>
		<link>http://www.gpsworld.com/antenna-induced-biases-in-gnss-receivers-may-17-2007/?utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=antenna-induced-biases-in-gnss-receivers-may-17-2007</link>
		<comments>http://www.gpsworld.com/antenna-induced-biases-in-gnss-receivers-may-17-2007/#comments</comments>
		<pubDate>Thu, 17 May 2007 18:08:12 +0000</pubDate>
		<dc:creator>GPS World staff</dc:creator>
				<category><![CDATA[OEM]]></category>
		<category><![CDATA[Tech Talk]]></category>
		<category><![CDATA[antennas]]></category>
		<category><![CDATA[Inder Jeet Gupta]]></category>

		<guid isPermaLink="false">http://www.gpsworld.com/?p=511</guid>
		<description><![CDATA[By Inder Jeet Gupta It is well known that the phase center of a GNSS antenna can vary with the satellite direction. This phase center movement leads to aspect dependent carrier phase and code phase biases in the satellite signal. For precise geo-location, one needs to characterize the antenna-induced carrier and code phase biases over [...]]]></description>
				<content:encoded><![CDATA[<p id="post_message_45"><em>By Inder Jeet Gupta</em></p>
<p>It is well known that the phase center of a GNSS antenna can vary with the satellite direction. This phase center movement leads to aspect-dependent carrier-phase and code-phase biases in the satellite signal. For precise geo-location, one needs to characterize the antenna-induced carrier and code phase biases over the upper hemisphere. In the case of fixed-pattern antennas (whose pattern does not vary with the incident signal environment), one can characterize the antenna-induced biases a priori and use the data for corrections in the field. This is a standard practice in the surveying community.</p>
<p>For antennas used with AJ (anti-jam) systems, however, <em>a priori</em> characterization of the antenna-induced biases may not be of much value. These antennas consist of multiple elements. The signals received by the various antenna elements are weighted and then summed to form the composite output signal. The element weights depend on the incident signal scenario (mainly on the interfering signals). As the incident signal scenario changes, so do the individual element weights, which in turn lead to different values of the antenna-induced carrier-phase and code-phase biases.</p>
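<p>The weighted-sum processing described above can be sketched numerically. The following is a minimal, hypothetical illustration of one common adaptive weighting scheme, the minimum-variance distortionless-response (MVDR) beamformer, applied to a simulated four-element array; it shows how the element weights adapt to the interference scenario, and is not the specific AJ processor discussed in this column.</p>

```python
import numpy as np

def steering_vector(theta_rad, n_elements):
    """Steering vector of a uniform linear array with half-wavelength spacing."""
    return np.exp(-1j * np.pi * np.arange(n_elements) * np.sin(theta_rad))

def mvdr_weights(R, s):
    """MVDR weights: minimize output power subject to unity gain toward s."""
    r_inv_s = np.linalg.solve(R, s)
    return r_inv_s / (s.conj() @ r_inv_s)

rng = np.random.default_rng(0)
n, k = 4, 500                      # elements, snapshots
s = steering_vector(0.0, n)        # desired (boresight) direction
a_jam = steering_vector(0.6, n)    # interferer direction, radians

# Simulated element snapshots: one strong jammer plus unit-power receiver noise.
jam = np.sqrt(1000.0) * (rng.standard_normal(k) + 1j * rng.standard_normal(k)) / np.sqrt(2)
noise = (rng.standard_normal((n, k)) + 1j * rng.standard_normal((n, k))) / np.sqrt(2)
x = np.outer(a_jam, jam) + noise

R = x @ x.conj().T / k             # sample covariance of the incident scenario
w = mvdr_weights(R, s)

print(abs(np.vdot(w, s)))          # unity gain held toward the desired direction
print(abs(np.vdot(w, a_jam)))      # deep null toward the jammer
```

<p>Because the weights depend on the sample covariance of the incident scenario, any change in the interference environment changes them, and with them the phase response of the composite antenna.</p>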
<p>As an illustration, Figure 1 shows the antenna-induced code-phase bias of an AJ antenna over the upper hemisphere, both in the absence of all interfering signals and in the presence of two interfering signals.</p>
<p><img src="http://www.gpsworld.com/files/gpsworld/images/0407Figure%201%20left.jpg" alt="" border="0" /><img style="border: 0px none;" src="http://www.gpsworld.com/wp-content/uploads/2012/08/0407Figure-2.jpg" alt="" width="192" height="186" border="0" /></p>
<p><em>Figure 1. Antenna-induced code-phase bias (in meters) over the upper hemisphere. Left: no interfering signal; right: two interfering signals.</em></p>
<p>In the figure, the center of the circle corresponds to the zenith and the outer ring to the horizon. The antenna-induced code-phase bias is plotted using a color scale in meters. Note that even in the absence of interfering signals, the bias varies with the aspect angle. The presence of interfering signals changes the antenna-induced biases, both in the angular region surrounding the interfering signals and in regions well away from them.</p>
<p>One can observe this more clearly in Figure 2, where the difference between the antenna-induced code-phase biases in the absence and in the presence of the interfering signals is plotted using a color scale in centimeters. Note that the difference is quite significant; one may not be able to obtain a precise location without proper corrections.</p>
<p><img style="border: 0px none;" src="http://www.gpsworld.com/wp-content/uploads/2012/08/0407Figure-1-left.jpg" alt="" width="216" height="206" border="0" /></p>
<p><em>Figure 2. Difference (in cm) between the antenna-induced code phase bias in the presence of two interfering signals and in the absence of the interfering signals.</em></p>
<p>The question is what can be done to minimize the effects of adaptive-antenna-induced biases in GNSS receivers. In my opinion, one can take two approaches. In the first approach (see the reference below), one predicts the antenna-induced biases on the fly. This approach requires knowledge of the in situ volumetric patterns of the individual elements of the AJ antenna over the bandwidth of the GNSS signals, as well as access to the antenna element weights. With perfect knowledge of these quantities, one can produce a very good prediction and correct for the antenna-induced biases. The sensitivity of the prediction to the various parameters, however, still needs to be studied.</p>
<p>The second approach would be to develop novel weighting algorithms for the adaptive antennas of GNSS receivers. Current algorithms are mostly designed either to steer nulls in the directions of the interfering signals or to maximize the carrier-to-noise ratio in some sense. The novel algorithms should not only improve the carrier-to-noise ratio in the presence of interfering signals, but should also ensure that the antenna-induced biases do not deviate from their values in the absence of all interfering signals.</p>
<p>Further, these algorithms should not use many degrees of freedom to meet these constraints, because GNSS AJ antennas have only a limited number of degrees of freedom. If most of the degrees of freedom are consumed in meeting the constraints, not enough will be left to null the interfering signals. This is a very challenging task, but it makes for a good research problem!</p>
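<p>The degrees-of-freedom trade-off can be made concrete with a linearly constrained minimum-variance (LCMV) weight computation. In this hypothetical sketch, a four-element array spends three of its four degrees of freedom on gain constraints, leaving only one for nulling interference; the directions and constraint values are purely illustrative.</p>

```python
import numpy as np

def steering_vector(theta_rad, n_elements):
    return np.exp(-1j * np.pi * np.arange(n_elements) * np.sin(theta_rad))

def lcmv_weights(R, C, f):
    """LCMV weights: minimize output power subject to C^H w = f."""
    r_inv_c = np.linalg.solve(R, C)
    return r_inv_c @ np.linalg.solve(C.conj().T @ r_inv_c, f)

n = 4
# Three gain constraints (boresight and two nearby angles) consume three of
# the array's four degrees of freedom.
C = np.column_stack([steering_vector(t, n) for t in (0.0, 0.15, -0.15)])
f = np.ones(3, dtype=complex)

rng = np.random.default_rng(1)
x = (rng.standard_normal((n, 200)) + 1j * rng.standard_normal((n, 200))) / np.sqrt(2)
R = x @ x.conj().T / 200           # noise-only sample covariance

w = lcmv_weights(R, C, f)
print(np.allclose(C.conj().T @ w, f))   # prints True: all constraints met
```

<p>With all but one degree of freedom committed to the constraints, such an array could place at most one adaptive null, which is exactly the tension described above.</p>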
<p><strong> Inder J. Gupta </strong></p>
<p><strong>Ohio State University</strong></p>
<p><strong>References</strong></p>
<p>I.J. Gupta et al., <em>Prediction of Antenna and Antenna Electronics Induced Biases in GNSS Receivers,</em> Proceedings of the ION 2007 National Technical Meeting, San Diego, California, January 2007.</p>
]]></content:encoded>
			<wfw:commentRss>http://www.gpsworld.com/antenna-induced-biases-in-gnss-receivers-may-17-2007/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>GPS Transmitter Frequencies</title>
		<link>http://www.gpsworld.com/gps-transmitter-frequencies/?utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=gps-transmitter-frequencies</link>
		<comments>http://www.gpsworld.com/gps-transmitter-frequencies/#comments</comments>
		<pubDate>Tue, 16 Jan 2007 18:06:29 +0000</pubDate>
		<dc:creator>GPS World staff</dc:creator>
				<category><![CDATA[Tech Talk]]></category>

		<guid isPermaLink="false">http://www.gpsworld.com/?p=509</guid>
		<description><![CDATA[Why are the two GPS Transmitter frequencies (1575.42 and 1227.6 MHz) coherently selected integer multiples of 10.23 MHz master clock? Question posted on CANSPACE on October 30, 2006, by Sivaraman Ranganathan. The document defining the GPS signal, IS-GPS-200, states that &#8220;The carrier frequencies for the L1 and L2 signals shall be coherently derived from a [...]]]></description>
				<content:encoded><![CDATA[<p id="post_message_34"><strong>Why are the two GPS Transmitter frequencies (1575.42 and 1227.6 MHz) coherently selected integer multiples of 10.23 MHz master clock?</strong></p>
<p><em>Question posted on CANSPACE on October 30, 2006, by Sivaraman Ranganathan.</em></p>
<p>The document defining the GPS signal, IS-GPS-200, states that &#8220;The carrier frequencies for the L1 and L2 signals shall be coherently derived from a common frequency source within the SV&#8221; (Section 3.3.1.1). This makes L1 and L2 integer multiples of the common 10.23 MHz frequency source: L1 = 154 &#215; 10.23 MHz = 1575.42 MHz and L2 = 120 &#215; 10.23 MHz = 1227.6 MHz. Why is this? I believe it is done for simplicity of system design and operation. All components of the signal (code, carrier, and navigation data) are derived from the atomic frequency standards on board the satellite. If separate frequency sources were used instead, biases would arise between the different components, and these would have to be calculated and removed.</p>
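<p>These integer relationships are easy to check. The multipliers below (154 and 120 for the carriers, and the 10.23 Mcps P(Y)-code and 1.023 Mcps C/A-code chipping rates) are the values given in IS-GPS-200:</p>

```python
# Every GPS signal component is an integer multiple (or decimal submultiple)
# of the 10.23 MHz master clock.
F0 = 10_230_000             # master clock frequency, Hz

l1 = 154 * F0               # L1 carrier: 1575.42 MHz
l2 = 120 * F0               # L2 carrier: 1227.60 MHz
py_chip_rate = F0           # P(Y)-code chipping rate: 10.23 Mcps
ca_chip_rate = F0 // 10     # C/A-code chipping rate: 1.023 Mcps

print(l1, l2)               # 1575420000 1227600000
```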
<p>IS-GPS-200 furthermore states, in Section 3.3.1.8, that the C/A and P(Y) digital codes are likewise derived from the same frequency standard: “All transmitted signals for a particular SV shall be coherently derived from the same on-board frequency standard; all digital signals shall be clocked in coincidence with the PRN transitions for the P-signal and occur at the P-signal transition speed. On the L1 channel the data transitions of the two modulating signals (i.e., that containing the P(Y)-code and that containing the C/A-code), L1 P(Y) and L1 C/A, shall be such that the average time difference between the transitions does not exceed 10 nanoseconds (two sigma).”</p>
<p>Despite the coherence of the two carriers, it is understood that there is a difference between the radiated L1 and L2 signals, due in part to the different paths the signals take through the on-board electronics. This is called the differential group delay, and an estimate of it is broadcast to users in the navigation message. The difference between L1 P(Y) and L2 P(Y) is designated Tgd (paragraph 20.3.3.3.3.2); the difference between L1 P(Y) and L2C is called the inter-signal correction, or ISC (paragraph 30.3.3.3.1.1).</p>
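<p>IS-GPS-200 also specifies how Tgd enters the satellite clock correction for single-frequency users: the broadcast clock terms are referenced to the ionosphere-free L1/L2 P(Y) combination, so an L1 P(Y)-only user subtracts Tgd from the computed clock offset, while an L2 P(Y)-only user subtracts γTgd, where γ = (f_L1/f_L2)&#178; = (77/60)&#178;. A minimal sketch; the numeric values at the end are hypothetical:</p>

```python
GAMMA = (77 / 60) ** 2      # (f_L1 / f_L2)**2 for the L1/L2 pair

def sv_clock_correction(dt_sv, tgd, band="L1"):
    """Single-frequency SV clock correction, seconds (IS-GPS-200 20.3.3.3.3.2).

    dt_sv: clock offset from the broadcast polynomial (referenced to the
    ionosphere-free L1/L2 P(Y) combination); tgd: broadcast Tgd term.
    """
    if band == "L1":
        return dt_sv - tgd            # L1 P(Y)-only user
    if band == "L2":
        return dt_sv - GAMMA * tgd    # L2 P(Y)-only user
    raise ValueError(band)

# Hypothetical values: a 10-microsecond clock offset and a Tgd of -4.66 ns.
corrected = sv_clock_correction(10e-6, -4.66e-9, "L1")
```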
<p>For further technical discussion of this topic, see the book <em>Global Positioning System: Signals, Measurements, and Performance</em> by Pratap Misra and Per Enge (Section 2.3.1).</p>
<p><strong>John Lavrakas, </strong><strong>President</strong></p>
<p><strong>Advanced Research Corp. </strong></p>
]]></content:encoded>
			<wfw:commentRss>http://www.gpsworld.com/gps-transmitter-frequencies/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>Why are GLONASS satellites launched on Christmas Day?</title>
		<link>http://www.gpsworld.com/why-are-glonass-satellites-launched-on-christmas-day/?utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=why-are-glonass-satellites-launched-on-christmas-day</link>
		<comments>http://www.gpsworld.com/why-are-glonass-satellites-launched-on-christmas-day/#comments</comments>
		<pubDate>Tue, 16 Jan 2007 18:05:31 +0000</pubDate>
		<dc:creator>GPS World staff</dc:creator>
				<category><![CDATA[Tech Talk]]></category>

		<guid isPermaLink="false">http://www.gpsworld.com/?p=507</guid>
		<description><![CDATA[Why are GLONASS satellites launched on Christmas Day? Question posted on CANSPACE on December 10, 2006, by Kerry Matthews The latest triple-satellite GLONASS launch occurred on December 25th at 23:18 Moscow Time. This launch is the sixth GLONASS December launch in a row. In fact, all 9 launches since December 1995 have occurred in the [...]]]></description>
				<content:encoded><![CDATA[<p><span style="font-family: Verdana;"><span style="font-size: small;"><strong>Why are GLONASS satellites launched on Christmas Day?</strong></span></span><span style="font-family: Verdana;"><span style="font-size: small;"><em> Question posted on CANSPACE on December 10, 2006, by Kerry Matthews</em></span></span></p>
<p>The latest triple-satellite GLONASS launch occurred on December 25th at 23:18 Moscow Time. This launch is the sixth GLONASS December launch in a row. In fact, all nine launches since December 1995, with the single exception of the launch on October 13th, 2000, have occurred in the last month of the year (see <a href="http://gge.unb.ca/Resources/GLONASSConstellationStatus.txt" target="_blank">a list of GLONASS launches</a> going back to 1990).</p>
<p>Including this month’s launch, three of the recent launches have occurred on December 25th, and one, originally scheduled for the 25th, occurred on the 26th. Why the preponderance of December launches, and of launches on Christmas Day in particular?</p>
<p>First of all, we should realize that for most people in the Russian Federation, there is nothing special about December 25th. Most Christians in Russia belong to the Russian Orthodox Church which celebrates Christmas according to the Julian calendar — on January 7th. And in modern Russia, January 7th is a state-wide holiday. So, GLONASS launches don’t occur around December 25th because it’s a special day on the Russian calendar. So why do they occur then?</p>
<p>I posed this question to Col. (ret.) Nikolai Shienok, the former chief of the Information Department of the Coordination and Scientific Information Center of the Russian Ministry of Defense. After conferring with the officials at Roscosmos (the Russian space agency) responsible for the GLONASS program, Col. Shienok confirmed that the preponderance of launches in December is due only to financial and organizational reasons. “It is the last month of the year, and it is impossible to postpone a planned launch further,” Col. Shienok said.</p>
<p>Nevertheless, there may be some operational calendar constraints on GLONASS satellite launches as there are for launches of other satellites. Satellite operators typically try to avoid launching satellites when the Sun-orbit-plane or beta angle for the intended orbit is unfavorable. The beta angle is the angle between the geocentric position vector to the Sun and the satellite’s orbital plane. This angle determines if and for how long a satellite will be in the Earth’s shadow during its orbit. For a given orbit (altitude, inclination, and initial right ascension of the ascending node), the beta angle will vary over the year. Operators try to avoid a launch date when the satellite would be in eclipse for a significant fraction of its orbit so that during the crucial satellite deployment and commissioning phase, the satellite’s solar panels receive as much sunlight as possible to keep the satellite’s batteries fully charged. The recent GLONASS launch put the satellites into Plane 2 which is actually in one of its eclipse seasons right now. However, the satellites will be out of eclipse by early January.</p>
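<p>The beta angle can be computed from the orbit's inclination and right ascension of the ascending node (RAAN) together with the Sun's right ascension and declination. A minimal sketch; the sample angles below are illustrative, not an actual GLONASS ephemeris:</p>

```python
import math

def beta_angle(raan_deg, inc_deg, sun_ra_deg, sun_dec_deg):
    """Angle between the geocentric Sun vector and the orbital plane, degrees.

    sin(beta) = sin(i) cos(dec_s) sin(RAAN - RA_s) + cos(i) sin(dec_s)
    """
    raan, inc = math.radians(raan_deg), math.radians(inc_deg)
    ra_s, dec_s = math.radians(sun_ra_deg), math.radians(sun_dec_deg)
    sin_beta = (math.sin(inc) * math.cos(dec_s) * math.sin(raan - ra_s)
                + math.cos(inc) * math.sin(dec_s))
    return math.degrees(math.asin(sin_beta))

# Near the December solstice the Sun's declination is about -23.4 degrees;
# GLONASS orbits are inclined at 64.8 degrees. The RAAN here is illustrative.
print(beta_angle(210.0, 64.8, 270.0, -23.4))
```

<p>A satellite passes through the Earth's shadow on every orbit whenever the magnitude of beta is small enough; launch planners prefer dates for which it stays comfortably large through deployment and commissioning.</p>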
<p><span style="font-family: Verdana;"><span style="font-size: small;"><strong>Prof. Richard B. Langley </strong></span></span></p>
<p><span style="font-family: Verdana;"><span style="font-size: small;"><strong>Dept. Geodesy and Geomatics Engineering </strong></span></span></p>
<p><span style="font-family: Verdana;"><span style="font-size: small;"><strong>University of New Brunswick</strong> </span></span></p>
]]></content:encoded>
			<wfw:commentRss>http://www.gpsworld.com/why-are-glonass-satellites-launched-on-christmas-day/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
	</channel>
</rss>
