Automatic Labeling of Lane Markings for Autonomous Vehicles
Jeffrey Kiske
Stanford University
450 Serra Mall, Stanford, CA

1. Introduction

As autonomous vehicles become more popular, there is an increasing need to reduce the cost of the expensive equipment outfitted on each car. Traditional sensor suites for autonomous vehicles include an expensive lidar (~$70,000) to accurately determine depth near the car, and if the equipment is damaged, generally the only solution is to purchase another. In this project I build a system to automatically label highway lane markings. These labels will be used to train a deep neural network that will be able to predict lanes using images from inexpensive cameras. Removing the lidar completely from autonomous vehicles is beyond the scope of a quarter-long project, but future projects can use my system to label cars, road signs, and other obstacles in the road as inputs to other machine learning systems.

Existing solutions to this problem, such as the Mobileye Lane Departure Warning System, are effective at a range of only 30 meters [3]. My goal is to predict lane markings beyond that. I combine data collected from a Velodyne lidar (HDL-32E), HDR video streams, and a high-precision GPS to build a large dataset of labeled lane markings. Eventually this data will be used to train a neural network which predicts lanes from photos alone.

2. Related Work

2.1. Review of Related Work

Before I implemented this project, my research group was using a traditional computer vision approach to label lanes. The system would look in the bottom two thirds of the image and find the two largest white or yellow areas closest to the center of the frame. While this approach correctly labeled lanes in some situations, it failed under poor lighting or occlusions. When testing the neural network, it was obvious that there were too many erroneous labels because the system was unable to consistently predict the lanes.
This project implements a system proven to be successful on other autonomous vehicles. Specifically, Professor Sebastian Thrun's research group fused Velodyne point clouds with camera images to find curbs, lane markings, and cars. My project is the first step toward implementing many of these systems.

2.2. Main Contributions

Instead of using traditional computer vision with a single camera, this project fuses inputs from many different sensors. I use position from a high-precision GPS, depth from a Velodyne lidar, and color information from two stereoscopic cameras. This method requires all components to be time-synchronized and calibrated. To calibrate the sensors, I use a combination of automatic and manual adjustment. Fusing the data from each sensor into one map allows me to generate highly accurate lane labels for 20 hours of driving data.

3. Technical Information

3.1. Summary

The goal of my project is to fuse three independent sources of data to create accurate labels for lane markings. To do this I need to synchronize timing and calibrate each source with respect to the others. I implement manual and automatic techniques to calibrate each sensor and combine all data into a single map. From this map, I remove points that are not lane markings using thresholding. These points are easy to find because lane markings are highly reflective and we know their approximate position relative to the Velodyne. Using hierarchical clustering, I can remove noise and ensure there is a one-to-one mapping between labels and lane markings. With this method, I am able to quickly and accurately label lane markings better than the previous system.

3.2. Building a Dense Map

The Velodyne outputs a set of 64,000 data points at 10 Hz, each with a 3D coordinate, a measure of reflective intensity, and the laser index. The Velodyne also reports localization information from accelerometers, gyroscopes, and a low-resolution GPS. Highway lane markings are more reflective than the surrounding asphalt, so the reported laser intensity value is larger. In Figure 1, green points indicate high reflectivity and red indicates low reflectivity.

Figure 1. Lanes are reflective, so they have a higher intensity value.

Individual Velodyne point clouds are too sparse to label lane markings further than 20 meters away. With a single point cloud the Velodyne can see the closest lane marking, at best. To label lanes farther than that, we need to combine clouds from different points in time into a single map. With accurate position and time information, we can overlay multiple point clouds into a single view. The GPS unit mounted on the car reports position with centimeter accuracy. A crucial step in combining point clouds is calibrating the GPS to the Velodyne. Figure 2 shows what can happen when the two units are not calibrated with respect to each other.

Figure 2. Integrating point clouds with an uncalibrated GPS yields blurry, unrecognizable results.¹

Although there are automated ways of finding this calibration, manual calibration works well if we consider each point cloud relative to the GPS's reference frame. We can make this assumption because both the Velodyne and GPS are rigidly mounted to the car: a change in GPS position is the same as a change in Velodyne position. The system only needs to be calibrated for errors in rotation between the Velodyne and GPS. Given a point cloud P, the GPS-to-Velodyne rotation offset R_calib, and the change in position given by the GPS, T_position, we can start overlaying point clouds from different points in time:

    R_calib · T_position · P = P′    (1)

Extending this method to multiple point clouds, we obtain the dense map seen in Figure 3. Using the GPS-to-Velodyne calibration, we are able to obtain 3D maps of entire highways. This step makes it possible to reliably extract lane markings from the Velodyne's point cloud.

Figure 3. Using a correct calibration produces recognizable objects in the point cloud.

3.3. Time Synchronization

Synchronizing the timing between all three data sources is crucial for building the map described in Section 3.2. Synchronizing this system is difficult because each individual component has its own clock. The GPS and Velodyne operate on different GPS units, and the cameras use the computer's UTC time.² Synchronizing the clocks is also complicated by different data refresh rates. The GPS reports position at 200 Hz, the Velodyne reports a point cloud at 10 Hz, and the cameras refresh their image at 50 Hz. The system finds the closest Velodyne point cloud and GPS position relative to the camera's timestamp. Since the Velodyne refresh rate is 5x slower than the camera frame rate, this causes a timing error of up to 80 milliseconds. This disparity means that the point cloud may be up to 2.22 meters away from the actual markings at highway speeds (100 km/h). This does not cause problems in practice, however, because we assume the highways are generally straight.

¹ Image from Unsupervised Calibration for Multi-beam Lasers [1].
² The computer's UTC time is synchronized with Stanford's NTP server at time.stanford.edu.
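The overlay described by Equation 1 amounts to applying a rigid transform to each scan before stacking it into the dense map. A minimal NumPy sketch follows; the function name is mine, R_calib is assumed to be a 3×3 rotation matrix, and reading the equation left to right (translate by the GPS displacement, then apply the rotation correction) is my interpretation, not code from the project.

```python
import numpy as np

def overlay_point_clouds(clouds, gps_translations, r_calib):
    """Accumulate Velodyne scans from different times into one dense map.

    clouds           -- list of (N, 3) arrays, one scan each, Velodyne frame
    gps_translations -- list of (3,) GPS position changes w.r.t. the map origin
    r_calib          -- (3, 3) rotation correcting the GPS/Velodyne mounting error
    """
    merged = []
    for cloud, t in zip(clouds, gps_translations):
        # Equation 1: shift by the GPS-reported change in position, then
        # rotate the mounting error out, expressing the scan in the map frame.
        merged.append((r_calib @ (cloud + t).T).T)
    return np.vstack(merged)
```

With an identity calibration, two scans separated by a 1 m GPS displacement simply land 1 m apart in the merged cloud.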
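The nearest-timestamp matching and the worst-case displacement quoted in Section 3.3 can be sketched in a few lines. The helper function is hypothetical (not from the report), but the arithmetic reproduces the 2.22 m figure: 100 km/h is about 27.8 m/s, times 80 ms.

```python
import numpy as np

# Hypothetical helper: pick, for each camera frame, the index of the sensor
# sample (Velodyne cloud or GPS fix) whose timestamp is nearest.
def nearest_sample(sensor_times, frame_time):
    sensor_times = np.asarray(sensor_times, dtype=float)
    return int(np.argmin(np.abs(sensor_times - frame_time)))

# Worst-case misalignment quoted in the text: an 80 ms mismatch at highway
# speed (100 km/h) displaces the point cloud by about 2.22 m.
speed_m_per_s = 100 / 3.6            # ~27.78 m/s
worst_case_error_m = speed_m_per_s * 0.080
```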
An additional complication of time synchronization is the disparity between UTC and GPS time. GPS time does not take leap seconds into account. Leap seconds occur nondeterministically, up to twice each year. Since the start of GPS time (January 6, 1980) there have been 16 leap seconds.³ This means that UTC time and GPS time are misaligned by 16 seconds. Since leap seconds cannot be predicted, the synchronization between the three components in the system will need to be updated whenever a new leap second is announced. Since this data is being post-processed, a timing misalignment error should be easily caught.

3.4. Camera Calibration and Velodyne Positioning

The system also needs a correct calibration between the cameras and the Velodyne. Determining the rotation and translation between these two components can be done either manually or automatically. Manual calibration is done by projecting the point cloud into the camera's field of view and adjusting the transformation parameters until the points line up correctly with the scene. Specifically, we need to know the x, y, and z translations and the yaw, pitch, and roll rotations of the cameras relative to the Velodyne. The translation parameters are easily measured using a tape measure, and the rotation parameters are initially guessed. My results for manual calibration can be seen in Figure 4.

Figure 4. Projection of the Velodyne point cloud using manual calibration.

Once I had an initial estimate for the affine parameters, I implemented the iterative algorithm described by Levinson [2]. This algorithm maximizes the correlation between depth discontinuities in the point cloud and color discontinuities in the camera image. Although the algorithm was designed to quickly determine whether a calibration is correct, it can also be extended to refine a calibration given an initial guess.

Given an image, I calculate the edginess of each pixel. The edginess value of each pixel is determined by the maximum intensity distance between it and its eight neighbors. We will call this new edge image E. Next, apply a smoothing function such that the (i, j)th pixel's value is defined as:

    D_{i,j} = α · E_{i,j} + (1 − α) · max_{x,y} E_{x,y} · γ^{max(|x−i|, |y−j|)}    (2)

This smooths the edges of the image so that the objective function (defined in Equation 4) is rewarded for matching points close to edges.⁴ Levinson recommends the values α = 1/3 and γ = 0.98 for smoothing.

With our new edginess image D, we can implement the second part of the algorithm: finding depth discontinuities in the Velodyne point cloud. Define a new point cloud X where each point i in X is the maximum distance between two horizontally adjacent laser points in P. Since the Velodyne is parallel to the ground, we can assume these points are from the same laser at different times in the sweep:

    X_i = max(‖P_{i−1}‖ − ‖P_i‖, ‖P_{i+1}‖ − ‖P_i‖, 0)^{0.5}    (3)

We remove points from P that have distance values in X less than 30 centimeters to create P′. Removing points with low distances increases the chance that we see an edge at that point in the camera image. We then construct an objective function J given our initial guess C_g from manual calibration. J is maximized by aligning large depth discontinuities in P′ with high intensity values in D, where (i, j) is the pixel coordinate that point P′_p projects to under C_g:

    J = Σ_p D_{i,j}    (4)

We can attempt to find an absolute maximum for J by performing a grid search across each of the six degrees of freedom in C_g. Iteratively searching this grid will eventually find the calibration C that maximizes J. This method only works if our initial guess is close to the correct calibration, a valid assumption given that the manual calibration qualitatively looks correct.

Using the Velodyne-camera calibration obtained from Levinson's method and the GPS-Velodyne calibration obtained manually, I am able to combine all sensor inputs. I can now add color information to the 3D depth maps shown in Figure 3. With all of the sensors correctly calibrated, I can build 3D color maps of a scene. While this is not useful for labeling the lanes, it makes it easy to verify that all three calibrations are correct (see Figure 5).

³ The last one was announced for June 30, 2012.
⁴ The inverse distance transform is used in my implementation, but any smoothing function should work.
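Equation 2 can be written out directly in NumPy. The sketch below is an unoptimized reference implementation under my own assumptions (border pixels simply have fewer neighbors); as the footnote notes, Levinson computes the same quantity efficiently with an inverse distance transform.

```python
import numpy as np

def edge_image(img):
    """E: per-pixel maximum absolute intensity difference to the 8 neighbours."""
    H, W = img.shape
    E = np.zeros((H, W), dtype=float)
    for i in range(H):
        for j in range(W):
            for di in (-1, 0, 1):
                for dj in (-1, 0, 1):
                    ni, nj = i + di, j + dj
                    if (di or dj) and 0 <= ni < H and 0 <= nj < W:
                        diff = abs(float(img[i, j]) - float(img[ni, nj]))
                        E[i, j] = max(E[i, j], diff)
    return E

def smooth_edges(E, alpha=1 / 3, gamma=0.98):
    """Equation 2: blend each edge value with a distance-discounted maximum
    over the whole image, using the Chebyshev distance max(|x-i|, |y-j|)."""
    H, W = E.shape
    ii, jj = np.meshgrid(np.arange(H), np.arange(W), indexing="ij")
    D = np.empty((H, W), dtype=float)
    for i in range(H):
        for j in range(W):
            cheb = np.maximum(np.abs(ii - i), np.abs(jj - j))
            D[i, j] = alpha * E[i, j] + (1 - alpha) * np.max(E * gamma ** cheb)
    return D
```

A pixel one step away from an isolated edge of strength 1 receives (1 − α) · γ ≈ 0.653, which is what rewards near-misses in the objective.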
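The iterative grid search over the six calibration parameters can be sketched as follows. The step size, search span, and stopping rule are my assumptions, and `score` stands in for the objective J (the sum of smoothed edge values hit by projected depth discontinuities).

```python
import itertools
import numpy as np

def grid_search(score, c0, step=0.1, span=1):
    """One pass: perturb each of the six calibration parameters (x, y, z,
    yaw, pitch, roll) around c0 on a grid and keep the best candidate."""
    best = np.array(c0, dtype=float)
    best_score = score(best)
    offsets = [np.arange(-span, span + 1) * step] * 6
    for d in itertools.product(*offsets):
        cand = np.array(c0, dtype=float) + np.array(d)
        s = score(cand)
        if s > best_score:
            best, best_score = cand, s
    return best, best_score

def refine(score, c0, step=0.1, span=1, iters=20):
    """Repeat the grid search from the previous winner until the score
    stops improving -- this only converges if c0 is already close."""
    c = np.array(c0, dtype=float)
    s = score(c)
    for _ in range(iters):
        c_new, s_new = grid_search(score, c, step, span)
        if s_new <= s:
            break
        c, s = c_new, s_new
    return c
```

On a toy concave objective the search walks straight to the optimum, mirroring the assumption in the text that the manual guess is near the true calibration.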
Figure 5. A colored map of the scene, fusing all three sensor inputs.

3.5. Filtering Out the Lane Markings

While I originally believed that I needed to locate the ground plane to find lane markings, they can be isolated from the rest of the point cloud by thresholding. We know that in all of our test data the lanes are 1.5 meters below the Velodyne and approximately 1.5 meters on either side. Lane markings are also highly reflective, so points with an intensity value less than 60 can be discarded.⁵ With this filter, I was able to see all reflective markings within about 50 meters of the car. Figure 6 shows my results for a single point cloud.

Figure 6. Lanes can be easily filtered from the surrounding asphalt.

Using the integrated maps created in Section 3.2 and the filters described above, I projected 5 seconds of driving onto one frame of video. Applying the filters to all frames in the map without applying another transformation would cause errors. Specifically, it could remove lane markings from the map if the road curved or changed elevation. To correctly apply the filters to a point cloud captured at time t1, I first transform the cloud to the position at time t0 (where t0 is the point cloud closest to the camera timestamp). Next, I apply the filter and transform back to t1. Applying the filter after the transformation to t0 ensures that all clouds are filtered correctly. Figure 7 shows my results filtering out all lanes from the highway map. Alignment errors are within the 2.22-meter range explained in Section 3.3.

Figure 7. Lanes are filtered from the point cloud map of a highway.

3.6. Clustering Lane Markings

The results in Figure 7 are promising, but also noisy. To improve my labels, I used hierarchical clustering to group lanes together. Hierarchical clustering calculates the distance from one point to all other points and groups points that are within a certain distance threshold of each other. I weight the distance function to reward points that lie along the axis parallel to the direction of travel. This ensures that points from different lanes are not grouped together. After all points are grouped, I take the median of each group, which places a single point on each lane marking. I plot the projection of the points using the same calibration matrix obtained in Section 3.4. The results of clustering for a single lane are shown in Figure 8.

Figure 8. Points are clustered to give one point per lane marking.

⁵ For reference, the asphalt has a reflectivity between 0 and 5.
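The thresholding filter in Section 3.5 reduces to a few boolean masks. A sketch, assuming the Velodyne sits at the origin with z pointing up and y pointing left; the 0.2 m height tolerance and 3.0 m lateral window are my own illustrative choices, while the 1.5 m height and the intensity cutoff of 60 come from the text.

```python
import numpy as np

def filter_lane_points(points, intensity):
    """Keep points that look like lane markings: near the ground plane
    (~1.5 m below the sensor), close to the car laterally, and highly
    reflective (asphalt returns 0-5, markings return 60 or more)."""
    y, z = points[:, 1], points[:, 2]
    near_ground = np.abs(z + 1.5) < 0.2   # tolerance assumed
    near_car = np.abs(y) < 3.0            # lateral window assumed
    reflective = intensity >= 60
    return points[near_ground & near_car & reflective]
```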
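The clustering step can be approximated with a simple single-linkage grouping, a stand-in for the report's hierarchical clustering. The union-find bookkeeping, the choice of x as the direction of travel, and the weight and threshold values are my assumptions; the per-group median matches the text.

```python
import numpy as np

def cluster_lane_points(points, threshold=0.5, along_weight=0.25):
    """Group filtered lane points and reduce each group to its median.

    Distance along the direction of travel (x here) is down-weighted so the
    dashes of one lane line chain together, while points from neighbouring
    lanes (offset laterally) stay in separate groups.
    """
    n = len(points)
    parent = list(range(n))

    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]   # path halving
            a = parent[a]
        return a

    w = np.array([along_weight, 1.0, 1.0])
    for i in range(n):
        # Weighted distances from point i to every later point.
        d = np.linalg.norm((points[i + 1:] - points[i]) * w, axis=1)
        for j in np.nonzero(d < threshold)[0] + i + 1:
            ra, rb = find(i), find(int(j))
            if ra != rb:
                parent[rb] = ra             # merge the two groups

    groups = {}
    for i in range(n):
        groups.setdefault(find(i), []).append(i)
    # One representative point per lane marking: the per-group median.
    return np.array([np.median(points[idx], axis=0) for idx in groups.values()])
```

Two dashed lines 3.5 m apart collapse to two representative points, one per line, because the down-weighted longitudinal gaps fall under the threshold while the lateral gap does not.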
4. Experimental Results

Using the Velodyne to label lanes works significantly better than the traditional computer vision approach. Looking for white and yellow clusters on the ground failed for lanes that were far away or poorly lit. A significant advantage of my implementation is that it can label lanes far ahead of the vehicle with high accuracy. It can also label lanes that are occluded by cars or changes in elevation. This is very important when training the lane predictor, because it should be able to predict the lane path even when the markers are hidden.

Unfortunately, I cannot compare my results to a ground truth. Building a dataset to compare against would require hand-labeling the 20-hour corpus. Even with hand labeling, a human label may be very different from my system's label, because any label on the lane marker is correct. Qualitatively, the output of my system performs better than the previous system. Some example outputs of the lane labeler can be seen at

5. Conclusion

This project combined three data sources to label lanes in a 20-hour dataset. Much of the project involved synchronizing data across independent clocks and calibrating each data source with respect to the others. By combining multiple data sources, I was able to obtain results that far exceeded the accuracy of the previous color-based approach. Other projects can also build on my system. Given the calibrations I determined, cars and road signs can be detected by simply changing the thresholds and clustering parameters.

References

[1] J. Levinson and S. Thrun. Unsupervised calibration for multi-beam lasers. In International Symposium on Experimental Robotics, 2010.
[2] J. Levinson and S. Thrun. Automatic online calibration of cameras and lasers. In Robotics: Science and Systems, 2013.
[3] Mobileye. Mobileye lane departure warning (LDW), Jan
SUPER RESOLUTION FROM MULTIPLE LOW RESOLUTION IMAGES ABSTRACT Florin Manaila 1 Costin-Anton Boiangiu 2 Ion Bucur 3 Although the technology of optical instruments is constantly advancing, the capture of
Monitoring Head/Eye Motion for Driver Alertness with One Camera
Monitoring Head/Eye Motion for Driver Alertness with One Camera Paul Smith, Mubarak Shah, and N. da Vitoria Lobo Computer Science, University of Central Florida, Orlando, FL 32816 rps43158,shah,niels @cs.ucf.edu
Point Cloud Simulation & Applications Maurice Fallon
Point Cloud & Applications Maurice Fallon Contributors: MIT: Hordur Johannsson and John Leonard U. of Salzburg: Michael Gschwandtner and Roland Kwitt Overview : Dense disparity information Efficient Image
3D Building Roof Extraction From LiDAR Data
3D Building Roof Extraction From LiDAR Data Amit A. Kokje Susan Jones NSG- NZ Outline LiDAR: Basics LiDAR Feature Extraction (Features and Limitations) LiDAR Roof extraction (Workflow, parameters, results)
Anamorphic Projection Photographic Techniques for setting up 3D Chalk Paintings
Anamorphic Projection Photographic Techniques for setting up 3D Chalk Paintings By Wayne and Cheryl Renshaw. Although it is centuries old, the art of street painting has been going through a resurgence.
A. OPENING POINT CLOUDS. (Notepad++ Text editor) (Cloud Compare Point cloud and mesh editor) (MeshLab Point cloud and mesh editor)
MeshLAB tutorial 1 A. OPENING POINT CLOUDS (Notepad++ Text editor) (Cloud Compare Point cloud and mesh editor) (MeshLab Point cloud and mesh editor) 2 OPENING POINT CLOUDS IN NOTEPAD ++ Let us understand
Are we ready for Autonomous Driving? The KITTI Vision Benchmark Suite
Are we ready for Autonomous Driving? The KITTI Vision Benchmark Suite Philip Lenz 1 Andreas Geiger 2 Christoph Stiller 1 Raquel Urtasun 3 1 KARLSRUHE INSTITUTE OF TECHNOLOGY 2 MAX-PLANCK-INSTITUTE IS 3
Introduction to Machine Learning Using Python. Vikram Kamath
Introduction to Machine Learning Using Python Vikram Kamath Contents: 1. 2. 3. 4. 5. 6. 7. 8. 9. 10. Introduction/Definition Where and Why ML is used Types of Learning Supervised Learning Linear Regression
Terrain Traversability Analysis using Organized Point Cloud, Superpixel Surface Normals-based segmentation and PCA-based Classification
Terrain Traversability Analysis using Organized Point Cloud, Superpixel Surface Normals-based segmentation and PCA-based Classification Aras Dargazany 1 and Karsten Berns 2 Abstract In this paper, an stereo-based
High-accuracy ultrasound target localization for hand-eye calibration between optical tracking systems and three-dimensional ultrasound
High-accuracy ultrasound target localization for hand-eye calibration between optical tracking systems and three-dimensional ultrasound Ralf Bruder 1, Florian Griese 2, Floris Ernst 1, Achim Schweikard
Field Application Note
Field Application Note Reverse Dial Indicator Alignment RDIA Mis-alignment can be the most usual cause for unacceptable operation and high vibration levels. New facilities or new equipment installations
HIGH-PERFORMANCE INSPECTION VEHICLE FOR RAILWAYS AND TUNNEL LININGS. HIGH-PERFORMANCE INSPECTION VEHICLE FOR RAILWAY AND ROAD TUNNEL LININGS.
HIGH-PERFORMANCE INSPECTION VEHICLE FOR RAILWAYS AND TUNNEL LININGS. HIGH-PERFORMANCE INSPECTION VEHICLE FOR RAILWAY AND ROAD TUNNEL LININGS. The vehicle developed by Euroconsult and Pavemetrics and described
Automatic Traffic Estimation Using Image Processing
Automatic Traffic Estimation Using Image Processing Pejman Niksaz Science &Research Branch, Azad University of Yazd, Iran [email protected] Abstract As we know the population of city and number of
Effective Use of Android Sensors Based on Visualization of Sensor Information
, pp.299-308 http://dx.doi.org/10.14257/ijmue.2015.10.9.31 Effective Use of Android Sensors Based on Visualization of Sensor Information Young Jae Lee Faculty of Smartmedia, Jeonju University, 303 Cheonjam-ro,
A Learning Based Method for Super-Resolution of Low Resolution Images
A Learning Based Method for Super-Resolution of Low Resolution Images Emre Ugur June 1, 2004 [email protected] Abstract The main objective of this project is the study of a learning based method
Build Panoramas on Android Phones
Build Panoramas on Android Phones Tao Chu, Bowen Meng, Zixuan Wang Stanford University, Stanford CA Abstract The purpose of this work is to implement panorama stitching from a sequence of photos taken
Automatic Calibration of an In-vehicle Gaze Tracking System Using Driver s Typical Gaze Behavior
Automatic Calibration of an In-vehicle Gaze Tracking System Using Driver s Typical Gaze Behavior Kenji Yamashiro, Daisuke Deguchi, Tomokazu Takahashi,2, Ichiro Ide, Hiroshi Murase, Kazunori Higuchi 3,
Mobile Mapping. VZ-400 Conversion to a Mobile Platform Guide. By: Joshua I France. Riegl USA
Mobile Mapping VZ-400 Conversion to a Mobile Platform Guide By: Joshua I France Riegl USA Table of Contents Introduction... 5 Installation Checklist... 5 Software Required... 5 Hardware Required... 5 Connections...
Synthetic Sensing: Proximity / Distance Sensors
Synthetic Sensing: Proximity / Distance Sensors MediaRobotics Lab, February 2010 Proximity detection is dependent on the object of interest. One size does not fit all For non-contact distance measurement,
Time Domain and Frequency Domain Techniques For Multi Shaker Time Waveform Replication
Time Domain and Frequency Domain Techniques For Multi Shaker Time Waveform Replication Thomas Reilly Data Physics Corporation 1741 Technology Drive, Suite 260 San Jose, CA 95110 (408) 216-8440 This paper
A NEW SUPER RESOLUTION TECHNIQUE FOR RANGE DATA. Valeria Garro, Pietro Zanuttigh, Guido M. Cortelazzo. University of Padova, Italy
A NEW SUPER RESOLUTION TECHNIQUE FOR RANGE DATA Valeria Garro, Pietro Zanuttigh, Guido M. Cortelazzo University of Padova, Italy ABSTRACT Current Time-of-Flight matrix sensors allow for the acquisition
IP-S3 HD1. Compact, High-Density 3D Mobile Mapping System
IP-S3 HD1 Compact, High-Density 3D Mobile Mapping System Integrated, turnkey solution Ultra-compact design Multiple lasers minimize scanning shades Unparalleled ease-of-use No user calibration required
Real time vehicle detection and tracking on multiple lanes
Real time vehicle detection and tracking on multiple lanes Kristian Kovačić Edouard Ivanjko Hrvoje Gold Department of Intelligent Transportation Systems Faculty of Transport and Traffic Sciences University
INSTRUCTOR WORKBOOK Quanser Robotics Package for Education for MATLAB /Simulink Users
INSTRUCTOR WORKBOOK for MATLAB /Simulink Users Developed by: Amir Haddadi, Ph.D., Quanser Peter Martin, M.A.SC., Quanser Quanser educational solutions are powered by: CAPTIVATE. MOTIVATE. GRADUATE. PREFACE
New Features in TerraScan. Arttu Soininen Software developer Terrasolid Ltd
New Features in TerraScan Arttu Soininen Software developer Terrasolid Ltd Version 013.xxx Computer ID changes in licenses Send new computer ID to Terrasolid if using: Server pool licenses (server ID and
Path Tracking for a Miniature Robot
Path Tracking for a Miniature Robot By Martin Lundgren Excerpt from Master s thesis 003 Supervisor: Thomas Hellström Department of Computing Science Umeå University Sweden 1 Path Tracking Path tracking
Mobile Robot FastSLAM with Xbox Kinect
Mobile Robot FastSLAM with Xbox Kinect Design Team Taylor Apgar, Sean Suri, Xiangdong Xi Design Advisor Prof. Greg Kowalski Abstract Mapping is an interesting and difficult problem in robotics. In order
