Research on the Key Technology of Survey Measurement Image Based on UAV

Ding Li
Health Services Administration, Xi'an Medical University
Xi'an, 710021, Shaanxi, China
e-mail: 370486946@qq.com

Chong Jiao
School of Computer Science and Engineering, Xi'an Technological University
Xi'an, 710021, Shaanxi, China
e-mail: 1342748406@qq.com

Abstract—With the development of computer technology, and especially the emergence of high-resolution image sensors, aerial photogrammetry has come to play an important role in geological survey. Traditional aerial measurements are carried out by large manned aircraft, which collect a large volume of measurement data over a wide shooting range. This approach is suitable for large-area operations, but its hardware and site requirements are demanding and expensive. UAV aerial survey systems are therefore usually used for small-area measurements. However, a UAV is small and fluctuates considerably during flight, so the collected data and images are not accurate enough. By studying fast image matching theory, this paper obtains accurate measurement images, which is of practical significance for actual mapping work and of considerable research value.

Keywords-UAV; Measurement Image; Image Validation; Image Preprocessing; Image Matching; Feature Extraction

I. INTRODUCTION

A quadrotor, or quadrotor UAV, here simply called a UAV, is a type of unmanned aerial vehicle with four propellers arranged in a cross configuration. It can record aerial video with a miniature camera. At present, UAV surveying mainly relies on an aerial camera to capture images. Traditional metric cameras are not only expensive but also require film scanning to obtain digital images, with low shooting quality and long measuring times. In this paper, a non-metric CCD camera is used for image acquisition; the advantages of a CCD camera are its low price, sensor stability, and high sensitivity. A non-metric camera cannot be used for measurement directly because of its large distortion error, so it must be calibrated before aerial photography is carried out.

II. UAV IMAGE PREPROCESSING

The UAV photography system is equipped with a non-metric digital camera, whose performance is unstable and whose interior orientation elements are uncertain, which results in optical distortion errors in aerial photography. Optical distortion errors include radial distortion and decentering distortion. The focal length of the camera is fixed in this system, so the distortion is a systematic error that affects all collected images in the same way. Camera calibration methods include optical laboratory calibration, test-field calibration, and on-the-job calibration. The test field consists of marked points with known space coordinates. During calibration, the test field is photographed with the camera to be calibrated, and the interior orientation elements and the other parameters that affect the shape of the imaging bundle are solved by single-image or multi-image space resection [1]. In this system, the UAV digital camera calibration software Easy Calibrate was used to calibrate a Sony RX100 digital camera in a 2D calibration test field. The calibration items and results are shown in Table I; the coordinate origin is at the lower left corner of the image.

TABLE I. CALIBRATION RESULTS OF THE SONY RX100 CAMERA

Contents of Calibration                      Value of Calibration   Comment
x0                                           -0.008214 mm           Interior orientation elements
y0                                           -0.003216 mm           Interior orientation elements
focal length f                               10.41234 mm            Interior orientation elements
radial distortion coefficient k1             2.12E-10               Radial distortion error coefficients
radial distortion coefficient k2             -8.14E-18              Radial distortion error coefficients
decentering distortion coefficient p1        3.14E-7                Tangential distortion error coefficients
decentering distortion coefficient p2        -1.42E-7               Tangential distortion error coefficients

Image distortion is corrected with the indirect method: the coordinates of the corresponding point on the original (distorted) image are calculated from the coordinates on the corrected image, and the correction is then carried out by grayscale interpolation [2], as shown in Figure 1.

Figure 1. Image distortion correction schematic diagram (corrected image point -> calculation of the corresponding image point coordinates on the distorted image -> grayscale resampling -> gray value assignment)

The corrections of an image point can be calculated with the distortion error correction model:

Δx = (x - x0)(k1 r^2 + k2 r^4) + p1 [r^2 + 2(x - x0)^2] + 2 p2 (x - x0)(y - y0) + α(x - x0) + β(y - y0)   (1)

Δy = (y - y0)(k1 r^2 + k2 r^4) + p2 [r^2 + 2(y - y0)^2] + 2 p1 (x - x0)(y - y0)   (2)

r^2 = (x - x0)^2 + (y - y0)^2   (3)

where
x, y: coordinates of the image point, with the origin at the center of the image;
x0, y0: coordinates of the principal point of the image;
k1, k2: radial distortion error coefficients;
p1, p2: decentering (tangential) distortion error coefficients;
α: non-square scale factor of the pixels;
β: non-orthogonality error coefficient of the CCD array.
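As an illustration of equations (1)-(3), the following C++ sketch evaluates the correction terms for a single image point with the Table I coefficients. It is a minimal sketch, not the paper's implementation: the affine parameters α and β are assumed to be zero because their calibrated values are not listed, and whether Δx, Δy are added or subtracted depends on whether the model is taken to describe the distortion or its correction.

```cpp
#include <cmath>

// Calibrated parameters of the Sony RX100 from Table I. The affine terms
// alpha (non-square pixel scale) and beta (CCD non-orthogonality) are
// assumed to be zero here because their calibrated values are not listed.
struct CameraCalibration {
    double x0 = -0.008214, y0 = -0.003216;   // principal point (mm)
    double f  = 10.41234;                    // focal length (mm)
    double k1 = 2.12e-10,  k2 = -8.14e-18;   // radial distortion
    double p1 = 3.14e-7,   p2 = -1.42e-7;    // decentering distortion
    double alpha = 0.0,    beta  = 0.0;      // affine terms (assumed 0)
};

// Evaluate the distortion corrections of equations (1)-(3) for one image
// point (x, y) given in the image-centered coordinate system.
void distortionCorrection(const CameraCalibration& c, double x, double y,
                          double& dx, double& dy) {
    const double xb = x - c.x0;
    const double yb = y - c.y0;
    const double r2 = xb * xb + yb * yb;                 // equation (3)
    const double radial = c.k1 * r2 + c.k2 * r2 * r2;    // k1*r^2 + k2*r^4

    dx = xb * radial + c.p1 * (r2 + 2.0 * xb * xb)
       + 2.0 * c.p2 * xb * yb
       + c.alpha * xb + c.beta * yb;                     // equation (1)
    dy = yb * radial + c.p2 * (r2 + 2.0 * yb * yb)
       + 2.0 * c.p1 * xb * yb;                           // equation (2)
}
```

In the indirect method of Figure 1, this mapping would be evaluated for every pixel of the corrected image to locate the corresponding position in the distorted image, and the gray value would then be obtained by resampling, for example with bilinear interpolation.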
The position of the camera is calculated with the space resection method, which improves the accuracy of the exterior orientation elements and hence the precision of the geometric calibration.

Attitude control of the UAV relies mainly on the signals of the attitude sensors, which include a tilt sensor and an angular velocity sensor. The tilt sensor is implemented indirectly through a triaxial acceleration sensor [3], whose output signals represent the current accelerations along the three axes. If the UAV were static in space, the acceleration values could simply be converted into the true tilt angles. However, a UAV cannot stay stationary in the air: under the influence of wind it may drift in one direction, and even when the airframe remains level, the output of the acceleration sensor deviates from its center value, causing the control core to misjudge the attitude. To avoid this, a triaxial angular velocity sensor and an ultrasonic range finder are introduced; the accelerations in the X and Y directions are corrected with the three axial angular velocities, the acceleration in the Z direction, and the rate of change of the real-time height, so that the true tilt information is obtained [4].
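The paper does not specify the exact fusion scheme, so the sketch below shows one common possibility, a complementary filter, purely as an illustration: the integrated gyro rates track fast attitude changes, while the gravity direction measured by the accelerometer corrects the slow drift. The sensor interface and the gain ALPHA are assumptions, and the height-rate term from the ultrasonic range finder is omitted for brevity.

```cpp
#include <cmath>

// Complementary filter for tilt estimation (illustrative sketch).
//   ax, ay, az : accelerations along X, Y, Z (m/s^2)
//   gx, gy     : angular velocities about X, Y (rad/s)
//   dt         : sample interval (s)
struct TiltEstimator {
    double roll  = 0.0;   // estimated roll  (rad)
    double pitch = 0.0;   // estimated pitch (rad)
    static constexpr double ALPHA = 0.98;  // illustrative gain: trust gyro more

    void update(double ax, double ay, double az,
                double gx, double gy, double dt) {
        // Tilt angles implied by gravity alone (valid only when the UAV is
        // not accelerating horizontally).
        const double rollAcc  = std::atan2(ay, az);
        const double pitchAcc = std::atan2(-ax, std::sqrt(ay * ay + az * az));

        // Blend the integrated gyro rates with the accelerometer estimate:
        // the gyro term follows fast motion, the accelerometer term removes
        // the slow drift of the integration.
        roll  = ALPHA * (roll  + gx * dt) + (1.0 - ALPHA) * rollAcc;
        pitch = ALPHA * (pitch + gy * dt) + (1.0 - ALPHA) * pitchAcc;
    }
};
```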
III. UNMANNED AERIAL VEHICLE IMAGE MATCHING

UAV image processing generally adopts image matching technology: a matching algorithm is used to identify points of the same name (corresponding points) between two images or among multiple images. Commonly used matching methods fall into two major categories, grayscale-based matching and feature-based matching [5]. In this paper, the SIFT feature matching algorithm is used for high-precision matching of massive data. The SIFT algorithm matches local features of the image and is invariant to translation, rotation, occlusion, and so on, so it is highly stable in practice. The feature matching process is shown in Figure 2.

Figure 2. Feature matching flow chart (scale-space extreme value detection -> precise key point positioning -> determination of the main direction of the key points -> key point descriptor generation -> SIFT feature vector matching)

A. Pyramid Image

A pyramid image is obtained by decomposing the original image into a series of sub-images of different resolutions. The images are sorted by resolution from small to large, forming a set of overlapping images shaped like a pyramid. The matching point is first found in the top-level image; the matching position found there is used as the predicted position in the next layer, the matching result of each layer serves as the initial matching position for the layer below, and the matching results are also used as control for matching other feature points [6]. This top-down, coarse-to-fine process ensures the reliability of the image search. In the pyramid structure, images are represented hierarchically: the top of the pyramid stores the lowest-resolution data, the resolution increases layer by layer toward the bottom, and the bottom stores the highest-resolution data that meets the users' needs. Under a common spatial reference, information is stored and displayed at different resolutions according to user needs, forming a pyramid structure in which resolution and data volume grow from top to bottom. The image pyramid structure is used for image coding and progressive image transmission; it is a typical hierarchical data structure, suitable for the multi-resolution organization of raster and image data, and can also be regarded as a kind of lossy compression of such data.

B. Image Feature Extraction

Feature extraction means using a computer to extract the image information of corresponding (same-name) points, that is, to determine the features that the images have in common. Feature extraction generally relies on the distribution of grayscale in the image, from which the position, shape, and size of the features are determined. The SIFT feature matching algorithm consists of two main parts: extracting feature vectors that are independent of scale and rotation from multiple images, and matching the SIFT feature vectors.

The scale space representation is an area-based expression. As an important concept in scale space theory, the scale space is defined as the convolution of a Gaussian kernel with the remote sensing image. Koenderink and Babaud et al. proved that the Gaussian kernel is the only linear kernel that realizes the scale transformation:

L(x, y, σ) = G(x, y, σ) * I(x, y)   (4)

where L(x, y, σ) is the scale space, G(x, y, σ) is the Gaussian convolution kernel, I(x, y) is the remote sensing image, x and y are the location parameters, and σ is the scale parameter. The Gaussian kernel is

G(x, y, σ) = 1/(2πσ^2) exp(-(x^2 + y^2)/(2σ^2))   (5)

The Gaussian pyramid is built with the scale space function. The scale space functions of two adjacent layers within the same octave of the pyramid differ by the scale ratio between adjacent layers, and from them a difference-of-Gaussian pyramid is established: with the scale ratio between adjacent layers defined as k and the scale factor as σ, the difference-of-Gaussian pyramid is D(x, y, σ) = L(x, y, kσ) - L(x, y, σ). Finally, each sample point is compared with its neighbors at the same scale and at the two adjacent scales of the same octave; if the detected point is a local maximum or minimum, it is taken as a candidate key point of the image at that scale.

IV. EXPERIMENTAL TEST

Visual Studio 2017 was installed on the computer and OpenCV was configured for the experimental test. SIFT feature points are extracted from the images in C++, the two-dimensional feature points are matched with the brute-force matcher (BruteForce Matcher), a threshold is set to filter the matching results, the findHomography function is used with the RANSAC method to eliminate false matches, and the performance of SIFT is evaluated according to these steps.
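A sketch of these steps with OpenCV in C++ is given below. The image file names and the ratio threshold of 0.7 are illustrative assumptions, and cv::SIFT is taken from OpenCV's features2d module (in OpenCV 3.x the equivalent class lives in the xfeatures2d contrib module); the paper's own code may differ in its parameters.

```cpp
#include <opencv2/opencv.hpp>
#include <vector>

int main() {
    // Illustrative file names; any pair of overlapping UAV images would do.
    cv::Mat img1 = cv::imread("uav_left.jpg",  cv::IMREAD_GRAYSCALE);
    cv::Mat img2 = cv::imread("uav_right.jpg", cv::IMREAD_GRAYSCALE);
    if (img1.empty() || img2.empty()) return -1;

    // 1. Extract SIFT feature points and descriptors.
    cv::Ptr<cv::SIFT> sift = cv::SIFT::create();
    std::vector<cv::KeyPoint> kp1, kp2;
    cv::Mat desc1, desc2;
    sift->detectAndCompute(img1, cv::noArray(), kp1, desc1);
    sift->detectAndCompute(img2, cv::noArray(), kp2, desc2);

    // 2. Brute-force matching of the SIFT descriptor vectors.
    cv::BFMatcher matcher(cv::NORM_L2);
    std::vector<std::vector<cv::DMatch>> knn;
    matcher.knnMatch(desc1, desc2, knn, 2);

    // 3. Threshold filtering of the matches (here: Lowe's ratio test).
    std::vector<cv::DMatch> good;
    for (const auto& m : knn)
        if (m.size() == 2 && m[0].distance < 0.7f * m[1].distance)
            good.push_back(m[0]);

    // 4. Eliminate remaining false matches with findHomography + RANSAC.
    std::vector<cv::Point2f> pts1, pts2;
    for (const auto& m : good) {
        pts1.push_back(kp1[m.queryIdx].pt);
        pts2.push_back(kp2[m.trainIdx].pt);
    }
    cv::Mat inlierMask;
    if (good.size() >= 4)
        cv::findHomography(pts1, pts2, cv::RANSAC, 3.0, inlierMask);

    // 5. Keep and draw only the inlier matches for inspection.
    std::vector<cv::DMatch> inliers;
    for (size_t i = 0; i < good.size(); ++i)
        if (!inlierMask.empty() && inlierMask.at<uchar>(static_cast<int>(i)))
            inliers.push_back(good[i]);
    cv::Mat vis;
    cv::drawMatches(img1, kp1, img2, kp2, inliers, vis);
    cv::imwrite("matches.jpg", vis);
    return 0;
}
```

Filtering with a homography assumes that the two views are approximately related by a plane projective transformation, which is usually a reasonable approximation for near-vertical UAV photographs of relatively flat terrain.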
After filtering by the threshold and by the matrix estimated with RANSAC, the matched points basically cover the key areas of the image, their distribution is relatively uniform, and the error is low, which meets the matching requirements of the system. The matching test result is shown in Figure 3.

Figure 3. Feature matching experiment

V. CONCLUSION

At present, countries all over the world are stepping up the development of UAVs. Compared with manned aircraft, UAVs have the advantages of small size, low cost, ease of use, low environmental requirements, and strong survivability. Western countries have applied new and high technologies to UAV development, using advanced signal processing and communication technologies to improve the image transmission and data transmission speeds of UAVs. In this paper, based on the theory of aerial survey image preprocessing, the image feature extraction method is studied. Visual C++ is used to implement SIFT extraction of image feature points, the two-dimensional feature point matching method BruteForce Matcher is used to perform image region matching, and the RANSAC method is set through the findHomography function to eliminate false matches. A satisfactory matching result is obtained in the experiments. However, owing to the limitations of the UAV platform, the system is still difficult to compare with professional image processing systems. With the development of communication and control technology, UAVs will surely see breakthrough applications and development in the field of low-altitude measurement in the future.

REFERENCES
[1] Yu Sheng, Wen Caiqiang, Liu Shangguo. The precision measurement technology of digital camera indoor three-dimensional field [J]. 2012.01.
[2] Tian Lei, Ma Ran. Research on the calibration method of unmanned aerial vehicle (UAV) [J]. 2016.07.
[3] Li Xiang, Wang Yongjun, Li Zhi. Misalignment error and correction of the vector sensor of air position system [J]. Journal of Sensor Technology, 2017.02.
[4] C. Harris, M. J. Stephens. A combined corner and edge detector [C]. Proc. of the 4th Alvey Vision Conf., 1988: 147-152.
[5] D. G. Lowe. Distinctive image features from scale-invariant keypoints [J]. International Journal of Computer Vision, 2004, 60(2): 91-110.
[6] D. G. Lowe. Distinctive image features from scale-invariant keypoints [J]. International Journal of Computer Vision, 2004, 60(2): 91-110.