Hindawi Journal of Sensors, Volume 2018, Article ID 3202761, 2 pages. https://doi.org/10.1155/2018/3202761


Editorial
Machine Vision Sensors

Oleg Sergiyenko,1 Vera Tyrsa,2 Wendy Flores-Fuentes,1 Julio Rodriguez-Quiñonez,1 and Paolo Mercorelli3

1 Autonomous University of Baja California, Mexicali, BC, Mexico
2 Automobile and Highway University, Kharkiv, Ukraine
3 Leuphana University, Lueneburg, Germany

Correspondence should be addressed to Oleg Sergiyenko; [email protected]

Received 8 January 2018; Accepted 9 January 2018; Published 27 February 2018

Copyright © 2018 Oleg Sergiyenko et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Advances in mechatronics and robotics over the last 50 years have been driven by the sensory inputs available to systems, providing them with the capacity of sight; when machine visual interpretation is carried out, it increases a system's intelligence. While visual perception promises the greatest improvement to machines, it also presents the greatest challenge, particularly in real-world situations.

This special issue is intended to present and discuss breakthrough technological developments that are expected to revolutionize contributions to sensors, signal processing, and algorithms in machine vision, control, and navigation. It provides a reference on machine vision supporting techniques and 3D reconstruction for researchers and engineers. The contributions focus on sensors for vision systems, intelligent navigation algorithms, and machine motion controllers, particularly in applications to unmanned aerial vehicles, drones, and autonomous mobile and humanoid robots. We received several submissions, and after two rounds of rigorous review, 10 papers were accepted.

In the first paper, “A Review of Deep Learning Methods and Applications for Unmanned Aerial Vehicles,” A. Carrio et al. perform a thorough review of recently reported uses and applications of deep learning for UAVs, including the most relevant developments as well as their performance and limitations. A detailed explanation of the main deep learning techniques is also provided.

In the paper “An Efficient Calibration Method for a Stereo Camera System with Heterogeneous Lenses Using an Embedded Checkerboard Pattern,” P. Rathnayaka et al. present two approaches to calibrating a stereo camera setup with heterogeneous lenses: a wide-angle fish-eye lens and a narrow-angle lens on the left and right sides, respectively. Instead of using a conventional black-and-white checkerboard pattern, they design an embedded checkerboard pattern by combining two differently colored patterns.
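Calibrating such a pair is complicated by the fact that the two lenses follow different projection models. As an illustration only (the paper may use a different fish-eye model), a narrow lens is commonly described by the pinhole relation r = f·tan(θ), while a fish-eye lens is often modeled by the equidistant relation r = f·θ:

```python
import numpy as np

def pinhole_radius(f, theta):
    """Image radius of a ray at incidence angle theta under the pinhole model."""
    return f * np.tan(theta)

def fisheye_equidistant_radius(f, theta):
    """Image radius under the equidistant fish-eye model."""
    return f * theta

f = 4.0  # illustrative focal length in mm
for deg in (10, 40, 70):
    th = np.radians(deg)
    print(deg, round(pinhole_radius(f, th), 3), round(fisheye_equidistant_radius(f, th), 3))
```

At small angles the two models nearly agree, but at wide angles the pinhole radius diverges, which is why a single calibration model cannot serve both lenses.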

In the paper “Automatic Detection Technology of Sonar Image Target Based on the Three-Dimensional Imaging,” W. Kong et al. present a set of automatic detection technologies for sonar images based on 3D imaging. The process consists of two steps: the approximate position of the object is first found by calculating the signal-to-noise ratio of each target, and water bodies and strata are then separated by maximizing the variance between clusters (Otsu's method).
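The Otsu step selects the threshold that maximizes the between-cluster variance. As a minimal 1D sketch (not the authors' sonar pipeline), it can be written as:

```python
import numpy as np

def otsu_threshold(values):
    """Return the cut point that maximizes between-cluster variance (Otsu)."""
    values = np.asarray(values, dtype=float)
    best_t, best_var = None, -1.0
    for t in np.unique(values)[:-1]:  # every candidate cut point
        lo, hi = values[values <= t], values[values > t]
        w_lo, w_hi = lo.size / values.size, hi.size / values.size
        between = w_lo * w_hi * (lo.mean() - hi.mean()) ** 2
        if between > best_var:
            best_t, best_var = t, between
    return best_t

# Two well-separated intensity clusters: Otsu cuts between them.
sample = np.array([10, 12, 11, 13, 200, 210, 205, 198])
t = otsu_threshold(sample)  # → 13
```

On a sonar image, the same criterion applied to pixel intensities separates the two classes (e.g., water body versus stratum) without a manually chosen threshold.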

In the paper “Vision System of Mobile Robot Combining Binocular and Depth Cameras,” Y. Yang et al. propose a 3D reconstruction system combining binocular and depth cameras to improve the precision of the 3D reconstruction. The whole system consists of two identical color cameras, a TOF depth camera, an image processing host, a mobile robot control host, and a mobile robot; finally, a double-thread processing method is applied to improve the efficiency of the system.
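The binocular half of such a system recovers depth by triangulation on a rectified pair, Z = f·B/d. A minimal sketch with illustrative numbers (not the paper's parameters):

```python
def stereo_depth(focal_px, baseline_m, disparity_px):
    """Depth from a rectified binocular pair: Z = f * B / d."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

# A point with 64 px of disparity, 700 px focal length, 12 cm baseline:
z = stereo_depth(700, 0.12, 64)  # → 1.3125 m
```

Since disparity shrinks with distance, binocular precision degrades in the far field, which is one motivation for fusing it with a TOF depth camera.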

In the paper “Aphid Identification and Counting Based on Smartphone and Machine Vision,” S. Xuesong et al. present the design of counting software that runs on smartphones for real-time enumeration of aphids; the counting accuracy is above 95% in the greenhouse and reaches 92.5% outdoors.
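A common way counting software enumerates objects after segmentation is connected-component labeling; the following is a minimal, hypothetical sketch on a binary mask, not the app's actual pipeline:

```python
def count_blobs(grid):
    """Count 4-connected components of 1s in a binary grid (segmented objects)."""
    rows, cols = len(grid), len(grid[0])
    seen = [[False] * cols for _ in range(rows)]
    count = 0
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] and not seen[r][c]:
                count += 1
                stack = [(r, c)]  # flood fill the whole component
                while stack:
                    y, x = stack.pop()
                    if 0 <= y < rows and 0 <= x < cols and grid[y][x] and not seen[y][x]:
                        seen[y][x] = True
                        stack.extend([(y + 1, x), (y - 1, x), (y, x + 1), (y, x - 1)])
    return count

# A toy segmented image containing three separate blobs:
image = [
    [1, 1, 0, 0],
    [0, 1, 0, 1],
    [0, 0, 0, 1],
    [1, 0, 0, 0],
]
n = count_blobs(image)  # → 3
```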


In the paper “Multisource Data Fusion Framework for Land Use/Land Cover Classification Using Machine Vision,” S. Qadri et al. present the design of a novel framework for multispectral and texture feature-based data fusion to correctly identify land use/land cover types. The study describes the fusion of data for five land use/cover types derived from remote sensing, namely, bare land, fertile cultivated land, desert rangeland, green pasture, and Sutlej basin river land. The accuracies obtained using a multilayer perceptron for texture data, multispectral data, and fused data were 96.67%, 97.60%, and 99.60%, respectively.
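Feature-level fusion of this kind is often implemented by concatenating the per-sample texture and multispectral feature vectors before a single classifier sees them. A toy sketch with hypothetical feature values (the paper's actual features and dimensions may differ):

```python
import numpy as np

# Hypothetical per-sample feature vectors extracted from the same pixels.
texture_features = np.array([[0.2, 0.7], [0.9, 0.1]])                  # e.g. texture statistics
multispectral_features = np.array([[0.4, 0.3, 0.8], [0.6, 0.5, 0.2]])  # e.g. band values

# Feature-level fusion: stack the modalities into one vector per sample,
# which is then fed to a single classifier such as a multilayer perceptron.
fused = np.hstack([texture_features, multispectral_features])
print(fused.shape)  # (2, 5)
```

The reported accuracy gain of the fused data over either modality alone reflects that the concatenated vector carries complementary information.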

In the paper “A Nonlocal Method with Modified Initial Cost and Multiple Weight for Stereo Matching,” S. Gao et al. present a new nonlocal cost aggregation method for stereo matching in which a modified initial cost is used and a robust and reasonable tree structure is developed. The proposed method was tested on the Middlebury datasets, and the experimental results show that it surpasses the classical nonlocal methods.

In the paper “Global Measurement Method for Large-Scale Components Based on a Multiple Field of View Combination,” Y. Zhang et al. propose a noncontact and flexible global measurement method combining multiple fields of view (FOV); the measurement system consists of two theodolites and a binocular vision system with a transfer mark. The authors also propose a new global calibration method to solve the coordinate system unification issue of the different instruments in the measurement system.
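Unifying the coordinate systems of different instruments typically amounts to estimating the rigid transform (rotation plus translation) that maps points of a shared target, such as the transfer mark, from one frame to the other. One standard estimator for this (an assumption here, not necessarily the paper's method) is the SVD-based Kabsch solution:

```python
import numpy as np

def rigid_transform(src, dst):
    """Least-squares R, t with dst ≈ src @ R.T + t (Kabsch algorithm)."""
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)        # cross-covariance of centered points
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # guard against reflections
    R = Vt.T @ D @ U.T
    t = dst_c - R @ src_c
    return R, t

# Synthetic transfer-mark points seen in two instrument frames:
rng = np.random.default_rng(0)
P = rng.random((5, 3))
theta = np.radians(30)
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([0.5, -0.2, 1.0])
Q = P @ R_true.T + t_true
R, t = rigid_transform(P, Q)
```

Once R and t are recovered, measurements from one instrument can be expressed in the other's frame, which is the essence of the global calibration step.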

In the paper “Automated Recognition of a Wall between Windows from a Single Image,” Y. Zhang et al. propose a method to automatically recognize the wall between windows from a single image. The method starts from the detection of line segments with further selection. The color features of the two sides are employed to detect line segments as candidate window edges. Finally, the images are segmented into several subimages, window regions are located, and the wall between the windows is then located.

In the paper “Visual Vehicle Tracking Based on Deep Representation and Semisupervised Learning,” Y. Cai et al. propose a semisupervised tracking algorithm that uses deep representation and transfer learning. A 2D multilayer deep belief network is trained with a large number of unlabeled samples. The nonlinear mapping point at the top of this network is extracted as the feature dictionary. The feature dictionary is then utilized to transfer-train and update a deep tracker, with the tracked vehicles as the positive training samples. Finally, a particle filter is used to estimate the vehicle position.
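The final particle-filter step can be illustrated with a minimal 1D bootstrap filter; the motion model and noise parameters below are hypothetical, not the authors':

```python
import numpy as np

rng = np.random.default_rng(42)

def particle_filter_step(particles, measurement, motion=1.0,
                         process_noise=0.5, meas_noise=1.0):
    """One predict/update/resample cycle of a bootstrap particle filter."""
    # Predict: propagate every particle through the motion model plus noise.
    particles = particles + motion + rng.normal(0, process_noise, particles.size)
    # Update: weight each particle by its Gaussian measurement likelihood.
    weights = np.exp(-0.5 * ((measurement - particles) / meas_noise) ** 2)
    weights /= weights.sum()
    # Resample: draw particles in proportion to their weights.
    idx = rng.choice(particles.size, particles.size, p=weights)
    return particles[idx]

# Track a target moving +1 unit per step, observed with unit noise.
particles = rng.uniform(0, 10, 500)
true_pos = 5.0
for _ in range(20):
    true_pos += 1.0
    z = true_pos + rng.normal(0, 1.0)
    particles = particle_filter_step(particles, z)
estimate = particles.mean()
```

In the tracker, the same cycle runs in image coordinates, with the deep tracker's confidence supplying the measurement likelihood instead of the Gaussian used here.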

Acknowledgments

The guest editorial team would like to thank the authors of all the papers submitted to this special issue. We also want to thank the institutions to which we are affiliated, especially the Autonomous University of Baja California. Finally, the editors also wish to thank the anonymous reviewers, some of whom helped with multiple review assignments. We hope that you will enjoy reading this special issue focused on machine vision.

Oleg Sergiyenko
Vera Tyrsa
Wendy Flores-Fuentes
Julio Rodriguez-Quiñonez
Paolo Mercorelli
