UAV Emergency Landing Site Selection System using Machine Vision

This paper addresses the problem of Unmanned Aerial Vehicle (UAV) emergency scenarios in which a forced or emergency landing becomes imperative. A forced landing becomes crucial when a system failure impacts flight safety and the UAV is unable to fly back to the emergency landing runway. Such failures include data link loss, GPS failure, and engine or flight-surface failure. A forced landing must be performed on a safe landing site, such as a plain surface, open field, or ground. The first step in accomplishing a safe forced landing is to search for and select a safe landing site. This article presents a system design that assists the UAV in selecting a safe landing site free of obstacles, buildings, and trees. The proposed design uses computer vision and machine learning techniques to classify feasible and non-feasible landing sites. The proposed algorithms also handle low-lighting conditions caused by clouds. The system has been designed and simulated in MATLAB, and promising results have been achieved with very low processing time and computational power.

Keywords—UAV; Forced Landing; Emergency Landing; Machine Vision; Machine Learning; UAV GPS Failure; K-Nearest Neighbor.


I. INTRODUCTION
Recently a big boom has been observed in the application of Unmanned Aerial Systems (UAS) in commercial and military setups, including surveillance, special operations and aerial support of ground troops, aerial coverage of public gatherings, damage estimation and relief-operation management, telecom relays for coverage in remote areas, oil and gas exploration, and geographical surveys. Nowadays small UAVs are also used to deliver purchased products such as packages, food items, or other goods to the buyer. Research has revealed that in general aviation, i.e. manned aircraft, the accident rate is 1 per 100,000 flight hours, while for UAVs the accident rate exceeds 1 per 1,000 flight hours. This clearly shows that the accident rate among unmanned aircraft is 100 times higher than among manned aircraft [1]. The main failures that may cause accidents are mechanical surface failures and data link loss. There are a number of ways to ensure that if the data link of a UAV fails, the aircraft can still be controlled until the link with the control station is re-established. One way, implemented in a number of UAV systems, is for the aircraft to start flying in a circular pattern with a pre-programmed radius until the link recovers. If the link is not recovered within a specified time, almost every UAV is programmed to return to an emergency backup landing strip using GPS navigation. Post-accident investigations have revealed that more than 25% of accidents were caused by data link loss in which the UAV was not able to return to the backup emergency landing site.

II. RELATED WORK
In past years valuable research has been carried out in the fields of computer vision and machine learning to implement emergency or autonomous landing systems. In the design of emergency landing systems, some of the efficient and effective machine learning algorithms used are Support Vector Machines (SVM) and Artificial Neural Networks (ANN), in combination with digital image processing techniques, for selection of an appropriate landing site [2, 3]. These algorithms have different performance constraints: SVM is complex and requires large computational power, whereas ANN requires a large training data set, which corresponds to greater training time. Due to these constraints, neither algorithm can meet the rapidly changing requirements of an emergency landing area selection system. A much simpler machine learning algorithm, K-Nearest Neighbor (KNN), can instead be customized to obtain the desired results. To customize KNN for the best results, techniques such as texture/feature extraction are used in conjunction with it for accurate classification. On the image processing side, edge detection and image segmentation have also been used to select the most suitable area in images taken by the onboard cameras of a UAS [4, 5, 6]. The proposed approach gives an excellent improvement in terms of required computational power and complexity.

III. SYSTEM DESIGN OVERVIEW
The methodology adopted in this system is based on artificial intelligence and image processing techniques. The first step in the proposed system is to acquire imagery of the area the UAV is flying over. The onboard UAV camera takes top-view images of the area lying in its Field of View (FOV), exactly perpendicular to it on the ground, as shown in figure 1. Figure 3 shows the second mode, the Evaluation/Operation mode of the algorithm. During flight, whenever there is an emergency due to any of the failures discussed above, the UAV camera takes a top-view image; the system extracts the useful features from that image, takes the light intensity value from the sensor, and passes both to the trained machine learning algorithm. The algorithm then determines whether the area is feasible for landing or not.

IV. WORKING PRINCIPLE
The emergency landing system proposed in this paper uses the UAV onboard camera, part of the UAV's electro-optical system, to take top-view images of the area directly below it from a height of 1200 m above ground level. The retrieved image is 960 x 576 pixels in size and covers an area of 1000 m x 600 m when taken from the said height or with a corresponding zoom level. A large UAV can easily land in this area [7]. If the UAV is above the said height, it can still retrieve an image with the same specifications, i.e. area and number of pixels, by zooming in to a specific value; UAV cameras usually exhibit good zoom capability. For the training and testing of our algorithm, the images were taken from Google satellite imagery. After acquiring the image, the proposed emergency (often referred to as forced) landing system extracts useful features from the image and creates a feature vector using simple and efficient image processing techniques. A modified, multi-group based machine learning algorithm devised by the authors uses this feature vector: in training mode, the user/operator tags the images as Clear, Partially Clear, or Unclear for safe landing, while in Evaluation/Operation mode the algorithm classifies the image from its prior training experience. Clear areas, as shown in figure 4, are those which visibly have no obstacles such as trees, houses/buildings, or mountains anywhere in the image; they include plain grounds, smooth farms, wide fields, etc.
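The image size and ground coverage quoted above imply a ground sampling distance of roughly 1 m per pixel in both axes; a quick sanity check of that arithmetic (a sketch, using only the numbers stated in the text):

```python
# Ground sampling distance (GSD) implied by the figures quoted in the text:
# a 960 x 576 px image covering 1000 m x 600 m of ground.
img_w_px, img_h_px = 960, 576
ground_w_m, ground_h_m = 1000, 600

gsd_x = ground_w_m / img_w_px   # metres of ground per pixel, horizontally
gsd_y = ground_h_m / img_h_px   # metres of ground per pixel, vertically

print(round(gsd_x, 3), round(gsd_y, 3))  # both ~1.04 m/px
```

The two axes give the same scale, which is consistent with an undistorted top-view image.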
Figure 5 shows areas that contain open fields but with some obstacles such as houses in the image; these are referred to as Partially Clear areas. These areas are the backup sites for forced landing in case Clear areas are not available in the near vicinity. A UAV can intelligently land in the open fields of these areas while avoiding obstacles, especially buildings/houses, as landing/crashing on a house may lead to casualties. Unclear areas, as the name suggests, are those which do not have any clear field for forced landing, such as populated city areas and wooded or hilly areas, as depicted in figure 6.

A. Feature Vector Generation
The feature vector is generated using basic and a few advanced image processing techniques. Only those image features are selected which give significant data to distinguish a clear area from an unclear one. Histogram thresholding is a very basic technique, but it assists in extracting very good features from the image that help differentiate clear and unclear landing sites [8]. Figure 7(a, b, c) shows the histograms of the Clear, Partially Clear, and Unclear images in figures 4(a), 5(a), and 6(a) respectively.
Our system uses 8-bit grayscale images, so any pixel value lies between 0 (black) and 2^8 − 1 = 255 (white). Histogram thresholding gives a 2-dimensional graph with intensity values on the x-axis and the total number of pixels in the image having each intensity value on the y-axis. Clear landing sites have no obstacles, so their color intensity distribution is very narrow compared to unclear ones, and partially clear sites lie in between. Features extracted from the histogram can therefore help differentiate a good site from a bad one. Scanning from zero on the x-axis, the intensity value at which the pixel count on the y-axis first exceeds a specific threshold is saved in a variable named Histo_A, and the intensity value at which the count falls back below that threshold is named Histo_B. Empirical results have shown that optimal results are obtained when the threshold is set to 1250 pixels. Histo_Spread is their difference, i.e. the spread of the distribution. The peak value on the y-axis, the maximum number of pixels at any intensity value on the x-axis, is named Histo_Peak. Histo_A of clear images is larger than that of unclear images, while Histo_Spread is smaller in Clear images than in Unclear ones, as shown in figure 7.
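The four histogram features can be sketched as follows (the authors work in MATLAB; this is a plain-Python sketch with an illustrative pixel list, and the 1250-pixel threshold is the empirical value quoted above):

```python
# Sketch of the four histogram features: Histo_A, Histo_B, Histo_Spread,
# Histo_Peak, assuming an 8-bit grayscale image given as a flat list of
# pixel intensities (0..255).
THRESHOLD = 1250  # empirical pixel-count threshold from the text

def histogram_features(pixels):
    # Build the 256-bin histogram: hist[v] = number of pixels with intensity v.
    hist = [0] * 256
    for p in pixels:
        hist[p] += 1

    # Histo_A: first intensity (scanning from 0) whose count exceeds the threshold.
    histo_a = next((v for v in range(256) if hist[v] > THRESHOLD), None)
    # Histo_B: intensity where the count falls back below the threshold.
    histo_b = None
    if histo_a is not None:
        histo_b = next((v for v in range(histo_a, 256) if hist[v] < THRESHOLD), 255)

    histo_spread = (histo_b - histo_a) if histo_a is not None else 0
    histo_peak = max(hist)  # height of the tallest bin
    return histo_a, histo_b, histo_spread, histo_peak

# Toy example: a narrow intensity distribution, as a Clear site would produce.
pixels = [120] * 2000 + [125] * 2000 + [180] * 300
a, b, spread, peak = histogram_features(pixels)
print(a, b, spread, peak)  # 120 121 1 2000
```

A Clear image concentrates its pixels in a few bins, so Histo_Spread stays small, matching the behaviour described in the text.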
Figures 8-11 show scatter plots visualizing the four features extracted using the histogram technique. Blue circles represent Clear landing site image features, while Unclear ones are shown by red crosses. The Canny edge detection algorithm is used to detect a wide range of edges in an image [9]. Its advantage over other well-known edge detection algorithms is that it gives better results even in noisy conditions [10]. The flow of Canny edge detection is shown in figure 12. It has been empirically tested that a threshold of 0.5 for edge detection gives the best results in our system. In almost every case, the more edges in an image, the worse the area is for landing. Edge detection results are shown in figure 13: white represents the edges in the image, while black shows clear areas. It is obvious from the results that the number of edges in an Unclear site is much greater than in a Clear one. The total number of edges is a good attribute for classification; we call this attribute Edge_Count. If the edge count is also measured in the centre of the landing area, it gives a second-level check that the site where the UAV is going to land is definitely clear; this attribute is named Edge_Centre. Scatter plots of sample data for these two edge-detection features are shown in figure 14. It is clear from the plots that, using these features, a clear classification boundary could be drawn. Spatial convolution is a very useful technique for performing several image processing tasks. A kernel or mask is a 2-dimensional window with specific values that is convolved with the image under processing to perform different operations. In our case a kernel of size 100 x 100 is used, with all elements set to 1 (i.e. white), and it is convolved with the edge-detection result, e.g. the images shown in figure 13. If the convolution sum over a window is zero, the corresponding 100 x 100 pixel area does not contain any edge, so the area is marked white (all ones); if the area contains any edge, it is marked black (all zeros). We name this feature Mask_Op. Its scatter plot for feature visualization is shown in figure 15. Another feature extracted from the masked image is the count of total objects (white patches), named No_Objects. If the number of objects in the image increases beyond a specific value, clear areas are present but only in small, non-continuous patches, which makes the site unclear for landing. If the number of objects is zero, no feasible landing areas are available in the image, so it is also an unclear site. This attribute's value should therefore lie between these extremes, and in combination with the other features it is used to classify the images.
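The Mask_Op step can be sketched as below: an all-ones kernel is slid over the binary edge map, and a window is marked clear (white, 1) only when its convolution sum is zero, i.e. it contains no edge pixels. The kernel size is a parameter; the paper uses 100 x 100, while a 2 x 2 toy kernel keeps this sketch small.

```python
# Sketch of the Mask_Op feature: slide a k x k all-ones kernel over the
# binary edge map; a zero convolution sum marks an edge-free (clear) window.
def mask_op(edge_map, k):
    """edge_map: 2-D list of 0/1 (1 = edge). Returns a map of clear windows."""
    h, w = len(edge_map), len(edge_map[0])
    out = [[0] * (w - k + 1) for _ in range(h - k + 1)]
    for i in range(h - k + 1):
        for j in range(w - k + 1):
            # Convolution sum of the k x k all-ones kernel with the edge map:
            # zero means the whole window is edge-free, i.e. a clear patch.
            s = sum(edge_map[i + di][j + dj] for di in range(k) for dj in range(k))
            out[i][j] = 1 if s == 0 else 0
    return out

edges = [
    [0, 0, 0, 1],
    [0, 0, 0, 0],
    [0, 0, 0, 0],
]
print(mask_op(edges, 2))  # [[1, 1, 0], [1, 1, 1]]
```

Only the top-right window, which overlaps the single edge pixel, comes out black; the rest of the map is clear.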
Extraction of the previous feature shows the need for another feature that computes the size of the objects detected in the masked image, so that the size of the clear area can be found. This computed size determines whether the area is adequate for a safe landing. The Object_Boundary feature is therefore defined as the size of the boundary in pixels; the boundary length of the largest object among all objects is retained in the variable. Clear landing sites have larger boundary sizes than Unclear ones. Feature visualizations of the attributes No_Objects and Object_Boundary are plotted as scatter plots in figures 16(a) and 16(b) respectively. All nine features discussed above are tabulated with brief details in table 1.
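Extracting No_Objects and Object_Boundary from the masked binary image amounts to a connected-component pass; a sketch follows, assuming 4-connectivity and counting as boundary those object pixels that touch a black pixel or the image border (the paper does not specify these details):

```python
# Sketch of No_Objects / Object_Boundary: count connected white patches in the
# masked image and measure the boundary length (in pixels) of the largest one.
from collections import deque

def object_features(mask):
    h, w = len(mask), len(mask[0])
    seen = [[False] * w for _ in range(h)]
    n_objects, max_boundary = 0, 0
    for y in range(h):
        for x in range(w):
            if mask[y][x] == 1 and not seen[y][x]:
                n_objects += 1
                boundary = 0
                q = deque([(y, x)])
                seen[y][x] = True
                while q:  # flood-fill one white patch
                    cy, cx = q.popleft()
                    on_edge = False
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = cy + dy, cx + dx
                        if 0 <= ny < h and 0 <= nx < w:
                            if mask[ny][nx] == 1:
                                if not seen[ny][nx]:
                                    seen[ny][nx] = True
                                    q.append((ny, nx))
                            else:
                                on_edge = True  # touches a black pixel
                        else:
                            on_edge = True      # touches the image border
                    if on_edge:
                        boundary += 1
                max_boundary = max(max_boundary, boundary)
    return n_objects, max_boundary

mask = [
    [1, 1, 0, 0],
    [1, 1, 0, 1],
    [0, 0, 0, 1],
]
print(object_features(mask))  # (2, 4)
```

The larger patch has a boundary of 4 pixels, consistent with the text's observation that bigger clear areas yield larger Object_Boundary values.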

Histo_A
Intensity value on the x-axis where the pixel count first exceeds the threshold value

Histo_B
Intensity value on the x-axis where the pixel count falls back below the threshold value

Histo_Spread
The width of the histogram envelope

Histo_Peak
The peak value on the y-axis

Edge_Count
Total count of edges in the image

Edge_Centre
Total edges in the centre of the landing area

Mask_Op
Result of convolution with the mask/kernel

No_Objects
Total number of objects in the post-masked image

Object_Boundary
Length of the maximum object boundary in pixels

B. Multi-Group based Machine Learning Algorithm
Once the complete feature vector of nine entities is generated, the algorithm is trained in training mode. Three parameters are passed to the multi-group based machine learning algorithm: the feature vector, the class label, and the light intensity value from a light-dependent resistor (LDR) transducer. The third parameter, used to distinguish between different light intensities, is very helpful in improving results in low-lighting conditions, especially in cloudy weather. In cloudy weather the dim lighting reduces the contrast in the image, which results in poor feature extraction and induces misclassification. The light intensities are divided into three groups: Bright-Light conditions (group 1), Normal-Light conditions (group 2), and Low-Light conditions (group 3), defined respectively as bright sunlight; minimal or no sunlight but good light intensity (the case with a few clouds, or just after sunrise or before sunset); and the dark-cloud case, when clouds block a significant amount of light, leading to low-contrast images as shown in figure 17. In the last case the histogram spread/envelope is shifted towards the left of the histogram plot. Three separate training databases are created, one per light condition, so that, for instance, a bright-light image is saved in and compared against the bright-light image database only. This reduces misclassification significantly. Figure 19 explains the multi-group based machine learning block discussed earlier in figures 2 and 3. The Euclidean distance formula is used to compute the distances between points in the feature space, as shown in equation (1):

d(p, q) = sqrt( (p1 − q1)^2 + (p2 − q2)^2 + ... + (pn − qn)^2 )    (1)
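The multi-group scheme can be sketched as one KNN database per light group, with the Euclidean distance of equation (1) for the neighbour search. The LDR thresholds and the feature vectors below are illustrative assumptions, not values from the paper:

```python
# Sketch of the multi-group KNN classifier: one training database per
# light-intensity group; classification consults only the matching database.
import math
from collections import Counter

def euclidean(p, q):
    # Equation (1): Euclidean distance between two feature vectors.
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

def light_group(ldr_value):
    # Hypothetical LDR thresholds for the three groups in the text:
    # 1 = Bright-Light, 2 = Normal-Light, 3 = Low-Light.
    if ldr_value > 700:
        return 1
    if ldr_value > 300:
        return 2
    return 3

class MultiGroupKNN:
    def __init__(self, k=3):
        self.k = k
        self.db = {1: [], 2: [], 3: []}  # one database per light group

    def train(self, features, label, ldr_value):
        self.db[light_group(ldr_value)].append((features, label))

    def classify(self, features, ldr_value):
        # k nearest neighbours within the matching light-condition database only.
        group = self.db[light_group(ldr_value)]
        nearest = sorted(group, key=lambda s: euclidean(s[0], features))[:self.k]
        return Counter(label for _, label in nearest).most_common(1)[0][0]

knn = MultiGroupKNN(k=3)
knn.train([10, 2], "Clear", 800)      # bright-light samples (toy 2-D features)
knn.train([11, 3], "Clear", 820)
knn.train([50, 40], "Unclear", 810)
print(knn.classify([12, 4], 790))     # prints Clear
```

Because a bright-light query never reaches the low-light database, the contrast shift described above cannot pull in mismatched neighbours.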

C. Landing Site Selection System
In case of emergency, if the UAV finds a Clear site for landing, it executes a forced landing; but if the emergency is time-critical, for instance the data link is lost, followed by GPS failure, and fuel is limited, then the UAV has to autonomously select a landing site immediately. To accomplish this, it can look in its saved memory for the previous n Partially Clear sites in the near vicinity; the value of n depends on how critical the remaining time is. Figure 20 shows the site-selection flow diagram. Once an image is identified as safe for landing, a fully autonomous emergency landing needs to be executed at that location. As discussed earlier, the UAV takes a top-view image of the site, which means it cannot land there without turning back and making a proper landing approach. To accomplish this, the UAV uses its current status to calculate the required parameters: it knows its current speed, altitude, yaw, pitch and roll angles, and airspeed, so based on its recommended descent rate it flies away from the site and then turns back gradually to point at the area where it is going to land. The descent rate should be kept as low as the scenario and the extent of time criticality allow, to ensure a smooth landing and avoid a hard landing, for the safety of the UAV. The proposed algorithms have been extensively tested in the MATLAB computing environment, and a graphical user interface (GUI) has been made for simulation purposes.
In training mode the sample images are used for training; the class/label is manually selected and the light intensity group is set accordingly in the nomenclature of the images. After training is completed, the training file is saved. Only the feature vector of each image is stored in the training file, along with the class/label; this saves a lot of storage memory compared to saving the whole image. The GUI-based software also incorporates a test/Operation mode: any image can be input to the software system and its class/label predicted. As discussed earlier, KNN is used with values of K ranging from 3 to 15, and the accuracy of the results varies accordingly, as shown in figure 21. It is clear that 3-NN gives the maximum accuracy of 91.6%. This accuracy was achieved by training the algorithm generically for all types of areas; it can be increased by training the algorithm for a specific type of area prior to the mission. For the first UAV flight, it can be trained for any type of terrain, or for all common terrains, by manually feeding a sample training file to the UAV.
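The K sweep described above can be sketched as evaluating a held-out labelled set for each odd K from 3 to 15 and recording the accuracy. The training and test points below are synthetic stand-ins, not the paper's data:

```python
# Sketch of the K sweep: measure classification accuracy on a held-out set
# for K = 3, 5, ..., 15. Data is synthetic (toy 2-D feature vectors).
import math
from collections import Counter

train = [([1, 1], "Clear"), ([2, 1], "Clear"), ([1, 2], "Clear"),
         ([8, 8], "Unclear"), ([9, 8], "Unclear"), ([8, 9], "Unclear"),
         ([5, 5], "Partially Clear")]
test = [([1.5, 1.2], "Clear"), ([8.5, 8.5], "Unclear")]

def knn_predict(x, k):
    # Majority vote among the k training samples nearest to x.
    nearest = sorted(train, key=lambda s: math.dist(s[0], x))[:k]
    return Counter(lbl for _, lbl in nearest).most_common(1)[0][0]

for k in range(3, 16, 2):
    acc = sum(knn_predict(x, y_k := k) == y for x, y in test) / len(test)
    print(k, acc)
```

On real feature vectors the accuracy curve would be read off a plot like figure 21; here the sweep merely illustrates the evaluation loop.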
The algorithm is trained using supervised learning. It has two operating modes. The first is the Training mode, in which the features extracted from the image, the class/label (i.e. Clear/Partially Clear/Unclear), and the value from the light intensity sensor are passed to the multi-group based machine learning algorithm. Once adequate training has been done, the algorithm is ready to be tested in emergency scenarios. The training mode is shown in the block diagram in figure 2.

Figure 1. UAV direction and EO TV camera FOV.
Figure 7. (a) Histogram of a Clear landing site image. (b) Histogram of a Partially Clear landing site image. (c) Histogram of an Unclear landing site image.

Figure 12. Basic steps of Canny edge detection.

Figure 16. Scatter plots for feature visualization: (a) No_Objects, the total number of objects in the post-masked image; (b) Object_Boundary, the length of the object's boundary.

Figure 17. Images taken in Low-Light conditions, showing the low-contrast imagery.
Figure 18. Basic flow of the K-Nearest Neighbor (KNN) machine learning algorithm used for classification.

Figure 19. Multi-Group based Machine Learning block.

Figure 20. Landing site selection flow diagram.

Figure 21. Accuracy of the results.

Table 1. Feature vector with brief details.