Face Recognition in Crowds Using Drones
Question:
Discuss intelligent face recognition in crowds using drones.
Drones, also known as unmanned aerial vehicles (UAVs), are aircraft that carry no pilot on board and can be piloted remotely or fly autonomously (Motlagh, Bagaa & Taleb, 2017). They are capable of flying pre-programmed missions. Faces are among the most distinctive features of a person, and drones can identify people through their faces. Face recognition is one of the most popular topics in this field and is regarded as a badge of success in image analysis and understanding (Chamasemani & Affendey, 2013). Face recognition is key to enabling drones to identify individuals within a particular area or crowd: a drone first needs information about its targets before any recognition task can be launched. Face recognition on drones is therefore a vital technical component.
The following sections present a literature review that examines crowd management, face recognition, sense-and-avoid capability, types of drones and night vision, and ends with a summary.

Detection and Classification of Human Crowds
Detection techniques mainly rely on features, appearance and motion, and are used to detect crowds. Motion-based methods are generally categorized as background subtraction, spatio-temporal filtering and optical flow (Penmetsa et al., 2014). Background subtraction segments the moving foreground by analyzing the difference between the current frame and a reference frame; it is most effective with a static or pan-tilt-zoom camera. Spatio-temporal filtering characterizes the motion pattern of a frame sequence, but it is more sensitive to noise and to variation in movement patterns (Porikli et al., 2013). Optical-flow-based techniques estimate the relative movement between the observer and the scene and are therefore more robust to simultaneous motion of both the camera and the targets. Optical flow is one of the most popular methods of motion analysis.
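For illustration, the following Python sketch shows how background subtraction and dense optical flow could be applied to a frame stream with OpenCV. The video file name and all parameter values are placeholders, not taken from the literature reviewed here:

```python
import cv2

cap = cv2.VideoCapture("crowd.mp4")          # placeholder video source
bg_subtractor = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=16)

ret, prev = cap.read()
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

while True:
    ret, frame = cap.read()
    if not ret:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    # Background subtraction: foreground mask of moving pixels
    fg_mask = bg_subtractor.apply(frame)

    # Dense optical flow (Farneback): per-pixel motion between consecutive frames
    flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    prev_gray = gray

cap.release()
```

Background subtraction is cheap but assumes a near-static camera, while dense optical flow tolerates camera motion at a higher computational cost, which mirrors the trade-off described above.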
After the UAV has detected the moving regions, a further step is required to classify human crowds versus non-human objects. Moving objects can be classified on the basis of shape, motion or texture characteristics (Teutsch & Kruger, 2015). Shape-based approaches generally use pattern-recognition techniques that are not robust. Motion-based techniques depend on the key assumption that the target's motion features are distinctive enough to be recognized, and a number of such methods rely on predefined models for recognizing human beings (Howard, 2013). Texture-based methods can overcome these limitations and provide better detection quality and higher accuracy in human classification. The histogram of oriented gradients (HOG) represents an image region as a high-dimensional feature vector computed from its edges.
HOG-based detection typically uses a support vector machine (SVM), a supervised learning model for data analysis that can then be used to classify objects (Zeng, Zhang & Lim, 2016). The combination of HOG features with a linear SVM is a significant advantage of state-of-the-art human-detection systems. UGVs generally capture higher-resolution imagery that provides an upright view of the crowd, and the HOG-based human-detection algorithm offers high fidelity and reports the image position of each detected individual.
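A minimal sketch of such a HOG-plus-linear-SVM detector, using OpenCV's pretrained people model (the image path and detection parameters are placeholders):

```python
import cv2

# HOG descriptor with OpenCV's pretrained linear-SVM pedestrian model
hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

frame = cv2.imread("aerial_frame.jpg")       # placeholder image path
boxes, weights = hog.detectMultiScale(frame, winStride=(8, 8),
                                      padding=(8, 8), scale=1.05)

# Each box gives the image position of a detected person
for (x, y, w, h) in boxes:
    cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
```

Note that this pretrained detector assumes roughly upright pedestrians, which matches the upright-view assumption discussed above.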
Comparison of Face++ and Recognition in Face Detection

Face recognition from a drone generally depends on two factors: distance and angle of depression. A face needs to be detected before it can be recognized. The face-detection performance of Face++, Recognition and OpenCV methods has been compared under various settings of altitude and ground distance between the drone and its targets. Both Face++ and Recognition provide a high and stable true positive rate (TPR) in face detection.
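For the OpenCV side of such a comparison, a minimal detection sketch might look as follows. This is an illustration only; the bundled Haar cascade, the image path and the parameter values are assumptions and would need tuning for aerial footage:

```python
import cv2

# Pretrained frontal-face Haar cascade shipped with OpenCV
cascade_path = cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
face_detector = cv2.CascadeClassifier(cascade_path)

frame = cv2.imread("drone_frame.jpg")        # placeholder image path
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

# Detect faces; parameters are illustrative, not taken from the compared study
faces = face_detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5,
                                       minSize=(24, 24))
print(f"{len(faces)} face(s) detected")
```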
Because drones take pictures from the air, the altitude creates an angle of depression between the drone and the target on the ground (Garibotto et al., 2013). This in turn affects the pose of the faces in the captured pictures. The scores returned by Face++ and Recognition remain stable when the ground distance is within about 4 meters. Target photos were therefore collected at several heights between 1 and 5 meters and at ground distances of 2 to 4 meters in order to evaluate the performance of Face++ and Recognition at various angles of depression.
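The angle of depression itself follows directly from the camera's height above the target's face and the horizontal ground distance. A small sketch of this geometry (the height and distance values below are purely illustrative):

```python
import math

def angle_of_depression(height_m: float, ground_distance_m: float) -> float:
    """Angle (degrees) below the horizontal from the camera to the target."""
    return math.degrees(math.atan2(height_m, ground_distance_m))

# Illustrative values: camera 4 m above the target's face, 2-4 m ground distance
for d in (2.0, 3.0, 4.0):
    print(f"ground distance {d} m -> {angle_of_depression(4.0, d):.1f} degrees")
```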
For both methods, the match score drops considerably as the angle of depression increases. Recognition still returns match scores above its default match level and distinguishes matched cases from mismatched ones (Shao & Fu, 2015). Face++ reports both matched and mismatched cases when the angle of depression exceeds 40 degrees for all of the models provided, so Face++ also appears able to distinguish faces captured at large angles of depression.
A possible approach to augmentation is to adapt 3D modelling techniques to generate additional photographs at extra pitch angles (Ali et al., 2016). The face-recognition model is then trained with these extra images taken at large pitch angles.
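The report does not give the augmentation procedure, but one rough 2D approximation of extra pitch angles, short of a full 3D face model such as FaceGen, is to warp an existing face image with the homography induced by a pure camera rotation about the horizontal axis. The sketch below assumes a planar, roughly fronto-parallel face and uses a placeholder focal length and image path:

```python
import numpy as np
import cv2

def pitch_warp(image: np.ndarray, pitch_deg: float, focal: float = 800.0) -> np.ndarray:
    """Approximate a change of pitch angle with a pure-rotation homography."""
    h, w = image.shape[:2]
    K = np.array([[focal, 0, w / 2],
                  [0, focal, h / 2],
                  [0, 0, 1.0]])
    t = np.radians(pitch_deg)
    # Rotation about the camera x-axis (pitch)
    R = np.array([[1, 0, 0],
                  [0, np.cos(t), -np.sin(t)],
                  [0, np.sin(t), np.cos(t)]])
    H = K @ R @ np.linalg.inv(K)          # homography for a pure camera rotation
    return cv2.warpPerspective(image, H, (w, h))

face = cv2.imread("face.jpg")             # placeholder training image
augmented = [pitch_warp(face, angle) for angle in (20, 30, 40)]
```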
Figure 3 gives a clear overview of how 3D augmentation influences the distinguishability of Face++ and Recognition at various angles of depression. Notably, 3D augmentation helps Recognition at different angles of depression, while Face++ already provides the best matches (Farber, 2014). The scores of matched cases rise across the angles of depression, particularly at large angles. However, the scores of mismatched cases also rise for both methods after 3D augmentation is introduced. One likely reason for this phenomenon is that the faces generated by the FaceGen Modeler may not look authentic, which can confuse the scoring mechanisms of Face++ and Recognition in mismatched cases.
Feasibility of Computer Vision in UAVs
This section focuses on the feasibility of using computer vision to provide the level of situational awareness required for the sense-and-avoid task of a UAV (Srisamosorn et al., 2014). Sense-and-avoid refers to the capability of a UAV to detect airborne traffic and respond appropriately so as to maintain a safe separation distance. Two candidate image-processing algorithms for detecting small, point-like features were identified. These algorithms were used to process image streams recorded from real collision-course aircraft against a variety of daytime backgrounds. A comparison of the two detection algorithms showed which provided the better way of increasing resilience to image-based noise, which ultimately lowers the false-alarm rate at a given detection-sensitivity threshold.
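The two algorithms are not named here, but a common building block for picking out small, point-like targets against a sky background is morphological filtering. The sketch below uses a close-minus-open (CMO) filter, which highlights both small dark and small bright features; the kernel size, threshold and image path are assumptions:

```python
import cv2

frame = cv2.imread("sky_frame.png", cv2.IMREAD_GRAYSCALE)   # placeholder image

# Close-minus-open (CMO): suppresses the smooth background, keeps small blobs
kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (9, 9))
closed = cv2.morphologyEx(frame, cv2.MORPH_CLOSE, kernel)
opened = cv2.morphologyEx(frame, cv2.MORPH_OPEN, kernel)
cmo = cv2.subtract(closed, opened)

# Threshold the residual; surviving blobs are candidate point targets
_, candidates = cv2.threshold(cmo, 20, 255, cv2.THRESH_BINARY)
num_labels, labels = cv2.connectedComponents(candidates)
print(f"{num_labels - 1} candidate point target(s)")
```

Lowering the threshold increases sensitivity but also the false-alarm rate, which is the trade-off discussed above.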
The camera used for the series of trials was a Point Grey Research Dragonfly, designed for industrial machine-vision tasks (Krishnan, 2015). A sensor platform was constructed from the Dragonfly camera and lens together with an inertial measurement unit. A series of image streams was recorded in which a target aircraft was flown on a direct collision course with the sensor platform over a long period of time. For comparison, an alerted human observer was present at the site to record the times at which the aircraft could be detected by the human visual system. The observer was equipped with binoculars to allow the target's bearing to be determined before attempting to locate the aircraft with the naked eye.
Very small drones (nano): These drones are very small and designed to fit into very small spaces. They can be as small as an insect, typically only a few centimeters in size (Tiwari & Singhai, 2017). They are often used by spies to gather intelligence on people, taking advantage of the fact that they are very hard to detect.
Small drones: These drones are small, but not as small as nano drones. They can be lifted by hand and thrown into the air.
Medium drones: Medium drones are somewhat larger than small light aircraft. They generally weigh around 200 kilograms and require two people to lift them (Alshearya et al., 2017).
Large drones: These drones are large and can match the size of manned aircraft. They are generally used for high-powered surveillance in war zones.
Usage: All of the above types of drones are generally used for intelligence gathering and surveillance; some are also used as toys.
Multi-rotor: Multi-rotor drones are stable and are known for holding a stationary position in the air for long intervals of time (Alshearya et al., 2017). They are designed with several motors that keep them airborne and stable.
Fixed wing: Fixed-wing drones are designed to resemble airplanes, and for this reason they cannot hold a stationary position in the air (Alshearya et al., 2017).
Single rotor: A single-rotor drone is designed with one main rotor, which keeps the drone flying, and a smaller rotor placed close to the tail, which controls the drone's direction. Single-rotor drones compare favorably with other designs because of their distinct capabilities.
Usage: These drones are generally used for special purposes such as photography and meteorology.
Quadcopters: Quadcopters are designed with four rotors arranged in a square pattern (Tiwari & Singhai, 2017). They are the most common type of drone and are used mainly for recreational purposes, although in many cases they are also used professionally.
GPS drones: These drones are considered smart in some ways. They are linked to GPS, which helps decide the flight path (Alshearya et al., 2017). They can return to their starting point whenever they run out of power.
RTF drones: These drones are also known as ready-to-fly drones (Priji & Nair, 2016). They are plug-and-play drones: they only need to be unboxed and charged, and then they are ready to fly.
Helicopter drones: Helicopter drones are designed with a single rotor, which helps them stay in the air in a fixed position for a long interval of time (van Wyk & van der Haar, 2017). Because they resemble helicopters in structure, they are called helicopter drones.
Delivery drones: These drones are mainly used by organizations to deliver goods to clients or to ship goods from a warehouse to the organization (van Wyk & van der Haar, 2017). They are customized with a special basket that provides carriage space for goods.
Thermal imaging is a popular choice for implementing night-vision surveillance systems. Night vision is the ability to see objects in low-light conditions, achieved through a combination of biological and technological means (Brundage et al., 2018). OpenCV, the Open Source Computer Vision Library, is an open-source computer-vision software library.
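As a small illustration of how OpenCV can support night-vision-style processing, the sketch below applies contrast-limited adaptive histogram equalization (CLAHE) to brighten a low-light frame and a false-color map to a single-channel thermal image; the file names and parameters are placeholders:

```python
import cv2

# Low-light enhancement with CLAHE (contrast-limited adaptive histogram equalization)
dark = cv2.imread("night_frame.png", cv2.IMREAD_GRAYSCALE)      # placeholder image
clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
enhanced = clahe.apply(dark)

# False-color rendering of a single-channel thermal image
thermal = cv2.imread("thermal_frame.png", cv2.IMREAD_GRAYSCALE)  # placeholder image
colored = cv2.applyColorMap(thermal, cv2.COLORMAP_JET)
```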
Conclusion

From the above discussion it can be stated that this report is about intelligent face recognition using drones. A literature review was conducted on drones and their features, covering crowd management, face recognition, sense-and-avoid capability, types of drones and their abilities, and night vision. Drones can reach areas and locations that are difficult for human beings to access, and in such applications they are used to detect and track people on the ground. For face recognition, two important parameters were discussed: the effect of distance and the drone's angle of depression. In the sense-and-avoid section, the capability of a UAV to detect airborne traffic was discussed in detail. Various kinds of drones and their abilities were described: nano, small, medium and large drones; aerial platforms such as multi-rotor, fixed-wing and single-rotor drones; and drones classified by ability, such as quadcopters, GPS drones, RTF drones, helicopter drones and delivery drones. Finally, an overview of night vision was provided and the need for thermal imaging was discussed.

References
Ali, A., Jalil, A., Niu, J., Zhao, X., Rathore, S., Ahmed, J., & Iftikhar, M. A. (2016). Visual object tracking—classical and contemporary approaches. Frontiers of Computer Science, 10(1), 167-188.
Alshearya, A., Almagbile, A., Alqrashi, M., Wang, J., Khalil, H., & Al-Zahrani, M. (2017). Assessment of the applicability of unmanned aerial vehicle (UAV) for mapping levels of crowd density.
Brundage, M., Avin, S., Clark, J., Toner, H., Eckersley, P., Garfinkel, B., … & Anderson, H. (2018). The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation. arXiv preprint arXiv:1802.07228.
Chamasemani, F. F., & Affendey, L. S. (2013). Systematic review and classification on video surveillance systems. Int. Journal Information Technology and Computer Science, (7), 87-102.
Farber, H. B. (2014). Eyes in the sky: constitutional and regulatory approaches to domestic drone deployment. Syracuse L. Rev., 64, 1.
Garibotto, G., Murrieri, P., Capra, A., De Muro, S., Petillo, U., Flammini, F., … & Mazzino, N. (2013, September). White paper on industrial applications of computer vision and pattern recognition. In International Conference on Image Analysis and Processing (pp. 721-730). Springer, Berlin, Heidelberg.
Howard, C. (2013). UAV command, control & communications. Military & Aerospace Electronics, militaryaerospace.com.
Krishnan, A. (2015). Mass Surveillance, Drones, and Unconventional Warfare. BEHEMOTH-A Journal on Civilisation, 8(2), 12-33.
Motlagh, N. H., Bagaa, M., & Taleb, T. (2017). UAV-based IoT platform: A crowd surveillance use case. IEEE Communications Magazine, 55(2), 128-134.
Penmetsa, S., Minhuj, F., Singh, A., & Omkar, S. N. (2014). Autonomous UAV for suspicious action detection using pictorial human pose estimation and classification. ELCVIA: electronic letters on computer vision and image analysis, 13(1), 18-32.
Porikli, F., Brémond, F., Dockstader, S. L., Ferryman, J., Hoogs, A., Lovell, B. C., … & Venetianer, P. L. (2013). Video surveillance: past, present, and now the future [DSP Forum]. IEEE Signal Processing Magazine, 30(3), 190-198.
Priji, P., & Nair, R. S. (2016). Improved real-time multiple face detection and recognition from multiple angles.
Shao, M., & Fu, Y. (2015). Deeply Self-Taught Multi-View Video Analytics Machine for Situation Awareness. In AFA Cyber Workshop, White Paper.
Srisamosorn, V., Kuwahara, N., Yamashita, A., Ogata, T., & Ota, J. (2014, December). Automatic face tracking system using quadrotors: Control by goal position thresholding. In Robotics and Biomimetics (ROBIO), 2014 IEEE International Conference on (pp. 1314-1319). IEEE.
Teutsch, M., & Kruger, W. (2015). Robust and fast detection of moving vehicles in aerial videos using sliding windows. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops (pp. 26-34).
Tiwari, M., & Singhai, R. (2017). A Review of Detection and Tracking of Object from Image and Video Sequences. International Journal of Computational Intelligence Research, 13(5), 745-765.
van Wyk, S., & van der Haar, D. (2017). 2nd Asia-Pacific Conference on Intelligent Robot Systems (ACIRS 2017).
Zeng, Y., Zhang, R., & Lim, T. J. (2016). Wireless communications with unmanned aerial vehicles: opportunities and challenges. IEEE Communications Magazine, 54(5), 36-42.