Graph-based motor primitive generation framework
© Sung et al. 2015
Received: 23 July 2015
Accepted: 26 October 2015
Published: 15 December 2015
Unmanned aerial vehicles (UAVs) have many potential applications, such as delivery, leisure, and surveillance. Enabling these applications requires UAVs to fly autonomously, which in turn requires defining UAV motor primitives. Diverse attempts have been made to generate motor primitives automatically for UAVs and robots. However, given that UAVs usually do not fly as expected because of external environmental factors, a novel approach needs to be designed for them. This paper proposes a demonstration-based method that generates a graph-based motor primitive. In the experiment, an AR.Drone 2.0 was utilized: by controlling it, four motor primitives were generated and combined into a graph-based motor primitive. The generated motor primitives can be performed by a planner or a learner, such as a hierarchical task network or Q-learning. By defining the executable conditions of the motor primitives based on measured properties, the movements of the graph-based motor primitive can be chosen depending on changes in the indoor environment.
Nowadays, unmanned aerial vehicles (UAVs) serve diverse types of goals, as their performance and supporting techniques have improved. Because making UAVs fly autonomously requires motor primitives, defining and generating motor primitives is the key to autonomous UAVs. In particular, the framework design of UAVs has been further researched [1, 2].
Usually, motor primitives are defined and generated linearly [3, 4]. Therefore, while motor primitives are executed, changes in the environment are not considered, which can prevent a UAV from achieving its goal. For example, motor primitives are defined by dividing the movements learned through demonstration-based learning, and are then executed by a planner.
One motor-primitive generation approach is based on a graph [5–8]. Graph-based motor primitives have flexibility in dynamic environments because they consider the state of the environment while the motor primitives are executed. A method that generates UAV motor primitives using demonstration-based learning has been suggested. The process of generating motor primitives is divided into three stages: an operation-collection stage, a time-adjustment stage, and a motor-primitive generation stage. The motor primitives, based on measured yaw, pitch, and roll values, are divided into sub-motor primitives at pinpoints, where pinpoints are predefined spots at which the UAV should arrive.
When UAVs fly, they are affected by diverse external factors, such as wind and building structures. Therefore, motor primitives are usually not performed as well as expected, which makes it difficult to execute multiple motor primitives consecutively. For example, suppose two motor primitives are planned to be executed, one right after the other. If the first motor primitive is not performed as defined, the next motor primitive cannot be executed, because its preconditions no longer hold. Therefore, a novel approach to improve graph-based motor primitives is required.
This paper proposes a method that generates graph-based UAV motor primitives using demonstration-based learning. By connecting multiple movements and executing one of the connected movements, UAVs can cope with changing environments. The proposed method can be applied not only to UAVs, but also to robots and virtual characters.
This paper proceeds as follows. “Related work” introduces the approaches used to make UAVs fly autonomously. “Graph-based motor primitive generation process” proposes a graph-based motor-primitive generation framework. “Experiments” shows the performance of the generated graph-based motor primitives. “Conclusion” concludes the paper.
In the beginning, UAVs were designed for military use, but recently low-cost UAVs have been developed and sold to consumers. For example, even though the AR.Drone 2.0 contains two cameras and an infrared sensor, it is sold at low cost. In addition, communication protocols are defined so that developers can control the AR.Drone 2.0 programmatically. The AR.Drone 2.0 provides algorithms for inertial-sensor correction and altitude estimation to maintain an accurate and robust state, and it also contains an algorithm to display captured images. Finally, the AR.Drone 2.0 is connected and controlled through Wi-Fi, and diverse control modes based on finite-state machines are utilized.
A variety of studies have been conducted on controlling a UAV automatically. For example, markers attached to walls are recognized by cameras on the UAV, and the UAV is then controlled autonomously by utilizing the recognized markers. Autonomous flight requires both direction control and altitude control. By recognizing the markers with the UAV's cameras, the distances between the markers and the UAV are measured. When the UAV comes within range of a marker, it responds by performing the action corresponding to that marker.
Other research addresses UAV path-finding problems, where two kinds of errors arise: useless lines are included when vanishing points are computed, and uncrossed lines are recognized as crossed lines. A vanishing point is defined as the point toward which the path propagates and where the outer lines of the path cross. An algorithm was proposed to reduce the error that arises when vanishing points are calculated, and UAVs fly autonomously based on this algorithm.
In detail, all captured images are converted to black-and-white images to detect their border values. The border values are obtained with Canny edge detection, and straight lines are then detected with a probabilistic Hough transform. Vanishing points are found by excluding the errors caused by uncrossed straight lines during the calculation. The improved algorithm was applied to an AR.Drone in an indoor environment and allowed it to fly autonomously.
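The vanishing-point step described above can be sketched as follows: a minimal pure-Python illustration that intersects pairs of detected line segments (in the endpoint-pair form a probabilistic Hough transform would return), excludes near-parallel ("uncrossed") pairs, and averages the remaining intersections. All function names and thresholds here are illustrative assumptions, not the cited paper's implementation.

```python
from itertools import combinations

def line_params(p1, p2):
    # Line through p1 and p2 in the form a*x + b*y = c.
    (x1, y1), (x2, y2) = p1, p2
    a, b = y2 - y1, x1 - x2
    return a, b, a * x1 + b * y1

def estimate_vanishing_point(segments, min_det=1e-3):
    """Average the pairwise intersections of line segments,
    skipping near-parallel pairs (small determinant), which
    correspond to the 'uncrossed lines' excluded as errors."""
    points = []
    for s1, s2 in combinations(segments, 2):
        a1, b1, c1 = line_params(*s1)
        a2, b2, c2 = line_params(*s2)
        det = a1 * b2 - a2 * b1
        if abs(det) < min_det:      # near-parallel: exclude
            continue
        points.append(((c1 * b2 - c2 * b1) / det,
                       (a1 * c2 - a2 * c1) / det))
    if not points:
        return None
    xs, ys = zip(*points)
    return sum(xs) / len(points), sum(ys) / len(points)
```

In a full pipeline, the segments would come from edge detection and line extraction on the camera frames; here they are passed in directly to keep the sketch self-contained.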
In addition, research on controlling multiple UAVs for indoor group flight has been conducted. Controlling multiple UAVs concurrently requires not only a ground station that controls the UAVs, but also a motion-capture data server that obtains their indoor locations. The ground station controls the UAVs by utilizing a qualitative control algorithm. Motion-capture devices take pictures of the markers attached to the UAVs, and the motion-capture data server then recognizes their locations from those pictures.
To make UAVs fly autonomously in an indoor environment, the shortest path is computed. Given that GPS (global positioning system) signals cannot be received indoors, an algorithm that generates 3D maps using a depth camera is proposed. The UAV's current location is recognized from the 2D images captured by its cameras and the generated 3D map. An algorithm that reduces the complexity of the 3D map is also proposed, to calculate a UAV path that avoids obstacles.
UAV locations are estimated from crowd-sourced signals. A local Wi-Fi fingerprint is collected and automatically updated through multiple users' smartphones. The UAV locations are calculated from the measured strength of the UAV's Wi-Fi signal and the updated Wi-Fi fingerprint. In addition, the UAV's path is estimated using accelerometer and orientation sensor values.
Traditional research on autonomous UAV control focuses on recognizing the current location, as described above. However, a novel approach is required that does not merely control UAVs along a predefined path, but makes them fly autonomously in response to changes in a dynamic environment. This paper presents a method that applies and improves a motor-primitive generation technique originally developed for humanoid robots.
Graph-based motor primitive generation process
UAV motor-primitive expression
UAV motor-primitive generation
The proposed method generates motor primitives in three stages: an operation-collection stage, a time-adjustment stage, and a motor-primitive generation stage. In the operation-collection stage, an operator uses a controller to make the UAV fly along a predefined path. Given that motor primitives are generated through demonstration-based learning, the UAV should fly the same path repeatedly. Therefore, the path should be defined in advance.
Then, the movements and motor primitives of the UAV are defined, based on the properties and the commands. When a UAV reaches an intermediate pinpoint and adjusts its location, a new movement is defined and added to the motor primitive. The order in which pinpoints are visited must remain the same across the demonstrations. A single motor primitive is generated from a single flight.
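As a rough illustration of the motor-primitive generation stage, the sketch below splits a demonstrated flight log into movements by cutting at the pinpoint arrival times, so that each movement ends where the UAV reached a pinpoint. The data layout and all names are assumptions made for illustration, not the paper's actual format.

```python
def segment_into_movements(samples, pinpoint_times):
    """Split a demonstrated flight log into movements, cutting at
    each pinpoint arrival time.

    samples: time-ordered list of (t, yaw, pitch, roll) tuples
    pinpoint_times: arrival times at the predefined pinpoints
    """
    movements, current = [], []
    cuts = sorted(pinpoint_times)
    for sample in samples:
        current.append(sample)
        if cuts and sample[0] >= cuts[0]:
            # The UAV has reached the next pinpoint: close this
            # movement and start collecting the next one.
            movements.append(current)
            current, cuts = [], cuts[1:]
    if current:
        movements.append(current)
    return movements
```

For example, a flight sampled at t = 0…5 with pinpoints at t = 2 and t = 4 would yield three movements, matching the idea that each demonstration flight produces one motor primitive divided into sub-movements at pinpoints.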
UAV motor-primitive execution
When a graph-based motor primitive G is executed, the joints in the ordered joint set J are selected in order. When the ith joint j_i is selected, one of its linked movements is chosen, by considering the measured movement properties, and executed. Because the UAV selects its movements based on the changed properties, it can respond to changes in the environment.
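The execution procedure above can be sketched as follows: a hypothetical minimal implementation in which each joint offers several candidate movements, and the first movement whose executable condition matches the currently measured properties is chosen and executed. The class and function names are illustrative assumptions, not the paper's actual interfaces.

```python
class Movement:
    """One linked movement of a joint: an executable condition
    over measured properties, plus the commands it sends."""
    def __init__(self, name, condition, commands):
        self.name = name
        self.condition = condition    # predicate over properties
        self.commands = commands      # control commands for the UAV

def execute_graph_primitive(joints, measure_properties, send):
    """joints: ordered list of candidate-movement lists (set J);
    measure_properties: returns the current measured state;
    send: forwards a chosen movement's commands to the UAV."""
    trace = []
    for candidates in joints:         # select the joints in order
        props = measure_properties()  # re-measure at every joint
        chosen = next((m for m in candidates if m.condition(props)),
                      None)
        if chosen is None:
            raise RuntimeError("no executable movement at this joint")
        send(chosen.commands)
        trace.append(chosen.name)
    return trace
```

Because the properties are re-measured at every joint, a disturbed flight (for example, a blocked path) simply causes a different linked movement to be selected, rather than aborting the whole primitive.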
[Table: all measured movement times]
This paper proposed a novel motor-primitive generation framework for UAVs. The measured movements were integrated into a single graph-based motor primitive, making it possible to execute its movements consecutively while considering changes in the environment.
Motor primitives are very important, given that UAVs fly by repeatedly selecting and performing them; how motor primitives are defined therefore determines the quality of UAV flight. Traditionally, motor primitives are defined linearly, which means that UAVs can only fly along a predefined path. In dynamic environments, however, it is very hard to perform selected motor primitives to completion, so motor primitives that reflect such environments are demanded. In this paper, motor primitives are expressed as a graph, and graph-based motor primitives are performed to completion more often than linear motor primitives. Therefore, graph-based motor primitives are well suited to dynamic environments. In future work, how to plan and execute over graph-based motor primitives needs to be addressed.
All authors participated in the experiment and the writing of this paper. All authors read and approved the final manuscript.
This research was supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Science, ICT & Future Planning (NRF-2014R1A1A1005955).
The authors declare that they have no competing interests.
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
1. Sung Y, Kwak J, Yang D, Park Y (2015) Ground station design for the control of multi heterogeneous UAVs. In: Korean Multimedia Society spring conference, Andong, May 2015, vol 18, no 1, pp 828–829
2. Sung Y, Kwak J (2015) Tangible control interface design for drones of fire fighting and disaster prevention. In: KIPS fall conference, Jeju Island, October 2015, vol 22, no 2, pp 1844–1845
3. Calinon S, Guenter F, Billard A (2007) On learning, representing, and generalizing a task in a humanoid robot. IEEE Trans Syst Man Cybern B 37(2):286–298
4. Koenig N, Matarić MJ (2006) Behavior-based segmentation of demonstrated tasks. In: ICDL'06: international conference on development and learning, Bloomington, May 2006
5. Nicolescu MN, Matarić MJ (2001) Experience-based representation construction: learning from human and robot teachers. In: IROS'01: IEEE/RSJ international conference on intelligent robots and systems, Wailea, October 2001, vol 2, pp 740–745
6. Nicolescu MN, Matarić MJ (2001) Learning and interacting in human–robot domains. IEEE Trans Syst Man Cybern A 31(5):419–430
7. Nicolescu MN, Matarić MJ (2002) A hierarchical architecture for behavior-based robots. In: AAMAS'02: first international joint conference on autonomous agents and multi-agent systems, July 2002, pp 227–233
8. Nicolescu MN, Matarić MJ (2003) Natural methods for robot task learning: instructive demonstrations, generalization and practice. In: AAMAS'03: second international joint conference on autonomous agents and multi-agent systems, Melbourne, July 2003, pp 241–248
9. Sung Y, Kwak J, Park JH (2015) Graph-based motor primitive generation method of UAVs based on demonstration-based learning. In: 9th international conference on multimedia and ubiquitous engineering, Hanoi, Vietnam, 18–20 May 2015
10. Bristeau JP, Callou F, Vissière D, Petit N (2011) The navigation and control technology inside the AR.Drone micro UAV. In: IFAC'11: international federation of automatic control world congress, Milano, Italy, 28 August–2 September 2011, vol 18, no 1, pp 1477–1484
11. Nitschke C, Minami Y, Hiromoto M, Ohshima H, Sato T (2014) A quadrocopter automatic control contest as an example of interdisciplinary design education. In: ICCAS'14: 14th international conference on control, automation and systems, KINTEX, Gyeonggi-do, Korea, 2014, pp 678–685
12. Son BR, Kang SM, Lee H, Lee DH (2013) A real time quadrotor autonomous navigation and remote control method. IEMEK J Embed Syst Appl 8(4):205–212
13. Cho DH, Moon ST, Rew DY (2014) Development of AR.Drone's controller for indoor swarm flight. KSAS Korean Soc Aeronaut Space Sci 13(1):153–165
14. Moo ST, Ha SH, Eom W, Kim WK (2014) 3D map generation system for indoor autonomous navigation. KSAS Korean Soc Aeronaut Space Sci 11(2):140–148
15. Kim JM, Choi HW, Eom DS (2014) The location estimation of UAV in indoor environments using a crowd-sourced fingerprint mapping. Korean Inst Commun Inf Sci 54(1):914–915