Research Activities and Videos

 

For a list of videos published during my PhD, see below or visit my YouTube channel.

Distributed Sensing and Cooperative Recognition of Hand Gestures by a Mobile Robot Swarm

 

This video demonstration (with audio commentary) presents a Human-Swarm Interaction (HSI) system in which hand gestures shown by a human operator are collectively recognized by a swarm of mobile robots. This work has been accepted at conferences including IROS 2012, AAMAS 2012, and HRI 2012.

Hand Gesture Recognition using a Swarm of Mobile Ground Robots

 

This video illustrates distributed sensing and cooperative recognition (decision-making) by a swarm of robots, which collectively identify a hand gesture and perform the task encoded by that gesture (command). The human moves into the room where the robots are located. After at least one robot has detected the human, the robots move to better positions to sense the instruction given by the human. Next, a swarm-level decision is reached and the swarm performs the associated task, which is to split into two separate groups.
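
As a rough illustration of the cooperative decision-making idea (a hypothetical sketch, not the algorithm used in the videos or papers), the snippet below fuses per-robot classifier beliefs into a swarm-level decision by repeated averaging with neighbours; the function names, topology, and numbers are all made up.

```python
import numpy as np

def swarm_consensus(local_beliefs, neighbours, n_rounds=20):
    """Fuse per-robot class beliefs into a single swarm-level decision.

    local_beliefs: (n_robots, n_classes) array of per-robot probability
                   estimates over the gesture classes (hypothetical input).
    neighbours:    dict mapping robot index -> list of neighbour indices.
    """
    beliefs = np.asarray(local_beliefs, dtype=float)
    for _ in range(n_rounds):
        updated = beliefs.copy()
        for i, nbrs in neighbours.items():
            # Each robot averages its belief with those of its neighbours.
            group = [i] + list(nbrs)
            updated[i] = beliefs[group].mean(axis=0)
        beliefs = updated
    # After mixing, every robot holds (approximately) the same belief,
    # and the swarm-level decision is the most likely class.
    return int(beliefs.mean(axis=0).argmax())

# Example: three robots, two gesture classes, a line topology 0-1-2.
beliefs = [[0.6, 0.4], [0.3, 0.7], [0.2, 0.8]]
topology = {0: [1], 1: [0, 2], 2: [1]}
print(swarm_consensus(beliefs, topology))  # -> 1
```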

A Mobile Robot Swarm Counting Fingers from Hand Gestures

 

This additional video (the supporting video for IROS 2012) demonstrates our HSI system, in which a swarm of robots cooperates to recognize the number of fingers shown in hand gestures given by human operators. The LEDs of the robots blink N times, where N is the number of recognized fingers.

Autonomous Deployment of a Swarm of UGVs and UAVs for Human-swarm Interaction

 

This video illustrates the autonomous deployment of a swarm of UGVs and UAVs for human-swarm interaction (HSI) in physical proximity.

Online Incremental Robot Learning using Partial/Binary Feedback from Humans

 

In this video (with audio commentary; accepted at HRI 2014), a robot (the student) learns to recognize hand gestures from a human instructor (the teacher) while performing tasks. The task for the robot is to move to the colored marker corresponding to the hand gesture shown. The instructor gives binary or partial feedback (right/wrong; yes/no) after the robot predicts a hand gesture. To learn from such partial (limited) feedback in interactive settings, the Upper Confidence Weighted Learning (UCWL) scheme is adopted.
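
The sketch below is a deliberately simplified illustration of learning from binary (yes/no) feedback, not the UCWL algorithm itself: a linear multiclass model predicts a gesture, the teacher answers correct/incorrect, and only the weights of the predicted class are updated. All names and data are hypothetical.

```python
import numpy as np

class BinaryFeedbackLearner:
    """Toy multiclass learner updated only from yes/no teacher feedback."""

    def __init__(self, n_classes, n_features, lr=0.1):
        self.W = np.zeros((n_classes, n_features))
        self.lr = lr

    def predict(self, x):
        return int((self.W @ x).argmax())

    def update(self, x, predicted, was_correct):
        if was_correct:
            self.W[predicted] += self.lr * x   # "yes": reinforce the prediction
        else:
            self.W[predicted] -= self.lr * x   # "no": push the prediction away

# Example round: the robot predicts a gesture class for a feature vector x,
# the instructor answers yes/no, and the model is updated accordingly.
learner = BinaryFeedbackLearner(n_classes=3, n_features=4)
x = np.array([0.2, 0.9, 0.1, 0.4])
guess = learner.predict(x)
learner.update(x, guess, was_correct=False)    # teacher said "wrong"
```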

Face Pose Estimation using an Airborne Parrot AR.Drone 2.0 (UAV)

 

This video shows face pose detection and estimation using multi-view Haar face detectors from OpenCV, with a Kalman filter for face tracking. The system is evaluated using the front-facing camera of an airborne Parrot A.R. Drone 2.0, which acquires images at 360p (640x360 pixels, 16:9 aspect ratio).
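
A minimal sketch of this kind of pipeline using standard OpenCV calls is shown below. For brevity it uses only the frontal-face cascade (the video uses multi-view detectors), a constant-velocity Kalman filter on the face centre, and a webcam instead of the drone's video stream, so it approximates the setup rather than reproducing the actual implementation.

```python
import cv2
import numpy as np

# Frontal-face Haar detector shipped with OpenCV.
face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

# Constant-velocity Kalman filter: state [x, y, vx, vy], measurement [x, y].
kf = cv2.KalmanFilter(4, 2)
kf.transitionMatrix = np.array([[1, 0, 1, 0],
                                [0, 1, 0, 1],
                                [0, 0, 1, 0],
                                [0, 0, 0, 1]], np.float32)
kf.measurementMatrix = np.array([[1, 0, 0, 0],
                                 [0, 1, 0, 0]], np.float32)
kf.processNoiseCov = np.eye(4, dtype=np.float32) * 1e-3

cap = cv2.VideoCapture(0)   # drone video stream in the real setup
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    prediction = kf.predict()
    if len(faces) > 0:
        x, y, w, h = faces[0]
        centre = np.array([[x + w / 2], [y + h / 2]], np.float32)
        kf.correct(centre)  # update the track with the detection
    cx, cy = int(prediction[0, 0]), int(prediction[1, 0])
    cv2.circle(frame, (cx, cy), 5, (0, 255, 0), -1)
    cv2.imshow("face tracking", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
```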

Using Hand Gestures and Face Pose Estimates to Maneuver and Direct UAVs

 

In this video (supporting video for a submission to HRI 2014), a Parrot UAV is controlled and directed using hand gestures shown by human operators. The UAV moves in the direction indicated by the hand gesture and based on its position with respect to the human (i.e., the human's estimated face pose).

Commanding and Controlling UAVs to Follow Humans using Hand Gestures

 

In this video demonstration (with audio), a Parrot A.R. Drone 2.0 (UAV) is commanded using hand gestures to follow a human located in a specific direction (i.e., on the left or on the right). To make it easy to differentiate between humans, each human wears an individual marker that the UAV can recognize and track.

A Swarm of Robots Providing Visual Feedback to a Human based on the Direction of the Hand

 

This video illustrates a robot swarm conveying visual feedback to humans (using colored LEDs) to indicate the spatial direction given by the human (by pointing with the arm and hand).

Upper Body Motion Detection using an Airborne UAV and Colored Passive Markers (Gloves and Jacket)

 

This video illustrates upper body motion detection using tangible input devices (i.e., two colored gloves and a colored jacket). The motion damping parameter bN controls the sensitivity to motion of the arms and hands, and to the displacement of the human with respect to the field of view of the UAV camera.
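
As a rough illustration of how such a damping parameter can work (not the implementation behind the video), exponential smoothing is one common choice: the tracked position follows the raw measurement more or less aggressively depending on the damping coefficient. The name `damping` below is hypothetical.

```python
def damp(previous, measurement, damping):
    """Exponentially smooth a tracked position.

    damping close to 0 -> sluggish, heavily damped motion;
    damping close to 1 -> the output follows the raw measurement closely.
    """
    return previous + damping * (measurement - previous)

# Example: smoothing a sequence of noisy hand x-coordinates (pixels).
positions = [100, 140, 90, 130, 120]
smoothed = positions[0]
for p in positions[1:]:
    smoothed = damp(smoothed, p, damping=0.3)
    print(round(smoothed, 1))
```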

Spatial Gestures for Selecting Individuals and Groups of Robots from a Swarm

 

In this video (supporting video for a submission to IROS 2014), individual UAVs and groups of UAVs are selected using natural, intuitive spatial gestures given by human operators wearing tangible input devices such as colored gloves. The scheme uses a cascaded machine learning approach with multiple classifiers for spatial gesture learning and recognition. In this video, the selected robots lift off, move, and land, reminiscent of the use of the "Force" in the Star Wars movies.
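
The sketch below illustrates the general idea of a classifier cascade (not the specific classifiers or features used in this work): a cheap first stage rejects inputs that contain no gesture, and a second stage labels the gesture. All class names and the toy models are placeholders.

```python
import numpy as np

def make_cascade(detector, recognizer):
    """Build a two-stage cascade: a cheap presence detector followed by a
    more expensive gesture recognizer (both are placeholder callables)."""
    def predict(features):
        if not detector(features):          # stage 1: early reject
            return None
        return recognizer(features)         # stage 2: label the gesture
    return predict

# Toy example: "gesture present" if the feature energy is high enough,
# then nearest-centroid recognition among two hypothetical gestures.
centroids = {"select": np.array([1.0, 0.0]), "land": np.array([0.0, 1.0])}
detector = lambda f: np.linalg.norm(f) > 0.5
recognizer = lambda f: min(centroids, key=lambda g: np.linalg.norm(f - centroids[g]))

predict = make_cascade(detector, recognizer)
print(predict(np.array([0.9, 0.1])))   # -> "select"
print(predict(np.array([0.1, 0.1])))   # -> None (rejected by stage 1)
```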

Selecting Individuals and Groups of Robots from a Swarm of UGVs using Spatial Gestures

 

This video illustrates the use of spatial gestures for selecting individual robots and groups of robots from a swarm of UGVs. Spatial gestures serve as a natural and intuitive means of selecting robots.

Selecting Individuals and Groups of Robots from a Swarm of UAVs using Spatial Gestures

 

This video illustrates the use of spatial gestures for selecting individual robots and groups of robots from a swarm or multi-robot system of UAVs. Spatial gestures serve as a natural and intuitive means of selecting robots.

A Heterogeneous Robot Swarm Performs Tasks given by Multiple Humans using a Gesture Language (Vocabulary)

 

A heterogeneous swarm (or multi-robot system) of UAVs and UGVs performs actions associated with mission instructions given by human operators using a gesture-based language (vocabulary). In the first scene, Human-Swarm Interaction (HSI) is demonstrated with a single human operator, who requests the heterogeneous swarm to perform tasks. In the second scene, Multi-Human and Swarm Interaction (MHSI) is demonstrated with two human operators providing commands to the heterogeneous robot swarm.

Human-Swarm Interaction using a Gesture Language (Vocabulary)

 

The two videos below illustrate the use of a Human-Swarm Interaction (HSI) language, namely a gesture-based language of commands with grammatical expressions. Gestures represent individual words and encode the semantic meaning of individual commands. A sequence of gestures corresponds to a sentence, namely a mission instruction. This video shows the procedure by which humans can provide mission instructions to swarm robotic and multi-robot systems.
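
As a rough illustration of the idea of a gesture vocabulary with a simple grammar (the actual vocabulary and grammar are not reproduced here; the tokens and rules below are hypothetical), a sequence of recognized gesture tokens can be parsed into a structured mission instruction:

```python
# Hypothetical gesture vocabulary: each recognized gesture maps to a word.
VOCAB = {"point": "SELECT", "open_palm": "ALL", "fist": "STOP",
         "two_fingers": "SPLIT", "wave": "EXECUTE"}

def parse_sentence(gesture_sequence):
    """Turn a sequence of gesture tokens into a mission instruction.

    Assumed (hypothetical) grammar:  <selection> <command> EXECUTE
    """
    words = [VOCAB[g] for g in gesture_sequence]
    if len(words) != 3 or words[0] not in ("SELECT", "ALL") or words[-1] != "EXECUTE":
        raise ValueError("not a valid mission instruction")
    return {"selection": words[0], "command": words[1]}

print(parse_sentence(["open_palm", "two_fingers", "wave"]))
# -> {'selection': 'ALL', 'command': 'SPLIT'}
```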

Use of Gesture Language (Vocabulary) for Selecting and Commanding Robots

 

This video illustrates the use of a gesture-based language (vocabulary) of commands for humans to select and command robots from swarms and multi-robot systems.

Selecting and Commanding Individual Robots from a Robot Swarm using a Gesture Language (Vocabulary)

 

This video (with synthetic speech) illustrates how a human operator can select individual robots from a swarm using a finger pointing (spatial) gesture. Commands are given to the selected robot through the use of a gesture language/vocabulary (by providing a series of instructions).

Selecting and Commanding Some Robots (Groups) from a Robot Swarm using a Gesture Language

 

This video (with synthetic speech) illustrates how a human operator can select some robots (or a group of robots) from a swarm using two-handed gestures, which define the spatial cone (range) of the robots to be selected. Commands are given to the selected robots through the use of a gesture language/vocabulary (by providing a series of instructions).
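
As a rough geometric illustration of the selection idea (not the actual selection pipeline), robots can be selected by testing whether they fall inside a cone defined by an apex position, a pointing direction, and a half-angle; all names and values below are hypothetical.

```python
import numpy as np

def select_in_cone(robot_positions, apex, direction, half_angle_deg):
    """Return indices of robots inside the selection cone.

    apex:            position of the human (cone apex)
    direction:       pointing direction of the gesture
    half_angle_deg:  half-opening angle of the cone in degrees
    """
    direction = np.asarray(direction, float)
    direction = direction / np.linalg.norm(direction)
    cos_limit = np.cos(np.radians(half_angle_deg))
    selected = []
    for i, p in enumerate(np.asarray(robot_positions, float)):
        v = p - np.asarray(apex, float)
        d = np.linalg.norm(v)
        if d > 0 and np.dot(v / d, direction) >= cos_limit:
            selected.append(i)
    return selected

# Example: four robots on the ground plane, human at the origin pointing along +x.
robots = [(2, 0.2), (2, 2.5), (1, -0.1), (-1, 0)]
print(select_in_cone(robots, apex=(0, 0), direction=(1, 0), half_angle_deg=20))
# -> [0, 2]
```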

Selecting and Commanding All Robots from a Robot Swarm using a Gesture Language (Vocabulary)

 

This video (with synthetic speech) illustrates how a human operator can select all robots in a swarm, and command the selected robots through the use of a gesture language/vocabulary (by providing a series of instructions).

Incrementally Selecting Individual Robots from a Swarm using Spatial Gestures and Commanding Them

 

This Human-Swarm Interaction (HSI) video (with synthetic speech) illustrates a human operator incrementally selecting three individual robots from a UGV swarm, by pointing at each individual robot using spatial gestures.

A Swarm of 15 Mobile Robots Forming Geometrical Shapes based on Commands given by Humans

 

This video (with audio commentary) illustrates the use of a gesture language (vocabulary of commands) to instruct 15 robots in a swarm to form different geometrical shapes. In other words, a human provides a series of instructions/commands, and the task of the selected group of robots is to form a shape (a square, a circle, and a rectangle).
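
As a simple geometric illustration of the formation task (not the actual motion controller used by the swarm), target positions for N robots can be generated by spacing them evenly along the requested shape; the circle case is sketched below, with all names and values hypothetical.

```python
import numpy as np

def circle_formation(n_robots, centre, radius):
    """Evenly spaced target positions on a circle for n_robots."""
    angles = np.linspace(0, 2 * np.pi, n_robots, endpoint=False)
    cx, cy = centre
    return [(cx + radius * np.cos(a), cy + radius * np.sin(a)) for a in angles]

# Example: 15 robots forming a circle of radius 1 m around the origin.
for x, y in circle_formation(15, centre=(0.0, 0.0), radius=1.0):
    print(f"target: ({x:.2f}, {y:.2f})")
```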

Interactive Swarm Feedback for Human Operators during Human-swarm Interaction

 

This video (with synthetic speech) illustrates the advantage of swarm-level interactive feedback given to human operators during interaction (using a gesture-based language/vocabulary). Interactive swarm feedback provides intelligent, automated reasoning in situations of high uncertainty: the swarm requests the human to show the gesture again if it is uncertain; if a shown gesture does not exist in the predefined set of gestures, the human is notified; and if a given gesture is not relevant at that stage of the interaction, the swarm notifies the human and requests a proper gesture.
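
The snippet below is a rough sketch of how such feedback logic can be driven by classifier confidence and the current interaction stage (thresholds, names, and messages are hypothetical, not taken from the actual system):

```python
def swarm_feedback(probabilities, vocabulary, expected_stage_gestures,
                   confidence_threshold=0.7):
    """Decide what the swarm should tell the human after a recognition attempt.

    probabilities:            dict gesture -> recognition confidence
    vocabulary:               set of all gestures the swarm knows
    expected_stage_gestures:  gestures valid at the current interaction stage
    """
    gesture, confidence = max(probabilities.items(), key=lambda kv: kv[1])
    if confidence < confidence_threshold:
        return "uncertain: please show the gesture again"
    if gesture not in vocabulary:
        return "unknown gesture: not in the predefined set"
    if gesture not in expected_stage_gestures:
        return f"'{gesture}' is not valid at this stage: please show a proper gesture"
    return f"accepted: {gesture}"

print(swarm_feedback({"split": 0.55, "stop": 0.30}, {"split", "stop"}, {"split"}))
# -> "uncertain: please show the gesture again"
```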

A Swarm of Mobile Robots Interacting with a Human (with Colored Passive Markers) using a Gesture Language

 

This video (with synthetic speech) illustrates a swarm of UGVs receiving mission instructions from a human operator using a gesture-based language (vocabulary of commands).

Interactive Swarm Feedback for Human-swarm Interaction using a Gesture Language (Vocabulary)

 

This video (with synthetic speech) illustrates the advantage of swarm-level interactive feedback given to human operators during interaction (using a gesture-based language/vocabulary). The interactive feedback from the UGV swarm provides intelligent, automated reasoning in situations of high uncertainty: the swarm requests the human to show the gesture again if it is uncertain; if a shown gesture does not exist in the predefined set of gestures, the human is notified; and if a given gesture is not relevant at that stage of the interaction, the swarm notifies the human and requests a proper gesture.

Related Publications

 

[1] J. Nagi, H. Ngo, L. Gambardella, G. A. Di Caro, Wisdom of the Swarm for Cooperative-Decision Making in Human-Swarm Interaction, in Proc. of the IEEE International Conference on Robotics and Automation (ICRA), Seattle, USA, May 26-30, 2015, pp. 1802-1808. [pdf]

 

[2] J. Nagi, G. A. Di Caro, A. Giusti, L. Gambardella, Learning Symmetric Face Pose Models Online Using Locally Weighted Projectron Regression, in Proc. of the 21st IEEE International Conference on Image Processing (ICIP), Paris, France, Oct. 27-30, 2014, pp. 1400-1404. [pdf] [online]

 

[3] J. Nagi, A. Giusti, L. Gambardella, G. A. Di Caro, Human-Swarm Interaction Using Spatial Gestures, in Proc. of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Chicago, USA, Sep. 14-18, 2014, pp. 3834-3841. [pdf] [online]

 

[4] H. Ngo, M. Luciw, N. Vien, J. Nagi, A. Forster, J. Schmidhuber, Efficient Interactive Multiclass Learning from Binary Feedback, ACM Transactions on Interactive Intelligent Systems (TiiS), vol. 4, no. 3, Aug. 2014, pp. 1-25. [pdf] [online]

 

[5] J. Nagi, A. Giusti, F. Nagi, L. Gambardella, G. A. Di Caro, Online Feature Extraction for the Incremental Learning of Gestures in Human-Swarm Interaction, in Proc. of the IEEE International Conference on Robotics and Automation (ICRA), Hong Kong, China, May 31-Jun. 5, 2014, pp. 3331-3338. [pdf] [online]

 

[6] J. Nagi, H. Ngo, J. Schmidhuber, L. M. Gambardella, G. A. Di Caro, Human-Robot Cooperation: Fast, Interactive Learning from Binary Feedback, in Proc. of the 8th ACM/IEEE International Conference on Human-Robot Interaction (HRI) (Video Session), Bielefeld, Germany, March 3-6, 2014, p. 107. [pdf] [online]

 

[7] J. Nagi, A. Giusti, G. A. Di Caro, L. Gambardella, Human Control of UAVs using Face Pose Estimates and Hand Gestures, in Proc. of the 8th ACM/IEEE International Conference on Human-Robot Interaction (HRI) (Late Breaking Report), Bielefeld, Germany, March 3-6, 2014, pp. 252-253. [pdf] [online]

 

[8] G. A. Di Caro, A. Giusti, J. Nagi, L. M. Gambardella, A Simple and Efficient Approach for Cooperative Incremental Learning in Robot Swarms, in Proc. of the 16th International Conference on Advanced Robotics (ICAR), Montevideo, Uruguay, Nov. 25-29, 2013, pp. 1-8. [pdf] [online]

 

[9] J. Nagi, G. A. Di Caro, A. Giusti, F. Nagi, L. Gambardella, Convolutional Neural Support Vector Machines: Hybrid visual pattern classifiers for multi-robot systems, in Proc. of the 11th International Conference on Machine Learning and Applications (ICMLA), Boca Raton, Florida, USA, Dec. 12-15, 2012, pp. 27-32. [pdf] [online]

 

[10] A. Giusti, J. Nagi, L. Gambardella, G. A. Di Caro, Cooperative Sensing and Recognition by a Swarm of Mobile Robots, in Proc. of the 25th IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Vilamoura, Portugal, Oct. 7-12, 2012, pp. 551-558. [pdf] [online]

 

[11] J. Nagi, H. Ngo, A. Giusti, L. M. Gambardella, J. Schmidhuber, G. A. Di Caro, Incremental Learning using Partial Feedback for Gesture-based Human-Swarm Interaction, in Proc. of the 21st IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN), Paris, France, Sept. 9-13, 2012, pp. 898-905. [pdf] [online]

 

[12] A. Giusti, J. Nagi, L. Gambardella, G. A. Di Caro, Distributed Consensus for Interaction between Humans and Mobile Robot Swarms, in Proc. of the 11th International Conference on Autonomous Agents and Multiagent Systems (AAMAS) (Demonstration Track), Valencia, Spain, Jun. 4-8, 2012, pp. 1503-1504. [pdf] [online]

 

[13] A. Giusti, J. Nagi, L. Gambardella, S. Bonardi, G. A. Di Caro, Human-Swarm Interaction through Distributed Cooperative Gesture Recognition, in Proc. of the 7th ACM/IEEE International Conference on Human-Robot Interaction (HRI) (Video Session), Boston, USA, Mar. 5-8, 2012, p. 401. [pdf] [online]

 

[14] J. Nagi, F. Ducatelle, G. A. Di Caro, D. Ciresan, U. Meier, A. Giusti, F. Nagi, J. Schmidhuber and L. M. Gambardella, Max-Pooling Convolutional Neural Networks for Vision-based Hand Gesture Recognition, in Proc. of the 2nd IEEE International Conference on Signal and Image Processing and Applications (ICSIPA), Kuala Lumpur, Malaysia, Nov. 16-18, 2011, pp. 342-347. [pdf] [online]

Last updated on 1 August 2016.

Copyright © 2016 Jawad Nagi. All rights reserved.