Mobile robot guidance

WCU Author/Contributor (non-WCU co-authors, if there are any, appear on document)
Oscar Gamez (Creator)
Western Carolina University (WCU)
Paul Yanik

Abstract: Assistive robotics is a rapidly growing field with many applications. In an assisted living setting, patients may experience compromised mobility and be left either temporarily or permanently restricted to wheelchairs or beds. Assistive robotics in these settings could revolutionize treatment for immobile individuals by promoting effective patient-environment interaction and increasing the independence and overall morale of affected individuals.

Currently, there are two primary classes of assistive robots: service robots and social robots. Service robots assist with tasks that individuals would normally complete themselves but cannot due to impairment or temporary restriction. Assistive social robots include companion robots, which stimulate mental activity and intellectually engage their users. Current service robots may integrate depth sensors and visual recognition software into one self-contained unit. The depth sensors are used for obstacle avoidance. Vision systems may serve many applications, including obstacle avoidance, gesture recognition, and object recognition. Gestures may be interpreted by the unit as commands to move in the indicated direction.

Assistive mobile robots have included devices such as laser pointers or vision systems to determine a user's object of interest and its location. Others have used video cameras for gesture recognition, as stated above. Approaches to mobile robot guidance involving these devices may be difficult to use for individuals with impaired manual dexterity; an immobile individual would find it difficult to operate such devices.

The objective of this research was to integrate a method that allows the user to command a robotic agent to traverse to an object of interest by utilizing eye gaze.
This approach allowed the individual to command the robot with eye gaze through the use of a head-worn gaze tracking device. Once the object was recognized, the robot was given the coordinates retrieved from the gaze tracker. The unit then proceeded to the object of interest, utilizing multiple sensors to avoid obstacles.

In this research, the participant was asked to don a head-worn eye gaze tracker. The device gathered multiple points along the x, y, and z coordinate axes. MATLAB was used to determine the accuracy of the collected data and to compute the set of x, y, and z coordinates needed as input for the mobile robot. After analyzing the results, it was determined that the eye gaze tracker could provide x and y coordinates suitable as inputs for the mobile robot to reach the object of interest. The z coordinate was determined to be unreliable, as it would either fall short of or overshoot the object of interest.
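The averaging step described above can be sketched as follows. The thesis performed this analysis in MATLAB; this illustrative sketch is in Python, and the sample values are hypothetical, chosen only to show how a single fixation point might be derived from repeated gaze samples and how per-axis spread could flag the reported unreliability of the z (depth) estimate.

```python
import statistics

def mean_fixation(samples):
    """Average repeated (x, y, z) gaze samples into one target point.

    samples: list of (x, y, z) tuples from the gaze tracker.
    Returns per-axis means and per-axis spreads; the caller can inspect
    the spreads to judge reliability before handing coordinates to the
    robot (in this work, z was found unreliable while x and y were usable).
    """
    xs, ys, zs = zip(*samples)
    means = (statistics.mean(xs), statistics.mean(ys), statistics.mean(zs))
    spreads = (statistics.pstdev(xs), statistics.pstdev(ys), statistics.pstdev(zs))
    return means, spreads

# Hypothetical gaze samples (units arbitrary): x and y cluster tightly,
# while z scatters, mirroring the reported depth unreliability.
samples = [(1.02, 0.48, 2.9), (0.98, 0.52, 1.4), (1.00, 0.50, 3.6)]
(mx, my, mz), (sx, sy, sz) = mean_fixation(samples)
print(mx, my)   # stable planar target that could be handed to the robot
print(sz > sx)  # True for this data: depth spread dwarfs the x spread
```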

Additional Information

Language: English
Date: 2018
Subjects:
Mobile robots
Eye tracking
Robots -- Programming
Computerized self-help devices for people with disabilities
People with disabilities
Human-robot interaction
