FEATURE


Around the world, robots are working with humans to make some of the most dangerous jobs more productive and much safer:

  • In manufacturing, robots perform repetitive and sometimes hazardous tasks such as welding, assembling, dismantling, packaging, inspecting, and testing, at higher speeds and with greater precision than humans can achieve.
  • In oil production, robotic drills operate in extreme cold and heat, at sea, and in other environments that can become too treacherous for people to manage alone.
  • During disasters, robots explore narrow, flooded, or abandoned sites too perilous for emergency responders.

In many situations, the robots cannot operate by themselves. To succeed, they often need assistance from the problem-solving abilities of the human brain.

Watch a video about the research at the Texas A&M NeuroErgonomics Lab.
The NeuroErgonomics Lab team using mobile brain-body imaging.

“Robots contribute their precision and reliability to performing tasks that are repetitive or dangerous,” said Ranjana Mehta, associate professor in the College of Engineering and director of the Texas A&M NeuroErgonomics Lab. “People provide intelligence and ingenuity to choose the best option for solving a complex problem.”

Partnerships between humans and robots are proving so successful that technologies that support them are expected to become a $13 billion industry by 2025, Mehta said. But there’s a new challenge to overcome: People and robots don’t always work well together. Humans tend to get bored, distracted, stressed, or fatigued, whereas robots remain focused but rigid (or nonadaptive). Those differences can strain their relationships, making their partnerships inefficient, unproductive, or even futile.

Moreover, people and robots often share a confined work space. As a result, their collaborations can become as hazardous as the tasks they perform. Robots may accidentally strike their partners or trap them against another object.

To solve those problems, the National Science Foundation has awarded Mehta and her team a $1.2 million grant to study how humans interact with their robot collaborators.

The NeuroErgonomics Lab and its partners are using mobile brain-body imaging to study how and why people become distracted, fatigued, or anxious while working with robots in Texas manufacturing facilities. Armed with that information, the researchers will use machine learning to help robots adapt to human behavior. Sarah Hopko, a doctoral student in the NeuroErgonomics Lab, is also working on modeling and predicting human-robot trust, using novel neural signals to inform the design and real-time calibration of trustworthy collaborative robots. In addition, the researchers will create an augmented-reality system (a computer-generated environment that overlays real-time information, such as text, graphics, or audio, onto both virtual and real-world objects) that will allow workers to collaborate with robots more safely.

However, many dangerous jobs still require humans to expose themselves to deadly environments: consider emergency responders such as firefighters, police officers, paramedics, search-and-rescue personnel, and contamination crews. A new generation of high-tech tools exists to help protect such workers:

  • wearable exoskeletons that increase strength and stamina, allowing humans to lift heavy objects with ease while preserving the ability to make decisions and take actions independently;
  • computer-assisted headgear that engages workers within an augmented reality, clearly and precisely displaying instructions, directions, blueprints, cross sections, maps, warnings, and other vital information in real time; and
  • real-time communication with industrial-grade robots, on the ground and in the air, which can perform highly demanding tasks with greater efficiency and precision, or can collect and report data from hazardous locations.

Those “human augmentation technologies,” also known as HATs, already exist and are available for purchase and deployment. Yet industries and governments have been slow to adopt them, mainly because existing systems for training humans to use HATs are costly, cumbersome, and, above all, ineffective.

To address that challenge, Mehta and her collaborators have been awarded a two-year, $5 million NSF grant to create a HAT-training system specifically for emergency responders. The system will use mixed-reality technology to immerse users in a variety of emergency and disaster scenarios. Like augmented reality, mixed reality permits the user to move within a physical environment while adding a computer-generated overlay of digital data and virtual objects; unlike augmented reality, it also anchors those virtual objects within the user’s physical surroundings. That approach will allow responders to gain hands-on experience with exoskeletons, headgear, and newly emerging HATs in simulations of burning buildings, flooding streets, devastated cities, and other hazardous situations while remaining safely within a virtual experience.

In addition, the system, known as LEARNER (Learning Environments with Augmentation and Robotics for Next-gen Emergency Responders), will readily adapt to the needs of the trainee. “Responders tend to vary in their backgrounds, experiences, skill levels, learning abilities, and their readiness to take on new technologies,” Mehta said. “Today’s HAT-training systems take a one-size-fits-all approach. Instead, LEARNER will use personalized algorithms to adapt to each responder’s needs and progress.”

Beyond the world of emergency response, LEARNER has the potential to train any workforce to employ HATs for extremely demanding tasks in complex, stressful, and often deadly environments.

“Our overall research also offers the potential to create jobs for workers with disabilities, to allow aging employees to remain in the workforce, and to immediately improve the skill sets of novice workers,” Mehta said. “It should also teach us a lot about the challenges of persuading workers to trust and accept new technologies.”

