Detecting Brain Signals During Multitasking for an Adaptive User Interface

Merve Turgut
4 min read · Apr 26, 2021
Photo by Roman Odinstov on Pexels: a multitasking painting of Gautama Buddha on the wall of a shabby house.

Multitasking is a myth, but we still try to do it. Many of us (especially women) seem to juggle two or more things at once effortlessly and may think we are great at multitasking, but we are not. Our brains aren’t designed to do multiple things at once. Previous studies show that what we are really doing is switching between tasks rather than fully engaging in any one of them. This comes with a neurobiological cost: a depletion of brain resources. An hour or two into an attempt to multitask, we feel tired and can’t focus. That’s why people in professions like air traffic control, who monitor many things at once, are mandated to take 15-to-30-minute breaks every hour and a half or two. They are supposed to unplug, listen to music, or take a walk to restore the neurochemicals depleted by the constant task switching their job demands.

Getting help from a human-robot system

Solovey et al. researched the subject in 2011 and proposed a human-robot system that adapts to the user’s cognitive state to help the user perform better and feel less frustrated during multitasking. The proposed system uses a type of interface called an “Adaptive User Interface” (AUI): as the name suggests, the interface adapts to the user instead of forcing the user to adapt to it. One everyday example of an AUI is predictive text on our mobile devices. As we type, the phone predicts what we want to say and completes it for us. Automatic assistive guidance and shortcuts like this make a user interface pleasant to use.
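To make that adaptation loop concrete, here is a minimal sketch of a predictive-text-style AUI in Python. Everything in it (the class name, the frequency-based ranking, the toy data) is my own illustration, not something from the paper:

```python
from collections import Counter

class PredictiveTextAUI:
    """Toy adaptive UI: learns from what the user types and adapts its suggestions."""

    def __init__(self):
        self.word_counts = Counter()  # per-user model built from observed input

    def observe(self, text: str) -> None:
        """Update the model every time the user finishes typing something."""
        self.word_counts.update(text.lower().split())

    def suggest(self, prefix: str, k: int = 3) -> list[str]:
        """Rank completions for the current prefix by how often this user typed them."""
        prefix = prefix.lower()
        candidates = [(w, c) for w, c in self.word_counts.items() if w.startswith(prefix)]
        return [w for w, _ in sorted(candidates, key=lambda wc: -wc[1])[:k]]

# The interface adapts to the user: the words this user types often bubble up first.
aui = PredictiveTextAUI()
aui.observe("running late, start the meeting without me")
aui.observe("running the second experiment today")
print(aui.suggest("ru"))   # ['running']
print(aui.suggest("me"))   # ['meeting', 'me']
```

Even this toy version captures the essence: the interface changes its behavior based on what it has observed about the user, with no explicit configuration step.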

Difficulties of designing a good AUI for multitasking

Developing a good AUI for multitasking is challenging. There is a lot to consider:

  • The needs during multitasking may change over time
  • Multitasking can evoke several different cognitive states
  • Not all multitasking activity is the same
  • Workload and attention shifting must be measured effectively in a dynamic environment

Previous research focused on behavioral measurements such as response time, accuracy, keystrokes, or screen contents to design a good AUI. These metrics are not enough without an understanding of the internal cognitive processes that occur during task performance; we need to detect brain signals to understand the internals of multitasking and the mental workload it creates.

Tools used to detect brain signals

In their research, Solovey et al. picked one of the most frequently used brain-measuring tools: functional near-infrared spectroscopy (fNIRS), a portable, non-invasive tool for detecting brain activity. They used it to differentiate four mental processes that may occur during multitasking and that are impossible to measure with behavioral measures or task performance alone. Using fNIRS, these four states could be distinguished automatically.

A specialized fNIRS cap with non-invasive probes allows researchers to review the images in real time.
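The paper’s actual machine-learning pipeline isn’t reproduced here, but the general idea (classify a cognitive state from short windows of fNIRS channel data) can be sketched roughly as follows. The synthetic data, window size, features, and choice of classifier are all assumptions made for illustration, not details from the study:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Synthetic stand-in for fNIRS recordings: windows of (channels x samples).
# In a real system each window would hold a few seconds of oxygenated/
# deoxygenated hemoglobin readings from the sensor channels.
N_WINDOWS, N_CHANNELS, N_SAMPLES = 200, 4, 50
X_raw = rng.normal(size=(N_WINDOWS, N_CHANNELS, N_SAMPLES))
y = rng.integers(0, 4, size=N_WINDOWS)  # four hypothetical cognitive states

# Simple per-channel summary features: mean level and overall slope of each window.
means = X_raw.mean(axis=2)
slopes = X_raw[:, :, -1] - X_raw[:, :, 0]
X = np.hstack([means, slopes])  # shape: (N_WINDOWS, 2 * N_CHANNELS)

# Any off-the-shelf classifier could sit here; logistic regression keeps it simple.
clf = LogisticRegression(max_iter=1000)
print("cross-validated accuracy:", cross_val_score(clf, X, y, cv=5).mean())
# With purely random data this hovers near chance (~0.25); real fNIRS features should do better.
```

The point is only the shape of the pipeline: window the signal, extract simple features per channel, and train a standard classifier to label the user’s current state.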

The research included two experiments that provided real-time cognitive-state information as input to the proposed adaptive human-robot system. The system could change its behavior based on the brain signals it received, helping the user during multitasking by supporting better task switching and interruption management.
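The adaptive side can be pictured as a simple control loop in which the classifier’s output drives how much the system automates and when it interrupts. The sketch below is only an illustration of that idea; the state labels, confidence threshold, and policy are my own placeholders, not the behavior of Solovey et al.’s robot:

```python
from dataclasses import dataclass

@dataclass
class CognitiveEstimate:
    state: str         # e.g. "branching", "dual_task", "delay" (placeholder labels)
    confidence: float  # classifier confidence in [0, 1]

def adapt(estimate: CognitiveEstimate) -> dict:
    """Map the detected cognitive state to an interface policy (illustrative only)."""
    if estimate.confidence < 0.6:
        # Not sure what the user is doing: change nothing to avoid erratic behavior.
        return {"automation": "unchanged", "interruptions": "allowed"}
    if estimate.state == "dual_task":
        # User is juggling two tasks: automate the secondary one, hold interruptions.
        return {"automation": "take_over_secondary_task", "interruptions": "deferred"}
    if estimate.state == "branching":
        # User is switching back and forth: help with interruption management.
        return {"automation": "unchanged", "interruptions": "batched"}
    return {"automation": "unchanged", "interruptions": "allowed"}

# Example: a confident "dual task" reading shifts some of the work onto the system.
print(adapt(CognitiveEstimate(state="dual_task", confidence=0.82)))
```

Note the low-confidence branch: doing nothing when the estimate is uncertain is one way to keep the adaptation from feeling erratic to the user.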

AUI design problems

Adaptive brain-based user interfaces can help during multitasking by automating some of the many concurrent tasks and reducing the user’s mental workload. Managing mental workload has been an important goal of this research, since high mental workload is one of the main causes of errors and performance degradation.

To design an AUI and call it successful, many factors must be considered. Evaluating the interface against both human performance measures and system performance criteria, such as automation reliability and the costs of action outcomes, will produce different and more complex automation schemes for each system to meet the varied needs of users.

Beyond the complexity issue, system automation might also result in:

  • Decreased situation awareness
  • Increased user complacency
  • Skill degradation of the user

Conclusion

The findings of this non-invasive brain-sensing research give hope that, by developing intelligent adaptive user interfaces with a greater understanding of the user’s cognitive state during multitasking, we can increase performance and accuracy without any explicit action by the user.

The one thing to watch is that any adaptation must be done carefully, so that the user never feels he or she has lost control!

References:

Solovey, E. T., Lalooses, F., Chauncey, K., Weaver, D., Parasi, M., Scheutz, M., Sassaroli, A., Fantini, S., Schermerhorn, P., Girouard, A., & Jacob, R. J. K. (2011). Sensing Cognitive Multitasking for a Brain-Based Adaptive User Interface. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI ’11).
