Brain-computer interfaces

Rolling out of bed, brushing your teeth, saying goodbye to your family: most of us take these routine acts for granted. But for people locked into their bodies by diseases such as ALS, or by disabilities resulting from a stroke, these actions may be impossible or, at best, difficult.

But what if technology could bypass these disabilities, allowing people to overcome genetics or disease and return to a normal or even improved life? These breakthroughs may be closer than you think.

“The BrainLab is working to discover
impactful solutions for brain-computer interfaces by uncovering the underlying characteristics that affect users’ control,” says Dr.
Adriane Randolph, director of the BrainLab at the Coles College of Business, Kennesaw State University.

Smart Business spoke with Randolph
about how brain-computer interfaces may
help mitigate debilitating illnesses and injuries and create a new paradigm for the ways
in which humans interact with computers.

What is the working definition of brain-computer interfaces?

Brain-computer interfaces are also known
as direct-brain interfaces and brain-based
interfaces. They are tied to the emerging
fields of neuromarketing, neuroeconomics
and neuroinformation systems (IS). Brain-computer interfaces allow people to communicate and to control devices in their environment without voluntary movement, instead using signals recorded from the brain. In addition, brain-computer interfaces allow applications to be better informed about their users’ states of mind.

At this time, we cannot determine exactly
what you are thinking, but we can read the
patterns of your thoughts. You can learn to
control certain patterns at will and these can
be mapped to interfaces for control. For
example, I cannot read your mind to see directly that you want your coffee with cream, but I can offer you an interface with which you may select coffee with cream, perhaps by concentrating on a related image while that option and other, undesired options are highlighted in turn; I can then see a certain area of your brain light up when the desired option is highlighted.
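The highlighting paradigm described above can be sketched in a few lines of code. This is a toy simulation, not the BrainLab’s actual system: the options, the `simulated_response` scoring function and the repetition count are all illustrative stand-ins for what a real system would obtain from an EEG classifier detecting an evoked response such as the P300.

```python
import random

OPTIONS = ["coffee with cream", "coffee, black", "tea", "water"]
DESIRED = "coffee with cream"  # the option the user is concentrating on

def simulated_response(option):
    """Stand-in for a brain-signal classifier score: a stronger (but noisy)
    response when the highlighted option is the one the user attends to."""
    base = 1.0 if option == DESIRED else 0.2
    return base + random.gauss(0, 0.3)

def run_selection(n_repetitions=20):
    """Flash every option n_repetitions times, accumulate the response
    scores, and pick the option with the strongest total response.
    Averaging over repeated flashes is how such systems overcome noise."""
    totals = {opt: 0.0 for opt in OPTIONS}
    for _ in range(n_repetitions):
        for opt in random.sample(OPTIONS, len(OPTIONS)):  # random flash order
            totals[opt] += simulated_response(opt)
    return max(totals, key=totals.get)

if __name__ == "__main__":
    random.seed(0)
    print(run_selection())  # the attended option wins on average
```

Because each single response is noisy, a real interface trades speed for accuracy by repeating the flashes, which is one reason current systems are slow.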

What types of brain-computer interface
applications are studied?

Activity is recorded through direct means, such as electroencephalography (EEG), which records the electrical activity of the brain, and functional near-infrared (fNIR) imaging, which monitors the oxygenated blood flowing to the areas of the brain needed to fuel different thought processes. Activity is also recorded through indirect means, such as galvanic skin response (GSR), which, similar to a basic polygraph system, records small changes in sweat. These techniques have just begun to move from clinical use to real-world application for control and assessment of user states.
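To make the indirect measure concrete, here is a minimal sketch of the GSR idea: small increases in sweat raise skin conductance, so an arousal event can be flagged when a sample rises above a running baseline. The window size, threshold and sample values are illustrative assumptions; real acquisition and signal processing are far more involved.

```python
def detect_gsr_events(samples, window=5, threshold=0.5):
    """Return indices where skin conductance rises more than `threshold`
    above the mean of the previous `window` samples."""
    events = []
    for i in range(window, len(samples)):
        baseline = sum(samples[i - window:i]) / window
        if samples[i] - baseline > threshold:
            events.append(i)
    return events

# A flat conductance trace with one sudden rise (a simulated sweat response):
signal = [2.0] * 10 + [2.9] * 3 + [2.0] * 5
print(detect_gsr_events(signal))
```

The same compare-against-baseline pattern underlies many assessments of user state: the raw signal matters less than its deviation from the person’s own resting level.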

Who are ideal candidates for the interfaces?

Brain-computer interfaces have been found
useful for some able-bodied populations, but
the target end-users are people who are literally locked into their bodies due to diseases
such as ALS (also known as Lou Gehrig’s disease) or after strokes. Individuals with locked-in syndrome are completely paralyzed and unable to speak but otherwise cognitively intact. Traditional assistive technology is ineffective for these users because of the physical nature of input devices such as mice and keyboards. Unfortunately, we do not yet know the specific profile of an ideal candidate for the various types
of brain-computer interfaces, and this is the
primary mission of the KSU BrainLab.

What is the biggest challenge of changing
thoughts into actions?

To be used as nonphysical input channels for communication and control, brain-computer interfaces require that users be able to harness the appropriate electrophysiological responses. There is currently no formalized process for determining a user’s aptitude for controlling the various brain-computer interfaces without testing on an actual system. I developed a model for matching users with various interfaces in a way that predicts control, and I seek to further validate that model. In addition, current brain-computer interfaces are still quite slow and error-prone: a user may generate only about three words per minute, compared with roughly 200 words per minute for unassisted human communication. Although that rate may not sell able-bodied individuals on the technology, it is a significant triumph for someone who might not otherwise have an outlet.

How might brain-computer interfaces cut
across business methodologies?

Brain-computer interfaces offer a new paradigm for the ways in which humans interact
with computers. They provide more informed, ‘natural’ user interfaces. Thus,
organizations may better understand their
clients’ true motivations. Further, we have
seen incredible leaps in nanotechnology and
changes in the way business is conducted
due to innovations such as the Internet and
wireless technologies. Brain-computer interfaces sit just beyond the horizon of technological breakthroughs that will impact business methodologies in the future. Already,
companies such as Microsoft and IBM are exploring this potential.

DR. ADRIANE B. RANDOLPH is an assistant professor of Business Information Systems and director of the BrainLab at the Coles
College of Business, Kennesaw State University. She can be reached at (770) 423-6083 or [email protected].
