Brain-Based Computer Interfaces in Virtual Reality
Brain-Based Computer Interfaces in Virtual Reality
Abstract—Virtual Reality (VR) research is accelerating the increase the cognitive ability of people to process informa-
development of inexpensive real-time Brain Computer Interface tion [1]. More information can be communicated between
(BCI). Hardware improvements that increase the capability of the human and the machine in a shorter amount of time
Virtual Reality displays and Brain Computer wearable sensors
have made possible several new software framework for devel- when using all the senses, such as audition, haptic feedback,
opers to use and create applications combining BCI and VR. vision, and somatosensation, than the old communication
It also enables multiple sensory pathways for communications bottleneck of keyboard input with display output.
with a larger sized data to users’ brains. The intersections In the VR interaction between the user and the computer,
of these two research paths are accelerating both fields and
will drive the needs for an energy-aware infrastructure to
the human user receives rich output that is felt through
support the wider local bandwidth demands in the mobile multiple senses inputting data into the brain. The BCI
cloud. In this paper, we complete a survey on BCI in VR from delivers faster input into the computer program. The fingers,
various perspectives, including Electroencephalogram (EEG)- voice, eye gaze, skin, head and body position are concurrent
based BCI models, machine learning, and current active channels of interactive communication between human and
platforms. Based on our investigations, the main findings of
this survey highlights three major development trends of BCI,
computer [2]. Numerous studies [3]–[6] show that humans
which are entertainment, VR, and cloud computing. can use brain Electroencephalogram (EEG) signals to con-
Index Terms—Virtual reality, electroencephalogram, brain vey their intentions to computers using BCI, a pathway
computer interface, brain machine interface, cloud computing between an enhanced, sensor-equipped or wired brain and
an external computing device. BCI enables user interaction
through “thought”.
I. I NTRODUCTION Recent advances in machine learning in BCI research
Computer technologies are at the edge of a huge leap combined with the development of inexpensive sensor wear-
forward with direct interface to the brain combined with able devices has made practical the interaction of users
Virtual Reality (VR). The increasing capabilities, processing with computers and mobiles through brain EEG signals
speed and maturity of the hardware has made VR software [7]. In the Virtual Environment (VE), the human perception
and devices more useful, affordable and responsive. Concur- of immersion has become a measurable factor [8], [9].
rently, machine learning is quickly advancing innovations While the physical movement of the human body responds
in Brain-based Computer Interface (BCI), which is already to the VE, its brain waves are active in dealing with
moving quickly to become widely available as the sensor environmental information, decision-making in space, and
technology is becoming more economical. The intersection motor movement [10]. Through real-time detection, EEG
of the two, VR and BCI, is the topic of this survey paper. communications data is sent to the computer containing
Natural human interaction is experienced in real behavior these variations and changes in brain activity. The significant
of user, senses and thoughts as they react in vast whole features of this brain activity is measured and translated into
body awareness to the VR experience. This immersion can control signals that can drive an output, with meaningful
content feedback. The pattern classification or machine
1 S. Li is with the Department of Computer Science, Pace University, learning or artificial intelligence analysis of the EEG signals
New York, NY 10038, USA, sukun.li@pace.edu. to find control signals in those patterns, in real-time, is the
2 A. Leider is with the Department of Computer Science, Pace Univer-
sity, New York, NY 10038, USA, aleider@pace.edu. cutting edge of the VR-BCI research field.
3 M. Qiu is with the Department of Computer Science, Pace University,
Our contributions of this work are twofold:
New York, NY 10038, USA, mqiu@pace.edu.
4 K. Gai is with the Department of Computer Science, Pace University, 1) We investigate the existing BCI VR researches from
New York, NY 10038, USA, keke.gai@pace.edu. three crucial aspects that are EEG-based BCI models,
5 M. Liu is with College of Electrical Engineering, Zhejiang University,
ZJ 310027, China, liumeiqin@zju.edu.cn machine learning, and platforms. The findings can be
∗ M. Qiu is the corresponding author of this paper. Email address: used as reference for future relevant researches.
mqiu@pace.edu 2) We present three main development trends of BCI
This work has been partially supported by the Open Research Project
of the State Key Laboratory of Industrial Control Technology, Zhejiang research in VR field, which include entertainment, VR-
University, China, ICT170331 (Professor Meikang Qiu). based applications, and cloud computing.
301
TABLE I: Summary of Classification Methods of BCI
BCI systems that need a rapid real-time response, but that C. Non-Linear Classifiers
are restricted to using limited computational resources, such 1) Support Vector Machine (SVM): It is possible to create
as mobile devices. nonlinear decision boundaries by using the “kernel trick”
The LDA decision plane can be represented mathemati- K(x, x ). This nonlinear SVM leads to a more flexible
cally as: decision boundary in the data space, which can increase
classification accuracy. The kernel function used in BCI
f (x) = wxT + b (1) research is the Gaussian or Radial Basis Function (RBF)
w is the linear model coefficient or weight vector, x is the kernel:
input feature vector and b as the bias. The weight vector w
−γ||x − x ||2
could be calculated as: K(x, x ) = exp( ) (5)
2σ 2
w= Σ−1
k (μ2 − μ1 ) (2) Thus, the decision function will be calculated as.
Where μ is the estimated mean of class i, and Σ is the
n
−γ||x − x ||2
covariance matrix. The estimators of the mean and of the f (x) = αi exp( )+b (6)
2σ 2
covariance matrix are calculated as: i=1
1
n
The corresponding nonlinear SVM is known as Gaussian
μ= xi (3) SVM or RBF-SVM. The RBF-SVM have given good results
n i=1
for BCI synchronous applications, fast enough for real-time
1
n
BCI.
Σk = (xi − μ)(xi − μ)T (4) 2) k-Nearest Neighbors (k-NN): The k-NN works by
n − 1 i=1
distinguishing the features corresponding to the different
x is a matrix containing n feature vectors. x1 , x2 , ..., xn ∈ classes to form separate clusters in the feature space, when
Rd . the closest neighbors belong to the same class. In classifying
2) Support Vector Machine (SVM): Support Vector Ma- the test feature vectors, k-NN distances that are similar
chine (SVM) is a discriminative classifier, similar to LDA, between the test vector and each class are considered
defined by finding a hyperplane in order to separate the between the clusters of the test sample and the most recent
feature vectors into several classes. What differs, in the class.
theory of SVM, is that it optimally selects hyperplanes The advantage of using the k-NN approach in classifica-
by finding the maximums of the margins of training data, tion is that the error probability, in the decision of which
which may increase the generalization errors for real-time cluster class the data point belongs to, is decreased. Some
online classifications of EEG data, in contrast to the LDA training samples may be affected by noise and artifacts,
classifiers. which can influence the classification results. If decisions
SVM supports using both linear and non-linear decision are made involving several neighbors, it is less likely that
boundaries. The linear analysis classifier version, linear errors will occur because the probability of several simulta-
SVM, uses regularization, in order to prevent the classifier neous errors in the data is much lower. On the other hand,
from being distracted by noisy datasets in large amounts of if several k-NN closest classes are considered, then a voting
BCI data [20]. scheme is then required to decide between the competing
302
choices. This k-NN with a weighting function (aka WKNN) the input data from BCI can help the computer learn how
is defined by the following equation (7) [21] : the human brain it is interfaced to works, resulting in a
deep learning enhanced ANN, where as fast as the user
dk −di
if dk = d1 trains their thoughts to become control signals, the computer
wi = dk −d1
k
(7)
1 if dk = d1 learns how to anticipate those signals in a better and more
careful way, most likely by using fewer attributes to get a
where di denotes the distance of the i-th nearest neighbor faster, more brain-equivalent result.
from a test example. So d1 corresponds to the nearest
neighbor and dk to the furthest neighbor. The decision rule IV. VR C APABLE BCI S OFTWARE
of k-NNC assigns the unknown examples to the class with Researchers interested in this area find a wealth of
the greatest sum of weights among its k nearest neighbors. software frameworks available to work on BCI with VR.
3) Mahalanobis Distance: Mahalanobis Distance is a Current BCI software platforms are: BCI2000 [22] , Open-
Neural Network (NN) classifier which assumes a Gaussian ViBE [23], BCILAB [24], BioSig [25], and FieldTrip [26] of
distribution N (μc , Mc ) for each class c. A feature vector the MATLAB software toolbox, a real-time processing VR
x is assigned to this class that corresponds to the nearest solution by g.tec, and the OpenBCI platform. These software
prototype: platforms, built for BCIs, and all VR capable, offer packages
for data acquisition, feature extraction, classification, and
Dc (x) = (x − μ)M −1 (x − μ)T (8) feedback presentation.
a) BCI2000: The BCI2000 is a general-purpose soft-
D. Neural Network Classifiers ware system [22] for diverse areas of real-time bio-signal
Neural Networks working with linear classifiers are com- processing in addition to EEG. It is not an open-source but
mon in BCI research. They can universally approximate any it is free for non-profit research and education.
continuous function. Their structure of multiple neurons and b) BCI++: BCI++ of Laboratory Sensibilab is a tool
layers simulate brain shape-shifting pattern recognition in for fast prototyping of BCI systems. The BCI++ framework
Artificial Neural Networks (ANN). consists of two main modules: Hardware Interface Mod-
The brain copying algorithms of ANN separate non- ule(HIM) and Graphical User Interface (GUI), that com-
linear data into classes as researchers think the human brain municate with each other over TCP/IP. This architecture is
recognizes patterns. ANN use hidden layers (at least one designed to divide real-time BCI/BMI system development
hidden layer) between the layer of input and the layer of into two parts: (1) signal processing algorithms and (2) a
output. Computer scientists are not sure how ANN works specific 2D/3D Graphic Engine GUI by AEnima.
between the hidden layers. c) BioSig: BioSig [25] is an open-source software
1) Multilayer Perceptron (MLP): A MLP is composed library. It was designed for biomedical signal processing
of multiple layers of neurons called Perceptrons. It is used for BCI research. It works with Matlab and has data
when the data is linearly separable, and is composed of a import/export, artifact processing, quality control, feature
minimum of three layers: one layer of input, one or several extraction algorithms and classification methods. The rtsBCI
hidden layers, and one layer of output, shown in Figure package is available for rapid prototyping.
2. Each input of a neuron is connected with the output d) BCILAB: BCILAB is an open source toolkit also
of neurons in the previous layer, where the output layer based on MATLAB for advanced BCI research that of-
determines the class of the input vector. fers a graphical and scripting user interface.A. Based on
the MATLAB environment, the main advantages of the
BCILAB toolbox are rapid prototyping, real-time testing,
and comparative evaluations.
BCILAB was built to support the mobile brain/body
imaging (MoBI) study [27], BCILAB can classify multiple
simultaneous data modalities, including eye gaze, body
motion capture and EEG, as well as other biological signals
such as those used in the DataRiver framework in ERICA
[24].
e) OpenViBE: OpenViBE is a free and open-source
software platform for the design, test and use of BCI
[23]. This independent platform can be run on Windows
Fig. 2: Diagram using MLP in ANN. and Linux systems without dependence on other software
and hardware. The OpenViBE platform is designed for
An exciting idea in the machine learning algorithms of non-programmers. It features an easy-to-use graphical user
BCI research is that using the non-linear ANN to process interface for authors and operators of the BCI applications.
303
TABLE II: Platform Comparisions B. Virtual Reality
Platform Requirements 3D Open A promising design of VR BCI is “Mind-Mirror”, de-
Visual- Source signed by Mercier-Ganady at al. [28] in 2014. It enable
ization
BCI2000 Independent System - - users to see the real-time changes in the brain’s brain
BCI++ Independent System - - waves through their head. This approach uses an optical
BioSig Matlab/Simulink library - face-tracking system through a semi-transparent mirror as a
BCILAB Matlab toolkit -
OpenViBE Independent System screen, to display and automatically follow the user’s head
- means not support movements. The resulting brain activity is extracted and
means support processed in real-time with an EEG device worn by the
user. This application is uses a Microsoft Kinect camera
for head movement tracking, and uses OpenViBE software
In a comparison with other platforms, as shown in Table as an platform to acquire and analyse the EEG data, then,
II, OpenViBE is superior to VR BCI application research. It through a Unity3D-based program, simulates and displays
provides third-party embedded tools to design and expand the virtual brain in the mirror back to the user. The real-
virtualized display and feedback, as well as real-time 3D time EEG signal power over the brain’s surface is seen and
visualization of brain activities, compared to other reviewed understood in the resulting brain topography visualization.
platforms. In addition, OpenViBE can be used for building In the medical field, Jose and Hugo et al. [6] designed
cloud-supported mobile computing devices to run the VR and developed a mixed reality solution, Brain AR/VR, to
applications of BCI. guide doctors during Transcranial Magnetic Stimulation
(TMS) procedures. Brain AR/VR was deployed in 2016 on a
V. D EVELOPMENT T RENDS Samsung Galaxy S4 smart-phone with an Android operating
The BCI VR research field is motivated by the commer- system. The TMS experts are using EEG caps to input their
cial promise of interactive video game technology, mobile brain wave data into the mobile application.
applications and medicine. We summarize three major de- C. Cloud Computing
velopment trends for future research of BCI in VR. The next development is cloud computing [29]–[31] that
is an merging technical term for achieving in-demand ser-
A. Entertainment vices by using the Internet-based technologies. The driven
BCI VR improves the way games are played by including force of using cloud computing in BCI is in line with
feedback information from brain activity, providing access the growing volume of data and the demand of real-time
to knowledge about the user experience. BCI can report on data analytics. Mobile devices do not always have enough
player’s mood and state of mind, including boredom, anxiety capacity to perform all the needed calculations for real-time,
or frustration. For example, “Mind Balance” is a video game which is most useful for control signals. For example, the
where the user must assist a frog-like character by helping contradictions between energy and working efficiency are
him keep his balance as he totters across a cosmic tightrope. generally considered a tradeoff in system designs.
The data detected and measured in this game is wirelessly Moreover, concurrent with the VR and BCI research ad-
collected with a “Cerebus” headset that captures brain vances are innovations in mobile cloud computing, spurred
activity, and feeds it into a C# signal processing engine, on by the desire to save energy [32] with green computing.
which subsequently analyzes those signals and determines The feature of data collections for brain waves determines
whether the user is looking to the left or right. that the process needs to be continuous and synchronous
A promising real-time BCI gaming system was designed in order to meet the needs of medical analyses or health
by Martisius and Damasevicius in 2015 [11]. It is a three- record tracking [33]. A centralized data center can save
class BCI system based on the State Visually Evoked Po- on-premises storage [34] and workloads, which can be
tentials Paradigm (SSVEP) and the Emotiv EPOC headset. beneficial for designing complex distributed systems as well
Their online target shooting game, implemented OpenViBE, as higher-level functions. Therefore, consider distributed
allows the user to mentally explode objects in the air, computing and different service deployment, future BCI
through controlled focus on visual stimulus as the EEG studies aligning with VR can be associated with the field
signals are processed. of cloud computing, such as cloud resource management,
This gaming system utilizes wave atom transformation for wireless communications [35], [36], security and privacy
feature extraction, achieving an average accuracy of 78.2% [37], and integrated cloud system.
using the linear discriminant analysis classifier, 79.3% using VI. C ONCLUSIONS
the SVM classifier with a linear kernel, and 80.5% using
another SVM with radial basis function kernel. The reason This paper presented a literature review on BCI applica-
this game is using multiple methods is that no single method tions in VR. We found that BCI and VR researches were
fully met the requirement of the real-time BCI applications. accelerating and the increase of communications bandwidth
304
between computers and humans was revolutionary. The [18] F.Lotte, M.Congedo, A.Lécuyer, F.Lamarche, and B.Arnaldi. A
review of classification algorithms for eeg-based brain–computer
prediction of the future of BCI and VR research emphasized interfaces. Journal of neural engineering, 4(2):R1, 2007.
the improvement of supporting cloud and mobile computing. [19] E. Haselsteiner and G. Pfurtscheller. Using time-dependent neural
Reducing extra energy waste with a cloudlet model means networks for EEG classification. IEEE Trans. on Rehabilitation
Engineering, 8(4):457–463, 2000.
battery power in the mobile device was conserved, making [20] M.Congedo, F.Lotte, and A.Lécuyer. Classification of movement in-
the BCI VR mobile device practical. Moreover, dynamic tention by spatially filtered electromagnetic inverse solutions. Physics
energy-aware small clouds could handle the larger commu- in medicine and biology, 51(8):1971, 2006.
[21] S.Dudani. The distance-weighted k-nearest-neighbor rule. IEEE
nications channel demands between the VR BCI and its Transactions on Systems, Man, and Cybernetics, SMC-6(4):325–327,
mobile computing device, as well as the mobile device and 1976.
the cloud resources used for the computationally intensive [22] G. Schalk, D. McFarland, T. Hinterberger, N. Birbaumer, and J. Wol-
paw. BCI2000: a general-purpose brain-computer interface system.
EEG-signal-to-control-signal pattern classifiers key to mak- IEEE Trans. on Biomedical Eng., 51(6):1034–1043, 2004.
ing this work. This unique new area of network support, [23] Y.Renard, F.Lotte, G.Gibert, M.Congedo, E.Maby, V.Delannoy,
was the next logical step to allow the wide adoption of BCI O.Bertrand, and A.Lécuyer. Openvibe: An open-source software
platform to design, test, and use brain–computer interfaces in real
VR innovations. and virtual environments. Presence: teleoperators and virtual envi-
R EFERENCES ronments, 19(1):35–53, 2010.
[24] C. Kothe and S. Makeig. Bcilab: a platform for brain-computer
[1] E.Rennison, L.Strausfeld, and D.Horowitz. Immersive movement- interface development. J. of Neural Engineering, 10(5):056014, 2013.
based interaction with large complex information structures, Novem- [25] A.Delorme, C.Kothe, A.Vankov, N.Bigdely-Shamlo, R.Oostenveld,
ber 28 2000. U.S. Patent 6,154,213. T.Zander, and S.Makeig. Matlab-based tools for bci research. In
[2] A.Dix. Human-computer interaction. Springer, 2009. Brain-Computer Interfaces, pages 241–259. Springer, 2010.
[3] Y. Zhang, Z. Zhu, and Z. Yun. Empower VR art and AR book with [26] R. Oostenveld, P. Fries, E. Maris, and J. Schoffelen. Fieldtrip:
spatial interaction. In Int’l Symp. on Mixed and Augmented Reality, open source software for advanced analysis of MEG, EEG, and
pages 274–279. IEEE, 2016. invasive electrophysiological data. Computational Intelligence and
[4] W. Neto, K. Shimizu, H. Mori, and T. Rutkowski. Virtual reality Neuroscience, 2011:1, 2011.
feedback environment for brain computer interface paradigm using [27] E.Jungnickel and K.Gramann. Mobile brain/body imaging (mobi) of
tactile and bone-conduction auditory modality paradigms. In 15th physical interaction with dynamically moving objects. Frontiers in
Int’l Symp. on SCIS, pages 469–472. IEEE, 2014. human neuroscience, 10, 2016.
[5] S.Finkelstein, A.Nickel, T.Barnes, and E.Suma. Astrojumper: Moti- [28] J.Mercier-Ganady, F.Lotte, E.Loup-Escande, M.Marchal, and
vating children with autism to exercise using a vr game. In CHI’10 A.Lécuyer. The mind-mirror: See your brain in action in your head
Extended Abstracts on Human Factors in Computing Systems, pages using eeg and augmented reality. In Virtual Reality (VR), 2014
4189–4194. ACM, 2010. iEEE, pages 33–38. IEEE, 2014.
[6] J. Soeiro, A. Cláudio, M. Carmo, and H. Ferreira. Mobile solution [29] M. Qiu, M. Zhong, J. Li, K. Gai, and Z. Zong. Phase-change
for brain visualization using augmented and virtual reality. In 20th memory optimization for green cloud with genetic algorithm. IEEE
Int’l Conf. on Information Visualisation, pages 124–129. IEEE, 2016. Transactions on Computers, 64(12):3528 – 3540, 2015.
[7] A.Fraguela, J.Oliveros, MM.orı́n, and L.Cervantes. Inverse electroen- [30] K. Gai, M. Qiu, and H. Zhao. Cost-aware multimedia data allocation
cephalography for cortical sources. Applied Numerical Mathematics, for heterogeneous memory using genetic algorithm in cloud comput-
55(2):191–203, 2005. ing. IEEE Transactions on Cloud Computing, PP(99):1–11, 2016.
[8] J.Bertera and K.Rayner. Eye movements and the span of the effective [31] K. Gai and S. Li. Towards cloud computing: a literature review on
stimulus in visual search. Attention, Perception, & Psychophysics, cloud computing and its development trends. In The 4th Int’l Conf.
62(3):576–585, 2000. on Multimedia Information Networking and Security, pages 142–146,
[9] A. Lecuyer. Playing with senses in VR: Alternate perceptions Nanjing, China, 2012.
combining vision and touch. IEEE Comp. Graph. & App., 37(1):20– [32] K. Gai, M. Qiu, H. Zhao, L. Tao, and Z. Zong. Dynamic energy-aware
26, 2017. cloudlet-based mobile cloud computing model for green computing.
[10] J. Wiener, C. Hölscher, S. Büchner, and L. Konieczny. Gaze Journal of Network and Computer Applications, 59:46–54, 2015.
behaviour during space perception and spatial decision making. [33] K. Gai, M. Qiu, L. Chen, and M. Liu. Electronic health record
Psychological research, 76(6):713–729, 2012. error prevention approach using ontology in big data. In 17th
[11] I.Martišius and R.Damaševičius. A prototype ssvep based real time IEEE International Conference on High Performance Computing and
bci gaming system. Computational intelligence and neuroscience, Communications, pages 752–757, New York, USA, 2015.
2016:18, 2016. [34] Y. Li, K. Gai, L. Qiu, M. Qiu, and H. Zhao. Intelligent cryptography
[12] E.Gratton, V.Toronov, U.Wolf, M.Wolf, and A.Webb. Measurement approach for secure distributed big data storage in cloud computing.
of brain activity by near-infrared light. Journal of Biomedical Optics, Information Sciences, 387:103–115, 2017.
10(1):011008–01100813, 2005. [35] K. Gai, M. Qiu, H. Zhao, and W. Dai. Privacy-preserving adaptive
[13] A.Gaume. Towards cognitive brain-computer interfaces: real-time multi-channel communications under timing constraints. In The IEEE
monitoring of visual processing and control using electroencephalog- International Conference on Smart Cloud 2016, pages 190–195, New
raphy. PhD thesis, Université Pierre et Marie Curie-Paris VI, 2016. York, USA, 2016. IEEE.
[14] S. Lemm, B. Blankertz, G. Curio, and K. R. Muller. Spatio-spectral [36] K. Gai, M. Qiu, M. Chen, and H. Zhao. SA-EAST: security-aware
filters for improving the classification of single trial eeg. IEEE efficient data transmission for ITS in mobile heterogeneous cloud
transactions on biomedical engineering, 52(9):1541–1548, 2005. computing. ACM Transactions on Embedded Computing Systems,
[15] F. Lotte and C. Guan. Regularizing common spatial patterns to 16(2):60, 2016.
improve bci designs: unified theory and new algorithms. IEEE [37] K. Gai, M. Qiu, Z. Ming, H. Zhao, and L. Qiu. Spoofing-jamming
Transactions on biomedical Engineering, 58(2):355–362, 2011. attack strategy using optimal power distributions in wireless smart
[16] G.Pfurtscheller, D.Flotzinger, and J.Kalcher. Brain-computer inter- grid networks. IEEE Transactions on Smart Grid, PP(99):1, 2017.
facea new communication device for handicapped persons. Journal
of Microcomputer Applications, 16(3):293–299, 1993.
[17] G. Sun, K. Li, X. Li, B. Zhang, S. Yuan, and G. Wu. A general
framework of brain-computer interface with visualization and virtual
reality feedback. In 8th Int’l Conf. on Dependable, Autonomic and
Secure Computing, pages 418–423. IEEE, 2009.
305