Electroencephalography Based Dexterous Robotics Hand Grasping and Manipulation: A Short Review- Juniper Publishers
Juniper Publishers- Journal of Robotics
Introduction
In a general context, robotic hands mark an
important milestone toward achieving more advanced and complex
robotic grasping. Robotic hand grasping and dexterous manipulation have
accordingly been a research focus for some time, and are considered an
essential step toward the advanced operations and tasks required of
current robotic systems. Analytical approaches for finding the right,
or optimal, set of forces and wrenches for robotic hands do not always
work; examples are found in Han et al. [1].
The problem becomes considerably more complicated for robotic hands
with five digits (curling-finger applications). The use of
electroencephalography (EEG) brainwaves for robotics applications has
also been gaining ground recently, driven by advances in robotics
applications and by the fact that recording and analyzing EEG data is
much easier than before thanks to progress in neurotechnology.
Against these advantages of EEG for robot learning stands the fact
that EEG signals are raw data: their behaviour is highly complicated,
strongly correlated, and multirate, and it is not a straightforward
task to detect, decode, and understand these waves. There are key
steps in using electroencephalography for robotic grasping, namely EEG
classification, decoding, and understanding. However, because of the
massive, raw, coupled, and highly nonlinear nature of EEG data,
interpreting these waves raises a number of issues. Moreover, EEG
activity reflects the specific task the human is performing, so it is
important to relate electroencephalography (EEG) recordings to
well-defined tasks.
Given the above-mentioned EEG-related issues, a
number of analyses must be performed before EEG can be used for robotics
applications, or for brain-computer interfaces (BCI) in general. This
involves EEG signal classification using a number of well-defined
techniques: PCA (Principal Component Analysis), ICA (Independent
Component Analysis), LDA (Linear Discriminant Analysis), RBF neural
networks (Radial Basis Function networks), Fuzzy C-Means
(knowledge-based classification), and SVM (Support Vector Machines).
These techniques, investigated by a number of researchers, have shown
interesting results in classifying various features of EEG waves.
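As an illustration of this kind of feature classification, the following is a minimal sketch combining two of the techniques named above (PCA for dimensionality reduction, LDA for classification). The data is synthetic random "EEG feature" vectors with an injected class difference; the dimensions and the injected pattern are illustrative assumptions, not values from any of the cited studies.

```python
# Sketch: classifying synthetic "EEG feature" vectors with PCA + LDA.
# All data here is random and purely illustrative.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
n_trials, n_features = 200, 64          # e.g. 64 band-power features per trial (assumed)
X = rng.normal(size=(n_trials, n_features))
y = rng.integers(0, 2, size=n_trials)   # two imagined-movement classes
X[y == 1, :8] += 1.0                    # inject a separable class pattern

# Reduce dimensionality, then discriminate; score with 5-fold cross-validation
clf = make_pipeline(PCA(n_components=10), LinearDiscriminantAnalysis())
scores = cross_val_score(clf, X, y, cv=5)
print(f"mean CV accuracy: {scores.mean():.2f}")
```

In practice the feature vectors would come from band-power or time-domain EEG measures rather than random numbers, but the pipeline structure is the same.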
In this respect, Xiao & Ding [2]
evaluated EEG features for decoding individual finger movements of a
single hand. They tested three candidate feature types that could be
used in the future for finger control: spectral principal component
(SPC) projections; event-related synchronization and desynchronization
(ERS and ERD); and temporal-domain features. Testing was performed on
six individuals in a relatively isolated room, following a timed
protocol to obtain the specific data needed. The EEG signals were
preprocessed with temporal and spatial filters to increase the
signal-to-noise ratio (SNR). The three EEG features from the same
channels were then decoded with a support vector machine (SVM), which
analyzes the data and finds patterns associated with the different
fingers. Decoding accuracy was compared across all the EEG features
and against chance level. The results showed that projections onto the
first three spectral PCs gave the highest average accuracy, at 45.2%.
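A spectral-feature-plus-SVM pipeline of the kind described above might be sketched as follows. This is not the authors' code: the sampling rate, trial length, and injected 12 Hz (mu-band) class difference are all invented for illustration, and Welch's method stands in for whatever spectral estimator the study used.

```python
# Sketch: spectral features via Welch's method, projected onto spectral
# principal components and decoded with a linear SVM. Synthetic data.
import numpy as np
from scipy.signal import welch
from sklearn.decomposition import PCA
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(1)
fs, n_trials, n_samples = 250, 120, 500      # 2 s trials at 250 Hz (assumed)
y = rng.integers(0, 2, size=n_trials)        # two finger-movement classes

trials = rng.normal(size=(n_trials, n_samples))
t = np.arange(n_samples) / fs
trials[y == 1] += 0.8 * np.sin(2 * np.pi * 12 * t)  # class-specific mu-band power

# Power spectral density per trial -> one feature vector per trial
freqs, psd = welch(trials, fs=fs, nperseg=250, axis=-1)
features = np.log(psd)                       # log-power is a common choice

clf = make_pipeline(PCA(n_components=5), SVC(kernel="linear"))
scores = cross_val_score(clf, features, y, cv=5)
print(f"mean decoding accuracy: {scores.mean():.2f}")
```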
Cerný & Štastný [3]
applied common spatial patterns to the classification of right-hand
finger movements from the EEG signal, specifically to distinguishing
thumb from little-finger movements. The common spatial patterns (CSP)
procedure processes the multivariate signal and decomposes it into
additive subcomponents. The authors found that combining different
spatial filters for different bands yields the best results; the
spatial filters were trained on the mu or beta bands. Compared with
the Laplacian filter, CSP gave marginally better classification scores
(63.8% ± 5.2% vs. 61.9% ± 6.9%): most subjects scored better with CSP,
while some scored better with the Laplacian. Moreover, when considering
BCI speed, CSP was faster during EEG classification, because Hidden
Markov Model (HMM) training only needs to be done twice with CSP, as
opposed to the Laplacian approach, which needs training for all of the
electrodes used. In contrast to previous research and
earlier studies, Agashe [4]
decoded the evolving grasping gesture from electroencephalographic
(EEG) activity. Unlike the papers discussed above, this work focuses
on the type of grasp the user intends when manipulating different
everyday objects, a vital capability for a prosthetic hand that
replaces a human hand.
Previous studies mainly used invasive
techniques such as electrocorticography (ECoG) to extract the intended
grasping method; this paper attempted to implement similar techniques
using EEG. Since earlier studies had found that low-pass-filtered ECoG
(the local motor potential, LMP) carries precise features, the authors
low-pass filtered the EEG signals and used the result as the basis for
decoding continuous hand kinematics during grasping. The paper
considered five different grasp types. To collect the data,
participants were asked to reach for and grasp objects within their
proximity and then return them to their initial positions. The study
obtained a near-optimal decoder by feeding EEG-predicted PC1 and PC2
trajectories into a recursive Bayesian method. The technique was
computationally efficient and demonstrated the feasibility of real-time
application, extracting information before movement onset and decoding
the data in less time than its duration. However, the algorithm needed
a relatively long time to complete (10-15 min), and some
misclassification occurred between similar grasps.
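The LMP-style low-pass filtering step mentioned above can be sketched with a standard Butterworth filter. The 1 Hz cutoff follows the band discussed in these studies, but the 4th-order design, the sampling rate, and the synthetic signal are all illustrative assumptions.

```python
# Sketch: extracting a slow "LMP-style" component from a synthetic EEG
# trace with a zero-phase 1 Hz low-pass Butterworth filter.
import numpy as np
from scipy.signal import butter, sosfiltfilt

fs = 250.0                                   # assumed sampling rate (Hz)
t = np.arange(0, 4, 1 / fs)                  # 4 s of data
slow = 0.5 * np.sin(2 * np.pi * 0.3 * t)     # slow movement-related component
fast = np.sin(2 * np.pi * 20 * t)            # beta-band activity
eeg = slow + fast + 0.2 * np.random.default_rng(2).normal(size=t.size)

sos = butter(4, 1.0, btype="low", fs=fs, output="sos")  # 1 Hz low-pass
lmp = sosfiltfilt(sos, eeg, padlen=250)      # zero-phase; generous padding
                                             # because the cutoff is so low

# The filtered trace should track the slow component, not the 20 Hz one
err = np.sqrt(np.mean((lmp - slow) ** 2))
print(f"RMS error vs. slow component: {err:.3f}")
```

Zero-phase filtering (forward-backward) matters here because the slow potential is used as a time-locked kinematic feature, where phase lag would shift the decoded trajectory.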
Agashe et al. [5]
proposed that global cortical activity predicts the shape of the hand
during grasping. This paper is similar to the previous one in that it
decodes hand grasp from EEG signals recorded while people grasp
everyday items and return them; however, it actually implemented the
technique on an amputee using a hand neuroprosthesis. The study used
EEG signals, hand-joint angular velocities, and synergistic
trajectories recorded during reach-to-grasp movements to predict hand
grasping. From the recorded data, the joint-angle-velocity and
synergy-space representations of the hand trajectories were
reconstructed, and a linear regression model was used for decoding.
Grasping was found to mainly affect power in the 0.1-1 Hz band, which
inherently meant that the EEG data had to be low-pass filtered at 1 Hz.
Decoding accuracy between predicted and actual movement across all 15
hand joints was r = 0.49 ± 0.02, where r is the correlation
coefficient. The system was then used in a closed loop by an amputee:
after proper training, the amputee achieved an 80% success rate over
100 trials, imagining reaching and grasping the objects while the
neuroprosthesis executed the grasp. Like the previous study, this work
decoded hand grasping with simple linear models, since such models have
shown high accuracy, and it demonstrated the accuracy of the system in
a real-time application.
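Decoding continuous kinematics with a linear model, scored by the correlation coefficient r as above, can be sketched in a few lines. The channel count, the linear ground-truth mapping, and the noise level are invented; real joint-velocity traces would replace the synthetic target.

```python
# Sketch: linear regression from (synthetic) slow EEG features to a
# continuous kinematic trace, evaluated with the correlation coefficient r.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(3)
n_samples, n_channels = 1000, 32
eeg = rng.normal(size=(n_samples, n_channels))     # stand-in low-pass EEG features
true_weights = rng.normal(size=n_channels)
joint_velocity = eeg @ true_weights + 0.5 * rng.normal(size=n_samples)

split = 800                                        # simple train/test split
model = LinearRegression().fit(eeg[:split], joint_velocity[:split])
pred = model.predict(eeg[split:])

r = np.corrcoef(pred, joint_velocity[split:])[0, 1]
print(f"decoding correlation r = {r:.2f}")
```

A study like the one above would fit one such model per hand joint (or per synergy dimension) and report the mean r across joints.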
Liao et al. [6]
studied the decoding of individual finger movements of one hand from
human EEG signals. They took findings from previous ECoG work,
implemented them with EEG, and compared the results with the ECoG
results. Each six-second trial followed a fixed protocol: for the
first two seconds, subjects prepared to remain still while the screen
was black; for the next two seconds, resting data were recorded while
the subjects looked at a fixation point on the screen; and for the
final two seconds, a random word appeared indicating which finger to
move twice. The EEG data underwent power-spectrum analysis to extract
features, mainly via principal component analysis (PCA) and
power-spectrum decoupling. To demonstrate decoding accuracy, they
decoded all pairs of finger movements within one hand, achieving an
average accuracy of 77.17%; applying similar techniques to ECoG
signals yielded 91.8% accuracy. These results are better than those of
many other studies, especially those using single bands (alpha, beta,
and gamma), and are promising given that decoding pairs of fingers is
a harder task.
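The "all pairs of fingers" evaluation described above amounts to training one binary classifier per finger pair and averaging the accuracies. Here is a minimal sketch of that scheme on synthetic data; the feature count, the per-class "bump", and the choice of LDA as the pairwise classifier are illustrative assumptions.

```python
# Sketch: pairwise finger decoding - a binary classifier for every pair
# of finger labels, with accuracies averaged over all 10 pairs.
import itertools
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(4)
n_per_class, n_features = 40, 20
X_parts, y_parts = [], []
for finger in range(5):                       # thumb .. little finger
    xs = rng.normal(size=(n_per_class, n_features))
    xs[:, finger] += 2.0                      # class-specific feature bump
    X_parts.append(xs)
    y_parts.append(np.full(n_per_class, finger))
X, y = np.vstack(X_parts), np.concatenate(y_parts)

accs = []
for a, b in itertools.combinations(range(5), 2):
    mask = (y == a) | (y == b)                # keep only the two fingers
    accs.append(cross_val_score(LinearDiscriminantAnalysis(),
                                X[mask], y[mask], cv=5).mean())
print(f"mean pairwise accuracy over {len(accs)} pairs: {np.mean(accs):.2f}")
```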
Hazrati & Erfanian [7]
presented an online EEG-based brain-computer interface for controlling
hand grasp using an adaptive probabilistic neural network. The subjects
were naïve, with no prior BCI experience. They had to relax to keep a
virtual hand open, while imagining grasping made the hand close.
Initially the hand was open; a ball then started falling and the
subjects tried to grasp it. Once held, the ball changed color, and the
hand was finally reopened. Subjects received feedback from the first
parts of the experiment, making it possible to train an online
classifier without offline data. Both adaptive and static classifiers
were employed: the adaptive classifier uses online training, while the
static one uses an already-trained classifier. For classification, an
adaptive probabilistic neural network (APNN) was used to handle the
variability of everyday brain signals. Classification accuracy was
75.4% for the first session and 81.4% for the second, both obtained
with online training; for the third and eighth sessions the accuracy
was 79% and 84% respectively, obtained with the already-trained
classifier. This indicates that the ERS/ERD patterns improve over
consecutive sessions of training. The classifier proved robust in
time-varying, non-stationary environments: static classification gave
consistent results, the adaptive classifier improved on them, and the
resulting classifier gave the same results when later used in the
static scheme.
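A basic (non-adaptive) probabilistic neural network, the classifier family used in the study above, can be written in a few lines: each class is modelled by a Gaussian Parzen-window density over its training points, and a test point is assigned to the class with the largest density. The smoothing width sigma and the two-class "relax vs. imagined grasp" data are illustrative assumptions; the cited work adds online adaptation on top of this basic scheme.

```python
# Sketch: a minimal probabilistic neural network (PNN) classifier.
import numpy as np

def pnn_predict(X_train, y_train, X_test, sigma=1.0):
    """Assign each test point to the class with the largest kernel density."""
    classes = np.unique(y_train)
    scores = []
    for c in classes:
        pts = X_train[y_train == c]                            # pattern layer
        d2 = ((X_test[:, None, :] - pts[None, :, :]) ** 2).sum(-1)
        scores.append(np.exp(-d2 / (2 * sigma ** 2)).mean(1))  # summation layer
    return classes[np.argmax(scores, axis=0)]                  # decision layer

rng = np.random.default_rng(5)
relax = rng.normal(loc=0.0, size=(50, 4))     # "relax" feature vectors
grasp = rng.normal(loc=2.0, size=(50, 4))     # "imagined grasp" feature vectors
X = np.vstack([relax, grasp])
y = np.array([0] * 50 + [1] * 50)

pred = pnn_predict(X, y, X)                   # resubstitution, for illustration
print(f"training accuracy: {(pred == y).mean():.2f}")
```

An adaptive variant would update the stored pattern-layer points (and possibly sigma) as labelled online trials arrive, which is what lets such a classifier track non-stationary EEG.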
The preceding analysis shows a new trend toward the
use of EEG for teaching robots cognitive movements, rather than relying
solely on classical ways of programming robotic systems. Using
electroencephalography to teach robots the right movements is a new
challenge, and a number of problems in this context remain unsolved. A
major issue is moving beyond reliance on online learning toward
offline EEG-based learning: synthesizing robotic finger movements and
dexterous hand grasping even in the absence of simultaneous online
learning from a human.