
How To Build Brain-Computer Interfaces: The Neurable Toolkit

5 min read
Dr. Davide Valeriani

Building Brain-Computer Interfaces (BCIs) requires key interdisciplinary skills: neuroscience to understand the brain and form robust hypotheses, signal processing to clean the neural recordings captured with electroencephalography (EEG) sensors, and machine learning to transform these signals into meaningful insights and metrics. It is therefore critical to leverage progress made across these three domains to truly innovate and deliver next-generation BCIs.

Over the years, Neurable’s team has assembled a set of tools that we now use on a daily basis to test our hypotheses, develop and validate our algorithms, visualize and discuss results at scientific meetings, and make sure we build a sustainable and scalable BCI infrastructure. This “Neurable toolkit” allows us to iterate quickly and deliver robust algorithms while ensuring reliability and repeatability.

Python is at the heart of Neurable R&D

Our core scientific and machine learning infrastructure is built in Python, the most popular programming language in data science. It is easy to learn, very powerful, and backed by a myriad of external libraries and tools that satisfy all sorts of computer science needs. In particular, we make extensive use of pandas to organize and analyze our data, numpy and scipy to perform heavy mathematical operations, and matplotlib and seaborn for visualization.

But how do we transform raw EEG data into meaningful brain insights? Our secret sauce is based on two key libraries: mne, for preprocessing and extracting relevant information from brain signals, and scikit-learn, for developing machine learning models that transform cleaned EEG data into our measurements of focus and attention. We also make extensive use of keras for more advanced machine learning algorithms, such as deep neural networks.

On top of these libraries, we build our own Python packages to facilitate our daily work. For example, an internal library called data_utils allows us to safely download the EEG data we have collected from our central repository, preprocess it, extract features, and validate it, all with one library. In our central repository, data are stored anonymously and encrypted to guarantee privacy and security. Another Python package we built performs standard statistical analyses and graphics for neuroscience applications, which we plan to release as open-source software in the future.
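To make this workflow concrete, here is a minimal sketch of how these libraries can fit together: mne to filter and epoch a recording, and scikit-learn to classify simple band-power features. The file name, event labels, frequency bands, and classifier are illustrative assumptions, not Neurable’s actual pipeline.

```python
# Minimal sketch: band-pass filter EEG with MNE, epoch it, and train a
# scikit-learn classifier on alpha band-power features. The file name,
# event codes, and parameters are placeholders for illustration.
import mne
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Load a raw EEG recording (hypothetical file) and keep only EEG channels.
raw = mne.io.read_raw_fif("subject01_raw.fif", preload=True)
raw.pick("eeg")

# Clean the signal: band-pass filter to remove slow drift and high-frequency noise.
raw.filter(l_freq=1.0, h_freq=40.0)

# Cut the continuous recording into 1-second epochs around task events.
events, event_id = mne.events_from_annotations(raw)
epochs = mne.Epochs(raw, events, event_id=event_id,
                    tmin=0.0, tmax=1.0, baseline=None, preload=True)

# Simple features: mean power in the alpha band (8-12 Hz) per channel.
psd = epochs.compute_psd(fmin=8.0, fmax=12.0)
X = psd.get_data().mean(axis=-1)   # shape: (n_epochs, n_channels)
y = epochs.events[:, -1]           # labels taken from the event codes

# Classify epochs and report cross-validated accuracy.
clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
print("CV accuracy:", cross_val_score(clf, X, y, cv=5).mean())
```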

Machine learning operations (MLOps)

To ensure our infrastructure is robust and scalable, we use additional tools on top of Python that allow us to track the lifecycle of all our algorithms, from investigation to deployment and monitoring. In particular, we use MLflow to record the parameters and details of every model we develop, ensuring our results are fully reproducible, as well as the evaluation metrics that help us accelerate decisions on which models we should move to production.
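As an illustration, the sketch below shows how a single model run might be tracked with MLflow. The experiment name, parameters, and metric values are placeholders, not Neurable’s actual configuration.

```python
# Minimal sketch of experiment tracking with MLflow: log the parameters and
# evaluation metrics of one model run so it can be reproduced and compared later.
import mlflow

mlflow.set_experiment("focus-model-dev")  # hypothetical experiment name

with mlflow.start_run(run_name="logreg-alpha-features"):
    # Record everything needed to reproduce this run.
    mlflow.log_param("classifier", "LogisticRegression")
    mlflow.log_param("bandpass_hz", "1-40")
    mlflow.log_param("feature", "alpha_band_power")

    # ... train and evaluate the model here ...
    cv_accuracy = 0.87  # placeholder result

    # Log the metric used to decide whether the model moves to production.
    mlflow.log_metric("cv_accuracy", cv_accuracy)
```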

Where to get started

If you are a BCI enthusiast like us but feel a bit lost on where to learn more, there are some amazing resources we recommend you check out. First and foremost, join our Discord, where you can interact with the Neurable community about BCIs and wearable neurotechnology. NeuroTechX, another community that many of us contribute to, also has an educational website (NeurotechEDU) where you can find tutorials and datasets to start your BCI projects. Other datasets for BCI applications are available at BNCI. If you want to learn Python, we highly recommend this book, which is also freely available in PDF format. If you prefer a more practical approach, there are a number of online classes you can take on platforms like Coursera or Udemy. This article focused on the tools Neurable uses; if you want to know more about the science, check out our white paper.


Distraction Stroop Tasks experiment: The Stroop Effect (also known as cognitive interference) is a psychological phenomenon describing the difficulty people have naming a color when it is used to spell the name of a different color. During each trial of this experiment, we flashed the words “Red” or “Yellow” on a screen. Participants were asked to respond to the color of the words and ignore their meaning by pressing four keys on the keyboard (“D”, “F”, “J”, and “K”), which were mapped to the colors “Red,” “Green,” “Blue,” and “Yellow,” respectively. Trials in the Stroop task were categorized as congruent, when the text content matched the text color (e.g., the word “Red” displayed in red), and incongruent, when the text content did not match the text color (e.g., the word “Red” displayed in another color). The incongruent case was counter-intuitive and more difficult: we expected to see lower accuracy, higher response times, and a drop in Alpha band power in incongruent trials. To mimic the chaotic distraction environment of in-person office life, we added an additional layer of complexity by floating the words on different visual backgrounds (a calm river, a roller coaster, a calm beach, and a busy marketplace). Both the behavioral and neural data we collected showed consistently different results in incongruent trials, such as longer reaction times and lower Alpha band power, particularly when the words appeared on top of the marketplace background, the most distracting scene.
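As a rough illustration of that kind of check (not Neurable’s analysis code), the sketch below compares mean Alpha band power between congruent and incongruent epochs using mne and scipy. The condition labels and the 8-12 Hz band are assumptions for illustration.

```python
# Minimal sketch: compare alpha band power between congruent and incongruent
# Stroop epochs. Condition names ("congruent"/"incongruent") and the 8-12 Hz
# alpha band are illustrative assumptions.
import mne
import numpy as np
from scipy import stats


def mean_alpha_power(epochs: mne.Epochs) -> np.ndarray:
    """Mean 8-12 Hz power per epoch, averaged over channels and frequencies."""
    psd = epochs.compute_psd(fmin=8.0, fmax=12.0)
    return psd.get_data().mean(axis=(1, 2))  # shape: (n_epochs,)


def compare_conditions(epochs: mne.Epochs) -> None:
    """Check whether alpha power drops on incongruent vs. congruent trials."""
    congruent = mean_alpha_power(epochs["congruent"])
    incongruent = mean_alpha_power(epochs["incongruent"])
    t, p = stats.ttest_ind(incongruent, congruent)
    print(f"alpha diff = {incongruent.mean() - congruent.mean():.3e}, "
          f"t = {t:.2f}, p = {p:.3f}")
```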

Interruption by Notification: It is widely known that push notifications decrease focus. In our three Interruption by Notification experiments, participants performed the Stroop Tasks described above with and without push notifications, which consisted of a sound played at a random time followed by a prompt to complete an activity. Our behavioral analysis and focus metrics showed that, on average, participants had slower reaction times and were less accurate during blocks of time with distractions compared to those without them.
