At Neurable, we’ve spent nearly a decade building neurotechnology that empowers people without compromising who they are. As founders and as a company, our worldview has always been anchored in a simple reality: your thoughts, whether brilliant or bizarre, belong exclusively to you. (We originally put it this way in our article On Your Brain And Privacy.)
That belief isn’t a marketing line. It’s a north star that has guided every design decision, every product we enable, every conversation with policymakers, and every line of code. Neurotechnology is powerful, but people’s agency is more so. Any company working with brain data has a responsibility to uphold that truth.
Below is our ethical compass. It’s how we think about neurodata, how we build, and how we believe this field should evolve, highlighting some key principles and considerations.
1. A Platform Built for People, Not for Exploitation
Neurable doesn’t build end devices. Instead, we develop an enabling technology stack, which we call Neurable AI: a combination of biopotential sensing, on-device processing, and proprietary AI that integrates into products people already use.
Our first publicly launched collaboration, the MW75 Neuro from Master & Dynamic, shows what’s possible when neurotechnology is seamlessly embedded into everyday devices while keeping user trust paramount.
This matters because our business model is intentionally designed around a principle we consider foundational:
Your brain data should benefit you—not be used against you.
We do not sell brain data.
We proactively minimize the collection of personally identifiable information (PII).
We design systems so that brain-derived insights exist to improve the user’s experience, their understanding of themselves, and the functionality they choose to engage with.
In our earlier article, we drew an analogy to fitness trackers. Just as a tracker monitors your lower-body movement, we monitor brain activity. We make clear that EEG cannot “read your mind” or expose your private thoughts.
Our analogy is deliberate: where OpenAI trained its models on internet data, we train ours on ethically sourced, consented, de-identified neurodata, used exclusively to make our capabilities better for the user.
That is the loop we support. That is the loop we want the industry to adopt.
2. Why Our Ethical Compass Starts With First Principles—Not Just Compliance
The regulatory landscape around neurotechnology is evolving rapidly and unevenly. The biggest challenge we’ve seen is not “how to comply” (we intentionally build above regulatory requirements) but how fragmented and patchwork these policies can be.
But we also believe something deeper:
Regulation often focuses on technology when it should focus on principles.
Neuroethics should not be a conversation about electrodes, sensors, or models. It should be a conversation about privacy, autonomy, and freedom. Neurotechnology is a tool, and tools can be used for good or for ill. What we can control is how we use those tools, role-modeling positive standards.
For us, this means:
- Freedom of choice: Users should never be coerced, explicitly or implicitly, into sharing neural data. That means we shouldn’t stop someone from using a product just because they decline new terms and conditions.
- Freedom from surveillance: Brain data should never be used to monitor individuals without meaningful consent.
- Freedom from manipulation: Neurotech should not nudge, steer, or pressure people in ways that bypass their agency.
- True informed consent: People must understand what they opt into, why, and what value they receive in return. That means that they actually understand what consent is and the implications of consenting vs. not consenting.
Ironically, search histories and smartphone metadata already reveal more about a person than EEG does today, and likely will for a long time. Yet brain data feels more intimate. To us, that means that even if brain data is not yet as revealing as other data sources, the fact that people care about it is reason enough to stand up for it.
3. How We Build: On-Device Processing, Minimal PII, and Ethical Model Training
We’ve designed our technical architecture to reflect our ethics:
Minimize exposure. Maximize protection. Keep users in control.
Examples:
- On-device signal processing reduces the raw neural data leaving a device.
- Encryption and transformation logic at the edge strip away identifiable signals before transmission.
- De-identification prevents engineers from linking data to individuals; only two trained employees can access identity mappings, and only for user-requested support.
- Account deletion gives people control over their data.
- Research data is opt-in and clearly communicated as such.
- No audio, no video, no covert sensing.
We treat data as something entrusted to us, not something we are entitled to.
When we build new biomarkers or refine algorithms, data informs improvement—but is not connected back to the humans who generated it.
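The architecture principles above can be made concrete with a small sketch. This is a hypothetical illustration, not Neurable’s actual code: the payload step computes a derived metric on-device, replaces the user identifier with a salted one-way hash, and never includes the raw signal.

```python
import hashlib
import secrets

# Per-device salt; generated locally and never transmitted.
SALT = secrets.token_bytes(16)

def pseudonymize(user_id: str) -> str:
    """Replace a user identifier with a salted one-way hash (no PII leaves the device)."""
    return hashlib.sha256(SALT + user_id.encode()).hexdigest()[:16]

def prepare_payload(user_id: str, raw_eeg: list[float]) -> dict:
    """Compute a derived metric on-device and drop the raw signal entirely."""
    # Stand-in metric for illustration only; a real pipeline would derive
    # validated biomarkers here.
    focus_score = sum(abs(x) for x in raw_eeg) / len(raw_eeg)
    return {
        "subject": pseudonymize(user_id),   # pseudonym, not an identity
        "focus_score": round(focus_score, 3),
        # note: raw_eeg is intentionally absent from the payload
    }

payload = prepare_payload("alice@example.com", [0.12, -0.40, 0.33, -0.05])
```

The design choice mirrors the list above: the raw signal stays on the device, and only derived, de-identified values ever travel.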
4. Transparency in AI: Validation, Accuracy, and Third-Party Verification
Neurable uses AI and machine learning to turn noisy biopotential signals into something meaningful and reliable. But we hold ourselves to a rigorous validation standard.
We benchmark model performance against:
- Gold-standard EEG systems
- Behavioral ground truths
- Independent third-party evaluations
A notable example: the U.S. Air Force Research Lab’s 711th Human Performance Wing validated that, with Neurable AI applied, our system delivered performance comparable to, or better than, research-grade wet EEG systems, even under real-world conditions.
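One simple form such benchmarking can take (a hypothetical sketch; the per-session values and the choice of Pearson correlation are invented for this example) is measuring agreement between per-session estimates from a wearable system and a gold-standard reference:

```python
# Pearson correlation between matched per-session estimates from two systems.

def pearson(xs: list[float], ys: list[float]) -> float:
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx ** 0.5 * vy ** 0.5)

# Hypothetical per-session alpha-power estimates from both systems.
wearable = [0.42, 0.51, 0.38, 0.60, 0.47]
reference = [0.40, 0.55, 0.35, 0.62, 0.45]

r = pearson(wearable, reference)  # close to 1.0 means strong agreement
```

Real validation studies would add behavioral ground truths and blinded third-party analysis on top of simple agreement metrics like this.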
Scientific rigor isn’t optional. It’s part of our culture.
5. Data Governance That Scales With Responsibility
We recognize that neurotechnology is not just a product; it’s an ecosystem.
As such:
- For defense use-cases, we run all processing entirely at the edge, locally.
- Our future roadmap focuses on improving data quality while continuing to minimize user exposure and avoid expanding into unnecessary sensing modalities.
- Integration with other form factors, platforms, and applications is core to our business model but only in ways that respect user permission and protect privacy.
- Wherever possible, we seek guidance and input from those who know more than we do. Neurable is fortunate to have relationships across the data and neuroethics domains. Special shout-outs to:
- The Institute of Neuroethics
- BrainMind
- Harvard’s DCI Network
- The Uniform Law Commission
- Various universities and many more amazing individuals who help us achieve more than we could by ourselves
6. Behavior and Culture Matter More Than Policies
One of the most overlooked truths in neurotechnology (and emerging technologies more broadly) is that behavior dictates outcomes even before policy. The trajectory of an entire field is usually set not by regulations but by the norms, habits, and precedents established by its earliest practitioners.
At Neurable, we take this seriously. Our internal behavior (how we talk about data, how we build products, how we caution partners, and how we interrogate our own assumptions) is our first line of neuroethics. Before policy, before press releases, before any formal mechanism of accountability, culture is what shapes the ethical DNA of an organization.
Ethical trajectories are set early and reinforced socially.
Research shows:
- Organizational behavior studies (e.g., Schein, 2010) show that culture forms through repeated behaviors that become more deeply embedded over time.
- Behavioral ethics research (Gino et al., 2009) shows that ethical behavior diffuses across teams; bad norms tend to spread, good norms can too.
- Tech governance scholars (Floridi, 2018; Mittelstadt et al., 2016) argue that early voluntary practices often become the baseline for regulation.
- Trust theory (Mayer, Davis & Schoorman, 1995) shows that stakeholders trust organizations based on demonstrated behavior (ability, benevolence, integrity) far more than on written policies.
Why neurotechnology amplifies this effect
Neurotech touches identity. People care not just about what the technology does, but how the people building it behave. The ethics of neurotechnology isn’t written in a PDF—it is modeled in every decision:
- How engineers think about data minimization
- How product teams discuss consent
- How leadership frames trade-offs
- How customer conversations center user agency
- How researchers treat participants
- How decisions are made under ambiguity
These behaviors set precedents. Those precedents become informal norms. And those norms become the ethical backbone of the company.
How we live it at Neurable
From our inception, we built a culture that reinforces:
- Transparency (users understand what’s happening and why)
- Beneficence (data must benefit the user first)
- Permissioned value creation (opt-in research, optional features)
- Humility (brain data is probabilistic, contextual, and never destiny)
These aren’t policies. They are behaviors that show up in our meetings, our hiring criteria, our engineering decisions, our product specs, our refusal to sell identity-linked data, and our openness to outside validation.
We believe that culture is the strongest safeguard neurotechnology has. If companies behave ethically before they are required to, the entire field advances with a higher standard of trust.
And that trust becomes the foundation on which all meaningful innovation is built.
Our North Star: People first.
If we had to reduce our entire neuroethics philosophy into a single statement, it would be this:
Neurable exists to make human technology: to help people understand themselves, not to let others understand or control them.
We build technologies that illuminate cognitive states, enhance performance, and unlock human potential, but always on the user’s terms.
We believe in a world where neurotechnology is as trusted as it is transformative. A world where human agency is protected. A world where brain data is used to uplift, not exploit.
That is our compass. That is our commitment. And that is the future we’re building, one partnership, one product, and one principled decision at a time.
2 Distraction Stroop Tasks experiment: The Stroop Effect (also known as cognitive interference) is a psychological phenomenon describing the difficulty people have naming a color when it is used to spell the name of a different color. During each trial of this experiment, we flashed the word “Red” or “Yellow” on a screen. Participants were asked to respond to the color of the word and ignore its meaning by pressing one of four keys on the keyboard (“D”, “F”, “J”, and “K”), which were mapped to the colors red, green, blue, and yellow, respectively. Trials in the Stroop task were categorized as congruent, when the text content matched the text color (e.g., the word “Red” displayed in red), or incongruent, when the text content did not match the text color (e.g., the word “Red” displayed in a different color, such as yellow). The incongruent case was counter-intuitive and more difficult; we expected to see lower accuracy, higher response times, and a drop in Alpha band power in incongruent trials. To mimic the chaotic distraction environment of in-person office life, we added a layer of complexity by floating the words over different visual backgrounds (a calm river, a roller coaster, a calm beach, and a busy marketplace). Both the behavioral and the neural data we collected showed consistently different results in incongruent trials, such as longer reaction times and lower Alpha power, particularly when the words appeared over the marketplace background, the most distracting scene.
Interruption by Notification: It’s widely known that push notifications decrease focus. In our three Interruption by Notification experiments, participants performed the Stroop Tasks described above with and without push notifications, each consisting of a sound played at a random time followed by a prompt to complete an activity. Our behavioral analysis and focus metrics showed that, on average, participants had slower reaction times and were less accurate during blocks with distractions than during blocks without them.
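The congruency analysis described in the Stroop experiment can be sketched in a few lines (hypothetical trial data; the real experiment involved many more trials and participants): group trials by condition, then compare accuracy and mean reaction time.

```python
# Each trial records its congruency condition, whether the keypress was
# correct, and the reaction time in milliseconds (invented example values).
trials = [
    {"condition": "congruent",   "correct": True,  "rt_ms": 520},
    {"condition": "congruent",   "correct": True,  "rt_ms": 495},
    {"condition": "congruent",   "correct": True,  "rt_ms": 530},
    {"condition": "incongruent", "correct": True,  "rt_ms": 690},
    {"condition": "incongruent", "correct": False, "rt_ms": 730},
    {"condition": "incongruent", "correct": True,  "rt_ms": 655},
]

def summarize(condition: str) -> tuple[float, float]:
    """Return (accuracy, mean reaction time) for one congruency condition."""
    subset = [t for t in trials if t["condition"] == condition]
    accuracy = sum(t["correct"] for t in subset) / len(subset)
    mean_rt = sum(t["rt_ms"] for t in subset) / len(subset)
    return accuracy, mean_rt

acc_con, rt_con = summarize("congruent")
acc_inc, rt_inc = summarize("incongruent")
# The Stroop effect predicts rt_inc > rt_con and acc_inc <= acc_con.
```

With real data, the same grouping extends naturally to the background scenes, letting the marketplace condition be compared against the calmer ones.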