I spent the day messing around with 1.75mm conductive ABS BuMat filament, trying to create a 3D-printable EEG electrode. The long-term goal is an easily printable electrode that nests into the OpenBCI “Spiderclaw” 3D-printed EEG headset.
I decided to try to make the electrode snap into the standard “snappy electrode cable” that you see with some industry-standard EMG/EKG/EEG electrodes, like the one seen in the picture below.
After some trial and error with Autodesk Maya and a MakerBot Replicator 1, I managed to print a few different designs that snap pretty nicely into the cable seen above. At first, Joel (my fellow OpenBCI co-founder) and I were worried that the snappy nub would break off, but, to our pleasant surprise, it held up to repeated use without breaking. Though the jury is still out, since we’ve only been snapping and unsnapping for a day.
Here you can see a screenshot of the latest prototype design in Maya. I added a very subtle concave curvature to the “teeth” on the underside of the electrode so that the electrode will hopefully make better contact with the scalp.
Here is a photo of a few different variations of the electrodes that were actually printed over the course of the day.
I’d like to note that I printed each electrode upside-down, with the pointy teeth facing upward on the vertical (Z) axis, with a raft and supports, as seen in the picture below.
I tested each of the electrodes with the OpenBCI board, trying to detect basic EMG/EEG signals from the O1/O2 positions on the back of the scalp—over the occipital lobe. I tried each electrode with no paste applied—simply conductive filament on skin. Then I tried each electrode with a small amount of Ten20 paste applied to the teeth. To my pleasant surprise, without applying any conductive Ten20 paste, I was able to detect small EMG artifacts by gritting my teeth and very faint alpha EEG brain waves by closing my eyes. Upon applying the Ten20 paste, the signal was as good as (if not better than) the signal recorded using the standard gold cup electrodes that come with the OpenBCI Electrode Starter Kit! Pretty awesome!
Here’s a screenshot of some very faint alpha (~10Hz) that I was able to pick up without any Ten20 paste applied to the electrode, with an electrode placed over the O2 node of the 10-20 system!
And here’s a screenshot of some very vibrant alpha (~10Hz) that I was able to detect with Ten20 paste applied to the 3D-printed electrode!
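For anyone curious how “alpha at ~10Hz” actually shows up in the numbers: a minimal Python sketch (not the code I used, just an illustration) that estimates alpha-band power from one channel of samples with an FFT. It assumes the OpenBCI board’s default 250Hz sample rate; the synthetic signal at the bottom stands in for real electrode data.

```python
import numpy as np

FS = 250  # Hz, the OpenBCI board's default sample rate

def band_power(samples, lo=8.0, hi=12.0, fs=FS):
    """Mean spectral power in the [lo, hi] Hz band via a one-sided FFT."""
    x = np.asarray(samples, dtype=float)
    x = x - x.mean()                           # remove DC offset
    spectrum = np.abs(np.fft.rfft(x)) ** 2     # power spectrum
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    mask = (freqs >= lo) & (freqs <= hi)
    return spectrum[mask].mean()

# Synthetic check: a 10 Hz sine buried in noise should carry far more
# alpha-band power than noise alone.
t = np.arange(FS * 4) / FS                     # 4 seconds of samples
rng = np.random.default_rng(0)
alpha = np.sin(2 * np.pi * 10 * t) + 0.5 * rng.standard_normal(t.size)
noise = 0.5 * rng.standard_normal(t.size)
print(band_power(alpha) > 10 * band_power(noise))  # True
```

Comparing eyes-closed band power against an eyes-open baseline like this is the usual quick-and-dirty way to confirm you’re seeing real alpha and not just noise.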
The signal looks pretty good. Joel may begin messing around with an active amplification hardware design that works with any 3D-printed snappy electrode design.
In case you’re interested in printing your own, here’s a link to the GitHub repo with the latest design of the electrode!
Over the course of the late summer and early fall I worked extensively on the OpenBCI Graphical User Interface (GUI). The first version of the application, as seen in [Image 2] below, was developed by Chip Audette, who is one of the biggest OpenBCI contributors and runs the amazing blog EEG Hacker. The GUI is developed in Processing, a Java-based creative coding framework.
I worked on:
[Image 3] updating the design & user experience (w/ the help of Agustina Jacobi)
[Image 4] adding a UI controller to manage the system state (initial hardware settings, startup, live data streaming mode, playback mode, synthetic data mode, etc.)
[Image 5] adding a UI controller to manage OpenBCI board channels settings
the startup protocol for establishing a connection between the OpenBCI GUI and the OpenBCI Board
a collapsible window for adding and testing new features, called the “Developer Playground”
a widget at the bottom of the application that gives feedback to the user about what the system is doing
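To give a feel for what the startup protocol has to handle (a sketch under my own assumptions, not the GUI’s actual Processing code): the OpenBCI board answers a reset command with a human-readable banner terminated by “$$$”, so the GUI has to keep reading from the serial port until that terminator arrives before it can declare the connection good. Here the serial port is swapped for any readable byte stream so the logic can run standalone; the banner text is illustrative.

```python
import io

PROMPT = b"$$$"  # the board terminates each message with this marker

def read_until_prompt(stream, max_bytes=4096):
    """Accumulate bytes from `stream` until the '$$$' terminator arrives.

    Returns the message without the terminator; raises if the stream
    ends, or grows past `max_bytes`, before the terminator shows up.
    """
    buf = bytearray()
    while PROMPT not in buf:
        chunk = stream.read(1)
        if not chunk:
            raise IOError("stream closed before '$$$' terminator")
        buf += chunk
        if len(buf) > max_bytes:
            raise IOError("no '$$$' terminator within %d bytes" % max_bytes)
    return bytes(buf[: buf.index(PROMPT)])

# A fake port object standing in for a real serial connection:
fake_port = io.BytesIO(b"OpenBCI V3 8-16 channel\nADS1299 initialized...$$$")
banner = read_until_prompt(fake_port)
print(banner.decode().splitlines()[0])  # OpenBCI V3 8-16 channel
```

With a real board you would do the same read loop against a serial port object after sending the reset command, then move the UI controller into its “connected” state.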
To download the latest version of the OpenBCI GUI, check out the following GitHub repo! Don’t hesitate to fork it, make improvements, and try out new features in the Developer Playground. For more information on how to get up and running with the OpenBCI board, check out the getting started guide on the OpenBCI website.
I wrote the following article which was published in Volume 41 of Make Magazine!
Conor wears an early prototype of the OpenBCI 3D-printable EEG Headset.
During this summer’s Digital Revolution exhibition at London’s Barbican Centre, a small brainwave-influenced game sat sandwiched between Lady Gaga’s Haus of Gaga and Google’s DevArt booth. It was Not Impossible Labs’ Brainwriter installation, which combined Tobii eye tracking and an OpenBCI Electroencephalography (EEG) device to allow players to shoot laser beams at virtual robots with just eye movement and brain waves. “Whoa, this is the future,” exclaimed one participant.
But the Brainwriter is designed for far more than just games. It’s an early attempt at using Brain-Computer Interface technology to create a comprehensive communication system for patients with ALS and other neurodegenerative disorders, which inhibit motor function and the ability to speak.
The brain is one of the final frontiers of human discovery. Each day it gets easier to leverage technology to expand the capabilities of that squishy thing inside our heads. Real-world BCI will be vital in reverse-engineering and further understanding the human brain.
Though BCI is in an embryonic state — with a definition that evolves by the day — it’s typically a system that enables direct communication between a brain and a computer, and one that will inevitably have a major impact on the future of humanity. BCIs encompass a wide range of technologies that vary in invasiveness, ease of use, functionality, cost, and real-world practicality. They include fMRI, cochlear implants, and EEG. Historically, these technologies have been used solely in medicine and research, but recently there’s been a major shift: As the technology becomes smaller, cheaper, and woven into the fabric of everyday life, many innovators are searching for real-world applications outside of medicine. It’s already happening, and it’s often driven by makers.
The field is expanding at an astounding rate. I learned about it two and a half years ago, and it quickly turned into an obsession. I found myself daydreaming about the amazing implications of using nothing more than my mind to communicate with a machine. I thought about my grandma who was suffering from a neurodegenerative disorder and how BCIs might allow her to speak again. I thought about my best friend who had just suffered a severe neck injury and how BCIs might allow him to walk again. I thought about the vagueness of attention disorders, and how BCIs might lead to complementary or even supplementary treatments, replacing overprescribed and addictive medications.
I went on to found OpenBCI with Joel Murphy as a way to offer access to every aspect of the BCI design and to present that information in an organized, collaborative, and educational way. I’m not the only one who sees the potential of this amazing new technology. But creating a practical, real-world BCI is an immense challenge — as the incredibly talented Murphy, who designed the hardware, says, “This stuff is really, really hard.” Many have attempted it but none have fully succeeded. It will take a community effort to achieve the technology’s potential while maintaining ethical design constraints. (It’s not hard to fathom a few not-too-far-off dystopian scenarios in which BCIs are used for the wrong reasons.)
Of the many types of BCIs, EEG has recently emerged as the frontrunner in the commercial and DIY spaces, partly because it is minimally invasive and easily translated into signals that a computer can interpret. After all, computers are complex electrical systems, and EEG is the sampling of electrical signals from the scalp. Simply put, EEG is the best way to get our brains and our computers speaking the same language.
EEG has existed for almost a hundred years and is most commonly used to diagnose epilepsy. In recent years, two companies, NeuroSky and Emotiv, have attempted to transplant EEG into the consumer industry. NeuroSky built the MindWave, a simplified single-sensor system and the cheapest commercial EEG device on the market — and in doing so made EEG accessible to everyone and piqued the interest of many early BCI enthusiasts, myself included. Emotiv created the EPOC, a higher-channel-count system that split the difference between NeuroSky and research-grade EEG with regard to both cost and signal quality. While these devices have opened up BCI to innovators, there’s still a huge void waiting to be filled by those of us who like to explore the inner workings of our gadgets.
With OpenBCI, we wanted to create a powerful, customizable tool that would enable innovators with varied backgrounds and skill levels to collaborate on the countless subchallenges of interfacing the brain and body. We came up with a board based on the Arduino electronics prototyping platform, with an integrated, programmable microcontroller and 16 sensor inputs that can pick up any electrical signals emitted from the body — including brain activity, muscle activity, and heart rate. And it can all be mounted onto the first-ever 3D-printable EEG headset.
In the next 5 to 10 years we will see more widespread use of BCIs, from thought-controlled keyboards and mice to wheelchairs to new-age, immersive video games that respond to biosignals. Some of these systems already exist, though there’s a lot of work left before they become mainstream applications.
This summer something really amazing is happening: Commercially available devices for interfacing the brain are popping up everywhere. In 2013, more than 10,000 commercial and do-it-yourself EEG systems were claimed through various crowdfunded projects. Most of those devices only recently started shipping. In addition to OpenBCI, Emotiv’s new headset Insight, the Melon Headband, and the InteraXon Muse are available for preorder. As a result, countless amazing — and maybe even practical — implementations of the BCI are going to start materializing in the latter half of 2014 and into 2015. But BCIs are still nascent. Despite big claims and big potential, they’re not ready; we still need makers, who’ll hack and build and experiment, to use them to change the world.
The following images are a series of sketches, screenshots, and photographs documenting my design process in the creation of the OpenBCI Spiderclaw (version 1). For additional information on the further development of the Spiderclaw, refer to the OpenBCI Docs Headware section and my post on Spiderclaw (version 2). If you want to download the .STL files to print them yourself or work with the Maya file, you can get them from the OpenBCI Spiderclaw GitHub repo. Also, if 3D-printed EEG equipment excites you, check out my post on 3D-printable EEG electrodes!
ROB3115 is an interactive graphic novel that is influenced by the reader’s brainwaves. The experience is driven by the reader’s ability to cognitively engage with the story. ROB3115’s narrative and its fundamental interactive mechanic – the reader’s ability to focus – are tightly intertwined by virtue of a philosophical supposition linking consciousness with attention.
ROB3115 explores the intersection of interactive narrative, visual storytelling, and brain-computer interfacing. The experience, designed for an individual, puts the reader in the shoes of a highly intelligent artificial being that begins to perceive a sense of consciousness. By using a NeuroSky brainwave sensor, the reader’s brain activity directly affects the internal dialogue of the main character, in turn dictating the outcome of his psychosomatic realizations. The system is an adaptation of the traditional choose-your-own-adventure. However, instead of actively making decisions at critical points in the narrative, the reader subconsciously affects the story via their level of cognitive engagement. This piece makes use of new media devices while, at the same time, commenting on the seemingly inevitable implications of their introduction into society.
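The branching mechanic can be sketched in a few lines. NeuroSky’s headsets report an “attention” value on a 0–100 scale; the toy function below picks a narrative branch from that reading. The branch names and threshold are made up for illustration — ROB3115’s real logic is built in other tools and is presumably more nuanced — but the core idea of the reader steering the story without an explicit choice is the same.

```python
def pick_branch(attention, engaged="lucid", drifting="detached", threshold=55):
    """Map a NeuroSky-style attention reading (0-100) to a narrative branch.

    Branch names and the threshold are hypothetical, for illustration only.
    """
    if not 0 <= attention <= 100:
        raise ValueError("attention must be in 0..100")
    return engaged if attention >= threshold else drifting

# A reader locked into the story takes one path, a distracted reader
# the other, without either ever making an explicit decision:
print(pick_branch(80))  # lucid
print(pick_branch(30))  # detached
```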
Dot is one of the main characters in a sci-fi graphic novel that I’ve been working on as a side project. The story largely inspired my thesis, ROB3115, which is a graphic short story about a robot. The piece is interactive and is affected in real time by the reader’s brainwaves.
I recently founded the Brain Interface Lab with some colleagues from Parsons MFA Design & Technology and Columbia University. The lab is dedicated to supporting the open-source software and hardware development of brain-computer interfaces. Check out our website and all of the awesome stuff that was created during our first big event titled Hack-A-Brain: