C World

Conor's Blog/Portfolio


OpenBCI Graphical User Interface (GUI)

[Image 1] — The OpenBCI Board (with which the OpenBCI GUI interfaces)

Over the course of the late summer and early fall I worked extensively on the OpenBCI Graphical User Interface (GUI). The first version of the application, as seen in [Image 2] below, was developed by Chip Audette, who is one of the biggest OpenBCI contributors and runs the amazing blog EEG Hacker. The GUI is developed in Processing, a Java-based creative coding framework.

[Image 2] OpenBCI GUI – Version 1

I worked on:

  • [Image 3] updating the design & user experience (w/ the help of Agustina Jacobi)
  • [Image 4] adding a UI controller to manage the system state (initial hardware settings, startup, live data streaming mode, playback mode, synthetic data mode, etc.)
  • [Image 5] adding a UI controller to manage OpenBCI board channel settings
  • the startup protocol for establishing a connection between the OpenBCI GUI and the OpenBCI Board
  • a collapsible window for adding and testing new features, called the “Developer Playground”
  • a widget at the bottom of the application that gives feedback to the user about what the system is doing

[Image 3] — OpenBCI GUI – Version 2
[Image 4] — UI controller to manage the system state
[Image 5] — UI controller to manage OpenBCI board channel settings

To download the latest version of the OpenBCI GUI, check out the following GitHub repo! Don’t hesitate to fork it, make improvements, and try out new features in the Developer Playground. For more information on how to get up and running with the OpenBCI board, check out the getting started guide on the OpenBCI website.

ROB3115 – A Neuro-Immersive Narrative

In-experience screenshot

ROB3115 is an interactive graphic novel influenced by the reader’s brainwaves. The experience is driven by the reader’s ability to cognitively engage with the story. ROB3115’s narrative and its fundamental interactive mechanic – the reader’s ability to focus – are tightly intertwined by virtue of a philosophical supposition linking consciousness with attention.

ROB3115 explores the intersection of interactive narrative, visual storytelling, and brain-computer interfacing. The experience, designed for an individual, puts the reader in the shoes of a highly intelligent artificial being that begins to perceive a sense of consciousness. Via a NeuroSky brainwave sensor, the reader’s brain activity directly affects the internal dialogue of the main character, in turn dictating the outcome of his series of psychosomatic realizations. The system is an adaptation of the traditional choose-your-own-adventure: instead of actively making decisions at critical points in the narrative, the reader subconsciously affects the story through his or her level of cognitive engagement. This piece makes use of new media devices while, at the same time, commenting on the seemingly inevitable implications of their introduction into society.

This project was my thesis for my M.F.A. in Design & Technology at Parsons.

Brain Interface Lab

I recently founded the Brain Interface Lab with some colleagues from Parsons MFA Design & Technology and Columbia University. The lab is dedicated to supporting the open-source software and hardware development of brain-computer interfaces. Check out our website and all of the awesome stuff that was created during our first big event titled Hack-A-Brain:

audioBuzzers – Audio Visualizer (Unity)

Summary

This is a Unity-built audio visualizer for the song “Major Tom,” as covered by Shiny Toy Guns.

Project Files

The Web Player: http://a.parsons.edu/~russc171/UnityHW/AudioBuzzers_2/AudioBuzzers_2.html

The Unity Project: http://a.parsons.edu/~russc171/UnityHW/hw_wk5_audioBuzzers.zip


Demo Reel

DEMO REEL

DEMO REEL BREAKDOWN


Plasma Ball Concentration Game (openFrameworks + NeuroSky MindSet EEG headset)

Project Summary

This project relates to the brain-computer interface work I’ve been doing for my thesis. Since I will soon be creating generative animations that respond to brain activity, as part of a digital graphic novel, I wanted to prototype a visually complex animation that was dependent on a person’s brain activity. This project was written in openFrameworks and uses a NeuroSky MindSet to link a player’s attention level to the intensity of electricity being generated from a sphere in the middle of the screen. The meat of the code is a recursive function that creates individual lightning strikes at a frequency inversely proportional to the attention parameter calculated by the NeuroSky EEG headset. The project was visually inspired by the Tesla coil and those cool electricity lamps that were really popular in the ’90s (see below).
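The recursive strike logic can be sketched in plain Java (the original is openFrameworks/C++; the class and method names here are hypothetical, and the jitter, shrink, and branch constants are illustrative, not the project’s actual values):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Random;

public class PlasmaSketch {
    static final Random rng = new Random(42); // seeded so runs are repeatable

    // Build one lightning strike as a list of {x, y} points, recursively.
    static List<double[]> boltPoints(double x, double y, double angle,
                                     double length, int depth) {
        List<double[]> pts = new ArrayList<>();
        grow(x, y, angle, length, depth, pts);
        return pts;
    }

    private static void grow(double x, double y, double angle, double length,
                             int depth, List<double[]> pts) {
        if (depth <= 0 || length < 2) return;     // stop when too short or too deep
        double nx = x + Math.cos(angle) * length; // next segment endpoint
        double ny = y + Math.sin(angle) * length;
        pts.add(new double[]{nx, ny});
        // Continue the main strand with angular jitter and a shrinking length.
        grow(nx, ny, angle + (rng.nextDouble() - 0.5), length * 0.8, depth - 1, pts);
        // Occasionally fork a shorter side branch, giving the plasma-ball look.
        if (rng.nextDouble() < 0.3) {
            grow(nx, ny, angle + (rng.nextDouble() - 0.5) * 2.0, length * 0.5,
                 depth - 1, pts);
        }
    }
}
```

In the real sketch, each frame would draw line segments between consecutive points radiating from the sphere, with the recursion re-run at a rate driven by the headset’s attention value.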

Once the connection between the Neurosky headset and the user’s computer has strong connectivity, the user can press the ‘b’ key (for brain) to link their EEG with the plasma ball. At any point the user can press the ‘g’ key (for graph) to see a HUD that displays a bar graph of their attention value on a scale from 0-100. The graph also shows the connectivity value of the device and the average attention value, calculated over the previous 5 seconds, being used to dictate the frequency of the electricity.
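The 5-second average described above amounts to a rolling window over the once-per-second attention readings. A minimal sketch in plain Java (names are hypothetical, not the project’s actual code):

```java
import java.util.ArrayDeque;
import java.util.Deque;

public class AttentionSmoother {
    private final Deque<Integer> window = new ArrayDeque<>();
    private final int capacity; // e.g. 5 readings at ~1 Hz ≈ 5 seconds

    public AttentionSmoother(int capacity) {
        this.capacity = capacity;
    }

    // Push a new attention reading (0-100) and return the current windowed average.
    public double push(int attention) {
        window.addLast(attention);
        if (window.size() > capacity) window.removeFirst(); // drop oldest reading
        double sum = 0;
        for (int v : window) sum += v;
        return sum / window.size();
    }
}
```

Smoothing like this trades a few seconds of responsiveness for a much steadier electricity frequency, since raw per-second eSense values jump around considerably.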

In order to get this application working on your computer, you must first download and install the NeuroSky ThinkGear Connector. You should be able to get it working with any Bluetooth-enabled NeuroSky device; I’ve documented how to do so in the readme file on my GitHub. You can get my code for the project on my GitHub page here: https://github.com/crussoma/conorRussomanno_algo2012/tree/master/Conors_Final

Also, if you just want to see the recursive electricity code working independently of a person’s EEG, download and install the app lightningBall (not lightnightBall_brain) from my GitHub.

Project Video

To see this project in action check out my demo reel and jump to 35s.

Visual Inspiration

[Images: plasma ball and electricity lamp]


References

My code uses some of the logic and algorithms from Esteban Hufstedler’s Processing sketch: http://www.openprocessing.org/sketch/2924

Additionally, a big shout out to Akira Hayasaka for writing the NeuroSky openFrameworks addon that I used to pull this off: https://github.com/Akira-Hayasaka/ofxThinkGear

Interactive Android Application for EEG Biofeedback

//–The Code Is On GitHub!–//

ABSTRACT

This post details the research and development of a mobile application which receives and annotates neurofeedback data from a commercial electroencephalography (EEG) device with a single dry electrode. The system is designed to convert any Android mobile phone into a portable storage device that passively records the user’s brainwaves while providing an interface to manually annotate the data with daily activities and moods. This application has the potential to provide numerous benefits to medical fields including but not limited to: neuroscience, psychology, psychiatry, and head and neck trauma. While the medical implications seem to be the most immediately apparent, an application like this could also give the everyday person a better understanding of how their daily routine affects their state of mind.

Useful Links

Electroencephalography (EEG), The Frontier Nerds, brain-computer interface (BCI), Arduino, NeuroSky, brainwave, Bluetooth, Google Android, Emotiv, MindFlex, Processing, Java, SD card, neurofeedback, active vs. passive electrode, wet vs. dry electrode, International 10-20 System, The OpenEEG Project, .csv file, baud rate, serial communication, integrated development environment (IDE)

INTRODUCTION

This project began as a personal infatuation with brain-computer interfacing after I discovered some fascinating interactive games and applications that people were developing using EEG technology. Neurofeedback, until the early 2000s, had been predominantly used in medical and academic settings, where expensive equipment was used to test subjects with specific neurological and psychological conditions. This paradigm began to change as open-source initiatives like The OpenEEG Project [1] and companies such as Emotiv [2] and NeuroSky [3] started developing commercial EEG hardware platforms that were available to the general public for testing and development. While these technologies are by no means cheap, they do provide a new medium for developers and artists to create projects that interface with the neurological biofeedback of the human body.

Problem Statement

Today, neuroscientists, psychologists, and other physicians use neurofeedback to diagnose, predict, and treat certain neurological and pathological conditions. Some of these infirmities include epilepsy, seizures, mood disorders, and trauma. While research in this field has been conducted since the early 1900s, it is only recently that large advancements have been made, owing to the immense impact of technology on understanding the human brain [4]. In light of this, doctors who use EEG in their work are seeing an increasing benefit in working with developers and computer scientists to provide new methods of both retrieving and analyzing biofeedback from the brain. While there will always be a need for running EEG experiments in controlled environments, advancements in technology are enabling the creation of new applications that can provide more portable devices for retrieving, storing, and annotating neurofeedback. Such devices could prove to be invaluable for the advancement of medicine and achieving a more thorough understanding of the human brain as a whole.

Additionally, the average person leads their daily life with very little understanding of what is actually going on inside of their own head. Most people have a very qualitative view of why they feel certain emotions or experience different moods. What most people don’t realize is that there are quantitative and measurable data that can be retrieved from the brain that can provide better insight into why we feel and act the way that we do. The biggest obstacle that is preventing the average person from knowing more about his or her own brain is the difficulty of providing a non-invasive yet informative neurofeedback system at an affordable cost.

Objective

The objective of this project is to provide a starting point for a customizable neurofeedback application that can be tailored to the needs of different doctors, researchers, and individuals. The application will help to provide additional insight into the understanding of the brain – both medically and in a general sense. Furthermore, it will serve as a reference for other developers that aspire to contribute to the field of interfacing with the brain. It is important to keep in mind that this is an early iteration of an ongoing design process, and that development of this application will continue after further testing and collection of user feedback.

PRECEDENTS

Over the last decade, an increasing number of EEG-related projects have emerged outside of the medical world. They range from open-source collaboration initiatives to commercialized proprietary hardware intended for commercial development of applications. The projects listed below inspired the development of this application and provided invaluable knowledge for its execution.

The OpenEEG Project

EEG began to emerge outside of medical and research settings in the early 2000s with the launch of an open-source initiative called “The OpenEEG Project” [1]. It became the first online forum for open discussion about EEG technology, and it has brought together many experienced professionals who are willing to share their knowledge about EEG hardware and software. The website’s homepage states that:

“The OpenEEG project is about making plans and software for do-it-yourself EEG devices available for free (as in GPL). It is aimed towards amateurs who would like to experiment with EEG. However, if you are a pro in any of the fields of electronics, neurofeedback, software development etc., you are of course welcome to join the mailing-list and share your wisdom.”

The website provides tutorials on how to build your own EEG devices, as well as examples of code that manage the intricate signal-processing side of the technology. Additionally, anyone can join the OpenEEG mailing list to receive up-to-date information on advancements in EEG technologies.

Frontier Nerds: How to Hack Toy EEGs

In April 2010, a team from NYU’s ITP graduate program comprised of Eric Mika, Arturo Vidich, and Sofy Yuditskaya published a blog post titled “How to Hack Toy EEGs.” It thoroughly documents how they hacked an early EEG toy built by NeuroSky known as the Mind Flex. In the tutorial, they provide a list of commercial EEG devices available in 2010, a brief description of the science behind EEG, sample code, an Arduino library for retrieving and outputting the EEG data, a Processing sketch that visualizes the data channels sent out from the Mind Flex, a corresponding Processing library, and video documentation of the entire process [5]. The project is very well executed and documented, and it served as the primary inspiration for the application that I am currently developing, which uses the same Mind Flex toy to record the EEG data. Figure 1 shows a generalized diagram of how the Frontier Nerds extracted the EEG data from the Mind Flex and used an Arduino to process it. My application is an extension of this process and uses their Arduino library to retrieve the signals, which are then passed through a Bluetooth device to the Android.

Necomimi: Brainwave Controlled Cat Ears

In early 2012, a company called Neurowear designed a fashionable EEG accessory, known as the Necomimi, which uses NeuroSky’s internal signal processing chip. The accessory makes a pair of artificial cat ears react to the wearer’s brain state. As the wearer gets more excited the ears stand up. Conversely, when the wearer is more relaxed the ears sink down [6].

The device demonstrates the usability of commercial EEG data for real-time interaction. Though this is a very simple demonstration of the data, it serves as a proof of concept for the realm of commercial EEG. Additionally, its immediate popularity is indicative of a shift in society’s opinion on the integration of BCIs into common culture.

TARGET USER GROUPS

The main goal for the current iteration of this application is to provide a more portable and scalable application for medical fields that currently use EEG. With that said, the application is customizable and can be tailored for independent research and annotation of neurofeedback data.

Neurologists and Psychologists

In the field of neurology, doctors are already examining correlations between EEG and numerous brain disorders, including addiction and other naturally occurring diseases of the brain. EEG is commonly used to study brain ailments such as multiple personality disorder [7], migraines [8], epilepsy [9], and seizures [10]. With regard to the effects of drugs on the human brain, research has also been done into EEG in the classification and evaluation of the pharmacodynamics of psychotropic drugs [11], cerebral blood flow velocity abnormalities in chronic cocaine users [12], and other forms of addiction. Additionally, EEG has been used to study many forms of brain trauma, including neuronal injury after brain anoxia [13] and impact-related brain trauma [14]. As neurologists and psychologists continue to learn more about the brain, portable and easy-to-use neurofeedback applications like this project will provide new methods of attaining vital information for the advancement of neuroscience.

The Average Person

In addition to the immediate benefits that this technology will provide to doctors, there is also a future for personal EEG data tracking. In the same way that doctors use EEG to learn about neurological disorders and how they are triggered, the everyday person may soon be able to use portable neurofeedback devices to augment daily routine by quantitatively juxtaposing common daily activities against personal perception of state of mind.

TECHNOLOGY

This project involves a variety of technologies. At the core of the system is the NeuroSky signal-processing chip. The data from this single-dry-electrode EEG toy is sent to an Arduino using methods described in the hack by the Frontier Nerds (mentioned above). The data is then encoded and sent through a Bluetooth module that was purchased from Sparkfun [15]. The Bluetooth packets are received, parsed, and time-stamped by the application that I developed on an HTC Nexus One Google Android mobile phone, and then methodically stored to a .csv file on the phone’s internal SD card. Additional manually selected inputs are also time-stamped and written to the same .csv file. The .csv file can then be opened in Microsoft Excel to be analyzed and visualized with charts and graphs. The remainder of this section goes into further detail about the technologies involved in this project.

Figure 1. Flow of EEG data from the Mind Flex to a Bluetooth communication device [5].
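The logging step above can be illustrated with a small sketch: a parsed packet of channel values plus the phone’s clock becomes one row of the .csv file. The class and method names and the column layout are assumptions for illustration, not the app’s actual format:

```java
public class EegCsvLogger {
    // Serialize one packet as "timestampMs,ch0,ch1,..." — one line per second
    // in the real app, appended to the .csv file on the SD card.
    static String toCsvRow(long timestampMs, int[] channels) {
        StringBuilder row = new StringBuilder(Long.toString(timestampMs));
        for (int v : channels) {
            row.append(',').append(v);
        }
        return row.toString();
    }
}
```

Manually entered activities and moods would be written through the same path, with the annotation text in place of the channel values, so both streams share one time axis.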

Electroencephalography (EEG)

EEG is the recording of changes in electric potential outside the skull, caused by fluctuations in brain activity. The electrical activity originates from current flowing between the neurons of the brain. EEG is commonly used to research evoked potentials (EPs) and event-related potentials (ERPs). The former requires averaging the data and connecting it to external stimuli, while the latter refers to averaged EEG responses that correlate to more complex processing of stimuli. EEG signals are separated into different bands, each unique in its frequency range. These different frequencies have unique distributions over the scalp and individual neural significance relating to different brain activity. While there are more accurate methods of retrieving neurofeedback data than EEG, they are more invasive and require more equipment; these techniques include electrocorticography (ECoG) and magnetic resonance imaging (MRI) [16].

EEG Bands

Standard EEG band frequencies fall in the range of 0 to just over 100 Hertz. These bands differ between adults and children and their locations are dominant in different regions of the brain. The chart below reveals some of the common EEG bands and their respective frequencies, brain states, and wave types.

Figure 2. Common EEG Band Chart [16]

Electrode Types

Electrodes used in EEG devices have two types of classification that affect the quality of data received by the electrode. The first classification is whether the electrode is “dry” or “wet”. Dry electrodes consist of a dry conductive surface that is placed against the scalp. Wet electrodes are coated with a highly conductive solution, often saline, which significantly increases the clarity of the data [17]. The other classification is whether the electrode is “active” or “passive”. Active electrodes have built-in circuitry that amplifies the electrical current close to where the signal is picked up from the scalp. Due to the extremely small electric potentials (millionths of a volt) that are recorded during EEG, data can be greatly distorted by even the resistance of common conductive wire. Therefore, active electrodes that are able to amplify the signal early in the system produce a much better resolution. In both of these cases, however, the option that provides the stronger signal is also more cumbersome [1].

International 10-20 System

When dealing with multiple electrodes, EEG placement has been standardized to allow for better collaboration between researchers. The standardized system is referred to as the International 10-20 System. The diagram in Figure 3 depicts a top-down perspective of the electrode placement of an apparatus that uses the 10-20 system [18]. Note that this project does not use this system, as it utilizes a single electrode and in turn does not provide spatial resolution of the EEG signal.

Figure 3. International 10-20 System of EEG Electrode Placement [18]

Hardware Used

I used a variety of hardware in this project that enabled collection and transfer of EEG data.

NeuroSky MindFlex

The NeuroSky MindFlex is a proprietary commercial EEG device that uses a single electrode – both dry and passive – to parse a raw neurofeedback signal into 11 channels: connectivity (a reading between 0 and 200, with 0 being perfect connectivity), attention (a black-box value calculated by NeuroSky’s proprietary signal-processing software), meditation (similar to attention), theta, delta, low alpha, high alpha, low beta, high beta, low gamma, and high gamma [3].
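The Frontier Nerds’ Arduino library emits these channels over serial as one comma-separated line per second, which the phone-side code then splits apart. A hypothetical parser for such a line, assuming the channel order listed above (the actual wire order is not verified here):

```java
public class MindFlexPacket {
    // Assumed channel order, matching the list in the paragraph above.
    static final String[] CHANNELS = {
        "connectivity", "attention", "meditation", "theta", "delta",
        "lowAlpha", "highAlpha", "lowBeta", "highBeta", "lowGamma", "highGamma"
    };

    // Split one serial line, e.g. "0,75,60,...", into 11 integer channel values.
    static int[] parse(String line) {
        String[] parts = line.trim().split(",");
        if (parts.length != CHANNELS.length) {
            // Malformed packets (dropped bytes, partial lines) are rejected.
            throw new IllegalArgumentException(
                "expected " + CHANNELS.length + " values, got " + parts.length);
        }
        int[] values = new int[parts.length];
        for (int i = 0; i < parts.length; i++) {
            values[i] = Integer.parseInt(parts[i].trim());
        }
        return values;
    }
}
```

Validating the field count before writing a row is worthwhile in practice, because Bluetooth serial links occasionally drop characters mid-line.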

Arduino

“Arduino is an open-source electronics prototyping platform based on flexible, easy-to-use hardware and software. It’s intended for artists, designers, hobbyists, and anyone interested in creating interactive objects or environments [19].”

Sparkfun Bluetooth Mate Silver

This Bluetooth module is designed to be used with the Arduino environment. The module’s modem communicates over a serial (RX/TX) connection and can be programmed to stream serial data at baud rates between 2,400 and 115,200 bps. The device is described thoroughly on Sparkfun’s website, which contains schematics, data sheets, and tutorials for the product [15].

HTC Nexus One Google Android Phone

The HTC Nexus One is a mobile phone that runs on the Google Android operating system 2.3.6. It was released in 2010, is Bluetooth compatible, has a resolution of 480×800 px with 254 pixels per inch, and contains an internal SD card. Though it is not the most modern Android phone, it was still able to run this version of the EEG application – a good sign for future iterations.

Software Used

A wide variety of software was used to develop this application; I worked primarily with Arduino and Processing. Arduino’s IDE is based on C/C++ and is supported by an extensive website with learning material, examples, and forums [19]. Processing is a large collection of Java libraries that are primarily graphics-based. It too is supported by an extensive website that allows even novice programmers to start developing right away. Processing is very useful for Android development because the Processing IDE has a built-in Android mode that lets developers test applications on an Android emulator that can mimic various types of Android phones. Additionally, developers are able to access the Android SDK and APIs directly from the Processing environment. Certain aspects of the application required writing raw Java code that had not yet been directly translated into Processing libraries; these elements included working with Bluetooth, writing to the SD card, and running the application as a background process so as not to interrupt the data stream [20].

EARLY PROTOTYPE & PLANNING

This project began in December of 2011 after I first familiarized myself with EEG technology and the field of brain-computer interfacing. It started as a predominantly hardware-based project while I was still getting acquainted with the Arduino environment and replicating the hack detailed by the Frontier Nerds of NYU’s ITP department. In addition to the initial hardware, I created an interface concept design for what would eventually evolve into the current application that this paper details. Before beginning work on the current application, I also conducted a generic survey via Facebook to gather input on what activities and moods the “everyday person” would be interested in comparing to their own quantitative EEG data.

BrainCap v1.0

My own work with BCIs began with a project that I dubbed the BrainCap, which eventually became known as BrainCap v1.0 due to later iterations. The device used the Arduino library created by the Frontier Nerds to extract the serial EEG data from the NeuroSky MindFlex. With some simple Arduino code, the system wrote the incoming data to an SD card using an SD Breakout module purchased from Sparkfun. A 9-volt battery was used to power the device, which allowed the user to walk around without any cords attached. Additionally, the device had built-in buttons and beepers to ensure that the system would start and stop appropriately without deleting the collected data. All of the components of the device were mounted onto a baseball cap that was purchased at a dollar store.

The device was successful in that it collected and time-stamped data, but there was no built-in mechanism for adding context to the data with external stimuli. I tried doing this by taking extensive notes during sessions while the data was being recorded, but it proved to be tedious. BrainCap v1.0 can be seen in Figure 4, while some of the data that was collected can be seen in Figure 5. A more thorough explanation of the project and its results can be viewed at the following URL:

http://www.digitaldistillation.com/pComp/?p=897

Figure 4. BrainCap v1.0
Figure 5. External Stimuli Recorded by Hand to Add Context to the Recorded EEG Data

Application Interface Concept Design

As the need for a more convenient method for recording external stimuli became apparent, I began to develop concepts for a mobile application interface to accompany the BrainCap. The basic interaction of this interface design was to provide a simple yet comprehensive system for manually inputting both preset and custom activities and moods to contextualize the neurofeedback. Some sketches of the early interface design, which ended up being very similar to the current interface, can be seen in Figure 6.

Figure 6. Early Interface Design

To ensure that the interface received some feedback before I began developing it, a simple survey was distributed via Facebook to get input about common daily activities and moods that people would be interested in learning more about with regards to their own EEG data. The survey questions can be seen in Figure 7.

Figure 7. Facebook Survey for Feedback on Interface Design Features

CURRENT SYSTEM

The current iteration of the application achieves the goals that were set in the early stages of the project. The system successfully records both passively collected EEG data from the MindFlex and manually input activities and moods from the mobile application to the internal SD card of the Nexus One Android mobile phone. The BrainCap was modified to include the bare minimum number of components in order to achieve a longer battery life and also to ensure the clean processing of EEG data with the Arduino.

Portable EEG Device (BrainCap v2.0)

BrainCap v2.0 consists of the following components: the NeuroSky MindFlex, an Arduino Uno, a long-lasting Ultralife 9v battery, a breadboard, the Sparkfun Bluetooth Mate Silver, electrical wires, and the same baseball cap used in v1.0. The only software written for this part of the project was compiled in Arduino and uses the library created by the Frontier Nerds. BrainCap v2.0 can be seen in Figure 8.

Figure 8. BrainCap v2.0

Mobile Application Interface

The interface of the Android application contains four main tabs: Annotate, Your Brain, Share, and Settings. Currently, the only tab with functionality is the Annotate tab. Within this tab, the user is prompted to select from two types of manual input – activity or mood. Once either of these is selected, the user is taken to a new window where he or she can choose from an assortment of preset common activities or moods. In addition, the user can create custom activities and moods to personalize his or her EEG annotation. Once an activity or mood is selected, the user can decide whether the input should be turned on or off, logged as an instant event, or retroactively added to record past events. If an activity or mood is turned on, it is highlighted with a green overlay to indicate that it has been activated. After a selection has been made, the application writes the appropriate subcategory to the internal SD card on the Android and uses the internal clock of the phone to time-stamp the entry so it can be synchronized with the passively recorded EEG data. Additionally, the interface has a narrow text field at the top of the screen that shows the incoming EEG data packets, which arrive once per second. A screenshot of the interface can be seen in Figure 9 below.

Figure 9. Screenshot of the Application’s Interface

Data Analysis Techniques

Once data has been recorded to the internal SD card of the phone, the .csv file that contains the data can easily be sent to a computer via a standard USB cable. That file can then be opened in Microsoft Excel to construct charts and graphs for better interpretation of the data. Figure 10 shows a graph of sample data recorded using the application detailed in this project. The graph depicts my brain’s attention and meditation activity as I played a FIFA World Cup final in the Xbox game FIFA 2010 with one of my friends. The data points are averaged over 30-second intervals to provide a more coherent visualization of the EEG. Averaging the data over larger intervals is necessary due to the poor quality of the EEG data recorded from a single electrode that is both dry and passive.

Figure 10. A Graph That Depicts My Brainwaves as I Played a Video Game with My Friend
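The interval averaging described above is straightforward to express in code. A small sketch of that post-processing step (the class name and window handling are illustrative; the actual analysis was done in Excel), where per-second samples are grouped into fixed windows and a partial final window is averaged over however many samples it holds:

```java
public class IntervalAverager {
    // Average per-second samples over fixed-size windows, e.g. windowSize = 30
    // for 30-second intervals. Returns one averaged value per window.
    static double[] average(double[] samples, int windowSize) {
        int windows = (samples.length + windowSize - 1) / windowSize; // ceil division
        double[] out = new double[windows];
        for (int w = 0; w < windows; w++) {
            int start = w * windowSize;
            int end = Math.min(start + windowSize, samples.length); // partial last window
            double sum = 0;
            for (int i = start; i < end; i++) {
                sum += samples[i];
            }
            out[w] = sum / (end - start);
        }
        return out;
    }
}
```

With noisy single-electrode data, widening the window trades temporal detail for trend visibility, which is exactly the trade made in Figure 10.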

CONCLUSION

The many phases of this project have encompassed a wide range of technologies that have come together to form a unique application with some potentially significant implications for various fields of research. Though the project is still in an early stage, the groundwork has been laid and the application can be easily adapted from this point forward. The testing that has been done so far has provided unique insight into what is possible with a system that uses this type of EEG device – a device that provides crude real-time data that can be averaged over longer intervals to identify clear trends in brain activity. However, the lack of data analysis is one of the major shortcomings of this project. Because I have the only working instance of the system I am unable to get large amounts of data from different test subjects.

Findings

At this point no analysis has been done into finding correlations between external stimuli, personal perception of state of mind, and EEG data. This type of research will require an initial user group to test the application. Despite this, the few tests that have been run show clear linear and oscillating trends in brainwaves when averaged over extended periods of time. It was discovered early on that the data is too chaotic when looking at the data points that are received every second. However, when this data is averaged, distinct patterns in brainwave activity can be identified.

Future Directions

Moving forward there are many steps that need to be taken.

Initial User Group

An initial user group that is willing to test this application needs to be selected in order to get user feedback on the interaction of the application. Additionally, this user group will provide the amount of data that is necessary to start juxtaposing EEG data against daily activities and moods. From there, data analysis can commence to see what types of correlations data of this quality can produce between EEG and other information.

Interface Additions

As far as the interface is concerned, a data visualization system will be implemented so that the user can track data in real-time. This data visualization will allow the user to see automated graphs of his or her brain activity without having to upload the .csv file into Microsoft Excel. The data visualization interface will resemble the Processing sketch (Figure 11) that the Frontier Nerds made to accompany their Arduino library [5].

Figure 11. EEG Data Visualization Developed by The Frontier Nerds [5]

Sharing Capability

The next iteration of this project will also include an option for the user to anonymously share his or her EEG data and inputs with a central database. This database will serve as an aggregation of human brain activity and will provide an invaluable set of data to analyze and use as a baseline to compare to the EEG data of any individual.

Ethical Considerations

When dealing with a technology as new and powerful as brain-computer interfacing, there are obvious and not-so-obvious ethical implications that must be taken into consideration. While the capabilities of this type of application are limited by the crude quality of the data, technology will continue to advance, allowing clearer data to be acquired more easily. Soon it will not be science fiction to have a portable EEG device with bi-directional interactivity – in other words, a device that talks back. This type of technology carries numerous potential risks that need to be planned for in order to prevent negative consequences, including health and safety, legal issues, abuse of the technology, security and privacy, and social impacts.

ACKNOWLEDGEMENTS

I’d like to give a special thanks to my Parsons Design + Technology instructors from Spring 2012 – Jonah Brucker-Cohen, Katherine Moriwaki, Joel Murphy, and Ed Keller – for providing great mentorship throughout the development of this project. I’d also like to thank NeuroSky, Sparkfun, Arduino, Google, and Processing for giving me fun toys to play with. Thank you, my good friend Joe Artuso, for editing the piece. Lastly, thank you to The Frontier Nerds for giving me a place to start.

REFERENCES

I tried my best to give credit to all of the work that I referenced in the development of this project. Please contact me if you’d like me to add/remove anything to/from this blog post. I will not hesitate to do so.

  1. The OpenEEG Project. http://openeeg.sourceforge.net/doc/
  2. Emotiv. http://www.emotiv.com/
  3. NeuroSky. http://neurosky.com/
  4. Electroencephalography. http://www.bem.fi/book/13/13.htm#03
  5. Mika, E., Vidich, A., Yuditskaya, S. How to Hack Toy EEGs. Frontier Nerds: An ITP Blog. http://frontiernerds.com/brain-hack
  6. Necomimi. http://neurowear.com/
  7. Arikan, K., et al. EEG Correlates of Startle Reflex with Reactivity to Eye Opening in Psychiatric Disorders: Preliminary Results. Clinical EEG and Neuroscience 37.3 (2006), 230-234.
  8. Bjørk, M.H., et al. Interictal Quantitative EEG in Migraine: A Blinded Controlled Study. The Journal of Headache and Pain 10.5 (2009), 331-339.
  9. Kennett, R. Modern Electroencephalography. Journal of Neurology 259 (4), 783-789. April 2012.
  10. Bubrick, E.J., Bromfield, E.B., Dworetzky, B.A. Utilization of Below-the-Hairline EEG in Detecting Subclinical Seizures. Clinical EEG and Neuroscience 41.1 (2010), 15-18.
  11. Saletu, Bernd, Anderer, P., Saletu-Zyhlarz, G. EEG Topography and Tomography (LORETA) in the Classification and Evaluation of the Pharmacodynamics of Psychotropic Drugs. Clinical EEG and Neuroscience 37.2 (2006), 66-80.
  12. Copersino, M.L., et al. EEG and Cerebral Blood Flow Velocity Abnormalities in Chronic Cocaine Users. Clinical EEG and Neuroscience 40.1 (2009), 39-42.
  13. Rossetti, A.O. Early EEG Correlates of Neuronal Injury After Brain Anoxia. Neurology 78 (11): 796-802. March, 2012.
  14. Tezer, I., Dericioglu, N., Saygi, S. Generalized Spike-Wave Discharges with Focal Onset in a Patient with Head Trauma and Diffuse Cerebral Lesions: A Case Report with EEG and Cranial MRI Findings. Clinical EEG and Neuroscience 35.3 (2004): 151-157.
  15. Sparkfun. http://www.sparkfun.com/
  16. Budzynski, H., Budzynski, T., Evans, J. Introduction to Quantitative EEG and Neurofeedback: Advanced Theory and Applications. 2nd Edition. Elsevier Inc. 2009.
  17. Wang, Wang, Maier, Jung, Cauwenberghs. Dry and Noncontact EEG Sensors for Mobile Brain-Computer Interfaces. IEEE Transactions on Neural Systems and Rehabilitation Engineering 20 (2): 228-235. March, 2012.
  18. Gilmore, R.L. American Electroencephalographic Society Guidelines in Electroencephalography, Evoked Potentials, and Polysomnography. J. Clin. Neurophysiol (11), 147. January, 1994.
  19. Arduino. http://arduino.cc/
  20. Processing for Android. http://wiki.processing.org/w/Android
  21. Rao, R.P.N. University of Washington. Computer Science. March 7, 2012. http://www.cs.washington.edu/homes/rao/

WANNA GET INVOLVED?!

I’d be happy to share the graphics, code, and system schematics with anyone that wants to help with application development and/or collection of data. If enough interest is generated, I’d also be willing to post a thorough step-by-step procedure for application/device development with all the necessary assets for the Android App. If you are even remotely interested, don’t be afraid to contact me at conor.russomanno@gmail.com AND comment on the blog (interest is contagious!). The step-by-step will eventually get submitted regardless, but support from you guys would definitely help get the ball rolling. Brainiacs, assemble!


Kinematics-to-Color Conversion Game

[VIDEO TO COME]

I. Introduction

This project was very experimental in nature. I wanted to create an interface that uniquely translated the mind’s understanding of one common system onto a very different common system through a simple switch interface. Both the physics of kinematics and the science of color have always intrigued me. With this project I attempted to interface the two systems by means of an alternative method that the human mind would not typically think of. It was my hope that this odd translation would provide a new lens for understanding both systems, as well as shed new light on the human mind’s perception of a system. In the end I decided to turn the interface into a game that tracks how well the player is able to translate between the two systems of calculus-based kinematics and the RGB color system.

II. How It Works

This game assumes that the player or viewer has a basic understanding of the RGB color system, in addition to a grasp of the calculus-based relationship between position, velocity, acceleration, and jerk (the rate of change of acceleration). Through 4 clicks of the button (or switch) at the center of the application, the system tracks the absolute value of the velocity, acceleration, and jerk of the interaction. It does this by assuming a “distance” of 3 units – 1 unit for each click after the first, which represents the starting line. It then generates an average velocity (v), average acceleration (a), and average jerk (j) based on the timing between each of the 4 clicks. These three values are then mapped onto the R, G, and B values respectively. The algorithm behind the conversion assumes that the user’s 4-click timespan (total time) will be between 0.1 seconds and 30 seconds.

Method for Calculation

(Note: the values calculated for v, a, and j are rough averages due to the fact that there are only 4 data points recorded.  Additionally, this is one method of averaging the data but there are numerous other ways these averages could have been calculated.)

Referenced Variables:  T12 = time between the first and second click, T23 = time between second and third click, T34 = time between third and fourth click; additionally T1 = 0 (first click starts timer), T2 = time of second click referencing the timer that starts with the first click, T2.5 = the time halfway between the 2nd and 3rd click, etc.

Velocity:  This is calculated as the total distance, 3, divided by the total time, T14 = T12 + T23 + T34:

v = 3 / (T12 + T23 + T34)

Acceleration:  This is calculated as the average of the local accelerations between click 1 and click 3, and click 2 and click 4. In other words, the change in velocity from T12 to T23 is averaged with the change in velocity from T23 to T34. I used this method because calculating an acceleration requires a change in velocity, and the recorded data actually contains two changes in velocity (at click 2 and click 3).

a = (A123 + A234) / 2

A123 = (V23 – V12) / (T2.5 – T1.5)    and     A234 = (V34 – V23) / (T3.5 – T2.5)

Jerk:  This is calculated as the rate of change of acceleration between A123 and A234:

j = (A234 – A123) / (T3 – T2)
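The method above can be sketched in a few lines of C++ (my own re-implementation, not the game's Openframeworks code; the struct and function names are assumed, and t1..t4 are the four click timestamps in seconds, with t1 = 0):

```cpp
// Hypothetical re-implementation of the averaging method described above:
// four click timestamps in, average velocity, acceleration, and jerk out.
struct Kinematics { double v, a, j; };

Kinematics fromClicks(double t1, double t2, double t3, double t4) {
    // Segment velocities: 1 unit of "distance" per click interval.
    double v12 = 1.0 / (t2 - t1);
    double v23 = 1.0 / (t3 - t2);
    double v34 = 1.0 / (t4 - t3);
    // Midpoint times of each segment (T1.5, T2.5, T3.5 in the text).
    double t15 = (t1 + t2) / 2.0;
    double t25 = (t2 + t3) / 2.0;
    double t35 = (t3 + t4) / 2.0;
    // Local accelerations A123 and A234.
    double a123 = (v23 - v12) / (t25 - t15);
    double a234 = (v34 - v23) / (t35 - t25);
    Kinematics k;
    k.v = 3.0 / (t4 - t1);            // total distance / total time
    k.a = (a123 + a234) / 2.0;        // average of the two local accelerations
    k.j = (a234 - a123) / (t3 - t2);  // rate of change of acceleration
    return k;
}
```

Four evenly spaced clicks, for example, give a constant velocity with zero acceleration and zero jerk, which is exactly what the formulas above predict.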

Method for Conversion

Once the user has flipped the game’s switch 4 times, each of the absolute values, or magnitudes, of the above calculations (v, a, and j) is respectively mapped onto a 0-255 range of the red (R), green (G), and blue (B) values of a color. Initially the system mapped the lowest extremes of negative ( – ) acceleration and negative jerk to values of 0 for G and B respectively. This was not necessary for velocity-to-R because it is impossible to generate a negative value for velocity. After some testing, however, I decided to change the conversion so that it uses the absolute values of acceleration and jerk to map onto the 0-255 ranges of G and B. My rationale for this change was that the interaction between the two systems is confusing enough without variable ranges for a and j, with both negative and positive possibilities, mapping onto G and B variables that only have positive ranges. Therefore, as it stands now, acceleration values of 0 to 10 are mapped onto G-values of 0-255. The original method had acceleration values of -10 to 10 being mapped onto G-values of 0-255.
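The magnitude-to-channel mapping can be sketched like this (my own C++, with a hypothetical helper name; only the 0-10 acceleration range is stated above, so any other maximum passed in is an assumption):

```cpp
#include <cmath>

// Hypothetical sketch of the conversion: take the magnitude of a kinematic
// value, clamp it to its expected range, and scale it to a 0-255 channel.
int toChannel(double value, double maxValue) {
    double m = std::fabs(value);       // magnitudes only, per the final design
    if (m > maxValue) m = maxValue;    // clamp to the expected range
    return (int)(m / maxValue * 255.0 + 0.5);  // round to nearest channel value
}
```

For the green channel as described above, `toChannel(a, 10.0)` sends an acceleration of 0 to G = 0 and an acceleration of ±10 to G = 255.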

III. Process

Concept

Early Sketches and Calculations

These sketches and calculations show some of my thought processes in the development of my system to generate the average values for v, a, and j.  This was the first phase of my pseudocode before jumping into Openframeworks.

This is the first sketch for the layout of the game’s interface.

Manifestation

The final aesthetic was done in Photoshop and the back-end coding was done with Openframeworks.  Below is a screenshot and sample code of the variables I used:

IV. Conclusion

The game is not too difficult to pick up if you have a good understanding of calculus and/or kinematics, but it is very difficult to master due to the limited number of player inputs.  I found it hardest to produce a deep green color.  This entailed producing a high but constant acceleration without a high velocity or jerk.

Moving forward, I would like to get this game up on a website so that more people can test it. Additionally, I would like to make it social by adding a high-scores database that all players can view and compete on. In terms of practical applications for this type of conversion, a similar system might be useful for data visualization in industries where calculating jerk is important. Examples where people consider jerk include boxing (the higher the jerk, the more devastating the punch), car accidents, etc.

Orbitorbs v2.1 – Solar System Simulator

Project Summary

This project is an extension of Orbitorbs v1.0.  I translated the code that I wrote in Processing into Openframeworks, a C++-based creative coding framework.  I added features that enable more user control over the planetary system, including:

  • The ability to pause the solar system simulation and edit planet parameters
  • A more intuitive interaction for editing planet parameters
  • The ability to turn on and off a function that links the computer microphone volume input to the strength of the gravitational constant dictating the force between the planets (activate by pressing the ‘e’ key and deactivate by pressing the ‘s’ key). The higher the volume, the higher the g-constant (directly proportional).

The algorithm uses 2-dimensional matrices to store the x and y parameters of the various planets and it implements Newton’s Law of Universal Gravitation:

F = G * (m1 * m2) / r^2
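The pairwise update at the heart of such a simulator can be sketched as follows (my own C++, not the project's code; the struct and function names are assumptions):

```cpp
#include <cmath>
#include <vector>

// Hypothetical sketch of one simulation step: apply Newton's Law of
// Universal Gravitation, F = G * m1 * m2 / r^2, pairwise between planets,
// then integrate velocities and positions.
struct Planet { double x, y, vx, vy, mass; };

void gravityStep(std::vector<Planet>& planets, double G, double dt) {
    for (std::size_t i = 0; i < planets.size(); ++i) {
        double ax = 0.0, ay = 0.0;
        for (std::size_t k = 0; k < planets.size(); ++k) {
            if (k == i) continue;
            double dx = planets[k].x - planets[i].x;
            double dy = planets[k].y - planets[i].y;
            double r2 = dx * dx + dy * dy;
            double r = std::sqrt(r2);
            // Acceleration magnitude: F / m_i = G * m_k / r^2.
            double a = G * planets[k].mass / r2;
            ax += a * dx / r;  // project onto x
            ay += a * dy / r;  // project onto y
        }
        planets[i].vx += ax * dt;
        planets[i].vy += ay * dt;
    }
    for (auto& p : planets) { p.x += p.vx * dt; p.y += p.vy * dt; }
}
```

Raising `G` in response to microphone volume, as the feature list above describes, simply strengthens every pairwise pull on the next step.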

This project has the potential to be adapted into a new type of learning tool, allowing for a more fun and interactive method for teaching basic principles of physics including angular acceleration, gravitation, ideas of mass and density, and more.

Orbitorbs v2.1 (openframeworks) from Conor Russomanno on Vimeo.

The Code

If you want to play with this application or examine the code, please feel free to grab it from my github.

Blog at WordPress.com.
