C World

Conor's Blog/Portfolio



ROB3115 – A Neuro-Immersive Narrative

In-experience screenshot

ROB3115 is an interactive graphic novel that is influenced by the reader’s brainwaves. The experience is driven by the reader’s ability to cognitively engage with the story. ROB3115’s narrative and its fundamental interactive mechanic – the reader’s ability to focus – are tightly intertwined by virtue of a philosophical supposition linking consciousness with attention.

ROB3115 explores the intersection of interactive narrative, visual storytelling, and brain-computer interfacing. The experience, designed for an individual, puts the reader in the shoes of a highly intelligent artificial being that begins to perceive a sense of consciousness. By using a NeuroSky brainwave sensor, the reader’s brain activity directly affects the internal dialogue of the main character, in turn, dictating the outcome of his series of psychosomatic realizations. The system is an adaptation of the traditional choose-your-own-adventure. However, instead of actively making decisions at critical points in the narrative, the reader subconsciously affects the story via their level of cognitive engagement. This piece makes use of new media devices while, at the same time, commenting on the seemingly inevitable implications of their introduction into society.

This project was my thesis in graduating from Parsons with an M.F.A. in Design & Technology.

Demo Reel




Plasma Ball Concentration Game (openFrameworks + Neurosky’s EEG Mindset)

Project Summary

This project relates to the brain-computer interface work I’ve been doing for my thesis. Since I will soon be creating generative animations that respond to brain activity as part of a digital graphic novel, I wanted to prototype a visually complex animation that was dependent on a person’s brain activity. This project was written in openFrameworks and uses a Neurosky Mindset to link a player’s attention level to the intensity of electricity being generated from a sphere in the middle of the screen. The meat of the code is a recursive function that creates individual lightning strikes, with the interval between strikes inversely proportional to the attention parameter calculated by the Neurosky EEG headset. The project was visually inspired by the Tesla coil and those cool electricity lamps that were really popular in the 90s (see below).
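The recursive-bolt idea can be sketched roughly like this – a simplified, standalone C++ illustration, not the project’s actual code. The midpoint-displacement approach and the attention-to-interval mapping here are my own hypothetical stand-ins for the real implementation:

```cpp
#include <cmath>
#include <cstdlib>
#include <vector>

// A 2D point for one vertex along a lightning bolt.
struct Pt { float x, y; };

// Recursively build a jagged bolt from `a` to `b` by displacing the
// midpoint perpendicular to the segment, halving the offset each level.
void buildBolt(Pt a, Pt b, float offset, std::vector<Pt>& out) {
    if (offset < 1.0f) {            // base case: segment is straight enough
        out.push_back(a);
        out.push_back(b);
        return;
    }
    Pt mid { (a.x + b.x) / 2.0f, (a.y + b.y) / 2.0f };
    // Random perpendicular displacement in [-offset, offset].
    float jitter = offset * (2.0f * (std::rand() / (float)RAND_MAX) - 1.0f);
    float dx = b.y - a.y, dy = a.x - b.x;   // perpendicular direction
    float len = std::sqrt(dx * dx + dy * dy);
    if (len > 0) { dx /= len; dy /= len; }
    mid.x += dx * jitter;
    mid.y += dy * jitter;
    buildBolt(a, mid, offset / 2.0f, out);
    buildBolt(mid, b, offset / 2.0f, out);
}

// Map an attention value (0-100) to a delay between strikes, so that
// higher attention means more frequent bolts. The endpoints used here
// (1000 ms down to 50 ms) are illustrative, not the project's constants.
float strikeIntervalMs(int attention) {
    return 1000.0f - (attention / 100.0f) * 950.0f;
}
```

Each frame, one call to `buildBolt` from the sphere’s center to a random point on its rim yields a polyline that can be drawn as a single strike.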

Once the connection between the Neurosky headset and the user’s computer is strong, the user can press the ‘b’ key (for brain) to link their EEG with the plasma ball. At any point the user can press the ‘g’ key (for graph) to see a HUD that displays a bar graph of their attention value on a scale from 0-100. The graph also shows the connectivity value of the device and the average attention value, calculated over the previous 5 seconds, that is being used to dictate the frequency of the electricity.
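The 5-second rolling average could be computed with a simple time-windowed buffer along these lines (an illustrative sketch under my own assumptions, not the actual HUD code):

```cpp
#include <deque>
#include <utility>

// Rolling average of attention readings over a fixed time window, as a
// way to smooth the raw 0-100 attention value before it drives the
// electricity. The 5000 ms window matches the 5-second average in the HUD.
class AttentionSmoother {
public:
    explicit AttentionSmoother(long windowMs = 5000) : windowMs_(windowMs) {}

    // Record a reading taken at time `nowMs` (milliseconds since start)
    // and drop readings older than the window.
    void add(long nowMs, int attention) {
        samples_.push_back({nowMs, attention});
        while (!samples_.empty() && nowMs - samples_.front().first > windowMs_)
            samples_.pop_front();
    }

    // Average of the readings currently inside the window (0 if empty).
    double average() const {
        if (samples_.empty()) return 0.0;
        long sum = 0;
        for (auto& s : samples_) sum += s.second;
        return double(sum) / samples_.size();
    }

private:
    long windowMs_;
    std::deque<std::pair<long, int>> samples_;  // (timestamp ms, attention)
};
```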

In order to get this application working on your computer, you must first download and install the Neurosky ThinkGear Connector. You should be able to get it working with any Bluetooth-enabled Neurosky device; I’ve documented how to do so in the readme file on my GitHub. You can get my code for the project on my GitHub page here:

Also, if you just want to see the recursive electricity code working independent of a person’s EEG, download and install the app lightningBall (not lightningBall_brain) from my GitHub.

Project Video

To see this project in action check out my demo reel and jump to 35s.

Visual Inspiration






My code uses some of the logic and algorithms from Esteban Hufstedler’s Processing sketch:

Additionally, a big shout out to Akira Hayasaka for writing the Neurosky openFrameworks addon that I used to pull this off:

Kinematics-to-Color Conversion Game


I. Introduction

This project was very experimental in nature.  I wanted to create an interface that uniquely translated the mind’s understanding of one common system onto a very different common system through a simple switch interface.  Both the physics of kinematics and the science of color have always intrigued me.  With this project I attempted to interface the two systems by means of an alternative method that the human mind would not typically think of.  It was my hope that this odd translation would provide a new lens for understanding both systems, as well as shed new light on the human mind’s perception of a system.  In the end I decided to turn the interface into a game that tracked the player’s progress in translating between the two systems of calculus-based kinematics and the RGB color system.

II. How It Works

This game assumes that the player or viewer has a basic understanding of the RGB color system, in addition to a grasp of the calculus-based relationship between position, velocity, acceleration, and jerk (the rate of change of acceleration).  Through 4 clicks of the button (or switch) at the center of the application, the system tracks the absolute value of the velocity, acceleration, and jerk of this interaction.  It does this by assuming a “distance” of 3 units – 1 unit for each click after the first one, which represents the starting line.  It then generates an average velocity (v), average acceleration (a), and average jerk (j) based on the timing between each of the 4 clicks.  These three values are then mapped onto the R, G, and B values respectively.  The algorithm behind the conversion assumes that the user’s 4-click timespan (total time) will be between 0.1 seconds and 30 seconds.

Method for Calculation

(Note: the values calculated for v, a, and j are rough averages due to the fact that there are only 4 data points recorded.  Additionally, this is one method of averaging the data but there are numerous other ways these averages could have been calculated.)

Referenced Variables:  T12 = time between the first and second click, T23 = time between second and third click, T34 = time between third and fourth click; additionally T1 = 0 (first click starts timer), T2 = time of second click referencing the timer that starts with the first click, T2.5 = the time halfway between the 2nd and 3rd click, etc.

Velocity:  This is calculated as the total distance, 3, divided by total time, T14:  x/t

v = 3 / (T12 + T23 + T34)

Acceleration:  This is calculated as the average of the local accelerations between clicks 1 and 3, and clicks 2 and 4.  In other words, the change in velocity from T12 to T23 is averaged with the change in velocity from T23 to T34.  I used this method because in order to calculate an acceleration there needs to be a flux in velocity, and with the recorded data there are actually two fluxes in velocity (at click 2 and click 3).

a = (A123 + A234) / 2

A123 = (V23 – V12) / (T2.5 – T1.5)    and     A234 = (V34 – V23) / (T3.5 – T2.5)

Jerk:  This is calculated as the absolute rate of change of acceleration between A123 and A234:

j = |(A234 – A123) / (T3 – T2)|
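Putting the three formulas together, the calculation can be sketched in standalone C++ as follows (an illustration of the method above, not the game’s actual code; click times are in seconds):

```cpp
#include <cmath>

// Kinematic averages from four click timestamps, following the formulas
// above: each click after the first covers 1 "unit" of distance, so the
// four clicks span three 1-unit intervals.
struct Kinematics { double v, a, j; };

Kinematics fromClicks(double t1, double t2, double t3, double t4) {
    double T12 = t2 - t1, T23 = t3 - t2, T34 = t4 - t3;

    // Local velocities: 1 unit of distance per interval.
    double V12 = 1.0 / T12, V23 = 1.0 / T23, V34 = 1.0 / T34;

    // Interval midpoints, used as the timestamps of the local velocities.
    double T1_5 = (t1 + t2) / 2, T2_5 = (t2 + t3) / 2, T3_5 = (t3 + t4) / 2;

    // Local accelerations around clicks 2 and 3.
    double A123 = (V23 - V12) / (T2_5 - T1_5);
    double A234 = (V34 - V23) / (T3_5 - T2_5);

    Kinematics k;
    k.v = 3.0 / (T12 + T23 + T34);               // total distance / total time
    k.a = (A123 + A234) / 2.0;                   // mean of the local accelerations
    k.j = std::fabs((A234 - A123) / (t3 - t2));  // rate of change of acceleration
    return k;
}
```

For four evenly spaced clicks (constant velocity), a and j both come out to zero, as you would expect.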

Method for Conversion

Once the user has flipped the game’s switch 4 times, each of the absolute values, or magnitudes, of the above calculations (v, a, and j) is respectively mapped onto a 0-255 range for the red (R), green (G), and blue (B) values of a color.  Initially the system mapped the lowest extremes of negative acceleration and negative jerk to values of 0 for G and B respectively.  This was not necessary for velocity-to-R because it was impossible to generate a negative value for velocity.  After some testing, however, I decided to change the conversion so that it used the absolute values of acceleration and jerk to map to the 0-255 ranges of G and B.  My rationale for this change was that the interaction between the two systems is confusing enough, and shouldn’t include variable ranges for a and j, with both negative and positive possibilities, that map onto the G and B variables, which only have positive ranges.  Therefore, as it stands now, acceleration values of 0 to 10 are mapped onto G-values of 0-255.  The original method had acceleration values of -10 to 10 being mapped onto G-values of 0-255.
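The magnitude-to-channel mapping might look like the sketch below.  The 0-10 acceleration range is the one stated above; the velocity and jerk ranges are illustrative placeholders, not the game’s actual constants:

```cpp
#include <algorithm>
#include <cmath>

// Map a magnitude onto 0-255, clamping to the expected input range.
int toChannel(double value, double lo, double hi) {
    double t = (std::fabs(value) - lo) / (hi - lo);
    t = std::max(0.0, std::min(1.0, t));
    return int(t * 255.0 + 0.5);
}

struct RGB { int r, g, b; };

// v, a, j magnitudes -> R, G, B.  The velocity range (0-30) and jerk
// range (0-20) here are placeholder assumptions for illustration.
RGB kinematicsToColor(double v, double a, double j) {
    return { toChannel(v, 0.0, 30.0),
             toChannel(a, 0.0, 10.0),
             toChannel(j, 0.0, 20.0) };
}
```

Clamping keeps an out-of-range flurry of clicks from wrapping around the color space; it just saturates the channel at 255.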

III. Process


Early Sketches and Calculations

These sketches and calculations show some of my thought processes in the development of my system to generate the average values for v, a, and j.  This was the first phase of my pseudocode before jumping into Openframeworks.

This is the first sketch for the layout of the game’s interface.


The final aesthetic was done in Photoshop and the back-end coding was done with openFrameworks.  Below is a screenshot and sample code of the variables I used:

IV. Conclusion

The game is not too difficult to pick up if you have a good understanding of calculus and/or kinematics, but it is very difficult to master due to the limited number of player inputs.  I found it hardest to produce a deep green color.  This entailed producing a high but constant acceleration without a high velocity or jerk.

Moving forward, I would like to get this game up on a website so that more people can test it.  Additionally, I would like to make it social by adding a high-scores database that all players can view and compete on.  In terms of practical applications for this type of conversion, a similar system might be beneficial for data visualization in industries where calculating jerk is important.  Examples where people consider jerk include boxing (the higher the jerk, the more devastating the punch), car accidents, etc.

Orbitorbs v2.1 – Solar System Simulator

Project Summary

This project is an extension of Orbitorbs v1.0.  I translated the code that I wrote in Processing into openFrameworks, a C++-based creative coding toolkit.  I added features that enable more user control over the planetary system, including:

  • The ability to pause the solar system simulation and edit planet parameters
  • A more intuitive interaction for editing planet parameters
  • The ability to turn on and off a function that links the computer microphone volume input to the strength of the gravitational constant dictating the force between the planets (activate by pressing the ‘e’ key and deactivate by pressing the ‘s’ key). The higher the volume, the higher the g-constant (directly proportional).

The algorithm uses 2-dimensional matrices to store the x and y parameters of the various planets and it implements Newton’s Law of Universal Gravitation:
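A minimal, standalone sketch of how such a pairwise gravity step could look in C++, using F = G·m₁·m₂ / r².  This illustrates the approach rather than reproducing the project’s code; the softening term `eps` is my own addition to avoid a blow-up when two planets overlap:

```cpp
#include <cmath>
#include <vector>

// One 2D body in the simulation.
struct Planet { double x, y, vx, vy, mass; };

// One Euler integration step of Newton's law of universal gravitation,
// applied pairwise between all planets.  `G` is the gravitational
// constant (in the project, scaled by microphone volume when enabled).
void stepGravity(std::vector<Planet>& ps, double G, double dt,
                 double eps = 1e-3) {
    size_t n = ps.size();
    std::vector<double> ax(n, 0.0), ay(n, 0.0);
    for (size_t i = 0; i < n; ++i) {
        for (size_t j = i + 1; j < n; ++j) {
            double dx = ps[j].x - ps[i].x, dy = ps[j].y - ps[i].y;
            double r2 = dx * dx + dy * dy + eps;  // softened distance^2
            double r  = std::sqrt(r2);
            double f  = G / r2;  // force divided by (m_i * m_j)
            // a_i = F / m_i = G * m_j / r^2, directed along the separation.
            ax[i] += f * ps[j].mass * dx / r;
            ay[i] += f * ps[j].mass * dy / r;
            ax[j] -= f * ps[i].mass * dx / r;
            ay[j] -= f * ps[i].mass * dy / r;
        }
    }
    for (size_t i = 0; i < n; ++i) {
        ps[i].vx += ax[i] * dt;   ps[i].vy += ay[i] * dt;
        ps[i].x  += ps[i].vx * dt; ps[i].y  += ps[i].vy * dt;
    }
}
```

Because every force is applied equally and oppositely to each pair, total momentum is conserved step to step, which keeps the simulated system from drifting.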


This project has the potential to be adapted into a new type of learning tool, allowing for a more fun and interactive method for teaching basic principles of physics including angular acceleration, gravitation, ideas of mass and density, and more.

Orbitorbs v2.1 (openframeworks) from Conor Russomanno on Vimeo.

The Code

If you want to play with this application or examine the code, please feel free to grab it from my github.

Orbitorbs v1.0 (Planetary Physics Simulation)


I completed this piece during Parsons DT Bootcamp 2011, prior to beginning my 1st year at grad school.  I built it using Processing, a Java-based programming environment.


CyberGRID – NSF-Funded Virtual Collaboration Environment


CyberGRID began as an independent study for Professor John E. Taylor of Columbia University’s civil engineering department.  When I approached Professor Taylor, he was teaching a yearly class that involved collaborative design projects with 4 other universities around the world.  This collaboration was facilitated via online classrooms in Second Life, a well-known web-based 3D environment for social, commercial, and academic meetups.  With Second Life, Taylor’s class virtually met and worked with other students and professors from the Indian Institute of Technology Madras, the Helsinki University of Technology (HUT), the University of Twente in the Netherlands, and the University of Washington (Seattle).

Professor Taylor initially asked a good friend of mine, Daniel Lasry, and me to make alterations to the virtual islands that he was already leasing within the Second Life environment, as well as attempt to write plug-ins for the Second Life interface to provide his students with customized interactivity.  After researching the capabilities of Second Life development, Daniel and I decided that CyberGRID’s customizability was limited by licensing restrictions that Second Life had in place.  We convinced Professor Taylor to allow us to start from scratch and develop a comprehensive, fully customized virtual learning environment using the Unity game development platform, with Maya and Photoshop for asset creation.  The first phase of the project took the form of an independent study in which we familiarized ourselves with the Unity software and began developing a new aesthetic, a new interface, and new functionality based on feedback from users of the previous version of CyberGRID.

Phase 1 – Early Concepts and Learning (Independent Study)

During this phase of the project, the other designers and I familiarized ourselves with the Unity development environment.  I had to learn how to optimize 3D models for game design, ensuring that the assets were all polygons and that the faces were all pointed in the correct direction.  Below is a scrapped concept render of part of the CyberGRID environment that I created during this early phase of the project.

Early CyberGRID Environment Concept Render

Phase 2 – Beta Development (NSF Funding)

After our team excelled during our independent study, we were hired to continue working on the development of CyberGRID over the summer of 2010.  This is when the project really took off.  I was responsible for designing and creating an extensive virtual environment; creating/locating a collection of 3D virtual avatars for the future users of the application (see my 3D Character Design post for a more thorough description of this process); animating and texturing the characters and environment; and designing some of the UI.

Below are some early sketches of the environment design:

Early Environment Design
CyberGRID Environment Concept Art

Here are some screenshots of the UI and game environment:

CyberGRID Login Interface
CyberGRID Environment
Virtual Meeting Room w/ Conference Table & Screen-sharing

Phase 3 – Refinement and Testing

As we progressed into the following school year, we stayed on board and expanded the virtual environment and its features.  The following elements were added:

  • 3D sound
  • Personal screen-sharing on a joint virtual screen
  • Avatar customization and animations
  • An explorable Manhattan
  • Annotation of shared documents

Below is a render of Manhattan (model from Google’s 3D Warehouse w/ my textures) and the Manhattan Bridge, which I modeled from scratch.

Here is a screenshot of users interacting with a virtual monitor that is sharing a real-time render of one user working in Autodesk Maya – just one of the many powerful system features.
Virtual Screen-Sharing
Conclusion – CyberGRID Today

Currently, the development and use of CyberGRID is being pushed forward by Professor John E. Taylor of Virginia Tech’s Engineering Department as he continues his research in virtual learning environments and the psychology of human-computer interaction.

Zombie King (2D Flixel Game)

CLICK HERE to play the game!


I designed this top-down computer game, Zombie King, with a few of my friends while at Columbia.  I worked as the team’s primary concept artist and asset designer.  We used Flixel for our game engine, and I used a pencil, paper, and Photoshop for the asset design.  The mechanics behind the game are derived from a narrative where you are a zombie and you must lead a horde of fellow zombies in a war against the humans.

My Work Involved:

  • Character designs and animation sprites
  • Level Design
  • Concept Art
  • Cover Art
  • Game Mechanics

Game Art


CLICK HERE to play the game!

3D Character Design



This model was initially designed to be part of a custom character database for my CyberGRID project.  Developing an entire custom character database at this level of detail proved to be a very time-consuming endeavor.  Thus, the project turned out to be a terrific learning experience in 3D character modeling and texturing, but we ended up purchasing from a line of much less attractive models that were pre-rigged.  Note that this post is somewhat of a walk-through tutorial, but it assumes that the reader has at least a basic understanding of the Autodesk Maya software.


I used these tutorials as assistance for modeling and texturing.  They go into further detail about the steps involved in both processes:

3D Character Modeling:

Texturing a 3D Character:


Step 1) I started by drawing a front-view symmetrical sketch of the male anatomy, as well as a profile sketch that matched the scale of the front-view drawing (below).  The rear-view drawing was used as a reference once I needed to texture the model.

Step 2) In Maya, I created simple cylindrical polygons with 8 axis subdivisions (varying numbers of spans depending on the part of the body) and scaled the vertices of the polygons to match the various major limbs (arm and leg) of one half of the body.  I only modeled half of the body so that I would be able to mirror what I had done to create a symmetrical mesh.  I used the front and side sketches above as Image Planes in the front and side windows respectively.  For the torso I used half of a cylinder (deleted the faces of the other half).  Once the arm, leg and half-torso were finished I sewed them together, combining the meshes using the Merge Vertex Tool.

Step 3) I then used the same technique to model the fingers, thumb, and palm of the hand on the side of the body that I had already modeled.  After it was finished I combined the hand mesh with the rest of the body and then used the Merge Vertex Tool to close the gaps between the meshes.

Step 4) After this I undertook a similar process for the foot but didn’t put as much detail into it under the assumption that the foot would go into a shoe once the character was rigged and animated.  I then duplicated the half of the body (without the head) and mirrored it to produce a symmetrical body:

Step 5) I then used a similar process to create the head of the model.  This process was more complex than the body parts due to the topographical abnormalities of a human head (nose, mouth, eyes, ears).  It required more frequent use of the Split Polygon and Insert Edge Loop tools.

Step 6) Once the model was complete, I used the Cut UVs tool to separate the UV map of the model into flat sections that would be easier to illustrate in Photoshop.  To do this I tried to make all of the UV “seams” in less visible areas (i.e. under the arms, sides of the torso, insides of the legs).  A good technique is to make the seams along heavy muscle contours – areas where it looks OK to have an abrupt change in color.  I then exported the UV map and used it as a reference (overlay) to digitally produce the texture in Photoshop.  This process takes a good amount of tweaking because of the counterintuitive nature of drawing a 3D picture on a 2D surface.

Step 7) I then found a generic eyeball texture from the Internet and mapped it onto two spheres within the head.  In addition, I created a mesh for the hair and used a generic hair texture that I also found on the web.  I then rendered the model using the built-in Mental Ray renderer in Maya.  Most of the rendered images use a standard directional light; the lighter render uses a Physical Sun and Sky (Indirect Lighting).  Here are some of the final renders:

What’s Next) Next, I want to rig and animate the character, something that I have some experience doing (see my Emperor’s Champion post).  I also want to finally give the guy some clothes!  After that, I plan to make a female counterpart to this model and then sell both of them on a 3D model database like Turbosquid.
