
C World

Conor's Blog/Portfolio


3D printed EEG electrodes!

I spent the day messing around with 1.75mm conductive ABS BuMat filament, trying to create a 3D-printable EEG electrode. The long-term goal is to design an easily 3D-printable EEG electrode that nests into the OpenBCI “Spiderclaw” 3D printed EEG headset.

I decided to try to make the electrode snap into the standard “snappy electrode cable” that you see with some industry-standard EMG/EKG/EEG electrodes, like the one seen in the picture below.

IMG_3583

After some trial and error with Autodesk Maya and a MakerBot Replicator 1, I managed to print a few different designs that snap nicely into the cable seen above. At first, Joel (my fellow OpenBCI co-founder) and I were worried that the snappy nub would break off, but, to our pleasant surprise, it was strong enough to survive repeated use. Though the jury is still out, since we’ve only been snapping it repeatedly for one day.

Here you can see a screenshot of the latest prototype design in Maya. I added a very subtle concave curvature to the “teeth” on the underside of the electrode so that the electrode will hopefully make better contact with the scalp.

Screen Shot 2015-02-16 at 6.17.48 PM

Here is a photo of a few different variations of the electrode that were actually printed over the course of the day.

IMG_3581

FullSizeRender (3)

I’d like to note that I printed each electrode upside-down, with the pointy teeth facing upward on the vertical (Z) axis, with a raft and supports, as seen in the picture below.

Screen Shot 2015-02-16 at 6.35.01 PM

I tested each of the electrodes with the OpenBCI board, trying to detect basic EMG/EEG signals from the O1/O2 positions on the back of the scalp, over the occipital lobe. I tried each electrode with no paste applied (simply conductive filament on skin), and then with a small amount of Ten20 paste applied to the teeth. To my pleasant surprise, even without any conductive Ten20 paste, I was able to detect small EMG artifacts by gritting my teeth, and very small artifacts from alpha EEG brain waves by closing my eyes. Upon applying the Ten20 paste, the signal was as good as (if not better than) the signal recorded using the standard gold cup electrodes that come with the OpenBCI Electrode Starter Kit! Pretty awesome!
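If you want to sanity-check a recording like this offline, a simple approach is to compare spectral power in the alpha band (8–12 Hz) against a control band. Here’s a minimal sketch with numpy; the 250 Hz sample rate matches the OpenBCI board, but the “EEG” here is simulated stand-in data, not a real recording:

```python
import numpy as np

def band_power(signal, fs, f_lo, f_hi):
    """Return mean spectral power of `signal` between f_lo and f_hi (Hz)."""
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    mask = (freqs >= f_lo) & (freqs <= f_hi)
    return spectrum[mask].mean()

# Simulated 4 s of "eyes-closed" EEG: a 10 Hz alpha rhythm buried in noise.
fs = 250  # Hz, the OpenBCI board's sample rate
t = np.arange(0, 4, 1.0 / fs)
rng = np.random.default_rng(0)
eeg = 20e-6 * np.sin(2 * np.pi * 10 * t) + 5e-6 * rng.standard_normal(t.size)

alpha = band_power(eeg, fs, 8, 12)   # alpha band
delta = band_power(eeg, fs, 1, 4)    # control band
print(alpha > delta)  # alpha dominates when the 10 Hz rhythm is present
```

With a real session you’d run the same comparison on eyes-open vs. eyes-closed segments and look for the alpha power jump.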

Here’s a screenshot of some very faint alpha (~10Hz) that I was able to pick up without any Ten20 paste applied to the electrode, with an electrode placed over the O2 node of the 10-20 system!

OpenBCI-2015-02-16_14-39-10

And here’s a screenshot of some very vibrant alpha (~10Hz) that I was able to detect with Ten20 paste applied to the 3D-printed electrode!

OpenBCI-2015-02-16_17-27-57

The signal looks pretty good. Joel may begin messing around with an active amplification hardware design that works with any 3D-printed snappy electrode design.

In case you’re interested in printing your own, here’s a link to the github repo with the latest design of the electrode!

More on this coming soon!

3D printed EEG Headset (aka “Spiderclaw” V1)

The following images are a series of sketches, screenshots, and photographs documenting my design process in the creation of the OpenBCI Spiderclaw (version 1). For additional information on the further development of the Spiderclaw, refer to the OpenBCI Docs Headware section and my post on Spiderclaw (version 2). If you want to download the .STL files to print them yourself or work with the Maya file, you can get them from the OpenBCI Spiderclaw Github repo. Also, if 3D printed EEG equipment excites you, check out my post on 3D printable EEG electrodes!

10-20 System (Scientific Design Constraint)

Concept Sketches

3D Modeling (in AutoDesk Maya)

3D Printing & Assembly

Future Plans

Headset_Interface

audioBuzzers – Audio Visualizer (Unity)

Summary

This is a Unity-built audio visualizer of the song Major Tom, covered by the Shiny Toy Guns.
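Under the hood, a visualizer like this samples the playing audio each frame, takes an FFT, and maps grouped bin magnitudes to bar heights (in Unity, AudioSource.GetSpectrumData hands you the FFT directly). Here’s a rough sketch of that mapping in Python with numpy, standing in for the Unity version:

```python
import numpy as np

def spectrum_bars(frame, n_bars=16):
    """Collapse one audio frame into n_bars bar heights, normalized to 0..1."""
    # Window the frame to reduce spectral leakage, then take magnitudes.
    mags = np.abs(np.fft.rfft(frame * np.hanning(len(frame))))
    # Group FFT bins into equal-width bands and average each band.
    bands = np.array_split(mags[1:], n_bars)  # drop the DC bin
    heights = np.array([b.mean() for b in bands])
    peak = heights.max()
    return heights / peak if peak > 0 else heights

# One 1024-sample frame of a 440 Hz tone at 44.1 kHz.
fs = 44100
t = np.arange(1024) / fs
bars = spectrum_bars(np.sin(2 * np.pi * 440 * t))
print(len(bars), bars.argmax())  # the low-frequency bar lights up
```

In the Unity scene, each value in `bars` would drive the Y-scale of one buzzer object per frame.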

Project Files

The Web Player: http://a.parsons.edu/~russc171/UnityHW/AudioBuzzers_2/AudioBuzzers_2.html

The Unity Project: http://a.parsons.edu/~russc171/UnityHW/hw_wk5_audioBuzzers.zip

Screenshot

Screen Shot 2013-03-06 at 5.11.50 PM

Demo Reel

DEMO REEL

DEMO REEL BREAKDOWN

DRB

‘Wetlands’ Architectural Renders

Project Summary

I spent the past 6 weeks working with the amazing and progressive artist Mary Mattingly on her project titled Wetlands. Most of her work explores the complex relationship between people and the Earth. Wetlands, currently in the design phase, is a self-sustained living environment that floats in the rivers outside of Philadelphia. The structure will be a low-cost floating barge with various components that explore DIY techniques of sustainability.

My Role

I worked with 2 other artists to create an architectural design for the structure that optimized the functional and design constraints. I helped with the concept drawings and took the lead on creating 3D renders of the design.

Renders

Project Presentation PDF: wetlands

CyberGRID – NSF-Funded Virtual Collaboration Environment

Introduction

CyberGRID began as an independent study for Professor John E. Taylor of Columbia University’s civil engineering department. When I approached Professor Taylor, he was teaching a yearly class that did collaborative design projects with four other universities around the world. This collaboration was facilitated via online classrooms in Second Life, a well-known web-based 3D environment for social, commercial, and academic meetups. With Second Life, Taylor’s class virtually met and worked with students and professors from the Indian Institute of Technology Madras, the Helsinki University of Technology (HUT), the University of Twente in the Netherlands, and the University of Washington (Seattle).

Professor Taylor initially asked a good friend of mine, Daniel Lasry, and me to make alterations to the virtual islands he was already leasing within Second Life, as well as to attempt to write plug-ins for the Second Life interface that would give his students customized interactivity. After researching the capabilities of Second Life development, Daniel and I decided that CyberGRID’s customizability was limited by the licensing restrictions Second Life had in place. We convinced Professor Taylor to let us start from scratch and develop a comprehensive, fully customized virtual learning environment using the Unity game development platform, with Maya and Photoshop for asset creation. The first phase of the project took the form of an independent study in which we familiarized ourselves with the Unity software and began developing a new aesthetic, a new interface, and new functionality based on feedback from users of the previous version of CyberGRID.

Phase 1 – Early Concepts and Learning (Independent Study)

During this phase of the project, the other designers and I familiarized ourselves with the Unity development environment. I had to learn how to optimize 3D models for game design, ensuring that the meshes were clean polygons and that all of the faces pointed in the correct direction. Below is a scrapped concept render of part of the CyberGRID environment that I created during this early phase of the project.

Early CyberGRID Environment Concept Render

Phase 2 – Beta Development (NSF Funding)

After our team excelled during the independent study, we were hired to continue developing CyberGRID over the summer of 2010. This is when the project really took off. I was responsible for designing and creating an extensive virtual environment; creating/locating a collection of 3D virtual avatars for the future users of the application (see my 3D Character Design post for a more thorough description of this process); animating and texturing the characters and environment; and designing some of the UI.

These sketches are some early sketches of the environment design:

Early Environment Design
CyberGRID Environment Concept Art

Here are some screenshots of the UI and game environment:

CyberGRID Login Interface
CyberGRID Environment
Virtual Meeting Room w/ Conference Table & Screen-sharing

Phase 3 – Refinement and Testing

As we progressed into the following school year, we stayed on board and expanded the virtual environment and its features.  The following elements were added:

  • 3D sound
  • Personal screen-sharing on a joint virtual screen
  • Avatar customization and animations
  • An explorable Manhattan
  • Annotation of shared documents
Below is a render of Manhattan (model from Google’s 3D Warehouse w/ my textures) and the Manhattan Bridge, which I modeled from scratch.

Manhattan

Here is a screenshot of users interacting with a virtual monitor that is sharing a real-time render of one user who is working in Autodesk Maya – just one of the many powerful system features.

Virtual Screen-Sharing
Conclusion – CyberGRID Today

Currently, the development and use of CyberGRID is being pushed forward by Professor John E. Taylor of Virginia Tech’s engineering department as he continues his research in virtual learning environments and the psychology of human–computer interaction.

Columbia Manhattanville Bowtie Building Excavation

For this project, I worked with 5 of my classmates in Columbia’s Civil Engineering department to develop the excavation plan and foundation design for the new Bowtie Building being built at 125th and Broadway in Columbia University’s Manhattanville expansion.  Our civil class was separated into 5 groups which worked on separate aspects of the building.  The other groups were Concrete Design, Steel Design, Project Management, and Green Building Design.  For my group, I contributed to the structural design of the foundation in addition to modeling and rendering a sequence of images that visually detail the process.  Note: this is not actually the building that is being built at the site; our senior class proposed this as our final senior project to our professors.

Here is the sequence of images that demonstrate our proposed excavation process and our final foundation design:


The Eye of Big Brother

This 3D rendering (Maya, Photoshop) was an assignment that I completed for an art class at Columbia.  It is a commentary on the rapidly evolving technologies of the book industry as well as a tribute to the book 1984, one of my favorite childhood reads.


3D Character Design

Man_1

Summary:

This model was initially designed to be part of a custom character database for my CyberGRID project. Developing an entire custom character database at this level of detail proved to be a very time-consuming endeavor.  Thus, the project turned out to be a terrific learning experience in 3D character modeling and texturing, but we ended up purchasing (from turbosquid.com) a line of much less attractive models that were pre-rigged.  Note that this post is somewhat of a walk-through tutorial, but it assumes that the reader has at least a basic understanding of the Autodesk Maya software.

Resources:

I used these tutorials for assistance with modeling and texturing.  They go into further detail about the steps involved in both processes:

3D Character Modeling: http://www.creativecrash.com/tutorials/real-time-character-modeling-tutorial#tabs

Texturing a 3D Character:  http://www.3dtotal.com/index_tutorial_detailed.php?id=825#.TzQYQExWrDY

Process:

Step 1) I started by drawing a front-view symmetrical sketch of the male anatomy, as well as a profile sketch that matched the scale of the front-view drawing (below).  The rear-view drawing was used as a reference once I needed to texture the model.

Step 2) In Maya, I created simple cylindrical polygons with 8 axis subdivisions (and varying numbers of spans depending on the part of the body) and scaled the vertices of the polygons to match the major limbs (arm and leg) of one half of the body.  I only modeled half of the body so that I would be able to mirror what I had done to create a symmetrical mesh.  I used the front and side sketches above as Image Planes in the front and side windows, respectively.  For the torso I used half of a cylinder (deleting the faces of the other half).  Once the arm, leg, and half-torso were finished, I sewed them together, combining the meshes using the Merge Vertex Tool.

Step 3) I then used the same technique to model the fingers, thumb, and palm of the hand on the side of the body that I had already modeled.  After it was finished I combined the hand mesh with the rest of the body and then used the Merge Vertex Tool to close the gaps between the meshes.

Step 4) After this I undertook a similar process for the foot but didn’t put as much detail into it under the assumption that the foot would go into a shoe once the character was rigged and animated.  I then duplicated the half of the body (without the head) and mirrored it to produce a symmetrical body:
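The mirroring in Step 4 boils down to duplicating every vertex with one coordinate negated, sharing the vertices that lie on the mirror plane (the seam), and reversing each mirrored face’s winding so its normal still points outward. This toy sketch in Python illustrates the idea; it is not Maya’s API, just the geometry underneath the mirror operation:

```python
import numpy as np

def mirror_half_mesh(verts, faces, axis=0):
    """Mirror a half-mesh across the plane axis=0 and weld it to the original.

    verts: (N, 3) array; faces: list of vertex-index tuples.
    Vertices lying on the mirror plane are shared, not duplicated.
    """
    on_plane = np.isclose(verts[:, axis], 0.0)
    remap = {}                      # old index -> mirrored index
    new_verts = list(map(tuple, verts))
    for i, v in enumerate(verts):
        if on_plane[i]:
            remap[i] = i            # seam vertex: reuse it
        else:
            m = v.copy()
            m[axis] = -m[axis]      # reflect across the plane
            remap[i] = len(new_verts)
            new_verts.append(tuple(m))
    # Reverse winding so the mirrored faces' normals point outward.
    new_faces = faces + [tuple(remap[i] for i in reversed(f)) for f in faces]
    return np.array(new_verts), new_faces

# A single triangle with one edge on the mirror plane (x = 0).
verts = np.array([[0.0, 0.0, 0.0], [0.0, 1.0, 0.0], [1.0, 0.5, 0.0]])
faces = [(0, 1, 2)]
v2, f2 = mirror_half_mesh(verts, faces)
print(len(v2), len(f2))  # 4 vertices (seam shared), 2 faces
```

Maya’s Mirror/Merge Vertex workflow does the seam-welding step for you; doing it by hand like this makes it clear why unmerged seam vertices show up as a visible shading split down the middle of the model.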

Step 5) I then used a similar process to create the head of the model.  This process was more complex than the body parts due to the topographical abnormalities of a human head (nose, mouth, eyes, ears).  It required more frequent use of the Split Polygon and Insert Edge Loop tools.

Step 6) Once the model was complete, I used the Cut UVs tool to separate the UV map of the model into flat sections that would be easier to illustrate in Photoshop.  To do this I tried to place all of the UV “seams” in less visible areas (i.e. under the arms, sides of the torso, insides of the legs).  A good technique is to make the seams along heavy muscle contours – areas where it looks OK to have an abrupt change in color.  I then exported the UV map and used it as a reference (overlay) to digitally produce the texture in Photoshop.  This process takes a good amount of tweaking because of the counterintuitive nature of drawing a 3D picture on a 2D surface.

Step 7) I then found a generic eyeball texture on the Internet and mapped it onto two spheres within the head.  In addition, I created a mesh for the hair and used a generic hair texture that I also found on the web.  I then rendered the model using the built-in Mental Ray renderer in Maya.  Most of the rendered images use a standard directional light; the lighter render uses a Physical Sun and Sky (Indirect Lighting).  Here are some of the final renders:

What’s Next) Next, I want to rig and animate the character, something that I have some experience doing (see my Emperor’s Champion post).  I also want to finally give the guy some clothes!  After that, I plan to make a female counterpart to this model and then sell both of them on a 3D model database like Turbosquid.
