
Projects

EMG-Based Assistive Communication

It's been a dream of mine to create my own brain-computer interface for quite some time. After seeing firsthand the impacts of a stroke and late-stage ALS on motor function and communication, I decided I wanted to figure out a low-cost, simple way for patients to communicate with their families again.


My project revolved around creating a virtual augmentative and alternative communication (AAC) board that the user interacts with through an EMG biofeedback device whose electrodes rest on their temples. I then created a program that decodes the burst of muscle activity recorded at the temples when the user lightly taps their teeth together, allowing them to make a selection from a grid of communication options, each representing a specific need or request.
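
To give a sense of how the selection works, here is a rough Python sketch of the tap detector, assuming a scanning-style board where options highlight one at a time and a detected tap selects the current one. The sampling rate, threshold, and function names are illustrative stand-ins, not the project's actual code.

# Sketch of the tap-detection idea: a window of EMG samples is rectified and
# smoothed into an envelope, and a burst well above the resting baseline is
# treated as one tooth tap. FS, THRESHOLD, and on_new_window are illustrative.

import numpy as np

FS = 250                 # sampling rate in Hz (assumed)
WINDOW = int(0.1 * FS)   # 100 ms analysis window
THRESHOLD = 40.0         # envelope threshold in microvolts (tuned per user)

def envelope(samples: np.ndarray) -> float:
    """Root-mean-square of a rectified EMG window."""
    return float(np.sqrt(np.mean(np.square(samples))))

def detect_tap(window: np.ndarray, baseline: float) -> bool:
    """A tap is an envelope burst well above the resting baseline."""
    return envelope(window) > max(THRESHOLD, 3.0 * baseline)

# Example: options highlight in turn; a detected tap selects the current one.
options = ["I'm Hungry", "I'm Thirsty", "Please turn on the TV"]
highlighted = 0

def on_new_window(window: np.ndarray, baseline: float) -> None:
    global highlighted
    if detect_tap(window, baseline):
        print("Selected:", options[highlighted])
    else:
        highlighted = (highlighted + 1) % len(options)  # advance the highlight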

S.P.E.A.C

Sensory Personalized EEG-based Assistive Communication

S.P.E.A.C, or Sensory Personalized EEG-based Assistive Communication, is a project that builds on my EMG assistive communication interface.

Thanks to a grant from FAU Wave, this project will utilize raw electroencephalographic (EEG) data, the brain's electrical activity, from a Muse biofeedback headset, so that a user can communicate with caretakers without needing to speak or move.
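
As a rough illustration of the acquisition step, here is a minimal Python sketch that reads raw Muse EEG samples, assuming the headset is already streaming over Lab Streaming Layer (for example, via the open-source muselsl tool); the loop length is illustrative.

# Minimal acquisition sketch: pull raw EEG samples from a Muse headset that is
# already streaming over Lab Streaming Layer (e.g., started with `muselsl stream`).
# Requires the pylsl package; each sample has one value per EEG channel.

from pylsl import StreamInlet, resolve_byprop

streams = resolve_byprop('type', 'EEG', timeout=10)
if not streams:
    raise RuntimeError("No EEG stream found; is the Muse streaming over LSL?")

inlet = StreamInlet(streams[0])
for _ in range(256):                  # grab ~1 s at the Muse's 256 Hz rate
    sample, timestamp = inlet.pull_sample()
    print(timestamp, sample)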


The S.P.E.A.C interface will be an interactive, locally run web application. Through user-friendly visuals, the board will present a grid of communication options, each representing a specific need or request, such as 'I'm Hungry', 'I'm Thirsty', or 'Please turn on the TV'. To make a selection, the user simply has to imagine a particular action. A program will then decode the event-related (de)synchronization that the imagined action produces in the EEG, and read the intended selection aloud.
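
Here is a rough Python sketch of that decoding idea, assuming a buffered window of single-channel EEG at the Muse's 256 Hz rate: it estimates mu-band (8-13 Hz) power with Welch's method, treats a drop below a resting baseline (event-related desynchronization) as the trigger, and reads the choice aloud with pyttsx3. The 0.7 ratio threshold and the speech library are illustrative choices, not the final design.

# Decoding sketch: detect event-related desynchronization (a drop in mu-band
# power during imagined movement) and read the selected option aloud.
# Assumes `window` holds ~2 s of single-channel EEG at 256 Hz.

import numpy as np
from scipy.signal import welch
import pyttsx3

FS = 256  # Muse EEG sampling rate in Hz

def mu_band_power(window: np.ndarray) -> float:
    """Average power in the 8-13 Hz (mu) band via Welch's method."""
    freqs, psd = welch(window, fs=FS, nperseg=FS)
    band = (freqs >= 8) & (freqs <= 13)
    return float(np.trapz(psd[band], freqs[band]))

def imagined_action_detected(window: np.ndarray, baseline_power: float) -> bool:
    """ERD: mu power falls well below the resting baseline."""
    return mu_band_power(window) < 0.7 * baseline_power

def speak(text: str) -> None:
    engine = pyttsx3.init()
    engine.say(text)
    engine.runAndWait()

# Example: trigger the currently highlighted option when imagery is detected.
# if imagined_action_detected(window, baseline_power):
#     speak("I'm Thirsty")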

S.P.E.A.C’s goal is inclusion and accessibility: I’d like to patent an interface that is as accessible, affordable, and accurate as possible.

DeepEthogram Integration

Currently, most neurophysiology labs that run behavioral experiments must score their video data by hand, a process that can take hours depending on the number of videos.

DeepEthogram is a machine-learning pipeline that uses convolutional neural networks to estimate motion, extract features from both the motion and the raw images, and classify those features into behaviors, scoring data in minutes with minimal training data.

I am in the process of integrating and training the DeepEthogram model and comparing its efficiency against alternatives so that our lab can keep pace with the rapid advances in AI. By benchmarking it against manual scoring, my goal is to determine whether it is a faster, comparably accurate way to track behavior, saving our lab considerable time and improving overall efficiency.
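
As an illustration of that comparison step, here is a minimal Python sketch that scores frame-by-frame predictions against hand labels per behavior, assuming both are exported as binary CSVs with one column per behavior; the file names and layout are hypothetical.

# Evaluation sketch: compare DeepEthogram-style frame-by-frame predictions
# against hand-scored labels, one binary column per behavior.
# File names and layout ('predictions.csv', 'hand_labels.csv') are assumed.

import pandas as pd
from sklearn.metrics import accuracy_score, f1_score

pred = pd.read_csv("predictions.csv")    # rows = video frames
truth = pd.read_csv("hand_labels.csv")   # same frames, hand-scored

for behavior in truth.columns:
    acc = accuracy_score(truth[behavior], pred[behavior])
    f1 = f1_score(truth[behavior], pred[behavior])
    print(f"{behavior}: accuracy={acc:.3f}, F1={f1:.3f}")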
