Projects
EMG-Based Assistive Communication
It's been a dream of mine to build my own brain-computer interface for quite some time. After seeing firsthand the impacts of a stroke and of late-stage ALS on motor function and communication, I decided to find a low-cost, simple way for patients to communicate with their families again.
My project centered on a virtual augmentative and alternative communication (AAC) board that the user interacts with through an EMG biofeedback device connected to their temples. I then wrote a program that decodes the burst of motor-neuron electrical activity produced when the user lightly taps their teeth together, allowing them to make a selection from a grid of communication options, each representing a specific need or request.
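As a rough sketch of the tap-detection idea, a sliding-window RMS detector over the rectified EMG stream is enough to flag each tooth tap. The sample rate, threshold multiplier, and simulated signal below are all hypothetical placeholders, not the project's actual parameters:

```python
import numpy as np

FS = 1000            # hypothetical sample rate (Hz)
WINDOW = FS // 10    # 100 ms analysis window
K = 5.0              # hypothetical threshold: K x resting baseline RMS

def window_rms(x: np.ndarray) -> float:
    """Root-mean-square amplitude of one analysis window."""
    return float(np.sqrt(np.mean(np.square(x))))

def detect_taps(signal: np.ndarray, baseline_rms: float) -> list[int]:
    """Return indices of windows whose RMS exceeds the tap threshold."""
    taps = []
    for i in range(0, len(signal) - WINDOW, WINDOW):
        if window_rms(signal[i:i + WINDOW]) > K * baseline_rms:
            taps.append(i // WINDOW)
    return taps

# Simulated demo: quiet baseline with one burst standing in for a tooth tap
rng = np.random.default_rng(0)
baseline = rng.normal(0, 0.05, 2 * FS)
burst = rng.normal(0, 1.0, WINDOW)
signal = np.abs(np.concatenate([baseline[:FS], burst, baseline[FS:]]))
print(detect_taps(signal, window_rms(np.abs(baseline))))
```

Each detected tap can then confirm whichever cell of the AAC grid is currently highlighted.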
S.P.E.A.C
Sensory Personalized EEG-based Assistive Communication
S.P.E.A.C, or Sensory Personalized EEG-based Assistive Communication, is a project that builds on my EMG assistive communication interface.
S.P.E.A.C translates brain signals into real-time communication, enabling users to express themselves without physical effort. By providing a lifeline for those with communication difficulties, it enhances social inclusion and emotional well-being, fostering independence and improving quality of life.
Designed for individuals with ALS, stroke survivors, and others with speech impairments, the device offers a non-invasive, affordable alternative to existing communication aids. It enables real-time communication without requiring physical movement or invasive procedures, setting it apart from existing options and opening the door for thousands of new patients.
I am honored to have placed first in the FAU Wave Competition and to have received the Eric H. Shaw Florida Atlantic Excellence in Innovation Award for the creation and development of S.P.E.A.C.
DeepEthogram Integration
Currently, most neurophysiology labs that study behavior must hand-score their video data, a process that can take hours depending on the number of videos.
DeepEthogram is a machine-learning pipeline that uses convolutional neural networks to compute motion from video, extract features from both the motion and the raw frames, and classify those features into behaviors, scoring data in minutes with minimal training data.
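While the real work runs through DeepEthogram's own tooling, the general shape of the pipeline can be illustrated with a toy sketch. This is not DeepEthogram's actual API; the behavior labels, video path, and tiny network below are hypothetical placeholders for the idea of motion computation followed by per-frame classification:

```python
import cv2
import numpy as np
import torch
import torch.nn as nn

BEHAVIORS = ["rest", "groom", "walk"]  # hypothetical ethogram labels

class TinyFrameClassifier(nn.Module):
    """Toy CNN mapping a stacked (image + flow) tensor to behavior logits."""
    def __init__(self, in_channels: int = 3, n_classes: int = len(BEHAVIORS)):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_channels, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, n_classes),
        )

    def forward(self, x):
        return self.net(x)

def score_video(path: str) -> list[str]:
    """Label every frame of a video: motion -> features -> behavior."""
    cap = cv2.VideoCapture(path)
    model = TinyFrameClassifier().eval()
    ok, prev = cap.read()
    prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
    labels = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        # Dense optical flow gives a per-pixel (dx, dy) motion field
        flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)
        # Stack the grayscale frame with the two flow channels as model input
        x = np.stack([gray / 255.0, flow[..., 0], flow[..., 1]])
        x = torch.from_numpy(x).float().unsqueeze(0)
        with torch.no_grad():
            labels.append(BEHAVIORS[model(x).argmax(dim=1).item()])
        prev_gray = gray
    cap.release()
    return labels
```

The real system is of course much larger (pretrained flow and feature networks plus a sequence model), but the motion-plus-appearance idea is the same.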
I am currently integrating and training the DeepEthogram model and benchmarking its efficiency against alternatives so that our lab can take advantage of recent advances in AI. By comparing its performance against manual scoring (sketched below), my goal is to confirm it as a faster and more accurate method for tracking behavior, saving the lab considerable time.
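As a rough illustration of that validation step, the snippet below computes frame-by-frame agreement between hand-scored labels and model output for the same video. The toy label sequences are hypothetical placeholders for real scoring data:

```python
from sklearn.metrics import accuracy_score, classification_report

# Hypothetical per-frame labels: one from a human scorer, one from the model
manual = ["rest", "rest", "groom", "groom", "walk", "walk", "rest"]
model  = ["rest", "rest", "groom", "walk",  "walk", "walk", "rest"]

print("Frame-level accuracy:", accuracy_score(manual, model))
print(classification_report(manual, model))
```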