Campus & Community

Researchers’ Artificial Intelligence-Based Speech Sound Therapy Software Wins $2.5M NIH Grant

Wednesday, May 24, 2023, By Diane Stirling
College of Arts and Sciences, College of Engineering and Computer Science, faculty, Research and Creative

Three Syracuse University researchers, supported by a recent $2.5 million grant from the National Institutes of Health (NIH), are working to refine a clinically intuitive automated system that may improve treatment for speech sound disorders while alleviating the impact of a worldwide shortage of speech-language clinicians.

The project, “Intensive Speech Motor Chaining Treatment and Artificial Intelligence Integration for Residual Speech Sound Disorders,” is funded for five years. Jonathan Preston, associate professor of communication sciences and disorders, is principal investigator. Preston is the inventor of Speech Motor Chaining, a treatment approach for individuals with speech sound disorders. Co-principal investigators are Asif Salekin, assistant professor of electrical engineering and computer science, whose expertise is creating interpretable and fair human-centric artificial intelligence-based systems, and Nina Benway, a recent graduate of the communication sciences and disorders/speech-language pathology doctoral program.

Their system combines the evidence-based Speech Motor Chaining approach, an extensive library of speech sounds and artificial intelligence to “think” and “hear” the way a speech-language clinician does.

The project focuses on the most effective scheduling of Speech Motor Chaining sessions for children with speech sound disorders and also examines whether artificial intelligence can enhance Speech Motor Chaining, a topic Benway explored in her dissertation. The work is a collaboration between Salekin’s Laboratory for Ubiquitous and Intelligent Sensing in the College of Engineering and Computer Science and Preston’s Speech Production Lab in the College of Arts and Sciences.

Clinical Need

In speech therapy, learners usually meet one-on-one with a clinician to practice speech sounds and receive feedback. If the artificial intelligence version of Speech Motor Chaining (“ChainingAI”) accurately replicates a clinician’s judgment, it could give learners high-quality practice on their own between clinician sessions, helping them reach the intensity of practice that best overcomes a speech sound disorder.

The software is meant to supplement, not replace, the work of speech clinicians. “We know that speech therapy works, but there’s a larger issue about whether learners are getting the intensity of services that best supports speech learning,” Benway says. “This project looks at whether AI-assisted speech therapy can increase the intensity of services through at-home practice between sessions with a human clinician. The speech clinician is still in charge, providing oversight, critical assessment and training the software on which sounds to say are correct or not; the software is simply a tool in the overall arc of clinician-led treatment.”

170,000 Sounds

A library of 170,000 correctly and incorrectly pronounced “r” sounds was used to train the system. The recordings, made by more than 400 children over 10 years, were collected by researchers at Syracuse, Montclair and New York Universities and archived in the Speech Production Lab.

Benway wrote ChainingAI’s patent-pending speech analysis and machine learning operating code, which converts audio from speech sounds into recognizable numeric patterns. The system was taught to predict which patterns represent “correct” or “incorrect” speech. Predictions can be customized to individuals’ speech patterns.
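
In outline, that kind of pipeline might look like the Python sketch below, in which each recording is reduced to a fixed-length numeric pattern and a classifier learns clinician-provided labels. The synthetic recordings, feature choices and model here are illustrative assumptions for readers, not the project’s patent-pending code.

import numpy as np
import librosa
from sklearn.linear_model import LogisticRegression

def audio_to_features(signal: np.ndarray, sr: int = 16000) -> np.ndarray:
    """Reduce one recorded speech sound to a fixed-length numeric pattern."""
    mfcc = librosa.feature.mfcc(y=signal, sr=sr, n_mfcc=13)  # spectral features
    return mfcc.mean(axis=1)                                 # summarize over time

# Stand-ins for real recordings: one second of audio per attempt, with
# hypothetical clinician labels (1 = correct "r" sound, 0 = distorted).
rng = np.random.default_rng(0)
recordings = [rng.standard_normal(16000).astype(np.float32) for _ in range(8)]
labels = [1, 0, 1, 1, 0, 0, 1, 0]

X = np.stack([audio_to_features(sig) for sig in recordings])
clf = LogisticRegression(max_iter=1000).fit(X, labels)  # learn the patterns
print(clf.predict(X))                                   # predicted correctness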

During speech practice, the code works in real time with Preston’s Speech Motor Chaining website to sense, sort and interpret patterns in speech audio to “hear” whether a sound is made correctly. The software provides audio feedback (announcing “correct” or “not quite”), offers tongue-position reminders and tongue-shape animations to reinforce proper pronunciation, then selects the next practice word based on whether or not the child is ready to increase word difficulty.
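
A simplified sketch of such a practice loop is below; the word lists, the advancement rule and the predict_correct() stand-in for the trained model are illustrative assumptions, not the actual Speech Motor Chaining logic.

# Hypothetical practice words grouped by difficulty level.
PRACTICE_WORDS = {1: ["red", "rain"], 2: ["carrot", "mirror"], 3: ["refrigerator"]}

def practice_session(recordings, predict_correct, streak_to_advance=3):
    """Run one at-home practice block, echoing the feedback loop above."""
    level, streak = 1, 0
    for audio in recordings:
        if predict_correct(audio):        # the model "hears" the attempt
            print("correct")
            streak += 1
        else:
            print("not quite")            # cue a retry and a tongue-position reminder
            streak = 0
        if streak >= streak_to_advance and level < max(PRACTICE_WORDS):
            level, streak = level + 1, 0  # learner is ready for harder words
        print("next word:", PRACTICE_WORDS[level][0])

# Example run with a stand-in model that marks every attempt correct.
practice_session(["take1.wav", "take2.wav", "take3.wav"], lambda audio: True)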

ChainingAI speech therapy software provides feedback, including animations of tongue positioning, to help learners make sounds properly. (Photo courtesy of Speech Production Lab)

Early Promise

According to the researchers, the system shows greater potential than prior systems developed to detect speech sound errors.

Until now, Preston says, automated systems have not been accurate enough to provide much clinical value. This study overcomes issues that hindered previous efforts: its dataset of residual speech sound disorder audio is larger, it recognizes incorrect sounds more accurately, and clinical trials are assessing its therapeutic benefit.

“There has not been a clinical therapy system that has explicitly used AI machine learning to recognize correct and distorted ‘r’ sounds for learners with residual speech sound disorders,” Preston says. “The data collected so far shows this system is performing well in relation to what a human clinician would say in the same circumstances and that learners are improving speech sounds after using ChainingAI.”

So Far, Just ‘R’

The experiment currently focuses on the American English “r” sound, the most common speech error persisting into adolescence and adulthood. Eventually, the researchers hope to expand the software’s functionality to “s” and “z” sounds, other English dialects and other languages.

Ethical AI

Faculty and doctoral students who work on the project include, from left, Ph.D. student and project manager Nicole Caballero, Associate Professor Jonathan Preston, Assistant Professor Asif Salekin and Nina Benway, Communication Sciences and Disorders 2023 doctoral program graduate. (Photo by Alex Dunbar)

The researchers have considered ethical aspects of AI throughout the initiative. “We’ve made sure that ethical oversight was built into this system to assure fairness in the assessments the software makes,” Salekin says. “In its learning process, the model has been taught to adjust for age and sex of clients to make sure it performs fairly regardless of those factors.” Future refinements will adjust for race and ethnicity.
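
One common way to audit that kind of fairness is to compare the model’s accuracy across demographic groups and flag large gaps, as in this sketch; the groups, data and interpretation are illustrative assumptions rather than the project’s actual procedure.

from collections import defaultdict

def accuracy_by_group(predictions, labels, groups):
    """Compute per-group accuracy so gaps between groups are visible."""
    hits, totals = defaultdict(int), defaultdict(int)
    for pred, true, grp in zip(predictions, labels, groups):
        totals[grp] += 1
        hits[grp] += int(pred == true)
    return {g: hits[g] / totals[g] for g in totals}

# Toy predictions and labels split by a hypothetical sex attribute.
scores = accuracy_by_group([1, 0, 1, 1], [1, 0, 0, 1], ["F", "M", "F", "M"])
gap = max(scores.values()) - min(scores.values())
print(scores, "gap:", round(gap, 2))  # a large gap would signal unfair performance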

The team is also assessing appropriate candidates for the therapy and whether different scheduling of therapy visits (such as a boot camp experience) might help learners progress more quickly than longer-term intermittent sessions.

Ultimately, the researchers hope the software provides sound-practice sessions that are effective, accessible and of sufficient intensity to allow ChainingAI to routinely supplement in-person clinician practice time. Once expanded to include “s” and “z” sounds, the system would address 90% of residual speech sound disorders and could benefit many thousands of the estimated six million Americans affected by them.

 
