By sharing our experience and findings from StammerTalk’s stuttered speech collection project, we advocate for a new AI data paradigm of community data stewardship, especially for data from and about marginalized communities.
I recently attended the “Voice-Activated AI for Stuttered Speech Convergence Symposium” organized by Michigan State University, Friends, and Western Michigan University. I was honored to speak on the Sociotechnical Challenges in Voice-Activated AI panel with a fantastic group of panelists and participants from academia, industry, and nonprofits. It was an incredible experience to join the technical… Continue reading “Stuttering and Voice-activated AI: Panel Reflections”
Lindsay and Shaomei discussed Shaomei’s recent experience at the National Stuttering Association annual conference, covering topics across AImpower’s goals, Shaomei’s personal identity as a person who stutters (PWS), her connection to the stuttering community, and the lessons she learned at the conference.
Speech recognition technology has progressed rapidly in recent years, driven largely by modern deep learning techniques. While new models such as Facebook AI Research’s wav2vec have achieved a 2.43 WER (Word Error Rate) on research benchmark datasets, their performance usually tanks when processing atypical speech, such as speech by people who stutter, people who are deaf or… Continue reading “Speech Technology for People Who Stutter”
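For readers unfamiliar with the metric mentioned above: WER is the word-level edit distance (substitutions, deletions, and insertions) between a reference transcript and the recognizer’s output, divided by the number of words in the reference. A minimal sketch, with made-up example sentences (the function name and inputs are illustrative, not from any specific library):

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word Error Rate: word-level Levenshtein distance / reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # Dynamic-programming table over word positions.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + cost) # substitution
    return d[len(ref)][len(hyp)] / len(ref)

# One inserted word against a four-word reference -> WER = 0.25
print(wer("i want to go", "i want to go go"))
```

Disfluencies such as repetitions and prolongations tend to show up as extra insertions and substitutions, which is one reason WER degrades so sharply on stuttered speech.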