Stuttering and Voice-activated AI: Panel Reflections

I recently attended the “Voice-Activated AI for Stuttered Speech Convergence Symposium” organized by Michigan State University, Friends, and Western Michigan University. I was honored to speak on the Sociotechnical Challenges in Voice-Activated AI panel with a fantastic group of panelists and participants from academia, industry, and nonprofits. It was an incredible experience to join the technical…

“Break the invisible wall” – upcoming talk about challenges and opportunities for people who stutter to participate in academic conferences

Post-OSS updates and reflections (1/24/2023): I tried out the format of doing the “live” presentation with a recorded talk (slides) today, and it worked surprisingly well, especially for virtual meetings! How it works: What I like about this format: I am so glad that we tried out this format – a first for me, and…

Publishing our research at CHI 2023

We are excited to share that our research with the stuttering community on their videoconferencing experiences has been accepted to the ACM CHI Conference on Human Factors in Computing Systems (CHI ’23)! Our paper, titled “‘The World is Designed for Fluent People’: Benefits and Challenges of Videoconferencing Technologies for People Who Stutter”, details the methodology…

NSA 2022 Convention Experience and Reflections

Lindsay and Shaomei had a discussion about Shaomei’s recent experience at the National Stuttering Association annual conference, covering topics across AImpower’s goals, Shaomei’s personal identity as a person who stutters (PWS), her connection to the stuttering community, and the lessons she learned at the conference.

Speech Technology for People Who Stutter

Speech recognition technology has progressed a lot in recent years, especially with modern deep learning techniques. While new models such as Facebook AI Research’s wav2vec have achieved a 2.43 WER (Word Error Rate) on a research benchmark dataset, their performance usually tanks when processing atypical speech, such as speech by people who stutter or people who are deaf or…
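As a side note on the metric mentioned above: WER is the word-level edit distance between the recognizer’s output and a reference transcript, normalized by the reference length. A minimal sketch (standard dynamic-programming edit distance; not any particular toolkit’s implementation) also shows why stuttered speech inflates the score — a single repeated word counts as an insertion error:

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word Error Rate: word-level edit distance / number of reference words."""
    ref, hyp = reference.split(), hypothesis.split()
    # Classic Levenshtein edit distance, computed over words instead of characters.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(
                d[i - 1][j] + 1,         # deletion
                d[i][j - 1] + 1,         # insertion
                d[i - 1][j - 1] + cost,  # substitution (or match)
            )
    return d[len(ref)][len(hyp)] / len(ref)

# A word repetition — common in stuttered speech — scores as one insertion,
# so even a perfectly intelligible utterance gets penalized:
print(wer("please call stella", "please please call stella"))  # → 0.333...
```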