Apple is lending its weight to a tool that could improve the lives of people living with certain conditions!
Seeing the names Apple, Google, and Meta together is nothing new, but seeing them join forces on a single high-impact project is. According to the University of Illinois, these tech giants are working together to develop the Speech Accessibility Project.
According to information shared by 9to5Mac, the Speech Accessibility Project aims to investigate how artificial intelligence algorithms can be tuned to improve speech recognition for users with conditions that affect speech.
Collaboration in the name of health
According to the report, the technology companies working with the University of Illinois include Amazon, Apple, Google, Meta, and Microsoft. Joining them are the non-profit organizations Team Gleason, which supports people living with ALS, and the Davis Phinney Foundation for Parkinson's, both of which are also contributing to the Speech Accessibility Project.
UIUC Professor Mark Hasegawa-Johnson said of the project's development:
This task has been challenging because it requires a large amount of infrastructure, ideally the kind that large tech companies can support, which is why we created a unique cross-functional team with expertise in linguistics, speech, AI, security, and privacy.
For their part, the Davis Phinney Foundation (Parkinson's) and Team Gleason (ALS) have pledged their support for the project. Davis Phinney Foundation executive director Polly Dawkins said:
Parkinson’s disease affects motor function and makes typing difficult, which makes speech recognition a fundamental tool for communication and expression. Part of [our] commitment is to ensure that people with Parkinson’s disease have access to the tools, technologies, and resources they need to live their best lives.
Of course, all of these companies have made significant advances in this technology, as seen in voice assistants such as Siri, Amazon Alexa, and Google Assistant. Apple, for its part, has also invested in accessibility technologies such as VoiceOver and Voice Control, which are best in class for users with visual impairments or reduced mobility.
The companies' technology, combined with speech samples from individuals representing a wide variety of speech patterns, is essential to the success of the project.
According to a report shared by Engadget, UIUC will recruit paid volunteers to provide voice samples and help create a “private, anonymized” dataset that can be used to train machine learning models. The group will focus on American English at first.
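To make the idea of training on such a dataset a bit more concrete, here is a minimal, hypothetical sketch of how an off-the-shelf English speech recognition model could transcribe a single recording; collected samples could then be used to adapt a baseline of this kind to a wider range of speech patterns. The model name, file name, and libraries here are illustrative assumptions, not part of the project's published pipeline.

```python
# Illustrative only: load a publicly available English speech recognition model and
# transcribe one recording. The project's real dataset would be private and anonymized;
# "volunteer_sample.wav" is a hypothetical file name used for this sketch.
import torch
import torchaudio
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC

MODEL_ID = "facebook/wav2vec2-base-960h"  # assumed baseline model, not the project's

processor = Wav2Vec2Processor.from_pretrained(MODEL_ID)
model = Wav2Vec2ForCTC.from_pretrained(MODEL_ID)
model.eval()

# Load a hypothetical volunteer recording and resample to the 16 kHz the model expects.
waveform, sample_rate = torchaudio.load("volunteer_sample.wav")
if sample_rate != 16_000:
    waveform = torchaudio.functional.resample(waveform, sample_rate, 16_000)

# Convert the audio to model inputs and decode the most likely character sequence.
inputs = processor(waveform.squeeze().numpy(), sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    logits = model(inputs.input_values).logits

predicted_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(predicted_ids)[0])
```

Adapting such a baseline would mean fine-tuning it on the recordings the project collects, which is exactly the kind of work a private, anonymized dataset of diverse speech samples is meant to enable.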