A team in the College of Liberal Arts has received a $450,000 grant from the Andrew W. Mellon Foundation to support a project dedicated to developing software to annotate audiovisual humanities collections.
Tanya Clement, an associate professor of English, will lead the team as it works on the AudiAnnotate Audiovisual Extensible Workflow project, according to an Oct. 6 press release.
Bertram Lyons, a managing director at Audio Visual Preservation Solutions, a data management consulting company partnering with Clement, said audio recordings typically come with only simple descriptions. These descriptions usually say only who is talking, when the recording was made and who recorded it, Lyons said.
“(The project) will increase the ability for anybody who's searching on the internet, for example, to find and to know about the existence of this content,” Lyons said.
Bethany Radcliff, an English and information studies graduate student, said the existing technology lets users add text annotations to audio. The grant allows the team to focus on adding video support and other collaborative features, she said. Radcliff said one of the project's goals is to promote audio as a valid form of scholarship.
“There's this text privilege where books and manuscripts and anything that we can view and read the words of, anything visual, is easier to access,” Radcliff said. “Audio materials are often kind of cast to the side.”
Lyons said annotations are useful for those conducting research on historical recordings.
“Our goal here is to really increase the ability of the voices that are recorded in collections around the world to be found, to be used and to be part of the narrative that tells about history,” Lyons said.
Brumfield Labs, a company specializing in software development for historic documents, is also partnering on the project.
Ben Brumfield, a partner at Brumfield Labs, said the annotations can outlast the software and servers behind them because they will be available on simple websites. He said people living in low-resource communities might not have access to advanced technology, and simple websites allow people to view the pages on a smartphone.
Radcliff said the tool can also be helpful in making audio more accessible from a disability studies standpoint.
“We don't always want to just annotate what we hear people speaking, there's so much more … nonspeech sounds to clue you into the environment and to how people were feeling,” Radcliff said. “You can hear cars outside, you can hear people in the audience clapping or laughing. It brings this new focus on the complexities of making a soundscape fully accessible.”