Stanford A.I. can realistically score computer animations just by watching them

Video: https://www.youtube.com/watch?v=5I8KCTuDBek

In the early days of cinema, organists would add sound effects to silent movies by playing along to whatever was happening on screen. Jump forward to 2018, and a variation on this idea forms the basis of new work by Stanford University computer scientists. They have developed an artificial intelligence system that can synthesize realistic sounds for computer animation based entirely on the images it sees and its knowledge of the physical world. The result is convincing sound effects generated at the touch of a button.

“We’ve developed the first system for automatically synthesizing sounds to accompany physics-based computer animations,” Jui-Hsien Wang, a graduate student at Stanford’s Institute for Computational and Mathematical Engineering (ICME), told Digital Trends. “Our approach is general, [meaning that] it can compute realistic sound sources for a wide range of animated phenomena — such as solid bodies like a ceramic bowl or a flexible crash cymbal, as well as liquid being poured into a cup.”

The technology that makes the system work is pretty darn smart. It takes into account the varying positions of the objects in the scene as assembled during the 3D modeling process. It identifies what these objects are, and then predicts how they will affect the sounds being produced, whether by reflecting, scattering, or diffracting them.

“A great thing about our approach is that no training data is required,” Wang continued. “It simulates sound from first physical principles.”
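To give a flavor of what simulating sound "from first physical principles" can look like, here is a minimal sketch of modal sound synthesis, a standard physics-based technique in which a struck object's vibration is modeled as a sum of damped sinusoids. This is only an illustration of the general idea, not the Stanford team's system; the mode frequencies, dampings, and amplitudes below are made-up placeholder values.

```python
# Minimal sketch of physics-based (modal) sound synthesis: a struck rigid
# object is modeled as a sum of damped sinusoidal vibration modes.
# Generic illustration only -- not the Stanford system. The mode values
# below are hypothetical placeholders.
import numpy as np

SAMPLE_RATE = 44100  # audio samples per second


def synthesize_impact(modes, duration=1.0, sample_rate=SAMPLE_RATE):
    """Sum damped sinusoids, one per vibration mode of the struck object.

    modes: list of (frequency_hz, damping_per_sec, amplitude) tuples.
    Returns a mono waveform as a NumPy array scaled to [-1, 1].
    """
    t = np.arange(int(duration * sample_rate)) / sample_rate
    audio = np.zeros_like(t)
    for freq, damping, amp in modes:
        audio += amp * np.exp(-damping * t) * np.sin(2 * np.pi * freq * t)
    peak = np.max(np.abs(audio))
    return audio / peak if peak > 0 else audio


# Hypothetical modes loosely resembling a small ceramic bowl being struck.
bowl_modes = [
    (820.0, 6.0, 1.0),     # fundamental ring
    (1940.0, 9.0, 0.5),    # first overtone
    (3410.0, 14.0, 0.25),  # higher, faster-decaying partial
]

waveform = synthesize_impact(bowl_modes, duration=1.5)
```

A full physics-based pipeline would go further, deriving those modes from the object's geometry and material and then modeling how the resulting pressure waves reflect, scatter, and diffract around the rest of the scene.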

As well as helping add sound effects to animated movies more quickly, the technology could one day help designers work out how products will sound before they are physically produced.

There’s no word on when this tool might be made publicly available, but Wang said that the team is currently “exploring options for making the tool accessible.” Before it gets to that point, however, the researchers want to improve the system’s ability to model more complex objects, such as the lush reverberating tones of a Stradivarius violin.

The research is due to be presented as part of ACM SIGGRAPH 2018, the world’s leading conference on computer graphics and interactive techniques. Take a second to feel sorry for the poor Pixar foley artist at the back of the hall who just bought a new house!
