Procedural audio generation is an important method for automatically synthesizing realistic sounds for computer animations and games. While synthesis techniques for rigid bodies have been well studied, few existing works have tackled the challenges of soft-body interactions. In this dissertation, we explore practical methods for procedural audio generation for soft bodies. First, we synthesize both impulsive and continuous sounds for elastic deformations. Our method builds on granular synthesis, retargeting real-world sound recordings to match the motion of input elastic objects. Next, to synthesize sounds for plastic deformations, we introduce a concatenative synthesis method with a fast feature-correlation technique that detects the motion events of highly deformable objects at run time and simultaneously generates sounds from these motion signals. Finally, we evaluate the synthesized audio with both subjective and objective techniques, and the results demonstrate that our proposed synthesis methods produce convincing soft-body sounds comparable to real recordings. Our methods do not require computationally expensive physics simulations, and they improve on previous data-driven synthesis approaches with more efficient analysis, control, and evaluation techniques, making it possible to automatically generate plausible audio for a variety of soft-body interactions.
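To make the retargeting idea concrete, the following is a minimal sketch of motion-driven granular synthesis, not the dissertation's actual pipeline: grains are drawn from a recording according to how well their local energy matches a normalized motion signal (e.g., deformation energy per frame), then windowed and overlap-added. The function name, parameters, and the energy-matching heuristic are all illustrative assumptions.

```python
import numpy as np

def granular_retarget(recording, motion, grain_len=2048, hop=512, rng=None):
    """Illustrative sketch: retarget grains of a recording to follow a motion signal.

    recording : 1-D float array of audio samples.
    motion    : per-hop control values in [0, 1] (e.g., normalized deformation energy);
                this mapping is an assumption, not the dissertation's exact feature.
    """
    rng = rng or np.random.default_rng(0)
    window = np.hanning(grain_len)
    # Precompute the RMS energy of every candidate grain in the recording.
    starts = np.arange(0, len(recording) - grain_len, hop)
    energies = np.array([np.sqrt(np.mean(recording[s:s + grain_len] ** 2))
                         for s in starts])
    energies /= energies.max() + 1e-12  # normalize to [0, 1]
    out = np.zeros(len(motion) * hop + grain_len)
    for i, m in enumerate(motion):
        # Pick a grain whose energy is close to the current motion value,
        # with a little randomness to avoid audible repetition.
        candidates = np.argsort(np.abs(energies - m))[:8]
        s = starts[rng.choice(candidates)]
        # Overlap-add the windowed grain, scaled by the motion amplitude.
        out[i * hop : i * hop + grain_len] += recording[s:s + grain_len] * window * m
    return out
```

A usage example: feeding a decaying sine "recording" and a rising motion ramp yields output whose envelope follows the ramp rather than the recording's original decay, which is the essence of retargeting.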