Audio Tips & Tricks: Physics of Sound


This topic contains 0 replies, has 1 voice, and was last updated by Animation Pagoda Staff Animation Pagoda Staff 2 years, 3 months ago. This post has been viewed 475 times


    Waves

    Sound is a form of vibrational energy that is transferred through a medium such as air in the form of waves. Humans can only hear sounds in the range of roughly 20-20,000 Hertz. Sound can travel through solids, liquids, and gases. An oscilloscope can be used to visualize sound waves picked up by a microphone. Amplitude denotes the peak height of a waveform.

    Air pressure plays a role in how sound travels. The lower the air pressure, the less effectively sound carries, so loudness decreases. Sound cannot be heard in a vacuum because there are too few molecules, spread too far apart, to pass the vibration along.

    Shock waves are produced when an object travels faster than the speed of sound, about 1,225 km/h (761 mph) in air at sea level. The speeding object outruns its own pressure waves, which pile up and compress into a single shock front heard on the ground as a sonic boom.
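To make the wave idea concrete, here is a minimal sketch (the function name and the 343 m/s figure for air at about 20 °C are my own assumptions, not from the post) relating frequency to wavelength:

```python
# Hypothetical helper: wavelength = speed of sound / frequency.
def wavelength_m(frequency_hz, speed_of_sound=343.0):
    """Wavelength in metres for a tone in air (~343 m/s at 20 degrees C)."""
    return speed_of_sound / frequency_hz

# The 20 Hz - 20 kHz hearing range spans a huge range of wavelengths:
print(wavelength_m(20))      # ~17.15 m for the lowest audible pitch
print(wavelength_m(20_000))  # ~0.017 m (1.7 cm) for the highest
```

Note how the audible range covers wavelengths from building-sized down to fingertip-sized, which is part of why low and high frequencies behave so differently around obstacles.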

    Veritasium: Pyro Board: 2D Rubens’ Tube!

    CYMATICS: Science Vs. Music – Nigel Stanford


    Pitch vs. Tempo

    Pitch refers to the highness or lowness of a sound. Frequency is how many times a wave cycles in a set amount of time: 1 Hertz = 1 cycle per second. The higher the frequency, the higher the pitch.
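A pure pitch can be sketched as a sine wave whose frequency sets how fast it cycles. This is an illustrative sketch (the function name and the 440 Hz concert-A example are my own choices, not from the post):

```python
import math

def sine_wave(freq_hz, duration_s, sample_rate=44100, amplitude=1.0):
    """Generate a pure tone as a list of samples: higher freq_hz = higher pitch."""
    n = int(duration_s * sample_rate)
    return [amplitude * math.sin(2 * math.pi * freq_hz * i / sample_rate)
            for i in range(n)]

# Concert A: a wave that cycles 440 times per second.
a440 = sine_wave(440, 0.01)
```

Doubling `freq_hz` to 880 produces the same note one octave higher.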


    Bit Rate and Sampling Rate

    Digital audio cannot be stored on a computer as a continuous wavelength. Instead, the track is sampled at regular intervals and the measurements are converted into bits, leaving tiny stair-stepped breaks in the wave. CD-quality audio is sampled at a rate of 44.1 kHz, or 44,100 times per second, so the breaks are far too small for the human ear to notice.


    Bit Depth

    Bit depth is the number of bits used to store each individual sample, so it controls how precisely amplitude is captured: the higher the bit depth, the higher quality the audio will be. Uncompressed audio uses a standard called Pulse-Code Modulation (PCM). Audio file bit depth can range from 4 to 64 bits per sample. 4-bit audio is just awful, so make sure never to use it. 8-bit sound is basically the quality of old arcade and Nintendo Game Boy music. 16-24 bits is the modern standard. Higher bit depth results in much larger file sizes and requires more capable recording equipment.
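The effect of bit depth can be sketched as a rounding step: each sample gets snapped to one of 2^bits levels. This is a simplified uniform-PCM model (the function name is my own), ignoring real-converter details:

```python
# Sketch: quantizing one sample to a given bit depth (uniform signed PCM).
def quantize(sample, bits):
    """Round a sample in [-1.0, 1.0] to the nearest of 2**bits levels."""
    levels = 2 ** (bits - 1) - 1  # max positive level for signed samples
    return round(sample * levels) / levels

x = 0.123456789
print(quantize(x, 8))   # coarse: only 256 levels available
print(quantize(x, 16))  # fine: 65,536 levels, much closer to the input
```

The 8-bit result lands noticeably off the true value, while the 16-bit result is accurate to about four decimal places, which is why 8-bit audio sounds grainy.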

    Over time, resampling a video or audio file causes degradation, as more information is lost with each subsequent pass. In extreme cases, this can result in corruption and glitches.

    Video Data Degradation caused by resampling 1,000 times


    Nyquist-Shannon Sampling Theorem

    This theorem states that to accurately reconstruct an analog signal from its samples, the sampling rate must be at least twice the highest frequency present in the original signal. Human hearing tops out around 20 kHz, and the Nyquist frequency for CD audio is set at 22.05 kHz, which is why 44.1 kHz is the standard sampling rate.
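What happens when the theorem is violated can be shown numerically: a tone above the Nyquist frequency produces exactly the same samples as a lower "alias" frequency folded back into the audible band. A small sketch (function name and the 30 kHz example are my own assumptions):

```python
import math

def sample_tone(freq_hz, sample_rate, n=8):
    """Sample a cosine tone at discrete instants, rounded for display."""
    return [round(math.cos(2 * math.pi * freq_hz * i / sample_rate), 3)
            for i in range(n)]

# A 30 kHz tone is above the 22.05 kHz Nyquist frequency, so at 44.1 kHz
# it yields the same samples as a folded 14.1 kHz alias (44.1 - 30 = 14.1):
print(sample_tone(30_000, 44_100))
print(sample_tone(14_100, 44_100))
```

The two printed lists are identical, meaning the sampled data cannot distinguish the real 30 kHz tone from a phantom 14.1 kHz one; that phantom is the aliasing discussed below.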


    Dynamic Range

    Dynamic range refers to the difference between the loudest and quietest levels a system can represent, measured in decibels. Human hearing spans roughly 140 dB. Recording equipment is less flexible: 96-120 dB is the practical range for audio with 16-24 bit depth. The quietest usable level is called the noise floor. Most audio equipment cannot exceed roughly 130 dB of dynamic range because of noise in the analog circuitry, and signals pushed past the maximum level clip and distort.


    Artifacts, Noise, Dithering, and Aliasing

    Sound doesn’t always record perfectly. Sometimes there is background noise or ragged audio, and poorly edited clips can leave cut-off sounds and artifacts. One common problem is capturing frequencies above half the sampling rate, including content outside the human hearing range. The sampler cannot represent these frequencies, so they fold back into the audible band as false tones called aliasing. Luckily, they can be cut out fairly easily with anti-aliasing (low-pass) filters applied before sampling.

    Dithering is low-level noise deliberately added before quantization. It randomizes the rounding error so that it averages out as smooth background hiss instead of harsh distortion, which is especially useful when reducing audio to a lower bit depth.
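The averaging benefit can be sketched with triangular (TPDF) dither, a common choice; the function name and the 4-bit example are my own illustration, not from the post:

```python
import random

def dither_and_quantize(sample, bits, rng=random.Random(0)):
    """Add triangular (TPDF) dither of about +/- 1 LSB before rounding."""
    levels = 2 ** (bits - 1) - 1
    # The sum of two uniform randoms gives a triangular distribution.
    dither = rng.random() - rng.random()  # in LSB units
    return round(sample * levels + dither) / levels

# Plain rounding of 0.1 at 4 bits always snaps to the same wrong level
# (~0.143), but dithered quantization recovers 0.1 on average:
trials = [dither_and_quantize(0.1, 4) for _ in range(10_000)]
print(sum(trials) / len(trials))  # close to 0.1
```

Each individual dithered sample is still coarse, but the error is now random rather than systematic, so it sounds like faint hiss instead of distortion.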

    Jonathan Clark: Nyquist, Fletcher-Munson Curves, Anti-Aliasing, Quantization Noise, and Dithering


    Resonance, Reverberation

    Resonance occurs when sound vibration at an object’s natural frequency causes that object to oscillate; shattering something this way takes sustained, concentrated energy at exactly the right pitch. Loudness is measured in decibels, a unit that denotes relative power or intensity. Prolonged exposure to noise over 80 decibels can cause damage to hearing. Most rock concerts and loud headphone music fall in the 100-120 decibel range, and 120 dB is the pain threshold for noise. Gunshots, fireworks, and explosions are around 140 decibels, and at roughly 150 decibels human eardrums can start to rupture.
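Because decibels are logarithmic, the jumps in that list are bigger than they look. A short sketch of the underlying formula (the function name is my own) for comparing two power levels:

```python
import math

# Decibels compare two power levels on a log scale: dB = 10 * log10(P / P0).
def db_ratio(power_ratio):
    return 10 * math.log10(power_ratio)

# Doubling power adds only ~3 dB; a 100x power increase adds 20 dB:
print(round(db_ratio(2), 2))    # 3.01
print(round(db_ratio(100), 2))  # 20.0
```

So a 140 dB explosion carries one hundred times the power of a 120 dB concert, not merely "a bit more".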

    Jamie Vendera: Can You Shatter Glass With Your Voice?


    Echo

    Sound waves can reflect off solid objects that they cannot penetrate. Sonar and echolocation rely on this principle. In general, most sound designers and musicians do not want uncontrolled echo effects. Baffles can be used to soundproof recording studios, but echoes can also be reduced or removed in post.

    Nancied: Remove echo from video using Audition
