Neuronal Encoding of Sound
This article explores the basic physiological principles of sound perception by the ear, tracing the mechanism from its inception as pressure waves in air to the transduction of those waves into electrical impulses (action potentials) along auditory nerve fibers.


Introduction

With each passing day, the mysteries and complexities of contemporary neuroscience are continually redefined as scientists and engineers design experiments that are testaments to human curiosity and ingenuity. What we know of the auditory system has changed considerably in recent times, and much of it will conceivably change again within the next few years. What follows is what was widely accepted in the scientific community as of 2009.

This article begins with a short exploration of what sound is, followed by the general anatomy of the ear, finally giving way to an explanation of the encoding mechanism of the engineering marvel that is the ear. The article traces the route that sound waves take from their generation to their integration and cognitive discernment in the auditory cortex.

Basic Physics of Sound

Before beginning the study of the neuronal encoding of sound, a fundamental understanding of what sound is and of its characteristics must be established. This subsection is in no way a comprehensive treatment of the physics of sound, but it contains the basic information that will suffice for introductory learning.

Every whisper, hum or clatter that you currently perceive can, from a reductionist standpoint, be parsimoniously decomposed into a function of waveform, phase, amplitude and frequency. Simply stated, if the specific waveform pattern, phase, amplitude and frequency of a person's voice can be mathematically determined, a speech pattern generator could replicate that voice to within a certain degree of accuracy.
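
For concreteness, here is a minimal sketch in Python of this decomposition: a pure tone is fully specified by its amplitude, frequency and phase. The sample rate, 440 Hz tone and quarter-cycle phase below are illustrative assumptions, not values taken from any measured voice.

```python
import numpy as np

sample_rate = 44100                       # samples per second (assumed)
t = np.arange(0, 0.01, 1 / sample_rate)   # 10 ms of time points

amplitude = 0.8        # vertical extent of the wave
frequency = 440.0      # cycles per second (Hz)
phase = np.pi / 4      # horizontal shift, in radians

# A steady tone: every parameter of the sound appears explicitly.
tone = amplitude * np.sin(2 * np.pi * frequency * t + phase)
print(tone[:5])        # first few samples of the synthesized waveform
```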

Sound waves are categorized by physicists as longitudinal waves, consisting of regions of high pressure and corresponding regions of low pressure. Regions of high pressure are known as compressions, while regions of low pressure are known as rarefactions. These alternating regions of pressure are subject to attenuation, which increases with distance from the sound source.

Phase

[Figure: phase shifting of a simple sine wave]

Phase is the characteristic of a sound wave that corresponds physically to shifting the wave along the horizontal (time) axis. In digital signal processing, phase is essentially what constitutes a time delay. In other words, if one voice were played as recorded while a copy was superimposed with a starting delay of 10 seconds, we would describe the second voice as phase shifted. See also the article on Phase (waves).
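
Since phase and time delay are two views of the same shift, a short sketch may help: for a sinusoid of frequency f delayed by dt seconds, the phase shift is 2πf·dt radians. The 100 Hz tone and 2.5 ms delay below are illustrative assumptions.

```python
import numpy as np

f = 100.0       # tone frequency in Hz (assumed)
dt = 0.0025     # time delay in seconds (assumed)

# Delaying sin(2*pi*f*t) by dt yields sin(2*pi*f*t - 2*pi*f*dt),
# i.e. a phase shift of 2*pi*f*dt radians (taken modulo one cycle).
phase_shift = (2 * np.pi * f * dt) % (2 * np.pi)
print(f"{dt * 1e3:.1f} ms delay of a {f:.0f} Hz tone = "
      f"{phase_shift:.3f} rad ({np.degrees(phase_shift):.1f} degrees)")
```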

Waveform

Waveform is the characteristic of sound that describes the overall shape of the sound wave. Arbitrary waveforms rarely complicate sound processing, because both mathematical algorithms and the ear can decompose a sound into a sum of sinusoids using the Fourier transform. A waveform is thus intrinsically described by the sum of sinusoids obtained from Fourier analysis, and this is effectively the form in which the ear perceives sound.
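
As a sketch of this decomposition, the snippet below builds a waveform from two sinusoids and recovers them with the discrete Fourier transform; the frequencies and amplitudes are illustrative assumptions.

```python
import numpy as np

sample_rate = 1000                      # Hz (assumed)
t = np.arange(0, 1, 1 / sample_rate)    # one second of samples

# A composite waveform: a 50 Hz component plus a weaker 120 Hz component.
waveform = 1.0 * np.sin(2 * np.pi * 50 * t) + 0.5 * np.sin(2 * np.pi * 120 * t)

spectrum = np.fft.rfft(waveform)
freqs = np.fft.rfftfreq(len(waveform), 1 / sample_rate)
amplitudes = 2 * np.abs(spectrum) / len(waveform)

# Report the two dominant sinusoids recovered by the transform.
for i in np.argsort(amplitudes)[-2:][::-1]:
    print(f"{freqs[i]:.0f} Hz at amplitude {amplitudes[i]:.2f}")
```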

Amplitude

[Figure: a simple sine wave]

This characteristic of any wave represents the amount of vertical perturbation from the mean of all vertical perturbations. Amplitude, when mathematically modeled, is a scalar quantity and is often the least complex of the characteristics that define sound. It is also the characteristic of a sound wave that determines the loudness with which a sound is perceived. In a trigonometric transcendental function such as C·sin(2πft), C represents the amplitude of the sound wave. See also the article on Amplitude.
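
Because perceived loudness grows roughly with the logarithm of amplitude, sound levels are usually quoted in decibels, 20·log10(C/Cref). The sketch below, with an arbitrary assumed reference amplitude, shows that each doubling of C adds about 6 dB.

```python
import numpy as np

reference = 1.0     # arbitrary reference amplitude (assumed)

for C in [1.0, 2.0, 4.0, 8.0]:
    level_db = 20 * np.log10(C / reference)   # level relative to reference
    print(f"amplitude {C:.0f} -> {level_db:.1f} dB above reference")
```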

Frequency

The frequency of a sound is defined as the number of complete cycles the wave completes per unit time; it is the reciprocal of the period T, the time taken for one complete cycle, so f = 1/T. Frequency is inversely proportional to the wavelength, which is the distance between any two consecutive repeating points on the mathematically idealized wave: wavelength = (speed of sound)/frequency. The common unit of frequency used for sound is the hertz (Hz), which stands for cycles per second. It is this property of sound that has a direct bearing on perceived pitch; thus sopranos are described as having high pitch while basses have low pitch. See also the article on Frequency. It must be noted that the nominal audible frequency range for humans is 20 Hz to 20 kHz; in reality, adults are typically sensitive only to frequencies up to about 15-17 kHz.
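
A short worked sketch of these relations, using the approximate speed of sound in air at room temperature (an assumed constant):

```python
speed_of_sound = 343.0                  # m/s in air at about 20 degrees C

for f in [20.0, 440.0, 20000.0]:        # spanning the nominal audible range
    period = 1 / f                      # T = 1/f, seconds per cycle
    wavelength = speed_of_sound / f     # lambda = v/f, metres
    print(f"{f:8.0f} Hz: T = {period:.6f} s, wavelength = {wavelength:.4f} m")
```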

Background

Anatomy of the ear

[Figure: flowchart of sound passage - outer ear]

External Ear

Anatomically, this region of the ear consists of the pinna, concha and auditory meatus. Its fundamental function is to selectively boost sounds with frequencies around 2-5 kHz. This selective amplification, which at times can reach roughly 60-fold, is due to the mechanical structure of the outer ear, which facilitates such resonance.[1] In a neat juxtaposition of anatomy and physiology, the consonants in human speech that separate the basic units of communication (phonemes) occur within this selective amplification range. The pinna, as a result of its asymmetrical structure, also provides cues about the elevation from which a sound originated: its vertical asymmetry selectively amplifies higher-frequency sounds arriving from a high elevation, thereby providing spatial information to the auditory cortex, as explained later in this article.

[Figure: flowchart of sound passage - middle ear]

Middle Ear

This part of the ear plays one of the most crucial roles in the auditory process: it converts pressure variations in air into perturbations in the fluid of the inner ear. In electrical-circuit terms, this part of the ear essentially performs impedance matching; it is the transfer function that allows power to flow between two otherwise mismatched circuits. The three bones responsible for this complex process are the malleus, incus and stapes, collectively known as the ear ossicles.[2][3] The impedance matching is achieved through careful regulation of the ratio of the areas of the tympanic membrane (ear drum) and the footplate of the stapes, creating what is known as a transformer mechanism.[2] Furthermore, the ossicles are angled in such a manner as to resonate at 700-800 Hz while at the same time protecting the inner ear from excessive transduction damage. A certain degree of top-down control is present at the middle-ear level, primarily through two muscles in this anatomical region: the tensor tympani and the stapedius. These muscles can stiffen the ossicles so as to reduce the amount of sound energy transmitted into the inner ear in noisy surroundings.[2]
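
The transformer mechanism can be sketched numerically: concentrating the force collected over the large tympanic membrane onto the small stapes footplate multiplies the pressure, and the ossicular lever adds a further gain. The areas and lever ratio below are commonly quoted approximations, assumed here purely for illustration.

```python
import numpy as np

tympanic_area = 55e-6     # m^2, effective eardrum area (approximation)
footplate_area = 3.2e-6   # m^2, stapes footplate area (approximation)
lever_ratio = 1.3         # malleus-to-incus lever arm ratio (approximation)

# Pressure gain = (force concentration by area ratio) * (lever gain).
pressure_gain = (tympanic_area / footplate_area) * lever_ratio
print(f"pressure gain ~ {pressure_gain:.1f}x "
      f"(~{20 * np.log10(pressure_gain):.0f} dB)")
```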

[Figure: flowchart of sound passage - inner ear]

Inner Ear

The inner ear is truly a marvel of physiological engineering: within a diameter of about 5 millimeters it contains the sensory mechanisms for inertial feedback while acting as both a frequency analyzer and an acoustic amplifier. The inner ear is composed primarily of the cochlea, the semicircular canals, the utriculus and the sacculus. The cochlea falls under the umbrella of the auditory system, while the other structures (semicircular canals, utriculus and sacculus) are classified as structures of the vestibular system. The utriculus and sacculus provide sensory feedback for linear acceleration, while the semicircular canals do so for rotational acceleration.

The cochlea contains over 32,000 hair cells, while the vestibular system (the semicircular canals, utriculus and sacculus) contains over 134,000. These auditory and vestibular hair cells serve as the workhorses of both systems. The vestibular system is an equally complex organ, but since it is not the focus of this article it is not explored here; see the article on the Vestibular System for a comprehensive treatment. The cochlea itself, about 7 mm across, performs biologically what mathematicians call the Fourier transform: it breaks incoming sound into its constituent frequencies. The basal end of the cochlea encodes the higher end of the audible frequency range, while the apical end encodes the lower end. This tonotopy plays a crucial role in speech processing, as it allows for spectral separation of sounds. A cross-section of the basal end of the cochlea reveals an anatomical structure with three main chambers (scala vestibuli, scala media and scala tympani). Traversing from the basal to the apical end, at a region known as the helicotrema the scala vestibuli merges with the scala tympani, so the apical end terminates with only two cochlear chambers. The fluid found in the scala vestibuli and scala tympani is termed perilymph, while the scala media contains endolymph.[4]
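
One common way to describe this tonotopy quantitatively is Greenwood's empirical place-frequency function; the human parameter values below are the usual published fit, assumed here for illustration.

```python
def greenwood_frequency(x):
    """Best frequency (Hz) at fractional distance x from the cochlear apex
    (x = 0) to the base (x = 1), per Greenwood's human fit."""
    return 165.4 * (10 ** (2.1 * x) - 0.88)

for x in [0.0, 0.25, 0.5, 0.75, 1.0]:
    print(f"position {x:.2f} from apex -> ~{greenwood_frequency(x):,.0f} Hz")
```

Note that the endpoints land near 20 Hz at the apex and roughly 20 kHz at the base, matching the audible range and the basal-high, apical-low arrangement described above.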

Transduction

Auditory Hair Cells

The auditory hair cells are the workhorses of the system and are located both in the cochlea and in the semicircular canals. Their primary function in the cochlea is mechanotransduction, which is explained below. The paucity of auditory hair cells is surprising when compared with other sensory cells such as the rods and cones of the visual system: the loss of a low number (on the order of thousands) of auditory hair cells can be devastating, whereas the loss of a larger number of retinal cells (on the order of hundreds of thousands) is not as inhibitive from a sensory standpoint.[5] Hair cells divide on the whole into two main categories: inner hair cells and outer hair cells. The inner hair cells are the primary sensory receptors, and a significant amount of the sensory input to the auditory cortex arises from them. The outer hair cells, on the other hand, boost the neuronal signal through electromechanical feedback and are largely responsible for the frequency tuning that is characteristic of the cochlea.

Mechano-transduction

Considering the anatomy of an auditory hair cell, a hair bundle can be seen on its apical surface. Each hair bundle has about 300 actin-cytoskeleton projections known as stereocilia, anatomically arranged in order of progressively increasing height. The actin filaments in these stereocilia are highly interlinked and cross-linked with fimbrin, which makes these pseudo-ciliary projections quite stiff. In addition to the stereocilia, a true ciliary structure known as the kinocilium exists and is believed to play a role in the hair cell degeneration caused by exposure to high frequencies.[1][6]

Each stereocilium inserts into the apical membrane, where it is hinged. When the bundle is displaced toward its tallest stereocilium, tension opens MET (mechanoelectrical transduction) channels, beginning at the tallest stereocilium and propagating through the smaller stereocilia of that bundle, depolarizing the cell. These MET channels, which open when mechanical perturbations occur in the surrounding fluid, are interconnected by filaments known as tip links and fall under the category of cation-selective transduction channels. Potassium is the ion that initiates the depolarization cascade by entering the cell through open MET channels. This depolarization induces neurotransmitter-filled vesicles to fuse with the basal membrane of the hair cell, which in turn generates an action potential in the auditory neuron. Hyperpolarization, which occurs as potassium leaves the cell, is equally important, as it halts this vesicle fusion. Thus, as in most of the body, transduction depends on the concentration and distribution of ions. The perilymph found in the scala tympani has a low potassium concentration, while the endolymph found in the scala media has a high potassium concentration and sits at an electrical potential of about +80 mV relative to the perilymph. The stereocilia are highly sensitive, able to respond to fluid perturbations as small as 0.3 nm, and can convert this mechanical stimulus into a nerve impulse in about 10 microseconds.
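
Mechanotransduction of this kind is often modelled with a two-state Boltzmann curve: the fraction of open MET channels rises sigmoidally with bundle displacement, so tiny deflections around the operating point produce large swings in receptor current. The half-activation displacement and sensitivity below are illustrative assumptions, not measured values.

```python
import numpy as np

x0 = 20e-9    # displacement at half activation, metres (assumed)
s = 10e-9     # displacement sensitivity, metres (assumed)

def open_probability(x):
    """Fraction of MET channels open at bundle displacement x (metres)."""
    return 1 / (1 + np.exp(-(x - x0) / s))

for x in [0.3e-9, 10e-9, 20e-9, 50e-9]:
    print(f"displacement {x * 1e9:5.1f} nm -> "
          f"P(open) = {open_probability(x):.3f}")
```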

Nerve fibers in the Cochlea

There are primarily two types of neurons found in the cochlear region: Type I and Type II. Each type shows specific cell selectivity within the cochlea, as enumerated in the sections below. Two diametrically opposed hypotheses have been proposed for the mechanism that determines this selectivity: the peripheral instruction hypothesis and the cell autonomous instruction hypothesis. The peripheral instruction hypothesis states that phenotypic differentiation between the two neuron types does not occur until the undifferentiated neurons attach to hair cells, which then dictate the differentiation pathway. The cell autonomous instruction hypothesis states that differentiation into Type I and Type II neurons occurs after the last phase of mitotic division but before innervation. Both types of neurons participate in the transmission of encoded electrical impulses toward the primary auditory cortex, as explored in the next section.

Type I Neurons

Type I neurons innervate the inner hair cells; innervation proceeds along the apical-to-basal direction, with significantly greater convergence of these neurons toward the basal end than toward the apical end. A radial fiber bundle acts as an intermediary between Type I neurons and the inner hair cells. The innervation ratio between Type I neurons and inner hair cells is 1:1, which results in high signal-transmission fidelity and resolution. Information from Type I neurons is transmitted in parallel to the central nervous system.

Type II Neurons

Type II neurons, on the other hand, innervate the outer hair cells, again along the apical-to-basal direction, but with significantly greater convergence toward the apical end than toward the basal end. An innervation ratio of 1:30-60 is seen between Type II neurons and outer hair cells, which makes these neurons ideal for electromechanical feedback. Information from Type II neurons is integrated across multiple neurons before transmission to the central nervous system. Type II neurons can be physiologically manipulated to innervate inner hair cells instead, provided the outer hair cells have been destroyed, whether through mechanical damage or through chemical damage induced by drugs such as gentamicin.[7]

Auditory Cortex

[Figure: transmission pathway of the neuronal impulse from cochlea to auditory cortex]

Primary auditory neurons carry action potentials from the cochlea through the transmission pathway shown in the figure. Each relay station along this pathway acts as an integration center; they cannot be examined in comprehensive detail within the limited scope of this article. The final integration center, the primary auditory cortex (A1), is located in the superior temporal gyrus of the temporal lobe. A1 has a precise tonotopic map (frequencies are organized in ascending order along the dorsal-to-ventral transverse axis), and this tonotopy matches the tonotopic coding seen in the cochlea. A1 is central to the processing of communication sounds, and much research is still being conducted to better understand both the spatial and the temporal processing of signals that occurs in this region. It is hypothesized that this region of the brain contains combination-sensitive neurons that have nonlinear responses to combinations of stimuli. The neurons in this structure are further believed to be highly specialized for temporal processing, depending heavily on timing cues presented through the lags between the neuronal signals of the two ears. Improvements in the synaptic wiring of these temporal-processing neurons are thought to be responsible for the enhanced echolocation seen in blind people who learn to locate objects purely from perceived sound.
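
The interaural timing cue mentioned above can be sketched with Woodworth's spherical-head approximation, in which a distant source at azimuth theta reaches the far ear later by roughly (d/c)·(theta + sin theta); the head radius used below is an assumed textbook value.

```python
import numpy as np

head_radius = 0.0875     # m, approximate human head radius (assumed)
speed_of_sound = 343.0   # m/s in air

def itd(azimuth_deg):
    """Interaural time difference (s) for a distant source at this azimuth."""
    theta = np.radians(azimuth_deg)
    return (head_radius / speed_of_sound) * (theta + np.sin(theta))

for az in [0, 30, 60, 90]:
    print(f"azimuth {az:2d} deg -> ITD ~ {itd(az) * 1e6:.0f} microseconds")
```

Maximal lags of only a few hundred microseconds illustrate why these temporal-processing neurons must be so precise.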

Any sound, especially speech, is essentially a conglomeration of frequencies: a fundamental frequency together with components at integer multiples of it. Recent studies conducted in bats and other mammals have revealed that the ability to process and interpret these modulations in frequency occurs primarily in the superior and middle temporal gyri of the temporal lobe. Interestingly, lateralization exists even in audition, with speech localized to the left hemisphere and environmental sounds to the right hemisphere of the auditory cortex. Music, with its influence on emotions, is also processed in the right hemisphere. While the reason for such localization is not well understood, lateralization in this instance does not imply exclusivity: both hemispheres participate in the processing, but one tends to play a more significant role than the other.
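
A minimal sketch of such a harmonic complex, with an assumed 150 Hz fundamental and arbitrarily decaying harmonic amplitudes, shows how a speech-like sound is built from integer multiples of one frequency.

```python
import numpy as np

sample_rate = 16000                        # Hz (assumed)
t = np.arange(0, 0.05, 1 / sample_rate)    # 50 ms of samples
f0 = 150.0                                 # fundamental frequency (assumed)

# Sum the first five harmonics, each weaker than the last.
sound = sum((1 / n) * np.sin(2 * np.pi * n * f0 * t) for n in range(1, 6))

print("harmonics used:", [f"{n * f0:.0f} Hz" for n in range(1, 6)])
```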


Recent Breakthroughs

  • Changes in encoding mechanism have been observed as the signal progresses through the auditory pathway: encoding shifts from synchronous responses in the cochlear nucleus to rate encoding in the inferior colliculus.[8]
  • Despite advances in gene therapy that allow the alteration of the expression of genes affecting audition, such as Atoh1, and the use of viral vectors to that end, the micromechanical and neuronal complexity surrounding the inner-ear hair cells means that artificial regeneration in vitro remains a distant reality.[9]
  • Recent studies by Luis Lemus suggest that the auditory cortex may not be as involved in top-down processing as was previously thought. In studies conducted on primates performing tasks that required the discrimination of acoustic flutter, Lemus found that the auditory cortex played only a sensory role and had nothing to do with the cognition of the task at hand.[10]
  • Because tonotopic maps are present in the auditory cortex at an early age, it was assumed that cortical reorganization had little to do with establishing them. However, recent work by Kandler et al. has shown that these maps are formed and refined by plastic reorganization at the subcellular and circuit levels.[11]


References

  1. Hudspeth, A. J. (1989). "How the Ear Works". Nature, 341, 397-404.
  2. Hudde, Herbert and Weistenhöfer, Christian (2006). "Key Features of the Human Middle Ear". Journal for Otorhinolaryngology, 324-329.
  3. Hudspeth, A. J. (2000). "Auditory Neuroscience: Development, Transduction and Integration". Proceedings of the National Academy of Sciences, 11690-11691.
  4. Hudspeth, A. J. (2001). "How the Ear's Works Work: Mechanoelectrical Transduction and Amplification by Hair Cells of the Internal Ear". Harvey Lectures, 97, 41-54.
  5. Kaas, Jon H. et al. (1999). "Auditory Processing in Primate Cerebral Cortex". Current Opinion in Neurobiology, 9, 500.
  6. Fettiplace, Robert et al. (2006). "The Sensory and Motor Roles of Auditory Hair Cells". Nature Reviews Neuroscience, 7, 19-29.
  7. Rubel, Edwin W. (2002). "Auditory System Development: Primary Auditory Neurons and their Targets". Annual Review of Neuroscience, 25, 51-101.
  8. Frisina, Robert D. (2001). "Subcortical Neural Coding Mechanisms for Auditory Temporal Processing". Hearing Research, 158, 1-27.
  9. Brigande, John V. (2009). "Quo Vadis, Hair Cell Regeneration?". Nature Neuroscience, 12, 679-685.
  10. Lemus, Luis (2009). "Neural Codes for Perceptual Discrimination of Acoustic Flutter in the Primate Auditory Cortex". Proceedings of the National Academy of Sciences, 106, 9471-9476.
  11. Kandler, Karl et al. (2009). "Tonotopic Reorganization of Developing Auditory Brainstem Circuits". Nature Neuroscience, 12, 711-717.