

Describe common aphasias based on the Lichtheim model, through neurological imaging methods and recent studies on language organization in the brain.
Language - Other
Undergraduate 4





Transcortical Sensory Aphasia
Results from a cut in communication between the auditory cortex and Lichtheim's "concept area." Comprehension is bad, repetition is ok.
Transcortical Motor Aphasia
Results from a lack of communication between the motor speech area and Lichtheim's "concept area." Production is bad (like Broca's aphasia), repetition is ok.
Pure Word Deafness

  • Bilateral damage to auditory cortex (BA 41-42)
  • Patients can hear ordinary sounds but cannot understand spoken language; the comprehension deficit resembles Wernicke's aphasia, but production is fine.

Severely impaired articulation (motor speech). Results from a loss of communication between the motor speech area and the articulatory output.
BOLD Response

Blood Oxygen Level Dependent. Refers to blood vessels reallocating oxygenated blood to areas of the brain that require more energy to carry out computation. fMRI can detect how densely a vessel is populated with oxygenated vs. deoxygenated blood because the two differ in their magnetic signal.


Time course of the BOLD response:

1.) Small initial dip in BOLD intensity.

2.) Peak as oxygenated blood floods in to meet increased oxygen demand from neuronal activity.

3.) "Trough" (undershoot) period where the signal falls below baseline before returning to it.
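The peak and trough phases above roughly match the canonical "double-gamma" hemodynamic response function used in fMRI analysis (the standard double-gamma captures the peak and undershoot; the small initial dip is usually omitted). A minimal sketch, assuming the commonly used SPM-style parameters:

```python
import math

def hrf(t, a1=6.0, a2=16.0, ratio=1/6):
    """Canonical double-gamma hemodynamic response function (toy version).

    The first gamma models the oxygenation peak (~5 s after stimulus);
    the second gamma, scaled down, models the post-stimulus undershoot.
    """
    if t <= 0:
        return 0.0
    peak = t ** (a1 - 1) * math.exp(-t) / math.gamma(a1)
    undershoot = t ** (a2 - 1) * math.exp(-t) / math.gamma(a2)
    return peak - ratio * undershoot

ts = [i / 10 for i in range(0, 301)]        # 0..30 s in 0.1 s steps
vals = [hrf(t) for t in ts]
t_peak = ts[vals.index(max(vals))]           # BOLD peak
t_trough = ts[vals.index(min(vals))]         # post-stimulus undershoot
print(f"peak at ~{t_peak:.1f} s, undershoot at ~{t_trough:.1f} s")
```

The 6-8 s sluggishness listed under fMRI's cons is visible here: the modeled response does not peak until roughly 5 s after a stimulus and takes tens of seconds to settle.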

Pros and Cons of fMRI


  • Good spatial localization (~4 mm)
  • Bad temporal resolution (6-8 s); this is limited by the BOLD response itself and therefore will not improve with better scanners
  • No timing information (cannot tell when the brain begins processing)
  • No trial randomization
  • Not all brain areas can be imaged equally well.


Pros and Cons of PET


  • Good spatial localization (~4 mm)
  • Straightforward tracking of blood flow via an injected radioactive tracer
  • Can monitor the use of neurotransmitters (NTs)
  • Awful temporal resolution (30-40 s), which also depends on which isotope is being used
  • No timing information
  • No trial randomization
  • Isotopes are radioactive and must be manufactured on-site
  • Patients are exposed to radiation and may only be imaged a few times.


Pros and Cons of Hemodynamic Imaging Methods


  • Great localization. Results can be overlaid directly onto a picture of the patient's brain and the locations will basically line up
  • However, detection of activation depends on BLOOD:
    • blood flow can be slowed by physical impediments
    • nearby arteries/veins affect image quality, arteries (oxygenated) much more than veins
  • Difficult to locate the exact source of activation, similar to seeing a supernova millennia after the event
  • Relies on subtracting a baseline response from a condition response
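The subtraction logic in the last bullet can be sketched with toy numbers (all values hypothetical): average the signal over task epochs and over baseline epochs for each voxel, subtract, and report voxels whose difference exceeds a threshold as "activated."

```python
# Toy subtraction analysis: 4 voxels, signal averaged over epochs.
# All numbers are made up for illustration.
baseline_mean  = [100.0, 100.0, 100.0, 100.0]   # mean signal, rest condition
condition_mean = [101.5, 100.1, 103.0, 99.9]    # mean signal, task condition

# Contrast: condition minus baseline, per voxel.
contrast = [c - b for c, b in zip(condition_mean, baseline_mean)]

# Report voxels exceeding a (hypothetical) activation threshold.
threshold = 1.0
active = [i for i, d in enumerate(contrast) if d > threshold]
print(active)   # -> [0, 2]
```

The weakness this bullet points at is visible in the sketch: everything hinges on the baseline being a fair comparison, and any signal shared by both conditions disappears in the subtraction.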


Pros and Cons of ERP


  • Directly records neural activity (signals at the scalp)
  • Measures timing on a ms level
  • Terrible spatial resolution: the body is an electrical conductor, so the signal travels along the least-impeded path to the scalp. Where a signal is recorded is therefore not always its point of origin, and the signal is always attenuated.


Details of detection in MEG


  • Recordings are made perpendicular to the source, since magnetic fields circle the line of current flow.
  • Records magnetic signals on the order of 100 fT (femtotesla; ~1x10^-13 T), about 8 orders of magnitude smaller than Earth's magnetic field.
  • Shielding, careful placement of the machine, and differential placement of coils are used to offset signals from other sources.
  • Magnetic coils are placed at different distances from the head. Signals from far-away sources look nearly identical at both coils (and cancel when subtracted), while nearby brain signals look different between the coils (and survive).
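The differential-coil idea (a gradiometer) can be shown with a toy numeric sketch. Assuming only an illustrative 1/r^3 dipole-like falloff and made-up source strengths and distances: a nearby brain source produces clearly different fields at two coils a few centimetres apart, while a distant environmental source produces nearly identical fields, so subtracting the coils cancels the environment but keeps the brain signal.

```python
def field(strength, r):
    """Dipole-like field magnitude falling off as 1/r^3 (toy model)."""
    return strength / r ** 3

COIL_GAP = 0.04                       # 4 cm between the two coils (hypothetical)

# Nearby brain source (~3 cm from the first coil), very weak.
brain_near = field(1e-6, 0.03)
brain_far  = field(1e-6, 0.03 + COIL_GAP)

# Distant environmental source (~5 m away), enormously stronger.
env_near = field(1e6, 5.0)
env_far  = field(1e6, 5.0 + COIL_GAP)

# The gradiometer records the difference between the two coils.
brain_diff = brain_near - brain_far   # what we want to keep
env_diff   = env_near - env_far       # what we want to cancel
print(f"brain signal retained: {brain_diff / brain_near:.0%}, "
      f"environment remaining: {env_diff / env_near:.1%}")
```

With these toy numbers most of the brain signal survives the subtraction while only a few percent of the environmental signal leaks through, which is exactly the asymmetry the last bullet describes.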


Pros and Cons of MEG


  • Direct neuronal recordings (magnetic)
  • Timing detection at the ms level.
  • Localization ok (~cm level) for cortical surfaces
  • No subtraction of signal required
  • Can do single-subject analysis due to sensitivity (unlike ERP, which requires grand averages)
  • Localization is difficult due to the inverse problem
  • LOTS of noise


Processing Steps of MEG/ERP
  • EEG
    1. Artifact rejection-- remove signals that are obviously too large or irregular to be from the brain.
    2. Epoching-- define time windows before/after each stimulus in which responses can occur.
    3. Time Locking-- align the recordings to stimulus onset.
    4. Averaging-- across many trials, tends to remove most non-stimulus noise.
    5. Filtering-- removes all signal components above/below a frequency threshold.
  • MEG
    1. Epoch
    2. Average
    3. Baseline Correct-- subtract the average pre-stimulus signal from each epoch.
    4. Filter
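The EEG steps above can be sketched end-to-end on synthetic data (all signal shapes, event times, and thresholds are made up for illustration): cut epochs around each stimulus, reject epochs with implausibly large amplitudes, average across trials to suppress non-stimulus noise, and baseline-correct using the pre-stimulus interval.

```python
import random

random.seed(0)
PRE, POST = 20, 80               # epoch window: 20 samples before, 80 after

# Synthetic "continuous EEG": noise plus a stimulus-locked bump at each event.
events = [200, 500, 800, 1100, 1400]
eeg = [random.gauss(0, 1) for _ in range(1600)]
for ev in events:
    for i in range(30):
        eeg[ev + 10 + i] += 5.0 * (1 - abs(i - 15) / 15)   # triangular ERP

# Corrupt one trial with a huge artifact (e.g. a blink).
for i in range(events[2], events[2] + 40):
    eeg[i] += 100.0

# 1-2. Epoching + artifact rejection: drop epochs exceeding +/-50 units.
epochs = [eeg[ev - PRE: ev + POST] for ev in events]
clean = [ep for ep in epochs if max(abs(x) for x in ep) < 50]

# 3-4. Time-locked averaging across the surviving trials.
avg = [sum(ep[i] for ep in clean) / len(clean) for i in range(PRE + POST)]

# 5. Baseline correction: subtract the mean of the pre-stimulus interval.
base = sum(avg[:PRE]) / PRE
erp = [x - base for x in avg]

print(len(clean), "clean trials; peak at sample",
      erp.index(max(erp)) - PRE, "after stimulus onset")
```

Note how averaging does the heavy lifting: the single-trial noise is as large as the response, but it shrinks with the square root of the number of trials while the stimulus-locked bump stays put.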

Environmental Signal Reduction in ERP/MEG Recording



  • Comfortable chairs, setting
  • Varied stimuli
  • Give breaks
  • Build in blink time for subjects
  • Instruct participants not to move, scratch too much, etc.


Inverse Problem


  • Given a recorded magnetic signal, it is extremely difficult to pinpoint the exact source location and orientation, since many different source configurations can produce the same signal at the sensors
  • Can be done with source modeling.
  • Without this, though, localization is very difficult.
  • Forward Problem-- given a source vector, predict what kind of signal it would create. This is very easy.
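The asymmetry between the two problems can be shown with a toy linear "leadfield" model (all matrix values hypothetical): with more candidate sources than sensors, the forward computation is a single matrix-vector product, but two different source patterns can produce identical sensor readings, so the inverse has no unique answer without extra modeling assumptions.

```python
# Toy leadfield: 2 sensors, 3 candidate sources (values made up).
# Each row is a sensor; each column is how strongly a source projects to it.
L = [[1.0, 2.0, 1.0],
     [0.0, 1.0, 1.0]]

def forward(src):
    """Forward problem: source vector -> sensor readings (easy, unique)."""
    return [sum(L[i][j] * src[j] for j in range(3)) for i in range(2)]

s1 = [1.0, 1.0, 0.0]
# s2 differs from s1 by [1, -1, 1], which the leadfield maps to zero:
# L @ [1, -1, 1] = [1 - 2 + 1, 0 - 1 + 1] = [0, 0]
s2 = [2.0, 0.0, 1.0]

print(forward(s1), forward(s2))   # identical readings from different sources
```

Source modeling resolves this ambiguity by adding constraints (e.g. where sources are anatomically plausible), which is why the card says localization is possible with it but very difficult without it.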


Cortical Deafness


  • Bilateral damage to the auditory cortex (BA 41-42)
  • Cannot hear, but subcortical hearing (the ear's physical mechanisms) is fine.


Auditory Agnosia


  • Damage to auditory "association" area (BA 37/22)
  • Speech perception ok, recognizing non-speech sounds is bad
  • Sort of a reverse pure word deafness


N100/M100 Response
  • Brain response originating from auditory cortex peaking around 100ms after stimulus onset
  • Sensitive to changes in sound frequency
  • Useful for differentiating speech sounds as most "vowels" rely on a certain frequency to be distinct
  • The latency should change if different frequencies are detected (MMF)


Mismatch Negativity/Field (MMN-- ERP; MMF-- MEG)

  • Differential brain response when hearing a deviant sound frequency compared to a repeated baseline (standard) sound.
  • Responds to differences in any sound, but also phonemes

Sharma et al. 1993

  • Experiment 1: Compare physically different sounds that are the same phoneme (8 ms /da/ vs. 16 ms /da/ vs. 24 ms /da/, etc.)
  • Experiment 2: Compare physically and phonemically different sounds (/da/ vs. /ga/)
  • Are the MMNs bigger/smaller between experiments?
  • No; no statistically significant difference.

Näätänen et al. 1997

  • Play sounds that become more and more physically distinct from a standard
  • If the MMN detected only physical properties (not phonemes), it should increase steadily with the deviance level from the standard
  • One vowel in the set is non-native to the listeners; despite its acoustic deviance, it did not generate a larger MMN
  • Conclusion: phonemes do matter.

Phillips et al. 2000

  • Looks at whether phonemic distinction is categorical-- that is, does the brain treat all phonemes within a range the same?
  • Play a huge range of /da/ and /ta/ tokens, so there is no single physical "baseline."
  • The many-to-one (standard-to-deviant) ratio exists only at the level of phoneme categories.
  • Control: same stimuli, but with no many-to-one ratio between categories
  • A mismatch response (categorization) appeared only when the many-to-one ratio was in effect.

Phillips, Pellathy, Marantz Follow-up

  • Should be able to recreate Phillips et al. 2000 with more consonants
  • Different consonants have different thresholds for when they are interpreted as which consonant (so, /ka/ vs. /ga/)
  • Not only were results confirmed (categorization across consonants) but difference was localized to LH

Sams et al. 1991

  • MEG study, played /pa/ in every trial, listeners watched a face pronounce either /pa/ or /ka/ (deviant)
  • MMF ~180 ms after stimulus onset on trials where the mouthed syllable deviated, even though the auditory input never changed
  • Control: auditory stimulus played with a green/red light. No difference. The face matters.

Dehaene-Lambertz et al. 2000

  • Based on Dupoux et al. 1999: Fr and Jp listeners heard "ebzo," decided if there was an /u/ in recordings. No /u/ was ever present, but Jp listeners almost always heard it.
  • Listeners hear a baseline "egma" and a deviant "eguma," and must decide whether they are the same or different.
  • Fr listeners had early middle and late responses to the deviant
  • Jp listeners only late responses
  • Therefore, phonemic categorization follows language rules.

Kazanina 2006

  • Does MMN really show differences in phonetic categories? Or just any language-important differences?
  • Russian vs. Korean listeners: /d/ and /t/ are different phonemes in Russian, but variants of the same phoneme in Korean, although the difference is still acoustically detectable.
  • Russian listeners showed categorical perception; Korean listeners did not, even though they could rate how "good" a /t/ sounded on a sliding scale
  • Consistent with Phillips 2000: MMN reflects phonological categorization.

Evidence for LH lateralization of speech perception


  • Binder et al. 2000: Played words vs. tones, pseudowords vs. tones, and reversed words vs. tones. LH areas respond most when words or word-like sounds are heard.
  • See Burton et al. 2000: dip/tip, dip/ten experiment. Auditory cortex used in discrimination, Broca's area recruited for segmentation
  • See Wang et al.: training in Mandarin tone recognition produced LH-lateralized responses.
  • See Phillips et al. 2000: MEG, showed LH lateralization in categorization of phonemes.


Burton et al. 2000


  • fMRI: mapping computational use of the brain
  • Discrimination: comparing words, is a certain sound different from the control?
  • Segmentation: breaking words into individual phonemes. Does each phoneme match up across words?
  • Disc: do "dip" and "tip" have the same first consonant?
  • Seg: Do "dip" and "ten" have the same first consonant?
  • Segmentation should be harder and require recruitment of more brain areas
  • Exp 1 (discrimination) showed bilateral superior temporal gyrus (STG) activation.
  • Exp 2 (segmentation) showed LH-centered inferior frontal gyrus (IFG) activation along with STG
  • That is, auditory cortex used in discrimination, but both AC and Broca's Area used when segmentation was necessary


Wang et al. 

  • Trained Eng listeners in recognizing tone in Mandarin Chinese
  • No MMN/MMF before training, MMN/MMF appeared in LH after training.
