
23 Studying the Brain

written by Jennifer Stamp, Kevin LeBlanc, & Noémie Bergeron-Germain

Learning Objectives

By the end of this section, you will be able to:

  • Describe non-invasive techniques to view brain structure
  • Describe non-invasive techniques to view brain activity
  • Identify advantages and disadvantages for neuroimaging techniques

What tools do we have to measure what’s happening inside the brain? For thousands of years, the main method was to take it out and examine it, which provided rich detail about neuroanatomy (Finger, 1994). However, this method limits the researcher to a single point in time and doesn’t tell us much about activity. Also, people tend to like their brains inside their heads while they’re still using them. The advent of neuroimaging changed all that, making it possible to monitor changes in a living brain as it completes tasks.

Neuroimaging is the use of techniques to study the structure and function of the nervous system, developed as an objective, non-invasive way of studying the healthy human brain. This multidisciplinary field spans neuroscience, psychology, computer science, statistics, and medicine. When neuroimaging happens in a medical setting it’s referred to as neuroradiology, but the methods are similar to those used in a research setting.

Viewing Brain Structure

Most attempts at imaging the brain in the early 1900s could only show cerebral blood vessels and ventricles, but in the 1970s Allan Cormack and Godfrey Hounsfield developed computed axial tomography, or CAT, a technique that takes virtual slices of the brain using x-rays (Oransky, 2004). This earned them the Nobel Prize in Physiology or Medicine in 1979 and led to the technique’s widespread use for identifying brain injuries like strokes and tumours. CAT is faster and cheaper than other structural neuroimaging techniques, but it doesn’t provide the same level of detail and requires exposure to potentially harmful x-ray radiation, which limits its use.

 

Figure BB.23: A CAT scan can be used to show brain tumours. (a) The image on the left shows a healthy brain, whereas (b) the image on the right indicates a brain tumour in the left frontal lobe. (credit a: modification of work by “Aceofhearts1968”/Wikimedia Commons; credit b: modification of work by Roland Schmitt et al.)

Magnetic resonance imaging (MRI) is a newer technique that uses strong magnets to excite atomic particles. Different atoms have specific responses to magnetic energy, and this unique signature can be detected, which allows for better detail than x-ray-based techniques like CAT. This technological advance resulted in another Nobel Prize in Physiology or Medicine in 2003, this time for Peter Mansfield and Paul Lauterbur (Wikipedia, Neuroimaging, 2025). Structural MRI (usually referred to as just MRI) allows for excellent resolution of soft tissue, as shown in Figure BB.24. These scans are virtual slices through the brains of two identical twins (oriented as if we’re standing facing them). The twin on the right had a diagnosis of schizophrenia, while the twin on the left was unaffected. The white arrows show the lateral ventricle, a large space in the brain filled with cerebrospinal fluid, which is bigger in the twin on the right (Suddath et al., 1990). Using structural MRI, researchers discovered a correlation between enlarged brain ventricles and schizophrenia.

Figure BB.24: MRI scans showing the Y-shaped lateral ventricle (white arrows) for two identical twins; (A) is unaffected while (B) has a diagnosis of schizophrenia. Adapted from Suddath et al., 1990.


Diffusion tensor imaging (DTI) is an MRI-based technique that specifically focuses on the movement of water molecules along neural pathways. The major information highways in the brain are made up of bundles of myelinated axons forming large white matter tracts, and water in the extracellular fluid flows along these tracts like small rivers. By detecting the movement of water molecules, researchers can reconstruct these pathways using software and colour coding (Ranzenberger et al., 2025). The images in Figure BB.25 show the corpus callosum in an individual with multiple sclerosis on the right and an unaffected control on the left (Filippi et al., 2021). The lack of purple and blue tracts in the affected individual indicates loss of white matter.

 

Figure BB.25: Loss of white matter in the corpus callosum (CC) in multiple sclerosis (right) and an unaffected control (left). Colour-coding represents the direction of movement of water molecules, revealing axon tracts in the brain. Adapted from Filippi et al., 2021.

The widespread use of these MRI-based techniques changed neuroscience research, allowing us to ask questions that were previously impossible. But despite their usefulness, structural MRI and DTI have several drawbacks. They require specialized equipment, facilities, and technicians, so this type of neuroimaging is restricted to hospital and university laboratories. Also, MRI requires the scanned person to remain very still in a noisy, snug tube, which is challenging for some people, like small children or individuals with claustrophobia.

Viewing Brain Activity

Structural imaging is useful for viewing anatomical abnormalities like lesions or tumours, but these techniques can’t detect brain activity. There are two main approaches to assessing activity in the living brain: monitoring blood flow and measuring neuronal activity directly.

Techniques that track blood flow in the brain rest on the assumption that busy neurons are hungry neurons. As a brain area becomes more active, more blood flows to deliver glucose and oxygen (Ogawa et al., 1990), and there are several ways to measure this. One of the earliest methods was positron emission tomography (PET), in which a mildly radioactive tracer is administered to the participant to track active neurons. The most common tracer is fluorodeoxyglucose, which, like regular glucose, is taken up as an energy source by neurons, and specialized equipment detects radiation emitted from areas where this tracer accumulates. This information is then reconstructed into a rough map of the locations of increased brain activity.

Tracers can also be attached to drugs to monitor neurotransmitter changes, so PET is useful for looking at different aspects of neuronal activity, not just glucose use. For example, a radiotracer can bind specifically to amyloid plaques, abnormal clumps of protein fragments that build up in the brain, disrupt neuronal communication, and contribute to the development of Alzheimer’s disease. The PET scanner detects the radiation emitted by the tracer, producing images that reveal the distribution and amount of amyloid buildup, as seen in the image below.

Figure BB.26: PET scan showing the amount of amyloid plaque tracer (red = higher) in the brain of a healthy control relative to a patient with Alzheimer’s disease. Adapted from Chapleau et al., 2022.

PET can’t pinpoint events precisely in time and requires that the brain (and participant) be exposed to radiation; for these reasons, it has largely been replaced by other methods, like functional MRI.

Functional MRI (fMRI), like structural MRI and DTI, uses strong magnetic fields to detect unique signatures given off by molecules in a magnetic field. What makes it different is that it specifically focuses on haemoglobin, which carries oxygen around in the blood. Active neurons use glucose as a fuel, and this process requires oxygen, so haemoglobin comes to the rescue when needed. It turns out that oxygenated and deoxygenated haemoglobin behave very differently in a magnetic field, and this blood oxygenation level dependent (BOLD) signal is used as a proxy for neuronal activity (Ogawa et al., 1990).

fMRI and BOLD have become the gold standard for measuring brain activity and have allowed us to ask questions that were once considered impossible, as shown in a 2006 study of a patient in a vegetative state after a serious brain injury. When asked to imagine certain actions such as playing tennis or walking around her home, she showed the same brain activation as uninjured controls (Owen et al., 2006; Figure BB.27). The fact that she could respond with her own brain activity in an intentional, purposeful way allowed her to communicate despite her inability to move.

Figure BB.27: fMRI of a patient in a vegetative state (top) and an uninjured control (bottom) after motor imagery (playing tennis, left) or spatial navigation (thinking of walking around home, right). Red and yellow indicate areas specifically active for these tasks, after subtraction of baseline activity. Adapted from Owen et al., 2006.

fMRI shares the same drawbacks as MRI and DTI: it’s expensive, restricted to lab or hospital settings, and sensitive to motion. This last issue restricts the types of tasks possible in fMRI, since they can’t involve much movement. Stimuli are displayed to participants in the scanner using screens positioned in their central vision, but responses are limited to subtle movements, like squeezing a ball.

An alternative to fMRI is functional near-infrared spectroscopy (fNIRS), which also measures the BOLD signal but uses light instead of magnets. Light in the near-infrared range, between 650 and 1000 nm, isn’t absorbed by skin, bone, or brain tissue, but it is absorbed by haemoglobin. Furthermore, oxygenated and deoxygenated haemoglobin have distinct absorption patterns, so any “leftover” light that’s reflected can be detected from the scalp (Figure BB.28). As with fMRI, the difference between oxygen-rich and oxygen-poor blood is used to calculate the BOLD signal, which serves as an indication of neuronal activity (Ferrari & Quaresima, 2012).

One huge benefit of fNIRS over fMRI is its tolerance to motion, so it can be used in more types of behavioural tasks. One study used fNIRS to track motor learning on a downhill skiing video game (Nintendo Wii™ and Wii-Fit™) and compared cortical activity during the beginner trials to later ones. Participants showed more oxygenated haemoglobin in the right temporal lobe, an area implicated in balance, during the more advanced trials (Figure BB.28; Karim et al., 2011). The apparatus itself is portable, so researchers can take the laboratory to the participants, which isn’t possible with PET or fMRI (Scarapicchia et al., 2017). Motion tolerance and portability make fNIRS ideal for measuring brain activity in children, even young infants (Wilcox & Biondi, 2015). The temporal resolution of fNIRS is on the order of milliseconds, compared to 1-5 seconds for an fMRI BOLD signal. The biggest drawback of fNIRS is its limited spatial resolution: since near-infrared light can’t penetrate far, imaging is limited to the cortex.

Figure BB.28: Left image: Participant playing downhill skiing while monitored with fNIRS. Right image: Average BOLD activity for (A) initial trials and (B) advanced levels. Adapted from Karim et al., 2011.

Tracking blood flow, oxygen levels, and glucose use doesn’t directly measure neuronal activity, because the BOLD signal lags behind the neural event by a few seconds. Neurons can fire up to 1000 action potentials per second (Kandel et al., 2013), so a lot can be missed while waiting for blood flow to respond, which makes it difficult to pinpoint the exact timing of neural activity. Consider how quickly your brain works: the average person can text 3 words per second (Palin et al., 2019), so BOLD signals are too slow to capture what’s happening during this behaviour. To capture these rapid events, we need to talk directly to neurons. When a neuron fires in the brain, it generates a tiny electric field, too small to be detected on its own, but when thousands of neurons fire together, their combined activity can be detected from the surface of the head.

Electroencephalography (EEG) is a non-invasive electrophysiological monitoring technique that is commonly used to measure electrical activity in the brain. Typically, EEG data is recorded using electrodes embedded in a mesh cap that is placed on the participant’s head, with special gel between the electrodes and the scalp to enhance electrical signals (see Figure BB.29). Each electrode monitors the combined neural activity in a particular region, and the results are displayed as brainwaves (see Figure BB.30), showing both the number of waves per second, or frequency, and the height of the recorded waves, or amplitude. Because EEG measures neuronal activity directly, it can detect events with millisecond accuracy (Cohen, 2017).

Figure BB.29: Using caps with electrodes, modern EEG research can study the precise timing of overall brain activities. credit: SMI Eye Tracking 

Figure BB.30: EEG trace across 16 electrodes, each read from the left. The top five traces show activity during a right hippocampal seizure, starting at the mid-point. High-amplitude waves like these indicate potential underlying pathology. Image source: Wikimedia Commons, Velasco et al. (2016).

EEG can be used in research settings to identify different mental states (e.g., focused vs relaxed), sleep stages (e.g., falling asleep, light sleep, or deep sleep), cognitive functioning (e.g., memory retrieval, language, social perception), and brain activity patterns of specific disorders. Clinically, EEG is the gold standard for diagnosing epilepsy. It is also routinely used in diagnosing and monitoring sleep disorders and brain injuries.

EEG offers many advantages compared to other neuroimaging techniques: it’s cheap, portable, and non-invasive. It also has excellent temporal resolution, which means that changes in neural activity can be detected in real time. However, it has poor spatial resolution because it only records activity from the surface of the cortex, making it difficult to localize brain signals or detect activity in deeper regions. Additionally, EEG has a low signal-to-noise ratio; neural signals can easily be masked by signals from tensed muscles, eye movements, or the environment. In addition to these technical limitations, the conventional design of EEG is not compatible with Afro-textured hair, leading to the systematic exclusion of people of African heritage from EEG studies and, in some cases, to medical misdiagnoses (see Dig Deeper box).

Magnetoencephalography (MEG) is another brain imaging technique that records the brain’s activity by picking up tiny magnetic fields (see Figure BB.31). These magnetic fields are produced when groups of neurons work together and send electrical signals. While tools like EEG measure the electrical activity directly through the scalp, MEG focuses on the magnetic fields that those electrical signals produce. These signals are recorded using highly sensitive sensors, allowing researchers to track brain activity with millisecond precision (Lee & Huang, 2020). MEG also offers better spatial resolution than EEG, as magnetic fields are less distorted by the skull and scalp, making it a powerful tool for capturing fast, real-time interactions between brain regions and studying how different areas of the brain communicate.

Despite its strengths, MEG has several limitations. It is extremely expensive to install and maintain, requiring a magnetically shielded room and highly sensitive sensors cooled with liquid helium, and the person being scanned must remain still during recording. As a result, MEG is less widely available and is typically used in specialized research or clinical settings.

Figure BB.31: Person undergoing MEG. Image source: Wikimedia Commons, National Institute of Mental Health.


Dig Deeper by Noémie Bergeron-Germain

Does EEG technology work for everyone?

Despite its widespread use in research and clinical settings, conventional EEG isn’t compatible with curly or coiled coarse hair. The follicle shape, size, and density of this type of hair create more volume, which hinders contact between the EEG cap and the person’s scalp.

Unfortunately, most people with curly or coiled coarse hair are of African or Caribbean ancestry, so they’re more likely than people with straight hair (usually of European or East Asian ancestry) to be excluded from EEG research studies (Franbourg et al., 2009; Loussouran et al., 2007). Indeed, sometimes having coarse curly or coiled hair can disqualify someone from participating in research studies altogether (e.g., Adams et al., 2024; Bradford et al., 2024; Choy et al., 2021). Other times, the signal-to-noise ratio is not good enough for the data to be retained in the final group analysis (Etienne et al., 2020). A literature review by Choy et al. (2021) found that among 81 peer-reviewed EEG research articles, only five explicitly reported a sample of Black participants, and even then it was unclear whether their data were included in the final analysis.

The recruitment and retention of historically marginalized participants is important for research. Otherwise, the representativeness of the studied sample is limited, which threatens the generalizability of the study’s findings. If a study has low external validity, its data does not translate well to practical applications in the real world. This issue is even more critical when evidence from a study with low generalizability is used to inform protocols and policies. Clinically, the conventional design of EEG poses a problem for accurate diagnostic testing in people of African ancestry, thereby contributing to health disparities.

Thankfully, inclusive strides are being made in EEG research. Notably, Etienne, Laroia, and their colleagues (2020) developed the Sevo (Haitian Creole for “Brain”) clip, an EEG equipment prototype that uses Afro-textured hair to its advantage (see Figure BB.32). To use Sevo clips, the participant has their hair braided into straight-back cornrows (see Figure BB.33). The clips are then inserted and held in place by the braids (see Figure BB.34). The individual electrodes are subsequently placed in each clip. Preliminary findings have shown that the Sevo clips performed better than a clinical standard system with a high-quality amplifier (Etienne, et al., 2020; Kwasa et al., 2024). Emerging research has also demonstrated that with a lower quality amplifier, the EEG data obtained with Sevo clips followed the patterns of EEG data (i.e., peaks and deflections) obtained with a more conventional EEG system, although the size of the signal was reduced for the Sevo system (Gomes Nobre, 2024).

Figure BB.32: Components of the Sevo electrode prototype. (a) CAD drawings of the modular components of the electrode and (b) CAD drawing of the assembled clip shown alongside a photograph of the 3D printed prototype. This illustration, from Etienne, Laroia, and colleagues (2020) is included on the basis of fair dealing.

Figure BB.33: (left) A schematic of “straight-back” cornrowing consistent with the 10-20 system. (right) Demonstration of cornrowing on a participant. The hair is braided down the head, exposing the scalp for electrode placement in locations consistent with the 10-20 arrangement. This illustration (left) and this photo (right), from Etienne, Laroia, and colleagues (2020), are included on the basis of fair dealing.

Figure BB.34: Sevo Clips Placed in Cornrowed Hair. This photo, from CBC Radio-Canada Découverte, is included on the basis of fair dealing. Click here to watch the full report on racial biases in medical diagnoses.

Other EEG innovations that may help overcome the design barriers of traditional EEG systems include fingered electrodes, skin-screw electrodes, dry-comb electrodes, and non-permanent EEG sensors with built-in adhesives (like a Band-Aid) (see review by Adams et al., 2024; see review by Choy et al., 2021).

In addition to the design of EEG systems, other factors need to be considered in the inclusion of diverse participants in EEG research studies. For instance, many participants who have been historically harmed by healthcare or scientific institutions may not trust EEG research (and researchers). Furthermore, given the significance that hair holds in African, Caribbean, and Black communities, participants may be reluctant to engage in research that may interfere with their hairstyles or routines. In their comprehensive literature review, Adams and colleagues (2024) provide a list of suggestions for developing EEG protocols that are culturally safer and more inclusive. For instance, they propose creating “hair bars” in EEG labs that contain styling tools and products for participants to restyle their hair after the EEG data collection. They also encourage research teams to familiarize themselves with hairstyle terminology, be flexible in scheduling EEG appointments around wash days or style changes, and develop informational materials that help participants feel informed, autonomous, and well-prepared, with particular attention to race-matched models, language, and dialects.

 

License

Icon for the Creative Commons Attribution 4.0 International License

Introduction to Psychology & Neuroscience (2nd Edition) Copyright © 2020, edited by Leanne Stevens, Jennifer Stamp, & Kevin LeBlanc, is licensed under a Creative Commons Attribution 4.0 International License, except where otherwise noted.