Sunday, September 26, 2010

Timely Performance

Paula Parungao



What is time? We can't see it, hear it, smell it, taste it, or touch it, and yet we know it exists. We know this because we depend on time so much in our daily activities. For example, say you want to meet a friend for lunch. What's the first thing you ask? Where and what time? Let's say the friend answers the where but not the time. What then? You'd have no idea when your friend will come; you don't even know if he/she is coming on THAT day. For all you know your friend meant tomorrow or next week.

Humans aren't psychic (well, as far as science is concerned we're not capable of being so); we can't predict what will happen in the next few seconds. And yet we move as if we do. When we dress up, we have every intention of going to work or school. When we wait at a certain place to meet up with friends, we expect them to come. When we go to our favorite store, we know full well that we'll buy that chocolate bar we've been craving. All of this sureness in moving through the present with the unpredictable future in mind.

Time and space, at least in our brains, seem to be strongly linked to each other. This may be because one of the most common uses of our temporal mechanisms is to act out whatever needs to be done in a specific space. But where in our brain exactly does this all happen? Research has suggested that extrastriate visual areas, V5/MT and V3 are important for temporal processing. But how they're important is another thing.

The study I'm about to present aims to answer two questions.
1) What is the direct role of V5/MT in temporal discrimination?
2) Will the disruption of either the left or right parietal cortex interfere with time perception in the auditory or visual domain?
Before the actual study, participants were subjected to repetitive transcranial magnetic stimulation (rTMS) while performing five tasks. Four of these tested temporal discrimination of moving visual, static visual, and auditory stimuli, and one served as the control task. This gave a general view of how the brain works when presented with the stimuli. After this, the participants took part in the real experiments.

Experiment 1
Participants were presented with stimuli on a 19-inch color monitor: an array of moving yellow dots on a black background. This was presented twice. Participants were asked to indicate which of the two arrays had the longer duration.

Experiment 2
Same as Experiment 1 only this time the dots weren't moving.

Experiment 3
The stimuli presented were two vertical columns consisting of twelve dots each. The columns outlined a "path" (the target). In "absent" trials, the dots were displayed randomly. Participants were asked whether they saw the "path" in the first array or in the second array.

Experiment 4
Same as Experiments 1 and 2, the only difference being that the intervals were presented as single auditory tones.

Experiment 5
Same as Experiment 4 only with a different time duration.

The researchers found that when TMS was applied over V5/MT or the right inferior parietal cortex (IPC), temporal discrimination of moving stimuli was impaired: greater differences between the two arrays mentioned earlier were needed to reach 75% accuracy. This meant that TMS increased response uncertainty rather than making time seem longer or shorter. It also showed that V5/MT and the IPC are each independently important in the temporal discrimination of moving stimuli. The results were the same for Experiment 2, which suggests that V5/MT might be involved in low-level visual timing. As for the auditory stimuli, there was no significant effect of TMS over V5/MT.
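To make that "75% accuracy" idea a bit more concrete, here's a tiny Python sketch of how a duration-discrimination threshold can be read off a psychometric curve. The numbers are made up for illustration (the paper derived its thresholds with its own psychophysical procedure); the point is just that a flatter curve after TMS means a larger duration difference is needed to reach 75% correct.

```python
# Illustrative only: estimate the duration difference at which a participant
# reaches 75% correct, by linear interpolation between tested differences.
# The data below are invented; the study used its own procedure.

def threshold_at(p_target, duration_diffs_ms, proportions_correct):
    """Return the duration difference (ms) at which accuracy first reaches
    p_target, interpolating linearly between tested points."""
    points = list(zip(duration_diffs_ms, proportions_correct))
    for (d0, p0), (d1, p1) in zip(points, points[1:]):
        if p0 < p_target <= p1:
            # linear interpolation between the two bracketing points
            return d0 + (p_target - p0) * (d1 - d0) / (p1 - p0)
    raise ValueError("target accuracy not bracketed by the tested differences")

diffs = [20, 40, 60, 80, 100]              # ms difference between the arrays
no_tms = [0.55, 0.68, 0.80, 0.90, 0.96]    # hypothetical proportion correct
with_tms = [0.52, 0.60, 0.70, 0.79, 0.88]  # flatter curve -> higher threshold

print(threshold_at(0.75, diffs, no_tms))    # a smaller difference suffices
print(threshold_at(0.75, diffs, with_tms))  # a larger difference is needed
```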



Based on these results, the researchers concluded that V5/MT has a role in both temporal and spatial vision specific to the visual modality. They also showed that the right, and not the left, posterior parietal cortex is responsible for the discrimination of visual and auditory durations. The findings also suggest that two models may account for the perception of time in the brain: timing may be either centered in one part of the brain or distributed across the areas capable of temporal processing, with the areas involved depending on the task, the modality, and the durations used. This research also showed that time may be an important factor in degenerate representation in the brain.

Okay, I admit the entry is rather technical. It does concern the brain after all, and what better way to explain the brain than through pure technicality. The brain is, after all, something we shouldn't mess around with, so we need to explain it in an objective manner. As for how this study has struck me enough to write a blog entry about it...that may be subjective. My blog won't be used for future study after all.

Moving on. This research by Walsh et al. explains how the very abstract concept of time is captured in our brains. Isn't it amazing that even something so complex is not so complex that it can't be comprehended by that 3-pound mush that makes up about 2% of our body weight? Wow, makes me realize that all of us are all brawn and almost no brain. Anyway. The study showed where in our brains time is processed and how it's processed, at least visually and auditorily for the most part. This processing happens mostly in the right posterior parietal cortex, which is known to be responsible for representing the different parts of space, showing that time and space really are intertwined in our noggins. It's responsible for the determination of vision for action (ehem, affordance) and spatial vision. Basically, how we perceive and act on the world in a definite time and space. Who knew our brains could operationalize such a vague and abstract concept that determines so much in our lives? Hey, in a way, we may even be kinda sorta psychic. With time as our crystal ball. Isn't that SO COOL??

Okay, that's as much as I can glean from the experiment without becoming a bore. For more information on how the time-space continuum works, please contact Einstein from the grave. Or my Physics 71 professor. :)



Reference:
Bahrami, B., Bueti, D., & Walsh, V. (2008). Sensory and association cortex in time perception. Journal of Cognitive Neuroscience, 20(6), 1054-1062.

Saturday, September 25, 2010

AFFORDANCE

by Michelle T. de los Santos



I want to talk about affordance this time since it’s the word that really stuck in my mind in our last 135 class reporting. As the word affordance was mentioned several times during the reporting, I learned by heart that affordance means function, or affordance = what objects are used for. Simple one, right?! :D For example, an affordance would be seeing a chair as something to sit on or a bed as something to sleep on. Easy as that! :P

I found a study by Chang, Wade, and Stoffregen (2009) who investigated the perception of affordances or critical action capabilities for aperture passage in an environment–person–person (E–P–P) system, which comprised a lead adult, responsible for perception of the system, and a child as a companion.

Their method included eight large and eight small female undergraduates who served as perceivers, and one large and one small girl who served as companions. Each perceiver was paired with the large and the small girl individually; the perceivers perceptually judged the minimum aperture width for the E–P–P system, and then the adult–child dyads (a dyad being a pair of people) actually walked through to determine the system’s actual minimum aperture width (Chang, Wade, and Stoffregen, 2009).

Results of the study demonstrated that perceivers precisely judged the action capabilities of an E–P–P system based on the body-scaled information of each adult–child dyad. The findings extended the previous concept of affordances for an environment person system to affordances for an E–P–P system (Chang, Wade, and Stoffregen, 2009).
I like the study because it is so relevant. The situations it describes are visible and available in our daily setting and environment. We usually escort weak and old people like our lolo and lola across the street, and parents also help their children cross the crosswalk in daily life. These situations are very common. I learned from the article that the action has to do with the environment plus a person-plus-person system. The article discussed that within the system, people perceive the environment from their own perspective; however, to act as a single unit, one of the two persons is dominant and determines how both should act to accomplish a specific goal. This environment–person–person (E–P–P) relationship is related to the characteristics of the dominant and the following individuals, as well as the characteristics of the environment. The perceptions of the lead person determine the behavior of the dyad. Thus, the study was conducted. The parent–child dyad can be an example. The parent needs to know the ratio of the time needed to cross the crosswalk to the time available to cross, as they are the more experienced and responsible member of the dyad. The researchers mentioned that if the ratio is less than one, then the dyad can cross safely (Chang, Wade, and Stoffregen, 2009).
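Just to spell out that crossing rule, here's a tiny sketch of the "ratio less than one" criterion. The numbers are my own invented illustration, not from the article.

```python
# A toy version of the crossing rule described above: the dyad can cross
# safely when (time needed to cross) / (time available to cross) < 1.
# Numbers below are invented for illustration.

def can_cross_safely(time_needed_s: float, time_available_s: float) -> bool:
    """True when the affordance ratio is below 1."""
    return time_needed_s / time_available_s < 1.0

# A parent-child dyad that walks more slowly than the adult alone would:
print(can_cross_safely(time_needed_s=9.0, time_available_s=12.0))   # True
print(can_cross_safely(time_needed_s=14.0, time_available_s=12.0))  # False
```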
Indeed, this was an interesting study! :) Further experiments can improve and verify this using another, larger set of participants. Male participants can also be considered next time to see if there's a difference and to avoid gender bias. In addition, I agree that in the future, researchers should also examine the affordances of an E–P–P system in different joint actions.


Reference:

Chang, C., Wade, M. G., & Stoffregen, T. A. (2009). Perceiving affordances for aperture passage in an environment–person–person system. Journal of Motor Behavior, 41, 495-500.

Cristina Menchaca 2007-49018

Technology has been developing so quickly that the purpose of computers has extended far beyond typing up documents, making graphs, computing math equations, and anything connected to being online. Virtual reality has been used as a method for cognitive rehabilitation, computers and machines are being developed to help paralyzed people carry out actions, and a lot more. Five years ago, 2D virtual reality programs were tested to see if they could train people with intellectual disabilities and help them with their shopping skills.

Almost all of us are familiar with the concept of virtual reality. Taking it from its name, virtual reality, or VR, simulates real-life situations while creating the illusion that one is in and is interacting with that world, artificial as it may be. Studies have actually found that VR can enhance the learning and transfer of skills to everyday circumstances. In other words, there is a clear, positive transfer effect from virtual to real training. This has been shown through activity in the nervous system, neuroplastic changes in the cerebral cortex, and neuroimaging and psychophysiological studies. Thus, aside from being used in entertainment or in analyzing consumers’ attitudes and behaviors regarding a certain product, VR has been applied in functional and vocational training. Recently, there has been interest in studying whether people with learning disabilities would 1) be motivated to use a virtual environment, 2) be capable of using it, and 3) benefit from such a method of training. This is exactly what Tam et al. sought to find: how effective a non-immersive, flat-screen VR method would be compared to a conventional psychoeducational method in training people with intellectual disabilities to shop in a local supermarket.

A convenience sample was used to obtain 16 participants (from four different organizations) who had a Stanford-Binet IQ test score from 40 to 54. All of them were trainees of a vocational skills training center. Selection of participants included the following requirements: at least 16 years old, emotionally and medically stable, no history of psychiatric problems or autism, independent in basic self-care activities, able to follow simple verbal instructions, able to grasp simple concepts about money, have real shopping needs, and have given consent to participate in the study. Four males and four females were randomly assigned to each group: the intervention group (the VR method) and the control group (the conventional method). All participants were introduced to the training objectives, training on supermarket skills, practice of shopping skills, and revision of the shopping skills. Before and after the assigned programs were carried out, a checklist for supermarket shopping skills was used in assessment, and the participants’ behavior throughout the program was also observed and noted.

What made the two set-ups so different? The control group made use of psychoeducational training: a teaching-learning process that included demonstration, role-play, and verbal feedback. VR, on the other hand, is interactive in nature, enabling the user to exercise direct control over a video-based virtual environment. Users are allowed to navigate, explore, and interact with videos that make up a virtual supermarket environment. The shopping process is divided into a series of tasks that require participants to use their judgment. Choices are provided at crucial points, and participants can proceed and get immediate visual and auditory reinforcement if they choose the right way to proceed. You may be thinking, why wasn’t a typical 3D method used instead of this non-immersive 2D program that makes use of a touch screen? A fully immersive display that includes a head-mounted display 1) might not be feasible (or even necessary) for people with cognitive deficits, and 2) in general, may cause side effects like vertigo, nausea, eyestrain, disorientation, etc. because of a conflict between perceptions in different sense modalities.

There’s more to the methods. For each set-up, two sessions per person were held, each lasting 30 minutes. In the VR method, a trainer demonstrated the options first. Participants received help in familiarizing themselves with navigating in the virtual environment. Retraining occurred on an individual basis, involving two trainers who gave instructions to the participants. Trainers physically collaborated with the trainees, interacting and communicating in nonverbal ways to help them. Trainers also tracked the trainees’ visual attention and physical (hand) movements in interacting with the environment. In the conventional group, each participant took part in a two-part psychoeducational tutorial and role-play. Participants received consistent instructions from a trainer, complemented with audiovisual demonstrations. Using information-based and simulated methods, the trainer introduced the concepts and skills required, and then the participants role-played.

Between-group and within-group differences were assessed. Participants in both groups showed improvement after the training, and the difference was significant. The training effect was more consistent for the VR group (scores 6 to 11) compared to the other group (scores 1 to 11), but the difference between groups was not significant, meaning that VR can achieve the same level of improvement as conventional intervention. It suggests, though, that the VR program has a slightly greater effect, but a larger sample would be needed to confirm this. Participants who went through conventional training actually showed more varied learning outcomes, because the VR method focused on consistency and motivation for certain tasks. Still, this study supported the finding that learning in a virtual training environment can be extended to reality. The VR program was a more realistic environment, while the conventional program made use of instructions and role-play only. There was also effective feedback (that facilitated learning) in the VR set-up because of the program’s design. In the conventional set-up, feedback from trainers as the participants role-played may not have been considered objective and consistent by the participants.

This study is a clear example of taking action because it allows participants to scan their environment and make decisions and actions based on the important cues they see. The checklist of abilities for the participants is as follows, each item being rated 1 for dependent, 2 for needs assistance, and 3 for independent (a small scoring sketch follows the list):

1. Can recognize the sign of the supermarket

2. Can enter in the right entrance

3. Can recall the target item

4. Can decide whether or not to use the food cart

5. Can get into aisles and identify whether or not the target item is there

6. Can decide which aisle to enter given more than one choice

7. Can locate items on shelves, displays, or bins

8. Can locate items similar to the target item

9. Can locate the target item

10. Can choose the correct amount of the target

11. Can check food expiration dates when suspected

12. Can avoid purchasing products that are dented, opened or appear spoiled

13. Can pick up the target item

14. Knows the need to pay for the item

15. Can search for the cashier after picking up the item

16. Can locate the cashier

17. Can find a cashier in service

18. Can queue at the cashier

19. Can put the item on the counter

20. Can pay using Hong Kong money

21. Can communicate appropriately with the cashier when needed

22. Can get the change

23. Can pick up the bought item

24. Can find the correct exit
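
And here's the scoring sketch I promised above. Summing the 24 item ratings into one total is my own assumption for illustration; the paper reports its own scoring procedure, which I'm not reproducing here.

```python
# An illustrative scorer for the 24-item checklist above. Each item is rated
# 1 (dependent), 2 (needs assistance), or 3 (independent), so a summed total
# would range from 24 to 72. The summing itself is my assumption.

RATINGS = {1: "dependent", 2: "needs assistance", 3: "independent"}

def total_score(item_ratings):
    """Sum the 24 item ratings after checking that each is 1, 2, or 3."""
    if len(item_ratings) != 24:
        raise ValueError("expected ratings for all 24 checklist items")
    if any(r not in RATINGS for r in item_ratings):
        raise ValueError("each rating must be 1, 2, or 3")
    return sum(item_ratings)

# A hypothetical participant who is independent on most items:
example = [3] * 18 + [2] * 4 + [1] * 2
print(total_score(example))  # 64 out of a possible 72
```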

VR creates an “artificial” multisensory experience of an environment, including space and events, and thus may be more effective for participants than simple role-play, where participants may have difficulty generalizing their actions to the real environment. However, it was also observed by the trainers that the impaired learning ability of participants limits their ability to navigate within the virtual environment and even to participate in such training. Cognitive issues should thus be considered when designing the VR system. I suppose this was difficult for the researchers. On one hand, there’s the importance of making sure participants are comparable, but on the other hand, there’s a compromise when the sample is very specific: in this case, people with intellectual disabilities, who have different levels of intelligence, capability, etc.

I liked that the very essence of this study was something ‘life promoting’. People with disabilities are already at a disadvantage compared to other people, so it’s heartwarming to know that technology is being put to good use, so that maybe their lives can be less difficult and they can depend less on others. I appreciated the checklist, actually, because the items were so specific. The items made me realize how we take for granted the things we don’t even think we think of, when there are people who actually have difficulty doing them. Speaking in terms of technicality, I liked that the choice of participants was very specific, so that comparison among participants and evaluation of results would not be ‘nullified’. I also liked that the study ensured an equal number of males and females per set-up, to account for gender differences. One may think that having such numerous criteria to be a participant is unfair in the sense that the study is still biased because it cannot speak for those with less capability. I think otherwise, because this is just the starting study. For now, it would be best to have a specific sample, to see if the method even works. Once it can be improved, then we can worry about making the method one that could suit anyone. I appreciate that VR was considered as an option in such training. After all, it makes sense to practice in a condition that is almost life-like, so that it is not hard to apply it in real life. It makes extra sense for the intellectually disabled, not because they are any less, but because skills practiced in a less realistic setting (as in role-play) might be harder for them to apply. Finally, I also have Asian pride because of this experiment, since the study was done in Hong Kong and made use of Hong Kong dollars. I like that the study made things as ‘real life’ as possible, through the use of real money, for example, no matter what the set-up.

My only suggestion for this study is that a bigger sample be used so that the effectiveness of the method can be verified and its comparison with the conventional method be established. It’s really from here that the technology can then be developed so that it could reach more people. This study had important knowledge that the participants could learn from. Speaking short term, participants would think of questions like where am I in the environment, what do I see, where do I go, how do I get there? Speaking long term, participants could ask themselves questions like what can and do I learn as I see and explore the environment? Perhaps in the future, technology can extend further and train people in various skills of community survival: transport skills, road safety, wheelchair accessibility, etc. Whatever happens, I’m sure we can all agree even from this study alone that the virtual reality environment can be a very powerful tool in rehabilitation and improvement of life, not just in entertainment and whatnot.

Source:

Tam, S., Man, D., Chan, Y., Sze, P., & Wong, C. (2005). Evaluation of a computer-assisted, 2-D virtual reality system for training people with intellectual disabilities on how to shop. Rehabilitation Psychology, 50(3), 285-291.

The Eyes Help Hearing




By Kevin Chan

As we are about to transition in our perception class from mainly studying the visual sense to studying the auditory sense, I thought of writing about a topic that involves both audition and vision. Because of that, I chose a study that is, in a way, a transition as well from the visual to the auditory. In a nutshell, the study tested whether adding visual information about articulatory gestures (such as lip movements) could enhance the perception of second-language sounds.

To start off, the brain integrates two sources of information for speech comprehension: information from vision (lip movements) and from audition (linguistic sounds). The question, then: can this audiovisual integration of speech facilitate the perception of a second language?


The methodology was simple. There was an audio-only trial, a video-only trial, and an audiovisual trial. All participants had been exposed to either Spanish or Catalan as their second language, and the stimuli were simple Spanish-Catalan phonemes. Each trial consisted of the presentation of one disyllabic stimulus for a duration of 800 ms. The task was for the participants to indicate, as fast (and as accurately, of course) as possible, the correct syllable of the stimulus.
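
For concreteness, here's a small sketch of how responses from the three trial types might be tallied into accuracy and mean reaction time per condition. The trial values are invented and this is not the authors' analysis code.

```python
# Tally accuracy and mean RT per condition: audio-only, video-only,
# audiovisual. Trial data below are hypothetical.

from collections import defaultdict

trials = [
    # (condition, correct?, reaction time in ms) -- invented values
    ("audio-only", True, 720), ("audio-only", False, 810),
    ("video-only", False, 905), ("video-only", True, 880),
    ("audiovisual", True, 650), ("audiovisual", True, 640),
]

by_condition = defaultdict(list)
for condition, correct, rt_ms in trials:
    by_condition[condition].append((correct, rt_ms))

for condition, results in by_condition.items():
    accuracy = sum(correct for correct, _ in results) / len(results)
    mean_rt = sum(rt for _, rt in results) / len(results)
    print(f"{condition}: accuracy={accuracy:.2f}, mean RT={mean_rt:.0f} ms")
```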

The results of the study indicate that the addition of visual information (the images of the lips moving) about the speaker's gestures enhanced the ability to discriminate sounds in a second language. This actually contrasts with previous studies, which reported an improvement in overall comprehension based on audiovisual inputs. Therefore, integrating visual gestures with auditory information can produce a specific improvement in phonological processing.

A sound suggestion would be to test this cross-culturally. In the said study the language pair used was Spanish-Catalan. How applicable would this be to other languages?

For example, let's look at Chinese, a language very close to my heart. In Chinese, there is such a thing as intonation: the pitch and speed can affect the meaning of a word. Two different words can be "spelled out" (although spelling in Chinese works differently) in completely the same way but, because of their intonation, mean different things. Mai can mean both buy and sell depending on the intonation: mai with a stress means sell, while mai said as if you were asking a question means buy. The question now is whether visuals would be able to enhance this, given that Chinese is so strongly an auditory language. If you watch a Chinese person say "mai", the lip movements would probably look very, very much alike. How then is this study applicable to that?

Funny, because even after taking a course in psycholinguistics (Psychology 145) I did not know this. I did not think that visuals such as these had a profound effect on comprehension. A whole different chapter on this could be included in the textbook that we used for the course.

I think a great application of this study is for those who are hearing impaired. Since we now know that visual speech information such as articulatory gestures can greatly enhance the comprehension of spoken messages (which relates to the motor theory of speech perception), we can somehow devise a system that focuses on a person's mouth or something (I'm just thinking out loud). For example, in the news, there could be a window in the lower part of the screen that zooms in on the mouth of the reporter. By doing so, people who are hearing impaired could look at the mouth, which would enhance their comprehension.

Also, this would serve people who do not have a hearing disability as well. Companies that make instructional materials for learning languages (such as Rosetta Stone) could apply the implications of the said study. They should stop producing materials that are purely audio (such as learning CDs) and focus on materials that are audiovisual in nature. They could similarly include a small window zoomed in on the mouth of the main speaker in their instructional audiovisual materials.

Furthermore, for those who are trying to learn a new language, it may be a good idea to look at the mouths of people who speak that particular language. This study is actually very much perfect for me, for I am currently taking Spanish 10 this semester. That means I should look at the mouth of my Spanish professor while she talks, for it actually might make me speak better Spanish! Voy a intentar que! (I will try that!)






SOURCE:

Navarra, J., & Soto-Faraco, S. (2007). Hearing lips in a second language: Visual articulatory information enables the perception of second language sounds. Psychological Research, 71, 4-12.


Friday, September 24, 2010

From the very first time I laid my eyes on you boy, my heart said follow through…

Gaze Shifts and Person Perception
Isabel Acosta 2007-49035


You've seen it before: a guy walks into a room and suddenly his eyes focus on this one girl, and their eyes lock. If the girl looks away, and then discreetly looks back, the guy smiles -- he knows he has a shot. But how does he know??

Detecting and interpreting gaze cues are skills that human beings are so naturally good at. Gaze cues are so central to all our human interactions. We categorize people by merely looking at them, or by judging how they look at us. We greatly value encounters with people with whom we have made eye contact. We put so much meaning in them and from these cues, we base our next actions or responses -- it "facilitates the generation of a contextually appropriate behavioral response". Isn't that so amazing?? From looking at these cues, we can find out the other person's intentions, motivations, and even what the person wants us to do (or not do). The eyes are just so important to social cognition and communication, and it is crucial for humans to have a kind of information processing that can decode such cues.

What do you think she's trying to tell you with this look?

It has been found that infants are fascinated with eyes, and that at 4 months they are already able to differentiate between a direct gaze and an averted gaze. At 9-18 months, they are already able to tell through the eyes if an adult's intentions are ambiguous. Scientists have also found that specific brain regions become activated when processing gaze cues, such as the superior temporal sulcus, amygdala, medial prefrontal cortex, and ventral striatum (related to prediction of reward and punishment). Although much has been researched on the neuroscience of gaze processing, apparently little is known about how gaze cues can affect person perception. Mason, Tatkow and Macrae (2005) tried to find out if gaze cues, specifically gaze shifts, affect a person's perception of a target's likability and attractiveness.

Direct and averted gaze

The researchers conducted two very simple and straightforward experiments. They asked 24 women and 19 men from Dartmouth College to rate 38 female faces (with neutral expressions) from likable (1) to extremely likable (5). The pictures were displayed for 2,000 ms, and afterwards the participants were given some time to rate them. The pictures were animated in two ways: they appeared to be looking at the participant, or they appeared to be looking away from the participant. If the participant was placed in the attention-away condition, for the first 1,000 ms the picture they saw displayed a direct gaze, which then changed to an averted gaze (eyes looking left or right) for the remaining 1,000 ms. The opposite occurred for the attention-toward condition. So, the researchers specifically made sure the gaze shift was the only differing variable.
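Here's a quick sketch of that trial timing as I understand it from the paper's summary: 2,000 ms per face, split into two 1,000 ms gaze segments whose order defines the condition. This is only my reconstruction for illustration.

```python
# Sketch of the trial structure described above: each face is shown for
# 2,000 ms total, as two 1,000 ms gaze segments. Attention-toward shifts
# from averted to direct; attention-away shifts from direct to averted.

def gaze_segments(condition):
    """Return (gaze, duration_ms) segments for one 2,000 ms trial."""
    if condition == "attention-toward":
        return [("averted", 1000), ("direct", 1000)]
    if condition == "attention-away":
        return [("direct", 1000), ("averted", 1000)]
    raise ValueError(f"unknown condition: {condition}")

for condition in ("attention-toward", "attention-away"):
    print(condition, gaze_segments(condition))
```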

Results showed that ratings of likability were higher when social attention was directed toward rather than away from the participants. "Targets were evaluated more favorably when gaze shifts signaled attentional engagement," regardless of the participant's sex (male or female). The researchers were curious -- does the judgmental relevance of targets affect the effects of gaze shifts? What if the judgment needed is irrelevant to a particular sex of perceivers (females), like the physical attractiveness of the female models; will the same results be observed? The researchers then conducted a second experiment to find out if judgment relevance moderates the effects of gaze shifts.

Which ad is more appealing?

In the second experiment, the same methods were employed, except that this time they had to rate the targets/pictures from 1 (attractive) to 5 (extremely attractive). The researchers found that the participant's sex had a main effect --males found the targets more attractive than the females did. They also found that ratings of attractiveness were way higher when social attention was directed toward rather than away from male raters, which was not observed in the female raters. They concluded that when the requested judgment was more relevant to men than women, only men were influenced by gaze shifts when evaluating the targets.


Although I am pretty troubled with the validity of the effect of sex in judging attractiveness (were the participants all heterosexuals?) and the heterosexist bias that physical attractiveness of females is a judgment only relevant to males, I was impressed with the implications of the study. It is certainly interesting how much information we can get and transmit just by LOOKING at people, and how much this acquired information affects our perceptions. "Decoding the language of the eyes streamlines the complex process of everyday social interaction. It is an ability that lies at the very heart of human social cognition." So much can be assumed just by these looks!! Gaze direction also influences person construal, because it moderates our social attention. If someone or something is interesting, we direct our gaze in its direction. This signaling of the locus of attention conveys information about its importance to the perceiver. Furthermore, patterns of gaze direction, or shifts in gaze, signal changes in social attention, which have implied social meaning. If you make eye contact with someone, and then the person hurriedly looks away and does not look back, how do you perceive this person? The researchers concluded that "gaze shifts modulate people’s evaluations of others, and that this effect is shaped by the interplay of several factors, which includes the status of the target (i.e., cue provider), the identity of the perceiver, and the nature of the judgment under consideration." The researchers emphasized that the judgmental context, or the relationship that exists between the perceiver and the target, changes how gazes affect our person perception. Gaze shifts are sensitive to context. This makes a lot of sense to me and is such a significant note to remember. Our perceptions of people, and any other stimuli really, are always taken in the light of the context of the situation or environment they are in, and they are subject to the current situation of the person perceiving them -- which includes past experiences, memories, values, beliefs, biases, etcetera. It is so interesting to me how everything does not occur by itself -- everything dynamically interacts with everything else, and these elements cannot be separated from each other.


It is so cool that decoding these eye cues comes so naturally to us. It is so automatically hardwired in our social brains that the eyes and their gazes contain a lot of meaning. It is so amazing how our brains are equipped with the kind of information processing system that can figure out the meanings of such subtle cues! We are truly a beautifully constructed species.



So, remember to think before you look --you never know what message you're signaling from your gaze.

Source: Mason, M. F., Tatkow, E. P., & Macrae, C. N. (2005). The look of love: Gaze shifts and person perception. Psychological Science, 16(3), 236-239.

Saturday, September 18, 2010

Cristina Menchaca, 2007-49018

Love is blind?

Try jealousy.

Last week our group conducted our experiments, which were on inattentional blindness. Inattentional blindness is a phenomenon built around the idea that our visual awareness of things in the environment depends largely on our ability to direct our attention to them. By focusing on specific things, we fail to see other things that are right in front of us. The question now is, can jealousy cause blindness?

Why jealousy? We tend to prioritize emotional stimuli to the extent that our visual awareness of nearby, non-emotional stimuli is impaired. Various studies have shown that our close relationships with others, one of the primary contexts for the experience of emotion, can affect our moods, behaviors, and health. Putting these two ideas together, the question, which the researchers sought to answer, is: can fluctuations in perceived social context affect us to the extent of influencing our visual processing of the world?

The idea for this study came from a previous study done on women, which tried to determine whether or not the presence of social support within the context of a romantic relationship decreases affective reactivity to an emotionally aversive stimulus, such as an electric shock. The study showed that it does: holding a person’s hand reduces threat-related activity in the brain, with greater attenuation when the hand was the woman’s husband’s. Additionally, how much the threat was attenuated depended on the husband-wife relationship: more attenuation correlated with higher self-reported marital satisfaction. Based on this study, the researchers then thought about the opposite. Could it be that a perceived threat to the relationship would induce a heightened state of sensitivity (such as anxiety or unease) to emotionally aversive cues? This would then mean that, possibly, fluctuations in security regarding one’s romantic relationship can literally affect how one sees or perceives the world.

Two studies were used. Heterosexual couples were recruited and the tests were administered to the females, who had to search for a target within a sequence of fleeting images while trying not to be distracted by a neutral or emotional picture that would appear. (Typically, there is more difficulty reporting a target when an emotional distractor appears before or right after the target than when the distractor is neutral.) For the set-up with perceived relationship threat, the females performed the task while their male partners rated the attractiveness of landscapes, and then the attractiveness of other romantically accessible women. Since the relationship threat manipulation could be different for each of the women, they were all asked at the end of the experiment to report how uneasy they were about their partners rating other women. This was then correlated with emotion-induced blindness (the task of the women). Having the males rate landscapes first also ensured substantial ‘practice’ before the relationship threat manipulation. These were also included in the analyses so that the researchers could be sure that results really came from the relationship threat manipulation and not from individual differences.

Results showed that the degree to which women reported their unease about their partners rating the attractiveness of other women was significantly and inversely correlated with target detection accuracy following a negative distractor, and not with accuracy following neutral distractors or on trials with no distractors. In other words, there was a significant correlation with emotion-induced blindness caused by negative distractors. An alternative explanation for these results could be that a heightened state of anxiety increases general distractibility, but in both experiments, self-reported unease was correlated with performance decrements induced by distractors considered emotionally negative, not neutral distractors, erotic distractors, or baseline conditions. Also, unease about having one’s romantic partner rate the attractiveness of other women correlated with emotion-induced blindness only when the partner was rating other women and not while he was rating the attractiveness of landscapes. This suggests that heightened sensitivity to emotional distractions was a function of the effectiveness of the relationship threat manipulation, not simply a function of a more general association between trait anxiety and a stronger bias to attend to emotional information.
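To illustrate what "significantly and inversely correlated" means here, a small sketch with invented numbers (definitely not the study's data):

```python
# Pearson correlation between self-reported unease and target-detection
# accuracy on trials following a negative distractor. The study reports an
# inverse correlation: more unease, worse accuracy. Values below are invented.

from math import sqrt

def pearson_r(xs, ys):
    """Plain Pearson correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

unease = [1.0, 2.0, 2.5, 3.5, 4.0, 4.5]                       # self-reported
accuracy_after_negative = [0.90, 0.85, 0.80, 0.70, 0.62, 0.60]

print(round(pearson_r(unease, accuracy_after_negative), 2))   # strongly negative
```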

Why were the tests administered to women in a ‘relatively homogeneous population’? (Presumably to eliminate as much noise as possible in ratings of general unease.) How would males react if they were the ones who took the test instead? How about males in same-sex relationships, or females in same-sex relationships? Further research on such samples could give us not just an idea of how jealousy affects visual perception but also of how similar or different the results would be given a particular gender or sexual orientation. Perhaps age can also be considered. Would there be a difference if the attractive stimuli were older or younger? That could imply that reactions are very specific to stimuli, just like modules. How about the race of the stimuli? Results from such a study could tell us about what people find attractive, or at the opposite extreme, it may even give us a picture of racial discrimination, should a certain race evoke no threat to a person. Finally, another interesting thing to consider would be other emotions, which could affect visual perception just as jealousy has been shown to. How about anger, sadness, excitement, etc.?

I should say, though, that I did like a couple of things about this experiment. One was that it put ideas together: the fact that we attend more to emotional things and miss non-emotional ones, and that close relationships can affect our emotions, behavior, etc. I also liked that this study tried to test the opposite of a previous study, which looked at how social support (from a romantic partner) decreases affective response to threat. Finally, and most of all, I liked that this study made use of things such as a self-report of unease, a task for the males of rating landscapes and then women, and different types of stimuli, to ensure that results were not based on noise. By using such techniques, the researchers could be sure that the results were not based on individual differences, were due to negative emotional stimuli, and that reactions were solely based on the manipulations and not on a general reaction to negative stimuli.


As many of us know, the language of social relationships is filled with visual metaphor (beauty is in the eye of the beholder, but love is blind). Isn’t it quite interesting that such phrases can actually connect to reality in a concrete way? Social emotions influence us so deeply that they can actually affect our processes in visual awareness, just as jealousy has shown. Who knows how else our visual (even other sensory) processes are affected by our moods. This isn’t just something that advertisers would love to hear and take advantage of, but it has big implications on how we react to and deal with things, and how we should see others in this light. Is love blinding? We have yet to find out, along with other emotions. For now, based on this study, we can actually say that jealousy is blinding.


Source:

Most, S., Laurenceau, J., Graber, E., Belcher, A., & Smith, C. V. (2010). Blind jealousy? Romantic insecurity increases emotion-induced failures of visual perception. Emotion, 10(2), 250-256.