Sunday, September 26, 2010
Timely Performance
What is time? We can't see it, hear it, smell it, taste it, or touch it, and yet we know it exists. We know this because we depend on time so much in our daily activities. For example, you want to meet a friend for lunch. What's the first thing you ask? Where and what time? Let's say the friend answers the where but not the time. What then? You'd have no idea when your friend will come; you wouldn't even know if he/she is coming on THAT day. For all you know, your friend meant tomorrow or next week.
Humans aren't psychic (well, in the view of science we're not capable of being so); we can't predict what will happen in the next few seconds. And yet we move as if we could. When we dress up, we have every intention of going to work or school. When we wait at a certain place to meet up with friends, we expect them to come. When we go to our favorite store, we know full well that we'll buy that chocolate bar we've been craving. All of this sureness in moving through the present with the unpredictable future in mind.
Time and space, at least in our brains, seem to be strongly linked to each other. This may be because one of the most common uses of our temporal mechanisms is to act out whatever needs to be done in a specific space. But where in our brain exactly does this all happen? Research has suggested that extrastriate visual areas, V5/MT and V3 are important for temporal processing. But how they're important is another thing.
The study I'm about to present aims to answer two questions.
1) What is the direct role of V5/MT in temporal discrimination?
2) Will the disruption of either the left or right parietal cortex interfere with time perception in the audio or visual domain?
Before the actual study, participants were subjected to repetitive transcranial magnetic stimulation (rTMS) while performing five tasks: four tested temporal discrimination of moving visual, static visual, and auditory stimuli, and one served as the control task. This was to get a general view of how the brain works when presented with the stimuli. After this, the participants went through the real experiments.
Experiment 1
Participants were presented with stimuli on a 19-inch color monitor: an array of moving yellow dots on a black background, shown twice. Participants were asked to point out which of the two arrays had the longer duration.
Experiment 2
Same as Experiment 1, only this time the dots weren't moving.
Experiment 3
The stimuli presented were two vertical columns consisting of twelve dots each. The columns outlined a "path" (the target). In "absent" trials, the dots were displayed randomly. Participants were asked whether they saw the "path" in the first array or in the second array.
Experiment 4
Same as Experiments 1 and 2, the only difference being that the participants heard single auditory tones marking the intervals.
Experiment 5
Same as Experiment 4, only with a different time duration.
The researchers found that when TMS was applied over V5/MT or the right inferior parietal cortex (IPC), temporal discrimination of moving stimuli was impaired: greater differences between the two arrays mentioned earlier were needed to reach 75% accuracy. This means that TMS increased response uncertainty rather than making time seem longer or shorter, and it shows that V5/MT and the IPC are each independently important in the temporal discrimination of moving stimuli. The results were the same for Experiment 2, which suggests that V5/MT might be involved in low-level visual timing. As for the auditory stimuli, there was no significant effect of stimulating V5/MT.
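Since the key outcome here is the duration difference needed to reach 75% accuracy, a short sketch might make the idea of a psychometric threshold concrete. Everything below (the numbers, the logistic form, the fitting choices) is my own illustration, not the authors' actual analysis.

```python
# Hypothetical sketch: estimating a 75%-correct duration-discrimination
# threshold from (invented) two-interval data.
import numpy as np
from scipy.optimize import curve_fit

# Duration differences between the two arrays (ms) and proportion of correct
# "which interval was longer?" responses at each difference (made-up numbers).
delta_ms = np.array([10, 20, 40, 80, 160], dtype=float)
p_correct = np.array([0.52, 0.58, 0.70, 0.85, 0.96])

def psychometric(delta, threshold, slope):
    # Logistic rising from chance (0.5) to 1.0; at delta == threshold the
    # predicted proportion correct is exactly 0.75.
    return 0.5 + 0.5 / (1.0 + np.exp(-slope * (delta - threshold)))

(threshold_75, slope), _ = curve_fit(psychometric, delta_ms, p_correct, p0=[50.0, 0.05])
print(f"Estimated 75%-correct threshold: {threshold_75:.1f} ms")
# Under rTMS of V5/MT or the right IPC, this threshold would shift upward:
# larger duration differences are needed to reach the same 75% accuracy.
```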
Based on these results, the researchers concluded that V5/MT plays a role in both temporal and spatial vision, a role specific to the visual modality. They also showed that the right, and not the left, posterior parietal cortex is responsible for the discrimination of both visual and auditory durations. Two models may account for the perception of time in the brain: timing may be centered in one part of the brain, or it may be distributed across the areas capable of temporal processing, with the areas involved depending on the task, the modality, and the durations used. The research also suggests that time may be an important factor in degenerate representation in the brain.
Okay, I admit the entry is rather technical. It does concern the brain, after all, and what better way to explain the brain than through pure technicality. The brain is, after all, something we shouldn't mess around with, so we need to explain it in an objective manner. As for how this study has struck me enough to write a blog entry about it...that may be subjective. My blog won't be used for future study, after all.
Moving on. This research by Bueti, Bahrami, and Walsh explains how the very abstract concept of time is captured in our brains. Isn't it amazing that even something so complex is not too complex to be comprehended by that three-pound mush that comprises about 2% of our body weight. Wow, makes me realize that all of us are all brawn and almost no brain. Anyway. The study showed where in our brains time is processed and how it's processed, at least visually and auditorily for the most part. This processing happens mostly in the right posterior parietal cortex, which is known to be responsible for representing the different parts of space, showing that time and space really are intertwined in our noggins. It's responsible for vision for action (ehem, affordance) and spatial vision: basically, how we perceive and act on the world in a definite time and space. Who knew our brains could operationalize such a vague and abstract concept that determines so much in our lives? Hey, in a way, we may even be kinda sorta psychic, with time as our crystal ball. Isn't that SO COOL??
Okay, that's as much as I can glean from the experiment without becoming a bore. For more information on how the time-space continuum works, please contact Einstein from the grave. Or my Physics 71 professor. :)
Reference:
Bueti, D., Bahrami, B., & Walsh, V. (2008). Sensory and association cortex in time perception. Journal of Cognitive Neuroscience, 20(6), 1054-1062.
Saturday, September 25, 2010
AFFORDANCE
I found a study by Chang, Wade, and Stoffregen (2009) that investigated the perception of affordances, or critical action capabilities, for aperture passage in an environment–person–person (E–P–P) system, which comprised a lead adult, responsible for perception for the system, and a child as a companion.
In their method, eight large and eight small female undergraduates served as perceivers, and one large and one small girl served as companions. Each perceiver was paired with the large girl and with the small girl individually; the perceivers perceptually judged the minimum aperture width for the E–P–P system, and then the adult–child dyads (pairs of people) actually walked through apertures to determine the system's actual minimum width (Chang, Wade, & Stoffregen, 2009).
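To make the judged-versus-actual comparison concrete, here is a minimal sketch with invented widths; the participant numbers, units, and measures below are assumptions for illustration, not the authors' data.

```python
# Hypothetical comparison of judged vs. actual minimum aperture widths for an
# adult-child dyad (all numbers invented).
import numpy as np

judged_min_cm = np.array([62.0, 58.5, 65.0, 60.0, 63.5])  # perceivers' judgments
actual_min_cm = np.array([57.0, 55.0, 61.0, 56.5, 59.0])  # widths the dyads actually needed

# A ratio above 1.0 means the perceiver overestimates the space the E-P-P
# system needs; below 1.0 means underestimation.
ratios = judged_min_cm / actual_min_cm
print("Judged/actual ratios:", np.round(ratios, 2))
print("Mean ratio:", round(float(ratios.mean()), 2))
```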
Chang, C., Wade, M. G., & Stoffregen, T. A. (2009). Perceiving affordances for aperture passage in an environment–person–person system. Journal of Motor Behavior, 41, 495-500.
Cristina Menchaca 2007-49018
What made the two set-ups so different? The control group made use of psychoeducational training: a teaching-learning process that included demonstration, role-play, and verbal feedback. VR, on the other hand, is interactive in nature, enabling the user to exercise direct control over a video-based virtual environment. Users are allowed to navigate, explore, and interact with videos that make up a virtual supermarket environment. The shopping process is divided into a series of tasks that require participants to use their judgment. Choices are provided at crucial points, and participants get immediate visual and auditory reinforcement if they choose the right way to proceed. You may be thinking, why wasn't a typical 3D method used instead of this non-immersive 2D program that makes use of a touch screen? A fully immersive display that includes a head-mounted unit 1) might not be feasible (or even necessary) for people with cognitive deficits, and 2) in general, may cause side effects like vertigo, nausea, eyestrain, and disorientation because of a conflict between perceptions in different sense modalities.
There's more to the methods. For each set-up, two sessions per person were held, each lasting 30 minutes. In the VR method, a trainer demonstrated the options first, and participants received help in familiarizing themselves with navigating the virtual environment. Retraining occurred on an individual basis, involving two trainers who gave instructions to the participants. Trainers physically collaborated with the trainees, interacting and communicating in nonverbal ways to help them, and also tracked the trainees' visual attention and hand movements while they interacted with the environment. In the conventional group, each participant took part in a two-part psychoeducational tutorial and role-play. Participants received consistent instructions from a trainer, complemented with audiovisual demonstrations. Using information-based and simulated methods, the trainer introduced the concepts and skills required, and then the participants role-played.
Between-group and within-group differences were assessed. Participants in both groups showed improvement after the training, and the improvement was significant. The training effect was more consistent for the VR group (scores of 6 to 11) compared with the other group (scores of 1 to 11), but the between-group difference was not significant, meaning that VR can achieve the same level of improvement as the conventional intervention. The results do suggest that the VR program has a slightly greater effect, but a larger sample would be needed to confirm this. Participants who went through conventional training actually showed more varied learning outcomes, because the VR method focused on consistency and motivation for specific tasks. Still, this study supported the finding that learning in a virtual training environment can be extended to reality. The VR program provided a more realistic environment, while the conventional program made use of instructions and role-play only. There was also effective feedback (that facilitated learning) in the VR set-up because of the program's design. In the conventional set-up, the feedback from trainers as the participants role-played may not have been seen as objective and consistent by the participants.
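To make the within-group and between-group comparisons concrete, here is a minimal sketch with invented checklist totals; the paper's actual data and statistical tests may well differ (t-tests are used here purely for illustration).

```python
# Hedged sketch: within-group (pre vs. post) and between-group (gain scores)
# comparisons on hypothetical checklist totals.
import numpy as np
from scipy import stats

vr_pre, vr_post = np.array([40, 38, 45, 42]), np.array([48, 47, 53, 50])
conv_pre, conv_post = np.array([41, 37, 44, 43]), np.array([46, 43, 50, 49])

# Within-group: did each group improve significantly after training?
print("VR pre vs. post:", stats.ttest_rel(vr_post, vr_pre))
print("Conventional pre vs. post:", stats.ttest_rel(conv_post, conv_pre))

# Between-group: is the VR gain reliably larger than the conventional gain?
print("VR gain vs. conventional gain:", stats.ttest_ind(vr_post - vr_pre, conv_post - conv_pre))
```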
This study is a clear example of taking action, because it lets participants scan their environment and make decisions and actions based on the important cues they see. The checklist of abilities for the participants is as follows, each item being rated 1 for dependent, 2 for needs assistance, and 3 for independent (a minimal scoring sketch follows the list):
1. Can recognize the sign of the supermarket
2. Can enter through the right entrance
3. Can recall the target item
4. Can decide whether or not to use the food cart
5. Can get into aisles and identify whether or not the target item is there
6. Can decide which aisle to enter given more than one choice
7. Can locate items on shelves, displays, or bins
8. Can locate items similar to the target item
9. Can locate the target item
10. Can choose the correct amount of the target
11. Can check food expiration dates when suspected
12. Can avoid purchasing products that are dented, opened or appear spoiled
13. Can pick up the target item
14. Knows the need to pay for the item
15. Can search for the cashier after picking up the item
16. Can locate the cashier
17. Can find a cashier in service
18. Can queue at the cashier
19. Can put the item on the counter
20. Can pay using Hong Kong money
21. Can communicate appropriately with the cashier when needed
22. Can get the change
23. Can pick up the bought item
24. Can find the correct exit
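As promised above, here is a minimal scoring sketch, assuming the simplest possible rule: a participant's overall score is just the sum of the 24 item ratings. The published study's actual scoring may differ; this is only to make the rating scale concrete.

```python
# Minimal checklist-scoring sketch (assumed scoring rule, not the study's own).
RATINGS = {1: "dependent", 2: "needs assistance", 3: "independent"}

def total_checklist_score(item_ratings):
    """Sum the 24 item ratings after checking that each is a valid 1-3 value."""
    if len(item_ratings) != 24:
        raise ValueError("Expected ratings for all 24 checklist items")
    if any(r not in RATINGS for r in item_ratings):
        raise ValueError("Each item must be rated 1, 2, or 3")
    return sum(item_ratings)

# Example: mostly 'needs assistance' with six 'independent' items.
# Totals range from 24 (all dependent) to 72 (all independent).
example = [2] * 18 + [3] * 6
print(total_checklist_score(example))  # 54
```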
VR creates an "artificial" multisensory experience of an environment, including space and events, and thus may be more effective for participants than simple role-play, where participants may have difficulty generalizing their actions to the real environment. However, the trainers also observed that participants' impaired learning ability limits their capacity to navigate within the virtual environment and even to take part in such training. Cognitive issues should thus be considered when designing the VR system. I suppose this was difficult for the researchers. On one hand, there's the importance of making sure participants are comparable, but on the other hand, that gets compromised when the sample is very specific: in this case, people with intellectual disabilities, who have different levels of intelligence, capability, and so on.
I liked that the very essence of this study was something 'life promoting'. People with disabilities are already at a disadvantage compared with other people, so it's heartwarming to know that technology is being put to good use, so that maybe their lives can be less difficult and they can depend less on others. I appreciated the checklist, actually, because the items were so specific. The items made me realize how we take for granted things we don't even think we think of, when there are people who actually have difficulty doing them. Speaking in terms of technicality, I liked that the choice of participants was very specific, so that comparison among participants and evaluation of results would not be 'nullified'. I also liked that the study ensured an equal number of males and females per set-up, to account for gender differences. One may think that having such numerous criteria for participation is unfair, in the sense that the study is still biased because it cannot speak for those with less capability. I think otherwise, because this is just the starting study. For now, it is best to have a specific sample, to see if the method even works. Once it can be improved, then we can worry about making the method one that could suit anyone. I appreciate that VR was considered as an option in such training. After all, it makes sense to practice in a condition that is almost life-like, so that it is not hard to apply it in real life. It makes extra sense for the intellectually disabled, not because they are any less, but because skills practiced in a less realistic setting (as in role-play) might be harder for them to apply. Finally, I also have Asian pride because of this experiment, since the study was done in Hong Kong and made use of Hong Kong dollars. I like that the study made things as 'real life' as possible, through the use of real money, for example, no matter what the set-up.
My only suggestion for this study is that a bigger sample be used, so that the effectiveness of the method can be verified and its comparison with the conventional method established. It's really from there that the technology can be developed so that it could reach more people. This study had important knowledge that the participants could learn from. In the short term, participants would think of questions like: where am I in the environment, what do I see, where do I go, how do I get there? In the long term, participants could ask themselves: what can I learn, and what do I learn, as I see and explore the environment? Perhaps in the future, technology can extend further and train people in various skills of community survival: transport skills, road safety, wheelchair accessibility, etc. Whatever happens, I'm sure we can all agree even from this study alone that the virtual reality environment can be a very powerful tool in rehabilitation and the improvement of life, not just in entertainment and whatnot.
Source:
Tam, S., Man, D., Chan, Y., Sze, P., & Wong, C. (2005). Evaluation of a computer-assisted, 2-D virtual reality system for training people with intellectual disabilities on how to shop. Rehabilitation Psychology, 50(3), 285-291.
The Eyes Help Hearing
By Kevin Chan
As we are about to transition from mainly studying the visual sense in our perception class to studying the auditory sense, I thought of writing on a topic that involves both audio and vision. Because of that, I chose a study that is, in a way, a transition as well from the visual to the auditory. In a nutshell, the study tested whether adding visual information about articulatory gestures (such as lip movements) could enhance the perception of second-language sounds.
To start off, the brain integrates two sources of information for speech comprehension: information from vision (lip movements) and from audition (linguistic sounds). Furthermore, can this audiovisual integration of speech facilitate the perception of a second language?
The methodology was simple. There was an audio-only trial, a video-only trial, and an audiovisual trial. All participants had been exposed to either Spanish or Catalan as their second language, and the stimuli were simple Spanish-Catalan phonemes. Each trial consisted of the presentation of one disyllabic stimulus for a duration of 800 ms; the task was for the participants to press, as fast (and as accurately, of course) as possible, the button for the correct syllable of the stimulus.
The results of the study indicate that adding visual information about the speaker's gestures (the pictures of the lips moving) enhanced the ability to discriminate sounds in a second language. This contrasts with previous studies, which reported an improvement in overall comprehension based on audiovisual inputs. Therefore, integrating visual gestures with auditory information can produce a specific improvement in phonological processing.
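For concreteness, here is a hedged sketch of the comparison behind this result, using invented per-participant accuracy scores; a paired t-test stands in for whatever analysis the authors actually ran.

```python
# Hypothetical comparison: syllable-discrimination accuracy with sound alone
# versus sound plus the speaker's lip movements (all numbers invented).
import numpy as np
from scipy import stats

audio_only  = np.array([0.61, 0.58, 0.66, 0.63, 0.60, 0.64])
audiovisual = np.array([0.70, 0.65, 0.74, 0.71, 0.69, 0.72])

t, p = stats.ttest_rel(audiovisual, audio_only)
print(f"Mean AV advantage: {np.mean(audiovisual - audio_only):.3f}, t = {t:.2f}, p = {p:.4f}")
```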
A sound suggestion would be to test this cross-culturally. In the said study, the language pair used was Spanish-Catalan. How applicable would this be to other languages?
For example, let's look at Chinese, a language very close to my heart. In Chinese, there is such a thing as intonation: the pitch and speed can affect the meaning of a word. Two different words can be "spelled out" completely the same way (although spelling in Chinese is a different matter) but mean different things because of their intonation. "Mai" can mean both buy and sell depending on the intonation: mai with a stress means sell, while saying it as if you were asking a question means buy. The question now is whether visuals would be able to enhance this, given that Chinese is so strongly an auditory language. If you watch a Chinese person say "mai," the lip movements would probably look very, very much alike in both cases. How then is this study applicable to that?
Funny, because even after taking a course in psycholinguistics (Psychology 145), I did not know this. I did not think that visuals like these had such a profound effect on comprehension. A whole chapter on this could be added to the textbook we used for that course.
I think a great application of this study is for those who are hearing impaired. Since we now know that visual speech information such as articulatory gestures can greatly enhance the comprehension of spoken messages (which fits with the motor theory of speech perception), we could devise a system that focuses on a person's mouth or something (I'm just thinking out loud). For example, in the news, there could be a window in the lower part of the screen that zooms in on the mouth of the reporter. By doing so, people who are hearing impaired could look at the mouth, which would enhance their comprehension.
Also, this would serve people who do not have a hearing disability as well. Companies that make instructional materials for learning languages (such as Rosetta Stone) could apply the implications of this study. They should stop producing materials that are purely audio (such as learning CDs) and focus on materials that are audiovisual in nature. Similarly, they could include a small window zoomed in on the mouth of the main speaker in their instructional audiovisual materials.
Furthermore, for those who are trying to learn a new language, it may be a good idea to look at the mouths of people who speak that particular language. This study is actually very much perfect for me, since I am currently taking Spanish 10 this semester. That means I should look at my Spanish professor's mouth while she talks, for it might actually make me speak better Spanish! ¡Voy a intentarlo! (I will try that!)
SOURCE:
Navarra, J., & Soto-Faraco, S. (2007). Hearing lips in a second language: Visual articulatory information enables the perception of second language sounds. Psychological Research, 71, 4-12.
Friday, September 24, 2010
From the very first time I laid my eyes on you boy, my heart said follow through…
Although I am pretty troubled with the validity of the effect of sex in judging attractiveness (were the participants all heterosexual?) and the heterosexist bias that the physical attractiveness of females is a judgment relevant only to males, I was impressed with the implications of the study. It is certainly interesting how much information we can get and transmit just by LOOKING at people, and how much this acquired information affects our perceptions. "Decoding the language of the eyes streamlines the complex process of everyday social interaction. It is an ability that lies at the very heart of human social cognition." So much can be assumed just from these looks!! Gaze direction also influences person construal, because it moderates our social attention. If someone or something is interesting, we direct our gaze in its direction. This signaling of the locus of attention conveys information about its importance to the perceiver. Furthermore, patterns of gaze direction, or shifts in gaze, signal changes in social attention, which carry implied social meaning. If you make eye contact with someone, and then the person hurriedly looks away and does not look back, how do you perceive this person?

The researchers concluded that "gaze shifts modulate people's evaluations of others, and that this effect is shaped by the interplay of several factors, which includes the status of the target (i.e., cue provider), the identity of the perceiver, and the nature of the judgment under consideration." They emphasized that the judgmental context, or the relationship that exists between the perceiver and the target, changes how gazes affect our person perception. Gaze shifts are sensitive to context. This makes a lot of sense to me and is such a significant note to remember. Our perceptions of people, and of any other stimuli really, are always taken in light of the context of the situation or environment they are in, and they are subject to the current situation of the person perceiving them, which includes past experiences, memories, values, beliefs, biases, etcetera. It is so interesting to me how nothing occurs by itself: everything dynamically interacts with everything else, and these elements cannot be separated from each other.
It is so cool that decoding these eye cues comes so naturally to us. It is so automatically hardwired in our social brains that the eyes and their gazes carry a lot of meaning. It is so amazing how our brains are equipped with the kind of information processing system that can figure out the meanings of such subtle cues! We are truly a beautifully constructed species.
So, remember to think before you look --you never know what message you're signaling from your gaze.
Source: Mason, M. F., Tatkow, E. P., & Macrae, C. N. (2005). The look of love: Gaze shifts and person perception. Psychological Science, 16(3), 236-239.
Saturday, September 18, 2010
Cristina Menchaca, 2007-49018
Try jealousy.
Why jealousy? We tend to prioritize emotional stimuli to the extent that our visual awareness of nearby, non-emotional stimuli is impaired. Various studies have shown that our close relationships with others, one of the primary contexts for the experience of emotion, can affect our moods, behaviors, and health. Putting these two ideas together, the question the researchers sought to answer is: can fluctuations in perceived social context affect us to the extent of influencing our visual processing of the world?
The idea for this study came from a previous study done on women, which tried to determine whether or not the presence of social support within the context of a romantic relationship decreases affective reactivity to an emotionally aversive stimulus, such as an electric shock. The study showed that it does: holding a person's hand reduces threat-related activity in the brain, with greater attenuation when the hand was the woman's husband's. Additionally, how much the threat was attenuated depended on the husband-wife relationship: more attenuation correlated with higher self-reported marital satisfaction. Based on this study, the researchers then considered the opposite. Could a perceived threat to the relationship induce a heightened state of sensitivity (such as anxiety or unease) to emotionally aversive cues? This would mean that, possibly, fluctuations in security regarding one's romantic relationship can literally affect how one sees or perceives the world.
Two studies were conducted. Heterosexual couples were recruited, and the tests were administered to the females, who had to search for a target within a sequence of fleeting images while trying not to be distracted by a neutral or emotional picture that would appear. (Typically, it is harder to report a target when an emotional distractor appears before or right after the target than when the distractor is neutral.) For the set-up with perceived relationship threat, the females performed the task while their male partners rated the attractiveness of landscapes, and then the attractiveness of other, romantically accessible women. Since the relationship threat manipulation could affect each woman differently, they were all asked at the end of the experiment to report how uneasy they were about their partners rating other women. This was then correlated with emotion-induced blindness (the women's task performance). Having the males rate landscapes also ensured substantial 'practice' before the relationship threat manipulation. These ratings were also included in the analyses so that the researchers could be sure the results really came from the relationship threat manipulation and not from individual differences.
Results showed that the degree to which women reported unease about their partners rating the attractiveness of other women was significantly and inversely correlated with target-detection accuracy following a negative distractor, but not with accuracy following neutral distractors or on trials with no distractors. In other words, there was a significant correlation with emotion-induced blindness caused by negative distractors. One might predict from these results simply that a heightened state of anxiety increases general distractibility, but in both experiments self-reported unease correlated with performance decrements induced by distractors considered emotionally negative, not with neutral distractors, erotic distractors, or baseline conditions. Also, unease about having one's romantic partner rate the attractiveness of other women correlated with emotion-induced blindness only while the partner was rating other women, and not while he was rating the attractiveness of landscapes. This suggests that the heightened sensitivity to emotional distractions was a function of the effectiveness of the relationship threat manipulation, not simply a function of a more general association between trait anxiety and a stronger bias to attend to emotional information.
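Here is a minimal sketch of the key correlation described above, with made-up numbers; the actual scales, sample, and analyses in the paper may differ.

```python
# Hypothetical data: self-reported unease vs. target-detection accuracy after
# negative and after neutral distractors.
import numpy as np
from scipy import stats

unease             = np.array([1.0, 2.5, 3.0, 4.5, 5.0, 6.5, 7.0])
acc_after_negative = np.array([0.82, 0.78, 0.74, 0.70, 0.66, 0.60, 0.58])
acc_after_neutral  = np.array([0.85, 0.84, 0.86, 0.83, 0.85, 0.84, 0.86])

# The study's pattern: a significant negative correlation for negative
# distractors, but no reliable relationship for neutral ones.
print("Negative distractors:", stats.pearsonr(unease, acc_after_negative))
print("Neutral distractors: ", stats.pearsonr(unease, acc_after_neutral))
```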
Why did the researchers test only women? Presumably to work with a relatively homogeneous population (or, to eliminate as much noise as possible in ratings of general unease). How would males react if they were the ones who took the test instead? How about males in same-sex relationships, or females in same-sex relationships? Further research on such samples could give us not just an idea of how jealousy affects visual perception but also of how similar or different the results would be for a given gender or sexual orientation. Perhaps age can also be considered. Would there be a difference if the attractive stimuli were older, or younger? That could imply that reactions are very specific to stimuli, just like modules. How about the race of the stimuli? Results from such a study could tell us about what people find attractive, or, at the opposite extreme, it may even give us a picture of racial discrimination, should a certain race evoke no threat to a person. Finally, another interesting thing to consider would be other emotions, which could affect visual perception just as jealousy has been shown to. How about anger, sadness, excitement, etc.?
As many of us know, the language of social relationships is filled with visual metaphor (beauty is in the eye of the beholder, but love is blind). Isn’t it quite interesting that such phrases can actually connect to reality in a concrete way? Social emotions influence us so deeply that they can actually affect our processes in visual awareness, just as jealousy has shown. Who knows how else our visual (even other sensory) processes are affected by our moods. This isn’t just something that advertisers would love to hear and take advantage of, but it has big implications on how we react to and deal with things, and how we should see others in this light. Is love blinding? We have yet to find out, along with other emotions. For now, based on this study, we can actually say that jealousy is blinding.
Source:
Most, S., Laurenceau, J., Graber, E., Belcher, A., & Smith, C. V. (2010). Blind Jealousy? Romantic Insecurity Increases Emotion-Induced Failures of Visual Perception. Emotion, 10(2), 250-256.