Episode 2 – The Science of Happiness

Thank you for taking the time to listen to the second episode of the show. Here are a few references and further reading sources for you:

  1. Lyubomirsky, S., Sheldon, K. M., & Schkade, D. (2005). Pursuing happiness: The architecture of sustainable change. Review of General Psychology, 9(2), 111-131. doi:10.1037/1089-2680.9.2.111
  2. Diener, E. (1984). Subjective well-being. Psychological Bulletin, 95(3), 542-575. doi:10.1037/0033-2909.95.3.542
  3. Diener, E. (1994). Assessing subjective well-being: Progress and opportunities. Social Indicators Research, 31(2), 103-157. doi:10.1007/BF01207052
  4. Diener, E., Suh, E. M., Lucas, R. E., & Smith, H. L. (1999). Subjective well-being: Three decades of progress. Psychological Bulletin, 125(2), 276–302. doi:10.1037/0033-2909.125.2.276
  5. Lykken, D., & Tellegen, A. (1996). Happiness is a stochastic phenomenon. Psychological Science, 7(3), 186-189. doi:10.1111/j.1467-9280.1996.tb00355.x

Stay tuned for the next episode, and please share your experiences on your journey to becoming happier in the comments section below.


Episode 1 – Social Media

Welcome to my podcast: Overdosing on Intellect! This is the first episode of the show, in which I talk about social media. Please subscribe to the show and get notified whenever a new episode is available.

References to the studies I mentioned during the podcast:

  1. Gwenn Schurgin O’Keeffe, Kathleen Clarke-Pearson, Council on Communications and Media, The Impact of Social Media on Children, Adolescents, and Families, Pediatrics, Volume 127, Issue 4, April 2011, Pages 800-804, https://doi.org/10.1542/peds.2011-0054.
  2. Edson C. Tandoc, Patrick Ferrucci, Margaret Duffy, Facebook use, envy, and depression among college students: Is facebooking depressing?, Computers in Human Behavior, Volume 43, February 2015, Pages 139-146, https://doi.org/10.1016/j.chb.2014.10.053.
  3. Lauren E. Sherman, Ashley A. Payton, Leanna M. Hernandez, Patricia M. Greenfield, Mirella Dapretto, The Power of the Like in Adolescence, Psychological Science, Volume 27, Issue 7, May 2016, Pages 1027-1035, https://doi.org/10.1177/0956797616645673.
  4. Amy L. Gonzales and Jeffrey T. Hancock, Mirror, mirror on my Facebook wall: Effects of exposure to Facebook on self-esteem, Cyberpsychology, Behavior, and Social Networking, Volume 14, Issue 1-2, February 2011, Pages 79-83, https://doi.org/10.1089/cyber.2009.0411.
  5. Megan A. Moreno, Lauren Jelenchick, Rosalind Koff, Jens Eickhoff, Depression and Internet Use among Older Adolescents: An Experience Sampling Approach, Psychology, Volume 3, September 2012, Pages 743-748, https://doi.org/10.4236/psych.2012.329112.

You can listen to the podcast below as well:


Communicating research

This has been on my mind for a long time now. Time and time again I have seen news articles about research studies that don’t really correspond to the studies they are supposedly reporting on. Sometimes the title of the news article, or the main point it conveys, is completely different from the one the research study itself tries to communicate. Other times, correlation is interpreted as causation, or a statement is pulled out of the paper without the surrounding statements that explain the conditions under which the results hold true. As a result, I have seen many of us researchers complain about the media not reporting exactly what we meant in our papers.

One of the reasons for this, I believe, is that we don’t give the media enough information about research and its settings. Someone who has never been in that setting, or who doesn’t have a clue what it entails, will not be able to explain it accurately. Unless our journalists are scientists themselves or have been in close contact with scientific research, they won’t be able to report and communicate research accurately to the general public.

The second reason is that our scientists are trained to do wet-lab research or to analyze a specific type of data in a computer program, but not to talk to people outside their field. We are so used to using jargon in our talks and abbreviating long scientific terms in our writing that we forget our audience may not be the same every time.

Having journalists who are trained in science, or who have spent time in a research setting, could help alleviate the first issue. I think I can offer more insight into the second one. One thing researchers need to know is that if others are not reporting your research, or are reporting it wrong, you should do something about it: maybe write a letter to the journalist or the corresponding media outlet, much like the letters you write to scientific journals about the papers they publish, albeit in a different language. You could also work on your lay-person summary skills and write your own version of what your research entails. Another avenue is to volunteer to train journalists on what it is you do in the dark rooms of your lab every day from dawn to dusk.

All in all, we have to make sure that the people who most often end up using our end products know what we are doing about the problems they face every day. We might even be able to raise money for our next project by informing donors about the studies we undertake. I know I’m going to start enriching the tutorial section of my website with related articles, and I will occasionally write about my research, what it entails, and how it can benefit the general public. Let’s make science and research information available to everyone!

Confirmation Bias

This subject has been on my writing list for a long time, and today I finally decided to write about it. Induction is a way of reasoning, but it definitely has its downsides. Suppose I move to a new town and the first few people I meet are all physicians. Does that mean that everyone living in this town is a physician? Of course not. It just means that the people I have met are doctors. However, it may lead me to form the hypothesis that the vast majority of the people living in this town are doctors, which is a legitimate hypothesis. To test that hypothesis, I have to devise a study protocol: randomly select a sample of the people in that town and see whether they are physicians. My main focus in this post is actually the process of random selection.

Statistically, random selection means that everyone in the population has the same chance of being chosen for the sample. If you go around and ask only the people who carry a stethoscope, your sample is biased. If you go around and ask only the people who own a shop, your sample is also biased. This is because there is a high chance that a person with a stethoscope is a doctor and a person who owns a shop is not. And this is when I have good (or at least neutral) intentions!
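To make the difference concrete, here is a minimal sketch in Python; the town size, the 10% share of physicians, and the stethoscope rule are all made-up numbers for illustration, not anything from an actual study:

```python
import random

random.seed(0)

# A made-up town of 10,000 residents where 10% are physicians.
# All numbers here are invented purely for illustration.
population = ["physician"] * 1_000 + ["non-physician"] * 9_000
random.shuffle(population)

def share_of_physicians(sample):
    return sum(p == "physician" for p in sample) / len(sample)

# Random selection: every resident has the same chance of being chosen.
random_sample = random.sample(population, 200)
print(f"random sample estimate: {share_of_physicians(random_sample):.2f}")  # close to 0.10

# Biased selection: only approach people carrying a stethoscope, who are
# far more likely to be physicians than the town as a whole.
stethoscope_carriers = [p for p in population
                        if p == "physician" or random.random() < 0.01]
biased_sample = random.sample(stethoscope_carriers, 200)
print(f"biased sample estimate: {share_of_physicians(biased_sample):.2f}")  # far above 0.10
```

Both samples are drawn “randomly” in the sense that random.sample is used, but only the first one gives every resident the same chance of being included, which is what random selection actually requires.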

If, for whatever reason, I decide to mock a group of people (which I hope I won’t!), I actually try to be biased. More specifically, I use confirmation bias, an informal logical fallacy in which I only seek out information that supports my hypothesis and disregard anything that says otherwise. Unfortunately, many of the new developments on the web are heading this way. For example, personalized ads on Google, suggestions on Netflix or Amazon, and the “Because you liked …, you’ll like …” principle in general are feeding the beast even more. But that’s a subject for another post I’ll probably get to later!

A few weeks ago, I saw the video of Jesse Watters going down to New York’s Chinatown and interviewing people. For those of you who haven’t seen it yet, here is the video:

I don’t know exactly what this correspondent’s actual intentions were, but this is an example of how nonrandom selection skews the results of a study. I don’t want to go into a detailed assessment of the video, but there are a few things I want to point out:

  1. The host says they wanted to sample “political opinion” because China was mentioned 12 times in the first US presidential debate. What he doesn’t say is how those 12 mentions compare with other words or countries mentioned in the debate. He also doesn’t say why New York’s Chinatown was chosen.
  2. The correspondent actually starts the interview with a cultural stereotype.
  3. He doesn’t comment on cultural norms, e.g. the fact that some of the people refer to Hillary as “Clinton’s wife.” I remember reading a piece somewhere on which a gentleman from China had commented, “Please write more.” This is a cultural practice there, roughly meaning “Keep up the great work,” whereas for people from some other cultures it may mean “you haven’t written enough in this piece.”
  4. He has chosen to show some of the people who couldn’t answer the question, apparently because they don’t speak much English.

This is something we see a lot in our everyday lives. I remember, a few years back, there was a debate going on in Iran about which soccer team has the largest number of fans. A TV correspondent went to a stadium to talk to the fans of a specific team, and the people he chose to show on TV were either poorly educated fans or fans who couldn’t speak the country’s official language well. He didn’t comment on what he saw or who he talked to, but the portrayal was enough for many people to start mocking the fans of that team.

It is also important to be clear about the population we have selected the sample from. For example, if the population we are interested in is birds in Australia and we only choose a random sample of 10 birds from Melbourne, our sample is biased (even though it has been selected randomly). However, if the population we were interested in was birds in Melbourne, our sample would be acceptable. Going back to the video above, the host starts by talking about China in general and then goes to New York’s Chinatown. This is where first impressions matter: although they say they have gone to New York’s Chinatown for this interview, you are led to induce that the people interviewed represent Chinese people all over the world.
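As a rough sketch of the same idea, here is a small Python example with invented numbers (the species shares and flock sizes are made up for illustration): a random sample of Melbourne birds describes Melbourne well, but misrepresents Australia as a whole whenever the two populations differ.

```python
import random

random.seed(1)

# Invented bird populations: the share of seagulls differs between
# Melbourne and the rest of Australia (all numbers are made up).
melbourne_birds = ["seagull"] * 6_000 + ["other"] * 4_000       # 60% seagulls
rest_of_australia = ["seagull"] * 10_000 + ["other"] * 90_000   # 10% seagulls
australia_birds = melbourne_birds + rest_of_australia           # ~14.5% seagulls

def share_of_seagulls(sample):
    return sum(bird == "seagull" for bird in sample) / len(sample)

# A random sample drawn only from Melbourne...
melbourne_sample = random.sample(melbourne_birds, 500)
# ...is fine if the population of interest is "birds in Melbourne" (~0.60),
print(f"estimate from Melbourne sample: {share_of_seagulls(melbourne_sample):.2f}")
# ...but biased if we present it as describing "birds in Australia" (~0.15).
australia_sample = random.sample(australia_birds, 500)
print(f"estimate from Australia-wide sample: {share_of_seagulls(australia_sample):.2f}")
```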

And to end the post, here is a video from Big Think on confirmation bias and the effect of first impressions. Please share your views about this in the comments below.

Thanks for reading! 🙂