• I’d be happy to write the methods section (the parts related to the experiment design, programming, recruitment, and online experimentation).

    I may need a few paragraphs from the stimuli creators (Corinne?) on the production process, materials used, actors, etc.

  • Introduction

  • Methods

  • Results

  • Do:

  • Left out:

    • Language selection
    • Dutch
  • The link can be found here:

    • Upon arrival, each user was given an ID. The experiment consisted of 5 parts; tracking was accomplished through the script specified in the MTurk blog.

  • Guilt

    • Amend and approach
    • Focused on the behavior (i.e., one’s actions)
  • Shame

    • Disconnect and withdraw
    • Focused on the self (i.e., one’s own worth etc.)
  • Recruitment and Experiment Environment

  • Experimental Design and Procedure

  • Materials

  • Appendix

  • Study 1

    Goal: To investigate the emotional structure underlying guilt and shame. Uses the actors of Study 2 as participants.

  • Study 2

    Goal: To investigate whether people can distinguish the facial expressions (i.e., nonverbal signals) of guilty and ashamed people.

  • The experiment was programmed in Qualtrics ([[[Footnote: www.qualtrics.com]]]) and administered online. 100 participants were recruited via the online participant recruitment system of the University of Amsterdam and completed the study for course credit. The experiment lasted about one hour. One participant was excluded from the analyses because the data indicated that a technical problem may have caused this participant to skip one of the videos in the experiment [[[FOOTNOTE XJABC]]].

    Although participants were able to take part in the experiment from their personal computers, and therefore completed it in a relatively uncontrolled environment [[[FOOTNOTE JGTRA]]], measures were taken to ensure a good experimental environment and high-quality data. These measures included reaction time recordings and click tracking on each page of the experiment, a user-friendly and distraction-free design, and instructions to users (such as asking them to view the experiment in full screen) [[[FOOTNOTE: The experiment can be reached at abc.com]]]. Optimal screen sizes were also enforced automatically: participants were not allowed to take part if they attempted to join the study on a mobile device with a small screen (e.g., a smartphone). Due to a compatibility problem with one of the features used in the experiment (auto-progression to the next page upon completion of a video), participants using the ‘Internet Explorer’ or ‘Edge’ browsers were also prevented from joining the study and were asked to retry with a supported browser.
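    The screening logic described above can be sketched as follows. This is a minimal illustration only: the actual checks ran in the participant’s browser via Qualtrics, and the User-Agent tokens and minimum screen width used here are assumptions, not the study’s exact values.

    ```python
    def allow_participation(user_agent: str, screen_width: int, min_width: int = 1000) -> bool:
        """Return True if a visitor may join the study.

        Blocks Internet Explorer and Edge (identified by common User-Agent
        tokens) and devices whose screens are narrower than `min_width`
        pixels (e.g., smartphones). Tokens and threshold are illustrative.
        """
        ua = user_agent.lower()
        blocked_tokens = ("trident", "msie", "edge/", "edg/")
        if any(token in ua for token in blocked_tokens):
            return False
        return screen_width >= min_width
    ```

    A rejected visitor would then be shown a message asking them to retry with a supported browser or a larger screen.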

    [[[FOOTNOTE XJABC]]] An additional 47 participants dropped out of the experiment before making any significant progress. Given the nature of online experimental environments, this was considered exploration behavior (i.e., peeking into the study), and these participants were not regarded as dropouts in the traditional sense (e.g., being unable to continue the experiment).

    [[[FOOTNOTE JGTRA]]] [[[DRAFT]]] In numerous studies, data collected via MTurk have been shown to be of high quality and as reliable as data collected in offline lab studies (Buhrmester et al., 2011; Horton, Rand, & Zeckhauser, 2011; Paolacci, Chandler, & Ipeirotis, 2010).

  • Following an introduction to the study, an informed consent form, and instructions, participants were presented with the experimental paradigm.

    The experimental paradigm was a video-watching task in which participants were asked to guess the intentions and emotions of actors who talked to a webcam for 45 seconds. The actors were volunteers who were asked to talk about an experience that had made them feel neutral, ashamed, or guilty, depending on the condition. Sound was removed from the videos, so participants had to guess the actors’ emotions and intentions from visual cues alone. To standardize participants’ experiences and expose them to the same stimuli as much as possible, all video controls were also removed from the experiment; participants were therefore unable to pause or skip parts of the videos. Furthermore, each video could be viewed only once, and when a video finished, the experiment automatically proceeded to the next section.

    The experiment was a repeated-measures design with 3 conditions: guilt, shame, and neutral. Each participant was presented with sixteen 45-second video clips featuring 8 actors in total. In 8 of these videos, actors reported a neutral experience; in 4, a guilty experience; and in 4, a shameful experience. The presentation order was randomized such that no actor appeared twice in a row. It should also be noted that all actors in this experiment were female.
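    The no-repeat randomization constraint can be sketched as follows. This is a hypothetical illustration using rejection sampling; the experiment itself implemented the constraint within Qualtrics, and the stimulus labels below are made up.

    ```python
    import random

    def shuffle_no_repeat(videos, max_tries=10_000):
        """Shuffle video clips so that no actor appears twice in a row.

        Simple rejection sampling: reshuffle until the constraint holds.
        With 8 actors appearing twice among 16 clips, a valid order is
        found after only a few attempts on average.
        """
        for _ in range(max_tries):
            order = random.sample(videos, len(videos))
            if all(a["actor"] != b["actor"] for a, b in zip(order, order[1:])):
                return order
        raise RuntimeError("could not satisfy the no-repeat constraint")

    # Hypothetical stimulus set: 8 actors, each with one neutral clip and
    # one emotional (guilty or shameful) clip, for 16 clips in total.
    videos = [{"actor": a, "clip": c} for a in range(8) for c in ("neutral", "emotional")]
    order = shuffle_no_repeat(videos)
    ```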

    After each video, participants were asked to rate how likely it was that the actors had certain intentions (e.g., ‘to apologize’, ‘to approach’) and feelings (e.g., ‘angry’, ‘sad’, ‘guilty’, ‘ashamed’) on scales ranging from 0 (“Definitely no”) to 10 (“Definitely yes”).

    At the end of the video block, to check for possible effects of likeability, participants were shown pictures of all 8 actors and asked to rate how attractive and likable they found each of them on 7-point scales.

    As additional control measures, following the experimental (i.e., video) block, participants were also asked to complete the Amsterdam Emotion Recognition Test (AERT; [[[REFERENCE]]]) and the Interpersonal Reactivity Index (IRI; [[[Davis, 1983]]]). These measures were intended to serve as a baseline for testing whether participants were able to accurately identify emotions.

    Finally, participants completed a demographics block and were asked, in an open question, whether they had experienced any issues (e.g., glitches) during the experiment.

  • Stimuli

  • Video Evaluation Questionnaires

  • Likability Check

  • AERT

  • IRI

  • Demographics

  • Participant Feedback Form

  • #1
    Video Introduction Text

    In this research we want to investigate the content of first impressions. We ask you to watch 16 different video clips of people speaking and imagine that you are having a conversation with them on Skype.

    The persons you will see will be talking about a specific event in their lives. The same person may talk about more than one event, and thus appear more than once. While you are watching the video clips, you will hear no audio and therefore will be unable to identify exactly what they are talking about.

    After each video clip, we will ask a number of questions to ascertain whether you are able to identify what the person was trying to communicate, and what emotions this person felt while talking.

  • #2 VIDEO INSTRUCTIONS

    In the next screen, you will see the first video clip. Each video clip lasts 45 seconds.

    Please bear in mind that there will be no playback controls and you will not be able to rewind or replay the videos. Therefore it is important for you to keep your attention on the videos as they are being played.

    Also, in order to prevent accidentally skipping a video, the next button will appear only after a video has finished playing.

  • #3 qInt

  • #4 qEmo

  • Manipulation Checks for Antecedent Conditions

    Ratings of the items below were compared in a MANOVA; the antecedent conditions were successfully induced.

    • Inappropriate behavior
    • Transgressing personal norms
    • Hurting a person

    Ratings of transgressing one’s own norms were the same in the guilt and shame conditions.

    Behaving inappropriately in public was rated higher in the shame condition.

    Hurting a person physically or psychologically was rated higher in the guilt condition.

  • Shame and Guilt Before and After Talking

    • Guilt: Higher before talking, lower after talking

    • Shame: Lower before talking, higher after talking

  • Targets

    • Most frequent target for both groups: Mother
    • Strangers: Only in shame, none in guilt
  • Self-evaluation

    Guilt:

    • More responsible
    • More like a bad person
    • Felt more like their behavior was atypical
  • Action Tendencies

    • Guilt
      • More reparation
    • Shame
      • Slightly more awkwardness
      • But did not withdraw more as expected
  • Other Questions

    • Shame
      • Harder to talk about
  • Corinne:
    A quick update and question for Can:

    There are indeed 100 participants in (0.1)GS_combined, and 99 in (0.2)GS_unnecesary-columns-omitted and the version 3 files. The missing participant’s ID is R_2sciORhXxUX1bmc.
    They have no responses to the items for N1A1, but I don’t see anything unusual about the responses that are there.

    Was this participant excluded because of the incomplete data, or was it just a mistake?

  • Corinne
    Hi Agneta -

    That sounds like a reasonable plan.

    Can noticed that the number of participants overall didn’t match - expected 100, have data for 99. I generated the 0.3 file from his [0.2]GS_unnecesary-columns-cleaned.xlsx file, and haven’t had a chance to track down why the difference exists yet, but am planning to investigate this weekend, before continuing with analyses.

    -Corinne

  • Can:
    1 - Participants whose IDs were not recorded in every part of the experiment were removed as dropouts. The number of participants in the first part of the experiment (welcome and introduction) was 147, whereas the number in the last part (demographics) was 100. Therefore, 47 participants were marked as dropouts and removed from the analysis. (You seem to have one fewer participant [i.e., 99 total]; I’m not sure why this difference appears in your analysis.)
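    The dropout accounting described above can be sketched as follows. This is illustrative only: the IDs are made up, and only two of the experiment’s five parts are shown.

    ```python
    def split_complete_and_dropouts(parts):
        """Given one set of participant IDs per experiment part, keep only
        IDs recorded in every part; anyone who started but is missing from
        some part counts as a dropout."""
        complete = set.intersection(*parts)
        dropouts = parts[0] - complete
        return complete, dropouts

    # Toy IDs mirroring the counts in the text: 147 started, 100 finished.
    welcome = {f"R_{i:03d}" for i in range(147)}
    demographics = {f"R_{i:03d}" for i in range(100)}
    complete, dropouts = split_complete_and_dropouts([welcome, demographics])
    ```

    Applying the same intersection over all five parts would flag any participant whose ID is missing from an intermediate part, which is one way the 100 vs. 99 discrepancy could be tracked down.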

    • The videos used in the experiment were recordings of individuals who were wearing headphones and talking to a webcam in a cubicle.

      [[[CORRINE]]]:
      In order to record the videos used in the study, [[[XX]]] volunteers were recruited and [[[YY]]] videos were captured as part of [[[ZZZ]]] study. [[[ Volunteers were instructed to… ]]]

      For the purposes of the current study, a subset of 16 videos was produced from this larger collection by selecting 16 45-second segments. During video processing, audio tracks were also removed from the videos [[[OR was audio never recorded in the first place?]]].

      The resulting 16 video segments featured 8 (female) actors and consisted of 8 videos in which volunteers reported a neutral experience, 4 videos with a ‘guilty experience’, and 4 with a ‘shameful experience’.

      The videos were presented in random order, with the constraint that no actor appeared in two consecutive videos.

      The introduction and instruction text used for videos can be found in [[[appendix 1]]].

    • After watching each video, participants were told to imagine themselves in a Skype call with the actor and were asked to rate 9 possible intentions (e.g., ‘to withdraw from conversation’, ‘to ignore you’) and 10 possible emotions (e.g., ‘happy’, ‘relieved’, ‘guilty’, ‘ashamed’) of the actors on 10-point scales (1 = ‘Definitely no’; 10 = ‘Definitely yes’) set in a matrix ([[[see: appendix INT/EMO]]]). In the latter questionnaire, the inclusion of items other than guilt and shame was intended to provide a more natural evaluation experience that was not exclusively focused on the target emotions. The order of appearance of the items was randomized.

    • To probe for possible effects of likeability, participants were asked how attractive, kind, and pleasant they found each actor, based on a picture of that actor taken from the videos. The questions were answered on a scale ranging from 0 (‘not at all’) to 6 (‘completely’) ([[[see: appendix |Attractiveness]]]).

      Both the order of appearance of pictures and choices were randomized.

    • As a control measure, participants’ emotion detection performance was tested using the Amsterdam Emotion Recognition Test (AERT) [[[REFERENCE NEEDED]]]. The AERT consists of 24 photographs in which the faces of actors depict different emotions. [[[Explain further]]].

      The participants were presented with the pictures in a randomized order and asked to choose one emotion out of seven possible options. For each picture, these options were ‘fear’, ‘contempt’, ‘shame’, ‘anger’, ‘sadness’, ‘disgust’, and ‘something else’.

      The order of appearance of the pictures, as well as the order of all choices except ‘something else’, was randomized.

    • To further probe participants’ emotion interpretation tendencies and biases, the Interpersonal Reactivity Index [[[IRI, (Davis, 1983)]]] was used. The IRI is a widely used tool for measuring perspective taking, fantasy proneness, empathetic concern, and proneness to personal distress in interpersonal situations, with four corresponding subscales. Example items are “I sometimes try to understand my friends better by imagining how things look from their perspective.” and “Before criticizing somebody, I try to imagine how I would feel if I were in their place.” The scale consists of 28 items answered on five-point scales (1 = ‘Does not describe me well’; 5 = ‘Describes me very well’). The order of appearance of the IRI questions was not randomized.

    • Demographic questions were related to age, sex, gender, nationality, and first language of participants.

    • At the end of the experiment, participants were asked whether they encountered any errors or glitches during the experiment.
