Portraits in straw: A reply to Melinder et al. (2020)
2021 · Open Access
DOI: https://doi.org/10.1002/acp.3791
We recently published a study of extended forensic child interviews in Norway which included a large sample (n = 207) of interviews with preschool-aged children (Baugerud et al., 2020). Criticizing the study, Melinder, Magnusson, Ask, et al. (2020) make the bold assertion that: “when [the reader adopts] a different perspective and contextual knowledge [it] change the conclusions.” We were curious to read their reinterpretation of our findings but were left disappointed. Melinder et al.'s comments comprised a mixture of unsupported claims, factual errors, misrepresentations of the study and the Norwegian legal system, and mistaken assumptions about the coding practices we employed. In what follows, we address their comments and correct their errors, organized according to their own subheadings.

Melinder et al. suggested that we are not familiar with the Convention on the Rights of the Child (CRC) (1989), incorporated into Norwegian law in 2002 (Križ & Skivenes, 2017), writing: “In Baugerud et al. (2020), the legal requirement for Norwegian police interviewers to provide information to children about the purpose of the police interview was not acknowledged” (p. 2). In fact, Norwegian law enforcement officers are obliged to act in accordance with the CRC with respect to children's right to this information, but specific guidelines outlining how children should be informed about the purpose of the interviews in which they are involved have yet to be implemented (Gording-Stang, 2020; Ministry of Justice and Public Security, 2020).

Melinder, Magnusson, Ask, et al. (2020) then go one step further. Not only did they assume the existence of certain interviewer utterances, but they also incorrectly asserted that those utterances had been miscoded, claiming: “Consequently, the context of how interviewers introduce the topic was not considered and the aforementioned important ethical practices were scored as suggestive information/questions” (p. 2). Because Melinder et al.
have no access to contemporary forensic interviews of preschool children from the various police districts in Norway, their ideas about the content of those interviews are purely speculative, but their comments motivated us to re-examine a subset of the interviews we studied. When we examined a randomly selected 33% (n = 69) of the 207 investigative interviews, we found, contrary to their speculations, that no information regarding the purpose of the interview was provided in 62 interviews, and that the utterances providing this information in the remaining seven were both vague and, contrary to Melinder et al.'s assumptions, coded as facilitators rather than as suggestive questions.

They go on: “Late in the interview, the interviewer may therefore ask the child about other evidence present in the case—as part of the typical legal procedure—after the child has been given the opportunity to provide free narratives” (p. 2), and that “all of these mandated questions were likely to be scored as suggestive” (p. 2). Again, this is speculation based on faulty presumptions. In a recent study, we sampled investigative interviews of preschool children (n = 71) to ascertain at what point during the interview children were asked questions relating to themes not previously mentioned by the child. A time-series analysis revealed that such questions were distributed throughout the investigative interviews, with most being asked early rather than late in the interview (Johnson et al., n.d.). Clearly, Melinder et al.'s allegations that we misused the scoring system were serious but without any basis in reality. Their article provided no justification for their assertion.
All interviews were coded by a coder blind to the study's hypotheses, a procedure that is the recommended practice in the field (Brown, Lewis, Stephens, & Lamb, 2017; Teoh & Lamb, 2013) and is in fact considered the “gold standard” for avoiding confirmation bias in research (Lilienfeld & Waldman, 2017; Robertson & Kesselheim, 2016). The scoring system we used was, in fact, the same as that employed by Melinder, Magnusson, and Gilstrap (2020, p. 8) in a recent analysis of 33 investigative interviews with 3- to 15-year-old children: “Interviewer codes were developed from the Lamb et al. (1996, 2011) NICHD scoring protocol and the method used in a previous evaluation of Norwegian child interviews from 2002 to 2010 (Johnson et al., 2015).”

Melinder, Magnusson, Ask, et al. (2020) objected to our conclusion that adoption of the SI model has not presaged improvements in the quality of questioning employed by forensic interviewers, claiming that we had both misunderstood the principles underlying the model and categorized questions and utterances arbitrarily and inappropriately. These objections lack merit. Our scoring system was based on well-established scoring procedures emphasizing the value of non-suggestive encouragement of children to give further and more specific details about forensically relevant events. “Facilitators” are non-suggestive supportive utterances which are recommended in most “best practice” models or protocols today; they are not in any way unique to the SI model. However, because facilitators do not specifically request information, they were excluded from the analyses focused on various kinds of information-seeking prompts, in accordance with the practice adopted in a number of recent studies (Hershkowitz et al., 2017; Otgaar et al., 2019; Price et al., 2016). Nevertheless, the paper clearly identified the frequency with which the interviewers used facilitators. Melinder, Magnusson, Ask, et al.
(2020) then claimed that “The SI model is built around the premise that the use of breaks helps the child rest, calm down, and supports attention and memory (Langballe & Davik, 2017)” (p. 3). This statement is problematic in several respects. First, the SI model is built around the assumption that it is beneficial to have multiple breaks when interviewing preschoolers; this may be true, but there is no empirical evidence to support it. A recent experimental study, co-authored by two of our critics, compared an “adapted” version of the NICHD protocol with the Norwegian SI model, introducing a single break in the latter condition, and found no advantage of the SI strategy (Magnusson et al., 2020). In fact, the actual NICHD Protocol has long recommended that interviewers take at least one break (Lamb et al., 2011, 2018), and two recent papers have explored the dynamics of multi-session protocol interviews (Blasbalg et al., 2020; Hershkowitz et al., 2021). None of the interviewers in Magnusson et al.'s study were appropriately trained to use either interview technique, so the generalization of the findings to field contexts is limited. In fact, only one study, our own, has investigated the effectiveness of adopting the SI model; the effectiveness of multiple breaks remains unexplored.

Melinder, Magnusson, Ask, et al. (2020) further objected that “differences in questioning across different interview sessions (separated by breaks) should have been considered in the statistical analyses” (p. 3), citing a study by Hershkowitz and Terner (2006). In that study, 40 alleged victims of abuse, aged 6–13 years, were re-interviewed and asked free-recall questions after a single 30-min break. Hershkowitz and Terner (2006, p. 1141) found that the children showed signs of fatigue and had difficulty focusing their attention after the break. It is thus entirely unclear how these findings support the value of multiple breaks when interviewing even younger children.
Melinder, Magnusson, Ask, et al. (2020) further noted that some studies of child forensic interviews employ sequential analyses to explore their interactive dynamics. We are well aware of this approach, which we have employed (Ahern et al., 2014; Johnson et al., n.d.) and will continue to employ when relevant to our research questions. Such methods were, however, not appropriate for the study under discussion, whose purpose was simply to examine the types of questions asked by interviewers.

Melinder, Magnusson, Ask, et al. (2020) called for a theoretical rationale for dividing the sample into age groups, and they questioned the practice of presenting results by age group rather than on a continuous age scale, although they did not state what age intervals might be more appropriate. These are rather curious objections for two reasons. First, the paper was not a study of child development; our focus was upon the performance of the adult interviewers, not that of the children. Second, sorting children into age groups is a long-standing and well-accepted practice in this field of research, even when the performance of the children is the focus (Ceci et al., 1994; Leichtman & Ceci, 1995), and most major studies of child interviews sort the interviewees into age groups (Eisen et al., 2002). Any age grouping (even into year groups) is somewhat arbitrary, depending upon the age range of the children studied and the number of children included in the study (Hershkowitz et al., 2012). In the target study, we were particularly fortunate to have access to an unusually large sample of interviews covering a narrow age range. This permitted us to analyze patterns of questioning in some detail, and we chose to compare age groups that each included substantial numbers of interviews. Melinder, Magnusson, Ask, et al. (2020) further argued that the label “preschool” is misleading because our sample included 6-year-olds, some of whom might be in school.
In fact, only two of the children (less than 1% of the sample) were in school at the time of the interviews studied, making “preschool children” a sufficiently accurate shorthand label. In any case, Melinder et al. failed to explain why the choice of age intervals or the overall description of our sample should invalidate our results and conclusions.

Melinder et al. did not offer any evidence to support these two accusations. Indeed, as noted above, it was their critique, rather than our study, that appeared to misunderstand and misrepresent the Norwegian legal system. Further, ours was one of many studies, undertaken by researchers all over the world, documenting how little investigative practices had changed despite consensus about the best practices that ought to be employed. The results complemented and extended findings based on analyses conducted in Norway over the last 30 years, including those co-authored by Melinder (Thoresen et al., 2008; Johnson et al., 2015). It is entirely unclear what possible conflicts of interest Melinder et al. sought to imply.

In sum, Melinder, Magnusson, Ask, et al.'s (2020) critique appears unwarranted, informed by an incorrect reading of our paper and by unfounded beliefs about the content of Norwegian field interviews of preschool children to which they have had no access. As noted above, their commentary provided no justification whatsoever for their allegations. We are surprised that these errors and misunderstandings escaped the otherwise sharp eyes of the journal's reviewers.

Data sharing is not applicable to this article as no new data were created or analyzed in this study.