Comparing Design Team Self-Reports with Actual Performance: Cross-Validating Assessment Instruments
· 2002
· Open Access
· DOI: https://doi.org/10.18260/1-2--10043
· OA: W83659694
NOTE: The first page of text has been automatically extracted and included below in lieu of an abstract.

Session 2630

Comparing Design Team Self-Reports with Actual Performance: Cross-Validating Assessment Instruments

Robin Adams [1], Pimpida Punnakanta [1], Cynthia J. Atman [1,2], Craig D. Lewis [1]
[1] Center for Engineering Learning and Teaching, [2] Department of Industrial Engineering, University of Washington

Assessing student learning of the engineering design process is challenging. Students’ ability to answer test questions about the design process or to record their design activities may differ significantly from their actual performance in solving “messy” open-ended problems. In the Pacific Northwest, multi-university participants in a National Science Foundation supported project (Transferable Integrated Design Engineering Education, TIDEE) have implemented and disseminated a Mid-Program Assessment instrument for assessing engineering student design competency. One part of the instrument requires student teams to document (i.e., self-report) their design decisions and processes while engaged in a design task. These written self-reports are scored using a rubric that has demonstrated high inter-rater reliability. We are interested in comparing the scores derived from these self-reports with measures of actual design performance. Our research method for analyzing design performance is verbal protocol analysis. In this study, eighteen teams of students (2-6 students per team) from four different institutions were videotaped as they completed the TIDEE Mid-Program Assessment. In this paper we provide 1) a description of the assessment instrument, 2) our research methods for assessing the validity of the instrument, 3) examples of comparing self-reports to performance, and 4) a summary of our findings. We conclude with a discussion of the strengths and weaknesses of this study, as well as implications for teaching and assessing engineering student design competency.

Introduction

To compete in an increasingly global economy, the education of tomorrow’s engineers needs to emphasize competency in solving open-ended engineering design problems. This theme is evident in the growing collaboration among accrediting agencies, industry, and federal funding agencies to support research on the assessment of student learning and to encourage excellence in curriculum and pedagogy that provides exposure to engineering practice [1-3]. The implementation of the new ABET EC 2000 criteria [4] also makes it necessary for engineering programs to identify, assess, and demonstrate evidence of design competency. These changes in accreditation have expanded the goal of assessing student learning outcomes to include making judgments about curricula and instructional practices, with an aim toward continual improvement.

Assessing student learning of the engineering design process is particularly challenging, and efforts to assess design competency are varied [5-6]. Survey-based examples include self-assessments of abilities and knowledge [7-8] and peer-based instruments in which students assess the competency of their peers [9-10]. Examples of performance-based assessments include: juries where
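To make the cross-validation idea described in the abstract concrete, the sketch below illustrates the two kinds of comparison involved: inter-rater agreement on the self-report rubric and agreement between rubric scores and protocol-derived performance measures. This is not the authors’ actual analysis; the team scores, the rating scale, and the choice of statistics (weighted Cohen’s kappa and Spearman rank correlation) are all illustrative assumptions.

```python
# Minimal sketch of the two comparisons described in the abstract.
# All scores below are invented for demonstration; only the general
# approach (rater agreement, then score-vs-performance correlation)
# reflects the paper's stated goal.

from sklearn.metrics import cohen_kappa_score
from scipy.stats import spearmanr

# Hypothetical rubric scores (1-5 scale) assigned to each of the
# eighteen teams' written self-reports by two independent raters.
rater_a = [3, 4, 2, 5, 4, 3, 2, 4, 5, 3, 4, 2, 3, 5, 4, 3, 2, 4]
rater_b = [3, 4, 3, 5, 4, 3, 2, 4, 4, 3, 4, 2, 3, 5, 4, 2, 2, 4]

# Hypothetical performance measures derived from verbal protocol
# analysis of the same teams (e.g., a normalized process-quality score).
protocol_scores = [0.62, 0.71, 0.40, 0.88, 0.75, 0.58, 0.35, 0.70,
                   0.80, 0.55, 0.68, 0.42, 0.60, 0.85, 0.73, 0.50,
                   0.38, 0.69]

# Inter-rater reliability of the rubric; quadratic weights treat the
# rubric scale as ordinal, so near-misses count less than large gaps.
kappa = cohen_kappa_score(rater_a, rater_b, weights="quadratic")
print(f"Weighted Cohen's kappa (rater agreement): {kappa:.2f}")

# Consensus rubric score per team, compared against the protocol-based
# performance measure via a rank correlation.
consensus = [(a + b) / 2 for a, b in zip(rater_a, rater_b)]
rho, p_value = spearmanr(consensus, protocol_scores)
print(f"Spearman rho (self-report vs. performance): {rho:.2f} (p = {p_value:.3f})")
```

A high kappa would support the rubric’s reliability claim, while a strong positive rho would indicate that written self-reports track what teams actually did during the design task; a weak rho would suggest the two instruments capture different aspects of design competency.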