MakEval

While researchers advance understanding of various aspects of makerspaces and the growth of making in education, we lament the lack of robust tools for research and evaluation of these experiences. Although we have used existing surveys and observation protocols for evaluating STEM learning in other projects (Maltese & Harsh, 2015; Simpson, Burris, & Maltese, 2017), the reality is that making differs from many other contexts, and a suite of maker-specific tools is warranted.

This issue is particularly salient for educators seeking to gather data that justify their use of making to administrators and funders. Notably, it affects individual teachers and small museum staffs as much as science centers with full evaluation teams: no one has had time to focus on developing tools while also launching or expanding programming to meet the needs of the people they serve. As maker programs expand around the country, basic research and evaluation tools are needed to gauge program impact and lay the foundation for understanding key program elements.

To that end, our team is creating suites of tools—including surveys, assessments, and observation protocols—that provide educators, researchers, and program administrators with information to evaluate maker programs and experiences with youth. Based on survey and interview data from formal and informal maker educators, we identified five key targets for evaluation: creativity; critical thinking and problem solving; agency/independence; involvement in STEM practices; and development of interest and identity in STEM/making.

Although a single tool that could answer the major questions we (and others) have would be ideal, that is unrealistic, and “common instruments” are often too blunt to be of value. As we advance this work, we are guided by a set of features that we aim to address with each tool and across the set:

  • Ability to judge changes over time
  • Standardization across contexts
  • Usefulness for providing both formative and summative feedback
  • Age/developmental appropriateness
  • Ability to gather school/community data that complement tool results
  • Feasibility for educators and administrators with limited time and resources
  • Convincing reliability and validity evidence

We seek an acceptable balance between richness of findings and the pragmatic needs of those who have limited time to determine whether making is “working” for their youth. To create the MakEval suites, we have been reviewing existing instruments, modifying some, and creating new ones to serve our needs. We are currently in the early phases of data collection and analysis with these suites of tools in a variety of making contexts.