Problem-Solving Analysis Protocol (P-SAP)
Welcome! This website introduces our collaborative work in cognitive outcomes assessment. It focuses mainly on the Problem-Solving Analysis Protocol (P-SAP), but you will also find information on our other collaborative projects, including the Cognitive Learning Scale (CLS). Both instruments may be used free of charge for research and assessment purposes. We ask only that you contact either of us, Peggy Fitch (Central College) or Pamela Steinke (University of South Carolina Upstate), to let us know how you are using them. We also encourage you to contact either of us with questions or for feedback on their use.
The Problem-Solving Analysis Protocol (P-SAP) is a written protocol for assessing problem-solving skills that can be easily integrated into the normal activities of a class (Steinke & Fitch, 2003). We created the P-SAP based on research on the cognitive outcomes of service-learning (Eyler & Giles, 1999) and on the reflective judgment framework of intellectual development (King & Kitchener, 1994). The P-SAP presents students with a real-world issue directly relevant to the application of material they are learning in the course; by simply changing the issue, the protocol can be used in a wide variety of classes. Students answer a series of questions about the causes, consequences, and solutions of a problem that arises from the issue. The protocol can be used in class as a graded assignment, an exam question, or an exercise to start discussion. The P-SAP is designed especially to measure cognitive skills developed in academic experiential learning activities such as service-learning, but it can be adapted for other uses. The P-SAP has demonstrated good inter-rater reliability and construct validity with measures of intellectual development and cognitive learning.
The P-SAP supports two distinct assessment uses. First, whether or not the protocol is used as a graded assignment, faculty in the discipline can score a sample of protocols for students' comprehension and application of content knowledge. Many departmental program assessment plans include outcomes about students' ability to apply knowledge, yet faculty members often have difficulty identifying how to assess application.
Second, assessment teams can code the same protocols for more general problem-solving skills related to other intellectual skills such as critical thinking, knowledge transfer, and perspective taking. This second application is scored using the P-SAP rubrics, which measure the locus and complexity of reasoning in relation to problem solving. If you are using the P-SAP for outcomes assessment, you will probably want to use the global coding rubrics, which are easier to score. If you are using the P-SAP for research, you may want to use either the global rubrics or the full set of rubrics. The full set of P-SAP rubrics provides scoring criteria for the two dimensions (locus/source and complexity) separately for each of the four questions in the protocol (problem, cause, solution, and analysis of solution). See examples of low, medium, and high coding for locus and complexity using the global coding rubric with problems from educational psychology and child development courses.
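For teams tabulating full-rubric scores, the structure described above (two dimensions coded across four protocol questions) can be sketched as a simple data structure. This is only an illustrative sketch: the dimension and question names come from the description above, but the 1-3 numeric coding of low/medium/high is an assumption for tabulation purposes, not part of the published rubric.

```python
# Illustrative sketch of tabulating full P-SAP rubric codes.
# Dimension and question names follow the protocol description;
# the numeric 1-3 mapping of low/medium/high is an assumption.

DIMENSIONS = ("locus", "complexity")
QUESTIONS = ("problem", "cause", "solution", "analysis_of_solution")
LEVELS = {"low": 1, "medium": 2, "high": 3}

def summarize(codes):
    """Average each dimension across the four protocol questions.

    `codes` maps (dimension, question) -> "low" | "medium" | "high".
    Returns a dict of mean scores per dimension.
    """
    return {
        dim: sum(LEVELS[codes[(dim, q)]] for q in QUESTIONS) / len(QUESTIONS)
        for dim in DIMENSIONS
    }

# Example: a protocol coded "medium" throughout averages 2.0 on both dimensions.
all_medium = {(d, q): "medium" for d in DIMENSIONS for q in QUESTIONS}
```

A spreadsheet serves the same purpose; the point is only that each protocol yields eight codes (2 dimensions x 4 questions) that can be averaged per dimension.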
The Cognitive Learning Pre-Post Scale (CLS) is a pretest-posttest self-report scale for assessing application and depth of knowledge in courses with an experiential learning component and/or high-impact practice. The current version of the scale uses the stem, “In this course, course requirements that went beyond participation in class and assigned readings…” Items were adapted from Eyler and Giles (1999), with additions, for a total of nine items. Examples of items include, “did not greatly enhance my learning in the course beyond what I gain from reading the text and attending class” (reverse scored) and “did help me to see the complexity of real life problems and their solutions.” A pretest version of this scale uses the stem, “Typically, course requirements that go beyond participation in class and assigned readings…” Steinke and Fitch (2007) have argued that because responses are analyzed by comparing pretest and posttest scores, not by the strength of a single self-report, the CLS can provide information beyond typical indirect measures. The CLS has demonstrated good inter-item reliability and construct validity with other measures of cognitive learning, including the P-SAP.
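The pre/post comparison described above can be sketched in a few lines. Everything numeric here is an assumption for illustration: a 1-5 Likert response format, nine items, and the position of the reverse-scored item are not specified above and would need to match the actual scale.

```python
# Illustrative sketch of CLS-style pre/post scoring. The 1-5 response
# scale and which item index is reverse-scored are assumptions for this
# example, not details of the published instrument.

REVERSE_SCORED = {0}  # assumed index of the reverse-scored item
SCALE_MAX = 5         # assumed Likert scale width (responses 1-5)

def score_scale(responses, reverse=REVERSE_SCORED, scale_max=SCALE_MAX):
    """Mean item score for one administration, flipping reverse-scored items."""
    adjusted = [
        (scale_max + 1 - r) if i in reverse else r
        for i, r in enumerate(responses)
    ]
    return sum(adjusted) / len(adjusted)

def pre_post_gain(pretest, posttest):
    """Posttest mean minus pretest mean for one student."""
    return score_scale(posttest) - score_scale(pretest)
```

The analytic point from Steinke and Fitch (2007) is that the quantity of interest is the gain (the difference between the two administrations), not the absolute level of either single self-report.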
- P-SAP Questions and Issues
- P-SAP Template
- P-SAP Global Coding Rubric (revised 12-6-2013)
- P-SAP Rubric (revised 2013)
- P-SAP Coding Examples (Educational Psychology)
- P-SAP Coding Examples (Child Development)
- Cognitive Learning Pre-Post Scale HIPs
- Assessing Service Learning. Pamela Steinke (NC State) and Peggy Fitch (Central College), June 2007. Research and Practice in Assessment, Volume 1, Issue 2.
Eyler, J. & Giles, D.E., Jr. (1999). Where’s the learning in service-learning? San Francisco: Jossey-Bass.
King, P.M. & Kitchener, K.S. (1994). Developing reflective judgment: Understanding and promoting intellectual growth and critical thinking in adolescents and adults. San Francisco: Jossey-Bass.
Steinke, P. & Fitch, P. (2003). Using written protocols to measure service-learning outcomes. In S. H. Billig & J. Eyler (Eds.), Advances in service-learning research, Vol. 3: Research exploring context, participation, and impacts (pp. 171-194). Greenwich, CT: Information Age Publishing.
Steinke, P., & Fitch, P. (2007). Assessing service-learning. Research & Practice in Assessment, 1(2), 1–8. Retrieved from http://www.rpajournal.com/dev/wp-content/uploads/2012/05/A32.pdf
Additional Relevant Author Publications:
Fitch, P., & Steinke, P. (2013, April). Tools for assessing cognitive outcomes of experiential learning. Workshop presented at the NCA HLC Annual Conference on Quality in Higher Education, Chicago.
Fitch, P., Steinke, P., & Hudson, T. (2013). Research and theoretical perspectives on cognitive outcomes of service learning. In P. H. Clayton, R. G. Bringle, & J. A. Hatcher (Eds.), Research on service learning: Conceptual frameworks and assessment, Vol. 2A: Students and faculty (pp. 57-83). Sterling, VA: Stylus.
Steinke, P. & Fitch, P. (2014). Using goal-based learning to understand why service-learning improves cognitive outcomes. Currents in Teaching and Learning, 7(1). For a link to the archives portion of the website where you can access the issue, go to: http://www.worcester.edu/Currents-Archives/
Steinke, P. & Fitch, P. (2011). Outcomes assessment from the perspective of psychological science: The TAIM Approach. In J. Penn (Vol. Ed.), New directions for institutional research: Measuring complex general education student learning outcomes, 149, 15-26. San Francisco: Jossey-Bass.
Steinke, P., & Fitch, P. (2007). How to measure problem-solving ability: The problem-solving analysis protocol (P-SAP). Toolkit: The nuts and bolts newsletter from Office of Assessment Services, 5(3). Retrieved from http://www.niu.edu/assessment/Toolkit/vol5_ish3.pdf
Steinke, P., Fitch, P., Johnson, C., & Waldstein, F. (2002). An interdisciplinary study of service-learning predictors and outcomes among college students. In S. H. Billig & A. Furco (Eds.), Advances in service-learning research, Vol. 2: Service-learning research through a multidisciplinary lens (pp. 73-102). Greenwich, CT: Information Age.
Steinke, P. & Harrington, A. (2002). Implementing service-learning in the natural sciences. [Electronic version] National Society for Experiential Education Quarterly, 27(3), 4-10.