JCE Pick - Linked Formative Assessment

I used to be big into concept maps as an additional way for my students to show understanding beyond a quiz or test. So much so, in fact, that a few years ago my students had to make a concept map as a mandatory test review. I’ve shied away from that over the past few years in a new school (why? I’m actually not so sure) - probably because I got more information from my students about their level of understanding than I could manage. I’m sure that if I conducted a rigorous assessment, there would be a correlation between how well students could connect ideas and their assessment scores.

In the November issue of the Journal of Chemical Education, the authors of Developing and Implementing an Assessment Technique To Measure Linked Concepts describe their more manageable, quantifiable methods (“closed-ended” tools, versus “open-ended” tools like concept maps) and share some of those tools (in the article itself, not the supporting information - thanks!).

I’m left with these questions for the authors of the article.

  • I’m curious as to why they chose the connections that they did in their probes for students. Why do you want students to make these particular connections? Are all types of connections equally important? The questions on the probes seem to be based on recalling knowledge and skills (depth of knowledge, DOK, levels 1 and 2, per Webb’s Depth of Knowledge Levels). If I administered some of the probes in my classroom, would my results tell me that students can make connections, or that they can recall topics related to one scenario? Hard to say without digging deeper into my students’ thinking.

  • Are connections among DOK levels 1-2 as important as connections among DOK levels 3-4? What is the balance? For instance, as a research scientist, you are expected to make connections among many different areas of the literature to attack your work from a variety of angles (DOK 3-4). Is the assumption that we test connections among DOK levels 1-2 so that students can eventually tackle DOK levels 3-4?

I ask these questions because they are the questions I have about my own practice. I’m working on a post about assessment during a long-term project, and it has taken me forever to write because it’s turned into a monster. Assessment is tough for all of us. There are a lot of layers of student thinking to unpack - the authors spend a lot of time at the end discussing limitations and future work. Ultimately, I appreciate their efforts to remind people like me to engage my students in building meaning and making connections.


Year: 2015
Volume: 92
Page: 1807

Comments (2)

Scott Lewis | Thu, 12/17/2015 - 14:51

Hi,

Thank you for taking the time to review our article. I hope you find our assessment technique useful in the classroom. In response to your questions:

1.  We chose those connections based on our past work with open-ended questions called Creative Exercises (see this open-access article for more information: http://pubs.rsc.org/en/content/articlelanding/2011/rp/c1rp90020j#!divAbs... ). When we gave open-ended questions about different chemical topics, many of the connections students chose to make were used for the MLCs presented in the article. Ideally, we want our students to see all topics connected in a single coherent view of how matter operates, and assessing these connections is a necessary part of that goal.

2.  The DOK levels you describe seem to be hierarchical, with the fourth level the most advanced, so it seems logical that the more advanced the level, the more important the connection (as it presumes students could perform the lower levels). Clearly, the more advanced levels require evaluating student writing (or other forms of discourse) to assess student reasoning. One area of future work we envision is having students select a rationale for each response choice, but even that falls short of having students generate and communicate their own reasoning. Ultimately, if your classroom setting permits, I think closed-ended assessments such as MLCs, paired with open-ended assessments to measure levels 3 and 4, would be desirable.

Thank you again. One final note: we have also found that adding a partial-credit "unsure" option makes the student response pattern much closer to what is seen in conventional multiple-choice questions. If you would like example MLCs in this format, please feel free to e-mail me at: slewis@usf.edu

Tracy Schloemer | Fri, 01/01/2016 - 14:43

Hi Scott-

Thank you so much for your responses! I appreciate that you took the time to read my thoughts and respond to some of my questions (especially with the link to previous work). I hope that in the long run, the work you're doing provides feedback to the teachers you are collaborating with and, most importantly, the students.

Realistically, just as you suggest, a blend of assessment types is desirable...if only there were infinite time in the day.

All the best,

Tracy