
Tinkerplots
Tinkerplots, the Ministry-licensed software used in the session, includes a number of helpful video tutorials that can be accessed from its Help menu.

Here is a message from Dr. John A. Ross:
Hi Paul and Bill,
 
I enjoyed your plenary session at the GAINS conference yesterday. I think you are absolutely on the right track in the
strategies you have developed for engaging teachers with student outcome data. The challenge is substantial and I
encourage you to persevere and to continue to share your experiences with those who are trying to increase wise data use.
 
I especially appreciated your discussion of the longitudinal study in which you compared grade 6 EQAO reading to
grade 9 English marks. Two thoughts came to mind:
(1) how much attrition was there (i.e., could you find/match a reasonable proportion of the kids)? and
(2) how much were the results influenced by differences between the assessments?
 
Attached is an article in press that compares EQAO results to report card grades: both assessments were administered
at about the same time, to the same kids, and measured the same curriculum. Yet there were fairly large differences
in the scores. I wonder how much of the difference in the grade 6-9 scores that you found could be attributed to program
as opposed to test differences?
 
Great stuff; I look forward to hearing more about your work.
 
John
 
The abstract for the article Dr. Ross is referring to.

Response from Paul Costa:

Thanks to Dr. Ross for his encouragement and support for our work on Disaggregating Data to Reach Every Student and also for providing content to this wikispace for our first discussion thread on this topic.

I will respond in this space to his thoughts on the longitudinal study in which grade 6 EQAO reading was compared to grade 9 English marks:

(1) How much attrition was there (i.e., could a reasonable proportion of the kids be found/matched)?

I would say that a reasonable proportion of students could be tracked from grade 6 to grade 9 in our inquiry. This varied with the fidelity of each school's record-keeping, especially in the earlier years. The cohort of students we looked at was in grade 9 in 2006-2007 and in grade 6 in 2003-2004. This was the first cohort to be assigned Ontario Education Numbers (OENs), and the first for which we were able to match records from their high school back to their elementary school. The students we were not able to match either were not registered in a school in our Board when they were in grade 6 or had no record of having participated in EQAO testing at the time. In the future, attrition should decrease, as better records have been kept since the introduction of the OEN, and our methods and tools for tracking students have improved as they move from school to school or enter our Board in later grades. Bill may have further comments on this topic, as he was directly involved with mining the data from our Student Information System (Maplewood) and our Data Warehouse (A-Plus).
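
For readers who want to try this kind of OEN-based matching outside of Tinkerplots, here is a minimal sketch in Python (pandas). The file names and column names are hypothetical, not the Board's actual extracts, and this is not the process Paul and Bill describe; it only illustrates the idea of matching two cohort files on the OEN and measuring attrition.

    import pandas as pd

    # Hypothetical extracts: grade 6 EQAO records and grade 9 marks, keyed on OEN.
    g6 = pd.read_csv("grade6_eqao_2003_04.csv")   # columns: OEN, reading_level, ...
    g9 = pd.read_csv("grade9_marks_2006_07.csv")  # columns: OEN, english_mark, ...

    # Keep only grade 9 students who have a matching grade 6 EQAO record.
    matched = g9.merge(g6, on="OEN", how="inner")

    match_rate = len(matched) / len(g9)
    print(f"Matched {len(matched)} of {len(g9)} grade 9 students ({match_rate:.1%})")
    print(f"Attrition: {1 - match_rate:.1%}")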

(2) How much were the results influenced by differences between the assessments? How much of the differences in the grade 6-9 scores could be attributed to program as opposed to test differences?

Excellent questions, and two thoughts come to mind. First, research requires evidence gathered with much more statistical rigor than our study involved before its conclusions can be acted upon; our goals were more modest. Even so, we reached the same conclusion as Dr. Ross's article: report card grades were consistently higher than the external assessment (EQAO) for grade 6 students and consistently lower for grade 9 students.

Second, our goal in this case was to identify schools where there were significant differences between students' report card grades and their standardized test scores. In the case of mathematics, we used grade 9 EQAO results for comparison as well as grade 6 scores. If a school had a substantial number of students for whom these indicators did not align, we shared this information with the teachers and invited conversation. The differences may have stemmed from programs of instruction or assessment practices that were not aligned with the practices outlined in Ministry documents and grounded in research. If this were the case, that line of reasoning would be brought forward through facilitated conversations with teachers about the differences in achievement results evidenced in the data. Without focusing on exactly how large the difference between the achievement indicators was, our conversations could still move us to action on improving instructional and assessment practices and improving student achievement.
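
A minimal sketch of this kind of school-level flagging, again in Python with hypothetical column names. The one-level gap and the 30% threshold below are illustrative assumptions; the discussion above does not specify the Board's actual criteria for "a substantial number of students".

    import pandas as pd

    # Hypothetical extract: one row per student with both indicators on a common scale.
    df = pd.read_csv("grade6_math.csv")  # columns: school, OEN, report_level, eqao_level

    # Gap between report card level and EQAO level for each student.
    df["gap"] = df["report_level"] - df["eqao_level"]
    by_school = df.groupby("school")["gap"].agg(
        pct_misaligned=lambda g: (g.abs() >= 1).mean(),  # assumed: 1+ level = misaligned
        mean_gap="mean",
    )

    # Schools where a substantial share of students show misaligned indicators.
    flagged = by_school[by_school["pct_misaligned"] > 0.30]  # assumed cut-off
    print(flagged.sort_values("pct_misaligned", ascending=False))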

Dr. Ross's questions do suggest that there is a need for more research in this area. We will have to continue to gather longitudinal data to see whether our locally targeted actions have had an impact on student achievement. I would definitely be interested in seeing statistical evidence that offers insight into, or support for, our targeted actions.

Paul Costa

The importance of using data to inform decision making was very clear. I appreciated the demonstration of the possibilities using Tinkerplots - such specific and very useful information. It demonstrated for me the need to inquire about our Board's IT support providing us with this type of data to inform schools for targeted and structured support. Thank you.


I am cautious about comparisons that link literacy to English only in grade 9. As we know, the OSSLT is a measurement of literacy across the curriculum up to the end of grade 9. The EQAO Framework document outlines where the literacy skills sit in the various subject areas. I can see the usefulness of this for math, where the comparison from grade 6 to 9 is a little more cut and dried, but I would advise people to be careful about using grade 9 English marks as the only comparison. Our message all along has been a cross-curricular approach to literacy at the secondary level. Is there a way to gauge literacy in other subject disciplines and also compare it to grade 6 EQAO? I suspect secondary English teachers would frown upon being targeted for this comparison.

I also question the validity of the data for this comparison, since it would be unfair to say that English is the primary purveyor of literacy. The presupposition that the skills assessed on the grade 6 EQAO are the same skills evaluated in an English classroom is faulty. Prior to the revised English 9-12 curriculum, which is currently being implemented for the first time, those EQAO reading and writing skills were not an explicit part of the expectations for English. It is important to know what specifically the English report card marks are based on before making a comparison, since this data clearly precedes the release of the revised curriculum. Otherwise, an excellent presentation! S. Roberts, Upper Grand

It is important to keep reminding educators that the OSSLT is not an English test. Indeed, the Mosenthal taxonomy and training in question structure (which we think can yield interpretable literacy data for any or all subjects) may provide data to drop into and manipulate in Tinkerplots. We are also looking at other ways to gauge specific literacy skills informally at the classroom level. Lesley


Hi Paul and Bill

I too really appreciated your session on the use of Tinkerplots and seeing the power of disaggregating data to the extent you showed us in the presentation. It inspired me to investigate Tinkerplots further than I have to date and to use it for some research we are doing around our Numeracy Coaching project. Within this project, we are using PRIME Operations tools to assess grade 4-6 students across 4 schools within which our coaches work, and in 8 other schools where we have little or no support from Program Staff. We began this data collection in the Fall of 2006, when we started numeracy coaching in these schools. Testing occurred in October and then again in June (right after EQAO testing was completed). We have again tested the students in the same grades this past Fall. Last year's PRIME results show that the coaching is making a significant difference within those schools compared with the non-coaching schools. We decided to compare these PRIME scores with EQAO levels and found a correlational match, which is very interesting (and likely what we would have imagined would be the case).
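
For anyone wanting to quantify such a correlational match outside of Tinkerplots, a short sketch follows. The file and column names are hypothetical, not the project's real data layout, and the choice of coefficients is the editor's assumption rather than the method used in the project.

    import pandas as pd

    # Hypothetical extract: one row per student with both measures.
    df = pd.read_csv("prime_eqao_2007.csv")  # columns: OEN, prime_score, eqao_level

    # Pearson correlation between PRIME scores and EQAO levels.
    r = df["prime_score"].corr(df["eqao_level"])
    print(f"Pearson r = {r:.2f} (n = {len(df)})")

    # Since EQAO levels are ordinal, a rank-based check is also sensible.
    rho = df["prime_score"].corr(df["eqao_level"], method="spearman")
    print(f"Spearman rho = {rho:.2f}")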

What interests me now is whether Tinkerplots can be used to disaggregate this data further to see gender differences, etc., which can then be used to help focus teachers and coaches on fine-tuning their work together.
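
In Tinkerplots this kind of split is done interactively by dragging an attribute onto the plot; as a rough stand-in, the same disaggregation can be sketched in pandas. The column names below are hypothetical.

    import pandas as pd

    # Hypothetical extract of Fall and June PRIME testing results.
    df = pd.read_csv("prime_results.csv")  # columns: OEN, school, gender, fall_score, june_score

    # Gain from Fall to June, summarized by gender.
    df["gain"] = df["june_score"] - df["fall_score"]
    print(df.groupby("gender")["gain"].describe())

    # The same split can be layered further, e.g. gender within each school.
    print(df.groupby(["school", "gender"])["gain"].mean().unstack())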

As an aside, while we have not directly compared report card marks for these students with EQAO results statistically, upon a general "eyeballing" the marks appear to be higher than EQAO scores. This seems to be true of many schools within which we work. Helping teachers, through teacher moderation activities, to develop consistency in levelling student work will of course continue to keep us busy.

Again, thanks Paul and Bill for your down to earth and useful presentation.
Janine Pitman