Assessing the Integration of Technology in Learning Part 1

It is all well and good discussing or planning the integration of technology in a school, procuring devices and implementing your chosen device platform, but how do you measure if these big plans have had any impact? What are your success criteria? Is it enough to celebrate the individual wins without somehow analysing the broader picture for the entire cohort?

This is the first of a handful of posts aiming to research and analyse methods for assessing the integration of technology in a school. My MA research project is based on the evaluation of teaching ICT in subjects, which is related to this enquiry. But before I talk about that, let’s look at what constitutes assessment of technology integration.

Most online discussion talks about learning being transformed through technology. More recently, it seems to me, this dialogue is breaking out into a wider discussion about meaningful learning and not necessarily about technology; but maybe that’s just where I am drawn in my journey as an educational technologist. I am not against discussing the advantages technology can bring without learning being transformed. The practical workflow benefits of deploying one-to-one devices across a school, from savings on textbooks and photocopying alone, are a simple reallocation of resources, and if a school can manage that shift then surely it brings its working practices in line with modern times. For me, this splits the enquiry into two major categories (the sub-lists could contain many more items):

  1. Application and administration
    1. implementing technology to ease the collation of files and data within the organisation;
    2. administration of statutory requirements such as attendance registers and health and safety procedures;
    3. creation and management of digital resources for teaching and learning;
    4. teacher- and pupil-centred workflows, including simple communications;
    5. toolset: email/messaging, document management system, office tools, MIS.
  2. Transformation of learning
    1. remote real-time collaboration inside and outside the classroom/school;
    2. access to the wealth of information on the web;
    3. real world connections that facilitate analysis, synthesis, evaluation and abstraction of learning;
    4. access to devices, individual control of devices, learner choice of toolset.

The Assessment Models

Prompted by reading Miguel Guhlin’s review of Models of technology integration, I started to think about the best existing method or model for my school. Clearly some American districts are in a more advanced position than my school in taking the one-to-one plunge. But is such an assessment model necessary in an 850-pupil school? The four models offered up by Miguel are:

  • SAMR: substitution, augmentation, modification and redefinition;
    • simple rubric to identify how technology is used in learning;
  • LOTI: Levels of Technology Integration;
    • a slightly more detailed (six levels as opposed to four) look at the use of technology in lessons;
    • I recommend a quick look at the sniff test and HEAT observation form;
  • TIM: Technology Integration Matrix;
    • the matrix is the most detailed of all the models;
    • pitches five levels of technology use (entry –> transformation) against five characteristics of a learning environment (active, collaborative, constructive, authentic and goal directed);
    • the matrix link is interactive and every point on the grid is supported by video examples of classroom practice;
    • the detail seems helpful but overly verbose.
  • TPaCK: Technological Pedagogical and Content Knowledge
    • looks at how TK, PK and CK intersect with each other;
    • seems useful for more strategic discussions about learning design and the implementation of ICT in a school, and possibly teacher professional development requirements.

    [Image: The four models being discussed]
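To get a feel for how the simplest of these models might be applied in practice, here is a minimal sketch of tagging observed classroom activities against the four SAMR levels and tallying the results. The activity names and level assignments are purely illustrative, not from any real lesson audit, and this is my own toy framing rather than anything prescribed by the SAMR model itself.

```python
# Tally observed classroom activities against the four SAMR levels.
# Activities and their classifications below are invented examples.

SAMR_LEVELS = ["substitution", "augmentation", "modification", "redefinition"]

def summarise(activities):
    """Count how many observed activities fall at each SAMR level."""
    counts = {level: 0 for level in SAMR_LEVELS}
    for name, level in activities:
        if level not in counts:
            raise ValueError(f"unknown SAMR level: {level}")
        counts[level] += 1
    return counts

observed = [
    ("typed essay instead of handwritten", "substitution"),
    ("essay drafted with spellcheck and inline comments", "augmentation"),
    ("shared document co-written live in class", "modification"),
]
print(summarise(observed))
```

Even a crude tally like this would show at a glance whether a department’s technology use is clustering at the substitution end of the spectrum.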

As part of teaching KS3 ICT through other subjects this year, I need a method for measuring how successfully the technology has been used. Also, we are busy writing our new ICT strategy, which will not prescribe one-to-one deployment but will investigate it by initiating a BYOD project for the Sixth Form and two class sets of tablet devices operational within only two departments. I need a way to assess whether these projects are a success or not. All the models listed above incorporate a spectrum from substitution to transformation, with slight amendments here or there. My instinct is to keep it simple, but to learn from others’ endeavours to achieve a working model.

My own evaluation of ICT in Subjects involved questionnaires to a sample of participant pupils coupled with an analysis of their academic attainment data in the participant subjects. This was then triangulated with participant teacher questionnaires to check the findings from their point of view. It is about blending opinion-based and empirical evidence to discover the benefits or drawbacks of this curriculum model. The results showed that, despite several possible opportunities, pupils never once questioned the use of ICT to enhance their learning. They did indicate that some activities were better than others. 83% of the academic data analysed showed a positive impact on their performance. That statistic is a big deal but I’m not going to make any great claims because the data needs further analysis and cross-referencing before it can be relied upon. The next step will be to evaluate the ICT-based activities according to one of the substitution models. So, which one should I use? Miguel developed the work of Kim Cofino et al. (2010) to make a Classroom Learning Activity Rubric. This matrix of rubrics also adapts the TIM approach, but it puts the SAMR categories as the levels of technology use. This is appealing because the fewer categories there are, the easier it is to disseminate the use of the model, and my use of it, to others in my school.
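For transparency about what a figure like the 83% above amounts to arithmetically, here is a sketch of the underlying calculation: the share of before/after attainment records that moved in a positive direction. The scores are invented for illustration; they are not my actual pupil data.

```python
# Illustrative only: the share of (before, after) attainment pairs that
# improved, i.e. the kind of arithmetic behind a headline figure like "83%".
# All numbers below are made up.

def positive_impact_share(before_after):
    """Fraction of (before, after) attainment pairs where the score improved."""
    if not before_after:
        return 0.0
    positives = sum(1 for before, after in before_after if after > before)
    return positives / len(before_after)

sample = [(52, 60), (48, 55), (61, 59), (70, 74), (45, 50), (66, 71)]
print(f"{positive_impact_share(sample):.0%}")  # 5 of 6 pairs improved
```

The point of spelling it out is that the statistic alone says nothing about effect size or confounding factors, which is exactly why I am holding back from making great claims until the data has been cross-referenced.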

Next steps

I’m going to read these matrices in more detail and try applying them to the ICT learning we have done in KS3 to date. This will give me an idea of how practical actually doing this is, and whether or not it is worthwhile. Also, I will be looking into the NAACE ICT Mark to see how they measure the integration of technology into schools.

However, to end this post, I want to return to the first of the two categories mentioned at the beginning: application and administration. I am writing the strategy document with three other people: a governor, our Systems Manager and a parental advisor. In our extensive discussions about what we want to achieve, we were clear that the benchmark for our investigation into BYOD is simply providing web access to pupils during lessons. This is sufficient to justify the initial investment so that we can learn about the pitfalls and positives first hand.

This TechCrunch article is about a startup making big strides in America to ‘measure the impact of technology spending on student learning’. I am not alone in understanding the need to measure the large investments currently flowing through many educational organisations. Is it enough to analyse the impact on workflow, or must we include learning transformation? This comes back to being clear and faithful to the school’s core purpose: aspire to excellence in education.

And it is important to highlight that the transformation of learning is not the remit of an ICT strategy; it is the remit of a teaching and learning strategy, which an ICT strategy should be defined by and directly support. If you become a one-to-one school, you are potentially changing the way all learning happens. So, whereas I do want to be able to measure the success of the integration of technology in a school, which by implication includes any impact technology has on learning, do I have to measure the quality of all learning? Is it really possible to separate the two? If not, should the ICT stakeholders be acting as an arm of the teaching and learning body, or independently, in ensuring their provision of technology in the learning environment? Maybe this is the rub. Maybe when ICT becomes the core productivity tool of all learning (after the brain?) the school has to accommodate the massive overlap between the two.

Our strategy is not immersive one-to-one as yet, so we don’t need to worry about this right now, but we do need to prepare for how it might work in our school.

If you are making any strides into this area, please do say so, and I would love to know any thoughts you have about what I have shared here.

3 thoughts on “Assessing the Integration of Technology in Learning Part 1”

  1. daibarnes says:

    From James Stuttard on twitter:

    Tweet1: that was an interesting read looking forward to part2

    Tweet2: I’ve set measurable outcomes for our 1:1 project, but we’ve not thought about how to measure impact on learning

    Tweet3: I’m going to measure impact on teaching by monitoring planning and walking around the buildings.

  2. stevemargetts says:

    Very thought-provoking post, Dai. I am starting some work with Steve Wheeler now where we are going to try and evaluate the impact of our iPads on learning. I will hopefully have some early feedback towards the end of April. I look forward to reading your progress.
