Tablets 4 Schools 2013 Twitter notes on Storify

I didn’t attend this event. I was lucky enough to receive a personal invite, but I had already committed to another (much smaller scale) tablet event with a company called Jigsaw24, who have some innovative ideas on how to roll out iPads in schools. On the train home I read through the tweets and found that Tony Parkin had impartially documented the gist of what was presented. I was going to write up my notes (they’re in my notebook) but time is against me, so here is a Storify of the key tweets. They are all worth reading from beginning to end, but it is long, so I’ll say goodbye here… comments at the bottom should you feel the need!

PS: remember to click the *Read next page* link at the bottom of the Storify embed.



What are qualifications actually worth?

Reading this short post on Mashable (extract below), which reports that Google has stopped asking job applicants for their academic results, I am forced to question the authentic worth of academic qualifications.

Google, one of the tech giants, has stopped (to some extent) requiring the certificates that our societies have evolved as the means of securing income-earning potential. Is this indicative of wider change?

I have the pleasure of working with Graham Hobson, Chief Technology Officer of PhotoBox, in developing our ICT Strategy. He gave a talk to my Sixth Form ICT students on Scrum and his use of other agile development methods. At the end of his talk, I asked what qualifications he requires from technologist job applicants. None. No A Levels and no degree. He places value on authentic (evidence-based) passion: some kind of portfolio of work they have done. Graham anticipated that new recruits would become useful after training in the specific systems used by the company, which might take a few months.

In the interests of balance, I spoke to a fellow teacher at another school who has taught A Level Computing for many years. He was struck by how easily his students, past and present, were able to find temporary jobs connected to Computer Science in some way, and how this often led to long-term employment in interesting positions. His experience is as follows:

…if the computing boys can find a place to get started they quickly develop their skills to a level that either gets them holiday work or a full time job later. Graduates in CS from good universities are very employable; there seems to be a shortage of people at this level. But the skillset may be quite hard to develop. Few boys here choose CS (no UCAS applicants here in CS for 2014, the first time this has happened for as long as I can remember, we usually have 3-6). The job is not very well defined, as you know from your brother [my brother is a co-founder of Marketing QED and a formidable programmer]. If you are good then there is great opportunity but it’s not like other professions e.g. medicine where there is a more clearly defined path to the top. I guess computing is more entrepreneurial: you develop skills and then form your own company; sell up, get rich and start again. Exciting, interesting but not so clearly defined. We are puzzled by the lack of interest in software development and CS and usually put it down to things like the lure of the City, a fear of being labelled a geek, fear that it is boring and that it is quite hard.

Worthy of note is that, despite this evidence that it is potentially fruitful, young people are steering their paths away from Computer Science qualifications. This suggests it is not seen as a natural career path, perhaps because being a programmer is considered too niche and not a route to high-end earning potential.

I asked my brother, who is CTO of Marketing QED and a programmer. He uses a recruitment agency that does the initial screening and basic competency checks. Suitable candidates are forwarded to HR in his company, who telephone-interview them to sift the wheat from the chaff, and these candidates are then psychometrically tested. Successful candidates are then interviewed by the development team where, as part of the process, they have to present a piece of their own code to be discussed in detail. Two members of his development team started degrees but failed to complete them; in both cases the courses were at non-UK universities, where students are expected to work alongside their study, and work can start to dominate their time, usurping the qualification. My brother noted that qualifications become significant when two CVs are otherwise similar, and added that it is very important to include detail that makes you stand out, citing an example: ‘one of our guys listed “Mushroom Picking” as his only hobby, which led to us interviewing him to find out more, and he ended up getting the job despite his lack of qualifications’. So, we can see that qualifications are not defining in this selection process. The capacity of a candidate to write and discuss their code is. Are we sufficiently developing these skills in school?

Google, however, is opting to refine its selection process, possibly finding that traditional methods are not satisfactory in identifying the key characteristics it requires:

Google is more focused on “behavioral interviews,” which are less about the interviewer than the applicant. Bock [Laszlo Bock, Google’s SVP] says these type of interviews yield more information about how the job candidate deals with situations in the real world.

Bock also noted during the interview that Google has found grade-point averages to be a “worthless” metric for hiring. For that reason, Google has stopped asking most applicants for their transcripts and GPAs. In fact, Google is also hiring more people with no college experience whatsoever.

“The proportion of people without any college education at Google has increased over time as well,” Bock said in the interview. “So we have teams where you have 14 percent of the team made up of people who’ve never gone to college.”

Source: (accessed: 21/6/13)

Is this an increase in relatively low-educated technologists? It feels pretty much impossible to collate how all the major tech companies recruit their technologists, but surely this is indicative that, despite there being excellent Computer Science degree courses (as well as some dodgy ones), the criteria for interview success are based on individual personality and a portfolio of projects, with a core element of being able to show that you have an authentic understanding of your work. Was it ever any different? You would be hard-pushed to say personality never played a part, with or without specific qualifications.

What this does mean is that we, as educators, need to be encouraging our pupils to develop themselves holistically in their specific areas of career interest, and to find a way of creating a portfolio or evidence base of their work, be it private and shared at interview, or public on a blog or similar. Pushing for league table success might make sense to educational organisations but, in some cases, it might be a disservice to the young people in our care.

Having said that, one of my ICT students, now studying Business and Information Systems at Aston University, visited school to have lunch with me a couple of weeks ago. He was surprised by the quality of his whole education. He had presumed that all schools educate you in the same way, but he now realises that this is not true. The education we provide for our students is really very important to them. To each of them. And we should make sure they have their eyes open, know a lot about the world, and understand that there are many roads to choose from. While we are driven by exam results and the like, employers are increasingly abandoning such measures as indicators of suitability for employment. How do we tell our students and pupils that school is possibly not providing the first rungs on the career path?


Assessing the Integration of Technology in Learning Part 1

It is all well and good discussing or planning the integration of technology in a school, procuring devices and implementing your chosen device platform, but how do you measure whether these big plans have had any impact? What are your success criteria? Is it enough to celebrate the individual wins without somehow analysing the broader picture for the entire cohort?

This is the first of a handful of posts aiming to research and analyse ways of assessing the integration of technology in a school. My MA research project is based around the evaluation of teaching ICT in subjects, which is related to this enquiry. But before I talk about that, let’s look at what constitutes assessment of technology integration.

Most online discussion talks about learning being transformed through technology. More recently, it seems to me, this dialogue is breaking out into a wider discussion about meaningful learning, not necessarily about technology; but maybe that’s just where I am drawn on my journey as an educational technologist. I am not against discussing the advantages technology can bring without learning being transformed. The practical workflow benefits of deploying one-to-one devices across a school, from the savings on textbooks and photocopying alone, amount to a simple reallocation of resources, and if a school can manage that shift then surely it brings its working practices in line with modern times. For me, this splits the enquiry into two major categories (the sub-lists could contain many more items):

  1. Application and administration
    1. implementing technology to ease the collation of files and data within the organisation;
    2. administering statutory requirements such as attendance registers and health and safety procedures;
    3. creation and management of digital resources for teaching and learning;
    4. teacher and pupil centred workflow including simple communications;
    5. toolset: email/messaging, document management system, office tools, MIS.
  2. Transformation of learning
    1. remote real-time collaboration inside and outside the classroom/school;
    2. access to the wealth of information on the web;
    3. real world connections that facilitate analysis, synthesis, evaluation and abstraction of learning;
    4. access to devices, individual control of devices, learner choice of toolset.

The Assessment Models

Prompted by reading Miguel Guhlin’s review of Models of technology integration, I started to think about the best existing method or model for my school. Clearly some American districts are in a more advanced position than my school in taking the one-to-one plunge. But is such an assessment model necessary in an 850-pupil school? The four models offered up by Miguel are:

  • SAMR: substitution, augmentation, modification and redefinition;
    • simple rubric to identify how technology is used in learning;
  • LOTI: Levels of Technology Integration;
    • a slightly more detailed (six levels as opposed to four) look at the use of technology in lessons;
    • I recommend a quick look at the sniff test and HEAT observation form;
  • TIM: Technology Integration Matrix;
    • the matrix is the most detailed of all the models;
    • pitches five levels of technology use (entry –> transformation) against five characteristics of a learning environment (active, collaborative, constructive, authentic and goal-directed);
    • the matrix link is interactive and every point on the grid is supported by video examples of classroom practice;
    • the detail seems helpful but overly verbose.
  • TPaCK: Technological Pedagogical and Content Knowledge
    • looks at how TK and PK and CK intersect with each other;
    • seems useful for more strategic discussions about learning design and the implementation of ICT in a school, and possibly teacher professional development requirements.

    The four models being discussed
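To make the SAMR ladder concrete, here is a minimal sketch of recording lesson activities against its four levels. The activity names and their level assignments are invented examples for illustration, not taken from any of the models’ official materials:

```python
# The four SAMR levels, ordered from least to most transformative.
SAMR_LEVELS = ["substitution", "augmentation", "modification", "redefinition"]

# Hypothetical classroom activities mapped to a SAMR level.
activities = {
    "typing an essay instead of handwriting it": "substitution",
    "essay drafted with spell-check and hyperlinked sources": "augmentation",
    "collaborative document edited live by the whole class": "modification",
    "publishing a multimedia report for a global audience": "redefinition",
}

def level_index(activity: str) -> int:
    """Return the position of an activity on the SAMR ladder (0-3)."""
    return SAMR_LEVELS.index(activities[activity])

for name, level in activities.items():
    print(f"{level_index(name)}  {level:<12}  {name}")
```

Even a lookup this simple makes the point that SAMR is a single ordered scale, which is part of why it is easier to explain to colleagues than a full matrix.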

As part of teaching KS3 ICT through other subjects this year, I need a method for measuring how successfully the technology has been used. Also, we are busy writing our new ICT strategy, which will not prescribe one-to-one deployment but will investigate it by initiating a BYOD project for the Sixth Form and putting two class sets of tablet devices into operation within just two departments. I need a way to assess whether these projects are a success. All the models listed above incorporate a spectrum equivalent to substitution-to-transformation, with slight amendments here or there. My instinct is to keep it simple, but to learn from others’ endeavours to achieve a working model.

My own evaluation of ICT in Subjects involved questionnaires to a sample of participant pupils, coupled with an analysis of their academic attainment data in the participant subjects. This was then triangulated with participant teacher questionnaires to check the findings from their point of view. It is about blending opinion-based and empirical evidence to discover the benefits or drawbacks of this curriculum model. The results showed that, despite several possible opportunities, pupils never once questioned the use of ICT to enhance their learning. They did indicate that some activities were better than others. 83% of the academic data analysed showed a positive impact on performance. That statistic is a big deal, but I’m not going to make any great claims because the data needs further analysis and cross-referencing before it can be relied upon. The next step will be to evaluate the ICT-based activities according to one of the substitution models. So, which one should I use? Miguel developed the work of Kim Confino et al (2010) into a Classroom Learning Activity Rubric. This matrix of rubrics also adapts the TIM approach, but it uses the SAMR categories as the levels of technology use. This is appealing because the fewer categories there are, the easier it is to disseminate the use of the model, and my use of it, to others in my school.
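The attainment cross-check behind a figure like the 83% above can be tallied very simply. The records below are invented for illustration (a `delta` of a grade change against prediction is my assumed measure, not the study’s actual coding):

```python
# Hypothetical attainment records: one per pupil per participant subject,
# where delta is the grade change against the pupil's predicted grade.
records = [
    {"pupil": "A", "subject": "Geography", "delta": +1},
    {"pupil": "B", "subject": "Geography", "delta": 0},
    {"pupil": "C", "subject": "History",   "delta": +2},
    {"pupil": "D", "subject": "History",   "delta": +1},
    {"pupil": "E", "subject": "Music",     "delta": -1},
    {"pupil": "F", "subject": "Music",     "delta": +1},
]

# Count the records showing a positive impact and express it as a share.
positive = sum(1 for r in records if r["delta"] > 0)
share = 100 * positive / len(records)
print(f"{positive}/{len(records)} records positive ({share:.0f}%)")
```

The point of sketching it this way is that the headline percentage hides the per-subject spread, which is exactly the further cross-referencing the data still needs.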

Next steps

I’m going to read these matrices in more detail and try applying them to the ICT learning we have done in KS3 to date. This will give me an idea of how practical actually doing this is, and whether or not it is worthwhile. Also, I will be looking into the NAACE ICT Mark to see how they measure the integration of technology into schools.

However, to end this post, I want to return to the first of the two categories mentioned at the beginning: application and administration. I am writing the strategy document with three other people: a governor, our Systems Manager and a parental advisor. In our extensive discussions about what we want to achieve, we were clear that the benchmark for our investigation into BYOD is simply providing web access to pupils during lessons. This is sufficient to justify initial investment so that we can learn about the pitfalls and positives first hand.

This TechCrunch article is about a startup making big strides in America to ‘measure the impact of technology spending on student learning’. I am not alone in understanding the need to measure the large investments currently flowing through many educational organisations. Is it enough to analyse the impact on workflow, or must we include learning transformation? This comes back to being clear about, and faithful to, the school’s core purpose: aspire to excellence in education. It is also important to highlight that the transformation of learning is not the remit of an ICT strategy; it is the remit of a teaching and learning strategy, which an ICT strategy should be defined by and directly support.

If you become a one-to-one school, you are potentially changing the way all learning happens. So, whereas I do want to be able to measure the success of the integration of technology in a school, which by implication includes any impact technology has on learning, do I have to measure the quality of all learning? Is it really possible to separate the two? If not, should the ICT stakeholders be acting as an arm of the teaching and learning body, or independently, in ensuring their provision of technology in the learning environment? Maybe this is the rub. Maybe when ICT becomes the core productivity tool of all learning (after the brain?) the school has to accommodate the massive overlap between the two. Our strategy is not immersive one-to-one as yet, so we don’t need to worry about this right now, but we do need to prepare for how it might work in our school.

If you are making any strides into this area, please do say so, and I would love to know any thoughts you have about what I have shared here.



Marking work electronically in Frog


As part of ICT in Subjects, my department is working with the Music department to create Scratch games and/or animations, with music and sound effects composed in Cubase.

The first lesson is to recreate the famous Pong game in Scratch. So, I set a Frog assignment (Quick Issue Work) to all five classes so that 125 pupils could hand in their files to be scored out of 3 [0 = no file; 1 = struggled; 2 = complete with errors; 3 = complete], plus a comment.
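Once all the marks are in, the 0–3 scheme is easy to summarise across the cohort. A minimal sketch, with an invented sample of marks standing in for the real 125 submissions:

```python
from collections import Counter

# The marking scheme from the assignment.
MARK_LABELS = {0: "no file", 1: "struggled", 2: "complete with errors", 3: "complete"}

# Hypothetical sample of marks; the real data would have 125 entries.
marks = [3, 2, 3, 1, 0, 2, 3, 3, 2, 1]
counts = Counter(marks)

# Print a distribution, including levels nobody scored.
for mark in sorted(MARK_LABELS):
    print(f"{mark} ({MARK_LABELS[mark]}): {counts.get(mark, 0)}")
print(f"average mark: {sum(marks) / len(marks):.2f}")
```

A distribution like this is also a quick sanity check on the rubric itself: if almost everyone lands on the same level, the four bands are not discriminating.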


Below is a video of the marking process for one file. Frog is not very good at this yet. I wonder if Frog4OS will be any better. Frog4 runs on any device because it is coded in HTML5, and so will remove limitations on usage – people expect a VLE to work properly on any device (maybe not handhelds) and in any modern browser. Frog3 is guaranteed to work only on Windows running IE.

The marking does work, but there are a lot of clicks involved. In Moodle, all the grades, files and comments would be accessible from one page, which makes the process much faster and allows easy copying and pasting of comments. Also, you cannot release (return to pupil) marks for pupils who have submitted their work before the deadline. I made a big mistake, as shown in the video – and knew it as I did it: d’oh! – by assigning the same assignment to all five classes. This means we will not be able to release the marks until all pupils have submitted.

Anyway, if interested, watch the video and please let me know about your experiences of marking work in Frog3 or Frog4.

