Unlocking Comprehension: The Bedrock Reading Test

By Andy Sammons

18 Jul 2024


We’ve written recently about the complexities of reading comprehension and some potential strategies for navigating them. A key issue that schools face, though, is intervening at scale.

How do you solve a problem like reading?

Before joining Bedrock, as a faculty lead for English and self-diagnosed data geek, I was always intrigued by reading tests and their use in schools.

In the wake of the pandemic, I persuaded my Deputy Head to allow us to use Non-Verbal Reasoning tests to give us a sense of the incoming Year 7 cohort’s reading performance after their primary schools had had to rely on teacher assessments. In truth, I just wanted a measure, something to give me a sense of the pupils’ capabilities. Like many leaders at the time, I just wanted to hang my hat on something.

Even with a return to “normal” (he says), it’s worth taking a moment to pause and reflect on how reading test data is actually used in schools. Often, as teachers, we receive the pupils’ reading ages (which are inherently problematic if taken in isolation), and then diligently write them onto our seating plans with an awareness that one pupil has a lower reading age than another. We might raise our eyebrows at one pupil’s score, or nod sagely as another confirms our intuitions. In some cases, we might then provide some key words or break down texts. At a departmental level, I remember finding all kinds of patterns, looking at the gap between chronological and reading ages and targeting my interventions there. All good, sensible stuff (I hope).

Yes, a number is better than nothing. But, let’s face it, a number in and of itself isn’t really all that actionable.

A new solution to an old problem

So, when emails started dropping into my inbox about Bedrock’s development of a reading test, it piqued my interest. Literacy is at the heart of what the organisation does: essentially, every day is focussed on two basic questions:

  1. How can we use edtech to drive up literacy standards?
  2. How can we make it easier for schools and teachers to drive up literacy standards?

These two questions shape everything, and that’s precisely why this reading test is so exciting.

I’ve had the privilege of speaking with Martine Holland, Bedrock’s Head of Assessment and the chief architect of the test, to unpick its key aspects. The reading test has two key drivers (Daisy Christodoulou writes brilliantly about these concepts here):

  • Reliability: the consistency of the assessment
  • Validity: what the assessment actually measures

Martine’s position was that most traditional reading tests provide a sense of certainty for schools because they offer safety in terms of reliability: at the very least, they give schools the opportunity to rank and understand the populations of pupils in front of them.

However, a key element of Bedrock’s reading test project was embarking on the enormous task of unpicking what ‘validity’ truly means when it comes to reading. Perhaps the most exciting element of this test is that improving its validity hasn’t come at the expense of reliability: if there were such a thing as a silver bullet here, this would be it! Take a moment to think about what you’re testing when it comes to reading, and suddenly everything being represented by a single number seems a little undercooked. Even a cursory thought throws up decoding, explicit information, inference and deeper meanings, for example.

This is where things get really interesting. As someone with a lot of experience leading English in schools, as well as having a hand in whole-school literacy, I can, with absolute honesty, say that Bedrock’s response to the conundrum of validity in reading is a game changer.

Three reasons you should be excited about the Bedrock Reading Test

Here are three ways that Bedrock’s reading test will empower you to drive up literacy standards in your school:

  1. More than numbers: Reading underpins everything when it comes to learning, so it makes complete sense to capture it in as much granular detail as possible. Perhaps most importantly, the test acknowledges the complexity of reading and the plethora of potential barriers that can inhibit learners; Perfetti & Stafura (2014) write about the many ‘pressure points’ that can lead to reading comprehension difficulties. Rather than one abstract notion of ‘increasing difficulty’, the test deliberately assesses key areas of reading, including:

  • explicit retrieval
  • implicit retrieval
  • meaning in context
  • summary
  • predictions
  • evidencing ideas
  • inter- and intratextuality

With this test, we’re certainly moving into the realms of being able to support and intervene in terms of the different subskills of reading. And, by the way, if it’s numbers you’re into, not only does the test give you a reading age, but it gives you a range of other nationally benchmarked data so that you can properly contextualise the performance of your learners.

  2. Unpicking difficulty: When Martine spoke about this, a little bit of me thought, “you’re almost certainly going to lose me here!”, but I hung in there. It’s one thing to assess difficulty from the perspective of a single subject, but to think about this in terms of reading as a skill is mind-boggling. Somehow, though, they’ve managed it. In cognitive load theory, this inherent difficulty is known as ‘intrinsic’ load; it’s really exciting to learn about a reading test that actively appreciates the complexity and nuance of our language: it considers grammar, formality, literary effect, range of characters and ideas, familiarity and textual structure. In essence, when it reports a score, you can be confident that, in terms of validity, it has really taken the intricacies of language into consideration. Effectively, you can trust not only that this test provides robust, reliable data, but that, morally, it gives learners from all kinds of backgrounds the opportunity to succeed.

  3. The test itself: The team at Bedrock worked with the Cambridge Psychometrics Centre to formulate its measurement methodology. The test uses Item Response Theory: thousands of pupil responses have been compiled to create a gold-standard statistical model with a reliability metric of 0.94. Importantly, the test is adaptive, meaning that pupils see questions relevant to them as quickly as possible, maximising the opportunity for them to engage meaningfully with the test.
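For readers curious how an adaptive, IRT-based test loop works in principle, here is a minimal Python sketch using the one-parameter (Rasch) model: the test estimates a pupil's ability, then repeatedly serves whichever unseen question is most informative at that estimate. The item bank, difficulty values and the simple ability update below are all illustrative assumptions for the sketch; they are not Bedrock's calibrated model, which will use far more sophisticated estimation.

```python
import math

# Hypothetical item bank: each item has a difficulty (b) on the IRT scale.
# These names and values are invented for illustration only.
ITEM_BANK = {
    "explicit_retrieval_1": -1.2,
    "implicit_retrieval_1": -0.4,
    "meaning_in_context_1": 0.3,
    "summary_1": 0.9,
    "intertextuality_1": 1.6,
}

def p_correct(theta, b):
    """Rasch (1-parameter IRT) probability of a correct response."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def item_information(theta, b):
    """Fisher information of an item at ability theta; highest when
    the item's difficulty matches the pupil's ability."""
    p = p_correct(theta, b)
    return p * (1.0 - p)

def next_item(theta, asked):
    """Adaptive step: pick the unasked item that is most informative
    at the current ability estimate."""
    candidates = {k: b for k, b in ITEM_BANK.items() if k not in asked}
    return max(candidates, key=lambda k: item_information(theta, candidates[k]))

def update_theta(theta, b, correct, step=0.5):
    """Crude ability update: nudge the estimate towards the evidence.
    (Real tests use maximum-likelihood or Bayesian estimation.)"""
    residual = (1.0 if correct else 0.0) - p_correct(theta, b)
    return theta + step * residual

# Simulate a short adaptive session for a pupil of true ability 0.5.
theta, asked = 0.0, []
for _ in range(3):
    item = next_item(theta, asked)
    asked.append(item)
    correct = p_correct(0.5, ITEM_BANK[item]) > 0.5  # deterministic stand-in
    theta = update_theta(theta, ITEM_BANK[item], correct)
```

The key idea is the `item_information` step: questions far too easy or far too hard tell you little, so an adaptive test converges on a pupil's level in far fewer questions than a fixed paper.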

If you’re interested in Bedrock’s new reading test, please do get in touch; we’d love you to join us on this exciting journey, and have every confidence that this represents a new and exciting shift in how we understand and intervene in our young people’s reading.

If you’d like to find out more about our Reading Test, visit our webpage here.
