Oral Assessment in Australian Schools: What the Data Says (And How Elqo Helps)
Every few years a story circulates in Australian education: students "can't speak in front of a class anymore." Too much screen time, too many group chats, too few opportunities to stand up and present. The narrative is intuitive, but most of the public evidence cited for it is anecdotal — because the largest national achievement series schools talk about, NAPLAN, doesn't actually test speaking or listening at all.
That measurement gap matters. It means most "kids can't speak" claims rest on proxies (writing scores, reading scores, teacher anecdotes) rather than direct, comparable national data on oral skills in Year 7 to Year 12. But it also means the real story is more nuanced — and, for school leaders and English faculties, more actionable — than headlines suggest.
This post walks through what Australian data actually says about oral skills and oral assessment, what NSW and Victorian English curricula still require, the workload pressures squeezing teachers' ability to deliver those tasks well, and where a tool like Elqo fits as a low-lift, curriculum-aligned classroom support.
The NAPLAN Blind Spot: Speaking Isn't Measured
The most-cited national achievement series in Australia is NAPLAN. According to ACARA's own fact sheet, NAPLAN tests "Reading, Writing, Language Conventions… and Numeracy." Speaking and listening are not assessed nationally.
That creates a structural problem for any "speaking is in decline" claim. Even if students' oral fluency in classrooms had genuinely worsened year-on-year, the headline indicator schools and the public most often see could not confirm it. At the same time, official assessment frameworks define literacy broadly, explicitly including listening and speaking as language modes — so the gap between "what literacy is" and "what the test measures" is structural, not accidental.
The honest position to take, then, is: there is no direct national time series on secondary students' oral assessment performance. What we have is a set of indirect indicators — some neutral, some genuinely concerning — that together tell a coherent story.
What the Indirect Indicators Show
1. Communication vulnerability is rising at school entry
The Australian Early Development Census (AEDC) measures children in their first year of full-time school across five domains, including Communication skills and general knowledge. The AEDC's vulnerable group is described, in plain language, as children who "have poor communication skills and articulation" and may have "difficulties talking to others and being understood."
The national trend on that domain has gone the wrong way:
- 2015: 8.5% of children developmentally vulnerable
- 2018: 8.2%
- 2021: 8.4%
- 2024: 8.9% — up 0.5 percentage points since 2021
This is the cleanest piece of national data we have on communication and articulation as a measured trend. It shows a modest but meaningful rise in vulnerability since 2021, and a higher rate in 2024 than in 2015. It is measured at school entry rather than in Year 9 or 11, so it is best read as a leading indicator: more students are arriving at primary and, eventually, secondary school with communication support needs than was the case a decade ago.
2. NAPLAN literacy proxies are mixed, not a clean decline
The proxies for older students' language competence are NAPLAN reading and writing. The picture is more nuanced than headlines suggest. For Year 9 nationally:
- Reading mean: 580.8 (2016) vs 577.6 (2022) — broadly stable.
- Reading at or above the national minimum standard: 92.8% (2016) vs 89.6% (2022) — a growing tail below the standard.
- Writing mean: 549.1 (2016), down to 542.4 (2018), back up to 559.9 by 2022.
Reading averages held; the tail below the minimum standard grew. Writing dipped, then rose. This is not a neat "language skills are collapsing" story. It is, however, consistent with distributional widening — averages stable, the bottom group growing — which is exactly the pattern where a structured, low-friction speaking practice tool tends to add the most lift.
3. International data shows long-run literacy stagnation, not collapse
For 15-year-olds, OECD's PISA reading data on Australia points to performance that has been broadly stable since around 2015 but has declined over the longer run. That supports the same cautious interpretation: the system-level story is stagnation, not free-fall, and any classroom intervention should be pitched as one targeted lever, not a fix for a national crisis.
What Senior English Actually Requires (And Why It Matters)
Despite the absence of a national oral skills test, both NSW and Victoria's senior English programs still explicitly include or mandate oral and multimodal assessment. This is the single most important fact for any school evaluating a speaking and presenting tool, because it shifts the conversation from "should we invest in oral skills?" to "how do we run the assessments we're already required to run, well?"
NSW: Multimodal presentation is built into Stage 6 sample programs
NESA's published assessment and reporting guidance includes multimodal presentation tasks in its sample school-based assessment programs for English Standard, English Advanced, EAL/D, and Extension. NESA defines a multimodal presentation as a task that "includes at least one mode other than reading and writing such as listening, speaking, viewing and representing." For EAL/D, listening is required and speaking is allowed. For Stage 6 English Standard, the published course components weight "communication of ideas appropriate to audience, purpose and context across all modes" at 50% of the assessed program.
That is not a footnote. It is half the assessable component, and it explicitly covers spoken delivery.
Victoria: VCE oral presentation is mandated in Unit 4
VCAA's assessment guidance for VCE English and EAL is direct: "The requirement for an oral presentation is mandated in Unit 4, Outcome 2." VCE English Unit 2 also explicitly includes creating a persuasive oral text. VCE Literature requires assessment tasks that include speaking and listening across both Year 11 and Year 12 sequences. VCE English Language allows oral tasks as a legitimate mode of demonstrating learning.
And critically, VCAA explicitly accepts a range of formats: "present an individual, formal speech," "debate," "dialogue," "podcasts," and recorded delivery. Recording is not a workaround — it is part of the published guidance.
The implication
The strongest policy evidence in both states points to sustained inclusion of oral and multimodal expectations, not removal. Schools that miss this often assume oral assessment is fading; in reality, the requirement has stayed put while the system around it has changed.
The Real Squeeze: Workload, Not Curriculum
If oral assessment is required, why does it sometimes feel like classroom oral practice has thinned out? The answer that holds up to evidence isn't "the curriculum dropped it." It's that teacher capacity has tightened.
Two pieces of Australian data make this concrete:
- Monash University's Teachers' Perceptions of their Work survey: the proportion of teachers who agreed or strongly agreed that workload was manageable fell from 24.4% in 2019 to 13.8% in 2022.
- Grattan Institute survey of school leaders: 77% of school leaders said teachers at their school always or frequently do not have enough time to prepare for effective teaching (28% always, 49% frequently).
That matters specifically for oral assessment because oral tasks are the most operationally expensive thing English teachers are asked to run. They require scheduling, live performance observation, real-time rubric marking, individualised feedback, and moderation across markers. None of those steps shrink when the period is 50 minutes. A class of 25 students delivering five-minute speeches needs at least two double periods just for delivery, before any rehearsal or feedback.
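The delivery-time claim is easy to check with back-of-envelope arithmetic. A minimal sketch, assuming five-minute speeches, roughly a minute of changeover per speaker, and 50-minute periods (the one-minute changeover figure is an assumption for illustration, not from the guidance):

```python
# Rough scheduling arithmetic for live oral delivery.
# Assumed figures: 25 students, 5-minute speeches, ~1 minute changeover,
# 50-minute periods. Changeover time is an illustrative assumption.
students = 25
speech_min = 5
changeover_min = 1
period_min = 50

total_min = students * (speech_min + changeover_min)  # 150 minutes of delivery
periods_needed = -(-total_min // period_min)          # ceiling division: 3 periods
double_periods = -(-periods_needed // 2)              # 2 double periods

print(total_min, periods_needed, double_periods)  # 150 3 2
```

Even on these generous assumptions, a single round of live delivery consumes two double periods before any rehearsal, feedback, or moderation time is counted.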
Even VCAA notes the constraint directly: school-assessed coursework "must not unduly add to the workload." So the official guidance simultaneously mandates the oral and constrains how much marking weight it can demand. The realistic outcome in schools, when capacity is tight, is that oral tasks become "minimum viable" — shorter task windows, thinner feedback, fewer rehearsal cycles, less consistent rubric evidence — rather than disappearing entirely.
This is the gap most worth solving.
The Capability Gap, Quantified
To make the operational risk concrete: in the NSW English Standard sample program, individual assessment tasks fall in a 20% to 40% weighting range for Year 11 and 10% to 40% for Year 12. If an oral or multimodal task carries a typical 25% weight and a cohort underperforms by 10 percentage points on that task because of low speaking competence (a 65% average instead of an achievable 75%), the expected impact on the course assessment aggregate is 2.5 percentage points.
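The expected-impact figure is simply the task weight multiplied by the shortfall. A minimal sketch, using the illustrative numbers from the paragraph above (the 25% weight and the 65%/75% averages are worked-example values, not fixed curriculum settings):

```python
# Expected impact on the course aggregate of underperformance on one task.
# Illustrative figures: 25% task weight, cohort averaging 65% against an
# achievable 75%.
task_weight = 0.25   # share of the course assessment program
achievable = 75.0    # achievable cohort average (%)
actual = 65.0        # observed cohort average (%)

aggregate_impact = task_weight * (achievable - actual)
print(aggregate_impact)  # 2.5 percentage points off the course aggregate
```

The same arithmetic scales linearly: a 40%-weighted task with the same shortfall costs 4 points, which is why the heaviest-weighted oral tasks carry the most operational risk.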
That is large enough to shift grade bands for students near boundaries — and it is also disproportionately visible. Oral performance is observed in the room, often in front of peers and across markers, so under-performance is far harder to absorb than a quiet drop on a written task.
On the upside, evidence syntheses summarised by Evidence for Learning estimate that well-designed oral language interventions are associated with around six months of additional progress over a year, often measured via reading comprehension. In other words, structured speaking practice doesn't only help oral marks — it transfers to broader literacy outcomes that schools are already accountable for.
What Schools Actually Need From An Oral Skills Tool
Pulling all of this together, the design brief for any classroom speaking tool worth piloting in an Australian secondary school is unusually specific:
- Curriculum-aligned by default. It should map to the assessment tasks teachers already run — NSW multimodal presentations, VCE Unit 4 oral point of view, VCE Unit 2 persuasive oral, VCE Literature oral components, EAL/D listening-and-speaking tasks. The tool can't ask faculties to invent a new task category.
- Low operational load on teachers. Setup in minutes. Auto-structured prompts. Short student attempts (60–180 seconds). Instant criteria-aligned feedback. Rubric evidence that is exportable for marking and moderation rather than re-entered by hand.
- Asynchronous-friendly. VCAA's own guidance allows recorded delivery and podcast formats. A tool that supports recorded attempts converts oral assessment from a synchronous bottleneck into an asynchronous workflow, which is the only realistic way to fit rehearsal cycles into a tight calendar.
- Designed for a wider distribution of student readiness. AEDC data implies more students will arrive in secondary classrooms with communication vulnerability than was the case in 2015. Tools need to scaffold up, not assume confident speakers.
- Mode-aware. VCAA explicitly notes that delivery criteria like eye contact and gesture depend on whether the format is live, video, or audio. The rubric needs to flex with the mode.
- Generates evidence, not just practice. If teachers can't see who practiced, when, and how performance changed, the tool is a coaching app, not a school product. It needs reporting that supports faculty conversations and parent communication.
Where Elqo Fits
Elqo is built for exactly this brief: a structured speaking and presenting practice loop, designed for short attempts, instant AI feedback, and curriculum-aligned task templates that schools can plug into existing assessment programs without inventing anything new.
What that looks like in practice:
- Curriculum-aligned task templates. Teachers can run multimodal presentation rehearsals, point-of-view oral attempts, persuasive oral practice, debate-style turns, and listen-then-speak EAL/D tasks — matched to the assessment tasks specified by NESA and VCAA.
- Short, repeatable attempts. 60 to 180 seconds per attempt, so a full class can run multiple takes in a single period without a scheduling crisis. This is the operational unlock for tightly packed timetables.
- Instant, criterion-aligned feedback. Pace, filler-word density, vocal clarity, eye contact (for video), and structural signals like contention and signposting — surfaced immediately so students iterate within the same lesson rather than waiting a week for written feedback.
- Mode-aware rubrics. Live, recorded video, and audio-only formats are scored on the criteria that actually apply to that mode — consistent with VCAA's published guidance.
- Teacher dashboards and exportable evidence. Faculties get visibility into who's practiced, attempt counts, score trajectories, and exportable artefacts that support marking, moderation, and parent conversations.
- Asynchronous by design. Students can rehearse at home; teachers can mark from recordings rather than burning a full double period on synchronous delivery.
The framing is deliberate: Elqo is not pitched as a fix for a national crisis — the data does not support that framing — but as a targeted lever on a measurable capability gap under real school constraints: required oral assessment, tightening teacher capacity, a wider distribution of incoming student readiness, and meaningful literacy transfer when speaking is taught well. Those four conditions hold simultaneously across most Australian secondary schools.
See How Elqo Plugs Into Your English Faculty
Elqo gives senior English teachers an easy way to run multimodal and oral assessment practice with instant, criteria-aligned AI feedback — and exportable evidence for marking and moderation. Built for NSW Stage 6 and VCE English programs.
Talk to Us About a School Pilot

Curriculum Alignment at a Glance
For English faculties evaluating where Elqo plugs in, the official sources line up cleanly:
- NSW English Standard (Stage 6): Sample Year 11 program: 3 tasks including a multimodal presentation. Sample Year 12 program: 4 tasks including a multimodal presentation. Communication across modes weighted 50% of the assessed component.
- NSW English Advanced (Stage 6): Same multimodal requirement, higher demand for sophistication and coherence.
- NSW English EAL/D (Stage 6): Multimodal task that must include listening, may include speaking. Direct fit for listen-then-speak practice loops.
- NSW English Extension (Stage 6): Independent research project that can be presented multimodally — effectively a viva or defence.
- VCE English / EAL: Unit 2 includes creating a persuasive oral text. Unit 4 Outcome 2 mandates an oral presentation tied to analysing argument and presenting a point of view. Recorded delivery and podcasts are explicitly accepted.
- VCE Literature: At least one assessment task in Units 1–2 and one in Units 3–4 must include an oral component. At least one must include speaking and listening.
- VCE English Language: Schools may incorporate speaking and listening into assessment, including formal speeches, debates, dialogues, and podcasts.
- ACARA General Capabilities (national): The Literacy capability explicitly includes "Speaking and listening" and emphasises "planned speaking situations." Personal and Social, Critical and Creative Thinking, and Ethical Understanding capabilities all reinforce speaking, presenting, and structured argument as core developmental work.
Where to Start: A Realistic Pilot Path
Based on the data above, the strongest fit for a first pilot is Years 10 and 11. The reasoning:
- Senior English assessment structures explicitly include oral and multimodal tasks, so the curriculum pull is real.
- Students are mature enough to self-coach, iterate, and respond to rubric-driven feedback.
- It sits just before high-stakes Year 12 oral tasks (VCE Unit 4, NSW HSC multimodal), so practice gains compound into assessment that counts.
- Year 11 calendars are tighter than Years 7–9 but less brittle than Year 12, which is the right setting for honest piloting.
A workable pilot rhythm: one short oral task per fortnight in English, run as a 60–90 second response to a structured prompt, with three attempts each, instant AI feedback per attempt, and a teacher rubric review that informs the next prompt. That cycle takes about 15 minutes of class time, builds visible attempt data over a term, and produces exportable evidence for the multimodal or oral task that follows in the formal assessment program.
Years 7–9 are a strong secondary pilot window for habit formation, where the goal is building speaking confidence ahead of senior assessment pressure. Year 12 cohorts can be selectively included for VCE Unit 4 oral readiness, but tighter calendars and authenticity expectations mean it's better treated as a focused exam-readiness sprint than a year-long program.
The Bottom Line
The simple "oral skills have collapsed" narrative isn't supported by the data, because the data doesn't exist in the form people assume. What is supported, and what schools can act on, is more useful:
- NSW and Victorian senior English curricula still require oral and multimodal assessment.
- AEDC data shows more students arrive at school with communication vulnerability than a decade ago.
- Teacher capacity to run high-touch oral assessment has tightened significantly since 2019.
- Evidence syntheses suggest structured oral language work delivers around six months of additional progress per year, with transfer to broader literacy.
The combined effect is a clear, measurable capability gap inside a curriculum that already mandates oral work. That's exactly the gap Elqo is designed to close — with low teacher overhead, mode-aware rubrics, asynchronous-friendly practice, and exportable evidence that fits how Australian English faculties already mark and moderate.
If you lead an English faculty, run a senior school program, or sit on a teaching and learning team thinking about oral assessment readiness for 2026 and 2027, we'd be glad to talk through what a pilot looks like in your context.
Curriculum-Aligned Speaking Practice for Australian Schools
Elqo is built for senior English oral and multimodal assessment in NSW and VCE programs. Short attempts, instant AI feedback, exportable rubric evidence — designed to reduce teacher workload, not add to it.
Book a School Walkthrough