IbisSaaS Holdings

What the NREMT's TEI rollout actually changes

Technology Enhanced Items aren't a cosmetic upgrade. They're a different test, and most prep platforms aren't ready.

Vincent Burburan · 8 min read

The National Registry's transition to Technology Enhanced Items has been treated, in most program meetings I have sat through, as a UI refresh. New question types, slightly different screens, same exam underneath. That framing is wrong, and the gap between how programs are preparing students and how the test now actually behaves is widening.

TEIs are not a cosmetic upgrade. They are a different psychometric instrument grafted onto a familiar credential, and the platforms students are using to prepare are mostly pretending the change has not happened.

What a TEI actually is

The umbrella term covers a small family of item formats that share one property: the candidate produces a structured response, not a single letter. The ones that matter on the paramedic exam are roughly these.

  • Drag and drop. The candidate moves elements between zones — placing leads on a 12-lead diagram, sorting medications into therapeutic classes, assigning patients to triage tags.
  • Ordered response. A list of steps must be put in correct sequence — the order of operations for a synchronized cardioversion, the assessment sequence on a trauma patient with an unstable airway.
  • Multi-select. Choose every option that applies. Crucially, scoring is all-or-nothing rather than the partial credit implied by a four-choice MCQ: under the standard rubric on the National Registry blueprint, the item scores only if every correct response is selected and no incorrect one is.
  • Hot spot. The candidate clicks a region of an image — the location for needle decompression, the correct dermatome for a described pain pattern, the site of an obvious deformity.
  • Multi-step calculation. A scenario that requires the candidate to compute a dose, then compute the volume given a concentration, then compute a drip rate given a set size. Each step depends on the prior one and each is scored independently.
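
The all-or-nothing multi-select rubric is the one students most often underestimate, so it is worth making concrete. A minimal sketch (the function and the option labels are illustrative, not the Registry's scoring code):

```python
def score_multi_select(selected: set[str], correct: set[str]) -> int:
    """All-or-nothing rubric: the item scores only when the candidate
    selects every correct option and nothing else."""
    return 1 if selected == correct else 0

# Exact match scores; anything else, including a near miss, scores zero.
print(score_multi_select({"A", "C", "D"}, {"A", "C", "D"}))  # 1
print(score_multi_select({"A", "C"}, {"A", "C", "D"}))       # 0
print(score_multi_select({"A", "B", "C", "D"}, {"A", "C", "D"}))  # 0
```

A candidate who finds four of five correct responses scores exactly the same as one who finds none. That is the behavior practice platforms need to reproduce.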

The Registry's stated reason for the shift is item discrimination at the entry-level competence boundary. Traditional four-option multiple choice has known psychometric problems near the cut score: a candidate who can eliminate two distractors and guess between the remaining two has a fifty percent chance of being right without knowing the answer. TEIs take that guessing strategy off the table. On a five-step ordered response item, random guessing gives you roughly one chance in a hundred and twenty.
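
The arithmetic behind those odds is quick to verify (a sketch; the five-step figure assumes exactly one correct ordering):

```python
from math import factorial

# Four-option MCQ after eliminating two distractors: a coin flip.
mcq_after_elimination = 1 / 2

# Five-step ordered response: one correct sequence out of 5! orderings.
ordered_response = 1 / factorial(5)

print(mcq_after_elimination)  # 0.5
print(ordered_response)       # 1/120, about 0.0083
```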

That single fact changes everything downstream. A student who has been scoring in the high seventies on multiple choice practice exams may walk into the real test, encounter a block of TEIs, and discover that their score is meaningfully lower than they expected. Not because they got dumber on test day, but because the inflation provided by guessable distractors disappeared.

What this means for students

The first practical implication is that pattern recognition strategies — the kind every test prep program teaches, often implicitly — break down.

A student who has internalized "if the question stem mentions chest pain and the patient is over fifty, the answer is probably aspirin" is operating on a heuristic that maps to one of four discrete answer choices. That heuristic does not help when the screen presents a diagram of a 12-lead and asks the student to drag five lead labels to the correct anatomical positions. It does not help when the screen presents a list of nine assessment steps and asks for them in order. The interface has removed the affordance the heuristic was built on.

The second implication is timing. Drag-and-drop and ordered response items take longer to complete than multiple choice items. The Registry is aware of this and has adjusted the time allocations, but students whose practice has been entirely on multiple choice often arrive at TEI items with no calibrated sense of how long they should take, and they either rush them or burn ten minutes on a single five-point item.

The third implication is the calculation block. The multi-step calculation TEI is the single most consequential change for paramedic candidates. It tests whether a student can carry a number across three operations without losing track of units. In the legacy format, a flubbed concentration calculation often led the student to an answer that was not on the screen, which was a useful signal — pick the closest one and move on, accepting the score loss. In a TEI calculation, the student types the number. There is no closest answer. If the math is wrong, the score is zero.
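
To make the three-step chain concrete, here is a worked sketch with illustrative numbers (a dopamine infusion; the figures are mine, not from any Registry item):

```python
def infusion_drip_rate(weight_kg: float, dose_mcg_kg_min: float,
                       drug_mg: float, bag_ml: float,
                       set_gtt_ml: int) -> float:
    """Dose -> volume -> drip rate, carrying units across each step."""
    dose_mcg_min = dose_mcg_kg_min * weight_kg           # mcg/min
    concentration_mcg_ml = (drug_mg * 1000) / bag_ml     # mcg/mL
    rate_ml_min = dose_mcg_min / concentration_mcg_ml    # mL/min
    return rate_ml_min * set_gtt_ml                      # gtt/min

# Dopamine at 5 mcg/kg/min for an 80 kg patient,
# 400 mg in a 250 mL bag, 60 gtt/mL drip set:
print(infusion_drip_rate(80, 5, 400, 250, 60))  # 15.0 gtt/min
```

Each intermediate value feeds the next, which is exactly why a units slip at step one silently zeroes the whole item: there is no answer list to catch the error against.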

What this means for programs

Program directors are looking at the same shift from a different angle. The Committee on Accreditation of Educational Programs for the Emergency Medical Services Professions tracks first-attempt pass rates and uses them in the self-study review. TEIs introduce a new source of variance in those rates that is not yet well-characterized.

Programs that were comfortably above the threshold may see their numbers wobble during the rollout window simply because their item bank has not caught up. The mitigation is not more multiple choice practice. It is structured practice on the actual TEI formats, scored against the Registry's actual rubric, integrated into the program's normal assessment cadence rather than bolted on as exam-prep in the final cohort weeks.

That last point is where most programs are quietly struggling. The test bank vendors most programs have licensed for years are still mostly shipping multiple choice with a thin TEI veneer. A drag-and-drop "item" that can be solved by reading the labels is not a TEI. A multi-select question with two obvious correct answers and two obvious distractors is not a TEI. The format is not the point. The point is the cognitive demand.

Where prep platforms are failing

If you sit a student down in front of three of the largest paramedic test prep apps on the market today and watch them work through what those platforms call TEI content, the same pattern emerges. The interaction is technically a drag-and-drop or a hot spot, but the underlying item is structured like a multiple choice question with the answers presented as draggable tiles. The candidate's reasoning is unchanged. The screen looks different.

This is not a small failure. A student who practices on disguised multiple choice will walk into the actual exam with miscalibrated confidence. They will believe they have practiced TEIs. They have not.

The fix is not technically hard. It requires authoring items that genuinely require the response format — items where the answer cannot be expressed as a choice between four written options because the answer is, for example, a sequence, or a coordinate, or a set of selections whose meaning depends on which other selections were made. Building those items takes time. It takes content authors who understand both the clinical material and the psychometric design constraints. There is no shortcut.

What we are building toward

Path2Medic is the consumer-facing piece of how Ibis approaches this. The platform renders the actual TEI formats — real drag and drop, real ordered response, real hot spot on real anatomy and ECG images, real multi-step calculation with unit tracking. Students who practice on the platform are practicing the formats they will face, not surrogate formats that imitate the screens.

Foresight is the institutional piece. It gives paramedic programs a way to author their own TEI items at the program level, mapped against the Registry blueprint, with item analytics that show which items are discriminating and which are not. The aim is to give program directors the tooling that the legacy test bank vendors should have shipped two years ago.
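
Foresight's analytics layer is out of scope for this post, but the standard single-number discrimination index a program-level tool would compute is the point-biserial correlation between an item and the total score. A minimal sketch (the function name and lack of edge-case handling are mine, not Foresight's implementation):

```python
from statistics import mean, pstdev

def point_biserial(item_scores: list[int], total_scores: list[float]) -> float:
    """Point-biserial correlation between a dichotomous (0/1) item score
    and each candidate's total exam score. Assumes at least one 0 and
    one 1 in item_scores; no edge cases handled in this sketch."""
    p = mean(item_scores)  # proportion of candidates scoring the item
    m_correct = mean(t for i, t in zip(item_scores, total_scores) if i == 1)
    return ((m_correct - mean(total_scores)) / pstdev(total_scores)
            * (p / (1 - p)) ** 0.5)
```

An item everyone gets right, or one the strongest candidates miss as often as the weakest, shows a point-biserial near zero. Those are the items the analytics should flag as not discriminating.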

Neither product solves the problem on its own. Students who practice TEIs without the program-level item analysis will plateau. Programs that have item analytics but whose students practice on disguised multiple choice will see those analytics tell them nothing useful. The two pieces have to fit together, and over the next two cohort cycles, the programs that arrange for them to fit together will be the ones whose pass rates do not wobble.

The Registry has signaled clearly where the test is going. The remaining question is which programs and which prep platforms will catch up, and how quickly.
