Paramedic program assessment is overdue for software
Why the CoAEMSP self-study is the artifact every program builds by hand — and why it shouldn't be.
Every accredited paramedic program in the country produces, on roughly a five-year cycle, a self-study document for the Committee on Accreditation of Educational Programs for the Emergency Medical Services Professions. The self-study is the operational document that determines whether the program retains accreditation, and through it, whether its graduates remain eligible to sit the National Registry exam. It is the most consequential single artifact a program produces.
It is also, in nearly every program I have spoken with, assembled by hand from spreadsheets and PDFs, six to ten weeks before submission, by an associate program director who is also teaching three courses that semester. That fact deserves more attention than it gets.
What the self-study actually requires
The CoAEMSP standards have evolved over the years, but the data demands have only grown. A current self-study expects, at minimum, the following.
- Cohort outcome data. First-attempt and final pass rates on the National Registry cognitive and psychomotor exams, retention rates, positive placement rates, employer satisfaction surveys. Each metric is reported per cohort and trended across the review window.
- Exam blueprint mapping. Evidence that the program's internal summative assessments map cleanly to the National Registry blueprint — every blueprint domain represented, weighted appropriately, with item-level mapping documented.
- Item analysis. For program-administered comprehensive exams, the standard documentation includes item difficulty, item discrimination, distractor analysis, and Cronbach's alpha for internal consistency (a sketch of the difficulty, discrimination, and alpha calculations follows this list). Programs whose exams have items with negative discrimination have to explain why.
- Remediation evidence. For students identified as at risk, a documented remediation plan, evidence the plan was executed, and outcome data on whether the remediation worked.
- Retention and attrition tracking. Not just the headline number. The exit point of every student who left the program, categorized by reason — academic, financial, personal, employment.
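For readers who know these statistics only as Excel formulas, here is a minimal sketch of the difficulty, discrimination, and alpha calculations from a scored response matrix; distractor analysis needs the unscored responses and is omitted. The function and array layout are illustrative assumptions, not any program's or platform's actual implementation.

```python
# Minimal sketch: classical item statistics from a scored response matrix.
# Rows are students, columns are items; 1 = correct, 0 = incorrect.
import numpy as np

def item_statistics(scores: np.ndarray):
    """Per-item difficulty and discrimination, plus Cronbach's alpha."""
    n_students, n_items = scores.shape
    totals = scores.sum(axis=1)

    # Difficulty: proportion of students who answered each item correctly.
    difficulty = scores.mean(axis=0)

    # Discrimination: point-biserial correlation between each item and the
    # rest-of-test score (total minus the item itself).
    discrimination = np.empty(n_items)
    for j in range(n_items):
        rest = totals - scores[:, j]
        discrimination[j] = np.corrcoef(scores[:, j], rest)[0, 1]

    # Cronbach's alpha: internal consistency of the exam as a whole.
    item_variances = scores.var(axis=0, ddof=1).sum()
    total_variance = totals.var(ddof=1)
    alpha = (n_items / (n_items - 1)) * (1 - item_variances / total_variance)

    return difficulty, discrimination, alpha
```

An item whose discrimination comes out negative is exactly the case the standard asks the program to explain: students who did well overall missed it more often than students who did poorly.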
The standards are written as outcomes, not as a software specification. But the underlying data demands have a software shape. A program that has been collecting this data continuously, in a structured form, can produce most of the self-study with queries. A program that has not must reconstruct it after the fact.
How most programs assemble it
The dominant workflow, in my experience interviewing program directors, looks like this.
Cohort outcome data is exported from the National Registry's program portal as a CSV. It is pasted into a master spreadsheet that has been versioned, copied, and renamed across program directors for the past decade. The spreadsheet has tabs for each cohort. The tabs have inconsistent column orders.
Exam blueprint mapping exists as a Word document maintained by a senior instructor. It maps the program's final summative exam to the blueprint by item number. When a new edition of the exam is built — usually annually — the mapping document is updated by hand, sometimes by the same instructor, sometimes by whoever is available.
Item analysis is performed in Excel using the formulas the program's first medical director set up in 2017. The formulas are correct. The pivot tables are extremely fragile. The instructor responsible for running them has, on more than one occasion, had to redo six hours of work because a sort operation rearranged the rows but not the columns.
Remediation plans live in the learning management system, scattered across course shells, attached to individual students. To assemble the self-study evidence, someone has to walk through each at-risk student's record by hand and assemble the chain of plan, execution, and outcome.
Retention and attrition tracking lives in the registrar's office, in the program director's notebook, and in the institutional research office, in three different shapes that have to be reconciled.
The result is a self-study that takes six to ten weeks of senior staff time to produce, that is accurate as of the day it is finalized and stale a month later, and that — because of the effort required to reproduce it — is essentially never referenced between accreditation cycles. The richest dataset the program has about itself is the one it works hardest to ignore.
The case for a continuous data layer
The argument for treating program assessment as software is not that software is better at the work. It is that the work, once you look at it, is software. Every artifact in the self-study is the output of a query against data the program already collects. The labor is not in collecting the data. It is in reshaping it from the form it lives in into the form the self-study requires.
A continuous data layer means the program's exam delivery, the blueprint mapping, the item bank, the remediation tracking, and the cohort outcome data live in the same system, with the same identifiers for students, items, and cohorts. The self-study, in that world, is a report. Not a six-week reconstruction project. A report.
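To make "a report" concrete: if student, cohort, and exam-attempt records share identifiers, the first-attempt pass-rate trend is a few lines of query. The file and column names below are illustrative assumptions, not Foresight's schema or any program's actual layout.

```python
# Sketch: first-attempt National Registry cognitive pass rate per cohort,
# assuming student and attempt records share a student_id.
# File and column names are illustrative assumptions.
import pandas as pd

students = pd.read_csv("students.csv")     # student_id, cohort
attempts = pd.read_csv("nr_attempts.csv")  # student_id, attempt_number, passed

first_attempts = attempts[attempts["attempt_number"] == 1]
report = (
    students.merge(first_attempts, on="student_id")
            .groupby("cohort")["passed"]
            .mean()
            .rename("first_attempt_pass_rate")
)
print(report)
```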
The change that matters most is not the time saved at submission. It is what happens between submissions. When item analysis is a query rather than a six-hour Excel session, the program runs it after every exam. When at-risk identification runs continuously rather than at the end of the term, intervention happens early enough to matter. When blueprint coverage is checked automatically every time a new exam is built, the drift that programs currently catch only at the self-study review is caught the day it happens.
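A coverage check at exam-build time does not need to be elaborate. The sketch below compares a new exam form's domain weighting against blueprint targets and flags anything that drifts past a tolerance; the data structures and the five percent tolerance are assumptions for illustration, not a standard.

```python
# Sketch of an automatic blueprint-coverage check at exam-build time.
# Data structures and the tolerance are illustrative assumptions.
from collections import Counter

def coverage_gaps(exam_items, blueprint_weights, tolerance=0.05):
    """Flag blueprint domains whose share of exam items drifts past tolerance.

    exam_items: list of (item_id, domain) pairs for the new exam form.
    blueprint_weights: dict mapping domain -> target proportion (sums to 1.0).
    """
    counts = Counter(domain for _, domain in exam_items)
    total = len(exam_items)
    gaps = {}
    for domain, target in blueprint_weights.items():
        actual = counts.get(domain, 0) / total
        if abs(actual - target) > tolerance:
            gaps[domain] = {"target": target, "actual": round(actual, 3)}
    return gaps
```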
This is not a hypothetical workflow. It is how every other regulated industry that produces evidence of its own performance — clinical research, financial audit, manufacturing quality — operates. The paramedic education sector is roughly fifteen years behind on tooling, and the gap shows up in director burnout more than in pass rates.
Where Foresight fits
Foresight is the venture inside Ibis that is solving this. It is not a learning management system, and it is not an exam delivery platform in the usual sense of those terms. It is the data layer underneath a program's assessment work. Instructors author exams in Foresight, students take them in Foresight, item analytics run continuously in Foresight, blueprint mapping is owned by the platform rather than by a Word document, and the self-study artifacts are reports against the underlying tables.
We did not start Foresight to build a new test bank. We started it because the program directors I spoke with described, over and over, the same set of problems and the same set of workarounds, and the workarounds were costing them six weeks of their lives every accreditation cycle. The product is a response to a workflow that has been broken in the same way at every program I have visited.
What a program director should expect
If you are a program director evaluating tooling in this space — Foresight or anything else — there is a short list of capabilities that distinguish a serious platform from a dressed-up exam delivery tool.
Domain-level analytics by cohort. You should be able to ask, in one click, how the current cohort is performing on cardiology versus respiratory versus medical emergencies, how that compares to the previous three cohorts, and where the largest gaps are. If the answer requires exporting to Excel, the platform is not serious.
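If the data is structured, the shape of that question is roughly the query below; the file and column names are illustrative assumptions.

```python
# Sketch of the "one click" question: mean domain performance per cohort.
# File and column names are illustrative assumptions.
import pandas as pd

responses = pd.read_csv("scored_responses.csv")  # cohort, domain, correct (0/1)

by_domain = responses.pivot_table(
    index="domain", columns="cohort", values="correct", aggfunc="mean"
)
print(by_domain.round(2))  # rows: cardiology, respiratory, ...; columns: cohorts
```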
Individual at-risk indicators. The platform should surface the students who are likely to fail — by some defined and documented criterion the program has agreed to — early enough that intervention is possible. The criterion should be visible. If the platform claims to do this with an opaque model, you cannot defend the model in your self-study and you should not buy the platform.
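What "defined and documented" can look like is nothing fancier than this. The thresholds below are illustrative assumptions, not a recommendation; the point is only that every number in the rule is something the program chose, wrote down, and can defend.

```python
# Sketch of a transparent, rule-based at-risk criterion. Every threshold is
# an illustrative assumption a program would set and document for itself.
def at_risk(recent_exam_scores, attendance_rate,
            score_threshold=0.75, exams_below=2, attendance_threshold=0.90):
    """Flag a student when recent summative performance or attendance falls
    below program-defined cut points."""
    low_exams = sum(1 for score in recent_exam_scores[-3:] if score < score_threshold)
    return low_exams >= exams_below or attendance_rate < attendance_threshold
```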
Exam-blueprint mapping owned by the platform. This is the single most consequential capability and the one most often missing. The blueprint, the program's exam items, and the mapping between them should live in the platform's data model. The instructor should not be maintaining a separate document. When the blueprint changes — and it does, every few years — the platform should surface every item that needs re-review, not silently drift out of compliance.
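Mechanically, "surface every item that needs re-review" can be as simple as diffing the old and new blueprints and flagging every item mapped to a domain that no longer exists. The structures below are illustrative assumptions.

```python
# Sketch: when the blueprint changes, flag items mapped to domains that were
# removed or renamed. Structures are illustrative assumptions.
def items_needing_rereview(item_domain_map, old_domains, new_domains):
    """item_domain_map: dict of item_id -> domain under the old blueprint."""
    dropped = set(old_domains) - set(new_domains)
    return sorted(item for item, domain in item_domain_map.items() if domain in dropped)
```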
A platform that has these three things makes the self-study a report. A platform that has none of them is a place to deliver exams, which is useful but does not solve the underlying problem. The difference is whether the program owns its data or whether the program is renting access to it.
The accreditation cycle is going to continue arriving every five years whether the tooling improves or not. The programs that decide, in the next eighteen months, to move their assessment work onto a continuous data layer will be the ones whose directors are not staring at a half-built self-study three weeks before the deadline in 2031.

