It’s not about what comes next, it’s about what comes first!

A week into my 4-month stay in London I am given the opportunity to assist with one of London’s Thoracic U/S courses. The venue is impressive: large sky-lit rooms, ample space for the audience and scanning stations, and multiple flat screens displaying real-time scans performed by the speakers.

The audience, a little different from what I’m used to, is made up of all varieties of physicians: internists, respirologists, GPs, surgeons, & emergency physicians. The course speakers come from a range of specialities as well: here we have radiologists, respirologists, & surgeons.

Late into the course, a speaker initiates a case-based review of some of the material just covered. Case 2: a young male with a history of a fall from a scaffold. He puts up a CXR showing a substantial hemopneumothorax. The speaker goes on to ask the audience about the next best test. It’s clear that he’s looking to make a point: despite everything he has just taught, the next best test would not be ultrasound, it would be CT.

He was right, the next best test would have been CT. And yet, in my mind, as I sat there doing my best to not blurt out my frustration, all I could think was that he had asked the wrong question.

It’s not about what comes next, it’s about what comes first!

Now, I know very well I do not own this idea. The “Ultrasound First” movement has been gaining ground for some time. Top that with this year being “the Year of Ultrasound” and you’d think maybe I should calm down. After all, we’ll get there eventually, right?

Well, I’m not so sure. First of all, I wasn’t in some backroom rounds, I was at a dedicated point-of-care thoracic ultrasound course. Of all the places, the role of POCUS should be clearest here. And yet it’s not. Part of the message is coming through, but there’s a lot being lost. Time to turn up the gain.

Sure, POCUS has a tremendous role to play in the outpatient setting. The British Thoracic Society now strongly recommends (if not mandates) that, whenever applicable, thoracic procedures be done under ultrasound guidance. Great, I get that. But what concerns me is that we are failing to illustrate to our colleagues the broader utility of this remarkable technology.

That brings me to the second point: in medicine we are used to thinking about what comes next, and not so much about what should have come first. In a sound world, that young fall victim would have had his hemopneumothorax identified within minutes of presentation to the ED (if not prehospital, as some would have it). Had he been unstable, that chest would have been decompressed right then and there, with a chest drain in place soon after. No CXR for diagnosis needed: the significant pathology would have been suspected through a combination of history & physical exam and then solidified by POCUS performed by the attending care team. All within minutes, all at the bedside. For the stable fall patient, the same approach would have exposed somewhat occult pathology to the benefit of the entire care team and, most importantly, the patient, who would now be identified as having (not suspected of having) a large hemopneumothorax.

But for us to get there, we actually have to go back. We have to hit ‘rewind’ in our clinical skill-set back to the point of the bedside examination. There we have to insert our newly acquired bedside ultrasound assessments. It is there that we have to teach ourselves to think “Ultrasound First.”

As physicians, we are going to have to spend some serious time unlearning our first steps. Let’s be clear and honest about this: It is very difficult to unlearn anything!

And so maybe that is why the process has been slow going. Perhaps we need to be more explicit about the fact that we’re not just talking about adding the next test. What we’re talking about now is the first, and in some cases most important, test of all.

Cheers from London,

Paul Olszynski

Generally, I don’t generalize. That being said…

Drawing Generalizations from High Fidelity Simulation Research

I’ve just returned from a productive meeting at the Whipps Cross University Hospital Simulation Suite (aka the MET suite). We are preparing for a study that is intended to assess the value of simulation in the development of critical care ultrasound skills.

We tested out ‘little Ben’ (London’s first edus2 system), went over our study design, looked at props needed, and discussed roles for stooges (the local term for confederates that I am already quite fond of). To top it all off, we engaged in the usual sim centre chit-chat that those who regularly sim are accustomed to: recent projects, gag reel, local goings on. In fact, I’d say it was a very productive meeting.

This is welcome news. As a guy who managed to convince himself that flying overseas to conduct medical education research was the only logical next step (after conceiving a study design too cumbersome to pull off in his own backyard), meeting great folk at the MET suite helps this study get off to a good start.

In this short time, it has struck me that there exists a tremendous degree of similarity between the sim suite back home (University of Saskatchewan) and the one here at Whipps Cross University Hospital. This has me wondering, and hopeful, about our potential to generalize our upcoming research findings.

The Value of Conducting Research Outside One’s Own Home Turf

The posters have been made and sent out, the faculty are slowly enrolling, and the next step is trainee recruitment. Here we will hopefully recruit more participants than would be possible at the U of S; here there will be less bias, as most people in London have never heard my big mouth flap about ultrasound (except those fine physicians at last week’s course); here they have not used an edus2.

That said, there are some challenges. Putting aside the bureaucratic demands of ethics approval for a study from abroad, there are major implications in terms of the significance of the research findings. Coming to the UK means an increased challenge in terms of the generalization of findings (not to be confused with Generalizability Theory which is a statistical construct with respect to test reliability and validity).

What I mean here by ‘generalization’ is the extent to which the findings in one study can be applied, or generalized, to other groups or environments. It is an area of controversy in education research circles, and rightfully so. The arguments for either side can be made on both pragmatic and theoretical terms.

“Well that’s nice that they got those results, but that would never work in my class!” This is a common refrain heard amongst educators when discussing the merits of a given learning program or initiative (especially one being imposed on them by higher-ups). And they’ve got a point. Intuitively, the complex and dynamic nature of a classroom does limit the degree to which we can generalize the impact of even the most promising of learning initiatives.

In most educational settings the variables, and confounders, seem almost limitless: the size and condition of the school/classroom, the neighborhood, local socioeconomic determinants, funding, special needs, cultural context, individual learner needs, and the list goes on… enough to make anyone a skeptic about generalizations in education research in general! And yet it is done, often through advanced statistical analysis of quantitative data.

Thankfully, it might just be that simulation based medical education (SBME), and specifically High Fidelity Simulation, is as close to the perfect education research lab as one could ask for.

It is uncanny just how similar the sim suite at Whipps Cross University Hospital is to the sim suite I am used to at the University of Saskatchewan. The layout of the rooms, the mannequins, the one-way mirrors, control rooms, all make for a very familiar feel. I’ll take it a step further and say that even the staff members seem almost kindred. Enough so that within a very short time we have already begun working as a team. I’ve been to other sim suites as well (albeit for much briefer experiences) and would suggest this seems fairly consistent (I welcome any other thoughts on this experience). What does this mean for research? It means fewer variables, fewer confounders, and more potential to generalize!

We are also no longer talking about a very heterogeneous group; we are talking about medical graduates who are, in most instances, driven & dedicated learners. Don’t get me wrong, I fully acknowledge there are cultural and gender issues that affect even this small subset of learners, but these are far fewer than those facing a sixth-grade teacher in downtown London or Saskatoon.

During my proposal defense, I suggested to my committee that even with a small sample size (20–30 participants, compared to larger education studies with hundreds of participants), the results of this study may still help inform EM programs elsewhere of the value of ultrasound simulation during High Fidelity Simulation. The look of surprise on their faces made it clear I was wading into uncharted waters. How could a relatively small study offer much by way of generalization?

The Case for the Generalization of Findings in High Fidelity Simulation Research (as a subset of simulation based medical education research)

In the high fidelity simulation (HFS) setting, we can and do control the environment: the mannequins, background, monitors, and treatments are all standardized through clear written instructions and preparation. Did I mention they use the exact same Laerdal equipment here as we do at the U of S?

The learners represent a somewhat homogenous group. Of course the extent of this depends on the intended study population/audience but as mentioned above, you’re already further ahead because they are all medical graduates (and in our case training in EM). That said, before jumping into a new initiative that looks great, do look at the study population carefully to assess for similarity with your own institution.

High Fidelity Simulation within SBME is largely a western medicine phenomenon. It has a particular focus on acute and critical care, crisis resource management and safety. As such, many of the concepts are already consistently taught (the ABCs, BLS, ACLS, ATLS, and so on). This means sim faculty are already well versed in a similar language (to lesser and greater extents of course).

That last piece of the puzzle is the most challenging: meaningful research outcomes that are worth generalizing! It’s great that we have a good setting for research, but that in no way guarantees worthwhile research is being conducted.

Firstly, the nature of the data plays a big role in the generalization of findings. Whereas quantitative data lends itself reasonably well to generalization, as it reduces data to numbers and mathematical relationships, qualitative data is much more challenging (if not impossible) to generalize from. (More on this soon, as this study employs both methods but, I am told, does not fall into mixed-methods methodology?!)

Whatever the case, whenever possible we should be looking for outcomes at the upper ends of Kirkpatrick’s Framework: transfer to practice, improved patient outcomes, and improved health of the community. Studies assessing ACLS adherence and outcomes, decreased procedural complication rates, and faster times to diagnosis using specific devices represent the best that SBME research has to offer. Here an additional challenge to generalization arises: if we are looking at patient outcomes, patients now represent an added variable in the equation.

Arguments against the generalization of findings are certainly worth considering. Institutions that have well-embedded simulation programs (as a continuous part of the curriculum) are more likely to have more robust outcomes. This is partly because, when it comes to SBME, major gains are made with repeated exposure (including, but not limited to, deliberate practice). Other drawbacks include differences in terminology (stooges vs confederates) and levels of resources.

For a great overview of SBME, check out:

When I read SBME research (specifically the High Fidelity Simulation kind), I often find myself thinking that the methods and findings seem consistent with practices at my own institution. Certainly this was the case with the Girzadas et al. (2009) Hybrid Simulation study. And so yes, I do find myself generalizing study implications to my trainees. Ultimately, this is the whole point, right? As education researchers, we can only hope that our research designs will serve to inform others of best practices, worthwhile initiatives, and potential pitfalls.

And while it is often criticized as excessively resource intensive, it just may be that the HFS environment holds real promise as a near-ideal education research lab.

Cheers from London,

Paul Olszynski

Peer review by Brent Thoma @boringem