Dental Nurses Nailing It: A Peek into the RETURN Intervention’s Success Secrets
Alright, let’s chat about something that’s a bit of a game-changer in the world of pearly whites and keeping them healthy. We all know that dental woes are super common, right? But here’s the kicker: most of them are preventable! And one of the big ways to prevent them is by, you guessed it, actually going to the dentist for regular check-ups, not just when you’re in agony.
Now, here’s where it gets a tad more complicated. Studies keep showing us that things like our diet, how often we brush, and whether we see a dentist regularly don’t just affect our oral health; they’re also tangled up with health inequalities. It’s a bit of a tough pill to swallow, but folks from lower socio-economic groups, who often need dental care the most, are usually the least likely to get those planned visits in. So, what can we do about it?
Enter the RETURN Intervention!
This is where I got really interested in a study about the RETURN (inteRventions to rEduce inequaliTies in the Uptake of Routine deNtal care) trial. It’s a pragmatic randomised controlled trial (fancy term for a real-world study) that tested a brief behaviour change intervention. The goal? To help patients get back into the habit of visiting the dentist for planned care. And who were the superheroes delivering this intervention? Dental nurses!
Now, dental nurses usually help the dentist chairside – mixing materials, suction, that sort of thing. Some in the UK do give health education advice, but it’s not a universal part of the job. So, getting them to deliver a behaviour change intervention is a pretty neat idea. But, and it’s a big but, when you’re testing interventions like this, you’ve got to make sure it’s actually being delivered as intended. This is where something called intervention fidelity comes in. It’s all about ensuring the study results are reliable because the intervention was delivered consistently and correctly. Think of it like quality control for research!
This particular paper I dived into was all about assessing the fidelity of this RETURN intervention. They used a mixed-methods approach – numbers and stories – to get the full picture, focusing on how the nurses were trained and how they actually delivered the intervention.
Training the Troops: Getting Dental Nurses Ready
So, how do you get dental nurses, who might have varied educational backgrounds and not much experience in this specific area, ready to deliver a behaviour change chat? The RETURN team put a lot of thought into this.
They had:
- Half-day face-to-face training sessions with a mix of learning styles (didactic, group chats, role-play).
- 1:1 individualised shadowing training, which was super important. This used a coaching style to build confidence and tailor support.
- Ongoing booster training and reflective practice sessions to keep skills sharp and prevent “skills drift.”
The trainers themselves even went on a 3-day course to learn how to train the dental teams effectively! Talk about thorough. They used checklists to make sure all training content was covered and assessment forms to see if nurses were picking up the skills.
The findings? Well, training was successfully standardised, which is great. However, it wasn’t all smooth sailing. Not all nurses who started the training achieved competency to deliver the intervention on their own. It turned out that things like a nurse’s prior dental nursing experience, their confidence levels, and even how comfortable they were with the wider trial procedures (like IT systems, which we’ll get to!) played a big role. For instance, nurses who were trainees, newly qualified, or no longer practicing struggled more with confidence. This suggests that maybe this kind of intervention is better suited to more experienced, currently practicing dental nurses.

Interestingly, while only about half the site teams initially met the skills acquisition threshold in the standardised training, almost everyone felt the training explained their role well and met its objectives. The 1:1 shadowing was highlighted by the nurses as essential for building confidence, especially because talking to patients about behaviour change was “totally out of their comfort zone.” One nurse even mentioned how it helped her feel more confident handling potentially tricky conversations, like if a patient brought up safeguarding issues.
Sticking to the Plan: How Well Was the Intervention Delivered?
Okay, so training happened. But did the nurses actually deliver the RETURN intervention as it was designed? This is the million-dollar question for fidelity! The intervention itself had a couple of key parts:
- A ‘patient pack’ with booklets and short videos.
- A structured behaviour change conversation, guided by the spirit of motivational interviewing (MI), where patients could choose to focus on one of six common barriers to dental visits (like cost, time, fear, embarrassment, etc.).
- It all wrapped up with a goal-setting and action-planning exercise, and a follow-up text message.
To check delivery fidelity, the team asked dental nurses to audio-record their intervention sessions. They ended up with a whopping 462 recordings to analyse! They used a detailed checklist (the RETURN Intervention Fidelity Checklist) to score these recordings. They set a pretty high bar: for an intervention to be considered “high fidelity,” it had to score 80% or more in every single domain of the checklist. A tough target!
So, what did they find?
- Only 12.1% of interventions hit that super-strict “80%+ in every domain” threshold.
- BUT, a much more encouraging 79.9% (that’s nearly 4 out of 5!) achieved an overall score of 80% or more. This is what they called the ‘overall fidelity score’.
- When they looked at the different parts of the intervention, mean scores were generally between 75% and 85%. So, pretty darn good overall!
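To make the difference between those two thresholds concrete, here’s a minimal sketch (not the study’s actual analysis code; the domain names and scores are made up for illustration). The strict “high fidelity” rule requires every checklist domain to clear 80%, while the ‘overall fidelity score’ only needs the average to clear 80%:

```python
# Illustrative sketch of the two fidelity thresholds described above.
# Domain names and scores below are hypothetical, not from the study.

def strict_high_fidelity(domain_scores, threshold=80.0):
    """'High fidelity': every single checklist domain scores >= threshold."""
    return all(score >= threshold for score in domain_scores.values())

def overall_fidelity(domain_scores, threshold=80.0):
    """'Overall fidelity score': the mean across domains is >= threshold."""
    mean = sum(domain_scores.values()) / len(domain_scores)
    return mean >= threshold

# One weak domain fails the strict test but can still pass overall:
session = {"rapport": 92.0, "barrier_discussion": 85.0,
           "goal_setting": 88.0, "mi_spirit": 74.0}

print(strict_high_fidelity(session))  # False: 'mi_spirit' is below 80
print(overall_fidelity(session))      # True: the mean is 84.75
```

This is exactly why only 12.1% of sessions met the strict bar while nearly 80% met the overall one: a single below-par domain sinks the strict test, even when the session as a whole scored well.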
This tells me that even though that top-tier threshold was tough to meet consistently, the RETURN intervention was generally delivered with high levels of fidelity. The team also looked at “skills drift” – did nurses get worse over time? Nope! In fact, many nurses improved, suggesting the ongoing training and support worked a treat.
What Made a Difference to Fidelity?
The researchers dug a bit deeper to see what might be influencing these fidelity scores. They found a few interesting things:
- The Interventionist: Yep, the nurse delivering it mattered. This isn’t hugely surprising, as different people have different communication styles and levels of comfort.
- Intervention Dose: This is basically how long the intervention session lasted. The mean was about 15.8 minutes, but it varied. And guess what? The more time spent (more “dose”), the higher the chances of hitting that fidelity threshold.
- The Barrier Chosen: The topic the patient chose to discuss (e.g., cost, anxiety) also had an impact. For example, interventions focusing on ‘cost’ seemed to have a harder time hitting high fidelity scores. The researchers mused that ‘cost’ might be seen as more of a structural barrier, making it trickier for nurses to use the MI-style supportive statements effectively compared to, say, dental anxiety. It’s a good point – the intervention was very patient-led, and if patients from lower socio-economic backgrounds (who might be more likely to pick ‘cost’) are less likely to lead conversations, it could make it harder for nurses to hit all the MI-based fidelity markers.

The qualitative data – from observations and interviews with the dental nurses – added so much colour to these findings. Three main themes popped out:
- Research Naivety Increased Training Needs: None of the nurses had done research before, and many felt this slowed down their learning curve. One said, “I remember not quite understanding what we were doing… I think that’s because I’d no research background.”
- Confidence was Key (and Shadowing Helped!): As mentioned, confidence was a biggie. The shadowing training was invaluable here. “If you had of set me loose after the afternoon we spent [intervention delivery training], I think I might have froze in front of the patient,” one nurse admitted.
- Wider Trial IT Requirements Were a Barrier: This was a fascinating one! Many nurses found the IT systems for the trial processes (like consent, randomisation) difficult. One nurse felt that “worrying about the IT did get in the way of me making progress with the conversation with the patient.” This is a crucial point – the burden of the trial itself can impact the intervention.
So, What’s the Big Takeaway?
This study is pretty cool because it’s one of the first to do such a deep dive into intervention fidelity for a behaviour change intervention delivered by dental nurses in a dental setting. And the news is largely good!
Dental nurses can be trained to deliver these complex interventions, and they can do it well (with high fidelity) alongside their usual jobs.
However, it’s not a one-size-fits-all situation. Here are some golden nuggets I took away:
- Experience Matters: Nurses with more patient-facing experience and good baseline communication skills are likely to find it easier and be more successful.
- Tailored Training is Crucial: Especially for building confidence. The 1:1 shadowing was a winner.
- Consider the Whole Trial Burden: If trial procedures (like complex IT) are too clunky, it can impact training and intervention delivery. Maybe separate roles for trial tasks and intervention delivery could be an idea for future studies?
- Flexible Fidelity Scoring: For interventions that are tailored to the patient (like RETURN was), the way we measure fidelity might need to be a bit more flexible to account for these variations.
The authors also rightly point out that doing such an in-depth fidelity assessment might itself influence how interventionists behave. It’s a bit of a researcher’s paradox! But they argued that the benefits of understanding what’s needed for real-world implementation outweighed this. And I tend to agree. Knowing who is best suited to deliver these interventions and how to train them is invaluable.
Strengths, Limitations, and Looking Forward
Every study has its strengths and limitations. A big strength here was the sheer amount of data, especially those audio-recordings. Using mixed methods also gave a much richer picture. On the flip side, not all sessions were recorded, so there’s a small chance the picture isn’t 100% complete. Also, the same team members were involved in training, implementation, and scoring, which could introduce a smidge of bias, though they took steps to mitigate this (like reflexivity, and completing the scoring only after trial follow-up had finished).

The recommendations coming out of this are super practical for anyone looking to do similar research in primary dental care:
- Adaptable Fidelity Tools: For complex, tailored interventions, make your scoring criteria flexible.
- Know Your Patients: Think about how patient characteristics (like socio-economic status or health literacy) might influence fidelity.
- Plan Training Wisely: Different interventions and settings need different training efforts. Get sites involved early and pick your interventionists carefully (think experience!).
- Explore Trial Burden: We need more research on how the demands of the trial itself affect intervention fidelity in dental settings.
- Embed Research Skills: Let’s get more research training into primary dental care generally!
Ultimately, this study provides some solid reassurance about the scientific validity of the main RETURN trial results (which I’m now keen to see!). It shows that with the right approach, dental nurses are more than capable of stepping up to deliver important behaviour change support. It’s a step forward in making dental care more proactive and equitable, and that’s something to smile about!
Source: Springer
