When it comes to quality improvement projects, we tend to focus on what we know. We apply our knowledge in a way that benefits the patient.
We know, for example, that certain mattresses distribute pressure better and therefore reduce the incidence of pressure injuries. Studies confirm that their use makes a difference. So we apply that knowledge to make a change. We trust the studies because they meet the high demands of science.
But if we want to continue to improve care for pressure injuries, the current research is not enough. We need to go beyond what we already know. We need to ask questions about what we don't know, and find a way to answer those questions.
We need to find the question marks in our knowledge. That's how Zena Moore, head of the School of Nursing & Midwifery at the Royal College of Surgeons in Ireland, describes the path to progress.
Dr. Moore presents a comprehensive system for conducting clinical research that is accurate, impartial, and, most importantly, clinically useful. The process starts with a question and ends with an answer to that question. It begins with the end in mind.
Finding the Gaps in What We Know
As with any other area of study, the information we have on pressure injuries is just the tip of an enormous iceberg; most of what lies beneath is still waiting to be discovered. For example, we know that dry skin is a major concern at long-term care facilities. We also know that dry skin is associated with pruritus, the itching that leads patients to scratch.
That's a starting point that can give us new information if we approach it correctly.
The key is finding the gap, according to Dr. Moore. We need to sort out what we know from what we don't know. We know the problem with dry skin. We don't know the best way to treat it.
In that scenario, we can design a study comparing two particular treatments. Both arms would draw from the same patient population. If we do it correctly, we will have clinically usable data on which treatment is more effective.
The process starts with articulating exactly what question we will try to answer. That's the research question, and the process will focus on finding the answer to that question. With the question in mind, it's easier to focus on the best way to answer it. Do we want a qualitative or a quantitative study? What do we need to measure to get the answer? All of those decisions come into focus when the question is clear.
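To make that concrete, here is a minimal sketch of how a sharply stated question translates into a quantitative design. Everything in it is hypothetical: the treatments, the four-week outcome, and the assumed 60% versus 40% response rates are placeholders chosen only to show how the question, once pinned down, dictates how many patients each arm of the study would need.

```python
# A minimal sketch, assuming a hypothetical head-to-head trial of two dry-skin
# treatments. The response rates, alpha, and power below are placeholders, not
# figures from Dr. Moore's talk or any published study.
import math
from scipy.stats import norm

def sample_size_two_proportions(p1, p2, alpha=0.05, power=0.80):
    """Participants needed per arm to detect a difference between two
    proportions, using the standard normal-approximation formula."""
    z_alpha = norm.ppf(1 - alpha / 2)   # two-sided significance threshold
    z_beta = norm.ppf(power)            # desired statistical power
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    n_per_arm = ((z_alpha + z_beta) ** 2) * variance / (p1 - p2) ** 2
    return math.ceil(n_per_arm)         # round up to whole patients

# Research question: "In long-term care residents with dry skin, does
# Treatment A resolve pruritus more often than Treatment B at four weeks?"
# Hypothetical assumption: 60% resolution with A versus 40% with B.
print(sample_size_two_proportions(0.60, 0.40))  # roughly 95 residents per arm
```

Notice that nothing in the calculation can even be attempted until the question specifies the population, the comparison, and the outcome being measured.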
Look Out for Bias
Once we have the results of the trials, it's crucial to check for bias in order to ensure that the answers are reliable. Bias is a systematic deviation from the truth.
There are many different types of bias that can make their way into a study. One is a conflict of interest. For example, if you are studying how different mattresses affect pressure injuries, make sure that none of the companies with business interests in mattresses are sponsoring the study.
Performance bias is also very common. It's important to make sure that the different groups being studied have the same information about what is taking place. If the study is carried out differently for different groups, the results will not be reliable.
Another common error to avoid is detection bias, which results from differences in how outcomes are detected or measured across groups. For example, one region may appear to have a higher rate of cancer than another. But if cancer is tested for and diagnosed differently in the two regions, the gap could be detection bias: the higher rate may simply reflect greater success in catching cancer earlier.
Look at the Certainty of the Evidence
Other factors to consider when evaluating the findings of a study:
Inconsistency - When combining results of one study with another, make sure that the methods and objectives match. Otherwise, you are not comparing apples with apples, and the results may not be usable. Of course, if you only have one study, there is less risk of inconsistency.
Indirectness - A study only covers the setting that was studied. So if the study was done in the ICU and not in pediatrics, you can't extrapolate the findings from the ICU to the pediatric units. The patients could differ in ways tied to those units, so you can't assume the findings hold for other groups.
Imprecision - Is the population you are studying well represented? Look at the sample size and the number of events. Are they enough to support a conclusion? (See the sketch after this list.)
Publication bias - Studies with statistically significant results are more likely to be published than studies without them. Hints of publication bias include results that come from a series of small studies, especially when those studies are privately sponsored.
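As a rough illustration of imprecision, the sketch below compares the same observed difference in pressure-injury incidence coming from a small study and from a larger one. The patient counts and event counts are invented for demonstration; the point is only that few patients and few events leave a confidence interval too wide to support a conclusion.

```python
# A rough sketch of imprecision, using invented numbers: the same observed
# difference in event rates is inconclusive when it comes from few patients
# and few events, but informative when the study is larger.
import math

def risk_difference_ci(events_a, n_a, events_b, n_b, z=1.96):
    """Risk difference and its approximate 95% confidence interval
    (normal approximation for the difference of two proportions)."""
    p_a, p_b = events_a / n_a, events_b / n_b
    se = math.sqrt(p_a * (1 - p_a) / n_a + p_b * (1 - p_b) / n_b)
    diff = p_a - p_b
    return diff, (diff - z * se, diff + z * se)

# Same 10-percentage-point difference in incidence, two different sample sizes:
print(risk_difference_ci(3, 20, 1, 20))      # tiny study: interval spans zero
print(risk_difference_ci(60, 400, 20, 400))  # larger study: interval excludes zero
```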
Finding the Answers
We rely on new information to continue to make improvements in how we treat patients, particularly in areas such as pressure injury prevention, where so much is left to discover. Acquiring usable data does not need to be out of reach.
Using proper methods ensures that the data you collect will help you move forward and serve as a resource for others trying to do the same thing. Focusing on the question you are trying to answer - knowing your question mark - is the way to get there.