Patient satisfaction surveys need not be complex, but they do need to be well thought out.
This US example (open it in a new window and read on...) shows how even an apparently simple set of questions, with a five-point scale, can actually miss the point. It asks patients for ratings, but yields no "hard" data that can be used for improvement.
Take the second question for example, "[rate] the time it took us to answer your call", which asks for a 1 (poor) to 5 (excellent) score.
Spotted the problem? There are actually two notable ones.
First, the question gathers patient satisfaction feedback using a highly subjective scale: is a 30-second pick-up time good or bad? It all depends on your opinion, which, although valid, doesn't really help the provider make meaningful changes. This same subjectivity causes the second problem: the data won't let you see the distribution of time spent waiting... and if you don't know this, then you don't know whether the patient who scored you a "1" was actually waiting longer than the norm or is just more demanding.
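To make that point concrete, here is a minimal sketch (the response data and field names are entirely hypothetical, invented for illustration) of what becomes possible once the survey records the actual hold time alongside the rating: you can compare each dissatisfied patient's wait against the norm, rather than guessing.

```python
from statistics import median

# Hypothetical responses: each records the actual hold time (seconds)
# alongside the patient's 1-5 satisfaction score -- the "hard" data
# the survey in question never captures.
responses = [
    {"hold_seconds": 25, "score": 5},
    {"hold_seconds": 40, "score": 4},
    {"hold_seconds": 35, "score": 1},   # low score, near-typical wait
    {"hold_seconds": 210, "score": 1},  # low score, genuinely long wait
    {"hold_seconds": 30, "score": 5},
]

typical_wait = median(r["hold_seconds"] for r in responses)

# Separate "we kept them waiting" from "they are simply harder to please".
for r in responses:
    if r["score"] <= 2:
        if r["hold_seconds"] > typical_wait:
            r["reason"] = "waited longer than the norm"
        else:
            r["reason"] = "wait was typical; dissatisfaction lies elsewhere"
```

With only a 1-to-5 score to go on, both "1" ratings above look identical; with the paired hold times, one points to a staffing or call-handling problem and the other doesn't.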
Question 4 is another classic, asking about "the convenience of our location". What, exactly, are you going to do if 25% of your patients say the location isn't convenient? The options are probably very limited... so, unless you're tying it to marketing exercises to assess the correlation between satisfaction with the location and first-time service users (which isn't the case here, although the authors miss a golden opportunity to do this later in the survey), you have to question the value of including it in the first place.
So what's the solution? We're of the view that combining numerical data with opinions gives the most balanced view: testing patients' opinions whilst letting healthcare providers gather the data needed to assess improvement opportunities.
Next, have a look at this survey. Can you spot the flaws here? Again, there are several...
But the most significant is the confusion about who is completing the survey: is it the patient (see the intro and questions 1, 2, 5 and 6) or the parent (the remaining questions)? There's no reason why a well-designed survey (and patient satisfaction system) couldn't ask the child... after all, they are the patient.
Clarity is essential when designing a patient satisfaction survey, and this one, sadly, falls wide of the mark.
This time the survey starts off better, asking very focussed questions on which the staff can act. But now look further down the list.
Could the typical patient really comment in an informed manner about:
Of course it would be great to get this sort of data. But, unless you're only treating genuinely expert patients, without a "don't know" option you'll end up with garbage answers (and even including a "don't know" wouldn't automatically solve things).
So what can we take away from all of this? Well, several things.
Patient satisfaction surveys need to:
If you're considering designing your own patient surveys, really take time to set your data collection objectives (and the kinds of information you need), think through questions individually and in aggregate, and always take the time to critically re-read from a patient's viewpoint. Better still, why not involve patients (or other stakeholders) in the design/testing process?