This article describes four areas of success in improving data quality, and why and how they came about. It goes on to outline continuing challenges.
The first area of success has been collecting measures at assessments (probably ~85% for the year 2014-2015). I think some of the reasons for this are:
- There is now clear guidance from Team Managers that it is EXPECTED (not “suggested” or “encouraged”) that the measures are done at all first assessments unless there are extenuating circumstances. Increasingly the culture is that this is a mandatory requirement. The idea of mandatory requirements has been consolidated through CQUINs on goal-based measures, which highlight the importance of using measures in standard CAMHS clinical practice. We have seen a progression in incorporating measures into case discussion.
- In some teams, sending the measures out to families in advance of their appointments seems to work very well, as they often come with them completed. Some clinicians have found it clinically meaningful to score up the RCADS and SDQ at the start of the initial assessment to help determine the clinical level of difficulties.
- The measures fit well as part of the “Choice” appointment within the CAPA model, which is used by the generic team that sees the most cases.
- The Assistants have had to be robust about chasing clinicians to complete missing Time 1 measures, particularly given recent high staff turnover and high caseloads. We have found that this is important at the beginning to support clinicians in learning and remembering to do the measures, but it can only stand as a short-term solution, after which a longer-term protocol needs to be instituted.
- Training at all levels of the organisation and within all teams has been important in establishing a culture of using measures. Additionally, the role of the supervisor in facilitating the use of measures has been essential, and UPromise training for lead supervisors has helped to support this.
The second area of success has been in collecting review/discharge measures (increased from a minimal amount to ~50% of cases seen in 2014-2015).
- Again, support and guidance from Team Managers that this is expected of all clinicians, forms part of the clinical work, and is not just a “tick-box” exercise.
- The Assistants have developed a system to remind clinicians when it has been approximately six months since the initial assessment or last full review; the reminders are then sent via the Team Managers, giving them more “weight”. Clinicians report that this is helpful. However, it is quite an imprecise method and requires a lot of input to maintain (keeping a local “tracking spreadsheet”), so it could definitely be improved.
- Use of measures as part of referrals within teams to other disciplines and between teams
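The six-month reminder process that the Assistants currently run by hand from the tracking spreadsheet could, in principle, be partly automated. A minimal sketch, assuming a simple mapping from case ID to the date of the initial assessment or last full review (all case IDs, the data layout, and the exact six-month cut-off are illustrative assumptions, not our actual system):

```python
from datetime import date, timedelta

# Hypothetical tracking data: case ID -> date of initial assessment or last full review.
# In practice this information lives in the Assistants' local tracking spreadsheet.
last_review = {
    "case-001": date(2015, 1, 10),
    "case-002": date(2015, 6, 2),
}

REVIEW_INTERVAL = timedelta(days=182)  # approximately six months

def cases_due_reminder(tracking, today):
    """Return case IDs whose last full review was roughly six months ago or more."""
    return sorted(cid for cid, last in tracking.items()
                  if today - last >= REVIEW_INTERVAL)

print(cases_due_reminder(last_review, date(2015, 9, 1)))  # → ['case-001']
```

The resulting list could then be passed to the Team Managers to send out, keeping the “weight” of the current process while reducing the manual checking the Assistants do.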
A third area of success has been an increase in the use of session-by-session measures. What has worked is:
- Clinicians who have been on the CYP-IAPT training make up a disproportionately large proportion of the good and meaningful use of session-by-session measures; the training clearly develops their understanding and use of the measures.
- Ongoing training and guidance through meetings with Assistants and team training slots have helped.
- In the absence of an IT system that can easily store session measures and give clinicians access to them, we continue to work on workaround systems that facilitate clinicians’ use of the measures in their therapeutic practice while allowing assistant psychologists easy access for data entry.
A fourth area of success is that we now understand more clearly what kind of outcomes the CYP-IAPT quarterly reports are looking for (e.g. making sure that cases are closed; which measures can be “paired”; what “counts” and “doesn’t count” in the different tables). Some of the guidance for this has come from Central IAPT, but a lot has come about through (at times painful!) trial and error. For the specific variables on which Greenwich CAMHS has improved its data return (e.g. type of clinician, i.e. CYP-IAPT trained or not; referral source; completion of Current View), the key has been paying attention to exactly what is in the quarterly reports and making sure that the data is entered on CODE. Some of this data is cross-matched with RIO by the assistant psychologists, who also keep a printed list of each clinician’s profession and whether they are CYP-IAPT trained.
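The cross-matching the assistant psychologists do by hand, using the printed clinician list to fill in profession and training status on the data return, can be sketched as a simple lookup. All names, field labels, and records below are illustrative assumptions, not our actual CODE or RIO exports:

```python
# Hypothetical reference list (in practice, the printed list kept by the Assistants).
clinician_info = {
    "A. Smith": {"profession": "Clinical Psychologist", "cyp_iapt_trained": True},
    "B. Jones": {"profession": "Family Therapist", "cyp_iapt_trained": False},
}

# Hypothetical rows from the quarterly data return needing clinician fields filled in.
records = [
    {"case": "case-001", "clinician": "A. Smith"},
    {"case": "case-002", "clinician": "B. Jones"},
]

for rec in records:
    info = clinician_info.get(rec["clinician"])
    if info is None:
        # Clinician not on the reference list: flag for manual follow-up.
        print(f"Missing clinician details for {rec['case']}")
    else:
        rec.update(info)

print(records[0]["cyp_iapt_trained"])  # → True
```

The useful by-product of scripting this lookup is that any clinician missing from the reference list is flagged explicitly rather than silently left blank in the return.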
Briefly, what’s still not working is:
- Having to maintain one IT system for clinical records and one for routine outcome measures means there is constantly a “split” between clinical and “data” uses of the measures, which is hindering them being fully integrated into normal practice
- This also means that the whole system is reliant on Assistant Psychologists to maintain the “data side of things”, but this is getting increasingly unsustainable as the quantity of measures being used increases
- Likewise, the administration side of keeping up with photocopying measures, storing completed measures etc. is also becoming increasingly unsustainable on a limited amount of Assistant Psychologist time
- It is still very unclear how applicable the measures are for specialist CAMHS teams, though it is increasingly clear that some are definitely NOT suitable (e.g. the measures for the LD team)
- Ongoing challenge of bringing more established clinicians used to working in the “old ways” on board – much more difficult than with newer or more recently qualified clinicians!