by Mike Petree, ACMHC, Research Systems Consultant, Outcome Tools

Historically, treatment programs have managed to run successful businesses without proving treatment effectiveness, at least in the private-pay mental healthcare field. It’s not that clients and professional referents aren’t asking for outcomes; as any admissions staff member can testify, they are. Rather, the vast majority of programs have no outcomes data and no motivation to collect them unless market forces require it.

When the topic of success rates arises, common responses include clever diversions, such as raising philosophical questions about what “success” really means, sharing a favorite anecdote describing one client’s success, or referring the potential customer to an ecstatic alumnus.

Today, however, a growing number of treatment programs are researching effectiveness and can respond to the question of success by authoritatively stating, “Our program works! We’ve been measuring for years.”

Current Landscape

Currently, the National Association of Therapeutic Schools and Programs (NATSAP) and the Outdoor Behavioral Healthcare Council (OBHC) are working with more than 70 programs to build an outcomes database, using an electronic system called Outcome Tools to collect the data. These data are compiled into a single aggregate and processed annually by Dr. Mike Gass and his team of researchers at the University of New Hampshire. The results, which show each program’s strengths and weaknesses relative to other participating programs, are provided to each contributor.

Although the project began with treatment program data only, several independent educational consultants (IECs) and interventionists have since joined the data collection. With their contributions, the data provide a richer view of the full course of treatment, which often spans multiple programs and types of services. Referring professionals play a key role in rounding out the data set.

Participating programs, IECs, and interventionists use Outcome Tools, a web-based data collection system, to electronically administer a small battery of instruments, including the Youth Outcome Questionnaire for adolescent clients and parents; the OQ-45.2 for adult clients; the General Functioning Scale of the Family Assessment Device; and a demographic survey for clients, parents, and treating clinicians. At the basic level, these surveys are administered at admission, at discharge, and at 6 and 12 months postdischarge.

This battery of tools collects information about diagnostic history, treatment history, referring professionals, current symptoms, and other variables, which are then correlated with the results of the OQ Measures. The OQ Measures serve as the backbone of the system and were developed specifically for measuring change. They come equipped with a clinical change index that, on the basis of the normative data used to validate each instrument, shows whether the difference in scores from one administration to the next is large enough to be considered clinically significant change.
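For readers curious about the mechanics, the sketch below illustrates how a reliable change index of this general kind is commonly computed, using the Jacobson-Truax formulation. It is an illustration only; the function name, the example scores, and the normative values are assumptions for demonstration, not the OQ Measures’ published parameters.

```python
import math

def reliable_change_index(score_pre: float, score_post: float,
                          sd_norm: float, reliability: float) -> float:
    """Jacobson-Truax reliable change index (RCI).

    sd_norm     -- standard deviation of the instrument's normative sample
    reliability -- the instrument's reliability coefficient
    An |RCI| greater than 1.96 suggests the change is unlikely to be due
    to measurement error alone (p < .05).
    """
    se_measurement = sd_norm * math.sqrt(1 - reliability)   # standard error of measurement
    se_difference = math.sqrt(2) * se_measurement            # standard error of the difference
    return (score_post - score_pre) / se_difference

# Hypothetical example: an adult client's total score drops from 85 at
# admission to 60 at discharge. The SD and reliability values below are
# illustrative placeholders, not published OQ-45.2 norms.
rci = reliable_change_index(score_pre=85, score_post=60,
                            sd_norm=18.0, reliability=0.93)
print(f"RCI = {rci:.2f}")  # |RCI| > 1.96 -> reliable change
```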

IECA Involvement

IECA recently commissioned an analysis of the NATSAP aggregate to explore differences in outcomes between clients who did and did not hire an IEC. The results of that analysis will be compiled in an article authored by Dr. Stephen Javorski, to be published in the summer of 2015. This analysis is the first of many planned inquiries pertinent to professional referents and reflects the cooperative relationship between NATSAP and IECA. The relationship between outcomes and the involvement of a referent will remain an ongoing factor in future data analyses.

The Professional Referent’s Role

Because of the influential nature of their relationships with treatment programs, IECs play a key role in advancing the industry’s shift from practicing the art of therapeutic placement to practicing the science-informed art of placement decision making. Unlike treatment programs, which can take years to shift culture and approach, IECs are nimble and swiftly adaptable players in the treatment process. They can learn about current research results and immediately incorporate new information into program evaluation and placement decision making.

For example, in an article in the January 2015 Child & Youth Care Forum, “The Role of Transport Use in Adolescent Wilderness Treatment: Its Relationship to Readiness to Change and Outcomes,” authors Anita Tucker, Joanna Bettman, Christine Horton, and Casey Comart examined the impact of transport services on outcomes in a sample of 350 clients. Their results showed that clients who were transported had outcomes equal to or better than those of clients who were not. Although firm conclusions cannot be drawn from this study alone, such results provide empirical insight into the process and will have a direct impact on therapeutic placement decision making.

Referents can begin to influence programs toward data collection by asking the following questions when touring programs:

1. Are you collecting and reporting outcomes data?

2. Are you involved in an aggregate outcomes research initiative?

3. What normed and validated instruments are you using to collect your data?

4. Is this process overseen by a neutral third-party entity?

Conclusion

The private-pay mental health field is moving toward outcomes-informed treatment. Aggregate databases, such as those sponsored by NATSAP and OBHC, are growing rapidly, and their results are influencing the treatment process. Professional referents can play a powerful role in strengthening this movement by asking a few important questions when touring and assessing programs and by collecting and sharing outcomes data on their own clientele.

For more information about how to become involved, please contact Mike Petree at [email protected]