Mental Health Performance Measures Pilot Project

Primary Phase - Final Report

 

Prepared by ACSES Staff

October 1, 2001

 

Background

 

The Mental Health Performance Measures Pilot Project is a joint effort of the Alaska Mental Health Board (AMHB) and the Division of Mental Health and Developmental Disabilities (DMHDD) that began in February 2001.  The goal of this project is to develop standardized outcome measures to be used at all state-funded mental health treatment facilities in Alaska.  To finalize the draft questionnaires, DMHDD contracted with Alaska Comprehensive and Specialized Evaluation Services (ACSES) to serve as independent consultants and to conduct a pilot project on the instruments.  The pilot project was conducted in two phases, a preliminary phase and a primary phase.  The work of the Preliminary Phase, to collect clinician and consumer feedback about the ease of use, structure, and utility of the instruments, has been completed and results were submitted to the Oversight Committee.  The goal of the Primary Phase, to evaluate the sensitivity of the revised versions of the instruments to consumer change across time, has been achieved and results are presented in this report.

 

Reports Prepared by ACSES and Submitted to the Oversight Committee:

 

 

Research Design and Procedures for the Primary Phase

 

            Instrumentation

 

Three instruments were used in the Preliminary Phase, namely, the Client Assessment Worksheet, the Mental Health Consumer Satisfaction Survey, and the Mental Health Statistics Improvement Program Consumer Survey (MHSIP).  Based on feedback from clinicians and consumers, the Oversight Committee made additional revisions to these instruments after the Preliminary Phase.  The resultant instruments are the Client Status Review, the Demographic Questionnaire, and the MHSIP.

 

Client Status Review

For the Primary Phase, the revised Client Assessment Worksheet was renamed the Client Status Review (CSR).  Appendix Seven provides the original Client Assessment Worksheet; Appendix Eight provides the Client Status Review used in the Primary Phase of this pilot project.

 

Demographic Questionnaire

For the Primary Phase, the revised Mental Health Consumer Satisfaction Survey was renamed the Demographic Questionnaire.  Two versions of the Demographic Questionnaire were developed, an Adult Services version and a Child and Family Services version.  Appendix Nine provides the original Mental Health Consumer Satisfaction Survey; Appendix Ten provides the Adult Demographic Questionnaire; Appendix Eleven provides the Youth Demographic Questionnaire.

 

Mental Health Statistics Improvement Program Consumer Surveys

Three versions of this instrument were used, namely, Adult Survey, Youth Services Survey, and Youth Services Survey for Families.  Appendix Twelve provides the original Adult MHSIP.  The original youth surveys consisted of 26 items; subsequent revisions (by the national group that is developing and refining this instrument) have resulted in 21-item questionnaires.  Appendix Thirteen provides the original Youth Services Survey; Appendix Fourteen provides the Youth Services Survey for Families.

 

            Global Assessment of Functioning

In addition to administering these questionnaires, clinicians were asked to provide a current Global Assessment of Functioning (GAF) rating for each client.  GAFs were collected to provide an established measure of client functioning against which to compare the overall CSR results.  It should be noted that GAFs are not precise measures; however, clinicians are trained in their use, and GAFs do provide some data for comparison purposes.

 

            Procedures

 

Prior to data collection, an ACSES staff member visited each of the participating agencies to provide training on timing and procedures, including information on research design, data submission procedures, and confidentiality issues.  Before administering the instruments, clinicians were to ask consumers if they would be willing to help with the project by answering the questionnaires, providing opinions about the questions, and making recommendations about possible changes.

 

Time One Data Collection

At Time One, after receiving consumer consent, clinicians administered the CSR in a structured interview format to as many consumers as possible over a one-month period (later extended to six weeks), and asked the same consumers to complete and return the Demographic Questionnaire and MHSIP independently.  The one-month period was selected to accommodate missed appointments, consumers on a once-a-month appointment schedule, and similar circumstances.  To match Time One and Time Two responses, questionnaires were coded with consumers’ ARORA or agency identification numbers (no names were used anywhere on the research protocols).  Because two agencies started late, the data collection period was extended by an additional two weeks to maximize data collection opportunities; all sites were notified of and invited to participate in this change of timeframe.  Actual Time One dates differed from site to site and are provided in the individual agency descriptions below.

After completing the CSR with a consumer, clinicians handed the consumer a packet containing the Demographic Questionnaire and the MHSIP to complete independently and gave a brief overview of the tasks involved.  A cover letter from ACSES to the consumer was included in the packet to explain the purpose of the project and the procedures used to ensure confidentiality and anonymity.  On behalf of ACSES, clinicians also provided stamped, self-addressed envelopes to the consumers, who had the option either to mail the questionnaires directly to ACSES or to seal the envelope and give it to an agency staff member (e.g., a receptionist) who would mail it for them.  Consumers were also given the option of asking clinicians to help them complete the questionnaires if they needed assistance.  Due to the low rate of return by mail from consumers during the Preliminary Phase, clinicians were asked to encourage their consumers to exercise their right to give feedback about the services they receive and, if possible, to complete the questionnaires while at the agencies and return them in sealed envelopes to a staff person to be mailed for them.

 

            Time Two Data Collection

Time Two administrations were conducted four to six weeks after Time One, following the same procedures outlined for Time One and using the same questionnaires.  During Time Two, clinicians administered the questionnaires to the same consumers who responded during Time One.  Any consumers who completed the questionnaires during Time One and were discharged before Time Two began were to be administered the questionnaires a second time at their discharge.  Ideally, consumers were to complete the questionnaires during a termination session.  However, if consumers did not attend a termination session, clinicians were asked to mail all three instruments to them to complete and return to ACSES in a pre-paid, self-addressed envelope.  Time Two data collection began on July 16 and was completed for all agencies on August 13, 2001.

            Due to the low participation rate at several of the sites, as reported in the Interim Report, the Oversight Committee requested that more Time One data be collected.  This was particularly important for gathering rural consumer and clinician feedback, as well as additional input from child and youth consumers.  Arrangements for additional Time One data collection were made; agencies did not collect Time Two data for those consumers. 

 

Participating Agencies

 

Based on community size, region, willingness to participate, number of consumers served, and informed consent procedures in place, six pilot sites were selected by DMHDD to participate in this pilot project.  These agencies were as follows:  Southcentral Counseling Center, 4Rivers Counseling Service, Alternatives Community Mental Health Center, Bristol Bay Mental Health Center, LifeQuest, and Norton Sound Behavioral Health Services.  A brief description of each agency’s level of participation during the Primary Phase follows.

 

Southcentral Counseling Center (SCC), Anchorage

Staff training for Time One of the Primary Phase was provided to the clinical associates on May 10.  Staff volunteered to administer a few more surveys than the required number.  One associate was unable to participate due to emergency family leave during most of the Time One period.  Data collection was scheduled for May 11 through June 15, 2001.

            Staff training for the Time Two administration was conducted with the clinical associates on July 5.  Time Two data were collected from July 16 through August 13.  Notably, participation at this agency was higher throughout this pilot project than at any of the other agencies.  This may be attributable to the Executive Director’s strategy of soliciting volunteers, setting a minimum level of participation, and offering monetary and compensatory incentives to each staff participant who achieved the requisite participation level.  SCC staff members are to be commended for their efforts in this project.

 

4Rivers Counseling Service, McGrath

Staff training for Time One of the Primary Phase was conducted on May 4 with the Executive Director.  The primary clinician at this site continued to have strong personal objections to the overall project and called in sick on the day of the training.  However, that same day, an ACSES staff member observed this clinician, apparently in good health, on the flight from Anchorage to McGrath.  The Director prepared detailed written instructions for the clinician outlining the administration process, along with a directive that she was to participate by administering the questionnaires to her consumers according to the research design procedures.

The clinician called an ACSES staff person stating that, because of 4Rivers policies and procedures, the clients’ bill of rights, and her ethical practice, she could not administer the surveys to clients aged 13-18 without the written informed consent of their parents.  She reported that this was consistent with her need to obtain written informed consent from parents each time she traveled to her assigned villages to provide mental health services.  She expressed a great deal of concern that asking her clients to complete the questionnaires was not part of their treatment plans and was, therefore, outside the realm of professional practice.  She also reported being unable to administer the questionnaires to children and youth because she saw these consumers in their school environment and therefore did not have contact with the parents/primary caregivers to obtain informed consent.  The ACSES staff person notified the Executive Director of the clinician’s request for an informed consent form and the need to coordinate this request with DMHDD (as other agencies had successfully done).  The informed consent form developed for this project was provided to the Executive Director.  Anne Henry, DMHDD, also talked with the clinician to impress upon her the importance of gathering information from her client population.  The clinician stated that she was participating; however, she did not schedule further trips to the villages during the Time One administration and was on vacation for the month of June.

            The Director prepared a short introduction letter from his agency to explain the project to consumers and to emphasize the importance of rural input into the project.  Data collection was scheduled to begin on May 7 and to be completed on June 15.  Because the Director was out of McGrath for most of the Time One administration period and the clinician did not administer any questionnaires, no data were received from this site.  The lack of Time One data precluded the agency’s participation in the Time Two administration.

 

Alternatives CMHC, Anchorage

Staff training for Time One of the Primary Phase was conducted on May 10 with clinicians and case managers, with much discussion about the administration process.  Staff wanted to administer the questionnaires to their consumers aged 18-21 who were involved with the Division of Juvenile Justice.  For this group of consumers, the adult questionnaires were used, with clinicians noting the consumer’s age and custody status on the forms.

The Utilization Review Manager expressed the need for an informed consent prior to administering the questionnaires.  Although it was explained that the consent form developed for the Preliminary Phase was sufficient, the manager remained firm in the need for a different consent form.  This issue was referred to the Executive Director to resolve with DMHDD.  A staff person noted that the Division of Family and Youth Services (DFYS) has its own regulations about consumers participating in this type of project.  The Director decided to contact DFYS about providing a general consent for DFYS consumers to participate in the project, which might eliminate the need for individual youths to complete consent forms.  Developing and coordinating another consent form with DMHDD and with other projects in which the agency was involved led to a delay in data collection.  Data collection started the week of May 21 and was completed on June 15.

            Time Two staff training was conducted on July 10, and data were collected from July 16 through August 13.  Staff also collected additional Time One data during this phase from consumers who had not yet participated in the project.  During the Primary Phase, the agency experienced significant staff turnover that negatively affected staff’s ability to collect an adequate sample of consumers.  The Executive Director and the Utilization Review Coordinator remained interested, cooperative, and enthusiastic about the project, and did their best to collect data.

 

Bristol Bay Mental Health Center, Dillingham

Staff training for Time One of the Primary Phase was conducted on May 1, with data collection for adult consumers scheduled to begin on May 2 and end on May 30.  The Executive Director requested that staff administer the questionnaires to child and youth clients as well.  However, no child or youth surveys were administered because the assigned clinician felt the CSR was not age-appropriate.  In some of the villages, village-based family social workers, rather than agency staff, were to be asked to administer the questionnaires to their clients; a ‘village code’ number on the form would identify these administrations.  Actual Time One data collection was conducted from May 8 through June 15.

            Staff training for Time Two was held on July 9 with data collection occurring from July 16 through August 13.  Staff also collected additional Time One data during this phase from consumers who had not yet participated in the project.

 

LifeQuest, Wasilla

Staff training for Time One of the Primary Phase was conducted on May 3 with case managers and the Medical Services Team; staff training for outpatient clinicians was conducted on May 7.  The Medical Services Team opted out of administering the questionnaires due to the ‘non-billable’ nature of the service and concerns that, because the majority of their consumers were already stabilized on medications, the data would not show improvement unless the instruments were administered at intake.  Data collection by case managers began on May 7; outpatient clinicians started data collection on May 8.  Data collection was completed on June 15.

            Time Two staff training was conducted on July 12 with case managers and outpatient clinicians, with data collection scheduled to begin on July 16 and to be completed on August 13.  When no data had been received from the agency by August 27, an ACSES staff member contacted one of the managers coordinating the project at LifeQuest.  This individual reported being unsure what had happened, stating that the departure of the administrative person coordinating the project and staff’s busy schedules were the most likely contributing factors.  The manager reported that staff were ‘so pushed’ with other demands that she had not asked them to do more.  The Executive Director, however, stressed to the managers the importance of gathering the Time Two data, and the data collection period was extended until September 14 to permit staff to re-administer as many surveys as possible.  This deadline was extended again until September 21.  When data had still not been received, and given the need to establish a cut-off date that would permit adequate time to evaluate the data, the deadline was moved up to September 19, and an ACSES staff person traveled to Wasilla to pick up any data that had been collected.

 

Norton Sound Behavioral Health Services (BHS), Nome

Staff training for Time One of the Primary Phase was conducted on May 15.  During the first week of the administration period, no surveys were administered because most of the staff were attending a local training conference.  In some of the villages, staff planned to ask the village-based counselors to administer the questionnaires.  Data were collected from May 21 through June 15.

            Time Two staff training was conducted on July 13, with data collection occurring from July 16 through August 13.  Staff also gathered additional Time One data during this phase from consumers who had not yet participated in the project.  Despite the Executive Director’s continued interest and enthusiasm about the project, the agency’s participation rate continued to be lower than expected.  This significantly limited the rural input that was so important to the project.

 

Findings

 

Findings are presented in three sections: 1) staff observations, which outline observations made by ACSES staff regarding the level of cooperation and enthusiasm about the pilot project among participating agency staff; 2) quantitative results, which provide data derived from the questionnaires; and 3) qualitative results, which summarize written comments made by consumers and clinicians about the questionnaires.  Individualized reports regarding consumer satisfaction at Time One were prepared for three agencies for which adequate data were available.  These reports were distributed directly to these agencies and are not part of this report.

 

Staff Observations

 

Clinicians’ responses to participating in the pilot project were obtained from direct observation by an ACSES staff member during staff training and from direct comments by clinicians.  As in the Preliminary Phase, responses ranged from enthusiastic acceptance of the project to significant resistance.  Staff at most of the agencies were professional and cooperative.  However, at one agency, some staff did not even look at the questionnaires during the training; at another agency, one clinician avoided the training by calling in sick.  All agency directors continued to be very cooperative and interested in participating.  Specific levels of staff interest and cooperation at the various agencies can be inferred from the response rates contained in Table One.

Several agencies reported difficulties obtaining consumer participation for the Time Two administrations.  Reasons cited included being unable to contact consumers by phone; consumers being unwilling to meet with the clinician; and consumers being on extended vacations, absent from home during the late summer months, or relocated out of state.  At two agencies, staff turnover and workloads were reported as obstacles to completing the Time Two administrations.  Another concern had to do with language: some staff members reported that, for many of their clients, English was a second language, creating a significant barrier to data collection.  Some staff members were also concerned that administering the instruments could have a negative impact on therapeutic relationships; these individuals suggested that an independent party, rather than agency staff, should administer these types of questionnaires.  Finally, several staff members expressed concerns that the State would cut funding if consumers did not show improvement over the two administrations.

 

Quantitative Results

 

Response and Match Rates

Table One provides the number of instruments completed at Time One and Time Two, broken down by agency.  Across all agencies, 150 completed CSRs were submitted to ACSES for Time One and 69 CSRs were submitted for Time Two.  In addition to the 150 CSRs returned at Time One, 21 CSRs were collected out of phase with no plans for a Time Two administration; these 21 CSRs were included in the Time One analyses.  Of the total 171 Time One CSR protocols, 37.4% (n=64) were submitted by SCC, 33.9% (n=58) by LifeQuest, 16.4% (n=28) by Bristol Bay, 7.0% (n=12) by Norton Sound, and 5.3% (n=9) by Alternatives.

The length of time between Time One and Time Two administrations ranged from 37 to 135 days, with a mean of 72.6 days (SD=26.9).  Based on ARORA numbers provided on the CSR, it was possible to match 67 of the Time One (n=150) and Time Two (n=69) CSRs, representing an overall match rate of 44.7% (67/150).  Two of the Time Two CSRs included ARORA numbers that did not match any Time One ARORA numbers.  It is unknown whether this means that two consumers were administered Time Two protocols without having received Time One protocols, or whether the ARORA coding was incorrect.  The five participating agencies varied in their overall match rates between Time One and Time Two as follows: Southcentral Counseling Center, 75% (48/64); Bristol Bay, 23.5% (4/17); LifeQuest, 22.4% (13/58); Norton Sound, 20% (1/5); and Alternatives, 16.7% (1/6).
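
To make the matching procedure concrete, the following sketch (hypothetical Python; the function and variable names are invented for illustration and do not reflect the actual analysis procedures) shows how Time One and Time Two CSRs can be matched on ARORA numbers and how the match rate reported above is derived.

    def match_rate(time1_ids, time2_ids):
        # Match Time One and Time Two protocols on ARORA/agency IDs.
        t1, t2 = set(time1_ids), set(time2_ids)
        matched = t1 & t2              # IDs present at both administrations
        unmatched_t2 = t2 - t1         # Time Two IDs with no Time One record
        # As in the report, the rate is computed against Time One protocols:
        # e.g., 67 matched / 150 Time One CSRs = 44.7%.
        return matched, unmatched_t2, len(matched) / len(t1)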

For Time One, 106 Demographic Questionnaires and 106 MHSIPs were received from consumers.  For Time Two, 54 Demographic Questionnaires and 54 MHSIPs were received.  An additional 15 Demographic Questionnaires and MHSIPs were received and used in the subsequent Time One analyses; these additional protocols had no matching Time Two data.  Of the received Time One forms, 112 were adult versions and 19 were youth versions; of the 54 Time Two forms, 53 were from adults and 1 was from a youth.  To ensure confidentiality of responses, no matching identifiers were requested on the consumer-completed questionnaires.  However, by comparing responses to demographic items, it was possible to match 42 Time One and Time Two Demographic Questionnaires and MHSIPs, for an overall match rate of 39.6% (42/106).
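
Because the consumer-completed forms carried no identifiers, the matching described above relied on comparing demographic answers.  A minimal sketch of this approach follows (hypothetical Python; the field names are invented, standing in for the actual Demographic Questionnaire items):

    def demographic_key(record):
        # Combine stable demographic answers into a composite matching key.
        return (record['agency'], record['gender'],
                record['age_group'], record['ethnicity'])

    def match_by_demographics(time1_records, time2_records):
        index = {demographic_key(r): r for r in time1_records}
        # Note: two consumers with identical profiles would collide on the
        # same key, so matches of this kind are approximate by nature.
        return [(index[demographic_key(r)], r) for r in time2_records
                if demographic_key(r) in index]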

 

Consumer Characteristics

Based on the place of administration of the 171 Time One CSRs, it was possible to determine that 76.6% of the participants were from urban areas and 23.4% were from rural areas.  Unfortunately, of the 67 matched CSRs, only 7.5% (n=5) were received from rural areas.  Table Two provides Time One and Time Two consumer and clinician responses for the adult version of the Demographic Questionnaire.  Based on Time One data, 51 respondents (45.5%) were female, 46 (41.1%) were male, and 15 (13.4%) gave no response.  The majority of adult respondents were in the 23-59 years old category (83.9%, n=94).  Of these consumers, 58 (51.8%) were Caucasian, 40 (35.7%) Alaska Native/American Indian, five (4.5%) African American, three (2.7%) Hispanic/Latino, two (1.8%) Asian/Pacific Islander, one (0.9%) did not know, and three (2.7%) provided no response.  Of the consumers, 67% indicated receiving case management services, 76.8% psychiatric services and medications, 33.9% transportation services, and 19.6% housing.  These figures suggest that the typical consumers who participated in this pilot study were persons with severe and persistent mental illness.

            Table Three provides Time One and Time Two consumer responses for the 19 youth and child version Demographic Questionnaires.  Based on Time One data, the majority of minor respondents were in the 6-12 years old category (63.2%, n=12).  Of the respondents, 13 (68.4%) were Caucasian, three (15.8%) Alaska Native/American Indian, one (5.3%) African American, and two (10.5%) Other.

 

            Client Status Review

            Clinicians reported that administering the CSR took 10-30 minutes, with the most commonly reported timeframe being 10-15 minutes.  A few clinicians reported taking longer, depending on the client’s intellectual level or diagnosis.  One clinician noted that ‘scheduling time to meet with the clients was more of a problem’ than administration time.

Of the 171 CSRs received during the Time One administration, five questionnaires were missing the consumer’s ARORA or agency number.  Most of the missing ARORA (or agency) numbers were later obtained by an ACSES staff member; however, this took additional staff time.  Almost three-quarters of the Time One instruments (70.8%, n=121) were returned without the Sum of Ratings section completed.  Of the 50 CSRs returned with the Sum of Ratings scored, eight (16.0%) were scored incorrectly.  Of the 69 CSRs received during Time Two, all included an ARORA or agency number.  Of these 69 Time Two CSRs, 18 (26.1%) did not provide a calculated Sum of Ratings.  Of the 41 returned with the Sum of Ratings scored, three (7.3%) were scored incorrectly.

            Initial review of the CSR data revealed some missing responses, particularly on the five components of Question #6.  At Time One, 7.0% of protocols had at least one of the 10 responses missing; at Time Two, 4.3% had at least one item missing.  Table Four provides the specific number of missing responses for each CSR item.  Because the Sum of Ratings is calculated from responses to all 10 items, any missing data will yield a Sum of Ratings that is not indicative of the consumer’s actual level of functioning.  An alternative would be to calculate the Sum of Ratings by substituting the average of the responses provided for each missing item; this would yield a score that is comparable to other Sum of Ratings received from the same or other individuals.  However, scoring the questionnaire in this manner is more complicated and prone to error.
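
As an illustration of the prorating alternative described above, a minimal sketch follows (hypothetical Python; item scores are assumed to be numeric, with None marking a missing response):

    def prorated_sum_of_ratings(item_scores, n_items=10):
        # Sum of Ratings with each missing item imputed at the mean of the
        # answered items, keeping scores comparable across protocols.
        answered = [s for s in item_scores if s is not None]
        if not answered:
            return None                  # nothing to score
        mean_item = sum(answered) / len(answered)
        return sum(answered) + mean_item * (n_items - len(answered))

For example, if nine answered items sum to 18, the mean answered item is 2.0 and the prorated Sum of Ratings is 18 + 2.0 = 20.0, rather than the misleadingly low raw sum of 18.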

            Table Five provides Time One and Time Two responses for the 67 matched CSRs.  Table Six provides means and standard deviations for each CSR item and the Sum of Ratings for Times One and Two.  As a reminder, on the CSR lower scores indicate a higher level of functioning; the desired change is thus toward lower scores at Time Two as compared to Time One.  A difference score (calculated as Time Two minus Time One) that is negative indicates improvement, whereas a positive difference score indicates worsening.  Table Six provides these mean difference scores and the results of t-tests comparing scores at Times One and Two.  For the 10 primary items, four had negative (improved) mean difference scores (ranging from -.03 to -.25), five had positive (worsened) mean difference scores (ranging from .01 to .15), and one had equal means (i.e., a difference score of 0) at Time Two as compared to Time One.  The Sum of Ratings was negligibly higher (worse) at Time Two than at Time One.  Results of the t-tests revealed that the differences between Time One and Time Two failed to reach statistical significance for all individual items and the Sum of Ratings.
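
The difference-score convention and the paired t-tests reported in Table Six follow the standard form sketched below (hypothetical Python using scipy; the names are illustrative, and the original analyses may have been run in a different statistical package):

    from scipy import stats

    def paired_change_test(time1_scores, time2_scores):
        # Difference scores: Time Two minus Time One for each matched consumer.
        # On the CSR, a negative mean difference indicates improvement.
        diffs = [t2 - t1 for t1, t2 in zip(time1_scores, time2_scores)]
        mean_diff = sum(diffs) / len(diffs)
        t_stat, p_value = stats.ttest_rel(time2_scores, time1_scores)
        return mean_diff, t_stat, p_value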

            To examine change from Time One to Time Two for each item and the Sum of Ratings in yet another way, Table Seven provides the number of consumers who had the same, a lower, or a higher score at Time Two versus Time One.  Table Seven also provides the results of McNemar’s change tests (Siegel, 1956), used to determine whether responses to these items increased or decreased in a consistent manner across consumers from Time One to Time Two.  For all of the individual items, the largest group of respondents showed no change.  Among consumers for whom change was reported, on a majority of the items the change was in the direction of improvement (i.e., a decrease in item score).  However, for the Sum of Ratings, the proportion of consumers who had the same scores was very low, and the proportions whose functioning increased or decreased were nearly identical.
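
McNemar’s change test, as applied here, considers only the consumers whose scores changed between administrations.  A minimal sketch of the test statistic follows (hypothetical Python; this uses the common chi-square approximation with continuity correction, which may differ in detail from the computation actually used):

    from scipy import stats

    def mcnemar_change_test(n_increased, n_decreased):
        # n_increased / n_decreased: consumers whose score rose or fell
        # from Time One to Time Two; consumers with no change are ignored.
        n_changed = n_increased + n_decreased
        if n_changed == 0:
            return None, 1.0
        chi2 = (abs(n_increased - n_decreased) - 1) ** 2 / n_changed
        p_value = stats.chi2.sf(chi2, df=1)   # upper tail, 1 degree of freedom
        return chi2, p_value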

            Results of the McNemar’s tests revealed significant decreases in scores (improvements in functioning) on four items, namely, “How often can you currently get the physical health care that you need?”; “Have friends/relatives asked you to cut down on alcohol or other drugs or to quit entirely?”; “Have you experienced guilt because of your drinking or use?”; and “Do you need an eye opener in the morning to get started?”  Note that for the three drinking-related items, decreases (improvements) meant that the individuals stated “yes” at Time One and “no” at Time Two.

In terms of overall functioning, at Time One the Sum of Ratings ranged from 8 to 34, with a mean of 19.4 (SD=4.3), lower scores indicating better functioning.  GAFs obtained at Time One ranged from 21 to 95, with a mean of 48.6 (SD=12.0), higher scores indicating better functioning.  The correlation between the Time One Sum of Ratings and GAF scores was -.37.  At Time Two, the Sum of Ratings ranged from 14 to 42, with a mean of 19.5 (SD=5.0).  GAFs obtained at Time Two ranged from 21 to 80, with a mean of 42.3 (SD=10.8).  The correlation between the Time Two Sum of Ratings and GAF scores was -.22.  Because the two scales run in opposite directions, negative correlations are the expected direction of association; however, their modest magnitudes suggest that the GAF and the Sum of Ratings measure different aspects of consumer functioning.
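
The Sum of Ratings/GAF comparison above is an ordinary Pearson correlation; a sketch (hypothetical Python, with illustrative variable names):

    from scipy import stats

    def csr_gaf_correlation(sum_of_ratings, gafs):
        # Lower Sum of Ratings and higher GAF both indicate better
        # functioning, so a negative r is the expected direction.
        r, p_value = stats.pearsonr(sum_of_ratings, gafs)
        return r, p_value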

 

MHSIP

Table Eight provides Time One and Time Two responses to the adult version of the MHSIP.  Table Nine provides means and standard deviations for each MHSIP item and the Domain Subscales for Times One and Two.  Table Nine also provides the mean difference scores and the results of t-tests comparing Times One and Two.  Of the means for the 28 items, 20 were lower at Time Two than at Time One (indicating higher levels of satisfaction), seven were higher (indicating lower levels of satisfaction), and one was the same.  Of these differences, t-tests revealed statistical significance for only one item, namely, “Encouraged to use consumer-run programs”, with greater satisfaction being expressed on this item at Time Two.  All four subscales were somewhat lower at Time Two than at Time One, indicating slightly higher levels of satisfaction at Time Two; however, the differences were not statistically significant.

            Table Ten provides the number of consumers who had the same, lower, or higher scores at Time Two as compared to Time One for each item and the domain subscales.  Table Ten also provides the results of McNemar’s change tests (Siegel, 1956), used to determine whether responses to these items increased or decreased in a consistent manner from Time One to Time Two.  For most of the items, the largest group of respondents showed no change.  Among consumers for whom change was reported, on 22 of the items the change was in the direction of greater satisfaction (a decrease in score); on four of the items, the change was in the direction of lesser satisfaction; and on two items, the numbers of consumers whose satisfaction decreased and increased were the same.  Results of the McNemar’s tests revealed significant changes in scores on only four items, all in the direction of greater satisfaction: “Staff were willing to see me as often as I felt necessary”; “Encouraged to use consumer-run programs”; “I do better in school and/or work”; and “My housing situation has improved”.  For the four domain subscales, the majority of consumers reported slightly higher levels of satisfaction at Time Two than at Time One, although these differences did not reach statistical significance.

            Table Eleven provides Time One responses to the Youth Services Survey for Families.  No Time Two data were provided on this instrument.  Table Twelve provides Time One and Time Two responses to the Youth Services Survey.  Given that only one Time Two questionnaire was collected, no further analyses were conducted.

 

Additional Analyses

            At the request of the Oversight Committee, additional analyses were conducted to assess whether rural and urban respondents differed from one another in their responses to the instruments.  Additionally, analyses were requested regarding potential differences between consumers of Alaska Native heritage and those of other ethnic backgrounds.  The rural versus urban comparisons were based on the location of data collection and hence were possible for all instruments.  Ethnicity was not assessed on the CSR, making such analyses impossible for that instrument.  Because ethnicity information is collected on the Demographic Questionnaires, analyses by ethnicity were possible for the MHSIP and the Demographic Questionnaire.

            Rural versus Urban Analyses:  Initial review of the Demographic Questionnaire revealed that consumers from urban and rural agencies differed significantly on a number of critical variables related to type of clientele served.  Specifically, urban consumers represented in the current sample were more likely than their rural counterparts to report that they are:

·  receiving psychiatric services/medications (91% urban; 56% rural);

·  receiving case management (86% urban; 25% rural);

·  receiving long-term care, i.e., 2 years or more (72% urban; 34% rural); and

·  non-Native rather than Alaska Native (79% urban; 39% rural).

These findings strongly suggest that the urban and rural samples included in this project were not equivalent, but rather comprised significantly different consumers.  Specifically, it appears likely that the urban sample was a relatively homogeneous group of consumers who received intensive long-term care and who likely represented a persistently and severely mentally ill population, whereas the rural sample appeared to be a much more heterogeneous group of consumers.  Further, rural versus urban location of assessment was highly correlated with ethnic background, making it impossible to tease apart the separate effects of these two ways of grouping the sample.

Given the dissimilarity of the urban and rural samples and the confounding of location with ethnicity, additional analyses are not appropriate (thus, although they were calculated, they are not reported here, lest they be misunderstood as usable and appropriate).  For example, if differences were revealed in urban versus rural responses, it would be impossible to ascertain whether these differences were caused by urban versus rural factors, ethnic backgrounds, or type of consumer.  To address questions regarding location in the state and ethnicity, future data collection must incorporate careful stratified sampling procedures.

 

Qualitative Results

 

A complete listing of the qualitative comments provided by clinicians and consumers is given in Appendices One to Six, categorized by instrument and site.  Following is a summary of the most commonly expressed concerns, provided separately for each questionnaire.  This summary should not be used in place of reading the comments: clinicians and consumers clearly put a great deal of thought into their comments, and a careful review of them is crucial to the credibility of any additional instrument revision or replacement effort.  Not surprisingly, given that all clinicians and many consumers had encountered the questionnaires previously, most consumer and clinician responses to the follow-up questions about the questionnaires themselves were brief, and significantly fewer responses were obtained than in the Preliminary Phase.

           

Overall Comments

According to clinicians, consumers were willing to complete the questionnaires in the structured interview format, and most who were asked to participate did so voluntarily.  One consumer who declined to participate did so because of ‘too much asking and too much paperwork’ (as recorded by the clinician).  One rural agency reported that three consumers declined to participate.  One clinician offered this general comment about all of the questionnaires:  ‘Many of the participants requested help with the forms, which were created with the intention for independent completion from the consumers.  Keeping this in mind, I think it is important to plan the wording carefully to encompass the wide range of education barriers that many people struggle with.’  A number of clinicians noted that consumers needed clarification on many questions.  At one rural agency, clinicians noted that about one-half of their consumers needed help completing the Demographic Questionnaire and MHSIP.  They also noted that most consumers did not want to complete the forms while at the agency, preferring to take them home and mail them later.  Also of note were concerns expressed by consumers about inadequate staffing, overworked staff, and Medicaid funding cuts that have decreased services and made filling out additional forms a significant hardship.

 

Client Status Review:  Overall Comments

Appendix One provides clinician and consumer comments in response to the CSR.  Several consumers expressed positive opinions about the process:  ‘I enjoy the questions.  I support this questionnaire.  Good job.’  However, a number of consumers also expressed negative comments about the CSR, such as ‘I didn’t like intrusive questions; I don’t think these questions are necessary; don’t like nosey people.’  One consumer commented that ‘the questions assuming substance abuse and problems with the law are offensive.’ 

            A significant number of clinicians and consumers commented that the CSR was not appropriate for children and youth, and recommended developing a separate questionnaire more applicable to a younger consumer population.  Comments in this regard included ‘doesn’t apply to consumer’s age’, ‘none applied’, ‘not worded for parents to answer for a child’, and ‘should make a form for the parent specifically to answer for the child.’  Several clinicians noted having to reword questions (substituting consumers’ names) because they were requesting the information from the consumers’ parents.  The overall recommendation from consumers and clinicians alike was to develop a separate questionnaire for child and youth consumers, structured so that it can be completed by a parent or primary caregiver.

            One rural provider commented in conversation, in a manner representative of other rural staff, that rural services differ significantly from urban services in that they tend to be much more short-term (one or two sessions), crisis-focused, or intensive in nature.  Systemic interventions at the family and community level are more common and will not be captured by the current CSR.  This provider suggested alternative means for assessing the success of rural mental health programs, focusing not on individual clients but on community perceptions and use of the local mental health agency.

            The majority of consumers reported that the CSR questions were neither hard to answer (64.2%) nor unclear (68.9%).  One consumer suggested adding the question, ‘What do you feel has been helping the most?’  Another suggested asking, ‘Is treatment confidential (private)?’, while noting the lack of privacy in a case manager’s office.  Another consumer indicated a ‘Need to ask if [services are] adequate after hospitalization.’  Clinician suggestions included adding questions about client satisfaction with, and complaints about, medications and services; treatment results over time; and lack of improvement and its reasons.  One clinician indicated that the questionnaire should have asked more about mental health.  Another clinician asked, ‘What are you attempting to determine?’

 

            Specific CSR Item Comments

Following is a summary of comments made regarding specific CSR items.  The reader again is urged to read the detailed comments in the Appendix and not to rely only upon the summaries.

Comments about CSR Question One:  On Question 1 (regarding activity), one consumer commented ‘If you weren’t doing one of those things, you’d be dead.’  Along the same line, one clinician noted that the response ‘not active’ makes it appear ‘as if the consumer never does any activity’, suggesting ‘It may be better to state a choice such as rarely active, minimally active, etc.’  Another asked ‘level of activity, how defined for a home-schooled child?’  Another consumer described the question as ‘nebulous.’  Several clinicians commented that agency group activities should count as meaningful activity, since this was ‘an active area’ for many consumers.

            Comments about CSR Question Two:  Several consumers commented that Question 2 (regarding physical health problems) was difficult to answer or unclear due to the wording and being ‘unsure about physical.’  One consumer identified weight problems, which negatively affected her mental health, as the factor that kept her from doing normal activities.

            Comments about CSR Question Three:  For Question 3 (regarding physical health care access), a number of consumers noted services that they needed, such as eye and/or dental care.  One consumer wrote ‘I need dental care and just filled out a grant through the dental program.’

            Comments about CSR Question Four:  Numerous comments about Question 4 (regarding payment for health care) indicated that consumers had difficulty choosing between ‘fully insured’ and ‘well-insured’ because Medicaid and Medicare were not identified and do not cover all services.  Other consumer comments included ‘confusing; Medicaid & Medicare needed to be specified.’  One consumer asked ‘Is health care the same as mental health care?’

            Comments about CSR Question Five:  There were several comments about including crisis respite services in Question 5 (regarding hospitalization for mental health care), since consumers had used this type of service within the past six months.

            Comments about CSR Question Six:  Question 6 (regarding drug and alcohol use) received the most criticism, with a recurrent theme from both clinicians and consumers being that the question presumed a substance use problem.  One consumer captured the predominant feeling about this question:  ‘I don’t drink and you should have asked if I did before you implied that I needed to answer these questions.’  Other consumer comments included: ‘I felt it was intrusive; assumes I have drug/alcohol problem; where to indicate no longer using? You assume I’m alcoholic; I have 3 years clean and sober and nowhere to note it; Don’t drink!’ Clinician comments echoed those of consumers:  ‘#6 continues to be a poorly worded question as many of those who participated do not use drugs or alcohol and felt this question assumed that they did; Consumers find it difficult to differentiate whether the question is pertaining to past or present use; Need to ask whether client drinks/drugs.’  There also seemed to be some confusion on the part of consumers about how to answer the questions.  One clinician commented that a consumer had answered ‘yes’ to the question about feeling guilty because of one relapse with alcohol use.  Another consumer answered ‘yes’ to ‘are you annoyed by friends or relatives who question your use?’ and wrote in ‘annoyed by other people pressuring (me) to use.’  There were also a number of comments wondering if caffeine and tobacco were considered drugs. One clinician noted that response option one had ‘weird phrasing – double negative.’

            Comments about CSR Question Seven:  On Question 7 (regarding financial situation), one parent indicated that the child has Medicaid, but that the parent himself or herself faces ‘extreme financial hardship.’  One clinician reported that consumers had problems with this question, stating ‘Kids questioned if this relates to their finances or their parents’?’

            Comments about CSR Question Eight:  On Question 8 (regarding housing), several consumers wrote in ‘pay rent’ as a clarification to ‘live with others.’  One consumer was unsure about being independent or not, commenting ‘I’m very dependent on various agencies.’  One clinician stated that the question ‘continues to not address transitional housing’ and asked ‘Does this fit into the homeless category?’  A clinician commented that ‘the parents own; the kids live with their parents. Question doesn’t make sense.’

            Comments about CSR Question Nine:  For Question 9 (regarding general safety), consumers’ comments indicated that mental health issues, rather than environmental factors, primarily determined their feelings of safety.  One parent added an explanation to response option three to indicate that the child is fearful of being teased or picked on.  Several consumers noted feeling fearful or paranoid as a result of their symptoms or their own actions, acknowledging that there were no identifiable external threats.  Several clinicians noted that consumers felt safe and were aware of suffering from paranoia.  Only one consumer referred to environmental factors, stating ‘It’s hard to feel safe when hearing about community violence that may not directly affect me.’

            Comments about CSR Question Ten:  One clinician had several comments for Question 10 (regarding involvement with police, court, jail) about the redundancy of ‘legal issues pending’ and ‘felony charges pending’, and the need to clarify terms, such as ‘extreme impact’ and ‘non-lock-up facility.’  Another clinician commented that ‘non-criminal involvement’ was unclear terminology.  One clinician added the comment ‘unknown, if any’ regarding the consumer’s legal involvement.

 

            Demographic Questionnaire (Adult, Child and Families)

Of the consumers who completed a Demographic Questionnaire and MHSIP, 95.5% returned them directly to ACSES via mail.  The majority of consumers reported that they understood the questions (60.1%) and found them clear (61.5%).  Only two consumers stated that the questionnaire was too long.

Appendix Two provides comments on the adult version of the Demographic Questionnaire; Appendix Three provides comments on the youth version.  Many consumers took the opportunity to express appreciation for staff and the services they receive, as well as to identify some problem areas.  Comments from a number of consumers suggested the need to ask open-ended questions regarding how they felt about the quality of the services.  Suggestions were as follows:

·        Do you need more help?

·        Are you happy with your counselor or doctor?

·        Are you happy with the programs?

·        How are the meds working?

·        If you don’t like your counselor, what are your options?

·        Can you choose providers?

·        Does the system adequately help you completely control your mental illness?

One consumer wondered if ‘… this [is] a test of our competence?’  Another consumer stated ‘the purpose isn’t clear…’  A few consumers noted that the questionnaire did not ask everything that they thought was important about mental health services and that questions should be added, although they did not elaborate.

            For each of two children receiving services, two respondents each completed a questionnaire (a biological parent and an adoptive parent for one child; a biological parent and a foster parent for the other).  On the question asking about a guardian, conservator, or payee, several consumers wrote in the name of the person instead of selecting the person’s role.

            Several consumers selected or wrote in more than one response for the question about ethnicity.  When this occurred, the minority ethnicity was recorded in the data tables.  One clinician recommended using the new United States Census Bureau categories for the ethnicity question.

            For the question regarding services, one clinician noted that ‘housing, transportation, employment and advocacy services are not applicable’ in rural locations.  Several clinicians noted that consumers did not know what ‘advocacy’ meant.  Some consumers did not realize that group therapy or group counseling was included in the ‘therapy/counseling’ choice, as indicated by their writing the names of specific treatment groups in the ‘Other’ category.  A couple of consumers selected ‘Other’ and wrote in comments such as ‘new client’ or ‘just started’.

            The question concerning frequency of services received the largest number of write-in consumer responses, suggesting the need to define the answer options more clearly.  One consumer selected ‘daily’ and also wrote ‘occasionally’ in the ‘Other’ category.  Consumers wrote a variety of responses in the ‘Other’ category (e.g., ‘for years’, ‘varies’, ‘3-5 days/week’, ‘3x a week’, ‘4x a week’, ‘rarely’, ‘5 days/week’).  One consumer selected three responses, commenting that ‘some services are monthly, others may be weekly – to lump them together makes it more difficult to answer.’

 

            MHSIP (Adult)

Over three-fourths of the consumers (77.4%) found the questions understandable and clear, and 61.9% found them inclusive of the issues they thought were important to ask about mental health services.  Appendix Four provides the comments made by consumers in response to the adult version of the MHSIP.  A few consumers responded that the questionnaire did not ask everything that is important about mental health services, but did not elaborate on what was missing.  Of note were consumer comments regarding the need to ask about other agency staff and about billing techniques.  Only three respondents felt that the questionnaire was too long.  One consumer asked ‘Will I get the outcome of this survey?’ and wondered ‘if this survey helps in funding.’  Another consumer suggested adding a question about ‘What programs or areas of your life would you like more help in…?’  Another consumer commented ‘This questionnaire is a good tool if your ears and hearts are open and compassionate.’

            One clinician reported having to continue helping many consumers understand the questions and to clarify and simplify the wording.  This clinician reported that ‘many of the participants requested assistance with the Demographic and MHSIP survey questionnaires which were intended for independent completion from the consumers.  Keeping this in mind, I think it is important to plan the wording carefully to encompass the wide range of education barriers that many people struggle with so that, in the event this form is completed independently, it will be easy to follow and answer.’  One clinician reported that many of the consumers read the word ‘options’ in Question 2 as ‘opinion’ and needed assistance with the question; the suggestion was to change the wording to ‘choice’ or ‘preference.’  Several consumers noted that Question 17 was difficult to understand and unclear, with one consumer commenting that ‘both therapist and I decided treatment goals’ and another ‘difficult to know if you want the end results or the actual therapy/participation.’  Question 19 was also identified as problematic for several consumers, but no elaboration was provided.

 

MHSIP (Youth and Families)

The majority of respondents (70.6%) thought the questionnaires were understandable, clear, and inclusive of the issues that were important to ask about mental health services.  None reported that the questionnaires were too long.  Appendix Five provides consumers’ comments in response to the Youth Services Survey for Families; Appendix Six provides consumers’ comments in response to the Youth Services Survey.  Few comments were provided by consumers, however.  One consumer suggested adding the question ‘what kind of services do you feel were lacking?’  One consumer noted the ‘high turnover in manpower’ and having been ‘treated disparately’ at an agency.  One youth consumer selected N/A for 18 of the 26 responses.  Another youth consumer selected two responses for the question asking if staff were sensitive to cultural/ethnic background, writing in the comment ‘1 person’ for Agree and ‘all others’ for Disagree.

 

Recommendations

 

Two sets of recommendations will be offered.  The first set focuses on recommendations for revising the instruments; the second set focuses on recommendations for the next phase of pilot testing. 

 

Recommendations for Revisions

 

Assuming that DMHDD and the AMHB wish to revise the CSR, the Demographic Questionnaires, and the MHSIPs, a number of recommendations for such revisions follow.  These recommendations were developed from the Primary Phase feedback provided by consumers and clinicians.

 

Client Status Review

·        Develop a separate questionnaire that is age-appropriate for child/youth consumers

·        Question 1:  Answer 5:  change to ‘Rarely active’

·        Question 2: Answer 1:  change to ‘Rarely’

Answer 5:  change to ‘Almost always’

·        Question 3:  Reword parenthetical description to read “Does not include dental and eye care, and is defined as care received from your regular, local, or visiting health care provider”

·        Question 4: Specify in the question whether health care refers to mental or physical health, or both

Answer 1:  change ‘Fully insured’ to ‘Free’

Answer 2:  change ‘Well-insured’ to ‘Good insurance’; add ‘, such as Medicaid/Medicare/insurance co-pay’ to end of parenthetical sentence

·        Question 6:  Given considerable missing data on this item, and clinician and consumer criticisms, consider a major rewrite.  One possibility is to rewrite the question as a more general query about substance use/abuse that highlights the need for further evaluation, rather than using a standardized assessment tool designed to assess degree of substance abuse

·        Question 8:

Answer 1:  delete ‘or I chose to live with others’

Answer 2:  delete ‘family or others’

Answer 4:  change ‘temporary’ to ‘transitional’ and add ‘temporarily’ to ‘staying with family/friends’

·        Question 10:

Answer 3:  change ‘legal issues now pending’ to ‘misdemeanor charges pending or conviction’

Answer 4:  after ‘pending’, add ‘or conviction’; delete ‘or extreme impact’; change ‘contested divorce, contested custody issues’ to ‘contested divorce and/or custody issues’

Answer 5:  change ‘non-lock-up facility’ to ‘half-way house’; add ‘Title 12’ before ‘mandatory’

 

Demographic Questionnaire (Adult Services)

·        For all versions, incorporate the questions at the end of the MHSIP

·        Question 2a:  Change ‘If yes,’ to check boxes for guardian, payee, conservator

·        Question 3:

Change stem to read:  ‘Which one of …’

Use new US census ethnic categories

·        Question 5, answer 1 – add ‘(includes individual & group)’

·        Question 8, answer 1 – after ‘Daily’, add ‘(5-7 days/week)’

 

Demographic Questionnaire (Child & Family Services)

·        Completed by:  change ‘biological parent’ to ‘biological/adoptive parent’

·        Question 2:  change stem to read ‘Which one of …’

·        Question 6:  after stem, add ‘(Please select only one.)’

·        Question 7, answer 1 – add ‘(includes individual & group)’

·        Question 10, answer 1 – after ‘Daily’, add ‘(5-7 days/week)’

 

Adult MHSIP

·        Place demographic questions at the end of the MHSIP questions, as in the Youth & Family MHSIPs

·        Include open-ended questions in the Adult version:

What has been the most helpful thing about the services you have received over the last 6 months?

What would improve services here?

 

Youth MHSIP

·        Use the revised 21-item versions of the Youth Services Survey and Youth Services Survey for Families

 

Recommendations for the Next Phase of Pilot Work

 

For any future investigations into the utility and psychometric characteristics of these instruments, we make the following recommendations:

 

1)      The Oversight Committee may be interested in gathering feedback from staff members at mental health agencies around the state via focus groups and key informant interviews to identify problem areas and possible solutions that could help further refine the performance measures instruments for youth and rural consumers.  Rural providers appear to perceive the instruments as less applicable to their clientele than urban providers do, which has likely contributed to rural providers’ resistance to participating in this pilot project and may also lead to resistance to using the measures in the future.  In this regard, the rural providers’ suggestion to develop alternate means of measuring outcomes in rural areas appears particularly pertinent.

 

2)      Using the same focus group and key informant interview formats suggested in item 1), it would be helpful to gather feedback from child and youth mental health care providers before beginning construction of a child/youth version of the CSR.  Merely adapting the current CSR downward in age may not fully meet the needs and concerns of these providers, as completely different topics may need to be assessed to measure outcomes among a younger consumer population.

 

3)      To gain further insight into the current questionnaires and any newly developed questionnaires (such as a child or rural version), administration of the instruments should be repeated at a different set of agencies (or at different programs within the agencies that participated this time).  This re-testing phase is particularly important for determining the utility of new versions (youth, rural) in rural areas, with ethnically diverse populations, and with child and adolescent populations.  Stratified sampling (by primary diagnosis, ethnicity, gender, and rural/urban residence) may be necessary to obtain definitive answers about the instruments’ utility across geographic locations, ethnic backgrounds, and types of clientele.  Such stratified sampling would require extensive cooperation by agencies and may be difficult to realize.

 

4)      This phase ideally would allow sufficient time to include a trial of reporting collected data back to clinical staff, to let them see what types of summary reports can be expected from the data collection.  Such feedback would serve two purposes.  First, it would demonstrate to clinicians that useful data come back to the agency and do not just disappear into a “black hole.”  Second, such reporting back would allow for double-checking whether the data are indeed useful in meeting their purpose of improving services: clinicians could review the data report to ascertain whether it would, in fact, help them improve the services they are currently providing to their consumers.

 

For future investigations into the statewide implementation of outcome measures, we make the following recommendations:

 

5)      To facilitate staff participation, it is essential that clinicians understand how the data collected will assist in improving services and that the ultimate goal is not to serve as a tool to “punish” agencies and staff members.  To this end, it may be helpful to develop educational materials regarding these instruments that clearly explain: a) the purpose of the measures, b) who will use the data, c) how the data will be disseminated to State offices, agencies, consumers, and clinical staff, and d) how the data will contribute to improved quality of services.

 

6)      It appears necessary to collect more input from direct service providers about possible barriers, resistances, and solutions related to the actual process of gathering and reporting outcome data (independent of what instruments are being used).  It appears that rural and urban staff members had different concerns and that different procedures (and perhaps even instruments) are needed to accommodate rural versus urban settings.  To gather this input, agency staff and consumers could meet in (separate) focus groups to answer specific questions about the process of gathering and using outcome data.  These focus groups could be combined with the focus groups mentioned in item 1) above.

 

7)      Given concerns and the lack of clarity expressed by clinicians and consumers about the purpose of these instruments, it may help to use the terminology of “outcome” rather than “performance” measures.