Citizen Advocacy Board Manual
Section 8


Citizen Advocacy Programme Evaluation

Original text written by Wayne Eliuk and Carolyn Bardwell-Wheeler, updated July 1999

Citizen Advocacy is the only advocacy form to have established, world-recognised evaluation standards. A process to guide and evaluate the work of Citizen Advocacy Programmes has been developed in ‘Standards for Citizen Advocacy Program Evaluation’ (CAPE). Member Programmes of the Citizen Advocacy NSW Association have agreed to have evaluations against CAPE standards every two to three years. While the work and cost involved in preparing for an evaluation may seem daunting, Programmes which regularly undergo the process are much more likely to establish and maintain strong practice. The evaluation process is also recognised as a valuable learning experience for the participants, so providing opportunities for people from Programmes to serve as team members is a useful contribution to the development of Citizen Advocacy.

The CAPE manual and process have proved equally applicable to Australian and American Programmes.

The Evaluation Instrument: CAPE

Since Wolf Wolfensberger conceptualised Citizen Advocacy in 1966, this relatively new helping form has grown to over 200 Citizen Advocacy offices in the United States, Canada, Australia, New Zealand and England. Historically, many Citizen Advocacy Programmes evolved their own particular variation of so-called ‘Citizen Advocacy’, and the original concept has often been misinterpreted and inappropriately implemented. As a result, some form or technique was needed to measure a Citizen Advocacy Programme’s adherence to essential Citizen Advocacy principles and practice.

The Manual

‘Standards for Citizen Advocacy Program Evaluation’ (CAPE), by John O’Brien and Wolf Wolfensberger, was designed to meet the need for an instrument which would provide a standard against which Programmes calling themselves Citizen Advocacy could be measured. The instrument translates all of the essential, and some of the more desirable, specific components of the Citizen Advocacy concept into observable and measurable variables. In effect, CAPE constitutes a partial blueprint for implementing Citizen Advocacy.

CAPE was developed over a period of several years. The first edition, edited by John O’Brien and Wolf Wolfensberger, was printed in 1978. The second and current edition is published by the Person to Person: Citizen Advocacy office in Syracuse, in conjunction with the Training Institute for Human Service Planning, Leadership and Change Agentry, directed by Wolf Wolfensberger.

CAPE Ratings

CAPE consists of 36 ratings, divided into three categories designated:

1. Adherence to Citizen Advocacy Principles

2. Citizen Advocacy Office Effectiveness

3. Program Continuity and Stability

The Adherence to Citizen Advocacy Principles cluster consists of twenty ratings, grouped under the following headings: Advocate Independence, Program Independence, Clarity of Staff Function, Balanced Orientation to Protégé Needs and Positive Interpretations of Handicapped People.

The Citizen Advocacy Office Effectiveness cluster consists of ten ratings, which measure seven key activities and the balancing of these activities. This rating cluster also looks at the sufficiency of the Citizen Advocacy staff in relation to the demands of their job(s). These ratings are as follows: Vision and Creativity of Protégé Recruitment, Advocate Recruitment, Advocate Orientation, Advocate-Protégé Matching, Follow-up and Support to Relationships, Ongoing Training, Advocate Associate Emphasis, Balance of Key Citizen Advocacy Activities, Encouragement of Advocate Involvement with Voluntary Associations, and Sufficiency of Citizen Advocacy Staff.

The Program Continuity and Stability cluster consists of six ratings, grouped under the headings of community leadership involvement and funding issues.

Each of the 36 ratings begins with an explanation of its nature, including why it is in CAPE. The rating then describes what evidence must be collected to make a rating assignment, and spells out a range of either four or five levels of quality. The rating levels are statements describing performance, ranging from the lowest level (‘major deficiencies in complying with the principle of the ratings’), through intermediate levels, to the highest level of ‘distinctly positive implementation of the principle presented by the rating’.

The CAPE Team

Though largely readable and relatively straightforward, CAPE is not designed or intended for use by individuals acting alone or without a good knowledge of Citizen Advocacy. CAPE is intended to be used for evaluations by a team of at least three ‘raters’ who are reasonably sophisticated regarding Citizen Advocacy principles and practice and who have had previous experience on CAPE or other similar evaluation teams. The Team Leader is responsible for the work of the Team, for giving the verbal feedback, and for the written report, although a Team member may write it. The Team Leader must therefore have a sound understanding of Citizen Advocacy practice and previous experience as a Team member. There is also usually a team member who is new to Citizen Advocacy and/or to CAPE evaluations, which provides an excellent means of training in Citizen Advocacy. However, it is essential that at least three people on a team have a strong background in Citizen Advocacy.

CAPE evaluations are demanding both on the Programme being assessed and on team members. A great deal of preparation is needed to ensure that the evaluation goes smoothly and that the team is able to gather sufficient information to use the CAPE instrument. Team members work hard and often under challenging circumstances: they frequently have to shift gears mentally, work in sub-teams with people they do not know well, and find their way around an unfamiliar community (sometimes back roads and country places!). However, CAPE team members need not be ‘professionals’; rather, they need a strong commitment to Citizen Advocacy and a willingness to engage with the demands of the evaluation process. The outcome is invariably a valuable learning experience for both the Programme being evaluated and those who participate in the evaluation.

The CAPE Process

The general format for CAPE evaluations is somewhat standardised. Before the evaluation begins, team members review representative documentation and study the CAPE manual. During the assessment, team members interview individuals who represent every aspect of the Citizen Advocacy Programme, including staff, board members, advocates, protégés, and other community members who are interested in and supportive of the endeavour. The files and office documents are usually reviewed at the office. Once the relevant information has been collected, the team meets as a whole and conducts what is called ‘conciliation’, a process guided by the Team Leader, who leads the team’s analysis of each rating.

This requires extensive sharing of the relevant information, comparing the evidence against the criteria of each rating, and then selecting the rating level that most accurately characterises the performance of the Citizen Advocacy Programme. The analysis continues until the team reaches consensus on the level of performance for each rating.

Besides measuring the Citizen Advocacy office against the 36 rating criteria of CAPE, a team also engages in an analysis of issues, especially those considered to be ‘overriding’ or ‘major’ issues. Such issues may exceed the parameters of the specific CAPE ratings, or conceivably even of CAPE itself.

All CAPE evaluations adhere to two crucial guidelines. The first is called the ‘what, not why’ rule. Evidence is always considered in terms of what the particular Citizen Advocacy office is actually doing. The countless ‘whys’ regarding Programme practice are deemed irrelevant when assigning rating levels, even though they must be acknowledged by the team and understood in the context of the overall Programme. When a team is working towards consensus on individual ratings, it considers only the reality of prevailing practices.

Evaluating the Citizen Advocacy Office Role

The second major guideline is that the fundamental perspective upon which CAPE hinges is the welfare of individual protégés. While advocates and the community commonly derive all sorts of benefits from Citizen Advocacy, the most immediate goal of the match should be the benefit to the protégé. Evaluation teams are not evaluating relationships per se, but rather the efforts and structures of the Programme to promote advocate identification and action on behalf of protégés.

Reporting Back

Once a team has completed its analysis, it prepares its recommendations and feedback. Sometimes feedback is given in an oral presentation, but there is a written report as well. The outcome of an effective, rigorous CAPE evaluation by a skilled Team is a report which should form the basis for change and development in the Programme for the next two to three years.

While the content of the report depends on the Team as a whole, the Programme Board may negotiate beforehand with the Team Leader about the format and any particular issues which the Board would especially like considered.

The Programme should expect the Team to answer three questions:

- Is the Programme doing Citizen Advocacy?

- Is the Programme doing effective Citizen Advocacy?

- Will the Programme survive and thrive?

In addition, the Report should provide an in-depth analysis of the work of the Programme and include recommendations for improvement.

It is recommended that, if the Team Leader does not write the Report, then the Team member selected to do so has writing skills, a deep understanding of Citizen Advocacy, and experience in CAPE evaluations. The Team Leader, who is responsible for the quality and content of the final report, should closely mentor the Report writer.

Using the Report

Because obtaining the report through the CAPE process costs the Programme and the people involved considerable time, money, emotion and personal effort, it is important to make good use of it. The report provides the Programme with an opportunity to consider, in detail, the things it does well and the things needing improvement. By strengthening its practice, the Programme is better able to fulfil its mission of protecting people with intellectual disability through Citizen Advocacy.