UNDERSTANDING ACCREDITED PROGRAMMES

   

Select an accredited programme and critically discuss the following areas of practice or underpinning theoretical frameworks.

6. The evidence base for accredited programmes (1000 words)

According to a National Probation Service report (Setting the Pace: how the National Probation Service has delivered a New Choreography, 2004), between 58% and 69% of offenders can be seen to have problems that can be ‘challenged and changed by successfully completing an accredited programme’ (p. 10). These programmes offer a wide range of resources to combat certain types of offending behaviour; they are not only evidence based but are run consistently across the whole of the country. The term ‘accredited’ means that the programme has been judged to be successful and capable of being replicated and delivered on a national basis. For the purpose of this essay one will look at the accredited programme of Enhanced Thinking Skills (ETS), or as it has more recently become known, the General Offending Programme (GOP).

The evidence for the implementation of programmes as interventions can be traced through the ‘what works’ movement, the foundation on which programmes have been built and against which they are measured. The ‘what works’ movement emerged in reaction to the famous (and wrongly attributed) ‘nothing works’ claim drawn from Martinson’s 1974 article ‘What works?’ Looking at Martinson’s claims, it is clear that he felt there was no firm evidence that any type of rehabilitative intervention had achieved any measurable success in reducing re-offending rates.

He added that the results that were available were ambiguous and difficult to interpret. At the same time further research (Lipton 1975; Brody 1976) suggested that there were inconsistencies in the way effectiveness research was carried out and that it was ‘plagued by poor methodology’ (McGuire, p. 5). ‘What works’ can therefore be seen as a reaction to the ‘nothing works’ ethos, and as an opportunity for the Probation Service to look into the effectiveness of interventions of any type.

By the end of the 1980s the argument as to whether rehabilitation could reduce re-offending continued apace. However, the mid 1980s had also yielded a new statistical tool that could be used in reviewing the effectiveness of interventions: meta-analysis. This is, in effect, the pooling of a large collection of results from numerous studies into a single combined analysis. Glass (1976, p. 3) claimed that it offered a…

     “Rigorous alternative to the casual, narrative discussions of research studies which typify our attempts to make sense of the rapidly expanding research literature.”

The tool was used to great effect by a group of Canadian practitioners (Andrews, Zinger, Hoge, Bonta, Gendreau & Cullen 1990), who argued strongly against the ‘nothing works’ claims and insisted they had developed and implemented a ‘cognitive behavioural’ programme that was successful at reducing reconviction rates. Their study of over 40,000 offenders was one of eight major meta-analytical studies, and showed significant positive effects in terms of reducing re-offending.
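The core mechanics of meta-analysis can be illustrated with a minimal sketch of a simple fixed-effect model: each study contributes an effect size weighted by the inverse of its variance, so larger and more precise studies count for more in the pooled estimate. The figures below are invented purely for illustration and are not drawn from any of the studies cited.

```python
def pooled_effect(effects, variances):
    """Combine per-study effect sizes into one inverse-variance-weighted estimate."""
    weights = [1.0 / v for v in variances]
    return sum(w * e for w, e in zip(weights, effects)) / sum(weights)

# Three hypothetical studies of a rehabilitative intervention
# (a positive effect size here means a reduction in re-offending).
effects = [0.10, 0.25, 0.15]
variances = [0.01, 0.04, 0.02]

print(round(pooled_effect(effects, variances), 3))  # 0.136
```

The point of the weighting is exactly the one Glass makes: rather than narratively eyeballing a pile of studies, each result is combined on a common numerical scale, with its reliability taken into account.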

Further work on this type of intervention was carried out by Ross and Fabiano (1985), who developed and implemented a programme entitled Reasoning and Rehabilitation. Its evaluation took place in 1988 and the results were encouraging: a reconviction rate of 18.1%, compared to 69.5% for those on ordinary probation. This led to the development of STOP (Straight Thinking on Probation) in Britain and a subsequent explosion of cognitive behavioural programmes (e.g. R&R; Raynor and Vanstone 1997) evident in the contemporary probation setting. So does this shift in ideological theory, and the subsequent ‘positive’ results, suggest that the evidence base is wholly justified for the implementation of accredited programmes such as ETS?

There are obvious flaws in the studies and in the subsequent implementation of programme-based interventions. However, before analysing such flaws one must first look at the discrepancies that exist in the meta-analytical methodology: the tool on which most of these primary studies have been based.

Primarily it is important to acknowledge the involvement of human judgement in the process, at both the beginning and the end. It is humans who choose which studies to include in a meta-analysis and it is humans who interpret the data at the end; humans who are fallible, biased and imperfect. An example of this fallibility can be drawn from an early meta-analytical study carried out by Whitehead and Lab (1989), who concluded that rehabilitative treatment had little impact on recidivism.

However, they failed to calculate the overall effect, which Lösel managed to do in 1993: he found the effect to be positive, which essentially reversed the original authors’ interpretation. Human involvement in this statistical measuring tool can therefore alter the findings of any meta-analytical study carried out, rendering the ‘evidence base’ unreliable.

Secondly, on reviewing the studies that have been carried out on the research and implementation of cognitive-based programmes, it is clear that only a minority of them were carried out in Britain, and most were in custodial settings (Mair 1997). Instead, most were carried out in America and Canada: two very different cultures from our own. Interpreting their findings and applying them to British society is therefore dangerous, and could lead to evidence-based practice being ineffective and unreliable.

Thirdly, the research evidence on What Works is based primarily on young white male adults in the USA. This leads us to the fact that What Works is culturally flawed and biased in terms of gender, age and ethnicity, and therefore not representative of the population as a whole. Does this mean that programmes are simply aimed at young white males and are a reflection of white concerns?

What we are left with are intervention programmes which are evidence based, and on that point there is no dispute. The big question surrounds the reliability and accuracy of this evidence.

The evidence is flawed in a number of ways; indeed, the whole What Works philosophy is flawed in terms of its bias. If its findings are based primarily on young white males who reside in the United States, how representative will those findings be, and how reliable, accurate and effective are they when applied to other cultures? The evidence base for accredited programmes is not as reliable as one first thought, and for a modern-day NPS that prides itself on its anti-discriminatory practice one could also surmise that it is a contradiction for it to base its work on a discriminatory evidence base.

7. Programme integrity (1000 words)

In terms of programme integrity this essay will not focus on the content of the programme, but rather on issues of management and of ensuring the integrity and quality of the programme. There are two aspects of integrity. The first is treatment integrity: the extent to which the programme is delivered as designed. The second is programme integrity: the maintenance and development of all the activities that support the delivery of the programme. Although both types of integrity are interlinked, this essay will focus on the latter.

According to Hollin, with ‘high levels of programme integrity programmes have a greater chance of success’ (1995, p. 207).

This statement is reinforced by the fact that, should programme integrity be non-existent, it would give rise to the ‘nothing works’ ethos, cause disenchantment among those practitioners involved with programmes and, as a result, produce a poor service. Programme integrity embraces the use of proper assessments, effective monitoring and evaluation, staff who are trained and properly supervised, effective communication systems and, above all, good management. If these areas are maintained and developed within accredited programmes then programme integrity will remain high, along with the possibility of success.

So where do the problems surrounding programme integrity lie? Primarily, there is increasing evidence to suggest a continuing resistance to the placing of offenders on programmes (Joint Accreditation Panel 2003-4, p. 5), which among other factors has been linked to programme integrity. The National Probation Directorate has set targets for programme admissions and completions, so on this basis the integrity of accredited programmes must remain a priority for the Directorate. Yet, if programme integrity can and should be measured empirically, the statistics do the integrity of programmes few favours.

For example, in England and Wales in 2002-3 a target of 9,000 completions was set for programmes, yet only 5,909 completions were achieved. To achieve these ‘completion rates’, which the NPS is keen to do, a vast number of offenders need to be admitted to programmes such as ETS.

The referrals in the same year amounted to 27,564, and of those 19,150 were given a programme as a condition of an order. So although 66% of the completion target was achieved, of the 19,150 given a programme only 5,909, or 30.9%, completed it. These statistics indicate that fewer than a third of those on programmes complete the course. What does this say for integrity? Is programme integrity the main reason for a poor completion rate, or have other factors caused a poor completion rate, with integrity suffering as a consequence?
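The percentages above follow directly from the figures quoted in the text (the target, the number given a programme as a condition of an order, and the completions), as a quick check confirms:

```python
# Completion figures for England and Wales, 2002-3, as quoted in the text above.
target = 9000        # national completions target
on_order = 19150     # offenders given a programme as a condition of an order
completions = 5909   # actual completions achieved

pct_of_target = 100 * completions / target      # share of the national target met
pct_completed = 100 * completions / on_order    # completion rate among those ordered

print(round(pct_of_target, 1))   # 65.7 (the "66%" of the target)
print(round(pct_completed, 1))   # 30.9, i.e. fewer than a third complete
```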

However, in the Setting the Pace manual (2004) the NPS has produced examples of how programme integrity has risen in certain areas due to an increase in completions:

       “….our team had significant problems with offenders failing to attend….we have worked hard to resolve these issues by tightening links with case management, pre-motivational work is completed..tutors building close working relationships with offenders….” (p. 11)

Despite being fairly limited and, when set against the national statistics recorded earlier, rare to say the least, these are examples of how programme integrity can be built and maintained, and of how, as a consequence, success measured by the number of completions can be achieved.

So far we have seen the impact of statistics on the integrity of a programme, and it would be fair to say that, had there been a 99% completion rate, one could surmise that the integrity of the programmes was extremely high; that is to say, that the management and running of a specific programme had resulted in overwhelming success. The question posed at this point is: ‘Other than statistical data, are there alternative ways to measure programme integrity?’

The first avenue to be explored is observer recording, whereby a trained observer watches a delivery and assesses its content and whether it achieves its learning outcomes, among other objectives. With the ETS programme each session is recorded, and selections are viewed by managers, practitioners, trainees and even the government. The second avenue is the practitioner and client report. An example of this is the ‘Achieving Programme Integrity’ checklist used by practitioners in the STOP initiative introduced in Mid Glamorgan in the early 1990s. Clients, for their part, have the opportunity to give feedback through evaluation. These monitoring tools exist to identify threats to integrity and thereby aim to improve the integrity of a programme.

However, if simply measuring levels of programme integrity and consequently improving them were the whole answer, then maintaining integrity would not be a problem. There are, in fact, many problems associated with monitoring the integrity of a programme. The first of these is ‘organizational resistance.’

If an organization is not committed to a programme and its integrity, then even the most highly trained, highly skilled practitioner will have little impact. The second problem is client resistance, which may be identified through the completion figures discussed earlier in the essay. The third problem is practitioner resistance. As a deliverer of programmes such as ETS one must be committed to the principle that practice will be informed by the data gathered, and that most of one’s professional autonomy must be surrendered. If a practitioner is not devoted to, and unquestioning of, those two factors, then the effective delivery of that programme (treatment integrity) and the integrity of the programme as a whole are both severely threatened.

One would conclude by saying that the integrity of a programme is extremely important to its success. However, managing and monitoring integrity are extremely difficult to carry out. Of particular importance to a programme’s integrity are statistics, or more specifically completions, which is where programmes suffer. Essentially a programme is judged first and foremost by its completion rate.

In 2002-3 completion rates were evidently low across the country as a whole, and with the government ever more determined to cut crime there is increased pressure on programmes such as ETS to become more successful. There is therefore a danger that, in order to gain more completions, more offenders will be referred to these programmes simply to achieve government targets. The consequence: more drop-outs and a further loss of integrity for programme interventions.

 

References

 

Mair, G. (1997) ‘Community Penalties and Probation’, in The Oxford Handbook of Criminology. London.

McGuire, J. (ed.) (1996) What Works: Reducing Reoffending. Chichester, West Sussex: John Wiley & Sons.

Probation Journal: The Journal of Community and Criminal Justice (2004) Vol. 51(1). London: Sage Publications.

Setting the Pace: how the National Probation Service has delivered a New Choreography (2004). London: Home Office.

 

 
