Please reform accreditation

The annual Deans and Directors meeting at ALISE this year proved refreshingly robust. We had but one real topic: the accreditation process pursued by the ALA Committee on Accreditation. There is a proposal afoot to reduce the number of standards from six to five. This alone is worthy of celebration, since ALA follows the laughable requirement of having one person per standard when forming site teams to visit programs. There is almost no justification for this but tradition, and consequently site teams have arrived at schools outnumbering the tenure-track faculty. Since no one seems to be laughing, least of all those who foot the bill for this extravagance, merging a couple of standards would at least have one tangible benefit for programs.

That said, the discussion quickly moved on from wordsmithing the standards to challenging the whole process, and it was not just a minority of folks who pushed for reform. Speaker after speaker complained of the persistent disconnect between the site team's review and the final decisions of the politburo committee, the slavish insistence on over-documenting learning outcomes, the constant demands for reports, reports, and even more reports (usually about very little), the credentials of those conducting the review, and in some cases the embarrassment teams cause programs through their obvious lack of familiarity with university standards when dealing with upper administrations. Sadly, there was also a feeling in the room that one must be careful raising objections lest one's program face retribution for speaking out (hence my temperate comments here). It really is hard to imagine that anyone believes this is a voluntary, collegial process anymore. Does it surprise you that only now, after years of campaigning, the deans and directors will actually have a representative at the table when a new committee (we need more!) is formed to consider these problems?

Despite what one imagines, deans and directors like to do more than just complain (yes, it's hard to resist the line that we leave that to the faculty; rimshot, please!), so we actually considered some alternatives. These included reducing the number and length of reports between reviews, using existing statistical data rather than forcing repeated submissions, lengthening the time between review visits, and involving more faculty in the final review committee. All sensible options, but I'd like to suggest we go further.

Accreditation, for all its flaws, is essentially about quality control, but somewhere along the line the emphasis on quality has taken a backseat to control. There are many reasons for this that I won't rehash here, but whatever the motivations, the results are obvious. Programs are expected to comply with language, measures, and indices that reveal little about quality and more about allegiance. Take, for example, the rather important matter of graduate placement. It is certainly used by potential students, it might reasonably be interpreted as a measure of how well a program prepares new professionals for their careers, and it is based on the input of external employers, yet it is not mentioned specifically in the standards. One could meet all the requirements for accreditation, articulating the specific learning outcomes for every course, and still reveal nothing about the real job prospects and advancement of the students who come for this education. Is it any wonder we hear so many accounts of disgruntled, poorly paid graduates who feel their Master's degree was not quite all it promised to be?

How hard could it be to identify and document indices of quality? I would suggest there are some basic measures we can all agree offer clues as to a program's overall quality:

  • Faculty size and rank
  • Graduation rate
  • Employment rate of graduates
  • Budget and resources
  • Curricular coverage

Surely there are others, but let's consider these for a moment. If a program has, say, 12 faculty, all on the tenure track, this tells us something. If it has five, one of whom is a part-timer and only two of whom are on the tenure track, this tells us something else. No, it's not automatically the case that the first is to be accredited and the second not, but it does give us a real data point. Having sufficient faculty is important. Having those faculty on the tenure track tells us about the university in which the program exists and how it views the program. And having these same faculty deliver the courses that make up the program tells us something more. Similarly with budget. These are hard numbers that obviously vary across regions and universities, but there is surely a minimum level of secure, recurring funding that a faculty of a certain size must have to deliver a graduate program. We can make the same estimates for space or technical infrastructure: a basic threshold at which we can be confident a program really is able to exist and deliver instruction. And yes, let's measure employment rate. It is not a perfect measure (there are none), but if your graduates are in demand and earning decent salaries over time, the professional community must be satisfied to some extent with your program's efforts. If you cannot demonstrate this, then perhaps what you are providing is not quite up to professional standards.
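To make the idea concrete, here is a minimal sketch of what such an agreed, data-driven report and threshold check might look like if anyone ever formalized it. Every field name and cutoff below is invented purely for illustration; nothing here reflects any actual ALA or COA requirement.

```python
# A purely hypothetical sketch of a data-driven program report.
# All field names and thresholds are invented for illustration only.
from dataclasses import dataclass

@dataclass
class ProgramReport:
    tenure_track_faculty: int   # full-time, tenure-track headcount
    total_faculty: int          # includes adjuncts and part-timers
    graduation_rate: float      # fraction of entrants who finish
    employment_rate: float      # graduates employed in-field within a year
    recurring_budget: float     # secure annual funding, in dollars

# Illustrative minimums a mature program might be asked to meet.
THRESHOLDS = {
    "tenure_track_faculty": 6,
    "graduation_rate": 0.80,
    "employment_rate": 0.75,
    "recurring_budget": 1_500_000,
}

def flags(report: ProgramReport) -> list[str]:
    """Return the measures that fall below the agreed minimums."""
    return [
        measure
        for measure, minimum in THRESHOLDS.items()
        if getattr(report, measure) < minimum
    ]

if __name__ == "__main__":
    program = ProgramReport(
        tenure_track_faculty=5,
        total_faculty=9,
        graduation_rate=0.85,
        employment_rate=0.70,
        recurring_budget=1_200_000,
    )
    for measure in flags(program):
        print(f"Below threshold: {measure}")
```

The point is not that a script replaces judgment; a reviewer would weigh such flags rather than apply them mechanically, which is exactly the flexibility argued for below.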

You can see where this is going. I would allow small schools, or those just starting up, to make a case for themselves by emphasizing some measures over others. Mature programs should be able to demonstrate relatively objectively how they are resourced, what faculty standards they maintain, how they deliver the program, and where their graduates go upon completion. Such reporting need not be onerous. Certainly there is room for a narrative report on the program's emphasis, mission, plans, and general philosophy, but this would be wrapped around hard data of the kind outlined above and used to justify the claims to quality. There is surely a form of Turing test for programs we could apply here: answer the questions and let an impartial evaluator determine whether you are running a solid program or a diploma mill.

The second part of this would be to revisit the mechanisms of review. If a program is small or new, unable to document key aspects such as placement or curricular coverage by appropriate faculty, or if its budget and resources seem to prevent appropriate instructional delivery, then by all means send in a review team and make specific recommendations. If a program decides to revisit its mission, is merged, or otherwise undergoes a major change of direction, then send in a review team. But for most programs, once established and able to continually document their capabilities with data, let them do so by reporting every few years against this agreed data set. This need not be difficult. If enrolments are healthy, faculty are strong and actively delivering the program rather than leaving it to adjuncts, and graduates report healthy employment prospects in relevant professional roles, then the program is likely doing something right. There are certainly more data points and explanations to add, but these basic measures of quality are essential; without them, something likely needs attention.

Most schools are already overburdened by compliance reporting and university-wide accreditation processes. Adding more to the process really does not add value. A shift to data-driven reporting of agreed quality indices (and can anyone seriously argue against graduate employment as one such index?) would allow for flexibility in review rather than foisting a one-size-fits-all cycle on every program or letting increasingly obsessive attention to secondary processes dominate the review. Some programs would have a site visit; some would not. Some would be required to justify developments; others would be able to continue as they are if the data made their case. Schools would, in some sense, be able to tailor reviews to best fit their needs, and we might move toward that more collegial, voluntary process of quality control that we are told is at the heart of accreditation. That it might also shake out a few of the programs that are failing to deliver anything of real value would be a bonus, but I am sure none of us knows any of those.
