When I was studying for my doctoral exams, I was repeatedly asked by my revered adviser when I thought I'd be "ready to write the exams." My realistically paranoid reply was, invariably, "Whenever everybody on the committee agrees that I'm going to pass those exams." I refused to be tied down to a timetable, because I had seen people in the snake-pit that was my graduate school get screwed when they either ignored the indirect cautions of their advisers or, even worse, were never warned at all by negligent advisers who failed to do their jobs.
The corollary was that I met regularly with each of the three musicologists on my exam committee for a solid year in advance of the exams. I was blessed beyond measure that Peter Burkholder, Tom Mathiesen, and Austin Caswell were willing to do this--I know of schools where PhD advisers refuse to meet with exam candidates, and simply expect them to "know everything" on the day they walk in the door (are you listening, A. Peter Brown?!?)--but fortunately my three would meet.
And when we did, I would always open the conversation by saying "OK, here is what I've been working on for the past [two/three] [weeks/months/etc]. This is where I expect to be [two/three] [weeks/months] from now. Are you satisfied with the direction, scope, and coverage I am creating?" and I would wait for a confirmation. The unspoken but nevertheless implicit followup statement was "...because if you as the adviser aren't happy with my progress, for Christ's sake tell me now!" I trusted those three men completely, and I knew they wouldn't screw me, but I had seen it happen to other people and I wanted to be sure to give them regular, explicit opportunity to caution me or tell me to step it up. I was blessed beyond measure that they encouraged me to define the criteria, topics, and skills upon which my expertise would be assessed.
I was reminded of this recently, in the very (seemingly) different environment of Tenure & Promotion. We recently made a new tenure-track hire, who is working out very well: bright, energetic, personable, engaged with students, easy to get along with. However, his arrival has also provided us with a salutary opportunity to look at our "best practices" and their attendant presumptions--not necessarily to critique them, but because aspects of practice and presumption that we three who have been here longer may feel we "know" or "can trust" may not actually appear in any written, much less legally-binding, form.
In the world of academic musicology, criteria for tenure and promotion can vary wildly, but typically certain components can be presumed to operate: specifically, the expectation that the successful candidate for T&P will meet or exceed expectations in 3 areas, usually weighted something like this:
(I) Teaching (50%-70%),
(II) Research (or "Research & Creative Activity") (20%-30%),
(III) Service (10%-30%).
We'll leave until later in this post the impossibility of quantifying what are essentially impressionistic values ("is this journal article worth 0.75 per cent toward 'Research & Creative Activity', or only 0.7 per cent?"), and focus right now on the metrics: in other words, what phenomena (publications, student evaluations whether verbal or numeric, number of committees served upon, number of regional, national, or international conference presentations, original compositions, performances conducted, ad infinitum ad nauseam) are alleged to be so measured.
Part of what makes this process difficult is that there is simply not enough parity between one individual candidate and another, one department and another. What we regard as "meeting" or "exceeding" or "far-exceeding" expectations in any one or all three of these areas both contrasts with, and is measured by different metrics than, what counts in another division in our School, another department on our campus, or even a parallel-sized and -missioned musicology department at another university. So the idea that there is some sort of quantifiable formula which will permit precision and parity in assessing a colleague's performance against some kind of abstract external yardstick is ridiculous.
But, we work for suits and pencil-pushers, some of whom (especially at the level of the Legislature) are as near to functional illiteracy as makes, for this purpose, no difference. These are not people who are equipped to make actual, accurate, case-by-case critical assessments of performance. So we're forced to tart-up some kind of quantifiable measures, and to document them, so that both the process and the individual candidate are protected in the event of suit-driven repercussions.
Hence this process of, essentially, imposing a numerical framework on phenomena that are not objective.
All this is exacerbated, and yet made more urgent, for two other reasons, not usually articulated in the Operating Procedures, but which are in fact much more significant to the actual day-to-day health & well-being of our staff and our department:
(1) because faculty colleagues (new hires, especially) understandably want to be very sure they understand, and can work constructively toward, the goals articulated by the assessment tools. In other words, if a new hire's T&P are going to live or die on the basis of these artificially-quantified metrics, then that new hire is very understandably insistent upon having them spelled out--as s/he should be. That means we have to articulate the procedures and the tools of assessment, and that we must have confidence that, if push ever comes to shove, those tools are defensible, and would provide us the means to make arguments on behalf of a candidate. And the candidate needs to know that too.
(2) because, as is typical in a large state-school-housed conservatory, our faculty colleagues in the School are so massively overloaded, so hour-by-hour day-by-day week-by-week semester-by-semester overworked, that there's really no way that we can presume that our colleagues--that is, the people who will actually have the most direct vote on a T&P decision--will be adequately informed about the range, scope, and significance of what any particular candidate does. Typically, in our faculty (which is far more emotionally and interpersonally healthy than the snake-pit where Dharmonia and I did our grad work), the persons who are most aware of, and eloquent in expressing, the value of a candidate's contribution are that person's divisional colleagues. The winds & percussion people, for example, are most intimately aware of a wind or percussion teacher's contribution, because they work side-by-side in the same ensembles and teaching situations and divisional meetings (and are usually hallway neighbors as well); the conducting faculty will be next-best-informed, because those latter will deal with the teacher's own playing and the impact of his/her teaching in their ensembles (and will typically understand the nature of the national comparative standards for that role); the composition faculty may have some awareness, if they make use of the teacher or the teacher's students in premiering new works; the music education faculty may have some (small) awareness, because in our music-education-centric student body, most of the teacher's studio students will actually be music education students--a notoriously gossipy bunch...
But the divisions which are least likely to really have the full measure of the breadth, depth, and quality of that hypothetical winds or percussion teacher's work are those same divisions which engage his/her contributions least immediately or directly--namely us, the music theory and/or musicology faculty. We on the academic side see the people on the studio and ensembles side, we chat with them in faculty meetings, we attend their concerts as much as we humanly can (pet peeve: academic faculty who don't go to studio/ensemble concerts; studio/ensemble faculty who don't even go to each others' concerts), but the reality is that we do not, day-by-day week-by-week semester-by-semester "In the Trenches," have much time or opportunity for direct experience of that person's contribution.
The converse is also true. Especially in the case of the theorists and musicologists, the vast majority of whose "Research & Creative Activity" actually occurs in venues and/or locations far distant from the local, there is neither time nor opportunity for studio/ensemble faculty to really be up-to-date (except if mandated by committee duty) with the contributions of the theorists and musicologists. Even in the case of a studio/ensemble faculty like our own, who in contrast to some are actually bright, intellectually-curious, well-read people, absent regular and convenient opportunity to work side-by-side with us, there's little chance they will actually know the breadth, depth, and quality of what we do.
I've blogged before about the legitimate need on the part of academic scholars to articulate, promote, and cross-fertilize their own research & creative activity to wider on- and off-campus populations--in effect, to do "audience development" for our fields of academic inquiry--but here, I want to address a complementary insight I was finally able to articulate, in a staff meeting, to our new hire:
He was expressing understandable concern at the remarkably sparse, loose (can you say "gnomic"?) language in which the metrics for assessment are laid out in our OPs. Although I had several times in the past, during similar conversations, expressed to him my sense that he truly had "nothing to worry about," and that the most important arbiters of his performance were going to be the people who'd picked him first-in-the-search and who worked side-by-side with him inside the Musicology division, I hadn't quite found a way to articulate these reassurances in terms that addressed his legitimate concerns.
Until finally he said "well, nobody has really told me exactly 'how much is enough'" (in terms of numbers of articles, conference papers, books published, or grant dollars raised in a given assessment period). And he was right--there are departments or campuses or disciplines which will say "well, don't even think about going up for tenure until you have 'The Book' published or 'three CDs completed' or 'X hundred-thousand $$ raised'"--but such is not the case in our division or, indeed, within our School of Music. So in the absence of those (in my opinion) dumbass arbitrary metrics of accomplishment--as a result of which a lot of pointless books or vanity CDs or wasted dollars are created or expended--what should this colleague use as the benchmark against which to shape his own performance?
He was understandably concerned at the lack of externally-imposed metrics. And a light-bulb went on in my head: I finally realized why that lack has not yet been--nor do I expect it to be--an Achilles heel in our peoples' timely progress toward T&P. And the insight was this:
The lack of external metrics is not a handicap--it is an opportunity. In an environment of reasonably sound collegiality, our outside-Musicology colleagues will trust our internal assessment: if we say, in the T&P discussions which follow our internal report on the candidate, "This person is doing a job that exceeds expectations," then--at our school, with its admittedly healthy inter-faculty relations--our extra-Musicology colleagues will believe us. In the absence of anyone outside our division having the time, inclination, or expertise to inform themselves at length and in detail about Musicology's metrics, we can define our own. This lack of external definition--the "looseness" which was making our new hire nervous--is a huge channel of opportunity if it is perceived and seized with the right strategic understanding.
When I was hired, I was asked in my statements of educational, research, and service philosophies to define my own research & teaching. And, thank the Universe, I understood at that very early stage that this was an opportunity--that, in the absence of an outside arbiter imposing a definition on what was "enough" or "good" research, I could do it myself. And, that once so defined, the only criteria of performance to which I could be held were of my own definition.
So I started the statements by saying, point-blank, "My research occurs at the nexus of history and performance practice. I am a scholar of performance, across chronological and geographical distance. Thus, it is essential that I engage in performance, analysis of performance, and the pedagogy of performance, in order to do my research."
And, BANG! In three sentences--a kind of cognitive jiu-jitsu that was enormously significant in how I framed what I do to my colleagues--I had seized the opportunity afforded by the "looseness and vagary" of the existing OPs. In my first year on the job, in my first "Annual Report"--the principal candidate-supplied document on which the Director's (my boss's) annual assessment is based--I had defined what I do, in the single document which, permuted, referred back to, and read by colleagues and T&P committees and upper administrators, was going to be the principal metric against which my success was assessed.
In other words, precisely because there was no clear external definition, I could create an internal definition: one that felt right to me, one that I thought most accurately, concretely, and inclusively established the criteria for future assessments of my success. They cannot fault you for failing to do things you yourself did not include within the criteria which they then accepted. Conversely, if you are excelling at the criteria that you yourself took the trouble to define at the right strategic moment, and if those criteria have previously been read into the record, then they must conclude that you are [meeting/exceeding/far exceeding] expectations.
Seizing the moment, shaping the discourse, assigning our own metrics of assessment, and thinking smart and strategically about the kind of "audience outreach and development" that lets our colleagues actually understand the breadth, depth, and impact of what we do--all of this is enormously important in carving out functional, valued roles for ourselves in a changing academic world.
Tuesday, February 03, 2009
Day 18 (round III) "In the trenches" (random-things edition)
Posted by CJS at 4:15 PM
Labels: Trenches series