Iterating on a Hiring Process (Part 2)

Never Underestimate the Lowly Spreadsheet

In Part 1, I wrote about treating the hiring process like a project. In that post, I mentioned creating tools and processes for evaluation. In this post, I’ll go into further detail regarding those processes.

Observed Problem

Group decision-making with multivariate inputs is a complicated social activity that tends to unduly favor quick-to-words people. Exhaustion and fatigue should not be why you finalize decisions. Can we create processes to improve collaborative decision-making?

We should also provide time for personal reflection and the development of ideas, then provide time and space to bring those perspectives forward.

This process echoes the Think-pair-share 🔍 collaborative teaching strategy.

The Four Spreadsheets of the Apocalypse

Here’s the Hiring Process Documentation and Toolkit Google Drive 🔍 folder. You can start with the README concerning Hiring Committee Toolkit document. Note the naming convention; it conforms to the naming recommendations I made in a previous post.

For the hiring process, I developed four spreadsheets.

  • Knowledge, Skills, and Abilities (KSAs 🔍) weighting
  • Resume / curriculum vitae (CV 🔍) screening
  • Question workshopping
  • Initial phone screening

Each spreadsheet had similar goals:

  • Enable individuals to contribute their evaluations asynchronously
  • Aggregate those evaluations for group discussion

One thing to consider: for the hiring process in which we piloted these tools, each committee member had an equal say in the decision. We operated through consensus.

Knowledge, Skills, and Abilities Weighting

I first developed the Establishing Weighted KSA spreadsheet.

The goal of this spreadsheet is to provide a space for each person to individually rank the KSAs and, from those individual ranks, establish an average weight for each KSA. Note, there is an assumption that each person’s ranking is given equal consideration; that is to say, Chris’s opinion about KSA weights carries the same weight as Pat’s. If this is not the case, make sure to discuss it with the hiring committee, as all later tools build on the KSA weights.

From the position description, we had a list of KSAs. We established the number of “points” to give each person. Each person would then allocate those points across the KSAs.

With each person’s points allocated, we reviewed the summary sheet (e.g., Weighted KSA). Part of the review was to tease out our differences in understanding.

For example, maybe Chris gave the self-directed KSA a weight of 2, and Lindsay gave it a 5. That’s a place for us to have a conversation. And maybe that conversation would surface a better shared understanding, which might lead to people shifting their points.

Or maybe they didn’t shift their points. In either case, the Average column reflects the group’s general sentiment regarding the relative importance of each KSA.
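
To make the mechanics concrete, here’s a minimal sketch of the averaging the summary sheet performs. The committee members, KSA names, point budget, and allocations below are all hypothetical.

```python
# A minimal sketch of the Average column on the summary sheet.
# Members, KSAs, the point budget, and allocations are hypothetical.
from statistics import mean

TOTAL_POINTS = 20  # the agreed-upon points each person allocates

allocations = {
    "Chris":   {"self-directed": 2, "communication": 8, "collaboration": 10},
    "Lindsay": {"self-directed": 5, "communication": 7, "collaboration": 8},
}

# Each person must spend exactly their point budget across the KSAs.
for person, points in allocations.items():
    assert sum(points.values()) == TOTAL_POINTS, f"{person} misallocated points"

# The Average column: every person's ranking carries equal weight.
ksas = next(iter(allocations.values())).keys()
average_weights = {ksa: mean(p[ksa] for p in allocations.values()) for ksa in ksas}
print(average_weights)
# => {'self-directed': 3.5, 'communication': 7.5, 'collaboration': 9.0}
```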

Resume / Curriculum Vitae Screening

With our KSAs salient, we went on to screen the applicants. In this round, the goal was to quickly evaluate the applications and find a cohort of applicants to advance to the first round of interviews.

We used the Pre Screen Rubric to facilitate this activity.

The purpose of this spreadsheet is to provide a tool to help screen applicants. Each hiring committee member comes with their own mental models for quick assessment; the goal is to give each member a place to record that quick assessment.

Each committee member has a sheet for their assessments. The Summary sheet is both the place to enter the candidate identifier (e.g. applicant number or “name”) as well as see the assessments in aggregate.

With this spreadsheet, each person would give an assessment of each candidate. We based these assessments solely on the cover letter and resume. In this case, we’re not yet explicitly using the weighted KSAs.

Once everyone completed their assessments, we reviewed the resulting averages. We allowed for discussion as we picked the cohort to advance. These discussions were along the lines of “Huh, I ranked candidate 3 this way, and you ranked them lower. What are you observing that I might be missing?”
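
For illustration, here’s a sketch of the aggregation the Summary sheet performs. The members, candidate identifiers, and ratings are hypothetical, assuming a 1–4 scale.

```python
# A sketch of the Summary sheet's aggregation for the pre-screen rubric.
# Members, candidate identifiers, and ratings are hypothetical (1-4 scale).
from statistics import mean

assessments = {
    "member_a": {"candidate-1": 3, "candidate-2": 4, "candidate-3": 2},
    "member_b": {"candidate-1": 3, "candidate-2": 3, "candidate-3": 4},
}

candidates = next(iter(assessments.values())).keys()
averages = {c: mean(sheet[c] for sheet in assessments.values()) for c in candidates}

# Review highest-to-lowest when discussing the cohort to advance.
for candidate, average in sorted(averages.items(), key=lambda kv: -kv[1]):
    print(candidate, average)
```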

Question Workshopping

With our first round of interviews looming, we spent time developing the questions we’d ask. In Part 1, I mentioned that we’d send these questions ahead of our initial conversation. We also expected a response before that conversation.

For that we used the Hiring Questions and KSAs spreadsheet.

The hiring committee members individually wrote questions for the initial interview conversation. Two members of the committee then paired up with the task of recommending five questions to the full committee. The pair mapped each question to one or more KSAs and drafted five questions that provided the best coverage of the desired KSAs.

The committee wrote up a lot of questions, and two of us pared those questions down. We then brought those questions back to the committee and further refined them.
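
We did this winnowing by hand, but the mapping-and-coverage step resembles a greedy selection. Purely as an illustration, here’s a sketch with hypothetical questions and KSA mappings:

```python
# An illustrative greedy selection of five questions for best KSA coverage.
# The questions and their KSA mappings are hypothetical; the committee did
# this step by hand, not with code.
question_ksas = {
    "Describe a project you drove to completion.": {"self-directed"},
    "How do you keep stakeholders informed?": {"communication"},
    "Walk us through a codebase you maintain.": {"self-directed", "collaboration"},
    "How do you document your work for others?": {"communication", "collaboration"},
    "Tell us about learning an unfamiliar technology.": {"self-directed"},
    "How would you onboard a new contributor?": {"communication", "collaboration"},
}

chosen, covered = [], set()
for _ in range(5):
    remaining = [q for q in question_ksas if q not in chosen]
    # Favor the question that covers the most not-yet-covered KSAs.
    best = max(remaining, key=lambda q: len(question_ksas[q] - covered))
    chosen.append(best)
    covered |= question_ksas[best]

print(chosen)
```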

Initial Phone Screening

With our questions and answers in hand, we then used the Screening Rubric to facilitate our evaluation of candidates.

The purpose of this spreadsheet is to provide a tool for each hiring committee member to fill out their evaluation of each candidate. During the phone screening, committee members should work to ensure that they can make a reasonable assessment of each KSA.

Each committee member has a sheet for their assessments. The Summary sheet is both the place to enter the candidate identifier (e.g. applicant number or “name”) as well as see the weighted average.

This spreadsheet encodes a lot of complexity.

It is where we apply the weighted KSAs from the Knowledge, Skills, and Abilities Weighting.

We also account for the fact that not everyone might be able to attend the interview. We established quorum by setting the Number of Incomplete Allowed for Group Quorum variable.

And given that not everyone may have felt comfortable evaluating each KSA, the spreadsheet allows for a Number of Personal Incompletes.

With those three constraints, we then calculate four variables (a sketch of these calculations follows the list):

Weighted Score
This is the sum of each KSA’s weight multiplied by that KSA’s rating. Note, because we allow incompletes, this is only a partial answer and not useful for direct interpretation. (If everyone were to provide a rating for each KSA, then this would be an adequate value.)
Possible Score
This is the maximum rating (e.g., 4) times the sum of the weights for each KSA rated by the committee member. Alone, it is somewhat meaningless. But when compared with the Weighted Score, it shows how far from the maximum score the candidate was; hence Percent Score (see below).
Percent Score
This is the Weighted Score divided by the Possible Score. The closer to 100%, the higher the member’s ratings were overall. A score of 100% means the committee member gave all 4s for their ratings.
Percent Evaluated
The percentage of KSAs that the committee member rated; useful to establish a relative “certainty” in the candidate’s Percent Score.
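
Here’s a sketch of those four calculations for a single committee member’s sheet. The weights and ratings are hypothetical, the scale is assumed to be 1–4 with None marking an incomplete, and the group quorum check is elided.

```python
# A sketch of one committee member's calculated variables.
# Weights carry over from the KSA weighting; ratings use a 1-4 scale, and
# None marks a KSA the member left incomplete. All values are hypothetical.
MAX_RATING = 4

weights = {"self-directed": 3.5, "communication": 7.5, "collaboration": 9.0}
ratings = {"self-directed": 3, "communication": None, "collaboration": 4}

rated = {ksa: rating for ksa, rating in ratings.items() if rating is not None}

# Weighted Score: each rated KSA's weight times its rating (partial if
# there are incompletes, so not directly interpretable on its own).
weighted_score = sum(weights[ksa] * rating for ksa, rating in rated.items())

# Possible Score: the maximum rating times the weights of the rated KSAs.
possible_score = MAX_RATING * sum(weights[ksa] for ksa in rated)

# Percent Score: how close the ratings came to all 4s.
percent_score = weighted_score / possible_score

# Percent Evaluated: a relative "certainty" in the Percent Score.
percent_evaluated = len(rated) / len(ratings)

print(f"{percent_score:.0%} of possible; {percent_evaluated:.0%} evaluated")
# => 93% of possible; 67% evaluated
```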

Immediately after each candidate’s interview, each of us went and filled out our evaluation of that candidate. Then, when we’d completed all of the initial interviews, we went in and reviewed the scores.

In the review, the tool allowed for discussion of differences in perception (e.g., “I see you gave them a 1 and I gave them a 4. Let’s talk about it.”).

We invited the candidates with the highest percent scores to a later, more involved interview.

Observations

I developed these spreadsheets to equip the hiring committee to do asynchronous work so that, when meeting as a group, we could accept where we agreed and discuss our not-yet-in-alignment perspectives.

These discussions were quick, topical, and helped move us toward consensus.

In other words, we exposed numeric insight into each person’s position, and in doing so appeared to help the group accept a decision based on its aggregate wisdom and insights.

It appears that the process:

  • Gives individuals time to think and develop thoughts
  • Facilitates effective meetings by requiring preparation
  • Helps highlight alignment and differences in understanding in a detached manner
  • Creates space for consensus in the aggregate
  • Requires clarification of the purpose of this decision
  • May provide a means of meta-analysis of the varied decisions of an organization
  • Makes meetings fun (for some definitions of fun; your experience may vary, void where prohibited)

Conclusion

Since piloting this process, other academic research libraries have begun using all or portions of it for their hiring processes. Others have adopted and modified the process for other decision-making efforts.

In the above case, we used the initial interview as a springboard for the more involved interview. We asked the candidates to revisit and refine their answer to the following question: It’s one year from now. What impact have you made on the Samvera Community in the year since you took up the post? Describe your plan on how to get there.

We wanted to see how they’d further develop and refine their initial plan using the information gleaned from the initial interview. I’m particularly pleased that we asked this question; it felt like a natural progression of the conversation we’d had.

In the Strategic Innovation Lab 🔍 at the Hesburgh Libraries 🔍, we took the principles of the process and began forming rubrics for project evaluation. It’s a work in progress, but it highlights the utility of giving people space to think about their answers and also share their perspectives.