A recent watchdog report found that the Air Force has one of the best officer performance evaluation systems among the services, but it falls short in two key areas: aligning performance expectations with organizational goals, and reviewing officer evaluation systems for bias and accuracy.
“By revising policy or guidance to direct raters to explicitly align individual officer performance expectations with organizational goals, the Navy, Marine Corps, and Air Force can better ensure that officers’ daily activities and performance are cascading upwards to meet the goals of the organization,” the Government Accountability Office wrote in a Nov. 13 report.
GAO developed 11 key practices after reviewing publications on performance evaluation in the private and public sectors. The Air Force had fully incorporated eight out of the 11 practices, more than any other service. But only the Army had aligned its officer performance expectations with organizational goals, while the Air Force, Navy, and Marine Corps had not.
Every year, Air Force officers receive an officer performance brief (OPB), in which superior officers assess them in four performance areas: executing the mission, leading people, managing resources, and improving the unit. Ten Airman Leadership Qualities are divided among those four areas. Raters write their assessment of the officer’s performance in each area in just a few sentences.
While “executing the mission” and “managing resources” sound like goals, GAO classified the four areas as organizational values—the moral code of an organization—not organizational goals, which are end results expected to be achieved within a specific period.
When the end results are not spelled out, it leaves raters to decide whether the officer actually achieved them, explained Dr. Bradley Podliska, an associate professor at Air University who co-wrote an article for War On The Rocks in March about improving the Air Force commander selection process.
“‘Executing the mission’ can or possibly cannot be related to organizational goals. It’s up to the individual rater whether to make that determination,” Podliska told Air & Space Forces Magazine, adding that his views do not necessarily represent those of the Air Force or the Department of Defense.
“The GAO is saying that these reports have to make it explicitly clear what the organizational goals are, so therefore that officer is going to be rated based on that standard,” he said.
For example, at Air University, teachers are expected to teach a certain number of courses and achieve a minimum positive student evaluation score, among other distinct goals, Podliska said. At an aircraft maintenance squadron, the organizational goal might be to reach a certain aircraft mission-capable rate. But under the current system, when an officer achieves those metrics, it might not necessarily factor into their rater’s assessment of them, Podliska said.
“You would assume that that’s how they’re being evaluated, but because it’s not explicitly clear with the organizational goals, it’s dependent on that individual rater how well they are actually doing in the evaluation,” he said. “I would think, if you talk to any officer, they are going to have stories about how what their rater wrote down about them had almost absolutely nothing to do with what they did. Anecdotally, everybody has stories like that.”
OPBs also require stratification, in which officers of the same grade are ranked against one another, one through five, for example. The rankings make it easy for promotion boards to select a winning officer, but without concrete performance metrics, they may be based on “basically useless data,” Podliska said.
The GAO made a similar argument and pointed out that organizational goals can help align officer training and provide concrete starting points for evaluating the effectiveness of a squadron, group, wing, or other organization.
Replace the Abstract
GAO is not the first to call for changes to the Air Force officer evaluation system. Col. Jason Lamb, then using the pseudonym Col. Ned Stark, sparked renewed interest in the topic from 2018 to 2020 when he wrote a series of essays on improving Air Force officer promotion and leadership development.
“We have some great leaders in our Air Force, but we need to do a better job of finding and developing more of them while weeding out toxic leaders before they have a chance to do significant harm to our Airmen and missions,” Lamb wrote in one essay.
The Air Force is not alone in its soul-searching: in 2020, the Army launched a Battalion Commander Assessment Program, in which candidates undergo a five-day series of cognitive tests, interviews with a psychologist, communication assessments, reports from peers and subordinates, and other evaluations.
So far, the results are promising: under the first BCAP, 34 percent fewer officers were chosen for command than under the old system, which was just a board reviewing personnel files. Many Soldiers rejected under the first BCAP came back the next year after learning from their mistakes. Ninety-four percent of the participants said BCAP was a better way to select battalion commanders than the old system, and 97 percent said the Army should continue BCAP.
In their March article, Podliska and his co-author, Air Force Maj. Maria Patterson, pointed out that BCAP is part of a larger Army effort to identify specific command leadership attributes in its doctrine, then use objective data to assess how close Soldiers are to the mark. The Air Force needs to spell out its own command leadership attributes to guide development, they said.
“Within the Air Force, a plethora of doctrine, regulations, instructions, manuals, and technical orders exist, ranging from how to properly use a chair to developing a strategy for modern international warfare with near-peer threats,” wrote Podliska and Patterson. “Still, one of the most critical aspects of the military foundation is neglected—leadership in command.”
A complementary effort would be to align individual performance expectations with organizational goals, so that the Air Force could better identify high-performing officers with objective data, Podliska said.
“Let’s replace the abstract with actual metrics,” he said. “What does it mean to lead people? How do you actually define that in terms of quantifiable variables? Let’s look at some of the research.”
Numbers may not account for everything, Podliska cautioned, which is why more abstract values could still play a role, particularly for taking care of subordinates. But if the Air Force does decide to change its system, it needs a way of checking to see if it works; the GAO reported that none of the services had fully incorporated such a mechanism.
“[T]he Air Force makes incremental changes—such as policy updates—to the performance evaluation system as needed and has a process for ensuring completeness of performance evaluation reports,” the report said. “However, it has not regularly evaluated the system’s processes and tools to help ensure the effectiveness, accuracy, and quality of the system, and it does not review ratings or related trends to ensure fairness or accuracy of individual ratings.”
For its part, the Air Force partially concurred with GAO’s recommendation to explicitly align officer performance expectations with organizational goals.
“The Air Force recognized that there can be confusion between the core values and organizational goals as they relate to the evaluation system and noted that the service would examine how to incorporate the requirement most effectively into its policy,” GAO noted. “[W]e are encouraged by the Air Force’s stated commitment to examine how to clarify its organizational goals and align those goals with officer expectations in policy.”