The Counsel & Clarity™ Blog

Welcome to the Counsel & Clarity blog, where we share insights, advice, and inspiration for navigating setbacks and fostering success.

Dive into our collection of articles on preventive legal support, legal operations coaching, and leadership consulting, all designed to help executives and businesses transform challenges into growth opportunities.

Learn from our expertise and join the conversation to build resilience and drive positive change in your personal and professional journey.

Thomson Reuters' RPM: A Flawed Metric for the AI Age?

August 26, 2024 | 16 min read

As artificial intelligence increasingly influences legal practice, the Thomson Reuters Institute has introduced its new Relative Performance Measure (RPM) for evaluating lawyer performance. But is this metric truly the revolutionary tool it claims to be, or is it just another flawed attempt to quantify the increasingly complex world of legal work? Let's dive deep and ask the hard questions about RPM.

RPM is ostensibly designed to evaluate lawyer performance in the age of A.I. It purports to measure a lawyer's output and productivity relative to their peers, taking into account factors beyond traditional billable hours. However, as we'll explore, the implementation and implications of this metric raise significant concerns.
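Thomson Reuters has not published RPM's actual formula, so the best we can do is illustrate the general shape a peer-relative metric might take. The Python sketch below is purely hypothetical: it normalizes a few invented inputs (matters closed, realization rate, client rating) against a peer group using z-scores and assumed weights. Every name, input, and weighting here is an assumption, not Thomson Reuters' methodology.

```python
# Hypothetical sketch of a peer-relative score, for illustration only.
# Thomson Reuters has not published RPM's formula; every metric name and
# weighting below is an invented assumption, not the real methodology.
from statistics import mean, stdev

def peer_relative_score(lawyer: dict, peers: list[dict], weights: dict) -> float:
    """Combine several inputs into one z-score-style number vs. a peer group."""
    score = 0.0
    for metric, weight in weights.items():
        peer_values = [p[metric] for p in peers]
        mu, sigma = mean(peer_values), stdev(peer_values)
        if sigma == 0:
            continue  # no variation among peers, so this metric adds no signal
        score += weight * (lawyer[metric] - mu) / sigma
    return score

peers = [
    {"matters_closed": 30, "realization_rate": 0.82, "client_rating": 4.1},
    {"matters_closed": 24, "realization_rate": 0.90, "client_rating": 4.6},
    {"matters_closed": 41, "realization_rate": 0.75, "client_rating": 3.9},
]
me = {"matters_closed": 28, "realization_rate": 0.88, "client_rating": 4.4}
print(peer_relative_score(me, peers, {"matters_closed": 0.4,
                                      "realization_rate": 0.3,
                                      "client_rating": 0.3}))
```

Even in this toy version, notice how much rides on unstated choices: who counts as a peer, which inputs are included, and how they are weighted. Each of those choices silently shapes the final number, which is exactly why the questions that follow matter.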

The Peer Problem: Who's Really Being Compared?

At the heart of RPM lies a fundamental issue: the nebulous concept of "peers." The metric claims to measure a lawyer's performance "relative to peers," but the definition of those peers remains frustratingly vague. Are we talking about lawyers with the same job title? In the same firm? Across different firms?

The legal world is far from homogeneous. A senior associate at a boutique firm in rural America faces vastly different challenges and opportunities than one at a top-tier firm in New York City. How can RPM possibly account for these disparities? Thomson Reuters owes us a clear explanation of how it defines and selects these so-called peers.

Moreover, the peer comparison approach raises concerns about potential biases and reinforcement of existing inequalities in the legal profession. If the "peer" group is predominantly composed of a certain demographic, it could inadvertently penalize lawyers from underrepresented backgrounds who may face unique challenges in their careers. Without transparency about how these peer groups are constructed, there's a risk that RPM could perpetuate or even exacerbate existing disparities in the legal field.

Another critical issue is the dynamic nature of legal careers. Lawyers often transition between different types of practices, firm sizes, or even move in-house. How does RPM account for these career changes? A lawyer who moves from a large corporate firm to a public interest organization may see a drastic change in their "relative performance" that has nothing to do with their actual skills or dedication. This raises questions about the longitudinal validity of RPM scores and their potential impact on lawyer mobility and career development. Thomson Reuters needs to address how RPM can provide meaningful comparisons across the diverse and evolving landscape of legal careers.

While the peer comparison issue is fundamental to RPM's structure, it's not the only aspect that raises questions. Let's turn our attention to how RPM conceptualizes productivity in the A.I. age.

The Productivity Paradox

In justifying the need for RPM, Thomson Reuters makes a bold claim: "firm profitability is becoming less tied to productivity." This statement, presented as fact, raises more questions than it answers. What exactly does "productivity" mean in this context? If it's not about billable hours - the traditional yardstick of legal productivity - what is it about?

Moreover, Thomson Reuters asserts that this trend will "likely accelerate as a result of generative artificial intelligence." This sweeping generalization demands deeper examination. How exactly will A.I. impact productivity? Will it make some lawyers more productive while rendering others obsolete? These are crucial questions that the RPM metric seems to gloss over.

Redefining Productivity in the Age of A.I.

The disconnect between productivity and profitability that Thomson Reuters alludes to is a complex issue that deserves more than a passing mention. In traditional legal practice, productivity often equated to hours billed. However, with the rise of alternative fee arrangements, value-based billing, and efficiency-driven practices, this equation is no longer straightforward. RPM purports to address this shift, but it's unclear how it accounts for these varied billing models and their impact on both productivity and profitability.

Furthermore, the introduction of A.I. into legal practice adds another layer of complexity to the productivity question. A.I. tools can dramatically speed up certain tasks, like document review or contract analysis. But does completing these tasks faster necessarily equate to higher productivity or value for the client? And how does RPM factor in the time spent learning to use and implement A.I. tools effectively? These are critical considerations in evaluating lawyer performance in the A.I. age that Thomson Reuters needs to address.

The Firm-Wide Impact of A.I. on Productivity

The productivity paradox extends beyond individual lawyer performance to firm-wide strategies. As A.I. takes over more routine tasks, firms may need fewer junior lawyers but more tech-savvy professionals. How does RPM account for this shift in workforce composition? Does it consider the productivity of legal technologists or A.I. specialists who may not bill hours in the traditional sense but significantly contribute to a firm's efficiency and profitability? Without clarity on these points, RPM risks becoming an outdated metric almost as soon as it's implemented.

Lastly, the emphasis on productivity - however it's defined - raises ethical concerns. In a profession where due diligence, careful analysis, and ethical considerations are paramount, an overemphasis on productivity could incentivize cutting corners. How does RPM ensure that it's not inadvertently encouraging quantity over quality, or speed over thoroughness? As the legal industry grapples with the implications of A.I., it's crucial that performance metrics like RPM don't undermine the core values of the profession in pursuit of a narrow definition of productivity.

The challenges in defining and measuring productivity lead us to an equally complex issue: how does RPM define and quantify a lawyer's "output" in the era of A.I.?

Decoding "Output" in the Age of A.I.

RPM supposedly measures a lawyer's "output," but in a world where A.I. can draft contracts and conduct legal research, what constitutes human output? Is it the ability to use A.I. tools effectively? The quality of human oversight? The creativity in legal strategy that A.I. can't replicate?

Thomson Reuters needs to clearly define what it means by "output" in this new landscape. Without this clarity, RPM risks becoming an arbitrary number that fails to capture the true value of a lawyer's work.

The Challenge of Attribution in A.I.-Assisted Legal Work

The concept of "output" in legal work has always been multifaceted, encompassing quantitative measures like documents produced or hours billed, as well as qualitative aspects such as the soundness of legal advice or the favorability of negotiated terms. With the introduction of A.I., this complexity has increased exponentially. A.I. can rapidly generate vast amounts of text, from memos to briefs, but the real value often lies in the human lawyer's ability to critically evaluate, refine, and apply this A.I.-generated content. How does RPM account for this crucial interpretative and editorial role? There's a risk that by focusing on easily quantifiable outputs, the metric could undervalue the essential human elements of legal work.

Moreover, the integration of A.I. into legal practice raises questions about the attribution of work. When a lawyer uses an A.I. tool to draft a contract, who or what is responsible for the "output"? Is it the A.I. system, the lawyer who prompted and refined the A.I.'s work, or perhaps the tech team that implemented and customized the A.I. tool for the firm? RPM's approach to this attribution question could have significant implications for how lawyers engage with A.I. tools and how firms structure their workflows. If RPM doesn't adequately credit lawyers for their role in A.I.-assisted work, it could disincentivize the adoption of efficiency-enhancing technologies.

Valuing Innovation and Creativity in Legal Output

Furthermore, the nature of legal "output" varies greatly across different practice areas and types of legal work. Transactional lawyers produce contracts and deal documents, litigators generate briefs and motions, and advisory lawyers primarily deliver memos and client advice. Each of these areas is impacted differently by A.I. For instance, contract drafting might be more readily automated than crafting nuanced litigation strategy. Does RPM have the flexibility to account for these varying impacts of A.I. across different legal specialties? Without such nuance, the metric risks creating a one-size-fits-all approach to performance evaluation that fails to recognize the diverse ways lawyers add value in different contexts.

Lastly, there's the question of innovation and legal creativity. Some of the most valuable legal work involves developing novel legal theories, crafting innovative deal structures, or finding unique solutions to complex legal challenges. This type of output is often not easily quantifiable and may not be amenable to A.I. assistance. How does RPM capture and value this crucial aspect of legal work? There's a danger that by focusing on more easily measurable outputs, RPM could inadvertently discourage the kind of creative, boundary-pushing work that often delivers the highest value to clients and advances the field of law.

The ambiguities surrounding the definition of "output" are compounded by another critical issue: the lack of transparency in RPM's calculation methodology.

The Opacity Problem

Perhaps most troubling is the proprietary nature of RPM's calculation. While Thomson Reuters outlines general steps, the actual formula remains a black box. In an age where algorithmic transparency is increasingly crucial, especially in evaluative tools, this opacity is deeply concerning. How can lawyers trust a metric they can't fully understand or scrutinize?

This lack of transparency raises significant ethical and practical concerns. In the legal profession, where due diligence and thorough analysis are paramount, asking lawyers to accept a performance metric without full disclosure of its methodology is problematic at best. It's akin to asking a client to trust a legal strategy without explaining the reasoning behind it. The legal industry has long prided itself on rigorous, evidence-based practices. Introducing a proprietary, opaque metric into this environment not only goes against these principles but also risks undermining the very foundations of professional evaluation and advancement.

Moreover, the black-box nature of RPM opens the door to potential biases and unfairness in its application. Without access to the specific variables and weightings used in the calculation, it's impossible to verify whether the metric inadvertently discriminates against certain groups of lawyers. For instance, does it adequately account for the unique challenges faced by lawyers from underrepresented backgrounds, or those who take on pro bono work? Does it consider the varying resource availability across different firm sizes and types? The legal profession has been grappling with issues of diversity and inclusion; an opaque performance metric could potentially exacerbate existing inequalities if its inner workings are not open to scrutiny and adjustment.
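To make the stakes concrete, consider what even a rudimentary fairness audit requires: access to the scores themselves and the ability to break them down by group. The hypothetical Python sketch below uses invented numbers to show the kind of disparate-impact check that a proprietary, black-box metric forecloses.

```python
# Illustrative fairness check with invented data: compare average scores
# across two groups of lawyers. Running this against a real metric would
# require access to its inputs and outputs - precisely what a proprietary,
# black-box formula withholds.
from statistics import mean

scores_by_group = {
    "group_a": [1.2, 0.4, -0.3, 0.9, 0.7],
    "group_b": [-0.8, -0.2, 0.1, -0.5, -0.6],
}

averages = {group: mean(scores) for group, scores in scores_by_group.items()}
gap = averages["group_a"] - averages["group_b"]
print(f"Mean score gap between groups: {gap:.2f}")
if abs(gap) > 0.5:  # threshold chosen arbitrarily for illustration
    print("Gap is large enough to warrant auditing the metric's inputs and weights.")
```

This is only the crudest first step in a real audit, but even it is impossible without transparency into how the scores are produced.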

Given these concerns about transparency and fairness, we must ask a fundamental question: is RPM truly addressing a need in the legal industry, or is it creating new problems?

RPM: A Solution in Search of a Problem?

One has to wonder: is RPM truly addressing the needs of the legal industry, or is it a product created to capitalize on the anxiety surrounding A.I.'s impact on law? The legal profession is grappling with fundamental questions about the nature of legal work in the A.I. age. It's not clear that a single metric, no matter how sophisticated, can adequately address these complex issues.

It's worth noting that the legal industry isn't alone in grappling with performance measurement in the digital age. Other professional services fields, such as management consulting and accounting, have long used a variety of metrics to evaluate performance, including utilization rates, client satisfaction scores, and revenue generation. However, these fields have generally relied on a combination of quantitative and qualitative measures, rather than attempting to distill performance into a single metric. RPM's approach of creating a unified score is novel, but it may be oversimplifying the complex nature of legal work.

It's important to acknowledge that RPM, if implemented thoughtfully, could potentially offer some benefits. A more comprehensive performance metric could, in theory, provide a more nuanced view of a lawyer's contributions beyond billable hours. It could also encourage the development of skills needed in the A.I. age. However, as we'll explore, the current form of RPM raises serious concerns that outweigh these potential benefits.

The introduction of RPM seems to be based on the assumption that traditional performance metrics are no longer sufficient in the age of A.I. However, this premise itself warrants scrutiny. While A.I. is undoubtedly changing aspects of legal practice, the core competencies that make a great lawyer - critical thinking, persuasive argumentation, ethical judgment, and client relations - remain largely unchanged. By focusing on a new, A.I.-centric metric, there's a risk of overemphasizing technological proficiency at the expense of these fundamental legal skills. RPM might be solving a problem that doesn't exist while creating new ones in the process.

Moreover, the legal industry's challenges in the A.I. era are multifaceted and vary greatly across different practice areas, firm sizes, and geographical locations. A one-size-fits-all metric like RPM may struggle to capture this diversity. For instance, how does RPM account for the vastly different A.I. adoption rates between large corporate firms and small public interest practices? Does it consider the varying ethical implications of A.I. use across different areas of law? By attempting to distill the complexities of legal practice in the A.I. age into a single number, RPM risks oversimplifying the nuanced reality of modern legal work. Instead of a universal metric, what the legal industry might truly need is a more flexible, context-aware approach to performance evaluation that can adapt to the specific challenges and opportunities presented by A.I. in different legal contexts.

Beyond the question of RPM's necessity, there's a broader issue to consider: the potential dangers of reducing complex professional performance to a single number.

The Danger of Over-Quantification

In our quest to measure everything, we risk losing sight of the intangible qualities that make great lawyers. Empathy, creativity, ethical judgment - these are crucial aspects of legal work that don't easily translate into numbers. By reducing lawyers to a single score, RPM might inadvertently encourage a form of practice that prioritizes measurable outputs over these essential, but less quantifiable, skills.

This over-reliance on quantification runs headlong into Goodhart's Law: when a measure becomes a target, it ceases to be a good measure. In the context of RPM, lawyers might begin to optimize their behavior to improve their score rather than focusing on providing the best possible service to their clients. For instance, if RPM heavily weights the number of documents produced or the speed of task completion, lawyers might be incentivized to churn out more documents or rush through tasks at the expense of quality and thoughtful analysis. This could degrade legal services and undermine the very essence of what it means to be a good lawyer.
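A toy simulation makes this dynamic concrete. Suppose, purely hypothetically, that a score rewards document volume while quality depends on the hours invested per document; "optimizing" the score then mechanically erodes quality. The numbers below are invented solely to illustrate Goodhart's Law.

```python
# Toy Goodhart's Law simulation with invented numbers: if a score rewards
# volume, and quality depends on hours spent per document, then maximizing
# the score mechanically drives quality down.
HOURS_PER_WEEK = 50

def outcomes(docs_per_week: int) -> tuple[int, float]:
    hours_per_doc = HOURS_PER_WEEK / docs_per_week
    quality = min(1.0, hours_per_doc / 10)  # assume ~10 hours yields full quality
    return docs_per_week, quality

for docs in (4, 5, 8, 10, 25):
    volume, quality = outcomes(docs)
    print(f"{volume:>2} docs/week -> quality {quality:.2f}")
# Past 5 docs/week, the volume-based score keeps rising while quality falls.
```

The lesson is not that volume metrics are useless, but that any single target, once gamed, stops measuring what it was meant to measure.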

Consider, for example, a scenario where a junior lawyer spends significant time mentoring a summer associate, or where a senior partner invests weeks in developing a novel legal theory for a pro bono case. These activities, while crucial for the profession's long-term health and the pursuit of justice, might not immediately translate into measurable "output" or "productivity" as defined by RPM. Consequently, lawyers might be disincentivized from engaging in such vital, but less quantifiable, activities.

Moreover, the emphasis on quantifiable metrics might disproportionately disadvantage certain types of legal work that are inherently less measurable. Pro bono work, mentoring junior lawyers, building client relationships, or engaging in complex, long-term litigation are all crucial activities that may not immediately translate into tangible outputs. How does RPM account for these vital aspects of legal practice? There's a real risk that by focusing on what can be easily measured, we create a skewed picture of lawyer value that fails to capture the full spectrum of contributions to the legal profession and society at large. This could lead to a narrowing of legal practice, where lawyers are discouraged from engaging in activities that don't directly improve their RPM score, ultimately resulting in a less diverse, less innovative, and less socially responsible legal profession.

Bottom Line: RPM Appears to Rest on Shifting Sands

While Thomson Reuters' attempt to create a more comprehensive performance metric is commendable, RPM raises more questions than it answers. In its current form, at least, it appears to be a flawed tool that fails to adequately address the complexities of legal work in the A.I. age. The issues we've explored, from the nebulous definition of peers to the dangers of over-quantification, suggest that RPM may be built on shifting sands, unable to provide a stable foundation for evaluating lawyer performance in our rapidly evolving legal landscape.

Before law firms consider adopting RPM, they should demand answers to these critical questions:

  1. How exactly are "peers" defined and selected? The legal world's diversity makes peer comparison a complex issue that requires careful consideration and transparency.

  2. What specific factors go into measuring "output" and "productivity"? In an age where A.I. can draft contracts and conduct research, the definition of lawyer output needs clear articulation.

  3. How does the metric account for the varying impacts of A.I. across different areas of law and types of firms? The uneven adoption and impact of A.I. across the legal sector necessitates a nuanced approach.

  4. Why is the calculation proprietary, and how can its accuracy and fairness be verified? The black-box nature of RPM goes against the principles of transparency that the legal profession holds dear.

  5. How does RPM ensure it's not inadvertently disadvantaging certain groups of lawyers? The potential for bias in algorithmic systems is a well-known issue that requires careful scrutiny.

  6. How does RPM account for essential but less quantifiable aspects of legal work, such as creativity, ethical judgment, and client relationship building?

  7. What safeguards are in place to prevent RPM from incentivizing quantity over quality in legal work?

  8. How does RPM adapt to the rapidly changing nature of legal work in the A.I. age? Is it flexible enough to remain relevant as new technologies emerge?

  9. How does RPM balance the evaluation of A.I.-assisted work with purely human contributions?

  10. What is the long-term impact of RPM on legal education and the development of future lawyers?

As the legal industry grapples with the implications of A.I., it's crucial that we don't allow tools like RPM to shape the future of law without rigorous scrutiny and debate. The value of a lawyer extends far beyond what can be quantified: it lies in their judgment, their ethical standards, their ability to navigate complex human situations, and their commitment to justice. Any performance metric that fails to capture these essential qualities is, at best, incomplete and, at worst, potentially harmful to the profession and the clients it serves.

In conclusion, while the intent behind RPM may be laudable, its current form raises serious concerns. As we navigate the intersection of law and A.I., we must ensure that our evaluation methods enhance, rather than diminish, the core values of the legal profession. The future of law is too important to be left to an algorithm we don't fully understand or trust.

Tags: Relative Performance Measure, Lawyer performance evaluation, Artificial Intelligence in Law, Legal productivity metrics, Thomson Reuters Institute, Legal industry AI impact, Ethical concerns in legal metrics

Noel Bagwell

Founder of Counsel & Clarity™

Copyright © Counsel & Clarity Legal, PLLC, and Counsel & Clarity Consulting, LLC, jointly d/b/a "Counsel & Clarity™." All Rights Reserved. No attorney-client relationship results from the consumption of information on this site. Nothing expressed herein should be construed as legal advice. Please take no action that could affect your legal rights without first consulting a local attorney.
