{"id":881439,"date":"2025-12-29T01:22:50","date_gmt":"2025-12-29T07:22:50","guid":{"rendered":"https:\/\/newsycanuse.com\/index.php\/2025\/12\/29\/ai-roi-how-to-measure-the-true-value-of-ai\/"},"modified":"2025-12-29T01:22:50","modified_gmt":"2025-12-29T07:22:50","slug":"ai-roi-how-to-measure-the-true-value-of-ai","status":"publish","type":"post","link":"https:\/\/newsycanuse.com\/index.php\/2025\/12\/29\/ai-roi-how-to-measure-the-true-value-of-ai\/","title":{"rendered":"AI ROI: How to measure the true value of AI"},"content":{"rendered":"<article id=\"post-4106788\">\n<div>\n<div>\n<div>\n<h2>\n\t\t\t\tTime saved and money earned tell only part of the story. The real ROI of AI depends on how well organizations adapt, scale, and believe.\t\t\t<\/h2>\n<\/p><\/div>\n<div id=\"remove_no_follow\">\n<p><body><\/p>\n<div>\n<p>For all the buzz about AI\u2019s potential to transform business, many organizations struggle to ascertain the extent to which their AI implementations are actually working.<\/p>\n<p>Part of this is because AI doesn\u2019t just replace a task or automate a process \u2014 rather, it changes how work itself happens, often in ways that are hard to quantify. 
<a href=\"https:\/\/www.cio.com\/article\/405620\/measuring-the-business-impact-of-ai.html\">Measuring that impact<\/a> means deciding what <em>return<\/em> really means, and how to connect new forms of digital labor to traditional business outcomes.<\/p>\n<p>\u201cLike everyone else in the world right now, we\u2019re figuring it out as we go,\u201d says <a href=\"https:\/\/www.linkedin.com\/in\/agustina-branz-4217562a\/\" rel=\"nofollow\">Agustina Branz<\/a>, senior marketing manager at Source86.<\/p>\n<\/div>\n<div>\n<p>That trial-and-error approach is what defines the current conversation about <a href=\"https:\/\/www.cio.com\/article\/4095159\/a-cios-5-point-checklist-to-drive-positive-ai-roi.html\">AI ROI<\/a>.<\/p>\n<p>To help shed light on <a href=\"https:\/\/www.cio.com\/article\/4032809\/what-cios-need-to-know-about-measuring-ai-value.html\">measuring the value of AI<\/a>, we spoke to several tech leaders about how their organizations are learning to gauge performance in this area \u2014 from simple benchmarks against human work to complex frameworks that track cultural change, cost models, and the hard math of value realization.<\/p>\n<h2 id=\"the-simplest-benchmark-can-ai-do-better-than-you\">The simplest benchmark: Can AI do better than you?<\/h2>\n<p>There\u2019s a fundamental question all organizations are starting to ask, one that underlies nearly every AI metric in use today: How well does AI perform a task relative to a human? For Source86\u2019s Branz, that means applying the same yardstick to AI that she uses for human output.<\/p>\n<\/div>\n<div>\n<p>\u201cAI can definitely make work faster, but faster doesn\u2019t mean ROI,\u201d she says. \u201cWe try to measure it the same way we do with human output: by whether it drives real results like traffic, qualified leads, and conversions. 
One KPI that has been useful for us has been cost per qualified outcome, which basically means how much less it costs to get a real result like the ones we were getting before.\u201d<\/p>\n<p>The key is to compare against what humans delivered in the same context. \u201cWe try to isolate the impact of AI by running A\/B tests between content that uses AI and those that don\u2019t,\u201d she says.<\/p>\n<p>\u201cFor instance, when testing AI-generated copy or keyword clusters, we track the same KPIs \u2014 traffic, engagement, and conversions \u2014 and compare the outcome to human-only outputs,\u201d Branz explains. \u201cAlso, we treat AI performance as a directional metric rather than an absolute one. It is super useful for optimization, but definitely not the final judgment.\u201d<\/p>\n<\/div>\n<div>\n<p><a href=\"https:\/\/www.linkedin.com\/in\/marc-aurele\/\" rel=\"nofollow\">Marc\u2011Aurele Legoux<\/a>, founder of an organic digital marketing agency, is even more blunt. \u201cCan AI do this better than a human can? If yes, then good. If not, there\u2019s no point to waste money and effort on it,\u201d he says. \u201cAs an example, we implemented an AI agent chatbot for one of my luxury travel clients, and it brought in an extra \u20ac70,000 [$81,252] in revenue through a single booking.\u201d<\/p>\n<p>The KPIs, he says, are simply these: \u201cDid the lead come from the chatbot? Yes. Did this lead convert? Yes. Thank you, AI chatbot. We would compare AI-generated outcomes \u2014 leads, conversions, booked calls \u2014 against human-handled equivalents over a fixed period. If the AI matches or outperforms human benchmarks, then it\u2019s a success.\u201d<\/p>\n<p>But this sort of benchmark, while straightforward in theory, becomes much harder in practice. 
Setting up valid comparisons, controlling for external factors, and attributing results solely to AI are easier said than done.<\/p>\n<\/div>\n<div>\n<h2 id=\"hard-money-time-accuracy-and-value\">Hard money: Time, accuracy, and value<\/h2>\n<p>The most tangible form of AI ROI involves time and productivity. <a href=\"https:\/\/www.linkedin.com\/in\/johnatalla\/\" rel=\"nofollow\">John Atalla<\/a>, managing director at Transformativ, calls this \u201cproductivity uplift\u201d: \u201ctime saved and capacity released,\u201d measured by how long it takes to complete a process or task.<\/p>\n<p>But even clear metrics can miss the full picture. \u201cIn early projects, we found our initial KPIs were quite narrow,\u201d he says. \u201cAs delivery progressed, we saw improvements in decision quality, customer experience, and even staff engagement that had measurable financial impact.\u201d<\/p>\n<p>That realization led Atalla\u2019s team to create a framework with three lenses: productivity, accuracy, and what he calls \u201cvalue-realization speed\u201d \u2014 \u201chow quickly benefits show up in the business,\u201d whether measured by payback period or by the share of benefits captured in the first 90 days.<\/p>\n<\/div>\n<div>\n<p>The same logic applies at Wolters Kluwer, where <a href=\"https:\/\/www.linkedin.com\/in\/aoifelouisemay\/\" rel=\"nofollow\">Aoife May<\/a>, product management associate director, says her teams help customers compare manual and AI-assisted work for concrete time and cost differences.<\/p>\n<p>\u201cWe attribute estimated times to doing tasks such as legal research manually and include an average attorney cost per hour to identify the costs of manual effort. We then estimate the same, but with the assistance of AI.\u201d Customers, she says, \u201creduce the time they spend on obligation research by up to 60%.\u201d<\/p>\n<p>But time isn\u2019t everything. 
Atalla\u2019s second lens \u2014 decision accuracy \u2014 captures gains from fewer errors, rework, and exceptions, which translate directly into lower costs and better customer experiences.<\/p>\n<\/div>\n<div>\n<p><a href=\"https:\/\/www.linkedin.com\/in\/adriandunkley\/\" rel=\"nofollow\">Adrian Dunkley<\/a>, CEO of StarApple AI, takes the financial view higher up the value chain. \u201cThere are three categories of metrics that always matter: efficiency gains, customer spend, and overall ROI,\u201d he says, adding that he tracks \u201chow much money you were able to save using AI, and how much more you were able to get out of your business without spending more.\u201d<\/p>\n<p>Dunkley\u2019s research lab, Section 9, also tackles a subtler question: how to trace AI\u2019s specific contribution when multiple systems interact. He relies on a process known as \u201cimpact chaining,\u201d which he \u201cborrowed from my climate research days.\u201d Impact chaining maps each process to its downstream business value to create a \u201cpre-AI expectation of ROI.\u201d<\/p>\n<p><a href=\"https:\/\/www.linkedin.com\/in\/tom-poutasse-556635192\/\" rel=\"nofollow\">Tom Poutasse<\/a>, content management director at Wolters Kluwer, also uses impact chaining, and describes it as \u201ctracing how one change or output can influence a series of downstream effects.\u201d In practice, that means showing where automation accelerates value and where human judgment still adds essential accuracy.<\/p>\n<\/div>\n<div>\n<p>Still, even the best metrics matter only if they\u2019re measured correctly. 
Establishing baselines, attributing results, and accounting for real costs are what turn numbers into ROI \u2014 which is where the math starts to get tricky.<\/p>\n<h2 id=\"getting-the-math-right-baselines-attribution-and-cost\">Getting the math right: Baselines, attribution, and cost<\/h2>\n<p>The math behind the metrics starts with setting clean baselines and ends with understanding how AI reshapes the cost of doing business.<\/p>\n<p><a href=\"https:\/\/www.linkedin.com\/in\/mikadzesalome\/\" rel=\"nofollow\">Salome Mikadze<\/a>, co-founder of Movadex, advises rethinking what you\u2019re measuring: \u201cI tell executives to stop asking \u2018what is the model\u2019s accuracy\u2019 and start with \u2018what changed in the business once this shipped.\u2019\u201d<\/p>\n<\/div>\n<div>\n<p>Mikadze\u2019s team builds those comparisons into every rollout. \u201cWe baseline the pre-AI process, then run controlled rollouts so every metric has a clean counterfactual,\u201d she says. Depending on the organization, that might mean tracking first-response and resolution times in customer support, lead time for code changes in engineering, or win rates and content cycle times in sales. Whatever the domain, she says, the metrics include \u201ctime-to-value, adoption by active users, and task completion without human rescue, because an unused model has zero ROI.\u201d<\/p>\n<p>But baselines can blur when people and AI share the same workflow, something that spurred Poutasse\u2019s team at Wolters Kluwer to rethink attribution entirely. \u201cWe knew from the start that the AI and the human SMEs were both adding value, but in different ways \u2014 so just saying \u2018the AI did this\u2019 or \u2018the humans did that\u2019 wasn\u2019t accurate.\u201d<\/p>\n<p>Their solution was a tagging framework that marks each stage as machine-generated, human-verified, or human-enhanced. 
That makes it easier to show where automation adds efficiency and where human judgment adds context, creating a truer picture of blended performance.<\/p>\n<\/div>\n<div>\n<p>At a broader level, measuring ROI also means grappling with what AI actually costs. <a href=\"https:\/\/www.linkedin.com\/in\/michaelmansard\/\" rel=\"nofollow\">Michael Mansard<\/a>, principal director at Zuora\u2019s Subscribed Institute, notes that AI upends the economic model that IT has taken for granted since the dawn of the SaaS era.<\/p>\n<p>\u201cTraditional SaaS is expensive to build but has near-zero marginal costs,\u201d Mansard says, \u201cwhile AI is inexpensive to develop but incurs high, variable operational costs. These shifts challenge seat-based or feature-based models, since they fail when value is tied to what an AI agent accomplishes, not how many people log in.\u201d<\/p>\n<p>Mansard sees some companies <a href=\"https:\/\/www.cio.com\/article\/4046457\/vendor-pricing-experiments-leave-cios-ai-costs-in-flux.html\">experimenting with outcome-based pricing<\/a> \u2014 paying for a percentage of savings or gains, or for specific deliverables such as Zendesk\u2019s $1.50-per-case-resolution model. It\u2019s a moving target: \u201cThere isn\u2019t and <a href=\"https:\/\/www.cio.com\/article\/3624540\/how-will-ai-agents-be-priced-cios-need-to-pay-attention.html\">won\u2019t be one \u2018right\u2019 pricing model<\/a>,\u201d he says. \u201cMany are shifting toward usage-based or outcome-based pricing, where value is tied directly to impact.\u201d<\/p>\n<\/div>\n<div>\n<p>As companies mature in their use of AI, they\u2019re facing a challenge that goes beyond defining ROI once: They\u2019ve got to keep those returns consistent as systems evolve and scale.<\/p>\n<h2 id=\"scaling-and-sustaining-roi\">Scaling and sustaining ROI<\/h2>\n<p>For Movadex\u2019s Mikadze, measurement doesn\u2019t end when an AI system launches. 
Her framework treats ROI as an ongoing calculation rather than a one-time success metric. \u201cOn the cost side we model total cost of ownership, not just inference,\u201d she says. That includes \u201cintegration work, evaluation harnesses, data labeling, prompt and retrieval spend, infra and vendor fees, monitoring, and the people running change management.\u201d<\/p>\n<p>Mikadze folds all that into a clear formula: \u201cWe report risk-adjusted ROI: gross benefit minus TCO, discounted by safety and reliability signals like hallucination rate, guardrail intervention rate, override rate in human-in-the-loop reviews, data-leak incidents, and model drift that forces retraining.\u201d<\/p>\n<\/div>\n<div>\n<p>Most companies, Mikadze adds, accept a simple benchmark: ROI = (\u0394 revenue + \u0394 gross margin + avoided cost) \u2212 TCO, with a payback target of less than two quarters for operations use cases and under a year for developer-productivity platforms.<\/p>\n<p>But even a perfect formula can fail in practice if the model isn\u2019t built to scale. \u201cA local, motivated pilot team can generate impressive early wins, but scaling often breaks things,\u201d Mikadze says. Data quality, workflow design, and team incentives rarely grow in sync, and \u201cAI ROI almost never scales cleanly.\u201d<\/p>\n<p>She says she sees the same mistake repeatedly: A tool built for one team gets rebranded as a company-wide initiative without revisiting its assumptions. \u201cIf sales expects efficiency gains, product wants insights, and ops hopes for automation, but the model was only ever tuned for one of those, friction is inevitable.\u201d<\/p>\n<\/div>\n<div>\n<p>Her advice is to treat AI as a living product, not a one-off rollout. 
\u201cSuccessful teams set very tight success criteria at the experiment stage, then revalidate those goals before scaling,\u201d she says, defining ownership, retraining cadence, and evaluation loops early on to keep the system relevant as it expands.<\/p>\n<p>That kind of long-term discipline depends on infrastructure for measurement itself. StarApple AI\u2019s Dunkley warns that \u201cmost companies aren\u2019t even thinking about the cost of doing the actual measuring.\u201d Sustaining ROI, he says, \u201crequires people and systems to track outputs and how those outputs affect business performance. Without that layer, businesses are managing impressions, not measurable impact.\u201d<\/p>\n<h2 id=\"the-soft-side-of-roi-culture-adoption-and-belief\">The soft side of ROI: Culture, adoption, and belief<\/h2>\n<p>Even the best metrics fall apart without buy-in. Once you\u2019ve built the spreadsheets and have the dashboards up and running, the long-term success of AI depends on the extent to which people adopt it, trust it, and see its value.<\/p>\n<p><a href=\"https:\/\/www.linkedin.com\/in\/michaeldomanic\/\" rel=\"nofollow\">Michael Domanic<\/a>, head of AI at UserTesting, draws a distinction between \u201chard\u201d and \u201csquishy\u201d ROI.<\/p>\n<\/div>\n<div>\n<p>\u201cHard ROI is what most executives are familiar with,\u201d he says. \u201cIt refers to measurable business outcomes that can be directly traced back to specific AI deployments.\u201d Those might be improvements in conversion rates, revenue growth, customer retention, or faster feature delivery. \u201cThese are tangible business results that can and should be measured with rigor.\u201d<\/p>\n<p>But squishy ROI, Domanic says, is about the human side \u2014 the cultural and behavioral shifts that make lasting impact possible. 
\u201cIt reflects the cultural and behavioral shift that happens when employees begin experimenting, discovering new efficiencies, and developing an intuition for how AI can transform their work.\u201d Those outcomes are harder to quantify but, he adds, \u201cthey are essential for companies to maintain a competitive edge.\u201d As AI becomes foundational infrastructure, \u201cthe boundary between the two will blur. The squishy becomes measurable and the measurable becomes transformative.\u201d<\/p>\n<p><a href=\"https:\/\/www.linkedin.com\/in\/johnpettit1\/\" data-type=\"link\" data-id=\"https:\/\/www.linkedin.com\/in\/johnpettit1\/\" rel=\"nofollow\">John Pettit,<\/a> CTO of Promevo, argues that self-reported KPIs that could be seen as falling into the \u201csquishy\u201d category \u2014 things like employee sentiment and usage rates \u2014 can be powerful leading indicators. \u201cIn the initial stages of an AI rollout, self-reported data is one of the most important leading indicators of success,\u201d he says.<\/p>\n<p>When 73% of employees say a new tool improves their productivity, as they did at one client company he worked with, that perception helps drive adoption, even if that productivity boost hasn\u2019t been objectively measured. \u201cWord of mouth based on perception creates a virtuous cycle of adoption,\u201d he says. \u201cEffectiveness of any tool grows over time, mainly by people sharing their successes and others following suit.\u201d<\/p>\n<p>Still, belief doesn\u2019t come automatically. StarApple AI and Section 9\u2019s Dunkley warn that employees often fear AI will erase their credit for success. 
At one of the companies where Section 9 has been conducting a long-term study, \u201cstaff were hesitant to have their work partially attributed to AI; they felt they were being undermined.\u201d<\/p>\n<p>Overcoming that resistance, he says, requires champions who \u201cput in the work to get them comfortable and excited for the AI benefits.\u201d Measuring ROI, in other words, isn\u2019t just about proving that AI works \u2014 it\u2019s about proving that people and AI can win together.<\/p>\n<\/div>\n<p><\/body><\/div>\n<\/p><\/div>\n<\/p><\/div>\n<p> Tama Mischke<br \/><a href=\"https:\/\/www.cio.com\/article\/4105938\/ai-roi-how-to-measure-the-true-value-of-ai.html\" class=\"button purchase\" rel=\"nofollow noopener\" target=\"_blank\">Read More<\/a><\/p>\n","protected":false},"excerpt":{"rendered":"<p>Time saved and money earned tell only part of the story. The real ROI of AI depends on how well organizations adapt, scale, and believe. For all the buzz about AI\u2019s potential to transform business, many organizations struggle to ascertain the extent to which their AI implementations are actually working. 
Part of this is because<\/p>\n","protected":false},"author":1,"featured_media":881440,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[23044,22458],"tags":[12247,11284],"class_list":{"0":"post-881439","1":"post","2":"type-post","3":"status-publish","4":"format-standard","5":"has-post-thumbnail","7":"category-measure","8":"category-value","9":"tag-measure","10":"tag-value"},"aioseo_notices":[],"_links":{"self":[{"href":"https:\/\/newsycanuse.com\/index.php\/wp-json\/wp\/v2\/posts\/881439","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/newsycanuse.com\/index.php\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/newsycanuse.com\/index.php\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/newsycanuse.com\/index.php\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/newsycanuse.com\/index.php\/wp-json\/wp\/v2\/comments?post=881439"}],"version-history":[{"count":0,"href":"https:\/\/newsycanuse.com\/index.php\/wp-json\/wp\/v2\/posts\/881439\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/newsycanuse.com\/index.php\/wp-json\/wp\/v2\/media\/881440"}],"wp:attachment":[{"href":"https:\/\/newsycanuse.com\/index.php\/wp-json\/wp\/v2\/media?parent=881439"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/newsycanuse.com\/index.php\/wp-json\/wp\/v2\/categories?post=881439"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/newsycanuse.com\/index.php\/wp-json\/wp\/v2\/tags?post=881439"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}