I’ve sat in countless quarterly business reviews over the years, on both sides of the table. I’ve been the Managed Service Provider (MSP) walking through beautifully formatted decks showing green checkmarks everywhere: 99.8% uptime, average ticket resolution time of 4.2 hours, customer satisfaction score of 4.6 out of 5. And I’ve been the advisor watching these presentations unfold.
One question consistently exposes the gap: “What percentage of your users experienced at least one disruption that prevented them from working last quarter?”
Almost always, silence. Most MSPs don’t track that metric.
Here’s the uncomfortable truth: most companies struggle to determine whether their MSP is actually delivering value because the metrics they’re shown are carefully curated. MSPs present the picture they want you to see, and without context or comparison, those metrics can be nearly impossible to interpret accurately.
The Metrics Game
MSPs aren’t trying to deceive you. They’re showing you what they measure, and they measure what makes them look good. But here’s the challenge: most business leaders don’t have time to dig deeply into IT metrics. They have to trust what they’re shown. And those metrics are often engineered to paint the rosiest possible picture.
Take ticket closure rates, for example. An MSP shows you that they closed 95% of tickets within their SLA targets. Impressive, right? Except they might be practicing “ticket stuffing,” where one actual problem gets split into multiple tickets. Your accounting software won’t connect to the server? That becomes three separate tickets: network connectivity, application error, and user access. Each one closes quickly, the metrics look fantastic, but your accounting team was still down for six hours.
Or consider response times. An MSP boasts about their average response time of 15 minutes. What they don’t tell you: “response” means someone acknowledged the ticket, not that anyone actually started working on the problem. One of your accountants’ laptops died during month-end close, and yes, someone responded in 12 minutes to say “we’re looking into it.” The actual fix took eight hours.
I’ve seen this from both sides. When you’re the MSP, you’re measured on operational efficiency. Hit your ticket numbers, keep your response times low, maintain high satisfaction scores. These aren’t dishonest practices, necessarily. They’re just optimized for what gets measured.
The problem is that none of these metrics tell you whether your business is running smoothly. Uptime percentages, ticket closure rates, response times. These are operational metrics that matter to service delivery. But they don’t tell you the story that matters: can your people do their jobs effectively?
I’ve seen companies pay six figures annually for “excellent” IT support while their sales team routinely lost deals because they couldn’t access customer history or pricing during critical calls. The MSP had fantastic metrics. The business was suffering.
What You Should Actually Be Measuring
The metrics that matter aren’t about IT performance. They’re about business impact. When we work with companies to evaluate their MSP relationships, we help them track things like:
- Business disruption hours: Not system uptime, but actual time that employees couldn’t do their jobs. Your email server going down for an hour? That’s 50 employee hours lost if it affects 50 people, not a single “incident” that got resolved. (See the sketch after this list for the math.)
- Strategic initiative velocity: How long does it take to go from “we need this capability” to “we’re using this capability”? If it takes six months to deploy a tool that should take three weeks, that’s a problem your MSP’s metrics won’t reveal.
- Revenue-critical system reliability: Not all systems matter equally. Your email being down for an hour is annoying. Your e-commerce platform being down for an hour costs you real money.
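To make the disruption-hours math concrete, here’s a minimal sketch of the calculation in Python. It assumes you can export incidents with a start time, an end time, and an estimate of how many people couldn’t work; the field names (`affected_headcount`, `revenue_critical`) are hypothetical, so map them to whatever your ticketing system actually provides.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Incident:
    # Hypothetical fields; map these to whatever your ticketing export provides.
    started: datetime
    resolved: datetime
    affected_headcount: int         # people who could not work during the outage
    revenue_critical: bool = False  # e.g. e-commerce platform vs. internal email

def disruption_hours(incidents: list[Incident]) -> dict[str, float]:
    """Total employee-hours lost, rather than a count of 'incidents resolved'."""
    totals = {"all": 0.0, "revenue_critical": 0.0}
    for inc in incidents:
        outage_hours = (inc.resolved - inc.started).total_seconds() / 3600
        lost = outage_hours * inc.affected_headcount
        totals["all"] += lost
        if inc.revenue_critical:
            totals["revenue_critical"] += lost
    return totals

# Example: a one-hour email outage affecting 50 people is 50 lost employee hours,
# even though the MSP dashboard records it as a single resolved ticket.
incidents = [
    Incident(datetime(2024, 3, 4, 9), datetime(2024, 3, 4, 10), 50),
    Incident(datetime(2024, 3, 11, 14), datetime(2024, 3, 11, 15), 8, revenue_critical=True),
]
print(disruption_hours(incidents))  # {'all': 58.0, 'revenue_critical': 8.0}
```

The takeaway: a quarter with “12 incidents, all resolved within SLA” can also be a quarter with hundreds of lost employee hours, and only the second number tells you anything about business impact.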
The challenge is that most companies don’t know how to collect these metrics, and their MSP certainly isn’t volunteering to track them. Why would they? These metrics might reveal that while their operational numbers look great, the business impact is mediocre at best. That’s where the conflict of interest becomes painfully clear. You’re asking the MSP to measure themselves on metrics that might prove they’re not delivering value.
The Independent Advisory Difference
This is exactly why we built Technology Advisory Professionals (TAP) as a truly independent firm. We don’t implement anything. We don’t sell software. We don’t have vendor partnerships. We exist solely to answer one question: “Is your technology serving your business?”
When we evaluate an MSP relationship, we’re not looking at their dashboards. We’re interviewing your people. We’re watching how work actually gets done. We’re measuring the metrics that connect technology performance to business outcomes.
Sometimes we find that an MSP is doing excellent work, and we tell the client exactly that. Sometimes we discover that an MSP is hitting all their internal targets while the business struggles. And sometimes we find that the problems aren’t with the MSP at all.
The point isn’t to “catch” MSPs doing something wrong. It’s to create alignment between what’s being measured and what truly matters.
How to Actually Hold Your MSP Accountable
If you’re working with an MSP right now (and you should be; good MSPs are invaluable), here’s how to shift the relationship toward real accountability:
- Define your own success metrics. Don’t just accept the metrics your MSP provides. Those metrics are designed to make them look good. Decide what business outcomes matter to you, then work backward to figure out how technology supports or hinders those outcomes. And watch for metric gaming: if ticket counts suddenly spike while problem resolution stays the same, that’s a red flag (there’s a simple screening sketch after this list).
- Start with business outcomes, not technical specifications. Instead of “we need 99.9% uptime,” try “our operations team needs the inventory system responding in under 2 seconds because delays compound throughout our fulfillment process.”
- Review the relationship with external expertise. You wouldn’t ask your general contractor to inspect their own work. Having an independent advisor review your MSP relationship annually brings objectivity that simply can’t exist when the MSP is grading their own performance.
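If you want a quick screen for the ticket-stuffing pattern described earlier, one crude but useful check is tickets opened per distinct underlying problem, tracked month over month. Here’s a minimal sketch, assuming each ticket can be tagged with a `problem_id` that groups tickets tracing to the same root cause; that field is hypothetical, and many companies have to add it themselves.

```python
from collections import defaultdict

def tickets_per_problem(tickets: list[dict]) -> dict[str, float]:
    """Tickets opened per distinct underlying problem, by month.

    Each ticket is assumed to carry a 'month' (e.g. '2024-03') and a
    'problem_id' grouping tickets that trace to the same root cause.
    A rising ratio with a flat problem count suggests ticket splitting.
    """
    by_month = defaultdict(lambda: {"tickets": 0, "problems": set()})
    for t in tickets:
        bucket = by_month[t["month"]]
        bucket["tickets"] += 1
        bucket["problems"].add(t["problem_id"])
    return {
        month: bucket["tickets"] / len(bucket["problems"])
        for month, bucket in sorted(by_month.items())
    }

# Example: three problems in each month, but April's get split into three tickets apiece.
tickets = (
    [{"month": "2024-03", "problem_id": p} for p in ("P1", "P2", "P3")]
    + [{"month": "2024-04", "problem_id": p} for p in ("P4", "P5", "P6")] * 3
)
print(tickets_per_problem(tickets))  # {'2024-03': 1.0, '2024-04': 3.0}
```

A jump in that ratio doesn’t prove gaming on its own, but it’s exactly the kind of question worth raising at the next quarterly review.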
The Real Question
The question isn’t whether your MSP has good metrics. The question is whether those metrics tell you anything useful about whether your technology is enabling your business to thrive.
Most mid-sized companies can’t answer that question confidently. They know they’re spending a lot on IT. They know they’re still frustrated. They just can’t pinpoint exactly where the disconnect is.
That’s the conversation we have every day at TAP. Not “is your MSP bad?” but “is your technology actually serving your business, and how would you know?”
The MSP industry has matured tremendously. There are exceptional firms doing remarkable work. But the inherent conflict remains: the people implementing your technology are also telling you whether it’s working. Independent oversight isn’t about distrust. It’s about ensuring your technology investments directly serve your business.