Kevin McGowan

Writer | Speaker | Project Leader


Artificial Intelligence vs. Human Project Managers

Posted on February 20, 2026 by Kevin

AI Is Powerful. But Implementation Still Requires Project Management.

I use AI almost every day. And no, I’m not a bot. It helped me edit this article, to be honest. 

It helps me structure my articles when I start to lose the plot and makes recommendations about flow. Not as good as a bona fide human editor, for sure, but it helps me clean up grammar issues and conceptual messiness. At work, I use AI regularly as well: it summarizes meetings quickly and does a good job pulling out action items. It helps me think through problems faster. At an individual level, it’s genuinely useful.

So this isn’t skepticism about the technology itself. Go AI!

What I’ve been thinking about lately is something else: why so many AI implementations struggle once they move beyond experimentation. According to this piece in Forbes, 95% of corporate AI initiatives show zero return. Zero. That’s not a great number for a technology touted as “the next big shift.” It’s like hiring a mover and having them show up, pick up your couch and bookshelves, put them right back down in the same spot and then leave again. 

There’s no shortage of research suggesting enterprise AI adoption is uneven. RAND has reported that more than 80% of AI projects fail, often due to issues like poor scoping, unrealistic expectations, and weak change management.¹ IDC has found that roughly 28% of AI initiatives are abandoned before completion.² And other reporting on generative AI has suggested that many projects fail to deliver measurable profit-and-loss impact.³

The common thread in these findings isn’t that AI doesn’t work. It’s that organizations struggle to integrate it. And don’t get me wrong, I know this is complicated. It’s not easy to shift how your business works, integrate new services into your tech stack, and change how people work with a new tool.

That’s where project management comes in. Yes, I’m a project manager and see all things through this lens. In this case, especially, I do think it’s a useful perspective. 

AI is often introduced as a tool. But often, it behaves more like a transformation initiative. It impacts workflows, data quality, governance, accountability, and even culture. When those pieces aren’t coordinated, the implementation can falter. Not because the model failed, but because the system around it wasn’t ready. Ironically, it’s a conceptual infrastructure that humans need to build to ready their teams for a big technological shift. 

If organizations want better outcomes from AI, I don’t think the answer is more hype or faster adoption. It’s stronger fundamentals. Enter the world of the Project Manager, my friends. 

Here are a few project management practices that would materially improve AI implementations.

1. Start With Integration, Not the Tool

It’s tempting to begin with the technology: select a platform, run a pilot, test use cases. Some teams will see gains and get excited, but isolated wins don’t guarantee positive results across the board.

The important question is this: Where does AI fit into existing workflows?

Before launching anything, map the current process. Where does information enter? Who reviews it? What systems feed it? What downstream decisions depend on it? What happens when something goes wrong?

Then ask how AI changes that flow. 

Does it replace a step? Accelerate one? Introduce a new validation requirement? Create new points of failure?

This is classic integration management, right out of the PMBOK. The Project Management Institute defines integration management as ensuring all elements of a project work together coherently across the lifecycle.⁴ In the AI context, that means you cannot treat the model as separate from the process.

If the surrounding workflow isn’t redesigned deliberately, AI ends up as a parallel system — useful in pockets, but not producing enterprise-level value.
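One way to make that mapping exercise tangible is to write the workflow down as data before touching any tooling. This is a toy sketch with made-up step names, assuming a simple support pipeline; the point is that every step gets an explicit answer to “how does AI change this?”:

```python
# Toy workflow map (all names hypothetical): each step records who owns it and
# how AI changes it -- "replace", "accelerate", "new validation", or None.
workflow = [
    {"step": "intake",      "owner": "support team", "ai_change": None},
    {"step": "triage",      "owner": "analyst",      "ai_change": "replace"},
    {"step": "draft reply", "owner": "analyst",      "ai_change": "accelerate"},
    {"step": "review",      "owner": "team lead",    "ai_change": "new validation"},
    {"step": "close out",   "owner": "support team", "ai_change": None},
]

# Which steps does the implementation actually touch? Those are the ones that
# need redesign, training, and a plan for what happens when the model is wrong.
touched = [s["step"] for s in workflow if s["ai_change"]]
```

Even a crude map like this surfaces the integration questions early: who owns the new validation step, and what does “replace” do to the analyst’s role?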

2. Define Success Before You Launch

Many AI initiatives are approved with vague objectives like “increase efficiency” or “improve productivity.”

That’s a good start, but it’s not enough. To revisit the mover analogy: when the mover shows up at your house, what are the success criteria? Is it merely to move your stuff to a new location? Or do you expect a certain degree of care? If they smash all your records to bits and drop your books into puddles, how successful is that move? So, give it a rethink.

What does improvement actually mean?
Is it time saved per task? Cost reduction? Customer satisfaction scores?

You really need to define the metric before implementation.

If the goal is reducing response time by 20%, write that down. If the goal is lowering processing cost per transaction, quantify it. Without a measurable target, it becomes almost impossible to evaluate whether the project worked.
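Once the target is written down, the evaluation is just arithmetic. Here’s a minimal sketch with invented numbers, assuming the “reduce response time by 20%” example above:

```python
# Illustrative numbers only: a 20% response-time reduction target, defined before launch.
baseline_response_s = 42.0   # average response time measured before implementation
target_reduction = 0.20      # the success criterion agreed up front
measured_response_s = 31.5   # average response time measured after go-live

actual_reduction = (baseline_response_s - measured_response_s) / baseline_response_s
met_target = actual_reduction >= target_reduction
print(f"reduction: {actual_reduction:.0%}, target met: {met_target}")
```

The code is trivial by design. The hard part, and the part that usually gets skipped, is capturing the baseline before the tool goes live.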

One reason AI ROI is hard to prove is that success criteria are often retrofitted after deployment.

Project management discipline forces clarity upfront.

3. Use Real Prioritization Frameworks

Not every AI use case deserves immediate investment.

Some are high-impact and low-effort. Others are complex, data-intensive, and unlikely to generate meaningful value in the short term.

A simple framework like RICE (Reach, Impact, Confidence, Effort) can bring structure to AI prioritization. It forces teams to ask:

  • How many users or customers are affected?
  • How significant is the potential impact?
  • How confident are we in our assumptions?
  • What is the true implementation effort?

AI projects often look deceptively simple in early demos. The effort expands once data cleaning, security review, integration, training, and oversight are included.

A structured prioritization exercise slows down the rush just enough to improve decision quality.

4. Don’t Underestimate Change Management

AI changes how work is done, and that alone can create uncertainty in the team. 

If employees see AI as a threat rather than a tool, adoption suffers. If roles shift without clear communication, friction increases. If training is minimal, performance will decline (or stagnate).

Change management is not a soft add-on to AI implementation; it’s a firm requirement.

Clear communication about purpose, guardrails, and expectations helps reduce resistance. Involving stakeholders early improves buy-in. Providing training improves output quality.

Many AI failures attributed to “technology limitations” are actually adoption failures.

5. Protect Institutional Knowledge

One of the more subtle risks in AI implementation is the loss of contextual knowledge.

AI systems perform best when they’re supported by people who understand edge cases, historical decisions, and exceptions. If experienced team members are removed too quickly (especially in the name of anticipated automation gains) the organization can lose critical expertise.

In early-stage implementations, human oversight is often more important, not less.

It’s reasonable to pursue efficiency. But it’s risky to assume that projected automation gains are stable before the system has proven itself in real operating conditions.

A phased approach (validate performance first, then adjust staffing) is more aligned with long-term stability.

6. Track Benefits After Go-Live

Implementation is not the finish line. It’s more of a new start. 

Once an AI system is live, benefits realization should be tracked intentionally. Are the projected time savings materializing? Has quality improved or declined? Are there unintended consequences?

A six- or twelve-month post-implementation review can be revealing. Some gains hold. Others erode. New bottlenecks emerge.
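A benefits review can be as simple as putting projected and realized numbers side by side. All figures below are invented for illustration:

```python
# Hypothetical six-month review: percent of each projected benefit actually realized.
projected  = {"hours_saved_per_week": 40.0, "tickets_auto_resolved_pct": 30.0}
actual_6mo = {"hours_saved_per_week": 28.0, "tickets_auto_resolved_pct": 33.0}

review = {
    metric: round(actual_6mo[metric] / target * 100, 1)  # % of projection realized
    for metric, target in projected.items()
}
# Values over 100% are gains that held or grew; values under 100% are erosion --
# both are findings worth recording, not failures to hide.
```

The table itself is less important than the habit: the same metrics, measured the same way, at a scheduled point after go-live.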

Without structured follow-up, organizations risk declaring success too early or abandoning initiatives without understanding what actually went wrong.

AI implementation should be treated as an ongoing capability build, not a one-time launch.

Technology maturity and organizational maturity don’t always move at the same speed. The research suggests many enterprises are still learning how to scale AI effectively. That’s not surprising. Major shifts in operating models rarely happen cleanly.

The opportunity here isn’t to move faster than everyone else. It’s to move more deliberately.

None of those practices are new. 

  • Strong integration management. 
  • Clear success metrics. 
  • Disciplined prioritization. 
  • Real change management. 
  • Post-launch evaluation.

They’re core project management fundamentals.

AI may be new. But the principles required to implement it well are not.


References

  1. Feldman Z, et al. Artificial intelligence project failure rates and causes. RAND Corporation. 2023. https://www.rand.org/pubs/research_reports/RRA2680-1.html
  2. IDC. Nearly 28% of AI projects are abandoned before completion. Reported by InfoWorld. 2023. https://www.infoworld.com/article/3713938/idc-nearly-28-percent-of-ai-projects-abandoned-before-completion.html
  3. Reporting on generative AI ROI challenges. Tom’s Hardware. 2024. https://www.tomshardware.com/tech-industry/artificial-intelligence/95-percent-of-generative-ai-implementations-in-enterprise-have-no-measurable-impact-on-p-and-l-says-mit-flawed-integration-key-reason-why-ai-projects-underperform
  4. Project Management Institute. A Guide to the Project Management Body of Knowledge (PMBOK Guide). 7th ed. PMI; 2021.
