Guides & Best Practices
November 18, 2025

Webinar Recap: Practical AI for Total Rewards Leaders—What Comp Teams Need Now

Giac Soliman and Haris Ikram unpack the gap between AI hype and practical adoption in rewards, sharing real-world use cases, responsible AI frameworks, and the emerging skills compensation teams need heading into 2026.

Emma Biskupiak
Emma's a straight shooter with a passion for telling stories and making the workplace a better place.

AI isn’t replacing compensation teams—but it is redefining what they can do.

That was the core message of our October 18 webinar, where Haris Ikram sat down with global compensation leader and CompTech Roundup creator Giac Soliman for a fast-moving, deeply practical conversation on how AI is reshaping rewards work today.

With ChatGPT and Claude both down that morning, the theme became even more real:
“Today we’re relying on Giac-GPT,” Haris joked—a reminder of just how quickly AI has become infrastructure for comp and HR teams.

Below is a full recap of the insights, frameworks, and real-world examples shared during the session.

Why AI Adoption Still Feels Hard for Comp Teams

Despite the headlines, most Total Rewards teams are still in the early stages of AI adoption. Giac explained why.

1. Teams are overwhelmed—not uninterested.

We’re all being flooded with AI news and product releases—sometimes daily.
“It’s just an extra task now to keep up with the world and upskill,” Giac noted.

For reward teams already buried in cycles, benchmarking, job architecture, and stakeholder alignment, adding “learn AI” to the pile can feel impossible.

2. Compensation data is uniquely sensitive.

Comp involves high-stakes, high-risk data—salary history, equity, location, performance, and market data layered together.

That means:

  • More stakeholders
  • More compliance considerations
  • More scrutiny
  • More time to align internally

As Giac put it:
“We’re affecting people’s livelihoods… the more stakeholders involved, the longer it will take.”

3. Leaders fear the unknown—and the unrealistic.

Haris shared that leadership expectations often swing between two extremes:

  • AI will solve everything instantly, or
  • AI is too risky to touch

Neither is true.

AI doesn’t magically eliminate work. But it can transform it—if you have good data, clear workflows, and human judgment guiding it.

4. The biggest risk isn’t AI—it’s unmanaged AI.

Giac was direct on this point:

“The biggest risk right now isn’t AI in rewards—it’s decentralized, bottom-up AI use.”
Analysts pasting comp data into random tools. Unvetted prompts. No guardrails. No audit trails.

This is where real exposure comes from—not from well-governed tools.

Why Teams Can’t Stay in ‘Wait-and-See’ Mode

Both speakers agreed: waiting carries more risk than experimenting.

Teams that delay adoption fall behind in:

  • Efficiency: manual analysis that AI can complete in seconds
  • Consistency: pay decisions that vary team to team
  • Fairness: missing early signals around compression, equity, or outliers
  • Agility: inability to answer “what if?” questions on the fly

As Haris put it:
“AI is the great equalizer—even 200-person orgs can now get decision-quality analytics.”

A Simple Framework for Using AI Responsibly in Total Rewards

Giac shared a practical, repeatable structure for assessing and rolling out AI—one rooted in responsible, human-centered decision-making.

1. Start with Purpose

Before touching pay data, ask:

  • What problem should this model actually solve?
  • Is the data clean enough?
  • Is the job architecture stable?
  • Are we predicting past bias forward?

“One thing I always remind myself: AI should help me decide. It shouldn’t make the decision.” —Giac

2. Governance & Security

Compensation leaders—not just Legal and Security—must be part of the risk review.

Key considerations:

  • Where does the data live?
  • What PII is used?
  • Who has access?
  • Is there a DPIA? Audit logs?
  • How would the company respond to a failure?

3. Explainability

If you can’t explain an AI-generated pay recommendation in plain language, you can’t defend it.

A good explanation sounds like:

“Based on this role, performance, and market data, the model suggests X because Y.”

4. Human-in-the-loop Checkpoints

AI accelerates analysis—but humans still validate context, fairness, and alignment to pay philosophy.

Key checkpoints:

  • Outliers
  • High-impact decisions
  • Exceptions and overrides
  • Compliance-sensitive actions

“AI is an assistant, not an authority,” Giac emphasized.
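For teams wondering what these checkpoints look like in practice, here is a minimal sketch of a review-routing rule that escalates AI recommendations matching any of the checkpoints above. The field names and the 15% threshold are invented for illustration, not drawn from any specific tool:

```python
# Hypothetical sketch: route an AI pay recommendation to human review
# whenever it trips one of the checkpoints above. Field names and the
# 15% increase threshold are illustrative assumptions.
def needs_human_review(rec):
    """rec: dict describing one AI-generated pay recommendation."""
    checks = [
        rec.get("is_outlier", False),            # outliers
        rec.get("increase_pct", 0) > 0.15,       # high-impact decisions
        rec.get("is_exception", False),          # exceptions and overrides
        rec.get("compliance_sensitive", False),  # compliance-sensitive actions
    ]
    return any(checks)

routine = {"increase_pct": 0.04}
large_bump = {"increase_pct": 0.20}
print(needs_human_review(routine), needs_human_review(large_bump))  # prints: False True
```

The point of a rule like this isn't sophistication; it's that the escalation criteria are written down, auditable, and owned by humans.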

5. Iterate Like an Agile Team

Here Haris added a key point:
AI deployment is not a waterfall project—it’s iterative.

  • Start small
  • Show wins
  • Communicate frequently
  • Add new tasks gradually

This mirrors the CandorIQ rollout philosophy: crawl → walk → run.

Where AI Is Already Working in Rewards

The webinar highlighted multiple real-world use cases—many already in place at organizations today.

1. Manager self-service for job creation

Automating:

  • Job matching
  • Benchmarking
  • Initial range guidance
  • Drafting role descriptions

This removes hundreds of repetitive back-and-forth interactions.

2. Offer optimization

AI analyzes:

  • Market movement
  • Pay philosophy
  • Internal equity
  • Candidate experience

…and gives comp teams a first-pass recommendation.

3. Anomaly detection in pay equity

One of the clearest value-adds Giac cited:

“The model can fetch the tea for me—I still decide what to do with it.”

AI accelerates the math; humans drive decisions.
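For the technically curious, "fetching the tea" can be as simple as flagging salaries that sit far from their peers at the same level. This sketch is illustrative only; the field names and the 25% deviation threshold are assumptions, not a description of any vendor's model:

```python
# Illustrative sketch: flag salaries that deviate sharply from the
# median of their job level. Field names (job_level, salary) and the
# 25% threshold are hypothetical.
from statistics import median

def flag_outliers(employees, threshold=0.25):
    """Return employees whose salary is more than `threshold` away
    from their job level's median salary."""
    by_level = {}
    for e in employees:
        by_level.setdefault(e["job_level"], []).append(e["salary"])

    flagged = []
    for e in employees:
        med = median(by_level[e["job_level"]])
        if med and abs(e["salary"] - med) / med > threshold:
            flagged.append(e)
    return flagged

staff = [
    {"name": "A", "job_level": "L3", "salary": 100_000},
    {"name": "B", "job_level": "L3", "salary": 102_000},
    {"name": "C", "job_level": "L3", "salary": 160_000},  # far above peers
]
print([e["name"] for e in flag_outliers(staff)])
```

The flag is the start of the conversation, not the end: a human still decides whether an outlier is a data error, a justified premium, or an equity issue to fix.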

4. Drafting cycle communications

Teams are already using AI to:

  • Draft manager FAQs
  • Summarize cycle rules
  • Generate first-cut explanations
  • Provide guidance tailored to different personas

This is high-volume, low-drama work—perfect for AI.

5. Budget modeling and forecasting

Haris highlighted CandorIQ’s work here:

  • “Ask and answer” analytics
  • Pay vs. performance insights
  • Quick scenario modeling
  • Identifying compression or high-cost teams
  • Predictive payroll forecasting

All without needing a dedicated data science team.
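A "what if?" scenario at its core is just arithmetic applied quickly and consistently. As a hedged illustration (the salaries and merit percentage below are made up), modeling the payroll impact of a flat merit increase might look like:

```python
# Illustrative scenario sketch: payroll impact of a flat merit increase.
# All numbers are invented for the example.
def merit_scenario(salaries, merit_pct):
    """Return (current payroll, payroll after a flat merit increase)."""
    before = sum(salaries)
    after = sum(s * (1 + merit_pct) for s in salaries)
    return before, after

salaries = [95_000, 110_000, 130_000]
before, after = merit_scenario(salaries, 0.04)  # model a 4% merit budget
print(f"Budget impact: {after - before:,.0f}")
```

Real tools layer on proration, currencies, and targeting by performance or range position, but the value is the same: leaders can test a number in seconds instead of rebuilding a spreadsheet.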

What’s Coming Next: AI’s Role in Rewards by 2026

When Haris asked about the future, Giac's answer was clear:

1. Judgment becomes the bottleneck—not analytics.

Historically, comp bottlenecks were:

  • Excel skills
  • Data literacy
  • Regression modeling

With AI handling more of this, the constraint shifts to human judgment.

“The sweet spot isn’t who writes the fanciest prompt—it’s who exercises good judgment with AI.” —Giac

2. More predictive insights

Forecasts like:

  • Retention risks
  • Budget pressure
  • Range compression
  • Future payroll impact

…will become standard.

3. Tighter integration across HR + Finance + Talent

AI will connect decisions across:

  • Performance
  • Headcount
  • Hiring
  • Compensation
  • Financial planning

The walls between “reward decisions” and “business decisions” will continue to thin.

4. Democratization of advanced analytics

Smaller teams will suddenly have access to insights once reserved for enterprise orgs with data science departments.

The One Takeaway for Rewards Leaders

We closed the session by asking both speakers for one mindset shift rewards leaders can bring back to their teams tomorrow.

Giac’s Takeaway

Start experimenting—with guardrails.
Don’t switch off your judgment.

“AI is everywhere. The risk isn’t AI—it’s brain rot.”


Leaders who use AI without judgment will be replaced by those who use both well.

Haris’ Takeaway

AI won’t replace you.
But people who use AI will replace people who don’t.


And the teams who learn faster—who build small wins, iterate, and use AI as an analyst—will be the ones who gain the most strategic ground.

Watch the Recording & Continue Learning

If you missed the live session, the full recording is available now.

You can also reach out for a product demo or a free benchmarking data sample.