Why the Supply Chain Instinct That Made You Good at Your Job Is Holding You Back From the Next Version of It
By Julie Van de Kamp | Chief Marketing & Operations Officer, SONAR / FreightWaves
I gave a speech recently to a room full of supply chain executives — mostly manufacturers. The talk covered AI in freight: what SONAR is doing with it, what C.H. Robinson and Uber Freight are doing with it, what the AI data center build-out means for flatbed markets, where agent-based automation is heading.
It was well received. People stayed after. There were good questions.
And then I asked the room how many of them were using AI tools — at work or in their personal lives.
Almost no hands went up.
I don’t say that to embarrass anyone. I say it because it was genuinely clarifying. These are smart, experienced professionals running complex operations for major companies. And somewhere between knowing that AI is transforming their industry and actually opening a chat window, there’s a gap the size of a freight market.
One executive put it perfectly: “Our entire job is to avoid risk. We’re trained to see new, unproven things as threats. That’s how we protect our supply chains. And AI looks like a risk.”
He’s right. And he’s wrong. Both at the same time.
The Risk Calculus Has Flipped
For most of your career, risk aversion in technology meant waiting. Let someone else be the early adopter. Let the bugs get worked out. Let the industry develop standards. Let legal catch up. That was often the right call — and it served supply chain organizations well.
That calculus no longer applies to AI. Not because AI is risk-free. It isn’t. But because the risk profile has changed in a way that the traditional instinct doesn’t account for.
The question used to be: Is the risk of adopting this tool greater than the risk of not adopting it?
For most enterprise software over the past 30 years, the answer was often “yes, wait.” The risk of early adoption — implementation costs, disruption, vendor instability, workflow chaos — was real and immediate. The risk of waiting was theoretical and distant.
AI in 2026 inverts that equation completely.
The risk of adoption is real but manageable. The risk of not adopting is now concrete, measurable, and actively playing out on earnings calls.
C.H. Robinson — the largest freight broker in the world — reported a 40% productivity increase since 2022 driven by AI. Their Q3 2025 adjusted operating margin expanded 680 basis points. EPS grew 67.5%. During the longest freight market downturn in modern memory. CEO Dave Bozeman told investors explicitly: “We are not waiting for a market recovery to improve our financial results.”
UPS announced 48,000 job cuts and cited AI-enabled operations as the reason they can move more volume with fewer people.
These are not hypothetical future threats. These are your competitors’ last four earnings calls.
The supply chain professionals who are waiting for AI to be “proven” are watching the proof arrive in real time — in the form of competitors with structurally lower costs and faster decision cycles. The gap compounds every quarter.
So yes, your risk aversion is correct. You are right to be cautious. The risk is enormous. It’s just not the risk you think it is.
What the Data Privacy Concern Actually Is — and Isn’t
The most specific objection I heard in that room was data privacy. And I want to take it seriously, because it’s not irrational — it’s just aimed at the wrong tier of AI use.
Here’s what most supply chain executives imagine when they picture “using AI at work”: uploading confidential contracts, customer data, proprietary pricing models, or supplier agreements to some cloud service and having that information scraped, stored, or exposed. That is a legitimate concern. Your IT and legal teams are right to govern it carefully. Enterprise AI deployment at that level requires proper vetting, vendor contracts, data processing agreements, and probably a conversation with your CISO.
But that’s Tier 3. You don’t have to start at Tier 3. You don’t even have to go near it to capture most of the value.
Three Tiers of Adoption — Starting With Zero Exposure
Think about AI adoption the way you’d think about any risk mitigation framework: start with what’s controllable, build confidence, expand.
Tier 1: Personal use, zero company data. Ask Claude or ChatGPT to summarize a trade publication article. Have it explain a new tariff regulation in plain English. Use it to prep for a supplier negotiation — describe the situation in general terms and ask it to anticipate objections. Draft a framework for a supplier scorecard from scratch. None of this involves company data. None of it touches anything proprietary. The only thing you’re risking is 20 minutes of your time. This is where you build the muscle: how to prompt, what these tools can do, where they fall short.
Most executives who do this report the same experience: they ask a question they’d normally spend an hour researching, and they have a solid starting point in 90 seconds. That moment — not a conference presentation, not an earnings call, not an article — is when the shift happens. It’s visceral in a way that no amount of data can manufacture.
Tier 2: Sanitized business use. Now you’re using AI on information that’s already public or already anonymized. Industry trend analysis. Risk assessment templates. Supplier evaluation frameworks. Meeting prep. Scenario planning against tariff changes using publicly available rate data. You’re generating real work product — things that save hours — without exposing anything sensitive. This is where most of the productivity gains live for individual contributors and leaders.
Tier 3: Enterprise-grade, IT-approved tools. Microsoft Copilot inside your existing M365 environment, where your data never leaves your company’s tenant. SONAR’s AI layer operating on top of freight market data that was already external to your organization. Vendor-specific tools your legal and IT teams have reviewed. By the time you get here, you’ve already built enough fluency and confidence that the governance conversation becomes practical rather than existential. You know what you want the tool to do. You can evaluate whether the vendor’s data handling meets your standards. You’re not making a faith-based decision — you’re making an informed one.
The critical message: you do not have to solve the data governance question to start. The vast majority of the value is accessible at Tier 1 and Tier 2. Today. Without a procurement process, a vendor agreement, or a CISO conversation.
The Competency Gap Is More Dangerous Than the Tool
Here’s the thing nobody talks about in data privacy discussions: the real long-term risk isn’t a data breach. It’s a competency gap.
AI fluency is becoming a professional skill the way spreadsheet literacy became one in the 1990s. There was a moment — probably around 1993 or 1994 — when some financial professionals were still doing analysis by hand and calling it more reliable than “those computer models.” They weren’t wrong that the computer could produce errors. They were wrong about which error was more dangerous.
The same dynamic is playing out now in supply chain. The professionals who develop AI fluency over the next two years will be materially more productive, faster to insight, and better at managing complexity than those who don’t. Not because AI does their thinking for them — it doesn’t, and the tools are explicit about this — but because they’ve learned to use a force multiplier that their peers haven’t.
This is especially acute in supply chain, where the volume of signals — rates, tender rejections, capacity indicators, port dwell times, geopolitical events, weather — has always exceeded any individual’s ability to process. AI doesn’t replace the supply chain professional’s judgment. It expands the surface area of what that judgment can be applied to.
The executive who can ask a plain-language question about their freight exposure to a tariff scenario and get a synthesized answer in 60 seconds is not doing the same job as the one waiting for an analyst to build a spreadsheet. They are operating at a different level. And that gap will widen, not narrow.
Start Somewhere Embarrassingly Small
The most common reason experienced professionals don’t start with AI isn’t privacy. It’s a subtler form of resistance: the feeling that if you’re going to do something, you should do it properly, at scale, with a plan.
That instinct — which again is a supply chain instinct — is working against you here.
The right move is to start somewhere embarrassingly small. Ask it to rewrite one email. Have it summarize one industry report you’ve been meaning to read. Ask it one question you’d normally Google. The quality of that first interaction will either confirm your skepticism or shift it. Either outcome is useful.
The executives in that room who told me they don’t use AI — they’re not irreparably behind. Not yet. But the window for catching up comfortably is closing. The C.H. Robinson and Uber Freight case studies aren’t projections. They’re documented outcomes that are already flowing through to competitive positioning.
Your risk aversion has served your supply chains well. Point it in the right direction.
The risk is not adopting AI.
Julie Van de Kamp is the Chief Marketing & Operations Officer at SONAR, the world’s largest freight market intelligence platform. SONAR tracks $125 billion in freight across 135+ markets and delivers real-time intelligence across all modes of transportation.
For more research on AI and freight market intelligence, visit GoSONAR.com.