“Dear Rich,
I work for a traditional business, partnership-led, conservative by culture, and very slow to change. I have made my peace with that for the most part because the work is interesting and I have reasonable autonomy within marketing.
My current frustration is all thanks to AI. Over the past twelve months I have watched peers in other companies claim they are trialling it all over the place. I know that a lot of the stuff we hear from people on stage is hot air, but I do want to get my team at least playing with the tools that make their lives easier.
My team wants to move. I want to move. But every tool we try to adopt hits a wall with IT. The procurement process alone takes four to five months. We are yet to have a tool actually signed off. Two tools have been rejected outright on data security grounds with no real explanation beyond a blanket policy about third party data processing. There are zero approved AI tools available in the company.
I have tried going through the proper channels. I have tried building a business case. I have tried getting IT to come to the table early. Nothing moves at any speed.
I do not think IT are bad people. But I do think they are applying yesterday’s risk framework to tomorrow’s tools, and the cost to marketing is real and growing.
Any advice?”
Sarah, London
Rich’s reply
Sarah, I have certainly had my run-ins with IT over the years but, to be fair, they are not wrong to be cautious.
That is not the same as saying their current approach is right, or that the pace of their review process is acceptable, or that a blanket rejection with no explanation is a reasonable response to a well-constructed business case. None of those things are right. But the underlying instinct, that AI tools carry data risks that need to be properly understood before they touch client information, is a legitimate one. Especially in professional services, where client confidentiality is not a compliance checkbox but the foundation of the entire commercial relationship.
How you frame this internally matters enormously. If you go into the next conversation with IT treating them as obstructionists or laggards, they will become more entrenched. If you go in treating their concerns as real and worth solving together, you have a much better chance of finding a path through.
Understand what IT are actually afraid of
Most IT departments blocking AI adoption are not doing so because they dislike progress. They are geeks at heart. They love new toys. But they are probably blocking it because they have been burned before, or because they are accountable for something going wrong in a way that marketing is not. A data breach caused by an unvetted third party tool will land on the CISO, not on you.
Before your next conversation, try to understand specifically what the objection is. “Third party data processing” is a category of concern, not an explanation. Press for the detail. Is it about client data being ingested by the tool? Is it about data residency? Is it about the tool’s terms of service and what the vendor does with inputs? Is it about SOC 2 compliance or ISO 27001 certification? Is it a fear they will be lumbered with the cost? Or is it simply that they are overworked, with every country and every function making new requests and no bandwidth left to give?
Each of these is a different problem with a different solution. If you do not know which one you are actually solving, you cannot solve it.
The IT department that says no to everything is usually the one that has never been asked to help design a yes.
Take someone from IT out for a coffee
Before you send another formal request or build another business case, grab someone from IT and get a coffee somewhere away from the office.
Ask them their views on AI adoption and how ready the company is. Ask how other companies have solved it and what good governance looks like in practice. Let them educate you on the context you do not have, whether that is genuine concerns about integration challenges, the fact that the CIO is retiring soon, or simply that the team is at capacity with current priorities. Until you know that context, it is hard to work around it.
Share what you have been reading about how the market has matured. Enterprise-grade tools now operate inside existing data boundaries rather than outside them. Several leading AI platforms offer SOC 2 Type II certification, data processing agreements, and explicit contractual commitments about how inputs are handled. Some of the most data-sensitive professional services firms in the world, large accountancy practices and major law firms, are adopting AI at scale. If the risk were truly unmanageable, those businesses would not be moving.
The goal of the coffee is not to win an argument. It is to understand what you are actually dealing with, and to give IT the experience of being consulted rather than pressured.
Use internal tools to warm the function up
If IT are blocking external AI tools on data security grounds, the most pragmatic starting point is a tool they have almost certainly already cleared. If your business runs on Microsoft 365, Copilot operates within your existing tenant boundary. Your data does not leave your environment, and Microsoft states in its own enterprise documentation that your inputs are not used to train its foundation models. Copilot is an extension of an environment IT already governs, not a new risk surface.
Starting there serves two purposes. It gets your team using AI in a structured, governed way immediately. And it gives IT direct, observable experience of an enterprise-grade AI tool behaving exactly as their security policies require. That experience does more to reduce institutional fear than any amount of documentation or business case writing.
Once IT have seen Copilot work safely inside your environment, the conversation about additional tools changes. You are no longer asking them to trust a category they are unfamiliar with. You are asking them to evaluate specific tools against a framework they have now seen in practice. That is a much smaller ask.
The goal in the short term is not to win the argument about AI. It is to give IT a safe, observable experience of it that makes the next conversation easier. Help them break the seal.
Request a dedicated IT business partner
This is one of the most effective structural moves available to you, and it tends to get overlooked because it does not feel like a tactical fix.
Request that IT assign a dedicated business partner aligned to marketing. Not a helpdesk contact. A named person whose remit includes understanding what marketing is trying to do, helping to navigate procurement and security processes, and acting as an internal advocate within IT for the tools you need.
IT get visibility into everything marketing is exploring before it becomes a formal request, which reduces the feeling of being ambushed. Marketing gets someone who understands the policies and philosophies IT operates within, which means fewer wasted applications. And over time, you build genuine rapport with someone inside the function who can argue for you in rooms you are not in.
The business partner becomes your insider. That is not manipulation. It is how large organisations are supposed to work, and most IT functions respond positively to being asked for partnership rather than permission.
Propose a sandboxed pilot rather than full adoption
If the procurement and security review process is the bottleneck, propose something smaller. A sandboxed pilot, run on non-sensitive internal data only, with no client information involved, is a much easier thing for IT to approve than a full enterprise rollout.
Define the scope tightly. One tool. One use case. Three months. Agree upfront what data the tool will and will not touch. Offer to have IT involved in the setup so they can see exactly how it works rather than reviewing it from a distance.
A pilot does two things. It gets you moving. And it gives IT direct, controlled experience of the tool, which tends to reduce fear far more effectively than any amount of documentation.
The cost of doing nothing is not zero
There is one more argument worth having ready, not to use aggressively, but to deploy if the conversation stalls on risk. IT’s caution is framed around the risk of adopting AI tools. But there is an equally real risk on the other side that rarely gets named.
The Larridin State of Enterprise AI 2025 report found that 67 percent of organisations admit they do not have full visibility into which AI tools their employees are already using. When businesses block sanctioned adoption, people do not stop using AI. They use personal accounts, free tools, and consumer-grade applications that carry none of the enterprise data protections IT are trying to enforce. The risk IT are trying to prevent does not go away when they say no. It goes underground.
A controlled, IT-approved pilot with proper data governance is categorically safer than the alternative. That reframe, from ‘AI adoption is risky’ to ‘uncontrolled shadow AI is the real risk’, tends to land well with security-minded leaders because it speaks their language. You are not asking IT to lower their guard. You are asking them to channel it more effectively.
Build the coalition before the escalation
A business case presented by marketing to IT is a marketing document. A business case co-authored by marketing, finance, and a senior business leader or two carries significantly more weight.
Spend two weeks quietly building internal support. Find people who are already frustrated by the pace of change and get them to say so in the room. Find out whether your CFO has a view on the competitive cost of inaction. A finance voice saying “we are losing ground and that has a number attached to it” changes the dynamic in a way that marketing saying “our content is slower than competitors” simply does not.
This is not politics for its own sake. It is making sure that the conversation IT is having reflects the full weight of the business need, not just the enthusiasm of one department.
If none of this moves things, escalate deliberately
Some IT functions in traditional businesses are structurally risk-averse in a way that no amount of coalition building will fully overcome. If you have genuinely tried the collaborative approach, brought the market evidence, proposed a sandboxed pilot, and built cross-functional support, and the answer is still no with no credible path to yes, then escalation to the CEO is not a failure of diplomacy. It is the appropriate next step.
But escalate with a solution, not a complaint. Do not go to the CEO and say IT are blocking us. Go with a fully formed proposal: here is the tool, here is the use case, here is how comparable firms have handled the security question, here is the pilot structure, here is what it costs, here is what we stand to gain, and here is what we are currently losing by waiting. Link the solution to a positive gain and inaction to a negative effect, on pipeline, on win rates, on team productivity.
At that point you are not asking the CEO to override IT. You are asking them to make a business decision with full information. That is a very different ask, and a much easier one for a senior leader to act on.
Going to the CEO empty-handed is a complaint. Going with a fully costed, de-risked proposal is a recommendation. Know the difference before you walk in.
The short answer
Take someone from IT for a coffee and find out what you are actually dealing with. Start with tools already inside your approved environment, Copilot being the obvious first step, to give IT a safe, observable experience of enterprise-grade AI. Request a dedicated IT business partner who can become your internal advocate. Propose a sandboxed pilot that keeps the risk surface small. Use the shadow AI argument to reframe inaction as the greater risk. Build a cross-functional coalition so your business case carries more than marketing’s voice.
And if the collaborative route has been genuinely exhausted, escalate to the CEO with a fully formed proposal rather than a grievance. You are not asking for permission to do something reckless. You are asking for support to do something your competitors are already doing.
The relationship with IT is worth preserving. But not at the cost of your team standing still while the market moves.
And if your CEO still says no, well, come send me a note!
Onwards,
Rich
Got a question for Rich? Email it to editor@b2bmarketing.com
“Dear Rich,
I work for a traditional business, partnership-led, conservative by culture, and very slow to change. I have made my peace with that for the most part because the work is interesting and I have reasonable autonomy within marketing.
My current frustration is all thanks to AI. Over the past twelve months I have watched peers in other companies claim they are trialling it all over the place. I know that a lot of the stuff we hear from people on stage is hot air, but I do want to get my team at least playing with the tools that make their lives easier.
My team wants to move. I want to move. But every tool we try to adopt hits a wall with IT. The procurement process alone takes four to five months. We are yet to have a tool actually signed off. Two tools have been rejected outright on data security grounds with no real explanation beyond a blanket policy about third party data processing. There are zero IT tools available in the company.
I have tried going through the proper channels. I have tried building a business case. I have tried getting IT to come to the table early. Nothing moves at any speed.
I do not think IT are bad people. But I do think they are applying yesterday’s risk framework to tomorrow’s tools, and the cost to marketing is real and growing.
Any advice?”
Sarah, London
Rich’s reply
Sarah, I have certainly had my run-ins with IT over the years but, to be fair, they are not wrong to be cautious.
That is not the same as saying their current approach is right, or that the pace of their review process is acceptable, or that a blanket rejection with no explanation is a reasonable response to a well-constructed business case. None of those things are right. But the underlying instinct, that AI tools carry data risks that need to be properly understood before they touch client information, is a legitimate one. Especially in professional services, where client confidentiality is not a compliance checkbox but the foundation of the entire commercial relationship.
How you frame this internally matters enormously. If you go into the next conversation with IT treating them as obstructionists or laggards, they will become more entrenched. If you go in treating their concerns as real and worth solving together, you have a much better chance of finding a path through.
Understand what IT are actually afraid of
Most IT departments blocking AI adoption are not doing so because they dislike progress. They are geeks at heart. They love new toys. But they are probably blocking it because they have been burned before, or because they are accountable for something going wrong in a way that marketing is not. A data breach caused by an unvetted third party tool will land on the CISO, not on you.
Before your next conversation, try to understand specifically what the objection is. “Third party data processing” is a category of concern, not an explanation. Press for the detail. Is it about client data being ingested by the tool? Is it about data residency? Is it about the tool’s terms of service and what the vendor does with inputs? Is it about SOC 2 compliance or ISO 27001 certification? Is it a fear they will be lumbered with the cost? Or is it simply that they are overworked, with every country and every function making new requests and no bandwidth left to give?
Each of these is a different problem with a different solution. If you do not know which one you are actually solving, you cannot solve it.
The IT department that says no to everything is usually the one that has never been asked to help design a yes.
Take someone from IT out for a coffee
Before you send another formal request or build another business case, grab someone from IT and get a coffee somewhere away from the office.
Ask them their views on AI adoption and how ready the company is. Ask how other companies have solved it and what good governance looks like in practice. Let them educate you on the context you do not have. Whether that is genuine concerns about integration challenges, the fact that the CIO is retiring soon, or simply that the team is at capacity with current priorities. Until you know that context, it is hard to work around it.
Share what you have been reading about how the market has matured. Enterprise-grade tools now operate inside existing data boundaries rather than outside them. Several leading AI platforms offer SOC 2 Type II certification, data processing agreements, and explicit contractual commitments about how inputs are handled. Some of the most data-sensitive professional services firms in the world, large accountancy practices and major law firms, are adopting AI at scale. If the risk were truly unmanageable, those businesses would not be moving.
The goal of the coffee is not to win an argument. It is to understand what you are actually dealing with, and to give IT the experience of being consulted rather than pressured.
Use internal tools to warm the function up
If IT are blocking external AI tools on data security grounds, the most pragmatic starting point is a tool they have almost certainly already cleared. Microsoft Copilot operates within your existing Microsoft 365 tenant boundary. Your data does not leave your environment. It does not use your inputs to train external models. Microsoft’s own documentation confirms this, and it has been independently verified by enterprise security analysts. Copilot is an extension of an environment IT already governs, not a new risk surface.
Starting there serves two purposes. It gets your team using AI in a structured, governed way immediately. And it gives IT direct, observable experience of an enterprise-grade AI tool behaving exactly as their security policies require. That experience does more to reduce institutional fear than any amount of documentation or business case writing.
Once IT have seen Copilot work safely inside your environment, the conversation about additional tools changes. You are no longer asking them to trust a category they are unfamiliar with. You are asking them to evaluate specific tools against a framework they have now seen in practice. That is a much smaller ask.
The goal in the short term is not to win the argument about AI. It is to give IT a safe, observable experience of it that makes the next conversation easier. Let's help them 'break the seal'.
Request a dedicated IT business partner
This is one of the most effective structural moves available to you, and it tends to get overlooked because it does not feel like a tactical fix.
Request that IT assign a dedicated business partner aligned to marketing. Not a helpdesk contact. A named person whose remit includes understanding what marketing is trying to do, helping to navigate procurement and security processes, and acting as an internal advocate within IT for the tools you need.
IT get visibility into everything marketing is exploring before it becomes a formal request, which reduces the feeling of being ambushed. Marketing gets someone who understands the policies and philosophies IT operates within, which means fewer wasted applications. And over time, you build genuine rapport with someone inside the function who can argue for you in rooms you are not in.
The business partner becomes your insider. That is not manipulation. It is how large organisations are supposed to work, and most IT functions respond positively to being asked for partnership rather than permission.
Propose a sandboxed pilot rather than full adoption
If the procurement and security review process is the bottleneck, propose something smaller. A sandboxed pilot, run on non-sensitive internal data only, with no client information involved, is a much easier thing for IT to approve than a full enterprise rollout.
Define the scope tightly. One tool. One use case. Three months. Agree upfront what data the tool will and will not touch. Offer to have IT involved in the setup so they can see exactly how it works rather than reviewing it from a distance.
A pilot does two things. It gets you moving. And it gives IT direct, controlled experience of the tool, which tends to reduce fear far more effectively than any amount of documentation.
The cost of doing nothing is not zero
There is one more argument worth having ready, not to use aggressively, but to deploy if the conversation stalls on risk. IT’s caution is framed around the risk of adopting AI tools. But there is an equally real risk on the other side that rarely gets named.
The Larridin State of Enterprise AI 2025 report found that 67 percent of organisations admit they do not have full visibility into which AI tools their employees are already using. When businesses block sanctioned adoption, people do not stop using AI. They use personal accounts, free tools, and consumer-grade applications that carry none of the enterprise data protections IT are trying to enforce. The risk IT is trying to prevent does not go away when they say no. It goes underground.
A controlled, IT-approved pilot with proper data governance is categorically safer than the alternative. That reframe, from ‘AI adoption is risky’ to ‘uncontrolled shadow AI is the real risk’, tends to land well with security-minded leaders because it speaks their language. You are not asking IT to lower their guard. You are asking them to channel it more effectively.
Build the coalition before the escalation
A business case presented by marketing to IT is a marketing document. A business case co-authored by marketing, finance, and a senior business leader or two carries significantly more weight.
Spend two weeks quietly building internal support. Find people who are already frustrated by the pace of change and get them to say so in the room. Find out whether your CFO has a view on the competitive cost of inaction. A finance voice saying “we are losing ground and that has a number attached to it” changes the dynamic in a way that marketing saying “our content is slower than competitors” simply does not.
This is not politics for its own sake. It is making sure that the conversation IT is having reflects the full weight of the business need, not just the enthusiasm of one department.
If none of this moves things, escalate deliberately
Some IT functions in traditional businesses are structurally risk-averse in a way that no amount of coalition building will fully overcome. If you have genuinely tried the collaborative approach, brought the market evidence, proposed a sandboxed pilot, and built cross-functional support, and the answer is still no with no credible path to yes, then escalation to the CEO is not a failure of diplomacy. It is the appropriate next step.
But escalate with a solution, not a complaint. Do not go to the CEO and say IT are blocking us. Go with a fully formed proposal: here is the tool, here is the use case, here is how comparable firms have handled the security question, here is the pilot structure, here is what it costs, here is what we stand to gain, and here is what we are currently losing by waiting. Link the solution to a positive gain and inaction to a negative effect, on pipeline, on win rates, on team productivity.
At that point you are not asking the CEO to override IT. You are asking them to make a business decision with full information. That is a very different ask, and a much easier one for a senior leader to act on.
Going to the CEO empty-handed is a complaint. Going with a fully costed, de-risked proposal is a recommendation. Know the difference before you walk in.
The short answer
Take someone from IT for a coffee and find out what you are actually dealing with. Start with tools already inside your approved environment, Copilot being the obvious first step, to give IT a safe, observable experience of enterprise-grade AI. Request a dedicated IT business partner who can become your internal advocate. Propose a sandboxed pilot that keeps the risk surface small. Use the shadow AI argument to reframe inaction as the greater risk. Build a cross-functional coalition so your business case carries more than marketing’s voice.
And if the collaborative route has been genuinely exhausted, escalate to the CEO with a fully formed proposal rather than a grievance. You are not asking for permission to do something reckless. You are asking for support to do something your competitors are already doing.
The relationship with IT is worth preserving. But not at the cost of your team standing still while the market moves.
And if your CEO still says no, well, come send me a note!
Onwards,
Rich
Got a question for Rich? Email it to editor@b2bmarketing.com
“Dear Rich,
I work for a traditional business, partnership-led, conservative by culture, and very slow to change. I have made my peace with that for the most part because the work is interesting and I have reasonable autonomy within marketing.
My current frustration is all thanks to AI. Over the past twelve months I have watched peers in other companies claim they are trialling it all over the place. I know that a lot of the stuff we hear from people on stage is hot air, but I do want to get my team at least playing with the tools that make their lives easier.
My team wants to move. I want to move. But every tool we try to adopt hits a wall with IT. The procurement process alone takes four to five months. We are yet to have a tool actually signed off. Two tools have been rejected outright on data security grounds with no real explanation beyond a blanket policy about third party data processing. There are zero IT tools available in the company.
I have tried going through the proper channels. I have tried building a business case. I have tried getting IT to come to the table early. Nothing moves at any speed.
I do not think IT are bad people. But I do think they are applying yesterday’s risk framework to tomorrow’s tools, and the cost to marketing is real and growing.
Any advice?”
Sarah, London
Rich’s reply
Sarah, I have certainly had my run-ins with IT over the years but, to be fair, they are not wrong to be cautious.
That is not the same as saying their current approach is right, or that the pace of their review process is acceptable, or that a blanket rejection with no explanation is a reasonable response to a well-constructed business case. None of those things are right. But the underlying instinct, that AI tools carry data risks that need to be properly understood before they touch client information, is a legitimate one. Especially in professional services, where client confidentiality is not a compliance checkbox but the foundation of the entire commercial relationship.
How you frame this internally matters enormously. If you go into the next conversation with IT treating them as obstructionists or laggards, they will become more entrenched. If you go in treating their concerns as real and worth solving together, you have a much better chance of finding a path through.
Understand what IT are actually afraid of
Most IT departments blocking AI adoption are not doing so because they dislike progress. They are geeks at heart. They love new toys. But they are probably blocking it because they have been burned before, or because they are accountable for something going wrong in a way that marketing is not. A data breach caused by an unvetted third party tool will land on the CISO, not on you.
Before your next conversation, try to understand specifically what the objection is. “Third party data processing” is a category of concern, not an explanation. Press for the detail. Is it about client data being ingested by the tool? Is it about data residency? Is it about the tool’s terms of service and what the vendor does with inputs? Is it about SOC 2 compliance or ISO 27001 certification? Is it a fear they will be lumbered with the cost? Or is it simply that they are overworked, with every country and every function making new requests and no bandwidth left to give?
Each of these is a different problem with a different solution. If you do not know which one you are actually solving, you cannot solve it.
The IT department that says no to everything is usually the one that has never been asked to help design a yes.
Take someone from IT out for a coffee
Before you send another formal request or build another business case, grab someone from IT and get a coffee somewhere away from the office.
Ask them their views on AI adoption and how ready the company is. Ask how other companies have solved it and what good governance looks like in practice. Let them educate you on the context you do not have. Whether that is genuine concerns about integration challenges, the fact that the CIO is retiring soon, or simply that the team is at capacity with current priorities. Until you know that context, it is hard to work around it.
Share what you have been reading about how the market has matured. Enterprise-grade tools now operate inside existing data boundaries rather than outside them. Several leading AI platforms offer SOC 2 Type II certification, data processing agreements, and explicit contractual commitments about how inputs are handled. Some of the most data-sensitive professional services firms in the world, large accountancy practices and major law firms, are adopting AI at scale. If the risk were truly unmanageable, those businesses would not be moving.
The goal of the coffee is not to win an argument. It is to understand what you are actually dealing with, and to give IT the experience of being consulted rather than pressured.
Use internal tools to warm the function up
If IT are blocking external AI tools on data security grounds, the most pragmatic starting point is a tool they have almost certainly already cleared. Microsoft Copilot operates within your existing Microsoft 365 tenant boundary. Your data does not leave your environment. It does not use your inputs to train external models. Microsoft’s own documentation confirms this, and it has been independently verified by enterprise security analysts. Copilot is an extension of an environment IT already governs, not a new risk surface.
Starting there serves two purposes. It gets your team using AI in a structured, governed way immediately. And it gives IT direct, observable experience of an enterprise-grade AI tool behaving exactly as their security policies require. That experience does more to reduce institutional fear than any amount of documentation or business case writing.
Once IT have seen Copilot work safely inside your environment, the conversation about additional tools changes. You are no longer asking them to trust a category they are unfamiliar with. You are asking them to evaluate specific tools against a framework they have now seen in practice. That is a much smaller ask.
The goal in the short term is not to win the argument about AI. It is to give IT a safe, observable experience of it that makes the next conversation easier. Let's help them 'break the seal'.
Request a dedicated IT business partner
This is one of the most effective structural moves available to you, and it tends to get overlooked because it does not feel like a tactical fix.
Request that IT assign a dedicated business partner aligned to marketing. Not a helpdesk contact. A named person whose remit includes understanding what marketing is trying to do, helping to navigate procurement and security processes, and acting as an internal advocate within IT for the tools you need.
IT get visibility into everything marketing is exploring before it becomes a formal request, which reduces the feeling of being ambushed. Marketing gets someone who understands the policies and philosophies IT operates within, which means fewer wasted applications. And over time, you build genuine rapport with someone inside the function who can argue for you in rooms you are not in.
The business partner becomes your insider. That is not manipulation. It is how large organisations are supposed to work, and most IT functions respond positively to being asked for partnership rather than permission.
Propose a sandboxed pilot rather than full adoption
If the procurement and security review process is the bottleneck, propose something smaller. A sandboxed pilot, run on non-sensitive internal data only, with no client information involved, is a much easier thing for IT to approve than a full enterprise rollout.
Define the scope tightly. One tool. One use case. Three months. Agree upfront what data the tool will and will not touch. Offer to have IT involved in the setup so they can see exactly how it works rather than reviewing it from a distance.
A pilot does two things. It gets you moving. And it gives IT direct, controlled experience of the tool, which tends to reduce fear far more effectively than any amount of documentation.
The cost of doing nothing is not zero
There is one more argument worth having ready, not to use aggressively, but to deploy if the conversation stalls on risk. IT’s caution is framed around the risk of adopting AI tools. But there is an equally real risk on the other side that rarely gets named.
The Larridin State of Enterprise AI 2025 report found that 67 percent of organisations admit they do not have full visibility into which AI tools their employees are already using. When businesses block sanctioned adoption, people do not stop using AI. They use personal accounts, free tools, and consumer-grade applications that carry none of the enterprise data protections IT are trying to enforce. The risk IT are trying to prevent does not go away when they say no. It goes underground.
A controlled, IT-approved pilot with proper data governance is categorically safer than the alternative. That reframe, from ‘AI adoption is risky’ to ‘uncontrolled shadow AI is the real risk’, tends to land well with security-minded leaders because it speaks their language. You are not asking IT to lower their guard. You are asking them to channel it more effectively.
Build the coalition before the escalation
A business case presented by marketing to IT is a marketing document. A business case co-authored by marketing, finance, and a senior business leader or two carries significantly more weight.
Spend two weeks quietly building internal support. Find people who are already frustrated by the pace of change and get them to say so in the room. Find out whether your CFO has a view on the competitive cost of inaction. A finance voice saying “we are losing ground and that has a number attached to it” changes the dynamic in a way that marketing saying “our content is slower than competitors” simply does not.
This is not politics for its own sake. It is making sure that the conversation IT is having reflects the full weight of the business need, not just the enthusiasm of one department.
If none of this moves things, escalate deliberately
Some IT functions in traditional businesses are structurally risk-averse in a way that no amount of coalition building will fully overcome. If you have genuinely tried the collaborative approach, brought the market evidence, proposed a sandboxed pilot, and built cross-functional support, and the answer is still no with no credible path to yes, then escalation to the CEO is not a failure of diplomacy. It is the appropriate next step.
But escalate with a solution, not a complaint. Do not go to the CEO and say IT are blocking us. Go with a fully formed proposal: here is the tool, here is the use case, here is how comparable firms have handled the security question, here is the pilot structure, here is what it costs, here is what we stand to gain, and here is what we are currently losing by waiting. Link the solution to a positive gain and inaction to a negative effect, on pipeline, on win rates, on team productivity.
At that point you are not asking the CEO to override IT. You are asking them to make a business decision with full information. That is a very different ask, and a much easier one for a senior leader to act on.
Going to the CEO empty-handed is a complaint. Going with a fully costed, de-risked proposal is a recommendation. Know the difference before you walk in.
The short answer
Take someone from IT for a coffee and find out what you are actually dealing with. Start with tools already inside your approved environment, Copilot being the obvious first step, to give IT a safe, observable experience of enterprise-grade AI. Request a dedicated IT business partner who can become your internal advocate. Propose a sandboxed pilot that keeps the risk surface small. Use the shadow AI argument to reframe inaction as the greater risk. Build a cross-functional coalition so your business case carries more than marketing’s voice.
And if the collaborative route has been genuinely exhausted, escalate to the CEO with a fully formed proposal rather than a grievance. You are not asking for permission to do something reckless. You are asking for support to do something your competitors are already doing.
The relationship with IT is worth preserving. But not at the cost of your team standing still while the market moves.
And if your CEO still says no, well, come send me a note!
Onwards,
Rich
Got a question for Rich? Email it to editor@b2bmarketing.com
London
Mar 13, 2026
Rich Fitzmaurice
Letters
“Dear Rich,
Quick summary. I left a Head of Marketing role eight months ago to be a fractional CMO. Before I made the move I had done my research, spoken to a few people who had done the same, and felt it was the right next step. I had strong experience, a clear specialism, and my first two clients lined up before I handed in my notice.
Eight months in, the work is interesting, but I am not enjoying some elements. Both clients treat me like a senior contractor rather than a strategic partner. They do not ask for my opinion on commercial decisions; I am just told after the fact. They do not include me in conversations where my perspective could genuinely add value. They schedule and delegate me into execution calls and seem surprised when there is no strategy.
One of them in particular books me for three long execution calls per week. When I have tried to introduce more strategic thinking, I get thanked for it and then ignored. The same tactical requests keep coming.
I do not want to blow up the revenue by resetting the relationship badly as it’s income I rely on. But I also did not leave a good salary to become a very expensive task manager. I have read about fractional CMOs who operate at board level, who are genuinely influential, who shape the direction of businesses they are not employed by. I am not sure how they got there or what I am doing differently/wrong.
How do I fix it?
Helen, Manchester
Rich’s reply
Helen, you are not doing anything wrong, you have simply walked into one of the most common and least discussed problems in fractional work: the client has hired the label but not bought the concept.
They called the role fractional because that is what they saw advertised, or because a peer mentioned it, or because it sounded more interesting than “external marketing resource.” But in their minds, they hired someone senior to help them do things. Not someone to tell them what things to do, or whether the things they are doing are the right things at all.
Being balanced, this is almost never the client’s fault. It is almost always a scoping and onboarding problem, and it starts before you send your first invoice.
You are selling access. They are buying execution.
This is the most important distinction in fractional work. When a client hires you, they have a mental model of what they are getting. Unless you actively change that model in the early days of the engagement, it will default to the most familiar thing: a senior person who does what they ask, faster and smarter than a junior would.
If you walked in on day one and immediately began executing, however sensibly, you confirmed that model. The three execution calls per week were not imposed on you. They grew because no one drew a different boundary for them to understand and agree to.
The fractional CMO who operates at board level did not arrive at board level. They established it before they walked through the door.
There is a framework I use and teach in my course on this called the Diagnostic Bridge. The idea is simple: before any fractional engagement begins producing outputs, there should be a defined discovery phase. Not weeks of auditing for its own sake, but a structured period where you are explicitly operating as a diagnostician rather than a doer. You are asking questions. You are mapping the landscape. You are building an Authority Map of who holds what decisions, what is broken, and where your leverage actually sits.
Crucially, you are doing this visibly and out loud, with the client watching. You are demonstrating that your value is in the judgement you bring before any work is produced, not in the speed at which you produce it.
If you do this properly, by the time the engagement shifts into execution mode, the client has already experienced you as a strategist. That experience is very hard to undo. The problem you have, Helen, is that you accidentally skipped this phase, or were not given space for it. So now you need to retrofit it, which is harder but not impossible.
How to reset the relationship without blowing it up
You have two clients, so I will speak generally, but you will need to calibrate this for each one because the dynamics will be different.
The reset does not start with a conversation about your role. It starts with a deliverable.
In the next few weeks, produce something they did not ask for. Not a task from the list. A piece of strategic thinking that reframes something they are currently working on. A short document, maybe two pages, that says: here is what I am observing, here is what I think it means, and here is what I think we should do about it.
Do not send it as an attachment in an email at the end of the day. Request a short call to walk them through it. Say you want fifteen minutes to share some thinking you have been developing. When they read it, they will either push back, in which case you have a strategic conversation, or they will be interested, in which case you have opened a door.
Do this once and it might feel like a one-off. Do it consistently and it becomes how they experience you. You are gradually rewriting the contract in their minds without ever having to say the words “I am not just here to execute your brief.”
The most powerful thing a fractional CMO can do in the first ninety days is make one observation that the client had not made themselves. That single act does more for your positioning than any amount of good execution.
The three execution calls are a symptom, not the problem
I understand why three long calls per week feels like the wrong shape. It probably is the wrong shape. But I would not make the calls themselves the issue you raise.
What you are really trying to change is the nature of the relationship, and the most direct path to that is demonstrating that your thinking is valuable, not that your time is being wasted. The moment you raise the calls as a complaint, even a polite one, you sound like a contractor protecting their hours. That reinforces the very dynamic you are trying to escape.
Instead, use the calls to subtly reinforce that execution belongs with the wider internal team, while you ask the strategic questions and explore how you can help them manage upwards.
This is not manipulation. It is the job. You are reminding both of you what you are actually there for.
On the clients you have and the ones you should have
There is a harder question underneath all of this, Helen, and I would be doing you a disservice if I did not name it.
Some clients are genuinely not capable of having a strategic relationship with a fractional CMO. Not because they are unsophisticated, but because the founder or CEO is not ready to share thinking with someone who is not on their payroll. They do not trust it, consciously or not. They will always default to telling you what to do rather than asking you what to think.
I am still a practising fractional CMO myself, to ensure I stay current and practise what I preach. Before I take on any engagement now, I run what I call a red flags check. The questions I ask are not about the brief. They are about the relationship. Is this person genuinely curious? Do they ask me questions in the sales conversation or just answer mine? Do they talk about decisions they have made differently because of external input? Have they worked with a senior consultant or advisor before and found it valuable?
If the answers are no, no, no, and no, I still might take the work, but I go in knowing the ceiling. And the ceiling tends to be execution.
You are eight months in with two clients who may both have low ceilings. That is useful information. It does not mean you cannot improve things, but it does mean you should be building pipeline for your third and fourth engagements simultaneously and filtering harder next time.
What the fractional CMOs operating at board level did differently
They positioned the engagement before they signed it.
In the sales conversation, before any discussion of deliverables or day rates, they established what they were being hired to do. Not the tasks. The outcome. And they were explicit that achieving the outcome required them to be in the room when commercial decisions were made, not just when campaigns needed running.
This sounds obvious. Most people do not do it because they are worried about losing the client before they have them. But the clients who push back on that framing are the ones with low ceilings. Losing them in the sales process is not a failure. It is your system/filtering/funnel, whatever you want to call it, working.
The other thing they frequently do differently is price by outcome rather than by time. Day rates and hourly fees are a contractor signal. They tell the client you are selling access to your hours. Outcome-based fees or retainers scoped around a defined commercial goal tell the client you are selling a result. The psychological difference in how you are perceived from day one is significant.
I know you are eight months in and changing the pricing model now can feel quite daunting. But it is something to build toward, and it is the right model to try for the next client you bring on.
The short answer
You are not stuck. You are in a very common transitional moment where the label you have and the role you are playing have not yet aligned. The majority of fractionals go through exactly this. It’s almost like a rite of passage.
Retrofit the Diagnostic Bridge by producing unsolicited strategic thinking. Use your calls to demonstrate that your judgement is the product. Start building pipeline with better qualification criteria so your next clients come in with the right expectations from the start.
And if either of your current clients turns out to have a ceiling you cannot raise, that is not a failure of your positioning. Some clients are not ready. The skill is learning to identify them earlier.
Onwards,
Rich
Got a question for Rich? Email it to editor@b2bmarketing.com
“Dear Rich,
Quick summary. I left a Head of Marketing role eight months ago to be a fractional CMO. Before I made the move I had done my research, spoken to a few people who had done the same, and felt it was the right next step. I had strong experience, a clear specialism, and my first two clients lined up before I handed in my notice.
Eight months in, the work is interesting, but I am not enjoying some elements. Both clients treat me like a senior contractor rather than a strategic partner. They do not ask for my opinion on commercial decisions; I am just told after the fact. They do not include me in conversations where my perspective could genuinely add value. They schedule and delegate me into execution calls and seem surprised when there is no strategy.
One of them in particular books me for three long execution calls per week. When I have tried to introduce more strategic thinking, I get thanked for it and then ignored. The same tactical requests keep coming.
I do not want to blow up the revenue by resetting the relationship badly as it’s income I rely on. But I also did not leave a good salary to become a very expensive task manager. I have read about fractional CMOs who operate at board level, who are genuinely influential, who shape the direction of businesses they are not employed by. I am not sure how they got there or what I am doing differently/wrong.
How do I fix it?
Helen, Manchester
Rich’s reply
Helen, you are not doing anything wrong, you have simply walked into one of the most common and least discussed problems in fractional work: the client has hired the label but not bought the concept.
They called the role fractional because that is what they saw advertised, or because a peer mentioned it, or because it sounded more interesting than “external marketing resource.” But in their minds, they hired someone senior to help them do things. Not someone to tell them what things to do, or whether the things they are doing are the right things at all.
Being balanced, this is almost never the client’s fault. It is almost always a scoping and onboarding problem, and it starts before you send your first invoice.
You are selling access. They are buying execution.
This is the most important distinction in fractional work. When a client hires you, they have a mental model of what they are getting. Unless you actively change that model in the early days of the engagement, it will default to the most familiar thing: a senior person who does what they ask, faster and smarter than a junior would.
If you walked in on day one and immediately began executing, however sensibly, you confirmed that model. The three execution calls per week were not imposed on you. They grew because no one drew a different boundary for them to understand and agree to.
The fractional CMO who operates at board level did not arrive at board level. They established it before they walked through the door.
There is a framework I use and teach in my course on this called the Diagnostic Bridge. The idea is simple: before any fractional engagement begins producing outputs, there should be a defined discovery phase. Not weeks of auditing for its own sake, but a structured period where you are explicitly operating as a diagnostician rather than a doer. You are asking questions. You are mapping the landscape. You are building an Authority Map of who holds what decisions, what is broken, and where your leverage actually sits.
Crucially, you are doing this visibly and out loud, with the client watching. You are demonstrating that your value is in the judgement you bring before any work is produced, not in the speed at which you produce it.
If you do this properly, by the time the engagement shifts into execution mode, the client has already experienced you as a strategist. That experience is very hard to undo. The problem you have, Helen, is that you accidently skipped this phase, or were not given space for it. So now you need to retrofit it, which is harder but not impossible.
How to reset the relationship without blowing it up
You have two clients, so I will speak generally, but you will need to calibrate this for each one because the dynamics will be different.
The reset does not start with a conversation about your role. It starts with a deliverable.
In the next few weeks, produce something they did not ask for. Not a task from the list. A piece of strategic thinking that reframes something they are currently working on. A short document, maybe two pages, that says: here is what I am observing, here is what I think it means, and here is what I think we should do about it.
Do not send it as an attachment in an email at the end of the day. Request a short call to walk them through it. Say you want fifteen minutes to share some thinking you have been developing. When they read it, they will either push back, in which case you have a strategic conversation, or they will be interested, in which case you have opened a door.
Do this once and it might feel like a one-off. Do it consistently and it becomes how they experience you. You are gradually rewriting the contract in their minds without ever having to say the words “I am not just here to execute your brief.”
The most powerful thing a fractional CMO can do in the first ninety days is make one observation that the client had not made themselves. That single act does more for your positioning than any amount of good execution.
The three execution calls are a symptom, not the problem
I understand why three long calls per week feels like the wrong shape. It probably is the wrong shape. But I would not make the calls themselves the issue you raise.
What you are really trying to change is the nature of the relationship, and the most direct path to that is demonstrating that your thinking is valuable, not that your time is being wasted. The moment you raise the calls as a complaint, even a polite one, you sound like a contractor protecting their hours. That reinforces the very dynamic you are trying to escape.
Instead, use the calls to, subtly, reinforce the role of the wider internal team to focus on the execution, whilst you ask the strategic questions and enquire as to how you can help them manage upwards.
This is not manipulation. It is the job. You are reminding both of you what you are actually there for.
On the clients you have and the ones you should have
There is a harder question underneath all of this, Helen, and I would be doing you a disservice if I did not name it.
Some clients are genuinely not capable of having a strategic relationship with a fractional CMO. Not because they are unsophisticated, but because the founder or CEO is not ready to share thinking with someone who is not on their payroll. They do not trust it, consciously or not. They will always default to telling you what to do rather than asking you what to think.
I am still a practicing Fractional CMO myself, to ensure I stay current and practice what I preach. Before I take on any engagement now, I run what I call a red flags check. The questions I ask are not about the brief. They are about the relationship. Is this person genuinely curious? Do they ask me questions in the sales conversation or just answer mine? Do they talk about decisions they have made differently because of external input? Have they worked with a senior consultant or advisor before and found it valuable?
If the answers are no, no, no, and no, I still might take the work, but I go in knowing the ceiling. And the ceiling tends to be execution.
You are eight months in with two clients who may both have low ceilings. That is useful information. It does not mean you cannot improve things, but it does mean you should be building pipeline for your third and fourth engagements simultaneously and filtering harder next time.
What the fractional CMOs operating at board level did differently
They positioned the engagement before they signed it.
In the sales conversation, before any discussion of deliverables or day rates, they established what they were being hired to do. Not the tasks. The outcome. And they were explicit that achieving the outcome required them to be in the room when commercial decisions were made, not just when campaigns needed running.
This sounds obvious. Most people do not do it because they are worried about losing the client before they have them. But the clients who push back on that framing are the ones with low ceilings. Losing them in the sales process is not a failure. It is your system/filtering/funnel, whatever you want to call it, working.
The other thing they frequently do differently is price by outcome rather than by time. Day rates and hourly fees are a contractor signal. They tell the client you are selling access to your hours. Outcome-based fees or retainers scoped around a defined commercial goal tell the client you are selling a result. The psychological difference in how you are perceived from day one is significant.
I know you are eight months in and changing the pricing model now can feel quite daunting. But it is something to build toward, and it is the right model to try for the next client you bring on.
The short answer
You are not stuck. You are in a very common transitional moment where the label you have and the role you are playing have not yet aligned. The majority of fractionals go through exactly this. It’s almost like a rite of passage.
Retrofit the Diagnostic Bridge by producing unsolicited strategic thinking. Use your calls to demonstrate that your judgement is the product. Start building pipeline with better qualification criteria so your next clients come in with the right expectations from the start.
And if either of your current clients turns out to have a ceiling you cannot raise, that is not a failure of your positioning. Some clients are not ready. The skill is learning to identify them earlier.
Onwards,
Rich
Got a question for Rich? Email it to editor@b2bmarketing.com
“Dear Rich,
Quick summary. I left a Head of Marketing role eight months ago to be a fractional CMO. Before I made the move I had done my research, spoken to a few people who had done the same, and felt it was the right next step. I had strong experience, a clear specialism, and my first two clients lined up before I handed in my notice.
Eight months in, the work is interesting, but I am not enjoying some elements. Both clients treat me like a senior contractor rather than a strategic partner. They do not ask for my opinion on commercial decisions; I am just told after the fact. They do not include me in conversations where my perspective could genuinely add value. They schedule and delegate me into execution calls and seem surprised when there is no strategy.
One of them in particular books me for three long execution calls per week. When I have tried to introduce more strategic thinking, I get thanked for it and then ignored. The same tactical requests keep coming.
I do not want to blow up the revenue by resetting the relationship badly as it’s income I rely on. But I also did not leave a good salary to become a very expensive task manager. I have read about fractional CMOs who operate at board level, who are genuinely influential, who shape the direction of businesses they are not employed by. I am not sure how they got there or what I am doing differently/wrong.
How do I fix it?
Helen, Manchester
Rich’s reply
Helen, you are not doing anything wrong, you have simply walked into one of the most common and least discussed problems in fractional work: the client has hired the label but not bought the concept.
They called the role fractional because that is what they saw advertised, or because a peer mentioned it, or because it sounded more interesting than “external marketing resource.” But in their minds, they hired someone senior to help them do things. Not someone to tell them what things to do, or whether the things they are doing are the right things at all.
Being balanced, this is almost never the client’s fault. It is almost always a scoping and onboarding problem, and it starts before you send your first invoice.
You are selling access. They are buying execution.
This is the most important distinction in fractional work. When a client hires you, they have a mental model of what they are getting. Unless you actively change that model in the early days of the engagement, it will default to the most familiar thing: a senior person who does what they ask, faster and smarter than a junior would.
If you walked in on day one and immediately began executing, however sensibly, you confirmed that model. The three execution calls per week were not imposed on you. They grew because no one drew a different boundary for them to understand and agree to.
The fractional CMO who operates at board level did not arrive at board level. They established it before they walked through the door.
There is a framework I use and teach in my course on this called the Diagnostic Bridge. The idea is simple: before any fractional engagement begins producing outputs, there should be a defined discovery phase. Not weeks of auditing for its own sake, but a structured period where you are explicitly operating as a diagnostician rather than a doer. You are asking questions. You are mapping the landscape. You are building an Authority Map of who holds what decisions, what is broken, and where your leverage actually sits.
Crucially, you are doing this visibly and out loud, with the client watching. You are demonstrating that your value is in the judgement you bring before any work is produced, not in the speed at which you produce it.
If you do this properly, by the time the engagement shifts into execution mode, the client has already experienced you as a strategist. That experience is very hard to undo. The problem you have, Helen, is that you accidently skipped this phase, or were not given space for it. So now you need to retrofit it, which is harder but not impossible.
How to reset the relationship without blowing it up
You have two clients, so I will speak generally, but you will need to calibrate this for each one because the dynamics will be different.
The reset does not start with a conversation about your role. It starts with a deliverable.
In the next few weeks, produce something they did not ask for. Not a task from the list. A piece of strategic thinking that reframes something they are currently working on. A short document, maybe two pages, that says: here is what I am observing, here is what I think it means, and here is what I think we should do about it.
Do not send it as an attachment in an email at the end of the day. Request a short call to walk them through it. Say you want fifteen minutes to share some thinking you have been developing. When they read it, they will either push back, in which case you have a strategic conversation, or they will be interested, in which case you have opened a door.
Do this once and it might feel like a one-off. Do it consistently and it becomes how they experience you. You are gradually rewriting the contract in their minds without ever having to say the words “I am not just here to execute your brief.”
The most powerful thing a fractional CMO can do in the first ninety days is make one observation that the client had not made themselves. That single act does more for your positioning than any amount of good execution.
The three execution calls are a symptom, not the problem
I understand why three long calls per week feels like the wrong shape. It probably is the wrong shape. But I would not make the calls themselves the issue you raise.
What you are really trying to change is the nature of the relationship, and the most direct path to that is demonstrating that your thinking is valuable, not that your time is being wasted. The moment you raise the calls as a complaint, even a polite one, you sound like a contractor protecting their hours. That reinforces the very dynamic you are trying to escape.
Instead, use the calls to, subtly, reinforce the role of the wider internal team to focus on the execution, whilst you ask the strategic questions and enquire as to how you can help them manage upwards.
This is not manipulation. It is the job. You are reminding both of you what you are actually there for.
On the clients you have and the ones you should have
There is a harder question underneath all of this, Helen, and I would be doing you a disservice if I did not name it.
Some clients are genuinely not capable of having a strategic relationship with a fractional CMO. Not because they are unsophisticated, but because the founder or CEO is not ready to share thinking with someone who is not on their payroll. They do not trust it, consciously or not. They will always default to telling you what to do rather than asking you what to think.
I am still a practising fractional CMO myself, to ensure I stay current and practise what I preach. Before I take on any engagement now, I run what I call a red flags check. The questions I ask are not about the brief. They are about the relationship. Is this person genuinely curious? Do they ask me questions in the sales conversation or just answer mine? Do they talk about decisions they have made differently because of external input? Have they worked with a senior consultant or advisor before and found it valuable?
If the answers are no, no, no, and no, I still might take the work, but I go in knowing the ceiling. And the ceiling tends to be execution.
You are eight months in with two clients who may both have low ceilings. That is useful information. It does not mean you cannot improve things, but it does mean you should be building pipeline for your third and fourth engagements simultaneously and filtering harder next time.
What the fractional CMOs operating at board level did differently
They positioned the engagement before they signed it.
In the sales conversation, before any discussion of deliverables or day rates, they established what they were being hired to do. Not the tasks. The outcome. And they were explicit that achieving the outcome required them to be in the room when commercial decisions were made, not just when campaigns needed running.
This sounds obvious. Most people do not do it because they are worried about losing the client before they have them. But the clients who push back on that framing are the ones with low ceilings. Losing them in the sales process is not a failure. It is your filter, your funnel, whatever you want to call it, working.
The other thing they frequently do differently is price by outcome rather than by time. Day rates and hourly fees are a contractor signal. They tell the client you are selling access to your hours. Outcome-based fees or retainers scoped around a defined commercial goal tell the client you are selling a result. The psychological difference in how you are perceived from day one is significant.
I know you are eight months in and changing the pricing model now can feel quite daunting. But it is something to build toward, and it is the right model to try for the next client you bring on.
The short answer
You are not stuck. You are in a very common transitional moment where the label you have and the role you are playing have not yet aligned. The majority of fractionals go through exactly this. It is almost a rite of passage.
Retrofit the Diagnostic Bridge by producing unsolicited strategic thinking. Use your calls to demonstrate that your judgement is the product. Start building pipeline with better qualification criteria so your next clients come in with the right expectations from the start.
And if either of your current clients turns out to have a ceiling you cannot raise, that is not a failure of your positioning. Some clients are not ready. The skill is learning to identify them earlier.
Onwards,
Rich
Got a question for Rich? Email it to editor@b2bmarketing.com
Content
Mar 11, 2026
How to's
We can all sense that something has changed in how buyers conduct their research. But most B2B marketers have not caught up with it yet.
A CFO opens Copilot and types: "Which accounting platforms offer AI-powered forecasting?" A marketing director asks ChatGPT: "What are the best agencies for B2B lead generation?" A Head of IT asks Claude: "What project management software works best for a team of fifty?"
None of them went to Google first. And when the AI answered, it named specific brands. Yours may not have been one of them.
This is the problem that Answer Engine Optimisation (AEO) exists to solve.
What AEO actually is
Answer Engine Optimisation is the practice of structuring your content, your brand presence, and your technical foundations so that AI-powered platforms cite and recommend you when buyers ask questions relevant to your business.
Just as SEO emerged to help brands get found in search engines, AEO has emerged to help brands get found in AI systems. It does not replace SEO. It extends it for an era where the answer, not the link, is the product.
When ChatGPT or Perplexity generates a response to a buyer question, it is not serving a list of links. It is synthesising an answer from sources it considers credible and relevant. Our job, as B2B marketers, is to be one of those sources.
Why this matters right now
Gartner projected that traditional search volume would drop 25% by 2026 as users shift to AI assistants. ChatGPT alone has over 800 million weekly active users. Around 60% of Google searches now end without a single click to a website.
The discovery layer is moving. Buyers are increasingly getting their answers inside the AI response itself, without ever visiting a vendor’s site.
That matters for two reasons beyond the obvious traffic one.
First, the intent behind AI queries is really high. When someone asks an AI for a recommendation, they are past the browsing phase. They want an answer they can act on. AI-referred traffic converts at higher rates than organic search precisely because the AI has already filtered and, implicitly, endorsed.
Second, buyers trust what AI tells them. Probably too much, if you have ever had an argument with an LLM (I certainly have). Research from Capgemini found that 73% of consumers globally trust content created by generative AI. When an AI says “I’d recommend Brand X for this use case”, that carries weight. It lands like expert advice, not an advert.
The brands that build AEO presence now will be the defaults AI recommends for years. Think of SEO in 2008. The companies that invested early still dominate today. The same compounding effect is available in AEO, but only for those who move while most of their competitors are not paying attention.
How AI answer engines decide what to cite
Before you can optimise for AI, you need to understand how it works. It is meaningfully different from traditional search.
Large language models like ChatGPT are trained on vast amounts of web data. What they know about your brand comes from that training: your content, mentions in publications, reviews, directory listings, third-party coverage. When a user asks a question, the model synthesises from everything it has absorbed, weighting sources it considers authoritative.
Retrieval-based systems like Perplexity work differently. They pull real-time information from the web when generating answers, making current content and domain authority more directly relevant.
Google’s AI Overviews blend both approaches, drawing on traditional search signals alongside AI synthesis.
The practical implication is that no single fix works across all platforms (oh, if only it were that easy). A robust AEO strategy has to account for all three models. But the underlying principles are consistent: AI rewards clarity, consistency, and credibility.
The five things AEO actually optimises
Content structure. AI systems parse content differently from humans. They break pages into individual passages and evaluate each one independently. Clear headings, direct answers at the top of each section, factual statements with specific numbers, and Q&A formatting all increase the likelihood of being cited. A page that states “our platform processes two million transactions per day with 99.9% uptime” is far more citable than one that says “we offer industry-leading reliability.” Specific beats vague, always.
Entity recognition. AI needs to understand what your brand is, which category it sits in, and how it relates to other things in its world. This means consistent naming across every platform you appear on, proper schema markup on your website, and presence on the platforms that define entities in AI systems: your Google Knowledge Graph entry, industry directories, authoritative databases. If AI cannot confidently place your brand in a category, it will not confidently recommend you.
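One concrete piece of entity work is schema markup. As a rough sketch, the Organization JSON-LD you embed on your site can be generated from a single source of truth for your brand facts. Every name, URL, and value below is a placeholder, not a real company:

```python
import json

# Hypothetical brand facts. Keep these identical to what appears on
# LinkedIn, directories, and press pages -- that is the whole point.
brand = {
    "@context": "https://schema.org",
    "@type": "Organization",
    "name": "Example Analytics Ltd",
    "url": "https://www.example.com",
    "description": "B2B analytics platform for mid-market finance teams.",
    "sameAs": [
        "https://www.linkedin.com/company/example-analytics",
        "https://directory.example.org/example-analytics",
    ],
}

# Serialise to the JSON-LD snippet you would place in a <script> tag.
jsonld = json.dumps(brand, indent=2)
print(jsonld)
```

Driving the markup from one data file, rather than hand-editing HTML, makes the consistency discussed below much easier to maintain.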
Source authority. LLMs weight sources by perceived credibility. Coverage in respected industry publications, thought leadership on high-authority sites, mentions from recognised experts: these all increase the probability that AI treats your content as worth citing. What others say about you matters at least as much as what you say about yourself. Often more. This is why I think PR will make a comeback.
Factual consistency. AI cross-references information across sources. If your founding date, revenue figure, or product description varies between your website, your LinkedIn, your press mentions, and your directory listings, AI loses confidence in citing any of them. Inconsistency reads as unreliability. Fixing it is unglamorous work. It matters enormously. For us B2B marketers, those 'fact books' and 'core scripts' will be coming back in vogue.
Semantic alignment. AI categorises content using semantic relationships. Using the terminology, frameworks, and concepts your industry actually uses, and doing so naturally within authoritative content, strengthens the connection between your brand and the queries you want to own. Write for the buyer’s language, not your internal vocabulary.
How to get started
Step one: audit what AI currently says about you.
Open ChatGPT, Perplexity, and Claude. Ask the questions your buyers actually ask. "What are the best platforms for [your category]?" "Which [your service type] providers work with [your target industry]?" "Tell me about [your brand name]."
Note where you appear. Note how accurately you are described. Note which competitors appear instead of you. This is your baseline. Run at least fifteen to twenty prompts that represent your real buyer questions. The gaps you find become your content and authority priorities.
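The recording side of this can live in a spreadsheet, but a small script keeps it honest. Here is a minimal sketch: given AI answers you have saved for each prompt, it records which brands each answer names. The prompts, responses, and brand names are all invented for illustration:

```python
# Saved AI responses keyed by the prompt that produced them (made-up data).
RESPONSES = {
    "best accounting platforms with AI forecasting":
        "Popular options include Acme Books and LedgerPro.",
    "which B2B analytics tools suit a 50-person team":
        "LedgerPro and DataNest are frequently recommended.",
}

# Your brand plus the competitors you want to track.
BRANDS = ["Acme Books", "LedgerPro", "DataNest"]

def mentions(response: str, brands: list[str]) -> list[str]:
    """Return the brands named in a single AI response (case-insensitive)."""
    return [b for b in brands if b.lower() in response.lower()]

baseline = {prompt: mentions(text, BRANDS) for prompt, text in RESPONSES.items()}
for prompt, cited in baseline.items():
    print(f"{prompt!r}: {cited}")
```

A simple substring check like this misses paraphrased mentions, but as a first baseline it is enough to show where you are absent and who shows up instead.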
Step two: map your target queries.
Build a list of twenty to thirty questions your ideal customers are likely to ask an AI assistant. Include category queries ("best X software for Y"), comparison queries ("X versus Y versus Z"), and recommendation queries ("which X should I use for this use case"). This is your AEO query universe: the questions you need to own.
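Generating the universe from a few templates keeps it systematic rather than ad hoc. A sketch, with a placeholder category and audiences:

```python
from itertools import product

# Hypothetical inputs -- swap in your own category and buyer segments.
CATEGORY = "B2B analytics software"
AUDIENCES = ["finance teams", "marketing teams", "a team of fifty"]
TEMPLATES = [
    "What is the best {category} for {audience}?",
    "Which {category} providers work with {audience}?",
]

# Expand every template against every audience to form the query universe.
queries = [
    t.format(category=CATEGORY, audience=a)
    for t, a in product(TEMPLATES, AUDIENCES)
]
print(len(queries), "queries in the universe")
```

Two templates and three audiences already give six queries; a handful more of each gets you to the twenty-to-thirty range quickly, and the same list feeds the monthly tracking in step six.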
Step three: restructure your existing content.
You do not necessarily need to create new content. You need to make what you have more legible to AI systems. Start with your most important pages. Lead each section with a direct answer. Add FAQ sections that use the exact language from your target query list. Replace vague claims with specific, citable statements. Use clear heading hierarchies. Make every section able to stand alone as a passage.
Step four: build your authority footprint.
Identify where AI systems go to assess credibility in your category. Industry publications. Analyst reports. Review platforms. Expert directories. Community platforms that AI crawls: LinkedIn, Reddit, relevant industry forums. Pursue presence on those consistently. Not volume. Consistency and quality. One well-placed byline in a credible industry publication does more for AEO than ten posts on your own blog.
Step five: fix your entity consistency.
Audit every place your brand appears online. Your website, your Google Business Profile, your LinkedIn company page, your directory listings, your press mentions. Make sure your brand name, description, category, and key facts are identical everywhere. This is the kind of work that nobody wants to do but everybody benefits from.
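The cross-check itself is mechanical once the facts are collected. This sketch (sources and values invented) gathers the key fields from each place your brand appears and flags any field where the values disagree:

```python
# Each source maps field -> value as it appears there (made-up audit data).
SOURCES = {
    "website":   {"founded": "2014", "employees": "120", "hq": "London"},
    "linkedin":  {"founded": "2014", "employees": "150", "hq": "London"},
    "directory": {"founded": "2015", "employees": "120", "hq": "London"},
}

def inconsistencies(sources: dict[str, dict[str, str]]) -> dict[str, set[str]]:
    """Return each field whose value differs across sources."""
    fields = {f for facts in sources.values() for f in facts}
    conflicts = {}
    for f in fields:
        values = {facts[f] for facts in sources.values() if f in facts}
        if len(values) > 1:
            conflicts[f] = values
    return conflicts

print(inconsistencies(SOURCES))
```

Anything the check flags is a correction task; anything it passes goes into your single source of truth for future listings.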
Step six: measure and iterate.
Start tracking how your AI citation rate changes over time. Run your target query list monthly across the main platforms and record where you appear. Track whether AI referral traffic is showing up in your analytics. This will not be perfect attribution. It does not need to be. You are looking for directional signals: more citations, more accurate descriptions, more queries where you feature.
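The monthly tracking can stay lightweight. As a sketch, each run of the query list produces one record per query noting whether your brand appeared, and a citation rate per month falls out of that (the records below are invented):

```python
from collections import defaultdict

# One record per (month, query): was the brand cited in the AI answer?
# Made-up data standing in for your monthly query runs.
RECORDS = [
    ("2026-01", "best analytics tools", False),
    ("2026-01", "top platforms for finance teams", True),
    ("2026-02", "best analytics tools", True),
    ("2026-02", "top platforms for finance teams", True),
]

def citation_rate(records: list[tuple[str, str, bool]]) -> dict[str, float]:
    """Share of queries per month in which the brand was cited."""
    hits, totals = defaultdict(int), defaultdict(int)
    for month, _query, cited in records:
        totals[month] += 1
        hits[month] += int(cited)
    return {m: hits[m] / totals[m] for m in sorted(totals)}

print(citation_rate(RECORDS))
```

A rate moving from 0.5 to 1.0 month on month is exactly the kind of directional signal the step describes; the absolute number matters less than the trend.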
What good AEO looks like in practice
A page that states "our platform processes two million transactions per day with 99.9% uptime" is far more citable than one that says "we offer industry-leading reliability."
A FAQ section that asks "which B2B marketing platforms are best for companies with under fifty employees?" and answers it directly is far more useful to an AI system than a generic features page.
A founder with a consistent, expert-level presence in trade publications is far more likely to have their brand cited than one who only publishes on their own site.
These are not complicated ideas. But most B2B brands are not doing them systematically yet, and that is the opportunity.
The honest caveat
AEO is not a one-time project. AI models update continuously. What works today may need adjusting in six months. The platforms themselves are evolving. Perplexity’s citation logic is not identical to ChatGPT’s, which is not identical to Google’s AI Overviews.
As marketers, we must build the habit. The brands that treat AEO as an ongoing discipline rather than a box to tick are the ones that will compound advantage over time.
Most companies have not even started yet. That window will not stay open indefinitely.
Want help assessing your current AI visibility? It is something we specialise in. Get in touch via our contact page.
Content
Mar 13, 2026