AI-Powered Seller

Why Your IT Team Is Sabotaging Your AI Strategy

Jake Dunlap Season 1 Episode 14

IT is RUINING many companies’ Generative AI strategy. AI is not just an IT decision. It’s a business decision. But for too many companies, outdated fears and IT roadblocks are sabotaging AI adoption and killing productivity.

In this episode, Jake Dunlap takes on the five biggest barriers IT is putting up against generative AI and lays out exactly how to break through them. Whether you’re a sales rep trying to use AI or a leader pushing for adoption, this is the playbook for moving forward.

Data security & privacy – Why fears from 2023 are holding companies back and what’s actually true today
Integration complexity – Stop overthinking AI strategy and start integrating at the role level
Lack of control – AI outputs are improving fast. Are humans really giving better advice?
ROI concerns – The real numbers behind AI adoption and how to quantify its impact
Ongoing Support & Maintenance – Why AI is easier to manage than most sales tech you’re already using

If your company is dragging its feet on AI adoption, this episode arms you with the facts to push past hesitation and finally implement AI in a way that actually works.

You’ll want to share this episode with your IT team and sales leadership. It’s time to get AI over the hump.


Join the AI-Powered Seller Newsletter:  https://bit.ly/ai-powered-seller-newsletter 

Sign up for Custom GPTs: https://link.meetjourney.ai/aff_ad?campaign_id=4859&aff_id=2751&source=Jake-KD-Podcast&hostNameId=24435 

Connect with Jake: https://www.linkedin.com/in/jakedunlap/ 

Speaker 1:

All right, welcome everyone. Another episode of the AI-Powered Seller. I am Jake Dunlap, CEO of Scaled, Journey AI, RevOptics and the Dunlap household. Well, I'm not really the CEO there. I'm more like the president, or maybe more like vice president.

Speaker 1:

Let's be honest, today's episode is going to be a shot in the gut for a lot of companies. I'm going to talk about IT and how IT is ruining many companies' generative AI strategy and, hopefully, what you can do about it. It's not intentional, and maybe ruining is a little harsh, but I'm seeing it more and more in the market. What's happening is companies are trying to move forward. They're like, hey, I know for these roles I can be more productive if I just have this thing. And then IT is like, no, we didn't sanction that, and it's killing productivity. So if you're an IT professional listening to this, I just want to say that I love you and this isn't anything personal, but there is a big, big difference between understanding what AI is and understanding the capabilities of what it can do. Today I'm going to talk about the big ones we're seeing in the market. There are five core issues we're seeing IT push back on: data security, integration, lack of control, clear ROI and ongoing support. I'm going to break down each one so you can break it down for your leadership team, or, if you're in IT, maybe I can give you some comfort to move forward with some more aggressive solutions here, because the future is now. If you listened to the last episode of the pod, you would have heard me talk about what's already capable today.

Speaker 1:

In the last episode, I really broke down that, man, forget this general AI deployment; you could already be creating this stuff. So if you didn't listen to that episode, make sure to go take a listen. And if you haven't already, subscribe to the podcast so you get the latest and greatest. If you're watching this live, thank you, I appreciate you. Make sure to like the video and subscribe to the channel as well, and we're going to get into it.

Speaker 1:

So the first thing I'm going to talk about is data security and privacy. Oh man, this is the IT calling card. So let me tell you what happened. There was an incident in April of 2023. Literally, my friends, two full years ago. ChatGPT had only been around since what, November? December, January, February, March: five months. It's a little baby, a little AI baby. And what happened in April 2023 is someone from Samsung decided to upload some of their board minutes to summarize them. And, by the way, this was ChatGPT before it had live internet access; it was still working off old information.

Speaker 1:

There wasn't a live internet connection, there were no custom GPTs. The model was, like, the 3 or the 3.5, I can't remember which. And it literally said right there: do not upload sensitive documents. ChatGPT was five months old; it's like, look man, I'm reckless right now. And somebody was able to actually search those board minutes and found some details. It wasn't even some hugely proprietary thing.

Speaker 1:

That incident created scar tissue across a lot of companies, where they're like, oh, we can't share proprietary data, it's going to feed the model. And you know what? You weren't wrong two years ago; that's what it was. But fast forward to today: OpenAI encrypts data with AES-256 at rest and TLS 1.2 in transit (okay, this is nerd stuff). OpenAI is also SOC 2 compliant. And you know what I'm going to do? I'm going to literally drop the T's and C's into the show notes. It literally says that if you use Teams, Enterprise or the API, no information is shared back, the conversations aren't shared back, nothing goes back into training, et cetera.

Speaker 1:

So when it comes to data security and privacy, ChatGPT, from OpenAI, is just as safe as Gemini, as Copilot, as any of the other solutions, and it has superior models. ChatGPT now has the 4o model, the 4.5 model, the o1 model, the 3.5 model; there are so many different models that are better for different use cases. And, by the way, Perplexity is fantastic, Claude is also fantastic, and all three of those are superior to Gemini and Copilot. So if you are making your generative AI decisions purely on data security and privacy fears, you are literally forcing your teams to use software that is inferior to what exists today. Look, IT's job is to be safe and compliant, et cetera. I get that. But we've got to get rid of this mindset of ChatGPT circa April 2023. It has the best models and, again, any of these now are secure.

Speaker 1:

You're not hearing about breaches with any of these. The worst thing I've heard recently is that some Russian hackers, this was maybe a few weeks ago, were able to get at some custom GPTs that were public. By the way, if you create your custom GPTs in your Teams or Enterprise environment, this is not applicable. They were able to have a GPT give up its custom instructions: hey, how did they write you, how did they do this? Which, again, I don't think is a big deal, and it's certainly not applicable to a company that has a Teams environment. Okay, so that's what's up.

Speaker 1:

So again, if you're a seller out there, look, I'm not saying go against your IT department, but I am saying there is as close as possible to no risk that if you put a call transcript in there, all of a sudden your competitor is going to go, ooh, let's see what Scaled is talking about. It's not happening. This is all fantasy land. So if you're worried about it, don't worry about it. IT departments, I'm sorry, but it's the truth. If you don't like it, go do your own homework and tell me I'm wrong. Feel free to leave it in the comments, but leave me comments on how the Teams or Enterprise edition will share information back, and the potential breaches in the last, you know, 12 months. Okay, that's my high horse on data. All right, next up, my friends: integration stuff. OpenAI's API is super, super easy to use.

Speaker 1:

I mentioned it, maybe for the first time, in last week's episode. But more than anything, when I think about integration complexity, I want you to think about integrating it to the role of the person, and that's for everybody out there. This is how you should be thinking about it. My wife even created a custom GPT for her role. I should be creating little agents for myself that prompt me so I don't have to prompt. Again, in last week's episode I broke down what goes into a custom agent. But when I think integration, it's: okay, for this marketing role, how should this marketing role use it? They're going to use generative AI, they're going to have one little agent for this thing, they're still going to need to prompt for this other thing. Great. This next role, how should they integrate it? So it's really not this complex integration we need to think about across every department, across every group. It's about the role.
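To make that role-level idea concrete, here's a minimal sketch of what a role-by-role playbook could look like in code. The role names, agent names and tasks here are purely illustrative assumptions on my part, not anything Jake or OpenAI prescribes:

```python
# Sketch of "integrate at the role level": map each role to the pre-built
# agents it gets and the tasks it still prompts for manually, instead of
# chasing one company-wide "AI integration strategy".
ROLE_PLAYBOOKS = {
    "marketing": {
        "agents": ["campaign-brief-writer"],       # pre-built, no prompting needed
        "still_prompts_for": ["ad copy variants"], # ad hoc generative work
    },
    "sales_rep": {
        "agents": ["discovery-call-prep"],
        "still_prompts_for": ["follow-up emails"],
    },
}

def agents_for(role: str) -> list[str]:
    """Return the pre-built agents a given role gets out of the box."""
    return ROLE_PLAYBOOKS.get(role, {}).get("agents", [])

print(agents_for("sales_rep"))  # ['discovery-call-prep']
print(agents_for("finance"))    # [] — no playbook defined yet for this role
```

The point of the structure is that "what's our AI strategy?" decomposes into one small, answerable question per role.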

Speaker 1:

And I've said this 5,000 times. It drives me crazy. Stop asking, what's our AI integration strategy? That is literally like, I wonder if, in 1995, they had a chief internet officer who asked, what's our internet strategy? Well, what was the answer then? The answer was, well, it depends. And what does it depend on? It depends on the role of the person using it. That's how they're gonna use the internet. That is also how they're gonna use generative AI. So that's what's up, my friends, on integration stuff. Okay.

Speaker 1:

Next is lack of control. Okay: I can't control what it says; what if it gives misinformation? This is a big one, right? Let's be honest, back in the day the word hallucination used to mean something completely different, but over the last few years, hallucination means AI will make shit up. Let me give you a really good example. This was well over a year ago, so it's not like, well, Jake said it hallucinates now.

Speaker 1:

I was using the paid version, probably a year and a half ago. I was like, okay, I'm trying to write this type of content, help me get some stats around this thing. It came back with some killer stats. I was like, man, these are really, really good. And it was also one of those things where I was like, this is too good; these stats are, like, too much validating my opinion. So I'm like, can you cite your sources for me? And it goes, oh no, no, those were just placeholder sources. And this was before it had search turned on and a whole bunch of other things. I was like, you just made these up? Thank God I didn't go post them. So when it comes to lack of control, hallucination is becoming less and less of an issue.

Speaker 1:

If you're not using the paid version of any of these tools, you absolutely should be; they will all cite their sources for you. So the whole idea of ChatGPT making up stuff and stats is gone. It's gone. ChatGPT will cite its sources. You say, great, cite your source, and it's like, here's the link, here are the things I used. Journey AI: if you go use meetjourney.ai, literally the very first thing it returns is a search, and it says, here are the five sources that I used. If you use some of the advanced reasoning models, they will show you the sources. So on lack of control over outputs, that's point one: it already cites sources, so it's pulling from information and making extrapolations not just off general LLM thought, but off of sources. So know that. The other thing I think is very, very interesting around this:

Speaker 1:

Okay, this is going to be the gut punch, maybe, for a lot of you. How often does your friend or your leader give bad advice? How often do you ask a human about something, and they give you advice, and you're like, that's like 80% good? And maybe, let me take it further: how often do you do a Google search and the results suck or are mediocre? So, guys, because it doesn't return the perfect, controlled, exact answer every single time, who cares? Neither does any other place you get advice from. Every place you get advice from gives mediocre advice at times, other times 10 out of 10. So I want a lot of you to think about that. One, does your prompting just suck, to where you're not getting good outcomes? Two, sure, it might not be perfect, it might hallucinate a little bit, but that's better than not using it, right?

Speaker 1:

The analogy I always use is this: your people have the internet, okay? Your people can go to websites. Your people right now can go to OnlyFans, if that's their thing. So is it better that they still have the internet and can use it, or is it better to shut everybody off because of one or two bad use cases? Let me guess: the answer is it's better to have the internet. Okay, so we have to take that same approach to AI. Yes, we don't have a hundred percent control of it, but you don't have a hundred percent control now over the quality of answers that people are getting. So get over it.

Speaker 1:

Next up, I've got ROI concerns. Okay, Jake, what's the ROI? And I totally get this one, as someone who's been in sales for a long time. There are some very basic sales arguments. The first is, I can make you more money. Make you more money is always the best sales argument, because then I can show a really positive lift; everyone likes to make more money. Step two is, I can save you money. Not quite as good. The next is, I can get you a higher quality of insert-thing-here, so more higher-quality widgets. And the last is time savings. Time savings is the one people struggle to quantify the most, and that, I think, is one of the biggest issues I see in generative AI: the value is time savings plus higher quality. It's actually those last two, which, candidly, can sometimes be the hardest to prove. Because of those two things, I think companies are struggling to say, what's the exact ROI on this? Oh my gosh, what's the ROI, right? So I will tell you the stats we are seeing, and you can do what you want.

Speaker 1:

About eight months ago, we did a survey. We had about 300 sales reps and sales leaders respond. You had to be using ChatGPT: how often are you using it? At that point, I want to say it was like 28% of people who said five hours or more a week. Holy crap. I mean, that should be eye-opening, guys. We literally redid that survey, I think with only a couple hundred respondents, like a month ago. So six months later, 42% said they were using it five hours or more.

Speaker 1:

My friends, let me ask you: what is faster? Going to Google? Okay, I get a new meeting. Oh yay, I got a new meeting. This is me typing, right? Let me go to their website. Okay, I go to their website, click around, blah, blah, blah. Then I'm like, okay, now let me go to their LinkedIn profile. Okay, read, read, read, read. Fast forward 20, 30 minutes. That's how people are doing it right now.

Speaker 1:

Or you use a custom GPT: give it a PDF of the person's LinkedIn profile, it reads it all for you and says, great, Jake, here's your discovery call prep. Then obviously you could prompt from there. Or you can say, great, hey, here's the link, here's what I think, if you don't want to use a custom GPT. But what is faster, logically? What's the ROI of Google? Whenever we went from going to the library to Google, who was the first person who said, well, what's the ROI of using Google? I can go to the library and get this information. It's like, well, logically, this is going to save us a ton of time. So it's either I can just give something a link, it does all the summary and analysis for me, and then I can look at it, review it and see what I like, or I go and search around the internet.

Speaker 1:

So, my friends, to implement this stuff, you're talking about 50, 60 bucks a month per user to have custom agents. You can get the paid version for 20, 30 bucks a user. So the costs are not high, and the time savings are real. We're talking to a team that has 400 people. They deployed Copilot, and they didn't deploy any of the agents alongside it that do all the things I'm mentioning. And I'm like, guys, you have 400 reps, and you're still making them go and learn how to prompt, be an expert, do all these things. You're still requiring them to spend hours doing that. If you have 400 reps and I can save them two, two and a half hours per rep per week, because they're now getting prompted, that is a thousand hours of productivity in your sales team per week. What are your quotas? Okay, well, there you go. I just gave you two free calendar months' worth of selling time, or whatever the math is.
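The back-of-the-envelope math above can be sketched out. The rep count, hours saved and license prices are the round numbers from the episode; the $50/hour loaded rate is my own placeholder assumption, so plug in your real fully loaded rep cost:

```python
# Rough ROI sketch using the episode's round numbers: 400 reps,
# ~2.5 hours saved per rep per week, ~$60/user/month for custom agents.
reps = 400
hours_saved_per_rep_per_week = 2.5
license_cost_per_user_per_month = 60

weekly_hours_saved = reps * hours_saved_per_rep_per_week       # the "1,000 hours"
monthly_license_cost = reps * license_cost_per_user_per_month

# Value the recovered time at an assumed loaded rate (placeholder).
loaded_hourly_rate = 50
monthly_time_value = weekly_hours_saved * 4 * loaded_hourly_rate

print(f"{weekly_hours_saved:.0f} hours/week freed up")            # 1000 hours/week
print(f"${monthly_license_cost:,}/month in licenses")             # $24,000/month
print(f"${monthly_time_value:,.0f}/month in recovered time")      # $200,000/month
```

Even at the top-end license price, the assumed value of the recovered time comes out roughly 8x the spend, which is the shape of the argument being made here.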

Speaker 1:

So the ROI, to me, is pretty straightforward, right? Now, yes, over time should we be able to show more meetings booked? Absolutely. Should we be able to show decreases in sales cycles because we're having better, higher-quality conversations, more win rates? Absolutely. We should be able to tie it to make-more-money, which is what everybody loves. But guess what? I can tie it to higher-quality insights and time savings pretty significantly right now. It's a big, big deal. So that's ROI and cost concerns: it's not that expensive. It's way cheaper than probably 80% of the sales tech you use, and there's a lot of that tech I would give up first. Okay, support and maintenance. This is last but certainly not least, and again, I get this one in particular.

Speaker 1:

The issue we're seeing is that this stuff is evolving so quickly. In the last eight weeks, OpenAI released the o1 Pro reasoning model: 10 out of 10. Anthropic, the makers of Claude, raised $3.5 billion. OpenAI just released the 4.5 model. So this stuff is evolving quickly, and yeah, there is a little bit of staying up to speed that's required. It's required for everybody, not just IT: for you in sales, for you as sales leadership, you have to stay on top of this stuff too. The nice part about what's happening is there's a lot of really good technology doing the development for you, and a lot of this is just how you put it together.

Speaker 1:

The future will be that everyone can build an X agent, but are my instructions or my knowledge documents better than yours? Can I program the agent with proprietary data sets better than you can? So when you think about maintenance and keeping up to speed on your custom GPTs, yeah, those are things you're going to need to adapt every few months. Oh, there's a new model out, or this thing happened, so I need to go change this part of the instructions.

Speaker 1:

But the cool part is that it's not highly technical. Most of these changes are wording or phrasing, or how we're putting things together, or making the knowledge documents better. That's what's really interesting: we're entering a world where sellers, every single one of you, can learn how to create an automation. It's not tough; lots of people on our team have trained themselves. So when it comes to this idea of support and maintenance, the support and maintenance is very minimal. Really, the only thing you're supporting and maintaining is the proprietary knowledge sets, because all the underlying technologies are automatically upgrading. And sure, if one of those technologies goes down or something misfires, it happens, but that happens in technology all the time. So this idea of support and maintenance being a blocker: it's so much easier to support than almost any sales technology tool, because you're just using words. You're saying, hey, why isn't it giving me the same outcome, how do I go make those tweaks? It's easier, at least in my opinion, literally easier than ever. So that's what I've got for you.

Speaker 1:

All right. Data security and privacy: guys, April 2023 was two years ago. Can we please move on? All of these tools are secure. You're not seeing breaches when you invest in the paid versions, et cetera. It's time to move on. There are superior models, and you are hamstringing your team by not letting them use the best of the best, forcing them onto technology that was already outpaced a year ago. Integration complexity: we're talking about integrating at the role level. Stop with the big-picture stuff. Focus on the roles: figure out how we're going to integrate it here, and pick one or two pilot use cases. Lack of control: again, keep making the agents better.

Speaker 1:

You can get your quality of information and your outputs better, and just understand: these models are getting better and better. AI is never gonna be worse than it is right now. ROI: I don't know how else to say it. What's the ROI of Salesforce? I don't know, but I know it saves you a lot of time, and forecasting would be nearly impossible without it. I don't know how to quantify that. What's the ROI of having access to Google? I don't know, but the ROI of having the internet is better than the ROI of not having the internet, right? And support and maintenance: this stuff is really, really basic. It's not overly technical. It's not like your code is breaking; that's not how this works. A lot of it is just keeping up to speed with small changes in the models and making sure we're supplying them with better information.

Speaker 1:

So, my IT friends, hopefully you got a lot of value out of this. Maybe you're like, hey, I want to subscribe too. Make sure to forward this episode over to your IT team, subscribe to the channel, and like the video if you got value out of it. And if you're listening, make sure to share this podcast with your team as well.

Speaker 1:

My hope for this episode is that we're lowering the temperature a little bit, making it a little easier to be like, all right, cool, let's get going. Let's not worry about the fear and uncertainty. We actually can get started much faster, and it's a lot less complex than we think; it's actually getting less complex as time goes on. So that's what I've got for you, everybody. Thank you for tuning in to another episode of AI-Powered Seller. We'll see you with a new episode every two weeks, so, like I said, make sure to subscribe to the channel and share with your teams. I think every CEO and CIO needs to listen to this episode to understand what is actually happening at the forefront of generative AI. Thanks again, everyone. We'll see you on the next one.
