Going Guerrilla with AI: Real talk on adoption for teams
“People are using AI Tools whether they’re being allowed to or not. More and more of large organisations’ ways of working are starting to operate out of these things organically.”
Ben Le Ralph
Listen to the full episode for an in-depth look at how AI is changing the way teams work and why strategy may soon become the next big challenge in the age of AI.
In this episode, you’ll hear about:
- What Ben means by the “strategy execution gap”
- Why AI adoption is messy, fragmented, and very real
- The comparison between AI and Excel in organisational usage
- How teams are using AI tools unofficially to move faster
- The personal vs organisational use of AI – what’s working and what’s not
- The shift from AI hype to quiet productivity
- Why grey user insight can still be useful
- The risks (and benefits) of AI hallucinations
- Why AI won’t replace jobs – but will shift how we work
- How delivery bottlenecks are giving way to deeper strategy work
- The future of AI as a practical, democratic tool for decision-making
- The emerging value of strategy work in the wake of AI adoption
- How AI is revealing deeper patterns that drive decision-making, even if not perfectly accurate
Key links
AI for Busy People
Meet & Gather
B Corp Certification
Ben Le Ralph’s TikTok
Ben Le Ralph’s LinkedIn
About our guest
Ben Le Ralph is the founder of AI For Busy People and runs a small co-working space in Richmond called Meet and Gather.
Over the past 15 years, he has helped small teams, often within larger organisations, to achieve big things.
He specialises in supporting business owners and team leaders to align their teams on the right strategy and implement practical systems that supercharge delivery. Ben is passionate about helping teams work smarter and build things that actually move the needle and make an impact.
Before launching AI For Busy People, Ben co-founded and scaled a B-Corp certified consultancy, growing it to a team of 15+ and $6 million in revenue. His company partnered with some of Australia’s most recognisable organisations and government departments to help them rethink how they tackle complex social challenges.
About our host
Our host, Chris Hudson, is a Teacher, Experience Designer and Founder of business transformation coaching and consultancy Company Road.
Company Road was founded by Chris Hudson, who saw over-niching and specialisation within corporates as a significant barrier to change.
Chris considers himself incredibly fortunate to have worked with some of the world’s most ambitious and successful companies, including Google, Mercedes-Benz, Accenture (Fjord) and Dulux, to name a small few. He continues to teach with University of Melbourne in Innovation, and Academy Xi in CX, Product Management, Design Thinking and Service Design and mentors many business leaders internationally.
Transcript
Chris Hudson: 0:07
Okay. Hey everyone, and welcome back to the Company Road Podcast, where we explore what it takes for intrapreneurs to change a company from the inside out. I’m your host, Chris Hudson, and today we’re gonna be diving into how AI is revolutionising the way that intrapreneurs can drive change within their organisations. Joining us today is Ben Le Ralph, founder of AI for Busy People, and he’s the mind behind the Meet and Gather coworking space that was set up in Richmond here in Melbourne. Ben has spent 15 years helping small teams achieve outsized impact, often within larger organisations. And before his current ventures, he co-founded and scaled a B Corp certified consultancy to about $6 million in revenue, partnering with some of Australia’s most recognisable organisations. Today, Ben’s gonna share how AI can bridge what he calls the strategy execution gap. We’re gonna get into some of that in a moment, but it’s about giving smaller teams, and people really like us, the leverage to accomplish what previously required huge amounts of resources and massive teams. So whether you’re leading transformation efforts or you’re looking to drive innovation from within in some sort of way, this conversation is gonna give you loads of practical insights on harnessing AI as your secret weapon for change. And if you’ve seen any of Ben’s clips, he’s on TikTok and all the different channels, he’s a prolific content creator and shares some amazing stuff. So tune into what he has to say. And Ben, a very warm welcome to the show. Thanks for coming on.
Ben Le Ralph: 1:25
Thanks for having me, Chris.
Chris Hudson: 1:26
Great. Now Ben, you’re no stranger to content creation yourself. You’re on all the socials, and obviously you’re putting out videos and hints and tips and tricks all the time. You work deeply in the area of AI, and it’s been a bit of a head turner, obviously, on LinkedIn and across much of the media for a while now. But have we grown bored of it? Has it grown bored of us? I don’t know. Are we still doing things with it? There are a thousand people every minute, it seems, making themselves into Mattel action figures right now. But what have been some of the hottest topics in AI at the moment, do you think?
Ben Le Ralph: 1:56
Look, I think you’re so right in that it’s becoming very normalised very quickly. I think it went from something that was very out there and, like, peak interesting to something that people are tackling within their companies, like, oh, actually this is something that I’d better start doing something about. And I think the Mattel action figures thing is also interesting in that it is amazing how fast a fad can go from wildly interesting to so over, and we’re talking maybe in an afternoon. But that’s how easy the AI tools are these days, right? You can literally just type, with no pre-knowledge of anything about machine learning, and create a very impressive image of yourself doing something wacky. And that learning curve falling off has been... it’s very interesting to see.
Chris Hudson: 2:46
Yeah. Yeah. So there’s stuff popping up like that all the time. There was the one before, it was like the week before or whatever, I can’t really remember what it was called, but it was a certain illustration style that people were using. You probably know the one. But yeah, it just feels like there are more things being tried out, more things being pushed, and people are getting comfy, right? So is it getting more interesting, or has it just plateaued into this creative pool?
Ben Le Ralph: 3:11
yeah. Yeah. It’s certainly a position where you can either choose to be pessimistic about the future or optimistic about the future, and I’m always quite optimistic about the future. I think what’s interesting about this moment or what, like I harness my content around is this sense that like there are lots of announcements coming out at a very high rate. Like what actually got me starting my channel back just before Christmas was OpenAI did something like the 12 days of Christmas and they released new features every single day. And I’m like, in a regular instance or in a regular world, just one of those announcements would’ve been something that they built up to four for like three or four months. Hmm. And like that’s happening at not one company, but. At 50 or a hundred companies where someone’s finding these new features, whether it’s creating images or it’s writing code, or creating apps, like these little things that it can do. They’re being released and announced every day and it’s almost gone from, wow, this is interesting to like. Just trying to keep up with the various ways that you can implement it has become almost like a part-time job. So yeah, I think it’s exciting, but in a different way
Chris Hudson: 4:22
Because of the fragmented nature of that, obviously it’s kind of splintering, right? It feels like there are many, many options now, and everyone knows ChatGPT, and they think they’re impressive when they mention ChatGPT. But there are thousands and thousands of things that you could try, right? And it’s becoming a harder space to navigate, and some of it’s good and some of it’s not good, right? Some of it’s still picking up speed. So where do you think some of the challenges are, and what are some of the ways to navigate that? Where can it be easily done, and where is it helpful to start, do you think?
Ben Le Ralph: 4:56
I certainly agree that you can get lost in all the possibilities, right? You can just literally look at all of the announcements, very hypey, very frothy, and you can just get stuck in, should I try this? Should I try ChatGPT, or should I try Claude, or should I do an agent? Literally things that you can just get lost in. And I think particularly in larger organisations, where you can’t necessarily just download something and start using it right away in a professional sense. I used to work on old school technology projects where you would go through and you’d have the ideation, and you’d do the strategy, and you’d go through a full design implementation process, and then you’d pick technology. All of that, even if you’re doing it quickly, takes six months. And then you get into an agile world and you’re like, oh, maybe we can break that down into a discovery phase and then some sprints. But all of that just feels like it takes too long in this current environment. And so I think the challenge that people are facing is finding these new ways of working that allow you to work and experiment at a faster rate, while also, I guess, balancing the need for safety and security and planning and strategy. All of these things have to come together, and they move at very different speeds.
Chris Hudson: 6:18
Yeah. It’s kind of interesting to see probably what the number of organisations at different stages almost, it feels like there. The stages of maturity are wild and different. Some and some, some teams are running rogue within an isolated unit within an organisation because they’re able to do so. Or a lot of people just using their personal laptops as well, which, yeah, that may be another thing which kind of goes under the radar, but they’re just trying to see what they can output. And then maybe it’s just a case of then business casing it or then doing out. But yeah, this notion of kind of guerrilla AI is kind of interesting because you know, you want to think about. Okay, well where is it official? Where is not official? If you are waiting for the official lines, then what are you having to wait for and what are the signals for it being okay to use? But yeah, is there a way of just getting around that and there’s some gorilla stuff and getting the work done, as you say.’cause people are just using it on the phone, on the home laptop, you know, whatever it is.
Ben Le Ralph: 7:08
A hundred percent. I forget where I got this from, but I heard someone recently talking about this idea that ChatGPT and these assistants, and there are lots of different ways AI can be used, but that assistant in particular, is being implemented in organisations the same way Excel was. And so this idea of a spreadsheet where people would create them for literally anything. In large organisations there is so much data that just sits outside of all of the other systems that they’ve got, and someone’s put it in a spreadsheet. And people would often, probably not people listening to this podcast, but people out there in the real world, would be very surprised about how much critical infrastructure is actually just being run out of Excel, because it is just so easy to spin up a sheet. It’s quite powerful, and people can just get something that they’re being asked to do done. And I think that’s the niche that ChatGPT or these AI assistants are filling: people are using them whether they’re being allowed to or not. More and more of large organisations’ ways of working are starting to operate out of these things organically. And so then the question really becomes, how do you scale that across teams, across the organisation, and kind of utilise that in a more formal way?
Chris Hudson: 8:32
Yeah. Which you’d think you always have to do, but you don’t always, really, because with the adoption curve as it is, some people might not get round to it or need to. There’ll be some forerunners, a 10% or a 20%, I dunno what the percentage is, and some people behind that, right? So the people that need to use it will, and for the people that won’t, there’ll probably just be an Excel version.
Ben Le Ralph: 8:53
Yeah, it would be interesting. I think Anthropic, who makes Claude, came out with some new research about their new paid tier, which is like 200 bucks, and what they’re finding is that the people who are buying it are developers at large organisations using their personal cash. It’s proven that it can help them so much that they’re spending their own money on it, just because it makes their day easier.
Chris Hudson: 9:17
Oh yeah, that happened with Adobe as well. People were just buying the licences for themselves. Maybe not with Figma, that was a bit more legit, but it felt like there were workarounds. People were just finding ways, and not only in a software sense, but with stationery or other basic things that you would just bring into work, because people wanted to have it a certain way. Yeah, it’s been something that’s been hacked a little bit for a while, I think.
Ben Le Ralph: 9:40
certainly. I think there’s more of that like groundswell usage of it, like that actually feels relatively mature almost within organisations. It’s the how do you then build it into your products to improve the user experience of the business. That’s, I feel like people are more nervous around, how can I use it? My own personal workflow is a lot safer than. How do I restructure our service offering to take advantage of AI? Yeah, and I think, say if the part of the curve is that like AI is here and it can help me personally is maturing, the next wave is how can we improve our internal processes by using it to do admin, project support, writing content, like there’s thousands of use cases, but it’s within your internal teams and organisation. And then the last lagging factor would be how do we change the product in a world that has AI in it?
Chris Hudson: 10:35
Yeah. It’s an interesting point. It presents a fairly open-ended challenge or question, which is around how those individuals that are using it in their own way in a fairly nuanced way. How does that become scaled and systemised, and how do people actually make sense of it and work together as a team in collaboration with it as well. Because it has been, you know, I’m gonna input this email and ask it to do something better. Make this email sound more formal, make it sound more funny and do it in the tone of voice of Tom Hanks, or whatever you wanna use. But you know you can see how it’s become very. Individualistic and quite sort of introvert, not in inward facing, I wanna say, even though it’s an outward expression in the end because it’s creating an outputting something as generative AI. But it feels like that ownership of it sits with the individual and it needs to then stretch into an organisation. So that could be interesting to see how it plays out.
Ben Le Ralph: 11:29
Very. Like, at the moment, and it depends on the organisation, it depends on the people, but I still feel like we’re in a moment now where people feel like if they can catch you using AI, they’ve got something over on you. They’re like, oh, there’s an en dash, there’s a little dash in the words there, I’m pretty sure ChatGPT has written that, or whatever it is. And I think we all just need to get over that.
Chris Hudson: 11:58
My wife’s a designer. She was talking about typography and was like, the only people that know about in dashes are creative and art directors, directors that worked in the nineties, or people that have come from a serious publishing background. And you wouldn’t use, nobody else would have a clue as to how to use it. So. Yeah, it pops up, right. And yeah, other things, the spelling, capitalisation, you can soon see what words it’s using quite frequently, so,
Ben Le Ralph: 12:21
totally. And there’s, there’s parts of it where it’s like your job is to make the con, like you as a person is your job is to make the content better.’cause chat, GPT can create some generic stuff. Yeah. But then there’s the element of like, if it’s just polishing the words or helping you think the fact that it did, that shouldn’t be something that you have to. Have as a secret. It always reminds me that there was a time when Wikipedia was seen as like trash. If someone call you referencing Wikipedia, they’re like, oh, you are not a serious person. And in the like the span of 10, 15 years, I think if someone’s citing Wikipedia, they’re citing it because they’re like, this is fact and there’s so many other news sources. So I think these things change.
Chris Hudson: 13:11
Fake news and all that. The fact that I’ve got a 13-year-old daughter and she’s watching YouTube, and that is all the truth, right? That is the truth. And it’s just somebody like you and I on this podcast, we’re just talking about what we think, but that’s taken to mean something, the associated meaning of an expression. It could be verbal content, written content, anything, but as soon as it’s made and it’s out there in the world, it’s considered to be real. That’s right. Yeah, it’s a hard thing, that sense of truth. It feels like the fact that it’s a pooled, crowdsourced thing, it’s bringing together a lot of sources of information, and obviously the more it’s being fed, the more it understands and the more nuanced it becomes. And that sense of crowd knowledge, the sense of the wider shared knowledge pool, it feels like you would in the end get to a definition of the truth that’s evolving and evolving and evolving, but it could become quite credible. What do you think?
Ben Le Ralph: 14:07
Oh, hard agree. I think there used to be this concept of the wisdom of the crowds, and it kind of plays on that a bit, right? There are probably a few ways to think about it. The first part is that they crawled a bunch of random content on the internet, and just by taking that large snapshot, taught these models how to speak English in a well-defined way. And within that “it can speak English” thing, there was some fact and there was some truth, but there was also some craziness, and there was just wild speculation about putting glue on pizza or whatnot, right? But then as people use it and provide feedback, people can add much better data sources, they can remove things that don’t work. There’s a lot of reinforcement learning that happens just from people using it, and I think that is what notionally makes this thing smarter and smarter over time: it wasn’t created by one person. It’s being used by so many people and getting so much feedback on a daily basis. It’s not the truth, but it’s what we think the truth is as a society right now, which can get a little philosophical, right? Like, what is the truth?
Chris Hudson: 15:29
it can, what’s driving what, I mean, it feels like if you’ve got smart people putting smart things in and feeding, giving it feedback, then it would educate If you’ve got. Dumb people putting, I can’t call people, dumb people, but if you put in like,’cause I was just running some research about this the other day and with AI. We were talking a little bit about how the difference between interaction with a conversational AI and a human and people are just abrupt to the point, direct borderline rude with the AI. You know, some people say you’ve gotta be nice to it. That’s another topic of conversation. But yeah, that better results very direct. Like, can you gimme this? I’ve got these items in my fridge. What am I cooking? You know, it’s kinda like you talk to it. Like, you wouldn’t talk to anybody else in your life. So it’s kind of the blandness of it could lead to a lesser version of the AI, but obviously that could be balanced with the people that are. Pushing its capability and feeding it with a lot richer content so that will always be in balance, I’m sure, as attention. What do you think?
Ben Le Ralph: 16:30
There’s a few things that I think in there, and I’m not an expert at model design or how these models are put together by any way. So, everything that I say is a grain of salt. But what I find interesting is. We are kind of moving as a society, I guess, as all communication has to just more like the amount of photos we have, the amount of interactions we have, like all of that is becoming more and more and more and more. And even like the printing press before that to the printing press and after was just. More people being able to share their experience and I think there’s something to the notion of smart put it, people putting smart things in and trying to protect from the dumb people. Not putting dumb things in is almost balanced. Where it’s like is if you just get lots and lots and lots of stuff from lots and lots of different people and the lots of different people, bit is I think the most important part. Which is that you just get so many different experiences of the world that don’t necessarily usually get picked up in things that were being published or talked about. And so it’s this balancing factor of yeah, there’s people who are saying things that are clearly like untrue by any scientific standard, but often those people are talking to an experience that they have or they think about things in a particular way or like all of that stuff. As long as the s can be tuned in a way to operate the way we expect. Actually, we are getting a much better breadth and depth of what’s actually happening in the world than we were probably having before when we were trusting a small amount of people to publish very specifically accurate things.
Chris Hudson: 18:12
Yeah. So from a representation point of view, it would be more visibly and accurately represented, for better or for worse. It’s what TikTok or YouTube is.
Ben Le Ralph: 18:25
That’s, it’s, it’s certainly a for better or for worse situation. We’re
Chris Hudson: 18:27
We’re thinking, we want to create this, and this is what we think people wanna see and read. So that’s how it’s gonna be.
Ben Le Ralph: 18:33
Look, I would base this just on looking at the transition from no websites, where all you could reference was books. I’m old enough to have just caught the generation where half of my schooling was done purely by going to the library and looking at physical books, and then the other half was, now we’ve got websites and access. And I think AI, if it’s just another order of magnitude up from there, it’s like, there’s more crap, but there’s also a lot more genuine, valuable stuff. That would be my...
Chris Hudson: 19:02
so it’s like having, that’s the, yeah.
Ben Le Ralph: 19:04
That’s what I would hold to in my framework, not knowing how the numbers or the math work under the hood.
Chris Hudson: 19:09
Yeah. The days of the encyclopaedia on a CD-ROM.
Ben Le Ralph: 19:13
Yeah. That used to be the gold standard.
Chris Hudson: 19:14
That’s all the knowledge in the world. Apparently. It’s on one disc and I’m gonna put it in my machine. I’m gonna ask you whatever I need, you know? Interesting stuff. So that will be the trend more and more so for navigating that more and more. It feels like we’d need some discretion, right? Like we need some judgment around how, how to handle that, what to take, what’s credible, what’s not it. It’s gonna be hard, isn’t it? Yeah, recommendations. It’s a bit like choosing whether to go with this airline or that, or with this shop or that shop. But it’s much harder than that.
Ben Le Ralph: 19:46
Certainly. Yeah, where it becomes interesting is when you actually get AI to do stuff for you, right? When they’re acting as these AI agents, book a flight or something slightly more complicated than that, and you actually see where the rubber hits the road, it shows you where they’re dumb in very particular ways. I’ve always thought of that saying, users aren’t good at telling you what they want, but they’re good at telling you what they don’t want once you show it to them. I think there’s a notion of that when you’re working with an assistant: when you’re just asking it for ideas, or you’re talking with it and it’s all just language, it’s a little bit easy to be overwhelmed by how realistic it sounds, or how human it sounds, and be like, oh yeah, that sounds credible, that sounds great. But when you get it to do an actual task where you can see that there’s a right or a wrong, like getting it to code something for you or book a flight, and you see it struggle, it probably gives you a healthy dose of, this isn’t magic, this is just a computer that’s easier to interact with.
Chris Hudson: 20:50
Yeah, it’s falling from grace. You, you put in this whole thing and you know, people are know, to some extent go quite in depth with their prompts now, right around what you’re asking it to do. So if you’ve written your masterpiece in your instruction manual and you’ve given it all the information you think it needs, and you get served a turd or you get something bad that you weren’t expecting, then that must be part of the learning experience. People have gotta be okay with that.
Ben Le Ralph: 21:16
Yeah, a hundred percent, learning to be a prompt engineer. This is something that’s changed, I think, dramatically in the last six months. There used to be this idea that you were gonna have people in your organisation who were gonna be prompt engineers, like they were gonna be the people who use AI, or everyone was gonna have to learn this skill. Whereas now I think everyone kinda has this realisation that you’re not gonna have one person there using the AI; everyone needs to learn how to ask a question and get an accurate response as part of their job. That’s gonna be something we’re all gonna have to get on top of.
Chris Hudson: 21:48
Yeah. Kind of like a waiter in a restaurant, you know, taking the order to the kitchen, and the AI’s gonna give you the food, and then you’re gonna take it back to the table. That’s a gated version, which would’ve resulted in a different AI in the end as well, presumably, if it had been curated by a certain set number of people, whereas now it’s open for everybody. So, yeah.
Ben Le Ralph: 22:09
Can I... I’m kind of prompted by a question you asked earlier, and this is potentially controversial. You might not think it’s controversial at all. So we’re both user research people, yeah? But one thing that I’ve found very interesting with AI is this idea of grey user insight. You know, if you’ve got grey water, it’s water, but I wouldn’t necessarily drink it; I’d put it on the garden, right? It’s a bit tainted, but it’s okay. When I work with organisations, I’ve started seeing the quality improve, and actually the quality improved by making the customer ever present, but not necessarily having that customer data be tied back to a specific user. Or the same with analytics within an organisation. There would be this whole thing about, how do we use our analytics within the organisation with an LLM if we can’t guarantee the accuracy end to end? Whereas what I’ve seen is that the decisions in an organisation are being made irrespective of whether they’re getting data or user research at all. And so taking a bunch of research reports that have already been conducted, feeding them into a bot, doing some clever prompting, and then just giving it to an exec and saying, run your thinking through this voice-of-customer bot, significantly improves the output of their decision making. One, because the AI is good. But two, because really what’s happening is, every time they go to do something, they’re thinking about the customer. The AI could literally do nothing, but the fact that they’re having to engage and frame their decision in a way that you would ask a customer, just that improves it. And so I think my controversial opinion is that AI lies, but so what? There are degrees to which I don’t necessarily think that matters.
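The voice-of-customer setup Ben describes, existing research reports folded into a prompt that an exec’s draft decision gets run through, could be sketched very loosely in Python. Everything below is illustrative rather than from the episode: the report excerpts, the function name and the decision text are made up, and the message list simply follows the system/user chat format that most LLM APIs accept.

```python
# Hypothetical sketch of a "voice of customer" bot: prior research is
# packed into a system prompt, and a draft decision is reframed as a
# question put to that customer voice. Excerpts here are invented.

REPORT_EXCERPTS = [
    "Interview round 3: customers abandon sign-up when asked for a phone number.",
    "Survey, n=412: 68% said response time matters more than price.",
]

def build_voc_messages(decision_draft: str) -> list[dict]:
    """Assemble a chat payload that forces the decision to be framed
    against previously conducted customer research."""
    system = (
        "You are a 'voice of customer' assistant. Answer ONLY from the "
        "research excerpts below; if they don't cover the question, say so.\n\n"
        + "\n".join(f"- {e}" for e in REPORT_EXCERPTS)
    )
    user = (
        "Here is a decision we're considering:\n"
        f"{decision_draft}\n"
        "How would our customers, as represented in the research, react?"
    )
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": user},
    ]

messages = build_voc_messages("Require a phone number at sign-up.")
print(messages[0]["content"][:60])
```

Notably, Ben’s second point holds before any model is even called: just assembling this payload forces the decision to be phrased as a question to the customer research.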
Chris Hudson: 24:00
Hang on, hang on. So where are the lies? That was the controversial bit. Where are they?
Ben Le Ralph: 24:05
Oh, so I think the controversial bit is, when I hear people in organisations talk about AI, their biggest thing is, oh, but doesn’t it hallucinate? Doesn’t it make stuff up? Yeah.
Chris Hudson: 24:12
yeah, yeah. That’s very interesting. I mean, I, I work a lot in research. As we were talking about just before the show, it’s. Yeah, the, the point of view, you know, the dynamics around leadership and obviously how, how close the opinions are held, you know, and how that can be liberated often is needing to be backed up by user research or customer research, and people are going to some links, you know, sometimes you’re talking to thousands of people and surveying them to get. Get the data that supports or, you know, disproves, you know, somebody’s highly important point of view. You know, that can be the case too. Yeah. So, so yeah, we’re, we’re, we’re thinking about that. But yeah, if it’s democratised, you know, the AI, if it was asking, you know, if the, if the CEO had this point of view, was asking the AI for the answer and the AI was, was not, you know, obviously not really gonna care about what the CEO thinks because it’s gonna give the answer that thinks it’s the right answer. Um, then that, that will change political dynamics, you know, within, within team structures and decisioning and all of that, which, which could be interesting. Certainly good decisioning out of the mix from an intro intrapreneurship point of view, from an organisation point of view, because it can be neutralised as something that is, you know, it’s crowd learned and you know, all the data that’s feeding into it. It’s the collective wisdom as we were saying. Then, you know, what’s that gonna free us up to do? What are we gonna be doing if we’re not talking about what the decision should be?
Ben Le Ralph: 25:37
Yeah, totally.
Chris Hudson: 25:40
Yeah. I mean, yeah. Sorry. Do you have a follow on?
Ben Le Ralph: 25:43
I could talk about that forever, but yeah, okay. You go.
Chris Hudson: 25:48
yeah. We’re, how far away from that do you think? Do you think we’re close to that? Or do are we, are we just learning the baby steps? At the moment,
Ben Le Ralph: 25:56
I’m not necessarily convinced that. AI is gonna take anybody’s job, to be frank, I, the way I see it playing out realistically is that there are a lot of people trying to figure out how to reduce head count using AI, and I haven’t seen a single version of that that has worked to meaningfully reduce the amount of people that they have. Yeah. While keeping a level of quality, like there are actually plenty of high profile examples where people fired a bunch of their support staff and then started hiring them back. Well, they said they weren’t gonna hire any more developers. And then you just look at their LinkedIn profile and there is developer jobs in there for days. Mm. And so I think what will happen is that anything AI can do becomes commoditised right away. It’s like by definition, if AI can do it, yeah. Any organisation can do it. And so I think much is more, what’s more likely to happen is that it raises the bar. For everything and so that there are certainly things that you used to spend a lot of time in your day doing. Whether you’re a researcher or a developer or any of these kind of knowledge work professions, they’re like, what you do during the day will change. But I’d be very surprised if what happened was it did all the work and then we all sat there and we were like, I dunno what we’re gonna do. Yeah. Mostly because I don’t know what your experience is like, but my experience within organisations is that. There are a lot of people who have a bunch of stuff that they say they should be doing. Talking to customers is like a great example. Yeah. But they never actually have the time to do that stuff, and so I don’t think you ever get to the end of a list of tasks.
Chris Hudson: 27:40
So all they’re gonna do is talk to customers? Is that what you’re thinking? They’ll be out, they’ll be around their homes having tea every day.
Ben Le Ralph: 27:49
If that's the competitive advantage, then that's what you'd end up doing. Yeah. No, it's a bit facetious, but I don't actually think that that's all that unlikely. Particularly because of what I think will probably happen, and this is the difference between, like, a strategy execution gap, right? You used to want to do things within an organisation. You'd have all of these brainstorming sessions, you'd have a lot of strategy, and they would all come down to this one bottleneck, which was delivery. You just could not do that much. Whereas what I'm saying is that delivery is starting to become much less of a bottleneck, and so more ideas can flow through, which requires a lot more strategy, research, customer insight to be able to actually create good stuff. Because the "can we do it" is no longer the bottleneck that forces decision making, if that makes sense.
Chris Hudson: 28:44
Yeah, yeah. I mean, that is an interesting point in itself, because I wondered where it would leave our friend, strategy, in the mix of this conversation. Because what we've seen, and I don't know if this is relating to AI or not, but certainly over the last 12 months it feels like senior management in a lot of cases, in big corporates, is being trimmed back. There are a lot of redundancies here and there. There's a real focus on shipping the work and delivery: it's engineering, it's dev. Obviously, from a product point of view, if you're running a software business or whatever, you just need to get the design done and out there into the world so that you can see whether it works or not. But strategy, where does it sit? Is it just being directed from above? The implementation, the design, can often now, with AI, just be driven with a few simple prompts, and people get to a version of something creative and it ends up being evolved a little bit. But there's either a danger of the strategy being missed, or a danger of the executional aspects just being a bit grey water, as you were describing before. Can we get around some of that and still keep it joined up, do you think?
Ben Le Ralph: 29:49
Yeah, I think strategy is about to have a renaissance, to be honest. I think there are certainly gonna be companies...
Chris Hudson: 29:56
Renaissance, that feels like the right word, right? Traditional word to use.
Ben Le Ralph: 30:00
Yeah. And I think, and this is a bit rogue, but I think it will be, yeah, traditional strategy. It's hard to speak in such broad terms, but what I've experienced is that there are a lot of people who do strategy where that isn't how I would describe what they do.
Chris Hudson: 30:16
Yeah. Okay.
Ben Le Ralph: 30:17
Yeah. I just don't necessarily think that those roles are often being all that strategic, and it's not their fault. It's just that what they're being asked to do is: we've got all these requests coming in and we've got one development team, could you run the strategy of what essentially gets prioritised through that pipe? And if the pipe gets bigger, you don't need that job quite so much. But if you are executing at three or four or five or ten times the amount that you were doing before, and that's not all directed, then you're gonna see organisations just splinter at 2, 5, 10, 50 times the rate that they were before. I'm always amazed at just how much waste is happening within organisations because there are teams who are executing things unaligned or unrelated to each other. And without good, solid, traditional strategy that's aligning these teams, you won't be able to hire a bunch of people to sit between those teams to do the gatekeeping anymore. It just won't be something a person is physically able to do. And a specific example of this that I found interesting, and it's more of an experiment that I run with people than anything else: obviously you've got your organisational strategy and you've got your organisational values, and they're defined to various degrees of clarity. And then, say you're a manager working across a department, you've got all the things that people are typing into an LLM like ChatGPT, and you've got all the emails that they're sending on a daily basis. And running their emails and their chats through an LLM, essentially saying, can you articulate the strategy this person has? What are they actually executing against? It is amazing how polar opposite they can often be. They're just disconnected. And I think ideally, where LLMs can play a really interesting role in strategy is keeping that aligned.
Is what we're executing actually what we're saying we're gonna do? Because we don't have to have a meeting to figure that out. We can use other metrics.
Chris Hudson: 32:31
Yeah, it presents an interesting ethical question as well, around what data it's got, and whether it's just answering the question that you've asked it about the strategy. Because it could, by the same token, answer questions like: what is that person actually doing for the company, and how are they spending their time? All of these taboo-type questions that come up. So I think from a leadership point of view, that data access and consent, some of that privacy conversation, comes in. It's not just about the outside world and giving your data away to a bot or an AI that is learning from it, because in this case it could influence either your or somebody else's career, right?
Ben Le Ralph: 33:09
I think that's right. I think the difficulty is, if I was coming at it without ever having worked in corporate, I would come out pretty hard and say: if you are at work writing emails, and someone comes to you and asks, what do you do on a daily basis, and you can't answer that question, I think that's a problem, and I don't feel like that's necessarily unethical. It feels fundamental, 101 stuff. I don't think that that is ethically problematic. But we both have worked in large corporates, and so we both know how that gets manipulated or used, like screen-capturing what people are physically doing or tracking mouse clicks. All of that becomes gross, and I think it stems from not fundamentally understanding the value that people are creating. It actually comes from a place of not understanding what people do. If you are trying to quantify what someone does by hours spent or emails written or whatever, then you've kind of missed the point.
Chris Hudson: 34:13
Oh yeah. But there's a lot of that going on. You know, the witch hunt, it's always happening, it feels like. Whereas the point you were making about, oh yeah, that was written by AI, wasn't it? So what did you do with the rest of your day? Yeah, that's gonna be a conversation. So if you're on the receiving end of that sort of feedback, how should we be preparing for it, do you think?
Ben Le Ralph: 34:32
Probably not the right person to ask, necessarily.
Chris Hudson: 34:34
You're an evangelist for AI, so...
Ben Le Ralph: 34:37
Yeah, and I would also, I guess before that, be kind of an evangelist for not measuring output, or not measuring someone's value within an organisation based on output or time. They always seemed like crazy metrics to me, and I'm not the right person to ask because I think that's a stupid thing to do. And I acknowledge that there is management out there that thinks that that's the best way to get the performance out of their team, and I think in the long term that will prove to be incorrect. To the extent that they use AI to do that, in the same way that they've used other tech to do it, I think there need to be good, solid frameworks, rules, laws from a government that understands this stuff, to protect people from blatant misuse of it. I think that's probably very important.
Chris Hudson: 35:25
Yeah. Alright, maybe I'll ask you a more comfortable question, which is probably leaning more into the positive sides of AI and not so much the negative at this point, but it's also in regard to that witch-hunt aspect. So if you're an intrapreneur within an organisation, maybe using some of the guerrilla tactics or maybe not, how do you positively create good news and groundswell around AI in a way that you've seen work within organisations?
Ben Le Ralph: 35:51
I think, fundamentally, AI is a real good word to be able to attract some funding for an initiative that you've got going on. Honestly, with so many of the clients that I work with, we are not really rolling out AI. What you are actually doing is you come in and look at a business and say: what are you trying to achieve? What are all the things that you don't want to do, or that are taking you away from what you're trying to achieve? And then from that you say, great, how can we use this new suite of technologies that have come out to enable those business outcomes? So probably what I would say is, use the term AI to the extent that it is helpful to you, to the extent that it allows the organisation to give you some money to experiment, or to try something new, or to go through a different approval process. It's good, particularly these days, at getting some funding for something interesting. But if the project is just about AI, it's probably not gonna succeed in the long term. It needs to be tied to some kind of customer outcome, a business outcome, or an employee efficiency outcome. It's gotta be a process attached to something real, not something AI.
Chris Hudson: 37:08
Yeah. Because I reckon that when it first came out, there were all sorts of projects that were just going up and out because they had AI stamped on them, right? Yeah. But what you're saying is that there's more to it now. So with the business case around AI, you'll still get the money, but you're gonna have to put in a proper case for it now. What do you think has changed there?
Ben Le Ralph: 37:28
I think it's maturing and people are becoming smarter about it. We've gone through a phase where there are enough case studies now to see where it works and where it doesn't. I think within larger organisations, things like AI, and Agile is a very good example. I don't know if you're familiar with Agile? Yeah, yeah. So I think Agile's success as a fad is certainly dying off now. I would say its success came from organisations having this core problem: they were rigid in how they worked. They felt like people couldn't move, like they weren't literally agile. Someone needed to come and throw all their processes out and come in with a bunch of new processes. And where it succeeded was when Agile was used as an excuse to spend some money on internal reorganisation, to create an environment where people could have all of these other things that I wouldn't necessarily say were Agile. If you could use an Agile project to improve psychological safety, big tick. If you could use Agile to come in and get rid of a whole bunch of hierarchical approval, big tick. If you came in and used Agile to just set up a bunch of Scrum ceremonies, then that didn't really help anybody. And the reason I say all of that is that I think companies' technology at the moment has kind of got a bit stale. People have their apps, or they've got their website, or they've got their business set up on a bunch of technology, and if someone came through with some budget and said, how could you significantly improve these things? people would have a bunch of answers. Plenty of research has been done about how all this stuff could be improved. And so, to the extent that you can use AI to focus that will internally...
That's where I think organisations are succeeding. Where organisations are like, oh, we're gonna spin up our own LLM and pump a whole bunch of data through it and do a whole bunch of stuff, which is just AI without being attached to any kind of user outcome or business outcome, then I think those are failing. And each one of them that fails gets talked about and is just another reason that people are becoming much more analytical about how these projects should work.
Chris Hudson: 39:44
Yeah. Yeah. Wow. Alright. So Agile’s gonna be killed by AI. You heard it here first, but
Ben Le Ralph: 39:51
Yeah, that's probably the most unfair take. If I get any heat from this podcast, it'll probably be from talking bad about Agile.
Chris Hudson: 39:57
They're gonna be coming for you. You know, there are some passionate folk out there that love a bit of Agile.
Ben Le Ralph: 40:02
So, yeah. Yeah. Look, I've taught many, many people Agile, so...
Chris Hudson: 40:07
yeah. Well, yeah. You’ve turned, you’ve changed.
Ben Le Ralph: 40:12
I've changed. I know, I've jumped on the new bandwagon.
Chris Hudson: 40:16
Oh, it's funny. I mean, the fact that a generative, knowledge-based tool, as it was probably first understood in a mass context, now, from what you're saying, has applications for collaboration, to the extent that it could fix some of that, is kind of interesting. Because you're not just using it like Wikipedia; you're trying to make it change the way in which the organisation runs. So from an operational point of view, what's it gonna take to make that bridge possible, do you think?
Ben Le Ralph: 40:48
What I'm pretty excited about is that what works well is when small, cross-functional groups of people work together on an end-to-end delivery, right? It just feels, and there's probably research that backs this up that I've read and consumed and am kind of summarising badly, but I think there is this sense that people work better when they're interacting with a small group of other people and have a direct idea about what they're actually trying to do. And the thing that I'm excited about when it comes to AI is that there is only so much bandwidth that people can actually store within one of those teams. Traditionally you couldn't have legal and marketing and customer research and development and all of that in one team, and so you would have to branch out, and then these projects would get bigger than anyone could understand, and that's what leads to failure. Whereas I like the idea of small teams within an organisation who have access to, say, 70% of the information they need from legal and 50% of the information they'd need from marketing. And so what that ends up looking like is you've got a marketing team and a legal team and an ops team who are there enabling the business, and their job is to, one, make sure that their LLM knowledge base tool works as well as it can and is actually delivering value. And then their other job is doing the other 20%: what actually is the legal expertise that you're paying this person for? And having that person have enough time to actually get into the team that needs their expertise at that moment.
Chris Hudson: 42:27
Yeah. Right. It's two days a week now, rather than five days a week they've gotta be in the office. I don't know, it could prompt some of those conversations, maybe. Yeah. Well...
Ben Le Ralph: 42:34
That's a question about value, right? Yeah. Like, say you're a lawyer with a particular skill set in acquisitions. The company needs to have you as an employee because they're gonna do acquisitions. Do they literally need you five days a week when they're not doing that process? No, probably not. And this is where you can be pessimistic or optimistic, either way. The optimistic read is we end up doing higher quality work, or working four days a week. And the pessimistic read is all of those people become subcontractors and have to jump from organisation to organisation to make that work.
Chris Hudson: 43:11
Yeah, I mean, fractional roles have been quite popular from a gig economy point of view, but also from the point of view of just having shared resources and access to experts as and when you need them. So a decentralised model can work for a lot of companies; it just depends on whether that's right for them, and obviously some need it in house. If you need your legal counsel, or if you're in IP or whatever you're doing, you're gonna have to have people at a point of escalation. So I think it just depends a little bit on the business model. But yeah, it raises an interesting question, and given the rate things are changing, some of these behaviours and changes and trends will probably start to take shape in the next six to 12 months, right? What do you think?
Ben Le Ralph: 43:50
I think so. You go through a whole blockchain technology revolution where everyone tells you that in the next six months blockchain's gonna change everything, and you're like, yeah, yeah, yeah. And it never comes. And then someone comes through with AI and they're like, AI is gonna change everything. And you wanna have a bit of hesitancy about whether this is actually gonna be as transformative as the technologists suggest. But I'm always actually surprised by how slowly organisations move. So yeah, I think it's gonna change quickly, but I'm often wrong about exactly how fast.
Chris Hudson: 44:21
Yeah. That's a question for all the people out there listening to this, right? How fast do you wanna go? How brave are you? I think there's a question around risk. You know, risk often comes up, and these sorts of things get thrown about: they want to mitigate risk, but also manage any kind of issues with continuity, basically, in the service model and the operational model. You can't just turn it all off and throw something else in overnight, but there'll be ways in which that could be tried out, presumably quite easily. So yeah, interesting, certainly.
Ben Le Ralph: 44:53
Yeah, I think that's probably the biggest thing that's changed in the last three to six months: people have gone from "the risk of doing things with AI is too high" to the risk of not doing or understanding AI starting to build up on people.
Chris Hudson: 45:07
Oh, yeah.
Ben Le Ralph: 45:07
Like, organisations are starting to be like, look, if we miss the boat on this, that risk is starting to outweigh the "what happens if it's bad" risk.
Chris Hudson: 45:16
Yeah, yeah. And why is that, do you think? What’s behind that being a risk, do you feel?
Ben Le Ralph: 45:22
Probably... I'd love to give you a much more academic reason, but I think it probably is just, call it lizard brain, just being like: I'm seeing more and more people take this thing up. And I think that uptake is genuine, but there's only so much you can see of that in the media without it cementing itself in there as something that you should think about.
Chris Hudson: 45:44
Yeah, yeah. A couple of other questions. What about the argument around it being detrimental to the experience of learning, more of a pedagogy question, this kind of understanding that if it's all out there and written for us, is it teaching us not to think as much, or to think as laterally? It's particularly relevant, obviously, not just within corporate contexts but within schools and universities. Is it just spitting out the answers, and what does that mean for us all?
Ben Le Ralph: 46:14
I really grapple, or wrestle, with this. Look, there's research out there suggesting that people who use these tools think less deeply and are, by some measures, dumber. So that's hard to refute, I guess. On the other hand, it's never been my experience that a technology that allows you to do something more has resulted in me understanding it less, and often what you see is the more people can get their hands on the thing, the more they go down the rabbit hole of how it actually works. And I wonder what's gonna shake out in the wash. Like, if we look back ten years from now, maybe what we say is: turns out the technology wasn't as bad as we thought, but social media was five times worse than we thought it was gonna be. It's hard to unpick, necessarily, but when you talk about the research around what's making us dumber or what's making us more unhappy, a lot of it kind of isolates down to this weird dynamic that we've created for ourselves, where we put out only the best stuff, don't put out any of the bad stuff, and create these weird microcultures. So yeah, I would worry about that in particular. I think probably the other nuance, and look, I'm just talking, I should ask what you think, but the other nuance or experience I would have is that I'm dyslexic. And so I feel like I was born at exactly the right time, because computers have always done a lot to get me from being awful at school and learning, and not really being able to participate at all, to actually being someone who I feel is quite well educated, because I was able to bring the technology together. Even on the smallest things, being able to just smash out some thoughts and have AI put all the commas in the right spot and then send it out has been, yeah, amazing. Or turning text into audio, man, also amazing.
And so, I don't know, there's a whole massive answer in there. What do you make of it? This is probably an area you might understand a little better than I do.
Chris Hudson: 48:22
Well, that puts me on the spot. I think there are obviously two different schools of thought. One is that the process of learning is training your brain and giving you that sense of critical thinking, as to what should be taught at schools, what should be taught at universities, what professional development should in the end look like. They're all big questions that could be taken one way or another. Some would say you should be using pen and paper; some would say it's okay to use an iPad or an AI. It's all being used for almost the same purpose, but there are different methods. So I think it'll probably just come down to preference, and parents are always gonna have a preference around where and how their kids get educated on that basis. But I just wonder if it puts the brakes on. You've gotta wonder, when you've got kids, whether, if they're just consuming created content, and that content is so much more readily and frequently available, how much more of that can we take? Is it putting the brakes on them creating for themselves, or is it inspiring them? It's completely subjective, and probably down to the individual, as to what impact that would have on you longer term. The person that isn't in any way connected to technology, on a nice island somewhere, they might end up learning in different ways too. So it feels like it's all formative, right? It feels like you would go through your walk of life understanding and seeking out what is interesting to you. And if you wanna learn differently, then you can learn differently. If you don't wanna do it, then fine. But from a more democratic point of view, it feels like, particularly within an organisational context, there should be some sense of choice, ultimately, as to whether you want to create your work in one way or another.
And the comparison of those doesn't need to be made. It's just about whether the work can be done, whether it can be done to a good enough standard, whether it's exciting, whether it's inspiring. So I feel there's a lot to figure out. And obviously you mentioned social media; you can't just separate that from AI or from technology, because it's all interrelated now. So I don't feel like we can really avoid it. But we can avoid the metaverse, right?
Ben Le Ralph: 50:40
Yeah, as a society, let's all just agree that that's not something that we have to do. Yeah.
Chris Hudson: 50:47
Yeah. Super interesting. So Ben, I really just want to say thank you so much for coming onto the show. It's been a fascinating discussion. We've talked a bit about your own personal work and the work that you do within your consulting practice around AI, but obviously you're also going into organisations and helping to set up some of these systems and fixes for what people need. So I'm sure people will have questions and would like to get in touch. What's the best way for people to reach you?
Ben Le Ralph: 51:13
Yeah, great question. Look, LinkedIn's probably the best. Any of the social media platforms are great: LinkedIn, TikTok, Instagram. Search for AI For Busy People, or Ben Le Ralph. The algorithms will help you find me on any of those platforms. That's probably the easiest way to get in contact with me.
Chris Hudson: 51:30
Good stuff. Thanks so much. And yeah, LinkedIn's in trouble, right? What do you think? There's a lot of content being pushed out there that's just no good anymore.
Ben Le Ralph: 51:38
Oh my God. Yeah. Do you know what, actually, I have a very different answer about this. A month ago I was on a big push that LinkedIn is, like, the literal worst, and I would complain about it to everyone. I'm in this phase where I'm trying to get myself out there, I'm trying to market, and I just hated LinkedIn. And an old friend sat me down. He's been in the corporate world forever, and he just looked at me and said: generally speaking, I don't know much about social media, but generally speaking, networks are as good as you make them. And so I've actually had more success recently with the perspective of, I don't know, maybe the main feed is full of a bunch of people making pictures of themselves that all look the same, but surrounding yourself with good people on LinkedIn, or small groups or whatever, has actually, I don't know, been a real benefit.
Chris Hudson: 52:33
Yeah. Yeah.
Ben Le Ralph: 52:34
To me personally, so I can’t talk too badly of it actually.
Chris Hudson: 52:37
Yeah. Okay. Good, good. There are some nice campfire moments out there for those that are looking. So if you need to, you've gotta...
Ben Le Ralph: 52:43
wade through a lot of sludge.
Chris Hudson: 52:45
Yeah, bring it together yourself. Yeah. Cool. Well, we'll leave it there. Thanks so much, Ben. Really appreciate you coming on the show, and thanks so much for sharing your knowledge and your wisdom. We'll let you get on with your evening.
Ben Le Ralph: 52:59
Thanks so much, Chris. It was, yeah, great to be on. Really appreciate it.
Okay, so that’s it for this episode. If you’re hearing this message, you’ve listened all the way to the end. So thank you very much. We hope you enjoyed the show. We’d love to hear your feedback. So please leave us a review and share this episode with your friends, team members, leaders if you think it’ll make a difference.
After all, we’re trying to help you, the intrapreneurs kick more goals within your organisations. If you have any questions about the things we covered in the show, please email me directly at chris@companyroad.co. I answer all messages so please don’t hesitate to reach out and to hear about the latest episodes and updates.
Please head to companyroad.co to subscribe. Tune in next Wednesday for another new episode.