Work's Not Working... Let's Fix It!

People Aren’t Data: How to be an AI Savvy Leader with David De Cremer

Siân Harrington, Season 2 Episode 5

In this episode of Work’s Not Working, Siân Harrington speaks with AI and leadership expert David De Cremer about the challenges business leaders face when integrating AI into the workplace. They explore how the rush to adopt AI can often miss the mark by focusing too much on technological solutions and not enough on the human elements that make successful AI integration possible.

David argues that leaders are often overwhelmed by AI’s potential and mistakenly delegate the responsibility to tech experts, which results in a lack of alignment between AI’s use and the organisation’s business goals. He highlights the need for an "AI-enabling" culture, where AI serves human intelligence rather than replacing it, and stresses that ethical and human-centred approaches are essential for long-term success.

Throughout the conversation David shares practical strategies for fostering a balanced approach to AI adoption, ensuring that it augments human creativity and decision-making. He also touches on how leaders can upskill their teams, manage the risks of over-reliance on AI and avoid the pitfalls of treating people as mere data points.

Key Takeaways

  • AI as an enabler, not a replacement: David emphasises that AI should be seen as a tool to support human decision-making rather than something that diminishes human involvement. Leaders must create AI-enabling cultures that put people first.
  • Leadership’s role in AI adoption: Leaders often feel disconnected from AI implementation, delegating it to tech teams. David highlights the importance of leaders being AI-savvy, actively participating in the process and aligning AI use with business goals.
  • Human-centred leadership: The conversation underscores that AI adoption should not reduce employees to data points. Ethical upskilling and clear communication about AI’s role are critical to maintaining trust and employee engagement.
  • Balancing innovation with responsibility: David warns of the dangers of rushing into AI adoption due to competitive pressures. Thoughtful implementation that considers both the opportunities and challenges of AI is key to realising its benefits.
  • Soft skills in the AI era: As AI takes on more technical tasks, soft skills like empathy, creativity and collaboration become even more important. Leaders must foster these skills in themselves and their teams to thrive in an AI-driven future.

About David De Cremer

Professor David De Cremer is a world-renowned expert in leadership and organisational transformation in the AI era and author of The AI Savvy Leader: Nine Ways to Take Back Control and Make AI Work.

Interested in insights about people leadership, HR and the future of work?
Seize and shape the future of work with The People Space, a leading digital HR magazine for forward-thinking leaders. We empower you to put people at the heart of work, navigating the evolving intersection of technology, business and human insight. Join us in building a future where people and machines collaborate for a more human-centric workplace.

David De Cremer (00:00)

I see many companies making the mistake: okay, well, we're going to adopt AI. As business leaders, we've approved the budget, so it's fine. So the CTO comes to the town hall meeting and explains it. And in the passion that the CTO has for technology, I usually see speeches like: AI will be able to do this and this, and we're going to optimize this. And they use wording like optimizing, efficiency, reducing errors – very abstract, very tech-driven, an engineering kind of design mindset.

And I've seen it so many times: after that, the CTO comes down from the stage and I heard one CEO say, that was really good in terms of technology, but I don't think people get it. I actually don't think people like you anymore. And what happened here is a disconnect. Because when you listened then to the people, they were saying: what is happening here? What does this mean for us? I'm just being treated as a data point. I don't exist. It's all about technology. And that's where the connection has been missed. So as a business leader, you need to be extremely aware that you're the connecting point.

Siân Harrington (01:11)

Hey, everyone. Welcome to Work’s Not Working, a show about forward-thinking people leaders, innovators and academics and how they think we can fix work to make it more meaningful, healthy, inclusive and sustainable. Brought to you by The People Space.

I'm Siân Harrington and on the show today, David De Cremer on why leadership is failing to harness AI effectively and how to take an AI savvy leadership approach to fix this. As AI becomes increasingly integrated into the workplace, it's clear we're at a crossroads. Leaders face the challenge of not only adopting AI but doing so in a way that augments rather than diminishes human potential.

David argues that the key to navigating this AI-driven landscape lies in understanding that AI is a tool, not a saviour. It's about using it to enhance human decision making and not to replace it. So later on we'll dive into practical tips on how to avoid the dangers of over-relying on AI. We'll discuss key steps to fostering ethical upskilling in your organization and David's concept of AI-enabling cultures.

He'll share how leaders can balance the fear of missing out with the need for thoughtful human-centred implementation. Plus David tells me how he's personally using AI to enhance his work as a business school dean and how, if we're not careful, AI could actually make us more mediocre, not creative. 

But first, let me tell you a little bit more about David. Professor David De Cremer is an expert on developing knowledge and applications to understand more deeply how leaders can be effective and transformative in this new technology era, how building trust can be used as a business asset, and why ethics-based compliance and governance is key in driving human-centred tech transformations. His research in these areas has led to several bestselling books, including Leadership by Algorithm: Who Leads and Who Follows in the AI Era and his latest, The AI Savvy Leader: Nine Ways to Take Back Control and Make AI Work.

Currently David is the Dunton Family Dean of the D'Amore-McKim School of Business at Northeastern University in Boston, in the States. He's also a Professor of Management and Technology and an advisory board member at EY for its global AI projects. Before moving to Boston, he was Professor in Management at the National University of Singapore Business School, the Director and Founder of the Centre on AI Technology for Humankind, and a member of the Scientific Committee of AI Singapore.

David has received many career awards and accolades for his scientific work, including being named the most influential economist in the Netherlands, a global thought leader by the Trust Across America, Trust Around the World organization, and one of the world's top 30 management gurus.

So let's dive now into why leaders must rethink their approach to AI and what it truly means to be an AI savvy leader. I start by asking David about the pressures leaders face today in adopting AI and why it's so critical to get it right. 

So I wondered if I could start by asking, what isn't working about work today in the context of leadership and AI? 

David De Cremer (04:26)

That's a good question. What do leaders, what do employees, what does the human workforce experience now that AI is in the workplace? Because, as we all know, AI is inevitable. That's the narrative that's being used. So we'll probably delve deeper into how we're going to use it, but just the mere presence of AI, the idea that AI is going to be in your workplace, brings pressures with it. And I can see in my consultancy, in the interviews I have with employees and with leaders, that these pressures are pretty diverse. It can be in terms of not being able to live up to the expectations because the change being introduced is so massive. As we know, it's almost like the myth of AI: it's magic. People are scared, they have fear. Am I going to live up to expectations? What is it going to do to my job? So there's job uncertainty. We all know that job uncertainty is a stressor in people's lives.

It also leads to issues of trust. What is this technology about? How will you use it? What does it mean for me? Can I trust this? And if I don't have that trust, because I don't feel any sense of control or autonomy in my job anymore, what I've observed sometimes is that people will rebel against it in different ways. They will try to avoid using it, they will even sabotage it, or there's simply plain resistance, as we see in any change management project. Obviously all those feelings, experiences, perceptions are stressors.

And some of my own research has shown that if these stressors are present, meaning that leaders have not explained very well the meaning, the reason, the purpose of using AI and in what way, it leads people to sleep less, be stressed at work and work long hours not knowing what to do. And that uncertainty in itself, of course, we know leads to burnout.

So it's extremely important for leadership today to really facilitate the introduction of AI, to treat it as a change management project where, as I always say, and also in my book The AI Savvy Leader, it's humans first. AI is in service of the human.

Siân Harrington (06:44)

Yeah. And as you mentioned, you talk about how leaders are beginning to value this computational prowess that AI has and are increasingly relying on AI over human understanding. Why is that dangerous? And how do you think it is currently impacting decision-making in organizations?

David De Cremer (07:03)

Well, the fact that leaders see the computational powers of AI as even more important than the human workforce at the moment has, in my view, really been stimulated by how we talk about it, how gurus introduce it, how we say it's inevitable. The word inevitable in itself is already: my God, this is world changing. But how is it going to change my world? It's not only rocking the boat, it's going to be a different boat. And am I still going to be on that boat? That is a lot of pressure.

But it's just so overwhelming that I can see it's not only employees, it's leaders, business leaders as well, who are completely blown away by it. Because don't forget, when most business leaders at the executive, the C-suite level were educated, things like AI and sustainability weren't on the curriculum. They were not introduced to that. So they are from a generation that is not AI savvy, so to say. And that means they see this as, wow, it's tech only and it must be driven by people who understand technology, by tech experts. And what I observed there is that as soon as AI comes into the company they delegate right away to tech experts. And the problem that starts then is: yes, but you're using it for business purposes, and tech experts are not trained as business experts. The business leaders are afraid of it. They step back. They think, I don't know anything about AI, why would I even be involved in the transformation process? And the right business questions are not being asked.

And that's a problem. That's a problem for the company because there's no alignment between why you're using AI, in simple terms, and your organizational purpose, your strategy. That alignment doesn't exist simply because AI is magic. People feel overwhelmed. Business leaders step out. We don't answer those questions. The company is not benefiting.

And as I pointed out earlier, the workforce is suffering because: where's the meaning? Why are we doing this? And what does it mean for me? Those questions are not answered either, because business leaders are not part of it; they're not participating actively in the transformation process.

Siân Harrington (09:21)

It's interesting. And as you've said in things I've read that you've written before, we mustn't forget that AI is the tool. It's not a saviour, it's a tool. You've looked at other applications of technology in business. How does this compare to other leadership missteps that we've taken with technology? I know you've referred to big data, and we can learn some elements from that. So how do you see business leaders approaching AI compared to those, and what can we learn from that?

David De Cremer (09:50)

Yes. AI is of course not the first technology, as you say. There have been missteps in the past. What I saw go a little bit wrong there was the assumption: hey, we sit on data and we should use this data to get more insights. That's a good one. But people started over-relying on the idea that the more data you have, the better, because we will have more information. Their mindset became: oh, that's more accurate, which is not necessarily always the case, and because it's more accurate, it becomes the guideline. So basically, what the data says is how I should act.

And again, this is about the data and its alignment with your organizational purpose. Meaning you need to ask questions, and then you can see whether you have the right kind of data. If you don't have the right kind of data, you go out and collect more. But that's what leaders needed to do with their tech teams and data analysts, because they're waiting for that question.

So I always joke with executives. When I ask, so what is it that you want to achieve and how are you using technology, how are you using data, I hear: “I'm waiting for our data analysts to tell me what is in the data.” And I say, you're overpaid and they're underpaid. And they say, “what are you talking about?” Because basically you're saying: I'm waiting for my data analysts to tell me what the purpose of my company should be and which steps I should be taking. Meaning they're doing your job. Whereas you should be asking the questions for your organization, because there's always a why: why are you in business? What is your organization about? Who are your customers? What do they care about? You have to phrase the business questions.

So you see, here's the transition to the application of AI, because AI relies on data. That's where it gets its information from. So again, AI is a tool which is assisting. Hence, like I said, it's in service of you. It's in service of human intelligence. And that's where business leaders miss the point as well. When AI as a technology in itself then arrived, it's so smart that they already forgot about the data issue: it's so smart, it's going to tell us what to do.

And, first and second, AI cannot be held accountable because it doesn't have an opinion. It doesn't make judgment calls. So data is only meaningful if you know what you're doing. Hence, when I ask the executives I meet what exactly they want to use AI for and they can't answer the question, I'm actually a little bit afraid that they also don't know the purpose of their work. Those two are aligned.

Siân Harrington (12:29)

Yeah, there are good points there. And data is still such a massive issue. It's probably what I hear more than anything from leaders: we're still sitting on all this data and we don't really know what to do with it. But you're so right. You can look at data – even I look at data sometimes in relation to my business and it's telling me one thing, and actually to go down that route isn't necessarily the right way for the business, because that's not where we want to be. We're not collecting the right data. I think it's still such a huge chasm to jump over.

David De Cremer (13:00)

Well, I wouldn't say that all of us are not collecting the right data. No, we are collecting data. But the idea that emerged, and this is the challenge, the threat, coming back to your question about not understanding the relevance of data and how to use it, is that people think the more data you have, the better. And this goes along with thinking AI should be driven by tech experts only, because I don't understand it. But this is crucial to why you as humans work with data and lead with data: there's always a point at which you stop looking at the data and make a judgment call to say, I know enough, I'm going to make a decision now. And that judgment call is a unique human responsibility. So, putting everything together: when business leaders see data as guiding them, and AI as something they don't understand and that tech experts who are not business experts should drive, you're actually acting irresponsibly.

Siân Harrington (14:00)

That's a great point. So in your book, your new book, you're advocating for AI-savvy leaders, leaders who integrate AI to augment rather than replace human intelligence. You've got nine ways that leaders can take back control and make AI work effectively. For time's sake we can't go through all nine, even though they're all great, but I wondered if you could briefly walk us through some of the core ones that you think are important.

David De Cremer (14:30)

Okay. So yes, the solution is of course that we need more AI-savvy leaders, and hence this is why I wrote the book: to address some of the problems that we've already discussed here. So what does it mean to be an AI-savvy leader? I connect it, as you've seen in the book, to leadership behaviours that we've been using for many decades. Your workforce will change. It will be human-AI interaction rather than humans only or AI only, which means there's another dynamic. You need to manage your people in being able and willing to deal with this new technology. And you need to make sure that AI is implemented in human-centred ways, meaning in ways that are intuitive enough for humans to start learning the tool. Instil trust in the tool, explain why you're using it, give meaning.

Be trustworthy as an organization and as a leader yourself, because it means: we have the right intentions and, because we have the right intentions, we have open and transparent communication. Try to have flat communication styles, because this is the thing: now you bring AI in, and AI is not an optimal model from day one. You need to train AI as well, because it needs more. It learns from the data. It starts to learn from your feedback. Today, with ChatGPT, prompts are actually your feedback. It's a reinforcement system. It's learning.

So that means as a leader, you need to create a context where people feel safe and trusted to work with that AI, and where you can build in feedback cycles that are flat, meaning flat in terms of hierarchy.

Because AI is a different animal here, but it's part of your workforce now, and you have to learn. And this is still really change management principles: create a climate where people feel they're participating, where they can provide feedback and it is heard right away. So it's very participative.

So that's why I'm saying AI adoption in itself is actually an inclusive act. Tech experts and non-tech experts are involved. Feedback is provided very quickly. As a leader, because both kinds of experts are involved, you also need to be able to develop a narrative. You need to explain why AI is used, and what kind of AI, to solve what business problem. So you need to be that AI-savvy leader. So your communication needs to be built on: are you AI savvy enough?

And what does AI savviness mean? For leaders it goes beyond mere coding experience. So it's not about coding. You're not a computer scientist here. It's fine if you understand some of it, but that's not where it stops. And in most business schools, you'll see, we teach big data and a coding course and that's it. No. Because coding is what a computer scientist does. If you know a little bit, you're not an expert in anything.

No, you need to bring your leadership expertise in together with that savviness, and that means understanding that AI and human intelligence are two different types. It's like comparing apples and oranges. And as you've probably read everywhere, when you hear CEOs talk or when you hear gurus talk, it's always AI competing with humans. That's so tiring, but it also introduces a zero-sum game, which means as soon as AI can do your job better, you can't do that task anymore. And that's actually reducing humans rather than elevating humans. We should be augmenting. So as a leader, you need to have an exact narrative showing you understand they're different.

And what do you need to understand about AI's strengths? For example, based on data: do you want it structured or unstructured? Do you want a black box or not a black box? Everyone says, of course, I don't want a black box, because you have your risk management to do towards your stakeholders in business. That's also why they then understand: oh, all the AI that's shown in demos and that is developed in the lab, that's not the AI I'm going to use. Of course not. Because in the lab, there are no stakeholders.

So that's why I'm saying AI generates ideas and content, but we still need to ask the questions and edit it. We need to put it in a context, because AI doesn't understand context. It doesn't understand culture. It doesn't understand norms as we do. And that's where we are the driving force as leaders: you put it in that context so that all the information and content that has been generated becomes knowledge that can drive our decision-making.

So in the limitations of AI, you see the potential of what humans can do. That's the level of AI savviness that leaders need to reach, because then they can bring tech experts and business experts together and have them talking. So that means leaders also need soft skills, because you need to promote collaborations across functions. You need to develop your emotional intelligence, develop empathy for the concerns people have and the questions that they may have.

So that's why we're saying today, actually, although AI is so present, we're moving into a feeling economy, where the biggest part of your salary in the future will be determined by your soft skills rather than your hard skills, because the hard skills are easily replicable: those are the strengths of AI. And where do the limitations of AI lie? That's where the strengths of the human come in, which are those soft skills. So that's what is going to happen towards the future. You need to start developing those skills, both as a leader and as an employee.

Siân Harrington (20:00)

I was literally just reading a new report by Udemy on Gen Z, and Gen Z were saying very strongly that the skills they wanted to develop were those soft skills. That generation is recognizing that creativity is really going to be vital, that problem solving, that emotional intelligence. But I wonder, looking at some of those areas you've identified and the nine areas you talk about in the book, which do you think leaders struggle with the most, and why?

David De Cremer (20:36)

Two things. I always say the complete title is The AI Savvy Leader: Nine Ways to Take Back Control and Make AI Work. It really has two parts, and they struggle with both in my view.

What leaders struggle with when it comes down to AI savviness, and I see this as an educator and as a researcher myself, is: how do we teach them? What is the appropriate level of AI savviness and how do we do this? Because it's basically a continuous lifelong effort. So I see companies struggle. They say, okay, we bring in an expert, you explain AI and that's it. No, that's not it.

Because today you may have AI assisting in how you're trying to improve your customer experience. Tomorrow you're going to use AI simply to provide information. Then you will use large language models, because you'll have chatbots do some of the work for you. And the day after tomorrow, I don't know what's going to be available. So you need to know, at least: look, these things are changing. So that's lifelong learning.

And how do you implement that in the organization at the appropriate levels and at the appropriate knowledge level? I see that's a big struggle right now for organizations.

The second thing is, as it has always been: engineering is relatively easy, people are hard. It's deep. And that's the problem. Going back to your earlier question where you asked, why are leaders so intimidated? Why is it that tech prowess overtakes the normal leadership mindset of dealing with your humans? That's what it is: they approach AI adoption as an engineering exercise, whereas in reality it's actually behavioural.

Siân Harrington (22:23)

That's a great point to make. So who is doing it well? Have you come across any organizations that you think are integrating AI in the right way, whose leaders are looking at it in the right way? Any lessons we can learn from that? Or, conversely, are there any that you think, and you may not be able to say who they are, have been approaching it completely wrong? And what sort of consequences are there?

David De Cremer (22:45)

The companies that do best at this usually have leaders who are able to take the perspective of their people: the concerns and worries that the AI introduction brings. Leaders who are savvy enough to put it in the context of what it is they're going to do, so that a tech expert also understands: okay, this is how we have to implement it. Because tech experts don't think about the fact that, of course, humans have that response and don't want to use it.

And leaders who themselves are, I can see, humble. For example, a humble leadership style is extremely important: I may know enough about AI, but I'm not an expert. I know some of it, I know what human psychology means, but again, we're in it together. So the humbleness is very important, because I mentioned lifelong learning, and that should be a collective effort. That's humbleness in itself: you have to accept that you don't know things.

You as a business leader need to have that narrative that's AI savvy enough but human-centred. Because, you see, most of us delegate this to the CTO again, and they don't have that narrative. This is where we do coaching sessions for the CTOs as well, because they struggle; many of them can't build the relationships that are needed to instil that trust. So again, it comes down to the soft skills. That is extremely important. You as the leader of your company really need to guard how that communication happens. That's completely your responsibility. Hence, of course, you need to be savvy enough to understand what your CTO and tech experts are going to do, what they want to do and why they want to do it.

So companies that basically have that kind of leadership do so much better. In most companies today, you start with a pilot study. Any AI adoption is a pilot. That's why we need to exchange feedback. That's why I said we need flat communication, so that we learn and we can trust and be transparent. And it's extremely crucial, because if you don't have that you can't go beyond the pilot stage. Up to 80% of companies today bail because they can't upscale and create value across the board. And the simple reason is that you have too many silos, they don't communicate, and there's basically no feedback cycle that's flat enough and trustworthy enough. Again, a very important role for business leaders.

Silos have been a problem ever since organizations have existed, but today it's even more important that you can promote that collaboration. That's why you referred earlier to the Gen Z generations: you're starting to see that collaboration as a skill is becoming so important. And people wouldn't think of this when they think of AI, because they think AI is going to take my job; most business discussions get stuck on, is AI going to take your job or not? Because that's sensational. And we don't look beyond that by saying: we created the tool, so, first of all, we are entitled to decide how we want to use it, because we created it. There's a prescriptive norm there: we can still decide how to use it. But the descriptive one is simply, if you don't bring it in such that humans can work with it, you're not going to go beyond the pilot study and you won't upscale. And that's the big lesson.

Siân Harrington (26:08)

There's a couple of things you've said there that have made me think. First of all, the idea of the communication. You've got the other side of that, where people are concerned for their jobs and businesses aren't communicating with them, so then you've got them all worried about what might happen. A couple of years ago I was doing some research on this myself and talking to companies, and I was actually surprised at how few of them were tackling that particular issue, were future-readying their workforce and being honest and saying: the job you're doing today may not be in its exact same form in the future because AI can take some of this; this is why we're going to be talking about the upskilling, the reskilling, we're going to be moving you this way. That conversation doesn't seem to be happening. Are you hearing much about that conversation?

David De Cremer (27:00)

I completely agree with what you're saying. Yes, companies talk a lot about AI replacing your job, but then, to your question, do they do anything about it? No, I don't see it happening much. And I think there are a few problems there. I do observe, as you've just mentioned, that there's talk about job replacement, but where it's going to go and what it means for your job, as I said earlier, no business leaders explain.

The first problem is that we need more clarity about what a job is. And this is relevant in today's AI era for the following reason: a job is a collection of tasks. You can't just say the job as a whole is gone right away. Except maybe if you work on an assembly line, where it's more physical work; you can literally take that. Those are mini-jobs. So yes, your job may come to comprise fewer and fewer tasks. And this is where the risk is: if companies don't invest, at the same time that they're adopting AI, in understanding what it does to the jobs. How does it reshape them, in terms of more of these tasks, fewer of those? So what is it that I'm expecting you to do when the gurus say you free up time, you can be creative? What does that look like? Because we all keep using the abstract concept: we'll be more creative. Let's be honest. Can every human being on this planet do a creative job? No, that's not going to happen either.

So we have to be really realistic here. What does it mean when these tasks are automated? You need to invest money there. And importantly, it needs to be collective. The ones whose jobs are affected, that's where you need to get the feedback from, to see: okay, how much time do you have left now? What do you do with that time? How do these jobs relate to the higher-level jobs? What is it that we're trying to achieve? Again, business questions. Running the organization so that you can say: okay, then these people should be doing this, and they should be trained in this way. Because this is when AI starts augmenting: it takes away certain tasks, but you want to elevate and uplift people. Of course, you won't be able to do this for every job. But this should be the mentality, and it's completely lacking.

I always say jokingly about most companies: look, if they can make sure that society pays for the changes, they'll do it. So if people lose their job, they get unemployment benefits and we'll work together to see what the new jobs will be. That's a problem, because then you're just waiting until these jobs are gone, and only then thinking: okay, what do the new jobs of the future have to be? We're losing valuable years there, basically. Adopting AI is at the same time already an obligation to invest in jobs of the future.

There's a lot of research out there. There are a lot of reports out there saying these should be the jobs, but then it gets stuck in the skills again. We've already talked about the skills and why those soft skills, for example, matter. It's time now that you start investing at the same time. It's a participative exercise, because you want to get the feedback right away from your people. And it's a business-question exercise as well, because you see how the jobs relate to what you want to achieve. If those jobs won't exist anymore for some people, you have to be honest about that as well.

Siân Harrington (30:30)

Yeah. I so agree with that point about creativity. I've mentioned that a few times before myself. Not every job is going to be creative, and we need to move away from the idea that, by default, the more human thing is creativity and we're all going to be creative.

David De Cremer (30:45)

Yeah, I mean, building on that: are we all going to be creative? You know what AI is going to do, especially now with ChatGPT, the large language models, generative AI. I always say jokingly, it will put us all above average. We'll be better than average, but the old average, the average before AI; we'll all be above that. But what's actually going to happen, if we're not careful, if we don't invest in what this really means for being creative and uplifting ourselves, is that we're all going to be mediocre as well.

Because there's one famous study that showed that when we apply AI, yes, the average goes up. People become better, students became better. So fewer Ds, fewer C minuses, but also more Bs and fewer A pluses. If we can talk about creativity, then I would have expected everyone to go for the A plus. No.

Siân Harrington (31:40)

The other thing you mentioned just now was how to use it. And I'm coming back to where we started, in a way: there's a huge pressure at the moment for leaders to adopt AI and to adopt it quickly, and people are worried about missing out and their competitors getting ahead of them. What advice can you give those who feel that they're almost being pushed into rushing this adoption? How can they balance that with the need for more thoughtful implementation?

David De Cremer (32:10)

Yes. As you say, there's a strong fear of missing out among business leaders because there's a sense of urgency. The pressure is on, and there's very little time to reflect. The question then becomes: how can leaders deal with this sense of urgency? Because clearly, just letting your business decisions be driven by a fear of missing out is not going to do the trick. As we've seen, you delegate immediately to the tech experts, who don't ask the right business questions, and the AI adoption doesn't reveal any value.

And we actually see this today as well, because there are so many efforts out there to commercialize AI that AI becomes a business model in itself, but we're not really earning money yet with AI. Businesses aren't; we're investing a lot of money, and of course the tech companies want you to invest a lot. For example, ChatGPT: everyone loves using it until you have to pay $25 a month for it, then no one wants to use it. So clearly we're not creating any value and it's not even a business model. But yet there's this sense of urgency that you have to do it.

Now, I think the big problem there is the narrative that's being used around the why of AI adoption. Why do you adopt AI? And this always comes back when I ask executives: what do you want to use AI for in your industry, in your organization? Just tell me, very simply, why do you want to use it? They can't answer it. The only answer they have is a very narrow one: efficiency and productivity. The sense of urgency brings with it that we only think about AI in one dimension, efficiency and productivity. The result is that we then only approach humans in terms of one dimension. We see them as task completers who should be as efficient as possible.

Now, we all know human motivation is a little bit more complex than one dimension. So having that kind of mindset imposed because of the sense of urgency is not going to help any AI adoption project, because then you're only thinking a human should be the same as a computer: more efficient, more productive, and that's also the only way to really motivate them. Whereas we know, no, people get motivated when they have a sense of control, a sense of autonomy, when they feel they belong in the exercise. AI adoption is an inclusive act, as I said. Those are the human motives that leaders need to make sure are being satisfied as well when bringing in that AI. Bringing clarity, bringing meaning. And that's what they should not lose focus on.

Yes, there is a sense of urgency, but you have to think about it like this. If every company adopts AI as they're doing now, simply for efficiency concerns, and we reduce the palette of human motivation to one dimension, there's a problem, because literally every company is using AI. AI doesn't bring a competitive advantage if you just bring it into the company, because everyone is using it.

What can bring a competitive advantage is knowing what you're really good at: what your organization is really good at and what your people can do. And that's where you use AI to make them even better at that. Which requires, of course, that more multidimensional view: I'm going to motivate them here so that they know I care about their interests, because I want to make them even better at what they're good at. And that's where the leadership comes in again. You cannot lose focus of that. So it's a simple reality: AI can bring a competitive advantage if you know how to make it work for your organization and your workforce.

Siân Harrington (36:04)

Now, you work in a sector where obviously AI is playing a big role, be it through your students or how you might approach teaching in the future. And obviously you've got your own experience. So do you use AI in your work? And if you do, how do you make sure it's augmenting you rather than taking over? How do you use it day to day, if you do?

David De Cremer (36:28)

So yes, do I use it? Yes. But I use it like most people will probably use it: in an augmenting way. I use it as an assistant, because I have to send out a lot of emails, I have to make speeches. So I do put in keywords, but you have to learn to prompt. You have to give detailed information. You can pretend it's a human being and say: okay, put it in a context where people at the moment are demoralized, or put it in a context where people are extremely passionate but uncertain. Adding more of this will give you better responses.

Now, they're responses. Don't forget, you're still the one who is asking the question, and it's a response that you have to edit and interpret and assess for suitability, because most of the language will still not be very natural. So you have to work, but it saves me time.

Now, we're a school where I've changed the mission into: ‘we're looking at educating socially responsible business leaders of the world to work, navigate and create in a tech-enabled environment’. This means we have three assumptions in our school. AI is a tool. You need to use AI in a holistic way so that it creates value across the board, so not only the efficiency; it's the complete motivational palette that makes sure it's implemented. And our teaching will move from content to knowledge. So I don't care whether you use ChatGPT or not. Students are allowed to, because we want to simulate the working conditions. You have to start using it, basically. You have to use it. But how do you use what it generates today?

So teaching has to change. It used to start from: you don't know much, let's see whether you can generate and replicate and write a nice essay. That's not the teaching anymore. The teaching is now: it's been generated, and I'm not going to give you points for that, because I don't know whether you wrote it or ChatGPT did. We do ask students to keep the prompts as well, so that we can see you've got those skills.

The real grading is going to happen on what you do with it, so that you create real knowledge for something you're uniquely striving for in our business field. Which means: this has been generated; you edit it, you look at it, and then I throw a business scenario at you. Okay, how are you going to use that information in line with who you are, in terms of your leadership style, in terms of your customers? We give that information; you have to come up with the scenarios now. And this requires a certain sense of creativity, where you transform content and ideas into knowledge that drives your decision-making. See, that's a different way of teaching. It demands much more involvement; it's much more participative. Again, those soft skills, even in education. And that's where the grading will have to start happening.

Siân Harrington (39:12)

I could go on for ages because I love talking about AI and leadership in the business context, but let me drill down a little. A lot of our listeners are chief people officers or HR leaders. So what role do you see them playing in this whole discussion around AI, and in particular in the alignment with human values?

David De Cremer (39:34)

So what role does HR play when dealing with AI and with data? I think the first thing that comes to mind, of course, is data protection, privacy: making sure that people are treated as people and not as numbers or data in themselves.

So there's always a risk. I think it's important that you also create a culture of what I call, in terms of AI savviness, ethical upskilling. When we talk about the ethics of AI, too many people are also blown away: AI is magic, AI will also decide what is ethical and what's not. You can put any ethical model in it, yes. But at the end of the day, there's still a judgment call to be made. Because am I just going to base it on the results, and that's ethical? Or is it how you treated someone, the procedures you used? Is it the law? Those are all models, but it requires a judgment call and knowing the context as well. And dealing with values involves implicit communication that AI doesn't necessarily observe, can't infer, and can't turn into a human solution that people accept. So that's why I call it ethical upskilling.

In the AI era, people will have to be trained more than ever in recognizing moral dilemmas, being aware of those dilemmas and reflecting on them. And companies need to provide that support. The way they can do this is to have a narrative that ethics is implicated in most of the things they do, and to have champions, AI ethics champions, meaning: okay, you can go to them. You have the regulations, for example the EU regulations, which are the strictest in the world. That information is available. But there should also be open and, again, transparent communication about ethics itself. And not simply as we know it now with compliance: yeah, you know the rules and here are the boxes to tick. We know that those things fail. People fail. Good people sometimes do bad things because they don't know. So you need that open communication. That's definitely a culture that they need to build.

The second thing is a culture of respect, an ethics of care in terms of AI. It's not because we bring in AI that everyone becomes a data point. Yes, you collect data from people, but they're not data in themselves. And I see this mistake being made very quickly. I see it with academic researchers as well: we say we like to study humans, but then we collect data and we talk about the data as if it's just data, not humans anymore. That's the same risk we have here, and I've seen it so many times in companies. For example, one HR email got leaked by accident, and the reference was to employee numbers. And the response was: so this is how I'm being talked about? I'm number 107 in this data centre. This is the general trend.

And this brings me to a third point. Yes, you can see trends in data, but that doesn't mean you can take a generalizable approach. Sometimes I say jokingly to my HR friends: finally, you made your dream come true with AI; you, the HR department, have become an IT department, because now you're just dealing with people as data sets. Of course they don't like that, but the point is: HR should be the backbone of any organization in a human-centred way. And especially because human-centred approaches to AI are so important, as I explained, HR should be there as a safeguard, making sure the communication treats humans as humans. Make sure your narrative is not focused on people as data, because that promotes only the efficiency and productivity framework. It's that sensitive. Those things need to be taken care of. So people analytics in itself is not an engineering exercise either. It's still a people exercise.

Siân Harrington (43:23)

Good point. So to sum up, for leaders, be they HR, CEOs, marketing leaders, anyone who's listening to this: what are the first three steps they can take from today to become more AI savvy, but to do it in a way that really adds value and keeps people right at the centre?

David De Cremer (43:40)

Communicate very clearly that you're not building an AI-driven culture but an AI-enabling culture. The difference is that AI-enabling means augmentation. So you understand, as a business leader in an organization, that AI is there to help us, in service of human intelligence.

The second thing is learning. As I said, lifelong learning is important. It's going to be continuous. So you need to facilitate that, on an individual basis but also on a collective basis. Like I said, when these adoption processes are participative, it's easier to get feedback, it's easier to build trust, and it's easier to experiment, iterate and become a resilient organization. That's all to your advantage, because changes are always there. So find out exactly how you're going to promote that lifelong learning culture. HR is important here.

And finally, lead by example. Most of you are also afraid of AI. That's okay. Show that, share that; it's part of that culture of trust and transparency. And again, that makes inclusive AI adoption important. But also lead by example by explaining how you do it. One of the best speeches I've ever seen from a CEO was from a woman who really said: look, I've learned about AI this way, and they told me this is how it works. What does it mean for my organization? I was thinking about it, and this is how I started using it. So how are you going to use it? Have these discussions. Show that you're using it to some extent and that you're learning along as well. To make AI adoption work, you cannot have a top-down approach. It's much more bottom-up than ever.

Siân Harrington (45:20)

That was David De Cremer on the importance of AI-savvy leadership. By the way, he makes a fascinating point about the potential risks of banning AI tools like ChatGPT in the workplace. He argues that such bans could backfire, leading to employees using these tools covertly, which might create even bigger privacy and security risks. It's a thought-provoking angle on how we need to manage AI adoption openly and thoughtfully.

So thank you so much for listening to the show this week. You can subscribe wherever you get your podcasts. Follow me on LinkedIn at Siân Harrington, The People Space. And if you want more insights and resources on the future of work, do check out thepeoplespace.com. This episode was produced by Nigel Pritchard and you've been listening to Work’s Not Working... Let's Fix It! See you next time.
