Work's Not Working... Let's Fix It!

Lost in the Machine: Reclaiming Humanity in the Age of AI with Tomas Chamorro-Premuzic

Sian Harrington Season 2 Episode 7

In this episode of Work’s Not Working… Let’s Fix It! Siân Harrington dives into the complex and often paradoxical relationship between humans and artificial intelligence with Tomas Chamorro-Premuzic, organisational psychologist and author of the provocative book I, Human. Together they unpack the ways AI is reshaping workplaces – and our very minds – for better and worse.

From the alarming economic toll of digital distraction to the rise of "datification", Tomas sheds light on how technology is hijacking attention, narrowing our thinking and making us more predictable. But it’s not all dystopia. Tomas offers hope in the form of practical strategies to stay human in a world increasingly run by machines.

For HR and people leaders this episode raises critical questions: How do we use AI to amplify – not replace – our humanity? And how can we build workplaces where empathy, creativity and critical thinking thrive amidst the algorithms?

Key Takeaways

  • AI: A Weapon of Mass Distraction: Tomas calls out AI’s role in fuelling multitasking and digital addiction. He reveals how this distraction costs the US economy $650 billion annually, far outweighing other workplace challenges like absenteeism and turnover.
  • The Datification of Work: AI thrives on big data but Tomas warns that in making us more efficient it also risks making us more robotic. The challenge for leaders: How do we embrace AI without losing our human unpredictability?
  • The Rise of Digital Narcissism: Technology has amplified cultural narcissism, entrenching us in filter bubbles that feed our biases. Tomas explains how this impacts leadership, fostering groupthink, weakening cognitive diversity and creating workplace polarisation.
  • The Paradox of Productivity: While AI boosts efficiency it can also encourage intellectual laziness, leaving us unprepared to think critically or independently. Tomas advocates for rediscovering analogue connections and injecting humanity back into our daily routines.
  • Practical Actions for HR Leaders: Tomas urges HR to focus on three priorities: 
  1. Upskilling mid-level managers to handle AI’s complexities. 
  2. Humanising workplace cultures to counterbalance AI’s dominance. 
  3. Cultivating curiosity and experimentation to adapt to AI’s evolving potential.

Tomas leaves listeners with a powerful challenge: Don’t become a robot. As AI increasingly mimics humanity, we must double down on what makes us unique – our empathy, creativity and ability to connect meaningfully with others.

Interested in insights about people leadership, HR and the future of work?
Seize and shape the future of work with The People Space, a leading digital HR magazine for forward-thinking leaders. We empower you to put people at the heart of work, navigating the evolving intersection of technology, business and human insight. Join us in building a future where people and machines collaborate for a more human-centric workplace.

Tomas Chamorro-Premuzic (00:00)

AI is actually a weapon of mass distraction. Now it's actually really hard to even disconnect for a minute or two. And if you look at the research, it shows that, even when people are working from anywhere or working from home, actually they're more likely to be distracted by AI-fuelled algorithms that hijack their attention and actually make them or force them to multitask. A lot of research in multitasking, even going back to the pre-AI age, shows that multitasking deducts the equivalent of 10 IQ points from our performance. So it's basically as debilitating as smoking weed, maybe minus the benefits for creativity or self-perceived benefits for creativity. And there's a lot of research quantifying, for instance, in the US the economic cost of digital distraction amounts to $650 billion a year. So that's about 15 times higher than the economic cost of absenteeism, turnover and even wellness, which obviously has become a big industry. 

 Intro

Hey everyone, welcome to Work’s Not Working, a show about forward thinking people leaders, innovators and academics and how they think we can fix work to make it more meaningful, healthy, inclusive and sustainable. Brought to you by The People Space. 

Siân Harrington (01:32)

Hi, I’m Siân Harrington and I’m joined by Tomas Chamorro-Premuzic, renowned organisational psychologist and author of the provocative new book I, Human. Tomas delves into how artificial intelligence is not only reshaping our workplaces but also rewiring our brains – for better or worse.

Think about this: research shows that the average adult now checks their phone over 300 times a day and spends up to a third of their life on screen time. While AI and technology promise to save us time, Tomas highlights a surprising paradox – instead of reducing our workloads, these tools often create new layers of complexity, leaving many of us feeling more distracted and overwhelmed than ever. 

For HR and people leaders this raises big questions: how do we harness AI to genuinely enhance productivity without compromising wellbeing? And as AI takes over tasks once considered uniquely human, how do we ensure our organisations remain places where empathy, creativity and critical thinking thrive?

Later, we’ll explore the paradox of how technology meant to connect us is making our connections weaker than ever. And we’ll tackle the big question: how do we stay human in a world that increasingly treats us like data points?

But first, let me tell you a little more about Tomas. Tomas is an international authority in people analytics, talent management, leadership development and the human-AI interface. His commercial work focuses on the creation of science-based tools that improve organisations' ability to predict performance, and people's ability to understand themselves. 

 He is currently the Chief Innovation Officer at ManpowerGroup, co-founder of DeeperSignals and Metaprofiling, and Professor of Business Psychology at University College London and Columbia University. He has previously held academic positions at New York University and the London School of Economics, and lectured at Harvard Business School, Stanford Business School, Johns Hopkins, IMD and INSEAD and has written 12 books.

So if you’re wondering how to future-proof yourself and your team in this new era, you won’t want to miss this episode.

Siân Harrington (04:01)

I’m delighted that you're joining us today, Tomas, really looking forward to this conversation. Now you've got a book, I, Human, and within that you ask whether we will use AI to improve our lives, improve our work, or whether we're going to allow it to alienate us. And I want to start really by looking at how technology and AI is rewiring our brains. So how is it reshaping how we think and how we're behaving? What are you seeing?

Tomas Chamorro-Premuzic (04:30)

So when we think about how AI is impacting human behaviour I think we need to distinguish between two different types of artificial intelligence. What I would call AI 1.0, which is still how we humans mostly interact with AI, that is to say, machine learning algorithms that basically help us make choices faster, not necessarily better, but they simplify a lot of our decision-making, for instance, by editing and curating the news we consume, the friends we talk to, the potential new connections that we make, and also how we consume music and movies and how we shop. If our listeners use mobile or online dating sites, the algorithms also help them find love or potential relationships.

I think what we gain in efficiencies we can clearly leverage by freeing up our thinking and having more time for activities that require our curiosity or imagination. 

And then what I would call AI 2.0, which is basically the kind that most people discuss today, generative AI, is not so much a prediction machine as a production machine. And that one is very interesting because there's clear evidence that we can delegate and outsource some of our thinking, some of our creative output, to that. And I think what's been interesting in the last two years, since ChatGPT and other large language models have gone mainstream and become really a viral sensation, is that most people are using them, and using them in a clandestine way, because they are being more productive in terms of achieving the same output with fewer inputs, which is the definition of productivity. 

But then also, unsurprisingly, they're not running to their boss or manager to say, Hey, can you give me more work because I've now saved all of this time. So I think what's interesting to me – and to sum it up – is that both forms of AI save us a lot of time, but it's still a question mark and a big organisational leadership challenge: what are we actually going to do with the time we save? And in what ways can we reinvest it so that we actually become more productive in the sense of increasing our output, and actually upgrade our thinking as opposed to not thinking at all? 

Siân Harrington (07:00)

What's interesting there of course is that we're talking about AI giving us more time, but actually there's research coming out which says this is adding to overwhelm. People are finding all this technology a lot to deal with at the moment, and now we've got another layer to learn with generative AI. And there's a lot of distraction, lots of overwhelm, a constant flow of information. How is that impacting us in terms of fostering that sort of distraction and restlessness? Is this constant bombardment impacting our decision-making abilities? 

Tomas Chamorro-Premuzic (07:38)

Yeah. There’s some really interesting research on how AI is actually a weapon of mass distraction, right? If you think about it some of our listeners might be old enough to remember the nostalgic early days of dial-up internet where you basically got really excited because you are waiting one minute, sometimes more, that noise will take you online and connect you with other people and information, knowledge. Now it's actually really hard to even disconnect for a minute or two. 

And if you look at the research, it shows that, even when people are working from anywhere or working from home, which obviously saves them a lot of time commuting in places like London, New York, LA, San Francisco, increasingly most big cities, actually they're more likely to be distracted by AI-fuelled algorithms that hijack their attention and actually make them or force them to multitask. A lot of research in multitasking, even going back to the pre-AI age, shows that multitasking deducts the equivalent of 10 IQ points from our performance. So it's basically as debilitating as smoking weed, maybe minus the benefits for creativity or self-perceived benefits for creativity. 

And there's a lot of research quantifying, for instance, in the US the economic cost of digital distraction amounts to $650 billion a year. So that's about 15 times higher than the economic cost of absenteeism, turnover and even wellness, which obviously has become a big industry. So the point is that the technology can make us more productive, but sometimes the key to being more productive is to actually ignore technology and engage in good digital detox hygiene.

Siân Harrington (09:37)

It’s interesting then, isn't it because on the one hand we've got the leaders in particular expecting AI to give us all this productivity gain. Then we've got this $650 billion on the other side, which is going to cancel it out. Do you think blunt instruments like we've seen in France – and we're talking about in the UK – where at the end of the day you turn off your phone, no emails, that type of thing, is the answer to this distraction? Because I'm not entirely convinced there because I just think we've now got this smartphone and it comes everywhere with us and it's constantly pinging. So the distractions are happening not just at the end of the day but during the day all the time. 

Tomas Chamorro-Premuzic (10:16)

So if we want to basically find the cure to our digital addiction and distractions, in essence, you always have two broad mechanisms. One is you hope that people self-regulate. And then, of course, that would only be as good as people's level of self-control, conscientiousness and discipline, meaning 10% are really good at it, then maybe 15, 20% are awful at it, and the rest of us are somewhere in the middle. But what we know is that the middle is not very productive, because checking your phone 300 times a day is average. And the average adult alive today is expected – listen to this statistic – to spend 20 to 22 years of their life on screen time. That's basically a quarter and sometimes a third of your life. So self-regulation might not be effective. 

Then there's actual regulation. Now that can happen in the form of organisations, managers and leaders having an etiquette. Simple things like not bringing your phone to meetings – or to the analogue, in-person meetings – or having a day in which there are no online meetings or you don't email people. As you mentioned, countries like France have instituted a no-contact-after-5-or-6pm routine. I was recently in Switzerland, where many schools actually have a full week of no phones for kids and their parents. And we know that some of this can be effective, but also that oftentimes people don't like to be told what to do. And actually, if you ban something, you're making it even more enticing, right? Even more exciting. 

We've seen in the earlier stages of the digital revolution that companies tried to ban Facebook. It doesn't work. Now companies are trying to ban ChatGPT or large language models. It doesn't work either. You have it on your phone and actually that's like prohibition. It only makes it more exciting. 

Siân Harrington (12:17)

Yeah, absolutely. Do you think we're becoming too reliant on tech? Is it impacting our cognitive skills? We're talking a lot at the moment about, particularly with AI and generative AI and other forms of AI, that we need to be able to be more nuanced, to be better decision makers, to have critical thinking. Have you seen any of that playing out and what's the impact for the workplace? 

Tomas Chamorro-Premuzic (12:45)

So when we look at the impact that let's say our technological or cyber addictions and compulsive kind of behaviours have on our brain it's probably a little too soon to say. There are some brain scanning studies, et cetera, but typically you need more than 10, 15, 20 years of pervasive and recurring kind of habits to see something. Now granted, it took us a long time to understand that smoking cigarettes increased the likelihood of getting lung cancer. So sometimes the absence of an effect doesn't mean that there isn't an effect. It might be that we haven't found it yet.

Jonathan Haidt, a professor at NYU, has just published a book called The Anxious Generation, where he charts the behavioural and psychological impact that our excessive use of social media, smartphones and AI-fuelled digital distractions has, particularly on young people and children. And it is quite shocking. It has actually increased and accelerated a lot of regulation, particularly in the European Union, trying to mitigate excessive use by these vulnerable people and individuals. 

And then I think there is the question of our, I would say, intellectual and maybe performance dependence. There is no question today that you couldn't do a job or be part of the knowledge economy if you don't have access to your smartphone, if you don't access the internet, and perhaps even if you don't access generative AI, because you'll be at a huge disadvantage compared to other workers who do. Even if you're really smart, people will outperform you if they have access to the internet, generative AI and other forms of technology. 

Now, think about the fact that never in the history of humanity did we invent a technology – whether it's fire, wood, the microwave, the dishwasher and now AI – in order to work harder; it was always to work less, and efficiency is basically about being lazier. I think the real risk is that because we have access to information, because we can outsource a lot of cognitive, creative and intellectual tasks to AI, we actually forget that we should also think, that we should also create and engage in critical thinking. 

And I am very worried by the fact that AI is actually turning us into a more exaggerated version of ourselves. It's making it harder to exit our digital cocoons, our filter bubbles. It seems like we're all intellectually radicalized because if you're only exposed to information that actually amplifies your beliefs and makes you feel like you are smart and that you're right, even if it's actually detaching you from reality, you lose track of facts, information and evidence of reality. 

And obviously it has a very bad effect on empathy because you actually downgrade your ability to understand how different people feel and think about things and to question your own beliefs. So think that intellectual radicalization is a real problem. 

Siân Harrington (15:45)

There's a couple of things there. You mentioned the laziness, and I think – I can't remember if it was the Boston Consulting Group research, or it could have been MIT, or possibly the two together – they found that generative AI did increase people's performance, but people were beginning to get a bit lazy using it, expecting it to come up with all the answers. So that was a very interesting piece of research that came out recently. But your point on the impact it's having on biases and things like that – we sometimes call that digital narcissism, don't we? That idea of our self-perception and that echo chamber idea. How do you see that playing out at work, in terms of the impact that sort of digital narcissism, if you want to call it that, can have on leadership, on the ability to manage people, on risks at work? 

Tomas Chamorro-Premuzic (16:40)

Clearly, it would be unfair to blame social media or the AI that fuels it for making us narcissistic. But if we look at the data, it's throwing gasoline on the fire. Cultural narcissism was already there – people having more and more, I would say, self-centred ambitions and aspirations, and the rise of entitlement, which basically means having very high aspirations coupled with a poor work ethic. Because if you think you deserve to be famous, deserve to be successful, deserve to be the boss, you're not going to work that hard to actually close the gap between where you are and where you think you should be or want to be. 

But then consider the fact that most of us interact with each other online. I think digital and technologically mediated relationships and interactions probably make up about 70, 80% of the time for most people, and in-person analogue interactions only about 20% of the time or so. And when we interact with each other, AI encourages everybody to give us very positive feedback, even if they're not that impressed with what we're doing. And the algorithms that reinforce our behaviours to create stickiness in those platforms certainly nurture and harness digital narcissism.

I would say if you go to an office, a 3D brick-and-mortar office, and you walk around the floor telling everybody how amazing you are, not listening to anybody, talking all the time, sharing your unsolicited views about really delicate and controversial topics even if you're totally out of your depth – the war in the Middle East or Ukraine, climate change, abortion, guns, politics, the election. We all have really strong opinions but are not necessarily that informed. And if you do all of that – plus, of course, telling everybody every day what your cat had for breakfast that morning – you'll be a pretty annoying colleague. You'll be pretty obnoxious. But online, AI will turn you into an influencer.

Siân Harrington (18:50)

I'm hearing a lot at the moment about polarisation in the workplace. I'm hearing a lot about conflict resolution being one of the top areas again that leaders are needing to learn how to manage. And it strikes me that part of it is we don't know how to filter ourselves. In the old days, you'd just be down the pub, having a drink, having a bit of a moan, wouldn't you? And then that's it. You leave it there. But now we're used to everybody thinking we're important and what we say. And so do you think that's playing into that polarisation? 

Tomas Chamorro-Premuzic (19:22)

I think it's really quite interesting how we almost dug our own hole when it comes to really convincing people, especially young workers, that a job isn't enough. They need to have a career and a career isn't a career unless they're working somewhere where they can find a higher sense of purpose, almost a higher calling and where their values and beliefs are fully represented. And furthermore, even advocated by leaders and managers and that we should all bring our whole self to work and our authentic self should be out on display. 

And in a way we've basically dug our own hole, because if the average employee expects their boss or manager or leader to engage in regular corporate advocacy and voice their beliefs about, again, anything that's happening in the realm of global news and politics – whether that's the England national football team, the performance of the Government, the riots or other delicate issues – you will end up with one of two options. Either you're going to make people very happy, but then you have a cult, not a culture. People can't think for themselves and they're blindly following their leaders. It's almost a populist form of management or leadership that is there to amplify, confirm and lubricate our own beliefs, which creates groupthink and homogeneity of thought and is actually a big enemy of cognitive diversity. 

Or, inevitably, you're going to make some people really annoyed and alienate and antagonise them because, let's face it, what's the point of having people from different walks of life and with different backgrounds if they don't all think differently about different things and feel differently about things? And fundamentally, the word tolerance has now even taken on a negative connotation, because it's not sufficient to tolerate people who aren't like you – you have to embrace them and love them and celebrate them. That's never going to happen. What's wrong with pretending to be nice and cordial and kind to your neighbour and then bitching about them when you get home and they can't see? That worked for the entire history of humanity. And I think that's more than enough if we can foster a climate of civility where you make people understand that they don't have to embrace others or love them or celebrate them when they think diametrically differently about these issues. They should just do their job and try to get along with others, and respect and tolerate others, especially when they have different viewpoints. 

Siân Harrington (22:20)

Yeah. We talk a lot about the need for managers to be better at those sorts of power skills, as we now call them – the emotional intelligence side of things – but it's like a minefield out there, and quite a few people don't realise where that line is. This whole idea of bringing your whole self to work, I think, can play on both sides here. And I'm not a massive fan of it because I do think sometimes you don't want to show everything about yourself. There is a line here. So it's a terribly difficult one, I think.  

But on the other side of that, we've got what you refer to as ‘datification’. And so if we see it from another side, we as humans are feeling that we are just data points today. That these algorithms, these new systems coming into work, everything around us is not treating us as humans. So that's almost the other side of this. So can you explain a little bit more about that concept of ‘datification’ and how that's having an impact? 

Tomas Chamorro-Premuzic (23:25)

So ‘datification’ refers to the fact that even before AI became mainstream most of us, including knowledge workers, spend most of our working hours and days actually creating a vast repository of data, information and symbols about stuff –  our interactions with each other, our emails, our communications, our decisions with clients, key stakeholders, employees, direct reports, et cetera. 

You think about digital transformation, which most organizations are either undergoing or underwent, that's really about creating big data, internal and external kind of indicators of reality and of working life. And of course we wouldn't have AI unless we had that data because AI without data is meaningless. These data fuel the algorithms that then ended up predicting our behaviour. 

The trick that is often missed is that when we use AI as a co-pilot, and we have AI in the background and algorithms that are there to simplify our decisions and create insights that edit and curate our lives, actually they're also trying to make us more predictable. So when you log into Netflix, Netflix will show you four movies and you can pick one of these four, even though there are a million movies in the background. But of course we all know that if we reject these four choices we might end up with a two-hour, very painful task of arguing with our partner or our family about what to watch. And then nobody can make a decision, and we go to bed angry and annoyed having wasted two hours. Right? 

So the same applies to everything. When the AI that completes our emails tells us it can just finish a sentence like that and send the email, or when our digital twin or virtual clone can attend a meeting and speak for us, it's basically restricting the number of choices and the wide range of behaviours and repertoire that we would have if we used our imagination. 

And so that's an important trick, because in a way, one thing is for AI to become more human-like, but another thing is for humans to actually become more machine-like, more robotic, right? And if you cannot tell whether an email is coming from a human or from AI, that isn't only because the AI has become very good at mimicking the human. It's also because the human has become very robotic and more like AI. 

And so what I argue about in the book is that actually a very good call to action, if we think about it, is to try to become a less predictable version of ourselves. It's to try to almost think of what the AI would do in our place, in our situation to then try to do something differently. And that requires a little bit of imagination. It requires us to engineer some serendipity and to inject some surprise into our working lives. Not just surprising others, but also surprising ourselves. 

Siân Harrington  (26:40)

Surprise is a nice word. It's not one you hear a lot about in this context, but I think that's a good one. So yeah, I wanted to come onto some tips and tricks in a minute maybe, but just picking up on earlier, you mentioned Jonathan Haidt's book, The Anxious Generation. This is one of the biggest problems in work at the moment: burnout, mental health. It is a really huge issue and it seems to be growing. There's no doubt about that. Is it fair to blame technology for this? Do you see it playing a serious role, or are there other areas that we as leaders should be looking at in terms of supporting people? 

Tomas Chamorro-Premuzic  (27:24)

I think we can blame technology for enhancing structural or cultural issues that were there to begin with. If you take the rise of anxiety, depression, loneliness, burnout and mental health issues, maybe 20 or 30% of this is aggravated by technology – by this paradoxical effect whereby the more connected we are, the weaker our connections to others are, right? And the more access we have to information, the easier it is to be misinformed or uninformed, or to believe what you want to believe. You could go further back into the past and search for other cultural or historical causes that led us to invent these technologies and actually use and abuse them in the way we have. 

Again, a simple example is you could see how excessive use of Instagram or TikTok fuels cultural narcissism. The selfie wouldn't have been invented if we weren't somewhat self-obsessed to begin with. The fact that the iPhone at some point included the camera that is pointing at you hasn't created an obsession with selfies. They put it there because the engineers knew that we were already quite self-obsessed. 

Siân Harrington (28:50)

Let's pick up then on some of the more practical actions – for us as individuals to start with, as opposed to leaders. We as employees, as individuals, how can we try and maintain this sense of humanity in this increasingly tech-driven world? You talk about a quest to reclaim what makes us unique. How do we do that? 

Tomas Chamorro-Premuzic (29:10)

When we think about the recommendations or tips for individuals, for humans, for employees, humans in general, I think there's a couple of things that I highlight in the book. The first is really to focus on harnessing the skills that AI will probably not replace. AI has probably won the IQ battle. It scores higher on IQ tests than even really smart humans and it knows how to solve most problems that actually can be solved, objectively speaking.

But even if AI won the IQ battle, I think the EQ battle is still up for grabs. So when it comes to displaying empathy, self-awareness, the capacity to understand not just things but also other people who want to be understood by humans, those are the things that we need to focus on harnessing. AI is really good at explaining everything without understanding anything and I know some humans are also good at explaining things they can't understand, but we don't want more of those. So I think those are the things that we can harness. And just because AI hasn't actually substituted for them doesn't mean it can't copy them. 

A study done last year in US hospitals showed that when doctors outsource patient communication to ChatGPT, patients actually experience 40% higher levels of empathy while talking to a chatbot, right? Not because the doctors can't display empathy or are psychopaths, but because they're typically not bothered by that task, right? They're not rewarded for it, promoted for it. So that would be the first. 

And I think the second one is really to rediscover, I think, the forgotten magic of analogue encounters with other people. Everything that happens online will be the joint remit of humans and AI, of human intelligence and artificial intelligence. But the ability to connect with others on a human to human level and to actually display and basically show not just human but also humane behaviours to others will be really important. 

There's, I think, a lot of evidence showing that when you try to optimize environments or cultures for efficiency, it can be quite dehumanising. Even when people get excited because they can pre-order their coffee on an app and by the time they get to the coffee shop, it's ready and then they save 35 seconds of their busy daily life. And because of that, you don't say good morning to the barista. And if the barista says good morning to you, you're annoyed because you have to either answer back or just run away. And if you answer back, you waste more time, et cetera. So I think build into your routine and into your habits, a little bit of humanity, inject some more humanity that will actually make you a better colleague and a nicer human being at work and in life in general. 

Siân Harrington (32:08)

We've got a bit of a dichotomy here, haven't we? Because we've spent quite a lot of time talking about how all this technology is making us less human in some respects. And yet on the other side, that's the thing we need to be developing. So yeah, it's a really complicated thing. And also, when you mentioned the modem noise earlier, I thought about these things to do with technology that show our age, as you said. Even remembering the introduction of the smartphone now shows that you're actually quite old. So for this newer generation coming into the workplace, this is their life. They are used to, as you said, dating through apps. People don't know how to use a phone. It's just not how they are. Bringing the analogue back makes a lot of sense. But I wonder how we're going to try and do that in reality and get people to realise that it's fun.

Tomas Chamorro-Premuzic (33:06)

Yeah. And you can imagine that the next generation, or people who have been born very recently, are probably going to be used to interacting with AI as much as humans, and deepfakes won't be a novelty for them or something that is controversial. And so we don't know what consequences that will have. What we do know is that, at the same time, our hardware is the same. We modern, current humans don't differ biologically from our 300,000-year-old ancestors. And that means we still have certain needs. Just like your biological needs, you have social needs that need to be fulfilled, and they do require the ability to connect with others, the ability to feel what other people are feeling and also the ability to see the world through other people's eyes. And I think all of that doesn't go away.

The what and the why remain the same, even if the how is now very technologically mediated and an evolutionary challenge to most of us, especially those of us who are old enough to be nostalgic about candy bar phones and a past in which we only got excited when a text message arrived, because we weren't on 15 different apps at the same time, all the time.

Siân Harrington (34:25)

So coming back into the working world what role can HR and people leaders play here in helping to mitigate some of these negative aspects of tech and particularly AI that we've talked about in terms of employee behaviour, wellbeing, etc?

Tomas Chamorro-Premuzic (34:42)

So I think the question is: what can HR professionals and HR leaders do to help navigate this human-AI interface and to future-proof their organisations, teams and cultures for the AI age? It's a really important question.

You mentioned before how difficult it is for managers to do this. And of course HR leaders are still managers. But I think investing in mid-level managers is the biggest opportunity. So HR leaders should try very hard to convince their CFOs and CEOs and executives to actually upskill and reskill mid-level managers. You can have an amazing AI strategy, but it will break and not translate into execution unless you actually equip mid-level managers to navigate this.

And I feel really sorry for mid-level managers, who are already under so much pressure and, as you said, overwhelmed. They need to understand ESG, diversity, digital transformation, AI, human ethics, regulation, climate change, advocacy. And as we know, the majority of managers go into management roles without any training, just because they were good, or somewhat good, at their previous role. So that's the biggest opportunity in terms of investment.

Everybody who understands what's happening right now with AI says that, in order to leverage this technology, we need to basically unlock human potential. So therefore you need to reskill and upskill people. Yet I haven't seen learning and development budgets double or triple in the last two years. So that's a big area of investment.

I would say the second one is really to do their bit to actually humanise the culture, because AI will continue to evolve and upgrade itself. But the best places to work for and work in, not just in the future but also today, won't be the ones with the best AI models or the most accurate algorithms or the best data to enhance insights and decision-making. They will be the ones that have managed to build and construct a humanistic counterpart, to actually make these places fun and interesting and create a sense of belonging. So it's almost like the more people depend on AI, and they will depend on AI a lot for their jobs and their tasks, and they might not just have AI as a co-pilot but be a co-pilot to AI in the not-so-distant future, the more we need to work on humanising work. And often that means rehumanising cultures.

And I think the final one really is to foster a good sense of curiosity and an experimental mindset in their teams, cultures and organisations. There are a lot of people who pretend to know where AI is taking us, but nobody really knows. There are also leaders who have very strong opinions about it without even having spent 30 seconds on ChatGPT to actually interact with it. It's just as bad to say, I'm going to ban this and this is irrelevant, as it is to say, okay, by next Monday I want to see 50% higher levels of productivity because this is supposed to be the best thing since sliced bread.

What HR leaders can do is ensure that people actually try this out, ensure that they experiment, ensure that they share their learnings, and actually create a culture of curiosity and experimentation, so that you can bring people along on this journey and leverage the distributed power of the knowledge economy that you have in your organisations.

Siân Harrington (38:32)

So a blunt question. AI – good or bad for the workplace? 

Tomas Chamorro-Premuzic (38:38)

I have to say yes is the answer to that question, because it is both good and bad. I am a huge believer in the opportunity and in the potential that this has, but I'm also not naive and I do worry about the risks. It's really no different from any other technology. If you look at an extreme example, you can look at nuclear energy, right? It can give you clean energy and preserve the environment, but it can also give you nuclear disasters or the atomic bomb.

So I think it's up to us. And ultimately the critical point is not artificial intelligence but human intelligence and human integrity beating human stupidity and the corrupt, dark side of human nature.

Siân Harrington (39:26)

And to end with one last question: if there were one takeaway for leaders from this discussion, and from all the work that you've done for this book and all the great research you've done in the past, Tomas, what one thing should they do now? What one action can they start today to make sure that they create a human-centric AI future for their workers and for their workplace?

Tomas Chamorro-Premuzic (39:53)

I guess my one final, generic, universal takeaway message for managers, leaders and humans is as follows: Try not to become a robot. Try not to become a machine. Remain a human. AI will continue to display an incredible ability to exhibit human-like behaviour, but try not to become a machine in that process. Remain human and humane.

Siân Harrington (40:24)

That was Tomas Chamorro-Premuzic, sharing his insights on how AI is transforming our workplaces – and what we, as humans, need to do to thrive. I hope you found this conversation as thought-provoking as I did. I’ve been inspired to think more about my own ‘digital hygiene’ – and maybe even reconsider how often I check my phone. 

If Tomas’s ideas resonated with you, I highly recommend his book I, Human. It’s packed with practical advice and deep insights into how to navigate the future of work.

So thank you so much for listening to this week’s episode of Work’s Not Working – Let’s Fix It! Don’t forget to subscribe wherever you get your podcasts and follow me, Siân Harrington, on LinkedIn. For more ideas on rethinking work and leadership, check out www.thepeoplespace.com.

This episode was produced by Nigel Pritchard. Until next time, stay curious, stay human and keep fixing what’s not working.
