Book club: Machines Behaving Badly

Season 4: Episode 11

Recently, Karen and Lachy discovered they are both avid readers and set up their own book club! The first book they selected was Machines Behaving Badly by Toby Walsh, and it is the topic of discussion for this episode. The book is about ethics and morality in AI, a topical choice given the 'chat' around AI right now.

Transcript

Lachy Gray  00:01

In today's episode, we are discussing a book that Karen and I have both read. So this is our first book club, after I discovered when we recorded the last episode that Karen reads over 100 books a year. So that is two books a week, which is just phenomenal. I thought I read a lot, but clearly I do not. So we agreed to read a book and discuss it on the podcast, and the first book we chose is Machines Behaving Badly by Toby Walsh. It's a book about ethics and morality in AI. The author, Toby Walsh, is a scientist, a professor of AI at the University of New South Wales, and a researcher and software developer at CSIRO's Data61. And his Twitter account, @TobyWalsh, has been voted one of the top 10 to follow to keep abreast of developments in AI. So Karen, where should we begin?

Karen Kirton  01:10

Well, I read the book on the weekend. I know. But it wasn't an easy read. I actually tried to see if I could slightly cheat and get it on audio, so that I could listen while I was doing other things and then go back to the paperback to kind of refresh myself. But it wasn't available on audio, which was disappointing. So Toby Walsh, or your publishers, if you happen to be listening to this, that would be great. So then I thought, well, we're discussing AI, so the only right thing to do is to ask ChatGPT what things we should discuss in a book club, right? And if you have been ignoring the world for the last seven months, ChatGPT is a generative AI solution. You can go online, you can ask questions, and it gives you answers. It's kind of like today's version of the Magic 8 Ball, but it uses lots of data that it's pulled from the internet over many years. Anyway, that's a very simplistic way to describe it, but I just thought I should do that, because I did come across someone last week who had never heard of it, and I realised, okay, not everyone's living in this bubble. So this is a bit different to my normal book clubs. And to be honest, the book clubs that I have are more about the food and wine, where we just happen to mention a book, so it made me slightly unprepared for a proper book discussion. So ChatGPT gave me seven categories to consider, and then reflective questions within those, and I thought that they were actually not too bad. And it started with, you know, what's the central thesis of the book. But as I was reading through it, I thought, you know, if we went through all seven of those, that'd probably be a pretty boring episode for people. But I am going to put the list of what it regurgitated on the website, so if you're interested, you can go and have a look. For me, you know, I actually read Tracey Spicer's Man-Made not long ago, which is all about ethics and morality in AI. So I didn't feel like I particularly received new information through this book, but it did solidify for me how problematic some aspects of AI can be. How about you, Lachy?

Lachy Gray  03:22

Yeah, definitely. Yeah, I thought the author took an interesting perspective on the topic, sort of merging his technical AI expertise with a philosophical perspective on the moral and ethical considerations of AI, and really just how grey the whole area is, and how important it is to be talking about it. And I appreciated that he did identify challenges and negative aspects to do with bias and privacy, and there's also a whole chapter on lethal autonomous weapons, which was scary.

Karen Kirton  04:00

That was a bit disturbing.

Lachy Gray  04:02

Yeah. And he also shared ideas on how these could be reduced or mitigated, which I appreciate, because it's really easy to pick holes in things but not be part of the solution. He gives you practical examples, like how do you programme a self-driving car to make autonomous decisions if it's got to choose between running into a child or a group of elderly people? Which is the so-called trolley problem, or moral dilemma. How do you prevent the camera and radar tech that makes self-driving cars possible being used in lethal drones, for example? So I think that's a really important conversation to be happening. And I think one unique aspect too is he talks about who's actually developing AI. He makes the point it's a very small group of people, maybe tens of thousands of people internationally who have PhDs in AI. And within that group, they're mostly male, they're white, and they're not representative of the society that they're impacting with AI. And he calls them the 'sea of dudes'. I really liked that.

Karen Kirton  05:18

Yeah, and it's certainly an issue when we have technology that's being made for everybody around the world, but it's only reflecting perhaps a portion of those people. And, you know, I think in the context of the Make It Work theme for this season of upskilling, reading the book, I started to reflect on other major technological changes that we've seen over the last 20 years in business. And I think if we go back to perhaps the last major technological change with social media, and, you know, how different it was when we first logged into MySpace or Facebook, all wide-eyed and innocent, thinking we were sharing things with friends or having a bit of fun, versus the massive privacy and security concerns that have now emerged. You know, they've been going for almost 20 years now. And I feel like that experience has given us collectively the ability to now reflect critically and call for change and regulation with AI, which probably took closer to 10 years to happen more broadly with social media. And so I think for me, really what this book was about is, okay, well as a business owner, but also as a human being, just putting our heads in the sand with AI is not an option. And I appreciate that this book and others like it are aiming to educate us non-techies in really simple language on what AI is, and what the pros and cons are.

Lachy Gray  06:57

Yeah, it's a good point. We're always reading about Facebook in the news, and there's something they've done, or haven't done, which is usually protecting their users' data. And it's difficult to retrospectively implement regulation, isn't it? Especially when the companies become so powerful, and have huge bundles of cash behind them, they can fight anything. I think it's interesting to see Europe really lead the way with GDPR, and more recently with the AI regulation that they're looking at. A lot of the change has come from them, even though many of the companies are based in the US. I think it's easy to get starry-eyed about all the positives AI will bring. And Walsh quotes Neil Postman, who's an American educator and cultural critic, early on in the book, who says that at each tech revolution there are winners and losers. And for every advantage that new tech offers, there's a corresponding disadvantage. And the disadvantage might exceed the advantage, or the advantage may be worth the cost, but you don't know. But he goes on to say that the advantages and disadvantages are never distributed evenly among the population. So there will be some people who will benefit, and there will be some people who will be harmed. And I think that's a really important message. I don't hear it talked about enough. Especially when we're talking about functionality, there's a lot of focus on functionality and how it's going to benefit us, in automation, for example. And then, yeah, that might therefore lead to job losses, for example. Okay, well, let's talk more about that. Like, what happens for those people? Where do they go from there?

Karen Kirton  08:57

Yeah. And I think, to that point, that really stuck out to me in the book as well. Because when we start to think outside of our own business and our own country, and we start to look at this as a global issue, then, yeah, what does that look like? And, you know, we've seen plenty of examples, unfortunately, over many, many years since the Industrial Revolution of how work has actually started to go to less advantaged countries overseas. And so if we suddenly automate some of the work that they're doing, what happens there, if they don't have that government cushion and ability to actually keep people in work and housed and fed? I don't want to go too far down the rabbit hole. But, you know, I just thought it was a really powerful moment in the book, actually, to start to think about what this actually means for us globally as humans.

Lachy Gray  09:52

Absolutely. And he says that's why we should be cautious of technological change, and why we should be suspicious of capitalists, who by definition are risk takers, and cultural risk takers. So they're willing, and perhaps comfortable, to exploit new technology to its fullest, and perhaps less concerned about what traditions or cultural norms are overthrown in the process. And I think that, again, is a really important message. And so I think we do need regulation of AI. Walsh talks to this; he uses self-driving cars as an example to highlight the lack of governance and regulation. He makes the point that when flight took off in the 1900s, independent bodies were set up to investigate accidents and to share findings with the industry. CASA was set up to licence pilots and ground crew, and so on. And this just doesn't exist for self-driving cars. But it should.

Karen Kirton  10:59

Yeah, and it's so interesting that it doesn't exist, right? Like, that was the thing that I thought as well: it's not just that independent bodies were set up, but that they actually share information with each other across the planet. And at a time when that would have been much harder to do than it is today. So, yeah, why aren't we doing this? And I think it's so disappointing that we're really slow with regulation, despite many people who are seen as the real experts in this field actually calling for it. And again, I was reflecting on, okay, well, have we seen this type of thing before, and I was thinking about the gig economy versus employment law. And, you know, as businesses that were giving people jobs within the gig economy were starting up, that's when the unions started to say, well, hang on a second, how do they fit within our legal landscape? And unfortunately, there just comes a point where you can't unscramble the egg, so to speak. And so regulation doesn't necessarily do what it should. So then we end up with different rules across countries, sometimes even just across different states of Australia. And so I do think there's an opportunity now to actually start to look at this properly, and it is a global issue and a global concern. And I love the comparison to flight, I had heard it before, you know, especially in the sense that the actual wording of AI is a bit of a misnomer, because it isn't actually intelligence. You know, just as flying a plane is not the same as flight for birds, we actually fly in a completely different way, we don't have feathers or flapping wings. So, you know, AI is not the same as human intelligence, it's actually a completely different type of intelligence. And I was having this conversation with my kids in the car the other day, because they were arguing with me about how AI is smarter than humans. And so I was trying to dig into, why do you think that, where have you heard it? I probably shouldn't have these debates with my children, but I explained to them that it actually is just a completely different type of intelligence. And of course, with that come serious questions in relation to things like copyright and privacy as well.

Lachy Gray  13:25

Yeah, definitely. I also liked that discussion about AI being a bit of a misnomer. And I think you hear it often that in movies AI is represented as a robot, and that's kind of been the image for a long time, really. So we have these sort of human versions of it in our heads, which is probably not super accurate. Actually making a robot like a human is extremely difficult, and a bunch of people have pointed out how hard it is to get them to do useful tasks. That hasn't stopped Elon Musk investing in the Tesla robot, I saw recently. One of the points that Walsh makes is that autonomy is the one new ethical challenge that AI poses, because he says that the other challenges, like bias or invasion of privacy, we've faced before. I think we talked about this in our episode on AI and hiring. That in many ways, AI sort of holds a mirror up to all these challenges that already exist in human society and how we communicate with each other. He says that we've never had machines making decisions independently of their human masters. So who's responsible for the actions of an autonomous AI? So I thought, well, let's apply that to the workplace. So Karen, let's say that you work for a large company that uses AI to fully automate the hiring process. It saves time for all involved. It's a fair process for the candidates, because there is no human bias, apparently. So the AI makes the decision that Tobias should be hired, and promptly hires him. And a few months down the track, Tobias tries to steal customer data. So who's responsible for the hire? Is it the AI?

Karen Kirton  15:42

Yeah, right. And I think in that scenario it's a terrible situation, because the company would just have to own that with the customer, and then work out, okay, what are our obligations from here. And I think if we take it a step further, and we look at something like the banking industry, where there are so many requirements for banks, for example, to make sure that a person can repay the loan that they give them. Now, if only the AI makes the decision, and then the person defaults, they go bankrupt, and the regulator finds that the decision to give them the money was unconscionable conduct, then who's responsible? Is it the developer of the AI? Is it the bank as a whole? You know, who's the individual in that process? And I think we've seen this in many other areas of society. You know, they talk about white collar crime, where you never actually get a criminal out of it, because we just say it was the company, but it never really is the company, there's always somebody there, or multiple people, who have made decisions. But in this situation, yeah, it's a piece of software. So, you know, it is really super interesting. And I think that we just have so much to unpack in this space. I really enjoyed the book to gain some more understanding of these questions. And, you know, I would encourage everyone, if you haven't yet, to start to educate yourself on AI. What are the different types? What does it mean? What does that look like for your business, but also for you as a person in society today? What are the takeaways that you took from it, Lachy?

Lachy Gray  17:22

Yeah, I think just asking, who benefits? In each of these scenarios where AI is being applied, there will be winners, and there are going to be people who are disadvantaged, often significantly, and talking about that, I think, is really important. And encouraging regulation. Regulation doesn't have to be a bad thing, as it's often perceived and commentated on. I think it actually helps with sharing knowledge, sharing mistakes, sharing learnings. Again, there's only a very small number of companies who have the resources to develop AI, and they're all for-profit, so they're not going to share unless they're encouraged to. But the airlines are doing it, so I think there is hope. What about you, Karen?

Karen Kirton  18:31

Yeah, I agree with all of that. I think the main takeaway for me is that even if we consider ourselves not techie, not that interested, we just have to be, because this is going to be part of our lives. It already is, in many ways that we may not even realise, you know, from asking questions of our phones to Netflix giving us different episodes to watch. It's already there. And so I think it's just so important to actually start to understand, at least at a high level, what this looks like for you, and to encourage your team as well to start thinking about it. Because, you know, though we've been talking about the morality of AI today, because that was the book topic, I don't want people to think that everything about AI is bad either. So I think that education piece is so important, to say, okay, well, how can we actually leverage this and use it for good as well?

Lachy Gray  19:24

Absolutely. Yeah, that's a great message. So links to articles and anything else we've discussed will be over on our websites, yarno.com.au and amplifyhr.com.au. Just follow the links to the podcast section. We do have a Linktree now as well, and we have a special offer on there for our podcast listeners, so check it out. That's at linktr.ee/makeitworkpodcast. And if you've received value from this episode, we'd love it if you could leave a rating or review over at Apple Podcasts.

Karen Kirton  19:58

And coming up in the next episode, we're going to look at how smaller businesses can develop their team members and consider career pathways.

Lachy Gray  20:06

Yes, and that podcast episode's coming up two weeks from now. So click the subscribe button to be notified when it's available. Any final thoughts, Karen?

Karen Kirton  20:15

I should have asked ChatGPT whether it had any final thoughts! No, I think I've said my final thoughts, which is, yeah, just get educated. Start looking at the topic, and start thinking about how you can use AI in your business responsibly.

Lachy Gray  20:34

Thanks so much for joining us, and we'll see you next time on the Make It Work podcast.
