E048 Bala Madhusoodhanan on Critical Considerations for Leaders when Adopting AI Solutions

Agile Innovation Leaders

Release Date: 02/09/2025

Bio  

Bala has rich experience in retail technology and process transformation. Most recently, he worked as a Principal Architect for Intelligent Automation, Innovation & Supply Chain in a global Fortune 100 retail corporation. Currently he works for a luxury brand as Principal Architect for Intelligent Automation providing technology advice for the responsible use of technology (Low Code, RPA, Chatbots, and AI). He is passionate about technology and spends his free time reading, writing technical blogs and co-chairing a special interest group with The OR Society.  

Interview Highlights

02:00 Mentors and peers

04:00 Community bus

07:10 Defining AI

08:20 Contextual awareness

11:45 GenAI

14:30 The human loop

17:30 Natural Language Processing

20:45 Sentiment analysis

24:00 Implementing AI solutions

26:30 Ethics and AI

27:30 Biased algorithms

32:00 EU AI Act

33:00 Responsible use of technology

Connect  

Bala Madhusoodhanan on LinkedIn    

Books and references  

· https://nymag.com/intelligencer/article/ai-artificial-intelligence-chatbots-emily-m-bender.html - NLP

· https://www.theregister.com/2021/05/27/clearview_europe/ - Facial technology issue

· https://www.designnews.com/electronics-test/apple-card-most-high-profile-case-ai-bias-yet - Apple Card story

· https://www.ft.com/content/2d6fc319-2165-42fb-8de1-0edf1d765be3 - Data centre growth

· https://www.technologyreview.com/2024/02/06/1087793/what-babies-can-teach-ai/

· Independent Audit of AI Systems

· The Alan Turing Institute

· Competing in the Age of AI: Strategy and Leadership When Algorithms and Networks Run the World, Marco Iansiti & Karim R. Lakhani

· AI Superpowers: China, Silicon Valley, and the New World Order, Kai-Fu Lee

· The Algorithmic Leader: How to Be Smart When Machines Are Smarter Than You, Mike Walsh

· Human + Machine: Reimagining Work in the Age of AI, Paul R. Daugherty & H. James Wilson

· Superintelligence: Paths, Dangers, Strategies, Nick Bostrom

· The Alignment Problem: How Can Artificial Intelligence Learn Human Values, Brian Christian

· Ethical Machines: Your Concise Guide to Totally Unbiased, Transparent, and Respectful AI, Reid Blackman

· Wanted: Human-AI Translators: Artificial Intelligence Demystified, Geertrui Mieke De Ketelaere

· The Future of Humanity: Terraforming Mars, Interstellar Travel, Immortality, and Our Destiny Beyond Earth, Michio Kaku (narrated by Feodor Chin et al.)

 Episode Transcript

Intro: Hello and welcome to the Agile Innovation Leaders podcast. I’m Ula Ojiaku. On this podcast I speak with world-class leaders and doers about themselves and a variety of topics spanning Agile, Lean Innovation, Business, Leadership and much more – with actionable takeaways for you the listener.

Ula Ojiaku

So I have with me here, Bala Madhusoodhanan, who is a principal architect with a global luxury brand, and he looks after their RPA and AI transformation. So it's a pleasure to have you on the Agile Innovation Leaders podcast, Bala, thank you for making the time.

Bala Madhusoodhanan

It's a pleasure to have a conversation with the podcast and the podcast audience, Ula. I follow the podcast and there have been fantastic speakers in the past. So I feel privileged to join you on this conversation.

Ula Ojiaku

Well, the privilege is mine. So could you start off with telling us about yourself Bala, what have been the key points or the highlights of your life that have led to you being the Bala we know now?

Bala Madhusoodhanan

It's about putting yourself into uncharted territory. So my background is mechanical engineering, and when I got the job, it was either you go into the mechanical engineering manufacturing side or the software side, which was slightly booming at that point in time, and obviously it was paying more, so I decided to take the software route, but eventually somewhere the paths kind of overlapped. So from a mainframe background, I started working on supply chain, and then came back to optimisation, tied back to the manufacturing industry. Somewhere there is an overlap, but yeah, that was the first decision that probably got me here. The second decision was to work in the UK geography rather than the US geography, which again seemed very strange to a lot of my peers. They generally go to Silicon Valley or the East Coast, but I just took a choice to stay here for personal reasons. And then the third was the mindset. I mean, over the last 15, 20 years I had really good mentors, really good peers, so I always had their help to soundboard my crazy ideas, and I always try to keep those relationships ongoing.

Ula Ojiaku

What I'm hearing, based on what you said, is that lots of relationships have been key to getting you to where you are today, both mentors and peers. Could you expand on that? In what way?

Bala Madhusoodhanan

The technology is changing quite a lot, at least in the last 10 years. So if you look at pre-2010, there was no machine learning, or it was statistics. People were just saying everything is statistics, and accessibility to information was not that great, but post 2010, 2011, people started getting access. Then there was a data buzz, big data came in, so there were a lot of opportunities where I could have taken a different career path, but every time I was in a dilemma about which route to take, I had someone with whom I had either worked, or who was my team lead or manager, to guide me, to tell me, like, take emotion out of the decision making and think with a calm mind, because you might jump into something and you might like it, you might not like it, you should not regret it. So again, over the course of so many such decisions, my cognitive mind has also started thinking about it. So those conversations really help. And again, collective experience. If you look into the decision making, it's not just my decision, I'm going through conversations that I had with people where they have applied their experience, so it's not just me or just one situation, and to understand the why behind that, and that actually helps. In short, it's like a collection of conversations that I had with peers. A few of them are visionary leaders, they are good readers. So they always had good insight on where I should focus, where I shouldn't focus, and of late, there has been a community bus. So a lot of things are moving to open source, there is a lot of community exchange of conversation, blogging has picked up a lot. So, connecting to those parts also gives you a different dimension to think about.

Ula Ojiaku

So you said community bus; some of the listeners or people who are watching the video might not understand what you mean by the community bus. Are you talking about meetups or communities that come together to discuss shared interests?

Bala Madhusoodhanan

If you are specifically interested in AI, or you are specifically interested in the Power Platform or a low code platform, there are a lot of content creators on those topics. You can go to YouTube, LinkedIn, and you get a lot of information about what's happening. They do a lot of hackathons; again, you need to invest time in all these things. If you don't, then you are basically missing the boat, but there are various channels like hackathons or meetup groups, or, I mean, it could be a virtual conversation like you and me, we both have some passionate topics, that's why we resonate and we are talking about it. So it's all about you taking the initiative, you finding time for it, and then you have tons and tons of information available through communities or through conferences or through meetup groups.

Ula Ojiaku

Thanks for clarifying. So, you said as well that you had a collection of conversations that helped you whenever you were at a crossroads, when some new technology emerges or there's a decision you have to make, and checking in with your mentors, your peers, almost your personal Board of Directors, gives you guidance. Now, looking back, would you say there were some turns you took that, knowing what you know now, you would have done differently?

Bala Madhusoodhanan

I would have liked to study more. That is the only thing, because sometimes an educational degree, even without practical knowledge, has a bigger advantage in certain conversations; otherwise your experience and your content should speak for you, and it takes a little bit of effort and time to get that trust among leaders or peers, even for them to trust that, okay, this person knows what he's talking about, I should probably trust him, rather than someone who has done a PhD. It's just finding the right balance of when I should have invested time in continuing my education. If I had time, I would have gone back two years and done everything that I have done, just offset by two years earlier. It would have given me different pathways. That is what I would think, but again, it's all constraints. I did the best at that point in time with whatever constraints I had. So I don't have any regret per se, but yeah, if there were a magic wand, I would do that.

Ula Ojiaku

So you are a LinkedIn Top Voice for AI. How would you define AI, artificial intelligence?

Bala Madhusoodhanan

I am a bit reluctant to use the term Artificial Intelligence. In my mind, it is Artificial Narrow Intelligence, which is slightly different. So let me start with a building block, which is machine learning. So machine learning is like a data labeller. You go to a Tesco store, you read the label, you know it is a can of soup because you have read the label, but your brain is not only processing that image, it understands the surroundings. It does a lot of things when you pick that can of soup. You can't expect that by just feeding one model to a robot. So that's why I'm saying AI is a bit over-glorified in my mind. It is artificial narrow intelligence. What you do to automate certain specific tasks using a data set which is legal, ethical, and drives business value is what I would call machine learning, but yeah, AI is just an overhyped and heavily utilised term.

Ula Ojiaku

You said, there's a hype around artificial intelligence. So what do you mean by that? And where do you see it going?

Bala Madhusoodhanan

Going back to the machine learning definition that I gave, it's basically predicting an output based on some input. That's as simple as what we would say machine learning is. The word algorithm basically means something like a pattern finder. What you're doing is you are giving a lot of data, which is properly labelled, which has proper diversity of information, and there are multiple algorithms that can find patterns. The cleverness or engineering mind that you bring in is to select which pattern or which algorithm you would like to use for your use case. Now you're channelling the whole of machine learning into one use case. That's why I'm going with the term narrow intelligence. Computers can do brilliant jobs. So you ask a computer to do something like Rubik's cube solving, it will do it very quickly, because the task is very simple and it is just doing a lot of calculation. You give a Rubik's cube to a kid, it has to apply itself. The brain is not trained enough, so it has to cognitively learn; maybe it will be faster. So anything which is just pure calculation, pure computing, if the data is labelled properly and you want to predict an outcome, yes, you can use computers. One of the interesting videos that I showed in one of my previous talks was a robot trying to walk across the street. This was in 2018 or 19. The first video was basically about a robot crossing a street, and there were vehicles coming across and the robot just had a headbutt and it just fell off. Then a four year old kid was asked to walk, and the kid knew that I have to press the button at the red signal. So it went to the signal and stopped. It knew, or the child knew, that I can only walk when it is green, and then it looks around and then walks, so you can see the difference: a four year old kid has contextual awareness of what is happening, whereas the robot, which is supposed to be called artificial intelligence, couldn't see that. So again, if you look, our human brains have evolved over millions of years. There are like 10 billion neurons or something, and it is highly optimised. So when I sleep, there is a different set of neurons which are running. When I speak to you, my eyes and ears are running, my motion sensor neurons are running, but these are all highly optimised. So the mother control knows how much energy should be sent to which neuron, right, whereas for all these large language models, there is only one task. You ask it, it's just going to do that. It doesn't have that intelligence to optimise. When I sleep, maybe 90 percent of my neurons are sleeping, getting recharged. Only the dream neurons are working. Whereas once you put a model live, it doesn't matter, all the hundred thousand neurons would run. So, yeah, it's in a very infant state; maybe with quantum computing, maybe with more power and better chips, things might change, but I don't see that happening in the next five to 10 years.
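
To make the "pattern finder" idea concrete, here is a minimal sketch in Python. The data, labels and function name are hypothetical, purely for illustration: a nearest-neighbour predictor can only reproduce patterns present in its labelled examples, which is why the quality and diversity of that data matters so much.

```python
# Minimal illustration of "machine learning as a pattern finder":
# predict a label for a new input by finding the closest labelled example.
# The feature vectors and labels below are made up for the sketch.

from math import dist

labelled_examples = [
    ((1.0, 1.2), "soup"),
    ((1.1, 0.9), "soup"),
    ((5.0, 5.3), "cereal"),
    ((4.8, 5.1), "cereal"),
]

def predict(features):
    """Return the label of the closest labelled example (1-nearest neighbour)."""
    nearest = min(labelled_examples, key=lambda example: dist(example[0], features))
    return nearest[1]

print(predict((1.05, 1.0)))  # -> "soup"
print(predict((5.2, 5.0)))   # -> "cereal"
# The model only "knows" the patterns in its labelled data; anything outside
# that distribution is guessed without any contextual awareness.
```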

Ula Ojiaku

Now, what do you say about Gen AI? Would you also classify generative AI as purely artificial narrow intelligence?

Bala Madhusoodhanan

The thing with generative AI is you're trying to generalise a lot of use cases, say ChatGPT, you can throw in a PDF, you can ask something, or you can say, hey, can you create content for my blog, or things like that, right? Again, all it is trying to do is it has some historical content with which it is trying to come up with a response. So the thing that I would say is humans are really good with creativity. If a problem is thrown at a person, he will find creative ways to solve it. The tool with which we are going to solve it might be a GenAI tool, I don't know, because I don't know the problem, but because GenAI is in a hype cycle, every problem doesn't need GenAI, that's my view. So there was some interesting research done by someone at Montreal University. It looked at 10 of the basic tasks, like converting text to text or text to speech, with a generative AI model or multiple models, because you have a lot of vendors providing different GenAI models, and then they compared them with task-specific models, and the thing that they found was the task-specific models were cheap to run, very, very scalable and robust, and highly accurate, right. Whereas with GenAI, when you try to use it and it goes into a production-ready or enterprise-ready state, and if it is used by customers or third parties which are not part of your ecosystem, you are putting yourself in some kind of risk category. There could be a risk of copyright issues. There could be a risk of IP issues. There could be a risk of not getting the right consent from someone. I can say, can you create an image of a podcaster named Ula? You never know, because you don't remember that one of your photos on Google or Twitter or somewhere is not set as private. No one has come and asked you saying, I'm using this image. And yeah, it's about finding the right balance. So even before taking on the technology, I think people should think about what problem they are trying to solve. In my mind, AI, or artificial intelligence, or narrow intelligence, can have two buckets, right. The first bucket is to do with how can I optimise the existing process? Like, there are a lot of things that I'm doing, is there a better way to do it? Is there an efficient way to do it? Can I save time? Can I save money? Stuff like that. So that is an optimisation or driving efficiency lever. The other one could be, I know what to do, I have a lot of data, but I don't have the infrastructure or people to do it, like workforce augmentation. Say I have 10 data entry persons who are graduate level. Their only job is to review the receipts or invoices. I work in FCA. I have to manually look at it, approve it, and file it, right? Now it is a very tedious job. So all you are doing is you are augmenting the whole process with an OCR engine. So OCR is Optical Character Recognition. So there are models, which again, is a beautiful term for what our eyes do. When we travel somewhere, we get an invoice, we exactly know where to look, right? What is the total amount? What is the currency I have paid? Have they taken the correct credit card? Is my address right? All those things, unconsciously, your brain does. Whereas these models, given by different software vendors, have been trained to capture these specific entities, which are universal; you just pass the image to them and they pick and map that information. Someone else will do that job. But as part of your process design, what you would do is, I will do the heavy lifting of identifying the data points, and I'll give it to someone because I want someone to validate it. It's human at the end. Someone is approving it. So you basically put a human in the loop and a human-centric design into a problem-solving situation. That's your efficiency lever, right? Then you have something called the innovation lever: I need to do something radical, I have not done this product or service before. Yeah, that's a space where you can use AI, again, to do small proofs of concept. One example could be, I'm opening a new store, it's in a new country, I don't know how the store layout should look. These are my products, this is the store square footage, can you recommend the best way so that I can sell through a lot? Now, a visual merchandising team will have some ideas on where things should be, and they might give that prompt. That text can be converted into an image. Once you get the base image, then it's human, it's us. So it will be a starting point rather than someone implementing everything. It could be a starting point. But can you trust it? I don't know.
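
A rough sketch of the human-in-the-loop pattern described above, in Python. The extractor here is a hypothetical stand-in for a vendor OCR or document-understanding model, and the field names and confidence threshold are invented for illustration; the point is simply that low-confidence extractions get routed to a person rather than being auto-approved.

```python
# Sketch of a human-in-the-loop document workflow: an automated step
# extracts fields with confidence scores; anything below the threshold
# is queued for a human reviewer instead of being auto-approved.

CONFIDENCE_THRESHOLD = 0.90

def extract_invoice_fields(image_path):
    """Hypothetical stand-in for a call to an OCR / document-understanding model."""
    # A real implementation would call a vendor model here and return
    # {field_name: (value, confidence)} pairs for the given image.
    return {
        "total": ("125.40", 0.97),
        "currency": ("GBP", 0.99),
        "card_last4": ("4321", 0.62),  # low confidence -> needs human review
    }

def route_invoice(image_path):
    """Split extracted fields into auto-accepted ones and ones a human must validate."""
    fields = extract_invoice_fields(image_path)
    auto_accepted = {k: v for k, (v, conf) in fields.items() if conf >= CONFIDENCE_THRESHOLD}
    needs_review = {k: v for k, (v, conf) in fields.items() if conf < CONFIDENCE_THRESHOLD}
    if needs_review:
        return {"status": "sent_to_human", "auto": auto_accepted, "review": needs_review}
    return {"status": "auto_approved", "auto": auto_accepted}

print(route_invoice("invoice_001.png"))
```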

Ula Ojiaku

And that's why you said the importance of having a human in the loop.

Bala Madhusoodhanan

Yeah. So the human in the loop, again, it's because we humans bring contextual awareness to the situation, which the machine doesn't have. So I'll tie this back to NLP. So Natural Language Processing has two components: you have natural language understanding and then you have natural language generation. When you create a machine learning model, all it is doing is understanding the structure of language. It's called form. I'm giving you 10,000 PDFs, or you're reading a Harry Potter book. There is a difference between you reading a Harry Potter book and the machine interpreting that Harry Potter book. You would have imagination. You will have context of, oh, in the last chapter we were in the hilly region or in a valley, I think it will be like this, the words like mist, cold, wood. You have already started forming images and visualising stuff. The machine doesn't do that. The machine works on: this is the word, this is a pronoun, this is the noun, this is the structure of language, so the next one should be this, right? So, coming back to natural language understanding, that is where the context and the form come into play. Just think of some alphabets put in front of you. You have no idea, but these are the alphabets. You recognise A, you recognise B, you recognise the word, but you don't understand the context. One example is, I'm swimming against the current. Now, current here is the motion of water, right? My current code base is version 01. I'm using the same word, current, right? The context is different. So interpreting the structure of language is one thing. So, in natural language understanding, what we try to do is we try to understand the context. NLG, Natural Language Generation, is basically how can I respond in a way where I'm giving you an answer to your query. And this combined is NLP. It's a big field. There was research done by Professor Emily Bender, one of the leading professors in the NLP space. The experiment was very funny. It was about a parrot on an island talking to someone, and there was a shark in between, or some sea creature, which basically broke the connection and was listening to what the person was saying and mimicking it. Again, this is the problem with NLP, right? You don't have understanding of the context. You don't put empathy into it. You don't understand the voice modulation. Like when I'm talking to you, you can judge what my emotional cues are, you can put empathy in, you can tailor the conversation. If I'm feeling sad, you can put a different spin on it, whereas if I'm chatting to a robot, it's just going to give a standard response. So again, you have to be very careful about which situation you're going to use it in, whether it is for a small team, whether it is going to be in public, stuff like that.

Ula Ojiaku

So that's interesting, because sometimes I join the Masters of Scale strategy sessions, and at the last one there was someone whose startup was featured, and apparently what their startup is doing is building AI solutions that are able to do sentiment analysis. And I think some of these are, again, in their early stages, but some of these things are already available that try to understand the tone of voice, the words they say, and match it with maybe the expression, and can actually transcribe virtual meetings and say, okay, this person said this, they looked perplexed or they looked slightly happy. So what do you think about that? I understand you're saying that machines can't do that, but it seems like there are already organisations trying to push the envelope in that direction.

Bala Madhusoodhanan

So the example that you gave, sentiment of the conversation, again, it is going by the structure or the words that I'm using. I am feeling good. So good, here, is positive sentiment. Again, for me the capability is slightly overhyped, the reason being it might do 20 or 30 percent of what a human might do, but the human is any day better at that particular use case, right? So sentiment analysis typically works on a sentiment data set, which would say, these are certain proverbs, these are certain types of words, these generally refer to a positive sentiment or a good sentiment or feel-good factor, but the model is only as good as the data, right?

So no one is going and constantly updating that dictionary. No one is thinking about it, like Gen Z have a different lingo, millennials had a different lingo. So, again, you have to treat it use case by use case, Ula.
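
As a toy illustration of the lexicon-based sentiment scoring described above, here is a small Python sketch. The word list is hypothetical and deliberately tiny; it shows why the model is only as good as its dictionary, since slang or phrasing it has never seen contributes nothing to the score.

```python
# Toy lexicon-based sentiment scorer: sums the polarity of known words.
# Anything not in the lexicon (e.g. newer slang) is silently ignored,
# so the score is only as good as the dictionary behind it.

SENTIMENT_LEXICON = {
    "good": 1, "great": 2, "happy": 1,
    "bad": -1, "terrible": -2, "sad": -1,
}

def sentiment_score(text):
    """Return a crude polarity score: positive > 0, negative < 0, unknown words = 0."""
    words = text.lower().split()
    return sum(SENTIMENT_LEXICON.get(word, 0) for word in words)

print(sentiment_score("I am feeling good"))         # 1  -> positive
print(sentiment_score("this release is terrible"))  # -2 -> negative
print(sentiment_score("that talk slapped"))         # 0  -> unseen slang reads as neutral
```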

Ula Ojiaku

At the end of the day, the way things currently are is that machines aren't at the place where they are as good as humans. Humans are still good at doing what humans do, and that's the key thing.

Bala Madhusoodhanan

An interesting use case that I read about recently, probably after COVID, was immersive reading, for people with dyslexia. So again, AI is used for good as well, I'm not saying it is completely bad. So AI is used for good, like teaching kids who are dyslexic, right? Text to speech can read a paragraph out, the kid can hear it, and on the screen, I think OneNote has an immersive reader, it actually highlights which word it is uttering into their ears, and a research study showed that kids who were part of the study group with this immersive reading audio textbook had a better grasp of the context, and they performed well and were able to manage dyslexia better. Now, again, we are using the technology, but again, kudos to the research team, they identified a real problem, they formulated how the problem could be solved, and they were successful. So, again, technology is being used for good. Cancer research invests heavily in image clustering, brain tumours, I mean, there are a lot of use cases where it's used for good, but then again, when you're using it, you just need to think about biases. You need to understand the risk, I mean, everything is risk and reward. If your reward is outweighing the minimum risk that you're taking, then it's acceptable.

Ula Ojiaku

What would you advise leaders of organisations who are considering implementing AI solutions? What are the things we need to consider?

Bala Madhusoodhanan

Okay. So going back to business strategy and growth, that is something that enterprises or big organisations would have in mind. Always have your AI goals aligned to what they want. So as I said, there are two buckets. One is your efficiency driver, the operational efficiency bucket. The other one is your innovation bucket. Just have a sense check of where the business wants to invest. Just because AI is there doesn't mean you have to use it, right? Look into opportunities where you can drive more value. So that would be my first line of thought. The second would be more to do with educating leaders about AI literacy, like what each model is, what it does, what the pitfalls are, the ethical awareness about the use of AI; data privacy is big. So again, that education is just high level, with some examples from the same business domain where it has been successful, where it has been not so successful, and what the challenges were that they faced. That's something that I would urge everyone to invest time in. I think I did mention security; again, over the years, the practice has been that security is always kept till last. So again, I was fortunate enough to work in organisations where a security-first mindset was put in place, because once you have a proof of value, once you show that to people, people get excited, and it's about messaging it and making sure it is very secure, protecting the end users. So the third one would be having security-first design policies or principles. Machine learning or AI is no good if your data quality is not there, so having a data strategy is something that I would definitely recommend. Start small. I mean, just like agile, you take a value, you start small, you realise whether your hypothesis was correct or not, you monitor how you performed, and then you think about scale. Just doing a hello world doesn't mean that you have mastered it. So have that mindset: start small, monitor, have constant feedback, and then think about scaling.

Ula Ojiaku

What are the key things about ethics and AI, do you think leaders should be aware of at this point in time?

Bala Madhusoodhanan

So again, ethics is very subjective. So it's about having different stakeholders give their honest opinion of whether your solution is the right thing to do against the values of the enterprise. And it's not your view or my view, it's a consensus view, and for certain things where people are involved, you might need to get HR, you might need to get legal, you might need to get the brand reputation team to come and assist you, because you don't understand the why behind certain policies that were put in place. So one is, is the solution, or is the AI, ethical against the core values of the enterprise? That's the first sense check that you need to do. If you pass that sense check, then come a lot of other threats, I would say, like, did the model that I'm using have a fair representation of all data sets? There's a classic case study of a big cloud computing giant using an AI algorithm to filter resumes, and they had to stop it immediately because the data set was all Ivy League, male, white dominant; it didn't have the right representation. Over 10 years, if I'm just hiring a certain type of people, my data is inherently biased; no matter how good my algorithm is, I don't have the right data set. The other example is Clearview AI. They got into trouble for using very biased data to give an outcome on some decision making around immigration, which has bigger ramifications. Then you talk about fairness, whether the AI system is fair in giving you an output. So there was a funny story about a man and a woman in California living together, and I think the woman wasn't provided a credit card, even though everything, the postcode, is the same, and both of them work in the same company. I think it had to do with Apple Pay. Apple Pay wanted to bring in a silver credit card, the Apple Card or whatever it is, but then it was so unfair that the woman, who was equally qualified, was not given the right credit limit, and the bank simply said the algorithm said so. Then you have the privacy concern, right? So all these generic models that are available, even ChatGPT for that matter. Now you can chat with ChatGPT multiple times. You can talk about someone like Trevor Noah, and you can say, hey, can you create a joke? Now it has been trained with the jokes that he has done, which might be available publicly. But has the creator of the model got consent, saying, hey Trevor, I'm going to use your content so that I can give better answers? And how many such consents? Even Wikipedia, if you look into Wikipedia, about 80 percent of the information is public, but it is not diversified. What I mean by that is you can search for a lot of information if the person is from America or from the UK or from Europe, maybe from India to some extent, but what is the quality of data if you think about countries in Africa, what do you think about South America? I mean, it is not representing the total diversity of data, and we have these large language models which have been trained just on that data, right? So there is a bias, and because of that bias, your outcome might not be fair. So these two are the main things, and of course the privacy concern. So if someone goes and says, hey, you have used my data, you didn't even ask me, then you're into a lawsuit for not getting proper consent. Again, it's a bad world, it's very fast moving, and people don't, including me, I don't even read every term and condition, I just scroll down, tick, confirm, but those things are where I think education should come into play.
Think about it, because people don't understand what could go wrong, not to them, but to someone like them. Then there is a big fear of job displacement, like, if I put this AI system in, what will I do with my workforce? Say I had ten people; you need to think about it, you need to reimagine your workplace. These are the ten jobs my ten people are doing. If I augment six of those jobs, how can I use my ten resources effectively to do something different? That piece of the puzzle always, again, goes back to the core values of the company, what they think about their people, how everything ties back, but it just needs a lot of input from multiple stakeholders.

Ula Ojiaku

It ties back to the enterprise strategy, and the values, but with technology as it has evolved over the years, things get made obsolete, but there are new opportunities that are created. So moving from when people travelled with horses and buggies to when the automobile came up, yes, there wasn't as much demand for horseshoes and horses and buggies, but there was a new industry, the people who would be mechanics or run garages and things like that. So I think it's really about that. Like, going back to what you're saying, how can you redeploy people? And that might involve, again, training, reskilling, and investing in the education of the workforce so that they're able to harness AI and do those creative things that you've emphasised over this conversation about human beings, that creative aspect, that ability to understand context and nuance and apply it to the situation.

Bala Madhusoodhanan

So I was fortunate to work with ForHumanity, an NGO which is basically trying to certify people to look into auditing AI systems. The EU AI Act is now in place and it will be enforced soon, so you need people to have controls on all these AI systems to protect, it's done to protect people, it's done to protect the enterprise. So I was fortunate enough to be part of that community. I'm still working closely with the Operational Research Society. Again, you should be passionate enough, you should find time to do it, and if you do it, then the universe will find a way to give you something interesting to work with. And with The OR Society, The Alan Turing Institute, the ForHumanity community, I had a few ICO workshops, which were quite interesting, because when you hear perspectives from people from different facets of life, like lawyers and solicitors, you think, ah, this statement, I wouldn't have interpreted it in this way. It was a good learning experience, and I'm sure if I have time, I will still continue to do that and invest time in ethical AI. And it's not only AI, it's the ethical use of technology, so sustainability is also part of the ethical bucket if you look into it. So there was an interesting paper that talks about how many data centres have been opened between 2018 and 2024, which is like six years, and the power consumption has gone from X to two or three times X, so we have opened a lot. We have already caused damage to the environment with all this technology, and just because the technology is there, it doesn't mean you have to use it, but again, it's that educational bit, what is the right thing to do? And even ESG awareness, people are not aware. Like now, if you go to the current TikTok trenders, they know, I need to look for a certified B Corp when I am buying something. The reason is because they know, and they're more passionate about saving the world. Maybe we are not, I don't know, but again, once you start educating and telling those stories, humans are really good, so you will have a change of heart.

Ula Ojiaku

What I'm hearing you say is that education is key to helping us make informed choices. There is a time and place where you would need to use AI, but not everything requires it, and if we're more thoughtful in how we approach these, because these are tools at the end of the day, then we can at least try to be more balanced in taking advantage of the opportunities versus the risks around them, and the impact that these decisions and the tools we choose to use have on the environment. Now, what books have you found yourself recommending most to people, and why?

Bala Madhusoodhanan

Because we have been talking about AI, AI Superpowers is one book, which was written by Kai-Fu Lee. There is this book by Brian Christian, The Alignment Problem, about the alignment of human values and machines. It basically talks about what the human values are, where you want to use machine learning, how you basically come up with decision making; that's a really interesting read. Then there is a book called Ethical Machines by Reid Blackman. It talks about all the ethical facets of AI, like biases, fairness, data privacy, transparency, explainability, and he gives quite detailed examples and walkthroughs of what that means. Another interesting book was Wanted: Human-AI Translators: Artificial Intelligence Demystified by a Dutch professor, again, a really, really lovely narration of what algorithms are, what AI is, and all you should think about, what controls and stuff like that. So that is an interesting book. Harvard Professor Karim Lakhani wrote something called Competing in the Age of AI; that's a good book. The Algorithmic Leader: How to Be Smart When Machines Are Smarter Than You by Mike Walsh is another good book, which I finished a couple of months back.

Ula Ojiaku

And if the audience wants to find you, how can they reach out to you?

Bala Madhusoodhanan

They can always reach out to me on LinkedIn, I would be happy to touch base through LinkedIn.

Ula Ojiaku

Awesome. And do you have any final words, or an ask of the audience?

Bala Madhusoodhanan

The final word is, again, responsible use of technology. Think about not just the use case, think about the environmental impact, think about future generations, because I think the damage is already done. So, at least not in this lifetime, but maybe three or four lifetimes down the line, it might not be the beautiful earth that we have.

Ula Ojiaku

It's been a pleasure, as always, speaking with you, Bala, and thank you so much for sharing your insights and wisdom, and thank you for being a guest on the Agile Innovation Leaders Podcast.

Bala Madhusoodhanan

Thank you, lovely conversation, and yeah, looking forward to connecting with more like minded LinkedIn colleagues.

Ula Ojiaku

That’s all we have for now. Thanks for listening. If you liked this show, do subscribe at www.agileinnovationleaders.com or your favourite podcast provider. Also share with friends and do leave a review on iTunes. This would help others find this show. I’d also love to hear from you, so please drop me an email at [email protected] Take care and God bless!