FROM THE CEO
Beyond the AI Hype: The Crucial Role of Responsible Governance

I remember waiting a decade for the long-promised 'year of mobile' to arrive. When it did, it went by almost unnoticed. So it's incredible to see how quickly AI has burst onto the scene and taken over conversations so entirely. It is undeniable that this is the year of AI.

Don't get me wrong, I am well aware that the modern field of AI has been around for years (decades, even), but it is still surprising to see the speed at which it has become a conversation starter. Depending on who you speak to, it is either the next 'threat' to our jobs or the opportunity to be more efficient. As a marketer, I'm excited about the possibilities of this emerging technology. But as a leader and board member, I'm equally cautious about jumping all in or all out. There is most definitely a need to embrace AI for the way it fosters innovation, but equally to understand how to mitigate the risks.

AI has been around in some form for a while, but it has always been 'behind the scenes', powering technology we may use regularly without thinking too much about its role. OpenAI's release of ChatGPT to the public last year fired the starting pistol on a new 'gold rush'. Its accessibility, its ease of use and the multitude of stories about its ability to create content, from speeches to articles and papers, understandably prompted people to experiment and test its potential for personal and work-related tasks. This was clearly the view worldwide, with ChatGPT reaching the phenomenal milestone of fastest-growing consumer application in history: 100 million active users within two months of launch.

I'm excited because, from what I've seen so far, generative AI more generally (beyond just ChatGPT) has the potential to be a genuinely game-changing technological advance that could transform aspects of our lives and the way we as marketers approach our jobs. For so long we've bemoaned the fact that there is more to do than ever and not enough time to think and focus on the fun and interesting parts of our job. AI has the potential to automate so many of the mundane and repetitive tasks we're burdened with, setting us free to spend our time being as creative and free-thinking as we want.

This emerging technology (as it is often categorised) is not only coming, it is here; ignoring it is not an option. Similarly, embracing it with wild abandon is not something businesses can do either. There are very real risks to brand reputation, compliance, privacy responsibilities and the value of assets that must be considered and mitigated.

From a board-level perspective, I feel the urgency for businesses to get ahead by developing a framework within which the various forms of AI can be reviewed. This needs to happen sooner rather than later, because the uptake and application of generative AI technologies is a bottom-up movement. Team members are already experimenting with these technologies without a clear understanding of the potential business risks and without internal business policies to guide them. Inevitably, marketers are at the forefront of this: we're an industry that thrives on innovation and experimentation, and the temptation to take time-consuming tasks away from already stretched teams is enormous.
The problem is that AI, as used by general business (including marketers), is still largely in the realm of experimentation, with enthusiastic amateurs playing with the new tools without a full grasp of what they mean and, more importantly, without guardrails from their business to help them navigate the risks.

I've been around long enough to see these situations play out before. It brings to mind the early days of social media: a Wild West-like phase where people were posting carelessly (but easily), mixing up personal posting with posting under brand accounts. It took some real cases of brand risk exposure (and embarrassment) for businesses to set rules that helped employees find the ways the channel could work for the brand. That is what is needed here: a framework within which a business can test, learn and find the ways in which the emerging technology can drive innovation with minimal risk. That will require organisations to look at top-down policies, support and governance; bottom-up education, training, feedback channels and testing opportunities within boundaries; and, at the middle level, access to the right practical tools, ethical standards of use and cross-functional collaboration.

Then there is the added risk around the data that may be shared if teams are unaware of their risks, responsibilities and accountability. A recent survey from Kizen found that nine in ten employees who made at least $100,000 per year were using AI in their work life. But, when left unchecked, such usage can lead to inadvertent exposure of sensitive company data. Once data is uploaded to these systems there is no getting it back; the genie is out of the bottle. The same goes for the numerous third-party tools being used to harness these services: every link in the chain is a new place for hackers to target and potentially infiltrate systems. Samsung has even recently banned ChatGPT after an engineer was found to have uploaded sensitive internal source code to the platform. These are, after all, web-based platforms. Equally alarming, the quality of the responses these AI tools generate is only as good as the data inputted, which means uninformed decisions could be made based on incorrect or misleading data, potentially exposing the business to risk.

I highlight this not to be a scaremonger, but to put us on alert and prompt action, fast. This isn't to say that we should avoid AI. On the contrary, we need to harness it well, within frameworks and governance for using AI. Such policies would not only help mitigate risks but would also ensure that the technology is used responsibly and effectively. The boards that I serve on are already discussing this kind of governance and what needs to be done. There is a genuine desire to mitigate the risks and embrace the opportunities. We just can't get it wrong.

As far as I can see there is not a lot of public dialogue on this issue, and it is something ADMA is committed to helping drive. We need to foster a culture of safe and responsible use of AI at all levels of business, and advocate for it with government and within industry. This involves contributing to government consultations on ways to promote responsible AI use, integrating AI ethics principles into our data and privacy policies and strategies, and helping marketers develop robust AI strategies, all while balancing the technology with human interaction and input.
As creative and efficient as AI is, I still believe that, at our very best, marketers create at a higher level than any AI can (at this point in time, anyway). What we must not do, however, is get distracted from our core responsibilities. AI is a powerful tool, but it is still just a tool. In the same way we don't let untrained people operate wrecking balls, we shouldn't be allowing unfettered access to AI without the right principles in place. Once we have the right frameworks in place and we know that we are not putting our businesses or consumer trust at risk, then I say let's have some fun with these tools.

The cheeky side of me wishes I had written this column with ChatGPT... but the responsible side knows that here I speak to the ADMA community. That same cheeky side would also love to use its powers to write daily motivational quips to inspire my daughter to want to do her HSC.