Ramesh Shanmuganathan, Executive Vice President and Group CIO at John Keells Holdings (JKH) and Chief Executive of John Keells IT, explains in this interview how iterative, open AI has brought the world to a pivotal turning point, where the democratization of artificial intelligence is reshaping the way we work, learn, and interact with technology. This accessible, collaborative approach is breaking down barriers, enabling people from all walks of life to harness the power of AI. The transformation, however, also demands careful attention to ethics, governance, and digital identity, so that the benefits of AI are equitably distributed and responsibly used, marking a critical juncture on the journey towards an AI-enhanced future.
Iterative open AI models are disrupting work and life in general. How does one deal with this disruption and ensure that the average person benefits from it? Are we anywhere close to the so-called singularity?
I believe there has been extensive discussion about AI, singularity, and utopia. However, today’s conversation primarily centres around inclusivity. The question at hand is whether AI is accessible to the privileged alone or if it can benefit everyone, potentially disrupting or assisting the average individual. OpenAI, in this context, has played a pivotal role in breaking down the affordability barriers associated with AI.
In the past, AI was prohibitively expensive, even for corporations, because of the substantial investments and specialized skills it required. Progress meant moving from mere number crunching and prediction based on historical data patterns to developing intelligence about the future. This transition led to the emergence of supercomputers, exemplified by IBM’s Deep Blue defeating Garry Kasparov at chess in 1997. Supercomputers were employed in fields like weather forecasting and oil exploration because of the immense computing power and data volumes they could handle.
Today, the complexity behind AI is concealed from ordinary users. We interact with tools like ChatGPT as if we’re browsing the internet—such is the level of simplification. This simplicity is a result of the collaborative efforts within the open-source community, making AI accessible to the average person.
I believe we should perceive AI as a co-pilot for humans, analogous to a pilot’s relationship with their co-pilot. This collaborative approach should be applied across various roles and individuals, and that’s when AI truly becomes valuable.
For instance, a student can view AI as a 24/7 tutor, always available to assist in their learning journey. Similarly, a programmer, even with substantial knowledge, can turn to AI as a co-pilot when faced with a challenging problem. It’s not about outsourcing all responsibilities, such as studying or completing assignments, to AI; instead, it’s about ethical usage and understanding why and how we employ it.
Drawing a parallel, consider the availability of weapons in the United States; it’s not the technology or the weapon itself but the user’s intentions and actions that determine their ethical use. Responsibility lies not only with those creating AI but also with those utilizing it. Hence, governance, ethics, culture, and the conditioning of what gets created and how it’s used are critical aspects to consider.
I believe the true benefits of AI will be realized by the masses when we can effectively manage this entire lifecycle, taking into account the responsible use and governance of this powerful technology.
Can you demystify or define some of these concepts and terms that tend to be used interchangeably: artificial intelligence, superintelligence, and singularity?
Today, ChatGPT has evolved toward artificial general intelligence, and it touches on the concept of superintelligence to the extent that it can create original content, almost as if it had a brain of its own. When discussing artificial superintelligence, we ask how closely it mimics a human brain, or whether it is indistinguishable from human intelligence. Essentially, we are asking whether it matches or even surpasses human capabilities. That is not the case today.
From an analytical, knowledge, or inference perspective, AI has made significant strides. But if we consider emotional aspects, motor skills, and ethical judgment, it lags far behind. Let me illustrate with an example. In a cricket match, when someone appeals for an out and we review the video footage, the footage only provides raw data. The final decision rests with a human being, the third umpire. And even once the third umpire makes the ultimate decision, the 40,000 spectators may agree or disagree, because the decision is filtered through personal beliefs about what is right or wrong and what one believes occurred. Interpretation, in other words, is subjective.
Singularity, I believe, will also depend on the individual perspective. One person might say AI is as good as a human being, while someone with superior intellect may argue that AI falls short. From this viewpoint, it’s a somewhat general statement, and as for achieving true singularity, I don’t think we’re anywhere close. We might be 30-40 years away from fully replicating the capabilities of a human being in its entirety. However, in specific domains or aspects, we will continue to make incremental progress.
Since artificial general intelligence is primarily trained on publicly available data, is there a risk of bias in this data due to societal and resource disparities, and could this bias be harmful, especially in poorer countries with limited data resources?
That is an interesting observation, since data is the foundational element here. When discussing big data, there are five key elements to consider. First, there is the volume of data you aim to incorporate into a model. Second, velocity: the speed at which the data changes. Third, variety: whether the data is structured, semi-structured, or unstructured. Fourth, veracity, which concerns the quality and integrity of the data. And fifth, the value the data brings.
Traditionally, when dealing with AI, the focus was often on value. It revolved around asking whether there was a compelling business case for investing in AI. However, in the context of OpenAI, the emphasis has shifted. OpenAI is developing algorithms that let people explore and determine the potential use cases. The objective of the OpenAI model is to reduce subjectivity in these algorithms.
If we consider large language models (LLMs), like Google’s Bard, Microsoft’s Copilot, or ChatGPT, these models have algorithms that, to a certain extent, are trained in a relatively unbiased manner. However, when we build upon these models, it is essential to be cautious about the data we use to fine-tune them, ensuring that it remains aligned with our intended purposes and does not introduce bias.
For example, when GPT-3 was launched, there were notable negative case studies. For a large model like that, determining whether it should respond to questions like “How do I make a bomb out of RDX?” or “How do I poison a person?” posed a significant challenge. This highlights the complexities and ethical considerations associated with AI.
AI operates based on the data it has been trained on and might not inherently discern the intention behind a question. For instance, it may not recognize that a person asking how to make a bomb is potentially harmful, whether it’s an adult or a three-year-old child. This underscores the importance of ethics in AI training.
If we don’t train AI models to recognize and handle harmful content, they won’t know how to respond appropriately. This leads to a fundamental question: Can AI models, when a child or any user is interacting with them, determine the identity and intentions of the person asking the question? This, in turn, ties back to the concept of digital identity.
To provide proper responses and protect users, it becomes crucial to understand the digital identity of the person interacting with AI. Profiling and identifying users can help tailor responses and ensure that AI is used responsibly and safely.
Are you suggesting then that our interaction with AI is going to be determined by how good our digital identities are?
That is a crucial aspect of governing and ensuring the ethical use of AI. Just as we can’t rely solely on laws without enforcement mechanisms in society, we can’t expect AI to operate flawlessly without governance and ethical considerations.
AI is a platform, and it can be used for both beneficial and potentially harmful purposes. To govern AI effectively and bring ethics into the equation, we must address the issue of digital identity. Digital identity plays a vital role in validating and regulating AI usage.
In the coming decade, the education system is expected to undergo significant changes, and AI will likely be an integral part of it. When assigning tasks to students and expecting original work, verifying the authenticity of their contributions will be essential. Digital identity, as exemplified by India’s Aadhaar system, will play a pivotal role in this process. It will drive the next wave of growth and help AI become a supportive tool rather than a threat to human progress.
The key question we should ask is how to ensure AI coexists harmoniously with every individual and enhances their quality of life. AI can bring positive change to areas like healthcare, education, personalization, and financial planning. To unlock this potential, digital identity will become a foundational element enabling AI personalization and responsible use.
While we value morals, society also relies on an infrastructure of laws, regulations, policing, and courts; a digital identity alone cannot do all of that. Are you suggesting, then, that we need a digital ID as the foundation, but more infrastructure on top of it?
Exactly. We can draw a parallel with the legal system, highlighting the importance of identity in both the physical and digital worlds. Just as a person filing a legal case needs a recognized identity through documents like a birth certificate or passport, in the digital realm, establishing user identity is essential for governing and regulating AI usage effectively.
To govern AI usage, it’s crucial to identify and profile users. Much like how there are juvenile courts for individuals below a certain age who lack a recognized identity, in the digital space, there needs to be mechanisms for recognizing and managing user profiles. This enables the implementation of measures to ensure the responsible and ethical use of AI.
For instance, if a three-year-old child is interacting with AI, parents should have the capability to set restrictions, similar to how they can control internet access. AI can be programmed to adhere to these restrictions, but it requires user profiling to be effective. With user personas defined, such as age groups, permission can be granted or restricted accordingly. This approach promotes better protection, governance, and ethical use of AI.
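To make the idea concrete, the persona-based gating described above could be sketched roughly as follows. This is purely an illustrative sketch, not any real product’s implementation: the names (`UserProfile`, `TOPIC_RULES`, `persona`, `is_allowed`) and the age thresholds and topic categories are all invented for the example, and a real system would resolve the profile from a verified digital identity such as a national ID.

```python
# Illustrative sketch of persona-based guardrails for an AI front end.
# All names, thresholds, and topic categories are hypothetical.
from dataclasses import dataclass

# Hypothetical policy: which topic categories each persona may access.
TOPIC_RULES = {
    "child": {"education", "games"},
    "teen":  {"education", "games", "news"},
    "adult": {"education", "games", "news", "finance", "health"},
}

@dataclass
class UserProfile:
    """A user profile, assumed to come from a verified digital identity."""
    user_id: str
    age: int

def persona(profile: UserProfile) -> str:
    """Map a verified age onto a coarse persona used for permissioning."""
    if profile.age < 13:
        return "child"
    if profile.age < 18:
        return "teen"
    return "adult"

def is_allowed(profile: UserProfile, topic: str) -> bool:
    """Gate a request before it ever reaches the model."""
    return topic in TOPIC_RULES[persona(profile)]

# Usage: a young child's games question passes the gate; a finance
# question is blocked and could be routed to a refusal or to a parent.
child = UserProfile(user_id="u-001", age=3)
print(is_allowed(child, "games"))    # permitted for the "child" persona
print(is_allowed(child, "finance"))  # blocked for the "child" persona
```

The design point is that the check happens outside the model: the AI need not infer who is asking, because the verified identity supplies that context and the policy is enforced before the prompt is answered.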
Taking proactive steps to evolve and implement these governance mechanisms is crucial. Waiting until AI becomes pervasive and then attempting to address these issues may be too late. Developing and integrating these governance measures now is essential to ensure responsible and safe AI use as it becomes more deeply integrated into our lives.
AI, such as deep fake technology, is becoming increasingly prevalent in our daily lives. However, some regions, like Sri Lanka, may not have implemented digital IDs or robust regulatory structures yet. How should society address this transition, especially when it seems like we are lagging in terms of readiness and regulation?
One approach to addressing the challenges posed by AI is to build safeguards and controls directly into AI platforms. Relying solely on human intervention to detect and manage malicious content and other issues may not be sufficient. Using AI to implement and enforce safeguards is a forward-thinking strategy.
Ensuring that security measures are aligned with AI technology is essential, especially in fields like pharmaceuticals and healthcare, where the consequences of compromised data or biased outcomes can be severe. The COVID-19 pandemic has indeed brought this issue into focus, with questions about vaccine efficacy and data sharing. Integrating AI to analyze data and make informed decisions, especially regarding symptoms and vaccine effectiveness, could potentially lead to better outcomes. However, the decision on what knowledge should be shared for the benefit of the masses versus what should remain restricted is a subjective assessment made by humans, not AI.
Therefore, fostering international collaboration at various levels, including among governments, NGOs, and regional organizations, is crucial. By working together, we can create a collaborative platform that prioritizes the greater good of humanity, ensures responsible AI use, and addresses complex issues related to data sharing, ethics, and security. This collaborative approach is essential for harnessing the full potential of AI while mitigating risks and ensuring the well-being of society.
Your input on how AI can essentially enhance human well-being and happiness is based on the principle that we need to collaborate. Are you optimistic that we can get to some utopia with AI?
I’m not sure about utopia, but it’s important to maintain a balanced view of AI’s potential impact. AI does have the potential to yield significant benefits for the average person. The increasing availability of electricity, internet access, and affordable computing power has democratized AI, enabling people to leverage it in various ways.
Over the past year, there have been numerous positive stories about how AI, such as ChatGPT, is being used for learning, homework, and enhancing interest in various subjects. The widespread adoption and enthusiasm surrounding AI tools like ChatGPT are indeed remarkable.
It’s essential to foster this enthusiasm and not let negative narratives about AI job displacement or harm to humanity overshadow the positive aspects. Properly managing AI, setting up appropriate guardrails, and ensuring responsible use can help harness its potential for the betterment of society.
AI, when properly governed and utilized, has the potential to enhance the quality of life for individuals. While achieving a utopian society may be an ambitious goal, AI can contribute significantly to improving the overall well-being and opportunities available to people. It’s about striking a balance and ensuring that AI serves as a valuable tool for progress rather than a threat.
Given that Sri Lanka still lags in AI regulations, what are one or two simple actions we can take to create effective rules and encourage collaborations so that AI benefits everyone and makes our lives better?
Leapfrogging in technology, much like the adoption of mobile technology in Sri Lanka, offers an excellent opportunity for us. We leapfrogged from wired to wireless and achieved among the highest penetration rates precisely because we entered the market late. By embracing technologies like cloud and AI, we can bypass heavy investment in traditional infrastructure and benefit directly from the latest innovations.
Changing mindsets is indeed a crucial aspect of this transformation. Encouraging a shift in mindset from scepticism to embracing new technologies can help unlock the potential for leapfrogging.
Sri Lanka is in a unique position to capitalize on this opportunity. Leveraging the talent pool and fostering an entrepreneurial spirit can turn Sri Lanka into an innovation hub that drives future growth. Collaborating with multinational organizations and regional hubs, especially with the rapidly growing economy of India, can create a bridge to prosperity. The potential for a ripple effect similar to what Hong Kong experienced with China or Mexico with the U.S. is significant.
Technology can serve as the glue that brings countries together and propels them into the next phase of growth. Recognizing and seizing this opportunity is crucial for Sri Lanka to maximize its potential and play a pivotal role in the evolving regional and global landscape.