Generative AI has garnered a massive fanbase, widespread advocacy, and rapid adoption since its debut. This pervasive technology has redefined how work gets done for individuals and enterprises alike. While the pace of innovation and deployment has been swift, the bigger challenges enterprises must grapple with concern maturity, efficacy, and accountability.
The question enterprises need to ask is, “How do we strike a balance between innovation and deployment on the one hand and the responsible use of Generative AI on the other?” While this is a colossal subject, we can dissect it into two broad areas: the problem and the solution(s).
The Problem:
The problem can be classified as a state of “completion,” which gradually culminates in a state of “transformation.”
Influenced by Generative AI and associated technologies, the completion phase marks a fundamental change in our societal fabric through a paradigm shift in thoughts, beliefs, perceptions, and values. Some observable characteristics of this state include a further deteriorated or blurred structure for a fair labour economy, an increased impact on human cognition, heavy dependency on artificial intelligence, violated principles of singularity, and so on.
The state of transformation can take on two dimensions: business and technical. This phase follows a cause-and-effect path; technical limitations and unmet expectations give rise to flawed business inferences and decision-making. For example, Stable Diffusion, Midjourney, and DALL·E 2 can produce remarkable visuals and styles. The capabilities of text generators are perhaps even more striking, as they write essays, poems, and summaries.
This raises questions of ownership, autonomy, and the future of art. Art created by artificial intelligence faces significant criticism on two fronts: the limited (or absent) involvement of human creators, which complicates copyright, and the use of other artists’ original work to train models without permission or credit.
Although Generative AI has the potential to reshape industries and open new opportunities, it can also be manipulated to the whim of anyone willing to use lies and propaganda to further their own ends, as research by MIT suggests. The danger lies in the fact that we remain blindsided unless we already know the answer, or can define a set of rules or conditions that is proven to have been violated. What’s more, in this journey of creation, digital media will eventually be flooded with synthetic content, blurring the line between truth and falsehood and taking us into a no-man’s-land.
Since the inner workings of these algorithms are still being figured out, taking a “pull the plug” approach based on perception, or devising a fail-safe method, might not be in the best interest of an enterprise trying to implement Generative AI. The point to keep in mind is that AI not only replicates human biases but also lends them a veneer of scientific credibility.
As responsible technologists, our goal must be to slow the pace of transformation, allowing it to influence and shape societies gradually and deliberately.
The Solution:
Given the myriad opportunities that Generative AI presents, scientists globally are working to tackle its challenges through mechanisms such as human feedback, constitutional AI, deception detection, research into generalization, calls for governments to regulate AI, regulatory frameworks for AI (as in India), and human-aided AI.
The fact remains that, despite these efforts, malefactors stay hidden behind the mantle of algorithms. As a scientific community, the solution lies in slowing down and weighing the pros and cons of experimentation before proceeding further.
Setting guidelines would be a good first step in that direction. Users must remain vigilant when interpreting and consuming digital media, taking a mindful view of the veracity of the information: its source, facts, accuracy, and so on. Identifying community leaders who can build awareness and educate societies about the use and misuse of these newer technologies could complement their adoption into the mainstream.
Finally, and most importantly, validate online information against print media or additional sources, which remain a foundational bedrock of knowledge and wisdom. This is because it takes time for digital media and algorithms to penetrate print media; proliferating there demands far more effort and time. Print is perhaps the last frontier of truth, and it must be defended at any cost.
In conclusion, tackling the ethical challenges of AI is no simple matter, and expecting to solve every ethical issue at once is unrealistic. Instead, we should recognise that dealing with ethics is part of what humans do, and that technology can add complexity to traditional, well-known ethical questions. We should also recognise that AI ethics often cannot be separated from the ethics of technology in general, even as it has particularities of its own that must be considered before it is too late.