New Delhi: The rapid advance of artificial intelligence has polarized many, but innovators, inventors, and investors agree that its impact will be felt across all facets of life. More importantly, they agree it is critical to define both the benefits and the risks of AI.
In an essay on the philosophy of AI in its present form, Vinod Khosla, co-founder of Sun Microsystems and founder of Khosla Ventures, argues that global AI innovation and investment will create a clear divide between dystopian and utopian approaches. The philosophy that innovators pursue will, in turn, define how investments in AI are made in the coming decades.
Khosla, an iconic Silicon Valley entrepreneur, was one of the early investors in Sam Altman’s OpenAI, putting $50 million into the venture three years before the public launch of ChatGPT. Khosla Ventures also participated in OpenAI’s $6.6 billion funding round earlier this year. Khosla has also invested in India’s generative AI startup, Sarvam.
Industry experts say that the stance humankind takes in this argument will be crucial to how it innovates on the technology, the guardrails it builds into it, and the eventual form AI takes in the long run.
Elaborating upon the utopian stance, Khosla said, “Imagine a post-scarcity economy where technology eliminates material limitations… and scarcity becomes obsolete—most jobs (will go) away. Yet, we’d still have enough abundance to pay citizens via some redistribution effort so they can cover a minimum standard of living materially higher than today’s minimum.”
On the other end of the spectrum, however, Khosla argued that the risks associated with AI innovation “are real, but manageable”.
‘Bad sentient AI’
“In the present debate, the doomers are focusing on the small ‘bad sentient AI’ risk, and not the most obvious one—losing the AI race to nefarious nation states. This makes AI dangerous for the West. Ironically, those who fear AI and its capacity to erode democracy and manipulate societies should be most fearful of this risk. It is why we can’t lose to China, and why we must step up and use AI for the benefit of all humanity,” he said.
Industry experts and consultants underline that this debate is a necessary one—an argument that should shape the direction of our innovation.
Jayanth Kolla, co-founder of technology consultancy Convergence Catalyst, said, “Much like the early innovation of fire, the lack of debates on the safety of fire could have led to a haywire development of the technology. The same reflects upon nuclear energy also—and defines whether we can use nuclear energy for clean resources, or for warfare. This is why it is important to define policies that state what is permissible innovation, and what could impact us adversely.”
Jaspreet Bindra, founder of tech consultancy AI&Beyond, concurred, saying that Khosla’s argument is foundational to AI’s journey towards superintelligence, or, as it is more commonly called, artificial general intelligence (AGI).
“The entire idea of what could make AI dystopian will be a foundational substance towards designing our own traffic lights that regulate the flow of AI into the future. This will help us then design what should the eventual idea of AGI or superintelligence be—it is not necessarily to replace humans in the evolutionary journey, but rather, to complement our role in jobs,” Bindra said.
Kolla further underlined that the key to understanding the future of AI innovation lies in the journey that past human revolutions have taken. “We went from the industrial revolution to the information revolution. Today, our jobs involve leveraging information and knowledge to determine work as we know it. In future, once the evolution of AI goes further, we will seek to leverage the emotional intelligence of humans—where our cognizance serves more critical roles in society. Machines, in turn, will get far superior decision-making powers in comparison to what is defined by instructions today.”
AI to create jobs
This is the argument Khosla makes in his essay, titled ‘AI: Dystopia or utopia?’. Underlining the evolution of jobs, Khosla wrote that in the next five years to two decades, “it is possible that AI will create new jobs we cannot currently conceive of. But over the long haul, AI will eliminate most ‘jobs’ insofar as a job is defined as a trade or profession one must pursue to support their needs and lifestyle”.
Industry chiefs have also underlined the journey towards AGI over the past week. OpenAI chief executive Sam Altman said that superintelligence is only “a few thousand days” away, while Anthropic’s co-founder expects a form of AGI to arrive by 2026. Neither, however, envisions an AI future that does not involve humans.
Elucidating this further, Khosla said, “We will need to redefine what it means to be human. This new definition should focus not on the need for work or productivity but on passions, imagination, and relationships, allowing for individual interpretations of humanity.”