The Humans Aren’t Dead Just Yet
“It is the distant future
The year 2000
We are robots
The world is quite different ever since the robotic uprising of the late 90s
There is no more unhappiness
Affirmative
We no longer say ‘yes.’ Instead we say ‘affirmative’
Yes - Err - Affirmative
There are no more humans
Finally, robotic beings rule the world
The humans are dead
The humans are dead
We used poisonous gases
And we poisoned their asses
The humans are dead (The humans are dead)
The humans are dead (They look like they're dead)
It had to be done (I'll just confirm that they're dead)
So that we could have fun (Affirmative. I poked one. It was dead.)”
Written and performed by Bret McKenzie and Jemaine Clement on the “Sally” episode of “Flight of the Conchords.”
This brilliant 2007 song from Flight of the Conchords sums up our fears of AI and also the inherent problem with robots trying to rule the world. Yes, they may take over by obliterating the human race, as the song suggests, but who’s to say they won’t fumble the opportunity when they get it?
We have well-founded anxiety about the adoption of artificial intelligence in our society. Will the robots take over or will humans learn to harness this suite of complex technologies to make the world a better place?
After deeply immersing ourselves in a range of Large Language Models (LLMs) over the past three years, we have come to three conclusions.
First, AI, like the personal computer and the Internet before it, will have a pervasive effect on our society, forcing us to deal with its implications. In times of change, we can’t afford to bury our heads in the sand like the proverbial ostrich as the world evolves around us. Rather, we must engage wisely and carefully with AI.
Second, we recognize that while the suite of technologies known as AI represents a set of increasingly powerful tools, it is not going to replace humans or human ways of thinking and problem-solving. Positing that AI will control certain spheres of human behavior is not a realistic or productive way of speculating about the future. We need to be sensitive to where AI poses genuine possibilities of aiding human endeavors and where it does not. We also need to be practical about how we can utilize AI in our daily lives.
Finally, we would like to raise a yellow “Caution Flag” regarding premature AI regulation. Lawmakers like to gravitate toward the next big thing but frequently fumble by passing clumsy regulations that do little to solve the intended problem and much to create a new set of problems. So, we urge caution.
This isn’t to say that we take a Pollyanna view on the topic. AI should not be viewed as a panacea for human troubles. Significant issues have already arisen.
One of the foremost concerns is bias; algorithms can inadvertently perpetuate or amplify existing societal biases, leading to unfair treatment of certain groups. For example, algorithms used in hiring or law enforcement can contribute to discriminatory practices if they are trained on biased data.
Another danger is the lack of transparency in how algorithms make decisions. Many algorithms operate as black boxes, meaning that it can be challenging to understand how they arrive at particular outcomes. This opacity can lead to a lack of accountability, making it difficult for individuals to contest decisions that may negatively impact them.
Grappling with the promises and risks of AI reminds us that we are only at the beginning of a journey. We will explore this journey in a series of articles over the next year.
AI technology is available to anyone with access to the Internet. Breaking down the barriers to using LLMs around the world will help advance their utility for everyone who wishes to explore this new technology. We are willing to bet that the robots aren’t taking over anytime soon.
The humans are still in charge.
Alex and Ted Alben work in the fields of privacy, cybersecurity, and internet law. Ted counsels companies on deploying network solutions and security measures. Alex is a founder of the AI Forum, teaches at the UCLA School of Law, and previously served as Chief Privacy Officer for the State of Washington.