Viewpoint: ‘Terminator’ aside, how do we create ethical AI?

Maybe filmmaker James Cameron saw the future clearly when “The Terminator,” starring Arnold Schwarzenegger, debuted in 1984. The machines, in this case the Cyberdyne Systems Model 101, had the potential to dominate and destroy mankind.

Not only did the machines look like us; they learned quickly, adapted to stressful situations, and were cunning and relentless in pursuit of their goal.

Today, the technology staring us in the face isn’t a menacing cyborg, but perhaps something even more calculating and confusing: artificial intelligence. While AI holds many promises, it also holds the power to transform business — and for that matter, humanity — in terrifying ways.

AI is moving so fast that more than a thousand industry leaders and observers, including OpenAI co-founder Elon Musk and Apple co-founder Steve Wozniak, last month called for a six-month pause on training systems more powerful than GPT-4 so that the implications of the advances could be considered. In an open letter published on the Future of Life Institute’s website, they warned of risks to “all jobs” and of the “loss of control of our civilization.”

And if the tech industry won’t hit the pause button voluntarily, governments should impose a moratorium, the letter said.

“It’s quite dangerous technology. I fear I may have done some things to accelerate it,” Musk reportedly told the Reuters wire service.

The open letter spurred a backlash from other industry leaders, with some saying that a pause would threaten innovation and would give nefarious interests a six-month advantage. Others, like Microsoft co-founder Bill Gates, said a moratorium would be ineffective and, in any event, impossible to enforce.

Given the tech industry’s stalemate and the historic inability of government to keep pace with technology, I suggest a different starting point: Look inward.

The ethical use of AI demands that we get seriously introspective about our values. As business leaders, and as human beings, we must focus on who we are; our core principles, beliefs and values are more important than ever.

As the CEO of a technology-steeped business with more than 150 employees, I have concerns.

First, there is no question that the broad use of AI has the potential to do unrecoverable damage to humanity if not governed properly. Whether you liken it to the industrial revolution or the invention of the wheel, this technology will change the very nature of how (or if) we function as a society, how (or if) we perform our jobs, how (or if) we learn, drive, shop, receive medical care, and relate to each other.

Will it erase entire classes of occupations, like customer-service representatives? Will it also displace knowledge workers, such as lawyers, accountants, scientists, educators and consultants? Will it become a weapon embraced by the military? What impact will it have on our children, whose views are already being shaped by TikTok and the internet?

What impact will it have on political and social discourse? Will it be used to launch complex phishing campaigns to trick people into giving up personal and sensitive information? Will it be used to create and disseminate false and biased “information” campaigns that result in social upheaval?

Unfortunately, it’s too soon to tell. In its infancy, AI appears contained, with its data sets limited. But, for how long? Pause or no pause, at some point Musk and the others will begin moving forward.

In the meantime, business and community leaders must figure out what ethical AI looks like, and develop rubrics — scorecards — to determine what constitutes the ethical use of AI.

As a first step, then, I urge my colleagues and friends to immediately sharpen their awareness of their culture, because if you know your culture, you know yourself.

Does AI give us license to put employees out on the streets because there is a system that can write faster, think smarter and act more quickly than a human? Does it prompt us to cut ties with vendors or business partners because maybe we no longer need them? What does our moral compass say?

I can assure you this moral dilemma is a deeply personal conversation that will take place in families, universities, churches, synagogues, the media, and among people at all levels from working class to elites. But even smart people may not be able to think fast enough to determine whether the next advance in AI is ethical for the businesses and communities where they work and live. AI’s pace of development demands rubrics to determine what we will accept.

As James Cameron told The Hollywood Reporter in 2017, “What was science fiction in the ’80s is now imminent. It’s coming over the horizon at us.”

Those are chilling words.

How we address AI could be a defining moment not only for our businesses, but for us as a people. We must arrive at a conclusion that preserves our values and culture.

If we fail to do that, then we lose control. By the time we figure out what harmful AI looks like, it may be too late: our Cyberdyne Systems Model 101 will have already gone online.

Tony Gruebl, CEO of Think Systems Inc., a national technology and operations advisory practice based in Baltimore, can be reached at tgruebl@thinkconsulting.com.