Artificial Intelligence

Artificial Intelligence (AI) has always been a staple of science fiction writing. Isaac Asimov built much of his fiction around it, introducing the robot R. Daneel Olivaw as an investigating partner to NYC detective Elijah Bailey in the 1954 novel “The Caves of Steel” – a story governed by the famous Three Laws of Robotics that Asimov had introduced in his earlier robot short stories. Across the full breadth of the “Robot Series” of short stories and novels, R. Daneel Olivaw grows, develops and, in a way, becomes a better human than human. The hope and promise of AI.

AI is a broad category of research and application that includes neural networks, machine learning, and more. The field of AI research was born at a workshop at Dartmouth College in 1956, where John McCarthy coined the term “Artificial Intelligence” to distinguish the field from cybernetics. The basic trajectory of the research continues the classic question Alan Turing posed: can a machine exhibit human behavior – the most basic of which is learning? Learning implies a teacher who provides an environment and information. In other words, show a neural network enough pictures of cats, tell it “this is a cat,” and the AI learns to identify new pictures as “cat” without being told. It just needs enough information and adequate processing power.
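To make that teacher/student loop concrete, here is a minimal Python sketch using scikit-learn. The “pictures” are synthetic feature vectors rather than real images, and the whole thing is illustrative, not any particular production system:

```python
# A minimal sketch of supervised learning: show the model labeled
# examples, and it learns to classify new, unseen ones.
# The "pictures" here are synthetic 64-dimensional feature vectors.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

# Cats cluster in one region of feature space, non-cats in another.
cats = rng.normal(loc=1.0, scale=0.5, size=(200, 64))
not_cats = rng.normal(loc=-1.0, scale=0.5, size=(200, 64))

X = np.vstack([cats, not_cats])
y = ["cat"] * 200 + ["not cat"] * 200  # the teacher's labels

# A small neural network learns the association from examples alone.
model = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0)
model.fit(X, y)

# A brand-new "picture" drawn from the cat region of feature space:
new_picture = rng.normal(loc=1.0, scale=0.5, size=(1, 64))
print(model.predict(new_picture))  # -> ['cat']
```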

Movies such as 1983’s “WarGames” and the Terminator series carry the same basic plot device – be careful what you wish for. The AI eventually gets unlimited processing power and unfettered access to all information – and then reaches the conclusion that the planet would be better off without humans. This is the sci-fi version of TMI, coupled with inadequate instruction. Had the inventors of “Skynet” (the villainous AI in Terminator) built it with the Three Laws of Robotics, it would have been better for humanity.

So where is all this going? AI in the novels/movies and in real life is only as good as the human “instructor.” One of the major issues in the AI research field is who is doing the instructing and what information is being supplied to the neural networks. Take a wrong turn and incredible amounts of misinformation can be processed and learned. There are “bots” on the internet whose only purpose is to spread disinformation, and this network of sites and people becomes a strange amalgamation of artificial intelligence in which facts, even accurate ones, are processed differently and wrong conclusions are reached. That’s the danger in autonomous weapons. They are great until they reach a horribly wrong conclusion and innocent lives are lost, because the AI component of the weapon cannot process the indistinct human intuition of “what’s wrong here.” The weapon does what it has been taught.
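The “bad teacher” problem is easy to demonstrate at toy scale. In the illustrative Python sketch below (synthetic data again, nothing drawn from any real system), the same model is trained once on accurate labels and once on partially falsified ones, and its conclusions degrade accordingly:

```python
# Illustrative sketch: the same model, taught by a truthful teacher
# and by a disinformation-prone one, reaches different conclusions.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Truthful teacher: accurate labels.
clean = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Bad teacher: flip 40% of the training labels, simulating disinformation.
rng = np.random.default_rng(0)
noisy_labels = y_train.copy()
flip = rng.random(len(noisy_labels)) < 0.40
noisy_labels[flip] = 1 - noisy_labels[flip]
noisy = LogisticRegression(max_iter=1000).fit(X_train, noisy_labels)

print("taught with facts:         ", clean.score(X_test, y_test))
print("taught with disinformation:", noisy.score(X_test, y_test))
```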

But most of us (hopefully) won’t encounter autonomous weapon systems. We will, however, encounter Google and other internet tech companies whose AI/neural networks are busy at work as we supply them massive amounts of information. Still, the question that remains in all of this milieu is “who are the teachers?” That has been part of an ongoing debate within the industry, research universities and labs. We think of AI as being impartial, mathematical and a utopia free of human flaws. It isn’t. Some neural networks learn from massive amounts of information on the internet — and that information was created by people. That means we are building computer systems that exhibit human bias.
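One well-documented place to see this is in word embeddings learned from human-written text. The sketch below assumes the gensim library and its downloadable “glove-wiki-gigaword-50” vectors (trained on Wikipedia and web text); the analogy probe is the classic one from published bias research:

```python
# A hedged sketch: probe pretrained word vectors for learned associations.
# Assumes gensim is installed; api.load downloads the GloVe vectors.
import gensim.downloader as api

vectors = api.load("glove-wiki-gigaword-50")

# The classic analogy probe: man is to doctor as woman is to ... ?
result = vectors.most_similar(positive=["doctor", "woman"],
                              negative=["man"], topn=3)
print(result)
```

In published studies of embeddings like these, “nurse” has tended to rank near the top of that list – an association absorbed from the training text, not programmed in by anyone.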

About six years ago, Google introduced an online photo service that could analyze snapshots and automatically sort them into digital folders based on what was pictured. A friend of an AI developer sent him a link to some pictures just posted online – pictures taken during an outing to a local park. The AI within the photo service had organized photos of Black people into a folder called “gorillas.” Not long after, a Black researcher in Boston discovered that an AI system couldn’t identify her face — until she put on a white mask. Given the role AI is projected to have in policing and all manner of life online, it makes you wonder what other human biases have been “fed” to the machine. It seems to be learning things we would ourselves like to unlearn. If you’d like to read more about this phenomenon, Cade Metz has published an interesting and disturbing look into the problem.

1 thought on “Artificial Intelligence”

  1. Gathering, organizing and distributing data is something that AI does very well. Google has access to massive amounts of free data worldwide. Other companies that consult for the government have access to other massive troves of data, even if some of it is off-limits. But I think the key, as with the reference to the bots, is that AI without an interpreter is flawed. It will always take a human who can figure out what the data means and which data is useless.
