Sunday, January 23, 2011

Fracking Toasters!

Recently, IBM introduced a machine named Watson that was developed, essentially, to trounce human contestants on Jeopardy.  If you have not heard about this, and want to learn more, go here:

The claim made by Watson's developers is that he is capable of understanding the complexities of human language in the same way that a human brain can--only faster.  He has been developed for Jeopardy because part of the challenge of the game is the ability to parse speech appropriately so you can make sense of things like puns, metaphors, and other linguistic devices that we use when we want to say something without coming right out and saying it.  The challenge for a machine is that understanding such devices requires that you know not only the rules of language but also all the exceptions and ever-changing cultural usages of certain terms and phrases.

Watson is not the first attempt at language-learning A.I., nor will he be the last.  But what is interesting is that his developers have written algorithms to approximate how the human brain processes language and uses a schema for decision making to decide how to reply to a statement.  Watson is full of information; when a statement is made on Jeopardy, he searches that store of knowledge, pulling out possible bits that could be applicable.  His algorithms then do further analysis of the chosen data to decide, based on several different variables, what is the best choice for the immediate context.  He then must use a final algorithm to transform that piece of data into an appropriately phrased question.
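To make that pipeline concrete, here is a toy sketch in Python--entirely my own invention, not IBM's actual architecture--of the retrieve, score, and phrase steps described above. The knowledge store, the word-overlap scoring, and every function name are assumptions for illustration only; the real system is vastly more sophisticated.

```python
# Toy sketch of a retrieve -> score -> phrase pipeline.
# NOT Watson's real design; a hypothetical illustration only.

# A tiny stand-in for Watson's store of knowledge.
KNOWLEDGE = {
    "Jupiter": "largest planet in the solar system",
    "Mercury": "smallest planet in the solar system",
    "Pluto": "dwarf planet demoted in 2006",
}

def retrieve_candidates(clue):
    """Pull out every stored fact that shares at least one word with the clue."""
    clue_words = set(clue.lower().split())
    return [
        (entity, fact)
        for entity, fact in KNOWLEDGE.items()
        if clue_words & set(fact.lower().split())
    ]

def score(clue, fact):
    """Crude confidence: fraction of the clue's words found in the fact."""
    clue_words = set(clue.lower().split())
    return len(clue_words & set(fact.lower().split())) / len(clue_words)

def answer(clue):
    """Pick the best-scoring candidate and phrase it as a Jeopardy question."""
    candidates = retrieve_candidates(clue)
    if not candidates:
        return "What is ... ?"
    best_entity, _ = max(candidates, key=lambda pair: score(clue, pair[1]))
    return f"What is {best_entity}?"

print(answer("this is the largest planet in the solar system"))
# -> What is Jupiter?
```

The gulf between this word-overlap trick and real language understanding is, of course, exactly the point of the paragraphs that follow.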

All of this raises some very interesting questions about the purpose of developing such language-proficient A.I. machines, as well as some more philosophical questions about the nature of intelligence itself, and the role of language in making us "human."  Personally, I have many questions about how the algorithms have been written to mirror human language use and development--that is sort of my thing.  Language in the brain is something that researchers still do not completely understand; even users of language are not clear on many of the rules and exceptions that they operate under every day.  Furthermore, language is more than responding to a consistent set of parameters, such as the context of Jeopardy.  While Watson has been developed to respond to the stimuli represented by the categories and statements, how is he going to react after the first commercial break when Alex Trebek stops to have a chat with him, to get to know the contestants, and asks Watson a question about some trivial, mildly embarrassing thing--as he does to everyone else?  Will Watson be able to parse that spontaneous language, for which he will have no way of knowing the context beforehand?

On the one hand, this kind of project rests on some potentially harmful assumptions about how the brain works, and the role of language in everyday life.  Language is a living thing; it changes over time and between specific users.  It is highly context bound.  As a small example, I will mention my friend Tyler.  He is from the South; therefore, he can call me "hon" and "darling" and it doesn't seem inappropriate.  That language fits in that context.  If, on the other hand, pretty much anyone else I know called me either of those things, it would be jarring; I would notice immediately.  Depending on the context, I would have to interpret it as a joke, or an insult, or the crossing of some sort of intimacy boundary.  How can that sort of sensitivity be programmed into a machine?

Clearly, this idea of language and intelligence is something that I could go on about all day, but I have a second point I want to make in this post.  An ethical one.  It seems to me that creating machines that are more and more like humans is not something we should be working towards.  Last night, I watched Terminator Salvation, and of course I have seen every other sci-fi interpretation of the post-apocalyptic world we will inhabit after the machines take over, including my personal favorite, BSG.  I have learned two very important things from these shows:  the first is that we cannot be tampering with this sort of technology.  The more we make the machines like us, the more they will be like us, and we want to be in control.  A brief glimpse through human history shows that our main objective is being in charge.  So, we should not be surprised when Watson and his buddies decide they would like to take the reins of the world for a little while--and never give them back.

The other thing that I have learned, however, is that some machines can, in fact, be trusted. As long as they are hot enough.  Such as this one:

Or this one:

I guess the version you get depends on how the future of the war with the machines plays out, but I can tell you, I am sort of hoping to meet this guy in 2018.  But only if he is programmed with his original Australian accent.

As hot as he is, I am still not comfortable with the idea of A.I.  Yes, I am fascinated by Watson.  Yes, I would love to have a robot that looks like Sam Worthington.  But I still have a lot of questions about the whole thing.  So, this time, I am going to ask that you, my readers, chime in on all this through your comments.  What do you all think: should we fear the machines?
