Dartmouth AI Seminar
This weekend I attended a seminar at Dartmouth College commemorating the fiftieth anniversary of the original Dartmouth Summer Research Project on Artificial Intelligence, where the term was coined. It was a great seminar and it really made me think.
The first speaker was George Cybenko, Professor of Engineering at Thayer. His work focuses on current applications of Artificial Intelligence, and he had lots of great slides and demos of the state of AI, the autonomous military vehicle project, and the trend in computing density. According to current rates and Moore’s Law, within 30 years a microchip will have the connection complexity of the human mind. Today’s Itanium 2 chip is as complex as a honeybee’s brain.
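As a sanity check on that 30-year figure, here is a quick back-of-the-envelope calculation; the transistor count, synapse count, and doubling period are rough assumptions of mine, not numbers from the talk.

```python
import math

# Rough assumptions (mine, not the talk's): Itanium 2 transistor count,
# human synapse count, and a Moore's Law doubling period.
itanium2_transistors = 4.1e8    # Itanium 2 ("Madison"), roughly 410 million
human_brain_synapses = 1e14     # commonly cited order of magnitude
doubling_period_years = 2.0     # Moore's Law, give or take

doublings = math.log2(human_brain_synapses / itanium2_transistors)
years = doublings * doubling_period_years
print(f"~{doublings:.0f} doublings, roughly {years:.0f} years")
# prints: ~18 doublings, roughly 36 years, the same ballpark as the 30-year claim
```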
There has been some theory in the literature about how animal brains may operate on the quantum rather than digital level, with microtubules within cells storing quantum superpositions of information. I asked Professor Cybenko about the current state of this research and it’s still an open question. If it’s true, simply making an Itanium 10 in 2030 won’t approach the complexity of the human brain - we’ll need quantum computing to do that. He gave the great example of an art expert being able to look at a vase and say, instantly, “that’s clearly a fake, but I’m not sure why,” which then takes weeks of experts’ time to prove empirically. To me, that bolsters the argument for our brain storing quantum superpositions.
The most surprising speaker for me was James Moor, Chair of the Philosophy Department. I naively had low expectations for the talk, what with the trouble getting PowerPoint up and all, but that turned out to be a misleading book cover. The talk covered many aspects of how we should deal with an AI, but the most striking part was AI Ethics. Right now we’re at the point where we’d like to think about how we should teach ethics to AI’s, if they’re ever to become autonomous. So the next thirty years may be spent on exactly that kind of mundane work: figuring out how to encode ethics into a machine-parseable framework. Where things get interesting is what happens after we achieve it.
I asked during the Q&A session, “Is Ethics Static?”, to which Professor Moor gave a good response: ethics by its nature has to evolve. So I asked the followup question, “Once AI’s learn ethics, and in thirty years they may be as smart as we are, isn’t it natural that they’d decide to continue to evolve the ethics we taught them, perhaps not applying the same values as we would?” This led to a discussion of the three models of the AI future: AI’s as slaves, AI’s conquering humans, and humans merging with AI’s. “What if learning Polish was as simple as plugging in a new ‘chip’? What would that mean for Dartmouth 300 years from now?” the professor asked.
Discussion continued after the talk, and I spoke with Professor Moor for a few minutes about the cyborg outcome. I haven’t given it much credence in the past, but after this talk I’m changing my mind (rimshot). There are three certainties, given a flexible enough timeframe:
- we will invent technologies more complex than our brains
- we will teach these AI’s ethics
- these AI’s will evolve themselves to be smarter and more capable than we are
Just as we cannot know the mind of God, we cannot know what these AI’s will think of us or how their ethics will evolve. We do know the human race is resource-intensive and damaging to the environment to a degree that an artificial being would not be, so they may view us as pests, or, if we’re lucky, simply as pets or respected ancestors.
Either way, Artificial Intelligence isn’t the right term anymore; intelligence is just one aspect of the project. We’re about to (on most timescales) create a new form of life here on Earth, and if we intend to compete for any of the same niches as Engineered Life, well, evolution does not tend to favor the less capable. So we may be the species that gets to decide whether it wants to evolve or not. In this case, evolving would mean competing with the Engineered Lifeforms, and that makes the Cyborg outcome the most likely one for the humans who wish to compete. The degree to which this will be necessary to maintain the species is decidedly unclear, and perhaps disquieting for those of us who will be around then and whose children will be smack in the middle of it.
One thing’s for certain: it won’t happen suddenly. I gave the example of treating a patient with Alzheimer’s Disease. What if we had a nano-treatment that could prop up the patient’s failed neurons with engineered replacements and restore quality of life to those suffering from the disease? Hardly anybody would argue that was unethical or question the essence of the person. Now suppose the person’s brain starts to fail in other ways, simply from ‘old age’. Would it be wrong to continue the treatment to prevent the person from dying? It would be hard to argue for letting the person simply die on principle. So, over the next dozen years, perhaps the entire natural brain dies and is replaced with engineered structures. But the outward appearance of the person is unchanged, because the process was gradual and the new neurons are an exact replacement for the person’s original brain. “Mommy, is Grandma a Computer?”
Some other interesting notes from the Conference:
EXAMPLES OF LINGUISTIC AMBIGUITY (why it’s hard to teach language to an AI)
Fred saw the plane flying over Zurich.
Fred saw the mountains flying over Zurich.
The police arrested the demonstrators because they feared violence.
The police arrested the demonstrators because they advocated violence.
These are hard even for humans to parse correctly, let alone an AI; the toy grammar sketched below makes the competing readings explicit.
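As a concrete illustration, here is a minimal sketch using NLTK with a toy grammar of my own (nothing from the seminar): the “mountains” sentence comes back with two structurally valid parse trees, and nothing in the grammar itself says which reading is the sensible one.

```python
import nltk

# Toy grammar of my own construction: "flying over Zurich" can attach either to
# the verb phrase (Fred is doing the flying) or to the noun phrase (the
# mountains are). Both readings are grammatical; only world knowledge picks one.
grammar = nltk.CFG.fromstring("""
S    -> NP VP
NP   -> 'Fred' | 'Zurich' | Det N | Det N Part
VP   -> V NP | V NP Part
Part -> 'flying' PP
PP   -> P NP
Det  -> 'the'
N    -> 'mountains'
V    -> 'saw'
P    -> 'over'
""")

parser = nltk.ChartParser(grammar)
sentence = "Fred saw the mountains flying over Zurich".split()

# Prints two distinct trees; a human discards the "flying mountains" reading
# instantly, while an AI has to be taught (or learn) why.
for tree in parser.parse(sentence):
    print(tree)
```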
NEURAL NETWORKS
A software program using a neural network was given a board layout and the dice rules, but no game rules; after simulating 1.5 million games against itself, the resulting AI was able to beat a champion-level human player at Backgammon. Again, he wasn’t taught the rules. Oh, look, I’ve just gone and anthropomorphized an AI.
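As a rough illustration of the self-play idea, here is a minimal sketch using a made-up dice race game and a tiny value network of my own, not backgammon and not the actual program described: nobody tells the network how to play well; it just nudges its estimate of each position toward what happened next (a temporal-difference update) over many games against itself.

```python
import numpy as np

rng = np.random.default_rng(0)
GOAL = 20  # made-up race game: first player to reach square 20 wins

# Tiny one-hidden-layer value network: position features -> P(player 0 wins).
W1 = rng.normal(0, 0.1, (8, 3))
W2 = rng.normal(0, 0.1, 8)

def features(p0, p1, to_move):
    return np.array([p0 / GOAL, p1 / GOAL, float(to_move)])

def value(x):
    h = np.tanh(W1 @ x)
    return 1.0 / (1.0 + np.exp(-(W2 @ h))), h

def td_update(x, target, lr=0.1):
    """Nudge value(x) toward target (the TD(0)-style learning signal)."""
    global W1, W2
    v, h = value(x)
    g = (v - target) * v * (1 - v)            # gradient at the output
    W1 -= lr * np.outer(g * W2 * (1 - h**2), x)
    W2 -= lr * g * h

def play_one_game():
    """Self-play: both sides pick moves greedily using the shared value network."""
    p0 = p1 = 0
    to_move = 0
    x_prev = features(p0, p1, to_move)
    while True:
        d = int(rng.integers(1, 7))           # roll the die
        if to_move == 0:                      # choice: advance, or knock the opponent back
            options = [(min(p0 + d, GOAL), p1), (p0, max(p1 - d, 0))]
        else:
            options = [(p0, min(p1 + d, GOAL)), (max(p0 - d, 0), p1)]
        vals = [value(features(a, b, 1 - to_move))[0] for a, b in options]
        p0, p1 = options[int(np.argmax(vals)) if to_move == 0 else int(np.argmin(vals))]
        if p0 >= GOAL or p1 >= GOAL:          # game over: learn from the true outcome
            outcome = 1.0 if p0 >= GOAL else 0.0
            td_update(x_prev, outcome)
            return outcome
        x_next = features(p0, p1, 1 - to_move)
        td_update(x_prev, value(x_next)[0])   # learn from the next position's estimate
        x_prev, to_move = x_next, 1 - to_move

wins = sum(play_one_game() for _ in range(20000))
print(f"player 0 won {wins:.0f} of 20000 self-play games")
```

Whether this toy actually learns a strong policy is beside the point; the shape of the loop, self-play plus a value function that chases outcomes, is what mirrors the backgammon result described in the talk.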