How to Punish a Professor

Make her teach! The Valley News has the story of Dartmouth Professor Mara Sabinson who “alleges that after she refused to quit, college administrators punished her by assigning her to teach freshman writing seminars.” When I toured Dartmouth, their big selling point was that the professors are really engaged in teaching, not “I’ll teach the postdocs and have grad students teach the undergrads,” so this is exactly the wrong kind of press for the Marketing Department.

The attitude that teaching freshmen is punishment is hardly hidden by these professors, and the students aren’t so dumb that they can’t see what’s going on. Those who would take a professorship and not love teaching should go out and get a job in industry. Those who would eschew the industry route in favor of the safety of the tenure track while still despising teaching have no place in higher education, and are a threat to the competitiveness of the U.S.

Dartmouth AI Seminar

This weekend I attended a seminar at Dartmouth College commemorating the fiftieth anniversary of the Dartmouth Summer Research Project on Artificial Intelligence, the 1956 meeting where the term was coined. It was a great seminar and it really made me think.

The first speaker was George Cybenko, Professor of Engineering at Thayer. His work is in current applications of Artificial Intelligence, and he had lots of great slides and demos of the state of AI, the autonomous military vehicle project, and trends in computing density. According to current rates and Moore’s Law, within 30 years a microchip will have the connection complexity of the human brain. Today’s Itanium 2 chip is as complex as a honeybee’s brain.
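As a sanity check on that projection (my numbers here, not the speaker’s): Moore’s Law at one doubling every 18 months gives about 20 doublings in 30 years, roughly a million-fold increase, which takes an Itanium-class chip into the neighborhood of the brain’s estimated 10^14 synapses. A quick back-of-envelope in Python:

```python
# Back-of-envelope check of the "30 years to brain complexity" claim.
# All figures are rough, commonly cited estimates, not from the talk.
transistors_today = 4e8      # Itanium 2 era chip: ~400 million transistors
doubling_years = 1.5         # Moore's Law: one doubling every ~18 months
years = 30

doublings = years / doubling_years            # 20 doublings
projected = transistors_today * 2**doublings  # ~4e14 devices per chip

brain_synapses = 1e14        # human brain: order-of-magnitude synapse count
print(f"projected devices per chip: {projected:.1e}")   # ~4.2e+14
print(f"human synapses (approx):    {brain_synapses:.1e}")
```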

There has been some theorizing in the literature about how animal brains may operate at the quantum rather than the digital level, with microtubules within cells storing quantum superpositions of information. I asked Professor Cybenko about the current state of this research, and it’s still an open question. If it’s true, simply making an Itanium 10 in 2030 won’t approach the complexity of the human brain - we’ll need quantum computing to do that. He gave the great example of an art expert being able to look at a vase and say, instantly, “that’s clearly a fake, but I’m not sure why,” a judgment that then takes weeks of experts’ time to prove empirically. To me, that bolsters the argument for our brains storing quantum superpositions.

The most surprising speaker for me was James Moor, Chair of the Philosophy Department. I naively had low expectations for the talk, what with the trouble getting PowerPoint up and all, but I shouldn’t have judged the book by its cover. His talk covered many aspects of how we should deal with an AI, but the most striking part was AI Ethics. Right now we’re at the point of thinking about how we should teach ethics to AI’s, if they’re ever to become autonomous. So, the next thirty years may be spent on just that kind of mundane work: encoding ethics into a machine-parseable framework. Where things get interesting is what happens after we achieve it.

I asked the question during the Q&A session, “Is Ethics Static?” to which Professor Moor gave a good response: ethics by its nature has to evolve. So I asked the followup question, “Once AI’s learn ethics, and in thirty years they may be as smart as we are, isn’t it natural that they’d decide to continue to evolve the ethics we taught them, perhaps not applying the same values we would?” This led to a discussion of the three models of the AI future - AI’s as slaves, AI’s conquering humans, and humans merging with AI’s. “What if learning Polish were as simple as plugging in a new ‘chip’? What would that mean for Dartmouth 300 years from now?” the professor asked.

Discussion continued after the talk and I spoke with Professor Moor for a few minutes about the cyborg outcome. I haven’t given it much credence in the past, but after this talk I’m changing my mind (rimshot). There are three certainties, given a flexible enough timeframe:

  1. we will invent technologies more complex than our brains
  2. we will teach these AI’s ethics
  3. these AI’s will evolve themselves to be smarter and more capable than we are

Insofar as we cannot know the mind of God, we don’t know what these AI’s will think of us or how their ethics will evolve. We do know the human race is resource-intensive and damaging to the environment to a degree that an artificial being would not be, so they may view us as pests, or if we’re lucky they may simply view us as pets or respected ancestors.

Either way, Artificial Intelligence isn’t the right term anymore; that’s just one aspect of the project. We’re about to (on most timescales) create a new form of life here on Earth, and if we intend to compete for any of the same niches as this Engineered Life, well, evolution does not tend to be favorable to the less capable. So, we may be the species that gets to decide whether it wants to evolve or not. In this case, the evolution would be to compete with the Engineered Lifeforms, and that makes the Cyborg outcome the most likely for the humans who wish to compete. The degree to which this will be necessary to maintain the species is decidedly unclear, and perhaps disquieting for those of us who will be around then and whose children will be smack in the middle of it.

One thing’s for certain - it won’t happen suddenly. I gave the example of treating a patient with Alzheimer’s Disease - what if we had a nano-treatment that could prop up the patient’s failed neurons with engineered replacements, restoring quality of life to those suffering from the disease? Hardly anybody would argue that was unethical or question the essence of the person. Now, suppose the person’s brain starts to fail in other ways, simply from ‘old age’? Would it be wrong to continue the treatment to prevent the person from dying? It would be hard to argue for letting the person simply die on principle. So, over the next dozen years, perhaps all of the natural brain dies and is replaced with engineered structures. But the outward appearance of the person is unchanged, because the process was gradual and the neurons are an exact replacement for the person’s original brain. “Mommy, is Grandma a Computer?”

Some other interesting notes from the Conference:

EXAMPLES OF LINGUISTIC AMBIGUITY (why it’s hard to teach an AI)

Fred saw the mountains flying over Zurich.

The police arrested the demonstrators because they feared violence.

The police arrested the demonstrators because they advocated violence.

These are hard for humans to parse correctly, let alone an AI.
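One way to see the attachment ambiguity concretely (my own illustration, not something presented at the conference): a toy context-free grammar yields two distinct parse trees for the mountains sentence, one for each reading of who is doing the flying. A sketch using NLTK:

```python
import nltk

# A toy grammar that exposes the structural ambiguity: the participle
# phrase "flying over Zurich" can attach to the verb phrase (Fred is
# flying) or to the noun phrase (the mountains are flying).
grammar = nltk.CFG.fromstring("""
  S     -> NP VP
  NP    -> 'Fred' | Det N | NP PartP
  VP    -> V NP | VP PartP
  PartP -> Part PP
  PP    -> P 'Zurich'
  Det   -> 'the'
  N     -> 'mountains'
  V     -> 'saw'
  Part  -> 'flying'
  P     -> 'over'
""")

parser = nltk.ChartParser(grammar)
sentence = 'Fred saw the mountains flying over Zurich'.split()
for tree in parser.parse(sentence):
    print(tree)   # prints two trees, one per attachment of the participle
```

A human resolves this instantly from world knowledge (mountains don’t fly); an AI with only the grammar has no way to choose.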

NEURAL NETWORKS

A software program used neural networks and was given only a board layout and the dice rules, but no game rules; after 1.5 million games simulated in the neural network, the resulting AI was able to beat a champion-level human player at Backgammon. Again, he wasn’t taught the rules. Oh, look, I’ve just gone and anthropomorphized an AI.
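For the curious, here’s roughly the shape of that kind of self-play learning. The program learns a value function by playing against itself and nudging each position’s predicted win probability toward the next position’s, a technique called temporal-difference learning. Below is a minimal sketch in Python; everything beyond the general idea is my own simplification - tic-tac-toe stands in for backgammon, and a lookup table stands in for the neural network.

```python
import random
from collections import defaultdict

# V[board] estimates the probability that X eventually wins from "board".
V = defaultdict(lambda: 0.5)
ALPHA, EPSILON = 0.1, 0.1
LINES = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]

def winner(b):
    for i, j, k in LINES:
        if b[i] != ' ' and b[i] == b[j] == b[k]:
            return b[i]
    return 'draw' if ' ' not in b else None

def moves(b, p):
    return [b[:i] + p + b[i+1:] for i in range(9) if b[i] == ' ']

def value(b):
    w = winner(b)                      # terminal positions have fixed values
    if w == 'X': return 1.0
    if w == 'O': return 0.0
    if w == 'draw': return 0.5
    return V[b]

def self_play_game():
    b, p = ' ' * 9, 'X'
    while winner(b) is None:
        if random.random() < EPSILON:  # occasionally explore at random
            nb = random.choice(moves(b, p))
        else:                          # X maximizes P(X wins), O minimizes it
            pick = max if p == 'X' else min
            nb = pick(moves(b, p), key=value)
        V[b] += ALPHA * (value(nb) - V[b])   # TD(0): nudge toward successor
        b, p = nb, ('O' if p == 'X' else 'X')

for _ in range(20_000):                # nobody teaches it strategy
    self_play_game()

print(max(moves(' ' * 9, 'X'), key=value))   # its learned opening move
```

After enough games the greedy policy derived from V plays competently, even though the only knowledge coded in was which moves are legal and who has won.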

Secret Laws

When I last wrote about the Outlawing of Fair Use of Video, I noted that HR 4569 aims to require analog transmissions to maintain DRM “somehow”.

It turns out the “somehow” has a name, but we don’t know the how. The name is VEIL, and its workings are secret. For $10K and an NDA you can get a look at the decoder specs, but nobody can learn how the encoder works. Passage of this bill would create a secret law, where the citizenry must comply with a technology but may not know how it works or be able to debate its merits in the legislative process.

Again, write your Representative.

Heim Theory

New Scientist has an accessible article on Heim Theory – an 8-dimensional theory developed to unify quantum physics and relativity. The theory is not well documented or published, but it does manage to derive the masses of elementary particles with astonishing accuracy and provides an alternative mechanism for dark energy/matter. Should it prove true, it may be possible to build a space propulsion system using high-energy magnetism that exceeds our current hopes, a prospect which has attracted interest from the military and national labs.

I still maintain that constants are limitations in our knowledge, and that a proper theory will allow us to derive what we now consider magic numbers.

Too Much Information

The ACLU has a cute Flash demo showing a possible future in which there are no consumer data privacy regulations. Combined with very tight integration between query software, databases, high-speed internet, national ID numbers, and corporate partnerships, this particular future looks bleak. It encourages one to think about the root causes of such a future and what might be the right thing to do in the present.

Outlawing Fair Use of Video

Joe Born, CEO of Neuros, has posted his letter to Reps. Sensenbrenner and Conyers over their new bill, HR 4569, the Digital Transition Content Security Act. Neuros makes recording devices that let people time- and place-shift their media viewing.

This bill aims to outlaw any electronic devices which convert analog video signals to any digital form without “somehow” preserving DRM.

It thereby also outlaws Fair Use, since the DMCA made the “analog hole” the sole remaining avenue for fair use of digital media. To my readers from Wisconsin or Minnesota: remember how these guys think come Election Day. Also ask yourself whether they’re doing this for their citizens’ benefit or whether they’re bought and paid for.

Please write your Congressman to express your thoughts on this issue. You don’t even need a stamp.

New Soy Plywood Glue Based on Mussel Adhesive

Here’s an article at Wood & Wood Products Magazine describing a new type of glue made from soy flour that’s replacing the urea-formaldehyde glues typically used in plywoods.

The urea-formaldehyde glues are especially troublesome, as they outgas formaldehyde into the living space. The new glue was developed by Kaichang Li at Oregon State University, who studied the adhesive mussels use to attach themselves to rocks. Li figured out how the adhesive works and set about developing a process that modifies soy proteins to have similar properties. The method is simple enough that he can do it in his kitchen mixer.

According to this month’s Journal of Light Construction (not yet online) the new plywood can be boiled for 20 hours, dried, and boiled again without delaminating. The standard test for outdoor plywoods is a 4-hour boil and today’s plywoods cannot survive the 20-hour test.

Columbia Forest Products has already switched its hardwood veneer plywoods over to the new adhesive and is investigating switching over its other plywoods. As urea-formaldehyde is derived from fossil fuels, its price is expected to rise with the price of oil.

New Steamed Dumpling Record

Takeru Kobayashi is in for some pain. He recently swallowed 83 steamed dumplings in 8 minutes (story).

I’ve done a quarter of that on a good day at Empire Garden in Boston (the best dim sum in town). It’s not so bad until you drink something, then it expands. Some things are worth the pain, and dim sum is one of them. Here’s raising a cup of chrysanthemum tea to Kobayashi-san.

SCOTUS in Plainfield

Wow, it turns out one of the Supreme Court Justices, Stephen Breyer, has a home in my town, Plainfield, NH. And some local Libertarians want to turn it into a “park” consisting of two stone tablets, in protest of the awful SCOTUS decision, Kelo v. City of New London, which Breyer was on the wrong side of.

I’m not sure I get the point here. The hotel proposed in Weare (on Justice Souter’s property) is a for-profit private enterprise that would enhance the town’s tax base. Constitution Park offers no such benefit to the people of Plainfield. In fact, a park is one of the uses that could be allowed under the old interpretation of eminent domain.

167 acres is plenty to build a nice resort on. People would come, Plainfield would benefit, and a resort is both more interesting than two stone tablets and directly related to the intent of the SCOTUS decision. ‘The Constitution Hotel and Resort’ is interesting; ‘Constitution Park’ is pointless. I won’t be voting for a new park come Town Meeting.