March 28, 2020
Computer Ethics 1985
I was going through a shelf of books I haven’t visited in a couple of decades and found a book I used in 1986 when I taught Introduction to Computer Science in my last year as a philosophy professor. (It’s a long story.) Ethical Issues in the Use of Computers was a handy anthology, edited by Deborah G. Johnson and John W. Snapper (Wadsworth, 1985).
So what were the ethical issues posed by digital tech back then?
The first obvious point is that back then ethics were ethics: codes of conduct promulgated by professional societies. So, Part 1 consists of eight essays on “Codes of Conduct for the Computer Professions.” All but two of the articles present the codes for various computing associations. The two stray sheep are “The Quest for a Code of Professional Ethics: An Intellectual and Moral Confusion” (John Ladd) and “What Should Professional Societies Do About Ethics?” (Fay H. Sawyier).
Part 2 covers “Issues of Responsibility,” with most of the articles concerned with liability. The last article, by James Moor, ventures wider, asking “Are There Decisions Computers Should Not Make?” About midway through, he writes:
“Therefore, the issue is not whether there are some limitations to computer decision-making but how well computer decision making compares with human decision making.” (p. 123)
While saluting artificial intelligence researchers for their enthusiasm, Moor says “…at this time the results of their labors do not establish that computers will one day match or exceed human levels of ability for most kinds of intellectual activities.” Was Moor right? It depends. First, define basically everything.
Moor concedes that Hubert Dreyfus’ argument (in What Computers Can’t Do) that understanding requires a contextual whole has some power, but points to the effectiveness of expert systems as a counterweight. Overall, he leaves open the question of whether computers will ever match or exceed human cognitive abilities.
After talking about how to judge computer decisions, and forcefully raising Joseph Weizenbaum’s objection that computers are alien to human life and thus should not be allowed to make decisions about that life, Moor lays out some guidelines, concluding that we need to be pragmatic about when and how we will let computers make decisions:
“First, what is the nature of the computer’s competency and how has it been demonstrated? Secondly, given our basic goals and values, why is it better to use a computer decision maker in a particular situation than a human decision maker?”
We are still asking these questions.
Part 3 is on “Privacy and Security.” Four of the seven articles can be considered general introductions to the concept of privacy. Apparently privacy was not as commonly discussed back then.
Part 4, “Computers and Power,” suddenly becomes more socially aware. It includes an excerpt from Weizenbaum’s Computer Power and Human Reason, as well as articles on “Computers and Social Power” and “Peering into the Poverty Gap.”
Part 5 is about the burning issue of the day: “Software as Property.” One entry is the Third Circuit Court of Appeals decision in Apple v. Franklin Computer. Franklin’s Ace computer contained operating system code that had been copied from Apple. The Court knew this because, in addition to the programs being line-by-line copies, Franklin had failed to remove the name of an Apple engineer who had embedded it in the code. Franklin acknowledged the copying but argued that operating system code could not be copyrighted.
That seems so long ago, doesn’t it?
Because this post mentions Joseph Weizenbaum, here’s the beginning of a blog post from 2010:
I just came across a 1985 printout of notes I took when I interviewed Prof. Joseph Weizenbaum in his MIT office for an article that I think never got published. (At least Google and I have no memory of it.) I’ve scanned it in; it’s a horrible dot-matrix printout of an unproofed semi-transcript, with some chicken scratches of my own added. I probably tape recorded the thing and then typed it up, for my own use, on my KayPro.
In it, he talks about AI and ethics in terms much more like those we hear today. He was concerned about its use by the military, especially for autonomous weapons, and raised issues about the possible misuse of visual recognition systems. Weizenbaum was both of his time and way ahead of it.
dw