
January 11, 2021

Parler and the failure of moral frameworks

This probably is not about what you think it is. It doesn’t take a moral stand about Parler or about its being chased off the major platforms and, in effect, off the Internet. Yet the title of this post is accurate: it’s about why moral frameworks don’t help us solve problems like those posed by Parler.

Traditional moral frameworks

The two major philosophical frameworks we use in the West to assess moral situations are consequentialism (mainly utilitarianism) and deontology. Utilitarianism assesses the morality of a choice based on the cumulative amount of happiness it will bring across the entire population (or how much it diminishes unhappiness). Deontology applies moral principles to cases, such as “It’s wrong to steal.”

Each has its advantages, but I don’t see how to apply them in a way that settles the issues about Parler. Or about most other things.

For example, almost from its very beginning (with J.S. Mill, though not Bentham, as far as I remember), utilitarians have had to institute a hierarchy of pleasures in order to meet the objection that, under that framework, we should morally prefer policies that promote drunkenness and sex over policies that fund free Mozart concerts. (Just a tad of class bias showing there :) Worse, in a global space, should we declare a small culture's happiness to be of less worth than that of a culture with a larger population? Indeed, how do we apply utilitarianism to a single culture's access to, for example, pornography?

That last question raises a different, and common, objection to utilitarianism: what if overall happiness is increased by ignoring the rights of others? It's hard for utilitarianism to escape the conclusion that slavery is ok so long as the people held as slaves are greatly outnumbered by those who benefit from them. The other standard example is a contrivance in which a town's overall happiness is greatly increased by allowing a person known by the authorities to be innocent to be hanged anyway. These examples bite because it turns out that most of us have a sense of deontological principles: we don't care if slavery or hanging innocent people results in an overall happier society, because it's wrong on principle.

But deontology has its own issues with being applied. The closest Immanuel Kant — the most prominent deontologist — gets to putting some particular value into his Categorical Imperative is to phrase it in terms of treating people as ends, not means, i.e., valuing autonomy. Kant argues that autonomy is central because without it we can't be moral creatures. But it's not obvious that autonomy is the highest value for humans, especially in difficult moral situations, nor is it clear how and when to limit people's autonomy. (Many of us believe we also can't be fully moral without empathy, but that's a different argument.)

The relatively new — thirty-year-old — ethics of care avoids many of the issues with both of these moral frameworks by losing primary interest in general principles or generalized happiness, and instead thinking about morality in terms of relationships with distinct and particular individuals to whom we owe some responsibility of care. It takes as its fundamental and grounding moral behavior the caring of a mother for a child. (Yes, it recognizes that fathers also care for children.) It begins with the particular, not an attempt at the general.

Applying the frameworks to Parler

So, how do any of these help us with the question of de-platforming Parler?

Utilitarians might argue that the existence of Parler as an amplifier of hate threatens to bring down the overall happiness of the world. Of course, the right-wing extremists on Parler would argue exactly the opposite, and would point to the detrimental consequences of giving the monopoly platforms this power.  I don’t see how either side convinces the other on this basis.

Deontologists might argue that the de-platforming violates the rights of the users and readers of Parler. Other deontologists might talk about the rights threatened by the consequences of the growth of fascism enabled by Parler. Or they might simply make the utilitarian argument. Again, I don't see how these frameworks lead to convincing the other side.

While there has been work done on figuring out how to apply the ethics of care to policy, it generally doesn't make big claims about settling this sort of issue. But it may be that moral frameworks should not be measured by how effectively they convert opponents, but rather by how well they help us come to our own moral beliefs about issues. In that case, I still don't see how much they help.

If forced to have an opinion about Parler — and I don't think I have one worth stating — I'd probably find a way to believe that the harmful consequences of Parler outweigh hindering the human right of the participants to hang out with people they want to talk with and to say whatever they want. My point is definitely not that you ought to believe the same thing, because I'm very uncomfortable with it myself. My point is that moral frameworks don't help us much.

And, finally, as I posted recently, I think moral questions are getting harder and harder now that we are ever more aware of more people, more opinions, and the complex dynamic networks of people, beliefs, behavior, and policies.

* * *

My old friend AKMA — so learned, wise, and kind that you could plotz — takes me to task in a very thought-provoking way. I reply in the comments.


Categories: echo chambers, ethics, everyday chaos, media, philosophy, policy, politics, social media Tagged with: ethics • free speech • morality • parler • philosophy • platforms Date: January 11th, 2021 dw


January 9, 2021

Beyond the author’s intent

Twitter’s reasons for permanently banning Donald Tr*mp acknowledge a way in which post-modernists (an attribution that virtually no post-modernist claims, so pardon my shorthand) anticipated the Web’s effect on the relationship of author and reader. While the author’s intentions have not been erased, the reader’s understanding is becoming far more actionable.

Twitter’s lucid explanation of why it (finally) threw Tr*mp off its platform not only looks at the context of his tweets, it also considers how his tweets were being understood on Twitter and other platforms. For example:

“President Trump’s statement that he will not be attending the Inauguration is being received by a number of his supporters as further confirmation that the election was not legitimate…” 

and

The use of the words “American Patriots” to describe some of his supporters is also being interpreted as support for those committing violent acts at the US Capitol.

and

The mention of his supporters having a “GIANT VOICE long into the future” and that “They will not be disrespected or treated unfairly in any way, shape or form!!!” is being interpreted as further indication that President Trump does not plan to facilitate an “orderly transition” …

Now, Twitter cares about how his tweets are being received because that reception is, in Twitter’s judgment, likely to incite further violence. That violates Twitter’s Glorification of Violence policy, so I am not attributing any purist post-modern intentions (!) to Twitter.

But this is a pretty clear instance of the way in which the Web is changing the authority of the author to argue against misreadings as not their intention. The public may indeed be misinterpreting the author’s intended meaning, but it’s now clearer than ever that those intentions are not all we need to know. Published works are not subservient to authors.

I continue to think there’s value in trying to understand a work within the context of what we can gather about the author’s intentions. I’m a writer, so of course I would think that. But the point of publishing one’s writings is to put them out on their own where they have value only to the extent to which they are appropriated — absorbed and made one’s own — by readers.

The days of the Author as Monarch are long over because now how readers appropriate an author’s work is even more public than that work itself.

(Note: I put an asterisk into Tr*mp’s name because I cannot stand looking at his name, much less repeating it.)


Categories: censorship, culture, internet, philosophy, politics Tagged with: philosophy • politics • pomo • trump • twitter • writing Date: January 9th, 2021 dw


March 28, 2020

Computer Ethics 1985

I was going through a shelf of books I haven’t visited in a couple of decades and found a book I used in 1986 when I taught Introduction to Computer Science in my last year as a philosophy professor. (It’s a long story.) Ethical Issues in the Use of Computers was a handy anthology, edited by Deborah G. Johnson and John W. Snapper (Wadsworth, 1985).

So what were the ethical issues posed by digital tech back then?

The first obvious point is that back then ethics were ethics: codes of conduct promulgated by professional societies. So, Part 1 consists of eight essays on “Codes of Conduct for the Computer Professions.” All but two of the articles present the codes for various computing associations. The two stray sheep are “The Quest for a Code of Professional Ethics: An Intellectual and Moral Confusion” (John Ladd) and “What Should Professional Societies do About Ethics?” (Fay H. Sawyier).

Part 2 covers “Issues of Responsibility”, with most of the articles concerning themselves with liability issues. The last article, by James Moor, ventures wider, asking “Are There Decisions Computers Should Not Make?” About midway through, he writes:

“Therefore, the issue is not whether there are some limitations to computer decision-making but how well computer decision making compares with human decision making.” (p. 123)

While saluting artificial intelligence researchers for their enthusiasm, Moor says “…at this time the results of their labors do not establish that computers will one day match or exceed human levels of ability for most kinds of intellectual activities.” Was Moor right? It depends. First define basically everything.

Moor concedes that Hubert Dreyfus’ argument (What Computers Still Can’t Do) that understanding requires a contextual whole has some power, but points to effective expert systems. Overall, he leaves open the question whether computers will ever match or exceed human cognitive abilities.

After talking about how to judge computer decisions, and forcefully raising Joseph Weizenbaum’s objection that computers are alien to human life and thus should not be allowed to make decisions about that life, Moor lays out some guidelines, concluding that we need to be pragmatic about when and how we will let computers make decisions:

“First, what is the nature of the computer’s competency and how has it been demonstrated? Secondly given our basic goals and values why is it better to use a computer decision maker in a particular situation than a human decision maker?”

We are still asking these questions.

Part 3 is on “Privacy and Security.” Four of the seven articles can be considered general introductions to the concept of privacy. Apparently privacy was not as commonly discussed back then.

Part 4, “Computers and Power,” suddenly becomes more socially aware. It includes an excerpt from Weizenbaum’s Computer Power and Human Reason, as well as articles on “Computers and Social Power” and “Peering into the Poverty Gap.”

Part 5 is about the burning issue of the day: “Software as Property.” One entry is the Third Circuit Court of Appeals finding in Apple vs. Franklin Computer. Franklin’s Ace computer contained operating system code that had been copied from Apple. The Court knew this because, in addition to the programs being line-by-line copies, Franklin had failed to remove the name of an Apple engineer that the engineer had embedded in the program. Franklin acknowledged the copying but argued that operating system code could not be copyrighted.

That seems so long ago, doesn’t it?


Because this post mentions Joseph Weizenbaum, here’s the beginning of a blog post from 2010:

I just came across a 1985 printout of notes I took when I interviewed Prof. Joseph Weizenbaum in his MIT office for an article that I think never got published. (At least Google and I have no memory of it.) I’ve scanned it in; it’s a horrible dot-matrix printout of an unproofed semi-transcript, with some chicken scratches of my own added. I probably tape recorded the thing and then typed it up, for my own use, on my KayPro.

In it, he talks about AI and ethics in terms much more like those we hear today. He was concerned about its use by the military especially for autonomous weapons, and raised issues about the possible misuse of visual recognition systems. Weizenbaum was both of his time and way ahead of it.


Categories: ai, copyright, infohistory, philosophy Tagged with: ai • copyright • ethics • history • philosophy Date: March 28th, 2020 dw


July 27, 2019

How we’re meaningless now: Projections vs. simulations

Back when I was a lad, we experienced the absurdity of life by watching as ordinary things in the world shed their meanings the way the Nazi who opens the chest in Raiders of the Lost Ark loses his skin: it just melts away.

In this experience of meaninglessness, though, what’s revealed is not some other layer beneath the surface, but the fact that all meaning is just something we make up and project over things that are indifferent to whatever we care to drape over them.

If you don’t happen to have a holy ark handy, you can experience this meaninglessness writ small by saying the word “ketchup” over and over until it becomes not a word but a sound. The magazine “Forbes” also works well for this exercise. Or, if you are a Nobel Prize-winning writer and surprisingly consistently wrong philosopher like Jean-Paul Sartre, perhaps a chestnut tree will reveal itself to you as utterly alien and resistant to the meaning we keep trying to throw onto it.

That was meaninglessness in the 1950s and on. Today we still manage to find our everyday world meaningless, but now we don’t see ourselves projecting meanings outwards but instead imagine ourselves to be in a computer simulation. Why? Because we pretty consistently understand ourselves in terms of our dominant tech, and these days the video cards owned by gamers are close to photo realistic, virtual reality is creating vivid spatial illusions for us, and AI is demonstrating the capacity of computers to simulate the hidden logic of real domains.

So now the source of the illusory meaning that we had taken for granted reveals itself not to be us projecting the world out from our skull holes but to be super-programmers who have created our experience of the world without bothering to create an actual world.

That’s a big difference. Projecting meaning only makes sense when there’s a world to project onto. The experience of meaninglessness as simulation takes that world away.

The meaninglessness we experience assigns the absurdity not to the arbitrariness that has led us to see the world one way instead of another, but to an Other whom we cannot see, imagine, or guess at. We envision, perhaps, children outside of our time and space playing a video game (“Sims Cosmos”), or alien computer scientists running a test to see what happens using the rules they’ve specified this time. For a moment we perhaps marvel at how life-like are the images we see as we walk down a street or along a forest path, how completely the programmers have captured the feeling of a spring rain on our head and shoulders but cleverly wasted no cycles simulating any special feeling on the soles of our feet. The whole enterprise – life, the universe, and everything – is wiped out the way a computer screen goes blank when the power is turned off.

In the spirit of the age, the sense of meaninglessness that comes from the sense we’re in a simulation is not despair, for it makes no difference. Everything is different but nothing has changed. The tree still rustles. The spring rain still smells of new earth. It is the essence of the simulation that it is full of meaning. That’s what’s being simulated. It’s all mind without any matter, unlike the old revelation that the world is all matter without meaning. The new meaninglessness is absurd absurdity, not tragic absurdity. We speculate about The Simulation without it costing a thing. The new absurdity is a toy of thought, not a problem for life.

I am not pining for my years suffering from attacks of Old School Anxiety. It was depressing and paralyzing. Our new way of finding the world meaningless is playful and does not turn every joy to ashes. It has its own dangers: it can release one from any sense of responsibility – “Dude, sorry to have killed your cat, but it was just a simulation” – and it can sap some of the sense of genuineness out of one’s emotions. But not for long because, hey, it’s a heck of a realistic simulation.

But to be clear, I reject both attempts to undermine the meaningfulness of our experience. I was drawn to philosophical phenomenology precisely because it was a way to pay attention to the world and our experience, rather than finding ways to diminish them both.

Both types of meaninglessness, however, think they are opening our eyes to the hollowness of life, when in fact they are privileging a moment of deprivation as a revelation of truth, as if the uncertainty and situatedness of meaning is a sign that it is illusory rather than it being the ground of every truth and illusion itself.


Categories: ai, machine learning, misc, philosophy Tagged with: ai Date: July 27th, 2019 dw


July 1, 2019

In defense of public philosophy

Daily Nous has run a guest editorial by C. Thi Nguyen defending “public philosophy.” Yes! In fact, it’s telling that public philosophy even needs defense. And defense from whom?

Here’s a pull quote from the last paragraph:

To speak bluntly: the world is in crisis. It’s war, the soul of humanity is at stake, and the discipline that has been in isolation training for 2000 years for this very moment is too busy pointing out tiny errors in each other’s technique to actually join the fight.

And this is from near the beginning:

We need to fill the airwaves with the Good Stuff, in every form: op-eds, blog posts, YouTube videos, podcasts, long-form articles, lectures, forums, Tweets, and more. Good philosophy needs to be everywhere, accessible to every level, to anybody who might be interested. We need to flood the world with gateways of every shape and size.

So, yes, of course!

Who then is Dr. Nguyen arguing against? Who does not support increasing the presence of public philosophy?

Answer: The bulk of the article in fact outlines what we have to do in order to get the profession of philosophy to accept public philosopher as an activity worth recognizing, rewarding, and promoting.

If that op-ed is a manifesto (it is), sign me up!

[Disclosure: I am an ex-academic philosophy professor whose writings sometimes impinge on actual philosophy.]


Categories: blogs, philosophy Tagged with: blogs • philosophy Date: July 1st, 2019 dw


May 20, 2019

Three Chaotic podcasts

My book Everyday Chaos launched last week. Yay! As part of the launch, I gave some talks and interviews. Here are three of the conversations, with three great interviewers:

Leonard Lopate, WBAI

Hidden Forces podcast

Berkman Klein book talk, and conversation with Joi Ito:


Categories: everyday chaos, philosophy Tagged with: everyday chaos • interviews • podcasts Date: May 20th, 2019 dw


March 24, 2019

Automating our hardest things: Machine Learning writes

In 1948, when Claude Shannon was inventing information theory [pdf] (and, I’d say, information itself), he took as an explanatory example a simple algorithm for predicting the next element of a sentence. For example, treating each letter as equiprobable, he came up with sentences such as:

XFOML RXKHRJFFJUJ ZLPWCFWKCYJ FFJEYVKCQSGHYD QPAAMKBZAACIBZLHJQD.

If you instead use the average frequency of each letter, you come up with sentences that seem more language-like:

OCRO HLI RGWR NMIELWIS EU LL NBNESEBYA TH EEI ALHENHTTPA OOBTTVA NAH BRL.

At least that one has a reasonable number of vowels.

If you then consider the frequency of letters following other letters—U follows a Q far more frequently than X does—you are practically writing nonsense Latin:

ON IE ANTSOUTINYS ARE T INCTORE ST BE S DEAMY ACHIN D ILONASIVE TUCOOWE AT TEASONARE FUSO TIZIN ANDY TOBE SEACE CTISBE.

Looking not at pairs of letters but at triplets, Shannon got:

IN NO IST LAT WHEY CRATICT FROURE BIRS GROCID PONDENOME OF DEMONSTURES OF THE REPTAGIN IS REGOACTIONA OF CRE.

Then Shannon changed his units from triplets of letters to triplets of words, and got:

THE HEAD AND IN FRONTAL ATTACK ON AN ENGLISH WRITER THAT THE CHARACTER OF THIS POINT IS THEREFORE ANOTHER METHOD FOR THE LETTERS THAT THE TIME OF WHO EVER TOLD THE PROBLEM FOR AN UNEXPECTED.

Pretty good! But still gibberish.
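Shannon's successive approximations are easy to reproduce. Here's a minimal Python sketch — the toy corpus is my own stand-in, not Shannon's actual frequency tables — showing the zero-order (uniform letters), first-order (letter frequencies), and second-order (letter pairs) schemes:

```python
import random

# Hypothetical toy corpus standing in for the text Shannon drew his tables from.
corpus = "the head and in frontal attack on an english writer that the character of this point"

# Zero-order approximation: every letter equiprobable.
alphabet = sorted(set(corpus))
zero_order = "".join(random.choice(alphabet) for _ in range(40))

# First-order approximation: letters drawn with their corpus frequencies
# (sampling a random position in the corpus does exactly this).
first_order = "".join(random.choice(corpus) for _ in range(40))

# Second-order approximation: each letter drawn from the distribution of
# letters observed to follow the previous one (a bigram model).
followers = {}
for a, b in zip(corpus, corpus[1:]):
    followers.setdefault(a, []).append(b)

out = [random.choice(corpus)]
for _ in range(39):
    out.append(random.choice(followers.get(out[-1], corpus)))
second_order = "".join(out)

print(zero_order)
print(first_order)
print(second_order)
```

Each run differs, but the drift Shannon described shows up reliably: the second-order output has plausible letter sequences while the zero-order output is pure alphabet soup.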

Now jump ahead seventy years and try to figure out which pieces of the following story were written by humans and which were generated by a computer:

In a shocking finding, scientist discovered a herd of unicorns living in a remote, previously unexplored valley, in the Andes Mountains. Even more surprising to the researchers was the fact that the unicorns spoke perfect English.

The scientist named the population, after their distinctive horn, Ovid’s Unicorn. These four-horned, silver-white unicorns were previously unknown to science.

Now, after almost two centuries, the mystery of what sparked this odd phenomenon is finally solved.

Dr. Jorge Pérez, an evolutionary biologist from the University of La Paz, and several companions, were exploring the Andes Mountains when they found a small valley, with no other animals or humans. Pérez noticed that the valley had what appeared to be a natural fountain, surrounded by two peaks of rock and silver snow.

Pérez and the others then ventured further into the valley. “By the time we reached the top of one peak, the water looked blue, with some crystals on top,” said Pérez.

Pérez and his friends were astonished to see the unicorn herd. These creatures could be seen from the air without having to move too much to see them – they were so close they could touch their horns.

While examining these bizarre creatures the scientists discovered that the creatures also spoke some fairly regular English. Pérez stated, “We can see, for example, that they have a common ‘language,’ something like a dialect or dialectic.”

Dr. Pérez believes that the unicorns may have originated in Argentina, where the animals were believed to be descendants of a lost race of people who lived there before the arrival of humans in those parts of South America.

While their origins are still unclear, some believe that perhaps the creatures were created when a human and a unicorn met each other in a time before human civilization. According to Pérez, “In South America, such incidents seem to be quite common.”

However, Pérez also pointed out that it is likely that the only way of knowing for sure if unicorns are indeed the descendants of a lost alien race is through DNA. “But they seem to be able to communicate in English quite well, which I believe is a sign of evolution, or at least a change in social organization,” said the scientist.

The answer: The first paragraph was written by a human being. The rest was generated by a machine learning system trained on a huge body of text. You can read about it in a fascinating article (pdf of the research paper) by its creators at OpenAI. (Those creators are: Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever.)

There are two key differences between this approach and Shannon’s.

First, the new approach analyzed a very large body of documents from the Web. It ingested 45 million pages linked in Reddit comments that got more than three upvotes. After removing duplicates and some other cleanup, the data set was reduced to 8 million Web pages. That is a lot of pages. Of course the use of Reddit, or any one site, can bias the dataset. But one of the aims was to compare this new, huge, dataset to the results from existing sets of text-based data. For that reason, the developers also removed Wikipedia pages from the mix since so many existing datasets rely on those pages, which would smudge the comparisons.

(By the way, a quick Google search for any page from before December 2018 mentioning both “Jorge Pérez” and “University of La Paz” turned up nothing. The AI is constructing, not copy-pasting.)

The second distinction from Shannon’s method: the developers used machine learning (ML) to create a neural network, rather than relying on a table of frequencies of words in triplet sequences. ML creates a far, far more complex model that can assess the probability of the next word based on the entire context of its prior uses.
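To make the contrast concrete: Shannon's word-triplet method amounts to a lookup table of observed continuations. Here's a sketch of that table, using a hypothetical toy corpus; a machine learning model replaces this table with a network that conditions on the whole preceding context rather than just the last two words:

```python
import random
from collections import defaultdict

# Hypothetical toy corpus; Shannon's tables were built from much more text.
words = ("the head and in frontal attack on an english writer "
         "that the character of this point is therefore another method").split()

# Shannon-style table: which words have been seen following each word pair.
table = defaultdict(list)
for w1, w2, w3 in zip(words, words[1:], words[2:]):
    table[(w1, w2)].append(w3)

# Generate by repeatedly sampling a continuation of the last two words.
out = ["the", "head"]
for _ in range(10):
    options = table.get((out[-2], out[-1]))
    if not options:  # dead end: this pair has no observed continuation
        break
    out.append(random.choice(options))

print(" ".join(out))
```

The table knows nothing beyond its two-word window; a neural language model's "table" is implicit in millions of learned weights, which is what lets it keep a unicorn story coherent across paragraphs.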

The results can be astounding. While the developers freely acknowledge that the examples they feature are somewhat cherry-picked, they say:

When prompted with topics that are highly represented in the data (Brexit, Miley Cyrus, Lord of the Rings, and so on), it seems to be capable of generating reasonable samples about 50% of the time. The opposite is also true: on highly technical or esoteric types of content, the model can perform poorly.

There are obviously things to worry about as this technology advances. For example, fake news could become the Earth’s most abundant resource. For fear of its abuse, its developers are not releasing the full dataset or model weights. Good!

Nevertheless, the possibilities for research are amazing. And, perhaps most important in the longterm, one by one the human capabilities that we take as unique and distinctive are being shown to be replicable without an engine powered by a miracle.

That may be a false conclusion. Human speech does not consist simply of the utterances we make but the complex intentional and social systems in which those utterances are more than just flavored wind. But ML intends nothing and appreciates nothing. Nothing matters to ML. Nevertheless, knowing that sufficient silicon can duplicate the human miracle should shake our confidence in our species’ special place in the order of things.

(FWIW, my personal theology says that when human specialness is taken as conferring special privilege, any blow to it is a good thing. When that specialness is taken as placing special obligations on us, then at its very worst it’s a helpful illusion.)


Categories: ai, infohistory, philosophy Tagged with: ai • creativity • information • machine learning Date: March 24th, 2019 dw


September 20, 2018

Coming to belief

I’ve written before about the need to teach The Kids (also: all of us) not only how to think critically so we can see what we should not believe, but also how to come to belief. That piece, which I now cannot locate, was prompted by danah boyd’s excellent post on the problem with media literacy. Robert Berkman, Outreach, Business Librarian at the University of Rochester and Editor of The Information Advisor’s Guide to Internet Research, asked me how one can go about teaching people how to come to belief. Here’s an edited version of my reply:

I’m afraid I don’t have a good answer. I actually haven’t thought much about how to teach people how to come to belief, beyond arguing for doing this as a social process (the ol’ “knowledge is a network” argument :) I have a pretty good sense of how *not* to do it: the way philosophy teachers relentlessly show how every proposed position can be torn down.

I wonder what we’d learn by taking a literature course as a model — not one that is concerned primarily with critical method, but one that is trying to teach students how to appreciate literature. Or art. The teacher tries to get the students to engage with one another to find what’s worthwhile in a work. Formally, you implicitly teach the value of consistency, elegance of explanation, internal coherence, how well a work clarifies one’s own experience, etc. Those are useful touchstones for coming to belief.

I wouldn’t want to leave students feeling that it’s up to them to come up with an understanding on their own. I’d want them to value the history of interpretation, bringing their critical skills to it. The last thing we need is to make people feel yet more unmoored.

I’m also fond of the orthodox Jewish way of coming to belief, as I, as a non-observant Jew, understand it. You have an unchanging and inerrant text that means nothing until humans interpret it. To interpret it means to be conversant with the scholarly opinions of the great Rabbis, who disagree with one another, often diametrically. Formulating a belief in this context means bringing contemporary intelligence to a question while finding support in the old Rabbis…and always always talking respectfully about those other old Rabbis who disagree with your interpretation. No interpretations are final. Learned contradiction is embraced.

That process has the elements I personally like (being moored to a tradition, respecting those with whom one disagrees, acceptance of the finitude of beliefs, acceptance that they result from a social process), but it’s not going to be very practical outside of Jewish communities if only because it rests on the acceptance of a sacred document, even though it’s one that literally cannot be taken literally; it always requires interpretation.

My point: We do have traditions that aim at enabling us to come to belief. Science is one of them. But there are others. We should learn from them.

TL;DR: I dunno.


Categories: philosophy, too big to know Tagged with: 2b2k • fake news • logic • philosophy Date: September 20th, 2018 dw


September 14, 2018

Five types of AI fairness

Google PAIR (People + AI Research) has just posted my attempt to explain what fairness looks like when it’s operationalized for a machine learning system. It’s pegged around five “fairness buttons” on the new Google What-If tool, a resource for developers who want to try to figure out what factors (“features” in machine learning talk) are affecting an outcome.

Note that there are far more than five ways to operationalize fairness. The point of the article is that once we are forced to decide exactly what we’re going to count as fair — exactly enough that a machine learning system can implement it — we realize just how freaking complex fairness is. OMG. I broke my brain trying to figure out how to explain some of those ideas, and it took several Google developers (especially James Wexler) and a fine mist of vegetarian broth to restore it even incompletely. Even so, my explanations are less clear than I (or you, I’m sure) would like. But at least there’s no math in them :)

I’ll write more about this at some point, but for me the big take-away is that fairness has had value as a moral concept so far because it is vague enough to let our intuition guide us. Machine learning is going to force us to get very specific about it. But we are not yet adept enough at that — e.g., we don’t have a vocabulary for distinguishing the varieties of fairness — and we don’t agree about them enough to be able to navigate the shoals. It’s going to be a big mess, but something we have to work through. When we do, we’ll be better at being fair.
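To make the vagueness concrete, here’s a toy sketch (my own illustration, not Google’s What-If code; the data is made up) of two common ways to operationalize fairness. Notice that on the same predictions, one criterion says the classifier is fair and the other says it isn’t:

```python
# Two operationalizations of fairness can disagree on the same classifier.
# Hypothetical yes/no decisions and ground-truth labels for two groups.

def selection_rate(preds):
    """Fraction of people the model says 'yes' to."""
    return sum(preds) / len(preds)

def true_positive_rate(preds, labels):
    """Among people who truly deserve 'yes', the fraction the model accepts."""
    positives = [p for p, y in zip(preds, labels) if y == 1]
    return sum(positives) / len(positives)

preds_a, labels_a = [1, 0, 1, 0], [1, 1, 0, 0]   # group A (hypothetical)
preds_b, labels_b = [1, 1, 0, 0], [1, 1, 1, 0]   # group B (hypothetical)

# Demographic parity: equal selection rates across groups.
parity = selection_rate(preds_a) == selection_rate(preds_b)          # 0.5 == 0.5

# Equality of opportunity: equal true-positive rates across groups.
opportunity = (true_positive_rate(preds_a, labels_a)
               == true_positive_rate(preds_b, labels_b))             # 0.5 vs 2/3

print(parity, opportunity)  # parity holds, equality of opportunity does not
```

Both definitions sound like “fairness,” and in general you cannot satisfy every such criterion at once, which is exactly why pressing one of five buttons forces an uncomfortable choice.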

Now, about the fact that I am a writer-in-residence at Google. Well, yes I am, and have been for about six weeks. It’s a six-month, part-time experiment. My role is to try to explain some of machine learning to people who, like me, lack the technical competence to actually understand it. I’m also supposed to be reflecting in public on what the implications of machine learning might be for our ideas. I am expected to be an independent voice, an outsider on the inside.

So far, it’s been an amazing experience. I’m attached to PAIR, which has developers working on very interesting projects. They are, of course, super-smart, but they have not yet tired of me asking dumb questions that do not seem to be getting smarter over time. So, selfishly, it’s been great for me. And isn’t that all that really matters, hmmm?


Categories: ai, ethics, philosophy Tagged with: aimachinelearning • fairness Date: September 14th, 2018 dw


May 6, 2018

[liveblog][ai] Primavera De Filippi: An autonomous flower that merges AI and Blockchain

Primavera De Filippi is an expert in blockchain-based tech. She is giving a talk on Plantoid at ThursdAI, an event held by Harvard’s Berkman Klein Center for Internet & Society and the MIT Media Lab. Her talk is officially on operational autonomy vs. decisional autonomy, but it’s really about how weird things become when you build a computerized flower that merges AI and the blockchain. For me, a central question of her talk was: Can we have autonomous robots that have legal rights and can own and spend assets, without having to resort to conferring personhood on them the way we have with corporations?

NOTE: Live-blogging. Getting things wrong. Missing points. Omitting key information. Introducing artificial choppiness. Over-emphasizing small matters. Paraphrasing badly. Not running a spellpchecker. Mangling other people’s ideas and words. You are warned, people.

Autonomy and liability

She begins by pointing to the three industrial revolutions so far: steam led to mechanized production; electricity led to mass production; electronics led to automated production. The fourth — AI — is automating knowledge production.

People are increasingly moving into the digital world, and digital systems are moving back into the physical world, creating cyber-physical systems. E.g., the Internet of Things senses, communicates, and acts. The Internet of Smart Things learns from the data the things collect, makes inferences, and then acts. The Internet of Autonomous Things creates new legal challenges. Various actors can be held liable: the manufacturer, the software developer, the user, or a third party. “When do we apply legal personhood to non-humans?”

With autonomous things, the user and third parties become less liable as the software developer takes on more of the liability: There can be a bug. Someone can hack into it. The rules that make inferences are inaccurate. Or a bad moral choice has led the car into an accident.

The software developer might have created bug-free software, but its interaction with other devices might lead to unpredictability; multiple systems operating according to different rules might be incompatible; it can be hard to identify the chain of causality. So, who will be liable? The manufacturers and owners are likely to have only limited liability.

So, maybe we’ll need generalized insurance: mandatory insurance that potentially harmful devices need to subscribe to.

Or, perhaps we will provide some form of legal personhood to machines so the manufacturers can be sued for their failings. Suing a robot would be like suing a corporation. The devices would be able to own property and assets. The EU is thinking about creating this type of agenthood for AI systems. This is obviously controversial. At least a corporation has people associated with it, while the device is just a device, Primavera points out.

So, when do we apply legal personhood to non-humans? In addition to people and corporations, some countries have assigned personhood to chimpanzees (Argentina, France) and to natural resources (NZ: Whanganui river). We do this so these entities will have rights and cannot be simply exploited.

If we give legal personhood to AI-based systems, can AIs have property rights over their assets and IP? If they are legally liable, can they be held responsible for their actions and sued for compensation? “Maybe they should have contractual rights so they can enter into contracts. Can they be rewarded for their work? Taxed?” [All of these are going to turn out to be real questions. … Wait for it …]

Limitations: “Most of the AI-based systems deployed today are more akin to slaves than corporations.” They’re not autonomous the way people are. They are owned, controlled, and maintained by people or corporations. They act as agents for their operators. They have no technical means to own or transfer assets. (Primavera recommends watching the Star Trek: The Next Generation episode “The Measure of a Man,” which asks, among other things, whether Data (the android) can be dismantled and whether he can resign.)

Decisional autonomy is the capacity to make a decision on your own, but it doesn’t necessarily bring what we think of as real autonomy. E.g., an AV can decide its route. For real autonomy we need operational autonomy: no one is maintaining the thing’s operation at a technical level. To take a non-random example, a blockchain runs autonomously because there is no single operator controlling it. E.g., smart contracts come with a guarantee of execution. Once a contract is registered with a blockchain, no operator can stop it. This is operational autonomy.

Blockchain meets AI. Object: Autonomy

We are seeing the first examples of autonomous devices using blockchain. The most famous is the Samsung washing machine that can detect when the soap is empty and makes a smart contract to order more. Autonomous cars could work on the same model: they could be owned by no one and collect money when someone uses them. They could be initially purchased by someone and then buy themselves free: “They’d have to be emancipated,” she says. Perhaps they and other robots could use the capital they accumulate to hire people to work for them. [Pretty interesting model for an Uber.]

She introduces Plantoid, a blockchain-based life form. “Plantoid is autonomous, self-sufficient, and can reproduce.” Real flowers use bees to reproduce; Plantoids use humans to collect capital for their reproduction. Their bodies are mechanical. Their spirit is an Ethereum smart contract that collects cryptocurrency. When you feed it currency it says thank you; the Plantoid Primavera has brought nods its flower. When it accumulates enough funds to reproduce itself, it triggers a smart contract that issues a call for bids to create the next version of the Plantoid. In the “mating phase” it looks for a human to create the new version. People vote with micro-donations. Then it identifies a winner and hires that human to create the new one.

There are many Plantoids in the world. Each has its own “DNA,” which new artists can add to. E.g., each artist has to decide on its governance, such as whether it will donate some funds to charity; the aim is to make it more attractive to contribute to. The most fit get the most money and reproduce themselves. Burning Man this summer is going to feature this.

Every time one reproduces, a small cut is given to the pattern that generated it, and some to the new designer. This flips copyright on its head: the artist has an incentive to make her design more visible and accessible and attractive.
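As I understand the mechanics described above, the Plantoid lifecycle can be sketched roughly like this. This is a plain-Python toy of my own, not the actual Ethereum contract; the threshold and the royalty percentages are invented for illustration:

```python
# Toy model of the Plantoid lifecycle as described in the talk:
# collect donations, reproduce at a funding threshold, and pay a
# cut up the "DNA" chain to the design that generated this one.
# All names and numbers are illustrative, not from the real contract.

class Plantoid:
    REPRODUCTION_THRESHOLD = 100.0  # hypothetical funding goal
    ANCESTOR_CUT = 0.05             # hypothetical royalty to the parent design
    ARTIST_CUT = 0.90               # hypothetical payment to the winning artist

    def __init__(self, parent=None):
        self.parent = parent        # the design this one descends from
        self.balance = 0.0
        self.children = []

    def donate(self, amount):
        """Accept a donation; reproduce once the threshold is reached."""
        self.balance += amount
        if self.balance >= self.REPRODUCTION_THRESHOLD:
            self.reproduce()

    def reproduce(self):
        """'Mating phase': pay the winning artist, route a cut to the
        parent design, and spawn a child that records this one as parent."""
        artist_payment = self.balance * self.ARTIST_CUT
        if self.parent is not None:
            self.parent.balance += self.balance * self.ANCESTOR_CUT
        self.balance = 0.0
        self.children.append(Plantoid(parent=self))
        return artist_payment

root = Plantoid()
root.donate(60.0)
root.donate(50.0)          # crosses the threshold: a new Plantoid is born
print(len(root.children))  # 1
```

The royalty flowing back through `parent` is what flips copyright on its head: the original artist profits from downstream copies, so she wants her design spread as widely as possible.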

So, why provide legal personhood to autonomous devices? We want them to be able to own their own assets, assume contractual rights, have the legal capacity to sue and be sued, and limit their liability. “Blockchain lets us do that without having to declare the robot to be a legal person.”

The plant effectively owns the cryptofunds. The law cannot affect this: smart contracts are enforced by code.

Who are the parties to the contract? The original author and new artist? The master agreement? Who can sue who in case of a breach? We don’t know how to answer these questions yet.

Can a plantoid sue for breach of contract? Not if the legal system doesn’t recognize it as a legal person. So who is liable if the plant hurts someone? Can we provide a mechanism for this without conferring personhood? “How do you enforce the law against autonomous agents that cannot be stopped and whose property cannot be seized?”

Q&A

Could you do this with live plants? People would bioengineer them…

A: Yes. Plantoid has already been forked this way. There’s an idea for a forest offering trees to be cut down, with the compensation going to the forest, which might eventually buy more land to expand itself.

My interest in this grew out of my interest in decentralized organizations. This enables a project to be an entity that assumes liability for its actions, and to reproduce itself.

Q: [me] Do you own this plantoid?

A: Hmm. I own the physical instantiation but not the code or the smart contract. If this one broke, I could make a new one that connects to the same smart contract. If someone gets hurt because it falls on them, I’m probably liable. If the smart contract is funding terrorism, I’m not the owner of that contract. The physical object is doing nothing but reacting to donations.

Q: But the aim of its reactions is to attract more money…

A: It will be up to the judge.

Q: What are the most likely scenarios for the development of these weird objects?

A: A blockchain can provide the interface for humans interacting with each other without needing a legal entity, such as Uber, to centralize control. But you need people to decide to do this. The question is how these entities change the structure of the organization.


Categories: ai, law, liveblog, philosophy Tagged with: ai • blockchain • law • robots Date: May 6th, 2018 dw




Creative Commons License
This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 3.0 Unported License.
TL;DR: Share this post freely, but attribute it to me (name (David Weinberger) and link to it), and don't use it commercially without my permission.

Joho the Blog uses WordPress blogging software.
Thank you, WordPress!