
Moral Machine

My most recent article has been posted over on the OSC IB Blogs site: Moral Machines. I've re-posted it below. Visit the OSC-IB Blogs site and explore the posts on other areas of interest for students and for teachers.

Have you seen this video?
You’ll find it on this page http://moralmachine.mit.edu/ at the MIT website: “A platform for gathering a human perspective on moral decisions made by machine intelligence, such as self-driving cars. We show you moral dilemmas, where a driverless car must choose the lesser of two evils, such as killing two passengers or five pedestrians. As an outside observer, you judge which outcome you think is more acceptable. You can then see how your responses compare with those of other people.” You are offered 10 languages to choose from.
There are scores of posts about this topic on the web. In August 2016, The Verge wrote,
“The Moral Machine adds new variations to the trolley problem: do you plow into a criminal or swerve and hit an executive? Seven pregnant women (who are jay-walking) or five elderly men (one of whom is homeless) plus three dogs? It’s basically a video game, and you’re trying to min-max human life based on which people you think most deserve to live and how active you are willing to be in their death…A serious question: what is the intended use for this information? The website describes it as “a crowd-sourced picture of human opinion on how machines should make decisions when faced with moral dilemmas,” but the information that’s actually being gathered is more unsettling. Any output from this test will produce some kind of ranking of the value of life (executive > jogger > retiree > dog, e.g.), and since the whole test is premised on self-driving tech, it seems like the plan is to use that ranking to guide the moral decision-making of autonomous cars?”
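To make The Verge's worry concrete: any tally of these pairwise choices can indeed be turned into a ranking. As a toy illustration only (the character labels and numbers below are invented for the example, and the published study uses proper conjoint analysis rather than this naive win-rate tally), here is a sketch in Python:

```python
from collections import defaultdict

def preference_ranking(decisions):
    """Rank character types by how often respondents chose to spare them.

    decisions: list of (spared, sacrificed) pairs, one per dilemma response.
    Returns character types sorted by win rate (spared / total appearances).
    """
    wins = defaultdict(int)
    appearances = defaultdict(int)
    for spared, sacrificed in decisions:
        wins[spared] += 1
        appearances[spared] += 1
        appearances[sacrificed] += 1
    return sorted(appearances,
                  key=lambda a: wins[a] / appearances[a],
                  reverse=True)

# Invented data: each tuple is one respondent's choice in one dilemma.
sample = [
    ("child", "adult"), ("child", "dog"), ("adult", "dog"),
    ("adult", "criminal"), ("dog", "criminal"), ("child", "criminal"),
]
print(preference_ranking(sample))  # ['child', 'adult', 'dog', 'criminal']
```

Even this crude tally produces exactly the kind of "value of life" ordering The Verge finds unsettling, which is the point: the ranking falls out of the data collection almost automatically.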
This week The Verge wrote on this topic again: “If self-driving cars become widespread, society will have to grapple with a new burden: the ability to program vehicles with preferences about which lives to prioritize in the event of a crash. Human drivers make these choices instinctively, but algorithms will be able to make them in advance. So will car companies and governments choose to save the old or the young? The many or the few?”
The Guardian’s 24 October 2018 post’s headline reads “Who should AI kill in a driverless car crash? It depends who you ask – Responses vary around the world when you ask the public who an out-of-control self-driving car should hit”
The article begins, “Responses to those questions varied greatly around the world. In the global south, for instance, there was a strong preference to spare young people at the expense of old – a preference that was much weaker in the far east and the Islamic world. The same was true for the preference for sparing higher-status victims – those with jobs over those who are unemployed.  When compared with an adult man or woman, the life of a criminal was especially poorly valued: respondents were more likely to spare the life of a dog (but not a cat).”
“On Wednesday, the team behind the Moral Machine released responses from more than two million people spanning 233 countries, dependencies and territories. They found a few universal decisions — for instance, respondents preferred to save a person over an animal, and young people over older people — but other responses differed by regional cultures and economic status.
The findings are important as autonomous vehicles prepare to take the road in the U.S. and other places around the world. In the future, car manufacturers and policymakers could find themselves in a legal bind with autonomous cars. If a self-driving bus kills a pedestrian, for instance, should the manufacturer be held accountable?
The study’s findings offer clues on how to ethically program driverless vehicles based on regional preferences, but the study also highlights underlying diversity issues in the tech industry — namely that it leaves out voices in the developing world.”
Nature’s news page (24 October 2018) carries an article about the publication of the paper.
“…People who think about machine ethics make it sound like you can come up with a perfect set of rules for robots, and what we show here with data is that there are no universal rules,” says Iyad Rahwan, a computer scientist at the Massachusetts Institute of Technology in Cambridge and a co-author of the study.
“The survey, called the Moral Machine, laid out 13 scenarios in which someone’s death was inevitable. Respondents were asked to choose who to spare in situations that involved a mix of variables: young or old, rich or poor, more people or fewer…”
The research was published (24 October 2018) in the journal Nature (volume 563, pages 59–64, 2018) and can be read online at the link.
Abstract: “With the rapid development of artificial intelligence have come concerns about how machines will make moral decisions, and the major challenge of quantifying societal expectations about the ethical principles that should guide machine behaviour. To address this challenge, we deployed the Moral Machine, an online experimental platform designed to explore the moral dilemmas faced by autonomous vehicles. This platform gathered 40 million decisions in ten languages from millions of people in 233 countries and territories. Here we describe the results of this experiment. First, we summarize global moral preferences. Second, we document individual variations in preferences, based on respondents’ demographics. Third, we report cross-cultural ethical variation, and uncover three major clusters of countries. Fourth, we show that these differences correlate with modern institutions and deep cultural traits. We discuss how these preferences can contribute to developing global, socially acceptable principles for machine ethics. All data used in this article are publicly available.”
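The abstract's "three major clusters of countries" can also be pictured computationally. The study itself used hierarchical clustering over countries' aggregate preference scores; the sketch below substitutes a tiny k-means over invented two-dimensional preference vectors (country names, numbers, and starting centroids are all made up for illustration), just to show the idea of grouping countries by how similarly they answered:

```python
import math

def kmeans(points, centroids, iters=10):
    """Tiny k-means: assign each point to its nearest centroid,
    recompute centroids as cluster means, repeat."""
    for _ in range(iters):
        clusters = {i: [] for i in range(len(centroids))}
        for name, vec in points.items():
            nearest = min(range(len(centroids)),
                          key=lambda c: math.dist(vec, centroids[c]))
            clusters[nearest].append(name)
        # Recompute each centroid; keep the old one if its cluster is empty.
        centroids = [
            [sum(points[n][d] for n in names) / len(names) for d in range(2)]
            if names else centroids[i]
            for i, names in clusters.items()
        ]
    return clusters

# Invented vectors: (preference to spare the young, preference to spare the lawful)
points = {
    "CountryA": (0.90, 0.60), "CountryB": (0.85, 0.55),
    "CountryC": (0.30, 0.70), "CountryD": (0.35, 0.75),
}
clusters = kmeans(points, centroids=[(0.9, 0.6), (0.3, 0.7)])
print(clusters)  # CountryA/B land in one cluster, CountryC/D in the other
```

With real data, each country's vector would hold many such preference scores, and the clusters that emerge correspond to the culturally distinct groups the paper reports.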
At first sight this topic may not seem directly related to my blog area of “Vision: Tech in IB Schools”, but it soon will be. The moral implications of teaching AI to make decisions that IB learners and teachers can support should be discussed in school communities now, and often.

A Cautionary Tale

My most recent article has been posted over on the OSC IB Blogs site: A Cautionary Tale. I've re-posted it below. Visit the OSC-IB Blogs site and explore the posts on other areas of interest for students and for teachers.


A week or so ago I read a BBC blog post that I thought I should share on this blog. Then a few days later I read the same story on Petapixel.com, a photography blog. I have also found it on CNN.com, Independent.ie, Metro.co.uk, and aplus.com. I’m sure there are more, but that’s enough to be going on with!
Here’s the story: Shubnum Khan is a South African author (Onion Tears, Penguin), artist (IG: shubnumkhan), and freelance writer (HuffPost SA, O magazine, Times, Marie Claire, Sunday Times, etc.) (her Twitter page). On July 28 she shared this story on Twitter. The story is in the form of a very long series of tweets, which I urge you to read. It begins, “So today I’m going to tell you the story of How I Ended Up with my Face On a McDonald’s Advert in China – A Cautionary Tale. Six or so years ago, a friend in Canada posted a pic on my FB wall to say she found an advert of me promoting immigration in a Canadian newspaper. Naturally I was shocked and…confused. I studied the pic and agreed that it was me. Now I didn’t mind that I was promoting immigration in Canada but I couldn’t understand why my face was in a paper all the way on that side of the world.”
Screenshot of Shubnum Khan (@ShubnumKhan) July 28, 2018
In summary, several years ago she and some friends at university went to a free photo shoot, where they signed a photographer’s model release form which they didn’t read.  She wasn’t told verbally that the photos of her would be sold on a stock photo web site. She describes in her tweets how she discovered the variety of products her face has been used to sell, the unpleasant physical conditions that have been photoshopped onto her image, and how her picture has been used as the cover image on three books!
She contacted the photographer, who eventually took her image off his stock photo site. But she has no control over its use by those who had already purchased it.
“…now that I’m older and more assertive & aware of power plays and manipulation I can easily see how we were all used – a whole gallery of free photographs for this photographer to sell and we haven’t made a cent for all the things WE’VE advertised…Also this could have gone badly – my photo could have come up in a wrong place (I mean, the right to ‘distort photo and character!’) is scary af (sic) and so if anything, I hope my story is also a cautionary tale to be careful what you sign.”
“It’s also pretty telling of how easily you can be exploited in this new age & how startlingly deceptive everything is. Those testimonials are fake, those adverts are fake. Your holiday tour guide, your tutor or your future bride could just be some random uni student …Be clever. Be aware. Don’t get caught up. I’m sure I could have made some money out of this, but instead I’m out there promoting acne cream while someone else gets the profits. And now you know.” — Shubnum Khan (@ShubnumKhan) July 28, 2018
In the CNN story, Khan is quoted as saying she’s surprised at how big the story has become since sharing it on Twitter. “I didn’t expect that at all. I knew it was a strange story but I thought people wouldn’t get too surprised that things like this happen. I’m glad we can still feel surprised and compassionate about situations like this.”
For further reading, you might want to follow up on this extract from the Twitter Terms of Service:
“…By submitting, posting or displaying Content on or through the Services, you grant us a worldwide, non-exclusive, royalty-free license (with the right to sublicense) to use, copy, reproduce, process, adapt, modify, publish, transmit, display and distribute such Content in any and all media or distribution methods (now known or later developed). This license authorizes us to make your Content available to the rest of the world and to let others do the same. You agree that this license includes the right for Twitter to provide, promote, and improve the Services and to make Content submitted to or through the Services available to other companies, organizations or individuals for the syndication, broadcast, distribution, promotion or publication of such Content on other media and services, subject to our terms and conditions for such Content use. Such additional uses by Twitter, or other companies, organizations or individuals, may be made with no compensation paid to you with respect to the Content that you submit, post, transmit or otherwise make available through the Services…”
From the “Rules for use” section of the IB’s Terms and Conditions web page:
License Grant to IB
“You retain full copyright ownership in any posting submitted on IB websites. By submitting or distributing your User Postings, you hereby grant to the IB a worldwide, non-exclusive, transferable, assignable, sub licensable, fully paid-up, royalty-free, perpetual, irrevocable right and license to host, transfer, display, perform, reproduce, modify, distribute, re-distribute, relicense and otherwise use, make available and exploit your User Postings in connection with the provision of My IB services and the IB’s activities, including for educational and promotional purposes.”
License Grant to My IB users
“You retain full copyright ownership in User Postings submitted on IB websites. By submitting or distributing your User Postings, you hereby grant to each user of the IB websites a non-exclusive license to access and use your User Postings in connection with their use of the IB websites for their own personal purposes.”
An extract from the Facebook Terms of Service, section 3 (“Your commitments to Facebook and our community”), point 3, “The permissions you give us”:
“Specifically, when you share, post, or upload content that is covered by intellectual property rights (like photos or videos) on or in connection with our Products, you grant us a non-exclusive, transferable, sub-licensable, royalty-free, and worldwide license to host, use, distribute, modify, run, copy, publicly perform or display, translate, and create derivative works of your content (consistent with your privacy and application settings). This means, for example, that if you share a photo on Facebook, you give us permission to store, copy, and share it with others (again, consistent with your settings) such as service providers that support our service or other Facebook Products you use.”

AI: Humans learning to relate to learning machines

My most recent article has been posted over on the OSC IB Blogs site: AI: Humans Learning to Relate to Learning Machines. I've re-posted it below. Visit the OSC-IB Blogs site and explore the posts on other areas of interest for students and for teachers.

It seems that recently, my tech and education reading has been full of information and opinions about AI, machine learning and robots. In this post I present you with a collection of articles exploring how humans' learning to relate to learning machines intersects with our world of educating young people.

The first is an article from The Conversation by Stephen Corbett, Head of School of Education & Childhood Studies, University of Portsmouth: No, mobile phones should not be banned in schools, in which the author explores ideas and options for dealing with students and smartphones in schools. "Whether we embrace it or not, mobile technology is a fundamental part of the modern world. Today’s students will have jobs that rely on technology, and they need to be mature enough to use it wisely – and appropriately."

Artificial Neural Network delirium by Google Research: "iterative-lowlevel-feature-layer" flickr photo by The Liberty Looker https://flickr.com/photos/55049135@N00/18877259068, shared into the public domain using the Public Domain Mark (PDM)

Having considered this relatively simple question, move on to a post on Fast Company, The case against teaching kids to be polite to Alexa by Mike Elgan, exploring the "courtesy conundrum": "When parents tell kids to respect AI assistants, what kind of future are we preparing them for?" "The world is changing. And parents need to prepare kids for the world they’ll actually live in. We need to teach them the old things, like good manners, and the new things, like the truth about AI." … "Being able to identify what makes humans different than machines is going to be a very important skill as AI devices infiltrate more and more aspects of our lives," Golin says. "For starters, kids need to learn that Amazon Echos and Google Homes do not fit in the same category as mom and dad, but in the same category as TVs and toasters."

Next is a post from the MIT Media Lab, Kids, AI devices, and intelligent toys by Stefania Druga and Randi Williams. The post reviews the findings presented in their paper “Hey Google, is it OK if I eat you?: Initial Explorations in Child-Agent Interaction,” presented at the Interaction Design and Children conference at Stanford University on June 27, 2017. "Children are growing up with technology that blurs the line between animate and inanimate objects. How does this interaction affect kids’ development?" The authors' research focuses on three key questions: "Beyond ethical and security issues, the emergence of these devices raises serious questions about how children’s interactions with smart toys may influence their perceptions of intelligence, cognitive development, and social behavior. So, our long-term research objectives are motivated by the following questions:
  • How could exposure to, or interaction with, these smart bots affect children?
  • What are the short- and long-term cognitive and civic implications?
  • What design considerations could we propose to address the ethical concerns surrounding these issues?"
Fourth on the list is an article by AI researcher Sherol Chen, How to Explain AI to Kids: Science Fiction, Movie Trailers, and Youtube Videos I Use to Help Kids Understand Artificial Intelligence. "...I’ve taught lessons on Artificial Intelligence in six different countries from ages as young as 11 to graduate student level, and regardless of culture, there are two important concepts that I make sure to introduce young students: curiosity and grit. Without the means to cultivate curiosity and grit, many students avoid Computer Science before they even begin.

"There are three ways I do this:
  1. Motivations from History: “The Why.” Give them the historical context and motivations for the technology they use everyday, making sure they understand that before these devices existed, they were just a crazy idea someone dreamed up.
  2. Productive Curiosity: “The What.” Give permission and encouragement towards asking the right questions of “Why?,” “What?,” and “How?.” Lead them by demonstrating what the right kind of questions feel like by tying it to stories and concepts they are familiar with.
  3. Ideas worth Realizing: “The How.” Show them in what ways they could dream, while emphasizing that if it really was that obvious and easy, someone would have done it by now."
Chen's post ends with a paragraph which will resonate with anyone teaching and learning in the  IB world:   "So, do I ever actually tell them what AI is? Not really. I show them a bunch of YouTube videos and lead them through discussions on how they’d like to connect these dots. To me, how you define AI is only as meaningful as the community you find yourself in, and even then, it slightly changes every so often. What I’d rather encourage is the lifelong curiosity on what it means to be human, and how we can be better at that."

For a basic introduction to artificial intelligence, watch this 6-minute video from HubSpot ("Software to fuel your growth and build deeper relationships, from first hello to happy customer and beyond.")