
Digital Literacy and Critical Thinking

Recommended reading from Inside Higher Ed as we head into another school year, perhaps with students in the classroom, or some of the students some of the time in some of the classrooms, or no students, or no teachers… But almost certainly something digital.

"Going Digital by Knowing Digital", posted by Mark Lieberman on March 13, 2019.


Here are three paragraphs from the post to get you interested:

 "Digital literacy is spreading throughout the Winston-Salem curriculum as well. Students in a general chemistry course assemble current events blogs that help them learn the intricacies of the Spark tool while beefing up their science knowledge. A writing instructor has transformed her course into a project-based format that examines the digital-print divide and its effect on how information is transmitted and perceived."

"Cohn likes to use the term “digital fluencies” to describe the difference between the ability to use technology and the ability to critique it. Turning on a computer and opening an internet browser is using technology. Understanding the domain of the website and assessing the design require a deeper understanding."

"Cohn envisions writing instructors asking students to construct essays about how and from where they consume information, and science instructors urging students to interrogate the difference between looking at a virtual-reality model of a human body and a hand-drawn sketch. At her own institution, Cohn has been teaming up with faculty members to offer in-class workshops on these topics.
“A question we can help them think through is ‘Why?’” Cohn said."

To Scale: The Solar System

To Scale: The Solar System from Wylie Overstreet on Vimeo.

On a dry lakebed in Nevada, a group of friends build the first scale model of the solar system with complete planetary orbits: a true illustration of our place in the universe.
A film by Wylie Overstreet and Alex Gorosh
alexgorosh.com
wylieoverstreet.com
Copyright 2015
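If you are curious just how compressed a true scale model has to be, a few lines of Python make the point. This is only a back-of-the-envelope sketch: the marble-sized Earth is my assumption (the filmmakers' exact figures are not reproduced here), and the planetary numbers are standard rounded values.

```python
# Back-of-the-envelope scale model: assume Earth is a ~1.4 cm marble
# (my assumption, not the filmmakers' published figure) and scale the
# rest of the solar system to match.
MODEL_EARTH_DIAMETER_M = 0.014
REAL_EARTH_DIAMETER_M = 12_742_000

scale = MODEL_EARTH_DIAMETER_M / REAL_EARTH_DIAMETER_M  # model metres per real metre

# (real diameter in km, mean orbital radius in millions of km), rounded
bodies = {
    "Sun":     (1_391_400, 0.0),
    "Earth":   (12_742, 149.6),
    "Jupiter": (139_820, 778.5),
    "Neptune": (49_244, 4_495.1),
}

for name, (diameter_km, orbit_mkm) in bodies.items():
    model_diameter_cm = diameter_km * 1_000 * scale * 100   # km -> m -> model cm
    model_orbit_m = orbit_mkm * 1e9 * scale                 # million km -> model m
    print(f"{name:8s} diameter {model_diameter_cm:6.1f} cm, "
          f"orbit radius {model_orbit_m:7.1f} m")
```

Even with Earth shrunk to a marble, the Sun comes out at about a metre and a half across and Neptune's orbit at roughly five kilometres in radius, which is why a dry lakebed is about the only place the whole model fits.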

Moral Machine

My most recent article has been posted over on the OSC IB Blogs site: Moral Machines. I've re-posted it below. Visit the OSC IB Blogs site and explore the posts on other areas of interest for students and for teachers.
_______________________

Have you seen this video?
You’ll find it on this page http://moralmachine.mit.edu/ at the MIT website: “A platform for gathering a human perspective on moral decisions made by machine intelligence, such as self-driving cars. We show you moral dilemmas, where a driverless car must choose the lesser of two evils, such as killing two passengers or five pedestrians. As an outside observer, you judge which outcome you think is more acceptable. You can then see how your responses compare with those of other people.” You are offered 10 languages to choose from.
There are scores of posts about this topic on the web. In August 2016, The Verge wrote,
“The Moral Machine adds new variations to the trolley problem: do you plow into a criminal or swerve and hit an executive? Seven pregnant women (who are jay-walking) or five elderly men (one of whom is homeless) plus three dogs? It’s basically a video game, and you’re trying to min-max human life based on which people you think most deserve to live and how active you are willing to be in their death…A serious question: what is the intended use for this information? The website describes it as “a crowd-sourced picture of human opinion on how machines should make decisions when faced with moral dilemmas,” but the information that’s actually being gathered is more unsettling. Any output from this test will produce some kind of ranking of the value of life (executive > jogger > retiree > dog, e.g.), and since the whole test is premised on self-driving tech, it seems like the plan is to use that ranking to guide the moral decision-making of autonomous cars?”
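To make The Verge's worry concrete, here is a deliberately crude sketch of what “min-maxing human life” from a ranking could look like in code. The ranking and numeric weights are invented for illustration; they are not anything the Moral Machine project has published.

```python
# A crude illustration of the ranking The Verge describes. The weights are
# hypothetical, invented for this sketch (not from MIT or any real system).
VALUE = {"executive": 4, "jogger": 3, "retiree": 2, "dog": 1}

def cost(group):
    """Total 'value of life' lost if this group is hit."""
    return sum(VALUE[member] for member in group)

def choose_victims(option_a, option_b):
    """Return the group whose loss the ranking scores as the lesser evil."""
    return option_a if cost(option_a) <= cost(option_b) else option_b

# e.g. two passengers versus five pedestrians, with made-up occupants
passengers = ["executive", "jogger"]
pedestrians = ["retiree", "retiree", "jogger", "jogger", "dog"]
print(choose_victims(passengers, pedestrians))  # -> ['executive', 'jogger']
```

Seeing the dilemma reduced to a lookup table is exactly what makes The Verge's question, “what is the intended use for this information?”, so pointed.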
This week The Verge wrote on this topic again: “If self-driving cars become widespread, society will have to grapple with a new burden: the ability to program vehicles with preferences about which lives to prioritize in the event of a crash. Human drivers make these choices instinctively, but algorithms will be able to make them in advance. So will car companies and governments choose to save the old or the young? The many or the few?”
The headline of The Guardian’s 24 October 2018 post reads: “Who should AI kill in a driverless car crash? It depends who you ask – Responses vary around the world when you ask the public who an out-of-control self-driving car should hit”.
The article begins, “Responses to those questions varied greatly around the world. In the global south, for instance, there was a strong preference to spare young people at the expense of old – a preference that was much weaker in the far east and the Islamic world. The same was true for the preference for sparing higher-status victims – those with jobs over those who are unemployed.  When compared with an adult man or woman, the life of a criminal was especially poorly valued: respondents were more likely to spare the life of a dog (but not a cat).”
“On Wednesday, the team behind the Moral Machine released responses from more than two million people spanning 233 countries, dependencies and territories. They found a few universal decisions — for instance, respondents preferred to save a person over an animal, and young people over older people — but other responses differed by regional cultures and economic status.
The findings are important as autonomous vehicles prepare to take the road in the U.S. and other places around the world. In the future, car manufacturers and policymakers could find themselves in a legal bind with autonomous cars. If a self-driving bus kills a pedestrian, for instance, should the manufacturer be held accountable?
The study’s findings offer clues on how to ethically program driverless vehicles based on regional preferences, but the study also highlights underlying diversity issues in the tech industry — namely that it leaves out voices in the developing world.”
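Findings like “respondents preferred to save a person over an animal” come from aggregating millions of individual judgements by region. Here is a minimal sketch of that aggregation step, with invented response records and region labels; the study's real data format is not reproduced.

```python
# Estimate, per region, how often respondents spared the human in
# human-versus-animal dilemmas. All records below are invented.
from collections import defaultdict

responses = [
    # (region, who was spared, who was sacrificed)
    ("Region A", "human", "animal"),
    ("Region A", "human", "animal"),
    ("Region A", "animal", "human"),
    ("Region B", "human", "animal"),
    ("Region B", "human", "animal"),
]

spared_human = defaultdict(int)
total = defaultdict(int)

for region, spared, sacrificed in responses:
    # count only dilemmas that actually pit a human against an animal
    if {spared, sacrificed} == {"human", "animal"}:
        total[region] += 1
        spared_human[region] += (spared == "human")

for region in sorted(total):
    share = spared_human[region] / total[region]
    print(f"{region}: spared the human in {share:.0%} of cases")
```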
Nature’s website posted an article (24 October 2018) about the publication of the paper.
“…People who think about machine ethics make it sound like you can come up with a perfect set of rules for robots, and what we show here with data is that there are no universal rules,” says Iyad Rahwan, a computer scientist at the Massachusetts Institute of Technology in Cambridge and a co-author of the study.
“The survey, called the Moral Machine, laid out 13 scenarios in which someone’s death was inevitable. Respondents were asked to choose who to spare in situations that involved a mix of variables: young or old, rich or poor, more people or fewer…”  The post includes this video:
The research was published (24 October 2018) in the journal Nature (volume 563, pages 59–64, 2018) and can be read online at the link.
Abstract: “With the rapid development of artificial intelligence have come concerns about how machines will make moral decisions, and the major challenge of quantifying societal expectations about the ethical principles that should guide machine behaviour. To address this challenge, we deployed the Moral Machine, an online experimental platform designed to explore the moral dilemmas faced by autonomous vehicles. This platform gathered 40 million decisions in ten languages from millions of people in 233 countries and territories. Here we describe the results of this experiment. First, we summarize global moral preferences. Second, we document individual variations in preferences, based on respondents’ demographics. Third, we report cross-cultural ethical variation, and uncover three major clusters of countries. Fourth, we show that these differences correlate with modern institutions and deep cultural traits. We discuss how these preferences can contribute to developing global, socially acceptable principles for machine ethics. All data used in this article are publicly available.”
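The abstract's third finding, the “three major clusters of countries”, is in essence a clustering exercise over country-level preference profiles. Here is a hedged sketch of that idea, with invented profiles and SciPy's hierarchical clustering standing in for the paper's actual data and method.

```python
# Cluster countries by (invented) preference profiles. This only illustrates
# the idea; the paper's real data and analysis are not reproduced here.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

countries = ["A", "B", "C", "D", "E", "F"]
# columns: strength of preference for sparing (young, many, high-status)
profiles = np.array([
    [0.90, 0.80, 0.20],
    [0.85, 0.75, 0.25],
    [0.40, 0.90, 0.30],
    [0.45, 0.85, 0.35],
    [0.50, 0.50, 0.80],
    [0.55, 0.45, 0.75],
])

tree = linkage(profiles, method="ward")             # hierarchical clustering
labels = fcluster(tree, t=3, criterion="maxclust")  # cut into three clusters

for country, label in zip(countries, labels):
    print(country, "-> cluster", label)
```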
At first sight, this topic may not seem to relate directly to my blog area of “Vision: Tech in IB Schools”, but it will in the near future. The moral implications of teaching AI how to make decisions that IB learners and teachers can support should be discussed in school communities now, and often.