AI Guidelines
General and evolving ethical considerations
- Principled Artificial Intelligence: Mapping Consensus in Ethical and Rights-Based Approaches to Principles for AI
- Understanding artificial intelligence ethics and safety (The Alan Turing Institute, 2019).
- Ethics of Artificial Intelligence and Robotics (Stanford Encyclopedia of Philosophy, 2020).
Ethical considerations of AI in Higher Education
- AI & AI: Exploring the contemporary intersections of artificial intelligence and academic integrity
- Understanding the Impact of Artificial Intelligence on Higher Education
AI literacy and evaluation do not mean you need to understand the advanced mechanics of AI. They mean that you are actively learning about the technologies involved and that you critically approach any texts you read about AI. Check out the ROBOT test, developed to help you evaluate resources created by and about AI:
Reliability
Objective
Bias
Ownership
Type
Reliability
- How reliable is the information available about the AI technology?
- If it’s not produced by the party responsible for the AI, what are the author’s credentials? Do they have any biases?
- If it is produced by the party responsible for the AI, how much information are they making available?
- Is information only partially available due to trade secrets?
- How biased is the information that they produce?
Objective
- What is the goal or objective of the use of AI?
- What is the goal of sharing information about it?
- To inform?
- To convince?
- To find financial support?
Bias
- What could create bias in the AI technology?
- Are there ethical issues associated with this?
- Are bias or ethical issues acknowledged?
- By the source of information?
- By the party responsible for the AI?
- By its users?
Ownership
- Who is the owner or developer of the AI technology?
- Who is responsible for it?
- Is it a private company?
- The government?
- A think tank or research group?
- Who has access to it?
- Who can use it?
Type
- Which subtype of AI is it?
- Is the technology theoretical or applied?
- What kind of information system does it rely on?
- Does it rely on human intervention?
This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.
Hervieux, S. & Wheatley, A. (2020). The ROBOT test [Evaluation tool]. The LibrAIry. https://thelibrairy.wordpress.com/2020/03/11/the-robot-test
| Type of Harm | Example |
| --- | --- |
| Bias and Discrimination | Software developer Jacky Alcine's experience with Google Photos in 2015. Read the BBC News story. |
| Denial of Individual Autonomy, Recourse, and Rights | Julia Angwin's study of criminal justice AI models (particularly COMPAS) revealed disparate error rates between Black and white defendants when assessing risk of recidivism. How do we operationalize decision-making in a way that aligns with equity? Read the 2016 ProPublica story. |
| Non-transparent, Unexplainable, or Unjustifiable Outcomes | Rich Caruana's work in the 1990s with rule-based and neural network models for patient risk analysis at hospitals in Pittsburgh. One rule-based model nonsensically concluded that patients with pneumonia who also have asthma should be treated as lower risk than those without asthma. The rule reflected a real correlation in the training data: asthmatics with pneumonia had better health outcomes, but only because they were considered so risky that they received the highest-priority care. Caruana ultimately rejected the neural network, reasoning that it had likely learned the same misleading correlations in a form that could elude the humans using it, potentially causing harmful decision-making. |
| Invasions of Privacy | The Cambridge Analytica-Facebook scandal of 2016, in which AI methods were deployed to target voters through psychographic profiling algorithms on Facebook. Read about this scandal. |
| Isolation and Disintegration of Social Connection | See Joe Burton's 2023 paper on algorithmic extremism. |
| Unreliable, Unsafe, or Poor-Quality Outcomes | The fatal 2018 collision between a vehicle controlled by a developmental automated driving system and a pedestrian in Tempe, AZ. The crash stemmed in part from the system's lack of training data covering jaywalking pedestrians. Read the National Transportation Safety Board report. |
(Adapted from the Alan Turing Institute's guide, Understanding artificial intelligence ethics and safety, 2019)
Check out this course: Practical Data Ethics
Groups and movements for ethical data and ethical AI
URL: https://library.brockport.edu/faculty-ai