
AI for Faculty

This guide will help faculty understand and integrate AI into their courses.

Ethics & Evaluation

AI Guidelines

AI literacy and evaluation do not mean you need to understand the advanced mechanics of AI. They mean that you are actively learning about the technologies involved and that you critically approach any texts you read about AI. Check out the ROBOT test, developed to help you evaluate resources created by and about AI.

Reliability

Objective

Bias

Ownership

Type


Reliability
  • How reliable is the information available about the AI technology?
  • If it’s not produced by the party responsible for the AI, what are the author’s credentials, and what biases might the author have?
  • If it is produced by the party responsible for the AI, how much information are they making available? 
    • Is information only partially available due to trade secrets?
    • How biased is the information that they produce?
 
Objective
  • What is the goal or objective of the use of AI?
  • What is the goal of sharing information about it?
    • To inform?
    • To convince?
    • To find financial support?
Bias
  • What could create bias in the AI technology?
  • Are there ethical issues associated with this?
  • Are bias or ethical issues acknowledged?
    • By the source of information?
    • By the party responsible for the AI?
    • By its users?
Ownership
  • Who is the owner or developer of the AI technology?
  • Who is responsible for it?
    • Is it a private company?
    • The government?
    • A think tank or research group?
  • Who has access to it?
  • Who can use it?
Type
  • Which subtype of AI is it?
  • Is the technology theoretical or applied?
  • What kind of information system does it rely on?
  • Does it rely on human intervention? 

Creative Commons License
This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.

Hervieux, S. & Wheatley, A. (2020). The ROBOT test [Evaluation tool]. The LibrAIry. https://thelibrairy.wordpress.com/2020/03/11/the-robot-test

Types of Harm: Causes and Examples
Bias and Discrimination
  • AI can reinforce, reproduce, and amplify already existing patterns of marginalization, inequity, and discrimination.  
  • AI technologies have human designers and authors with perceptions and implicit biases of their own; their creations can reflect these.
  • Data samples used to train and test algorithmic systems can be insufficiently representative.  

Software developer Jacky Alcine's experience with Google Photos in 2015.

Read BBC News Story here.
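One way unrepresentative training data produces disparate harm is purely mechanical: a model tuned on data drawn mostly from one group can perform well for that group and poorly for another. The sketch below is a hypothetical illustration (not any real system), using made-up scores and a single decision threshold fit only to the well-represented group.

```python
# Hypothetical illustration of unrepresentative training data.
# Scores and groups are invented for demonstration only.

# Feature distributions differ slightly between two groups: positives in
# group A cluster around 0.7, positives in group B around 0.5.
group_a_pos = [0.65, 0.70, 0.75, 0.72, 0.68]
group_a_neg = [0.30, 0.35, 0.40, 0.32, 0.38]
group_b_pos = [0.48, 0.51, 0.55, 0.50, 0.53]
group_b_neg = [0.30, 0.33, 0.36, 0.31, 0.35]

# The training sample comes entirely from group A, so the decision
# threshold is placed midway between group A's positives and negatives.
threshold = (min(group_a_pos) + max(group_a_neg)) / 2   # 0.525

def predict(x):
    return x >= threshold

def error_rate(pos, neg):
    # Fraction of examples the threshold classifier gets wrong.
    errors = sum(not predict(x) for x in pos) + sum(predict(x) for x in neg)
    return errors / (len(pos) + len(neg))

print("Group A error rate:", error_rate(group_a_pos, group_a_neg))  # 0.0
print("Group B error rate:", error_rate(group_b_pos, group_b_neg))  # 0.3
```

The classifier is flawless on the group it was trained on and wrong nearly a third of the time on the group it never saw; nothing malicious was coded, yet the outcome is discriminatory.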

 

Denial of Individual Autonomy, Recourse, and Rights
  • AI systems automate decisions, predictions, and classifications that were previously attributable exclusively to human agents. Who is held accountable if complex algorithmic systems generate harmful outcomes?
  • In cases of injury/harm, lack of ability to attribute accountability might violate human rights of individuals involved. 

Julia Angwin's study of criminal justice AI models (particularly COMPAS) revealed disparate error rates between Black and white defendants when assessing the risk of recidivism. How do we operationalize decision making in a way that aligns with equity?

Read 2016 ProPublica story here.
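The metric at the heart of the ProPublica analysis was the false positive rate by group: among people who did not reoffend, how many were nonetheless labeled high risk? The sketch below computes that rate on a tiny invented set of records (these are toy rows, not ProPublica's actual data).

```python
# Toy records (hypothetical, not ProPublica's data): each entry is
# (group, predicted_high_risk, actually_reoffended).
records = [
    ("A", True,  False), ("A", True,  True),  ("A", False, False),
    ("A", False, False), ("A", True,  False),
    ("B", True,  True),  ("B", False, False), ("B", False, True),
    ("B", True,  True),  ("B", False, False),
]

def false_positive_rate(group):
    # Among members of `group` who did NOT reoffend, the share who were
    # labeled high risk anyway.
    negatives = [r for r in records if r[0] == group and not r[2]]
    false_pos = [r for r in negatives if r[1]]
    return len(false_pos) / len(negatives)

print("FPR, group A:", false_positive_rate("A"))  # 0.5
print("FPR, group B:", false_positive_rate("B"))  # 0.0
```

A model can be "accurate" overall while its mistakes fall far more heavily on one group, which is why per-group error rates, not just aggregate accuracy, matter when auditing such systems.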

Non-transparent, Unexplainable, or Unjustifiable Outcomes
  • Machine learning models generate results through operations that are sometimes beyond, or can elude, the interpretive capabilities of human intelligence, minimizing human insight into potential traces of bias, inequity, or unfairness in results that directly affect human beings.

Rich Caruana's work in the 1990s with rule-based and neural network models for patient analysis at hospitals in Pittsburgh. One rule-based model nonsensically concluded that patients with pneumonia who also have asthma should be treated as lower risk than those without asthma. The rule reflected a real correlation in the training data: asthmatics with pneumonia had better health outcomes than non-asthmatics, but only because they were considered so high-risk that they received the highest-priority care.

Caruana ultimately rejected the neural network; he reasoned that it had likely learned the same kind of correlation, but in a way that would elude the humans using it, potentially leading to harmful decision-making.
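The asthma rule is a classic confounding pattern: the data record the outcome but not the intervention (priority care) that produced it. The toy records below (hypothetical, not Caruana's dataset) show how a model that only fits observed outcomes would "learn" the inverted relationship.

```python
# Toy pneumonia records (hypothetical, not Caruana's data): each patient
# has an asthma flag and an outcome. In this data asthmatics fared better
# *because* they received the highest-priority care, a fact the records
# themselves do not capture.
patients = [
    {"asthma": True,  "died": False},
    {"asthma": True,  "died": False},
    {"asthma": True,  "died": False},
    {"asthma": True,  "died": True},
    {"asthma": False, "died": False},
    {"asthma": False, "died": True},
    {"asthma": False, "died": True},
    {"asthma": False, "died": False},
]

def mortality(asthma):
    # Observed mortality rate for patients with/without asthma.
    group = [p for p in patients if p["asthma"] == asthma]
    return sum(p["died"] for p in group) / len(group)

# A model that only optimizes fit to these outcomes would conclude that
# asthma lowers risk, inverting the true clinical picture.
print("Mortality with asthma:   ", mortality(True))   # 0.25
print("Mortality without asthma:", mortality(False))  # 0.5
```

An interpretable rule-based model makes this inversion visible and correctable; an opaque model can encode the same error silently, which is the core of Caruana's objection.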

Invasions of Privacy
  • Development of AI technologies involves the use of personal data.
  • Is this data captured consensually?
  • Is data handled in a way that potentially places individuals at risk?
  • Deployment of targeted AI systems based on an individual's data might infringe on that individual's autonomy/agency.

The Cambridge Analytica-Facebook scandal of 2016. AI methods were deployed to target voters through psychographic profiling algorithms on Facebook.

Read about this here.

Isolation and Disintegration of Social Connection
  • Excessive automation potentially reduces human-to-human contact. 
  • Algorithmically enabled hyper-personalization in an increasingly digital world potentially limits, radicalizes, and polarizes individual behavior/world views.
See Joe Burton's 2023 paper on Algorithmic extremism here.
Unreliable, Unsafe, or Poor-Quality Outcomes
  • Poor data management, negligent design, and questionable deployment lead to implementation of AI systems that produce unreliable, unsafe, or poor-quality outcomes. 
  • Negative outcomes of AI affect the reality of human life.
  • Negative outcomes of poorly managed AI systems affect human trust in AI at large, regardless of how societally beneficial a piece of AI tech actually is. 

A fatal 2018 collision in Tempe, AZ, between a pedestrian and a vehicle controlled by a developmental automated driving system. The accident stemmed from the system's lack of exposure to training data that accounted for jaywalking pedestrians.

Read the National Transportation Safety Board story here.

(Adapted from the Alan Turing Institute's guide to understanding artificial intelligence ethics and safety, 2019)

Last Updated: Oct 8, 2024 2:41 PM
URL: https://library.brockport.edu/faculty-ai

This work is licensed under a Creative Commons 4.0 CC-BY-NC-SA license. We ask that anyone who adapts or reuses this guide, in whole or in part, link to our guide and give credit to Drake Memorial Library and SUNY Brockport.