Ethical Considerations for AI Use at Skidmore

Generative AI is a rapidly evolving technology with social, legal, and ethical implications that continue to emerge. While our understanding of these tools is still developing, there are consistent concerns about their influence on user experience, privacy, labor, the environment, and the nature of work and learning itself. This overview is intended for all faculty, staff, and students at Skidmore College and aims to promote thoughtful, critical engagement with generative AI technologies. 

The sections that follow outline key ethical considerations, including bias, hallucinations, labor concerns, accessibility, environmental impact, and privacy, to help users critically assess and responsibly engage with AI technologies. While these considerations offer broad guidance, the use of generative AI must always be evaluated within the specific context in which it is used, including the audience and purpose. This resource is not meant to authorize the use of AI in any particular class or situation: students should adhere to their instructor’s guidelines. Students can visit this website for examples of how faculty define AI use in their courses. Faculty and staff are also encouraged to consider community standards, publishing and granting agency requirements, and other relevant professional or institutional expectations when deciding how to use AI in their work. For specific examples and references related to the ethical considerations discussed here, please refer to the Ethical Considerations for AI Use – Examples, which provides links to relevant articles.

Bias

AI models reflect the biases in their training data, often amplifying historical inequalities and reinforcing preexisting assumptions. Selection bias can result in the underrepresentation of marginalized communities, while confirmation bias may lead users to refine prompts until they receive a desired response. Overreliance on AI-generated outputs can also contribute to automation bias, in which users accept machine-generated information without critical evaluation.

Mitigating these biases requires a conscious effort to evaluate AI-generated responses, cross-reference information with credible sources, and engage with multiple perspectives. Faculty, staff, and students should approach AI outputs with skepticism and a commitment to intellectual rigor, ensuring that reliance on AI does not replace critical analysis.


Hallucinations

AI hallucinations occur when generative AI tools produce incorrect, misleading, or fabricated content. These hallucinations may take the form of citations to nonexistent sources, incorrect historical or scientific claims, or inconsistencies in AI-generated images and sounds. Because AI generates responses based on probability rather than comprehension, it may create content that appears plausible but lacks factual accuracy.

Image-based and sound-based AI systems are also prone to hallucinations, though in different ways than text-based models. Instead of generating nonsensical word combinations, image-generating AI can introduce visual distortions, such as adding extra fingers to hands, because it recognizes statistical patterns without understanding anatomy: it has learned that fingers follow a general structure but not their precise number or arrangement. Similarly, sound-based AI may produce audible artifacts or noise. This happens because many systems first construct a spectrogram, a visual representation of sound, and then attempt to convert it back into a waveform, sometimes introducing unintended distortions in the process.
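
This spectrogram round trip can be illustrated directly. Below is a minimal sketch in Python, assuming the librosa and soundfile packages are installed; the audio clip is an example bundled with librosa (downloaded on first use). It computes a magnitude spectrogram, discards the phase information, and rebuilds a waveform with the Griffin-Lim algorithm; the estimated phase is one source of the audible artifacts described above.

```python
import numpy as np
import librosa
import soundfile as sf

# Load a short example clip and compute its magnitude spectrogram.
y, sr = librosa.load(librosa.ex("trumpet"))
S = np.abs(librosa.stft(y))  # magnitude only: the original phase is discarded

# Rebuild a waveform by estimating the missing phase (Griffin-Lim algorithm).
y_rebuilt = librosa.griffinlim(S)

# Compare the two files by ear: the rebuilt clip can carry subtle
# "metallic" or "phasey" artifacts introduced by the reconstruction.
sf.write("original.wav", y, sr)
sf.write("rebuilt.wav", y_rebuilt, sr)
```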

Identifying AI hallucinations requires vigilance. Users should verify AI-generated references and factual claims against trusted sources to ensure that misinformation does not spread unchecked. Keeping a log of AI interactions—including prompts and follow-up verification steps—can foster a habit of critical evaluation and increase awareness of AI’s limitations across a variety of contexts.
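
One lightweight way to keep such a log is a simple append-only file. The sketch below is one possible approach in Python; the file name, column layout, and example entries are illustrative assumptions, not a prescribed format.

```python
import csv
from datetime import datetime, timezone

def log_interaction(prompt: str, response: str, verification: str,
                    path: str = "ai_log.csv") -> None:
    """Append one AI interaction and a note on how it was verified."""
    with open(path, "a", newline="", encoding="utf-8") as f:
        csv.writer(f).writerow([
            datetime.now(timezone.utc).isoformat(),  # when it happened
            prompt,                                  # what was asked
            response,                                # what the tool returned
            verification,                            # how it was checked
        ])

log_interaction(
    "Suggest three sources on the history of Saratoga Springs",
    "(paste the tool's response here)",
    "Checked each citation against the library catalog; one did not exist",
)
```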


Labor

Generative AI depends on extensive human labor. Data workers, sometimes called content moderators or annotators, collect, label, and filter the data used to train and improve these tools. Some of these worker communities have been exploited, as noted in Time magazine’s article, “150 African Workers for ChatGPT, TikTok and Facebook Vote to Unionize at Landmark Nairobi Meeting.” These employees, often described as “invisible workers” or “ghost workers,” range from those who train, annotate, or label the data to those who enhance and test the algorithms or models. Outsourced and contract data workers are especially susceptible to exploitative conditions.

Many generative AI tools rely on web scraping to collect publicly available information from the internet, often without distinguishing between freely accessible content and copyrighted material. Some models have been trained on copyrighted works without explicit permission, raising legal and ethical concerns about the rights of artists and authors. Additionally, even content used under fair use provisions may still contribute to the devaluation of creative labor. The broad and indiscriminate collection of data can also expose data workers to sensitive and distressing material, posing risks to their mental health.


Accessibility

Generative AI tools present both opportunities and challenges for accessibility in higher education. While these tools can provide accommodations and support for users with disabilities, they may also create new barriers or reinforce existing ones. The technology’s rapid development often outpaces accessibility standards, leading to interface designs that may not work well with screen readers or other assistive technologies. Additionally, the cognitive load required to craft effective prompts can present challenges for some users.

Many AI tools also struggle to consistently generate accessible content, such as providing alternative text for images or maintaining a logical heading hierarchy in documents. Users with visual, auditory, or cognitive disabilities may encounter difficulties when AI-generated content lacks proper structure or fails to meet the Web Content Accessibility Guidelines (WCAG). Furthermore, although many generative AI tools are currently free, a growing number charge for access or for premium features, creating barriers that exacerbate existing inequities.
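
One concrete verification step is to inspect AI-generated markup before publishing it. The minimal sketch below uses only the Python standard library and an invented HTML snippet to flag one common WCAG failure, <img> tags without alternative text; a real audit would use a dedicated accessibility checker rather than this single rule.

```python
from html.parser import HTMLParser

class AltTextChecker(HTMLParser):
    """Count <img> tags whose alt attribute is missing or empty."""

    def __init__(self):
        super().__init__()
        self.missing = 0

    def handle_starttag(self, tag, attrs):
        if tag == "img" and not dict(attrs).get("alt"):
            self.missing += 1

checker = AltTextChecker()
checker.feed(
    '<h1>Report</h1>'
    '<img src="chart.png">'                     # no alt text: flagged
    '<img src="logo.png" alt="Skidmore logo">'  # has alt text: fine
)
print(f"Images missing alt text: {checker.missing}")  # prints 1
```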


Environmental Impact

Training and running large language models (LLMs) and other generative AI systems require significant computational resources, resulting in substantial energy consumption and carbon emissions. A single training run for an LLM can emit as much carbon as several cars over their entire lifetimes. The environmental cost extends beyond energy usage to include water consumption for cooling data centers and mining of rare earth metals used in server hardware.

The rapid iteration of AI models, with newer versions being released frequently, compounds these environmental concerns. Additionally, the widespread adoption of AI tools in daily workflows increases the aggregate energy consumption across organizations. While cloud providers are increasingly moving toward renewable energy sources, the environmental footprint of AI remains significant and often overlooked in institutional decision-making.


Privacy

Generative AI tools collect vast amounts of data, raising concerns about privacy and data security. Many AI platforms automatically store user inputs and may integrate them into future model training, which poses risks for individuals who inadvertently share sensitive or personal information through AI interactions. Because AI privacy policies evolve frequently, users should regularly check each tool’s policy to understand how data is collected, used, stored, and shared. As a rule, do not share sensitive, personal, or confidential material with these tools.
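
A habit that supports this rule is a quick redaction pass before pasting text into an AI tool. The sketch below is illustrative Python, not a vetted PII detector: the two patterns catch common shapes of email addresses and U.S. phone numbers and will miss many other identifiers.

```python
import re

EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
PHONE = re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b")

def redact(text: str) -> str:
    """Mask obvious emails and phone numbers before sharing text."""
    return PHONE.sub("[PHONE]", EMAIL.sub("[EMAIL]", text))

print(redact("Reach the student at jdoe1@skidmore.edu or 518-580-5000."))
# -> Reach the student at [EMAIL] or [PHONE].
```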

For those handling student data, it is critical to ensure compliance with FERPA (Family Educational Rights and Privacy Act), which protects the privacy of student education records. Using generative AI tools to process or store student-related information—such as grades, course performance, advising notes, or lists of majors and minors—could pose risks if the tool does not explicitly comply with FERPA requirements. Because determining what counts as FERPA-protected data can be nuanced, faculty and staff are encouraged to consult with the Registrar’s Office before using AI tools in any capacity that involves student records. This helps ensure that institutional responsibilities around data privacy and student rights are fully upheld.

A few questions to consider:

  • What is the data privacy policy? Are you asked for consent to provide data, or are you automatically opted in?
  • How long is your data held (data retention)?
  • What safeguards are in place for de-identification?
  • How are privacy risks addressed?
  • Is there a third-party company involved? If so, what access and control does this company have over the data collected?

If you cannot find or do not trust the data privacy policy of a free AI tool, you might consider a paid subscription, where privacy protections are clearer and more explicitly stated. Paid versions of AI tools often offer stronger assurances against data collection and retention, making them a more secure option for sensitive or institutional work.

The Future of AI & Human Collaboration

As AI continues to evolve, its ethical use must be shaped by a commitment to human dignity, critical thinking, and responsible innovation. AI should be understood as an augmentative tool rather than a replacement for human expertise. Faculty, staff, and students must engage with AI critically, recognizing its potential and limitations.
