GW envisions generative AI serving as a 24/7 tutor that students can use to explore learning content, generate practice quizzes, introduce and explain alternative perspectives, highlight potentially overlooked ideas, assist with technical projects, and more. The GW Provost AI Guidelines emphasize that while genAI tools cannot replace instructor expertise, they may enable students to come to class better prepared, explore learning content more deeply, and work through the processes and products instructors lay out. While we encourage Corcoran School of the Arts & Design students to explore and investigate all tools, we propose a few ethical considerations to guide them as they navigate the world of genAI and large language models (LLMs). These considerations will help ensure students engage responsibly with available tools and remain mindful of the world around them.
After analyzing 5,000 images, Nicoletti and Bass (2023) found that images created with the generative AI tool Stable Diffusion amplified gender and racial stereotypes. Generative AI also carries the potential for inaccuracy and misleading outputs: it fabricates data with the same confidence it conveys when presenting accurate information (Nicoletti and Bass, 2023).
With the vast availability of generative AI tools, we encourage responsible and ethical use of these technologies. At Corcoran, AI tools should not substitute for your essential development and learning. Generative AI tools should not replace original creativity, imagination, thought processes, image generation, or writing. This document outlines expectations, ethical and legal considerations, and a list of sources students can refer to if and when using generative AI tools. Like the AI domain itself, this is a living document that will develop as the discourse shifts.
Your go-to checklist, if you are thinking about using genAI
- [ ] AI learns from human behavior and data, and humans are biased. Ensure that generated materials do not perpetuate biases, cause harm, or offend anyone.
- [ ] Check the accuracy of generated content every time. This is especially important because artificial intelligence tools can hallucinate and generate factually incorrect information. For more guidance on hallucinations and mitigating inaccuracy, read: How Can We Counteract Generative AI’s Hallucinations?
- [ ] Allocate additional review time in your project/process timeline to ensure multiple and diverse review perspectives, including those of peers, faculty, and project partners.
- [ ] Only use generated material if you understand the topic. When in doubt, consult a subject matter expert.
- [ ] Respecting copyright and intellectual property rights is fundamental. Ensure that AI-generated visuals do not infringe upon the rights of others. Only use AI to create original visuals or images for which you have the necessary rights.
- [ ] Ensure AI image generators do not generate the final product submitted for any of your classes. However, you can use image generators to develop and test ideas, provided you transparently share how AI tools are and were used in your projects.
<aside>
<img src="/icons/bookmark_blue.svg" alt="/icons/bookmark_blue.svg" width="40px" /> Unless otherwise specified by your professor/instructor, generative AI tools usage may be limited to the following (for assignments and explorations):
- To automate or assist in organizational or managerial tasks.
- For research, brainstorming, or mood-boarding, with proper citation.
- For visual outputs, with prior declaration of intent and reasons for use, and communication with the instructor.
- For proofreading, grammar-checking, or editing original writings.
</aside>
Ethical Considerations
- AI tools are at risk of perpetuating gender and racial bias: For decades, AI models have racially profiled people of color, produced overtly biased outputs, and perpetuated biases in institutions and programs. The teams developing AI tools often lack diversity, the code and training data are rarely inclusive, and the tools draw on largely unregulated and biased information.
- AI tools may spread misinformation and hate: Current AI tools use information generated before 2021, scraped from websites, applications, and other digital tools, including tweets, social media, and blog posts. Because these AI tools may not filter the information for a user’s request the way an MCG employee operating under an ethical code would, the output may contain subjective information, misinformation or disinformation, or offensive and unethical content.
- Users have limited information on AI decision-making models: Users have little insight into how generative tools arrive at a particular response. Because of this lack of transparency, users cannot address unintended bias in the generated information.
- Most importantly, we need to examine the usage of generative AI tools through the lens of climate change. The International Energy Agency (IEA) estimates that energy-intensive data centers, artificial intelligence (AI), and cryptocurrencies together account for roughly 2% of global electricity usage, a figure that could double by 2026 (IEA, 2024). One of the areas with the fastest-growing demand for energy is the form of machine learning called generative AI, which requires large amounts of energy both to train models and to produce answers to queries (Vox, 2024).
Legal & Recognition Considerations