Generative artificial intelligence (generative AI or GenAI) is an umbrella term for a variety of computer-based applications that can produce new content — text, images, sounds, videos, music, numerical data sets, software code — in response to a user prompt and on the basis of algorithmic modeling. Popular generative AI tools — ChatGPT, Bard, Bing Chat, DALL-E, Midjourney, Copilot, and others — use large sets of raw data to predict statistically probable outputs that can appear startlingly human-like. Since such outputs can include, for instance, the kinds of writing required of students for classroom assignments, the kinds of audio or video content that may violate copyright law, and the sorts of data sets that can reveal legally protected personal or corporate information, generative AI can undermine academic, legal, and more broadly ethical standards of integrity. It also has enormous potential to assist student learning, enhance artistic and business productivity, and enable scientific and medical discovery.
In light of these perils and opportunities, the College of Arts and Sciences has crafted statements of principle meant to guide the use of generative AI in both teaching and research. If you have questions about these statements, contact Assistant Dean Albert Pionke. We also strongly encourage faculty to visit the website of the Artificial Intelligence Teaching Enhancement Initiative, which helps faculty integrate AI into their teaching by providing ready-to-use AI resources.
AI and Teaching
Generative AI should be used only at the discretion and with the full knowledge and permission of an instructor, who should state on each course syllabus the degree to which it is or is not permitted. If used at all, it should be an aid to rather than a substitute for learning in the classroom, whether in-person or virtual. For students, this means that no assignment substantially generated by an AI application should be submitted as the student’s own work, and all assignments completed with the aid of an AI application should be acknowledged as such. For instructors, this means that no grade on student work should be substantially assigned by an AI application, and all grades generated with the aid of an AI application should indicate clearly how the AI was used to assist with grading. Instructors should further guard against AI-generated “leaderboards,” “score cards,” or other aggregated data displays that make individual students’ grades visible to others in the class.
AI and Research
Faculty and student researchers who choose to incorporate generative AI in their work should be mindful of the associated risks to security, privacy, confidentiality, intellectual property, publication and grantor compliance, and accuracy. Any data entered into a publicly available AI tool becomes public itself, so researchers must scrub all personal information (names, SSNs, health status, enrollment status), proprietary material (patented, trademarked, or copyrighted), and other sensitive information to avoid FERPA, HIPAA, and research secrecy violations or future intellectual property disputes. Moreover, any publications or grant applications produced with the aid of generative AI must be thoroughly edited to remove any such violations as well as inaccuracies (whether outright inventions or inherent biases) that the application may have imported from its own much larger dataset. Researchers should also be aware that a number of prominent journals (including Science and Nature) and grantors (e.g., the NIH) have put in place strict policies governing the use of generative AI; violating these policies could render a researcher’s work unpublishable or unfundable. Last, but certainly not least, researchers must consider whether the use of AI will produce any ethical conflicts, including but not limited to breaches of academic integrity, methodological transparency, and civil and human rights. Fundamentally, all researchers remain responsible for the content they produce, whether with or without the aid of generative AI tools.