How to Cite AI in College Papers: What 210 Professors Actually Require
We analyzed 210 AI policies from college syllabi to find out what professors actually expect when students disclose AI use — from a simple sentence to a full chat transcript with carbon footprint estimate.
You are taking four courses this semester. One professor says "mention it in a footnote." Another wants the full chat transcript appended to your paper. A third has not said anything about AI at all. And the fourth wants you to estimate the carbon footprint of every interaction you had with ChatGPT.
Welcome to the current state of AI citation in college coursework.
We analyzed the Syllabi Policies for Generative AI Repository, a public collection of 210 course-level AI policies drawn from 181 institutions across 75 academic disciplines. Our goal: figure out what professors actually require when it comes to citing and disclosing AI use in student work. The range is extraordinary.
37% Say Nothing, 63% Want Something
Of the 210 policies we reviewed, 77 (37%) either do not address AI citation at all or ban AI outright with no citation path. The remaining 133 policies (63%) require some form of disclosure -- but what counts as "disclosure" varies wildly.
At the bare minimum, some professors ask for a single sentence acknowledging that you used an AI tool. At the other extreme, Kennesaw State University requires students to estimate the carbon emissions, water usage, and electricity consumption of their AI interactions. Between those two poles, there is an entire spectrum of requirements that students must navigate, often across four to six courses with completely different expectations.
This inconsistency matters. A student who follows one professor's disclosure norms might unknowingly violate another's policy in the same semester. The stakes range from a deduction on a single assignment to an academic integrity violation that goes on your record.
If you are interested in how admissions offices handle AI disclosure differently from classroom policies, we covered that gap in detail in The AI Policy Gap: When Your Admissions Office and Your Professors Disagree.
Data source: Syllabi Policies for Generative AI Repository
The Citation Specificity Ladder
Not all AI citation requirements are created equal. Based on our analysis, we found eight distinct levels of specificity that professors require, forming a clear hierarchy from minimal to extreme.
Level 0: No requirement mentioned -- 77 policies (37%). These syllabi either do not address AI at all or ban it outright without providing a citation pathway. If AI is prohibited entirely, there is nothing to cite. If the syllabus is silent, students are left guessing.
Level 1: Basic acknowledgment -- The most common requirement. A simple statement that you used an AI tool. Something like "I used ChatGPT for portions of this assignment." No detail about what you did or how. Just an acknowledgment that it happened.
Level 2: Name the tool and describe how -- Here the bar rises. You must specify the exact tool -- ChatGPT, Claude, Gemini, Copilot -- and describe the nature of your use. Was it brainstorming? Editing? Research assistance? Code generation? The professor wants to know not just that you used AI, but what role it played in your process.
Level 3: Share your prompts -- 46 policies (22%). Nearly one in four professors who address AI want to see the actual prompts you used. This shifts the disclosure from a summary to a record. The logic is sound: your prompts reveal how you thought about the problem and how much intellectual work you delegated to the machine versus how much you directed yourself.
Level 4: Write a reflection -- 27 policies (13%). Beyond documenting what happened, these professors want you to analyze it. What did you learn from the AI interaction? How did it shape your thinking? What would you have done differently? Did the AI output change your argument or confirm it? This turns the disclosure into a metacognitive exercise.
Level 5: Submit full transcript or appendix -- 11 policies (5%). Append the entire chat conversation as an appendix to your paper. Every prompt, every response, every follow-up. This is a significant documentation burden, particularly for students who use AI iteratively across multiple sessions.
Level 6: Screenshots -- 6 policies (3%). Not just a transcript, but visual proof. Screenshot every AI interaction and include them in your submission. This requirement often reflects a concern that text-based transcripts can be edited or fabricated, while screenshots are harder to alter convincingly.
Level 7: Carbon impact statement -- 1 policy (Kennesaw State University). Estimate the CO2 emissions, water consumption, and electricity usage associated with your AI interactions. This is the most unusual requirement in the entire dataset and reflects a growing awareness of the environmental costs of large language model inference. Whether this is a pedagogical tool or a philosophical statement, it stands alone at the top of the specificity ladder.
The jump between each level is significant. A student comfortable with Level 1 disclosure might be completely unprepared for Level 5. And because professors rarely coordinate their AI policies within a department -- let alone across a university -- students routinely face multiple levels simultaneously.
APA vs. MLA vs. Chicago: How to Format the Citation
Beyond what to disclose, 48 policies specify how to format the citation. The breakdown: 34 require APA format, 13 require MLA, and 1 (Habib University) provides a BibTeX template for students in technical fields.
Here is what each looks like in practice:
APA format:
OpenAI. (2025). ChatGPT (GPT-4) [Large language model]. https://chat.openai.com
APA has published official guidance, within its 7th-edition style, on citing AI-generated content, making it the most standardized option. The format treats the AI company as the author, the tool as the title, and includes a bracketed description of the source type.
MLA format:
"Prompt text" prompt. ChatGPT, GPT-4 version, OpenAI, 15 Feb. 2026, chat.openai.com.
MLA's approach is still evolving. The current recommendation treats the AI output as a response to your prompt, names the tool and version, identifies the company, and includes the date of the interaction. Because MLA guidelines have been updated multiple times since generative AI became widespread, check with your professor for which version they expect.
BibTeX (Habib University example):
This is rare but worth noting. Habib University provides a BibTeX template that students in computer science and engineering courses can use in LaTeX documents. If you are writing in a technical discipline and your professor has not specified a format, APA is the safest default.
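Habib University's actual template is not reproduced in the repository, but a generic BibTeX entry for an AI tool might look like the following sketch (the citation key, dates, and field values are hypothetical placeholders, not Habib's template):

```bibtex
@misc{openai2025chatgpt,
  author       = {{OpenAI}},
  title        = {ChatGPT (GPT-4)},
  year         = {2025},
  howpublished = {\url{https://chat.openai.com}},
  note         = {Large language model. Accessed 15 Feb. 2026.}
}
```

The double braces around the author keep BibTeX from treating "OpenAI" as a first-name/last-name pair, and the `note` field carries the access date, since model behavior changes over time.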
When no format is specified: If your professor requires AI citation but does not name a style, default to whatever citation style the course already uses. A history course using Chicago should cite AI in Chicago format. A psychology course using APA should cite AI in APA format. When in doubt, ask -- and keep a record of the answer.
The Best AI Disclosure Statements We Found
Some syllabi do more than require disclosure -- they model what good disclosure looks like. Here are the strongest examples we found.
Austin Community College (Horticulture) provides an example statement that is specific, actionable, and easy to replicate:
"In this paper, ChatGPT 5o was used to help establish a baseline idea to write the paper, then to help locate primary sources A, B, C, and then to edit the grammar in the last 3 pages of the document on 8/18/2025."
This is excellent because it names the tool and version, describes three distinct uses (ideation, source-finding, editing), specifies which parts of the paper were affected, and includes a date. Any student can follow this template.
Georgia Tech's "RAD" framework distills disclosure into three principles: Responsible, Allowed, Disclosed. Students must confirm that their AI use was responsible (appropriate for the task), allowed (permitted by the assignment), and disclosed (documented transparently). The framework's strength is its simplicity -- students can apply it as a quick checklist before submitting any assignment.
Western New England University takes an interesting inversion: the "non-use declaration." Students must explicitly state that they did NOT use AI tools. This flips the default assumption. Instead of only requiring disclosure when AI was used, it requires a statement either way. The pedagogical logic is that forcing students to actively confirm non-use makes the decision more conscious than simply saying nothing.
If you are navigating AI disclosure for college applications rather than coursework, our guide on how to write an AI disclosure for your college application covers the admissions side. And for a broader look at how disclosure norms are spreading beyond education, see AI Disclosure: College Apps, Job Apps, Everywhere.
The Most Unusual Requirements
Some policies go well beyond standard citation. These are the outliers -- requirements that are creative, strict, or philosophically distinctive.
Kalamazoo College requires students to manually re-type (not copy-paste) any AI-generated output they want to include in their work. The logic is pedagogical: the physical act of retyping forces students to read and process every word the AI produced rather than passively accepting a block of text. It is slow by design.
Indiana University requires that any AI-generated text be highlighted in light gray within the submitted document. Additionally, AI-produced content is capped at 25% of any submission. This creates a visual and structural constraint: students and instructors can immediately see how much of the work came from a machine, and exceeding the cap is a policy violation regardless of quality.
Boston University uses a differential grading baseline for students who declare AI tool use. Students who disclose AI assistance are evaluated against a higher standard on the assumption that AI-assisted work should be better than unassisted work. This is one of the few policies that builds AI use into the grading rubric itself rather than treating it as a binary pass/fail disclosure question.
Kennesaw State University stands alone with its environmental impact requirement. Students must estimate the CO2 emissions, water usage, and electricity consumption associated with their AI interactions and include this estimate in their submission. Whether you view this as a meaningful environmental exercise or a creative deterrent, it is the single most unusual AI citation requirement in the 210-policy dataset.
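Kennesaw's syllabus does not prescribe a calculation method. A back-of-the-envelope estimate might simply multiply a query count by assumed per-query figures, as in this sketch (every constant below is an illustrative placeholder, not a measured value):

```python
# Rough environmental estimate for a chat session.
# All per-query constants are illustrative assumptions, NOT measured values.
WH_PER_QUERY = 3.0         # assumed electricity per query, watt-hours
ML_WATER_PER_QUERY = 30.0  # assumed cooling water per query, milliliters
G_CO2_PER_KWH = 400.0      # assumed grid intensity, grams CO2 per kWh

def estimate_footprint(num_queries: int) -> dict:
    """Return rough electricity, water, and CO2 totals for a session."""
    kwh = num_queries * WH_PER_QUERY / 1000.0
    return {
        "electricity_kwh": kwh,
        "water_liters": num_queries * ML_WATER_PER_QUERY / 1000.0,
        "co2_grams": kwh * G_CO2_PER_KWH,
    }

print(estimate_footprint(20))
```

A 20-query session under these assumptions works out to 0.06 kWh, 0.6 liters of water, and 24 grams of CO2 -- small numbers individually, which is presumably part of the pedagogical point.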
Practical Advice: What to Default To
If you are staring at a syllabus that says something vague like "cite any AI tools used" without further detail, here is a universal template that covers the vast majority of professor requirements:
- What tool you used. Name it specifically: ChatGPT (GPT-4), Claude 3.5 Sonnet, Gemini, Grammarly, etc.
- What you used it for. Describe the task: brainstorming, outlining, editing grammar, generating code, finding sources, translating.
- What you changed from its output. Note how you modified, verified, or built upon what the AI produced. This shows your intellectual contribution.
- Date of use. Include when the interaction happened. AI models change over time, and the date makes your citation verifiable.
This four-part statement satisfies roughly 90% of the policies we analyzed. It covers Levels 1 through 3 on the specificity ladder and provides a foundation that can be expanded for Level 4 (add a reflection paragraph) or Level 5 (append the transcript).
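Put together, the four parts might read something like this (the tool name, tasks, and date are hypothetical):

```text
AI Use Disclosure: I used ChatGPT (GPT-4) on 3 Mar. 2026 to brainstorm an
outline and to check grammar in my conclusion. I restructured the outline
substantially, kept none of the AI's wording verbatim, and verified every
source it suggested against the library catalog.
```

One sentence per part is enough; the goal is a statement a professor can verify at a glance, not an essay about your process.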
When in doubt, disclose more rather than less. Nothing in the 210 policies we reviewed penalizes a student for being too transparent about their process. Plenty penalize students for being too opaque.
For a deeper look at how disclosure norms work at the admissions level, see our analysis of the disclosure landscape across 150+ universities. If you are wondering whether to tell colleges you used AI on your application, we covered the data-driven answer in Should You Tell Colleges You Used AI?. And for the methodology behind how we classify disclosure requirements from D0 (no mechanism) to D3 (mandatory attestation), see our classification methodology.
Source Credit
The course-level data in this analysis comes from the Syllabi Policies for Generative AI Repository, a public dataset of 210 AI policies from college syllabi across 181 institutions. For admissions-level AI policies, see the GradPilot AI Policy Observatory, which tracks AI policies across 150+ universities using our L/D/E classification framework.