Sample Artificial Intelligence Syllabus Statements
ACADEMIC INTEGRITY
The following pages are part of the Academic Integrity toolkit.
You can review each page in sequence or jump to a specific topic by following the hyperlinks:
- Academic Integrity Section Overview
- Promoting Academic Honesty
- Investigating Suspected Academic Dishonesty
- Academic Integrity in Online Environments
- Artificial Intelligence and Academic Integrity
- Sample Artificial Intelligence Syllabus Statements (This page)
- Additional Resources
What are some sample syllabus statements about the use of artificial intelligence?
Syllabus Statements
Below are example statements from other universities, organized by the level of AI use permitted in a course. These examples come from Classroom Policies for AI Generative Tools, a resource compiled by Lance Eaton. Choose the example that best aligns with your policy and revise it to include specific details for your course.
Example #1: Students may not use AI at all
University of Delaware; Kevin R. Guidry
Students are not allowed to use advanced automated tools (artificial intelligence or machine learning tools such as ChatGPT or DALL-E 2) on assignments in this course. Each student is expected to complete each assignment without substantive assistance from others, including automated tools.
Example #2: Students may use AI under certain circumstances
George Washington University, Alexa Alice Joubin
Policy on the use of generative artificial intelligence tools:
Using an AI-content generator such as ChatGPT to complete assignments without proper attribution violates academic integrity. By submitting assignments in this class, you pledge that they are your own work and that you attribute the use of any tools and sources.
Learning to use AI responsibly and ethically is an important skill in today’s society. Be aware of the limits of conversational, generative AI tools such as ChatGPT.
- Quality of your prompts: The quality of its output directly correlates to the quality of your input. Master “prompt engineering” by refining your prompts in order to get good outcomes.
- Fact-check all of the AI outputs. Assume it is wrong unless you cross-check the claims with reliable sources. The current AI models will confidently reassert factual errors. You will be responsible for any errors or omissions.
- Full disclosure: Like any other tool, the use of AI should be acknowledged. At the end of your assignment, write a short paragraph explaining which AI tool you used and how you used it, if applicable. Include the prompts you used to get the results. Failure to do so is in violation of academic integrity policies. If you merely use the instructional AI embedded within Packback, no disclosure is needed; that is a pre-authorized tool.
Here are approved uses of AI in this course. You can take advantage of generative AI to:
- Fine-tune your research questions by using this tool: https://labs.packback.co/question/ Enter a draft research question, and the tool can help you find related, open-ended questions.
- Brainstorm and fine-tune your ideas; use AI to draft an outline to clarify your thoughts.
- Check grammar, rigor, and style, and get help finding the right expression.
Example #3: Students will use AI for some assignments but not all
University of Queensland (UQ), Australia; Kelly Matthews
We will use AI tools that harness large language models, including ChatGPT (and DALL-E 2, among others), as pedagogical opportunities for learning and teaching in the course. Doing so aligns with the course objective on digital literacies (course objective 4) and opens up a class dialogue about the role of AI in education, including opportunities and complexities for teachers' everyday work in facilitating the learning of diverse student cohorts. AI in education is a vital topic for pre-service teachers who have to navigate ongoing changes in the educational landscape caused by digital technologies like AI and machine learning.
Maintaining high ethical standards of integrity, as per UQ policy and as professional teachers, means that any use of AI in assessment tasks will be identified and referenced.
At the beginning of the course, we will co-create a class agreement on the use of AI tools that ensures everyone
- has equal access to such tools and knowledge of their benefits and limitations;
- understands the appropriate use of them; and
- is clear on policies and procedures for their use.
The co-created class agreement will align with UQ's academic integrity policies and procedures. We will revisit the agreement throughout the semester to ensure all students and the teaching team have a shared understanding of expectations and policies while recognising we will hold differing personal and professional views on AI in education.
Example #4: Students may ask the instructor for permission to use AI
University of Delaware; Kevin R. Guidry
Students are allowed to use advanced automated tools (artificial intelligence or machine learning tools such as ChatGPT or DALL-E 2) on assignments in this course if instructor permission is obtained in advance. Unless given permission to use those tools, each student is expected to complete each assignment without substantive assistance from others, including automated tools.
You may also want to require students to explicitly document or acknowledge their use of this tool. Potential language for that:
If permission is granted to use advanced automated tools (artificial intelligence or machine learning tools such as ChatGPT or DALL-E 2), they must be properly documented and credited. Text generated using ChatGPT-3 should include a citation such as: “ChatGPT-3. (YYYY, Month DD of query). “Text of your query.” Generated using OpenAI. https://chat.openai.com/” Material generated using other tools should follow a similar citation convention.
You may also want to require students to provide a brief explanation of how they used a particular tool. For example:
If a tool is used in an assignment, students must also include a brief (2-3 sentence) description of how they used the tool.
Example #5: Students may use AI as long as it is cited
FOUNDATION MODEL CLASS POLICY, v1.0; Ryan S. Baker. May be used under Creative Commons Attribution-ShareAlike 3.0 (CC BY-SA 3.0).
Within this class, you are welcome to use foundation models (ChatGPT, GPT, DALL-E, Stable Diffusion, Midjourney, GitHub Copilot, and anything after) in a totally unrestricted fashion, for any purpose, at no penalty. However, you should note that all large language models still have a tendency to make up incorrect facts and fake citations, code generation models have a tendency to produce inaccurate outputs, and image generation models can occasionally come up with highly offensive products. You will be responsible for any inaccurate, biased, offensive, or otherwise unethical content you submit regardless of whether it originally comes from you or a foundation model. If you use a foundation model, its contribution must be acknowledged in the hand-in; you will be penalized for using a foundation model without acknowledgement. Having said all these disclaimers, the use of foundation models is encouraged, as it may make it possible for you to submit assignments with higher quality, in less time. The university's policy on plagiarism still applies to any uncited or improperly cited use of work by other human beings, or submission of work by other human beings as your own.
From Boston University
Students shall
- Give credit to AI tools whenever used, even if only to generate ideas rather than usable text or illustrations.
- When using AI tools on assignments, add an appendix showing (a) the entire exchange, highlighting the most relevant sections; (b) a description of precisely which AI tools were used (e.g., ChatGPT private subscription version or DALL-E free version); (c) an explanation of how the AI tools were used (e.g., to generate ideas, turns of phrase, elements of text, long stretches of text, lines of argument, pieces of evidence, maps of conceptual territory, illustrations of key concepts, etc.); and (d) an account of why AI tools were used (e.g., to save time, to surmount writer’s block, to stimulate thinking, to handle mounting stress, to clarify prose, to translate text, to experiment for fun, etc.).
- Not use AI tools during in-class examinations or assignments unless explicitly permitted and instructed.
- Employ AI detection tools and originality checks prior to submission, ensuring that their submitted work is not mistakenly flagged.
- Use AI tools wisely and intelligently, aiming to deepen understanding of subject matter and to support learning.
Instructors shall
- Seek to understand how AI tools work, including their strengths and weaknesses, to optimize their value for student learning.
- Treat work by students who declare no use of AI tools as the baseline for grading.
- Use a lower baseline for students who declare use of AI tools, depending on how extensive the usage, while rewarding creativity, critical nuance, and the correction of inaccuracies or superficial interpretations in response to suggestions made by AI tools.
- Employ AI detection tools to evaluate the degree to which AI tools have likely been employed.
- Impose a significant penalty for low-energy or unreflective reuse of material generated by AI tools and assign zero points for merely reproducing the output from AI tools.
This policy recognizes that
- This policy depends on goodwill, a sense of fairness, and honorable character.
- Some instructors may prefer stronger restrictions on the use of AI tools, and they are free to impose them so long as care is taken to maintain transparency and fairness in grading.
- This policy takes account of the existence of subscription versions of AI tools, which are not affordable for some students; the policy may need to be revised as the differences between subscription and free versions become better understood.
- This policy may be revised in light of other policies and novel technological developments in AI tools.
Example #6: Students will use AI throughout the course
State University of New York - Oswego, Mohammad Tajvarpour
ARTIFICIAL INTELLIGENCE (AI) POLICY
This course encourages and embraces the ethical use of Artificial Intelligence (AI). Throughout the course, it is essential to utilize generative AI systems, including but not limited to Text to Text, Text to Image, Text to Audio, and Image to Video, in a manner that upholds integrity.
As a student in this course, you are expected to actively incorporate AI tools while upholding integrity. You hold the responsibility to assess the integrity and impartiality of your submissions, ensuring they remain unbiased. It is important to recognize that AI has inherent limitations, and human supervision is necessary to verify the quality and appropriateness of the output. Thus, exercising responsible AI usage requires human oversight and verification.
Moreover, you are required to thoroughly read and certify the content of each submission. This entails a careful review to confirm the accuracy and suitability of the AI-generated content before submission.
AI Acknowledgement: To promote transparency, every assignment must include an "AI Acknowledgement" section. This section should clearly explain how AI was employed in the preparation and composition of the assignment. This acknowledgement allows us to recognize the role of AI in the learning process and understand its impact on the work produced.
By adhering to this AI policy, we aim to cultivate a learning environment where AI tools are utilized responsibly, ensuring the integrity of our work and promoting ethical AI practices throughout the course.