Guidelines for using generative AI in teaching at SDQ

(as of 2024-04-22)

These guidelines assume a basic familiarity with generative AI, large language models (such as GPT), and AI assistants based thereupon (such as ChatGPT). For background information, we recommend the guide by Gimpel et al.[1] and the guidelines by Universität Mannheim[2]. The guidelines focus on providing recommendations for the use of generative AI in student work, including text, code, and other artefacts, and advise on how instructors can guide their students in this endeavour. These guidelines apply to all courses within SDQ.

A violation of these guidelines can be treated as an attempt to cheat and can lead to failing the examination (grade 5.0).

General Guidelines

Generative AI is a novel technology that can improve productivity. However, its effect on the learning process is not yet well understood. These guidelines reflect the learning goals of different courses and the competencies to be acquired by students. Depending on these goals, the use of generative AI may be useful or counterproductive.

Students and teachers should be aware of the typical problems and pitfalls of generative AI in all the usage scenarios described below. The guidelines of the Universität Mannheim[2] provide an excellent overview of problems and pitfalls, organized by usage scenario. No course requires students to use generative AI unless this is explicitly stated and the necessary tools are provided. If students choose to use generative AI, it is their responsibility to get acquainted with the potential problems and pitfalls described in [1] and [2]. One example is the problem that generative AI may literally copy content from copyrighted or otherwise protected material (see [1], p. 25 and p. 35). Users of generative AI are responsible for checking their results for such infringements, for example by conducting a web search.

The use of generative AI must be documented.

An example of how this documentation should look is given below.

We believe that critical reflection is crucial for the effective and responsible application of generative AI in teaching and learning activities. Therefore, all activities that allow students to leverage generative AI in their studies should address reflection on the challenges and opportunities in a given scenario and should encourage students to critically appraise the implications of their reliance on AI.

In all cases, students remain responsible for their work. This includes those parts of their work that may have been generated with or informed by AI. This implies that students are expected to critically appraise contributions by AI, that they are expected to fully explain such elements within their code or written text, and that they acknowledge and make transparent which portions of their work were generated or informed by AI.

Teachers must provide information on these guidelines in their courses (ideally in the introductory session) and, if they wish to deviate from them, specify the extent to which and the scenarios in which generative AI may be used in the relevant course. Students, in turn, must inform themselves about whether and under which rules generative AI may be used in the relevant courses.

If the guidelines for an examination do not permit the use of generative AI for certain use cases, then using generative AI in such a case may be treated as an attempt to cheat.

Guidelines for (Pro-)Seminars at SDQ

Qualification goals of the proseminar:

Students can academically address basic topics in computer science (in a specific field of study). In this process, students can apply the steps from simple literature research to the preparation of results in written and oral form. Students are able to analyse information, to abstract, and to communicate fundamental principles and relationships in a concise form. Students can reproduce scientific results in written and oral form.

(translated from the Modulhandbuch)

Thus, we expect students to perform the above-mentioned steps themselves. This means that students are expected to read the cited literature, organize the structure of their thesis on their own, and personally formulate the text. Students may use generative AI to become more productive in these tasks by automating related auxiliary tasks (such as checking grammar and typography) and to get immediate feedback and help where they would otherwise ask their advisor (for example, asking the AI assistant to explain a concept from a paper in more straightforward terms if they have not understood it directly after reading the paper). That means AI can be used as a "sparring partner" that supports the student but not as a "ghost writer" that replaces the student.

Thus, it is ok to

  • use an AI assistant to better understand a topic covered in the literature, i.e., ask for explanations of aspects you do not understand when reading the papers,
  • use generative AI for typography as well as grammar and style corrections,
  • use generative AI to receive feedback on your texts or your outline.

It is not ok to

  • use an AI assistant to generate the outline and focus of your work with little to no input of your own,
  • ask an AI assistant to formulate entire paragraphs or sections for you,
  • generate summaries of papers without reading the papers yourself,
  • ask an AI assistant what important papers have been published on a topic without doing a keyword-based search in bibliographic databases such as the ACM Digital Library or Google Scholar first. Getting an overview by relying only on the AI assistant is problematic because of biases in the underlying data. For example, an AI assistant may favour certain author groups over others, may continue to discriminate against minoritized authors (see [3] for more information on citational justice), or may apply opaque criteria for what constitutes an “important” paper.

Advice for teachers and supervisors

  • When meeting with the students and discussing the final oral presentation, ask questions to check whether the students have thoroughly understood the ideas presented.

The (pro)seminar is intended to prepare for the bachelor's or master's thesis, and thus, it is in the core interest of students to use this opportunity to learn. We trust students to adhere to these principles.

See also: Portal:Studentische Arbeiten/Proseminar

Guidelines for Final Theses (Master’s theses, Bachelor’s theses) at SDQ

From the study and examination regulations (Studien- und Prüfungsordnung):

The Bachelor's thesis should demonstrate that students are able to work independently on a problem from their field of study within a limited period of time using scientific methods.

(translated from SPO 2022, Bachelor Informatik)

Thus, the guidelines for using generative AI in science and for authorship by the German Research Foundation (DFG) [4] and the Association for Computing Machinery (ACM) [5] (the main international professional association for computer science) apply. In particular, the ACM policy on authorship states:

The use of generative AI tools and technologies to create content is permitted but must be fully disclosed in the Work. […] If you are uncertain about the need to disclose the use of a particular tool, err on the side of caution, and include a disclosure in the acknowledgements section of the Work. Basic word processing systems that recommend and insert replacement text, perform spelling or grammar checks and corrections, or systems that do language translations are to be considered exceptions to this disclosure requirement and are generally permitted and need not be disclosed in the Work.[5]

Note that authors are required to take full responsibility for all content in the published Works.[5]

Similarly, the DFG statement on the use of generative AI in science states: When making their results publicly available, scientists should disclose, in the interests of scientific integrity, whether and which generative models they have used, for what purpose and to what extent.[4] (translated)

Thus, for most kinds of theses, students are allowed to use AI assistants but must disclose their use so that their advisors and reviewers can assess the student’s own contribution and whether this contribution indeed demonstrates the student’s ability to work independently on a problem using scientific methods. This means students should consult their advisors in advance about whether using AI assistants for a certain task is advisable. For example, using AI to generate tables for analysing measurements is usually advisable. At the other extreme, completely generating a core chapter of the thesis is usually not advisable, as little of the student’s own contribution would remain to demonstrate the above-mentioned ability.

Students shall ask their supervisors how to document the use of generative AI in their thesis. Depending on the thesis topic, different forms of documentation are appropriate (ranging from short acknowledgements described below to more detailed transcripts of the student’s interaction with the AI assistant).

For example, students may add a section called Acknowledgements, which states:

  • ChatGPT with GPT 4 and Advanced Data Analysis was used to analyze the measurements and to generate the tables used in the Section Evaluation.
  • GitHub Copilot was used to generate the implementation / the following parts of the implementation…
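
If the supervisor asks for a more fine-grained form of documentation for code-based work, one possible option is to additionally mark the generated parts directly in the source. The following Java sketch is only an illustration under that assumption; the class, the statistics logic, and the tool details mentioned in the comments are hypothetical placeholders, not a prescribed format.

  // Hypothetical example of source-level documentation of generated code.
  // Only the attribution comment illustrates the kind of transparency the
  // guidelines ask for; the class itself is a placeholder.
  import java.util.List;

  public final class MeasurementStatistics {

      private MeasurementStatistics() {
          // utility class, no instances
      }

      /**
       * Computes the arithmetic mean of the given measurements.
       *
       * Attribution: generated with GitHub Copilot and reviewed/adapted by
       * the author; also listed in the Acknowledgements section as described
       * above.
       */
      public static double mean(List<Double> measurements) {
          if (measurements.isEmpty()) {
              return 0.0; // treat the mean of an empty list as 0 for simplicity
          }
          double sum = 0.0;
          for (double value : measurements) {
              sum += value;
          }
          return sum / measurements.size();
      }
  }

Such inline attribution does not replace the acknowledgement entry above; it only makes it easier for reviewers to locate the generated parts.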

Reviewers will take this information into account when assessing and grading the students’ own contributions.

Different guidelines apply to interdisciplinary theses and theses related to textual sciences (such as legal studies). Check with your advisors.

Key takeaways: Be transparent about the use of AI assistants while working on the thesis and in the final thesis document. Ask your advisor if in doubt.

See also: Portal:Studentische Arbeiten/Ausarbeitung

Guidelines for exercises as part of lectures at SDQ

Many lectures offer exercises during the semester to help students learn. Some exercises must be completed in order to be admitted to the exam; others are purely optional learning opportunities. In both cases, students demonstrate the acquired competencies in an exam. In most cases (unless explicitly stated otherwise), these exercises have to be solved without the use of generative AI. Such exercises need to be simple enough to be solved quickly, which means that current generative AI tools may be able to solve them completely; relying on these tools therefore does not help students reach the learning objectives. We trust students not to use generative AI for such exercises, as they would otherwise miss the opportunity to learn and to properly prepare for the exam. Depending on the qualification goals, teachers may provide different guidelines for specific lectures.

Guidelines for Praxis der Software-Entwicklung (PSE) and Teamprojekt Softwareentwicklung (TSE) at SDQ

Qualification goals:

“Students learn to carry out a complete software project according to the state of the art in software engineering in teams of 4-6 participants.” (translated from the Modulhandbuch for PSE) Programming basics have already been practised in the fundamental lecture. As the “state of the art in software engineering” today includes the use of generative AI, students may use generative AI to become more productive if they wish and if it matches the skill set and competencies they wish to acquire.

In any case, the responsibility for the results lies with the students. Thus, students must carefully check whether generated code, documentation, or other artefacts (or parts thereof) fulfil the requirements of the project and course. For example, faults of generated code cannot be blamed on the AI but are the students' responsibility. Students must always be able to explain all elements of the code they submit, including generated pieces of code.

The use of generative AI must be documented as described for final theses above.

See also: Praxis der Software-Entwicklung

Guidelines for practical labs at SDQ

Usually, practical labs have more specific learning goals than learning how to program. Thus, students can usually use generative AI to become more productive if they wish and if it matches the skill set and competencies they wish to acquire. Guidelines similar to those for final theses apply: advisors and reviewers will ultimately assess the student’s contribution. Thus, students shall check with their advisors before using AI assistants to generate core aspects of their solution. Please check the respective course information to see whether this general guideline applies to your practical lab course. The use of generative AI must be documented as described in the section on final theses above.

In any case, the students are responsible for the results. Thus, students are required to carefully check whether generated code, documentation, or other artefacts (or parts thereof) fulfil the requirements of the project and course. For example, faults of generated code cannot be blamed on the AI but are the responsibility of the students. Additionally, students are required to be able to explain all parts of their results.


Guidelines for “Prüfungsleistungen anderer Art”, such as the final tasks of “Programmieren”

For graded alternative assessments (Prüfungsleistungen anderer Art), guidelines will be defined at the level of the individual course. If the use of generative AI is allowed in a course, the use must be documented as described for final theses above.

For the lecture Programming (Programmieren) in summer term 2024, AI-generated artefacts are considered disallowed aids (unerlaubte Hilfsmittel). While you can chat with generative AI assistants to try to better understand a topic (e.g., asking them to explain polymorphism), all submitted artefacts must be written by the student or created by simple completion mechanisms of IDEs such as Eclipse. If in doubt, ask in the Ilias forum.

If the guidelines for an alternative assessment do not allow the use of generative AI for specific use cases, but the artefacts handed in for grading give the teacher reason to assume that generative AI might have been used inappropriately to produce them, students may be asked to explain their artefacts in an additional oral session. This session takes place no later than three weeks after the grading of the artefacts has been completed.

Additional useful resources

Teubner et al. (2023), “Welcome to the era of ChatGPT et al.: The prospects of large language models” [6]

  1. Gimpel, H., Hall, K., Decker, S., Eymann, T., Lämmermann, L., Mädche, A., … & Vandrik, S. (2023). Unlocking the power of generative AI models and systems such as GPT-4 and ChatGPT for higher education: A guide for students and lecturers (No. 02-2023). Hohenheim Discussion Papers in Business, Economics and Social Sciences. http://opus.uni-hohenheim.de/volltexte/2023/2146/
  2. Zentrum für Lehren und Lernen der Universität Mannheim (2023). ChatGPT im Studium: Potenziale ausschöpfen, Integrität wahren. https://www.uni-mannheim.de/media/Einrichtungen/Koordinationsstelle_Studieninformationen/Dokumente/Erstsemester/ChatGPT_Handreichung_Studierende_UMA_Stand_Mai_2023.pdf
  3. Kwon, D. (2022). The rise of citational justice: How scholars are making references fairer. Nature, 603(7902), 568-571. https://www.nature.com/articles/d41586-022-00793-1
  4. Präsidium der Deutschen Forschungsgemeinschaft (DFG) (2023). Stellungnahme des Präsidiums der Deutschen Forschungsgemeinschaft (DFG) zum Einfluss generativer Modelle für die Text- und Bilderstellung auf die Wissenschaften und das Förderhandeln der DFG. https://www.dfg.de/download/pdf/dfg_im_profil/geschaeftsstelle/publikationen/stellungnahmen_papiere/2023/230921_stellungnahme_praesidium_ki_ai.pdf
  5. ACM Publications Board (2023). ACM Policy on Authorship. https://www.acm.org/publications/policies/new-acm-policy-on-authorship
  6. Teubner, T., Flath, C. M., Weinhardt, C., van der Aalst, W., & Hinz, O. (2023). Welcome to the era of ChatGPT et al.: The prospects of large language models. Business & Information Systems Engineering, 65(2), 95-101. https://link.springer.com/article/10.1007/s12599-023-00795-x