Institutsseminar/2022-10-14

Session (all sessions)
Date: Friday, 14 October 2022
Time: 11:30 – 12:00 (duration: 30 min)
Location: Room 348 (Building 50.34)
Web conference: https://sdq.kastel.kit.edu/wiki/SDQ-Oberseminar/Microsoft Teams
Previous session: Fri, 14 October 2022
Next session: Fri, 21 October 2022


Talks

Speaker: Pascal Krieg
Title: Preventing Code Insertion Attacks on Token-Based Software Plagiarism Detectors
Type of talk: Bachelor's thesis
Advisor: Timur Sağlam
Language:
Mode of presentation: in person
Abstract: Some students tasked with mandatory programming assignments lack the time or dedication to solve the assignment themselves. Instead, they plagiarize a peer's solution by slightly modifying the code. Numerous tools assist instructors in detecting this kind of plagiarism. The most widely used are token-based plagiarism detectors, which are resilient against many obfuscation attacks, such as variable renaming or whitespace modification. However, they are susceptible to the insertion of code lines that affect neither the program flow nor the result.
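To illustrate the attack described above, here is a minimal, hypothetical Java example (not taken from the talk; all names are invented). Each inserted statement leaves the program's behavior unchanged, but adds tokens to the token stream, splitting the long matching token subsequences on which a token-based similarity rating is built.

```java
/** Hypothetical illustration of a code insertion attack. */
class InsertionAttackExample {

    /** Original student solution. */
    static int sum(int[] values) {
        int total = 0;
        for (int v : values) {
            total += v;
        }
        return total;
    }

    /**
     * Plagiarized copy: behaviorally identical, but each inserted dead
     * statement contributes extra tokens that break up the matching
     * token subsequences, lowering the reported similarity.
     */
    static int sumPlagiarized(int[] values) {
        int padding = 0;              // inserted: written but never read
        int total = 0;
        if (false) { padding++; }     // inserted: branch can never execute
        for (int v : values) {
            int noise = v * 0;        // inserted: always zero, never used
            total += v;
        }
        return total;
    }

    public static void main(String[] args) {
        int[] data = {1, 2, 3};
        // Both variants compute the same result; only the token streams differ.
        System.out.println(sum(data) + " == " + sumPlagiarized(data));
    }
}
```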

The working assumption so far was that successfully obfuscating plagiarism takes more effort and skill than solving the assignment itself. Automated plagiarism generators, which exploit the weakness described above, have broken this assumption. This thesis aims to develop countermeasures against code insertion that can be integrated directly into existing token-based plagiarism detectors. To this end, we first develop mechanisms that negate the effect of many types of code insertion. We then implement these mechanisms prototypically in a state-of-the-art plagiarism detector. We evaluate our implementation on a dataset of real student submissions and automatically generated plagiarism, and show that our mechanisms drastically increase the similarity rating of automatically generated plagiarism. Consequently, the plagiarism generator we use fails to create usable plagiarism.
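As a rough sketch of the kind of mechanism such a detector could apply (a hypothetical simplification, not the implementation from the thesis): before comparison, the token stream is normalized by dropping statements whose assigned variable is never read afterwards, so dead insertions no longer split matching subsequences.

```java
import java.util.ArrayList;
import java.util.List;

/**
 * Hypothetical token-normalization pass; the actual thesis mechanisms
 * may differ. Statements whose defined variable is never read later are
 * treated as dead insertions, and their tokens are dropped before the
 * token streams of two submissions are compared.
 */
class DeadInsertionFilter {

    /** Toy statement model: defined variable, read variables, its tokens. */
    record Stmt(String defines, List<String> reads, List<String> tokens) { }

    static List<String> normalize(List<Stmt> statements) {
        List<String> stream = new ArrayList<>();
        for (int i = 0; i < statements.size(); i++) {
            Stmt s = statements.get(i);
            boolean dead = s.defines() != null
                    && !readLater(statements, i, s.defines());
            if (dead) {
                continue; // skip all tokens of the dead statement
            }
            stream.addAll(s.tokens());
        }
        return stream;
    }

    private static boolean readLater(List<Stmt> stmts, int after, String var) {
        for (int i = after + 1; i < stmts.size(); i++) {
            if (stmts.get(i).reads().contains(var)) {
                return true;
            }
        }
        return false;
    }
}
```

This single forward scan is deliberately naive: it ignores loops, side effects, and chains of dead definitions (a statement read only by another dead statement). A real implementation would need proper data-flow analysis iterated to a fixed point.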

