Investigating Variational Autoencoders and Mixture Density Recurrent Neural Networks for Code Coverage Maximization

From SDQ-Institutsseminar

Current version as of 27 April 2022, 10:21

Speaker: Patrick Deubel
Talk type: Master's thesis
Advisor: Daniel Zimmermann
Date: Fri, 6 May 2022
Talk language:
Talk mode: online
Abstract: Graphical User Interfaces (GUIs) are a common interface for controlling software. Testing the graphical elements of GUIs is time-consuming for a human tester because it requires interacting with each element in each possible state that the GUI can be in. Automated approaches are therefore desired, but they often require many interactions with the software to improve their method; for computationally intensive tasks, this can become infeasible. In this thesis, I investigate the use of a reinforcement learning (RL) framework for the task of automatically maximizing the code coverage of desktop GUI software using mouse clicks. The framework leverages two neural networks to construct a simulation of the software. An additional, third neural network controls the software and is trained on the simulation, which avoids the possibly costly interactions with the actual software. To evaluate the approach, I developed a desktop GUI application on which the trained networks try to maximize the code coverage. The results show that the approach achieves higher coverage than a random tester when considering a limited number of interactions. For longer interaction sequences, however, it stagnates, while the random tester increases the coverage further and surpasses the investigated approach. Still, neither reaches a high coverage percentage: only random testers that use a list of clickable widgets for interaction selection achieved values of over 90% in my evaluation.
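The three-network setup described in the abstract (a VAE that compresses screenshots into latent vectors, an MDN-RNN that simulates the software's dynamics in latent space, and a controller trained inside that simulation) can be sketched as follows. This is a minimal illustration of the data flow only: all dimensions, weight shapes, and function names are my own assumptions, the networks are stand-ins with random weights rather than trained models, and the thesis's actual architecture may differ.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes -- the thesis does not state the actual dimensions.
LATENT_DIM = 32   # VAE latent vector z
HIDDEN_DIM = 64   # MDN-RNN hidden state
N_MIXTURES = 5    # Gaussian mixture components of the MDN head
ACTION_DIM = 2    # a mouse click as (x, y) coordinates in [0, 1]
IMG = 8 * 8       # toy 8x8 "screenshot"

def vae_encode(screenshot, w):
    """Stand-in for the VAE encoder: compress a screenshot to a latent z."""
    return np.tanh(screenshot.flatten() @ w["enc"])  # shape (LATENT_DIM,)

def mdn_rnn_step(z, action, h, w):
    """One simulation step: update the hidden state and sample the next
    latent state from the predicted Gaussian mixture."""
    x = np.concatenate([z, action, h])
    h_next = np.tanh(x @ w["rnn"])
    pi_logits = h_next @ w["pi"]                       # (N_MIXTURES,)
    mu = (h_next @ w["mu"]).reshape(N_MIXTURES, LATENT_DIM)
    pi = np.exp(pi_logits - pi_logits.max())
    pi /= pi.sum()
    k = rng.choice(N_MIXTURES, p=pi)                   # pick a component
    z_next = mu[k] + 0.1 * rng.standard_normal(LATENT_DIM)
    return z_next, h_next

def controller(z, h, w):
    """Linear controller mapping (z, h) to a click position in [0, 1]^2."""
    a = np.concatenate([z, h]) @ w["ctrl"]
    return 1.0 / (1.0 + np.exp(-a))                    # squash to coords

# Random weights stand in for trained parameters.
w = {
    "enc": 0.1 * rng.standard_normal((IMG, LATENT_DIM)),
    "rnn": 0.1 * rng.standard_normal((LATENT_DIM + ACTION_DIM + HIDDEN_DIM, HIDDEN_DIM)),
    "pi": 0.1 * rng.standard_normal((HIDDEN_DIM, N_MIXTURES)),
    "mu": 0.1 * rng.standard_normal((HIDDEN_DIM, N_MIXTURES * LATENT_DIM)),
    "ctrl": 0.1 * rng.standard_normal((LATENT_DIM + HIDDEN_DIM, ACTION_DIM)),
}

# Roll out a short click sequence entirely inside the learned simulation,
# i.e. without touching the real GUI software -- this is the point of the
# two-network world model: the controller trains against it cheaply.
z = vae_encode(rng.random((8, 8)), w)
h = np.zeros(HIDDEN_DIM)
clicks = []
for _ in range(10):
    a = controller(z, h, w)
    clicks.append(a)
    z, h = mdn_rnn_step(z, a, h, w)

print(len(clicks), clicks[0].shape)
```

In the real framework, the coverage signal obtained from instrumented runs of the software would serve as the reward for training the controller; here the rollout only demonstrates how the controller and the latent-space simulation interact.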