Automated Quality Assessment for Crowdsourced Test Reports of Mobile Applications

Document Type: Conference Paper

Date of Publication: 2018-01-01

Indexed by: EI, CPCI-S

Volume: 2018-March

Pages: 368-379

Keywords: crowdsourced testing; test reports; test report quality; quality indicators; natural language processing

Abstract: In crowdsourced mobile application testing, crowd workers help developers perform testing and submit test reports for unexpected behaviors. These reports usually provide critical information for developers to understand and reproduce the bugs. However, because worker performance varies and editing on mobile devices is inconvenient, the quality of test reports can vary sharply. Developers often have to spend a significant portion of their limited resources handling low-quality reports, which heavily reduces their efficiency. In this paper, to help developers predict whether a test report is worth selecting for inspection under limited resources, we propose a new framework named TERQAF to automatically model the quality of test reports. TERQAF defines a series of quantifiable indicators that measure desirable properties of test reports, and aggregates the numerical values of all indicators through step transformation functions to determine report quality. Experiments on five crowdsourced test report datasets of mobile applications show that TERQAF predicts test report quality with an accuracy of up to 88.06%, outperforming baselines by up to 23.06%. The experimental results also demonstrate that all four categories of measurable indicators contribute positively to TERQAF's evaluation of test report quality.
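As a purely illustrative sketch of the general idea the abstract describes, the following minimal Python example computes numeric values for a set of report-quality indicators, maps each through a step transformation function, and aggregates the results into a quality decision. The indicator names, thresholds, and majority-vote aggregation here are hypothetical stand-ins; TERQAF's actual indicators and functions are defined in the paper itself.

# Illustrative sketch only: indicator names, thresholds, and the
# majority-vote aggregation are hypothetical stand-ins, not TERQAF's
# actual definitions, which are given in the paper.

def step(value, threshold):
    """Step transformation: map a raw indicator value to 0 or 1
    depending on whether it meets the given threshold."""
    return 1 if value >= threshold else 0

def assess_report_quality(report):
    """Score a crowdsourced test report as 'good' or 'poor'."""
    # Hypothetical quantifiable indicators measured on the report.
    indicators = [
        (len(report.get("description", "")), 50),        # enough descriptive text
        (len(report.get("screenshots", [])), 1),         # at least one screenshot
        (len(report.get("reproduction_steps", [])), 3),  # enough steps to reproduce
        (len(report.get("environment", {})), 2),         # device/OS fields filled in
    ]
    # Apply the step transformation to each indicator and aggregate.
    score = sum(step(value, threshold) for value, threshold in indicators)
    # A simple majority rule decides whether the report merits inspection.
    return "good" if score >= len(indicators) / 2 else "poor"

example = {
    "description": "The app crashes when the screen is rotated on the settings page.",
    "screenshots": ["crash.png"],
    "reproduction_steps": ["Open settings", "Rotate device", "Observe crash"],
    "environment": {"device": "Pixel 2", "os": "Android 8.1"},
}
print(assess_report_quality(example))  # prints: good

Mapping heterogeneous raw measurements through step functions makes them comparable before aggregation, which is the property the abstract attributes to TERQAF's design.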
