Analysis of the Effectiveness of Iterative Prompts in the Integration of Classification and Summarization of User Reports Based on NLP
DOI: https://doi.org/10.34123/icdsos.v2025i1.510

Keywords: Classification, Natural Language Processing (NLP), Summarization, User Reports

Abstract
User reports submitted through feedback features or ticketing systems provide valuable insights for improving mobile applications. However, the high volume of reports makes review and decision-making difficult. Effective classification and summarization are therefore essential for managing this information efficiently, allowing developers to quickly identify recurring issues and supporting data-driven development strategies. This study automates large-scale user feedback processing using Natural Language Processing (NLP) and evaluates several language models. The BigBird-Small model achieved the highest agreement with the majority (81.51%), owing to its ability to process long text contexts. XLM-R-Base performed competitively (78.08%), while BERT-Base and RoBERTa-Base showed stable performance (75.68% and 74.32%, respectively). DistilBERT-Base, though more computationally efficient, had slightly lower accuracy (74.32%). For summarization, Simple Prompt and Iterative Prompt approaches were compared. The Iterative Prompt approach with four iterations performed best, achieving a similarity score of 0.911, a compression score of 0.846, keyword overlap of 0.624, and redundancy of 0.070. These results demonstrate that combining automated classification with iterative summarization can significantly improve both efficiency and accuracy in managing user reports, supporting better decision-making and more effective mobile app development.
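The sketch below illustrates the two-stage pipeline the abstract describes: classifying incoming user reports with a pretrained transformer and then producing a summary through an iterative refinement loop. Because the paper's fine-tuned checkpoints, label set, and prompt wording are not given here, the model names, the candidate labels, and the use of zero-shot classification and a generic summarization model are illustrative assumptions, not the authors' exact configuration.

```python
# Minimal sketch of the classification + iterative summarization pipeline,
# using Hugging Face Transformers. All checkpoint names, labels, and the
# refinement loop are assumptions standing in for the paper's setup.
from transformers import pipeline

# Stage 1: classify each user report. The paper fine-tunes models such as
# BigBird-Small and XLM-R-Base; zero-shot classification is used here only
# as a runnable stand-in.
classifier = pipeline(
    "zero-shot-classification",
    model="joeddav/xlm-roberta-large-xnli",  # assumed multilingual checkpoint
)
LABELS = ["bug report", "feature request", "usability complaint", "other"]  # hypothetical label set

def classify_report(text: str) -> str:
    """Return the highest-scoring label for a single user report."""
    result = classifier(text, candidate_labels=LABELS)
    return result["labels"][0]

# Stage 2: iterative summarization — re-summarize the previous draft a fixed
# number of times (the abstract reports four iterations performing best).
summarizer = pipeline("summarization", model="facebook/bart-large-cnn")

def iterative_summarize(reports: list[str], iterations: int = 4) -> str:
    """Condense a batch of reports by repeatedly summarizing the draft."""
    draft = " ".join(reports)
    for _ in range(iterations):
        draft = summarizer(
            draft, max_length=128, min_length=32, do_sample=False
        )[0]["summary_text"]
    return draft

if __name__ == "__main__":
    reports = [
        "The app crashes every time I open the payment screen.",
        "Please add a dark mode; the white background hurts my eyes at night.",
    ]
    for r in reports:
        print(classify_report(r), "->", r)
    print("Summary:", iterative_summarize(reports, iterations=4))
```

In this layout, the classifier routes reports into categories before summarization, so each category can be summarized separately and the iteration count can be tuned per category; the four-iteration setting mirrors the configuration reported as best in the abstract.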