Large Language Models in Software Engineering: Automation, Collaboration, and Challenges
DOI:
https://doi.org/10.56028/aetr.15.1.1795.2025

Keywords:
Large Language Model (LLM), Software Engineering, Automated Code Generation, Software Quality Assurance (SQA), Human–AI Collaboration.

Abstract
As artificial intelligence (AI) technology develops rapidly, large language models (LLMs) have become widely applied in software engineering, revolutionizing software development and bringing significant value to testing, debugging, operations, project management, and other areas. Through literature analysis and comparative research, this article systematically reviews the latest research findings on LLM-empowered software engineering. It examines requirements analysis and modeling, automated code generation, intelligent debugging, and operations and maintenance in detail, and discusses software quality assurance and team collaboration. The article also explores the challenges encountered in the practical application of LLMs, chiefly security and privacy concerns, difficulties in interpretability and control, and issues of copyright ownership and legal liability. In addition, the article offers predictions for future trends: neural-symbolic integration, human–AI collaboration, and the establishment of standards are key drivers of continued progress in software engineering in the LLM era. Despite its contributions, this study is limited by its reliance on existing literature and secondary data, highlighting the need for future longitudinal and quantitative research to assess the real-world impact of LLMs on software quality, security, and maintainability.