
Intl. Summer School on Search- and Machine Learning-based Software Engineering
command-line as a standalone executable, without needing to open Eclipse. This is particularly useful when, for instance, one needs to integrate it into an existing development workflow (e.g., using continuous integration). The tool uses the Extract Method refactoring operation provided by the Eclipse Java Development Tools (JDT) to test the feasibility of code extractions programmatically. Finally, the tool chooses the best sequence of method extractions found during the search: the one that reduces the SSCC to (or below) the threshold while minimizing the number of method extractions.
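The selection rule described above can be sketched as follows. This is a minimal illustration, not the tool's actual implementation; the `Plan` record and the `best` method are hypothetical names, and each candidate plan is assumed to record the SSCC obtained after applying it and the number of Extract Method operations it requires.

```java
import java.util.Comparator;
import java.util.List;
import java.util.Optional;

public class PlanSelector {

    // Hypothetical summary of one refactoring plan found during the search.
    record Plan(int resultingSscc, int extractions) {}

    // Among plans that bring the SSCC to or below the threshold,
    // pick the one with the fewest Extract Method operations.
    static Optional<Plan> best(List<Plan> candidates, int threshold) {
        return candidates.stream()
                .filter(p -> p.resultingSscc() <= threshold)
                .min(Comparator.comparingInt(Plan::extractions));
    }
}
```

If no candidate reaches the threshold, the method returns an empty `Optional`, which corresponds to an unsolvable instance of the reduction problem.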
IV. PRELIMINARY RESULTS
We conducted a study to evaluate the proposed approach by reducing the SSCC of methods in 10 open-source projects from GitHub: two popular frameworks for multi-objective optimization, five platform components to accelerate the development of smart solutions, and three popular open-source projects with more than 10,000 stars, each forked more than 900 times. In total, these projects have 1,050 cognitive complexity issues. The proposed approach fixed, on average, 78% of the cognitive complexity issues in these projects, taking almost 20 hours to process all methods in the 10 software projects under study. Most of the studied projects required more than one code extraction to reduce the SSCC of their methods to 15, and five projects required five or more code extractions for some methods. On average, a code extraction reduced the SSCC by 12 units; some extractions reduced a method's SSCC by up to 72 units.
V. CONCLUSION
We formulated the reduction of the software cognitive complexity metric provided by SonarCloud and SonarQube to a given threshold as an optimization problem. We then proposed an approach that reduces the cognitive complexity of methods in software projects to the chosen threshold through the application of sequences of Extract Method refactoring operations. We also conducted experiments on 10 open-source software projects, analyzing more than 1,000 methods with a cognitive complexity greater than the default threshold suggested by SonarQube (15). The proposed approach was able to reduce the cognitive complexity to or below the threshold in 78% of those methods.
Despite these results, several open questions remain:
• Multiple optimal Extract Method refactoring operations may exist when reducing the cognitive complexity of a method, each impacting the code differently: the number of resulting and/or extracted lines of code (LOC), the number of arguments in the signature of the extracted methods, the SSCC reduction, the SSCC of the newly extracted methods, etc. Which one should we apply? This opens the door to multiple-criteria decision making.
• An aspect that is out of the scope of this article is the choice of names for the newly extracted methods. Given that the name of a new method can influence the understanding of the resulting source code, how should we name newly extracted methods? Building a dictionary of keywords from the original method and applying natural language processing techniques based on Transformers could be a good starting point.
• Having too many return, break, and continue statements in a method decreases its understandability, because the flow of execution is broken each time one of these statements is encountered. These statements can also prevent the extraction of code, making an instance of the cognitive complexity reduction problem unsolvable. Can we pre-process a method to refactor return, break, and continue statements so as to favor the cognitive complexity reduction task?
• The enumeration algorithms used so far in the proposed approach could fail to scale with the code size, because the number of refactoring plans can grow exponentially with the number of lines of code. We believe that modeling SSCC reduction as an Integer Linear Programming optimization problem makes sense: it would make it feasible to apply efficient solvers, such as CPLEX, to obtain optimal solutions very quickly. Nevertheless, another question remains open: is the software cognitive complexity reduction of a method an NP-hard problem?
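For intuition about the metric being reduced, the following toy sketch mimics the two rules that dominate SonarSource's cognitive complexity: each control-flow structure (if, loop, etc.) adds one, plus a penalty equal to its nesting depth. This is a simplified illustration, not SonarQube's actual implementation (which also scores boolean operator sequences, labeled jumps, and other constructs); the `Node` record and `sscc` method are hypothetical names, with each node standing for one control-flow structure and its children for the structures nested inside it.

```java
import java.util.List;

public class ToySscc {

    // One control-flow structure; children are the structures nested in it.
    record Node(List<Node> children) {}

    // Structural increment (+1) plus a nesting increment equal to the depth,
    // summed recursively over all nested structures.
    static int sscc(Node n, int depth) {
        int cost = 1 + depth;
        for (Node c : n.children()) {
            cost += sscc(c, depth + 1);
        }
        return cost;
    }
}
```

Under this toy model, a chain of three nested ifs costs 1 + 2 + 3 = 6, which shows why flattening deep nesting via Extract Method cuts the score sharply: the extracted code restarts at nesting depth zero in the new method.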
ACKNOWLEDGMENT
This research has been supported by Universidad de Málaga (grants B1-2020 01 and B4-2019-05) and project PID2020-116727RB-I00 funded by MCIN/AEI/10.13039/501100011033. Rubén Saborido and Javier Ferrer are supported by postdoctoral grants POSTDOC 21 00567 and DOC/00488, respectively, funded by the Andalusian Ministry of Economic Transformation, Industry, Knowledge and Universities.