As projects evolve, developers rushing to deliver a new release frequently postpone the software restructuring, known as refactoring, that is needed at different levels of abstraction to maintain quality, until a crisis occurs. When it does, the result is often substantially degraded system performance, an inability to support new features, or even a terminally broken system architecture.
We will pose and test hypotheses to advance our understanding of refactoring rationale by studying the correlation and mutual impact between (a) software quality issues at different levels, revealing when and how code anomalies propagate into architecture antipatterns; (b) code reviews and commit messages on the one hand and detected or fixed antipatterns on the other, identifying critical refactoring opportunities, with relevant explanations, beyond what traditional static and dynamic analyses offer; and (c) code antipatterns and bugs. We will distill this knowledge to design, implement, and evaluate an interactive refactoring framework. The framework will quantify refactoring rationale to compare software quality across developers and projects, establishing a baseline, and will let human-supplied, domain-specific abstraction and high-level design guide automated, atomic code-level refactoring steps. It will also describe recommended refactorings and their rationale automatically in natural language, and generate commit messages when developers approve the recommended refactorings and software repairs.
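As a minimal sketch of how objective (b) could be operationalized, the Python snippet below mines a Git history for commits whose messages mention refactoring and counts how often those commits touch files that a separate antipattern detector has flagged. Every name here (REFACTORING_KEYWORDS, antipattern_files, and the keyword heuristic itself) is a hypothetical placeholder for illustration, not a component of the proposed framework.

```python
import re
import subprocess
from collections import defaultdict

# Hypothetical keyword heuristic; a real study would use a validated taxonomy.
REFACTORING_KEYWORDS = re.compile(
    r"\b(refactor|restructur|extract method|rename|decompos|clean.?up)", re.IGNORECASE
)

def refactoring_commits(repo_path):
    """Yield (commit_hash, subject) pairs whose subjects mention refactoring."""
    log = subprocess.run(
        ["git", "-C", repo_path, "log", "--pretty=format:%H%x1f%s"],
        capture_output=True, text=True, check=True,
    ).stdout
    for line in log.splitlines():
        commit, _, subject = line.partition("\x1f")
        if REFACTORING_KEYWORDS.search(subject):
            yield commit, subject

def files_touched(repo_path, commit):
    """List the files changed by a given commit."""
    out = subprocess.run(
        ["git", "-C", repo_path, "show", "--name-only", "--pretty=format:", commit],
        capture_output=True, text=True, check=True,
    ).stdout
    return [path for path in out.splitlines() if path]

def correlate(repo_path, antipattern_files):
    """Count how often refactoring commits touch antipattern-flagged files.

    `antipattern_files` is assumed to be a set of file paths produced by a
    separate static-analysis pass (an antipattern detector's report).
    """
    hits = defaultdict(int)
    for commit, _ in refactoring_commits(repo_path):
        for path in files_touched(repo_path, commit):
            if path in antipattern_files:
                hits[path] += 1
    return hits
```

A full study would replace the keyword heuristic with a validated classification and add statistical tests, but the skeleton shows the data linkage, between developer-stated rationale in commit messages and detector-reported antipatterns, that hypotheses (b) and (c) rely on.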