Replies: 5 comments 1 reply
-
Interesting! In my Python monorepo we use … Regarding some of the tooling you mention above, I'm thinking that a great first start for some use cases could be to get …
-
I haven't heard of polylith before, it looks 😍 Just after that I discovered `ruff analyze graph`, which looks promising.
-
Here is another one, only a couple of months old: 🔍 CFG-based dead code detection: find unreachable code after exhaustive if-elif-else chains.
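To make that concrete, here is a minimal, invented example of the pattern such a CFG-based checker targets (the function itself is hypothetical, purely for illustration):

```python
def sign(x: float) -> str:
    if x > 0:
        return "positive"
    elif x < 0:
        return "negative"
    else:
        return "zero"
    # Unreachable: every branch of the exhaustive chain above returns,
    # so control flow can never fall through to this statement.
    return "unknown"
```

A plain linter that only checks branch-by-branch reachability can miss this; a control-flow-graph analysis sees that no path reaches the trailing `return`.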
-
Hi there, thanks for starting the thread. I'm the maintainer of Import Linter and Grimp (a Rust-backed library which it uses for the heavy lifting). Also Impulse, a little library for visualizing the graph. |
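For anyone who hasn't tried Grimp directly, a rough sketch of what querying the import graph can look like (`mypackage` and its submodule names are placeholders, and the calls follow my reading of Grimp's documented API, so treat this as illustrative rather than a reference):

```python
import grimp

# Build the import graph for an installed package
# ("mypackage" is a placeholder name).
graph = grimp.build_graph("mypackage")

# Which modules import the persistence layer directly?
importers = graph.find_modules_that_directly_import("mypackage.db")
print(sorted(importers))

# Does any import chain lead from the domain layer to the web layer?
chain = graph.find_shortest_chain(
    importer="mypackage.domain", imported="mypackage.web"
)
if chain:
    # Such a chain would be evidence of a layering violation.
    print(" -> ".join(chain))
```

Import Linter's contracts are essentially declarative versions of queries like these, run in CI.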
-
Yes, thank you for opening the discussion! I don't have much to add at the moment, but I'm excited to follow the thread and take a look at the tools mentioned 😄 |
-
The landscape of Python software quality tooling is currently defined by two contrasting forces: high-velocity convergence and deep specialization. The recent, rapid adoption of Ruff has solved the long-standing community problem of coordinating dozens of separate linters and formatters, establishing a unified, high-performance axis for standard code quality.
A second category of tools continues to operate in necessary, but isolated, silos: tools dedicated to architectural enforcement and deep structural metrics, such as `import-linter`, `tach`, `complexipy`, `lcom`, and `cohesion`.
These projects address fundamental challenges of code maintainability, evolvability, and architectural debt that extend beyond the scope of fast, stylistic linting. The success of Ruff now presents the opportunity to foster a cross-tool discussion focused not just on syntax, but on structure.
Specialized quality tools are vital for long-term maintainability and risk assessment. Tools like `import-linter` and `tach` mitigate technical risk by enforcing architectural rules, preventing systemic decay, and reducing change costs. Complexity and cohesion metrics from tools such as `complexipy`, `lcom`, and `cohesion` quantitatively flag overly complex or highly coupled components, acting as early warning systems for technical debt.

By analysing the combined outputs, risk assessment shifts to predictive modelling: integrating data from individual tools (e.g., `import-linter` violations, `complexipy` scores) creates a multi-dimensional risk score. Overlaying these results, such as identifying modules that are both low in cohesion and involved in `tach`-flagged dependency cycles, generates a "heat map" of technical debt. This unified approach, empirically validated against historical project data like bug frequency and commit rates, can yield a predictive risk assessment. It identifies modules that are not just theoretically complex but empirically confirmed sources of instability, transforming abstract quality metrics into concrete, prioritized refactoring tasks for the riskiest codebase components.
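As a thought experiment, such a combined score could start as simply as the sketch below (the weights, metric ranges, and module data are all invented for illustration; real inputs would come from the tools' actual reports):

```python
from dataclasses import dataclass

@dataclass
class ModuleMetrics:
    complexity: float  # e.g. a normalized complexity score, 0..1
    cohesion: float    # e.g. an LCOM-style cohesion value, 0..1 (higher is better)
    violations: int    # e.g. import-linter / tach contract findings

def risk_score(m: ModuleMetrics) -> float:
    # Weighted blend; the weights are illustrative, not empirically validated.
    return (
        0.4 * m.complexity
        + 0.4 * (1.0 - m.cohesion)
        + 0.2 * min(m.violations / 5.0, 1.0)
    )

# Hypothetical per-module measurements.
modules = {
    "mypackage.billing": ModuleMetrics(complexity=0.9, cohesion=0.2, violations=3),
    "mypackage.utils": ModuleMetrics(complexity=0.3, cohesion=0.8, violations=0),
}

# Rank modules by combined risk: a crude "heat map" of technical debt.
for name in sorted(modules, key=lambda n: risk_score(modules[n]), reverse=True):
    print(f"{name}: {risk_score(modules[name]):.2f}")
```

The hard and interesting part is the validation step described above: regressing such scores against historical bug frequency and commit churn to find weights that actually predict instability.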
Reasons to Connect

Bring the maintainers and core users of these diverse tools into a shared discussion.
- Increasing Tool Visibility and Sustainability: Specialized tools often rely on small, dedicated contributor pools and suffer from knowledge isolation, confining technical debate to their specific GitHub repository. A broader discussion provides these projects with critical outreach, exposure to a wider user base, and a stronger pipeline of new contributors, ensuring their long-term sustainability.
Let's start the conversation on how to 'measure' maintainable and architecturally sound Python code.
And keep Goodhart's law in mind: "When a measure becomes a target, it ceases to be a good measure" ;-)