
> improved codebase

I've seen lots of claims about AI coding skill, but that one might be able to improve (and not merely passably extend) a codebase is a new one. I'd want to see it before I believe it.



It depends what you're optimizing for. At the simplest, you can ask for a reduction in cyclomatic/cognitive complexity as measured by a linter, extraction of methods (where a paragraph of code serves no purpose other than to populate a variable) or of complex conditionals, a move from an imperative to a declarative approach, etc. These are all things that can be caught through pattern matching and measured with a linter or code review tool (CodeRabbit, Sourcery, or CodeScene).
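To make the "extract method" and imperative-to-declarative points concrete, here's a hypothetical Python sketch (the names `checkout`, `order_total`, and the flat-rate `apply_tax` are made up for illustration, not from any real codebase):

    # Before: a paragraph of imperative code whose only job is to populate `total`
    def apply_tax(amount):
        return round(amount * 1.2, 2)  # placeholder tax rule for the example

    def checkout(order):
        total = 0
        for item in order["line_items"]:
            if item["quantity"] > 0:
                total += item["price"] * item["quantity"]
        return apply_tax(total)

    # After: the paragraph is extracted into a named, declarative expression
    def order_total(order):
        return sum(
            item["price"] * item["quantity"]
            for item in order["line_items"]
            if item["quantity"] > 0
        )

    def checkout_refactored(order):
        return apply_tax(order_total(order))

A complexity linter scores the second version lower, which gives the agent a measurable target rather than a vague "make it better".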

Other things might need to be done in two stages. You might ask the agent to first identify where code violates CQRS, then, for each instance, explain the problem and spawn a sub-agent to address it.
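The kind of violation I'd have it flag is a method that both changes state and answers a question. A minimal Python sketch, assuming a made-up `AccountService` (not from any real project):

    # Flagged: a "command" that also acts as a query, returning the new balance
    class AccountService:
        def __init__(self):
            self.balances = {}

        def deposit(self, account_id, amount):
            self.balances[account_id] = self.balances.get(account_id, 0) + amount
            return self.balances[account_id]  # command and query mixed together

    # What the sub-agent would be asked to produce: command and query separated
    class AccountServiceCqrs:
        def __init__(self):
            self.balances = {}

        def deposit(self, account_id, amount):
            # command: mutates state, returns nothing
            self.balances[account_id] = self.balances.get(account_id, 0) + amount

        def balance(self, account_id):
            # query: reads state, never mutates it
            return self.balances.get(account_id, 0)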

Other things the agent might identify this way: duplicate implementations, use of conflicting APIs, poor separation of concerns at the module or class level.

I don't typically let the agent do any of this end to end; I manually review the findings before spawning sub-agents to act on them.


Claude will refactor, but more than that, it can add documentation. And it can be asked about a codebase too: "Where does FOO happen?" "How does BAR work?"



