Description
🤖 Issue: LLM-Driven Static API Design Assessment for PyFluent
Context
In a previous release, we developed an api-check tool to statically check the PyFluent settings API for design issues such as:
- Undesirable or inconsistent naming
- Redundant context in path structures
- Unclear groupings or missed opportunities for logical hierarchy
- Non-dictionary words or inconsistently used terms
This approach successfully identified concrete improvements but required significant human post-processing to sift through false positives and to provide nuanced, design-oriented recommendations.
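For context, checks of this kind are essentially mechanical. The sketch below (not the actual api-check implementation; the word list and paths are made up for illustration) shows the flavor of rule-based checking involved and hints at why it produces false positives that need human sifting:

```python
# Sketch only: the kind of mechanical, rule-based check the existing api-check
# tool performs (illustrative, not the actual implementation).
import re

# Stand-in word list; the real tool would use a proper dictionary.
ENGLISH_WORDS = {"setup", "materials", "database", "create", "results",
                 "graphics", "objects", "viscous", "model", "mesh", "options"}


def flag_non_dictionary_words(path: str) -> list[str]:
    """Flag path segments (split on dots/underscores) not found in the word list."""
    segments = re.split(r"[._]", path)
    return [s for s in segments if s and s.lower() not in ENGLISH_WORDS]


def flag_redundant_context(path: str) -> list[str]:
    """Flag immediate repetition of a word along the path, e.g. mesh.mesh_options."""
    parts = path.split(".")
    return [f"{a}.{b}" for a, b in zip(parts, parts[1:]) if b.startswith(a)]


print(flag_non_dictionary_words("setup.materials.database.chng_create"))  # -> ['chng']
print(flag_redundant_context("results.graphics.graphics_objects"))        # -> ['graphics.graphics_objects']
```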
Why Now
Modern LLMs are much stronger at:
- Understanding API structure and semantics holistically
- Distinguishing between well-designed and poorly designed sections
- Suggesting actionable design improvements beyond mechanical style checks
The goal is to move from rule-based linting to LLM-augmented, human-centered design reviews at scale.
Vision
We could:
- Train or prompt an LLM with examples of well- and poorly-designed sections of the settings API
- Feed the entire current API to the LLM for static analysis
- Get back assessments, flagged issues, and high-level design proposals for:
  - Overly TUI-style remnants (e.g., materials.database)
  - Redundant or confusing path structures
  - Naming inconsistencies or redundant words
  - Poorly grouped or scattered related objects
We’d then run a human feedback loop to refine these recommendations and identify redesign candidates.
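To make the "prompt an LLM with examples of well- and poorly-designed sections" idea concrete, here is a minimal sketch of what a few-shot review prompt could look like. The paths, the good/bad judgments, and the build_review_prompt helper are hypothetical illustrations, not output of the existing api-check tool or part of PyFluent:

```python
# Sketch only: one possible few-shot prompt format for an LLM-based design review.
# The example paths and judgments below are hypothetical, not real findings.

GOOD_EXAMPLES = [
    ("setup.models.viscous.model",
     "Clear hierarchy: setup -> models -> viscous -> model; no redundant words."),
]

BAD_EXAMPLES = [
    ("setup.materials.database.list_materials",
     "TUI-style command remnant surfaced in the stable settings API; "
     "no real settings objects underneath."),
]


def build_review_prompt(api_paths: list[str]) -> str:
    """Assemble a few-shot review prompt from flattened settings paths."""
    lines = [
        "You review the design of a Python settings API.",
        "Flag naming inconsistencies, redundant path context, TUI-style remnants, "
        "and poor grouping. Be specific and actionable.",
        "",
        "Examples of well-designed paths:",
    ]
    lines += [f"- {path}: {why}" for path, why in GOOD_EXAMPLES]
    lines += ["", "Examples of poorly designed paths:"]
    lines += [f"- {path}: {why}" for path, why in BAD_EXAMPLES]
    lines += ["", "Paths to review:"]
    lines += [f"- {path}" for path in api_paths]
    lines += ["", "Return one finding per problematic path, with a suggested fix."]
    return "\n".join(lines)


if __name__ == "__main__":
    print(build_review_prompt(["setup.materials.database", "results.graphics.mesh"]))
```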
Example
A concrete recent example is materials.database: it still exposes only TUI-style commands and no real settings objects, yet is surfaced in the stable API. This kind of mismatch could be flagged automatically by an LLM trained with just a few examples.
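As a rough illustration, if the extracted tree recorded each group's settings children and command names (an assumed representation, not an existing data model), even a trivial pre-filter could mark command-only groups like materials.database for the LLM to comment on:

```python
# Sketch only: flags groups that expose commands but no settings children.
# The nested-dict tree shape and command names shown here are assumed.

def command_only_groups(tree: dict, prefix: str = "") -> list[str]:
    """Return dotted paths of groups that have commands but no settings children."""
    flagged = []
    for name, node in tree.items():
        path = f"{prefix}.{name}" if prefix else name
        children = node.get("children", {})
        if node.get("commands") and not children:
            flagged.append(path)
        flagged += command_only_groups(children, path)
    return flagged


# Made-up slice of the tree for demonstration:
tree = {
    "materials": {
        "commands": [],
        "children": {
            "database": {"children": {}, "commands": ["list_materials", "copy_by_name"]},
            "fluid": {"children": {"air": {"children": {}, "commands": []}}, "commands": []},
        },
    }
}
print(command_only_groups(tree))  # -> ['materials.database']
```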
Goals
✅ Automate static design checks with modern LLM capabilities
✅ Reduce the burden of manual review
✅ Provide maintainers with clear, actionable suggestions
✅ Support a more unified, user-friendly PyFluent API
Next Steps
- Scope how the LLM would ingest the settings API object tree and metadata
- Define the format for training examples (good vs. bad)
- Prototype a small LLM prompt to see what insights it generates
- Build the first pipeline: extract API tree → prompt LLM → capture results → generate report
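As a starting point for the ingestion and pipeline items above, here is a minimal sketch. It assumes the generated settings classes expose child_names and command_names for introspection (worth verifying against the ansys-fluent-core version in use), that the settings root is reachable as solver.settings after launch_fluent, and it uses the OpenAI Python client with a placeholder model name purely as a stand-in for whichever backend we pick:

```python
# Sketch only: a minimal extract -> prompt -> capture -> report pipeline.
from openai import OpenAI  # placeholder backend; any chat-capable LLM would do

import ansys.fluent.core as pyfluent


def extract_paths(obj, prefix="", paths=None):
    """Flatten the settings object tree into dotted path strings."""
    if paths is None:
        paths = []
    # child_names/command_names are assumed introspection attributes of the
    # generated settings classes; leaves without them fall back to [].
    for name in getattr(obj, "child_names", []):
        path = f"{prefix}.{name}" if prefix else name
        paths.append(path)
        extract_paths(getattr(obj, name), path, paths)
    for name in getattr(obj, "command_names", []):
        paths.append(f"{prefix}.{name} [command]" if prefix else f"{name} [command]")
    return paths


def review(paths, model="gpt-4o"):  # placeholder model name
    """Ask the model for design findings on the flattened paths."""
    prompt = (
        "Review these PyFluent settings API paths for naming inconsistencies, "
        "redundant context, TUI-style remnants, and poor grouping. "
        "Return one finding per problematic path, with a suggested fix:\n"
        + "\n".join(paths)
    )
    client = OpenAI()
    resp = client.chat.completions.create(
        model=model, messages=[{"role": "user", "content": prompt}]
    )
    return resp.choices[0].message.content


if __name__ == "__main__":
    solver = pyfluent.launch_fluent(mode="solver")
    paths = extract_paths(solver.settings)
    with open("api_design_report.md", "w") as f:
        f.write(review(paths))
```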
Request: Open to ideas, collaborators, or early prototypes for this. Please comment if you’re interested in helping design the next-generation api-check with an LLM at its core.