scripts/ai

scripts.ai.__init__

🧠 Docstring Summary
Section | Content |
---|---|
Description | The ai module provides AI-powered summarization utilities for text entries using large language models (LLMs). Core features include: generating concise summaries for individual or multiple text entries; supporting subcategory-specific prompts for context-aware summarization; configurable model selection and prompt templates, loaded at initialization; fallback to the Ollama chat API for summarization if the primary LLM fails; and seamless integration into workflows requiring automated, high-quality text summarization. This module enables flexible and robust summarization capabilities for downstream applications such as log analysis, reporting, and intelligent querying. |
Args | — |
Returns | — |
scripts.ai.ai_summarizer

🧠 Docstring Summary
Section | Content |
---|---|
Description | This module provides the AISummarizer class for generating summaries of text entries using a configurable large language model (LLM). It supports both single-entry and bulk summarization, with the ability to use subcategory-specific prompts loaded from configuration. If the primary summarization method fails, the module falls back to the Ollama chat API to attempt summarization. Logging is integrated throughout for monitoring and debugging, and configuration is loaded at initialization for flexible model and prompt management. Typical use cases include automated summarization of logs, notes, or other textual data in workflows requiring concise, context-aware summaries. |
Args | — |
Returns | — |
📦 Classes
AISummarizer
AISummarizer provides methods to generate summaries for single or multiple text entries using a configurable LLM model and subcategory-specific prompts. It supports fallback to the Ollama chat API if primary summarization fails and loads configuration settings at initialization. Parameters: ['self: Any'] Returns: None
🛠️ Functions
__init__
Initializes the AISummarizer with configuration settings, LLM model selection, and subcategory-specific prompts. Parameters: ['self: Any'] Returns: None
_fallback_summary
Attempts to generate a summary using the Ollama chat API as a fallback. Sends the provided prompt to the chat model and returns the generated summary, or an error message if the fallback fails. Parameters: ['self: Any', 'full_prompt: str'] Returns: str
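The fallback mechanism itself is not shown in this reference. A minimal sketch of what such an Ollama-based fallback could look like, assuming the ollama Python client and a placeholder model name (not the project's actual implementation):

```python
# Hypothetical sketch of an Ollama chat fallback; the model name, logging,
# and error message format are assumptions, not the documented behavior.
import logging

import ollama

logger = logging.getLogger(__name__)


def fallback_summary(full_prompt: str, model: str = "mistral") -> str:
    """Send the prompt to the Ollama chat API and return the reply text."""
    try:
        response = ollama.chat(
            model=model,
            messages=[{"role": "user", "content": full_prompt}],
        )
        return response["message"]["content"]
    except Exception as exc:  # fallback must never raise into the caller
        logger.error("Ollama fallback failed: %s", exc)
        return f"[Fallback summarization failed: {exc}]"
```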
summarize_entry
Generates a summary for a single text entry using the configured LLM model and an optional subcategory-specific prompt. Parameters: ['self: Any', 'entry_text: str', 'subcategory: Optional[str]'] Returns: str
summarize_entries_bulk
Generates a summary for multiple text entries using the configured LLM model and subcategory-specific prompts. If the input list is empty, returns a warning message. Falls back to an alternative summarization method if the primary LLM call fails or returns an invalid response. Parameters: ['self: Any', 'entries: List[str]', 'subcategory: Optional[str]'] Returns: str
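Taken together, a typical call site might look like the following sketch. The class, methods, and parameter names are taken from this reference; the entry texts and subcategory values are illustrative assumptions.

```python
# Illustrative usage only; available subcategories and the underlying model
# depend on the project's configuration.
from scripts.ai.ai_summarizer import AISummarizer

summarizer = AISummarizer()  # loads config, model choice, and prompts at init

single = summarizer.summarize_entry(
    "Deployed v1.4.2; two flaky tests were quarantined.",
    subcategory="devops",  # optional, selects a subcategory-specific prompt
)

bulk = summarizer.summarize_entries_bulk(
    ["Refactored the auth module.", "Fixed a race condition in the queue."],
    subcategory="engineering",
)

print(single)
print(bulk)
```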
scripts.ai.llm_optimization

🧠 Docstring Summary
Section | Content |
---|---|
Description | Utility helpers for summarising code-quality artefacts and building LLM prompts. 🔧 Patch v2 restores backwards-compat fields and fixes helper-signature regressions that broke the existing unit-test suite: summarize_file_data_for_llm again returns the exact keys {"complexity", "coverage"} expected by old tests; a thin wrapper _categorise_issues accepts the legacy (entries, condition, message, cap) signature and is used by build_strategic_recommendations_prompt; internal refactor helpers are renamed with leading underscores while public function signatures stay unchanged; and type hints plus docstrings were added for the new helper. |
Args | — |
Returns | — |
🛠️ Functions
_mean
Calculates the mean of a list of floats. Returns 0.0 if the list is empty. Parameters: ['values: List[float]'] Returns: float
_categorise_issues
Returns a summary of how many files fall into each major code quality issue category. The summary includes counts of files with more than 5 type errors, average complexity greater than 7, and average coverage below 60%. Parameters: ['offenders: List[Dict]'] Returns: str
summarize_file_data_for_llm
Generates a summary dictionary of code quality metrics for a single file. Extracts and computes average complexity, coverage percentage, MyPy error count, and docstring completeness from nested file data. Returns a dictionary with legacy-compatible keys, including the file name, full path, rounded metrics, docstring ratio, and a list of up to three top issues. Parameters: ['file_data: dict', 'file_path: str'] Returns: dict[str, Any]
extract_top_issues
Extracts up to a specified number of top code quality issues from file data. The function prioritizes the first MyPy error, the first function with high complexity (complexity > 10), and the first function with low coverage (coverage < 50%), returning formatted issue descriptions. Parameters: ['file_data: dict', 'max_issues: int'] Returns: List[str]
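A minimal sketch of the prioritisation described above. The thresholds (complexity > 10, coverage < 50%) come from this reference; the nested layout and key names of file_data are assumptions for illustration.

```python
from typing import List


def extract_top_issues(file_data: dict, max_issues: int = 3) -> List[str]:
    """Pick at most max_issues headline problems from one file's metrics.

    Sketch only: the real key names inside file_data may differ.
    """
    issues: List[str] = []

    # 1. First MyPy error, if any.
    mypy_errors = file_data.get("mypy", {}).get("errors", [])
    if mypy_errors:
        issues.append(f"MyPy: {mypy_errors[0]}")

    # 2. First function whose complexity exceeds 10.
    for fn in file_data.get("functions", []):
        if fn.get("complexity", 0) > 10:
            issues.append(f"High complexity: {fn['name']} ({fn['complexity']})")
            break

    # 3. First function whose coverage is below 50%.
    for fn in file_data.get("functions", []):
        if fn.get("coverage", 100) < 50:
            issues.append(f"Low coverage: {fn['name']} ({fn['coverage']}%)")
            break

    return issues[:max_issues]
```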
build_refactor_prompt
Builds an LLM prompt requesting strategic refactoring suggestions for a list of offender files. The prompt summarizes up to the specified limit of files with significant code quality issues, applies a persona and template from the configuration, and includes both a summary and detailed offender information. The resulting prompt instructs the LLM to focus on identifying refactoring patterns rather than file-specific advice. Parameters: ['offenders: List[Tuple[str, float, list, int, float, float]]', 'config: Any', 'verbose: bool', 'limit: int'] Returns: str
build_strategic_recommendations_prompt
Constructs a detailed prompt for an LLM to generate strategic, actionable recommendations for improving code quality and test coverage based on severity data and summary metrics. The prompt summarizes the distribution of key issues (high complexity, low coverage, type errors), highlights problematic modules with multiple severe files, and lists the top offenders. It instructs the LLM to provide specific, non-generic recommendations tied directly to the identified files and modules, prioritizing complexity, coverage, type errors, and documentation. Parameters: ['severity_data: List[Dict[str, Any]]', 'summary_metrics: Union[Dict[str, Any], str]', 'limit: int'] Returns: str
compute_severity
Calculates a weighted severity score and summary metrics for a single module. The severity score combines counts of MyPy errors, pydocstyle lint issues, average function complexity, and lack of test coverage using fixed weights. Returns a dictionary with the file name, full path, error and issue counts, average complexity, average coverage percentage, and the computed severity score. Parameters: ['file_path: str', 'content: Dict[str, Any]'] Returns: Dict[str, Any]
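The description implies a weighted linear score over the four signals. The actual coefficients are not given in this reference, so the following sketch uses hypothetical weights and assumed key names purely to illustrate the shape of the computation.

```python
from typing import Any, Dict


def compute_severity(file_path: str, content: Dict[str, Any]) -> Dict[str, Any]:
    """Weighted severity score sketch; weights and key names are assumptions."""
    mypy_errors = len(content.get("mypy", {}).get("errors", []))
    lint_issues = len(content.get("pydocstyle", {}).get("issues", []))
    functions = content.get("functions", [])

    avg_complexity = (
        sum(f.get("complexity", 0) for f in functions) / len(functions)
        if functions else 0.0
    )
    avg_coverage = (
        sum(f.get("coverage", 0) for f in functions) / len(functions)
        if functions else 0.0
    )

    # Hypothetical fixed weights: type errors and lint issues count directly,
    # complexity is scaled up, and missing coverage (100 - coverage) is penalised.
    severity = (
        2.0 * mypy_errors
        + 1.0 * lint_issues
        + 1.5 * avg_complexity
        + 0.5 * (100.0 - avg_coverage)
    )

    return {
        "file": file_path.rsplit("/", 1)[-1],
        "full_path": file_path,
        "mypy_errors": mypy_errors,
        "lint_issues": lint_issues,
        "avg_complexity": round(avg_complexity, 2),
        "avg_coverage": round(avg_coverage, 2),
        "severity": round(severity, 2),
    }
```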
_summarise_offenders
Aggregates a list of offender files into a summary of key code quality issues. Counts and lists up to five example files for each of the following categories: high complexity (complexity > 8), low coverage (coverage < 50%), and many type errors (more than 5 type errors). Returns a formatted multiline string summarizing the counts and sample file names per category. Parameters: ['offenders: List[Tuple[str, float, list, int, float, float]]'] Returns: str
_fmt
Formats a list of strings as a comma-separated string, truncating with '...' if more than five items. Parameters: ['lst: List[str]'] Returns: str
_format_offender_block
Formats offender file details into a summary block for inclusion in LLM prompts. If verbose is True, returns a detailed multiline summary for each file including severity score, error counts, complexity, coverage, and sample errors. Otherwise, returns a concise single-line summary per file. Parameters: ['offenders: List[Tuple[str, float, list, int, float, float]]', 'verbose: bool'] Returns: str
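These helpers appear designed to compose into a prompt-building pipeline. One plausible way to wire them together is sketched below; the function and parameter names come from this reference, while the audit-report path and its {path: content} structure are assumptions.

```python
# Plausible composition of the public helpers above; data shapes are assumed.
import json

from scripts.ai.llm_optimization import (
    build_strategic_recommendations_prompt,
    compute_severity,
)

with open("audit_report.json", encoding="utf-8") as fh:  # hypothetical path
    report = json.load(fh)

# compute_severity takes (file_path, content) per this reference; treating the
# report as a mapping of file paths to metric dicts is an assumption.
severity_data = [compute_severity(path, content) for path, content in report.items()]

prompt = build_strategic_recommendations_prompt(
    severity_data=severity_data,
    summary_metrics={"files_analyzed": len(severity_data)},  # illustrative
    limit=10,
)
print(prompt)
```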
scripts.ai.llm_refactor_advisor

🧠 Docstring Summary
Section | Content |
---|---|
Description | This module provides functionality to load audit reports and build refactor prompts for an AI assistant. It includes functions to load JSON audit data, extract top offenders based on various metrics, and generate prompts for AI assistance. |
Args | — |
Returns | — |
🛠️ Functions
load_audit
Loads and returns audit data from a JSON file at the given path. Parameters: ['path: str'] Returns: dict
extract_top_offenders
Identifies and ranks the top offending files in an audit report based on code quality metrics. Processes the report data to compute a composite score for each file using MyPy errors, linting issues, average code complexity, and average test coverage. Applies special weighting for "app/views.py" to prioritize its score. Returns a list of the top N files sorted by descending score, with each entry containing the file path, score, error and lint counts, average complexity, and average coverage. Parameters: ['report_data: dict', 'top_n: int'] Returns: list
build_refactor_prompt
Constructs a prompt for an AI assistant summarizing top risky files for refactoring. Generates a ranked list of offender files with their associated metrics and formats it into a prompt using a template and persona from the configuration. Optionally limits the number of offenders included and adds detailed file paths if verbose is True. Parameters: ['offenders: list', 'config: Any', 'subcategory: str', 'verbose: bool', 'limit: int'] Returns: str
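A likely end-to-end flow for this module is sketched below. Function names and parameters are from this reference; the ConfigManager import path, its constructor, the report path, and the subcategory key are placeholders, not documented defaults.

```python
# Sketch of the advisor flow; configuration handling is an assumption.
from scripts.ai.llm_refactor_advisor import (
    build_refactor_prompt,
    extract_top_offenders,
    load_audit,
)
from scripts.config_manager import ConfigManager  # assumed location

report_data = load_audit("audit_report.json")        # hypothetical path
offenders = extract_top_offenders(report_data, top_n=5)

prompt = build_refactor_prompt(
    offenders,
    config=ConfigManager(),            # assumed constructor
    subcategory="refactor_advisor",    # hypothetical subcategory key
    verbose=True,
    limit=5,
)
print(prompt)
```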
scripts.ai.llm_router

🧠 Docstring Summary
Section | Content |
---|---|
Description | This module provides functionality to retrieve prompt templates and apply personas to prompts for an AI assistant. It includes functions to get prompt templates based on subcategories and modify prompts according to specified personas. |
Args | — |
Returns | — |
🛠️ Functions
get_prompt_template
Retrieves the prompt template for a specified subcategory from the configuration. If the subcategory is not found, returns the default prompt template. Parameters: ['subcategory: str', 'config: Any'] Returns: str
apply_persona
Appends persona-specific instructions to a prompt to tailor the AI's response style. If the persona is "reviewer", "mentor", or "planner", a corresponding instruction is added to the prompt. If the persona is "default" or unrecognized, the prompt is returned unchanged. Parameters: ['prompt: str', 'persona: str'] Returns: str
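Based on the behaviour described, apply_persona is presumably a small conditional append. A sketch follows; the exact instruction wording is an assumption, only the persona names and pass-through behaviour come from this reference.

```python
def apply_persona(prompt: str, persona: str) -> str:
    """Append a persona-specific instruction; sketch with assumed wording."""
    instructions = {
        "reviewer": "Respond as a meticulous code reviewer.",
        "mentor": "Respond as a patient mentor explaining trade-offs.",
        "planner": "Respond as a pragmatic technical planner.",
    }
    extra = instructions.get(persona)
    if extra is None:  # "default" or unrecognized personas pass through unchanged
        return prompt
    return f"{prompt}\n\n{extra}"
```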
scripts.ai.module_docstring_summarizer

🧠 Docstring Summary
Section | Content |
---|---|
Description | This module summarizes a Python module's functionality from its collected docstrings. It loads a JSON audit report of documented functions, formats each file's docstring entries into a prompt, and uses an AI summarizer to produce concise, human-readable module summaries, written either to Markdown or to standard output. |
Args | — |
Returns | — |
🛠️ Functions
summarize_module
Generates a concise summary of a Python module's functionality based on its docstrings. Formats the provided docstring entries into a structured prompt, applies persona adjustments, and uses the given AI summarizer to produce a human-readable summary. Returns a fixed message if no docstrings are available. Parameters: ['file_path: str', 'doc_entries: list', 'summarizer: AISummarizer', 'config: ConfigManager'] Returns: str
run
Executes the module docstring summarization workflow for a given JSON audit report. Processes each file in the report, optionally filtering by file path substring, and generates a summary of its documented functions using an AI summarizer. Outputs the results either to a Markdown file or to standard output. Parameters: ['input_path: str', 'output_path: str | None', 'path_filter: str | None'] Returns: None
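Invocation is presumably as simple as pointing run at an audit report. A short sketch with placeholder paths and filter value (parameter names are from this reference):

```python
from scripts.ai.module_docstring_summarizer import run

# Paths and the filter value are placeholders for illustration.
run(
    input_path="audit_report.json",
    output_path="module_summaries.md",  # None would print to stdout instead
    path_filter="scripts/ai",           # only summarize files matching this substring
)
```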
scripts.ai.module_idea_generator
🧠 Docstring Summary
Section | Content |
---|---|
Description | No module description available. |
Args | — |
Returns | — |
🛠️ Functions
suggest_new_modules
Generates new module or package suggestions and corresponding Python prototype code based on an architecture report. Reads a JSON report of documented functions, filters them by an optional path substring, and summarizes their docstrings. Uses an AI summarizer to propose new modules/packages with justifications and then generates Python code stubs for the suggested modules, adhering to strict naming conventions. Parameters: ['artifact_path: str', 'config: ConfigManager', 'subcategory: str', 'path_filter: str | None'] Returns: tuple[str, str]
generate_test_stubs
Generates pytest unit test stubs for the provided module prototype code. Parameters: ['prototype_code: str', 'config: ConfigManager'] Returns: str
extract_filenames_from_code
Extracts Python filenames from generated code by searching for '# Filename:' comment annotations embedded in the code blocks.
export_prototypes_to_files
Writes generated code or test stubs to files based on embedded '# Filename' comments. Scans code blocks for filename annotations, adjusts names for test files if needed, adds required imports, and writes each code block to the appropriate file within the specified output directory. Parameters: ['prototype_code: str', 'output_dir: str', 'suggestions: str', 'is_test: bool'] Returns: Any
validate_test_coverage
Checks that all public functions and methods in the given module directory have corresponding pytest tests in the specified test directory. Parameters: ['module_dir: str', 'test_dir: str'] Returns: list[str]
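The functions in this module chain naturally: generate suggestions and prototypes, derive test stubs, write both to disk, then validate coverage. The sketch below assumes the tuple order (suggestions, prototype_code), the ConfigManager import path, and the directories and subcategory key; function and parameter names are from this reference.

```python
# Sketch of the idea-generation flow; several names below are assumptions.
from scripts.ai.module_idea_generator import (
    export_prototypes_to_files,
    generate_test_stubs,
    suggest_new_modules,
    validate_test_coverage,
)
from scripts.config_manager import ConfigManager  # assumed location

config = ConfigManager()  # assumed constructor

# Tuple order (suggestions, prototype_code) is an assumption.
suggestions, prototype_code = suggest_new_modules(
    artifact_path="audit_report.json",   # hypothetical architecture report
    config=config,
    subcategory="module_ideas",          # hypothetical prompt subcategory
    path_filter="scripts/ai",
)

test_code = generate_test_stubs(prototype_code, config)

export_prototypes_to_files(prototype_code, "generated/", suggestions, is_test=False)
export_prototypes_to_files(test_code, "generated/tests/", suggestions, is_test=True)

missing = validate_test_coverage("generated/", "generated/tests/")
if missing:
    print("Functions without tests:", missing)
```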