dashboard
dashboard.__init__
🧠 Docstring Summary
| Section | Content |
|---|---|
| Description | No module description available. |
| Args | — |
| Returns | — |
dashboard.ai_integration
🧠 Docstring Summary
| Section | Content |
|---|---|
| Description | No module description available. |
| Args | — |
| Returns | — |
📦 Classes
AIIntegration
No description available. Parameters: ['self: Any', 'config: Any', 'summarizer: Any'] Returns: None
🛠️ Functions
__init__
Initializes the AIIntegration instance with configuration and summarizer components. Parameters: ['self: Any', 'config: Any', 'summarizer: Any'] Returns: Any
generate_audit_summary
Generates an AI-driven audit summary based on provided metrics context. Combines a persona-enriched audit summary prompt with the given metrics context and returns a summarized audit report as a string. Parameters: ['self: Any', 'metrics_context: str'] Returns: str
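The implementation is not reproduced in this reference; the following is a minimal sketch of the described behaviour, assuming the summarizer exposes a `summarize(prompt) -> str` method and that the persona is read from the config object (both are assumptions, not documented API).

```python
# Hypothetical sketch of generate_audit_summary; the prompt wording, the
# config attribute "persona", and the summarizer interface are assumptions.
def generate_audit_summary(self, metrics_context: str) -> str:
    persona = getattr(self.config, "persona", "senior software auditor")
    prompt = (
        f"You are a {persona}. Write a concise audit summary of the "
        f"following project metrics:\n\n{metrics_context}"
    )
    return self.summarizer.summarize(prompt)  # assumed summarizer method
```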
generate_refactor_advice
Generates AI-driven refactoring advice based on analysis of merged code data. Analyzes the provided merged data to identify the top offenders for refactoring, constructs a contextual prompt, and returns a summary suggestion along with the list of top offenders. Parameters: ['self: Any', 'merged_data: Any', 'limit: int'] Returns: Any
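A sketch of the offender-ranking step described above; the `severity_score` field and the summarizer call are assumptions about the merged report schema.

```python
# Hypothetical sketch of generate_refactor_advice.
def generate_refactor_advice(self, merged_data, limit: int = 5):
    # Rank files by an assumed per-file severity metric, worst first.
    offenders = sorted(
        merged_data.items(),
        key=lambda item: item[1].get("severity_score", 0),
        reverse=True,
    )[:limit]
    context = "\n".join(f"{path}: {info}" for path, info in offenders)
    prompt = f"Suggest refactorings for these high-severity files:\n{context}"
    return self.summarizer.summarize(prompt), offenders
```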
generate_strategic_recommendations
Generates strategic recommendations based on merged code analysis data. Writes the merged data to a temporary JSON file and invokes a CLI assistant in strategic mode with the specified limit and persona. Returns the output generated by the CLI assistant. Parameters: ['self: Any', 'merged_data: Any', 'limit: int'] Returns: Any
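The CLI assistant itself is not documented here; the sketch below only illustrates the temp-file-plus-subprocess flow, with a placeholder command name (`ai-assistant`) and placeholder flags.

```python
import json
import subprocess
import tempfile

# Hypothetical sketch of generate_strategic_recommendations; the command name
# and its flags are placeholders, not the real CLI interface.
def generate_strategic_recommendations(self, merged_data, limit: int = 10):
    # Persist the merged analysis so the CLI assistant can read it from disk.
    with tempfile.NamedTemporaryFile("w", suffix=".json", delete=False) as tmp:
        json.dump(merged_data, tmp)
        tmp_path = tmp.name
    result = subprocess.run(
        ["ai-assistant", "--mode", "strategic",
         "--input", tmp_path, "--limit", str(limit),
         "--persona", getattr(self.config, "persona", "architect")],
        capture_output=True, text=True, check=True,
    )
    return result.stdout  # the assistant's generated recommendations
```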
chat_general
Generates an AI-driven summary response to a user query based on analyzed code report data. Parameters: ['self: Any', 'user_query: Any', 'merged_data: Any'] Returns: Any
chat_code
Generates an AI-driven code analysis summary for a specific file based on user input. Builds a detailed context using the file's complexity and linting information, issue locations, placeholder module summaries, and AI-generated refactor recommendations. Incorporates the user's query and persona to produce a comprehensive code analysis summary for the file. Parameters: ['self: Any', 'file_path: Any', 'complexity_info: Any', 'lint_info: Any', 'user_query: Any'] Returns: Any
chat_doc
Generates a summary of a module's documentation using the provided functions list. Parameters: ['self: Any', 'module_path: Any', 'funcs: Any'] Returns: Any
dashboard.app
🧠 Docstring Summary
| Section | Content |
|---|---|
| Description | No module description available. |
| Args | — |
| Returns | — |
🛠️ Functions
init_artifacts_dir
¶
Returns the directory to use for artifacts, preferring the given default if it exists. If the specified default directory does not exist, returns the current directory instead. Parameters: ['default_dir: str'] Returns: str
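The described fallback behaviour fits in a couple of lines; a minimal sketch (the real function may differ in detail):

```python
import os

# Prefer the supplied default directory when it exists; otherwise fall back
# to the current working directory, as described above.
def init_artifacts_dir(default_dir: str) -> str:
    return default_dir if os.path.isdir(default_dir) else "."
```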
dashboard.data_loader
🧠 Docstring Summary
| Section | Content |
|---|---|
| Description | No module description available. |
| Args | — |
| Returns | — |
🛠️ Functions
is_excluded
Determines whether a file path should be excluded based on predefined patterns. Parameters: ['path: str'] Returns: bool
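A minimal sketch of the pattern check; the actual exclusion patterns live in the module, so the globs below are placeholders.

```python
import fnmatch

# Placeholder patterns; the real module defines its own exclusion list.
EXCLUDED_PATTERNS = ["*/tests/*", "*/__pycache__/*", "*.min.js"]

def is_excluded(path: str) -> bool:
    # Excluded as soon as the path matches any configured glob pattern.
    return any(fnmatch.fnmatch(path, pattern) for pattern in EXCLUDED_PATTERNS)
```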
load_artifact
Loads a JSON artifact from the specified path, supporting compressed and specialized formats. Attempts to load a coverage-related JSON artifact from the given path, handling .comp.json.gz, .comp.json, and plain .json variants. Applies specialized decompression for known report formats and filters out top-level keys matching exclusion criteria. Parameters: ['path: str'] Returns: Dict[str, Any]
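A simplified sketch of the loading strategy, assuming the path is passed without an extension and that `is_excluded` (defined in this module) filters top-level keys; report-specific decompression is omitted.

```python
import gzip
import json
import os
from typing import Any, Dict

# Simplified sketch of load_artifact; the extension probing order mirrors the
# description above, and specialized report handling is left out.
def load_artifact(path: str) -> Dict[str, Any]:
    for suffix in (".comp.json.gz", ".comp.json", ".json"):
        candidate = path + suffix
        if not os.path.exists(candidate):
            continue
        opener = gzip.open if candidate.endswith(".gz") else open
        with opener(candidate, "rt", encoding="utf-8") as handle:
            data = json.load(handle)
        # Drop top-level keys that match the exclusion patterns.
        return {key: value for key, value in data.items() if not is_excluded(key)}
    return {}
```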
weighted_coverage
Calculates the lines-of-code weighted average coverage from function coverage data. Parameters: ['func_dict: Dict[str, Any]'] Returns: float
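In other words, coverage is averaged as `sum(loc * coverage) / sum(loc)`. A sketch assuming each function entry exposes `loc` and `coverage` fields (the key names are assumptions):

```python
from typing import Any, Dict

# LOC-weighted average coverage; "loc" and "coverage" key names are assumed.
def weighted_coverage(func_dict: Dict[str, Any]) -> float:
    total_loc = sum(func.get("loc", 0) for func in func_dict.values())
    if total_loc == 0:
        return 0.0
    weighted_sum = sum(
        func.get("loc", 0) * func.get("coverage", 0.0)
        for func in func_dict.values()
    )
    return weighted_sum / total_loc
```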
dashboard.metrics
🧠 Docstring Summary
| Section | Content |
|---|---|
| Description | Module: scripts/dashboard/metrics.py. Extracts all data-transformation and metrics logic from the Streamlit app. |
| Args | — |
| Returns | — |
🛠️ Functions
compute_executive_summary
Generates high-level summary metrics for the dashboard's Executive Summary. Aggregates unique test counts, average strictness and severity scores, number of production files, overall coverage percentage, and percentage of missing documentation from the provided data sources. Parameters: ['merged_data: Dict[str, Any]', 'strictness_data: Dict[str, Any]'] Returns: Dict[str, Any]
get_low_coverage_modules
Returns the modules with the lowest coverage percentages. Iterates over modules in the strictness data, excluding filtered files, and collects their coverage values. Returns a list of (module name, coverage) tuples for the modules with the lowest coverage, sorted in ascending order. Parameters: ['strictness_data: Dict[str, Any]', 'top_n: int'] Returns: List[Tuple[str, float]]
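A sketch of the selection, assuming each module entry carries a `coverage` value; the exclusion of filtered files is part of the real function but is only noted in a comment here.

```python
from typing import Any, Dict, List, Tuple

# Sketch of get_low_coverage_modules; the "coverage" key is an assumption.
def get_low_coverage_modules(strictness_data: Dict[str, Any],
                             top_n: int = 10) -> List[Tuple[str, float]]:
    coverages = [
        (module, float(info.get("coverage", 0.0)))
        for module, info in strictness_data.items()
        # The real function also skips modules matching the exclusion filters.
    ]
    return sorted(coverages, key=lambda pair: pair[1])[:top_n]  # lowest first
```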
coverage_by_module
Calculates line-of-code weighted coverage for each module and returns the modules with the lowest coverage. Parameters: ['merged_data: Dict[str, Any]', 'top_n: int'] Returns: List[Tuple[str, float]]
compute_severity
Calculates a severity score for a file based on linting errors, code complexity, and coverage. The severity score combines the number of mypy errors, pydocstyle lint issues, average function complexity, and coverage ratio using weighted factors. Returns a dictionary summarizing the file's name, path, error counts, average complexity, average coverage percentage, and computed severity score. Parameters: ['file_path: str', 'content: Dict[str, Any]'] Returns: Dict[str, Any]
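The exact weights are not documented; the sketch below illustrates the weighted blend with placeholder weights and assumed report field names.

```python
import os
from typing import Any, Dict

# Hypothetical sketch of compute_severity; weights and key names are placeholders.
def compute_severity(file_path: str, content: Dict[str, Any]) -> Dict[str, Any]:
    mypy_errors = len(content.get("mypy", {}).get("errors", []))
    lint_issues = len(content.get("pydocstyle", {}).get("issues", []))
    funcs = list(content.get("functions", {}).values())
    avg_complexity = sum(f.get("complexity", 0) for f in funcs) / len(funcs) if funcs else 0.0
    avg_coverage = sum(f.get("coverage", 0.0) for f in funcs) / len(funcs) if funcs else 0.0
    # Errors and complexity push severity up; coverage pulls it down.
    severity = (
        2.0 * mypy_errors
        + 1.0 * lint_issues
        + 1.5 * avg_complexity
        + 3.0 * (1.0 - avg_coverage / 100.0)
    )
    return {
        "name": os.path.basename(file_path),
        "path": file_path,
        "mypy_errors": mypy_errors,
        "lint_issues": lint_issues,
        "avg_complexity": round(avg_complexity, 2),
        "avg_coverage": round(avg_coverage, 1),
        "severity_score": round(severity, 2),
    }
```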
compute_severity_df
Builds a DataFrame summarizing severity metrics for all files. Applies the provided severity computation function to each file in the merged data and constructs a DataFrame from the results, sorted by severity score in descending order with the index reset. Parameters: ['merged_data: Dict[str, Any]', 'compute_severity_fn: Any'] Returns: pd.DataFrame
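The construction is straightforward; a sketch assuming the per-file dictionaries carry a `severity_score` field (as in the sketch above):

```python
from typing import Any, Callable, Dict

import pandas as pd

# Build one row per file, then order by severity, worst first.
def compute_severity_df(
    merged_data: Dict[str, Any],
    compute_severity_fn: Callable[[str, Dict[str, Any]], Dict[str, Any]],
) -> pd.DataFrame:
    rows = [compute_severity_fn(path, content) for path, content in merged_data.items()]
    return (
        pd.DataFrame(rows)
        .sort_values("severity_score", ascending=False)
        .reset_index(drop=True)
    )
```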
build_prod_to_tests_df
Creates a DataFrame mapping each production module to its unique covering tests and related metrics. Deduplicates tests by name within each module, retaining the highest severity and corresponding strictness for each test. Calculates the average strictness and severity across unique tests per module, and lists the names of all covering tests. The resulting DataFrame includes the production module name, test count, average strictness, average severity, and a comma-separated list of test names, sorted by test count in descending order. Parameters: ['strictness_data: Dict[str, Any]'] Returns: pd.DataFrame
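A sketch of the deduplication and aggregation; the per-test field names (`name`, `severity`, `strictness`) and the `tests` list are assumptions about the strictness report schema.

```python
from typing import Any, Dict

import pandas as pd

# Hypothetical sketch of build_prod_to_tests_df.
def build_prod_to_tests_df(strictness_data: Dict[str, Any]) -> pd.DataFrame:
    rows = []
    for module, info in strictness_data.items():
        # Deduplicate tests by name, keeping the highest-severity occurrence.
        unique: Dict[str, Dict[str, Any]] = {}
        for test in info.get("tests", []):
            name = test["name"]
            if name not in unique or test.get("severity", 0) > unique[name].get("severity", 0):
                unique[name] = test
        if not unique:
            continue
        count = len(unique)
        rows.append({
            "module": module,
            "test_count": count,
            "avg_strictness": sum(t.get("strictness", 0.0) for t in unique.values()) / count,
            "avg_severity": sum(t.get("severity", 0.0) for t in unique.values()) / count,
            "tests": ", ".join(sorted(unique)),
        })
    return pd.DataFrame(rows).sort_values("test_count", ascending=False).reset_index(drop=True)
```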
severity_distribution
Categorizes tests into Low, Medium, and High severity buckets based on their highest observed severity. Deduplicates tests globally by test name, retaining only the highest severity for each test, and returns a count of tests in each severity category. Parameters: ['strictness_data: Dict[str, Any]'] Returns: Dict[str, int]
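A sketch of the bucketing; the Low/Medium/High thresholds are placeholders, since the actual cut-offs are not documented here.

```python
from typing import Any, Dict

# Hypothetical sketch of severity_distribution; thresholds are placeholders.
def severity_distribution(strictness_data: Dict[str, Any]) -> Dict[str, int]:
    # Deduplicate tests globally, keeping each test's highest observed severity.
    highest: Dict[str, float] = {}
    for info in strictness_data.values():
        for test in info.get("tests", []):
            name = test["name"]
            highest[name] = max(highest.get(name, 0.0), test.get("severity", 0.0))
    buckets = {"Low": 0, "Medium": 0, "High": 0}
    for severity in highest.values():
        if severity < 3:
            buckets["Low"] += 1
        elif severity < 7:
            buckets["Medium"] += 1
        else:
            buckets["High"] += 1
    return buckets
```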