scripts/ci_analyzer
scripts.ci_analyzer.__init__
🧠 Docstring Summary
| Section | Content |
|---|---|
| Description | No module description available. |
| Args | — |
| Returns | — |
scripts.ci_analyzer.drilldown
🧠 Docstring Summary
| Section | Content |
|---|---|
| Description | This module provides functionality to generate detailed Markdown reports for the top offenders in code quality analysis. It includes functions to create drilldowns that summarize linting errors, complexity, coverage, and function descriptions for the top offenders. |
| Args | — |
| Returns | — |
🛠️ Functions
generate_top_offender_drilldowns
Generates a Markdown report with expandable drilldowns for the top N files with the most severe code quality issues. For each top offender, the report includes sections for MyPy errors, Pydocstyle issues, functions with high complexity or low coverage, and function docstring summaries where available. Parameters: `severity_df: Any`, `report_data: dict`, `top_n: int`. Returns: `str`.
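A minimal sketch of what such a generator might look like, assuming `severity_df` is a pandas DataFrame pre-sorted by severity with a `file` column, and that `report_data` maps file paths to dicts carrying `mypy` and `pydocstyle` issue lists. All key and column names here are assumptions, not the module's actual schema:

```python
from typing import Any

def generate_top_offender_drilldowns(severity_df: Any, report_data: dict, top_n: int) -> str:
    lines = ["## 🔎 Top Offender Drilldowns", ""]
    # Walk the N highest-severity files; severity_df is assumed pre-sorted.
    for file_path in severity_df["file"].head(top_n):
        content = report_data.get(file_path, {})
        lines.append("<details>")
        lines.append(f"<summary><code>{file_path}</code></summary>")
        lines.append("")
        lines.append(f"- MyPy errors: {len(content.get('mypy', []))}")
        lines.append(f"- Pydocstyle issues: {len(content.get('pydocstyle', []))}")
        lines.append("")
        lines.append("</details>")
        lines.append("")
    return "\n".join(lines)
```

The `<details>` element is what makes each drilldown expandable when the Markdown is rendered as HTML.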
scripts.ci_analyzer.metrics_summary
🧠 Docstring Summary
| Section | Content |
|---|---|
| Description | This module provides functionality to generate summary metrics from code quality reports. It includes functions to analyze report data and summarize key metrics related to methods, tests, complexity, docstrings, and linter issues. |
| Args | — |
| Returns | — |
🛠️ Functions
generate_metrics_summary
Generates a Markdown summary of key code quality metrics from report data. Aggregates counts of total methods audited, methods missing tests, high-complexity methods (complexity ≥ 10), methods missing docstrings, and linter issues from a nested report-data dictionary, and returns a Markdown-formatted string summarizing these metrics. Parameters: `report_data: dict`. Returns: `str`.
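A sketch of the aggregation, assuming each file's entry carries a `coverage` list of per-function records with `tested`, `complexity`, and `docstring` fields plus a `pydocstyle` issue list (these key names are assumptions):

```python
def generate_metrics_summary(report_data: dict) -> str:
    total = missing_tests = high_complexity = missing_docs = lint_issues = 0
    for content in report_data.values():
        for fn in content.get("coverage", []):
            total += 1
            missing_tests += not fn.get("tested", False)
            high_complexity += fn.get("complexity", 0) >= 10
            missing_docs += not fn.get("docstring")
        lint_issues += len(content.get("pydocstyle", []))
    return (
        "### 📊 Metrics Summary\n\n"
        f"- Methods audited: {total}\n"
        f"- Methods missing tests: {missing_tests}\n"
        f"- High-complexity methods (≥ 10): {high_complexity}\n"
        f"- Methods missing docstrings: {missing_docs}\n"
        f"- Linter issues: {lint_issues}\n"
    )
```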
scripts.ci_analyzer.severity_audit
🧠 Docstring Summary
| Section | Content |
|---|---|
| Description | This module provides functionality to generate a CI code quality audit report. It includes functions to format priority levels and generate summary header blocks based on severity metrics. |
| Args | — |
| Returns | — |
🛠️ Functions
format_priority
Returns a formatted priority label based on the given severity score. A high priority ("🔥 High") is assigned for scores above 30, medium ("⚠️ Medium") for scores above 15, and low ("✅ Low") otherwise. Parameters: `score: float`. Returns: `str`.
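The thresholds above translate directly into code; a sketch:

```python
def format_priority(score: float) -> str:
    if score > 30:
        return "🔥 High"
    if score > 15:
        return "⚠️ Medium"
    return "✅ Low"
```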
generate_header_block
Generates a Markdown header block summarizing key metrics from the CI code quality audit. Calculates totals and percentages for files analyzed, files with issues, top risk file, methods audited, missing tests, missing docstrings, and linter issues. Includes visual indicators for documentation, testing, and linting coverage. Parameters: `severity_df: Any`, `report_data: Dict[str, Dict]`. Returns: `str`.
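A sketch of the header assembly, assuming `severity_df` carries `file` and `score` columns; the column names and layout are assumptions, and the per-category coverage indicator lines are omitted for brevity:

```python
from typing import Any, Dict

def generate_header_block(severity_df: Any, report_data: Dict[str, Dict]) -> str:
    total_files = len(report_data)
    files_with_issues = int((severity_df["score"] > 0).sum()) if len(severity_df) else 0
    top_risk = severity_df.iloc[0]["file"] if len(severity_df) else "n/a"
    pct = 100 * files_with_issues / total_files if total_files else 0.0
    return (
        "# 🔍 CI Code Quality Audit\n\n"
        f"- Files analyzed: {total_files}\n"
        f"- Files with issues: {files_with_issues} ({pct:.1f}%)\n"
        f"- Top risk file: `{top_risk}`\n"
    )
```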
generate_severity_table
Generates a Markdown table ranking the top 10 files by severity for the CI audit report. If the severity DataFrame is empty, returns a placeholder row indicating that no files were found. Each row displays the file name, MyPy errors, lint issues, average complexity with a risk emoji, average coverage with a visual bar, severity score, and formatted priority level. Parameters: `severity_df: Any`. Returns: `str`.
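A sketch of the table assembly, reusing `format_priority` from this module and `risk_emoji`/`render_bar` from `scripts.ci_analyzer.visuals`; the DataFrame column names, and how complexity maps onto `risk_emoji`'s 0–100 scale, are guesses:

```python
from typing import Any

from scripts.ci_analyzer.visuals import render_bar, risk_emoji

def generate_severity_table(severity_df: Any) -> str:
    header = (
        "| File | MyPy | Lint | Complexity | Coverage | Score | Priority |\n"
        "|---|---|---|---|---|---|---|\n"
    )
    if severity_df.empty:
        return header + "| _No files found_ | — | — | — | — | — | — |\n"
    rows = []
    for _, row in severity_df.head(10).iterrows():
        # Mapping avg_complexity onto risk_emoji's 0-100 scale is a guess here.
        emoji = risk_emoji(max(0.0, 100 - 10 * row["avg_complexity"]))
        rows.append(
            f"| `{row['file']}` | {row['mypy_errors']} | {row['lint_issues']} "
            f"| {row['avg_complexity']:.1f} {emoji} "
            f"| {render_bar(row['avg_coverage'], 10)} | {row['score']:.1f} "
            f"| {format_priority(row['score'])} |"
        )
    return header + "\n".join(rows) + "\n"
```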
main
Generates a CI code quality audit report in Markdown format. Parses command-line arguments for input and output paths, loads audit data, computes severity metrics, and writes a formatted Markdown report summarizing code quality findings. Returns: `None`.
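A sketch of the entry point; the flag names (`--input`, `--output`) and the JSON input format are assumptions, not documented behavior:

```python
import argparse
import json
from pathlib import Path

from scripts.ci_analyzer.severity_index import compute_severity_index

def main() -> None:
    parser = argparse.ArgumentParser(description="Generate a CI code quality audit report.")
    parser.add_argument("--input", required=True, help="Path to the audit report JSON")
    parser.add_argument("--output", required=True, help="Path for the Markdown report")
    args = parser.parse_args()

    report_data = json.loads(Path(args.input).read_text(encoding="utf-8"))
    severity_df = compute_severity_index(report_data)
    markdown = (
        generate_header_block(severity_df, report_data)
        + generate_severity_table(severity_df)
    )
    Path(args.output).write_text(markdown, encoding="utf-8")
```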
scripts.ci_analyzer.severity_index
🧠 Docstring Summary
| Section | Content |
|---|---|
| Description | This module provides functionality to compute severity scores for code quality analysis. It includes functions to compute individual severity scores for files and to create a severity index DataFrame from report data. |
| Args | — |
| Returns | — |
🛠️ Functions
compute_severity
Calculates a severity score for a file using its coverage and linting report data. The severity score is a weighted sum of MyPy errors, Pydocstyle lint issues, average function complexity, and coverage deficit. Returns a dictionary summarizing these metrics and the computed severity score. Parameters: `file_path: str`, `content: dict`. Returns: `dict`.
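A sketch of the weighted sum; the weights and the report keys (`mypy`, `pydocstyle`, `coverage`) are illustrative assumptions:

```python
def compute_severity(file_path: str, content: dict) -> dict:
    mypy_errors = len(content.get("mypy", []))
    lint_issues = len(content.get("pydocstyle", []))
    functions = content.get("coverage", [])
    avg_complexity = (
        sum(f.get("complexity", 0) for f in functions) / len(functions)
        if functions else 0.0
    )
    avg_coverage = (
        sum(f.get("coverage", 0) for f in functions) / len(functions)
        if functions else 0.0
    )
    score = (
        2.0 * mypy_errors              # weight for type errors (assumed)
        + 1.0 * lint_issues            # weight for Pydocstyle issues (assumed)
        + 1.5 * avg_complexity         # weight for complexity (assumed)
        + 0.5 * (100 - avg_coverage)   # coverage deficit (assumed)
    )
    return {
        "file": file_path,
        "mypy_errors": mypy_errors,
        "lint_issues": lint_issues,
        "avg_complexity": avg_complexity,
        "avg_coverage": avg_coverage,
        "score": round(score, 2),
    }
```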
compute_severity_index
Aggregates severity scores for multiple files into a sorted DataFrame. Processes report data for each file, computes severity metrics, and returns a DataFrame sorted by severity score in descending order. If no data is provided, returns an empty DataFrame with predefined columns. Parameters: `report_data: dict`. Returns: `pd.DataFrame`.
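With per-file scoring in place, the aggregation reduces to a list comprehension plus a sort; a sketch assuming the dict shape returned by `compute_severity` above:

```python
import pandas as pd

_COLUMNS = ["file", "mypy_errors", "lint_issues", "avg_complexity", "avg_coverage", "score"]

def compute_severity_index(report_data: dict) -> pd.DataFrame:
    if not report_data:
        # Empty input: return an empty frame with the expected columns.
        return pd.DataFrame(columns=_COLUMNS)
    rows = [compute_severity(path, content) for path, content in report_data.items()]
    return (
        pd.DataFrame(rows)
        .sort_values("score", ascending=False)
        .reset_index(drop=True)
    )
```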
scripts.ci_analyzer.visuals
🧠 Docstring Summary
| Section | Content |
|---|---|
| Description | This module provides functionality to render visual representations of risk and scores for code quality analysis. It includes functions to generate risk emojis and bar representations based on severity scores. |
| Args | — |
| Returns | — |
🛠️ Functions
risk_emoji
Returns an emoji indicating risk level based on the given severity score. A green emoji ("🟢") represents low risk for scores 90 and above, yellow ("🟡") indicates moderate risk for scores between 70 and 89, and red ("🔴") signifies high risk for scores below 70. Parameters: `score: float`. Returns: `str`.
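As with `format_priority`, the stated thresholds translate directly; a sketch:

```python
def risk_emoji(score: float) -> str:
    if score >= 90:
        return "🟢"  # low risk
    if score >= 70:
        return "🟡"  # moderate risk
    return "🔴"      # high risk
```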
render_bar
Generates a horizontal bar visualizing a score as filled and unfilled segments. Parameters: `score: float`, `width: int`. Returns: `str`.
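A sketch assuming `score` is a 0–100 percentage; the block glyphs are an assumption:

```python
def render_bar(score: float, width: int = 10) -> str:
    # Clamp to [0, 100], then convert to a filled-segment count.
    filled = round(width * max(0.0, min(score, 100.0)) / 100)
    return "█" * filled + "░" * (width - filled)
```

For example, `render_bar(75, 10)` would produce `████████░░`.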