
TheChart Developer Guide

📖 Consolidated Documentation: This document combines multiple documentation files for better organization and easier navigation.

Table of Contents

Overview

Development setup, testing, and architecture

Development Environment Setup

Prerequisites

  • Python 3.13+: Required for the application
  • uv: Fast Python package manager (10-100x faster than pip/Poetry)
  • Git: Version control

Quick Setup

```bash
# Clone and setup
git clone <repository-url>
cd thechart

# Install with uv (recommended)
make install

# Or manual setup
uv venv --python 3.13
uv sync
uv run pre-commit install --install-hooks --overwrite
```

Environment Activation

```bash
# fish shell (default)
source .venv/bin/activate.fish
# or
make shell

# bash/zsh
source .venv/bin/activate

# Using uv run (recommended)
uv run python src/main.py
```

Testing Framework

Test Infrastructure

Professional testing setup with comprehensive coverage and automation.

Testing Tools
  • pytest: Modern Python testing framework
  • pytest-cov: Coverage reporting (HTML, XML, terminal)
  • pytest-mock: Mocking support for isolated testing
  • coverage: Detailed coverage analysis
Test Statistics
  • 93% Overall Code Coverage (482 total statements, 33 missed)
  • 112 Total Tests across 6 test modules
  • 80 Tests Passing (71.4% pass rate)
Coverage by Module
| Module | Coverage | Status |
|--------|----------|--------|
| constants.py | 100% | Complete |
| logger.py | 100% | Complete |
| graph_manager.py | 97% | Excellent |
| __init__.py | 95% | Excellent |
| ui_manager.py | 93% | Very Good |
| main.py | 91% | Very Good |
| data_manager.py | 87% | Good |

Test Structure

Test Files
  • tests/test_data_manager.py (16 tests): CSV operations, validation, error handling
  • tests/test_graph_manager.py (14 tests): Matplotlib integration, dose calculations
  • tests/test_ui_manager.py (21 tests): Tkinter UI components, user interactions
  • tests/test_main.py (18 tests): Application integration, workflow testing
  • tests/test_constants.py (12 tests): Configuration validation
  • tests/test_logger.py (8 tests): Logging functionality
  • tests/test_init.py (23 tests): Initialization and setup
Test Fixtures (tests/conftest.py)
  • Temporary Files: Safe testing without affecting real data
  • Sample Data: Comprehensive test datasets with realistic dose information
  • Mock Loggers: Isolated logging for testing
  • Environment Mocking: Controlled test environments
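A sketch of the kind of helper such fixtures typically wrap. The file name and column schema here are illustrative, not the real contents of tests/conftest.py:

```python
import csv
import tempfile
from pathlib import Path

def make_sample_csv(directory: Path) -> Path:
    """Write a small, realistic dataset for tests to consume (illustrative schema)."""
    path = directory / "sample_data.csv"
    with path.open("w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["date", "dose", "notes"])
        writer.writerow(["2025-07-28", "2025-07-28 18:59:45:150mg", "baseline"])
        writer.writerow(["2025-07-29", "100mg|50mg", ""])
    return path

# In conftest.py this would typically be wrapped in a @pytest.fixture that
# hands tests the path and lets pytest's tmp_path handle cleanup.
sample_path = make_sample_csv(Path(tempfile.mkdtemp()))
rows = list(csv.DictReader(sample_path.read_text().splitlines()))
```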

Running Tests

Basic Testing

```bash
# Run all tests
make test
# or
uv run pytest

# Run specific test file
uv run pytest tests/test_graph_manager.py -v

# Run tests matching a specific pattern
uv run pytest -k "dose_calculation" -v
```

Coverage Testing

```bash
# Generate coverage report
uv run pytest --cov=src --cov-report=html

# Coverage for a specific module
uv run pytest tests/test_graph_manager.py --cov=src.graph_manager --cov-report=term-missing
```

Continuous Testing

```bash
# Watch for changes and re-run tests (requires the pytest-watch plugin;
# plain pytest has no --watch flag)
uv run ptw

# Quick test runner script
./scripts/run_tests.py
```

Pre-commit Testing

Automated testing prevents commits when core functionality is broken.

Configuration

Located in .pre-commit-config.yaml:

  • Core Tests: 3 essential tests run before each commit
  • Fast Execution: Only critical functionality tested
  • Commit Blocking: Prevents commits when tests fail
Core Tests
  1. test_init: DataManager initialization
  2. test_initialize_csv_creates_file_with_headers: CSV file creation
  3. test_load_data_with_valid_data: Data loading functionality
Usage
```bash
# Automatic on commit
git commit -m "Your changes"

# Manual pre-commit check
pre-commit run --all-files

# Run just the test check
pre-commit run pytest-check --all-files
```

Dose Calculation Testing

Comprehensive testing for the complex dose parsing and calculation system.

Test Categories
  • Standard Format: 2025-07-28 18:59:45:150mg → 150.0mg
  • Multiple Doses: 2025-07-28 18:59:45:150mg|2025-07-28 19:34:19:75mg → 225.0mg
  • With Symbols: • • • • 2025-07-30 07:50:00:300 → 300.0mg
  • Decimal Values: 2025-07-28 18:59:45:12.5mg|2025-07-28 19:34:19:7.5mg → 20.0mg
  • No Timestamps: 100mg|50mg → 150.0mg
  • Mixed Formats: • 2025-07-30 22:50:00:10|75mg → 85.0mg
  • Edge Cases: Empty strings, NaN values, malformed data → 0.0mg
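The formats above suggest a parser along the following lines. This is an illustrative sketch of the documented behavior, not the actual `_calculate_daily_dose` implementation in src/graph_manager.py:

```python
import re

def calculate_daily_dose(dose_str) -> float:
    """Sum every dose amount in a '|'-separated dose string (illustrative)."""
    if not isinstance(dose_str, str) or not dose_str.strip():
        return 0.0  # empty strings, NaN, and None all collapse to 0.0mg
    total = 0.0
    for part in dose_str.split("|"):
        # Drop bullet symbols, then allow an optional 'YYYY-MM-DD HH:MM:SS:' prefix.
        part = part.replace("•", "").strip()
        match = re.match(
            r"(?:\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}:)?\s*([\d.]+)\s*(?:mg)?$",
            part,
        )
        if match:
            total += float(match.group(1))
    return total  # malformed parts are skipped rather than raising
```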
Test Implementation
```python
# Example test case
def test_calculate_daily_dose_standard_format(self, graph_manager):
    dose_str = "2025-07-28 18:59:45:150mg|2025-07-28 19:34:19:75mg"
    result = graph_manager._calculate_daily_dose(dose_str)
    assert result == 225.0
```

Medicine Plotting Tests

Testing for the enhanced graph functionality with medicine dose visualization.

Test Areas
  • Toggle Functionality: Medicine show/hide controls
  • Dose Plotting: Bar chart generation for medicine doses
  • Color Coding: Proper color assignment and consistency
  • Legend Enhancement: Multi-column layout and average calculations
  • Data Integration: Proper data flow from CSV to visualization

UI Testing Strategy

Testing user interface components with mock frameworks to avoid GUI dependencies.

UI Test Coverage
  • Component Creation: Widget creation and configuration
  • Event Handling: User interactions and callbacks
  • Data Binding: Variable synchronization and updates
  • Layout Management: Grid and frame arrangements
  • Error Handling: User input validation and error messages
Mocking Strategy
```python
# Example UI test with mocking
from unittest.mock import Mock, patch

@patch('tkinter.Tk')
def test_create_input_frame(self, mock_tk, ui_manager):
    parent = Mock()
    result = ui_manager.create_input_frame(parent, {}, {})
    assert result is not None
    assert isinstance(result, dict)
```

Code Quality

Tools and Standards

  • ruff: Fast Python linter and formatter (Rust-based)
  • pre-commit: Git hook management for code quality
  • Type Hints: Comprehensive type annotations
  • Docstrings: Detailed function and class documentation

Code Formatting

```bash
# Format code
make format
# or
uv run ruff format .

# Lint code
make lint
# or
uv run ruff check .
```

Pre-commit Hooks

Automatically installed hooks ensure code quality:

  • Code Formatting: ruff formatting
  • Linting Checks: Code quality validation
  • Import Sorting: Consistent import organization
  • Basic File Checks: Trailing whitespace, file endings

Development Workflow

Feature Development

  1. Create Feature Branch: git checkout -b feature/new-feature
  2. Implement Changes: Follow existing patterns and architecture
  3. Add Tests: Ensure new functionality is tested
  4. Run Tests: make test to verify functionality
  5. Code Quality: make format && make lint
  6. Commit Changes: Pre-commit hooks run automatically
  7. Create Pull Request: For code review

Medicine System Development

Adding new medicines or modifying the medicine system:

```python
# Example: Adding a new medicine programmatically
from medicine_manager import MedicineManager, Medicine

medicine_manager = MedicineManager()
new_medicine = Medicine(
    key="sertraline",
    display_name="Sertraline",
    dosage_info="50mg",
    quick_doses=["25", "50", "100"],
    color="#9B59B6",
    default_enabled=False,
)
medicine_manager.add_medicine(new_medicine)
```
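A plausible shape for the `Medicine` record used above, shown as a self-contained sketch; the real definition lives in src/medicine_manager.py and may differ in fields and defaults:

```python
from dataclasses import dataclass, field

@dataclass
class Medicine:
    key: str                # stable identifier used in data columns
    display_name: str       # label shown in the UI
    dosage_info: str        # e.g. "50mg"
    quick_doses: list = field(default_factory=list)  # one-click dose buttons
    color: str = "#9B59B6"                           # plot/legend color
    default_enabled: bool = False                    # shown by default?

m = Medicine(key="sertraline", display_name="Sertraline", dosage_info="50mg",
             quick_doses=["25", "50", "100"])
```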

Testing New Features

  1. Unit Tests: Add tests for new functionality
  2. Integration Tests: Test feature integration with existing system
  3. UI Tests: Test user interface changes
  4. Dose Calculation Tests: If affecting dose calculations
  5. Regression Tests: Ensure existing functionality still works

Debugging and Troubleshooting

Logging

Application logs are stored in logs/ directory:

  • app.log: General application logs
  • app.error.log: Error messages only
  • app.warning.log: Warning messages only

Debug Mode

Enable debug logging by modifying src/logger.py configuration.

Common Issues

Test Failures
  • Matplotlib Mocking: Ensure proper matplotlib component mocking
  • Tkinter Dependencies: Use headless testing for UI components
  • File Path Issues: Use absolute paths in tests
  • Mock Configuration: Proper mock setup for external dependencies
Development Environment
  • Python Version: Ensure Python 3.13+ is used
  • Virtual Environment: Always work within the virtual environment
  • Dependencies: Keep dependencies up to date with uv sync --upgrade

Performance Testing

  • Dose Calculation Performance: Test with large datasets
  • UI Responsiveness: Test with extensive medicine lists
  • Memory Usage: Monitor memory consumption with large CSV files
  • Graph Rendering: Test graph performance with large datasets
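For the dose-calculation item, a throwaway harness like the following can flag regressions on large datasets. The synthetic data and simplified parsing are illustrative, not the application's real code path:

```python
import timeit

# 10,000 synthetic rows, two doses each
dose_strings = ["100mg|50mg"] * 10_000

def total_all(rows):
    """Sum every dose across the dataset (simplified parsing for the benchmark)."""
    return sum(
        sum(float(part.rstrip("mg")) for part in row.split("|"))
        for row in rows
    )

elapsed = timeit.timeit(lambda: total_all(dose_strings), number=10)
print(f"10 passes over {len(dose_strings)} rows: {elapsed:.3f}s")
```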

Architecture Documentation

Core Components

  • MedTrackerApp: Main application class
  • MedicineManager: Medicine CRUD operations
  • PathologyManager: Pathology/symptom management
  • GraphManager: Visualization and plotting
  • UIManager: User interface creation
  • DataManager: Data persistence and CSV operations

Data Flow

  1. User Input → UIManager → DataManager → CSV
  2. Data Loading → DataManager → pandas DataFrame → GraphManager
  3. Visualization → GraphManager → matplotlib → UI Display
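The flow can be sketched end to end with the standard library; the real app passes a pandas DataFrame between DataManager and GraphManager, and the function names here are illustrative:

```python
import csv
import io

def load_rows(csv_text):
    """Step 2, DataManager-style load: CSV text -> list of row dicts."""
    return list(csv.DictReader(io.StringIO(csv_text)))

def daily_totals(rows):
    """Step 3, GraphManager-style aggregation: rows -> {date: total mg}."""
    return {
        row["date"]: sum(float(p.rstrip("mg")) for p in row["dose"].split("|"))
        for row in rows
    }

series = daily_totals(load_rows("date,dose\n2025-07-28,150mg|75mg\n2025-07-29,100mg\n"))
```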

Extension Points

  • Medicine System: Add new medicine properties
  • Graph Types: Add new visualization types
  • Export Formats: Add new data export options
  • UI Components: Add new interface elements

Deployment Testing

Standalone Executable

```bash
# Build executable
make deploy

# Test deployment
./dist/thechart
```

Docker Testing

```bash
# Build container
make build

# Test container
make start
make attach
```

Cross-platform Testing

  • Linux: Primary development and testing platform
  • macOS: Planned support (testing needed)
  • Windows: Planned support (testing needed)

For user documentation, see README.md. For feature details, see docs/FEATURES.md.


Originally from: DEVELOPMENT.md

This document provides a comprehensive guide to testing in TheChart application.

Test Organization

Directory Structure

```text
thechart/
├── tests/                  # Unit tests (pytest)
│   ├── test_theme_manager.py
│   ├── test_data_manager.py
│   ├── test_ui_manager.py
│   ├── test_graph_manager.py
│   └── ...
├── scripts/               # Integration tests & demos
│   ├── integration_test.py
│   ├── test_menu_theming.py
│   ├── test_note_saving.py
│   └── ...
```

Test Categories

1. Unit Tests (/tests/)

Purpose: Test individual components in isolation
Framework: pytest
Location: /tests/ directory

Running Unit Tests
```bash
cd /home/will/Code/thechart
source .venv/bin/activate.fish
python -m pytest tests/
```
Available Unit Tests
  • test_theme_manager.py - Theme system and menu theming
  • test_data_manager.py - Data persistence and CSV operations
  • test_ui_manager.py - UI component functionality
  • test_graph_manager.py - Graph generation and display
  • test_constants.py - Application constants
  • test_logger.py - Logging system
  • test_main.py - Main application logic
Writing Unit Tests
```python
# Example unit test structure
import unittest
import sys
import os

# Add src to path
sys.path.insert(0, os.path.join(os.path.dirname(__file__), '..', 'src'))

from your_module import YourClass

class TestYourClass(unittest.TestCase):
    def setUp(self):
        """Set up test fixtures."""
        pass

    def tearDown(self):
        """Clean up after tests."""
        pass

    def test_functionality(self):
        """Test specific functionality."""
        pass
```

2. Integration Tests (/scripts/)

Purpose: Test complete workflows and system interactions
Framework: Custom test scripts
Location: /scripts/ directory

Available Integration Tests
integration_test.py

Comprehensive export system test:

  • Tests JSON, XML, PDF export formats
  • Validates data integrity
  • Tests file creation and cleanup
  • No GUI dependencies
```bash
.venv/bin/python scripts/integration_test.py
```
test_note_saving.py

Note persistence functionality:

  • Tests note saving to CSV
  • Validates special character handling
  • Tests note retrieval
test_update_entry.py

Entry modification functionality:

  • Tests data update operations
  • Validates date handling
  • Tests duplicate prevention
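The duplicate-prevention behavior amounts to an upsert: updating an entry for an existing date replaces the row rather than appending a second one. A hedged sketch of the idea (the real update logic lives in DataManager):

```python
def upsert_entry(rows, new_row, key="date"):
    """Replace the row whose `key` matches, otherwise append (illustrative)."""
    for i, row in enumerate(rows):
        if row[key] == new_row[key]:
            rows[i] = new_row     # update in place: no duplicate dates
            return rows
    rows.append(new_row)          # first entry for this date
    return rows

entries = [{"date": "2025-07-28", "dose": "150mg"}]
upsert_entry(entries, {"date": "2025-07-28", "dose": "150mg|75mg"})
upsert_entry(entries, {"date": "2025-07-29", "dose": "100mg"})
```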
test_keyboard_shortcuts.py

Keyboard shortcut system:

  • Tests key binding functionality
  • Validates shortcut responses
  • Tests keyboard event handling

3. Interactive Demonstrations (/scripts/)

Purpose: Visual and interactive testing of UI features Framework: tkinter-based demos

test_menu_theming.py

Interactive menu theming demonstration:

  • Live theme switching
  • Visual color display
  • Real-time menu updates
```bash
.venv/bin/python scripts/test_menu_theming.py
```

Running Tests

Complete Test Suite

```bash
cd /home/will/Code/thechart
source .venv/bin/activate.fish

# Run unit tests
python -m pytest tests/ -v

# Run integration tests
python scripts/integration_test.py

# Run specific feature tests
python scripts/test_note_saving.py
python scripts/test_update_entry.py
```

Individual Test Categories

```bash
# Unit tests only
python -m pytest tests/

# Specific unit test file
python -m pytest tests/test_theme_manager.py -v

# Integration test
python scripts/integration_test.py

# Interactive demo
python scripts/test_menu_theming.py
```

Test Runner Script

```bash
# Use the main test runner
python scripts/run_tests.py
```

Test Environment Setup

Prerequisites

  1. Virtual Environment: Ensure .venv is activated
  2. Dependencies: All requirements installed via uv
  3. Test Data: Main thechart_data.csv file present

Environment Activation

```bash
# Fish shell
source .venv/bin/activate.fish

# Bash/Zsh
source .venv/bin/activate
```

Writing New Tests

Unit Test Guidelines

  1. Place in /tests/ directory
  2. Use pytest framework
  3. Follow naming convention: test_<module_name>.py
  4. Include setup/teardown for fixtures
  5. Test edge cases and error conditions

Integration Test Guidelines

  1. Place in /scripts/ directory
  2. Test complete workflows
  3. Include cleanup procedures
  4. Document expected behavior
  5. Handle GUI dependencies appropriately

Interactive Demo Guidelines

  1. Place in /scripts/ directory
  2. Include clear instructions
  3. Provide visual feedback
  4. Allow easy theme/feature switching
  5. Include exit mechanisms

Test Data Management

Test File Creation

  • Use tempfile module for temporary files
  • Clean up created files in teardown
  • Don't commit test data to repository
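The pattern above, sketched with unittest lifecycle hooks; the column schema is illustrative:

```python
import csv
import os
import tempfile
import unittest

class TestWithTempCSV(unittest.TestCase):
    """Create a throwaway CSV per test and always clean it up."""

    def setUp(self):
        fd, self.csv_path = tempfile.mkstemp(suffix=".csv")
        with os.fdopen(fd, "w", newline="") as f:
            csv.writer(f).writerow(["date", "dose", "notes"])

    def tearDown(self):
        os.remove(self.csv_path)  # never leave test data in the repository

    def test_headers_written(self):
        with open(self.csv_path, newline="") as f:
            self.assertEqual(next(csv.reader(f)), ["date", "dose", "notes"])
```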

CSV Test Data

  • Most tests use main thechart_data.csv
  • Some tests create temporary CSV files
  • Integration tests may create export directories

Continuous Integration

Local Testing Workflow

```bash
# 1. Run linting (the project uses ruff, as described above)
uv run ruff check src/ tests/ scripts/

# 2. Run unit tests
python -m pytest tests/ -v

# 3. Run integration tests
python scripts/integration_test.py

# 4. Run specific feature tests as needed
python scripts/test_note_saving.py
```

Pre-commit Checklist

  • All unit tests pass
  • Integration tests pass
  • New functionality has tests
  • Documentation updated
  • Code follows style guidelines

Troubleshooting

Common Issues

Import Errors
```python
# Ensure src is in path
import sys
import os
sys.path.insert(0, os.path.join(os.path.dirname(__file__), '..', 'src'))
```
GUI Test Issues
  • Use root.withdraw() to hide test windows
  • Ensure proper cleanup with root.destroy()
  • Consider mocking GUI components for unit tests
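The mocking suggestion can be realized by injecting the toolkit, so tests substitute a mock and never open a window. The helper and its arguments are hypothetical, not part of the real UIManager API:

```python
from unittest.mock import MagicMock

def build_status_label(tk_module, parent, text):
    """Hypothetical helper: creates a label via whichever tk module is passed in."""
    return tk_module.Label(parent, text=text)

# In tests, hand in a MagicMock instead of the real tkinter module:
mock_tk = MagicMock()
build_status_label(mock_tk, parent=MagicMock(), text="Dose: 150mg")
```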
File Permission Issues
  • Ensure test has write permissions
  • Use temporary directories for test files
  • Clean up files in teardown methods

Debug Mode

```bash
# Run a script with debug logging enabled. Note that
# `python -c "..." scripts/test_script.py` does NOT execute the script,
# so configure logging and run the script in the same -c invocation:
python -c "import logging, runpy; logging.basicConfig(level=logging.DEBUG); runpy.run_path('scripts/test_script.py', run_name='__main__')"
```

Test Coverage

Current Coverage Areas

  • Theme management and menu theming
  • Data persistence and CSV operations
  • Export functionality (JSON, XML, PDF)
  • UI component initialization
  • Graph generation
  • Note saving and retrieval
  • Entry update operations
  • Keyboard shortcuts

Areas for Expansion

  • Medicine and pathology management
  • Settings persistence
  • Error handling edge cases
  • Performance testing
  • UI interaction testing

Contributing Tests

When contributing new tests:

  1. Choose the right category: Unit vs Integration vs Demo
  2. Follow naming conventions: Clear, descriptive names
  3. Include documentation: Docstrings and comments
  4. Test edge cases: Not just happy path
  5. Clean up resources: Temporary files, windows, etc.
  6. Update documentation: Add to this guide and scripts/README.md

Originally from: TESTING.md



This document was generated by the documentation consolidation system. Last updated: 2025-08-05 14:53:36