A comprehensive automation and algorithms reference project demonstrating modern testing patterns and best practices.
Sloth Python is an educational and professional-grade project combining:
- 🧪 Advanced test automation frameworks (Robot Framework, pytest, Playwright)
- 🤖 AI-powered self-healing test locators that automatically repair broken selectors
- 🏗️ Comprehensive algorithm library (data structures, ML, divide & conquer, and more)
- ⚙️ Production-ready CI/CD workflows using GitHub Actions
- 🔧 AI-driven test script generation from natural-language goals using MCP
Perfect for learning modern test automation, exploring algorithms, or as a reference for professional test frameworks.
- Quick Start
- Installation
- Getting Started as a Contributor
- Configuration
- Running Tests
- Self-Healing Framework
- AI-Generated Test Scripts
- CI/CD Pipeline
- Project Structure
- Best Practices
- Troubleshooting
- Documentation
- Contributing
- License
- Support & Feedback
- Advanced Test Automation: Robot Framework and pytest examples for unit, API, and Playwright-based UI testing
- Self-Healing Locators: AI-assisted Playwright framework that automatically detects and repairs broken element selectors
- AI Test Script Generation: MCP-aware Playwright workflow that generates runnable pytest UI tests from natural-language goals
- Algorithm Library: Curated implementations of algorithms, data structures, and machine learning concepts
- Production-Ready CI/CD: GitHub Actions workflows for automated smoke tests and nightly regression suites
- Comprehensive Examples: Real-world test scenarios and automation patterns
- Python 3.12+ (Tested with Python 3.14)
- Git
New to open source? Check out GETTING_STARTED.md for a beginner-friendly guide.
Want to contribute? Start with the Contributing Guide.
- Clone the repository:
git clone https://github.com/466725/sloth-python.git
cd sloth-python
- Create and activate a virtual environment:
# Windows
py -3.14 -m venv .venv
.\.venv\Scripts\activate

# Linux/Mac
python3 -m venv .venv
source .venv/bin/activate
- Install dependencies:
pip install -r requirements.txt
Note: This installs the packages used by Robot Framework, pytest, Playwright, and the supporting demo utilities.
- Install Playwright browsers:
playwright install
Runtime settings are centralized in utils/config.py and read from environment variables with safe defaults.
| Variable | Default | Description |
|---|---|---|
| `TANGERINE_URL` | `https://www.tangerine.ca/en/personal` | Base URL for Tangerine UI tests |
| `DEEP_SEEK_URL` | `https://api.deepseek.com` | Base URL for DeepSeek-compatible API calls |
| `OPENAI_URL` | `https://api.openai.com` | Base URL for OpenAI API calls |
| `UI_LOCALE` | `en-US` | Browser locale used by Playwright-based UI tests |
| `SLEEP_TIME` | `1` | Generic sleep duration used in selected fixtures |
| `COOKIE_BANNER_TIMEOUT_SECONDS` | `5` | Wait time for Tangerine cookie banner handling |
| `PW_HEADLESS` | `true` | Playwright headless mode (1/0, true/false, yes/no, on/off) |
| `AI_GEN_MODEL` | `gpt-4.1` | Model used by the UI test generator |
| `AI_GEN_BASE_URL` | `OPENAI_URL` value | OpenAI-compatible base URL used by the generator |
| `AI_GEN_MAX_DOM_CHARS` | `12000` | Max DOM/element-tree size sent to the model |
| `AI_GEN_OUTPUT_DIR` | `pytest_demo/tests/ai/generated_playwright` | Default output folder for generated tests |
Quick local check:
python -m utils.config

The project uses pytest as the primary test runner. Configuration is handled in pytest.ini.
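A minimal pytest.ini sketch of the marker registration these commands rely on (marker names are taken from the commands in this README; the real file may differ):

```ini
[pytest]
markers =
    unit: fast, isolated unit tests
    api: API tests that call external services
    ui: browser-based UI tests
    playwright: tests that drive Playwright
```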
Common Commands:
# Run all tests
pytest
# Run only Unit and API tests (Fast)
pytest -m "unit or api"
# Run UI tests
pytest -m ui
# Generate Allure Report
pytest --alluredir=temps/allure-results --clean-alluredir
allure serve temps/allure-results

Unit Test Examples:
# Run all unit tests
pytest -m unit
# Run only csv reader unit tests
pytest pytest_demo/tests/unit/test_csv_reader.py -q
# Run one unit test case by node id
pytest pytest_demo/tests/unit/test_csv_reader.py::test_read_csv_to_list_converts_numeric_cells_to_int -q

Specific UI Suites:
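The behavior that test name describes (numeric cells converted to int) can be sketched roughly like this; this is an illustration of the idea, not the actual utils/csv_reader.py code, which reads from a file path:

```python
import csv
import io


def read_csv_to_list(csv_text: str) -> list[list]:
    """Parse CSV text into rows, converting purely numeric cells to int."""
    rows = []
    for row in csv.reader(io.StringIO(csv_text)):
        converted = []
        for cell in row:
            stripped = cell.strip()
            # lstrip("-") so negative integers also convert
            if stripped.lstrip("-").isdigit():
                converted.append(int(stripped))
            else:
                converted.append(stripped)
        rows.append(converted)
    return rows
```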
# Tangerine (Playwright Only)
pytest pytest_demo/tests/ui/tangerine_playwright

For pytest_demo/tests/ui/tangerine_playwright, Playwright records video per test and keeps/attaches it only when a test fails. Videos are written under temps/playwright-videos/tangerine_playwright/.
This project demonstrates three ways to test APIs. They target different needs and can coexist in the same repo.
| Approach | Location | Strengths | Trade-offs |
|---|---|---|---|
| Pytest + pure Python (requests / SDK) | `pytest_demo/tests/api/deep_seek_api_test.py` | Maximum flexibility, strongest Python debugging, easy fixture/parametrize patterns | Less business-readable for non-Python users |
| Robot + Python keyword library | `robot_demo/api/deep_seek_api_hybrid_test.robot` + `robot_demo/api/deep_seek_keywords.py` | Readable Robot test flow with reusable Python logic for complex handling | Requires maintaining both `.robot` and `.py` layers |
| Robot-only (RequestsLibrary) | `robot_demo/api/deep_seek_api_test.robot` | Fully keyword-driven API checks, easy for Robot-focused contributors | Complex payload/assertion logic can become verbose in `.robot` |
When to use which:
- Use Pytest + Python when API logic is complex (custom retries, advanced validation, reusable helpers).
- Use Robot + Python keyword when you want readable Robot scenarios but still need Python power behind keywords.
- Use Robot-only RequestsLibrary for straightforward request/response checks and fully keyword-driven demos.
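The Robot + Python keyword pattern amounts to a plain Python class whose public methods Robot exposes as keywords. This is a hypothetical sketch, not the actual deep_seek_keywords.py contents:

```python
# Illustrative keyword library. A .robot file would import it with
#   Library    deep_seek_keywords.py
# and call e.g.  Build Chat Payload    Hello

class DeepSeekKeywords:
    # Reuse one library instance across the suite
    ROBOT_LIBRARY_SCOPE = "SUITE"

    def build_chat_payload(self, prompt: str, model: str = "deepseek-chat") -> dict:
        """Return a chat-completions request body for the given prompt."""
        return {
            "model": model,
            "messages": [{"role": "user", "content": prompt}],
        }

    def response_should_contain_choice(self, response_json: dict) -> str:
        """Assert the API response has at least one choice; return its text."""
        choices = response_json.get("choices", [])
        assert choices, "API response contained no choices"
        return choices[0]["message"]["content"]
```

Complex retry or validation logic lives in Python, while the .robot file stays a readable sequence of keyword calls.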
Run commands:
# Pytest API demo
pytest -q pytest_demo/tests/api/deep_seek_api_test.py
# Robot API demo (Robot + Python keyword library)
python -m robot --outputdir temps/robot_api robot_demo/api/deep_seek_api_hybrid_test.robot
# Robot API demo (Robot-only RequestsLibrary)
python -m robot --outputdir temps/robot_api robot_demo/api/deep_seek_api_test.robot

All DeepSeek demos use OPENAI_API_KEY; DEEP_SEEK_URL is optional and defaults to https://api.deepseek.com.
Generated Playwright tests default to pytest_demo/tests/ai/generated_playwright/. For generation commands, examples, and configuration, see AI-Generated UI Test Scripts.
Robot Framework demos are located in robot_demo/.
The suite under robot_demo/tangerine_playwright/ mirrors the Tangerine UI coverage from pytest_demo/tests/ui/tangerine_playwright.
Included checks:
- Homepage title validation
- Sign-in navigation title validation
- Sign-up navigation title validation
Suite lifecycle:
- Suite Setup: Open Browser Session
- Test Setup: Open Tangerine Homepage
- Test Teardown: Capture Failure Artifacts
- Suite Teardown: Close Browser Session
The shared Test Setup always opens the Tangerine homepage and accepts the cookie banner when it is present.
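In a .robot file, that lifecycle wiring looks roughly like this (the library file name is hypothetical; the keyword names are the ones listed above):

```robotframework
*** Settings ***
Library           tangerine_keywords.py
Suite Setup       Open Browser Session
Test Setup        Open Tangerine Homepage
Test Teardown     Capture Failure Artifacts
Suite Teardown    Close Browser Session
```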
Run All Demos:
# Output results under temps/robot_all
python -m robot --outputdir temps/robot_all robot_demo/

Run Specific Suite:
python -m robot --outputdir temps/robot_calculator robot_demo/calculator/
python -m robot --outputdir temps/robot_tangerine_playwright robot_demo/tangerine_playwright/

Optional dry run (syntax and keyword wiring only):
python -m robot --dryrun --outputdir temps/robot_tangerine_playwright_dryrun robot_demo/tangerine_playwright/

Reports:
Robot generates output.xml, log.html, and report.html in the selected output directory under temps/.
Artifact behavior (Tangerine suite):
- For temps/robot_tangerine_playwright/, failure screenshots are saved under artifacts/playwright/screenshots/
- For temps/robot_tangerine_playwright/, failure videos are saved under artifacts/playwright/videos/
- Screenshot and video links are logged into the Robot log.html/report.html
- Passed-test videos are deleted to reduce artifact size
Import path note:
The Tangerine Robot keyword libraries self-bootstrap the project root import path, so running with -P is optional for normal local usage.
This project includes an advanced self-healing mechanism for Playwright-based UI tests that automatically detects and repairs broken locators.
Location: pytest_demo/self_healing/
Locator Store:
- pytest_demo/locators/signinpage.json
- pytest_demo/locators/signuppage.json
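A locator-store entry might look like the following; the field names and selectors here are illustrative, so inspect the actual JSON files for the real schema:

```json
{
  "tangerine.login": {
    "primary": "#login",
    "backups": ["text=Sign In", "css=a[href*='login']"]
  }
}
```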
- Primary Locator → The framework first attempts the primary locator
- Backup Locators → On failure, it tries backup selectors from the locator store
- DOM Scanning → It then scans the page DOM for similar elements using fuzzy matching
- Auto-Update → If a match is found, the test passes and the page-specific locator file is updated automatically
- Resilience → Subsequent test runs use the updated selector
- Reduced Maintenance: Eliminates manual locator fixes after UI changes
- Improved Stability: Tests are more resilient to minor DOM alterations
- Smart Learning: System learns from failures and improves over time
The Robot suite in robot_demo/tangerine_playwright/ uses the same self-healing locator store, but limits healing to these keys in the Playwright keywords:
- tangerine.login
- tangerine.signup
Locator definitions are shared from:
- pytest_demo/locators/signinpage.json
- pytest_demo/locators/signuppage.json
Robot mode currently runs with read-only healing (auto_update=False) so it can recover using stored locator strategies without silently rewriting the locator files.
This project includes an AI-powered test generation system that automatically creates pytest + Playwright test scripts from live page context.
The system follows a 6-step end-to-end workflow:
- Browser Automation - Playwright opens the target page
- Context Collection - MCP-style context is captured (DOM, screenshots, network events)
- Prompt Construction - Context + goal are packaged into a structured prompt
- AI Generation - OpenAI-compatible model generates Python test code
- Code Normalization - Output is cleaned, validated, and formatted
- File Output - Runnable test file is written to pytest_demo/tests/ai/generated_playwright/
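Step 5 above (code normalization) typically amounts to stripping markdown fences from the model's reply and ensuring required imports are present. A simplified sketch, not the actual generator code:

```python
import re

REQUIRED_IMPORTS = ("import pytest", "from playwright.sync_api import Page")


def normalize_generated_code(raw: str) -> str:
    """Strip a markdown code fence and prepend any missing required imports."""
    # Remove a surrounding ```python ... ``` fence if the model emitted one
    match = re.search(r"```(?:python)?\n(.*?)```", raw, flags=re.DOTALL)
    code = match.group(1) if match else raw
    code = code.strip() + "\n"
    missing = [imp for imp in REQUIRED_IMPORTS if imp not in code]
    if missing:
        code = "\n".join(missing) + "\n\n" + code
    return code
```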
Prerequisites:
- OPENAI_API_KEY environment variable must be set
- All dependencies in requirements.txt installed (includes mcp and openai)
- Playwright browsers installed: playwright install
Step 1: Set your API key
$env:OPENAI_API_KEY = "<your-openai-api-key>"

Supports OpenAI-compatible endpoints:
- OpenAI (default)
- DeepSeek
- Azure OpenAI
- Other OpenAI API clones
Step 2: Generate a test (example: Tangerine homepage)
python -m pytest_demo.ai_generation.cli `
--url "https://www.tangerine.ca/en/personal" `
--goal "Verify homepage loads and Sign In button is visible" `
--test-name "test_tangerine_homepage" `
--output "pytest_demo/tests/ai/generated_playwright/test_tangerine_homepage.py"

Step 3: Run the generated test
pytest -q pytest_demo/tests/ai/generated_playwright/test_tangerine_homepage.py

python -m pytest_demo.ai_generation.cli [OPTIONS]
Options:
--url TEXT Target page URL (required)
--goal TEXT Natural language test goal (required)
--test-name TEXT Generated function name (default: test_generated_ui_flow)
--output TEXT Output file path (uses AI_GEN_OUTPUT_DIR by default)
--model TEXT LLM model name (default: gpt-4.1)
--base-url TEXT OpenAI-compatible endpoint (default: OPENAI_URL)
--headless {true,false} Run Playwright headless (default: from PW_HEADLESS)
--help Show help message
Generate sign-in page test:
python -m pytest_demo.ai_generation.cli `
--url "https://www.tangerine.ca/app/#/login" `
--goal "Verify sign-in page loads and username/password fields are present" `
--test-name "test_tangerine_signin"

Generate sign-up page test:
python -m pytest_demo.ai_generation.cli `
--url "https://www.tangerine.ca/app/#/signup" `
--goal "Verify sign-up page loads and registration form is visible" `
--test-name "test_tangerine_signup"

Run all generated tests:
pytest -q pytest_demo/tests/ai/generated_playwright

AI generation settings are read from utils/config.py and environment variables:
| Variable | Default | Description |
|---|---|---|
| `AI_GEN_MODEL` | `gpt-4.1` | LLM model identifier |
| `AI_GEN_BASE_URL` | OpenAI endpoint | API base URL (OpenAI-compatible) |
| `AI_GEN_MAX_DOM_CHARS` | `12000` | Max DOM size sent to model |
| `AI_GEN_OUTPUT_DIR` | `pytest_demo/tests/ai/generated_playwright` | Default output folder |
Override defaults via environment variables:
$env:AI_GEN_MODEL = "gpt-4.1-mini"
$env:AI_GEN_BASE_URL = "https://api.deepseek.com"
$env:AI_GEN_OUTPUT_DIR = "pytest_demo/tests/ai/my_generated_tests"

| Module | Purpose |
|---|---|
| `mcp_context.py` | Collects Playwright page context (MCP protocol) |
| `prompt_builder.py` | Constructs structured prompts for the AI model |
| `ai_client.py` | OpenAI-compatible chat client |
| `generator.py` | Orchestrates generation: context → AI → file |
| `cli.py` | Command-line interface for test generation |
Location: pytest_demo/ai_generation/
When you run the generator with the homepage goal, it produces:
from playwright.sync_api import Page
import pytest
@pytest.mark.ui
def test_tangerine_homepage(page: Page):
page.goto("https://www.tangerine.ca/en/personal", wait_until="domcontentloaded")
# Verify page loads with correct title
assert page.title() == "Tangerine"
# Verify Sign In button is visible
login_button = page.locator("#login")
assert login_button.is_visible()
# Verify main heading is present
heading = page.locator("h1")
assert heading.is_visible()
assert "Welcome" in heading.text_content()

Key features of generated tests:
- ✅ Fully runnable pytest + Playwright code
- ✅ Uses resilient locator strategies (id, css, text)
- ✅ Includes meaningful assertions based on page context
- ✅ Safe fallback template if AI response is malformed
- ✅ Automatically adds missing imports
- ✅ Ready to integrate with CI/CD
Generated tests are plain pytest + Playwright files. They do not automatically wrap interactions with the self-healing helpers, so treat them as a strong starting point and adapt them if you want them to participate in the locator-store workflow described in the self-healing section.
The generator and its configuration are covered by focused unit tests:
pytest -q pytest_demo/tests/ai/test_ai_generation.py pytest_demo/tests/unit/test_config.py

Tests validate:
- Prompt structure includes goal and MCP context
- Code normalization from fenced markdown responses
- Fallback template generation
- Root-relative output path resolution
- Configuration defaults and environment overrides
- API Costs: OpenAI API calls are charged per token. Monitor usage during testing.
- Model Selection: gpt-4.1 recommended for best results; gpt-4.1-mini for cost savings
- Headless Mode: Use --headless false during development to watch test generation
- DOM Size: Large pages (>12000 chars) are truncated. Adjust with AI_GEN_MAX_DOM_CHARS
- Test Quality: Review generated tests before committing. AI is powerful but not perfect.
- Custom Endpoints: Use --base-url for DeepSeek, Azure, or self-hosted OpenAI clones
Automated testing is orchestrated through GitHub Actions workflows to ensure code quality and early defect detection.
Smoke Tests (Push + Pull Request)
- Run on pushes to main/master
- Run on pull requests targeting main/master while the PR is open
- Include pytest unit + api coverage and the Robot calculator suite
- Provide fast feedback on core regressions
Nightly Regression Suite (2 AM UTC)
- Runs from the scheduled workflow at 0 2 * * *
- Installs Playwright browsers with dependencies
- Executes the full pytest suite and all Robot suites
- Attempts Allure report generation after the test run
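A nightly workflow of that shape might look like this; the file name and step details are illustrative, so check .github/workflows/ for the real definitions:

```yaml
# .github/workflows/nightly.yml (illustrative)
name: Nightly Regression
on:
  schedule:
    - cron: "0 2 * * *"   # 2 AM UTC
jobs:
  regression:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.12"
      - run: pip install -r requirements.txt
      - run: playwright install --with-deps
      - run: pytest --alluredir=temps/allure-results
      - run: python -m robot --outputdir temps robot_demo/
      - uses: actions/upload-artifact@v4
        if: always()
        with:
          name: test-reports
          path: temps/
```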
When available, generated reports are uploaded to the run's Artifacts section, including:
- allure-report/
- temps/log.html
- temps/report.html
- temps/output.xml
- Navigate to the workflow run on GitHub
- Download the artifacts zip file
- Extract and open report.html in your browser
Use these commands to mimic the core CI flow locally (see Running Tests for more command variants):
# Run smoke tests
pytest -m "unit or api"
python -m robot robot_demo/calculator/
# Run a nightly-like full pass
playwright install
pytest --tb=short --maxfail=5
python -m robot --outputdir temps robot_demo/

sloth-python/
├── algorithms/ # Algorithms & Data Structures
│ ├── backtracking/ # Backtracking algorithms
│ ├── divide_and_conquer/ # Divide & conquer patterns
│ ├── machine_learning/ # ML implementations (KNN, SVM, Decision Trees, etc.)
│ ├── maths/ # Mathematical algorithms
│ ├── searches/ # Search algorithms (binary, linear, etc.)
│ ├── sorts/ # Sorting algorithms
│ ├── strings/ # String manipulation algorithms
│ ├── conversions/ # Number system conversions
│ └── data_structures/ # Trees, heaps, queues, stacks, tries, etc.
│
├── pytest_demo/ # Pytest Test Suite
│ ├── ai_generation/ # AI + MCP context driven script generator
│ ├── tests/ # Test cases
│ │ ├── ai/ # AI-generation tests and generated Playwright scripts
│ │ │ └── generated_playwright/
│ │ ├── unit/ # Unit tests
│ │ ├── api/ # API tests (Requests)
│ │ └── ui/ # UI tests
│ │ └── tangerine_playwright/
│ ├── self_healing/ # Self-healing Playwright framework
│ ├── locators/ # Locator repository (signinpage.json, signuppage.json)
│ ├── conftest.py # Pytest fixtures & configuration
│ └── ...
│
├── robot_demo/ # Robot Framework demo suites (API/UI/keyword patterns)
│ ├── api/ # API demos (Robot-only RequestsLibrary and Robot + Python keywords)
│ ├── calculator/ # Calculator test suite
│ └── tangerine_playwright/ # Tangerine UI suite (custom Playwright library)
│
├── fun_part/ # Educational & Fun Examples
│ ├── go_game/ # Game implementations
│ ├── bilibili/ # API demo projects
│ └── web_programming/ # Web examples
│
├── utils/ # Shared Utilities
│ ├── config.py # Configuration management
│ ├── constants.py # Application constants
│ └── csv_reader.py # CSV utilities
│
├── .github/workflows/ # GitHub Actions CI/CD definitions
├── requirements.txt # Python dependencies
├── pytest.ini # Pytest configuration
├── pyproject.toml # Project metadata
└── README.md # This file
- algorithms/ - Production-ready implementations for learning and reference
- pytest_demo/ - Complete test automation examples with best practices
- robot_demo/ - Robot demo suites for API and UI automation patterns
- utils/ - Reusable components (config, constants, helpers)
This project demonstrates industry best practices:
- Page Object Model (POM) - Maintainable UI test structure
- Fixtures & Dependency Injection - Pytest fixtures for test setup/teardown
- Marker-Based Organization - Categorize tests with markers such as unit, api, ui, and playwright
- Parameterization - Run the same test with multiple data sets
- Self-Healing - AI-powered locator recovery mechanism
- Type Hints - Type annotations for better IDE support and documentation
- Docstrings - Comprehensive module and function documentation
- Error Handling - Proper exception handling and logging
- Configuration Management - Externalized config for different environments
- DRY Principle - Reusable utilities and helper functions
- Automated Testing - Smoke tests on PRs, full regression nightly
- Report Generation - HTML and Allure reports for test visibility
- Artifact Management - Uploaded for debugging and report review
Issue: "ModuleNotFoundError" when running tests
# Solution: Ensure virtual environment is activated and dependencies installed
source .venv/bin/activate # or .\.venv\Scripts\activate on Windows
pip install -r requirements.txt

Issue: Playwright tests time out
# Solution: Install browsers and retry a focused UI suite first
playwright install
pytest pytest_demo/tests/ui/tangerine_playwright -q

Issue: Locator selector not found in Playwright
- If the test uses the self-healing helpers, the framework may recover automatically
- Check pytest_demo/locators/signinpage.json and pytest_demo/locators/signuppage.json for updated selectors
- Manual fix: Update the JSON or run with the -v flag for detailed logs
If you find Sloth Python useful — whether for learning, professional automation, or as a reference — please consider sponsoring!
Your support helps fund:
- 🛠️ Ongoing maintenance and new features
- 🤖 AI/Playwright tooling improvements
- 📚 More algorithm and test examples
- ⏱️ Faster responses to issues and PRs
Even a small monthly contribution makes a big difference. Thank you! 🙏
Contributions are welcome and appreciated! Whether you're fixing bugs, adding features, improving documentation, or sharing new algorithm implementations, we'd love your help.
- Fork the repository on GitHub
- Create a feature branch with a descriptive name:
git checkout -b feature/add-new-algorithm
git checkout -b fix/self-healing-bug
git checkout -b docs/improve-readme
- Make your changes and ensure code quality:
- Follow PEP 8 style guidelines
- Add type hints for new functions
- Include docstrings and comments
- Write unit tests for new functionality
- Test your changes locally:
pytest -m "unit or api"  # Quick smoke test
pytest --tb=short        # Full test suite
- Commit with clear messages:
git commit -m "feat: add new sorting algorithm"
git commit -m "fix: correct self-healing locator logic"
- Push your branch and create a Pull Request on GitHub with:
- Clear title and description
- Reference to any related issues (e.g., Fixes #42)
- Explanation of changes and why they're needed
- Algorithms - New algorithm implementations in algorithms/ (with tests)
- Test Automation - Enhanced Robot Framework keywords, new UI test examples
- Self-Healing - Improvements to the locator recovery mechanism
- AI Generation - Enhancements to the MCP-driven test generator
- Documentation - README updates, code examples, tutorials
- CI/CD - Workflow improvements, additional test coverage
# Complete initial setup first (see Installation), then:
# Create and switch to feature branch
git checkout -b feature/your-feature-name
# Install dependencies (if adding new packages)
pip install -r requirements.txt
# Make your changes and test
pytest
python -m robot robot_demo/calculator/
# Commit and push
git add .
git commit -m "feat: describe your changes"
git push origin feature/your-feature-name
# Create Pull Request on GitHub- Python - PEP 8, type hints, docstrings
- Tests - Pytest or Robot Framework with clear naming
- Documentation - Updated README.md or inline comments for complex logic
- Commit Messages - Clear, concise, use conventional commits (feat:, fix:, docs:, etc.)
- GitHub Discussions - Ask questions and share ideas
- GitHub Issues - Report bugs or request features
- Check existing issues - Your question might already be answered
Thank you for contributing! 🙌
This project is licensed under the MIT License.
The MIT License permits:
- ✅ Commercial use
- ✅ Modification
- ✅ Distribution
- ✅ Private use
With the conditions:
⚠️ License and copyright notice must be included
- GitHub Issues - Report bugs and request features
- GitHub Discussions - Ask questions, share ideas, and discuss best practices
- Documentation - Check README.md and inline code comments for implementation details
- Example Tests - Review pytest_demo/ and robot_demo/ for working examples
Found a bug? Please open an issue with:
- Python version and OS (e.g., Python 3.14 on Windows 11)
- Steps to reproduce the issue
- Expected vs actual behavior
- Error message and stack trace (if applicable)
- Environment details (e.g., Playwright version, headless/headed mode)
Have an idea for improvement? Open an issue with:
- Clear description of the feature or problem
- Proposed solution or use case
- Alternative approaches you've considered (if any)
- Examples or code snippets showing the idea
Have questions or want to discuss testing strategies? Use GitHub Discussions to:
- Share test automation patterns and best practices
- Get advice on test framework choices
- Discuss algorithm implementations
- Connect with other contributors
We actively monitor both Issues and Discussions—your feedback helps improve this project!
- CONTRIBUTING.md - Detailed guidelines for contributing code, algorithms, or documentation
- CODE_OF_CONDUCT.md - Community standards and expectations for respectful interaction
- SECURITY.md - How to responsibly report security vulnerabilities
- ISSUES_AND_PULL_REQUESTS.md - Templates and guidelines for issues and PRs
We provide issue templates to streamline reporting:
- Bug Reports - For issues and problems
- Feature Requests - For new functionality ideas
- Documentation - For improvements to docs
- Questions - For general inquiries (consider using Discussions instead)
- Python - Programming language
- Pytest - Testing framework
- Playwright - Modern browser automation
- Robot Framework - Keyword-driven testing
- OpenAI API - AI-powered test generation
This project draws on industry best practices from:
- Test automation communities
- Software engineering principles
- Algorithm research and implementations
We welcome feedback, contributions, and ideas from the community. If you find this project useful, please consider:
- ⭐ Starring the repository
- 🔗 Sharing it with others
- 🤝 Contributing improvements
- 💬 Providing feedback via Issues or Discussions