AI-Powered Test Case Generation
TestPlanIt integrates with leading AI providers to automatically generate comprehensive test cases from requirements, issues, and documentation. This powerful feature uses Large Language Models (LLMs) to understand your project context and create detailed, executable test scenarios.
Overview
The AI test case generation feature allows you to:
- Generate from Issues: Create test cases directly from Jira, GitHub, or Azure DevOps issues
- Generate from Documents: Create test cases from requirements documents or specifications
- Smart Field Population: Automatically populate custom template fields with relevant content
- Context-Aware Generation: Considers existing test cases to avoid duplication
- Flexible Quantity Control: Generate anything from a single test case to a comprehensive test suite
- Auto-tagging: Automatically generate and assign relevant tags
Supported AI Providers
OpenAI
- Models: GPT-4, GPT-4 Turbo, GPT-3.5 Turbo
- Authentication: API Key
- Strengths: Excellent natural language understanding, reliable structured output
Google Gemini
- Models: Gemini Pro, Gemini Pro Vision
- Authentication: API Key
- Strengths: Strong reasoning capabilities, cost-effective
Anthropic Claude
- Models: Claude 3 Opus, Claude 3 Sonnet, Claude 3 Haiku
- Authentication: API Key
- Strengths: Excellent instruction following, safety-focused
Ollama (Self-Hosted)
- Models: Llama 2, Code Llama, Mistral, and other open-source models
- Authentication: None (local deployment)
- Strengths: Privacy, no API costs, customizable
Azure OpenAI
- Models: GPT-4, GPT-3.5 Turbo (deployed on Azure)
- Authentication: API Key + Deployment Name
- Strengths: Enterprise features, data residency, SLA guarantees
Custom LLM
- Models: Any OpenAI-compatible API endpoint
- Authentication: Configurable (API Key)
- Strengths: Maximum flexibility, support for custom models
System Configuration
Administrator Setup
- Navigate to Administration → LLM Integrations
- Click Add LLM Integration
- Configure your preferred AI provider:
Name: "Production OpenAI"
Provider: OPENAI
Model: gpt-4-turbo-preview
Status: ACTIVE
OpenAI Configuration
API Key: sk-...your-openai-api-key
Model: gpt-4-turbo-preview
Max Tokens: 4096
Temperature: 0.7
Google Gemini Configuration
API Key: your-gemini-api-key
Model: gemini-pro
Max Tokens: 8192
Temperature: 0.7
Anthropic Claude Configuration
API Key: your-anthropic-api-key
Model: claude-3-sonnet-20240229
Max Tokens: 4096
Temperature: 0.7
Ollama Configuration
Base URL: http://localhost:11434
Model: llama2:13b
Max Tokens: 4096
Temperature: 0.7
Azure OpenAI Configuration
API Key: your-azure-openai-key
Endpoint: https://your-resource.openai.azure.com/
Deployment Name: gpt-4-deployment
API Version: 2024-02-15-preview
Max Tokens: 4096
Temperature: 0.7
Custom LLM Configuration
Base URL: https://your-custom-endpoint.com/v1
API Key: your-custom-api-key
Model: your-model-name
Max Tokens: 4096
Temperature: 0.7
Note: Custom LLM endpoints must be compatible with the OpenAI API format.
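In practice, "OpenAI-compatible" means the endpoint accepts the standard `/chat/completions` request shape. The sketch below assembles such a request using the placeholder values from the configuration above; it is illustrative, not TestPlanIt's internal client.

```python
import json

def build_chat_request(base_url: str, api_key: str, model: str,
                       prompt: str, max_tokens: int = 4096,
                       temperature: float = 0.7):
    """Build an OpenAI-compatible chat completion request (sketch)."""
    url = f"{base_url.rstrip('/')}/chat/completions"
    headers = {
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
    }
    body = {
        "model": model,
        "max_tokens": max_tokens,
        "temperature": temperature,
        "messages": [{"role": "user", "content": prompt}],
    }
    return url, headers, json.dumps(body)

url, headers, body = build_chat_request(
    "https://your-custom-endpoint.com/v1", "your-custom-api-key",
    "your-model-name", "Generate test cases for user login")
print(url)  # https://your-custom-endpoint.com/v1/chat/completions
```

If a custom endpoint rejects this request shape, it is not OpenAI-compatible and will not work with the Custom LLM provider.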
Project Assignment
After creating an LLM integration:
- Go to Project Settings → LLM Integrations
- Select the integration from available options
- Configure project-specific settings:
  - Default generation parameters
  - Field selection preferences
  - Auto-tagging preferences
- Save settings
Using AI Test Generation
Prerequisites
Before using AI test generation, ensure:
- At least one active LLM integration is configured
- At least one active issue tracking integration (for issue-based generation)
- Project has test case templates configured
- User has appropriate permissions for test case creation
Generation Wizard
The AI test generation wizard guides you through a 4-step process:
Step 1: Select Source
Choose your test generation source:
From Issue:
- Select an existing issue from your integrated tracking system
- Issues are automatically fetched with full context including descriptions and comments
- Supports Jira, GitHub Issues, Azure DevOps work items
From Document:
- Enter requirements directly into the form
- Provide title, description, and priority
- Ideal for early-stage requirements or internal specifications
Step 2: Select Template
- Choose the test case template to use for generated cases
- All template fields are displayed for review
- Select which fields to populate with AI-generated content
- Required fields are automatically included
- Optional fields can be included or excluded based on your needs
Step 3: Configure Generation
Quantity Options:
- Just One: Generate a single, comprehensive test case
- A Couple: Generate 2-3 focused test cases
- A Few: Generate 3-5 test cases covering different scenarios
- Several: Generate 5-8 test cases with good coverage
- Many: Generate 8-12 test cases for thorough testing
- Maximum: Generate a comprehensive test suite (12+ cases)
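The quantity keywords above can be thought of as mapping to target count ranges. The mapping below is a hypothetical sketch mirroring the listed options; the exact counts TestPlanIt requests internally may differ.

```python
# Hypothetical mapping of quantity keywords to test-case count ranges,
# mirroring the options listed above. None = open-ended upper bound.
QUANTITY_RANGES = {
    "just_one": (1, 1),
    "a_couple": (2, 3),
    "a_few": (3, 5),
    "several": (5, 8),
    "many": (8, 12),
    "maximum": (12, None),
}

def target_range(quantity: str):
    """Return the (min, max) number of cases for a quantity keyword."""
    return QUANTITY_RANGES[quantity]

print(target_range("several"))  # (5, 8)
```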
Additional Instructions:
- Provide specific guidance for the AI
- Example: "Focus on security testing scenarios"
- Common suggestions available as quick-add buttons:
  - Security testing
  - Edge cases
  - Happy path scenarios
  - Mobile compatibility
  - API testing
  - Accessibility testing
Auto-Generate Tags:
- Enable to automatically create and assign relevant tags
- Tags are generated based on test content and context
- Existing tags are reused when appropriate
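Existing-tag reuse can be pictured as a case-insensitive match against the project's current tag list. The following is a simplified sketch of that idea, not TestPlanIt's actual implementation:

```python
def resolve_tags(generated, existing):
    """Reuse an existing tag when it matches case-insensitively;
    otherwise keep the generated tag as a new one. Duplicates dropped."""
    by_lower = {t.lower(): t for t in existing}
    resolved, seen = [], set()
    for tag in generated:
        canonical = by_lower.get(tag.lower(), tag)
        if canonical.lower() not in seen:
            seen.add(canonical.lower())
            resolved.append(canonical)
    return resolved

print(resolve_tags(["Security", "login", "API"], ["security", "Login"]))
# ['security', 'Login', 'API']
```

Reusing existing tags this way keeps the project's tag vocabulary from fragmenting into near-duplicates.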
Step 4: Review and Import
- Review all generated test cases
- Each case shows:
  - Name and description
  - Generated test steps (if applicable)
  - Populated template fields
  - Generated tags (if enabled)
  - Priority and automation status
- Select specific test cases to import
- Bulk select/deselect options available
Generation Process
When you click "Generate":
- Context Analysis: The AI analyzes the source material and existing test cases
- Template Processing: Template fields and requirements are processed
- Content Generation: Test cases are generated based on your specifications
- Field Population: Custom fields are populated with relevant content
- Tag Generation: Tags are automatically created (if enabled)
- Quality Validation: Generated content is validated for completeness
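The final validation pass can be imagined as a completeness check over each generated case. A minimal sketch, with illustrative field names:

```python
def validate_case(case: dict):
    """Return a list of completeness problems for one generated case."""
    problems = []
    if not case.get("name", "").strip():
        problems.append("missing name")
    if not case.get("steps"):
        problems.append("no test steps")
    for i, step in enumerate(case.get("steps", []), start=1):
        if not step.get("expected"):
            problems.append(f"step {i} has no expected result")
    return problems

case = {"name": "Login with valid credentials",
        "steps": [{"action": "Open login page", "expected": "Form shown"},
                  {"action": "Submit", "expected": ""}]}
print(validate_case(case))  # ['step 2 has no expected result']
```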
Generated Content Structure
Test Case Fields
The AI automatically populates:
Core Fields:
- Name: Descriptive, action-oriented test case names
- Description: Detailed test objectives and scope (if template field exists)
- Priority: Inferred from source issue priority or requirement importance
Template Fields:
- Preconditions: Required setup or system state
- Test Data: Sample data needed for execution
- Environment: Target testing environment
- Expected Results: Detailed expected outcomes
- Post-conditions: Expected system state after testing
System Fields:
- Steps: Detailed action/expected result pairs
- Tags: Contextually relevant tags
- Automated: Suggestion for automation potential
- Estimate: Time estimate based on complexity
Test Steps Format
Generated test steps follow a consistent structure:
Step 1: Navigate to the login page
Expected Result: Login form is displayed with username and password fields
Step 2: Enter valid credentials ([email protected] / password123)
Expected Result: Credentials are accepted and validated
Step 3: Click the "Login" button
Expected Result: User is redirected to the dashboard
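Steps like these are naturally represented as action/expected pairs. A sketch of rendering that structure into the format shown above:

```python
def render_steps(steps):
    """Render action/expected pairs in the Step/Expected Result format."""
    lines = []
    for i, step in enumerate(steps, start=1):
        lines.append(f"Step {i}: {step['action']}")
        lines.append(f"Expected Result: {step['expected']}")
    return "\n".join(lines)

steps = [
    {"action": "Navigate to the login page",
     "expected": "Login form is displayed with username and password fields"},
    {"action": 'Click the "Login" button',
     "expected": "User is redirected to the dashboard"},
]
print(render_steps(steps))
```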
Advanced Features
Context Awareness
The AI considers:
- Existing Test Cases: Avoids duplication of current test scenarios
- Project Domain: Understands your application type and testing needs
- Template Structure: Adapts content to fit your specific template fields
- Issue History: Incorporates comments and updates from linked issues
Field Selection Optimization
- Required Fields: Always populated with essential content
- Optional Fields: Can be selectively included based on your workflow
- Field Types: Content is formatted appropriately for each field type:
  - Rich text fields receive formatted content
  - Dropdown fields receive valid option values
  - Multi-select fields receive appropriate value arrays
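Field-type-aware formatting amounts to a dispatch on field type. The following is a hypothetical sketch of that shaping logic (the type names are illustrative, not TestPlanIt identifiers):

```python
def coerce_for_field(field_type, value, options=None):
    """Shape AI-generated content to fit a template field (sketch)."""
    if field_type == "rich_text":
        return str(value)  # formatted text passes through unchanged
    if field_type == "dropdown":
        # Only a valid option is accepted; otherwise the field is left unset
        return value if value in (options or []) else None
    if field_type == "multi_select":
        # Keep only values that are valid options
        return [v for v in value if v in (options or [])]
    return value

print(coerce_for_field("dropdown", "High", options=["Low", "Medium", "High"]))
# High
print(coerce_for_field("multi_select", ["Web", "FTP"], options=["Web", "Mobile"]))
# ['Web']
```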
Intelligent Tagging
Auto-generated tags include:
- Functional Areas: Based on the feature being tested (e.g., authentication, payment)
- Test Types: Based on testing approach (e.g., integration, unit, e2e)
- Priorities: Based on issue priority or risk assessment
- Platforms: Based on mentioned platforms or environments
Best Practices
Source Material Quality
- Detailed Issues: More detailed issues produce better test cases
- Clear Requirements: Well-written requirements lead to comprehensive test coverage
- Include Context: Add comments or descriptions that explain business logic
- Specify Constraints: Mention any technical limitations or dependencies
Template Configuration
- Field Naming: Use descriptive field names that clearly indicate their purpose
- Field Types: Choose appropriate field types for different content types
- Required vs Optional: Mark fields as required only if they're truly essential
- Field Ordering: Arrange fields logically in the template
Generation Settings
- Start Small: Begin with fewer test cases and adjust based on quality
- Review Carefully: Always review generated content before importing
- Iterate: Use additional instructions to refine generation
- Tag Strategy: Develop a consistent tagging strategy for your project
Quality Assurance
- Review Generated Steps: Ensure test steps are executable and complete
- Validate Field Content: Check that generated content fits field constraints
- Test Data Verification: Ensure generated test data is appropriate and valid
- Link Verification: Confirm that generated test cases properly link to source issues
Troubleshooting
Common Issues
No AI providers available:
- Verify that at least one LLM integration is configured and active
- Check that the integration is assigned to your project
- Confirm your user has appropriate permissions
Generation fails with timeout:
- Try reducing the quantity of test cases to generate
- Simplify additional instructions
- Check API rate limits for your provider
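When rate limits are the cause, spacing out retries usually resolves the timeouts. A generic exponential-backoff sketch (the delays are illustrative, not TestPlanIt settings):

```python
def backoff_delays(attempts: int, base: float = 1.0, cap: float = 30.0):
    """Exponential backoff schedule: 1s, 2s, 4s, ... capped at `cap`."""
    return [min(base * (2 ** i), cap) for i in range(attempts)]

print(backoff_delays(6))  # [1.0, 2.0, 4.0, 8.0, 16.0, 30.0]
```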
Poor quality test cases:
- Provide more detailed source material
- Add specific instructions about testing focus
- Review and refine your template field definitions
- Consider using a more capable AI model
Fields not populating correctly:
- Verify field types in your template
- Check field naming and descriptions
- Ensure selected fields are appropriate for AI generation
Error Messages
"No AI model is configured"
- Add an LLM integration in project settings
- Ensure the integration is active and properly configured
"API quota exceeded"
- Your AI provider's usage limits have been reached
- Wait for quota reset or upgrade your plan
- Consider switching to a different provider
"Invalid API configuration"
- Check API keys and credentials
- Verify the model name is correct
- Test the integration connection
Performance Optimization
- Model Selection: Balance quality needs with response time
- Batch Processing: Generate multiple test cases in single requests when possible
- Field Selection: Only populate fields you actually need
- Template Optimization: Streamline templates for AI generation
API Integration
For programmatic access to AI test generation:
Key Endpoints
LLM Integrations:
- GET /api/llm-integrations - List available integrations
- POST /api/llm-integrations/test-connection - Test integration
- GET /api/llm-integrations/{id}/models - Get available models
Test Generation:
- POST /api/llm/generate-test-cases - Generate test cases
- POST /api/llm/validate-content - Validate generated content
- GET /api/llm/generation-history - Get generation history
Example Request
POST /api/llm/generate-test-cases
{
  "projectId": 123,
  "issue": {
    "key": "PROJ-456",
    "title": "User login functionality",
    "description": "Implement secure user authentication..."
  },
  "template": {
    "id": 789,
    "fields": [...selectedFields]
  },
  "context": {
    "userNotes": "Focus on security testing",
    "existingTestCases": [...],
    "folderContext": 10
  },
  "quantity": "several",
  "autoGenerateTags": true
}
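A minimal client-side sketch of assembling this payload. The endpoint path and field names come from the example above; `selected_fields` and the existing-case list are stand-ins you would populate from your own project data.

```python
import json

def build_generation_payload(project_id, issue, template_id, selected_fields,
                             user_notes, quantity="several",
                             auto_tags=True, folder_id=None):
    """Assemble the POST body for /api/llm/generate-test-cases (sketch)."""
    return {
        "projectId": project_id,
        "issue": issue,
        "template": {"id": template_id, "fields": selected_fields},
        "context": {"userNotes": user_notes,
                    "existingTestCases": [],   # stand-in: supply real cases
                    "folderContext": folder_id},
        "quantity": quantity,
        "autoGenerateTags": auto_tags,
    }

payload = build_generation_payload(
    123,
    {"key": "PROJ-456", "title": "User login functionality",
     "description": "Implement secure user authentication..."},
    789, [], "Focus on security testing", folder_id=10)
print(json.dumps(payload, indent=2))
```

Send the serialized payload with your usual authenticated HTTP client against your TestPlanIt instance.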
Security Considerations
Data Privacy
- API Requests: Source material is sent to AI providers for processing
- Retention: Most providers don't retain request data (verify with your provider)
- Sensitive Data: Avoid including sensitive information in source material
- Self-Hosted Options: Consider Ollama for maximum data privacy
Access Control
- Permission Model: Same as regular test case creation
- Audit Logging: All AI generation activities are logged
- Rate Limiting: Built-in rate limiting prevents abuse
Migration and Updates
Upgrading AI Providers
- Create new integration with updated settings
- Test generation quality with new provider
- Update project assignments
- Archive old integration when satisfied
Model Updates
- New models are automatically available when providers release them
- Update model names in integration settings
- Test generation quality with new models before switching
Monitoring and Analytics
Usage Metrics
Track important metrics in the admin dashboard:
- Generation Volume: Number of test cases generated per period
- Success Rate: Percentage of successful generations
- User Adoption: Which teams are using AI generation
- Cost Tracking: API usage and associated costs
Quality Metrics
- Review Rate: Percentage of generated cases that are reviewed before import
- Acceptance Rate: Percentage of generated cases that are imported
- Modification Rate: How often generated cases are edited post-import
Future Enhancements
Planned improvements include:
- Custom Model Fine-Tuning: Train models on your specific domain
- Multi-Language Support: Generate test cases in different languages
- Visual Test Generation: Generate test cases from UI mockups
- Regression Analysis: Automatically update test cases when requirements change
- Test Execution Integration: Connect generated cases to automation frameworks