Configuration Options

These options apply to the reporter. If you're using the Launcher Service, see Choosing Your Setup for which options apply where.

Required

| Option | Type | Description |
| --- | --- | --- |
| `domain` | `string` | Base URL of your TestPlanIt instance |
| `apiToken` | `string` | API token for authentication (starts with `tpi_`) |
| `projectId` | `number` | Project ID where results will be reported (find this on the Project Overview page) |

Optional

| Option | Type | Default | Description |
| --- | --- | --- | --- |
| `testRunId` | `number \| string` | - | Existing test run to add results to (ID or name). If set, `runName` is ignored |
| `runName` | `string` | `'{suite} - {date} {time}'` | Name for new test runs (ignored if `testRunId` is set). Supports placeholders |
| `testRunType` | `string` | Auto-detected | Test framework type. Auto-detected from the WebdriverIO config (mocha → `'MOCHA'`, cucumber → `'CUCUMBER'`, others → `'REGULAR'`). Override manually if needed |
| `configId` | `number \| string` | - | Configuration for the test run (ID or name) |
| `milestoneId` | `number \| string` | - | Milestone for the test run (ID or name) |
| `stateId` | `number \| string` | - | Workflow state for the test run (ID or name) |
| `caseIdPattern` | `RegExp \| string` | `/\[(\d+)\]/g` | Regex pattern for extracting case IDs from test titles |
| `autoCreateTestCases` | `boolean` | `false` | Auto-create test cases if they don't exist |
| `createFolderHierarchy` | `boolean` | `false` | Create folder hierarchy based on Mocha suite structure (requires `autoCreateTestCases` and `parentFolderId`) |
| `parentFolderId` | `number \| string` | - | Folder for auto-created test cases (ID or name) |
| `templateId` | `number \| string` | - | Template for auto-created test cases (ID or name) |
| `tagIds` | `(number \| string)[]` | - | Tags to apply to the test run (IDs or names). Tags that don't exist are created automatically |
| `uploadScreenshots` | `boolean` | `true` | Upload intercepted screenshots to TestPlanIt (requires screenshot capture; see Screenshot Uploads) |
| `includeStackTrace` | `boolean` | `true` | Include stack traces for failures |
| `completeRunOnFinish` | `boolean` | `true` | Mark the run as complete when tests finish |
| `oneReport` | `boolean` | `true` | Combine parallel workers from the same spec file into a single test run. Does not persist across spec file batches; use the Launcher Service for that |
| `timeout` | `number` | `30000` | API request timeout in ms |
| `maxRetries` | `number` | `3` | Retry attempts for failed requests |
| `verbose` | `boolean` | `false` | Enable debug logging |

Run Name Placeholders

Customize your test run names with these placeholders:

| Placeholder | Description | Example |
| --- | --- | --- |
| `{suite}` | Root suite name (first `describe` block) | `Login Tests` |
| `{spec}` | Spec file name (without extension) | `login` |
| `{date}` | Current date in ISO format | `2024-01-15` |
| `{time}` | Current time | `14:30:00` |
| `{browser}` | Browser name from capabilities | `chrome` |
| `{platform}` | Platform/OS name | `darwin`, `linux`, `win32` |

The default run name is '{suite} - {date} {time}', which uses the root describe block name to identify your test runs.

```js
// wdio.conf.js
export const config = {
  reporters: [
    ['@testplanit/wdio-reporter', {
      domain: 'https://testplanit.example.com',
      apiToken: process.env.TESTPLANIT_API_TOKEN,
      projectId: 1,
      // Default: '{suite} - {date} {time}'
      // Custom example:
      runName: 'E2E Tests - {browser} - {date} {time}',
    }],
  ],
};
```

Appending to Existing Test Runs

Add results to an existing test run instead of creating a new one:

```js
// wdio.conf.js
export const config = {
  reporters: [
    ['@testplanit/wdio-reporter', {
      domain: 'https://testplanit.example.com',
      apiToken: process.env.TESTPLANIT_API_TOKEN,
      projectId: 1,
      testRunId: 456, // Add results to this existing run
    }],
  ],
};
```

This is useful for:

  • Aggregating results from multiple CI jobs
  • Running tests in parallel across machines
  • Re-running failed tests without creating new runs
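For the multi-job case, a common pattern is to pass the run ID through an environment variable set by your CI pipeline. The `TESTPLANIT_RUN_ID` variable name below is just an example, not something the reporter reads itself:

```js
// wdio.conf.js
export const config = {
  reporters: [
    ['@testplanit/wdio-reporter', {
      domain: 'https://testplanit.example.com',
      apiToken: process.env.TESTPLANIT_API_TOKEN,
      projectId: 1,
      // Hypothetical env var set by the CI pipeline; every job that
      // receives the same value appends to the same test run.
      testRunId: process.env.TESTPLANIT_RUN_ID,
    }],
  ],
};
```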

Associating with Configurations and Milestones

Track test results against specific configurations (browser/OS combinations) and milestones:

```js
// wdio.conf.js
export const config = {
  reporters: [
    ['@testplanit/wdio-reporter', {
      domain: 'https://testplanit.example.com',
      apiToken: process.env.TESTPLANIT_API_TOKEN,
      projectId: 1,
      configId: 5, // e.g., "Chrome / macOS"
      milestoneId: 10, // e.g., "Sprint 15"
      stateId: 2, // e.g., "In Progress" workflow state
    }],
  ],
};
```
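Since these options also accept names (per the options table above), an equivalent setup can reference each entity by name instead of numeric ID. The names below are illustrative; use names that exist in your project:

```js
// wdio.conf.js
export const config = {
  reporters: [
    ['@testplanit/wdio-reporter', {
      domain: 'https://testplanit.example.com',
      apiToken: process.env.TESTPLANIT_API_TOKEN,
      projectId: 1,
      // Names instead of numeric IDs; values must match entries
      // defined in your TestPlanIt project.
      configId: 'Chrome / macOS',
      milestoneId: 'Sprint 15',
      stateId: 'In Progress',
    }],
  ],
};
```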