# Example Prompts
Real prompts you can paste into Claude Desktop, Cursor, or any MCP-aware agent once the TestPlanIt server is wired up. Each example shows the user prompt, the tool call(s) the agent is expected to make, and what the agent will see back.
Tool names in this guide are the canonical MCP names the server registers (`testplanit_{domain}_{operation}`). Most tools require a `projectId` — if the agent does not already have one, it should call `testplanit_projects_list` first.
"Show me the most recent issues in project Acme"
Show me the most recent issues in the Acme project.
Tool calls (1–2):
1. `testplanit_projects_list({})` — only if the agent does not already know the project id; returns `{ items: [{ id, name, ... }] }`.
2. `testplanit_issues_list({ projectId: <id> })` — `projectId` is required. Optional filters: `externalSystem` (`JIRA` | `GITHUB` | `AZURE_DEVOPS` | `SIMPLE_URL`), `integrationId`, `status`, `externalStatus`, `cursor`, `limit` (default 25, max 100).
What comes back: a page of issues ordered by `createdAt DESC`, then `id DESC`. Each row carries `linkedCaseCount` inline, so the agent can rank issues by how many test cases reference them. Cursor pagination via `nextCursor` for older pages.
```json
{
  "items": [
    { "id": 411, "externalKey": "JIRA-892", "summary": "Login fails on Safari",
      "status": "open", "externalStatus": "In Progress", "linkedCaseCount": 6,
      "createdAt": "2026-05-06T18:14:09Z" }
  ],
  "hasNextPage": true,
  "nextCursor": 411
}
```
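An agent that wants the full history rather than one page can follow `nextCursor` until it runs out. The sketch below assumes a generic `callTool` helper for whatever MCP client is in use (it is not part of TestPlanIt's API), and trims the response types to the fields the ranking needs:

```typescript
// Sketch: page through testplanit_issues_list, then rank by linkedCaseCount.
// `callTool` is a hypothetical stand-in for the host agent's MCP client.
type Issue = { id: number; summary: string; linkedCaseCount: number };
type IssuesPage = { items: Issue[]; hasNextPage: boolean; nextCursor: number | null };

async function fetchAllIssues(
  callTool: (name: string, args: object) => Promise<IssuesPage>,
  projectId: number
): Promise<Issue[]> {
  const all: Issue[] = [];
  let cursor: number | null = null;
  do {
    const page = await callTool("testplanit_issues_list", {
      projectId,
      limit: 100, // max page size per the filters above
      ...(cursor !== null ? { cursor } : {}),
    });
    all.push(...page.items);
    cursor = page.hasNextPage ? page.nextCursor : null;
  } while (cursor !== null);
  // Most-referenced issues first.
  return all.sort((a, b) => b.linkedCaseCount - a.linkedCaseCount);
}
```

Passing `cursor` only on follow-up pages keeps the first request identical to the single-page example above.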
"Who tested JIRA-1234?"
Who tested JIRA-1234? Show me the most recent results for that issue.
Tool calls (3):
1. `testplanit_issues_find_by_key({ projectId: <P>, externalKey: "JIRA-1234", externalSystem: "JIRA" })` → resolves the issue id.
2. `testplanit_cases_list({ projectId: <P>, issueId: <id from step 1> })` → the `RepositoryCase`s linked to the issue.
3. `testplanit_test_run_results_list({ caseIds: [<from step 2>] })` → most-recent results per case, ordered by `executedAt DESC`.
What comes back: a list of run results, each with `executedBy: { id, name, email }` inline. The agent can summarize: "the most recent run on case X was 3 days ago, by Sarah, status Pass."
If the agent already has the issue id, it skips step 1.
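The three-step chain can be sketched as a single helper. `callTool` is again a hypothetical MCP client shim, and the response shapes are trimmed to the fields the summary sentence needs:

```typescript
// Sketch of the issue → cases → results chain described above.
// `callTool` is a hypothetical MCP client helper, not part of the API.
type RunResult = {
  executedAt: string;
  status: { name: string };
  executedBy: { name: string };
};

async function whoTested(
  callTool: (name: string, args: object) => Promise<any>,
  projectId: number,
  externalKey: string
): Promise<string> {
  const issue = await callTool("testplanit_issues_find_by_key", {
    projectId, externalKey, externalSystem: "JIRA",
  });
  const cases = await callTool("testplanit_cases_list", {
    projectId, issueId: issue.id,
  });
  const results: RunResult[] = await callTool("testplanit_test_run_results_list", {
    caseIds: cases.items.map((c: { id: number }) => c.id),
  });
  if (results.length === 0) return `No results recorded for ${externalKey}`;
  const latest = results[0]; // already ordered by executedAt DESC
  return `${latest.executedBy.name} ran the most recent test (${latest.status.name}) at ${latest.executedAt}`;
}
```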
"Show me failed test runs from last week"
Show me test runs in project Acme that completed in the last 7 days with failures.
Tool calls (1):
1. `testplanit_test_runs_list({ projectId: <P>, from: "<7-days-ago ISO>", to: "<today ISO>", isCompleted: true })`.
What comes back: each row carries inline `statusCounts: [{ id, name, count }]` plus `untested` and `total`. The agent filters locally to rows whose failed-status count is non-zero — no follow-up call is needed for the status rollup.
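The local filter is a one-liner over the inline counts. Note that matching the failed status by name is an assumption here — status ids and names are project-defined, so a real agent would match whatever failure status the project uses:

```typescript
// Sketch: filter runs to those with failures, entirely client-side,
// using the inline statusCounts from the response described above.
type StatusCount = { id: number; name: string; count: number };
type Run = {
  id: number;
  name: string;
  statusCounts: StatusCount[];
  untested: number;
  total: number;
};

function runsWithFailures(runs: Run[]): Run[] {
  return runs.filter((run) =>
    // Assumption: the project's failure status is named "Failed".
    run.statusCounts.some((s) => s.name.toLowerCase() === "failed" && s.count > 0)
  );
}
```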
"What automated tests are stale?"
Which automated tests in project Acme have not been updated alongside their code,
or have never been run?
Tool calls (1–2):
1. `testplanit_cases_list({ projectId: <P>, automated: true, staleSinceUpdate: true })` → automated tests whose latest execution is older than the latest update.
2. (Optional) `testplanit_cases_list({ projectId: <P>, automated: true, hasNeverExecuted: true })` → automated tests with no execution history at all.
What comes back: each row carries `lastUpdatedAt` and `latestResult` inline, so the agent can describe staleness without a follow-up call. The response sets `truncated: true` when the post-filter scan cap (400 rows) is hit; combine with `repositoryId` to narrow the scope.
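Given those two inline fields, a staleness summary needs no further calls. A minimal sketch, assuming the timestamp fields are ISO-8601 strings as in the samples elsewhere in this guide:

```typescript
// Sketch: describe staleness from the inline fields. A case is stale when its
// latest execution predates its last update, or when it has never been executed.
type AutomatedCase = {
  id: number;
  name: string;
  lastUpdatedAt: string; // ISO-8601
  latestResult: { executedAt: string; status: { name: string } } | null;
};

function describeStaleness(c: AutomatedCase): string {
  if (c.latestResult === null) return `${c.name}: never executed`;
  const updated = Date.parse(c.lastUpdatedAt);
  const executed = Date.parse(c.latestResult.executedAt);
  return executed < updated
    ? `${c.name}: last run predates the latest code update`
    : `${c.name}: up to date (last result: ${c.latestResult.status.name})`;
}
```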
"What test cases live in this code repository?"
List the automated test cases in our `playwright-suite` repository.
Tool calls (2):
1. `testplanit_code_repositories_list({ projectId: <P> })` → resolves repository ids; credentials are never returned. Note: this lists the repositories that hold TestPlanIt's automated test code, not application code.
2. `testplanit_cases_list({ projectId: <P>, repositoryId: <id from step 1>, automated: true })` → cases imported from that test repo, with `lastUpdatedAt` + `latestResult` inline.
"What manual cases cover this automated test?"
Show me the manual test cases linked to automated test case #7.
Tool calls (1):
1. `testplanit_repository_case_links_list({ caseId: 7 })` → each row's `otherCase` carries the counterpart case denormalized; optional `linkType` filter (e.g., `SAME_TEST_DIFFERENT_SOURCE`).
What comes back: a list of links, each with `otherCase: { id, name, source, automated }`, so the agent can describe the manual counterpart of each automated case.
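Because `otherCase` is denormalized onto each link, summarizing the manual side is a simple projection — no per-case follow-up calls. A minimal sketch over the link shape described above:

```typescript
// Sketch: project link rows down to the manual counterparts of a case.
type CaseLink = {
  linkType: string;
  otherCase: { id: number; name: string; source: string; automated: boolean };
};

function manualCounterparts(links: CaseLink[]): string[] {
  return links
    .filter((l) => !l.otherCase.automated)
    .map((l) => `#${l.otherCase.id} ${l.otherCase.name} (${l.linkType})`);
}
```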
"Create a test run for JIRA-892 with the uncovered cases"
Create a test run for JIRA-892 covering all test cases linked to it.
Tool calls (3):
1. `testplanit_issues_find_by_key({ projectId: <P>, externalKey: "JIRA-892", externalSystem: "JIRA" })` → resolves the issue id.
2. `testplanit_cases_list({ projectId: <P>, issueId: <id from step 1> })` → gets the `caseIds` linked to that issue.
3. `testplanit_runs_create({ projectId: <P>, name: "JIRA-892 coverage run", caseIds: [<ids from step 2>] })` → creates the run and adds all the cases in a single call.
What comes back: the full run detail, with `total` equal to the number of linked cases and `untested` equal to `total` (no results have been submitted yet).
```json
{
  "id": 5,
  "name": "JIRA-892 coverage run",
  "untested": 6,
  "total": 6,
  "statusCounts": [],
  "testCases": [ /* first 50 run cases inline */ ],
  "testCasesNextCursor": null
}
```
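The three steps chain naturally into one helper. As in the earlier sketches, `callTool` is a hypothetical MCP client shim and the response shapes are trimmed to the fields used here:

```typescript
// Sketch of the issue → linked cases → create run chain described above.
// `callTool` is a hypothetical MCP client helper, not part of the API.
async function createCoverageRun(
  callTool: (name: string, args: object) => Promise<any>,
  projectId: number,
  externalKey: string
): Promise<{ id: number; total: number; untested: number }> {
  const issue = await callTool("testplanit_issues_find_by_key", {
    projectId, externalKey, externalSystem: "JIRA",
  });
  const cases = await callTool("testplanit_cases_list", {
    projectId, issueId: issue.id,
  });
  const caseIds = cases.items.map((c: { id: number }) => c.id);
  // One call creates the run and attaches every linked case.
  return callTool("testplanit_runs_create", {
    projectId,
    name: `${externalKey} coverage run`,
    caseIds,
  });
}
```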
"Mark the login test as passed in run 5"
Mark the login test case as passed in run 5.
Tool calls (2):
1. `testplanit_test_runs_get({ runId: 5 })` → returns the run with inline `testCases`; each test case has an `id` (the TestRunCase id) and `repositoryCase.name`, so the agent can identify the login case.
2. `testplanit_test_run_results_create({ testRunCaseId: <id of login case from step 1>, statusName: "Passed" })` → submits the result; the run case's current status is updated atomically.
What comes back: the full result detail, including `attempt: 1`, the resolved status, and the `testRunCase` summary.
```json
{
  "id": 555,
  "attempt": 1,
  "executedAt": "2026-05-07T12:00:00Z",
  "status": { "id": 1, "name": "Passed" },
  "executedBy": { "id": "user-1", "name": "Alice", "email": "[email protected]" },
  "testRunCase": {
    "id": 100,
    "repositoryCaseId": 99,
    "repositoryCase": { "id": 99, "name": "Login flow", "source": "MANUAL" },
    "testRun": { "id": 5, "name": "Sprint 12 regression" }
  },
  "elapsed": null,
  "notes": null,
  "stepResults": [],
  "attachments": [],
  "issues": []
}
```
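The one subtlety in this flow is passing the TestRunCase id (not the RepositoryCase id) to the result call. A sketch, with the same hypothetical `callTool` shim as before; matching the case by name substring is the agent's own heuristic, not part of the API:

```typescript
// Sketch: locate the run case by name within the run detail, then submit a result.
// `callTool` is a hypothetical MCP client helper, not part of the API.
async function markPassed(
  callTool: (name: string, args: object) => Promise<any>,
  runId: number,
  caseNamePart: string
): Promise<any> {
  const run = await callTool("testplanit_test_runs_get", { runId });
  const target = run.testCases.find((tc: any) =>
    tc.repositoryCase.name.toLowerCase().includes(caseNamePart.toLowerCase())
  );
  if (!target) throw new Error(`No case matching "${caseNamePart}" in run ${runId}`);
  return callTool("testplanit_test_run_results_create", {
    testRunCaseId: target.id, // the TestRunCase id, not the RepositoryCase id
    statusName: "Passed",
  });
}
```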
## See also
- Overview — what the MCP server does
- Configuration — wire your AI client to TestPlanIt
- npm package README — full tool reference for every prompt example above