The MCP Apps Conformance SDK lets you validate the server-side MCP Apps surface your server exposes through tools and ui:// resources.
Use it when you want the same checks as the CLI's `apps conformance` command, but inside your own test runner or CI pipeline.
This currently validates the server-side MCP Apps surface only. It does not prove full host-side SEP-1865 behavior such as `ui/initialize`, sandbox-proxy forwarding, or host notification ordering.
Import
```typescript
import {
  MCPAppsConformanceSuite,
  MCPAppsConformanceTest,
  renderConformanceReportJUnitXml,
  renderConformanceReportJson,
  toConformanceReport,
} from "@mcpjam/sdk";
```
Basic usage
```typescript
const test = new MCPAppsConformanceTest({
  url: "https://your-server.com/mcp",
  timeout: 30_000,
});

const result = await test.run();

console.log(result.passed); // true
console.log(result.summary); // "7/7 checks passed, 0 failed, 0 skipped"
console.log(result.discovery.uiToolCount);
```
For a stdio server:
```typescript
const test = new MCPAppsConformanceTest({
  command: "node",
  args: ["server.js"],
  timeout: 30_000,
});
```
Suite usage
```typescript
import { writeFileSync } from "node:fs";

const suite = new MCPAppsConformanceSuite({
  name: "Apps CI",
  target: {
    url: "https://your-server.com/mcp",
    timeout: 30_000,
  },
  defaults: {
    checkIds: [
      "ui-tools-present",
      "ui-tool-metadata-valid",
    ],
  },
  runs: [
    {},
    {
      label: "resources",
      checkIds: [
        "ui-resources-readable",
        "ui-resource-contents-valid",
        "ui-resource-meta-valid",
      ],
    },
  ],
});

const result = await suite.run();
const report = toConformanceReport(result);

writeFileSync(
  "apps-conformance.junit.xml",
  renderConformanceReportJUnitXml(report),
);
writeFileSync(
  "apps-conformance.report.json",
  JSON.stringify(renderConformanceReportJson(report), null, 2),
);
```
`MCPAppsConformanceSuiteConfig` is shaped for CI matrices:
| Property | Type | Required | Description |
|---|---|---|---|
| `name` | `string` | No | Suite label shown in summaries and JUnit XML |
| `target` | `MCPServerConfig` | Yes | Shared HTTP or stdio target |
| `defaults` | `Partial<Omit<MCPAppsConformanceConfig, keyof MCPServerConfig>>` | No | Shared defaults applied to every run |
| `runs` | `Array<... & { label?: string }>` | Yes | Individual check selections and labels |
`MCPAppsConformanceConfig` extends the standard `MCPServerConfig`, so it accepts the same HTTP and stdio connection settings as `MCPClientManager`.
Additional property:
| Property | Type | Required | Default | Description |
|---|---|---|---|---|
| `checkIds` | `MCPAppsCheckId[]` | No | all checks | Run only the selected MCP Apps checks |
Example with custom headers and a focused check set:
```typescript
const test = new MCPAppsConformanceTest({
  url: "https://your-server.com/mcp",
  requestInit: {
    headers: {
      Authorization: `Bearer ${process.env.TOKEN}`,
    },
  },
  checkIds: [
    "ui-resources-readable",
    "ui-resource-contents-valid",
  ],
});
```
Check ids
Available checks:
- `ui-tools-present`
- `ui-tool-metadata-valid`
- `ui-tool-input-schema-valid`
- `ui-listed-resources-valid`
- `ui-resources-readable`
- `ui-resource-contents-valid`
- `ui-resource-meta-valid`
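When building `checkIds` selections for matrix runs, it can help to group these ids by the surface they exercise rather than repeating string literals. A minimal sketch — the tools/resources grouping below is inferred from the id names and is not an SDK export:

```typescript
// The seven documented check ids, grouped by the surface they exercise.
// This grouping is inferred from the id names; it is not exported by the SDK.
const TOOL_CHECKS = [
  "ui-tools-present",
  "ui-tool-metadata-valid",
  "ui-tool-input-schema-valid",
] as const;

const RESOURCE_CHECKS = [
  "ui-listed-resources-valid",
  "ui-resources-readable",
  "ui-resource-contents-valid",
  "ui-resource-meta-valid",
] as const;

// Omitting checkIds runs every check; this reconstructs that default set.
const ALL_CHECKS = [...TOOL_CHECKS, ...RESOURCE_CHECKS];
console.log(ALL_CHECKS.length); // 7
```

A run config could then pass `checkIds: [...RESOURCE_CHECKS]` instead of listing the ids inline, as the suite example above does.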
Result shape
`run()` returns an `MCPAppsConformanceResult`.
| Property | Type | Description |
|---|---|---|
| `passed` | `boolean` | Whether all selected checks passed (no failures) |
| `target` | `string` | URL or stdio command under test |
| `checks` | `MCPAppsCheckResult[]` | Individual check results |
| `summary` | `string` | Human-readable summary |
| `durationMs` | `number` | Total duration in milliseconds |
| `categorySummary` | `Record<"tools" \| "resources", ...>` | Pass/fail counts by category |
| `discovery` | `object` | Counts for tools and UI resources discovered during the run |
Each `MCPAppsCheckResult` includes:

- `id`
- `category`
- `title`
- `description`
- `status`
- `durationMs`
- `details` (optional)
- `warnings` (optional)
- `error` (optional)
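In CI logs it is often useful to print only the failing checks. The sketch below hand-writes a slice of the shape described above rather than importing the SDK's types, and the `status` union is an assumption based on the summary wording ("passed, failed, skipped"):

```typescript
// Hand-written slice of MCPAppsCheckResult, based on the fields listed above.
// The status union is an assumption drawn from the summary wording.
type MCPAppsCheckStatus = "passed" | "failed" | "skipped";

interface CheckResultSlice {
  id: string;
  category: "tools" | "resources";
  title: string;
  status: MCPAppsCheckStatus;
  durationMs: number;
  error?: string; // optional, expected on failures
}

// One log line per failed check, e.g. for a CI job summary.
function formatFailures(checks: CheckResultSlice[]): string[] {
  return checks
    .filter((check) => check.status === "failed")
    .map((check) => `${check.category}/${check.id}: ${check.error ?? check.title}`);
}

const lines = formatFailures([
  { id: "ui-tools-present", category: "tools", title: "UI tools present", status: "passed", durationMs: 12 },
  { id: "ui-resources-readable", category: "resources", title: "UI resources readable", status: "failed", durationMs: 31, error: "resources/read timed out" },
]);
console.log(lines); // ["resources/ui-resources-readable: resources/read timed out"]
```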
`MCPAppsConformanceSuite.run()` returns an `MCPAppsConformanceSuiteResult`:

| Property | Type | Description |
|---|---|---|
| `name` | `string` | Suite name |
| `target` | `string` | Shared target under test |
| `passed` | `boolean` | `true` if and only if every run passed |
| `results` | `Array<MCPAppsConformanceResult & { label: string }>` | Per-run results |
| `summary` | `string` | Human-readable suite summary |
| `durationMs` | `number` | Total suite duration in milliseconds |
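To gate a pipeline on the suite result, one approach is to collect the labels of the failing runs. This sketch again hand-writes only the fields it touches instead of importing the SDK's types, and the sample data is invented:

```typescript
// Minimal hand-written slice of MCPAppsConformanceSuiteResult,
// covering only the fields from the table above (not the SDK types).
interface RunResultSlice {
  label: string;
  passed: boolean;
  summary: string;
}

interface SuiteResultSlice {
  name: string;
  passed: boolean;
  results: RunResultSlice[];
}

// Labels of every failing run, for a one-line CI failure message.
function failingRunLabels(result: SuiteResultSlice): string[] {
  return result.results.filter((run) => !run.passed).map((run) => run.label);
}

const failing = failingRunLabels({
  name: "Apps CI",
  passed: false,
  results: [
    { label: "tools", passed: true, summary: "2/2 checks passed" },
    { label: "resources", passed: false, summary: "2/3 checks passed, 1 failed" },
  ],
});
console.log(failing); // ["resources"]
```

A CI step could then set `process.exitCode = 1` and print the labels whenever `failing.length > 0`.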
CI reporting
All three conformance domains use the same shared reporting helpers:
- `toConformanceReport(result)` normalizes protocol, OAuth, and apps runs into a single report shape.
- `renderConformanceReportJUnitXml(report)` emits redacted JUnit XML for CI dashboards.
- `renderConformanceReportJson(report)` returns the redacted JSON-ready object for artifact uploads.

This makes the SDK and CLI JUnit output byte-identical for the same result.
What the runner validates
The current runner checks:
- At least one tool advertises MCP Apps UI metadata.
- Tool metadata uses a valid `ui://` resource URI and valid visibility values.
- Tool `inputSchema` is a non-null JSON Schema object.
- Listed UI resources use `ui://` and `text/html;profile=mcp-app`.
- Referenced UI resources are readable via `resources/read`.
- Resource payloads provide exactly one HTML document through `text` or `blob`.
- `_meta.ui.csp`, `permissions`, `domain`, and `prefersBorder` use valid shapes.
Notes
- The runner always advertises the MCP Apps UI extension capability so servers do not hide their MCP Apps surface when custom `clientCapabilities` are supplied.
- The deprecated `_meta["ui/resourceUri"]` is accepted but surfaced as a warning.
- Tool name SHOULD validations (length, character set, uniqueness) surface as warnings, not failures.
- HTML validation is intentionally lightweight: it verifies the expected MIME type and a document-style HTML payload, not the full browser lifecycle.
- For a runnable project that writes JUnit XML from Vitest, see `examples/conformance/basic/`.