Introduction
Welcome to the Mankinds documentation.
Mankinds is an agentic AI governance platform. Autonomous agents continuously evaluate and monitor AI systems across eight trust dimensions (privacy, security, accuracy, fairness, explainability, accountability, sustainability, and systemic risk), from development through production, grounded in regulatory knowledge.
The platform combines a dual knowledge graph architecture covering 70+ regulatory frameworks, autonomous evaluation agents, and structured outputs (scorecards, risk sheets, audit packages) to turn compliance from a manual process into an automated, auditable one.
Key Capabilities
- Offline evaluation — Connect an AI system via API or SDK. Agents run structured test scenarios against the system using ground truth datasets.
- Online evaluation — Connect observability tools (Langfuse, LangSmith, Datadog). Agents analyze real production traces to detect drift and degradation.
- 86 evaluation criteria across 8 dimensions, grounded in 70+ regulatory frameworks, including the EU AI Act, GDPR, ISO/IEC 42001, and the NIST AI RMF.
- 30+ pre-built connectors — databases, observability platforms, document sources, and AI system endpoints.
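To make the scorecard idea concrete, here is a minimal, self-contained sketch of how per-criterion scores could roll up into dimension scores and an overall score. The aggregation rules (simple means, equal weights) and the sample numbers are illustrative assumptions, not the platform's actual method.

```python
from statistics import mean

# Hypothetical per-criterion scores (0-100), keyed by trust dimension.
criterion_scores = {
    "privacy": [92, 88],
    "security": [75, 81, 90],
    "accuracy": [97],
}

def build_scorecard(scores_by_dimension):
    """Average criteria within each dimension, then across dimensions."""
    dimension_scores = {
        dim: mean(scores) for dim, scores in scores_by_dimension.items()
    }
    overall = mean(dimension_scores.values())
    return {"dimensions": dimension_scores, "overall_score": round(overall, 1)}

scorecard = build_scorecard(criterion_scores)
print(scorecard["overall_score"])
```

In practice the platform defines its own weighting per criterion and dimension; this sketch only shows the general roll-up shape of a scorecard.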
Resources
- SDK: JavaScript/TypeScript and Python SDK documentation
- Knowledge Glossary: Definitions of key terms for AI evaluation
Quick Start
Evaluate your first AI system in a few minutes.
Create an account on app.mankinds.io and generate your API key from settings.
pip install mankinds-sdk
from mankinds_sdk import MankindsClient
client = MankindsClient(api_key='mk_your_api_key')
# Create a system, generate a dataset, evaluate
system = client.create_system('My Chatbot', '...', endpoint={...})
dataset = client.generate_dataset(system['id'], num_scenarios=10)
result = client.evaluate(system['id'])
print(f"Score: {result['summary']['overall_score']}")
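Beyond printing the overall score, you will typically want to act on low-scoring dimensions. A self-contained sketch follows; the `dimension_scores` field and the sample values are assumptions for illustration, since only `overall_score` is shown above.

```python
# Hypothetical evaluation result. Only 'overall_score' appears in the
# Quick Start; the per-dimension breakdown is an assumed field.
result = {
    "summary": {
        "overall_score": 84.2,
        "dimension_scores": {"privacy": 91.0, "security": 68.5, "fairness": 93.0},
    }
}

THRESHOLD = 75.0

# Collect dimensions scoring below the threshold for follow-up review.
flagged = {
    dim: score
    for dim, score in result["summary"]["dimension_scores"].items()
    if score < THRESHOLD
}
print(flagged)
```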
See the full SDK guide for the detailed API reference.
Support
For any questions: [email protected]