Manual monitoring can't keep up.
Tracking militancy across South Asia means manually monitoring dozens of feeds, cross-referencing spreadsheets, and waiting weeks for ACLED or UCDP updates to confirm what you already suspected. By the time structured data is published, the situation has moved on. Meanwhile, the real reporting is happening on Telegram channels, WhatsApp groups, and Facebook pages no existing platform is systematically watching.
// Real data · Updated continuously · AI-extracted · ACLED-benchmarked · 90-day projection horizon
A new standard in situational reporting.
Purpose-built for conflict researchers, analysts, and organisations tracking militancy in South Asia's most complex regions.
30, 60, and 90-day conflict projections.
A multi-input inference engine that synthesises incident data, structural indicators, research findings, and messaging intelligence to generate calibrated conflict projections.
Faster than a weekly database. Stricter than a news feed.
A fully automated intelligence pipeline that ingests, deduplicates, extracts, validates, and publishes conflict data — then feeds it forward into projection models.
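The pipeline stages named above can be sketched roughly as follows. This is a minimal illustration only: the `Incident` record, the dedup key, and the validation rule are hypothetical stand-ins, not StandRep's actual implementation.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Incident:
    # Hypothetical minimal incident record (illustrative fields only)
    date: str        # ISO date, e.g. "2024-05-01"
    location: str
    event_type: str
    source: str

def deduplicate(incidents):
    """Drop repeat reports of the same event from different sources,
    keyed here on (date, location, event type) for illustration."""
    seen, unique = set(), []
    for inc in incidents:
        key = (inc.date, inc.location, inc.event_type)
        if key not in seen:
            seen.add(key)
            unique.append(inc)
    return unique

def validate(incidents):
    """Keep only records with a date and a named location."""
    return [i for i in incidents if i.date and i.location]

raw = [
    Incident("2024-05-01", "Quetta", "armed clash", "feed_a"),
    Incident("2024-05-01", "Quetta", "armed clash", "feed_b"),  # duplicate report
    Incident("", "Peshawar", "IED", "feed_c"),                  # fails validation
]
published = validate(deduplicate(raw))  # one clean, publishable record remains
```

In a real pipeline, each stage would also log what it dropped and why, so the published record trail stays auditable end to end.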
Generate citable reports. In the format you need.
Build custom reports from any combination of incidents, entities, and projections — export as PDF, CSV, or JSON. Every output includes full source attribution and methodology documentation, ready for publication, briefings, or integration into your existing workflows.
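As a rough sketch of what a JSON export with full source attribution might look like, assuming a hypothetical schema and placeholder methodology URL (not StandRep's actual API or format):

```python
import json

def export_report(incidents):
    """Bundle incident records with source attribution and a
    methodology link. Schema is illustrative, not an actual API."""
    report = {
        "incidents": incidents,
        "sources": sorted({i["source"] for i in incidents}),
        # Placeholder link; a real export would point at the
        # published methodology page for these records.
        "methodology": "https://example.org/methodology",
    }
    return json.dumps(report, indent=2)

sample = [
    {"id": 1, "source": "feed_a"},
    {"id": 2, "source": "feed_a"},
]
payload = export_report(sample)
```

The same record structure could feed a CSV or PDF renderer; keeping attribution and the methodology link inside the payload is what makes the output citable on its own.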
Calibrated conflict projections for South Asia.
AI Projections combines incident trends, structural indicators, and research findings to generate calibrated conflict forecasts with published confidence intervals. Methodology, inputs, and accuracy metrics are documented and public.
Eight countries. One platform.
200+ sources across South Asia's most active conflict zones.
We document everything. Including what we get wrong.
StandRep is built on the principle that a conflict intelligence tool is only as valuable as its transparency. Our event taxonomy is modelled on the ACLED codebook. Source reliability uses the NATO A–F framework. We benchmark against ACLED weekly and publish the concordance metrics. Projection accuracy — mean absolute error, calibration score — is published monthly after back-testing against observed outcomes.
We don't claim perfect coverage. We don't hide low-confidence events or uncertain projections behind a clean interface. Every number on this platform has a methodology page behind it. That page is public, SEO-indexed, and linked from every feature.
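For readers unfamiliar with the accuracy metrics named above, here is a generic sketch of how they are typically computed (illustrative only, not StandRep's published methodology): mean absolute error measures the average gap between projected and observed incident counts, and a simple calibration check asks how often outcomes land inside the stated intervals.

```python
def mean_absolute_error(projected, observed):
    """Average absolute gap between projected and observed counts."""
    return sum(abs(p - o) for p, o in zip(projected, observed)) / len(projected)

def interval_coverage(intervals, observed):
    """Share of outcomes falling inside their projected interval.
    A well-calibrated 90% interval should score close to 0.90."""
    hits = sum(lo <= o <= hi for (lo, hi), o in zip(intervals, observed))
    return hits / len(observed)

projected = [12, 8, 15]
observed = [10, 9, 20]
mae = mean_absolute_error(projected, observed)  # (2 + 1 + 5) / 3 ≈ 2.67
coverage = interval_coverage([(8, 14), (5, 12), (10, 18)], observed)  # 2 of 3 inside
```

Back-testing simply means running these checks against windows where the outcome is now known, which is what makes a monthly published score possible.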
How we compare.
Who uses StandRep.
Start small. Scale when you need to.
Frequently asked questions.
StandRep is launching soon. Help us build it right.
We're building StandRep with input from the analysts and researchers who will use it. Waitlist members get early access and a direct line to shape the roadmap — what data sources to prioritise, which projection models to build first, and how outputs should be structured for your workflows.
No spam. No marketing. Only the launch and what you helped build.