LLM-Powered Threat Modeling

Self-serve threat modeling powered by large language models

Overview

A self-serve threat modeling platform that puts automated security analysis directly in the hands of engineering teams. By combining structured architecture inputs with LLM reasoning, the tool produces threat models that are both comprehensive and actionable.

Key Features

  • JIRA Integration - Import JIRA tickets with comments, links, and attachments as context
  • Upload Context - Support for PRDs, architecture diagrams, screenshots, and text files
  • LLM-Powered Analysis - Automatic threat generation using OpenAI or Anthropic
  • STRIDE Methodology - Threats categorized by Spoofing, Tampering, Repudiation, Information Disclosure, Denial of Service, and Elevation of Privilege
  • Risk Scoring - Likelihood x Impact scoring with severity classification
  • Shareable Reports - Generate public share links for stakeholders
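The risk-scoring feature above can be sketched in TypeScript. The 1-5 scales and the severity thresholds here are illustrative assumptions, not the tool's documented values:

```typescript
// Illustrative Likelihood x Impact scoring. The 1-5 scales and the
// severity cutoffs below are assumptions for the sketch.
type Severity = "Low" | "Medium" | "High" | "Critical";

function riskScore(likelihood: number, impact: number): number {
  // Both inputs on a 1-5 scale, giving a combined score of 1-25.
  return likelihood * impact;
}

function classifySeverity(score: number): Severity {
  if (score >= 20) return "Critical";
  if (score >= 12) return "High";
  if (score >= 6) return "Medium";
  return "Low";
}
```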

How It Works

1. Architecture Input

Engineers provide system details through an intuitive interface - upload PRDs, architecture diagrams, screenshots, or import JIRA tickets with comments and attachments as context.

2. LLM Analysis

The architecture is processed by OpenAI or Anthropic models, which decompose the system into components and identify its assets, entry points, and trust boundaries.
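One way this decomposition step might be framed is as a structured prompt sent to the model. The prompt wording, the JSON schema, and all names below are hypothetical sketches, not the tool's actual prompts:

```typescript
// Hypothetical prompt builder for the decomposition step.
interface ArchitectureInput {
  description: string;
  attachments: string[]; // e.g. extracted text from PRDs or JIRA tickets
}

function buildDecompositionPrompt(input: ArchitectureInput): string {
  return [
    "Decompose this system into components, assets, entry points, and trust boundaries.",
    'Respond as JSON: {"components":[],"assets":[],"entryPoints":[],"trustBoundaries":[]}',
    "System description:",
    input.description,
    ...input.attachments.map((a) => `Attachment:\n${a}`),
  ].join("\n\n");
}
```

Asking for a fixed JSON shape makes the model's output parseable by the later threat-generation stage.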

3. Threat Generation

Threats are generated and categorized using the STRIDE methodology with Likelihood x Impact risk scoring. Each threat includes actionable mitigations with priority and effort estimates.
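A generated threat as described above could be modeled like this in TypeScript; the field names and the grouping helper are illustrative, not the tool's actual data model:

```typescript
// Sketch of a STRIDE-categorized threat with Likelihood x Impact scoring
// and prioritized mitigations. All names here are assumptions.
type StrideCategory =
  | "Spoofing"
  | "Tampering"
  | "Repudiation"
  | "Information Disclosure"
  | "Denial of Service"
  | "Elevation of Privilege";

interface Mitigation {
  description: string;
  priority: "high" | "medium" | "low";
  effortEstimate: "small" | "medium" | "large";
}

interface Threat {
  title: string;
  category: StrideCategory;
  likelihood: number; // 1-5
  impact: number; // 1-5
  mitigations: Mitigation[];
}

// Bucket threats by STRIDE category, e.g. for rendering a report section per category.
function groupByCategory(threats: Threat[]): Map<StrideCategory, Threat[]> {
  const groups = new Map<StrideCategory, Threat[]>();
  for (const t of threats) {
    const bucket = groups.get(t.category) ?? [];
    bucket.push(t);
    groups.set(t.category, bucket);
  }
  return groups;
}
```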

4. Share &amp; Export

Generate public share links for stakeholders or export reports in Markdown and JSON formats. Integrate directly into issue trackers for engineering follow-up.
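The Markdown and JSON exports might look something like the following sketch; the row shape and table columns are assumptions, not the tool's actual report format:

```typescript
// Illustrative exporters for the two report formats named above.
interface ReportRow {
  title: string;
  category: string; // STRIDE category
  score: number; // Likelihood x Impact
}

function exportMarkdown(rows: ReportRow[]): string {
  const header = "| Threat | STRIDE Category | Risk Score |\n| --- | --- | --- |";
  const body = rows
    .map((r) => `| ${r.title} | ${r.category} | ${r.score} |`)
    .join("\n");
  return `${header}\n${body}`;
}

function exportJson(rows: ReportRow[]): string {
  return JSON.stringify(rows, null, 2);
}
```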

Tech Stack

  • React 18
  • TypeScript
  • Hono
  • Bun
  • PostgreSQL
  • Drizzle ORM
  • OpenAI / Claude API
  • Tailwind CSS