Open47
3 February 2026
Uses AI to review code for security issues in 5 minutes
Opportunity — What problem are you working on, and why it matters
Security vulnerabilities cause real damage: In 2025 alone, insecure code led to 12+ incidents and near-misses across OGP, with quarterly reports repeatedly flagging the same issues, such as weak access controls and inadequate testing.
Current approach doesn't scale: Teams rely on annual security testing (VAPT), meaning vulnerabilities can sit in production for up to 12 months before discovery. Manual code reviews are slow, don't scale across 80+ engineers, and miss subtle security flaws.
Existing tools create noise, not solutions: Traditional scanners use simple pattern matching, generating so many false positives that teams tune them out, while general AI coding assistants focus on bugs, not security.
Open47 understands your entire codebase: Instead of analysing code changes in isolation, it explores how different parts connect, checks existing security controls, and validates whether vulnerabilities are actually exploitable. This dramatically reduces false positives while catching real security issues in every pull request.
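A minimal sketch of the idea: before reporting a finding, check whether another part of the codebase already mitigates it. All names, files, and keywords below are illustrative, not Open47's actual implementation.

```python
# Hypothetical context check: a finding in a changed file is suppressed
# if a mitigating control (e.g. shared auth middleware) exists elsewhere.
from dataclasses import dataclass

@dataclass
class Finding:
    file: str
    issue: str       # e.g. "missing auth check on /admin route"
    keywords: tuple  # identifiers that would indicate a mitigation

def is_mitigated(finding: Finding, codebase: dict[str, str]) -> bool:
    """Return True if any other file contains a mitigating control."""
    for path, source in codebase.items():
        if path == finding.file:
            continue  # look beyond the changed file itself
        if any(kw in source for kw in finding.keywords):
            return True
    return False

codebase = {
    "routes/admin.py": "def dashboard(req): return render(req)",
    "middleware/auth.py": "def require_auth(handler): ...",
}
finding = Finding("routes/admin.py",
                  "missing auth check on /admin route",
                  ("require_auth",))
print(is_mitigated(finding, codebase))  # → True: shared middleware covers it
```

A keyword scan stands in here for what would really be an LLM exploring the project; the point is the shape of the check, not the matching logic.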
Fast and affordable: Most reviews cost around $1 and complete in under 10 minutes, giving teams immediate security feedback instead of waiting months for annual testing.

Example of a GitHub comment left by Open47 in a pull request
Velocity — What you actually built or changed in the last month
What users can do now: Get automatic security reviews on every pull request in under 10 minutes, instead of waiting up to 12 months for annual security testing. Open47 catches vulnerabilities you might not know to look for and automates security checks that previously required manual expert review.
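The PR comments described above could be published through GitHub's standard REST API ("create an issue comment" endpoint). A minimal, hypothetical sketch that only builds the request and sends nothing; the repo, PR number, token, and findings are placeholders:

```python
# Build (but do not send) a GitHub API request that posts security
# findings as a single pull-request comment.
import json

def build_request(owner: str, repo: str, pr_number: int,
                  findings: list[str], token: str):
    url = f"https://api.github.com/repos/{owner}/{repo}/issues/{pr_number}/comments"
    headers = {
        "Authorization": f"Bearer {token}",
        "Accept": "application/vnd.github+json",
    }
    body = "## Security review\n" + "\n".join(f"- {f}" for f in findings)
    return url, headers, json.dumps({"body": body})

url, headers, payload = build_request(
    "example-org", "example-repo", 47,
    ["Possible SQL injection in search()", "Hardcoded credential in config"],
    token="<redacted>")
```

Posting one consolidated comment per review (rather than one per finding) keeps the PR readable even when many issues surface.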
Version 1 taught us hard lessons: By analysing only changed code, it missed existing safeguards in other parts of the system. It often flagged “issues” that were already protected by shared middleware and validation, or that sat in non-exploitable development code. This created too many false positives, and users found it too noisy.
“I might have to turn it off — the false positives are too high.” — One frustrated user of V1

Flow diagram of how Open47 V1 works
Version 2 redesign focuses on context: We completely rebuilt the system to understand full codebase context before reporting issues. It intelligently explores your entire project, fetching relevant files and checking how different parts connect before deciding if something is actually vulnerable.
Key technical improvements: Runs security analysis multiple times to catch more issues, processes large code changes step-by-step for reliability, and validates findings against the entire codebase.
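The multi-pass and step-by-step ideas can be sketched roughly as follows. This is an illustrative toy, not the real pipeline: the `analyse` stand-in is a trivial string match where Open47 would call an LLM, and the vote threshold is an assumed detail.

```python
# Split a large diff into chunks, analyse each chunk several times, and
# keep only findings that recur, favouring precision over recall.
from collections import Counter

def chunk(lines: list[str], size: int) -> list[list[str]]:
    return [lines[i:i + size] for i in range(0, len(lines), size)]

def analyse(chunk_lines: list[str], seed: int) -> set[str]:
    # Stand-in for one LLM analysis pass over a chunk.
    return {l for l in chunk_lines if "eval(" in l or "password" in l}

def review(diff_lines, passes=3, min_votes=2, chunk_size=200):
    votes = Counter()
    for piece in chunk(diff_lines, chunk_size):
        for seed in range(passes):
            for finding in analyse(piece, seed):
                votes[finding] += 1
    # Only findings confirmed by multiple passes are reported.
    return [f for f, n in votes.items() if n >= min_votes]
```

With a real, non-deterministic model, requiring agreement across passes filters out one-off hallucinated findings, which is exactly the false-positive trade-off described above.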

Flow diagram of how an improved Open47 V2 works
Critical learning: False positives matter more than false negatives. When tools are wrong too often, people stop trusting them. We'd rather surface fewer issues that teams actually act on.
Traction — How real people are using it, and what is happening as a result

Example of users fixing vulnerabilities surfaced by Open47
Real usage across hackathon teams: Reviewed a few major hackathon projects including Keypress, Polyglot, Confetti, and Insight, surfacing over 30 security issues ranging from minor problems to serious vulnerabilities.
Teams see immediate value and are taking action: they've acknowledged findings and fixed many issues based on our recommendations, including insecure data access, missing input validation, and exposed storage URLs.
“very nice, wish i had this before doing code review lol” — Azer (Confetti)
“I'm fixing some of the advisories by Open47 … these are really cool” — Eliot (Keypress)
“good catch, will fix in a subsequent PR” — Shyam (Playtime)
Significant time and cost savings: Most reviews cost around $1, and reviewing a large pull request no longer takes a whole day.
Examples of cost and time taken for large code changes
| Project | Lines of code | Time taken | Cost |
|---|---|---|---|
| Confetti | ~14,000 | 16 minutes | ~$6.20 |
| Polyglot | ~12,000 | 33 minutes | ~$7.70 |
Next steps and current limitations: Planning to pilot on production code with security engineers involved, add organisation-specific rules, and expand beyond single repositories. Currently limited by expensive AI models and tested only up to 12,000 lines of code changes.
Hello from the only engineer hacking on Open47— @adriangohjw
