Before investing engineering time and capital, critical unknowns need validation. This checklist distinguishes between what’s already validated (from market research) and what still requires primary research.
| Question | Status | Evidence |
|---|
| Is the security pain real? | VALIDATED | 32-41% critical vulns (Enkrypt AI, earezki.com); ClawHavoc incident; OWASP/NIST publishing |
| Is discovery fragmented? | VALIDATED | 9+ registries, no cross-platform search, no trust scores |
| Are enterprises using AI agents? | VALIDATED | 57% have agents in production (LangChain); Composio $2M ARR |
| Is commerce viable now? | NOT VALIDATED | <$100K/month ecosystem revenue; 21st.dev ~$400/mo MRR only documented success |
| Will developers pay for registry features? | NOT VALIDATED | No survey data on this specific question |
| Question | How to Validate | Priority |
|---|
| Do developers actually want a CROSS-PLATFORM scanner (vs. registry-specific)? | Interview 30 developers using MCP servers across 2+ platforms | P0 |
| Would enterprises pay for skill governance SPECIFICALLY? | Interview 10 IT/security managers at companies using AI agents | P0 |
| Is Findable’s integrated approach (security + discovery + governance) more compelling than best-of-breed (Snyk + Smithery)? | Concept test with 20 developers and 5 enterprise buyers | P0 |
| What’s the willingness to pay for enterprise governance? | Van Westendorp pricing survey with 10 enterprise buyers | P1 |
| How do developers currently discover skills? | Survey 100+ agent users: “How did you find the last 5 skills you installed?” | P0 |
Target: 50+ interviews before production code. Can start scanner development in parallel.
| Question | How to Validate | Priority |
|---|
| Why not just use ClawHub for OpenClaw skills? | Interview OpenClaw developers — do they search outside ClawHub? | P0 |
| Why not just use Snyk for security? | Interview devs who use mcp-scan — what’s missing? | P0 |
| Why not just use Smithery for MCP discovery? | Interview Smithery users — what pain remains? | P0 |
| What percentage of developers use skills from 2+ protocols (MCP + SKILL.md)? | Survey 100+ developers | P0 |
These are the hardest questions. If developers are satisfied with registry-specific tools and don’t need cross-platform search, Findable’s core thesis weakens.
Minimum Viable Product (3-4 weeks):
-
Scanner CLI — npx findable scan ./my-mcp-server/
- API key / credential leak detection
- Known malware pattern matching
- Basic prompt injection detection
- Trust Score (0-100)
- Free, open-source (MIT)
-
Simple web search — findable.sh
- Index skills from Smithery + PulseMCP + MCP.so + skills.sh
- Basic search with trust scores displayed
- No login required
-
Security report — “State of Agent Skills Security”
- Scan 10K+ skills from major registries
- Publish findings as a blog post + report
- Drive awareness and validate demand
Test with:
- 50 developers from MCP/Claude/OpenClaw communities
- 10 enterprise security/IT people
- 5 SaaS product managers
Success criteria:
- 500+ CLI scans in first 2 weeks
- 30% of alpha users return within 7 days
- 5+ unsolicited requests for enterprise features
- Positive signal on HN/DEV Community discussion
| Question | How to Validate | Priority |
|---|
| Do users want CLI-first or web-first? | A/B: CLI-only vs. web+CLI during alpha | P1 |
| Is cross-platform search important? | Usage analytics: do users search across protocols? | P1 |
| What security issues matter most? | Rank: API leaks vs. malware vs. prompt injection vs. privacy | P0 |
| Is the trust score useful? | Survey alpha users: “Did the trust score influence your decision?” | P0 |
| Question | Status | How to Finish |
|---|
| Can we detect API key leaks in skills? | PROVEN — Snyk found 7.1% leak credentials | Build regex + pattern library; test on 1K skills |
| Can we detect prompt injection? | FEASIBLE — LLM-based detection works | Test on synthetic + real examples; measure false positive rate |
| Can we index 100K+ skills with fast search? | PROVEN — pgvector + tsvector handles this scale | Load test with real data |
| Can we parse MCP manifests and SKILL.md? | FEASIBLE — specs are documented | Collect 100 samples per source; test parser |
| Can we build an MCP server for agent-facing search? | FEASIBLE — MCP protocol is well-documented | Build prototype; test with Claude Code |
| Source | Access Method | Status |
|---|
| Official MCP Registry | .well-known/mcp/server.json + GitHub | Available |
| Smithery | Public API | Review ToS |
| PulseMCP | Public listing + API | Review ToS |
| Glama.ai | Public listing | Review ToS |
| MCP.so | Public listing | Scraping may be needed |
| skills.sh | Public CLI + listing | Vercel ToS review |
| SkillsMP | Public aggregator | Review ToS |
| ClawHub | Public API / GitHub | API may have rate limits |
| GitHub | GitHub API (5K req/hr authenticated) | Available |
Action: Review ToS of all registries. Reach out for API access/partnership where needed.
| Question | How to Answer | Status |
|---|
| Can ClawHub build all this themselves? | Assess ClawHub’s team, roadmap, and post-creator-departure trajectory | INVESTIGATE |
| Will Snyk launch a registry? | Monitor Snyk blog, Invariant Labs roadmap | WATCH WEEKLY |
| Will Vercel add security to skills.sh? | Monitor skills.sh releases | WATCH WEEKLY |
| Is Composio planning public discovery? | Monitor product announcements, test their product | INVESTIGATE |
| Will Anthropic expand the official registry? | Monitor AAIF announcements, MCP spec evolution | WATCH |
| Challenge | Honest Answer |
|---|
| ”Why not just use Snyk?” | Snyk scans code. We integrate trust into discovery. Different product surface. But if they build a registry, we need to rethink. |
| ”Why not just use ClawHub?” | ClawHub is OpenClaw-only. We’re cross-platform (MCP + SKILL.md). But if cross-platform demand is weak, our differentiation weakens. |
| ”Why not just use Smithery?” | Smithery is MCP-only, no security. But they have 322K monthly visits and strong UX. We need to match their UX and add trust. |
| ”What’s your moat?” | Trust data compounds — every scan builds proprietary security intelligence. But it takes time and volume to create a real moat. |
| Model | Status | Evidence |
|---|
| Enterprise governance revenue | EMERGING | Composio $2M ARR at 200+ customers |
| Security scanning API | FEASIBLE | Snyk model, Semgrep model |
| Developer subscriptions | UNKNOWN | No data on developer WTP for registry features |
| Marketplace commissions | NOT READY | <$100K/month ecosystem GMV |
| Question | How to Test | Priority |
|---|
| What would enterprises pay for governance? | Discovery calls with 10 enterprise buyers; benchmark vs. Snyk ($98/dev/mo), GitHub Enterprise ($21/user/mo) | P1 |
| Would developers pay $29/mo for premium features? | Price sensitivity testing during alpha | P1 |
| Is 15% take rate acceptable (if commerce launches)? | Compare to Apify (20%), App Store (30%), Gumroad (10%) | P2 |
| Question | Priority | Status |
|---|
| Can we legally index skills from all registries? | P0 | Review ToS |
| Liability for trust scores? | P0 | Legal counsel needed |
| SOC2 requirements? | P1 | Compliance advisor |
| GDPR for publisher data? | P1 | Legal counsel |
| Trademark “Findable”? | P0 | Trademark search |
| Decision | Recommendation |
|---|
| Incorporation | Delaware C-Corp (standard for US VC) |
| India subsidiary | Pvt Ltd (if engineering in India) |
| IP ownership | US entity |
| Question | Honest Answer | Evidence Needed |
|---|
| ”Why won’t Snyk build this?" | "They might. But they scan code, not build registries. Different product. And we have 12-18 month window before they’d combine all features.” | Monitor Snyk roadmap |
| ”Why won’t ClawHub add this?" | "They could for OpenClaw. But we’re cross-platform. And post-creator-departure + ClawHavoc, their trajectory is uncertain.” | Track ClawHub development |
| ”Is there real demand?" | "Security demand is proven (32-41% vulns, OWASP/NIST). Enterprise governance is emerging ($2M ARR at Composio). Commerce is not yet validated.” | Scanning results, enterprise interviews |
| ”What about the GPT Store failure?" | "GPT Store was single-platform with no quality signals. We’re cross-platform, security-first, and NOT leading with commerce.” | Competitive analysis |
| ”What’s your TAM?" | "$28-115M. The low end is more likely near-term. Enterprise governance + security scanning.” | Validated TAM analysis |
| ”Year 5 revenue?" | "$10-20M realistic, not $63M. Enterprise governance is the primary driver.” | Revised financial model |
| # | Decision | Status | Notes |
|---|
| D1 | Platform name | DECIDED: Findable (findable.sh) | |
| D2 | Lead with security | DECIDED | Most validated wedge |
| D3 | Defer commerce | DECIDED | Until ecosystem GMV >$500K/mo |
| D4 | Open-source scanner | DECIDED | MIT license for CLI |
| D5 | Enterprise governance as first revenue | DECIDED | Months 8-14 |
| D6 | Incorporation structure | PENDING | Delaware C-Corp recommended |
| D7 | Engineering location | LEANING: India-first | Cost advantage, talent pool |
| D8 | Fundraise timing | OPEN | Pre-seed after MVP, or bootstrap? |
| Week | Focus | Deliverables |
|---|
| Week 1 | Customer discovery + scanner prototype | 15 developer interviews; working scanner CLI (basic); domain + branding finalized |
| Week 2 | Scanner v1 + batch scanning | Scanner detects API leaks + malware patterns; scan 5K+ skills; start compiling security report |
| Week 3 | Web search + trust scores | Basic web UI with search; trust scores displayed; index 30K+ skills from major registries |
| Week 4 | Launch prep + validation | Security report draft; alpha feedback synthesis; launch materials ready; HN post drafted |
Week 5+: Launch security report. Open-source scanner. Measure response. Iterate or pivot based on traction.