πŸ”¬ Technical Deep Dive

How SecPhi Actually Works

A transparent look at our architecture, what AI can and can't do, and honest answers about multi-agent systems.

🧠

What LLMs Can (and Can't) Do

Large Language Models (LLMs) like GPT-4, Claude, and Gemini are text-in, text-out machines. They receive text, process it, and generate text back. That's it. They cannot:

🌐❌ Access the internet
πŸ“‘βŒ Make API requests
πŸ“βŒ Read files directly
πŸ—„οΈβŒ Query databases
πŸ’‘Key Insight

LLMs only "see" what your application code feeds them. When SecPhi shows CVE analysis, the AI didn't fetch that dataβ€”our backend code did, then packaged it into a prompt for the AI to analyze.

πŸ”„

What Actually Happens

// SecPhi Backend (JavaScript/Node.js)
// Step 1: Our code makes API call
const response = await fetch('https://nvd.nist.gov/api/...')
const cveData = await response.json()
// Step 2: Our code builds a prompt WITH that data
const prompt = `
Here is real CVE data from NVD:
${JSON.stringify(cveData)}
Analyze this vulnerability and explain the risk.
`
// Step 3: Our code sends prompt to LLM
const analysis = await callGemini(prompt)

The Backend is the Middleman:

1. Fetches real data from APIs (NVD, CISA, GitHub)
2. Packages that data into a prompt
3. Sends the prompt to the LLM
4. Returns the LLM's response to the user
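The four steps above can be sketched end to end. This is a minimal illustration, not SecPhi's actual code: `fetchCveData` and `callGemini` are hypothetical stubs standing in for the real NVD fetch and Gemini API call.

```javascript
// Sketch of the backend-as-middleman flow, with stubbed externals.
async function fetchCveData(cveId) {
  // Real code would query the NVD API here; stubbed for illustration.
  return { id: cveId, cvssScore: 9.8, description: "Remote code execution" };
}

async function callGemini(prompt) {
  // Real code would call the Gemini API; stubbed for illustration.
  return `Analysis based on: ${prompt.slice(0, 40)}...`;
}

async function analyzeCve(cveId) {
  const cveData = await fetchCveData(cveId);            // 1. fetch real data
  const prompt =                                        // 2. package into a prompt
    `Here is real CVE data from NVD:\n` +
    `${JSON.stringify(cveData)}\n` +
    `Analyze this vulnerability and explain the risk.`;
  const analysis = await callGemini(prompt);            // 3. send prompt to LLM
  return { cveId, analysis };                           // 4. return to the user
}
```

The LLM never touches the network in this flow; everything it "knows" about the CVE arrives inside the prompt string built in step 2.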
πŸ€–

Multi-Agent: What It Really Means

⚑Honest Assessment

In SecPhi's current form, "multi-agent" is primarily a presentation layer. Here's what's actually happening versus what it looks like.

What It Looks Like | What It Actually Is
4 specialized agents debating | Same LLM called 4 times with different prompts
Agents reaching consensus | Your code averaging their outputs
Real-time debate | Sequential API calls displayed with animation
Different expert perspectives | Different system prompts to the same model

What the Code Actually Does:

// "Multi-agent" = same LLM with different prompts
// "Scanner Agent"
const scanner = await callGemini("You are a scanner. Analyze...")
// "Scorer Agent"
const scorer = await callGemini("You are a risk scorer. Rate...")
// "Exploit Agent"
const exploit = await callGemini("You are a threat expert. Assess...")
// "Patch Agent"
const patch = await callGemini("You are a remediation expert. Fix...")
πŸ’¬

A single well-written prompt could produce 80% of the same value. The multi-agent UI adds presentation clarity and demonstrates architectural thinking.
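The "consensus" step in this design can be as simple as averaging. A minimal sketch, where the agent names and scores are illustrative, not taken from SecPhi:

```javascript
// "Consensus" as plain score averaging: each agent returns a
// 0-10 risk score, and consensus is the arithmetic mean.
function consensusScore(agentScores) {
  const scores = Object.values(agentScores);
  const sum = scores.reduce((total, s) => total + s, 0);
  return Number((sum / scores.length).toFixed(1));
}

const score = consensusScore({
  scanner: 8.0, // "Scanner Agent"
  scorer: 9.0,  // "Scorer Agent"
  exploit: 7.5, // "Exploit Agent"
  patch: 6.5,   // "Patch Agent"
});
// sum 31.0 over 4 agents = 7.75, rounded to 7.8
```

Note what this is not: no agent sees another's reasoning, and an outlier score is simply diluted rather than surfaced as a disagreement.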

🎯

What "Real" Multi-Agent Would Look Like

πŸ“SecPhi Today
  • All agents use same NVD data
  • Agents run in parallel, never see each other
  • Same Gemini model with different prompts
  • Consensus = averaging scores
πŸš€Real Multi-Agent
  • Each agent queries DIFFERENT data sources
  • Agents respond to each other's outputs
  • Different specialized models
  • Agents can disagree and flag conflicts
βœ…

Why We Built It This Way

πŸ—οΈ

Architecture Ready

The structure is in place to evolve into a true multi-agent system

πŸ“Š

Verified Data

CVE info comes from official sources, not AI hallucinations

🎨

Clear UX

Tabbed agent interface makes complex analysis digestible

πŸ’‘

Educational

Demonstrates multi-agent concepts for learning & interviews

Transparency Builds Trust

We believe in being honest about what AI can and can't do. That's how we build products you can actually rely on.

Explore Full Architecture
Try SecPhi