MiroFish Overview

MiroFish is a next-generation swarm intelligence engine designed to create high-fidelity digital simulations of real-world scenarios. By ingesting "seed" materials—such as news articles, policy drafts, or financial reports—MiroFish automatically constructs a parallel world populated by thousands of autonomous agents. These agents possess independent personalities, long-term memories, and behavioral logic, allowing them to interact and evolve socially to predict future outcomes. README.md27-32 README-EN.md27-32

The system serves as a "digital sandbox" where users can observe emergent behaviors from a "God's-eye view" and inject variables to test policy risks, public relations strategies, or creative narratives. README-EN.md38-41

🔄 The Five-Stage Simulation Lifecycle

MiroFish operates through a structured workflow that transitions from raw data to deep analytical insights:

  1. Graph Building: Extraction of entities and relationships from source documents to build a Knowledge Graph (GraphRAG) and inject collective memory. README-EN.md88
  2. Environment Setup: Generation of agent personas and platform configurations (Twitter/Reddit) based on the extracted ontology. README-EN.md89
  3. Simulation Execution: Parallel execution of multi-agent interactions across simulated social platforms, with dynamic temporal memory updates. README-EN.md90
  4. Report Generation: The ReportAgent uses a specialized toolset to analyze simulation logs and generate comprehensive predictive reports. README-EN.md91
  5. Deep Interaction: A post-simulation phase where users can chat directly with any agent or the ReportAgent to explore specific nuances. README-EN.md92

Sources: README.md86-93 README-EN.md87-92
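The five stages above form a linear pipeline in which each stage consumes the previous stage's output. The sketch below illustrates that flow in miniature; the stage names mirror the lifecycle, but the `PipelineState` class and `run_stage` function are illustrative, not MiroFish's actual entry points.

```python
from dataclasses import dataclass, field

# Stage names mirroring the five-stage lifecycle described above.
STAGES = [
    "graph_building",
    "environment_setup",
    "simulation_execution",
    "report_generation",
    "deep_interaction",
]

@dataclass
class PipelineState:
    """Accumulates artifacts as each stage completes."""
    completed: list = field(default_factory=list)
    artifacts: dict = field(default_factory=dict)

def run_stage(state: PipelineState, stage: str) -> PipelineState:
    # Each stage reads the artifacts produced so far and adds its own.
    # Real handlers would call the corresponding backend service.
    state.artifacts[stage] = f"{stage}_output"
    state.completed.append(stage)
    return state

def run_pipeline() -> PipelineState:
    state = PipelineState()
    for stage in STAGES:
        state = run_stage(state, stage)
    return state
```

Because the stages are strictly ordered, a failure in one stage (for example, graph building) halts everything downstream, which is why the backend treats each stage as a separate long-running task.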


🛠 Tech Stack

MiroFish utilizes a decoupled architecture combining modern web technologies with advanced AI orchestration:

| Component | Technology |
| --- | --- |
| Frontend | Vue.js 3, Vite, D3.js (visualization), Tailwind CSS |
| Backend | Python 3.11+, Flask (REST API), UV (package management) |
| Simulation Engine | OASIS (by CAMEL-AI) |
| Memory Layer | Zep Cloud (GraphRAG & episode storage) |
| LLM Orchestration | OpenAI-compatible SDK (supports GPT-4o, Qwen-plus, etc.) |

Sources: package.json1-21 backend/pyproject.toml11-35 Dockerfile1-11
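Using an OpenAI-compatible SDK means different providers can be targeted simply by swapping the base URL and model name. The sketch below shows that idea; the provider table, environment-variable names, and the `client_config` helper are assumptions for illustration, not MiroFish's actual configuration.

```python
import os

def client_config(provider: str) -> dict:
    """Build kwargs for an OpenAI-compatible client for a given provider.

    The entries here are illustrative; any endpoint that speaks the
    OpenAI chat-completions protocol can be slotted in the same way.
    """
    providers = {
        "openai": {
            "base_url": "https://api.openai.com/v1",
            "model": "gpt-4o",
        },
        "qwen": {
            # Alibaba Cloud's OpenAI-compatible endpoint for Qwen models.
            "base_url": "https://dashscope.aliyuncs.com/compatible-mode/v1",
            "model": "qwen-plus",
        },
    }
    cfg = dict(providers[provider])
    # Hypothetical env-var naming convention, e.g. OPENAI_API_KEY / QWEN_API_KEY.
    cfg["api_key"] = os.environ.get(f"{provider.upper()}_API_KEY", "")
    return cfg
```

With this shape, the returned dict can be passed straight to the SDK's client constructor (e.g. `OpenAI(base_url=..., api_key=...)`), keeping the rest of the orchestration code provider-agnostic.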


🏗 System Integration

The system is split into a Node.js-based frontend (port 3000) and a Python/Flask backend (port 5001). The frontend provides a step-by-step wizard to guide the user through the simulation lifecycle, while the backend manages long-running tasks like document processing, LLM-based persona generation, and simulation execution. package.json9-11 docker-compose.yml9-11
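Because tasks like simulation execution outlast a single HTTP request, a backend in this shape typically starts the work in the background and lets the frontend poll for status by task ID. The sketch below shows that pattern with a plain thread and an in-memory registry; the function names and status fields are illustrative, not MiroFish's actual API.

```python
import threading
import uuid

# In-memory task registry; a production backend would persist this.
TASKS: dict = {}

def start_task(work) -> tuple:
    """Run `work` in a background thread and return (task_id, thread)."""
    task_id = uuid.uuid4().hex
    TASKS[task_id] = {"status": "running", "result": None}

    def runner():
        try:
            TASKS[task_id]["result"] = work()
            TASKS[task_id]["status"] = "done"
        except Exception:
            TASKS[task_id]["status"] = "error"

    thread = threading.Thread(target=runner, daemon=True)
    thread.start()
    return task_id, thread

def task_status(task_id: str) -> str:
    """What a polling endpoint (e.g. GET /tasks/<id>) would return."""
    return TASKS[task_id]["status"]
```

In the real system, the Flask routes would wrap `start_task` and `task_status`, and the frontend wizard would poll the status endpoint to advance the user to the next step when a stage completes.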

High-Level Component Interaction

This diagram illustrates how the primary code entities bridge the gap between user intent and the underlying simulation engine.

Diagram: System Entity Mapping


Sources: package.json9-11 Dockerfile26-29 README-EN.md118-128

Simulation Workflow Logic

The following diagram maps the logical stages of the simulation to the specific backend services and configuration entities defined in the code.

Diagram: Workflow to Code Mapping

Sources: README-EN.md87-92 backend/pyproject.toml20-24 docker-compose.yml13-14
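In lieu of the diagram, the stage-to-component mapping it describes can be summarized as a simple table. The service descriptions below are inferred from the workflow text and tech stack above, not actual MiroFish module names.

```python
# Hedged sketch: which kind of backend component handles each lifecycle
# stage, per the workflow description. Descriptions are illustrative.
STAGE_SERVICES = {
    "graph_building": "Zep Cloud GraphRAG ingestion of seed documents",
    "environment_setup": "LLM-based persona and platform generator",
    "simulation_execution": "OASIS multi-agent simulation engine",
    "report_generation": "ReportAgent with its analysis toolset",
    "deep_interaction": "post-simulation agent chat interface",
}
```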


📖 Major Child Sections

For detailed technical documentation, refer to this page's sub-pages.