Deployment

Deploy Lifeverse AI to production

Deployment options

| Option | Best for | Complexity |
| --- | --- | --- |
| Local | Development, testing | Low |
| Single server | Small deployments (1-5 agents) | Medium |
| Cloud | Production with full agent teams | Medium-high |

Local deployment

```bash
# Start the daemon (runs all agents on schedule)
python3 daemon/start.py

# Or run a single agent manually
python3 scripts/run_agent.py --agent ceo
```

Production deployment

Infrastructure

| Component | Recommendation |
| --- | --- |
| Compute | Any Linux/macOS server with Python 3.10+ |
| Agent memory | Local SQLite (per-agent databases) |
| Application data | Google Cloud SQL or PostgreSQL |
| Time-series data | MongoDB Atlas |
| Dashboards | Vercel, Cloud Run, or any Node.js host |

Environment setup

```bash
# Required environment variables
ANTHROPIC_API_KEY=your-api-key
DATABASE_URL=your-production-database
MONGODB_URI=your-mongodb-connection

# Optional
LOG_LEVEL=INFO
MAX_CONCURRENT_AGENTS=3
HEALTH_CHECK_INTERVAL=300
```
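Missing credentials are easiest to catch at startup rather than mid-run. A sketch of a fail-fast check (the actual daemon's startup validation isn't shown here, so this is an illustration, not its real code):

```python
import os
from typing import Mapping

# The three variables the docs list as required
REQUIRED = ("ANTHROPIC_API_KEY", "DATABASE_URL", "MONGODB_URI")

def missing_env(env: Mapping[str, str] = os.environ) -> list[str]:
    """Return the names of required variables that are unset or empty."""
    return [name for name in REQUIRED if not env.get(name)]

# At startup: raise immediately if anything is missing
# if missing := missing_env():
#     raise SystemExit(f"Missing environment variables: {', '.join(missing)}")
```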

Daemon configuration

```yaml
# config/daemon.yaml (production)
daemon:
  check_interval: 60
  max_concurrent_agents: 3
  health_check_interval: 300
  log_level: INFO
  error_notification: hello@yourcompany.com
```
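One way to consume a config like this is a typed object with the production values above as defaults, so a partial file still yields a complete configuration. This is a hypothetical sketch, not the daemon's actual config loader:

```python
from dataclasses import dataclass

@dataclass
class DaemonConfig:
    # Defaults mirror the production daemon.yaml shown above
    check_interval: int = 60
    max_concurrent_agents: int = 3
    health_check_interval: int = 300
    log_level: str = "INFO"
    error_notification: str = ""

    @classmethod
    def from_dict(cls, raw: dict) -> "DaemonConfig":
        """Build a config from the parsed `daemon:` mapping,
        ignoring keys this version doesn't know about."""
        known = set(cls.__dataclass_fields__)
        return cls(**{k: v for k, v in raw.items() if k in known})
```

Ignoring unknown keys keeps old daemons tolerant of newer config files; you could equally choose to reject them loudly.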

Deploy dashboards

```bash
cd dashboards/your-dashboard
npm run build
npm start
```

Monitoring

Essential metrics to track:

- Agent health — are agents executing on schedule?
- API usage — Anthropic token consumption and costs
- Memory growth — SQLite database sizes over time
- Message backlog — unprocessed A2A messages
- Error rates — failed agent executions

```bash
# Health dashboard
python3 daemon/health_dashboard.py

# Agent status report
python3 scripts/agent_status.py --detailed
```

Backup

```bash
# Backup all agent brains
python3 scripts/backup_brains.py --output /path/to/backup/

# Backup knowledge library
python3 scripts/backup_knowledge.py --output /path/to/backup/
```

Back up agent SQLite databases regularly. These contain all agent memory and context — losing them means agents start from scratch.
