Local-first Research OS for papers, workflows, experiments, channels, and automation.
ResearchClaw is not just a chat wrapper. The current codebase is a local-first FastAPI application that combines:
- a long-running app runtime with control-plane APIs
- a web console for chat, papers, research, channels, sessions, cron jobs, models, skills, workspace, environments, and MCP
- multi-agent routing with per-agent workspaces and binding rules
- a persistent research state layer for projects, workflows, tasks, notes, claims, evidences, experiments, artifacts, and drafts
- built-in channels for `console`, `telegram`, `discord`, `dingtalk`, `feishu`, `imessage`, `qq`, and `voice`
- model/provider management with multiple providers, multiple models per provider, and fallback chains
- standard `SKILL.md` support, Skills Hub search/install APIs, MCP client management, and custom channels
- automation triggers, cron jobs, heartbeat, proactive reminders, and runtime observability
- paper search/download, BibTeX utilities, LaTeX helpers, data analysis, browser/file tools, and structured research memory
It is still an Alpha project, but it is no longer just a platform shell. The code now includes a minimal research workflow runtime, claim/evidence graph, experiment tracking, blocker remediation, and project dashboard. The biggest remaining gaps are evidence-matrix quality, stronger claim-evidence validation, richer external execution adapters, and submission/reproducibility packaging.
```bash
git clone https://github.com/MingxinYang/ResearchClaw.git
cd ResearchClaw
pip install -e .
researchclaw init --defaults --accept-security
```

This creates:
- working dir: `~/.researchclaw`
- secret dir: `~/.researchclaw.secret`
- bootstrap Markdown files such as `SOUL.md`, `AGENTS.md`, `PROFILE.md`, and `HEARTBEAT.md`
```bash
researchclaw models config
```

Or add one directly:

```bash
researchclaw models add openai --type openai --model gpt-5 --api-key sk-...
```

Supported provider types in the code today: `openai`, `anthropic`, `gemini`, `ollama`, `dashscope`, `deepseek`, `minimax`, `other`, and `custom`.
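The fallback chains mentioned in the feature list resolve inside the codebase, but the core idea (try each configured model in order until one succeeds) can be sketched in a few lines. `ChainEntry` and `call_model` are hypothetical names for illustration, not ResearchClaw's real API:

```python
from dataclasses import dataclass

@dataclass
class ChainEntry:
    provider: str   # e.g. "openai", "anthropic", "ollama"
    model: str      # e.g. "gpt-5"

def resolve_with_fallback(chain, call_model):
    """Try each (provider, model) pair in order; return the first success."""
    errors = []
    for entry in chain:
        try:
            return call_model(entry)
        except Exception as exc:  # a real implementation would narrow this
            errors.append((entry, exc))
    raise RuntimeError(f"all providers in the chain failed: {errors}")

# Example: the first provider raises, the second answers.
def fake_call(entry: ChainEntry) -> str:
    if entry.provider == "openai":
        raise ConnectionError("rate limited")
    return f"{entry.provider}:{entry.model} ok"

chain = [ChainEntry("openai", "gpt-5"), ChainEntry("ollama", "llama3")]
result = resolve_with_fallback(chain, fake_call)
```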
```bash
researchclaw app --host 127.0.0.1 --port 8088
```

Open http://127.0.0.1:8088.
If the page says `Console not found`, build the frontend once:

```bash
cd console
npm install
npm run build
```

The backend automatically serves `console/dist` when it exists.
After startup, open the Research page in the console to:
- create a project
- inspect workflows, claims, and reminders
- view execution health and recent blockers
- dispatch, execute, or resume remediation work
- FastAPI app with `/api/health`, `/api/version`, `/api/control/*`, `/api/automation/*`, `/api/providers`, `/api/skills`, `/api/mcp`, `/api/workspace`, and more
- gateway-style runtime bootstrapping for the runner, channels, cron, MCP, automation store, and config watcher
- runtime status snapshots for agents, sessions, channels, cron, heartbeat, skills, automation runs, and research services
- project abstraction with persistent `project -> workflow -> task -> artifact` relationships
- workflow stages for `literature_search`, `paper_reading`, `note_synthesis`, `hypothesis_queue`, `experiment_plan`, `experiment_run`, `result_analysis`, `writing_tasks`, and `review_and_followup`
- structured notes including paper notes, idea notes, experiment notes, writing notes, and decision logs
- claim/evidence graph that can link papers, notes, experiments, PDF chunks, citations, generated tables, and artifacts
- experiment tracking with execution bindings, heartbeat/result ingestion, contract validation, result bundle validation, and compare APIs
- proactive workflow reminders plus remediation tasks for missing metrics, outputs, or artifact types
- project dashboards and blocker panels, including batch dispatch/execute/resume actions in the console and APIs
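The persistent hierarchy described above can be pictured as nested records. The following is only an illustrative sketch of the `project -> workflow -> task -> artifact` shape using the documented stage names; the field names are assumptions, not the actual state schema:

```python
from dataclasses import dataclass, field
from enum import Enum

class Stage(Enum):
    # Stage names taken from the workflow list above.
    LITERATURE_SEARCH = "literature_search"
    PAPER_READING = "paper_reading"
    NOTE_SYNTHESIS = "note_synthesis"
    HYPOTHESIS_QUEUE = "hypothesis_queue"
    EXPERIMENT_PLAN = "experiment_plan"
    EXPERIMENT_RUN = "experiment_run"
    RESULT_ANALYSIS = "result_analysis"
    WRITING_TASKS = "writing_tasks"
    REVIEW_AND_FOLLOWUP = "review_and_followup"

@dataclass
class Artifact:
    path: str

@dataclass
class Task:
    title: str
    artifacts: list = field(default_factory=list)

@dataclass
class Workflow:
    stage: Stage
    tasks: list = field(default_factory=list)

@dataclass
class Project:
    name: str
    workflows: list = field(default_factory=list)

# One project holding a single literature-search workflow with one task.
proj = Project("demo", [Workflow(Stage.LITERATURE_SEARCH,
                                 [Task("survey transformers",
                                       [Artifact("papers/attention.pdf")])])])
```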
Built-in tools registered by the agent include:
- papers and citations: `semantic_scholar_search`, `bibtex_search`, `bibtex_add_entry`, `bibtex_export`
- LaTeX: `latex_template`, `latex_compile_check`
- data analysis: `data_describe`, `data_query`
- shell and files: `run_shell`, `read_file`, `write_file`, `edit_file`, `append_file`
- web, transfer, and memory: `browse_url`, `browser_use`, `send_file`, `memory_search`
- skills: `skills_list`, `skills_activate`, `skills_read_file`
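How these tools are wired up internally is project-specific, but a common pattern for exposing named tools to an agent runtime is a decorator-based registry. This is a generic sketch of that pattern, not ResearchClaw's actual registration mechanism:

```python
# Minimal tool registry: map tool names to callables, as an agent runtime might.
TOOLS = {}

def tool(name):
    """Decorator that registers a function under a tool name."""
    def register(fn):
        TOOLS[name] = fn
        return fn
    return register

@tool("data_describe")
def data_describe(rows):
    """Tiny stand-in: summarize a list of numbers."""
    return {"count": len(rows), "min": min(rows), "max": max(rows)}

# The runtime looks tools up by name when the model requests them.
summary = TOOLS["data_describe"]([3, 1, 4])
```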
Bundled skills currently shipped in `src/researchclaw/agents/skills/` include: `arxiv`, `browser_visible`, `citation_network`, `cron`, `dingtalk_channel`, `docx`, `experiment_tracker`, `figure_generator`, `file_reader`, `himalaya`, `literature_review`, `newspaper_summarizer`, `pdf`, `pptx`, `research_notes`, `research_workflows`, and `xlsx`.
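The "standard `SKILL.md`" convention mentioned above typically pairs YAML frontmatter metadata with Markdown instructions. A minimal hypothetical skill file might look like the following; the exact fields this repo expects are not verified here, only the common frontmatter shape is assumed:

```markdown
---
name: citation_network
description: Build and query a citation graph for the current project.
---

# Citation Network

Use `semantic_scholar_search` to fetch references, then link them to claims.
```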
Runtime data lives under the working directory, while secrets are stored separately:
```
~/.researchclaw/
├── config.json
├── jobs.json
├── chats.json
├── research/
│   └── state.json
├── sessions/
├── active_skills/
├── customized_skills/
├── papers/
├── references/
├── experiments/
├── memory/
├── md_files/
├── custom_channels/
└── researchclaw.log

~/.researchclaw.secret/
├── envs.json
└── providers.json
```
Provider credentials and persisted environment variables are intentionally kept out of the working directory.
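Because research state is plain JSON under the working directory, it can be inspected with nothing but the standard library. A sketch follows; the `state.json` payload here is invented for illustration, and only the `research/state.json` path layout comes from the tree above:

```python
import json
from pathlib import Path
from tempfile import TemporaryDirectory

with TemporaryDirectory() as tmp:
    # Mimic the layout: <working dir>/research/state.json
    research_dir = Path(tmp) / "research"
    research_dir.mkdir()
    state_path = research_dir / "state.json"
    # Hypothetical payload; the real schema is defined by ResearchClaw.
    state_path.write_text(json.dumps({"projects": [{"name": "demo"}]}))

    state = json.loads(state_path.read_text())
    names = [p["name"] for p in state["projects"]]
```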
Backend checks:

```bash
pip install -e ".[dev]"
PYTHONPATH=src pytest -q
```

Console build:

```bash
npm --prefix console run build
```

Website build:

```bash
corepack pnpm --dir website run build
```

Repo-wide helper:

```bash
scripts/check-ci.sh --skip-install
```

Main documentation files in this repository:
The current codebase is best described as:
- already strong on runtime infrastructure, control plane, channels, and provider/skill compatibility
- already usable for persistent research projects, workflow execution, experiment tracking, claim/evidence linking, and blocker handling
- still incomplete as a full autonomous research platform: evidence-matrix quality, rigorous validators, deeper execution backends, and submission packaging remain ahead