Claude Code for Accounting Research: An Initial Discussion
A long-form Q&A I had while figuring out how to drop Claude Code into a real accounting-research project — literature PDFs, Python/SAS/R code, LaTeX writeups, Beamer slides, results, and all. Covers the four-tier CLAUDE.md hierarchy, skills, and how to organize the repo so the assistant actually knows where things live.
Question 1
How can I use Claude Code to work on my research projects in accounting? I have a project folder with these subfolders: literature (PDF files and converted Markdown versions of relevant papers I need to cite and know); code (Python, SAS, and R code); writeup (LaTeX and Word files); results (the CSV or PDF files exported by the Python/SAS/R code); slides (LaTeX Beamer and PPT presentations). I want to discuss issues in my research and need Claude Code to discuss, analyze, summarize, and make suggestions, while I stay hands-on with both the coding and the writing.
Great question — this gets into the most powerful part of Claude Code’s architecture. Let me search for the latest docs on this.
This is a rich and evolving ecosystem. Here’s a comprehensive, accounting-research-specific guide synthesizing everything from the current docs.
The Full Memory & Skills Architecture for Your Research Project

Layer 1: The Four-Tier CLAUDE.md Hierarchy

All levels combine — they don't replace each other. More specific levels override on conflicts, and everything is active simultaneously.
```
~/.claude/CLAUDE.md                          ← Tier 1: YOUR global prefs (all projects)
your-research-project/CLAUDE.md              ← Tier 2: This project's shared brain
your-research-project/CLAUDE.local.md        ← Tier 3: Your personal notes (gitignored)
your-research-project/literature/CLAUDE.md   ← Tier 4: Subfolder-specific rules
your-research-project/code/CLAUDE.md
your-research-project/writeup/CLAUDE.md
... etc.
```
Important caveat on subfolder CLAUDE.md: Subfolder CLAUDE.md files are not loaded at launch — they are only included when Claude actually reads files in those subtrees.  So if you ask Claude to work on writeup/paper.tex, it will pick up writeup/CLAUDE.md at that point. This is actually ideal for research — each subfolder only loads its context when you’re actively working there.
What Goes Where — Accounting Research Edition

~/.claude/CLAUDE.md — Your Researcher Identity (global)
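As a sketch of what that global file might contain (every field here is illustrative; adapt it to your own profile and stack):

```markdown
# ~/.claude/CLAUDE.md — global researcher preferences (illustrative sketch)

## Who I am
- Empirical accounting researcher; archival work on firm-level panels

## Defaults across all projects
- Code: Python (pandas), R, SAS for legacy data pulls
- Writing: LaTeX; cite with BibTeX keys, never raw author-year strings
- Always show diffs before editing; never run destructive git commands

## Tone
- Be direct; flag identification problems even when I don't ask
```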
Question 2
How can I use Claude Code to work on my research projects in accounting? I have a project folder with these subfolders: literature (PDF files and converted Markdown versions of relevant papers I need to cite and know); code (Python, SAS, and R code); writeup (LaTeX and Word files); results (the CSV or PDF files exported by the Python/SAS/R code); slides (LaTeX Beamer and PPT presentations). I want to discuss issues in my research and need Claude Code to discuss, analyze, summarize, and make suggestions, while I stay hands-on with both the coding and the writing.
This is a great use case for Claude Code — it’s genuinely well-suited for exactly the kind of multi-modal research workflow you’re describing. Here’s a practical guide tailored to accounting research:
Setting Up Claude Code for Your Research Project

1. Install & Launch

Claude Code is a CLI tool. Install it via npm and set your API key:

```bash
npm install -g @anthropic-ai/claude-code
export ANTHROPIC_API_KEY=your_key_here
cd /path/to/your/research/project
claude
```
Once inside your project folder, run /init: Claude Code scans the repo and drafts a starter CLAUDE.md for you.
2. The CLAUDE.md File — Your Most Important Asset

Each Claude Code session begins with a fresh context window, and CLAUDE.md is the mechanism that carries knowledge across sessions. It becomes part of Claude's system prompt, so every conversation starts with your project context already loaded — eliminating the need to re-explain your folder structure, preferences, or conventions repeatedly. Create a CLAUDE.md in your project root tailored to accounting research. Here's a template to adapt:
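One possible sketch, with the folder names taken from your project and everything else illustrative rather than prescribed:

```markdown
# CLAUDE.md — project root

## Project
Empirical accounting study; target journal: [JAR/JAE/TAR].

## Layout
- literature/  PDFs + converted markdown of papers to cite
- code/        Python, SAS, and R scripts
- writeup/     LaTeX paper + Word files
- results/     CSV/PDF output exported by code (regenerated, never hand-edited)
- slides/      LaTeX Beamer + PPT presentations

## Conventions
- Never edit files in results/ by hand; regenerate them from code/
- When citing, prefer papers already present in literature/
- Discuss research design trade-offs before changing any specification
```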
Question 3
How about using Cursor together with Claude Code? I still want to keep my own hands on the coding and writing, for sure. And how can I make sure progress stays synced without conflicts? How about also using GitHub commits?
Great question — this is the modern research stack that many power users are converging on. Let me get current info.
This is a well-trodden combo that the community has converged on. Here’s the full picture for your accounting research setup.
Cursor + Claude Code: The Research Dual-Tool Stack

How They Divide Labor — The Core Insight

The tools don't conflict because Claude Code's CLAUDE.md lives in your repo and Cursor reads it too — your project context is shared. Think of it this way:
| Tool | Strength for Research | When to Use |
|---|---|---|
| Cursor | Inline editing, autocomplete, seeing changes visually, Ctrl+K quick fixes | Writing code/LaTeX line by line, reviewing diffs, fast iterations |
| Claude Code | Cross-file reasoning, multi-step plans, architecture, docs, Git | Big restructuring, running scripts, discussing strategy, committing |
Running them side-by-side is like pairing a careful architect with a sharp technical writer — Claude for clarity and commit hygiene, Cursor for scaffolding and in-the-flow editing. 
Setup: Running Claude Code Inside Cursor

The official Claude Code VS Code/Cursor extension is the recommended way — install it from the Extensions view (Cmd+Shift+X), search "Claude Code", and install. It provides a native panel with side-by-side diffs, conversation history, and @-mentions of files with specific line ranges.

Alternatively, the terminal-in-Cursor approach:
1. Open your research project folder in Cursor
2. Open Cursor's integrated terminal
3. Run claude — this activates Claude Code's IDE integration mode: it detects that it's running inside Cursor and establishes a connection, giving it full context of your project files, which file you have open, what text you have selected, and diagnostic errors from your linter.
Your Research Workflow: Who Does What

Cursor handles:
- Writing a regression in code/03_main_tables.R line by line with autocomplete
- Quick LaTeX fixes in writeup/paper.tex with Ctrl+K
- Reviewing Claude Code's proposed changes via the diff panel before accepting
- Navigating between your subfolders visually

Claude Code handles:
- "Read results/table3.csv and update writeup/tables.tex to match the new coefficients"
- "Check all my regression scripts in code/ for look-ahead bias"
- "Restructure writeup/paper.tex Section 3 based on the reviewer's comment in STATUS.md"
- Writing Git commits with meaningful messages

For quick edits and questions, use Cursor's built-in features. For complex multi-file operations, use Claude Code. The best approach when they conflict: let Claude finish its operation first, then use Cursor for smaller follow-up tweaks.
GitHub: The Sync and Conflict Solution

Git is the single source of truth that makes the whole system work without conflicts. Here's the discipline to adopt.

Your .gitignore for research:

```gitignore
# Personal/local Claude memory
CLAUDE.local.md
.claude/STATUS_local.md

# R/Python/SAS outputs (regeneratable — track code, not outputs)
results/*.csv
results/logs/

# LaTeX build artifacts
writeup/*.aux
writeup/*.log
writeup/*.bbl
writeup/*.blg
writeup/*.synctex.gz
writeup/*.pdf   # optional: track or not based on preference

# OS/editor noise
.DS_Store
.cursor/
*.Rhistory
__pycache__/
.RData
```

Track these in Git:
- ✅ CLAUDE.md (all levels)
- ✅ .claude/STATUS.md
- ✅ .claude/skills/
- ✅ code/*.py, *.R, *.sas
- ✅ writeup/*.tex, *.bib
- ✅ literature/*.md (converted papers)
- ✅ slides/*.tex, *.pptx
The Commit Discipline — No Conflicts, Clear History

The key rule: commit before you hand off between tools. Claude Code is genuinely excellent at writing commits — succinct titles, useful bodies, and explanations that read well and capture plenty of detail: the chore work most researchers skip altogether.

Branch strategy for a research project:
```
main            ← clean, working state; what you'd share with coauthors
├── analysis    ← active coding work (Python/R/SAS)
├── writing     ← active writing work (LaTeX/Word)
└── robustness  ← robustness checks that might not make the paper
```
Daily workflow:

```bash
# Morning: start fresh
git pull                  # sync if collaborating
git checkout writing      # or whichever branch fits today's work
claude                    # launch Claude Code, run /session-start

# During work — commit frequently, in small chunks
git add code/03_main_tables.R
# Ask Claude Code: "write a commit message for what we just did"
git commit -m "Add industry FE to Table 3, recheck clustering"

# Cursor edit → quick commit
git add writeup/paper.tex
git commit -m "Revise hypothesis development per R1 comment 3"

# End of session (inside Claude Code: /wrap-session updates STATUS.md)
git add .claude/STATUS.md
git commit -m "Update session status: entropy balancing done, Table 5 pending"
git push
```
If Cursor and Claude Code both edited the same file: Git will catch it. Since Claude Code shows diffs before applying changes, the habit is: always review the diff in Cursor before accepting, then commit immediately. Never let two sessions accumulate edits to the same file without committing between them.
Preventing Conflicts: The Three Rules

Rule 1 — One tool edits a file at a time. If Claude Code is working on writeup/paper.tex, don't simultaneously edit it in Cursor. Finish Claude's task → review diff → commit → then use Cursor.

Rule 2 — Commit at every logical checkpoint. Not at the end of the day — at the end of each task. "Added Table 4" is a commit. "Revised Section 3 intro" is a commit. This gives you a precise rollback point if an AI edit goes wrong.

Rule 3 — CLAUDE.md and STATUS.md are the handoff documents. Before switching from Cursor work to Claude Code work (or vice versa), make sure these files reflect the current state. Claude Code reads them fresh every session — they're your sync mechanism.
The Full Daily Loop

1. Morning — git pull → /session-start → read STATUS.md → know exactly where you are
2. Coding work (code/ subfolder) — Cursor: write/edit R/Python/SAS with autocomplete; Claude Code: debug, restructure, check for bias, run scripts → commit each logical unit
3. Writing work (writeup/ subfolder) — Cursor: line-level LaTeX editing, quick Ctrl+K fixes; Claude Code: cross-section revisions, consistency checks vs results/ → commit each section or revision round
4. End of session — /wrap-session → STATUS.md updated → git add . → git commit -m "Session: [what was done]" → git push
This setup gives you genuinely hands-on control — you’re always the one reviewing diffs, accepting or rejecting changes, and making the commits — while Claude Code handles the heavy reasoning and Cursor handles the fluid in-editor experience. The Git log becomes your research diary, and STATUS.md is your memory bridge across sessions.
Question 4
I want to use subagents as research advisors in my project. For each of the five major phases — Research Design, Data & Measurement, Empirical Analysis, Manuscript, Dissemination — I want to be able to call on a specific expert. For example, I’d want Prof. ... advising me on econometrics. How do I set this up in Claude Code?
This is one of the highest-leverage things you can do with Claude Code for research. The pattern is called the advisor panel, and it maps perfectly onto the five-phase project lifecycle. Let me explain how it works and how to build it.
The Core Idea: Advisor Skills
Each advisor is a skill — a folder in .claude/skills/ containing a SKILL.md that encodes the advisor’s intellectual identity: their domain, their characteristic questions, their standards, their blind spots, and the register they speak in. When you invoke the skill, Claude Code adopts that persona and reviews your work through that lens.
The advisor doesn’t replace you. It challenges you, the way a real advisor does: flagging weak identification, pointing out missing citations, asking why you’re not using a more robust estimator, noting that the prose buries the punchline. You decide what to act on.
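A minimal sketch of such a skill file (the advisor name, persona, and checklist below are illustrative; only the SKILL.md file with YAML frontmatter is the actual convention):

```markdown
---
name: econometrics-advisor
description: Reviews identification, specification, and inference like a skeptical econometrics professor. Use when checking research design or regression output.
---

You are a senior econometrics advisor for empirical accounting research.

When reviewing, always ask:
1. What is the identifying assumption, and where could it fail?
2. Is the clustering level justified by the treatment assignment?
3. Are all controls plausibly pre-treatment?
4. Would a referee ask for a placebo or parallel-trends test here?

Close every review with the single biggest threat to validity, in one sentence.
```

Place it at .claude/skills/econometrics-advisor/SKILL.md and Claude Code can load it on demand.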
Practical Starting Point
Don’t try to build all five advisors at once. Pick the phase you are currently in and the one advisor whose voice you most want in your head. For a paper at the analysis stage, that’s usually the econometrics advisor. For a paper going to submission, it’s the manuscript advisor plus the journal-fit advisor.
Write the skill, invoke it on one section, see what it produces. The feedback loop will tell you quickly whether the persona is calibrated right — if it produces vague generalities, sharpen the criteria; if it produces feedback that sounds exactly like a real referee, you’ve got it.
Question 5
What is the actual difference between the advisor-as-skill approach (Q4) and using real subagents? What are the trade-offs, especially around context information and context window limits?
The distinction is architectural, and it matters more the longer your project runs. The two approaches differ on one axis: who owns the context window.
Two Models, One Axis
Advisor skill — same Claude instance, same context window. The skill instructions load as an additional prompt layer on top of everything already in the conversation. The advisor sees the full session: every file you have open, every decision you have discussed, every correction you have made in this session. It inherits your context.
Subagent — a separate Claude instance, spawned via the Agent tool, with its own fresh context window. It starts from zero. It knows only what you explicitly pass it in the dispatch prompt. It runs independently — potentially in parallel with other subagents — and returns its output as text to the parent. It cannot read the parent’s conversation history.
```
Skill (persona)                      Subagent (independent process)
─────────────────────────            ─────────────────────────────────
Parent context window                Parent context window
┌──────────────────────┐             ┌───────────────────┐
│ conversation history │             │ conversation hist │
│ loaded files         │             │ loaded files      │
│ prior decisions      │      ───▶   │ dispatch prompt   │ ◀─ only this
│ open questions       │             └───────────────────┘
│ [advisor skill instr]│                      │
└──────────────────────┘                      ▼
          │                          Subagent context window (fresh)
          ▼                          ┌───────────────────────────────┐
advisor sees all of it               │ dispatch prompt               │
                                     │ files passed explicitly       │
                                     │ no session history            │
                                     └───────────────────────────────┘
                                                  │
                                                  ▼
                                     returns text result to parent
```
The Context Trade-off in Detail
What the skill advisor has that the subagent doesn’t
When you invoke an advisor skill mid-session, it implicitly knows:
- That you already decided to cluster standard errors at the firm level (because you discussed it earlier in the session)
- That Reviewer 2 asked specifically about the parallel trends test (because you pasted the review letter)
- That the coefficient on C9 × Post2019 is 0.034 and barely significant (because you loaded the results CSV twenty messages ago)
- That you are targeting JAE, not TAR (because it's in CLAUDE.md, which loads at session start)
None of this needs to be re-stated. The advisor already has it. This is the implicit context advantage of skills — the accumulated working memory of the session is available for free.
A subagent dispatched at the same moment knows none of this. You must reconstruct the relevant context in the dispatch prompt: what the paper is about, what the current regression spec is, what the referee already asked for, what decisions are already locked in. If you omit something, the subagent may give advice that contradicts a decision you already made an hour ago.
What the subagent has that the skill advisor doesn’t
The context window is a finite budget. A long research session accumulates fast: your CLAUDE.md, the STATUS.md, several loaded files (paper draft, regression output, data description), and a full conversation history. By the midpoint of a working session on a large manuscript, the advisor skill is reading the paper through an already-crowded context window. At worst, early parts of the conversation get auto-compacted and the advisor’s implicit context degrades silently.
A subagent gets a full fresh context window — typically 200K tokens. You can hand it the complete manuscript, the full regression output, and three comparison papers from the literature, and it has the entire budget to read all of it carefully. It is not competing for space with your session history.
This matters most for:
- Full-paper reviews — a 60-page manuscript plus appendix is a lot of context; a subagent handles it cleanly
- Cross-file analysis — asking an advisor to check consistency between the writeup and all five regression tables is expensive in the parent window; a subagent does it in its own
- Parallel dispatch — you cannot run three advisor skills simultaneously in one session; you can dispatch three subagents in a single message and get all three reviews back concurrently
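Such a parallel dispatch might read like this in practice (the advisor names and task wording are illustrative, following the advisor-panel pattern above):

```markdown
In a single message to Claude Code:

Dispatch three subagents in parallel, each with the context brief below:
1. econometrics-advisor → review all tables in results/ for inference problems
2. manuscript-advisor   → review writeup/paper.tex §§1–2 for framing and clarity
3. journal-fit-advisor  → assess fit for JAE vs TAR given the stated contribution

Return all three reviews, each as a numbered list of issues with severity.
```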
The cost of the subagent’s fresh start
The subagent’s blank slate is its strength and its liability. Everything it needs must be in the dispatch prompt. For a research project, the things most likely to be missing — and most likely to cause wrong advice — are:
- Decisions already locked in. "We are using firm-level clustering, not state-level — this was settled with the coauthors." The subagent doesn’t know this and may recommend changing it.
- Prior reviewer feedback. "R2 specifically asked for a heterogeneity test by financial constraint quartile." The subagent may not flag this gap because it doesn’t know the gap was flagged.
- Project conventions. Things in CLAUDE.md that load automatically for skills — variable naming, journal style, the DiD design rationale — must be passed explicitly to the subagent or it will miss them.
The remedy is a context brief: a short structured block at the top of every subagent dispatch that summarizes the locked-in decisions, the current open questions, and the constraints.
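One way to structure that brief (the headings are a suggested shape, not a fixed format; the example details echo the project described above):

```markdown
## Context brief (prepend to every subagent dispatch)

**Paper:** DiD study of C9 disclosure; treatment interaction is C9 × Post2019. Target: JAE.
**Locked-in decisions (do not revisit):**
- Firm-level clustering, not state-level (settled with coauthors)
- Entropy balancing as the main matching approach
**Known referee asks:** R2 wants a heterogeneity test by financial-constraint quartile
**Your task:** review writeup/paper.tex §4 and results/table3.csv for internal consistency
```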
Decision Guide: Skill or Subagent?
| Situation | Use skill | Use subagent |
|---|---|---|
| Mid-session, files already loaded, quick persona check | ✓ | |
| Full-paper review, fresh eyes, large document | | ✓ |
| Need implicit awareness of session decisions | ✓ | |
| Context window is getting crowded (>50% full) | | ✓ |
| Running multiple advisors in parallel | | ✓ |
| Iterating on a section in real time | ✓ | |
| End-of-phase review before moving on | | ✓ |
| Advisor needs to see what you discussed 2 hours ago | ✓ | |
The practical rule: skills for in-session iteration; subagents for heavyweight one-shot reviews. Use skills when the advisor benefits from knowing what you’ve been doing. Use subagents when you want fresh eyes — and can afford to brief them properly.
Integration with the Five-Phase Project
In the five-phase lifecycle, subagents fit naturally at phase boundaries — the moments when you declare one phase done and are about to move to the next. These are the moments when you want a clean, thorough review, not an iterative in-session check:
- Finishing §2 (Data & Measurement): dispatch a measurement-validity subagent to review the full NLP pipeline before moving to analysis
- Finishing §3 (Empirical Analysis): dispatch [..] to review all five tables before you start writing
- Finishing §4 (Manuscript): dispatch the full advisor panel in parallel before submission
The context brief for each of these dispatches can be generated automatically by the /handoff skill — the same document that briefs the next session agent also briefs the advisor subagents, since both need the same information: where the project stands, what is locked in, and what still needs scrutiny.
Question 6
So the right model is: write the advisor as a skill, and then each time I invoke it I decide whether to run it in the current context window or dispatch it as a subagent with that same skill — correct?
Yes. That is exactly the right model. The skill is the unit of definition; the execution mode is a separate per-invocation decision.
```
Advisor skill (defined once)
│
├── invoke in current context  →  /econometrics §3 of paper.tex
│     (skill loads into parent window; advisor sees full session)
│
└── dispatch as subagent       →  Agent tool: "act as Prof xxxx,
      (skill passed in prompt;     load skill from .claude/skills/...,
       fresh context window)       here is the context brief + file"
```
The practical consequence: you write wooldridge-econometrics/SKILL.md once, calibrate it once, and then choose at invocation time based on your current situation:
- Context window still has room, files already loaded, mid-session iteration → run in current context. The advisor gets your session history for free. One command, instant result.
- Session is long, paper is large, or you want parallel reviews → dispatch as subagent. Pass the skill instructions in the prompt alongside a context brief. The advisor gets a fresh 200K window and full attention on just the task you hand it.
The only thing that changes between the two modes is how you invoke it. The skill content — the persona, the criteria, the required closing move — is identical either way. You never rewrite the advisor; you only choose its execution environment.
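Concretely, the subagent-mode dispatch might read like this (wording illustrative; the skill path reuses the wooldridge-econometrics example above, and the file names are stand-ins for your own):

```markdown
Dispatch prompt to the Agent tool:

Read .claude/skills/wooldridge-econometrics/SKILL.md and adopt that persona fully.

Context brief:
- Locked in: firm-level clustering; entropy balancing as the main spec
- R2 asked for heterogeneity by financial-constraint quartile

Task: full review of writeup/paper.tex and results/table3.csv.
Return: a numbered list of issues, each with severity and a suggested fix.
```

In skill mode, the same advisor is instead invoked directly (e.g. /econometrics on an open section) with no brief needed, because the session history is already in context.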
This also means the two modes are not mutually exclusive within a session. You might run a quick skill-mode check on a single paragraph mid-session, then at the end of the session dispatch the same advisor as a subagent for a full-paper review. Same advisor, two different scopes, two different execution modes, one skill file.