Task-Specific Instruction Templates (Using in Multi-Repository Project) (Claude) #1123
LimitlessVibes started this conversation in Ideas
I wanted to share a pattern I've been developing with Claude while working on a multi-repo project with SpecKit. It's been helpful for dealing with context issues during long development sessions: I kept running into tasks that would slowly degrade over time, with Claude not paying attention to previous instructions, configurations, patterns, etc. I'd like feedback on this, as it's been a learning curve.
Claude wouldn't engage the MCPs and sub-agents it was supposed to use, would lose track of which repo it was working in, and would forget what security checks were non-negotiable. The context would slowly degrade over time with each compact, and I'd find myself constantly having to remind it "hey, you need to use the semgrep MCP for this" or "don't forget to invoke the database-expert skill."
Even after I'd remind Claude, it would drift back to old patterns. I understand that SpecKit is LLM-agnostic and I am currently only using Claude, but I imagine this would apply to other platforms and their available feature sets.
What I needed was a way to say: "For THIS specific task, in THIS specific repo, you MUST use THESE specific tools, in THIS specific order." As a requirement.
What I came up with: I started experimenting with task-specific instruction files that sit alongside my SpecKit task definitions. The structure looks like this:
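Roughly like this - the directory and file names here are illustrative, not my actual repo:

```text
repo/
├── specs/
│   └── tasks.md                          # SpecKit task list, references the files below
└── task-instructions/
    ├── task-0.2.1-postgresql-setup.md
    ├── task-0.2.2-redis-setup.md
    └── task-0.3.1-auth.md
```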
Each file is basically a detailed playbook for that one task. How I implement this currently: I ask Claude to analyze each task in the SpecKit plan against the sub-agents, skills, MCPs, etc. that I have loaded, then use the template to create the task-specific instructions, and then modify tasks.md to reference the specific instructions for that task. Yes, this eats up more tokens, but it keeps Claude focused, so I spend less time troubleshooting and redoing work.
## What Goes In These Files
I've iterated on this template and am still experimenting, but I wanted to share to make sure I'm not completely off base with this approach:
- **Task classification tags** - I tag each task with types like `[DATABASE]`, `[SECURITY-CRITICAL]`, `[API]`, etc. This automatically tells Claude which tools are non-negotiable. For example, anything tagged `[SECURITY-CRITICAL]` MUST run semgrep before completion. No exceptions.
- **Explicit tool requirements** - I list out every MCP, skill, and sub-agent that needs to be used, as requirements. Like "you MUST use the prisma-local MCP for migrations" or "you MUST invoke the database-expert skill before modifying the schema."
- **Dependency mapping** - Visual maps showing which tasks need to be done first, which ones this task blocks, and which ones can be worked on in parallel.
- **Workflow sequences** - Step-by-step instructions for BEFORE/DURING/AFTER the work. Query the memory MCP first, invoke the project skill, research patterns, then start coding. During implementation, use specific tools at specific times. After completion, store decisions and update task status.
- **Acceptance criteria with teeth** - Every task has a checklist that MUST be completed: all items checked off, all quality gates passed, all security scans clean. The task can't be marked complete until everything's done.
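Put together, the skeleton of one of these instruction files looks roughly like this (section names are illustrative, using my Redis task as the example):

```markdown
# Task 0.2.2: Redis Setup

Tags: [DATABASE]

## Required Tools
- MUST use the prisma-local MCP for migrations
- MUST invoke the database-expert skill before modifying the schema

## Dependencies
- Blocked by: Task 0.2.1 (PostgreSQL setup)
- Blocks: Task 1.2.1, Task 1.3.2

## Workflow
- BEFORE: query memory MCP, invoke project skill, research patterns
- DURING: use the declared tools at the declared steps
- AFTER: store decisions, update task status

## Acceptance Criteria
- [ ] All checklist items complete
- [ ] All quality gates passed
- [ ] Security scans clean (if tagged [SECURITY-CRITICAL])
```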
What this looks like in practice:
For a security-critical auth task, I'd tag it [SECURITY-CRITICAL] and [AUTHENTICATION], which automatically means:
Claude can't check off the acceptance criteria unless those scans have run.
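In the instruction file, those tags translate into requirements along these lines (the exact wording here is illustrative):

```markdown
Tags: [SECURITY-CRITICAL] [AUTHENTICATION]

## Required Before Completion
- MUST run the semgrep MCP scan - no exceptions
- MUST resolve every finding before checking off acceptance criteria
```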
For a database task, I'd tag it [DATABASE], which means:
Plus I'd map out the dependencies. Like "you can't start this until Task 0.2.1 (PostgreSQL setup) is done, and Tasks 1.2.1 and 1.3.2 are blocked until you finish this."
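Rendered in the file, that dependency map looks something like:

```text
Task 0.2.1 (PostgreSQL setup)  -->  THIS TASK  -->  Task 1.2.1
                                               -->  Task 1.3.2
Parallel-safe: (none)
```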
For long-running sessions, the workflow keeps Claude on track:
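A sketch of that BEFORE/DURING/AFTER sequence (step wording illustrative):

```markdown
## Workflow
BEFORE:
1. Query the memory MCP for prior decisions
2. Invoke the project skill
3. Research existing patterns

DURING:
- Use each declared MCP/skill at its declared step

AFTER:
1. Store decisions in the memory MCP
2. Update the task status in tasks.md
```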
Because of this task-scoped context, instead of trying to keep 35 tasks' worth of information in Claude's head at once, each task gets its own focused instruction set. When you're working on Redis setup, you only need to think about Redis setup.
And the mandatory tool declarations prevent regression. Claude can look at the file and go "oh right, this is a [DATABASE] task, I need to use these specific tools."
The dependency maps are helpful for multi-repo work too. Claude doesn't get confused about which repo it's in or what order things need to happen, plus it's a nice visual for the developer.
## How It Plays Out in Practice
Here's a typical workflow now:
Me: "Start working on task 0.3.1"
Claude:
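The response varies with the task, but it generally walks the instruction file in order. A paraphrased sketch, not a verbatim transcript:

```text
1. Reads the task 0.3.1 instruction file and notes its tags
2. Checks the dependency map - confirms upstream tasks are done
3. Queries the memory MCP for prior decisions (BEFORE steps)
4. Invokes the skills/MCPs the file declares, in order
5. Implements, using the declared tools at the declared steps
6. Runs the verification commands and any required security scans
7. Checks off acceptance criteria, stores decisions, updates tasks.md
```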
## The Template Itself
I've sanitized a version of my template that anyone can use. It's got all the placeholders for task metadata, tool requirements, dependencies, acceptance criteria, verification steps, etc. You can grab it and customize it for your own project.
## Sanitized Template
**File structure:**

**Key tech:**
{TECH_LIST_WITH_DOCS_LINKS}

Example:
### Common Mistakes
{TASK_SPECIFIC_PITFALLS}

For DATABASE tasks:

For SECURITY tasks:

General stuff to avoid:
### Files

Will create/modify:
{FILE_LIST}

Example:
- `shared/src/lib/redis.ts` - new
- `docker-compose.yml` - modified
- `.env.example` - modified

Documentation to update:
{DOCS_TO_UPDATE}

Example:
- `docs/INFRASTRUCTURE.md`
- `docs/DEVELOPMENT.md`
- `README.md`

Tests:
{TEST_FILES}

Example:
- `shared/src/lib/redis.test.ts`

### Verification
How to verify it's done:

Example:

Commands to run:
{BUILD_COMMAND}
{TYPE_CHECK_COMMAND}
{TEST_COMMAND}
{LINT_COMMAND}

Security checks (if [SECURITY-CRITICAL]):
{SEMGREP_SCAN_COMMAND}

### Reference Docs
Project docs:
{PROJECT_SPECIFIC_DOCS}
### Post-Completion Checklist

After finishing, do these in order:

### Trigger Keywords
{COMMA_SEPARATED_KEYWORDS}

Example: Redis setup, Redis integration, cache config, Task 0.2.2
### Task Map

### Additional Context
{TASK_SPECIFIC_CONTEXT}

Example: