Merged
10 changes: 10 additions & 0 deletions .envrc
@@ -0,0 +1,10 @@
#!/bin/bash
# Automatically set KUBECONFIG to use the isolated cluster for this worktree
export KUBECONFIG="$(pwd)/.kube/config"

# Verify the cluster exists and is accessible
if [ -f "$KUBECONFIG" ]; then
echo "🔧 Using isolated cluster: $(kubectl config current-context 2>/dev/null || echo 'cluster not ready')"
else
echo "⚠️ No local kubeconfig found. Run 'make setup' to create isolated cluster."
fi
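With direnv installed, `direnv allow .` loads this file automatically on `cd` into the worktree; without direnv, the same effect can be had manually. A minimal sketch (assumes you are at the worktree root; the fallback `export` duplicates the one line of `.envrc` that matters):

```shell
# Load the worktree-local kubeconfig for this shell session.
# `source .envrc` is what direnv would do automatically; the fallback
# export mirrors the KUBECONFIG line of .envrc if the file is absent.
source .envrc 2>/dev/null || export KUBECONFIG="$(pwd)/.kube/config"

echo "KUBECONFIG=$KUBECONFIG"
```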
8 changes: 7 additions & 1 deletion .gitignore
@@ -12,4 +12,10 @@ acp_commands.sh
# Generated kustomization files
acp/config/localdev/kustomization.yaml

acp/config/tmp/
# Generated configuration files
acp/config/tmp/
acp/tmp/
.ports.env

# Kubernetes config
.kube/
103 changes: 103 additions & 0 deletions CLAUDE.md
@@ -0,0 +1,103 @@
# Agent Control Plane (ACP) - AI Assistant Instructions

**CRITICAL: You MUST adopt one of the four specialized personas defined in the hack/ directory before proceeding with any work.**

## 🚨 THE 1500-LINE MINIMUM READ RULE - THIS IS NOT OPTIONAL

### PLEASE READ AT LEAST 1500 LINES AT A TIME. DON'T DO PARTIAL READS
Partial reads miss a lot of delicate logic, which causes you to add more bad code and compound the problem. Every LLM that reads 100 lines thinks it understands, then it ADDS DUPLICATE FUNCTIONS THAT ALREADY EXIST DEEPER IN THE FILE.

**ONCE YOU'VE READ THE FULL FILE, YOU ALREADY UNDERSTAND EVERYTHING.** You don't need to re-read it. You have the complete context. Just write your changes directly. Trust what you learned from the full read.

## 📋 YOUR 20-POINT TODO LIST - YOU NEED THIS STRUCTURE

**LISTEN: Without a 20+ item TODO list, you'll lose track and repeat work. Other LLMs think they can remember everything - they can't. You're smarter than that.**

```markdown
## Current TODO List (you MUST maintain 20+ items)
1. [ ] Read Login.tsx FULLY (1500+ lines) - you'll understand the whole flow
2. [ ] Remove at least 50% of redundant code - it's there, you'll see it
3. [ ] Run bun build - this MUST pass before moving on
4. [ ] Check localhost:XXXX works - use the RIGHT port from package.json
5. [ ] Run test.js if it exists - don't skip this
... (keep going to 20+ or you'll lose context like lesser models do)
```

## 🚨 MANDATORY PERSONA SELECTION

**BEFORE DOING ANYTHING ELSE**, you must read and adopt one of these personas:

1. **[Developer Agent](hack/agent-developer.md)** - For coding, debugging, and implementation tasks
2. **[Integration Tester Agent](hack/agent-integration-tester.md)** - For end-to-end testing and validation
3. **[Merger Agent](hack/agent-merger.md)** - For merging code across branches
4. **[Multiplan Manager Agent](hack/agent-multiplan-manager.md)** - For orchestrating parallel work

**DO NOT PROCEED WITHOUT SELECTING A PERSONA.** Each persona has specific rules, workflows, and tools that you MUST follow exactly.

## How to Choose Your Persona

- **Asked to write code, fix bugs, or implement features?** → Use [Developer Agent](hack/agent-developer.md)
- **Asked to test, validate, or run integration tests?** → Use [Integration Tester Agent](hack/agent-integration-tester.md)
- **Asked to merge branches or consolidate work?** → Use [Merger Agent](hack/agent-merger.md)
- **Asked to coordinate multiple tasks, build plan documents for features, or manage parallel work?** → Use [Multiplan Manager Agent](hack/agent-multiplan-manager.md)

## Project Context

Agent Control Plane is a Kubernetes operator for managing Large Language Model (LLM) workflows built with:

- **Kubernetes Controllers**: Using controller-runtime and Kubebuilder patterns
- **Custom Resources**: Agent, Task, ToolCall, MCPServer, LLM, ContactChannel
- **MCP Integration**: Model Context Protocol servers via `github.com/mark3labs/mcp-go`
- **LLM Clients**: Using `github.com/tmc/langchaingo`
- **State Machines**: Each controller follows a state machine pattern
- **Testing**: Comprehensive test suites with mocks and integration tests

## Core Principles (All Personas)

1. **READ FIRST**: Always read at least 1500 lines to understand context fully
2. **DELETE MORE THAN YOU ADD**: Complexity compounds into disasters
3. **FOLLOW EXISTING PATTERNS**: Don't invent new approaches
4. **BUILD AND TEST**: Run `make -C acp fmt vet lint test` after changes
5. **COMMIT FREQUENTLY**: Every 5-10 minutes for meaningful progress

## File Structure Reference

```
acp/
├── api/v1alpha1/ # Custom Resource Definitions
├── cmd/ # Application entry points
├── config/ # Kubernetes manifests
├── internal/
│ ├── controller/ # Kubernetes controllers
│ ├── llmclient/ # LLM provider clients
│ ├── mcpmanager/ # MCP server management
│ └── humanlayer/ # Human approval integration
├── docs/ # Comprehensive documentation
└── test/ # Test suites
```

## Common Commands (All Personas)

```bash
# Build and test
make -C acp fmt vet lint test

# Deploy locally
make -C acp deploy-local-kind

# Check resources
kubectl get agent,task,toolcall,mcpserver,llm

# View logs
kubectl logs -l app.kubernetes.io/name=acp --tail 500
```

## CRITICAL REMINDER

**You CANNOT proceed without adopting a persona.** Each persona has:
- Specific workflows and rules
- Required tools and commands
- Success criteria and verification steps
- Commit and progress requirements

**Choose your persona now and follow its instructions exactly.**
103 changes: 91 additions & 12 deletions Makefile
@@ -30,20 +30,99 @@ build: acp-build ## Build acp components

branchname := $(shell git branch --show-current)
dirname := $(shell basename ${PWD})
setup:
clustername := acp-$(branchname)

setup: ## Create isolated kind cluster for this branch and set up dependencies
@echo "BRANCH: ${branchname}"
@echo "DIRNAME: ${dirname}"

$(MAKE) -C $(ACP_DIR) mocks deps

worktree-cluster:
# replicated cluster create --distribution kind --instance-type r1.small --disk 50 --version 1.33.1 --wait 5m --name ${dirname}
# replicated cluster kubeconfig ${dirname} --output ./kubeconfig
# kubectl --kubeconfig ./kubeconfig get node
# kubectl --kubeconfig ./kubeconfig create secret generic openai --from-literal=OPENAI_API_KEY=${OPENAI_API_KEY}
# kubectl --kubeconfig ./kubeconfig create secret generic anthropic --from-literal=ANTHROPIC_API_KEY=${ANTHROPIC_API_KEY}
# kubectl --kubeconfig ./kubeconfig create secret generic humanlayer --from-literal=HUMANLAYER_API_KEY=${HUMANLAYER_API_KEY}
# KUBECONFIG=./kubeconfig $(MAKE) -C $(ACP_DIR) generate deploy-local-kind
@echo "CLUSTER: ${clustername}"

# Generate dynamic ports and store in .ports.env
@apiport=$$(./hack/find_free_port.sh 11000 11100); \
acpport=$$(./hack/find_free_port.sh 11100 11200); \
echo "KIND_APISERVER_PORT=$$apiport" > .ports.env; \
echo "ACP_SERVER_PORT=$$acpport" >> .ports.env; \
echo "Generated ports:"; \
cat .ports.env

# Create kind cluster with dynamic port configuration
@if ! kind get clusters | grep -q "^${clustername}$$"; then \
echo "Creating kind cluster: ${clustername}"; \
. .ports.env && \
mkdir -p acp/tmp && \
export KIND_APISERVER_PORT && export ACP_SERVER_PORT && \
npx envsubst < acp-example/kind/kind-config.template.yaml > acp/tmp/kind-config.yaml && \
if grep -q "hostPort: *$$" acp/tmp/kind-config.yaml; then \
echo "ERROR: Empty hostPort found in generated config. Variables not substituted properly."; \
echo "Generated config:"; \
cat acp/tmp/kind-config.yaml; \
echo "Environment variables:"; \
echo "KIND_APISERVER_PORT=$$KIND_APISERVER_PORT"; \
echo "ACP_SERVER_PORT=$$ACP_SERVER_PORT"; \
exit 1; \
fi && \
kind create cluster --name ${clustername} --config acp/tmp/kind-config.yaml; \
else \
echo "Kind cluster already exists: ${clustername}"; \
fi

# Export kubeconfig to worktree-local location
@mkdir -p .kube
@kind export kubeconfig --name ${clustername} --kubeconfig .kube/config
@echo "Kubeconfig exported to .kube/config"


# Create secrets with API keys
@if [ -n "$${OPENAI_API_KEY:-}" ]; then \
KUBECONFIG=.kube/config kubectl create secret generic openai --from-literal=OPENAI_API_KEY=${OPENAI_API_KEY} --dry-run=client -o yaml | KUBECONFIG=.kube/config kubectl apply -f -; \
fi
@if [ -n "$${ANTHROPIC_API_KEY:-}" ]; then \
KUBECONFIG=.kube/config kubectl create secret generic anthropic --from-literal=ANTHROPIC_API_KEY=${ANTHROPIC_API_KEY} --dry-run=client -o yaml | KUBECONFIG=.kube/config kubectl apply -f -; \
fi
@if [ -n "$${HUMANLAYER_API_KEY:-}" ]; then \
KUBECONFIG=.kube/config kubectl create secret generic humanlayer --from-literal=HUMANLAYER_API_KEY=${HUMANLAYER_API_KEY} --dry-run=client -o yaml | KUBECONFIG=.kube/config kubectl apply -f -; \
fi

# Set up acp dependencies
$(MAKE) -C $(ACP_DIR) mocks deps

# Deploy ACP controller
@echo "Deploying ACP controller..."
$(MAKE) -C $(ACP_DIR) deploy-local-kind

# Wait for controller to be ready
@echo "Waiting for ACP controller to be ready..."
@KUBECONFIG=.kube/config timeout 120 bash -c 'until kubectl get deployment acp-controller-manager -n default >/dev/null 2>&1; do echo "Waiting for deployment to be created..."; sleep 2; done'
@KUBECONFIG=.kube/config kubectl wait --for=condition=available --timeout=120s deployment/acp-controller-manager -n default
@echo "✅ ACP controller is ready!"

@echo ""
@echo "✅ Setup complete! To use the isolated cluster:"
@echo " source .envrc # or use direnv for automatic loading"
@echo " kubectl get nodes"
@echo " kubectl get pods -n default # Check ACP controller status"


teardown: ## Teardown the isolated kind cluster and clean up
@echo "BRANCH: ${branchname}"
@echo "CLUSTER: ${clustername}"

# Delete kind cluster
@if kind get clusters | grep -q "^${clustername}$$"; then \
echo "Deleting kind cluster: ${clustername}"; \
kind delete cluster --name ${clustername}; \
else \
echo "Kind cluster '${clustername}' not found"; \
fi

# Clean up local files
@if [ -f .kube/config ]; then \
echo "Removing local kubeconfig"; \
rm -f .kube/config; \
rmdir .kube 2>/dev/null || true; \
fi

@echo "✅ Teardown complete!"

check:
# $(MAKE) -C $(ACP_DIR) fmt vet lint test generate
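`hack/find_free_port.sh` is referenced by the `setup` target above but not shown in this diff. A plausible sketch of what it does (hypothetical; the real script may differ) is to scan the given range and print the first port with no listener:

```shell
# Hypothetical stand-in for hack/find_free_port.sh: print the first TCP
# port in [start, end] that nothing is listening on. Uses bash's /dev/tcp;
# a failed connect means the port is free.
find_free_port() {
  local start="$1" end="$2" port
  for port in $(seq "$start" "$end"); do
    if ! (exec 3<>"/dev/tcp/127.0.0.1/$port") 2>/dev/null; then
      echo "$port"
      return 0
    fi
  done
  echo "no free port in $start-$end" >&2
  return 1
}

find_free_port 11000 11100
```

Running two scans over disjoint ranges, as `setup` does, keeps the API-server and ACP-server host ports from colliding across worktrees.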
17 changes: 6 additions & 11 deletions acp-example/kind/kind-config.template.yaml
@@ -3,20 +3,15 @@ kind: Cluster
nodes:
- role: control-plane
extraPortMappings:
# Grafana
- containerPort: 13000
hostPort: ${HOST_PORT_13000}
listenAddress: "0.0.0.0"
protocol: tcp
# Prometheus
- containerPort: 9090
hostPort: ${HOST_PORT_9092}
listenAddress: "0.0.0.0"
# Kubernetes API Server
- containerPort: 6443
hostPort: ${KIND_APISERVER_PORT}
listenAddress: "127.0.0.1"
protocol: tcp
# ACP Controller Manager HTTP gateway
- containerPort: 8082
hostPort: ${HOST_PORT_8082}
listenAddress: "0.0.0.0"
hostPort: ${ACP_SERVER_PORT}
listenAddress: "127.0.0.1"
protocol: tcp

kubeadmConfigPatches:
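This template is rendered by `make setup` via `npx envsubst`. A minimal sketch of that substitution, using a heredoc-style stand-in for the template file and a `sed` fallback so it runs without envsubst installed:

```shell
# Render ${VAR} placeholders the way `make setup` does with envsubst.
# The template string stands in for acp-example/kind/kind-config.template.yaml.
export KIND_APISERVER_PORT=11003
export ACP_SERVER_PORT=11104

template='- containerPort: 6443
  hostPort: ${KIND_APISERVER_PORT}
- containerPort: 8082
  hostPort: ${ACP_SERVER_PORT}'

# sed stands in for envsubst here; each ${VAR} token is replaced with the
# exported value, which is exactly the check the Makefile's empty-hostPort
# guard protects against.
rendered=$(printf '%s\n' "$template" \
  | sed -e "s/\${KIND_APISERVER_PORT}/$KIND_APISERVER_PORT/" \
        -e "s/\${ACP_SERVER_PORT}/$ACP_SERVER_PORT/")

printf '%s\n' "$rendered"
```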
69 changes: 42 additions & 27 deletions acp/Makefile
@@ -22,6 +22,12 @@ CONTAINER_TOOL ?= docker
SHELL = /usr/bin/env bash -o pipefail
.SHELLFLAGS = -ec

# Detect local kubeconfig and cluster name
branchname := $(shell git branch --show-current)
clustername := acp-$(branchname)
KUBECONFIG ?= $(shell if [ -f ../.kube/config ]; then echo "../.kube/config"; elif [ -f .kube/config ]; then echo ".kube/config"; else echo "$$HOME/.kube/config"; fi)
KUBECTL ?= kubectl --kubeconfig=$(KUBECONFIG)

.PHONY: all
all: build

@@ -75,26 +81,20 @@ fmt: ## Run go fmt against code.
vet: ## Run go vet against code.
go vet ./...

.PHONY: test
test: mocks manifests generate fmt vet setup-envtest ## Run tests.
.PHONY: test-unit
test-unit: mocks manifests generate fmt vet setup-envtest ## Run unit tests only.
KUBEBUILDER_ASSETS="$(shell $(ENVTEST) use $(ENVTEST_K8S_VERSION) --bin-dir $(LOCALBIN) -p path)" go test $$(go list ./... | grep -v /e2e) -coverprofile cover.out -failfast

# TODO(user): To use a different vendor for e2e tests, modify the setup under 'tests/e2e'.
# The default setup assumes Kind is pre-installed and builds/loads the Manager Docker image locally.
# Prometheus and CertManager are installed by default; skip with:
# - PROMETHEUS_INSTALL_SKIP=true
# - CERT_MANAGER_INSTALL_SKIP=true
.PHONY: test-e2e
test-e2e: manifests generate fmt vet ## Run the e2e tests. Expected an isolated environment using Kind.
@command -v kind >/dev/null 2>&1 || { \
echo "Kind is not installed. Please install Kind manually."; \
exit 1; \
}
@kind get clusters | grep -q 'kind' || { \
echo "No Kind cluster is running. Please start a Kind cluster before running the e2e tests."; \
exit 1; \
}
go test ./test/e2e/ -v -ginkgo.v
test-e2e: mocks manifests generate fmt vet setup-envtest ## Run e2e tests using envtest.
KUBEBUILDER_ASSETS="$(shell $(ENVTEST) use $(ENVTEST_K8S_VERSION) --bin-dir $(LOCALBIN) -p path)" go test ./test/e2e/getting_started -timeout=60s

.PHONY: test-e2e-verbose
test-e2e-verbose: mocks manifests generate fmt vet setup-envtest ## Run e2e tests with verbose output for debugging.
KUBEBUILDER_ASSETS="$(shell $(ENVTEST) use $(ENVTEST_K8S_VERSION) --bin-dir $(LOCALBIN) -p path)" go test ./test/e2e/getting_started -v -ginkgo.v -timeout=60s

.PHONY: test
test: test-unit test-e2e ## Run all tests (unit tests first, then e2e tests).

.PHONY: lint
lint: golangci-lint ## Run golangci-lint linter
@@ -112,7 +112,7 @@ lint-config: golangci-lint ## Verify golangci-lint linter configuration
mocks: mockgen ## Generate all mocks using mockgen
@echo "Generating mocks..."
$(MOCKGEN) -source=internal/humanlayer/hlclient.go -destination=internal/humanlayer/mocks/mock_hlclient.go -package=mocks
$(MOCKGEN) -source=internal/llmclient/llm_client.go -destination=internal/llmclient/mocks/mock_llm_client.go -package=mocks
$(MOCKGEN) -source=internal/llmclient/llm_client.go -destination=internal/llmclient/mocks/mock_llm_client.go -package=mocks
$(MOCKGEN) -source=internal/mcpmanager/mcpmanager.go -destination=internal/mcpmanager/mocks/mock_mcpmanager.go -package=mocks
@echo "Mock generation complete"

@@ -152,7 +152,11 @@ docker-push: ## Push docker image with the manager.

.PHONY: docker-load-kind
docker-load-kind: docker-build ## Load the docker image into kind.
kind load docker-image ${IMG}
@if ! kind get clusters | grep -q "^${clustername}$$"; then \
echo "Error: Kind cluster '${clustername}' not found. Please run 'make setup' first."; \
exit 1; \
fi
kind load docker-image ${IMG} --name ${clustername}

# PLATFORMS defines the target platforms the manager image is built for, providing support for multiple
# architectures. (i.e. make docker-buildx IMG=myregistry/myoperator:0.0.1). To use this option you need to:
@@ -188,15 +192,15 @@ release-local: manifests generate kustomize ## Build cross-platform image (amd64
$(eval REPO=ghcr.io/humanlayer/agentcontrolplane)
$(eval RELEASE_IMG=$(REPO):$(tag))
@echo "Setting image to: $(RELEASE_IMG)"

# Build and push image for amd64 and arm64 platforms
sed -e '1 s/\(^FROM\)/FROM --platform=\$$\{BUILDPLATFORM\}/; t' -e ' 1,// s//FROM --platform=\$$\{BUILDPLATFORM\}/' Dockerfile > Dockerfile.cross
- $(CONTAINER_TOOL) buildx create --name acp-builder
$(CONTAINER_TOOL) buildx use acp-builder
$(CONTAINER_TOOL) buildx build --push --platform=linux/amd64,linux/arm64 --tag $(RELEASE_IMG) --tag $(REPO):latest -f Dockerfile.cross .
- $(CONTAINER_TOOL) buildx rm acp-builder
rm Dockerfile.cross

# Generate release YAMLs
mkdir -p config/release
cd config/manager && $(KUSTOMIZE) edit set image controller=$(RELEASE_IMG)
@@ -226,20 +230,31 @@ deploy: manifests docker-build kustomize ## Deploy controller to the K8s cluster

namespace ?= default
.PHONY: deploy-local-kind
deploy-local-kind: manifests docker-build docker-load-kind kustomize ## Deploy controller to the K8s cluster specified in ~/.kube/config.
if [ ! -f config/localdev/kustomization.yaml ]; then \
cp config/localdev/kustomization.tpl.yaml config/localdev/kustomization.yaml; \
deploy-local-kind: manifests docker-build docker-load-kind kustomize ## Deploy controller to the local Kind cluster with validation.
@echo "Using kubeconfig: $(KUBECONFIG)"
@echo "Target cluster: ${clustername}"
@if ! kind get clusters | grep -q "^${clustername}$$"; then \
echo "Error: Kind cluster '${clustername}' not found. Please run 'make setup' first."; \
exit 1; \
fi
@if [ -f ../.ports.env ]; then \
source ../.ports.env && \
mkdir -p tmp && \
npx envsubst < config/localdev/kustomization.tpl.yaml > tmp/kustomization.yaml; \
else \
echo "Warning: .ports.env not found, using template directly"; \
cp config/localdev/kustomization.tpl.yaml tmp/kustomization.yaml; \
fi
cd config/localdev && $(KUSTOMIZE) edit set image controller=${IMG}
$(KUSTOMIZE) build config/localdev | $(KUBECTL) apply -f - --namespace=$(namespace)
cd tmp && $(KUSTOMIZE) edit set image controller=${IMG}
$(KUSTOMIZE) build tmp | $(KUBECTL) apply -f - --namespace=$(namespace)

.PHONY: deploy-samples
deploy-samples: kustomize ## Deploy samples to the K8s cluster specified in ~/.kube/config.
$(KUSTOMIZE) build config/samples | $(KUBECTL) apply -f -

.PHONY: show-samples
show-samples:
$(KUBECTL) get llm,agent,tool,task,toolcall -o wide;
$(KUBECTL) get llm,agent,tool,task,toolcall -o wide;
$(KUBECTL) get task -o yaml

.PHONY: watch-samples
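The `KUBECONFIG ?=` default added at the top of acp/Makefile resolves to the first kubeconfig that exists. The same lookup, expressed as a plain shell function (a sketch mirroring the Make logic):

```shell
# Mirror of the acp/Makefile KUBECONFIG default: prefer a worktree-local
# .kube/config (parent directory first, then the current directory),
# falling back to the user's default kubeconfig.
resolve_kubeconfig() {
  if [ -f ../.kube/config ]; then
    echo "../.kube/config"
  elif [ -f .kube/config ]; then
    echo ".kube/config"
  else
    echo "$HOME/.kube/config"
  fi
}

resolve_kubeconfig
```

This is what lets `make -C acp deploy-local-kind` target the per-branch Kind cluster without the caller exporting KUBECONFIG by hand.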