
Conversation

@josegonzalez
Member

Closes #7988

@othercorey
Contributor

Should this fail instead of waiting for the lock? If the task has an issue, won't the schedule create a backlog of tasks that run one after the other? I'm not familiar enough with the scheduler wrapper to know if it does something else.

@josegonzalez
Member Author

Yeah I used an exclusive lock for docker-local. Kubernetes might start containers though if the old one finishes within the timeout (though I intend to increase the timeout to allow for better autoscaling).

@othercorey
Contributor

Is there anything I can do to help with this? I don't know how to create test cases for these, but I can try if needed.

@josegonzalez
Member Author

Yeah, the next thing is to add a test that does something like sleeping forever in a detached command with a made-up cron id, and then tries to run that twice. The second command should fail (and of course we should clean up the first one).

@othercorey
Contributor

othercorey commented Oct 21, 2025

I wrote a test for this, but I couldn't get it to fail. I noticed a few things:

  • The documentation says deny where it should say forbid
  • The stdout format for Concurrency is missing in CommandList()
  • trigger-scheduler-docker-local-scheduler-run sets concurrency_policy twice, so the conditional is immediately overwritten:
    if [[ -n "$DOKKU_CONCURRENCY_POLICY" ]]; then
      concurrency_policy="$DOKKU_CONCURRENCY_POLICY"
    fi
    concurrency_policy="$DOKKU_CONCURRENCY_POLICY"
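
For reference, a minimal sketch of the precedence the trigger presumably intends, with the trailing unconditional assignment dropped (the "allow" default here is an assumption, not the trigger's actual stored value):

```shell
#!/usr/bin/env bash
# Sketch: keep a stored per-app policy (assumed default "allow") unless
# the DOKKU_CONCURRENCY_POLICY environment variable overrides it.
# The second, unconditional assignment from the quoted trigger is removed.
concurrency_policy="allow"
if [[ -n "${DOKKU_CONCURRENCY_POLICY:-}" ]]; then
  concurrency_policy="$DOKKU_CONCURRENCY_POLICY"
fi
echo "$concurrency_policy"
```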

I don't know what's wrong, or why I can't seem to affect the test's outcome.

@othercorey
Contributor

I don't know what's missing in the test :(.

@josegonzalez
Member Author

I haven't taken a look yet, but we should probably docker inspect the generated container. When you run a detached app container, it outputs the new container id, so we can do something like:

container_id="$output"
run /bin/bash -c "docker container inspect $container_id"

That would help in figuring out if the container has the expected labels.
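
A small helper for that label check might look like the sketch below; the label key "dokku.cron-id" is an assumption for illustration, and the real key may differ:

```shell
#!/usr/bin/env bash
# Hypothetical helper: print a single label value from an inspected
# container, using docker's Go-template --format flag.
container_label() {
  local container_id="$1" key="$2"
  docker container inspect \
    --format "{{ index .Config.Labels \"${key}\" }}" "$container_id"
}
```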

Also, run the second app container with --trace mode enabled, so we can see if it's even hitting the logic or if the logic is just bad :)

@josegonzalez josegonzalez added this to the v0.37.0 milestone Nov 10, 2025
@josegonzalez josegonzalez mentioned this pull request Nov 10, 2025
@josegonzalez josegonzalez changed the base branch from master to 0.37-release November 10, 2025 07:02
@josegonzalez
Member Author

Looks like the concurrency policy attached is allow, so there is something wrong in how I retrieve the policy. I'll take a look tonight.

@josegonzalez
Member Author

Ah, the problem is that we release the lock almost immediately because the test uses --detach. I think we actually don't need a lock and can just check whether there is a running container with the same cron_id when the policy is forbid.
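
A lock-free check along those lines could be sketched as follows; the label key and the exit behavior are assumptions, not the actual implementation:

```shell
#!/usr/bin/env bash
# Sketch of the lock-free "forbid" check: before starting a new cron
# container, look for a running container that already carries the same
# cron-id label. The label key "dokku.cron-id" is an assumption.
cron_overlap_guard() {
  local cron_id="$1" policy="$2" existing
  existing="$(docker ps --quiet --filter "label=dokku.cron-id=${cron_id}")"
  if [[ "$policy" == "forbid" && -n "$existing" ]]; then
    echo "cron task ${cron_id} is already running (container ${existing})" >&2
    return 1
  fi
  return 0
}
```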

@josegonzalez
Member Author

There were a few small bugs but I think I've squashed them and the tests should pass now :)

@josegonzalez
Member Author

I also need to see if we can enforce the concurrency policy for cron:run when using the k3s scheduler. We might need to interact with the Kubernetes API for this and manually handle forbid and replace.
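
For the scheduled side, Kubernetes CronJobs (batch/v1) already expose this natively via spec.concurrencyPolicy, which accepts Allow (the default), Forbid, or Replace; one-off invocations would still need manual handling. A minimal fragment, with placeholder names:

```yaml
# Kubernetes CronJob fragment (batch/v1); names and image are placeholders.
apiVersion: batch/v1
kind: CronJob
metadata:
  name: example-cron
spec:
  schedule: "*/5 * * * *"
  concurrencyPolicy: Forbid   # Allow (default), Forbid, or Replace
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: Never
          containers:
            - name: task
              image: example/image
              command: ["/bin/sh", "-c", "echo hello"]
```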

@josegonzalez josegonzalez merged commit d9e5182 into 0.37-release Nov 15, 2025
181 of 183 checks passed
@josegonzalez josegonzalez deleted the 7988-prevent-overlap branch November 15, 2025 22:45

Successfully merging this pull request may close these issues.

Add support to prevent overlapping execution of tasks
