What is action run orchestration?
Written by Julianne Walker
Updated over 3 months ago

Understanding action run orchestration

In Tines, fairness matters when it comes to action runs: how do you ensure a single story doesn’t monopolize all of a stack’s compute power for action runs, starving other stories? This question applies to both single- and multi-tenant stacks, where all stories share action run computing capacity.

To address this, we use a concept called “fair orchestration,” which guarantees that each story on a stack receives a fair share of worker time to process action runs.

How does it work?

Each story has a bucket that is refilled with a set number of tokens every minute. Currently, the default is 3 million tokens, which represent 50 minutes’ worth of worker time per minute.

Note

  • 1 token = 1 millisecond

  • 1 second = 1,000 milliseconds

  • 3 million milliseconds = 50 minutes

  • Therefore, within a given minute, each story can have concurrent action runs using up to 50 minutes of worker time*

    • *We use the term “worker time” as we are referring to Sidekiq workers that process action runs, aka compute time.
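
The arithmetic in the note above can be checked directly (a quick worked example, not Tines code):

```python
MS_PER_TOKEN = 1                 # 1 token = 1 millisecond of worker time
tokens_per_minute = 3_000_000    # default bucket refill

worker_ms = tokens_per_minute * MS_PER_TOKEN
worker_minutes = worker_ms / 1000 / 60  # ms -> seconds -> minutes

print(worker_minutes)  # 50.0 minutes of worker time per minute
```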

As the actions in a story execute, tokens are deducted from its bucket. From the time an action run starts, 1,000 tokens are deducted from the story’s bucket every second until the run completes. This lets us account for action runs that are active (i.e., started but not completed) when determining how much worker time a story is using. If a story depletes all of its allocated tokens in a given minute, any new action runs it produces are enqueued until the next minute, when the bucket refills.
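
The token bucket described above can be sketched as follows. This is an illustrative model based solely on the numbers in this article; the class and method names (`StoryBucket`, `charge_active_run`, etc.) are hypothetical, not Tines internals.

```python
TOKENS_PER_MINUTE = 3_000_000       # default refill: 1 token = 1 ms of worker time
TOKENS_PER_SECOND_PER_RUN = 1_000   # each active run costs 1,000 tokens per second

class StoryBucket:
    """Per-story token bucket (hypothetical sketch of fair orchestration)."""

    def __init__(self):
        self.tokens = TOKENS_PER_MINUTE

    def refill(self):
        # Called once per minute: reset the story's allowance.
        self.tokens = TOKENS_PER_MINUTE

    def charge_active_run(self, seconds=1):
        # Deduct tokens for each second an action run remains active.
        self.tokens -= TOKENS_PER_SECOND_PER_RUN * seconds

    def can_start_run(self):
        # Once the bucket is empty, new runs wait for the next refill.
        return self.tokens > 0
```

Under this model, a story with 60 concurrent action runs each active for 50 seconds would consume 60 × 1,000 × 50 = 3,000,000 tokens, exhausting its minute's allowance.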

Autoscaling to keep up with the demand

In the Cloud, we ensure our workers auto-scale to match action run demand. Scaling is based on the percentage of workers currently available to process action runs on a stack: if that percentage falls below defined thresholds, workers are automatically scaled up. We do this preemptively so that action runs are not queued for too long.
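
The threshold-based scale-up described above might look like the following sketch. The specific threshold and growth step here are assumptions for illustration; the article does not state Tines' actual values.

```python
SCALE_UP_THRESHOLD = 0.20  # scale up when < 20% of workers are free (assumed value)
SCALE_STEP = 0.25          # grow the worker pool by 25% each time (assumed value)

def desired_worker_count(total_workers: int, busy_workers: int) -> int:
    """Return the target pool size given current utilization (hypothetical)."""
    available_fraction = (total_workers - busy_workers) / total_workers
    if available_fraction < SCALE_UP_THRESHOLD:
        # Scale up preemptively, before the action run queue backs up.
        return int(total_workers * (1 + SCALE_STEP))
    return total_workers
```

For example, with 100 workers of which 90 are busy, only 10% are available, so the pool would grow to 125; at 50 busy workers, no scaling occurs.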

Implementing fair orchestration in our action run logic ensures that all stories receive an equitable share of compute time and that action runs are processed consistently, with balanced worker allocation across the stack. Our goal is to get your action runs enqueued and started as swiftly as possible, always.
