Timeouts in Unqork


Timeouts occur when a process running on the server reaches a preset time limit. When that limit is reached, the process ends regardless of its status. Timeouts can lead to significant problems in your application and become a greater issue as the application’s scale increases.

When working in Unqork, you can prevent timeouts in a few ways. In a workflow, large data sets can be processed in batches, or a Timer node can be used to let a process run longer than the set timeout. Additionally, indexed MongoDB queries against a properly maintained database can also prevent timeouts.

Timeouts in Unqork

In Unqork, all server-side processes have a time limit. For example, an automated workflow running multiple processes stops if it exceeds its five-minute limit, and a server-side module execution also has a five-minute limit.

There are ways to avoid timeouts in each of these areas. The following sections describe them:

Workflow

There are two methods for ensuring an automated workflow does not time out. The first is to process potentially large data sets in smaller batches instead of a single large one. It's important to note that the overall automated workflow has a time limit of five minutes, while each module included in the workflow has an individual time limit of two minutes. So, you must manage the execution time of each module, along with the overall workflow.

In addition to batch processing, you can use a Timer node to ensure the process does not time out. The workflow resets the server-side process, and with it the time limit, every time it progresses to a Timer node. If you have a process that takes longer than five minutes, add a Timer node on a loop to keep the workflow running.

Data Workflows

There are a few ways to ensure your Data Workflows do not reach the two-minute timeout. The first is to separate large Data Workflows into smaller ones. That way, each part runs more quickly and has less of a chance of timing out. The next best practice is to be thoughtful about how you order the operations. If possible, connect Filter operators earlier in the Data Workflow so that later operators perform actions on smaller data sets. Like workflows, you can perform batch processes using a batch loop. Batch loops let you work on a smaller, more manageable data set.
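To make the ordering point concrete, here is a minimal TypeScript sketch, separate from any Unqork configuration, in which filtering first means every later step only touches the smaller, relevant subset. The record shape and field names are invented for the example:

// Hypothetical rows standing in for a Data Workflow's input table.
interface Item {
  status: string;
  amount: number;
}

// Filter early, as you would with a Filter operator placed near the start of
// a Data Workflow, so the reduce step only processes the remaining records.
function summarizeActive(items: Item[]): number {
  const active = items.filter((item) => item.status === "active");
  return active.reduce((total, item) => total + item.amount, 0);
}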


MongoDB Queries

While we recommend using queries over Data Workflows to improve performance, timeouts can still occur as submissions increase. There are two methods to ensure queries perform well. The first is to ensure the fields your queries filter and sort on are indexed. Doing so improves performance and decreases the likelihood of a timeout. Next, it’s important to clean your database. As submissions accumulate, performance decreases if unnecessary submissions are not removed. The most efficient way to remove these submissions is to build an automated workflow that deletes them.
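As a conceptual illustration of what indexing provides, the sketch below uses the official Node.js MongoDB driver to index the field a query sorts and filters on. Unqork manages its own database, so treat the connection string, database, collection, and field names as placeholders rather than something you run against an Unqork environment:

import { MongoClient } from "mongodb";

async function ensureIndex(): Promise<void> {
  // Placeholder connection string and names; illustrative only.
  const client = new MongoClient("mongodb://localhost:27017");
  await client.connect();

  const submissions = client.db("example").collection("submissions");

  // An index on the queried field lets MongoDB avoid a full collection scan,
  // which is what keeps the query fast as submissions accumulate.
  await submissions.createIndex({ created: 1 });

  await client.close();
}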

Bulk Processing Using a Workflow

In this example, you’ll learn how to use an automated workflow to complete bulk processes, like deleting or updating all submissions. We’ll review the necessary steps and focus on batching logic. Depending on the number of submissions you’ll process, there’s the potential that your workflow might time out. However, you can configure a Timer node to prevent timeout issues.

For this example, we’ll review the configuration of the Batch Process found here: https://training.unqork.io/app/designer#/display/module-builder?moduleId=64f22a1e926ee36c5c18f32f.

Open the module and make a copy of its definition to explore its configuration.

On Load Field Group Component

First, let's review the on-load logic of the module.

On Load section displaying schemaModuleId and sseModuleId with initOnLoad highlighted.

The first Field Group component contains the on-load logic. This component includes one Initializer and two Hidden components. These Hidden components determine the functionality of the batch processing. One component contains a schema ID as the source for the processed submissions. The other component contains a server-side execution ID that processes the submissions. The Initializer component uses the Edit Submission Trigger Type because it's part of a workflow, and it triggers a Plug-In component to retrieve the submissions.

Get and Update Submissions Field Group Component

Now, let’s review the logic necessary to retrieve the correct submissions for each batch.

Overview of submission management functions including current offset and submission counts.

The currentOffset Hidden component has a Default Value of 0. With each batch, it updates and uses the new value in the Get Submissions call. Doing so ensures it retrieves a new batch of submissions each time it runs. The Plug-In component making the call uses the following Data Source URL:

/fbu/uapi/modules/{{data.schemaModuleId}}/submissions?sort=created&offset={{data.currentOffset}}&dataFields=noFields&limit=20

The call uses the schema ID and a current offset that returns a batch of 20 submissions. Then, the Plug-In component triggers the dwfSetSubmissionsAndOffset Data Workflow using a post-trigger.

You can use the dataFields=noFields parameter to return submission IDs only.
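To show how the pagination behaves, here is a rough TypeScript sketch of the same call made outside the Plug-In component. The environment base URL, bearer token, and the assumption that the endpoint returns an array of objects with an _id field are placeholders for illustration; the path itself mirrors the Data Source URL above:

const BATCH_SIZE = 20;

// Mirrors the Plug-In component's Data Source URL, stepping the offset by the
// batch size to fetch each successive batch of submission IDs.
async function getBatch(schemaModuleId: string, offset: number): Promise<string[]> {
  const url =
    `https://your-environment.unqork.io/fbu/uapi/modules/${schemaModuleId}` +
    `/submissions?sort=created&offset=${offset}&dataFields=noFields&limit=${BATCH_SIZE}`;

  const response = await fetch(url, {
    headers: { Authorization: "Bearer <token>" }, // placeholder credentials
  });

  // With dataFields=noFields, each entry carries little more than its ID.
  const submissions: Array<{ _id: string }> = await response.json();
  return submissions.map((submission) => submission._id);
}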

Next, let’s review the various tasks the Field Group component's Data Workflow performs.

Data workflow diagram showing inputs, outputs, and processing steps for submissions.

First, the Data Workflow inputs the submissions returned by the plugGetSubs Plug-In component and sets the Default Value of the countOfSubmissions Hidden component. Next, the Data Workflow determines whether the number of submissions is greater than zero. If the value is greater than zero, the Data Workflow runs another batch. If it equals zero, it stops and outputs a value of 'no' to the runAgain Hidden component. Each retrieved submission includes its submission ID and an index, and the Data Workflow stores them in the submissions Hidden component. The offset of 20 ensures the Plug-In component retrieves the next batch of submissions when the loop repeats.
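Expressed as plain logic, the decision this Data Workflow makes looks something like the following sketch. The names mirror the Hidden components described above, and the hard-coded batch size of 20 matches the Plug-In component's limit parameter:

interface BatchState {
  currentOffset: number;
  countOfSubmissions: number;
  runAgain: "yes" | "no";
}

// After each Get Submissions call: record how many submissions came back,
// advance the offset by the batch size, and flag whether to loop again.
function afterGetSubmissions(state: BatchState, returnedCount: number): BatchState {
  return {
    currentOffset: state.currentOffset + 20,
    countOfSubmissions: returnedCount,
    runAgain: returnedCount > 0 ? "yes" : "no",
  };
}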

After the Data Workflow populates the countOfSubmissions and submissions Hidden components, it triggers another Data Workflow component. Open the configuration drawer of the dwfSplitSubs Data Workflow, and you’ll see that it appears to perform various complex tasks. However, the process is quite simple.

This Data Workflow takes in the submissions with their submission IDs and indexes. It filters on the index (order=0 for the first) to return a single submission, retrieves that submission's ID, and stores it in a Hidden component. Then, the Data Workflow triggers a Plug-In component. This process repeats 20 times, or as many times as there are submissions. Each submission ID is stored in its associated Hidden component, and the Data Workflow then triggers the matching Plug-In component.
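Conceptually, this fan-out step behaves like the sketch below: isolate the submission at each index and hand its ID to the matching per-Panel slot. The slot-assignment callback is a stand-in for setting the subToUpdateA, subToUpdateB, and later Hidden components:

interface BatchSubmission {
  id: string;
  index: number;
}

// Roughly what dwfSplitSubs does: take the submission at each index
// (order=0, order=1, ...) and assign its ID to the matching Panel's slot.
function splitBatch(
  submissions: BatchSubmission[],
  assignToSlot: (slot: number, submissionId: string) => void,
): void {
  submissions.forEach((submission) => {
    assignToSlot(submission.index, submission.id);
  });
}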

SSE Update Plugins Field Group Component

Like the previous Data Workflow, the logic in this Field Group component repeats 20 times. It consists of 20 Panel components, each containing an Initializer, a Plug-In, and two Hidden components. Each Panel component processes a single submission.

The image below displays a portion of the configuration in the SSE Update Plugins Field Group component:

Overview of SSE Update Plugins with various update statuses and flags for each module.

To explain the configuration in these Panel components, let's begin with the subToUpdateA Hidden component. The dwfSplitSubs Data Workflow sets the Hidden component's Default Value to the first submission ID. The Data Workflow also triggers the plugSubToUpdateA Plug-In component, which makes an Execute Module API call to the module referenced in the sseModuleId Hidden component. That module determines the process you want to complete. This Plug-In component also uses a post-trigger to trigger the initDoneUpdateA Initializer component. This Initializer component sets the flagDoneA Hidden component's Default Value to 1 and triggers the calcAllDone Calculator component. This logic repeats in each Panel component, increasing the calculated total until all submissions in the batch are processed.

Because the Data Workflow triggers each Plug-In component separately, they run asynchronously. This process improves the application's performance and efficiency. If, for some reason, the plugSubToUpdateA Plug-In component takes a long time to finish, the plugSubToUpdateB Plug-In component does not have to wait before it runs.

While you could create a larger batch with 100 or 1000 submissions, you risk overloading the server when you make that many calls simultaneously. Batches of 20 are a reliable and efficient size.
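To make the asynchronous fan-out concrete, here is a sketch of all 20 calls being fired without waiting on one another. In the module itself, each call is made by its own Plug-In component; the Execute Module endpoint path, request body, and credentials shown here are assumptions for illustration only:

// Assumed endpoint and payload shape for illustration; each Panel's Plug-In
// component performs the equivalent call against the module in sseModuleId.
async function executeForSubmission(sseModuleId: string, submissionId: string): Promise<void> {
  await fetch(`https://your-environment.unqork.io/fbu/uapi/modules/${sseModuleId}/execute`, {
    method: "POST",
    headers: {
      Authorization: "Bearer <token>", // placeholder credentials
      "Content-Type": "application/json",
    },
    body: JSON.stringify({ data: { submissionId } }), // assumed payload shape
  });
}

// Fire every call at once: a slow submission does not block the others, and
// the batch is finished when all of the calls have completed.
async function runBatch(sseModuleId: string, submissionIds: string[]): Promise<void> {
  await Promise.all(submissionIds.map((id) => executeForSubmission(sseModuleId, id)));
}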

Response & Retrigger Field Group Component

The final Field Group component contains a Calculator component and a Decisions component that control when the loop repeats or ends.

Workflow diagram illustrating various flags and conditions for process completion.

After each Plug-In component runs, the calcAllDone Calculator component determines whether the batch is finished. It checks each Panel's flagDone Hidden component and determines if the values add up to 20. If they do, it sets the allDoneFlag Hidden component’s Default Value to 1. If the value is not 20, it sets the Default Value to 0. When all the Plug-In components have finished, the ruleAllDone Decisions component triggers the dwfRunAgain Data Workflow component.

If the countOfSubmissions Hidden component’s value is 20, the Data Workflow sets the runAgain Hidden component’s value to ‘yes’. If the value is less than 20, no more submissions exist, and the runAgain Hidden component’s Default Value is set to ‘no’. Then, the workflow checks the runAgain Hidden component’s Default Value. If the value is 'yes', the batch process runs again. If the value is 'no’, the workflow ends.
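Put together, the end-of-batch checks behave like this sketch, with names mirroring the components described above and the batch size of 20 used throughout the module:

const BATCH_SIZE = 20;

// calcAllDone: the batch is complete once every Panel's flagDone value is 1,
// so the flags sum to the batch size.
function allDone(flagDoneValues: number[]): boolean {
  return flagDoneValues.reduce((sum, flag) => sum + flag, 0) === BATCH_SIZE;
}

// dwfRunAgain: a full batch means more submissions may remain, so loop again;
// a short batch means the source is exhausted, so the workflow can end.
function shouldRunAgain(countOfSubmissions: number): "yes" | "no" {
  return countOfSubmissions === BATCH_SIZE ? "yes" : "no";
}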