Module 2: Flowy base training

Flowy is an advanced workflow management platform designed to streamline and automate business processes. It provides an efficient and systematic way to orchestrate and execute a wide range of tasks, from simple to complex, across various systems and applications. By creating workflows in Flowy, businesses can automate their processes, increase productivity, reduce errors, and gain better insights into their operations.

What is Flowy?

Flowy is more than a task automation tool; it is a complete process automation suite. At its core, Flowy provides a visual designer where you can create, manage, and run workflows. These workflows are designed to automate a wide variety of tasks, from data manipulation to complex business processes.

Flowy is platform-independent, meaning it can connect and interact with various external systems and applications, regardless of their underlying technology. It does this through the use of "steps", which are individual tasks or actions that a workflow performs.

Key Components

Here are some of the key components you'll be working with in Flowy:

  • Triggers: These are events that initiate a workflow. Triggers can be time-based (e.g., running a workflow every day at 5 PM), event-based (e.g., starting a workflow when a new file is uploaded), or manually triggered by a user.
  • Events: These are occurrences or changes in state that a workflow responds to or takes action upon. Events can be triggered internally within a workflow, or externally from other systems, users, or services.
  • Processes: These are the workflows themselves. A process is a sequence of steps designed to accomplish a specific task or series of tasks.
  • Credentials: These are used to securely store and manage access to external systems and services that your workflows interact with.
  • Steps: These are the individual tasks or actions that a workflow performs. Steps can include anything from sending an email, making an API call, manipulating data, interacting with databases, and more.
  • Variables: Variables are used to store and manipulate data within a workflow. They can hold anything from simple text and numbers to complex data structures like lists and dictionaries.

Why Use Flowy?

Flowy helps businesses automate their processes, freeing up valuable time and resources. With Flowy, you can:

  • Automate repetitive tasks: By automating repetitive tasks, you can free up time for more important and strategic activities.
  • Increase productivity: Automation helps you get more done in less time, increasing the overall productivity of your team.
  • Reduce errors: Automation reduces the risk of human error, ensuring that tasks are performed accurately every time.
  • Gain insights: Flowy provides detailed logs and analytics, giving you insights into your processes and helping you identify areas for improvement.

In the upcoming chapters, we will dive deeper into the different components of Flowy, how to create workflows, and how to use variables and logs for debugging and performance optimization. Stay tuned!

Triggers

Triggers are a fundamental part of Flowy as they initiate processes. Flowy currently supports three types of triggers: Cron, Messaging, and REST. It's important to note that a process must be created before a trigger can be associated with it. Also, multiple triggers can point to the same process, providing flexibility in how and when processes are initiated.

Whenever a trigger fires, an event of the corresponding type is generated.

Cron Trigger

Cron triggers are time-based and can be configured to initiate processes at specified intervals. They can be set to run once a day or periodically throughout the day. You can define intervals in seconds, minutes, or hours.

For instance, a Cron trigger can be set to initiate a process every hour, every day at a specific time, or every Monday at 9 AM.

Messaging Trigger

Messaging triggers are used to start processes based on incoming messages. These messages can arrive via Kafka or JMS, making Messaging triggers useful for integrating with message-based systems or microservices architectures.

REST Trigger

REST triggers provide REST capabilities to Flowy, allowing processes to be started by HTTP requests. REST triggers can handle both authenticated and unauthenticated access, supporting both basic and session-based authentication methods.

REST triggers also allow for validation checks to be performed before a process is executed. These checks can be defined for path variables, query parameters, headers, and the request body itself.
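
As a minimal sketch, such a request could be sent from any HTTP client. The endpoint path, query parameter, and request body below are hypothetical placeholders rather than part of Flowy's fixed API; the x-auth-token header is covered in the Authentication chapter.

    // Hypothetical call to a REST trigger; URL, path, and parameters are placeholders.
    fetch("https://flowy.example.com/rest/orders?customerId=42", {
      method: "POST",
      headers: {
        "Content-Type": "application/json",
        "x-auth-token": "22222222-2222-2222-2222-222222222222" // session token from a prior login
      },
      body: JSON.stringify({ orderId: 1001 })
    }).then(res => console.log(res.status)); // requests failing validation are rejected before the process runs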

You can find more details about REST triggers in the Flowy documentation.

Events

In Flowy, the concept of events is closely tied to triggers. Every time a trigger fires, an event is generated. This ensures that every action that initiates a process is recorded, enhancing the traceability and auditability of the system.

There are various types of triggers in Flowy, and correspondingly, for every trigger type, there is a different event type. This diversity of event types accommodates the varied nature of triggers and ensures that each kind of initiation action is appropriately captured and documented.

Each event in Flowy has two essential components: an input and an output. The input contains the data that initiated the event, while the output is the result of the event. However, the output is optional, and not all events will have an output.

Events in Flowy are not just about successful process initiations. If something goes wrong during the firing of a trigger, the event will contain an exception and a stack trace. This ensures that any issues or errors are logged and can be reviewed for debugging or process improvement purposes.

One of the important features of events in Flowy is that they are persisted, meaning they are stored and can be retrieved for later analysis or audit. Events mark both the start and end point of each execution. This makes them a crucial tool for understanding the lifecycle of processes and workflows, offering insights into how and when different parts of the system are being activated.

Events as well as process and step logs can be reviewed through the user interface. These logs may be subject to automatic removal based on the corresponding configurations set at the process level. Therefore, it's recommended to periodically review these settings and adjust them according to your data retention requirements to ensure important information isn't inadvertently lost.

In summary, events in Flowy are an essential component of the system's functionality, providing a rich record of process initiations, successes, and failures. They enhance transparency, traceability, and the ability to debug and improve the system's performance over time.

Validations

In Flowy, validations ensure the integrity of the data processed by your system. They let you define specific constraints and conditions for your data, such as data type, value ranges, and length requirements.

The primary purpose of Flowy's validations is to enforce data integrity by confirming that the input data aligns with the specified rules. It is equally important to note what validations do not do: they are not designed to verify permissions or data access rights. While Flowy's validations can, for example, confirm the correct data format and length for a user ID field, they do not ascertain whether the user has the necessary permissions to modify that account. The latter is the responsibility of the application's authorization or access control system.

Flowy's validations support an array of data types, including:

  • Boolean
  • Double
  • Float
  • Integer
  • Long
  • Number
  • String

Validation rules

Depending on the data type, it is possible to specify additional rules, such as minimum and maximum values for Float/Integer/Number. Fields can be marked as required, irrespective of their data type.

One or more validations can also be combined to streamline the validation process. In this case, all of the combined rules are checked.
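
To make this concrete, a single validation rule might combine a data type with a range and a required flag. The structure below is only an illustrative sketch expressed as a JavaScript object; the property names are assumptions, not Flowy's actual configuration format.

    // Illustrative sketch only - the property names are assumptions, not Flowy's schema.
    const quantityValidation = {
      field: "quantity",
      type: "Integer",   // one of the supported data types
      required: true,    // mandatory regardless of data type
      min: 1,            // minimum value (available for Float/Integer/Number)
      max: 1000          // maximum value
    };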

Complex data types

Flowy's validations support inheritance-like structures using the Object data type. This proves useful when handling intricate data schemas, where certain validation rules are shared across multiple data inputs.

As an illustration, consider a basic validation rule checking for "name" and "email". This rule can be perceived as a "base" rule and might be used for various entities in your application, such as users, companies, or contacts.

Now, let's assume we have another entity – "employee" – which possesses all the attributes of a "user" (name and email), but includes additional fields like "employeeId". Instead of formulating a new validation rule, you can create an "employee" validation rule and reference the "user" validation rule as an "Object". Then, you can simply add the additional rules for "employeeId".

This usage of the "Object" data type facilitates the reutilization and extension of existing validation rules, resulting in less redundancy and easier maintenance. This aspect of Flowy's validations allows developers to devise more effective, structured, and adaptable validation schemas.
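
Sketched in the same illustrative notation (again an assumption, not Flowy's actual schema), the user/employee example could look like this:

    // Illustrative sketch of rule reuse via the Object data type; property names are assumptions.
    const userValidation = {
      name:  { type: "String", required: true },
      email: { type: "String", required: true }
    };

    const employeeValidation = {
      user:       { type: "Object", rules: userValidation }, // reference the base "user" rule
      employeeId: { type: "String", required: true }         // additional field for employees
    };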

Validations can be configured for REST triggers as well as for processes in general.

You can find more details about validations in the Flowy documentation.

Credential Management in Flowy

Flowy treats credentials, required for accessing various systems or instances, as a distinct object type. This design choice offers several advantages in terms of security, functionality, and system integrity.

Importance of Separating Credentials

By separating credentials from other types of data, Flowy provides a clear demarcation between sensitive and non-sensitive data, significantly reducing the risk of accidental exposure of credentials. This approach also simplifies the process of managing and changing credentials, making it more efficient and less prone to errors.

Flowy allows for optional encryption of credential values, offering users an extra layer of security for their sensitive data. Importantly, Flowy does not provide version control for credentials as a safety measure to prevent the potential risk of exposing previous versions of sensitive data.

During the export to a module, specific properties and their values are intentionally omitted for security reasons. This helps prevent accidental leaks of sensitive information during the export process.

Flowy supports a range of credential types to facilitate integration with various systems and platforms.

Steps

In Flowy, Steps are fundamental building blocks that allow for the quick and effective configuration of workflows. Each step in Flowy, regardless of its type, possesses a set of basic properties:

  • Step ID: This is an automatically generated ID that is uniquely assigned to each step within a process.
  • Name: This property defines the name of the step.
  • Visual ID: This unique ID is assigned only by the Flowy front-end, and it facilitates a clearer visualization of the process.
  • Enabled: This property indicates whether a step is active or not. Only active steps are processed during execution.
  • Cache Storage: This setting defines if cache should be persisted. It's important to note that this setting depends on the same setting at the process level.

Now, let's take a closer look at the types of steps that Flowy supports out of the box:

  • Code Steps: These steps enable the creation of manual code. You can write in JavaScript or Groovy.
  • Control Steps: These steps are used to control the flow of the process. Examples of control steps include 'For Each', 'While', and 'Unset Variable'.
  • Input and Output Steps: These steps provide reading and writing capabilities. For example, JDBC and REST steps fall into this category.
  • Plugin Steps: These steps allow the use of custom or Hub-shared plugins in your workflow.

Each type of step provides a different function and can be used in different scenarios. For each step type, Flowy provides a table that outlines when cache and/or scripting can be used. This can help you understand the potential use cases and limitations of each step.

In the following chapters, we will dive deeper into each step type, exploring their specific properties, use cases, and potential exceptions that may occur during their execution.

You can find details on each and every step in the Flowy documentation.

Code Steps

Code Steps in Flowy refer to the implementation of custom logic within the Flowy workflow, allowing for a high degree of flexibility and customization. These steps can be written in either JavaScript or Groovy, providing developers with the option to use a language they are comfortable with.

While the exact nature and purpose of the code in a Code Step will depend on the requirements of the specific workflow being developed, the presence of Code Steps provides a flexible way for developers to implement their own custom functionality as part of a Flowy workflow. This could include anything from complex calculations to interactions with external systems or APIs, depending on what is needed.

Creating a code step in Flowy generally involves defining the step, specifying the language it will use (JavaScript or Groovy), and then writing the custom code that should be executed when the step is run.
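
A minimal JavaScript code step might look like the following. The $. variable syntax matches the convention used elsewhere in this module (see the Fibonacci example later on); the variable names themselves are hypothetical.

    // Inside a JavaScript code step: read, transform, and store process variables.
    // $.orderTotal and $.discountedTotal are hypothetical variable names.
    if ($.orderTotal > 100) {
      $.discountedTotal = $.orderTotal * 0.9;  // apply a 10% discount
    } else {
      $.discountedTotal = $.orderTotal;
    }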

This level of flexibility and customization is what makes Flowy a powerful tool for defining and managing complex workflows, allowing developers to precisely control the behavior of their workflows and implement custom functionality as needed.

Please note that while Code Steps provide a lot of flexibility, they should be used judiciously as the complexity of the workflow can increase with the use of custom code, potentially making it harder to maintain and debug. As always, best practices of software development such as code modularity, readability, and good commenting habits should be maintained.

Please refer to the official Flowy documentation or contact Flowy support for more current and detailed information on creating and using Code Steps within Flowy.

Control Steps

Control steps are crucial components within Flowy that enable you to manage the flow of your process execution. These steps can be considered the "steering wheel" of your workflows, allowing you to navigate the execution path according to your specific needs. There are four primary types of control steps: For Each, While, Unset Variable, and Switch.

For Each

The For Each control step is essentially a loop that operates on a collection of items. This step can be used to execute a series of operations for every item in a specified list. For instance, if you have a list of customer orders and you want to process each order individually, you would use the For Each step.

While

The While control step is another loop construct but operates based on a condition rather than a collection. The loop will continue executing as long as the specified condition remains true. This is useful when you need to perform repetitive tasks until a certain state is reached, such as retrying a failed network request until it succeeds.
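
For the retry scenario above, the While condition could be a short JavaScript expression over process variables. The variable names are hypothetical:

    // While condition (JavaScript): keep looping while the request has not yet succeeded
    // and fewer than 5 attempts have been made. Both variables are hypothetical.
    !$.requestSucceeded && $.attempts < 5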

Unset Variable

The Unset Variable control step allows you to remove a variable from the process context. This is particularly useful when you want to free up memory or ensure that old data doesn't affect subsequent steps in the workflow. By using this step, you can maintain a clean and efficient process execution environment.

Switch

In Flowy, the "Switch" step serves as a powerful tool in controlling the flow of your workflows. It essentially works as a decision-making entity that chooses the next step of execution based on the evaluation of certain conditions defined by the user. This allows you to create dynamic, condition-based workflows that can adapt to different situations.

The "Switch" step evaluates a condition, and based on whether the condition is true or false selects the branche to proceed with the execution. This mirrors the functionality of a typical "switch" statement in many programming languages, making it an invaluable control step in building complex workflows.

In Flowy, an explicit "If/Else" step has not been implemented. This design decision was made because the functionality of an "If/Else" construct is inherently covered by the "Switch" step. The "Switch" step in Flowy evaluates a condition and directs the workflow to different branches based on the result, effectively recreating the decision-making utility of an "If/Else" construct. This approach keeps the workflow steps streamlined while still providing robust control over the workflow's path.

Please note that these control steps are a part of the core functionality provided by Flowy. However, the specific usage and configuration details may vary depending on your workflow design and the complexity of the tasks you are automating.

It's worth noting that the conditions for control steps can be specified in either JavaScript or Groovy. This flexibility allows users to choose the language they are most comfortable with when setting up their workflow conditions. Whether you're implementing a "Switch" step or any other control step, you can craft your conditions in the programming language that best suits your needs and proficiency. This further enhances the adaptability and user-friendliness of Flowy, making it a tool that can be tailored to a wide variety of workflow management requirements.

Input and Output Steps

Input and Output steps are an integral part of the Flowy workflow automation software. They provide the crucial functionality of reading from or writing to other systems, thus enabling interaction between Flowy and a wide array of external resources.

"Input" refers to the process of reading data from an external system into Flowy, while "Output" refers to writing data from Flowy to an external system. Together, these steps enable workflows to not only process internal data but also interact with external systems to fetch or send data.

Flowy's Input and Output steps support interaction with both relational and non-SQL databases. This makes it possible to fetch data from a database (input), process it within Flowy, and then store the results back into a database (output).

Relational databases, such as MySQL or PostgreSQL, are organized into tables, while non-SQL databases, such as MongoDB or Document DB, offer more flexible data structures. Flowy's Input and Output steps are designed to work seamlessly with both types of databases, offering broad compatibility with various data storage solutions.

In addition to databases, Flowy also allows for interaction with RESTful APIs and message-based systems. RESTful APIs are a common method for systems to communicate over the internet, and Flowy's Input and Output steps can send requests to these APIs to fetch or send data.

Similarly, message-based systems like Kafka or RabbitMQ are used for asynchronous communication between systems. Flowy can consume messages from these systems (input) or produce messages to them (output), allowing for robust, asynchronous data flow within your workflows.

Beyond conventional data sources and destinations, Flowy also considers the creation of a PDF as an Output step. After processing data, sometimes it is necessary to present it in a structured, human-readable format. PDFs are widely used for this purpose due to their portability and compatibility. With Flowy, you can design workflows that not only process data but also generate PDF reports, invoices, receipts, or any other types of documents as part of the output.

Plugin Steps

Plugin Steps form a significant part of the Flowy process automation ecosystem, offering users the capability to extend the functionalities provided by Flowy.

One of the key strengths of Flowy is its expandability, and Plugin Steps play an important role in this aspect. Through the use of Plugin Steps, you can integrate additional functionalities and tools into your Flowy processes. Whether it is a custom-built solution to address a unique business problem or a widely-used software that you want to include in your workflow, Plugin Steps make it possible.

Creating a Plugin Step involves writing custom code or using pre-built plugins available in the Hub. Plugins can be written in Java.

Once a Plugin Step is created, it can be integrated into a workflow like any other step. You can specify the order of execution and how the Plugin Step interacts with other steps in the workflow. It's also possible to conditionally execute Plugin Steps based on the results of previous steps.

Plugin Steps can provide a way to integrate with third-party systems, APIs, or software that your business relies on. This can greatly increase the capabilities of your Flowy processes, allowing you to automate even more tasks and improve productivity.

Variables and Cache

Flowy offers a dynamic and effective way to manage data during the execution of a process through variables and cache. This mechanism allows for seamless data transfer, simple use in various steps, and efficient debugging and analysis. Here's an overview of how variables and cache work in Flowy.

Variables

Variables in Flowy are identified by a prefix of $. followed by the variable name (e.g., $.myVariable). These are some key points about Flowy variables:

  • Declaration: Unlike many programming languages, variables in Flowy do not need to be declared upfront. However, initializing them can be beneficial in many scenarios to ensure they have a defined state when first used.
  • Usage: Variables can be used directly in many steps, such as the "JavaScript" and "Groovy" steps. This makes it easy to pass data between different parts of a process.
  • Persistence: Once defined, variables exist for the duration of the process execution. They might get persisted when a child process is executed and then loaded from the cache when the execution continues.
  • Unsetting Variables: Variables can be unset using the appropriate step, which can help manage memory and control the data's lifecycle during a process.
  • Passing Variables to Child Processes: Variables can be passed on to child processes, providing a way to maintain state and share data across multiple processes. If needed, variables can also be used to capture and pass back return values from child processes.

Variables can be overwritten at any point in time. This provides the flexibility to adapt to changing conditions or needs within a process. Moreover, variables can change data types as needed, allowing them to store various types of information ranging from simple integers to more complex objects.
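
A short JavaScript step illustrates these points: a variable does not need to be declared, it can be overwritten at any time, and it can even change its data type. The variable name is illustrative only.

    // First use: no upfront declaration is needed.
    $.counter = 1;

    // Overwrite at any point in time.
    $.counter = $.counter + 1;

    // The same variable may later hold a different data type, e.g. a list.
    $.counter = [1, 2, 3];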

Cache

In Flowy, the cache serves as a store for all variables after each step execution. This automatic persistence is beneficial for debugging and analysis since it allows you to examine the state of all variables at each step. However, considering the workload, it might be advisable to disable this feature for production use. Here are the key points regarding Flowy's cache:

  • Persistence: By default, all variables, which form the cache, are persisted after each executed step. This feature provides a way to track the state of your data throughout the process.
  • Debugging and Analysis: The automatic persistence of variables aids in debugging and analysis. If an error occurs during the process, you can check the state of all variables at each step to help identify the issue.
  • Performance Considerations: Depending on the workload, persisting the cache after each step might affect performance. Therefore, you might consider disabling this feature in production environments.
  • Accessibility: The cache (and all of its variables) is fully accessible to all steps of a process.

These features of variables and cache in Flowy offer powerful tools for managing data within and across processes. They provide flexibility and control, aiding in everything from simple tasks to complex workflows.

Implementing a Workflow with Flowy

The development of a workflow using Flowy typically follows a sequence of stages that allow for a progressive, iterative construction of functionality. This approach not only promotes quick attainment of usable results, but also ensures thorough testing of both the workflow and its deployment process.

Step 1: Creating a Process

Start by creating a process. At this initial stage, you only need to implement a small part of the functionality that will be included in the final workflow. This allows for the gradual building and testing of features, and makes it easier to identify and rectify any issues that may arise.

Step 2: Creating Triggers

After setting up the initial process, proceed to create triggers. For testing purposes, it's practical to create a cron trigger and manually activate it as necessary. Alternatively, you can set up a REST trigger and activate it using tools like curl, Insomnia REST, or Postman. These methods allow you to verify that the triggers are working correctly before they are integrated into the full workflow.

Step 3: Creating Additional Objects

As you continue to develop your workflow, create additional necessary objects such as credentials, templates, translations, and validations. These elements should be created and updated alongside the relevant triggers and processes. This allows the workflow to be dynamically updated and expanded, while keeping the different components in sync.

Step 4: Test Workflow

Once your workflow is set up, it's crucial to test it to ensure it is functioning as expected. This involves running both positive and negative test cases to cover all possible outcomes.

Positive test cases should verify that the workflow correctly performs its intended functions under expected conditions. For instance, if you have a workflow that retrieves data from a database, a positive test case might involve confirming that the workflow correctly retrieves and formats the data when it is available in the database.

Negative test cases, on the other hand, should confirm that the workflow appropriately handles errors or unexpected conditions. In the previous example, a negative test case might involve ensuring that the workflow correctly handles scenarios where the requested data is not available in the database.

It's also important to consider the performance and scalability of your workflow. Depending on your needs, you may need to verify whether your workflow can handle a large volume of tasks or run for an extended period of time. You may also need to confirm that multiple instances of the workflow can run concurrently without causing errors or slowdowns.

In addition, consider any other requirements specific to your use case. For instance, you may need to check whether processes can run asynchronously, what the maximum duration of a process can be, and how many instances of a process can run simultaneously.

Step 5: Review Permission Model

After the functionality of your workflow has been successfully tested, it's time to review the permissions associated with your workflow objects. It's important to ensure that all objects have only the necessary permissions to function correctly and securely.

Flowy makes it easy to adjust permissions for all types of objects using the mass modification function. This function allows you to quickly and easily change permissions for multiple objects at once, saving you the time and effort of adjusting permissions individually.

Remember, the principle of least privilege should guide your permission model. Each object should have the minimum permissions necessary to perform its function. This helps to minimize the potential damage if an object is compromised, and it can also help to prevent accidental changes or deletions.

Step 6: Add Objects to Module

To simplify the maintenance and organization of your workflows, it's recommended to group related objects into modules. A module is a logical grouping of objects that are related or work together to achieve a specific function. Grouping objects into modules can make it easier to understand and manage your workflows, especially as they grow more complex.

Adding objects to a module in Flowy is straightforward. Simply navigate to the module where you want to add the objects, and then select the objects you want to add. Once the objects are added, they will be logically grouped together in the module, making them easier to find and manage.

Remember, good organization is key to maintaining and scaling your workflows effectively. Regularly review your modules to ensure they are organized logically and efficiently.

Configuration Data and Business Logic

In Flowy, there's a clear distinction between configuration data and business logic. This separation allows workflows to be easily moved through different environments (like development, testing, and production) while being continually tested.

Configuration data typically includes:

  • Credentials: Information used to authenticate or authorize access to certain steps or processes.
  • Settings: Configurable parameters that can affect the behavior or outcome of a process.

Business logic, on the other hand, is implemented with:

  • Triggers: Actions or events that start the execution of a process.
  • Processes: The sequences of steps that make up a workflow.
  • Templates: Reusable sets of steps or processes.
  • Translations: Localization data to support different languages within a process.
  • Validations: Rules or conditions that a process or step must meet.
  • Libraries: Collections of external routines that a program can use. These are particularly useful for code reuse and encapsulation.
  • Plugins: Custom or pre-built extensions to enhance or add functionality to a process.
  • Modules: Groups of related functionality, often encapsulating certain business logic.

By differentiating between these two types of data, Flowy enables the easy rollout of existing workflows to new environments, as it's primarily a matter of configuration. This approach ensures that by the time workflows reach the production environment, they have undergone thorough testing, and their deployment and support processes have been vetted.

Logs

When working with workflow processes, logs are essential to understanding what's happening under the hood. They provide detailed insight into each stage of the process, from the overall process execution to the individual steps and events. In this chapter, we will discuss the three types of logs that are crucial for monitoring and troubleshooting workflow processes: Process logs, Step logs, and Event logs.

Process Logs

Process logs are the simplest form of logs. They provide a broad overview of the workflow execution. Specifically, these logs offer information about the event that was processed, the start and end time of the process, and the instance on which the process was executed.

The purpose of process logs is to give you a high-level understanding of the process execution. You can quickly identify when and where the process was executed, and get a brief summary of the processed event. This can be particularly useful in distributed systems where multiple instances may be running different parts of the process.

Step Logs

Step logs are more detailed and are generated during the execution of a process for each step included in the process. These logs provide insights about the precise outcome of each step, the state of the cache at the time of execution, debug information, and any exceptions that may have occurred.

For instance, in the case of a JDBC step, the step log would contain the statement that was executed. If something went wrong during the execution of the step, the step log would provide information about the exception that occurred.

It's important to note that it's possible to enable or disable step logs. Depending on your needs, you may choose to limit the volume of logs by disabling step logs for certain parts of the process, or you may enable them to get a more granular view of the process execution.

Event Logs

Event logs offer detailed information about the events in a process. This includes information about the outcome of the event, its timing, and the instance it ran on.

Event logs are particularly useful in understanding the sequence of events in the process and identifying any issues or bottlenecks. They allow for a thorough examination of each event, making them a valuable tool for monitoring and debugging.

In conclusion, logs are an indispensable tool for monitoring and troubleshooting workflow processes. By understanding and utilizing Process, Step, and Event logs, you can gain a comprehensive understanding of your workflows, and effectively identify and resolve any issues that may arise.

Debugging processes

Debugging a process in Flowy requires an understanding of the execution flow, data handling, and the interpretation of logs. Here's a step-by-step guide on how to effectively debug a process.

Triggering a Process

Before you can begin debugging, you need to trigger the process. The simplest method to do this is through a manually fired cron trigger. This initiates the execution of the process, providing you with the necessary data for your debugging session.

Understanding the Process Logic

Before starting the debugging process, it's crucial to understand the logic of the process you are about to execute. Some processes may have critical operations, such as data deletion, which if not handled properly can lead to data loss. If the process involves such operations, it's recommended to backup the data before initiating the process. This measure provides a safety net in case of unexpected outcomes.

Reviewing the Logs

After the execution completes, you can review the outcome of the process as well as each intermediate step through the different log types. Logs provide vital information about the execution flow and the status of each step. They can shed light on the precise outcome of each step, including the cache at the time of execution, and any exceptions that may have occurred.

Using the Debug View

The most convenient way to debug is to open the process or event logs and switch to the debug view. This view displays the actual process along with the actual execution data. This feature provides an easy way to see which step succeeded and which failed. By reviewing the detailed logs, you can analyze if the logic is working as expected.

The debug view allows you to step through the process in a visual manner, providing insights into the data flow, step outcomes, and possible bottlenecks or errors. By correlating this information with your understanding of the process logic, you can identify any discrepancies and rectify them.

Remember, debugging is an iterative process. You may have to go through these steps several times before you can pinpoint and resolve all issues in a process. By methodically analyzing the logs and understanding the process, you can make the debugging process efficient and effective.

Creation of workflows

Creating workflows in Flowy involves defining sequences of steps and the desired end state. Flowy then takes care of executing the sequence in the defined order, while providing robust error monitoring and handling, load balancing, and automatic logging of actions for auditing purposes.

To illustrate, let's consider an example where we are managing orders in an online shop.

  1. Defining the Steps: Each step in Flowy represents an action or decision point. In the context of our online shop, the steps could be:
  • Check if the product is in stock
  • Calculate the total cost
  • Process the payment
  • Send a confirmation email to the customer

Each of these steps is defined within Flowy, with input and output parameters as needed, and the logic or rules for executing the step. For example, the "Check if the product is in stock" step might need the product ID as an input and would output a boolean indicating whether or not the product is in stock.

  2. Connecting the Steps: Once the steps are defined, you link them together in the order they should be executed. The output of one step typically becomes the input of the next, creating a sequential flow of data. If a step involves making a decision, such as "If the product is in stock", you can define different subsequent steps based on the decision outcome.
  3. Testing and Execution: Once you've defined and connected your steps, you can test the workflow to ensure it behaves as expected. Flowy provides robust error handling and monitoring, so if a step fails during execution, you'll be notified and can take corrective action. Once you're satisfied with the workflow, you can set it to run automatically based on triggers or a schedule.
  4. Flexibility and Reusability: One of the main advantages of Flowy is that you can create flexible and reusable processes. Each step is treated as an independent function that can be reused in different workflows. This means you can easily modify the logical connections between steps to meet the requirements of changing business processes.
  5. Monitoring and Metrics: Flowy provides audit logs, fault tolerance, and capacity & load distribution. It also offers KPIs & metrics to help you monitor the performance of your workflows and make necessary adjustments. This allows you to maintain transparency and control over your processes, ensuring they are efficient and effective.

Remember that Flowy is a data-driven software platform, so each step and workflow you create should be designed with data collection and usage in mind. The more effectively you can use data to drive your workflows, the more value you'll get out of Flowy.

Authentication

Flowy provides a diverse range of authentication methods to securely access the processing service. Depending on the requirements and preferences, users can choose to authenticate using their API key, username and password, or Basic Authentication. This section will provide an in-depth discussion of these methods.

Logging in Using an API Key

An API key is a unique identifier that is associated with the user's account. It is a convenient way to authenticate your requests in Flowy. The process of logging in using the API key is straightforward.

To log in, users need to send a POST request to <restTriggerURL>/login with the following headers and body:

Content-Type: application/json
{"email":"youraddress@example.com","apiKey":"11111111-1111-1111-1111-111111111111"}

In a successful login attempt, the server will respond with the following message:

{"status":"SUCCESS","statusCode":"OK",
"statusText":"OK",
"data":{"eventId":123456,"body":{"token":"22222222-2222-2222-2222-222222222222"}}}

This response contains a token that should be used for authenticating subsequent requests that require authentication. The token should be sent in the request header in the following format: x-auth-token: 22222222-2222-2222-2222-222222222222.
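
Putting this together, a client could log in once and reuse the token for later calls. The sketch below assumes the response shape shown above; the base URL is a placeholder for your actual <restTriggerURL>.

    // Hypothetical client-side login flow; the base URL stands in for <restTriggerURL>.
    async function loginAndCall() {
      const loginResponse = await fetch("https://flowy.example.com/rest/login", {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify({
          email: "youraddress@example.com",
          apiKey: "11111111-1111-1111-1111-111111111111"
        })
      });
      const login = await loginResponse.json();
      const token = login.data.body.token; // token as in the sample response above

      // Reuse the token in the x-auth-token header for subsequent authenticated requests.
      return fetch("https://flowy.example.com/rest/orders", {
        method: "GET",
        headers: { "x-auth-token": token }
      });
    }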

Logging in Using Basic Authentication

Flowy also supports Basic Authentication for accessing REST triggers. However, this setting is disabled by default on the REST trigger and must be enabled manually. This feature is mainly intended for backward compatibility, and its use is generally not recommended due to the relative insecurity of Basic Authentication as compared to other methods.

Logging in Using Username and Password

For users who prefer traditional methods of authentication or for processes that utilize Flowy's built-in user management, Flowy provides the option to log in using a username and password.

To do this, send a POST request to <restTriggerURL>/login-by-password and include the email and password in the request. If the login is successful, the server will return a token that can be used for subsequent requests.
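
The request body presumably mirrors the API-key login, with a password in place of the API key. The exact field names are an assumption here; consult the Flowy documentation to confirm them.

Content-Type: application/json
{"email":"youraddress@example.com","password":"your-password"}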

Each of these authentication methods offers a different level of convenience and security, enabling users to choose the one that best fits their needs. As always, when dealing with authentication, it's crucial to keep your credentials secure to prevent unauthorized access.

Roles and permissions

Flowy offers a robust and flexible system for managing roles and permissions, allowing for granular control over user interactions with various objects within the system. Permissions are categorized into three distinct levels:

  • Use: This permission level allows for the usage of an object. It's particularly useful in scenarios where operational teams manage credentials, while developers and support teams merely utilize them.

  • View: This level encompasses all the capabilities provided by the 'Use' permission, but also extends read-only access to the object, allowing users to view its characteristics.

  • Edit: The 'Edit' permission provides all the functionalities of the 'View' level, but it also gives the capability to modify object characteristics. This is the highest level of permission available.

These three permission levels determine how users can interact with Flowy objects. It's important to note that for cron and messaging triggers, the permissions do not directly impact execution. However, for REST triggers, the roles assigned to the trigger are essential and dictate when the trigger can run. To link a trigger to a process, at least the 'Use' permission for that particular process is required.

Flowy also includes a set of default roles, which often end in _CREATOR, _DELETER, or _ADMIN. These roles regulate the permissions users have within the admin interface. However, users and roles can also function outside of the admin user interface, providing flexibility in user management.

Two special virtual roles are also part of Flowy's permission system:

  • AUTHENTICATED: This role is automatically assigned to any authenticated user. It cannot be directly assigned to user accounts.

  • UNAUTHENTICATED: This is the default role for all others who do not fall under the AUTHENTICATED category. Like the former, it cannot be directly assigned to user accounts.

These virtual roles provide the necessary capabilities to enable public access to triggers, if desired. This flexible system of roles and permissions allows Flowy to meet a wide range of security and access requirements, ensuring that users have the appropriate level of access to perform their tasks while maintaining the integrity and security of the system.

You can find more details about permissions in the Flowy documentation.

Versioning in Flowy

In the context of Flowy, versioning is an essential feature that automatically tracks changes made to objects, providing a history of modifications over time. This powerful feature facilitates traceability, accountability, and can aid in understanding the evolution of a process or object within the Flowy environment. It's important to note that all Flowy objects are subject to version control, with the notable exception of credentials. This is a security measure designed to ensure the protection of sensitive data.

Object Version History

Every change to a Flowy object creates a new version, which is stored in the object's history tab. This means that every time you modify an object, Flowy keeps track of what was changed, thereby creating a comprehensive and chronological archive of every version of the object. These historical entries can provide a valuable reference when you want to compare current and past configurations, revert to previous versions, or simply understand the evolution of an object over time.

History Retention Policy

By default, Flowy retains all historical entries indefinitely. While this can be beneficial in development environments where frequent changes and reversions may occur, it could potentially lead to large amounts of data being stored in a production environment. To manage storage space and maintain system performance, Flowy provides a configurable history cleaning option.

When history cleaning is enabled, Flowy purges old entries according to the specified parameters. It's a good practice to enable this feature in production environments, where maintaining a lean and efficient system is often more critical than preserving an exhaustive history of every object. It's important to consider your organization's specific needs and workflows when setting up your history cleaning parameters.

Handling Deleted Objects

Flowy's versioning system also covers deleted objects. When an object is deleted, it's not immediately expunged from the system. Instead, Flowy retains a record of the deleted object, allowing it to be restored if necessary. You can access and restore these deleted objects through the "Admin / History" function of the admin UI. This feature provides a safety net against accidental deletions and facilitates recovery in case of errors or unforeseen issues.

In conclusion, Flowy's versioning system is a robust feature that enhances transparency, accountability, and control in the management of objects within the platform. By offering automatic version control, indefinite history retention (with configurable cleaning options), and the ability to recover deleted objects, Flowy provides users with a flexible and reliable system to manage their workflows over time.

Performance and Scalability Considerations

Performance and scalability are critical components when configuring any system. There are several process level configurations which can directly influence these two parameters. Monitoring and adjusting these settings throughout the lifetime of the processes can optimize system performance and enhance scalability.

Simultaneous Executions

Two significant parameters are the maximum simultaneous executions per instance and the overall maximum simultaneous executions.

  • Max Simultaneous Executions Per Instance: This configuration defines the maximum number of active processes that can run on a per-instance basis. For instance, setting this value to 2 allows the process to run twice simultaneously on the same instance. The value set here must not exceed the overall max simultaneous executions.
  • Overall Max Simultaneous Executions: This setting, on the other hand, determines the maximum number of active processes allowed across all connected instances. A value of 3, for example, means that a specific process can run up to three times across all active instances. This value must be equal to or higher than the maximum simultaneous executions per instance.

Both of these settings are critical for maintaining optimal system load and avoiding process congestion.

Cache Storage and Logging

The settings for cache storage and logging can also have an impact on performance.

  • Cache Storage: This configuration determines whether cache should be persisted or not. If this setting is disabled, it takes precedence over the same setting at the step level. When enabled, cache persistence only occurs for steps that also have this setting enabled. Please note that cache is only persisted when logging is enabled.
  • Logging: This setting decides whether logs should be stored persistently. If it's disabled, it overrules the same setting on the step level. When enabled, log persistence only occurs for steps with the same setting enabled.

It is important to note that these settings should be disabled in high-performance and production environments for optimal performance. However, they should be enabled in development environments and during troubleshooting to aid in identifying and fixing potential issues.

Time to Live (TTL)

TTL for logs and errors are also significant considerations in terms of performance and scalability.

  • TTL for Logs: This setting defines the maximum duration that logs should be retained. This applies to events as well as process and step logs. A value of 0 disables the cleaning process, meaning logs are kept indefinitely. The cleaning process is indiscriminate and affects all logs, regardless of their type.
  • TTL for Errors: This configuration determines the maximum duration that errors should be kept. It affects events as well as process and step logs. A value of 0 halts the cleaning process, although error logs will still be cleaned when their TTL matches the value set for logs. The cleaning process is specific to error logs in this case. The set value must be equal to or lower than the log TTL.

Monitoring errors and setting a TTL that allows operation units to periodically review them is strongly recommended. It can be an effective tool to identify recurring issues and improve system performance and scalability over time.

In summary, performance and scalability are crucial to the efficient running of any system. It is important to regularly monitor and adjust settings in response to changes in system workload and requirements. By doing so, system performance can be optimized and the system can scale effectively when needed.

Practical Application of Training

This chapter provides an illustration of how the knowledge gained from the training can be applied in a practical context.

Fibonacci Numbers

Fibonacci numbers form a sequence where each number is the sum of the two preceding ones. It starts from 0 and 1 and extends infinitely. For instance, the first ten Fibonacci numbers are: 0, 1, 1, 2, 3, 5, 8, 13, 21, and 34. Let's see how to implement this sequence with Flowy.

Define the Fibonacci Sequence Process

First, you need to define the process that will generate the Fibonacci sequence. This process will utilize Flowy variables, which will store the start value and the amount of numbers to be generated, passed from the REST trigger.

In this process, you will define a loop that runs for the number of times specified by the amount variable. In each iteration of the loop, the process will calculate the next number in the Fibonacci sequence and append it to an array. The two most recent numbers in the sequence will be stored in variables, allowing the process to calculate the next number in the sequence by adding these two variables together.

To implement the Fibonacci sequence:

  1. Start by defining two variables, $.var1 and $.var2, initializing them to 0 and 1 respectively (using a JavaScript step).

  2. Extend the initialization step by adding a variable $.arrayOfFiboNums and setting it to new Array().

  3. Create a While step, which will continue until it has calculated the desired number in the Fibonacci sequence. For the initial run, limit this to 100. You can use $.var2 <= 100 as the condition.

  4. In the loop, create a temporary variable (e.g., $.tmpVar) to store the value of $.var2 (using a JavaScript step).

  5. Set $.var2 to the sum of $.var1 and $.var2 (once again, using a JavaScript step).

  6. Then, set $.var1 to the value of the temporary variable.

  7. Append the value of $.var1 to the result list: $.arrayOfFiboNums.add($.var1).

  8. Finally, below the loop, set - using a JavaScript step - the following values:

    $.restResponse.statusCode = "OK";
    $.restResponse.body.numbers = $.arrayOfFiboNums;
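
For reference, the logic of the individual JavaScript steps above roughly combines into the following sketch; in Flowy it is split across the initialization step, the While step, and the steps inside the loop.

    // Consolidated sketch of the process logic (spread over several steps in Flowy).
    $.var1 = 0;
    $.var2 = 1;
    $.arrayOfFiboNums = new Array();

    while ($.var2 <= 100) {            // While step condition
      $.tmpVar = $.var2;               // remember the current value
      $.var2 = $.var1 + $.var2;        // next Fibonacci number
      $.var1 = $.tmpVar;
      $.arrayOfFiboNums.push($.var1);  // .add() in the step above; push() in plain JavaScript
    }

    $.restResponse.statusCode = "OK";
    $.restResponse.body.numbers = $.arrayOfFiboNums;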

REST trigger

A REST trigger is the second Flowy object you'll be dealing with. This object will allow your application to start a process when an HTTP request is made to a specified URL endpoint. You can use the tools provided in Flowy to create a REST trigger that responds to GET requests.

The endpoint could look something like this: /fibonacci.

Test the process

Open the REST trigger, click on "Copy URL" and paste it into a new browser tab.

Possible extensions

Adjust the REST trigger to support start=0&amount=10. The start parameter will determine the first number in the Fibonacci sequence, and the amount parameter will define how many numbers in the sequence should be returned. Adjust the process accordingly.
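
One way to sketch the adjusted loop, assuming the trigger passes the query parameters into the process as $.start and $.amount (hypothetical variable names), could be:

    // Hypothetical adjustment: collect $.amount Fibonacci numbers, starting at the value $.start.
    $.var1 = 0;
    $.var2 = 1;
    $.arrayOfFiboNums = new Array();

    while ($.arrayOfFiboNums.length < $.amount) {
      if ($.var1 >= $.start) {          // skip numbers below the requested start value
        $.arrayOfFiboNums.push($.var1);
      }
      $.tmpVar = $.var2;
      $.var2 = $.var1 + $.var2;
      $.var1 = $.tmpVar;
    }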

Random Number Generator

A simple random number generator can be a good starting point to understand how to generate and manipulate variables in Flowy.

Daily Weather Updates

Develop a process that uses an external weather API to fetch the daily weather of a specified location and sends it to your email. This requires the use of a Cron trigger and the respective templates.

Currency Converter

Develop a workflow that takes an amount in one currency and converts it into another currency using current exchange rates. This will require integration with an external currency exchange rate API.

Word Counter

Create a process that counts the number of words in a given text. This can be a simple but effective way to understand how to manipulate and analyze data in Flowy.
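
The core of such a process could be a single JavaScript step along the following lines; the variable names are illustrative only.

    // Count words by splitting the input text on whitespace.
    $.wordCount = $.inputText.trim().split(/\s+/).filter(word => word.length > 0).length;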

Web Page Status Checker

Develop a simple process that checks the status of a given webpage (e.g., whether it's up or down) by sending an HTTP request and analyzing the response.