
Architecture

When we talk about the architecture of AlgoRun, there are two ways to see it. First, there is the architecture of an actual deployed pipeline, which governs how a live pipeline operates and manages its workload.
Second, there is the core system architecture: the composition of all the applications and services that Pipeline Deployments run on top of.

Pipeline Deployment Architecture

When a pipeline is deployed, all of the resources required to operate the pipeline are deployed into Kubernetes. The diagram below shows the logical flow of data and requests into the pipeline deployment.
Each pipeline deployment is made up of Endpoints, Kafka Topics, Algos, Data Connectors and Hooks. For more details on the operation of a pipeline, visit the Pipeline and Deployment sections of the documentation.
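
As a conceptual sketch of that composition (the class and field names below are hypothetical, not the operator's actual schema; see the Pipeline and Deployment docs for the real resource definitions), a pipeline deployment can be thought of as a bundle of these resource types:

```python
from dataclasses import dataclass, field

# Hypothetical model of the resources in a single pipeline deployment.
# Names are illustrative only.

@dataclass
class PipelineDeployment:
    name: str
    endpoints: list = field(default_factory=list)        # external entry points
    kafka_topics: list = field(default_factory=list)     # inter-component streams
    algos: list = field(default_factory=list)            # algorithm containers
    data_connectors: list = field(default_factory=list)  # source/sink integrations
    hooks: list = field(default_factory=list)            # event callbacks

    def resource_count(self) -> int:
        """Total resources Kubernetes manages for this deployment."""
        return (len(self.endpoints) + len(self.kafka_topics) + len(self.algos)
                + len(self.data_connectors) + len(self.hooks))

deployment = PipelineDeployment(
    name="image-classifier",
    endpoints=["https-ingest"],
    kafka_topics=["input", "classified"],
    algos=["resize", "classify"],
)
print(deployment.resource_count())  # 5
```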

System Architecture

The system architecture described below consists of the infrastructure components that form the core of the system. AlgoRun is composed of best-in-class open source and internal components, with the goal of providing an Enterprise level processing pipeline and workflow engine. Flexibility to handle any project, infinite scalability plus being cloud and programming language / cloud agnostic are the primary tenants of the system.

AlgoRun is built on Kubernetes, which is a natural fit for our cloud-agnostic approach. All of the leading cloud providers offer Kubernetes as a service, which simplifies the infrastructure management requirements for AlgoRun. For installing applications into Kubernetes, Helm has become the de facto open source standard package manager, and every component required to run AlgoRun has a Helm chart for easy configuration and deployment. Out of the box, AlgoRun ships with reasonable defaults to get up and running quickly, yet everything remains highly configurable within each Helm chart, so each component can be adjusted for very particular deployments.
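
To illustrate the Helm workflow (the chart reference, release name and value override below are assumptions for illustration, not the published chart coordinates; consult each chart's values.yaml for the real options), installing a component and overriding a default is a single command per chart:

```python
# Sketch of an AlgoRun component install via Helm.
# Chart path, release name and the override key are hypothetical.
cmd = [
    "helm", "upgrade", "--install", "algorun", "algorun/algorun",
    "--namespace", "algorun", "--create-namespace",
    "--set", "kafka.replicas=3",  # hypothetical default override
]
print(" ".join(cmd))
# On a cluster with Helm configured, run it with:
#   subprocess.run(cmd, check=True)
```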

The AlgoRun Technology Stack

Ingress Resources

The Ingress resources manage all of the external access to the services provided by AlgoRun. This includes the routing, traffic management and security features required to reliably expose the necessary services outside of the Kubernetes cluster. The primary ingress resource is Ambassador, an extremely fast edge proxy based on Envoy that is well suited to handle the majority of inbound traffic. Nginx is also used as an ingress in some scenarios, currently for video stream ingress over RTMP.

Why Ambassador?

There are quite a few Kubernetes ingress options available, and the list continues to grow. While Nginx could have fulfilled many of the needs, we selected Ambassador as the primary ingress for these reasons:

  • Ambassador is based on Envoy and is designed from the ground up for a microservices architecture.
  • The dynamic configuration capabilities were a requirement for deployment of new pipelines at runtime.
  • Support for gRPC in addition to HTTP.
  • Rate limiting, traffic shadowing and canary routing capabilities.

Ambassador (with its built-in Envoy proxy) turned out to be an excellent fit for all inbound HTTP and gRPC traffic. As the need arose for video stream processing, we found Nginx to be a better option due to its RTMP module. If a pipeline requires a media streaming ingress, Nginx is deployed instead of Ambassador.
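
The selection rule described above can be sketched as follows (the function name and endpoint labels are hypothetical; this is illustrative, not the operator's actual code):

```python
def select_ingress(endpoint_protocols: list[str]) -> str:
    """Pick the ingress for a pipeline deployment.

    Sketch of the rule described above: Nginx (with the RTMP module)
    for media-streaming pipelines, Ambassador for everything else.
    """
    media_protocols = {"rtmp"}
    if any(proto.lower() in media_protocols for proto in endpoint_protocols):
        return "nginx"
    return "ambassador"

print(select_ingress(["http", "grpc"]))  # ambassador
print(select_ingress(["rtmp"]))          # nginx
```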

Frontend Services

AlgoRun UI

The AlgoRun UI is a unified dashboard and portal for building, managing, deploying and monitoring your AI pipelines. The UI provides:

  • Your algorithm and model repository
  • A powerful visual pipeline builder
  • Deployment configuration management
  • Pipeline monitoring, alerting and log aggregation

AlgoRun Api

The AlgoRun Api is the primary gateway to all of the backend services. It provides the interfaces for:

  • Algorithm and model repository and synchronization with AlgoHub.com
  • Pipeline definitions and versioning
  • Pipeline deployment configuration
  • Pipeline deployment commands (deploy, terminate, update)

Grafana

Grafana is a powerful analytics and monitoring solution that integrates nicely with the Prometheus metrics server AlgoRun uses. AlgoRun's graphs and monitoring dashboards are built with Grafana.

SignalR

SignalR is the service that provides real-time updates via push notifications and WebSocket streaming. Subscribing to the SignalR channels gives the AlgoRun UI real-time alerts and streaming log entries.

Backend Services

Pipeline Operator

The Pipeline Operator is a Kubernetes Operator that manages all of the custom resources and configuration required by every application and service deployed for a Pipeline Deployment. It is the key automation component, in that it handles the deployment of Algos, Data Connectors, Kafka Topics, Endpoints and Hooks. The Pipeline Operator also provides key monitoring and log-management functionality.

MariaDB / PostgreSQL

Currently, a SQL database is required for the storage of the Algo repository and the Pipeline and Deployment configurations. Either MySQL (including its derivative MariaDB) or PostgreSQL is supported out of the box.

Prometheus

Prometheus is the open source monitoring and metric aggregation server for AlgoRun. With its integration coverage for every component in the system, it provides the metrics backbone for the entire platform.
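
To illustrate how components surface metrics to Prometheus (the metric names below are hypothetical, not AlgoRun's actual series), a scrape target simply serves plain-text samples in the Prometheus exposition format:

```python
def render_metrics(samples: dict[str, float]) -> str:
    """Render samples in the Prometheus text exposition format.

    Minimal sketch; real exporters (e.g. the official client libraries)
    also emit HELP/TYPE metadata and handle labels and escaping.
    """
    return "\n".join(f"{name} {value}" for name, value in samples.items()) + "\n"

# Hypothetical pipeline metrics a scrape might return:
print(render_metrics({
    "algo_runs_total": 42,
    "pipeline_queue_depth": 7,
}))
```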

Redis

Redis provides the caching layer for AlgoRun, which is currently used to enable SignalR scale-out for efficient real-time notifications. Redis is also used for the AlgoRun Api Server request caching.
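
The request caching follows the standard cache-aside pattern. A minimal sketch with an in-memory dictionary standing in for Redis (function names and the key scheme are hypothetical; the real service would use a Redis client with a TTL):

```python
cache: dict[str, str] = {}  # in-memory stand-in for Redis

def load_from_database(pipeline_id: str) -> str:
    """Placeholder for the slow path (a SQL query in the real system)."""
    return f"definition-of-{pipeline_id}"

def get_pipeline_definition(pipeline_id: str) -> str:
    """Cache-aside lookup: try the cache first, fall back to the database."""
    key = f"pipeline:{pipeline_id}"
    if key in cache:
        return cache[key]
    value = load_from_database(pipeline_id)
    cache[key] = value
    return value

print(get_pipeline_definition("demo"))  # loads from the database
print(get_pipeline_definition("demo"))  # second call served from cache
```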

Data Layer

MinIO

MinIO is a high performance object storage server that is the primary data repository for AlgoRun. It is fully S3 compatible, which enables broad integration capabilities while retaining the power and control of running your object storage inside the Kubernetes cluster.

Kafka

Kafka is the data streaming engine that powers the real-time data pipelines and inter-algorithm communication. In AlgoRun, algorithms and models receive their input data from Kafka topics and push their output to additional Kafka topics. This creates a backbone of continuous data flow between the stages of a pipeline.
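
Conceptually, each algo is a consume-transform-produce loop between topics, and chaining them forms the pipeline. A minimal in-memory sketch of that chaining (topic names and transforms are hypothetical; a real deployment would use a Kafka client library against the brokers):

```python
from collections import defaultdict

topics = defaultdict(list)  # in-memory stand-in for Kafka topics

def run_algo(input_topic: str, output_topic: str, transform):
    """Drain the input topic, apply the algo, publish to the output topic."""
    while topics[input_topic]:
        message = topics[input_topic].pop(0)
        topics[output_topic].append(transform(message))

# Two chained algos: the output topic of the first feeds the second.
topics["raw-images"].extend(["img-1", "img-2"])
run_algo("raw-images", "resized", lambda m: f"resized({m})")
run_algo("resized", "classified", lambda m: f"label({m})")
print(topics["classified"])  # ['label(resized(img-1))', 'label(resized(img-2))']
```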

Amazon S3

AlgoRun can utilize any S3 compatible object storage. If Amazon S3 itself is preferred, MinIO can be disabled and S3 can be used directly.
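
Because the storage API is S3-compatible either way, switching between MinIO and Amazon S3 is typically just a client-configuration change. A sketch (the bucket name and in-cluster endpoint are assumptions for illustration):

```python
def object_store_config(use_minio: bool) -> dict:
    """Client settings for the pipeline's object store.

    With MinIO the client points at the in-cluster service endpoint;
    with Amazon S3 the endpoint override is simply dropped and the
    client uses the default AWS endpoints. Values are illustrative.
    """
    config = {"bucket": "algorun-data"}
    if use_minio:
        config["endpoint_url"] = "http://minio.algorun.svc:9000"
    return config

print(object_store_config(True))   # in-cluster MinIO
print(object_store_config(False))  # Amazon S3 directly
```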
