Build Your Data and AI Infrastructure Agility with Mondrian Fabric

Mondrian Fabric is a suite of software tools that enables organizations and enterprises to build and optimize infrastructure for big data and AI. Mondrian Fabric is part of Mondrian Platform, the infrastructure and data platform for simplifying big data and AI, actively developed by Mondrian AI.

In the era of the data economy, infrastructure plays a pivotal role in building a competitive edge. Companies that can extract value from data effectively and at speed by leveraging their data infrastructure will lead their respective markets and set themselves apart from competitors.

Building data capability starts with the right strategy for data infrastructure, which is closely tied to infrastructure design. In recent years, cloud computing has gained popularity thanks to its promise of cost savings and on-demand scaling. With the traditional on-premises approach, companies must make an upfront investment in servers, networking, and other equipment.

Cloud computing eases this capital pressure by letting companies and organizations flexibly rent infrastructure from cloud providers. Flexibility here means less stringent contract requirements: it is common to see infrastructure contracts arranged on a daily basis, with rental fees metered hourly. This is a stark contrast to upfront investment, where hardware depreciation is calculated annually until the equipment is finally written off.

Virtualization and Containerization

The core technology that empowers cloud computing is virtualization. Traditionally, the operating system (OS) runs directly on top of the hardware, hence the terminology bare-metal deployment. With virtualization, a new piece of technology, referred to as a hypervisor, is introduced into the OS layering.

A hypervisor is placed on top of the host OS to abstract, or virtualize, the underlying physical compute resources (CPU, memory, disk, network). The hypervisor manages the deployment of virtual machines (VMs) on the host. Each virtual machine is allocated a slice of the physical resources in the form of virtual CPU/RAM/disk/network and runs its own OS, referred to as the guest OS, in an isolated environment. Applications and their dependent libraries are then installed on the VM instances. Adding a new application requires a full guest OS image to run in the VM, in addition to the application binary and libraries. This results in space overhead as well as slow startup times.

Containerization optimizes application deployment by enabling multiple isolated applications to run on the same OS. Each container consists of a packaged, self-contained, ready-to-deploy application and, where necessary, the dependent libraries and middleware needed to run it. Containerized applications share the OS with the host, which means less space is required to run each application. Application startup time is also much faster, since restarting the application does not mean restarting an OS. This is different from virtualization, where bringing up a new application instance means booting a full guest OS first.


Containerization technology makes it possible to break one big, monolithic application into smaller components, where each component provides one or more services to the others. In other words, the monolithic application is transformed into a suite of small services. Each service runs in its own process and communicates with its peers via lightweight mechanisms. Services can be rapidly invoked and mapped to whatever business function is required. They can also be deployed independently using automated deployment, which means services can be released more often and scaled independently of one another.
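As a concrete illustration of services talking over a lightweight mechanism, the sketch below runs a hypothetical "resize" service over plain HTTP using only the Python standard library. The service name, port, and JSON payload are invented for the example; a real deployment would run each side in its own container.

```python
# A minimal sketch of one service exposing an HTTP endpoint and a peer
# service invoking it. The "resize" service and port 8901 are illustrative.
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

class ResizeHandler(BaseHTTPRequestHandler):
    """A tiny stand-in for an image-resize service."""
    def do_GET(self):
        # A real service would resize an image; here we just answer with JSON.
        body = json.dumps({"service": "resize", "status": "ok"}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, fmt, *args):
        pass  # keep the demo output quiet

def start_resize_service(port=8901):
    """Run the service in a background thread and return the server handle."""
    server = HTTPServer(("127.0.0.1", port), ResizeHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server

def call_resize_service(port=8901):
    """A peer service invokes the endpoint with a plain HTTP call."""
    with urlopen(f"http://127.0.0.1:{port}/") as resp:
        return json.loads(resp.read())

if __name__ == "__main__":
    server = start_resize_service()
    print(call_resize_service())  # {'service': 'resize', 'status': 'ok'}
    server.shutdown()
```

Because the contract between the two sides is just an HTTP request, either side can be redeployed or scaled without the other noticing.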

Containerization and Microservices for Infrastructure Agility

The shift toward containerization is related to the drawbacks of monolithic applications. A monolithic application has all of its building blocks tightly coupled. When an issue is spotted in even a small part of the application, patching it requires the whole application to be shut down and restarted. While this can be acceptable for a PoC-level or non-mission-critical application, mission-critical applications operate under stringent SLAs and require minimal shutdowns or restarts. A better approach is to apply patches or maintenance only to the affected parts while the rest of the application keeps running and serving requests.

Let’s revisit this with a sample use case: an image-based object detection application. When such an application is built using the monolithic approach, the image upload module, image processing module, and image detection module all reside on one bare-metal or virtual machine server. If a bottleneck or performance issue is observed in the image upload module and a patch must be applied to fix it, the whole application has to be rebuilt and redeployed. Rebuilding and redeploying an application for one small change is hard to justify, especially if such small changes occur frequently.

This is where the container-based approach becomes a better fit. If the monolithic application is containerized, it can be broken down into independent services: an image upload service, an image processing service, and an image detection service. Each service cares only about its own business logic. Issues are isolated to a specific service, so fixing an issue results in redeploying only that service's container instead of the whole application. Redesigning an application as a set of loosely coupled, independent services is referred to as a microservice architecture.
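The decomposition above can be sketched in a few lines. The three classes below are hypothetical stand-ins for the upload, processing, and detection services; in production each would run in its own container and talk to its peers over the network rather than by direct function call.

```python
# A sketch of the object-detection example split into independent services.
# Class names and data are illustrative, not part of any real product.

class UploadService:
    def receive(self, raw_bytes):
        # Store the upload and hand back a reference for the next stage.
        return {"image_id": "img-001", "size": len(raw_bytes)}

class ProcessingService:
    def preprocess(self, upload):
        # e.g. resize/normalize; here we only annotate the record.
        return {**upload, "preprocessed": True}

class DetectionService:
    def detect(self, processed):
        # A real implementation would run a model; we return a stub result.
        return {**processed, "objects": ["person", "car"]}

def handle_request(raw_bytes, upload, processing, detection):
    """Each stage sees only its input contract, so any one service can be
    patched and redeployed without touching the other two."""
    return detection.detect(processing.preprocess(upload.receive(raw_bytes)))

result = handle_request(b"\x89PNG...", UploadService(),
                        ProcessingService(), DetectionService())
print(result["objects"])  # ['person', 'car']
```

Note that fixing a bug in, say, `ProcessingService` means swapping in a new implementation of that one class (container), while the call contract between stages stays unchanged.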

Container-based Application Deployment

A containerized application can be composed of multiple containers. These containers may reside on the same host or be distributed across a container cluster. The following figure depicts how the containers that compose a service can be distributed across the cluster.


The picture above can be explained as follows:

  • Each cluster consists of several host nodes. A host node can be a VM server or a bare-metal server.
  • Each host node can run several containers.
  • A service is built from one or more containers. Those containers can be located on the same host node or on different ones.
  • An application is built by composing one or several services.
  • Volumes are used by applications that require data persistence. Containers can mount volumes, and data stored in a volume persists even after the container is terminated.
  • Links allow two or more containers to connect and communicate.
  • External applications can also connect to a service or application running inside a container via a network port.
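The relationships in the bullet points above can be captured in a small data model. All names here (nodes, services, volumes) are illustrative and do not correspond to any particular orchestrator's API.

```python
# A toy model of the cluster concepts described above: nodes host
# containers, services span containers, and volumes outlive containers.
from dataclasses import dataclass, field

@dataclass
class Volume:
    name: str
    data: dict = field(default_factory=dict)  # persists past container life

@dataclass
class Container:
    name: str
    volumes: list = field(default_factory=list)  # mounted volumes
    links: list = field(default_factory=list)    # linked container names

@dataclass
class Node:
    name: str                                    # VM or bare-metal host
    containers: list = field(default_factory=list)

@dataclass
class Service:
    name: str
    containers: list = field(default_factory=list)  # may span several nodes

# One service whose two containers live on different host nodes,
# sharing a mounted volume and a link:
vol = Volume("images")
c1 = Container("upload-1", volumes=[vol])
c2 = Container("upload-2", volumes=[vol], links=["upload-1"])
node_a, node_b = Node("node-a", [c1]), Node("node-b", [c2])
upload_service = Service("image-upload", [c1, c2])

# Data written through one container stays in the volume even if that
# container is later terminated and replaced:
c1.volumes[0].data["img-001"] = b"..."
```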

One prominent issue with distributed container deployment is container and cluster management. The cluster management service needs to manage the deployment of distributed containerized applications and provide an API for operating the cluster, from the creation of containers, container sets, and services to other life-cycle functions. Challenges in building such a cluster management service include scalability, load balancing, service discovery and orchestration, and the transfer or migration of service deployments between clusters.
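One of the decisions such a cluster manager makes continuously is where to place each new container. The least-loaded placement rule below is a deliberately simplified stand-in; real schedulers also weigh CPU, memory, affinity rules, and other constraints.

```python
# A sketch of a container scheduler's placement step. The node and
# container names are hypothetical.

def place_container(nodes, container_name):
    """Assign the container to the node currently running the fewest
    containers, then return that node's name."""
    target = min(nodes, key=lambda name: len(nodes[name]))
    nodes[target].append(container_name)
    return target

# Current cluster state: node name -> list of running containers.
nodes = {"node-a": ["web-1", "web-2"], "node-b": ["db-1"], "node-c": []}
print(place_container(nodes, "web-3"))  # node-c (currently empty)
```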

Specific to load balancing, containerized applications are a natural fit: spinning up a containerized application takes much less time (a few seconds) than starting up an application on bare metal or in a VM.


Imagine a scenario where the same system providing the image API experiences a sudden spike in traffic. Without load balancing, the system is almost guaranteed to go out of service because it cannot keep up with the incoming traffic. To mitigate the disruption, another API service instance must be started so that traffic is split between two instances. If the application runs on bare metal, launching another instance takes a while: the environment has to be set up and dependency libraries installed before the application can start.

With a containerized application, the dependencies and environment are preconfigured in the container image. The container orchestrator simply launches a new container instance, and within a few seconds the system can handle twice the original traffic. This is a huge win, especially for systems that are expected to run with no shutdown during operation.
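The scale-out scenario can be sketched as a round-robin balancer that adds instances once the traffic per instance crosses a threshold. The instance names and the 100-requests-per-second capacity figure are assumptions for illustration; a real orchestrator would act on CPU or latency metrics rather than a raw request rate.

```python
# A sketch of on-demand scale-out behind a round-robin load balancer.
# Names and capacity numbers are illustrative.
import itertools

class ScalingBalancer:
    def __init__(self, max_rps_per_instance=100):
        self.max_rps = max_rps_per_instance
        self.instances = ["api-1"]  # start with a single container
        self._rr = itertools.cycle(self.instances)

    def scale_for(self, incoming_rps):
        """Launch enough container instances to absorb the traffic.
        Containers start in seconds, so this can happen on demand."""
        needed = max(1, -(-incoming_rps // self.max_rps))  # ceiling division
        while len(self.instances) < needed:
            self.instances.append(f"api-{len(self.instances) + 1}")
        self._rr = itertools.cycle(self.instances)  # refresh rotation

    def route(self):
        """Hand the next request to the next instance in rotation."""
        return next(self._rr)

lb = ScalingBalancer()
lb.scale_for(250)         # traffic spike: 250 req/s needs 3 instances
print(len(lb.instances))  # 3
print([lb.route() for _ in range(4)])  # ['api-1', 'api-2', 'api-3', 'api-1']
```

The key point mirrored from the text: because `scale_for` only has to "launch" a prebuilt image, capacity can follow traffic within seconds instead of the minutes or hours a bare-metal setup would need.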

Mondrian Fabric for Infrastructure Automation and Simplification

Mondrian Fabric is the core element of Mondrian Platform, which aims to automate and simplify big data and AI deployment for organizations and enterprises. The following picture depicts the building blocks of Mondrian Fabric and the overall Mondrian Platform.


As can be seen in the picture, the Mondrian Platform consists of three main parts:

  • Mondrian Fabric: big data and AI infrastructure automation and deployment tools. A set of tools to set up and configure a multi-user environment for 1) performing big data tasks, especially setting up compute clusters for data ETL, and 2) performing AI tasks, especially notebook-based AI model building and deployment on GPU servers.
  • Mondrian Registry: a private repository of proprietary container images that can be orchestrated to build cluster infrastructure for big data or computing infrastructure for AI.
  • Mondrian Cloud Connect: a software module and computing bus used to extend the infrastructure reach from intranet / on-premises servers to public clouds such as AWS, Azure, Google Cloud Engine, or others.

Big data and AI are often regarded as two different domains requiring separate expertise. In reality, the two are deeply interconnected. AI models need solid data sets so that accuracy and recall can be optimized. However, building a good data set often requires solid big data expertise, especially for transforming raw, unstructured data into a properly formatted dataset that can be used to build the AI model.

With Mondrian Fabric, the expertise needed to build the data infrastructure for experimenting with and performing big data and AI tasks is reduced to the convenient use of a UI-based tool for managing the various big data and AI components. This provides a quick start for organizations and companies considering their first move toward becoming an AI-powered entity.

Mondrian Platform targets organizations and enterprises that already have some servers or computing nodes, yet it is flexible enough to extend application and service deployment to the public cloud, or to combine servers in your own data center with the public cloud in a hybrid setup.

Contact our sales team to request an assessment of your current infrastructure, or to discuss building or transforming your infrastructure into next-generation data infrastructure with Mondrian Fabric.
