
ForestFlow

ForestFlow is a scalable, policy-based, cloud-native machine learning model server. ForestFlow strives to balance the flexibility it offers data scientists with the adoption of standards, reducing friction between Data Science, Engineering, and Operations teams.

ForestFlow is policy-based because we believe automation for Machine Learning/Deep Learning operations is critical to scaling human resources. ForestFlow lends itself well to workflows based on automatic retraining, version control, A/B testing, Canary Model deployments, Shadow testing, automatic time or performance-based model deprecation and time or performance-based model routing in real-time.

Our aim with ForestFlow is to give data scientists a simple way to deploy models to a production system with minimal friction, accelerating the path from development to production value.

To achieve these goals, ForestFlow addresses the proliferation of model-serving formats and inference API specifications by adopting open source frameworks, formats, and API specifications that we believe are, or are becoming, widely adopted. We do this in a pluggable way so that ForestFlow can continue to evolve as the industry and space mature and the need for additional support arises.


Overview

Why ForestFlow?

Continuous deployment and lifecycle management of Machine Learning/Deep Learning models is currently widely accepted as a primary bottleneck for gaining value out of ML projects.

We first set out to find a solution for deploying our own models. The model-server implementations we found were either proprietary, closed-source solutions or too limited for what we wanted to achieve. The main motivations for creating ForestFlow can be summarized as:

Model Deployment

For model deployment, ForestFlow supports models described via the MLflow Model format, which allows for different flavors, i.e., frameworks and storage formats.

ForestFlow also supports a basic REST API for model deployment that mimics the MLflow Model format but does not require it.
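As a rough sketch of what a REST deployment request might look like, the snippet below builds an MLflow-style model descriptor as JSON. The field names, flavor name, and endpoint shown here are illustrative assumptions, not ForestFlow's actual schema; consult the official docs for the real contract.

```python
import json

# Hypothetical deployment payload loosely mirroring the MLflow Model format.
# All field names and values below are illustrative assumptions only.
servable = {
    "path": "file:///models/my-model",   # where the model artifact lives
    "flavor": "h2o_mojo",                # hypothetical flavor identifier
    "servableSettings": {
        "policySettings": {
            "validityPolicy": {"immediatelyValid": {}}
        }
    },
}

payload = json.dumps(servable)
# This JSON body would then be POSTed to a ForestFlow deployment endpoint,
# e.g. something like:
#   curl -X POST http://forestflow-host:8090/<deployment-path> -d "$payload"
print(payload)
```

The point of the sketch is simply that deployment is a matter of describing the model artifact and its policies in JSON and sending one HTTP request, rather than packaging custom serving code.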

Inference

For inference, we’ve adopted a similar approach. ForestFlow provides two interfaces for maximum flexibility: a basic REST API in addition to standardizing on the GraphPipe API specification.

Relying on standards, for example GraphPipe’s specification, means immediate availability of client libraries in a variety of languages that already work with ForestFlow; see GraphPipe clients.
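To illustrate the basic REST inference path, the sketch below assembles a batch inference request as JSON. The `schema`/`datum` field names and the endpoint in the comment are assumptions for illustration, not ForestFlow's actual request schema.

```python
import json

# Hypothetical inference request for a deployed servable.
# "schema" and "datum" are illustrative field names, not the real API contract.
request = {
    "schema": ["feature_a", "feature_b", "feature_c"],
    "datum": [
        {"tensors": [1.5, 2.0, 0.25]},   # one inference row
        {"tensors": [0.1, 4.2, 3.75]},   # another row in the same batch
    ],
}

body = json.dumps(request)
# This body would be POSTed to the servable's predict endpoint, e.g.:
#   curl -X POST http://forestflow-host:8090/predict/<servable> -d "$body"
print(body)
```

Because requests and responses are plain JSON over HTTP, any language with an HTTP client can call the basic REST interface, while the GraphPipe interface additionally gives you the existing GraphPipe client libraries.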

Please visit the quickstart guide for a quick overview of setting up ForestFlow and an example of running inference. For a deeper dive, see the Inference documentation.

Currently Supported Model Formats

Go to the Quick Start Guide to get started then dive a little deeper and learn about ForestFlow Concepts and how you can tailor it to fit your own use-cases.

Contributing

While ForestFlow has already delivered tremendous value for us in production, it is still in the early phases of development: there are plenty of features planned, and it continues to evolve at a rapid pace. We appreciate, consistently make use of, and contribute back to open source projects. We realize the problems we’re facing aren’t unique to us, so we welcome feedback, ideas, and contributions from the community to help shape ForestFlow’s roadmap and implementation.

Check out the Contribution Guide for more details on contributing to ForestFlow.