Serving AI/ML models in the open standard formats PMML and ONNX with both HTTP (REST API) and gRPC endpoints.
Table of Contents
- Features
- Prerequisites
- Installation
- PMML
- ONNX
- How to deploy and undeploy a PMML or ONNX model in AI-Serving
- REST APIs
- gRPC APIs
- Examples
- Support
- License
Features
AI-Serving is a flexible, high-performance inference system for machine learning and deep learning models, designed for production environments. Its key features include:
- Out-of-the-box support for PMML (Predictive Model Markup Language) and ONNX (Open Neural Network Exchange) models.
- HTTP (RESTful) and gRPC APIs for seamless integration.
- Support for both v1 and v2 APIs, with the v2 API fully compatible with the Open Inference Protocol.
- Dynamic batching for ONNX models to improve GPU utilization and increase throughput.
- Configurable request timeouts for better control in production.
- Model warm-up on startup to ensure low-latency inference.
Prerequisites
- Java >= 1.8
Installation
Install using Docker
The easiest and most straightforward way of using AI-Serving is with Docker images.
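For example, you can pull the published image and start a container (the same commands are used in the Examples section below):
docker pull autodeployai/ai-serving:latest
docker run --rm -it -v /opt/ai-serving:/opt/ai-serving -p 9090:9090 -p 9091:9091 autodeployai/ai-serving:latest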
Install from Source
Install SBT
The sbt build system is required. After sbt is installed, clone this repository, then change into the repository root directory:
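# Repository URL assumed from the Docker image name (autodeployai); adjust if you use a fork or mirror.
git clone https://github.com/autodeployai/ai-serving.git
cd ai-serving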
Build Assembly
AI-Serving depends on ONNX Runtime to support ONNX models, and the default CPU accelerator (OpenMP) is used for ONNX Runtime:
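sbt clean assembly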
Set the property -Dgpu=true to use the GPU accelerator (CUDA) for ONNX Runtime:
sbt -Dgpu=true clean assembly
Run set test in assembly := {} if you want to skip unit tests when generating an assembly jar:
sbt -Dgpu=true 'set test in assembly := {}' clean assembly
An assembly jar will be generated:
$REPO_ROOT/target/scala-2.13/ai-serving-assembly-<version>.jar or ai-serving-gpu-assembly-<version>.jar
Start Server
Simply run with the default CPU backend for ONNX models:
java -jar ai-serving-assembly-<version>.jar
GPU backend for ONNX models:
java -Donnxruntime.backend=cuda -jar ai-serving-gpu-assembly-<version>.jar
Several other execution backends are available: TensorRT, DirectML, Dnnl, and so on. See Advanced ONNX Runtime Configuration for details.
Server Configurations
By default, the HTTP endpoint listens on http://0.0.0.0:9090/, and the gRPC port is 9091. You can customize the options defined in application.conf. There are several ways to override the default options. One is to create a new config file based on the default one, then:
java -Dconfig.file=/path/to/config-file -jar ai-serving-assembly-<version>.jar
Another is to override individual options with Java system properties, for example:
java -Dservice.http.port=9000 -Dservice.grpc.port=9001 -Dservice.home="/path/to/writable-directory" -jar ai-serving-assembly-<version>.jar
AI-Serving is designed to be persistent and recoverable, so it needs a place to save all served models. That location is specified by the property service.home, which defaults to /opt/ai-serving; the directory must be writable.
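For illustration, a custom config file is typically a copy of the default application.conf with the values you need changed; a minimal sketch of such overrides, assuming the same keys as the system properties above, might look like:
service.http.port = 9000
service.grpc.port = 9001
service.home = "/path/to/writable-directory"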
PMML
Integrates PMML4S to score PMML models. PMML4S is a lightweight, clean and efficient implementation based on the PMML specification from 2.0 through to the latest 4.4.1.
PMML4S is written in pure Scala and runs on the JVM, so AI-Serving needs no special configuration to support PMML models.
ONNX
Leverages ONNX Runtime to make predictions for ONNX models. ONNX Runtime is a performance-focused inference engine for ONNX models.
ONNX Runtime is implemented in C/C++, and AI-Serving calls the ONNX Runtime Java API to support ONNX models. ONNX Runtime supports various architectures and multiple hardware accelerators; refer to the table on aka.ms/onnxruntime for details.
Since both CPU and GPU binaries for x64 are distributed to Maven Central, AI-Serving depends on them directly: the CPU build is used by default, and you can switch to the GPU build via the property -Dgpu=true described above.
If you need a different OS/architecture or hardware accelerator, refer to the next topic, Advanced ONNX Runtime Configuration; otherwise, you can ignore it.
Advanced ONNX Runtime Configuration
Build ONNX Runtime
You need to build both native libraries, the JNI shared library and the onnxruntime shared library, for your OS/architecture:
Please refer to the ONNX Runtime build instructions; the --build_java option must always be specified.
Load ONNX Runtime
See Build Output for a list of all generated artifacts. Explicitly specify the paths to the shared libraries when starting AI-Serving:
java -Donnxruntime.backend=execution_backend -Donnxruntime.native.onnxruntime4j_jni.path=/path/to/onnxruntime4j_jni -Donnxruntime.native.onnxruntime.path=/path/to/onnxruntime -jar ai-serving-assembly-<version>.jar
How to deploy and undeploy a PMML or ONNX model in AI-Serving
There are two ways to deploy a PMML or ONNX model into AI-Serving: manual deployment and deployment via API.
Manual Deployment
To deploy a model manually, follow these steps:
- Locate the directory specified by the service.home property.
- Create a subdirectory named models if it does not already exist.
- Inside models, create a directory named after the model.
- Inside the model name directory, create a subdirectory for the model version (for example, 1, 2, etc.).
- Place the model file into the version directory:
  - Use the fixed filename model.pmml for PMML models.
  - Use the fixed filename model.onnx for ONNX models.
- Customized inference behavior can be configured in model.conf, which can be placed either in the model directory or within a specific version directory. The following parameters are supported (a sample model.conf is shown after this list):
  - max-batch-size=8
    Specifies the maximum number of inference requests that can be grouped into a single batch. This option is effective only if the deployed ONNX model supports dynamic input shapes.
  - max-batch-delay-ms=10 (milliseconds)
    The maximum time the server waits to accumulate requests before forming a batch. If the batch does not reach max-batch-size within this time window, it will be executed with the available requests.
  - request-timeout-ms=20 (milliseconds)
    The maximum time allowed for processing an inference request. Requests exceeding this duration will be terminated or returned with a timeout error (HTTP: 504, gRPC: 4 DEADLINE_EXCEEDED).
  - warmup-count=100
    The number of warm-up inference runs executed when the model is loaded.
  - warmup-data-type=zero (options: zero, random)
    Specifies the type of input data used during model warm-up.
    - zero: Uses zero-filled tensors.
    - random: Uses randomly generated input data matching the model's input shape and type.
- All models placed in this directory structure will be automatically loaded when AI-Serving starts.
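For reference, a minimal model.conf contains key=value lines like the following; the values here are just the illustrative ones from the list above, and any omitted parameter keeps its built-in default:
max-batch-size=8
max-batch-delay-ms=10
request-timeout-ms=20
warmup-count=100
warmup-data-type=zero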
Refer to the following example for guidance:
/opt/ai-serving
└── models
├── iris
│ └── 1
│ └── model.pmml
├── mnist
│ ├── 1
│ │ ├── model.conf
│ │ └── model.onnx
│ └── 2
│ └── model.onnx
└── mobilenet
├── 1
│ └── model.onnx
    └── model.conf
Undeploying a model is straightforward: simply remove the corresponding model directory or the specific version directory.
Deployment via API
To deploy and undeploy a model via the API, use the deployment and undeployment operations described in the REST APIs and gRPC APIs sections below.
REST APIs
v2 REST APIs
The v2 REST APIs are fully compatible with the Predict Protocol - Version 2.
v1 REST APIs
When an error occurs, all APIs will return a JSON object as follows:
{
"error": <an error message string>
}
1. Validate API
URL:
PUT http://host:port/v1/validate
Request:
Model with Content-Type that tells the server which format to handle:
- Content-Type: application/xml or Content-Type: text/xml, the input is treated as a PMML model.
- Content-Type: application/octet-stream, application/vnd.google.protobuf or application/x-protobuf, the input is processed as an ONNX model.
If no Content-Type is specified, the server probes the content type from the input entity, but probing can fail.
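For example, validating a local PMML file (the file name here is illustrative):
curl -X PUT --data-binary @single_iris_dectree.xml -H "Content-Type: application/xml" http://localhost:9090/v1/validate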
Response:
Model metadata includes the model type, input list, output list, and so on.
{
"type": <model_type>
"inputs": [
{
"name": <input_name1>,
"type": <field_type>,
...
},
{
"name": <input_name2>,
"type": <field_type>,
...
},
...
],
"outputs": [
{
"name": <output_name1>,
"type": <field_type>,
...
},
{
"name": <output_name2>,
"type": <field_type>,
...
},
...
],
...
}
2. Deploy API
Deployment URL:
PUT http://host:port/v1/models/${MODEL_NAME}
Request:
Model with Content-Type, see validation request above for details.
Response:
{
// The specified servable name
"name": <model_name>
// The deployed version starts from 1
"version": <model_version>
}
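For example, deploying a local PMML file under the servable name iris (the file name is illustrative):
curl -X PUT --data-binary @single_iris_dectree.xml -H "Content-Type: application/xml" http://localhost:9090/v1/models/iris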
Undeployment URL:
DELETE http://host:port/v1/models/${MODEL_NAME}
Response:
3. Model Metadata API
URL:
GET http://host:port/v1/models[/${MODEL_NAME}[/versions/${MODEL_VERSION}]]
- If /${MODEL_NAME}/versions/${MODEL_VERSION} is missing, all models are returned.
- If /versions/${MODEL_VERSION} is missing, all versions of the specified model are returned.
- Otherwise, only the specified version of the specified model is returned.
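For example, assuming a model named iris has been deployed:
curl -X GET http://localhost:9090/v1/models
curl -X GET http://localhost:9090/v1/models/iris
curl -X GET http://localhost:9090/v1/models/iris/versions/1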
Response:
// All models are returned from GET http://host:port/v1/models
[
{
"name": <model_name1>,
"versions": [
{
"version": 1,
...
},
{
"version": 2,
...
},
...
]
},
{
"name": <model_name2>,
"versions": [
{
"version": 1,
...
},
{
"version": 2,
...
},
...
]
},
...
]
// All versions of the specified model are returned from GET http://host:port/v1/models/${MODEL_NAME}
{
"name": <model_name>,
"versions": [
{
"version": 1,
...
},
{
"version": 2,
...
},
...
]
}
// The specified version of the specified model is returned from GET http://host:port/v1/models/${MODEL_NAME}/versions/${MODEL_VERSION}
{
"name": <model_name>,
"versions": [
{
"version": <model_version>,
...
}
]
}
4. Predict API
URL:
POST http://host:port/v1/models/${MODEL_NAME}[/versions/${MODEL_VERSION}]
/versions/${MODEL_VERSION} is optional. If omitted, the latest version is used.
Request:
The request body can take two formats, JSON and binary. The HTTP header Content-Type tells the server which format to handle and is therefore required for all requests.
- Content-Type: application/json. The request body must be a JSON object formatted as follows:
  {
    "X": {
      "records": [
        { "input_name1": <value>|<(nested)list>, "input_name2": <value>|<(nested)list>, ... },
        { "input_name1": <value>|<(nested)list>, "input_name2": <value>|<(nested)list>, ... },
        ...
      ],
      "columns": [ "input_name1", "input_name2", ... ],
      "data": [
        [ <value>|<(nested)list>, <value>|<(nested)list>, ... ],
        [ <value>|<(nested)list>, <value>|<(nested)list>, ... ],
        ...
      ]
    },
    // Output filters to specify which output fields need to be returned.
    // If the list is empty, all outputs will be included.
    "filter": <list>
  }
  The X can take more than one record. As shown above, two formats are supported; you can use either one, and the split format is usually smaller for multiple records.
  - records: a list like [{column -> value}, … , {column -> value}]
  - split: a dict like {columns -> [columns], data -> [values]}
- Content-Type: application/octet-stream, application/vnd.google.protobuf or application/x-protobuf. The request body must be the protobuf message PredictRequest of the gRPC API; besides the common scalar values, it can use the standard onnx.TensorProto value directly.
- Otherwise, an error will be returned.
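As an illustration, a minimal JSON request body in the split format (the field names are taken from the Iris example later in this document):
{
  "X": {
    "columns": ["sepal_length", "sepal_width", "petal_length", "petal_width"],
    "data": [[5.1, 3.5, 1.4, 0.2]]
  },
  "filter": ["predicted_class"]
}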
Response:
The server always returns the same format as your request.
- For the JSON format, the response body is a JSON object formatted as follows:
{
"result": {
"records": [
{
"output_name1": <value>|<(nested)list>,
"output_name2": <value>|<(nested)list>,
...
},
{
"output_name1": <value>|<(nested)list>,
"output_name2": <value>|<(nested)list>,
...
},
...
],
"columns": [ "output_name1", "output_name2", ... ],
"data": [
[ <value>|<(nested)list>, <value>|<(nested)list>, ... ],
[ <value>|<(nested)list>, <value>|<(nested)list>, ... ],
...
]
}
}
- For the binary format, the response body is the protobuf message PredictResponse of the gRPC API.
Generally speaking, the binary payload has better latency, especially for large tensor values in ONNX models, while the JSON format is easier for humans to read.
gRPC APIs
gRPC predict v2 proto specification
Refer to the grpc_predict_v2.proto specification
gRPC predict v1 proto specification
Refer to the protobuf file ai-serving.proto for details. You can generate a client in your favorite language and make gRPC calls to the server. To learn more about generating client code and calling the server, refer to the gRPC tutorials.
Examples
Python
Curl
Start AI-Serving
We will use Docker to run AI-Serving:
docker pull autodeployai/ai-serving:latest
docker run --rm -it -v /opt/ai-serving:/opt/ai-serving -p 9090:9090 -p 9091:9091 autodeployai/ai-serving:latest
16:06:47.722 INFO AI-Serving-akka.actor.default-dispatcher-5 akka.event.slf4j.Slf4jLogger Slf4jLogger started
16:06:47.833 INFO main ai.autodeploy.serving.AIServer$ Predicting thread pool size: 16
16:06:48.305 INFO main a.autodeploy.serving.protobuf.GrpcServer AI-Serving grpc server started, listening on 9091
16:06:49.433 INFO main ai.autodeploy.serving.AIServer$ AI-Serving http server started, listening on http://0.0.0.0:9090/
Make REST API calls to AI-Serving
In a different terminal, run cd $REPO_ROOT/src/test/resources, then use the curl tool to make REST API calls.
Scoring PMML models
We will use the Iris decision tree model single_iris_dectree.xml to see REST APIs in action.
curl -X PUT --data-binary @single_iris_dectree.xml -H "Content-Type: application/xml" http://localhost:9090/v1/validate
{ "algorithm": "TreeModel", "app": "KNIME", "appVersion": "2.8.0", "copyright": "KNIME", "formatVersion": "4.1", "functionName": "classification", "outputs": [ { "name": "predicted_class", "optype": "nominal", "type": "string" }, { "name": "probability", "optype": "continuous", "type": "real" }, { "name": "probability_Iris-setosa", "optype": "continuous", "type": "real" }, { "name": "probability_Iris-versicolor", "optype": "continuous", "type": "real" }, { "name": "probability_Iris-virginica", "optype": "continuous", "type": "real" }, { "name": "node_id", "optype": "nominal", "type": "string" } ], "inputs": [ { "name": "sepal_length", "optype": "continuous", "type": "double", "values": "[4.3,7.9]" }, { "name": "sepal_width", "optype": "continuous", "type": "double", "values": "[2.0,4.4]" }, { "name": "petal_length", "optype": "continuous", "type": "double", "values": "[1.0,6.9]" }, { "name": "petal_width", "optype": "continuous", "type": "double", "values": "[0.1,2.5]" } ], "runtime": "PMML4S", "serialization": "pmml", "targets": [ { "name": "class", "optype": "nominal", "type": "string", "values": "Iris-setosa,Iris-versicolor,Iris-virginica" } ], "type": "PMML" }
curl -X PUT --data-binary @single_iris_dectree.xml -H "Content-Type: application/xml" http://localhost:9090/v1/models/iris
{ "name": "iris", "version": "1" }
curl -X PUT -F "model=@single_iris_dectree.xml" -F "config=@conf.json" http://localhost:9090/v1/models/iris
{ "name": "iris", "version": "2" }
curl -X GET http://localhost:9090/v1/models/iris
{
"createdAt": "2020-04-23T06:29:32",
"id": "56fa4917-e904-4364-9c15-4c87d84ec2c4",
"latestVersion": 1,
"name": "iris",
"updateAt": "2020-04-23T06:29:32",
"versions": [
{
"algorithm": "TreeModel",
"app": "KNIME",
"appVersion": "2.8.0",
"copyright": "KNIME",
"createdAt": "2020-04-23T06:29:32",
"formatVersion": "4.1",
"functionName": "classification",
"hash": "fc44c33123836be368d3f24829360020",
"outputs": [
{
"name": "predicted_class",
"optype": "nominal",
"type": "string"
},
{
"name": "probability",
"optype": "continuous",
"type": "real"
},
{
"name": "probability_Iris-setosa",
"optype": "continuous",
"type": "real"
},
{
"name": "probability_Iris-versicolor",
"optype": "continuous",
"type": "real"
},
{
"name": "probability_Iris-virginica",
"optype": "continuous",
"type": "real"
},
{
"name": "node_id",
"optype": "nominal",
"type": "string"
}
],
"inputs": [
{
"name": "sepal_length",
"optype": "continuous",
"type": "double",
"values": "[4.3,7.9]"
},
{
"name": "sepal_width",
"optype": "continuous",
"type": "double",
"values": "[2.0,4.4]"
},
{
"name": "petal_length",
"optype": "continuous",
"type": "double",
"values": "[1.0,6.9]"
},
{
"name": "petal_width",
"optype": "continuous",
"type": "double",
"values": "[0.1,2.5]"
}
],
"runtime": "PMML4S",
"serialization": "pmml",
"size": 3497,
"targets": [
{
"name": "class",
"optype": "nominal",
"type": "string",
"values": "Iris-setosa,Iris-versicolor,Iris-virginica"
}
],
"type": "PMML",
"version": 1
}
]
}
curl -X POST -d '{"X": [{"sepal_length": 5.1, "sepal_width": 3.5, "petal_length": 1.4, "petal_width": 0.2}]}' -H "Content-Type: application/json" http://localhost:9090/v1/models/iris
{ "result": [ { "node_id": "1", "probability_Iris-setosa": 1.0, "predicted_class": "Iris-setosa", "probability_Iris-virginica": 0.0, "probability_Iris-versicolor": 0.0, "probability": 1.0 } ] }
curl -X POST -d '{"X": {"columns": ["sepal_length", "sepal_width", "petal_length", "petal_width"],"data":[[5.1, 3.5, 1.4, 0.2], [7, 3.2, 4.7, 1.4]]}, "filter": ["predicted_class"]}' -H "Content-Type: application/json" http://localhost:9090/v1/models/iris { "result": { "columns": [ "predicted_class" ], "data": [ [ "Iris-setosa" ], [ "Iris-versicolor" ] ] } }
Scoring ONNX models
We will use the pre-trained MNIST Handwritten Digit Recognition ONNX Model to see REST APIs in action.
curl -X PUT --data-binary @mnist.onnx -H "Content-Type: application/octet-stream" http://localhost:9090/v1/validate
{ "outputs": [ { "name": "Plus214_Output_0", "shape": [ 1, 10 ], "type": "tensor(float)" } ], "inputs": [ { "name": "Input3", "shape": [ 1, 1, 28, 28 ], "type": "tensor(float)" } ], "runtime": "ONNX Runtime", "serialization": "onnx", "type": "ONNX" }
curl -X PUT --data-binary @mnist.onnx -H "Content-Type: application/octet-stream" http://localhost:9090/v1/models/mnist
{ "name": "mnist", "version": 1 }
curl -X GET http://localhost:9090/v1/models/mnist
{
"createdAt": "2020-04-16T15:18:18",
"id": "850bf345-5c4c-4312-96c8-6ee715113961",
"latestVersion": 1,
"name": "mnist",
"updateAt": "2020-04-16T15:18:18",
"versions": [
{
"createdAt": "2020-04-16T15:18:18",
"hash": "104617a683b4e62469478e07e1518aaa",
"outputs": [
{
"name": "Plus214_Output_0",
"shape": [
1,
10
],
"type": "tensor(float)"
}
],
"inputs": [
{
"name": "Input3",
"shape": [
1,
1,
28,
28
],
"type": "tensor(float)"
}
],
"runtime": "ONNX Runtime",
"serialization": "onnx",
"size": 26454,
"type": "ONNX",
"version": 1
}
]
}
curl -X POST -d @mnist_request_0.json -H "Content-Type: application/json" http://localhost:9090/v1/models/mnist
{ "result": [ { "Plus214_Output_0": [ [ 975.6703491210938, -618.7241821289062, 6574.5654296875, 668.0283203125, -917.2710571289062, -1671.6361083984375, -1952.7598876953125, -61.54957580566406, -777.1764526367188, -1439.5316162109375 ] ] } ] }
curl -X POST --data-binary @mnist_request_0.pb -o response_0.pb -H "Content-Type: application/octet-stream" http://localhost:9090/v1/models/mnist
The binary response is saved to response_0.pb, which is in protobuf format (an instance of the PredictResponse message); you can use a client generated from ai-serving.proto to read it.
Note that the content type of a predict request must be specified explicitly and must be one of the four candidates above. An incorrect request URL or body returns an HTTP error status.
curl -i -X POST -d @mnist_request_0.json http://localhost:9090/v1/models/mnist
HTTP/1.1 400 Bad Request
Server: akka-http/10.1.11
Date: Sun, 19 Apr 2020 06:25:25 GMT
Connection: close
Content-Type: application/json
Content-Length: 92
{"error":"Prediction request takes unknown content type: application/x-www-form-urlencoded"}Support
If you have any questions about the AI-Serving library, please open issues on this repository.
Feedback and contributions to the project, no matter what kind, are always very welcome.
License
AI-Serving is licensed under APL 2.0.