
v0.7.1-beta (Pre-release)

Released by @whbruce on 23 Feb 00:50, commit 87de4c9

Intel® Deep Learning Streamer Pipeline Server Release v0.7.1

Intel® Deep Learning Streamer (Intel® DL Streamer) Pipeline Server, formerly known as Video Analytics Serving, is a Python package and microservice for deploying hardware-optimized media analytics pipelines. It supports pipelines defined in the GStreamer* or FFmpeg* media frameworks and provides APIs to discover, start, stop, customize, and monitor pipeline execution. Intel® DL Streamer Pipeline Server is based on Intel® DL Streamer and FFmpeg Video Analytics.

What's Changed

| Title | Description |
|-------|-------------|
| Product name change | Video Analytics Serving is now called Intel® Deep Learning Streamer Pipeline Server as it is part of the Intel® DL Streamer product suite. |
| Breaking API change: pipeline instances are now UUID strings | Pipeline instances created by different services can now be uniquely identified. Applications that depended on pipeline instances being integer values must be updated to handle strings (see the sketch below this table). |
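
Because instance ids are now UUID strings rather than integers, clients should no longer parse them as numbers. A minimal sketch of the adjustment, assuming a client that starts a pipeline via the existing `POST /pipelines/{name}/{version}` endpoint; the host, port, and helper function are illustrative assumptions, not part of the product:

```python
import requests

PIPELINE_SERVER = "http://localhost:8080"  # assumed host/port

def start_pipeline(name, version, request_body):
    """Start a pipeline and return its instance id.

    Before v0.7.1 the response body was an integer id; it is now a UUID
    string, so keep it as a string instead of converting it with int().
    """
    response = requests.post(
        f"{PIPELINE_SERVER}/pipelines/{name}/{version}", json=request_body
    )
    response.raise_for_status()
    instance_id = response.json()   # e.g. "94cf72b718184615a4a67a9c06dd3fb7"
    return str(instance_id)         # was: int(response.json())
```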

What's New

| Title | Description |
|-------|-------------|
| Kubernetes load balancing sample | Shows how to use MicroK8s with the HAProxy load balancer to distribute work across pods in a cluster. |
| REST API endpoint to list all pipeline instances | Endpoint `GET /pipelines/status` returns all pipeline instances as an array of status objects. |
| REST API status and stop endpoints no longer require pipeline name and version | The following endpoints have been added (see the example after this table):<br>• `GET /pipelines/{instance_id}` Get {instance_id} summary<br>• `DELETE /pipelines/{instance_id}` Stop {instance_id}<br>• `GET /pipelines/status/{instance_id}` Get {instance_id} status<br>They have equivalent functionality to the following endpoints, which are now deprecated:<br>• `GET /pipelines/{name}/{version}/{instance_id}` Get {instance_id} summary<br>• `DELETE /pipelines/{name}/{version}/{instance_id}` Stop {instance_id}<br>• `GET /pipelines/{name}/{version}/{instance_id}/status` Get {instance_id} status |
| VA Client enhancements | The following features have been added to support the Kubernetes sample:<br>• Use a remote service<br>• Start multiple streams to help measure stream density<br>• Display results from MQTT and Kafka metadata destinations |
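
To illustrate the new instance-only endpoints, here is a minimal sketch using the Python requests library. The endpoint paths are the ones listed above; the host, port, and example instance id are assumptions:

```python
import requests

PIPELINE_SERVER = "http://localhost:8080"  # assumed host/port
instance_id = "94cf72b718184615a4a67a9c06dd3fb7"  # example UUID returned when a pipeline was started

# List every pipeline instance known to the service as an array of status objects.
all_status = requests.get(f"{PIPELINE_SERVER}/pipelines/status").json()

# Summary and status of a single instance; pipeline name and version are no longer required.
summary = requests.get(f"{PIPELINE_SERVER}/pipelines/{instance_id}").json()
status = requests.get(f"{PIPELINE_SERVER}/pipelines/status/{instance_id}").json()

# Stop the instance.
requests.delete(f"{PIPELINE_SERVER}/pipelines/{instance_id}")
```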

What's Fixed

| Description | Issue |
|-------------|-------|
| Prevent pipeline instances from resetting | #58 |
| REST API for status and stop ignores pipeline name and version | #92 |
| EdgeX sample fails when run from behind a proxy | #97 |
| REST service fails to start due to `soft_unicode` import error | #101 |

Known Issues

Known issues are tracked as GitHub issues. If you encounter a defect in functionality, please submit an issue.

| Description | Issue |
|-------------|-------|
| Docker build fails if directory name contains spaces | #38 |
| Models can be picked up from a previous build | #71 |
| Difficult to get normalized coordinates for spatial analytics parameters | #87 |
| Some public models from Open Model Zoo do not produce inference results | #89 |
| Pipeline failure on some multi-GPU systems | #98 |
| Intermittent 30s delay in pipeline start during multi-stream sessions | #104 |
| Kubernetes deployment fails if no_proxy contains `*` | #105 |
| VA Client reports incorrect average fps across multiple streams | #106 |

Tested Base Images

Supported base images are listed in the Building Intel® DL Streamer Pipeline Server document.

* Other names and brands may be claimed as the property of others.