The AMD Vitis™ AI Integrated Development Environment is a complete AI inference development solution for AMD adaptive hardware, including the Versal™ AI Edge series, Versal AI Core series, Zynq™ UltraScale+™ MPSoCs, and Alveo™ accelerator cards. The Vitis AI IDE provides a rich set of AI models, optimized Deep-learning Processor Unit (DPU) cores, tools, libraries, and example designs for AI inference deployments from the data center to the edge.
Join us for this webinar in which we will present and discuss some of the latest features and enhancements enabled by the 3.0 release, including:
- Early-access support for the Versal AI Edge VEK280 evaluation kit and Alveo V70 data center accelerator card. These exciting new platforms take advantage of the new AIE-ML tile architecture, offering higher performance, lower latency, reduced DDR memory bandwidth requirements, and reduced programmable logic consumption for AI inference applications.
- The new Vitis AI ONNX Runtime Engine (VOE), which provides integrated ONNX Runtime support. VOE enables developers to deploy a wider range of models and operators than Vitis AI previously supported.
- Enhancements to the Whole Graph Optimizer (WeGO) workflow for data center applications, which give developers the benefit of on-the-fly quantization while supporting virtually any operator or subgraph that is supported by the native framework in which the model was trained.
- Support for AMD ROCm™-enabled GPU hosts, which offers developers and production engineers flexibility when deploying high-performance data center solutions with Alveo accelerator cards.
Whether you are an existing Vitis AI IDE developer or simply considering AMD for your next AI inference project, this webinar will give you a jump start on leveraging Vitis AI to accelerate machine learning inference in your next design.