AMD Is Solidly in the ProAV Market with IPMX AV-over-IP Chips, UHD Processing Chips and AI Chip Solutions
AMD (yes, the chip company) just debuted silicon solutions supporting AV-over-IP. The following partner technologies and demos are already built on its AV-over-IP, UHD processing and AI silicon, grouped by category:
AV-over-IP:
- IPMX with Privacy Encryption Protocol (PEP) – New feature added by Adeas and Nextera Video to their IPMX, ST 2110 and NMOS IP core portfolio to enable media content privacy based on VSF TR-10-13. The cores are compatible with AMD Zynq 7030 and UltraScale+ SoCs and FPGAs (a minimal NMOS discovery sketch follows this list).
- NDI|HX3 – NDI is showing a range of camera, converter and display products supporting both NDI|HX3 and NDI High Bandwidth, bringing both options to AMD adaptive SoCs for the first time.
- Dante – Wider platform support with single-chip/module and new IP core solutions for both networked audio and video.
- IP10 – Highlighting Blackmagic Design's new codec for ST 2110 workflows.
- HTJ2K – FPGA-based encode and decode of the royalty-free High Throughput JPEG 2000 mezzanine codec for AV-over-IP streaming.
- JPEG XS – Extending the capabilities of JPEG XS for AV-over-IP, intoPIX adds a new Temporal Differential Coding (TDC) profile and Main + Proxy stream support to its TicoXS FIP solution, which runs on AMD FPGAs, SoCs, CPUs and GPUs.
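
To make the AV-over-IP piece a little more concrete, here is a minimal sketch of how a controller might list ST 2110/IPMX senders and receivers through the AMWA NMOS IS-04 Query API mentioned above. It is a generic illustration, not the Adeas/Nextera Video IP core interface; the registry address is a placeholder, and real deployments typically locate the registry via DNS-SD.

```python
# Hypothetical sketch: listing senders and receivers from an NMOS IS-04 registry,
# the discovery layer used in IPMX and ST 2110 deployments like those above.
# The registry URL below is an assumption for illustration only.
import requests

REGISTRY = "http://nmos-registry.local:8080"   # placeholder registry address
QUERY_API = f"{REGISTRY}/x-nmos/query/v1.3"    # IS-04 Query API base path

def list_resources(kind: str):
    """Fetch all resources of a given kind ('senders' or 'receivers')."""
    resp = requests.get(f"{QUERY_API}/{kind}", timeout=5)
    resp.raise_for_status()
    return resp.json()

if __name__ == "__main__":
    for sender in list_resources("senders"):
        print(f"sender   {sender['id']}: {sender.get('label', '(no label)')}")
    for receiver in list_resources("receivers"):
        print(f"receiver {receiver['id']}: {receiver.get('label', '(no label)')}")
```

In a full system, the companion IS-05 Connection API would then be used to patch a chosen sender to a receiver.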
UHD processing products:
- VESA DSC – New Display Stream Compression IP from Alma Technologies running in conjunction with HDMI 2.1 IP from AMD (a rough bandwidth sketch follows this list).
- DisplayPort 2.1 – Full UHBR20 support with new redriver/retimer FMCs from Parretto.
- LED Wall Processing – A demonstration of ultra-thin Ventana MicroLED tiles driven by a Megapixel HELIOS LED processor; the underlying AMD FPGAs enable 8K+ HDR, 100GbE and ST 2110 support for residential, broadcast and virtual production applications.
- LED Wall Processing – A showcase of NovaStar's MX2000 Pro 8K LED processor, built on AMD FPGAs, driving the main LED wall on the stand.
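
For a rough sense of why DSC and UHBR20 show up side by side, the back-of-the-envelope sketch below compares uncompressed UHD payloads against approximate usable link rates. The numbers ignore blanking intervals and some protocol overhead; they are illustrative arithmetic, not vendor figures.

```python
# Rough arithmetic: uncompressed UHD video payloads vs. approximate link capacity,
# to show where Display Stream Compression (DSC) becomes necessary.

def video_payload_gbps(width, height, fps, bits_per_pixel):
    """Active-video payload in Gbit/s (no blanking, no audio/metadata)."""
    return width * height * fps * bits_per_pixel / 1e9

# Approximate usable link rates after channel coding.
DP21_UHBR20 = 4 * 20 * 128 / 132   # 4 lanes x 20 Gbit/s, 128b/132b  -> ~77.6 Gbit/s
HDMI21_FRL  = 48 * 16 / 18         # 48 Gbit/s FRL, 16b/18b          -> ~42.7 Gbit/s
links = {"DP 2.1 UHBR20": DP21_UHBR20, "HDMI 2.1 FRL": HDMI21_FRL}

for label, fps in (("8K60", 60), ("8K120", 120)):
    payload = video_payload_gbps(7680, 4320, fps, 30)   # 10-bit 4:4:4 = 30 bpp
    print(f"{label} 10-bit 4:4:4 active video: ~{payload:.1f} Gbit/s")
    for name, capacity in links.items():
        if payload <= capacity:
            print(f"  fits {name} (~{capacity:.1f} Gbit/s effective) uncompressed")
        else:
            print(f"  needs >= {payload / capacity:.2f}:1 compression over {name} "
                  f"(DSC is rated visually lossless up to about 3:1)")
```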
AI in AV:
- Face Tracking – A new face-tracking demo from Makarena Labs, a foundation for powering private, AI-driven functions at the edge in many AV use cases (a generic tracking-loop sketch follows this list).
- Large Language Models (LLMs) – A low-cost, compact, edge-based generative AI hardware accelerator that runs LLMs on AMD Versal AI Edge adaptive SoCs, used to incorporate chatbot functionality, predictive maintenance and other AI capabilities into Pro AV products.
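
For readers curious what an edge face-tracking loop looks like in practice, here is a generic sketch using OpenCV's stock Haar cascade detector. It is not Makarena Labs' implementation; on Versal AI Edge class devices the detector would typically run on the on-chip AI engines rather than in Python on a CPU, but the frame-in, coordinates-out flow is the same.

```python
# Generic edge face-tracking illustration using OpenCV's bundled Haar cascade.
# Camera index 0 is an assumption; a capture card input would work the same way.
import cv2

# Load the stock frontal-face detector shipped with OpenCV.
cascade_path = cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
detector = cv2.CascadeClassifier(cascade_path)

cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    for (x, y, w, h) in faces:
        # Draw a box around each detected face; a real product would feed these
        # coordinates to camera framing, analytics or privacy masking logic.
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("faces", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```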