AV professionals in all types of organizations — from educational institutions and corporations to government agencies and churches — are constantly striving to make their content accessible and understandable to more viewers, whether in-person or online. Extending the company's rich history of innovation in automated media captioning to multi-language translation, ENCO's new AI-powered enTranslate system will make its AV industry debut in booth 5491 at InfoComm 2019 (June 12-14 in Orlando, Fla.).
enTranslate combines the powerful speech-to-text engine from ENCO’s patented enCaption open and closed captioning solution with advanced translation technology powered by Veritone, enabling automated, near-real-time translation of live or pre-recorded content for alternative-language captioning, subtitling and more.
Helping make AV content understandable to viewers who don't speak its original language, enTranslate offers an easy and affordable solution to automatically translate live presentations – such as keynotes, board meetings, legislative sessions, lectures or sermons – as well as recorded content such as training and learning videos. Users can choose to embed translated captions in short and long-form VOD content for subsequent on-demand consumption, or to display live, open-captioned subtitles on local video displays to assist in-person attendees. enTranslate supports 46 languages, including English, Spanish and French.
enTranslate builds on the highly accurate, machine learning-powered speech recognition core first implemented in enCaption to interpret incoming live or file-based audio, then feeds the resulting text to its advanced translation engine. Blending artificial intelligence with sophisticated linguistic modeling, enTranslate uses a Neural Machine Translation methodology to provide high-quality translations based on the context surrounding the current words and phrases.
enTranslate offers both live and offline translation, and can be deployed on-premises or in the cloud. For file-based applications, audio or video clips can be easily ingested into the system and captioned with translations in any supported language, enabling users to quickly and affordably process large libraries of previously recorded content.