<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Nicholas Frederiksen</title>
    <description>The latest articles on Forem by Nicholas Frederiksen (@nfrederiksen).</description>
    <link>https://forem.com/nfrederiksen</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F651499%2F80ef885a-9e68-44a1-8298-c2dbb5524894.jpeg</url>
      <title>Forem: Nicholas Frederiksen</title>
      <link>https://forem.com/nfrederiksen</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/nfrederiksen"/>
    <language>en</language>
    <item>
      <title>Pulling an HLS Stream and Pushing it to a New Output</title>
      <dc:creator>Nicholas Frederiksen</dc:creator>
      <pubDate>Thu, 09 Dec 2021 16:06:20 +0000</pubDate>
      <link>https://forem.com/video/pulling-an-hls-stream-and-pushing-it-to-a-new-output-3o0h</link>
      <guid>https://forem.com/video/pulling-an-hls-stream-and-pushing-it-to-a-new-output-3o0h</guid>
      <description>&lt;p&gt;This blog post expects the reader to be somewhat familiar with the HLS streaming format, and AWS Elemental.&lt;/p&gt;

&lt;h1&gt;Intro&lt;/h1&gt;

&lt;p&gt;Ever wanted to take an existing live stream and put it somewhere else? Now you can.&lt;br&gt;
Introducing our first version of &lt;a href="https://github.com/Eyevinn/hls-pull-push" rel="noopener noreferrer"&gt;HLS-Pull-Push&lt;/a&gt;, a Node library that creates a service with a REST API for spawning fetchers. A fetcher pulls segments from a source HLS stream and pushes them to an output destination, accompanied by HLS manifests referencing the segments in their new home.&lt;/p&gt;

&lt;p&gt;The output destination can be a folder on your local machine, an S3 bucket, or even AWS Elemental MediaPackage. The limits of the output destinations are up to the implementers of the Output adapters (more on that later).&lt;/p&gt;

&lt;p&gt;In this post, I would like to give you an overview of how the library is built and how it works.&lt;/p&gt;
&lt;h2&gt;Pulling&lt;/h2&gt;

&lt;p&gt;In this lib, pulling content from a source HLS stream is done through the &lt;code&gt;@eyevinn/hls-recorder&lt;/code&gt; library. You can find it &lt;a href="https://www.npmjs.com/package/@eyevinn/hls-recorder" rel="noopener noreferrer"&gt;here&lt;/a&gt; on &lt;strong&gt;npmjs&lt;/strong&gt; 🙌 &lt;/p&gt;
&lt;h3&gt;HLS-Recorder&lt;/h3&gt;

&lt;p&gt;In short, HLS-Recorder is an open-source library, written in TypeScript by Eyevinn, that continuously fetches the multivariant and media playlists from an input HLS stream, parses them, extracts segment data from the newest additions in the playlists, and stores it internally as a JSON object.&lt;br&gt;
HLS-Recorder can also play back a new HLS stream containing every stored (or recorded) segment, as it has a restify server built in. By default, HLS-Recorder will serve an event HLS stream, and will add an endlist tag once the recording session is stopped.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;"An event playlist is specified by the EXT-X-PLAYLIST-TYPE tag with a value of EVENT. It doesn't initially have an EXT-X-ENDLIST tag, indicating that new media files will be added to the playlist as they become available." &lt;a href="https://developer.apple.com/documentation/http_live_streaming/example_playlists_for_http_live_streaming/event_playlist_construction" rel="noopener noreferrer"&gt;Reference&lt;/a&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;HLS-Recorder can accept various types of HLS streams as input: LIVE, VOD, and EVENT streams, with audio tracks, with subtitle tracks, using AES-128 encryption, or using fragmented MP4.&lt;/p&gt;

&lt;p&gt;Most interestingly, HLS-Recorder also uses Node's EventEmitter to emit an event each time a new segment is added to the internal segment storage. On each trigger, the segment storage itself is sent along, allowing the user to do anything they want with the parsed segment objects. Below is an example of what the emitted data could look like:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="err"&gt;//&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;Recorded&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;Segments&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;for&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;Playlist&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;Variant_&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="w"&gt; 
&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"mediaSeq"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"segList"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"index"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;5&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"duration"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;11&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"uri"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"https://lab-live.cdn.eyevinn.technology/SHORT60SEC/video/39acb45308be9cf94f27a66b6377a324.ts"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"cue"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="kc"&gt;null&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"key"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="kc"&gt;null&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"map"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="kc"&gt;null&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"index"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;6&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"duration"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;12&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"uri"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"https://lab-live.cdn.eyevinn.technology/SHORT60SEC/video/88154ae79efd8e0f8b8490d8ae177514.ts"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"cue"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="kc"&gt;null&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"key"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="kc"&gt;null&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"map"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="kc"&gt;null&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This emitted data is what HLS-Pull-Push uses as input for its pushing operations. To learn more about the HLS-Recorder lib, check it out on &lt;a href="https://github.com/Eyevinn/hls-recorder" rel="noopener noreferrer"&gt;GitHub&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;Pushing&lt;/h2&gt;

&lt;p&gt;As mentioned above, the data pushed is based on the JSON emitted by the pulling operations. Once collected, the &lt;strong&gt;pushing&lt;/strong&gt; operations can start. The service first generates and pushes a new multivariant playlist manifest to the output destination.&lt;br&gt;
Afterward, it pushes all the newest segments received from the pulling operation. Lastly, it generates and pushes new media playlist manifests based on the segments pushed in the previous step.&lt;/p&gt;
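&lt;p&gt;The three-step push order can be sketched as follows. This is a simplified illustration; the &lt;code&gt;uploader&lt;/code&gt; object and its method names are hypothetical stand-ins for an output adapter:&lt;/p&gt;

```javascript
// Illustrative sketch of the push order; "uploader" and its method
// names are hypothetical stand-ins for an output adapter.
async function pushCycle(uploader, multivariant, newSegments, mediaPlaylists) {
  // 1. The multivariant playlist goes up first, so players have an entry point.
  await uploader.uploadMultivariant(multivariant);
  // 2. Then every newly pulled segment.
  for (const seg of newSegments) {
    await uploader.uploadSegment(seg);
  }
  // 3. Finally, media playlists referencing the segments just pushed.
  for (const playlist of mediaPlaylists) {
    await uploader.uploadPlaylist(playlist);
  }
}
```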

&lt;p&gt;To support a variety of output destinations, the uploading part of the pushing operation is mainly done through adapters that implement an OutputPlugin interface.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="k"&gt;export&lt;/span&gt; &lt;span class="kr"&gt;interface&lt;/span&gt; &lt;span class="nx"&gt;IOutputPlugin&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nf"&gt;createOutputDestination&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;opts&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kr"&gt;any&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
  &lt;span class="nf"&gt;getPayloadSchema&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="k"&gt;export&lt;/span&gt; &lt;span class="kd"&gt;type&lt;/span&gt; &lt;span class="nx"&gt;Logger&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;logMessage&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kr"&gt;string&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="k"&gt;void&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="k"&gt;export&lt;/span&gt; &lt;span class="kr"&gt;interface&lt;/span&gt; &lt;span class="nx"&gt;IOutputPluginDest&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nl"&gt;logger&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;Logger&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
  &lt;span class="nf"&gt;_fileUploader&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;opts&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kr"&gt;any&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt; &lt;span class="nb"&gt;Promise&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nx"&gt;boolean&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
  &lt;span class="nf"&gt;uploadMediaPlaylist&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;opts&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kr"&gt;any&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt; &lt;span class="nb"&gt;Promise&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nx"&gt;boolean&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
  &lt;span class="nf"&gt;uploadMediaSegment&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;opts&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kr"&gt;any&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt; &lt;span class="nb"&gt;Promise&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nx"&gt;boolean&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Currently, only one adapter is included in the library, but more adapters are to come, and you are naturally welcome to implement your own.&lt;br&gt;
What differs between adapters is, of course, their file-uploading methods, as different destinations may require different technologies and SDKs.&lt;br&gt;
An adapter's main responsibility is downloading the media and m3u8 files and uploading them to their final destination, with customized file names.&lt;/p&gt;
&lt;h2&gt;Output Plugin: MediaPackage&lt;/h2&gt;

&lt;p&gt;The output plugin currently available in HLS-Pull-Push is for AWS Elemental MediaPackage. The plugin fetches media files and then puts them to a given MediaPackage channel ingest URL using the network protocol &lt;em&gt;WebDAV&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;Working with WebDAV, you will need a set of credentials, which AWS Elemental MediaPackage generates automatically when you create a new MediaPackage channel.&lt;/p&gt;

&lt;p&gt;Suffice it to say, in order to use this plugin you will need access to the WebDAV credentials (username &amp;amp; password) from MediaPackage.&lt;/p&gt;

&lt;p&gt;Here's a fun fact about pushing to MediaPackage: file naming is key. We figured out that files must be uploaded under the name &lt;strong&gt;channel&lt;/strong&gt;* (e.g. channel.m3u8, channel_1.m3u8, channel_1_50.ts), otherwise MediaPackage will reject them like yesterday's french fries.&lt;/p&gt;
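&lt;p&gt;A tiny helper makes the naming rule explicit. This is our own illustration of the pattern, not code from the plugin:&lt;/p&gt;

```javascript
// Illustration of the "channel*" naming rule described above:
// everything pushed to MediaPackage follows this pattern.
function mediaPackageFileName(variantIndex, segmentIndex) {
  if (variantIndex === undefined) {
    return "channel.m3u8"; // multivariant playlist
  }
  if (segmentIndex === undefined) {
    return "channel_" + variantIndex + ".m3u8"; // media playlist
  }
  return "channel_" + variantIndex + "_" + segmentIndex + ".ts"; // segment
}
```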
&lt;h2&gt;Demo&lt;/h2&gt;

&lt;p&gt;You can try out HLS-Pull-Push yourself. The only requirements are Node.js 12+ and access to your AWS Elemental MediaPackage channel and its credentials.&lt;/p&gt;
&lt;h4&gt;Step 1: Install&lt;/h4&gt;

&lt;p&gt;&lt;code&gt;npm install @eyevinn/hls-pull-push&lt;/code&gt;&lt;/p&gt;
&lt;h4&gt;Step 2: Script Set-Up&lt;/h4&gt;

&lt;p&gt;Paste the following code block into a file named pullpush.js&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;HLSPullPush&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;MediaPackageOutput&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;require&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;@eyevinn/hls-pull-push&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;pullPushService&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;HLSPullPush&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
&lt;span class="nx"&gt;pullPushService&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;registerPlugin&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;mediapackage&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;MediaPackageOutput&lt;/span&gt;&lt;span class="p"&gt;());&lt;/span&gt;
&lt;span class="c1"&gt;// Start the Service&lt;/span&gt;
&lt;span class="nx"&gt;pullPushService&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;listen&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;8080&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;Step 3: Run the Script&lt;/h4&gt;

&lt;p&gt;Run &lt;code&gt;node pullpush.js&lt;/code&gt;.&lt;/p&gt;

&lt;h4&gt;Step 4: Make a POST&lt;/h4&gt;

&lt;p&gt;Lastly, make a POST request to the &lt;code&gt;api/v1/fetcher&lt;/code&gt; endpoint. For convenience you can use the Swagger API at: &lt;a href="http://localhost:8080/api/docs" rel="noopener noreferrer"&gt;http://localhost:8080/api/docs&lt;/a&gt;&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F325l0njp9mis0ia8obxx.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F325l0njp9mis0ia8obxx.PNG" alt="Swagger API for Pull Push"&gt;&lt;/a&gt;&lt;br&gt;&lt;br&gt;
Use this JSON as your POST payload. Just make sure to insert your own MediaPackage details.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"name"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Demo_Push_To_MediaPackage"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"url"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"https://cph-p2p-msl.akamaized.net/hls/live/2000341/test/master.m3u8"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"output"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"mediapackage"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"payload"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"ingestUrls"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"url"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;&amp;lt;insert&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;MediaPackage&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;channel&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;input&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;URL&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;here&amp;gt;&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"username"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;&amp;lt;insert&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;MediaPackage&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;channel&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;username&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;here&amp;gt;&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"password"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;&amp;lt;insert&lt;/span&gt;&lt;span class="w"&gt;  &lt;/span&gt;&lt;span class="err"&gt;MediaPackage&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;channel&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;password&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;here&amp;gt;&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;blockquote&gt;
&lt;p&gt;The demo source stream used in my example is a 24/7 Live Stream of the short film "Tears of Steel" looped over. I've seen this film many times now. You could almost say it's my favorite... &lt;strong&gt;almost say&lt;/strong&gt;.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Verify by making a GET request and checking that you get session data in the response. If all went well, you should be able to see new content on all of your MediaPackage channel output URLs.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4lwovxj4wl5kxde98emb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4lwovxj4wl5kxde98emb.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Well, that's all there is to it, really. Not too complicated, right? Oh, and if you need to stop the fetcher, just make a DELETE request. Thank you for reading!&lt;/p&gt;

&lt;h2&gt;Current Limitations &amp;amp; Future Work&lt;/h2&gt;

&lt;p&gt;As mentioned in the beginning, this is only a first version of HLS-Pull-Push. More work will be done on the library to address its current limitations.&lt;/p&gt;

&lt;h3&gt;Output: MediaPackage&lt;/h3&gt;

&lt;p&gt;When pushing to AWS Elemental MediaPackage, one must be aware that, for playback, it works best with segments of similar transcoding.&lt;/p&gt;

&lt;p&gt;Furthermore, when it comes to DASH repackaging, there are some extra conditions for a source HLS stream with ad breaks in it. AWS Elemental MediaPackage requires SCTE-35 messaging in the ad-filled source HLS stream to be able to properly repackage the input stream into DASH.&lt;/p&gt;

&lt;p&gt;We are planning to extend HLS-Pull-Push to add SCTE-35 messaging to the input stream based on CUE tags found in the source HLS stream, thus taking this restriction off the user's mind.&lt;/p&gt;

&lt;p&gt;For HLS repackaging with ads, all you need to do is enable ad markers and set them to 'Passthrough' in the MediaPackage endpoint settings.&lt;/p&gt;

&lt;h3&gt;Output: S3 Bucket&lt;/h3&gt;

&lt;p&gt;We are also planning to implement and add a plugin for pushing to an AWS S3 bucket. Stay tuned.&lt;/p&gt;




&lt;h2&gt;About Eyevinn Technology&lt;/h2&gt;

&lt;p&gt;Eyevinn Technology is an independent consultant firm specialized in video and streaming. Independent in the sense that we are not commercially tied to any platform or technology vendor.&lt;/p&gt;

&lt;p&gt;At Eyevinn, every software developer consultant has a dedicated budget reserved for open source development and contribution to the open source community. This gives us room for innovation, team building, and personal competence development, and it gives us as a company a way to contribute back to the open source community.&lt;/p&gt;

&lt;p&gt;Want to know more about Eyevinn and what it is like to work here? Contact us at &lt;a href="mailto:work@eyevinn.se"&gt;work@eyevinn.se&lt;/a&gt;!&lt;/p&gt;

</description>
      <category>streaming</category>
      <category>hls</category>
      <category>pull</category>
      <category>push</category>
    </item>
    <item>
      <title>Breaking in to a Live Stream on a Virtual Channel Powered by the Eyevinn Channel Engine</title>
      <dc:creator>Nicholas Frederiksen</dc:creator>
      <pubDate>Fri, 17 Sep 2021 13:54:57 +0000</pubDate>
      <link>https://forem.com/video/breaking-in-to-a-live-stream-on-a-virtual-channel-powered-by-the-eyevinn-channel-engine-5f8j</link>
      <guid>https://forem.com/video/breaking-in-to-a-live-stream-on-a-virtual-channel-powered-by-the-eyevinn-channel-engine-5f8j</guid>
      <description>&lt;p&gt;In the early days, the term "breaking" referred to a technical procedure used inside a broadcasting studio. It was, put simply, a term used for when studios would interrupt a broadcast feed, with most usually, a live feed to some urgent news reporting.&lt;/p&gt;

&lt;p&gt;This procedure has for a while been something we would like our virtual Channel Engine to one day simulate.&lt;br&gt;
Fortunately, that day has come. In this blog post, I write about the newest feature coming to the Channel Engine, and about a Breaking News API built to best demonstrate it.&lt;/p&gt;

&lt;p&gt;I am going to assume that you are at least somewhat familiar with the Eyevinn Channel Engine. If you aren't, then I'd recommend reading this &lt;a href="https://eyevinntechnology.medium.com/server-less-ott-only-playout-bc5a7f2e6d04" rel="noopener noreferrer"&gt;article&lt;/a&gt; first.   &lt;/p&gt;

&lt;p&gt;Switching the feed abruptly? How does it do this?&lt;br&gt;
Continue to the next chapter if you want to know; if you just want to jump straight into the demo, skip to the last chapter.&lt;/p&gt;

&lt;h2&gt;Channel Engine V3&lt;/h2&gt;

&lt;p&gt;The latest release candidate for the Channel Engine, V3, includes a major new feature: the ability to mix a true live stream into a Channel Engine powered linear channel (VOD2Live) according to a set schedule. Just as a reminder, know that this feature is currently in Beta.&lt;/p&gt;

&lt;p&gt;To achieve this, Channel Engine has gotten some new internal components, see the red boxes in the figure below.&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fs2snjfhj0eumx20uyz0k.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fs2snjfhj0eumx20uyz0k.PNG" alt="Live-mix overview diagram"&gt;&lt;/a&gt;&lt;br&gt;
These components are what make this feature possible, and the following text provides a short overview of what they are. &lt;/p&gt;

&lt;h3&gt;Session-Live&lt;/h3&gt;

&lt;p&gt;This component is responsible for generating the customized live source manifests that will better match the context of the Channel Engine stream.&lt;/p&gt;

&lt;p&gt;It also handles all the HLS manifest manipulations (stitching) necessary to have the live stream mix in properly with the VOD2Live stream, resulting in a smooth overlap of segments.&lt;/p&gt;

&lt;p&gt;There is an internal playhead in this component that will make sure to always fetch the latest live source manifest at a given time interval.&lt;/p&gt;

&lt;p&gt;However, before the Session-Live component can start its job, it needs two things: segments from the latest media sequence on the VOD2Live stream, and a live source URI.&lt;/p&gt;

&lt;p&gt;This is information that it receives from the next component we are going to talk about.&lt;/p&gt;

&lt;h3&gt;Stream Switcher&lt;/h3&gt;

&lt;p&gt;This component is a form of gate: it decides which stream (VOD2Live or true live) the virtual channel should get manifests from and return to clients. Its decision is based on an event schedule.&lt;/p&gt;

&lt;p&gt;The schedule, in this sense, is just a list of JSON objects with some essential properties such as start time, end time, URI, etc.&lt;/p&gt;
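&lt;p&gt;As an illustration only, a scheduled event could look something like the object below. The property names here are placeholders of our own, so check the README for the exact schema:&lt;/p&gt;

```json
{
  "eventId": "breaking-news-1",
  "type": "LIVE",
  "start_time": 1631887200000,
  "end_time": 1631888100000,
  "uri": "https://example.com/live/master.m3u8"
}
```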

&lt;blockquote&gt;
&lt;p&gt;To learn about the specifics check out the "Live Mixing (BETA)" section in the Channel Engine V3 &lt;a href="https://github.com/Eyevinn/channel-engine/tree/v3-release-candidate#readme" rel="noopener noreferrer"&gt;README&lt;/a&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;The Stream Switcher will, at a set frequency, obtain and inspect the schedule's contents. If it finds a JSON object whose event start time is now, it commences a switching operation, sending the necessary data (live URI and VOD2Live segments) to the Session-Live component.&lt;/p&gt;

&lt;p&gt;From there, the Session-Live component can generate proper manifests. Manifests that the Stream Switcher, from now on, will hand out to all client requests. &lt;/p&gt;

&lt;p&gt;And as you'd expect, when the event's end time is reached, the Stream Switcher commences a switch-back operation, after which it starts returning VOD2Live manifests to the clients once again.&lt;/p&gt;
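&lt;p&gt;The switch and switch-back decisions boil down to comparing the current time with the event window. A minimal sketch of that logic, with hypothetical field names:&lt;/p&gt;

```javascript
// Minimal sketch of the switching decision: given the scheduled event
// (or null) and the current time in ms, pick which stream's manifests
// to hand out. Field names are hypothetical.
function decideSource(event, nowMs) {
  if (!event) {
    return "vod2live"; // nothing scheduled: stay on the VOD2Live stream
  }
  const beforeStart = Math.sign(event.start_time - nowMs) === 1;
  const afterEnd = Math.sign(nowMs - event.end_time) === 1;
  if (beforeStart || afterEnd) {
    return "vod2live"; // outside the event window: (switch back to) VOD2Live
  }
  return "live"; // within the window: hand out stitched live manifests
}
```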

&lt;p&gt;Now, we have yet to disclose where the Stream Switcher actually obtains its schedule from. This brings us to our next and last new component.&lt;/p&gt;

&lt;h3&gt;Stream Switch Manager&lt;/h3&gt;

&lt;p&gt;This component holds the logic for managing and returning a schedule list to the Stream Switcher. It can be customized much like a Channel Engine Asset Manager component: it is a class that needs to be implemented following a certain interface.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Again, to learn about the specifics check out the "Live Mixing (BETA)" section in the Channel Engine V3 &lt;a href="https://github.com/Eyevinn/channel-engine/tree/v3-release-candidate#readme" rel="noopener noreferrer"&gt;README&lt;/a&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;How does one populate the schedule list with events? Well, that's up to the implementer. It can be done through external API gateways or directly in the switch manager itself. The only requirement is that it has a &lt;code&gt;getSchedule()&lt;/code&gt; function that returns a list of Event JSONs.  &lt;/p&gt;
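&lt;p&gt;A minimal switch manager could be as small as the sketch below. Only &lt;code&gt;getSchedule()&lt;/code&gt; is required by the text above; everything else, including the event shape, is our own illustration:&lt;/p&gt;

```javascript
// Hypothetical Stream Switch Manager sketch: the only hard requirement
// is a getSchedule() function returning a list of Event JSONs. How the
// list gets populated (HTTP API, static config, ...) is up to you.
class StaticStreamSwitchManager {
  constructor() {
    this.schedule = [];
  }
  addEvent(event) {
    // e.g. called from an external API gateway endpoint
    this.schedule.push(event);
  }
  getSchedule() {
    return this.schedule;
  }
}
```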

&lt;h3&gt;Extra Features&lt;/h3&gt;

&lt;p&gt;Here are some neat things the breaking-in feature also does.&lt;br&gt;
The switcher doesn't limit you to breaking in with a live stream; you can also break in with a VOD, allowing a quicker way to broadcast prerecorded content on the fly. To do this, one just needs to change the &lt;code&gt;type&lt;/code&gt; property of the Event JSON.&lt;/p&gt;

&lt;p&gt;If the event source gets compromised, e.g. the URI becomes unreachable, the Stream Switcher will handle it by simply switching back to the VOD2Live stream, ensuring that clients continuously receive content.&lt;/p&gt;

&lt;p&gt;We would also like to mention that this feature is compatible with the other Channel Engine feature currently in Beta, the High Availability feature, bringing you redundancy, strong uptimes, and the benefits of load balancers.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;More on this feature can be found in the "High Availability" section in the Channel Engine V3 &lt;a href="https://github.com/Eyevinn/channel-engine/tree/v3-release-candidate#readme" rel="noopener noreferrer"&gt;README&lt;/a&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h3&gt;Delimitations&lt;/h3&gt;

&lt;p&gt;As mentioned, this feature is currently in Beta and has some delimitations in what it can do. In its current version, the following restrictions must be observed for it to operate properly.&lt;/p&gt;

&lt;h4&gt;
  
  
  Restrictions
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Uniform transcoding. Segment duration must match (or be within a margin ~ ±2 sec) across all planned content (VOD and LIVE). &lt;br&gt;
&lt;em&gt;Note: Sequence length can be customized in Channel Engine, but not on the fly.&lt;/em&gt; &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Must not schedule a switch into or out of an Ad break on the VOD2Live stream. &lt;br&gt;
&lt;em&gt;Note: This will be addressed in the next update.&lt;/em&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Related to the previous point, the Live event must not include ad breaks. Ads can be played, but the HLS cue tags will be left out of the manifest.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Must not schedule back-to-back Live events; this is not supported at the moment and could cause issues.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Trying It Out with The Breaking News API
&lt;/h2&gt;

&lt;p&gt;Now for the exciting part, testing it out yourself.&lt;br&gt;
The Channel Engine V3 is only responsible for stitching the true Live stream into the VOD2Live stream; choosing which specific Live stream to stitch in, and when, falls to the Stream Switch Manager and the scheduling service.&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgyitw4r07gpk5kfc5w7r.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgyitw4r07gpk5kfc5w7r.PNG" alt="Breaking News API Swagger page"&gt;&lt;/a&gt;&lt;br&gt;
A Breaking News API was built as a simple reference implementation of how you can create your own stream switch manager to use with this Channel Engine V3 feature.&lt;/p&gt;

&lt;p&gt;In brief, the Breaking News API has a &lt;code&gt;/breaking&lt;/code&gt; POST endpoint where you can append an Event JSON (as mentioned earlier) to a channel's schedule. &lt;/p&gt;
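&lt;p&gt;As an illustration (the payload fields and exact endpoint path below are assumptions, not the real Breaking News API schema; check the Swagger docs for the actual one), building and posting such an Event JSON could look like:&lt;/p&gt;

```javascript
// Hypothetical sketch of building an Event JSON for the /breaking endpoint.
// Field names are assumptions based on the description in this post.
function buildBreakingEvent({ channelId, uri, durationMs }) {
  const start = Date.now();
  return {
    channelId,
    type: 1, // assumed LIVE break-in value; a VOD break-in would use another
    start_time: start,
    end_time: start + durationMs,
    uri,
  };
}

const event = buildBreakingEvent({
  channelId: "1",
  uri: "https://example.com/live/master.m3u8",
  durationMs: 2 * 60 * 1000,
});

// In Node 18+, fetch is built in (exact path per the Swagger docs):
// await fetch("http://localhost:8001/breaking", {
//   method: "POST",
//   headers: { "Content-Type": "application/json" },
//   body: JSON.stringify(event),
// });
```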

&lt;p&gt;A proof-of-concept Node.js service has been built that spins up a simple instance of the Channel Engine (looping the same VOD over and over) together with the Breaking News API, to demonstrate the feature in a convenient way.&lt;/p&gt;

&lt;p&gt;You will find the POC service &lt;a href="https://github.com/Eyevinn/breaking-news-vc" rel="noopener noreferrer"&gt;here&lt;/a&gt; on GitHub.&lt;/p&gt;

&lt;p&gt;Simply clone the repository and follow the documentation; you can run the service either with Docker or with a good old &lt;code&gt;npm install&lt;/code&gt; and &lt;code&gt;npm start&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;Once you have it up and running, view the virtual channel stream at: &lt;br&gt;
&lt;a href="https://web.player.eyevinn.technology/?manifest=http%3A%2F%2Flocalhost%3A8000%2Flive%2Fmaster.m3u8%3Fchannel%3D11" rel="noopener noreferrer"&gt;https://web.player.eyevinn.technology/?manifest=http%3A%2F%2Flocalhost%3A8000%2Flive%2Fmaster.m3u8%3Fchannel%3D1&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Then, to populate the event schedule with an Event JSON, I'd recommend going to the Swagger documentation page at: &lt;a href="http://localhost:8001/api/docs" rel="noopener noreferrer"&gt;http://localhost:8001/api/docs&lt;/a&gt;, &lt;br&gt;
where you can execute a POST call with some predefined values that should trigger a live stream break-in at the soonest possible moment. Of course, you can always set the start and end times to something else if you want.&lt;/p&gt;

&lt;p&gt;This is what you can expect:&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1qxlx5nandtx4qgh5jnz.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1qxlx5nandtx4qgh5jnz.PNG" alt="expect_1"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpgvrmwz6rh36frpm616d.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpgvrmwz6rh36frpm616d.PNG" alt="expect_2"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;p&gt;&lt;em&gt;Eyevinn Technology is the leading European independent consultancy firm specializing in video technology and media distribution.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;If you need assistance in the development and implementation of this, our &lt;a href="https://video-dev.team/" rel="noopener noreferrer"&gt;team of video developers&lt;/a&gt; is happy to help out. If you have any questions or comments, just drop us a line in the comments section of this post.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>video</category>
      <category>streaming</category>
      <category>opensource</category>
      <category>virtualchannel</category>
    </item>
    <item>
      <title>Adding Support for Multi-Audio Tracks in The Eyevinn Channel Engine</title>
      <dc:creator>Nicholas Frederiksen</dc:creator>
      <pubDate>Thu, 01 Jul 2021 07:57:06 +0000</pubDate>
      <link>https://forem.com/video/adding-support-for-multi-audio-tracks-in-the-eyevinn-channel-engine-2bii</link>
      <guid>https://forem.com/video/adding-support-for-multi-audio-tracks-in-the-eyevinn-channel-engine-2bii</guid>
      <description>&lt;p&gt;In this blog post, I'll describe how I extended the current demuxed audio feature so that the Channel Engine could play multiple audio tracks. I will also assume that reader is somewhat familiar with the HLS streaming format and Channel Engine or has at least read the documentation in the Channel Engine git repo &lt;a href="https://github.com/Eyevinn/channel-engine" rel="noopener noreferrer"&gt;link&lt;/a&gt;, or this article &lt;a href="https://eyevinntechnology.medium.com/server-less-ott-only-playout-bc5a7f2e6d04" rel="noopener noreferrer"&gt;link&lt;/a&gt; beforehand.&lt;/p&gt;

&lt;h3&gt;
  
  
  Introduction
&lt;/h3&gt;

&lt;p&gt;The Eyevinn Channel Engine is an open-source service that works well with muxed VODs, but when it comes to demuxed VODs it currently does the bare minimum, namely using the first audio track it can find. This demuxed support can certainly be extended.&lt;/p&gt;

&lt;p&gt;But before we get into it, I need to clarify what I mean when I say "audio tracks" and "audio groups", as I will be using these words throughout this post. &lt;/p&gt;

&lt;p&gt;In an HLS master manifest, you can have a media item with the attribute &lt;code&gt;TYPE=AUDIO&lt;/code&gt; and a reference to a media playlist manifest containing the audio segments. This is what I will refer to as an "audio track". Multiple audio tracks can exist in an HLS master manifest, and they can be grouped/categorized by the media item's GROUP-ID attribute. Audio tracks that share the same GROUP-ID value are what I will refer to as an "audio group". In other words, an audio group consists of one or more audio tracks. GROUP-IDs are an HLS requirement for media items.&lt;/p&gt;

&lt;p&gt;Now, a quick overview as to how the old demuxed audio feature worked.&lt;/p&gt;

&lt;p&gt;The Channel Engine creates a master manifest for its channel stream based on the specifications detailed in the &lt;code&gt;ChannelManager&lt;/code&gt; object, which one passes as an option to the Channel Engine instance. If we pass a variable signaling that we want to use demuxed content, then the Channel Engine performs the following extra steps when creating the master manifest.&lt;/p&gt;

&lt;p&gt;The Channel Engine adds one media item of type AUDIO to the master manifest, with the GROUP-ID attribute set to the first GROUP-ID found in a stream item in the VOD asset's master manifest.&lt;/p&gt;

&lt;p&gt;Then, when the audio track is requested by the player/client, the Channel Engine responds with an audio playlist manifest. The playlist will reference audio segments belonging to the VOD asset's first available audio track for that audio group. Even if there are multiple audio groups in the VOD, they won't be used; even if there are multiple audio tracks within an audio group, they won't be used. There is clearly potential here to add support for more than one specific audio track and audio group.&lt;/p&gt;

&lt;h3&gt;
  
  
  Challenges
&lt;/h3&gt;

&lt;p&gt;The task in question had some implementation challenges. &lt;br&gt;
A few things needed to be taken into account, namely:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;How to have the client select a track of a certain audio group.&lt;/li&gt;
&lt;li&gt;How to have the client select a certain language/audio track within the selected audio group.&lt;/li&gt;
&lt;li&gt;How to handle the case where the requested audio group is not present in the current VOD.&lt;/li&gt;
&lt;li&gt;How to handle the case where the requested language is not present amongst the current VOD's audio tracks for that audio group.&lt;/li&gt;
&lt;li&gt;How to handle VOD stitching when VODs have a different set of audio groups.&lt;/li&gt;
&lt;li&gt;How to handle VOD stitching when VODs have a different set of languages/audio tracks.&lt;/li&gt;
&lt;/ul&gt;
&lt;h3&gt;
  
  
  Delimitations
&lt;/h3&gt;

&lt;p&gt;My implemented solution in its current state does not cover every edge case, meaning that some points mentioned in &lt;strong&gt;Challenges&lt;/strong&gt; have yet to be addressed. However, the implementation works fairly well for the most basic case and can be extended in the future to handle more edge cases.&lt;/p&gt;

&lt;p&gt;My solution will assume that every VOD uses the same audio GROUP-ID and uses mostly the same languages in their audio tracks.&lt;/p&gt;

&lt;p&gt;As a side note, VODs not using the same GROUP-ID will result in an error. A proposed solution is mentioned in the &lt;strong&gt;Future Work&lt;/strong&gt; chapter.&lt;/p&gt;
&lt;h3&gt;
  
  
  Implementation
&lt;/h3&gt;

&lt;p&gt;The following steps give an overview of how I added support for multi-audio in the Channel Engine.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 1: Adding audio media items to the master manifest based on a set of predefined audio languages.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;To address the challenge of how the client is to select a certain audio group and audio track, I extended the existing mechanism, reusing the approach taken for selecting different VOD profiles.&lt;/p&gt;

&lt;p&gt;The plan was to let the client select a track based on what's been predefined. So to have it work like it does for VOD profiles, I needed to extend the &lt;code&gt;ChannelManager&lt;/code&gt; class with an extra function.&lt;/p&gt;

&lt;p&gt;Media items are added to the master manifest with attribute values set according to a predefined JSON object, defined in a &lt;code&gt;_getAudioTracks()&lt;/code&gt; function in the &lt;code&gt;ChannelManager&lt;/code&gt; class/object.&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fm9psssbj99ot492wk9oh.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fm9psssbj99ot492wk9oh.PNG" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now the resulting master manifest may look something like this...&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#EXTM3U
#EXT-X-VERSION:4
## Created with Eyevinn Channel Engine library (version=2.19.3)
##    https://www.npmjs.com/package/eyevinn-channel-engine
#EXT-X-SESSION-DATA:DATA-ID="eyevinn.tv.session.id",VALUE="1"
#EXT-X-SESSION-DATA:DATA-ID="eyevinn.tv.eventstream",VALUE="/eventstream/1"

# AUDIO groups
#EXT-X-MEDIA:TYPE=AUDIO,GROUP-ID="audio",LANGUAGE="en", NAME="English",AUTOSELECT=YES,DEFAULT=YES,CHANNELS="2",URI="master-audio_en.m3u8;session=1"
#EXT-X-MEDIA:TYPE=AUDIO,GROUP-ID="audio",LANGUAGE="sv", NAME="Swedish",AUTOSELECT=YES,DEFAULT=NO,CHANNELS="2",URI="master-audio_sv.m3u8;session=1"

#EXT-X-STREAM-INF:BANDWIDTH=6134000,RESOLUTION=1024x458,CODECS="avc1.4d001f,mp4a.40.2",AUDIO="audio"
master6134000.m3u8;session=1
#EXT-X-STREAM-INF:BANDWIDTH=2323000,RESOLUTION=640x286,CODECS="avc1.4d001f,mp4a.40.2",AUDIO="audio"
master2323000.m3u8;session=1
#EXT-X-STREAM-INF:BANDWIDTH=1313000,RESOLUTION=480x214,CODECS="avc1.4d001f,mp4a.40.2",AUDIO="audio"
master1313000.m3u8;session=1

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;blockquote&gt;
&lt;p&gt;Note: Notice that GROUP-ID is not a field in the audioTrack JSON, so the GROUP-ID in the master manifest's media items is actually permanently set to the first GROUP-ID found in the very first VOD. This is how it worked before, and my feature extension has kept it that way for now. See &lt;strong&gt;Delimitations&lt;/strong&gt;.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Next, I made some small adjustments to the route handler (specifically &lt;code&gt;_handleAudioManifest()&lt;/code&gt;) for the endpoint of a URI in a media item. &lt;br&gt;
The Channel Engine reads parameter values from the client request in a clever way: the values can be extracted from the request path itself.&lt;br&gt;
The extracted values are the audio group ID and the language. These tell us which segments to include in the media manifest response.&lt;/p&gt;
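&lt;p&gt;To illustrate the idea (the filename pattern below matches the example master manifest further down, but the Channel Engine's actual pattern may differ, so this is a sketch under that assumption):&lt;/p&gt;

```javascript
// Hypothetical sketch of extracting parameters from an audio manifest
// request path such as "master-audio_en.m3u8;session=1".
function parseAudioManifestPath(path) {
  const m = path.match(
    /^master-(?<group>[^_]+)_(?<language>[^.]+)\.m3u8(?:;session=(?<session>\d+))?$/
  );
  if (!m) return null;
  const { group, language, session } = m.groups;
  return { group, language, session };
}

console.log(parseAudioManifestPath("master-audio_en.m3u8;session=1"));
// { group: 'audio', language: 'en', session: '1' }
```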

&lt;p&gt;&lt;strong&gt;Step 2: Make it possible in HLS-vodtolive, to load in all audio groups, and also all audio tracks for each audio group.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Now that we know what segments the client is looking for, how do we find them? This is where the Eyevinn dependency package &lt;code&gt;hls-vodtolive&lt;/code&gt; comes into play.&lt;/p&gt;

&lt;p&gt;In short, the &lt;code&gt;hls-vodtolive&lt;/code&gt; package provides an HLSVod class which, given a VOD master manifest as input, loads and stores all segments referenced in that manifest into a JSON object organized by profiles. An HLSVod object also divides the segments into an array of subsets that we call media sequences. Each subset/media sequence is then used to create a pseudo-live-looking media manifest.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1htiuqwel07br58rueji.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1htiuqwel07br58rueji.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This class, however, did not properly load segments from audio media manifests. An extension was needed.&lt;/p&gt;

&lt;p&gt;Without going into detail, I can say that the HLSVod was changed to load all audio segments from every audio media manifest and organize them first by audio group, then by language,&lt;br&gt;
effectively storing every available audio segment from the original VOD manifest.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 3: Make it so that you can stitch audio tracks between two HLSVods.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;This step involves further expansion of the &lt;code&gt;hls-vodtolive&lt;/code&gt; package, specifically the HLSVod class function &lt;code&gt;_loadPrevious()&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;You see, an HLSVod can load after another using the function &lt;code&gt;loadAfter()&lt;/code&gt;, and when doing so it will inherit some segments from the HLSVod before it. This makes it possible to create media sequences that smoothly go from the contents of one VOD to the other, using HLS discontinuity tags.&lt;/p&gt;
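&lt;p&gt;The inheritance idea can be sketched like this (a simplified illustration of the concept; the function name and segment representation are mine, not the actual &lt;code&gt;hls-vodtolive&lt;/code&gt; internals):&lt;/p&gt;

```javascript
// Hypothetical sketch of the loadAfter() idea: the new VOD inherits the
// tail of the previous VOD's segment list, separated by a discontinuity
// marker, so media sequences can slide smoothly from one VOD to the next.
function loadAfter(prevSegments, nextSegments, inheritCount) {
  const inherited = prevSegments.slice(-inheritCount);
  return [...inherited, { discontinuity: true }, ...nextSegments];
}

const stitched = loadAfter(["a1.ts", "a2.ts", "a3.ts"], ["b1.ts", "b2.ts"], 2);
console.log(stitched.length); // 5 (2 inherited + marker + 2 new)
```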

&lt;blockquote&gt;
&lt;p&gt;Look at the Channel Engine chapter in the Server-less OTT-Only Playout article for more info (&lt;a href="https://eyevinntechnology.medium.com/server-less-ott-only-playout-bc5a7f2e6d04" rel="noopener noreferrer"&gt;Link&lt;/a&gt;).&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;The tricky part is deciding who inherits what from whom. &lt;br&gt;
Ideally, if the two HLSVods in question have the same set of languages and audio group names, then it's fairly straightforward who gets what. But if they have nothing in common, it suddenly becomes ambiguous. However, it is probably more likely that the Channel Engine user is using VOD assets that share at least some common languages/audio tracks.&lt;/p&gt;

&lt;p&gt;That being said, it is possible that the VOD assets have named their GROUP-IDs differently. However, as of now, it is assumed that this is not the case. This is addressed in the &lt;strong&gt;Delimitations&lt;/strong&gt; chapter, and again in the &lt;strong&gt;Future Work&lt;/strong&gt; chapter below.&lt;/p&gt;

&lt;p&gt;See the figures below for a visual representation of the challenge. They depict a scenario where the prior HLSVod has audio tracks for English, Swedish, and Russian, while the current HLSVod has audio tracks for English, Swedish, and Norwegian. &lt;br&gt;
The outer box represents the current HLSVod object, and the inner colored boxes represent audio tracks in the HLSVod.&lt;br&gt;
Blue and red represent the prior VOD segments and current VOD segments, respectively. Again, it is assumed that both HLSVods have the same audio group.&lt;/p&gt;

&lt;p&gt;So... matching languages inherit segments from each other, but what segments do the unique languages inherit?&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj9kgo2iuhk6mxmunxuk2.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj9kgo2iuhk6mxmunxuk2.PNG" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The answer is... any segments, really. What is important is that we can generate media sequences that transition smoothly and that the client gets proper audio for the VOD. Sure, it might not be in the expected language, but at least the HLS player will not be confused.&lt;/p&gt;

&lt;p&gt;However, I thought it best for the unique languages to inherit segments from the previous VOD's default language, or more specifically its first loaded language (which usually corresponds to the default language in a demuxed VOD).&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9b2m1grsfv31h8369pal.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9b2m1grsfv31h8369pal.PNG" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Lastly, in case a loaded language from the previous VOD did not get inherited at all, we simply remove it from our collection of audio tracks for an audio group, so that we do not trigger any false positives when a request comes for that language.&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwvnc8wapa5qxko8n8amy.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwvnc8wapa5qxko8n8amy.PNG" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;But this brings up an interesting question. What do we do if a request comes for a language that is not in the HLSVod's collection? &lt;/p&gt;

&lt;p&gt;Well, we simply provide a fallback track. In other words, say the client requests an audio track in Russian but the current VOD only has English and Swedish; then we respond with the English audio track instead, assuming English was the first loaded audio track for the HLSVod.&lt;/p&gt;
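&lt;p&gt;The fallback behaviour can be sketched like this (the function name and the nested group-then-language structure are my own illustration of the approach described above, not the actual implementation):&lt;/p&gt;

```javascript
// Hypothetical sketch of the fallback: if the requested language is missing
// from the VOD's audio tracks, fall back to the first loaded track for that
// audio group.
function pickAudioTrack(audioTracks, group, language) {
  const tracks = audioTracks[group];
  if (!tracks) return null; // fallback audio group handling is future work
  return tracks[language] || tracks[Object.keys(tracks)[0]];
}

// Segments organized by audio group, then by language.
const audioTracks = {
  audio: {
    en: ["en1.aac", "en2.aac"],
    sv: ["sv1.aac", "sv2.aac"],
  },
};

console.log(pickAudioTrack(audioTracks, "audio", "ru")); // falls back to the English track
```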

&lt;p&gt;And that's all there was to it! &lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxncnqubaahd9gyrq28ks.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxncnqubaahd9gyrq28ks.gif" alt="Alt Text"&gt;&lt;/a&gt;&lt;br&gt;
After these steps, it became possible to play, select, and transition between audio tracks for demuxed VODs in the Channel Engine.&lt;/p&gt;

&lt;h3&gt;
  
  
  Future Work
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Add support for a fallback audio group&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Adding a fallback audio group for when an audio group is not found will help ensure that the Channel Engine stream always has audio to play. When doing this, it is important to make sure that every audio track in the fallback audio group has segments from the prior VOD stitched in front of it. It would probably work, again, to distribute segments from the prior VOD's first loaded audio track for its first loaded audio group.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Add support for presetting, selecting, and using multiple audio groups.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;As of now, we only support the use of a single audio group. If there is ever a need to use more than one audio group at a time in the Channel Engine, we would need to expand the channel options in the &lt;code&gt;ChannelManager&lt;/code&gt;. However, there will be a challenge in how we then deal with mapping between audio groups with different names among VODs.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Loading audio tracks that have the same language in a single VOD.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;We do not support loading duplicate languages, but they do occur in HLS manifests. For example, a VOD could have both an English track and an English Commentary track, each setting its language value to "en". In our current state, only the first English track would be loaded.&lt;br&gt;
The use case for having an English commentary track as a preset track is not very common, I'd imagine, but it could be nice to support if it ever became a desired feature in the Channel Engine.&lt;br&gt;
That said, an immediate workaround would be to prepare the HLS manifest beforehand and make sure that every media item has a unique language value.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Nicholas Frederiksen is a developer at Eyevinn Technology, the leading European independent consultancy firm specializing in video technology and media distribution.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;If you need assistance in the development and implementation of this, our team of video developers is happy to help you out. If you have any questions or comments, just drop a line in the comments section of this post.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>opensource</category>
      <category>streaming</category>
      <category>hls</category>
    </item>
  </channel>
</rss>
