<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: api.video</title>
    <description>The latest articles on Forem by api.video (@api_video).</description>
    <link>https://forem.com/api_video</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Forganization%2Fprofile_image%2F1871%2F27496fc8-9894-43ab-bea1-b23ce0bc9400.png</url>
      <title>Forem: api.video</title>
      <link>https://forem.com/api_video</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/api_video"/>
    <language>en</language>
    <item>
      <title>The ultimate guide to online video (codecs, containers and more)</title>
      <dc:creator>Sebastian Marin</dc:creator>
      <pubDate>Tue, 13 Jun 2023 15:53:35 +0000</pubDate>
      <link>https://forem.com/api_video/the-ultimate-guide-to-online-video-codecs-containers-and-more-i26</link>
      <guid>https://forem.com/api_video/the-ultimate-guide-to-online-video-codecs-containers-and-more-i26</guid>
      <description>&lt;p&gt;Video is one of the most popular forms of media today, with billions of hours of video content being watched every day. But have you ever wondered how all of this video content is compressed and decompressed? Or what the difference is between a codec and a container? Understanding these concepts is crucial for anyone who works with video, whether you’re a content creator, marketer, or just someone who enjoys watching videos online. In this article, we’ll take a deep dive into the world of video formats, codecs, and containers. We’ll explore the most popular codecs used today, such as H.264, H.265, and VP9, and discuss how they work to compress and decompress video files. We’ll also examine some of the most popular video formats used today, such as MP4, AVI, and MOV. By the end of this article, you’ll have a better understanding of how video files work and why it’s important to understand these concepts.&lt;/p&gt;

&lt;p&gt;With the rise of video-sharing platforms like YouTube, Vimeo, and TikTok, more and more people are uploading videos to the internet every day. Understanding codecs and compression is essential for creating high-quality videos that can be easily shared and viewed on different devices and platforms. Similarly, understanding container formats is important for ensuring that your videos are compatible with different software applications and operating systems.&lt;/p&gt;

&lt;p&gt;By using the right codecs and container formats, you can create videos that are optimized for different use cases. For example, if you’re creating a video for streaming over the internet, you’ll want to use a codec that provides good compression while maintaining high-quality video playback. On the other hand, if you’re creating a video for archival purposes, you may want to use a codec that provides lossless compression to preserve the original quality of the video.&lt;/p&gt;

&lt;p&gt;Now that we’ve established the importance of codecs and containers in digital video, let’s dive a little deeper into video encoding and the different types of codecs that are commonly used.&lt;/p&gt;

&lt;p&gt;Encoding is the process of exporting digital video into a format and specification suitable for playback. Every video file involves two elements: a codec and a container. Codecs are compression-decompression algorithms that compress the video data to reduce its size and decompress it for playback. Containers are file formats that hold the compressed video data along with other information such as audio tracks, subtitles, and metadata.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--UfjwHBNn--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/umd2r19vpy1zv0itn7vg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--UfjwHBNn--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/umd2r19vpy1zv0itn7vg.png" alt="Illustration of file format metaphor" width="745" height="332"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Imagine a shipping container filled with packages of many types. In this analogy, the shipping container is the container format, and the codec is the tool that creates the packages and places them in the container. The container format determines how the compressed video data is stored and organized within the file.&lt;/p&gt;

&lt;h2&gt;
  
  
  Deep dive into compression-decompression algorithms
&lt;/h2&gt;

&lt;p&gt;Codecs compress and decompress video data by using complex algorithms that analyze the video data and remove any unnecessary information while preserving the quality of the video. During the compression process, the codec analyzes the video data and identifies areas that can be compressed without affecting the overall quality of the video. The codec then discards this information and stores only the essential data required to recreate the video.&lt;/p&gt;

&lt;p&gt;When decompressing the video data, the codec reverses this process by analyzing the compressed data and recreating the original video data by filling in any missing information. This process is done in real-time during playback, allowing you to watch high-quality videos without having to wait for them to fully download.&lt;/p&gt;

&lt;p&gt;There are two main types of codecs: lossy and lossless. Lossy codecs use compression algorithms that discard some of the original video data to reduce the file size. Lossless codecs, on the other hand, use compression algorithms that preserve all of the original video data while still reducing the file size.&lt;/p&gt;
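&lt;p&gt;The difference can be sketched in a few lines of Python (a toy illustration using generic byte compression, not a real video codec): lossless compression round-trips the data exactly, while a lossy step first quantizes the samples, throwing away precision that can never be recovered:&lt;/p&gt;

```python
import zlib

# A toy "frame": one row of 8-bit luma samples.
frame = bytes([12, 13, 12, 200, 201, 199, 12, 13] * 64)

# Lossless: zlib round-trips the data bit-for-bit.
lossless = zlib.compress(frame)
assert zlib.decompress(lossless) == frame

# Lossy: quantize each sample down to a multiple of 16 first.
# The quantized data compresses better, but the original
# sample values are gone for good.
quantized = bytes(v - v % 16 for v in frame)
lossy = zlib.compress(quantized)

print(len(frame), len(lossless), len(lossy))
```

&lt;p&gt;Real lossy codecs are far smarter about which information to discard, but the trade-off is the same: smaller files in exchange for an irreversible loss of detail.&lt;/p&gt;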

&lt;p&gt;There are many different lossy video codecs available today, each with its own strengths and weaknesses. Some of the most popular codecs include H.264, H.265 (also known as HEVC), VP9, and AV1. H.264 is currently the most widely used codec and is supported by most devices and platforms. It provides good compression while maintaining high-quality video playback. &lt;/p&gt;

&lt;p&gt;Let's take H.264 as an example and explain how its algorithms actually work:&lt;/p&gt;

&lt;p&gt;H.264 uses a &lt;strong&gt;block-oriented motion compensation&lt;/strong&gt; algorithm. This means that it divides each frame into small blocks called &lt;strong&gt;macroblocks&lt;/strong&gt; and then predicts the movement of these blocks in the next frame. By predicting the movement of these blocks, H.264 can compress the video data more efficiently.&lt;/p&gt;
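&lt;p&gt;As a drastically simplified sketch, here is block matching on a hypothetical one-dimensional “frame” in Python (an illustration of the idea only, not the real H.264 search): for each block of the current frame we find the position in the previous frame with the lowest sum of absolute differences (SAD), and only that offset plus a small residual needs to be stored:&lt;/p&gt;

```python
# Toy 1-D motion estimation (illustration only; real H.264 uses
# 2-D macroblocks and much smarter search strategies).
BLOCK = 4

def sad(a, b):
    # Sum of absolute differences between two equal-length blocks.
    return sum(abs(x - y) for x, y in zip(a, b))

def motion_vectors(prev, cur, search=2):
    vectors = []
    for start in range(0, len(cur), BLOCK):
        block = cur[start:start + BLOCK]
        # Clamp the search window to the frame boundaries.
        lo = max(0, start - search)
        hi = min(len(prev) - BLOCK, start + search)
        # Pick the reference position with the lowest SAD cost.
        best = min(range(lo, hi + 1),
                   key=lambda p: sad(block, prev[p:p + BLOCK]))
        vectors.append(best - start)  # motion vector for this block
    return vectors

prev = [10, 10, 50, 60, 70, 80, 10, 10]
cur = [50, 60, 70, 80, 10, 10, 10, 10]  # content shifted by two samples
print(motion_vectors(prev, cur))
```

&lt;p&gt;Storing a handful of offsets is far cheaper than storing every pixel of every frame, which is where most of the compression comes from.&lt;/p&gt;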

&lt;p&gt;In a little more detail:&lt;/p&gt;

&lt;p&gt;The entropy coding algorithm used in H.264 is called &lt;strong&gt;CABAC (Context-adaptive binary arithmetic coding).&lt;/strong&gt; CABAC is used to compress the video data by encoding the symbols that represent the video frames. The symbols are typically pixel values or motion vectors that describe how the pixels move from one frame to another. CABAC uses adaptive probability estimation to adjust the probabilities of symbols based on the context in which they appear. This means that as more data is compressed, CABAC can adjust its probabilities to better reflect the actual distribution of symbols in the data stream. This allows CABAC to achieve better compression than other entropy coding algorithms that use fixed probabilities. &lt;/p&gt;
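&lt;p&gt;The adaptive idea can be sketched with a toy binary model in Python (an illustration of adaptive probability estimation, not real CABAC): the model’s estimate of the probability of a 1-bit is updated after every symbol, so a skewed stream costs far fewer bits than a fixed 50/50 model would:&lt;/p&gt;

```python
import math

def code_cost(bits, adaptive=True):
    """Ideal cost in bits of encoding the stream under the model."""
    ones, total, cost = 1, 2, 0.0  # Laplace-smoothed symbol counts
    for b in bits:
        p_one = ones / total if adaptive else 0.5
        p = p_one if b else (1 - p_one)
        cost += -math.log2(p)  # information content of this symbol
        ones += b              # update the model as data arrives
        total += 1
    return cost

stream = [1] * 90 + [0] * 10  # heavily skewed toward 1s
print(code_cost(stream, adaptive=False))  # fixed model: 100.0 bits
print(round(code_cost(stream), 1))        # adaptive model: far fewer
```

&lt;p&gt;Real CABAC adds context modeling (different probability estimates for different kinds of symbols) and binary arithmetic coding on top of this, but the adaptivity shown here is the core of its advantage over fixed-probability coders.&lt;/p&gt;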

&lt;p&gt;The same process occurs in reverse when decoding an H.264 video.&lt;/p&gt;

&lt;p&gt;H.265 is a newer lossy codec that provides even better compression than H.264 but requires more processing power to decode. VP9 is an open-source codec developed by Google that provides high-quality video playback while using less bandwidth than other codecs. AV1 is a newer codec that provides even better compression than VP9 but is not yet widely supported by devices and platforms.&lt;/p&gt;

&lt;p&gt;Lossless codecs, on the other hand, retain all of the original data after decompression. However, lossless codecs, and even high-quality production codecs like Apple’s ProRes, can’t compress video to bitrates low enough for live web streaming. In contrast, a lossy codec compresses a file by permanently discarding data, especially data that is redundant.&lt;/p&gt;

&lt;p&gt;Lossless compression is also used in cases where it is important that the original and the decompressed data be identical, or where deviations from the original would be unacceptable, as in the production stage of videos and movies.&lt;/p&gt;

&lt;h2&gt;
  
  
  Video formats: not just .mp4
&lt;/h2&gt;

&lt;p&gt;Video formats are more than just a file’s extension, like the .mp4 in Video.mp4. They include a whole package of files that make up the video stream, audio stream, and any metadata included with the file. All of this data is read by a video player to stream video content for playback.&lt;/p&gt;

&lt;p&gt;The video stream includes the data necessary for motion video playback, while the audio stream includes any data related to sound. Metadata is any data outside of the audio and video streams themselves, including bitrate, resolution, subtitles, device type, and date of creation.&lt;/p&gt;

&lt;p&gt;In addition to standard metadata fields, MP4 files can also contain Extensible Metadata Platform (XMP) metadata. XMP metadata is a standard for embedding metadata in digital files that was developed by Adobe Systems. It allows you to add custom metadata fields to your video files that can be used to store additional information about the video.&lt;/p&gt;

&lt;p&gt;Some examples of XMP metadata fields that can be added to MP4 files include camera settings, location data, and copyright information. This information can be useful for video production and editing. For example, if you are working on a video project that includes footage from multiple cameras, you can use XMP metadata to keep track of which camera was used for each shot.&lt;/p&gt;

&lt;p&gt;There are many different video formats available, each with its own advantages and disadvantages. Some of the most common video formats include:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;MP4&lt;/strong&gt;: This is a popular format that is widely supported by most devices and platforms. It most commonly uses the H.264 codec for video compression and can store both audio and video data.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;AVI&lt;/strong&gt;: This format is widely used on Windows-based systems. It uses a variety of codecs for video compression, including DivX and XviD.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;WMV&lt;/strong&gt;: This format was developed by Microsoft and is commonly used for streaming video on the web. It uses the Windows Media Video codec for compression.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;MOV&lt;/strong&gt;: This format was developed by Apple and is commonly used on Mac-based systems. It uses the H.264 and ProRes codecs for video compression and can store both audio and video data.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Choosing the right video format depends on your specific needs. If you want a format that is widely supported by most devices and platforms, MP4 is a good choice. If you are working on a Windows-based system, AVI may be appropriate, while MOV fits naturally into Apple-based workflows. For streaming video on the web, MP4 with H.264 remains the safest option, with WMV mainly relevant in Microsoft-centric pipelines.&lt;/p&gt;

&lt;h2&gt;
  
  
  Bit depth and rate…
&lt;/h2&gt;

&lt;p&gt;In video files, &lt;strong&gt;bit depth&lt;/strong&gt; and &lt;strong&gt;color depth&lt;/strong&gt; are two related concepts that are often confused with each other. &lt;strong&gt;Bit depth&lt;/strong&gt; refers to the number of bits used to represent each color channel of a pixel: the higher the bit depth, the more shades each channel can represent. &lt;strong&gt;Color depth&lt;/strong&gt;, on the other hand, refers to the total number of bits used per pixel, measured in bits per pixel (bpp), which determines how many distinct colors a pixel can display.&lt;/p&gt;

&lt;p&gt;For example, a video file with a bit depth of 8 bits per channel across three channels (red, green, and blue) has a color depth of 24 bpp, meaning each pixel can display up to about 16.7 million colors.&lt;/p&gt;
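&lt;p&gt;The arithmetic behind those numbers is easy to check in Python:&lt;/p&gt;

```python
# Colors representable at a given bit depth per channel.
def color_count(bits_per_channel, channels=3):
    color_depth = bits_per_channel * channels  # bits per pixel (bpp)
    return 2 ** color_depth

print(color_count(8))   # 24 bpp: 16777216, about 16.7 million colors
print(color_count(10))  # 30 bpp: over a billion colors (used in HDR video)
```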

&lt;p&gt;Now on to &lt;strong&gt;bitrate&lt;/strong&gt;: the rate at which bits are processed over time, usually measured in bits per second. The higher the bitrate, the more data per second of video, which generally means higher quality but also a larger file. When choosing a bitrate, it is important to consider the devices your videos will be played on: high-bitrate streams need significant processing power to encode and a fast internet connection to download for playback. There are two main types of bitrate: &lt;strong&gt;CBR (constant bitrate)&lt;/strong&gt;, which keeps the data rate fixed throughout the video, and &lt;strong&gt;VBR (variable bitrate)&lt;/strong&gt;, which spends more bits on complex scenes and fewer on simple ones.&lt;/p&gt;

&lt;p&gt;For example, if you have a video with a bitrate of 10 Mbps (megabits per second), it means that 10 million bits are transmitted every second.&lt;/p&gt;
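&lt;p&gt;Bitrate also tells you roughly how large a file will be; a quick sketch of the calculation:&lt;/p&gt;

```python
# Approximate video data size from bitrate and duration.
def file_size_mb(bitrate_mbps, seconds):
    megabits = bitrate_mbps * seconds
    return megabits / 8  # 8 bits per byte

# A 60-second clip at 10 Mbps is about 75 MB of video data
# (audio and container overhead come on top of this).
print(file_size_mb(10, 60))  # 75.0
```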

&lt;p&gt;One practical application of your new understanding of bitrate is making sure the videos you upload to a specific platform meet its minimum and maximum bitrate requirements. This is important when exporting from programs such as Adobe's Premiere Pro or Apple's Final Cut Pro to obtain the best results.&lt;/p&gt;

&lt;p&gt;Here is a brief guide on the bitrates required for optimal streaming performance on Instagram, TikTok, and YouTube:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Instagram&lt;/strong&gt;: Instagram recommends a bitrate of 3-5 Mbps for optimal streaming performance.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;TikTok&lt;/strong&gt;: TikTok recommends a bitrate of 2.5-3.5 Mbps for optimal streaming performance, with a maximum of 8 Mbps.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;YouTube&lt;/strong&gt;: YouTube recommends a bitrate of 8 Mbps for 1080p video at 30 frames per second (fps) and 12 Mbps for 1080p video at 60 fps, with a maximum of 50 Mbps at 1080p 60 fps or 68 Mbps at 4K.&lt;/li&gt;
&lt;/ol&gt;
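&lt;p&gt;These recommendations are easy to capture as a lookup table; a small Python sketch (the ranges mirror the list above, and should be re-checked against each platform’s current documentation before relying on them):&lt;/p&gt;

```python
# Recommended bitrate ranges in Mbps as (min, max), per the guide above.
RECOMMENDED = {
    "instagram": (3, 5),
    "tiktok": (2.5, 8),
    "youtube_1080p30": (8, 50),
    "youtube_1080p60": (12, 50),
}

def within_range(platform, bitrate_mbps):
    lo, hi = RECOMMENDED[platform]
    # Clamping trick: the value is in range iff clamping leaves it unchanged.
    return max(lo, min(bitrate_mbps, hi)) == bitrate_mbps

print(within_range("tiktok", 3.0))   # True
print(within_range("instagram", 9))  # False
```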

&lt;p&gt;Please note that these are general recommendations and the actual bitrate required may vary depending on the video content and other factors.&lt;/p&gt;

&lt;h2&gt;
  
  
  Playback Techniques for Video and Livestreaming
&lt;/h2&gt;

&lt;p&gt;When you’re on the viewing end of the video streaming pipeline, there are a few things to take into account:&lt;/p&gt;

&lt;p&gt;Video playback and livestreaming are two different methods of viewing video content. Video playback refers to the process of downloading a video file and playing it back on your device. This is the traditional method of watching videos online. Livestreaming, on the other hand, refers to the process of broadcasting live video content over the internet in real time. This allows viewers to watch events as they happen.&lt;/p&gt;

&lt;p&gt;There are several formats available for live streaming. One popular format that supports adaptive bitrate (ABR) is HTTP Live Streaming (HLS), which enables high-quality streaming with small HTTP-based file segments. The key to this format’s popularity is its use of HTTP, the universal transport protocol of the Internet. This allows for reduced infrastructure costs, greater reach, and simple HTML5 player implementation.&lt;/p&gt;

&lt;p&gt;Another popular format is Real-Time Messaging Protocol (RTMP), which is a proprietary protocol developed by Adobe Systems for streaming audio, video, and data over the internet between a server and a client.&lt;/p&gt;

&lt;p&gt;A great feature of the HLS format is the M3U8 playlist (manifest) file. It allows the client or device to automatically select the best quality stream at any time to prevent playback buffering, regardless of bandwidth or CPU power. This means that if your internet connection slows down or speeds up, the stream will automatically adjust to ensure that you have a smooth viewing experience.&lt;/p&gt;
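&lt;p&gt;For example, a minimal HLS master playlist simply lists the variant streams a player can switch between, each tagged with its bandwidth and resolution (the URIs below are placeholders):&lt;/p&gt;

```
#EXTM3U
#EXT-X-STREAM-INF:BANDWIDTH=800000,RESOLUTION=640x360
360p/playlist.m3u8
#EXT-X-STREAM-INF:BANDWIDTH=2500000,RESOLUTION=1280x720
720p/playlist.m3u8
#EXT-X-STREAM-INF:BANDWIDTH=5000000,RESOLUTION=1920x1080
1080p/playlist.m3u8
```

&lt;p&gt;The player starts with one variant and moves up or down the list as its measured bandwidth changes.&lt;/p&gt;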

&lt;p&gt;Wow, it turns out that the ultimate guide to video is a bit more complex than anticipated 😅🙃.&lt;/p&gt;

&lt;p&gt;Congratulations on understanding all the nuances of digital video! It’s no small feat to have a good grasp of such a complex topic. You should be proud of yourself!&lt;/p&gt;

&lt;p&gt;If you’re looking for a way to simplify your video hosting, encoding, transcoding, and streaming pipeline, I recommend using &lt;a href="http://api.video"&gt;api.video&lt;/a&gt;. This platform makes it easy to manage your online video needs. Upload your videos, encode them into multiple formats and resolutions, and stream them to your viewers all from one place. Plus, &lt;a href="http://api.video"&gt;api.video&lt;/a&gt; offers a variety of features such as analytics and security options to help you get the most out of your video content.&lt;/p&gt;

&lt;p&gt;Whether you’re a seasoned video professional or just starting out, &lt;a href="http://api.video"&gt;api.video&lt;/a&gt; is the perfect solution for simplifying your video workflow.&lt;/p&gt;

</description>
      <category>onlinevideo</category>
      <category>codec</category>
      <category>beginners</category>
      <category>web</category>
    </item>
    <item>
      <title>Unleashing the power of python and FFMPEG: Extracting and stitching video frames with ease</title>
      <dc:creator>Sebastian Marin</dc:creator>
      <pubDate>Thu, 25 May 2023 15:44:56 +0000</pubDate>
      <link>https://forem.com/api_video/unleashing-the-power-of-python-and-ffmpeg-extracting-and-stitching-video-frames-with-ease-2ded</link>
      <guid>https://forem.com/api_video/unleashing-the-power-of-python-and-ffmpeg-extracting-and-stitching-video-frames-with-ease-2ded</guid>
      <description>&lt;h3&gt;
  
  
  TL;DR:
&lt;/h3&gt;

&lt;p&gt;Learn how to harness the capabilities of Python and the FFMPEG library to effortlessly extract frames from a video at regular intervals. Explore the step-by-step process, from leveraging the &lt;strong&gt;&lt;code&gt;ffmpeg-python&lt;/code&gt;&lt;/strong&gt; module to extracting frames, and discover how to seamlessly stitch them back together into a dynamic video sequence. Unleash your creativity by automating video skimming and programmatically creating video trailers with just a few lines of code.&lt;/p&gt;

&lt;h3&gt;
  
  
  Unlocking New Possibilities
&lt;/h3&gt;

&lt;p&gt;Have you ever wondered how you can extract frames from a video at regular intervals using Python and FFMPEG? Well, wonder no more! In this article, I will delve into the exciting world of video processing and show you how to effortlessly extract a list of frames from a video file. This powerful technique not only enables you to perform a quick skim of the video but also opens up a world of possibilities, such as creating a captivating trailer for your video programmatically.&lt;/p&gt;

&lt;p&gt;By utilising the robust capabilities of Python and the versatile FFMPEG library, you can automate the extraction process and obtain frames evenly spaced throughout the video duration. Whether you're a video enthusiast looking to explore the content of a lengthy recording or a content creator aiming to generate engaging teasers, this technique will undoubtedly come in handy.&lt;/p&gt;

&lt;p&gt;Once you've extracted the frames, you might be wondering, "What can I do with them?" Well, the possibilities are endless! You can reassemble the frames back into a condensed video, allowing you to create a visually stunning trailer that captures the essence of the original footage. Imagine the convenience of automating this process instead of manually sifting through hours of video content to find the most compelling moments. With Python and FFMPEG, you have the power to programmatically curate a captivating preview, saving you time and effort.&lt;/p&gt;

&lt;p&gt;In this article, I'll guide you through the step-by-step process of extracting frames from a video using Python and FFMPEG. I'll cover the necessary installations and setup, dive into the code implementation, and explore additional customisation options to suit your specific needs. So, whether you're a beginner seeking to expand your Python skills or an experienced developer looking for an efficient video processing solution, this tutorial is tailored just for you.&lt;/p&gt;

&lt;h3&gt;
  
  
  Extract Images From Your Video
&lt;/h3&gt;

&lt;p&gt;To extract images from your video, we will utilize the powerful &lt;strong&gt;&lt;code&gt;ffmpeg-python&lt;/code&gt;&lt;/strong&gt; module. This module provides a convenient Python interface to interact with the FFMPEG library, enabling us to perform various video processing tasks with ease.&lt;/p&gt;

&lt;p&gt;To get started, you need to ensure that &lt;strong&gt;&lt;code&gt;ffmpeg-python&lt;/code&gt;&lt;/strong&gt; is installed in your Python environment. If it's not installed yet, you can easily install it by running the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="n"&gt;pip&lt;/span&gt; &lt;span class="n"&gt;install&lt;/span&gt; &lt;span class="n"&gt;ffmpeg&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;python&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Once &lt;strong&gt;&lt;code&gt;ffmpeg-python&lt;/code&gt;&lt;/strong&gt; is installed, let's explore the process of extracting images from a video at specific intervals, as illustrated in the code snippet below:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="nn"&gt;ffmpeg&lt;/span&gt;

&lt;span class="n"&gt;YOUR_FILE&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s"&gt;'sample-mov-file.mov'&lt;/span&gt;
&lt;span class="n"&gt;probe&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;ffmpeg&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;probe&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;YOUR_FILE&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;time&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nb"&gt;float&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;probe&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s"&gt;'streams'&lt;/span&gt;&lt;span class="p"&gt;][&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;][&lt;/span&gt;&lt;span class="s"&gt;'duration'&lt;/span&gt;&lt;span class="p"&gt;])&lt;/span&gt; &lt;span class="o"&gt;//&lt;/span&gt; &lt;span class="mi"&gt;2&lt;/span&gt;
&lt;span class="n"&gt;width&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;probe&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s"&gt;'streams'&lt;/span&gt;&lt;span class="p"&gt;][&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;][&lt;/span&gt;&lt;span class="s"&gt;'width'&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;

&lt;span class="c1"&gt;# Set how many spots you want to extract a video from.
&lt;/span&gt;&lt;span class="n"&gt;parts&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;7&lt;/span&gt;

&lt;span class="n"&gt;intervals&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;time&lt;/span&gt; &lt;span class="o"&gt;//&lt;/span&gt; &lt;span class="n"&gt;parts&lt;/span&gt;
&lt;span class="n"&gt;intervals&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nb"&gt;int&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;intervals&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;interval_list&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[(&lt;/span&gt;&lt;span class="n"&gt;i&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="n"&gt;intervals&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;i&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="n"&gt;intervals&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;i&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="nb"&gt;range&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;parts&lt;/span&gt;&lt;span class="p"&gt;)]&lt;/span&gt;
&lt;span class="n"&gt;i&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;

&lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;item&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="n"&gt;interval_list&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
    &lt;span class="p"&gt;(&lt;/span&gt;
        &lt;span class="n"&gt;ffmpeg&lt;/span&gt;
        &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nb"&gt;input&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;YOUR_FILE&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;ss&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;item&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;])&lt;/span&gt;
        &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nb"&gt;filter&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;'scale'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;width&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
        &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;output&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;'Image'&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="nb"&gt;str&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;i&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="s"&gt;'.jpg'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;vframes&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
        &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;run&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
    &lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="n"&gt;i&lt;/span&gt; &lt;span class="o"&gt;+=&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Let's break down the code snippet step by step:
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;Firstly, import the &lt;strong&gt;&lt;code&gt;ffmpeg&lt;/code&gt;&lt;/strong&gt; module, which provides the necessary video processing capabilities.&lt;/li&gt;
&lt;li&gt;In the &lt;strong&gt;&lt;code&gt;YOUR_FILE&lt;/code&gt;&lt;/strong&gt; variable, specify the name of the video file from which you want to extract images. Ensure that the video file is located in the same folder as the code sample.&lt;/li&gt;
&lt;li&gt;The &lt;strong&gt;&lt;code&gt;ffmpeg.probe&lt;/code&gt;&lt;/strong&gt; function is utilized to retrieve essential information about the video. By accessing properties like &lt;strong&gt;&lt;code&gt;'duration'&lt;/code&gt;&lt;/strong&gt; and &lt;strong&gt;&lt;code&gt;'width'&lt;/code&gt;&lt;/strong&gt; from the probe, we can determine the video's duration and width.&lt;/li&gt;
&lt;li&gt;Define the &lt;strong&gt;&lt;code&gt;parts&lt;/code&gt;&lt;/strong&gt; variable to represent the desired number of intervals from which you want to extract images.&lt;/li&gt;
&lt;li&gt;Calculate the &lt;strong&gt;&lt;code&gt;intervals&lt;/code&gt;&lt;/strong&gt; value by dividing the video's duration by the number of parts and converting it to an integer.&lt;/li&gt;
&lt;li&gt;Create the &lt;strong&gt;&lt;code&gt;interval_list&lt;/code&gt;&lt;/strong&gt; using list comprehension, generating tuples that specify the start and end time for each interval.&lt;/li&gt;
&lt;li&gt;Iterate over each interval in the &lt;strong&gt;&lt;code&gt;interval_list&lt;/code&gt;&lt;/strong&gt; using a loop.&lt;/li&gt;
&lt;li&gt;Within the loop, utilize the &lt;strong&gt;&lt;code&gt;ffmpeg.input&lt;/code&gt;&lt;/strong&gt; function to specify the input file, with the &lt;strong&gt;&lt;code&gt;ss&lt;/code&gt;&lt;/strong&gt; parameter set to the end time of the current interval (&lt;strong&gt;&lt;code&gt;item[1]&lt;/code&gt;&lt;/strong&gt;).&lt;/li&gt;
&lt;li&gt;Apply the &lt;strong&gt;&lt;code&gt;.filter&lt;/code&gt;&lt;/strong&gt; method to scale the output image, ensuring the width remains consistent while automatically adjusting the height to maintain the aspect ratio.&lt;/li&gt;
&lt;li&gt;Use the &lt;strong&gt;&lt;code&gt;.output&lt;/code&gt;&lt;/strong&gt; method to define the output file name for the extracted frame. In this example, the format &lt;strong&gt;&lt;code&gt;'Image' + str(i) + '.jpg'&lt;/code&gt;&lt;/strong&gt; is employed, where &lt;strong&gt;&lt;code&gt;i&lt;/code&gt;&lt;/strong&gt; represents the index of the image.&lt;/li&gt;
&lt;li&gt;Finally, execute the extraction process by calling &lt;strong&gt;&lt;code&gt;.run()&lt;/code&gt;&lt;/strong&gt;. The loop will iterate to the next interval accordingly.&lt;/li&gt;
&lt;/ol&gt;
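&lt;p&gt;The interval arithmetic in steps 4-6 is easy to verify on its own in plain Python (using a hypothetical 70-second duration):&lt;/p&gt;

```python
# Reproduce the interval computation from the snippet above.
def make_intervals(duration, parts):
    step = int(duration // parts)
    return [(i * step, (i + 1) * step) for i in range(parts)]

# A 70-second video split into 7 parts yields one frame every 10 seconds.
print(make_intervals(70, 7))
```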

&lt;p&gt;To assemble the extracted images back into a video sequence, consider the following code snippet:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="nn"&gt;ffmpeg&lt;/span&gt;

&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="n"&gt;ffmpeg&lt;/span&gt;
    &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nb"&gt;input&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;'Image%d.jpg'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;framerate&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;output&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;'output.mp4'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;run&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This code snippet compiles the sequence of extracted frames named &lt;strong&gt;&lt;code&gt;'Image%d.jpg'&lt;/code&gt;&lt;/strong&gt;, where &lt;strong&gt;&lt;code&gt;%d&lt;/code&gt;&lt;/strong&gt; represents the frame number. It then creates a new video file named &lt;strong&gt;&lt;code&gt;'output.mp4'&lt;/code&gt;&lt;/strong&gt; with a frame rate of 1 frame per second, effectively assembling the extracted images into a video sequence.&lt;/p&gt;

&lt;p&gt;Now that you have the code snippet at your disposal, you can start extracting frames from your videos programmatically. Experiment with different frame rates and explore the exciting possibilities of using these extracted frames to create engaging visual content, such as video trailers or quick skims of your videos.&lt;/p&gt;

&lt;p&gt;Get ready to unlock the potential of video frame extraction with Python and FFMPEG. Discover the magic of transforming videos into dynamic visual experiences that leave a lasting impression on your audience.&lt;/p&gt;

</description>
      <category>webdev</category>
      <category>python</category>
      <category>tutorial</category>
    </item>
    <item>
      <title>New Video Uploader JavaScript Library</title>
      <dc:creator>api.video</dc:creator>
      <pubDate>Thu, 12 Aug 2021 00:00:00 +0000</pubDate>
      <link>https://forem.com/api_video/new-video-uploader-javascript-library-8ch</link>
      <guid>https://forem.com/api_video/new-video-uploader-javascript-library-8ch</guid>
      <description>&lt;p&gt;Uploading a video to api.video just got easier! &lt;/p&gt;

&lt;p&gt;We’ve written a &lt;a href="https://api.video/blog/endpoints/video-upload"&gt;number of posts&lt;/a&gt; on how to upload videos to api.video. We have launched demos at &lt;a href="https://upload.a.video"&gt;upload.a.video&lt;/a&gt; and &lt;a href="https://privatelyupload.a.video/"&gt;privatelyupload.a.video&lt;/a&gt; on how to securely upload videos from the browser. &lt;/p&gt;

&lt;p&gt;In the two demos above, we used JavaScript to select the video file, slice it into smaller pieces, and then upload the file to api.video. &lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;We just made it even easier! &lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;We have created a &lt;a href="https://docs.api.video/docs/video-uploader"&gt;JavaScript uploader&lt;/a&gt; script to abstract this code into a single function - allowing you to focus on your code and making the video upload process easier for you and your development team! &lt;/p&gt;

&lt;h2&gt;
  
  
  JS Uploader Widget
&lt;/h2&gt;

&lt;p&gt;You can read about uploading a video in the &lt;a href="https://api.video/blog/tutorials/uploading-large-files-with-javascript"&gt;tutorial&lt;/a&gt; describing the code. The JS Uploader widget abstracts slicing the file, creating all of the headers, and uploading the pieces - it also adds retries for segments that have issues! The new code for uploading is under 30 lines, and most of that is for updating the page during the upload. First, we add the script to the head (with a defer attribute so it does not slow the page load):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;lt;script src="https://unpkg.com/@api.video/video-uploader" defer&amp;gt;&amp;lt;/script&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Next, we add an event listener for changes in the file upload form:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;input.addEventListener('change', () =&amp;gt; { console.log("upload commencing"); 
uploadFile(input.files[0]); 
function uploadFile(files) { const uploader = new VideoUploader({ file: input.files[0], 
//changed to sandbox, because we cannot have nice things 
uploadToken: "to5PoOjCz98FdLGnsrFflnYo", 
chunkSize: 1024\*1024\*10, 
// 10MB 
retries: 10 }); 
uploader.onProgress((event) =&amp;gt; { 
var percentComplete = Math.round(event.currentChunkUploadedBytes / event.chunksBytes \* 100); 
var totalPercentComplete = Math.round(event.uploadedBytes / event.totalBytes \* 100); 
document.getElementById("chunk-information").innerHTML = "Chunk # " + event.currentChunk + " is " + percentComplete + "% uploaded. Total uploaded: " + totalPercentComplete +"%";
 }) 
uploader.upload() 
.then((video) =&amp;gt; { 
console.log(video); 
playerUrl = video.assets.player; 
console.log("all uploaded! Watch here: ",playerUrl ) ; 
document.getElementById("video-information").innerHTML = "all uploaded! Watch the video [here](%5C'%22)" ; }); 
} 
}); 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Walking through the code
&lt;/h3&gt;

&lt;p&gt;The uploadFile function takes the selected file and creates an uploader instance:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;const uploader = new VideoUploader({ 
file: input.files[0], 
//changed to sandbox, because we cannot have nice things 
uploadToken: "to5PoOjCz98FdLGnsrFflnYo", 
chunkSize: 1024\*1024\*10, 
// 10MB 
retries: 10 }); 

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The chunkSize defaults to 50MB, but I have lowered it to 10MB for the sake of the demo, so you can see multiple chunks uploaded. The retry count defaults to 5. &lt;/p&gt;

&lt;h4&gt;
  
  
  During Upload
&lt;/h4&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;uploader.onProgress((event) =&amp;gt; { var percentComplete = Math.round(event.currentChunkUploadedBytes / event.chunksBytes \* 100); 
var totalPercentComplete = Math.round(event.uploadedBytes / event.totalBytes \* 100); 
document.getElementById("chunk-information").innerHTML = "Chunk # " + event.currentChunk + " is " + percentComplete + "% uploaded. Total uploaded: " + totalPercentComplete +"%";
}) 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The onProgress callback updates us on the video upload every 50ms. We calculate the % uploaded and update the browser - so the user is aware that the video upload is underway. &lt;/p&gt;

&lt;h4&gt;
  
  
  Upload complete
&lt;/h4&gt;

&lt;p&gt;When the video upload is complete, the API will reply with the videoId, and links to the video. We take this information to display the url to the user:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
uploader.upload()
  .then((video) =&amp;gt; {
    console.log(video);
    playerUrl = video.assets.player;
    console.log("all uploaded! Watch here: ", playerUrl);
    document.getElementById("video-information").innerHTML = "all uploaded! Watch the video &amp;lt;a href='" + playerUrl + "'&amp;gt;here&amp;lt;/a&amp;gt;";
  });

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;You can try out our library at &lt;a href="https://upload.a.video/JS.html"&gt;upload.a.video/JS.html&lt;/a&gt;. The new JS video uploader widget weighs in at just 11.5 KB, and as it is not required until after the page loads, it will not impact your page speed (as long as you use the defer attribute). It will make building video upload widgets in JavaScript easier, as we've abstracted all the complexity into the library - allowing you to focus on your website, and not on the intricacies of partial uploads of video files. Read all of the &lt;a href="https://docs.api.video/docs/video-uploader"&gt;technical details&lt;/a&gt; in our documentation, and get started with your video app uploads today!&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Why We Built Our Own CDN</title>
      <dc:creator>api.video</dc:creator>
      <pubDate>Tue, 03 Aug 2021 00:00:00 +0000</pubDate>
      <link>https://forem.com/api_video/why-we-built-our-own-cdn-44bo</link>
      <guid>https://forem.com/api_video/why-we-built-our-own-cdn-44bo</guid>
      <description>&lt;h2&gt;
  
  
  Video assets are hard to cache and manage efficiently
&lt;/h2&gt;

&lt;p&gt;A smart, distributed cache layer that extracts as much data as possible from storage to manage video assets and improve playback latency is a technical challenge that none of the major players in the market (Akamai, Fastly, CloudFront, CloudFlare, CDNetworks, etc) have fully responded to. That's because their infrastructure and networks must respond to a large market and multiple use cases. &lt;/p&gt;

&lt;p&gt;At api.video, our use case is video. We are focused on building a high-end, high-performance and smart private CDN to specifically manage video assets, powered by a real-time data pipeline composed of RUM (Real User Monitoring) data, customer analytics data and measurement of the health of the whole infrastructure that make it up. &lt;/p&gt;

&lt;h2&gt;
  
  
  But why is it so hard to manage videos?
&lt;/h2&gt;

&lt;p&gt;Video is the most consumed and fastest growing asset in the world. According to &lt;a href="https://www.techsmith.com/blog/video-for-internal-communication/#:~:text=In%20our%20research%2C%20we%20found,who%20prefer%20text%2Dbased%20email"&gt;Techsmith&lt;/a&gt;, it's preferred over any other communication method, including text or images. Smartphone cameras also improve rapidly every year: the latest models already support 8K resolution with HDR, and video files get bigger and bigger. &lt;/p&gt;

&lt;p&gt;To give you an idea of a video file's size from your favorite smartphone: &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;1 minute footage of 1080p/30fps HDR is around 100MB with h264 compression &lt;/li&gt;
&lt;li&gt;1 minute footage of 1080p/60fps HDR is around 200MB with h264 compression &lt;/li&gt;
&lt;li&gt;1 minute footage of 4K/30fps with HDR is around 400MB with h264 compression &lt;/li&gt;
&lt;li&gt;1 minute footage of 4K/60fps HDR is around 600MB with h264 compression &lt;/li&gt;
&lt;li&gt;1 minute footage of 8K/24fps HDR is around 800MB with h265 compression &lt;/li&gt;
&lt;/ul&gt;
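&lt;p&gt;To make these numbers concrete, they translate directly into average bitrates. A quick back-of-envelope conversion (this helper is purely illustrative, not from the original post):&lt;/p&gt;

```javascript
// Rough conversion from "MB per minute of footage" to an average bitrate in Mbps.
// 1 MB = 8 megabits, and 1 minute = 60 seconds.
function mbPerMinuteToMbps(mbPerMinute) {
  return (mbPerMinute * 8) / 60;
}

console.log(mbPerMinuteToMbps(100).toFixed(1)); // 1080p/30fps HDR: 13.3 Mbps
console.log(mbPerMinuteToMbps(600).toFixed(1)); // 4K/60fps HDR: 80.0 Mbps
```

&lt;p&gt;So 1080p/30fps works out to roughly 13 Mbps on average, and 4K/60fps to roughly 80 Mbps - sustained, per viewer, across the whole delivery chain.&lt;/p&gt;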

&lt;p&gt;Now that you have this in mind, scale these numbers to 10 minutes or even 1 hour of video footage and think back decades to the heyday of the DivX format, when an entire movie was stored on a 700MB disk.🤯 &lt;/p&gt;

&lt;p&gt;With the massive deployment of optical fibers and the 5G network deployment all around the world, Ultra High Definition will become more popular and super easy and fast to upload. &lt;/p&gt;

&lt;p&gt;So let's get back to our concern with caching content on the edge and now I think you're starting to see the problem... &lt;/p&gt;

&lt;p&gt;Think about what happens if you have several thousand videos ranging from a few minutes to an hour long that you need to stream all over the world. Now imagine there are many others like you, doing the same thing with their own videos. Public CDNs must also cache other files that travel over the Internet (images, HTML content, JS code, CSS files, etc.) and since they deal with many different assets, they must build and provide an infrastructure capable of handling any kind of content and can not be deeply specialized in just one. Building a CDN specializing in video processing therefore requires a huge storage capacity with adequate hardware and network and also smart caching policies. &lt;/p&gt;

&lt;p&gt;All major public CDNs to date rely on SSDs (Solid State Drive) for data caching. Their latency times can reach up to 200 µs. Some of them also support NVMe (Non-Volatile Memory Express) SSDs for caching, lowering access latencies to around 100 µs. These latencies may seem very low at first glance, but in the context of providing an ultra-low latency, end-to-end live streaming service, the smallest amount of latency added to one of the stages has a cascade effect on all following steps. &lt;/p&gt;

&lt;p&gt;The caching servers that rely on SSDs and/or NVMe SSDs are good for assets with long lifespan or with long caching time (for example VOD (Video on Demand)) otherwise you will have to change your SSDs often and that will cost you a lot of money. SSDs are also not suitable for assets that have to be delivered with low latency and a very short lifespan. &lt;/p&gt;

&lt;p&gt;For this use case, to deliver everything super fast, it's best to rely on DRAM memory whose access latency with the CPU takes nanoseconds. &lt;/p&gt;

&lt;h2&gt;
  
  
  Let's get closer
&lt;/h2&gt;

&lt;p&gt;CDN's PoPs (Points of Presence) are the last bricks involved in the dissemination and provision of video assets for reading and reaching the last mile of delivery. &lt;/p&gt;

&lt;p&gt;Having PoPs on IXPs (Internet eXchange Points) is fundamental when serving a worldwide audience. However, it's not enough to prevent loss of latency in milliseconds during the last miles of delivery. Netflix, which offers the best video playback experience today, taught us this lesson. We have to build CDN PoPs on ISPs (Internet Service Providers) or close to them to reach the last miles. With that in mind, the only remaining network to travel is between the ISPs of the end-users and the devices of the end-users. &lt;/p&gt;

&lt;p&gt;As you have probably understood, despite having the best hardware possible, network constraints are an important part (to say the least) of achieving the goal of ultra low latency. Because if you don't know exactly how the Internet works and don't take care about how the different networks exchange traffic between them, you will miss something critical and you will fail. &lt;/p&gt;

&lt;h2&gt;
  
  
  It's all about the peering policy
&lt;/h2&gt;

&lt;p&gt;The internet connects a huge range of very diverse devices around the world (desktop, laptop, mobile, IoT, servers, etc). Most people think of it as a single network, but in reality it is made up of many smaller networks that communicate with each other through different ISPs, and these exchange traffic with larger scale networks. &lt;/p&gt;

&lt;p&gt;These networks have two ways to exchange traffic between them: peering or transit. To simplify a lot, remember that transit is where small networks have to pay to send their traffic to a larger one to send it over the Internet through a contract. On the contrary a peering is a Win-Win commercial contract (in general) where two networks of approximately the same size exchange traffic between them for free. &lt;/p&gt;

&lt;p&gt;There are two types of peering: &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Private peering: a physical link connects the two entities. &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Public peering: several networks provide a single physical link to an IXP which makes it possible to pool the peerings agreements with several entities. &lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In conclusion, the more a telecom operator has peering agreements (private or public), the better it is interconnected with the other networks constituting the Internet and has a direct link with them which has a direct positive impact on traffic latency. &lt;/p&gt;

&lt;p&gt;By relying on high end cloud providers we ensure that their peering policy already plays nice with IXP, ISP and other cloud providers. We negotiate additional peering agreements when needed, according to the audience that we face and by building our own data centers and networks in the future. &lt;/p&gt;

&lt;h2&gt;
  
  
  Manage private content without losing caching ability
&lt;/h2&gt;

&lt;p&gt;If you've played with our API before, you probably know that we provide a best in class feature to let you protect video streaming and control your audience. Our system is the only one in the streaming world that spares developers from having to do anything extra to protect video consumption, because at &lt;a href="http://api.video"&gt;api.video&lt;/a&gt;, we love to make things easier for them. &lt;/p&gt;

&lt;p&gt;As we handle all the complexity on our side, we have to deal with some specific constraints introduced by our system (we love challenges). Each time a private video has to be displayed, we generate a dedicated, unique url for you. It's cool but it prevents most public CDNs from reusing their cache as they rely on the URL to cache and serve the cached content. &lt;/p&gt;

&lt;p&gt;We are able to achieve extremely low latency thanks to a Nginx module that we developed internally. We also achieve low latency by managing and caching assets delivered through our private routing system and reuse them each time private content is consumed with another private token without relying on cookies or another legacy token system that can add overhead and latency. We stay compliant and functional with current privacy policies (blocking of third-party cookies) and keep on top of any new ones implemented in all modern browsers. &lt;/p&gt;

&lt;h2&gt;
  
  
  Smart caching policies
&lt;/h2&gt;

&lt;p&gt;At &lt;a href="http://api.video"&gt;api.video&lt;/a&gt; we like standards and because of that, we rely on standard HTTP headers for all caching rules that are understood by all CDNs and browsers. In case we need to offload some part of the traffic or even all the traffic in case of a total CDN failure to a third party provider, we are not locked by any external providers. This strategy ensures that we never have a total delivery and caching outage at the CDN level. &lt;/p&gt;
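&lt;p&gt;As a rough sketch of what standard, header-driven caching rules can look like for video assets, here is a hypothetical policy keyed on asset type. The function name and the values are illustrative assumptions, not api.video's actual rules:&lt;/p&gt;

```javascript
// Illustrative only: choose a standard Cache-Control header per asset type.
// These values are hypothetical, not api.video's real caching policy.
function cacheControlFor(path) {
  if (path.endsWith(".m3u8")) {
    // HLS playlists change constantly during a live stream, so cache them very briefly
    return "public, max-age=1";
  }
  if (path.endsWith(".ts") || path.endsWith(".mp4")) {
    // media segments never change once written, so they can be cached aggressively
    return "public, max-age=31536000, immutable";
  }
  // anything unexpected is simply not cached
  return "no-store";
}

console.log(cacheControlFor("segment42.ts")); // public, max-age=31536000, immutable
```

&lt;p&gt;Because these are plain HTTP headers, any CDN or browser in the delivery path interprets them the same way - which is exactly what makes offloading traffic to a third party possible.&lt;/p&gt;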

&lt;p&gt;By having our own CDN infrastructure, we can leverage all intelligence and data that we have to cache with the highest efficiency possible, not relying on public CDNs that need to consider all use cases, and make their own cost compromise. If a new technology is available, we can implement it as soon as possible without having to wait for it to be implemented. We can adapt our incident mitigation to re-balance requests on the PoP that we know will be more stable and with less latency while we fix something. We can leverage the knowledge of our own caching infrastructure, we can pinpoint the performance bottlenecks on it and fix it, while with an external solution we would have to wait for it to be eventually put in their roadmap. &lt;/p&gt;

&lt;p&gt;We do all of that to provide you a high-end video platform with the best streaming performance for you and your audiences.&lt;/p&gt;

&lt;p&gt;Authored by Anthony Dantard - CTO&lt;/p&gt;

</description>
    </item>
    <item>
      <title>The Web APIs of record.a.video</title>
      <dc:creator>Doug Sillars</dc:creator>
      <pubDate>Tue, 30 Mar 2021 00:00:00 +0000</pubDate>
      <link>https://forem.com/api_video/the-web-apis-of-record-a-video-2f3</link>
      <guid>https://forem.com/api_video/the-web-apis-of-record-a-video-2f3</guid>
      <description>&lt;p&gt;We’ve just released &lt;a href="https://record.a.video"&gt;record.a.video&lt;/a&gt;, a web application that lets you record and share videos. If that were not enough, you can also livestream. The page works in Chrome, Edge, Firefox Safari (14 and up), and on Android devices. This means that the application will work for about 75% of people using the web today. That’s not great, but since there are several new(ish) APIs in the application, it also isn’t that bad! Here are a list of the major APIs used in record.a.video (don't forget to try it out!). ## ## Video API &lt;/p&gt;

&lt;p&gt;The most important API in record.a.video is &lt;a href="https://api.video"&gt;api.video&lt;/a&gt;. api.video handles all of the video encoding, hosting and delivery. The app simply uploads the recorded video (more on this later), and api.video takes care of the rest! &lt;/p&gt;

&lt;p&gt;For live streaming - again api.video accepts the live stream, and converts it into a live HLS stream for immediate live viewing. &lt;/p&gt;

&lt;p&gt;Without this backbone to handle all of the videos, record.a.video could not exist. But, before the video is uploaded and processed, we are doing a lot of cool stuff in the browser that I find really fascinating. I thought that identifying the APIs and how they work would be of interest to many web developers. &lt;/p&gt;

&lt;h3&gt;
  
  
  Browser APIs
&lt;/h3&gt;

&lt;p&gt;Here are the media APIs I am using to capture video: &lt;/p&gt;

&lt;p&gt;&lt;a href="https://developer.mozilla.org/en-US/docs/Web/API/MediaDevices/getUserMedia"&gt;MediaDevices.getUserMedia&lt;/a&gt; API can connect to the device camera &amp;amp; microphone to record video. I’ve already built a live stream video (read all about it in the &lt;a href="https://api.video/blog/video-trends/live-streaming-a-video-using-just-the-browser"&gt;api.video blog&lt;/a&gt;. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://developer.mozilla.org/en-US/docs/Web/API/Screen_Capture_API"&gt;ScreenCapture&lt;/a&gt; API has excellent desktop support, but no support on mobile devices. &lt;/p&gt;

&lt;p&gt;With these two APIs I can grab the screen and the camera. For my video recording, that’s nearly half the battle. In &lt;a href="https://livestream.a.video"&gt;livestream.a.video&lt;/a&gt; app, we simply took the camera video and streamed it. But now, I want to combine these two feeds and record it for upload. In order to do that, I draw the 2 video feeds to a canvas. Then we record from the canvas using: &lt;/p&gt;

&lt;p&gt;&lt;a href="https://developer.mozilla.org/en-US/docs/Web/API/MediaStream_Recording_API"&gt;Media Recorder API&lt;/a&gt; reads from the canvas, and creates a new video instance that we can save, or stream to the live streaming server. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://developer.mozilla.org/en-US/docs/Web/API/Web_Speech_API"&gt;WebSpeech API&lt;/a&gt; does instantaneous speech recognition - enabling the possibility to add captions to your recording. It is experimental, and in Chrome only. But what I played with it, it was *really* cool, so I added it anyway. Since it is only Chrome, I built the app to gracefully turns off captioning on all the other browsers. &lt;/p&gt;

&lt;h3&gt;
  
  
  Web API Browser support:
&lt;/h3&gt;

&lt;p&gt;With web APIs, it is crucial to understand the landscape of browser support. How well will my app work across the browser landscape? If a critical API is not supported on a browser - that means my application will not work (I'm looking at you MediaRecorder...) &lt;/p&gt;

&lt;p&gt;getUserMedia: getUserMedia works in all devices, so cameras and microphones are fair play for recording! &lt;/p&gt;

&lt;p&gt;ScreenCapture API: Excellent desktop browser support, but zero mobile support. This kind of makes sense - sharing a mobile screen would be weird. So on mobile, the app will only share the camera. &lt;/p&gt;

&lt;p&gt;MediaRecorder/MediaSource: This is where Safari has issues: Desktop: only supported in the newest Safari 14. iOS: no support. Without the ability to record from the canvas - the application just cannot work. I spent a &lt;em&gt;lot of time&lt;/em&gt; debugging this on desktop Safari. :( &lt;/p&gt;

&lt;p&gt;WebSpeech recognition: As previously stated: Chrome only. &lt;/p&gt;

&lt;p&gt;So, The full sharing screen and camera will work on desktops (and Chrome &amp;amp; Edge can add captions). &lt;/p&gt;

&lt;p&gt;On mobile - we can only share the camera. I thought it would be cool to share both the front &amp;amp; rear camera at the same time on mobile...but that does not work. (You can try it yourself at &lt;a href="https://record.a.video/index1.html"&gt;record.a.video/index1.html&lt;/a&gt;.) If that ever becomes available, it will become another really cool application for live video. &lt;/p&gt;

&lt;h2&gt;
  
  
  The future
&lt;/h2&gt;

&lt;p&gt;If Media Recorder support appears in iOS Safari, we can perhaps support iOS devices. But, until that time, the application will not work. &lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;The web APIs in record.a.video are super powerful, but some of them required a slightly different browser implementation. In the coming week, I will be writing up sample implementations, and the workarounds I had to utilize to make these APIs work across the supported browsers.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Building record.a.video part 1: MediaDevices.getUserMedia() for camera and audio recording</title>
      <dc:creator>Doug Sillars</dc:creator>
      <pubDate>Tue, 30 Mar 2021 00:00:00 +0000</pubDate>
      <link>https://forem.com/api_video/building-record-a-video-part-1-mediadevices-getusermedia-for-camera-and-audio-recording-14pn</link>
      <guid>https://forem.com/api_video/building-record-a-video-part-1-mediadevices-getusermedia-for-camera-and-audio-recording-14pn</guid>
      <description>&lt;p&gt;We recently launched &lt;a href="https://record.a.video"&gt;record.a.video&lt;/a&gt;, a new web app that can record your camera and screen right from the browser!  When you are done recording, it uploads the video to &lt;a href="https://api.video"&gt;api.video&lt;/a&gt; to create a link for easy sharing of your video.&lt;/p&gt;

&lt;p&gt;In the process of building this app, I learned a lot about a few web APIs, so I thought I would write a bit more detail in how I used these APIs and how they work.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;In this post, we'll use getUserMedia API to record the user's camera and microphone.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;In &lt;a href="https://api.video/blog/tutorials/building-record-a-video-the-screencapture-api"&gt;post 2&lt;/a&gt;, I discussed recording the screen, using the Screen Capture API.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Using the video streams created in posts 1 &amp;amp; 2, I draw the combined video on a canvas.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;In &lt;a href="https://api.video/blog/tutorials/building-record-a-video-the-mediarecorder-api"&gt;post 3&lt;/a&gt;, I'll discuss the MediaRecorder API, where I create a video stream from the canvas to create the output video.  This output stream feeds into the video upload (for video on demand playback) or the live stream (for immediate and live playback).&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;In &lt;a href="https://api.video/blog/tutorials/building-record-a-video-using-the-webspeech-api-for-live-captioning"&gt;post 4&lt;/a&gt;, I'll discuss the Web Speech API. This API converts audio into text in near real time, allowing to create 'instant' captions for any video being created in the app. This is an experimental API that only works in Chrome, but was so neat, I included it in record.a.video anyway. &lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  MediaDevices.getUserMedia()
&lt;/h2&gt;

&lt;p&gt;Let's start at the beginning. With record.a.video, we grab the camera and microphone inputs from the device to record the video. This API allows the browser to interact with external media devices, and read their output.&lt;/p&gt;

&lt;p&gt;In record.a.video, we enumerate the video and audio inputs to display them as options for the recording:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--rP3SR0XT--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://www.datocms-assets.com/26885/1617096856-screenshot-2021-03-30-at-10-33-48.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--rP3SR0XT--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://www.datocms-assets.com/26885/1617096856-screenshot-2021-03-30-at-10-33-48.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Once the capture begins, we display the chosen camera, and record from the chosen microphone. So, how does this work? &lt;/p&gt;

&lt;h2&gt;
  
  
  Enumerating the devices
&lt;/h2&gt;

&lt;p&gt;When you call the enumerateDevices, you get all of the details of the devices:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;navigator.mediaDevices.enumerateDevices() .then(function(devices) { devices.forEach(function(device) { console.log(device.kind + ": " + device.label +" id = " + device.deviceId); if(device.kind =="videoinput"){ //add a select to the camera dropdown list // var option = document.createElement("option"); 
console.log(device); 
deviceIds[counter] = (device.deviceId); 
deviceNames[counter]= (device.label); 
counter++; } }); 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If we look at the console log, we get a list of all the input and outputs:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;audioinput: Default - Blue Snowball (046d:0ab9) id = default audioinput: Blue Snowball (046d:0ab9) id = 693957e7a6c63d00cf9068338ec0108bfdfad2f108182bc988f7ed79430d5024 
audioinput: Logitech StreamCam (046d:0893) id = ec257829b4d910400bdff0fe8639e3e2c5b9bd761c1cd4454b8f272672e4d482 
audioinput: MacBook Pro Microphone (Built-in) id = be26a24fb1ca054193d51c80a6c875a091a285153959ddae8ab24f55c939bcd5 
audioinput: Iriun Webcam Audio (Virtual) id = 5e911d92f330fb9333fee989407c5c84c3b99af29046d8182e582545e772687f 
videoinput: Logitech StreamCam (046d:0893) id = 8d59c8e7bc02076c4230ba70125c03491020950b008769bd593bdb13e33c1ce7 
videoinput: FaceTime HD Camera (Built-in) (05ac:8514) id = e3fce20226c193876c5ff206407fd4815ad5b1e6329e67a8e82c9636d8e75c8d audiooutput: Default - External Headphones (Built-in) id = default audiooutput: U32J59x (DisplayPort) id = b342ee4661c78101936e50a2b5a3e5080d5ed748e031547e492c7ab0eeddd9df 
audiooutput: External Headphones (Built-in) id = 4a3980d8579a418193b1e8ff46771f204e87b2997b90d0d4e7e60a4acaae1235 
audiooutput: MacBook Pro Speakers (Built-in) id = b7045191ebec43d797348478004f25472c868bcae61ad04a7de73f70878e27b2 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Here we learn that, yes, I have a lot of audio/video devices hooked into my computer. (mental note: why is Iriun webcam only presenting as audio, and not as video?) Each device has a: &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;kind: audio or video; input or output &lt;/li&gt;
&lt;li&gt;label: A text description for human consumption &lt;/li&gt;
&lt;li&gt;deviceId: a random string that uniquely IDs the device. &lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;We'll use the deviceId to decide which device to broadcast, but in record.a.video we use the labels in the form - since they are better descriptions for our users.&lt;/p&gt;
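&lt;p&gt;Building the camera dropdown from the enumeration results might look like this. This is a sketch; the helper names (videoInputOptions, populateCameraDropdown) are mine, not from record.a.video:&lt;/p&gt;

```javascript
// Pure helper: keep only cameras, and pair each deviceId with its human-readable label.
function videoInputOptions(devices) {
  return devices
    .filter((d) => d.kind === "videoinput")
    .map((d) => ({ value: d.deviceId, label: d.label }));
}

// Browser-only: fill a select element with one option per camera.
function populateCameraDropdown(selectEl) {
  navigator.mediaDevices.enumerateDevices().then((devices) => {
    for (const opt of videoInputOptions(devices)) {
      const option = document.createElement("option");
      option.value = opt.value; // deviceId, used later in getUserMedia constraints
      option.text = opt.label;  // label, shown to the user
      selectEl.add(option);
    }
  });
}
```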

&lt;h2&gt;
  
  
  Picking a broadcast
&lt;/h2&gt;

&lt;p&gt;Once a user has chosen their video input, we can make a request to obtain this video. (The same approach works for audio as well, so I'm only describing video here):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;navigator.getUserMedia = (navigator.mediaDevices.getUserMedia || navigator.mediaDevices.mozGetUserMedia || navigator.mediaDevices.msGetUserMedia || navigator.mediaDevices.webkitGetUserMedia); 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;When requesting a camera, you can apply constraints to the request to enforce exactly what you would like, for example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cameraW=1280; 
cameraH=720; 
cameraFR=25; 
var camera1Options = { audio:false, 
video:{ deviceId: deviceIds[0], 
width: { min: 100, ideal: cameraW, max: 1920 }, 
height: { min: 100, ideal: cameraH, max: 1080 }, frameRate: {ideal: cameraFR} } }; 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Here I set the ideal video as 1280x720, but allow for differences with the min &amp;amp; max parameters. This means that if the camera cannot provide 1280x720, it will give me a similar size, but in the available sizes for the camera. &lt;/p&gt;

&lt;p&gt;Note: I have set the audio to false. In this demo, I am applying the video onto the screen with a video tag. If the browser were to play audio from the camera, the mic would pick it up, and you'd get an awful feedback loop. I want to avoid this. I could just mute the video, but I thought it would be cool to take another Media Stream for the audio, and apply that track to my stream during recording (after it appears on the screen). &lt;/p&gt;
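&lt;p&gt;One way to sketch that idea - video only on screen, with a separate microphone stream whose audio track is attached just for recording. The helper names below are my own assumptions, not the app's actual code:&lt;/p&gt;

```javascript
// Move the first audio track from the mic stream onto the stream we record.
// The on-screen video element keeps playing the video-only stream, so no feedback loop.
function addMicTrack(videoStream, micStream) {
  const audioTrack = micStream.getAudioTracks()[0];
  if (audioTrack) videoStream.addTrack(audioTrack);
  return videoStream;
}

// Browser-only: request the camera (video only) and the microphone separately,
// then combine them into one stream for the recorder.
function captureWithSeparateMic(camera1Options) {
  return Promise.all([
    navigator.mediaDevices.getUserMedia(camera1Options),   // audio: false in the options
    navigator.mediaDevices.getUserMedia({ audio: true })   // mic only
  ]).then(([videoStream, micStream]) => addMicTrack(videoStream, micStream));
}
```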

&lt;h3&gt;
  
  
  Getting the stream and placing it on the page
&lt;/h3&gt;

&lt;p&gt;I have a video element called 'video1'. I assign the getUserMedia to display there:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;video1= document.getElementById("video1"); navigator.mediaDevices.getUserMedia(camera1Options).then(function(stream1){ video1.srcObject=stream1; 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;When this call is made - the browser will ask the user for permission to use the camera. If the user accepts the sharing request - the browser grabs the camera feed, and applies it into the video1 element:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--5my1uf2B--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://www.datocms-assets.com/26885/1617107872-screenshot-2021-03-30-at-13-34-37.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--5my1uf2B--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://www.datocms-assets.com/26885/1617107872-screenshot-2021-03-30-at-13-34-37.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--y1yyildl--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://www.datocms-assets.com/26885/1617108334-screenshot-2021-03-30-at-13-37-29.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--y1yyildl--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://www.datocms-assets.com/26885/1617108334-screenshot-2021-03-30-at-13-37-29.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;That's all there is to it! &lt;br&gt;
With the getUserMedia API, pulling the video feed from your camera into the browser takes just a few lines of code. &lt;/p&gt;

&lt;h2&gt;
  
  
  If one camera is great, what about two?
&lt;/h2&gt;

&lt;p&gt;While most desktops do not have multiple cameras, most smartphones do. I thought it would be very cool to extract a video feed from 2 cameras on one device. &lt;/p&gt;

&lt;p&gt;(Imagine a real estate walkthrough where you can see the surroundings, and the face of the person presenting.) So I built a demo application. You can see this application in action at &lt;a href="https://record.a.video/index1.html"&gt;record.a.video/index1.html&lt;/a&gt;. &lt;/p&gt;

&lt;p&gt;I basically repeat the camera1 and video1 code above with camera2 and video2, and use the first two cameras reported by the enumeration query. &lt;/p&gt;
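&lt;p&gt;The camera selection itself can be sketched as below: given the device list that enumerateDevices() resolves with, keep only the 'videoinput' entries and take the first two deviceIds. The device objects here are hypothetical sample data; in the browser you would await navigator.mediaDevices.enumerateDevices():&lt;/p&gt;

```javascript
// Sketch: pick the first two cameras from an enumerateDevices()-style list.
// The sample array below stands in for the browser's real device list.
const devices = [
  { kind: "audioinput", deviceId: "mic-1" },
  { kind: "videoinput", deviceId: "front-cam" },
  { kind: "videoinput", deviceId: "rear-cam" }
];

const cameras = devices.filter(function (d) { return d.kind === "videoinput"; });
const camera1Id = cameras[0].deviceId; // feeds camera1 / video1
const camera2Id = cameras[1].deviceId; // feeds camera2 / video2
console.log(camera1Id, camera2Id);
```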

&lt;p&gt;When I test the page on desktop, both cameras stream (which is super cool!). Unfortunately, on Android &amp;amp; iOS only one camera can stream at a time: the first camera grabs a still image from the start of capture and moves to an inactive state as the second camera becomes active. (On my phone, the front camera shows the still image, and the rear camera continues to show video.) &lt;/p&gt;

&lt;p&gt;I imagine that this is a battery/CPU-saving setting, as running two cameras full time would quickly drain a phone's battery. &lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;The MediaDevices.getUserMedia() API allows the browser to use the camera and microphone to display (and record) content. See it in action at &lt;a href="https://record.a.video"&gt;record.a.video&lt;/a&gt; and build your own version of a video recording system today!&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Building "Loom" in the browser: record a video with api.video</title>
      <dc:creator>Doug Sillars</dc:creator>
      <pubDate>Fri, 26 Mar 2021 00:00:00 +0000</pubDate>
      <link>https://forem.com/api_video/building-loom-in-the-browser-record-a-video-with-api-video-56pk</link>
      <guid>https://forem.com/api_video/building-loom-in-the-browser-record-a-video-with-api-video-56pk</guid>
      <description>&lt;p&gt;With the explosive growth of remote working in the past year, we’ve all worked to find different ways of communicating. Slack and Zoom have become major tools for communication as we are all no longer in the same building or office. &lt;/p&gt;

&lt;p&gt;Apps like Clubhouse have become ‘places’ to go and hang out and chat with others.&lt;/p&gt;

&lt;p&gt;One app that has featured explosive growth over the past year is Loom. Loom is an app that lets you record short video messages - sharing your screen or just using your camera. When you are done, the video is saved on a server, and you are given a link. This link is easy to share: on social media, on Slack etc. It is a great way to communicate an idea, or to express your point - it’s everything that is great about video, packaged in an easy to use interface. It is no wonder that Loom is so successful. &lt;/p&gt;

&lt;p&gt;My only issue is that I either have to use an installed app on my Mac (265 MB installed), or a Chrome extension (I did not go this approach, as Chrome eats too much memory already). &lt;/p&gt;

&lt;h2&gt;
  
  
  Can I build a Loom-like app - in the browser?
&lt;/h2&gt;

&lt;p&gt;I lead developer relations at api.video. We offer a full service api for video transcoding, hosting and delivering video streams. It is incredibly fast, and can be used to share videos in the same way Loom does. So this raised 2 questions: &lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;If I delegate the ‘hard’ bits of the video handling to api.video, can I build a recording app in the browser? &lt;/li&gt;
&lt;li&gt;Can I get this app running in all four major modern browsers? Desktop &amp;amp; Mobile? &lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;TL;dr: &lt;br&gt;
Question 1 is yes - you can build a website to record and share video (screen sharing and camera video). It will even stream live video! &lt;br&gt;
Question 2: Almost all browsers. On the desktop, the full featured app works great in Chrome, Edge, Firefox, and Safari 14.0.3 (there are some support issues that will cause failures below Safari 14). &lt;br&gt;
For Mobile: it works on Android (without screen sharing). Unfortunately, the APIs used are not yet in mobile Webkit, so no iOS support at this time. &lt;/p&gt;

&lt;p&gt;Try it out! The application is live at &lt;a href="https://record.a.video"&gt;record.a.video&lt;/a&gt;. &lt;/p&gt;

&lt;h2&gt;
  
  
  How does it work?
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--vmJNhUMM--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://www.datocms-assets.com/26885/1616757294-screenshot-2021-03-26-at-11-13-30.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--vmJNhUMM--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://www.datocms-assets.com/26885/1616757294-screenshot-2021-03-26-at-11-13-30.png" alt="" width="880" height="1410"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;When you open &lt;a href="https://record.a.video"&gt;record.a.video&lt;/a&gt;, you'll get a menu as above. (You may not get the available cameras and microphones - press "start capture" and allow camera/microphone permissions, then reload the page). &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The top bar allows you to switch between Live streaming and recording a video. &lt;/li&gt;
&lt;li&gt;Choose your camera and mic (if you have more than one). &lt;/li&gt;
&lt;li&gt;Select the screen layout - do you want to show just the camera or screen, or do you want to overlay the camera on the screen? &lt;/li&gt;
&lt;li&gt;Live captions: If you are using Chrome, we can add live captions to the video being created. Pick top or bottom for the captions. &lt;/li&gt;
&lt;li&gt;"Show recording" if you leave this checked, you'll see the video being recorded in the browser. Our CEO dislikes the "vortex effect" you see when sharing the screen, so we added this button for him to disable the view. (👋 Hi Cedric!) &lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Now that you've chosen all your options, you're ready to start capture! &lt;/p&gt;

&lt;h3&gt;
  
  
  Recording
&lt;/h3&gt;

&lt;p&gt;Once you've started recording, you'll see 3 new views appear: &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The screen being shared &lt;/li&gt;
&lt;li&gt;The camera &lt;/li&gt;
&lt;li&gt;The recording (showing screen, camera or both), unless you hid the recording. &lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--UndKsO0d--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://www.datocms-assets.com/26885/1616757841-screenshot-2021-03-26-at-11-22-05.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--UndKsO0d--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://www.datocms-assets.com/26885/1616757841-screenshot-2021-03-26-at-11-22-05.png" alt="" width="880" height="814"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Record and stream right from your browser!
&lt;/h2&gt;

&lt;p&gt;So it works! You can record your screen and camera in the browser, either live streaming it or uploading the recording for asynchronous viewing. &lt;/p&gt;

&lt;p&gt;What completely blows my mind about this application is that leveraging web APIs, this site weighs in at just under 300KB. There is a NodeJS backend to handle the streaming (we have to convert the format to RTMP for ingestion into api.video), but if you remove the live streaming component, this app runs with just JavaScript (you can even run it locally on your computer!) &lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Record.a.video lets you easily record and share video - live or asynchronously! Go give it a shot at &lt;a href="https://record.a.video"&gt;record.a.video&lt;/a&gt;, and if you want to build your own - the code is on &lt;a href="https://github.com/dougsillars/recordavideo"&gt;Github&lt;/a&gt;. In the coming days, we’ll add some technical posts describing how we built it. If you have any comments - please leave us a note in our &lt;a href="https://community.api.video"&gt;community forum&lt;/a&gt;.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>New feature: Webhooks</title>
      <dc:creator>Doug Sillars</dc:creator>
      <pubDate>Tue, 23 Mar 2021 00:00:00 +0000</pubDate>
      <link>https://forem.com/api_video/new-feature-webhooks-430f</link>
      <guid>https://forem.com/api_video/new-feature-webhooks-430f</guid>
      <description>&lt;h2&gt;
  
  
  Webhooks
&lt;/h2&gt;

&lt;p&gt;We're super excited to begin rolling out webhooks as a part of api.video. Today we have released our video on demand (VOD) encoding webhook. &lt;/p&gt;

&lt;h3&gt;
  
  
  What are webhooks?
&lt;/h3&gt;

&lt;p&gt;Webhooks are callbacks that announce when something has occurred (pushing data to you), rather than having to ask whether an event has occurred (polling data from the server). &lt;/p&gt;

&lt;h3&gt;
  
  
  Polling vs. pushing
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--s0IewzQf--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://www.datocms-assets.com/26885/1616508005-2247608175268b5d4f7o.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--s0IewzQf--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://www.datocms-assets.com/26885/1616508005-2247608175268b5d4f7o.jpg" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The classic polling interface is children asking "are we there yet?" every few minutes from the backseat of the car. The classic "pushing" of notifications is your phone telling you when you have a new email message. Your phone is not asking every minute, but the server tells your phone that an email has arrived. Polling takes more energy - as both sides have to keep asking for data, and depending on the polling interval, can add delays to the process. Pushing data delivers the needed data as soon as it is available - using fewer resources to deliver more timely data. &lt;/p&gt;

&lt;h2&gt;
  
  
  Polling for video encoding complete
&lt;/h2&gt;

&lt;p&gt;Before webhooks were launched, you had to poll the &lt;a href="https://docs.api.video/reference#get-video-status"&gt;video status&lt;/a&gt; endpoint to know when a video was ready for playback. The encoding.playable result tells us if one version of the video is ready (typically a low quality version), but if you wanted to wait until a higher quality (like 720p) was encoded, you had to keep checking that size to see when its encoding was completed. &lt;/p&gt;
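&lt;p&gt;A polling loop along those lines might be sketched as below. The getStatus function is a hypothetical stand-in for your call to the video status endpoint, and the field names are illustrative:&lt;/p&gt;

```javascript
// Hedged sketch of the old polling approach: keep asking the status
// endpoint until the desired quality reports itself as encoded.
// getStatus is a hypothetical stand-in for your HTTP call.
async function waitForQuality(getStatus, quality, intervalMs) {
  for (;;) {
    const status = await getStatus();
    const match = status.encoding.qualities.find(
      function (q) { return q.quality === quality; }
    );
    if (match !== undefined) {
      if (match.status === "encoded") return match;
    }
    // not ready yet: wait before polling again
    await new Promise(function (r) { setTimeout(r, intervalMs); });
  }
}
```

&lt;p&gt;Both sides do work on every iteration, and the polling interval adds latency - exactly the cost the webhook push model removes.&lt;/p&gt;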

&lt;h2&gt;
  
  
  Webhook encoding events
&lt;/h2&gt;

&lt;p&gt;Our first webhook is video.encoding.quality.completed, which pushes a message when each quality of each video has been encoded. For example, when a recently uploaded video finished encoding, our endpoint received several messages: one for each version. Here is an example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{ type: 'video.encoding.quality.completed', 
emittedAt: '2021-03-23T17:03:39.908Z', 
videoId: 'vi7DWgHCu29smlHThpzEajfs', 
encoding: 'hls', 
quality: '720p' } 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This tells us that this videoId was encoded as HLS at 720p. &lt;/p&gt;

&lt;h3&gt;
  
  
  Usages
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Track uploads&lt;/strong&gt;: If you allow users to upload videos, you can track every videoId uploaded every day. &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Display in your app&lt;/strong&gt;: Perhaps you only want to display a video when the 720p (or 1080p) video is ready (240p is great, but can be really fuzzy). You can now use these webhooks to decide when a video can be pushed live. &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Send your mp4 for analysis&lt;/strong&gt;. Perhaps you would like to do further processing with the mp4 (to add captions, or to perform video moderation). Now, when the webhook alerts you that it is ready, you can begin that secondary processing. &lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
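&lt;p&gt;As a sketch, a receiver for the second usage might gate on the payload fields shown earlier, acting only once the 720p rendition is done. The handler name and surrounding server plumbing are assumptions, not part of the api.video API:&lt;/p&gt;

```javascript
// Hypothetical handler: only act when a video's 720p quality completes.
// Field names (type, quality, videoId) match the example payload above.
function handleWebhook(event) {
  if (event.type !== "video.encoding.quality.completed") return false;
  if (event.quality !== "720p") return false;
  // e.g. mark event.videoId as ready to display in your app
  return true;
}

const ready = handleWebhook({
  type: "video.encoding.quality.completed",
  emittedAt: "2021-03-23T17:03:39.908Z",
  videoId: "vi7DWgHCu29smlHThpzEajfs",
  encoding: "hls",
  quality: "720p"
});
console.log(ready);
```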

&lt;h2&gt;
  
  
  API calls
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://docs.api.video/reference#post-webhooks"&gt;Create webhook&lt;/a&gt; &lt;br&gt;
&lt;a href="https://docs.api.video/reference#list-webhooks"&gt;List webhooks&lt;/a&gt; &lt;a href="https://docs.api.video/reference#get-webhook"&gt;Get webhook&lt;/a&gt; &lt;br&gt;
&lt;a href="https://docs.api.video/reference#delete-webhook"&gt;Delete webhook&lt;/a&gt; &lt;/p&gt;

&lt;h2&gt;
  
  
  Limitations
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Today, we allow you to have 3 webhook endpoints. We may allow for more endpoints in the future. &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Each webhook event is attempted up to 3 times. If it fails all 3 times, no further attempts are made to deliver the message. &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;There is no tracking or history of webhook notifications. &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;There is no authentication or certificate validation at this time. &lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Live demo
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://webhook.a.video"&gt;webhook a video&lt;/a&gt; displays encoding events when a video is uploaded. It is connected to the &lt;a href="https://upload.a.video"&gt;upload a video&lt;/a&gt; application, so if you upload a video there, you'll see the encoding events in the webhook application.&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--1r-WxXFx--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://www.datocms-assets.com/26885/1616522395-screenshot-2021-03-23-at-17-59-28.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--1r-WxXFx--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://www.datocms-assets.com/26885/1616522395-screenshot-2021-03-23-at-17-59-28.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In the screenshot above, you can see that multiple encoding events have occurred for several different videos. It shows the webhook event, the date &amp;amp; time, the videoId, and the format and size of the encoding. Rather than display these results, your application can base its internal logic on these events. &lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;We're excited to begin launching webhooks, with the first being VOD encoding events. We'll next be working on live stream webhooks, so look for those in the near future. If you have any comments about our webhooks (or any other feature you'd like to see), please leave a post at our &lt;a href="https://community.api.video"&gt;community forum&lt;/a&gt;.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Sharing a Video: Sending a Video Via Livestream</title>
      <dc:creator>Doug Sillars</dc:creator>
      <pubDate>Tue, 09 Feb 2021 00:00:00 +0000</pubDate>
      <link>https://forem.com/api_video/sharing-a-video-sending-a-video-via-livestream-2f10</link>
      <guid>https://forem.com/api_video/sharing-a-video-sending-a-video-via-livestream-2f10</guid>
      <description>&lt;p&gt;Video on demand (VOD) is a great way to give your customers a way to watch videos when &lt;em&gt;they&lt;/em&gt; want to watch them. But what if you want that recorded video to be played at a specific time? &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Maybe you are a teacher, and you want your pre-recorded lecture to broadcast at 8 AM for your class to watch (but you don’t actually need to be there at 8 AM). &lt;/li&gt;
&lt;li&gt;A “replay event” of a live stream that occurs 6 hours later for viewers in other parts of the world. &lt;/li&gt;
&lt;li&gt;A “video share” party where a group of people can share watching a video. &lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This is not possible with VOD, but is possible with a video livestream. In this post, we'll walk through the steps required to convert a recorded video into a video livestream for scheduled playback. &lt;/p&gt;

&lt;h2&gt;
  
  
  Broadcasting a VOD
&lt;/h2&gt;

&lt;p&gt;There are 3 things you’ll need for a basic VOD broadcast: &lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;A livestream. When you livestream with api.video, you’ll need to &lt;a href="https://api.video/blog/tutorials/live-stream-tutorial"&gt;create a livestream&lt;/a&gt; that will be used to broadcast your video. Once you’ve created a livestream for the video, you’ll need the parameters from the response - namely the streamKey and the iframe/player urls for your viewers to see the livestream. &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;A recorded video. You’ll want to have a video that you wish to send over the livestream. This can be a video local to your computer, but in this tutorial, it will be a file that has been &lt;a href="https://api.video/blog/tutorials/uploading-large-files-with-javascript"&gt;uploaded to api.video&lt;/a&gt;. We’ll use the mp4 url for the streaming. &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;FFMPEG. You’ll also need &lt;a href="https://ffmpeg.org/download.html"&gt;FFMPEG installed&lt;/a&gt; on your computer. If you use a Mac, you can install it via HomeBrew. FFMPEG is the tool that will help you take your VOD and convert it into a broadcast. &lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Now you have everything you need. In a terminal window on your computer, we’ll run an FFMPEG command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ffmpeg -i &amp;lt;your video to be streamed&amp;gt; -preset ultrafast -tune zerolatency -f flv rtmp://broadcast.api.video/s/&amp;lt;livestream streamkey&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;FFMPEG takes an input (-i) of your video, and using the ultrafast preset with zerolatency tuning, creates an flv stream and sends it to the api.video RTMP endpoint.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--CZllYYXC--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://www.datocms-assets.com/26885/1613044051-screenshot-2021-02-08-at-19-44-13.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--CZllYYXC--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://www.datocms-assets.com/26885/1613044051-screenshot-2021-02-08-at-19-44-13.png" alt="" width="880" height="748"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;FFMPEG will start running and print a lot of output to the terminal. Here you can see that the video is being encoded and sent on.&lt;/p&gt;

&lt;p&gt;That’s literally all you have to do to make this work.&lt;/p&gt;

&lt;h2&gt;
  
  
  I have an 8 AM class, UGH!
&lt;/h2&gt;

&lt;p&gt;Ok, what if you could automate this command to run right at 8am? It is easy to do!&lt;/p&gt;

&lt;p&gt;Create a Bash script with your command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#!/bin/bash

ffmpeg -i class_recording.mp4 -preset ultrafast -tune zerolatency -f flv rtmp://broadcast.api.video/s/&amp;lt;livestream streamkey&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then we’ll use crontab (Mac and Linux) to automate the command. Let's say class is February 10, at 8 AM.&lt;/p&gt;

&lt;p&gt;Running&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;crontab -e
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Opens the crontab file (probably in VIM). Type ‘i’ to begin typing, and enter:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;00 08 10 02 * &amp;lt;path to your script&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This will run the script at 8 AM on February 10. To save the file and exit VIM, type ‘:wq’. Now - as long as your computer is on at 8 AM on the 10th, the video will stream for you!&lt;/p&gt;

&lt;h2&gt;
  
  
  Looping a video in a livestream
&lt;/h2&gt;

&lt;p&gt;Do you want to have a video on continuous loop? With api.video, you &lt;em&gt;can&lt;/em&gt; do this with just our video player:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;https://embed.api.video/vod/vi6xuvHolHxZQ5r6KETXAiR4#autoplay;muted;loop
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Appending the autoplay, muted and loop parameters to the player url tells the api.video player to do all of these actions - the video will continuously playback for your users.&lt;/p&gt;

&lt;p&gt;But if you want to have the loop in a livestream, this is possible as well:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ffmpeg -stream_loop &amp;lt;number of loops&amp;gt; -i https://cdn.api.video/vod/vi1UQBDAMqAPCRxB3dmw1thc/mp4/1080/source.mp4 -preset ultrafast -tune zerolatency -f flv rtmp://broadcast.api.video/s/1d1e7a11-14a6-4984-b6d4-0c9864aec3dd
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Simply insert -stream_loop at the beginning of your ffmpeg command, with the number of additional plays afterward (-1 is an infinite loop).  &lt;/p&gt;
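&lt;p&gt;If you are spawning FFMPEG from code (as the share.a.video server does), the loop flag just becomes two more entries at the front of the argument array. A small sketch - the stream key and file name here are placeholders:&lt;/p&gt;

```javascript
// Sketch: build an ffmpeg argument list with -stream_loop at the front.
// loops = -1 means loop forever; 0 means play the video once.
function buildLoopArgs(videoUrl, streamKey, loops) {
  return [
    "-stream_loop", String(loops),
    "-i", videoUrl,
    "-preset", "ultrafast", "-tune", "zerolatency",
    "-f", "flv", "rtmp://broadcast.api.video/s/" + streamKey
  ];
}

const args = buildLoopArgs("source.mp4", "YOUR_STREAM_KEY", -1);
console.log(args.join(" "));
```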

&lt;h2&gt;
  
  
  Sharing a movie
&lt;/h2&gt;

&lt;p&gt;Ok - since we can have a livestream - we can now “share” a movie with others. Note: the videos are &lt;em&gt;not&lt;/em&gt; perfectly synced (a work in progress), but we can all watch the same video - at approximately the same location for each of us.&lt;/p&gt;

&lt;p&gt;We’ve sort of covered this idea with a class lecture - you might imagine all the students will be online at 8AM to watch the class. But what if we wanted to do something fun?  &lt;/p&gt;

&lt;p&gt;We've built &lt;a href="https://share.a.video"&gt;share.a.video&lt;/a&gt;, a demo app running NodeJS that replicates the FFMPEG transcoding shown above on a remote server, and then gives a playback view where anyone with the url can watch the video. In the example, we use 'Big Buck Bunny' and 'Sita Sings the Blues' (both Creative Commons licensed videos). The code is open sourced on &lt;a href="https://github.com/dougsillars/shareavideo"&gt;Github&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;On page load - we look to see if the livestream is already broadcasting:&lt;/p&gt;

&lt;p&gt;On the Node server, we have some of the video data hardcoded. To find out if the video is playing, we call the Livestream endpoint, and match the livestreamIds - what we are really interested in is the broadcasting parameter:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;//get data on both movies:
const client = new apiVideo.Client({ apiKey: apiVideoKey });
let allLiveStreams = client.lives.search();
var videos = [{
    "name": "Big Buck Bunny",
    "livestream": "li6ndv3lbvrZELWxMKGzGg9V",
    "broadcasting": false,
    "iframe": "",
    "thumbnail": "",
    "description": "Big Buck Bunny is a free and open source movie, created by Blender, and released under Creative Commons 3.0."
    },{
    "name": "Sita Sings the Blues",
    "livestream": "li7e2ePBRYKY6AOfPU8HSt91",
    "broadcasting": false,
    "iframe": "",
    "thumbnail": "",
    "description": "Sita Sings the Blues is an open source movie, created by Nina Paley, and released under CC-BY-SA."
    }];

allLiveStreams.then(function(liveList){
    //copy the iframe, thumbnail and broadcasting status for each video
    for(var i = 0; i &amp;lt; liveList.length; i++){
        for(var j = 0; j &amp;lt; videos.length; j++){
            if(videos[j].livestream === liveList[i].liveStreamId){
                videos[j].iframe = "&amp;lt;iframe src=\"" + liveList[i].assets.player + "\" width=\"100%\" height=\"100%\" frameborder=\"0\" scrolling=\"no\" allowfullscreen=\"true\"&amp;gt;&amp;lt;/iframe&amp;gt;";
                videos[j].thumbnail = liveList[i].assets.thumbnail;
                videos[j].broadcasting = liveList[i].broadcasting;
            }
        }
    }
    return res.render('index', { videos: videos });
});
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We pass the JSON array of data to the web application. The application is built with Pug, and can use logic from the variable to decide what is displayed. If the movie is already broadcasting: "The movie has already started. Click the image to enter the theatre. Make sure your phone is silenced." Otherwise: "Click to begin the livestream. It will take a few seconds to buffer up, and then get you into the playing theatre."&lt;/p&gt;

&lt;p&gt;The display page lets viewers start playback. When a viewer starts the livestream, the page calls an endpoint which kicks off FFMPEG commands like those above to do the VOD -&amp;gt; live transcoding on the server:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;//ok this will kick the video stream off
console.log(req.body);
var videoToStream = req.body.movie;
//counter for array data BBB=0, SSTB=1 (more will go from here)
var counter = 0;
if(videoToStream === "sstb"){
    counter = 1;
}
console.log("video to stream:", videoToStream);
var videoLink = videoUrls[counter];
var rtmpDestination = "rtmp://broadcast.api.video/s/" + streamKeys[counter];

var ops = [
    '-i', videoLink,
    '-preset', 'ultrafast', '-tune', 'zerolatency',
    '-f', 'flv', rtmpDestination
];

console.log("ops", ops);
ffmpeg_process = spawn('ffmpeg', ops);
//ffmpeg started
console.log("video stream started");

ffmpeg_process.stderr.on('data', function(d){
    console.log('ffmpeg_stderr', '' + d);
});

res.sendStatus(200);
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;When the stream starts, we head back to the client side of the app for another cool trick.&lt;/p&gt;

&lt;p&gt;A livestream requires about 15 seconds of video to be transcoded before it is live - so if we opened the livestream URL right away, viewers would get an error. So, we play a little trick when we get the response from the server:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;```
 oReq.onload = function (oEvent) {
        console.log("video started: ",movieUpload );
        var livestreamid = document.getElementById("videoDiv").innerHTML;
        document.getElementById("videoDiv").innerHTML="";
        console.log("liveid", livestreamid);
        var videos = ['vi74TmfoJyPmVJVnIl4jzMLA', livestreamid];

        //now we create the player
        //since the livestream can take some time to start - we'll kick off with the 10 second countdown video
        var counter = 0;
        createVideo(counter);
        document.getElementsByClassName('image')[0].style.display= 'none';
        document.getElementById('thumb-live-indicator').className = "active";

        //code lifted from playlist demo
        function createVideo(counter) {
            console.log("video", counter +videos[counter]);

            var vodOptions = { 
                id: videos[counter], 
                 autoplay: true
                 // ... other optional options
             };
             var liveOptions = { 
                id: videos[counter], 
                 autoplay: true,
                 live: true
                 // ... other optional options
             };
             videoOptions = vodOptions;
             if(counter &amp;gt; 0){
                 //live video
                 videoOptions = liveOptions;
                 //add the sync button
             // liveSync();

             }
            console.log("player options", videoOptions);

            window.player = new PlayerSdk("#imageDiv", videoOptions);
            player.addEventListener('play', function() { 
                //console.log("playing");
                onPlay(counter);
            });
            player.addEventListener('ended',function() { 
                console.log("ended");
                counter ++;
                //if we hit the end of the array - start over again

                onEnd(counter);
            });

        }


        function onPlay(counter) {
           // console.log("onPlay");
            console.log("counter" ,counter);

            console.log("video playing");
        }
        function onEnd(counter){
            //console.log("onEnd");

            //console.log("video over");
            player.destroy();
            //video is over - so start another one...
            createVideo(counter);
        }
        //end code lifted from playlist demo

}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;We create a &lt;a href="https://api.video/blog/tutorials/creating-video-playlists"&gt;video playlist&lt;/a&gt; where the first video is a 10 second countdown (like at the movies), and the second video is the livestream.&lt;/p&gt;

&lt;p&gt;This gives the live video enough time to build up a buffer and be ready to play, and makes for a fun experience for viewers.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;In this post, we've walked through several different ways to convert a recorded video into a livestream, all using FFMPEG in the background. We've covered a basic command line implementation, and then shown how to schedule that same command.&lt;/p&gt;

&lt;p&gt;We've also built a sample application based on NodeJS at &lt;a href="https://share.a.video"&gt;share.a.video&lt;/a&gt; that does the same thing, but on a remote server, and has built in webviews to start and watch the videos.&lt;/p&gt;

&lt;p&gt;If you still have questions (or want to share how you are piping recorded video into a livestream), join the conversation in the &lt;a href="https://community.api.video"&gt;api.video developer community&lt;/a&gt;.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Create Video Integrations Without Coding - So Easy It's a Zap!</title>
      <dc:creator>Doug Sillars</dc:creator>
      <pubDate>Tue, 02 Feb 2021 10:36:37 +0000</pubDate>
      <link>https://forem.com/api_video/create-video-integrations-without-coding-so-easy-it-s-a-zap-2252</link>
      <guid>https://forem.com/api_video/create-video-integrations-without-coding-so-easy-it-s-a-zap-2252</guid>
      <description>&lt;p&gt;Now you can combine api.video with all your favorite applications without doing any coding! We are proud to announce that we've partnered with Zapier. Zapier allows you to integrate your favorite applications without any coding at all. Perhaps you don't program, but you have a cool idea for how to use api.video services with another application, or maybe you want to do a proof-of-concept before you code your idea. Zapier will let you do it all. Some examples for you to consider: &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://api.video/blog/tutorials/it-s-a-zap-upload-a-video-and-receive-an-sms-notification"&gt;&lt;strong&gt;Receive a text message every time a new video is uploaded&lt;/strong&gt;&lt;/a&gt; - This would be great if you want to stay up-to-date with breaking news clips, the latest episode of your favorite TV show coming out, or whenever someone on your team completes a new video project. &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Copy videos you add to Amazon S3 over to api.video&lt;/strong&gt; - Whenever you add a new video to Amazon S3, add a backup copy at api.video! &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Send a Slack message about a new video upload&lt;/strong&gt; - If you have a workflow where other team members need to know a new video is available for them to edit, you can notify them in Slack about every upload. &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Copy videos you add to Dropbox over to api.video&lt;/strong&gt; - Any time you add a new video to Dropbox, automatically add a backup copy with api.video.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Send an email notification about new videos&lt;/strong&gt; - Send a note through Gmail every time you upload a new video to api.video. &lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h1&gt;
  
  
  How Does Zapier Work?
&lt;/h1&gt;

&lt;p&gt;When you sign up for an account with Zapier, you'll learn about three concepts: &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Zap&lt;/li&gt;
&lt;li&gt;Trigger&lt;/li&gt;
&lt;li&gt;Action&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;A Zap is an application you build using Zapier. It's composed of at least one trigger and one action that occur between two different products.&lt;/p&gt;

&lt;p&gt;A trigger is an event that kicks off a zap. For example, a trigger event could be you uploading a video to api.video. When a trigger event occurs, that's the cue for an action to happen. An action is what happens in response to a trigger. Say you have the trigger event of a video being added to api.video. A response action could be sending you a text message to alert you that there's a new video. &lt;/p&gt;

&lt;p&gt;Using Zapier, you can choose from thousands of applications to use as a trigger or an action (depending on what's available for each application). Not all applications offer triggers and actions, and you may not necessarily see the trigger event or action that you want to happen. If that's the case for api.video, give us some feedback in our &lt;a href="https://community.api.video/"&gt;community forum&lt;/a&gt; about what you'd like to see! &lt;/p&gt;

&lt;h1&gt;
  
  
  How to Sign up for Zapier
&lt;/h1&gt;

&lt;p&gt;Signing up for Zapier is a snap! To help you out, we've provided the sign-up steps here: &lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Navigate to &lt;a href="https://zapier.com/"&gt;Zapier's website&lt;/a&gt; &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Click &lt;strong&gt;Sign up&lt;/strong&gt; in the upper right corner. &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Choose the sign up option that works best for you. We are using the sign up with Google option here. Enter your email and click &lt;strong&gt;Continue as -your email-&lt;/strong&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--znFF3Sr7--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://www.datocms-assets.com/26885/1611899979-zap1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--znFF3Sr7--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://www.datocms-assets.com/26885/1611899979-zap1.png" alt="" width="880" height="743"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;After you're logged in, you're asked what apps you use. Type &lt;strong&gt;api.video&lt;/strong&gt; in the search bar, and when it comes back in the search results, click the &lt;strong&gt;api.video icon&lt;/strong&gt;. You can choose other apps you're interested in integrating with as well. &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;When you're done choosing apps, click &lt;strong&gt;Finish Setup&lt;/strong&gt;. NOTE: If you didn't want to choose any apps for starters, you can also choose &lt;strong&gt;Skip&lt;/strong&gt;. &lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;That's it! You're all done, and ready to try creating your first Zap! &lt;/p&gt;

&lt;h1&gt;
  
  
  Create a Zap
&lt;/h1&gt;

&lt;p&gt;Creating a zap is intuitive. From the dashboard, you'll see a screen that shows you the setup for a basic zap.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--YIfVhQ7M--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://www.datocms-assets.com/26885/1611901065-zapwelcome.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--YIfVhQ7M--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://www.datocms-assets.com/26885/1611901065-zapwelcome.png" alt="" width="880" height="317"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;As you can see in the image, what you'll be doing is choosing two applications and transferring information between them. The application you choose for the left side is going to be your trigger event. The application you choose for the right side is going to respond to the event by taking action. &lt;/p&gt;

&lt;p&gt;A generic setup includes these steps: &lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;a href="https://api.video"&gt;Sign up&lt;/a&gt; for an api.video account.&lt;/li&gt;
&lt;li&gt;In Zapier, choose api.video for your trigger event. &lt;/li&gt;
&lt;li&gt;Set up the trigger event. You're provided with instructions for what to do every step of the way. &lt;/li&gt;
&lt;li&gt;Choose the application for your action event. &lt;/li&gt;
&lt;li&gt;You'll again need to have an account with the application you select, and the appropriate privileges for what you want to do. &lt;/li&gt;
&lt;li&gt;You'll have the opportunity to test the parts of your app as you go along. &lt;/li&gt;
&lt;li&gt;At the end, you'll be able to do a test run of the completely new integration you've made between applications. &lt;/li&gt;
&lt;li&gt;You can turn on your zap, and it will start running every time the trigger event occurs! You can also use pre-made zaps. &lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;To make things as easy as possible, we offer walkthroughs for setting up each of our pre-made zaps. See a complete list of our zaps at &lt;a href="https://zap.a.video"&gt;zap.a.video&lt;/a&gt;.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Do You Want to Stream a Snowman?</title>
      <dc:creator>Doug Sillars</dc:creator>
      <pubDate>Tue, 19 Jan 2021 00:00:00 +0000</pubDate>
      <link>https://forem.com/api_video/do-you-want-to-stream-a-snowman-fbb</link>
      <guid>https://forem.com/api_video/do-you-want-to-stream-a-snowman-fbb</guid>
      <description>&lt;p&gt;It has been 3 years since my family has had a good snow. Last year, we lived in Zurich from November through January - sure that we would get a few blizzards of the white stuff. Nothing. The forecasts this winter have held promise 5 days out, but the forecast always fizzled at the last moment. Last weekend however, we saw that the snow was probably going to finally hit - a red alert - if you will! &lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--lMkNai6y--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/wr6v6lergcn85902eged.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--lMkNai6y--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/wr6v6lergcn85902eged.png" alt="Alt Text" width="880" height="927"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The forecast promised snow, but also that it would be rain by 1 PM. Now, I get excited by snow, but there is no excitement like that of a 5 year old: &lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--99bXMLUQ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/fs1uv193e49lsyyv1no5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--99bXMLUQ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/fs1uv193e49lsyyv1no5.png" alt="Alt Text" width="880" height="801"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;By 7AM, there was no light in the sky, but there was a dusting of snow. That was enough, and we were off to the park across the street, where we could make up for a lack of depth with a huge area of snow. The kids quickly got to work, and built a snowman in the corner of the park - where we could see it from the house. &lt;/p&gt;

&lt;h2&gt;
  
  
  Livestreaming the Snowman
&lt;/h2&gt;

&lt;p&gt;I have a &lt;a href="https://api.video/blog/tutorials/video-streaming-with-a-raspberry-pi"&gt;Raspberry Pi livestream&lt;/a&gt; pointing very close to the corner of the park. With a minor adjustment, I was able to center the stream on the snowman. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://embed.api.video/vod/viLH7CMho8QWjVOrBCiXQxU"&gt;https://embed.api.video/vod/viLH7CMho8QWjVOrBCiXQxU&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;There is no video of the snowman being built - the house Wi-Fi was off, and honestly, I was lucky to get a cup of coffee in before we headed out. But the best was actually yet to come! For me, seeing the enjoyment that others have with our snowman was worth it. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--ASGCjeAx--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/hmkhgjmysoiia48fl8db.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--ASGCjeAx--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/hmkhgjmysoiia48fl8db.png" alt="Alt Text" width="880" height="440"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;My livestream is set to save each hour of video, so it is easy to watch the replay (at 2x speed) to see the neighbors look at our snowman. &lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;a href="https://embed.api.video/vod/vi6HPg2FfLnmKe6ah8Yi6jBp#t=1940"&gt;https://embed.api.video/vod/vi6HPg2FfLnmKe6ah8Yi6jBp#t=1940&lt;/a&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;a href="https://embed.api.video/vod/vi6HPg2FfLnmKe6ah8Yi6jBp#t=1940"&gt;https://embed.api.video/vod/vi6HPg2FfLnmKe6ah8Yi6jBp#t=1940&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The link above uses one of api.video's cool indexing features. The #t=1940 tells the video player to start playback at 1940s (that's 32 minutes and change) into the video. That way you can see the dog walking and photo taking occur, without having to scan through the video. If you scan back to ~12:00 in this video, you may see a spontaneous TikTok dance recording on the park bench :) &lt;/p&gt;
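&lt;p&gt;As a quick sketch, building one of these timestamped embed links is just string concatenation (the helper name here is my own, not part of api.video's tooling):&lt;/p&gt;

```javascript
// Build an embed URL that starts playback at a given offset.
// The #t=SECONDS fragment tells the api.video player where to begin.
function embedAt(videoId, seconds) {
  return 'https://embed.api.video/vod/' + videoId + '#t=' + seconds;
}

// 1940 seconds is 32 minutes and 20 seconds into the recording.
console.log(embedAt('vi6HPg2FfLnmKe6ah8Yi6jBp', 1940));
```

&lt;p&gt;Any number of seconds works, so you can deep-link straight to a moment in a saved stream.&lt;/p&gt;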

&lt;h2&gt;
  
  
  Instant Replay
&lt;/h2&gt;

&lt;p&gt;As predicted, the snow turned to rain (earlier than promised, sadly), and it got very slushy out. We came inside for cocoa and soup, and as the sun came out, Mr. Snowman began to lean. It was the lean that all snowman builders know - the death fall was imminent. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--cbWwK7lw--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/4urx89lq0495e8zr07r3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--cbWwK7lw--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/4urx89lq0495e8zr07r3.png" alt="Alt Text" width="880" height="893"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;There's nothing that we can do: the snow is now all slush, and the sun is out - Mr. Snowman's fate is sealed. All we could do was watch from the window. I missed the moment that his head fell, but my kids saw it. But an event like that - you want to see it again. We could have waited until my hour-long stream was done and watched the VOD version. But api.video has an 'instant replay' feature where we can rewind the livestream to re-watch parts of the stream - while the stream is still live! So, my kids and I got to the computer (and I managed to start a screen recording), and we watched the head “kerplunk” several times:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://embed.api.video/vod/vi6YhiOxntqAvxdAtwYaVp4i"&gt;https://embed.api.video/vod/vi6YhiOxntqAvxdAtwYaVp4i&lt;/a&gt; &lt;/p&gt;

&lt;h2&gt;
  
  
  End of the day
&lt;/h2&gt;

&lt;p&gt;As the sun began to set, we noticed that Mr. Snowman had a new head. Looking back at the VOD, we see that a neighbor worked out in the field to create a new head, and then carried it across the green to bestow it upon Mr. Snowman. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://embed.api.video/vod/vi2h1EDYae0HSSRtuPlgN5TU#t=120"&gt;https://embed.api.video/vod/vi2h1EDYae0HSSRtuPlgN5TU#t=120&lt;/a&gt; &lt;/p&gt;

&lt;h3&gt;
  
  
  Addendum
&lt;/h3&gt;

&lt;p&gt;On Monday morning, as I was sipping my coffee and readying to start work, I saw a crew of workers pointing at the remains of Mr. Snowman.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--F3Ep8RGX--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/k5gozlebyq1myqgllcrq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--F3Ep8RGX--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/k5gozlebyq1myqgllcrq.png" alt="Alt Text" width="880" height="406"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Hmm - seems like a pretty big response for a snowman in the corner of the park… It turns out these gentlemen ripped out the fence bordering the park, and built a new one - all without disturbing the remains of Mr. Snowman: &lt;/p&gt;

&lt;p&gt;&lt;a href="https://embed.api.video/vod/vixUdJbwvi7AAzHoYgRifuZ#t=305"&gt;https://embed.api.video/vod/vixUdJbwvi7AAzHoYgRifuZ#t=305&lt;/a&gt; &lt;/p&gt;

&lt;p&gt;And a few hours later, a brand new fence, and a diminished Mr. Snowman. &lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;So, using my api.video livestream, I was able to extract even more enjoyment from our snowman than the typical building and watching from the window. We could track its “lean” through the day, and see our neighbours interact with the snowman. We used the live instant replay to watch his head fall to the ground. The recordings can help us remember this day, and the fun we had as a family in the park with Mr. Snowman. Maybe you'll be able to create family memories you'll always cherish with your livestream. Try out our &lt;a href="https://api.video"&gt;API&lt;/a&gt;, set up a livestream, and let us know what you think in our &lt;a href="https://community.api.video"&gt;community forum&lt;/a&gt;.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Video Moderation with Machine Learning</title>
      <dc:creator>Doug Sillars</dc:creator>
      <pubDate>Fri, 13 Nov 2020 00:00:00 +0000</pubDate>
      <link>https://forem.com/api_video/video-moderation-with-machine-learning-4o2c</link>
      <guid>https://forem.com/api_video/video-moderation-with-machine-learning-4o2c</guid>
      <description>&lt;h1&gt;
  
  
  Video Content Moderation
&lt;/h1&gt;

&lt;p&gt;User generated content (UGC) is taking over the internet, and one of the fastest growing segments of this trend is video UGC. From sites offering video product reviews to vlogging to online education - video is being created at the fastest pace yet (and shows no sign of stopping).&lt;/p&gt;

&lt;p&gt;Many sites are looking for ways to easily incorporate UGC video on their sites. &lt;/p&gt;

&lt;p&gt;All of our customers who allow UGC to be posted on their site worry about a few bad actors working to ruin their brand by posting inappropriate content. To protect their brand and the content on their site, evaluating any user-generated content before it is allowed onto the website is an important step. Traditional content moderation requires human moderators, which takes time and is expensive. &lt;/p&gt;

&lt;p&gt;In this post, I'll walk through an alternative: using machine learning to moderate videos. Before publishing, each video is scanned for content using machine learning, tested against rules, and then either accepted or rejected based on those rules. It's super fast, works 24 hours a day, and there will be no human error in the categorisation of the video. As a demonstration, we've built a site using &lt;a href="https://api.video"&gt;api.video&lt;/a&gt; for video hosting and &lt;a href="https://thehive.ai"&gt;Hive AI&lt;/a&gt; to power the video moderation. You can try it out at &lt;a href="https://moderate.a.video"&gt;moderate.a.video&lt;/a&gt;. &lt;/p&gt;

&lt;h2&gt;
  
  
  The Basics
&lt;/h2&gt;

&lt;h3&gt;
  
  
  api.video
&lt;/h3&gt;

&lt;p&gt;api.video is a full-service, API-based video hosting solution. Use APIs to upload, modify, and serve streaming video. Every video that is uploaded is transcoded into a video stream, and can be delivered in a custom video player. In this solution, we'll use a &lt;a href="https://docs.api.video/reference#videos-delegated-upload"&gt;delegated upload&lt;/a&gt; token (used like a public key) for uploading the videos. We'll use the video &lt;a href="https://api.video/blog/tutorials/video-tagging-best-practices"&gt;tagging&lt;/a&gt; function to label each video based on the moderation results. We can then search the tagged videos to deliver the videos (and custom player) to users based on their moderation state. &lt;/p&gt;

&lt;h3&gt;
  
  
  Hive AI
&lt;/h3&gt;

&lt;p&gt;Once the videos are uploaded, we need to run moderation before allowing them to appear on the website. Hive AI has several moderation suites. We've set up our app to use 2 endpoints: for short videos (under 25s), we scan 1 frame every second, and for longer videos, one frame every 5 seconds. (The demo above only works for videos up to 25s long.) The API is trained to identify several subjects that, depending on the context of your application, might be important to moderate. In this post, we will use a small subset of the trained models: &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Safe for work/Not Safe for Work &lt;/li&gt;
&lt;li&gt;Yes/No: Female nudity &lt;/li&gt;
&lt;li&gt;Yes/No: Male nudity &lt;/li&gt;
&lt;li&gt;Yes/No: Female swimwear &lt;/li&gt;
&lt;li&gt;Yes/No: Shirtless male &lt;/li&gt;
&lt;li&gt;Yes/No: guns &lt;/li&gt;
&lt;li&gt;Yes/No: smoking &lt;/li&gt;
&lt;li&gt;Yes/No: Nazis &lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The full list of parameters that can be identified can be found &lt;a href="https://thehive.ai/hive-moderation-suite"&gt;here&lt;/a&gt;. After each frame is analyzed, a JSON file with data for each frame analyzed is returned for analysis. After analysis, the video is tagged with moderation values - ensuring that it only appears on pages appropriate for the video. &lt;/p&gt;
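&lt;p&gt;The frame-sampling rule described above boils down to a one-liner (a sketch with my own function name, not the app's exact code):&lt;/p&gt;

```javascript
// Frame-sampling interval for moderation: one frame per second for
// clips under 25 seconds, one frame every five seconds otherwise.
function sampleIntervalSeconds(durationSeconds) {
  return 25 > durationSeconds ? 1 : 5;
}

console.log(sampleIntervalSeconds(20));  // 1 - short clip, sample every second
console.log(sampleIntervalSeconds(120)); // 5 - longer video
```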

&lt;h3&gt;
  
  
  Code
&lt;/h3&gt;

&lt;p&gt;This application is available on &lt;a href="https://github.com/dougsillars/videoModeration"&gt;Github&lt;/a&gt;. The general flow is: &lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;On Upload, a video is tagged "needs moderation" (and will not appear on the site). &lt;/li&gt;
&lt;li&gt;hiveAI performs frame analysis. &lt;/li&gt;
&lt;li&gt;Based on the analysis, the video is tagged as "SFW" or "NSFW" (etc.). &lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--J9QwqtmK--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/cbv1xhupogdk53ulth4t.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--J9QwqtmK--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/cbv1xhupogdk53ulth4t.png" alt="Alt Text" width="880" height="433"&gt;&lt;/a&gt; &lt;/p&gt;

&lt;h2&gt;
  
  
  The App
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--wFHm8req--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/phm3c0wg9wj3hnbhm1j7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--wFHm8req--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/phm3c0wg9wj3hnbhm1j7.png" alt="Alt Text" width="880" height="564"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;On entering the site, users are presented with an upload form. As we walk through this example, we'll be following the upload of the intro theme to the classic TV show Baywatch. With api.video, we have created a delegated upload key - which allows us to place the code publicly on the webpage without exposing our private API key. The form takes the video and uploads it to api.video. The uploader uses the &lt;a href="https://api.video/blog/tutorials/uploading-large-files-with-javascript"&gt;Blob API&lt;/a&gt; to break the videos into 50MB segments for uploading. For the purposes of the demo - we show how many chunks are created, and update the progress of each chunk, in addition to the total video upload: &lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--xAxofGiX--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/spk37s80ifuxkz5mqty0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--xAxofGiX--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/spk37s80ifuxkz5mqty0.png" alt="Alt Text" width="712" height="178"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The response from the upload provides the api.video videoId, which is used to identify the video at api.video. This is then sent to the NodeJS backend as a POST (along with the video's name). On the Node server, we begin the process of moderating the video. First, we call the &lt;a href="https://docs.api.video/reference#patch-video"&gt;update video&lt;/a&gt; endpoint to add the video's name and to tag the video "needsScreening" to indicate that it has entered the moderation queue. &lt;/p&gt;
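&lt;p&gt;A minimal sketch of that first step on the Node server - the payload shape follows the update-video endpoint, but the function names (and the exact URL) are my own illustration:&lt;/p&gt;

```javascript
// Build the PATCH payload that drops a video into the moderation queue.
function moderationPatch(name) {
  return { title: name, tags: ['needsScreening'] };
}

// Send it to the update-video endpoint (the API key is assumed in scope).
async function enterModerationQueue(videoId, name, apiKey) {
  await fetch('https://ws.api.video/videos/' + videoId, {
    method: 'PATCH',
    headers: {
      Authorization: 'Bearer ' + apiKey,
      'Content-Type': 'application/json',
    },
    body: JSON.stringify(moderationPatch(name)),
  });
}

console.log(JSON.stringify(moderationPatch('Baywatch intro')));
```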

&lt;h3&gt;
  
  
  Transcoding
&lt;/h3&gt;

&lt;p&gt;When the video is uploaded, api.video's servers begin the process of creating different size/bitrate videos to provide adaptive bitrate streaming. We also create an mp4 version of the video. For content moderation, we need to submit the mp4 to Hive AI. The Node server pings api.video's &lt;a href="https://docs.api.video/reference#get-video-status"&gt;video status&lt;/a&gt; endpoint every 2 seconds to determine when the mp4 is ready. Initially, the API will indicate that the video is not playable (transcoding has not yet started). Once transcoding has started, the API lists the encoding status of every format being created, so we can monitor the encoding status of the mp4. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--OPJIwpzO--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/e5thpl7cejshlsw1h07p.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--OPJIwpzO--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/e5thpl7cejshlsw1h07p.png" alt="Alt Text" width="804" height="786"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Once the mp4 is created, we can create our connection to HiveAI, and make the request for content moderation. The request looks like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{ method: 'POST', url: 'https://api.thehive.ai/api/v2/task/sync', headers: { accept: 'application/json', authorization: 'token {API TOKEN}' }, form: { image_url: 'https://cdn.api.video/vod/vi1iHWIy6Doy0LBJl3ajaED0/mp4/1080/source.mp4' } } 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;and a few seconds later, a huge JSON response comes back - 39 categories * x frames analysed. Let's look at a snip of one frame to see the sort of information we get (for brevity, I've only included the first 5 categories that are returned):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{ "class": "general_not_nsfw_not_suggestive", "score": 0.00460230773187999 }, { "class": "general_nsfw", "score": 5.180850871024288e-06 }, { "class": "general_suggestive", "score": 0.995392511417249 }, { "class": "no_female_underwear", "score": 0.9998768576722025 }, { "class": "yes_female_underwear", "score": 0.00012314232779748514 } 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;"Not NSFW" means "Not Not Safe for Work" which, removing the double negative is "safe for work." This score is combined with "not suggestive", and is scored at 0.004. Since 0 is a low score, and 1 is a high score, this means that the API has determined that this frame is not considered appropriate for work. Looking at the next 2 values, general NSFW is also very small, but the "general suggestive" is 99.9%. Since the yes:no scores will always add up to one, this means that the general suggestiveness is what makes this frame not safe for work. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--QPyovYDr--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/ir43rdejse1f9jtelxm2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--QPyovYDr--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/ir43rdejse1f9jtelxm2.png" alt="Alt Text" width="880" height="482"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Compiling the scores
&lt;/h3&gt;

&lt;p&gt;Hive AI gives you scores for each category on each frame, but it is up to us to define the pass/fail criteria for our videos. To do this, I take all the scores for each category and place them in an array. With the data collected for each category, I can calculate the min, max, average and median of each score. I also count how many frames score over 0.9 (a 90% certainty of "yes_smoking") and how many score under 0.1 (a 90% certainty of "no_smoking"). &lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--AGLopvEd--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/4fnn60ztjxxxrkqnbtuv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--AGLopvEd--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/4fnn60ztjxxxrkqnbtuv.png" alt="Alt Text" width="880" height="765"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In the above "SFW" array (really the 'not not safe for work' response, but that is a mouthful), of 22 frames, I find: &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;min score: 0 (not safe for work) &lt;/li&gt;
&lt;li&gt;max score: 1 (safe) &lt;/li&gt;
&lt;li&gt;average: 0.55 - right in the middle! &lt;/li&gt;
&lt;li&gt;median: 0.75 &lt;/li&gt;
&lt;li&gt;count of frames &amp;gt; 0.9: 9 &lt;/li&gt;
&lt;li&gt;count of frames &amp;lt; 0.1: 7 &lt;/li&gt;
&lt;/ul&gt;
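&lt;p&gt;These summary statistics are straightforward to compute from each category's score array - a sketch (my own helper, not the app's exact code):&lt;/p&gt;

```javascript
// Summarise one category's per-frame scores: min, max, mean, median,
// plus how many frames score above 0.9 and how many below 0.1.
function summarise(scores) {
  const sorted = scores.slice().sort((a, b) => a - b);
  const mid = Math.floor(sorted.length / 2);
  return {
    min: sorted[0],
    max: sorted[sorted.length - 1],
    mean: sorted.reduce((sum, x) => sum + x, 0) / sorted.length,
    median: sorted.length % 2 ? sorted[mid] : (sorted[mid - 1] + sorted[mid]) / 2,
    over09: sorted.filter((x) => x > 0.9).length,
    under01: sorted.filter((x) => 0.1 > x).length,
  };
}

console.log(summarise([0, 1, 0.95, 0.05, 0.5]));
```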

&lt;p&gt;As you can see, 9 frames are over 0.9 (safe!), but 7 are below 0.1 (not safe!). If the median falls below 0.9, that means at least 50% of the frames are not "certain" to be safe for work (which I place at the 90% threshold). Based on these numbers, my rudimentary pass/fail algorithm deems the Baywatch intro "NSFW." If you wanted to prevent videos with smoking from appearing to your audience, just one frame with a 90% certainty of smoking would be enough to cause the video to be categorised as "yes_smoking." I used this same threshold for guns, Nazis, nudity, underwear and swimwear.&lt;/p&gt;
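&lt;p&gt;The two rules - median-based for SFW, single-frame for categories like smoking - can be sketched as (function names are mine):&lt;/p&gt;

```javascript
// Rule 1: a video is SFW only when the median per-frame SFW score
// clears the 90% certainty threshold.
function sfwLabel(medianSfwScore) {
  return medianSfwScore >= 0.9 ? 'SFW' : 'NSFW';
}

// Rule 2: for categories like smoking or guns, one frame over 90%
// certainty is enough to flag the whole video.
function flagLabel(category, frameScores) {
  return (frameScores.some((x) => x > 0.9) ? 'yes_' : 'no_') + category;
}

console.log(sfwLabel(0.75));                    // NSFW - the Baywatch median of 0.75 fails the bar
console.log(flagLabel('smoking', [0.1, 0.95])); // yes_smoking
```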

&lt;blockquote&gt;
&lt;p&gt;Interestingly, this does push the Schindler's List trailer into the "yes smoking" category. &lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Based on these metrics, it is not terribly surprising that the Baywatch video is flagged for "yes female swimwear" and "yes shirtless male." Also unsurprisingly, the algorithm found an absence of smoking, guns and Nazis. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--0l1gLrbw--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/4hutwfbirf845uru44d1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--0l1gLrbw--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/4hutwfbirf845uru44d1.png" alt="Alt Text" width="700" height="766"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Once we have the categories measured for each application, we can remove the tag needsScreening and add in the new tags from the moderation. This is done with the &lt;a href="https://docs.api.video/reference#patch-video"&gt;Video Update&lt;/a&gt; endpoint. And that ends the moderation process for the video.&lt;/p&gt;

&lt;h3&gt;
  
  
  Video Categories
&lt;/h3&gt;

&lt;p&gt;Now that each video has been categorised, it is easy to display each video category. Each video's categories have been added to the video as tags, and the &lt;a href="https://docs.api.video/reference#list-videos"&gt;List videos&lt;/a&gt; endpoint allows us to search by tag - returning every video that does not have smoking, or is safe for work, etc. Using Node, we can generate the list of videos (sorted newest first), and send them to the client:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;app.get('/no_smoking', (req, res) =&amp;gt; { //get list of no smoking videos client = new apiVideo.Client({ apiKey: apiVideoKey}); 
let recordedList = client.videos.search({"tags":'no_smoking', "sortBy":"publishedAt","sortOrder":"desc"}); 
recordedList.then(function(list) { console.log("list of tagged videos"); 
console.log(list); 
return res.render('videos',{list}); }).catch((error) =&amp;gt; { console.log(error); }); }); 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The API returns the metadata for each video (including links for the video), so creating a page with each video iframe is a pretty simple task. On each page in the sample app, you can display (and watch) each of the videos assigned to a category. In the demo, I use Pug for the rendering:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;each video in list p #{video.title} iframe(type="text/html", src=video.assets.player, width = "960", height="540",frameborder="0", scrollling="no") p #{video.publishedAt} 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;For example, the "yes guns" page has 3 movie trailers (as I write this post): Indiana Jones and the Last Crusade, Die Hard and the latest James Bond.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;There we have it - we have uploaded a video to api.video, and before it is displayed on the site, it is moderated by Hive AI for several categories of inappropriateness. Based on the analysis for each frame, the video is categorised into buckets and displayed on the appropriate page on the website. Try it yourself - the code is up and running at &lt;a href="https://moderate.a.video"&gt;https://moderate.a.video&lt;/a&gt;. If you have questions about using content moderation with your videos at api.video, feel free to reach out on our &lt;a href="https://community.api.video"&gt;community&lt;/a&gt; or comment on the &lt;a href="https://github.com/dougsillars/videoModeration"&gt;Github repo&lt;/a&gt;. We'd love to see how you are using moderation to sort and categorise your videos, and the rubrics you utilise to decide what categories a video might fall into.&lt;/p&gt;

</description>
    </item>
  </channel>
</rss>
