<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Gcore </title>
    <description>The latest articles on Forem by Gcore  (@gcoreofficial).</description>
    <link>https://forem.com/gcoreofficial</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Forganization%2Fprofile_image%2F6756%2F5abf2101-0beb-4fe1-ad0b-3ba22e6bfd09.jpeg</url>
      <title>Forem: Gcore </title>
      <link>https://forem.com/gcoreofficial</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/gcoreofficial"/>
    <language>en</language>
    <item>
      <title>Image CDN Explained: What is it and How Does it Work</title>
      <dc:creator>Andrey Kuyukov</dc:creator>
      <pubDate>Tue, 11 Jul 2023 11:19:27 +0000</pubDate>
      <link>https://forem.com/gcoreofficial/image-cdn-explained-what-is-it-and-how-does-it-work-3ee2</link>
      <guid>https://forem.com/gcoreofficial/image-cdn-explained-what-is-it-and-how-does-it-work-3ee2</guid>
<description>&lt;p&gt;Images are a vital component of the modern web page: They are informative, attractive, improve a page’s engagement, and positively affect revenue and conversion metrics on commercial websites. Regardless of how many pictures a website uses, their size means images still have a significant impact on overall website performance. However, many images on the web are delivered in outdated formats, in excessively high resolution, and with the additional baggage of uncompressed metadata. An image CDN is a modern, cloud-based service that addresses these challenges and streamlines image processing on the web.&lt;/p&gt;

&lt;h2&gt;
  
  
  What Is an Image CDN?
&lt;/h2&gt;

&lt;p&gt;An image CDN is a content delivery network with additional functionality for image compression and transformation in real time. It has two main functions:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Rapidly distribute content to audiences across the globe&lt;/strong&gt;, just like a traditional CDN speeds up content delivery. This requires a global network of caching servers.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Dynamically convert, compress, and transform images in the cloud&lt;/strong&gt;, eliminating the need for pre-upload editing. This occurs thanks to an image CDN’s additional software mechanisms.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--1cfTJBST--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/e4zgq1sc9h4oj0p7c230.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--1cfTJBST--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/e4zgq1sc9h4oj0p7c230.png" alt="Comparison of image delivery and transformation process with a traditional CDN vs. image CDN" width="800" height="213"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;Figure 1. Traditional CDN vs. Image CDN&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;While a traditional CDN targets fast content delivery regardless of the content’s format and parameters, an image CDN—while still aiming for fast delivery—focuses on images as a specific genre of data. As such, an image CDN takes into account all the distinctive features and challenges that images bring to content delivery and provides tailored tooling for their most common use cases.&lt;/p&gt;
&lt;h2&gt;
  
  
  How Does an Image CDN Work?
&lt;/h2&gt;

&lt;p&gt;The origin server stores the images of a web application (website). When a client requests an image, a CDN server pulls it from the origin, applies all necessary manipulations, caches a copy on the server, and transfers the processed version of the image to the client, as shown in Figure 2. In other words, an image CDN works as a reverse proxy between an origin server (web hosting or image storage) and a client’s browser.&lt;/p&gt;

&lt;p&gt;If the next query for this image has the same parameters, the CDN server will respond with the same version of the image without contacting the origin or performing any additional processing. If not, the above process occurs anew.&lt;/p&gt;

&lt;p&gt;Let’s inspect in greater depth how image transformation and delivery happen with an image CDN.&lt;/p&gt;
&lt;h3&gt;
  
  
  Image Transformation
&lt;/h3&gt;

&lt;p&gt;All manipulations with images are determined by specific parameters that a website administrator sets up in the image’s URL while designing a webpage. Here’s how this link may look:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;https://assets.gcore.pro/site/image-stack/transform.png?width=400&amp;amp;height=600&amp;amp;fit=fit 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The query strings in the image’s URL define the operations that will be applied to the image along the way. When the client visits a web page, they request all the files of the page—including images—via the files’ URLs. At that moment, the client’s browser sends a query to the origin that “transform.png” should be delivered with “width=400”, “height=600”, and “fit=fit”. But first, this request goes to the closest CDN server to check if this file’s version already exists in the CDN’s cache.&lt;/p&gt;

&lt;p&gt;If there is no cached image with the given parameters, the request goes to the origin to retrieve the original “transform.png” file, transfer it to the CDN server, and apply all necessary parameters there. Then, the resized “transform.png” will be delivered and presented to the browser.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--urxMv-IE--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/1i6vd7umpl1yaytnpo6s.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--urxMv-IE--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/1i6vd7umpl1yaytnpo6s.png" alt="Comparison of the image transformation and retrieval process when an image is cached on an Image CDN vs when it is not cached." width="800" height="183"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;Figure 2. How an image CDN transformation works&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Note: In Figure 2, the “final” version of the image is in WebP format, even though WebP wasn’t mentioned in the query string. This is due to the internal settings of this specific image CDN.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;The end quality of the transformation depends on how many functions the image CDN can apply to the content it processes; you can find this information in the provider’s plan description, product documentation, or similar. Basic functions usually include conversion to WebP format, resizing, and image compression. More advanced services can convert to AVIF format, apply watermarks, and perform manipulations like rotation and blurring.&lt;/p&gt;
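
&lt;p&gt;For instance, building on the earlier example URL, each transformation is just a different query string. (The &lt;code&gt;quality&lt;/code&gt; and &lt;code&gt;format&lt;/code&gt; parameter names below are illustrative; check your provider’s documentation for the exact syntax it supports.)&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Resize to a 400 px wide thumbnail
https://assets.gcore.pro/site/image-stack/transform.png?width=400

# Recompress at lower quality and convert to WebP
https://assets.gcore.pro/site/image-stack/transform.png?quality=60&amp;amp;format=webp
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;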

&lt;h3&gt;
  
  
  Image Delivery
&lt;/h3&gt;

&lt;p&gt;To explain the “delivery” aspect of an image CDN, we first need to review the traditional CDN service. A traditional CDN’s primary function is to speed up delivery by reducing the physical distance between geographically distributed audiences and the origin server, using a wide network of edge (caching) servers.&lt;/p&gt;

&lt;p&gt;In general, the more edge servers you have, the less latency your clients experience. Cached copies of the web content are served closer to the users, so user requests can be fulfilled much faster because there is no need to travel all the distance to the origin, and most queries will be terminated on the edge.&lt;/p&gt;

&lt;p&gt;Another “delivery” feature of an image CDN is the ability to deliver a specific version of the content to each individual client. Based on the request’s HTTP headers, a CDN server can detect the browser’s capabilities and return the most suitable version of the image.&lt;/p&gt;

&lt;p&gt;Image formats illustrate the case: if the browser doesn’t accept the AVIF format, it will receive the image in WebP; if the browser supports neither AVIF nor WebP, it will get JPEG.&lt;/p&gt;
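
&lt;p&gt;In HTTP terms, this negotiation is driven by the &lt;code&gt;Accept&lt;/code&gt; request header. A sketch of a typical exchange (headers abridged) might look like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;GET /site/image-stack/transform.png HTTP/1.1
Accept: image/avif,image/webp,*/*;q=0.8

HTTP/1.1 200 OK
Content-Type: image/webp
Vary: Accept
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;The &lt;code&gt;Vary: Accept&lt;/code&gt; response header tells caches to store separate variants of the image per accepted format.&lt;/p&gt;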

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--bvy0IfGN--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/krz1kuuzfi0r6n5hp8z7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--bvy0IfGN--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/krz1kuuzfi0r6n5hp8z7.png" alt="How a CDN speeds up content delivery" width="800" height="290"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;Figure 3. How content delivery works in an image CDN&lt;/em&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  When Is an Image CDN Useful?
&lt;/h2&gt;

&lt;p&gt;An image CDN is most useful for web services in which visual media plays a key role in connecting clients with products. Usually, these are image-rich websites from industries such as:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;E-commerce and retail &lt;/li&gt;
&lt;li&gt;Digital media and blogs&lt;/li&gt;
&lt;li&gt;Online travel agencies (OTAs)&lt;/li&gt;
&lt;li&gt;Real estate websites&lt;/li&gt;
&lt;li&gt;Classified ads websites&lt;/li&gt;
&lt;li&gt;Stock photo libraries&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;There are several distinctive use cases in which this solution can be beneficial. Let’s check them out in more depth.&lt;/p&gt;

&lt;h3&gt;
  
  
  Product Images for Online Commerce
&lt;/h3&gt;

&lt;p&gt;Appealing product images are crucial for online shops, retailers, and any other commercial services on the web. They should clearly show every small detail when users zoom in, which means you need to upload images in high resolution.&lt;/p&gt;

&lt;p&gt;And here’s where the dilemma occurs: Hi-res pictures increase the file size and degrade the webpage speed, while lightweight images usually look less attractive and engaging.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--EMeSPy4G--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/hec87bsrornn9j6cmsod.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--EMeSPy4G--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/hec87bsrornn9j6cmsod.png" alt="Without an image CDN, an image at 960 KB is clear and sharp, whereas an image at 80 KB is pixelated." width="800" height="290"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;Figure 4. The traditional trade-off between high quality and fast loading product images&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;An image CDN solves this challenge elegantly. Using modern image formats like WebP and AVIF, an image CDN balances image size against quality, allowing the best of both worlds. It can compress a 960 KB image down to around 80 KB while preserving its visible quality, bypassing the problem presented in Figure 4.&lt;/p&gt;
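
&lt;p&gt;Because each size is just a URL parameter, an image CDN also pairs naturally with responsive images in HTML. A sketch, assuming a hypothetical “product.png” and the URL style shown earlier:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&lt;img src="https://assets.gcore.pro/site/image-stack/product.png?width=800"
     srcset="https://assets.gcore.pro/site/image-stack/product.png?width=400 400w,
             https://assets.gcore.pro/site/image-stack/product.png?width=800 800w"
     sizes="(max-width: 600px) 400px, 800px"
     alt="Product photo"&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Here the browser picks the smallest candidate from &lt;code&gt;srcset&lt;/code&gt; that satisfies the &lt;code&gt;sizes&lt;/code&gt; rule, and the CDN generates that variant on demand.&lt;/p&gt;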

&lt;h3&gt;
  
  
  User-Generated Content Processing
&lt;/h3&gt;

&lt;p&gt;Websites like classified ads, real estate listings, or photo galleries often employ a crowdsourcing approach to bring images to their services. As a result, they must handle a huge amount of incoming content generated and uploaded by their users.&lt;/p&gt;

&lt;p&gt;Since each image may have many preview versions (like thumbnails), it’s almost impossible to manage all these pictures manually. That’s where an image CDN service comes in handy. Building an integration between your image-uploading services and the image CDN can streamline the process and automate all the operations.&lt;/p&gt;

&lt;p&gt;Unsplash.com is a well-known free photo library fueled by user-generated content. If you browse different versions of the same picture on the site, you’ll notice different image CDN commands embedded in each URL.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--P5954y_g--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/1s41d719iifqoadcshq5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--P5954y_g--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/1s41d719iifqoadcshq5.png" alt="Unsplash.com allows users to upload photos, and optimizes their delivery via an image CDN" width="800" height="344"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;Figure 5. Unsplash.com is a user-upload photo library that uses an image CDN&lt;/em&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  General Website Maintenance
&lt;/h3&gt;

&lt;p&gt;An image CDN can also simplify operations for websites outside image-rich categories like e-commerce or photo stocks. Here are some examples of how this mechanism may be useful for any website admin.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;No pre-upload editing&lt;/strong&gt;. An image CDN allows you to forget about tedious Photoshop-related rituals and tasks for your design team. Just upload a picture to the web hosting (or storage) and set all necessary parameters in its URL.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Streamlined web design operations&lt;/strong&gt;. You need only store the original image without preparing different copies for each possible scenario. This helps during mobile and tablet optimization and makes it easier to apply changes if you need to update the design urgently.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Decreased bandwidth costs&lt;/strong&gt;. On average, images account for 50-60% of website traffic. By decreasing the size of each picture, you reduce the overall volume of traffic passing through your CDN servers. Less traffic means less money paid for the service.&lt;/p&gt;
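
&lt;p&gt;A rough back-of-the-envelope calculation shows the effect. Assume, for illustration, 1 TB of monthly traffic, images making up 55% of it (within the 50-60% average above), and 65% average savings from AVIF conversion (within the 60-70% range cited below):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;1 TB monthly traffic × 55% images  = 550 GB of image traffic
550 GB × 65% average AVIF savings  ≈ 358 GB less traffic per month
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;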

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--7f_0fhRV--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/czzabwwshodvvvl4wc07.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--7f_0fhRV--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/czzabwwshodvvvl4wc07.png" alt="An image CDN reduces bandwidth costs by decreasing overall traffic volume" width="800" height="305"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;Figure 6. How an image CDN reduces bandwidth costs&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;To learn more about how image optimization could be useful for your website maintenance, check out our &lt;a href="https://gcore.com/blog/image-optimization-use-cases/"&gt;article on image optimization use cases&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Additional Benefits of Using Image CDN
&lt;/h2&gt;

&lt;p&gt;Beyond its image-related features, an image CDN offers a wealth of additional benefits that come from its conventional CDN capabilities.&lt;/p&gt;

&lt;h3&gt;
  
  
  Overall Web Performance Improvement
&lt;/h3&gt;

&lt;p&gt;A CDN accelerates static asset delivery at a global scale. It improves performance not only by reducing the file size of images, but also by decreasing the physical distance between a server and user. This facilitates faster connection establishment (lower TTFB) and faster data transmission.&lt;/p&gt;

&lt;p&gt;A CDN’s dedicated network capacity and additional connectivity can prevent network congestion and make routing faster. The effect will be amplified globally if your audience is located all around the world.&lt;/p&gt;

&lt;h3&gt;
  
  
  Better Availability
&lt;/h3&gt;

&lt;p&gt;Another native feature of a CDN is its ability to mitigate traffic spikes during peak usage times and DDoS attacks. Using a global network of edge servers can prevent disruptions to your web services and can help keep your business available on the web—even under harsh conditions.&lt;/p&gt;

&lt;h3&gt;
  
  
  Data Security
&lt;/h3&gt;

&lt;p&gt;A properly connected CDN means that the majority of your website traffic goes through it. To ensure your CDN’s security and protect it from malicious activity, many CDNs come with embedded web security mechanisms such as SSL/TLS data encryption, access control rules and policies, and even a web application firewall (WAF) to prevent hacking.&lt;/p&gt;

&lt;h2&gt;
  
  
  Choosing an Image CDN Provider
&lt;/h2&gt;

&lt;p&gt;If you’re considering an image CDN provider for your company’s next project, look for these key features to ensure you’re getting the best service:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Variety of image-transformation features&lt;/strong&gt;. Consider which image-transformation features you need. Most websites don't need photo blurring, watermarking, or image rotation, but for some web services, such tooling could be a major advantage and may save thousands of dollars per month.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;WebP, AVIF and image compression&lt;/strong&gt;. The most common reason people seek image optimization is to reduce file size for better web performance. For example, delivering images in AVIF format can save an average of 60-70% of the original file size.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Content delivery capabilities&lt;/strong&gt;. A robust CDN service with many points of presence across the globe can reduce website loading time, improve availability of your web service, and make it more secure.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Extended API&lt;/strong&gt;. An API is essential for automating processes and operations. In terms of image optimization, extended API capabilities are necessary for processing user-generated content on services such as classified ads, photo galleries, and user review platforms.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Third-party or own infrastructure&lt;/strong&gt;. In many cases, it’s not a big deal who’s behind the infrastructure of your image CDN. But if you’re running a large web service with strict SLAs, it makes sense to ensure that your data is processed properly and that your service provider has full control over its servers.&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Image Stack by Gcore
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://gcore.com/image-stack"&gt;Image Stack&lt;/a&gt; is an image optimization module running on top of the Gcore CDN, a robust content delivery platform with hundreds of edge servers across the globe. It provides WebP, AVIF transformation, image quality control, resizing, and other manipulations by simply adding a few query strings to your URLs.​ Try our service for free, with 100,000 free image optimization requests per month included.&lt;/p&gt;

</description>
      <category>webdev</category>
      <category>tutorial</category>
      <category>performance</category>
      <category>productivity</category>
    </item>
    <item>
      <title>How to add a video call feature to your iOS app in 15 minutes</title>
      <dc:creator>Dmitri Tkach</dc:creator>
      <pubDate>Tue, 27 Jun 2023 06:29:34 +0000</pubDate>
      <link>https://forem.com/gcoreofficial/how-to-add-a-video-call-feature-to-your-ios-app-in-15-minutes-p47</link>
      <guid>https://forem.com/gcoreofficial/how-to-add-a-video-call-feature-to-your-ios-app-in-15-minutes-p47</guid>
      <description>&lt;p&gt;In this article, we’ll show you how to integrate video calls into your iOS app in 15 minutes. You don’t have to implement the entire WebRTC stack from scratch; you can just use a ready-made SDK.&lt;/p&gt;

&lt;p&gt;Here is what the result will look like:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--pUwjTSDZ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_66%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/psy2dwvyxaeotk7l6bo4.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--pUwjTSDZ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_66%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/psy2dwvyxaeotk7l6bo4.gif" alt="Image description" width="240" height="530"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You’ll implement the business logic and interface. The video call function will be integrated using GCoreVideoCallsSDK, a Gcore framework that takes care of creating and interacting with sockets and WebRTC, connecting to or creating a video call room, and interacting with the server.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Note&lt;/strong&gt;: This article is part of a series about working with iOS. In other articles, we show you how to &lt;a href="https://gcore.com/blog/how-to-create-a-mobile-streaming-app-on-ios/"&gt;create a mobile streaming app on iOS&lt;/a&gt;, and how to &lt;a href="https://gcore.com/blog/add-vod-uploading-feature-to-ios-app/"&gt;add VOD uploading&lt;/a&gt; and &lt;a href="https://gcore.com/blog/add-smooth-scrolling-vod-feature-to-ios-app/"&gt;smooth scrolling VOD&lt;/a&gt; features to an existing app.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  What functions you can add with the help of this guide
&lt;/h2&gt;

&lt;p&gt;The solution includes the following:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Video calling with the camera and microphone.&lt;/li&gt;
&lt;li&gt;Showing the conversation participants using your designs.&lt;/li&gt;
&lt;li&gt;Sending a video stream from the built-in camera to the server.&lt;/li&gt;
&lt;li&gt;Receiving a video stream from the server.&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  The solution architecture
&lt;/h2&gt;

&lt;p&gt;Here is what the solution architecture for video calls looks like:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--b8mX72Kj--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/741a2do2crbtp03g1d0d.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--b8mX72Kj--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/741a2do2crbtp03g1d0d.png" alt="Image description" width="800" height="441"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  How to integrate the video calling feature into your app
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Step 1: Prep
&lt;/h3&gt;

&lt;p&gt;The application must connect to the room and show the user and speakers on the screen. To accomplish this, we’ll use UICollectionView and UICollectionViewCell to display the participants, and UIView to display the local user. WebRTC provides &lt;code&gt;RTCEAGLVideoView&lt;/code&gt; to display video streams in the application. We’ll also create a small model to store data and link it all together; a UIViewController will interact with the SDK.&lt;/p&gt;

&lt;p&gt;The final application will look something like this:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--pWs9Ck39--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/qgpvvxh0h5if74tw7xwu.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--pWs9Ck39--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/qgpvvxh0h5if74tw7xwu.png" alt="Image description" width="700" height="1238"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;First, you’ll need to install all the dependencies and request camera and microphone recording permissions from the user.&lt;/p&gt;

&lt;h4&gt;
  
  
  Dependencies
&lt;/h4&gt;

&lt;p&gt;Use CocoaPods to install &lt;a href="https://github.com/ethand91/mediasoup-ios-client"&gt;Mediasoup iOS Client&lt;/a&gt; and &lt;a href="https://github.com/G-Core/ios-video-calls-SDK"&gt;GcoreVideoCallsSDK&lt;/a&gt; by specifying the following in your Podfile:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;source 'https://github.com/G-Core/ios-video-calls-SDK.git'

... 

pod "mediasoup_ios_client", '1.5.3' 
pod "GCoreVideoCallsSDK", '2.6.0'
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  Permissions
&lt;/h4&gt;

&lt;p&gt;To allow the application to access the camera and microphone, specify the permissions in the project’s Info.plist:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;NSMicrophoneUsageDescription&lt;/code&gt; (Privacy – Microphone Usage Description)&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;NSCameraUsageDescription&lt;/code&gt; (Privacy – Camera Usage Description)&lt;/li&gt;
&lt;/ul&gt;
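
&lt;p&gt;In the Info.plist source, these entries look like the fragment below. The description strings here are only examples; replace them with text explaining your app’s actual usage, since iOS shows them in the permission prompt.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&lt;key&gt;NSMicrophoneUsageDescription&lt;/key&gt;
&lt;string&gt;The app needs microphone access for video calls.&lt;/string&gt;
&lt;key&gt;NSCameraUsageDescription&lt;/key&gt;
&lt;string&gt;The app needs camera access for video calls.&lt;/string&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;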

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--0z1WhSdU--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ph4y2y851nhcx8b3o18y.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--0z1WhSdU--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ph4y2y851nhcx8b3o18y.png" alt="Image description" width="800" height="142"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 2: Create the UI
&lt;/h3&gt;

&lt;h4&gt;
  
  
  Model
&lt;/h4&gt;

&lt;p&gt;A model is required to store the data; in this case, user data.&lt;/p&gt;

&lt;p&gt;Create a model file and import GCoreVideoCallsSDK into it. To store user data, create the VideoCallUnit structure, which will contain the user’s ID and name, and RTCEAGLVideoView, which is assigned to it. Here is what the whole file will look like:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import GCoreVideoCallsSDK 

final class Model { 
    var localUser: VideoCallUnit? 
    var remoteUsers: [VideoCallUnit] = [] 
} 

struct VideoCallUnit { 
    let peerID: String 
    let name: String 
    let view = RTCEAGLVideoView() 
} 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  CollectionCell
&lt;/h4&gt;

&lt;p&gt;CollectionCell will be used to display the speaker.&lt;/p&gt;

&lt;p&gt;Cells are reused and can show a large number of different speakers over their lifetime. For this to work, we need a mechanism that removes the previous video view from the cell, attaches the new one, and sets its position on the screen. To implement this mechanism, create a CollectionCell class that inherits from UICollectionViewCell. This class will contain only one property: &lt;em&gt;rtcView&lt;/em&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import GCoreVideoCallsSDK 

final class CollectionCell: UICollectionViewCell { 
   weak var rtcView: RTCEAGLVideoView? { 
       didSet { 
           oldValue?.removeFromSuperview() 
           guard let rtcView = rtcView else { return } 
           rtcView.frame = self.bounds 
           addSubview(rtcView) 
       } 
   } 
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  ViewController
&lt;/h4&gt;

&lt;p&gt;Set up a controller to manage the entire process:&lt;/p&gt;

&lt;p&gt;Import GCoreVideoCallsSDK.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;   import GCoreVideoCallsSDK
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Create a model property.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;   let model = Model()
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Create the property cellID to hold the cell reuse identifier, and the lazy property collectionView to manage the cells (lazy so that it can reference cellID, view, and self). Set up the collection layout, assign the controller as &lt;em&gt;dataSource&lt;/em&gt;, and register the cell.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;lazy var collectionView: UICollectionView = { 
    let layout = UICollectionViewFlowLayout() 
    layout.itemSize.width = UIScreen.main.bounds.width - 100 
    layout.itemSize.height = layout.itemSize.width 
    layout.minimumInteritemSpacing = 10 

    let collection = UICollectionView(frame: view.bounds, collectionViewLayout: layout) 
    collection.backgroundColor = .white 
    collection.dataSource = self 
    collection.register(CollectionCell.self, forCellWithReuseIdentifier: cellID) 

    return collection 
}()
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Create the localView property. The layout for this view needs to be defined in code; to do this, create the method initConstraints.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;let localView: UIView = { 
    let view = UIView(frame: .zero) 
    view.translatesAutoresizingMaskIntoConstraints = false 
    view.layer.cornerRadius = 10  
    view.backgroundColor = .black 
    view.clipsToBounds = true  

    return view 
}() 

func initConstraints() { 
     NSLayoutConstraint.activate([ 
      localView.widthAnchor.constraint(equalToConstant: UIScreen.main.bounds.width / 3), 
      localView.heightAnchor.constraint(equalTo: localView.widthAnchor, multiplier: 4/3), 
      localView.leftAnchor.constraint(equalTo: view.leftAnchor, constant: 5), 
      localView.bottomAnchor.constraint(equalTo:  view.bottomAnchor, constant: -5) 
    ]) 
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Create the gcMeet property.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;var gcMeet = gcMeet.shared
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In the &lt;em&gt;viewDidLoad&lt;/em&gt; method, add collectionView and localView to the main view and initialize the constraints.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;override func viewDidLoad() { 
    super.viewDidLoad() 

    view.backgroundColor = .white 
    view.addSubview(collectionView) 
    view.addSubview(localView) 

    initConstraints() 
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Create an extension to conform the controller to UICollectionViewDataSource. This is necessary to link the UI to the model and configure cells as they appear.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;extension ViewController: UICollectionViewDataSource { 
    func collectionView(_ collectionView: UICollectionView, numberOfItemsInSection section: Int) -&amp;gt; Int { 
        // Get remote peers count
        return model.remoteUsers.count
    } 

    func collectionView(_ collectionView: UICollectionView, cellForItemAt indexPath: IndexPath) -&amp;gt; UICollectionViewCell { 
        let cell = collectionView.dequeueReusableCell(withReuseIdentifier: cellID, for: indexPath) as! CollectionCell 
        cell.rtcView = model.remoteUsers[indexPath.row].view 
        cell.layer.cornerRadius = 10 
        cell.backgroundColor = .black 
        cell.clipsToBounds = true 

        return cell 
    } 
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The UI is now ready to display the call. Now we need to set up the relationship with the SDK.&lt;/p&gt;

&lt;h4&gt;
  
  
  Initializing GCoreMeet
&lt;/h4&gt;

&lt;p&gt;The connection to the server is made through &lt;em&gt;GCoreMeet.shared&lt;/em&gt;. The parameters for the local user, room, and camera will be passed to it. You also need to call the method that activates the audio session, which allows connected headphones to be picked up. The SDK passes data from the server to the application through listeners: &lt;em&gt;RoomListener&lt;/em&gt; and &lt;em&gt;ModeratorListener&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;Add all this to the viewDidLoad method:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;override func viewDidLoad() { 
    super.viewDidLoad() 

    view.backgroundColor = .white 
    view.addSubview(collectonView) 
    view.addSubview(localView) 

    let userParams = GCoreLocalUserParams(name: "EvgenMob", role: .moderator) 
    let roomParams = GCoreRoomParams(id: "serv1z3snbnoq") 
    gcMeet.connectionParams = (userParams, roomParams) 

    gcMeet.audioSessionActivate() 

    gcMeet.moderatorListener = self 
    gcMeet.roomListener = self 

    try? gcMeet.startConnection() 

    initConstraints() 
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The &lt;em&gt;roomID&lt;/em&gt; can be taken from &lt;a href="https://meet.gcore.com"&gt;https://meet.gcore.com&lt;/a&gt; by clicking on the &lt;em&gt;Create a room for free&lt;/em&gt; button and selecting a conference.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--ql5geSI6--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/fx7dnzulxuxzboinyb86.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--ql5geSI6--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/fx7dnzulxuxzboinyb86.png" alt="Image description" width="800" height="575"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In the RoomListener methods, you will also receive video streams (both your own and remote users’), audio streams, and information about users and moderator actions, which will allow you to render the necessary UI. This is covered later in the article.&lt;/p&gt;

&lt;p&gt;Through the ModeratorListener, you will receive requests from other users to enable streams and be notified when new users join the waiting room. The &lt;code&gt;MeetRoomParameters&lt;/code&gt; can also use the following parameters:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;clientHostName&lt;/code&gt;: You can leave this &lt;code&gt;nil&lt;/code&gt;, in which case the default is meet.gcorelabs.com.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;peerId&lt;/code&gt;: If you leave this nil, the ID will be auto-generated by the SDK.&lt;/li&gt;
&lt;/ul&gt;
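
&lt;p&gt;For illustration, a connection setup that passes these optional parameters explicitly might look like the sketch below. The initializer labels here are assumptions for readability; check the SDK readme for the exact signature:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// Hypothetical sketch: passing the optional room parameters explicitly.
// Label names may differ in the actual SDK.
let userParams = GCoreLocalUserParams(name: "EvgenMob", role: .moderator)
let roomParams = GCoreRoomParams(
    id: "serv1z3snbnoq",
    clientHostName: nil, // nil: defaults to meet.gcorelabs.com
    peerId: nil          // nil: auto-generated by the SDK
)
gcMeet.connectionParams = (userParams, roomParams)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;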

&lt;h3&gt;
  
  
  Step 3: Interacting with the Gcore server
&lt;/h3&gt;

&lt;p&gt;Interaction with the server is carried out through an object that subscribes to the RoomListener protocol. It has a number of methods; you can find more details about them, and about the SDK itself, in the &lt;a href="https://github.com/G-Core/ios-video-calls-SDK/blob/main/README.md"&gt;SDK readme&lt;/a&gt;. Below is an image of what this interaction looks like:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Gn0bz9Wf--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/fics3bqgwly3xdfyfl8f.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Gn0bz9Wf--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/fics3bqgwly3xdfyfl8f.png" alt="Image description" width="800" height="577"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;For the simplest implementation, you need only a few methods.&lt;br&gt;
First, subscribe the controller to the RoomListener protocol:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;extension ViewController: RoomListener { 

}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Add the required methods to the extension.&lt;/p&gt;

&lt;p&gt;The first method is called when joining a room and sends initial data related to room permissions, a list of participants, and information about the local user.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;func roomClientHandle(_ client: GCoreRoomClient, forAllRoles joinData: GCoreJoinData)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We’re interested in the user list:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// To get data at the moment of entering the room
func roomClientHandle(_ client: GCoreRoomClient, forAllRoles joinData: GCoreJoinData) { 
    switch joinData { 
    case .othersInRoom(let remoteUsers): 
        remoteUsers.forEach { 
            model.remoteUsers += [ .init(peerID: $0.id, name: $0.displayName ?? "") ] 
        } 
        collectonView.reloadData() 
    default: 
        break 
    } 
 }
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The next method is called when the connection status of the room changes.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;func roomClientHandle(_ client: GCoreRoomClient, connectionEvent: GCoreRoomConnectionEvent)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The device’s camera and microphone will turn on once successfully connected.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// To update the status of the connection to the room
func roomClientHandle(_ client: GCoreRoomClient, forAllRoles joinData: GCoreJoinData) { 
    switch joinData { 
    case othersInRoom(remoteUsers: [GCoreRemoteUser]): 
        remoteUsers.forEach { 
            model.remoteUsers += [ .init(peerID: $0.id, name: $0.displayName ?? "") ] 
        } 
        collectonView.reloadData() 
    default: 
        break 
    } 
 }
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The next method is called for events related to remote user data but not related to media.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;func roomClientHandle(_ client: GCoreRoomClient, remoteUsersEvent: GCoreRemoteUsersEvent) 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You will use it to add and remove users during the call.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// To respond to actions related to remote users, not related to video/audio streams
func roomClientHandle(_ client: GCoreRoomClient, remoteUsersEvent: GCoreRemoteUsersEvent) { 
    switch remoteUsersEvent { 
    case .handleRemote(let user): 
        model.remoteUsers += [.init(peerID: user.id, name: user.displayName ?? "")] 
        collectonView.reloadData() 

    case .closedRemote(let userId): 
        model.remoteUsers.removeAll(where: { $0.peerID == userId }) 
        collectonView.reloadData() 

    default: 
        break 
    } 
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The last method is called when the SDK is ready to provide the user’s video stream and when videos from the peers arrive.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;func roomClientHandle(_ client: GCoreRoomClient, mediaEvent: GCoreMediaEvent)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You’ll use it to render the UI.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// To respond to receiving and disabling video/audio streams
func roomClientHandle(_ client: GCoreRoomClient, mediaEvent: GCoreMediaEvent) { 
    switch mediaEvent { 
    case .produceLocalVideo(let track): 
        guard let localUser = model.localUser else { return } 
        track.add(localUser.view) 
        localUser.view.frame = self.localView.bounds 
        localView.addSubview(localUser.view) 

    case .handledRemoteVideo(let videoObject): 
        guard let user = model.remoteUsers.first(where: { $0.peerID == videoObject.peerId }) else { return } 
        videoObject.rtcVideoTrack.add(user.view) 

    default: 
        break 
    } 
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Let &lt;em&gt;Xcode&lt;/em&gt; generate stubs for the remaining protocol methods and leave them without functionality.&lt;/p&gt;

&lt;p&gt;Setup is complete! The project is ready to run on a real device (running in the iOS Simulator is not recommended).&lt;/p&gt;

&lt;h2&gt;
  
  
  Results
&lt;/h2&gt;

&lt;p&gt;Your app now has a video calling feature. Here is what the implementation of this feature looks like in our &lt;a href="https://github.com/G-Core/ios-demo-video-calls"&gt;demo application&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Cs3xtIcN--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/prrz4bor4w3xpxlxvfiu.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Cs3xtIcN--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/prrz4bor4w3xpxlxvfiu.png" alt="Image description" width="800" height="570"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Developer notes
&lt;/h2&gt;

&lt;p&gt;You can get a &lt;code&gt;pixelBuffer&lt;/code&gt; from the SDK, which contains a reference to the image frame received from the camera, so you can do whatever you like with it. To do this, subscribe the controller to &lt;code&gt;MediaCapturerBufferDelegate&lt;/code&gt; and implement the &lt;code&gt;mediaCapturerDidBuffer&lt;/code&gt; method. The example below adds a blur before sending the frame to the server:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;extension ViewController: MediaCapturerBufferDelegate { 
    func mediaCapturerDidBuffer(_ pixelBuffer: CVPixelBuffer) { 
        let image = CIImage(cvPixelBuffer: pixelBuffer).applyingGaussianBlur(sigma: 10) 
        CIContext().render(image, to: pixelBuffer) 
    } 
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Using the SDK and Gcore services, you can easily and quickly integrate video calling functionality into your application. Users will be pleased; they’re used to video calls in popular services like Instagram, WhatsApp, and Facebook, and now they will see a familiar feature in your application.&lt;/p&gt;

&lt;p&gt;You can check out the source code for our project here: &lt;a href="https://github.com/G-Core/ios-demo-video-calls"&gt;ios-demo-video-calls&lt;/a&gt;. There you can also take a peek at the implementation of other methods, moderator mode, screen preview, etc.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/ethand91/mediasoup-ios-client"&gt;Mediasoup iOS Client&lt;/a&gt; was used to implement WebRTC on iOS.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/G-Core/ios-video-calls-SDK"&gt;GcoreVideoCallsSDK&lt;/a&gt; is used to connect and interact with the room, as well as to create sockets.&lt;/p&gt;

&lt;p&gt;The article is based on the &lt;a href="https://github.com/G-Core/ios-demo-video-calls"&gt;GcoreVideoCalls&lt;/a&gt; application.&lt;/p&gt;

</description>
      <category>ios</category>
      <category>webrtc</category>
    </item>
    <item>
      <title>Interacting with Gcore API Using JetBrains HTTP Client</title>
      <dc:creator>Rizal Gowandy</dc:creator>
      <pubDate>Thu, 22 Jun 2023 11:29:28 +0000</pubDate>
      <link>https://forem.com/gcoreofficial/interacting-with-gcore-api-using-jetbrains-http-client-nla</link>
      <guid>https://forem.com/gcoreofficial/interacting-with-gcore-api-using-jetbrains-http-client-nla</guid>
      <description>&lt;p&gt;Testing API requests is an essential aspect of developing and maintaining a RESTful API. Testing requests for your RESTful API is crucial for ensuring its functionality, reliability, performance, and adherence to specifications. It helps in identifying and addressing issues early in the development process, leading to a more stable and robust API that delivers the expected results to its consumers. In this blog, we’ll guide you through using the JetBrains HTTP client to interact with Gcore’s API, enhancing your development workflow through greater efficiency.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why Use JetBrains HTTP Client?
&lt;/h2&gt;

&lt;p&gt;If you're looking for an efficient way to test your REST APIs or send HTTP requests, JetBrains HTTP client is an excellent option. The JetBrains HTTP client is an integrated tool available within JetBrains IDEs like IntelliJ IDEA, PyCharm, and WebStorm. It facilitates the testing, debugging, and analysis of RESTful APIs by allowing developers to create, execute, and inspect HTTP requests directly from their IDEs. With features like request creation, request history, response inspection, syntax highlighting, code snippet generation, variable substitution, test scripts, and authentication support, the JetBrains HTTP client provides a convenient and efficient way for developers to interact with APIs during development and testing. It enhances the development workflow by eliminating the need for external tools and ensuring seamless integration within the IDE environment.&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 1: Install the JetBrains HTTP Client
&lt;/h2&gt;

&lt;p&gt;The HTTP client is built into JetBrains IntelliJ IDEA, PhpStorm, WebStorm, and PyCharm, so you don’t need to install anything extra to start using it.&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 2: Create a New HTTP Request File
&lt;/h2&gt;

&lt;p&gt;After installing the JetBrains HTTP client, open your IDE and create a new file by selecting &lt;strong&gt;File → New → HTTP Request&lt;/strong&gt;. Here’s an example of a health check API provided by Gcore:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;### HEALTH CHECK

GET https://api.gcore.com/dns/healthcheck
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;A health check API is an endpoint that provides information about the health and status of an application or service. It allows you to programmatically assess the availability and readiness of critical components and determine whether your application is functioning properly.&lt;/p&gt;

&lt;p&gt;Health check APIs are commonly used in conjunction with monitoring and alerting systems to proactively detect issues and ensure system reliability. They are especially useful in distributed and microservices architectures, and are often integrated with containerization or orchestration systems.&lt;/p&gt;
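
&lt;p&gt;The HTTP client also supports response handler scripts, so a request can double as a small automated check. For example, asserting that the health check responds with a 200 status:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;### HEALTH CHECK WITH ASSERTION

GET https://api.gcore.com/dns/healthcheck

&amp;gt; {%
client.test("health check responds with 200", function() {
  client.assert(response.status === 200, "unexpected status: " + response.status);
});
%}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;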

&lt;h2&gt;
  
  
  Step 3: Add Request Headers
&lt;/h2&gt;

&lt;p&gt;Request headers contain additional information about the request, such as the type of data being sent or authentication credentials. To add headers, click the &lt;strong&gt;Headers&lt;/strong&gt; tab in the request window and provide the required information. Some of Gcore’s APIs require an authorization header. Here’s an example of the GET zone API that requires one:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;### GET ZONE

GET https://api.gcore.com/dns/v2/zones?limit=10
Authorization: Bearer REPLACE_WITH_YOUR_TOKEN
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
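
&lt;p&gt;To avoid hard-coding the token in the request file, you can use the HTTP client’s environment files. Put the token into &lt;code&gt;http-client.private.env.json&lt;/code&gt; next to the request file (the value below is a placeholder, and this file should stay out of version control):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
  "dev": {
    "token": "REPLACE_WITH_YOUR_TOKEN"
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Then reference the variable in the request and pick the &lt;em&gt;dev&lt;/em&gt; environment when running it:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;### GET ZONE

GET https://api.gcore.com/dns/v2/zones?limit=10
Authorization: Bearer {{token}}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;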



&lt;h2&gt;
  
  
  Step 4: Add Request Body
&lt;/h2&gt;

&lt;p&gt;If you need to send data with your request, you can add it to the request body. Click the &lt;strong&gt;Body&lt;/strong&gt; tab, choose the type of data you want to send (JSON, XML, etc.), and enter your data in the editor. Here’s an example of an API request to create a new zone on Gcore:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;### CREATE ZONE

POST https://api.gcore.com/dns/v2/zones
Authorization: Bearer REPLACE_WITH_YOUR_TOKEN
Content-Type: application/json

{
  "name": "one.gcdn.co",
  "primary_server": "ns1.example.com",
  "serial": 1,
  "nx_ttl": 1
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Step 5: Send the Request
&lt;/h2&gt;

&lt;p&gt;Once you have completed the request configuration, click &lt;strong&gt;Run&lt;/strong&gt; to send the request. To see the response, check the Response tab, which shows the response status code, headers, and body.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;GET https://api.gcore.com/dns/healthcheck

HTTP/1.1 200 OK
Server: nginx
Date: Tue, 20 Jun 2023 07:32:04 GMT
Content-Type: application/json; charset=utf-8
Content-Length: 124
Connection: keep-alive
Content-Encoding: deflate
Vary: Accept-Encoding
Strict-Transport-Security: max-age=15724800; includeSubDomains
Cache: MISS
X-ID: ed-up-gc38
X-NGINX: nginx-be
Accept-Ranges: bytes
X-ID-FE: ed-up-gc38

{
  "app": "gcdn-dns-api",
  "hash": "0b5371a",
  "postgres_ping_success": true,
  "uptime": "191h39m45.146275442s",
  "uptime_seconds": 689985,
  "version": "0b5371a"
}

Response file saved.
&amp;gt; 2023-06-20T143205.200.json

Response code: 200 (OK); Time: 790ms (790 ms); Content length: 145 bytes (145 B)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;JetBrains HTTP client is a useful tool for testing REST APIs and making HTTP requests. It is easy to use, informative, and convenient for everyday work. By following the steps outlined in this tutorial, you can easily use the JetBrains HTTP client to test your APIs and analyze responses. &lt;a href="https://apidocs.gcore.com/dns#tag/zones" rel="noopener noreferrer"&gt;Check out Gcore’s API docs&lt;/a&gt; to learn more.&lt;/p&gt;

</description>
      <category>guide</category>
      <category>solution</category>
    </item>
    <item>
      <title>How to add a VOD uploading feature to your iOS app in 15 minutes</title>
      <dc:creator>Lana Krasotskaia</dc:creator>
      <pubDate>Fri, 09 Jun 2023 11:43:40 +0000</pubDate>
      <link>https://forem.com/gcoreofficial/how-to-add-a-vod-uploading-feature-to-your-ios-app-in-15-minutes-4ao</link>
      <guid>https://forem.com/gcoreofficial/how-to-add-a-vod-uploading-feature-to-your-ios-app-in-15-minutes-4ao</guid>
      <description>&lt;p&gt;This is a step-by-step guide on Gcore’s solution for adding a new VOD feature to your iOS application in 15 minutes. The feature allows users to record videos from their phone, upload videos to storage, and play videos in the player inside the app.&lt;/p&gt;

&lt;p&gt;Here is what the result will look like:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--37NwKtaW--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_66%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/am8eof3z3ejw92r5hbih.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--37NwKtaW--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_66%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/am8eof3z3ejw92r5hbih.gif" alt="Image description" width="240" height="518"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This is part of a series of guides about adding new video features to an iOS application. In other articles, we show you how to create a mobile streaming app on iOS, and how to add video call and smooth scrolling VOD features to an existing app.&lt;/p&gt;

&lt;h2&gt;
  
  
  What functions you can add with the help of this guide
&lt;/h2&gt;

&lt;p&gt;The solution includes the following:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Recording: Local video recording directly from the device’s camera; gaining access to the camera and saving raw video to internal storage.&lt;/li&gt;
&lt;li&gt;Uploading to the server: Uploading the recorded video to cloud video hosting, uploading through TUSclient, async uploading, and getting a link to the processed video.&lt;/li&gt;
&lt;li&gt;List of videos: A list of uploaded videos with screenshot covers and text descriptions.&lt;/li&gt;
&lt;li&gt;Player: Playback of the selected video in AVPlayer with ability to cache, play with adaptive bitrate of HLS, rewind, etc.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  How to add the VOD feature
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Step 1: Permissions&lt;/strong&gt;&lt;br&gt;
The project uses additional access rights that need to be specified. These are:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;NSMicrophoneUsageDescription (Privacy: Microphone Usage Description)&lt;/li&gt;
&lt;li&gt;NSCameraUsageDescription (Privacy: Camera Usage Description).&lt;/li&gt;
&lt;/ul&gt;
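
&lt;p&gt;In the Info.plist source, these entries look like this (the description strings are examples; use wording that explains why your app needs access):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;lt;key&amp;gt;NSCameraUsageDescription&amp;lt;/key&amp;gt;
&amp;lt;string&amp;gt;The app uses the camera to record videos.&amp;lt;/string&amp;gt;
&amp;lt;key&amp;gt;NSMicrophoneUsageDescription&amp;lt;/key&amp;gt;
&amp;lt;string&amp;gt;The app uses the microphone to record audio for videos.&amp;lt;/string&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;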

&lt;p&gt;&lt;strong&gt;Step 2: Authorization&lt;/strong&gt;&lt;br&gt;
You’ll need a Gcore account, which can be created in just 1 minute at &lt;a href="https://gcore.com/"&gt;gcore.com&lt;/a&gt;. You won’t need to pay anything; you can test the solution with a free plan.&lt;/p&gt;

&lt;p&gt;To use Gcore services, you’ll need an access token, which comes in the server’s response to the authentication request. Here’s how to get it:&lt;/p&gt;

&lt;p&gt;1) Create a model that will come from the server.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;struct Tokens: Decodable { 
    let refresh: String 
    let access: String 
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;2) Create a common protocol for your requests.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;protocol DataRequest { 
    associatedtype Response 

    var url: String { get } 
    var method: HTTPMethod { get } 
    var headers: [String : String] { get } 
    var queryItems: [String : String] { get } 
    var body: Data? { get } 
    var contentType: String { get } 

    func decode(_ data: Data) throws -&amp;gt; Response 
} 

extension DataRequest where Response: Decodable { 
    func decode(_ data: Data) throws -&amp;gt; Response { 
        let decoder = JSONDecoder() 
        return try decoder.decode(Response.self, from: data) 
    } 
} 

extension DataRequest { 
    var contentType: String { "application/json" } 
    var headers: [String : String] { [:] } 
    var queryItems: [String : String] { [:] } 
    var body: Data? { nil } 
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;3) Create an authentication request.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;struct AuthenticationRequest: DataRequest { 
    typealias Response = Tokens 

    let username: String 
    let password: String 

    var url: String { GсoreAPI.authorization.rawValue } 
    var method: HTTPMethod { .post } 

    var body: Data? { 
       try? JSONEncoder().encode([ 
        "password": password, 
        "username": username, 
       ]) 
    } 
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;4) Then you can use the request in any part of the application, using your preferred approach for your internet connection. For example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;func signOn(username: String, password: String) { 
        let request = AuthenticationRequest(username: username, password: password) 
        let communicator = HTTPCommunicator() 

        communicator.request(request) { [weak self] result in 
            switch result { 
            case .success(let tokens):  
                Settings.shared.refreshToken = tokens.refresh 
                Settings.shared.accessToken = tokens.access 
                Settings.shared.username = username 
                Settings.shared.userPassword = password 
                DispatchQueue.main.async { 
                    self?.view.window?.rootViewController = MainController() 
                } 
            case .failure(let error): 
                self?.errorHandle(error) 
            } 
        } 
    }
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Step 3: Setting up the camera session&lt;/strong&gt;&lt;br&gt;
On iOS, the AVFoundation framework is used to work with the camera. Let’s create a class that will work with the camera at a lower level.&lt;/p&gt;

&lt;p&gt;1) Create a protocol in order to send the path to the recorded fragment and its time to the controller, as well as the enumeration of errors that may occur during initialization. The most common error is that the user did not grant the rights for camera use.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import Foundation 
import AVFoundation 

enum CameraSetupError: Error { 
    case accessDevices, initializeCameraInputs 
} 

protocol CameraDelegate: AnyObject { 
    func addRecordedMovie(url: URL, time: CMTime) 
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;2) Create the camera class with properties and initializer.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;final class Camera: NSObject { 
    var movieOutput: AVCaptureMovieFileOutput! 

    weak var delegate: CameraDelegate? 

    private var videoDeviceInput: AVCaptureDeviceInput! 
    private var rearCameraInput: AVCaptureDeviceInput! 
    private var frontCameraInput: AVCaptureDeviceInput! 
    private let captureSession: AVCaptureSession 

    // There may be errors during initialization; if this happens, the initializer throws an error to the controller 
    init(captureSession: AVCaptureSession) throws { 
        self.captureSession = captureSession 

        //check access to devices and setup them 
        guard let rearCamera = AVCaptureDevice.default(for: .video), 
              let frontCamera = AVCaptureDevice.default(.builtInWideAngleCamera, for: .video, position: .front), 
              let audioInput = AVCaptureDevice.default(for: .audio) 
        else { 
            throw CameraSetupError.accessDevices 
        } 

        do { 
            let rearCameraInput = try AVCaptureDeviceInput(device: rearCamera) 
            let frontCameraInput = try AVCaptureDeviceInput(device: frontCamera) 
            let audioInput = try AVCaptureDeviceInput(device: audioInput) 
            let movieOutput = AVCaptureMovieFileOutput() 

            if captureSession.canAddInput(rearCameraInput), captureSession.canAddInput(audioInput), 
               captureSession.canAddInput(frontCameraInput),  captureSession.canAddOutput(movieOutput) { 

                captureSession.addInput(rearCameraInput) 
                captureSession.addInput(audioInput) 
                self.videoDeviceInput = rearCameraInput 
                self.rearCameraInput = rearCameraInput 
                self.frontCameraInput = frontCameraInput 
                captureSession.addOutput(movieOutput) 
                self.movieOutput = movieOutput 
            } 

        } catch { 
            throw CameraSetupError.initializeCameraInputs 
        } 
    }
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;3) Create the methods. Depending on the user’s actions at the UI layer, the controller will call the corresponding method.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;func flipCamera() { 
        guard let rearCameraIn = rearCameraInput, let frontCameraIn = frontCameraInput else { return } 
        if captureSession.inputs.contains(rearCameraIn) { 
            captureSession.removeInput(rearCameraIn) 
            captureSession.addInput(frontCameraIn) 
        } else { 
            captureSession.removeInput(frontCameraIn) 
            captureSession.addInput(rearCameraIn) 
        } 
    } 

    func stopRecording() { 
        if movieOutput.isRecording { 
            movieOutput.stopRecording() 
        } 
    } 

    func startRecording() { 
        if movieOutput.isRecording == false { 
            guard let outputURL = temporaryURL() else { return } 
            movieOutput.startRecording(to: outputURL, recordingDelegate: self) 
            DispatchQueue.main.asyncAfter(deadline: .now() + 0.1) { [weak self] in 
                guard let self = self else { return } 
                self.timer = Timer.scheduledTimer(timeInterval: 1, target: self, selector: #selector(self.updateTime), userInfo: nil, repeats: true) 
                self.timer?.fire() 
            } 
        } else { 
            stopRecording() 
        } 
    }
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;4) To save a video fragment in memory, you will need a path for it. This method returns this path:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// Creating a temporary storage for the recorded video fragment 
    private func temporaryURL() -&amp;gt; URL? { 
        let directory = NSTemporaryDirectory() as NSString 

        if directory != "" { 
            let path = directory.appendingPathComponent(UUID().uuidString + ".mov") 
            return URL(fileURLWithPath: path) 
        } 

        return nil 
    } 
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;5) Subscribe to the protocol in order to send the path to the controller.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;//MARK: - AVCaptureFileOutputRecordingDelegate 
//When the shooting of one clip ends, it sends a link to the file to the delegate 
extension Camera: AVCaptureFileOutputRecordingDelegate { 
    func fileOutput(_ output: AVCaptureFileOutput, didFinishRecordingTo outputFileURL: URL, from connections: [AVCaptureConnection], error: Error?) { 
        if let error = error { 
            print("Error recording movie: \(error.localizedDescription)") 
        } else { 
            delegate?.addRecordedMovie(url: outputFileURL, time: output.recordedDuration) 
        } 
    } 
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Step 4: Layout for the camera&lt;/strong&gt;&lt;br&gt;
Create a class that will control the camera at the UI level. The user will issue commands through this class, and it will notify its delegate, which passes the appropriate commands to the preceding class.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; You will need to add your own icons or use existing ones in iOS.&lt;/p&gt;

&lt;p&gt;1) Create a protocol so that your view can inform the controller about user actions.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;protocol CameraViewDelegate: AnyObject { 
    func tappedRecord(isRecord: Bool) 
    func tappedFlipCamera() 
    func tappedUpload() 
    func tappedDeleteClip() 
    func shouldRecord() -&amp;gt; Bool 
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;2) Create the camera view class and initialize the necessary properties.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;final class CameraView: UIView { 
    var isRecord = false { 
        didSet { 
            if isRecord { 
                recordButton.setImage(UIImage(named: "pause.icon"), for: .normal) 
            } else { 
                recordButton.setImage(UIImage(named: "play.icon"), for: .normal) 
            } 
        } 
    } 

    var previewLayer: AVCaptureVideoPreviewLayer? 
    weak var delegate: CameraViewDelegate? 

    // declared lazy so that `self` is valid as the button's target 
    lazy var recordButton: UIButton = { 
        let button = UIButton() 
        button.setImage(UIImage(named: "play.icon"), for: .normal) 
        button.imageView?.contentMode = .scaleAspectFit 
        button.addTarget(self, action: #selector(tapRecord), for: .touchUpInside) 
        button.translatesAutoresizingMaskIntoConstraints = false 

        return button 
    }() 

    lazy var flipCameraButton: UIButton = { 
        let button = UIButton() 
        button.setImage(UIImage(named: "flip.icon"), for: .normal) 
        button.imageView?.contentMode = .scaleAspectFit 
        button.addTarget(self, action: #selector(tapFlip), for: .touchUpInside) 
        button.translatesAutoresizingMaskIntoConstraints = false 

        return button 
    }() 

    lazy var uploadButton: UIButton = { 
        let button = UIButton() 
        button.setImage(UIImage(named: "upload.icon"), for: .normal) 
        button.imageView?.contentMode = .scaleAspectFit 
        button.addTarget(self, action: #selector(tapUpload), for: .touchUpInside) 
        button.translatesAutoresizingMaskIntoConstraints = false 

        return button 
    }() 

    let clipsLabel: UILabel = { 
        let label = UILabel() 
        label.textColor = .white 
        label.font = .systemFont(ofSize: 14) 
        label.textAlignment = .left 
        label.text = "Clips: 0" 

        return label 
    }() 

    lazy var deleteLastClipButton: Button = { 
        let button = Button() 
        button.setTitle("", for: .normal) 
        button.setImage(UIImage(named: "delete.left.fill"), for: .normal) 
        button.addTarget(self, action: #selector(tapDeleteClip), for: .touchUpInside) 

        return button 
    }() 

    let recordedTimeLabel: UILabel = { 
        let label = UILabel() 
        label.text = "0s / \(maxRecordTime)s" 
        label.font = .systemFont(ofSize: 14) 
        label.textColor = .white 
        label.textAlignment = .left 

        return label 
    }() 
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;3) Since the view will show the image from the device’s camera, you need to link it to the session and configure it.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt; func setupLivePreview(session: AVCaptureSession) { 
        let previewLayer = AVCaptureVideoPreviewLayer.init(session: session) 
        self.previewLayer = previewLayer 
        previewLayer.videoGravity = .resizeAspectFill 
        previewLayer.connection?.videoOrientation = .portrait 
        layer.addSublayer(previewLayer) 
        // startRunning() blocks the calling thread, so run it off the main queue 
        DispatchQueue.global(qos: .userInitiated).async { 
            session.startRunning() 
        } 
        backgroundColor = .black 
    } 

    // when the size of the view is calculated, we transfer this size to the image from the camera 
    override func layoutSubviews() { 
        super.layoutSubviews() 
        previewLayer?.frame = bounds 
    }
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;4) Create a layout for UI elements.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;    private func initLayout() { 
        [clipsLabel, deleteLastClipButton, recordedTimeLabel].forEach { 
            $0.translatesAutoresizingMaskIntoConstraints = false 
            addSubview($0) 
        } 

        NSLayoutConstraint.activate([ 
            flipCameraButton.topAnchor.constraint(equalTo: topAnchor, constant: 10), 
            flipCameraButton.rightAnchor.constraint(equalTo: rightAnchor, constant: -10), 
            flipCameraButton.widthAnchor.constraint(equalToConstant: 30), 
            flipCameraButton.heightAnchor.constraint(equalToConstant: 30), 

            recordButton.centerXAnchor.constraint(equalTo: centerXAnchor), 
            recordButton.bottomAnchor.constraint(equalTo: bottomAnchor, constant: -5), 
            recordButton.widthAnchor.constraint(equalToConstant: 30), 
            recordButton.heightAnchor.constraint(equalToConstant: 30), 

            uploadButton.leftAnchor.constraint(equalTo: recordButton.rightAnchor, constant: 20), 
            uploadButton.bottomAnchor.constraint(equalTo: bottomAnchor, constant: -5), 
            uploadButton.widthAnchor.constraint(equalToConstant: 30), 
            uploadButton.heightAnchor.constraint(equalToConstant: 30), 

            clipsLabel.leftAnchor.constraint(equalTo: leftAnchor, constant: 5), 
            clipsLabel.centerYAnchor.constraint(equalTo: uploadButton.centerYAnchor), 

            deleteLastClipButton.centerYAnchor.constraint(equalTo: clipsLabel.centerYAnchor), 
            deleteLastClipButton.rightAnchor.constraint(equalTo: recordButton.leftAnchor, constant: -15), 
            deleteLastClipButton.widthAnchor.constraint(equalToConstant: 30), 
            deleteLastClipButton.heightAnchor.constraint(equalToConstant: 30), 

            recordedTimeLabel.topAnchor.constraint(equalTo: layoutMarginsGuide.topAnchor), 
            recordedTimeLabel.leftAnchor.constraint(equalTo: leftAnchor, constant: 5) 
        ]) 
    }
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The result of the layout will look like this:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--ykgzd4cu--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/63lpjthw37jkcx1rxnj2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--ykgzd4cu--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/63lpjthw37jkcx1rxnj2.png" alt="Image description" width="800" height="513"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;5) Add the initializer. The controller will transfer the session in order to access the image from the camera:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;    convenience init(session: AVCaptureSession) { 
        self.init(frame: .zero) 
        setupLivePreview(session: session) 
        addSubview(recordButton) 
        addSubview(flipCameraButton) 
        addSubview(uploadButton) 
        initLayout() 
    }
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;6) Create the methods that will run when the user taps the buttons.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;  @objc func tapRecord() { 
        guard delegate?.shouldRecord() == true else { return } 
        isRecord = !isRecord 
        delegate?.tappedRecord(isRecord: isRecord) 
    } 

    @objc func tapFlip() { 
        delegate?.tappedFlipCamera() 
    } 

    @objc func tapUpload() { 
        delegate?.tappedUpload() 
    } 

    @objc func tapDeleteClip() { 
        delegate?.tappedDeleteClip() 
    } 
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Step 5: Interaction with recorded fragments&lt;/strong&gt;&lt;br&gt;
On an iPhone, the camera records video in fragments. When the user decides to upload the video, you need to combine these fragments into one file and send it to the server. Create another class to handle this task.&lt;/p&gt;

&lt;p&gt;Note: When creating a video, an additional file will be created. This file will collect all the fragments, but at the same time, these fragments will remain in the memory until the line-up is completed. In the worst case, it can cause a lack of memory and crash from the application. To avoid this, we recommend limiting the recording time allowed.&lt;br&gt;
&lt;/p&gt;
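A simple way to enforce such a limit is to refuse to start a new clip once the cap is reached. Below is a minimal sketch; the `maxRecordTime` value (in seconds) and the `canStartNewClip` helper are assumptions for illustration, not part of the demo code:

```swift
import Foundation
import CoreMedia

// Minimal sketch of capping total recorded time; maxRecordTime is an
// assumed constant (seconds), not a value prescribed by the demo.
let maxRecordTime: Double = 60

func canStartNewClip(clips: [(URL, CMTime)]) -> Bool {
    // sum the durations of the clips recorded so far
    let total = clips.reduce(0.0) { $0 + $1.1.seconds }
    // refuse to start another clip once the cap has been reached
    return maxRecordTime > total
}
```

The demo applies the same idea later through the camera view's shouldRecord() check.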

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;`import Foundation 
import AVFoundation 

final class VideoCompositionWriter: NSObject { 
    private func merge(recordedVideos: [AVAsset]) -&amp;gt; AVMutableComposition { 
        //  create empty composition and empty video and audio tracks 
        let mainComposition = AVMutableComposition() 
        let compositionVideoTrack = mainComposition.addMutableTrack(withMediaType: .video, preferredTrackID: kCMPersistentTrackID_Invalid) 
        let compositionAudioTrack = mainComposition.addMutableTrack(withMediaType: .audio, preferredTrackID: kCMPersistentTrackID_Invalid) 

        // to correct video orientation 
        compositionVideoTrack?.preferredTransform = CGAffineTransform(rotationAngle: .pi / 2) 

        // add video and audio tracks from each asset to our composition (across compositionTrack) 
        var insertTime = CMTime.zero 
        for i in recordedVideos.indices { 
            let video = recordedVideos[i] 
            let duration = video.duration 
            let timeRangeVideo = CMTimeRangeMake(start: CMTime.zero, duration: duration) 
            let trackVideo = video.tracks(withMediaType: .video)[0] 
            let trackAudio = video.tracks(withMediaType: .audio)[0] 

            try! compositionVideoTrack?.insertTimeRange(timeRangeVideo, of: trackVideo, at: insertTime) 
            try! compositionAudioTrack?.insertTimeRange(timeRangeVideo, of: trackAudio, at: insertTime) 

            insertTime = CMTimeAdd(insertTime, duration) 
        } 
        return mainComposition 
    } 

    /// Combines all recorded clips into one file 
    func mergeVideo(_ documentDirectory: URL, filename: String, clips: [URL], completion: @escaping (Bool, URL?) -&amp;gt; Void) { 
        var assets: [AVAsset] = [] 
        var totalDuration = CMTime.zero 

        for clip in clips { 
            let asset = AVAsset(url: clip) 
            assets.append(asset) 
            totalDuration = CMTimeAdd(totalDuration, asset.duration) 
        } 

        let mixComposition = merge(recordedVideos: assets) 

        let url = documentDirectory.appendingPathComponent("link_\(filename)") 
        guard let exporter = AVAssetExportSession(asset: mixComposition, presetName: AVAssetExportPresetHighestQuality) else { return } 
        exporter.outputURL = url 
        exporter.outputFileType = .mp4 
        exporter.shouldOptimizeForNetworkUse = true 

        exporter.exportAsynchronously { 
            DispatchQueue.main.async { 
                if exporter.status == .completed { 
                    completion(true, exporter.outputURL) 
                } else { 
                    completion(false, nil) 
                } 
            } 
        } 
    } 
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Step 6: Metadata for the videos&lt;/strong&gt;&lt;br&gt;
There is a specific set of actions for video uploading:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Recording a video&lt;/li&gt;
&lt;li&gt;Using your token and the name of the future video, creating a request to the server to create a container for the video file&lt;/li&gt;
&lt;li&gt;Getting the usual VOD data in the response&lt;/li&gt;
&lt;li&gt;Sending a request for metadata using the token and the VOD ID&lt;/li&gt;
&lt;li&gt;Getting metadata in the response&lt;/li&gt;
&lt;li&gt;Uploading the video via TUSKit using metadata&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Create the requests and their models. You will use Apple’s Decodable protocol with a CodingKeys enumeration for easier data parsing.&lt;/p&gt;

&lt;p&gt;1) Create a model for VOD, which will contain the data that you need.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;struct VOD: Decodable { 
    let name: String 
    let id: Int 
    let screenshot: URL? 
    let hls: URL? 

    enum CodingKeys: String, CodingKey { 
        case name, id, screenshot 
        case hls = "hls_url" 
    } 
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
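Before wiring the request up, the model can be sanity-checked by decoding a hand-written payload. A minimal, self-contained sketch; the sample JSON is an assumption based on the CodingKeys above, not a real API response:

```swift
import Foundation

// Redeclared here so the snippet is self-contained; matches the model above.
struct VOD: Decodable {
    let name: String
    let id: Int
    let screenshot: URL?
    let hls: URL?

    enum CodingKeys: String, CodingKey {
        case name, id, screenshot
        case hls = "hls_url"
    }
}

let json = #"{"name": "my-clip", "id": 42, "hls_url": "https://example.com/v.m3u8"}"#
    .data(using: .utf8)!
// "screenshot" is optional, so its absence from the payload is fine
let vod = try JSONDecoder().decode(VOD.self, from: json)
print(vod.name, vod.id)
```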



&lt;p&gt;2) Create a CreateVideoRequest to create an empty container for the video on the server. The server will respond with the VOD model.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;struct CreateVideoRequest: DataRequest { 
    typealias Response = VOD 

    let token: String 
    let videoName: String 

    var url: String { GcoreAPI.videos.rawValue } 
    var method: HTTPMethod { .post } 

    var headers: [String: String] { 
        [ "Authorization" : "Bearer \(token)" ] 
    } 

    var body: Data? { 
       try? JSONEncoder().encode([ 
        "name": videoName 
       ]) 
    } 
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;3) Create a VideoMetadata model that will contain data for uploading videos from the device to the server and the corresponding request for it.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;struct VideoMetadata: Decodable { 
    struct Server: Decodable { 
        let hostname: String 
    } 

    struct Video: Decodable { 
        let name: String 
        let id: Int 
        let clientID: Int 

        enum CodingKeys: String, CodingKey { 
            case name, id 
            case clientID = "client_id" 
        } 
    } 

    let servers: [Server] 
    let video: Video 
    let token: String 

    var uploadURLString: String { 
        "https://" + (servers.first?.hostname ?? "") + "/upload" 
    } 
} 

// MARK: Request 
struct VideoMetadataRequest: DataRequest { 
    typealias Response = VideoMetadata 

    let token: String 
    let videoId: Int 

    var url: String { GcoreAPI.videos.rawValue + "/\(videoId)/" + "upload" } 
    var method: HTTPMethod { .get } 

    var headers: [String: String] { 
        [ "Authorization" : "Bearer \(token)" ] 
    } 
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Step 7: Putting the pieces together&lt;/strong&gt;&lt;br&gt;
We’ve used the code from our &lt;a href="https://github.com/G-Core/ios-demo-vod-hosting"&gt;demo application&lt;/a&gt; as an example. The controller class is described here with a custom view. It will link the camera and the UI as well as take responsibility for creating requests to obtain metadata and then upload the video to the server.&lt;/p&gt;

&lt;p&gt;Create the view controller. It will display the camera view and a TextField for the video title. This controller has several states (upload, error, common).&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;MainView&lt;/strong&gt;&lt;br&gt;
First, create the view.&lt;/p&gt;

&lt;p&gt;1) Create a delegate protocol to handle changing the name of the video.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;protocol UploadMainViewDelegate: AnyObject { 
    func videoNameDidUpdate(_ name: String) 
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;2) Create the view class and define its states; the camera view itself will be added by the controller.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;final class UploadMainView: UIView { 
    enum State { 
        case upload, error, common 
    } 

    var cameraView: CameraView? { 
        didSet { initLayoutForCameraView() } 
    } 

    var state: State = .common { 
        didSet { 
            switch state { 
            case .upload: showUploadState() 
            case .error: showErrorState() 
            case .common: showCommonState() 
            } 
        } 
    } 

    weak var delegate: UploadMainViewDelegate? 
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;3) Add the initialization of UI elements here, except for the camera view. It will be added by the controller.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;    let videoNameTextField = TextField(placeholder: "Enter the name video") 

    let accessCaptureFailLabel: UILabel = { 
        let label = UILabel() 
        label.text = NSLocalizedString("Error!\nUnable to access capture devices.", comment: "") 
        label.textColor = .black 
        label.numberOfLines = 2 
        label.isHidden = true 
        label.textAlignment = .center 
        return label 
    }() 

    let uploadIndicator: UIActivityIndicatorView = { 
        let indicator = UIActivityIndicatorView(style: .gray) 
        indicator.transform = CGAffineTransform(scaleX: 2, y: 2) 
        return indicator 
    }() 

    let videoIsUploadingLabel: UILabel = { 
        let label = UILabel() 
        label.text = NSLocalizedString("video is uploading", comment: "") 
        label.font = UIFont.systemFont(ofSize: 16) 
        label.textColor = .gray 
        label.isHidden = true 
        return label 
    }()
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;4) Create a layout for the elements. Since the camera view is added later, its layout is placed in a separate method.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt; private func initLayoutForCameraView() { 
        guard let cameraView = cameraView else { return } 
        cameraView.translatesAutoresizingMaskIntoConstraints = false 
        insertSubview(cameraView, at: 0) 

        NSLayoutConstraint.activate([ 
            cameraView.leftAnchor.constraint(equalTo: leftAnchor), 
            cameraView.topAnchor.constraint(equalTo: topAnchor), 
            cameraView.rightAnchor.constraint(equalTo: rightAnchor), 
            cameraView.bottomAnchor.constraint(equalTo: videoNameTextField.topAnchor), 
        ]) 
    } 

    private func initLayout() { 
        let views = [videoNameTextField, accessCaptureFailLabel, uploadIndicator, videoIsUploadingLabel] 
        views.forEach { 
            $0.translatesAutoresizingMaskIntoConstraints = false 
            addSubview($0) 
        } 

        let keyboardBottomConstraint = videoNameTextField.bottomAnchor.constraint(equalTo: layoutMarginsGuide.bottomAnchor) 
        self.keyboardBottomConstraint = keyboardBottomConstraint 

        NSLayoutConstraint.activate([ 
            keyboardBottomConstraint, 
            videoNameTextField.heightAnchor.constraint(equalToConstant: videoNameTextField.intrinsicContentSize.height + 20), 
            videoNameTextField.leftAnchor.constraint(equalTo: leftAnchor), 
            videoNameTextField.rightAnchor.constraint(equalTo: rightAnchor), 

            accessCaptureFailLabel.centerYAnchor.constraint(equalTo: centerYAnchor), 
            accessCaptureFailLabel.centerXAnchor.constraint(equalTo: centerXAnchor), 

            uploadIndicator.centerYAnchor.constraint(equalTo: centerYAnchor), 
            uploadIndicator.centerXAnchor.constraint(equalTo: centerXAnchor), 

            videoIsUploadingLabel.centerXAnchor.constraint(equalTo: centerXAnchor), 
            videoIsUploadingLabel.topAnchor.constraint(equalTo: uploadIndicator.bottomAnchor, constant: 20) 
        ]) 
    }
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;5) Create the methods responsible for showing the different states.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;  private func showUploadState() { 
        videoNameTextField.isHidden = true 
        uploadIndicator.startAnimating() 
        videoIsUploadingLabel.isHidden = false 
        accessCaptureFailLabel.isHidden = true 
        cameraView?.recordButton.setImage(UIImage(named: "play.icon"), for: .normal) 
        cameraView?.isHidden = true 
    } 

    private func showErrorState() { 
        accessCaptureFailLabel.isHidden = false 
        videoNameTextField.isHidden = true 
        uploadIndicator.stopAnimating() 
        videoIsUploadingLabel.isHidden = true 
        cameraView?.isHidden = true 
    } 

    private func showCommonState() { 
        videoNameTextField.isHidden = false 
        uploadIndicator.stopAnimating() 
        videoIsUploadingLabel.isHidden = true 
        accessCaptureFailLabel.isHidden = true 
        cameraView?.isHidden = false 
    }
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;6) Add a variable and methods to handle the keyboard correctly. The video title input field must always remain visible.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;  private var keyboardBottomConstraint: NSLayoutConstraint? 

    private func addObserver() { 
        [UIResponder.keyboardWillShowNotification, UIResponder.keyboardWillHideNotification].forEach { 
            NotificationCenter.default.addObserver( 
                self, 
                selector: #selector(keyboardChange), 
                name: $0,  
                object: nil 
            ) 
        } 
    } 

    @objc private func keyboardChange(notification: Notification) { 
        guard let keyboardFrame = notification.userInfo?[UIResponder.keyboardFrameEndUserInfoKey] as? NSValue, 
              let duration = notification.userInfo?[UIResponder.keyboardAnimationDurationUserInfoKey] as? Double 
        else {  
            return 
        } 

        let keyboardHeight = keyboardFrame.cgRectValue.height - safeAreaInsets.bottom 

        if notification.name == UIResponder.keyboardWillShowNotification { 
            self.keyboardBottomConstraint?.constant -= keyboardHeight 
            UIView.animate(withDuration: duration) { 
                self.layoutIfNeeded() 
            } 
        } else { 
            self.keyboardBottomConstraint?.constant += keyboardHeight 
            UIView.animate(withDuration: duration) { 
                self.layoutIfNeeded() 
            } 
        } 
    }
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;7) Rewrite the initializer. In deinit, unsubscribe from notifications related to the keyboard.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt; override init(frame: CGRect) { 
        super.init(frame: frame) 
        initLayout() 
        backgroundColor = .white 
        videoNameTextField.delegate = self 
        addObserver() 
    } 

    required init?(coder: NSCoder) { 
        super.init(coder: coder) 
        initLayout() 
        backgroundColor = .white 
        videoNameTextField.delegate = self 
        addObserver() 
    } 

    deinit { 
        NotificationCenter.default.removeObserver(self) 
    }
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;8) Conform the view to UITextFieldDelegate to intercept the relevant TextField events.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;extension UploadMainView: UITextFieldDelegate { 
    func textFieldShouldReturn(_ textField: UITextField) -&amp;gt; Bool { 
        delegate?.videoNameDidUpdate(textField.text ?? "") 
        return textField.resignFirstResponder() 
    } 

    func textField(_ textField: UITextField, shouldChangeCharactersIn range: NSRange, replacementString string: String) -&amp;gt; Bool { 
        // limit the title to 20 characters while still allowing deletion 
        let currentLength = textField.text?.count ?? 0 
        let newLength = currentLength - range.length + string.count 
        return newLength &amp;lt;= 20 
    } 
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Controller&lt;/strong&gt;&lt;br&gt;
Create ViewController.&lt;/p&gt;

&lt;p&gt;1) Specify the necessary variables and configure the controller.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;final class UploadController: BaseViewController { 
    private let mainView = UploadMainView() 

    private var camera: Camera? 
    private var captureSession = AVCaptureSession() 
    private var filename = "" 
    private var writingVideoURL: URL! 

    private var clips: [(URL, CMTime)] = [] { 
        didSet { mainView.cameraView?.clipsLabel.text = "Clips: \(clips.count)" } 
    } 

    private var isUploading = false { 
        didSet { mainView.state = isUploading ? .upload : .common } 
    } 

    // replacing the default view with ours 
    override func loadView() { 
        mainView.delegate = self 
        view = mainView 
    } 

    // initialize the camera and the camera view 
    override func viewDidLoad() { 
        super.viewDidLoad() 
        do { 
            camera = try Camera(captureSession: captureSession) 
            camera?.delegate = self 
            mainView.cameraView = CameraView(session: captureSession) 
            mainView.cameraView?.delegate = self 
        } catch { 
            debugPrint((error as NSError).description) 
            mainView.state = .error 
        } 
    } 
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;2) Add methods that will respond to taps of the upload button on the view. They merge the small fragments into a full video, create an empty container on the server, fetch the metadata, and then upload the video.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;    // used then user tap upload button 
    private func mergeSegmentsAndUpload() { 
        guard !isUploading, let camera = camera else { return } 
        isUploading = true 
        camera.stopRecording() 

        if let directoryURL = FileManager.default.urls(for: .documentDirectory, in: .userDomainMask).first { 
            let clips = clips.map { $0.0 } 
            // Create a full video from clips 
            VideoCompositionWriter().mergeVideo(directoryURL, filename: "\(filename).mp4", clips: clips) { [weak self] success, outURL in 
                guard let self = self else { return } 

                if success, let outURL = outURL { 
                    clips.forEach { try? FileManager.default.removeItem(at: $0) } 
                    self.clips = [] 

                    let videoData = try! Data.init(contentsOf: outURL) 
                    let writingURL = FileManager.default.temporaryDirectory.appendingPathComponent(outURL.lastPathComponent) 
                    try! videoData.write(to: writingURL) 
                    self.writingVideoURL = writingURL 
                    self.createVideoPlaceholderOnServer() 
                } else { 
                    self.isUploading = false 
                    self.mainView.state = .common 
                    self.present(self.createAlert(), animated: true) 
                } 
            } 
        } 
    } 

    // used to send createVideo request 
    private func createVideoPlaceholderOnServer() {                 
        guard let token = Settings.shared.accessToken else {  
            refreshToken() 
            return 
        } 

        let http = HTTPCommunicator() 
        let request = CreateVideoRequest(token: token, videoName: filename) 

        http.request(request) { [weak self] result in 
            guard let self = self else { return } 

            switch result { 
            case .success(let vod): 
                self.loadMetadataFor(vod: vod) 
            case .failure(let error): 
                if let error = error as? ErrorResponse, error == .invalidToken { 
                    Settings.shared.accessToken = nil 
                    self.refreshToken() 
                } else { 
                    self.errorHandle(error) 
                } 
            } 
        } 
    } 

    // Requesting the necessary data from the server 
    func loadMetadataFor(vod: VOD) { 
        guard let token = Settings.shared.accessToken else {  
            refreshToken() 
            return 
        } 

        let http = HTTPCommunicator() 
        let request = VideoMetadataRequest(token: token, videoId: vod.id) 
        http.request(request) { [weak self] result in 
            guard let self = self else { return } 

            switch result { 
            case .success(let metadata): 
                self.uploadVideo(with: metadata) 
            case .failure(let error):  
                if let error = error as? ErrorResponse, error == .invalidToken { 
                    Settings.shared.accessToken = nil 
                    self.refreshToken() 
                } else { 
                    self.errorHandle(error) 
                } 
            } 
        } 
    } 

    // Uploading our video to the server via TUSKit 
    func uploadVideo(with metadata: VideoMetadata) { 
        var config = TUSConfig(withUploadURLString: metadata.uploadURLString) 
        config.logLevel = .All 

        TUSClient.setup(with: config) 
        TUSClient.shared.delegate = self 

        let upload: TUSUpload = TUSUpload(withId:  filename, 
                                          andFilePathURL: writingVideoURL, 
                                          andFileType: ".mp4") 
        upload.metadata = [ 
            "filename" : filename, 
            "client_id" : String(metadata.video.clientID), 
            "video_id" : String(metadata.video.id), 
            "token" : metadata.token 
        ] 

        TUSClient.shared.createOrResume(forUpload: upload) 
    }
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;3) Subscribe to the TUSDelegate protocol to track errors and successful uploads. It can also be used to display upload progress.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;//MARK: - TUSDelegate 
extension UploadController: TUSDelegate { 

    func TUSProgress(bytesUploaded uploaded: Int, bytesRemaining remaining: Int) { } 
    func TUSProgress(forUpload upload: TUSUpload, bytesUploaded uploaded: Int, bytesRemaining remaining: Int) {  } 
    func TUSFailure(forUpload upload: TUSUpload?, withResponse response: TUSResponse?, andError error: Error?) { 
        if let error = error { 
            print((error as NSError).description) 
        } 
        present(createAlert(), animated: true) 
        mainView.state = .common 
    } 

    func TUSSuccess(forUpload upload: TUSUpload) { 
        let alert = createAlert(title: "Upload success") 
        present(alert, animated: true) 
        mainView.state = .common 
    } 
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
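The empty TUSProgress callbacks above are the natural place to compute a display percentage. A hedged sketch; the `uploadPercent` helper is an assumption, and hooking its result into a UIProgressView is left to you:

```swift
import Foundation

// Converts TUS byte counts into a whole-number percentage.
func uploadPercent(bytesUploaded uploaded: Int, bytesRemaining remaining: Int) -> Int {
    let total = uploaded + remaining
    // guard against a division by zero before any bytes are counted
    guard total > 0 else { return 0 }
    return Int(Double(uploaded) / Double(total) * 100)
}

// e.g. uploadPercent(bytesUploaded: 750, bytesRemaining: 250) returns 75
```

Remember to dispatch any resulting UI update onto the main queue, since TUSKit may call the delegate from a background thread.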



&lt;p&gt;4) Conform to the protocols of the MainView, the camera, and the camera view to wire the whole module together.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;//MARK: - extensions CameraViewDelegate, CameraDelegate 
extension UploadController: CameraViewDelegate, CameraDelegate { 
    func updateCurrentRecordedTime(_ time: CMTime) { 
        currentRecordedTime = time.seconds 
    } 

    func tappedDeleteClip() { 
        guard let lastClip = clips.last else { return } 
        lastRecordedTime -= lastClip.1.seconds 
        clips.removeLast() 
    } 

    func addRecordedMovie(url: URL, time: CMTime) { 
        lastRecordedTime += time.seconds 
        clips += [(url, time)] 
    } 

    func shouldRecord() -&amp;gt; Bool { 
        totalRecordedTime &amp;lt; maxRecordTime 
    } 

    func tappedRecord(isRecord: Bool) { 
        isRecord ? camera?.startRecording() : camera?.stopRecording() 
    } 

    func tappedUpload() { 
        guard !clips.isEmpty &amp;amp;&amp;amp; filename != "" else { return } 
        mergeSegmentsAndUpload() 
    } 

    func tappedFlipCamera() { 
        camera?.flipCamera() 
    } 
} 

extension UploadController: UploadMainViewDelegate { 
    // called when the user changes the video name in the view 
    func videoNameDidUpdate(_ name: String) { 
        filename = name 
    } 
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This was the last step; the job is done! The new feature has been added to your app and configured.&lt;/p&gt;

&lt;h2&gt;
  
  
  Result
&lt;/h2&gt;

&lt;p&gt;Now you have a full-fledged module for recording and uploading videos.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--NuhhATmO--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/7u3epa8jtil0fgkdgddf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--NuhhATmO--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/7u3epa8jtil0fgkdgddf.png" alt="Image description" width="800" height="820"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Through this guide, you’ve learned how to add a VOD uploading feature to your iOS application. We hope this solution will satisfy your needs and delight your users with new options.&lt;/p&gt;

&lt;p&gt;Also, we invite you to take a look at our &lt;a href="https://github.com/G-Core/ios-demo-vod-hosting"&gt;demo application&lt;/a&gt;. It shows the result of setting up VOD viewing in an iOS project.&lt;/p&gt;

</description>
      <category>vod</category>
      <category>iosapps</category>
      <category>streaming</category>
      <category>howto</category>
    </item>
    <item>
      <title>How to use DNS SDK in Golang</title>
      <dc:creator>Kiswono Prayogo</dc:creator>
      <pubDate>Wed, 31 May 2023 07:18:56 +0000</pubDate>
      <link>https://forem.com/gcoreofficial/how-to-use-dns-sdk-in-golang-55cl</link>
      <guid>https://forem.com/gcoreofficial/how-to-use-dns-sdk-in-golang-55cl</guid>
<description>&lt;p&gt;So we're gonna try to manipulate DNS records using a Go SDK (not the REST API directly). I went through the first two pages of Google search results, and the companies providing an SDK for Go were:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;   IBM &lt;a href="https://github.com/IBM/networking-go-sdk"&gt;networking-go-sdk&lt;/a&gt; - 161.26.0.10 and 161.26.0.11 - timed out resolving their own website&lt;/li&gt;
&lt;li&gt;   AWS &lt;a href="https://docs.aws.amazon.com/sdk-for-go/api/service/route53/"&gt;route53&lt;/a&gt; - 169.254.169.253 - timed out resolving their own website&lt;/li&gt;
&lt;li&gt;   DNSimple &lt;a href="https://dnsimple.com/api/go"&gt;dnsimple-go&lt;/a&gt; - 162.159.27.4 and 199.247.155.53 - 160-180 ms and 70-75 ms from SG&lt;/li&gt;
&lt;li&gt;   Google &lt;a href="https://github.com/googleapis/google-api-go-client/tree/main/examples"&gt;googleapis&lt;/a&gt; - 8.8.8.8 and 8.8.4.4 - 0 ms for both from SG&lt;/li&gt;
&lt;li&gt;   GCore &lt;a href="https://github.com/G-Core/gcore-dns-sdk-go"&gt;gcore-dns-sdk-go&lt;/a&gt; - 199.247.155.53 and 2.56.220.2 - 0 ms and 0-171 ms (171 ms on the first hit only, 0 ms after) from SG&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;I've used Google's SDK before for non-DNS stuff; it's a bit too raw, with many required steps: you have to create a project, enable the API, create a service account, set permissions for that account, download credentials.json, and only then hit the API through their SDK -- not really straightforward. So today we're gonna try Gcore's DNS. Apparently it's very easy: just visit their website and sign up, then go to Profile &amp;gt; API Tokens &amp;gt; Create Token and copy the token to some file (for example, a &lt;code&gt;.token&lt;/code&gt; file).&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--G-G5K77F--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/a4ekvy5lraudyh699awx.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--G-G5K77F--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/a4ekvy5lraudyh699awx.jpg" alt="create token" width="800" height="300"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Here is an example of how you can create a zone, add an A record, and then delete everything:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;package main

import (
  "context"
  _ "embed"
  "strings"
  "time"

  "github.com/G-Core/gcore-dns-sdk-go"
  "github.com/kokizzu/gotro/L"
)

//go:embed .token
var apiToken string

func main() {
  apiToken = strings.TrimSpace(apiToken)

  // init SDK
  sdk := dnssdk.NewClient(dnssdk.PermanentAPIKeyAuth(apiToken), func(client *dnssdk.Client) {
    client.Debug = true
  })
  ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
  defer cancel()
  const zoneName = `benalu2.dev`

  // create zone
  _, err := sdk.CreateZone(ctx, zoneName)
  if err != nil &amp;amp;&amp;amp; !strings.Contains(err.Error(), `already exists`) {
    L.PanicIf(err, `sdk.CreateZone`)
  }

  // get zone
  zoneResp, err := sdk.Zone(ctx, zoneName)
  L.PanicIf(err, `sdk.Zone`)
  L.Describe(zoneResp)
  // add A record
  err = sdk.AddZoneRRSet(ctx,
    zoneName,        // zone
    `www.`+zoneName, // name
    `A`,             // rrtype
    []dnssdk.ResourceRecord{
      { // https://apidocs.gcore.com/dns#tag/rrsets/operation/CreateRRSet
        Content: []any{
          `194.233.65.174`,
        },
      },
    },
    120, // TTL
  )
  L.PanicIf(err, `AddZoneRRSet`)

  // get A record
  rr, err := sdk.RRSet(ctx, zoneName, `www.`+zoneName, `A`)
  L.PanicIf(err, `sdk.RRSet`)
  L.Describe(rr)

  // delete A record
  err = sdk.DeleteRRSet(ctx, zoneName, `www.`+zoneName, `A`)
  L.PanicIf(err, `sdk.DeleteRRSet`)

  // delete zone
  err = sdk.DeleteZone(ctx, zoneName)
  L.PanicIf(err, `sdk.DeleteZone`)
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The full source code repo is &lt;a href="https://github.com/kokizzu/dns1"&gt;here&lt;/a&gt;. It turns out to be very easy to manipulate DNS records using their SDK. After adding records programmatically, all I needed to do was delegate the domain (set the authoritative nameservers) to their NS: ns1.gcorelabs.net and ns2.gcdn.services. In my case, because I bought the domain name on Google Domains, I just needed to change this: &lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--vLavQiy---/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/i9q2pdbn6ajpyxep7h0u.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--vLavQiy---/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/i9q2pdbn6ajpyxep7h0u.png" alt="Image description" width="800" height="355"&gt;&lt;/a&gt;&lt;/p&gt;
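&lt;p&gt;After changing the nameservers, you can poll whether the delegation is visible yet. Below is a minimal sketch using Go's standard &lt;code&gt;net.LookupNS&lt;/code&gt;; the domain and NS names are the ones from this article:&lt;/p&gt;

```go
package main

import (
	"fmt"
	"net"
	"strings"
)

// delegatedTo reports whether any of the looked-up NS hosts matches one of
// the expected nameserver names (case and trailing dots are ignored).
func delegatedTo(hosts []string, expected []string) bool {
	for _, h := range hosts {
		h = strings.TrimSuffix(strings.ToLower(h), ".")
		for _, e := range expected {
			if h == strings.TrimSuffix(strings.ToLower(e), ".") {
				return true
			}
		}
	}
	return false
}

func main() {
	nsRecords, err := net.LookupNS("benalu2.dev")
	if err != nil {
		fmt.Println("lookup failed:", err)
		return
	}
	hosts := make([]string, 0, len(nsRecords))
	for _, ns := range nsRecords {
		hosts = append(hosts, ns.Host)
	}
	fmt.Println("delegated to Gcore:",
		delegatedTo(hosts, []string{"ns1.gcorelabs.net", "ns2.gcdn.services"}))
}
```

&lt;p&gt;Keep in mind that &lt;code&gt;net.LookupNS&lt;/code&gt; goes through your local resolver, so a cached answer may lag behind the registrar change.&lt;/p&gt;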

&lt;p&gt;Then just wait for the delegation to propagate (until all DNS servers still caching the old authoritative NS records have expired them), and that's it. This article is republished with permission from kokizzu's personal &lt;a href="https://kokizzu.blogspot.com/2023/04/how-to-use-dns-sdk-in-golang.html"&gt;blog&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>go</category>
      <category>network</category>
      <category>development</category>
      <category>programming</category>
    </item>
    <item>
      <title>How we lowered the bitrate for live and VOD streaming by 32.5% without sacrificing quality</title>
      <dc:creator>Maxim K</dc:creator>
      <pubDate>Mon, 29 May 2023 11:13:42 +0000</pubDate>
      <link>https://forem.com/gcoreofficial/how-we-lowered-the-bitrate-for-live-and-vod-streaming-by-325-without-sacrificing-quality-4dpn</link>
      <guid>https://forem.com/gcoreofficial/how-we-lowered-the-bitrate-for-live-and-vod-streaming-by-325-without-sacrificing-quality-4dpn</guid>
      <description>&lt;p&gt;We’ve been experimenting with transcoding settings to lower the bitrate without losing video quality. And we’ve succeeded. Here are Gcore’s current results in comparison with our competitors:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Video quality&lt;/th&gt;
&lt;th&gt;Gcore, Mbps&lt;/th&gt;
&lt;th&gt;MUX*, Mbps&lt;/th&gt;
&lt;th&gt;Bitmovin*, Mbps&lt;/th&gt;
&lt;th&gt;Cloudflare*, Mbps&lt;/th&gt;
&lt;th&gt;Dacast*, Mbps&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;4K&lt;/td&gt;
&lt;td&gt;14&lt;/td&gt;
&lt;td&gt;—&lt;/td&gt;
&lt;td&gt;16&lt;/td&gt;
&lt;td&gt;—&lt;/td&gt;
&lt;td&gt;20&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;2K&lt;/td&gt;
&lt;td&gt;7.2&lt;/td&gt;
&lt;td&gt;—&lt;/td&gt;
&lt;td&gt;—&lt;/td&gt;
&lt;td&gt;—&lt;/td&gt;
&lt;td&gt;15&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;1080p&lt;/td&gt;
&lt;td&gt;4.05&lt;/td&gt;
&lt;td&gt;4.8&lt;/td&gt;
&lt;td&gt;4.8&lt;/td&gt;
&lt;td&gt;3.6&lt;/td&gt;
&lt;td&gt;7&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;720p&lt;/td&gt;
&lt;td&gt;1.8&lt;/td&gt;
&lt;td&gt;2.5&lt;/td&gt;
&lt;td&gt;2.4&lt;/td&gt;
&lt;td&gt;1.8&lt;/td&gt;
&lt;td&gt;5.2&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;480p&lt;/td&gt;
&lt;td&gt;0.8&lt;/td&gt;
&lt;td&gt;1.6&lt;/td&gt;
&lt;td&gt;1.2&lt;/td&gt;
&lt;td&gt;1&lt;/td&gt;
&lt;td&gt;2.1&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;360p&lt;/td&gt;
&lt;td&gt;0.45&lt;/td&gt;
&lt;td&gt;1&lt;/td&gt;
&lt;td&gt;0.8&lt;/td&gt;
&lt;td&gt;0.7&lt;/td&gt;
&lt;td&gt;1.5&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;240p&lt;/td&gt;
&lt;td&gt;0.2&lt;/td&gt;
&lt;td&gt;0.7&lt;/td&gt;
&lt;td&gt;0.4&lt;/td&gt;
&lt;td&gt;0.5&lt;/td&gt;
&lt;td&gt;0.5&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;* Average data obtained from platform testing.&lt;/p&gt;

&lt;p&gt;Users can now watch video in SD and HD quality, even with a poor internet connection. The chart below is a visual demonstration of quality differences on the “encoding ladder” between standard and optimized transcoding settings.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj9p4r605pqgyw6ieag7d.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj9p4r605pqgyw6ieag7d.png" alt="Encoding ladder"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Decreasing the bitrate has also made streaming more cost-effective for our clients: the lower the Mbps, the less traffic is consumed, which in turn lowers the total cost of streaming.&lt;/p&gt;

&lt;p&gt;In this article, we’ll discuss why we took on video optimization and how the video bitrate quality ratio has changed. We’ll also show metrics that support the results.&lt;/p&gt;

&lt;h1&gt;
  
  
  Why working to lower bitrate matters
&lt;/h1&gt;

&lt;p&gt;One of the main tenets of building online video solutions is adaptive streaming. You air a few broadcasts simultaneously with varying video qualities and bitrates. The user’s player can switch between them: if the internet speed dips, the quality decreases; as soon as the speed recovers, the quality improves.&lt;/p&gt;
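&lt;p&gt;For HLS, those simultaneous renditions are advertised in a master playlist that the player chooses from. A minimal sketch (the bandwidth values and paths here are illustrative):&lt;/p&gt;

```
#EXTM3U
#EXT-X-STREAM-INF:BANDWIDTH=4050000,RESOLUTION=1920x1080
1080p/index.m3u8
#EXT-X-STREAM-INF:BANDWIDTH=1800000,RESOLUTION=1280x720
720p/index.m3u8
#EXT-X-STREAM-INF:BANDWIDTH=800000,RESOLUTION=854x480
480p/index.m3u8
```

&lt;p&gt;The player starts on one variant and switches between the &lt;code&gt;BANDWIDTH&lt;/code&gt; tiers as the measured throughput changes.&lt;/p&gt;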

&lt;p&gt;When setting the video and bitrate quality, Apple &lt;a href="https://developer.apple.com/documentation/http_live_streaming/http_live_streaming_hls_authoring_specification_for_apple_devices" rel="noopener noreferrer"&gt;recommends&lt;/a&gt; starting with the settings in the table below, and then selecting what you need for specific circumstances.&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Video quality&lt;/th&gt;
&lt;th&gt;Bitrate after transcoding&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;1080p&lt;/td&gt;
&lt;td&gt;6 Mbps&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;720p&lt;/td&gt;
&lt;td&gt;3 Mbps&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;480p&lt;/td&gt;
&lt;td&gt;1.1 Mbps&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;360p&lt;/td&gt;
&lt;td&gt;0.365 Mbps&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;240p&lt;/td&gt;
&lt;td&gt;0.145 Mbps&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;In practice, some streaming providers stop with these settings. But the values above are not ideal. We’ll illustrate this using a soccer game as an example.&lt;/p&gt;

&lt;p&gt;A viewer decides to watch a soccer game on their phone. They have a good connection that lets them watch in 1080p. If we use Apple’s recommendations, the viewer will receive the stream at 6 Mbps and, over the game’s 120 minutes, will consume 5.4 GB of traffic. This exceeds the caps of many mobile data plans: the viewer could use up an entire month’s allowance on this one game.&lt;/p&gt;

&lt;p&gt;6 Mbps ÷ 8 bits/byte × 60 s/min × 120 min = 5,400 MB&lt;/p&gt;

&lt;p&gt;But what if we could reduce the bitrate by 0.5 Mbps without sacrificing quality? In the case of our soccer game, reducing the bitrate by 0.5 Mbps saves the viewer 450 MB overall.&lt;/p&gt;

&lt;p&gt;5.5 Mbps ÷ 8 bits/byte × 60 s/min × 120 min = 4,950 MB&lt;/p&gt;

&lt;p&gt;5 Mbps ÷ 8 bits/byte × 60 s/min × 120 min = 4,500 MB&lt;/p&gt;

&lt;p&gt;4.5 Mbps ÷ 8 bits/byte × 60 s/min × 120 min = 4,050 MB&lt;/p&gt;

&lt;p&gt;4 Mbps ÷ 8 bits/byte × 60 s/min × 120 min = 3,600 MB&lt;/p&gt;
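&lt;p&gt;The arithmetic above is easy to reproduce; a quick sketch in Go:&lt;/p&gt;

```go
package main

import "fmt"

// trafficMB returns the data consumed, in megabytes, when streaming at
// bitrateMbps for the given number of minutes (8 bits per byte).
func trafficMB(bitrateMbps, minutes float64) float64 {
	return bitrateMbps / 8 * 60 * minutes
}

func main() {
	for _, mbps := range []float64{6, 5.5, 5, 4.5, 4} {
		fmt.Printf("%.1f Mbps for 120 min: %.0f MB\n", mbps, trafficMB(mbps, 120))
	}
}
```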

&lt;p&gt;This will also save the livestream provider money. In most cases, video content is delivered through a CDN, and the provider pays for the delivery on a PAYG basis: the more data is delivered, the more it costs. Reducing traffic volumes can save on that delivery. And if the provider prefers VOD streaming, this can lower the amount of space the video will take up in storage.&lt;/p&gt;

&lt;h1&gt;
  
  
  How we optimized video
&lt;/h1&gt;

&lt;p&gt;During one stage of development for Gcore’s streaming platform, we determined the optimal set of video profiles for us.&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Video quality (H.264)&lt;/th&gt;
&lt;th&gt;Bitrate after transcoding&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;4K&lt;/td&gt;
&lt;td&gt;14 Mbps&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;2K&lt;/td&gt;
&lt;td&gt;10.8 Mbps&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;1080p&lt;/td&gt;
&lt;td&gt;6.1 Mbps&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;720p&lt;/td&gt;
&lt;td&gt;2.7 Mbps&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;480p&lt;/td&gt;
&lt;td&gt;1.15 Mbps&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;360p&lt;/td&gt;
&lt;td&gt;0.68 Mbps&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;240p&lt;/td&gt;
&lt;td&gt;0.3 Mbps&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;But encoding technology is constantly advancing, and that opened up new possibilities. We decided to experiment with the transcoding settings to reduce the bitrate without losing video quality.&lt;/p&gt;

&lt;p&gt;For testing, we picked out 50 diverse videos and started choosing parameters. Below are examples of videos we tested.&lt;/p&gt;

&lt;p&gt;&lt;iframe width="710" height="399" src="https://www.youtube.com/embed/JvhOS0A9VbE"&gt;
&lt;/iframe&gt;
&lt;br&gt;
&lt;iframe width="710" height="399" src="https://www.youtube.com/embed/MeXTK-iWK1w"&gt;
&lt;/iframe&gt;
&lt;br&gt;
&lt;iframe width="710" height="399" src="https://www.youtube.com/embed/KLJsP1D625A"&gt;
&lt;/iframe&gt;
&lt;br&gt;
&lt;iframe width="710" height="399" src="https://www.youtube.com/embed/24vVDpLkTYU"&gt;
&lt;/iframe&gt;
&lt;br&gt;
&lt;iframe width="710" height="399" src="https://www.youtube.com/embed/RJS5SHK9K-8"&gt;
&lt;/iframe&gt;
&lt;/p&gt;

&lt;p&gt;First, we processed each video using the old transcoding settings. Then we changed the settings and compared the new video with the old one. The key evaluation criteria were video quality, the size of the transmitted video file, and transcoding speed.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1nh0uo93v5fsgs0x08j3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1nh0uo93v5fsgs0x08j3.png" alt="Profit"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We compared quality using objective metrics and algorithms:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;VMAF&lt;/li&gt;
&lt;li&gt;PSNR&lt;/li&gt;
&lt;li&gt;SSIM&lt;/li&gt;
&lt;li&gt;Ciede2000&lt;/li&gt;
&lt;li&gt;Cambi&lt;/li&gt;
&lt;/ul&gt;

&lt;h1&gt;
  
  
  Results
&lt;/h1&gt;

&lt;p&gt;After a few months of testing, we landed on the required settings: we managed to lower the bitrate without losing any perceived video quality. Of course, the video quality did fall slightly, but the human eye is not able to distinguish these changes, which was confirmed by metrics as well as the results of blind testing.&lt;/p&gt;

&lt;h2&gt;
  
  
  How much the bitrate dropped
&lt;/h2&gt;

&lt;p&gt;The results of the changes are shown in the table below:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Video quality&lt;/th&gt;
&lt;th&gt;Before&lt;/th&gt;
&lt;th&gt;After&lt;/th&gt;
&lt;th&gt;Difference&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;4K&lt;/td&gt;
&lt;td&gt;14 Mbps&lt;/td&gt;
&lt;td&gt;14 Mbps&lt;/td&gt;
&lt;td&gt;0%&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;2K&lt;/td&gt;
&lt;td&gt;10.8 Mbps&lt;/td&gt;
&lt;td&gt;7.2 Mbps&lt;/td&gt;
&lt;td&gt;−33%&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;1080p&lt;/td&gt;
&lt;td&gt;6.1 Mbps&lt;/td&gt;
&lt;td&gt;4.05 Mbps&lt;/td&gt;
&lt;td&gt;−33%&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;720p&lt;/td&gt;
&lt;td&gt;2.7 Mbps&lt;/td&gt;
&lt;td&gt;1.8 Mbps&lt;/td&gt;
&lt;td&gt;−33%&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;480p&lt;/td&gt;
&lt;td&gt;1.15 Mbps&lt;/td&gt;
&lt;td&gt;0.8 Mbps&lt;/td&gt;
&lt;td&gt;−30%&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;360p&lt;/td&gt;
&lt;td&gt;0.68 Mbps&lt;/td&gt;
&lt;td&gt;0.45 Mbps&lt;/td&gt;
&lt;td&gt;−33%&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;240p&lt;/td&gt;
&lt;td&gt;0.3 Mbps&lt;/td&gt;
&lt;td&gt;0.2 Mbps&lt;/td&gt;
&lt;td&gt;−33%&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;Perceived quality did not change as a result of this reduction. Below is a slider with one frame from a 1080p video. The version on the left shows the frame before transcoding optimization, and the version on the right shows the frame after.&lt;/p&gt;

&lt;h2&gt;
  
  
  What these results mean for viewers and clients
&lt;/h2&gt;

&lt;p&gt;The average difference between the old and new bitrate values was 32.5%. This means that with the new transcoding settings:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;users consume 32.5% less traffic while watching video;&lt;/li&gt;
&lt;li&gt;clients using a CDN for streaming pay 32.5% less to deliver video content; and&lt;/li&gt;
&lt;li&gt;clients providing VOD streaming get 32.5% more storage space.&lt;/li&gt;
&lt;/ul&gt;

&lt;h1&gt;
  
  
  Metrics: The difference between the new and old transcoding settings
&lt;/h1&gt;

&lt;h2&gt;
  
  
  Transcoding speed
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fc2yoh4rw2n8crep5qwxz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fc2yoh4rw2n8crep5qwxz.png" alt="Transcoding speed"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Transcoded file size
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fca9gvwmso6br7dheat59.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fca9gvwmso6br7dheat59.png" alt="Transcoded file size"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  VMAF
&lt;/h2&gt;

&lt;p&gt;VMAF is an objective metric of perceived video quality developed by Netflix. The integral score obtained from the analysis reflects the degree of similarity between the original image and the modified image. The closer the VMAF score is to 100, the better; a deterioration in image quality becomes perceivable to the human eye at intervals of about 5. In our case, when comparing the original and transcoded videos, the VMAF score averaged 97. In other words, the differences between the two videos were imperceptible to the human eye. The results of comparing the old and new transcoding settings can be seen as percentages in the graph below:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Faow59uisgcnmswkxr27p.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Faow59uisgcnmswkxr27p.png" alt="VMAF"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  PSNR
&lt;/h2&gt;

&lt;p&gt;PSNR is the peak signal-to-noise ratio. It determines the level of distortion introduced during compression and is based on the mean squared error (MSE). Values are measured in decibels on a logarithmic scale and typically fall between 0 and 100. The higher the value, the more detail is preserved after compression, translating to higher quality. The average PSNR in our case was 49. The results of comparing the old and new transcoding settings can be seen as percentages in the graphs below:&lt;/p&gt;

&lt;h2&gt;
  
  
  PSNR Y
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgij2628gcu8wn6xxy3ln.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgij2628gcu8wn6xxy3ln.png" alt="PSNR Y"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  PSNR Cb
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fv30c19ysd3sfiorl6z7e.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fv30c19ysd3sfiorl6z7e.png" alt="PSNR Cb"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  PSNR Cr
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1yyx80ukk1eni0y8ynef.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1yyx80ukk1eni0y8ynef.png" alt="PSNR Cr"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  PSNR HVS Y
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7zoeva8ugqsoakr6te88.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7zoeva8ugqsoakr6te88.png" alt="PSNR HVS Y"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  PSNR HVS Cb
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7ulqtifr4q52g5ly9wdo.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7ulqtifr4q52g5ly9wdo.png" alt="PSNR HVS Cb"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  PSNR HVS Cr
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffsm6ppqdk94293b89k4s.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffsm6ppqdk94293b89k4s.png" alt="PSNR HVS Cr"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  PSNR HVS
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2tgub8ux59ag9umav3de.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2tgub8ux59ag9umav3de.png" alt="PSNR HVS"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  SSIM
&lt;/h2&gt;

&lt;p&gt;SSIM is an image quality evaluation metric for three criteria: luminance, contrast, and structure. Accepted values are between 0 and 1, and the higher the value, the lower the distortion and the higher the quality. This is one of the first successful metrics that most closely corresponds to the human perception of an image, as confirmed in many studies. The average SSIM in our case was 0.9975. The results of comparing the old and new transcoding settings can be seen as percentages in the graph below:&lt;/p&gt;

&lt;h2&gt;
  
  
  Float SSIM
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F20ftez0rff30ctk76i7a.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F20ftez0rff30ctk76i7a.png" alt="Float SSIM"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Float MS SSIM
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcivcergjj3mxvqbmzxn6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcivcergjj3mxvqbmzxn6.png" alt="Float MS SSIM"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Ciede2000
&lt;/h2&gt;

&lt;p&gt;Ciede2000 is a color difference formula that numerically expresses the difference between two colors in colorimetry. The average Ciede2000 in our case was 47.&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0awvo0d9vk45btvhbmeg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0awvo0d9vk45btvhbmeg.png" alt="Ciede2000"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  CAMBI
&lt;/h2&gt;

&lt;p&gt;CAMBI is a banding artifact (contouring) detector from Netflix. CAMBI scores start at 0, meaning no banding is detected. Higher CAMBI scores mean that more visible banding artifacts are identified. The maximum CAMBI that can be observed in a sequence is 24 (meaning the video is unwatchable). As a rule, a CAMBI score around 5 means that banding is starting to become slightly annoying. The average CAMBI score in our case was 1.5. The results of comparing the old and new transcoding settings can be seen as percentages in the graph below:&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Foczpqp2z9cl9x7am7otu.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Foczpqp2z9cl9x7am7otu.png" alt="CAMBI"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h1&gt;
  
  
  Conclusion
&lt;/h1&gt;

&lt;p&gt;Long months of experimenting helped us make streaming more efficient. The bitrate reduction allows clients to save on content delivery through a CDN and on video storage, while viewers use less internet traffic when watching videos.&lt;/p&gt;

&lt;p&gt;Compared to competitors, Gcore’s new bitrate turned out to be lower for most video quality values:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Video Quality&lt;/th&gt;
&lt;th&gt;Gcore, Mbps&lt;/th&gt;
&lt;th&gt;MUX*, Mbps&lt;/th&gt;
&lt;th&gt;Bitmovin*, Mbps&lt;/th&gt;
&lt;th&gt;Cloudflare*, Mbps&lt;/th&gt;
&lt;th&gt;Dacast*, Mbps&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;4K&lt;/td&gt;
&lt;td&gt;14&lt;/td&gt;
&lt;td&gt;—&lt;/td&gt;
&lt;td&gt;16&lt;/td&gt;
&lt;td&gt;—&lt;/td&gt;
&lt;td&gt;20&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;2K&lt;/td&gt;
&lt;td&gt;7.2&lt;/td&gt;
&lt;td&gt;—&lt;/td&gt;
&lt;td&gt;—&lt;/td&gt;
&lt;td&gt;—&lt;/td&gt;
&lt;td&gt;15&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;1080p&lt;/td&gt;
&lt;td&gt;4.05&lt;/td&gt;
&lt;td&gt;4.8&lt;/td&gt;
&lt;td&gt;4.8&lt;/td&gt;
&lt;td&gt;3.6&lt;/td&gt;
&lt;td&gt;7&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;720p&lt;/td&gt;
&lt;td&gt;1.8&lt;/td&gt;
&lt;td&gt;2.5&lt;/td&gt;
&lt;td&gt;2.4&lt;/td&gt;
&lt;td&gt;1.8&lt;/td&gt;
&lt;td&gt;5.2&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;480p&lt;/td&gt;
&lt;td&gt;0.8&lt;/td&gt;
&lt;td&gt;1.6&lt;/td&gt;
&lt;td&gt;1.2&lt;/td&gt;
&lt;td&gt;1&lt;/td&gt;
&lt;td&gt;2.1&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;360p&lt;/td&gt;
&lt;td&gt;0.45&lt;/td&gt;
&lt;td&gt;1&lt;/td&gt;
&lt;td&gt;0.8&lt;/td&gt;
&lt;td&gt;0.7&lt;/td&gt;
&lt;td&gt;1.5&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;240p&lt;/td&gt;
&lt;td&gt;0.2&lt;/td&gt;
&lt;td&gt;0.7&lt;/td&gt;
&lt;td&gt;0.4&lt;/td&gt;
&lt;td&gt;0.5&lt;/td&gt;
&lt;td&gt;0.5&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;* Average data obtained from platform testing.&lt;/p&gt;

&lt;p&gt;We don’t plan to stop there. We’re continuing to improve our solutions to make streaming even more efficient and convenient for our clients.&lt;/p&gt;

</description>
      <category>livestreaming</category>
      <category>vodstreaming</category>
      <category>lowbitrate</category>
      <category>videooptimization</category>
    </item>
    <item>
      <title>What are bad bots? | How to stop bad bot traffic</title>
      <dc:creator>Gcore</dc:creator>
      <pubDate>Fri, 21 Apr 2023 11:37:42 +0000</pubDate>
      <link>https://forem.com/gcoreofficial/what-are-bad-bots-how-to-stop-bad-bot-traffic-4l39</link>
      <guid>https://forem.com/gcoreofficial/what-are-bad-bots-how-to-stop-bad-bot-traffic-4l39</guid>
<description>&lt;p&gt;Bad bots are computer programs designed to carry out harmful actions such as stealing website content, hacking accounts, and launching DDoS attacks. The damage they cause has been exposed by multiple news outlets, whose reports have shed light on how bad bots are used to spread misinformation on social media, commit identity theft, and steal from bank accounts.&lt;/p&gt;

&lt;p&gt;Our main goal is to equip internet users and website/application owners like you with valuable insight into bad bots: the different types that exist and how to prevent bad bot traffic.&lt;/p&gt;

&lt;h2&gt;
  
  
  What are the types of bad bots?
&lt;/h2&gt;

&lt;p&gt;Let’s dive into the most common types of malicious bots out there. Familiarizing yourself with these threats is crucial to understanding how they can harm your website or even target you as an internet user. Below is a list of the different types of bad bots you need to watch out for.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. DDoS bot&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;DDoS bots are used by cybercriminals who seek to disrupt a website or online service by overwhelming it with traffic from multiple sources. To execute such attacks effectively, attackers rely on botnets: networks of computers and Internet of Things (IoT) devices that have been infected with malware and are under the control of a hacker or malicious actor.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;How do DDoS botnets work?&lt;/strong&gt;&lt;br&gt;
Malicious actors can manipulate bots remotely, corrupting a large number of internet-connected devices after infecting them with malware. What makes this especially alarming is that the owner of the compromised device may not be aware that their device has been infected.&lt;/p&gt;

&lt;p&gt;In every botnet, there are four key components:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Bot master.&lt;/strong&gt; This is the attacker who creates and manages the bot code and controls the entire botnet.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Bot code.&lt;/strong&gt; Also known as a bot controller, this is a malicious program designed to infect vulnerable devices and turn them into bots.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Bots (also called “zombies”).&lt;/strong&gt; These are the compromised devices infected with the bot code that can be controlled remotely by the bot master.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Command and control (C&amp;amp;C) server.&lt;/strong&gt; This is the central server to which all the bots in the botnet connect to communicate with each other and receive commands from the bot master. The C&amp;amp;C server allows the bot master to send instructions to the bots, such as launching a DDoS attack.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Let’s take a look at the typical setup of a botnet and how these four participants work together.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Typical botnet configuration: the bot master sends code to infect bots (zombies) controlled by the command and control server.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;In the diagram, the bot master distributes a bot code to victim computers. This can be done through email attachments, malicious links, software downloads, or exploiting vulnerabilities. When the victim’s computer becomes infected (i.e., becomes a bot), it joins the botnet and connects to the C&amp;amp;C server. The attacker sends instructions to the bot through the C&amp;amp;C server and synchronizes its actions with other bots.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Key takeaways about a DDoS botnet&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The bot master is responsible for setting up the C&amp;amp;C mechanism and providing instructions to the bots.&lt;/li&gt;
&lt;li&gt;Botnets rely on C&amp;amp;C mechanisms to coordinate the actions of infected machines.&lt;/li&gt;
&lt;li&gt;The effectiveness of DDoS attacks often depends on the attacker’s architecture and the number of bots the C&amp;amp;C mechanism controls.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;DDoS bots can use a variety of techniques to carry out their attacks, including the following:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s---Bw7ULe---/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/otkvu10fk2xls455gzcz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s---Bw7ULe---/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/otkvu10fk2xls455gzcz.png" alt="A table showcasing the different DDoS Bot attack types with examples and impact" width="740" height="1380"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. Account takeover bot&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;This is a type of bad bot that cybercriminals use to take over users’ online accounts. These bots are designed to automate the process of guessing or cracking login credentials, such as usernames and passwords. Once the bad bot takes over the account, it can carry out harmful activities like stealing confidential information, spamming, or being used in phishing campaigns.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;How does an account takeover bot work?&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A cybercriminal typically obtains a list of stolen usernames and passwords from data breaches, phishing attacks, or the dark web.&lt;/li&gt;
&lt;li&gt;The attacker uses account takeover bots to automatically test these credentials on different websites—for instance, e-commerce or social media sites—persisting until they successfully gain access to an account. With the use of bots, even strong passwords can be cracked in no time, putting personal information at risk.&lt;/li&gt;
&lt;li&gt;Once the bot has taken over the account, the attacker can carry out different malicious activities, such as making unauthorized purchases or posting spam messages.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Before we discuss different types of account takeover bots, let’s take a look at a few examples of incidents involving account takeovers:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Twitter hack.&lt;/strong&gt; In July 2020, several high-profile Twitter accounts were hacked, including those of Barack Obama, Elon Musk, and Bill Gates. The attackers used an account takeover scheme to promote a Bitcoin scam to the followers of these accounts.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Equifax data breach.&lt;/strong&gt; In 2017, Equifax, one of the largest credit reporting agencies, suffered a data breach that exposed the personal information of millions of consumers. The attackers gained access to Equifax’s systems by exploiting a vulnerability in its website software.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Uber breach.&lt;/strong&gt; In 2016, the personal information of 57 million users and drivers of the ride-sharing service Uber was exposed due to a data breach caused by an account takeover. The attackers were able to gain access to an Uber engineer’s account, which contained access keys to Uber’s Amazon Web Services account.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;What are the types of account takeover bots?&lt;/strong&gt;&lt;br&gt;
Now that you’ve gained an understanding of the impact of this bad bot, let’s explore common types of account takeover bots, including their descriptions, examples, and the potential consequences they can cause.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--KeTjMqOP--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/oa80au4s0f7t54rube5e.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--KeTjMqOP--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/oa80au4s0f7t54rube5e.png" alt="Type of Account Takeover (ATO) Bot, description, examples and impact" width="742" height="860"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Among the various types of account takeover bots, the most widespread is credential stuffing. According to a report from Google, 52% of individuals use the same passwords for multiple accounts. This means that if a cybercriminal gains access to one of those accounts, they may also be able to access other sensitive accounts, including those containing credit card information, bank account details, and social media profiles.&lt;/p&gt;
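&lt;p&gt;To illustrate why password reuse is so dangerous, here is a minimal sketch of the kind of check defenders run to reject passwords already known from public breach dumps. The three sample passwords and the plain SHA-1 lookup are purely illustrative; production services such as Have I Been Pwned work from corpora of hundreds of millions of entries and use k-anonymity lookups.&lt;/p&gt;

```python
import hashlib

# Hypothetical sample of password hashes seen in public breach dumps.
BREACHED_HASHES = {
    hashlib.sha1(pw.encode()).hexdigest()
    for pw in ("password", "123456", "qwerty")
}

def is_breached(password):
    """Return True if this password's hash appears in the breached set."""
    return hashlib.sha1(password.encode()).hexdigest() in BREACHED_HASHES

print(is_breached("123456"))    # True: a reused, breached password
print(is_breached("x9!Lq2Rz"))  # False: not in the hypothetical dump
```

&lt;p&gt;Rejecting such passwords at signup removes the lowest-hanging fruit from a credential stuffing attack.&lt;/p&gt;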

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--kSHQ6xbu--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ly5f76xs5vxdmclzhpjj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--kSHQ6xbu--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ly5f76xs5vxdmclzhpjj.png" alt="Image description" width="599" height="296"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3. Web content scraping bot&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;These malicious bots use web content scraping techniques to extract data and content from websites, including copying information from the HTML code and databases of the victim’s server. It’s worth noting that legitimate uses of web content scraping do exist, such as search engine bots like Googlebot, which help to index websites and improve search results. But the majority of web content scraping is done for malicious and illegal purposes, like stealing copyrighted content, price scraping to undercut competitors, and, of course, data breaches.&lt;/p&gt;
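&lt;p&gt;The core mechanic of reading HTML and pulling out targeted fields can be sketched with Python’s standard library. The markup and the &lt;code&gt;price&lt;/code&gt; class below are made up for illustration; real scraping bots are far more elaborate and crawl entire sites at scale.&lt;/p&gt;

```python
from html.parser import HTMLParser

class PriceScraper(HTMLParser):
    """Collects the text of every element carrying class="price"."""
    def __init__(self):
        super().__init__()
        self._in_price = False
        self.prices = []

    def handle_starttag(self, tag, attrs):
        # attrs is a list of (name, value) pairs for the opening tag.
        if ("class", "price") in attrs:
            self._in_price = True

    def handle_data(self, data):
        if self._in_price:
            self.prices.append(data.strip())
            self._in_price = False

page = '<ul><li class="price">$19.99</li><li class="price">$24.50</li></ul>'
scraper = PriceScraper()
scraper.feed(page)
print(scraper.prices)  # ['$19.99', '$24.50']
```

&lt;p&gt;Run against a competitor’s catalog pages on a schedule, a loop like this is all a price-scraping bot needs.&lt;/p&gt;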

&lt;p&gt;&lt;strong&gt;How does a web content scraping bot work?&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The cybercriminal programs a web scraping bot to visit the target website.&lt;/li&gt;
&lt;li&gt;The bot reads the HTML code of the website and looks for relevant data to extract.&lt;/li&gt;
&lt;li&gt;The bot extracts the desired data from the HTML code and may also extract data from the databases connected to the victim’s website.&lt;/li&gt;
&lt;li&gt;The extracted data is stored in a structured format, such as a spreadsheet or the scraper’s database.&lt;/li&gt;
&lt;li&gt;Once the bot has scraped all the data from the website, the attacker analyzes it for various purposes—for example, for reposting copyrighted materials.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;What are the types of content scraping bots?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Content scraping, also known as web scraping, is the act of using bots to download most or all of a website’s content without the owner’s consent. It falls under the category of data scraping and is usually done using automated bots. Website scraper bots can download all of a site’s content within seconds.&lt;/p&gt;

&lt;p&gt;In this section, we will cover different types of content scraping, how they work, and the impact they can cause on users or businesses.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--SbxDs4St--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/t7v86dv7m0ov1z9xbx33.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--SbxDs4St--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/t7v86dv7m0ov1z9xbx33.png" alt="Image description" width="590" height="430"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  What are the risks of bad bots?
&lt;/h2&gt;

&lt;p&gt;The risks associated with malicious bots extend beyond business organizations. As a regular user, you are also a prime target for these bots, which puts your personal information, online security, and overall well-being at risk.&lt;/p&gt;

&lt;p&gt;One particularly dangerous example is Trickbot, a botnet discovered by researchers in 2019. It was designed to steal login credentials and financial information on a global scale and had the ability to spread ransomware and malware, putting millions of people at risk because the infection was difficult to trace on affected machines.&lt;/p&gt;

&lt;p&gt;The potential dangers associated with bad bot traffic are numerous and should not be taken lightly. Here are just a few of the risks:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Identity theft.&lt;/strong&gt; With account takeover bots, personal data can be snatched and used to infiltrate sensitive accounts, which could result in identity theft and significant monetary harm to the user.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Malware infections.&lt;/strong&gt; Bots commonly infiltrate a computer system through downloads disguised as social media or email links. These links may appear as pictures or videos but contain harmful viruses and malware. If a user’s computer becomes infected, it could become part of a botnet.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Spam.&lt;/strong&gt; This can be a result of account takeover bots when the attacker uses the victim’s credentials to send out spam emails or messages.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Information theft.&lt;/strong&gt; Web scraping bots can acquire sensitive information, including confidential user data such as login details, personal addresses, and other private information.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Brand damage.&lt;/strong&gt; Content scraping bots can duplicate and repost a company’s content on various fake and untrusted websites, which may result in losing potential clients.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Financial loss.&lt;/strong&gt; DDoS bots can flood a website with traffic, making it unavailable to regular users and resulting in lost revenue for businesses.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Data breaches.&lt;/strong&gt; Credential stuffing bots can test stolen login credentials on multiple sites, increasing the risk of a data breach: if a user’s credentials work on one site, such as a social media account, they may also work on sites holding financial information, such as their bank account.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Intellectual property theft.&lt;/strong&gt; Web scraping bots can also steal intellectual property, such as copyrighted images or product designs, leading to financial loss for creators.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  How to stop bad bot traffic
&lt;/h2&gt;

&lt;p&gt;The question now is how regular website owners and users like you can prevent malicious bot traffic. Unfortunately, there is no single solution to address this concern. However, there are several recommended measures to stop bad bot traffic and mitigate its associated risks. Let’s explore the following recommendations.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Implement CAPTCHA challenges.&lt;/strong&gt; To prevent automated bot attacks, websites can require users to complete tasks that only humans can accomplish, such as solving puzzles or answering questions, before accessing sensitive data.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Use web application firewalls (WAFs).&lt;/strong&gt; These block malicious traffic by analyzing incoming requests and filtering out suspicious ones.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Monitor web traffic.&lt;/strong&gt; This can help identify unusual traffic patterns that may be indicative of bot activity.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Implement rate limiting.&lt;/strong&gt; Limiting the number of requests a user or IP address can make within a certain time frame can help prevent bot attacks.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Use bot detection software.&lt;/strong&gt; This analyzes web traffic to identify and block bot traffic based on specific criteria such as IP addresses, user-agent strings, and behavior patterns.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Implement bot management policies.&lt;/strong&gt; This can involve identifying and blocking known bot traffic, blacklisting suspicious IP addresses, and whitelisting known good bots.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Regularly update software and security protocols.&lt;/strong&gt; This helps prevent bots from exploiting known vulnerabilities in software or systems.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Using these strategies can help website owners and organizations identify and reduce the risks of malicious bots, improving their online security. However, keep in mind that these strategies might also affect legitimate human traffic and helpful bots that enhance website features. To effectively combat malicious bot traffic, website owners should consult with experts to differentiate between good and bad bots and implement mitigation strategies that balance security with website functionality. This helps to ensure that their websites remain accessible to legitimate users while minimizing the risks posed by bad bots. At Gcore, we understand the importance of providing effective measures against bad bot traffic; the following section describes how we assist our clients in countering these threats.&lt;/p&gt;
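&lt;p&gt;Of these measures, rate limiting is the easiest to prototype. The sliding-window sketch below tracks recent request timestamps per client IP and rejects anything beyond the allowance. It is a simplified illustration only; real deployments usually enforce limits at the proxy or WAF layer, and the IP and thresholds here are arbitrary.&lt;/p&gt;

```python
import time
from collections import defaultdict, deque

class RateLimiter:
    """Allow at most max_requests per window_seconds for each client."""
    def __init__(self, max_requests, window_seconds):
        self.max_requests = max_requests
        self.window = window_seconds
        self.hits = defaultdict(deque)  # client IP -> recent request timestamps

    def allow(self, client_ip, now=None):
        now = time.monotonic() if now is None else now
        q = self.hits[client_ip]
        # Evict timestamps that have aged out of the window.
        while q and now - q[0] > self.window:
            q.popleft()
        if len(q) >= self.max_requests:
            return False  # over the limit: reject (or challenge) the request
        q.append(now)
        return True

limiter = RateLimiter(max_requests=3, window_seconds=10.0)
print([limiter.allow("203.0.113.7", now=t) for t in (0, 1, 2, 3)])
# [True, True, True, False]
```

&lt;p&gt;A human browsing normally never hits such a limit, while a bot hammering a login form is cut off after a handful of attempts.&lt;/p&gt;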

&lt;h2&gt;
  
  
  How does Gcore’s DDoS and bot protection help against bad bot traffic?
&lt;/h2&gt;

&lt;p&gt;Here at Gcore, we guarantee that your online business will continue to function seamlessly, regardless of any disruptions or threats. Our security platform is designed to keep your digital business operations safe from cybercriminal attacks. We have scrubbing centers located globally that are linked to various service providers and keep backup copies of essential systems, such as filtering servers, management servers, data storage systems, and network equipment. With our platform, you can be confident that any potential attack will not affect your website’s performance or cause any disruption to your visitors and customers. Let’s take a closer look at the protection services we offer to defend against DDoS attacks and malicious bots.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Protection against DDoS attacks&lt;/strong&gt;&lt;br&gt;
Gcore’s DDoS protection ensures uninterrupted application performance even during large-scale attacks, minimizing the risk of service disruptions and preventing degradation of website performance. Here are some key points about how the DDoS protection in our web security module operates:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Attackers generate spam traffic to overwhelm targeted servers.&lt;/li&gt;
&lt;li&gt;The DDoS protection layer detects and filters incoming traffic. This includes protection against network and transport layer (L3 and L4) DDoS attacks as well as application layer (L7) attacks.&lt;/li&gt;
&lt;li&gt;Real-time bot protection. We’ll prevent parsing, advertisement fraud, and theft of your users’ personal data.&lt;/li&gt;
&lt;li&gt;WAF hacking protection. This protects our clients from manual hacking and attempts to exploit vulnerabilities or loopholes in your website, without requiring third-party SDKs or changes to the application’s code.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Furthermore, Gcore offers various security features designed to prevent or mitigate the impact of a DDoS attack on a target network or website, including the following:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A globally distributed network to filter all traffic around the world.&lt;/li&gt;
&lt;li&gt;Our growing distributed network capacity will always exceed any single DDoS attack.&lt;/li&gt;
&lt;li&gt;Protection against low-rate attacks from their first request.&lt;/li&gt;
&lt;li&gt;Advanced load balancing algorithms for better availability.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;To learn more, check out our Global DDoS protection page.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Protection against bad bots&lt;/strong&gt;&lt;br&gt;
At our company, we understand the importance of keeping your web applications and servers safe from malicious bot activities. That’s why we offer top-of-the-line bot protection services that prevent website fraud attacks, spamming of request forms, brute-force attacks, and other harmful bot activities.&lt;/p&gt;

&lt;p&gt;How do we achieve this? Our team of experts utilizes advanced algorithms that identify and remove unwanted traffic that has entered your system’s perimeter. This not only prevents overloading but also ensures that your business processes run smoothly. Want to learn more about how our protection module operates? Here are some key points:&lt;/p&gt;

&lt;p&gt;First, bad bots imitate human behavior to conduct activities that are considered inappropriate.&lt;br&gt;
Second, our system’s bot protection feature identifies and terminates connections from bots engaged in automated activities.&lt;/p&gt;

&lt;p&gt;As a result, the client’s workflow interacts only with legitimate users, never with bad bot traffic.&lt;/p&gt;

&lt;p&gt;Our bot protection system provides protection against the following harmful bad bot activities:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;DDoS botnet attacks&lt;/li&gt;
&lt;li&gt;Account takeover attempts&lt;/li&gt;
&lt;li&gt;Web content scraping&lt;/li&gt;
&lt;li&gt;API data scraping&lt;/li&gt;
&lt;li&gt;Form submission abuse&lt;/li&gt;
&lt;li&gt;TLS session attacks&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Discover more details about Gcore’s bot protection.&lt;/p&gt;

&lt;p&gt;Now that you’re familiar with our robust DDoS and bot protection services, let’s dive into real-world use cases across various industries and their corresponding descriptions.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--V5pp5NcP--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/e2twe2dxk0y3cxj7wsg9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--V5pp5NcP--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/e2twe2dxk0y3cxj7wsg9.png" alt="Image description" width="589" height="428"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Protecting your website against bad bot traffic is more important now than ever before. These malicious bots can pose a significant risk to both your website’s security and performance, leading to negative impacts on legitimate user traffic. But with Gcore’s effective mitigation strategies, you can safeguard your online systems and services from the risks associated with bad bot activity. Our DDoS protection and Edge Stream services, such as CDN, provide a comprehensive solution that detects and blocks bad bot traffic, ensuring optimal performance and maximum security. To learn more and start protecting your business today, &lt;a href="https://gcore.com/emergency-ddos-protection"&gt;contact us at Gcore&lt;/a&gt;.&lt;/p&gt;

</description>
    </item>
  </channel>
</rss>
