<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Griffin</title>
    <description>The latest articles on Forem by Griffin (@wingofagriffin).</description>
    <link>https://forem.com/wingofagriffin</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F230818%2F08ac78ee-6575-4878-8de4-cedbea6daec2.png</url>
      <title>Forem: Griffin</title>
      <link>https://forem.com/wingofagriffin</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/wingofagriffin"/>
    <language>en</language>
    <item>
      <title>What is WHEP? – Intro to WebRTC Streaming Part 2</title>
      <dc:creator>Griffin</dc:creator>
      <pubDate>Mon, 01 May 2023 18:51:30 +0000</pubDate>
      <link>https://forem.com/dolbyio/what-is-whep-intro-to-webrtc-streaming-part-2-3d99</link>
      <guid>https://forem.com/dolbyio/what-is-whep-intro-to-webrtc-streaming-part-2-3d99</guid>
      <description>&lt;p&gt;In &lt;a href="https://dolby.io/blog/what-is-whip-intro-to-webrtc-streaming-part-1/" rel="noopener noreferrer"&gt;the previous article&lt;/a&gt;, we discussed WebRTC and the new standard developed to help us ingest data with it, known as &lt;a href="https://datatracker.ietf.org/doc/draft-ietf-wish-whip/" rel="noopener noreferrer"&gt;WHIP&lt;/a&gt;. However, for data that is ingested, that same data will likely need to be egressed, or distributed at some point. Bring in WebRTC-HTTP egress protocol, or WHEP. Abstractly, the ingestion is the part that covers the uploading of data to a server, and the egress handles the downloading to an end user. The benefits we gained from WHIP, such as the low latency and end-to-end encryption apply here as well: WHEP enables WebRTC communication on the other end of the content delivery pipeline; WHEP assists with serving content to the viewer.&lt;/p&gt;

&lt;p&gt;In this post, we will take a look at WHEP, an IETF protocol that lets us egress content over WebRTC, modernizing content delivery on the web beyond previous standards.&lt;/p&gt;

&lt;h2&gt;Why is WHEP useful?&lt;/h2&gt;

&lt;p&gt;As mentioned above, WHIP only solves half of the equation when working with WebRTC-based content delivery. While you could read &lt;a href="https://datatracker.ietf.org/doc/draft-murillo-whep/" rel="noopener noreferrer"&gt;the official IETF documentation&lt;/a&gt;, we will summarize it more simply here. WHEP aims to solve the distribution side of WebRTC-based content. See the diagram below for a visual aid of how it all fits together, using the Dolby.io Real-time Streaming APIs as an example:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhcu5q9iu055e67hks20w.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhcu5q9iu055e67hks20w.png" alt="WHIP/WHEP Workflow"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The benefit of having WHEP handle the egress side of broadcast WebRTC infrastructure is similar to the benefit of WHIP: standardization. Just as WHIP lets broadcasters focus on their infrastructure and its scaling without worrying about delivery logistics, WHEP lets distributors focus on the end-user experience, since they know exactly how the data will be received and handled. The end goal is to save time and resources for all parties through standardization.&lt;/p&gt;

&lt;p&gt;WHIP and WHEP do for real-time video what RTMP did for Flash video and what SRT does for transport streams: they standardize the protocol the media endpoints use to speak to each other, like a shared language, so that any WHIP encoder can talk to any WHIP server and any WHEP service can talk to any WHEP player without any additional setup. A WHIP/WHEP URL should simply work no matter which environment is being used.&lt;/p&gt;

&lt;p&gt;There are many situations where a standard protocol for consuming streaming media over WebRTC would be helpful. Some examples include:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Interoperability between WebRTC services, media servers, publishers, and players&lt;/li&gt;
&lt;li&gt;Playing WebRTC streams on TVs and other smart devices that do not support custom JavaScript&lt;/li&gt;
&lt;li&gt;Creating modular, reusable software for media players&lt;/li&gt;
&lt;li&gt;Integrating with &lt;a href="https://dashif.org/webRTC/report.html#54-example-client-architecture" rel="noopener noreferrer"&gt;DASH&lt;/a&gt;, a current popular standard for adaptive bitrate streaming&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Where WHEP differs from being simply “the WHIP spec in reverse” is in the specifics of the protocol. While it mostly behaves like WHIP, using HTTP requests with Bearer Tokens for authentication, it is more flexible with &lt;a href="https://en.wikipedia.org/wiki/Session_Description_Protocol" rel="noopener noreferrer"&gt;SDP communication&lt;/a&gt;: WHEP allows an SDP offer to be delivered immediately in the initial HTTP request, or the client can send a POST request signaling intent to receive an offer back. This flexibility accommodates different use cases and environments, as detailed in the IETF draft linked above. RTSP, an older industry standard, does not support this model, for example.&lt;/p&gt;
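&lt;p&gt;As a concrete sketch of that first mode, the entire handshake can fit in one authenticated HTTP POST carrying the client’s SDP offer. The endpoint URL, token, and SDP strings below are hypothetical placeholders, not a specific vendor API:&lt;/p&gt;

```javascript
// A minimal sketch of the WHEP client-offer flow, assuming a hypothetical
// endpoint URL and Bearer Token. This illustrates the request shape described
// in the draft, not a specific Dolby.io API.
async function whepSubscribe(endpointUrl, bearerToken, sdpOffer) {
  const response = await fetch(endpointUrl, {
    method: "POST",
    headers: {
      "Content-Type": "application/sdp",
      "Authorization": `Bearer ${bearerToken}`,
    },
    body: sdpOffer, // the local SDP offer travels in the initial request
  });
  if (!response.ok) {
    throw new Error(`WHEP request failed: ${response.status}`);
  }
  // The Location header points at the session resource; sending a DELETE
  // to it later tears the session down.
  const sessionUrl = response.headers.get("Location");
  const sdpAnswer = await response.text(); // SDP answer completes the exchange
  return { sessionUrl, sdpAnswer };
}
```

&lt;p&gt;In the second mode, the initial POST carries no SDP and the server responds with an offer instead, with the client delivering its answer to the session resource afterward.&lt;/p&gt;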

&lt;h2&gt;Dolby.io + WHEP&lt;/h2&gt;

&lt;p&gt;Dolby.io is a leader in the definition and research of WHEP, which, like WHIP, is an open standard. Our researchers helped develop the standard, we support it in our &lt;a href="https://docs.dolby.io/streaming-apis/reference/whep_whepsubscribe" rel="noopener noreferrer"&gt;Streaming Platform&lt;/a&gt;, and we are working directly with software and hardware partners to integrate WHEP into their ecosystems. To learn more, see this Kranky Geek recording of Dolby.io Senior Director of Engineering Sergio Garcia Murillo, the lead researcher developing WHIP and WHEP:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://youtu.be/rIQVVJOjR0U" rel="noopener noreferrer"&gt;https://youtu.be/rIQVVJOjR0U&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We believe WHEP is the future of WebRTC egress, and we want to support the community and projects around it. WHEP is only useful if it gains wide adoption. We encourage you to try out WHEP for your next streaming project and let us know your experience. One way to do this is with our &lt;a href="https://github.com/dolbyio-samples/streaming-WHIP-WHEP-node-sample" rel="noopener noreferrer"&gt;sample app using Node&lt;/a&gt;, our &lt;a href="https://github.com/millicast/videojs-plugin-millicast-whep" rel="noopener noreferrer"&gt;sample using Video.js&lt;/a&gt;, or using another community implementation such as &lt;a href="https://www.meetecho.com/blog/whip-whep/" rel="noopener noreferrer"&gt;this one by Lorenzo Miniero&lt;/a&gt;. We’d love to hear your thoughts on our &lt;a href="https://twitter.com/dolbyio" rel="noopener noreferrer"&gt;Twitter&lt;/a&gt; or &lt;a href="https://www.linkedin.com/company/dolbyio/" rel="noopener noreferrer"&gt;LinkedIn&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>webdev</category>
      <category>webrtc</category>
      <category>learning</category>
    </item>
    <item>
      <title>What is WHIP? Intro to WebRTC Streaming Part 1</title>
      <dc:creator>Griffin</dc:creator>
      <pubDate>Thu, 20 Apr 2023 18:03:23 +0000</pubDate>
      <link>https://forem.com/dolbyio/what-is-whip-intro-to-webrtc-streaming-part-1-4f6d</link>
      <guid>https://forem.com/dolbyio/what-is-whip-intro-to-webrtc-streaming-part-1-4f6d</guid>
      <description>&lt;p&gt;When considering which tool to use for your real-time streaming platform, WebRTC is one of the hot concepts brought into the forefront. While WebRTC has been around since 2011 and has since been successful at being used in many scenarios, optimizing WebRTC for live generated content, such as in the broadcasting industry, as opposed to pre-existing files is where things get more complex. WHIP and WHEP are two new standards designed to assist in ingesting and egressing this media into WebRTC instead of having to rely on using older standards like RTMP to do that.&lt;/p&gt;

&lt;p&gt;In this post, we will focus on WHIP, or the WebRTC-HTTP ingestion protocol, an IETF protocol developed to let us use WebRTC to ingest content into our platforms in place of those older protocols.&lt;/p&gt;

&lt;h2&gt;Why WHIP?&lt;/h2&gt;

&lt;p&gt;For those of you who are overwhelmed by &lt;a href="https://datatracker.ietf.org/doc/draft-ietf-wish-whip/"&gt;the official IETF document&lt;/a&gt;: WHIP (a product of the IETF WISH working group) is an open standard that you can use right now for WebRTC-based ingestion. You can use it today with open-source software such as &lt;a href="https://docs.dolby.io/streaming-apis/docs/using-whip-with-gstreamer"&gt;GStreamer&lt;/a&gt; or &lt;a href="https://github.com/CoSMoSoftware/OBS-studio-webrtc"&gt;OBS (fork)&lt;/a&gt; as a way to publish your content with WebRTC.&lt;/p&gt;

&lt;p&gt;A benefit of using WebRTC-based content is its extremely low latency and its security, with end-to-end encryption. However, early versions of WebRTC-based streaming were associated with poor quality and limited viewer numbers. WHIP solves this by removing the translation layers previously needed to use WebRTC, which caused many of those flaws, giving us all of the benefits of WebRTC without the downsides. WHIP provides a standard signaling protocol for WebRTC, making it easy to support and integrate into software and hardware.&lt;/p&gt;

&lt;p&gt;WHIP supports several important mechanisms, such as HTTP POST-based requests for the SDP offer/answer (O/A) exchange, HTTP redirects for load balancing, and authentication and authorization via the HTTP Authorization header with Bearer Tokens.&lt;/p&gt;
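&lt;p&gt;To make those mechanisms concrete, here is a minimal JavaScript sketch of a WHIP publish, assuming a hypothetical endpoint URL and token rather than a specific vendor API:&lt;/p&gt;

```javascript
// A minimal sketch of a WHIP publish, assuming a hypothetical endpoint URL
// and token; it mirrors the mechanisms listed above, not a vendor-specific API.
async function whipPublish(endpointUrl, bearerToken, sdpOffer) {
  const response = await fetch(endpointUrl, {
    method: "POST",
    headers: {
      "Content-Type": "application/sdp",
      "Authorization": `Bearer ${bearerToken}`, // Bearer Token auth
    },
    body: sdpOffer, // the SDP offer rides in the POST body
  });
  // A WHIP server may answer with an HTTP redirect for load balancing;
  // fetch follows redirects automatically by default.
  if (!response.ok) {
    throw new Error(`WHIP request failed: ${response.status}`);
  }
  const resourceUrl = response.headers.get("Location"); // session resource
  const sdpAnswer = await response.text(); // SDP answer completes the O/A
  return { resourceUrl, sdpAnswer };
}

// Later ICE candidates can be trickled to the session resource via PATCH:
async function whipTrickle(resourceUrl, sdpFragment) {
  await fetch(resourceUrl, {
    method: "PATCH",
    headers: { "Content-Type": "application/trickle-ice-sdpfrag" },
    body: sdpFragment,
  });
}
```

&lt;p&gt;Ending the session is just as simple: a DELETE request to the returned session resource.&lt;/p&gt;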

&lt;p&gt;Think of it like a rail network. Without any signalers, trains behave erratically: too many trains on one track cause slowdowns, other tracks sit unused, and crashes and collisions become possible. With a signaler, trains are directed in an orderly fashion, keeping the system moving quickly and efficiently. WHIP acts as this signaler, handling tasks like creating or deleting endpoints as needed and performing operations like &lt;a href="https://webrtc.github.io/samples/src/content/peerconnection/trickle-ice/"&gt;Trickle ICE&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;How Does Dolby.io Fit In?&lt;/h2&gt;

&lt;p&gt;As mentioned before, WHIP is an open standard. Dolby.io supports WHIP not only by providing integrations, but also by leading the definition and research of the standard. Our researchers helped develop it, our engineers have implemented it in our &lt;a href="https://docs.dolby.io/streaming-apis/docs"&gt;Streaming Platform&lt;/a&gt;, and we have worked directly with software and hardware partners to integrate the standard into their platforms, such as &lt;a href="https://docs.dolby.io/streaming-apis/docs/using-whip-with-flowcaster"&gt;FlowCaster&lt;/a&gt; and &lt;a href="https://docs.dolby.io/streaming-apis/docs/using-osprey-talon-whip-hardware-encoder"&gt;Osprey&lt;/a&gt; for software and hardware encoding respectively.&lt;/p&gt;

&lt;p&gt;We believe WHIP is the future of WebRTC ingestion, and we want to support the development and community around it, as a standard is nothing without wide adoption. We encourage you to try out WHIP today with one of the previously mentioned integrations for your next streaming project and let us know your experience. We’d love to hear your thoughts on our &lt;a href="https://twitter.com/dolbyio"&gt;Twitter&lt;/a&gt; or &lt;a href="https://www.linkedin.com/company/dolbyio/"&gt;LinkedIn&lt;/a&gt;. Or try out our &lt;a href="https://github.com/dolbyio-samples/streaming-WHIP-WHEP-node-sample"&gt;sample app using Node&lt;/a&gt; and leave some feedback on GitHub.&lt;/p&gt;

&lt;p&gt;Stay tuned for Part 2 where we will talk about the other end of the process, WHEP, or WebRTC-HTTP egress protocol, and see how WebRTC will define the future of all streaming and broadcasting communications.&lt;/p&gt;

</description>
      <category>webdev</category>
      <category>webrtc</category>
      <category>learning</category>
    </item>
    <item>
      <title>Automating Your Stream Start, Intro, and Ending Processes with OBS Macros</title>
      <dc:creator>Griffin</dc:creator>
      <pubDate>Wed, 16 Nov 2022 22:04:39 +0000</pubDate>
      <link>https://forem.com/dolbyio/automating-your-stream-start-intro-and-ending-processes-with-obs-macros-1p24</link>
      <guid>https://forem.com/dolbyio/automating-your-stream-start-intro-and-ending-processes-with-obs-macros-1p24</guid>
      <description>&lt;p&gt;As you begin developing a brand and a streaming presence, it is often desirable to define and establish your branding with digital assets that are used at the beginning and ends of each of your stream. Often times we see this as an intro video that will broadcast to your audience that your stream is beginning, giving them time to settle in and get excited for the show, whether it be an auction, sports broadcast, gaming event, or something else. While entirely possible to do this manually every time, it can add to the growing complexity of tasks to do when beginning your broadcast. Sometimes you might even forget to play the video at all, human error and all. In this article, we will showcase a few different ways we can automate the media playback to occur whenever you begin your stream, taking an extra step away so you can focus on the broadcast while keeping your brand intact.&lt;/p&gt;

&lt;h2&gt;Starting Stream&lt;/h2&gt;

&lt;p&gt;Before thinking about what the intro video will look like, first consider what the very beginning of a stream looks like. Most viewers who show up early see a “Broadcast is not Live” page by default, which isn’t ideal. We want users to know you are about to go live soon so they stay on the page instead of closing the window. If we play the intro too early, though, not enough users will be around to see it, losing brand awareness and leaving latecomers feeling they missed out. The solution is to still start the stream ahead of schedule, but with a static image or looping video that simply lets the audience know the stream is starting soon.&lt;/p&gt;

&lt;p&gt;We can automate the process of beginning this video with an OBS plugin called &lt;a href="https://github.com/WarmUpTill/SceneSwitcher/releases/latest" rel="noopener noreferrer"&gt;Advanced Scene Switcher&lt;/a&gt;. Once downloaded and installed with the appropriate installer for your system, this plugin will create a new option on the “Tools” menu of OBS with the same name.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fc9n6essl5zntx5pne7jc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fc9n6essl5zntx5pne7jc.png" alt="tools menu" width="300" height="285"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Note: If you are having trouble seeing this plugin in the Tools menu, ensure you installed OBS from the official site rather than from Homebrew on macOS or a similar package manager. We have experienced issues with the package-managed versions.&lt;/p&gt;

&lt;h2&gt;Adding Advanced Scene Switcher to OBS-WebRTC&lt;/h2&gt;

&lt;p&gt;For use with Dolby.io Streaming, we will want to add the plugin to our installation of OBS-WebRTC. Thankfully, the plugin is compatible; however, the installer only looks for OBS Studio. To add the plugin manually to OBS-WebRTC, do the following:&lt;/p&gt;

&lt;p&gt;macOS:&lt;/p&gt;

&lt;p&gt;In &lt;code&gt;~/Library/Application Support/obs-studio&lt;/code&gt; there will be a folder titled “plugins”.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2b2balx7k3ew97wrkod3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2b2balx7k3ew97wrkod3.png" alt="Plugins Folder" width="574" height="466"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Copy this folder with the installed plugin(s) and paste it into &lt;code&gt;~/Library/Application Support/obs-webrtc&lt;/code&gt;. Restart OBS-WebRTC and it should appear as normal.&lt;/p&gt;

&lt;p&gt;Windows:&lt;/p&gt;

&lt;p&gt;Follow the instructions above, but replace the macOS paths with the Windows paths for plugins:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;C:\Program Files\OBS-Studio\obs-plugins\64bit&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;code&gt;C:\Program Files\OBS-WebRTC\obs-plugins\64bit&lt;/code&gt;&lt;/p&gt;

&lt;h2&gt;Using Advanced Scene Switcher&lt;/h2&gt;

&lt;p&gt;Upon opening Advanced Scene Switcher, we should see a few options, the most important of which is whether the plugin is running. You can customize how you want OBS to auto-start the plugin, but ensure that the status is Active before using it.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3rzxdzal45u1105sy2wi.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3rzxdzal45u1105sy2wi.png" alt="Inactive Plugin" width="800" height="74"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Then, switch to the “Macro” tab.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F02oo27vfl3bbbalyrxli.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F02oo27vfl3bbbalyrxli.png" alt="Macro Tab" width="800" height="477"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Here we can begin adding macros for our stream to automate multiple different actions. Let’s begin with the “Stream is about to begin” automation.&lt;/p&gt;

&lt;h2&gt;Starting the Stream&lt;/h2&gt;

&lt;p&gt;Inside the Macro tab, we have a few different panels to work with. To begin, let’s add a new macro by clicking the “+” under the “Macros” panel on the left and name it whatever you want. In this case, I named it “Starting stream”.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkxmb810hnmpnumr96eey.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkxmb810hnmpnumr96eey.png" alt="Macro Bar" width="348" height="992"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now, under the “Edit macro” panel, we have two sections: the macro conditions on top and the macro actions on the bottom. First, let’s click the “+” under macro conditions to create a new condition. This generates a conditional statement builder for us to define what will trigger the macro. In this case, we want it to read “If Streaming Stream starting”, which will trigger the macro whenever we start the stream.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsxzaborjjlyn8zrumync.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsxzaborjjlyn8zrumync.png" alt="Stream Starting" width="800" height="153"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Next, we want to create a new macro action by clicking the “+” under the bottom panel. There are a couple of things to do here, the first of which is switching to our Starting stream scene. This can be done with an action containing the following information: Switch scene → Switch to scene using “Cut”, which will automatically set the stream to that scene upon starting the broadcast.&lt;/p&gt;

&lt;p&gt;If you haven’t already, we suggest creating a scene with a placeholder image or looping video to switch to.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpxhkr4d9s13vvpatblsv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpxhkr4d9s13vvpatblsv.png" alt="Stream Starting Switch" width="800" height="181"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We can also add some audio as part of this automation. First, let’s add a media source to our scene with some royalty-free music to play as people join the stream. This could also be a looping video if you are not using a static image in the scene. Make sure that “Loop” and “Restart playback when source becomes active” are checked so the music doesn’t stop until you tell it to.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqh90z189xbu0oqnac1v9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqh90z189xbu0oqnac1v9.png" alt="Add Media" width="800" height="671"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Then, back in Advanced Scene Switcher, we can add a new macro action to our Starting stream macro that reads “Media Music Play ”. This ensures that the audio file starts playing when you start your stream!&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4z71aqkh3xxpd2mpk356.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4z71aqkh3xxpd2mpk356.png" alt="Play Music" width="800" height="119"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now, upon starting your stream, OBS will automatically switch to your “Starting stream” scene and play the audio or video file, welcoming early viewers with a greeting page as your audience builds.&lt;/p&gt;

&lt;h2&gt;Creating a Stream Intro&lt;/h2&gt;

&lt;p&gt;Now that we have created our “Starting soon” macro, we will want to add a macro for an intro video that plays before switching to the live video feed. This works in much the same way as above, but with a different macro condition. In a new macro we will title “Intro”, create a condition using the “Hotkey” option this time, and name it whatever seems best to you.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftw2fpyvju4fhifp4c37p.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftw2fpyvju4fhifp4c37p.png" alt="Start Intro Hotkey" width="800" height="121"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We can then assign the hotkey within the OBS Settings menu under “Hotkeys”, where it appears under the name submitted in the previous step.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fx0lvibex73stpyud2k34.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fx0lvibex73stpyud2k34.png" alt="OBS Hotkeys Menu" width="800" height="593"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Back in Advanced Scene Switcher, we can now add the actions as we did before. Let’s add another Switch scene action:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9uwzxuq9lr3v8a7zmqyv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9uwzxuq9lr3v8a7zmqyv.png" alt="Swap to Intro" width="800" height="147"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Another Media action:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fafkgif0m1lylfzxkipa5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fafkgif0m1lylfzxkipa5.png" alt="Play Video" width="800" height="108"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This time, we want to switch the scene one more time after the video finishes. To do this, we first need to add a “Wait” action equal to the length of the intro video, which keeps the remaining actions from triggering before the video is finished.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fraynnszdjis56wsokxwo.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fraynnszdjis56wsokxwo.png" alt="Wait Command" width="800" height="113"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Then after the wait, we can Switch scene one last time:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdrarqut0bglis6ulkbsa.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdrarqut0bglis6ulkbsa.png" alt="Swap to Live Feed" width="800" height="146"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;And now, upon hitting the assigned hotkey, the starting scene will swap to your intro video, play it, then switch to the live feed when complete!&lt;/p&gt;

&lt;h2&gt;Ending Stream Macro&lt;/h2&gt;

&lt;p&gt;Before ending your stream, you may want to show viewers a “Stream is ending” screen so anyone still tuned in knows the broadcast is complete. This is built very similarly to the previous macro. To begin, we start with another Hotkey condition:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F35xbb6ub7fjyg3r63ntx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F35xbb6ub7fjyg3r63ntx.png" alt="Ending Hotkey" width="800" height="126"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Next, we switch the scene to “Stream Ending”:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fd24nz6l45gx58hmgbqja.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fd24nz6l45gx58hmgbqja.png" alt="Stream Ending Swap" width="800" height="141"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Play our exit music or video:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsdlmsl5quzxibikx3wew.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsdlmsl5quzxibikx3wew.png" alt="Play Music" width="800" height="119"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Wait for the media to finish:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4javj2vivdzttm9w2xi1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4javj2vivdzttm9w2xi1.png" alt="Wait Again" width="800" height="115"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Then stop the stream with a final action of “Streaming → Stop Streaming”:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3voly16bua8hv8zupd1r.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3voly16bua8hv8zupd1r.png" alt="Stop Streaming" width="800" height="110"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;And that’s it! One hotkey will transition to an ending slide, play music, and end the stream for you. No more fiddling around with multiple applications and buttons to finish that task.&lt;/p&gt;

&lt;h2&gt;
  
  
  Wrapping Up
&lt;/h2&gt;

&lt;p&gt;In this article, we outlined a few different things:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Installing Advanced Scene Switcher to OBS (and OBS-WebRTC)&lt;/li&gt;
&lt;li&gt;Configuring macros for

&lt;ul&gt;
&lt;li&gt;Starting a stream&lt;/li&gt;
&lt;li&gt;Playing an intro video&lt;/li&gt;
&lt;li&gt;Ending a stream&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;Adding custom hotkey support&lt;/li&gt;

&lt;li&gt;Managing autoplayed media for streams&lt;/li&gt;

&lt;/ul&gt;

&lt;p&gt;This is an extremely useful tool for turning your live streams into a more professional, broadcast-quality operation, with use cases spanning the many industries that need to live stream their content and events.&lt;/p&gt;

&lt;p&gt;Read more about the Advanced Scene Switcher on &lt;a href="https://obsproject.com/forum/resources/advanced-scene-switcher.395/" rel="noopener noreferrer"&gt;the OBS Forum&lt;/a&gt;, or read &lt;a href="https://dolby.io/blog/using-webrtc-in-obs-for-remote-live-production/" rel="noopener noreferrer"&gt;this blog post&lt;/a&gt; on getting started with OBS-WebRTC to enable your live streams to broadcast in real time for maximum interactivity with your viewers.&lt;/p&gt;

&lt;p&gt;Happy streaming!&lt;/p&gt;

</description>
      <category>mentorship</category>
      <category>discuss</category>
    </item>
    <item>
      <title>Transcribing Dolby.io Communications Recordings with Deepgram</title>
      <dc:creator>Griffin</dc:creator>
      <pubDate>Mon, 11 Apr 2022 16:43:28 +0000</pubDate>
      <link>https://forem.com/dolbyio/transcribing-dolbyio-communications-recordings-with-deepgram-3ee5</link>
      <guid>https://forem.com/dolbyio/transcribing-dolbyio-communications-recordings-with-deepgram-3ee5</guid>
      <description>&lt;p&gt;In this digital age where virtual conferences are a dime a dozen, we see a large number of them recorded for future records. There are many uses for these records, including sharing with people who were unable to attend live, distributing for use as training, and keeping backups for future reference. One aspect of these recordings that is taken for granted, however, is accessibility. In this blog, we will demonstrate how to take recordings from your Dolby.io Communications conferences, and use &lt;a href="https://deepgram.com/"&gt;Deepgram&lt;/a&gt; to transcribe them to text.&lt;/p&gt;

&lt;p&gt;Having text copies of your conference recordings is a good way to offer alternative ways to digest the information. Some people read faster than they can listen to spoken words. Some people might not speak the same first language as the one used in the conference, and are more comfortable reading it. Others might be hearing impaired, and prefer to read for maximum comfort. Whatever the reason, we want to make it simple to automate the transcription process. Here, we will use the &lt;a href="https://docs.dolby.io/communications-apis/reference/authentication-api"&gt;Dolby.io Communications REST APIs&lt;/a&gt; in tandem with &lt;a href="https://developers.deepgram.com/api-reference/#transcription-prerecorded"&gt;Deepgram’s Pre-recorded Audio API&lt;/a&gt; in Python to demonstrate how to automate this process.&lt;/p&gt;

&lt;h2&gt;
  
  
  Installing Libraries
&lt;/h2&gt;

&lt;p&gt;Before we begin coding, we need to ensure we have all the proper libraries for calling these APIs. We can do this with a simple pip command (use the appropriate pip command for your operating system):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;pip3 &lt;span class="nb"&gt;install &lt;/span&gt;asyncio deepgram-sdk dolbyio-rest-apis
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This will install both the Dolby.io and Deepgram SDKs, as well as Python’s native asynchronous function library to aid us in calling the async requests the two SDKs use.&lt;/p&gt;

&lt;p&gt;It is also a good idea to sign up for a free &lt;a href="https://dolby.io/signup"&gt;Dolby.io&lt;/a&gt; and &lt;a href="https://console.deepgram.com/signup"&gt;Deepgram&lt;/a&gt; account if you haven’t already, to get your API credentials.&lt;/p&gt;

&lt;h2&gt;
  
  
  Obtaining an API Token
&lt;/h2&gt;

&lt;p&gt;In order to use the Dolby.io Communications REST APIs, we need to first generate a temporary access token. This is to help prevent your permanent account credentials from being accidentally leaked, as the token will expire automatically. To learn more about this, read the &lt;a href="https://docs.dolby.io/communications-apis/reference/authentication-api"&gt;documentation&lt;/a&gt;. In this case, we want to fill in the consumer key and secret with our &lt;a href="https://dashboard.dolby.io/dashboard/applications/summary"&gt;credentials&lt;/a&gt; from our &lt;strong&gt;Communications APIs&lt;/strong&gt; (not Media). We then call the &lt;code&gt;get_api_access_token&lt;/code&gt; endpoint within a function so we can generate a fresh token every time we make another call. This is not the most secure way to handle this, but will ensure we don’t run into any expired credentials down the road. To learn more, see our &lt;a href="https://docs.dolby.io/communications-apis/docs/guides-security-best-practices"&gt;security best practices guide&lt;/a&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="nn"&gt;dolbyio_rest_apis.communications&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;authentication&lt;/span&gt;
&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="nn"&gt;asyncio&lt;/span&gt;

&lt;span class="c1"&gt;# Input your Dolby.io Communications Credentials here
&lt;/span&gt;&lt;span class="n"&gt;CONSUMER_KEY&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s"&gt;"&amp;lt;DOLBYIO_CONSUMER_KEY&amp;gt;"&lt;/span&gt;
&lt;span class="n"&gt;CONSUMER_SECRET&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s"&gt;"&amp;lt;DOLBYIO_CONSUMER_SECRET&amp;gt;"&lt;/span&gt;

&lt;span class="c1"&gt;# Create a function that will generate a new api access token when needed
&lt;/span&gt;&lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;gen_token&lt;/span&gt;&lt;span class="p"&gt;():&lt;/span&gt;
    &lt;span class="n"&gt;response&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="n"&gt;authentication&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;get_api_access_token&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;CONSUMER_KEY&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;CONSUMER_SECRET&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;response&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s"&gt;'access_token'&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;

&lt;span class="k"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="s"&gt;"Access Token: &lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="n"&gt;gen_token&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Getting the Conference ID
&lt;/h2&gt;

&lt;p&gt;Now that we can call the Dolby.io APIs, we first want to get the internal conference ID of the recording we want to transcribe. We can do this by simply calling the &lt;code&gt;get_conferences&lt;/code&gt; endpoint with our token.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="nn"&gt;dolbyio_rest_apis.communications.monitor&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;conferences&lt;/span&gt;

&lt;span class="n"&gt;response&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="n"&gt;conferences&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;get_conferences&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="n"&gt;gen_token&lt;/span&gt;&lt;span class="p"&gt;())&lt;/span&gt;
&lt;span class="c1"&gt;# Save the most recent conference. Change '-1' to whichever conference you want.
&lt;/span&gt;&lt;span class="n"&gt;confId&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;response&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s"&gt;'conferences'&lt;/span&gt;&lt;span class="p"&gt;][&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;][&lt;/span&gt;&lt;span class="s"&gt;'confId'&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
&lt;span class="k"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;confId&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Note that in this code sample, we are using the expression &lt;code&gt;['conferences'][-1]['confId']&lt;/code&gt;. This pulls only the most recent conference in the list, as indicated by the &lt;code&gt;-1&lt;/code&gt; index. If you are automating this to work with every newly generated conference, this likely will not be an issue. However, if you are looking to do this with a specific conference, we suggest using &lt;a href="https://docs.dolby.io/communications-apis/reference/get-conferences"&gt;the optional parameters in the get_conferences endpoint&lt;/a&gt; to obtain the desired conference ID.&lt;/p&gt;
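&lt;p&gt;As a sketch of that specific-conference case, you could also filter the &lt;code&gt;get_conferences&lt;/code&gt; response yourself. The &lt;code&gt;alias&lt;/code&gt; field name below is an assumption about the response shape (check the endpoint reference), and the mocked list is illustrative only:&lt;/p&gt;

```python
# Hypothetical helper: pick a conference by its alias instead of taking
# the most recent entry. The 'alias' key is an assumption about the
# get_conferences response shape; adjust it to match your payload.
def find_conf_id(conferences, alias):
    for conf in conferences:
        if conf.get('alias') == alias:
            return conf['confId']
    return None

# Example with a mocked response list:
mock_conferences = [
    {'confId': 'abc-123', 'alias': 'weekly-standup'},
    {'confId': 'def-456', 'alias': 'all-hands'},
]
print(find_conf_id(mock_conferences, 'all-hands'))  # def-456
```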

&lt;h2&gt;
  
  
  Obtaining the Recording
&lt;/h2&gt;

&lt;p&gt;With the conference ID in hand, we can now call an endpoint to generate a URL that contains the audio file of our conference. For this code sample, we are using a &lt;a href="https://docs.dolby.io/communications-apis/docs/guides-dolby-voice"&gt;Dolby Voice&lt;/a&gt; conference, so we will use the endpoint to &lt;a href="https://docs.dolby.io/communications-apis/reference/get-dolby-voice-audio-recordings"&gt;Get the Dolby Voice audio recording&lt;/a&gt;. If you know you are &lt;strong&gt;not&lt;/strong&gt; using Dolby Voice, you can use &lt;a href="https://docs.dolby.io/communications-apis/reference/get-mp3-recording"&gt;this endpoint&lt;/a&gt; instead. Note that we are only obtaining the audio track of the conference instead of both the audio and the video, for maximum file compatibility with the transcription software. Also note that the URL produced is temporary, and will expire after some time.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="nn"&gt;dolbyio_rest_apis.communications.monitor&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;recordings&lt;/span&gt;

&lt;span class="c1"&gt;# Save only the mp3 file and return as a URL.
# If your conference does not use Dolby Voice, use 'download_mp3_recording' instead.
# https://github.com/dolbyio-samples/dolbyio-rest-apis-client-python/blob/main/client/src/dolbyio_rest_apis/communications/monitor/recordings.py
&lt;/span&gt;&lt;span class="n"&gt;response&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="n"&gt;recordings&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;get_dolby_voice_recordings&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="n"&gt;gen_token&lt;/span&gt;&lt;span class="p"&gt;(),&lt;/span&gt; &lt;span class="n"&gt;confId&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;recording_url&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;response&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s"&gt;'url'&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
&lt;span class="k"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;recording_url&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;To help illustrate, &lt;a href="https://dolby.io/wp-content/uploads/2022/03/record_full_conf_5bcb9f6c-72c9-4a2f-afc3-eb70e935244c.mp3"&gt;here is an example conference recording&lt;/a&gt; made for transcription generated from the above code.&lt;/p&gt;
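&lt;p&gt;Since the presigned URL expires, you may want to save a local copy of the recording before moving on (for example, to archive it alongside the transcript). Here is a minimal sketch using only the standard library; the destination filename is a placeholder:&lt;/p&gt;

```python
import urllib.request

# Download the recording before the presigned URL expires.
# 'dest' is a placeholder filename; pass in the URL from the previous step.
def download_recording(url, dest='conference_recording.mp3'):
    urllib.request.urlretrieve(url, dest)
    return dest

# Usage (with the recording_url obtained above):
# download_recording(recording_url)
```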

&lt;h2&gt;
  
  
  Transcribing it with Deepgram
&lt;/h2&gt;

&lt;p&gt;While Deepgram does work with local files, the presigned recording URL saves us the hassle of downloading the file and re-uploading it to a secure server. Instead, we can insert the URL directly into the code below, adapted from Deepgram’s &lt;a href="https://developers.deepgram.com/documentation/getting-started/prerecorded/"&gt;Python Getting Started Guide&lt;/a&gt;. The code provided only uses the &lt;a href="https://developers.deepgram.com/documentation/features/punctuate/"&gt;Punctuation feature&lt;/a&gt;, but could easily be expanded with an assortment of &lt;a href="https://developers.deepgram.com/documentation/features/"&gt;the many features Deepgram provides&lt;/a&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="nn"&gt;deepgram&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;Deepgram&lt;/span&gt;

&lt;span class="c1"&gt;# Your Deepgram API Key
&lt;/span&gt;&lt;span class="n"&gt;DEEPGRAM_API_KEY&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s"&gt;'&amp;lt;DEEPGRAM_API_KEY&amp;gt;'&lt;/span&gt;

&lt;span class="c1"&gt;# Location of the file you want to transcribe. Should include filename and extension.
&lt;/span&gt;&lt;span class="n"&gt;FILE&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;recording_url&lt;/span&gt;

&lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;main&lt;/span&gt;&lt;span class="p"&gt;():&lt;/span&gt;

  &lt;span class="c1"&gt;# Initialize the Deepgram SDK
&lt;/span&gt;  &lt;span class="n"&gt;deepgram&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;Deepgram&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;DEEPGRAM_API_KEY&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

  &lt;span class="c1"&gt;# file is remote
&lt;/span&gt;  &lt;span class="c1"&gt;# Set the source
&lt;/span&gt;  &lt;span class="n"&gt;source&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="s"&gt;'url'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;FILE&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;

  &lt;span class="c1"&gt;# Send the audio to Deepgram and get the response
&lt;/span&gt;  &lt;span class="n"&gt;response&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="n"&gt;asyncio&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;create_task&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="n"&gt;deepgram&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;transcription&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;prerecorded&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
      &lt;span class="n"&gt;source&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="s"&gt;'punctuate'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="bp"&gt;True&lt;/span&gt;
      &lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="p"&gt;)&lt;/span&gt;
  &lt;span class="p"&gt;)&lt;/span&gt;

  &lt;span class="c1"&gt;# Write only the transcript to the console
&lt;/span&gt;  &lt;span class="k"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;response&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s"&gt;'results'&lt;/span&gt;&lt;span class="p"&gt;][&lt;/span&gt;&lt;span class="s"&gt;'channels'&lt;/span&gt;&lt;span class="p"&gt;][&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;][&lt;/span&gt;&lt;span class="s"&gt;'alternatives'&lt;/span&gt;&lt;span class="p"&gt;][&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;][&lt;/span&gt;&lt;span class="s"&gt;'transcript'&lt;/span&gt;&lt;span class="p"&gt;])&lt;/span&gt;

&lt;span class="k"&gt;try&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
  &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="n"&gt;main&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
  &lt;span class="c1"&gt;# If not running in a Jupyter notebook, run main with this line instead:
&lt;/span&gt;  &lt;span class="c1"&gt;# asyncio.run(main())
&lt;/span&gt;&lt;span class="k"&gt;except&lt;/span&gt; &lt;span class="nb"&gt;Exception&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="n"&gt;e&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
  &lt;span class="n"&gt;exception_type&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;exception_object&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;exception_traceback&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;sys&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;exc_info&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
  &lt;span class="n"&gt;line_number&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;exception_traceback&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;tb_lineno&lt;/span&gt;
  &lt;span class="k"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="s"&gt;'line &lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;line_number&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s"&gt;: &lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;exception_type&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s"&gt; - &lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;e&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The Deepgram response provides many data points related to our speech, but to pull only the transcription of the file, we index &lt;code&gt;['results']['channels'][0]['alternatives'][0]['transcript']&lt;/code&gt;. Feel free to modify the response handling to generate whatever is most relevant to your needs. For the sample provided above, the result of the transcription is as follows:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Following text is a transcription of the s en of the parchment declaration of independence. The document on display in the rot the national archives Museum. The spelling and punctuation reflects the originals.&lt;/p&gt;
&lt;/blockquote&gt;
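&lt;p&gt;Because the indexing chain above will raise an exception if any level is missing (for example, on an empty or error response), a small defensive helper can be useful. This is a sketch against a mocked response shaped like the real one, not actual API output:&lt;/p&gt;

```python
# Defensive sketch: pull the transcript out of a Deepgram-style response
# dict, returning an empty string if any level of the nesting is missing.
def extract_transcript(response):
    try:
        return response['results']['channels'][0]['alternatives'][0]['transcript']
    except (KeyError, IndexError, TypeError):
        return ''

# Mocked response mirroring the indexing used above (not real API output):
mock_response = {
    'results': {'channels': [{'alternatives': [{'transcript': 'Hello world.'}]}]}
}
print(extract_transcript(mock_response))  # Hello world.
```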

&lt;h2&gt;
  
  
  Next Steps
&lt;/h2&gt;

&lt;p&gt;This is a very basic foray into transcribing your conference recordings. We heavily suggest you invest some time into expanding this to fit your specific use case, to maximize the benefit you get from these tools.&lt;/p&gt;

&lt;p&gt;As mentioned before, we suggest taking a look at what Deepgram has to offer in terms of additional features you could add on to the transcription process. For example: &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://developers.deepgram.com/documentation/features/diarize/"&gt;Diarization&lt;/a&gt; can help differentiate who is saying what when there are multiple people in a conference.&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://developers.deepgram.com/documentation/features/named-entity-recognition/"&gt;Named Entity Recognition&lt;/a&gt; and/or &lt;a href="https://developers.deepgram.com/documentation/features/keywords/"&gt;Keywords&lt;/a&gt; can help increase accuracy by providing prior information such as names and proper nouns.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The transcription of the example recording was not perfect. There are many possible reasons for this, including imperfect recording environments, confusing speech patterns, and compression. To give the transcription algorithms a better chance, one option is to use the &lt;a href="https://docs.dolby.io/media-apis/docs/enhance-api-guide"&gt;Dolby.io Media Enhance API&lt;/a&gt; to clean up the audio before sending it to transcription.&lt;/p&gt;

&lt;p&gt;If you want to automatically generate a transcription after every recording finishes, you can take advantage of &lt;a href="https://docs.dolby.io/communications-apis/docs/webhooks-overview"&gt;webhooks&lt;/a&gt; to remove the manual intervention. In fact, the &lt;a href="https://docs.dolby.io/communications-apis/docs/webhooks-events-recordingaudioavailable"&gt;Recording.Audio.Available event&lt;/a&gt; provides the recording URL within the event body itself, reducing the number of steps needed to obtain it.&lt;/p&gt;
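&lt;p&gt;As a sketch of what a webhook consumer might do, the helper below parses an event body and pulls out a recording URL. The &lt;code&gt;recording&lt;/code&gt; and &lt;code&gt;url&lt;/code&gt; field names are assumptions for illustration; check the event reference for the actual body shape:&lt;/p&gt;

```python
import json

# Hypothetical sketch of consuming a Recording.Audio.Available event body.
# The 'recording' and 'url' keys are assumptions; verify them against the
# event documentation before relying on this.
def extract_recording_url(raw_body):
    event = json.loads(raw_body)
    return event.get('recording', {}).get('url')

sample_body = '{"conferenceId": "abc-123", "recording": {"url": "https://example.com/rec.mp3"}}'
print(extract_recording_url(sample_body))  # https://example.com/rec.mp3
```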

&lt;p&gt;One final idea: if you only have the video file available for whatever reason, you can use the &lt;a href="https://docs.dolby.io/media-apis/docs/transcode-api-guide"&gt;Dolby.io Media Transcode API&lt;/a&gt; to convert it into a format accepted by the transcription service.&lt;/p&gt;

&lt;p&gt;You can find the source code file stored in a &lt;a href="https://jupyter.org/"&gt;Jupyter&lt;/a&gt; notebook at &lt;a href="https://github.com/dolbyio-samples/blog-deepgram-transcribe/tree/main"&gt;this GitHub repository&lt;/a&gt;. If you run into any issues, don’t hesitate to &lt;a href="https://support.dolby.io/hc/en-au"&gt;contact our support team&lt;/a&gt; for help, and good luck coding!&lt;/p&gt;

</description>
      <category>python</category>
      <category>hackwithdg</category>
      <category>transcription</category>
      <category>conferencing</category>
    </item>
    <item>
      <title>Exploring Content Collaboration Solutions for Content Production</title>
      <dc:creator>Griffin</dc:creator>
      <pubDate>Tue, 02 Nov 2021 01:59:41 +0000</pubDate>
      <link>https://forem.com/wingofagriffin/exploring-content-collaboration-solutions-for-content-production-1444</link>
      <guid>https://forem.com/wingofagriffin/exploring-content-collaboration-solutions-for-content-production-1444</guid>
      <description>&lt;p&gt;As a Developer Relations professional, I spend a pretty large amount of my time writing, organizing, and collecting content for publication to our blog. This is at a degree where it is necessary to have a solid project management system to keep everything in order. Having recently been a student as well, I know first hand how important it is to allow for easy collaboration in drafting content, making edits, suggestions, and sharing simple to make it less of a chore to contribute.&lt;/p&gt;

&lt;p&gt;It all came to a head one day when I was getting fed up with the solution I was working with, thinking to myself, there must be something better. We live in an extremely innovative time for technology; surely there must be a startup that solves my needs. So I went on a hunt to trial a bunch of different content collaboration solutions I found through sources like Google, Quora, AlternativeTo, the DevRel community, and more, to see if I could find that perfect solution, and maybe help some other people decide at the same time.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Requirements
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--VmFdjnUn--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/lotydjciqezlyxuuroj6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--VmFdjnUn--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/lotydjciqezlyxuuroj6.png" alt="Dolby.io Blog" width="800" height="812"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;At Dolby.io we collect content from a variety of people in multiple different media, but at the end of the day we want to keep everything centralized and consistent when possible. &lt;a href="https://dolby.io/blog/"&gt;Our blog&lt;/a&gt; is managed on the backend with &lt;a href="https://wordpress.com/"&gt;WordPress&lt;/a&gt;, so when considering options for content management it was critical that the platform played well with WordPress, to avoid as much manual intervention as possible. Because of this, there were a few "must haves" for the platforms to consider:&lt;/p&gt;

&lt;h4&gt;
  
  
  1. Exceptional handling of code blocks and inline code
&lt;/h4&gt;

&lt;p&gt;This is a developer blog after all, so there need to be proper ways to share code. Ideally the platform will offer proper syntax highlighting for the languages used and good copy-paste behavior.&lt;/p&gt;

&lt;h4&gt;
  
  
  2. Commenting and editing is simple
&lt;/h4&gt;

&lt;p&gt;As this is a collaboration platform, we want the collaboration tools to be robust. Being able to comment on sections of text or the whole document should be easy, with a bonus for tools like "Suggest Edits" for inline changes.&lt;/p&gt;

&lt;h4&gt;
  
  
  3. Intuitive project management tools
&lt;/h4&gt;

&lt;p&gt;With the scale of posts written over time and the large number of contributors, keeping track of the status and metadata of posts within the same system they are written in is a huge benefit. This helps managers figure out the status of content and see what is missing at a glance, ideally populated automatically as posts are edited.&lt;/p&gt;

&lt;h4&gt;
  
  
  4. Able to handle multiple users editing at the same time
&lt;/h4&gt;

&lt;p&gt;We don't want collisions to happen if two people are editing at the same time, as that can result in a lot of lost work and wasted time. Ideally this will be a living document that updates automatically.&lt;/p&gt;

&lt;h4&gt;
  
  
  5. Image and other media handling not tied to their servers
&lt;/h4&gt;

&lt;p&gt;As the content will all be copied into WordPress, we want to ensure media embeds don't carry over an internal reference instead of the base file. Otherwise everything can look fine to people connected to the company network but broken to the public, which is tricky to troubleshoot when you aren't anticipating it.&lt;/p&gt;

&lt;h4&gt;
  
  
  6. Cross-platform
&lt;/h4&gt;

&lt;p&gt;We have team members on multiple different platforms who should all be able to contribute. The best fit for this is a web app, which can easily be connected to an SSO solution.&lt;/p&gt;

&lt;h3&gt;
  
  
  Nice to Haves
&lt;/h3&gt;

&lt;p&gt;There are a few other bonus points that can help separate the good from the great that I considered:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Low price or a free tier that fits our needs&lt;/li&gt;
&lt;li&gt;Version history management&lt;/li&gt;
&lt;li&gt;Markdown based editor as either a default or an option alongside a rich-text editor&lt;/li&gt;
&lt;li&gt;A free trial so I can test it without needing to pay&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Immediate Outs
&lt;/h3&gt;

&lt;p&gt;With these rules, some popular tools were immediately thrown out and will not be discussed at further length.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.google.com/docs/about/"&gt;Google Docs&lt;/a&gt;, &lt;a href="https://www.microsoft.com/en-us/microsoft-365"&gt;Office 365&lt;/a&gt;, and &lt;a href="https://www.box.com/notes"&gt;Box Notes&lt;/a&gt; all do not have great code block handling, and were taken out of consideration.&lt;/p&gt;

&lt;p&gt;Tools like &lt;a href="https://github.com/"&gt;GitHub&lt;/a&gt; are enticing for purely technical teams, especially with their &lt;a href="https://docs.github.com/en/issues/organizing-your-work-with-project-boards/managing-project-boards/about-project-boards"&gt;Projects&lt;/a&gt; feature, but we work with a mixture of backgrounds that makes these products too high a barrier to entry. This doesn't stop GitHub from being a good part of the workflow for linking code samples and Gists.&lt;/p&gt;

&lt;p&gt;Similarly, cloud-based Markdown editors such as &lt;a href="https://hedgedoc.org/"&gt;HedgeDoc&lt;/a&gt;, &lt;a href="https://stackedit.io/"&gt;StackEdit&lt;/a&gt;, &lt;a href="https://hackmd.io/"&gt;HackMD&lt;/a&gt;, and &lt;a href="https://draftin.com/"&gt;Draft&lt;/a&gt;, among others, are not great choices for those who don't have Markdown experience, and are a bit more barebones than desired for project management. The same applies to LaTeX collaboration software like &lt;a href="https://www.overleaf.com/"&gt;Overleaf&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Apps like &lt;a href="https://slickdocs.com/"&gt;Slick&lt;/a&gt; are macOS only, which doesn't work for team members on Windows or other platforms.&lt;/p&gt;

&lt;h2&gt;
  
  
  Products I trialed
&lt;/h2&gt;

&lt;p&gt;In no particular order, here are the products I tried out, and my thoughts on how they worked for &lt;strong&gt;us&lt;/strong&gt;. Keep in mind that just because something might not be a good fit for our workflows doesn't mean it isn't perfect for yours.&lt;/p&gt;

&lt;h3&gt;
  
  
  1. &lt;a href="https://www.atlassian.com/software/confluence"&gt;Confluence&lt;/a&gt;
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--j1BnZNJT--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/46xziwfyosq0bfuhczzy.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--j1BnZNJT--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/46xziwfyosq0bfuhczzy.png" alt="Confluence Image" width="800" height="683"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Confluence is a tool that a vast majority of people who have worked in tech in the last 15 years are familiar with, due to it being a part of the Atlassian ecosystem, where Jira has been a centerpiece of modern tech workflows. I've used Confluence in every single job I've worked at for this reason, and I tend to have a love/hate relationship with it. It's an extremely powerful tool that allows massive customization of the spaces it has, links very well with itself and Jira, and provides third party extension support for workflows that it doesn't support out of the box.&lt;/p&gt;

&lt;p&gt;However, the implementation of many of these benefits feels dated; just because an option is there doesn't mean it was elegantly designed. For example, code blocks are supported, but require a code block macro that lives in its own container rather than alongside the inline text. Confluence also uses its own markup syntax, which makes bringing in existing Markdown not as simple as copy-pasting it into the platform. Other small quirks, like needing to host every file you upload, a functional yet quirky way of dealing with simultaneous edits, and the typical Atlassian sentiments many tech workers know, make it a compromise.&lt;/p&gt;

&lt;p&gt;We already use Confluence at Dolby.io, and I imagine many of you reading this are using it right now, so the price is already right. The best way to sum up my feelings about Confluence is "if it ain't broke, don't fix it", which fits some teams' philosophies better than others; make of that what you will.&lt;/p&gt;

&lt;h3&gt;
  
  
  2. &lt;a href="https://www.notion.so/"&gt;Notion&lt;/a&gt;
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--AS2Q01G3--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ujei124wzqp20thlfqge.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--AS2Q01G3--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ujei124wzqp20thlfqge.png" alt="Notion Image" width="800" height="500"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Notion is the relatively new big hitter on the block, with a recent surge in users that has slowly taken over the startup scene. Notion is what I would describe as the Atlassian suite had it been built with modern tech design, and it shows, from the visual aesthetic to the technical decisions made. Instead of splitting functionality across multiple products, everything is grouped into one workspace, where you can link tables, docs, kanban boards, and more together seamlessly; frankly, it feels elegant.&lt;/p&gt;

&lt;p&gt;Notion uses Markdown for its editor, though instead of keeping the raw text, it automatically renders it as you type, which makes writers' lives a bit easier, even if it isn't as intuitive as a traditional rich-text editor. Highlighting text brings up the command palette, which isn't the most discoverable behavior for new users. I would compare it to Slack's editor, which makes sense given how large Slack's user base is.&lt;/p&gt;

&lt;p&gt;Overall I was a big fan of Notion, and its growth tends to support my sentiment, as I see more and more companies adopting Notion as their main platform for nearly everything. There were issues copying images into Wordpress, but this was unfortunately fairly standard across most platforms. I will note that I have had server issues with Notion in the past, ranging from slowness to being unable to access my documents at all, though I have not experienced this in recent tests of the platform and can likely attribute it to growing pains.&lt;/p&gt;

&lt;h3&gt;
  
  
  3. &lt;a href="https://quip.com"&gt;Quip&lt;/a&gt;
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--tzZN68LI--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_66%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/rrwunx5vx9zwo23khcny.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--tzZN68LI--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_66%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/rrwunx5vx9zwo23khcny.gif" alt="Quip Animation" width="600" height="399"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Quip is Salesforce's option in the content collaboration space, with a heavy focus on live discussion; chat, collaborative edits, and Salesforce integration are its key marketing points. I've used Quip outside of this test when reviewing content from third parties, and found it fine, but nothing out of the ordinary in terms of capabilities.&lt;/p&gt;

&lt;p&gt;While it lived up to these claims in practice, overall it really does seem built for sales teams rather than content creation, as much of the project management assumes heavy Salesforce use by what I imagine are the target customers for this product.&lt;/p&gt;

&lt;h3&gt;
  
  
  4. &lt;a href="https://www.dropbox.com/paper"&gt;Dropbox Paper&lt;/a&gt;
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--iw7-U_vF--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_66%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/9l8tgusmlzvhsprda578.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--iw7-U_vF--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_66%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/9l8tgusmlzvhsprda578.gif" alt="Dropbox Paper Animation" width="494" height="399"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Unlike its major competitors, Google Docs and Box Notes, Dropbox Paper has native support for code blocks, which was a huge boon for its usability with technical content. It's another Markdown-based editor with rich-text rendering, not unlike Notion's offering. It also has commenting support and version history built in (including comment history), which makes it great as a living-document solution.&lt;/p&gt;

&lt;p&gt;This is where the benefits dwindle, though: its project management capabilities are virtually non-existent, as it is a cloud-storage solution first. Additionally, it played strangely with Wordpress, where line breaks were not preserved and code blocks weren't recognized properly. If your team already uses Dropbox as its storage provider, this is an option worth considering, though I wouldn't suggest it for anything beyond the most basic of needs.&lt;/p&gt;

&lt;h3&gt;
  
  
  5. &lt;a href="https://coda.io"&gt;Coda&lt;/a&gt;
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--0NalXdcu--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_66%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/sjjjmnr6s361sn23nlkm.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--0NalXdcu--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_66%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/sjjjmnr6s361sn23nlkm.gif" alt="Coda Animation" width="" height=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;When I asked my fellow DevRel community which tools they enjoyed using, Coda came up a few times, but with the caveat that it was "highly technical" and more like a Smartsheet or Airtable than a collaborative document solution. I agree with this take: where the previous couple of tools feel document-centric, this one felt more like a project management solution, and content wasn't managed as well as in Notion or Confluence.&lt;/p&gt;

&lt;p&gt;To be fair, this is not Coda's intent with the platform; they directly make the statement &lt;a href="https://coda.io/publishing"&gt;Docs are the new blogs&lt;/a&gt;, encouraging you to use Coda itself as the publishing platform. Coda looks like a super promising and interesting tool for an alternative blogging process.&lt;/p&gt;

&lt;h3&gt;
  
  
  6. &lt;a href="https://slab.com/"&gt;Slab&lt;/a&gt;
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--u6JkpSj2--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/fdzuumb9xn5rw4nf6478.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--u6JkpSj2--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/fdzuumb9xn5rw4nf6478.png" alt="Slab Image" width="563" height="421"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;With Slab we return to the "content-first" sub-category of solutions: another Markdown-based editor with a few bells and whistles added for ease of use. These include a content map that helps visualize the categories your documents live in, templating, and a robust commenting system.&lt;/p&gt;

&lt;p&gt;Once again, project management is limited, though built-in integrations with Jira, Miro, and others imply that Slab isn't meant to fill that role, and that you should use dedicated tools for that workflow instead.&lt;/p&gt;

&lt;h3&gt;
  
  
  7. &lt;a href="https://walling.app/"&gt;Walling&lt;/a&gt;
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--N2sAyDhi--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_66%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/hovwxd1no7bq8vp4gvbw.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--N2sAyDhi--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_66%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/hovwxd1no7bq8vp4gvbw.gif" alt="Walling Animation" width="600" height="492"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Walling is another project-management-based solution, which takes advantage of a "grid-like" workspace for organizing content between "walls" while using Markdown for the editor. This makes keeping information like publishing metadata and checklists very pleasant to work with, as they have their own dedicated space instead of needing to live inline with the actual content.&lt;/p&gt;

&lt;p&gt;Despite these features, there isn't a great table solution, nor a great way to link to other posts in the workspace. It also didn't really feel built for collaboration, as commenting is far from an intuitive process.&lt;/p&gt;

&lt;h3&gt;
  
  
  8. &lt;a href="https://whimsical.com/docs"&gt;Whimsical&lt;/a&gt;
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--HJ6-njtX--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_66%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/bsf54euypt0p7z8j8115.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--HJ6-njtX--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_66%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/bsf54euypt0p7z8j8115.gif" alt="Whimsical Animation" width="800" height="600"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Whimsical is all about keeping all content in an organized, central place. It centers on five main types of content: docs, flowcharts, wireframes, mind maps, and sticky notes, any of which can be embedded within another. However, it felt more catered to creative organization than to robust project management, as I couldn't find a way to create a workflow that auto-populated content whenever a new document was made.&lt;/p&gt;

&lt;p&gt;Small touches like live cursors and linking content within content were useful. It also had the added benefit of working well with Wordpress: images were not directly copied, but an image upload object was created in their stead, ensuring that you uploaded a proper copy of each image.&lt;/p&gt;

&lt;h3&gt;
  
  
  9. &lt;a href="https://clickup.com/"&gt;ClickUp&lt;/a&gt;
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Srhy9a9g--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_66%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/t9t7nuz17kqm6jjwu2a4.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Srhy9a9g--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_66%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/t9t7nuz17kqm6jjwu2a4.gif" alt="ClickUp Animation" width="800" height="560"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;ClickUp has been advertising heavily lately, with banners all over San Francisco, and when I tried it there was a lot going for it. ClickUp dedicates areas to both "Spaces", your tables of project-management content, and "Docs", a Markdown-based editor that organizes and shares your content.&lt;/p&gt;

&lt;p&gt;The Spaces section reminded me a lot of Airtable with a bit more going on, letting you expand each entry to see its editing history, to-do list, attachments, and more, really solidifying itself as a proper "entry" of work to be done. The docs had useful tools like author attribution and ample version history. As a major bonus, content copied extremely well into Wordpress, a major time saver in not needing to reformat every piece of content posted.&lt;/p&gt;

&lt;p&gt;Issues, however, started to appear once I tried mixing the Spaces with the Docs. Oddly, they didn't play well together: there was no great way to link a new doc as an entry in a specific space the way you can with Notion and Confluence. You can paste a doc's content into a space entry, or attach a doc to an entry as a link, but I found no way to automate this process. This meant every new doc I made also required a new entry in the space where I tracked its status, plus manually updating the metadata in that space each time the doc changed.&lt;/p&gt;

&lt;p&gt;I was close to being satisfied with ClickUp as the "Goldilocks" solution to my issues, but it just barely missed the mark for being worth migrating to. It feels like they were trying so hard to replace a large number of competitors that they missed some of the nuances that made those competitors stand out. I am optimistic that as they grow, some of these missing features will be implemented into the platform.&lt;/p&gt;

&lt;h2&gt;
  
  
  Final thoughts
&lt;/h2&gt;

&lt;p&gt;After my time trialing all of these platforms (and unsubscribing from all of their mailing lists...), I had sorted out exactly what I wanted in a content collaboration app, and where my biases landed. Clearly these solutions each have their own strengths and weaknesses, and all have a very solid number of customers using them daily.&lt;/p&gt;

&lt;p&gt;If I had to choose a favorite today (October 2021), &lt;strong&gt;Notion&lt;/strong&gt; would be my overall winner: if I were starting completely from scratch, Notion filled the majority of my needs and offered a user experience I was excited to work in. It was easy to organize, edit, collaborate, and more, in a way that facilitated as many automated processes as possible.&lt;/p&gt;

&lt;p&gt;However, I am not starting from scratch, and there are tradeoffs to migrating platforms. For that reason, we have decided to stick with Confluence: all of our content is already there, we already pay for it, and it doesn't require our team members to sign up for a new account anywhere. A bit anti-climactic, but none of the solutions we found offered enough beyond Confluence to be worth the resource investment of migrating.&lt;/p&gt;

&lt;p&gt;As a reminder, these solutions were only tested on how well they played with Wordpress. While Wordpress is still extremely popular, there are many other growing and &lt;a href="https://jamstack.org/headless-cms/"&gt;popular CMS solutions&lt;/a&gt; out there, like &lt;a href="https://www.sanity.io/"&gt;Sanity&lt;/a&gt;, &lt;a href="https://strapi.io/"&gt;Strapi&lt;/a&gt;, and &lt;a href="https://www.contentful.com/"&gt;Contentful&lt;/a&gt;, that might play better with some of these applications, especially given Wordpress' tendency to be a bit dated in how it interacts with more modern content solutions.&lt;/p&gt;

&lt;p&gt;I suggest taking advantage of the free trials these platforms provide, though I hope this rundown gave some good context on the pros and cons of each so you can see whether there's a good fit for your team. Remember that my team's needs might not reflect yours, and these products are all actively being developed: missing features can be added, and beloved features can be sunset. I'd love to hear in the comments what solutions you are using, and whether there are any promising contenders I may have missed!&lt;/p&gt;

</description>
      <category>markdown</category>
      <category>content</category>
      <category>collaboration</category>
      <category>blog</category>
    </item>
  </channel>
</rss>
