<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Em Lazer-Walker</title>
    <description>The latest articles on Forem by Em Lazer-Walker (@lazerwalker).</description>
    <link>https://forem.com/lazerwalker</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F279624%2Ffe461835-3dcc-4370-99f3-ce5154d7bac4.png</url>
      <title>Forem: Em Lazer-Walker</title>
      <link>https://forem.com/lazerwalker</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/lazerwalker"/>
    <language>en</language>
    <item>
      <title>Adding a license to your open source art project</title>
      <dc:creator>Em Lazer-Walker</dc:creator>
      <pubDate>Tue, 14 Jun 2022 17:44:21 +0000</pubDate>
      <link>https://forem.com/lazerwalker/adding-a-license-to-your-open-source-art-project-2ma1</link>
      <guid>https://forem.com/lazerwalker/adding-a-license-to-your-open-source-art-project-2ma1</guid>
      <description>&lt;p&gt;As someone who spends a lot of time straddling the world of “professional” corporate open-source software and digital artists making weird new tech, I encounter something regularly that makes me sad. &lt;/p&gt;

&lt;p&gt;Someone will release a cool art project they’ve made, or a creative tool, or some other awesome bit of creativity! They're excited to put it out in the open so that others can use what they've made and learn from their process! That’s great! However, they didn’t include an open-source license. That’s not so great!&lt;/p&gt;

&lt;h2&gt;
  
  
  Why do you need a license?
&lt;/h2&gt;

&lt;p&gt;By default, if you put source code (or art assets) up on the Internet, that does not mean that anyone is allowed to use it. You still own the copyright to that code, and anyone using your code is committing copyright infringement. In order for someone to legally be able to use your IP, you need to explicitly grant them the right to do so. In practice, if you publish your project's source code and assets online without a license, you’re unlikely to sue someone who uses those assets, but on paper that’s still infringement.&lt;/p&gt;

&lt;p&gt;Regardless of whether your goal is restricting the use of your source code to specific uses or just telling people “hey, do whatever you want with this”, being straightforward about your intentions is valuable. Telling someone explicitly what you do or don’t want them to do with what you’ve made, instead of leaving them to guess what rights they have, will lead to people feeling more comfortable looking at and potentially using your work!&lt;/p&gt;

&lt;p&gt;Additionally, if your project accepts contributions from the community and you receive contributions that don’t include a license, you technically don’t have the right to use those changes! A model of “this is the way my code is licensed, and anyone contributing is agreeing to the same terms” is a valuable way to protect yourself.&lt;/p&gt;

&lt;p&gt;The solution to all of these ambiguities is to add a license: a bit of legalese in your git repo that explains the legal IP rights that anyone visiting the repo has to the code and assets stored in it.&lt;/p&gt;

&lt;h2&gt;
  
  
  What license should I choose?
&lt;/h2&gt;

&lt;p&gt;There's sadly no one "correct" answer for what license you should use. Even though this is relatively cut-and-dried for e.g. open-source infrastructure projects built by tech startups, it gets fuzzier for art projects like these, and there is a lot of wiggle room depending on your goals.&lt;/p&gt;

&lt;p&gt;I've got a few different recommendations below, but first it's worth noting that we're really talking about two different things here. On the one hand, a license explicitly grants legal IP use rights to anyone who comes across your repo, and makes it legal for them to do certain things with your IP. On the other hand, you are presumably trying to communicate intent about how you as a creator want people to be able to use your work. &lt;/p&gt;

&lt;p&gt;In an ideal world, these two things are one and the same! In practice, OSS licenses are rather blunt tools, and it's useful to separate out "what am I communicating about my intent?" from "what rights am I legally granting?".&lt;/p&gt;

&lt;p&gt;That said, here are three(ish) good options to consider, with the caveat that I am not a lawyer and this is not legal advice.&lt;/p&gt;

&lt;h2&gt;
  
  
  1. An OSS license and a Creative Commons license
&lt;/h2&gt;

&lt;p&gt;A tricky thing is that "open source licenses" are written with code in mind, and it's a bit ambiguous how they apply to non-code things like art assets. A good rule of thumb is that if a piece of IP can be versioned the same way one would use source code, an open source license might be a good fit for it -- this means that things like hardware design files from software like KiCad can probably be safely protected under an OSS license, but 2D images or music might benefit from other licensing options.&lt;/p&gt;

&lt;p&gt;Outside of a few special cases (e.g. the &lt;a href="https://scripts.sil.org/cms/scripts/page.php?site_id=nrsi&amp;amp;id=ofl"&gt;SIL Open Font License&lt;/a&gt; for fonts), the gold standard for licensing binary assets is &lt;a href="https://creativecommons.org/"&gt;Creative Commons&lt;/a&gt;, who maintain a large number of licenses allowing for different types of use. &lt;/p&gt;

&lt;p&gt;The inverse applies as well: because licenses like Creative Commons focused on artistic works don't care about the difference between binary distribution and source distribution, this makes them relatively unsuitable to use for covering code (even Creative Commons themselves &lt;a href="https://creativecommons.org/faq/#can-i-apply-a-creative-commons-license-to-software"&gt;don't recommend it&lt;/a&gt;).&lt;/p&gt;

&lt;p&gt;A common technique is to use multiple licenses within the same open source repository: all code is under one OSS license, and all non-code assets are under a Creative Commons license. This is a great option to remove ambiguity for projects that include both code and not-code.&lt;/p&gt;

&lt;h3&gt;
  
  
  Choosing a Code License: MIT License vs GPL
&lt;/h3&gt;

&lt;p&gt;There are a large number of open-source licenses you can use to license your source code. I want to focus on two today, the MIT License and the GPL. Both are &lt;em&gt;extremely&lt;/em&gt; popular, arguably more so than any other open-source licenses (they're the two licenses GitHub points people towards with their &lt;a href="https://choosealicense.com"&gt;Choose a License&lt;/a&gt; site). There are many popular licenses similar to the MIT license, but comparing MIT and GPL is useful since they're examples of two very different philosophies.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The MIT License&lt;/strong&gt; basically says "you can do whatever the heck you want with my code, as long as you give me credit". &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The GPL&lt;/strong&gt; basically says "you can do whatever you want with my code, as long as any changes you make to my open-source code are themselves made open-source and licensed under the GPL".&lt;/p&gt;

&lt;p&gt;The MIT license grants "freedom" in the sense that anyone can use your code to build a proprietary commercial product. The GPL grants "freedom" in the sense that, even though it restricts what people can do with your code, it does so in a way that leads to more open-source software and thus net more "freedom" for end users. This concept of freedom the GPL represents is often referred to as "copyleft" (as opposed to "copyright").&lt;/p&gt;

&lt;p&gt;Most open-source software maintained by large tech companies is MIT or a similarly-philosophically-aligned license (Apache and BSD are two others you see a lot), because the economic model behind that sort of OSS is generally "a bunch of tech companies all contribute to this thing in order to benefit from it in their closed-source commercial products". These corporations would not consider being forced to open source all of their commercial products that happen to depend on GPL'd libraries to be a reasonable outcome, so this sort of corporate OSS tends to default to more "liberal" licenses that allow closed-source use.&lt;/p&gt;

&lt;p&gt;There's no clear correct answer. For-profit corporations find financial benefit in making licenses like MIT more common, but that doesn't mean that's the right choice for you. &lt;/p&gt;

&lt;p&gt;It's also awkward and worth noting that the GPL's author and main evangelist for decades is an alleged sex pest who (among many other things) resigned from his position at MIT after public comments defending Jeffrey Epstein. Regardless of how you feel about the ideas and ideals underlying the GPL and the Free Software Foundation, there's a lot of cultural and political baggage there. I don't think that's a reason to avoid the GPL, but it's worth being aware of.&lt;/p&gt;

&lt;p&gt;It's also worth noting here that there are weird politics around the term "open source". In order for a license to be considered "open source" by the Open Source Initiative, there are a number of criteria the license must meet. One of those criteria is that it must allow equal use to everyone. You won't see an "open source" license that restricts commercial use, because that by definition makes it not capital-O Open Source. I personally think this is bad!&lt;/p&gt;

&lt;h3&gt;
  
  
  Various Creative Commons Licenses
&lt;/h3&gt;

&lt;p&gt;As mentioned, Creative Commons is a non-profit that maintains a half-dozen different licenses intended to be used for creative works, rather than source code.&lt;/p&gt;

&lt;p&gt;These licenses span more or less the same ideological spectrum as MIT vs GPL, but with a bit more fine-grained choice around what restrictions are placed on what people can do with your work. &lt;/p&gt;

&lt;p&gt;The &lt;a href="https://creativecommons.org/choose/"&gt;Creative Commons License Chooser&lt;/a&gt; is a good way to look at your options and how these differ. All CC licenses require attribution (except for CC0, but I'll talk about that later). You have a choice of whether or not you allow commercial use of your work, as well as a choice of whether adaptations of your work are allowed to be shared, are not allowed to be shared, or are required to be licensed under the same terms as your work (similar to the GPL).&lt;/p&gt;

&lt;p&gt;While restricting commercial use in derivative works may sound extremely appealing, Creative Commons themselves are philosophically opposed to it, favoring instead what they call "&lt;a href="https://creativecommons.org/share-your-work/public-domain/freeworks"&gt;free cultural works&lt;/a&gt;". Their viewpoint is basically "eh, we don't want to allow this, but it's better that works be CC Non-Commercial than not have any CC license at all". In particular, a concern I would have is how loosely "commercial use" is defined, and making sure that you’re actually disallowing the use cases you think you are.&lt;/p&gt;

&lt;p&gt;In a lot of situations where I'd be tempted to add a non-commercial clause, I'd instead consider adding a share-alike clause. If what you want is for people to be able to freely remix your work, letting people release commercial products based on it but requiring them to also license those works under Creative Commons feels like a good compromise -- it's philosophically similar to the GPL, but without a lot of the legal or cultural baggage. &lt;/p&gt;

&lt;h3&gt;
  
  
  Do you lean on the side of leniency or strictness?
&lt;/h3&gt;

&lt;p&gt;Deciding whether to use an MIT-like license or a copyleft license (or whether or not your CC license should have a share-alike clause) is a tough decision.&lt;/p&gt;

&lt;p&gt;In my own work, I usually explicitly want to encourage others to make their improvements to my work public, but I often find that &lt;em&gt;requiring&lt;/em&gt; that unintentionally limits some uses you'd like to enable. As a common example, if you would like people to be able to read your code as a reference, the GPL will allow that, but it technically forbids someone from copying a three-line snippet of code and using it in their own project without making their entire project open-source under the terms of the GPL as well. That may be what you want, but in many cases I personally find that to be a bit excessive. &lt;/p&gt;

&lt;p&gt;This is perhaps less likely to happen with a CC share-alike license (just because it's less likely someone would pull out and use a tiny fraction of an art asset), but your license virality being too overbearing is still a concern in either case.&lt;/p&gt;

&lt;p&gt;Of course, people may still informally use your code in that way anyway, and you may be okay with that (read: not pursue legal action for violation of the terms of the GPL or CC). But choosing a more stringent legal position than you intend to perhaps enforce is a conscious choice you're making in that situation.&lt;/p&gt;

&lt;h3&gt;
  
  
  Putting it all together
&lt;/h3&gt;

&lt;p&gt;It's worth emphasizing again the distinction between legal IP restrictions and communicating your intent. &lt;/p&gt;

&lt;p&gt;I personally lean towards being more lenient in what I allow legally (read: default to using an MIT-style license and a CC license without non-commercial or share-alike clauses), and informally expressing in the project documentation what I do or don't want people to do. &lt;/p&gt;

&lt;p&gt;I find explicitly writing "hey, I know you're allowed to do whatever you want with this, but PLEASE don't do X, Y, or Z" is a nice middle-ground of something that isn’t strictly legally enforceable, but communicates to reasonable human beings what I want.&lt;/p&gt;

&lt;p&gt;It's also possible that your personal ideological leanings would rather err on the side of unintentionally restricting valid use cases rather than unintentionally enabling use cases you don't want to allow. That's okay too!&lt;/p&gt;

&lt;p&gt;So, uh, what does this all mean? If I were choosing a license for this repo, and wanted to go the "OSS + CC" route, I'd probably pick MIT, either CC BY or CC BY-SA ("only attribution required" or "attribution and share-alike"), and put a paragraph in the README explaining my intent. But any of the other things mentioned here are reasonable options! &lt;/p&gt;

&lt;h2&gt;
  
  
  2. &lt;a href="https://anticapitalist.software/"&gt;The Anti-Capitalist Software License&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;This is a unique license in that it adds restrictions on who can use your software. You can do whatever you want with the code, as long as you're an individual, a non-profit, an educational institution, or a worker-owned co-op, with a carve-out banning military and law enforcement use.&lt;/p&gt;

&lt;p&gt;I'm inclined to view this license through the lens of performance art. While this license probably most directly aligns with how I personally would want a lot of my projects to be used, I've read a number of arguments claiming it almost certainly would not hold up to litigation, and would basically equate to including no license at all (with the repeated caveat that I am not a lawyer, and I additionally suspect many/most of the people posing those arguments are also not lawyers). It's possible this is unjustified fear, uncertainty and doubt; the current tech industry culture heavily stigmatizes any software license like this that does not conform to the "proper" definition of "open source".&lt;/p&gt;

&lt;p&gt;Along those lines, this WILL incidentally act as a deterrent to many larger tech companies using your work, as management will look at a non-standard license like this and say "the time and effort saved by using this code is worth less than the billable hours for the legal team to vet this license". This is possibly a positive for you.&lt;/p&gt;

&lt;p&gt;This license is on the extreme end of tradeoffs: it is an incredibly strong socio-political statement about how you wish your code to be used, but realistically is likely to leave you in the same legal IP situation as if you had not included a license at all. Using this license does have the positive benefit of normalizing licenses like this, which could eventually lead to future iterations of the idea that are more proven to be legally-enforceable.&lt;/p&gt;

&lt;h2&gt;
  
  
  3. &lt;a href="https://creativecommons.org/share-your-work/public-domain/cc0/"&gt;Creative Commons 0&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;A final option is to place your work in the public domain: you relinquish all rights to it, and anybody can use your IP for whatever purposes they want. This is "chaos mode" -- it's definitely the simplest of all these options, and the most easily understood by people who don't want to have to become armchair IP lawyers. But it's also the least restrictive, for better and for worse. It's specifically worth calling out that people are not required to provide credit or attribution.&lt;/p&gt;

&lt;p&gt;If your intent is to put your project in the “public domain”, you should still specifically apply the CC0 license. Many countries do not have a legal mechanism (or only have limited ways) by which a living IP holder can dedicate their work to the public domain. CC0 is explicitly written to work around this, and allows authors to explicitly waive all possible copyright protection and IP rights they have, to the extent possible in their jurisdiction.&lt;/p&gt;

&lt;h2&gt;
  
  
  Okay, so how do I actually add a license to my project?
&lt;/h2&gt;

&lt;p&gt;So you've fretted and fussed and finally picked a license or set of licenses that you think are right for your project. What do you do now?&lt;/p&gt;

&lt;p&gt;If you just have a single license, this is simple. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;For a code license&lt;/strong&gt;: in your project's repository (I'm assuming you're distributing this project via a git repo, but this could also just be in the folder root if you're e.g. distributing a zip file), include a file called LICENSE (or LICENSE.md, or similar) containing a copy of your license. &lt;/p&gt;

&lt;p&gt;A bonus improvement is to include the license as a header comment in every single code file. This makes it somewhat less likely that your code will get (unintentionally or intentionally) taken out of context and have the licensing info removed from it. This is best practice for larger corporate OSS, but if you don't have good technical infrastructure to do that automatically on your small art project I wouldn't worry too much about it.&lt;/p&gt;
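&lt;p&gt;As a rough sketch of what such a header can look like (the project name, year, and author here are all placeholders, and MIT is just an example choice):&lt;/p&gt;

```python
# SPDX-License-Identifier: MIT
# Copyright (c) 2022 Your Name
#
# This file is part of an example art project. See the LICENSE file
# in the repository root for the full license text.

def greet() -> str:
    """Placeholder code so the header has something to sit above."""
    return "hello"
```

&lt;p&gt;The &lt;code&gt;SPDX-License-Identifier&lt;/code&gt; line is a widely used machine-readable convention, which helps automated license scanners pick up your intent even if a file travels without the rest of the repo.&lt;/p&gt;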

&lt;p&gt;&lt;strong&gt;For a Creative Commons license&lt;/strong&gt;: In whatever public documentation your project has (a README, a marketing website, etc.), note the CC license prominently. Creative Commons has great image buttons you can use.&lt;/p&gt;

&lt;p&gt;If you have a mixed license, it gets slightly hairier. Combining those two approaches — a LICENSE file and noting the CC license — is broadly right, although you want to be explicit about which files get which license applied to them. This is simpler if your assets are divided up such that you can cleanly say "this specific set of folders is CC licensed, and the rest are using this other code license". There really isn’t a legal standard here along the lines of “you MUST indicate things this way or nothing matters”; your goal is primarily making your licensing intent clear and unambiguous.&lt;/p&gt;
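&lt;p&gt;For example, a README "License" section for a mixed-license project might read something like this (the folder names and license choices are hypothetical):&lt;/p&gt;

```markdown
## License

All code in this repository is licensed under the MIT License;
see the LICENSE file in the repository root.

All non-code assets under assets/ (images, audio, and fonts) are
instead licensed under CC BY-SA 4.0; see assets/LICENSE.
```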

&lt;h2&gt;
  
  
  ...and that's it?
&lt;/h2&gt;

&lt;p&gt;If you have clear documentation in your repository about which files are licensed in which ways, that's all there is to it! Your project now has a license, and people can feel more comfortable using your work safely, knowing that they're respecting your wishes and intentions!&lt;/p&gt;

&lt;p&gt;This of course isn't a silver bullet. If someone willfully infringes, your options are still either to take them to court (potentially expensive and time-consuming!) or just let it be. But having an explicit license minimizes the chance you'll get into that sad situation, and also potentially makes things easier if you do end up there.&lt;/p&gt;

</description>
      <category>opensource</category>
      <category>art</category>
      <category>github</category>
    </item>
    <item>
      <title>So you want to run a virtual event</title>
      <dc:creator>Em Lazer-Walker</dc:creator>
      <pubDate>Tue, 10 May 2022 16:59:48 +0000</pubDate>
      <link>https://forem.com/lazerwalker/so-you-want-to-run-a-virtual-event-2dfd</link>
      <guid>https://forem.com/lazerwalker/so-you-want-to-run-a-virtual-event-2dfd</guid>
      <description>&lt;p&gt;If you're an event organizer right now, you're in a tough spot. You're desperate to get back to running an in-person event, but you know it isn't quite safe yet. &lt;/p&gt;

&lt;p&gt;Maybe you've seen in-person events in your community lead to large-scale COVID outbreaks or even deaths, and that terrifies you. &lt;/p&gt;

&lt;p&gt;Maybe your planning team disagrees on what an acceptable level of COVID safety is, or you don't have the resources to provide what you believe to be adequate safety.&lt;/p&gt;

&lt;p&gt;Even if you could run an event in-person, you're worried about losing the accessibility of a fully-remote event for attendees who are immunocompromised, or who live in far-off places and can't reasonably travel.&lt;/p&gt;

&lt;p&gt;Whatever the reason, you need an online event. But you're also justifiably worried, because online events tend to suck. And now that people in a lot of the world are seeing their friends in person and spending time in places that aren't their homes, the prospect of spending a day or a weekend or a week plastered to a Twitch stream seems even more unpleasant than it did in 2020. &lt;/p&gt;

&lt;p&gt;How the heck do you actually throw a virtual event worth attending?&lt;/p&gt;

&lt;h2&gt;
  
  
  I don't have the answers
&lt;/h2&gt;

&lt;p&gt;Unfortunately, this isn't a "how-to" guide. If I could give you a concrete checklist for how to throw an online event that didn't suck, I would!&lt;/p&gt;

&lt;p&gt;It's also hard to give concrete suggestions since it's so dependent on your conference's audience, and how "experimental" a space is appropriate for you. I've attended exciting, creative, community-driven in-person conferences hosted in found spaces that needed major modifications to work as event venues (I love to tell the story of the &lt;a href="https://amaze-berlin.de"&gt;Berlin games festival&lt;/a&gt; where the two talk tracks were "in the karate dojo" and "on-stage at the combination nightclub and pool"). I've also attended plenty of corporate conferences in hotel ballrooms or city-owned convention centers. &lt;/p&gt;

&lt;p&gt;The former has a lot of leeway to get experimental with online events, while the latter is going to have a harder time convincing attendees to try something different. Knowing your audience is key. A lot of the advice in this article is more focused on people who &lt;em&gt;do&lt;/em&gt; have room to explore, but even if you're restricted to using a relatively-buttoned-up turnkey enterprise event platform, there are still usually ways you can nudge the social design of your space one way or another.&lt;/p&gt;

&lt;p&gt;If it's not obvious, this article is also focused more on events that look like conferences or festivals — the sort of event where an in-person version would typically be centered around one or more tracks of talks. Other types of events, like parties, face all of the same conceptual design challenges, but a lot of the specific tools at your disposal may be different.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The point of this piece is really to convey that designing a virtual event is, in fact, a design problem, and a deeply complex and largely unsolved one at that.&lt;/strong&gt; I'm hoping I can point you in the right direction for some of the questions you should be asking, and some of the design considerations that should be front-of-mind as you start to create an event for your community.&lt;/p&gt;

&lt;h2&gt;
  
  
  The first question: do you actually want to throw a live event?
&lt;/h2&gt;

&lt;p&gt;A question I ask a lot of event organizers is "why are you producing a livestream instead of a YouTube playlist?" &lt;/p&gt;

&lt;p&gt;GitHub's most recent &lt;a href="https://githubuniverse.com/"&gt;GitHub Universe conference&lt;/a&gt; completely bypassed the idea of live talks: it being a "two-day event" simply meant that each day featured a new YouTube playlist of talks to watch asynchronously, plus a fifteen-minute "keynote" previewing that day's talks.&lt;/p&gt;

&lt;p&gt;There are benefits to running a live event. A lot of people won't go out of their way to watch a half-hour talk on YouTube, even if it's directly applicable to their interests or professional development. A live conference feeling like a "happening" or a "moment" can give people the push they need to make time in their busy schedule for something they would enjoy and benefit from.&lt;/p&gt;

&lt;p&gt;On the other hand, it's a big ask to get people to attend a synchronous virtual conference, especially as more in-person activities become allowed. People feel guilty taking time off work to attend a weekday virtual conference, and don't really disconnect from work when they do. For weekend events, they're hesitant to spend their limited free time staring at a computer screen instead of doing, well, literally anything else. This is especially true if your event has little to no meaningful networking or social interaction driving a sense that people will miss out on something valuable if they just catch the talks on YouTube after the fact.&lt;/p&gt;

&lt;p&gt;There's no single correct answer here, but whether you throw a live virtual event or something more asynchronous, be sure to have a coherent reason why that's the correct choice for your community and attendees.&lt;/p&gt;

&lt;h2&gt;
  
  
  Your event needs to be an event
&lt;/h2&gt;

&lt;p&gt;Why do people attend conferences or other events? They might say it's the talks, or the socializing and networking opportunities in the "hallway track", or for a certain class of professional events they might be honest and say "it's a free work-sponsored vacation".&lt;/p&gt;

&lt;p&gt;Any reason is valid. But the real underlying reason people attend your event, instead of finding another way to achieve those goals, is precisely because it's an "event". It's the act of traveling to an in-person conference that allows your brain to shift into a new headspace where you're receptive to new ideas or meeting new people. It's why companies run "off-sites" for big-picture brainstorming: that context shift is essential.&lt;/p&gt;

&lt;p&gt;It's hard to get this for an event that people attend from the comfort of their own home. It's even more difficult when you're asking people to use software — Discord, Slack, Zoom, Twitch — that they already use in their day jobs and social lives. There's no sense of moving to a new space. It's no wonder people find it hard to disconnect from their work when they're sitting at their work desk and using their work software!&lt;/p&gt;

&lt;h2&gt;
  
  
  How do you create that sense of place?
&lt;/h2&gt;

&lt;p&gt;This is the million-dollar question.&lt;/p&gt;

&lt;p&gt;With &lt;a href="https://roguelike.club"&gt;Roguelike Celebration&lt;/a&gt;, a game design conference I run, our answer was a &lt;a href="https://blog.lazerwalker.com/2020/10/22/virtual-events-and-game-design.html"&gt;custom-built social space&lt;/a&gt; blending design elements of chat apps like Discord and Slack with MUDs, the text-based precursors to MMORPGs. Consistent feedback we've gotten across two years of events is that, despite being a text-based chat space, attendees feel a sense of physical presence. They describe coming back year after year as having the same feeling as going back to an in-person venue, and they tend to use a lot of the same language that VR enthusiasts use around "presence" despite it being a text-only space. That's really cool!&lt;/p&gt;

&lt;p&gt;I wouldn't necessarily start with trying to build your own custom event platform like we did, even if you have the resources. Instead, get creative: what existing tech platforms can you repurpose for your event that will help it feel special? &lt;/p&gt;

&lt;p&gt;Online spatial chat platforms like &lt;a href="https://gather.town"&gt;Gather Town&lt;/a&gt; or &lt;a href="https://skittish.com"&gt;Skittish&lt;/a&gt; can be nice, but require a lot of customization (I'll talk more about them later!). A lot of event organizers I've spoken with lately are interested in repurposing existing online spaces — like, say, finding a free MMO or online game that can serve as a social space — which I think is a really interesting path to explore, even if I haven't seen concrete success there. &lt;/p&gt;

&lt;p&gt;Finding the right "found space" can also be tricky. I've seen events flop in spaces like Second Life, as anything that's 3D rather than 2D will have accessibility barriers to basic things like navigation unless somebody is already familiar with 3D games. I've seen events held in Roblox fall over from a technical standpoint: its networking is generally robust, but it has scaling issues if you want more than a dozen or so people in your space at once. I mention these not to say "X tool is bad!", but more to emphasize how important it is to test a potential space to the extent you can before committing to it.&lt;sup id="fnref1"&gt;1&lt;/sup&gt;&lt;/p&gt;

&lt;p&gt;If you have access to &lt;em&gt;some&lt;/em&gt; development resources, a nice middle-ground might be building on top of existing open-source projects. Roguelike Celebration's social space is &lt;a href="https://github.com/roguelike-celebration/azure-mud"&gt;on GitHub&lt;/a&gt;, as is &lt;a href="https://github.com/molleindustria/likelike-online"&gt;LIKELIKE Online&lt;/a&gt;, a 2D pixel art gallery space. If you go down this route, it's worth emphasizing how much custom-built event spaces are, in fact, built for their specific contexts, and how well they could work for your event depends on how much context overlap there is. I don't know how well Roguelike Celebration's setup would work for an audience that doesn't get immediately excited at the idea of a text-only world. Similarly, LIKELIKE does a phenomenal job of replicating the vibes of an art gallery opening party, where attendees can vibe with the energy of the space and briefly say hi to people they know, but the way it "feels" like a loud noisy party means it'd be a poor fit for a networking-focused "hallway track" where you want people to be able to have in-depth conversations with each other.&lt;/p&gt;

&lt;h3&gt;
  
  
  Case Study: Deserted Island DevOps
&lt;/h3&gt;

&lt;p&gt;It's worth noting that you can create this sense of place without necessarily providing interactivity. I want to talk about a really clever and resourceful example: &lt;a href="https://desertedislanddevops.com/"&gt;Deserted Island DevOps&lt;/a&gt;. First held in April 2020, at the start of the pandemic and the height of the popularity of Animal Crossing: New Horizons for the Nintendo Switch, it's a conference that "took place in Animal Crossing".&lt;/p&gt;

&lt;p&gt;Now, if you've played Animal Crossing, you know that "a conference in Animal Crossing" would be unwieldy and impractical. Animal Crossing has great multiplayer features, but it allows a maximum of 8 people on the same island, and chat options are limited. &lt;/p&gt;

&lt;p&gt;In practical terms, Deserted Island DevOps was a "boring" virtual conference: it was little more than a Twitch stream and an associated Discord. The "Animal Crossing" piece came from the Twitch stream itself: a video feed of an Animal Crossing instance, showing the speaker's Animal Crossing avatar (they had full control over their avatar's appearance and could trigger custom emote animations) and their slides composited into the scene.&lt;/p&gt;

&lt;p&gt;The event was a huge success, with nearly 12,000 live views over the course of the weekend, and a lot of buzz on social media for how innovative their approach was. I think a huge take-away from Deserted Island DevOps is that creating a sense of &lt;em&gt;place&lt;/em&gt; doesn't necessarily mean creating a novel interactive space people can navigate. Sometimes sufficiently evocative theming can be enough!&lt;/p&gt;

&lt;h2&gt;
  
  
  To videochat or not to videochat?
&lt;/h2&gt;

&lt;p&gt;A thing I have learned is that people have drastically different opinions about videochat. Some people desperately crave the ability to get literal "face time" with other event attendees. Others can't stand the idea of having to turn on their webcam to talk to strangers. Similarly, some people strongly prefer audio-only chat to videochat, while others can't stand the loss of social cues that comes from having audio but not video. I'd urge you to make sure the design of your virtual event caters equally to all of these groups.&lt;/p&gt;

&lt;p&gt;Videochat is a high-trust activity. People in videochat are more vulnerable to harassment than text chat, and moderation is also trickier when you don't have a perfect record of everything said. Different communities and sets of conference attendees have different levels of trust; while there are plenty of trust and safety tools you can use to mitigate issues once your event has started, I'd urge you to both think about how to cultivate as healthy a community as you can before your event begins, and also approach designing videochat at your event with an honest eye towards where your event's community falls on the trust spectrum.&lt;/p&gt;

&lt;p&gt;In the past, I assumed the ideal was an event where attendees could freely choose to consensually escalate from text chat to audio or video at any point during the conference. For Roguelike Celebration 2021, we spent a great deal of time building our own exceptionally thoughtful custom videochat system that did just that, seamlessly integrating videochat and text chat in our social space. To be blunt, our attempt was broadly a failure, with the focused exception of using our videochat for post-talk speaker Q&amp;amp;A breakout rooms. &lt;/p&gt;

&lt;p&gt;Humans instinctively prioritize paying attention to people with higher-fidelity modes of communication. I don't know how you design videochat that doesn't give preferential treatment to video users over audio or text users. This is definitely true if you're using an off-the-shelf event platform or videochat tool, but even with a custom-built solution it's an unsolved problem. And that's even before tackling issues of Zoom fatigue; even people who love videochat will get tired if they stay on video for too long.&lt;/p&gt;

&lt;h3&gt;
  
  
  Videochat as a focused tool
&lt;/h3&gt;

&lt;p&gt;Instead, I'd encourage you to think of videochat as a focused tool you can apply at specific times during your event. A lot of people who might not want to spend 8 hours on a call would probably love to spend a focused hour or two with a higher-fidelity chance to connect with other attendees. If these sessions are time-limited and opt-in, it's also easier to feel comfortable prioritizing videochat users over others in these sessions.&lt;/p&gt;

&lt;p&gt;One specific hard problem to design for is allowing people to "preview" conversations. In real life, it's easy for someone to hover at the outside of a conversation circle before choosing to engage or find a different conversation. Most online tools do a poor job of recreating that dynamic — joining a Zoom call and then leaving ten seconds later feels socially awkward, as does walking up to a chat circle in a tool like Gather Town and then leaving. There are ways to minimize that awkwardness with either technical or social solutions, but it's a hard problem you need to think about and actively solve for.&lt;/p&gt;

&lt;p&gt;There are many ways to design focused video time. At Roguelike Celebration, we do "unconferencing" sessions, where people can self-select into Zoom rooms based on conversation topics proposed by attendees. I've also seen a lot of success with "speed dating"-style breakout rooms,  where people are placed in random conversation groups for a preset period of time before being automatically split up and reformed into new groups. &lt;/p&gt;

&lt;p&gt;I don't have a lot of concrete advice around unconferencing other than having decent tools to allow proposing and upvoting topics (and having an answer to the "how to poke your head into a room" problem), but it is a model that works really well for us.&lt;/p&gt;

&lt;h3&gt;
  
  
  Videochat breakout rooms / "speed dating" / "coffee chats"
&lt;/h3&gt;

&lt;p&gt;If you want to try a "speed dating" or "coffee chat" model, that's typically easy to implement using the "breakout room" functionality of most commercial videochat software. The rigid, predictable structure makes it convenient to make new people, as it hand-waves over a lot of the awkwardness of how to start a conversation with someone in the first place. &lt;/p&gt;

&lt;p&gt;The tricky thing is this pattern really requires you to have a high-trust environment, even more than other videochat techniques. Because you can't simply leave a conversation while it's happening, it's easy for underrepresented groups to get frustrated if they feel like they aren't being listened to. You probably won't receive formal code of conduct reports; people will just quietly stop participating. While there are social tools you can use to try to shape these conversations to be healthy, the success of a session like this really hinges on your attendees being kind, respectful, and generous. If your community is on the lower end of the trust spectrum, this specific technique might not be right for your event.&lt;/p&gt;

&lt;p&gt;If you do think that sort of "coffee chat" is appropriate for your community, I'd recommend groups of 3-4 people, 5-minute chats, and some sort of formal or informal structure for how to break the ice in each chat. 1-on-1 chats and longer durations can both work in higher-trust environments, but most of the time you want to aggressively optimize for mitigating just how miserable it is to be stuck in a bad conversation.&lt;/p&gt;

&lt;p&gt;Ice-breakers depend a lot on the context. I've seen professional networking sessions use a convention where each person starts by giving a 30-second intro to themselves, and I've seen people doing this in context of a creative retreat naturally lead with "so, what are you working on?". &lt;/p&gt;

&lt;h3&gt;
  
  
  Facilitators: you probably need them
&lt;/h3&gt;

&lt;p&gt;Regardless of what your videochat structure looks like, if you plan to have more than 5 or so people in a single videochat, you probably want facilitators to help keep the conversation moving. In work contexts, someone usually runs the meeting and plays this role; in social Zoom chats with friends, someone probably informally picks up this mantle. At an event where you are hoping that strangers or pseudo-strangers will talk to each other, making sure that you have a volunteer or staff member to play this role (instead of hoping an attendee might) is a good way to stack the deck and make sure things go well.&lt;/p&gt;

&lt;p&gt;This obviously depends a lot on your format. If you're using a spatial chat tool, it might be awkward to have people whose explicit job is to go around and start conversations. But if you're going to invest in making videochat work for your event, it's worth thinking about how having a few human facilitators can help ensure the experience is a positive one.&lt;/p&gt;

&lt;h2&gt;
  
  
  How to create social interaction
&lt;/h2&gt;

&lt;p&gt;At in-person events, there's a wide range of social interaction people might take part in, with varying levels of interactivity or room for freeform conversation. Sitting down to discuss Serious Topics with someone one-on-one is different from making small talk over the cheese table is different from playing a game of cards where the majority of the space in the conversation is taken up by the logistics of the game itself.&lt;/p&gt;

&lt;p&gt;People are very good at subconsciously navigating these varied spaces and choosing the form of interaction that's right for them. But this requires an event that provides that wide variety of interactions. If your event just provides a single way to "network", it'll feel flat, and a large number of your attendees won't feel comfortable chatting.&lt;/p&gt;

&lt;p&gt;Like so much else I've been talking about, I sadly can't give you a checklist or playbook to just implement. Figuring out how to provide that range of activities is so highly dependent on both your audience and on your event platform. It's also really hard to talk about! I don't think Roguelike Celebration is perfect at this yet, but it may be instructive for me to outline some of the conversation opportunities we intentionally create or support. Going from most involved to least involved:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Each day, we host "unconferencing" sessions, where people can propose topics (generally design or technical subjects) to discuss in focused Zoom calls&lt;/li&gt;
&lt;li&gt;After each block of talks, speakers can optionally break out into a dedicated room for Q&amp;amp;A with attendees. Generally, the speaker is on a videocall while other attendees use text chat&lt;/li&gt;
&lt;li&gt;During the mainstage talks, we encourage a vibrant Twitch-style chat. With our audience, this generally leads to a good blend of insightful discussion/commentary and silly memes&lt;/li&gt;
&lt;li&gt;During the talks, people can also submit and upvote questions to be asked during any formal moderated Q&amp;amp;A time left at the end of a session (separate from the speaker breakout rooms). This gives people a chance to contribute to the discussion who are overwhelmed by the pace of the real-time chat&lt;/li&gt;
&lt;li&gt;During unstructured breaks, we've seen a pattern of people running their own playful or game-like activities in the space, ranging from running short tabletop RPG sessions to someone hosting a fortune-telling table&lt;/li&gt;
&lt;li&gt;In 2021, we introduced a short puzzle hunt / chain of riddles, which gave people an activity they could either do solo or team up with others and ask for help on&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Our space has consistently included a number of "fidget toys": objects you can pick up and carry with you, magical potions that append an emoji to your name, ways to interact with the space and see a random text generator do something silly. These give you an easy way to strike up a conversation with someone else by, say, commenting on their emoji, or asking where they got their item, and also give you a way to passively do something in the space alongside someone else without talking.&lt;/p&gt;

&lt;p&gt;It's worth calling out that our attendees spontaneously playing formal games and doing fortune tellings is not something you can plan for. In most cases, that sort of attendee self-organizing doesn't happen, or happens relatively infrequently. If you want to try to encourage people to play online multiplayer party games together (which is a great form of social interaction!) I'd put it upon yourself as an organizer to make it happen. It's a lot easier to get attendees to jump into a game of Jackbox, say, if a facilitator actively schedules it, sets up the game, and tells people where to show up, rather than hoping an attendee will spontaneously choose to do all that work.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If this set of problems is something you're interested in solving or diving deeper on, I'd also recommend you read my &lt;a href="https://blog.lazerwalker.com/2020/10/22/virtual-events-and-game-design.html"&gt;earlier article about Roguelike Celebration&lt;/a&gt;, as I go into a few other topics not covered here. Kate Compton's &lt;a href="https://www.youtube.com/watch?v=3HWwSbnkg4I"&gt;talk about technology design for social creativity&lt;/a&gt; has also been formative in my thinking.&lt;/p&gt;

&lt;h2&gt;
  
  
  Purpose-built spatial event platforms
&lt;/h2&gt;

&lt;p&gt;I've seen a lot of interest in online event platforms specifically built to be fun spaces that lean on playfulness to provide an alternative to traditional video calls. There are a ton of these "spatial" event platforms: &lt;a href="https://gather.town"&gt;Gather Town&lt;/a&gt; is perhaps the best-known, but I have familiarity with &lt;a href="https://skittish.com"&gt;Skittish&lt;/a&gt;, &lt;a href="https://www.wonder.me/"&gt;Wonder&lt;/a&gt;, and &lt;a href="https://www.bramble.live/"&gt;Bramble&lt;/a&gt;, and you could easily find a dozen more. &lt;/p&gt;

&lt;p&gt;They broadly share similar featuresets: each attendee controls an individual avatar in a custom space (almost always 2D, or 2D movement within a 3D space, often with a retro pixel-art aesthetic), and when you walk close to another attendee, you can see their webcam feed and/or hear them speak via microphone. The core idea behind these sorts of platforms is that spatial audio/videochat recreates an aspect of in-person events, where you can walk up to somebody and have a conversation with them without it being a giant unwieldy video call.&lt;/p&gt;

&lt;p&gt;Different platforms differentiate themselves through things like their default aesthetic, their editing tools, whether you can host a live presentation in the space, the specifics of their audio/videochat, and things like that. But the core value proposition is often broadly similar. I often see these either adopted as the core platform for an event, or just used as the venue for a specific scheduled 1-2 hour "social hour" or networking session.&lt;/p&gt;

&lt;p&gt;If you're considering one of these tools, the main thing I would urge is to recognize them as just that: tools. Just because you may be using an event platform that allows people to walk up to each other and strike up a conversation, that doesn't mean they will unless encouraged to do so through thoughtful social design. You absolutely can build a space that accomplishes the sort of goals I set out above using tools like these, but in most cases I'd recommend thinking of them as jumping-off points to build on rather than ready-made solutions that will solve the hard social problems for you. This holds true whether you're looking to hold a "social hour" in tools like this or are hoping to use them as your primary event platform.&lt;/p&gt;

&lt;p&gt;The linguistics conference LingComm has a great series of articles about running their 2021 online event, most notably one about &lt;a href="https://lingcomm.org/2021/06/28/hosting-online-conferences-for-building-community-the-case-of-lingcomm21/"&gt;the design of their Gather space&lt;/a&gt; (also worth reading: LingComm organizer Gretchen McCulloch's &lt;a href="https://www.wired.com/story/zoom-parties-proximity-chat/"&gt;Wired article&lt;/a&gt; about these sorts of tools). If you're looking to use one of these readymade spatial event platforms, I think LingComm is a great example of the sort of care and effort you need to put into designing your space (disclaimer: both of those articles liberally cite my work with Roguelike Celebration :P)&lt;/p&gt;

&lt;p&gt;Counter-intuitively, you may also have more success with one of the less popular of these tools. As I've discussed, a big part of the value of using a "non-traditional" event platform is the novelty of a space affording a context shift. If other events in your community regularly use one of these event platforms, it may already feel "same-y" to attendees. In other professional technical contexts, I often advise people not to make technology choices based on what's new and shiny, but that may be exactly what you want here.&lt;/p&gt;

&lt;h3&gt;
  
  
  Sidenote: VR spaces
&lt;/h3&gt;

&lt;p&gt;One specific sub-flavor of this sort of chat tool you might encounter is VR spaces like &lt;a href="https://altvr.com/"&gt;AltSpace&lt;/a&gt;, &lt;a href="https://hello.vrchat.com/"&gt;VRChat&lt;/a&gt;, or &lt;a href="https://hubs.mozilla.com/"&gt;Mozilla Hubs&lt;/a&gt;, which are primarily focused on VR but also usually support flat displays or smartphones.&lt;/p&gt;

&lt;p&gt;My advice here is simple: this is a great option if you're working with an audience that is already intimately familiar with VR or 3D virtual worlds. Otherwise, the technical barriers of getting people onboarded in a first-person 3D space are too large. You'll waste your time and your attendees will be frustrated.&lt;/p&gt;

&lt;p&gt;If you do happen to be running an event for a VR audience, I urge you to explore your options for yourself, but my informal opinion is that AltSpace is best for large events where thousands of people will watch a single presenter and you need to shard audience instances, while Mozilla Hubs is the most flexible (and the most accessible across a wide range of non-VR devices) for smaller and more informal events.&lt;/p&gt;

&lt;h2&gt;
  
  
  How do you schedule it?
&lt;/h2&gt;

&lt;p&gt;One of the biggest strengths of virtual events is that you can welcome attendees from all over the world without requiring expensive travel. Supporting that through a schedule that's equally inclusive is surprisingly difficult. It's easy to schedule an in-person conference: your conference day aligns with the working day. That doesn't necessarily make sense when all of your attendees are in different time zones.&lt;/p&gt;

&lt;p&gt;I've seen two common approaches here, depending on your resources.&lt;/p&gt;

&lt;h3&gt;
  
  
  24-hour conference
&lt;/h3&gt;

&lt;p&gt;One approach is to schedule a "24-hour conference". No matter what time zone an attendee is in, there will be something happening during the day their time. This is great for the way it doesn't center a specific region: it feels conceptually more inclusive, and concretely makes it easier for you to attract audiences outside of the geographic regions you usually operate in. &lt;/p&gt;

&lt;p&gt;It's also difficult to pull off, requiring strong geographic diversity among both speakers and conference staff to make sure that all times feel equal. If you go down this route, I'd also consider how you can intentionally design your schedule during common overlap times to encourage groups to mingle, so it doesn't just feel like your conference has different "shifts" where people only interact with people from the same region as them. Similarly, if you run a 24-hour event but don't actually have a globally diverse pool of attendees, you'll end up with "dead" periods of time, which also isn't great. A global-first event schedule needs to be accompanied by a global-first marketing plan.&lt;/p&gt;

&lt;h3&gt;
  
  
  A core time zone
&lt;/h3&gt;

&lt;p&gt;The alternative is to pick a "core" time zone and schedule around that. Roguelike Celebration, for example, is scheduled around Pacific time, since we were historically hosted in-person in San Francisco. This is much easier for us to staff, although it requires careful consideration to still be as accessible as possible. We need to be as accommodating as possible to speaker schedule requests, so nobody is stuck speaking in the middle of the night their time. &lt;/p&gt;

&lt;p&gt;We also consciously schedule our event so that people in non-Americas time zones can experience as diverse a sample of content as possible during their daylight-hours overlap. For example, we host "unconferencing" sessions each day, where attendees can post topics they'd like to discuss and then hop into a Zoom call with other attendees. We intentionally schedule these at different times each day — typically one in the morning and one in the afternoon — to try to maximize the chance that someone in Europe or the APAC region can at least attend one of them.&lt;/p&gt;

&lt;p&gt;There's no one "correct" answer here. Maximizing the global accessibility of your schedule is a tricky problem that cuts across almost all of your other planning concerns.&lt;/p&gt;

&lt;h2&gt;
  
  
  Hybrid Events
&lt;/h2&gt;

&lt;p&gt;Maybe you're &lt;em&gt;really&lt;/em&gt; itching to run an in-person event, and you think it's possible to do &lt;em&gt;something&lt;/em&gt; small, alongside a virtual event.&lt;/p&gt;

&lt;p&gt;I have a hard truth for you. It's absolutely possible to run a successful "hybrid event", but it requires having a successful virtual event as a baseline. If you start from the standpoint of running a good in-person event, and attach a virtual component to it, virtual attendees will correctly recognize that they're an after-thought. &lt;/p&gt;

&lt;p&gt;You need to start by designing a top-tier virtual event experience, and then from there figure out how to augment that experience with in-person moments. If you're not taking that approach, you're not running a "hybrid event", you're running an in-person event with a secondary livestream.&lt;/p&gt;

&lt;p&gt;It hasn't happened yet, and I'm deeply critical of their almost total lack of COVID safeguards, but Apple's upcoming &lt;a href="https://developer.apple.com/wwdc22/"&gt;WWDC developer conference&lt;/a&gt; tentatively seems like a great example of how to do this right. WWDC is a weeklong virtual event; separately, there's a one-day in-person event you can submit a request to attend, which primarily consists of a viewing party for the WWDC keynote and Platform State of the Union speech as well as some tours of additional on-site Apple spaces. Presumably the keynote and SOTU are still being produced as livestream-first events, but allowing some people to capture the magic of an in-person live keynote experience (which was previously a tentpole of in-person WWDC) seems like a good way to allow some in-person camaraderie while not creating too much FOMO for attendees of an otherwise virtual-first event.&lt;/p&gt;

&lt;h2&gt;
  
  
  Odds and ends
&lt;/h2&gt;

&lt;p&gt;There are a lot of other important aspects to running a virtual event, but talking about them starts to become more logistical than conceptual. I could write a whole blog post about each of these (and have in many cases!) but a few quick things to keep in mind:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Your event should ideally have live captions, provided by a human captioner. I've &lt;a href="https://blog.lazerwalker.com/2020/07/20/captions"&gt;written about the logistics of this&lt;/a&gt;. The short version is that it requires budgeting, but relatively little work, and is infinitely better than using AI-generated captions. If you can't afford a human captioner, make sure you have automated captions.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Make sure your schedule is easy to understand regardless of whether someone is in the same time zone as the organizers. I recommend investing in a little bit of front-end JavaScript code so your website automatically shows times in each attendee's local time zone.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Your virtual event is likely more accessible due to people being able to attend worldwide, but that doesn't mean it's financially accessible. What you consider a "cheap" ticket might not be that for people elsewhere in the world. I've seen a handful of conferences adopt pay-what-you-want pricing — Roguelike Celebration has suggested tiers including "pay for yourself", "pay for yourself and someone else", and "I can't afford to pay". Our revenue has been roughly the same as it was before introducing that PWYW structure, and I've heard the same thing from other organizers using the same pricing plan.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;How to technically host remote talks is an extremely deep subject (that &lt;a href="https://blog.lazerwalker.com/2020/10/13/roguelike-celebration-av-setup.html"&gt;I've written about&lt;/a&gt;!), but big picture, have a dedicated AV person on duty instead of just your MC, and use a hosted all-in-one platform (like &lt;a href="https://streamyard.com"&gt;Streamyard&lt;/a&gt;) rather than fiddling with your own OBS setup. Run AV tests beforehand with every speaker, allow them to pre-record if they'd prefer, and be respectful that speakers may have valid reasons to not show their face on-camera. Ideally have a budget to pay for cameras/mics for those who need them.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;If the thousands of words you've just read weren't enough from me: aside from the various posts of mine I've linked throughout this piece, I also wrote about &lt;a href="https://blog.lazerwalker.com/2020/07/09/virtual-worlds.html"&gt;building a better hallway track&lt;/a&gt; (this was the earliest design thinking that eventually led to the Roguelike Celebration space) as well as &lt;a href="https://blog.lazerwalker.com/2021/01/04/social-design-questions.html"&gt;a list of questions to ask yourself&lt;/a&gt; when designing a synchronous online social space.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
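To make the local-time-zone advice above concrete, here's a minimal sketch of the sort of front-end helper I mean. The function name and the `data-utc` attribute are my own illustrative choices, not from any particular library; it assumes talk times are stored as UTC ISO-8601 strings.

```javascript
// Render a UTC schedule time in a given time zone using the built-in
// Intl.DateTimeFormat API. In the browser, omit the timeZone argument
// and Intl defaults to the visitor's own zone automatically.
function localTimeLabel(isoUtc, timeZone) {
  return new Intl.DateTimeFormat("en-US", {
    weekday: "short",
    hour: "numeric",
    minute: "2-digit",
    timeZoneName: "short",
    timeZone, // undefined => use the viewer's local zone
  }).format(new Date(isoUtc));
}

// Browser usage, e.g. for markup like <span data-utc="2022-10-22T17:00:00Z">:
// document.querySelectorAll("[data-utc]").forEach((el) => {
//   el.textContent = localTimeLabel(el.dataset.utc);
// });
```

The nice part of leaning on `Intl` is that the browser handles daylight saving and locale formatting for you; no time-zone database needs to ship with your site.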

&lt;p&gt;I hope this massive brain-dump was helpful! If anything in here sparks inspiration for you to do something different in your own virtual event planning, I'd love to hear about it — my &lt;a href="https://twitter.com/lazerwalker"&gt;Twitter DMs&lt;/a&gt; are open.&lt;/p&gt;




&lt;ol&gt;

&lt;li id="fn1"&gt;
&lt;p&gt;A quick hot tip to share the sort of brainspace I'm currently in: I'm convinced, for the right experimental event, &lt;a href="http://www.byond.com/"&gt;BYOND&lt;/a&gt; would be an incredible tool to build a custom space. It's an early-2000s low-code/no-code tool for building online multiplayer games, most notably &lt;a href="https://spacestation13.com/"&gt;Space Station 13&lt;/a&gt;. Getting the editor to run on a modern Windows 11 PC can be a challenge, but I'd &lt;em&gt;love&lt;/em&gt; to see an event run in a custom-built BYOND space. ↩&lt;/p&gt;
&lt;/li&gt;

&lt;/ol&gt;

</description>
    </item>
    <item>
      <title>Why Video Chat is a Hard Technical Problem</title>
      <dc:creator>Em Lazer-Walker</dc:creator>
      <pubDate>Fri, 12 Mar 2021 17:06:56 +0000</pubDate>
      <link>https://forem.com/lazerwalker/why-video-chat-is-a-hard-technical-problem-43gj</link>
      <guid>https://forem.com/lazerwalker/why-video-chat-is-a-hard-technical-problem-43gj</guid>
      <description>&lt;p&gt;Back over the summer, I began a series of experiments to play around with new forms of synchronous online social interaction while we're all stuck at home. These ranged from a &lt;a href="https://dev.to/lazerwalker/using-game-design-to-make-virtual-events-more-social-24o"&gt;virtual conference hosted in a custom text-based MMORPG&lt;/a&gt; to using real-time mocap in the browser to make 2D animated avatars:&lt;/p&gt;

&lt;p&gt;&lt;iframe class="tweet-embed" id="tweet-1272894598214492160-818" src="https://platform.twitter.com/embed/Tweet.html?id=1272894598214492160"&gt;
&lt;/iframe&gt;




&lt;/p&gt;

&lt;p&gt;For these early experiments, I used &lt;a href="https://webrtc.org/" rel="noopener noreferrer"&gt;WebRTC&lt;/a&gt;, a browser-based peer-to-peer videochat technology. Since I was churning out small experiments quickly, I cared about being able to build something as quickly as possible, and ideally without having to spin up complicated and/or expensive servers.&lt;/p&gt;

&lt;p&gt;WebRTC sounds like it's perfect for this! Being peer-to-peer means you don't need complicated or expensive server infrastructure, and being a well-supported piece of browser tech means there are a lot of educational resources out there.&lt;/p&gt;

&lt;p&gt;To jump straight to the punchline: after we built a WebRTC-based videochat service for &lt;a href="https://roguelike.club" rel="noopener noreferrer"&gt;Roguelike Celebration&lt;/a&gt;'s event platform, we ripped it out and replaced it with a series of Zoom links for the actual event. Our WebRTC setup simply wasn't viable for production use. &lt;/p&gt;

&lt;p&gt;I've since talked to many other folks who built out WebRTC setups, ranging from simple to complex, and similarly ran into unacceptable performance pitfalls. This doesn't mean that WebRTC as a technology isn't viable for things like this — all of the solutions I recommend later in this article ultimately still use WebRTC under the hood — but reality is significantly more complicated than just reading the WebRTC API spec and building against it.&lt;/p&gt;

&lt;p&gt;The rest of this article will walk you through our learning process, and what we learned is necessary to make a WebRTC videochat setup work in a production environment. Our path to functioning videochat was long and winding; I want to outline what we learned to save other people from spending the same time and effort we did to come to that understanding.&lt;/p&gt;

&lt;h2&gt;
  
  
  Problem 1: Accessing AV Hardware
&lt;/h2&gt;

&lt;p&gt;Before we even get to sending audio and video streams over a network, we need audio and video streams. This means using the browser &lt;a href="https://developer.mozilla.org/en-US/docs/Web/API/MediaDevices" rel="noopener noreferrer"&gt;MediaDevices&lt;/a&gt; API, not yet WebRTC. But this has a catch!&lt;/p&gt;

&lt;p&gt;The API is simple. You call &lt;code&gt;navigator.mediaDevices.getUserMedia()&lt;/code&gt; and get access to audio and video streams. The catch: the user doesn't get to specify which specific input devices they want to use, so someone with multiple microphones or webcams is going to have a hard time. You'd assume web browsers would provide their own UIs to let users select devices, but the reality is complicated.&lt;/p&gt;

&lt;p&gt;If someone is using Firefox, they will in fact get a nice friendly popup asking which audio and video input they want to use. If they're using Chrome, that option is hidden deep in a settings menu, and it's extraordinarily bad at remembering your preferences. In Safari, that UI doesn't exist at all.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Solution&lt;/strong&gt;: building a production-ready app means you'll need to &lt;strong&gt;build your own in-app device selector&lt;/strong&gt; for available audio and video inputs. &lt;/p&gt;

&lt;p&gt;This is doable, but a pain. You also have to deal with inconsistencies in the ways different browsers surface the MediaDevices APIs for accessing that data. Ideally, you're using some sort of persistent local storage (e.g. the localStorage API) so you can remember the user's preference and not make them navigate a dropdown every single time they enter a chat.&lt;/p&gt;
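Here's one way that selector logic might look. I've factored the "which device should we use?" decision into a pure helper so the interesting part is visible; the helper's name and shape are my own, but `enumerateDevices`, `getUserMedia`, and `localStorage` are the real browser APIs involved.

```javascript
// Given the array from navigator.mediaDevices.enumerateDevices(), a kind
// ("audioinput" or "videoinput"), and a previously-saved deviceId (e.g.
// from localStorage), pick the device to request. Falls back to the first
// available device of that kind if the saved one is gone, or null if none.
function pickDeviceId(devices, kind, savedId) {
  const candidates = devices.filter((d) => d.kind === kind);
  const saved = candidates.find((d) => d.deviceId === savedId);
  return (saved ?? candidates[0])?.deviceId ?? null;
}

// Browser usage (not runnable outside the browser):
// const devices = await navigator.mediaDevices.enumerateDevices();
// const videoId = pickDeviceId(devices, "videoinput",
//                              localStorage.getItem("preferredCamera"));
// const stream = await navigator.mediaDevices.getUserMedia({
//   video: videoId ? { deviceId: { exact: videoId } } : true,
//   audio: true,
// });
// if (videoId) localStorage.setItem("preferredCamera", videoId);
```

One wrinkle to be aware of: `enumerateDevices()` only returns full device labels after the user has granted media permission, so your selector UI usually needs to render once before and once after the first `getUserMedia()` call.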

&lt;h2&gt;
  
  
  Problem 2: Making a connection
&lt;/h2&gt;

&lt;p&gt;Okay, so you've got proper audio and video streams, coming from the correct local input devices. Now we need a way to send that to other users!&lt;/p&gt;

&lt;p&gt;The most straightforward way to do a group videochat in WebRTC is using what's called a full-mesh network topology. This sounds complicated, but it just means "every client is connected to every other client". If there are 3 of us in a chat, each of our web browsers has a direct connection to each of the other two people's web browsers, and a new person joining would immediately initiate three new connections, one to each of us.&lt;/p&gt;

&lt;p&gt;To open a WebRTC connection between two clients, one client generates an offer. The other client accepts that offer and generates a response. The initiating client accepts that response, and you're off to the races.&lt;/p&gt;
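&lt;p&gt;Here's a minimal sketch of that dance (ignoring ICE candidate exchange, with &lt;code&gt;signaling&lt;/code&gt; standing in for whatever transport you use to move these strings between clients):&lt;/p&gt;

```javascript
// Initiating side: generate an offer and send it out.
async function startCall(pc, signaling) {
  const offer = await pc.createOffer();
  await pc.setLocalDescription(offer);
  signaling.send({ type: "offer", sdp: pc.localDescription });
}

// Both sides: react to incoming signaling messages.
async function handleSignal(pc, signaling, msg) {
  if (msg.type === "offer") {
    // Receiving side: accept the offer and respond with an answer.
    await pc.setRemoteDescription(msg.sdp);
    const answer = await pc.createAnswer();
    await pc.setLocalDescription(answer);
    signaling.send({ type: "answer", sdp: pc.localDescription });
  } else if (msg.type === "answer") {
    // Back on the initiating side: accept the answer, and you're connected.
    await pc.setRemoteDescription(msg.sdp);
  }
}
```

&lt;p&gt;In the browser, &lt;code&gt;pc&lt;/code&gt; is an &lt;code&gt;RTCPeerConnection&lt;/code&gt; configured with your ICE servers; a real app also needs to forward ICE candidates over the same signaling channel.&lt;/p&gt;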

&lt;p&gt;To send these offers and responses back and forth between clients, you need some sort of data transport mechanism. And since you don't yet have a WebRTC data connection you can use, this means you'll need some sort of server infrastructure. Building and scaling a backend to exchange handshake strings between clients is a lot less work than building one to send video data, but it's not nothing. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Solution:&lt;/strong&gt; You'll need to &lt;strong&gt;build your own server backend&lt;/strong&gt; that can transport strings between clients until they successfully open a peer-to-peer connection.&lt;/p&gt;

&lt;p&gt;WebSockets are a great choice for this, but WebSockets are also a pain to scale compared to regular HTTP servers. I personally use a combination of &lt;a href="https://docs.microsoft.com/azure/azure-functions/functions-overview?WT.mc_id=spatial-6379-emwalker" rel="noopener noreferrer"&gt;Azure Functions&lt;/a&gt; and &lt;a href="https://docs.microsoft.com/azure/azure-signalr/signalr-overview?WT.mc_id=spatial-6379-emwalker" rel="noopener noreferrer"&gt;Azure SignalR Service&lt;/a&gt; to do this handshake (in an architecture similar to what I outline in &lt;a href="https://dev.to/lazerwalker/scaling-an-online-virtual-world-with-serverless-tech-4pfo"&gt;this article&lt;/a&gt;), but this still requires maintaining server-side services!&lt;/p&gt;
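&lt;p&gt;If you do roll your own, the relay itself is conceptually tiny. Here's a hypothetical sketch (the envelope format and names are my own) of the routing logic, with WebSocket wiring via the &lt;code&gt;ws&lt;/code&gt; package shown as comments:&lt;/p&gt;

```javascript
// Relay logic: given the set of connected clients and an envelope saying who
// a message is for, forward it. Returns false if the target isn't connected.
function routeEnvelope(clients, envelope) {
  const target = clients.get(envelope.to);
  if (!target) return false;
  target.send(JSON.stringify({ from: envelope.from, payload: envelope.payload }));
  return true;
}

// Server wiring sketch (assumes the `ws` npm package):
//   const { WebSocketServer } = require("ws");
//   const wss = new WebSocketServer({ port: 8080 });
//   const clients = new Map();
//   wss.on("connection", (socket) => {
//     const id = require("crypto").randomUUID();
//     clients.set(id, socket);
//     socket.on("message", (data) => routeEnvelope(clients, JSON.parse(data)));
//     socket.on("close", () => clients.delete(id));
//   });
```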

&lt;h2&gt;
  
  
  Problem 3: What if network settings mean clients can't connect?
&lt;/h2&gt;

&lt;p&gt;Let's say you've built out a simple WebRTC flow, where 4 different people are all connected to each other. This means there'll be 6 different WebRTC connections across all participants. You'll quickly find something pretty weird: chances are, at least one of those 6 connections will fail and two people won't be able to videochat with each other.&lt;/p&gt;

&lt;p&gt;The short explanation for this is router settings. After the WebRTC signaling handshake is complete, a framework called ICE (Interactive Connectivity Establishment) tries to directly connect the two clients by discovering publicly-accessible IP addresses for both.&lt;/p&gt;

&lt;p&gt;ICE will first try to use a STUN server, which is a server that basically exists to tell a client what its public IP address is. In the ideal case, this just works: you get usable public IP addresses for both clients, and you're done.&lt;/p&gt;

&lt;p&gt;If one or both clients are behind a particularly protective NAT layer (e.g. due to a corporate firewall), that STUN public IP dance isn't going to work. In that case, both clients need to connect to a relay, called a TURN server, that forwards all messages between the two since they can't connect directly.&lt;/p&gt;

&lt;p&gt;If you're interested in a more detailed technical explanation for this issue, &lt;a href="https://www.html5rocks.com/en/tutorials/webrtc/infrastructure/#after-signaling-using-ice-to-cope-with-nats-and-firewalls" rel="noopener noreferrer"&gt;this article&lt;/a&gt; is a great resource.&lt;/p&gt;

&lt;p&gt;Conventional wisdom says that about 80% of WebRTC connections will succeed with only STUN. This means that, unless you have a TURN server to fall back to, about 20% of all connections will fail!&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Solution&lt;/strong&gt;: Run your own &lt;strong&gt;TURN relay server&lt;/strong&gt; for when clients' NAT settings don't allow them to connect directly.&lt;/p&gt;
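&lt;p&gt;The fallback itself is just configuration on the &lt;code&gt;RTCPeerConnection&lt;/code&gt;; the URLs and credentials below are placeholders for your own deployment:&lt;/p&gt;

```javascript
// ICE server configuration with a TURN fallback. These URLs and credentials
// are placeholders; substitute your own STUN/TURN servers.
const rtcConfig = {
  iceServers: [
    { urls: "stun:stun.example.com:3478" },
    {
      urls: "turn:turn.example.com:3478",
      username: "user",
      credential: "secret",
    },
  ],
};

// In the browser: new RTCPeerConnection(rtcConfig). ICE tries the direct
// STUN-discovered route first and falls back to relaying through TURN
// only when it has to.
```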

&lt;p&gt;STUN services are cheap to run, and it's pretty easy to find free ones that can scale with your prototype. Since TURN servers are more resource-intensive (given they're active beyond just the handshake stage of a connection), you'll probably need to host your own rather than find free community options.&lt;/p&gt;

&lt;p&gt;One option is to use &lt;a href="https://www.twilio.com/stun-turn" rel="noopener noreferrer"&gt;Twilio's hosted TURN service&lt;/a&gt;. Another is to &lt;a href="https://devblogs.microsoft.com/cse/2018/01/29/orchestrating-turn-servers-cloud-deployment/?WT.mc_id=spatial-6379-emwalker" rel="noopener noreferrer"&gt;host your own Docker image on a cloud provider such as Azure&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Problem 4: What if too many people are connected?
&lt;/h2&gt;

&lt;p&gt;At this point, you've got a working videochat app. You've built your own AV selector UI to let people pick their devices. You've built server infrastructure to let clients complete offer handshakes. You're running a TURN server to make sure that everyone can connect regardless of their network setup. This all sounds great.&lt;/p&gt;

&lt;p&gt;And then, you try to have a videocall with more than 4 people and your computer comes to a grinding halt.&lt;/p&gt;

&lt;p&gt;This "full-mesh" setup - where each person in a 4-person videochat is sending and receiving video data from each of the other three participants - is incredibly wasteful. &lt;/p&gt;

&lt;p&gt;For each additional participant, your own bandwidth and CPU/GPU consumption increase linearly. Even on a pretty beefy computer with a solid, fast network connection, performance anecdotally starts degrading somewhere above 4-ish video participants or 10-ish audio-only participants. &lt;/p&gt;

&lt;p&gt;And that assumes a solid network connection. If one participant has slow Internet speeds, ideally other clients would start sending them a lower-bitrate video stream, but that sort of selective real-time transcoding really isn't feasible to do in the browser.&lt;/p&gt;

&lt;p&gt;It's worth noting that this is not just a technical concern but an accessibility issue: by building a system that falls over unless you have a top-of-the-line computer and a blazing fast Internet connection, you're building a system that only serves the most privileged.&lt;/p&gt;

&lt;p&gt;There's no clear fix here other than avoiding the need to send the same audio/video stream out N times while simultaneously decoding and presenting N remote A/V streams.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Solution&lt;/strong&gt;: Move away from a full-mesh peer-to-peer system in favor of a centralized system, most likely a &lt;strong&gt;Selective Forwarding Unit&lt;/strong&gt; (SFU).&lt;/p&gt;

&lt;p&gt;An SFU is a server that acts as a single WebRTC peer to send and receive video data. Instead of connecting to all of the other people using your chat app directly, your client just connects to the SFU and sends its A/V streams to that single source. The SFU selectively decides which other connected clients should receive a given audio or video stream, and can also intelligently do things such as dynamic video reencoding to serve lower-bitrate streams to clients with lower bandwidth caps.&lt;/p&gt;

&lt;p&gt;There are many different ways to run an SFU, but one common approach is integrating the &lt;a href="https://mediasoup.org/" rel="noopener noreferrer"&gt;mediasoup&lt;/a&gt; library into your own Node.js server so you can configure and scale it exactly how you would like.&lt;/p&gt;
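&lt;p&gt;For a rough feel for the client side, here's a hedged sketch loosely based on the mediasoup-client API. The &lt;code&gt;signaling&lt;/code&gt; object and server payloads are assumptions about your own backend, and the transport's &lt;code&gt;connect&lt;/code&gt;/&lt;code&gt;produce&lt;/code&gt; event wiring back to the server is omitted:&lt;/p&gt;

```javascript
// Hedged sketch of joining an SFU session. `Device` is the mediasoup-client
// Device class (passed in here to keep the sketch self-contained), and
// `signaling` is a stand-in for your own request/response channel.
async function joinSfu(Device, signaling, localStream) {
  const device = new Device();

  // The server tells us which codecs/extensions its router supports.
  const routerRtpCapabilities = await signaling.request("getRouterRtpCapabilities");
  await device.load({ routerRtpCapabilities });

  // Ask the server to create a transport, then mirror it client-side.
  // (A real app also wires transport.on("connect") / .on("produce")
  // callbacks back to the server; omitted here.)
  const params = await signaling.request("createSendTransport");
  const transport = device.createSendTransport(params);

  // Send each local track to the SFU; it forwards them selectively.
  for (const track of localStream.getTracks()) {
    await transport.produce({ track });
  }
  return transport;
}
```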

&lt;h2&gt;
  
  
  ...but that's A LOT for just doing basic video chat!
&lt;/h2&gt;

&lt;p&gt;I agree! My goal was initially to build some fun little prototypes of novel social interaction patterns, and instead I found myself deep in the technical weeds of networking protocols and peer-to-peer network topologies.&lt;/p&gt;

&lt;p&gt;I hope this mile-high overview of the tricky bits of implementing WebRTC can at least get you to understand why this is a hard problem, and give you the lay of the land for coming up with your own solution.&lt;/p&gt;

&lt;p&gt;In particular, I have two concrete recommendations:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;If you're just experimenting, start out by using a fully-hosted video solution such as &lt;a href="https://docs.microsoft.com/azure/communication-services/overview?WT.mc_id=spatial-6379-emwalker" rel="noopener noreferrer"&gt;Azure Communication Services&lt;/a&gt; or &lt;a href="https://www.twilio.com/docs/video" rel="noopener noreferrer"&gt;Twilio Programmable Video&lt;/a&gt;. You'll get an easy-to-integrate API that doesn't require running your own server backend, audio and video chat that automatically scales to any number of simultaneous users, and relatively minimal costs for prototype-scale use.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;If you're building a production piece of software where video or audio chat will be a core component, a hosted solution is still the most effort-free option, but you may want to build your own solution to save costs and have more control over your infrastructure. If that's the case, jump straight to running your own SFU. Trying to just get by with a full-mesh topology and maybe a TURN server is ultimately not going to be good enough. Learn from the experiences of myself and countless others and save yourself the time and effort.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Has this helped? Come up with your own solution to recommend? Let me know on &lt;a href="https://twitter.com/lazerwalker" rel="noopener noreferrer"&gt;Twitter&lt;/a&gt;, I'm always happy to hear from more folks tackling these hard problems :)&lt;/p&gt;

</description>
      <category>webdev</category>
      <category>webrtc</category>
      <category>javascript</category>
      <category>azure</category>
    </item>
    <item>
      <title>An (Incomplete) List of Questions To Ask When Designing a Synchronous Online Social Space</title>
      <dc:creator>Em Lazer-Walker</dc:creator>
      <pubDate>Mon, 04 Jan 2021 16:52:51 +0000</pubDate>
      <link>https://forem.com/lazerwalker/an-incomplete-list-of-questions-to-ask-when-designing-an-online-social-space-3n8d</link>
      <guid>https://forem.com/lazerwalker/an-incomplete-list-of-questions-to-ask-when-designing-an-online-social-space-3n8d</guid>
      <description>&lt;p&gt;We're in an exciting period of change where we're still figuring out what online events should be. &lt;/p&gt;

&lt;p&gt;Most would agree that, whether you're talking about a birthday party or a professional conference, a large grid of faces in a group video chat likely isn't the ideal setup to foster meaningful online interaction. But we don't really know yet what &lt;em&gt;is&lt;/em&gt; the ideal setup!&lt;/p&gt;

&lt;p&gt;A lot of people are experimenting with new technical platforms to better support spontaneous interactions and small group conversations in online settings (&lt;a href="https://dev.to/lazerwalker/using-game-design-to-make-virtual-events-more-social-24o"&gt;myself included!&lt;/a&gt;), but it's still the wild west. &lt;/p&gt;

&lt;p&gt;Here's a list of questions to ask yourself as you're trying to design a more thoughtful space for online communication. Many of these are unsubtle leading questions with a 'correct' answer; others are more open-ended, meant to encourage reflection.&lt;/p&gt;

&lt;p&gt;This list emerged out of my work on the &lt;a href="https://dev.to/lazerwalker/using-game-design-to-make-virtual-events-more-social-24o"&gt;text-based social space&lt;/a&gt; that powered the Roguelike Celebration conference as well as some future events. My goal is to make sure that you're thinking about the right design elements and principles to create a well-considered environment for your event attendees.&lt;/p&gt;

&lt;p&gt;To be clear, I'm talking about spaces meant for synchronous real-time communication, and generally (but not exclusively!) about temporary spaces for time-limited events rather than longer-persisting spaces. Think more meetups, parties, and conferences, than coworking spaces, persistent virtual worlds, or traditional social media.&lt;/p&gt;

&lt;p&gt;Although a lot of the conversation is currently focused on cartoony 2D environments that include spatial video chat (&lt;a href="https://gather.town"&gt;Gather Town&lt;/a&gt; is currently the highest-profile, but I could easily name a dozen competitors), I'm trying to ask questions that are applicable regardless of whether you're a solo event host throwing a party on Zoom for your friends or are a VC-funded startup building a VR platform for 3D virtual worlds.&lt;/p&gt;

&lt;h1&gt;
  
  
  Input Methods
&lt;/h1&gt;

&lt;ul&gt;
&lt;li&gt;How do you mitigate or minimize exhaustion from the performative nature of extended group video chat (aka "Zoom fatigue")?&lt;/li&gt;
&lt;li&gt;How do you accommodate people who are more comfortable communicating over text or audio chat instead of video?&lt;/li&gt;
&lt;li&gt;If you allow multiple input modalities (text vs audio vs video, VR head and hand tracking vs mouse and keyboard or game controller, etc), how do you ensure that people with lower-fidelity communication methods don't feel like "lesser" attendees than those with a wider range of expression?&lt;/li&gt;
&lt;li&gt;Are your software and communication methods fully accessible? Can people with vision impairment or low vision, people who are deaf or hard-of-hearing, and people with motor impairments all use your tool to communicate with each other?&lt;/li&gt;
&lt;li&gt;What languages does your tool support? Can non-English speakers understand or read your UI? Do your user-editable text inputs (usernames, text chat, etc) support non-Roman alphabets and right-to-left languages?&lt;/li&gt;
&lt;li&gt;If you provide a multitude of input methods, is it clear to attendees what their options are and what the tradeoffs are? This is especially important if you offer options that may be mutually exclusive in practice, such as turning on your webcam vs wearing a VR headset.&lt;/li&gt;
&lt;/ul&gt;

&lt;h1&gt;
  
  
  Demographics
&lt;/h1&gt;

&lt;ul&gt;
&lt;li&gt;What is the average session time for an attendee to your space? A few hours for a meetup or party, a few days for a conference, a persistent long-term coworking space, etc...&lt;/li&gt;
&lt;li&gt;What is the average number of attendees to an event in your space? 10 people, 100 people, and 1000 people have very different needs.&lt;/li&gt;
&lt;li&gt;Do attendees to events in your space typically already know each other, are they all strangers, or somewhere in between?&lt;/li&gt;
&lt;li&gt;If the answer to any of the previous questions is "it depends", what tools are you providing event hosts to make sure their event space is well-tailored to the needs of their specific event?&lt;/li&gt;
&lt;li&gt;If there are specific types of events your space is better suited for, how do you communicate this to event hosts?&lt;/li&gt;
&lt;li&gt;Do the hardware requirements and level of technical involvement required to access your space match the capabilities of an average attendee to your events?&lt;/li&gt;
&lt;/ul&gt;

&lt;h1&gt;
  
  
  Fostering Social Interaction
&lt;/h1&gt;

&lt;ul&gt;
&lt;li&gt;In what ways does your space provide activities or interactions that can serve as conversational hooks to encourage discussion?&lt;/li&gt;
&lt;li&gt;What level of attention is required to do these activities? Do you offer activities with a range of involvement levels to allow attendees to self-select for how much room for freeform conversation they want versus focusing primarily on a structured activity?&lt;/li&gt;
&lt;li&gt;Separate from level of involvement needed, do your activities provide varying levels of structure and rules, to accommodate people with different levels of creativity and willingness to jump in and try something new? Some people are excited to improvise and play make-believe with little to no prompting; others need more encouragement and structure to make it feel socially acceptable to engage in playful activities.&lt;/li&gt;
&lt;li&gt;Do different types of activities or interactions appeal to different types of attendee personalities? The &lt;a href="https://en.wikipedia.org/wiki/Bartle_taxonomy_of_player_types"&gt;Bartle taxonomy of player types&lt;/a&gt; may be a helpful, if incomplete, lens&lt;/li&gt;
&lt;li&gt;How "mandatory" are all of these activities?&lt;/li&gt;
&lt;li&gt;To what extent are these activities or interactions explicit and broadcasted versus being secrets hidden throughout your space? How do you balance encouraging as much involvement as possible with creating a sense of exploration and mystery?&lt;/li&gt;
&lt;li&gt;To the extent that your space has secrets or elements that are less obvious, how does knowledge-sharing about that tie into your other social scaffolding and conversational hooks?&lt;/li&gt;
&lt;li&gt;If your event is centered around a singular activity (e.g. a talk or series of talks), how do you balance between pushing people to attend that activity versus allowing or encouraging people who would prefer to keep participating in the "hallway track" instead?&lt;/li&gt;
&lt;li&gt;If your space is meant to host larger gatherings, how do you foster and encourage smaller group conversations?&lt;/li&gt;
&lt;li&gt;Once attendees are having smaller conversations, how do they find new people to talk to or move to a different conversation?&lt;/li&gt;
&lt;li&gt;If I don't know anyone at an event, how do I find people to talk to with similar interests as me or who want to talk about the same things as me? Is there a way for me to signal my interests, or a place I can go to indicate what I'm looking for?&lt;/li&gt;
&lt;li&gt;If there is technical or design friction involved in moving to a new conversation (rather than social friction), is this an intentional choice designed to create specific conversational dynamics, or is this something you should aim to optimize out?&lt;/li&gt;
&lt;li&gt;How do you balance actively encouraging fluidity of conversations versus letting people deeply engrossed in conversation stay there?&lt;/li&gt;
&lt;li&gt;By default, online spaces won't have the equivalent of "I need to go to the bathroom" or "let me refill my cheese plate". If somebody wants to get out of a focused conversation, do you provide socially-acceptable excuses to leave? &lt;/li&gt;
&lt;li&gt;How does your social scaffolding scale as users become familiar with your platform? To what extent is it focused on the novelty of the space itself (e.g. is most discussion focused on how cool and original the space is, how much it feels like you're talking in-person, etc?) or does it still provide useful conversational hooks for expert users?&lt;/li&gt;
&lt;li&gt;How can a person or conversation group broadcast things such as "we'd love to talk to new people!" or "go away, we're having a private conversation" to others? In physical spaces, these would typically be communicated via subconscious body language cues that can be difficult to directly recreate digitally.&lt;/li&gt;
&lt;/ul&gt;

&lt;h1&gt;
  
  
  Aesthetics and World-Building
&lt;/h1&gt;

&lt;ul&gt;
&lt;li&gt;Does the aesthetic theming of your space match the tone of the event? A house party is not a professional conference is not a friendly coworking space&lt;/li&gt;
&lt;li&gt;If you are a platform or a space where the answer to the previous question is "it depends", what creation tools do you offer event hosts (or attendees!) to customize the feeling of a space?&lt;/li&gt;
&lt;li&gt;If you provide creation tools, how do you educate event hosts not just how to use them but how to build good things with them? Are you providing event hosts who aren't architects or videogame level designers the scaffolding they need to create spaces that succeed at an intentional design goal?&lt;/li&gt;
&lt;li&gt;If your space is graphical, what sort of art assets or visual design creation tools do you provide? Can event hosts bring their own assets if they want to? Are they required to bring their own assets?&lt;/li&gt;
&lt;li&gt;If event hosts or attendees are encouraged to provide their own art assets, what are the barriers to entry for creation? Creating 3D models is more work than 2D sprites is more work than writing prose text.&lt;/li&gt;
&lt;li&gt;To what extent can attendees modify or shape the space? Do they have access to the same creation tools as event hosts? If not, are there alternative ways for them to express themselves in the space in a persistent or semi-persistent way?&lt;/li&gt;
&lt;li&gt;How much control do attendees have over their own presentation? This could mean anything from usernames and user profiles to 2D or 3D avatars to something else entirely. How do these forms of self-expression themselves provide hooks for people to start conversations about?&lt;/li&gt;
&lt;li&gt;Do your various forms of attendee self-expression provide space for in-jokes and spontaneous culture to emerge over the course of the event? How do your creative tools actively encourage this?&lt;/li&gt;
&lt;/ul&gt;

&lt;h1&gt;
  
  
  Trust and Safety
&lt;/h1&gt;

&lt;ul&gt;
&lt;li&gt;Do you provide event hosts the tools to effectively moderate their events and enforce a code of conduct? (e.g. the ability to ban attendees and remove individual messages, tools for users to report CoC violations and issues, perhaps some sort of secure auditable log to review in the case of CoC reports)&lt;/li&gt;
&lt;li&gt;Do individual attendees have the trust and safety tools they need to minimize the damage of abuse or harassment without escalating to the event hosts? (e.g. robust muting and blocking tools)&lt;/li&gt;
&lt;li&gt;Do you have sufficient live human moderators at your event to make attendees feel safe? Depending on your space, it may not be feasible (or even desirable!) to have an organizer present and listening in every possible space where people might congregate, but do attendees feel comfortable with the level of moderation when they need to e.g. report a CoC violation?&lt;/li&gt;
&lt;li&gt;Many event organizers in VR social spaces feel the need to explicitly explain to new attendees that, as in the real world, standing too close to someone else in VR is viewed as an invasion of personal space. Does your space have cultural norms where unintentional violations may cause discomfort or harm, and if so how do you communicate and educate about them?&lt;/li&gt;
&lt;li&gt;When designing various features and interactions between users, have you actively considered how those features might be vectors for abuse and harassment and designed defensively against that?&lt;/li&gt;
&lt;li&gt;How do you balance a desire to allow pseudonymity with a desire to keep bad actors accountable for their actions? How does your user registration policy and user profile design reflect this?&lt;/li&gt;
&lt;li&gt;If you as a larger platform place restrictions on allowed content, are your rules and enforcement policies explicit? Is it clear what will happen when the policies of a specific event conflict with the platform as a whole?&lt;/li&gt;
&lt;li&gt;How do you control user access to specific events? Where do you strike the balance between making it as simple as possible to join an event versus preventing bad actors and trolls from entering events they were not invited to?&lt;/li&gt;
&lt;/ul&gt;

&lt;h1&gt;
  
  
  Technical limitations
&lt;/h1&gt;

&lt;ul&gt;
&lt;li&gt;What hardware is needed to access your space? Does it run on mobile devices? How about an underpowered 5-year-old computer?&lt;/li&gt;
&lt;li&gt;If someone attempts to use your space with underpowered or unsupported hardware, are they warned about potential issues before joining? Are they prevented from joining entirely?&lt;/li&gt;
&lt;li&gt;Does accessing your space require a downloadable executable, or can it run in a web browser?&lt;/li&gt;
&lt;li&gt;If your space is focused on a certain technology (e.g. videochat, or VR head + hand tracking), does it meaningfully work without appropriate hardware? &lt;/li&gt;
&lt;li&gt;Do event hosts feel like they need to spend a meaningful portion of their event providing instruction and technical assistance to attendees?&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This is naturally an incomplete list of concerns, but hopefully is helpful as you work on your own novel online social spaces and events!&lt;/p&gt;

&lt;p&gt;If you're working on something cool, I'd love to hear about it! Feel free to &lt;a href="https://twitter.com/lazerwalker"&gt;say hello&lt;/a&gt;, I'm always excited to check out exciting new experiments in this space.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Scaling an Online Virtual World with Serverless Tech</title>
      <dc:creator>Em Lazer-Walker</dc:creator>
      <pubDate>Fri, 20 Nov 2020 15:59:15 +0000</pubDate>
      <link>https://forem.com/lazerwalker/scaling-an-online-virtual-world-with-serverless-tech-4pfo</link>
      <guid>https://forem.com/lazerwalker/scaling-an-online-virtual-world-with-serverless-tech-4pfo</guid>
      <description>&lt;p&gt;I help run an annual game design conference called &lt;a href="https://roguelike.club" rel="noopener noreferrer"&gt;Roguelike Celebration&lt;/a&gt;. Naturally, this year we were a virtual event instead of in-person for the first time. However, instead of just broadcasting a Twitch stream and setting up a Discord or Slack instance, we built our own custom browser-based text-based social space, inspired by online games and MMOs!&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fgvud5e5yyq74wh1c28jv.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fgvud5e5yyq74wh1c28jv.jpeg" alt="The Roguelike Celebration social space"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I've written about &lt;a href="https://dev.to/lazerwalker/using-game-design-to-make-virtual-events-more-social-24o"&gt;the design underlying our social space&lt;/a&gt;, as well as our approach to &lt;a href="https://dev.to/lazerwalker/running-a-virtual-conference-roguelike-celebration-s-av-setup-44hk"&gt;AV infrastructure&lt;/a&gt;, but in this article I wanted to talk about the technical architecture and how we used serverless technology to design for scale.&lt;/p&gt;

&lt;p&gt;From an engineering standpoint, we built an &lt;a href="https://github.com/lazerwalker/azure-mud" rel="noopener noreferrer"&gt;open-source&lt;/a&gt; real-time game and chat platform. We eventually ended up selling around 800 tickets, meaning we needed to support at least that many concurrent users in a single shared digital space.&lt;/p&gt;

&lt;p&gt;Our timeline for the project was incredibly short — I built the platform in about three months of part-time work aided by a handful of incredibly talented volunteers — which meant we didn't really have time to solve hard scaling problems. So what did we do?&lt;/p&gt;

&lt;h2&gt;
  
  
  Overall Server Architecture
&lt;/h2&gt;

&lt;p&gt;A "traditional" approach to building something like this would likely involve building a server that could communicate with game clients — likely a combination of HTTP and WebSockets, in the case of a browser-based experience — as well as read/write access to some sort of database.&lt;/p&gt;

&lt;p&gt;If we ended up having more concurrent users than that one server could handle, I'd have two options: run the server process on a beefier computer ("vertical" scaling) or figure out how to span multiple servers and load-balance between them ("horizontal" scaling). &lt;/p&gt;

&lt;p&gt;However, on such a tight time scale, I didn't want to get into a situation where I would need to design and run load tests to figure out what my needs and options were. It's possible none of these scaling issues would actually be relevant given the size of our conference, but it wasn't possible to be confident about that without investing time we didn't have into testing. Particularly, I knew from experience that scaling WebSockets is an especially frustrating challenge.&lt;/p&gt;

&lt;p&gt;Instead, I reached for a "&lt;a href="https://azure.microsoft.com/en-us/overview/serverless-computing/?WT.mc_id=spatial-10257-emwalker" rel="noopener noreferrer"&gt;serverless&lt;/a&gt;" solution. Instead of provisioning a specific server (or set of servers), I would use a set of services that themselves know how to auto-scale without any input on my end, charging directly for usage.&lt;/p&gt;

&lt;p&gt;This sort of architecture often has a reputation for being more expensive than just renting raw servers (more on costs later!), but in our case it was well worth the peace of mind of not having to think about scaling at all.&lt;/p&gt;

&lt;p&gt;Here's a high-level look at the architecture we ended up building:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fligz0wmbyigvfejvodwa.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fligz0wmbyigvfejvodwa.png" alt="Architecture diagram"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Since I didn't want to have to think about scaling, we needed our server-side code to run on a serverless platform such as &lt;a href="https://docs.microsoft.com/azure/azure-functions?WT.mc_id=spatial-10257-emwalker" rel="noopener noreferrer"&gt;Azure Functions&lt;/a&gt;. Instead of deploying a proper Node.js server (our code was all written in TypeScript), I wanted to be able to upload individual TypeScript functions mapped to specific HTTP endpoints, with our cloud provider automatically calling those functions in response to requests and scaling up capacity as needed.&lt;/p&gt;

&lt;p&gt;However, as a real-time game, we also needed real-time communication. The typical way to do this in a web browser is to use WebSockets, which require long-standing persistent connections. That model isn't compatible with the serverless function model, where by definition your computing resources are fleeting and each new HTTP request is processed by a different short-lived VM. &lt;/p&gt;

&lt;h3&gt;
  
  
  Azure SignalR Service
&lt;/h3&gt;

&lt;p&gt;Enter &lt;a href="https://docs.microsoft.com/azure/azure-signalr/signalr-overview?WT.mc_id=spatial-10257-emwalker" rel="noopener noreferrer"&gt;Azure SignalR Service&lt;/a&gt;, a hosted SignalR implementation designed to solve this problem. If you're familiar with WebSockets but not SignalR, you can think of SignalR as a protocol layer on top of WebSockets that adds features like more robust authentication. But for our purposes, what matters isn't the use of SignalR instead of raw WebSockets, but the fact that Azure SignalR Service is a hosted service that can manage those long-standing connections and provide an API to communicate with them from short-lived Azure Functions code.&lt;/p&gt;

&lt;p&gt;The only issue is that Azure SignalR Service only handles one-way communication: you can send messages to connected clients from the server (including our serverless functions), but clients can't send messages back to the server. This is a limitation of Azure SignalR Service, not SignalR as a protocol. &lt;/p&gt;

&lt;p&gt;For our purposes, this was fine: we built a system where clients sent messages to the server (such as chat messages, or commands to perform actions) via HTTP requests, and received messages from the server (such as those chat messages sent by other clients) over SignalR. This approach also let us lean heavily on SignalR's group management tools, which simplified logic around things like sending chat messages to people in specific chat rooms.&lt;/p&gt;

&lt;h3&gt;
  
  
  HTTP requests and latency
&lt;/h3&gt;

&lt;p&gt;Using HTTP requests for client-to-server messages did add extra latency to the system that wouldn't exist if we could do everything over WebSockets instead. Even using WebSockets by itself can be a performance issue for particularly twitch-heavy games: since WebSockets are a TCP-based protocol, speedy packet delivery is often trickier than with the UDP-based socket solutions most fast-paced multiplayer games use.&lt;/p&gt;

&lt;p&gt;These are problems that any browser-based game needs to solve, but fortunately for us, we weren't dealing with 2D or 3D graphics, a networked physics model, or the other systems that typically complicate game netcode. As a text-based experience, the extra tens to hundreds of milliseconds of latency added by using HTTP requests for client-to-server messages was totally acceptable.&lt;/p&gt;

&lt;h3&gt;
  
  
  Redis as a persistence layer
&lt;/h3&gt;

&lt;p&gt;From there, we also had a persistence layer to handle things such as remembering players' user profiles and who was in what room (we didn't store text messages, other than dumping them to a controlled audit log only accessed when addressing Code of Conduct violations). &lt;/p&gt;

&lt;p&gt;We used &lt;a href="https://docs.microsoft.com/en-us/azure/azure-cache-for-redis/cache-overview?WT.mc_id=spatial-10257-emwalker" rel="noopener noreferrer"&gt;Redis&lt;/a&gt;, a key-value store primarily intended as a caching layer. It worked great for our purposes: it was easy to integrate, and its emphasis on speed helped ensure that database access didn't add to the latency we were already incurring from our reliance on HTTP requests. Redis isn't always as suitable for long-term persistence as a proper database, but since we were running an ephemeral installation for two days, that didn't matter. &lt;/p&gt;

&lt;p&gt;That said, any sort of database or key-value store would likely have worked great for our admittedly simple needs.&lt;/p&gt;
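As a sketch of how simple that data model can be, here's roughly how room membership and user profiles map onto key-value operations. The key names and the in-memory `Map` stand-in are illustrative assumptions, not the actual azure-mud schema; a real deployment would issue the equivalent SADD/SREM/SMEMBERS/SET/GET commands through a Redis client.

```typescript
// Hypothetical data model: one set per room holding user IDs, plus one key
// per user for their serialized profile. A Map stands in for Redis here so
// the shape of the logic is visible without a running server.
class RoomStore {
  private sets = new Map<string, Set<string>>();
  private profiles = new Map<string, string>();

  moveUser(userId: string, from: string | null, to: string): void {
    if (from) this.sets.get(`room:${from}`)?.delete(userId); // SREM room:<from>
    if (!this.sets.has(`room:${to}`)) this.sets.set(`room:${to}`, new Set());
    this.sets.get(`room:${to}`)!.add(userId);                // SADD room:<to>
  }

  usersInRoom(roomId: string): string[] {
    return [...(this.sets.get(`room:${roomId}`) ?? [])];     // SMEMBERS
  }

  saveProfile(userId: string, profile: object): void {
    this.profiles.set(`user:${userId}`, JSON.stringify(profile)); // SET
  }

  loadProfile(userId: string): object | null {
    const raw = this.profiles.get(`user:${userId}`);              // GET
    return raw ? JSON.parse(raw) : null;
  }
}
```

Since every operation is a single-key read or write, almost any key-value store or database would serve equally well, which is exactly the point made above.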

&lt;h3&gt;
  
  
  So... did it work?
&lt;/h3&gt;

&lt;p&gt;We were easily able to support hundreds of concurrent users in the same space during our live event, and had absolutely zero issues with server performance or load. &lt;a href="https://docs.microsoft.com/azure/azure-functions?WT.mc_id=spatial-10257-emwalker" rel="noopener noreferrer"&gt;Azure Functions&lt;/a&gt; can scale more or less infinitely, &lt;a href="https://docs.microsoft.com/azure/azure-signalr/signalr-overview?WT.mc_id=spatial-10257-emwalker" rel="noopener noreferrer"&gt;Azure SignalR Service&lt;/a&gt; gave us a clear path to upgrade to support more concurrent users if we needed to (up to 100,000 concurrents), and our &lt;a href="https://docs.microsoft.com/en-us/azure/azure-cache-for-redis/cache-overview?WT.mc_id=spatial-10257-emwalker" rel="noopener noreferrer"&gt;Redis&lt;/a&gt; instance never went above a few hundred kilobytes of storage or above 1% of our available processing power, even using the cheapest instance Azure offers.&lt;/p&gt;

&lt;p&gt;Most importantly, I didn't need to think about scale. The space cost about $2.50 per day to run (for up to 1,000 concurrent users), which might have been prohibitively expensive for a long-lasting space run as a non-profit community event, but was absolutely fine for a time-bounded two-day installation (and I'm already working on ways to bring that cost down).&lt;/p&gt;

&lt;p&gt;I've used this general architecture before for a &lt;a href="https://dev.to/azure/making-a-weird-gif-wall-using-azure-functions-and-signalr-2gmm"&gt;previous art installation&lt;/a&gt;, but seeing it work so flawlessly with a much larger conference gave me confidence it would have scaled up to even 10x as many attendees without any trouble.&lt;/p&gt;

&lt;h2&gt;
  
  
  Design your way around hard problems instead of solving them
&lt;/h2&gt;

&lt;p&gt;In general, I'm really optimistic about this sort of serverless workflow as a way of building real-time games quickly. When working on experimental experiences such as Roguelike Celebration's space, I think it's essential to be able to spend your time focusing on hard questions surrounding what the most interesting experience to build is, rather than having to spend your limited engineering resources focused on hard scaling problems. &lt;/p&gt;

&lt;p&gt;Scaling traditional real-time netcode is an incredibly difficult problem, even if it's a relatively solved one. Our approach let us functionally sidestep a whole bunch of those difficult problems and focus on building a truly unique and magical virtual event, which absolutely resulted in a better experience for attendees than if we'd invested our time manually scaling.&lt;/p&gt;

&lt;p&gt;Whether you're literally trying to figure out how to scale your magical online experience, or you're working on some other interesting experiment, I'd recommend taking the same approach as us: sidestepping difficult problems with outside-the-box design can let you focus your attention on more mission-critical design questions.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;If you're interested in learning more about the Roguelike Celebration space, you may want to check out the aforementioned &lt;a href="https://dev.to/lazerwalker/using-game-design-to-make-virtual-events-more-social-24o"&gt;design blog post&lt;/a&gt; or the &lt;a href="https://github.com/lazerwalker/azure-mud" rel="noopener noreferrer"&gt;code on GitHub&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Using Game Design to Make Virtual Events More Social</title>
      <dc:creator>Em Lazer-Walker</dc:creator>
      <pubDate>Thu, 22 Oct 2020 15:31:26 +0000</pubDate>
      <link>https://forem.com/lazerwalker/using-game-design-to-make-virtual-events-more-social-24o</link>
      <guid>https://forem.com/lazerwalker/using-game-design-to-make-virtual-events-more-social-24o</guid>
      <description>&lt;p&gt;&lt;em&gt;This is part of a series of posts/etc about Roguelike Celebration 2020! If you like this, you may also like my post about the &lt;a href="https://dev.to/lazerwalker/scaling-an-online-virtual-world-with-serverless-tech-4pfo"&gt;technical architecture of our social space&lt;/a&gt;, our &lt;a href="https://dev.to/lazerwalker/running-a-virtual-conference-roguelike-celebration-s-av-setup-44hk"&gt;streaming AV setup&lt;/a&gt;, or the &lt;a href="https://github.com/lazerwalker/azure-mud" rel="noopener noreferrer"&gt;open source codebase&lt;/a&gt; for the social space.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;A few months ago, I had a conundrum: I couldn't stand virtual conferences.&lt;/p&gt;

&lt;p&gt;I personally go to in-person conferences to talk to people: to catch up with friends, to make new friends, to have intellectually stimulating conversations. I'd personally rather watch talk videos at home on my own time than spend my limited time at a synchronous event watching them. &lt;/p&gt;

&lt;p&gt;Current virtual events are almost entirely talks! We know how to record and broadcast talks over the Internet really well; we don't know how to replicate the social side of things, or the quote-unquote "hallway track".&lt;/p&gt;

&lt;p&gt;At first, this wasn't a huge problem for me. I mostly just avoided virtual events except for the occasional speaking engagement, and started doing some experimentation on the side about designing new types of online social spaces to foster the sort of small-group conversation I was missing.&lt;/p&gt;

&lt;p&gt;But then it became time to organize this year's &lt;a href="https://roguelike.club" rel="noopener noreferrer"&gt;Roguelike Celebration&lt;/a&gt;, a game design conference I've helped run for the past four years, and the conundrum revealed itself. If I was going to spend my time and effort bringing an online event into existence, I wanted it to be one I actually wanted to attend!&lt;/p&gt;

&lt;p&gt;I pitched the team on something radical: instead of using Zoom and Discord, what if we built our own event platform and social space, built from the ground up to foster the sorts of intimate social interaction that made the in-person event special?&lt;/p&gt;

&lt;p&gt;Roguelike Celebration ended up becoming a test-bed for a text-based social space and online game that served as the digital venue for our 2020 event, adopting design techniques taken from online games and virtual worlds to encourage meaningful interaction and conversation between attendees.&lt;/p&gt;

&lt;p&gt;This article is going to walk through the underlying design decisions that led to what we built, as well as talk a bit about the space itself.&lt;/p&gt;

&lt;h2&gt;
  
  
  So what was the space?
&lt;/h2&gt;

&lt;p&gt;To contextualize everything I'm about to say, let me explain the space itself.&lt;/p&gt;

&lt;p&gt;As mentioned, Roguelike Celebration took place in a custom browser-based, text-based social space. Most of the UI and UX design was based on modern chat apps like Discord and Slack, but structurally it more closely resembled MUDs, the text-based precursors to modern online MMOs.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fbgszj6z9mxueul7ctc1b.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fbgszj6z9mxueul7ctc1b.png" alt="The registration desk"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Each attendee starts by creating a profile that contains not only their name, pronouns, etc, but also a text description of what their avatar looks like, visible to all other attendees. They are then dropped into a chat room with a virtual "registration desk", from which they can navigate to the other rooms in the virtual conference space we had built.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F9sp4tgvf2i363kds0m2k.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F9sp4tgvf2i363kds0m2k.png" alt="Map of the space"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;"Rooms" in this space are more like rooms in a MUD or online game than in Discord or Slack. Each room has a text description of what it contains, with hyperlinks to navigate to adjacent rooms, as well as fun novel things attendees could interact with. &lt;/p&gt;

&lt;p&gt;We tried to strike a balance between being a "normal" event venue and being playful: there were locations like a quiet lounge and an exhibition hall showing a curated selection of games, but there was also a dance floor with a DJ set your avatar could dance to, the bar was serving polymorph potions instead of alcohol, and the foyer happened to be haunted.&lt;/p&gt;

&lt;p&gt;As a particularly important example, the "theater" contained our talk livestream embedded right in the page. So when talks were starting, attendees would all move to the theater just like they would at an in-person conference.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fizfgjaeza8pmxyndjv9l.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fizfgjaeza8pmxyndjv9l.jpeg" alt="Streaming talks in the theater"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Each virtual room was also its own chat room. Like in other online virtual worlds, you can only take part in conversations in the room you're physically in. If you want to talk to other people, you have to move to another room.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fd5atqqujv12bdfdrcv81.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fd5atqqujv12bdfdrcv81.jpeg" alt="Swag table"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;On top of the theater broadcasting our live talk videos, most individual rooms had their own exciting special activities going on. The kitchen had a vending machine that would produce randomly-generated food items you could pick up and carry around with you, while the dance floor had embedded chiptune DJ sets you could make your character dance to. Fun easter eggs and things to explore or pick up or interact with filled every room of the space.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fuivsnp7nonyxxc46i3o1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fuivsnp7nonyxxc46i3o1.png" alt="Octopode"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Entirely text-based?!
&lt;/h3&gt;

&lt;p&gt;The fact that the space was text-based was largely done for logistical, rather than aesthetic, reasons. &lt;/p&gt;

&lt;p&gt;The hardest problem we were solving was figuring out what sort of tone and what level of game-like interactions would help foster the social dynamics we were aiming for. Using text meant we could rapidly iterate on content and systems, rather than getting caught up in the additional complexity of building a 2D or 3D rendering system or having higher asset production costs. &lt;/p&gt;

&lt;p&gt;Even using ASCII graphics (a la classic roguelikes) would have added in a lot of complex design problems to solve that we were able to sidestep with text descriptions. &lt;/p&gt;

&lt;p&gt;That said, it wouldn't surprise me if being text-based would still be the correct design choice even given more design resources. Text descriptions can be a lot more evocative than representational graphics, and having something look a bit less like a traditional videogame helps make the space feel appropriate for a professional conference.&lt;/p&gt;

&lt;p&gt;Overall, the response was overwhelmingly positive. Many, many attendees remarked on how Roguelike Celebration felt the most like physically "attending" an event to them since quarantine started.&lt;/p&gt;

&lt;h2&gt;
  
  
  How did we get here?
&lt;/h2&gt;

&lt;p&gt;My chief design goal was to create a social space where people could have small group conversations: say, a conversation with 2-10 people where maybe you know some of the people, but it's just as likely you won't know anyone. &lt;/p&gt;

&lt;p&gt;Given that overarching goal, I quickly settled on a few key design tentpoles:&lt;/p&gt;

&lt;h3&gt;
  
  
  A novel space is inherently valuable
&lt;/h3&gt;

&lt;p&gt;In games, we talk about the idea of the "magic circle", a boundary that clearly delineates the space where a game or play takes place as distinct from the normal world. Activities within the magic circle being distinct from normal reality gives people freedom to express themselves more freely and to, well, play (within reason and established safety limits, of course). &lt;/p&gt;

&lt;p&gt;A similar thing happens with in-person conferences. The event venue (especially if in a destination location!) serves as a freeing liminal space that helps attendees be present and engaged. Whether you're at a conference to learn from the talks, to meet new people, or frankly to just enjoy a free employer-paid vacation, the act of being in a different physical place does a lot to get you in a mindset where you're ready to embrace new experiences.&lt;/p&gt;

&lt;p&gt;This is difficult for online events in the time of quarantine! If you're like me, you're largely spending 40 hours a week sitting at home, using Slack and Zoom or other similar text and videoconferencing software. Asking people to spend their weekends at those same computers, attending a Zoom conference with a Discord or Slack on the side, doesn't accomplish that goal!&lt;/p&gt;

&lt;p&gt;We realized that, even if our custom social space was otherwise a complete and utter failure, the mere act of having it be a new and novel space would still be valuable.&lt;/p&gt;

&lt;h3&gt;
  
  
  Allow small-group conversations at a technical level
&lt;/h3&gt;

&lt;p&gt;Having a conference Discord or Slack means having a few hundred people in the same dozen text channels. This setup affords two different modes of interaction: people can talk in those large public channels with a few hundred participants, or they can slide into other people's DMs for 1:1 chats. &lt;/p&gt;

&lt;p&gt;Neither of these are particularly great for enabling intimate group conversations with strangers! In particular, we talked to a large number of potential attendees who expressed extreme discomfort and anxiety about trying to have any sort of conversation in those large public hundred-person chat channels.&lt;/p&gt;

&lt;p&gt;Conversely, VR social spaces such as AltspaceVR or Mozilla Hubs do a great job of enabling the sort of fluid small-group conversations you get naturally in-person. Physical presence, spatial audio, and body language cues from head-tracking and hand controllers mean that you can naturally split off from a group conversation to start a smaller conversation, and then effortlessly rejoin the larger conversation whenever you want, similar to how you would in a physical setting. &lt;/p&gt;

&lt;p&gt;But I've regrettably found VR social spaces to be completely inaccessible to people who aren't VR enthusiasts, even when using software like AltspaceVR or Hubs that technically support non-VR desktop and mobile devices.&lt;/p&gt;

&lt;p&gt;We knew we needed to find a technical model for chat that, while not in VR, was closer to what VR offers than to Slack.&lt;/p&gt;

&lt;p&gt;Borrowing the spatial chat model from MUDs gave us the property we wanted where you could be in a room with a small group of people having an intimate conversation!&lt;/p&gt;

&lt;h3&gt;
  
  
  Playful design adds spontaneity
&lt;/h3&gt;

&lt;p&gt;Even if we create a space where people &lt;em&gt;can&lt;/em&gt; talk to each other in small groups, that doesn't mean they will. Striking up a cold conversation with a stranger is hard and scary!&lt;/p&gt;

&lt;p&gt;At in-person events, there are a number of easy hooks that make it socially acceptable to initiate small talk. You might strike up a conversation with the person sitting next to you in-between talks about the talk you've just seen. You might comment on a sticker on someone's laptop, or the logo on their t-shirt. You can walk up to a sponsor booth and know that someone sitting there will be thrilled to chat about their company.&lt;/p&gt;

&lt;p&gt;We don't have any of those affordances by default in online spaces!&lt;/p&gt;

&lt;p&gt;My hypothesis for Roguelike Celebration was that we could fill that void by integrating game mechanics and playful elements borrowed from online games. &lt;/p&gt;

&lt;p&gt;In particular, I was inspired by &lt;a href="https://lostgarden.home.blog" rel="noopener noreferrer"&gt;Dan Cook&lt;/a&gt;'s work at Spry Fox around how to design MMOs to encourage the formation of meaningful friendships. His design work and writing are primarily concerned with helping people form deep friendships over the course of months or years, rather than our goal of getting people to be mildly friendly over two days, but a lot of the core concepts he talks about in his &lt;a href="https://www.youtube.com/watch?v=voz6S7ryWC0" rel="noopener noreferrer"&gt;GDC talks&lt;/a&gt; were still directly applicable.&lt;/p&gt;

&lt;p&gt;A particular piece of social science he explores is the idea that friendships are formed through &lt;strong&gt;repeated spontaneous interactions over time.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;This model reinforces some design decisions I've already explained: if you want spontaneous interactions, that seemingly requires a more spatial chat model than a giant Discord server where everybody is always in the same chat rooms at the same time. &lt;/p&gt;

&lt;p&gt;From there, adding game-like and playful activities to the space can encourage these moments of spontaneous interaction to happen more frequently. &lt;/p&gt;

&lt;p&gt;To ground this in a concrete example, the space had a bar area where attendees could drink a polymorph potion (designed/implemented by &lt;a href="https://twitter.com/ampepers" rel="noopener noreferrer"&gt;Alexei&lt;/a&gt; and &lt;a href="https://twitter.com/kawaidragoness" rel="noopener noreferrer"&gt;Kawa&lt;/a&gt;) that would add a random emoji to the front of their name. Let's look at all of the ways this simple playful element provided social value to attendees:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;People who needed a break from conversation could wander off to the bar and drink a few more polymorph potions as a sort of fidget activity&lt;/li&gt;
&lt;li&gt;Because this was an exciting special thing that was only available in the bar, this encouraged people to move in and out of the bar regularly, letting the bar serve as a sort of conversational nexus where you could bump into someone you knew (or didn't know but had seen in other rooms)&lt;/li&gt;
&lt;li&gt;Seeing other attendees with cool emojis encouraged people to explore the space to find other secrets like that, helping attendees circulate to different rooms and increasing the chance of spontaneous interactions&lt;/li&gt;
&lt;li&gt;When someone couldn't figure out how to add an emoji to their name, they'd ask people who already had emoji, giving people an excuse to be helpful and get to know each other in the process&lt;/li&gt;
&lt;li&gt;Being able to have some control over your emoji — you could drink as many potions as you wanted with no ill effect, so you could keep chugging until you got an emoji you particularly liked — served as a form of player expression. This is both extremely satisfying as an attendee and serves as a great conversation opener for other attendees.&lt;/li&gt;
&lt;li&gt;Running into someone with the same emoji as you was a particularly potent way to start a conversation and instantly feel affinity towards a stranger.&lt;/li&gt;
&lt;/ul&gt;
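To illustrate how little code a playful element like this needs, here's a hypothetical sketch of the potion mechanic (the actual implementation by Alexei and Kawa may look quite different — the emoji list and function name here are made up for illustration):

```typescript
// Hypothetical polymorph potion: drinking swaps in a random emoji prefix,
// and re-drinking replaces rather than stacks it, so attendees can keep
// chugging until they land on an emoji they like.
const POTION_EMOJI = ["🐉", "🦎", "🐸", "🦀", "🦉", "🐙"];

function drinkPolymorphPotion(displayName: string): string {
  // Strip any existing potion emoji so repeated drinks reroll, not stack.
  const base = POTION_EMOJI.reduce(
    (name, e) => (name.startsWith(e + " ") ? name.slice(e.length + 1) : name),
    displayName
  );
  const emoji = POTION_EMOJI[Math.floor(Math.random() * POTION_EMOJI.length)];
  return `${emoji} ${base}`;
}
```

The "replace, don't stack" rule is doing the social design work: it's what turns the potion into a safe fidget activity and a form of player expression rather than a one-shot gag.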

&lt;p&gt;Most of our playful elements like this were fairly simple due to time constraints. I'd love to explore more complex and involved systems for future events, but you can see here how even the simplest game-like interactions can massively impact the sociability of the space.&lt;/p&gt;

&lt;h2&gt;
  
  
  Video chat is valuable in moderation
&lt;/h2&gt;

&lt;p&gt;Video chat is far more effective than audio chat at conveying emotional nuance, and audio chat is in turn more effective than text chat. This is important if our goal is to foster new friendships and interesting connections! &lt;/p&gt;

&lt;p&gt;But at this point in quarantine, we're all well aware of Zoom Fatigue. It's clear that running an entire conference on video or audio chat is a great way to burn everyone out.&lt;/p&gt;

&lt;p&gt;As we spoke with potential attendees, we realized there are broadly two types of online communicators: those who are happier communicating online in text, and those who are happier using videochat to the extent that they have the emotional energy. Finding a way to make both groups of people feel comfortable and socially stimulated felt like a valuable goal.&lt;/p&gt;

&lt;p&gt;This led us to aim for an event where communication over text chat was the default, but there were many opportunities for attendees to consensually opt-in to escalate into audio or video chat. Being primarily based in text chat lets attendees save up their emotional energy for focused higher-quality moments of video conversation, and grounding those moments in opt-in activities means that each individual attendee can self-moderate how much video chat they can handle.&lt;/p&gt;

&lt;p&gt;Our plan for video chat in the space was twofold: we planned to schedule discrete blocks of time for video chat-based networking sessions, but also offer lower-key videoconferencing in every room that people in that room could join at any time.&lt;/p&gt;

&lt;p&gt;For the former, we held unconferencing sessions where attendees could propose and upvote discussion topics, which were then assigned to specific rooms. We also intended to schedule breakout room-style networking sessions (where you would be randomly moved into a videochat with 3-5 random people for 10 minutes), but couldn't find room in our final conference schedule. These are the two structures I've seen work particularly well for structured video chat to avoid the anarchy and fatigue of unstructured 30-person Zoom calls.&lt;/p&gt;

&lt;h2&gt;
  
  
  Putting it all together
&lt;/h2&gt;

&lt;p&gt;Combining all of these design ideas, we ended up with a space that, by all accounts, was fairly successful at achieving its goals. &lt;/p&gt;

&lt;p&gt;As mentioned, I was blown away by how many of the remarks about the space talked about a sense of physical presence that mirrors the way people talk about VR — this felt to people like they were physically "attending" our conference in ways they hadn't felt before with virtual events. &lt;/p&gt;

&lt;p&gt;One moment that sticks with me was when an attendee set up shop at a specific table in the kitchen and did tarot readings for anyone who brought her an offering (of any object taken from elsewhere in the space). To me, this level of an attendee buying into the fantasy and aesthetic of our space, and contributing their own playfulness to the mix, is a ringing endorsement for our game-like and playful approach.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fx3g817glnprgzx6qzllz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fx3g817glnprgzx6qzllz.png" alt="Tweet advertising tarot readings"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;One pain point we did run into was just not having enough ways to test what we were doing. We were able to run a "preview event" that served as a test of the space (both from a design and from a technical load-test standpoint), and that proved essential in shaping the design of the space for its final iteration. But for the most part, it's incredibly difficult to playtest or validate your designs ahead of time when doing so requires dozens to hundreds of people. &lt;/p&gt;

&lt;p&gt;There are a ton of things I'm eager to change for our next iteration, as well as new hypotheses I have for how we can be more effective at encouraging conversations. But it's also still unbelievable to me that we accomplished as much as we did with about three months of development time (mostly me, plus contributions from a handful of amazing conference organizers and volunteers) and only that one public playtest session.&lt;/p&gt;

&lt;h2&gt;
  
  
  Call for Collaboration
&lt;/h2&gt;

&lt;p&gt;This social space was essentially an experiment intended to test my hypothesis that borrowing elements from online games — physicality, playful interactions, etc — would create a space where people could have the sorts of small person-to-person interactions I've been missing from online events. &lt;/p&gt;

&lt;p&gt;I think it was about as successful at that as I could reasonably hope, but it was also built for an audience perfectly suited to what I built. As a conference of mostly game designers, attendees were broadly familiar with the sorts of interface paradigms and game mechanics they were being presented with, and they're already used to the idea that a professional conference can be playful and silly and fun. &lt;/p&gt;

&lt;p&gt;I think all the work I'm doing is interesting and broadly applicable to other communities, but figuring out how to make it accessible to wider audiences is a complicated problem!&lt;/p&gt;

&lt;p&gt;All of the code for our social space is &lt;a href="https://github.com/lazerwalker/azure-mud" rel="noopener noreferrer"&gt;open-source&lt;/a&gt;, and anyone could technically build their own space based on my work without my input. But this design space is so nascent and so experimental that I think it would be hard to use this codebase without more context around the social design decisions we made. Which is to say, I suspect the most successful second deployment of this tech would be one that I continue to be involved in.&lt;/p&gt;

&lt;p&gt;I'm excited to work with conference organizers to figure out what that would mean! If you have an event that you think could work well for something like this, shoot me an &lt;a href="mailto:socialspace@lazerwalker.com"&gt;email&lt;/a&gt; or &lt;a href="https://twitter.com/lazerwalker" rel="noopener noreferrer"&gt;Twitter DM&lt;/a&gt; and let's chat about how the work I'm doing can benefit your community!&lt;/p&gt;

&lt;h2&gt;
  
  
  This isn't just about this space
&lt;/h2&gt;

&lt;p&gt;Maybe you don't run synchronous online events. Maybe you do, but you think a text-based tool like this isn't right for your community. More than talking about how great the specific thing I built is, I want to hammer home the idea that designers of online games and virtual worlds have been thinking about and solving these social design problems for literal decades. &lt;/p&gt;

&lt;p&gt;The most effective way to make online events more engaging is going to be looking towards game design and virtual world design to learn what makes those spaces tick. &lt;/p&gt;

&lt;p&gt;This doesn't (necessarily) mean making actual games. Our space more resembles Discord or Slack from a UI/UX perspective than a historical MUD. &lt;/p&gt;

&lt;p&gt;It also doesn't mean building more of the same online event platforms we already have, but throwing in some 2D pixel art or traditional 'gamification' markers (leaderboards, badges, etc) or other surface-level signifiers.&lt;/p&gt;

&lt;p&gt;What we actually need to take from game design is the understanding of how to use play and playful design to create environments whose architecture encourages and rewards positive social interactions through psychologically satisfying systems. This isn't by any means easy, but I hope the work I've done can show that it's doable!&lt;/p&gt;

</description>
      <category>games</category>
      <category>conferences</category>
      <category>mud</category>
      <category>events</category>
    </item>
    <item>
      <title>Running A Virtual Conference: Roguelike Celebration’s AV Setup
</title>
      <dc:creator>Em Lazer-Walker</dc:creator>
      <pubDate>Tue, 13 Oct 2020 15:45:45 +0000</pubDate>
      <link>https://forem.com/lazerwalker/running-a-virtual-conference-roguelike-celebration-s-av-setup-44hk</link>
      <guid>https://forem.com/lazerwalker/running-a-virtual-conference-roguelike-celebration-s-av-setup-44hk</guid>
      <description>&lt;p&gt;The &lt;a href="https://roguelike.club"&gt;Roguelike Celebration&lt;/a&gt; conference has been running for five years, but two weeks ago marks our first foray into an online-only event!&lt;/p&gt;

&lt;p&gt;The most notable thing about this iteration of the event was the &lt;a href="https://dev.to/lazerwalker/using-game-design-to-make-virtual-events-more-social-24o"&gt;custom MMO-like social space&lt;/a&gt; that hosted the event. You can read more about that &lt;a href="https://dev.to/lazerwalker/using-game-design-to-make-virtual-events-more-social-24o"&gt;here&lt;/a&gt;, but today I wanted to talk about something a bit more universally applicable to any online event: the nuts-and-bolts of how we ran our AV setup.&lt;/p&gt;

&lt;p&gt;We were a two-day single-track conference, with talks streamed to both Twitch and YouTube (more on that later), and the YouTube stream embedded directly within our custom event platform software. &lt;/p&gt;

&lt;p&gt;This is a technical post for people who will directly be handling AV needs for their own virtual events. I walk through both the technologies we used and the structural/philosophical choices we made about how to work with our speakers.&lt;/p&gt;

&lt;p&gt;To be clear, I was not the person actually operating the stream during the event itself: that honor goes to &lt;a href="https://twitter.com/kawaiidragoness"&gt;Kawa&lt;/a&gt; and &lt;a href="https://twitter.com/MuffiTuffi"&gt;Travis&lt;/a&gt; on the AV logistics side of things, and &lt;a href="https://twitter.com/"&gt;Alexei&lt;/a&gt; and &lt;a href="https://twitter.com/swartzcr"&gt;Noah&lt;/a&gt; as emcees/hosts. That said, many of these high-level tooling/process decisions were mine, and the knowledge I'm sharing about our experience comes from both my personal observations and from speaking with the people who were actively running the stream.&lt;/p&gt;

&lt;h1&gt;
  
  
  StreamYard
&lt;/h1&gt;

&lt;p&gt;We used &lt;a href="https://streamyard.com"&gt;StreamYard&lt;/a&gt; as our streaming studio. We liked the idea of having a solution that wasn’t reliant on the host’s home Internet connection, and that was easier for both the host and for speakers to deal with than messing around with videoconferencing software.&lt;/p&gt;

&lt;p&gt;Having a browser-based tool meant that no single conference organizer had to run OBS and an NDI-capable videochat client on their own machine, and that it was easier for us to swap out which organizer was on duty for technical setup during the event itself.&lt;/p&gt;

&lt;p&gt;StreamYard was great. It was easy for us to use, easy for speakers to connect to, and it effortlessly let us stream to both Twitch and YouTube simultaneously.&lt;/p&gt;

&lt;p&gt;Throughout our event, we had at least two organizers in StreamYard at all times. We intentionally split out the roles of "emcee" and "technical AV person" into two separate people, largely so that in case of technical issues the emcee could continue to stall and keep the audience occupied (tech permitting!) while the other person fixed the issues. I suspect StreamYard is easy enough to use that you could likely get away with only one person, but this worked really well for us. &lt;/p&gt;

&lt;p&gt;We did have a few minor issues with StreamYard:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;While we were able to integrate our own custom overlays, we didn't have as much control over our display as we would have using something like OBS. It's possible that some of the things we thought we couldn't do (e.g. custom fonts or adding arbitrary text labels) are things that StreamYard is capable of doing, but the documentation wasn't great — we often found ourselves watching YouTube videos from the community when StreamYard's official documentation wasn't helpful.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Related to the previous issue: even though we had a human captioner providing live captions, we were unable to embed those directly as closed captions within the stream as we would have been able to with OBS. In our specific case, this was mostly okay: most attendees were watching via our custom social space, where we directly embedded the captions below the stream, and our Twitch and YouTube channels directed people to a website where they could view the captions. &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Prerecorded video playback was still reliant on a home internet connection. StreamYard doesn’t let you upload video files and directly play them, so playing a pre-recorded video meant the emcee opening up a Chrome tab with the video file and screen-sharing that tab.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;For times we played prerecorded videos, the volume was frequently lower than live speakers, and we didn't have a way to dynamically adjust this during the event. In the future, we'd likely take the time to normalize all prerecorded videos before streaming, but it's frustrating StreamYard doesn't appear to have any real-time audio mixing tools.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;StreamYard's highest paid tier allows you to capture recordings of up to 8 hours long. Our conference had roughly 8 hours of video each day. To avoid unintentionally cutting off our recordings if our schedule went long, we split each day up into two separate StreamYard 'studio' recording instances. Switching over added some minor logistics hassle, and also caused some issues where the YouTube embed widget we were using wasn't capable of automatically switching from the morning to the afternoon video feeds, meaning attendees would occasionally need to refresh their browsers to switch over.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;StreamYard is a bit finicky when it comes to playing audio over screen-sharing, particularly on macOS. This wasn't an issue for our specific speaker pool, and is most likely a technical limitation for any browser-based technology, but is worth noting if you're running a particularly multimedia-heavy event.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;There are other browser-based streaming services such as &lt;a href="https://restream.io"&gt;Restream.io&lt;/a&gt; and &lt;a href="https://stageten.tv"&gt;Stage Ten&lt;/a&gt;, but we didn't really spend time looking into them. I'd used StreamYard before as a speaker, it was easy to use and affordable, so we went with it. &lt;/p&gt;

&lt;p&gt;It's possible one of these alternative services would have given us all of the features and ease-of-use that StreamYard did, but without some of the hassles we encountered. I'm not sure. I don't want to speak for my fellow organizers, but the next time I host an online event, I'll likely investigate whether that's the case. &lt;/p&gt;

&lt;p&gt;That said, I could also see myself just as easily using StreamYard again, and would heartily recommend it except for those minor caveats.&lt;/p&gt;

&lt;h1&gt;
  
  
  Streaming: Twitch vs YouTube
&lt;/h1&gt;

&lt;p&gt;We’re a conference about games and game development. Streaming to Twitch makes sense to us, since that’s where our audience is.&lt;/p&gt;

&lt;p&gt;However, Twitch doesn’t offer dynamic bitrate re-encoding of your streams unless you’re a Twitch Partner. This means that viewers on bad Internet connections can’t choose to load your stream at a low bitrate so they can actually watch it. This is exceedingly bad for accessibility!&lt;/p&gt;

&lt;p&gt;YouTube does offer this! Our compromise was to stream to both Twitch and YouTube. Our custom social space embedded the YouTube stream, so everyone within the space was watching on YouTube. Even given that, our stream view counts were roughly equal across Twitch and YouTube.&lt;/p&gt;

&lt;p&gt;One other thing to note is that we actively wanted to disable chat on both our YouTube and Twitch streams. Our approach was to have all text chat take place in our custom social space, where attendance was limited to people who had chosen to acquire a ticket (whether free or paid) and had agreed to our code of conduct, and where we had active moderation efforts. &lt;/p&gt;

&lt;p&gt;YouTube easily lets you disable live chat on streams. Twitch does not. &lt;/p&gt;

&lt;p&gt;On Twitch, we were able to set chat settings so that the only people who could chat were people who had followed our account for more than 3 months. They could only post one message every five minutes, and it could only be emoji. This was functionally fine in practice, but it was frustrating that Twitch wouldn’t let us just completely turn chat off, and it meant we did have to keep an eye on it.&lt;/p&gt;

&lt;p&gt;Which is to say: if you don’t have a solid reason to stream to Twitch (e.g. you’re a games conference), stream to YouTube instead of Twitch.&lt;/p&gt;

&lt;h1&gt;
  
  
  Speaker AV Tests
&lt;/h1&gt;

&lt;p&gt;This may be obvious, but it's worth calling out: we scheduled 5-10 minutes for each speaker to pop into StreamYard a week or so before the event to test out their AV setup and get used to the environment. &lt;/p&gt;

&lt;p&gt;In our case, this was helpful to confirm that each speaker's audio + video situation was sufficient (we had a very small budget to buy speaker equipment when necessary). At a previous online event I spoke at, these AV tests going poorly are what led to the organizers switching from an OBS-based setup to StreamYard. &lt;/p&gt;

&lt;p&gt;StreamYard made this particularly easy. We could give all speakers a link to join the StreamYard instance ahead of time. Since anyone who joins the stream session is put into a 'green room' by default (where they can chat but aren't on-stream) that also made it easy to manage back-to-back AV tests if we were running late.&lt;/p&gt;

&lt;p&gt;Speaker AV tests are fairly easy to schedule and make happen, but are essential to making sure things go smoothly during the event itself!&lt;/p&gt;

&lt;h1&gt;
  
  
  Prerecording
&lt;/h1&gt;

&lt;p&gt;We asked — but did not require — all speakers to send us recorded videos ahead of time, to use as a backup in case of technical failure. Most speakers did, which I’m extremely grateful for. I’m also grateful we didn’t need to unexpectedly fall back to a backup.&lt;/p&gt;

&lt;p&gt;We also made it clear to speakers that, while our default assumption was that speakers would present live, choosing to air a prerecorded video was perfectly fine.&lt;/p&gt;

&lt;p&gt;From what I’ve seen as a speaker and conference organizer, experienced public speakers tend to be split pretty evenly about whether they’d prefer to perform live or provide a recorded talk. Some thrive on the adrenaline of knowing there’s a live audience, while others appreciate being able to take the time to record a perfect take or edit after the fact. We wanted both of these groups to do what would make them most comfortable and result in the best possible talks.&lt;/p&gt;

&lt;p&gt;At the same time, an aspect of Roguelike Celebration that I really appreciate is that many of our accepted talks tend to come from first-time public speakers. There are certainly exceptions, but in general I’ve found that many inexperienced speakers give better performances live than pre-recorded. This isn’t a knock against anyone in that situation; maintaining high energy levels when you know you’re not speaking to anyone is a skill that often needs to be consciously learned.&lt;/p&gt;

&lt;p&gt;Framing live talks as the default of two equally valid options let us nudge our new speakers in the direction that best set them up for success, while still allowing everyone to make their own personal choice as to what would let them give their best performance.&lt;/p&gt;

&lt;p&gt;I think this strategy worked out well for us. Worth noting that, even for speakers who opted to use a prerecorded video, we asked (but did not require) that they show up live after their talk for moderated live Q&amp;amp;A.&lt;/p&gt;

&lt;h1&gt;
  
  
  Don't Require Speakers to be On-Camera
&lt;/h1&gt;

&lt;p&gt;We had a few speakers who did not want to show their faces. At least one speaker used a Snap Camera filter, and another speaker performed as a VTuber with a puppeteered 2D avatar. &lt;/p&gt;

&lt;p&gt;These required more intensive AV setups, but the onus to get that working was generally on the speakers rather than us. Our job was primarily to be supportive: our goal as organizers is to enable speakers to give the best talks they can, and making sure they're comfortable is an important part of that. &lt;/p&gt;

&lt;p&gt;If a speaker is uncomfortable showing their face, I would probably nudge them towards a solution that conveys some sense of body language over disabling their video feed entirely — even the Snap Camera solution did a great job of conveying emotion in a way that just audio wouldn't have — but again, speaker comfort needs to come first.&lt;/p&gt;

&lt;p&gt;From a technical standpoint, it's worth noting that StreamYard didn't have any issue with streaming video from virtual camera sources (e.g. Snap Camera or an OBS scene exposed as a camera).&lt;/p&gt;

&lt;h1&gt;
  
  
  And that was the video portion of our conference!
&lt;/h1&gt;

&lt;p&gt;All in all, the livestreaming portion of our event went remarkably smoothly. At times, we were even &lt;em&gt;ahead&lt;/em&gt; of schedule, which I think speaks to how effortless our setup was. &lt;/p&gt;

&lt;p&gt;Hopefully this might be useful to you if you're planning an online event and need to figure out how to handle the technical streaming aspect of your talks!&lt;/p&gt;

</description>
      <category>streaming</category>
      <category>events</category>
      <category>conferences</category>
      <category>twitch</category>
    </item>
    <item>
      <title>A rock-paper-scissors app with gesture detection and voice</title>
      <dc:creator>Em Lazer-Walker</dc:creator>
      <pubDate>Thu, 09 Jul 2020 13:16:06 +0000</pubDate>
      <link>https://forem.com/azure/a-rock-paper-scissors-app-with-gesture-detection-and-voice-3471</link>
      <guid>https://forem.com/azure/a-rock-paper-scissors-app-with-gesture-detection-and-voice-3471</guid>
      <description>&lt;p&gt;&lt;em&gt;By &lt;a href="https://twitter.com/revodavid" rel="noopener noreferrer"&gt;David Smith&lt;/a&gt; and &lt;a href="https://twitter.com/lazerwalker" rel="noopener noreferrer"&gt;Em Lazer-Walker&lt;/a&gt;, Cloud Advocates at Microsoft&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;In this blog post, we’ll show you how to build a web application that will access your camera and say something whenever you make a specific gesture with your hand. This is a simplified version of the &lt;a href="https://docs.microsoft.com/samples/microsoft/rockpaperscissorslizardspock/azure-rock-paper-scissors/?WT.mc_id=devto-blog-davidsmi" rel="noopener noreferrer"&gt;Rock, Paper, Scissors, Lizard, Spock&lt;/a&gt; application, and you can &lt;a href="https://victorious-coast-06aa4f30f.azurestaticapps.net/" rel="noopener noreferrer"&gt;try out the app here&lt;/a&gt; or deploy it yourself with the instructions below. After you launch the app using a desktop browser, click Start and allow access to your camera, and then make one of the hand gestures from the game created by &lt;a href="http://www.samkass.com/theories/RPSSL.html" rel="noopener noreferrer"&gt;Sam Kass and Karen Bryla&lt;/a&gt;. Make sure your volume is turned up, and when the application sees a valid gesture, it will speak to you as it is recognized.  &lt;/p&gt;

&lt;p&gt;&lt;a href="https://victorious-coast-06aa4f30f.azurestaticapps.net" rel="noopener noreferrer"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fimf5egdzp7s0060f8d4m.png" alt="The app in action"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You can customize and run this application yourself by &lt;a href="https://github.com/lazerwalker/neural-tts-sample" rel="noopener noreferrer"&gt;visiting this GitHub repository&lt;/a&gt; and following the directions shown. All you need is an &lt;a href="https://azure.com/free/?WT.mc_id=devto-blog-emwalker" rel="noopener noreferrer"&gt;Azure subscription&lt;/a&gt;, and it uses free services so it won’t cost you anything to try it out. &lt;/p&gt;

&lt;p&gt;Let’s dive into the various components of the application: &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Speech&lt;/strong&gt;. The speech generated when the application detects a valid gesture is generated on demand with &lt;a href="https://docs.microsoft.com/en-us/azure/cognitive-services/speech-service/text-to-speech?WT.mc_id=devto-blog-emwalker" rel="noopener noreferrer"&gt;Cognitive Services Neural Text to Speech&lt;/a&gt;. Neural TTS can synthesize a humanlike voice in a variety of languages (with &lt;a href="https://aka.ms/NTTS-new-voices-blog" rel="noopener noreferrer"&gt;15 more just added&lt;/a&gt;!) and &lt;a href="https://docs.microsoft.com/azure/cognitive-services/speech-service/speech-synthesis-markup?tabs=csharp&amp;amp;WT.mc_id=devto-blog-emwalker#adjust-speaking-styles" rel="noopener noreferrer"&gt;speaking styles&lt;/a&gt;. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Vision&lt;/strong&gt;. The hand gesture detection is driven by &lt;a href="https://docs.microsoft.com/en-us/azure/cognitive-services/custom-vision-service/home?WT.mc_id=devto-blog-emwalker" rel="noopener noreferrer"&gt;Custom Vision&lt;/a&gt; in Azure Cognitive Services. It’s based on the same vision model used by the &lt;a href="https://docs.microsoft.com/samples/microsoft/rockpaperscissorslizardspock/azure-rock-paper-scissors/?WT.mc_id=devto-blog-davidsmi" rel="noopener noreferrer"&gt;Rock, Paper, Scissors, Lizard, Spock&lt;/a&gt; application, but running locally in the browser. No camera images are sent to the server.  &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Web Application&lt;/strong&gt;. The application is built with &lt;a href="https://docs.microsoft.com/azure/static-web-apps/overview?WT.mc_id=devto-blog-emwalker" rel="noopener noreferrer"&gt;Azure Static Web Apps&lt;/a&gt;, which means you can create your own website with a version of the application in just minutes – and for free! &lt;/p&gt;

&lt;h2&gt;
  
  
  Customizing the application
&lt;/h2&gt;

&lt;p&gt;Because we’ve provided all of the code behind the application, it’s easy to customize and see the differences for yourself. As soon as you check in changes to your forked GitHub repository, Static Web Apps will automatically rebuild and deploy the application with your changes. Here are some things to try, and you can find &lt;a href="https://github.com/revodavid/machine-learning-rps/blob/main/CUSTOMIZATION.md" rel="noopener noreferrer"&gt;detailed instructions in the repository&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Customize the speech&lt;/strong&gt;. All of the speech generated by the application is defined using the SSML standard, which you can customize simply by modifying the text in a JavaScript object.  Here are some things you can try: &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Change the words spoken for each hand signal by modifying the text. &lt;/li&gt;
&lt;li&gt;Try changing the default voice or language by configuring the default. &lt;/li&gt;
&lt;li&gt;Try a different speaking style, like “newscast” or “empathetic” with SSML. &lt;/li&gt;
&lt;/ul&gt;
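&lt;p&gt;As a rough sketch of that idea (the object shape, voice name, and helper function below are illustrative, not the repository's exact code), the per-gesture text can live in a plain JavaScript object alongside default voice and style settings:&lt;/p&gt;

```javascript
// Hypothetical sketch: each hand signal maps to an utterance, plus a
// default neural voice and speaking style used when building the SSML
// that gets sent to the text-to-speech service.
const speechConfig = {
  defaultVoice: "en-US-AriaNeural", // any Neural TTS voice name
  defaultStyle: "newscast",         // or "empathetic", etc.
  utterances: {
    rock: "Rock! A classic opening move.",
    paper: "Paper covers rock.",
    scissors: "Scissors cut paper.",
  },
};

// Look up what to say for a detected gesture, falling back gracefully.
function utteranceFor(gesture) {
  return speechConfig.utterances[gesture] ?? "I don't know that gesture.";
}

console.log(utteranceFor("paper")); // "Paper covers rock."
```

&lt;p&gt;Changing a voice, style, or line of dialogue is then a one-line edit to this object, and Static Web Apps rebuilds and redeploys the site as soon as you push the change.&lt;/p&gt;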

&lt;p&gt;&lt;strong&gt;Customize what’s recognized by the camera&lt;/strong&gt;. The GitHub repository includes only the exported rock-paper-scissors Custom Vision model, but not the source data used to train the model. You can train your own vision model with Custom Vision, &lt;a href="https://docs.microsoft.com/azure/cognitive-services/custom-vision-service/export-your-model?WT.mc_id=devto-blog-emwalker" rel="noopener noreferrer"&gt;export it for TensorFlow.js&lt;/a&gt;, and replace the provided model.  &lt;/p&gt;

&lt;h2&gt;
  
  
  Going Deeper
&lt;/h2&gt;

&lt;p&gt;If you’d like to learn more about the technology used in this app, check out these Microsoft Learn modules on &lt;a href="https://docs.microsoft.com/en-us/learn/modules/publish-app-service-static-web-app-api/?WT.mc_id=devto-blog-emwalker" rel="noopener noreferrer"&gt;Static Web Apps&lt;/a&gt;, &lt;a href="https://docs.microsoft.com/en-us/learn/modules/classify-images-with-custom-vision-service/?WT.mc_id=devto-blog-emwalker" rel="noopener noreferrer"&gt;Custom Vision&lt;/a&gt;, and &lt;a href="https://docs.microsoft.com/en-us/learn/modules/synthesize-text-input-speech/?WT.mc_id=devto-blog-emwalker" rel="noopener noreferrer"&gt;Text-to-Speech&lt;/a&gt;. If you have any feedback about the app itself, please leave an issue in the GitHub repository, or reach out to either of us (&lt;a href="https://twitter.com/revodavid" rel="noopener noreferrer"&gt;David&lt;/a&gt; and &lt;a href="https://twitter.com/lazerwalker" rel="noopener noreferrer"&gt;Em&lt;/a&gt;) directly. This was a fun app to make, and we hope you have fun playing with it too! &lt;/p&gt;

</description>
      <category>machinelearning</category>
      <category>javascript</category>
      <category>serverless</category>
      <category>azure</category>
    </item>
    <item>
      <title>What is Spatial Audio, Why Does it Matter, and What's Apple's Plan?</title>
      <dc:creator>Em Lazer-Walker</dc:creator>
      <pubDate>Tue, 30 Jun 2020 11:55:01 +0000</pubDate>
      <link>https://forem.com/lazerwalker/what-is-spatial-audio-why-does-it-matter-and-what-s-apple-s-plan-239b</link>
      <guid>https://forem.com/lazerwalker/what-is-spatial-audio-why-does-it-matter-and-what-s-apple-s-plan-239b</guid>
      <description>&lt;p&gt;At WWDC 2020, Apple announced that iOS apps will soon be able to use motion data coming from your AirPods Pro to enable head-tracked spatial audio. They talked about this largely in context of playing movies with multi-channel surround sound, but that's probably the least interesting application of spatial audio.&lt;/p&gt;

&lt;p&gt;As someone who's been working in the field for a long time — my research at the MIT Media Lab in 2015 and 2016 focused on location-based storytelling in public spaces using spatial audio — I wanted to try to give some context around why this is interesting and what it might enable.&lt;/p&gt;

&lt;h2&gt;
  
  
  What is spatial/positional audio?
&lt;/h2&gt;

&lt;p&gt;Spatial or positional audio (these terms are typically used interchangeably) lets you position sounds anywhere in 3D space. Instead of just thinking about sound engineering at the level of "is this to the listener's left, right, or neither?" as you would with normal stereo sound, you can place specific sounds at specific 3D locations around the listener: say, a sound that's in front of you, a little bit to the left, and a meter or two above head/ear height.&lt;/p&gt;

&lt;p&gt;If that sounds a lot like surround sound, you're not wrong, but the underlying technology is different. Surround sound systems play audio out of speakers that are placed in different physical locations. To play a sound that sounds like it's behind you and to the right, you use a speaker that is literally behind you and to the right.&lt;/p&gt;

&lt;p&gt;The sort of spatial audio we're talking about is (usually, but not always) concerned with producing sound that is situated precisely in 3D space despite coming out of a pair of normal headphones.&lt;/p&gt;

&lt;h2&gt;
  
  
  How does spatial audio work?
&lt;/h2&gt;

&lt;p&gt;Let's talk about how humans normally hear sounds in the real world.&lt;/p&gt;

&lt;p&gt;If a loud noise happens directly to your left, those sound waves will reach both your left and right ears. But while they have a pretty direct route into your left ear, your right ear will receive them after they've passed through and been shaped by your skull, your brain, and pretty much everything else in there.&lt;/p&gt;

&lt;p&gt;Humans — specifically, our brains, inner ears, and outer ears working together — are really good at processing the difference in these sound waves and transforming those raw signals into your conscious mind saying "ah yeah, that sound is coming from over there".&lt;/p&gt;

&lt;p&gt;To produce spatial audio that lets you hear things in precise 3D locations through a set of headphones, the audio coming out of each headphone ear needs to essentially recreate the sound attenuation that happens naturally, tricking your low-level auditory systems into thinking the sound is coming from someplace else.&lt;/p&gt;
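&lt;p&gt;One of those low-level cues can even be sketched numerically. The snippet below is a toy illustration, not production DSP: it uses Woodworth's classic approximation for the interaural time difference, the tiny gap between a sound arriving at your near ear and at your far ear, for a source at a given horizontal angle:&lt;/p&gt;

```javascript
// Toy illustration of one binaural cue: interaural time difference (ITD).
// Woodworth's approximation: ITD = (a / c) * (sin(theta) + theta), where
// a is the head radius, c the speed of sound, and theta the source's
// azimuth in radians (0 = straight ahead).
const HEAD_RADIUS = 0.0875; // metres, roughly an average adult head
const SPEED_OF_SOUND = 343; // metres per second in air

function interauralTimeDifference(azimuthRadians) {
  return (
    (HEAD_RADIUS / SPEED_OF_SOUND) *
    (Math.sin(azimuthRadians) + azimuthRadians)
  );
}

// A source directly to one side (90 degrees) arrives at the near ear
// roughly 0.66 milliseconds before the far ear.
console.log(interauralTimeDifference(Math.PI / 2) * 1000); // milliseconds
```

&lt;p&gt;A fraction of a millisecond doesn't sound like much, but your auditory system resolves differences far smaller than that; spatial audio rendering recreates this delay, along with per-ear level and spectral differences.&lt;/p&gt;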

&lt;h2&gt;
  
  
  How do you produce spatial audio using analog methods?
&lt;/h2&gt;

&lt;p&gt;The traditional way of producing binaural audio recordings involves taking two microphones and sticking them in the 'ears' of a mannequin head that's been designed to roughly match the density of a human head. Done right, this gives you a stereo audio recording that truly does capture the 3D soundscape as it was recorded.&lt;/p&gt;

&lt;p&gt;The audio walks by sound artist Janet Cardiff are a great example of these traditional analog methods. If you want to get a sense of how effective this technique can be, grab a pair of headphones and listen to a few minutes of her NYC audio walk &lt;a href="https://soundcloud.com/incredibleworksofart/sets/janet-cardiff"&gt;Her Long-Black Hair&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Using digital techniques for spatial audio
&lt;/h2&gt;

&lt;p&gt;Manually setting up two microphones and a test dummy is a lot of work. Modern audio production techniques typically involve math instead. Specifically, they use a set of models known as Head-Related Transfer Functions (HRTF) that describe the transformation that occurs when a "pure" sound is attenuated in such a way so as to mimic what a specific ear would hear.&lt;/p&gt;

&lt;p&gt;Although specific HRTFs theoretically vary from person to person (all of our bodies are different!), in practice researchers have generated a few different algorithms that will work well for most situations. If you're using HRTF as a programmer or sound engineer, in most cases you'll just see "HRTF" as an option you can enable and that's really the extent to which you need to think about it.&lt;/p&gt;

&lt;h2&gt;
  
  
  How do you adapt to the listener?
&lt;/h2&gt;

&lt;p&gt;Whether you're working with analog binaural recordings or digital tools that can apply HRTF, the results are usually stereo audio files that sound impressively like they're positioned in real space.&lt;/p&gt;

&lt;p&gt;But they're still static recordings. Let's say that a sound is directly behind you. If you turn 90 degrees to your left, you'd expect that sound to now be directly to your left. But because the audio recording was made so that sound is "behind" you, it will turn along with you, breaking the illusion.&lt;/p&gt;

&lt;p&gt;This is where being able to create spatial audio in software is valuable. If you have some way of tracking the position of the listener's head, you can dynamically adjust your sound generation in real-time to keep a sound source fixed in the listener's concept of real-world space.&lt;/p&gt;
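&lt;p&gt;The core of that adjustment is just a change of coordinate frames. As a minimal sketch (plain math, not any particular SDK's API), here's how you might re-express a world-fixed sound source in the listener's head frame each time new head-tracking data arrives:&lt;/p&gt;

```javascript
// Minimal sketch: keep a sound source fixed in world space by rotating
// its position into the listener's head frame on every tracking update.
// Convention: x is the listener's initial right, z is initially forward;
// yaw is positive when the head turns left (counterclockwise from above).
function worldToHeadRelative(source, headYaw) {
  const cos = Math.cos(-headYaw);
  const sin = Math.sin(-headYaw);
  return {
    x: source.x * cos - source.z * sin, // negative: to the listener's left
    z: source.x * sin + source.z * cos, // negative: behind the listener
  };
}

// A source directly behind the listener...
const behind = { x: 0, z: -1 };
// ...after they turn 90 degrees to their left, should now be rendered
// directly to their left.
console.log(worldToHeadRelative(behind, Math.PI / 2)); // x near -1, z near 0
```

&lt;p&gt;In a real app you'd feed the resulting head-relative position into your HRTF-based spatializer every frame, using the yaw (plus pitch and roll) reported by the phone or headphones.&lt;/p&gt;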

&lt;p&gt;Right now, the main commercial application of something like this is VR and AR headsets. Since they already have high-quality head tracking data for graphics rendering, using that same data for positional audio is a no-brainer.&lt;/p&gt;

&lt;h1&gt;
  
  
  Where AirPods Pro come in
&lt;/h1&gt;

&lt;p&gt;Given all of that, hopefully you can see how adding spatial audio features to AirPods Pro might work.&lt;/p&gt;

&lt;p&gt;Apple already offers an API for producing spatial audio with HRTF, both integrated into ARKit for AR experiences and in the more general-purpose AVFoundation library. This has actually been part of iOS since 2014!&lt;/p&gt;

&lt;p&gt;With iOS 14, Apple is adding a set of new APIs to use the motion sensors already in AirPods Pro to provide head-tracking, letting developers build positional audio experiences that are aware of how the listener is moving their head.&lt;/p&gt;

&lt;p&gt;This isn't a completely new concept — until recently, Bose maintained a similar platform for head-tracked spatial audio using certain Bose headphone models and a third-party smartphone SDK — but being supported at a first-party system level, with Apple's incredibly popular headphones, will almost certainly help this see wider use than Bose's SDK.&lt;/p&gt;

&lt;h2&gt;
  
  
  The audio version of ARKit
&lt;/h2&gt;

&lt;p&gt;You could technically build visual AR apps without using an AR framework like ARKit or ARCore. It's not a lot of work to show a live camera feed in a mobile app, and then overlay a 3D model on top of it. But unless you're doing a lot of manual computer vision work, you're not going to have the world awareness to keep that object's position fixed in real space as the user moves the phone around.&lt;/p&gt;

&lt;p&gt;Head tracking for spatial audio is similar. Prior to this announcement, you could easily make experiences for iOS that play positional 3D audio soundscapes through users' headphones. But without head tracking, they lack an awareness of and connection to the physical world, and it's not possible to make the sounds feel like they're fixed in a concrete real-world position. This new API solves that.&lt;/p&gt;

&lt;h2&gt;
  
  
  What does this look like from a technical standpoint?
&lt;/h2&gt;

&lt;p&gt;As of the writing of this piece, Apple's APIs aren't ready for public consumption. There's a new &lt;a href="https://developer.apple.com/documentation/coremotion/cmheadphonemotionmanager"&gt;headphone motion API&lt;/a&gt;, and a new as-yet-unused &lt;a href="https://developer.apple.com/documentation/audiounit/audio_unit_properties/spatialization_algorithms"&gt;configuration option for different spatialization algorithms&lt;/a&gt; that seems unconnected to the existing AVFoundation APIs.&lt;/p&gt;

&lt;p&gt;The latter suggests to me that Apple may release a higher-level system that, say, automatically adds head-tracked spatial audio to any apps already playing audio through ARKit. I suspect they will heavily encourage developers to use ARKit when appropriate, as augmenting headphone motion data with camera-based world tracking will likely provide better tracking results.&lt;/p&gt;

&lt;p&gt;That said, once new AirPods Pro firmware that supports sending motion data has been released, the headphone motion manager will be enough for interested developers to dive in and start building spatial audio experiences. &lt;/p&gt;

&lt;p&gt;Four years ago, I built some &lt;a href="https://github.com/lazerwalker/ios-3d-audio-test"&gt;quick experiments&lt;/a&gt; using the iPhone's built-in accelerometer and gyroscope to control a spatial audio scene generated using Apple's existing AVFoundation spatial audio APIs. The code to wire up the two was straightforward back then, and a similar approach should work just as well when it's using motion data coming from the headphones instead of from the phone itself.&lt;/p&gt;

&lt;h1&gt;
  
  
  Why does this matter?
&lt;/h1&gt;

&lt;p&gt;This is all well and good. But what does head-tracked spatial audio actually enable? Providing a more immersive experience for films or 3D games, as Apple suggested, is a natural use case, but far from the most interesting one.&lt;/p&gt;

&lt;p&gt;What's difficult about answering this question is that there isn't yet a well-established field of design for building audio-only real-world experiences that take advantage of positional audio. Existing audio-only platforms like voice assistants don't really have a concept of grounding an audio experience in the physical world; even the people building rich gaming experiences for those platforms don't have a clear answer for how spatiality might change things.&lt;/p&gt;

&lt;p&gt;Based on my experience working with spatial audio, there are at least a few broad classes of potential applications that really excite me. This is far from an exhaustive list, but here's a taste of the sorts of experiences we might see as spatial audio becomes more of a thing:&lt;/p&gt;

&lt;h2&gt;
  
  
  Wayfinding
&lt;/h2&gt;

&lt;p&gt;One of the first use cases people tend to think of for spatial audio is helping people navigate the world. Microsoft has already released an app called &lt;a href="https://www.microsoft.com/en-us/research/product/soundscape/"&gt;Soundscape&lt;/a&gt; that uses binaural audio to help people who are blind or have low vision navigate the world.&lt;/p&gt;

&lt;p&gt;It's easy to imagine turn-by-turn navigation apps adding in support for spatial audio cues, and interaction patterns such as "follow this sound that keeps moving in the direction you should walk" becoming commonplace.&lt;/p&gt;

&lt;p&gt;As Apple improves their indoor location technology, this could also easily become a big part of making indoor wayfinding viable before they ship AR glasses, since the ARKit model of "hold your phone out in front of you while you walk through a space" is both socially and physically awkward.&lt;/p&gt;

&lt;h2&gt;
  
  
  Improving existing audio content
&lt;/h2&gt;

&lt;p&gt;If you speak to anyone who's worked on a social platform for VR, they will be quick to point out how much of a difference spatial audio makes in fostering natural voice conversations. When human voices are mapped to distinct physical locations, it's like a switch is flipped in the brain that makes it easier to differentiate similar-sounding voices, even if you're on a platform that doesn't have great lip-syncing or other visual ways to indicate who's speaking.&lt;/p&gt;

&lt;p&gt;It wouldn't surprise me to see applications like group voice chat apps or even podcast apps embrace spatial audio as a way to make conversation feel more natural and easier to make sense of at a subconscious level.&lt;/p&gt;

&lt;h2&gt;
  
  
  Real-world gaming and playful experiences
&lt;/h2&gt;

&lt;p&gt;One of the projects that resulted from my MIT research into spatial audio was a &lt;a href="https://www.youtube.com/watch?v=swQ338aOGm0"&gt;site-specific generative poetry walk&lt;/a&gt; built for a park in San Francisco. Being built for consumer iPhones in 2016 meant it couldn't use head tracking for its positional audio, but key to the piece are the binaural audio soundscapes that subtly fade in depending on where in the park you are. &lt;/p&gt;

&lt;p&gt;If you're in the main grassy field in the park, you may hear kids laughing and playing off in the distance, and you won't really be sure whether they exist in the real world or just in the audio; the cacophony of birds chirping as you enter the fenced-off community garden creates a sense of magic and connection to nature in a visually stunning space.&lt;/p&gt;

&lt;p&gt;The &lt;a href="https://soundcloud.com/incredibleworksofart/sets/janet-cardiff"&gt;Janet Cardiff audio walk&lt;/a&gt; I mentioned earlier does similar magic tricks with (also non-head-tracked) positional audio. You'll hear a couple arguing behind you, or police sirens going off on the street outside the park, and not be sure whether it's reality or fiction. &lt;/p&gt;

&lt;p&gt;Cardiff applies a ton of incredibly subtle psychological tricks to prevent you from turning your head and breaking the illusion of the static baked-in binaural audio. Her work is generally in a league of its own; it's nearly impossible to replicate without her sheer experience and talent.&lt;/p&gt;

&lt;p&gt;Having readily-available consumer head-tracked audio means these sorts of experiences will be far more accessible to creators of all types, not just ones with extensive experience in traditional binaural audio production.&lt;/p&gt;

&lt;p&gt;To be clear, I think the future of games and playful experiences focused on spatial AR audio isn't in extending games like Pokémon Go to be more "immersive", but in taking design cues from live-action role-playing and immersive theatre design communities. My design goal for my poetry walk was to create something that encourages players to appreciate the mundane beauty of a public space in their neighborhood, blurring reality with fiction and make-believe to elevate reality into something that feels magical.&lt;/p&gt;

&lt;p&gt;Positional audio is far more powerful at provoking that sort of emotional reaction in players than current-day 3D visual AR, and far cheaper to produce. Apple helping to make head-tracked positional audio mainstream could bring on a waterfall of beautiful hyperlocal audio experiences.&lt;/p&gt;

&lt;h1&gt;
  
  
  So what's the takeaway?
&lt;/h1&gt;

&lt;p&gt;I feel like I'm standing over here, wildly waving my arms at everyone to pay attention to my favorite pet technology that's finally on the verge of becoming mainstream. But it's true!&lt;/p&gt;

&lt;p&gt;I think spatial audio in general is a much more powerful technology than a lot of people give it credit for, but good head-tracking available in consumer hardware is the piece that's been missing for it to find more widespread appeal. By piggy-backing off of existing popular headphones, Apple is well-positioned to make spatial audio tech explode in a way that it hasn't before.&lt;/p&gt;

&lt;p&gt;I'm so excited to see what people make of this, and can't wait to dive in more as Apple updates the AirPods Pro firmware and makes beta API access available. Let me know if you're working on something cool or have cool ideas for ways to use spatial audio tech!&lt;/p&gt;

</description>
      <category>ios</category>
      <category>gamedev</category>
      <category>design</category>
      <category>audio</category>
    </item>
    <item>
      <title>Using Data to Improve Your Narrative Games!</title>
      <dc:creator>Em Lazer-Walker</dc:creator>
      <pubDate>Wed, 29 Jan 2020 16:53:43 +0000</pubDate>
      <link>https://forem.com/lazerwalker/using-data-to-improve-your-narrative-games-4g7o</link>
      <guid>https://forem.com/lazerwalker/using-data-to-improve-your-narrative-games-4g7o</guid>
      <description>&lt;h1&gt;
  
  
  Analytics in Interactive Fiction?
&lt;/h1&gt;

&lt;p&gt;When I speak with indie game developers making narrative games, a lot of people perk up when I mention analytics. Gathering data about user behavior can feel like a weird dark art: it often sounds like something you should be doing, but you don't quite know what to track or how to make sense of that data, and you probably also wonder whether it's actually ethical to capture that sort of data.&lt;/p&gt;

&lt;p&gt;This article is about how I use analytics in my &lt;a href="https://twinery.org"&gt;Twine&lt;/a&gt; games! I'll talk about how analytics are "traditionally" used in large free-to-play games, as well as the ways in which that approach does or does not apply to people making small experimental narratives. &lt;/p&gt;

&lt;p&gt;I'll walk through a few examples of how I've concretely found gathering analytics data useful in my work, as well as give some links to the free tools I use to do that data-gathering in Twine (largely the &lt;a href="https://lazerwalker.com/playfab-twine"&gt;PlayFab-Twine&lt;/a&gt; tool I maintain). &lt;/p&gt;

&lt;p&gt;Although my experience here — and the tools I mention — are largely grounded in Twine, this should be applicable to those making choice-based interactive fiction games in most any environment.&lt;/p&gt;

&lt;h2&gt;
  
  
  How free-to-play games use analytics
&lt;/h2&gt;

&lt;p&gt;Free-to-play games are by far the most regimented users of analytics data. To be clear, I don't want to endorse the way F2P uses data, and I think it's largely not relevant to people making free or premium interactive fiction games. &lt;/p&gt;

&lt;p&gt;But it's still useful to get a sense for how they function before we talk about which specific elements to crib and which to leave be.&lt;/p&gt;

&lt;p&gt;This is also a mile-high view; it goes without saying that I'm not trying to speak to how EVERY free-to-play game uses analytics. This is a sort of abstracted overview based on my personal experiences in free-to-play over the years.&lt;/p&gt;

&lt;h3&gt;
  
  
  Guiding Metrics
&lt;/h3&gt;

&lt;p&gt;For the most part, free-to-play games are laser-focused on a funnel:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;You get new users into your game. In 2020, you're probably paying money to acquire them through channels like video ads.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Some number of these players will keep playing your game.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Of the people who keep playing your game, some of them will become paying players.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Once people are paying you, you want them to pay you as much money as you can get from them.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The goal of metrics is mostly to quantify each of these steps, so you can then run measured experiments to make small improvements to each of those steps.&lt;/p&gt;

&lt;p&gt;If you can make the average retention longer, or increase the amount of money your average player pays, that can have huge returns in your profits.&lt;/p&gt;

&lt;p&gt;For the most part, there are a few metrics that are universally tracked:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Retention&lt;/strong&gt; is measured in terms of what percentage of players return a certain number of days after installing.&lt;/p&gt;

&lt;p&gt;Day 1 — or D1 — retention is how many players come back the day after they install, D7 is how many people return 7 days after installing, so on and so forth. You'll typically track D0, D1, D7, and D30 retention.&lt;/p&gt;

&lt;p&gt;Retention numbers beyond day 30 are often ignored. I'd argue that's likely a mistake, but that's a discussion for another day!&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Revenue&lt;/strong&gt; is exactly what it sounds like. You might track things like the average revenue per daily active user (ARPDAU), average revenue per paying user (ARPPU), or what percentage of your players give you any money.&lt;/p&gt;
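&lt;p&gt;To make those definitions concrete, here's a toy sketch of how these metrics reduce to simple arithmetic. All of the function names and numbers are invented for illustration; they aren't taken from any real game.&lt;/p&gt;

```javascript
// Toy sketch: the key F2P metrics reduce to simple arithmetic.
// All function names and numbers here are invented for illustration.

// D1 retention: of everyone who installed yesterday, what fraction
// came back today?
function d1Retention(installedYesterday, returnedToday) {
  return returnedToday / installedYesterday;
}

// ARPDAU: total revenue for a day divided by that day's active users.
function arpdau(dailyRevenue, dailyActiveUsers) {
  return dailyRevenue / dailyActiveUsers;
}

// ARPPU: total revenue divided by the number of paying users only.
function arppu(dailyRevenue, payingUsers) {
  return dailyRevenue / payingUsers;
}

console.log(d1Retention(1000, 400)); // 0.4
console.log(arpdau(500, 10000)); // 0.05
console.log(arppu(500, 125)); // 4
```

&lt;p&gt;The trivial math is kind of the point: the hard part of F2P analytics isn't the formulas, it's deciding which of these numbers to chase.&lt;/p&gt;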

&lt;p&gt;Although you'll focus on those metrics as the most important things to track, you'll typically also use them as jumping-off points for finding other things to keep an eye on. A product owner on a free-to-play game  will typically add lots of tracking into their game, with an eye towards answering specific questions:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;When launching new features&lt;/strong&gt;: are people engaging with this the way it was anticipated? If you visualize a feature of a game as a funnel, where are people dropping off of that funnel?&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Health checks&lt;/strong&gt;: games will often have a specific number that's indicative of overall game health. When I worked on Words With Friends, we looked at the total number of turns played by all players each day. If that number was unexpectedly low, that was a sign that there was something wrong, either with our servers or with a new design&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Deciding what new features to prioritize&lt;/strong&gt;: new feature work will often be oriented around raising a specific metric, usually one of the key revenue or retention metrics&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  This sounds gross. Why does this apply to me?
&lt;/h3&gt;

&lt;p&gt;As a conceptual framework, this sort of numbers-driven approach to game design is intended to maximize how much time and money a player is spending on your game, typically using psychological tricks and addiction patterns taken from gambling. That's not great.&lt;/p&gt;

&lt;p&gt;The good news is, even though I've just spent all this time describing this whole approach to analytics, it probably doesn't apply to you!&lt;/p&gt;

&lt;p&gt;F2P narrative games certainly exist, but I'm writing this article for people who don't work in that space. If you're working on a game that's a bounded narrative experience — whether free or "premium" — you're (hopefully!) not focused on the sorts of questions around optimizing money and engagement time from players that F2P designers are; you're much more focused on conveying some aspect of the human experience or evoking some sort of emotion in players. &lt;/p&gt;

&lt;p&gt;In that situation, a strict application of F2P analytics techniques is actively going to make you a worse designer! But I do think it's possible to use similar techniques and tools to get a better sense of how players are interacting with your game, in ways that are qualitatively different from what you'll get from e.g. in-person playtesting. Having that information can help inform design decisions in ways that can result in a better experience for your players, in particular in the moments when your expectations turn out to not match up with reality.&lt;/p&gt;

&lt;p&gt;It's also absolutely possible to gather that data without capturing personally-identifiable data on any given individual player, which helps me personally feel a lot less squeamish about this sort of data collection.&lt;/p&gt;

&lt;h3&gt;
  
  
  Metrics aren't about individual players
&lt;/h3&gt;

&lt;p&gt;An important point about metrics data is that it's mainly useful in aggregate. If you're interested in charting out the journey a single player takes, just looking at numbers isn't going to tell you what that player is thinking or experiencing. Tools like in-person playtesting or qualitative surveys are going to be far more effective at that personal individual scale.&lt;/p&gt;

&lt;h3&gt;
  
  
  So what &lt;em&gt;are&lt;/em&gt; metrics useful for?
&lt;/h3&gt;

&lt;p&gt;I recently shipped a &lt;a href="https://microsoft.com/mysterymansion"&gt;small Twine game for work&lt;/a&gt; that's essentially a short escape room game: you're in a spooky old house, and need to solve a bunch of puzzles.&lt;/p&gt;

&lt;p&gt;Using my &lt;a href="https://lazerwalker.com/playfab-twine"&gt;PlayFab-Twine&lt;/a&gt; tool (more on that later!), we were able to gather data on whenever a player visited a different Twine node in our game, with the ability to write complex queries around that dataset.&lt;/p&gt;

&lt;p&gt;We broadly found analytics to be incredibly useful for answering a whole bunch of different design questions. &lt;/p&gt;

&lt;h4&gt;
  
  
  Who are our players?
&lt;/h4&gt;

&lt;p&gt;Data about how many people played our game, how often repeat players came back, where players were coming from (geographic region / OS and browser version / etc), and how long people on average spent with the game was super useful to get a sense for who was playing our game.&lt;/p&gt;

&lt;h4&gt;
  
  
  Is our game broken?
&lt;/h4&gt;

&lt;p&gt;Many interactive fiction tools have some sort of mechanism for validating that every passage is theoretically visible, basically by brute-forcing every possible path in the game. &lt;/p&gt;

&lt;p&gt;For us, analytics was able to serve a similar goal: once the game was out in the hands of a large number of users, we were able to look at visit numbers for every Twine passage relative to the total user count. If any passage had 0 visits, or an exceedingly low number of visits, that was a strong sign there was likely some sort of engineering error that prevented players from reaching that content.&lt;/p&gt;
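&lt;p&gt;As a sketch of how that check might look: given a map of per-passage view counts and a total player count, flag anything that falls below some threshold. The passage names, numbers, and 1% cutoff here are all hypothetical.&lt;/p&gt;

```javascript
// Sketch: flag passages that suspiciously few players ever reached.
// `visits` maps passage name to total view count; the names, numbers,
// and 1% threshold are all hypothetical.
function findSuspiciousPassages(visits, totalPlayers, threshold = 0.01) {
  return Object.entries(visits)
    .filter(([, count]) => threshold * totalPlayers > count)
    .map(([name]) => name);
}

const visits = {
  "Front Hall": 950,
  "Library": 720,
  "Secret Attic": 3, // 3 views out of 1000 players: probably a broken link!
  "Basement": 0,
};
console.log(findSuspiciousPassages(visits, 1000));
// [ 'Secret Attic', 'Basement' ]
```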

&lt;p&gt;This isn't a replacement for traditional manual QA, but can still serve as helpful extra validation your game works as expected!&lt;/p&gt;

&lt;h4&gt;
  
  
  Where are players the most unhappy?
&lt;/h4&gt;

&lt;p&gt;With our game's initial release, we were disappointed with how few players were playing the game all the way to the end. &lt;/p&gt;

&lt;p&gt;Our game is a puzzle game whose structure is generally fairly open, but we have a few strict chokepoints where players must solve a specific puzzle to progress. This made finding the problem fairly easy: taking each of these chokepoints, we graphed out how many people viewed the Twine passage where they needed to essentially input the answer to the puzzle as opposed to how many people viewed the passage immediately following the puzzle. &lt;/p&gt;

&lt;p&gt;This made it clear which exact puzzle in our game was too difficult. Adding some more aggressive hinting earlier in the game quickly fixed things, which we were able to verify both by looking at passage view counts for the "puzzle solved" passage as well as our metrics for overall game completion.&lt;/p&gt;
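&lt;p&gt;The shape of that chokepoint analysis can be sketched in a few lines of JavaScript. This isn't the actual query we ran, and the puzzle names and counts are invented, but it shows the core idea: divide "solved" views by "attempted" views and sort.&lt;/p&gt;

```javascript
// Hypothetical sketch of the chokepoint analysis: for each puzzle,
// `attempted` is views of the passage where you input the answer, and
// `solved` is views of the passage immediately after it.
function chokepointDropoff(chokepoints) {
  return chokepoints
    .map(({ name, attempted, solved }) => ({
      name,
      solveRate: solved / attempted,
    }))
    .sort((a, b) => a.solveRate - b.solveRate); // hardest puzzle first
}

const results = chokepointDropoff([
  { name: "Piano Cipher", attempted: 800, solved: 720 },
  { name: "Painting Safe", attempted: 700, solved: 210 },
  { name: "Attic Riddle", attempted: 200, solved: 180 },
]);
console.log(results[0].name); // → Painting Safe (only 30% of attempts succeed)
```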

&lt;p&gt;Using similar techniques, we were also able to look at the more nonlinear and multilinear parts of our game: when given freedom to wander wherever they wanted, what paths did players take? We were able to look both at overall view counts for individual passages (e.g. did more people go into the dining room than the study?) as well as ordering of the two (e.g. for people who went both upstairs and downstairs, which area did they explore first?). &lt;/p&gt;

&lt;p&gt;Our game didn't have a particularly complex puzzle graph, so we never needed to change its flow, but if pacing had become a problem, knowing where players' attention was naturally drawn would have given us a clearer idea of how to rearrange the game space's architecture.&lt;/p&gt;

&lt;h4&gt;
  
  
  But what if you're not making a puzzle game?
&lt;/h4&gt;

&lt;p&gt;Your game might not break down as cleanly into explicit, discrete chokepoints, but this same general approach can still tell you a lot about player behavior.&lt;/p&gt;

&lt;p&gt;Often you'll design a choice with two options, expecting that roughly 50% of players will select each option. Tracking those passage views can help confirm whether player behavior meets your expectations. &lt;/p&gt;

&lt;p&gt;If it doesn't, that gives you the opportunity to either revisit the choice you're presenting players to try to get it closer to what you were imagining, or to lean into it and redesign other aspects of the game based on what that tells you about what your players are actually interested in.&lt;/p&gt;
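&lt;p&gt;As a minimal sketch, checking a two-option choice against the 50/50 split you designed for might look like this (all numbers invented):&lt;/p&gt;

```javascript
// Minimal sketch: how lopsided is a two-option choice, relative to the
// 50/50 split you designed for? All numbers are invented.
function choiceSplit(optionAViews, optionBViews) {
  const total = optionAViews + optionBViews;
  return { a: optionAViews / total, b: optionBViews / total };
}

// Does the observed split deviate from expectations by more than some
// tolerance you're comfortable with?
function deviatesFromExpected(split, expected, tolerance) {
  return Math.abs(split.a - expected) > tolerance;
}

const split = choiceSplit(820, 180);
console.log(split.a); // 0.82
console.log(deviatesFromExpected(split, 0.5, 0.1)); // true
```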

&lt;p&gt;If you're making an episodic game, you could also see how tracking this data can help influence future decisions. You don't want to make creative decisions completely at the mercy of your players, sure, but knowing that your players prefer a certain character or a certain type of gameplay might be useful data to have as you continue to write and design. And while asking players directly what they like is also a useful technique, it's equally useful to look at what players do rather than what they tell you.&lt;/p&gt;

&lt;h4&gt;
  
  
  How can we convey choices to players?
&lt;/h4&gt;

&lt;p&gt;We didn't end up shipping anything like this in the final game, but we were really interested in being able to show Telltale-style metrics to users within the game itself. After completing the game, we imagined players might be able to see "for this major choice, X% of players made the same choice as you". &lt;/p&gt;

&lt;p&gt;Once you have aggregated data about how players interact with your game, you can imagine other interesting ways to surface this to players. It's not the right fit for every game, sure, but using your data to directly empower players can also help you allay your own concerns that you're using these metrics techniques for good rather than evil.&lt;/p&gt;

&lt;h3&gt;
  
  
  Be data-informed, not data-driven
&lt;/h3&gt;

&lt;p&gt;All of these tools are incredibly powerful, but it's worth reiterating that you shouldn't solely rely on metrics. Even in live F2P games, people talk a lot about having the wisdom to know when to disregard what the data is telling you.&lt;/p&gt;

&lt;p&gt;Whatever your design goals are, you shouldn't look at metrics as numbers to obsessively optimize, but a very specific tool to use to augment your decision-making process. When you have a question about how players are interacting with your game, looking at data and constructing a specific query can be one way to get one specific perspective of what's going on.&lt;/p&gt;

&lt;p&gt;Which is to say: analytics are great! Use them! But looking at metrics can itself become an unhealthy Skinner Box-esque system that triggers compulsive behavior in you as a designer.&lt;/p&gt;

&lt;h3&gt;
  
  
  So how can I do this in my game?
&lt;/h3&gt;

&lt;p&gt;Maybe you're reading all of this and saying "wow, that sounds great! I want to apply these techniques to my own game!"&lt;/p&gt;

&lt;p&gt;How to get started depends a lot on what you're making, what tools you're using, and how technical you are.&lt;/p&gt;

&lt;p&gt;I'm personally partial to &lt;a href="https://docs.microsoft.com/en-us/gaming/playfab/#pivot=documentation&amp;amp;panel=playfab&amp;amp;WT.mc_id=blog-playfabtwine-emwalker"&gt;PlayFab&lt;/a&gt;. It's a hosted web service that basically aims to power your game's backend. It's intended to be used in rather large F2P games, where it can do things like manage your game's entire economy and social systems, but I really like using it for small-scale projects just for things like user authentication, leaderboards, cloud saves, and analytics. Every personal project I've worked on has fit within its free tier (even my game &lt;a href="https://flappyroyale.io"&gt;Flappy Royale&lt;/a&gt;, which had nearly 200,000 DAU at its peak).&lt;/p&gt;

&lt;p&gt;For analytics in particular, I appreciate that it's focused specifically on the needs of game designers, rather than people building business-focused web applications. &lt;/p&gt;

&lt;p&gt;Being owned by Microsoft is also a plus from a privacy standpoint: most popular large analytics services are owned by ad providers, which gives them a perverse incentive to sell your users' data. There are a ton of smaller startup analytics tools that are great as well, but they're historically likely to be acquired by those same large ad companies. Using a tool owned by a large stable company like Microsoft that isn't in the advertising business seems like the best of all worlds.&lt;/p&gt;

&lt;p&gt;If you're a coder, PlayFab is easy to get started with. Whether you're writing in Unity, Unreal, JavaScript on the web, or anything else, it's usually only a few lines of code to authenticate with PlayFab and start sending real-time analytics data. Check out their &lt;a href="https://docs.microsoft.com/en-us/gaming/playfab/personas/developer?WT.mc_id=blog-playfabtwine-emwalker"&gt;getting started guide&lt;/a&gt;.&lt;/p&gt;
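&lt;p&gt;For a sense of what those few lines look like, here's a hedged sketch of building a custom event payload. The &lt;code&gt;EventName&lt;/code&gt;/&lt;code&gt;Body&lt;/code&gt; shape reflects my reading of PlayFab's &lt;code&gt;WritePlayerEvent&lt;/code&gt; client API; double-check the getting started guide before relying on the details.&lt;/p&gt;

```javascript
// Hedged sketch: building a custom analytics event. The { EventName,
// Body } shape reflects my reading of PlayFab's WritePlayerEvent
// client API; verify against the official docs before relying on it.
function passageVisitedEvent(passageName) {
  return {
    EventName: "passage_visited", // custom event names are up to you
    Body: { passage: passageName },
  };
}

// In a real game, after logging in you'd hand this object to the SDK,
// e.g. PlayFabClientSDK.WritePlayerEvent(passageVisitedEvent("Start"), cb);
console.log(passageVisitedEvent("Start").EventName); // passage_visited
```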

&lt;p&gt;If you're using Twine, I maintain a tool called &lt;a href="https://lazerwalker.com/playfab-twine"&gt;PlayFab-Twine&lt;/a&gt; designed to make it as easy as possible to add analytics to your game via PlayFab without writing any custom code.&lt;/p&gt;

&lt;p&gt;With only 5-6 lines of JavaScript copy/pasted into your Twine project, and a few minutes' worth of configuration, you'll be able to use PlayFab's web UI to answer the same sorts of questions I did for the Mystery Mansion project.&lt;/p&gt;

&lt;h3&gt;
  
  
  Go forth and measure things!
&lt;/h3&gt;

&lt;p&gt;Hopefully this has been a useful intro to the world of how analytics can (and can't) help your narrative game! I'm excited for you to start gathering and making decisions influenced by (but not too heavily!) actual real-world data from your players!&lt;/p&gt;

&lt;p&gt;Please do drop me a note on &lt;a href="https://twitter.com/lazerwalker"&gt;Twitter&lt;/a&gt; if you do something cool with what I've talked about!&lt;/p&gt;

</description>
      <category>analytics</category>
      <category>games</category>
      <category>twine</category>
      <category>metrics</category>
    </item>
    <item>
      <title>A Modern Developer's Workflow For Twine</title>
      <dc:creator>Em Lazer-Walker</dc:creator>
      <pubDate>Thu, 16 Jan 2020 16:56:23 +0000</pubDate>
      <link>https://forem.com/lazerwalker/a-modern-developer-s-workflow-for-twine-4imp</link>
      <guid>https://forem.com/lazerwalker/a-modern-developer-s-workflow-for-twine-4imp</guid>
      <description>&lt;p&gt;I love &lt;a href="https://twinery.org"&gt;Twine&lt;/a&gt;! Whether you're trying to prototype a larger work or make something on your own, it's such a powerful and easy-to-use tool to make hypertext-based narrative games.&lt;/p&gt;

&lt;p&gt;That said, a common complaint I've heard from nearly everyone I've talked to who uses it seriously is how quickly its workflows fall apart at scale. &lt;/p&gt;

&lt;p&gt;A visual graph editor is a fantastic approach for small projects, but gets unmanageable quickly on larger ones. Additionally, the way the Twine 2 editor handles files means using tools like version control can be difficult, and merging changes from multiple collaborators can be nearly impossible.&lt;/p&gt;

&lt;p&gt;But there's a solution! I'm going to spend the next few minutes walking you through my Twine development workflow. There are three important parts of it I want to talk about:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Plain text files&lt;/strong&gt;. I use &lt;a href="https://code.visualstudio.com/?WT.mc_id=devto-blog-emwalker"&gt;VS Code&lt;/a&gt; to write my games, rather than using the visual Twine editor. &lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Modern version control&lt;/strong&gt;, storing my games in git on GitHub.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Automatic publishing&lt;/strong&gt;. Every time I push a new version of my game to GitHub, it's instantly playable via &lt;a href="https://github.com/features/actions"&gt;GitHub Actions&lt;/a&gt; and &lt;a href="https://pages.github.com"&gt;GitHub Pages&lt;/a&gt;.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Let's step through the tools I use, and how you can get set up with a similar toolchain!&lt;/p&gt;

&lt;h2&gt;
  
  
  Writing in a Text Editor
&lt;/h2&gt;

&lt;p&gt;Why is it valuable to be able to write Twine games as text files instead of as nodes in a visual graph?&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;It scales better.&lt;/strong&gt; When your game grows to be tens of thousands of words, navigating Twine's node-based visual editor can be a pain. Having your entire game in a single text file that you can manipulate and browse however you'd like is far easier for even medium-sized projects. And that's before considering that you can split your script up into multiple files, which can greatly reduce the cognitive load for larger projects.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;It allows for reuse.&lt;/strong&gt; Have some macros or other bits of scripting you'd like to reuse between passages, or across multiple game projects? Being able to copy/paste text in an IDE is a lot easier than managing it in the visual editor.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;It gives you access to better writing tools&lt;/strong&gt;. I'm more comfortable writing in the same text editor I use for other programming and writing tasks than I am in Twine's text boxes. It also means I can use the tools they provide to make my life easier! &lt;/p&gt;

&lt;p&gt;VS Code has extensions to add syntax highlighting for both Harlowe and Sugarcube. More than that, access to its entire IDE ecosystem means I can pull in tools to help with creative prose writing. This means basic things like spell check and an omnipresent word counter, but it can also mean more powerful tools to do things like &lt;a href="https://alexjs.com"&gt;warn me if I'm using subtly sexist/racist/ableist language&lt;/a&gt; or even &lt;a href="https://www.robinsloan.com/notes/writing-with-the-machine/"&gt;spark my creativity by collaborating with an AI&lt;/a&gt;!&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;It enables more robust versioning and collaboration&lt;/strong&gt;. More on this later, but writing my game in a text file means it's stored in a human-readable text file, which is what enables all of the other great tools and techniques I'll be talking about next.&lt;/p&gt;

&lt;p&gt;This all sounds great! To get all of these benefits, we can use a special programming language called Twee!&lt;/p&gt;

&lt;h3&gt;
  
  
  What is Twee?
&lt;/h3&gt;

&lt;p&gt;In the olden days of Twine 1, there were two officially-supported ways to make games: using the Twine visual editor, or by writing code in a scripting language called twee that could be compiled by an official CLI tool, also called &lt;code&gt;twee&lt;/code&gt;. &lt;/p&gt;

&lt;p&gt;(A fun historical sidenote: even though Twine's visual editor is the more popular tool, the twee CLI predates it by 3 years!)&lt;/p&gt;

&lt;p&gt;Twee code is conceptually the same as a Twine graph, with different blocks of text in a file referring to different passages.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;:: Start
This is the first passage in a Twine game!

[[This is a link|Next Passage]]


:: Next Passage
The player just clicked a link to get here!

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;When Twine 2 came out, support for the twee language was officially killed, and the only officially supported path was to use the Twine 2 visual editor and its greatly-expanded support for story formats.&lt;/p&gt;

&lt;h2&gt;
  
  
  How do you use Twee with Twine 2?
&lt;/h2&gt;

&lt;p&gt;When Twine 2 wasn't accompanied by a "Twee 2", the community stepped up, and a number of third-party twee CLI tools emerged. The twee language needed to adapt, though, since Twine 2 handles story formats in a vastly different way from Twine 1. &lt;/p&gt;

&lt;p&gt;What follows is a bit of a technical explanation of the development of modern Twee tools. I think it's interesting, but if you want to skip over it, the main practical takeaway is that I use the &lt;a href="https://www.motoslave.net/tweego"&gt;Tweego&lt;/a&gt; CLI tool to write a newer version of Twee that's called &lt;a href="https://github.com/iftechfoundation/twine-specs/blob/master/twee-3-specification.md"&gt;Twee 3&lt;/a&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  Twine 2 Story Formats: A Technical Explanation
&lt;/h3&gt;

&lt;p&gt;To understand why we can't just use the old &lt;code&gt;twee&lt;/code&gt; tool with Twine 2, we need to understand how Twine 2 story formats work.&lt;/p&gt;

&lt;p&gt;Internally, Twine 2 stores your work as an XML document. When you click the "publish" button in the Twine 2 editor, that XML document is passed to the selected "story format", which is essentially an HTML template. A story format will typically embed JS within that template to parse and modify the Twine story data as appropriate to display it as a playable game. &lt;/p&gt;

&lt;p&gt;This is why/how different story formats present vastly different authoring syntax: as far as Twine the engine is concerned, a passage's text is just an arbitrary text blob (except insofar as it parses links to draw lines in the visual graph editor), and it's then up to the story format to decide how to parse a passage to provide narrative functionality.&lt;/p&gt;
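&lt;p&gt;To illustrate the one piece of passage text Twine itself does understand, here's a toy extractor for link targets. This is a simplified sketch for illustration, not the parser any real story format uses, and it only handles the two classic link forms.&lt;/p&gt;

```javascript
// Toy sketch: extracting the targets of Twine [[links]] from a
// passage's text. Handles the two classic forms, [[Target]] and
// [[display text|Target]]; real story formats support more.
function extractLinks(passageText) {
  const links = [];
  const re = /\[\[(.*?)\]\]/g;
  let match;
  while ((match = re.exec(passageText)) !== null) {
    const inner = match[1];
    const pipe = inner.indexOf("|");
    links.push(pipe === -1 ? inner : inner.slice(pipe + 1));
  }
  return links;
}

console.log(extractLinks("Hi! [[Go on|Next Passage]] or [[Leave]]"));
// [ 'Next Passage', 'Leave' ]
```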

&lt;p&gt;If you're curious to see a "minimum viable story format", I maintain a story format called &lt;a href="https://github.com/lazerwalker/twison"&gt;Twison&lt;/a&gt; that converts Twine story data XML into JSON, with a few bits of computation and data-munging meant to make the JSON easier to consume if you're integrating it into your own game engine.&lt;/p&gt;

&lt;p&gt;This all means a story format is essential to actually going from a script to a playable game! It isn't enough for a hypothetical CLI tool to just take your twee code and bundle it up into the same XML format that Twine 2 uses internally, it also needs to then pass that XML to a story format and generate an HTML file from that interaction.&lt;/p&gt;

&lt;h3&gt;
  
  
  So... is there or isn't there a Twee 2?
&lt;/h3&gt;

&lt;p&gt;The last few years have been a tumultuous time for people who want to write Twee. After a long period of competing Twine 2-compatible twee compilers, there is now a &lt;a href="https://github.com/iftechfoundation/twine-specs/blob/master/twee-3-specification.md"&gt;formal language specification&lt;/a&gt; for Twee 3, maintained by the Interactive Fiction Technology Foundation (IFTF). &lt;/p&gt;

&lt;p&gt;It's designed to be a superset of the original &lt;code&gt;twee&lt;/code&gt; language (retroactively known as Twee 1), and to be fairly easy to convert between twee code and the internal format used by the Twine 2 visual editor. &lt;/p&gt;

&lt;p&gt;If you're interested in the history and politics of how we got here, &lt;a href="https://videlais.com/2019/06/08/an-oral-history-of-twee/"&gt;this oral history&lt;/a&gt; is a great overview.&lt;/p&gt;

&lt;p&gt;There are multiple functioning Twee 3 compilers, but I personally use &lt;a href="https://www.motoslave.net/tweego"&gt;Tweego&lt;/a&gt;. I'm sure others are great as well, but Tweego works well, is actively maintained, and is easy to get support for in the &lt;a href="https://discordapp.com/invite/n5dJvPp"&gt;official Twine Discord&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  How to use Tweego
&lt;/h2&gt;

&lt;p&gt;If you're comfortable using CLI tools, Tweego is quite easy to use. After downloading the correct binary from the &lt;a href="https://www.motoslave.net/tweego/"&gt;website&lt;/a&gt;, you can call it directly to compile a &lt;code&gt;.twee&lt;/code&gt; file into an &lt;code&gt;.html&lt;/code&gt; file you can play in a browser:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ /path/to/tweego -o example.html example.twee
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Here's the sample code from earlier updated to Twee 3 and with some metadata:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;::StoryData
{
    "ifid": "F2277A49-95C9-4B14-AE66-62526089F861",
    "format": "Harlowe",
    "format-version": "3.1.0",
    "start": "Start"
}

::StoryTitle
My test story!

:: Start
This is the first passage in a Twine game!

[[This is a link|Next Passage]]


:: Next Passage
The player just clicked a link to get here!

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;That &lt;code&gt;ifid&lt;/code&gt; is a random unique identifier for a game. If you try to compile a Twee file without including that, tweego will automatically generate one for you. &lt;/p&gt;

&lt;p&gt;Similarly, tweego has a ton of other options and flags you can pass in, which you can see by running &lt;code&gt;tweego --help&lt;/code&gt;. For options that do things like specify a story format, I'd highly recommend setting them in a metadata block like I have above.&lt;/p&gt;

&lt;p&gt;Also worth calling out is the &lt;code&gt;--watch&lt;/code&gt; option. If you run &lt;code&gt;tweego -o example.html example.twee --watch&lt;/code&gt;, it will start up a server that watches for file changes and then recompiles. If you have a text editor open in one window and a web browser open in another one pointed to your compiled output, this is a great way to quickly test changes!&lt;/p&gt;

&lt;h3&gt;
  
  
  But I want to use the visual editor!
&lt;/h3&gt;

&lt;p&gt;If you have a reason to use the Twine 2 visual editor for something, you can use it with Tweego as well. You can take the &lt;code&gt;.html&lt;/code&gt; file output by Tweego and import it directly into Twine 2. When you're done, you can convert a &lt;code&gt;.html&lt;/code&gt; file produced by Twine 2 back into Twee by using the &lt;code&gt;-d&lt;/code&gt; flag (e.g. &lt;code&gt;tweego -o example.twee example.html -d&lt;/code&gt;).&lt;/p&gt;

&lt;p&gt;As an aside: the Twee language includes import functionality that lets you spread your game across multiple files and then join them at compilation time. That can be a really powerful technique for managing larger games, or reusing macros across projects, but that sort of workflow can make jumping back and forth with the visual editor trickier. See the &lt;a href="https://www.motoslave.net/tweego/docs/"&gt;tweego docs&lt;/a&gt; for more info.&lt;/p&gt;

&lt;h2&gt;
  
  
  Version Control
&lt;/h2&gt;

&lt;p&gt;As mentioned, one of the coolest parts about writing Twine games in plain text files is how much easier they are to version. &lt;/p&gt;

&lt;p&gt;If you've ever tried to revisit previous versions of a Twine game you've made, or tried to collaborate with other writers, you know how difficult this can be when you're operating purely on &lt;code&gt;.html&lt;/code&gt; files! Whether you're using git or just storing &lt;code&gt;.html&lt;/code&gt; files on a server somewhere, having to import and export files that aren't particularly human readable is a major pain.&lt;/p&gt;

&lt;p&gt;In the past, I've often given up on trying to fix merge conflicts with other writers, and just manually copy-pasted changes into the Twine editor by hand. That's frustrating, and avoidable by storing everything in Twee files instead!&lt;/p&gt;

&lt;p&gt;I'm not going to walk through how I use git and GitHub, but I will say that one important thing I do is not store my compiled &lt;code&gt;.html&lt;/code&gt; files in git at all. Instead, I set up a build process so that GitHub is responsible for automatically compiling my &lt;code&gt;.twee&lt;/code&gt; files into &lt;code&gt;.html&lt;/code&gt; files. This keeps the git repository clean and readable!&lt;/p&gt;

&lt;h2&gt;
  
  
  Automatically building on GitHub
&lt;/h2&gt;

&lt;p&gt;The concepts of CI and CD (continuous integration and continuous delivery, respectively) are very popular in non-game software development. The high-level idea is that it shouldn't require a lot of manual work to deploy a new version of your software. &lt;/p&gt;

&lt;p&gt;As soon as you push up new code to your version control server, it should be responsible for making sure things aren't broken and then compiling it, deploying it, or whatever else might need to be done to get your code into the hands of users.&lt;/p&gt;

&lt;p&gt;This might seem foreign, or perhaps overkill, if you're just used to the flow of writing a game, getting an HTML file, and uploading that to something like &lt;a href="https://itch.io"&gt;https://itch.io&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;However, &lt;a href=""&gt;GitHub Actions&lt;/a&gt; are a lightweight free service we can use to easily set up a deployment pipeline! In the previous section, I mentioned I don't store the compiled HTML files in my git repos for Twine/Twee games. Instead, GitHub Actions handles everything.&lt;/p&gt;

&lt;p&gt;Every time I push a new version of a Twine game to GitHub, a GitHub Action runs that uses Tweego to compile my game, and then publishes it to &lt;a href="https://pages.github.com"&gt;GitHub Pages&lt;/a&gt;. The end result is that I don't need to think about how to publish my game, or worry about whether I've forgotten to deploy the latest version: whatever version of my Twee code I can read on GitHub is the version players are playing!&lt;/p&gt;

&lt;p&gt;Getting this set up with your own Twine/Twee project is easy. Let's walk through it!&lt;/p&gt;

&lt;h3&gt;
  
  
  Add the story format to git
&lt;/h3&gt;

&lt;p&gt;When your Twee file specifies a story format like Harlowe or SugarCube, Tweego can find it because the binary you download from the Tweego website bundles a half-dozen standard story formats. The way we'll be installing Tweego on GitHub Actions won't have access to those.&lt;/p&gt;

&lt;p&gt;Within your git directory, create a folder called &lt;code&gt;storyformats&lt;/code&gt;. Go into wherever you've downloaded Tweego, and move the appropriate story format(s) from its &lt;code&gt;storyformats&lt;/code&gt; directory into the one you've just created. Commit and push that to git.&lt;/p&gt;

&lt;p&gt;This is also good practice for maintaining your game in the future! If you come back to this in five years, the specific version of the story format you're using might no longer be available, and tracking it down could be hard; including the exact story format bundle in your git repo helps ensure (although doesn't guarantee) your ability to edit and compile your game.&lt;/p&gt;

&lt;h3&gt;
  
  
  Getting Started with GitHub Actions
&lt;/h3&gt;

&lt;p&gt;To set up a GitHub Action, all you need to do is add a new file into your git repo.&lt;/p&gt;

&lt;p&gt;GitHub Actions are based on "workflows", which are YAML configuration files. If you add a file called &lt;code&gt;.github/workflows/build.yml&lt;/code&gt; (or any &lt;code&gt;.yml&lt;/code&gt; file inside that directory), GitHub will read that config and try to use it.&lt;/p&gt;

&lt;p&gt;That file should look like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;name: Build

on:
  push:
    branches:
      - master

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v1

      - name: Use Go 1.13
        uses: actions/setup-go@v1
        with:
          go-version: 1.13.x

      - name: build game
        run: |
          go get github.com/tmedwards/tweego
          export PATH=$PATH:$(go env GOPATH)/bin
          tweego YOUR_TWEE_FILE.twee -o dist/index.html

      - name: Deploy to Pages
        uses: peaceiris/actions-gh-pages@v3
        with:
          github_token: ${{ secrets.GITHUB_TOKEN }}
          publish_branch: gh-pages
          publish_dir: ./dist
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Be sure to swap out &lt;code&gt;YOUR_TWEE_FILE.twee&lt;/code&gt; for the actual filename, and change any other tweego settings you might need to. If you're not sure what you're doing, you probably want to leave the output file as &lt;code&gt;dist/index.html&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;This script uses &lt;a href="https://pages.github.com"&gt;GitHub Pages&lt;/a&gt; to host your game. It's a free static-site hosting service integrated right into GitHub, and it can scale to support any amount of traffic. I think it's absolutely the best and easiest way to host small websites like Twine games that don't require any sort of backend server services.&lt;/p&gt;

&lt;p&gt;If you don't want to use GH Pages to host your game, you'll want to replace the last "Deploy" step with whatever you're using instead.&lt;/p&gt;

&lt;h3&gt;
  
  
  Testing your GitHub Action
&lt;/h3&gt;

&lt;p&gt;If you make a new commit and push it to your game's master branch on GitHub, after a few minutes it should be live on the web! By default, it should be available at &lt;code&gt;https://[your-github-username].github.io/[repo-name]&lt;/code&gt;, although it's also possible to configure GitHub Pages to work with a &lt;a href="https://help.github.com/en/github/working-with-github-pages/configuring-a-custom-domain-for-your-github-pages-site"&gt;custom domain name&lt;/a&gt;. &lt;/p&gt;

&lt;p&gt;The GitHub Action can take a few minutes to compile and deploy, so be patient! You can also click through to the "Actions" tab in your repository and see the build as it progresses.&lt;/p&gt;

&lt;p&gt;For those who are interested, let's walk through what this config file is doing:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;name: Build
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This just names the workflow. It can be anything you want; it'll show up in the Actions UI.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;on:
  push:
    branches:
      - master
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This indicates the series of steps that follow will execute whenever someone pushes code to the master branch.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;jobs:
  build:
    runs-on: ubuntu-latest

    steps:
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now we've started to define the task itself. Specifically, it runs on Linux, although that doesn't really matter to us.&lt;/p&gt;

&lt;p&gt;Conceptually, a workflow is made up of a number of steps. A step can either be some code we manually write, or it can be a preset collection of actions provided by the community.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;- uses: actions/checkout@v1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This checks out the latest version of our code.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;- name: Use Go 1.13
  uses: actions/setup-go@v1
  with:
    go-version: 1.13.x
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Tweego is written in the programming language Go. We'll be compiling Tweego's code from scratch, which means we need a Go compiler. This gives us a working environment for Go code, and lets us specify which version of Go we want.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;- name: build game
    run: |
      go get github.com/tmedwards/tweego
      export PATH=$PATH:$(go env GOPATH)/bin
      tweego YOUR_TWEE_FILE.twee -o dist/index.html
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This is a custom script! The first &lt;code&gt;go get&lt;/code&gt; line downloads and compiles the Tweego tool itself. The next line does some fiddly environment setup you don't particularly need to worry about (modifying our PATH so we can just call the &lt;code&gt;tweego&lt;/code&gt; binary without specifying a full filepath). Finally, we run tweego itself.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;- name: Deploy
  uses: peaceiris/actions-gh-pages@v3
  env:
    github_token: ${{ secrets.GITHUB_TOKEN }}
    publish_branch: gh-pages
    publish_dir: ./dist
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;At this point, we have an HTML file in a directory called &lt;code&gt;dist&lt;/code&gt;. This is a &lt;a href="https://github.com/peaceiris"&gt;third-party action&lt;/a&gt; created by another GitHub user that deploys code straight to GitHub Pages. This config uses an automatically-generated access token (so it has permissions to commit/deploy), and specifies that we want to take all of the files in the &lt;code&gt;dist&lt;/code&gt; directory and publish them to the &lt;code&gt;gh-pages&lt;/code&gt; branch.&lt;/p&gt;

&lt;h2&gt;
  
  
  ...and that's it!
&lt;/h2&gt;

&lt;p&gt;And with all of that, we should be good to go!&lt;/p&gt;

&lt;p&gt;As someone used to working with more programmer-focused tools, I've found this workflow makes it WAY easier and more pleasant to work on games with Twine. Hopefully it's helpful to you too!&lt;/p&gt;

&lt;p&gt;If this is interesting to you, you might also be interested in &lt;a href="https://lazerwalker.com/playfab-twine"&gt;PlayFab-Twine&lt;/a&gt;, my tool to easily and automatically add free analytics to your Twine games. The &lt;a href="https://github.com/lazerwalker/playfab-twine"&gt;GitHub repo&lt;/a&gt; for that site is also a great example of a Twine project developed using this workflow!&lt;/p&gt;

&lt;p&gt;Drop me a note if you're using any of this stuff, I'd love to hear from you!&lt;/p&gt;

</description>
      <category>github</category>
      <category>javascript</category>
    </item>
    <item>
      <title>Making a weird GIF wall using Azure Functions and SignalR</title>
      <dc:creator>Em Lazer-Walker</dc:creator>
      <pubDate>Fri, 06 Dec 2019 19:14:42 +0000</pubDate>
      <link>https://forem.com/azure/making-a-weird-gif-wall-using-azure-functions-and-signalr-2gmm</link>
      <guid>https://forem.com/azure/making-a-weird-gif-wall-using-azure-functions-and-signalr-2gmm</guid>
      <description>&lt;p&gt;At this year’s &lt;a href="http://xoxofest.com"&gt;XOXO festival&lt;/a&gt;, one of the top-secret closing party happenings was a special live listening of &lt;a href="http://neilcic.com"&gt;Neil Cicerega&lt;/a&gt;'s latest mashup album. If you're not familiar with Neil's work, his previous album &lt;a href="http://www.neilcic.com/mouthmoods/"&gt;Mouth Moods&lt;/a&gt; might give you an idea of what was played: a weird and surprising concept album that sort of amounts to cramming an excessive amount of Pure Internet™ into your ear through mashups, references, and very clever mixing.&lt;/p&gt;

&lt;p&gt;One of the XOXO organizers approached &lt;a href="https://twitter.com/reedkavner"&gt;Reed Kavner&lt;/a&gt; and me to make some sort of interactive installation to accompany the listening party: a sort of gif wall where listeners could post GIFs and other weird Internet ephemera as a way of annotating the piece.&lt;/p&gt;

&lt;p&gt;I had just started my new job on the Microsoft &lt;a href="https://twitter.com/azureadvocates"&gt;Azure Advocates&lt;/a&gt; team, so I took this as a chance to try out a whole bunch of Azure tech for the first time!&lt;/p&gt;

&lt;h1&gt;
  
  
  A Wall of Pure Internet
&lt;/h1&gt;

&lt;p&gt;&lt;a href="https://uploads.lazerwalker.com/IMG_2162.MOV"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--GiZbp99v--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_66%2Cw_880/https://uploads.lazerwalker.com/xoxo-420p.gif" alt="Video of the wall in action"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The goal was to create a completely overwhelming wall of GIFs and text. We wanted people to be able to live-annotate the music by pulling up memes the music itself was referencing, while itself playing into a sort of Internet-y vaporwave visual aesthetic.&lt;/p&gt;

&lt;p&gt;We decided to rely on Slack rather than build out our own UI. XOXO has an active year-round Slack community, and most attendees were already logged into the festival Slack on their phones. This handled a whole bunch of hard problems for us: authentication, mapping posts to real names (important for handling Code of Conduct violations), and GIF search (including explicit content filters).&lt;/p&gt;

&lt;p&gt;The level of trust we put in our community (along with our real-name policy) meant we could also allow people to post plaintext messages instead of just GIFs. Along with that, it mattered to us that we supported all of the custom emoji that our Slack supports, since the community has built up a large collection of meaningful ones. &lt;/p&gt;

&lt;p&gt;One other conscious design decision was to not rate-limit how often anybody could post. When you post a GIF or some text, it shows up on screen and slowly grows over time, but any newer GIFs that come after yours will cover yours up. We simply set the starting size of a post based on how recently the author last posted. If somebody wanted to sit there and spam GIFs as quickly as they could, we wanted to let them do that, but making their content start smaller meant their fun wouldn't come at the expense of annoying others.&lt;/p&gt;
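&lt;p&gt;The exact numbers aren't the point, but the shape of that sizing logic is roughly the following (the constants here are illustrative, not what we actually shipped):&lt;br&gt;
&lt;/p&gt;

```javascript
// Illustrative sketch of the anti-spam sizing idea, not our production code:
// the more recently an author posted, the smaller their next post starts.
function startingScale(msSinceAuthorLastPost) {
  const fullSizeAfterMs = 60 * 1000; // hypothetical: back to full size after a minute
  const minScale = 0.25;             // hypothetical floor so spammy posts stay visible
  const t = Math.min(msSinceAuthorLastPost / fullSizeAfterMs, 1);
  return minScale + (1 - minScale) * t;
}
```

&lt;p&gt;Posting constantly still works; each post just starts out smaller, so the spam never crowds out everyone else's content.&lt;/p&gt;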

&lt;h1&gt;
  
  
  Serverless? With a long-running client?!
&lt;/h1&gt;

&lt;p&gt;While Reed built out the JS front-end (available on &lt;a href="https://github.com/reedkavner/gif-viz"&gt;GitHub&lt;/a&gt;), I was responsible for the server infrastructure to send messages to a web browser.&lt;/p&gt;

&lt;p&gt;I was interested in using &lt;a href="https://azure.microsoft.com/en-us/services/functions/?WT.mc_id=devto-blog-emwalker"&gt;Azure Cloud Functions&lt;/a&gt; to avoid needing to spin up my own server on something like EC2 or Heroku. With "serverless" tools like Azure Cloud Functions, you just upload a single free-floating function (JS in my case), and instead of you maintaining a server runtime, Azure is responsible for spinning up an instance and running your function any time somebody hits a specified HTTP endpoint. In our case, that endpoint is a webhook being triggered by a Slack API app.&lt;/p&gt;

&lt;p&gt;On the browser side, we assumed we'd use a WebSocket connection to send messages to the client. However, WebSockets require a long-lived connection. With serverless functions, we only have an execution environment at the moment our function is being called, which makes it rather difficult for the browser app to maintain a persistent WebSocket connection!&lt;/p&gt;

&lt;h2&gt;
  
  
  Enter SignalR!
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://docs.microsoft.com/en-us/azure/azure-signalr/signalr-overview?WT.mc_id=devto-blog-emwalker"&gt;SignalR&lt;/a&gt; is a technology designed to make it easy for servers to broadcast real-time messages to various clients. It’s different from WebSockets in that it’s unidirectional — it can only be used to send messages from servers to clients, not the other way around. &lt;/p&gt;

&lt;p&gt;It's mostly meant for larger, more enterprise-focused uses: it gracefully handles things that WebSockets doesn't, like more complex authentication and connection handshakes. It operates at a higher level of abstraction than WebSockets: by default, it even uses WebSockets in the browser as its transport mechanism, but it can fall back to alternate methods (e.g. polling) automatically without you needing to worry about it as a developer.&lt;/p&gt;

&lt;p&gt;We don't care about the security or reliability promises of SignalR, but we do care that Azure offers a hosted SignalR service that can interoperate with Azure Cloud Functions. This lets us overcome the problem of needing a long-running connection to a short-lived server!&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--l20AYvOS--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/rpg03awjwn36ok6akxgf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--l20AYvOS--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/rpg03awjwn36ok6akxgf.png" alt="Architecture diagram"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The browser client connects to the Azure SignalR service, which maintains that connection for as long as the browser is open. In the meantime, any time an Azure Function instance spins up and executes, it can independently connect to the SignalR service and push messages to the queue. We get the flexibility of using serverless functions to build our node app, but can still maintain a long-running WebSocket connection to the client app. Neat!&lt;/p&gt;

&lt;h2&gt;
  
  
  Using SignalR with Cloud Functions: Declaring Inputs and Outputs
&lt;/h2&gt;

&lt;p&gt;I'm not going to explain in here how to get set up with Azure Functions — check out &lt;a href="https://docs.microsoft.com/en-us/azure/azure-functions/functions-create-first-function-vs-code?WT.mc_id=devto-blog-emwalker"&gt;this tutorial&lt;/a&gt; for getting started using the official &lt;a href="https://code.visualstudio.com?WT.mc_id=devto-blog-emwalker"&gt;VS Code&lt;/a&gt; extension, which is by far the easiest way to manage the fiddly bits — but I do want to talk a bit about how I integrated SignalR with my cloud Function.&lt;/p&gt;

&lt;p&gt;Azure Functions have a really elegant way of injecting external dependencies into your code. An Azure Function is just a single file with a single code function, but accompanying it is a &lt;code&gt;function.json&lt;/code&gt; config file that specifies all the inputs and outputs the function accepts. Add a bunch of dependencies to your &lt;code&gt;function.json&lt;/code&gt; file, and they'll automatically be injected into your function as arguments!&lt;/p&gt;

&lt;p&gt;Setting up SignalR requires two different functions. First, there's a short setup handshake: a browser that wants to connect to our SignalR instance needs to hit an HTTP endpoint that returns the magic connection string required to complete the connection. Here's the &lt;code&gt;function.json&lt;/code&gt; for that endpoint:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"disabled"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="kc"&gt;false&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"bindings"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"authLevel"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"anonymous"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"type"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"httpTrigger"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"direction"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"in"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"name"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"req"&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"type"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"http"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"direction"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"out"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"name"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"res"&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"type"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"signalRConnectionInfo"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"name"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"connectionInfo"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"hubName"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"chat"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"direction"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"in"&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;

&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;The accompanying function itself just returns that injected connection info to the browser:&lt;br&gt;
&lt;/p&gt;


&lt;div class="highlight"&gt;&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="nx"&gt;module&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;exports&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="kd"&gt;function&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;context&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;req&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;connectionInfo&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;context&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;res&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;json&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;connectionInfo&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="p"&gt;};&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;You can see here we're setting up a function that has standard ExpressJS-style request/response inputs/outputs, as well as an extra &lt;code&gt;connectionInfo&lt;/code&gt; argument that, as specified in our &lt;code&gt;function.json&lt;/code&gt; file, contains SignalR connection info for a hub called "chat".&lt;/p&gt;

&lt;p&gt;Our actual "post a message" Slack webhook function has a slightly different &lt;code&gt;function.json&lt;/code&gt; file, as it uses the SignalR connection as an output (essentially a message queue it pushes messages onto) rather than an input:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;disabled&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;false&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;bindings&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[{&lt;/span&gt;
      &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;authLevel&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;anonymous&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;type&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;httpTrigger&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;direction&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;in&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;name&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;req&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;methods&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;
        &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;post&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;
      &lt;span class="p"&gt;]&lt;/span&gt;
    &lt;span class="p"&gt;},&lt;/span&gt;
    &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;type&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;http&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;direction&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;out&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;name&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;res&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;
    &lt;span class="p"&gt;},&lt;/span&gt;
    &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;type&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;signalR&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;name&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;$return&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;hubName&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;chat&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;direction&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;out&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;The &lt;code&gt;"name": "$return"&lt;/code&gt; property means that whatever our function returns ends up getting pushed onto the &lt;code&gt;"chat"&lt;/code&gt; SignalR queue as a message, which in turn gets pushed to all connected SignalR clients.&lt;/p&gt;

&lt;p&gt;With these two functions in place, the actual client code to connect to the SignalR queue is fairly simple:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;connection&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nx"&gt;signalR&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;HubConnectionBuilder&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
    &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;withUrl&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s2"&gt;`https://xoxo-closing-party.azurewebsites.net/api`&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;configureLogging&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;signalR&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;LogLevel&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;Information&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;build&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;

&lt;span class="nx"&gt;connection&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;on&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;newMessage&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="kd"&gt;function&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;m&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;addPost&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;m&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt; &lt;span class="c1"&gt;// m is a JSON blob containing whatever our function sends&lt;/span&gt;
&lt;span class="p"&gt;});&lt;/span&gt;

&lt;span class="nx"&gt;connection&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;onclose&lt;/span&gt;&lt;span class="p"&gt;(()&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;log&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;disconnected&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;));&lt;/span&gt;

&lt;span class="nx"&gt;connection&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;start&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
    &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;then&lt;/span&gt;&lt;span class="p"&gt;(()&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;log&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;Connected!&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt;
    &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="k"&gt;catch&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;error&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;You'll notice the SignalR library itself is responsible for hitting the handshake endpoint and then subscribing to new messages.&lt;/p&gt;

&lt;h2&gt;
  
  
  Emojis are Hard!
&lt;/h2&gt;

&lt;p&gt;With this code so far, my backend was sending messages to Reed's JS webapp containing message text and, if applicable, GIF data. But all emoji were coming through as Slack-style text shortnames: for example, instead of the "🎉" emoji, the messages contained the string &lt;code&gt;:tada:&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;Fixing this actually meant handling two totally separate things: proper Unicode emoji, and our Slack instance's custom emoji set.&lt;/p&gt;

&lt;p&gt;For “official” emoji, someone else had already written a quick script to fetch Slack's mapping. This CLI one-liner, which I modified from a version I found online, gave me a JSON object mapping each short name to its Unicode code point.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight shell"&gt;&lt;code&gt;curl &lt;span class="nt"&gt;-s&lt;/span&gt; https://raw.githubusercontent.com/iamcal/emoji-data/master/emoji.json | &lt;span class="se"&gt;\&lt;/span&gt;
  npx ramda-cli &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="s1"&gt;'reject (.unified.includes("-"))'&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="s1"&gt;'chain (emoji) -&amp;gt; emoji.short_names.map -&amp;gt; {...emoji, short_name: it}'&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="s1"&gt;'sort-by (.short_name)'&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="s1"&gt;'index-by (.short_name)'&lt;/span&gt; &lt;span class="s1"&gt;'map -&amp;gt; "0x#{it.unified}"'&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; emoji.json
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;





&lt;div class="highlight"&gt;&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="err"&gt;...&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"abacus"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"0x1F9EE"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"abc"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"0x1F524"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"abcd"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"0x1F521"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"accept"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"0x1F251"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"adult"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"0x1F9D1"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"aerial_tramway"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"0x1F6A1"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"airplane_arriving"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"0x1F6EC"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"airplane_departure"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"0x1F6EB"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"alarm_clock"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"0x23F0"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"alien"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"0x1F47D"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"ambulance"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"0x1F691"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"amphora"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"0x1F3FA"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"anchor"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"0x2693"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"angel"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"0x1F47C"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"anger"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"0x1F4A2"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"angry"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"0x1F620"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"anguished"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"0x1F627"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"ant"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"0x1F41C"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"apple"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"0x1F34E"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"aquarius"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"0x2652"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="err"&gt;...&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;From there, I was able to use JavaScript's built-in string replacement to swap every valid emoji shortname for the corresponding Unicode character:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;replaceEmoji&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;message&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;standardEmojiMap&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;require&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;./emoji&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
  &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nx"&gt;message&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;replace&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sr"&gt;/&lt;/span&gt;&lt;span class="se"&gt;\:(&lt;/span&gt;&lt;span class="sr"&gt;.*&lt;/span&gt;&lt;span class="se"&gt;?)\:&lt;/span&gt;&lt;span class="sr"&gt;/g&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;original&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;name&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;standardEmojiMap&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;name&lt;/span&gt;&lt;span class="p"&gt;])&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nb"&gt;String&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;fromCodePoint&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;standardEmojiMap&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;name&lt;/span&gt;&lt;span class="p"&gt;]);&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;else&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="c1"&gt;// This isn't in our list of Unicode emoji — either it's a custom emoji or nonsense&lt;/span&gt;
      &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nx"&gt;original&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
  &lt;span class="p"&gt;});&lt;/span&gt;
&lt;span class="p"&gt;};&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Custom emoji were a bit trickier. Slack offers an &lt;a href="https://api.slack.com/methods/emoji.list"&gt;API endpoint&lt;/a&gt; to grab the custom emoji for any given Slack instance. &lt;/p&gt;

&lt;p&gt;Crucially, although it returns a map whose keys are emoji names, the values can be one of two things: a URL to a CDN-hosted image for that emoji, or the name of another emoji that it's an alias for. So when doing my own find/replace, I needed to check whether a given value was an alias and, if so, resolve it. Once I eventually landed on an actual URL, I replaced the &lt;code&gt;:emoji:&lt;/code&gt; with an HTML &lt;code&gt;&amp;lt;img&amp;gt;&lt;/code&gt; tag pointed at the CDN URL.&lt;/p&gt;
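
&lt;p&gt;That alias-following logic can be sketched like so (hypothetical function and variable names; &lt;code&gt;customEmojiMap&lt;/code&gt; stands in for the &lt;code&gt;emoji.list&lt;/code&gt; response, where alias values take the form &lt;code&gt;alias:other_name&lt;/code&gt;):&lt;/p&gt;

```javascript
// Sketch only: resolve a custom emoji shortname to an img tag.
// customEmojiMap values are either a CDN image URL or "alias:other_name".
function resolveCustomEmoji(customEmojiMap, name) {
  let value = customEmojiMap[name];
  // Follow alias chains, capped at 10 hops so a cycle can't loop forever
  for (let hops = 0; hops !== 10; hops += 1) {
    if (value === undefined) { break; }
    if (!value.startsWith("alias:")) { break; }
    value = customEmojiMap[value.slice("alias:".length)];
  }
  if (value === undefined) { return undefined; }       // unknown emoji
  if (value.startsWith("alias:")) { return undefined; } // unresolvable cycle
  // "\u003c" and "\u003e" are just escaped angle brackets
  return "\u003cimg src=\"" + value + "\" alt=\"" + name + "\" /\u003e";
}
```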

&lt;p&gt;This made things slightly trickier for Reed: however he was rendering this text on-screen, he now needed to make sure that &lt;code&gt;&amp;lt;img&amp;gt;&lt;/code&gt; tags were rendered properly as HTML, but also do that in a way where &lt;code&gt;&amp;lt;script&amp;gt;&lt;/code&gt; tags wouldn't be executed as arbitrary JavaScript. It added some complexity, but we concluded that was easier than alternative methods of specifying "this image should be injected at this point within the text".&lt;/p&gt;

&lt;p&gt;I cached this custom emoji data from Slack in an Azure CosmosDB database. It's not that our custom emoji updated all that frequently, but I needed that caching infrastructure anyway to handle fetching user names as well. &lt;/p&gt;

&lt;p&gt;Messages from Slack only contained unique user IDs, not human-readable names, so just like emoji I ended up needing to make some API calls to Slack's &lt;a href="https://api.slack.com/methods/users.list"&gt;user list&lt;/a&gt; API endpoint so I could do my own lookup.&lt;/p&gt;
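
&lt;p&gt;Once the user list is cached, the lookup itself is simple. A sketch, assuming a hypothetical &lt;code&gt;userMap&lt;/code&gt; keyed by user ID and shaped like the &lt;code&gt;users.list&lt;/code&gt; response:&lt;/p&gt;

```javascript
// Sketch only: map a Slack user ID to something human-readable.
function displayName(userMap, userId) {
  const user = userMap[userId];
  if (user === undefined) {
    return userId; // fall back to the raw ID if the cache missed
  }
  // Prefer the display name; fall back to the user's real name
  return user.profile.display_name || user.real_name;
}
```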

&lt;p&gt;I'm not going to go into the process of using CosmosDB right now: our name cache (but not our emoji cache!) ended up falling over in production, and it was suggested to me after the fact that &lt;a href="https://azure.microsoft.com/en-ca/services/storage/tables/?WT.mc_id=devto-blog-emwalker"&gt;Azure Table Storage&lt;/a&gt; would have been a better fit for our needs.&lt;/p&gt;

&lt;h1&gt;
  
  
  The End Result
&lt;/h1&gt;

&lt;p&gt;...and that's (more or less) all there was to it! I glossed over a whole lot here, but you can check out the &lt;a href="https://github.com/lazerwalker/xoxo-closing-party"&gt;GitHub repo&lt;/a&gt; to see the code itself. I was impressed by how well Azure Functions and SignalR worked: messages came through within a second or two of people sending them, it scaled effortlessly even when we were getting hundreds of messages per minute, and everybody loved the installation!&lt;/p&gt;

&lt;p&gt;I'd love to see someone else take our code (or just inspiration from us) and make something similar! Shout at me on &lt;a href="https://twitter.com/lazerwalker"&gt;Twitter&lt;/a&gt; if you do anything cool like this.&lt;/p&gt;

</description>
      <category>azure</category>
      <category>serverless</category>
      <category>javascript</category>
      <category>node</category>
    </item>
  </channel>
</rss>
