<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Jeremy Katz</title>
    <description>The latest articles on Forem by Jeremy Katz (@katzj).</description>
    <link>https://forem.com/katzj</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F196490%2Fab3ecbe6-7903-45e0-9b58-7426d29a3cc3.png</url>
      <title>Forem: Jeremy Katz</title>
      <link>https://forem.com/katzj</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/katzj"/>
    <language>en</language>
    <item>
      <title>How Google manages open source</title>
      <dc:creator>Jeremy Katz</dc:creator>
      <pubDate>Thu, 23 Jul 2020 14:36:56 +0000</pubDate>
      <link>https://forem.com/tidelift/how-google-manages-open-source-hpm</link>
      <guid>https://forem.com/tidelift/how-google-manages-open-source-hpm</guid>
      <description>&lt;p&gt;Many people know that Google uses a single repository, the monorepo, to store all internal source code. The Google monorepo has been &lt;a href="https://danluu.com/monorepo/" rel="noopener"&gt;blogged about&lt;/a&gt;, &lt;a href="https://www.youtube.com/watch?v=W71BTkUbdqE" rel="noopener"&gt;talked about at conferences&lt;/a&gt;, and written up in &lt;a href="https://cacm.acm.org/magazines/2016/7/204032-why-google-stores-billions-of-lines-of-code-in-a-single-repository/fulltext" rel="noopener"&gt;Communications of the ACM&lt;/a&gt;. &lt;/p&gt;

&lt;p&gt;Most of this has focused on how the monorepo impacts Google developer productivity and the ability to have software written by one team and used by many other teams. But I haven’t seen as much written about how it also impacts the way teams within Google consume and use open source.&lt;/p&gt;

&lt;h2&gt;The benefits of the Google monorepo&lt;/h2&gt;

&lt;p&gt;Just like internal code, third-party open source code is also imported into the monorepo &lt;a href="https://opensource.google/docs/thirdparty/" rel="noopener"&gt;under a /third_party prefix&lt;/a&gt;. There are a number of benefits to this approach:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Single version&lt;/strong&gt;. Much like with internally developed libraries at Google, importing open source code into the monorepo ensures that the same version of a library is used in all applications rather than having a spaghetti of versions to understand and support across the many applications within Google.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Ease of updates&lt;/strong&gt;. With a single version of the code in one place, updating an open source library either for normal maintenance or because of a critical security issue is much easier. You just have to update the project copy in /third_party and every application in the Google monorepo now gets built with that new version. You do, though, have to ensure that you haven’t broken the build of anything else in the monorepo.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Dependency clarity&lt;/strong&gt;. By having a single location where every dependency is stored, Google engineers can easily see which things within the monorepo depend on a given open source library. Thus when doing an update for a security vulnerability, the developers who own the individual applications can easily be notified that they need to deploy new binaries with the fixed dependency.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Simplified licensing review&lt;/strong&gt;. Licensing reviews can be done in a single location rather than requiring a new review any time an application wants to depend on a new-to-it library. As you can imagine, at Google scale, vast numbers of open source projects have already had their licenses reviewed and approved for use inside of Google.&lt;/li&gt;
&lt;/ul&gt;
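&lt;p&gt;As a rough illustration of the dependency-clarity point (a minimal sketch with made-up application and library names, not Google’s actual tooling), a single dependency map over /third_party can be inverted to answer “who depends on this library?” in one lookup:&lt;/p&gt;

```python
from collections import defaultdict

# Hypothetical dependency data: each application lists the
# /third_party libraries it builds against.
deps = {
    "apps/mail": ["third_party/zlib", "third_party/openssl"],
    "apps/search": ["third_party/zlib"],
    "apps/photos": ["third_party/libjpeg", "third_party/openssl"],
}

# Invert the map so that a security fix to one library can be
# traced to every application that must rebuild and redeploy.
rdeps = defaultdict(set)
for app, libraries in deps.items():
    for lib in libraries:
        rdeps[lib].add(app)

print(sorted(rdeps["third_party/openssl"]))
# prints ['apps/mail', 'apps/photos']
```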

&lt;p&gt;It turns out that the same benefits Google gets from its monorepo can also be valuable to most other engineering organizations using open source—even those not operating at Google scale. But most engineering organizations don’t have the staffing or financial resources to secure those benefits on their own.&lt;/p&gt;

&lt;p&gt;After all, one of the main benefits of using open source to begin with is having access to a lot of common infrastructure components without having to write them from scratch yourself. &lt;/p&gt;

&lt;p&gt;But most development teams still need to have a high degree of confidence that the software that they are using is being properly maintained. They need confirmation that it is licensed in a way that is acceptable to the organization, and they need to know that it is secure, or be notified when there are vulnerabilities. &lt;/p&gt;

&lt;p&gt;At a basic level, most developers would love to have access to “known good” components like Google’s developers get when pulling from the monorepo, rather than the dependency roulette of bringing in new open source components without any sort of sanity check.&lt;/p&gt;

&lt;h2&gt;How to manage open source like Google&lt;/h2&gt;

&lt;p&gt;Every organization could benefit from managing open source like Google does. Fortunately, the Tidelift Subscription makes it easy for you to &lt;a href="https://blog.tidelift.com/if-your-open-source-dependencies-are-a-mess-weve-got-you-introducing-catalogs" rel="noopener"&gt;create customized catalogs&lt;/a&gt; of open source components that provide many of the benefits of Google's approach, without the need to maintain your own fork or invest in creating and maintaining your own monorepo.&lt;/p&gt;

&lt;p&gt;With the Tidelift Subscription, you’ll be able to see the catalog of open source packages and releases you use across all of your applications. You can approve new packages as developers need them with workflow automation—developers request packages, and managers or architects review and approve. &lt;/p&gt;

&lt;p&gt;You can disallow certain packages or package releases based on known security vulnerabilities or licensing concerns. Or you can centrally flag that a largely theoretical vulnerability can be ignored, not just once but by every development team, without requiring each team to painstakingly review and assess it on its own just to pass a pre-deployment scanner check. &lt;/p&gt;

&lt;h2&gt;Partnering with open source maintainers&lt;/h2&gt;

&lt;p&gt;We even take it a step further by partnering with the maintainers of many open source packages to help ensure that they are well maintained, have clear licensing, and get timely security fixes as vulnerabilities are discovered. This is a win-win, because the more subscribers who use a project, the more its maintainers get paid, which means they have even more time and incentive to keep their projects well maintained and up to date.&lt;/p&gt;

&lt;p&gt;As a Tidelift subscriber, you can set your own policies for how you would like to use open source projects within your organization—or you can just choose to accept our guidance entirely.&lt;/p&gt;

&lt;h2&gt;Customizing your catalogs&lt;/h2&gt;

&lt;p&gt;A catalog of managed open source within Tidelift can be consumed in lots of different ways.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Your developers can ensure that they are using appropriate packages and versions with our command line tool and request new ones as they discover a need. &lt;/li&gt;
&lt;li&gt;You can add a check as a part of your continuous integration pipeline to ensure that nothing is built that uses components that haven’t been vetted. &lt;/li&gt;
&lt;li&gt;You can plug into a central artifact manager (such as &lt;a href="https://docs.tidelift.com/article/69-using-with-jfrog-artifactory" rel="noopener"&gt;JFrog Artifactory&lt;/a&gt;) to only allow approved components to be downloaded. &lt;/li&gt;
&lt;/ol&gt;
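&lt;p&gt;The continuous integration check in step 2 can be sketched generically (a hypothetical example with invented package names, not the actual Tidelift command line tool): compare the releases an application locks against a centrally approved catalog, and fail the build on anything unvetted.&lt;/p&gt;

```python
# Approved catalog: (package, release) pairs that have passed
# licensing and security review. All names here are illustrative.
approved = {("urllib3", "1.25.9"), ("requests", "2.23.0")}

# What the application's lockfile actually resolves to.
lockfile = [("urllib3", "1.25.9"), ("leftpad", "0.1.2")]

unvetted = [pkg for pkg in lockfile if pkg not in approved]
if unvetted:
    print("unapproved dependencies:", unvetted)
    # A real CI job would now fail the build, e.g. raise SystemExit(1).
```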

&lt;p&gt;Each option can be used on its own, but for the most effective deployment, use all three!&lt;/p&gt;

&lt;p&gt;If you are interested in learning more about best practices for managing open source dependencies, we can help. &lt;a href="https://tidelift.com/about/contact" rel="noopener"&gt;Talk to one of our experts&lt;/a&gt; or read more about &lt;a href="https://tidelift.com/subscription/the-tidelift-guide-to-managed-open-source" rel="noopener"&gt;the Tidelift approach to managed open source here&lt;/a&gt;. &lt;/p&gt;

&lt;p&gt;&lt;em&gt;Photo by &lt;a href="https://unsplash.com/@jbcreate_?utm_source=unsplash&amp;amp;utm_medium=referral&amp;amp;utm_content=creditCopyText"&gt;Joseph Barrientos&lt;/a&gt; on &lt;a href="https://unsplash.com/s/photos/google-open-source?utm_source=unsplash&amp;amp;utm_medium=referral&amp;amp;utm_content=creditCopyText"&gt;Unsplash&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

</description>
      <category>google</category>
      <category>opensource</category>
      <category>monorepo</category>
      <category>devops</category>
    </item>
    <item>
      <title>Spring cleaning: 3 tips for getting your application development house in order</title>
      <dc:creator>Jeremy Katz</dc:creator>
      <pubDate>Tue, 26 May 2020 18:20:31 +0000</pubDate>
      <link>https://forem.com/tidelift/spring-cleaning-3-tips-for-getting-your-application-development-house-in-order-11bm</link>
      <guid>https://forem.com/tidelift/spring-cleaning-3-tips-for-getting-your-application-development-house-in-order-11bm</guid>
      <description>&lt;p&gt;Despite some indications to the contrary where I live in the northeast US, it is finally spring in the northern hemisphere—which many people traditionally view as the best time to take on do-it-yourself projects and improvements around the house that have been piling up over the winter.&lt;/p&gt;

&lt;p&gt;Painting the front staircase. Finally organizing the garage. Tackling that garden overhaul.&lt;/p&gt;

&lt;p&gt;When it comes to your job, spring can also be a good chance to invest some time and energy in improvements to the applications that your development team is building and how you work on them. There are always plenty of back-burnered items that you’ve been meaning to get around to.&lt;/p&gt;

&lt;p&gt;Things like ensuring you have a process to respond to security problems. Understanding how to answer obscure licensing questions for the components you use. Making an inventory of all of the open source components you use, like your boss asked you to six months ago. Not to mention, of course, plenty of things that are specific to your own application architecture.&lt;/p&gt;

&lt;p&gt;Here’s a starting point for your to-do list:&lt;/p&gt;

&lt;h2&gt;1. Inventory what you have&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Make a list of all of the third-party SaaS services that are being used across your entire organization.&lt;/strong&gt; Are you paying for software or seats that you’re not using? Do you have good security controls in place, for example through a single sign-on provider like Okta or OneLogin?&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Make a list of your in-house applications. &lt;/strong&gt;Where are they in their lifecycle? Who’s on point for maintaining them? Are they aligned with your broader IT strategy? For example, can they be migrated to the public cloud, or do you need to start thinking about re-architecture or retirement of these services for the next growth period ahead?&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Make a list of your open source components. &lt;/strong&gt;Now that you’ve inventoried your applications, dig into them to discover what third-party open source they are using. &lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;2. Clean it up to eliminate inefficiency&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Cut unneeded expenses.&lt;/strong&gt; With your newfound visibility and an idea of your near-term priorities, cancel SaaS services which you aren’t using anymore and ensure access to these services is locked down and centrally administered.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Communicate the plan.&lt;/strong&gt; Now that you’ve got a plan around your internal application development, come up with a communications plan and roll it out to the application developers on your team, starting with the high-level strategy. Be sure every application has a clear owner. Your developers will be happy to hear about the big picture and understand how their work fits into it. &lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Centralize your open source management process.&lt;/strong&gt; Now that you’ve got a list of your current open source components and a roadmap for where you want to go, finally tackle that task of creating a central repository of known-good open source components that will meet your standards today and in the future. To get the heavy lift off your to-do list, consider partnering with the independent open source maintainers behind the projects that you use via &lt;a href="https://tidelift.com/" rel="noopener"&gt;the Tidelift Subscription&lt;/a&gt;.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;3. Plan the next steps&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Plan for your internal apps. &lt;/strong&gt;Create a database (or just a spreadsheet or wiki page!) of your custom-built applications with key attributes such as the primary owner and reference to your company’s security, licensing, and maintenance policies. &lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Plan your open source management policy. &lt;/strong&gt;Establish a baseline policy for open source components that go into your applications. How will you handle security vulnerabilities? What open source licenses work for your organization? Do you want to favor actively maintained packages over found source code? (You can learn more about how to work through these questions with &lt;a href="https://tidelift.com/about/resources/guides" rel="noopener"&gt;these resources&lt;/a&gt; from Tidelift.)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;And there you go—done and dusted. By taking time to get your arms around the application development resources you’re already committed to and making a plan for the future, you will be in a better place to have your development team working on the important things—like building applications that matter to your customers.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Photo credit: &lt;a href="https://unsplash.com/@jeshoots"&gt;JEShoots&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

</description>
      <category>productivity</category>
      <category>technology</category>
      <category>motivation</category>
      <category>opensource</category>
    </item>
    <item>
      <title>New to working from home? Here’s how to make remote work work.</title>
      <dc:creator>Jeremy Katz</dc:creator>
      <pubDate>Tue, 07 Apr 2020 13:10:13 +0000</pubDate>
      <link>https://forem.com/tidelift/new-to-working-from-home-here-s-how-to-make-remote-work-work-39ib</link>
      <guid>https://forem.com/tidelift/new-to-working-from-home-here-s-how-to-make-remote-work-work-39ib</guid>
      <description>&lt;p&gt;Over the past few weeks, companies employing millions of workers have had to figure out how they can make remote work work. Organizations across the technology industry and beyond have moved to working from home as a temporary solution for keeping employees and communities healthy.&lt;/p&gt;

&lt;p&gt;When we started Tidelift a few years ago, we knew that we wanted to build a distributed team. Remote work wasn’t foreign to our four founders—we all have backgrounds working on open source software, where the contributions come from people all around the world, and collaboration without co-location is key. &lt;/p&gt;

&lt;p&gt;So we’ve been incredibly intentional about designing our culture around the remote work experience because we know the way we communicate and interact on a daily basis affects our happiness, productivity, and success as a business. &lt;/p&gt;

&lt;p&gt;While we got many things right from the beginning, there are other areas where we have done poorly but learned and improved over time. Here are some of the key lessons I can share from our experience.&lt;/p&gt;

&lt;h3&gt;Create a consistent meeting experience&lt;/h3&gt;

&lt;p&gt;If one person is on video, everyone is on video from their own machines. We didn’t always work this way, but when we tried this, it was instantly a better experience for everyone. It’s critical, particularly for teams that aren’t used to working from home, that everyone is able to see everyone else’s face.&lt;/p&gt;

&lt;p&gt;We even use this same technique in interviews—we have pairs of folks talking with candidates at the same time. If your team is even partially remote, this is the single biggest tip I can give you to make the experience better. &lt;/p&gt;

&lt;h3&gt;Design group social opportunities&lt;/h3&gt;

&lt;p&gt;One example of a virtual social activity is our daily “water cooler.” The basic idea is that we set aside 15 minutes where we all join a shared video call, with no agenda. We shoot the breeze, talk about the newest board games, what we watched on TV last night, what the dog is doing right now, introduce a new employee, whatever. &lt;/p&gt;

&lt;p&gt;We have conversations about so many fun things and it helps us all be humans together. Different people’s interests are represented and the conversation meanders and even sometimes gets quiet. But that’s okay. We appreciate the chance to be a little social and break the remote work solitude for a few minutes, while seeing people we wouldn’t otherwise get a chance to see and talk with. &lt;/p&gt;

&lt;p&gt;If your team is new to working from home, creating a chance to talk as a group about topics that have nothing to do with your business can help maintain a sense of normalcy during a tough time.&lt;/p&gt;

&lt;h3&gt;Engineer 1:1 social situations&lt;/h3&gt;

&lt;p&gt;Not everyone is extroverted and wants to jump into a conversation with dozens of people. So we use &lt;a href="https://donut.ai"&gt;donut.ai&lt;/a&gt; to facilitate random pairings between two people so they can learn more about each other in a smaller setting. Some of our team members do this while walking to show coworkers cool parts of their neighborhood or home working space.&lt;/p&gt;

&lt;h3&gt;Standardize communication&lt;/h3&gt;

&lt;p&gt;Pick one preferred communication mechanism for the company. It’s important to have a primary place to go for information and communication, and we have chosen to use Slack as our primary tool. The ability to have low-latency discussion is an important advantage over email and gets us closer to the way people interact in person.&lt;/p&gt;

&lt;p&gt;Slack also helps us “default to open,” where as many conversations as possible are happening on public channels where every employee can access them, as opposed to email, where only the people who are copied can follow along. Side benefit: this also makes it possible for new employees to catch up on discussions that happened before they arrived. &lt;/p&gt;

&lt;p&gt;We use Zoom for video calls and have a standing expectation that if you’ve been talking in circles about something on Slack for more than 10-15 minutes, you start a video chat to increase the bandwidth and reduce the latency even more.&lt;/p&gt;

&lt;p&gt;Along these lines, another good policy is to ensure people don’t have to read everything. One of the criticisms of using Slack heavily is “there is so much to read.” But for us, Slack is the virtual office. In a physical office, conversations happen all around you that you may miss; the same will happen in your virtual office.&lt;/p&gt;

&lt;h3&gt;Ensure decisions are made remote-first&lt;/h3&gt;

&lt;p&gt;Many of us have worked in organizations that supported remote work, yet all of the important decisions were made by people who were co-located in an office. We learned from this not to marginalize people who are working from home. Now, not least for employee morale, it’s more important than ever not to make decisions in smaller groups than would be normal for your company under typical working conditions. &lt;/p&gt;

&lt;h3&gt;Focus on written communication&lt;/h3&gt;

&lt;p&gt;Written communication becomes the default when working from home, so you have to focus on doing it well—and prioritize the time required to communicate in a distributed team. This can come out in ways that aren’t obvious: writing design docs around features that will be built, descriptive text for designs to try to flesh out the details, or writing project plans to coordinate across people and teams.&lt;/p&gt;

&lt;p&gt;Know that text is lossy and your teammates mean well. We are all occasionally short with one another, and in text, communication can seem even more abrupt. Remember that we’re all in this together, and gently let someone know (in private) if you see them coming across more harshly than they intended.&lt;/p&gt;

&lt;p&gt;Those are some of the most important things we’ve learned along the way about making remote work successful. We hope it’s helpful to you as you think about how your company can approach designing a productive home work environment for the coming months.  &lt;/p&gt;

&lt;p&gt;All the above said, remember that work can be challenging right now for people. While we have been set up for remote work from the beginning, most of us have a degree of stress and anxiety about world events that will make some days harder than others. &lt;/p&gt;

&lt;p&gt;Also, in addition to the normal challenges of working from home, we are now working from home with partners, kids, or roommates in the house as well, which adds other complications. And parents may also be trying to give their kids a sense of normalcy and keep their learning going. &lt;/p&gt;

&lt;p&gt;So be empathetic to your coworkers and don’t write any current challenges off as just the result of working from home. I think that my friend Mark said it well last week:&lt;br&gt;
&lt;/p&gt;
&lt;blockquote class="ltag__twitter-tweet"&gt;

  &lt;div class="ltag__twitter-tweet__main"&gt;
    &lt;div class="ltag__twitter-tweet__header"&gt;
      &lt;img class="ltag__twitter-tweet__profile-image" src="https://res.cloudinary.com/practicaldev/image/fetch/s--tWsW2Gh2--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://pbs.twimg.com/profile_images/1245782678626074626/DgnrMFEe_normal.jpg" alt="Mark Imbriaco @ Home profile image"&gt;
      &lt;div class="ltag__twitter-tweet__full-name"&gt;
        Mark Imbriaco @ Home
      &lt;/div&gt;
      &lt;div class="ltag__twitter-tweet__username"&gt;
        @markimbriaco
      &lt;/div&gt;
      &lt;div class="ltag__twitter-tweet__twitter-logo"&gt;
        &lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--P4t6ys1m--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://practicaldev-herokuapp-com.freetls.fastly.net/assets/twitter-f95605061196010f91e64806688390eb1a4dbc9e913682e043eb8b1e06ca484f.svg" alt="twitter logo"&gt;
      &lt;/div&gt;
    &lt;/div&gt;
    &lt;div class="ltag__twitter-tweet__body"&gt;
      I've worked from home for almost 20 years before this job. I'm very comfortable working from home. And I'm struggling right now to maintain any sort of focus or feel like I'm getting much useful done. &lt;br&gt;&lt;br&gt;But you know what? I'd be struggling exactly the same way in the office.
    &lt;/div&gt;
    &lt;div class="ltag__twitter-tweet__date"&gt;
      16:04 - 23 Mar 2020
    &lt;/div&gt;
  &lt;/div&gt;
&lt;/blockquote&gt;


</description>
      <category>remote</category>
      <category>wfh</category>
    </item>
    <item>
      <title>Why coordinated security vulnerability disclosure policies are important</title>
      <dc:creator>Jeremy Katz</dc:creator>
      <pubDate>Thu, 23 Jan 2020 16:20:10 +0000</pubDate>
      <link>https://forem.com/tidelift/why-coordinated-security-vulnerability-disclosure-policies-are-important-3k6k</link>
      <guid>https://forem.com/tidelift/why-coordinated-security-vulnerability-disclosure-policies-are-important-3k6k</guid>
      <description>&lt;p&gt;We believe that working with maintainers to create coordinated security vulnerability policies is important. Why? Here’s one story to illustrate.&lt;/p&gt;

&lt;p&gt;Last year, a new &lt;a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-11324"&gt;security vulnerability&lt;/a&gt; was found in the &lt;a href="https://tidelift.com/subscription/pkg/pypi-urllib3"&gt;urllib3 library&lt;/a&gt;—a powerful HTTP client for Python. If you are using Python, then you’re probably using urllib3. &lt;/p&gt;

&lt;p&gt;When one of the core developers of Python 3, Christian Heimes, discovered this security vulnerability, he followed the disclosure policy on the urllib3 GitHub page, which gave instructions on how to notify the maintainers via Tidelift. Tidelift works with all of our participating maintainers to set up coordinated security vulnerability disclosure policies for their projects, which helps avoid risky &lt;a href="https://blog.tidelift.com/enough-of-zero-day-fire-drills"&gt;zero-day security vulnerability scenarios&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Tidelift then took the following measures:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;We worked with MITRE to coordinate the allocation of a CVE for the vulnerability. CVEs provide an industry standard way to refer to a vulnerability across vendors. &lt;/li&gt;
&lt;li&gt;Next, we collaborated with the urllib3 maintainers to implement a fix and have it tested by the original reporter.&lt;/li&gt;
&lt;li&gt;We alerted our subscribers about the existence of this new vulnerability.&lt;/li&gt;
&lt;li&gt;In addition to the information on the security vulnerability’s existence, we also gave subscribers information on which new releases would resolve the vulnerability in their codebases.&lt;/li&gt;
&lt;li&gt;We linked the release notes for users to understand any other changes present in the urllib3 update.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;This process—which historically has often taken months with many open source projects—all occurred within a single day. &lt;/p&gt;

&lt;p&gt;If the package hadn’t had a maintainer watching over it, a scenario like this might require that your team spend time forking the library, patching it yourselves, and crossing your fingers that an official patch would be released before you descend into &lt;a href="https://blog.tidelift.com/dependency-hell-is-inevitable"&gt;dependency hell&lt;/a&gt;. &lt;/p&gt;

&lt;p&gt;This is where Tidelift helps. Tidelift ensures that there are maintainers standing behind covered packages who have the financial incentives to fix problems quickly once they are discovered.&lt;/p&gt;

&lt;p&gt;In the case of urllib3, all of this was handled before our customers even knew there was an issue. This same scenario has been repeated a number of times since we launched our security vulnerability disclosure process in December 2018.&lt;/p&gt;

&lt;p&gt;"Tidelift has made the process of offering a comprehensive vulnerability disclosure process simple for the urllib3 team,” said co-maintainer of urllib3, &lt;a href="https://github.com/sethmlarson?tab=overview&amp;amp;from=2017-12-01&amp;amp;to=2017-12-31&amp;amp;org=urllib3"&gt;Seth Larson&lt;/a&gt;. “This makes delivering secure code and responding quickly to vulnerabilities easy even for a small team."&lt;/p&gt;

</description>
      <category>opensource</category>
      <category>security</category>
    </item>
    <item>
      <title>It's the end of Python 2. Are we prepared?</title>
      <dc:creator>Jeremy Katz</dc:creator>
      <pubDate>Mon, 28 Oct 2019 13:45:35 +0000</pubDate>
      <link>https://forem.com/tidelift/it-s-the-end-of-python-2-are-we-prepared-2onk</link>
      <guid>https://forem.com/tidelift/it-s-the-end-of-python-2-are-we-prepared-2onk</guid>
      <description>&lt;p&gt;In just a few short months, Python 2 will officially reach the end of its supported life. 💀 This means that anyone building applications in Python will need to have moved to Python 3 if they want to keep getting updates including, importantly, fixes for any security vulnerabilities in the core of Python or in the standard library. How did we get here?&lt;/p&gt;

&lt;p&gt;Python 3 was initially released on December 3, 2008 and included a variety of major compatibility-breaking changes. Overall, these changes were welcomed by Pythonistas and removed a lot of hacks and workarounds that had evolved over time. One of my favorites is that things like &lt;em&gt;dict.items()&lt;/em&gt; no longer return a list, so you don’t have to use &lt;em&gt;dict.iteritems()&lt;/em&gt; to get a lower-memory, more performant way to iterate over dictionary items. &lt;/p&gt;
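&lt;p&gt;That change is easy to see directly (Python 3 shown; in Python 2, &lt;em&gt;dict.items()&lt;/em&gt; copied the pairs into a full list, and &lt;em&gt;dict.iteritems()&lt;/em&gt; was the lazy alternative):&lt;/p&gt;

```python
d = {"a": 1, "b": 2}

# In Python 3, items() returns a lightweight view, not a copied list.
items = d.items()
assert not isinstance(items, list)

# The view stays in sync with the dictionary, so later changes appear in it.
d["c"] = 3
assert ("c", 3) in items
```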

&lt;p&gt;Others, while still welcome, were more challenging from a compatibility perspective because they brought syntax changes to the core language. This meant that many of the Python libraries that we use for building applications weren’t ready for Python 3. Django, Flask, urllib3, and so on: none were ready for the initial release of Python 3. But they are now, and have been for quite a while. The efforts to support multiple Python versions have been great but can’t continue forever.&lt;/p&gt;

&lt;p&gt;This isn’t the first time this kind of event has happened in the Python world, though. Way back in October 2000, Python 2 came out. This major release of Python had a number of incompatible changes that impacted developers, especially surrounding how one worked with strings and Unicode. &lt;/p&gt;

&lt;p&gt;At that time I was working for Red Hat and maintaining Anaconda, the installer for Red Hat Linux. We had decided that migrating all of the Python usage within Red Hat to Python 2 was a priority. There were many fewer Python modules back then, and a small group of us (employed by Red Hat!) were able to do the work to update the modules we shipped to support Python 2. We sent patches upstream, in some cases taking over upstream maintenance of the module, and were able to help move the world forward to Python 2. &lt;/p&gt;

&lt;p&gt;But today is different. There are now over 200,000 Python libraries. It’s not practical for one company to help drive all of the changes in the ecosystem to support this new and incompatible release. And the vast majority of the Python packages out there are maintained by volunteers—people who are doing this in their spare time and as a labor of love. &lt;/p&gt;

&lt;p&gt;This challenge of how to migrate successfully from Python 2 to Python 3 is exactly the sort of situation where giving maintainers an incentive to support a new version and work through the incompatibilities would make things so much better. It’s a perfect example of why we need to pay the maintainers of the open source libraries that all of our applications depend upon. With strong financial incentives in place, our preparation for Python 3 could have been faster and more comprehensive.&lt;/p&gt;

&lt;p&gt;For users, major incompatible changes like those involved in the migration to Python 3 are an important part of keeping software vibrant, alive, and performant. But short of being psychic, we simply cannot predict how the world will change and evolve and what modifications our software will require. &lt;/p&gt;

</description>
      <category>opensource</category>
      <category>python</category>
    </item>
    <item>
      <title>Let’s extend continuous integration to our open source dependencies</title>
      <dc:creator>Jeremy Katz</dc:creator>
      <pubDate>Thu, 18 Jul 2019 14:47:09 +0000</pubDate>
      <link>https://forem.com/tidelift/let-s-extend-continuous-integration-to-our-open-source-dependencies-4ma1</link>
      <guid>https://forem.com/tidelift/let-s-extend-continuous-integration-to-our-open-source-dependencies-4ma1</guid>
      <description>&lt;p&gt;Over the past 5-10 years, the software development world has fully embraced the idea of continuous integration.&lt;/p&gt;

&lt;p&gt;It’s not a new idea at all—continuous integration can easily be traced back twenty years to Kent Beck’s book &lt;a href="https://www.amazon.com/Extreme-Programming-Explained-Embrace-Change/dp/0201616416"&gt;Extreme Programming Explained&lt;/a&gt;, where it was one of the twelve practices meant to help a software development team deliver higher quality software while also having a good quality of life. &lt;/p&gt;

&lt;p&gt;The general idea behind continuous integration is that you want to set up systems that test changes as soon as they are introduced into the build. Daily builds, which had been the previous Best You Could Hope For, were no longer a fast enough feedback loop to allow for the development of quality software. &lt;/p&gt;

&lt;p&gt;Prior to Kent Beck’s book, developers avoided doing this integration because of the conflicts and problems that it uncovered. Extreme Programming instead suggested that “if it hurts, do it more often.” This approach leads to an environment where you find problems sooner and get much better at diagnosing and fixing them quickly. &lt;/p&gt;

&lt;p&gt;But with our move to the cloud and an increased focus on developer productivity, continuous integration is no longer a practice of niche early adopters; it has become a de facto part of software engineering best practices. By continuously integrating our code, we are pushed to keep our code in a version control system, automate our builds, run automated tests as part of those builds, and have everyone commit early and often. So if you’re in a company and have a change pending, you’re regularly pulling in the changes from the rest of your team as you do your development and ensuring that the code continues to work.&lt;/p&gt;

&lt;p&gt;That’s well and good while we’re talking about the code being built inside your organization. &lt;a href="https://blog.tidelift.com/cloud-providers-manage-your-compute-storage-and-network.-but-who-manages-your-open-source-libraries"&gt;But as we have covered previously&lt;/a&gt;, the application code being developed within your organization is only 20% of the application code. If you’re like most development teams, there is a wide swath of open source frameworks and dependencies that you’re using. And yet we treat these libraries as precious and frozen in place: an upgrade of these components is approached with a fear far beyond anything we feel about the rest of the code written by our own team.&lt;/p&gt;

&lt;h3&gt;Why is that? 🤔🤔🤔&lt;/h3&gt;

&lt;p&gt;Some of it just stems from history. When you purchase proprietary software, you often purchase a specific version of that software. Upgrading to a new version requires you to work with procurement to buy the new one. That friction, and the relative infrequency of updates, has meant that we just aren’t used to upgrading code that is not our own on a regular basis.&lt;/p&gt;

&lt;p&gt;Another—and perhaps larger part—is just a matter of comfort. We don’t know when the changes happen to code we don’t control. We don’t have a good way to understand why there are changes. We don’t know the people who made the changes. So the default assumption is that they aren’t making changes that will benefit us.&lt;/p&gt;

&lt;h3&gt;It’s time for us to work differently&lt;/h3&gt;

&lt;p&gt;Let’s start upgrading our dependencies early and often. When an open source package we depend on is updated, we should update our application to use the new version. By bringing in the changes more often, the cost of integrating each change will be lower, just like the cost of integrating the code from other parts of your organization is lower. &lt;/p&gt;
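&lt;p&gt;As a toy sketch of what “knowing you’re behind” can look like in practice (all package names and version numbers below are hypothetical, and a real project would query its package index for the latest releases rather than hard-coding them):&lt;/p&gt;

```python
# A toy "upgrade early and often" check: compare a project's pinned
# dependency versions against the latest known releases and flag anything
# that has fallen behind. Package names and versions are made up for
# illustration only.

def parse_version(v):
    """Turn a version string like '2.25.1' into a comparable tuple (2, 25, 1)."""
    return tuple(int(part) for part in v.split("."))

# What the project currently pins (e.g. read from a requirements file)...
pinned = {"requests": "2.25.1", "flask": "2.0.0"}
# ...versus the latest releases (e.g. as reported by the package index).
latest = {"requests": "2.28.0", "flask": "2.0.0"}

outdated = {
    name: (current, latest[name])
    for name, current in pinned.items()
    if parse_version(latest[name]) > parse_version(current)
}

for name, (current, newest) in outdated.items():
    print(f"{name}: {current} -> {newest}")
```

&lt;p&gt;Run on every build, a check like this turns “we should upgrade someday” into a small, visible task you can tackle one dependency at a time.&lt;/p&gt;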

&lt;p&gt;And then, when there is a major security vulnerability, the cost to your organization will be far lower. First, because the amount of change will be smaller, and so will the work of validating that it doesn’t cause problems for you. But just as importantly, because as an organization you will be practiced at the process of integrating updates.&lt;/p&gt;

&lt;p&gt;I’d be remiss if I didn’t point out that Tidelift helps to make this process even easier. With a Tidelift Subscription, you will get information directly from the maintainers about what has changed in each new version of the packages that you depend on. &lt;/p&gt;

&lt;p&gt;We also work with the maintainers to help provide recommendations about what version you should upgrade to given what you currently use. And in cases where there are incompatibilities between versions of different software that you depend on, we will work with the maintainers to resolve the incompatibility. &lt;/p&gt;

&lt;p&gt;The result? Open source libraries you can depend on the same way you depend on commercial software. &lt;/p&gt;

</description>
      <category>opensource</category>
      <category>news</category>
      <category>technology</category>
      <category>programming</category>
    </item>
  </channel>
</rss>
