<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Matt</title>
    <description>The latest articles on Forem by Matt (@matt_henderson).</description>
    <link>https://forem.com/matt_henderson</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F1092625%2Fc297f533-410d-4a85-88b8-11b82b745222.jpg</url>
      <title>Forem: Matt</title>
      <link>https://forem.com/matt_henderson</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/matt_henderson"/>
    <language>en</language>
    <item>
      <title>We removed advertising cookies, here’s what happened</title>
      <dc:creator>Matt</dc:creator>
      <pubDate>Thu, 11 Jan 2024 07:00:00 +0000</pubDate>
      <link>https://forem.com/matt_henderson/we-removed-advertising-cookies-heres-what-happened-88k</link>
      <guid>https://forem.com/matt_henderson/we-removed-advertising-cookies-heres-what-happened-88k</guid>
      <description>&lt;p&gt;&lt;em&gt;This is not another abstract post about what the ramifications of the cookieless future might be; Sentry actually removed cookies from our website a few months ago. Here’s how it impacted us positively and negatively, in both expected and unexpected ways. I hope this can serve as a guide or inspire others who are considering making this change.&lt;/em&gt;&lt;/p&gt;

&lt;h3&gt;Why remove cookies?&lt;/h3&gt;

&lt;p&gt;The thought of going completely cookieless seems daunting and borderline ludicrous to a lot of the marketers that I talk to. If you were to remove all cookies today, like we did at Sentry in July 2023, it would radically change the functionality of your martech stack. This is covered thoroughly in articles &lt;a href="https://www.smartinsights.com/digital-marketing-strategy/digital-advertising-trends-cookieless-advertising-is-coming-are-you-ready-for-it/" rel="noopener noreferrer"&gt;like this&lt;/a&gt;, but in brief, things like attribution, remarketing, or Account Based Marketing (ABM) would be rendered much less effective and it would almost certainly lead to a material loss in revenue.&lt;/p&gt;

&lt;p&gt;Not to mention that if performance marketing is a key driver of revenue for your business, levers like targeting, bidding, reporting, and procurement become extremely challenging. However, with &lt;a href="https://developer.chrome.com/en/blog/cookie-countdown-2023oct/" rel="noopener noreferrer"&gt;Google’s promise to remove cookies&lt;/a&gt; for some users in Q1 2024 and to completely &lt;a href="https://techcrunch.com/2022/07/27/google-delays-move-away-from-cookies-in-chrome-to-2024/" rel="noopener noreferrer"&gt;remove cookies from Chrome by the second half of 2024&lt;/a&gt; looming, you basically have three approaches you can take:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Ignore that this is happening&lt;/strong&gt; and wait until you are forced to make changes around Q2/Q3 this year (not recommended).&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Become an early adopter&lt;/strong&gt; of the newer cookieless solutions that ad platforms keep introducing and optimizing, like Enhanced Conversions, GA4, the Conversions API, and Conversion Lift, so as not to cause any drastic shake-ups before the platforms are ready for the shift.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Completely remove cookies&lt;/strong&gt; sooner rather than later, and start directing your time and energy towards future models that respect your users’ privacy.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe7i67lw40esnq5ge1ou3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe7i67lw40esnq5ge1ou3.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We chose the third route despite Google sending us an entire deck to tell us why we shouldn’t. We actually took it a step further by removing all user tracking, period. Was it worth it? In this post, I’ll share some early results and all of the pitfalls and triumphs we’ve faced since removing every advertising cookie and all user tracking in June 2023. My hope is that this can give others a list of dos and don’ts, plus best practices for the inevitable shift from cookies in 2024.&lt;/p&gt;

&lt;h2&gt;Why completely remove cookies and user tracking now when I can just wait?&lt;/h2&gt;

&lt;p&gt;The way I see it, there are a handful of reasons a marketer might decide to gut their website of cookies and traditional user tracking in general:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Cookie consent banners, App Tracking Transparency (ATT), and consent management platforms (CMPs) for GDPR/CCPA compliance have already obfuscated tracking data enough to make you question whether the data you have is reliable.&lt;/li&gt;
&lt;li&gt;Your audience is probably privacy-conscious. Aren’t we all becoming so as companies &lt;a href="https://www.wired.co.uk/article/tiktok-data-privacy" rel="noopener noreferrer"&gt;misuse&lt;/a&gt; our data? Ultimately, you want your brand to align with your audience’s desires and demands. And your audience will appreciate one less banner to dismiss when they just want to load your page.&lt;/li&gt;
&lt;li&gt;Your time and effort could be better spent learning new cookieless tools and techniques and building sugar-free (bad cookie joke) solutions for acquiring customers, understanding behaviors, reporting, and measurement.&lt;/li&gt;
&lt;li&gt;This shift will happen later this year anyway and you’re likely already struggling with increasingly non-traceable user journeys, &lt;a href="https://en.wikipedia.org/wiki/Dark_social_media" rel="noopener noreferrer"&gt;dark social&lt;/a&gt; channels, etc. Being reactive will put you behind while other companies (see Apple) move forward and build moats of differentiation.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;At Sentry, the idea to strip all advertising cookies was born out of respecting our privacy-conscious audience, and our own &lt;a href="https://blog.sentry.io/privacy-by-default/" rel="noopener noreferrer"&gt;privacy by default values&lt;/a&gt;. We market to developers who notoriously do not like being marketed to (we should know; we are a developer-led company and Sentry users ourselves), so the idea of removing ad cookies instantly intrigued us. And, if given the choice, why not build for tomorrow, today? That decision has had ancillary benefits along our journey that we’ll get into, one being that it gave us an excuse to do away with the annoying cookie consent banners which have come to define modern web browsing.&lt;/p&gt;

&lt;p&gt;Sounds like good-enough reasoning, right? Despite the internal momentum, and as excited as I was at the prospect as a growth marketer, I started thinking about all of the things that were going to break. My initial response was to raise concerns about why we shouldn’t do it, or should at least delay it. The following breakpoints stuck out:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Our attribution models (first and multi-touch in our case)&lt;/li&gt;
&lt;li&gt;Our marketing reports in our business insights tool&lt;/li&gt;
&lt;li&gt;Google Analytics&lt;/li&gt;
&lt;li&gt;All of our SEO reporting (built on GA and GA4)&lt;/li&gt;
&lt;li&gt;Google Ads smart bidding&lt;/li&gt;
&lt;li&gt;Procurement and onboarding of new tools&lt;/li&gt;
&lt;li&gt;Remarketing&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;…And there’s an admittedly large list of stuff we didn’t think through that caused problems when we removed ALL user tracking on our site:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;YouTube video embeds on our website used cookies&lt;/li&gt;
&lt;li&gt;The GCLID from Google Ads no longer passing to our site*&lt;/li&gt;
&lt;li&gt;Our bot blocking tool for ads relying on both a pixel and GCLID&lt;/li&gt;
&lt;li&gt;Lookalike modeling not being an option because of data sharing with ad platforms*&lt;/li&gt;
&lt;li&gt;Not being able to use Salesforce x Google Ads integration*&lt;/li&gt;
&lt;li&gt;4xing our website’s 404s and redirects by accident (whoops)&lt;/li&gt;
&lt;li&gt;Completely losing signups from display marketing and YouTube&lt;/li&gt;
&lt;li&gt;Losing referrer data for content analysis&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;em&gt;*Some of these were due more to the extreme measures we took by removing user tracking, cookies, and the cookie banner as a whole, but most were due to the removal of cookies.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;We’ll cover some of these and share how to avoid the same mistakes we made in the next section.&lt;/p&gt;

&lt;p&gt;To state the obvious, performance marketing is critical for many businesses; Gartner says digital spending accounted for &lt;a href="https://www.webstrategiesinc.com/blog/how-much-budget-for-online-marketing" rel="noopener noreferrer"&gt;over half of marketing budgets in 2022&lt;/a&gt;. Given the importance of this channel and the challenges laid out above, how can you actually take on the risk of running performance marketing without cookies?&lt;/p&gt;

&lt;p&gt;I don’t have all of the answers for every industry and audience, but I’ll offer you insights from our first eight months going completely without cookies and user tracking.&lt;/p&gt;

&lt;h3&gt;How to run performance marketing completely cookieless&lt;/h3&gt;

&lt;p&gt;Let’s break this down into four sections: targeting, bidding, tracking/reporting, and procurement (don’t skip this!). Each section will cover considerations to be thinking about when going cookieless, the issues that we ran into, and the pivots that we made so that you can help prepare for the future.&lt;/p&gt;

&lt;h2&gt;Targeting Without Cookies&lt;/h2&gt;

&lt;p&gt;The current state of targeting has already been severely impacted by various changes in the last few years. Remember the havoc Apple wreaked on the entire digital advertising ecosystem when it launched &lt;a href="https://www.adjust.com/glossary/app-tracking-transparency/" rel="noopener noreferrer"&gt;ATT&lt;/a&gt;? At the time, Facebook (now Meta) estimated a loss of &lt;a href="https://www.forbes.com/sites/kateoflahertyuk/2022/10/08/apples-12-billion-strike-to-facebook-is-suddenly-taking-shape/?sh=4e553fdb1636" rel="noopener noreferrer"&gt;$10-13 billion in ad revenue&lt;/a&gt; related to the change, likely a direct reflection of advertisers shifting budget away from Meta because they lost ROI. And that was just one in a string of privacy changes that likely impacted digital marketing performance.&lt;/p&gt;

&lt;p&gt;If that wasn’t enough of a triggering thought, how much did your remarketing pool across ad platforms shrink when people started opting out of the consent banner on your site? A 30% reduction? 50%… or more? On top of that, you were already dealing with ad blockers, and probably depend heavily on walled gardens to give you targeting options that work for your specific audience.&lt;/p&gt;

&lt;p&gt;When cookies are deprecated this year, there will undoubtedly be more struggles for performance marketers. Without traditional pixels or conversion signals, Google (the largest ad platform in the world) struggles to gauge web visitors’ intent to purchase. Shifting early made our targeting on Google Search much worse, but still tolerable, because you can use keyword intent and negative search terms as a guiding light. However, YouTube, Display, Shopping, &lt;a href="https://support.google.com/google-ads/answer/13859703?hl=en" rel="noopener noreferrer"&gt;Demand Gen&lt;/a&gt;, and PMAX campaigns depend on these purchase-intent signals paired with the black-box contextual audiences Google gives you, which in our case were at times scattershot at best. For example, one custom intent audience built on search terms/history showed billions of impressions in the estimated audience size (more than the current population of the world), even though it used long-tail search terms. Not exactly precise targeting for reaching software developers.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftmjdtqb5fqtwe47gr7b3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftmjdtqb5fqtwe47gr7b3.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;What happened to our targeting?&lt;/h2&gt;

&lt;p&gt;Our Google rep, our ad agency, and I all knew that Google’s lack of signals would cut deep, so when we went cookieless, we treated it as an opportunity to learn how we would pivot to find our audience, rather than a doomsday event.&lt;/p&gt;

&lt;p&gt;Display and other channels were severely hampered after cookie removal; our traditional retargeting motion died off pretty quickly as we couldn’t use GA4’s audiences or our Google Ads pixel. We had to pivot our strategy fast.&lt;/p&gt;

&lt;p&gt;We decided to rely on ad engagement retargeting (rather than traditional retargeting) on most of our ad channels, which isn’t the same but still gives us a semblance of a funnel. We tailored our middle-of-funnel (MOF) and bottom-of-funnel (BOF) ads to this engaged audience. This is a not-so-bad patch for a broken retargeting effort as long as you have engaging content and the budget to promote it. Engagement retargeting is offered on Meta, LinkedIn, YouTube, and Reddit, and we pair it with video ads at the top of the funnel to drive engagement.&lt;/p&gt;

&lt;p&gt;While we were figuring out remarketing, we also lost confidence in prospecting with Display and decided to migrate budget to sponsorships and publishers that we instinctively knew had our core audience. This may produce less flashy conversion counts depending on your business, but it gave us a place to tell our story to our audience. It also forced us to focus more on content marketing that speaks to the potential customer’s jobs to be done, building trust and goodwill instead of chasing vanity signups or [insert your BOF KPI] with a cold prospecting audience.&lt;/p&gt;

&lt;p&gt;When deciding to remove cookies or user tracking in general, these are your top three targeting considerations IMO:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;How you’ll segment funnel stages in creative ways&lt;/li&gt;
&lt;li&gt;Whether you still have faith in your current channels to reach your audience, or need to explore other channels&lt;/li&gt;
&lt;li&gt;Which bidding model you’ll shift to, and what impacts to targeting you might see&lt;/li&gt;
&lt;li&gt;BONUS: How you’ll prepare the stakeholders in your company for the inevitable impact on traceable conversions. This could be its own post, so we’ll keep this one about the nuts and bolts, but you should start thinking about it now. Getting cross-functional buy-in (and crucially, that of leadership) is the best first step, so feel free to share this post with whoever needs to see it.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;Bidding&lt;/h2&gt;

&lt;p&gt;Most ad platforms rely on pixels and cookies to train the bidding algorithm on which types of users convert for your business and which will hit the specific KPIs or cost-per-action (CPA) targets you’re going after.&lt;/p&gt;

&lt;p&gt;In our case, most of our ad platforms were set up with pixels that sent conversion signals to the platforms when a signup or a request for a demo happened on our website. We were also piping in conversions with offline uploading to get down-funnel events to train Google on the right user intent. This can be done with Google, Meta’s Conversions API (CAPI), and LinkedIn (with &lt;a href="https://bit.ly/offline-conversion-upload" rel="noopener noreferrer"&gt;offline conversion upload&lt;/a&gt;). For digital marketers this is a way to improve CPAs and level up from 101-level digital marketing, but it also involves either setting cookies or sending user information back to an ad platform. A quick nuance: certain tracking technology like hashed offline passbacks, although not technically cookies, may still be subject to “cookie laws,” and you may need to keep your screen-eating cookie banner. So in addition to cookies, we had to do away with GCLID passback, for example, because we wanted to remove our cookie banner and because our users deserve their privacy. Be sure to discuss with your legal team how this might apply if you want to remove your banner.&lt;/p&gt;

&lt;p&gt;For us, our bidding models had the largest impact on revenue, because Google is one of our largest ad platforms and is now no longer trained on clicks with intention to purchase, just on the fact that the user clicked. There was actually a two-to-three-week delay in performance declines after we went cookieless, while Google still had the data we had used to train it, and our campaigns continued to perform well.&lt;/p&gt;

&lt;p&gt;However, comparing the period before we removed user tracking and cookies on our site to the period after, we saw around a 30% increase in our cost per click (CPC) in Google Search. It varied between brand and non-brand searches, but it shows what you’re up against. Bottom line: if you’re utilizing smart strategies like Target CPA bidding, Maximize Conversions, etc., your ad campaigns could get that much worse post-cookie removal unless you adopt hashed passbacks or offline signals/APIs.&lt;/p&gt;

&lt;p&gt;Without these conversion signals fed into our ad platforms, and with our hard line drawn at total cookie banner removal and zero user tracking, our options became pretty limited:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;For Google Search:&lt;/strong&gt; We shifted back to Enhanced CPC and Max Clicks, which optimize for less important factors than our old model (definitely a downgrade). We stay on top of our negative keywords and regional data much more now, as a proxy for a bidding model that sorts through user intent.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;For YouTube:&lt;/strong&gt; We shifted to Target CPV for any video viewer or engagement and Target CPM for awareness. We had to remove video action campaigns here, so we focused more on reach. Instead of a drastic loss, we actually ended up 13xing our video views period over period, and we’re getting more quality site visitors than we used to. We’re still getting conversions from this channel, and our view-rate benchmarks are right at our industry average (and above Google’s benchmark), so this ended up being a successful shift. We also saw over 100% growth quarter over quarter in folks mentioning on our onboarding survey that they heard about us on YouTube. Shifting bidding strategies here could be worth trying, regardless of your decision on cookies.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;For Display:&lt;/strong&gt; We shifted to viewable CPM. As a result, our CPM was nearly cut in half and we saw a surprisingly large increase in impressions (north of 20%) month over month after making this shift. However, our conversions tailed off and then dropped off a cliff with the shift in bidding models.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgsmzaa71rj5t8s8z6p4t.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgsmzaa71rj5t8s8z6p4t.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Note: we kept our bidding model on Smart Bidding for a while after the cookie change in hopes that Google would utilize historical conversion data. It worked for a little while, but then came a sharp decline.&lt;/p&gt;

&lt;p&gt;These conversions weren’t converting through our funnel to paying customers at a high enough rate anyway (even with a long-term view), so our shift in bidding strategy for Display was paired with a shift toward brand and awareness marketing rather than direct response, which wasn’t a great fit for the channel anyway.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;For most other channels:&lt;/strong&gt; We removed pixels and rely on UTMs paired with manual bidding. More on this in the section below.&lt;/p&gt;

&lt;h2&gt;Cookieless Tracking/Attribution/Reporting&lt;/h2&gt;

&lt;p&gt;Consider the challenges you probably already face with regards to reporting:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Most users on paid social channels are on mobile devices, which introduces cross-device tracking as well as view-through targeting challenges.&lt;/li&gt;
&lt;li&gt;Ad blockers were already plaguing your ability to see a large portion of your audience. &lt;a href="https://backlinko.com/ad-blockers-users" rel="noopener noreferrer"&gt;42.7%&lt;/a&gt; of internet users worldwide use ad blockers.&lt;/li&gt;
&lt;li&gt;Hidden user-journey moments, like a shared LinkedIn post or an executive hearing about you on a podcast or at an event, are already influencing deals.&lt;/li&gt;
&lt;li&gt;If you’re using an attribution model, it’s probably not accounting for the above touches, and you’re probably favoring certain channels too much given how complex the buyer journey is today.&lt;/li&gt;
&lt;li&gt;Matching up spend to down-funnel events for ROI/ROAS calculations is a big undertaking. Drilling down to the keyword or audience to understand your distribution areas and where to spend your budget is time consuming.&lt;/li&gt;
&lt;li&gt;Even if you have everything orchestrated and deduping out of one ad server like CM360, you’re still dealing with the cookie banner and ad blockers removing a chunk of your vision and signals.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Now take away cookies and user tracking and you’re left with…?&lt;/p&gt;

&lt;p&gt;Reporting and attribution became a huge lift for us when we went cookieless. Our Business Insights (BI) team flagged that our attribution data was going to break and that we needed to migrate to a new underlying table. On top of this, we had to switch attribution models from first- and multi-touch (depending on what we were measuring) to last-click, because without storing data on the user’s browser we could only track the previous page a user visited before converting. We were lucky to have a competent BI team and top-down alignment, which let us pivot to a new attribution model and a new table of underlying data.&lt;/p&gt;
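&lt;p&gt;For the curious, the last-click model itself is trivially simple, which is part of why it survives cookie removal. A minimal sketch (the data shapes are illustrative, not our actual BI schema):&lt;/p&gt;

```javascript
// Last-click in miniature: attribute the conversion to the most recent
// touch recorded at or before the conversion event.
function lastClickAttribution(touches, conversionTime) {
  // Keep touches that happened at or before the conversion.
  const eligible = touches.filter((t) => !(t.time > conversionTime));
  if (eligible.length === 0) return null;
  // Pick the latest one.
  return eligible.reduce((best, t) => (t.time > best.time ? t : best));
}

const touches = [
  { channel: 'google_search', time: 10 },
  { channel: 'youtube', time: 25 },
  { channel: 'docs_referral', time: 40 },
];

console.log(lastClickAttribution(touches, 50).channel); // 'docs_referral'
```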

&lt;p&gt;This took a TON of back and forth, basically rebuilding in SQL the logic that an out-of-the-box attribution solution already has, but we finally got to a place where we could salvage around 50% of our attribution data. This was a huge win for us: we can directionally understand which channels lead to business outcomes. Because we lost so much data, we also instituted a self-reported attribution “how did you hear about us” survey. Eight months in, we’ve actually found new channels and learned a lot about our old ones. Ironically, a lack of data led to new insights.&lt;/p&gt;

&lt;p&gt;My advice here is to figure out where your attribution is done (CRM, data warehouse, third-party tool?) and, if you’re serious about removing cookies, start discussing the lift it would take to move to a simpler model.&lt;/p&gt;

&lt;p&gt;After more than half a year of cookieless attribution, my takeaway is that UTMs and referrer data, when done right, do the trick for directionally understanding which ads, publishers, campaigns, and audiences are performing. The self-reported attribution survey we prioritized now helps remove our bias and uncover new expansion areas. We’re also working on understanding blended data. All of these are excellent muscles to develop for a cookieless future.&lt;/p&gt;
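&lt;p&gt;As a sketch of what “UTMs done right” boils down to: capture the landing URL (server-side or at signup) and pull out a fixed set of parameters. The parameter values below are made up for illustration:&lt;/p&gt;

```javascript
// Minimal UTM extraction from a landing URL: the first-party signal
// that replaces pixel-based attribution.
function parseUtms(landingUrl) {
  const search = new URL(landingUrl).searchParams;
  const keys = ['utm_source', 'utm_medium', 'utm_campaign', 'utm_content'];
  const out = {};
  for (const key of keys) {
    if (search.has(key)) out[key] = search.get(key);
  }
  return out;
}

// Build the query string programmatically to keep the sketch readable.
const qs = new URLSearchParams({
  utm_source: 'reddit',
  utm_medium: 'paid_social',
  utm_campaign: 'launch_week',
}).toString();

console.log(parseUtms('https://example.com/welcome?' + qs));
// { utm_source: 'reddit', utm_medium: 'paid_social', utm_campaign: 'launch_week' }
```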

&lt;h2&gt;Procurement&lt;/h2&gt;

&lt;p&gt;Going cookieless doesn’t mean you shouldn’t actively be hunting for new solutions to help improve performance. In the Marketing Technology Landscape, published by chiefmartec.com and its editor Scott Brinker, the number of technology options for marketing grew from &lt;a href="https://chiefmartec.com/2023/05/2023-marketing-technology-landscape-supergraphic-11038-solutions-searchable-on-martechmap-com/" rel="noopener noreferrer"&gt;150 in 2011 to 11,038 in 2023&lt;/a&gt;. There are more options now than ever before, and much more innovation and disruption happening.&lt;/p&gt;

&lt;p&gt;We’ve recently met with companies selling attribution, account-based marketing, demand-side platform (DSP), connected TV, and analytics tools, and every conversation sputters out as soon as we mention privacy and our lack of cookies. Our marketing team would leave demo calls with serious concerns that our new privacy policy would render the tool not worth the sticker price, even if we could use it in a limited fashion.&lt;/p&gt;

&lt;p&gt;Despite this challenge, there are going to be more and more tools out there that can navigate user tracking and cookieless well. Here are a few questions that I’ve found to be useful to ask current vendors up for renewal, or prospective tools:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Does your software comply with data protection and privacy regulations, such as GDPR or CCPA? If so, can we get access to any documentation that you have?
&lt;ul&gt;
&lt;li&gt;Data Processing Agreement&lt;/li&gt;
&lt;li&gt;Terms of service&lt;/li&gt;
&lt;li&gt;BONUS: while you’re here asking for docs, you may have a security/compliance team that will ask for these anyway: ISO/SOC 2 certifications and a prepopulated security questionnaire like the &lt;a href="https://cloudsecurityalliance.org/artifacts/consensus-assessments-initiative-questionnaire-v3-1/" rel="noopener noreferrer"&gt;CAIQ&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;Does it require a tracking script to run? If the script picks up analytics data, e.g., conversion data, it could violate your cookie policy.&lt;/li&gt;
&lt;li&gt;Can you explain the different types of cookies your software utilizes and their specific functions?&lt;/li&gt;
&lt;li&gt;How does your software collect and store data related to user interactions?&lt;/li&gt;
&lt;li&gt;Does your software share user data with third parties, and if so, how is this handled?&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Often a tool will claim to be cookieless while still doing some form of user tracking. For example, we switched from GA4 to Plausible and needed to know the nuances of how it uses IP addresses. Know that if you want to remove your cookie banner, even a “cookieless” tool may be off the table, because cookie laws can apply broadly to tracking technology. Make sure to get on the same page about this with your legal team early on, before you make any procurement decisions.&lt;/p&gt;
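&lt;p&gt;To illustrate what “cookieless” should mean in practice: an analytics event can be limited to the page and the referrer hostname, with no durable user identifier at all. The field names here are an illustration, not Plausible’s actual API:&lt;/p&gt;

```javascript
// Shape of a cookieless pageview event: page, referrer hostname, and
// nothing that identifies the user across visits.
function buildPageviewEvent(pageUrl, referrer) {
  return {
    name: 'pageview',
    url: pageUrl,
    // Keep only the referring hostname; full referrer URLs can leak
    // personal data via query strings.
    referrer: referrer ? new URL(referrer).hostname : 'direct',
  };
}

console.log(buildPageviewEvent('https://example.com/pricing', 'https://news.ycombinator.com/item'));
// { name: 'pageview', url: 'https://example.com/pricing', referrer: 'news.ycombinator.com' }
```

&lt;p&gt;The nuance we had to dig into with any vendor is what happens server-side after an event like this arrives, e.g., whether IP addresses are stored, hashed, or discarded.&lt;/p&gt;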

&lt;h2&gt;Conclusion&lt;/h2&gt;

&lt;p&gt;Going cookieless might seem like a big undertaking with complex cross-functional challenges, so I can see the temptation to put it off. But the cookieless future is coming, whether you like it or not. Shifting away from cookies is a burden, and although you’ll have to make sacrifices, you may gain a new perspective and learn how to market to your users while respecting their privacy.&lt;/p&gt;

&lt;p&gt;To summarize:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Remember the lack of visibility and the data issues you’re already facing, and how they will step-function degrade as we speed towards H2 2024&lt;/li&gt;
&lt;li&gt;Completely cookieless performance marketing is achievable&lt;/li&gt;
&lt;li&gt;It takes cross functional alignment from all stakeholders: business insights, website, legal, leadership, etc.&lt;/li&gt;
&lt;li&gt;There will be expected outcomes, and unexpected outcomes&lt;/li&gt;
&lt;li&gt;Going cookieless forces you to get creative and to start re-imagining the way you do targeting, bidding, tracking, and procurement.&lt;/li&gt;
&lt;li&gt;We didn’t cover every challenge that we faced in this post, and we still (and will continue to) measure the impacts to KPIs eight months in.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Bottom line:&lt;/strong&gt; Would we do this all over again?&lt;/p&gt;

&lt;p&gt;Because Sentry values privacy and the future of the open internet, yes. We took a firm stance on this despite knowing that paid channels would get a major shake up. But if you work somewhere that will hold on tightly to their cookie banner and rapidly diminishing tracking, at minimum we would suggest the following:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Institute self-reported attribution somewhere in your conversion flow.&lt;/li&gt;
&lt;li&gt;Move all paid channels to conversion APIs, hashed passbacks, offline uploads, etc. (because of the downsides we experienced firsthand without being able to use them).&lt;/li&gt;
&lt;li&gt;Have conversations about testing new channels that might not have as clear an ROI as demand-capture channels like paid search.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Even if visitors to your site notice nothing else, they’ll at least have one less cookie banner to dismiss, and that is a win for the entire web.&lt;/p&gt;

&lt;h2&gt;Postscript&lt;/h2&gt;

&lt;p&gt;For clarification, Sentry has removed all cookies, other than essential cookies that do not require site visitor consent. Work with your legal team to better understand which of your cookies qualify as essential cookies under the laws that apply to you.&lt;/p&gt;

</description>
      <category>cookie</category>
      <category>marketing</category>
      <category>sentry</category>
    </item>
    <item>
      <title>How We Reduced Replay SDK Bundle Size by 35%</title>
      <dc:creator>Matt</dc:creator>
      <pubDate>Thu, 16 Nov 2023 18:02:51 +0000</pubDate>
      <link>https://forem.com/sentry/how-we-reduced-replay-sdk-bundle-size-by-35-2g0f</link>
      <guid>https://forem.com/sentry/how-we-reduced-replay-sdk-bundle-size-by-35-2g0f</guid>
      <description>&lt;p&gt;&lt;a href="https://blog.sentry.io/js-browser-sdk-bundle-size-matters/"&gt;Bundle Size matters&lt;/a&gt; - this is something we SDK engineers at Sentry are acutely aware of. In an ideal world, you'd get all the functionality you want with no additional bundle size - oh, wouldn't that be nice? Sadly, in reality any feature we add to the JavaScript SDK results in additional bundle size for the SDK - there is always a trade off to be made.&lt;/p&gt;

&lt;p&gt;With &lt;a href="https://docs.sentry.io/product/session-replay/"&gt;Session Replay&lt;/a&gt;, this is especially challenging. Session Replay allows you to capture what's going on in a user's browser, which can help developers debug errors or other problems the user is experiencing. While this can be incredibly helpful, there is also a considerable amount of JavaScript code required to actually make this possible - thus leading to an increased bundle size.&lt;/p&gt;

&lt;p&gt;In version 7.73.0 of the JavaScript SDKs, we updated the underlying &lt;a href="https://github.com/getsentry/rrweb"&gt;rrweb&lt;/a&gt; package from v1 to v2. While this brought a host of improvements, it also came with a considerable increase in bundle size. This tipped us over the edge to declare a bundle size emergency, and focus on bringing the additional size Session Replay adds to the SDK down as much as possible.&lt;/p&gt;

&lt;p&gt;We're very happy to say that our efforts have been successful, and we managed to reduce the minified &amp;amp; gzipped bundle size compared to the rrweb 2.0 baseline by 23% (~19 KB), and by up to 35% (~29 KB) with maximum tree shaking configuration enabled.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffodd4ig20rv6z9arxnkl.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffodd4ig20rv6z9arxnkl.png" alt="Image description" width="800" height="296"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Steps we took to reduce bundle size
&lt;/h2&gt;

&lt;p&gt;In order to achieve these bundle size improvements, we took several steps, ranging from removing unused code to build-time configuration and improved tree shaking:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Made it possible to remove iframe &amp;amp; shadow DOM support via a build-time flag&lt;/li&gt;
&lt;li&gt;Removed canvas recording support by default (users can opt in via a config option; &lt;a href="https://github.com/getsentry/sentry-javascript/issues/6519"&gt;support is coming&lt;/a&gt;)&lt;/li&gt;
&lt;li&gt;Removed unused code from our rrweb fork&lt;/li&gt;
&lt;li&gt;Removed unused code in Session Replay itself&lt;/li&gt;
&lt;li&gt;Made it possible to remove the included compression worker in favor of hosting it yourself&lt;/li&gt;
&lt;li&gt;Moved to a different compression library with a smaller footprint&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;
  
  
  Primer: rrweb
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://github.com/rrweb-io/rrweb"&gt;rrweb&lt;/a&gt; is the underlying tool we use to make the recordings for Session Replay. While we try to contribute to the main rrweb repository as much as possible, there are some changes that are very specific to our needs at Sentry, which is why we also maintain a &lt;a href="https://github.com/getsentry/rrweb"&gt;forked version&lt;/a&gt; of rrweb with some custom changes.&lt;/p&gt;
&lt;h2&gt;
  
  
  Primer: Tree Shaking
&lt;/h2&gt;

&lt;p&gt;Tree shaking allows a JavaScript bundler to remove unused code from the final bundle. If you're not familiar with how it works and the advantages tree shaking brings, you can &lt;a href="https://docs.sentry.io/platforms/javascript/configuration/tree-shaking/"&gt;learn more about it in our docs&lt;/a&gt;.&lt;/p&gt;
&lt;h2&gt;
  
  
  Made it possible to remove iframe &amp;amp; shadow DOM support via a build-time flag
&lt;/h2&gt;

&lt;p&gt;While rrweb allows you to capture more or less everything that happens on your page, not every user needs everything it can capture. For these cases, we now allow users to remove certain parts of the rrweb codebase at build time, reducing the bundle size.&lt;/p&gt;

&lt;p&gt;In &lt;a href="https://github.com/getsentry/sentry-javascript/pull/9274"&gt;getsentry/sentry-javascript#9274&lt;/a&gt; &amp;amp; &lt;a href="https://github.com/getsentry/rrweb/pull/114"&gt;getsentry/rrweb#114&lt;/a&gt; we implemented the ground work to allow for tree shaking iframe and shadow DOM recordings. This means that if, for example, you don't have any iframes on your page, you can safely opt-in to remove this code from your application.&lt;/p&gt;

&lt;p&gt;In &lt;a href="https://github.com/getsentry/sentry-javascript-bundler-plugins/pull/428"&gt;getsentry/sentry-javascript-bundler-plugins#428&lt;/a&gt; we added an easy way to apply these optimizations in your app. If you are using one of our bundler plugins:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://www.npmjs.com/package/@sentry/webpack-plugin"&gt;@sentry/webpack-plugin&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.npmjs.com/package/@sentry/vite-plugin"&gt;@sentry/vite-plugin&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.npmjs.com/package/@sentry/rollup-plugin"&gt;@sentry/rollup-plugin&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.npmjs.com/package/@sentry/esbuild-plugin"&gt;@sentry/esbuild-plugin&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Simply update it to the latest version and add this configuration to the plugin:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sentryPlugin({
  bundleSizeOptimizations: {
    excludeDebugStatements: true,
    excludeReplayIframe: true,
    excludeReplayShadowDom: true,
  },
})

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This will save you about 5 KB gzipped of bundle size!&lt;/p&gt;

&lt;h3&gt;
  
  
  How we implemented build-time tree shaking flags
&lt;/h3&gt;

&lt;p&gt;We already had some build-time flags for tree shaking implemented in the JavaScript SDK itself (&lt;code&gt;__SENTRY_DEBUG__&lt;/code&gt; and &lt;code&gt;__SENTRY_TRACING__&lt;/code&gt;). We followed the same structure for rrweb:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// General tree shaking flag example
if (typeof __SENTRY_DEBUG__ === 'undefined' || __SENTRY_DEBUG__) {
  console.log('log a debug message!')
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;By default, this code will result in &lt;code&gt;log a debug message!&lt;/code&gt; being logged. However, if you replace the &lt;code&gt;__SENTRY_DEBUG__&lt;/code&gt; constant at build time with &lt;code&gt;false&lt;/code&gt;, this will result in the following code:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;if (typeof false === 'undefined' || false) {
  console.log('log a debug message!')
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Bundlers will then optimize this to the following:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;if (false) {
  console.log('log a debug message!')
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And in turn, since the code inside of &lt;code&gt;if (false)&lt;/code&gt; will definitely never be called, it will be completely tree shaken away.&lt;/p&gt;
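&lt;p&gt;If you are not using one of our bundler plugins, the same replacement can be done with your bundler's own define/replace feature. As a rough sketch with webpack's &lt;code&gt;DefinePlugin&lt;/code&gt; (the flag name comes from the SDK; the surrounding config is illustrative):&lt;/p&gt;

```javascript
// webpack.config.js (sketch): inline the flag as `false` at build time,
// so guarded debug branches become `if (false)` and are tree shaken away.
const webpack = require('webpack');

module.exports = {
  // ...your existing entry/output configuration...
  plugins: [
    new webpack.DefinePlugin({
      __SENTRY_DEBUG__: JSON.stringify(false),
    }),
  ],
};
```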

&lt;p&gt;For rrweb, we used the same approach to allow you to remove certain recording managers.&lt;/p&gt;

&lt;p&gt;In order to avoid touching all the parts of the code that may use a manager, we added new dummy managers following the same interface but doing nothing:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;interface ShadowDomManagerInterface {
  init(): void
  addShadowRoot(shadowRoot: ShadowRoot, doc: Document): void
  observeAttachShadow(iframeElement: HTMLIFrameElement): void
  reset(): void
}

class ShadowDomManagerNoop implements ShadowDomManagerInterface {
  public init() {}
  public addShadowRoot() {}
  public observeAttachShadow() {}
  public reset() {}
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now, in the place where the &lt;code&gt;ShadowDomManager&lt;/code&gt; is usually initialized, we can do the following:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;const shadowDomManager =
  typeof __RRWEB_EXCLUDE_SHADOW_DOM__ === 'boolean' &amp;amp;&amp;amp; __RRWEB_EXCLUDE_SHADOW_DOM__
    ? new ShadowDomManagerNoop()
    : new ShadowDomManager()
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This means that by default, the regular &lt;code&gt;ShadowDomManager&lt;/code&gt; is used. However, if you replace &lt;code&gt;__RRWEB_EXCLUDE_SHADOW_DOM__&lt;/code&gt; at build time with &lt;code&gt;true&lt;/code&gt;, the &lt;code&gt;ShadowDomManagerNoop&lt;/code&gt; will be used, and the &lt;code&gt;ShadowDomManager&lt;/code&gt; will thus be tree shaken away.&lt;/p&gt;
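&lt;p&gt;The whole pattern fits in a few lines. Below is a minimal, self-contained sketch; &lt;code&gt;RealManager&lt;/code&gt;, &lt;code&gt;NoopManager&lt;/code&gt;, and the flag name are hypothetical stand-ins, not the SDK's actual identifiers:&lt;/p&gt;

```javascript
// Minimal sketch of the noop-swap pattern. Both managers share the same
// interface; a build-time constant picks which one survives tree shaking.
class RealManager {
  init() { return 'observing'; }
}

class NoopManager {
  init() { return 'noop'; }
}

// A bundler would inline this constant at build time; we set it here to
// simulate a build with the feature excluded.
const __EXCLUDE_FEATURE__ = true;

const manager = __EXCLUDE_FEATURE__ ? new NoopManager() : new RealManager();
console.log(manager.init()); // 'noop'
```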

&lt;h2&gt;
  
  
  Removed canvas recording support by default
&lt;/h2&gt;

&lt;p&gt;Since we currently &lt;a href="https://github.com/getsentry/sentry-javascript/issues/6519"&gt;do not support replaying captured canvas elements&lt;/a&gt;, and because the canvas capturing code makes up a considerable amount of the rrweb codebase, we decided to remove this code by default from our rrweb fork, and instead allow you to opt-in to use this by passing a canvas manager into the rrweb &lt;code&gt;record()&lt;/code&gt; function.&lt;/p&gt;

&lt;p&gt;We implemented this in &lt;a href="https://github.com/getsentry/rrweb/pull/122"&gt;getsentry/rrweb#122&lt;/a&gt;, where we started to export a new &lt;code&gt;getCanvasManager&lt;/code&gt; function, as well as accepting such a function in the &lt;code&gt;record()&lt;/code&gt; method. With this, we can successfully tree-shake the unused canvas manager out, leading to smaller bundle size by default, unless users manually import &amp;amp; pass the &lt;code&gt;getCanvasManager&lt;/code&gt; function.&lt;/p&gt;

&lt;p&gt;Once we fully support capturing &amp;amp; replaying canvas elements in Session Replay &lt;a href="https://github.com/getsentry/sentry-javascript/issues/6519"&gt;(coming soon)&lt;/a&gt;, we will add a configuration option to new &lt;code&gt;Replay()&lt;/code&gt; to opt-in to canvas recording.&lt;/p&gt;

&lt;h2&gt;
  
  
  Removed unused code from rrweb
&lt;/h2&gt;

&lt;p&gt;Another step we took to reduce bundle size was to remove &amp;amp; streamline some code in our rrweb fork. rrweb can be configured in a lot of different ways and is very flexible. However, due to its flexibility, a lot of the code is not tree shakeable, because it depends on runtime configuration.&lt;/p&gt;

&lt;p&gt;For example, consider code like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import { large, small } from './my-code'

function doSomething(useLarge) {
  return useLarge ? large : small
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In this code snippet, even if we know we only ever call this as &lt;code&gt;doSomething(false)&lt;/code&gt;, it is impossible to tree shake the &lt;code&gt;large&lt;/code&gt; code away, because we cannot statically know at build time that &lt;code&gt;useLarge&lt;/code&gt; will always be &lt;code&gt;false&lt;/code&gt;.&lt;/p&gt;
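&lt;p&gt;One way around this, sketched below with hypothetical names, is to split the runtime flag into one function per variant, so callers statically reference only the code path they need and a bundler can drop the other:&lt;/p&gt;

```javascript
// Tree-shakeable alternative: one function per variant instead of a
// runtime flag. Names here are illustrative.
const small = () => 'small work';
const large = () => 'large work';

function doSomethingSmall() { return small(); }
function doSomethingLarge() { return large(); }

// A caller that only ever needs the small path references just
// doSomethingSmall; `large` is then statically unreachable, and a
// bundler can shake it out of the final bundle.
console.log(doSomethingSmall()); // 'small work'
```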

&lt;p&gt;Because of this, we ended up fully removing certain parts of rrweb from our fork:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;hooks&lt;/code&gt;-related code &lt;a href="https://github.com/getsentry/rrweb/pull/126"&gt;getsentry/rrweb#126&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;plugins&lt;/code&gt;-related code &lt;a href="https://github.com/getsentry/rrweb/pull/123"&gt;getsentry/rrweb#123&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Some functions on &lt;code&gt;record&lt;/code&gt; that we don't need &lt;a href="https://github.com/getsentry/rrweb/pull/113"&gt;getsentry/rrweb#113&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In addition, we also made some general small improvements which we contributed upstream to the main rrweb repository:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Avoid unnecessary cloning of objects or arrays &lt;a href="https://github.com/getsentry/rrweb/pull/125"&gt;getsentry/rrweb#125&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Avoid cloning events to add timestamp &lt;a href="https://github.com/getsentry/rrweb/pull/124"&gt;getsentry/rrweb#124&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Removed unused code in Session Replay
&lt;/h2&gt;

&lt;p&gt;In addition to rrweb, we also identified &amp;amp; removed some unused code in Session Replay itself:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Clean up some logs and internal checks &lt;a href="https://github.com/getsentry/sentry-javascript/pull/9392"&gt;getsentry/sentry-javascript#9392&lt;/a&gt;, &lt;a href="https://github.com/getsentry/sentry-javascript/pull/9391"&gt;getsentry/sentry-javascript#9391&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Remove unused function &lt;a href="https://github.com/getsentry/sentry-javascript/pull/9393"&gt;getsentry/sentry-javascript#9393&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Updated library used for compression
&lt;/h2&gt;

&lt;p&gt;We used to compress replay payloads with &lt;a href="https://github.com/nodeca/pako"&gt;pako&lt;/a&gt;, which, while it worked well enough, turned out to be a rather large (bundle-size-wise) library for compression. In &lt;a href="https://github.com/getsentry/sentry-javascript/pull/9436"&gt;getsentry/sentry-javascript#9436&lt;/a&gt; we switched to &lt;a href="https://github.com/101arrowz/fflate"&gt;fflate&lt;/a&gt;, which reduced bundle size by a few KB.&lt;/p&gt;

&lt;h2&gt;
  
  
  Made it possible to host compression worker
&lt;/h2&gt;

&lt;p&gt;We use a web worker to compress Session Replay recording data. This helps to send less data over the network, and reduces the performance overhead for users of the SDK. However, the code for the compression worker makes up about 10 KB gzipped of our bundle size - a considerable amount!&lt;/p&gt;

&lt;p&gt;Additionally, since we have to load the worker from an inlined string due to CORS restrictions, the included worker does not work for certain environments, because it requires a more lax &lt;a href="https://developer.mozilla.org/en-US/docs/Web/HTTP/CSP"&gt;CSP&lt;/a&gt; setting which some applications cannot comply with.&lt;/p&gt;

&lt;p&gt;To both satisfy stricter CSP environments and allow optimizing the SDK's bundle size, we added a way to tree shake the included compression worker and instead provide a URL to a self-hosted web worker.&lt;/p&gt;

&lt;p&gt;In &lt;a href="https://github.com/getsentry/sentry-javascript/pull/9409"&gt;getsentry/sentry-javascript#9409&lt;/a&gt; we added an example web worker that users can host on their own server, passing its location as a custom &lt;code&gt;workerUrl&lt;/code&gt; to &lt;code&gt;new Replay({})&lt;/code&gt;. With this setup, users save 10 KB gzipped of bundle size and can serve the worker as a separate asset that can be cached independently.&lt;/p&gt;
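&lt;p&gt;Wiring that up looks roughly like this (a sketch; the worker path and DSN placeholder are hypothetical values you would supply yourself):&lt;/p&gt;

```javascript
// Sketch: use a self-hosted compression worker instead of the inlined one.
import * as Sentry from '@sentry/browser';

Sentry.init({
  dsn: '__YOUR_DSN__',
  integrations: [
    new Sentry.Replay({
      // Served from your own origin, so it can be cached independently
      // and works under a stricter CSP.
      workerUrl: '/assets/replay-worker.min.js',
    }),
  ],
});
```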

</description>
      <category>sentry</category>
      <category>frontend</category>
      <category>webdev</category>
      <category>javascript</category>
    </item>
    <item>
      <title>Performance Monitoring for Every Developer: Web Vitals &amp; Function Regression Issues</title>
      <dc:creator>Matt</dc:creator>
      <pubDate>Wed, 15 Nov 2023 20:30:49 +0000</pubDate>
      <link>https://forem.com/sentry/performance-monitoring-for-every-developer-web-vitals-function-regression-issues-26j2</link>
      <guid>https://forem.com/sentry/performance-monitoring-for-every-developer-web-vitals-function-regression-issues-26j2</guid>
      <description>&lt;p&gt;Extracting relevant insights from your performance monitoring tool can be frustrating. You often get back more data than you need, making it difficult to connect that data back to the code you wrote. Sentry’s Performance monitoring product lets you cut through the noise by detecting real problems, then quickly takes you to the exact line of code responsible. The outcome: Less noise, more actionable results.&lt;/p&gt;

&lt;p&gt;Today, we’re announcing two new features to help web, mobile, and backend developers discover and solve performance problems in their apps: Web Vitals and Function Regression Issues.&lt;/p&gt;

&lt;h2&gt;
  
  
  Web Vitals: From performance scores to slow code
&lt;/h2&gt;

&lt;p&gt;Web Vitals unify the measurement of page quality into a handful of useful metrics like loading performance, interactivity, and visual stability. We used these metrics to develop the Sentry Performance Score, which is a normalized score out of 100 calculated using the weighted averages of Web Vital metrics.&lt;/p&gt;
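&lt;p&gt;As a toy sketch of the idea (the weights and sub-scores below are illustrative, not Sentry's actual weighting):&lt;/p&gt;

```javascript
// Toy weighted performance score: each Web Vital contributes a 0-1
// sub-score and a weight; the result is normalized to 0-100.
function performanceScore(subScores, weights) {
  let total = 0;
  let weightSum = 0;
  for (const key of Object.keys(weights)) {
    total += subScores[key] * weights[key];
    weightSum += weights[key];
  }
  return Math.round((total / weightSum) * 100);
}

// Illustrative inputs only: good FID, decent LCP, mediocre CLS.
const score = performanceScore(
  { lcp: 0.9, fid: 1.0, cls: 0.6 },
  { lcp: 0.3, fid: 0.3, cls: 0.4 },
);
console.log(score); // 81
```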

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpo5xgt7k3p3qark57nx5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpo5xgt7k3p3qark57nx5.png" alt="Image description" width="800" height="246"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The Sentry Performance Score is similar to Google’s Lighthouse performance score, with one key distinction: Sentry collects data from real user experiences, while Lighthouse collects data from a controlled lab environment. We modeled the score to be as close to Lighthouse as possible while excluding components that were only relevant in a lab environment.&lt;/p&gt;

&lt;h2&gt;
  
  
  Web Vitals identify the biggest opportunities for improvement
&lt;/h2&gt;

&lt;p&gt;To improve your overall performance score, you should start by identifying individual key pages that need performance improvements. To simplify this and help you cut to the chase, we rank pages by Opportunity, which indicates the impact of a single page on the overall performance score.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frz376g8gqzf2ffm9u1e5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frz376g8gqzf2ffm9u1e5.png" alt="Image description" width="800" height="121"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In this example, the highest opportunity page in Sentry’s own web app is our Issue Details page. Since this is the most commonly accessed page in our product, improving its performance would significantly improve the overall experience of using Sentry.&lt;/p&gt;

&lt;p&gt;After identifying a problematic page, the next step is to find example events where users had a subpar experience. Below, you’ll see events that represent real users loading our Issue Details page, with many experiencing poor or mediocre performance:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmxa18boiieyfvqzcvp19.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmxa18boiieyfvqzcvp19.png" alt="Image description" width="800" height="292"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In the above screenshot, you’ll see a user that experienced a performance score of 9 out of 100 (Poor), primarily driven by a 10+ second Largest Contentful Paint (LCP). Ouch. These worst-case events highlight performance problems not evident during local development or under ideal conditions (e.g. when users have a fast network connection, a high-spec device, etc.).&lt;/p&gt;

&lt;p&gt;You’ll notice some of these events have an associated ▶️ Replay. When available, these let you see a video-like reproduction of a user’s real experience with that page. When optimizing your app’s performance, these replays can help you understand where users have a subpar experience, for example, when they struggle with 10-second load times.&lt;/p&gt;

&lt;h2&gt;
  
  
  Span waterfalls highlight your most expensive operations
&lt;/h2&gt;

&lt;p&gt;To find out what caused the slow LCP, you can click the event’s Transaction button, which provides a detailed breakdown of the operations that occurred during page load. We call these operations spans.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4va4e449o2pp39512n3j.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4va4e449o2pp39512n3j.png" alt="Image description" width="800" height="394"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The most relevant spans are the ones that occur before the red LCP marker, as these spans are potentially LCP-blocking. Spans that occur after the LCP marker still impact overall page performance, but do not impact the initial page load.&lt;/p&gt;

&lt;p&gt;The first span that looks like a clear performance bottleneck is the &lt;code&gt;app.page.bundle-load&lt;/code&gt; span, which measures how long it takes to load the JavaScript bundle. In this case, loading the bundle alone takes almost 6 seconds, or about 60% of our total LCP duration.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgtqn6uy4rjp89kv507q8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgtqn6uy4rjp89kv507q8.png" alt="Image description" width="800" height="288"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The JavaScript bundle load time depends primarily on its size; reducing the bundle size would significantly improve page load speed. But even if we reduced bundle load time by 50%, LCP would only drop from 12 to 7 seconds, which means we need to look for additional optimization opportunities.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fevwwrhfj5r72wekg04gd.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fevwwrhfj5r72wekg04gd.png" alt="Image description" width="800" height="220"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The next clear opportunity is this &lt;code&gt;ui.long-task.app-init&lt;/code&gt; span taking almost 1 second. Long task spans represent operations over 50 milliseconds where the browser is executing JavaScript code and blocking the UI thread. Since this is a pure JavaScript operation, let’s go deeper and find out what’s going on.&lt;/p&gt;
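&lt;p&gt;For reference, the 50 millisecond threshold matches the browser's Long Tasks API. Below is a small sketch of filtering performance entries down to long tasks; the entries are made up, and in a browser they would come from a &lt;code&gt;PerformanceObserver&lt;/code&gt;:&lt;/p&gt;

```javascript
// A task blocks the UI thread noticeably once it runs longer than 50 ms.
const LONG_TASK_MS = 50;

function longTasks(entries) {
  return entries.filter((entry) => entry.duration > LONG_TASK_MS);
}

// In a browser, entries would come from a PerformanceObserver, e.g.
//   new PerformanceObserver((list) => console.log(longTasks(list.getEntries())))
//     .observe({ type: 'longtask', buffered: true });
const found = longTasks([{ duration: 12 }, { duration: 970 }]);
console.log(found.length); // 1
```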

&lt;h2&gt;
  
  
  Browser Profiling takes you to the line of code responsible
&lt;/h2&gt;

&lt;p&gt;Identifying the code causing a long task has traditionally been difficult, as you must reproduce the issue in a development environment with access to a profiler. To solve this, Sentry has launched new support for collecting browser JavaScript profiles in production (on Chromium-based browsers). This lets you debug real user issues and collect a wide range of sample profiles across your user base.&lt;/p&gt;

&lt;p&gt;In this example, we can open the profile associated with the page load event and see the code that executed during the ~1-second long task span:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgwmtuznruau4qy8tn088.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgwmtuznruau4qy8tn088.png" alt="Image description" width="800" height="503"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The &lt;code&gt;EChartsReactCore.prototype.componentDidMount&lt;/code&gt; function takes 558 ms to execute, which is over half the duration of the long task span. This function belongs to a React component that renders a chart, provided by the open-source ECharts visualization library. This is exactly where we should focus our attention if we hope to bring our Issue Details page load times down.&lt;/p&gt;

&lt;p&gt;In summary, we first identified a page with a poor performance score, then determined that reducing the JavaScript bundle size and optimizing a specific React component could significantly improve the performance of our Issue Details page. By finding pages with high opportunity scores, breaking down page load events, and diving deep into JavaScript performance with profiles, you can enhance your product’s overall user experience.&lt;/p&gt;

&lt;p&gt;Web Vitals are available to Sentry customers today. Profiling for Browser JavaScript is also now available in beta.&lt;/p&gt;

&lt;h2&gt;
  
  
  Function Regression Issues: more than just alerts
&lt;/h2&gt;

&lt;p&gt;Recently, we launched the ability to view the slowest and most regressed functions across your application. Now, we can help you debug function-level regressions with a new type of Performance Issue. Function Regression Issues notify you when a function in your application regresses, but they do more than just detect the regression: they use profiling data to give you essential context about what changed so you can solve the problem.&lt;/p&gt;

&lt;p&gt;Function Regression Issues can be detected on any platform that supports Sentry Profiling. Below, we’ll walk through a backend example in a Python project.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyn9jiezikcg2pv0czu7a.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyn9jiezikcg2pv0czu7a.png" alt="Image description" width="800" height="779"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The screenshot above is a real Function Regression Issue that identified a slowdown in Sentry’s live-running server code. It triggered because the duration of a function that checks customer rate limits stored in Redis regressed by nearly 50%.&lt;/p&gt;

&lt;p&gt;The top chart shows how function duration changed over time, while the bottom chart shows the number of invocations (throughput). You’ll notice that throughput also appears to increase during the slowdown period. This suggests that increased load might be one of the causes of this regression.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftirrbfcy6tubpynus58e.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftirrbfcy6tubpynus58e.png" alt="Image description" width="800" height="279"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In the screenshot above, you’ll see that the same issue gives us other essential information like which API endpoints were impacted by the regression and how much they regressed. This data reveals that the rate-limiting function was widely used and called by many of our endpoints, resulting in a significant regression in overall backend performance.&lt;/p&gt;

&lt;h2&gt;
  
  
  Jump from Regression Issues to Profiles to find the root cause
&lt;/h2&gt;

&lt;p&gt;The Function Regression Issue makes it easy to see the example profiles captured before and after the regression — they’re displayed right under the “Most Affected” endpoints. Comparing these profiles gives the most crucial context, revealing (at a code level) the change in runtime behavior that caused the regression.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fidu1tcdurb46xw3rp33e.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fidu1tcdurb46xw3rp33e.png" alt="Image description" width="800" height="545"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Here, we looked at an example profile event that was captured before the regression occurred. We noticed that our regressed function calls into two functions within the third-party &lt;code&gt;redis&lt;/code&gt; module: &lt;code&gt;ConnectionPool.get_connection&lt;/code&gt; and &lt;code&gt;ConnectionPool.release&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftkkhn43gr21v914q5rjq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftkkhn43gr21v914q5rjq.png" alt="Image description" width="800" height="607"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We compared this to a profile collected after the regression and noticed that one of those two functions, &lt;code&gt;ConnectionPool.get_connection&lt;/code&gt;, was taking significantly longer than before. Each function frame in a profile offers source context for where the function was defined and the executed line number. In this case, opening this source location in the &lt;code&gt;redis&lt;/code&gt; module yielded the following line:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcjg09jyjbv3w22koshge.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcjg09jyjbv3w22koshge.png" alt="Image description" width="800" height="489"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This line attempts to acquire a lock, and the significant increase in wall-time duration when executing it suggests that multiple processes or threads are trying to acquire the lock simultaneously. This lock contention is the code-level cause of the regression.&lt;/p&gt;

&lt;p&gt;This lock contention issue also aligns with what we saw earlier in the throughput graph: increased throughput makes contention more likely. Through additional investigation, we discovered that the increased throughput for this function corresponded to an increase in the number of Redis connections starting around the time of the regression; our next step is to isolate the source of the additional connections.&lt;/p&gt;

&lt;p&gt;Using this example, we’ve illustrated how Function Regression Issues can help you link performance regressions directly to the code causing the regression with profiling data. While this specific example focused on a backend use case, this capability works on any platform that supports Sentry Profiling.&lt;/p&gt;

&lt;p&gt;Function Regression issues are available today to Early Adopters and will become generally available over the next 2 weeks.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;A high-performance bar is critical for building differentiated products that people want to use. With Web Vitals and Function Regression Issues, we’ve provided more ways for all developers to solve performance problems by connecting them to code.&lt;/p&gt;

&lt;p&gt;Tune in for more &lt;a href="https://sentry.io/events/launch-week/" rel="noopener noreferrer"&gt;exciting product announcements&lt;/a&gt; with Sentry Launch Week. To set up Sentry Performance today, check out this &lt;a href="https://docs.sentry.io/product/performance/getting-started/" rel="noopener noreferrer"&gt;guide&lt;/a&gt;. You can also drop us a line on &lt;a href="https://twitter.com/getsentry" rel="noopener noreferrer"&gt;Twitter&lt;/a&gt; or &lt;a href="https://discord.com/invite/sentry" rel="noopener noreferrer"&gt;Discord&lt;/a&gt;, or share your Web Vitals feedback with us on &lt;a href="https://github.com/getsentry/sentry/discussions/59620" rel="noopener noreferrer"&gt;GitHub&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;And, if you’re new to Sentry, you can &lt;a href="https://sentry.io/signup/" rel="noopener noreferrer"&gt;try it for free&lt;/a&gt; or &lt;a href="https://sentry.io/demo/" rel="noopener noreferrer"&gt;request a demo&lt;/a&gt; to get started.&lt;/p&gt;

</description>
      <category>webdev</category>
      <category>javascript</category>
      <category>opensource</category>
      <category>frontend</category>
    </item>
  </channel>
</rss>
