<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: GitProtect Team</title>
    <description>The latest articles on Forem by GitProtect Team (@gitprotectteam).</description>
    <link>https://forem.com/gitprotectteam</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F985401%2F1ae5db37-c10e-45e7-b727-024db7412f59.jpg</url>
      <title>Forem: GitProtect Team</title>
      <link>https://forem.com/gitprotectteam</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/gitprotectteam"/>
    <language>en</language>
    <item>
      <title>How the Shared Responsibility Model Gap Makes You Lose Money</title>
      <dc:creator>GitProtect Team</dc:creator>
      <pubDate>Thu, 09 Apr 2026 09:58:30 +0000</pubDate>
      <link>https://forem.com/gitprotect/how-the-shared-responsibility-model-gap-makes-you-lose-money-4nb8</link>
      <guid>https://forem.com/gitprotect/how-the-shared-responsibility-model-gap-makes-you-lose-money-4nb8</guid>
      <description>&lt;p&gt;In January 2017, GitLab experienced a major outage that lasted around 18 hours, all because an engineer accidentally deleted data from the main database server. They ended up losing about six hours’ worth of database changes permanently. &lt;/p&gt;

&lt;p&gt;The postmortem shows how recovery paths can look good on paper, then fall apart under pressure. And, of course, the business only realizes this after the outage is already expensive.&lt;/p&gt;

&lt;p&gt;The same failure pattern shows up across SaaS portfolios. Vendors focus on uptime and making collaboration smooth at scale. They roll out controls for retention, recovery, and audit. But defaults, plan limits, and everyday operational constraints often fail enterprise requirements unless customers configure, test, and prove those controls.&lt;/p&gt;

&lt;p&gt;A lot of organizations treat the Shared Responsibility model—sometimes called the Limited Liability Model—as a box to check during procurement rather than something they fund, test, and review. That’s where the losses for enterprises start to add up.&lt;/p&gt;

&lt;p&gt;We’ll circle back to the GitLab incident later as a case study for execution failure. But first, let’s take a look at the highest-cost gaps between what platforms do and what enterprises need.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Economics of the Shared Responsibility Model Gap
&lt;/h2&gt;

&lt;p&gt;Gaps in the Shared Responsibility model lead to unexpected costs and missed commitments. Weak recovery and weak proof of controls create predictable expenses, and uptime guarantees alone do not cap them.&lt;/p&gt;

&lt;h2&gt;
  
  
  Four Cost Buckets
&lt;/h2&gt;

&lt;h2&gt;
  
  
  Reconstruction and rework
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://gitprotect.io/blog/github-atlassian-gitlab-handles-backup-and-restore-busted/" rel="noopener noreferrer"&gt;When restore paths only give back partial results&lt;/a&gt;, teams end up spending time and money to rebuild lost data, metadata, and relationships. &lt;/p&gt;

&lt;p&gt;Take this example: &lt;a href="https://support.atlassian.com/atlassian-cloud/kb/jira-attachments-broken-after-importing-from-csv-file/" rel="noopener noreferrer"&gt;Jira CSV-based recovery&lt;/a&gt; can break attachments and needs some special handling for issue links. If 10,000 issues are affected and around 20% need manual fixes after export/import, at 5 minutes per issue, we are talking about 167 hours of rework. With loaded labor rates of &lt;a href="https://www.salary.com/research/salary/hiring/mid-level-developer-salary" rel="noopener noreferrer"&gt;$66 to $87/hour&lt;/a&gt;, that adds up to roughly $11.0k to $14.5k in direct labor, not even counting the extra downtime. &lt;/p&gt;
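
&lt;p&gt;The arithmetic above is easy to sketch as a back-of-the-envelope model. The figures below are the illustrative numbers from this example (issue count, fix rate, minutes per fix, loaded rates), not universal constants:&lt;/p&gt;

```python
# Back-of-the-envelope rework cost after a partial restore.
# All inputs are the illustrative figures from the example above.
issues_affected = 10_000
manual_fix_rate = 0.20       # ~20% need manual fixes after export/import
minutes_per_fix = 5
hourly_rate_low, hourly_rate_high = 66, 87   # loaded labor rates, $/hour

rework_hours = issues_affected * manual_fix_rate * minutes_per_fix / 60
cost_low = rework_hours * hourly_rate_low
cost_high = rework_hours * hourly_rate_high

print(f"{rework_hours:.0f} hours of rework")            # ~167 hours
print(f"${cost_low:,.0f} to ${cost_high:,.0f} in labor")  # ~$11,000 to $14,500
```

&lt;p&gt;Swap in your own issue counts and rates; the point is that even a modest per-issue fix time multiplies into weeks of labor at scale.&lt;/p&gt;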

&lt;h2&gt;
  
  
  Downtime and productivity loss
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://gitprotect.io/blog/devsecops-mythbuster-nothing-fails-in-the-cloud-saas/" rel="noopener noreferrer"&gt;Every hour that core tools are down&lt;/a&gt; or just not usable is time you’re paying for but can’t turn into work. &lt;/p&gt;

&lt;p&gt;For instance, if you have 100 developers at $66 to $87/hr facing 6 hours of downtime, you’re looking at around $40k to $53k in lost paid time. Even once the tools are back up, the time it takes to switch contexts adds more costs. &lt;a href="https://www.microsoft.com/en-us/research/wp-content/uploads/2016/10/p903-mark.pdf" rel="noopener noreferrer"&gt;A commonly cited estimate&lt;/a&gt; from Microsoft research suggests it takes about 23 minutes to get back on track after an interruption. That could add roughly $2.5k to $3.3k for those 100 developers.&lt;/p&gt;
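
&lt;p&gt;The same numbers in code, so you can plug in your own headcount and rates (the 23-minute refocus figure is from the Microsoft research cited above; everything else is the illustrative scenario):&lt;/p&gt;

```python
# Illustrative downtime cost model: paid time lost during the outage,
# plus a ~23-minute refocus period per developer once tools come back.
developers = 100
rate_low, rate_high = 66, 87    # loaded labor rates, $/hour
outage_hours = 6
refocus_minutes = 23            # per interruption (Mark et al., Microsoft)

downtime_low = developers * rate_low * outage_hours     # $39,600
downtime_high = developers * rate_high * outage_hours   # $52,200

refocus_hours = developers * refocus_minutes / 60       # ~38.3 hours
refocus_low = refocus_hours * rate_low                  # $2,530
refocus_high = refocus_hours * rate_high                # $3,335
```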

&lt;h2&gt;
  
  
  Compliance and audit drag
&lt;/h2&gt;

&lt;p&gt;The costs here aren’t just about fines. You’ve also got to factor in the work needed for remediation, audit exceptions, and delays in procurement when you can’t prove that controls are working.&lt;/p&gt;

&lt;p&gt;For example, in enterprise deals that hinge on SOC 2 Type II, the reporting period can stretch on for months. Significant control gaps can push readiness back past the reporting window and delay the deal.&lt;/p&gt;

&lt;h2&gt;
  
  
  Trust and commercial friction
&lt;/h2&gt;

&lt;p&gt;Incidents come with downstream costs that go beyond the technical side. There is a loss of confidence, added scrutiny, and extra process overhead. Internally, teams might lose trust and have to create workarounds. Externally, customers and users might slow down adoption, ask for more support, renegotiate contracts, or even churn.&lt;/p&gt;

&lt;p&gt;Take security questionnaires, for example. They’re pretty standard and often seen as a hassle in B2B settings. After an incident, they get even more complicated and time-sensitive. Producing answers can &lt;a href="https://www.scworld.com/perspective/time-to-streamline-security-questionnaires" rel="noopener noreferrer"&gt;take up to 15 hours&lt;/a&gt; each time, and it adds up with each review cycle.&lt;/p&gt;

&lt;h2&gt;
  
  
  Five Gaps Where Default Platform Controls Fail Enterprise Obligations
&lt;/h2&gt;

&lt;p&gt;Enterprise exposure often stems from defaults, retention windows, and operational constraints. Here are five gaps that highlight these issues.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Note on terminology: organizations tier systems differently. In this post, “critical systems” means the applications where loss of access, integrity, or collaboration context would cause material, operational, or compliance impact. Start with the top tier, then expand.&lt;/em&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Gap 1: Retention Duration
&lt;/h2&gt;

&lt;p&gt;Service providers often maintain long-term retention, mainly for their own operational needs and critical system info. This helps them run, secure, and troubleshoot their services. Platforms like GitLab, GitHub, or Atlassian follow standards like SOC 2 or ISO 27001, which usually require them to retain certain internal data for much longer than 365 days.&lt;/p&gt;

&lt;p&gt;This can give customers a false sense of security. They often assume that the same long-term retention applies to their own data too. In reality, that’s usually not the case, and customer-level data retention and recoverability may be much more limited. For instance, &lt;a href="https://docs.github.com/en/enterprise-cloud@latest/admin/concepts/security-and-compliance/audit-log-for-an-enterprise" rel="noopener noreferrer"&gt;GitHub Enterprise audit log events&lt;/a&gt; and &lt;a href="https://learn.microsoft.com/en-us/purview/audit-log-retention-policies" rel="noopener noreferrer"&gt;Microsoft Purview Audit (Standard) audit records&lt;/a&gt; cover 180 days by default. Enterprise obligations (legal, contractual, and regulatory) often call for records to be kept for several years. Examples include &lt;a href="https://www.law.cornell.edu/cfr/text/45/164.316" rel="noopener noreferrer"&gt;HIPAA documentation retention requirements&lt;/a&gt; of six years and &lt;a href="https://www.finra.org/rules-guidance/guidance/interpretations-financial-operational-rules/sea-rule-17a-4-and-related-interpretations" rel="noopener noreferrer"&gt;SEC Rule 17a-4&lt;/a&gt; broker-dealer record retention requirements of three to six years, depending on the record type.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Shared Responsibility model boundary&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The provider takes care of audit logging and retention controls, but they come with defaults and limits based on your plan. It’s up to you to define the necessary retention periods, configure those settings, and ensure coverage across critical systems.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Impact if you do nothing&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;If an incident happens, &lt;strong&gt;late detection makes it tough to figure out what went wrong&lt;/strong&gt; because the trail of “who did what when” is gone.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Litigation risks increase if you can’t keep or&lt;/strong&gt; produce electronically stored information under &lt;a href="https://www.law.cornell.edu/rules/frcp/rule_37" rel="noopener noreferrer"&gt;Rule 37(e)&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;Not producing required historical records can lead to &lt;strong&gt;audit findings, remediation, and enforcement exposure&lt;/strong&gt; in regulated environments.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Test&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Can you piece together “what changed” and “who did it” over the required timeframe?&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;How to close the gap&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Implement a backup strategy with a &lt;strong&gt;flexible retention policy&lt;/strong&gt;, up to unlimited retention.&lt;/li&gt;
&lt;li&gt;Set &lt;strong&gt;clear retention requirements&lt;/strong&gt; for each system with designated owners and a regular review schedule.&lt;/li&gt;
&lt;li&gt;Centralize and keep audit and &lt;strong&gt;configuration evidence beyond the platform defaults&lt;/strong&gt; (SIEM, archive, backup) and check coverage periodically.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Metric to track&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The percentage of critical systems where the evidence lookback meets or exceeds the required duration (goal: 100%).&lt;/p&gt;

&lt;h2&gt;
  
  
  Gap 2: Metadata Preservation
&lt;/h2&gt;

&lt;p&gt;One big pitfall is treating migration tools like they’re a backup solution. &lt;/p&gt;

&lt;p&gt;“Migration” can refer to shifting data within the same platform (cloud-to-cloud, or on-premises to cloud) or switching to a completely different platform. In any case, exports often bring back the primary records, but they can miss the collaboration layer that makes systems usable. We’re talking about relationships, discussions, review context and history here.&lt;/p&gt;

&lt;p&gt;Cross-platform migrations raise the stakes because each platform structures and stores metadata slightly differently, so one-to-one preservation is not guaranteed. Vendors warn against this in their documentation. For instance, &lt;a href="https://docs.github.com/en/enterprise-server@3.18/repositories/archiving-a-github-repository/backing-up-a-repository" rel="noopener noreferrer"&gt;GitHub mentions that migration archives are migration artifacts, not backups&lt;/a&gt;. They also point out that migration outcomes might include warnings where data was skipped or migrated with caveats.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Shared Responsibility model boundary&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The provider puts out APIs and migration or export mechanisms. It’s up to you to ensure that recovery keeps the metadata and relationships essential for smooth operation.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Impact if you do nothing&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Teams might get their code back, but they’ll &lt;strong&gt;lose the decision-making context&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Rework&lt;/strong&gt; can &lt;strong&gt;pile up&lt;/strong&gt; since intent and review history are missing.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Incident response can drag on&lt;/strong&gt; because scope and intent are harder to reconstruct.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Test&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;After a restore, can teams explain what changed and why using the recovered collaboration context, or do they only have the raw repository?&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;How to close the gap&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Go for backups that &lt;strong&gt;keep full metadata&lt;/strong&gt; and relationships.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Run granular restore tests&lt;/strong&gt; to make sure the metadata is intact.&lt;/li&gt;
&lt;li&gt;Define and document &lt;strong&gt;key metadata for each system&lt;/strong&gt;.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Metric to track&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The percentage of systems where the last restore test confirmed metadata integrity. Aim for 100%.&lt;/li&gt;
&lt;li&gt;Measure the average time to reconstitute in-scope systems. Set a target and check it quarterly.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Gap 3: Tested Recovery
&lt;/h2&gt;

&lt;p&gt;Many organizations run backups, but recovery fails because restores are not tested in real-world scenarios. In SaaS, restore speed is often bound by API and platform limits. Even a correct backup can miss RTO if restoration depends on thousands of API operations that can’t be executed fast enough. Failure points often show up only during incidents: corrupted archives, missing credentials, and unclear runbooks.&lt;/p&gt;
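
&lt;p&gt;The throughput math is worth sketching. In this illustration, the object count, calls-per-object, and RTO are assumptions; the 5,000 requests/hour figure is GitHub’s documented primary rate limit for authenticated REST API requests:&lt;/p&gt;

```python
# Why a "correct" backup can still blow the RTO: restore time is
# bounded by API throughput, not disk speed.
objects_to_restore = 20_000   # issues, PRs, comments, reviews (assumed)
api_calls_per_object = 3      # create + metadata + relationships (assumed)
rate_limit_per_hour = 5_000   # GitHub's documented authenticated REST limit

restore_hours = objects_to_restore * api_calls_per_object / rate_limit_per_hour
print(f"Best-case restore time: {restore_hours:.0f} hours")   # 12 hours

rto_hours = 4                 # assumed recovery time objective
misses_rto = restore_hours > rto_hours   # True: backup exists, RTO still missed
```

&lt;p&gt;Twelve hours of best-case API time against a four-hour RTO is the kind of failure you only discover by running a real restore test.&lt;/p&gt;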

&lt;p&gt;&lt;strong&gt;Shared Responsibility model boundary&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The provider sets the stage with APIs and exports that have rate limits and operational constraints. Meanwhile, you have to show that recovery meets RTO/RPO by routinely testing restores in realistic conditions and keeping a record of the results.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Impact if you do nothing&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A restore might work, but if it misses the RTO, it turns into &lt;strong&gt;a business outage&lt;/strong&gt;. Teams end up waiting, rebuilding things manually, or dealing with only partial restorations.&lt;/li&gt;
&lt;li&gt;The scope of investigation and &lt;strong&gt;remediation grows&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Restore throughput becomes a bottleneck&lt;/strong&gt; that competes with ongoing delivery work.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Test&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Can you restore within RTO if your most privileged production credentials are compromised and primary data is deleted or corrupted?&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;How to close the gap&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://gitprotect.io/blog/become-the-master-of-disaster-disaster-recovery-testing-for-devops/" rel="noopener noreferrer"&gt;Conduct quarterly restore tests&lt;/a&gt; for your most important systems with documented RTO/RPO results and pass/fail criteria.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Automate restore testing&lt;/strong&gt; wherever you can.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Maintain restore runbooks&lt;/strong&gt; that include escalation paths and evidence collection.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Metric to track&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Restore success rate, defined as the percentage of quarterly tests that meet RTO/RPO. Start by measuring the critical systems, then gradually expand to other systems based on priority.&lt;/li&gt;
&lt;li&gt;Percentage of critical systems with a documented, tested restore runbook. Goal: 100%.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Gap 4: Independent Recovery
&lt;/h2&gt;

&lt;p&gt;Recovery copies that are stored in the same tenant/account and under the same credentials as production can be deleted, encrypted, or invalidated by someone using a compromised admin account. Replication doesn’t solve this issue. It improves availability, but it also spreads deletions and corruption around.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Shared Responsibility model boundary&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The provider builds availability and redundancy. You make sure recovery points exist outside the danger zone of production-admin compromises.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Impact if you do nothing&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;One compromised account can wipe out all your recovery points. Even if some copies survive, you might &lt;strong&gt;not have a reliable restore point&lt;/strong&gt; after a destructive event.&lt;/li&gt;
&lt;li&gt;Logical corruption and ransomware can spread across replicated systems. With time-limited rollback tools, if you catch it too late, you may end up with &lt;strong&gt;no clean point-in-time restore&lt;/strong&gt;.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Test&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;If privileged admin credentials get stolen, can that account delete backups or invalidate recovery copies? &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;How to close the gap&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Store backups in a &lt;strong&gt;different account/tenant&lt;/strong&gt; with separate admin credentials.&lt;/li&gt;
&lt;li&gt;Add &lt;strong&gt;deletion safeguards&lt;/strong&gt; that match your risk level, like immutable storage (WORM) or time-delayed deletion.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Metric to track&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Percentage of critical systems where production admin can delete backups. Goal: 0%.&lt;/p&gt;

&lt;h2&gt;
  
  
  Gap 5: Compliance Evidence
&lt;/h2&gt;

&lt;p&gt;Auditors and enterprise buyers want proof that backups run, restores are tested, retention is applied, and failures are reviewed. &lt;a href="https://drata.com/learn/soc-2/type-2-overview" rel="noopener noreferrer"&gt;SOC 2 Type II&lt;/a&gt; is built around this concept. It checks how well controls are working over a specific audit period. In the &lt;a href="https://assets.ctfassets.net/rb9cdnjh59cm/72xv4p67HVXKp6CjWmjkPk/1cdbfa19f6307e2720396b66a6194dc9/trust-services-criteria-updated-copyright.pdf" rel="noopener noreferrer"&gt;Availability criteria&lt;/a&gt;, A1.2 covers backup and recovery infrastructure, and A1.3 dives into testing recovery procedures, including the regular check on backup completeness.&lt;/p&gt;

&lt;p&gt;This is where a lot of teams trip up. They run backups, but cannot prove when the last successful restore test was, when a retention policy was in effect, or who reviewed failures.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Shared Responsibility model boundary&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The provider offers platform audit logs, often with retention limits. You produce evidence packs: backup coverage, retention settings, restore-test results, exception handling, and exported logs.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Impact if you do nothing&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Weak evidence can &lt;strong&gt;slow down audits, procurement processes, and customer security reviews&lt;/strong&gt;. Missing elements can lead to SOC 2 exceptions, requiring you to fix things and push reporting deadlines back.&lt;/li&gt;
&lt;li&gt;Collecting data manually turns into &lt;strong&gt;weeks of screenshot chasing&lt;/strong&gt; and log reconstruction across IT, Sec, and GRC teams.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Test&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;If evidence lives only in the compromised tenant, can an attacker tamper with or delete it? &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;How to close the gap&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Automate evidence capture&lt;/strong&gt; for backup coverage, retention settings, and restore-test results.&lt;/li&gt;
&lt;li&gt;Export and protect &lt;strong&gt;audit and config logs outside the production admin plane&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://gitprotect.io/blog/security-compliance-best-practices/" rel="noopener noreferrer"&gt;Collect evidence continuously&lt;/a&gt;.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Metrics to track&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Time to produce an audit-ready evidence pack for in-scope systems. Aim for under 24 hours.&lt;/li&gt;
&lt;li&gt;Percentage of systems with automated evidence reporting. Goal: 100%.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;em&gt;Learn more about the Shared Responsibility model for different service providers:&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;📌 &lt;em&gt;Shared Responsibility Model in &lt;a href="https://gitprotect.io/blog/shared-responsibility-model-in-azure-devops/" rel="noopener noreferrer"&gt;Azure DevOps&lt;/a&gt;&lt;/em&gt;&lt;br&gt;
📌 &lt;em&gt;&lt;a href="https://gitprotect.io/blog/atlassian-cloud-shared-responsibility-model-are-you-aware-of-your-duties/" rel="noopener noreferrer"&gt;Atlassian Cloud&lt;/a&gt; Shared Responsibility Model&lt;/em&gt;&lt;br&gt;
📌 &lt;em&gt;&lt;a href="https://gitprotect.io/blog/gitlab-shared-responsibility-model-a-guide-to-collaborative-security/" rel="noopener noreferrer"&gt;GitLab Shared Responsibility Model&lt;/a&gt;&lt;/em&gt;&lt;br&gt;
📌 &lt;em&gt;&lt;a href="https://gitprotect.io/blog/github-shared-responsibility-model-and-source-code-protection/" rel="noopener noreferrer"&gt;GitHub Shared Responsibility Model&lt;/a&gt;&lt;/em&gt;&lt;br&gt;
📌 &lt;em&gt;&lt;a href="https://gitprotect.io/blog/microsoft-365-what-are-your-duties-within-the-shared-responsibility-model/" rel="noopener noreferrer"&gt;Microsoft 365&lt;/a&gt;: What Are Your Duties Within The Shared Responsibility Model&lt;/em&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Three Real-Life Cases: When the Gap Caused Loss
&lt;/h2&gt;

&lt;p&gt;Let’s take a look at three incidents that map directly to one or more gaps above. &lt;/p&gt;

&lt;h2&gt;
  
  
  GitLab.com (2017)
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Gaps exposed: metadata preservation, tested recovery&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;As we talked about in the intro, in 2017, GitLab.com accidentally removed data from the main database server. This outage affected roughly 5,000 projects, 5,000 comments, and 700 new users. Issues and snippets were also impacted. &lt;/p&gt;

&lt;p&gt;They noted that several recovery options just weren’t available when they needed them. The replica wasn’t usable. Azure disk snapshots hadn’t been enabled for the database servers. pg_dump backups were failing due to a PostgreSQL version mismatch, leaving the expected S3 backups out of commission. Recovery relied on an LVM snapshot created for staging, and copying and restoring it took many hours due to slow disks. &lt;/p&gt;

&lt;p&gt;Even though GitLab was the provider here, this kind of failure can happen anywhere. “Backup exists” and “recovery works” are not the same without a tested, metadata-complete recovery. If you can’t restore the collaboration layer, you risk losing operational continuity.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What would reduce the impact?&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://gitprotect.io/blog/why-backup-github-gitlab-or-bitbucket-the-risk-of-data-loss/" rel="noopener noreferrer"&gt;Independent backups&lt;/a&gt; that capture both repos and the collaboration layer, allowing for point-in-time restores.&lt;/li&gt;
&lt;li&gt;Restore testing that proves RTO/RPO and checks the metadata integrity.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Atlassian Cloud (2022)
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Gaps exposed: tested recovery and independent recovery&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;On April 5, 2022, Atlassian reported that an internal maintenance script &lt;a href="https://www.atlassian.com/blog/atlassian-engineering/post-incident-review-april-2022-outage" rel="noopener noreferrer"&gt;deleted 883 cloud sites&lt;/a&gt;, affecting 775 customers. This meant that many organizations lost access to Jira, Confluence, and other Atlassian Cloud products. For some customers, the &lt;a href="https://gitprotect.io/blog/was-the-jira-outage-the-last-atlassian-problem/" rel="noopener noreferrer"&gt;outage dragged on for as long as 14 days&lt;/a&gt;. &lt;/p&gt;

&lt;p&gt;This incident shows the tested recovery gap at scale. The presence of backups did not translate into a fast, predictable recovery path when hundreds of tenants had to be restored under pressure. Plus, it exposed how many customers lacked independence in their recovery options. Most organizations didn’t have a way to control their recovery points or maintain continuity while waiting for the vendor to sort things out.&lt;/p&gt;

&lt;p&gt;Losing access to key systems for several days led to problems like missed approvals, manual reconciliations, and a whole backlog to tackle once access returned.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What would reduce the impact?&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Customer-controlled copies outside Atlassian Cloud to support continuity of operations and decision-making while restoration progressed.&lt;/li&gt;
&lt;li&gt;The ability to restore important projects or spaces in a separate environment (self-managed instance or alternative workspace) to maintain workflow.&lt;/li&gt;
&lt;li&gt;Restore drills that test “time-to-operate” under real constraints (volume, permissions, and dependencies).&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Code Spaces (2014)
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Gaps exposed: independent recovery and tested recovery&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;In June 2014, Code Spaces, a code-hosting SaaS provider, &lt;a href="https://thehackernews.com/2014/06/cyber-attack-on-code-spaces-puts.html" rel="noopener noreferrer"&gt;faced a DDoS attack&lt;/a&gt; along with an extortion demand. The attacker got into the company’s AWS console. When Code Spaces tried to take back control, the attacker deleted resources like EBS snapshots, S3 buckets, AMIs, and several instances.&lt;/p&gt;

&lt;p&gt;Later on, Code Spaces announced that most of its data, backups, machine configurations, and “offsite backups” were either partially or completely deleted over about a 12-hour period. Shortly after, the company announced it would cease trading.&lt;/p&gt;

&lt;p&gt;This incident shows the independent recovery gap. Recovery copies were reachable from the same compromised control plane. Because the privileged credentials were compromised, both production and recovery got hit. Plus, there wasn’t any solid restore path that could hold up under those circumstances.&lt;/p&gt;

&lt;p&gt;You see a similar failure mode in SaaS when tenant admins can delete recovery copies.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What would reduce the impact?&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Backups kept in a separate AWS account with different credentials, so the attacker couldn’t reach them.&lt;/li&gt;
&lt;li&gt;Immutable backup storage, so even if someone had console access, backups couldn’t be deleted for a set retention period.&lt;/li&gt;
&lt;li&gt;A third-party backup outside AWS entirely to spread out the risk across different providers.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  How to Close the Shared Responsibility Model Gap
&lt;/h2&gt;

&lt;p&gt;SaaS vendors keep platforms running. You need to &lt;a href="https://gitprotect.io/blog/devops-security-data-protection-best-practices/" rel="noopener noreferrer"&gt;make sure recovery and proof are predictable&lt;/a&gt;. In practice, this operating model breaks down into three main parts.&lt;/p&gt;

&lt;p&gt;First, keep recovery points you control away from the same tenant and admin blast radius as your production. At the same time, make sure retention matches your obligations, not just platform defaults.&lt;/p&gt;

&lt;p&gt;Next, connect your evidence to your assurance needs (SOC 2, ISO 27001, customer security reviews) without turning compliance into a manual project.&lt;/p&gt;

&lt;p&gt;Lastly, prove it works. Run restore tests for your key systems on a regular schedule and under real constraints.&lt;/p&gt;

&lt;p&gt;📌 &lt;em&gt;For teams using GitHub, GitLab, Bitbucket, Azure DevOps, Jira, Confluence, and Microsoft 365, GitProtect could be a solid way to get that operating model in place.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;It offers customer-controlled backups that sit outside the source platform, supports policy-driven retention, and gives you exportable reports for audits and security reviews.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;It backs up core data and collaboration metadata, allows for granular restores, and reduces reliance on default platform recovery windows.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;GitProtect supports flexible deployment and storage options that reduce dependency on a single cloud or admin plane. If it fits into your continuity strategy, cross-platform migration can also give you another recovery path.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;✍️ Subscribe to &lt;a href="https://gitprotect.io/gitprotect-newsletter.html?utm_source=d&amp;amp;utm_medium=m" rel="noopener noreferrer"&gt;GitProtect DevSecOps X-Ray Newsletter&lt;/a&gt; – your guide to the latest DevOps &amp;amp; security insights&lt;/p&gt;

&lt;p&gt;🚀 Ensure compliant &lt;a href="https://gitprotect.io/sign-up.html?utm_source=d&amp;amp;utm_medium=m" rel="noopener noreferrer"&gt;DevOps backup and recovery with a 14-day free trial&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;📅 Let’s discuss your needs and &lt;a href="https://calendly.com/d/3s9-n9z-pgc/gitprotect-live-demo?month=2024-04&amp;amp;utm_source=d&amp;amp;utm_medium=m" rel="noopener noreferrer"&gt;see a live product tour&lt;/a&gt;&lt;/p&gt;

</description>
      <category>devops</category>
      <category>cloud</category>
      <category>security</category>
      <category>devsecops</category>
    </item>
    <item>
      <title>Write Once, Read Many: How WORM Storage Makes Your Data Secure</title>
      <dc:creator>GitProtect Team</dc:creator>
      <pubDate>Thu, 19 Mar 2026 12:17:26 +0000</pubDate>
      <link>https://forem.com/gitprotect/write-once-read-many-how-worm-storage-makes-your-data-secure-1gdj</link>
      <guid>https://forem.com/gitprotect/write-once-read-many-how-worm-storage-makes-your-data-secure-1gdj</guid>
      <description>&lt;p&gt;WORM (&lt;strong&gt;Write Once, Read Many&lt;/strong&gt;) is a data storage model specifically designed to guarantee &lt;strong&gt;data integrity&lt;/strong&gt; over time. In a WORM-compliant storage, data is &lt;strong&gt;written once&lt;/strong&gt; and &lt;strong&gt;cannot be altered or erased&lt;/strong&gt; for a defined retention period (can be &lt;strong&gt;read as often as needed&lt;/strong&gt; though).&lt;/p&gt;

&lt;h2&gt;
  
  
  What is WORM (Write Once Read Many)
&lt;/h2&gt;

&lt;p&gt;WORM enforces &lt;strong&gt;two crucial rules&lt;/strong&gt;:&lt;/p&gt;

&lt;p&gt;👉 data cannot be rewritten (no overwrite)&lt;br&gt;
👉 data cannot be erased (no delete) until retention expires&lt;/p&gt;

&lt;p&gt;Bear in mind – these must be &lt;strong&gt;enforced at the storage level&lt;/strong&gt;, not through permissions or user roles. That distinction is key because data may still be vulnerable if protection depends on who is logged in or what rights they have.&lt;/p&gt;

&lt;h2&gt;
  
  
  How did WORM originate, and what is it used for?
&lt;/h2&gt;

&lt;p&gt;WORM storage was developed for &lt;strong&gt;environments where data is treated as evidence&lt;/strong&gt;. &lt;a href="https://gitprotect.io/blog/how-to-protect-your-finance-and-banking-devops-data/" rel="noopener noreferrer"&gt;Financial&lt;/a&gt; institutions, healthcare providers, and &lt;a href="https://gitprotect.io/industries/regulated-industries.html" rel="noopener noreferrer"&gt;regulated industries&lt;/a&gt; often rely on WORM-compliant storage instances to ensure that records remain complete, unchanged, and legally defensible over time. In these contexts, even a &lt;strong&gt;single altered byte of data&lt;/strong&gt; could &lt;strong&gt;invalidate the whole dataset&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;From a technical perspective, WORM storage typically works by assigning an object or record a &lt;a href="https://gitprotect.io/blog/the-importance-of-data-retention-policies-in-devops-backup-and-recovery/#popup-maker" rel="noopener noreferrer"&gt;retention period&lt;/a&gt; &lt;strong&gt;at the time it is written&lt;/strong&gt;. Until that period expires, the storage system itself &lt;strong&gt;rejects any attempt to modify or remove&lt;/strong&gt; the data, regardless of the user’s intent or access level.&lt;/p&gt;

&lt;p&gt;👉 The key implication is that &lt;strong&gt;if data can be changed or deleted&lt;/strong&gt; before its retention period ends, it is &lt;strong&gt;not WORM-compliant&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;This separates WORM from traditional storage models and lays the foundation for modern &lt;a href="https://gitprotect.io/blog/devops-security-data-protection-best-practices/" rel="noopener noreferrer"&gt;data protection strategies&lt;/a&gt;. This is especially true for environments exposed to &lt;strong&gt;&lt;a href="https://gitprotect.io/blog/ransomware-attacks-on-github-bitbucket-and-gitlab-what-you-should-know/" rel="noopener noreferrer"&gt;ransomware&lt;/a&gt;, insider threats, and &lt;a href="https://gitprotect.io/blog/security-compliance-best-practices/" rel="noopener noreferrer"&gt;compliance&lt;/a&gt; audits&lt;/strong&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  How WORM works in practice
&lt;/h2&gt;

&lt;p&gt;The mechanism used for WORM-compliant storage instances is simple and unforgiving:&lt;/p&gt;

&lt;p&gt;👉 First, an object is &lt;strong&gt;written to storage&lt;/strong&gt;, then &lt;strong&gt;a retention lock is applied&lt;/strong&gt;. Now, until expiration, there is &lt;strong&gt;no overwrite, no delete, and no metadata changes&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;In &lt;strong&gt;compliance-grade WORM implementations&lt;/strong&gt;, the set retention cannot be bypassed even by administrators without violating the integrity of the storage system itself. This is what separates WORM from configuration-based immutability.&lt;/p&gt;
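&lt;p&gt;The write–lock–reject mechanism above can be sketched in a few lines of Python. This is an illustrative in-memory model, not a real storage backend: objects are accepted once, tagged with a retention deadline, and any overwrite or delete before that deadline is refused.&lt;/p&gt;

```python
import time

class WormStore:
    """Toy in-memory model of WORM semantics: write once, read many,
    no overwrite or delete until the retention period expires."""

    def __init__(self):
        self._objects = {}  # key -> (data, retain_until_epoch)

    def write(self, key, data, retention_seconds):
        if key in self._objects:
            raise PermissionError("WORM violation: object already exists")
        self._objects[key] = (data, time.time() + retention_seconds)

    def read(self, key):
        return self._objects[key][0]  # reads are always allowed

    def delete(self, key):
        data, retain_until = self._objects[key]
        if time.time() < retain_until:
            raise PermissionError("WORM violation: retention not expired")
        del self._objects[key]
```

&lt;p&gt;A real WORM system enforces the same two rules in the storage firmware or API, where no caller, including administrators, can bypass them.&lt;/p&gt;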

&lt;h2&gt;
  
  
  WORM vs immutable storage
&lt;/h2&gt;

&lt;p&gt;The terms WORM and &lt;a href="https://gitprotect.io/blog/immutable-storage/" rel="noopener noreferrer"&gt;immutable storage&lt;/a&gt; are sometimes used interchangeably. That’s a mistake. They actually refer to &lt;strong&gt;different levels of enforcement&lt;/strong&gt; – confusing them may lead to false security assumptions.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Immutable storage&lt;/strong&gt; is a broader (and often weaker) concept. In many systems it is:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;implemented at the application layer,&lt;/li&gt;
&lt;li&gt;dependent on permissions.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Keep in mind that immutable storage is also &lt;strong&gt;vulnerable to&lt;/strong&gt; credentials compromise, misconfigurations, and insider threats.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;WORM-compliant storage&lt;/strong&gt;, by contrast, is:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;enforced at the storage layer,&lt;/li&gt;
&lt;li&gt;independent of application logic.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;👉 Every &lt;strong&gt;WORM system is immutable&lt;/strong&gt;, but &lt;strong&gt;not every immutable system is WORM-compliant&lt;/strong&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why WORM is important against ransomware
&lt;/h2&gt;

&lt;p&gt;Ransomware does not attack data itself. It &lt;strong&gt;attacks your ability to recover&lt;/strong&gt; that data. A typical attack chain involves account takeover, deletion of backups, production encryption, and a ransom demand.&lt;/p&gt;

&lt;p&gt;WORM-compliant storage breaks this chain: &lt;strong&gt;backups still exist, and the data cannot be deleted, encrypted, or overwritten&lt;/strong&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  WORM-compliant storage in GitProtect
&lt;/h3&gt;

&lt;p&gt;GitProtect.io uses WORM-style &lt;strong&gt;immutable storage as a built-in ransomware defense&lt;/strong&gt; and regulatory readiness measure, ensuring backup data remains unchanged for defined retention periods. The platform’s approach has three practical dimensions.&lt;/p&gt;

&lt;h2&gt;
  
  
  1 Object Lock immutability support
&lt;/h2&gt;

&lt;p&gt;When GitProtect writes backup data to &lt;strong&gt;S3-compatible storage&lt;/strong&gt; with Object Lock enabled, the storage itself &lt;strong&gt;enforces WORM retention&lt;/strong&gt;. This means backups are &lt;a href="https://gitprotect.io/blog/why-immutable-backups-are-essential-for-data-security-in-devops/" rel="noopener noreferrer"&gt;stored with native immutability&lt;/a&gt; ensured by the storage provider’s lock mechanics – &lt;strong&gt;preventing modification or deletion&lt;/strong&gt; during retention.&lt;/p&gt;

&lt;p&gt;It leverages standard object storage features to make backups &lt;strong&gt;tamper-resistant&lt;/strong&gt; at the storage API level.&lt;/p&gt;
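&lt;p&gt;As a concrete sketch, this is roughly what a locked write looks like against the standard S3 API (the bucket name, key, and retention window here are illustrative). The ObjectLockMode and ObjectLockRetainUntilDate parameters belong to the S3 PutObject call; in COMPLIANCE mode, no identity can shorten the lock once it is set.&lt;/p&gt;

```python
from datetime import datetime, timedelta, timezone

def object_lock_params(bucket, key, retention_days):
    """Build PutObject parameters that request a COMPLIANCE-mode lock.
    The storage service, not the client, enforces the resulting retention."""
    retain_until = datetime.now(timezone.utc) + timedelta(days=retention_days)
    return {
        "Bucket": bucket,
        "Key": key,
        "ObjectLockMode": "COMPLIANCE",           # cannot be lifted early
        "ObjectLockRetainUntilDate": retain_until,
    }

# With boto3 (not imported here), the parameters would be used as:
#   s3 = boto3.client("s3")
#   s3.put_object(Body=data, **object_lock_params("backups", "repo.tar", 180))
```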

&lt;h2&gt;
  
  
  2 You choose the storage target
&lt;/h2&gt;

&lt;p&gt;GitProtect supports:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;its own cloud storage with WORM enforcement enabled by default&lt;/li&gt;
&lt;li&gt;user-managed &lt;a href="https://gitprotect.io/blog/s3-storage-for-devops-backups/" rel="noopener noreferrer"&gt;S3-compatible&lt;/a&gt; buckets that already have Object Lock turned on&lt;/li&gt;
&lt;li&gt;any combination of cloud, on-premise, or hybrid targets&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This flexibility lets you &lt;strong&gt;implement WORM in the storage&lt;/strong&gt; tier that matches your &lt;strong&gt;compliance and resilience requirements&lt;/strong&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  3 Complemented by multi-storage replication
&lt;/h2&gt;

&lt;p&gt;Immutable backups alone reduce data-tampering risk, but GitProtect also lets you distribute copies across multiple storage instances (cloud, on-premise, hybrid). This supports adherence to robust strategies like the &lt;a href="https://gitprotect.io/blog/3-2-1-backup-rule-complete-guide/" rel="noopener noreferrer"&gt;3-2-1 backup rule&lt;/a&gt; – multiple copies, different systems, one off-site – with immutable snapshots protected at each target.&lt;/p&gt;

&lt;p&gt;The combination of Object Lock immutability with distributed backup copies means:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;locked backups cannot be overwritten or erased&lt;/li&gt;
&lt;li&gt;separate copies exist outside of any one storage failure&lt;/li&gt;
&lt;li&gt;restore paths remain available even if a primary target is compromised &lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Practical product behaviors worth noting
&lt;/h2&gt;

&lt;p&gt;Immutable configuration must be enabled when creating the bucket for some storage types – you &lt;strong&gt;cannot turn it on retroactively after creation&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;GitProtect’s internal retention and versioning logic is &lt;strong&gt;not a substitute for the object storage’s own retention policies&lt;/strong&gt; – the two work in layers. In AWS, for example, write the data to a WORM-compliant S3 bucket with Object Lock set to 6 months, then set GitProtect’s retention to 3 months. After 3 months, GitProtect sends a delete request to the storage, but the lock rejects it; the data is actually removed only once the 6-month lock expires.&lt;/p&gt;
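&lt;p&gt;The arithmetic of that layered setup is worth making explicit. With an illustrative 6-month storage lock and a 3-month backup-tool retention, the delete request the tool issues at month 3 is simply refused; data actually disappears only once the later of the two deadlines has passed:&lt;/p&gt;

```python
from datetime import date, timedelta

def effective_deletion_date(written, lock_days, tool_retention_days):
    """Data is removable only after BOTH the storage-level lock and the
    backup tool's retention have expired; the later deadline wins."""
    lock_expiry = written + timedelta(days=lock_days)
    tool_expiry = written + timedelta(days=tool_retention_days)
    return max(lock_expiry, tool_expiry)

# Example: ~180-day Object Lock vs ~90-day tool retention.
when = effective_deletion_date(date(2026, 1, 1), lock_days=180, tool_retention_days=90)
```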

&lt;p&gt;With this approach, you guarantee &lt;strong&gt;resilience against&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;ransomware,&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://gitprotect.io/blog/human-error-the-most-common-cybersecurity-mistakes-for-devops/" rel="noopener noreferrer"&gt;human error&lt;/a&gt;,&lt;/li&gt;
&lt;li&gt;malicious admins.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Why WORM alone is not enough
&lt;/h2&gt;

&lt;p&gt;Here’s the uncomfortable truth – &lt;strong&gt;WORM-compliant storage&lt;/strong&gt; without &lt;strong&gt;replication, account separation, monitoring, or recovery testing&lt;/strong&gt; creates a false sense of confidence.&lt;/p&gt;

&lt;p&gt;WORM must be:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;part of a backup &amp;amp; &lt;a href="https://gitprotect.io/blog/become-the-master-of-disaster-disaster-recovery-plan-for-devops/" rel="noopener noreferrer"&gt;disaster recovery strategy&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;integrated with operational processes&lt;/li&gt;
&lt;li&gt;regularly tested via restore scenarios&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This is where it ties back directly into the Shared Responsibility Model, ransomware prevention, and &lt;a href="https://gitprotect.io/blog/become-the-master-of-disaster-disaster-recovery-testing-for-devops/" rel="noopener noreferrer"&gt;recovery testing&lt;/a&gt; best practices – underscoring its &lt;strong&gt;importance in current data protection strategies&lt;/strong&gt; (not just in &lt;a href="https://gitprotect.io/blog/devops-pillars-top-15-devops-principles/" rel="noopener noreferrer"&gt;DevOps&lt;/a&gt;).&lt;/p&gt;

&lt;p&gt;✍️ Subscribe to &lt;a href="https://gitprotect.io/gitprotect-newsletter.html?utm_source=d&amp;amp;utm_medium=m" rel="noopener noreferrer"&gt;GitProtect DevSecOps X-Ray Newsletter&lt;/a&gt; – your guide to the latest DevOps &amp;amp; security insights&lt;/p&gt;

&lt;p&gt;🚀 Ensure compliant &lt;a href="https://gitprotect.io/sign-up.html?utm_source=d&amp;amp;utm_medium=m" rel="noopener noreferrer"&gt;DevOps backup and recovery with a 14-day free trial&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;📅 Let’s discuss your needs and &lt;a href="https://calendly.com/d/3s9-n9z-pgc/gitprotect-live-demo?month=2024-04&amp;amp;utm_source=d&amp;amp;utm_medium=m" rel="noopener noreferrer"&gt;see a live product tour&lt;/a&gt;&lt;/p&gt;

</description>
      <category>cloud</category>
      <category>devops</category>
      <category>backup</category>
      <category>security</category>
    </item>
    <item>
      <title>30 Cybersecurity Statistics You Must Know in 2026</title>
      <dc:creator>GitProtect Team</dc:creator>
      <pubDate>Thu, 12 Mar 2026 13:57:48 +0000</pubDate>
      <link>https://forem.com/gitprotect/30-cybersecurity-statistics-you-must-know-in-2026-312b</link>
      <guid>https://forem.com/gitprotect/30-cybersecurity-statistics-you-must-know-in-2026-312b</guid>
      <description>&lt;p&gt;DevOps teams did not sign up to be security teams. But if you run repos, CI/CD, cloud roles, SaaS apps, integrations, or backups, you operate the systems attackers lean on.&lt;/p&gt;

&lt;p&gt;Most breaches are not flashy. They start with routine failures: a token left in a repo, MFA not enforced, an overprivileged API key that never expires, or backups that are deletable by the same admin identity.&lt;/p&gt;

&lt;p&gt;Attackers do not need to “break in” if they can log in. They move with normal tooling and blend into admin traffic. They pull data out through services the business already trusts, then decide whether to encrypt. Ransomware is often the finale, not the opening act.&lt;/p&gt;

&lt;p&gt;This article compiles &lt;strong&gt;30 cybersecurity statistics&lt;/strong&gt; from recent reports and maps them to a typical attack lifecycle. Most numbers were published in 2025, often about 2024 incidents, plus a few early 2026 outlook surveys. We hope this serves as a kind of checklist for your DevOps stack.&lt;/p&gt;

&lt;h2&gt;
  
  
  Exposure and misconfiguration: where leaks start
&lt;/h2&gt;

&lt;h2&gt;
  
  
  1 65% of Forbes 2025 AI 50 companies had confirmed secret leaks on GitHub
&lt;/h2&gt;

&lt;p&gt;Leaked material included &lt;strong&gt;API keys, tokens&lt;/strong&gt;, and &lt;strong&gt;credentials&lt;/strong&gt;. Many were in places teams often overlook: deleted forks, gists, and secondary repositories. (source: Wiz’s State of AI in the Cloud report, Nov 2025)&lt;/p&gt;

&lt;p&gt;Even strong teams leak secrets in the corners. If your posture is “we scan repos, so we’re fine”, this should unsettle you.&lt;/p&gt;

&lt;h2&gt;
  
  
  2 Median time to remediate leaked GitHub secrets was 94 days
&lt;/h2&gt;

&lt;p&gt;Verizon’s &lt;em&gt;2025 Data Breach Investigations Report&lt;/em&gt; reports a 94-day median time to remediate secrets leaked in GitHub repositories. Its dataset covers 22,052 incidents and 12,195 confirmed breaches across many org sizes and industries.&lt;/p&gt;

&lt;p&gt;Ninety-four days is not just a delay. It is a window.&lt;/p&gt;

&lt;h2&gt;
  
  
  3 Most scanner-detected repo secrets in 2025 were tied to web app infrastructure (39%) and CI/CD (32%)
&lt;/h2&gt;

&lt;p&gt;Next were &lt;strong&gt;cloud infrastructure&lt;/strong&gt; (15%) and &lt;strong&gt;databases&lt;/strong&gt; (5%). For disclosed web app infrastructure secrets, 66% were JWTs used for authentication and sessions. For cloud, 43% were Google Cloud API keys. (source: Verizon’s 2025 DBIR)&lt;/p&gt;

&lt;p&gt;This is not a niche AppSec problem. It is a pipeline and runtime problem.&lt;/p&gt;
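&lt;p&gt;The two leading categories – JWTs and Google Cloud API keys – both have recognizable shapes, which is why even a minimal scanner catches a surprising amount. The patterns below are simplified illustrations of the kind of rules real secret scanners ship with, not a production rule set:&lt;/p&gt;

```python
import re

# Simplified, illustrative patterns; production scanners use far larger rule sets.
SECRET_PATTERNS = {
    "jwt": re.compile(r"eyJ[A-Za-z0-9_-]{10,}\.[A-Za-z0-9_-]{10,}\.[A-Za-z0-9_-]*"),
    "google_api_key": re.compile(r"AIza[0-9A-Za-z_-]{35}"),
}

def scan_for_secrets(text):
    """Return the names of secret types whose pattern appears in the text."""
    return sorted(name for name, rx in SECRET_PATTERNS.items() if rx.search(text))
```

&lt;p&gt;Running a check like this in CI on every diff shrinks the remediation window from months to minutes, at least for the easy-to-match secret classes.&lt;/p&gt;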

&lt;h2&gt;
  
  
  4 61% of SaaS accounts have MFA disabled or inactive
&lt;/h2&gt;

&lt;p&gt;This figure comes from SaaS Alerts’ &lt;em&gt;SaaS Application Security Insights 2025&lt;/em&gt;, based on SaaS security telemetry from &lt;strong&gt;43,000+ SMBs&lt;/strong&gt; and nearly six million user accounts.&lt;/p&gt;

&lt;p&gt;It seems that the mere availability of MFA does not solve the problem, does it?&lt;/p&gt;

&lt;h2&gt;
  
  
  5 75% of organizations had a SaaS security incident in the last year
&lt;/h2&gt;

&lt;p&gt;AppOmni reports this rate in &lt;em&gt;State of SaaS Security 2025&lt;/em&gt;, based on a survey of 803 security leaders and practitioners globally. The report also notes that many incidents were linked to &lt;strong&gt;unauthorized applications&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Unauthorized apps are an attack surface. If you do not control what is connected, you do not control your risk.&lt;/p&gt;

&lt;h2&gt;
  
  
  6 63% of organizations see external data oversharing
&lt;/h2&gt;

&lt;p&gt;Cloud Security Alliance reported this in 2025 research based on a survey of 420 IT and security professionals. The same research also found that 56% of orgs say employees send confidential data to &lt;strong&gt;unauthorized SaaS apps&lt;/strong&gt;, and it flags &lt;strong&gt;IAM gaps&lt;/strong&gt; like weak privilege control (58%) and poor user lifecycle automation (54%).&lt;/p&gt;

&lt;p&gt;This is how data walks out without a “breach” headline.&lt;/p&gt;

&lt;h2&gt;
  
  
  7 Organizations without full SaaS visibility are 5x more likely to face an incident or data loss through 2027
&lt;/h2&gt;

&lt;p&gt;Gartner states this as a planning assumption in its &lt;em&gt;2025 Magic Quadrant for SaaS Management Platforms&lt;/em&gt;. This assumption also applies to organizations that do not centrally manage SaaS lifecycles.&lt;/p&gt;

&lt;h2&gt;
  
  
  8 By 2027, 75% of employees will buy, change, or build technology outside IT control
&lt;/h2&gt;

&lt;p&gt;Gartner projected this at its Security and Risk Management Summit 2023, &lt;strong&gt;up from 41% in 2022&lt;/strong&gt;. SaaS sprawl is coming at us full speed.&lt;/p&gt;

&lt;h2&gt;
  
  
  Initial access
&lt;/h2&gt;

&lt;h2&gt;
  
  
  9 Over 60% of cloud security events relate to initial access, persistence, or credential theft
&lt;/h2&gt;

&lt;p&gt;This figure comes from Elastic Security Labs’ &lt;em&gt;Global Threat Report 2025&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;In the cloud, identity is the main control point. Emphasize hardening authentication and watching for abnormal privileged access.&lt;/p&gt;

&lt;h2&gt;
  
  
  10 In 2025, exploited vulnerabilities were the #1 driver of ransomware success
&lt;/h2&gt;

&lt;p&gt;Statista reported this in Nov 2025 based on a survey of cybersecurity professionals. 32% said ransomware succeeded due to exploited vulnerabilities. The next most cited causes were &lt;strong&gt;compromised credentials (23%)&lt;/strong&gt;, then &lt;strong&gt;malicious email (19%)&lt;/strong&gt;, phishing (18%), and brute force (6%).&lt;/p&gt;

&lt;p&gt;Patching and identity are still doing most of the work.&lt;/p&gt;

&lt;h2&gt;
  
  
  Lateral movement and data exfiltration, the quiet phase
&lt;/h2&gt;

&lt;h2&gt;
  
  
  11 In 2024, RDP was the top tool attackers used to move inside networks
&lt;/h2&gt;

&lt;p&gt;ReliaQuest’s &lt;em&gt;Annual Cyber-Threat Report 2025&lt;/em&gt; breaks down lateral movement techniques used in 2024 incidents:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Remote Desktop Protocol (26%)&lt;/strong&gt;: Common in Windows environments. With stolen credentials, it can blend in as normal activity.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Internal spear phishing (16%)&lt;/strong&gt;: Uses trusted internal messages to expand access to more accounts and systems.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;SSH/SMB/Windows admin shares (14%)&lt;/strong&gt;: Leans on standard remote admin paths after attackers get valid credentials.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This is why “we have EDR” is not a full answer.&lt;/p&gt;

&lt;h2&gt;
  
  
  12 80% of breaches involved exfiltration
&lt;/h2&gt;

&lt;p&gt;ReliaQuest’s report also shows that data theft is a core part of most incidents. In exfiltration cases, data was moved in two main ways:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;60% to mainstream cloud storage (Google Drive, Mega, Amazon S3)&lt;/li&gt;
&lt;li&gt;40% over C2 channels to attacker-run infrastructure.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Because blocking common cloud storage is often not realistic for day-to-day work, you should monitor for unusual data movement and identity activity instead.&lt;/p&gt;
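&lt;p&gt;Monitoring for unusual data movement can start very simply. The sketch below flags a day whose outbound volume is far above the recent baseline; real detection pipelines are much richer, but the shape of the check is the same:&lt;/p&gt;

```python
from statistics import mean, stdev

def egress_anomaly(daily_bytes, threshold_sigmas=3.0):
    """Flag the latest day if its egress volume exceeds the historical
    mean by more than threshold_sigmas standard deviations."""
    history, latest = daily_bytes[:-1], daily_bytes[-1]
    mu, sigma = mean(history), stdev(history)
    return latest > mu + threshold_sigmas * sigma
```

&lt;p&gt;The same baseline-versus-outlier idea applies to login locations, token usage, and API call volume – anywhere an attacker has to move more than usual.&lt;/p&gt;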

&lt;h2&gt;
  
  
  Ransomware and operational disruption: when the business stops
&lt;/h2&gt;

&lt;h2&gt;
  
  
  13 93% of paying victims later learned their data was stolen
&lt;/h2&gt;

&lt;p&gt;Paying does not end the problem. Victims who paid often still lost data, were &lt;strong&gt;attacked again (83%)&lt;/strong&gt;, or &lt;strong&gt;could not recover all data (45%)&lt;/strong&gt;. &lt;/p&gt;

&lt;p&gt;These figures come from CrowdStrike’s &lt;em&gt;State of Ransomware 2025 survey&lt;/em&gt; of 1,100 IT and security decision makers across Australia, France, Germany, India, Singapore, the UK, and the US.&lt;/p&gt;

&lt;h2&gt;
  
  
  14 57% of organizations rely on a single layer of security to protect their cloud backups from ransomware
&lt;/h2&gt;

&lt;p&gt;That’s one conclusion from an EON survey of 154 IT and cloud leaders at Google Cloud Next 2025. The survey also found that &lt;strong&gt;13%&lt;/strong&gt; of organizations have &lt;strong&gt;no ransomware protection&lt;/strong&gt; for cloud backups, while &lt;strong&gt;29% use multiple layers&lt;/strong&gt; like immutability, anomaly detection, and MFA.&lt;/p&gt;

&lt;p&gt;At the same time, EON attributes 23% of cloud data loss to ransomware or breaches. This is why separate admin boundaries and &lt;a href="https://gitprotect.io/blog/immutable-storage/" rel="noopener noreferrer"&gt;immutability&lt;/a&gt; should be treated as baseline controls.&lt;/p&gt;

&lt;h2&gt;
  
  
  15 Ransomware succeeds due to gaps in expertise (40.2%) and visibility (40.1%)
&lt;/h2&gt;

&lt;p&gt;Sophos’ &lt;em&gt;State of Ransomware 2025&lt;/em&gt; surveyed 3,400 IT and security leaders across 17 countries whose orgs were hit by ransomware. The most cited factors contributing to the success of ransomware were:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Lack of expertise (40.2%)&lt;/li&gt;
&lt;li&gt;Unknown security gaps (40.1%)&lt;/li&gt;
&lt;li&gt;Lack of people/capacity (39.4%)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This points to a basic operations problem. The fix is clearer process, better coverage, and enough staffing – not another tool.&lt;/p&gt;

&lt;h2&gt;
  
  
  16 54% of board members think they’re prepared. Security teams disagree
&lt;/h2&gt;

&lt;p&gt;In CrowdStrike’s survey, 54% of board and C-level leaders said they are “very prepared” for ransomware, versus &lt;strong&gt;46% of security teams&lt;/strong&gt;. It also found that 76% of organizations report this disconnect is growing.&lt;/p&gt;

&lt;p&gt;That gap is a risk by itself. It leads to underfunded controls and unrealistic recovery plans.&lt;/p&gt;

&lt;h2&gt;
  
  
  The cost of failure: money, legal exposure, and downtime
&lt;/h2&gt;

&lt;h2&gt;
  
  
  17 Average breach cost in 2025: $4.44M globally and $10.22M in the US
&lt;/h2&gt;

&lt;p&gt;These figures come from IBM’s &lt;em&gt;Cost of a Data Breach Report 2025&lt;/em&gt;. IBM notes &lt;strong&gt;the global average fell&lt;/strong&gt; from $4.88M in 2024, while the &lt;strong&gt;US average rose by 9%&lt;/strong&gt; in 2025, due to higher regulatory penalties and higher detection and escalation costs.&lt;/p&gt;

&lt;p&gt;The study covered 600 organizations with breaches between March 2024 and February 2025 across 17 industries in 16 countries or regions. &lt;/p&gt;

&lt;p&gt;If you operate in the US, plan for higher escalation and penalty costs. That changes the ROI math for detection, response, and auditability.&lt;/p&gt;

&lt;h2&gt;
  
  
  18 41% of 5B+ revenue companies report higher exposure to damaging breaches
&lt;/h2&gt;

&lt;p&gt;PwC highlights this fact in its &lt;em&gt;2026 Global Digital Trust Insights&lt;/em&gt; survey of business and tech leaders (May to July 2025). The same survey shows higher exposure among &lt;strong&gt;US-based organizations&lt;/strong&gt; (37%) and &lt;strong&gt;TMT&lt;/strong&gt; (33%).&lt;/p&gt;

&lt;h2&gt;
  
  
  19 46% of organizations experienced an outage or service disruption due to attacks
&lt;/h2&gt;

&lt;p&gt;That’s the reality shown in Red Canary’s &lt;em&gt;Security Operations Trends Report&lt;/em&gt; 2025, based on a survey of 550 security leaders across the US, UK, Australia, New Zealand, and the Nordics. In the same research, leaders estimated the average incident cost over the past year at $3.7M.&lt;/p&gt;

&lt;p&gt;Downtime is not rare, so your recovery plan needs to work in practice.&lt;/p&gt;

&lt;h2&gt;
  
  
  20 61% of security leaders report breaches caused by failed or misconfigured controls in the last 12 months
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;65% of those breaches cost more than $1M&lt;/strong&gt;. (Source: &lt;em&gt;Security Leaders Peer Report 2025&lt;/em&gt; by Panaseer, based on a survey of 400 security leaders at larger orgs in the US and UK).&lt;/p&gt;

&lt;p&gt;It seems that a lot of “breach prevention” is basic control hygiene.&lt;/p&gt;

&lt;h2&gt;
  
  
  21 955 hours of disruptions in total across key DevOps SaaS platforms in 2024
&lt;/h2&gt;

&lt;p&gt;GitProtect’s report, &lt;em&gt;&lt;a href="https://gitprotect.io/devops-threats-unwrapped.html" rel="noopener noreferrer"&gt;The CISO’s Guide to DevOps Threats 2025&lt;/a&gt;&lt;/em&gt;, puts the combined time of disruptions for GitHub, Bitbucket, Jira, GitLab and Azure DevOps in 2024 at 955 hours. That’s enough time to sail across the Atlantic on a small yacht, make a short stop in the Caribbean, reach the East Coast, and head back to Europe.&lt;/p&gt;

&lt;h2&gt;
  
  
  AI is accelerating the mess: more tools, more leaks, weaker controls
&lt;/h2&gt;

&lt;h2&gt;
  
  
  22 46% of organizations struggle to monitor non-human identities
&lt;/h2&gt;

&lt;p&gt;This finding comes from the Cloud Security Alliance’s &lt;em&gt;State of SaaS Security Report 2025&lt;/em&gt; mentioned earlier. The report also flags growing concern about &lt;strong&gt;overprivileged API access&lt;/strong&gt; as GenAI tools and SaaS-to-SaaS integrations spread (56%).&lt;/p&gt;

&lt;p&gt;Non-human identities and API sprawl are where governance collapses. Treat machine access like first-class identity.&lt;/p&gt;

&lt;h2&gt;
  
  
  23 AI-driven social engineering is the top 2026 threat
&lt;/h2&gt;

&lt;p&gt;ISACA’s &lt;em&gt;2026 Tech Trends and Priorities Global Pulse Poll&lt;/em&gt; surveyed 2,963 professionals in digital trust fields (cybersecurity, audit, governance, risk, compliance) and placed this risk at the top. In the same ranking, &lt;strong&gt;ransomware and extortion&lt;/strong&gt; came next (54%), followed by &lt;strong&gt;insider threats&lt;/strong&gt; (35%).&lt;/p&gt;

&lt;h2&gt;
  
  
  24 66% of organizations expect AI to have the biggest impact on cybersecurity, but only 37% assess tools before deployment
&lt;/h2&gt;

&lt;p&gt;World Economic Forum highlights this gap in its &lt;em&gt;Global Cybersecurity Outlook 2025&lt;/em&gt;, based on 409 survey responses from 57 countries.&lt;/p&gt;

&lt;p&gt;This is shadow IT moving faster than most control processes. If you do not offer a quick, clear path, teams will deploy tools first and deal with security later.&lt;/p&gt;

&lt;h2&gt;
  
  
  25 78% of companies lag on basic data and AI security practices
&lt;/h2&gt;

&lt;p&gt;Accenture’s &lt;em&gt;State of Cybersecurity Resilience 2025&lt;/em&gt; (survey of 2,286 respondents from $1B+ revenue companies across multiple regions) paints a broader picture. It also found that:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;22% have clear policies and training for generative AI use&lt;/li&gt;
&lt;li&gt;25% fully apply encryption and access controls for sensitive data across transit, storage and processing&lt;/li&gt;
&lt;li&gt;83% have not built a secure cloud foundation with integrated monitoring, detection, and response.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;AI mostly makes existing gaps hurt more: weak data controls, weak cloud foundations, poor inventory.&lt;/p&gt;

&lt;h2&gt;
  
  
  26 Shadow AI adds $670K to breach cost in high-usage orgs
&lt;/h2&gt;

&lt;p&gt;IBM’s 2025 breach-cost research also looked at “shadow AI” and found:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;20% of organizations said they had a breach tied to shadow AI incidents.&lt;/li&gt;
&lt;li&gt;These incidents more often involved PII (65%) and IP (40%).&lt;/li&gt;
&lt;li&gt;Data was frequently spread across multiple environments.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Shadow AI is solved not only by policy but also by controlling data movement and access across environments.&lt;/p&gt;

&lt;h2&gt;
  
  
  27 37% of SMBs plan to address AI risk by expanding cyber insurance coverage
&lt;/h2&gt;

&lt;p&gt;Hiscox’s &lt;em&gt;Cyber Readiness Report 2025&lt;/em&gt; also lays out what other measures SMBs plan to take over the next three years to reduce AI-related risks:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Employee training on AI threats: 36%&lt;/li&gt;
&lt;li&gt;Regular AI usage audits: 36%&lt;/li&gt;
&lt;li&gt;Hiring AI-skilled staff: 33%&lt;/li&gt;
&lt;li&gt;Using AI security consultants: 33%&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The research covered 5,750 companies (50 to 249 employees) in the US and Europe (Jul-Aug 2025).&lt;/p&gt;

&lt;h2&gt;
  
  
  Supply chain and third-party blast radius
&lt;/h2&gt;

&lt;h2&gt;
  
  
  28 54% of large organizations say third-party risk management is a major challenge
&lt;/h2&gt;

&lt;p&gt;World Economic Forum’s &lt;em&gt;Global Cybersecurity Outlook 2025&lt;/em&gt; frames supply chain complexity and limited supplier visibility as a top cyber risk. Key concerns include &lt;strong&gt;third-party software vulnerabilities&lt;/strong&gt; and &lt;strong&gt;attacks&lt;/strong&gt; that spread &lt;strong&gt;through connected partners and systems&lt;/strong&gt;. &lt;/p&gt;

&lt;p&gt;If you use vendors, integrations, and open source, you already have a supply chain. The question is whether you can see and constrain it.&lt;/p&gt;

&lt;h2&gt;
  
  
  29 24% of orgs with 5,000+ external data-sharing partners had 10+ breaches per year
&lt;/h2&gt;

&lt;p&gt;Kiteworks’ &lt;em&gt;Data Security and Compliance Risk 2025&lt;/em&gt; links breach frequency to the number of outside parties an organization shares private data with. &lt;strong&gt;Orgs with fewer than 500 partners did better&lt;/strong&gt; – 34% reported zero breaches.&lt;/p&gt;

&lt;p&gt;More partners means more ways for data to leak. If you add integrations, scale controls and monitoring with them.&lt;/p&gt;

&lt;h2&gt;
  
  
  30 59% of IT and security professionals cite code vulnerabilities as the top AppSec concern
&lt;/h2&gt;

&lt;p&gt;This figure comes from Thales’ 2025 survey of nearly 3,200 IT and security professionals across 20 countries and 15 industries. Other main AppSec concerns were:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Software supply chain issues (48%)&lt;/li&gt;
&lt;li&gt;API attacks (38%).&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Top DevSecOps challenges included:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Secrets management (54%)&lt;/li&gt;
&lt;li&gt;Sprint cadence and execution (48%)&lt;/li&gt;
&lt;li&gt;Open-source SCA (44%)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;✍️ Subscribe to &lt;a href="https://gitprotect.io/gitprotect-newsletter.html?utm_source=d&amp;amp;utm_medium=m" rel="noopener noreferrer"&gt;GitProtect DevSecOps X-Ray Newsletter&lt;/a&gt; – your guide to the latest DevOps &amp;amp; security insights&lt;/p&gt;

&lt;p&gt;🚀 Ensure compliant &lt;a href="https://gitprotect.io/sign-up.html?utm_source=d&amp;amp;utm_medium=m" rel="noopener noreferrer"&gt;DevOps backup and recovery with a 14-day free trial&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;📅 Let’s discuss your needs and &lt;a href="https://calendly.com/d/3s9-n9z-pgc/gitprotect-live-demo?month=2024-04&amp;amp;utm_source=d&amp;amp;utm_medium=m" rel="noopener noreferrer"&gt;see a live product tour&lt;/a&gt;&lt;/p&gt;

</description>
      <category>cybersecurity</category>
      <category>devops</category>
      <category>devsecops</category>
    </item>
    <item>
      <title>Your GitLab Data Security: 14 Critical Areas To Address</title>
      <dc:creator>GitProtect Team</dc:creator>
      <pubDate>Fri, 19 Dec 2025 12:09:19 +0000</pubDate>
      <link>https://forem.com/gitprotect/your-gitlab-data-security-14-critical-areas-to-address-4j4j</link>
      <guid>https://forem.com/gitprotect/your-gitlab-data-security-14-critical-areas-to-address-4j4j</guid>
      <description>&lt;p&gt;Modern organizations often use GitLab as a core version control system (VCS), making it one of the most essential systems for DevOps. Given the critical nature of the data stored here, thorough evaluation of risks and implementing data protection best practices are a must. According to the Shared Responsibility Model, GitLab provides security for the underlying infrastructure, while the user’s duty is to keep data protected.&lt;/p&gt;

&lt;p&gt;👉 More about &lt;a href="https://gitprotect.io/blog/gitlab-shared-responsibility-model-a-guide-to-collaborative-security/" rel="noopener noreferrer"&gt;GitLab’s Shared Responsibility Model&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;In this article, we go into detail about the possible ways to lose GitLab data, and how this can be prevented.&lt;/p&gt;

&lt;h2&gt;
  
  
  1 Accidental deletion of projects
&lt;/h2&gt;

&lt;p&gt;Let’s begin by stating that human error is the most common cause of data loss in 2025. A single misclick on “Delete project” or “Delete group” can permanently erase GitLab repositories, merge requests, wikis, and all related metadata. On GitLab.com, deleted projects enter a pending deletion state and are &lt;a href="https://docs.gitlab.com/user/project/working_with_projects/#delete-a-project" rel="noopener noreferrer"&gt;automatically erased&lt;/a&gt; after 30 days.&lt;/p&gt;

&lt;p&gt;💡 &lt;em&gt;&lt;strong&gt;To delete projects, you need the Owner role (or admin permissions). Poor access control management opens pathways for accidental deletion.&lt;/strong&gt;&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Avoid accidental deletions in GitLab:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Archive instead of deleting &lt;/li&gt;
&lt;li&gt;Restrict permissions to delete&lt;/li&gt;
&lt;li&gt;Protect important branches, and configure Protected Branches so that essential code cannot be removed or overwritten &lt;/li&gt;
&lt;li&gt;Automate backups with GitLab’s native capabilities or opt for third-party solutions like GitProtect.io &lt;/li&gt;
&lt;li&gt;Implement flexible disaster recovery with features like point-in-time and granular restore&lt;/li&gt;
&lt;li&gt;Track deletions through GitLab Audit Events (GitLab Premium and Ultimate). You will need to review logs or integrate with external monitoring tools&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  2 Vulnerable credentials
&lt;/h2&gt;

&lt;p&gt;Compromised credentials remain a key factor behind many data breaches. Access tokens or SSH keys, when exposed, grant an attacker the same level of access as the account owner would. They can &lt;a href="https://gitprotect.io/blog/github-repojacking-are-you-sure-your-github-is-safe/" rel="noopener noreferrer"&gt;hijack repos&lt;/a&gt;, modify them, or even delete them without any restriction.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Authentication and credentials security:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Use short-lived tokens and rotate regularly&lt;/li&gt;
&lt;li&gt;GitLab built-in Secret Detection and Secret Masking &lt;/li&gt;
&lt;li&gt;Require multifactor authentication (MFA)&lt;/li&gt;
&lt;li&gt;Limit scopes and permissions of PATs, group tokens, and project tokens&lt;/li&gt;
&lt;li&gt;Keep GitLab instances up to date for latest security patches&lt;/li&gt;
&lt;li&gt;Audit access logs and usage – GitLab has Audit Events (for Premium and Ultimate)&lt;/li&gt;
&lt;/ul&gt;
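&lt;p&gt;Leaked credentials often live in commit history, not just in the current tree. Before (or alongside) GitLab’s Secret Detection, a quick local sweep of a clone can be sketched as follows – the token pattern is illustrative (GitLab PATs start with &lt;em&gt;glpat-&lt;/em&gt;), and the demo repository is created on the fly:&lt;/p&gt;

```shell
#!/bin/sh
# Sketch: scan the full commit history of a local clone for token-like strings.
# The regex is illustrative, not exhaustive; GitLab PATs start with "glpat-".
set -eu

# Create a throwaway repo with a deliberately "leaked" token for the demo.
repo=$(mktemp -d)
cd "$repo"
git init -q .
git -c user.email=dev@example.com -c user.name=dev commit -q --allow-empty -m init
echo 'TOKEN=glpat-xxxxxxxxxxxxxxxxxxxx' > .env
git add .env
git -c user.email=dev@example.com -c user.name=dev commit -q -m "add config"

# Search every commit's diff for the token pattern.
git log -p --all | grep -E 'glpat-[A-Za-z0-9_-]{20}' || echo "no secrets found"
```

&lt;p&gt;Anything this finds should be rotated immediately – removing the commit is not enough, since the value may already live in forks, caches, and logs.&lt;/p&gt;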

&lt;h2&gt;
  
  
  3 Data overwritten during force push
&lt;/h2&gt;

&lt;p&gt;In GitLab, users can rewrite commit history using the force flag in the git push command. Use it with caution: a force push can permanently overwrite commits, delete your teammates’ work (by rewriting branch pointers), or reset branches to older states. The risk is especially high on shared or production branches. Data loss often happens when developers try to ‘clean up’ commit history or resolve conflicts, and unintentionally remove work in the process.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Don’t lose data due to force push:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Configure Protected Branches to disallow force pushes, disable branch deletion, and control who can push &amp;amp; merge&lt;/li&gt;
&lt;li&gt;Require all changes to go through a merge request with review&lt;/li&gt;
&lt;li&gt;Push rules guarantee extra protection, such as enforcing signed commits and blocking tag deletion&lt;/li&gt;
&lt;li&gt;Use git reflog to recover commits&lt;/li&gt;
&lt;li&gt;Prevent accidental overwriting of teammates’ commits with --force-with-lease. It refuses the push if the remote has commits you haven’t fetched, and is considered the ‘safe’ alternative to force push&lt;/li&gt;
&lt;li&gt;Implement off-site backups with point-in-time restore that supports recovery after a force push&lt;/li&gt;
&lt;/ul&gt;
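&lt;p&gt;The recovery path after a bad force push is worth rehearsing locally before you need it. A minimal sketch (repository and author details are illustrative) that loses a commit to a history rewrite and then restores it from the reflog:&lt;/p&gt;

```shell
#!/bin/sh
# Sketch: simulate losing a commit to a history rewrite, then recover via reflog.
# On the remote side, prefer `git push --force-with-lease`, which refuses to
# overwrite commits you haven't fetched; this demo stays fully local.
set -eu
repo=$(mktemp -d); cd "$repo"
git init -q .
gitc() { git -c user.email=dev@example.com -c user.name=dev "$@"; }

echo one >  file.txt; git add file.txt; gitc commit -q -m "first"
echo two >> file.txt; git add file.txt; gitc commit -q -m "second"
lost=$(git rev-parse HEAD)      # the commit a bad force push would discard

git reset -q --hard HEAD~1      # simulate the rewrite: "second" is gone
git log -1 --format=%s          # prints: first

# The reflog still remembers the discarded commit, so we can restore it.
git reset -q --hard "$lost"
git log -1 --format=%s          # prints: second
```

&lt;p&gt;Note that the reflog is local and expires; it complements, but does not replace, an off-site backup.&lt;/p&gt;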

&lt;h2&gt;
  
  
  4 Beware of insider threats
&lt;/h2&gt;

&lt;p&gt;Insider threats range from accidental deletions to malicious sabotage, such as credential sharing. GitLab centralizes repositories, issues, CI/CD pipelines, and secrets; therefore, a single compromised account can damage the development lifecycle, reputation, and business continuity.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Secure your organization from within:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Leverage the principle of least privilege &lt;/li&gt;
&lt;li&gt;Implement &lt;a href="https://gitprotect.io/features/authentication/role-based-access-controls.html#article-content" rel="noopener noreferrer"&gt;role-based access control&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;With GitLab’s group and subgroup hierarchy, ensure users inherit only the permissions required for their role and prevent accidental overexposure&lt;/li&gt;
&lt;li&gt;Review permissions on a regular basis&lt;/li&gt;
&lt;li&gt;Enforce MFA for all users, along with SSO managed by an external identity provider (IdP)&lt;/li&gt;
&lt;li&gt;Clearly separate duties between administration and development &lt;/li&gt;
&lt;li&gt;Don’t overlook off-boarding processes – revoke unnecessary permissions&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  5 Data corruption beyond source code
&lt;/h2&gt;

&lt;p&gt;Apart from repos, GitLab includes CI/CD data, artifacts, issues, wikis, attachments, and metadata. Now, these are all stored across multiple backend services such as Gitaly (for repositories), PostgreSQL (for metadata and issues), Redis (for caching and sessions), and object storage (for artifacts and uploads). If any of these break or end up misconfigured, projects can become partially or fully unrecoverable. In self-managed instances, misconfigured storage paths, failed Gitaly nodes, or unoptimized PostgreSQL replication can cause integrity issues or data desynchronization.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;To prevent data corruption:&lt;/strong&gt; &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Perform regular full and incremental backups&lt;/li&gt;
&lt;li&gt;Verify data integrity frequently&lt;/li&gt;
&lt;li&gt;Apply configuration management and monitoring to avoid corruptions before deployment&lt;/li&gt;
&lt;/ul&gt;
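&lt;p&gt;The full-plus-incremental cycle from the first bullet can be sketched with GNU tar’s snapshot files (paths and filenames here are illustrative, and the snapshot mechanism is GNU-tar-specific):&lt;/p&gt;

```shell
#!/bin/sh
# Sketch: full + incremental backups with GNU tar snapshot files.
set -eu
work=$(mktemp -d); cd "$work"
mkdir -p data
echo "v1" > data/a.txt

# Full backup: the snapshot file records what has been archived so far.
tar -czf full.tgz -g snapshot.snar data

# Later run: only new or changed files land in the incremental archive.
echo "v2" > data/b.txt
tar -czf incr.tgz -g snapshot.snar data

tar -tzf incr.tgz   # the unchanged a.txt is not re-archived
```

&lt;p&gt;A restore replays the full archive first, then each incremental in order – which is exactly why restore testing belongs in the routine, not just backup creation.&lt;/p&gt;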

&lt;h2&gt;
  
  
  6 The growth of ransomware
&lt;/h2&gt;

&lt;p&gt;Ransomware attacks have grown rapidly in recent years. While self-managed GitLab instances face significantly higher ransomware and malware exposure than GitLab.com (SaaS), both require proper security measures. Self-managed instances, however, put the duty on the user to manage the OS, network, storage, and access to data; if any layer gets compromised, attackers can encrypt and/or corrupt:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Gitaly repositories&lt;/li&gt;
&lt;li&gt;PostgreSQL metadata (issues, MRs, permissions, pipeline data)&lt;/li&gt;
&lt;li&gt;CI/CD artifacts and logs&lt;/li&gt;
&lt;li&gt;uploads, LFS objects, registry images&lt;/li&gt;
&lt;li&gt;backup directories (if improperly stored on the same server)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Ransomware can lead to service outages, encrypted repositories, or corruption of every project under a group. This would further result in damaged reputation, costly compliance violations, or complete stop of primary operations. With GitLab being a DevSecOps platform, &lt;a href="https://gitprotect.io/blog/ransomware-attacks-on-github-bitbucket-and-gitlab-what-you-should-know/" rel="noopener noreferrer"&gt;ransomware&lt;/a&gt; wouldn’t just affect code, but also pipelines, secrets, deploy tokens, and business continuity too.&lt;/p&gt;

&lt;h2&gt;
  
  
  Ransomware protection
&lt;/h2&gt;

&lt;p&gt;As the party responsible for the protection of accounts, access, authorization, and data, the user needs strict permission control, a secure network, and &lt;a href="https://gitprotect.io/blog/why-immutable-backups-are-essential-for-data-security-in-devops/" rel="noopener noreferrer"&gt;immutable, off-site backups&lt;/a&gt;. To limit the damage ransomware can do to your GitLab environment, isolate GitLab components (Gitaly, PostgreSQL, Redis, object storage) on restricted networks and avoid exposing them publicly. On top of that, enforce endpoint protection, restrict deploy tokens, apply firewall rules, and prevent malicious code execution. Remember to back up your data with trusted providers, implement flexible recovery, and stay compliant with industry standards.&lt;/p&gt;

&lt;h2&gt;
  
  
  7 No backup or disaster recovery
&lt;/h2&gt;

&lt;p&gt;Backup and disaster recovery (DR) are key aspects of any effective data protection strategy. Reliable solutions guarantee data security in the face of accidental deletions, malicious insiders, ransomware attacks, service outages, and even simple migrations. Under the aforementioned Shared Responsibility Model, the user is responsible for both backup and recovery.&lt;/p&gt;

&lt;p&gt;👉 &lt;a href="https://gitprotect.io/blog/i-use-github-gitlab-bitbucket-so-i-dont-need-backup/" rel="noopener noreferrer"&gt;Why third-party backup is necessary for GitLab (and other Git-based platforms)&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;To ensure data integrity and recovery, and to prevent attackers from erasing (or altering) any of your GitLab data, &lt;strong&gt;your backup and DR solution should meet a number of requirements&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Immutable, off-site, WORM-compliant storage &lt;/li&gt;
&lt;li&gt;Geo-redundancy, replication&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://gitprotect.io/blog/data-backups-in-terms-of-data-residency/" rel="noopener noreferrer"&gt;Data residency&lt;/a&gt; of choice &lt;/li&gt;
&lt;li&gt;Automated backup&lt;/li&gt;
&lt;li&gt;Scheduling with different, customizable plans&lt;/li&gt;
&lt;li&gt;Full coverage with all critical metadata&lt;/li&gt;
&lt;li&gt;Encryption at rest and in transit &lt;/li&gt;
&lt;li&gt;3-2-1 backup rule &lt;/li&gt;
&lt;li&gt;Unlimited retention&lt;/li&gt;
&lt;li&gt;Compliance with industry regulations like SOC 2 Type II or ISO 27001&lt;/li&gt;
&lt;li&gt;Flexible recovery with point-in-time restore and full data recovery&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  8 Single Group Owner bottleneck
&lt;/h2&gt;

&lt;p&gt;Relying on a single Project Maintainer or Group Owner leaves you with a single point of failure in GitLab. If that one person is unavailable, teams may be unable to merge code, approve changes, manage runners, or update project settings.&lt;/p&gt;

&lt;p&gt;💡 &lt;em&gt;&lt;strong&gt;The Group Owner gets full administrative rights over a group and all its projects. Then, the Project Maintainer is the highest project-level role (can push to protected branches and manage repo settings).&lt;/strong&gt;&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Both of these roles carry significant responsibility. The entire SDLC may stop the moment the responsible individual becomes unavailable, so &lt;strong&gt;be sure to have&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;At least two Group Owners for important groups or subgroups&lt;/li&gt;
&lt;li&gt;Multiple Project Maintainers for critical repositories&lt;/li&gt;
&lt;li&gt;Documented processes, so no knowledge is person-dependent&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  9 GitLab service disruptions
&lt;/h2&gt;

&lt;p&gt;Both GitLab.com (SaaS) and self-managed GitLab instances are prone to service disruptions. Such cases can leave users with no access to their critical data. Self-managed instances introduce greater complexity and risk, where downtime can lead to data loss. If your instance becomes unavailable due to a failed upgrade or misconfigured storage, metadata may end up incomplete or unrecoverable.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Avoid downtime, data loss, and damaged reputation&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Maintain local clones and/or off-site mirrors of critical repositories&lt;/li&gt;
&lt;li&gt;Store backups off-site &lt;/li&gt;
&lt;li&gt;Test all upgrades and migrations in a staging environment&lt;/li&gt;
&lt;li&gt;Monitor GitLab components and implement alerts to detect issues early&lt;/li&gt;
&lt;li&gt;Use High Availability (HA) for production instances&lt;/li&gt;
&lt;li&gt;Utilize GitLab Geo replication for regional redundancy, but monitor for replication lag&lt;/li&gt;
&lt;li&gt;Back up before every upgrade or configuration change&lt;/li&gt;
&lt;li&gt;Avoid relying on CI pipelines for backups or exports (if runners or the API go down, your backups go down too)&lt;/li&gt;
&lt;li&gt;Leverage third-party backup and DR solutions for cross-over restore, point-in-time, and granular recovery to minimize downtime&lt;/li&gt;
&lt;li&gt;Create a downtime communication plan, outline roles clearly&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  10 Pipeline or job failure due to misconfiguration or API overload
&lt;/h2&gt;

&lt;p&gt;GitLab CI/CD pipelines depend on a valid &lt;em&gt;.gitlab-ci.yml&lt;/em&gt;, correctly configured runners (matching tags, proper executors), and sufficient API availability. Any misconfigurations in pipeline logic, or missing variables, can cause jobs to fail before any artifacts or build outputs are saved. When these jobs cover deployables, documentation, or backups, failures may lead to data loss.&lt;/p&gt;

&lt;p&gt;On GitLab.com, API rate limits apply to job tokens, artifact uploads, registry operations, and automation flows. Pipelines that rely on API calls may fail or stop mid-execution under heavy load, resulting in incomplete artifacts or failed exports.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;How to address this&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Validate all pipelines before execution&lt;/li&gt;
&lt;li&gt;Guarantee proper runner configuration&lt;/li&gt;
&lt;li&gt;Assign dedicated runners to critical pipelines&lt;/li&gt;
&lt;li&gt;Monitor API rate limits, CI/CD errors &amp;amp; set up alerts&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  11 Insecure GitLab CI/CD pipelines and runners
&lt;/h2&gt;

&lt;p&gt;GitLab CI/CD has direct access to your environment (repos, variables, tokens, and deploy credentials). If pipelines or runners are left open, misconfigured, or too permissive, you hand attackers a ready-made execution path. A single job can expose CI/CD variables, leak tokens, or run code that tampers with artifacts or pushes poisoned changes upstream.&lt;/p&gt;

&lt;p&gt;💡 &lt;em&gt;In short: if your runners aren’t isolated and your pipelines aren’t locked down, your entire SDLC becomes an entry point.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Secure pipelines and runners:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Never expose runners publicly &lt;/li&gt;
&lt;li&gt;Protect secrets and variables &lt;/li&gt;
&lt;li&gt;Define who can run pipelines and limit job permissions&lt;/li&gt;
&lt;li&gt;Verify images before they get to CI&lt;/li&gt;
&lt;li&gt;Review pipelines just like code&lt;/li&gt;
&lt;li&gt;Keep staging and production credentials separated, do not reuse tokens between environments&lt;/li&gt;
&lt;li&gt;Monitor runners for suspicious jobs and rotate your tokens&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  12 Unsafe third-party integrations
&lt;/h2&gt;

&lt;p&gt;GitLab allows users to integrate third-party tools to streamline their work and simplify collaboration. These include Jira, Slack and Kubernetes. Carefully evaluate everything that gets added into your GitLab environment, especially when it comes to production repos. &lt;/p&gt;

&lt;p&gt;Every integration is a new potential failure point. If an integration is misconfigured or uses overly broad tokens, attackers don’t even have to breach GitLab directly to steal your data. The attacker will exploit weaker systems tied to your instance and pull secrets or trigger pipelines.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Ensure security for all third-party integrations&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Use minimal token scopes&lt;/li&gt;
&lt;li&gt;Rotate all integration credentials regularly&lt;/li&gt;
&lt;li&gt;Remove unused webhooks and abandoned apps&lt;/li&gt;
&lt;li&gt;Validate inbound requests (signatures, HTTPS)&lt;/li&gt;
&lt;li&gt;Monitor integration-triggered activity&lt;/li&gt;
&lt;li&gt;Store secrets only as masked, scoped CI/CD variables&lt;/li&gt;
&lt;li&gt;Run &lt;a href="https://about.gitlab.com/blog/how-to-integrate-custom-security-scanners-into-gitlab/" rel="noopener noreferrer"&gt;custom security scanners&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  13 Mirrors and repository divergence
&lt;/h2&gt;

&lt;p&gt;Mirrors need proper management. If a mirrored repo isn’t syncing, branches drift, commits diverge, and teams end up working on outdated code. Failed pull/push mirrors, expired tokens, or silent sync errors leave repos out of date. Now, any overwrite or merge after that becomes real data loss.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Address the risks&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Sync mirrors on a schedule, not manually&lt;/li&gt;
&lt;li&gt;Rotate tokens so mirrors don’t break silently&lt;/li&gt;
&lt;li&gt;Let CI test merges before they hit main&lt;/li&gt;
&lt;li&gt;Keep feature branches synced with main regularly&lt;/li&gt;
&lt;li&gt;Fix conflicts locally before pushing upstream&lt;/li&gt;
&lt;li&gt;Review permissions so only trusted roles can overwrite mirrored branches&lt;/li&gt;
&lt;/ul&gt;
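&lt;p&gt;Silent mirror drift is easy to detect by comparing branch heads. A sketch using two local repositories to stand in for the primary and its mirror (names and commits are illustrative):&lt;/p&gt;

```shell
#!/bin/sh
# Sketch: detect divergence between a repository and a stale mirror.
set -eu
work=$(mktemp -d); cd "$work"

git init -q primary
(cd primary && git -c user.email=dev@example.com -c user.name=dev \
  commit -q --allow-empty -m init)
git clone -q --mirror primary mirror.git    # in sync at this point

# The primary moves on; the mirror silently falls behind.
(cd primary && git -c user.email=dev@example.com -c user.name=dev \
  commit -q --allow-empty -m "new work")

src=$(git -C primary    rev-parse HEAD)
dst=$(git -C mirror.git rev-parse HEAD)
[ "$src" = "$dst" ] || echo "mirror diverged: primary=$src mirror=$dst"
```

&lt;p&gt;Running a comparison like this on a schedule turns a silent sync failure into an alert instead of a surprise during an incident.&lt;/p&gt;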

&lt;h2&gt;
  
  
  14 Public projects and data exposure
&lt;/h2&gt;

&lt;p&gt;Public projects are the easiest way for secrets to get leaked. A single commit with an API key, access token, SSH key, or environment file is enough to expose your entire GitLab environment. GitLab’s public visibility makes it instantly accessible to scanners and bots. Even deleted commits stay in history, forks, caches, and mirrors. Once it’s pushed publicly, you’ve lost control of it.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Watch out for public projects&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Keep sensitive projects private by default&lt;/li&gt;
&lt;li&gt;Use Secret Detection to catch leaked keys instantly&lt;/li&gt;
&lt;li&gt;Block commits containing secrets with pre-commit hooks&lt;/li&gt;
&lt;li&gt;Rotate any credential that ever touched a public commit&lt;/li&gt;
&lt;li&gt;Use protected branches so no one pushes unreviewed code&lt;/li&gt;
&lt;li&gt;Scan projects regularly for forgotten secrets or config files&lt;/li&gt;
&lt;/ul&gt;
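&lt;p&gt;The pre-commit hook from the list above can be sketched like this – the regex matches GitLab-style PATs (&lt;em&gt;glpat-&lt;/em&gt;) and is purely illustrative; a real setup would use a dedicated scanner:&lt;/p&gt;

```shell
#!/bin/sh
# Sketch: a local pre-commit hook that rejects commits containing token-like
# strings. The pattern is illustrative; extend it for real use.
set -eu
repo=$(mktemp -d); cd "$repo"
git init -q .

cat > .git/hooks/pre-commit <<'EOF'
#!/bin/sh
# Scan staged changes for a token-like pattern before every commit.
if git diff --cached | grep -qE 'glpat-[A-Za-z0-9_-]{20}'; then
  echo "ERROR: possible secret in staged changes; commit blocked" >&2
  exit 1
fi
EOF
chmod +x .git/hooks/pre-commit

echo 'TOKEN=glpat-aaaaaaaaaaaaaaaaaaaa' > .env
git add .env
git -c user.email=dev@example.com -c user.name=dev commit -m leak \
  || echo "commit was rejected"
```

&lt;p&gt;Local hooks are per-clone and easy to bypass, so treat them as a first line of defense that complements, rather than replaces, server-side Secret Detection.&lt;/p&gt;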

&lt;p&gt;✍️ Subscribe to &lt;a href="https://gitprotect.io/gitprotect-newsletter.html?utm_source=d&amp;amp;utm_medium=m" rel="noopener noreferrer"&gt;GitProtect DevSecOps X-Ray Newsletter&lt;/a&gt; – your guide to the latest DevOps &amp;amp; security insights&lt;/p&gt;

&lt;p&gt;🚀 Ensure compliant &lt;a href="https://gitprotect.io/sign-up.html?utm_source=d&amp;amp;utm_medium=m" rel="noopener noreferrer"&gt;DevOps backup and recovery with a 14-day free trial&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;📅 Let’s discuss your needs and &lt;a href="https://calendly.com/d/3s9-n9z-pgc/gitprotect-live-demo?month=2024-04&amp;amp;utm_source=d&amp;amp;utm_medium=m" rel="noopener noreferrer"&gt;see a live product tour&lt;/a&gt;&lt;/p&gt;

</description>
      <category>gitlab</category>
      <category>devops</category>
      <category>programming</category>
      <category>security</category>
    </item>
    <item>
      <title>How to Protect Jira Assets: Best Practices For Backup And Recovery</title>
      <dc:creator>GitProtect Team</dc:creator>
      <pubDate>Thu, 11 Dec 2025 11:01:59 +0000</pubDate>
      <link>https://forem.com/gitprotect/how-to-protect-jira-assets-best-practices-for-backup-and-recovery-4lc4</link>
      <guid>https://forem.com/gitprotect/how-to-protect-jira-assets-best-practices-for-backup-and-recovery-4lc4</guid>
      <description>&lt;p&gt;&lt;strong&gt;It’s hard to imagine a modern ITSM (IT Service Management) and general configuration management in Jira without Jira Assets. All the more so, it allows IT teams to model physical infrastructure, logical dependencies, user ownership, licensing, and even financial amortization of resources. The possible challenge is its hybrid architecture, followed by tight schema and application logic coupling. Any automation error or misconfigured import may corrupt your CMDB.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Forget about hypothetical situations. The industry has seen numerous multi-hour outages in ITSM operations, mainly due to schema-wide cascading deletions and overwritten attribute sets (caused by faulty API scripts).&lt;/p&gt;

&lt;p&gt;Certainly, protecting this layer requires you to deeply understand how data is stored and where the critical paths lie. You also need to get familiar with how to build a backup and recovery pipeline that aligns with:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;RTOs under 1 hour,&lt;/li&gt;
&lt;li&gt;RPOs under 15 minutes,&lt;/li&gt;
&lt;li&gt;ability to pass integrity validation under load.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;According to the &lt;a href="https://gitprotect.io/docs/gitprotect-ciso-guide-to-devops-threats-2025.pdf" rel="noopener noreferrer"&gt;CISO’s guide to DevOps threats&lt;/a&gt;, Atlassian’s Jira experienced 132 incidents of different impact in 2024, which shows a 44% growth compared to 2023.&lt;/p&gt;

&lt;h2&gt;
  
  
  Assets storage model. What you’re backing up
&lt;/h2&gt;

&lt;p&gt;Jira Assets data is stored across specific ActiveObjects tables in the Jira database (&lt;strong&gt;Data Center&lt;/strong&gt;). The most vital include (among others):&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxfs65o6cdw5ly945pjct.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxfs65o6cdw5ly945pjct.png" alt="Jira Assets" width="686" height="313"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The tables mentioned use foreign keys and serialized JSON structures. They require precise relational consistency to avoid orphaned data or circular dependencies.&lt;/p&gt;

&lt;p&gt;However, elements like attachments reside in the filesystem. They’re placed within the &lt;strong&gt;/data/attachments&lt;/strong&gt; directory in Jira Home by default. If you exclude attachments from backup, object attributes pointing to these files will break (unnoticed) during recovery. The Jira system will fail to render previews or links.&lt;/p&gt;

&lt;p&gt;In the case of a &lt;strong&gt;Cloud&lt;/strong&gt; instance, the approach is different. You can say Atlassian abstracts this layer entirely. Jira Assets data resides in a proprietary backend atop an AWS stack. That means there is no direct database access. Backups must be handled via the &lt;strong&gt;Jira Assets REST API&lt;/strong&gt; – with or without third-party tooling.&lt;/p&gt;

&lt;h2&gt;
  
  
  Part 1. Solutions for Jira Assets DC
&lt;/h2&gt;

&lt;p&gt;Though Atlassian has already announced the end of support for its Data Center by March 28, 2029, let’s still look at some of the options. A full backup of Assets on Jira Data Center starts with consistent database snapshots. For PostgreSQL, a point-in-time consistent export of the relevant tables can be created with:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;pg_dump -Fc \&lt;br&gt;
  -t "AO_8542F1_IFJ_OBJ" \&lt;br&gt;
  -t "AO_8542F1_IFJ_OBJ_TYPE" \&lt;br&gt;
  -t "AO_8542F1_IFJ_ATTRIBUTE" \&lt;br&gt;
  -t "AO_8542F1_IFJ_SCHEMA" \&lt;br&gt;
jira_prod &amp;gt; assets_only.dump&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;You have to execute such an export with complete transaction consistency, so &lt;strong&gt;--no-synchronized-snapshots&lt;/strong&gt; is not advised – especially if the schema changes due to ongoing imports or automation.&lt;/p&gt;

&lt;p&gt;At the same time, the attachments must be captured:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;tar -czf attachments_$(date +%F).tgz&lt;br&gt;
/var/atlassian/application-data/jira/data/attachments&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;However, in &lt;a href="https://gitprotect.io/industries/regulated-industries.html" rel="noopener noreferrer"&gt;regulated industries&lt;/a&gt;, best practices include generating &lt;strong&gt;SHA256&lt;/strong&gt; hashes post-backup:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;sha256sum assets_only.dump attachments_*.tgz &amp;gt; backup_hashes.sha256&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;During recovery, this approach validates that no tampering or corruption has occurred.&lt;/p&gt;
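&lt;p&gt;The verification step on the recovery host is then a one-liner. A runnable sketch (stand-in files are created here so the flow is self-contained; in practice you verify the real dump and attachment archives):&lt;/p&gt;

```shell
#!/bin/sh
# Sketch: verify backup artifacts against recorded hashes before a restore.
# Stand-in files are created so the flow is runnable end to end.
set -eu
work=$(mktemp -d); cd "$work"
echo "dump contents" > assets_only.dump
tar -czf attachments_demo.tgz assets_only.dump

# At backup time (as above):
sha256sum assets_only.dump attachments_*.tgz > backup_hashes.sha256

# On the recovery host: exits non-zero if any file was corrupted or tampered with.
sha256sum -c backup_hashes.sha256
```

&lt;p&gt;Make the non-zero exit fail the restore pipeline, so a corrupted archive never reaches production.&lt;/p&gt;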

&lt;h2&gt;
  
  
  A bit deeper dive into database backups in Jira
&lt;/h2&gt;

&lt;p&gt;It’s worth noting that Atlassian recommends native database tools for backups due to their reliability and performance. For &lt;strong&gt;PostgreSQL&lt;/strong&gt;, the usual choice for logical backups in Jira is the pg_dump utility; for physical backups of larger instances, pg_basebackup is the better fit.&lt;/p&gt;

&lt;p&gt;For example, configure a cron job on the Jira Data Center to create a &lt;strong&gt;daily logical backup&lt;/strong&gt;:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;#!/bin/bash&lt;br&gt;
BACKUP_DIR="/backups/jira/db"&lt;br&gt;
TIMESTAMP=$(date +%Y%m%d_%H%M%S)&lt;br&gt;
DB_USER="jirauser"&lt;br&gt;
DB_NAME="jiradb"&lt;br&gt;
BACKUP_FILE="$BACKUP_DIR/jira_db_$TIMESTAMP.sql.gz"&lt;br&gt;
&lt;br&gt;
mkdir -p $BACKUP_DIR&lt;br&gt;
pg_dump -U $DB_USER $DB_NAME | gzip &amp;gt; $BACKUP_FILE&lt;br&gt;
&lt;br&gt;
# Rotate backups (keep 7 days)&lt;br&gt;
find $BACKUP_DIR -name "jira_db_*.sql.gz" -mtime +7 -delete&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;The script above dumps the database, compresses it, and rotates backups to manage disk space. Of course, you need to ensure the DB_USER has sufficient permissions. Then, it’s time to test the backup integrity using gunzip and psql to restore it to a test environment.&lt;/p&gt;

&lt;p&gt;Another example of a daily dump in &lt;strong&gt;PostgreSQL&lt;/strong&gt; may look like this:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;pg_dump -U jira_user -h localhost -F c -b -f /backups/jira_db_$(date +%Y%m%d).backup jira_db&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;For comparison, the same step, but with rotation, in &lt;strong&gt;MySQL&lt;/strong&gt; can be shaped as shown below:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;mysqldump -u jira_user -p$PASSWORD jira_db | gzip &amp;gt; /backups/jira_db_$(date +%Y%m%d).sql.gz&lt;br&gt;
find /backups -type f -name '*.gz' -mtime +30 -delete&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Considering &lt;strong&gt;physical backups&lt;/strong&gt; in PostgreSQL, pg_basebackup provides a faster option for large databases (&amp;gt;50GB). For example:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;pg_basebackup -U $DB_USER -D /backups/jira/pg_basebackup_$TIMESTAMP -Fp -Xs -P&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;The command creates a full backup of the PostgreSQL data directory. Combined with write-ahead logs (WAL), it’s suitable for point-in-time recovery (PITR). To enable WAL archiving, configure archive_mode and archive_command in postgresql.conf.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;archive_mode = on&lt;br&gt;
archive_command = 'cp %p /backup/jira/wal/%f'&lt;/code&gt;&lt;/p&gt;
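&lt;p&gt;For completeness, the restore side of PITR (PostgreSQL 12 and later) is also configured in postgresql.conf, with an empty recovery.signal file in the data directory telling the server to enter recovery mode. A minimal sketch – paths and the target time are illustrative:&lt;/p&gt;

```conf
# postgresql.conf on the recovery host (values are illustrative)
restore_command = 'cp /backup/jira/wal/%f %p'   # fetch archived WAL segments
recovery_target_time = '2025-06-01 12:00:00'    # stop replay just before the incident
recovery_target_action = 'promote'              # open the instance at the target
```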

&lt;h2&gt;
  
  
  What about filesystem backups?
&lt;/h2&gt;

&lt;p&gt;As you already know, the JIRA_HOME directory, typically located at ~/.jira-home, contains critical files like attachments (data/attachments), indexes (caches/indexes), and configuration files. &lt;/p&gt;

&lt;p&gt;To back up the described directory, you need careful synchronization to avoid corrupting Lucene indexes, which Jira utilizes for search. So, if you aim for a robust solution, then the rsync method with a pre-backup script to pause indexing is a good choice.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;#!/bin/bash&lt;br&gt;
JIRA_HOME="/var/atlassian/application-data/jira"&lt;br&gt;
BACKUP_DIR="/backups/jira/fs"&lt;br&gt;
TIMESTAMP=$(date +%Y%m%d_%H%M%S)&lt;br&gt;
BACKUP_FILE="$BACKUP_DIR/jira_home_$TIMESTAMP.tar.gz"&lt;br&gt;
&lt;br&gt;
# Pause indexing&lt;br&gt;
curl -u admin:admin -X POST http://jira.example.com/rest/api/2/indexing/pause&lt;br&gt;
&lt;br&gt;
# Sync and archive&lt;br&gt;
rsync -av --delete $JIRA_HOME /tmp/jira_home_backup&lt;br&gt;
tar -czf $BACKUP_FILE -C /tmp jira_home_backup&lt;br&gt;
&lt;br&gt;
# Resume indexing&lt;br&gt;
curl -u admin:admin -X POST http://jira.example.com/rest/api/2/indexing/resume&lt;br&gt;
&lt;br&gt;
# Rotate backups&lt;br&gt;
find $BACKUP_DIR -name "jira_home_*.tar.gz" -mtime +7 -delete&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;The script:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;pauses indexing through Jira’s REST API, &lt;/li&gt;
&lt;li&gt;syncs the JIRA_HOME directory&lt;/li&gt;
&lt;li&gt;archives it&lt;/li&gt;
&lt;li&gt;resumes indexing.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;You need to replace admin:admin with a service account. In the next step, secure the credentials using a .netrc file or environment variables.&lt;/p&gt;

&lt;p&gt;In general, the rsync command for attachments and indexes may be utilized as:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;rsync -avz /var/atlassian/application-data/jira/ backup-server:/jira_backups/&lt;/code&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Jira’s built-in backup service
&lt;/h2&gt;

&lt;p&gt;Like any self-respecting software developer, Atlassian has implemented &lt;a href="https://gitprotect.io/blog/gitprotect-jira-backup-vs-atlassians-built-in-backup-abilities/" rel="noopener noreferrer"&gt;a native data backup mechanism&lt;/a&gt; in Jira. The platform’s admins utilize an XML backup utility, accessible via:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Administration → System → Backup System&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;It generates a single ZIP file containing issues, configurations, and selected JIRA_HOME data. The solution is convenient yet very limited. The mechanism excludes attachments and is resource-intensive, often causing performance degradation on large instances with more than 500,000 issues. &lt;/p&gt;

&lt;p&gt;For smaller instances, you can schedule XML backups using a script involving the REST API. For instance:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;#!/bin/bash&lt;br&gt;
JIRA_URL="http://jira.example.com"&lt;br&gt;
USERNAME="admin"&lt;br&gt;
PASSWORD="admin"&lt;br&gt;
BACKUP_DIR="/backups/jira/xml"&lt;br&gt;
TIMESTAMP=$(date +%Y%m%d_%H%M%S)&lt;br&gt;
&lt;br&gt;
# Trigger backup&lt;br&gt;
curl -u $USERNAME:$PASSWORD -X POST "$JIRA_URL/rest/backup/1/export/runbackup" -H "Content-Type: application/json" -d '{"cbAttachments": false}'&lt;br&gt;
&lt;br&gt;
# Wait for backup completion and download&lt;br&gt;
# Note: Implement polling logic to check backup status&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;However, automating XML backups is less reliable than database and file system backups – the method is prone to timeouts. It should therefore be reserved as a secondary option (or for configuration exports).&lt;/p&gt;

&lt;h2&gt;
  
  
  Testing point-in-time restore: granular recovery and referential pitfalls
&lt;/h2&gt;

&lt;p&gt;Exceeding acceptable downtime windows while restoring full backups isn’t rare. That’s why many teams aim for &lt;a href="https://gitprotect.io/features/data-restore-disaster-recovery/granular-restore.html#article-content" rel="noopener noreferrer"&gt;granular restores&lt;/a&gt;, especially during incidents triggered by automation or import errors. &lt;/p&gt;

&lt;p&gt;However, such a step is far more complex than it appears.&lt;/p&gt;

&lt;p&gt;Let’s take rolling back a single object as an example. It requires identifying all dependencies (e.g., attributes that refer to other object types, automation rules using such attributes), truncating the specific object set, and reinserting from a filtered pg_restore.&lt;/p&gt;

&lt;p&gt;For instance (SQL):&lt;/p&gt;

&lt;p&gt;&lt;code&gt;DELETE FROM AO_8542F1_IFJ_OBJ WHERE OBJECT_TYPE_ID = 11203;&lt;br&gt;
&lt;br&gt;
pg_restore --data-only --table=AO_8542F1_IFJ_OBJ --file=filtered_obj.dump assets_only.dump&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;However, making such changes without accounting for attribute constraints and the UUID bindings that link objects together can leave you with dangling objects – broken references between data. &lt;/p&gt;

&lt;p&gt;Other issues include inconsistencies in the system’s overall structure, such as broken schema consistency. Therefore, you must test restores on staging, using the exact same Jira version and plugin state. Equally important is checksum-based post-restore validation against a reference dataset.&lt;/p&gt;

&lt;h2&gt;
  
  
  Part 2. Solutions for Jira Assets Cloud
&lt;/h2&gt;

&lt;p&gt;It’s worth remembering that Atlassian Cloud doesn’t officially support full schema-level exports in JSON format via the REST API – those were part of older Insight Server versions. The recommended way to export (back up) large sets of objects is the CSV export in the Jira (Service Management) UI. &lt;/p&gt;

&lt;p&gt;If you plan to automate exports (backups), you usually:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;export CSV files manually or script UI automation&lt;/li&gt;
&lt;li&gt;or use the REST API to retrieve objects individually or in pages (pagination), which requires custom scripts.&lt;/li&gt;
&lt;/ul&gt;
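&lt;p&gt;The paginated retrieval loop can be sketched in shell. Note that the fetch_page function below is a stub standing in for a real authenticated call to the Assets AQL endpoint; the endpoint path, response shape, and canned page contents here are illustrative assumptions, not the exact API contract.&lt;/p&gt;

```shell
#!/bin/sh
# Sketch of paginated object export. fetch_page stands in for a real call such
# as a POST to .../jsm/insight/workspace/WS_ID/v1/object/navlist/aql with the
# page number in the JSON body. The canned JSON only mimics the general shape.
fetch_page() {
  case "$1" in
    1) echo '{"objectEntries":[{"id":1},{"id":2}]}' ;;
    2) echo '{"objectEntries":[{"id":3}]}' ;;
    *) echo '{"objectEntries":[]}' ;;
  esac
}

page=1
total=0
while :; do
  body=$(fetch_page "$page")
  # Count returned entries; real code would parse the response with jq instead.
  count=$(printf '%s' "$body" | tr ',' '\n' | grep -c '"id"')
  if [ "$count" -eq 0 ]; then
    break                      # an empty page means we are done
  fi
  total=$((total + count))
  page=$((page + 1))
done
echo "exported $total objects"
```

&lt;p&gt;In a real export, each page of the response would be written to one JSON file per object before the loop advances.&lt;/p&gt;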

&lt;p&gt;For example, before making any API calls, retrieve the workspace ID they require (bash):&lt;/p&gt;

&lt;p&gt;&lt;code&gt;curl -u email@example.com:API_TOKEN \&lt;br&gt;
  -H "Accept: application/json" \&lt;br&gt;
  "https://yourdomain.atlassian.net/rest/servicedeskapi/insight/workspace"&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Note the “id” field of the workspace you want to work with from the JSON response.&lt;/p&gt;

&lt;p&gt;To restore objects from JSON files, you must send one POST request per object. For this purpose, use the current Assets API endpoint and Basic Auth with an API token.&lt;/p&gt;

&lt;p&gt;For instance, take a JSON file (object_345.json):&lt;/p&gt;

&lt;p&gt;&lt;code&gt;{&lt;br&gt;
  "objectTypeId": "250",&lt;br&gt;
  "attributes": [&lt;br&gt;
    {&lt;br&gt;
    "objectTypeAttributeId": "2797",&lt;br&gt;
    "objectAttributeValues": [&lt;br&gt;
            { "value": "Object Name" }&lt;br&gt;
          ]&lt;br&gt;
    },&lt;br&gt;
    {&lt;br&gt;
        "objectTypeAttributeId": "2807",&lt;br&gt;
        "objectAttributeValues": [&lt;br&gt;
              { "value": "Object Description" }&lt;br&gt;
       ]&lt;br&gt;
    }&lt;br&gt;
   ]&lt;br&gt;
}&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Of course, you must get the correct objectTypeId and objectTypeAttributeId from your Assets schema configurator or via API.&lt;/p&gt;

&lt;p&gt;Now, create a POST request to create an object:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;curl -X POST \&lt;br&gt;
  -u email@example.com:API_TOKEN \&lt;br&gt;
  -H "Content-Type: application/json" \&lt;br&gt;
  -d @object_345.json \&lt;br&gt;
"https://api.atlassian.com/jsm/insight/workspace/{yourworkspaceId}/v1/object/create"&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;It’s worth mentioning that:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;the old endpoint &lt;a href="https://yourdomain.atlassian.net/rest/insight/1.0/object/create" rel="noopener noreferrer"&gt;https://yourdomain.atlassian.net/rest/insight/1.0/object/create&lt;/a&gt; is deprecated and doesn’t work in Atlassian Cloud&lt;/li&gt;
&lt;li&gt;bulk export of objects in JSON format via the REST API is not supported in Atlassian Cloud&lt;/li&gt;
&lt;li&gt;for large datasets, export CSV via the UI or implement paginated object retrieval through the API&lt;/li&gt;
&lt;li&gt;import requires sequential POST requests per object, respecting object type and attribute IDs.&lt;/li&gt;
&lt;/ul&gt;
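&lt;p&gt;Putting those notes together, restoring many exported objects is just a loop issuing one POST per file. A hedged sketch follows; the scratch directory, dry-run flag, and workspace id are illustrative, while the endpoint matches the one shown above:&lt;/p&gt;

```shell
#!/bin/sh
# Replay a directory of per-object JSON dumps, one POST per file.
# DRY_RUN=1 prints each curl command instead of hitting the API, so the loop
# can be exercised safely; the workspace id and credentials are placeholders.
API="https://api.atlassian.com/jsm/insight/workspace/YOUR_WS_ID/v1/object/create"
DRY_RUN=1

# Illustrative input: two exported objects in a scratch directory.
dir=$(mktemp -d)
printf '{"objectTypeId":"250"}' > "$dir/object_1.json"
printf '{"objectTypeId":"250"}' > "$dir/object_2.json"

restored=0
for f in "$dir"/object_*.json; do
  if [ "$DRY_RUN" -eq 1 ]; then
    echo "curl -X POST -u email@example.com:API_TOKEN -d @$f $API"
  else
    curl -s -X POST -u email@example.com:API_TOKEN \
      -H "Content-Type: application/json" -d @"$f" "$API"
  fi
  restored=$((restored + 1))
done
echo "processed $restored files"
```

&lt;p&gt;Rate limits apply here too, so a production version of this loop should pace its requests rather than fire them as fast as the shell allows.&lt;/p&gt;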

&lt;h2&gt;
  
  
  A quick look at recovery procedures
&lt;/h2&gt;

&lt;p&gt;Approaching &lt;strong&gt;database recovery&lt;/strong&gt;, let’s start with logical backups. To restore one, you can use:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;gunzip jira_db_20250415_120000.sql.gz&lt;br&gt;
psql -U jirauser -d jiradb &amp;lt; jira_db_20250415_120000.sql&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;When it comes to physical backups, you should stop the PostgreSQL service. Then, you copy the backup to the data directory and replay the WAL files (if using PITR). The last thing is to test restores in &lt;a href="https://gitprotect.io/blog/4-reasons-to-treat-backup-as-a-vital-part-of-jira-sandbox-to-production-migration/" rel="noopener noreferrer"&gt;a sandbox environment&lt;/a&gt; to validate &lt;a href="https://gitprotect.io/blog/rto-and-rpo-what-are-those-metrics-about-and-how-to-improve-them/" rel="noopener noreferrer"&gt;RTO and RPO&lt;/a&gt;.&lt;/p&gt;
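&lt;p&gt;For the WAL-replay (PITR) step, recovery is driven by a few settings on the restored data directory. A sketch for PostgreSQL 12 and later; the archive path and target timestamp are placeholders:&lt;/p&gt;

```ini
# postgresql.conf on the restored data directory (PostgreSQL 12+)
# restore_command fetches archived WAL segments during replay
restore_command = 'cp /backups/jira/wal/%f "%p"'
# stop replay just before the incident occurred
recovery_target_time = '2025-04-15 11:55:00'
```

&lt;p&gt;An empty recovery.signal file in the data directory switches the server into recovery mode on the next startup.&lt;/p&gt;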

&lt;h2&gt;
  
  
  File system recovery with validating and testing
&lt;/h2&gt;

&lt;p&gt;When you want to restore JIRA_HOME, the first thing to do is stop Jira. Once it’s stopped, extract the backup and verify file permissions. For instance:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;tar -xzf jira_home_20250415_120000.tar.gz -C /var/atlassian/application-data&lt;br&gt;
chown -R jira:jira /var/atlassian/application-data/jira&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Further, rebuild indexes in &lt;strong&gt;Administration → System → Indexing&lt;/strong&gt; after restoration to ensure search functionality.&lt;/p&gt;

&lt;p&gt;A good practice (or even a must) is regularly testing backups by restoring them to a staging environment. You can automate the process with a script to check backup integrity.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;#!/bin/bash&lt;br&gt;
BACKUP_FILE="/backups/jira/db/jira_db_20250415_120000.sql.gz"&lt;br&gt;
gunzip -t "$BACKUP_FILE" &amp;amp;&amp;amp; echo "Backup is valid" || echo "Backup is corrupt"&lt;/code&gt;&lt;/p&gt;
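&lt;p&gt;Keep in mind that gunzip -t only proves the compressed stream is intact. A slightly stronger check compares the artifact against a checksum recorded at backup time, which also catches silent truncation or the wrong file being shipped offsite. A minimal sketch with an illustrative payload and paths:&lt;/p&gt;

```shell
#!/bin/sh
# Verify a backup against a checksum recorded when the backup was taken.
# The payload here is fake; in practice, point these paths at the real archive.
dir=$(mktemp -d)
printf 'fake backup payload' > "$dir/jira_db.sql.gz"

# At backup time: record checksum and size next to the artifact.
cksum "$dir/jira_db.sql.gz" | awk '{print $1, $2}' > "$dir/manifest"

# At verify time: recompute and compare.
current=$(cksum "$dir/jira_db.sql.gz" | awk '{print $1, $2}')
recorded=$(cat "$dir/manifest")
if [ "$current" = "$recorded" ]; then
  status=valid
else
  status=corrupt
fi
echo "backup is $status"
```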

&lt;p&gt;In short, a backup is useless if it can’t be restored.&lt;/p&gt;

&lt;h2&gt;
  
  
  Jira Assets backup automation and third-party integrations
&lt;/h2&gt;

&lt;p&gt;Experts across the Internet keep repeating that manual backups are error-prone. To remove human error from the backup and restore equation, top-performing teams integrate tools like &lt;strong&gt;GitProtect.io&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;It’s an enterprise-grade, automated backup and disaster recovery tool, tailored for DevOps and PM data protection, including Jira, Bitbucket, GitHub, GitLab, and Azure DevOps.&lt;/p&gt;

&lt;p&gt;The GitProtect solution allows you to back up and restore your Jira Assets in &lt;a href="https://gitprotect.io/blog/jira-backup-to-s3/" rel="noopener noreferrer"&gt;a few simple steps&lt;/a&gt;. &lt;/p&gt;

&lt;p&gt;From the Jira admin’s perspective, the solution extends backup and recovery beyond Atlassian’s native capabilities. Companies can use granular control, &lt;a href="https://gitprotect.io/blog/security-compliance-best-practices/" rel="noopener noreferrer"&gt;compliance&lt;/a&gt;, and resilience for mission-critical workflows.&lt;/p&gt;

&lt;p&gt;Here’s why:&lt;/p&gt;

&lt;h2&gt;
  
  
  Cross-tool dependency protection
&lt;/h2&gt;

&lt;p&gt;Usually, Jira is integrated with other DevOps tools, for example:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Git repositories (e.g., issues referenced in commit messages)&lt;/li&gt;
&lt;li&gt;CI/CD pipelines connected via third-party integrations and automation rules.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;GitProtect.io backs up Jira Cloud and Git repositories (GitHub, GitLab, Bitbucket, Azure DevOps), preserving platform issue references.&lt;/p&gt;

&lt;p&gt;When backups are configured across these tools, GitProtect ensures that cross-referenced data (e.g., Jira issue keys in Git commits) remains accessible after recovery, provided all linked systems are restored.&lt;/p&gt;

&lt;h2&gt;
  
  
  Backup automation without script maintenance
&lt;/h2&gt;

&lt;p&gt;While Atlassian Cloud provides only basic backups and Data Center requires manual scripts, GitProtect allows for:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;- Policy-driven scheduling&lt;/strong&gt;&lt;br&gt;
For example, daily incremental and weekly full backups.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;- Pre/post-backup hooks&lt;/strong&gt;&lt;br&gt;
For instance, pause indexing during backup to ensure consistency.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;- No reliance on brittle pg_dump or filesystem snapshots&lt;/strong&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Compliance and legal hold
&lt;/h2&gt;

&lt;p&gt;The tool uses immutable backups (&lt;a href="https://gitprotect.io/blog/immutable-storage/" rel="noopener noreferrer"&gt;WORM storage&lt;/a&gt;) for audit trails, which is crucial for SOC2/ISO 27001. Backups are supported with role-based access control; for example, Jira schema restores are restricted to admins.&lt;/p&gt;

&lt;h2&gt;
  
  
  Faster, granular recovery
&lt;/h2&gt;

&lt;p&gt;GitProtect allows you to &lt;strong&gt;restore individual issues&lt;/strong&gt; (not just the entire project) through Jira’s REST API integration. Your team can also utilize &lt;strong&gt;point-in-time recovery&lt;/strong&gt; for attachments or workflows corrupted by misconfigured apps.&lt;/p&gt;

&lt;h2&gt;
  
  
  Offsite replication for disaster recovery
&lt;/h2&gt;

&lt;p&gt;The tool makes it easy to connect and use &lt;strong&gt;hybrid storage targets&lt;/strong&gt; (e.g., AWS S3, Azure Blob, on-prem NAS, etc.) with encryption-in-transit. This also entails &lt;strong&gt;geo-redundancy&lt;/strong&gt; to meet RPO/RTO SLAs, e.g., &lt;strong&gt;under 1 hour recovery for critical projects&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fztcvaias3so74mfzlo75.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fztcvaias3so74mfzlo75.png" alt="Jira Assets 2" width="800" height="365"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3wmck6ky04idiznya7sn.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3wmck6ky04idiznya7sn.png" alt="Jira Assets 3" width="800" height="365"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Native Jira backups lack &lt;strong&gt;cross-tool consistency&lt;/strong&gt; and &lt;strong&gt;legal-grade retention&lt;/strong&gt;. GitProtect.io fills the gap by treating Jira as part of the DevOps pipeline (not just a standalone database). For teams that already back up Git repos with GitProtect.io, adding Jira is a &lt;strong&gt;natural extension&lt;/strong&gt; to protect the entire SDLC.&lt;/p&gt;

&lt;h2&gt;
  
  
  More than a safety net: metrics that matter
&lt;/h2&gt;

&lt;p&gt;Ideally, each Jira admin could treat backup as a simple checkbox. However, the reality of business and IT makes clear that backups are a core component of system integrity and, as such, must be tied to SLA/OLA performance indicators. Among the latter, the most vital are:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;- RPO (Recovery Point Objective)&lt;/strong&gt;&lt;br&gt;
Note that without API-level automation, the realistic RPO in Jira Cloud is &lt;strong&gt;24 to 48 hours&lt;/strong&gt;. With automation, the needed time is reduced to &lt;strong&gt;minutes&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fa5jgaslt4p2m12eyiq7b.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fa5jgaslt4p2m12eyiq7b.png" alt="Jira Assets 4" width="800" height="367"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;- RTO (Recovery Time Objective)&lt;/strong&gt;&lt;br&gt;
For full schema recovery, the average RTO in Data Center is &lt;strong&gt;12 to 35 minutes&lt;/strong&gt;, assuming tested SQL restore paths. The measurement shrinks to &lt;strong&gt;under 10 minutes&lt;/strong&gt; with object-level restores and tested pipelines.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;- Integrity rate&lt;/strong&gt;&lt;br&gt;
Backup verification using checksums and dry-run imports yields an &lt;strong&gt;over 99.97% success rate&lt;/strong&gt; when using automated validation scripts on staged environments.&lt;/p&gt;

&lt;h2&gt;
  
  
  Final notes
&lt;/h2&gt;

&lt;p&gt;All the information above shows that protecting Jira Assets should be based on a disciplined approach to backup and recovery. That includes blending native tools and automation, followed by rigorous testing.&lt;/p&gt;

&lt;p&gt;Jira administrators can mitigate risks and ensure operational continuity by (among others) implementing database and file system backups and validating recovery procedures. Various scripts, configurations, and strategies outlined here are a foundation for resilience, adaptable to instances of different scales and complexities.&lt;/p&gt;

&lt;p&gt;However, given the limitations of the native Atlassian tools, using a third-party solution like GitProtect is a far more convenient, safe, and efficient approach. The software allows you to manage all aspects of a reliable backup and disaster recovery process quickly. That includes granular restore, automation without maintenance, and cross-tool dependency protection. &lt;/p&gt;

&lt;p&gt;Let’s not forget unmatched RPO and RTO times with an over 99.97% success rate. These give any Jira admin more confidence and a sound argument in security-related activities.&lt;/p&gt;

&lt;p&gt;✍️ Subscribe to &lt;a href="https://gitprotect.io/gitprotect-newsletter.html?utm_source=d&amp;amp;utm_medium=m" rel="noopener noreferrer"&gt;GitProtect DevSecOps X-Ray Newsletter&lt;/a&gt; – your guide to the latest DevOps &amp;amp; security insights&lt;/p&gt;

&lt;p&gt;🚀 Ensure compliant &lt;a href="https://gitprotect.io/sign-up.html?utm_source=d&amp;amp;utm_medium=m" rel="noopener noreferrer"&gt;DevOps backup and recovery with a 14-day free trial&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;📅 Let’s discuss your needs and &lt;a href="https://calendly.com/d/3s9-n9z-pgc/gitprotect-live-demo?month=2024-04&amp;amp;utm_source=d&amp;amp;utm_medium=m" rel="noopener noreferrer"&gt;see a live product tour&lt;/a&gt;&lt;/p&gt;

</description>
      <category>jira</category>
      <category>backup</category>
      <category>projectmanagement</category>
      <category>webdev</category>
    </item>
    <item>
      <title>Azure DevOps Pipelines 101: A Beginner’s Guide to CI/CD</title>
      <dc:creator>GitProtect Team</dc:creator>
      <pubDate>Thu, 04 Dec 2025 14:46:35 +0000</pubDate>
      <link>https://forem.com/gitprotect/azure-devops-pipelines-101-a-beginners-guide-to-cicd-4940</link>
      <guid>https://forem.com/gitprotect/azure-devops-pipelines-101-a-beginners-guide-to-cicd-4940</guid>
      <description>&lt;p&gt;&lt;strong&gt;In software engineering, the deployment process is not just about running a script and hoping it sticks. A big part of it is automation, not as a luxury, but a necessity. And that’s where Azure Pipelines steps in. The software provides a robust CI/CD engine embedded in the Azure DevOps ecosystem.&lt;/strong&gt; &lt;/p&gt;

&lt;p&gt;Developers and DevOps engineers working with version control systems, containers, or even legacy monoliths can leverage Azure Pipelines. It offers the structured scaffolding needed to build, test, and deliver code at scale, including support for major languages.&lt;/p&gt;

&lt;h2&gt;
  
  
  How to use it: Azure Pipelines
&lt;/h2&gt;

&lt;p&gt;It all starts with the deceptive simplicity of the Azure DevOps structure. In short, you can describe it as “define how your code should be built, tested, and deployed. Then, let the platform handle the sequence.”&lt;/p&gt;

&lt;p&gt;The ‘how’ resides in a YAML file. It becomes the blueprint for your pipeline run. But while syntax and indentation are essential, what truly matters is understanding the conceptual building blocks:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;pipelines&lt;/li&gt;
&lt;li&gt;stages&lt;/li&gt;
&lt;li&gt;jobs&lt;/li&gt;
&lt;li&gt;tasks&lt;/li&gt;
&lt;li&gt;steps&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;A pipeline in Azure DevOps isn’t a single process. It’s a hierarchy that consists of multiple stages. Each one contains one or more jobs. Every job can execute a series of tasks – in sequence or in parallel – depending on how you configure the agent pool and pipeline triggers.&lt;/p&gt;
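&lt;p&gt;That hierarchy can be sketched in a minimal YAML definition; the stage and job names below are illustrative:&lt;/p&gt;

```yaml
# azure-pipelines.yml: two stages, each with one job running simple steps
stages:
  - stage: Build
    jobs:
      - job: Compile
        pool:
          vmImage: 'ubuntu-latest'
        steps:
          - script: echo "compiling"
            displayName: 'Compile step'
  - stage: Deploy
    dependsOn: Build        # runs only after Build succeeds
    jobs:
      - job: Ship
        pool:
          vmImage: 'ubuntu-latest'
        steps:
          - script: echo "deploying"
            displayName: 'Deploy step'
```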

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ff60vavisdo0hm29dfb0s.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ff60vavisdo0hm29dfb0s.jpg" alt="Azure pipelines overview" width="783" height="783"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Azure Pipelines
&lt;/h2&gt;

&lt;p&gt;Azure Pipelines support a wide array of project types, programming languages, and deployment targets. The solution always scales to fit, whether you’re:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;pushing Java microservices to Azure Container Registry&lt;/li&gt;
&lt;li&gt;compiling an iOS app on self-hosted macOS agents&lt;/li&gt;
&lt;li&gt;publishing .NET artifacts&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Azure Pipelines’ hybrid architecture distinguishes it from other CI systems. It offers both Microsoft-hosted agents for ephemeral, on-demand builds and the option to use your own infrastructure via self-hosted agents. This gives you tighter control over security, dependencies, and runtime environments.&lt;/p&gt;

&lt;p&gt;Another thing is virtual machines. They spin up quickly and run your build jobs. Then they vanish, leaving only pipeline artifacts behind. You can also orchestrate parallel jobs or multi-phased builds that span staging and production environments – an essential capability for continuous delivery pipelines.&lt;/p&gt;

&lt;h2&gt;
  
  
  Azure DevOps pipeline
&lt;/h2&gt;

&lt;p&gt;When creating a new pipeline in Azure DevOps, you begin by selecting a source from your version control repository. This could be GitHub, GitLab, Bitbucket, or Azure DevOps – any system that supports Git will do.&lt;/p&gt;

&lt;p&gt;When the source is wired, you define the pipeline code using a YAML file typically stored at the root of your repository.&lt;/p&gt;

&lt;p&gt;Such a YAML definition becomes the single source of truth for how your builds and deployments execute. The pipeline automatically builds the code, runs unit tests, packages the output into executable files, and ships the release to the target platform.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhtg4h0ppodb2mdpduccn.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhtg4h0ppodb2mdpduccn.jpg" alt="Azure DevOps pipelines overview" width="772" height="381"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Continuous Integration
&lt;/h2&gt;

&lt;p&gt;This topic may seem exploited, yet CI isn’t a buzzword. It’s a discipline. In Azure DevOps, continuous integration (CI) begins when code is pushed to the repository. Pipeline triggers – explicitly or inferred – initiate the build pipeline.&lt;/p&gt;

&lt;p&gt;These pipelines can validate pull requests, enforce code quality rules, and block bad merges before they reach the mainline.&lt;/p&gt;
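&lt;p&gt;As a sketch, the trigger configuration for this looks like the following. Note that the pr trigger applies to GitHub and Bitbucket repositories; for Azure Repos Git, pull request validation is configured through branch policies instead:&lt;/p&gt;

```yaml
# Build pushes to main and validate pull requests targeting it
trigger:
  branches:
    include:
      - main

pr:
  branches:
    include:
      - main
```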

&lt;p&gt;Running unit tests, performing test integration routines, and scanning for security flaws become routine checks rather than manual chores. &lt;a href="https://gitprotect.io/blog/automate-devops-tasks-devops-should-automate/" rel="noopener noreferrer"&gt;Automation&lt;/a&gt; reduces the cognitive load on teams and reinforces engineering discipline.&lt;/p&gt;

&lt;h2&gt;
  
  
  Azure DevOps
&lt;/h2&gt;

&lt;p&gt;It’s worth underlining that Azure DevOps is a complete lifecycle platform, not just a build server. It ties together:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;code repositories&lt;/li&gt;
&lt;li&gt;work tracking&lt;/li&gt;
&lt;li&gt;testing infrastructure&lt;/li&gt;
&lt;li&gt;artifact storage&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Azure DevOps services integrate tightly with pipelines, enabling teams to trace every pipeline run back to a specific commit, pull request, or work item. The Azure DevOps organization URL becomes the entry point to your pipeline ecosystem. It defines the workspace under which you operate.&lt;/p&gt;

&lt;p&gt;Within that, you can create multiple projects, assign role-based access control, and set up audit trails for every change and deployment.&lt;/p&gt;

&lt;h2&gt;
  
  
  Continuous Delivery
&lt;/h2&gt;

&lt;p&gt;While continuous integration (CI) focuses on building and testing, continuous delivery extends the pipeline all the way to production. Azure Pipelines offers native container support, allowing you to create a release pipeline efficiently. It enables you to push images directly to Azure Container Registry and deploy containers with zero manual intervention.&lt;/p&gt;

&lt;p&gt;Release pipelines manage the deployment process across various stages – development, QA, staging, and production. You can create a controlled and auditable release mechanism when you define:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;release gates&lt;/li&gt;
&lt;li&gt;environment variables&lt;/li&gt;
&lt;li&gt;deployment tasks&lt;/li&gt;
&lt;li&gt;approval workflows&lt;/li&gt;
&lt;/ul&gt;
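&lt;p&gt;These pieces come together in a deployment job bound to an environment, where approvals and checks configured on that environment gate the run. A sketch; the environment name and variable are illustrative:&lt;/p&gt;

```yaml
stages:
  - stage: Production
    jobs:
      - deployment: DeployWeb
        pool:
          vmImage: 'ubuntu-latest'
        # approvals and checks attached to this environment gate the deployment
        environment: 'production'
        variables:
          APP_URL: 'https://example.internal'   # placeholder
        strategy:
          runOnce:
            deploy:
              steps:
                - script: echo "deploying to $(APP_URL)"
```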

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fc1wm8kynzre649wk0kht.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fc1wm8kynzre649wk0kht.jpg" alt="Continuous delivery" width="800" height="488"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Build pipeline
&lt;/h2&gt;

&lt;p&gt;Looking at the anatomy, a build pipeline includes:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;source checkout&lt;/li&gt;
&lt;li&gt;dependency resolution&lt;/li&gt;
&lt;li&gt;build compilation&lt;/li&gt;
&lt;li&gt;artifact packaging&lt;/li&gt;
&lt;li&gt;publishing&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Whether using a Maven pipeline template, a .NET Core build task, or scripting languages like Bash or PowerShell, the pipeline provides hooks at every step. &lt;/p&gt;

&lt;p&gt;The output of a build pipeline is one or more pipeline artifacts, which release pipelines or other downstream jobs then consume. You can configure pipeline settings to optimize execution and manage caching. You can also use secure secrets management to handle sensitive configuration data. &lt;/p&gt;

&lt;h2&gt;
  
  
  First pipeline, new pipeline run
&lt;/h2&gt;

&lt;p&gt;From a beginner’s perspective, the first pipeline is both a milestone and a test of patience. You define the azure-pipelines.yml at the root of your project. A stripped-down example for a Node.js app may look like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;trigger:
  - main

pool:
  vmImage: 'ubuntu-latest'

steps:
  - task: NodeTool@0
    inputs:
      versionSpec: '16.x'
    displayName: 'Install Node.js'

  - script: |
      npm install
      npm run build
    displayName: 'Build and Compile'

  - task: PublishBuildArtifacts@1
    inputs:
      pathToPublish: 'dist'
      artifactName: 'app'
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Once committed, the pipeline will trigger automatically on each push to main. That is your entry point into CI/CD.&lt;/p&gt;

&lt;h2&gt;
  
  
  CI/CD – Continuous Integration, Continuous Delivery
&lt;/h2&gt;

&lt;p&gt;You shouldn’t view &lt;a href="https://gitprotect.io/blog/exploring-best-practices-and-modern-trends-in-ci-cd/" rel="noopener noreferrer"&gt;CI/CD&lt;/a&gt; as a feature but as a habit. That means every commit becomes a release candidate, so code quality is validated constantly. In practice, you stop fearing deployment day. The reason is you’re deploying every day.&lt;/p&gt;

&lt;p&gt;Azure Pipelines bolsters this lifecycle by treating your infrastructure as code. You can define pipeline steps declaratively and manage environmental variables, script arguments, and secrets like any other version-controlled asset.&lt;/p&gt;

&lt;p&gt;Further, the code delivery can be faster because you’re confident in the progress, not just the result.&lt;/p&gt;

&lt;h2&gt;
  
  
  Build, test, and deploy
&lt;/h2&gt;

&lt;p&gt;A mature DevOps lifecycle has three pillars – build, test, and deploy. They are hardwired into Azure Pipelines. Based on the success criteria, the pipeline supports:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;test runners&lt;/li&gt;
&lt;li&gt;quality gates&lt;/li&gt;
&lt;li&gt;conditional deployments&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Automated unit, integration, and UI tests run on isolated agents. You can configure the pipeline to run multiple jobs in parallel, shaving minutes off build times and providing feedback faster.&lt;/p&gt;

&lt;p&gt;This isn’t a nice-to-have. The build, test, and deploy approach is a survival tactic for teams that ship frequently.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fede8fxqfd3054ti8kxjy.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fede8fxqfd3054ti8kxjy.jpg" alt="Azure pipelined buid test" width="711" height="532"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Native container support
&lt;/h2&gt;

&lt;p&gt;Containerized builds are a native feature in Azure Pipelines. They allow you to build Docker images and push them to the Azure Container Registry, as well as deploy containers to Kubernetes clusters or App Services.&lt;/p&gt;

&lt;p&gt;Using containers removes variability between build environments. You no longer have to worry about a dependency working locally but failing in production. The entire build and runtime environment becomes versioned and repeatable.&lt;/p&gt;

&lt;p&gt;Of course, pipeline steps can also run inside containers, reducing toolchain conflicts and enabling easier reuse across projects.&lt;/p&gt;
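&lt;p&gt;A typical build-and-push step uses the Docker@2 task; the service connection and repository names below are placeholders:&lt;/p&gt;

```yaml
steps:
  - task: Docker@2
    inputs:
      containerRegistry: 'myAcrServiceConnection'  # placeholder service connection
      repository: 'myapp'
      command: 'buildAndPush'
      Dockerfile: '**/Dockerfile'
      tags: |
        $(Build.BuildId)
```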

&lt;h2&gt;
  
  
  Advanced workflows
&lt;/h2&gt;

&lt;p&gt;No complex pipeline workflow is built from scratch. It evolves. Azure Pipelines support advanced workflows that include:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;matrix builds&lt;/li&gt;
&lt;li&gt;conditional stages&lt;/li&gt;
&lt;li&gt;custom templates&lt;/li&gt;
&lt;li&gt;reusable pipeline script (code)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;You can create deployment strategies with canary releases and blue-green deployments. Another option is rolling updates. Integrating external services from the Azure DevOps Marketplace or calling REST APIs from tasks is no problem.&lt;/p&gt;

&lt;p&gt;More importantly, you can embed &lt;a href="https://gitprotect.io/blog/azure-devops-security-best-practices/" rel="noopener noreferrer"&gt;security&lt;/a&gt; at every stage, from verifying the integrity of the source code to scanning container images and managing secrets with Key Vault.&lt;/p&gt;

&lt;h2&gt;
  
  
  Security – a critical oversight
&lt;/h2&gt;

&lt;p&gt;While the Azure Pipelines engine enables powerful automation, it also introduces risk, especially when CI/CD definitions live only within the Azure DevOps organization and are tied to a specific cloud-hosted target environment (cloud build).&lt;/p&gt;

&lt;p&gt;The YAML file defining your pipelines is typically stored in your repos and benefits from version control.&lt;/p&gt;

&lt;p&gt;However, the pipeline execution context – agent pools, service connections, secrets, and run history – is outside the repository scope.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;That’s where a solution like GitProtect for Azure DevOps becomes handy.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;It ensures that the repositories containing your &lt;strong&gt;YAML pipelines, scripts, and deployment configurations&lt;/strong&gt; are continuously secured.&lt;/p&gt;

&lt;h2&gt;
  
  
  GitProtect for the target environment
&lt;/h2&gt;

&lt;p&gt;The backup tool also captures organization-level data, such as permissions, repositories, and metadata. Thus, if something goes wrong, you can restore critical access structures and codebases.&lt;/p&gt;

&lt;p&gt;In other words, GitProtect ensures the &lt;strong&gt;infrastructure-as-code backbone&lt;/strong&gt; of your DevOps is versioned, encrypted, and recoverable.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fr7omqtwcp57p2zyutojf.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fr7omqtwcp57p2zyutojf.jpg" alt="security threats" width="783" height="760"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Let’s wrap it up – protecting your pipeline infrastructure
&lt;/h2&gt;

&lt;p&gt;The CI/CD process doesn’t begin at runtime, and it doesn’t end there. While secrets, access tokens, and container registries often get the attention they deserve, the foundations, like the repositories, permissions, and pipeline code, are just as critical.&lt;/p&gt;

&lt;p&gt;A compromised or lost repository containing your pipeline YAML definitions is not just an inconvenience. It’s a failure of reproducibility.&lt;/p&gt;

&lt;p&gt;In most Azure DevOps workflows, the pipeline logic is stored directly in the version control system, alongside application code. That makes &lt;a href="https://gitprotect.io/azure-devops-backup.html" rel="noopener noreferrer"&gt;Azure repository backup&lt;/a&gt; a necessary condition for safeguarding CI/CD integrity.&lt;/p&gt;

&lt;p&gt;However, relying solely on Git itself as your safety net ignores real-world failure modes: accidental deletion, permission misconfiguration, and malicious tampering.&lt;/p&gt;

&lt;h2&gt;
  
  
  GitProtect in action
&lt;/h2&gt;

&lt;p&gt;This is where we can mention GitProtect for Azure DevOps again, as it’s increasingly relevant. It provides automated and encrypted backups of cloud-hosted pipelines and other critical components:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;source code repos (including pipeline YAML files and scripts)&lt;/li&gt;
&lt;li&gt;repository-level metadata&lt;/li&gt;
&lt;li&gt;access permissions&lt;/li&gt;
&lt;li&gt;wikis and documentation linked to code projects&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In other words, it secures the code-defined layer of your pipeline infrastructure. If your DevOps process is truly versioned as code, then that code and its underlying infrastructure need proper backup, audit, and restore capabilities.&lt;/p&gt;

&lt;p&gt;By backing up the repositories where your pipelines exist, GitProtect helps maintain the reproducibility and recoverability of your CI/CD workflows. It does this even when your Azure DevOps environment encounters human error or external disruption.&lt;/p&gt;

&lt;p&gt;✍️ Subscribe to &lt;a href="https://gitprotect.io/gitprotect-newsletter.html?utm_source=d&amp;amp;utm_medium=m" rel="noopener noreferrer"&gt;GitProtect DevSecOps X-Ray Newsletter&lt;/a&gt; – your guide to the latest DevOps &amp;amp; security insights&lt;/p&gt;

&lt;p&gt;🚀 Ensure compliant &lt;a href="https://gitprotect.io/sign-up.html?utm_source=d&amp;amp;utm_medium=m" rel="noopener noreferrer"&gt;DevOps backup and recovery with a 14-day free trial&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;📅 Let’s discuss your needs and &lt;a href="https://calendly.com/d/3s9-n9z-pgc/gitprotect-live-demo?month=2024-04&amp;amp;utm_source=d&amp;amp;utm_medium=m" rel="noopener noreferrer"&gt;see a live product tour&lt;/a&gt;&lt;/p&gt;

</description>
      <category>devops</category>
      <category>azuredevops</category>
      <category>coding</category>
    </item>
    <item>
      <title>How to Prevent Backup-related Throttling Without Losing Data (or Mind)</title>
      <dc:creator>GitProtect Team</dc:creator>
      <pubDate>Mon, 24 Nov 2025 09:55:18 +0000</pubDate>
      <link>https://forem.com/gitprotect/how-to-prevent-backup-related-throttling-without-losing-data-or-mind-3k4k</link>
      <guid>https://forem.com/gitprotect/how-to-prevent-backup-related-throttling-without-losing-data-or-mind-3k4k</guid>
      <description>&lt;p&gt;&lt;strong&gt;Consider that your backup is running smoothly. Your dashboards are green. The DevOps team is sleeping peacefully. And yet, behind the calm surface, something is happening. Your API limits are being chewed up, call by call, until you’re throttled into silence. Suddenly, your system stalls – quietly and invisibly. The irony is, you build a backup system for resilience. Now, it’s the vulnerability.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;There’s a quiet assumption built into most backup systems: throw enough bandwidth, retries, and threads at the problem, and it’ll resolve itself. In &lt;a href="https://gitprotect.io/blog/measuring-devops-success-the-metrics-that-matter/" rel="noopener noreferrer"&gt;DevOps&lt;/a&gt;, that assumption is naive. Dangerous, even.&lt;/p&gt;

&lt;h2&gt;
  
  
  Throttling in DevOps is not just a glitch
&lt;/h2&gt;

&lt;p&gt;Why is it dangerous? Because every DevOps tool communicates through rate-limited APIs, whether GitHub, GitLab, Bitbucket, or &lt;a href="https://gitprotect.io/blog/is-azure-devops-down-how-to-ensure-resilience/" rel="noopener noreferrer"&gt;Azure DevOps&lt;/a&gt;, and there is no way around those limits.&lt;/p&gt;

&lt;p&gt;SaaS vendors aren’t being punitive when they impose API limits. They protect their infrastructure, maintain availability, and shield users from abuse. The problem arises when a backup solution interacts with these systems as if it owns the place, all the more so when it is “unaware” of API hygiene.&lt;/p&gt;

&lt;p&gt;So, consider a backup run that blindly pulls:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;metadata&lt;/li&gt;
&lt;li&gt;repositories&lt;/li&gt;
&lt;li&gt;pull requests&lt;/li&gt;
&lt;li&gt;webhooks&lt;/li&gt;
&lt;li&gt;configurations&lt;/li&gt;
&lt;li&gt;comments&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdyneuqoh9a4kiwhpzxqu.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdyneuqoh9a4kiwhpzxqu.jpg" alt="backup throttling" width="800" height="530"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;All of this without pacing itself. The result? From the API’s perspective, such traffic becomes indistinguishable from a &lt;a href="https://gitprotect.io/blog/data-protection-and-backup-predictions-for-2025-and-beyond/" rel="noopener noreferrer"&gt;denial-of-service attack&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Once throttling begins, the dominoes fall fast: &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;backup jobs stall&lt;/li&gt;
&lt;li&gt;critical configurations aren’t captured&lt;/li&gt;
&lt;li&gt;version histories become fragmented&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Then the false assumption that everything is safe and restorable shatters.&lt;/p&gt;

&lt;h2&gt;
  
  
  The hidden cost of a throttled backup
&lt;/h2&gt;

&lt;p&gt;You may not notice it immediately. The logs will appear deceptively normal, with a few skipped objects or a slight delay here and there. And yet the cost reveals itself over time.&lt;/p&gt;

&lt;p&gt;Then, you try to reverse-engineer the cause of the failure. It turns out that a silent, unreported throttling event sabotaged the entire chain. From a business perspective, it’s not a bug but an existential risk.&lt;/p&gt;

&lt;p&gt;The issue is not merely technical debt. &lt;strong&gt;It can lead to reputational damage, especially when a disaster recovery test fails.&lt;/strong&gt; There’s an operational cost when the dev team has to manually reconfigure environments. And there’s audit exposure when a review reveals that your backup missed the repository containing your compliance scripts.&lt;/p&gt;

&lt;h2&gt;
  
  
  How innovative systems avoid the trap
&lt;/h2&gt;

&lt;p&gt;Effective throttling prevention is an architectural approach rather than a reactive one. It begins with backup engines deeply integrated with API rate limits, rather than merely aware of them. Such systems don’t brute-force their way into &lt;a href="https://gitprotect.io/blog/the-ultimate-guide-to-saas-backups-for-devops/" rel="noopener noreferrer"&gt;SaaS platforms&lt;/a&gt;. They converse: they interpret headers, observe quotas, and anticipate consequences.&lt;/p&gt;

&lt;p&gt;This manifests, among other things, through dynamic pacing. A well-designed system reads the room. It doesn’t assume the same call rate will work at 2 a.m. and 10 a.m. The solution monitors the API’s responses, notes the Retry-After headers, and adapts its behavior without human intervention.&lt;/p&gt;

&lt;p&gt;If it notices pressure, it backs off. When a specific endpoint is rate-limited, the flow is redirected to other tasks that can be performed in parallel without exceeding quotas.&lt;/p&gt;
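&lt;p&gt;As a minimal illustration (not GitProtect’s actual implementation), the dynamic pacing described above can be sketched as a small helper that turns GitHub-style rate-limit response headers – &lt;code&gt;Retry-After&lt;/code&gt;, &lt;code&gt;X-RateLimit-Remaining&lt;/code&gt;, and &lt;code&gt;X-RateLimit-Reset&lt;/code&gt; are the documented header names – into a wait time before the next call:&lt;/p&gt;

```python
import time

def pacing_delay(headers, min_remaining=50, base_delay=0.5):
    """Pick a delay before the next API call based on GitHub-style
    rate-limit response headers."""
    # An explicit Retry-After means the server is already throttling us:
    # honor it exactly instead of guessing.
    if "Retry-After" in headers:
        return float(headers["Retry-After"])
    remaining = int(headers.get("X-RateLimit-Remaining", min_remaining))
    reset_at = float(headers.get("X-RateLimit-Reset", 0))
    if remaining < min_remaining:
        # Quota is running low: spread the calls we have left
        # evenly over the time until the window resets.
        window = max(reset_at - time.time(), 1.0)
        return window / max(remaining, 1)
    return base_delay  # plenty of quota left: keep the normal cadence
```

&lt;p&gt;The thresholds and the base delay are illustrative; the point is that the delay is derived from what the API reports, not from a fixed schedule.&lt;/p&gt;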

&lt;p&gt;&lt;strong&gt;Some systems, such as GitProtect, take this further and utilize well-designed job orchestration.&lt;/strong&gt; Backup threads are distributed based on understanding the API’s cost per object type, rather than relying on a simplistic “repository-per-thread” logic. &lt;/p&gt;

&lt;p&gt;A repo with massive commit histories (or one linked to &lt;a href="https://gitprotect.io/blog/exploring-best-practices-and-modern-trends-in-ci-cd/" rel="noopener noreferrer"&gt;CI workflows&lt;/a&gt;) is treated differently from a dormant wiki. Such an approach reduces unnecessary API pressure and ensures that backups don’t trip over themselves even at scale.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://xopero.com/blog/en/full-copy-incremental-copy-and-differential-copy-backup-types/" rel="noopener noreferrer"&gt;Incremental&lt;/a&gt; logic also plays a crucial role. Well-developed systems track state, not full sweeps that trigger abuse mechanisms. It’s about monitoring commit deltas, webhook triggers, and change stamps. Such an approach reduces API usage and dramatically shortens backup windows. That creates a more sustainable rhythm harmonizing with the platform’s operational limits.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fu3c3bxttykjsqplx0mbc.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fu3c3bxttykjsqplx0mbc.jpg" alt="personal token vs app token" width="800" height="528"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Authentication method and credential distribution
&lt;/h2&gt;

&lt;p&gt;One factor is barely mentioned, yet it derails more backup projects than experts are willing to admit: the way API authentication works. Next to bandwidth, schedule, or volume, the limiting factor may simply be the damn token.&lt;/p&gt;

&lt;p&gt;If your system uses a personal access token or a user’s OAuth token to communicate with GitHub’s API, you might already be on thin ice, because you’ve tied your entire backup workload to that (human) user’s rate limit. One well-timed backup sweep, and you’ve burned through their quota.&lt;/p&gt;

&lt;p&gt;Meanwhile, that developer can’t push code or trigger a CI run. They probably can’t even click around in the interface without getting rate-limited.&lt;/p&gt;

&lt;p&gt;You can also use the GitHub App method. And that’s a completely different story, as it:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;is scoped to the organization&lt;/li&gt;
&lt;li&gt;has broader limits&lt;/li&gt;
&lt;li&gt;doesn’t hijack anyone’s personal quota&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;It resembles running backups from the service entrance, not the front door: you’re not clogging up the hallway. That difference matters at scale. &lt;strong&gt;When backing up dozens or hundreds of repos, the traffic adds up quickly. Pushing all of it through a single credential, even a generous one, is a gamble.&lt;/strong&gt;&lt;/p&gt;
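&lt;p&gt;The quota math makes the difference concrete. GitHub’s documented baseline for an authenticated user is 5,000 requests per hour, while a GitHub App installation draws on its own, typically larger, organization-scoped pool. The helper below is a rough feasibility check (the calls-per-repo figure is purely illustrative):&lt;/p&gt;

```python
def sweep_fits_quota(num_repos, calls_per_repo, quota_per_hour):
    """Rough check: can one backup sweep finish inside a single
    hourly quota window without tripping the rate limiter?"""
    return num_repos * calls_per_repo <= quota_per_hour
```

&lt;p&gt;Assuming ~40 calls per repository, 100 repos fit inside a 5,000-per-hour personal quota, but 200 do not – which is exactly the point where a single personal token starts starving its owner.&lt;/p&gt;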

&lt;p&gt;Unless your system “knows” how to:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;split the load&lt;/li&gt;
&lt;li&gt;distribute jobs across multiple credentials&lt;/li&gt;
&lt;li&gt;route around the throttle points&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The above may seem like a hack, but it’s a strategy. The point is not merely staying under the limit; the goal is to avoid sabotaging your devs. &lt;strong&gt;Instead of being a burden, backups should be tools available when needed, without the API pushing back.&lt;/strong&gt;&lt;/p&gt;
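&lt;p&gt;The credential-distribution step in the list above can be sketched as a hypothetical scheduler that spreads jobs round-robin across a token pool, so no single credential absorbs the entire API load:&lt;/p&gt;

```python
from itertools import cycle

def assign_jobs(jobs, tokens):
    """Distribute backup jobs across several credentials round-robin.
    Each token then consumes only its share of the total API traffic."""
    pool = cycle(tokens)
    return {job: next(pool) for job in jobs}
```

&lt;p&gt;A production system would weight this by each token’s remaining quota and each job’s expected API cost, but even naive rotation lowers the chance that one credential hits its limit mid-sweep.&lt;/p&gt;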

&lt;h2&gt;
  
  
  What throttling prevention means for business continuity
&lt;/h2&gt;

&lt;p&gt;A system that respects API limits doesn’t just avoid getting blocked. It ensures continuity and successful test restores, so you can bounce back in the event of:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe63ru336xf9c4qxmwrwo.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe63ru336xf9c4qxmwrwo.jpg" alt="Incident Recovery" width="745" height="748"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In stricter terms, that means:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;faster Mean Time To Recover (MTTR)&lt;/li&gt;
&lt;li&gt;fewer gaps in data lineage&lt;/li&gt;
&lt;li&gt;peace of mind when facing auditors&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;It also means you don’t have to inform your DevOps team that they’ve lost a week’s worth of pull requests because the backup system stalled mid-process.&lt;/p&gt;

&lt;p&gt;The overall effect? The operational gains compound. &lt;strong&gt;By not overloading source systems, infrastructure costs drop, and developer frustration fades because restores actually complete.&lt;/strong&gt; From a &lt;a href="https://gitprotect.io/blog/integrating-security-as-code-a-necessity-for-devsecops/" rel="noopener noreferrer"&gt;security perspective&lt;/a&gt;, you also reduce the attack surface, because your backup engine isn’t constantly knocking on the API’s door like a spam bot.&lt;/p&gt;

&lt;h2&gt;
  
  
  A final thought
&lt;/h2&gt;

&lt;p&gt;If your current backup solution defines “resilience” as merely retrying failed calls until the quota resets, that’s not resilience. It’s recklessness. Worse, you’re operating on borrowed time if you treat GitHub, GitLab, or Azure DevOps like a dump truck instead of a carefully rationed endpoint.&lt;/p&gt;

&lt;p&gt;The third “if” concerns your &lt;a href="https://gitprotect.io/use-cases/disaster-recovery.html" rel="noopener noreferrer"&gt;disaster recovery&lt;/a&gt; plan. If it hinges on restoring from a system whose operators can’t define, let alone detect, throttling, then calling it a plan is gambling without knowing the basic rules of the game.&lt;/p&gt;

&lt;p&gt;Teams must also acknowledge that backup isn’t about volume but about control applied with precision: respecting the limits imposed by the systems you protect. The banal-sounding conclusion is that if your backup and disaster recovery solution doesn’t demonstrate those qualities, the question is not whether you’ll fail. It’s when.&lt;/p&gt;

&lt;p&gt;✍️ Subscribe to &lt;a href="https://gitprotect.io/gitprotect-newsletter.html?utm_source=d&amp;amp;utm_medium=m" rel="noopener noreferrer"&gt;GitProtect DevSecOps X-Ray Newsletter&lt;/a&gt; – your guide to the latest DevOps &amp;amp; security insights&lt;/p&gt;

&lt;p&gt;🚀 Ensure compliant &lt;a href="https://gitprotect.io/sign-up.html?utm_source=d&amp;amp;utm_medium=m" rel="noopener noreferrer"&gt;DevOps backup and recovery with a 14-day free trial&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;📅 Let’s discuss your needs and &lt;a href="https://calendly.com/d/3s9-n9z-pgc/gitprotect-live-demo?month=2024-04&amp;amp;utm_source=d&amp;amp;utm_medium=m" rel="noopener noreferrer"&gt;see a live product tour&lt;/a&gt;&lt;/p&gt;

</description>
      <category>backup</category>
      <category>devops</category>
      <category>devsecops</category>
      <category>throttling</category>
    </item>
    <item>
      <title>DevOps Threats Unwrapped: Mid-Year Report 2025</title>
      <dc:creator>GitProtect Team</dc:creator>
      <pubDate>Thu, 25 Sep 2025 11:53:25 +0000</pubDate>
      <link>https://forem.com/gitprotect/devops-threats-unwrapped-mid-year-report-2025-1o7c</link>
      <guid>https://forem.com/gitprotect/devops-threats-unwrapped-mid-year-report-2025-1o7c</guid>
      <description>&lt;p&gt;From minor hiccups to full-blown blackouts, the first half of 2025 made it clear that even the most trusted DevOps platforms are not immune to disruption.&lt;/p&gt;

&lt;p&gt;In this ecosystem, every commit, push, and deployment relies on complex systems that, despite their brilliance, are fragile. Like a Jenga tower of integrations, it takes just one wrong move – a misclicked setting, a leaked secret, an API failure – for the whole thing to wobble.&lt;/p&gt;

&lt;p&gt;GitHub now hosts over 100 million users and 420 million repositories. Microsoft Azure DevOps has surpassed 1 billion users worldwide, while GitLab reports 30 million registered users. Bitbucket powers more than 10 million professional teams, with Jira adding millions more to this global ecosystem. But as these platforms grow in size and complexity, avoiding outages or human error becomes increasingly difficult. At this scale, with top global brands relying on them, security breaches and increasingly sophisticated cyber threats are no longer a possibility, but a certainty.&lt;/p&gt;

&lt;p&gt;DevOps Threats Unwrapped: Mid-Year Report by GitProtect examines the threats of H1 2025, focusing on unplanned outages, attacks, and silent mistakes with severe consequences. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Key insights:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Azure DevOps recorded a total of &lt;strong&gt;74 incidents&lt;/strong&gt;, including one of the &lt;strong&gt;longest-lasting performance degradations&lt;/strong&gt; that spanned &lt;strong&gt;159 hours&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;European users were particularly affected, accounting for &lt;strong&gt;34%&lt;/strong&gt; of all incidents on Azure DevOps.&lt;/li&gt;
&lt;li&gt;GitHub saw a &lt;strong&gt;58% year-over-year increase&lt;/strong&gt; in the number of incidents, reaching &lt;strong&gt;109 reported cases&lt;/strong&gt;:
&lt;ul&gt;
&lt;li&gt;17 of them were classified as major, leading to &lt;strong&gt;over 100 hours of total disruption&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;April stood out as the most turbulent month, with incidents accumulating to &lt;strong&gt;330 hours and 6 minutes&lt;/strong&gt;.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;GitLab patched &lt;strong&gt;65 vulnerabilities&lt;/strong&gt; and faced &lt;strong&gt;59 incidents&lt;/strong&gt;, resulting in approximately &lt;strong&gt;1,346 hours of service disruption&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;Bitbucket experienced &lt;strong&gt;22 incidents&lt;/strong&gt; of varying impact, which together lasted more than &lt;strong&gt;168 hours&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;Jira reported &lt;strong&gt;over 2,390 hours of cumulative downtime&lt;/strong&gt; across its ecosystem – that’s nearly &lt;strong&gt;100 full days&lt;/strong&gt; of service disruption.&lt;/li&gt;
&lt;li&gt;A total of &lt;strong&gt;330 incidents&lt;/strong&gt; impacted DevOps platforms in the first half of 2025.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If your DevOps pipeline is the heart of your organization’s innovation, consider these your warning signs. &lt;/p&gt;

&lt;h2&gt;
  
  
  Azure DevOps
&lt;/h2&gt;

&lt;p&gt;In the first half of the year, Azure DevOps experienced a total of &lt;strong&gt;74 incidents&lt;/strong&gt;, including 3 advisory cases and 71 incidents of degraded performance.&lt;/p&gt;

&lt;p&gt;Some incidents impacted multiple components at the same time, while others affected only a single component. For instance, one outage could disrupt Pipelines, Boards, Repos, and Test Plans simultaneously. In our methodology, such an event is counted as one incident overall, even if it influenced several services. Within this total, the components were affected the following number of times:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Pipelines: affected 31 times (21%) – the most unstable component&lt;/li&gt;
&lt;li&gt;Boards: affected 28 times (19%)&lt;/li&gt;
&lt;li&gt;Test Plans: affected 28 times (19%)&lt;/li&gt;
&lt;li&gt;Repos: affected 27 times (18%)&lt;/li&gt;
&lt;li&gt;Core Services: affected 16 times (11%)&lt;/li&gt;
&lt;li&gt;Other services: affected 15 times (10%)&lt;/li&gt;
&lt;li&gt;Artifacts: affected 6 times (4%)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5vio754yb9fpv0s74vho.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5vio754yb9fpv0s74vho.png" alt="Azure DevOps incidents in H1 2025" width="768" height="691"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In January 2025, Azure DevOps users worldwide faced one of the &lt;a href="https://status.dev.azure.com/_event/591852339" rel="noopener noreferrer"&gt;longest-lasting performance&lt;/a&gt; degradations on record – &lt;strong&gt;a 159-hour disruption&lt;/strong&gt; that severely impacted pipeline functionality. For almost a week, users trying to create Managed DevOps Pools within new subscriptions without existing pools experienced persistent failures. These attempts repeatedly timed out with the provisioning error: &lt;em&gt;“The resource write operation failed to complete successfully, because it reached terminal provisioning state ‘Canceled’.”&lt;/em&gt; The issue led to delays in builds, deployments, and onboarding processes across affected environments, highlighting the operational risks tied to large-scale platform dependencies.&lt;/p&gt;

&lt;p&gt;Another serious security challenge for Microsoft Azure DevOps in 2025 was the discovery of &lt;a href="https://cybersecuritynews.com/multiple-azure-devops-vulnerabilities/" rel="noopener noreferrer"&gt;multiple critical vulnerabilities&lt;/a&gt;, including SSRF and CRLF injection flaws within the endpointproxy and Service Hooks components. These vulnerabilities could be exploited to carry out DNS rebinding attacks and allow unauthorized access to internal services. Such attacks present significant risks in cloud environments, including data leakage and potential theft of access tokens. In response, Microsoft released security patches and awarded a $15,000 bug bounty to the researchers who discovered the issues.&lt;/p&gt;

&lt;p&gt;Additionally, it’s worth noting that &lt;strong&gt;European customers experienced a higher number of incidents&lt;/strong&gt; – 27 incidents, representing roughly 34% of all incidents. In contrast, Azure DevOps users in India and Australia reported the fewest incidents of degraded performance, accounting for only 4% of all incidents.&lt;/p&gt;

&lt;h2&gt;
  
  
  GitHub
&lt;/h2&gt;

&lt;p&gt;GitHub experienced a significant &lt;strong&gt;58% rise&lt;/strong&gt; in incidents during the first half of 2025, jumping from &lt;strong&gt;69 cases&lt;/strong&gt; in H1 2024 to &lt;strong&gt;109&lt;/strong&gt; this year. &lt;/p&gt;

&lt;p&gt;Among this year’s incidents, 17 were classified as major, causing over &lt;strong&gt;100 hours of total disruption&lt;/strong&gt;. That’s enough time to run over 1,000 CI/CD pipelines from start to finish or binge-watch the entire Marvel Cinematic Universe.&lt;/p&gt;

&lt;p&gt;Seventy-eight incidents (72%) had a minor impact.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4zp6q7bjad9iatcfharn.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4zp6q7bjad9iatcfharn.png" alt="GitHub incidents H1 2025" width="768" height="691"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;May recorded the highest number of incidents, with 23 reported cases, while April saw the longest cumulative incident duration, totaling 330 hours and 6 minutes.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;In the first half of 2025, GitHub Actions emerged as the most affected component, with 17 incidents, including a major disruption in May that lasted 5 hours. The outage, caused by a backend caching misconfiguration, delayed nearly 20% of Ubuntu-24 hosted runner jobs in public repos. This could have slowed down development cycles, impacted release schedules, and reduced productivity for teams relying on GitHub Actions for continuous integration. GitHub resolved the issue by redeploying components and scaling resources, and committed to improving failover resilience going forward. &lt;/p&gt;

&lt;p&gt;Meanwhile, attackers actively exploited GitHub to spread malware. Among the most notable malware campaigns noticed during this time were Amadey, Octalyn Stealer, AsyncRAT, ZeroCrumb, and Neptune RAT. &lt;/p&gt;

&lt;h2&gt;
  
  
  GitLab
&lt;/h2&gt;

&lt;p&gt;In the first half of 2025, GitLab patched &lt;strong&gt;65 vulnerabilities&lt;/strong&gt; of varying severity, marking a slight decrease from the 70 vulnerabilities disclosed during the same period in 2024.&lt;/p&gt;

&lt;p&gt;During this time, GitLab also experienced &lt;strong&gt;59 incidents&lt;/strong&gt;, totaling approximately &lt;strong&gt;1,346 hours of disruption&lt;/strong&gt;. These included partial service disruptions (20 incidents – 34%) and degraded performance (17 – 29%), followed by operational issues (10 incidents – 17%), &lt;strong&gt;full service outages&lt;/strong&gt; (&lt;strong&gt;7 incidents, adding up to over 19 hours of downtime – 12%&lt;/strong&gt;), and planned maintenance (5 incidents – 8%).&lt;/p&gt;

&lt;p&gt;The &lt;a href="https://status.gitlab.com/pages/incident/5b36dc6502d06804c08349f7/684924623a6a9a05d1226eb7" rel="noopener noreferrer"&gt;longest service disruption&lt;/a&gt; lasted for over 4 hours and was caused by issues related to a specific worker and traffic saturation that affected the primary database. This incident led to 503 errors and impacted the availability of GitLab.com services for users.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fu54c8z0ii9lkk7ivbqoi.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fu54c8z0ii9lkk7ivbqoi.png" alt="GitLab incidents H1 2025" width="800" height="720"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;One of the most notable incidents involved a data breach at &lt;a href="https://www.bleepingcomputer.com/news/security/europcar-gitlab-breach-exposes-data-of-up-to-200-000-customers/" rel="noopener noreferrer"&gt;Europcar Mobility Group&lt;/a&gt;. The attackers infiltrated GitLab repositories and stole source code for Android and iOS applications, along with personal information of up to 200,000 customers.&lt;/p&gt;

&lt;h2&gt;
  
  
  Atlassian
&lt;/h2&gt;

&lt;h2&gt;
  
  
  Bitbucket
&lt;/h2&gt;

&lt;p&gt;In H1 2025, Bitbucket experienced &lt;strong&gt;22 incidents&lt;/strong&gt; with varying severity, resulting in &lt;strong&gt;over 168 hours of downtime&lt;/strong&gt;. Among these, 2 critical incidents lasted for over 4 hours, impacting key services including Website, API, Git via SSH, Authentication and user management, Webhooks, Source downloads, Pipelines, Git LFS, and more.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2q5s34v3xpe13qsr47no.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2q5s34v3xpe13qsr47no.png" alt="Bitbucket incidents H1 2025" width="800" height="720"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;January 2025 proved to be one of the most challenging months for Bitbucket, highlighted by a major outage widely reported on DownDetector. For 3 hours and 47 minutes, access to &lt;a href="https://www.bleepingcomputer.com/news/technology/bitbucket-services-hard-down-due-to-major-worldwide-outage/" rel="noopener noreferrer"&gt;Bitbucket Cloud’s&lt;/a&gt; website, APIs, and pipelines was completely unavailable, disrupting developer workflows worldwide.&lt;/p&gt;

&lt;h2&gt;
  
  
  Jira
&lt;/h2&gt;

&lt;p&gt;Atlassian’s Jira ecosystem – including Jira, Jira Service Management, Jira Work Management, and Jira Product Discovery – experienced &lt;strong&gt;66 incidents&lt;/strong&gt; in the first half of 2025, marking a &lt;strong&gt;24%&lt;/strong&gt; increase compared to H1 2024. Altogether, these disruptions added up to more than &lt;strong&gt;2,390 hours&lt;/strong&gt;, or nearly &lt;strong&gt;100 full days&lt;/strong&gt; of downtime.&lt;/p&gt;

&lt;p&gt;Much of this downtime resulted from a prolonged maintenance period that began in mid-March and extended through the end of May. As a consequence, users of the Free edition of Jira services, particularly those located in Singapore and Northern California, may have experienced outages lasting up to 120 minutes per customer.&lt;/p&gt;

&lt;p&gt;As mentioned, Jira recorded 66 unique incidents. Some of these incidents impacted multiple products simultaneously. The numbers below reflect total disruptions per product rather than unique events.&lt;/p&gt;

&lt;p&gt;Jira users were the most affected, with &lt;strong&gt;52 disruptions&lt;/strong&gt; (39% of all Jira-related service impacts). Jira Service Management followed with 46 disruptions (35%), while Jira Work Management experienced 24 (18%). Jira Product Discovery had the fewest, with 11 disruptions (8%).&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffbop2b5fwtn8dp2bixvl.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffbop2b5fwtn8dp2bixvl.png" alt="Jira incidents H1 2025" width="800" height="720"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Jira was also at the center of several notable incidents in early 2025. One of the most concerning involved a string of ransomware attacks carried out by the HellCat group. The attackers developed a playbook for infiltrating organizations via Atlassian Jira instances, using stolen credentials to gain access. High-profile victims included &lt;a href="https://www.bleepingcomputer.com/news/security/telefonica-confirms-internal-ticketing-system-breach-after-data-leak/" rel="noopener noreferrer"&gt;Telefónica&lt;/a&gt;, &lt;a href="https://www.bleepingcomputer.com/news/security/orange-group-confirms-breach-after-hacker-leaks-company-documents/" rel="noopener noreferrer"&gt;Orange Group&lt;/a&gt;, &lt;a href="https://cybersecuritynews.com/jaguar-land-rover-breached-by-hellcat/" rel="noopener noreferrer"&gt;Jaguar Land Rover&lt;/a&gt;, &lt;a href="https://hackread.com/hellcat-ransomware-firms-infostealer-stolen-jira-credentials/" rel="noopener noreferrer"&gt;Asseco Poland&lt;/a&gt;, &lt;a href="https://hackread.com/hellcat-ransomware-firms-infostealer-stolen-jira-credentials/" rel="noopener noreferrer"&gt;HighWire Press (USA)&lt;/a&gt;, &lt;a href="https://hackread.com/hellcat-ransomware-firms-infostealer-stolen-jira-credentials/" rel="noopener noreferrer"&gt;Racami (USA)&lt;/a&gt;, and LeoVegas Group (Sweden).&lt;/p&gt;

&lt;h2&gt;
  
  
  Over 300 incidents in the DevOps ecosystem
&lt;/h2&gt;

&lt;p&gt;In total, 330 incidents of various severity levels were recorded across the major code hosting and collaboration platforms, ranging from Azure DevOps to Jira.&lt;/p&gt;

&lt;p&gt;Here’s how the incident breakdown looks across platforms:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;GitHub – 33% of all incidents&lt;/li&gt;
&lt;li&gt;Azure DevOps – 22%&lt;/li&gt;
&lt;li&gt;GitLab – 18%&lt;/li&gt;
&lt;li&gt;Jira platform tools (Jira, JWM, JSM, JPD) – 20%&lt;/li&gt;
&lt;li&gt;Bitbucket – 7%&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F96dgnasbyjmkncfyjxmj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F96dgnasbyjmkncfyjxmj.png" alt="Mid-year threats report 2025" width="800" height="720"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;While not all of them caused full outages, the scale and distribution of these events reveal a lot about where the ecosystem’s weak points might be. Whether it’s version control, issue tracking, or CI/CD pipelines – these interruptions remind us that even the most popular platforms face reliability challenges. And for teams building software at scale, stability can no longer be taken for granted.&lt;/p&gt;

&lt;h2&gt;
  
  
  When Dev Platforms Go Down
&lt;/h2&gt;

&lt;p&gt;DevOps teams can’t afford to wait passively when their core tools go down. Proactive backup, well-defined contingency plans, and flexible workflows make the difference between delivery and recovery.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Back it up&lt;/strong&gt;. Use automated backups for code, pipelines, issues, and boards. &lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Work local&lt;/strong&gt;. Ensure the ability to code with local clones and offline workflows.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Mirror critical repos&lt;/strong&gt;. Redundancy across platforms (e.g., GitHub ↔ GitLab) keeps projects moving.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Review and adapt&lt;/strong&gt;. After any outage, run a quick post-mortem. Improve what didn’t work.&lt;/li&gt;
&lt;/ul&gt;
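&lt;p&gt;The mirroring bullet above boils down to two standard git operations – a bare mirror clone, then a mirror push. The sketch below (repository URLs are placeholders) just builds those commands; a scheduled job would run them via a process runner:&lt;/p&gt;

```python
def mirror_commands(src_url, dest_url, workdir="mirror.git"):
    """Return the two git invocations that keep dest_url as an exact
    mirror of src_url: clone all refs into a bare repo, then push
    them all to the secondary platform."""
    return [
        ["git", "clone", "--mirror", src_url, workdir],
        ["git", "-C", workdir, "push", "--mirror", dest_url],
    ]
```

&lt;p&gt;On subsequent runs you would replace the clone with &lt;code&gt;git remote update&lt;/code&gt; inside the existing mirror; &lt;code&gt;--mirror&lt;/code&gt; ensures branches, tags, and deletions all propagate.&lt;/p&gt;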

&lt;h2&gt;
  
  
  Methodology note
&lt;/h2&gt;

&lt;p&gt;All data comes from GitProtect’s internal analyses. Percentages may not add up to 100 due to rounding.&lt;/p&gt;

&lt;p&gt;✍️ Subscribe to &lt;a href="https://gitprotect.io/gitprotect-newsletter.html?utm_source=d&amp;amp;utm_medium=m" rel="noopener noreferrer"&gt;GitProtect DevSecOps X-Ray Newsletter&lt;/a&gt; – your guide to the latest DevOps &amp;amp; security insights&lt;/p&gt;

&lt;p&gt;🚀 Ensure compliant &lt;a href="https://gitprotect.io/sign-up.html?utm_source=d&amp;amp;utm_medium=m" rel="noopener noreferrer"&gt;DevOps backup and recovery with a 14-day free trial&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;📅 Let’s discuss your needs and &lt;a href="https://calendly.com/d/3s9-n9z-pgc/gitprotect-live-demo?month=2024-04&amp;amp;utm_source=d&amp;amp;utm_medium=m" rel="noopener noreferrer"&gt;see a live product tour&lt;/a&gt;&lt;/p&gt;

</description>
      <category>devops</category>
      <category>devsecops</category>
      <category>programming</category>
      <category>github</category>
    </item>
    <item>
      <title>The Power of Scheduled Automated Backups for DevOps and SaaS</title>
      <dc:creator>GitProtect Team</dc:creator>
      <pubDate>Thu, 25 Sep 2025 10:42:43 +0000</pubDate>
      <link>https://forem.com/gitprotect/the-power-of-scheduled-automated-backups-for-devops-and-saas-3lp7</link>
      <guid>https://forem.com/gitprotect/the-power-of-scheduled-automated-backups-for-devops-and-saas-3lp7</guid>
      <description>&lt;p&gt;&lt;strong&gt;In 2020, a DevOps team at a mid-sized fintech startup almost lost its entire source code. A failed container update caused a cascading failure in their self-hosted GitLab instance. The backup was… somewhere. No one checked it in weeks. The recovery process took three days. The cost was around $70,000 in downtime and customer compensation.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The event wasn’t a matter of not having a &lt;a href="https://gitprotect.io/blog/simplifying-developer-workflows-how-effective-backup-strategy-reduces-cognitive-load/" rel="noopener noreferrer"&gt;backup strategy&lt;/a&gt;. It was a matter of assuming someone, somewhere, had run the right job at the right time. In this case, no one had.&lt;/p&gt;

&lt;h2&gt;
  
  
  The case for scheduling. Why manual backups don’t scale
&lt;/h2&gt;

&lt;p&gt;Nowadays, manual backups resemble trying to pull a horse cart onto a Formula 1 track. All the more so in DevOps. In short, they belong to another era.&lt;/p&gt;

&lt;p&gt;Development pipelines usually evolve hourly. New repos are spun up during sprints. Secrets rotate and workflows change. No matter how much caffeine you provide, even the most focused engineer can’t keep up with the constant, often rapid changes in the &lt;a href="https://gitprotect.io/blog/measuring-devops-success-the-metrics-that-matter/" rel="noopener noreferrer"&gt;DevOps environment&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;New repositories, updated pipelines, rotated secrets, and shifting permissions – these changes can occur multiple times a day, often across distributed teams. And so, manual oversight simply can’t scale.&lt;/p&gt;

&lt;p&gt;Scheduled, automated backups replace ad-hoc actions with digital discipline. They ensure that source code, metadata, configurations, and secrets are preserved continuously, not retroactively. And it’s not about convenience but rather data and business continuity.&lt;/p&gt;

&lt;p&gt;Backup scheduling frameworks allow teams to define frequency, scope, and logic:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;hourly backups of active projects&lt;/li&gt;
&lt;li&gt;daily full backups of the production environment&lt;/li&gt;
&lt;li&gt;weekly compression of dormant repositories.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Such schedulers adapt to business hours or maintenance windows. They reduce system load without sacrificing &lt;a href="https://gitprotect.io/blog/secdevops-a-practical-guide-to-the-what-and-the-why/" rel="noopener noreferrer"&gt;backup fidelity&lt;/a&gt;.&lt;/p&gt;
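&lt;p&gt;Expressed as data, such a policy set might look like this – a hypothetical sketch, not any vendor’s actual configuration format:&lt;/p&gt;

```python
# Hypothetical backup policy set expressed as cron entries
# (minute hour day-of-month month day-of-week) -- illustrative only.
SCHEDULE = {
    "active-projects": {"cron": "0 * * * *", "scope": "incremental"},  # hourly
    "production-full": {"cron": "0 2 * * *", "scope": "full"},         # daily, 02:00
    "dormant-archive": {"cron": "0 3 * * 0", "scope": "archive"},      # weekly, Sunday
}

def policies_with_scope(scope):
    """List policy names that produce backups of the given scope."""
    return sorted(name for name, p in SCHEDULE.items() if p["scope"] == scope)
```

&lt;p&gt;Keeping the schedule in one declarative place makes it reviewable and auditable, instead of scattered across scripts and memory.&lt;/p&gt;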

&lt;h2&gt;
  
  
  Compression. A thing for backup efficiency
&lt;/h2&gt;

&lt;p&gt;Let’s assume your DevOps platform generates 100 GB of backup data per week. Now, let’s multiply that amount by 52 (weeks in a year) first, then by the number of environments you manage. Having the final number, consider your storage bill.&lt;/p&gt;

&lt;p&gt;Compression reduces that cost. However, it’s not about lowering storage bills alone. The idea is to:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;speed up transmission&lt;/li&gt;
&lt;li&gt;reduce I/O latency&lt;/li&gt;
&lt;li&gt;make &lt;a href="https://gitprotect.io/blog/become-the-master-of-disaster-disaster-recovery-plan-for-devops/" rel="noopener noreferrer"&gt;restores faster&lt;/a&gt;, especially when seconds count.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;It’s worth underlining that today’s backup tools (“modern ones,” as marketing calls them) implement block-level deduplication, delta encoding, and intelligent compression algorithms. The last term sounds like more marketing wording, but in this case the mechanism doesn’t just shrink data blindly – it:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;analyzes patterns&lt;/li&gt;
&lt;li&gt;removes redundancy&lt;/li&gt;
&lt;li&gt;stores only what has changed or is essential.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In backup, this means faster transfer and smaller storage footprints. Ultimately, this translates into quicker restores. It’s because the system avoids saving the same blocks of files repeatedly. And that’s a more efficient and context-aware solution. A smart one! &lt;/p&gt;
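&lt;p&gt;Block-level deduplication is easy to illustrate: split the data into chunks, hash each chunk, and store a chunk only the first time its hash appears. A toy sketch (real tools use content-defined chunking and far stronger indexing):&lt;/p&gt;

```python
import hashlib
import zlib

def dedup_store(data, chunk_size=4096, store=None):
    """Split data into chunks; keep each unique chunk once, compressed.

    Returns a recipe (ordered list of chunk digests) and the chunk store.
    """
    store = {} if store is None else store
    recipe = []
    for i in range(0, len(data), chunk_size):
        chunk = data[i:i + chunk_size]
        digest = hashlib.sha256(chunk).hexdigest()
        if digest not in store:          # save each unique block only once
            store[digest] = zlib.compress(chunk)
        recipe.append(digest)
    return recipe, store

def restore(recipe, store):
    """Rebuild the original byte stream from the recipe and chunk store."""
    return b"".join(zlib.decompress(store[d]) for d in recipe)
```

&lt;p&gt;Back up the same repository twice and the second run adds almost nothing: every unchanged block hashes to a digest the store already holds.&lt;/p&gt;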

&lt;p&gt;Additionally, these capabilities let a full weekly backup behave like an incremental one: it saves storage space and moves fast in transit while staying precise in content. You get the complete picture instead of a burden.&lt;/p&gt;

&lt;h2&gt;
  
  
  Time, control, and sanity are your team’s gains
&lt;/h2&gt;

&lt;p&gt;From the performance and control perspective, every hour an engineer doesn’t spend verifying backups manually is an hour spent on something more worthwhile. That may sound like a truism, yet scheduled automation reclaims hours that would otherwise go to clicking through logs or manually archiving repos.&lt;/p&gt;

&lt;p&gt;It’s also about clarity. A scheduled system doesn’t forget, doesn’t get sick, distracted, or promoted to another department. It just runs as designed.&lt;/p&gt;

&lt;p&gt;Besides, this is not limited to reducing workload only. Such an environment standardizes expectations. During audits, the team knows where to find historical states. Furthermore, in case of an incident response, they can act instead of searching.&lt;/p&gt;

&lt;p&gt;Most importantly, automated backups restore a sense of psychological safety. That’s not fluff but operational readiness.&lt;/p&gt;

&lt;h2&gt;
  
  
  Monitoring and observability. Know when something goes wrong
&lt;/h2&gt;

&lt;p&gt;Scheduling is only the beginning. A backup that silently fails is worse than no backup at all. Effective automated systems integrate monitoring and alerting. It isn’t restricted to “backup succeeded” banners, but provides deep insights:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;skipped objects&lt;/li&gt;
&lt;li&gt;version mismatches &lt;/li&gt;
&lt;li&gt;retention conflicts&lt;/li&gt;
&lt;li&gt;integrity violations and others.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;REST APIs enable this telemetry to be fed directly into existing SIEM or observability platforms (if applicable). Backup health should be visible in the same observability space as deployment metrics and application logs. If you’re watching your pipeline latency, you should also be watching your backup status.&lt;/p&gt;
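&lt;p&gt;A minimal shape for that kind of check – a hypothetical backup job report turned into alert messages, not any specific vendor’s API:&lt;/p&gt;

```python
def backup_alerts(report):
    """Turn a backup job report (a plain dict) into alert messages.

    An empty list means the job is healthy; anything else should page someone.
    """
    alerts = []
    for obj in report.get("skipped", []):            # objects missed by the job
        alerts.append(f"skipped object: {obj}")
    if not report.get("integrity_ok", False):        # missing or failed check
        alerts.append("integrity check failed")
    if report.get("status") != "success":
        alerts.append(f"job status: {report.get('status')}")
    return alerts
```

&lt;p&gt;Whatever the function returns goes to Slack, email, or the SIEM; silence from a backup job is only trustworthy when it’s been checked.&lt;/p&gt;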

&lt;h2&gt;
  
  
  Best practices for backup automation in DevOps and SaaS environments
&lt;/h2&gt;

&lt;p&gt;Considering best practices in backup automation for &lt;a href="https://gitprotect.io/blog/devops-pillars-top-11-devops-principles/" rel="noopener noreferrer"&gt;DevOps&lt;/a&gt;, the first principle is scope, which refers to what exactly gets backed up for faster and more reliable data recovery.&lt;/p&gt;

&lt;h2&gt;
  
  
  The first: scope
&lt;/h2&gt;

&lt;p&gt;The catch here is not just backing up repositories, but also considering backing up metadata. &lt;a href="https://gitprotect.io/blog/the-ultimate-guide-to-saas-backups-for-devops/" rel="noopener noreferrer"&gt;SaaS platforms&lt;/a&gt; abstract infrastructure. That doesn’t mean, however, that data is disposable.&lt;/p&gt;

&lt;p&gt;In a disaster scenario, restoring a Git repository without its metadata is akin to restoring a WordPress site without its database.&lt;/p&gt;

&lt;h2&gt;
  
  
  The second: isolation
&lt;/h2&gt;

&lt;p&gt;The second thing about best practices is isolation. All backups should (or must) be stored independently of the source environment. If GitHub, Azure DevOps, Bitbucket, or GitLab goes down, you cannot rely on their API to access your saved states. Such a separation is not paranoia but a well-thought-out protocol. Following it, you can still restore your data when the main platform fails. &lt;/p&gt;

&lt;p&gt;Of course, some may react with “duh”, yet reality shows that this seemingly banal fact is not common knowledge – especially among IT experts who treat Git-based SaaS services as an infallible, trustworthy solution.&lt;/p&gt;

&lt;p&gt;You aren’t one of them, are you?&lt;/p&gt;

&lt;h2&gt;
  
  
  The third: policy
&lt;/h2&gt;

&lt;p&gt;The next thing is policy. In this section, define retention rules that align with both compliance and business continuity. Although a thirty-day retention window may satisfy auditors, it may not be sufficient if a misconfiguration corrupted your system six weeks ago.&lt;/p&gt;

&lt;h2&gt;
  
  
  The fourth: testing
&lt;/h2&gt;

&lt;p&gt;It’s the final but equally necessary element. Keep in mind – and practice – that scheduled backups are only as helpful as their restore scenarios. So, schedule not just backups, but the whole drill. The time to discover that your workflow metadata wasn’t included is not after a breach!&lt;/p&gt;

&lt;h2&gt;
  
  
  Real-world integration. How teams do it right
&lt;/h2&gt;

&lt;p&gt;Picture a distributed gaming company. The organization is working with cloud-based GitOps pipelines that face frequent reconfigurations tied to multiple third-party APIs. They scheduled hourly backups of critical YAML files, protected by &lt;a href="https://gitprotect.io/blog/why-immutable-backups-are-essential-for-data-security-in-devops/" rel="noopener noreferrer"&gt;immutable storage&lt;/a&gt;. Every restore was cross-validated against SHA digests and tested in staging.&lt;/p&gt;

&lt;p&gt;Another example may be related to a biotech company. Its regulatory obligations demanded the exact history of repository changes. However, their team didn’t have time to verify manual exports. Automated backups triggered by commit activity ensured that the system captured new objects in real time, while also archiving full snapshots every 24 hours. Zero-touch and fully compliant.&lt;/p&gt;

&lt;p&gt;The examples presented in both cases are neither extravagant nor overly complex. They’re routine, once well-designed automation is in place.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why schedulers alone aren’t enough
&lt;/h2&gt;

&lt;p&gt;Some things must go beyond scripts. Many teams lean on shell scripts to schedule backups via cron. It’s better than nothing, but it’s also a fragile scaffold: hardcoded paths, no integrity verification, and no centralized monitoring.&lt;/p&gt;

&lt;p&gt;In other words, when a script fails silently, no one notices. No one, until they do. That’s why mature solutions:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;integrate with IAM&lt;/li&gt;
&lt;li&gt;offer role-based access&lt;/li&gt;
&lt;li&gt;support &lt;a href="https://gitprotect.io/blog/granular-restore-for-jira-software-github-team-github-v2-project-extended-support/" rel="noopener noreferrer"&gt;granular restore&lt;/a&gt; and more.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;It means that whether you need to restore a deleted repository, a specific pull request comment, or an issue thread, it should be easy and quick.&lt;/p&gt;
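&lt;p&gt;To make the cron-script gap concrete: even a thin wrapper that refuses to fail silently already does better than a bare &lt;code&gt;tar&lt;/code&gt; line in crontab. A sketch with hypothetical paths:&lt;/p&gt;

```python
import hashlib
import subprocess
from pathlib import Path

def run_backup(src, dest):
    """Archive src to dest with tar; raise on any non-zero exit instead of hiding it."""
    subprocess.run(["tar", "czf", str(dest), str(src)], check=True)
    return verify_archive(Path(dest).read_bytes())

def verify_archive(data):
    """Fail loudly: an empty archive means the backup is broken."""
    if not data:
        raise RuntimeError("empty archive")
    return hashlib.sha256(data).hexdigest()  # report this digest to central monitoring
```

&lt;p&gt;It’s still no substitute for a managed system – there’s no retention logic and no restore testing – but at least a broken run produces an error instead of silence.&lt;/p&gt;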

&lt;h2&gt;
  
  
  The GitProtect approach
&lt;/h2&gt;

&lt;p&gt;At this point, it’s worth noting that GitProtect – the Backup and Disaster Recovery system – automates backup scheduling with surgical precision. Its policy engine handles frequency, data coverage, storage targets, and retention – all in a UI or via API. The tool compresses, encrypts, and monitors all needed elements of your dataset. &lt;/p&gt;

&lt;p&gt;And yes. It integrates with DevOps ecosystems like GitHub, GitLab, Bitbucket, Azure DevOps, and Jira. The solution lets you see what matters – even if you forget something, you’ll never lose visibility.&lt;/p&gt;

&lt;p&gt;There’s something to be said for a system that keeps working long after everyone’s forgotten it’s even running. That’s more or less the point with GitProtect. You don’t just line up the schedule and walk away – it keeps things in check. Locks what’s been saved. Ensures that no one, not even those with admin keys, can tamper with the copies once they’re in place. Not by accident, not by design.&lt;/p&gt;

&lt;p&gt;Restoring isn’t boxed in either. If things move from GitHub to GitLab – or &lt;a href="https://gitprotect.io/blog/migrate-gitlab-to-github-how-to-do-it-in-an-efficient-and-data-consistent-way/" rel="noopener noreferrer"&gt;back&lt;/a&gt; – the backups don’t complain. They follow. The data appears where you need it, in the format you left it.&lt;/p&gt;

&lt;p&gt;There’s also a quiet background process that constantly checks the information it stores. Not just counting files or ticking off timestamps. It’s testing whether the backup would work if you had to lean on it. You probably wouldn’t notice unless something’s wrong. That’s the idea.&lt;/p&gt;

&lt;p&gt;And when someone does ask for proof – compliance, audit, or legal – you’re not scrambling. Every step’s been logged already. The system isn’t shouting about it, but the record’s there. Like an invisible notebook someone’s been keeping for you.&lt;/p&gt;

&lt;p&gt;All in all, it’s not flashy. That’s probably the point. It just doesn’t forget. Even when you do.&lt;/p&gt;

&lt;h2&gt;
  
  
  To sum it up
&lt;/h2&gt;

&lt;p&gt;The whole idea behind scheduled automated backups isn’t convenience. Maybe a bit. The main point is data, environment, and thus your company’s resilience. In DevOps and SaaS-driven markets, where downtime is a liability and human errors are a certainty, automation is the baseline, not the bonus.&lt;/p&gt;

&lt;p&gt;Whether your goal is &lt;a href="https://gitprotect.io/blog/security-compliance-best-practices/" rel="noopener noreferrer"&gt;compliance&lt;/a&gt;, continuity, or simply a good night’s sleep, the true power of scheduled backups lies in their quiet reliability. And with a system like GitProtect, that reliability comes built-in. It’s engineered not just to save data, but to save time.&lt;/p&gt;

&lt;p&gt;After all, things move fast. One minute, a repo’s spun up for a quick patch, the next it’s got five contributors and three pipelines hooked into it. In all that motion, expecting someone to remember to hit backup on time (every time) is a gamble. Scheduled automation cuts that risk out. It doesn’t wait on memory, meetings, or who’s on call. It just runs quietly and predictably.&lt;/p&gt;

&lt;p&gt;And in workflows where even a slight misstep can cost a release (in a not-so-bad scenario), that kind of reliability isn’t a luxury. It’s how you stay in the game.&lt;/p&gt;

&lt;p&gt;✍️ Subscribe to &lt;a href="https://gitprotect.io/gitprotect-newsletter.html?utm_source=d&amp;amp;utm_medium=m" rel="noopener noreferrer"&gt;GitProtect DevSecOps X-Ray Newsletter&lt;/a&gt; – your guide to the latest DevOps &amp;amp; security insights&lt;/p&gt;

&lt;p&gt;🚀 Ensure compliant &lt;a href="https://gitprotect.io/sign-up.html?utm_source=d&amp;amp;utm_medium=m" rel="noopener noreferrer"&gt;DevOps backup and recovery with a 14-day free trial&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;📅 Let’s discuss your needs and &lt;a href="https://calendly.com/d/3s9-n9z-pgc/gitprotect-live-demo?month=2024-04&amp;amp;utm_source=d&amp;amp;utm_medium=m" rel="noopener noreferrer"&gt;see a live product tour&lt;/a&gt;&lt;/p&gt;

</description>
      <category>devops</category>
      <category>saas</category>
      <category>devsecops</category>
      <category>backup</category>
    </item>
    <item>
      <title>Measuring DevOps Success: The Metrics That Matter</title>
      <dc:creator>GitProtect Team</dc:creator>
      <pubDate>Thu, 21 Aug 2025 11:58:15 +0000</pubDate>
      <link>https://forem.com/gitprotect/measuring-devops-success-the-metrics-that-matter-imf</link>
      <guid>https://forem.com/gitprotect/measuring-devops-success-the-metrics-that-matter-imf</guid>
      <description>&lt;p&gt;You can’t optimize your DevOps if you don’t track its metrics. However, measuring DevOps performance isn’t only about vanity charts or arbitrary numbers. The right indicators show how well your software delivery solutions perform under pressure. Combined with resilience architecture, these metrics guide your engineering teams to reduce lead time, cut failure rates, and recover faster.&lt;/p&gt;

&lt;p&gt;In other words, you have full insight into potential bottlenecks and can introduce changes where they matter most. This is a path to optimizing processes and maintaining continuous improvement, which boosts business goals and KPIs.&lt;/p&gt;

&lt;p&gt;But here’s the catch. Metrics don’t exist in isolation. Without the proper safety measures, all indicators are fragile and susceptible to even the slightest changes. For instance, robust &lt;a href="https://gitprotect.io/blog/devops-security-data-protection-best-practices/" rel="noopener noreferrer"&gt;backup and disaster recovery&lt;/a&gt; protect performance baselines and ensure that &lt;a href="https://gitprotect.io/blog/devops-pillars-top-11-devops-principles/" rel="noopener noreferrer"&gt;DevOps&lt;/a&gt; velocity doesn’t come at the cost of reliability.&lt;/p&gt;

&lt;h2&gt;
  
  
  Metrics that matter. An example of measuring DevOps success
&lt;/h2&gt;

&lt;p&gt;If you are looking for metrics that correlate directly with business goals and outcomes, you can check Google’s DevOps Research and Assessment (&lt;a href="https://services.google.com/fh/files/misc/2023_final_report_sodr.pdf" rel="noopener noreferrer"&gt;DORA&lt;/a&gt;). The program included the analysis of 32,000+ professionals across 3,000+ organizations. &lt;/p&gt;

&lt;p&gt;Based on that, it was possible to establish the performance of elite teams (in 2023):&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwcj7ivw21jgoorcl31qo.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwcj7ivw21jgoorcl31qo.png" alt="performance of elete teams" width="707" height="231"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;It’s clear that these metrics and their values correlate directly with a practical representation of business success, including:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;higher profitability&lt;/li&gt;
&lt;li&gt;customer satisfaction&lt;/li&gt;
&lt;li&gt;team engagement (and others)&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  From code to deployment. How metrics map to risk
&lt;/h2&gt;

&lt;p&gt;The &lt;strong&gt;Deployment Frequency&lt;/strong&gt; (see the table above) shows your ability to ship quickly. Typically, teams doing continuous delivery deploy 10-20 times per day. More deploys mean more moving parts and more room for mistakes. &lt;/p&gt;

&lt;p&gt;For example, deleting a GitLab tag used in deployment triggers or corrupting a YAML file mid-release can slow or stop the entire release pipeline.&lt;/p&gt;

&lt;p&gt;The &lt;strong&gt;Lead Time for Changes&lt;/strong&gt; measures the time from &lt;a href="https://gitprotect.io/blog/how-to-undo-a-commit-in-git/" rel="noopener noreferrer"&gt;commit&lt;/a&gt; to production. According to the theory, CI/CD pipelines automate this process. In practice, however, corrupted pipelines or lost secrets slow everything down.&lt;/p&gt;

&lt;p&gt;For better clarity, consider a case where an Azure DevOps pipeline is deleted during a refactor. Reproducing the exact pipeline from memory or fragments can take hours or days without a versioned backup.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Change Failure Rate&lt;/strong&gt; (CFR) reflects testing quality as well as system complexity. The faster you ship, the more pressure on automated test reliability. To prevent potential production breaks due to a lousy merge or broken environment variable, you need a sound backup platform enabling point-in-time rollback of the repo or pipeline configuration (keeping acceptable CFR).&lt;/p&gt;

&lt;p&gt;The above also affects &lt;strong&gt;Mean Time To Restore&lt;/strong&gt; (MTTR), which is critical for resilience. Let’s say the Jira configuration is damaged after an app update, or Bitbucket repos become compromised when a misconfigured webhook is deployed. A low MTTR is the difference between a minor disruption and a business outage in those cases (and others). Here, time-to-recovery should be measured in minutes, not hours.&lt;/p&gt;

&lt;h2&gt;
  
  
  An example of code to track deployment frequency automatically
&lt;/h2&gt;

&lt;p&gt;You can track the deployment frequency using a few mechanisms, including:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;commit-to-prod logs&lt;/li&gt;
&lt;li&gt;tagging events&lt;/li&gt;
&lt;li&gt;pipeline metadata.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;For instance, consider a basic Python script to count deployments per day (utilizing &lt;a href="https://gitprotect.io/blog/how-to-use-gitlab-api-best-practices-for-your-development-team/" rel="noopener noreferrer"&gt;GitLab API&lt;/a&gt;):&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fr4m27rcymovh68e5gwob.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fr4m27rcymovh68e5gwob.png" alt="Python scrip" width="567" height="507"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The data obtained this way allows you to build internal benchmarks and detect pipeline failures and drops in delivery performance. A good idea is to combine this with MTTR and failure rates to correlate deploys with breakage.&lt;/p&gt;

&lt;h2&gt;
  
  
  Failure scenarios that backups can prevent
&lt;/h2&gt;

&lt;p&gt;Talking about failure scenarios, connecting metrics with real-world DevOps threats makes the risks concrete.&lt;/p&gt;

&lt;p&gt;For instance:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxbn8bgzow4wtwb203xqt.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxbn8bgzow4wtwb203xqt.png" alt="metrics and failure scenarios" width="708" height="487"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;It’s worth noting that backups allow for forensic rollback. In other words, teams don’t just fix the problem; they learn from it and maintain their velocity.&lt;/p&gt;

&lt;h2&gt;
  
  
  Integrating backup into DevOps metrics strategy
&lt;/h2&gt;

&lt;p&gt;Backup integration into your DevOps metric strategy means making it a fundamental part of your delivery process. With the full deployment of a sound backup platform, you can cover much ground.&lt;/p&gt;

&lt;p&gt;That includes your Git repos on:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;GitHub&lt;/li&gt;
&lt;li&gt;GitLab&lt;/li&gt;
&lt;li&gt;Bitbucket&lt;/li&gt;
&lt;li&gt;Azure DevOps.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Using tools like GitProtect.io also allows you to take care of the backups in Jira (Jira Cloud, Jira Service Management, and Jira Assets). Especially when it comes to Jira projects and configurations, along with the critical elements of:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;webhooks&lt;/li&gt;
&lt;li&gt;deployment keys&lt;/li&gt;
&lt;li&gt;environment variables&lt;/li&gt;
&lt;li&gt;audit logs&lt;/li&gt;
&lt;li&gt;permissions.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This versatile capacity allows you to conveniently embed backups into the fabric of your delivery process. You can set up policies to automatically take a snapshot of your system before every &lt;a href="https://gitprotect.io/blog/best-practices-for-jira-sandbox-to-production-migration/" rel="noopener noreferrer"&gt;production deployment&lt;/a&gt;. Then, it’s time for backup verification and restore testing. &lt;/p&gt;

&lt;p&gt;Make them a standard part of your sprint cycles. It’s a good practice and a profitable move. Of course, tracking and logging the restore events to correlate with MTTR and SLA adherence is still crucial.&lt;/p&gt;

&lt;p&gt;Going further, advanced teams treat logs as observability signals. They help visualize resilience over time.&lt;/p&gt;

&lt;h2&gt;
  
  
  A few inconvenient facts
&lt;/h2&gt;

&lt;p&gt;If you don’t have your metrics or they vanish along with the pipeline, they were never metrics. You only had guesses. In elite DevOps organizations, speed and reliability are intertwined. The four key DevOps metrics – deployment frequency, lead time, change failure rate, and MTTR – don’t just describe team performance. As you probably know, they define it.&lt;/p&gt;

&lt;p&gt;However, all provided numbers are volatile and vulnerable without a solid BDR (backup and disaster recovery) plan. That’s why turning your backups into a measurable advantage is vital. This way, you reduce MTTR, lower failure rates, and maintain lead time even in disaster scenarios.&lt;/p&gt;

&lt;p&gt;This isn’t just operational hygiene. Consider it a performance multiplier.&lt;/p&gt;

&lt;h2&gt;
  
  
  So, what’s with GitProtect.io in DevOps metrics?
&lt;/h2&gt;

&lt;p&gt;GitProtect is a sound and versatile backup and disaster recovery system for the DevOps ecosystem. The tool supports the platforms mentioned above: GitHub, GitLab, Bitbucket, &lt;a href="https://gitprotect.io/blog/gitlab-to-azure-devops-migration/" rel="noopener noreferrer"&gt;Azure DevOps&lt;/a&gt;, as well as Jira.&lt;/p&gt;

&lt;p&gt;The described software plays a pivotal role in ensuring and maintaining the stability and continuity of DevOps processes. It directly serves the key success metrics discussed so far.&lt;/p&gt;

&lt;h2&gt;
  
  
  Deployment Frequency support
&lt;/h2&gt;

&lt;p&gt;Again, high deployment frequency (even multiple times a day) increases the risk of errors; think of deleted GitLab tags or corrupted YAML files during releases. The easiest and most effective way to mitigate these and other risks is to combine two capabilities.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Automated backups&lt;/strong&gt; of repos and metadata, especially scheduled, allow for rapidly restoring lost data without disrupting the release cycle. As for &lt;strong&gt;flexible recovery&lt;/strong&gt;, the point-in-time restore functionality makes it easier for teams to resume deployment processes quickly, maintaining high deployment frequency.&lt;/p&gt;

&lt;p&gt;Using GitProtect, teams can sustain continuous delivery and achieve elite performance levels (on-demand, multiple daily deployments).&lt;/p&gt;

&lt;h2&gt;
  
  
  Reducing Lead Time for changes
&lt;/h2&gt;

&lt;p&gt;Losing critical DevOps and/or project management data can significantly extend the time from code commit to production. To address this problem, you can protect &lt;strong&gt;the entire DevOps ecosystem&lt;/strong&gt;. Backups cover source code and metadata with webhooks, deployment keys, environmental variables, etc. That eliminates the need for manual reconstruction.&lt;/p&gt;

&lt;p&gt;Another element is &lt;a href="https://gitprotect.io/features/data-restore-disaster-recovery/cross-over-recovery.html#article-content" rel="noopener noreferrer"&gt;cross-over restore&lt;/a&gt;. The ability to restore data between platforms (e.g., from &lt;a href="https://gitprotect.io/blog/migrate-gitlab-to-github-how-to-do-it-in-an-efficient-and-data-consistent-way/" rel="noopener noreferrer"&gt;GitLab to GitHub&lt;/a&gt;) supports and ensures continuity even during outages.&lt;/p&gt;

&lt;p&gt;A proper backup and disaster recovery system minimizes delays caused by failures and enables teams to achieve &lt;strong&gt;lead times under an hour&lt;/strong&gt; – as seen in top performers.&lt;/p&gt;

&lt;h2&gt;
  
  
  Lowering the Change Failure Rate (CFR)
&lt;/h2&gt;

&lt;p&gt;The growing likelihood of errors that drive up CFR often requires IT teams to focus on two significant elements.&lt;/p&gt;

&lt;p&gt;Having a higher risk of faulty merges or broken environment variables means you need to utilize the &lt;strong&gt;restoration to the last known good state&lt;/strong&gt; mechanism. In cases like a corrupted .gitlab-ci.yml file or misconfigured Bitbucket branch protection rules, you should be able to roll back to the correct configuration.&lt;/p&gt;

&lt;p&gt;That also means strong &lt;strong&gt;encryption and secure storage&lt;/strong&gt;. Your data should be protected with AES-256 encryption – in transit and at rest – safeguarding against ransomware and other threats. Naturally, it reduces the risk of failure due to security breaches.&lt;/p&gt;

&lt;p&gt;In that matter, using GitProtect enhances test and system reliability and helps maintain a 0-15% CFR.&lt;/p&gt;

&lt;h2&gt;
  
  
  Minimizing Mean Time To Restore (MTTR)
&lt;/h2&gt;

&lt;p&gt;You don’t need to be convinced that MTTR is critical for your system’s resilience. For example, during incidents like corrupted Jira configurations or compromised Bitbucket repos.&lt;/p&gt;

&lt;p&gt;The first thing that comes to mind here is undoubtedly &lt;strong&gt;granular and rapid recovery&lt;/strong&gt;. This capability allows you to restore specific elements, like a single Jira issue or YAML file. The same goes for entire environments – in minutes (not hours)!&lt;/p&gt;

&lt;p&gt;That also means you can establish &lt;strong&gt;disaster recovery readiness&lt;/strong&gt;. Utilizing mechanisms like Disaster Recovery and cross-platform restoration (on-premises, cloud, or cross-platform) guarantees business continuity during major outages, such as a GitHub downtime.&lt;/p&gt;

&lt;p&gt;Let’s not forget compliance with &lt;a href="https://xopero.com/blog/en/the-evolution-of-data-backup-is-the-3-2-1-backup-rule-a-thing-of-the-past/" rel="noopener noreferrer"&gt;the 3-2-1 rule&lt;/a&gt; (at least). The reason is simply the support for multiple storage locations (local and cloud) and unlimited retention to guarantee reliable data recovery.&lt;/p&gt;

&lt;p&gt;Using GitProtect, you can maintain MTTR below one hour. This will minimize downtime and financial losses (e.g., $9,000 per minute of downtime, as noted in the article).&lt;/p&gt;

&lt;h2&gt;
  
  
  Enhancing overall resilience and compliance
&lt;/h2&gt;

&lt;p&gt;With everything described to this point, one thing needs underlining: backup as part of a DevOps metrics strategy is particularly important in the context of &lt;em&gt;the Shared Responsibility Model&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;Read more about your responsibilities in &lt;a href="https://gitprotect.io/blog/github-shared-responsibility-model-and-source-code-protection/" rel="noopener noreferrer"&gt;GitHub&lt;/a&gt;, &lt;a href="https://gitprotect.io/blog/gitlab-shared-responsibility-model-a-guide-to-collaborative-security/" rel="noopener noreferrer"&gt;GitLab&lt;/a&gt;, &lt;a href="https://gitprotect.io/blog/shared-responsibility-model-in-azure-devops/" rel="noopener noreferrer"&gt;Azure DevOps&lt;/a&gt;, and &lt;a href="https://gitprotect.io/blog/atlassian-cloud-shared-responsibility-model-are-you-aware-of-your-duties/" rel="noopener noreferrer"&gt;Atlassian&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Responsibility, in any sense, necessitates &lt;strong&gt;compliance with security standards&lt;/strong&gt;. Certifications like &lt;a href="https://gitprotect.io/blog/git-backup-for-soc-2-compliance/" rel="noopener noreferrer"&gt;SOC 2 Type II&lt;/a&gt; and &lt;a href="https://gitprotect.io/blog/iso-27001-certification-gitprotects-by-xopero-software-iso-27001-audit-process-explained/" rel="noopener noreferrer"&gt;ISO 27001&lt;/a&gt;, along with GDPR compliance, ensure adherence to legal and regulatory requirements, especially where those requirements are critical for a given industry, like healthcare or finance. &lt;/p&gt;

&lt;p&gt;However, compliance needs to be backed by &lt;strong&gt;centralized management&lt;/strong&gt;, which simplifies operations and makes compliance easy to track through:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;backup monitoring&lt;/li&gt;
&lt;li&gt;SLA reports&lt;/li&gt;
&lt;li&gt;notifications (email/Slack).&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Yet another thing is &lt;strong&gt;integration with DevOps processes&lt;/strong&gt;. You can strengthen process resilience with automated snapshots before deployment and restore tests within sprint cycles. &lt;/p&gt;

&lt;p&gt;From GitProtect’s perspective, you can protect data and allow your DevOps teams to focus on innovation rather than manual recovery. In other words, you can improve overall business KPIs like profitability and customer satisfaction. &lt;/p&gt;

&lt;h2&gt;
  
  
  Possible scenarios with GitProtect’s role
&lt;/h2&gt;

&lt;p&gt;Consider a few possible scenarios presented below:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj20onqkymjuvwfcql5ae.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj20onqkymjuvwfcql5ae.png" alt="possible scenarios" width="708" height="424"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Summary of measuring DevOps success
&lt;/h2&gt;

&lt;p&gt;There’s no doubt that tracking DevOps metrics is vital to optimizing software delivery and business outcomes in general. Teams considered elite achieve multiple daily deployments, lead times under one hour, a CFR of 0-15%, and MTTR below 60 minutes. They drive profitability and customer satisfaction.&lt;/p&gt;

&lt;p&gt;High deployment frequency increases risks like corrupted pipelines or lost configurations, so a solid backup system mitigates them, especially one with automated snapshots, point-in-time restores, and support for platforms like GitHub, GitLab, Bitbucket, Azure DevOps, and Jira.&lt;/p&gt;

&lt;p&gt;Finally, incorporating backup systems like GitProtect into your business platform boosts resilience, reduces MTTR, and helps maintain performance, which is essential for DevOps success.&lt;/p&gt;

&lt;p&gt;✍️ Subscribe to &lt;a href="https://gitprotect.io/gitprotect-newsletter.html?utm_source=d&amp;amp;utm_medium=m" rel="noopener noreferrer"&gt;GitProtect DevSecOps X-Ray Newsletter&lt;/a&gt; – your guide to the latest DevOps &amp;amp; security insights&lt;/p&gt;

&lt;p&gt;🚀 Ensure compliant &lt;a href="https://gitprotect.io/sign-up.html?utm_source=d&amp;amp;utm_medium=m" rel="noopener noreferrer"&gt;DevOps backup and recovery with a 14-day free trial&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;📅 Let’s discuss your needs and &lt;a href="https://calendly.com/d/3s9-n9z-pgc/gitprotect-live-demo?month=2024-04&amp;amp;utm_source=d&amp;amp;utm_medium=m" rel="noopener noreferrer"&gt;see a live product tour&lt;/a&gt;&lt;/p&gt;

</description>
      <category>devops</category>
      <category>cybersecurity</category>
      <category>coding</category>
      <category>productivity</category>
    </item>
    <item>
      <title>Dev Platform Breaches: How GitHub, Jira &amp; Confluence Exposed Mercedes, Apple, Disney &amp; Others</title>
      <dc:creator>GitProtect Team</dc:creator>
      <pubDate>Thu, 07 Aug 2025 12:30:54 +0000</pubDate>
      <link>https://forem.com/gitprotect/dev-platform-breaches-how-github-jira-confluence-exposed-mercedes-apple-disney-others-4no7</link>
      <guid>https://forem.com/gitprotect/dev-platform-breaches-how-github-jira-confluence-exposed-mercedes-apple-disney-others-4no7</guid>
      <description>&lt;p&gt;Welcome to the DevOps multiverse. Here, code is currency, while platforms like GitHub, Jira, and Confluence power critical infrastructure. Here, even the smallest misstep can trigger a chain reaction measured in gigabytes of leaked data, thousands of compromised credentials, and millions of dollars in financial losses, not to mention reputational damage.&lt;/p&gt;

&lt;p&gt;These risks aren’t theoretical. Breaches at household-name enterprises expose a harsh truth: DevOps pipelines have become the new battleground for cyberattacks. What connects Mercedes-Benz, Apple, Cisco, and The New York Times? All became victims of DevOps security failures, proving that even tech giants aren’t immune when code meets cybersecurity complacency.&lt;/p&gt;

&lt;p&gt;Key insights:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Mercedes: 270GB of proprietary code exposed via leaked GitHub token&lt;/li&gt;
&lt;li&gt;New York Times: 270GB internal data leaked, including Wordle source code&lt;/li&gt;
&lt;li&gt;Apple: Internal Jira &amp;amp; Confluence tools leaked&lt;/li&gt;
&lt;li&gt;Disney: 2.5GB of corporate secrets stolen by Club Penguin fans&lt;/li&gt;
&lt;li&gt;Schneider Electric: 400K rows of user data stolen, $125K ransom demanded&lt;/li&gt;
&lt;li&gt;Cisco: GitHub breach leaked source code, AWS keys, and Jira tickets&lt;/li&gt;
&lt;li&gt;WordPress: 390K+ credentials stolen via fake GitHub repo&lt;/li&gt;
&lt;li&gt;Fake WinRAR: Site distributed malware via GitHub&lt;/li&gt;
&lt;li&gt;Python: Leaked GitHub token threatened core PyPI repositories&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Continue reading for a detailed analysis of these breaches, or check the complete CISO’s Guide to DevOps Threats. &lt;/p&gt;

&lt;h2&gt;
  
  
  Global Cybersecurity Landscape at a Glance
&lt;/h2&gt;

&lt;p&gt;Globally, cyber attacks occur with &lt;a href="https://bizplanr.ai/blog/cyber-security-statistics" rel="noopener noreferrer"&gt;alarming frequency&lt;/a&gt; – roughly one every 39 seconds – amounting to over 2,000 incidents each day. This relentless pace fuels a massive economic toll: cybercrime is projected to cost the global economy $10.5 trillion annually by 2025, climbing to $15.63 trillion by 2029, according to &lt;a href="https://cybersecurityventures.com/hackerpocalypse-cybercrime-report-2016/" rel="noopener noreferrer"&gt;Cybersecurity Ventures&lt;/a&gt;. The United States alone accounts for &lt;a href="https://www.embroker.com/blog/cyber-attack-statistics/" rel="noopener noreferrer"&gt;59% of ransomware attacks&lt;/a&gt;, and &lt;a href="https://newsroom.ibm.com/2024-07-30-ibm-report-escalating-data-breach-disruption-pushes-costs-to-new-highs" rel="noopener noreferrer"&gt;70% of data breaches&lt;/a&gt; cause significant operational disruptions. The ripple effect doesn’t stop at the breached company — it also hits business partners, clients, and entire supply chains, amplifying the overall impact of the attack.&lt;/p&gt;

&lt;p&gt;The notion of complete immunity has always been a myth. Even the biggest organizations remain vulnerable.&lt;/p&gt;

&lt;h2&gt;
  
  
  Mercedes: 270GB of proprietary code exposed via leaked GitHub token
&lt;/h2&gt;

&lt;p&gt;Due to a mishandled GitHub token, Mercedes-Benz’s source code was exposed to the public. A Mercedes-Benz employee leaked a GitHub token in their repository, granting unrestricted access to all source code on the company’s GitHub Enterprise server. During the exposure, attackers could have accessed critical information, including API keys, design documents, database credentials, and other sensitive data, which could have potentially caused financial, legal, and reputational damage. &lt;/p&gt;
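&lt;p&gt;Leaks like this can often be caught before code is published by scanning commit history for token-shaped strings. Below is a minimal sketch (the pattern only covers GitHub’s current token prefixes; dedicated scanners such as gitleaks or trufflehog are far more thorough):&lt;/p&gt;

```shell
#!/bin/sh
# Rough pre-publish check: grep the full history of every branch for strings
# shaped like GitHub tokens (ghp_/gho_/ghs_/ghu_/ghr_ + 36 alphanumerics).
if git log -p --all | grep -qE 'gh[posur]_[A-Za-z0-9]{36}'; then
  echo "Potential GitHub token in history: rotate it and rewrite history"
  exit 1
fi
echo "No GitHub token patterns found"
```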

&lt;h2&gt;
  
  
  New York Times: 270GB internal data leaked, including Wordle source code
&lt;/h2&gt;

&lt;p&gt;270GB of internal data belonging to The New York Times was exposed, including alleged source code for Wordle, internal communications, and sensitive authentication credentials linked to over 5,000 GitHub repositories. The New York Times confirmed that the incident involved the inadvertent exposure of credentials to a third-party code platform. However, the organization stated that no unauthorized access to its internal systems had been detected and that operations remained unaffected.&lt;/p&gt;

&lt;h2&gt;
  
  
  Apple: Internal Jira &amp;amp; Confluence tools leaked
&lt;/h2&gt;

&lt;p&gt;In June 2024, a threat actor known as IntelBroker claimed responsibility for a breach of Apple’s internal authentication infrastructure. The leaked data included proprietary plugins and configurations used to integrate AppleConnect-SSO with Jira and Confluence, posing significant supply chain risks. According to cybersecurity firm AHCTS, the breach did not affect end-user services.&lt;/p&gt;

&lt;h2&gt;
  
  
  Disney: 2.5GB of corporate secrets stolen by Club Penguin fans
&lt;/h2&gt;

&lt;p&gt;Club Penguin fans exploited Disney’s Confluence server to access old internal game data but inadvertently stole 2.5 GB of sensitive corporate information, including developer tools, internal infrastructure, advertising plans, and business documentation. The breach occurred using previously exposed credentials and included internal API endpoints, S3 bucket credentials, and links to developer resources, potentially increasing Disney’s exposure to further attacks.&lt;/p&gt;

&lt;h2&gt;
  
  
  Schneider Electric: 400K rows of user data stolen, $125K ransom demanded
&lt;/h2&gt;

&lt;p&gt;Schneider Electric confirmed a breach involving its internal project tracking platform, hosted in an isolated environment. The threat actor, known as “Grep,” claims to have accessed the company’s Jira server using exposed credentials and stolen 40GB of data, including 400K rows of user information, 75K unique email addresses, and other critical project data. The stolen information reportedly includes details about projects, issues, and plugins, and the attackers have demanded $125,000 to prevent a data leak.&lt;/p&gt;

&lt;h2&gt;
  
  
  Cisco: GitHub breach leaked source code, AWS keys, and Jira tickets
&lt;/h2&gt;

&lt;p&gt;Cisco confirmed that some files were stolen after hacker IntelBroker claimed access to source code, credentials, and other sensitive data via GitHub and a SonarQube project. While no internal systems were breached, the attacker exploited a public-facing DevHub used for customer resources. Cisco reported that only a limited number of files were exposed, with no sensitive personal or financial data found. &lt;/p&gt;

&lt;h2&gt;
  
  
  WordPress: 390K+ credentials stolen via fake GitHub repo
&lt;/h2&gt;

&lt;p&gt;A malicious GitHub repository enabled the exfiltration of 390K+ credentials, primarily targeting WordPress accounts, through a fake tool called “Yet Another WordPress Poster”. The repository, associated with a threat actor dubbed MUT-1244, also deployed malware via a rogue npm dependency and phishing emails. Victims included pentesters, security researchers, and malicious actors who inadvertently exposed sensitive data such as SSH private keys and AWS credentials. MUT-1244’s tactics included creating trojanized GitHub repositories hosting fake PoC exploit code and employing phishing emails to deliver payloads like cryptocurrency miners and data theft tools.&lt;/p&gt;

&lt;h2&gt;
  
  
  Fake WinRAR: The site distributed malware via GitHub
&lt;/h2&gt;

&lt;p&gt;Security researchers at SonicWall uncovered a fake WinRAR website (winrar[.]co) hosting a malicious shell script designed to download further malware from a GitHub repo named “encrypthub.” The repository contained ransomware, crypto mining software, information stealers, and injection tools, with harvested system data sent to a Telegram account — illustrating the danger of typosquatting and weaponized open-source infrastructure.&lt;/p&gt;

&lt;h2&gt;
  
  
  Python: Leaked GitHub token threatened core PyPI repositories
&lt;/h2&gt;

&lt;p&gt;Researchers at JFrog identified a leaked GitHub token embedded in a public Docker container, granting access to sensitive PyPI repositories. The token, belonging to PyPI admin Ee Durbin, was exposed due to misconfigured GitHub API usage. Although the token was quickly revoked, it posed a critical supply chain risk. Separately, Checkmarx reported malicious PyPI packages exfiltrating data via Telegram bots.&lt;/p&gt;

&lt;h2&gt;
  
  
  The untold impact of DevOps data leaks
&lt;/h2&gt;

&lt;p&gt;While DevOps breaches at companies such as Mercedes-Benz, Apple, The New York Times, and Cisco often make headlines, the true cost of these incidents is rarely disclosed. &lt;/p&gt;

&lt;p&gt;At first glance, the impact may appear limited to brief negative press or a dent in reputation. But beneath the surface, the real price tag can be far more significant, ranging from:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;costly data recovery and environment restoration,&lt;/li&gt;
&lt;li&gt;loss of competitive edge due to exposed code or strategic plans,&lt;/li&gt;
&lt;li&gt;disruptions to business continuity,&lt;/li&gt;
&lt;li&gt;to potential regulatory penalties.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The bottom line? Most organizations downplay the full scope of these incidents in public statements. Yet the sheer scale of the leaks—hundreds of gigabytes of data, millions of records, and sensitive internal repositories—reveals a much deeper, and likely more damaging, reality.&lt;/p&gt;

&lt;p&gt;To dive deeper into these incidents and uncover emerging trends in cyberattacks targeting DevOps environments—including threats like Lumma Stealer, NJRat, fake GitHub repositories, and GitLab exploits—read the full &lt;a href="https://gitprotect.io/docs/gitprotect-ciso-guide-to-devops-threats-2025.pdf" rel="noopener noreferrer"&gt;CISO’s Guide to DevOps Threats&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;✍️ Subscribe to &lt;a href="https://gitprotect.io/gitprotect-newsletter.html?utm_source=d&amp;amp;utm_medium=m" rel="noopener noreferrer"&gt;GitProtect DevSecOps X-Ray Newsletter&lt;/a&gt; – your guide to the latest DevOps &amp;amp; security insights&lt;/p&gt;

&lt;p&gt;🚀 Ensure compliant &lt;a href="https://gitprotect.io/sign-up.html?utm_source=d&amp;amp;utm_medium=m" rel="noopener noreferrer"&gt;DevOps backup and recovery with a 14-day free trial&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;📅 Let’s discuss your needs and &lt;a href="https://calendly.com/d/3s9-n9z-pgc/gitprotect-live-demo?month=2024-04&amp;amp;utm_source=d&amp;amp;utm_medium=m" rel="noopener noreferrer"&gt;see a live product tour&lt;/a&gt;&lt;/p&gt;

</description>
      <category>devops</category>
      <category>projectmanagement</category>
      <category>security</category>
    </item>
    <item>
      <title>How To Restore a Deleted Branch In Azure DevOps</title>
      <dc:creator>GitProtect Team</dc:creator>
      <pubDate>Thu, 07 Aug 2025 12:15:09 +0000</pubDate>
      <link>https://forem.com/gitprotect/how-to-restore-a-deleted-branch-in-azure-devops-3d9j</link>
      <guid>https://forem.com/gitprotect/how-to-restore-a-deleted-branch-in-azure-devops-3d9j</guid>
      <description>&lt;p&gt;Human error is one of the most common causes leading to data loss or data breaches. In the &lt;a href="https://itic-corp.com/security-data-breaches-top-cause-of-downtime-in-2022/" rel="noopener noreferrer"&gt;ITIC report&lt;/a&gt;, they state that 64 % of downtime incidents have their roots in human errors.&lt;/p&gt;

&lt;p&gt;If you think that in &lt;a href="https://gitprotect.io/blog/devsecops-mythbuster-nothing-fails-in-the-cloud-saas/" rel="noopener noreferrer"&gt;SaaS environments all your data is safe&lt;/a&gt;, you need to think once again. All SaaS providers, including Microsoft, follow the &lt;a href="https://gitprotect.io/blog/shared-responsibility-model-in-azure-devops/" rel="noopener noreferrer"&gt;Shared Responsibility Model&lt;/a&gt;, which states that the service provider is responsible for the accessibility of its infrastructure and services, while a user is responsible for their data availability, including backup and disaster recovery.&lt;/p&gt;

&lt;p&gt;So, who will be responsible for data recovery if your Azure DevOps branch is deleted? Deletion can be caused by a range of things, from accidental removal or simple cleanups to incorrect coding practices. However, the user is the one who needs to deal with the consequences, such as data loss or failure to meet compliance and legal requirements. Thus, it is necessary that your organization can recover fast from any failure event, including accidental deletion.&lt;/p&gt;

&lt;p&gt;In this article, we will outline the best ways and practices to restore your deleted Azure DevOps branch. Though it’s worth stating that the safest and most reliable method is to implement a backup &amp;amp; DR solution as part of your Azure DevOps security strategy. Tools such as GitProtect backup &amp;amp; DR software for Azure DevOps can provide you with the most flexible options for securing and restoring branches.&lt;/p&gt;

&lt;h2&gt;
  
  
  Accidental deletion?
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://gitprotect.io/blog/human-error-the-most-common-cybersecurity-mistakes-for-devops/" rel="noopener noreferrer"&gt;Human error&lt;/a&gt; remains one of the leading causes of failures, including accidental deletions. In fact, research from &lt;a href="https://www.researchgate.net/figure/Analysis-of-causes-of-data-loss-Figure-3-How-Denial-Of-Service-DOS-attack-works_fig2_311470799" rel="noopener noreferrer"&gt;ResearchGate&lt;/a&gt; shows that 32% of data loss incidents are due to human mistakes. This makes it critical to pair secure coding practices with a reliable backup strategy to keep your Azure DevOps branches both protected and recoverable.&lt;/p&gt;

&lt;p&gt;Consider this scenario: your teammate accidentally deletes a feature branch just before it’s merged. How quickly can you restore it and resume work? With the cost of downtime reaching up to $9,000 per minute (and that’s without accounting for regulatory penalties), every second counts.&lt;/p&gt;

&lt;p&gt;Accidental deletion is a common risk that organizations must plan for in both their operational and compliance strategies. When equipped with the right backup solution and a &lt;a href="https://gitprotect.io/blog/become-the-master-of-disaster-disaster-recovery-testing-for-devops/" rel="noopener noreferrer"&gt;well-tested Disaster Recovery plan&lt;/a&gt;, restoring any branch can be fast, seamless, and secure.&lt;/p&gt;

&lt;h2&gt;
  
  
  Proper branch management in Azure DevOps
&lt;/h2&gt;

&lt;p&gt;To avoid losing any branches to accidental deletions, it is crucial to implement secure coding practices along with proper configuration of branch policies. To edit branch policies, you will need to be a member of the Project Administrators security group or at least have repository-level &lt;strong&gt;Edit policies&lt;/strong&gt; permissions. Manage your branch policies by selecting &lt;strong&gt;Repos&lt;/strong&gt;, then &lt;strong&gt;Branches&lt;/strong&gt; in the web portal. Now, simply find the branch you need to adjust using the &lt;em&gt;Search branch name&lt;/em&gt; box in the upper right corner. Click on &lt;strong&gt;More options&lt;/strong&gt; right next to the branch and select &lt;strong&gt;Branch policies&lt;/strong&gt; from the menu available.&lt;/p&gt;

&lt;p&gt;Apart from implementing a third-party backup &amp;amp; DR solution and adjusting the policies for your branches, it is important to pay attention to other key best practices. To avoid errors, accidental deletions, and branches being lost, try to keep your strategy simple and build according to the following practices:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Maintain good quality and an up-to-date main branch&lt;/li&gt;
&lt;li&gt;Make use of feature branches for any new features or bug fixes&lt;/li&gt;
&lt;li&gt;Using pull requests, merge your feature branches into the main branch&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Possible ways to restore deleted branches
&lt;/h2&gt;

&lt;p&gt;You can use Git commands to restore the branch. This process relies on Git’s reflog (reference log), which keeps track of changes to the repository, including branch deletions. You can also restore from your local repo or use the web portal.&lt;/p&gt;

&lt;h2&gt;
  
  
  Method # 1: Azure DevOps web portal
&lt;/h2&gt;

&lt;p&gt;As stated by Azure DevOps official &lt;a href="https://learn.microsoft.com/en-us/azure/devops/repos/git/restore-deleted-branch?view=azure-devops" rel="noopener noreferrer"&gt;documentation&lt;/a&gt;, to ‘restore a Git branch in your own repo from Visual Studio or the command line, push your branch from your local repo to Azure Repos to restore it’. This method is a convenient way to restore a deleted git branch. To start off, locate your repository, open it, and go to the &lt;strong&gt;Branches&lt;/strong&gt; view. Now, find your specific branch, using the &lt;strong&gt;Search all branches&lt;/strong&gt; option in the top right corner. Then, click the link to &lt;strong&gt;search for the exact match in deleted branches&lt;/strong&gt;. &lt;/p&gt;

&lt;p&gt;This option helps to gain info about the commits, who deleted the branch, and when. Finally, in order to restore a deleted git branch in your Azure DevOps, select the “…” icon, which is right by the branch name, and then click &lt;strong&gt;Restore branch&lt;/strong&gt; from the list. The branch will then be recreated at the last commit to which it pointed.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdbn6zrpq2peqjmxgsyoo.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdbn6zrpq2peqjmxgsyoo.png" alt="restore deleted branch in Azure DevOps 1" width="457" height="294"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;Source:&lt;/em&gt; &lt;a href="https://learn.microsoft.com/en-us/azure/devops/repos/git/restore-deleted-branch?view=azure-devops" rel="noopener noreferrer"&gt;Restore a deleted Git branch from the web portal&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Bear in mind, branch policies and permissions do not get restored. Additionally, if you have used the same branch name for a number of different commits, you might not see all the commits you expected to have when you restore the deleted branch. To help with that, simply go to the &lt;strong&gt;Pushes&lt;/strong&gt; page of your restored branch and then see the whole history of your branch. &lt;/p&gt;

&lt;p&gt;Also, to get to a specific commit, you will need to select &lt;strong&gt;New branch&lt;/strong&gt; from the “…” icon. Then, you can use a pull request, cherry-pick, or merge to add any of the commits back into your desired branch.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkpcozhskoptzlg2ctnbe.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkpcozhskoptzlg2ctnbe.png" alt="restore deleted branch in Azure DevOps 2" width="257" height="210"&gt;&lt;/a&gt;&lt;br&gt;
Source: &lt;a href="https://learn.microsoft.com/en-us/azure/devops/repos/git/restore-deleted-branch?view=azure-devops" rel="noopener noreferrer"&gt;Restore a deleted Git branch from the web portal&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;It is best suited for those who:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;prefer a GUI-based method rather than a command-line,&lt;/li&gt;
&lt;li&gt;have full access to the repo, need quick restore&lt;/li&gt;
&lt;li&gt;want to review any commit info (for example, who deleted the branch)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;However, there are some caveats to keep in mind. This restore process isn’t automated; thus, it increases the risk of mistakes, especially under time pressure during an incident. Another important limitation is the time frame: Azure DevOps retains deleted branches for only 90 days. After that, the data is permanently deleted, and if you notice the deletion too late, your data might be lost.&lt;/p&gt;

&lt;h2&gt;
  
  
  What is required to restore a branch?
&lt;/h2&gt;

&lt;p&gt;It is important to remember that there are certain requirements you will have to meet in order to be able to restore branches. Keep in mind that in private projects, users with &lt;strong&gt;Stakeholder&lt;/strong&gt; access do not have access to Azure Repos (including viewing, cloning, and contributing). Take a look below to see more detailed requirements.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnactlli8kx4foe6k9h0b.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnactlli8kx4foe6k9h0b.png" alt="restore deleted branch in Azure DevOps 3" width="800" height="326"&gt;&lt;/a&gt;&lt;br&gt;
Source: &lt;a href="https://learn.microsoft.com/en-us/azure/devops/repos/git/restore-deleted-branch?view=azure-devops" rel="noopener noreferrer"&gt;Azure DevOps documentation&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Method # 2: Restore from the local repository
&lt;/h2&gt;

&lt;p&gt;Since Git is a distributed system, the developer’s local repository is a full copy of the project history. All the commits and remote-tracking references are still there, provided ‘git fetch’ has not been run since the deletion, as fetching updates those references (and, with ‘--prune’, removes stale ones). &lt;/p&gt;

&lt;p&gt;If your branch was deleted from the remote, restoring from the local repo is a viable option. Now, in your local repo, run ‘git branch -r’ and this should list all the remote branches. So, when someone deletes ‘stage’ on the remote, but you haven’t fetched yet, your local still has origin/stage. Now, create a new local branch – ‘stage’, that points to the desired commit that your deleted branch pointed to. To do this, run ‘git checkout -b stage origin/stage’. Now that you have restored your branch locally, simply ‘git push’ it to Azure DevOps.&lt;/p&gt;

&lt;p&gt;However, if you did fetch after deletion, there is a chance your reference origin/stage, along with the deleted branch, may be gone.&lt;/p&gt;

&lt;h2&gt;
  
  
  Method # 3: Recovering a Deleted Branch with ‘git reflog’
&lt;/h2&gt;

&lt;p&gt;If you’ve accidentally deleted a branch using ‘git branch -D’, then &lt;a href="https://gitprotect.io/blog/how-to-use-git-reflog-reflog-vs-log/" rel="noopener noreferrer"&gt;git reflog&lt;/a&gt; can help. Reflog (reference log) keeps track of where your HEAD and branch references have pointed in your local repository, even after actions like deletions, resets, or rebases. It allows you to view the recent history of commits, including those that may no longer be part of a visible branch.&lt;/p&gt;

&lt;h2&gt;
  
  
  Option 1: Immediate Recovery After Deletion
&lt;/h2&gt;

&lt;p&gt;If you just deleted the wrong branch and haven’t run many additional Git commands, the recovery can be almost instantaneous. Simply:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Check your terminal output — it often displays the last commit SHA associated with the deleted branch.&lt;/li&gt;
&lt;li&gt;If not, run git reflog to locate the commit hash tied to the deleted branch.&lt;/li&gt;
&lt;li&gt;Once you have the correct SHA, restore the branch with: &lt;em&gt;git checkout &amp;lt;sha&amp;gt;&lt;/em&gt;. This checks out the commit in a detached HEAD state.&lt;/li&gt;
&lt;li&gt;To recreate the branch: &lt;em&gt;git checkout -b &amp;lt;branch-name&amp;gt;&lt;/em&gt;. This command creates a new branch at that commit and reattaches HEAD.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;This approach ensures that your work isn’t lost, provided the deleted commits haven’t yet been garbage-collected by Git (reflog entries expire after 90 days by default).&lt;/p&gt;

&lt;h2&gt;
  
  
  Option 2: Recovering a Branch a while after deletion
&lt;/h2&gt;

&lt;p&gt;If you didn’t notice the deletion right away, or have made other changes since, you can still recover the branch using git reflog:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Run the reflog&lt;/strong&gt; to inspect your recent Git history: git reflog This will show a list of recent HEAD movements, including the commit where your deleted branch last pointed.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Find the correct entry&lt;/strong&gt; — look for the commit SHA that corresponds to the last known state of your deleted branch. It might be labeled something like: abc1234 HEAD@{4}: checkout: moving from &amp;lt;branch-name&amp;gt; to master&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Check out that commit&lt;/strong&gt; using: git checkout abc1234 This puts you in a detached HEAD state at the desired commit.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Recreate the branch&lt;/strong&gt; at that commit: git checkout -b &amp;lt;branch-name&amp;gt;&lt;/li&gt;
&lt;li&gt;Optionally, &lt;strong&gt;push it back to the remote&lt;/strong&gt; if needed: git push origin &amp;lt;branch-name&amp;gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;This method works even if some time has passed, as long as the commit hasn’t been garbage-collected (which by default happens after 90 days in Git).&lt;/p&gt;

&lt;h2&gt;
  
  
  Method # 4: Third-party solutions to address risks
&lt;/h2&gt;

&lt;p&gt;Relying on a third-party backup and disaster recovery solution is the most dependable way to restore your Azure DevOps data, even beyond the 90-day mark after a branch has been deleted. Every project within Azure DevOps should be properly secured to avoid disruptions and data loss.&lt;/p&gt;

&lt;p&gt;Microsoft itself outlines in its documentation that securing Azure DevOps data is the user’s responsibility and recommends implementing third-party backup and disaster recovery solutions for maximum security and compliance. &lt;/p&gt;

&lt;p&gt;Unlike manual restore methods or temporary workarounds, a trusted backup solution functions as an insurance policy; it simplifies and accelerates restore processes while providing a reliable safety net. It reduces downtime, minimizes risk, and offers guaranteed data recovery, something that manual processes can’t ensure.&lt;/p&gt;

&lt;h2&gt;
  
  
  How to restore a deleted branch in Azure DevOps with GitProtect
&lt;/h2&gt;

&lt;p&gt;Without a proper backup strategy aligned with industry best practices, recovery may not be possible when disaster strikes. That’s why your backup must include:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Automated, scheduled backups&lt;/strong&gt; that meet your defined RPO and RTO objectives,&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Long-term or even infinite retention&lt;/strong&gt;, enabling point-in-time recovery and fulfilling Shared Responsibility Model obligations,&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Multi-storage localization&lt;/strong&gt;, which allows you to keep your backup copies in multiple locations, both cloud and local,&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Replication&lt;/strong&gt;,&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;AES encryption at rest and in transit&lt;/strong&gt;, with the option to use your own encryption keys,&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Ransomware protection&lt;/strong&gt;, essential because backup is your last line of defense.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;These capabilities ensure your &lt;strong&gt;Disaster Recovery plan is ready for every scenario&lt;/strong&gt;, including accidental deletions. GitProtect provides all these backup features and goes further to anticipate every disaster scenario. Thus, with the solution, you get key restore capabilities:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Point-in-time restores&lt;/strong&gt; – go back to any specific point in time and restore data from that period.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Granular restore&lt;/strong&gt; – specify what pieces of Azure DevOps data you need to restore.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Full data recovery&lt;/strong&gt; – use it to recover the entire Azure DevOps environment.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Cross-over recovery&lt;/strong&gt; – restore your Azure DevOps data to another Git hosting service (GitHub, GitLab, Bitbucket).&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;So, to restore your deleted Azure DevOps branch from a backup, you just need to pick the point in time from which you want to restore your data and run the restore.&lt;/p&gt;

&lt;p&gt;Let’s imagine that a branch in your organization was deleted yesterday; all you need to do is restore your data from a backup taken before the deletion.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpkezr79rr95kq5ol1qga.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpkezr79rr95kq5ol1qga.png" alt="restore deleted branch in Azure DevOps 4" width="754" height="825"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;To sum up, for the highest level of security and the most flexible restore options, it is recommended to adopt a third-party backup and disaster recovery solution, such as GitProtect. This way, you get a preventive safety net in the form of immutable, automated, and scheduled backups, along with a range of restore capabilities to accommodate any restore scenario you may face in Azure DevOps.&lt;/p&gt;

&lt;p&gt;✍️ Subscribe to &lt;a href="https://gitprotect.io/gitprotect-newsletter.html?utm_source=d&amp;amp;utm_medium=m" rel="noopener noreferrer"&gt;GitProtect DevSecOps X-Ray Newsletter&lt;/a&gt; – your guide to the latest DevOps &amp;amp; security insights&lt;/p&gt;

&lt;p&gt;🚀 Ensure compliant &lt;a href="https://gitprotect.io/sign-up.html?utm_source=d&amp;amp;utm_medium=m" rel="noopener noreferrer"&gt;DevOps backup and recovery with a 14-day free trial&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;📅 Let’s discuss your needs and &lt;a href="https://calendly.com/d/3s9-n9z-pgc/gitprotect-live-demo?month=2024-04&amp;amp;utm_source=d&amp;amp;utm_medium=m" rel="noopener noreferrer"&gt;see a live product tour&lt;/a&gt;&lt;/p&gt;

</description>
      <category>devops</category>
      <category>programming</category>
      <category>azure</category>
      <category>coding</category>
    </item>
  </channel>
</rss>
