<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Kapil Uthra</title>
    <description>The latest articles on Forem by Kapil Uthra (@kapiluthra).</description>
    <link>https://forem.com/kapiluthra</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F653314%2Fc1d37e31-7642-46a2-9261-5285d8025337.jpg</url>
      <title>Forem: Kapil Uthra</title>
      <link>https://forem.com/kapiluthra</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/kapiluthra"/>
    <language>en</language>
    <item>
      <title>AWS Migration Series: Database Migration Service Overview</title>
      <dc:creator>Kapil Uthra</dc:creator>
      <pubDate>Sat, 11 Sep 2021 04:44:06 +0000</pubDate>
      <link>https://forem.com/kapiluthra/aws-migration-series-database-migration-service-overview-8l3</link>
      <guid>https://forem.com/kapiluthra/aws-migration-series-database-migration-service-overview-8l3</guid>
      <description>&lt;p&gt;AWS Database Migration Service (AWS DMS) is a cloud service that makes it easy to migrate relational databases, data warehouses, NoSQL databases, and other types of data stores. You can use AWS DMS to migrate your data into the AWS Cloud or between combinations of cloud and on-premises setups.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--RyRRS6As--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/rbn2wyvpv223lvu6ff2m.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--RyRRS6As--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/rbn2wyvpv223lvu6ff2m.png" alt="AWS DMS"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;AWS DMS is a server in the AWS Cloud that runs replication software. You create a source and target connection to tell AWS DMS where to extract from and load to. Then you schedule a task that runs on this server to move your data.&lt;/p&gt;

&lt;h3&gt;How AWS DMS Works&lt;/h3&gt;

&lt;p&gt;AWS Database Migration Service (AWS DMS) is a web service that you can use to migrate data from a source data store to a target data store. These two data stores are called endpoints. You can migrate between source and target endpoints that use the same database engine, such as from an Oracle database to an Oracle database. You can also migrate between source and target endpoints that use different database engines, such as from an Oracle database to a PostgreSQL database. The only requirement to use AWS DMS is that one of your endpoints must be on an AWS service. You can't use AWS DMS to migrate from an on-premises database to another on-premises database.&lt;/p&gt;

&lt;p&gt;At a high level, when using AWS DMS you do the following:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Create a replication server.&lt;/li&gt;
&lt;li&gt;Create source and target endpoints that have connection information about your data stores.&lt;/li&gt;
&lt;li&gt;Create one or more migration tasks to migrate data between the source and target data stores.&lt;/li&gt;
&lt;/ul&gt;
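Assuming you use boto3, the last step above roughly corresponds to the DMS CreateReplicationTask API. The sketch below only builds the request parameters; the ARNs are placeholders and the call itself is left commented out:

```python
# Sketch of the "create a migration task" step using boto3 parameter names.
# The ARNs are placeholders; the API call itself is commented out.

def build_replication_task_params(source_arn, target_arn, instance_arn):
    return {
        "ReplicationTaskIdentifier": "example-migration-task",
        "SourceEndpointArn": source_arn,
        "TargetEndpointArn": target_arn,
        "ReplicationInstanceArn": instance_arn,
        # full-load-and-cdc = full load, cached changes, ongoing replication
        "MigrationType": "full-load-and-cdc",
        "TableMappings": '{"rules": []}',  # a real task needs selection rules
    }

params = build_replication_task_params(
    "arn:aws:dms:region:account:endpoint/source-endpoint",
    "arn:aws:dms:region:account:endpoint/target-endpoint",
    "arn:aws:dms:region:account:rep/replication-instance",
)
# import boto3
# boto3.client("dms").create_replication_task(**params)
```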

&lt;p&gt;A task can consist of three major phases:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The full load of existing data&lt;/li&gt;
&lt;li&gt;The application of cached changes&lt;/li&gt;
&lt;li&gt;Ongoing replication&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;AWS DMS loads data&lt;/strong&gt; from tables on the source data store to tables on the target data store. While the full load is in progress, any changes made to the tables being loaded are cached on the replication server; these are the cached changes.&lt;/p&gt;

&lt;p&gt;When the full load for a given table is complete, AWS DMS immediately begins to &lt;strong&gt;apply the cached changes&lt;/strong&gt; for that table. Once the table is loaded and the cached changes applied, AWS DMS begins to collect changes as transactions for the ongoing replication phase.&lt;/p&gt;

&lt;p&gt;At the start of the &lt;strong&gt;ongoing replication phase&lt;/strong&gt;, a backlog of transactions generally causes some lag between the source and target databases. The migration eventually reaches a steady state after working through this backlog of transactions. At this point, you can shut down your applications, allow any remaining transactions to be applied to the target, and bring your applications up, now pointing at the target database.&lt;/p&gt;
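As a toy illustration of these three phases (plain Python, not the DMS API; names are made up), changes that arrive while the full load runs are cached, then applied once the load finishes:

```python
# Toy model of the three task phases: full load, application of cached
# changes, and (conceptually) ongoing replication. Not the real DMS API.

def migrate_table(source_rows, change_feed):
    """source_rows: initial table contents; change_feed: changes arriving
    while the full load runs, as (row_id, new_value) pairs."""
    target = {}
    cached_changes = []

    # Phase 1: full load. Changes made meanwhile are cached, not applied.
    for row_id, value in source_rows.items():
        target[row_id] = value
    cached_changes.extend(change_feed)

    # Phase 2: apply the cached changes once the full load completes.
    for row_id, new_value in cached_changes:
        target[row_id] = new_value

    # Phase 3 (ongoing replication) would keep applying new transactions
    # until applications are cut over to the target database.
    return target

result = migrate_table({1: "a", 2: "b"}, [(2, "b2"), (3, "c")])
```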

&lt;h4&gt;Limitations&lt;/h4&gt;

&lt;p&gt;AWS DMS creates the target schema objects necessary to perform the migration. However, it takes a minimalist approach and creates only the objects required to migrate the data efficiently: tables, primary keys, and in some cases unique indexes. It doesn't create other objects, such as secondary indexes, non-primary-key constraints, or data defaults.&lt;/p&gt;

&lt;p&gt;AWS DMS doesn't perform schema or code conversion. If you want to convert an existing schema to a different database engine, you can use AWS SCT. AWS SCT converts your source objects (tables, indexes, views, triggers, and other system objects) into the target data definition language (DDL) format. You can also use AWS SCT to convert most of your application code, such as PL/SQL or T-SQL, to the equivalent target language.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;In the next part, we will discuss AWS SCT (Schema Conversion Tool) for heterogeneous migrations.&lt;/p&gt;
&lt;/blockquote&gt;

</description>
      <category>database</category>
      <category>aws</category>
      <category>migration</category>
    </item>
    <item>
      <title>AWS Migration Series: DataSync Overview</title>
      <dc:creator>Kapil Uthra</dc:creator>
      <pubDate>Sat, 28 Aug 2021 05:38:59 +0000</pubDate>
      <link>https://forem.com/kapiluthra/aws-migration-series-datasync-overview-5fp7</link>
      <guid>https://forem.com/kapiluthra/aws-migration-series-datasync-overview-5fp7</guid>
      <description>&lt;p&gt;AWS DataSync is an online data transfer service that simplifies, automates, and accelerates moving data between on-premises storage systems and AWS storage services, and also between AWS storage services. DataSync can copy data between Network File System (NFS), Server Message Block (SMB) file servers, self-managed object storage, Amazon Simple Storage Service (Amazon S3) buckets, Amazon EFS file systems, and Amazon FSx for Windows File Server file systems.&lt;/p&gt;

&lt;h3&gt;Data transfer between self-managed storage and AWS&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--tV4LvBDC--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/to2pe7d7pe1t2puv3rxe.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--tV4LvBDC--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/to2pe7d7pe1t2puv3rxe.png" alt="image"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;Data transfer between AWS storage services&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--rLsbf7Lu--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ezsn5jvxv3u9ho5ry8c2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--rLsbf7Lu--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ezsn5jvxv3u9ho5ry8c2.png" alt="image"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h4&gt;How DataSync works&lt;/h4&gt;

&lt;p&gt;To understand how DataSync works, we need to understand a few key components.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Agent:&lt;/strong&gt; A virtual machine that you own, used to read data from or write data to self-managed storage systems. The agent can be deployed on VMware ESXi, KVM, or Microsoft Hyper-V hypervisors, or launched as an Amazon EC2 instance.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Location:&lt;/strong&gt; Any source or destination used in the data transfer (for example, Amazon S3, Amazon EFS, Amazon FSx for Windows File Server, NFS, SMB, or self-managed object storage).&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Task:&lt;/strong&gt; A source location, a destination location, and configuration settings that define how data is transferred. Settings can include options such as how to treat metadata, deleted files, and permissions.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Task execution:&lt;/strong&gt; An individual run of a task, which includes information such as start time, end time, bytes written, and status.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
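Assuming boto3, the components above map onto the DataSync CreateTask API: two location ARNs plus options that define how data moves. The sketch below only builds the request parameters (the ARNs are placeholders, and the call is commented out):

```python
# Sketch of defining a DataSync task from the components above.
# The location ARNs are placeholders; the API call is commented out.

def build_task_params(source_location_arn, destination_location_arn):
    return {
        "SourceLocationArn": source_location_arn,
        "DestinationLocationArn": destination_location_arn,
        "Name": "example-transfer-task",
        "Options": {
            "VerifyMode": "POINT_IN_TIME_CONSISTENT",  # verify after transfer
            "PreserveDeletedFiles": "PRESERVE",        # how to treat deleted files
        },
    }

params = build_task_params(
    "arn:aws:datasync:region:account:location/loc-source",
    "arn:aws:datasync:region:account:location/loc-dest",
)
# import boto3
# client = boto3.client("datasync")
# task = client.create_task(**params)
# client.start_task_execution(TaskArn=task["TaskArn"])
```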

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--G3S_Hmkj--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/j9cqad0gifbcy4fvmg5n.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--G3S_Hmkj--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/j9cqad0gifbcy4fvmg5n.png" alt="image"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h6&gt;Workflow&lt;/h6&gt;

&lt;ul&gt;
&lt;li&gt;In the &lt;em&gt;LAUNCHING&lt;/em&gt; status, DataSync initializes the task execution.&lt;/li&gt;
&lt;li&gt;In the &lt;em&gt;PREPARING&lt;/em&gt; status, DataSync examines the source and destination file systems to determine which files to sync.&lt;/li&gt;
&lt;li&gt;In the &lt;em&gt;TRANSFERRING&lt;/em&gt; status, DataSync starts transferring files and metadata from the source file system to the destination. &lt;/li&gt;
&lt;li&gt;In the &lt;em&gt;VERIFYING&lt;/em&gt; status, DataSync verifies consistency between the source and destination file systems.&lt;/li&gt;
&lt;/ul&gt;
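A minimal sketch of this workflow as a polling loop (plain Python; `describe` stands in for a real status-lookup call such as DescribeTaskExecution):

```python
# Polling sketch: a task execution moves through the transient workflow
# statuses above until it reaches a terminal one. `describe` is a
# stand-in for a real status API call.

def wait_for_completion(describe):
    transient = {"LAUNCHING", "PREPARING", "TRANSFERRING", "VERIFYING"}
    seen = []
    while True:
        status = describe()
        seen.append(status)
        if status not in transient:
            return status, seen

# Simulated status sequence for one successful task execution.
statuses = iter(["LAUNCHING", "PREPARING", "TRANSFERRING", "VERIFYING", "SUCCESS"])
final, history = wait_for_completion(lambda: next(statuses))
```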

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--02xiGnNh--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/2xt8ix10vq3ruw2dmixz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--02xiGnNh--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/2xt8ix10vq3ruw2dmixz.png" alt="image"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Source:&lt;/em&gt; &lt;a href="https://docs.aws.amazon.com/"&gt;AWS Documentation&lt;/a&gt;&lt;/p&gt;

</description>
      <category>aws</category>
      <category>migration</category>
    </item>
    <item>
      <title>Caching Overview</title>
      <dc:creator>Kapil Uthra</dc:creator>
      <pubDate>Tue, 24 Aug 2021 09:15:04 +0000</pubDate>
      <link>https://forem.com/kapiluthra/caching-overview-40hl</link>
      <guid>https://forem.com/kapiluthra/caching-overview-40hl</guid>
      <description>&lt;p&gt;In computing, a cache is a high-speed data storage layer which stores a subset of data, typically transient in nature, so that future requests for that data are served up faster than is possible by accessing the data’s primary storage location. Caching allows you to efficiently reuse previously retrieved or computed data.&lt;/p&gt;

&lt;h3&gt;Areas where caching can exist&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Client-Side - HTTP Cache Headers, Browsers&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;DNS - DNS Servers&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Web - HTTP Cache Headers, CDNs, Reverse Proxies, Web Accelerators, Key/Value Stores&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;App - Key/Value data stores, Local caches&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Database - Database buffers, Key/Value data stores&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0uhodsvhd4y3eyqe4b8x.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0uhodsvhd4y3eyqe4b8x.png" alt="Caching Existence"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;Common questions when working with caching&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Is it safe to use a cached value? The same piece of data can have different consistency requirements in different contexts. For example, during online checkout, you need the authoritative price of an item, so caching might not be appropriate. On other pages, however, the price might be a few minutes out of date without a negative impact on users.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Is caching effective for that data? Some applications generate access patterns that are not suitable for caching, such as sweeping through the key space of a large dataset that changes frequently. In this case, keeping the cache up to date could offset any advantage caching could offer.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Is the data structured well for caching? Simply caching a database record can often be enough to offer significant performance advantages. However, other times, data is best cached in a format that combines multiple records together. Because caches are simple key-value stores, you might also need to cache a data record in multiple different formats, so you can access it by different attributes in the record.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;Caching design patterns&lt;/h3&gt;

&lt;h4&gt;Lazy caching&lt;/h4&gt;

&lt;ol&gt;
&lt;li&gt;Your app receives a query for data, for example the top 10 most recent news stories.&lt;/li&gt;
&lt;li&gt;Your app checks the cache to see if the object is in cache.&lt;/li&gt;
&lt;li&gt;If so (a cache hit), the cached object is returned, and the call flow ends.&lt;/li&gt;
&lt;li&gt;If not (a cache miss), then the database is queried for the object. The cache is populated, and the object is returned.&lt;/li&gt;
&lt;/ol&gt;
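The four steps above can be sketched in a few lines of Python (names are illustrative; a dict stands in for the cache):

```python
# Cache-aside (lazy caching): check the cache first, fall back to the
# database on a miss, then populate the cache for future requests.

cache = {}
db_queries = []  # tracks how often the "database" is actually hit

def fetch_from_db(key):
    db_queries.append(key)           # stand-in for a real database query
    return f"stories-for-{key}"

def get(key):
    if key in cache:                 # cache hit: return immediately
        return cache[key]
    value = fetch_from_db(key)       # cache miss: query the database
    cache[key] = value               # populate the cache
    return value

first = get("top10")   # miss: hits the database
second = get("top10")  # hit: served from the cache
```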

&lt;h4&gt;Write-through&lt;/h4&gt;

&lt;p&gt;In a write-through cache, the cache is updated in real time when the database is updated. So, if a user updates their profile, the updated profile is also pushed into the cache. You can think of this as being proactive: it avoids unnecessary cache misses for data that you know is going to be accessed. A good example is any type of aggregate, such as the top 10 most popular news stories, or even recommendations. Because this data is typically updated by a specific piece of application code or a background job, it's straightforward to update the cache as well.&lt;/p&gt;
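A minimal write-through sketch in Python (plain dicts stand in for the cache and the database):

```python
# Write-through: every database write also pushes the new value into
# the cache, so the next read is a guaranteed hit.

cache = {}
database = {}

def update_profile(user_id, profile):
    database[user_id] = profile  # authoritative write
    cache[user_id] = profile     # proactive cache update, no later miss

def get_profile(user_id):
    # Reads normally come straight from the cache.
    return cache.get(user_id) or database[user_id]

update_profile("u1", {"name": "Kapil"})
```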

&lt;h3&gt;Points to consider&lt;/h3&gt;

&lt;p&gt;Always apply a &lt;strong&gt;time to live&lt;/strong&gt; (TTL) to all of your cache keys, except those you are updating by write-through caching. You can use a long time, say hours or even days. This approach catches application bugs, where you forget to update or delete a given cache key when updating the underlying record. Eventually, the cache key will auto-expire and get refreshed.&lt;/p&gt;

&lt;p&gt;For rapidly changing data, rather than adding write-through caching or complex expiration logic, just set a short TTL of a few seconds. If you have a database query that is getting hammered in production, it's just a few lines of code to add a cache key with a 5-second TTL around the query. This can be a wonderful Band-Aid that keeps your application up and running while you evaluate more elegant solutions.&lt;/p&gt;
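A short-TTL cache can be sketched with a timestamp per key (plain Python; a real deployment would use Redis or Memcached TTLs instead):

```python
# TTL cache sketch: each entry carries an expiry timestamp, and an
# expired entry is treated as a miss and dropped on read.
import time

cache = {}  # key -> (value, expiry timestamp)

def set_with_ttl(key, value, ttl_seconds):
    cache[key] = (value, time.monotonic() + ttl_seconds)

def get(key):
    entry = cache.get(key)
    if entry is None:
        return None
    value, expiry = entry
    if time.monotonic() >= expiry:   # expired: evict and report a miss
        del cache[key]
        return None
    return value

set_with_ttl("hot-query", [1, 2, 3], ttl_seconds=0.05)
fresh = get("hot-query")   # within the TTL: a hit
time.sleep(0.06)
stale = get("hot-query")   # past the TTL: a miss
```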

&lt;p&gt;&lt;strong&gt;Evictions&lt;/strong&gt; occur when the cache's memory usage exceeds its configured maximum (the max memory setting), causing the engine to select keys to evict in order to free memory. Which keys are chosen depends on the eviction policy that is selected.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;allkeys-lfu: The cache evicts the least frequently used (LFU) keys regardless of TTL set&lt;/li&gt;
&lt;li&gt;allkeys-lru: The cache evicts the least recently used (LRU) regardless of TTL set&lt;/li&gt;
&lt;li&gt;volatile-lfu: The cache evicts the least frequently used (LFU) keys from those that have a TTL set&lt;/li&gt;
&lt;li&gt;volatile-lru: The cache evicts the least recently used (LRU) from those that have a TTL set&lt;/li&gt;
&lt;li&gt;volatile-ttl: The cache evicts the keys with shortest TTL set&lt;/li&gt;
&lt;li&gt;volatile-random: The cache randomly evicts keys with a TTL set&lt;/li&gt;
&lt;li&gt;allkeys-random: The cache randomly evicts keys regardless of TTL set&lt;/li&gt;
&lt;li&gt;no-eviction: The cache doesn’t evict keys at all. This blocks future writes until memory frees up.&lt;/li&gt;
&lt;/ul&gt;
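As an illustration, allkeys-lru behavior can be sketched with an OrderedDict (a toy model, not the cache engine's actual implementation):

```python
# Toy model of allkeys-lru: on overflow, evict the least recently used
# key, regardless of any TTL.
from collections import OrderedDict

class LRUCache:
    def __init__(self, max_keys):
        self.max_keys = max_keys
        self.data = OrderedDict()  # insertion order tracks recency

    def get(self, key):
        if key not in self.data:
            return None
        self.data.move_to_end(key)       # mark as most recently used
        return self.data[key]

    def put(self, key, value):
        if key in self.data:
            self.data.move_to_end(key)
        self.data[key] = value
        if len(self.data) > self.max_keys:
            self.data.popitem(last=False)  # evict the LRU key

c = LRUCache(max_keys=2)
c.put("a", 1)
c.put("b", 2)
c.get("a")      # "a" is now the most recently used key
c.put("c", 3)   # over capacity: evicts "b", the least recently used
```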

&lt;p&gt;For basic caching use cases, LRU-based policies are the most common; however, depending on your objectives, a TTL-based or random eviction policy may better match your needs. A high eviction rate usually indicates that the cache cluster or node needs a larger memory footprint.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Source:&lt;/em&gt; &lt;a href="https://docs.aws.amazon.com/" rel="noopener noreferrer"&gt;AWS Documentation&lt;/a&gt;&lt;/p&gt;

</description>
      <category>aws</category>
      <category>caching</category>
    </item>
    <item>
      <title>Migrate an on-premises Microsoft SQL Server database to Amazon RDS via Amazon S3</title>
      <dc:creator>Kapil Uthra</dc:creator>
      <pubDate>Sun, 22 Aug 2021 03:36:45 +0000</pubDate>
      <link>https://forem.com/kapiluthra/migrate-an-on-premises-microsoft-sql-server-database-to-amazon-rds-via-amazon-s3-5hna</link>
      <guid>https://forem.com/kapiluthra/migrate-an-on-premises-microsoft-sql-server-database-to-amazon-rds-via-amazon-s3-5hna</guid>
      <description>&lt;p&gt;This migration process includes making a backup and restoring the backup in an Amazon Simple Storage Service (Amazon S3) bucket and using SQL Server Management Studio (SSMS).&lt;/p&gt;

&lt;p&gt;This process includes the following major steps:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Making a backup of the on-premises database via SSMS (SQL Server Management Studio)&lt;/li&gt;
&lt;li&gt;Transferring the backup file to Amazon Simple Storage Service (Amazon S3)&lt;/li&gt;
&lt;li&gt;Restoring the backup file into Amazon RDS for SQL Server&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdherhuuwtdgb1jsf27vh.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdherhuuwtdgb1jsf27vh.png" alt="Architecture Diagram"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h4&gt;Create an Amazon RDS for Microsoft SQL Server DB instance&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;Select SQL Server as the database engine.&lt;/li&gt;
&lt;li&gt;Choose the required SQL Server edition.&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;Create a backup file from the on-premises Microsoft SQL Server database&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;Connect to the on-premises SQL Server database through SSMS.&lt;/li&gt;
&lt;li&gt;Create a backup of the database.&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;Upload the backup file to Amazon S3&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;Create a bucket in Amazon S3.&lt;/li&gt;
&lt;li&gt;Upload the backup file to the S3 bucket.&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;Restore the database in Amazon RDS for SQL Server&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;Open the Amazon RDS console.&lt;/li&gt;
&lt;li&gt;Choose Option groups in the navigation pane.&lt;/li&gt;
&lt;li&gt;Choose the Create group button.&lt;/li&gt;
&lt;li&gt;Add the &lt;code&gt;SQLSERVER_BACKUP_RESTORE&lt;/code&gt; option to the option group.&lt;/li&gt;
&lt;li&gt;Associate the option group with the Amazon RDS for SQL Server instance.&lt;/li&gt;
&lt;li&gt;Connect to Amazon RDS for SQL Server through SSMS.&lt;/li&gt;
&lt;li&gt;Call the &lt;code&gt;rds_restore_database&lt;/code&gt; stored procedure to restore the database.&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;More details on restoring a database&lt;/h4&gt;

&lt;p&gt;As mentioned above, to restore the database in Amazon RDS, call the &lt;code&gt;rds_restore_database&lt;/code&gt; stored procedure. Amazon RDS creates an initial snapshot of the database after the restore task is complete and the database is open.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;exec msdb.dbo.rds_restore_database &lt;br&gt;
@restore_db_name='database_name', &lt;br&gt;
@s3_arn_to_restore_from='arn:aws:s3:::bucket_name/file_name.extension',&lt;br&gt;
@with_norecovery=0|1,&lt;br&gt;
[@kms_master_key_arn='arn:aws:kms:region:account-id:key/key-id'],&lt;br&gt;
[@type='DIFFERENTIAL|FULL'];&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Example of a single-file restore:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;exec msdb.dbo.rds_restore_database&lt;br&gt;
@restore_db_name='mydatabase',&lt;br&gt;
@s3_arn_to_restore_from='arn:aws:s3:::mybucket/backup1.bak';&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Example of a multi-file restore:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;exec msdb.dbo.rds_restore_database&lt;br&gt;
@restore_db_name='mydatabase',&lt;br&gt;
@s3_arn_to_restore_from='arn:aws:s3:::mybucket/backup*';&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Check the status of the restore task:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;exec msdb.dbo.rds_task_status&lt;br&gt;
[@db_name='database_name'],&lt;br&gt;
[@task_id=ID_number];&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Example:&lt;br&gt;
&lt;code&gt;exec msdb.dbo.rds_task_status @db_name='mydatabase';&lt;/code&gt;&lt;/p&gt;
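If you are scripting the restore, the statement above is easy to assemble from a bucket name and object key (illustrative helper; the function name is made up):

```python
# Hypothetical helper that builds the T-SQL restore statement shown
# above from an S3 bucket name and object key.

def build_restore_statement(db_name, bucket, key):
    s3_arn = f"arn:aws:s3:::{bucket}/{key}"  # ARN format for an S3 object
    return (
        "exec msdb.dbo.rds_restore_database "
        f"@restore_db_name='{db_name}', "
        f"@s3_arn_to_restore_from='{s3_arn}';"
    )

stmt = build_restore_statement("mydatabase", "mybucket", "backup1.bak")
```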

&lt;p&gt;&lt;em&gt;Source:&lt;/em&gt; &lt;a href="https://docs.aws.amazon.com/" rel="noopener noreferrer"&gt;AWS Documentation&lt;/a&gt;&lt;/p&gt;

</description>
      <category>aws</category>
      <category>architecture</category>
      <category>migration</category>
      <category>database</category>
    </item>
  </channel>
</rss>
