<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: ScaleGrid</title>
    <description>The latest articles on Forem by ScaleGrid (@scalegridio).</description>
    <link>https://forem.com/scalegridio</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F33918%2F2693d4fa-77e8-4188-9d87-f60389ab7feb.png</url>
      <title>Forem: ScaleGrid</title>
      <link>https://forem.com/scalegridio</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/scalegridio"/>
    <language>en</language>
    <item>
      <title>MongoDB ACID Transactions – Keep your Data Safe</title>
      <dc:creator>ScaleGrid</dc:creator>
      <pubDate>Fri, 22 Sep 2023 11:49:06 +0000</pubDate>
      <link>https://forem.com/scalegrid/mongodb-acid-transactions-keep-your-data-safe-5719</link>
      <guid>https://forem.com/scalegrid/mongodb-acid-transactions-keep-your-data-safe-5719</guid>
      <description>&lt;p&gt;Most popular relational databases today support “ACID” properties – Atomicity, Consistency, Isolation and Durability. Developers and DBAs (Database Administrators) working with databases have a good understanding of ACID behavior.&lt;/p&gt;

&lt;p&gt;However, when working with NoSQL databases like MongoDB, there are some key differences that are important for you to understand. MongoDB offers great flexibility in storage, schema, and scaling, but relaxes some of the ACID properties. It’s essential to understand these differences as you model your data and execute MongoDB commands.&lt;/p&gt;

&lt;p&gt;The ACID properties are essential for data processing in database systems. They ensure transactions either succeed entirely or are not executed at all, safeguarding data integrity.&lt;/p&gt;

&lt;p&gt;With enterprises handling immense data, the need for rapid, trustworthy transactions becomes paramount. Handling real-time processing, concurrency, and business-critical transactional systems across multiple servers presents challenges. In this complex landscape, the importance of MongoDB’s support for multi-document ACID transactions continues to grow.&lt;/p&gt;

&lt;h2&gt;ACID Properties Explained&lt;/h2&gt;

&lt;p&gt;A database treats a transaction as a single sequence of operations, so related data can be stored and managed safely. ACID properties are at the heart of reliable processing, ensuring the database’s integrity even when concurrent transactions or disruptions occur.&lt;/p&gt;

&lt;h2&gt;
&lt;a href="https://scalegrid.io/blog/mongodb-acid/"&gt;MongoDB ACID&lt;/a&gt; Principles&lt;/h2&gt;

&lt;h3&gt;Atomicity&lt;/h3&gt;

&lt;p&gt;Wikipedia defines Atomicity as follows – “In an atomic transaction, a series of database operations either all occur, or nothing occurs. A guarantee of atomicity prevents updates to the database from occurring only partially, which can cause greater problems than rejecting the whole series outright. In other words, atomicity means indivisibility and irreducibility”.&lt;/p&gt;

&lt;p&gt;Atomicity ensures that all operations within a database transaction are treated as a single, indivisible unit. Either all the operations are executed or none are, ensuring there’s no halfway point where a transaction is left incomplete.&lt;/p&gt;

&lt;p&gt;This principle is crucial for safeguarding the integrity of a database, especially in scenarios where system failures or crashes could occur mid-transaction. For example, in MongoDB, a write operation is atomic at the level of a single document, even if the operation modifies multiple embedded documents within that document.&lt;/p&gt;

&lt;p&gt;In real-world applications, consider financial systems: if a bank transaction fails after debiting an account but before crediting another bank account, financial errors could occur. Atomicity ensures that such halfway scenarios are avoided. If any part of the transaction encounters an issue, the whole transaction is rolled back.&lt;/p&gt;
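
&lt;p&gt;The all-or-nothing behavior in the bank example can be sketched in plain Python (an illustrative analogy, not a database API; the function and flag names here are hypothetical):&lt;/p&gt;

```python
# All-or-nothing transfer, sketched in plain Python (hypothetical names).
def transfer(accounts, source, target, amount, fail_after_debit=False):
    snapshot = dict(accounts)            # remember the pre-transaction state
    try:
        accounts[source] -= amount       # debit the source account
        if fail_after_debit:             # simulate a crash mid-transaction
            raise RuntimeError("system failure after debit")
        accounts[target] += amount       # credit the target account
    except Exception:
        accounts.clear()
        accounts.update(snapshot)        # roll back: no partial update survives
        raise

accounts = {"savings": 100, "checking": 0}
transfer(accounts, "savings", "checking", 40)
print(accounts)                          # {'savings': 60, 'checking': 40}
```

&lt;p&gt;If the simulated failure fires after the debit, the rollback restores the snapshot, so the money never vanishes in a half-completed state.&lt;/p&gt;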

&lt;p&gt;Atomicity works hand-in-hand with the other ACID properties. For instance, while atomicity ensures all-or-nothing execution, durability guarantees that once a transaction is committed, it remains so, even in the face of system failures.&lt;/p&gt;

&lt;h3&gt;Consistency&lt;/h3&gt;

&lt;p&gt;Consistency ensures that every transaction in a database brings it from one valid state to another. Before the start of a transaction, the database is in a consistent state, and it should return to this state once the transaction is completed. Any transaction, therefore, doesn’t violate predefined rules or constraints. If, for some reason, a transaction can’t meet these criteria, it’s rolled back, and the database remains unchanged.&lt;/p&gt;

&lt;p&gt;While atomicity ensures all parts of a transaction complete or none do, consistency ensures the transaction maintains the database’s integrity. Together, they prevent databases from reaching unreliable states.&lt;/p&gt;

&lt;h3&gt;Isolation&lt;/h3&gt;

&lt;p&gt;Isolation addresses the issue of concurrency in databases. When multiple transactions occur simultaneously, isolation ensures that each one executes as if no other transactions were running at the same time. This gives the illusion that transactions are processed sequentially.&lt;/p&gt;

&lt;p&gt;Without proper isolation, simultaneous transactions could interfere with each other, leading to potential data corruption. By keeping transactions separate and unaware of each other, isolation ensures that each transaction maintains the database’s consistent state, even in a multi-user environment.&lt;/p&gt;

&lt;h3&gt;Durability&lt;/h3&gt;

&lt;p&gt;Durability promises that once a transaction is committed, it will remain so, even if the system crashes immediately afterward. The changes made during the transaction are permanent and won’t be lost due to any subsequent failures.&lt;/p&gt;

&lt;p&gt;Post-transaction, any changes made are stored safely. This assurance is vital for trust in a DBMS. For businesses, this guarantees that once they receive a confirmation, the transaction, whether financial or otherwise, is settled and irreversible.&lt;/p&gt;

&lt;p&gt;While atomicity ensures an all-or-nothing approach during transaction processing, durability ensures that this ‘all’ remains intact and unaffected by unforeseen system challenges, cementing the changes permanently.&lt;/p&gt;

&lt;h2&gt;What are ACID transactions?&lt;/h2&gt;

&lt;p&gt;ACID transactions are closely intertwined with Database Management Systems (DBMS). These principles are put forth to maintain a high standard of data reliability within DBMS. By adhering to ACID properties, DBMS can ensure error-free transaction processing, even when facing multiple, simultaneous operations.&lt;/p&gt;

&lt;p&gt;In an era of massive data influx, ensuring that data operations are consistent and reliable is vital. ACID transactions prevent data anomalies, providing a framework that safeguards data, which in turn reinforces trust in data-driven decisions.&lt;/p&gt;

&lt;h3&gt;ACID transaction example&lt;/h3&gt;

&lt;p&gt;Imagine a bank transaction where a user transfers money from savings to checking. Two operations are involved: debiting the savings account and crediting the checking account. If a serious system failure or crash occurs after the debit but before the credit, without ACID compliance, the money could vanish. Atomicity ensures that both operations are completed or none are, protecting the user’s funds.&lt;/p&gt;

&lt;p&gt;In a relational database like SQL Server, consider an e-commerce scenario where a customer places an order. The system needs to update inventory, log the transaction, and adjust the customer’s balance.&lt;/p&gt;

&lt;p&gt;The interplay of these operations demands that either all succeed or all fail, maintaining database integrity. ACID properties, thus, are universally crucial across database platforms, not just MongoDB, to keep data validity and ensure reliable data processing.&lt;/p&gt;

&lt;h2&gt;Why are ACID transactions important?&lt;/h2&gt;

&lt;p&gt;ACID transactions serve as the backbone of data reliability in database systems. They ensure that every transaction is processed rigorously, minimizing risks of data anomalies or corruption.&lt;/p&gt;

&lt;p&gt;For businesses, ACID transactions offer peace of mind and operational continuity. In today’s digital age, where data drives decision-making, having a trustworthy and reliable database system is paramount.&lt;/p&gt;

&lt;p&gt;ACID transactions safeguard against potential data losses or inaccuracies which could lead to costly mistakes or misinformed strategies. By ensuring data integrity and consistency, these transactions empower businesses to confidently leverage their data, optimizing operations and driving growth.&lt;/p&gt;

&lt;h2&gt;How do Multi-Document ACID Transactions work in MongoDB?&lt;/h2&gt;

&lt;p&gt;MongoDB, known for its flexibility and scalability, has evolved to support multi-document ACID transactions, even in its NoSQL environment. Introduced in version 4.0, MongoDB’s transactions behave similarly to those in traditional relational databases, allowing multiple changes to be made in a single transaction across different collections and documents.&lt;/p&gt;

&lt;p&gt;Utilizing snapshot isolation, MongoDB presents a consistent view of data, ensuring that ongoing transactions are not affected by others. Furthermore, if any part of the transaction fails, MongoDB ensures that no changes are committed, preserving atomicity.&lt;/p&gt;
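
&lt;p&gt;In the real drivers, this pattern appears as a session with an explicit transaction (for example, PyMongo’s &lt;code&gt;start_session&lt;/code&gt; and &lt;code&gt;start_transaction&lt;/code&gt;). The sketch below is a driver-free, in-memory analogue of the commit-or-discard behavior; all names are hypothetical:&lt;/p&gt;

```python
# In-memory analogue of a multi-document transaction (hypothetical names,
# not the MongoDB driver API): changes are staged on a private snapshot
# and published only if the whole block succeeds.
from contextlib import contextmanager
import copy

@contextmanager
def transaction(db):
    working = copy.deepcopy(db)   # snapshot isolation: a private view of the data
    yield working                 # if the block raises, we never reach the commit
    db.clear()
    db.update(working)            # "commit": all changes become visible at once

db = {"inventory": {"widget": 5}, "orders": []}
with transaction(db) as txn:
    txn["inventory"]["widget"] -= 2
    txn["orders"].append({"item": "widget", "qty": 2})
print(db["inventory"]["widget"])  # 3
```

&lt;p&gt;Real MongoDB transactions add retry logic and server-side coordination, but the commit-or-discard shape is the same: either both the order and the inventory update land, or neither does.&lt;/p&gt;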

&lt;p&gt;While this post focuses on MongoDB ACID transactions, it’s also worthwhile to understand the role of storage engines in MongoDB’s overall performance and capabilities. Different storage engines offer varying levels of ACID compliance, and your choice can significantly affect your database’s efficiency and reliability.&lt;/p&gt;

&lt;p&gt;If you’re interested in diving deeper into how different storage engines impact MongoDB’s performance and ACID compliance, we recommend reading our detailed blog post on MongoDB Storage Engines.&lt;/p&gt;

&lt;p&gt;Traditional Relational Database Management Systems (RDBMS) and MongoDB differ in their handling of ACID transactions. While both systems aim for the same outcomes in terms of integrity, their architectures differ. RDBMSes focus on structured, related data with predefined schemas, whereas MongoDB’s document-oriented approach allows more fluid data structures.&lt;/p&gt;

&lt;p&gt;In RDBMS, transactions usually involve structured queries across tables. In contrast, MongoDB’s transactions span collections of JSON-like documents. Despite these structural differences, MongoDB has incorporated critical ACID principles, ensuring that even in a NoSQL world, the sanctity and reliability of data are upheld, drawing parallels in transactional trustworthiness with its relational counterparts.&lt;/p&gt;

&lt;h2&gt;When should I use MongoDB multi-document transactions?&lt;/h2&gt;

&lt;p&gt;MongoDB ACID multi-document transactions should be employed when operations span multiple collections or require modifications to multiple documents concurrently. They are ideal for complex use cases where inter-related documents must be updated together, ensuring database integrity and consistency across changes. These transactions become particularly useful in scenarios like inventory management or financial operations where a chain of related data alterations is common.&lt;/p&gt;

&lt;p&gt;&lt;b&gt;Benefits and Challenges&lt;/b&gt;&lt;/p&gt;

&lt;p&gt;The key benefit of using multi-document transactions is the guarantee of ACID properties across multiple operations, ensuring consistent and reliable data updates. However, they might introduce a performance overhead, especially if overused or misconfigured. It’s crucial to evaluate when the complexity of operations justifies their use, balancing the need for integrity with system performance.&lt;/p&gt;

&lt;h2&gt;Best Practices with MongoDB Database Transactions&lt;/h2&gt;

&lt;p&gt;When implementing MongoDB ACID transactions, always ensure that the four properties are upheld. Make use of session and commit functionalities to manage your transactions, ensuring that they either complete successfully or revert to the pre-transaction state. Regularly monitor the system for any inconsistencies and rectify them promptly.&lt;/p&gt;

&lt;p&gt;&lt;b&gt;Tips for optimization&lt;/b&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Limit the scope of transactions to what’s essential, avoiding unnecessary operations.&lt;/li&gt;
&lt;li&gt;Monitor transaction durations and keep them as short as possible to reduce contention.&lt;/li&gt;
&lt;li&gt;Structure your database schema effectively, which can greatly improve transactional efficiency.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;ScaleGrid’s DBaaS solution streamlines MongoDB ACID transactions by offering automated management tools and intuitive interfaces. The platform ensures optimal performance, monitoring, and scalability, simplifying transactional complexities. By leveraging ScaleGrid, businesses can achieve more efficient, reliable, and effortless transaction management without administrative overhead.&lt;/p&gt;

&lt;h2&gt;ACID vs BASE&lt;/h2&gt;

&lt;p&gt;ACID (Atomicity, Consistency, Isolation, Durability) and BASE (Basically Available, Soft State, Eventually Consistent) represent two fundamental philosophies in the database world. While ACID emphasizes strict reliability and consistency, often at the cost of availability and partition tolerance, BASE takes a more relaxed approach, prioritizing availability over immediate consistency. In the BASE model, databases ensure that data becomes consistent over time, accepting temporary inconsistencies.&lt;/p&gt;

&lt;p&gt;MongoDB primarily aligns with the BASE philosophy, focusing on scalability and flexibility. However, with the introduction of multi-document ACID transactions in its recent versions, MongoDB has made significant strides towards bridging the gap between the two paradigms.&lt;/p&gt;

&lt;p&gt;It now offers developers the choice: they can leverage BASE’s scalability and resilience for many use cases while employing MongoDB ACID transactions when strict data integrity is important. This dual capability positions MongoDB uniquely, allowing it to cater to a wide range of application requirements and use cases, from high-availability systems to business-critical transactional processes.&lt;/p&gt;

&lt;h2&gt;Summary&lt;/h2&gt;

&lt;p&gt;Throughout this article, we’ve delved deep into the essence of ACID properties, their fundamental role in DBMS, and how MongoDB has embraced these tenets, providing developers with robust tools for transaction management.&lt;/p&gt;

&lt;p&gt;We’ve compared the traditional ACID approach to the more modern BASE philosophy, highlighting MongoDB’s unique position in bridging these two paradigms. As enterprises seek reliability and flexibility in their database solutions, understanding these principles becomes crucial.&lt;/p&gt;

&lt;p&gt;If you want to know how MongoDB compares to Redis in terms of performance, speed, and scalability, check out our Redis vs. MongoDB Performance blog post.&lt;/p&gt;

&lt;h2&gt;Elevate Your MongoDB Experience with ScaleGrid&lt;/h2&gt;

&lt;p&gt;Experience top performance and reliability with ScaleGrid’s ACID-compliant &lt;a href="https://scalegrid.io/mongodb/"&gt;managed MongoDB&lt;/a&gt; service, offering features like automated backups, performance analytics, and seamless scaling to improve data integrity.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://scalegrid.io/contact"&gt;Book a demo&lt;/a&gt; and discover how ScaleGrid can transform your database experience!&lt;/p&gt;

</description>
    </item>
    <item>
      <title>PostgreSQL EXPLAIN – What are the Query Costs?</title>
      <dc:creator>ScaleGrid</dc:creator>
      <pubDate>Mon, 04 Apr 2022 21:25:27 +0000</pubDate>
      <link>https://forem.com/scalegrid/postgresql-explain-what-are-the-query-costs-4j17</link>
      <guid>https://forem.com/scalegrid/postgresql-explain-what-are-the-query-costs-4j17</guid>
      <description>&lt;h2&gt;Understanding the Postgres EXPLAIN cost&lt;/h2&gt;

&lt;p&gt;&lt;code&gt;EXPLAIN&lt;/code&gt; is very useful for understanding the performance of a Postgres query. It returns the execution plan generated by the PostgreSQL query planner for a given statement. The &lt;a href="https://scalegrid.io/blog/postgres-explain-cost/"&gt;&lt;code&gt;EXPLAIN&lt;/code&gt;&lt;/a&gt; command shows, for example, whether the tables referenced in a statement will be read using an index scan or a sequential scan.&lt;/p&gt;

&lt;p&gt;Some of the first things you’ll notice when reviewing the output of an &lt;code&gt;EXPLAIN&lt;/code&gt; command are the cost statistics, so it’s natural to wonder what they mean, how they’re calculated, and how they’re used.&lt;/p&gt;

&lt;p&gt;In short, the PostgreSQL query planner is estimating how much time the query will take (in an arbitrary unit), with both a startup cost and a total cost for each operation. More on that later. When it has multiple options for executing a query, it uses these costs to choose the cheapest, and therefore hopefully fastest, option.&lt;/p&gt;

&lt;h2&gt;What unit are the costs in?&lt;/h2&gt;

&lt;p&gt;&lt;b&gt;The costs are in an arbitrary unit.&lt;/b&gt; A common misunderstanding is that they are in milliseconds or some other unit of time, but that’s not the case.&lt;/p&gt;

&lt;p&gt;The cost units are anchored (by default) to a single sequential page read costing 1.0 units (&lt;code&gt;seq_page_cost&lt;/code&gt;). Each row processed adds 0.01 (&lt;code&gt;cpu_tuple_cost&lt;/code&gt;), and each non-sequential page read adds 4.0 (&lt;code&gt;random_page_cost&lt;/code&gt;). There are many more constants like this, all of which are configurable. That last one is a particularly common tuning candidate, at least on modern hardware. We’ll look into that more in a bit.&lt;/p&gt;

&lt;h2&gt;Startup Costs&lt;/h2&gt;

&lt;p&gt;The first numbers you see after &lt;code&gt;cost=&lt;/code&gt; are known as the "startup cost". This is an estimate of how long it will take to &lt;b&gt;fetch the first row&lt;/b&gt;. As such, the startup cost of an operation includes the cost of its children.  &lt;/p&gt;

&lt;p&gt;For a sequential scan, the startup cost will generally be close to zero, as it can start fetching rows straight away. For a sort operation, it will be higher because a large proportion of the work needs to be done before rows can start being returned.&lt;/p&gt;

&lt;p&gt;To look at an example, let’s create a simple test table with 1000 usernames:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
CREATE TABLE users (
    id bigint GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
    username text NOT NULL);
INSERT INTO users (username)
SELECT 'person' || n
FROM generate_series(1, 1000) AS n;
ANALYZE users;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Let’s take a look at a simple query plan, with a couple of operations:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
EXPLAIN SELECT * FROM users ORDER BY username;

QUERY PLAN                                                    |
--------------------------------------------------------------+
Sort  (cost=66.83..69.33 rows=1000 width=17)                  |
  Sort Key: username                                          |
  -&amp;gt;  Seq Scan on users  (cost=0.00..17.00 rows=1000 width=17)|

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In the above query plan, as expected, the estimated startup cost for the &lt;code&gt;Seq Scan&lt;/code&gt; is &lt;code&gt;0.00&lt;/code&gt;, and for the &lt;code&gt;Sort&lt;/code&gt; it is &lt;code&gt;66.83&lt;/code&gt;.&lt;/p&gt;

&lt;h2&gt;Total costs&lt;/h2&gt;

&lt;p&gt;The second cost statistic, after the startup cost and the two dots, is known as the "total cost". This is an estimate of how long it will take to &lt;b&gt;return all the rows&lt;/b&gt;.&lt;/p&gt;

&lt;p&gt;Let’s look at that example query plan again:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
QUERY PLAN                                                    |
--------------------------------------------------------------+
Sort  (cost=66.83..69.33 rows=1000 width=17)                  |
  Sort Key: username                                          |
  -&amp;gt;  Seq Scan on users  (cost=0.00..17.00 rows=1000 width=17)|

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We can see that the total cost of the &lt;code&gt;Seq Scan&lt;/code&gt; operation is &lt;code&gt;17.00&lt;/code&gt;. For the &lt;code&gt;Sort&lt;/code&gt; operation, it is 69.33, which is not much more than its startup cost of 66.83 (as expected). &lt;/p&gt;

&lt;p&gt;Total costs usually include the cost of the operations preceding them. For example, the total cost of the Sort operation above includes that of the Seq Scan.&lt;/p&gt;

&lt;p&gt;An important exception is the &lt;code&gt;LIMIT&lt;/code&gt; clause, which the planner uses to estimate whether it can abort early. If it only needs a small number of rows, which is a common condition, it may calculate that a simpler scan choice is cheaper (and likely faster).&lt;/p&gt;

&lt;p&gt;For example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
EXPLAIN SELECT * FROM users LIMIT 1;

QUERY PLAN                                                    |
--------------------------------------------------------------+
Limit  (cost=0.00..0.02 rows=1 width=17)                      |
  -&amp;gt;  Seq Scan on users  (cost=0.00..17.00 rows=1000 width=17)|

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;As you can see, the total cost reported on the Seq Scan node is still 17.00, but the total cost of the Limit operation is reported to be 0.02. This is because the planner expects that it will only have to process 1 row out of 1000, so the cost, in this case, is estimated to be 1/1000th of the total.&lt;/p&gt;
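
&lt;p&gt;This scaling is easy to reproduce as back-of-the-envelope arithmetic (a sketch of the idea, not the planner’s actual code):&lt;/p&gt;

```python
# Back-of-the-envelope check of the Limit node's cost (illustrative only).
seq_scan_total_cost = 17.00   # total cost of the underlying Seq Scan
estimated_rows = 1000         # planner's row estimate for the table
rows_needed = 1               # rows requested by LIMIT

limit_cost = seq_scan_total_cost * rows_needed / estimated_rows
print(round(limit_cost, 2))   # 0.02
```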

&lt;h2&gt;How the costs are calculated&lt;/h2&gt;

&lt;p&gt;In order to calculate these costs, the Postgres query planner uses both constants (some of which we’ve already seen) and metadata about the contents of the database. The metadata is often referred to as "statistics".&lt;/p&gt;

&lt;p&gt;Statistics are gathered via &lt;a href="https://www.postgresql.org/docs/current/sql-analyze.html"&gt;&lt;code&gt;ANALYZE&lt;/code&gt;&lt;/a&gt; (not to be confused with the &lt;code&gt;EXPLAIN&lt;/code&gt; parameter of the same name), and stored in &lt;a href="https://www.postgresql.org/docs/current/catalog-pg-statistic.html"&gt;&lt;code&gt;pg_statistic&lt;/code&gt;&lt;/a&gt;. They are also refreshed automatically as part of &lt;a href="https://www.postgresql.org/docs/current/routine-vacuuming.html#VACUUM-FOR-STATISTICS"&gt;autovacuum&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;These statistics include a number of very useful things, like roughly the number of rows each table has, and what the most common values in each column are.&lt;/p&gt;

&lt;p&gt;Let’s look at a simple example, using the same query data as before:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
EXPLAIN SELECT count(*) FROM users;

QUERY PLAN                                                   |
-------------------------------------------------------------+
Aggregate  (cost=19.50..19.51 rows=1 width=8)                |
  -&amp;gt;  Seq Scan on users  (cost=0.00..17.00 rows=1000 width=0)|

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In our case, the planner's statistics suggested the data for the table was stored within 7 pages (or blocks), and that 1000 rows would be returned. The cost parameters &lt;code&gt;seq_page_cost&lt;/code&gt;, &lt;code&gt;cpu_tuple_cost&lt;/code&gt;, and &lt;code&gt;cpu_operator_cost&lt;/code&gt; were left at their defaults of &lt;code&gt;1&lt;/code&gt;, &lt;code&gt;0.01&lt;/code&gt;, and &lt;code&gt;0.0025&lt;/code&gt; respectively.&lt;/p&gt;

&lt;p&gt;As such, the Seq Scan total cost was calculated as:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
Total cost of Seq Scan
= (estimated sequential page reads * seq_page_cost) + (estimated rows returned * cpu_tuple_cost)
= (7 * 1) + (1000 * 0.01)
= 7 + 10.00
= 17.00

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And for the Aggregate as:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
Total cost of Aggregate
= (cost of Seq Scan) + (estimated rows processed * cpu_operator_cost) + (estimated rows returned * cpu_tuple_cost)
= (17.00) + (1000 * 0.0025) + (1 * 0.01) 
= 17.00 + 2.50 + 0.01
= 19.51 

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
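
&lt;p&gt;The two calculations above can be checked with a few lines of arithmetic (a sketch using the default cost constants, not planner source code):&lt;/p&gt;

```python
# Reproduce the planner's arithmetic from the statistics and default constants.
seq_page_cost = 1.0
cpu_tuple_cost = 0.01
cpu_operator_cost = 0.0025

pages = 7      # sequential page reads, from the table's statistics
rows = 1000    # estimated rows returned by the Seq Scan

seq_scan_cost = pages * seq_page_cost + rows * cpu_tuple_cost
aggregate_cost = seq_scan_cost + rows * cpu_operator_cost + 1 * cpu_tuple_cost

print(round(seq_scan_cost, 2))     # 17.0
print(round(aggregate_cost, 2))    # 19.51
```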



&lt;h2&gt;How the planner uses the costs&lt;/h2&gt;

&lt;p&gt;Since we know Postgres will pick the query plan with the lowest total cost, we can use that to try to understand the choices it has made. For example, if a query is not using an index that you expect it to, you can use settings like &lt;a href="https://www.postgresql.org/docs/current/runtime-config-query.html#GUC-ENABLE-SEQSCAN"&gt;&lt;code&gt;enable_seqscan&lt;/code&gt;&lt;/a&gt; to massively discourage certain query plan choices. By this point, you shouldn’t be surprised to hear that settings like this work by increasing the costs!&lt;/p&gt;

&lt;p&gt;Row estimates are an extremely important part of cost estimation. They are used to calculate estimates for different join orders, join algorithms, scan types, and more. Row estimates that are off by a lot can make the cost estimates off by a lot, which can ultimately result in a suboptimal plan choice.&lt;/p&gt;

&lt;h3&gt;Using EXPLAIN ANALYZE to get a query plan&lt;/h3&gt;

&lt;p&gt;When you write SQL statements in PostgreSQL, &lt;code&gt;EXPLAIN ANALYZE&lt;/code&gt; is key to optimizing queries, making them faster and more efficient. In addition to displaying the query plan and PostgreSQL’s estimates, it executes the query (be careful with &lt;code&gt;UPDATE&lt;/code&gt; and &lt;code&gt;DELETE&lt;/code&gt;!) and shows the actual execution time and row count for each step in the execution process. This is invaluable for monitoring SQL performance.&lt;/p&gt;

&lt;p&gt;You can use &lt;code&gt;EXPLAIN ANALYZE&lt;/code&gt; to compare the estimated number of rows with the actual rows returned by each operation.&lt;/p&gt;

&lt;p&gt;Let's look at an example, using the same data again:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
EXPLAIN ANALYZE SELECT * FROM users ORDER BY username;

QUERY PLAN                                                                                                 |
-----------------------------------------------------------------------------------------------------------+
Sort  (cost=66.83..69.33 rows=1000 width=17) (actual time=20.569..20.684 rows=1000 loops=1)                |
  Sort Key: username                                                                                       |
  Sort Method: quicksort  Memory: 102kB                                                                    |
  -&amp;gt;  Seq Scan on users  (cost=0.00..17.00 rows=1000 width=17) (actual time=0.048..0.596 rows=1000 loops=1)|
Planning Time: 0.171 ms                                                                                    |
Execution Time: 20.793 ms                                                                                  |

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We can see that the total execution cost is still 69.33, with the majority of that being the Sort operation, and 17.00 coming from the Sequential Scan. Note that the query execution time is just under 21ms.&lt;/p&gt;

&lt;h4&gt;Sequential scan vs. Index Scan&lt;/h4&gt;

&lt;p&gt;Now, let’s add an index to try to avoid that costly sort of the entire table:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
CREATE INDEX people_username_idx ON users (username);

EXPLAIN ANALYZE SELECT * FROM users ORDER BY username;

QUERY PLAN                                                                                                                       |
---------------------------------------------------------------------------------------------------------------------------------+
Index Scan using people_username_idx on users  (cost=0.28..28.27 rows=1000 width=17) (actual time=0.052..1.494 rows=1000 loops=1)|
Planning Time: 0.186 ms                                                                                                          |
Execution Time: 1.686 ms                                                                                                         |

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;As you can see, the query planner has now chosen an Index Scan, since the total cost of that plan is 28.27 (lower than 69.33). It looks like the index scan was indeed more efficient than the sequential scan, as the query execution time is now just under 2ms.&lt;/p&gt;

&lt;h2&gt;Helping the planner estimate more accurately&lt;/h2&gt;

&lt;p&gt;We can help the planner estimate more accurately in two ways:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Help it gather better statistics&lt;/li&gt;
&lt;li&gt;Tune the constants it uses for the calculations&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The statistics can be especially bad after a big change to the data in a table. As such, when loading a lot of data into a table, you can help Postgres out by running a manual &lt;code&gt;ANALYZE&lt;/code&gt; on it. Statistics also do not persist over a major version upgrade, so that’s another important time to do this.&lt;/p&gt;

&lt;p&gt;Naturally, tables also change over time, so &lt;a href="https://www.postgresql.org/docs/current/runtime-config-autovacuum.html"&gt;tuning the autovacuum settings&lt;/a&gt; to make sure it runs frequently enough for your workload can be very helpful.&lt;/p&gt;

&lt;p&gt;If you’re having trouble with bad estimates for a column with a skewed distribution, you may benefit from increasing the amount of information Postgres gathers by using the &lt;a href="https://www.postgresql.org/docs/current/sql-altertable.html"&gt;&lt;code&gt;ALTER TABLE SET STATISTICS&lt;/code&gt;&lt;/a&gt; command, or even the &lt;a href="https://www.postgresql.org/docs/current/runtime-config-query.html#GUC-DEFAULT-STATISTICS-TARGET"&gt;&lt;code&gt;default_statistics_target&lt;/code&gt;&lt;/a&gt; for the whole database.&lt;/p&gt;

&lt;p&gt;Another common cause of bad estimates is that, by default, Postgres will assume that two columns are independent. You can fix this by asking it to gather correlation data on two columns from the same table via &lt;a href="https://www.postgresql.org/docs/current/sql-createstatistics.html"&gt;extended statistics&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;On the constant tuning front, there are a lot of parameters you can tune to suit your hardware. Assuming you’re running on SSDs, you’ll likely at minimum want to tune your setting of &lt;code&gt;random_page_cost&lt;/code&gt;. This defaults to 4, which is 4x more expensive than the &lt;code&gt;seq_page_cost&lt;/code&gt; we looked at earlier. This ratio made sense on spinning disks, but on SSDs it tends to penalize random I/O too much. As such, a setting closer to 1, or between 1 and 2, often makes more sense. At ScaleGrid, we default to 1.&lt;/p&gt;
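
&lt;p&gt;For example, on SSD-backed hardware you might lower it like this (the value &lt;code&gt;1.1&lt;/code&gt; here is an illustrative choice, not a universal recommendation):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
-- per session, to experiment:
SET random_page_cost = 1.1;

-- or cluster-wide, persisted in postgresql.auto.conf:
ALTER SYSTEM SET random_page_cost = 1.1;
SELECT pg_reload_conf();

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;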

&lt;h2&gt;Can I remove the costs from query plans?&lt;/h2&gt;

&lt;p&gt;For many of the reasons mentioned above, most people leave the costs on when running &lt;code&gt;EXPLAIN&lt;/code&gt;. However, should you wish, you can turn them off using the &lt;code&gt;COSTS&lt;/code&gt; parameter.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
EXPLAIN (COSTS OFF) SELECT * FROM users LIMIT 1;

QUERY PLAN             |
-----------------------+
Limit                  |
  -&amp;gt;  Seq Scan on users|

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;Conclusion&lt;/h2&gt;

&lt;p&gt;To recap, the costs in query plans are Postgres’ estimates of how expensive an SQL query will be, in an arbitrary unit.&lt;/p&gt;

&lt;p&gt;It picks the plan with the lowest overall cost, based on some configurable constants and some statistics it has gathered.&lt;/p&gt;

&lt;p&gt;Helping it estimate these costs more accurately is very important to help it make good choices, and keep your queries performant.&lt;/p&gt;

</description>
      <category>postgres</category>
      <category>database</category>
      <category>developer</category>
    </item>
    <item>
      <title>How to Migrate Redis™ Data Using Redis-Shake </title>
      <dc:creator>ScaleGrid</dc:creator>
      <pubDate>Fri, 26 Nov 2021 21:26:51 +0000</pubDate>
      <link>https://forem.com/scalegrid/how-to-migrate-redis-data-using-redis-shake-4cba</link>
      <guid>https://forem.com/scalegrid/how-to-migrate-redis-data-using-redis-shake-4cba</guid>
      <description>&lt;p&gt;Migrating ScaleGrid for Redis™* data from one server to another is a common requirement that we hear from our customers. Two of the main reasons we hear are often due to migration of hardware, or the need to split data between servers.&lt;/p&gt;

&lt;p&gt;Typically, you want to migrate with minimal downtime while using the standard Redis tools. In this blog post, we will guide you through the process using the open-source tool &lt;a href="https://github.com/alibaba/RedisShake"&gt;Redis-Shake&lt;/a&gt;. Developed and maintained by the NoSQL Team in the Alibaba-Cloud Database department, &lt;a href="https://scalegrid.io/blog/how-to-migrate-redis-data-using-redis-shake/"&gt;Redis-Shake allows you to easily migrate Redis data between Redis clusters&lt;/a&gt;. While this guide is aimed primarily at ScaleGrid for Redis™* deployments, it can also be used for other types of Redis deployments with some slight modifications.&lt;/p&gt;

&lt;p&gt;Without further ado, let’s get started with the migration guide!&lt;/p&gt;

&lt;h2&gt;Prerequisites&lt;/h2&gt;

&lt;p&gt;You’ll need a Linux or Windows machine to perform the migration. (If needed, this can be either the source or the target machine.)&lt;/p&gt;

&lt;p&gt;Note that Redis-Shake does not currently support SSL-enabled clusters. Before you continue, make sure that neither the source nor the target cluster is SSL-enabled.&lt;/p&gt;

&lt;h2&gt;Firewall Rule&lt;/h2&gt;

&lt;p&gt;Make sure you have created a firewall rule to allow your Linux/Windows machine to connect to both your source and target clusters. Refer to this &lt;a href="https://help.scalegrid.io/docs/redis-cluster-operations-firewall-rules"&gt;link&lt;/a&gt; for more info on how to set this up on ScaleGrid.&lt;/p&gt;

&lt;h2&gt;Download Redis-Shake &lt;/h2&gt;

&lt;p&gt;Download and decompress Redis-Shake on the Linux/Windows machine after choosing the latest stable version from this &lt;a href="https://github.com/alibaba/RedisShake/releases"&gt;link&lt;/a&gt;. You can also use the commands below to download the latest version as of this writing (v2.1.1).&lt;/p&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
wget https://github.com/alibaba/RedisShake/releases/download/release-v2.1.1-20210903/release-v2.1.1-20210903.tar.gz
tar -xvzf release-v2.1.1-20210903.tar.gz
cd release-v2.1.1-20210903/

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Next up, we will show you two different use cases for Redis-Shake.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Migrating data in Standalone and Master-Slave setups&lt;/li&gt;
&lt;li&gt;Migrating data in Cluster Mode&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
&lt;a id="1"&gt;&lt;/a&gt;Migrating Data - Standalone and Master-Slave&lt;/h2&gt;

&lt;p&gt;In this section, we will show you how to migrate your data in these different setups:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Standalone to Standalone&lt;/li&gt;
&lt;li&gt;Master-Slave to Master-Slave&lt;/li&gt;
&lt;li&gt;Standalone to Master-Slave (and vice-versa)&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;1. Gather info&lt;/h4&gt;

&lt;p&gt;Note down the following info from the Overview tab in the cluster details page from both your source and target cluster:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Hostname&lt;/strong&gt;: The Connection String is in the format [hostname:port]. It should look something like this: &lt;em&gt;EX-redms-00-redis-master.example.domain.io&lt;/em&gt; (in the case of a Master-Slave cluster, we’ll connect to the master).
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Port&lt;/strong&gt;: 6379&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Password&lt;/strong&gt;: You can find your password and reset it from the console under Credentials&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;2. Edit Config File&lt;/h4&gt;

&lt;p&gt;Open your “redis-shake.conf” file in the main folder and edit the following variables:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;source.type = &lt;strong&gt;standalone&lt;/strong&gt;&lt;br&gt;
source.address = &lt;strong&gt;&amp;lt;source_hostname&amp;gt;:6379&lt;/strong&gt;&lt;br&gt;
source.password_raw = &lt;strong&gt;&amp;lt;source_password&amp;gt;&lt;/strong&gt;&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;code&gt;target.type = &lt;strong&gt;standalone&lt;/strong&gt;&lt;br&gt;
target.address = &lt;strong&gt;&amp;lt;target_hostname&amp;gt;:6379&lt;/strong&gt;&lt;br&gt;
target.password_raw = &lt;strong&gt;&amp;lt;target_password&amp;gt;&lt;/strong&gt;&lt;/code&gt;&lt;/p&gt;

&lt;h4&gt;3. Sync Data&lt;/h4&gt;

&lt;p&gt;Run the following command to synchronize data between the source and target cluster:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;$ ./redis-shake -type sync -conf redis-shake.conf&lt;/code&gt;&lt;/p&gt;

&lt;h4&gt;4. Wait For Logs&lt;/h4&gt;

&lt;p&gt;Wait until you see the following information in the logs:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;sync rdb done.&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;This means that the full synchronization has been completed and incremental synchronization begins.&lt;/p&gt;

&lt;h4&gt;5. Stop Incremental Sync&lt;/h4&gt;

&lt;p&gt;If &lt;code&gt;+writeBytes=0&lt;/code&gt; is seen for a long time, it means no new data is being synchronized. You can stop the incremental synchronization by pressing &lt;strong&gt;Ctrl+C&lt;/strong&gt;. It should look something like this:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;sync:  +forwardCommands=0  +filterCommands=0  +writeBytes=0&lt;/code&gt;&lt;/p&gt;

&lt;h4&gt;6. Match Number of Keys&lt;/h4&gt;

&lt;p&gt;Now follow this &lt;a href="https://help.scalegrid.io/docs/connect-redis-via-command-line"&gt;link&lt;/a&gt; to our help doc, which shows you how to connect to both clusters using redis-cli, and then compare the number of keys between your clusters using the &lt;code&gt;info keyspace&lt;/code&gt; command on both clusters.&lt;/p&gt;
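&lt;p&gt;As a sketch, the comparison boils down to running the following against both clusters (hostnames and passwords are placeholders):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
redis-cli -h &amp;lt;source_hostname&amp;gt; -p 6379 -a &amp;lt;source_password&amp;gt; info keyspace
redis-cli -h &amp;lt;target_hostname&amp;gt; -p 6379 -a &amp;lt;target_password&amp;gt; info keyspace

# Each command prints key counts per database, in lines like:
# db0:keys=1000,expires=50,avg_ttl=0
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;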

&lt;p&gt;And that’s it! You’ve now successfully migrated data from one cluster to another using Redis-Shake. If you have clusters in Cluster Mode, you might want to have a look at our next section in this guide to see what you need to do if you're running that type of cluster.&lt;/p&gt;

&lt;h2&gt;
&lt;a id="2"&gt;&lt;/a&gt;Migrating Data - Cluster Mode&lt;/h2&gt;

&lt;p&gt;In this section, we will show you how to migrate data from one Redis deployment in cluster mode to another cluster.&lt;/p&gt;

&lt;h4&gt;1. Gather info&lt;/h4&gt;

&lt;p&gt;Note down the following info from the Overview tab in the cluster details page from both your source and target cluster:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Hostname&lt;/strong&gt;: The Connection String is in the format [hostname1:port], [hostname2:port], [hostname3:port]. It should look something like this: &lt;em&gt;EX-redms-03-redis-master.example.domain.io, EX-redms-06-redis-master.example.domain.io, EX-redms-09-redis-master.example.domain.io&lt;/em&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Port&lt;/strong&gt;: 6379&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Password&lt;/strong&gt;: You can find your password and reset it from the console under Credentials&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;2. Edit Config File&lt;/h4&gt;

&lt;p&gt;Open your “redis-shake.conf” file in the main folder and edit the following variables:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;source.type = &lt;strong&gt;cluster&lt;/strong&gt;&lt;br&gt;
source.address = &lt;strong&gt;&amp;lt;source_hostname1&amp;gt;:6379;&amp;lt;source_hostname2&amp;gt;:6379;&amp;lt;source_hostname3&amp;gt;:6379&lt;/strong&gt;&lt;br&gt;
source.password_raw = &lt;strong&gt;&amp;lt;source_password&amp;gt;&lt;/strong&gt;&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;code&gt;target.type = &lt;strong&gt;cluster&lt;/strong&gt;&lt;br&gt;
target.address = &lt;strong&gt;&amp;lt;target_hostname1&amp;gt;:6379;&amp;lt;target_hostname2&amp;gt;:6379;&amp;lt;target_hostname3&amp;gt;:6379&lt;/strong&gt;&lt;br&gt;
target.password_raw = &lt;strong&gt;&amp;lt;target_password&amp;gt;&lt;/strong&gt;&lt;/code&gt;&lt;/p&gt;

&lt;h4&gt;3. Sync Data&lt;/h4&gt;

&lt;p&gt;Run the following command to synchronize data between the source and target cluster:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;$ ./redis-shake -type sync -conf redis-shake.conf&lt;/code&gt;&lt;/p&gt;

&lt;h4&gt;4. Wait For Logs&lt;/h4&gt;

&lt;p&gt;Wait until you see the following information in the logs:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;sync rdb done.&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;This means that the full synchronization has been completed and incremental synchronization begins.&lt;/p&gt;

&lt;h4&gt;5. Stop Incremental Sync&lt;/h4&gt;

&lt;p&gt;If &lt;code&gt;+writeBytes=0&lt;/code&gt; is seen for a long time, it means no new data is being synchronized. You can stop the incremental synchronization by pressing &lt;strong&gt;Ctrl+C&lt;/strong&gt;. It should look something like this:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;sync:  +forwardCommands=0  +filterCommands=0  +writeBytes=0&lt;/code&gt;&lt;/p&gt;

&lt;h4&gt;6. Match Number of Keys&lt;/h4&gt;

&lt;p&gt;Now follow this &lt;a href="https://help.scalegrid.io/docs/connect-redis-via-command-line"&gt;link&lt;/a&gt; to our help doc, which shows you how to connect to both clusters using redis-cli, and then compare the number of keys between your clusters using the &lt;code&gt;info keyspace&lt;/code&gt; command on both clusters.&lt;/p&gt;

&lt;p&gt;And that’s it! You’ve now successfully migrated data from one cluster in Cluster Mode to another using Redis-Shake. For more info on Redis-Shake and its capabilities, visit their &lt;a href="https://github.com/alibaba/RedisShake"&gt;GitHub page&lt;/a&gt; to find out more and get the latest version.&lt;/p&gt;

&lt;p&gt;*Redis is a trademark of Redis Labs Ltd. Any rights therein are reserved to Redis Labs Ltd. Any use by ScaleGrid is for referential purposes only and does not indicate any sponsorship, endorsement or affiliation between Redis and ScaleGrid.&lt;/p&gt;

</description>
      <category>redis</category>
      <category>database</category>
      <category>developer</category>
    </item>
    <item>
      <title>New and Evolving PostgreSQL Enterprise Features with Recent Releases</title>
      <dc:creator>ScaleGrid</dc:creator>
      <pubDate>Wed, 15 Sep 2021 20:03:11 +0000</pubDate>
      <link>https://forem.com/scalegrid/new-and-evolving-postgresql-enterprise-features-with-recent-releases-3jhh</link>
      <guid>https://forem.com/scalegrid/new-and-evolving-postgresql-enterprise-features-with-recent-releases-3jhh</guid>
      <description>&lt;p&gt;PostgreSQL also introduced many other breakthrough features in versions 10 through 11, 12 and 13 that make it a real competitor against Oracle, such as partitioning enhancements, parallel query and logical replication. In this blog post, we will showcase some of the new and notable PostgreSQL features included in more recent versions of PostgreSQL.&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;th&gt;
&lt;h2&gt;New &amp;amp; Notable PostgreSQL Features&lt;/h2&gt;
&lt;ul&gt;
    &lt;li&gt;Partitioning Features&lt;/li&gt;
    &lt;li&gt;Indexing Features&lt;/li&gt;
    &lt;li&gt;Authentication&lt;/li&gt;
    &lt;li&gt;Other Notable Features&lt;/li&gt;
    &lt;li&gt;Summary&lt;/li&gt;
&lt;/ul&gt;
&lt;/th&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;h2&gt;
&lt;a id="1"&gt;&lt;/a&gt;Partitioning Features&lt;/h2&gt;

&lt;h3&gt;Declarative Partitioning&lt;/h3&gt;

&lt;p&gt;As of PostgreSQL version 9.4, there was still no declarative partitioning in PostgreSQL. It was achieved only through table inheritance, which offered very limited functionality and performance benefits, and much of it had to be managed manually through triggers or SQL commands. For example, we had to use triggers to route a row to the correct partition before INSERT, and we had to create indexes for each partition separately. PostgreSQL version 10 gave birth to declarative partitioning, but we still had to create constraints and indexes for each partition.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.postgresql.org/docs/release/11.0/"&gt;PostgreSQL 11 comes complete with a very impressive set of new Partitioning features&lt;/a&gt; to both help improve performance and also to help make partitioned tables more transparent to applications.&lt;/p&gt;

&lt;h3&gt;Logical replication of partitioned tables&lt;/h3&gt;

&lt;p&gt;PostgreSQL 13 introduced support for logically replicating partitioned tables. Previously, you were forced to replicate partitions to your standbys individually; now you can automatically replicate all your partitions at the same time.&lt;/p&gt;

&lt;h3&gt;Partitioning Methods&lt;/h3&gt;

&lt;p&gt;The currently supported partitioning methods are range, list, and hash.&lt;/p&gt;

&lt;h3&gt;Keys &amp;amp; Indexes&lt;/h3&gt;

&lt;p&gt;Full support for foreign keys on partitioned tables was added in PostgreSQL 12. PostgreSQL also boasts support for primary keys, indexes, and triggers on partitioned tables.&lt;/p&gt;

&lt;h3&gt;Default Partition&lt;/h3&gt;

&lt;p&gt;PostgreSQL allows the creation of a “default” partition for storing data that does not match any of the remaining partitions. Oracle users will love this feature as it is not available in Oracle Database.&lt;/p&gt;
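&lt;p&gt;A minimal sketch of declarative range partitioning with a default partition (table and column names are hypothetical):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
CREATE TABLE measurements (
    logdate date NOT NULL,
    reading numeric
) PARTITION BY RANGE (logdate);

CREATE TABLE measurements_2021 PARTITION OF measurements
    FOR VALUES FROM ('2021-01-01') TO ('2022-01-01');

-- Rows that match no other partition land in the default partition
CREATE TABLE measurements_default PARTITION OF measurements DEFAULT;
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;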

&lt;h3&gt;Row Movement&lt;/h3&gt;

&lt;p&gt;UPDATE statements that change a partition key column now cause affected rows to be moved to the appropriate partitions.&lt;/p&gt;

&lt;h3&gt;Partition Pruning&lt;/h3&gt;

&lt;p&gt;Partition pruning improves SELECT performance through enhanced partition elimination strategies during query planning and execution. A new method of performing &lt;a href="https://www.postgresql.org/docs/11/ddl-partitioning.html#DDL-PARTITION-PRUNING"&gt;partition elimination&lt;/a&gt; has been added. This algorithm determines matching partitions by looking at the query’s WHERE clause. The previous algorithm checked each partition in turn to see if it could match the WHERE clause, so planning time increased as the number of partitions grew.&lt;/p&gt;

&lt;h3&gt;Partition Pruning During Query Execution&lt;/h3&gt;

&lt;p&gt;With prepared statements, query parameter values are not known before execution, so the planner cannot eliminate partitions during the planning phase. Instead, the executor performs partition pruning during execution, accessing only the partitions that match the supplied parameters.&lt;/p&gt;

&lt;h2&gt;
&lt;a id="2"&gt;&lt;/a&gt;Indexing Features&lt;/h2&gt;

&lt;h3&gt;Covering Index&lt;/h3&gt;

&lt;p&gt;PostgreSQL now allows you to add non-key columns to a B-tree index. Since queries typically need to retrieve more columns than just the ones they search on, PostgreSQL allows you to create an index in which some columns are just “payload” and are not part of the search key. This enables &lt;a href="https://www.postgresql.org/docs/11/indexes-index-only-scans.html"&gt;index-only scans&lt;/a&gt; to retrieve the required rows.&lt;/p&gt;
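&lt;p&gt;A sketch of a covering index, assuming a hypothetical users table:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
-- email is the search key; first_name and last_name are non-key payload columns
CREATE INDEX idx_users_email ON users (email) INCLUDE (first_name, last_name);

-- This query can now be answered with an index-only scan
SELECT first_name, last_name FROM users WHERE email = 'a@example.com';
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;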

&lt;h3&gt;Reindex Concurrently&lt;/h3&gt;

&lt;p&gt;Starting with PostgreSQL 12, it’s possible to rebuild an index with &lt;a href="https://www.postgresql.org/docs/12/sql-reindex.html"&gt;REINDEX CONCURRENTLY&lt;/a&gt; without locking the table for reads or writes, just like the Oracle REBUILD INDEX command.&lt;/p&gt;
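&lt;p&gt;For example (index and table names are hypothetical):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
-- Rebuild a single index without blocking reads or writes
REINDEX INDEX CONCURRENTLY idx_users_email;

-- Or rebuild all indexes on a table
REINDEX TABLE CONCURRENTLY users;
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;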

&lt;h3&gt;Parallel Index Creation&lt;/h3&gt;

&lt;p&gt;With &lt;a href="https://www.postgresql.org/docs/11/sql-createindex.html"&gt;parallel index creation&lt;/a&gt; (introduced in PostgreSQL 11 and currently applicable only to B-tree indexes), indexes can be built faster by using up to max_parallel_maintenance_workers parallel workers, provided maintenance_work_mem is set large enough to hold multiple copies of the data. Parallel index creation can significantly reduce your &lt;a href="https://www.cybertec-postgresql.com/en/postgresql-parallel-create-index-for-better-performance/"&gt;index creation time&lt;/a&gt;.&lt;/p&gt;
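&lt;p&gt;A sketch of the relevant session settings before building an index (the values and names are illustrative):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
SET maintenance_work_mem = '1GB';
SET max_parallel_maintenance_workers = 4;

-- The index build can now use up to 4 parallel workers
CREATE INDEX idx_events_created ON events (created);
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;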

&lt;h3&gt;Deduplication of Data in B-Tree Indexes&lt;/h3&gt;

&lt;p&gt;Sometimes there are duplicate entries in an index: multiple index entries in the same B-Tree leaf that contain the same data for all the indexed columns. Such duplicates cause unwanted index bloat. With the addition of deduplication in PostgreSQL 13, these duplicate B-Tree index entries are grouped into a single group entry, saving space and reducing load on disk and RAM since the column data no longer has to be duplicated.&lt;/p&gt;

&lt;h2&gt;
&lt;a id="3"&gt;&lt;/a&gt;Authentication&lt;/h2&gt;

&lt;h3&gt;SCRAM-SHA-256&lt;/h3&gt;

&lt;p&gt;In PostgreSQL 11 came support for &lt;a href="https://www.postgresql.org/docs/11/auth-password.html"&gt;SCRAM-SHA-256 password authentication&lt;/a&gt;. This method prevents password sniffing on untrusted connections and allows you to store your passwords in a cryptographically hashed form.&lt;/p&gt;

&lt;p&gt;Out of all the currently supported password authentication methods, this is the most secure one.&lt;/p&gt;
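&lt;p&gt;Enabling it is a matter of configuration; a sketch (the address range is illustrative):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
# postgresql.conf: hash newly set passwords with SCRAM
password_encryption = scram-sha-256

# pg_hba.conf: require SCRAM authentication for remote connections
# TYPE  DATABASE  USER  ADDRESS      METHOD
host    all       all   10.0.0.0/8   scram-sha-256
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;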

&lt;h3&gt;GSSAPI&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://www.postgresql.org/docs/11/gssapi-auth.html"&gt;GSSAPI&lt;/a&gt; allows secure authentication, as well as automatic single-sign-on for systems that support it. This authentication method relies on a GSSAPI-compatible security library. The data sent over the database connection will be unencrypted unless SSL is used; however the authentication itself is secure. GSSAPI with Kerberos authentication is possible with PostgreSQL according to industry standards. When Kerberos is used, a standard principal is used in the format “servicename/hostname@realm”. Any principal included in the keytab used by the server will be accepted by the PostgreSQL server.&lt;/p&gt;

&lt;h3&gt;LDAP&lt;/h3&gt;

&lt;p&gt;Operating similarly to password authentication, this authentication method uses LDAP as the verification method. It’s only used to validate username and password pairs, hence the user must exist in the database already for the authentication to work. &lt;a href="https://www.postgresql.org/docs/11/auth-ldap.html"&gt;LDAP authentication&lt;/a&gt; operates in either a simple bind mode or what’s called search+bind. Search+bind allows you to use other identifiers compared to the simple bind mode, which only allows the distinguished name, domain name, or email to be used.&lt;/p&gt;

&lt;h3&gt;Certificate&lt;/h3&gt;

&lt;p&gt;The &lt;a href="https://www.postgresql.org/docs/11/auth-cert.html"&gt;certificate authentication method&lt;/a&gt; uses SSL certificates to authenticate. Consequently, it’s only available for SSL connections. With certificate authentication, no password is required. The client simply needs to provide a valid and trusted certificate to be able to authenticate. The certificate’s common name will be matched with the database user name, and if a match is found, the client will be logged in.&lt;/p&gt;

&lt;h2&gt;
&lt;a id="4"&gt;&lt;/a&gt;Other notable features&lt;/h2&gt;

&lt;h3&gt;Parallel Vacuum&lt;/h3&gt;

&lt;p&gt;With the release of PostgreSQL 13, improvements to the &lt;a href="https://www.postgresql.org/docs/13/sql-vacuum.html"&gt;VACUUM&lt;/a&gt; command were implemented. The improvement in question is the added parameter option PARALLEL. With PARALLEL, you can perform index vacuum and index cleanup phases of VACUUM. This allows you to parallel vacuum multiple indexes corresponding to a single table.&lt;/p&gt;

&lt;h3&gt;Parallel Query &lt;/h3&gt;

&lt;p&gt;Most reporting queries, which generally scan a lot of data, suffer from performance degradation because they need to scan or aggregate data from many rows even with an index scan. Up to version 9.4, these queries could use only one CPU and ran serially.&lt;/p&gt;

&lt;p&gt;With parallel query (which offers parallel sequential scan, merge join, hash join, aggregate and other parallel query plan features), these queries can take advantage of multiple workers, and performance can improve from 2x to 10x as per &lt;a href="https://www.postgresql.org/docs/11/parallel-query.html"&gt;documentation&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Check out this post to learn &lt;a href="https://www.postgresql.org/docs/current/when-can-parallel-query-be-used.html"&gt;when to use parallel queries&lt;/a&gt;.&lt;/p&gt;

&lt;h3&gt;Native Logical Replication&lt;/h3&gt;

&lt;p&gt;PostgreSQL introduced &lt;a href="https://www.postgresql.org/docs/10/logical-replication.html"&gt;native logical replication&lt;/a&gt; in version 10 to provide a more flexible replication option, unlike &lt;a href="https://scalegrid.io/blog/getting-started-with-postgresql-streaming-replication/"&gt;streaming replication&lt;/a&gt; and more like Oracle Streams, to replicate specific tables, columns, or rows. This can be used to split data between multiple databases or consolidate from multiple databases, and it can also replicate between different major versions of PostgreSQL.&lt;/p&gt;
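&lt;p&gt;A minimal sketch of replicating a single table (names and connection details are placeholders):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
-- On the source database (publisher):
CREATE PUBLICATION orders_pub FOR TABLE orders;

-- On the target database (subscriber):
CREATE SUBSCRIPTION orders_sub
    CONNECTION 'host=source-host dbname=shop user=repl'
    PUBLICATION orders_pub;
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;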

&lt;h3&gt;Stored Procedures with Embedded Transactions&lt;/h3&gt;

&lt;p&gt;Stored procedures are yet another addition that makes PostgreSQL a compatible target for migrations from Oracle Database.&lt;/p&gt;

&lt;p&gt;In version 9.4, we didn’t have any transaction control inside a pgsql program (DO block or function) and had to use workarounds like dblink to start and commit/rollback transactions.&lt;/p&gt;

&lt;p&gt;With &lt;a href="https://www.postgresql.org/docs/11/sql-createprocedure.html"&gt;stored procedures&lt;/a&gt;, which premiered in version 11, we can now implement transaction control within any pgsql structure, such as a WHILE loop, FOR loop, or IF/ELSE statement. Although stored procedures are similar to functions, they have to be invoked with the CALL command and can work as independent programs.&lt;/p&gt;
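&lt;p&gt;A minimal sketch of a procedure that commits inside a loop (table and procedure names are hypothetical):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
CREATE PROCEDURE purge_old_events()
LANGUAGE plpgsql
AS $$
BEGIN
    LOOP
        -- Delete old rows in batches of 1000
        DELETE FROM events
        WHERE ctid IN (SELECT ctid FROM events
                       WHERE created &amp;lt; now() - interval '90 days' LIMIT 1000);
        EXIT WHEN NOT FOUND;
        COMMIT;  -- transaction control is allowed inside a procedure
    END LOOP;
END;
$$;

CALL purge_old_events();
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;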

&lt;h3&gt;Generated Columns&lt;/h3&gt;

&lt;p&gt;Like Oracle database, &lt;a href="https://www.postgresql.org/docs/12/ddl-generated-columns.html"&gt;generated columns in PostgreSQL&lt;/a&gt; can store the data automatically computed from other columns within the row. This speeds up queries by not having to compute the value during query execution, and instead, the generated column value is computed on INSERT or UPDATE to the row.&lt;/p&gt;
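&lt;p&gt;A sketch, with a hypothetical table and columns:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
CREATE TABLE order_items (
    quantity   int NOT NULL,
    unit_price numeric NOT NULL,
    -- total is computed and stored on INSERT or UPDATE
    total      numeric GENERATED ALWAYS AS (quantity * unit_price) STORED
);
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;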

&lt;h3&gt;JIT Compilation&lt;/h3&gt;

&lt;p&gt;PostgreSQL 11, 12 and 13 all support &lt;a href="https://www.postgresql.org/docs/current/jit-reason.html"&gt;Just-in-Time (JIT) compilation&lt;/a&gt;, which was added back in 2018. JIT compilation is the process of turning an interpreted program evaluation into a native program. With JIT compilation, this process can be done at run time. An advantage of JIT is the possibility of generating expression-specific functions that the CPU can natively execute. In turn, the process gives you a speedup.&lt;/p&gt;

&lt;h3&gt;Page Checksums&lt;/h3&gt;

&lt;p&gt;Page checksums are a feature that helps you verify the integrity of data stored on disk. Previously, you could only enable page checksums during the initialization of a PostgreSQL cluster. PostgreSQL 12 introduced the ability to enable or disable page checksums in an offline cluster via the pg_checksums command.&lt;/p&gt;
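&lt;p&gt;A sketch of using the tool against a stopped cluster (the data directory path is a placeholder):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
# Enable checksums (the cluster must be shut down cleanly first)
pg_checksums --enable -D /var/lib/postgresql/12/main

# Verify existing checksums
pg_checksums --check -D /var/lib/postgresql/12/main
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;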

&lt;h2&gt;
&lt;a id="5"&gt;&lt;/a&gt;Summary&lt;/h2&gt;

&lt;p&gt;As you can see from this article, PostgreSQL is an ever-evolving database system that brings many powerful features to the table. Every new version adds new exciting features that make it a real competitor to other databases such as Oracle. Its partitioning and indexing features get many new updates that add onto its already expansive toolbox.&lt;/p&gt;

&lt;p&gt;If you are looking for a managed solution for your PostgreSQL database, feel free to check out our &lt;a href="https://scalegrid.io/postgresql.html"&gt;ScaleGrid for PostgreSQL&lt;/a&gt; service. We provide a &lt;a href="https://console.scalegrid.io/users/register"&gt;free 30-day trial&lt;/a&gt; that allows you to use all our management features with no restrictions. Pricing starts at just &lt;a href="https://scalegrid.io/pricing.html?db=POSTGRESQL&amp;amp;cloud=cloud_digital_ocean&amp;amp;replica=deployment_standalone&amp;amp;instance=Nano#section_pricing_dedicated"&gt;$10/month&lt;/a&gt; and gives you access to a fully managed enterprise-grade PostgreSQL hosting solution with 24/7 support.&lt;/p&gt;

</description>
      <category>postgres</category>
      <category>database</category>
      <category>developer</category>
    </item>
    <item>
      <title>Redis vs. Memcached – 2021 Comparison</title>
      <dc:creator>ScaleGrid</dc:creator>
      <pubDate>Fri, 07 May 2021 23:25:11 +0000</pubDate>
      <link>https://forem.com/scalegrid/redis-vs-memcached-2021-comparison-5hep</link>
      <guid>https://forem.com/scalegrid/redis-vs-memcached-2021-comparison-5hep</guid>
      <description>&lt;p&gt;&lt;a href="https://redis.io/" rel="noopener noreferrer"&gt;Redis&lt;/a&gt; stands for &lt;b&gt;RE&lt;/b&gt;mote &lt;b&gt;DI&lt;/b&gt;ctionary &lt;b&gt;S&lt;/b&gt;erver, created in 2009 by &lt;a href="https://redislabs.com/blog/thank-you-salvatore-sanfilippo/" rel="noopener noreferrer"&gt;Salvatore Sanfilippo&lt;/a&gt;. &lt;a href="https://memcached.org/" rel="noopener noreferrer"&gt;Memcached&lt;/a&gt;, on the other hand, was created in 2003 by &lt;a href="https://en.wikipedia.org/wiki/Brad_Fitzpatrick" rel="noopener noreferrer"&gt;Brad Fitzpatrick&lt;/a&gt;. Both Redis and Memcached are:&lt;/p&gt;

&lt;ul&gt;
    &lt;li&gt;NoSQL in-memory data structures&lt;/li&gt;
    &lt;li&gt;Written in C&lt;/li&gt;
    &lt;li&gt;Open source&lt;/li&gt;
    &lt;li&gt;Used to speed up applications&lt;/li&gt;
    &lt;li&gt;Support sub-millisecond latency&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In 2014, Salvatore wrote an excellent &lt;a href="https://stackoverflow.com/a/23650189/2767333" rel="noopener noreferrer"&gt;StackOverflow post&lt;/a&gt; on when it makes more sense to use Memcached than Redis. In this post, we provide a current and detailed comparison between Redis and Memcached so that you can make an informed choice about their use in your application.&lt;/p&gt;

&lt;h2&gt;Infographic&lt;/h2&gt;

&lt;p&gt;This post has been condensed down into an infographic. With this infographic you can easily visualize the results of this comparison to see which one comes out on top in different scenarios. Click the image below to see the full infographic. If you want to read the comparison in text format, click here.&lt;br&gt;
&lt;a href="https://scalegrid.io/blog/redis-vs-memcached-2021-comparison/" rel="noopener noreferrer"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fscalegrid.io%2Fblog%2Fwp-content%2Fuploads%2F2021%2F05%2FRedis-vs-Memcached-Infographic-Preview-ScaleGrid-Blog.jpg" alt="Redis vs Memcached Infographic"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
&lt;a id="start"&gt;&lt;/a&gt;Documentation&lt;/h2&gt;

&lt;p&gt;To start off, Redis is much more comprehensively documented than Memcached. This makes it easier to learn, administer and use.&lt;/p&gt;

&lt;h2&gt;Database Model&lt;/h2&gt;

&lt;p&gt;Redis is primarily a key-value store. While keys are binary strings, the advantage with Redis is that the value is not limited to binary strings. Values can be a variety of data structures, which enable storing complex objects and provide a rich set of operations over them. Redis is also extensible via Redis modules: extensions that provide additional data structures and features not available within the core feature set. Here’s a sample of a few features that are now available as modules:&lt;/p&gt;

&lt;ul&gt;
    &lt;li&gt;Document Store&lt;/li&gt;
    &lt;li&gt;Graph DBMS&lt;/li&gt;
    &lt;li&gt;Search Engine&lt;/li&gt;
    &lt;li&gt;Time Series DBMS&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Memcached is a plain key value store which only supports binary strings as value.&lt;/p&gt;

&lt;h2&gt;Data Structures&lt;/h2&gt;

&lt;p&gt;As mentioned above, Redis offers multiple data structure types allowing it to be extremely flexible to use, including Strings, &lt;a href="https://scalegrid.io/blog/introduction-to-redis-data-structures-hashes/" rel="noopener noreferrer"&gt;Hashes&lt;/a&gt;, Lists, &lt;a href="https://scalegrid.io/blog/introduction-to-redis-data-structures-sets/" rel="noopener noreferrer"&gt;Sets&lt;/a&gt;, &lt;a href="https://scalegrid.io/blog/introduction-to-redis-data-structures-sorted-sets/" rel="noopener noreferrer"&gt;Sorted Sets&lt;/a&gt;, &lt;a href="https://scalegrid.io/blog/introduction-to-redis-data-structure-bitmaps/" rel="noopener noreferrer"&gt;Bitmaps&lt;/a&gt;, Bitfields, HyperLogLog, Geospatial indexes, and Streams. You can learn more about these in this &lt;a href="https://scalegrid.io/blog/top-redis-use-cases-by-core-data-structure-types/" rel="noopener noreferrer"&gt;Top Redis Use Cases by Core Data Structure Types&lt;/a&gt; article.&lt;/p&gt;
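&lt;p&gt;A few of these structures in action via redis-cli (assumes a running Redis server; the keys are hypothetical):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
redis-cli LPUSH recent_orders 1001 1002        # list
redis-cli SADD tags "redis" "cache"            # set
redis-cli ZADD leaderboard 150 "alice"         # sorted set
redis-cli HSET user:1 name "Alice" age 30      # hash
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;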

&lt;p&gt;&lt;a href="https://redislabs.com/wp-content/uploads/2020/06/key-value-data-stores-2-v2.png" rel="noopener noreferrer"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fredislabs.com%2Fwp-content%2Fuploads%2F2020%2F06%2Fkey-value-data-stores-2-v2.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Image source: &lt;a href="https://redislabs.com/redis-enterprise/data-structures/" rel="noopener noreferrer"&gt;https://redislabs.com/redis-enterprise/data-structures/&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Memcached only supports plain binary strings, which are great for read-only data, so if you don't need all the bells and whistles of Redis, Memcached is a simpler database for you to use.&lt;/p&gt;

&lt;h2&gt;Database Rank &amp;amp; Popularity&lt;/h2&gt;

&lt;p&gt;The greater popularity of a database results in a larger community of users, more discussions and user-generated tutorials, and more help and support through third-party tools such as &lt;a href="https://scalegrid.io/redis/hosting-comparison.html" rel="noopener noreferrer"&gt;DBaaS platforms&lt;/a&gt; and &lt;a href="https://scalegrid.io/blog/the-top-6-free-redis-memory-analysis-tools/" rel="noopener noreferrer"&gt;analysis tools&lt;/a&gt; to help optimize your deployments.&lt;/p&gt;

&lt;p&gt;Redis is the 8th most popular database in the world as of February 2021, according to &lt;a href="https://db-engines.com/en/ranking" rel="noopener noreferrer"&gt;DB-Engines&lt;/a&gt;, because of its simplicity, rich data structures, and great documentation. Memcached is currently ranked as the 28th most popular database. When Redis and Memcached are ranked with respect to key-value database models, Redis stands 1st and Memcached comes 4th. However, if you are looking only for an open-source key-value database, or one that can be deployed on-premise, &lt;a href="https://db-engines.com/en/ranking/key-value+store" rel="noopener noreferrer"&gt;Memcached comes in 2nd&lt;/a&gt;, as Amazon DynamoDB and Microsoft Azure Cosmos DB are both commercial databases that can only be deployed in the cloud.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://scalegrid.io/blog/redis-vs-memcached-2021-comparison/" rel="noopener noreferrer"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fscalegrid.io%2Fblog%2Fwp-content%2Fuploads%2F2021%2F04%2FDB-Engines-Ranking-Key-Value-Store.png" alt="DB-Engines-Ranking-Key-Value-Store"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;Architecture&lt;/h2&gt;

&lt;p&gt;Redis and Memcached both follow a &lt;b&gt;client-server &lt;/b&gt;architecture. Clients populate data on the server in the form of key-value pairs.&lt;/p&gt;

&lt;p&gt;Redis is single-threaded, whereas Memcached has a multithreaded architecture. Memcached scales better on a system with more cores, handling more operations as compute capacity grows. However, more than one Redis instance can be started on the same system to utilize additional cores.&lt;/p&gt;

&lt;h2&gt;Ease of Use&lt;/h2&gt;

&lt;p&gt;As explained above in the Database model section, Redis, being a multi-model database, can be used with any type of data model. In Redis, it is easy to write code as it simplifies complex tasks. Redis has &lt;a href="https://redis.io/topics/data-types-intro" rel="noopener noreferrer"&gt;advanced data structures&lt;/a&gt; and is not limited to simple string values. For instance, if your application stores data in sets and you want to keep track of sets in a list, you can do this easily in Redis. A similar task is not directly possible in Memcached; you can achieve the same result in other ways, but they require more application code.&lt;/p&gt;

&lt;p&gt;Memcached, on the other hand, only stores plain string values. So, the application is left to deal with the data structure complexity.&lt;/p&gt;
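&lt;p&gt;To make this concrete, here is a minimal Python sketch of what "the application deals with the data structure complexity" means. A plain dict stands in for a string-only store like Memcached, and push_to_list is a hypothetical helper: every list append becomes a read-deserialize-modify-serialize-write cycle, where Redis would do the same thing natively with a single RPUSH.&lt;/p&gt;

```python
import json

# A plain dict stands in for a string-only store like Memcached:
# the application must serialize and deserialize structures itself.
string_store = {}

def push_to_list(store, key, value):
    # Read-modify-write cycle: deserialize, append, serialize back.
    items = json.loads(store.get(key, "[]"))
    items.append(value)
    store[key] = json.dumps(items)

push_to_list(string_store, "recent_logins", "alice")
push_to_list(string_store, "recent_logins", "bob")
```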

&lt;h2&gt;Data Partitioning&lt;/h2&gt;

&lt;p&gt;Redis supports &lt;a href="https://redis.io/topics/partitioning" rel="noopener noreferrer"&gt;partitioning&lt;/a&gt; of data across multiple node instances. &lt;a href="https://stackshare.io/redis" rel="noopener noreferrer"&gt;Current users&lt;/a&gt; of Redis leverage different techniques like range partitioning, hash partitioning, and consistent hashing for data partitioning. In Redis, data partitioning can be implemented in three different ways:&lt;/p&gt;

&lt;ul&gt;
    &lt;li&gt;Client-side partitioning&lt;/li&gt;
    &lt;li&gt;Proxy-assisted partitioning (example: &lt;a href="https://github.com/twitter/twemproxy" rel="noopener noreferrer"&gt;twemproxy&lt;/a&gt;)&lt;/li&gt;
    &lt;li&gt;Server-side partitioning with query routing within the cluster nodes&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Memcached also supports data partitioning across multiple nodes, and &lt;a href="https://docs.oracle.com/en-us/iaas/Content/Resources/Assets/whitepapers/deploying-memcached-and-redis-on-oci.pdf" rel="noopener noreferrer"&gt;consistent hashing&lt;/a&gt; is a recommended approach to ensure the traffic load is evenly distributed.&lt;/p&gt;

&lt;h2&gt;Supported Languages&lt;/h2&gt;

&lt;p&gt;Redis supports almost all of the most used programming &lt;a href="https://db-engines.com/en/system/Memcached%3BRedis" rel="noopener noreferrer"&gt;languages&lt;/a&gt;, from high-level to low-level. Memcached supports fewer languages than Redis, but still covers all the popular ones.&lt;/p&gt;

&lt;h4&gt;Memcached&lt;/h4&gt;

&lt;ul&gt;
    &lt;li&gt;.Net&lt;/li&gt;
    &lt;li&gt;C&lt;/li&gt;
    &lt;li&gt;C++&lt;/li&gt;
    &lt;li&gt;ColdFusion&lt;/li&gt;
    &lt;li&gt;Erlang&lt;/li&gt;
    &lt;li&gt;Java&lt;/li&gt;
    &lt;li&gt;Lisp&lt;/li&gt;
    &lt;li&gt;Lua&lt;/li&gt;
    &lt;li&gt;OCaml&lt;/li&gt;
    &lt;li&gt;Perl&lt;/li&gt;
    &lt;li&gt;PHP&lt;/li&gt;
    &lt;li&gt;Python&lt;/li&gt;
    &lt;li&gt;Ruby&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;Redis&lt;/h4&gt;

&lt;ul&gt;
    &lt;li&gt;C&lt;/li&gt;
    &lt;li&gt;C#&lt;/li&gt;
    &lt;li&gt;C++&lt;/li&gt;
    &lt;li&gt;Clojure&lt;/li&gt;
    &lt;li&gt;Crystal&lt;/li&gt;
    &lt;li&gt;D&lt;/li&gt;
    &lt;li&gt;Dart&lt;/li&gt;
    &lt;li&gt;Elixir&lt;/li&gt;
    &lt;li&gt;Erlang&lt;/li&gt;
    &lt;li&gt;Fancy&lt;/li&gt;
    &lt;li&gt;Go&lt;/li&gt;
    &lt;li&gt;Haskell&lt;/li&gt;
    &lt;li&gt;Haxe&lt;/li&gt;
    &lt;li&gt;Java&lt;/li&gt;
    &lt;li&gt;JavaScript (Node.js)&lt;/li&gt;
    &lt;li&gt;Lisp&lt;/li&gt;
    &lt;li&gt;Lua&lt;/li&gt;
    &lt;li&gt;MATLAB&lt;/li&gt;
    &lt;li&gt;Objective-C&lt;/li&gt;
    &lt;li&gt;OCaml&lt;/li&gt;
    &lt;li&gt;Pascal&lt;/li&gt;
    &lt;li&gt;Perl&lt;/li&gt;
    &lt;li&gt;PHP&lt;/li&gt;
    &lt;li&gt;Prolog&lt;/li&gt;
    &lt;li&gt;Pure Data&lt;/li&gt;
    &lt;li&gt;Python&lt;/li&gt;
    &lt;li&gt;R&lt;/li&gt;
    &lt;li&gt;Rebol&lt;/li&gt;
    &lt;li&gt;Ruby&lt;/li&gt;
    &lt;li&gt;Rust&lt;/li&gt;
    &lt;li&gt;Scala&lt;/li&gt;
    &lt;li&gt;Scheme&lt;/li&gt;
    &lt;li&gt;Smalltalk&lt;/li&gt;
    &lt;li&gt;Swift&lt;/li&gt;
    &lt;li&gt;Tcl&lt;/li&gt;
    &lt;li&gt;Visual Basic&lt;/li&gt;
&lt;/ul&gt;


&lt;h2&gt;Transactions&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://redis.io/topics/transactions" rel="noopener noreferrer"&gt;Redis "transactions"&lt;/a&gt; are executed with the three below guarantees:&lt;/p&gt;

&lt;ul&gt;
    &lt;li&gt;Transactions are serialized and executed sequentially&lt;/li&gt;
    &lt;li&gt;Either all of the commands, or none, are processed (atomic transactions)&lt;/li&gt;
    &lt;li&gt;Optimistic locking offers an extra guarantee using check-and-set&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Redis makes sure that only one client's queued commands are executed at a time. All the commands in a transaction are executed when the “EXEC” command is called, which ensures atomicity.&lt;/p&gt;
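&lt;p&gt;The check-and-set pattern behind WATCH can be sketched in plain Python. TinyStore is a hypothetical in-memory class for illustration, not a Redis client: each key carries a version, and a queued batch of writes commits only if the watched key's version is unchanged at EXEC time.&lt;/p&gt;

```python
# Minimal in-memory sketch of WATCH-style optimistic locking:
# each key carries a version, and a queued batch of writes commits
# only if the watched key was not modified in the meantime.
class TinyStore:
    def __init__(self):
        self.data = {}
        self.versions = {}

    def get(self, key):
        return self.data.get(key)

    def set(self, key, value):
        self.data[key] = value
        self.versions[key] = self.versions.get(key, 0) + 1

    def watch(self, key):
        # Remember the version seen at WATCH time.
        return self.versions.get(key, 0)

    def exec_if_unchanged(self, key, watched_version, commands):
        # Abort the whole batch (like EXEC returning nil) on conflict.
        if self.versions.get(key, 0) != watched_version:
            return False
        for k, v in commands:
            self.set(k, v)
        return True

store = TinyStore()
store.set("balance", 100)
v = store.watch("balance")
ok = store.exec_if_unchanged("balance", v, [("balance", 90)])
```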

&lt;p&gt;Memcached, on the other hand, does not provide transaction management.&lt;/p&gt;

&lt;h2&gt;Replication&lt;/h2&gt;

&lt;p&gt;Redis offers a simple leader-follower (master-slave) &lt;a href="https://redis.io/topics/replication" rel="noopener noreferrer"&gt;replication&lt;/a&gt; that creates exact copies of the master instances, with these features:&lt;/p&gt;

&lt;ul&gt;
    &lt;li&gt;The master keeps on sending data commands to the slave as long as they are connected.&lt;/li&gt;
    &lt;li&gt;If the connection breaks, the slave attempts a partial resynchronization, copying only the data it missed during the disconnection.&lt;/li&gt;
    &lt;li&gt;If partial resynchronization is not possible, then it will try a full resynchronization.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;You can also leverage the high availability features, &lt;a href="https://scalegrid.io/blog/high-availability-with-redis-sentinels-connecting-to-redis-masterslave-sets/" rel="noopener noreferrer"&gt;Redis Sentinels&lt;/a&gt; or &lt;a href="https://scalegrid.io/blog/?s=redis+cluster" rel="noopener noreferrer"&gt;Redis Cluster&lt;/a&gt;, for advanced failover protection.&lt;/p&gt;

&lt;p&gt;Native Memcached does not support replication, but you can use &lt;a href="http://repcached.lab.klab.org/" rel="noopener noreferrer"&gt;Repcached&lt;/a&gt;, a free open source patch, to achieve high availability for your deployment. It offers multi-master replication, asynchronous data replication, and supports all Memcached commands.&lt;/p&gt;

&lt;h2&gt;Snapshots/Persistence&lt;/h2&gt;

&lt;p&gt;Snapshots are simply a read-only view of your database as it was at a certain point in time. Redis supports snapshots, and by default saves snapshots of the dataset on disk in a binary file called dump.rdb. You can trigger a snapshot manually, or customize the frequency or change threshold for running the operation.&lt;/p&gt;

&lt;p&gt;Here are the two &lt;a href="https://redis.io/topics/persistence" rel="noopener noreferrer"&gt;persistence&lt;/a&gt; options Redis supports:&lt;/p&gt;

&lt;ul&gt;
    &lt;li&gt;RDB persistence&lt;/li&gt;
    &lt;li&gt;AOF persistence&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;RDB stands for “Redis Database Backup”. It is a compact, point-in-time snapshot of the database at a specific time. It takes up less space, maximizes Redis performance, and is good for disaster recovery.&lt;/p&gt;

&lt;p&gt;AOF stands for “Append Only File”. AOF logs every write command that is executed, and after a crash, Redis re-executes those commands to rebuild the data. The log takes up more space than an RDB snapshot and makes restarts slower, but it is generally the more durable option, since it can record every write as it happens.&lt;/p&gt;
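&lt;p&gt;As an illustration, a redis.conf fragment enabling both persistence modes might look like the following (the save threshold values here are examples, not recommendations):&lt;/p&gt;

```conf
# RDB: snapshot to dump.rdb if at least 1000 keys changed in 60 seconds
save 60 1000
dbfilename dump.rdb

# AOF: log every write command, fsync once per second
appendonly yes
appendfsync everysec
```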

&lt;p&gt;Memcached on the other hand does not support on disk persistence.&lt;/p&gt;

&lt;h2&gt;Server-Side Scripts&lt;/h2&gt;

&lt;p&gt;Lua is the embedded scripting language for your Redis server, available as of version 2.6, which lets you perform operations inside Redis to simplify your code and increase performance. The two main functions used to evaluate scripts using the Lua interpreter are:&lt;/p&gt;

&lt;ul&gt;
    &lt;li&gt;EVAL&lt;/li&gt;
    &lt;li&gt;EVALSHA&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;While a Lua script is executing, all other requests are blocked, as shown in the figure below.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://scalegrid.io/blog/redis-transactions-long-running-lua-scripts/" rel="noopener noreferrer"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fscalegrid.io%2Fblog%2Fwp-content%2Fuploads%2F2020%2F06%2FRedis-Lua-Scripts-Diagram.jpg" alt="Redis Lua Scripts Diagram - ScaleGrid Blog"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Redis also includes a &lt;a href="https://redis.io/topics/ldb" rel="noopener noreferrer"&gt;Lua script debugger&lt;/a&gt; as of version 3.2, which makes writing and troubleshooting complex scripts much easier.&lt;/p&gt;

&lt;p&gt;Memcached does not support any server-side scripting.&lt;/p&gt;

&lt;h2&gt;Scalability&lt;/h2&gt;

&lt;p&gt;There are two techniques to horizontally scale your Redis database:&lt;/p&gt;

&lt;ul&gt;
    &lt;li&gt;Adding shards in Redis Clusters&lt;/li&gt;
    &lt;li&gt;Adding nodes to a Redis HA (master/replica) setup&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;You can also vertically scale your Redis setup when you need more memory or compute. This can be done without downtime if you have an HA setup or use Redis Cluster technology.&lt;/p&gt;

&lt;p&gt;The Memcached server itself doesn’t provide a mechanism to distribute data across nodes (sharding). So, in Memcached, horizontal &lt;a href="http://pages.cs.wisc.edu/~vijayc/papers/memcached.pdf" rel="noopener noreferrer"&gt;scalability&lt;/a&gt; is as simple as adding more nodes; the problem of partitioning your data into different shards has to be solved at the application/client level. There are some open source tools that can help you with this.&lt;/p&gt;
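&lt;p&gt;A client-side consistent hash ring of the kind Memcached client libraries (or client-side Redis partitioning) rely on can be sketched as follows. HashRing is an illustrative class; real clients add replication, richer failure handling, and tuned virtual-node counts.&lt;/p&gt;

```python
import bisect
import hashlib

# Minimal consistent-hash ring: each physical node is placed on the
# ring many times (virtual nodes) so keys spread evenly, and a key
# maps to the first node clockwise from its hash position.
class HashRing:
    def __init__(self, nodes, vnodes=64):
        self.ring = []  # sorted list of (hash, node) pairs
        for node in nodes:
            for i in range(vnodes):
                h = self._hash(f"{node}#{i}")
                bisect.insort(self.ring, (h, node))

    @staticmethod
    def _hash(key):
        return int(hashlib.md5(key.encode()).hexdigest(), 16)

    def node_for(self, key):
        h = self._hash(key)
        # First ring entry at or after the key's hash, wrapping around.
        idx = bisect.bisect(self.ring, (h, "")) % len(self.ring)
        return self.ring[idx][1]

ring = HashRing(["cache1:11211", "cache2:11211", "cache3:11211"])
```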

&lt;h2&gt;Communication Protocol&lt;/h2&gt;

&lt;p&gt;Redis uses TCP as a network protocol and does not support UDP.&lt;/p&gt;

&lt;p&gt;Memcached supports both the TCP and UDP &lt;a href="https://www.it.iitb.ac.in/frg/wiki/images/1/13/Memcache_detailed_report.pdf" rel="noopener noreferrer"&gt;communication protocols&lt;/a&gt;. Data is sent to the Memcached server in two forms:&lt;/p&gt;

&lt;ul&gt;
    &lt;li&gt;Text lines: Send commands and receive responses from the server.&lt;/li&gt;
    &lt;li&gt;Unstructured data: Receive or send value information for a given key, and data is returned in the same format provided.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;Supported Cache Eviction Policies&lt;/h2&gt;

&lt;p&gt;Redis supports different types of &lt;a href="https://docs.redislabs.com/latest/rs/administering/database-operations/eviction-policy/" rel="noopener noreferrer"&gt;eviction policies&lt;/a&gt;. Let's take a look at some.&lt;/p&gt;

&lt;ul&gt;
    &lt;li&gt;&lt;b&gt;noeviction: &lt;/b&gt;An error is returned when memory reaches its limit.&lt;/li&gt;
    &lt;li&gt;&lt;b&gt;allkeys-lru: &lt;/b&gt;LRU stands for “least recently used”. This policy removes the least recently used keys.&lt;/li&gt;
    &lt;li&gt;&lt;b&gt;allkeys-lfu: &lt;/b&gt;LFU stands for “least frequently used”. This policy removes the least frequently used keys.&lt;/li&gt;
    &lt;li&gt;&lt;b&gt;allkeys-random: &lt;/b&gt;This policy removes keys at random.&lt;/li&gt;
    &lt;li&gt;&lt;b&gt;volatile-lru: &lt;/b&gt;Volatile keys are those with an expiration set. This policy removes the least recently used volatile keys.&lt;/li&gt;
    &lt;li&gt;&lt;b&gt;volatile-lfu: &lt;/b&gt;This policy removes the least frequently used volatile keys.&lt;/li&gt;
    &lt;li&gt;&lt;b&gt;volatile-random: &lt;/b&gt;This policy removes volatile keys at random.&lt;/li&gt;
    &lt;li&gt;&lt;b&gt;volatile-ttl: &lt;/b&gt;“TTL” stands for time to live. This policy removes the volatile keys with the shortest time to live.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://github.com/memcached/memcached/wiki/UserInternals" rel="noopener noreferrer"&gt;Memcached&lt;/a&gt; uses &lt;a href="https://github.com/memcached/memcached/blob/master/doc/new_lru.txt" rel="noopener noreferrer"&gt;LRU algorithm&lt;/a&gt; to evict data when space is required. It first searches for the already expired data to delete if expired data is not available the &lt;a href="https://github.com/memcached/memcached/blob/master/doc/new_lru.txt" rel="noopener noreferrer"&gt;LRU algorithm&lt;/a&gt; is used.&lt;/p&gt;

&lt;h2&gt;Publish &amp;amp; Subscribe Messaging&lt;/h2&gt;

&lt;p&gt;Redis supports &lt;a href="https://redis.io/topics/pubsub" rel="noopener noreferrer"&gt;Pub/Sub messaging&lt;/a&gt; (publish and subscribe). There are three commands that are used for this purpose.&lt;/p&gt;

&lt;p&gt;The client uses:&lt;/p&gt;

&lt;ul&gt;
    &lt;li&gt;Subscribe&lt;/li&gt;
    &lt;li&gt;Unsubscribe&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Subscribe and unsubscribe are used to start and stop receiving messages from a specific channel.&lt;/p&gt;

&lt;p&gt;The server uses:&lt;/p&gt;

&lt;ul&gt;
    &lt;li&gt;Publish&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The “publish” command is used to push data to subscribed clients.&lt;/p&gt;
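&lt;p&gt;The model can be sketched with a toy in-process broker. Broker is a hypothetical class for illustration, not redis-py; real Redis Pub/Sub delivers messages over client connections and does not retain messages for subscribers who join later.&lt;/p&gt;

```python
from collections import defaultdict

# Toy in-process broker illustrating SUBSCRIBE/PUBLISH semantics:
# publishers push to a channel, and every current subscriber of
# that channel receives the message.
class Broker:
    def __init__(self):
        self.channels = defaultdict(list)  # channel name to subscriber inboxes

    def subscribe(self, channel):
        inbox = []
        self.channels[channel].append(inbox)
        return inbox

    def unsubscribe(self, channel, inbox):
        self.channels[channel].remove(inbox)

    def publish(self, channel, message):
        for inbox in self.channels[channel]:
            inbox.append(message)
        # Like Redis PUBLISH, report how many subscribers received it.
        return len(self.channels[channel])

broker = Broker()
inbox = broker.subscribe("news")
n = broker.publish("news", "redis 6 released")
```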

&lt;p&gt;Memcached does not support publish and subscribe messaging.&lt;/p&gt;

&lt;h2&gt;Streams Support&lt;/h2&gt;

&lt;p&gt;Redis supports Kafka-like streams as of version 5.0 through a new data structure, “&lt;a href="https://redis.io/topics/streams-intro" rel="noopener noreferrer"&gt;Redis Streams&lt;/a&gt;”. Redis Streams has the concept of consumer groups, like &lt;a href="https://kafka.apache.org/" rel="noopener noreferrer"&gt;Apache Kafka&lt;/a&gt;, which lets client applications consume messages in a distributed fashion, making it easy to scale and create highly available systems.&lt;/p&gt;

&lt;p&gt;Memcached does not offer native support for Streams, but there are open source library tools like &lt;a href="https://www.madewithtea.com/posts/kafcache-memcached-and-kafka-streams" rel="noopener noreferrer"&gt;Kafcache&lt;/a&gt; for stream processing at low latency.&lt;/p&gt;

&lt;h2&gt;Geospatial Support&lt;/h2&gt;

&lt;p&gt;Redis has a data structure called &lt;a href="https://redis.io/commands/geoadd" rel="noopener noreferrer"&gt;Geospatial indexes&lt;/a&gt; that stores the longitude and latitude data of a location. You can perform different operations on the geospatial data, like calculating the distance between two points or finding nearby places.&lt;/p&gt;
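&lt;p&gt;The distance a command like GEODIST returns is a great-circle distance. As a rough cross-check, here is a standard haversine implementation in plain Python; the Earth radius of 6372.8 km is an assumption close to the value Redis uses, and the Palermo/Catania coordinates are the example pair from the GEOADD documentation.&lt;/p&gt;

```python
import math

# Great-circle (haversine) distance between two (longitude, latitude)
# points, in kilometers. Earth radius assumed to be 6372.8 km.
def haversine_km(lon1, lat1, lon2, lat2):
    r = 6372.8
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

# Palermo and Catania, the two cities used in the GEOADD docs;
# GEODIST reports roughly 166 km for this pair.
dist = haversine_km(13.361389, 38.115556, 15.087269, 37.502669)
```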

&lt;p&gt;Memcached does not have any special data structures to handle geospatial data.&lt;/p&gt;

&lt;h2&gt;Performance&lt;/h2&gt;

&lt;p&gt;A performance comparison between in-memory key-value data stores is more of an intellectual exercise than of any practical importance, unless you are deploying systems at such a scale that this becomes interesting as a cost-saving measure. This is because such stores are IO bound, and network latency usually plays a bigger role in application-perceived latency than the database latency does.&lt;/p&gt;

&lt;p&gt;A more practical performance aspect is storage efficiency: how much data can be packed into the same amount of memory. Even here, the internal data structures used by Redis vary based on the data size, so any discussion of performance between these databases should be taken with a pinch of salt.&lt;/p&gt;

&lt;p&gt;Let’s take a look at some comparisons shown in a 2016 research &lt;a href="https://www.researchgate.net/publication/304780144_A_performance_evaluation_of_in-memory_databases" rel="noopener noreferrer"&gt;paper&lt;/a&gt;. In this paper, the authors experiment with widely used in-memory databases to measure their performance in terms of:&lt;/p&gt;

&lt;ol&gt;
    &lt;li&gt;The time taken to complete operations.&lt;/li&gt;
    &lt;li&gt;How efficiently they use memory during operations.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Database versions used in the paper:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;th&gt;Database&lt;/th&gt;
&lt;th&gt;Version&lt;/th&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Redis&lt;/td&gt;
&lt;td&gt;3.0.7&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Memcached&lt;/td&gt;
&lt;td&gt;1.4.14&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;h2&gt;Write Operation&lt;/h2&gt;

&lt;p&gt;While writing data, as you can see in the table below, Memcached shows exceptional speed even as the number of records climbs to a million.&lt;/p&gt;

&lt;p&gt;&lt;b&gt;The calculated time to write key-value pairs (ms)&lt;/b&gt;&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;th&gt;&lt;/th&gt;
&lt;th colspan="4"&gt;Number of records&lt;/th&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;b&gt;Database&lt;/b&gt;&lt;/td&gt;
&lt;td&gt;1,000&lt;/td&gt;
&lt;td&gt;10,000&lt;/td&gt;
&lt;td&gt;100,000&lt;/td&gt;
&lt;td&gt;1,000,000&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Redis&lt;/td&gt;
&lt;td&gt;34&lt;/td&gt;
&lt;td&gt;214&lt;/td&gt;
&lt;td&gt;1,666&lt;/td&gt;
&lt;td&gt;14,638&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Memcached&lt;/td&gt;
&lt;td&gt;23&lt;/td&gt;
&lt;td&gt;100&lt;/td&gt;
&lt;td&gt;276&lt;/td&gt;
&lt;td&gt;2,813&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;h2&gt;Read Operation&lt;/h2&gt;

&lt;p&gt;Read times stay almost constant in Redis even for a million records, but in Memcached, as the number of records goes up, the read time also increases slightly.&lt;/p&gt;

&lt;p&gt;&lt;b&gt;The elapsed time to read value corresponding to a given key per database (ms)&lt;/b&gt;&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;th&gt;&lt;/th&gt;
&lt;th colspan="4"&gt;Number of records&lt;/th&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;b&gt;Database&lt;/b&gt;&lt;/td&gt;
&lt;td&gt;1,000&lt;/td&gt;
&lt;td&gt;10,000&lt;/td&gt;
&lt;td&gt;100,000&lt;/td&gt;
&lt;td&gt;1,000,000&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Redis&lt;/td&gt;
&lt;td&gt;8&lt;/td&gt;
&lt;td&gt;6&lt;/td&gt;
&lt;td&gt;8&lt;/td&gt;
&lt;td&gt;8&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Memcached&lt;/td&gt;
&lt;td&gt;9&lt;/td&gt;
&lt;td&gt;14&lt;/td&gt;
&lt;td&gt;14&lt;/td&gt;
&lt;td&gt;30&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;


&lt;h2&gt;Memory Use&lt;/h2&gt;

&lt;p&gt;When it comes to memory usage, Redis is consistently the more efficient of the two, as you can see in the &lt;a href="https://www.researchgate.net/publication/304780144_A_performance_evaluation_of_in-memory_databases" rel="noopener noreferrer"&gt;results&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;b&gt;Memory usages of in-memory databases for write operation (MB)&lt;/b&gt;&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;th&gt;&lt;/th&gt;
&lt;th colspan="4"&gt;Number of records&lt;/th&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;b&gt;Database&lt;/b&gt;&lt;/td&gt;
&lt;td&gt;1,000&lt;/td&gt;
&lt;td&gt;10,000&lt;/td&gt;
&lt;td&gt;100,000&lt;/td&gt;
&lt;td&gt;1,000,000&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Redis&lt;/td&gt;
&lt;td&gt;2.5&lt;/td&gt;
&lt;td&gt;3.8&lt;/td&gt;
&lt;td&gt;4.3&lt;/td&gt;
&lt;td&gt;62.7&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Memcached&lt;/td&gt;
&lt;td&gt;5.3&lt;/td&gt;
&lt;td&gt;27.2&lt;/td&gt;
&lt;td&gt;211&lt;/td&gt;
&lt;td&gt;264.9&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;As you can see, Redis again uses considerably less memory than Memcached.&lt;/p&gt;

&lt;p&gt;&lt;b&gt;Memory usages of in-memory databases for delete operation (MB)&lt;/b&gt;&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;th&gt;&lt;/th&gt;
&lt;th colspan="4"&gt;Number of records&lt;/th&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;b&gt;Database&lt;/b&gt;&lt;/td&gt;
&lt;td&gt;1,000&lt;/td&gt;
&lt;td&gt;10,000&lt;/td&gt;
&lt;td&gt;100,000&lt;/td&gt;
&lt;td&gt;1,000,000&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Redis&lt;/td&gt;
&lt;td&gt;0&lt;/td&gt;
&lt;td&gt;0&lt;/td&gt;
&lt;td&gt;0&lt;/td&gt;
&lt;td&gt;0&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Memcached&lt;/td&gt;
&lt;td&gt;2.2&lt;/td&gt;
&lt;td&gt;2.1&lt;/td&gt;
&lt;td&gt;2.2&lt;/td&gt;
&lt;td&gt;2.2&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;h2&gt;Transport Layer Security (TLS) Support&lt;/h2&gt;

&lt;p&gt;Redis has native TLS support starting with Redis 6.0. Earlier versions of Redis recommended the use of stunnel to provide TLS support.&lt;/p&gt;
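&lt;p&gt;For Redis 6.0 and later, a minimal redis.conf fragment enabling TLS might look like the following sketch (the file paths are placeholders):&lt;/p&gt;

```conf
# Serve TLS on 6379 and disable the plaintext port
tls-port 6379
port 0

tls-cert-file /etc/redis/tls/redis.crt
tls-key-file /etc/redis/tls/redis.key
tls-ca-cert-file /etc/redis/tls/ca.crt
```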

&lt;p&gt;Memcached 1.5.13 and above versions support authentication and encryption via &lt;a href="https://github.com/memcached/memcached/wiki/TLS" rel="noopener noreferrer"&gt;TLS&lt;/a&gt;. This feature is still in &lt;a href="https://github.com/memcached/memcached/wiki/TLS" rel="noopener noreferrer"&gt;experimental stages&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;Authentication&lt;/h2&gt;

&lt;p&gt;Up to Redis 5.x, Redis only supported a simple password based authentication. This password was stored in plaintext on the server. Redis in version 6.0 onwards supports a fully featured &lt;a href="https://redis.io/topics/acl" rel="noopener noreferrer"&gt;ACL&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Memcached version 1.4.3 and greater has &lt;a href="https://github.com/memcached/memcached/wiki/SASLHowto" rel="noopener noreferrer"&gt;SASL support&lt;/a&gt;. Previously Memcached had no authentication layer.&lt;/p&gt;

&lt;h2&gt;Summary&lt;/h2&gt;

&lt;p&gt;Redis and Memcached are both great and have applications in different areas. Redis, the more recently developed of the two, offers many advanced features and benefits from great documentation and an active community.&lt;/p&gt;

</description>
      <category>redis</category>
      <category>memcached</category>
      <category>database</category>
      <category>comparison</category>
    </item>
    <item>
      <title>mysqldump Best Practices: Part 2 - Migrations Guide</title>
      <dc:creator>ScaleGrid</dc:creator>
      <pubDate>Mon, 12 Apr 2021 22:18:25 +0000</pubDate>
      <link>https://forem.com/scalegrid/mysqldump-best-practices-part-2-migrations-guide-1h7d</link>
      <guid>https://forem.com/scalegrid/mysqldump-best-practices-part-2-migrations-guide-1h7d</guid>
      <description>&lt;p&gt;In the second and final part of our mysqldump best practices we will talk about &lt;a href="https://scalegrid.io/blog/mysqldump-best-practices-part-2/"&gt;how to handle the migration and import for stored program objects and views from your MySQL database&lt;/a&gt;. To read more about the prerequisites for a successful dump and restore operation for large MySQL databases, check out the first part of this 2-part blog series.&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table width="100%"&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;th&gt;
&lt;h2&gt;mysqldump Best Practices&lt;/h2&gt;
&lt;ul&gt;
    &lt;li&gt;&lt;a href="https://scalegrid.io/blog/mysqldump-best-practices-part-1/"&gt;Part 1 - MySQL Prerequisites&lt;/a&gt;&lt;/li&gt;
    &lt;li&gt;Part 2 - Migrations Guide&lt;/li&gt;
&lt;/ul&gt;
&lt;/th&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;h2&gt;
&lt;a id="intro"&gt;&lt;/a&gt;Importing your stored procedures, functions and triggers&lt;/h2&gt;

&lt;p&gt;By default, mysqldump includes views and triggers. However, it does not include procedures, functions, and events. To import procedures and functions, the &lt;code&gt;--routines&lt;/code&gt; option should be specified, and to import events, the &lt;code&gt;--events&lt;/code&gt; option should be specified.&lt;/p&gt;
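&lt;p&gt;Putting the options together, the invocation can be sketched as follows (the host, user, and database names are placeholders; a real run would supply credentials and redirect the output to a file):&lt;/p&gt;

```python
# Sketch: building a mysqldump invocation that includes stored
# routines and events along with the default tables, views, and
# triggers. Host, user, and database names are placeholders.
dump_cmd = [
    "mysqldump",
    "--host", "source-db.example.com",
    "--user", "dump_user",
    "--single-transaction",
    "--routines",   # include stored procedures and functions
    "--events",     # include scheduled events
    "--triggers",   # dumped by default; listed here for clarity
    "mydb",
]
# e.g. run with subprocess and redirect stdout to mydb.sql
print(" ".join(dump_cmd))
```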

&lt;h2&gt;
&lt;a id="1"&gt;&lt;/a&gt;1. Importing triggers&lt;/h2&gt;

&lt;p&gt;mysqldump will attempt to dump all the triggers in your database by default. To be able to dump a table's &lt;code&gt;triggers&lt;/code&gt;, you must have the &lt;a href="https://dev.mysql.com/doc/refman/5.7/en/privileges-provided.html#priv_trigger"&gt;&lt;code&gt;TRIGGER&lt;/code&gt;&lt;/a&gt; privilege for the table. If the dump user does not have this privilege, triggers will be skipped and mysqldump will not throw any error, so don’t be surprised if you don't see any triggers imported into your destination database.&lt;/p&gt;

&lt;h2&gt;
&lt;a id="2"&gt;&lt;/a&gt;2. Importing events&lt;/h2&gt;

&lt;p&gt;To import events, you need to specify the &lt;code&gt;--events&lt;/code&gt; option while invoking the mysqldump utility. This option requires the &lt;a href="https://dev.mysql.com/doc/refman/5.7/en/privileges-provided.html#priv_event"&gt;&lt;code&gt;EVENT&lt;/code&gt;&lt;/a&gt; privilege for those databases. Again, mysqldump will silently skip events if the dump user does not have this privilege, even if you have specified the &lt;code&gt;--events&lt;/code&gt; option when invoking mysqldump.&lt;/p&gt;

&lt;h2&gt;
&lt;a id="3"&gt;&lt;/a&gt;3. Importing functions and stored procedure&lt;/h2&gt;

&lt;p&gt;To import routines, you need to specify the &lt;code&gt;--routines&lt;/code&gt; option while invoking the mysqldump utility. This option requires the global &lt;code&gt;SELECT&lt;/code&gt; privilege. Even in this case, mysqldump will silently skip functions and procedures if the dump user does not have this privilege, even if you have specified the &lt;code&gt;--routines&lt;/code&gt; option when invoking mysqldump.&lt;/p&gt;

&lt;h3&gt;
&lt;a id="3-1"&gt;&lt;/a&gt;3.1 Importing non deterministic functions&lt;/h3&gt;

&lt;p&gt;A stored program that modifies data is called non-deterministic if it does not produce repeatable results, for example one that uses the RAND() function. It is especially challenging to use such functions in replicated setups, as they can result in different data on the source and the replica. To control such possibilities, MySQL imposes certain restrictions on function creation when binary logging is enabled.&lt;/p&gt;

&lt;p&gt;By default, for a &lt;a href="https://dev.mysql.com/doc/refman/5.7/en/create-function.html"&gt;&lt;code&gt;CREATE FUNCTION&lt;/code&gt;&lt;/a&gt; statement to be accepted, at least one of &lt;code&gt;DETERMINISTIC&lt;/code&gt;, &lt;code&gt;NO SQL&lt;/code&gt;, or &lt;code&gt;READS SQL DATA&lt;/code&gt; must be specified explicitly. Otherwise an error occurs:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;ERROR 1418 (HY000) at line 181: This function has none of DETERMINISTIC, NO SQL, or READS SQL DATA in its declaration and binary logging is enabled (you &lt;em&gt;might&lt;/em&gt; want to use the less safe log_bin_trust_function_creators variable)&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;So if your function is not declared as deterministic on the source, and binary logging is enabled on your destination, you will see the above error during the restore of the dump. Hence it is important to understand the deterministic nature of your functions upfront. If you are sure that your functions are deterministic, you need to turn on the &lt;a href="https://dev.mysql.com/doc/refman/5.7/en/replication-options-binary-log.html#sysvar_log_bin_trust_function_creators"&gt;&lt;code&gt;log_bin_trust_function_creators&lt;/code&gt;&lt;/a&gt; configuration on your destination before the restore operation. When enabled, MySQL allows creation of such functions even when binary logging is enabled.&lt;/p&gt;

&lt;h2&gt;
&lt;a id="4"&gt;&lt;/a&gt;4. SQL SECURITY characteristic of the stored routines and views.&lt;/h2&gt;

&lt;p&gt;MySQL allows a &lt;code&gt;SQL SECURITY&lt;/code&gt; context to be specified while creating stored programs or views. The &lt;code&gt;SQL SECURITY&lt;/code&gt; characteristic can be specified as &lt;code&gt;DEFINER&lt;/code&gt; or &lt;code&gt;INVOKER&lt;/code&gt;. If the &lt;code&gt;SQL_SECURITY&lt;/code&gt; context is &lt;code&gt;DEFINER&lt;/code&gt;, the routine executes using the privileges of the account named in the routine &lt;code&gt;DEFINER&lt;/code&gt; clause. If the context is &lt;code&gt;INVOKER&lt;/code&gt;, the routine executes using the privileges of the user who invokes it. The default value is &lt;code&gt;DEFINER&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;If you are restoring stored routines or views, you need to ensure the definer user account exists on your destination database with appropriate grants. Otherwise you will encounter failures during restore.&lt;/p&gt;

&lt;p&gt;Let's demonstrate this with an example related to views.&lt;/p&gt;

&lt;p&gt;Let's suppose you have views V1 and V2 defined as below:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;CREATE definer=admin@'%' VIEW mydb.V1 AS SELECT * FROM solution_table;&lt;br&gt;
CREATE definer=admin@'%' VIEW mydb.V2 AS SELECT * FROM V1 where num1=10;&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Note that views are dumped by default by mysqldump, and if you don’t have the user ‘admin’ on your destination, you will encounter the below error during the restore operation:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;Command failed with error - ERROR 1449 (HY000) at line 206 in file: '/mysql_data/mysqldump/sqldump_1582457155758.sql': The user specified as a definer ('admin'@'%') does not exist.&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Note that it is not just sufficient to ensure the user exists, but the user needs to have appropriate privileges to execute the views. For example if the user &lt;code&gt;admin@'%'&lt;/code&gt; exists on the destination, but does not have &lt;code&gt;SELECT&lt;/code&gt; privileges on the mydb database, you will see an error message:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;'/mysql_data/mysqldump/sqldump_1582456858033.sql':View 'mydb.V2' references invalid table(s) or column(s) or function(s) or definer/invoker of view lack rights to use them.&lt;/code&gt;&lt;/p&gt;

&lt;h2&gt;
&lt;a id="5"&gt;&lt;/a&gt;Summary&lt;/h2&gt;

&lt;p&gt;In this 2-part blog series, we covered important prerequisites you need to handle to ensure successful migration of your data and stored programs. &lt;a href="https://scalegrid.io/mysql.html"&gt;ScaleGrid MySQL&lt;/a&gt; hosting handles these guidelines to provide a smooth experience while importing your data to ScaleGrid platform. Please share with us your experience and best practices you adopt for MySQL data migrations!&lt;/p&gt;

</description>
      <category>mysql</category>
      <category>database</category>
      <category>cloud</category>
      <category>tutorial</category>
    </item>
    <item>
      <title>Which Is The Best PostgreSQL GUI? 2021 Comparison</title>
      <dc:creator>ScaleGrid</dc:creator>
      <pubDate>Wed, 17 Mar 2021 16:11:50 +0000</pubDate>
      <link>https://forem.com/scalegrid/which-is-the-best-postgresql-gui-2021-comparison-4a6g</link>
      <guid>https://forem.com/scalegrid/which-is-the-best-postgresql-gui-2021-comparison-4a6g</guid>
      <description>&lt;p&gt;&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fscalegrid.io%2Fblog%2Fwp-content%2Fuploads%2F2019%2F09%2FWhich-Is-The-Best-PostgreSQL-GUI-2019-Comparison-ScaleGrid-Blog.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fscalegrid.io%2Fblog%2Fwp-content%2Fuploads%2F2019%2F09%2FWhich-Is-The-Best-PostgreSQL-GUI-2019-Comparison-ScaleGrid-Blog.jpg"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;PostgreSQL graphical user interface (GUI) tools help open source database users manage, manipulate, and visualize their data. In this post, we discuss the &lt;a href="https://scalegrid.io/blog/which-is-the-best-postgresql-gui-2019-comparison/" rel="noopener noreferrer"&gt;top 6 GUI tools&lt;/a&gt; for administering your &lt;a href="https://scalegrid.io/postgresql.html" rel="noopener noreferrer"&gt;PostgreSQL deployments&lt;/a&gt;. PostgreSQL is the fourth most popular database management system in the world, and is heavily used in applications of all sizes. The traditional way to work with a database is through the command-line interface (CLI); however, this interface presents a &lt;a href="https://tableplus.com/blog/2018/08/cli-vs-gui-which-one-is-better.html" rel="noopener noreferrer"&gt;number of issues&lt;/a&gt;:&lt;/p&gt;

&lt;ul&gt;
    &lt;li&gt;It has a steep learning curve, making it hard to get the best out of the DBMS.&lt;/li&gt;
    &lt;li&gt;The console display may not be to your liking, and it shows only a little information at a time.&lt;/li&gt;
    &lt;li&gt;It is difficult to browse databases and tables, check indexes, and monitor databases through the console.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Many &lt;a href="https://www.quora.com/Do-most-engineers-prefer-working-off-of-the-command-line-for-database-systems-or-using-a-GUI" rel="noopener noreferrer"&gt;still prefer CLIs&lt;/a&gt; over GUIs, but that group is steadily shrinking. I believe anyone who came into programming after 2010 will tell you that GUI tools increase their productivity over a CLI solution.&lt;/p&gt;

&lt;h2&gt;Why Use a GUI Tool?&lt;/h2&gt;

&lt;p&gt;Now that we understand the issues users face with the CLI, let’s take a look at the &lt;a href="https://www.computerhope.com/issues/ch000619.htm" rel="noopener noreferrer"&gt;advantages of using a PostgreSQL GUI&lt;/a&gt;:&lt;/p&gt;

&lt;ul&gt;
    &lt;li&gt;Shortcut keys make the tool easier to use, and much easier for new users to learn.&lt;/li&gt;
    &lt;li&gt;Offers great visualization to help you interpret your data.&lt;/li&gt;
    &lt;li&gt;You can remotely access and navigate another database server.&lt;/li&gt;
    &lt;li&gt;The window-based interface makes it much easier to manage your PostgreSQL data.&lt;/li&gt;
    &lt;li&gt;Easier access to files, features, and the operating system.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;So, bottom line, GUI tools make PostgreSQL developers' lives easier.&lt;/p&gt;

&lt;h2&gt;Top PostgreSQL GUI Tools&lt;/h2&gt;

&lt;p&gt;Today I will tell you about the 6 best PostgreSQL GUI tools. Let’s start with the first and most popular one.&lt;/p&gt;

&lt;h3&gt;1. pgAdmin&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://scalegrid.io/blog/wp-content/uploads/2019/09/Which-Is-The-Best-PostgreSQL-GUI-pgadmin-UI-ScaleGrid-Blog.png" rel="noopener noreferrer"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fscalegrid.io%2Fblog%2Fwp-content%2Fuploads%2F2019%2F09%2FWhich-Is-The-Best-PostgreSQL-GUI-pgadmin-UI-ScaleGrid-Blog.png" alt="Which Is The Best PostgreSQL GUI - pgAdmin"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.pgadmin.org/" rel="noopener noreferrer"&gt;pgAdmin&lt;/a&gt; is the de facto GUI tool for PostgreSQL, and the first tool anyone would use for PostgreSQL. It supports all PostgreSQL operations and features while being free and open source. pgAdmin is used by both novice and seasoned DBAs and developers for database administration.&lt;/p&gt;

&lt;p&gt;Here are some of the top reasons why PostgreSQL users love pgAdmin:&lt;/p&gt;

&lt;ul&gt;
    &lt;li&gt;Create, view, and edit all common PostgreSQL objects.&lt;/li&gt;
    &lt;li&gt;Offers a graphical query planning tool with color syntax highlighting.&lt;/li&gt;
    &lt;li&gt;The dashboard lets you monitor server activities such as database locks, connected sessions, and prepared transactions.&lt;/li&gt;
    &lt;li&gt;Since pgAdmin is a web application, you can deploy it on any server and access it remotely.&lt;/li&gt;
    &lt;li&gt;The pgAdmin UI consists of detachable panels that you can arrange according to your liking.&lt;/li&gt;
    &lt;li&gt;Provides a procedural language debugger to help you debug your code.&lt;/li&gt;
    &lt;li&gt;pgAdmin has a portable version which can help you easily move your data between machines.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;There are several cons of pgAdmin that users have generally complained about:&lt;/p&gt;

&lt;ul&gt;
    &lt;li&gt;The UI is slow and non-intuitive compared to paid GUI tools.&lt;/li&gt;
    &lt;li&gt;pgAdmin uses too many resources.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;pgAdmin can be used on Windows, Linux, and Mac OS. We listed it first as it’s the most used GUI tool for PostgreSQL, and the only native PostgreSQL GUI tool in our list. As it’s dedicated exclusively to PostgreSQL, you can expect it to update with the latest features of each version. pgAdmin can be downloaded from their official &lt;a href="https://www.pgadmin.org/download/" rel="noopener noreferrer"&gt;website&lt;/a&gt;.&lt;/p&gt;

&lt;h4&gt;pgAdmin Pricing: Free (open source)&lt;/h4&gt;

&lt;h3&gt;2. DBeaver&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://scalegrid.io/blog/wp-content/uploads/2019/09/Which-Is-The-Best-PostgreSQL-GUI-DBeaver-UI-ScaleGrid-Blog.png" rel="noopener noreferrer"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fscalegrid.io%2Fblog%2Fwp-content%2Fuploads%2F2019%2F09%2FWhich-Is-The-Best-PostgreSQL-GUI-DBeaver-UI-ScaleGrid-Blog.png" alt="Which Is The Best PostgreSQL GUI - DBeaver"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://dbeaver.io/" rel="noopener noreferrer"&gt;DBeaver&lt;/a&gt; is a major cross-platform GUI tool for PostgreSQL that both developers and database administrators love. DBeaver is not a native GUI tool for PostgreSQL, as it supports all the popular databases like MySQL, MariaDB, Sybase, SQLite, Oracle, SQL Server, DB2, MS Access, Firebird, Teradata, Apache Hive, Phoenix, Presto, and Derby - any database which has a JDBC driver (over 80 databases!).&lt;/p&gt;

&lt;p&gt;Here are some of the top DBeaver GUI features for PostgreSQL:&lt;/p&gt;

&lt;ul&gt;
    &lt;li&gt;The visual query builder helps you construct complex SQL queries without actual knowledge of SQL.&lt;/li&gt;
    &lt;li&gt;It has one of the best editors – multiple data views are available to support a variety of user needs.&lt;/li&gt;
    &lt;li&gt;Convenient navigation among data.&lt;/li&gt;
    &lt;li&gt;In DBeaver, you can generate fake data that looks like real data, allowing you to test your systems.&lt;/li&gt;
    &lt;li&gt;Full-text data search against all chosen tables/views with search results shown as filtered tables/views.&lt;/li&gt;
    &lt;li&gt;Metadata search among rows in database system tables.&lt;/li&gt;
    &lt;li&gt;Import and export data with many file formats such as CSV, HTML, XML, JSON, XLS, XLSX.&lt;/li&gt;
    &lt;li&gt;Provides advanced security for your databases by storing passwords in secured storage protected by a master password.&lt;/li&gt;
    &lt;li&gt;Automatically generated ER diagrams for a database/schema.&lt;/li&gt;
    &lt;li&gt;Enterprise Edition provides a special online support system.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;One of the cons of DBeaver is it may be slow when dealing with large data sets compared to some expensive GUI tools like Navicat and DataGrip.&lt;/p&gt;

&lt;p&gt;You can run DBeaver on Windows, Linux, and macOS, and easily &lt;a href="https://help.scalegrid.io/docs/postgresql-connecting-to-gui-dbeaver" rel="noopener noreferrer"&gt;connect DBeaver PostgreSQL&lt;/a&gt; with or without SSL. It has a free open-source edition as well as an enterprise edition. You can &lt;a href="https://dbeaver.com/buy/" rel="noopener noreferrer"&gt;buy&lt;/a&gt; a standard license for the enterprise edition at $199, or by subscription at $19/month. The free version is good enough for most companies; many DBeaver users will tell you the free edition is better than pgAdmin.&lt;/p&gt;

&lt;h4&gt;
&lt;a href="https://dbeaver.com/buy/" rel="noopener noreferrer"&gt;DBeaver Pricing&lt;/a&gt;: Free community, $199 standard license&lt;/h4&gt;

&lt;h3&gt;3. OmniDB&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://scalegrid.io/blog/wp-content/uploads/2019/09/Which-Is-The-Best-PostgreSQL-GUI-OmniDB-UI-ScaleGrid-Blog.png" rel="noopener noreferrer"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fscalegrid.io%2Fblog%2Fwp-content%2Fuploads%2F2019%2F09%2FWhich-Is-The-Best-PostgreSQL-GUI-OmniDB-UI-ScaleGrid-Blog.png" alt="Which Is The Best PostgreSQL GUI - OmniDB"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The next PostgreSQL GUI we’re going to review is &lt;a href="https://omnidb.org/" rel="noopener noreferrer"&gt;OmniDB&lt;/a&gt;. OmniDB lets you add, edit, and manage data, along with all other necessary features, in a unified workspace. Although OmniDB supports other database systems like MySQL, Oracle, and MariaDB, its primary target is PostgreSQL. This open source tool is mainly sponsored by 2ndQuadrant. OmniDB supports all three major platforms, namely Windows, Linux, and Mac OS X.&lt;/p&gt;

&lt;p&gt;There are many reasons why you should use OmniDB for your Postgres developments:&lt;/p&gt;

&lt;ul&gt;
    &lt;li&gt;You can easily configure it by adding and removing connections, and leverage encrypted connections when remote connections are necessary.&lt;/li&gt;
    &lt;li&gt;Smart SQL editor helps you to write SQL codes through autocomplete and syntax highlighting features.&lt;/li&gt;
    &lt;li&gt;Add-on support is available for debugging PostgreSQL functions and procedures.&lt;/li&gt;
    &lt;li&gt;You can monitor your database from a dashboard of customizable charts that show real-time information.&lt;/li&gt;
    &lt;li&gt;Query plan visualization helps you find bottlenecks in your SQL queries.&lt;/li&gt;
    &lt;li&gt;It allows access from multiple computers with encrypted personal information.&lt;/li&gt;
    &lt;li&gt;Developers can add and share new features via &lt;a href="https://www.2ndquadrant.com/en/news/announcing-release-omnidb-29/" rel="noopener noreferrer"&gt;plugins&lt;/a&gt;.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;There are a couple of cons with OmniDB:&lt;/p&gt;

&lt;ul&gt;
    &lt;li&gt;OmniDB lacks community support in comparison to pgAdmin and DBeaver. So, you might find it more difficult to learn this tool, and you could feel a bit on your own when you face an issue.&lt;/li&gt;
    &lt;li&gt;It doesn’t have as many features as paid GUI tools like Navicat and DataGrip.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;OmniDB users have favorable opinions about it, and you can download OmniDB for PostgreSQL from &lt;a href="https://omnidb.org/en/downloads-en" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;h4&gt;OmniDB Pricing: Free (open source)&lt;/h4&gt;

&lt;h3&gt;4. DataGrip&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://scalegrid.io/blog/wp-content/uploads/2019/09/Which-Is-The-Best-PostgreSQL-GUI-Datagrip-UI-ScaleGrid-Blog.png" rel="noopener noreferrer"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fscalegrid.io%2Fblog%2Fwp-content%2Fuploads%2F2019%2F09%2FWhich-Is-The-Best-PostgreSQL-GUI-Datagrip-UI-ScaleGrid-Blog.png" alt="Which Is The Best PostgreSQL GUI - Datagrip"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="http://jetbrains.com/datagrip/" rel="noopener noreferrer"&gt;DataGrip&lt;/a&gt; is a cross-platform integrated development environment (IDE) that supports multiple database environments. The most important thing to note about DataGrip is that it's developed by JetBrains, one of the leading brands for developing IDEs. If you have ever used PhpStorm, IntelliJ IDEA, PyCharm, WebStorm, you won't need an introduction on how good JetBrains IDEs are.&lt;/p&gt;

&lt;p&gt;There are many exciting features to like in the DataGrip PostgreSQL GUI:&lt;/p&gt;

&lt;ul&gt;
    &lt;li&gt;The context-sensitive and schema-aware auto-complete feature suggests more relevant code completions.&lt;/li&gt;
    &lt;li&gt;It has a beautiful and customizable UI along with an intelligent query console that keeps track of all your activities so you won’t lose your work. Moreover, you can easily add, remove, edit, and clone data rows with its powerful editor.&lt;/li&gt;
    &lt;li&gt;There are many ways to navigate schema between tables, views, and procedures.&lt;/li&gt;
    &lt;li&gt;It can immediately detect bugs in your code and suggest the best options to fix them.&lt;/li&gt;
    &lt;li&gt;It has an advanced refactoring process – when you rename a variable or an object, it can resolve all references automatically.&lt;/li&gt;
    &lt;li&gt;DataGrip is not just a GUI tool for PostgreSQL, but a full-featured IDE that has features like version control systems.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;There are a few cons in DataGrip:&lt;/p&gt;

&lt;ul&gt;
    &lt;li&gt;The obvious issue is that it's not native to PostgreSQL, so it lacks PostgreSQL-specific features. For example, debugging errors is not easy because not all of them can be displayed.&lt;/li&gt;
    &lt;li&gt;Like most JetBrains IDEs, DataGrip has a steep learning curve, which can be a bit overwhelming for beginner developers.&lt;/li&gt;
    &lt;li&gt;It consumes a lot of system resources, particularly RAM.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;DataGrip supports a tremendous list of database management systems, including SQL Server, MySQL, Oracle, SQLite, Azure Database, DB2, H2, MariaDB, Cassandra, HyperSQL, Apache Derby, and many more.&lt;/p&gt;

&lt;p&gt;DataGrip supports all three major operating systems, Windows, Linux, and Mac OS. One of the downsides is that JetBrains products are comparatively costly. DataGrip has two different prices for organizations and individuals. &lt;a href="https://www.jetbrains.com/datagrip/buy/#commercial?billing=yearly" rel="noopener noreferrer"&gt;DataGrip for Organizations&lt;/a&gt; will cost you $19.90/month, or $199 for the first year, $159 for the second year, and $119 for the third year onwards. The &lt;a href="https://www.jetbrains.com/datagrip/buy/#personal?billing=yearly" rel="noopener noreferrer"&gt;individual package&lt;/a&gt; will cost you $8.90/month, or $89 for the first year. You can test it out during the free 30-day &lt;a href="https://www.jetbrains.com/datagrip/download" rel="noopener noreferrer"&gt;trial period&lt;/a&gt;.&lt;/p&gt;

&lt;h4&gt;
&lt;a href="https://www.jetbrains.com/datagrip/buy/#commercial?billing=yearly" rel="noopener noreferrer"&gt;DataGrip Pricing&lt;/a&gt;: $8.90/month to $199/year&lt;/h4&gt;

&lt;h3&gt;5. Navicat&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://scalegrid.io/blog/wp-content/uploads/2019/09/Which-Is-The-Best-PostgreSQL-GUI-Navicat-UI-ScaleGrid-Blog.png" rel="noopener noreferrer"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fscalegrid.io%2Fblog%2Fwp-content%2Fuploads%2F2019%2F09%2FWhich-Is-The-Best-PostgreSQL-GUI-Navicat-UI-ScaleGrid-Blog.png" alt="Which Is The Best PostgreSQL GUI - Navicat"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.navicat.com/en/products/navicat-for-postgresql" rel="noopener noreferrer"&gt;Navicat&lt;/a&gt; is an easy-to-use graphical tool that targets both beginner and experienced developers. It supports several database systems such as MySQL, PostgreSQL, and MongoDB. One of the special features of Navicat is its collaboration with cloud databases like Amazon Redshift, Amazon RDS, Amazon Aurora, Microsoft Azure, Google Cloud, Tencent Cloud, Alibaba Cloud, and Huawei Cloud.&lt;/p&gt;

&lt;p&gt;&lt;span&gt;Important features of Navicat for Postgres include:&lt;/span&gt;&lt;/p&gt;

&lt;ul&gt;
    &lt;li&gt;It has a very intuitive and fast UI. You can easily create and edit SQL statements with its visual SQL builder, and the powerful code auto-completion saves you a lot of time and helps you avoid mistakes.&lt;/li&gt;
    &lt;li&gt;Navicat has a powerful data modeling tool for visualizing database structures, making changes, and designing entire schemas from scratch. You can manipulate almost any database object visually through diagrams.&lt;/li&gt;
    &lt;li&gt;Navicat can run scheduled jobs and notify you via email when the job is done running.&lt;/li&gt;
    &lt;li&gt;Navicat is capable of synchronizing different data sources and schemas.&lt;/li&gt;
    &lt;li&gt;Navicat has an add-on feature (Navicat Cloud) that offers project-based team collaboration.&lt;/li&gt;
    &lt;li&gt;It establishes secure connections through SSH tunneling and SSL, ensuring every connection is secure, stable, and reliable.&lt;/li&gt;
    &lt;li&gt;You can import and export data to diverse formats like Excel, Access, CSV, and more.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Despite all the good features, there are a few cons that you need to consider before buying Navicat:&lt;/p&gt;

&lt;ul&gt;
    &lt;li&gt;The license is locked to a single platform; you need to buy separate licenses for PostgreSQL and MySQL. Given the high price, this can be prohibitive for a small company or a freelancer.&lt;/li&gt;
    &lt;li&gt;It has so many features that a newcomer will need some time to get going.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;You can use Navicat in Windows, Linux, Mac OS, and iOS environments. The quality of Navicat is endorsed by its well-known clients, including Apple, Oracle, Google, Microsoft, Facebook, Disney, and Adobe. Navicat comes in three editions: Enterprise, Standard, and Non-Commercial. The Enterprise edition costs $14.99/month, or $299 for a perpetual license; the Standard edition is $9.99/month, or $199 for a perpetual license; and the Non-Commercial edition costs $5.99/month, or $119 for its perpetual license. You can get full price details &lt;a href="https://www.navicat.com/en/store/navicat-for-postgresql-plan" rel="noopener noreferrer"&gt;here&lt;/a&gt;, and download the 14-day Navicat trial version from &lt;a href="https://www.navicat.com/en/download/navicat-for-postgresql" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;h4&gt;
&lt;a href="https://www2.navicat.com/en/store/navicat-for-postgresql" rel="noopener noreferrer"&gt;Navicat Pricing&lt;/a&gt;: $5.99/month up to $299/license&lt;/h4&gt;

&lt;h3&gt;6. HeidiSQL&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://scalegrid.io/blog/wp-content/uploads/2021/03/HeidiSQL-GUI-PostgreSQL.png" rel="noopener noreferrer"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fscalegrid.io%2Fblog%2Fwp-content%2Fuploads%2F2021%2F03%2FHeidiSQL-GUI-PostgreSQL.png" alt="Which is the best PostgreSQL GUI? HeidiSQL"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.heidisql.com/" rel="noopener noreferrer"&gt;HeidiSQL&lt;/a&gt; is a new addition to our best PostgreSQL GUI tools list in 2021. It is a lightweight, free open source GUI that helps you manage tables, logs and users, edit data, views, procedures and scheduled events, and is continuously enhanced by the active group of contributors. HeidiSQL was initially developed for MySQL, and later added support for MS SQL Server, PostgreSQL, SQLite and MariaDB. Invented in 2002 by Ansgar Becker, HeidiSQL aims to be easy to learn and provide the simplest way to connect to a database, fire queries, and see what’s in a database.&lt;/p&gt;

&lt;p&gt;Some of the advantages of HeidiSQL for PostgreSQL include:&lt;/p&gt;

&lt;ul&gt;
    &lt;li&gt;Connects to multiple servers in one window.&lt;/li&gt;
    &lt;li&gt;Generates nice SQL-exports, and allows you to export from one server/database directly to another server/database.&lt;/li&gt;
    &lt;li&gt;Provides a comfortable grid to browse and edit table data, and perform bulk table edits such as moving to another database or changing the engine or collation.&lt;/li&gt;
    &lt;li&gt;You can write queries with customizable syntax-highlighting and code-completion.&lt;/li&gt;
    &lt;li&gt;It has an active community helping to support other users and GUI improvements.&lt;/li&gt;
    &lt;li&gt;Allows you to find specific text in all tables of all databases on a single server, and to optimize and repair tables in a batch manner.&lt;/li&gt;
    &lt;li&gt;Provides a dialog for quick grid/data exports to Excel, HTML, JSON, PHP, even LaTeX.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;There are a few cons to HeidiSQL:&lt;/p&gt;

&lt;ul&gt;
    &lt;li&gt;Does not offer a procedural language debugger to help you debug your code.&lt;/li&gt;
    &lt;li&gt;Built for Windows, and currently only supports Windows (which is not a con for our Windows readers!).&lt;/li&gt;
    &lt;li&gt;HeidiSQL has a fair number of bugs, but the author is very attentive and active in addressing issues.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If HeidiSQL is right for you, you can download it &lt;a href="https://www.heidisql.com/download.php" rel="noopener noreferrer"&gt;here&lt;/a&gt; and follow updates on their &lt;a href="https://github.com/HeidiSQL/HeidiSQL" rel="noopener noreferrer"&gt;GitHub&lt;/a&gt; page.&lt;/p&gt;

&lt;h4&gt;HeidiSQL Pricing: Free (open source)&lt;/h4&gt;

&lt;h2&gt;Conclusion&lt;/h2&gt;

&lt;p&gt;Let’s summarize our top PostgreSQL GUI comparison. Almost everyone starts PostgreSQL with pgAdmin. It has great community support, and there are a lot of resources to help you if you face an issue. pgAdmin usually satisfies developers’ needs to a great extent, so most do not look for other GUI tools. That’s why pgAdmin remains the most popular GUI tool.&lt;/p&gt;

&lt;p&gt;If you are looking for an open source solution that has a better UI and visual editor, then DBeaver and OmniDB are great solutions for you. For users looking for a free lightweight GUI that supports multiple database types, HeidiSQL may be right for you. If you are looking for more features than what’s provided by an open source tool, and you’re ready to pay a good price for it, then Navicat and DataGrip are the best GUI products on the market.&lt;/p&gt;

&lt;p&gt;While I believe one of these tools should surely support your requirements, there are other popular GUI tools for PostgreSQL that you might like, including &lt;a href="https://www.valentina-db.com/en/" rel="noopener noreferrer"&gt;Valentina Studio&lt;/a&gt;, &lt;a href="https://www.adminer.org/" rel="noopener noreferrer"&gt;Adminer&lt;/a&gt;, &lt;a href="https://www.dbvis.com/" rel="noopener noreferrer"&gt;DbVisualizer&lt;/a&gt;, and &lt;a href="https://www.sql-workbench.eu/" rel="noopener noreferrer"&gt;SQL Workbench/J&lt;/a&gt;. I hope this article helps you decide which GUI tool suits your needs.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://scalegrid.io/blog/which-is-the-best-postgresql-gui-2019-comparison/" rel="noopener noreferrer"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fscalegrid.io%2Fblog%2Fwp-content%2Fuploads%2F2021%2F03%2FPostgreSQL-GUI-Blog-Infographic-ScaleGrid-DBaaS.jpg" alt="Which Is The Best PostgreSQL GUI? Infographic by ScaleGrid"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;Which Is The Best PostgreSQL GUI? 2019 Comparison&lt;/h2&gt;

&lt;p&gt;Here are the top PostgreSQL GUI tools covered in our 2019 post:&lt;/p&gt;

&lt;ol&gt;
    &lt;li&gt;pgAdmin&lt;/li&gt;
    &lt;li&gt;DBeaver&lt;/li&gt;
    &lt;li&gt;Navicat&lt;/li&gt;
    &lt;li&gt;DataGrip&lt;/li&gt;
    &lt;li&gt;OmniDB&lt;/li&gt;
&lt;/ol&gt;

</description>
      <category>postgres</category>
      <category>gui</category>
      <category>database</category>
      <category>cli</category>
    </item>
    <item>
      <title>Oracle Cloud Breakdown – Database Hosting Costs on OCI</title>
      <dc:creator>ScaleGrid</dc:creator>
      <pubDate>Fri, 05 Mar 2021 00:25:01 +0000</pubDate>
      <link>https://forem.com/scalegrid/oracle-cloud-breakdown-database-hosting-costs-on-oci-4j9c</link>
      <guid>https://forem.com/scalegrid/oracle-cloud-breakdown-database-hosting-costs-on-oci-4j9c</guid>
      <description>&lt;p&gt;When considering a new cloud provider, the big names come to mind -  AWS, Azure, and Google Cloud. If you’re a developer, you might even be considering a dev-friendly cloud like DigitalOcean or Linode. But, did you know there’s a (relatively) new player in the cloud game? &lt;a href="https://www.oracle.com/cloud/" rel="nofollow"&gt;Oracle Cloud Infrastructure&lt;/a&gt; (OCI) is a cloud computing service introduced by, you guessed it, Oracle, to claim their stake in the ever-growing cloud services market. In this post, we’re going to &lt;a href="https://scalegrid.io/blog/oracle-cloud-breakdown-database-hosting-costs-on-oci/"&gt;compare Oracle Cloud costs&lt;/a&gt; vs. popular cloud providers for fully managed database hosting for &lt;a href="https://scalegrid.io/mysql/oci.html"&gt;MySQL&lt;/a&gt;, &lt;a href="https://scalegrid.io/postgresql/oci.html"&gt;PostgreSQL&lt;/a&gt;, &lt;a href="https://scalegrid.io/redis/oci.html"&gt;Redis™&lt;/a&gt; and &lt;a href="https://scalegrid.io/mongodb/oci.html"&gt;MongoDB® database&lt;/a&gt;.&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table width="100%"&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;th&gt;
&lt;h2&gt;Jump to the Oracle Cloud pricing comparison for:&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;MySQL database&lt;/li&gt;
&lt;li&gt;PostgreSQL database&lt;/li&gt;
&lt;li&gt;Redis™ database&lt;/li&gt;
&lt;li&gt;MongoDB® database&lt;/li&gt;
&lt;/ul&gt;
&lt;/th&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;h2&gt;An Overview on Oracle Cloud&lt;/h2&gt;

&lt;p&gt;Before we get started, let’s talk a little bit about Oracle Cloud. OCI was made generally available less than 5 years ago, in 2016, under the name Oracle Bare Metal Cloud Services, only to be rebranded 2 years later as Oracle Cloud Infrastructure. In this short period of time, Oracle Cloud has already built a deep portfolio of services rivaling any existing cloud on the market, including compute, storage, networking, governance, database management, load balancing, and edge services, to name just a few. Their growth is not just in services, though; they have also invested in availability. With over 29 cloud regions available across the globe, they aim to offer proximity, data sovereignty compliance, and disaster protection for every country.&lt;/p&gt;

&lt;p&gt;Coming late to the game, Oracle Cloud needed to make a name for itself (a new name, that is), and it chose to rebrand as the high-performance, lower-cost alternative to AWS. In this post, we are going to focus on costs, but look out for our benchmark reports on Oracle Cloud performance vs. leading cloud providers.&lt;/p&gt;

&lt;p&gt;So, what type of customer is Oracle Cloud targeting? As the company that dominated the on-premises infrastructure and database market for decades, Oracle surely felt the massive shift of on-premises workloads to the cloud. With the new &lt;a href="https://www.oracle.com/cloud/" rel="nofollow"&gt;Oracle Cloud&lt;/a&gt; services, Oracle is targeting “enterprises looking for higher performance, lower costs, and easier cloud migration for their existing on-premises applications.” In other words, the OCI strategy is highly focused on customer retention - migrating existing on-premises customers to its cloud product rather than losing them to AWS, Azure, or GCP.&lt;/p&gt;

&lt;p&gt;Now that we have a brief history on Oracle Cloud Infrastructure, let’s dig into pricing.&lt;/p&gt;

&lt;h2&gt;Oracle Cloud Pricing&lt;/h2&gt;

&lt;p&gt;According to the &lt;a href="https://www.oracle.com/cloud/economics/" rel="nofollow"&gt;Oracle website&lt;/a&gt;, OCI offers 44% lower compute costs for HPC vs. AWS. In this pricing comparison, we are going to see which cloud provider offers the most cost-effective pricing for fully managed database hosting at ScaleGrid, covering MySQL, PostgreSQL, Redis™ and MongoDB® database.&lt;/p&gt;

&lt;p&gt;In order to represent as many scenarios as possible, we are going to compare OCI vs. top cloud providers across ScaleGrid standalone and 3-node replica set deployments, and over 4 common deployment sizes by RAM: 8GB, 16GB, 32GB and 64GB of RAM. Each cloud offers slightly different configurations, so any difference in RAM size will be noted for reference in the pricing comparison tables. Note, all pricing is for our all-inclusive &lt;a href="https://scalegrid.io/pricing.html?db=MYSQL&amp;amp;cloud=cloud_oci&amp;amp;replica=deployment_standalone&amp;amp;instance=Medium#section_pricing_dedicated"&gt;Dedicated Hosting&lt;/a&gt; plans, though we additionally offer a &lt;a href="https://scalegrid.io/pricing.html?db=MYSQL&amp;amp;cloud=cloud_oci&amp;amp;replica=deployment_standalone&amp;amp;instance=Medium#section_pricing_byoc"&gt;Bring Your Own Cloud&lt;/a&gt; (BYOC) model that allows you to host through your own Oracle Cloud account. You can learn more about the differences in our &lt;a href="https://scalegrid.io/blog/bring-your-own-cloud-byoc-vs-dedicated-hosting-at-scalegrid/"&gt;Dedicated vs. BYOC&lt;/a&gt; blog post.&lt;/p&gt;

&lt;h3&gt;
&lt;a id="1"&gt;&lt;/a&gt;MySQL on OCI Pricing&lt;/h3&gt;

&lt;p&gt;We’re going to start with MySQL, the most popular open source database run on Oracle Cloud! MySQL is owned by Oracle, making it a natural fit for Oracle Cloud resources. Let’s take a look at &lt;a href="https://scalegrid.io/mysql.html"&gt;MySQL hosting&lt;/a&gt; pricing across clouds at ScaleGrid:&lt;/p&gt;

&lt;h4&gt;MySQL Standalone Deployment Pricing&lt;/h4&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table width="100%"&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;th width="30%"&gt;MySQL Standalone&lt;/th&gt;
&lt;th width="15%"&gt;8GB&lt;/th&gt;
&lt;th width="15%"&gt;16GB&lt;/th&gt;
&lt;th width="15%"&gt;32GB&lt;/th&gt;
&lt;th width="15%"&gt;64GB&lt;/th&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;OCI&lt;/td&gt;
&lt;td&gt;$90&lt;/td&gt;
&lt;td&gt;$180&lt;/td&gt;
&lt;td&gt;$300&lt;br&gt;&lt;span&gt;(30GB RAM)&lt;/span&gt;
&lt;/td&gt;
&lt;td&gt;$600&lt;br&gt;&lt;span&gt;(60GB RAM)&lt;/span&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;AWS&lt;/td&gt;
&lt;td&gt;$191&lt;/td&gt;
&lt;td&gt;$302&lt;/td&gt;
&lt;td&gt;$605&lt;/td&gt;
&lt;td&gt;$1,178&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Azure&lt;/td&gt;
&lt;td&gt;$191&lt;/td&gt;
&lt;td&gt;$302&lt;/td&gt;
&lt;td&gt;$605&lt;/td&gt;
&lt;td&gt;$1,178&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;GCP&lt;/td&gt;
&lt;td&gt;$191
&lt;span&gt;(7.5GB RAM)&lt;/span&gt;
&lt;/td&gt;
&lt;td&gt;$302
&lt;span&gt;(15GB RAM)&lt;/span&gt;
&lt;/td&gt;
&lt;td&gt;$605
&lt;span&gt;(30GB RAM)&lt;/span&gt;
&lt;/td&gt;
&lt;td&gt;$1,178
&lt;span&gt;(60GB RAM)&lt;/span&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;DigitalOcean&lt;/td&gt;
&lt;td&gt;$120&lt;/td&gt;
&lt;td&gt;$240&lt;/td&gt;
&lt;td&gt;$480&lt;/td&gt;
&lt;td&gt;$700&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Linode&lt;/td&gt;
&lt;td&gt;$120&lt;/td&gt;
&lt;td&gt;$240&lt;/td&gt;
&lt;td&gt;$480&lt;/td&gt;
&lt;td&gt;$700&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;As we can see in the above table, Oracle Cloud at ScaleGrid is by far the most cost-effective cloud for standalone MySQL deployments. On average, Oracle Cloud saves you 48.2% on your fully managed database hosting costs compared to AWS, Azure, and GCP, and 25.4% compared to DigitalOcean and Linode.&lt;/p&gt;
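As a quick sanity check, those averages can be reproduced from the per-tier prices in the standalone table above. A short Python sketch (prices in USD/month; AWS, Azure, and GCP are priced identically here, as are DigitalOcean and Linode):

```python
# Reproduce the average-savings figures from the standalone MySQL table above.
oci = [90, 180, 300, 600]            # OCI at the 8/16/32/64GB tiers
big3 = [191, 302, 605, 1178]         # AWS, Azure, and GCP
dev_clouds = [120, 240, 480, 700]    # DigitalOcean and Linode

def avg_savings(ours, theirs):
    """Mean per-tier percentage saved by choosing `ours` over `theirs`."""
    pct = [(1 - o / t) * 100 for o, t in zip(ours, theirs)]
    return round(sum(pct) / len(pct), 1)

print(avg_savings(oci, big3))        # 48.2 (% vs. AWS/Azure/GCP)
print(avg_savings(oci, dev_clouds))  # 25.4 (% vs. DigitalOcean/Linode)
```

Note the savings are averaged per tier rather than computed on summed totals, which is what matches the quoted figures.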

&lt;h4&gt;MySQL Replica Set Deployment Pricing&lt;/h4&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table width="100%"&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;th width="30%"&gt;MySQL Replica Set&lt;/th&gt;
&lt;th width="15%"&gt;8GB&lt;/th&gt;
&lt;th width="15%"&gt;16GB&lt;/th&gt;
&lt;th width="15%"&gt;32GB&lt;/th&gt;
&lt;th width="15%"&gt;64GB&lt;/th&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;OCI&lt;/td&gt;
&lt;td&gt;$360&lt;/td&gt;
&lt;td&gt;$700&lt;/td&gt;
&lt;td&gt;$1,400&lt;br&gt;&lt;span&gt;(30GB RAM)&lt;/span&gt;
&lt;/td&gt;
&lt;td&gt;$2,400&lt;br&gt;&lt;span&gt;(60GB RAM)&lt;/span&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;AWS&lt;/td&gt;
&lt;td&gt;$560&lt;/td&gt;
&lt;td&gt;$847&lt;/td&gt;
&lt;td&gt;$1,696&lt;/td&gt;
&lt;td&gt;$3,299&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Azure&lt;/td&gt;
&lt;td&gt;$560&lt;/td&gt;
&lt;td&gt;$847&lt;/td&gt;
&lt;td&gt;$1,696&lt;/td&gt;
&lt;td&gt;$3,299&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;GCP&lt;/td&gt;
&lt;td&gt;$560
&lt;span&gt;(7.5GB RAM)&lt;/span&gt;
&lt;/td&gt;
&lt;td&gt;$847
&lt;span&gt;(15GB RAM)&lt;/span&gt;
&lt;/td&gt;
&lt;td&gt;$1,696
&lt;span&gt;(30GB RAM)&lt;/span&gt;
&lt;/td&gt;
&lt;td&gt;$3,299
&lt;span&gt;(60GB RAM)&lt;/span&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;DigitalOcean&lt;/td&gt;
&lt;td&gt;$280&lt;/td&gt;
&lt;td&gt;$560&lt;/td&gt;
&lt;td&gt;$1,120&lt;/td&gt;
&lt;td&gt;$2,240&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Linode&lt;/td&gt;
&lt;td&gt;$280&lt;/td&gt;
&lt;td&gt;$560&lt;/td&gt;
&lt;td&gt;$1,120&lt;/td&gt;
&lt;td&gt;$2,240&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;Now, when we compare MySQL replica set deployment pricing at ScaleGrid for Oracle Cloud vs. AWS, Azure and GCP, we still see an average of 24.4% lower pricing. But, when we compare MySQL on Oracle Cloud vs. Linode and DigitalOcean, Oracle Cloud is actually 21.4% more expensive.&lt;/p&gt;

&lt;h3&gt;
&lt;a id="2"&gt;&lt;/a&gt;PostgreSQL on OCI Pricing&lt;/h3&gt;

&lt;p&gt;Next, we’re going to look at &lt;a href="https://scalegrid.io/postgresql.html"&gt;PostgreSQL hosting&lt;/a&gt; pricing for Oracle Cloud vs. AWS, Azure, GCP, DigitalOcean and Linode at ScaleGrid:&lt;/p&gt;

&lt;h4&gt;PostgreSQL Standalone Deployment Pricing&lt;/h4&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table width="100%"&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;th width="30%"&gt;PostgreSQL Standalone&lt;/th&gt;
&lt;th width="15%"&gt;8GB&lt;/th&gt;
&lt;th width="15%"&gt;16GB&lt;/th&gt;
&lt;th width="15%"&gt;32GB&lt;/th&gt;
&lt;th width="15%"&gt;64GB&lt;/th&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;OCI&lt;/td&gt;
&lt;td&gt;$90&lt;/td&gt;
&lt;td&gt;$180&lt;/td&gt;
&lt;td&gt;$300&lt;br&gt;&lt;span&gt;(30GB RAM)&lt;/span&gt;
&lt;/td&gt;
&lt;td&gt;$600&lt;br&gt;&lt;span&gt;(60GB RAM)&lt;/span&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;AWS&lt;/td&gt;
&lt;td&gt;$191&lt;/td&gt;
&lt;td&gt;$302&lt;/td&gt;
&lt;td&gt;$605&lt;/td&gt;
&lt;td&gt;$1,178&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Azure&lt;/td&gt;
&lt;td&gt;$191&lt;/td&gt;
&lt;td&gt;$302&lt;/td&gt;
&lt;td&gt;$605&lt;/td&gt;
&lt;td&gt;$1,178&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;GCP&lt;/td&gt;
&lt;td&gt;$180
&lt;span&gt;(7.5GB RAM)&lt;/span&gt;
&lt;/td&gt;
&lt;td&gt;$300
&lt;span&gt;(15GB RAM)&lt;/span&gt;
&lt;/td&gt;
&lt;td&gt;$600
&lt;span&gt;(30GB RAM)&lt;/span&gt;
&lt;/td&gt;
&lt;td&gt;$1,300
&lt;span&gt;(60GB RAM)&lt;/span&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;DigitalOcean&lt;/td&gt;
&lt;td&gt;$120&lt;/td&gt;
&lt;td&gt;$240&lt;/td&gt;
&lt;td&gt;$480&lt;/td&gt;
&lt;td&gt;$700&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Linode&lt;/td&gt;
&lt;td&gt;$120&lt;/td&gt;
&lt;td&gt;$240&lt;/td&gt;
&lt;td&gt;$480&lt;/td&gt;
&lt;td&gt;$700&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;Just like we saw with MySQL, standalone PostgreSQL hosting pricing on Oracle Cloud is 48.2% less expensive than AWS, Azure, or GCP on average at ScaleGrid, and 25.4% less expensive than DigitalOcean and Linode.&lt;/p&gt;

&lt;h4&gt;PostgreSQL Replica Set Deployment Pricing&lt;/h4&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table width="100%"&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;th width="30%"&gt;PostgreSQL Replica Set&lt;/th&gt;
&lt;th width="15%"&gt;8GB&lt;/th&gt;
&lt;th width="15%"&gt;16GB&lt;/th&gt;
&lt;th width="15%"&gt;32GB&lt;/th&gt;
&lt;th width="15%"&gt;64GB&lt;/th&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;OCI&lt;/td&gt;
&lt;td&gt;$360&lt;/td&gt;
&lt;td&gt;$700&lt;/td&gt;
&lt;td&gt;$1,400&lt;br&gt;&lt;span&gt;(30GB RAM)&lt;/span&gt;
&lt;/td&gt;
&lt;td&gt;$2,400&lt;br&gt;&lt;span&gt;(60GB RAM)&lt;/span&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;AWS&lt;/td&gt;
&lt;td&gt;$560&lt;/td&gt;
&lt;td&gt;$847&lt;/td&gt;
&lt;td&gt;$1,696&lt;/td&gt;
&lt;td&gt;$3,299&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Azure&lt;/td&gt;
&lt;td&gt;$560&lt;/td&gt;
&lt;td&gt;$847&lt;/td&gt;
&lt;td&gt;$1,696&lt;/td&gt;
&lt;td&gt;$3,299&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;GCP&lt;/td&gt;
&lt;td&gt;$700
&lt;span&gt;(7.5GB RAM)&lt;/span&gt;
&lt;/td&gt;
&lt;td&gt;$1,400
&lt;span&gt;(15GB RAM)&lt;/span&gt;
&lt;/td&gt;
&lt;td&gt;$2,400
&lt;span&gt;(30GB RAM)&lt;/span&gt;
&lt;/td&gt;
&lt;td&gt;$4,800
&lt;span&gt;(60GB RAM)&lt;/span&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;DigitalOcean&lt;/td&gt;
&lt;td&gt;$280&lt;/td&gt;
&lt;td&gt;$560&lt;/td&gt;
&lt;td&gt;$1,120&lt;/td&gt;
&lt;td&gt;$2,240&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Linode&lt;/td&gt;
&lt;td&gt;$280&lt;/td&gt;
&lt;td&gt;$560&lt;/td&gt;
&lt;td&gt;$1,120&lt;/td&gt;
&lt;td&gt;$2,240&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;Again, Oracle Cloud pricing vs. AWS and Azure averages 24.4% lower for PostgreSQL replica set pricing at ScaleGrid. PostgreSQL high availability users see even greater savings at Oracle Cloud vs. GCP with 47.6% lower pricing on average. However, Linode and DigitalOcean on average are 21.4% less expensive than Oracle Cloud for PostgreSQL replica set deployments at ScaleGrid.&lt;/p&gt;

&lt;h3&gt;
&lt;a id="3"&gt;&lt;/a&gt;ScaleGrid for Redis™ on OCI Pricing&lt;/h3&gt;

&lt;p&gt;Now, let’s take a look at &lt;a href="https://scalegrid.io/redis.html"&gt;ScaleGrid for Redis™ hosting&lt;/a&gt; to see how pricing for Oracle Cloud compares to AWS, Azure, GCP, DigitalOcean and Linode.&lt;/p&gt;

&lt;h4&gt;Redis™ Standalone Deployment Pricing&lt;/h4&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table width="100%"&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;th width="30%"&gt;Redis™ Standalone&lt;/th&gt;
&lt;th width="15%"&gt;8GB&lt;/th&gt;
&lt;th width="15%"&gt;16GB&lt;/th&gt;
&lt;th width="15%"&gt;32GB&lt;/th&gt;
&lt;th width="15%"&gt;64GB&lt;/th&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;OCI&lt;/td&gt;
&lt;td&gt;$75&lt;br&gt;&lt;span&gt;(7.2GB RAM)&lt;/span&gt;
&lt;/td&gt;
&lt;td&gt;$147&lt;br&gt;&lt;span&gt;(15GB RAM)&lt;/span&gt;
&lt;/td&gt;
&lt;td&gt;$293&lt;br&gt;&lt;span&gt;(28GB RAM)&lt;/span&gt;
&lt;/td&gt;
&lt;td&gt;$538&lt;br&gt;&lt;span&gt;(54GB RAM)&lt;/span&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;AWS&lt;/td&gt;
&lt;td&gt;$147&lt;br&gt;&lt;span&gt;(7GB RAM)&lt;/span&gt;
&lt;/td&gt;
&lt;td&gt;$293&lt;br&gt;&lt;span&gt;(14GB RAM)&lt;/span&gt;
&lt;/td&gt;
&lt;td&gt;$538&lt;br&gt;&lt;span&gt;(28GB RAM)&lt;/span&gt;
&lt;/td&gt;
&lt;td&gt;$1,035&lt;br&gt;&lt;span&gt;(57GB RAM)&lt;/span&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Azure&lt;/td&gt;
&lt;td&gt;$147&lt;br&gt;&lt;span&gt;(6.3GB RAM)&lt;/span&gt;
&lt;/td&gt;
&lt;td&gt;$293&lt;br&gt;&lt;span&gt;(15GB RAM)&lt;/span&gt;
&lt;/td&gt;
&lt;td&gt;$538&lt;br&gt;&lt;span&gt;(29GB RAM)&lt;/span&gt;
&lt;/td&gt;
&lt;td&gt;$1,035&lt;br&gt;&lt;span&gt;(58GB RAM)&lt;/span&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;GCP&lt;/td&gt;
&lt;td&gt;$147&lt;br&gt;&lt;span&gt;(7.5GB RAM)&lt;/span&gt;
&lt;/td&gt;
&lt;td&gt;$293&lt;br&gt;&lt;span&gt;(15GB RAM)&lt;/span&gt;
&lt;/td&gt;
&lt;td&gt;$538&lt;br&gt;&lt;span&gt;(30GB RAM)&lt;/span&gt;
&lt;/td&gt;
&lt;td&gt;$1,035&lt;br&gt;&lt;span&gt;(60GB RAM)&lt;/span&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;DigitalOcean&lt;/td&gt;
&lt;td&gt;$120&lt;br&gt;&lt;span&gt;(7GB RAM)&lt;/span&gt;
&lt;/td&gt;
&lt;td&gt;$240&lt;br&gt;&lt;span&gt;(14GB RAM)&lt;/span&gt;
&lt;/td&gt;
&lt;td&gt;$480&lt;br&gt;&lt;span&gt;(29GB RAM)&lt;/span&gt;
&lt;/td&gt;
&lt;td&gt;$700&lt;br&gt;&lt;span&gt;(57GB RAM)&lt;/span&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Linode&lt;/td&gt;
&lt;td&gt;$120&lt;br&gt;&lt;span&gt;(7GB RAM)&lt;/span&gt;
&lt;/td&gt;
&lt;td&gt;$240&lt;br&gt;&lt;span&gt;(14GB RAM)&lt;/span&gt;
&lt;/td&gt;
&lt;td&gt;$480&lt;br&gt;&lt;span&gt;(29GB RAM)&lt;/span&gt;
&lt;/td&gt;
&lt;td&gt;$700&lt;br&gt;&lt;span&gt;(57GB RAM)&lt;/span&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;When comparing Oracle Cloud vs. AWS, Azure and GCP pricing at ScaleGrid for Redis™, Oracle Cloud is almost half the cost, averaging 48.1% less expensive than these top cloud providers. We also see the largest savings yet against DigitalOcean and Linode, where Oracle Cloud averages 34.6% less expensive for standalone Redis™ deployments at ScaleGrid.&lt;/p&gt;

&lt;h4&gt;Redis™ Replica Set Deployment Pricing&lt;/h4&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table width="100%"&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;th width="30%"&gt;Redis™ Replica Set&lt;/th&gt;
&lt;th width="15%"&gt;8GB&lt;/th&gt;
&lt;th width="15%"&gt;16GB&lt;/th&gt;
&lt;th width="15%"&gt;32GB&lt;/th&gt;
&lt;th width="15%"&gt;64GB&lt;/th&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;OCI&lt;/td&gt;
&lt;td&gt;$480&lt;br&gt;&lt;span&gt;(7.2GB RAM)&lt;/span&gt;
&lt;/td&gt;
&lt;td&gt;$911&lt;br&gt;&lt;span&gt;(15GB RAM)&lt;/span&gt;
&lt;/td&gt;
&lt;td&gt;$1,796&lt;br&gt;&lt;span&gt;(28GB RAM)&lt;/span&gt;
&lt;/td&gt;
&lt;td&gt;$3,438&lt;br&gt;&lt;span&gt;(54GB RAM)&lt;/span&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;AWS&lt;/td&gt;
&lt;td&gt;$911&lt;br&gt;&lt;span&gt;(7GB RAM)&lt;/span&gt;
&lt;/td&gt;
&lt;td&gt;$1,796&lt;br&gt;&lt;span&gt;(14GB RAM)&lt;/span&gt;
&lt;/td&gt;
&lt;td&gt;$3,438&lt;br&gt;&lt;span&gt;(28GB RAM)&lt;/span&gt;
&lt;/td&gt;
&lt;td&gt;$6,760&lt;br&gt;&lt;span&gt;(57GB RAM)&lt;/span&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Azure&lt;/td&gt;
&lt;td&gt;$911&lt;br&gt;&lt;span&gt;(6.3GB RAM)&lt;/span&gt;
&lt;/td&gt;
&lt;td&gt;$1,796&lt;br&gt;&lt;span&gt;(15GB RAM)&lt;/span&gt;
&lt;/td&gt;
&lt;td&gt;$3,438&lt;br&gt;&lt;span&gt;(29GB RAM)&lt;/span&gt;
&lt;/td&gt;
&lt;td&gt;$6,760&lt;br&gt;&lt;span&gt;(58GB RAM)&lt;/span&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;GCP&lt;/td&gt;
&lt;td&gt;$911&lt;br&gt;&lt;span&gt;(7.5GB RAM)&lt;/span&gt;
&lt;/td&gt;
&lt;td&gt;$1,796&lt;br&gt;&lt;span&gt;(15GB RAM)&lt;/span&gt;
&lt;/td&gt;
&lt;td&gt;$3,438&lt;br&gt;&lt;span&gt;(30GB RAM)&lt;/span&gt;
&lt;/td&gt;
&lt;td&gt;$6,760&lt;br&gt;&lt;span&gt;(60GB RAM)&lt;/span&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;DigitalOcean&lt;/td&gt;
&lt;td&gt;$280&lt;br&gt;&lt;span&gt;(7GB RAM)&lt;/span&gt;
&lt;/td&gt;
&lt;td&gt;$560&lt;br&gt;&lt;span&gt;(14GB RAM)&lt;/span&gt;
&lt;/td&gt;
&lt;td&gt;$1,120&lt;br&gt;&lt;span&gt;(29GB RAM)&lt;/span&gt;
&lt;/td&gt;
&lt;td&gt;$2,240&lt;br&gt;&lt;span&gt;(57GB RAM)&lt;/span&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Linode&lt;/td&gt;
&lt;td&gt;$280&lt;br&gt;&lt;span&gt;(7GB RAM)&lt;/span&gt;
&lt;/td&gt;
&lt;td&gt;$560&lt;br&gt;&lt;span&gt;(14GB RAM)&lt;/span&gt;
&lt;/td&gt;
&lt;td&gt;$1,120&lt;br&gt;&lt;span&gt;(29GB RAM)&lt;/span&gt;
&lt;/td&gt;
&lt;td&gt;$2,240&lt;br&gt;&lt;span&gt;(57GB RAM)&lt;/span&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;Oracle Cloud pricing continues to undercut AWS, Azure and Google Cloud for Redis™ replica set deployments at ScaleGrid with an average of 48.4% lower pricing across these RAM sizes. But, when we compare OCI vs. Linode and DigitalOcean, it averages 62% higher pricing for Redis™ at ScaleGrid using a high availability replica set configuration.&lt;/p&gt;

&lt;h3&gt;
&lt;a id="4"&gt;&lt;/a&gt;ScaleGrid for MongoDB® Database on OCI Pricing&lt;/h3&gt;

&lt;p&gt;Finally, let’s look at &lt;a href="https://scalegrid.io/mongodb.html"&gt;ScaleGrid for MongoDB® Database&lt;/a&gt;. In this comparison, we are going to narrow it down to Oracle Cloud vs. DigitalOcean and Linode.&lt;/p&gt;

&lt;h4&gt;MongoDB® Database Standalone Deployment Pricing&lt;/h4&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table width="100%"&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;th width="30%"&gt;MongoDB® Database Standalone&lt;/th&gt;
&lt;th width="15%"&gt;8GB&lt;/th&gt;
&lt;th width="15%"&gt;16GB&lt;/th&gt;
&lt;th width="15%"&gt;32GB&lt;/th&gt;
&lt;th width="15%"&gt;64GB&lt;/th&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;OCI&lt;/td&gt;
&lt;td&gt;$175&lt;/td&gt;
&lt;td&gt;$344&lt;/td&gt;
&lt;td&gt;$700&lt;br&gt;&lt;span&gt;(30GB RAM)&lt;/span&gt;
&lt;/td&gt;
&lt;td&gt;$1,119&lt;br&gt;&lt;span&gt;(60GB RAM)&lt;/span&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;DigitalOcean&lt;/td&gt;
&lt;td&gt;$145&lt;/td&gt;
&lt;td&gt;$300&lt;/td&gt;
&lt;td&gt;$500&lt;/td&gt;
&lt;td&gt;$700&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Linode&lt;/td&gt;
&lt;td&gt;$145&lt;/td&gt;
&lt;td&gt;$300&lt;/td&gt;
&lt;td&gt;$500&lt;/td&gt;
&lt;td&gt;$700&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;While standalone MySQL, PostgreSQL and Redis™ deployments were found to be less expensive through Oracle Cloud vs. DigitalOcean and Linode, the trend reverses for ScaleGrid for MongoDB® database: standalone deployments actually average 33.8% more expensive through OCI.&lt;/p&gt;

&lt;h4&gt;MongoDB® Database Replica Set Deployment Pricing&lt;/h4&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table width="100%"&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;th width="30%"&gt;MongoDB® Database Replica Set&lt;/th&gt;
&lt;th width="15%"&gt;8GB&lt;/th&gt;
&lt;th width="15%"&gt;16GB&lt;/th&gt;
&lt;th width="15%"&gt;32GB&lt;/th&gt;
&lt;th width="15%"&gt;64GB&lt;/th&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;OCI&lt;/td&gt;
&lt;td&gt;$486&lt;/td&gt;
&lt;td&gt;$951&lt;/td&gt;
&lt;td&gt;$1,527&lt;br&gt;&lt;span&gt;(30GB RAM)&lt;/span&gt;
&lt;/td&gt;
&lt;td&gt;$2,638&lt;br&gt;&lt;span&gt;(60GB RAM)&lt;/span&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;DigitalOcean&lt;/td&gt;
&lt;td&gt;$350&lt;/td&gt;
&lt;td&gt;$750&lt;/td&gt;
&lt;td&gt;$1,500&lt;/td&gt;
&lt;td&gt;$2,400&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Linode&lt;/td&gt;
&lt;td&gt;$350&lt;/td&gt;
&lt;td&gt;$750&lt;/td&gt;
&lt;td&gt;$1,500&lt;/td&gt;
&lt;td&gt;$2,400&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;When comparing ScaleGrid for MongoDB® database replica set pricing on Oracle Cloud vs. DigitalOcean and Linode, OCI is again more expensive at an average of 19.3% higher pricing for high availability deployments.&lt;/p&gt;

&lt;h2&gt;Which Cloud Offers The Best Pricing?&lt;/h2&gt;

&lt;h3&gt;OCI vs. AWS, Azure &amp;amp; GCP&lt;/h3&gt;

&lt;p&gt;When looking at standalone deployments, pricing for &lt;b&gt;Oracle Cloud Infrastructure is 48% cheaper than AWS, Azure or Google Cloud&lt;/b&gt; on average for MySQL, PostgreSQL and Redis™ fully managed deployments at ScaleGrid. That is nearly half the cost!&lt;/p&gt;

&lt;p&gt;High availability replica set deployments also show significant cost savings, where &lt;b&gt;Oracle Cloud averages 32.4% lower pricing than AWS and Azure, and 40.1% less than Google Cloud&lt;/b&gt; for MySQL, PostgreSQL and Redis™ at ScaleGrid.&lt;/p&gt;

&lt;p&gt;Oracle Cloud is a great alternative to AWS, Azure or Google Cloud Platform, as it offers the same level of infrastructure, services, and partners to support businesses of any size.&lt;/p&gt;

&lt;h3&gt;OCI vs. DigitalOcean &amp;amp; Linode&lt;/h3&gt;

&lt;p&gt;When we compare Oracle Cloud vs. DigitalOcean and Linode pricing at ScaleGrid for standalone deployments, OCI averages 25.4% less expensive for MySQL and PostgreSQL and 34.6% less expensive for Redis™. However, Linode and DigitalOcean are 33.8% less expensive than OCI for MongoDB® database at ScaleGrid.&lt;/p&gt;

&lt;p&gt;Then, when we look at high availability replica set deployments, OCI averages roughly 20% more expensive than DigitalOcean and Linode for MySQL, PostgreSQL, and MongoDB® database plans at ScaleGrid, and 62% more expensive for Redis™.&lt;/p&gt;

&lt;p&gt;While Oracle Cloud may provide some cost savings for standalone deployments vs. DigitalOcean and Linode, these cloud providers are really targeting different markets. While OCI focuses on enterprise applications, Linode and DigitalOcean cater more towards the developer market.&lt;/p&gt;

&lt;p&gt;The best cloud for your database hosting is, first and foremost, always going to be a decision driven by your application needs. While pricing is a significant factor, it’s also important to consider performance. Be on the lookout for future posts where we’re going to share benchmark reports on OCI performance vs. these top cloud providers.&lt;/p&gt;

&lt;p&gt;We hope that this post has been helpful in understanding cloud database pricing options at ScaleGrid! If you have any questions on our fully managed DBaaS plans for MySQL, PostgreSQL, Redis™ or MongoDB® database, &lt;a href="https://scalegrid.io/contact.html"&gt;get in touch&lt;/a&gt; with us and we will happily help you out.&lt;/p&gt;

</description>
      <category>cloud</category>
      <category>database</category>
      <category>hosting</category>
      <category>devops</category>
    </item>
    <item>
      <title>How To Set Up MySQL on DigitalOcean</title>
      <dc:creator>ScaleGrid</dc:creator>
      <pubDate>Fri, 05 Feb 2021 22:59:58 +0000</pubDate>
      <link>https://forem.com/scalegrid/how-to-set-up-mysql-on-digitalocean-49fp</link>
      <guid>https://forem.com/scalegrid/how-to-set-up-mysql-on-digitalocean-49fp</guid>
      <description>&lt;p&gt;MySQL is the number one open source relational database management system in the world, and is used by millions of developers across all application types. DigitalOcean, a fast-growing cloud provider that’s increasing in popularity amongst the developer community, is a great host to consider for your MySQL deployments. In this article, we’re going to show you &lt;a href="https://scalegrid.io/blog/how-to-set-up-mysql-digitalocean/"&gt;how to set up MySQL on DigitalOcean&lt;/a&gt; at ScaleGrid for the best possible performance in the cloud.&lt;/p&gt;

&lt;h2&gt;Building Your DigitalOcean MySQL Deployment&lt;/h2&gt;

&lt;p&gt;For this tutorial, we are using our fully managed &lt;a href="https://scalegrid.io/mysql.html"&gt;MySQL hosting&lt;/a&gt; solution, as it offers the highest throughput and lowest latency on DigitalOcean. Learn more in the &lt;a href="https://scalegrid.io/blog/best-mysql-digitalocean-performance-scalegrid-vs-digitalocean-managed-databases/"&gt;Best MySQL DigitalOcean Performance&lt;/a&gt; article to compare ScaleGrid vs. DigitalOcean Managed Databases pricing, latency and throughput performance.&lt;/p&gt;

&lt;p&gt;The entire process should take less than 20 minutes, with just a couple of minutes at the beginning to customize your MySQL deployment configuration, and a couple of minutes at the end to connect to your application.&lt;/p&gt;

&lt;p&gt;Video Tutorial: &lt;a href="https://www.youtube.com/watch?v=ywBy-w0KiN0"&gt;https://www.youtube.com/watch?v=ywBy-w0KiN0&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;After signing up for a free &lt;a href="https://console.scalegrid.io/users/register"&gt;trial account&lt;/a&gt;, the first thing you’ll do is select MySQL as your database, DigitalOcean as your cloud, and Dedicated Hosting as your plan to initiate the cluster creation process:&lt;/p&gt;

&lt;h3&gt;Step 1. Enter MySQL Deployment Details&lt;/h3&gt;

&lt;p&gt;The &lt;a href="https://help.scalegrid.io/docs/mysql-new-cluster-dedicated-hosting"&gt;cluster creation&lt;/a&gt; wizard will open to the first step where you can customize the basic details of your deployment. Enter a name for your cluster using alphanumerics, and use a unique, descriptive name so you can easily recognize your cluster later on.&lt;/p&gt;

&lt;p&gt;Next, select the DigitalOcean data center you’d like to use for your master server from the cloud region dropdown - we recommend using the same region as your application’s DigitalOcean Droplets for the best performance. Then select the DigitalOcean Droplet size for your VM instances, and the version of MySQL you wish to use (all major versions are supported). InnoDB is the default storage engine for all MySQL deployments. Click Next.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://scalegrid.io/blog/how-to-set-up-mysql-digitalocean/"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--_AACusKW--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://scalegrid.io/blog/wp-content/uploads/2021/01/Create-MySQL-on-DigitalOcean-Step-1.png" alt="Create MySQL on DigitalOcean - Step 1" width="100%" height="auto"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;Step 2. Enable Master-Slave Replication&lt;/h3&gt;

&lt;p&gt;In step 2, you can enable DigitalOcean MySQL master-slave replication to add high availability to your deployment. Replication is important for keeping your application available and online for your users in the event your DigitalOcean data center region goes down. While DigitalOcean is a highly reliable cloud provider with 99.99% uptime, data centers inevitably go down, so replicating your data to additional data center locations allows you to minimize the risk of data loss and downtime for your users.&lt;/p&gt;

&lt;p&gt;If you want to continue with a standalone deployment, click next to move onto step 3 without enabling master-slave configuration.&lt;/p&gt;

&lt;p&gt;Note - master-slave configuration is only available for paid accounts; you cannot set up a replica set deployment while you are in trial mode.&lt;/p&gt;

&lt;p&gt;There are a couple of options here to customize your MySQL master-slave setup. First, you can choose between a 2 + 1 Quorum or 3-node deployment. Both offer high availability, but in the 2 + 1 Quorum setup you have only 2 data-bearing nodes, as the Quorum node serves as a decision-maker in a failover event. You can also contact &lt;a href="mailto:support@scalegrid.io"&gt;support&lt;/a&gt; to customize your master-slave configuration and add more nodes to your deployment.&lt;/p&gt;

&lt;p&gt;Next, you can choose between semisynchronous and asynchronous replication. &lt;a href="https://scalegrid.io/blog/data-integrity-and-performance-considerations-in-mysql-semisynchronous-replication/"&gt;Semisynchronous replication&lt;/a&gt; guarantees that at least one slave has received all the data the master has committed, ensuring a lossless failover to the right slave. With asynchronous replication, each slave syncs with the master asynchronously, so a failover may promote a slave that is not fully caught up with the current master.&lt;/p&gt;
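&lt;p&gt;At ScaleGrid this is configured for you, but for reference, on a self-managed MySQL server semisynchronous replication is enabled through MySQL’s semisync plugins, roughly as follows (plugin file names vary by platform):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;-- On the master:
INSTALL PLUGIN rpl_semi_sync_master SONAME 'semisync_master.so';
SET GLOBAL rpl_semi_sync_master_enabled = 1;

-- On each slave:
INSTALL PLUGIN rpl_semi_sync_slave SONAME 'semisync_slave.so';
SET GLOBAL rpl_semi_sync_slave_enabled = 1;
&lt;/code&gt;&lt;/pre&gt;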

&lt;p&gt;Lastly, customize the DigitalOcean data center locations you wish to use for your slave nodes. Once you’re done, click next.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://scalegrid.io/blog/how-to-set-up-mysql-digitalocean/"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--6AbTn28L--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://scalegrid.io/blog/wp-content/uploads/2021/01/Create-MySQL-on-DigitalOcean-Step-2.png" alt="Create MySQL on DigitalOcean - Step 2" width="100%" height="auto"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;Step 3. Enable SSL&lt;/h3&gt;

&lt;p&gt;In step 3, you can enable SSL for your deployment. By default, the MySQL server installs and enables SSL, but it does not require clients to connect using SSL - clients can choose to connect with or without SSL, as the server accepts both types of connections. Learn more about configuring and managing SSL in this &lt;a href="https://scalegrid.io/blog/configuring-and-managing-ssl-on-your-mysql-server/"&gt;MySQL tutorial&lt;/a&gt;.&lt;/p&gt;
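&lt;p&gt;If you want to require SSL from the client side, you can enforce it on the connection itself; for example, with the standard mysql client on MySQL 5.7+ (the host and user below are placeholders):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;mysql --ssl-mode=REQUIRED -h your-cluster-host -u your-db-user -p

-- Once connected, confirm the session is encrypted:
SHOW SESSION STATUS LIKE 'Ssl_cipher';
&lt;/code&gt;&lt;/pre&gt;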

&lt;p&gt;&lt;a href="https://scalegrid.io/blog/how-to-set-up-mysql-digitalocean/"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--T4VVR57v--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://scalegrid.io/blog/wp-content/uploads/2021/01/Create-MySQL-on-DigitalOcean-Step-3.png" alt="Create MySQL on DigitalOcean - Step 3" width="100%" height="auto"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;Step 4. Configure Firewalls&lt;/h3&gt;

&lt;p&gt;In the last step of the cluster creation process, enter at least one IP CIDR that will be allowed access to your MySQL deployment to set up your firewall rule. You can find your current IP CIDR at the top of the page. These can be customized later at both the cluster-level and account-level. Once you’ve entered it, click Add. Enter all the IP CIDRs you’d like, and then click next.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://scalegrid.io/blog/how-to-set-up-mysql-digitalocean/"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--gs1gO4mU--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://scalegrid.io/blog/wp-content/uploads/2021/01/Create-MySQL-on-DigitalOcean-Step-4.png" alt="Create MySQL on DigitalOcean - Step 4" width="100%" height="auto"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;Step 5. Review &amp;amp; Create&lt;/h3&gt;

&lt;p&gt;The last page provides a short summary of your new MySQL DigitalOcean cluster. Review all of the details and click the Previous button to go back to an earlier step and modify your configurations. Once you’re ready, click Create to spin up your cluster.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://scalegrid.io/blog/how-to-set-up-mysql-digitalocean/"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--TrCIP0jR--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://scalegrid.io/blog/wp-content/uploads/2021/01/Create-MySQL-on-DigitalOcean-Step-5.png" alt="Create MySQL on DigitalOcean - Step 5" width="100%" height="auto"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;And that’s it! Your deployment is now being provisioned! This takes about 15 minutes, so while you wait, you can familiarize yourself with the management tools for MySQL in the console - &lt;a href="https://help.scalegrid.io/docs/mysql-backups"&gt;backups&lt;/a&gt;, &lt;a href="https://help.scalegrid.io/docs/mysql-backups-restores"&gt;restores&lt;/a&gt;, &lt;a href="https://help.scalegrid.io/docs/mysql-dynamic-scaling"&gt;scaling&lt;/a&gt;, &lt;a href="https://help.scalegrid.io/docs/mysql-alerts-rules"&gt;alerts&lt;/a&gt;, &lt;a href="https://help.scalegrid.io/docs/mysql-monitoring-slow-query-analyzer"&gt;query analysis&lt;/a&gt;, etc.&lt;/p&gt;

&lt;h2&gt;Connect &amp;amp; Migrate&lt;/h2&gt;

&lt;p&gt;Once the cluster status changes from “Creating” to “Running”, you can &lt;a href="https://help.scalegrid.io/docs/mysql-connecting-to"&gt;connect your deployment&lt;/a&gt;. There are a couple different ways to do this - through your &lt;a href="https://help.scalegrid.io/docs/mysql-connecting-to-your-application"&gt;connection string&lt;/a&gt; to your app, through the &lt;a href="https://help.scalegrid.io/docs/mysql-command-line-syntax"&gt;command line&lt;/a&gt;, through &lt;a href="https://help.scalegrid.io/docs/mysql-connecting-wordpress"&gt;WordPress&lt;/a&gt;, through &lt;a href="https://help.scalegrid.io/docs/mysql-connecting-to-mysql-workbench-gui-tool"&gt;MySQL Workbench&lt;/a&gt; or other popular &lt;a href="https://help.scalegrid.io/docs/mysql-connecting-to-gui"&gt;GUI tools&lt;/a&gt;. All of your connection details are available in the lower half of the Overview tab of your cluster details page.&lt;/p&gt;
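&lt;p&gt;For example, connecting through the command line looks like the following (the host, user, and database names below are placeholders; substitute the values from your Overview tab):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;mysql -h your-cluster-host -P 3306 -u your-db-user -p your-database
&lt;/code&gt;&lt;/pre&gt;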

&lt;p&gt;Visit the top tabs to &lt;a href="https://help.scalegrid.io/docs/mysql-create-database"&gt;create a MySQL database&lt;/a&gt; and &lt;a href="https://help.scalegrid.io/docs/mysql-create-database-user"&gt;create a MySQL database user&lt;/a&gt;. These can be accessed and managed anytime from the ScaleGrid console.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://scalegrid.io/blog/how-to-set-up-mysql-digitalocean/"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--ctVMVGBD--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://scalegrid.io/blog/wp-content/uploads/2021/01/MySQL-DigitalOcean-Console-ScaleGrid.jpg" alt="MySQL DigitalOcean Console at ScaleGrid DBaaS" width="100%" height="auto"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Migrating is also an automated process that requires just a few details from your existing deployment. Click the Import Data button on your Overview tab, and enter your Server Name, Port, Database Name(s), Username and Password. Check out our &lt;a href="https://help.scalegrid.io/docs/mysql-migrations"&gt;MySQL Migrations&lt;/a&gt; doc for more detail, and additional migration options.&lt;/p&gt;

&lt;h2&gt;Running on DigitalOcean&lt;/h2&gt;

&lt;p&gt;Once you’re up and running, your deployment can virtually operate on auto-pilot. Since ScaleGrid is a fully managed service, we handle all of your day-to-day MySQL administration and maintenance so you can focus on building your app. If anything should arise that requires your attention, such as running low on disk space, we’ll send you a notification so you can log in and see if you’re ready to scale up your cluster.&lt;/p&gt;

&lt;p&gt;You can also connect with our MySQL experts anytime through our 24/7 support to help you set up and troubleshoot your deployment.&lt;/p&gt;

</description>
      <category>mysql</category>
      <category>database</category>
      <category>digitalocean</category>
      <category>cloud</category>
    </item>
    <item>
      <title>Auditing PostgreSQL Using pgAudit</title>
      <dc:creator>ScaleGrid</dc:creator>
      <pubDate>Mon, 18 Jan 2021 22:34:47 +0000</pubDate>
      <link>https://forem.com/scalegrid/auditing-postgresql-using-pgaudit-1ggc</link>
      <guid>https://forem.com/scalegrid/auditing-postgresql-using-pgaudit-1ggc</guid>
      <description>&lt;p&gt;Auditing in information technology (IT) is a process of examining an organization’s IT infrastructure to ensure compliance with the requirements imposed by recognized standards or established policies. Data protection rules, such as the new GDPR regulations, are becoming increasingly stringent to protect user data, so it’s important that your database audits are set up properly to ensure both your application and user data is secure from vulnerabilities. In this blog post, we will discuss &lt;a href="https://scalegrid.io/blog/auditing-postgresql-using-pgaudit/"&gt;pgAudit - a tool that generates the audit logs needed to facilitate the auditing of PostgreSQL&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;What is pgAudit?&lt;/h2&gt;

&lt;p&gt;The PostgreSQL Audit Extension, &lt;a href="https://www.pgaudit.org/"&gt;pgAudit&lt;/a&gt;, is an open source extension that logs the events in a PostgreSQL database in a detailed audit log. It uses the native PostgreSQL logging facility, so the audit logs will be part of the PostgreSQL logs. The extension is based on the 2ndQuadrant pgAudit project authored by Simon Riggs, Abhijit Menon-Sen, and Ian Barwick, and includes enhancements by David Steele from Crunchy Data.&lt;/p&gt;

&lt;h2&gt;Why pgAudit over log_statement=all?&lt;/h2&gt;

&lt;p&gt;We can log all statements in PostgreSQL just by setting &lt;code&gt;log_statement=all&lt;/code&gt;. So why use pgAudit at all? Basic statement logging (using &lt;code&gt;log_statement&lt;/code&gt;) will only list the operations performed against the database. It does not provide the ability to filter operations, and the logs won’t be in the proper format required for auditing. pgAudit additionally provides granularity for logging specific classes of statements, like &lt;code&gt;READ&lt;/code&gt; (&lt;code&gt;SELECT&lt;/code&gt; and &lt;code&gt;COPY&lt;/code&gt;), &lt;code&gt;WRITE&lt;/code&gt; (&lt;code&gt;INSERT&lt;/code&gt;, &lt;code&gt;UPDATE&lt;/code&gt;, &lt;code&gt;DELETE&lt;/code&gt;, etc.), and &lt;code&gt;DDL&lt;/code&gt;. Furthermore, it provides object-level auditing, where only operations on specific relations are logged.&lt;/p&gt;

&lt;p&gt;Another advantage of pgAudit over basic statement logging is that it provides the details of the operation performed instead of just logging the operation requested. For example, consider executing the anonymous code block using a DO statement.&lt;/p&gt;


&lt;pre&gt;&lt;code&gt; DO $$ BEGIN EXECUTE 'CREATE TABLE import' || 'ant_table (id INT)'; END $$; &lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;The basic statement logging will result in:&lt;/p&gt;


&lt;pre&gt;&lt;code&gt; 2020-12-20 23:40:11 UTC:157.230.232.139(53064):sgpostgres@test:[9091]: LOG: statement: DO $$ BEGIN EXECUTE 'CREATE TABLE import' || 'ant_table (id INT)'; END $$; &lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;pgAudit will log the same operation as:&lt;/p&gt;


&lt;pre&gt;&lt;code&gt;2020-12-20 23:40:11 UTC:157.230.232.139(53064):sgpostgres@test:[9091]: LOG: AUDIT: SESSION,4,1,FUNCTION,DO,,,"DO $$ BEGIN EXECUTE 'CREATE TABLE import' || 'ant_table (id INT)'; END $$;",
2020-12-20 23:40:11 UTC:157.230.232.139(53064):sgpostgres@test:[9091]: LOG: AUDIT: SESSION,4,2,DDL,CREATE TABLE,TABLE,public.important_table,CREATE TABLE important_table (id INT),&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;The above clearly shows how pgAudit logs not just the requested operation but also the statements it actually executed, in a structured output that makes searching easy.&lt;/p&gt;

&lt;h2&gt;How to install pgAudit?&lt;/h2&gt;

&lt;p&gt;pgAudit is an extension that’s available for download from the PostgreSQL repository, or can be compiled and built from source. As a first step, the package needs to be downloaded and installed on the machine running PostgreSQL (this extension package is preinstalled on all ScaleGrid PostgreSQL deployments).&lt;/p&gt;

&lt;p&gt;Once installed, it needs to be loaded into PostgreSQL. This is achieved by adding &lt;code&gt;pgaudit&lt;/code&gt; to the &lt;code&gt;shared_preload_libraries&lt;/code&gt; config parameter. A restart of PostgreSQL is required for this configuration change to be effective. The next step is to enable the extension on the database by running &lt;code&gt;CREATE EXTENSION pgaudit&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;Now that the extension is ready, we need to set the configuration parameters for the extension to start logging. This can be as simple as setting the parameter &lt;code&gt;pgaudit.log&lt;/code&gt; to &lt;code&gt;all&lt;/code&gt;, after which pgAudit will start logging in &lt;code&gt;session&lt;/code&gt; mode.&lt;/p&gt;
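
&lt;p&gt;Putting these steps together, a minimal setup might look like the following; the exact file locations, and whether you use &lt;code&gt;ALTER SYSTEM&lt;/code&gt; or edit &lt;code&gt;postgresql.conf&lt;/code&gt; directly, will depend on your installation:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# postgresql.conf (requires a server restart)
shared_preload_libraries = 'pgaudit'

-- then, in psql on the target database
CREATE EXTENSION pgaudit;
ALTER SYSTEM SET pgaudit.log = 'all';
SELECT pg_reload_conf();
&lt;/code&gt;&lt;/pre&gt;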

&lt;p&gt;Now that we know how to install and enable pgAudit, let’s discuss the two audit logging modes it offers, session and object.&lt;/p&gt;

&lt;h2&gt;Session Audit Logging&lt;/h2&gt;

&lt;p&gt;In &lt;a href="https://github.com/pgaudit/pgaudit#session-audit-logging"&gt;session mode&lt;/a&gt;, pgAudit will log all the operations performed by a user. Setting the &lt;code&gt;pgaudit.log&lt;/code&gt; parameter to any of the defined values, other than &lt;code&gt;NONE&lt;/code&gt;, will enable session audit logging. The &lt;code&gt;pgaudit.log&lt;/code&gt; parameter specifies the classes of statements that will be logged in the session mode. The possible values are: &lt;code&gt;READ&lt;/code&gt;, &lt;code&gt;WRITE&lt;/code&gt;, &lt;code&gt;FUNCTION&lt;/code&gt;, &lt;code&gt;ROLE&lt;/code&gt;, &lt;code&gt;DDL&lt;/code&gt;, &lt;code&gt;MISC&lt;/code&gt;, &lt;code&gt;MISC_SET&lt;/code&gt;, &lt;code&gt;ALL&lt;/code&gt; and &lt;code&gt;NONE&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;Setting the &lt;code&gt;pgaudit.log&lt;/code&gt; parameter to &lt;code&gt;ALL&lt;/code&gt; will log all statements. The parameter accepts multiple classes as a comma-separated list, and specific classes can be excluded with a &lt;code&gt;-&lt;/code&gt; sign. For example, if you want to log all statements except the &lt;code&gt;MISC&lt;/code&gt; and &lt;code&gt;MISC_SET&lt;/code&gt; classes, the value of &lt;code&gt;pgaudit.log&lt;/code&gt; would be &lt;code&gt;ALL, -MISC, -MISC_SET&lt;/code&gt;. You can also have pgAudit create a separate log entry for each relation referenced in a statement by setting &lt;code&gt;pgaudit.log_relation&lt;/code&gt; to &lt;code&gt;on&lt;/code&gt;.&lt;/p&gt;
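
&lt;p&gt;For example, to log only the &lt;code&gt;WRITE&lt;/code&gt; and &lt;code&gt;DDL&lt;/code&gt; classes for the current session, with one log entry per referenced relation, the settings would look like this (shown as session-level &lt;code&gt;SET&lt;/code&gt; statements for illustration):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;SET pgaudit.log = 'write, ddl';
SET pgaudit.log_relation = on;
&lt;/code&gt;&lt;/pre&gt;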

&lt;p&gt;Consider an example of creating a table. The SQL statement would be:&lt;/p&gt;


&lt;pre&gt;&lt;code&gt; CREATE TABLE persons(ID SERIAL PRIMARY KEY, LNAME varchar(20), FNAME varchar(20)); &lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;The corresponding audit log entries are:&lt;/p&gt;


&lt;pre&gt;&lt;code&gt;2020-12-21 00:00:11 UTC:157.230.232.139(53178):sgpostgres@test:[11514]: LOG: AUDIT: SESSION,5,1,DDL,CREATE SEQUENCE,SEQUENCE,public.persons_id_seq,"CREATE TABLE persons(ID SERIAL PRIMARY KEY, LNAME varchar(20), FNAME varchar(20));",
2020-12-21 00:00:11 UTC:157.230.232.139(53178):sgpostgres@test:[11514]: LOG: AUDIT: SESSION,5,1,DDL,CREATE TABLE,TABLE,public.persons,"CREATE TABLE persons(ID SERIAL PRIMARY KEY, LNAME varchar(20), FNAME varchar(20));",
2020-12-21 00:00:11 UTC:157.230.232.139(53178):sgpostgres@test:[11514]: LOG: AUDIT: SESSION,5,1,DDL,CREATE INDEX,INDEX,public.persons_pkey,"CREATE TABLE persons(ID SERIAL PRIMARY KEY, LNAME varchar(20), FNAME varchar(20));",
2020-12-21 00:00:11 UTC:157.230.232.139(53178):sgpostgres@test:[11514]: LOG: AUDIT: SESSION,5,1,DDL,ALTER SEQUENCE,SEQUENCE,public.persons_id_seq,"CREATE TABLE persons(ID SERIAL PRIMARY KEY, LNAME varchar(20), FNAME varchar(20));",&lt;/code&gt;&lt;/pre&gt;

&lt;h2&gt;Object Audit Logging&lt;/h2&gt;

&lt;p&gt;In particular cases, it may be required to audit only a specific set of relations. In such cases, using session mode would result in an unnecessarily large number of audit log entries, most of which do not correspond to the relations of interest. &lt;a href="https://github.com/pgaudit/pgaudit#object-audit-logging"&gt;Object mode&lt;/a&gt; is especially suited for this purpose, as it can audit only a specific set of relations.&lt;/p&gt;

&lt;p&gt;Object audit logging is achieved using PostgreSQL roles. A role can be created and assigned the permissions to access only a specific set of relations. This role should be specified in the configuration parameter &lt;code&gt;pgaudit.role&lt;/code&gt;. Object mode supports only &lt;code&gt;SELECT&lt;/code&gt;, &lt;code&gt;INSERT&lt;/code&gt;, &lt;code&gt;UPDATE&lt;/code&gt; and &lt;code&gt;DELETE&lt;/code&gt; statements. The classes of statements that are logged depend on the permissions granted to the role. For example, if the role has permission to perform only &lt;code&gt;SELECT&lt;/code&gt;, then only &lt;code&gt;SELECT&lt;/code&gt; statements will be logged.&lt;/p&gt;

&lt;p&gt;Below is an example of object audit logging:&lt;/p&gt;

&lt;p&gt;Create a role and grant only &lt;code&gt;SELECT&lt;/code&gt; permissions. Set the &lt;code&gt;pgaudit.role&lt;/code&gt; to that role and run the &lt;code&gt;SELECT&lt;/code&gt; SQL statement:&lt;/p&gt;


&lt;pre&gt;&lt;code&gt;CREATE ROLE audit_person;
GRANT SELECT ON persons TO audit_person;
SET pgaudit.role = 'audit_person';
SELECT * FROM persons WHERE ID=404;
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;The above select statement will be logged as:&lt;/p&gt;


&lt;pre&gt;&lt;code&gt; 2020-12-21 00:27:09 UTC:157.230.232.139(54900):sgpostgres@test:[21835]: LOG: AUDIT: OBJECT,10,1,READ,SELECT,TABLE,public.persons,select * from persons where ID=404;, &lt;/code&gt;&lt;/pre&gt;

&lt;h2&gt;How to interpret the audit log entry?&lt;/h2&gt;

&lt;p&gt;So far, we have seen what audit log entries look like; now let’s take a look at the entry format. Each entry starts with the &lt;code&gt;log_line_prefix&lt;/code&gt; configured for PostgreSQL logging, and the rest of the output is in CSV format. Consider the following simple audit log entry:&lt;/p&gt;


&lt;pre&gt;&lt;code&gt; 2020-12-21 00:27:09 UTC:157.230.232.139(54900):sgpostgres@test:[21835]: LOG: AUDIT: OBJECT,10,1,READ,SELECT,TABLE,public.persons,select * from persons where ID=404;, &lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;In the above entry, the value&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;2020-12-21 00:27:09 UTC:157.230.232.139(54900):sgpostgres@test:[21835]: &lt;/code&gt;&lt;/pre&gt; comes from the log_line_prefix format &lt;code&gt;%t:%r:%u@%d:[%p]: &lt;/code&gt;. The audit entry content starts after the &lt;code&gt;LOG: AUDIT: &lt;/code&gt; marker and follows CSV format, with fields in the following order:


&lt;pre&gt;&lt;code&gt;AUDIT_TYPE,STATEMENT_ID,SUBSTATEMENT_ID,CLASS,COMMAND,OBJECT_TYPE,OBJECT_NAME,STATEMENT,PARAMETER&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Let’s take a look at each of the fields one by one:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table width="100%"&gt;
    &lt;thead&gt;
        &lt;tr&gt;
            &lt;th&gt;&lt;strong&gt;Field&lt;/strong&gt;&lt;/th&gt;
            &lt;th&gt;&lt;strong&gt;Description&lt;/strong&gt;&lt;/th&gt;
            &lt;th&gt;&lt;strong&gt;Value from example audit entry&lt;/strong&gt;&lt;/th&gt;
        &lt;/tr&gt;
    &lt;/thead&gt;
    &lt;tbody&gt;
        &lt;tr&gt;
            &lt;td&gt;AUDIT_TYPE&lt;/td&gt;
            &lt;td&gt;Indicates the audit mode: SESSION or OBJECT&lt;/td&gt;
            &lt;td&gt;OBJECT&lt;/td&gt;
        &lt;/tr&gt;
        &lt;tr&gt;
            &lt;td&gt;STATEMENT_ID&lt;/td&gt;
            &lt;td&gt;Unique statement identifier for each session&lt;/td&gt;
            &lt;td&gt;10&lt;/td&gt;
        &lt;/tr&gt;
        &lt;tr&gt;
            &lt;td&gt;SUBSTATEMENT_ID&lt;/td&gt;
            &lt;td&gt;An identifier for each substatement within the main statement&lt;/td&gt;
            &lt;td&gt;1&lt;/td&gt;
        &lt;/tr&gt;
        &lt;tr&gt;
            &lt;td&gt;CLASS&lt;/td&gt;
            &lt;td&gt;Indicates the class of the statement, such as READ or WRITE; these are the defined values for the pgaudit.log parameter.&lt;/td&gt;
            &lt;td&gt;READ&lt;/td&gt;
        &lt;/tr&gt;
        &lt;tr&gt;
            &lt;td&gt;COMMAND&lt;/td&gt;
            &lt;td&gt;The command used in the SQL statement&lt;/td&gt;
            &lt;td&gt;SELECT&lt;/td&gt;
        &lt;/tr&gt;
        &lt;tr&gt;
            &lt;td&gt;OBJECT_TYPE&lt;/td&gt;
            &lt;td&gt;Can be TABLE, INDEX, VIEW, etc.&lt;/td&gt;
            &lt;td&gt;TABLE&lt;/td&gt;
        &lt;/tr&gt;
        &lt;tr&gt;
            &lt;td&gt;OBJECT_NAME&lt;/td&gt;
            &lt;td&gt;The fully qualified object name&lt;/td&gt;
            &lt;td&gt;public.persons&lt;/td&gt;
        &lt;/tr&gt;
        &lt;tr&gt;
            &lt;td&gt;STATEMENT&lt;/td&gt;
            &lt;td&gt;The actual statement executed&lt;/td&gt;
            &lt;td&gt;select * from persons where ID=404;&lt;/td&gt;
        &lt;/tr&gt;
        &lt;tr&gt;
            &lt;td&gt;PARAMETER&lt;/td&gt;
            &lt;td&gt;When the pgaudit.log_parameter is set to true, the quoted CSV of parameters is listed if present, or "none" if there are no parameters. When the pgaudit.log_parameter is not set, the value will be “&amp;lt;not logged&amp;gt;”&lt;/td&gt;
            &lt;td&gt;&amp;lt;not logged&amp;gt;&lt;/td&gt;
        &lt;/tr&gt;
    &lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;
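
&lt;p&gt;Because each entry is a CSV payload after the &lt;code&gt;AUDIT: &lt;/code&gt; marker, it is straightforward to parse programmatically. The following Python sketch (not part of pgAudit; the field names simply follow the table above) splits the example entry into its fields:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;import csv

FIELDS = ["AUDIT_TYPE", "STATEMENT_ID", "SUBSTATEMENT_ID", "CLASS",
          "COMMAND", "OBJECT_TYPE", "OBJECT_NAME", "STATEMENT", "PARAMETER"]

line = ("2020-12-21 00:27:09 UTC:157.230.232.139(54900):sgpostgres@test:[21835]: "
        "LOG: AUDIT: OBJECT,10,1,READ,SELECT,TABLE,public.persons,"
        "select * from persons where ID=404;,")

# Everything after "AUDIT: " is CSV; use the csv module so quoted
# statements containing commas are handled correctly.
payload = line.split("AUDIT: ", 1)[1]
record = dict(zip(FIELDS, next(csv.reader([payload]))))

print(record["AUDIT_TYPE"])   # OBJECT
print(record["CLASS"])        # READ
print(record["OBJECT_NAME"])  # public.persons
&lt;/code&gt;&lt;/pre&gt;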

&lt;h2&gt;Inference&lt;/h2&gt;

&lt;p&gt;pgAudit, with all its capabilities, simplifies the process of auditing by generating the audit trail log. Though there are a few caveats, like the logging of renamed objects under the same name, it is still a robust tool that provides the required functionality. However, the audit information written to the logs may not be ideal for the auditing process on its own; auditing is even easier when those logs are converted to a database schema and the audit data is loaded into a database, so you can easily query the information. This is where the PostgreSQL Audit Log Analyzer (pgAudit Analyze) is helpful. For more information, refer to the GitHub pages of &lt;a href="https://github.com/pgaudit/pgaudit"&gt;pgAudit&lt;/a&gt; and &lt;a href="https://github.com/pgaudit/pgaudit_analyze"&gt;pgAudit Analyze&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>postgres</category>
      <category>database</category>
      <category>dba</category>
    </item>
    <item>
      <title>mysqldump Best Practices: Part 1 - MySQL Prerequisites</title>
      <dc:creator>ScaleGrid</dc:creator>
      <pubDate>Thu, 17 Dec 2020 17:47:01 +0000</pubDate>
      <link>https://forem.com/scalegrid/mysqldump-best-practices-part-1-mysql-prerequisites-1hjo</link>
      <guid>https://forem.com/scalegrid/mysqldump-best-practices-part-1-mysql-prerequisites-1hjo</guid>
      <description>&lt;p&gt;&lt;a href="https://dev.mysql.com/doc/refman/5.7/en/mysqldump.html"&gt;Mysqldump&lt;/a&gt; is a client utility that is used to perform logical &lt;a href="https://scalegrid.io/blog/mysqldump-best-practices-part-1/"&gt;backups of the MySQL database&lt;/a&gt;. This popular migration tool is useful for various use cases of MySQL such as:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Backup and restore of databases.&lt;/li&gt;
&lt;li&gt;Migrating data from one server to another.&lt;/li&gt;
&lt;li&gt;Migrating data across different managed MySQL service providers.&lt;/li&gt;
&lt;li&gt;Migrating data between different versions of MySQL.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Mysqldump works by reading the source database objects and generating a set of SQL statements that are stored in a dump file. By replaying these statements on the destination database server, the original data is reconstructed. Since this model involves reading the whole database and then essentially rebuilding it, both dump and restore are time-consuming operations for a large database. The process can become cumbersome if you encounter errors during either dump or restore, as you may need to fix the issues and re-run the operations. This is why it's important to plan well before you take up the dump and restore activity.&lt;/p&gt;
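
&lt;p&gt;At its simplest, a dump and restore cycle looks like this; the host, user, and database names below are placeholders:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# dump the source database into a file of SQL statements
mysqldump -h source-host -u dumpuser -p mydb &amp;gt; dump.sql

# replay those statements on the destination server
mysql -h destination-host -u dumpuser -p mydb &amp;lt; dump.sql
&lt;/code&gt;&lt;/pre&gt;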

&lt;p&gt;In this 2-part blog series, we discuss some of the common aspects you should handle upfront to ensure a successful dump and restore activity. In the first part, we focus on the prerequisites you need to take care of while importing the MySQL table data, and in the second part, we will talk about how to handle the import of stored program objects and views.&lt;/p&gt;

&lt;h2&gt;
&lt;a id="1"&gt;&lt;/a&gt;1. Space requirements&lt;/h2&gt;
&lt;p&gt;First off, it's important to ensure that your destination database volume has sufficient space to hold the imported data. Be especially cautious if binary logs are enabled on your destination MySQL database, as the binary logs generated while importing the data can take up almost as much space as the data itself. Binary logs are needed if you want to restore your data on one server and have it replicated to others. In such cases, it's a good idea to plan for a destination size greater than twice the size of the source database.&lt;/p&gt;

&lt;p&gt;It is also important to ensure sufficient space is available on the volume where you generate the mysqldump output file. Without these precautions, your dump or restore may fail due to insufficient space after running for a long time, wasting productive time and effort.&lt;/p&gt;

&lt;h2&gt;2. Sql_mode&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://dev.mysql.com/doc/refman/5.7/en/sql-mode.html"&gt;sql_mode&lt;/a&gt; settings for MySQL server determine the SQL statement syntax and data validation checks that the server performs for the operations. It is important to ensure the &lt;code&gt;sql_mode&lt;/code&gt; of source and destination MySQL servers are compatible with each other, or you may encounter failures while restoring the dump you have taken. Let's demonstrate this with an example.&lt;/p&gt;


&lt;p&gt;Say you have a table on your source which has a date column having entries as zero dates:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;
mysql&amp;gt; show create table sched;
+-------+---------------------------------------------------------------------------------------------------------------------+
| Table | Create Table                                                                                                        |
+-------+---------------------------------------------------------------------------------------------------------------------+
| sched | CREATE TABLE `sched` (
  `id` int(11) DEFAULT NULL,
  `ts` date DEFAULT NULL
) ENGINE=InnoDB DEFAULT CHARSET=latin1 |
+-------+---------------------------------------------------------------------------------------------------------------------+

mysql&amp;gt; select * from sched;
+------+------------+
| id   | ts         |
+------+------------+
|    1 | 2020-01-12 |
|    2 | 0000-00-00 |
+------+------------+
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Suppose strict &lt;code&gt;sql_mode&lt;/code&gt; (including &lt;code&gt;NO_ZERO_DATE&lt;/code&gt;) is disabled on the source but enabled on the destination. Restoring such rows will then fail with an error such as:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;
ERROR 1292 (22007) at line 40: Incorrect date value: '0000-00-00' for column 'ts' at row 2
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;You will typically see such issues if you are taking a compact dump by enabling the &lt;a href="https://dev.mysql.com/doc/refman/5.7/en/mysqldump.html#option_mysqldump_compact"&gt;compact option&lt;/a&gt; as part of your mysqldump.&lt;/p&gt;

&lt;p&gt;If compact is disabled (which it is by default), you will not face this issue, as mysqldump generates the following conditional statement as part of the dump:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;
/*!40101 SET @OLD_SQL_MODE=@@SQL_MODE,SQL_MODE='NO_AUTO_VALUE_ON_ZERO' */;
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;This means that during the restore, &lt;code&gt;sql_mode&lt;/code&gt; is set to &lt;code&gt;'NO_AUTO_VALUE_ON_ZERO'&lt;/code&gt; before the table data is restored, so the restore goes through fine.&lt;/p&gt;
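
&lt;p&gt;Before taking the dump, it is worth comparing the modes on both servers. A quick check, sketched here with illustrative statements, could be:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;-- run on both the source and the destination and compare the output
SELECT @@GLOBAL.sql_mode;

-- if required, relax the destination mode for the duration of the restore
SET GLOBAL sql_mode = '';
&lt;/code&gt;&lt;/pre&gt;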

&lt;h2&gt;3. Unique_checks and foreign_key_checks&lt;/h2&gt;

&lt;p&gt;By default (if you don't use the &lt;code&gt;--compact&lt;/code&gt; option), mysqldump also sets the following:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;
/*!40014 SET @OLD_UNIQUE_CHECKS=@@UNIQUE_CHECKS, UNIQUE_CHECKS=0 */;
/*!40014 SET @OLD_FOREIGN_KEY_CHECKS=@@FOREIGN_KEY_CHECKS, FOREIGN_KEY_CHECKS=0 */;
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;As explained &lt;a href="https://dev.mysql.com/doc/refman/8.0/en/optimizing-innodb-bulk-data-loading.html"&gt;here&lt;/a&gt;, you can speed up restore operation by temporarily turning off the uniqueness checks during the session. For big tables, this saves a lot of disk I/O because InnoDB can use its change buffer to write secondary index records in a batch.&lt;/p&gt;

&lt;p&gt;If you have &lt;code&gt;FOREIGN KEY&lt;/code&gt; constraints in your tables, you can similarly speed up the table restore operation by turning off foreign key checks for the duration of the restore session. For big tables, this too can save a lot of disk I/O.&lt;/p&gt;

&lt;p&gt;Disabling &lt;code&gt;FOREIGN_KEY_CHECKS&lt;/code&gt; also helps to avoid errors due to foreign key constraint checks during the restore operation. Whenever a table with a foreign key constraint is created, MySQL expects that the parent table referred to by the foreign key already exists. This is a problem, since the mysqldump utility dumps the tables in alphabetical order. Let’s take an example to demonstrate this.&lt;/p&gt;

&lt;p&gt;On the source database, we have two tables:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;
CREATE TABLE `solution_table` (
  `num1` int(11) NOT NULL,
  `num2` int(11) DEFAULT NULL,
  PRIMARY KEY (`num1`));

CREATE TABLE `ref_table` (
  `key` int(11) DEFAULT NULL,
  `ref_num` int(11) DEFAULT NULL,
  KEY `ref_num` (`ref_num`),
  CONSTRAINT `ref_num_ibfk_1` FOREIGN KEY (`ref_num`) REFERENCES `solution_table` (`num1`)
)
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;The table &lt;code&gt;ref_table&lt;/code&gt; has a foreign key constraint that references the &lt;code&gt;solution_table&lt;/code&gt;. Based on the alphabetical order, mysqldump first dumps the contents of &lt;code&gt;ref_table&lt;/code&gt;. When this is replayed at the time of restore, it will fail with the error:
&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;
ERROR 1215 (HY000) at line 50: Cannot add foreign key constraint - 
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;This happens while executing the &lt;code&gt;CREATE TABLE&lt;/code&gt; statement for &lt;code&gt;ref_table&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;In summary, be aware of the issues you may encounter if you specify the &lt;code&gt;--compact&lt;/code&gt; option when running mysqldump.&lt;/p&gt;
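
&lt;p&gt;If you do need &lt;code&gt;--compact&lt;/code&gt;, one workaround is to apply the same session settings yourself when replaying the dump; the host and database names here are placeholders:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;( echo "SET unique_checks=0; SET foreign_key_checks=0;"; cat dump.sql ) \
  | mysql -h destination-host -u dumpuser -p mydb
&lt;/code&gt;&lt;/pre&gt;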

&lt;h2&gt;4. Privileges required for running mysqldump&lt;/h2&gt;

&lt;p&gt;The minimum privilege required by mysqldump for dumping a database is &lt;code&gt;SELECT&lt;/code&gt; on that database.&lt;/p&gt;

&lt;p&gt;However, if your database has views, you will need the &lt;code&gt;SHOW VIEW&lt;/code&gt; privilege as well, as mysqldump always dumps views along with the tables of the database. If you don't have the &lt;code&gt;SHOW VIEW&lt;/code&gt; privilege, mysqldump will fail with:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt; 
mysqldump: Couldn't execute 'show create table `iview`': SHOW VIEW command denied to user 'dumpuser'@'172.31.18.79' for table 'iview' (1142)
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Another point of interest: if your dump user has &lt;code&gt;SELECT&lt;/code&gt; permissions on only a particular table of the database, mysqldump will dump data only for that table and automatically ignore any other tables or views.&lt;/p&gt;

&lt;p&gt;So please ensure that the user executing mysqldump has all the appropriate privileges upfront in order to avoid any surprises or failures at a later time.&lt;/p&gt;
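
&lt;p&gt;As a sketch, a dump user for a database containing views could be created as follows; the user and database names are illustrative:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;CREATE USER 'dumpuser'@'%' IDENTIFIED BY 'secret';
GRANT SELECT, SHOW VIEW ON mydb.* TO 'dumpuser'@'%';
&lt;/code&gt;&lt;/pre&gt;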

&lt;h2&gt;5. Max_allowed_packet&lt;/h2&gt;

&lt;p&gt;The largest communication packet handled by MySQL is determined by the &lt;code&gt;max_allowed_packet&lt;/code&gt; setting. In this context, a communication packet is a single SQL statement sent to the MySQL server during the restore, or a single row sent to the client during the dump.&lt;/p&gt;

&lt;p&gt;The default value of &lt;code&gt;max_allowed_packet&lt;/code&gt; for mysqldump is 24MB. If mysqldump receives a packet larger than this, you may run into the error:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;
mysqldump: Error 2020: Got packet bigger than 'max_allowed_packet' bytes when dumping table `huge1` at row: 2.
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;So ensure that mysqldump uses a &lt;code&gt;max_allowed_packet&lt;/code&gt; value that is the same as or bigger than the one configured on the source MySQL instance.&lt;/p&gt;

&lt;p&gt;The option can be specified with the flag &lt;a href="https://dev.mysql.com/doc/refman/5.7/en/mysqldump.html#option_mysqldump_max-allowed-packet"&gt;&lt;code&gt;--max-allowed-packet=&lt;i&gt;value&lt;/i&gt;&lt;/code&gt;&lt;/a&gt; when invoking the mysqldump.&lt;/p&gt;

&lt;p&gt;When restoring the dump, ensure that &lt;code&gt;max_allowed_packet&lt;/code&gt; size of your destination server is big enough to receive the packets from the dump file.&lt;/p&gt;

&lt;p&gt;Otherwise, during restore of the dump, you will see an error message:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;
ERROR 2006 (HY000) at line 70: MySQL server has gone away
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;This error can be a little misleading, as you may think that the MySQL server has shut down or crashed. It just means that the server has received a packet larger than its configured &lt;code&gt;max_allowed_packet&lt;/code&gt;. Again, the best practice is to ensure that the &lt;code&gt;max_allowed_packet&lt;/code&gt; value for your destination server is the same as the value on the source server. This is another setting that can be checked and set appropriately upfront, rather than facing the errors at a later time.&lt;/p&gt;
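
&lt;p&gt;In practice, this means checking the value on the source and passing at least that value to both the dump and the restore; the 256M size below is illustrative:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;mysqldump --max-allowed-packet=256M -h source-host -u dumpuser -p mydb &amp;gt; dump.sql
mysql --max-allowed-packet=256M -h destination-host -u dumpuser -p mydb &amp;lt; dump.sql
&lt;/code&gt;&lt;/pre&gt;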

&lt;p&gt;In this first part of the mysqldump series, we discussed prerequisites for a successful dump and restore operation for large MySQL databases in order to help you avoid multiple attempts and unproductive time spent.&lt;/p&gt;

&lt;p&gt;In the next part, we will discuss best practices to import the stored programs and views from your MySQL database.&lt;/p&gt;

</description>
      <category>mysql</category>
      <category>database</category>
      <category>developer</category>
      <category>tutorial</category>
    </item>
    <item>
      <title>Redis on Azure Performance Benchmark – ScaleGrid for Redis™ vs. Azure Cache</title>
      <dc:creator>ScaleGrid</dc:creator>
      <pubDate>Fri, 27 Nov 2020 18:57:18 +0000</pubDate>
      <link>https://forem.com/scalegrid/redis-on-azure-performance-benchmark-scalegrid-for-redis-vs-azure-cache-45lo</link>
      <guid>https://forem.com/scalegrid/redis-on-azure-performance-benchmark-scalegrid-for-redis-vs-azure-cache-45lo</guid>
      <description>&lt;p&gt;&lt;a href="https://redis.io/"&gt;Redis&lt;/a&gt; is an advanced key-value store. In fact, it is the &lt;a href="https://db-engines.com/en/ranking"&gt;number one key value store&lt;/a&gt; and eighth most popular database in the world. It has high throughput and runs from memory, but also has the ability to persist data on disk. Redis is a great caching solution for highly demanding applications, and there are many solutions available to help you deploy and manage Redis in the cloud. In this post, we are going to compare &lt;a href="https://scalegrid.io/blog/redis-azure-performance-benchmark/"&gt;ScaleGrid for Redis™&lt;/a&gt; vs. Azure Cache for Redis performance and management features to help you pick the best managed solution for your Redis deployment.&lt;/p&gt;

&lt;p&gt;ScaleGrid is a DBaaS provider that provides fully managed hosting not only for &lt;a href="https://scalegrid.io/redis.html"&gt;Redis™&lt;/a&gt;, but also for &lt;a href="https://scalegrid.io/mongodb.html"&gt;MongoDB® database&lt;/a&gt;, &lt;a href="https://scalegrid.io/mysql.html"&gt;MySQL&lt;/a&gt;, and &lt;a href="https://scalegrid.io/postgresql.html"&gt;PostgreSQL&lt;/a&gt;. The &lt;a href="https://help.scalegrid.io/docs/getting-started-bring-your-own-cloud-plans"&gt;Bring Your Own Cloud&lt;/a&gt; (BYOC) plan hosts the database server in your own AWS, Azure or GCP account.&lt;/p&gt;

&lt;p&gt;Azure provides a hosted service for Redis called &lt;a href="https://azure.microsoft.com/en-us/services/cache/"&gt;Azure Cache for Redis&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;ScaleGrid for Redis™ vs. Azure Cache for Redis Performance Benchmark&lt;/h2&gt;

&lt;p&gt;In this benchmark, we measure the performance in throughput and latency. Throughput is measured in operations per second (ops/sec), and latency in microseconds. Check out our Benchmark Configurations section later on in this post for information on how this performance benchmark was configured.&lt;/p&gt;

&lt;h2&gt;
&lt;a id="1"&gt;&lt;/a&gt;Throughput Performance&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://scalegrid.io/blog/redis-azure-performance-benchmark/"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--EMx34gGC--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://scalegrid.io/blog/wp-content/uploads/2020/11/ScaleGrid-Redis-Azure-Cache-Throughput.png" alt="ScaleGrid Redis Azure Cache Throughput Graph" width="100%" height="auto"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table width="100%"&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;th&gt;&lt;b&gt;Connections&lt;/b&gt;&lt;/th&gt;
&lt;th&gt;&lt;b&gt;ScaleGrid for Redis™
&lt;/b&gt;&lt;/th&gt;
&lt;th&gt;&lt;b&gt;Azure Cache
C3 Std. 6GB&lt;/b&gt;&lt;/th&gt;
&lt;th&gt;&lt;b&gt;Azure Cache
P1 6GB&lt;/b&gt;&lt;/th&gt;
&lt;th&gt;&lt;b&gt;Azure Cache
P2 13GB&lt;/b&gt;&lt;/th&gt;
&lt;th&gt;&lt;b&gt;ScaleGrid Improvement&lt;/b&gt;&lt;/th&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;100&lt;/td&gt;
&lt;td&gt;134,667&lt;/td&gt;
&lt;td&gt;16,461&lt;/td&gt;
&lt;td&gt;19,881&lt;/td&gt;
&lt;td&gt;38,459&lt;/td&gt;
&lt;td&gt;439%&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;200&lt;/td&gt;
&lt;td&gt;147,551&lt;/td&gt;
&lt;td&gt;16,246&lt;/td&gt;
&lt;td&gt;25,361&lt;/td&gt;
&lt;td&gt;35,459&lt;/td&gt;
&lt;td&gt;474%&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;300&lt;/td&gt;
&lt;td&gt;152,341&lt;/td&gt;
&lt;td&gt;15,872&lt;/td&gt;
&lt;td&gt;25,346&lt;/td&gt;
&lt;td&gt;35,045&lt;/td&gt;
&lt;td&gt;499%&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;400&lt;/td&gt;
&lt;td&gt;152,624&lt;/td&gt;
&lt;td&gt;15,235&lt;/td&gt;
&lt;td&gt;19,043&lt;/td&gt;
&lt;td&gt;37,301&lt;/td&gt;
&lt;td&gt;539%&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table width="100%"&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;span&gt;As we can see in the above graph, &lt;b&gt;ScaleGrid for Redis™ achieves approximately 5x higher throughput&lt;/b&gt; compared to Azure Cache for Redis when benchmarked with 100-400 connections. For example, while the Azure Cache for Redis P2 Premium 13GB plan manages around 36,000 ops/sec across all connection scenarios, ScaleGrid for Redis™ sustains over 130,000 ops/sec in all of them.&lt;/span&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;h2&gt;
&lt;a id="2"&gt;&lt;/a&gt;Latency Performance&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://scalegrid.io/blog/redis-azure-performance-benchmark/"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--kPOfugoC--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://scalegrid.io/blog/wp-content/uploads/2020/11/ScaleGrid-Redis-Azure-Cache-Latency.png" alt="ScaleGrid Redis Azure Cache Latency Performance Graph" width="100%" height="auto"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table width="100%"&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;th&gt;&lt;b&gt;Connections&lt;/b&gt;&lt;/th&gt;
&lt;th&gt;&lt;b&gt;ScaleGrid for Redis™
&lt;/b&gt;&lt;/th&gt;
&lt;th&gt;&lt;b&gt;Azure Cache
C3 Std. 6GB&lt;/b&gt;&lt;/th&gt;
&lt;th&gt;&lt;b&gt;Azure Cache
P1 6GB&lt;/b&gt;&lt;/th&gt;
&lt;th&gt;&lt;b&gt;Azure Cache
P2 13GB&lt;/b&gt;&lt;/th&gt;
&lt;th&gt;&lt;b&gt;ScaleGrid Improvement&lt;/b&gt;&lt;/th&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;100&lt;/td&gt;
&lt;td&gt;744&lt;/td&gt;
&lt;td&gt;6,809&lt;/td&gt;
&lt;td&gt;5,896&lt;/td&gt;
&lt;td&gt;2,497&lt;/td&gt;
&lt;td&gt;-85%&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;200&lt;/td&gt;
&lt;td&gt;1,353&lt;/td&gt;
&lt;td&gt;10,950&lt;/td&gt;
&lt;td&gt;8,447&lt;/td&gt;
&lt;td&gt;5,565&lt;/td&gt;
&lt;td&gt;-84%&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;300&lt;/td&gt;
&lt;td&gt;2,044&lt;/td&gt;
&lt;td&gt;17,807&lt;/td&gt;
&lt;td&gt;13,045&lt;/td&gt;
&lt;td&gt;8,539&lt;/td&gt;
&lt;td&gt;-84%&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;400&lt;/td&gt;
&lt;td&gt;2,609&lt;/td&gt;
&lt;td&gt;25,126&lt;/td&gt;
&lt;td&gt;16,999&lt;/td&gt;
&lt;td&gt;10,716&lt;/td&gt;
&lt;td&gt;-85%&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table width="100%"&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;span&gt;While Azure Cache for Redis latency increases rapidly as the number of connections grows, ScaleGrid for Redis™ steadily achieves low latency across all connection counts. On average, &lt;b&gt;ScaleGrid for Redis™ has 85% lower latency&lt;/b&gt; than Azure Cache for Redis. This is especially noticeable when comparing Azure Cache for Redis (C3 Standard 6GB) to ScaleGrid for Redis™, where the latency reduction approaches 90%.&lt;/span&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;h3&gt;Benchmark Summary&lt;/h3&gt;

&lt;p&gt;As you can tell from the above graphs, &lt;a href="https://scalegrid.io/redis.html"&gt;ScaleGrid for Redis™&lt;/a&gt; provides significantly higher throughput and lower latency. On average, we see approximately 5x the throughput and 85% lower latency compared to equivalent sizes on Azure Cache. The ScaleGrid for Redis™ &lt;a href="https://scalegrid.io/pricing.html?db=REDIS&amp;amp;cloud=cloud_azure&amp;amp;replica=deployment_standalone&amp;amp;instance=Micro#section_pricing_byoc"&gt;BYOC plan&lt;/a&gt; starts at $9 per month (720h + VM costs), and the &lt;a href="https://scalegrid.io/pricing.html?db=REDIS&amp;amp;cloud=cloud_azure&amp;amp;replica=deployment_standalone&amp;amp;instance=Micro#section_pricing_dedicated"&gt;Dedicated hosting plan&lt;/a&gt; starts at $18 per month (720h). To learn more about how different Redis providers compare to ScaleGrid for Redis™, check out our &lt;a href="https://scalegrid.io/redis/hosting-comparison.html"&gt;Redis service providers comparison chart&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
&lt;a id="3"&gt;&lt;/a&gt;Benchmark Configuration&lt;/h2&gt;

&lt;p&gt;Let’s take a look at the configurations we used in the performance benchmark:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table width="100%"&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;th&gt;&lt;b&gt;Configuration&lt;/b&gt;&lt;/th&gt;
&lt;th&gt;&lt;b&gt;Details&lt;/b&gt;&lt;/th&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Benchmark Tool&lt;/td&gt;
&lt;td&gt;&lt;a href="https://github.com/RedisLabs/memtier_benchmark"&gt;Memtier Benchmark&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Azure Region for Redis&lt;/td&gt;
&lt;td&gt;East US&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Azure Region for Application&lt;/td&gt;
&lt;td&gt;East US&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Deployment Type&lt;/td&gt;
&lt;td&gt;Master-Slave&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;We benchmarked Redis™ performance for the following configurations:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table width="100%"&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;th&gt;&lt;b&gt;Provider&lt;/b&gt;&lt;/th&gt;
&lt;th&gt;&lt;b&gt;Plan Size&lt;/b&gt;&lt;/th&gt;
&lt;th&gt;&lt;b&gt;RAM&lt;/b&gt;&lt;/th&gt;
&lt;th&gt;&lt;b&gt;Monthly Cost&lt;/b&gt;&lt;/th&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;ScaleGrid for Redis™&lt;/td&gt;
&lt;td&gt;&lt;a href="https://scalegrid.io/pricing.html?db=REDIS&amp;amp;cloud=cloud_azure&amp;amp;replica=deployment_redis_2+1&amp;amp;instance=Large#section_pricing_dedicated"&gt;Dedicated Hosting Large on Azure&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;7GB memory&lt;/td&gt;
&lt;td&gt;$607&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Azure Cache for Redis&lt;/td&gt;
&lt;td&gt;C3 Standard - Moderate network bandwidth&lt;/td&gt;
&lt;td&gt;6GB memory&lt;/td&gt;
&lt;td&gt;$328.50&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Azure Cache for Redis&lt;/td&gt;
&lt;td&gt;P1 Premium - Moderate network bandwidth&lt;/td&gt;
&lt;td&gt;6GB memory&lt;/td&gt;
&lt;td&gt;$404.42&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Azure Cache for Redis&lt;/td&gt;
&lt;td&gt;P2 Premium - High network bandwidth&lt;/td&gt;
&lt;td&gt;13GB memory&lt;/td&gt;
&lt;td&gt;$810.30&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;For each Redis™ server, we benchmarked with 100, 200, 300, and 400 connections. Each connection sends 10,000 requests with a 32-byte object data size per request. We use non-SSL connections to connect to the Redis™ servers.&lt;/p&gt;
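&lt;p&gt;As a rough sketch, a single 400-connection run with these settings could be reproduced with Memtier Benchmark; the server hostname and the thread/client split below are illustrative placeholders, not the exact invocation used in this benchmark.&lt;/p&gt;

```shell
# Hypothetical memtier_benchmark invocation approximating one run:
#   4 threads x 100 clients = 400 connections,
#   10,000 requests per client, 32-byte values, non-SSL.
# The server hostname is a placeholder.
memtier_benchmark --server=redis.example.com --port=6379 \
  --threads=4 --clients=100 \
  --requests=10000 --data-size=32 \
  --ratio=1:1
```

Note that `--clients` in Memtier is per thread, so connections = threads × clients, and `--ratio=1:1` gives an even mix of SET and GET operations.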

&lt;p&gt;Not only does ScaleGrid provide higher throughput and lower latency, it also offers many other features, such as full admin access, scheduled backups, and SSH access. You can find more information about &lt;a href="https://scalegrid.io/redis.html"&gt;ScaleGrid for Redis™ on our website&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;What Should You Consider When Choosing a Redis™ Service?&lt;/h2&gt;

&lt;p&gt;With so many fully managed Redis™ service providers available, which features matter most? Here is a checklist to use when choosing the right Redis™ hosting service for you:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Dedicated Server&lt;/li&gt;
&lt;li&gt;Scalability&lt;/li&gt;
&lt;li&gt;Data Persistence&lt;/li&gt;
&lt;li&gt;Backups &amp;amp; Restores&lt;/li&gt;
&lt;li&gt;High Throughput &amp;amp; Low Latency&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
&lt;a id="4"&gt;&lt;/a&gt;Dedicated Server&lt;/h3&gt;

&lt;p&gt;Redis is a single-threaded server that stores its data in memory; therefore, running Redis on a &lt;a href="https://help.scalegrid.io/docs/redis-new-cluster-dedicated-hosting"&gt;dedicated server&lt;/a&gt; is very important in a production environment. You don’t want your Redis server battling other services for CPU and memory resources.&lt;/p&gt;

&lt;h3&gt;
&lt;a id="5"&gt;&lt;/a&gt;Scalability&lt;/h3&gt;

&lt;p&gt;Businesses grow, and so does your data. It’s very important that your Redis™ service can perform a &lt;a href="https://help.scalegrid.io/docs/redis-scaling"&gt;dynamic, in-place scale up&lt;/a&gt; of your Redis™ server with little or no downtime.&lt;/p&gt;

&lt;h3&gt;
&lt;a id="6"&gt;&lt;/a&gt;Data Persistence&lt;/h3&gt;

&lt;p&gt;Depending on your business needs, you might need to persist your Redis data on physical storage. Redis provides two persistence options: RDB and AOF.&lt;/p&gt; 

&lt;p&gt;RDB persistence takes point-in-time snapshots of your dataset at specified intervals and writes them to a Redis Database Backup (RDB) file. The file can be copied to other Redis instances.&lt;/p&gt;

&lt;p&gt;AOF stands for Append Only File. Redis logs every write operation that modifies your dataset, which makes it a very reliable way to persist your data.&lt;/p&gt;

&lt;p&gt;RDB and AOF can be enabled at the same time, and they have different trade-offs. For more details about their pros and cons, see the &lt;a href="https://redis.io/topics/persistence"&gt;Redis Persistence page&lt;/a&gt; on redis.io.&lt;/p&gt;
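&lt;p&gt;As a sketch, the two persistence modes map onto a few redis.conf directives; the snapshot thresholds below are illustrative examples, not recommendations:&lt;/p&gt;

```conf
# RDB: snapshot the dataset to dump.rdb when write thresholds are hit
save 3600 1       # snapshot if at least 1 key changed within 3600 seconds
save 300 100      # snapshot if at least 100 keys changed within 300 seconds

# AOF: log every write operation to an append-only file
appendonly yes
appendfsync everysec   # fsync once per second: a durability/throughput trade-off
```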

&lt;p&gt;Your Redis service should provide options not only to persist the data, but also to deploy Redis in &lt;a href="https://redis.io/topics/replication"&gt;master/replica&lt;/a&gt; or &lt;a href="https://redis.io/topics/cluster-tutorial"&gt;cluster mode&lt;/a&gt; to minimize the chance of data loss.&lt;/p&gt;
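&lt;p&gt;For example, pointing a replica at its master takes a single redis.conf directive (the master address below is hypothetical), and the min-replicas settings let a master refuse writes when replication is unhealthy:&lt;/p&gt;

```conf
# On the replica: follow this master (Redis 5+; older versions use "slaveof")
replicaof 10.0.0.5 6379

# On the master: stop accepting writes if fewer than 1 replica is
# connected, or if the connected replica lags by more than 10 seconds
min-replicas-to-write 1
min-replicas-max-lag 10
```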

&lt;h3&gt;
&lt;a id="7"&gt;&lt;/a&gt;Backups &amp;amp; Restores&lt;/h3&gt;

&lt;p&gt;Any Database as a Service (DBaaS) for Redis™ should provide both &lt;a href="https://help.scalegrid.io/docs/redis-backups-scheduled"&gt;scheduled&lt;/a&gt; and &lt;a href="https://help.scalegrid.io/docs/redis-backups-on-demand"&gt;on-demand&lt;/a&gt; backups, so you always have a regular backup schedule and can also take a backup as needed before an application event. It should also offer options to restore a backup to an existing database or to a new database instance.&lt;/p&gt;
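&lt;p&gt;With full admin access, an on-demand backup essentially boils down to triggering a snapshot yourself. As a sketch (the hostname below is a placeholder):&lt;/p&gt;

```shell
# Ask the server to write an RDB snapshot in the background,
# then confirm it completed by checking the last-save timestamp.
redis-cli -h redis.example.com BGSAVE
redis-cli -h redis.example.com LASTSAVE   # Unix timestamp of the last successful save
```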

&lt;h3&gt;
&lt;a id="8"&gt;&lt;/a&gt;High Throughput &amp;amp; Low Latency&lt;/h3&gt;

&lt;p&gt;Redis can provide fast caching for applications, but network latency can become a bottleneck when accessing data from Redis. The key is to minimize the physical distance between your application and Redis, so make sure both are hosted in the same cloud provider region and on the same virtual network. Your Redis™ service provider should offer the option to deploy your Redis server on the virtual network of your choice.&lt;/p&gt;
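&lt;p&gt;redis-cli includes a built-in latency sampler you can use to verify this; running it from the application host (the hostname below is a placeholder) shows the round-trip time your application will actually see:&lt;/p&gt;

```shell
# Continuously sample round-trip latency (min/max/avg in milliseconds);
# compare a run from inside the virtual network against one from outside.
redis-cli -h redis.example.com --latency
```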

</description>
      <category>azure</category>
      <category>database</category>
      <category>redis</category>
      <category>cloud</category>
    </item>
  </channel>
</rss>
