<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Jerod Johnson</title>
    <description>The latest articles on Forem by Jerod Johnson (@jerodimusprime).</description>
    <link>https://forem.com/jerodimusprime</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F266435%2Feddb9bb6-cc58-4ac6-a729-5588489eb854.jpg</url>
      <title>Forem: Jerod Johnson</title>
      <link>https://forem.com/jerodimusprime</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/jerodimusprime"/>
    <language>en</language>
    <item>
      <title>Analyze Live Salesforce Data in Infragistics Reveal </title>
      <dc:creator>Jerod Johnson</dc:creator>
      <pubDate>Fri, 12 Nov 2021 16:44:35 +0000</pubDate>
      <link>https://forem.com/jerodimusprime/analyze-live-salesforce-data-in-infragistics-reveal-44j5</link>
      <guid>https://forem.com/jerodimusprime/analyze-live-salesforce-data-in-infragistics-reveal-44j5</guid>
      <description>&lt;p&gt;Infragistics Reveal is a data visualization solution that simplifies the way you add embedded analytics and visualizations to your apps. Reveal supports connecting to OData APIs, but many enterprise APIs are not built using the OData standard. By combining Reveal with &lt;a href="https://www.cdata.com/products/?utm_source=dev.to&amp;amp;utm_medium=kb-article-posting&amp;amp;utm_campaign=analyze-live-salesforce-data-in-infragistics-reveal"&gt;CData solutions&lt;/a&gt;, you can instantly build dynamic dashboards from live enterprise data sources like Salesforce. &lt;/p&gt;

&lt;p&gt;With access to live data, you can build real-time, responsive dashboards that retrieve new data points with every refresh. &lt;a href="https://www.cdata.com/connect/?utm_source=dev.to&amp;amp;utm_medium=kb-article-posting&amp;amp;utm_campaign=analyze-live-salesforce-data-in-infragistics-reveal"&gt;CData Connect Cloud&lt;/a&gt; and &lt;a href="https://www.cdata.com/apiserver/?utm_source=dev.to&amp;amp;utm_medium=kb-article-posting&amp;amp;utm_campaign=analyze-live-salesforce-data-in-infragistics-reveal"&gt;API Server&lt;/a&gt; let you generate standard data APIs like OData for all your data, offering out-of-the-box connectivity to tools like Reveal. &lt;/p&gt;

&lt;p&gt;In this article, we walk through how to connect Salesforce to Infragistics Reveal to create a simple dashboard, leveraging CData Connect Cloud and CData API Server to facilitate the connection. &lt;/p&gt;

&lt;h3&gt;
  
  
  Why CData?
&lt;/h3&gt;

&lt;p&gt;Both CData Connect Cloud and CData API Server provide a single point of contact for all your enterprise data through standard data interfaces like SQL and OData. Instead of migrating data from dozens of sources to a static data store or writing your own integrations, simply point your tools to Connect Cloud or API Server and get instant, live access to all your data. &lt;/p&gt;

&lt;h3&gt;
  
  
  Why Connect Cloud?
&lt;/h3&gt;

&lt;p&gt;As a cloud-based integration platform, Connect Cloud is ideal for working with cloud apps that have embedded Reveal visualizations. With no servers to configure or data proxies to set up, you can simply use the Web-based UI to create a virtual database for Salesforce, generate an OData API, and connect from Reveal to start building dashboards based on live Salesforce data.  &lt;/p&gt;

&lt;h3&gt;
  
  
  Why CData API Server?
&lt;/h3&gt;

&lt;p&gt;CData API Server can be installed on-premises or in a self-hosted environment, enabling apps built with Reveal dashboards in those environments to connect to your data. Create an OData endpoint for Salesforce using the straightforward point-and-click interface and connect from Reveal to create dynamic Salesforce visualizations.  &lt;/p&gt;

&lt;h1&gt;
  
  
  Getting Started: Salesforce Connection Properties
&lt;/h1&gt;

&lt;p&gt;To work with live Salesforce data in Reveal, you need to authenticate with Salesforce. There are several authentication methods available for connecting to Salesforce: Login, OAuth, and SSO. The Login method requires you to have the username, password, and security token of the user. &lt;/p&gt;

&lt;p&gt;If you do not have access to the username and password or do not wish to require them, you can use OAuth authentication. &lt;/p&gt;

&lt;p&gt;Use SSO (single sign-on) by setting the SSOProperties, SSOLoginUrl, and TokenUrl connection properties, which allow you to authenticate to an identity provider. See the "Getting Started" chapter in the help documentation for more information. &lt;/p&gt;
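&lt;p&gt;As a rough illustration of the Login method, the sketch below assembles a semicolon-delimited connection-property string in Python. The property names shown (User, Password, SecurityToken) follow common CData driver conventions but should be confirmed against your product's documentation; the values are placeholders. &lt;/p&gt;

```python
# Illustrative sketch: assembling "Login"-style Salesforce connection
# properties. Property names are typical CData conventions (an assumption
# here); check the driver documentation for the exact names.

def build_connection_string(props):
    """Join key=value pairs into a semicolon-delimited connection string."""
    return ";".join(f"{k}={v}" for k, v in props.items())

login_props = {
    "User": "user@example.com",   # placeholder credentials
    "Password": "password",
    "SecurityToken": "abc123",
}

print(build_connection_string(login_props))
# User=user@example.com;Password=password;SecurityToken=abc123
```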

&lt;h1&gt;
  
  
  Using CData Connect Cloud
&lt;/h1&gt;

&lt;p&gt;To connect to your Salesforce data in Connect Cloud and enable OData access, navigate to the web interface for your Connect Cloud instance (&lt;a href="https://www.cdatacloud.net/myinstance"&gt;https://www.cdatacloud.net/myinstance&lt;/a&gt;). Follow the steps below to configure Connect Cloud. (Jump to building a dashboard.) &lt;/p&gt;

&lt;h2&gt;
  
  
  Add a Connect Cloud User
&lt;/h2&gt;

&lt;p&gt;Create a User to connect to Salesforce from Reveal through Connect Cloud. &lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Click Users -&amp;gt;  Add &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Configure a User &lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--d_6tM2y_--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/j7su5jmqywrs6ctcv66i.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--d_6tM2y_--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/j7su5jmqywrs6ctcv66i.png" alt="Image description" width="880" height="574"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Click Save Changes and make a note of the Authtoken for the new user &lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--o081ONlu--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/iwci48u9zpqr1gp5ggrq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--o081ONlu--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/iwci48u9zpqr1gp5ggrq.png" alt="Image description" width="880" height="574"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Connect to Salesforce from Connect Cloud
&lt;/h3&gt;

&lt;p&gt;CData Connect Cloud uses a straightforward, point-and-click interface to connect to data sources and generate APIs. &lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Open Connect Cloud and click Databases &lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--MnHrNifH--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/2afqyxm1ojpcigdk744x.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--MnHrNifH--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/2afqyxm1ojpcigdk744x.png" alt="Image description" width="880" height="574"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Select "Salesforce" from Available Data Sources &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Enter the necessary authentication properties to connect to Salesforce (see above) &lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--tpLKaF_9--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/kt8qtqef4y688v8biri4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--tpLKaF_9--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/kt8qtqef4y688v8biri4.png" alt="Image description" width="880" height="574"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Click  Test Database &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Click Privileges -&amp;gt;  Add and add the new user (or an existing user) with the appropriate permissions (SELECT is the only required permission for Reveal) &lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  Add Salesforce OData Endpoints in Connect Cloud
&lt;/h3&gt;

&lt;p&gt;After connecting to Salesforce, create an OData endpoint for the desired table(s). &lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Click OData -&amp;gt; Tables -&amp;gt; Add Tables &lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--_2ji8Gtd--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/d2d4adhz247rymmeeshc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--_2ji8Gtd--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/d2d4adhz247rymmeeshc.png" alt="Image description" width="880" height="574"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Select the Salesforce database &lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--KHzAcRRH--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/inqq08jp2w6kkwwxfqa6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--KHzAcRRH--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/inqq08jp2w6kkwwxfqa6.png" alt="Image description" width="880" height="574"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Select the table(s) you wish to work with and click Next &lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--UPTmH8pi--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/kbcfka4xsys5qvhn0ocx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--UPTmH8pi--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/kbcfka4xsys5qvhn0ocx.png" alt="Image description" width="880" height="557"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;(Optional) Edit the resource to select specific fields and more &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Save the settings &lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  (Optional) Configure Cross-Origin Resource Sharing (CORS)
&lt;/h3&gt;

&lt;p&gt;If you access the OData endpoint from a browser-based (Ajax) application served from a different domain, requests may be blocked by the browser's same-origin policy. In that case, configure the CORS settings in OData -&amp;gt; Settings. &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Enable cross-origin resource sharing (CORS): ON &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Allow all domains without '*': ON &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Access-Control-Allow-Methods: GET, PUT, POST, OPTIONS &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Access-Control-Allow-Headers: Authorization &lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Save the changes to the settings. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--FxiW5m14--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/yne8m299vyvfp8m5hrd3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--FxiW5m14--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/yne8m299vyvfp8m5hrd3.png" alt="Image description" width="880" height="495"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h1&gt;
  
  
  Using API Server
&lt;/h1&gt;

&lt;p&gt;To connect to your Salesforce data in API Server and enable OData access, navigate to the web interface for your API Server instance (&lt;a href="https://localhost:8153"&gt;https://localhost:8153&lt;/a&gt;). Follow the steps below to configure the API Server. (Jump to building a dashboard.) &lt;/p&gt;

&lt;h3&gt;
  
  
  Connect to Salesforce from API Server
&lt;/h3&gt;

&lt;p&gt;CData API Server uses a straightforward, point-and-click interface to connect to data sources and generate APIs. &lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Open API Server and click Settings -&amp;gt; Connection -&amp;gt;  Add Connection &lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--XjtcBlBA--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/i126nt9bi7frn444usmp.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--XjtcBlBA--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/i126nt9bi7frn444usmp.png" alt="Image description" width="880" height="573"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Select "Salesforce" &lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Pd0dIPiN--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/x7qhjqhtrm8gvmw7wyqx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Pd0dIPiN--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/x7qhjqhtrm8gvmw7wyqx.png" alt="Image description" width="880" height="573"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Enter the necessary authentication properties to connect to Salesforce (see above) &lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--oXtG6IqO--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/xolrvexiikffd2e8ha5w.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--oXtG6IqO--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/xolrvexiikffd2e8ha5w.png" alt="Image description" width="880" height="573"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Add Salesforce Resource Definitions in API Server
&lt;/h3&gt;

&lt;p&gt;After connecting to Salesforce, create Resources, which represent API endpoints for Salesforce data. &lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Click Settings -&amp;gt; Resources -&amp;gt; Add Resource &lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--FCHIJwCG--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/2u7xt3j5jn37ffsvcql9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--FCHIJwCG--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/2u7xt3j5jn37ffsvcql9.png" alt="Image description" width="880" height="573"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Select the Salesforce connection &lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Lq-6dcC---/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/li0037b65traiapd8z9u.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Lq-6dcC---/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/li0037b65traiapd8z9u.png" alt="Image description" width="880" height="573"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Select the table you wish to retrieve and click Next &lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--ARtYUHBq--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/3ue1hzcsn501fu2izd0m.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--ARtYUHBq--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/3ue1hzcsn501fu2izd0m.png" alt="Image description" width="880" height="573"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;(Optional) Edit the resource to select specific fields and more &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Save the settings &lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  Add an API Server User
&lt;/h3&gt;

&lt;p&gt;Create a User to connect to Salesforce from Reveal through API Server. &lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Click Settings -&amp;gt; Users &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Click  Add &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Configure a User with access to the Salesforce Connection and Resource(s) &lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--QdDP5Vdl--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/w1bv804752olacfec092.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--QdDP5Vdl--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/w1bv804752olacfec092.png" alt="Image description" width="880" height="573"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Nl5X-NZM--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/hl5jz4tjo8a0wj6jf7y4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Nl5X-NZM--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/hl5jz4tjo8a0wj6jf7y4.png" alt="Image description" width="880" height="573"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  (Optional) Configure Cross-Origin Resource Sharing (CORS)
&lt;/h3&gt;

&lt;p&gt;If you access the OData endpoint from a browser-based (Ajax) application served from a different domain, requests may be blocked by the browser's same-origin policy. In that case, configure the CORS settings in Settings -&amp;gt; Server. &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Enable cross-origin resource sharing (CORS): ON &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Allow all domains without '*': ON &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Access-Control-Allow-Methods: GET, PUT, POST, OPTIONS &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Access-Control-Allow-Headers: Authorization &lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Save the changes to the settings. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--d2BcSZxS--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/239ajiquudh7fmnylkvs.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--d2BcSZxS--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/239ajiquudh7fmnylkvs.png" alt="Image description" width="880" height="573"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h1&gt;
  
  
  Create a Dashboard in Reveal
&lt;/h1&gt;

&lt;p&gt;With Connect Cloud or API Server configured, we can visualize Salesforce data in Reveal. &lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Log into Reveal and click Dashboards -&amp;gt;  New &lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--dOpa9FWD--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/zxhq0ev54wlgm5wssjws.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--dOpa9FWD--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/zxhq0ev54wlgm5wssjws.png" alt="Image description" width="880" height="574"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Click  Data Source -&amp;gt; OData Feed &lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--5JNfCSaX--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/cz5gozzg9vd9e76xpxz9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--5JNfCSaX--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/cz5gozzg9vd9e76xpxz9.png" alt="Image description" width="880" height="574"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Specify the API Server or Connect Cloud API endpoint URL, for example, &lt;a href="https://serverurl/api.rsc"&gt;https://serverurl/api.rsc&lt;/a&gt; &lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--kSe7ADcJ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/v89jczk19wxejsyaw912.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--kSe7ADcJ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/v89jczk19wxejsyaw912.png" alt="Image description" width="880" height="574"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Select Generic Credentials and specify the username and authtoken of the user you created in Connect Cloud or API Server &lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--JtmmTnXV--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/bp4e6bw7m5n84hfjfhpv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--JtmmTnXV--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/bp4e6bw7m5n84hfjfhpv.png" alt="Image description" width="880" height="574"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Select the entity you wish to visualize &lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--o7kgw_FW--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/pyvqt20mws71fkwh70in.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--o7kgw_FW--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/pyvqt20mws71fkwh70in.png" alt="Image description" width="880" height="574"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Select fields and choose a chart type &lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--zUXp8UNj--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/bzy8ri0eb3zvwckp33ry.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--zUXp8UNj--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/bzy8ri0eb3zvwckp33ry.png" alt="Image description" width="880" height="574"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h1&gt;
  
  
  More Information
&lt;/h1&gt;

&lt;p&gt;Companies everywhere are looking to leverage their data to make effective decisions and grow their businesses. &lt;/p&gt;

&lt;p&gt;By providing industry-standard connectivity, both &lt;a href="https://www.cdata.com/connect/?utm_source=dev.to&amp;amp;utm_medium=kb-article-posting&amp;amp;utm_campaign=analyze-live-salesforce-data-in-infragistics-reveal"&gt;CData Connect Cloud&lt;/a&gt; and &lt;a href="https://www.cdata.com/apiserver/?utm_source=dev.to&amp;amp;utm_medium=kb-article-posting&amp;amp;utm_campaign=analyze-live-salesforce-data-in-infragistics-reveal"&gt;CData API Server&lt;/a&gt; democratize Salesforce access. Now everyone in the organization, from IT professionals &amp;amp; app developers to citizen analysts &amp;amp; data scientists, can work with and derive value from the entire Salesforce platform. &lt;/p&gt;

</description>
      <category>automation</category>
      <category>integration</category>
      <category>database</category>
      <category>connectivity</category>
    </item>
    <item>
      <title>Editing API Driver API Profiles </title>
      <dc:creator>Jerod Johnson</dc:creator>
      <pubDate>Fri, 12 Nov 2021 15:28:44 +0000</pubDate>
      <link>https://forem.com/jerodimusprime/editing-api-driver-api-profiles-m63</link>
      <guid>https://forem.com/jerodimusprime/editing-api-driver-api-profiles-m63</guid>
      <description>&lt;p&gt;RESTful APIs offer a straightforward way for businesses to work with external data and offer access to their own data. With more than 24,000 public APIs providing access to limitless data and the typical enterprise leveraging more than 200 applications, teams and developers need more efficient ways to query API data. With the CData API Driver, it is easier than ever to integrate with APIs at scale for data analytics and application development. &lt;/p&gt;

&lt;h2&gt;
  
  
  Why use the API Driver?
&lt;/h2&gt;

&lt;p&gt;Use simple SQL to access and work with data anywhere there’s an API. Like all &lt;a href="https://www.cdata.com/drivers/?utm_source=dev.to&amp;amp;utm_medium=kb-article-posting&amp;amp;utm_campaign=editing-api-driver-api-profiles"&gt;CData Drivers&lt;/a&gt;, the CData API Driver makes it easier to work with your data. With the API Driver and the available API Profiles, you can instantly and codelessly query dozens of APIs (including any APIs built using the &lt;a href="https://www.cdata.com/apiserver/?utm_source=dev.to&amp;amp;utm_medium=kb-article-posting&amp;amp;utm_campaign=editing-api-driver-api-profiles"&gt;CData API Server&lt;/a&gt;) from BI, data integration, and custom applications.  &lt;/p&gt;

&lt;p&gt;Thanks to its extensible design, organizations and developers can modify API Profiles from API Server to customize integrations and even create their own API Profiles. Easily expand the connectivity offered by API Driver to any RESTful API. &lt;/p&gt;

&lt;h1&gt;
  
  
  Editing an API Profile
&lt;/h1&gt;

&lt;p&gt;By default, existing API Profiles grant read access to APIs. We can edit the API Profiles to gain write access to APIs (where supported). &lt;/p&gt;

&lt;h2&gt;
  
  
  Unzip the Existing Profile
&lt;/h2&gt;

&lt;p&gt;API Profiles (.apip files) are compressed folders that contain a collection of schema files representing the endpoints available for the API. When you decompress (unzip) the file, you can edit the schema files individually to extend the functionality. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--CkDkVzQ3--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/bqum8t6f0dh5zojgao37.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--CkDkVzQ3--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/bqum8t6f0dh5zojgao37.png" alt="Image description" width="854" height="602"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Open the Schema File
&lt;/h2&gt;

&lt;p&gt;Once the Profile is decompressed, open the schema file for the API endpoint you wish to modify (in your preferred text editor). Each schema file has several parts that define how SQL access is enabled for an API endpoint. &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;api:info: This keyword maps API fields to table columns through scripted column definitions. &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;attr: This element represents a column definition (details are below). &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;api:set attr="...": This keyword (attribute) sets various parameters for the API integration, including paging functionality and information on how to parse the API response (via the RepeatElement attribute). &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;api:script method="...": This keyword defines how read and write functionality is implemented for the API endpoint, including which internal operation is called and how specific functionality like filtering is managed. &lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Below are the contents of the schema file for retrieving Zoom Meetings. &lt;/p&gt;

&lt;p&gt;Meetings.rsd (structure, abridged) &lt;/p&gt;

&lt;pre&gt;&lt;code&gt;&amp;lt;api:script&amp;gt;
  &amp;lt;api:info title="Meetings"&amp;gt;
    &amp;lt;attr name="Id" xs:type="string" key="true" readonly="true" other:xPath="id" /&amp;gt;
    ...
  &amp;lt;/api:info&amp;gt;

  &amp;lt;api:set attr="..." value="..." /&amp;gt;
  ...

  &amp;lt;api:script method="GET"&amp;gt;
    &amp;lt;api:call op="apisadoExecuteJSONGet"&amp;gt;
      &amp;lt;api:push/&amp;gt;
    &amp;lt;/api:call&amp;gt;
  &amp;lt;/api:script&amp;gt;

  &amp;lt;api:script method="POST"&amp;gt; ... &amp;lt;/api:script&amp;gt;
  ...
&amp;lt;/api:script&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;h2&gt;
  
  
  Update Column Definitions
&lt;/h2&gt;

&lt;p&gt;If we are going to implement write functionality, we need to modify the schema to allow write permissions to the appropriate columns. Before we update a column definition for the Profile, let's examine a single column definition and discuss the different attributes. &lt;/p&gt;

&lt;h2&gt;
  
  
  Id Column Definition
&lt;/h2&gt;

&lt;pre&gt;&lt;code&gt;&amp;lt;attr name="Id" xs:type="string" key="true" readonly="true" other:xPath="id" /&amp;gt;&lt;/code&gt;&lt;/pre&gt;

&lt;h2&gt;
  
  
  Column Definition Attributes
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;name: The name of the column in the SQL interface for the API endpoint &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;xs:type: The data type associated with the column (e.g.: string, datetime, int, etc.) &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;readonly: Whether the column is read-only (this defaults to true, disallowing writes) &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;key: Whether a column is intended to be a unique identifier for the elements in a table/view &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;other:xPath: The path (exact or relative to the RepeatElement) in the API response to the column value &lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;We can see from the schema file that most of the columns (which correspond to various API fields) are flagged as read-only. The first edit we need to make is to change those columns so the API Driver can write to them (based on the API specification) by removing the read-only flags: &lt;/p&gt;

&lt;pre&gt;&lt;code&gt;...
&amp;lt;attr name="Topic"     xs:type="string"   other:xPath="topic" /&amp;gt;
&amp;lt;attr name="Type"      xs:type="int"      other:xPath="type" /&amp;gt;
&amp;lt;attr name="StartTime" xs:type="datetime" other:xPath="start_time" /&amp;gt;
&amp;lt;attr name="Duration"  xs:type="int"      other:xPath="duration" /&amp;gt;
&amp;lt;attr name="TimeZone"  xs:type="string"   other:xPath="timezone" /&amp;gt;
...&lt;/code&gt;&lt;/pre&gt;

&lt;h2&gt;
  
  
  Add INSERT Functionality
&lt;/h2&gt;

&lt;p&gt;In this section, we walk through modifying the Schema file to support INSERT statements like the following: &lt;/p&gt;

&lt;pre&gt;&lt;code&gt;INSERT INTO Meetings
  (Topic, Type, StartTime, Duration, TimeZone, UserId)
VALUES
  ('Test Meeting', 2, '2019-12-31T00:00:00', 60, 'EST', 'user@domain.com')&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;NOTE: Topic, Type, StartTime, Duration, and TimeZone are all columns in our schema. The UserId attr is a pseudocolumn, which behaves like a table column but is not actually stored in the table. In this case, the UserId is used to POST the new meeting to a specific user account. &lt;/p&gt;
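&lt;p&gt;To see how the pseudocolumn travels with the statement, the sketch below renders an INSERT like the one above from a dict of values. It is purely illustrative; the API Driver builds and executes the statement for you: &lt;/p&gt;

```python
# Sketch: rendering the INSERT above from a dict of column values.
# Purely illustrative -- the API Driver does this for you; the helper
# name render_insert is hypothetical.

def render_insert(table, values):
    """Render a SQL INSERT, quoting strings and leaving numbers bare."""
    cols = ",".join(values)
    rendered = []
    for v in values.values():
        if isinstance(v, str):
            rendered.append("'" + v.replace("'", "''") + "'")
        else:
            rendered.append(str(v))
    return f"INSERT INTO {table} ({cols}) VALUES ({','.join(rendered)})"

stmt = render_insert("Meetings", {
    "Topic": "Test Meeting",
    "Type": 2,
    "StartTime": "2019-12-31T00:00:00",
    "Duration": 60,
    "TimeZone": "EST",
    "UserId": "user@domain.com",  # pseudocolumn: routed to the API, not stored
})
print(stmt)
```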

&lt;p&gt;In the schema file, the SELECT functionality is implemented in the api:script element: &lt;/p&gt;







&lt;p&gt;api:push/ &lt;br&gt;
/api:call &lt;br&gt;
/api:script &lt;/p&gt;

&lt;p&gt;To implement INSERT functionality, we need to modify the api:script element, setting the method attribute value to "POST", setting the input values in a data attribute, and calling the appropriate operation (apisadoExecuteJSONGet in this case). &lt;/p&gt;

&lt;h3&gt;
  
  
  Setting the method Attribute
&lt;/h3&gt;
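&lt;p&gt;A sketch of the modified opening element (only the method value comes from the text; the element body is elided): &lt;/p&gt;

```xml
&lt;api:script method="POST"&gt;
  ...
&lt;/api:script&gt;
```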





&lt;p&gt;&lt;a href="/api:script"&gt;/api:script&lt;/a&gt; &lt;/p&gt;

&lt;h3&gt;
  
  
  Setting the data Attribute
&lt;/h3&gt;

&lt;p&gt;The Zoom Meetings API endpoint allows you to create a new meeting by submitting a JSON object with various fields for the new meeting. Values from an INSERT statement are mapped to corresponding fields in an _input object and are used to create the data attribute, which will be POSTed to the Zoom API. &lt;/p&gt;



&lt;p&gt;{ &lt;/p&gt;

&lt;p&gt;"topic": "[_input.Topic]", &lt;/p&gt;

&lt;p&gt;"type": [_input.Type], &lt;/p&gt;

&lt;p&gt;"start_time": "[_input.StartTime]", &lt;/p&gt;

&lt;p&gt;"duration": "[_input.Duration]", &lt;/p&gt;

&lt;p&gt;"timezone": "[_input.TimeZone]" &lt;/p&gt;

&lt;p&gt;} &lt;/p&gt;

&lt;p&gt;&lt;a href="/api:set"&gt;/api:set&lt;/a&gt; &lt;/p&gt;

&lt;h3&gt;
  
  
  Calling the apisadoExecuteJSONGet Operation
&lt;/h3&gt;

&lt;p&gt;Once the data attribute is set, call the apisadoExecuteJSONGet operation using an api:call keyword and push the operation to the API Driver with an api:push keyword. &lt;/p&gt;
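&lt;p&gt;Sketched out, the call might be structured as follows (the nesting is an assumption based on the tag names in the article, and the operation body is elided): &lt;/p&gt;

```xml
&lt;api:call op="apisadoExecuteJSONGet"&gt;
  &lt;api:push&gt;
    ...
  &lt;/api:push&gt;
&lt;/api:call&gt;
```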



&lt;p&gt;... &lt;/p&gt;



&lt;p&gt;api:push &lt;br&gt;
/api:push &lt;br&gt;
/api:call &lt;br&gt;
/api:script &lt;/p&gt;

&lt;h2&gt;
  
  
  Using the Edited Profile
&lt;/h2&gt;

&lt;p&gt;With the edits made to the schema file, you are ready to start INSERTing Meetings to a Zoom account. Save the schema file, recompress (zip) the schema files, and change the extension of the compressed file to .apip. Use an application that supports standard connectivity to connect to the API Profile using the API Driver, then submit your INSERT statement. &lt;/p&gt;

&lt;h1&gt;
  
  
  INSERT Statement
&lt;/h1&gt;

&lt;p&gt;INSERT INTO Meetings &lt;br&gt;
(Topic,Type,StartTime,Duration,TimeZone,UserId) &lt;br&gt;
VALUES &lt;br&gt;
('Test Meeting (Edited Profile)',2,'2019-12-31T00:00:00',60,'EST','&lt;a href="mailto:user@domain.com"&gt;user@domain.com&lt;/a&gt;') &lt;/p&gt;

&lt;h4&gt;
  
  
  Meeting in Zoom
&lt;/h4&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--oKdAq0iS--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/hpmk4l5de7ss46ftb1wa.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--oKdAq0iS--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/hpmk4l5de7ss46ftb1wa.png" alt="Image description" width="725" height="762"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h1&gt;
  
  
  More Information
&lt;/h1&gt;

&lt;p&gt;The CData API Drivers simplify data connectivity for a wide range of popular data tools. Connect BI, Reporting, &amp;amp; ETL tools to live data from any application, database, or Web API. Built on the same robust SQL engine that powers other CData Drivers, the &lt;a href="https://www.cdata.com/apidriver/?utm_source=dev.to&amp;amp;utm_medium=kb-article-posting&amp;amp;utm_campaign=editing-api-driver-api-profiles"&gt;CData API Driver&lt;/a&gt; enables simple codeless query access to APIs through a single client interface. &lt;/p&gt;

</description>
      <category>automation</category>
      <category>connectivity</category>
      <category>database</category>
      <category>integration</category>
    </item>
    <item>
      <title>Creating API Driver API Profiles</title>
      <dc:creator>Jerod Johnson</dc:creator>
      <pubDate>Fri, 12 Nov 2021 14:29:52 +0000</pubDate>
      <link>https://forem.com/jerodimusprime/creating-api-driver-api-profiles-dg0</link>
      <guid>https://forem.com/jerodimusprime/creating-api-driver-api-profiles-dg0</guid>
      <description>&lt;p&gt;RESTful APIs offer a straightforward way for businesses to work with external data and offer access to their own data. With more than 24,000 public APIs providing access to limitless data and the typical enterprise leveraging more than 200 applications, teams and developers need more efficient ways to query API data. With the CData API Driver, it is easier than ever to integrate with APIs at scale for data analytics and application development. &lt;/p&gt;

&lt;h2&gt;
  
  
  Why use the API Driver?
&lt;/h2&gt;

&lt;p&gt;Use simple SQL to access and work with data anywhere there’s an API. Like all &lt;a href="https://www.cdata.com/drivers/?utm_source=dev.to&amp;amp;utm_medium=kb-article-posting&amp;amp;utm_campaign=creating-api-driver-api-profiles"&gt;CData Drivers&lt;/a&gt;, the CData API Driver makes it easier to work with your data. With the API Driver and the available API Profiles, you can instantly and codelessly query dozens of APIs (including any APIs built using the &lt;a href="https://www.cdata.com/apiserver/?utm_source=dev.to&amp;amp;utm_medium=kb-article-posting&amp;amp;utm_campaign=creating-api-driver-api-profiles"&gt;CData API Server&lt;/a&gt;) from BI, data integration, and custom applications.  &lt;/p&gt;

&lt;p&gt;Thanks to its extensible design, organizations and developers can modify API Profiles from API Server to customize integrations and even create their own API Profiles. Easily expand the connectivity offered by API Driver to any RESTful API. &lt;/p&gt;

&lt;p&gt;This article shows how to create a new API Profile to enable SQL access to a new API. &lt;/p&gt;

&lt;h1&gt;
  
  
  Creating an API Profile
&lt;/h1&gt;

&lt;p&gt;API Profiles grant SQL access to APIs. While there are dozens of APIs supported out of the box, you are also able to create a new Profile to enable SQL access to any API. &lt;/p&gt;

&lt;h3&gt;
  
  
  Create a Schema File (.rsd)
&lt;/h3&gt;

&lt;p&gt;API Profiles contain a collection of schema files that represent the endpoints available for the API. For this article, we will start a Profile for the TripPin OData v4 API by creating a schema file for the People endpoint. Each schema file has several parts that define how SQL access is enabled for an API endpoint. &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;api:info: This keyword maps API fields to table columns through scripted column definitions. &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;attr: This element represents a column definition (details are below). &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;api:set attr="...": This keyword (attribute) sets various parameters for the API integration, including paging functionality and information on how to parse the API response (via the RepeatElement attribute). &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;api:script method="...": This keyword defines how read and write functionality is implemented for the API endpoint, including which internal operation is called and how specific functionality like filtering is managed. &lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Start by creating a new text file called People.rsd. From here, we walk through creating each section of the schema file, based on the API specification. &lt;/p&gt;

&lt;h3&gt;
  
  
  Create Column Definitions
&lt;/h3&gt;

&lt;p&gt;API Driver schema files enable SQL access to API endpoints, and this starts with creating column definitions for the corresponding API fields for the given endpoint, using API Script keywords and other functionality. An api:script keyword contains the entire schema definition. An api:info keyword provides the table name &amp;amp; description and contains the column definitions, where each API field is mapped to a table column with an attr (attribute) element. &lt;/p&gt;

&lt;p&gt;The People endpoint of our API returns a series of people, where each entry is represented by a JSON object similar to the following: &lt;/p&gt;

&lt;p&gt;{ &lt;br&gt;
"UserName" : "russellwhyte", &lt;br&gt;
"FirstName" : "Russell", &lt;br&gt;
"LastName" : "Whyte", &lt;br&gt;
"MiddleName" : null, &lt;br&gt;
"Gender" : "Male", &lt;br&gt;
"Age" : null, &lt;br&gt;
"Emails" : ["&lt;a href="mailto:Russell@example.com"&gt;Russell@example.com&lt;/a&gt;","&lt;a href="mailto:Russell@contoso.com"&gt;Russell@contoso.com&lt;/a&gt;"], &lt;br&gt;
"FavoriteFeature" : "Feature1", &lt;br&gt;
"Features" : ["Feature1","Feature2"], &lt;br&gt;
"AddressInfo" : [ &lt;br&gt;
{ "Address" : "187 Suffolk Ln.", &lt;br&gt;
"City":{ &lt;br&gt;
"Name" : "Boise", &lt;br&gt;
"CountryRegion" : "United States", &lt;br&gt;
"Region" : "ID" &lt;br&gt;
} &lt;br&gt;
}], &lt;br&gt;
"HomeAddress" : null} &lt;/p&gt;

&lt;p&gt;We can use path definitions based on the JSON structure to drill down into each of the values in the response, effectively flattening the response into a SQL table model. Create the column definitions based on the API specification and inferred information based on the response. An explanation of the column definition follows. &lt;/p&gt;
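&lt;p&gt;For illustration, the api:info block in People.rsd might contain column definitions like these (the paths and types here are inferred from the sample response and are assumptions, not the Profile's actual definitions): &lt;/p&gt;

```xml
&lt;api:info title="People" desc="TripPin People endpoint"&gt;
  &lt;!-- illustrative column definitions; xPath values are assumptions --&gt;
  &lt;attr name="UserName"  xs:type="string" key="true"      other:xPath="UserName" /&gt;
  &lt;attr name="FirstName" xs:type="string" readonly="true" other:xPath="FirstName" /&gt;
  &lt;attr name="Age"       xs:type="int"    readonly="true" other:xPath="Age" /&gt;
  &lt;attr name="Feature1"  xs:type="string" readonly="true" other:xPath="Features[0]" /&gt;
  &lt;attr name="City"      xs:type="string" readonly="true" other:xPath="AddressInfo[0]/City/Name" /&gt;
&lt;/api:info&gt;
```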

































&lt;p&gt;&lt;a href="/api:info"&gt;/api:info&lt;/a&gt; &lt;/p&gt;

&lt;p&gt;... &lt;/p&gt;

&lt;h4&gt;
  
  
  Column Definition Attributes
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;name: The name of the column in the SQL interface for the API endpoint &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;xs:type: The data type associated with the column (e.g.: string, datetime, int, etc.) &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;readonly: Whether the column allows writes (this is true by default) &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;key: Whether a column is intended to be a unique identifier for the elements in a table/view &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;other:xPath: The path (exact or relative to the RepeatElement) in the API response to the column value &lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  Specific Columns
&lt;/h4&gt;

&lt;p&gt;Here, we examine specific column definitions and explain how the different attributes create the SQL mapping for the API fields. &lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Column&lt;/th&gt;
&lt;th&gt;Featured Attribute&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;UserName&lt;/td&gt;
&lt;td&gt;key&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;ALL&lt;/td&gt;
&lt;td&gt;readonly&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;ALL&lt;/td&gt;
&lt;td&gt;xs:type&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Feature1&lt;/td&gt;
&lt;td&gt;other:xPath&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;City&lt;/td&gt;
&lt;td&gt;other:xPath&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;Meaning of key: Signifies UserName as a unique identifier for the table&lt;br&gt;
Meaning of readonly: Determines whether a column can be modified or not&lt;br&gt;
Meaning of xs:type: Sets the SQL datatype (based on API specification or data model)&lt;br&gt;
Meaning of other:xPath: The array index [0] indicates to pull the first entry in the Features JSON array &lt;br&gt;
Meaning of other:xPath: Drills into the AddressInfo JSON object to expose the city name &lt;/p&gt;

&lt;h3&gt;
  
  
  Add Global Parameters
&lt;/h3&gt;

&lt;p&gt;After creating the column definitions, we need to set the global parameters for integrating with the API, including the API endpoint to request data from, any required values for connecting, specific fields or headers for the API request, and the repeated element in the API response that represents individual entries for the API endpoint. &lt;/p&gt;

&lt;p&gt;... &lt;/p&gt;
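&lt;p&gt;A sketch of the global parameters, assuming the public TripPin service URL and a response rooted at a value array (both are assumptions): &lt;/p&gt;

```xml
&lt;!-- illustrative: endpoint URI and repeated element for the People response --&gt;
&lt;api:set attr="uri"           value="https://services.odata.org/V4/TripPinServiceRW/People" /&gt;
&lt;api:set attr="RepeatElement" value="/value" /&gt;
```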















&lt;p&gt;... &lt;/p&gt;

&lt;h3&gt;
  
  
  Add Read/Write Functionality
&lt;/h3&gt;

&lt;p&gt;With the columns and global parameters defined, we can complete the schema file by scripting the read and write functionality. SELECT functionality is implemented in the api:script keyword, setting the HTTP method to GET and calling the apisadoExecuteJSONGet operation to retrieve and process the data. &lt;/p&gt;







&lt;p&gt;api:push/ &lt;br&gt;
/api:call &lt;br&gt;
/api:script &lt;/p&gt;

&lt;p&gt;To implement INSERT / UPDATE / DELETE functionality, we need to add additional api:script elements with the POST, MERGE, and DELETE methods and implement the specific functionality with further scripting. For this article, we will only implement SELECT functionality and throw an error if we try to write to the API endpoint. &lt;/p&gt;

&lt;h3&gt;
  
  
  Setting the method Attribute
&lt;/h3&gt;

&lt;p&gt;... &lt;/p&gt;





&lt;p&gt;&lt;a href="/api:throw"&gt;/api:throw&lt;/a&gt;&lt;a href="/api:script"&gt;/api:script&lt;/a&gt; &lt;/p&gt;





&lt;p&gt;&lt;a href="/api:throw"&gt;/api:throw&lt;/a&gt;&lt;a href="/api:script"&gt;/api:script&lt;/a&gt; &lt;/p&gt;





&lt;p&gt;&lt;a href="/api:throw"&gt;/api:throw&lt;/a&gt;&lt;a href="/api:script"&gt;/api:script&lt;/a&gt; &lt;/p&gt;

&lt;p&gt;... &lt;/p&gt;
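&lt;p&gt;One possible shape for the write-method stubs, using api:throw to raise an error for each unsupported method (the code and description values are illustrative): &lt;/p&gt;

```xml
&lt;!-- illustrative stubs: reject writes to the read-only endpoint --&gt;
&lt;api:script method="POST"&gt;
  &lt;api:throw code="500" desc="INSERT is not supported for this endpoint."&gt;&lt;/api:throw&gt;
&lt;/api:script&gt;
&lt;api:script method="MERGE"&gt;
  &lt;api:throw code="500" desc="UPDATE is not supported for this endpoint."&gt;&lt;/api:throw&gt;
&lt;/api:script&gt;
&lt;api:script method="DELETE"&gt;
  &lt;api:throw code="500" desc="DELETE is not supported for this endpoint."&gt;&lt;/api:throw&gt;
&lt;/api:script&gt;
```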

&lt;p&gt;With the API functionality implemented, we can use the profile with the API Driver in any tool or application that supports JDBC or ADO.NET connectivity, granting SQL access to the API. &lt;/p&gt;

&lt;h2&gt;
  
  
  Using the Profile in DbVisualizer
&lt;/h2&gt;

&lt;p&gt;The API Driver has two connection properties used to connect to an API: &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Profile: The folder containing the schema files for your API. &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;ProfileSettings: A semicolon-separated list of name-value pairs for connection properties required by your chosen Profile — since we are connecting to an open API, we can leave this property blank. &lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Because we are connecting in DbVisualizer, we will configure a JDBC connection string to connect to the API through the Profile: &lt;/p&gt;

&lt;p&gt;jdbc:apis:Profile=/PATH/TO/TripPin/; &lt;/p&gt;

&lt;p&gt;In DbVisualizer, we create a new Connection using the API Driver and set the Database URL to the connection string. From there, we can connect to the Profile, expand the data model, and explore the data in the People "table" (which represents the People API endpoint). &lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--qXApiOf9--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/6etx0iei68kpbtftxxks.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--qXApiOf9--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/6etx0iei68kpbtftxxks.png" alt="Image description" width="880" height="498"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h1&gt;
  
  
  More Information
&lt;/h1&gt;

&lt;p&gt;The CData API Drivers simplify data connectivity for a wide range of popular data tools. Connect BI, Reporting, &amp;amp; ETL tools to live data from any application, database, or Web API. Built on the same robust SQL engine that powers CData Drivers, the &lt;a href="https://www.cdata.com/apidriver/?utm_source=dev.to&amp;amp;utm_medium=kb-article-posting&amp;amp;utm_campaign=creating-api-driver-api-profiles"&gt;CData API Driver&lt;/a&gt; enables simple codeless query access to APIs through a single client interface. &lt;/p&gt;

</description>
      <category>automation</category>
      <category>integration</category>
      <category>connectivity</category>
      <category>database</category>
    </item>
    <item>
      <title>Access All Your Data in Google Data Studio with CData Connect Cloud </title>
      <dc:creator>Jerod Johnson</dc:creator>
      <pubDate>Fri, 12 Nov 2021 11:24:24 +0000</pubDate>
      <link>https://forem.com/jerodimusprime/access-all-your-data-in-google-data-studio-with-cdata-connect-cloud-4o1i</link>
      <guid>https://forem.com/jerodimusprime/access-all-your-data-in-google-data-studio-with-cdata-connect-cloud-4o1i</guid>
      <description>&lt;p&gt;Google Data Studio helps you create comprehensive reports and data visualizations. When combined with &lt;a href="https://www.cdata.com/connect/?utm_source=dev.to&amp;amp;utm_medium=kb-article-posting&amp;amp;utm_campaign=access-all-your-data-in-google-data-studio-with-cdata-connect-cloud"&gt;CData Connect Cloud&lt;/a&gt;, you get instant, cloud-to-cloud access to data from supported SaaS, Big Data and NoSQL sources for visualizations, dashboards, and more. &lt;/p&gt;

&lt;p&gt;CData Connect Cloud provides a live cloud-to-cloud interface for all your data, making it easy to build reports from live SaaS, Big Data and NoSQL sources in Google Data Studio — without replicating the data to a natively supported database. As you build visualizations, Google Data Studio generates SQL queries to gather data. Using optimized data processing out of the box, CData Connect Cloud pushes all supported SQL operations (filters, JOINs, etc) directly to the underlying data source, leveraging server-side processing to quickly return the requested data. &lt;/p&gt;

&lt;p&gt;This article shows how to use the CData Connect Partner Connector to connect to your &lt;a href="https://www.cdata.com/connect/integrations/datastudio/?utm_source=dev.to&amp;amp;utm_medium=kb-article-posting&amp;amp;utm_campaign=access-all-your-data-in-google-data-studio-with-cdata-connect-cloud"&gt;Connect Cloud instance from Google Data Studio&lt;/a&gt;. &lt;/p&gt;

&lt;h1&gt;
  
  
  User Guide
&lt;/h1&gt;

&lt;p&gt;The CData Connect Partner Connector gives you the ability to connect data directly to Google Data Studio through &lt;a href="https://www.cdata.com/connect/?utm_source=dev.to&amp;amp;utm_medium=kb-article-posting&amp;amp;utm_campaign=access-all-your-data-in-google-data-studio-with-cdata-connect-cloud"&gt;CData Connect Cloud&lt;/a&gt;. &lt;/p&gt;

&lt;h2&gt;
  
  
  Grant Access
&lt;/h2&gt;

&lt;p&gt;Before using CData Connect Partner Connector, you will need to grant access to your Google Account. If you see this prompt, please click "Allow". &lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Lk9_gIOT--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/gw3tzz696qv9kk8z0x04.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Lk9_gIOT--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/gw3tzz696qv9kk8z0x04.jpg" alt="Image description" width="403" height="628"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Connect to CData Connect Cloud
&lt;/h2&gt;

&lt;p&gt;You will need your instance name, username, and password to establish a connection to Connect Cloud. Your instance name is found in the web UI address or MySQL/SQL Server host address for your Connect Cloud instance. For example, if your web UI address is &lt;a href="https://www.cdatacloud.net/my_instance"&gt;https://www.cdatacloud.net/my_instance&lt;/a&gt; and MySQL/SQL Server host address is my_instance.cdatacloud.net, your instance name is my_instance. &lt;/p&gt;

&lt;p&gt;Set username and password to the credentials for a Connect Cloud user for your instance with appropriate roles and privileges to access the desired data. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--o08E7ZNv--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/dyzclgqgr6gdqyjd2m2d.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--o08E7ZNv--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/dyzclgqgr6gdqyjd2m2d.png" alt="Image description" width="880" height="495"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;When finished setting up credentials in Connect Cloud, enter your credentials in the connector authentication panel. &lt;/p&gt;

&lt;p&gt;The format for Username here should be [instance name]/[user name]. For example, if your instance name is ustest1 and you have the user admin set in Connect Cloud, you should enter ustest1/admin. Password is the user's password. &lt;/p&gt;

&lt;p&gt;When you finish entering the information, click "SUBMIT." &lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--WVMlh4Ni--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/2r551t8doh05motsa6vq.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--WVMlh4Ni--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/2r551t8doh05motsa6vq.jpg" alt="Image description" width="295" height="428"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Select a Database
&lt;/h2&gt;

&lt;p&gt;After you've successfully established the connection, select a database you want to use and click "NEXT." &lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--ioy4Funh--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/saecgpdsspn9w9oy3u0w.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--ioy4Funh--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/saecgpdsspn9w9oy3u0w.jpg" alt="Image description" width="772" height="183"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Select a Table
&lt;/h2&gt;

&lt;p&gt;Then, select a table, and click "NEXT." &lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--qg6Q5gTx--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/51kconzmhbjepbitf1l9.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--qg6Q5gTx--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/51kconzmhbjepbitf1l9.jpg" alt="Image description" width="365" height="178"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Explore the Data
&lt;/h2&gt;

&lt;p&gt;If you see the message "Click 'CONNECT' to continue," the configuration information is correct. You can hit the "CONNECT" button at the top right of the screen. Do not modify any configuration information at this time, or it may cause a connection error. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--vstnkSw5--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/gueygnzh779ddagdctml.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--vstnkSw5--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/gueygnzh779ddagdctml.jpg" alt="Image description" width="322" height="78"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;At this step, you can set the fields and their metadata. Common data types are automatically detected, but you can also change them to something else. When you are content with the schema, you can start generating your report. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--YLX_Youi--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/um8jdw47pqlmgbf5a4wc.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--YLX_Youi--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/um8jdw47pqlmgbf5a4wc.jpg" alt="Image description" width="880" height="293"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h1&gt;
  
  
  Learn More
&lt;/h1&gt;

&lt;p&gt;As more businesses move toward data-driven digital transformation initiatives, lines of business are increasingly demanding access to their data to acquire actionable insights into performance and business health. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.cdata.com/connect/?utm_source=dev.to&amp;amp;utm_medium=kb-article-posting&amp;amp;utm_campaign=access-all-your-data-in-google-data-studio-with-cdata-connect-cloud"&gt;CData Connect&lt;/a&gt; is a consolidated connectivity platform that lets you easily connect any application, on-prem or in the cloud, with real-time data from anywhere. Unify access to all the cloud applications, databases, APIs, and services you use across your organization. &lt;/p&gt;

</description>
      <category>automation</category>
      <category>integration</category>
      <category>connectivity</category>
      <category>database</category>
    </item>
    <item>
      <title>Run Queries Across Many Data Sources at Once with the CData Query Federation Driver </title>
      <dc:creator>Jerod Johnson</dc:creator>
      <pubDate>Fri, 12 Nov 2021 10:46:32 +0000</pubDate>
      <link>https://forem.com/jerodimusprime/run-queries-across-many-data-sources-at-once-with-the-cdata-query-federation-driver-4l0k</link>
      <guid>https://forem.com/jerodimusprime/run-queries-across-many-data-sources-at-once-with-the-cdata-query-federation-driver-4l0k</guid>
      <description>&lt;p&gt;The &lt;a href="https://www.cdata.com/queryfederation/?utm_source=dev.to&amp;amp;utm_medium=kb-article-posting&amp;amp;utm_campaign=run-queries-across-many-data-sources-at-once-with-the-cdata-query-federation+Driver+"&gt;CData Query Federation Driver&lt;/a&gt;, allows you to federate and aggregate data across drivers as a single database connection. You can combine the Query Federation Driver with other &lt;a href="https://www.cdata.com/jdbc/?utm_source=dev.to&amp;amp;utm_medium=kb-article-posting&amp;amp;utm_campaign=run-queries-across-many-data-sources-at-once-with-the-cdata-query-federation+Driver+"&gt;CData JDBC Drivers&lt;/a&gt; or &lt;a href="https://www.cdata.com/ado/?utm_source=dev.to&amp;amp;utm_medium=kb-article-posting&amp;amp;utm_campaign=run-queries-across-many-data-sources-at-once-with-the-cdata-query-federation+Driver+"&gt;CData ADO.NET Providers&lt;/a&gt; to access all your SaaS, Big Data, and NoSQL sources as if they were one database. You can query or update data from a single source and even execute complex queries that JOIN data across sources. &lt;/p&gt;

&lt;p&gt;In this article, we will walk through the Query Federation Driver configuration to connect to multiple data sources. We will be looking at the JDBC drivers for Salesforce, MySQL, and Excel, but the principles for configuring the CData Query Federation Driver extend to all 250+ data sources that CData supports. &lt;/p&gt;

&lt;h2&gt;
  
  
  Download and Configure the Query Federation Driver
&lt;/h2&gt;

&lt;p&gt;Start working with all your data from a single endpoint by installing the &lt;a href="https://www.cdata.com/queryfederation/?utm_source=dev.to&amp;amp;utm_medium=kb-article-posting&amp;amp;utm_campaign=run-queries-across-many-data-sources-at-once-with-the-cdata-query-federation+Driver+"&gt;Query Federation Driver&lt;/a&gt; along with &lt;a href="https://www.cdata.com/drivers/?utm_source=dev.to&amp;amp;utm_medium=kb-article-posting&amp;amp;utm_campaign=run-queries-across-many-data-sources-at-once-with-the-cdata-query-federation+Driver+"&gt;CData drivers&lt;/a&gt; for your specific data sources.  &lt;/p&gt;

&lt;p&gt;Like any other database driver, the Federation Driver connects to data using connection string properties. There are two key connection properties for the Query Federation Driver: &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Database Configuration: Defines the location of the JSON document that defines which CData drivers the Query Federation Driver uses to connect to data. Typically: %AppData%/Roaming/CData/Query Federation Data Provider/DatabaseConfiguration.json &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Default Schema: Which of the database schemas defined in the Database Configuration document to use by default &lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The Database Configuration document contains JSON objects with fields for information about the drivers available to the Query Federation Driver. Each JSON object represents a database driver, and its name represents the schema name, like salesforcedb or mysqldb. The fields and values in the objects vary depending on whether you are using a Java-based Query Federation Driver (i.e. JDBC) or a C#-based Query Federation Driver (i.e. ADO.NET). &lt;/p&gt;

&lt;h2&gt;
  
  
  JDBC Schema Fields
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;driverClass: The name of the JDBC class for the driver (e.g. cdata.jdbc.salesforce.SalesforceDriver) &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;connectionUri: The JDBC URL to connect to data (e.g. jdbc:cdata:salesforce:User=&lt;a href="mailto:user@domain.com"&gt;user@domain.com&lt;/a&gt;;Password=password;) &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;driverPath: The path on disk to the location of the driver file (e.g. C:\Program Files\CData\CData JDBC Driver for Salesforce\lib\cdata.jdbc.salesforce.jar) &lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Sample DatabaseConfiguration.json File (JDBC)
&lt;/h2&gt;

&lt;p&gt;{ &lt;br&gt;
"salesforcedb": { &lt;br&gt;
"driverClass": "cdata.jdbc.salesforce.SalesforceDriver", &lt;br&gt;
"connectionUri": "jdbc:cdata:salesforce:User=&lt;a href="mailto:user@domain.com"&gt;user@domain.com&lt;/a&gt;;Password=password;", &lt;br&gt;
"driverPath": "/PATH/TO/cdata.jdbc.salesforce.jar" &lt;br&gt;
}, &lt;br&gt;
"exceldb": { &lt;br&gt;
"driverClass": "cdata.jdbc.excel.ExcelDriver", &lt;br&gt;
"connectionUri": "jdbc:cdata:excel:Excel File=/PATH/TO/MyExcelFile.xlsx;", &lt;br&gt;
"driverPath": "/PATH/TO/cdata.jdbc.excel.jar" &lt;br&gt;
}, &lt;br&gt;
"mysqldb": { &lt;br&gt;
"driverClass": "cdata.jdbc.mysql.MySQLDriver", &lt;br&gt;
"connectionUri": "jdbc:cdata:mysql:Server=localhost;Database=test;User=root;Password=pwd", &lt;br&gt;
"driverPath": "/PATH/TO/cdata.jdbc.mysql.jar" &lt;br&gt;
} &lt;br&gt;
} &lt;/p&gt;

&lt;h2&gt;
  
  
  ADO.NET Schema Fields
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;providerName: The name of the ADO.NET class for the Provider (e.g. System.Data.CData.Salesforce) &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;connectionString: The connection string used to connect to data (e.g. User=&lt;a href="mailto:user@domain.com"&gt;user@domain.com&lt;/a&gt;;Password=password;) &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;providerPath: The path on disk to the location of the driver file (e.g. C:\Program Files\CData\CData ADO.NET Driver for Salesforce\lib\System.Data.CData.Salesforce.dll) &lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Sample DatabaseConfiguration.json File (ADO)
&lt;/h2&gt;

&lt;pre&gt;&lt;code&gt;{
  "salesforcedb": {
    "providerName": "System.Data.CData.Salesforce",
    "connectionString": "User=user@domain.com;Password=password;",
    "providerPath": "/PATH/TO/System.Data.CData.Salesforce.dll"
  },
  "exceldb": {
    "providerName": "System.Data.CData.Excel",
    "connectionString": "Excel File=/PATH/TO/MyExcelFile.xlsx;",
    "providerPath": "/PATH/TO/System.Data.CData.Excel.dll"
  },
  "mysqldb": {
    "providerName": "System.Data.CData.MySQL",
    "connectionString": "Server=localhost;Database=test;User=root;Password=pwd",
    "providerPath": "/PATH/TO/System.Data.CData.MySQL.dll"
  }
}
&lt;/code&gt;&lt;/pre&gt;
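&lt;p&gt;Rather than hand-editing the file, the same configuration can be assembled programmatically. The following Python sketch (paths and credentials are placeholders) builds the ADO.NET-style configuration with the standard json module: &lt;/p&gt;

```python
import json

# Placeholder provider paths and credentials -- substitute your own values.
config = {
    "salesforcedb": {
        "providerName": "System.Data.CData.Salesforce",
        "connectionString": "User=user@domain.com;Password=password;",
        "providerPath": "/PATH/TO/System.Data.CData.Salesforce.dll",
    },
    "exceldb": {
        "providerName": "System.Data.CData.Excel",
        "connectionString": "Excel File=/PATH/TO/MyExcelFile.xlsx;",
        "providerPath": "/PATH/TO/System.Data.CData.Excel.dll",
    },
}

# Serialize to the DatabaseConfiguration.json content.
config_json = json.dumps(config, indent=2)
```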

&lt;h2&gt;
  
  
  Sample Query Federation Driver Connections
&lt;/h2&gt;

&lt;p&gt;To connect to the Query Federation Driver from a JDBC-capable tool or application, simply create the JDBC URL for the Query Federation Driver: &lt;/p&gt;

&lt;h3&gt;
  
  
  JDBC URL
&lt;/h3&gt;

&lt;p&gt;jdbc:queryfederation:DatabaseConfiguration=/PATH/TO/DatabaseConfiguration.json;DefaultSchema=salesforcedb &lt;/p&gt;

&lt;p&gt;For an ADO.NET-capable tool, create the connection string for the Query Federation Driver: &lt;/p&gt;

&lt;h3&gt;
  
  
  ADO.NET Connection String
&lt;/h3&gt;

&lt;p&gt;DatabaseConfiguration=/PATH/TO/DatabaseConfiguration.json;DefaultSchema=salesforcedb &lt;/p&gt;
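&lt;p&gt;Both connection formats are semicolon-delimited key-value pairs, so they are straightforward to build or inspect in code. A small illustrative Python helper (not part of the driver itself): &lt;/p&gt;

```python
def parse_connection_string(conn_str):
    """Split a semicolon-delimited connection string into a dict of properties."""
    props = {}
    for part in conn_str.split(";"):
        if "=" in part:
            key, _, value = part.partition("=")
            props[key.strip()] = value.strip()
    return props

props = parse_connection_string(
    "DatabaseConfiguration=/PATH/TO/DatabaseConfiguration.json;DefaultSchema=salesforcedb"
)
```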

&lt;p&gt;With the Query Federation Driver configured, we can connect to and begin working with data from multiple sources as if they were in a single database. Below, you can see the database design based on the above schema settings file, as displayed in DBVisualizer. &lt;/p&gt;

&lt;h3&gt;
  
  
  Query Federation Driver Schema Layout (in DBVisualizer)
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--EJbDQ8iN--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/2y9vjvhli0x8bl5vlscr.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--EJbDQ8iN--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/2y9vjvhli0x8bl5vlscr.png" alt="Image description" width="303" height="309"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h1&gt;
  
  
  More Information
&lt;/h1&gt;

&lt;p&gt;Working with data across sources, whether SaaS, Big Data, NoSQL, or even flat files, is invaluable to the data-driven organization. Developers can pick multiple data processing systems and access all of them through a single SQL-based interface. &lt;/p&gt;

&lt;p&gt;The &lt;a href="https://www.cdata.com/queryfederation/?utm_source=dev.to&amp;amp;utm_medium=kb-article-posting&amp;amp;utm_campaign=run-queries-across-many-data-sources-at-once-with-the-cdata-query-federation+Driver+"&gt;CData Query Federation Driver&lt;/a&gt; offers a single, uniform experience with all your data, no matter where your data is housed or generated. The CData Query Federation Driver provides a universal data access layer that simplifies application development and data access. Using the Query Federation Driver, analysts can query data across systems through a common driver interface — all with standard SQL. &lt;/p&gt;

</description>
      <category>automation</category>
      <category>integration</category>
      <category>connectivity</category>
      <category>database</category>
    </item>
    <item>
      <title>CData AWS Glue Connector for Salesforce Deployment Guide </title>
      <dc:creator>Jerod Johnson</dc:creator>
      <pubDate>Fri, 05 Nov 2021 14:55:18 +0000</pubDate>
      <link>https://forem.com/jerodimusprime/cdata-aws-glue-connector-for-salesforce-deployment-guide-3e3g</link>
      <guid>https://forem.com/jerodimusprime/cdata-aws-glue-connector-for-salesforce-deployment-guide-3e3g</guid>
      <description>&lt;p&gt;AWS Glue is an ETL service from Amazon that enables you to prepare and load your data for storage and analytics. With Glue Studio, you can create no-code and low-code ETL jobs that work with data through &lt;a href="https://www.cdata.com/solutions/etl/glue/?utm_source=dev.to&amp;amp;utm_medium=kb-article-posting&amp;amp;utm_campaign=cdata-aws-glue-connector-for-salesforce-deployment-guide+"&gt;CData Glue Connectors&lt;/a&gt;.  &lt;/p&gt;

&lt;p&gt;In this article, we walk through configuring the CData Glue Connector for Salesforce to create and run an AWS Glue job using live Salesforce data. &lt;/p&gt;

&lt;h1&gt;
  
  
  Typical Customer Deployment
&lt;/h1&gt;

&lt;p&gt;The CData AWS Glue Connector for Salesforce is a custom Glue Connector that makes it easy to transfer data from SaaS applications and custom data sources to your data lake in Amazon S3. Customers can subscribe to the Connector from the AWS Marketplace, use it in their AWS Glue jobs, and deploy those jobs to production Apache Spark applications that run on AWS Glue. &lt;/p&gt;

&lt;p&gt;The Glue Connector for Salesforce can be deployed to any region and can be subscribed to and deployed in an AWS Glue job in just a few minutes. &lt;/p&gt;

&lt;h1&gt;
  
  
  Prerequisites and Requirements
&lt;/h1&gt;

&lt;p&gt;There are no external operating system, database type, or storage requirements for using the CData AWS Glue Connector for Salesforce. Customers will need familiarity with AWS Glue, AWS Glue Studio, and Python/Apache Spark to best utilize the Glue Connector for Salesforce. Customers will need an AWS account and a subscription to the AWS Glue Connector. AWS Glue and Glue Studio jobs run on Amazon EC2 instances; the CData AWS Glue Connector is a container image that runs on Amazon ECS; and the sample Glue job in this walkthrough stores data in Amazon S3. &lt;/p&gt;

&lt;h1&gt;
  
  
  Architecture
&lt;/h1&gt;

&lt;p&gt;The CData AWS Glue Connector for Salesforce is an Amazon ECR image that is used by AWS Glue jobs to read and write data from the Salesforce service. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqhqix8ileniwyg21kqm7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqhqix8ileniwyg21kqm7.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h1&gt;
  
  
  Update Permissions for your IAM Role
&lt;/h1&gt;

&lt;p&gt;When you create the AWS Glue job, you specify an AWS Identity and Access Management (IAM) role for the job to use. The role must grant access to all resources used by the job, including Amazon S3 for any sources, targets, scripts, temporary directories, and AWS Glue Data Catalog objects. The role must also grant access to the CData Glue Connector for Salesforce from the AWS Glue Marketplace. &lt;/p&gt;

&lt;p&gt;NOTE: Do not use the root user for any deployments or operations. &lt;/p&gt;

&lt;p&gt;The following policies should be added to the IAM role for the AWS Glue job, at a minimum: &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;AWSGlueServiceRole (For accessing Glue Studio and Glue Jobs) &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;AmazonEC2ContainerRegistryReadOnly (For accessing the CData AWS Glue Connector for Salesforce) &lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If you will be accessing data found in Amazon S3, add: &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;AmazonS3FullAccess (For reading from and writing to Amazon S3) &lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;And lastly, if you will be using AWS Secrets Manager to store confidential connection properties (see more below), you will need to add an inline policy similar to the following, granting access to the specific secrets needed for the Glue Job: &lt;/p&gt;

&lt;pre&gt;&lt;code&gt;{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "secretsmanager:GetResourcePolicy",
        "secretsmanager:GetSecretValue",
        "secretsmanager:DescribeSecret",
        "secretsmanager:ListSecretVersionIds"
      ],
      "Resource": [
        "arn:aws:secretsmanager:us-west-2:111122223333:secret:aes128-1a2b3c",
        "arn:aws:secretsmanager:us-west-2:111122223333:secret:aes192-4D5e6F",
        "arn:aws:secretsmanager:us-west-2:111122223333:secret:aes256-7g8H9i"
      ]
    }
  ]
}
&lt;/code&gt;&lt;/pre&gt;
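&lt;p&gt;If you manage several secrets, the inline policy can be generated instead of written by hand. A Python sketch (the ARN below is the documentation placeholder, not a real resource): &lt;/p&gt;

```python
import json

def secrets_read_policy(secret_arns):
    """Build a least-privilege inline policy granting read access to the given secrets."""
    return {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": [
                "secretsmanager:GetResourcePolicy",
                "secretsmanager:GetSecretValue",
                "secretsmanager:DescribeSecret",
                "secretsmanager:ListSecretVersionIds",
            ],
            "Resource": list(secret_arns),
        }],
    }

policy = secrets_read_policy(
    ["arn:aws:secretsmanager:us-west-2:111122223333:secret:aes128-1a2b3c"]
)
policy_json = json.dumps(policy, indent=2)
```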

&lt;p&gt;For more information about granting access to AWS Glue Studio and Glue Jobs, see Setting up IAM Permissions for AWS Glue in the AWS Glue documentation. &lt;/p&gt;

&lt;p&gt;For more information about granting access to the Amazon S3 buckets, see Identity and access management in the Amazon Simple Storage Service Developer Guide. &lt;/p&gt;

&lt;p&gt;For more information on setting up access control for your secrets, see Authentication and Access Control for AWS Secrets Manager in the AWS Secrets Manager documentation and Limiting Access to Specific Secrets in the AWS Secrets Manager User Guide. The credential retrieved from AWS Secrets Manager (a string of key-value pairs) is used in the JDBC URL used by the CData Glue Connector when connecting to the data source, as shown above. &lt;/p&gt;

&lt;p&gt;For more general information on IAM and IAM best practices, refer to the AWS IAM page. &lt;/p&gt;

&lt;h1&gt;
  
  
  Collect Salesforce Connection Properties
&lt;/h1&gt;

&lt;p&gt;There are several authentication methods available for connecting to Salesforce: Login, OAuth, and SSO. The Login method requires you to have the username, password, and security token of the user. &lt;/p&gt;

&lt;p&gt;If you do not have access to the username and password or do not wish to require them, you can use OAuth authentication. &lt;/p&gt;

&lt;p&gt;SSO (single sign-on) can be used by setting the SSOProperties, SSOLoginUrl, and TokenUrl connection properties, which allow you to authenticate to an identity provider. See the "Getting Started" chapter in the help documentation for more information. &lt;/p&gt;

&lt;h2&gt;
  
  
  OAuth Verifier
&lt;/h2&gt;

&lt;p&gt;Salesforce supports connecting via OAuth. To connect using OAuth, you need to follow the Headless OAuth instructions in the Help documentation for the Connector and save the OAuth Verifier code. &lt;/p&gt;

&lt;p&gt;Make a note of the necessary properties for use with the CData Glue Connector for Salesforce. &lt;/p&gt;

&lt;h1&gt;
  
  
  (Optional) Store Salesforce Connection Properties Credentials in AWS Secrets Manager
&lt;/h1&gt;

&lt;p&gt;To safely store and use your connection properties, you can save them in AWS Secrets Manager. &lt;/p&gt;

&lt;p&gt;Note: You must host your AWS Glue ETL job and secret in the same region. Cross-region secret retrieval is not supported currently. &lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Sign in to the AWS Secrets Manager console. &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;On either the service introduction page or the Secrets list page, choose Store a new secret. &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;On the Store a new secret page, choose Other type of secret. This option means you must supply the structure and details of your secret. &lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;You can read more about the required properties to connect to Salesforce in the "Activate" section below. Once you know which properties you wish to store, create a key-value pair for each property. For example: &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Username: account user (for example, &lt;a href="mailto:user@example.com"&gt;user@example.com&lt;/a&gt;) &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Password: account password &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Add any additional private credential key-value pairs required by the CData Glue Connector for Salesforce &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;For more information about creating secrets, see Creating and Managing Secrets with AWS Secrets Manager in the AWS Secrets Manager User Guide. &lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;ol start="4"&gt;
&lt;li&gt;Record the secret name, which is used when configuring the connection in AWS Glue Studio. &lt;/li&gt;
&lt;/ol&gt;
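&lt;p&gt;The secret body itself is just a set of key-value pairs. If you script the creation (for example with boto3), the SecretString is typically assembled as JSON, as in this sketch (all values are placeholders): &lt;/p&gt;

```python
import json

# Placeholder credentials -- store your real values, never commit them.
secret_pairs = {
    "Username": "user@example.com",
    "Password": "account-password",
    "SecurityToken": "salesforce-security-token",
}
secret_string = json.dumps(secret_pairs)
# e.g. boto3.client("secretsmanager").create_secret(Name="my-salesforce-secret",
#                                                   SecretString=secret_string)
```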

&lt;h1&gt;
  
  
  Subscribe to the CData Glue Connector for Salesforce
&lt;/h1&gt;

&lt;p&gt;To work with the CData Glue Connector for Salesforce in AWS Glue Studio, you need to subscribe to the Connector from the AWS Marketplace. If you have already subscribed to the CData Glue Connector for Salesforce, you can jump to the next section. &lt;/p&gt;

&lt;h2&gt;
  
  
  Pricing Information
&lt;/h2&gt;

&lt;p&gt;Monthly Subscription Fee: $300.00 &lt;/p&gt;

&lt;p&gt;You are charged $300.00 once a month regardless of how many instances you launch after subscribing. &lt;/p&gt;

&lt;p&gt;Use of Local Zones or Wavelength infrastructure deployment may alter your final pricing. &lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Navigate to AWS Glue Studio &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Click Connectors &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Click AWS Marketplace &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Search for the Connector "CData Salesforce" &lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxvt6xcd85giqdvtq9rfy.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxvt6xcd85giqdvtq9rfy.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol start="5"&gt;
&lt;li&gt;&lt;p&gt;Click "Continue to Subscribe" &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Accept the terms for the Connector and wait for the request to be processed &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Click "Continue to Configuration" &lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fotlzo9tcmo6pz2rl7g0m.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fotlzo9tcmo6pz2rl7g0m.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h1&gt;
  
  
  Activate the CData Glue Connector for Salesforce in Glue Studio
&lt;/h1&gt;

&lt;p&gt;To use the CData Glue Connector for Salesforce in AWS Glue, you need to activate the subscribed connector in AWS Glue Studio. The activation process creates a connector object and connection in your AWS account. &lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Once you subscribe to the connector, a new Config tab shows up in the AWS Marketplace connector page. &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Choose the delivery options and click the "Continue to Launch" button. &lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F869lfnk4mv8w9q55e3uc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F869lfnk4mv8w9q55e3uc.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol start="3"&gt;
&lt;li&gt;On the launch tab, click "Usage Instructions" and follow the link that appears to create and configure the connection. &lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbrd8hby7v4zxyjvv6rti.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbrd8hby7v4zxyjvv6rti.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol start="4"&gt;
&lt;li&gt;&lt;p&gt;Under Connection access, select the JDBC URL format and configure the connection. Below you will find sample connection string(s) for the JDBC URL format(s) available for Salesforce. You can read more about authenticating with Salesforce in the Help documentation for the Connector. &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;If you opted to store properties in AWS Secrets Manager, leave the placeholder values (e.g. ${Property1}); otherwise, the values you enter in the AWS Glue Connection interface will appear in the (read-only) JDBC URL below the properties. &lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Username &amp;amp; Password
&lt;/h2&gt;

&lt;p&gt;jdbc:cdata:Salesforce:AuthScheme=BASIC;User=${Username};Password=${Password};SecurityToken=${SecurityToken} &lt;/p&gt;

&lt;h2&gt;
  
  
  OAuth
&lt;/h2&gt;

&lt;p&gt;jdbc:cdata:Salesforce:AuthScheme=OAuth;OAuthSettingsLocation=${OAuthSettingsLocation};InitiateOAuth=REFRESH;OAuthVerifier=${OAuthVerifier};OAuthClientID=${OAuthClientID};OAuthClientSecret=${OAuthClientSecret} &lt;/p&gt;

&lt;h2&gt;
  
  
  OneLogin
&lt;/h2&gt;

&lt;p&gt;jdbc:cdata:Salesforce:AuthScheme=OneLogin;SSOLoginUrl=${SSOLoginUrl};SSOTokenUrl=${SSOTokenUrl};SSOProperties='IdPName=OneLogin;APIKey=${OneLoginAPIKey}' &lt;/p&gt;

&lt;h2&gt;
  
  
  PingFederate
&lt;/h2&gt;

&lt;p&gt;jdbc:cdata:Salesforce:AuthScheme=PingFederate;SSOLoginUrl=${SSOLoginUrl};SSOTokenUrl=${SSOTokenUrl};SSOProperties='IdPName=PingFederate;RelyingParty=${SalesforceDomain}' &lt;/p&gt;

&lt;h2&gt;
  
  
  OKTA
&lt;/h2&gt;

&lt;p&gt;jdbc:cdata:Salesforce:AuthScheme=OKTA;SSOLoginUrl=${SSOLoginUrl};SSOTokenUrl=${SSOTokenUrl};SSOProperties='idpname=okta;domain=${OrgDomain};apiToken=${OktaAPIKey};' &lt;/p&gt;

&lt;h2&gt;
  
  
  ADFS
&lt;/h2&gt;

&lt;p&gt;jdbc:cdata:Salesforce:AuthScheme=ADFS;InitiateOAuth=REFRESH;OAuthClientID=${OAuthClientID};OAuthClientSecret=${OAuthClientSecret};SSOProperties='IDPName=AzureAD;Resource=${SalesforceApplicationIDURI};Tenant=${AzureADTenant};';SSOTokenUrl=${SSOTokenURL}; &lt;/p&gt;
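&lt;p&gt;Each URL format above uses ${...} placeholders, which happens to match Python's string.Template syntax, so filling one in can be sketched as follows (the credential values are placeholders): &lt;/p&gt;

```python
from string import Template

# The Username and Password JDBC URL format, with ${...} placeholders.
url_template = Template(
    "jdbc:cdata:Salesforce:AuthScheme=BASIC;"
    "User=${Username};Password=${Password};SecurityToken=${SecurityToken}"
)
jdbc_url = url_template.substitute(
    Username="user@example.com",
    Password="password",
    SecurityToken="token123",
)
```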

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvsra00eksj2ofp5rn42m.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvsra00eksj2ofp5rn42m.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol start="6"&gt;
&lt;li&gt;&lt;p&gt;(Optional): Enable logging for the Connector. &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;If you want to log the functionality from the CData Glue Connector for Salesforce you will need to append two properties to the JDBC URL: &lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Logfile: Set this to "STDOUT://" &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Verbosity: Set this to an integer (1-5) for varying depths of logging. 1 is the default, 3 is recommended for most debugging scenarios. &lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
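&lt;p&gt;Appending the logging properties amounts to adding two more semicolon-delimited pairs to the JDBC URL, as in this small sketch (the helper is illustrative, not a CData API): &lt;/p&gt;

```python
def with_logging(jdbc_url, verbosity=3):
    """Append CData logging properties (Logfile, Verbosity 1-5) to a JDBC URL."""
    if verbosity not in range(1, 6):
        raise ValueError("Verbosity must be between 1 and 5")
    if not jdbc_url.endswith(";"):
        jdbc_url += ";"
    return jdbc_url + "Logfile=STDOUT://;Verbosity=%d" % verbosity

url = with_logging("jdbc:cdata:Salesforce:AuthScheme=BASIC;User=u;Password=p")
```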

&lt;ol start="8"&gt;
&lt;li&gt;Configure the Network options and click "Create Connection." &lt;/li&gt;
&lt;/ol&gt;

&lt;h1&gt;
  
  
  Configure the Amazon Glue Job
&lt;/h1&gt;

&lt;p&gt;Once you have configured a Connection, you can build a Glue Job. &lt;/p&gt;

&lt;h2&gt;
  
  
  Create a Job that Uses the Connection
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;In Glue Studio, under "Your connections," select the connection you created &lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgwwcbzs0qri8i3sv3d1s.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgwwcbzs0qri8i3sv3d1s.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol start="2"&gt;
&lt;li&gt;&lt;p&gt;Click "Create job" &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The visual job editor appears. A new Source node, derived from the connection, is displayed on the Job graph. In the node details panel on the right, the Source Properties tab is selected for user input. &lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Configure the Source Node Properties
&lt;/h2&gt;

&lt;p&gt;You can configure the access options for your connection to the data source in the Source properties tab. Refer to the AWS Glue Studio documentation for more information. Here we provide a simple walk-through. &lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;In the visual job editor, make sure the Source node for your connector is selected. Choose the Source properties tab in the node details panel on the right, if it is not already selected. &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The Connection field is populated automatically with the name of the connection associated with the marketplace connector. &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Enter information about the data location in the data source. Provide either a source table name or a query to use to retrieve data from the data source. An example of a query is &lt;code&gt;SELECT Industry, AnnualRevenue FROM Account WHERE Name = 'GenePoint'&lt;/code&gt;. &lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1c2xroaiwnsgtj14il0e.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1c2xroaiwnsgtj14il0e.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol start="4"&gt;
&lt;li&gt;&lt;p&gt;To pass information from the data source to the transformation nodes, AWS Glue Studio must know the schema of the data. Select "Use Schema Builder" to specify the schema interactively. &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Configure the remaining optional fields as needed. You can configure the following: &lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Partitioning information - for parallelizing the read operations from the data source &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Data type mappings - to convert data types used in the source data to the data types supported by AWS Glue &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Filter predicate - to select a subset of the data from the data source &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;See "Use the Connection in a Glue job using Glue Studio" for more information about these options. &lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;ol start="6"&gt;
&lt;li&gt;You can view the schema generated by this node by choosing the Output schema tab in the node properties panel. &lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Edit, Save, &amp;amp; Run the Job
&lt;/h2&gt;

&lt;p&gt;Edit the job by adding and editing the nodes in the job graph. See Editing ETL jobs in AWS Glue Studio for more information. &lt;/p&gt;

&lt;p&gt;After you complete editing the job, enter the job properties. &lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Select the Job properties tab above the visual graph editor. &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Configure the following job properties when using custom connectors: &lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Name: Provide a job name. &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;IAM Role: Choose (or create) an IAM role with the necessary permissions, as described previously. &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Type: Choose "Spark." &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Glue version: Choose "Glue 2.0 - Supports spark 2.4, Scala 2, Python 3." &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Language: Choose "Python 3." &lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Use the default values for the other parameters. For more information about job parameters, see "Defining Job Properties" in the AWS Glue Developer Guide. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7c5uaxtjxonn6s5rvf6d.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7c5uaxtjxonn6s5rvf6d.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol start="3"&gt;
&lt;li&gt;&lt;p&gt;At the top of the page, choose "Save." &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;A green top banner appears with the message "Successfully created Job." &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;After you successfully save the job, you can choose "Run" to run the job. &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;To view the generated script for the job, choose the "Script" tab at the top of the visual editor. The "Job runs" tab shows the job run history for the job. For more information about job run details, see "View information for recent job runs." &lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F142g5ix4m93a27zsr6h4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F142g5ix4m93a27zsr6h4.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Review the Generated Script
&lt;/h2&gt;

&lt;p&gt;At any point in the job creation, you can click on the Script tab to review the script being created by Glue Studio. If you create a simple job to write Salesforce data to an Amazon S3 bucket, your script will look similar to the following: &lt;/p&gt;

&lt;p&gt;Sample Script &lt;/p&gt;

&lt;pre&gt;&lt;code&gt;import sys
from awsglue.transforms import *
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext
from awsglue.context import GlueContext
from awsglue.job import Job

## @params: [JOB_NAME]
args = getResolvedOptions(sys.argv, ['JOB_NAME'])
sc = SparkContext()
glueContext = GlueContext(sc)
spark = glueContext.spark_session
job = Job(glueContext)
job.init(args['JOB_NAME'], args)
## @type: DataSource
## @args: [connection_type = "marketplace.jdbc", connection_options = {"dbTable":"Account","connectionName":"cdata-[id]"}, transformation_ctx = "DataSource0"]
## @return: DataSource0
## @inputs: []
DataSource0 = glueContext.create_dynamic_frame.from_options(connection_type = "marketplace.jdbc", connection_options = {"dbTable":"Account","connectionName":"cdata-[id]"}, transformation_ctx = "DataSource0")
## @type: DataSink
## @args: [connection_type = "s3", format = "json", connection_options = {"path": "s3://PATH/TO/BUCKET/", "partitionKeys": []}, transformation_ctx = "DataSink0"]
## @return: DataSink0
## @inputs: [frame = DataSource0]
DataSink0 = glueContext.write_dynamic_frame.from_options(frame = DataSource0, connection_type = "s3", format = "json", connection_options = {"path": "s3://PATH/TO/BUCKET/", "partitionKeys": []}, transformation_ctx = "DataSink0")
job.commit()
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Using the CData Glue Connector for Salesforce in AWS Glue Studio, you can easily create ETL jobs to load Salesforce data into an S3 bucket or any other destination. You can also use the Glue Connector to add, update, or delete Salesforce data in your Glue Jobs. &lt;/p&gt;

&lt;h1&gt;
  
  
  Health Check
&lt;/h1&gt;

&lt;p&gt;The CData Glue Connector for Salesforce is used as part of AWS Glue jobs. As such, you can use CloudWatch and the built-in logging (see the optional logging instructions above) to monitor the health of the Glue job and the functionality of the Connector. &lt;/p&gt;

&lt;h1&gt;
  
  
  Backup &amp;amp; Recovery
&lt;/h1&gt;

&lt;p&gt;The CData Glue Connector for Salesforce is a deployed container. Backup and recovery consists of simply resubscribing to the Connector in the event of a failure or corrupted deployment. &lt;/p&gt;

&lt;h1&gt;
  
  
  Routine Maintenance
&lt;/h1&gt;

&lt;p&gt;There are several pieces of routine maintenance involved with the CData Glue Connectors: &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Rotating credentials &amp;amp; keys: Follow the guidance of your IT administration for the rotation of any credentials &amp;amp; keys stored in the AWS Secrets Manager &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Software patches &amp;amp; upgrades: The CData Glue Connector will only be patched for breaking errors. Upgrades will be released quarterly. Monitor the "Latest Version" in the AWS Marketplace listing and simply re-subscribe if the Latest Version is greater than your currently subscribed version. &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Managing license: Licenses can be managed (subscriptions discontinued as needed) in the AWS License Manager &lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h1&gt;
  
  
  Learn More
&lt;/h1&gt;

&lt;p&gt;Organizations continue to evolve and use a variety of data stores that best fit their applications and data requirements. &lt;/p&gt;

&lt;p&gt;The &lt;a href="https://www.cdata.com/solutions/etl/glue/?utm_source=dev.to&amp;amp;utm_medium=kb-article-posting&amp;amp;utm_campaign=cdata-aws-glue-connector-for-salesforce-deployment-guide+"&gt;CData AWS Glue Connectors&lt;/a&gt; simplify the process of accessing enterprise data in real-time. Connect with accounting, CRM, ERP, marketing automation, and cloud solutions from AWS Glue &amp;amp; AWS Glue Studio. &lt;/p&gt;

</description>
      <category>automation</category>
      <category>integration</category>
      <category>connectivity</category>
      <category>database</category>
    </item>
    <item>
      <title>Leading-Edge NoSQL Drivers for NoSQL Analytics &amp; Integration </title>
      <dc:creator>Jerod Johnson</dc:creator>
      <pubDate>Fri, 22 Oct 2021 13:10:16 +0000</pubDate>
      <link>https://forem.com/jerodimusprime/leading-edge-nosql-drivers-for-nosql-analytics-integration-4e4p</link>
      <guid>https://forem.com/jerodimusprime/leading-edge-nosql-drivers-for-nosql-analytics-integration-4e4p</guid>
      <description>&lt;p&gt;NoSQL databases, such as MongoDB, are an increasingly important component of enterprise data strategy, handling high-volume, scalable, and agile data management. As organizations adopt emerging NoSQL solutions, there is high demand to integrate these data sources with existing infrastructure and familiar tools. Standards-based data access facilitates your expansion into MongoDB by leveraging existing skills using standards like ODBC, JDBC, and ADO.NET. &lt;/p&gt;

&lt;p&gt;The &lt;a href="https://www.cdata.com/drivers/?utm_source=dev.to&amp;amp;utm_medium=kb-article-posting&amp;amp;utm_campaign=leading-edge-nosql-drivers-for-nosql-analytics-and-integration+"&gt;CData Drivers&lt;/a&gt; allow users to interact with NoSQL databases as if they were a relational database, leveraging the flexibility of NoSQL data storage with the familiarity and utility of SQL-92 queries. In this article, we highlight the specific features and capabilities common across all CData NoSQL drivers, using the &lt;a href="https://www.cdata.com/drivers/mongodb/?utm_source=dev.to&amp;amp;utm_medium=kb-article-posting&amp;amp;utm_campaign=leading-edge-nosql-drivers-for-nosql-analytics-and-integration+"&gt;MongoDB Drivers&lt;/a&gt; as an example. &lt;/p&gt;

&lt;p&gt;When working with NoSQL data, it is common for data structures to be returned as JSON objects, arrays, or any combination thereof, due to the flexibility of NoSQL. Although this is convenient for storing hierarchical data, it can be difficult to work with in BI, reporting, and ETL tools. The CData Drivers include several facilities for mapping or flattening these data structures to simplify integration with standard tooling. &lt;/p&gt;

&lt;h1&gt;
  
  
  Key Features
&lt;/h1&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Free-Form Queries: requesting exactly the data you want from your tables. &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Horizontal Flattening: drilling down into embedded data (sub-documents and arrays). &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Vertical Flattening: treating embedded arrays of sub-documents as separate tables. &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Custom Schema Definitions: defining how the drivers view the MongoDB data. &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Client-Side JSON Functions: manipulating the data returned to perform client-side aggregation and transformation. &lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Below are examples of these features. For reference, these examples are based on the following MongoDB sample document: &lt;/p&gt;

&lt;h1&gt;
  
  
  Sample Document
&lt;/h1&gt;

&lt;pre&gt;&lt;code&gt;{
  "_id" : ObjectId("5780046cd5a397806c3dab38"),
  "address" : {
    "building" : "1007",
    "coord" : [-73.856077, 40.848447],
    "street" : "Morris Park Ave",
    "zipcode" : "10462"
  },
  "borough" : "Bronx",
  "cuisine" : "Bakery",
  "grades" : [{
    "date" : ISODate("2014-03-03T00:00:00Z"),
    "grade" : "A",
    "score" : 2
  }, {
    "date" : ISODate("2013-09-11T00:00:00Z"),
    "grade" : "A",
    "score" : 6
  }, {
    "date" : ISODate("2013-01-24T00:00:00Z"),
    "grade" : "A",
    "score" : 10
  }, {
    "date" : ISODate("2011-11-23T00:00:00Z"),
    "grade" : "A",
    "score" : 9
  }, {
    "date" : ISODate("2011-03-10T00:00:00Z"),
    "grade" : "B",
    "score" : 14
  }],
  "name" : "Morris Park Bake Shop",
  "restaurant_id" : "30075445"
}&lt;/code&gt;&lt;/pre&gt;

&lt;h1&gt;
  
  
  Free-Form Queries
&lt;/h1&gt;

&lt;p&gt;The simplest way to access your MongoDB data is by issuing a query based on what you understand to be in the data source. This gives you the freedom to select exactly the data that you want, regardless of the existence of a strict table schema. Consider the sample document above. &lt;/p&gt;

&lt;p&gt;If you know that you want the _id, address.street, and grades[0] fields from each document, you can freely query that data from the database: &lt;/p&gt;

&lt;pre&gt;&lt;code&gt;SELECT
  [_id],
  [address.street],
  [grades.0]
FROM restaurants;&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;The driver returns the value for those fields, provided the documents contain data. If a field does not exist in a given document, the driver simply returns NULL for that field. This feature is useful whenever you have control over the SQL query being submitted to the CData driver; when you do not, the other options below apply. &lt;/p&gt;
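The lookup behavior described above (dotted paths into sub-documents, zero-based array indices, NULL for missing fields) can be sketched in plain Python. This is an illustration of the semantics only, not the driver's implementation; the "lookup" helper and the trimmed sample document are ours:

```python
def lookup(doc, path):
    """Resolve a dotted path such as 'address.street' or 'grades.0'
    against a nested document; return None (the SQL NULL) if any
    segment of the path is missing."""
    current = doc
    for part in path.split("."):
        if isinstance(current, list):
            try:
                current = current[int(part)]
            except (ValueError, IndexError):
                return None
        elif isinstance(current, dict) and part in current:
            current = current[part]
        else:
            return None
    return current

doc = {"_id": "5780046c", "address": {"street": "Morris Park Ave"},
       "grades": [{"grade": "A", "score": 2}]}

print(lookup(doc, "address.street"))  # Morris Park Ave
print(lookup(doc, "grades.0"))        # the first grades sub-document
print(lookup(doc, "address.city"))    # None, surfaced as NULL
```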

&lt;h2&gt;
  
  
  Horizontal Flattening
&lt;/h2&gt;

&lt;p&gt;The Flatten Arrays and Flatten Objects Connection Properties in the CData drivers allow you to control how objects and arrays in your MongoDB data are parsed to dynamically define the table schema for your MongoDB data. These properties allow you to configure how the data in a given document is horizontally flattened, creating a single table schema for all of the documents (including embedded data) in a given table. This is especially useful when you do not have granular control over the SQL queries being submitted. &lt;/p&gt;

&lt;p&gt;In the examples below, we display the expected results, based on various values for Flatten Arrays and Flatten Objects, for the following query: &lt;/p&gt;


&lt;pre&gt;&lt;code&gt;SELECT *
FROM restaurants&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;FlattenArrays=0;FlattenObjects=False; &lt;/p&gt;

&lt;p&gt;Without any horizontal flattening, the drivers discover seven columns for the table: _id, address, borough, cuisine, grades, name, and restaurant_id. Embedded data in the document is returned in a raw, aggregate form. &lt;/p&gt;

&lt;h3&gt;
  
  
  Result
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;_id : 5780046cd5a397806c3dab38 &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Address: { "building" : "1007", "coord" : [-73.856077, 40.848447], "street" : "Morris Park Ave", "zipcode" : "10462" } &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Borough: Bronx &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Cuisine: Bakery &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Grades: [{"date" : ISODate("2014-03-03T00:00:00Z"), "grade" : "A", "score" : 2 }, { "date" : ISODate("2013-09-11T00:00:00Z"), "grade" : "A", "score" : 6 }, { "date" : ISODate("2013-01-24T00:00:00Z"), "grade" : "A", "score" : 10 }, { "date" : ISODate("2011-11-23T00:00:00Z"), "grade" : "A", "score" : 9 }, { "date" : ISODate("2011-03-10T00:00:00Z"), "grade" : "B", "score" : 14}] &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Name: Morris Park Bake Shop &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Restaurant Id: 30075445 &lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;FlattenArrays=0;FlattenObjects=True; &lt;/p&gt;

&lt;p&gt;If you set Flatten Objects to "true", the number of columns expands as the embedded "address" sub-document is flattened. With Flatten Arrays still set to "0", any arrays or arrays of documents are returned as aggregates. &lt;/p&gt;

&lt;h3&gt;
  
  
  Result
&lt;/h3&gt;

&lt;pre&gt;&lt;code&gt;_id : 5780046cd5a397806c3dab38
address.building : 1007
address.coord : [-73.856077, 40.848447]
address.street : Morris Park Ave
address.zipcode : 10462
borough : Bronx
cuisine : Bakery
grades : [{"date" : ISODate("2014-03-03T00:00:00Z"), "grade" : "A", "score" : 2 }, ... ]
name : Morris Park Bake Shop
restaurant_id : 30075445&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;FlattenArrays=2;FlattenObjects=False; &lt;/p&gt;

&lt;p&gt;The Flatten Arrays property determines how many items in an embedded array of sub-documents to treat as individual columns. By setting Flatten Arrays to "2" (while leaving Flatten Objects = "false"), we extract the first two items in the embedded arrays of a document. &lt;/p&gt;

&lt;h3&gt;
  
  
  Result
&lt;/h3&gt;

&lt;pre&gt;&lt;code&gt;_id : 5780046cd5a397806c3dab38
address : { "building" : "1007", "coord" : [-73.856077, 40.848447], "street" : "Morris Park Ave", "zipcode" : "10462" }
borough : Bronx
cuisine : Bakery
grades.0 : { "date" : ISODate("2014-03-03T00:00:00Z"), "grade" : "A", "score" : 2 }
grades.1 : { "date" : ISODate("2013-09-11T00:00:00Z"), "grade" : "A", "score" : 6 }
name : Morris Park Bake Shop
restaurant_id : 30075445&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;FlattenArrays=1;FlattenObjects=True; &lt;/p&gt;

&lt;p&gt;With Flatten Arrays set to "1" and Flatten Objects = "true", we extract the first item in the embedded arrays of a document and flatten any embedded sub-documents. &lt;/p&gt;

&lt;h3&gt;
  
  
  Result
&lt;/h3&gt;

&lt;pre&gt;&lt;code&gt;_id : 57800...
address.building : 1007
address.coord.0 : -73.856077
address.street : Morris Park Ave
address.zipcode : 10462
borough : Bronx
cuisine : Bakery
grades.0.date : 2014-03-03...
grades.0.grade : A
grades.0.score : 2
name : Morris Park Bake Shop
restaurant_id : 30075445&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;These columns are available for use in INSERT and UPDATE statements as well, allowing you to add or update individual fields within sub-documents and arrays. &lt;/p&gt;
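The effect of the two properties can be sketched in plain Python: one row is built per document, sub-documents become dotted columns when objects are flattened, and the first N array items become indexed columns. This is an illustration of the column-naming behavior described above under stated assumptions, not the driver's code; the "flatten" helper is ours:

```python
def flatten(doc, flatten_objects=False, flatten_arrays=0, prefix=""):
    """Build one row (column name to value) from a nested document.
    flatten_objects expands sub-documents into dotted columns;
    flatten_arrays exposes the first N array items as indexed columns.
    Anything not flattened stays in raw, aggregate form."""
    row = {}
    for key, value in doc.items():
        name = prefix + key
        if isinstance(value, dict) and flatten_objects:
            row.update(flatten(value, flatten_objects, flatten_arrays,
                               name + "."))
        elif isinstance(value, list) and flatten_arrays:
            for i, item in enumerate(value[:flatten_arrays]):
                child = name + "." + str(i)
                if isinstance(item, dict) and flatten_objects:
                    row.update(flatten(item, flatten_objects,
                                       flatten_arrays, child + "."))
                else:
                    row[child] = item
        else:
            row[name] = value  # aggregate form (raw object or array)
    return row

doc = {"borough": "Bronx",
       "address": {"building": "1007", "coord": [-73.85, 40.84]},
       "grades": [{"grade": "A", "score": 2}, {"grade": "A", "score": 6}]}

# Mirrors FlattenArrays=1;FlattenObjects=True;
print(flatten(doc, flatten_objects=True, flatten_arrays=1))
```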

&lt;h1&gt;
  
  
  Vertical Flattening
&lt;/h1&gt;

&lt;p&gt;Documents in MongoDB frequently contain an array (or arrays) of sub-documents. While it is possible to drill down into these sub-documents using horizontal flattening (see above section), a common way of dealing with such arrays in NoSQL databases is to treat them as separate tables of data. This process is known as vertical flattening and doing so helps to build a relational model between the different 'types' of documents in a MongoDB instance. &lt;/p&gt;

&lt;p&gt;Considering the sample document above, you could retrieve the grades array as a separate table: &lt;/p&gt;

&lt;pre&gt;&lt;code&gt;SELECT *
FROM [restaurants.grades];&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;This query returns the following data set: &lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Date&lt;/th&gt;
&lt;th&gt;Grade&lt;/th&gt;
&lt;th&gt;Score&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;2014-03-03T00:00:00Z&lt;/td&gt;
&lt;td&gt;A&lt;/td&gt;
&lt;td&gt;2&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;2013-09-11T00:00:00Z&lt;/td&gt;
&lt;td&gt;A&lt;/td&gt;
&lt;td&gt;6&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;2013-01-24T00:00:00Z&lt;/td&gt;
&lt;td&gt;A&lt;/td&gt;
&lt;td&gt;10&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;2011-11-23T00:00:00Z&lt;/td&gt;
&lt;td&gt;A&lt;/td&gt;
&lt;td&gt;9&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;2011-03-10T00:00:00Z&lt;/td&gt;
&lt;td&gt;B&lt;/td&gt;
&lt;td&gt;14&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;You may also want to include information from the base restaurants table, which you can do with a join. Flattened arrays can only be joined with the root document, and the driver expects the left part of the join to be the array document you want to flatten vertically. Set the SupportEnhancedSQL connection property to false to join nested MongoDB documents. &lt;/p&gt;

&lt;pre&gt;&lt;code&gt;SELECT
  [restaurants].[_id], [restaurants.grades].*
FROM
  [restaurants.grades]
JOIN
  [restaurants]
WHERE
  [restaurants].name = 'Morris Park Bake Shop'&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;This query returns the following data set: &lt;/p&gt;

&lt;pre&gt;&lt;code&gt;_id : 5780046cd5a397806c3dab38
date : 2014-03-03T00:00:00Z
grade : A
score : 2

_id : 5780046cd5a397806c3dab38
date : 2013-09-11T00:00:00Z
grade : A
score : 6

_id : 5780046cd5a397806c3dab38
date : 2013-01-24T00:00:00Z
grade : A
score : 10

_id : 5780046cd5a397806c3dab38
date : 2011-11-23T00:00:00Z
grade : A
score : 9

_id : 5780046cd5a397806c3dab38
date : 2011-03-10T00:00:00Z
grade : B
score : 14&lt;/code&gt;&lt;/pre&gt;
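The mechanics of vertical flattening can be sketched in plain Python: each element of the embedded array becomes a row in a child table, carrying the parent key so the child rows can be joined back to the root document. This is an illustration of the concept, not the driver's code; the "vertical_flatten" helper and trimmed sample data are ours:

```python
def vertical_flatten(docs, array_field, parent_key="_id"):
    """Treat an embedded array of sub-documents as a separate child
    table, copying the parent key onto every child row so the rows
    can be joined back to the root document."""
    for doc in docs:
        for item in doc.get(array_field, []):
            row = dict(item)
            row[parent_key] = doc[parent_key]
            yield row

restaurants = [{"_id": "5780046c", "name": "Morris Park Bake Shop",
                "grades": [{"grade": "A", "score": 2},
                           {"grade": "B", "score": 14}]}]

for row in vertical_flatten(restaurants, "grades"):
    print(row)
```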

&lt;h1&gt;
  
  
  Custom Schema Definitions
&lt;/h1&gt;

&lt;p&gt;In order to treat your MongoDB data as a relational database, a table schema must exist. The schema can be created dynamically using connection properties, or you can define the schema yourself. This is another option for drilling down into your data when you do not have full control of the SQL queries being constructed. &lt;/p&gt;

&lt;p&gt;Given the document above, you could expose the _id (as the primary key), name, address.zipcode, and the first entry in the grades array by creating the following schema: &lt;/p&gt;













&lt;p&gt;&lt;em&gt;[Schema listing: an .rsd file whose rsb:script element contains an rsb:info block declaring the table's columns.]&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Once you have created your custom schema files, save them to disk using ".rsd" as the file extension (typically in the db folder at the installation location) and set the Location Connection Property to the same location. The driver will expose the defined tables in any third party tools and apps based on the title attribute of rsb:info. You can also query the data explicitly by using the title as the table name in a SQL query: &lt;/p&gt;

&lt;pre&gt;&lt;code&gt;SELECT
  id, latest_grade
FROM
  StaticRestaurants;&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;By defining the schema for your MongoDB data, you gain granular control over your data in a way that is not commonly supported in BI, reporting, and ETL tools, letting you leverage the data visualization, transformation, and extraction features of your favorite tools. Custom schemas also allow you to define different views of the data stored in a single "table", so you can take full advantage of the NoSQL nature of a MongoDB database, where a given table can contain documents whose relevant fields are differentiated by something like a type field. &lt;/p&gt;

&lt;h1&gt;
  
  
  Client-Side JSON Functions
&lt;/h1&gt;

&lt;p&gt;The documents in MongoDB data sets are essentially JSON structures. The CData Drivers support using standard SQL functions to work with JSON structures. These functions work by pulling in the MongoDB document and parsing the relevant information in the client. While there are many functions supported, we only highlight a few here. The examples below use the sample document, contained in the table 'Students': &lt;/p&gt;

&lt;pre&gt;&lt;code&gt;{
  id: 123456,
  ...,
  grades: [
    { "grade": "A", "score": 96 },
    { "grade": "A", "score": 94 },
    { "grade": "A", "score": 92 },
    { "grade": "A", "score": 97 },
    { "grade": "B", "score": 84 }
  ],
  ...
}&lt;/code&gt;&lt;/pre&gt;

&lt;h2&gt;
  
  
  JSON_EXTRACT
&lt;/h2&gt;

&lt;p&gt;The JSON_EXTRACT function can extract individual values from a JSON object. The following query returns the values shown below based on the JSON path passed as the second argument to the function: &lt;/p&gt;

&lt;pre&gt;&lt;code&gt;SELECT
  JSON_EXTRACT(grades,'[0].grade') AS Grade,
  JSON_EXTRACT(grades,'[0].score') AS Score
FROM Students;&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;This query returns the following data: &lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Grade&lt;/th&gt;
&lt;th&gt;Score&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;A&lt;/td&gt;
&lt;td&gt;96&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;
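The client-side parsing that JSON_EXTRACT performs can be sketched in plain Python: a JSON string is parsed and a path like '[0].grade' is walked segment by segment. This is an illustration of the behavior, not the driver's code; the "json_extract" helper is ours:

```python
import json

def json_extract(value, path):
    """Apply a path such as '[0].grade' to a JSON string (or an
    already-parsed structure) and return the value it points at."""
    data = json.loads(value) if isinstance(value, str) else value
    # Normalize '[0].grade' into path segments: ['0', 'grade'].
    for token in path.replace("]", "").replace("[", ".").strip(".").split("."):
        data = data[int(token)] if token.isdigit() else data[token]
    return data

grades = '[{"grade": "A", "score": 96}, {"grade": "A", "score": 94}]'
print(json_extract(grades, "[0].grade"))  # A
print(json_extract(grades, "[1].score"))  # 94
```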

&lt;h2&gt;
  
  
  JSON_SUM
&lt;/h2&gt;

&lt;p&gt;The JSON_SUM function returns the sum of the numeric values of a JSON array within a JSON object. The following query returns the total of the values specified by the JSON path passed as the second argument to the function: &lt;/p&gt;

&lt;pre&gt;&lt;code&gt;SELECT
  Name,
  JSON_SUM(grades,'[x].score') AS TotalScore
FROM Students;&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;This query returns the following data: &lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;TotalScore&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;463&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;
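The aggregation JSON_SUM performs can be sketched the same way: '[x]' ranges over every element of the array, and the remaining path picks the numeric field to total. An illustration of the semantics only; the "json_sum" helper is ours:

```python
import json

def json_sum(value, path):
    """Sum a numeric field across every element of a JSON array,
    where '[x]' in the path ranges over the whole array."""
    data = json.loads(value) if isinstance(value, str) else value
    field = path.split("].")[1]  # e.g. '[x].score' picks 'score'
    return sum(item[field] for item in data)

grades = ('[{"score": 96}, {"score": 94}, {"score": 92},'
          ' {"score": 97}, {"score": 84}]')
print(json_sum(grades, "[x].score"))  # 463
```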

&lt;h2&gt;
  
  
  DOCUMENT
&lt;/h2&gt;

&lt;p&gt;The DOCUMENT function can be used to retrieve the entire document as a JSON string. See the following query and its result as an example: &lt;/p&gt;

&lt;pre&gt;&lt;code&gt;SELECT
  DOCUMENT(*)
FROM Students;&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;The query above returns each document in the table as a single string. &lt;/p&gt;

&lt;p&gt;DOCUMENT &lt;/p&gt;

&lt;p&gt;{ "_id" : ObjectId("5780046cd5a397806c3dab38"), "address" : { "building" : "1007", "coord" : [-73.856077, 40.848447], "street" : "Morris Park Ave", "zipcode" : "10462" }, "borough" : "Bronx", "cuisine" : "Bakery", "grades" : [{ "date" : ISODate("2014-03-03T00:00:00Z"), "grade" : "A", "score" : 2 }, { "date" : ISODate("2013-09-11T00:00:00Z"), "grade" : "A", "score" : 6 }, { "date" : ISODate("2013-01-24T00:00:00Z"), "grade" : "A", "score" : 10 }, { "date" : ISODate("2011-11-23T00:00:00Z"), "grade" : "A", "score" : 9 }, { "date" : ISODate("2011-03-10T00:00:00Z"), "grade" : "B", "score" : 14 }], "name" : "Morris Park Bake Shop", "restaurant_id" : "30075445" } &lt;/p&gt;

&lt;h1&gt;
  
  
  Get Started
&lt;/h1&gt;

&lt;p&gt;Data is driving innovation and growth for business, but only for businesses prepared to handle data effectively. &lt;/p&gt;

&lt;p&gt;While relational databases have their place, today's data complexity, volume, and demand call for the implementation of NoSQL databases for certain data applications. Standards-based drivers provide a universal bridge between this unwieldy NoSQL data and the world of BI &amp;amp; Analytics.  &lt;/p&gt;

&lt;p&gt;The &lt;a href="https://www.cdata.com/drivers/?cat=clouddb&amp;amp;utm_source=dev.to&amp;amp;utm_medium=kb-article-posting&amp;amp;utm_campaign=leading-edge-nosql-drivers-for-nosql-analytics-and-integration+"&gt;NoSQL Drivers&lt;/a&gt; offer tremendous flexibility in working with NoSQL databases. Users can connect their BI, Analytics, ETL, &amp;amp; custom applications with any SaaS, NoSQL, or Big Data source. &lt;/p&gt;

</description>
      <category>automation</category>
      <category>integration</category>
      <category>connectivity</category>
      <category>database</category>
    </item>
    <item>
      <title>SQL Access to Redis Data </title>
      <dc:creator>Jerod Johnson</dc:creator>
      <pubDate>Thu, 21 Oct 2021 13:49:32 +0000</pubDate>
      <link>https://forem.com/jerodimusprime/sql-access-to-redis-data-4g00</link>
      <guid>https://forem.com/jerodimusprime/sql-access-to-redis-data-4g00</guid>
      <description>&lt;p&gt;Traditionally, importing key-value data stores such as Redis into BI, reporting, and ETL tools is problematic, if not impossible. With the &lt;a href="https://www.cdata.com/drivers/?utm_source=dev.to&amp;amp;utm_medium=kb-article-posting&amp;amp;utm_campaign=sql-access-to-redis-data"&gt;CData Drivers&lt;/a&gt;, there are several different ways to building a traditional database model to easily work with Redis data in the BI, reporting, ETL, and custom applications of your choice. &lt;/p&gt;

&lt;p&gt;In this article, we discuss the way that CData standards-based drivers handle data stored in Redis and Redis Enterprise. &lt;/p&gt;

&lt;h2&gt;
  
  
  Redis Data Interpretation Approaches
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Using a Redis Key as a Table Name &lt;/li&gt;
&lt;li&gt;Using a Key Pattern as a Table Name &lt;/li&gt;
&lt;li&gt;Using a Key Pattern in the SQL Query WHERE Clause &lt;/li&gt;
&lt;li&gt;Using Connection Properties: DefineTables and TablePattern &lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The first three options are useful for working with Redis key-value pairs directly, just as if you were working in a traditional Redis environment. The option for configuring connection properties results in related Redis key-value pairs being pivoted into a more traditional data table model. Each approach is detailed below. &lt;/p&gt;

&lt;h1&gt;
  
  
  Redis Data Types
&lt;/h1&gt;

&lt;p&gt;Redis data is stored in key-value pairs, but rather than being limited to simple string values, Redis can assign any of several data structures to a given key. Below is a list of the supported data structures (think data types) that can be found in Redis: &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Binary-safe strings. &lt;/li&gt;
&lt;li&gt;Lists: collections of string elements sorted according to the order of insertion. They are basically linked lists. &lt;/li&gt;
&lt;li&gt;Sets: collections of unique, unsorted string elements. &lt;/li&gt;
&lt;li&gt;Sorted sets (ZSets): similar to sets, but every string element is associated with a floating-point value, called the score. The elements are always taken sorted by their score, so unlike sets it is possible to retrieve a range of elements (for example, the top 10 or the bottom 10). &lt;/li&gt;
&lt;li&gt;Hashes: maps composed of fields associated with values. Both the field and the value are strings. These are very similar to Ruby hashes or Python dictionaries. &lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This article will discuss how the CData Software Drivers for Redis interact with the above Redis types and includes sample SQL queries for using the drivers to work with Redis data. &lt;/p&gt;

&lt;h1&gt;
  
  
  Using a Redis Key as a Table Name
&lt;/h1&gt;

&lt;p&gt;The most direct way to work with Redis data with our drivers is to use a Redis key as a table name. Doing so will return a small table with five columns: RedisKey, Value, ValueIndex, RedisType, and ValueScore. The values in these columns are dependent upon the Redis data type associated with the Redis key being used as a table name. &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;RedisKey - the Redis key &lt;/li&gt;
&lt;li&gt;Value - the string value associated with the RedisKey and ValueIndex &lt;/li&gt;
&lt;li&gt;ValueIndex - varies by type: 1 for strings; the one-based index for sets, lists, and sorted sets; or the associated field name for hashes &lt;/li&gt;
&lt;li&gt;RedisType - the Redis data type &lt;/li&gt;
&lt;li&gt;ValueScore - varies by type: NULL for strings, lists, sets, and hashes; or the associated score for sorted sets &lt;/li&gt;
&lt;/ul&gt;
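The five-column mapping above can be sketched in plain Python: each element of a Redis value becomes one row, with ValueIndex and ValueScore filled in according to the data type. This is an illustration of the mapping described in this article, not the driver's code; the "rows_for_key" helper is ours:

```python
def rows_for_key(key, value, redis_type):
    """Map one Redis key-value pair onto the five-column table:
    one row per element, keyed by ValueIndex."""
    if redis_type == "string":
        items = [(1, value, None)]
    elif redis_type in ("list", "set"):
        items = [(i + 1, v, None) for i, v in enumerate(value)]
    elif redis_type == "zset":
        items = [(i + 1, member, score)
                 for i, (member, score) in enumerate(value)]
    elif redis_type == "hash":
        items = [(field, v, None) for field, v in value.items()]
    else:
        raise ValueError("unsupported type: " + redis_type)
    return [{"RedisKey": key, "Value": v, "ValueIndex": idx,
             "RedisType": redis_type, "ValueScore": score}
            for idx, v, score in items]

for row in rows_for_key("mylist", ["A", "B", "C"], "list"):
    print(row)
```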

&lt;p&gt;Below you will find sample data, queries, and results based on Redis data types. &lt;/p&gt;

&lt;h3&gt;
  
  
  Redis Strings
&lt;/h3&gt;

&lt;p&gt;Create a string in Redis: &lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;set mykey somevalue&lt;br&gt;
OK&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;If you perform a SELECT query on mykey, the driver returns the following: &lt;/p&gt;

&lt;p&gt;SELECT * FROM mykey &lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--iP-kY1yb--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/1zrkddfe1txtfe8xh61g.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--iP-kY1yb--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/1zrkddfe1txtfe8xh61g.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Redis Lists
&lt;/h3&gt;

&lt;p&gt;Create a list in Redis: &lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;rpush mylist A B C&lt;br&gt;
(integer) 3&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;If you perform a SELECT query on mylist, the driver returns the following: &lt;/p&gt;

&lt;p&gt;SELECT * FROM mylist &lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--o5KNtIvC--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/65gy5972ycmspnn0g0v5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--o5KNtIvC--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/65gy5972ycmspnn0g0v5.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Redis Sets
&lt;/h3&gt;

&lt;p&gt;Create a set in Redis: &lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;sadd myset 1 2 3&lt;br&gt;
(integer) 3&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;If you perform a SELECT query on myset, the driver returns the following (note that Redis can return the elements of a set in any order): &lt;/p&gt;

&lt;p&gt;SELECT * FROM myset &lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--CTlo7g-6--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/oa65aayc3vajudcl6tld.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--CTlo7g-6--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/oa65aayc3vajudcl6tld.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Redis Sorted Sets
&lt;/h3&gt;

&lt;p&gt;Create a ZSet (sorted set) in Redis: &lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;zadd hackers 1940 "Alan Kay" 1957 "Sophie Wilson" 1953 "Richard Stallman" 1949 "Anita Borg"&lt;br&gt;
(integer) 4&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;If you perform a SELECT query on hackers, the driver returns the following: &lt;/p&gt;

&lt;p&gt;SELECT * FROM hackers &lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--h7NdI4LT--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/xr5w7skupb0r91zdjk0k.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--h7NdI4LT--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/xr5w7skupb0r91zdjk0k.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Redis Hashes
&lt;/h3&gt;

&lt;p&gt;Create a hash in Redis: &lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;hmset user:1000 username antirez birthyear 1977 verified 1&lt;br&gt;
OK&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;If you perform a SELECT query on user:1000, the driver returns the following: &lt;/p&gt;

&lt;p&gt;SELECT * FROM user:1000 &lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Ao-iqHOw--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/7exqmc0rw4lo8hyikwum.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Ao-iqHOw--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/7exqmc0rw4lo8hyikwum.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h1&gt;
  
  
  Using a Key Pattern as a Table Name
&lt;/h1&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--IwqufAL9--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ren1msacnbn95urykvho.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--IwqufAL9--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ren1msacnbn95urykvho.png" alt="Image description"&gt;&lt;/a&gt;&lt;br&gt;
 If you have several Redis keys that match the same pattern (e.g., "user:*"), then you can use that pattern as a table name. This allows you to retrieve multiple Redis keys at once. Start by adding several keys to Redis that match a pattern: &lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;hmset user:1000 name "John Smith" email "john.smith@example.com" password "s3cret"&lt;br&gt;
OK&lt;br&gt;
hmset user:1001 name "Mary Jones" password "hidden" email "mjones@example.com"&lt;br&gt;
OK&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;If you use user:* as the table name, the driver will retrieve all Redis key-value pairs whose keys match the pattern. You can see the expected results below: &lt;/p&gt;

&lt;p&gt;SELECT * FROM [user:*] &lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--S6QpmxRv--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/304238233uun3xnfy97e.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--S6QpmxRv--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/304238233uun3xnfy97e.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h1&gt;
  
  
  Using a Key Pattern in a SQL Query WHERE Clause
&lt;/h1&gt;

&lt;p&gt;If you have several Redis keys that match a pattern and you have more granular control over the SQL query, you can use a key pattern (e.g., "user:*") as the criteria for the key column in a WHERE clause. The results are the same as using a key pattern as the table name, allowing you to retrieve multiple Redis keys at once. Start by adding several keys that match a pattern: &lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;hmset user:1000 name "John Smith" email "john.smith@example.com" password "s3cret"&lt;br&gt;
OK&lt;br&gt;
hmset user:1001 name "Mary Jones" password "hidden" email "mjones@example.com"&lt;br&gt;
OK&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;If you use a table pattern as the criteria for the key column in the WHERE clause then you need to use "Redis" as the table name. The driver will retrieve all Redis key-value pairs whose keys match the pattern. You can see the expected results below: &lt;/p&gt;

&lt;p&gt;SELECT * FROM Redis WHERE key = 'user:*' &lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--8sK4nOy1--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/zirz7g8eqby3r5vwew85.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--8sK4nOy1--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/zirz7g8eqby3r5vwew85.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h1&gt;
  
  
  Using Connection Properties
&lt;/h1&gt;

&lt;p&gt;When it comes to connecting to data in third party tools and apps using drivers, you often have little control over how queries are formed and sent to the drivers. In these instances, it makes sense to configure the driver directly, using connection properties, to shape how the data is interpreted. For the Redis drivers, these two properties are DefineTables and TablePattern. &lt;/p&gt;

&lt;p&gt;For these sections, we will create the following hashes in our Redis instance: &lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;hmset user:1000 name "John Smith" email "john.smith@example.com" password "s3cret"&lt;br&gt;
OK&lt;br&gt;
hmset user:1001 name "Mary Jones" email "mjones@example.com" password "hidden"&lt;br&gt;
OK&lt;br&gt;
hmset user:1002 name "Sally Brown" email "sally.b@example.com" password "p4ssw0rd"&lt;br&gt;
OK&lt;br&gt;
hmset customer:200 name "John Smith" account "123456" balance "543.21"&lt;br&gt;
OK&lt;br&gt;
hmset customer:201 name "Mary Jones" account "123457" balance "654.32"&lt;br&gt;
OK&lt;br&gt;
hmset customer:202 name "Sally Brown" account "123458" balance "765.43"&lt;br&gt;
OK&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;When these properties are used to define the driver's behavior, the Redis keys will be pivoted, so that each Redis key that matches the pattern in the definition is represented as a single row in the table. Each value associated with that Redis key becomes a column for the table. While this works differently for each Redis data type, this article will focus on hashes. &lt;/p&gt;

&lt;h3&gt;
  
  
  DefineTables Property
&lt;/h3&gt;

&lt;p&gt;The DefineTables connection property allows you to explicitly define the names of the tables that will appear in various tools and apps by aggregating all of the Redis keys that match a given pattern. To do so, set the property to a comma-separated string of name-value pairs, where the name is the name of the table and the value is the pattern used to assign Redis keys to that table. &lt;/p&gt;

&lt;p&gt;&lt;code&gt;DefineTables=Users=user:*,Customers=customer:*;&lt;/code&gt; &lt;/p&gt;

&lt;p&gt;With the property set as above, the Users and Customers tables will be exposed in the tool or app you are using. If you were to query the tables, you would see the following results: &lt;/p&gt;

&lt;p&gt;&lt;code&gt;SELECT * FROM Users&lt;/code&gt; &lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--lEMCp9M4--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/pz1r7n45onchwranmtgy.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--lEMCp9M4--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/pz1r7n45onchwranmtgy.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;code&gt;SELECT * FROM Customers&lt;/code&gt; &lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--vB0pNGJp--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/vfvqlklm0q58xl1h395r.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--vB0pNGJp--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/vfvqlklm0q58xl1h395r.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;
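&lt;p&gt;To make the DefineTables behavior concrete, here is a minimal Python sketch (not CData code; the function names and the &lt;code&gt;RedisKey&lt;/code&gt; column name are illustrative assumptions) that parses the property string and pivots matching hashes into rows, one column per hash field:&lt;/p&gt;

```python
from fnmatch import fnmatch

def parse_define_tables(prop):
    # 'Users=user:*,Customers=customer:*;' -> {'Users': 'user:*', ...}
    pairs = (item.split("=", 1) for item in prop.rstrip(";").split(","))
    return {name: pattern for name, pattern in pairs}

def pivot_hashes(hashes, pattern):
    # Each matching Redis hash becomes one row; its fields become columns.
    return [dict(RedisKey=k, **fields) for k, fields in hashes.items()
            if fnmatch(k, pattern)]

tables = parse_define_tables("Users=user:*,Customers=customer:*;")
hashes = {
    "user:1000": {"name": "John Smith", "email": "john.smith@example.com"},
    "customer:200": {"name": "John Smith", "account": "123456"},
}
users = pivot_hashes(hashes, tables["Users"])  # one row, pivoted from user:1000
```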

&lt;h3&gt;
  
  
  TablePattern Property
&lt;/h3&gt;

&lt;p&gt;The TablePattern connection property lets you specify the separator(s) the driver uses to derive table names. For the Redis keys described above, user and customer are exposed as tables when the separator is ":", since the unique part of each Redis key appears after the ":". If you need to structure the tables differently, or drill down further, you can include multiple instances of the separator. Set the property to a pattern that includes the separator(s) needed to define your table structure. The default value is shown below. &lt;/p&gt;

&lt;p&gt;&lt;code&gt;TablePattern=*:*;&lt;/code&gt; &lt;/p&gt;

&lt;p&gt;With the property set as above, the tables user and customer will be exposed in the tool or app you are using. If you were to query the tables, you would see the following results: &lt;/p&gt;

&lt;p&gt;&lt;code&gt;SELECT * FROM user&lt;/code&gt; &lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--ht9gtyjB--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/4lf9js3049hw5px32bty.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--ht9gtyjB--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/4lf9js3049hw5px32bty.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;code&gt;SELECT * FROM customer&lt;/code&gt;&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--EEGsCC5h--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/7mx3es9vgoa5jdbbj0lz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--EEGsCC5h--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/7mx3es9vgoa5jdbbj0lz.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;
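&lt;p&gt;The default TablePattern of &lt;code&gt;*:*&lt;/code&gt; amounts to splitting each key at the first ":" and grouping by the prefix. A hypothetical Python sketch of that grouping (again a model, not the driver's actual implementation):&lt;/p&gt;

```python
def tables_from_keys(keys, separator=":"):
    # Group keys by the text before the first separator; that prefix
    # becomes the table name, the remainder identifies the row.
    tables = {}
    for key in keys:
        table, _, row_id = key.partition(separator)
        tables.setdefault(table, []).append(row_id)
    return tables

keys = ["user:1000", "user:1001", "customer:200", "customer:201"]
grouped = tables_from_keys(keys)  # {'user': [...], 'customer': [...]}
```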

&lt;h1&gt;
  
  
  More Information
&lt;/h1&gt;

&lt;p&gt;Modern data-driven applications require modern solutions to quickly process a massive volume, variety, and velocity of data and automate decision making. &lt;/p&gt;

&lt;p&gt;With the &lt;a href="https://www.cdata.com/drivers/redis/?utm_source=dev.to&amp;amp;utm_medium=kb-article-posting&amp;amp;utm_campaign=sql-access-to-redis-data"&gt;CData Software Drivers for Redis&lt;/a&gt;, users can connect to live data cached in Redis from BI, analytics, and reporting tools through bi-directional data drivers. &lt;/p&gt;

</description>
      <category>automation</category>
      <category>integration</category>
      <category>connectivity</category>
      <category>database</category>
    </item>
    <item>
      <title>Connecting to Amazon DocumentDB with MongoDB Drivers </title>
      <dc:creator>Jerod Johnson</dc:creator>
      <pubDate>Wed, 20 Oct 2021 09:55:22 +0000</pubDate>
      <link>https://forem.com/jerodimusprime/connecting-to-amazon-documentdb-with-mongodb-drivers-14dp</link>
      <guid>https://forem.com/jerodimusprime/connecting-to-amazon-documentdb-with-mongodb-drivers-14dp</guid>
      <description>&lt;p&gt;With Amazon DocumentDB developers and DBAs can balance availability, read scalability, and latency with five pre-configured consistency levels. The data in an Amazon DocumentDB cluster is accessible just like a MongoDB 3.6 cluster.  &lt;/p&gt;

&lt;p&gt;The &lt;a href="https://www.cdata.com/drivers/mongodb/odbc/?utm_source=dev.to&amp;amp;utm_medium=kb-article-posting&amp;amp;utm_campaign=connecting-to-amazon-documentdb-with-mongodb-drivers"&gt;CData Software Drivers for MongoDB&lt;/a&gt; provide a SQL-like interface to MongoDB data through standards-based drivers. This allows users to see their data in the BI, reporting, and ETL tools of their choice, as well as in custom tools and applications. &lt;/p&gt;

&lt;p&gt;This article walks through configuring an Amazon DocumentDB cluster and connecting to the cluster through any of the available CData Software MongoDB drivers. &lt;/p&gt;

&lt;h1&gt;
  
  
  Creating a DocumentDB Cluster
&lt;/h1&gt;

&lt;p&gt;To create a DocumentDB cluster, log in to your AWS Console, select Amazon DocumentDB, click Clusters and Create to add a new cluster. Once you create the cluster, you can connect to the cluster using the MongoDB client from an EC2 instance to populate the cluster. Once you have data in the Amazon DocumentDB cluster, you can connect using the CData MongoDB Drivers. &lt;/p&gt;

&lt;h1&gt;
  
  
  Connecting to an Amazon DocumentDB Cluster
&lt;/h1&gt;

&lt;p&gt;With your cluster created, make note of the cluster details (found by clicking on the cluster name on the Clusters page). &lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--wtZCN5dz--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/82szjw2dn860aaegpu87.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--wtZCN5dz--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/82szjw2dn860aaegpu87.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The table below describes the mapping from the DocumentDB cluster details to the CData Driver properties: &lt;/p&gt;

&lt;table&gt;
&lt;thead&gt;&lt;tr&gt;&lt;th&gt;CData Driver Property&lt;/th&gt;&lt;th&gt;Amazon DocumentDB Property&lt;/th&gt;&lt;/tr&gt;&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;&lt;td&gt;Server&lt;/td&gt;&lt;td&gt;Cluster endpoint&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;Port&lt;/td&gt;&lt;td&gt;Port&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;User&lt;/td&gt;&lt;td&gt;Master username&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;Password&lt;/td&gt;&lt;td&gt;Master password (set when you create the cluster)&lt;/td&gt;&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;

&lt;p&gt;In addition to the cluster details, you need to set a few additional properties in the CData drivers to connect to an Amazon DocumentDB cluster: &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;UseSSL:&lt;/strong&gt; True &lt;br&gt;
(Amazon DocumentDB Clusters have SSL enabled by default and this cannot be changed.) &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;SSLServerCert:&lt;/strong&gt; * &lt;br&gt;
(This automatically accepts the certificate presented by the DocumentDB Cluster, which is already hosted within the same region and security group as the EC2 instance.) &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Other:&lt;/strong&gt; "UseFindAPI=true" &lt;br&gt;
(This is set to ensure compatibility with the DocumentDB version of the MongoDB API.) &lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;With all of the properties, your standard connection string would look like the following: &lt;/p&gt;

&lt;p&gt;&lt;code&gt;Server=.cluster-xxxxxxxxxxxx..docdb.amazonaws.com; Port=27017; User=myMaster; Password=myPassword; UseSSL=True; SSLServerCert=*; Other='UseFindAPI=True';&lt;/code&gt; &lt;/p&gt;

&lt;p&gt;Once the connection properties are set, you can quickly connect to your Amazon DocumentDB cluster using any of the MongoDB Drivers. &lt;/p&gt;
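&lt;p&gt;If you assemble the string programmatically, the format is simply semicolon-delimited name=value pairs. A small Python sketch; the endpoint and credentials below are placeholders, not real values:&lt;/p&gt;

```python
def build_connection_string(props):
    # Join driver properties into 'Name=Value;' pairs.
    return "".join(f"{name}={value};" for name, value in props.items())

props = {
    "Server": "my-cluster-endpoint",  # placeholder; use your cluster endpoint
    "Port": 27017,
    "User": "myMaster",
    "Password": "myPassword",
    "UseSSL": "True",
    "SSLServerCert": "*",
    "Other": "'UseFindAPI=True'",
}
conn_str = build_connection_string(props)
```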

&lt;h1&gt;
  
  
  More Information
&lt;/h1&gt;

&lt;p&gt;When developing modern applications, document databases like MongoDB are a popular choice for storing semi-structured data for use cases like product catalogs, user profiles, mobile applications, and content management. &lt;/p&gt;

&lt;p&gt;The &lt;a href="https://www.cdata.com/drivers/mongodb/odbc/?utm_source=dev.to&amp;amp;utm_medium=kb-article-posting&amp;amp;utm_campaign=connecting-to-amazon-documentdb-with-mongodb-drivers"&gt;MongoDB ODBC Driver&lt;/a&gt; is a powerful tool that allows you to connect with live MongoDB document databases, directly from any applications that support ODBC connectivity. Access MongoDB like you would a database - read, write, and update through a standard ODBC Driver interface. &lt;/p&gt;

</description>
      <category>automation</category>
      <category>integration</category>
      <category>connectivity</category>
      <category>database</category>
    </item>
    <item>
      <title>Perform API Operations Using Stored Procedures in CData SSIS Components </title>
      <dc:creator>Jerod Johnson</dc:creator>
      <pubDate>Tue, 19 Oct 2021 14:18:42 +0000</pubDate>
      <link>https://forem.com/jerodimusprime/perform-api-operations-using-stored-procedures-in-cdata-ssis-components-4e1o</link>
      <guid>https://forem.com/jerodimusprime/perform-api-operations-using-stored-procedures-in-cdata-ssis-components-4e1o</guid>
      <description>&lt;p&gt;APIs require up-to-date and strong documentation, increased security levels, comprehensive testing, routine versioning, and high-reliability to be appropriately utilized.  &lt;/p&gt;

&lt;p&gt;With the &lt;a href="https://www.cdata.com/ssis/?utm_source=dev.to&amp;amp;utm_medium=kb-article-posting&amp;amp;utm_campaign=perform-api-operations-using-stored-procedures-in-cdata-ssis-components+"&gt;CData SSIS Data Flow components&lt;/a&gt; you can import and export data from various data sources. However, in some cases the type of operation you need does not fit the SSIS Source or the SSIS Destination model. &lt;/p&gt;

&lt;p&gt;The CData SSIS Components surface additional functionality available in the underlying API as stored procedures. This article uses the CData SSIS Components for SharePoint to execute the DownloadDocument stored procedure, but you can follow the same process to execute the stored procedures of any CData SSIS Component to perform other actions like retrieving OAuth credentials, uploading attachments to emails, authoring reports from CRM applications, and more. &lt;/p&gt;

&lt;h1&gt;
  
  
  Connect to SharePoint from an SSIS Source Component
&lt;/h1&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Open Visual Studio and create a new Integration Services Project. &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Add a new Data Flow Task to the Control Flow screen and open the Data Flow Task. &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Add a CData SharePoint Source to the Data Flow Task. &lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--w98dZ3o_--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/xnoi28e06eyywqhewfbb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--w98dZ3o_--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/xnoi28e06eyywqhewfbb.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;In the CData SharePoint Source, add a new Connection Manager and add your credentials for the SharePoint site. For this demo, you will want to set 'Show Hidden Columns' to True, as the sample code uses a hidden column. &lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--nztQC85E--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/j101bm10bxv6hk9e21aj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--nztQC85E--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/j101bm10bxv6hk9e21aj.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;In the Source Component editor, select SQL Statement to configure the Source Component to execute a stored procedure. &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Set the SQL Query to an EXECUTE statement for the stored procedure, e.g.: &lt;/p&gt;
&lt;p&gt;&lt;code&gt;EXECUTE DownloadDocument @File = '', @Library = '', @RemoteFile = ''&lt;/code&gt; &lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--RJRSfate--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/5hlo0xqa5yiucqd0f61y.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--RJRSfate--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/5hlo0xqa5yiucqd0f61y.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h1&gt;
  
  
  Configuring a Parameterized Query
&lt;/h1&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;After you have set the SQL query in the Source component, click the Control Flow tab &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Select Variables from the SSIS menu and add the appropriate variables for the stored procedure (e.g., File, Library, and RemoteFile) &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;In the Control Flow tab, select the Data Flow Task &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;In the Properties pane, click the expand button for the Expressions property &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;In the Property expressions list, select a blank row and choose the SQL Statement for the CData SharePoint Source component &lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--iYjPKm_Y--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/cbdrq7efcildzbvv5g4n.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--iYjPKm_Y--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/cbdrq7efcildzbvv5g4n.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Set the Expression to a parameterized EXECUTE statement for the Stored Procedure, including quotes, e.g., &lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;code&gt;"EXECUTE DownloadDocument @File = '" + @[User::File] + "', @Library = '" + @[User::Library] + "', @RemoteFile = '" + @[User::RemoteFile] + "'"&lt;/code&gt; &lt;/p&gt;

&lt;p&gt;After saving your Expression, you can execute the project and find the downloaded files in the download directory. &lt;/p&gt;
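&lt;p&gt;The expression concatenates literal SQL fragments with the SSIS variable values, wrapping each value in single quotes. The equivalent string construction, sketched in Python with hypothetical sample values:&lt;/p&gt;

```python
def build_execute_expression(file, library, remote_file):
    # Mirrors the SSIS expression: literal fragments + quoted variable values.
    return ("EXECUTE DownloadDocument @File = '" + file
            + "', @Library = '" + library
            + "', @RemoteFile = '" + remote_file + "'")

stmt = build_execute_expression("report.xlsx", "Documents",
                                r"C:\downloads\report.xlsx")
```

&lt;p&gt;Note that a value containing a single quote would break the generated statement, so validate variable values before building the expression this way.&lt;/p&gt;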

&lt;h1&gt;
  
  
  Learn More
&lt;/h1&gt;

&lt;p&gt;APIs have the potential to transform businesses. Forward-thinking companies are reducing costs and time spent by building their solutions with best-of-breed components, which they access via APIs. &lt;/p&gt;

&lt;p&gt;With the &lt;a href="https://www.cdata.com/ssis/?utm_source=dev.to&amp;amp;utm_medium=kb-article-posting&amp;amp;utm_campaign=perform-api-operations-using-stored-procedures-in-cdata-ssis-components+"&gt;CData SSIS Components&lt;/a&gt; you can simplify data movement, integration, &amp;amp; transformation with codeless connectivity. Synchronize or replicate cloud applications and databases, import &amp;amp; export to various files and cloud storage solutions, connect back-office processes through SSIS workflows, and so much more. &lt;/p&gt;

</description>
      <category>automation</category>
      <category>integration</category>
      <category>connectivity</category>
      <category>database</category>
    </item>
    <item>
      <title>A Comparison of Database Drivers for MySQL </title>
      <dc:creator>Jerod Johnson</dc:creator>
      <pubDate>Mon, 18 Oct 2021 14:56:59 +0000</pubDate>
      <link>https://forem.com/jerodimusprime/a-comparison-of-database-drivers-for-mysql-289k</link>
      <guid>https://forem.com/jerodimusprime/a-comparison-of-database-drivers-for-mysql-289k</guid>
      <description>&lt;p&gt;MySQL is "The world's most popular open source database" and it is widely-used to store and access your data. In fact, there are many cloud and on-premise databases (like MemSQL and Google Cloud SQL) that use the MySQL interface. Several native, open-source drivers are available for connecting to your MySQL data from other applications. &lt;/p&gt;

&lt;p&gt;In this article, we will compare the read and write performance of the native &lt;a href="https://www.cdata.com/drivers/mysql/?utm_source=dev.to&amp;amp;utm_medium=kb-article-posting&amp;amp;utm_campaign=a-comparison-of-database-drivers-for-mysql"&gt;MySQL Connectors&lt;/a&gt; (JDBC and ODBC) and the CData Software JDBC and ODBC Drivers for MySQL when working with larger datasets. We show that the CData Driver reads large datasets approximately 20% faster than the native drivers and writes 1 million rows nearly 40% faster. &lt;/p&gt;

&lt;h1&gt;
  
  
  The Data
&lt;/h1&gt;

&lt;p&gt;To provide a reproducible comparison for reading data, we copied the US Amazon book reviews from the Amazon Customer Reviews dataset into a MySQL database. The details for the amazon_book_reviews table are below: &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Table Size: 9.7 GB&lt;/li&gt;
&lt;li&gt;Number of Rows: 10,534,179&lt;/li&gt;
&lt;li&gt;Number of Columns: 15&lt;/li&gt;
&lt;/ul&gt;

&lt;h1&gt;
  
  
  JDBC Driver Read Performance
&lt;/h1&gt;

&lt;p&gt;The main goal of this investigation was to compare the relative performance of the JDBC drivers. We did this by running the same set of queries with each JDBC driver. We used LIMIT clauses to change the size of the dataset returned in each query, but we requested the same columns for each query. &lt;/p&gt;

&lt;h2&gt;
  
  
  Base Query
&lt;/h2&gt;

&lt;pre&gt;&lt;code&gt;SELECT
  marketplace,
  customer_id,
  review_id,
  product_id,
  product_parent,
  product_title,
  product_category,
  star_rating,
  helpful_votes,
  total_votes,
  vine,
  verified_purchase,
  review_headline,
  review_body,
  review_date
FROM
  cdata.amazon_book_reviews;&lt;/code&gt;&lt;/pre&gt;

&lt;h2&gt;
  
  
  Limit by Query
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;LIMIT 1000000 &lt;/li&gt;
&lt;li&gt;Full Dataset &lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;To test the drivers, we connected to MySQL using a basic Java application and executed the above queries repeatedly. The results were read and stored in a new variable (based on the datatype) for each column in each row. &lt;/p&gt;

&lt;p&gt;Query Times by Driver (in seconds) &lt;/p&gt;

&lt;table&gt;
&lt;thead&gt;&lt;tr&gt;&lt;th&gt;Query&lt;/th&gt;&lt;th&gt;CData JDBC Driver&lt;/th&gt;&lt;th&gt;MySQL Connector/J&lt;/th&gt;&lt;/tr&gt;&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;&lt;td&gt;1 (1,000,000 rows)&lt;/td&gt;&lt;td&gt;14.02 (+26%)&lt;/td&gt;&lt;td&gt;17.67&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;2 (~10,000,000 rows)&lt;/td&gt;&lt;td&gt;171.09 (+20.7%)&lt;/td&gt;&lt;td&gt;206.42&lt;/td&gt;&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;

&lt;p&gt;As can be seen in the results, the CData Driver regularly outperformed the native MySQL Driver, largely due to the way the CData JDBC Driver uses client-side resources. &lt;/p&gt;

&lt;h2&gt;
  
  
  JDBC Driver Resource Usage
&lt;/h2&gt;

&lt;p&gt;While testing the read performance of the JDBC drivers, we also measured client-side resource usage, looking specifically at memory. The charts below were captured by running a sample Java program and using Java VisualVM to record memory usage. We used OpenJDK 12.0.2 with a maximum heap size of 8 GB. &lt;/p&gt;

&lt;p&gt;For this comparison, we ran the query for the full dataset. &lt;/p&gt;

&lt;h2&gt;
  
  
  CData Driver
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--qtg_suIM--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/9w8618xyb53syi1i9m8z.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--qtg_suIM--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/9w8618xyb53syi1i9m8z.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Native Driver*
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--j7UDSjyd--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/druejlbuo6ai7k3jlef6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--j7UDSjyd--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/druejlbuo6ai7k3jlef6.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Note the change in scale for the Heap graph. &lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The CData Driver averages approximately 150 MB of heap usage for the duration of the run. However, the native driver continues to use more and more client resources for the duration of the run. In fact, we had to set the max Heap size to 8 GB in order for the native MySQL Driver to complete its execution. Despite the fact that the native driver is using significantly more resources, it still takes longer to read the data than it takes the CData JDBC Driver. &lt;/p&gt;

&lt;h1&gt;
  
  
  ODBC Driver Read Performance
&lt;/h1&gt;

&lt;p&gt;The main goal of this investigation was to compare the relative performance of the ODBC drivers. We did this by running the same set of queries with each ODBC driver. We used LIMIT clauses to change the size of the dataset returned in each query, but we requested the same columns for each query. &lt;/p&gt;

&lt;h2&gt;
  
  
  Base Query
&lt;/h2&gt;

&lt;pre&gt;&lt;code&gt;SELECT
  marketplace,
  customer_id,
  review_id,
  product_id,
  product_parent,
  product_title,
  product_category,
  star_rating,
  helpful_votes,
  total_votes,
  vine,
  verified_purchase,
  review_headline,
  review_body,
  review_date
FROM
  cdata.amazon_book_reviews;&lt;/code&gt;&lt;/pre&gt;

&lt;h2&gt;
  
  
  Limit by Query
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;LIMIT 1000000 &lt;/li&gt;
&lt;li&gt;Full Dataset &lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;To test the drivers, we connected to MySQL using a basic C++ application and executed the above queries repeatedly. The results were read and stored in a new variable (based on the datatype) for each column in each row. &lt;/p&gt;

&lt;p&gt;Query Times by Driver (in seconds) &lt;/p&gt;

&lt;table&gt;
&lt;thead&gt;&lt;tr&gt;&lt;th&gt;Query&lt;/th&gt;&lt;th&gt;CData ODBC Driver&lt;/th&gt;&lt;th&gt;MySQL Connector/ODBC&lt;/th&gt;&lt;/tr&gt;&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;&lt;td&gt;1 (1,000,000 rows)&lt;/td&gt;&lt;td&gt;14.95 (+96.7%)&lt;/td&gt;&lt;td&gt;29.40&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;2 (~10,000,000 rows)&lt;/td&gt;&lt;td&gt;177.51 (+72.3%)&lt;/td&gt;&lt;td&gt;305.77&lt;/td&gt;&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;

&lt;p&gt;As can be seen in the results, the CData ODBC Driver regularly outperformed the MySQL Connector/ODBC, largely due to better use of available client resources. &lt;/p&gt;

&lt;h2&gt;
  
  
  ODBC Driver Resource Usage
&lt;/h2&gt;

&lt;p&gt;While testing the read performance of the ODBC drivers, we also measured client-side resource usage, looking specifically at processing capacity and network bandwidth. The charts below were captured by executing the simple C++ application and using the Windows Resource Monitor. &lt;/p&gt;

&lt;p&gt;For this comparison, we ran the query for the full dataset. &lt;/p&gt;

&lt;h2&gt;
  
  
  CData Driver
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--UU_H-duV--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/eq6mbwau9az1nm0pq57g.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--UU_H-duV--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/eq6mbwau9az1nm0pq57g.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Native Driver
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--eU5qkEo8--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/1dfg5aqsuh867mrcs2jn.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--eU5qkEo8--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/1dfg5aqsuh867mrcs2jn.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Both drivers appear to use 20-25% of the available processing capacity, with CData using slightly more. The larger difference is in the network bandwidth used, where the CData Driver uses around 250 Mbps compared to the 100 Mbps used by the native driver. &lt;/p&gt;

&lt;h2&gt;
  
  
  Million Row Challenge
&lt;/h2&gt;

&lt;p&gt;In addition to measuring read performance, we also want to compare the write performance of the drivers. In short, the CData MySQL JDBC Driver is able to write 1 million rows nearly 40% faster than the native MySQL Connector. &lt;/p&gt;

&lt;p&gt;We used a simple Java program to add the rows to a copy of the amazon_book_reviews table referenced above. For our testing, we inserted the data in 100 batches of 10,000 rows. &lt;/p&gt;

&lt;p&gt;Sample code: &lt;/p&gt;

&lt;pre&gt;&lt;code&gt;// one batch
Connection connection = DriverManager.getConnection("jdbc:mysql:server=" + myServer + ";port=" + myPort + ";user=" + myUser + ";password=" + myPassword + ";database=" + myDatabase + ";");

String cmd = "INSERT INTO cdata.amazon_book_reviews_insert (marketplace, customer_id, review_id, product_id, product_parent, product_title, product_category, star_rating, helpful_votes, total_votes, vine, verified_purchase, review_headline, review_body, review_date) VALUES (?,?,?,?,?,?,?,?,?,?,?,?,?,?,?)";
PreparedStatement pstmt = connection.prepareStatement(cmd);

for (int row = 0; row &amp;lt; rows.length; row++) {
  pstmt.setString(1, marketplace);
  pstmt.setLong(2, customer_id);
  pstmt.setString(3, review_id);
  pstmt.setLong(4, product_id);
  pstmt.setLong(5, product_parent);
  pstmt.setString(6, product_title);
  pstmt.setString(7, product_category);
  pstmt.setLong(8, star_rating);
  pstmt.setLong(9, helpful_votes);
  pstmt.setLong(10, total_votes);
  pstmt.setString(11, vine);
  pstmt.setString(12, verified_purchase);
  pstmt.setString(13, review_headline);
  pstmt.setString(14, review_body);
  pstmt.setDate(15, review_date);
  pstmt.addBatch();
}
int[] affected = pstmt.executeBatch();&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Results: time (in seconds) to insert 1 million rows &lt;/p&gt;

&lt;table&gt;
&lt;thead&gt;&lt;tr&gt;&lt;th&gt;CData JDBC Driver&lt;/th&gt;&lt;th&gt;MySQL Connector/J&lt;/th&gt;&lt;/tr&gt;&lt;/thead&gt;
&lt;tbody&gt;&lt;tr&gt;&lt;td&gt;77.6&lt;/td&gt;&lt;td&gt;127.9&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;
&lt;/table&gt;
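&lt;p&gt;The "nearly 40%" figure follows directly from the timings, as this short Python check shows:&lt;/p&gt;

```python
rows = 1_000_000
cdata_seconds, native_seconds = 77.6, 127.9

time_saved = (native_seconds - cdata_seconds) / native_seconds
cdata_throughput = rows / cdata_seconds    # ~12,900 rows/s
native_throughput = rows / native_seconds  # ~7,800 rows/s
# time_saved is about 0.393, i.e. the CData driver needs ~39% less time.
```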

&lt;h1&gt;
  
  
  Conclusion
&lt;/h1&gt;

&lt;p&gt;Most emerging technologies aim to be more efficient and provide more functionality in a smaller package. If you can meet your data management needs with fewer tools, it’s a win for cost-effectiveness, efficiency, and ease of use. &lt;/p&gt;

&lt;p&gt;The CData Driver offers better querying of large datasets over the native connector, processing the largest dataset at least 20% faster by making better use of the available client resources. &lt;/p&gt;

&lt;p&gt;When it comes to inserting data, the CData Driver stands apart in its ability to rapidly insert large sets of data. The &lt;a href="https://www.cdata.com/jdbc/"&gt;CData JDBC Driver&lt;/a&gt; is able to insert 1 million rows in under 80 seconds, nearly 40% faster than the native connector. &lt;/p&gt;

</description>
      <category>database</category>
      <category>connectivity</category>
      <category>integration</category>
      <category>data</category>
    </item>
    <item>
      <title>CData Connect: Derived Views and Query Federation </title>
      <dc:creator>Jerod Johnson</dc:creator>
      <pubDate>Fri, 15 Oct 2021 13:17:02 +0000</pubDate>
      <link>https://forem.com/jerodimusprime/cdata-connect-derived-views-and-query-federation-3bjp</link>
      <guid>https://forem.com/jerodimusprime/cdata-connect-derived-views-and-query-federation-3bjp</guid>
      <description>&lt;p&gt;Enterprise-scale data integration demands scalable tools to not only centralize and curate data at scale, but also efficiently explore and analyze data at large-scale. &lt;/p&gt;

&lt;p&gt;CData Connect solves the challenge of data exploration and discovery through virtualized connectivity to data distributed across many data sources. &lt;/p&gt;

&lt;p&gt;In this article, we explore the Derived Views and Query Federation features of &lt;a href="https://www.cdata.com/connect/?utm_source=dev.to&amp;amp;utm_medium=kb-article-posting&amp;amp;utm_campaign=cdata-connect-derived-views-and-query-federation"&gt;CData Connect&lt;/a&gt;, which let users create custom interfaces, surface combined data from many sources, and even query across data sources simultaneously. &lt;/p&gt;

&lt;h1&gt;
  
  
  Derived Views
&lt;/h1&gt;

&lt;p&gt;Getting a database model of your SaaS data, where each entity (Account, Contact, etc.) is represented by a database table is useful, but creating views of your enterprise data can help in a variety of ways: &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Simplified access&lt;/strong&gt; - Views provide access to a specific subset of the data available, and can feature built-in filtering, column selections, and more to create easier access to commonly used queries &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Improved security&lt;/strong&gt; - Views can select specific columns/rows and set the permissions on the view instead of the underlying table, giving data administrators more granular control over which data points are accessible by each user &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Complex, real-time data&lt;/strong&gt; - Views can roll up operations (summations, aggregations, etc.), JOINs, and more, all while providing real-time access to data by performing live queries against the underlying table(s) each time the view is queried &lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
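&lt;p&gt;As a minimal sketch of those benefits (using Python's built-in sqlite3 module and hypothetical table and column names, not CData Connect itself), a view can expose a filtered, aggregated slice of a table: &lt;/p&gt;

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE Account (Id INTEGER, Name TEXT, Industry TEXT, AnnualRevenue REAL)"
)
conn.executemany(
    "INSERT INTO Account VALUES (?, ?, ?, ?)",
    [(1, "Acme", "Tech", 5e6), (2, "Globex", "Tech", 2e6), (3, "Initech", "Retail", 1e6)],
)

# The view bakes in filtering, column selection, and aggregation;
# consumers query it like a table, and permissions can target the
# view rather than the underlying table.
conn.execute(
    """
    CREATE VIEW TechRevenue AS
    SELECT Industry, SUM(AnnualRevenue) AS TotalRevenue
    FROM Account
    WHERE Industry = 'Tech'
    GROUP BY Industry
    """
)

result = conn.execute("SELECT * FROM TechRevenue").fetchall()
print(result)  # [('Tech', 7000000.0)]
```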

&lt;p&gt;The Derived View feature of CData Connect lets you do just that: create a "view" of your SaaS, Big Data, &amp;amp; NoSQL sources. The derived view behaves exactly like a user-defined view, but its definition resides directly within the CData Connect instance. To create a view, you first need to create a virtual database for your data. &lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Navigate to the Data Model tab and click Data Sources &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Select a source from the available Sources &lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--gGANyeVK--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/mxhc7579y86dco87z36t.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--gGANyeVK--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/mxhc7579y86dco87z36t.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Configure the connection using the data source-specific properties &lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--D7y8LFI4--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ia9fq6sokeqc6vq7ttrb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--D7y8LFI4--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ia9fq6sokeqc6vq7ttrb.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Once connected to a data source, you can explore the virtual database in the Data Explorer. You can select your newly created database and drag tables into the query pane. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--KYvOyFln--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/qyawn2f0c1rolufc1cb1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--KYvOyFln--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/qyawn2f0c1rolufc1cb1.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You can also create custom SQL queries based on the data model to request specific data. Click the Create Derived View button to save the query as a view. With the view saved, users can easily access the results of the saved query. For example, you can use the view to visualize the results in popular BI and reporting tools. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--O6Aqzp8J--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/1y4kssynuxeb0xjlnbnc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--O6Aqzp8J--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/1y4kssynuxeb0xjlnbnc.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;
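&lt;p&gt;A quick sketch of why a saved view stays live (plain sqlite3 here, with hypothetical names, standing in for a derived view): each read re-runs the saved query, so new rows in the underlying table show up immediately. &lt;/p&gt;

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Contact (Id INTEGER, Name TEXT, Active INTEGER)")
conn.execute("INSERT INTO Contact VALUES (1, 'Ann', 1)")

# Saving a query as a view: every read re-executes the query
# against the underlying table, so results are always current.
conn.execute(
    "CREATE VIEW ActiveContacts AS SELECT Id, Name FROM Contact WHERE Active = 1"
)

before = conn.execute("SELECT COUNT(*) FROM ActiveContacts").fetchone()[0]
conn.execute("INSERT INTO Contact VALUES (2, 'Bob', 1)")
after = conn.execute("SELECT COUNT(*) FROM ActiveContacts").fetchone()[0]
print(before, after)  # 1 2
```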

&lt;h1&gt;Query Federation&lt;/h1&gt;

&lt;p&gt;In addition to creating Derived Views, you can leverage the Query Federation capabilities of CData Connect to query related data across different sources. For example, you can join data from your CRM solution with data from your accounting solution. The feature is built using our existing query federation technology. To query data across different sources, you will need multiple virtual databases in your CData Connect instance. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--xm2nCHZk--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/fhga99pc734fj9oewh6y.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--xm2nCHZk--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/fhga99pc734fj9oewh6y.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;
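&lt;p&gt;The idea can be sketched with sqlite3's ATTACH, where two in-memory databases stand in for two virtual databases (all table and column names are hypothetical): &lt;/p&gt;

```python
import sqlite3

# Two in-memory databases stand in for two virtual databases,
# e.g. a CRM source and an accounting source.
conn = sqlite3.connect(":memory:")
conn.execute("ATTACH DATABASE ':memory:' AS accounting")

conn.execute("CREATE TABLE Account (Id INTEGER, Name TEXT)")
conn.execute("CREATE TABLE accounting.Invoice (AccountId INTEGER, Amount REAL)")
conn.execute("INSERT INTO Account VALUES (1, 'Acme')")
conn.execute("INSERT INTO accounting.Invoice VALUES (1, 250.0), (1, 100.0)")

# One statement joins across both databases, just as a federated
# query spans virtual databases inside a single engine.
row = conn.execute(
    """
    SELECT a.Name, SUM(i.Amount)
    FROM Account a
    JOIN accounting.Invoice i ON i.AccountId = a.Id
    GROUP BY a.Name
    """
).fetchone()
print(row)  # ('Acme', 350.0)
```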

&lt;p&gt;Once the databases are created, you can query data from both simultaneously, just as you would query two related databases. For example, you can &lt;a href="https://www.cdata.com/kb/tech/bigquery-connect-powerbi.rst?utm_source=dev.to&amp;amp;utm_medium=kb-article-posting&amp;amp;utm_campaign=cdata-connect%3A-derived-views-and-query-federation"&gt;connect to CData Connect from Power BI and visualize your data&lt;/a&gt;. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Fmq3Qw2s--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/9rr2wv0nf64va5ffs5ud.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Fmq3Qw2s--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/9rr2wv0nf64va5ffs5ud.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;Query Federation in Derived Views&lt;/h2&gt;

&lt;p&gt;The Query Federation functionality can be used to create Derived Views across different data sources, giving users and administrators all of the benefits of views combined with the ability to query multiple sources at once. Simply construct a query joining data from multiple virtual databases and save it as a derived view. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--N8bpHD1v--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/wkoh7mme3ikure96r6x2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--N8bpHD1v--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/wkoh7mme3ikure96r6x2.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;
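&lt;p&gt;Continuing the sqlite3 stand-in (hypothetical names, not CData Connect itself), the federated join can be saved as a view so consumers simply query the view: &lt;/p&gt;

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("ATTACH DATABASE ':memory:' AS accounting")
conn.execute("CREATE TABLE Account (Id INTEGER, Name TEXT)")
conn.execute("CREATE TABLE accounting.Invoice (AccountId INTEGER, Amount REAL)")
conn.execute("INSERT INTO Account VALUES (1, 'Acme')")
conn.execute("INSERT INTO accounting.Invoice VALUES (1, 250.0), (1, 100.0)")

# Save the cross-database join as a view (TEMP here, because plain
# SQLite views cannot reference tables in other attached databases).
conn.execute(
    """
    CREATE TEMP VIEW AccountTotals AS
    SELECT a.Name, SUM(i.Amount) AS Total
    FROM Account a
    JOIN accounting.Invoice i ON i.AccountId = a.Id
    GROUP BY a.Name
    """
)

row = conn.execute("SELECT * FROM AccountTotals").fetchone()
print(row)
```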

&lt;h1&gt;More Information&lt;/h1&gt;

&lt;p&gt;To transform enormous amounts of data into knowledge and actionable insights, teams must solve fundamental problems such as data integration and query processing. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.cdata.com/connect/?utm_source=dev.to&amp;amp;utm_medium=kb-article-posting&amp;amp;utm_campaign=cdata-connect-derived-views-and-query-federation"&gt;CData Connect&lt;/a&gt; is a consolidated connectivity platform that lets you easily connect any application, on-prem or in the cloud, with real-time data from anywhere. Now, analysts can finally gain unified access to vital data and deliver hard, data-driven insights for their teams. &lt;/p&gt;

</description>
      <category>connectivity</category>
      <category>integration</category>
      <category>database</category>
      <category>data</category>
    </item>
  </channel>
</rss>
