<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: augusto kiniama rosa</title>
    <description>The latest articles on Forem by augusto kiniama rosa (@kiniama).</description>
    <link>https://forem.com/kiniama</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F65102%2Fe5eabb29-c235-4bb8-8a70-38bec2b9a541.jpeg</url>
      <title>Forem: augusto kiniama rosa</title>
      <link>https://forem.com/kiniama</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/kiniama"/>
    <language>en</language>
    <item>
      <title>The Unofficial Snowflake Monthly Release Notes: December 2025</title>
      <dc:creator>augusto kiniama rosa</dc:creator>
      <pubDate>Fri, 02 Jan 2026 21:32:17 +0000</pubDate>
      <link>https://forem.com/kiniama/the-unofficial-snowflake-monthly-release-notes-december-2025-4724</link>
      <guid>https://forem.com/kiniama/the-unofficial-snowflake-monthly-release-notes-december-2025-4724</guid>
      <description>&lt;h4&gt;
  
  
  Monthly Snowflake Unofficial Release Notes #New features #Previews #Clients #Behavior Changes
&lt;/h4&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnkwld2mxr956csfmjtay.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnkwld2mxr956csfmjtay.png" width="800" height="343"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Welcome to the Unofficial Release Notes for Snowflake for December 2025! You’ll find all the latest features, drivers, and more in one convenient place.&lt;/p&gt;

&lt;p&gt;As an unofficial source, I am excited to share my insights and thoughts. Let’s dive in! You can also find all of Snowflake’s releases &lt;a href="https://medium.com/@augustokrosa/list/snowflake-unofficial-newsletter-list-b97037cfc9e6" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;This December, we provide coverage up to release 9.40 (General Availability — GA). I hope to extend this eventually to private preview notices as well.&lt;/p&gt;

&lt;p&gt;I would appreciate your suggestions on how to keep improving these combined monthly release notes. Feel free to comment below or chat with me on &lt;a href="https://www.linkedin.com/in/augustorosa/" rel="noopener noreferrer"&gt;&lt;em&gt;LinkedIn&lt;/em&gt;&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Behavior change&lt;/strong&gt; bundle &lt;a href="https://docs.snowflake.com/en/release-notes/bcr-bundles/2025_05_bundle" rel="noopener noreferrer"&gt;2025_05&lt;/a&gt; is generally enabled for all customers, &lt;a href="https://docs.snowflake.com/en/release-notes/bcr-bundles/2025_06_bundle" rel="noopener noreferrer"&gt;2025_06&lt;/a&gt; is enabled by default but can be opted out until the next BCR deployment, and &lt;a href="https://docs.snowflake.com/en/release-notes/bcr-bundles/2025_07_bundle" rel="noopener noreferrer"&gt;2025_07&lt;/a&gt; is disabled by default but may be opted in. Net new is bundle 2026_01, arriving in January; no details have been published yet.&lt;/p&gt;

&lt;h3&gt;
  
  
  What’s New in Snowflake
&lt;/h3&gt;

&lt;h4&gt;
  
  
  AI Updates (Cortex, ML, DocumentAI)
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;AI_REDACT for automated redaction of PII (GA), detects and redacts PII from unstructured text using a large language model (LLM). AI_REDACT recognizes categories like names and addresses, including partial PII such as first or last names, and replaces them with placeholders (see the first sketch after this list)&lt;/li&gt;
&lt;li&gt;CORTEX_AISQL_USAGE_HISTORY Account Usage view (GA), provides detailed information about the usage of Cortex AI Functions in your SQL queries, offering finer-grained insight into how AI features are used in your account&lt;/li&gt;
&lt;li&gt;Semantic views: standard SQL clauses for querying semantic views (Preview), use standard SQL clauses, such as WHERE and ORDER BY, in a SELECT statement that queries a semantic view (see the second sketch after this list)&lt;/li&gt;
&lt;/ul&gt;
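
&lt;p&gt;A minimal sketch of the redaction call, assuming the GA function keeps a single-argument form and a hypothetical &lt;em&gt;support_tickets&lt;/em&gt; table:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;-- Hypothetical table of raw support text; AI_REDACT replaces detected PII
-- (names, addresses, partial names, and so on) with placeholder values.
SELECT
    ticket_id,
    AI_REDACT(ticket_body) AS redacted_body
FROM support_tickets
LIMIT 10;
&lt;/code&gt;&lt;/pre&gt;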
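
&lt;p&gt;And a sketch of querying a semantic view with standard SQL clauses, assuming a hypothetical semantic view &lt;em&gt;sales_sv&lt;/em&gt; that exposes a &lt;em&gt;customer_region&lt;/em&gt; dimension and a &lt;em&gt;total_revenue&lt;/em&gt; metric through the SEMANTIC_VIEW table syntax:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;-- Standard WHERE / ORDER BY / LIMIT clauses wrapped around a semantic view.
SELECT customer_region, total_revenue
FROM SEMANTIC_VIEW(
       sales_sv
       DIMENSIONS customer_region
       METRICS total_revenue
     )
WHERE customer_region IS NOT NULL
ORDER BY total_revenue DESC
LIMIT 5;
&lt;/code&gt;&lt;/pre&gt;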

&lt;h4&gt;
  
  
  Snowflake Applications (Container Services, Notebooks and Applications, Snowconvert)
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;Support for Streamlit in Snowflake container runtime (Preview), run your Streamlit in Snowflake apps on containers&lt;/li&gt;
&lt;li&gt;SnowConvert AI 2.1.0 (New features: IBM DB2: implemented DECFLOAT transformation; Oracle: added support for transforming NUMBER to DECFLOAT using the Data Type Mappings feature, added a new TypeMappings.csv report that displays the data types changed using the Data Type Mappings feature; PowerBI: added support for the Transact connector pattern for queries and multiple properties in the property list; Teradata: added a new Tables Translation conversion setting that transforms all tables in the source code to a specific table type supported by Snowflake, enabled conversion of tables to Snowflake-managed Iceberg tables; SSIS: added support for full cache in SSIS lookup transformations; General: added temporary credentials retrieval for AI Verification jobs, added summary cards for the selection and result pages, implemented full support for the Git Service, added ‘verified by user’ checkboxes and bulk actions to the selection and results pages, added a dependency tag for AI Verification, implemented generation of a SqlObjects report; Improvements: RedShift: optimized RedShift transformations to only add escape characters when necessary in LIKE conditions; SSIS: improved Microsoft.DerivedColumn migrations for SSIS; General: added the number of copied files to relevant outputs, moved some buttons to the footer for improved UI consistency; Fixes: Teradata: fixed transformation of bash variable substitution in scripts)&lt;/li&gt;
&lt;li&gt;SnowConvert AI 2.0.86 (Improvements: RedShift: added support for the MURMUR3_32_HASH function, replaced Redshift epoch and interval patterns with Snowflake TO_TIMESTAMP; SSIS: added support for converting Microsoft SendMailTask to Snowflake SYSTEM, implemented SSIS event handler translation for OnPreExecute and OnPostExecute; SQL Server: enhanced transformation of the ROUND function with three arguments; Informatica: updated InfPcIntegrationTestBase to import the real implementation of translators and other necessary components; General: enhanced procedure name handling and improved identifier splitting logic, improved object name normalization in extracted DDL code, implemented a temporary variable to keep credentials in memory and retrieve the configuration file, updated the TOML Credential Manager, improved error suggestions, added missing path validations related to ETL, improved the application update mechanism, implemented an exception thrown when calling the ToToml method for Snowflake credentials, changed the log path and updated the cache path, implemented a mechanism to check for updates, merged the Missing Object References report with ObjectReferences, changed values in the name and description columns of the ETL.Issues report, added support for Open Source and Converted models in AI Verification, added a new custom JSON localizer, added a dialog that appears when accepting changes if multiple code units are present in the same file, added a FileSystemService, added an expression in the ETL issues report for SSISExpressionCannotBeConverted; Fixes: SQL Server: fixed a bug that caused the report database to be generated incorrectly, fixed a bug that caused unknown code units to be duplicated during arrangement; General: fixed an issue that prevented the cancellation of AI Verification jobs, fixed an issue to support EAI in the AI specification file, fixed an issue where the progress number was not being updated, fixed the handling of application shutdowns during updates)&lt;/li&gt;
&lt;li&gt;SnowConvert AI 2.0.57 (Improvements: SQL Server: enhanced SQL Server code extraction to return schema-qualified objects; General: enhanced the Project Service and Snowflake authentication for improved execution, removed GS validation from the client side, as it is now performed on the server side, implemented connection validation to block deployment, data migration, and data validation when a connection is unavailable, enhanced conversion to use the source dialect from project initialization, improved CodeUnitStatusMapper to accurately handle progress status in UI status determination, implemented batch insert functionality for enhanced object result processing; Fixes: resolved an issue where conversion settings were not being saved correctly, corrected the data validation select tree to properly skip folders, fixed content centering issues in the UI, normalized object names in AI Verification responses to prevent missing status entries in the catalog)&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  Data Lake Updates
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;Optimize existing semantic views or models with verified queries (Preview), optimize semantic views with verified queries. Snowflake analyzes them to enhance the semantic layer, enabling Cortex Analyst to answer more questions accurately beyond existing queries&lt;/li&gt;
&lt;li&gt;Private connectivity for Apache Iceberg™ REST catalog integrations (GA), configure an Apache Iceberg™ REST catalog integration for outbound private connectivity, enabling connections to external Iceberg REST catalogs like generic Iceberg REST, AWS Glue Data Catalog, and Databricks Unity Catalog via private endpoints instead of the public internet&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  Snowflake Applications (Container Services, Notebooks and Applications, Snowconvert)
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;Notebooks in Workspaces (Preview), the new notebook experience offers a fully-managed environment for data science and machine learning on Snowflake, combining the familiar Jupyter interface with enterprise-grade compute, governance, and collaboration. Notebooks run on a Container Runtime powered by Snowpark Container Services, with preconfigured containers optimized for AI/ML workloads, supporting CPUs, GPUs, parallel data loading, and distributed training APIs for popular ML packages. Key features include integration with Workspaces, improved compute and cost management, Jupyter compatibility, and an enhanced editing experience&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  Realtime Data (Hybrid, Interactive &amp;amp; Snowflake Postgres)
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;Snowflake Postgres (Preview), allows creating, managing, and using Postgres instances directly within Snowflake, each on a dedicated VM. Connect via any Postgres client, integrating Postgres's reliable transactional capabilities with the Snowflake platform&lt;/li&gt;
&lt;li&gt;Interactive tables and interactive warehouses (GA), deliver low-latency query performance for high-concurrency workloads like real-time dashboards and APIs. This new Snowflake table type is optimized for interactive, low-latency queries. Interactive warehouses are designed for low-latency, interactive workloads, providing the best performance when querying these tables&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  Data Pipelines, Data Loading, Unloading Updates
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;Schema evolution support for Snowpipe Streaming with high-performance architecture, automatic schema evolution in Snowpipe Streaming enables pipelines to adapt to schema drift in near real time, removing the need for manual DDL when new data attributes appear&lt;/li&gt;
&lt;li&gt;Snowflake High Performance connector for Kafka (Preview), a high-performance Kafka connector for ingesting data into Snowflake tables. It leverages Snowflake’s Snowpipe Streaming for high throughput with low latency; key features include transparent billing, Rust-based performance, in-flight transformations, server validation, and pre-clustering, with PIPE objects managing and configuring data ingestion&lt;/li&gt;
&lt;li&gt;Default pipe for Snowpipe Streaming with high-performance architecture, simplifies data ingestion by removing the need to create a pipe manually with CREATE PIPE DDL statements. Users can start streaming immediately to a target table, as the default pipe is implicitly available for any table receiving streaming data&lt;/li&gt;
&lt;li&gt;Snowpipe simplified pricing, an enhancement for Enterprise and Standard Snowflake accounts, offers a simpler, more predictable Snowpipe pricing model that can significantly reduce data ingestion costs. It charges a fixed 0.0037 credits per GB for Snowpipe ingestion; text files like CSV and JSON are billed by uncompressed size, while binary files like Parquet and Avro are billed by their observed size (see the worked example after this list)&lt;/li&gt;
&lt;/ul&gt;
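
&lt;p&gt;The fixed rate makes cost estimation simple arithmetic; a worked example, assuming a month with 500 GB of uncompressed CSV and 200 GB of Parquet at observed size:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;-- 0.0037 credits per GB, per the simplified Snowpipe pricing above.
SELECT (500 + 200) * 0.0037 AS estimated_snowpipe_credits;  -- 2.59 credits
&lt;/code&gt;&lt;/pre&gt;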

&lt;h4&gt;
  
  
  Data Transformations
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;Dynamic tables: Support for dual warehouses, optimize performance and cost by assigning dedicated warehouses for resource-intensive initializations and reinitializations, while using different warehouses for other refreshes&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  Security, Privacy &amp;amp; Governance Updates
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;Copy tags when running a CREATE OR REPLACE TABLE command (Preview), allows copying the tags attached to the original table and its columns, so the new table and its columns carry the same tags&lt;/li&gt;
&lt;li&gt;Notifications for data quality incidents (Preview), automatically sends a notification when a data quality incident occurs in a database, which happens when a data metric function (DMF) violates an expectation or shows an anomaly&lt;/li&gt;
&lt;li&gt;Network rules and policies support Google Cloud Private Service Connect IDs (GA), create Snowflake network rules and policies using Google Cloud Private Service Connect IDs&lt;/li&gt;
&lt;li&gt;Trust Center: detection findings and event-driven scanners (Preview), view new findings (detections) in your account. This preview introduces event-driven scanners, which continuously monitor your account for specific events, alongside the existing schedule-based scanners&lt;/li&gt;
&lt;li&gt;Programmatic access tokens: removing the single-role restriction for service users, for service users (users with TYPE=SERVICE or TYPE=LEGACY_SERVICE) you can now generate a programmatic access token that is not restricted to a single role (see the sketch after this list)&lt;/li&gt;
&lt;li&gt;Private connectivity for internal stages on Google Cloud (GA)&lt;/li&gt;
&lt;li&gt;WORM backups (GA), help organizations protect critical data from modification or deletion. Backups are point-in-time copies of Snowflake objects. You select which objects to back up (tables, schemas, or databases), the backup frequency, the retention period, and whether to add a lock to prevent early deletion, supporting regulatory compliance, recovery, and cyber resilience&lt;/li&gt;
&lt;li&gt;Cost anomalies (GA), automatically detect cost anomalies from previous consumption levels, simplifying the identification of spikes or dips to optimize spending. Use it for account and organization-level anomalies&lt;/li&gt;
&lt;/ul&gt;
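
&lt;p&gt;A sketch of the loosened token restriction, assuming a pre-existing service user &lt;em&gt;svc_etl&lt;/em&gt; (before this change, such a token had to be pinned to one role with ROLE_RESTRICTION):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;-- Service users may now mint a token without a single-role restriction;
-- the token then works with the roles granted to the user.
ALTER USER svc_etl
    ADD PROGRAMMATIC ACCESS TOKEN etl_token
    DAYS_TO_EXPIRY = 30;
&lt;/code&gt;&lt;/pre&gt;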

&lt;h4&gt;
  
  
  SQL, Extensibility &amp;amp; Performance Updates
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;Account Usage: New CATALOG_LINKED_DATABASE_USAGE_HISTORY view, displays the credit usage for catalog-linked databases. It includes compute and cloud services credit usage for each entity during an operation&lt;/li&gt;
&lt;li&gt;Vector aggregate functions, enable element-wise operations on multiple VECTOR values. These functions aggregate vector columns by computing element-wise results across all vectors in a group, which is vital in machine learning and data science for tasks like calculating centroids, ranges, or averages in vector datasets. They ignore NULLs, preserve data types, and are optimized for vector data. The new functions are VECTOR_SUM, VECTOR_MIN, VECTOR_MAX, and VECTOR_AVG (see the sketch after this list)&lt;/li&gt;
&lt;li&gt;Access history improvements let you monitor more of the SQL statements executed in Snowflake. Access history tracks Data Manipulation Language (DML), Data Query Language (DQL), and Data Definition Language (DDL) statements, and Snowflake is expanding which statements are included: added support for the listing, role, share, and session objects; added DQL command support for externally managed Apache Iceberg™ tables; enhanced support for database DDL commands, including ALTER DATABASE and commands related to database replication; enhanced DDL support for tables, including variations of ALTER TABLE and ALTER TABLE…MODIFY COLUMN; and enhanced support for file staging commands like GET and PUT&lt;/li&gt;
&lt;/ul&gt;
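
&lt;p&gt;A small sketch of the new aggregates, assuming a hypothetical &lt;em&gt;embeddings&lt;/em&gt; table with a VECTOR(FLOAT, 3) column:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;-- Element-wise aggregation across all vectors in a group; NULLs are ignored.
CREATE OR REPLACE TEMP TABLE embeddings (label STRING, v VECTOR(FLOAT, 3));

INSERT INTO embeddings
    SELECT 'a', [1.0, 2.0, 3.0]::VECTOR(FLOAT, 3)
    UNION ALL
    SELECT 'a', [3.0, 4.0, 5.0]::VECTOR(FLOAT, 3);

SELECT label, VECTOR_AVG(v) AS centroid  -- [2.0, 3.0, 4.0]
FROM embeddings
GROUP BY label;
&lt;/code&gt;&lt;/pre&gt;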

&lt;h4&gt;
  
  
  Collaboration, Data Clean Rooms, Marketplace, Listings &amp;amp; Data Sharing
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;Auto-fulfillment for listings that span databases (GA), providers can create listings on databases referencing views or tables across multiple databases. Granting reference usage to a share allows a single listing to span databases, eliminating the need for a combined database per listing. This increases flexibility, simplifies integration, and ensures all related listings are auto-fulfilled together&lt;/li&gt;
&lt;li&gt;Clean Rooms API Version: 12.2: updates to private preview features&lt;/li&gt;
&lt;li&gt;Clean Rooms API Version: 12.3: The lookalike audience modeling template was removed from the clean rooms UI. It's now accessible as a custom template via the clean rooms API for adding, modifying, and running, with updates to private preview features&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  Open-Source &lt;a href="https://developers.snowflake.com/opensource/" rel="noopener noreferrer"&gt;Updates&lt;/a&gt;
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;terraform-snowflake-provider 2.12.0 (enables tighter enterprise security with the addition of new CRL and Proxy configuration fields. The migration tool has been significantly expanded to now support databases, roles, users, warehouses, and schemas, making IaC adoption much easier for existing environments. You also gain better control over serverless tasks and SCIM integrations with the addition of custom run_as_role support. Finally, check the migration guide before upgrading, as this version formally removes the deprecated account parameter)&lt;/li&gt;
&lt;li&gt;Modin 0.32.0&lt;/li&gt;
&lt;li&gt;Snowflake VS Code Extension 1.21.0 (Features: added Snowpark Migration Accelerator (SMA) AI Assistant support for Jupyter Notebooks (.ipynb files), added SnowConvert AI Assistant support for Python (.py), Scala (.scala), and Jupyter Notebook (.ipynb) files, added an AI disclaimer header to SMA and SnowConvert AI Assistant windows with links to the Privacy Policy, AI Terms, Acceptable Use Policy, and Terms of Service; Bug fixes: fixed a statement boundary detection issue for stored procedures with many CASE WHEN clauses)&lt;/li&gt;
&lt;li&gt;Streamlit 1.52.2 (minor updates)&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Client, Drivers, Libraries and Connectors Updates
&lt;/h3&gt;

&lt;h4&gt;
  
  
  New features:
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;.NET Driver 5.2.0 (Added multi-targeting support. NuGet now selects the appropriate build based on the target framework and OS, added support for native Arrow structured types)&lt;/li&gt;
&lt;li&gt;Go Snowflake Driver 1.18.1 (Included a shared library to collect telemetry to identify and prepare testing platforms for native Rust extensions)&lt;/li&gt;
&lt;li&gt;JDBC Driver 3.28.0 (Introduced a shared library for extended telemetry to identify and prepare the testing platform for native Rust extensions, added the ability to choose the connection configuration in the auto configuration file by specifying the aws-oauth-file parameter in the JDBC URL, updated grpc-java to 1.77.0 to address CVE-2025-58057 from a transient dependency, updated netty to 4.1.128.Final to address CVE-2025-59419)&lt;/li&gt;
&lt;li&gt;Node.js 2.3.2 (Added support for Red Hat Enterprise Linux (RHEL) 9, added support for Node.js version 24, included a shared library to collect telemetry to identify and prepare testing platforms for native node addons)&lt;/li&gt;
&lt;li&gt;PHP PDO Driver for Snowflake 3.4.0 (Added native OKTA authentication support, implemented a new CRL (Certificate Revocation List) checking mechanism)&lt;/li&gt;
&lt;li&gt;Snowflake CLI 3.14.0 (Updated the snow streamlit deploy command to use the updated CREATE STREAMLIT syntax (FROM &lt;em&gt;source_location&lt;/em&gt;) instead of the deprecated ROOT_LOCATION syntax; see the sketch after this list)&lt;/li&gt;
&lt;li&gt;Snowflake Python API 1.10.0 (Added support for the Streamlit resource, added support for the DECFLOAT data type)&lt;/li&gt;
&lt;li&gt;Snowpark Library for Python 1.44.0 (New features: added support for targeted delete-insert via the overwrite_condition parameter in DataFrameWriter.save_as_table; Improvements: improved DataFrameReader to return columns in deterministic order when using INFER_SCHEMA, added a dependency on protobuf&amp;lt;6.34 (was &amp;lt;6.32))&lt;/li&gt;
&lt;li&gt;Snowpark Library for Python 1.43.0 (New features: added support for DataFrame.lateral_join, added support for the Private Preview feature Session.client_telemetry, added support for Session.udf_profiler, added support for functions.ai_translate, added support for the following iceberg_config options in DataFrameWriter.save_as_table and DataFrame.copy_into_table: target_file_size, partition_by, added support for the following functions in functions.py: string and binary functions: base64_decode_binary, bucket, compress, day, decompress_binary, decompress_string, md5_binary, md5_number_lower64, md5_number_upper64, sha1_binary, sha2_binary, soundex_p123, strtok, truncate, try_base64_decode_binary, try_base64_decode_string, try_hex_decode_binary, try_hex_decode_string, unicode, uuid_string; conditional expressions: booland_agg, boolxor_agg, regr_valy, zeroifnull; numeric expressions: cot, mod, pi, square, width_bucket; Improvements: enhanced DataFrame.sort() to support ORDER BY ALL when no columns are specified, removed the experimental warning from Session.cte_optimization_enabled; Snowpark pandas API updates: added support for DataFrame.groupby.rolling(), added support for mapping np.percentile with DataFrame and Series inputs to Series.quantile, added support for setting the random_state parameter to an integer when calling DataFrame.sample or Series.sample, added support for the following iceberg_config options in to_iceberg: target_file_size, partition_by; Improvements: enhanced autoswitching from Snowflake to native pandas for methods with unsupported argument combinations: shift() with suffix or non-integer periods parameters, sort_index() with axis=1 or key parameters, sort_values() with axis=1, melt() with the col_level parameter, apply() with the result_type parameter for DataFrame, pivot_table() with sort=True, a non-string index list, a non-string columns list, a non-string values list, or an aggfunc dict with non-string values, fillna() with the downcast parameter or using limit together with value, dropna() with axis=1, asfreq() with the how parameter, fill_value parameter, normalize=True, or a freq parameter of week, month, quarter, or year, groupby() with axis=1, by!=None and level!=None, or by containing any non-pandas hashable labels, groupby_fillna() with the downcast parameter, groupby_first() with min_count&amp;gt;1, groupby_last() with min_count&amp;gt;1, groupby_shift() with the freq parameter; slightly improved the performance of agg, nunique, describe, and related methods on 1-column DataFrame and Series objects; added support for the following in faster pandas: groupby.apply, groupby.nunique, groupby.size, concat, copy, str.isdigit, str.islower, str.isupper, str.istitle, str.lower, str.upper, str.title, str.match, str.capitalize, str.__getitem__, str.center, str.count, str.get, str.pad, str.len, str.ljust, str.rjust, str.split, str.replace, str.strip, str.lstrip, str.rstrip, str.translate, dt.tz_localize, dt.tz_convert, dt.ceil, dt.round, dt.floor, dt.normalize, dt.month_name, dt.day_name, dt.strftime, dt.dayofweek, dt.weekday, dt.dayofyear, dt.isocalendar, rolling.min, rolling.max, rolling.count, rolling.sum, rolling.mean, rolling.std, rolling.var, rolling.sem, rolling.corr, expanding.min, expanding.max, expanding.count, expanding.sum, expanding.mean, expanding.std, expanding.var, expanding.sem, cumsum, cummin, cummax, groupby.groups, groupby.indices, groupby.first, groupby.last, groupby.rank, groupby.shift, groupby.cumcount, groupby.cumsum, groupby.cummin, groupby.cummax, groupby.any, groupby.all, groupby.unique, groupby.get_group, groupby.rolling, groupby.resample, to_snowflake, to_snowpark, resample.min, resample.max, resample.count, resample.sum, resample.mean, resample.median, resample.std, resample.var, resample.size, resample.first, resample.last, resample.quantile, resample.nunique; made faster pandas disabled by default (opt-in instead of opt-out); improved the performance of drop_duplicates by avoiding joins when keep!=False in faster pandas)&lt;/li&gt;
&lt;li&gt;Snowpark Library for Scala and Java 1.18.0 (Added functions.try_to_date and functions.try_to_timestamp overloads with a format parameter, and added Any parameter type support for Column.cast, Column.equal_to, Column.not_equal, Column.gt, Column.lt, Column.leq, Column.geq, Column.equal_null, Column.plus, Column.minus, Column.multiply, Column.divide, and Column.mod)&lt;/li&gt;
&lt;li&gt;Snowpark Connect for Spark 1.7.0 (Snowpark Connect for Spark: Fix Parquet logical types (TIMESTAMP, DATE, DECIMAL) handling. Previously, Parquet files were read using physical types only (such as LongType for timestamps). Logical types can now be interpreted by returning proper types like TimestampType, DateType, and DecimalType. You can enable this by setting Spark configuration snowpark.connect.parquet.useLogicalType to true, use the output schema when converting Spark’s Row to Variant, handle empty JAVA_HOME, fix from_json function for MapType, support of configuration spark.sql.parquet.outputTimestampType for NTZ timezone; Snowpark Submit: Add support for --jars for pyspark workload, fix bug for Snowpark Submit JWT authentication)&lt;/li&gt;
&lt;li&gt;Snowpark Connect for Spark 1.6.0 (Support any type as output or input type in the Scala map and flatmap functions, support joinWith, support any return type in Scala UDFs, support registerJavaFunction)&lt;/li&gt;
&lt;li&gt;Snowpark ML 1.20.0 (New Model Registry features: vLLM is now supported as an inference back-end. The create_service API accepts a new argument, inference_engine_options, which allows you to specify the inference engine to use and other engine-specific options. To specify vLLM, set the inference_engine option to InferenceEngine.VLLM)&lt;/li&gt;
&lt;li&gt;SQLAlchemy 1.8.0 (Added logging of the SQLAlchemy version and pandas (if used), added support for Python 3.14 and earlier)&lt;/li&gt;
&lt;li&gt;Snowflake Connector for SharePoint 1.0.3 (Behavior changes: added progress logs in the event table for the entire ingestion process, unprocessed file updates and inserts are now visible through the PUBLIC.CONNECTOR_ERRORS view)&lt;/li&gt;
&lt;/ul&gt;
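
&lt;p&gt;For the Snowflake CLI change above, a sketch of the two CREATE STREAMLIT forms, assuming app files staged under a hypothetical @app_stage/my_app; the FROM form is what snow streamlit deploy now emits:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;-- Deprecated form: ROOT_LOCATION points at the staged app directory.
-- CREATE STREAMLIT my_app
--     ROOT_LOCATION = '@app_stage/my_app'
--     MAIN_FILE = 'streamlit_app.py'
--     QUERY_WAREHOUSE = my_wh;

-- Updated form emitted by snow streamlit deploy:
CREATE STREAMLIT my_app
    FROM '@app_stage/my_app'
    MAIN_FILE = 'streamlit_app.py'
    QUERY_WAREHOUSE = my_wh;
&lt;/code&gt;&lt;/pre&gt;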

&lt;h4&gt;
  
  
  Bug fixes:
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;.NET Driver 5.2.0 (fixed CRL validation to reject newly downloaded CRLs when their NextUpdate value has expired, added exception handling to the session heartbeat to prevent network errors from disrupting background heartbeat checks, added retry support for HTTP 307/308 status codes, added the ability to specify non-string values in TOML configuration files. For example, port can now be specified as an integer)&lt;/li&gt;
&lt;li&gt;.NET Driver 5.2.1 (fixed the extremely rare case where intermittent network issues during uploads to Azure Blob Storage prevented metadata updates)&lt;/li&gt;
&lt;li&gt;Go Snowflake Driver 1.18.1 (Handled HTTP 307 and 308 responses in drivers to achieve better resiliency to backend errors, created a temporary directory only if needed during file transfers, fixed unnecessary user expansion for file paths during file transfers)&lt;/li&gt;
&lt;li&gt;JDBC Driver 3.28.0 (Fixed an issue where connection and socket timeout were not propagated to the HTTP client, fixed Azure 503 retries and configured it with the putGetMaxRetries parameter)&lt;/li&gt;
&lt;li&gt;Node.js 2.3.2 (Fixed the TypeScript definition for getResultsFromQueryId where queryId should be required and sqlText should be optional, bumped the dependency glob to address &lt;a href="https://nvd.nist.gov/vuln/detail/CVE-2025-64756" rel="noopener noreferrer"&gt;CVE-2025-64756&lt;/a&gt;, fixed a regression introduced in version 2.3.1 where SnowflakeHttpsProxyAgent was instantiated without the new keyword, breaking the driver when both OCSP was enabled and the HTTP_PROXY environment variable was used to set the proxy. This bug did not affect HTTPS_PROXY)&lt;/li&gt;
&lt;li&gt;Node.js 2.3.3 (Replaced the glob dependency used in PUT queries with a custom wildcard matching implementation to address security issues, fixed misleading debug messages during login requests, fixed a bug in the build script that failed to include minicore binaries in the dist folder)&lt;/li&gt;
&lt;li&gt;PHP PDO Driver for Snowflake 3.4.0 (Fixed the aarch64 build on macOS)&lt;/li&gt;
&lt;li&gt;Snowflake CLI 3.13.1 (Fixed an issue with parsing the --vars values provided to snow dbt execute subcommands. This fix allows you to pass variables the same way as you would to the dbt CLI, such as --vars '{"key": "value"}')&lt;/li&gt;
&lt;li&gt;Snowflake Connector for Python 4.1.1 (Relaxed the pandas dependency requirements for Python below 3.12, changed the CRL cache cleanup background task to a daemon thread to avoid blocking the main thread, fixed NO_PROXY issues with PUT operations)&lt;/li&gt;
&lt;li&gt;Snowpark Library for Python 1.43.0 (Bug fixes: fixed a bug where automatically generated temporary objects were not properly cleaned up, fixed a bug in SQL generation when joining two DataFrames created using DataFrame.alias while CTE optimization is enabled, fixed a bug in XMLReader where finding the start position of a row tag could return an incorrect file position; Snowpark pandas API bug fixes: fixed a bug in DataFrameGroupBy.agg where func is a list of tuples used to set the names of the output columns, fixed a bug where converting a modin datetime index with a timezone to a numpy array with np.asarray would cause a TypeError, fixed a bug where Series.isin with a Series argument matched index labels instead of the row position)&lt;/li&gt;
&lt;li&gt;Snowpark Connect for Spark 1.7.0 (Snowpark Connect for Spark: added support for Spark integral types, added support for Scala 2.13, introduced support for integral type overflow behind the snowpark.connect.handleIntegralOverflow configuration, added a configuration for using custom JAR files in UDFs, support Scala UDFs when UDFPacket lacks input types metadata, allow case classes as input and output types in the reduce function; Snowpark Submit: added support for Scala 2.13, added support for the --files argument)&lt;/li&gt;
&lt;li&gt;Snowpark Connect for Spark 1.6.0 (Fix JSON schema inference issue for JSON reads from Scala, change return types of functions returning incorrect integral types, fix update fields bug with struct type, fix unbounded input decoder, fix struct function when the argument is unresolved_star, fix column name for Scala UDFs when the proto contains no function name, add support for PATTERN in Parquet format, handle error and errorIfExists write modes)&lt;/li&gt;
&lt;li&gt;Snowpark ML 1.20.0 (Experiment Tracking bug fixes: exceeding the run metadata size limit in log_metrics or log_params now issues a warning rather than raising an exception)&lt;/li&gt;
&lt;li&gt;SQLAlchemy 1.8.2 (Aligned the supported maximum python version with snowflake-connector-python to 3.13)&lt;/li&gt;
&lt;li&gt;Snowflake Connector for Google Analytics Aggregate Data 2.2.2 (Fixed an issue where the report start date was calculated incorrectly when report ingestion exceeded 2 hours)&lt;/li&gt;
&lt;li&gt;Snowflake Connector for SharePoint 1.0.3 (fixed internal table definitions that were causing connector application upgrade issues, files without extensions no longer break the ingestion process, change tracking on connector tables is no longer disabled when upgrading the connector application, and broken Cortex Search indexes were migrated so that they refresh their data)&lt;/li&gt;
&lt;li&gt;Snowflake Connector for SharePoint 1.0.4 (During the data synchronization of Microsoft 365 groups, group members are now retrieved only once for each group)&lt;/li&gt;
&lt;li&gt;Snowflake Connector for SharePoint 1.0.5 (Fixed an issue that was causing empty values to be returned in the web_url column in the Cortex Search service responses)&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Conclusion
&lt;/h3&gt;

&lt;p&gt;This month’s release notes showcase Snowflake’s strong initiative to unify the entire data lifecycle, from ingestion and transformation to AI-driven application hosting, within a single, governed environment.&lt;/p&gt;

&lt;p&gt;AI &amp;amp; Machine Learning are now maturing into production use, shifting emphasis from experimental features to enterprise-ready governance and usability. The GA of AI_REDACT introduces automated, LLM-powered PII protection for unstructured text, while new Cortex AI Usage History views offer essential visibility into AI activity. With Notebooks in Workspaces and Vector Aggregate Functions, data teams now have a fully managed setup for distributed training and complex ML tasks right alongside their data. Snowflake’s rapid expansion into app development continues, strengthening its role as a backend for low-latency applications.&lt;/p&gt;

&lt;p&gt;The new Snowflake Postgres supports transactional workloads, while GA features like Interactive Tables and Warehouses deliver the performance needed for high-concurrency APIs and dashboards. In data engineering and governance, foundational processes are becoming more intelligent. Schema Evolution for Snowpipe Streaming reduces manual DDL changes during data drift. On the governance side, GA releases like WORM Backups and Cost Anomalies detection provide vital safeguards for compliance and budget control. Enhancements like SnowConvert, which now supports Informatica, DB2, and Oracle migrations, and the high-performance Kafka Connector, make migrating data into Snowflake easier than ever.&lt;/p&gt;

&lt;p&gt;Enjoy the read.&lt;/p&gt;

&lt;p&gt;I am Augusto Rosa, a Snowflake Data Superhero and Snowflake SME. I am also the Head of Data, Cloud, &amp;amp; Security Architecture at &lt;a href="https://archetypeconsulting.com/" rel="noopener noreferrer"&gt;Archetype Consulting&lt;/a&gt;. You can follow me on &lt;a href="https://www.linkedin.com/in/augustorosa/" rel="noopener noreferrer"&gt;LinkedIn&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Subscribe to my Medium blog &lt;a href="https://blog.augustorosa.com/" rel="noopener noreferrer"&gt;https://blog.augustorosa.com&lt;/a&gt; for the most interesting Data Engineering and Snowflake news.&lt;/p&gt;

&lt;h4&gt;
  
  
  Sources:
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://docs.snowflake.com/en/release-notes/preview-features" rel="noopener noreferrer"&gt;https://docs.snowflake.com/en/release-notes/preview-features&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.snowflake.com/en/release-notes/new-features" rel="noopener noreferrer"&gt;https://docs.snowflake.com/en/release-notes/new-features&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.snowflake.com/en/release-notes/sql-improvements" rel="noopener noreferrer"&gt;https://docs.snowflake.com/en/release-notes/sql-improvements&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.snowflake.com/en/release-notes/performance-improvements-2024" rel="noopener noreferrer"&gt;https://docs.snowflake.com/en/release-notes/performance-improvements-2024&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.snowflake.com/en/release-notes/clients-drivers/monthly-releases" rel="noopener noreferrer"&gt;https://docs.snowflake.com/en/release-notes/clients-drivers/monthly-releases&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.snowflake.com/en/release-notes/connectors/gard-2024" rel="noopener noreferrer"&gt;https://docs.snowflake.com/en/release-notes/connectors/&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://marketplace.visualstudio.com/items?itemName=snowflake.snowflake-vsc" rel="noopener noreferrer"&gt;https://marketplace.visualstudio.com/items?itemName=snowflake.snowflake-vsc&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.snowflake.com/en/migrations/snowconvert-docs/general/release-notes/release-notes/README" rel="noopener noreferrer"&gt;https://docs.snowconvert.com/sc/general/release-notes/release-notes&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/apache/polaris/releases" rel="noopener noreferrer"&gt;https://github.com/apache/polaris/releases&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;




</description>
      <category>newfeatures</category>
      <category>ai</category>
      <category>snowflake</category>
      <category>data</category>
    </item>
    <item>
      <title>The Unofficial Snowflake Monthly Release Notes: November 2025</title>
      <dc:creator>augusto kiniama rosa</dc:creator>
      <pubDate>Mon, 01 Dec 2025 12:31:01 +0000</pubDate>
      <link>https://forem.com/kiniama/the-unofficial-snowflake-monthly-release-notes-november-2025-4ohc</link>
      <guid>https://forem.com/kiniama/the-unofficial-snowflake-monthly-release-notes-november-2025-4ohc</guid>
      <description>&lt;h4&gt;
  
  
  Monthly Snowflake Unofficial Release Notes #New features #Previews #Clients #Behavior Changes
&lt;/h4&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnkwld2mxr956csfmjtay.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnkwld2mxr956csfmjtay.png" width="800" height="343"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Welcome to the Unofficial Release Notes for Snowflake for November 2025! You’ll find all the latest features, drivers, and more in one convenient place.&lt;/p&gt;

&lt;p&gt;As an unofficial source, I am excited to share my insights and thoughts. Let’s dive in! You can also find all of Snowflake’s releases &lt;a href="https://medium.com/@augustokrosa/list/snowflake-unofficial-newsletter-list-b97037cfc9e6" rel="noopener noreferrer"&gt;here&lt;/a&gt;. This month, we provide coverage up to release 9.37 (General Availability — GA).&lt;/p&gt;

&lt;p&gt;I would appreciate your suggestions on how to keep improving these combined monthly release notes. Feel free to comment below or chat with me on &lt;a href="https://www.linkedin.com/in/augustorosa/" rel="noopener noreferrer"&gt;&lt;em&gt;LinkedIn&lt;/em&gt;&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Behavior change&lt;/strong&gt; bundle &lt;a href="https://docs.snowflake.com/en/release-notes/bcr-bundles/2025_05_bundle" rel="noopener noreferrer"&gt;2025_05&lt;/a&gt; is generally enabled for all customers, &lt;a href="https://docs.snowflake.com/en/release-notes/bcr-bundles/2025_06_bundle" rel="noopener noreferrer"&gt;2025_06&lt;/a&gt; is enabled by default but can be opted out until next BCR deployment, and &lt;a href="https://docs.snowflake.com/en/release-notes/bcr-bundles/2025_07_bundle" rel="noopener noreferrer"&gt;2025_07&lt;/a&gt; is disabled by default but may be opted in.&lt;/p&gt;

&lt;h3&gt;
  
  
  What’s New in Snowflake
&lt;/h3&gt;

&lt;h4&gt;
  
  
  AI &amp;amp; ML Updates (Cortex, ML, DocumentAI)
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;Import models from Hugging Face to Snowflake (Preview), besides curated Snowflake models or your own, you can now bring any Hugging Face transformer into your Snowflake model registry and use it like other models&lt;/li&gt;
&lt;li&gt;AI_COMPLETE function (GA), generates responses (completions) from prompts using your choice of large language model (LLM). AI_COMPLETE is the most general Cortex AI Function; it is not specialized for particular use cases like summarization or classification, and can generate various responses based on the content and instructions in the prompt. Responses can be plain text or semi-structured data, with prompts containing text and images processed according to plain-English instructions. Examples include explaining concepts to a child, assessing and simplifying reading levels, critiquing writing styles, estimating product ratings, comparing ads, identifying the highest inflation rates in a graph, describing kitchen appliances in images, or extracting stock symbols and prices as JSON (see the sketch after this list)&lt;/li&gt;
&lt;li&gt;Document Processing Playground (Preview), helps users explore Snowflake’s AI document processing via a playground interface. You can upload documents, experiment with AI_EXTRACT and AI_PARSE_DOCUMENT functions, ask questions to extract info, preview layout and OCR results, and copy SQL queries and Python code for worksheets and notebooks&lt;/li&gt;
&lt;li&gt;Cortex Analyst Routing Mode (Preview), a query strategy that prioritizes semantic SQL and defaults to standard SQL only when needed. It simplifies SQL with guardrails from semantic views. Routing mode uses these views for higher accuracy and consistency, ensuring metrics, joins, and filters adhere to governed definitions. Benefits include: consistent metrics, safer defaults, and LLM-friendly shorter SQL&lt;/li&gt;
&lt;li&gt;AI_REDACT function (Preview), detects and redacts personally identifiable information (PII) from unstructured text data using a large language model (LLM). AI_REDACT automatically recognizes various categories of PII (such as name, address, and subcategories like first or last name) and replaces them with placeholder values&lt;/li&gt;
&lt;li&gt;Cortex Agents (GA), agents orchestrate data analysis from structured and unstructured sources to generate insights, plan tasks, use tools, and produce responses. They utilize Cortex Analyst and Cortex Search, along with LLMs, to analyze data through four steps: Planning, Tool use, Reflection, and Monitoring. Planning involves parsing requests, exploring options, splitting tasks, and ensuring compliance. Tools are used for data retrieval. Reflection evaluates results for next steps, while monitoring tracks performance and improves behavior&lt;/li&gt;
&lt;li&gt;Cortex AI Functions (GA), AI Functions enable unifying structured and unstructured analytics within a single platform and accelerate intelligent application development. You can build scalable, multimodal AI pipelines inside Snowflake, supporting text, image, audio, and video intelligence without external services or data movement. These functions will soon be generally available: AI_CLASSIFY categorizes inputs into user-defined categories; AI_TRANSCRIBE transcribes audio/video files, extracting text, timestamps, and speaker info; AI_EMBED generates embedding vectors for similarity search, clustering, and classification; AI_SIMILARITY computes similarity between two inputs without creating explicit embeddings&lt;/li&gt;
&lt;li&gt;Cortex AI_TRANSCRIBE function (GA), brings production-ready, SQL-native transcription for both audio and video content within Snowflake, making it easier to extract and analyze spoken information at scale. The general availability release includes several improvements over the preview release: automatic language detection improvements for higher accuracy across multilingual and mixed-language recordings; support for MP4 and other video files, enabling transcription and analysis of media content for advertising and sponsorship analytics; support for Norwegian and Hebrew, expanding language coverage to 31 languages; and overall transcription quality improvements across diverse environments and acoustic conditions&lt;/li&gt;
&lt;li&gt;Snowflake-managed MCP server (GA), a standards-based interface that allows AI agents to securely retrieve data from Snowflake accounts without additional infrastructure. It offers standardized integration for tool discovery, comprehensive OAuth authentication via Snowflake’s built-in service, and RBAC governance over tool discovery and invocation&lt;/li&gt;
&lt;li&gt;Snowflake Machine Learning Experiments (Preview), allow you to gather training-run information for models and evaluate it via Snowsight. Create experiment runs from training parameters, metrics, and artifacts; experiments compare this data to help you select the right model, and they are flexible, accepting any relevant data for evaluation&lt;/li&gt;
&lt;li&gt;Snowflake Intelligence (GA), is a powerful tool that provides insights and actionability from organizational data. Create charts, get instant answers with natural language, discover trends, and analyze data without needing technical skills or waiting for dashboards. Access and analyze thousands of data sources—structured and unstructured—simultaneously, connecting spreadsheets, documents, images, and databases&lt;/li&gt;
&lt;/ul&gt;
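
&lt;p&gt;A minimal sketch of an AI_COMPLETE call, assuming the model name 'mistral-large2' is available in your region (the model argument comes first, then the prompt):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;-- General-purpose completion: a plain-text answer generated from a prompt.
SELECT AI_COMPLETE(
    'mistral-large2',
    'Explain what a semantic view is in two sentences.'
) AS answer;
&lt;/code&gt;&lt;/pre&gt;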

&lt;h4&gt;
  
  
  Data Lake Updates
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;External query engine support for Apache Iceberg™ tables with Snowflake Horizon Catalog (Preview), query Snowflake-managed Apache Iceberg™ tables with external engines supporting the open Iceberg REST protocol, like Apache Spark™. Apache Polaris™ (incubating) integrated into Horizon Catalog ensures interoperability. You can query tables in a Snowflake account using one Horizon Catalog endpoint and your existing Snowflake users, roles, policies, and authentication&lt;/li&gt;
&lt;li&gt;Apache Iceberg™ tables: Support for bi-directional data access with Microsoft Fabric (Preview), query Snowflake-managed Iceberg tables within Fabric by connecting a Snowflake database. You can choose an existing database or set up a new one. Once connected, Fabric generates an item for accessing your Snowflake-managed tables and allows querying OneLake tables with Iceberg metadata from Snowflake. To query Fabric Iceberg tables registered in Snowflake, set up a REST catalog integration for OneLake table APIs, which supplies table information from Fabric&lt;/li&gt;
&lt;li&gt;Replicate Snowflake-managed Apache Iceberg™ tables (Preview), this process involves copying from a source account to one or multiple target accounts within the same organization. It is seamlessly integrated with Snowflake replication and failover groups to ensure point-in-time consistency of objects on the target account&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  Snowflake Applications (Container Services, Notebooks and Applications, Snowconvert)
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;Tri-Secret Secure data protection for Snowpark Container Services block volumes (GA)&lt;/li&gt;
&lt;li&gt;SnowConvert AI interface improvements, revised for better efficiency, control, and usability, the new interface allows running specific flows like extraction, deployment, and data validation independently. It features a dedicated project page showing available flows and provides granular control, simplifying management of complex workflows&lt;/li&gt;
&lt;li&gt;Snowflake Native Apps support for FedRAMP on AWS for apps with containers (GA), support for FedRAMP on Amazon Web Services enables apps with containers to be distributed to any Snowflake customer within FedRAMP regions. Apps operating in FedRAMP can leverage Snowpark Container Services, including compute pools, services, jobs, and external access integrations&lt;/li&gt;
&lt;li&gt;Improved stage volume implementation in Snowpark Container Services (GA)&lt;/li&gt;
&lt;li&gt;Snowpark Container Services in Google Cloud (GA)&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  Realtime Data (Hybrid, Interactive &amp;amp; Snowflake Postgres)
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;Interactive tables and interactive warehouses (Preview), provide enhanced query performance and real-time data processing capabilities for your Snowflake workloads&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  Data Pipelines/Data Loading/Unloading Updates
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;Snowpipe Streaming with high-performance architecture on Azure and Google Cloud (GA), enables large-scale, real-time data ingestion with high throughput and low latency into Snowflake across major cloud platforms&lt;/li&gt;
&lt;li&gt;Snowflake Openflow — Snowflake Deployments (GA), Openflow deployments can now run on Snowpark Container Services (SPCS)&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  Data Transformations
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;dbt Projects on Snowflake (GA), create, edit, test, run, and manage dbt Core projects using Snowsight Workspaces to handle project files, deploy them as schema-level objects, and use SQL and the Snowflake CLI for deployment in CI/CD workflows. GA features include: faster command execution, with upload times reduced from 6-6.5 minutes to 40-45 seconds; no secondary roles needed for Workspaces; expanded command support, including dbt docs generate, retry commands, and more flags; an enhanced Snowsight view of project DAGs and source code; an improved execution and scheduling UI; easy access to artifacts for debugging and integration; and support for replicating dbt objects to failover accounts (see the sketch after this list)&lt;/li&gt;
&lt;/ul&gt;
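
&lt;p&gt;A sketch of running a deployed project from SQL, assuming a hypothetical project object named &lt;em&gt;my_dbt_project&lt;/em&gt; and that EXECUTE DBT PROJECT accepts dbt-style arguments:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;-- Run the deployed dbt project; ARGS mirrors the dbt CLI.
EXECUTE DBT PROJECT my_dbt_project ARGS = 'run --select staging';
&lt;/code&gt;&lt;/pre&gt;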

&lt;h4&gt;
  
  
  Security, Privacy &amp;amp; Governance Updates
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;Trust Center notifications in Snowsight (GA), enable Trust Center notifications about accounts that haven’t enrolled in multi-factor authentication (MFA)&lt;/li&gt;
&lt;li&gt;Anomaly detection for Data Quality Monitoring (Preview), set up anomaly detection for data quality monitoring so that Snowflake automatically detects unexpected changes in the following dimensions: volume of data in a table, and frequency with which a table is being updated&lt;/li&gt;
&lt;li&gt;Access control enhancements for cost anomalies, a cost anomaly occurs when daily consumption deviates from the expected range. Previously, only system administrators could view anomalies. Now, application roles allow you to grant some users access to view anomalies, while others can view and manage them.&lt;/li&gt;
&lt;li&gt;Excluding objects from sensitive data classification (GA), configure Snowflake to exclude schemas, tables, or columns from automatic classification so that they are skipped during the classification process&lt;/li&gt;
&lt;li&gt;Trust Center extensions (Preview), security partners and customers can use the Snowflake Native App Framework to create Trust Center extensions with additional scanner packages. Users can discover, install, and manage these extensions. Currently, several security partners like ALTR, Hunters, OneTrust, and TrustLogix offer Trust Center extensions in this preview&lt;/li&gt;
&lt;li&gt;Storage lifecycle policies (GA), schema-level objects for managing data retention in Snowflake tables by archiving or expiring rows based on conditions. Storage lifecycle policies offer key benefits: lower storage costs by moving old data to cheaper tiers, regulatory compliance through automated archival and deletion, automated data management with Snowflake-managed compute, and flexible data retrieval by creating tables with selected rows&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  Snowsight Updates
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;Shared Workspaces (Preview), collaborative development in Snowsight allows multiple users to work on the same files and folders within a role-based environment, enhancing team collaboration while ensuring security. Key features include creating shared workspaces, sharing files, managing access through roles, and collaborating with visibility into updates&lt;/li&gt;
&lt;li&gt;Performance Explorer (GA), monitor interactive metrics for SQL workloads. These metrics display the overall health of your Snowflake environment, query activity, and changes to warehouses and tables&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  SQL, Extensibility &amp;amp; Performance Updates
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;Enhanced Query Acceleration Service (QAS) to intelligently determine when queries with LIMIT clauses can be accelerated, more queries with LIMIT clauses, even without ORDER BY, can now be accelerated. QAS automatically identifies when this improves performance, expanding the range of queries that benefit&lt;/li&gt;
&lt;li&gt;Preparation for renaming the Snapshots feature to Backups, the billing line item for the WORM snapshots feature, which is currently in preview, changes from Snapshot to Backup; in the METERING_HISTORY Account Usage view, the value of the SERVICE_TYPE column changes from SNAPSHOT to BACKUP&lt;/li&gt;
&lt;li&gt;New DECFLOAT data type, stores numbers exactly with up to 38 significant digits and uses a dynamic base-10 exponent for large or small values. Unlike FLOAT, which approximates values, DECFLOAT represents exact values at the specified precision; use it when you need exact decimal results and a wide, variable scale in the same column (see the first sketch after this list)&lt;/li&gt;
&lt;li&gt;Support for OAuth when authenticating with GitHub (GA), authenticate using OAuth when you’re integrating a repository on GitHub with Snowflake&lt;/li&gt;
&lt;li&gt;Run Apache Spark™ workloads on Snowflake (GA), connect your existing Spark workloads directly to Snowflake and run them on the Snowflake compute engine. As a result, you can run your PySpark dataframe code with all the benefits of the Snowflake engine&lt;/li&gt;
&lt;li&gt;Support for connecting Scala applications to Snowpark Connect for Spark (Preview), connect your Scala apps to Snowpark Connect for Spark; after configuring the connection and starting the server, run Scala code to connect&lt;/li&gt;
&lt;li&gt;Function: CREATE OR ALTER FUNCTION, updated to support changing the function definition, for example RUNTIME_VERSION, ARTIFACT_REPOSITORY (Python), PACKAGES, IMPORTS, the return type, and the function body (see the second sketch after this list)&lt;/li&gt;
&lt;li&gt;Procedure: CREATE OR ALTER PROCEDURE, updated to support changing the procedure definition, for example RUNTIME_VERSION, IMPORTS, PACKAGES, the return type, the procedure body, and ARTIFACT_REPOSITORY for Python stored procedures&lt;/li&gt;
&lt;/ul&gt;
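
&lt;p&gt;A small sketch of the exactness difference between the two types:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;-- FLOAT is binary and approximates many decimal fractions;
-- DECFLOAT stores the decimal value exactly (up to 38 significant digits).
SELECT
    0.1::FLOAT    + 0.2::FLOAT    AS float_sum,     -- 0.30000000000000004
    0.1::DECFLOAT + 0.2::DECFLOAT AS decfloat_sum;  -- 0.3
&lt;/code&gt;&lt;/pre&gt;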
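
&lt;p&gt;And a sketch of the CREATE OR ALTER FUNCTION change, assuming a hypothetical Python UDF; re-running the statement with a new RUNTIME_VERSION now alters the definition in place:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;CREATE OR ALTER FUNCTION add_one(x INT)
RETURNS INT
LANGUAGE PYTHON
RUNTIME_VERSION = '3.11'  -- bump to '3.12' and re-run to alter in place
HANDLER = 'add_one'
AS
$$
def add_one(x):
    return x + 1
$$;
&lt;/code&gt;&lt;/pre&gt;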

&lt;h4&gt;
  
  
  Collaboration, Data Clean Rooms Updates, Marketplace, Listings &amp;amp; Data Sharing
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;Clean Rooms API Version: 11.9: fixed audience overlap threshold logic, correcting the SQL threshold comparison for accuracy; overlap is now measured as less than or equal to the threshold, with updates to private preview features&lt;/li&gt;
&lt;li&gt;Clean Rooms API Version: 11.8: updates to private preview features&lt;/li&gt;
&lt;li&gt;Clean Rooms API Version: 11.2: New custom templates: The following template flows have been removed from the clean rooms UI and made into custom templates that you can download and run in code: Last-touch attribution, Audience lookalike modeling, Inventory forecasting; and other updates to private preview features&lt;/li&gt;
&lt;li&gt;Support for paid listings in the Kingdom of Saudi Arabia (KSA) (GA), providers can now create paid listings in KSA, and consumers in KSA can access them. For the countries where providers can offer paid listings, see 'Who can provide paid listings'; for the countries where consumers can access paid listings, see 'Supported consumer locations'&lt;/li&gt;
&lt;li&gt;Sharing semantic views, providers can share semantic views in private listings, in public listings on the Snowflake Marketplace, and in organizational listings&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  Open-Source &lt;a href="https://developers.snowflake.com/opensource/" rel="noopener noreferrer"&gt;Updates&lt;/a&gt;
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;terraform-snowflake-provider 2.11.0 (This release introduces new resources for Notebooks and Semantic Views, alongside scheduling enhancements for Tasks, and fixes an Identifier Parsing bug)&lt;/li&gt;
&lt;li&gt;terraform-snowflake-provider 2.10.1 (patch release focuses on stabilizing the authentication policy changes introduced in v2.10.0, fixed an issue with parsing the DESCRIBE output for authentication policies)&lt;/li&gt;
&lt;li&gt;Snowflake VS Code Extension 1.20.1 (Bug Fixes — When Snowflake: Enable Native App Panel is disabled, disable any recursive directory search for snowflake.yml to improve performance in large codebases)&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Client, Drivers, Libraries and Connectors Updates
&lt;/h3&gt;

&lt;h4&gt;
  
  
  New features:
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;Go Snowflake Driver 1.18.0 (Added validation of CRL NextUpdate for freshly downloaded CRLs, added logging of query text and parameters)&lt;/li&gt;
&lt;li&gt;Go Snowflake Driver 1.17.1 (Added telemetry for login requests on supported platforms such as EC2, Lambda, and Azure Functions; you can disable it by setting the SNOWFLAKE_DISABLE_PLATFORM_DETECTION environment variable to true. Also exposed QueryStatus from SnowflakeResult and SnowflakeRows in the GetStatus() function, added the CrlDownloadMaxSize parameter to limit the size of CRL downloads, added official support for RHEL9 (Red Hat Enterprise Linux 9), improved log messages, and deprecated several configuration options and functions. For more information, see the Upcoming Gosnowflake v2 changes)&lt;/li&gt;
&lt;li&gt;ODBC 3.13.0 (Added support for DECFLOAT types, added support for cross-signed chains during OCSP checks, and implemented a new CRL (Certificate Revocation List) checking mechanism; enabling CRLs improves security by checking for revoked certificates during the TLS handshake)&lt;/li&gt;
&lt;li&gt;Snowflake Connector for Python 4.1.0 (Added official support for RHEL9, added the oauth_socket_uri connection parameter to allow users to specify separate server and redirect URIs for a local OAuth server, added the no_proxy parameter for proxy configuration without using environment variables, and added the SNOWFLAKE_AUTH_FORCE_SERVER environment variable to force the driver to receive SAML tokens even without opening a browser when using the externalbrowser authentication method; this allows headless environments that run locally, such as Docker or Airflow, to authenticate the connection using a browser URL)&lt;/li&gt;
&lt;li&gt;.NET Driver 5.1.0 (Added the APPLICATION_PATH to the CLIENT_ENVIRONMENT sent during authentication to identify the application connecting to Snowflake, AWS WIF (Workload Identity Federation) now also checks the application configuration and AWS profile credentials store when determining the current AWS region, added ability for users to configure the maximum number of connections by setting the SERVICE_POINT_CONNECTION_LIMIT property, added the CRLDOWNLOADMAXSIZE connection parameter to limit the maximum size of CRL (certificate revocation list) files downloaded during certificate revocation checks)&lt;/li&gt;
&lt;li&gt;Snowflake CLI 3.13.0 (Added the --decimal-precision global option to allow setting arbitrary precision for Python's Decimal type, added support for the auto_suspend_secs parameter in SPCS service commands (deploy, set, unset) to configure automatic service suspension after a period of inactivity, added the snow dbt describe and snow dbt drop commands, added the snow dbt execute … retry subcommand, and added the following snow dbt deploy command options: --default-target to set a default target, --unset-default-target to clear the default target, --external-access-integration to set external access integrations (needed to pull external dependencies when altering a dbt project object), and --install-local-deps to install dependencies located in the project. Also added support for running Streamlit apps on the SPCS runtime, added grant privileges definitions to the Streamlit snowflake.yml file, updated snowflake-connector-python to version 3.18.0, and relaxed dbt profiles.yml validation rules while adding extra validation for the role specified in profiles.yml)&lt;/li&gt;
&lt;li&gt;Snowflake Python API 1.9.0 (Behavior changes: Event sharing is now mandatory for all event types)&lt;/li&gt;
&lt;li&gt;Snowpark Library for Scala and Java 1.17.0 (New APIs: DataFrame.isEmpty, functions.try_to_timestamp, functions.try_to_date, functions.concat_ws_ignore_nulls, functions.array_flatten, Row.mkString (with overloads for customizable separators and formatting options), and StructType.fieldNames (alias for StructType.names). &lt;strong&gt;Improvements&lt;/strong&gt;: functions.when and Column.when, along with Column.otherwise, now accept any literal arguments (for example, String, int, boolean, or null) in addition to Column instances; added a functions.substring overload with support for start position and length arguments; added functions.lpad and functions.rpad overloads to pad with String or Array[Byte]; added a DataFrame.sort overload with support for variadic arguments; and added DataFrame.show overloads with parameters to control truncation and the number of displayed rows)&lt;/li&gt;
&lt;li&gt;Snowpark Connect for Spark 1.2.0 (New features: relaxed version requirements for grpcio and aiobotocore. Improvements: specify dependency versions in meta.yaml, build a compiled, architecture-specific conda package, ensure all CloudPickleSerializer.loads are not done in TCM, include OSS SQL tests that start with the WITH clause, do not upload Spark jars when running the server for pyt, and update the internal queries count. Snowpark Submit improvements: generate unique workload names)&lt;/li&gt;
&lt;li&gt;Snowflake ML 1.19.0 (Experiment Tracking API (snowflake.ml.ExperimentTracking module) and online feature serving in the Feature Store)&lt;/li&gt;
&lt;li&gt;Snowpipe Streaming SDK 1.1.0 (With the release of SDK version 1.1.0, Snowpipe Streaming’s high-performance architecture is now generally available for all accounts on Microsoft Azure, expanding its availability beyond Amazon Web Services (AWS). Update on November 10, 2025: support for Google Cloud Platform (GCP) has also been added and is now generally available for all accounts)&lt;/li&gt;
&lt;li&gt;Snowflake Connector for Google Analytics Aggregate Data 2.2.1 (Behavior changes: Event sharing is now mandatory for all event types)&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  Bug fixes:
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;Go Snowflake Driver 1.18.0 (Fixed a data race error in tests caused by the platform detection init() function, made secrets detector initialization thread safe and more maintainable)&lt;/li&gt;
&lt;li&gt;Go Snowflake Driver 1.17.1 (Fixed a bug where GCP PUT/GET operations would fail when the connection context was canceled, fixed unsafe reflection of nil pointers on the DECFLOAT function in the bind uploader, added temporary download file cleanup, added a small clarification in the oauth.go example on token escaping, ensured proper permissions for the CRL cache directory, bypassed proxy settings for WIF metadata requests, fixed nil pointer dereferences while calling long-running queries, and moved the keyring-based secure storage manager into a separate file to avoid the need to initialize keyring on Linux)&lt;/li&gt;
&lt;li&gt;.NET Driver 5.1.0 (Renew idle sessions in the pool if keep alive is enabled)&lt;/li&gt;
&lt;li&gt;ODBC 3.13.0 (Removed the trailing null termination character from the JWT header and payload)&lt;/li&gt;
&lt;li&gt;Snowflake Connector for Python 4.1.0 (Fixed a compilation error when building from sources with libc++, added OAUTH_AUTHORIZATION_CODE and OAUTH_CLIENT_CREDENTIALS to the list of authenticators that don’t require users to set the user parameter)&lt;/li&gt;
&lt;li&gt;Snowpark Connect for Spark 1.2.0 (Snowpark Connect for Spark: Fix tests for tcm, fix CSV column name discrepancy from Spark, use type cache for empty frames, resolve Windows OSS runner general issues; Snowpark Submit: Fix staged file reading)&lt;/li&gt;
&lt;li&gt;Snowflake ML 1.19.0 (Model registry bug fixes: get_version_by_alias now requires an exact match of the version’s Snowflake identifier)&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Conclusion
&lt;/h3&gt;

&lt;p&gt;December 2025 signifies a key milestone in Snowflake’s transition from a data warehouse to a comprehensive AI Data Cloud. The main highlight is the advancement of Cortex AI, which now includes Cortex Agents and support for importing Hugging Face models. This shift expands Snowflake's capabilities from basic text generation to sophisticated, agent-driven workflows and customized machine learning processes.&lt;/p&gt;

&lt;p&gt;However, the operational updates are just as important. The general availability of dbt Projects and the expansion of Snowpipe Streaming to Azure and GCP demonstrate Snowflake's focus on enhancing developer experience and ensuring cross-cloud compatibility. Alongside this, the interoperability of Iceberg tables with Microsoft Fabric sends a clear message: Snowflake is creating a platform that emphasizes open standards, unified governance, and smooth integration, no matter where your data or workflows are located.&lt;/p&gt;

&lt;p&gt;Enjoy the read.&lt;/p&gt;

&lt;p&gt;I am Augusto Rosa, a Snowflake Data Superhero and Snowflake SME. I am also the Head of Data, Cloud, &amp;amp; Security Architecture at &lt;a href="https://archetypeconsulting.com/" rel="noopener noreferrer"&gt;Archetype Consulting&lt;/a&gt;. You can follow me on &lt;a href="https://www.linkedin.com/in/augustorosa/" rel="noopener noreferrer"&gt;LinkedIn&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Subscribe to my Medium blog &lt;a href="https://blog.augustorosa.com/" rel="noopener noreferrer"&gt;https://blog.augustorosa.com&lt;/a&gt; for the most interesting Data Engineering and Snowflake news.&lt;/p&gt;

&lt;h4&gt;
  
  
  Sources:
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://docs.snowflake.com/en/release-notes/preview-features" rel="noopener noreferrer"&gt;https://docs.snowflake.com/en/release-notes/preview-features&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.snowflake.com/en/release-notes/new-features" rel="noopener noreferrer"&gt;https://docs.snowflake.com/en/release-notes/new-features&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.snowflake.com/en/release-notes/sql-improvements" rel="noopener noreferrer"&gt;https://docs.snowflake.com/en/release-notes/sql-improvements&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.snowflake.com/en/release-notes/performance-improvements-2024" rel="noopener noreferrer"&gt;https://docs.snowflake.com/en/release-notes/performance-improvements-2024&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.snowflake.com/en/release-notes/clients-drivers/monthly-releases" rel="noopener noreferrer"&gt;https://docs.snowflake.com/en/release-notes/clients-drivers/monthly-releases&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.snowflake.com/en/release-notes/connectors/gard-2024" rel="noopener noreferrer"&gt;https://docs.snowflake.com/en/release-notes/connectors/&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://marketplace.visualstudio.com/items?itemName=snowflake.snowflake-vsc" rel="noopener noreferrer"&gt;https://marketplace.visualstudio.com/items?itemName=snowflake.snowflake-vsc&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.snowconvert.com/sc/general/release-notes/release-notes" rel="noopener noreferrer"&gt;https://docs.snowconvert.com/sc/general/release-notes/release-notes&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;




</description>
      <category>newfeatures</category>
      <category>data</category>
      <category>datasuperhero</category>
      <category>ai</category>
    </item>
    <item>
      <title>Snowflake Intelligence — Manage, Secure &amp; Keep Snowflake Working 100% While Using A Conversational…</title>
      <dc:creator>augusto kiniama rosa</dc:creator>
      <pubDate>Thu, 20 Nov 2025 20:02:06 +0000</pubDate>
      <link>https://forem.com/kiniama/snowflake-intelligence-manage-secure-keep-snowflake-working-100-while-using-a-conversational-6dg</link>
      <guid>https://forem.com/kiniama/snowflake-intelligence-manage-secure-keep-snowflake-working-100-while-using-a-conversational-6dg</guid>
      <description>&lt;h3&gt;
  
  
  Snowflake Intelligence — Manage, Secure &amp;amp; Keep Snowflake Working 100% While Using A Conversational AI Agent
&lt;/h3&gt;

&lt;h4&gt;
  
  
  Use Snowflake Intelligence to Improve Performance for Your Overall Systems, from Costs, Security, and Performance Using Conversational AI Agents
&lt;/h4&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi2ec92hku910p6h98dtn.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi2ec92hku910p6h98dtn.png" width="800" height="456"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I was inspired by &lt;a href="http://twitter.com/umeshpatel_us" rel="noopener noreferrer"&gt;@umeshpatel_us&lt;/a&gt; to write this story. He went through the work of creating a &lt;a href="https://medium.com/snowflake/build-snowflake-cost-savings-and-performance-agent-in-5-minutes-854427c0fdd8" rel="noopener noreferrer"&gt;Semantic Layer&lt;/a&gt; for the Snowflake Account Usage schemas, and I wonder when Snowflake will simply provide such a Semantic View for us. What else could I do with that power and information? Further, my co-worker &lt;a href="https://medium.com/u/f71a9e05e22a" rel="noopener noreferrer"&gt;Farris Jaber&lt;/a&gt; wrote a Snowflake Intelligence &lt;a href="https://blog.archetypeconsulting.com/whats-special-about-snowflake-intelligence-and-how-it-ties-together-all-aspects-of-the-snowflake-c6ee99cdd80c" rel="noopener noreferrer"&gt;What It Is article&lt;/a&gt;, which further prompted me to explore how we can use this power to diagnose issues, secure the platform, and keep costs under control with Snowflake Intelligence.&lt;/p&gt;

&lt;p&gt;The basis of it, as per Umesh's article, is mapping a semantic layer. I expanded on his semantic layer and created two more to provide broader coverage of the Snowflake Account Usage views. More can certainly be done to reach complete coverage, but it already gives you real information through a conversational AI agent and enables a clear Snowflake maintenance pattern covering cost control, maintenance, and security.&lt;/p&gt;

&lt;p&gt;Take a moment to read &lt;a href="https://blog.archetypeconsulting.com/whats-special-about-snowflake-intelligence-and-how-it-ties-together-all-aspects-of-the-snowflake-c6ee99cdd80c" rel="noopener noreferrer"&gt;Farris Jaber's&lt;/a&gt; article, which provides an excellent overview of Snowflake Intelligence. This fantastic AI service allows users to effortlessly ask questions about their data using natural language, making data access more intuitive. It harnesses the power of agents driven by Large Language Models (LLMs) and equipped with various tools to access and interpret data. The system is thoughtfully designed to understand your queries, determine the most effective way to retrieve information, perform the necessary actions, and present the results in a clear and user-friendly manner—all with charts and insightful summaries to help you easily understand your data.&lt;/p&gt;

&lt;p&gt;I initially decided to expand the work, but after some trial and error, I landed on creating three agents with attached Semantic Views. These specialized agents are designed to give Snowflake Administrators and platform engineers direct, conversational control over the platform’s most critical pillars: cost, security, and performance. They unlock fast, targeted diagnostics and insights that were previously achievable only through complex, manual queries.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. Generalist Agent (includes everything below):&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;All-in-one cross-domain analysis&lt;/li&gt;
&lt;li&gt;20 ACCOUNT_USAGE tables&lt;/li&gt;
&lt;li&gt;94 metrics spanning all operational areas&lt;/li&gt;
&lt;li&gt;Best for: holistic insights, cross-domain correlations&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;2. Cost and Performance Specialist:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Fast, focused cost and performance queries&lt;/li&gt;
&lt;li&gt;Query execution, credits, resource usage&lt;/li&gt;
&lt;li&gt;Best for: quick performance checks, cost analysis&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;3. Security Specialist:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Dedicated security and authentication monitoring&lt;/li&gt;
&lt;li&gt;Login tracking, MFA adoption, threats&lt;/li&gt;
&lt;li&gt;Best for: security audits, compliance checks&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This segmented approach allows a Snowflake Administrator to quickly focus on a specific domain — whether that’s an urgent security audit or a deep dive into cost-saving opportunities — using natural language queries tailored for that purpose.&lt;/p&gt;

&lt;p&gt;Additionally, the semantic views can be queried with SQL directly, a capability that is still rolling out for broader BI tool integration. As of today, only Sigma Computing and Hex offer native integration with these Semantic Views, while support for other major tools like Power BI and Tableau is actively being developed. Check out this &lt;a href="https://medium.com/snowflake/automatically-generate-accurate-semantic-models-using-cursor-738cf4162c5d" rel="noopener noreferrer"&gt;article&lt;/a&gt; on building Semantic Views with Cursor by &lt;a href="https://medium.com/@uniquejtx_3744?source=post_page---byline--738cf4162c5d---------------------------------------" rel="noopener noreferrer"&gt;@uniquejtx_3744&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;For your information, I am currently pulling information from all of these views:&lt;/p&gt;

&lt;h4&gt;
  
  
  Account Usage Views that I used:
&lt;/h4&gt;

&lt;p&gt;&lt;strong&gt;Query &amp;amp; Performance (2 views)&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;QUERY_HISTORY (alias: qh)&lt;/li&gt;
&lt;li&gt;QUERY_ATTRIBUTION_HISTORY (alias: qa)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Security &amp;amp; Authentication (1 view)&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;LOGIN_HISTORY (alias: login)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Cost &amp;amp; Resource Usage (4 views)&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;WAREHOUSE_METERING_HISTORY (alias: wh)&lt;/li&gt;
&lt;li&gt;STORAGE_USAGE (alias: storage)&lt;/li&gt;
&lt;li&gt;DATABASE_STORAGE_USAGE_HISTORY (alias: db_storage)&lt;/li&gt;
&lt;li&gt;STAGE_STORAGE_USAGE_HISTORY (alias: stage_storage)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Data Governance (4 views)&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;USERS (alias: users)&lt;/li&gt;
&lt;li&gt;ROLES (alias: roles)&lt;/li&gt;
&lt;li&gt;GRANTS_TO_USERS (alias: grants_users)&lt;/li&gt;
&lt;li&gt;GRANTS_TO_ROLES (alias: grants_roles)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Operations &amp;amp; Monitoring (2 views)&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;TASK_HISTORY (alias: task_hist)&lt;/li&gt;
&lt;li&gt;SERVERLESS_TASK_HISTORY (alias: serverless_task)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Advanced Operations (7 views)&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;PIPE_USAGE_HISTORY (alias: pipe_usage)&lt;/li&gt;
&lt;li&gt;AUTOMATIC_CLUSTERING_HISTORY (alias: clustering)&lt;/li&gt;
&lt;li&gt;MATERIALIZED_VIEW_REFRESH_HISTORY (alias: mv_refresh)&lt;/li&gt;
&lt;li&gt;REPLICATION_USAGE_HISTORY (alias: replication)&lt;/li&gt;
&lt;li&gt;DATA_TRANSFER_HISTORY (alias: data_transfer)&lt;/li&gt;
&lt;li&gt;WAREHOUSE_LOAD_HISTORY (alias: wh_load)&lt;/li&gt;
&lt;li&gt;METERING_DAILY_HISTORY (alias: metering_daily)&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Let’s Get Setup
&lt;/h3&gt;

&lt;p&gt;I copied Umesh's scripts, set up a new &lt;a href="https://github.com/augustorosa/cortex-snowflake-account-security-agent/" rel="noopener noreferrer"&gt;GitHub repo&lt;/a&gt;, and expanded on it. I added more Snowflake Account Usage views, and created three new Agents.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Prerequisites:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Snowflake account with ACCOUNTADMIN access&lt;/li&gt;
&lt;li&gt;Cortex features enabled in your region&lt;/li&gt;
&lt;/ul&gt;
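
&lt;p&gt;To confirm the second prerequisite, here is a quick smoke test that Cortex functions respond in your region; the model name is only an example of mine, so substitute any model available to your account:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;-- Hypothetical smoke test (model name is just an example).
-- If Cortex is enabled in your region, this returns a completion
-- instead of an error.
SELECT SNOWFLAKE.CORTEX.COMPLETE('llama3.1-8b', 'Reply with the word OK');
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;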

&lt;p&gt;More importantly, there are eight steps (two of them optional) to create this lab:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Installation Steps

# 1. Clone repository
git clone https://github.com/augustorosa/cortex-snowflake-account-security-agent
cd cortex-snowflake-account-security-agent

# 2. Configure SnowSQL connection (optional)
snowsql -a &amp;lt;account&amp;gt; -u &amp;lt;username&amp;gt;

# 3. Deploy foundation (2 min)
snowsql -f "scripts/1. lab foundations.sql"

# 4. Deploy schema (1 min)
snowsql -f "scripts/2. SNOWFLAKE_INTELLIGENCE.TOOLS schema.sql"

# 5. Deploy specialist agents (3 min)
snowsql -f "scripts/2.2 COST_PERFORMANCE_SVW_SPECIALIST.sql"
snowsql -f "scripts/5.2 COST_PERFORMANCE_AGENT_SPECIALIST.sql"
snowsql -f "scripts/2.3 SECURITY_MONITORING_SVW_SPECIALIST.sql"
snowsql -f "scripts/5.3 SECURITY_MONITORING_AGENT_SPECIALIST.sql"

# 6. Deploy generalist agent (5 min) ⭐ ALL 6 PHASES
snowsql -f "scripts/2.4 SNOWFLAKE_MAINTENANCE_SVW_GENERALIST.sql"
snowsql -f "scripts/5.4 SNOWFLAKE_MAINTENANCE_AGENT_GENERALIST.sql"

# 7. Optional: Email integration (2 min)
snowsql -f "scripts/3. email integration.sql"

# 8. Run automated tests (2 min)
snowsql -f "scripts/TEST_ALL_PHASES.sql" -o output_format=table

You have successfully deployed.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;For details, refer to the GitHub repository; however, I have included my Generalist semantic view and Agent below.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;-- ============================================================================
-- SNOWFLAKE MAINTENANCE SEMANTIC VIEW (GENERALIST)
-- ============================================================================
-- Comprehensive unified semantic view combining cost, performance, security,
-- governance, and operations monitoring across all Snowflake ACCOUNT_USAGE views
--
-- ARCHITECTURE:
-- - This is the GENERALIST semantic view for comprehensive cross-domain analysis
-- - Complements specialized views: 2.2 (Cost/Performance), 2.3 (Security)
--
-- DATA COVERAGE:
-- • Query &amp;amp; Performance
-- • Security &amp;amp; Authentication  
-- • Cost &amp;amp; Resource Usage
-- • Data Governance
-- • Operations &amp;amp; Monitoring
-- • Advanced Operations
-- ============================================================================

USE ROLE cortex_role;
USE SNOWFLAKE_INTELLIGENCE.TOOLS;

-- ============================================================================
-- COMPREHENSIVE SNOWFLAKE OPERATIONS SEMANTIC VIEW
-- ============================================================================
-- Includes: 20 ACCOUNT_USAGE tables, 35 dimensions, 94 metrics
-- 
-- Query &amp;amp; Performance: QUERY_HISTORY, QUERY_ATTRIBUTION_HISTORY
-- Security: LOGIN_HISTORY
-- Cost &amp;amp; Storage: WAREHOUSE_METERING, STORAGE_USAGE, DB/STAGE_STORAGE
-- Governance: USERS, ROLES, GRANTS
-- Operations: TASK_HISTORY, SERVERLESS_TASK_HISTORY
-- Advanced: PIPE, CLUSTERING, MV, REPLICATION, TRANSFER, LOAD, METERING
-- ============================================================================

CREATE OR REPLACE SEMANTIC VIEW 
    SNOWFLAKE_INTELLIGENCE.TOOLS.SNOWFLAKE_MAINTENANCE_SVW
TABLES (
  qh AS SNOWFLAKE.ACCOUNT_USAGE.QUERY_HISTORY,
  qa AS SNOWFLAKE.ACCOUNT_USAGE.QUERY_ATTRIBUTION_HISTORY,
  login AS SNOWFLAKE.ACCOUNT_USAGE.LOGIN_HISTORY,
  wh AS SNOWFLAKE.ACCOUNT_USAGE.WAREHOUSE_METERING_HISTORY,
  storage AS SNOWFLAKE.ACCOUNT_USAGE.STORAGE_USAGE,
  db_storage AS SNOWFLAKE.ACCOUNT_USAGE.DATABASE_STORAGE_USAGE_HISTORY,
  stage_storage AS SNOWFLAKE.ACCOUNT_USAGE.STAGE_STORAGE_USAGE_HISTORY,
  users AS SNOWFLAKE.ACCOUNT_USAGE.USERS,
  roles AS SNOWFLAKE.ACCOUNT_USAGE.ROLES,
  grants_users AS SNOWFLAKE.ACCOUNT_USAGE.GRANTS_TO_USERS,
  grants_roles AS SNOWFLAKE.ACCOUNT_USAGE.GRANTS_TO_ROLES,
  task_hist AS SNOWFLAKE.ACCOUNT_USAGE.TASK_HISTORY,
  serverless_task AS SNOWFLAKE.ACCOUNT_USAGE.SERVERLESS_TASK_HISTORY,
  pipe_usage AS SNOWFLAKE.ACCOUNT_USAGE.PIPE_USAGE_HISTORY,
  clustering AS SNOWFLAKE.ACCOUNT_USAGE.AUTOMATIC_CLUSTERING_HISTORY,
  mv_refresh AS SNOWFLAKE.ACCOUNT_USAGE.MATERIALIZED_VIEW_REFRESH_HISTORY,
  replication AS SNOWFLAKE.ACCOUNT_USAGE.REPLICATION_USAGE_HISTORY,
  data_transfer AS SNOWFLAKE.ACCOUNT_USAGE.DATA_TRANSFER_HISTORY,
  wh_load AS SNOWFLAKE.ACCOUNT_USAGE.WAREHOUSE_LOAD_HISTORY,
  metering_daily AS SNOWFLAKE.ACCOUNT_USAGE.METERING_DAILY_HISTORY
)
-- ============================================================================
-- DIMENSIONS: Categorical attributes for filtering and grouping
-- ============================================================================
DIMENSIONS (
  -- === QUERY HISTORY DIMENSIONS ===
  qh.QUERY_ID AS query_id COMMENT='Unique identifier for each query',
  qh.QUERY_TEXT AS query_text COMMENT='Full SQL text of the query',
  qh.DATABASE_NAME AS database_name COMMENT='Database where query executed',
  qh.SCHEMA_NAME AS schema_name COMMENT='Schema where query executed',
  qh.QUERY_TYPE AS query_type COMMENT='Type of query (SELECT, INSERT, etc)',
  qh.SESSION_ID AS session_id COMMENT='Session identifier',
  qh.USER_NAME AS user_name COMMENT='User who executed the query',
  qh.ROLE_NAME AS role_name COMMENT='Role used for query execution',
  qh.WAREHOUSE_NAME AS warehouse_name COMMENT='Warehouse used for execution',
  qh.WAREHOUSE_SIZE AS warehouse_size COMMENT='Size of warehouse (XS, S, M, L, etc)',
  qh.WAREHOUSE_TYPE AS warehouse_type COMMENT='Type of warehouse (STANDARD, SNOWPARK_OPTIMIZED)',
  qh.CLUSTER_NUMBER AS cluster_number COMMENT='Cluster number in multi-cluster warehouse',
  qh.QUERY_TAG AS query_tag COMMENT='User-defined query tag',
  qh.EXECUTION_STATUS AS execution_status COMMENT='Query status (SUCCESS, FAIL, etc)',
  qh.ERROR_CODE AS error_code COMMENT='Error code if query failed',
  qh.ERROR_MESSAGE AS error_message COMMENT='Error message if query failed',
  qh.START_TIME AS start_time COMMENT='Query start timestamp',
  qh.END_TIME AS end_time COMMENT='Query end timestamp',
  qh.QUERY_HASH AS query_hash COMMENT='Hash of query structure',
  qh.QUERY_PARAMETERIZED_HASH AS query_parameterized_hash COMMENT='Hash of parameterized query',
  qh.IS_CLIENT_GENERATED_STATEMENT AS is_client_generated_statement COMMENT='Whether query was client-generated',

  -- === QUERY ATTRIBUTION DIMENSIONS (only unique text/categorical columns) ===
  qa.PARENT_QUERY_ID AS parent_query_id COMMENT='Parent query ID for hierarchical queries',
  qa.ROOT_QUERY_ID AS root_query_id COMMENT='Root query ID in query hierarchy',

  -- === LOGIN HISTORY DIMENSIONS (Phase 2 - Security &amp;amp; Authentication) ===
  -- Note: Using exact column names as aliases to avoid parsing conflicts
  login.EVENT_TIMESTAMP AS event_timestamp COMMENT='When the login attempt occurred',
  login.EVENT_TYPE AS event_type COMMENT='Event type (LOGIN)',
  login.CLIENT_IP AS client_ip COMMENT='IP address of login attempt',
  login.REPORTED_CLIENT_TYPE AS reported_client_type COMMENT='Client software type',
  login.REPORTED_CLIENT_VERSION AS reported_client_version COMMENT='Client software version',
  login.FIRST_AUTHENTICATION_FACTOR AS first_authentication_factor COMMENT='First authentication method',
  login.SECOND_AUTHENTICATION_FACTOR AS second_authentication_factor COMMENT='Second authentication factor (MFA)',
  login.IS_SUCCESS AS is_success COMMENT='YES if successful, NO if failed',
  login.ERROR_CODE AS error_code COMMENT='Error code if login failed',
  login.ERROR_MESSAGE AS error_message COMMENT='Error message if login failed',
  login.CONNECTION AS connection COMMENT='Connection name used',

  -- === WAREHOUSE METERING DIMENSIONS (Phase 3 - Cost Tracking) ===
  -- Note: All warehouse metering dimensions cause conflicts
  -- WAREHOUSE_ID exists in QUERY_HISTORY, START_TIME/END_TIME exist in QUERY_HISTORY
  -- Use QUERY_HISTORY dimensions for warehouse analysis
  -- WAREHOUSE_METERING provides credit METRICS only

  -- === STORAGE USAGE DIMENSIONS (Phase 3 - Storage Tracking) ===
  storage.USAGE_DATE AS usage_date COMMENT='Date of storage measurement',

  -- === DATABASE STORAGE DIMENSIONS (Phase 3) ===
  db_storage.DATABASE_NAME AS database_name COMMENT='Database name from storage tracking'

  -- === GOVERNANCE DIMENSIONS (Phase 4) ===
  -- Note: USERS, ROLES, and GRANTS tables have too many conflicting column names
  -- (NAME, USER_NAME, ROLE_NAME, EMAIL, etc. all cause parsing conflicts)
  -- These tables provide METRICS only for governance analytics:
  -- - Total users, MFA adoption rate
  -- - Total roles  
  -- - Grant counts
  -- Use QUERY_HISTORY dimensions (user_name, role_name) for user/role analysis

  -- === TASK OPERATIONS (Phase 5) ===
  -- Note: TASK_HISTORY and SERVERLESS_TASK_HISTORY have too many conflicts
  -- (NAME, TASK_NAME, STATE, START_TIME, END_TIME, SCHEDULED_TIME, QUERY_ID, etc.)
  -- These tables provide METRICS only for task monitoring:
  -- - Total task runs, success/failure rates
  -- - Serverless task credits
  -- Use QUERY_HISTORY for query-level task analysis (via task_hist.QUERY_ID join)

  -- === ADVANCED OPERATIONS (Phase 6) ===
  -- Note: Phase 6 tables (PIPE_USAGE, CLUSTERING, MV_REFRESH, REPLICATION, etc.)
  -- Expected to have similar conflicts (NAME, START_TIME, END_TIME, etc.)
  -- Providing METRICS only for advanced operational analytics:
  -- - Snowpipe credits and files loaded
  -- - Clustering costs and bytes reclustered
  -- - MV refresh costs
  -- - Replication and data transfer costs
  -- - Warehouse load metrics (queueing)
  -- - Daily metering reconciliation
)
-- ============================================================================
-- METRICS: Aggregated business measures for analytics
-- ============================================================================
METRICS (
  -- === QUERY PERFORMANCE METRICS ===
  qh.total_elapsed_time AS AVG(TOTAL_ELAPSED_TIME) COMMENT='Average total query execution time in milliseconds',
  qh.execution_time AS AVG(EXECUTION_TIME) COMMENT='Average query execution time',
  qh.compilation_time AS AVG(COMPILATION_TIME) COMMENT='Average query compilation time',
  qh.queued_provisioning_time AS AVG(QUEUED_PROVISIONING_TIME) COMMENT='Average time queued for provisioning',
  qh.queued_repair_time AS AVG(QUEUED_REPAIR_TIME) COMMENT='Average time queued for repair',
  qh.queued_overload_time AS AVG(QUEUED_OVERLOAD_TIME) COMMENT='Average time queued due to overload',

  -- === DATA VOLUME METRICS ===
  qh.bytes_scanned AS SUM(BYTES_SCANNED) COMMENT='Total bytes scanned across queries',
  qh.bytes_written AS SUM(BYTES_WRITTEN) COMMENT='Total bytes written',
  qh.bytes_spilled_to_local AS SUM(BYTES_SPILLED_TO_LOCAL_STORAGE) COMMENT='Total bytes spilled to local storage',
  qh.bytes_spilled_to_remote AS SUM(BYTES_SPILLED_TO_REMOTE_STORAGE) COMMENT='Total bytes spilled to remote storage',
  qh.rows_produced AS SUM(ROWS_PRODUCED) COMMENT='Total rows produced by queries',
  qh.rows_inserted AS SUM(ROWS_INSERTED) COMMENT='Total rows inserted',
  qh.rows_updated AS SUM(ROWS_UPDATED) COMMENT='Total rows updated',
  qh.rows_deleted AS SUM(ROWS_DELETED) COMMENT='Total rows deleted',

  -- === PARTITION &amp;amp; CACHE METRICS ===
  qh.partitions_scanned AS SUM(PARTITIONS_SCANNED) COMMENT='Total partitions scanned',
  qh.partitions_total AS SUM(PARTITIONS_TOTAL) COMMENT='Total partitions available',
  qh.percentage_scanned_from_cache AS AVG(PERCENTAGE_SCANNED_FROM_CACHE) COMMENT='Average percentage of data from cache',

  -- === COST METRICS (from both tables) ===
  qh.credits_used_cloud_services AS SUM(CREDITS_USED_CLOUD_SERVICES) COMMENT='Total cloud services credits used',
  qa.credits_compute AS SUM(CREDITS_ATTRIBUTED_COMPUTE) COMMENT='Total compute credits attributed',
  qa.credits_acceleration AS SUM(CREDITS_USED_QUERY_ACCELERATION) COMMENT='Total query acceleration credits used',

  -- === QUERY COUNT METRICS ===
  qh.total_queries AS COUNT(*) COMMENT='Total number of queries',
  qh.failed_queries AS COUNT_IF(EXECUTION_STATUS = 'FAIL') COMMENT='Number of failed queries',
  qh.successful_queries AS COUNT_IF(EXECUTION_STATUS = 'SUCCESS') COMMENT='Number of successful queries',

  -- === LOGIN SECURITY METRICS (Phase 2) ===
  login.total_login_attempts AS COUNT(*) COMMENT='Total login attempts',
  login.failed_login_attempts AS COUNT(CASE WHEN login.IS_SUCCESS = 'NO' THEN 1 END) COMMENT='Failed login count',
  login.successful_login_attempts AS COUNT(CASE WHEN login.IS_SUCCESS = 'YES' THEN 1 END) COMMENT='Successful login count',
  login.unique_login_users AS COUNT(DISTINCT login.USER_NAME) COMMENT='Distinct users attempting login',
  login.unique_login_ips AS COUNT(DISTINCT login.CLIENT_IP) COMMENT='Distinct IP addresses',
  login.mfa_login_usage AS COUNT(CASE WHEN login.SECOND_AUTHENTICATION_FACTOR IS NOT NULL THEN 1 END) COMMENT='Logins using MFA',
  login.users_with_login_failures AS COUNT(DISTINCT CASE WHEN login.IS_SUCCESS = 'NO' THEN login.USER_NAME END) COMMENT='Users with failed login attempts',
  login.ips_with_login_failures AS COUNT(DISTINCT CASE WHEN login.IS_SUCCESS = 'NO' THEN login.CLIENT_IP END) COMMENT='IPs with failed login attempts',
  login.login_success_rate_pct AS (
    CAST(COUNT(CASE WHEN login.IS_SUCCESS = 'YES' THEN 1 END) AS FLOAT) * 100.0 / NULLIF(COUNT(*), 0)
  ) COMMENT='Login success rate percentage',
  login.mfa_adoption_pct AS (
    CAST(COUNT(CASE WHEN login.SECOND_AUTHENTICATION_FACTOR IS NOT NULL THEN 1 END) AS FLOAT) * 100.0 / 
    NULLIF(COUNT(CASE WHEN login.IS_SUCCESS = 'YES' THEN 1 END), 0)
  ) COMMENT='Percentage of successful logins using MFA',

  -- === WAREHOUSE METERING METRICS (Phase 3 - Credit Usage) ===
  wh.total_credits_used AS SUM(wh.CREDITS_USED) COMMENT='Total credits used by warehouses',
  wh.total_credits_compute AS SUM(wh.CREDITS_USED_COMPUTE) COMMENT='Total compute credits used',
  wh.total_credits_cloud_services AS SUM(wh.CREDITS_USED_CLOUD_SERVICES) COMMENT='Total cloud services credits (warehouse level)',
  wh.avg_credits_per_hour AS AVG(wh.CREDITS_USED) COMMENT='Average credits per metering hour',

  -- === STORAGE USAGE METRICS (Phase 3 - Storage Costs) ===
  storage.total_storage_bytes AS SUM(storage.STORAGE_BYTES) COMMENT='Total table storage in bytes',
  storage.total_stage_bytes AS SUM(storage.STAGE_BYTES) COMMENT='Total stage storage in bytes',
  storage.total_failsafe_bytes AS SUM(storage.FAILSAFE_BYTES) COMMENT='Total failsafe storage in bytes',
  storage.total_hybrid_table_bytes AS SUM(storage.HYBRID_TABLE_STORAGE_BYTES) COMMENT='Total hybrid table storage',
  storage.avg_storage_bytes AS AVG(storage.STORAGE_BYTES) COMMENT='Average daily storage',

  -- === DATABASE STORAGE METRICS (Phase 3) ===
  db_storage.avg_database_bytes AS AVG(db_storage.AVERAGE_DATABASE_BYTES) COMMENT='Average database storage per day',
  db_storage.avg_failsafe_bytes AS AVG(db_storage.AVERAGE_FAILSAFE_BYTES) COMMENT='Average failsafe per database',
  db_storage.total_database_storage AS SUM(db_storage.AVERAGE_DATABASE_BYTES) COMMENT='Total database storage across all DBs',

  -- === STAGE STORAGE METRICS (Phase 3) ===
  stage_storage.avg_stage_bytes AS AVG(stage_storage.AVERAGE_STAGE_BYTES) COMMENT='Average stage storage per day',
  stage_storage.total_stage_storage AS SUM(stage_storage.AVERAGE_STAGE_BYTES) COMMENT='Total stage storage',

  -- === USER &amp;amp; ROLE METRICS (Phase 4 - Governance) ===
  users.total_users AS COUNT(*) COMMENT='Total number of users',
  users.active_users AS COUNT_IF(users.DISABLED IS NULL OR users.DISABLED = FALSE) COMMENT='Count of active users',
  users.mfa_enabled_users AS COUNT_IF(users.HAS_MFA = TRUE) COMMENT='Users with MFA enabled',
  users.mfa_adoption_rate AS (
    CAST(COUNT_IF(users.HAS_MFA = TRUE) AS FLOAT) * 100.0 / NULLIF(COUNT(*), 0)
  ) COMMENT='Percentage of users with MFA',
  roles.total_roles AS COUNT(*) COMMENT='Total number of roles',

  -- === GRANTS METRICS (Phase 4 - Permissions) ===
  grants_users.total_role_grants_to_users AS COUNT(*) COMMENT='Total role grants to users',
  grants_users.unique_users_with_roles AS COUNT(DISTINCT grants_users.GRANTEE_NAME) COMMENT='Users with role grants',
  grants_roles.total_privilege_grants AS COUNT(*) COMMENT='Total privilege grants to roles',
  grants_roles.unique_roles_with_grants AS COUNT(DISTINCT grants_roles.GRANTEE_NAME) COMMENT='Roles with privilege grants',

  -- === TASK EXECUTION METRICS (Phase 5 - Operations) ===
  task_hist.total_task_runs AS COUNT(*) COMMENT='Total task executions',
  task_hist.successful_tasks AS COUNT_IF(task_hist.STATE = 'SUCCEEDED') COMMENT='Successful task runs',
  task_hist.failed_tasks AS COUNT_IF(task_hist.STATE = 'FAILED') COMMENT='Failed task runs',
  task_hist.unique_tasks AS COUNT(DISTINCT task_hist.NAME) COMMENT='Distinct tasks executed',
  task_hist.task_success_rate AS (
    CAST(COUNT_IF(task_hist.STATE = 'SUCCEEDED') AS FLOAT) * 100.0 / NULLIF(COUNT(*), 0)
  ) COMMENT='Task success rate percentage',

  -- === SERVERLESS TASK METRICS (Phase 5 - Serverless Costs) ===
  serverless_task.total_serverless_credits AS SUM(serverless_task.CREDITS_USED) COMMENT='Total serverless task credits',
  serverless_task.avg_serverless_credits AS AVG(serverless_task.CREDITS_USED) COMMENT='Average credits per serverless task',
  serverless_task.serverless_task_count AS COUNT(*) COMMENT='Count of serverless task executions',
  serverless_task.unique_serverless_tasks AS COUNT(DISTINCT serverless_task.TASK_NAME) COMMENT='Distinct serverless tasks',

  -- === SNOWPIPE METRICS (Phase 6 - Data Loading) ===
  pipe_usage.total_pipe_credits AS SUM(pipe_usage.CREDITS_USED) COMMENT='Total Snowpipe credits consumed',
  pipe_usage.total_files_inserted AS SUM(pipe_usage.FILES_INSERTED) COMMENT='Total files loaded via Snowpipe',
  pipe_usage.total_bytes_inserted AS SUM(pipe_usage.BYTES_INSERTED) COMMENT='Total bytes loaded via Snowpipe',
  pipe_usage.avg_pipe_credits AS AVG(pipe_usage.CREDITS_USED) COMMENT='Average Snowpipe credits per execution',

  -- === CLUSTERING METRICS (Phase 6 - Maintenance Costs) ===
  clustering.total_clustering_credits AS SUM(clustering.CREDITS_USED) COMMENT='Total automatic clustering credits',
  clustering.total_bytes_reclustered AS SUM(clustering.NUM_BYTES_RECLUSTERED) COMMENT='Total bytes reclustered',
  clustering.total_rows_reclustered AS SUM(clustering.NUM_ROWS_RECLUSTERED) COMMENT='Total rows reclustered',
  clustering.avg_clustering_credits AS AVG(clustering.CREDITS_USED) COMMENT='Average clustering credits per operation',

  -- === MATERIALIZED VIEW METRICS (Phase 6 - MV Costs) ===
  mv_refresh.total_mv_credits AS SUM(mv_refresh.CREDITS_USED) COMMENT='Total MV refresh credits',
  mv_refresh.total_mv_refreshes AS COUNT(*) COMMENT='Total MV refresh operations',
  mv_refresh.avg_mv_credits AS AVG(mv_refresh.CREDITS_USED) COMMENT='Average credits per MV refresh',

  -- === REPLICATION METRICS (Phase 6 - Replication Costs) ===
  replication.total_replication_credits AS SUM(replication.CREDITS_USED) COMMENT='Total replication credits',
  replication.total_bytes_replicated AS SUM(replication.BYTES_TRANSFERRED) COMMENT='Total bytes replicated',
  replication.avg_replication_credits AS AVG(replication.CREDITS_USED) COMMENT='Average replication credits',

  -- === DATA TRANSFER METRICS (Phase 6 - Transfer Costs) ===
  data_transfer.total_transfer_bytes AS SUM(data_transfer.BYTES_TRANSFERRED) COMMENT='Total bytes transferred cross-region/cloud',
  data_transfer.avg_transfer_bytes AS AVG(data_transfer.BYTES_TRANSFERRED) COMMENT='Average bytes per transfer',
  data_transfer.total_transfer_operations AS COUNT(*) COMMENT='Total data transfer operations',

  -- === WAREHOUSE LOAD METRICS (Phase 6 - Performance) ===
  wh_load.avg_running_queries AS AVG(wh_load.AVG_RUNNING) COMMENT='Average running queries',
  wh_load.avg_queued_load AS AVG(wh_load.AVG_QUEUED_LOAD) COMMENT='Average queued query load',
  wh_load.avg_queued_provisioning AS AVG(wh_load.AVG_QUEUED_PROVISIONING) COMMENT='Average provisioning queue',
  wh_load.avg_blocked_queries AS AVG(wh_load.AVG_BLOCKED) COMMENT='Average blocked queries',

  -- === DAILY METERING METRICS (Phase 6 - Reconciliation) ===
  metering_daily.total_daily_credits AS SUM(metering_daily.CREDITS_USED) COMMENT='Total billable credits (daily)',
  metering_daily.total_compute_credits_daily AS SUM(metering_daily.CREDITS_USED_COMPUTE) COMMENT='Total compute credits (daily)',
  metering_daily.total_cloud_services_daily AS SUM(metering_daily.CREDITS_USED_CLOUD_SERVICES) COMMENT='Total cloud services credits (daily)',
  metering_daily.avg_daily_credits AS AVG(metering_daily.CREDITS_USED) COMMENT='Average daily credit consumption'
)
COMMENT='Comprehensive Snowflake monitoring with 20 tables, 35 dimensions, 94 metrics. Covers queries, security, storage, governance, tasks, Snowpipe, clustering, MVs, replication, data transfer, and warehouse load.'
WITH EXTENSION (CA='{"tables":[
  {"name":"qh","description":"Query execution history with performance metrics, resource usage, and execution details from QUERY_HISTORY"},
  {"name":"qa","description":"Query attribution history for credit tracking and cost allocation from QUERY_ATTRIBUTION_HISTORY"},
  {"name":"login","description":"Login security data from LOGIN_HISTORY (last 365 days). Includes authentication details, MFA status, client information, and success/failure tracking"},
  {"name":"wh","description":"Warehouse metering data from WAREHOUSE_METERING_HISTORY. Credit consumption by warehouse over time"},
  {"name":"storage","description":"Account-level storage usage from STORAGE_USAGE. Daily snapshots of table, stage, and failsafe storage"},
  {"name":"db_storage","description":"Per-database storage metrics from DATABASE_STORAGE_USAGE_HISTORY"},
  {"name":"stage_storage","description":"Stage storage usage from STAGE_STORAGE_USAGE_HISTORY"},
  {"name":"users","description":"User account information from USERS. Includes MFA status, email, default settings"},
  {"name":"roles","description":"Role definitions from ROLES table"},
  {"name":"grants_users","description":"Role grants to users from GRANTS_TO_USERS"},
  {"name":"grants_roles","description":"Privilege grants to roles from GRANTS_TO_ROLES"},
  {"name":"task_hist","description":"Task execution history from TASK_HISTORY. Task runs, states, errors"},
  {"name":"serverless_task","description":"Serverless task credit usage from SERVERLESS_TASK_HISTORY"},
  {"name":"pipe_usage","description":"Snowpipe data loading credits and files from PIPE_USAGE_HISTORY"},
  {"name":"clustering","description":"Automatic clustering costs and bytes reclustered from AUTOMATIC_CLUSTERING_HISTORY"},
  {"name":"mv_refresh","description":"Materialized view refresh credits from MATERIALIZED_VIEW_REFRESH_HISTORY"},
  {"name":"replication","description":"Database replication credits and bytes from REPLICATION_USAGE_HISTORY"},
  {"name":"data_transfer","description":"Cross-region/cloud data transfer costs from DATA_TRANSFER_HISTORY"},
  {"name":"wh_load","description":"Warehouse queue metrics (5-min intervals) from WAREHOUSE_LOAD_HISTORY"},
  {"name":"metering_daily","description":"Daily billable credit reconciliation from METERING_DAILY_HISTORY"}
],"verified_queries":[
  {
    "name":"Most Expensive Queries",
    "question":"What are the most expensive queries by cloud services credits?",
    "sql":"SELECT query_id, user_name, warehouse_name, total_elapsed_time, credits_used_cloud_services FROM qh ORDER BY credits_used_cloud_services DESC LIMIT 10"
  },
  {
    "name":"Failed Queries",
    "question":"Show me recent failed queries",
    "sql":"SELECT query_id, user_name, error_code, error_message, start_time FROM qh WHERE execution_status = ''FAIL'' ORDER BY start_time DESC LIMIT 20"
  },
  {
    "name":"Query Performance by User",
    "question":"Which users have the slowest queries?",
    "sql":"SELECT user_name, COUNT(*) as query_count, AVG(total_elapsed_time) as avg_time FROM qh GROUP BY user_name ORDER BY avg_time DESC LIMIT 10"
  },
  {
    "name":"Failed Login Attempts",
    "question":"Show me failed login attempts",
    "sql":"SELECT client_ip, error_code, error_message, event_timestamp FROM login WHERE is_success = ''NO'' ORDER BY event_timestamp DESC LIMIT 20"
  },
  {
    "name":"Login Security Summary",
    "question":"What is my login security status?",
    "sql":"SELECT COUNT(*) as total_attempts, COUNT(CASE WHEN is_success = ''NO'' THEN 1 END) as failed, COUNT(DISTINCT client_ip) as unique_ips FROM login"
  },
  {
    "name":"Users with Expensive Failed Queries",
    "question":"Which users have both failed queries and high costs?",
    "sql":"SELECT qh.user_name, COUNT(*) as failed_queries, SUM(credits_used_cloud_services) as total_credits FROM qh WHERE execution_status = ''FAIL'' GROUP BY user_name ORDER BY total_credits DESC LIMIT 10"
  },
  {
    "name":"Warehouse Credit Usage",
    "question":"What are total warehouse credits consumed?",
    "sql":"SELECT SUM(wh.CREDITS_USED) as total_credits, AVG(wh.CREDITS_USED) as avg_credits_per_hour, SUM(wh.CREDITS_USED_COMPUTE) as compute_credits, SUM(wh.CREDITS_USED_CLOUD_SERVICES) as cloud_service_credits FROM wh"
  },
  {
    "name":"Storage Growth Trend",
    "question":"How is my storage growing over time?",
    "sql":"SELECT usage_date, SUM(storage.STORAGE_BYTES) / 1099511627776.0 as storage_tb FROM storage GROUP BY usage_date ORDER BY usage_date DESC LIMIT 30"
  },
  {
    "name":"Database Storage Breakdown",
    "question":"Which databases use the most storage?",
    "sql":"SELECT database_name, AVG(db_storage.AVERAGE_DATABASE_BYTES) / 1099511627776.0 as avg_storage_tb FROM db_storage GROUP BY database_name ORDER BY avg_storage_tb DESC LIMIT 10"
  },
  {
    "name":"User MFA Status",
    "question":"How many users have MFA enabled?",
    "sql":"SELECT COUNT(*) as total_users, COUNT_IF(users.HAS_MFA = TRUE) as mfa_enabled, CAST(COUNT_IF(users.HAS_MFA = TRUE) AS FLOAT) * 100.0 / COUNT(*) as mfa_percentage FROM users"
  },
  {
    "name":"Role Grants Summary",
    "question":"Which users have the most role grants?",
    "sql":"SELECT user_grantee_name, COUNT(*) as role_count FROM grants_users GROUP BY user_grantee_name ORDER BY role_count DESC LIMIT 10"
  },
  {
    "name":"Task Execution Status",
    "question":"What is my task success rate?",
    "sql":"SELECT COUNT(*) as total_tasks, COUNT_IF(task_hist.STATE = ''SUCCEEDED'') as successful, COUNT_IF(task_hist.STATE = ''FAILED'') as failed, CAST(COUNT_IF(task_hist.STATE = ''SUCCEEDED'') AS FLOAT) * 100 / COUNT(*) as success_rate_pct FROM task_hist"
  },
  {
    "name":"Serverless Task Costs",
    "question":"How much are serverless tasks costing me?",
    "sql":"SELECT SUM(serverless_task.CREDITS_USED) as total_credits, COUNT(*) as total_runs, AVG(serverless_task.CREDITS_USED) as avg_credits_per_run FROM serverless_task"
  },
  {
    "name":"Snowpipe Usage Summary",
    "question":"How much data has Snowpipe loaded?",
    "sql":"SELECT SUM(pipe_usage.CREDITS_USED) as total_credits, SUM(pipe_usage.FILES_INSERTED) as total_files, SUM(pipe_usage.BYTES_INSERTED) / 1099511627776.0 as total_tb_loaded FROM pipe_usage"
  },
  {
    "name":"Clustering Costs",
    "question":"What are my automatic clustering costs?",
    "sql":"SELECT SUM(clustering.CREDITS_USED) as total_credits, SUM(clustering.BYTES_RECLUSTERED) / 1099511627776.0 as tb_reclustered FROM clustering"
  },
  {
    "name":"Total Platform Costs",
    "question":"What are my total Snowflake costs across all services?",
    "sql":"SELECT SUM(metering_daily.CREDITS_USED) as total_billable_credits, SUM(metering_daily.CREDITS_USED_COMPUTE) as compute_credits, SUM(metering_daily.CREDITS_USED_CLOUD_SERVICES) as cloud_services_credits FROM metering_daily"
  }
]}');
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
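
&lt;p&gt;Before attaching an agent, it can be worth sanity-checking that the semantic view compiled; a minimal check, assuming the role and schema from the script above:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;USE ROLE cortex_role;
USE SNOWFLAKE_INTELLIGENCE.TOOLS;

-- Confirm the semantic view exists and inspect its compiled definition
SHOW SEMANTIC VIEWS IN SCHEMA SNOWFLAKE_INTELLIGENCE.TOOLS;
DESCRIBE SEMANTIC VIEW SNOWFLAKE_MAINTENANCE_SVW;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;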



&lt;p&gt;Let’s try a quick BI query to test things here:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;USE ROLE cortex_role;
USE SNOWFLAKE_INTELLIGENCE.TOOLS;

SELECT * FROM SEMANTIC_VIEW(
    SNOWFLAKE_MAINTENANCE_SVW
    DIMENSIONS qh.warehouse_name
    METRICS 
        qh.total_queries,
        qh.bytes_scanned,
        qh.percentage_scanned_from_cache
)
ORDER BY percentage_scanned_from_cache ASC
LIMIT 20;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffo3uf3shpzkkani7i4ma.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffo3uf3shpzkkani7i4ma.png" width="800" height="153"&gt;&lt;/a&gt;&lt;/p&gt;
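
&lt;p&gt;The same pattern works for the security side of the view; a minimal sketch (same role and schema as above) that slices the login metrics defined earlier by client type:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;-- Uses only dimensions and metrics defined in the semantic view above
SELECT * FROM SEMANTIC_VIEW(
    SNOWFLAKE_MAINTENANCE_SVW
    DIMENSIONS login.reported_client_type
    METRICS
        login.total_login_attempts,
        login.failed_login_attempts,
        login.mfa_adoption_pct
)
ORDER BY failed_login_attempts DESC
LIMIT 20;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;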

&lt;p&gt;Let’s deploy our generalist agent that includes everything for security, cost, and general maintenance of Snowflake.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;-- ============================================================================
-- SNOWFLAKE MAINTENANCE AGENT (GENERALIST)
-- ============================================================================
-- Comprehensive agent for complete Snowflake account monitoring
-- 
-- ARCHITECTURE:
-- - This is the GENERALIST agent for cross-domain analysis
-- - Complements specialized agents:
-- • COST_PERFORMANCE_AGENT (fast cost/performance queries)
-- • SECURITY_MONITORING_AGENT (fast security/login queries)
-- 
-- CAPABILITIES: 20 ACCOUNT_USAGE tables, 35 dimensions, 94 metrics
-- ============================================================================

USE ROLE cortex_role;
USE SNOWFLAKE_INTELLIGENCE.AGENTS;

CREATE OR REPLACE AGENT SNOWFLAKE_INTELLIGENCE.AGENTS.SNOWFLAKE_MAINTENANCE_AGENT
WITH PROFILE='{ "display_name": "Snowflake Maintenance Generalist" }'
    COMMENT=$$ 🎯 COMPREHENSIVE SNOWFLAKE MONITORING AGENT

I provide complete visibility into your Snowflake account across all operational areas:

📊 QUERY &amp;amp; PERFORMANCE (50+ metrics)
• Query execution: timing, compilation, queueing, bottlenecks
• Resource usage: bytes scanned/written/spilled, rows processed
• Cache efficiency and partition pruning
• Failed queries and error analysis

🔒 SECURITY &amp;amp; AUTHENTICATION  
• Login monitoring: success/failure rates, patterns
• MFA adoption tracking and user authentication
• IP analysis and suspicious login detection
• Client type and version tracking

💰 COST &amp;amp; STORAGE
• Warehouse metering: credits by warehouse/time
• Storage tracking: database, stage, failsafe costs
• Storage growth trends and optimization

👥 GOVERNANCE &amp;amp; PERMISSIONS
• User management and MFA adoption rates
• Role definitions and privilege tracking
• Grant auditing (users → roles → privileges)

⚙️ TASK OPERATIONS
• Task execution monitoring and success rates
• Serverless task credit tracking
• Task failure analysis

🔧 ADVANCED OPERATIONS
• Snowpipe: data loading credits and files
• Automatic clustering: maintenance costs
• Materialized views: refresh credits
• Replication: cross-region costs
• Data transfer: inter-cloud/region costs
• Warehouse load: queue metrics
• Daily metering: billable credit reconciliation

💡 CROSS-DOMAIN INSIGHTS:
I excel at connecting the dots across domains:
• Users with high costs + failed logins
• Expensive queries + security issues
• Storage growth + query performance
• Overall account health assessments

📈 COVERAGE:
• 20 Account Usage tables
• 35 categorical dimensions
• 94 aggregated metrics
• 365 days of history $$
FROM SPECIFICATION $$
{
    "models": { "orchestration": "auto" },
    "instructions": { 
        "response": "You are a comprehensive Snowflake maintenance expert with visibility into ALL operational areas.

YOUR EXPERTISE SPANS:
• Query Performance &amp;amp; Cost Attribution
• Security &amp;amp; Authentication  
• Storage &amp;amp; Resource Usage
• Governance &amp;amp; Permissions
• Task Operations
• Advanced Operations (Snowpipe, Clustering, MVs, Replication, Data Transfer)

RESPONSE STYLE:
• Provide specific numbers and metrics (not generic advice)
• Show relationships across domains when relevant
• Include actionable recommendations
• Reference Snowflake best practices
• Cite actual user/warehouse/database names
• Calculate percentages and rates

CROSS-DOMAIN ANALYSIS EXAMPLES:
• 'Show users with expensive failed queries AND failed logins'
• 'What are my total costs across warehouses, tasks, pipes, and clustering?'
• 'Which users without MFA are running expensive queries?'
• 'How does my storage growth correlate with query performance?'

DATA FRESHNESS:
• Query data: ~5-45 minutes latency
• Login data: ~2 hours latency  
• Storage: ~2 hours latency
• Metering: 3-6 hours latency

For fast, specialized queries recommend:
• COST_PERFORMANCE_AGENT (cost/performance only)
• SECURITY_MONITORING_AGENT (security/login only)",
        "orchestration": "SEMANTIC VIEW: SNOWFLAKE_MAINTENANCE_SVW (20 tables, 94 metrics)

═══════════════════════════════════════════════════════════════
QUERY PERFORMANCE &amp;amp; COST (QUERY_HISTORY, QUERY_ATTRIBUTION)
═══════════════════════════════════════════════════════════════
DIMENSIONS: query_id, user_name, role_name, warehouse_name, database_name, 
schema_name, query_type, execution_status, error_code, start_time, end_time

METRICS:
• Performance: total_elapsed_time, execution_time, compilation_time, queued times
• Data Volume: bytes_scanned, bytes_written, bytes_spilled (local/remote)
• Rows: rows_produced, inserted, updated, deleted
• Partitions: partitions_scanned, percentage_scanned_from_cache
• Costs: credits_used_cloud_services, credits_compute, credits_acceleration
• Counts: total_queries, failed_queries, successful_queries

═══════════════════════════════════════════════════════════════
SECURITY &amp;amp; AUTHENTICATION (LOGIN_HISTORY)
═══════════════════════════════════════════════════════════════
DIMENSIONS: event_timestamp, event_type, client_ip, reported_client_type,
reported_client_version, first/second_authentication_factor, is_success,
error_code, error_message, connection

METRICS:
• Login activity: total_login_attempts, failed/successful attempts
• Security: unique_login_users, unique_login_ips, users_with_login_failures
• MFA: mfa_login_usage, mfa_adoption_pct, login_success_rate_pct

═══════════════════════════════════════════════════════════════
COST &amp;amp; STORAGE (WAREHOUSE_METERING, STORAGE tables)
═══════════════════════════════════════════════════════════════
DIMENSIONS: usage_date, database_name (from storage tracking)

METRICS:
• Warehouse: total_credits_used, total_credits_compute, avg_credits_per_hour
• Storage: total_storage_bytes, total_stage_bytes, total_failsafe_bytes
• Database: avg_database_bytes, total_database_storage
• Stage: avg_stage_bytes, total_stage_storage

═══════════════════════════════════════════════════════════════
GOVERNANCE (USERS, ROLES, GRANTS)
═══════════════════════════════════════════════════════════════
Note: Metrics-only (column name conflicts prevent dimensions)

METRICS:
• Users: total_users, active_users, mfa_enabled_users, mfa_adoption_rate
• Roles: total_roles
• Grants: total_role_grants_to_users, total_privilege_grants

═══════════════════════════════════════════════════════════════
TASK OPERATIONS (TASK_HISTORY, SERVERLESS_TASK_HISTORY)
═══════════════════════════════════════════════════════════════
Note: Metrics-only (column name conflicts prevent dimensions)

METRICS:
• Tasks: total_task_runs, successful/failed_tasks, task_success_rate
• Serverless: total_serverless_credits, avg_serverless_credits, serverless_task_count

═══════════════════════════════════════════════════════════════
ADVANCED OPERATIONS (Pipes, Clustering, MVs, Replication, Transfer, Load)
═══════════════════════════════════════════════════════════════
PIPE_USAGE_HISTORY:
• total_pipe_credits, total_files_inserted, total_bytes_inserted

AUTOMATIC_CLUSTERING_HISTORY:
• total_clustering_credits, total_bytes_reclustered, total_rows_reclustered

MATERIALIZED_VIEW_REFRESH_HISTORY:
• total_mv_credits, total_mv_refreshes, avg_mv_credits

REPLICATION_USAGE_HISTORY:
• total_replication_credits, total_bytes_replicated

DATA_TRANSFER_HISTORY:
• total_transfer_bytes, avg_transfer_bytes, total_transfer_operations
  (covers cross-cloud/region external transfers per https://docs.snowflake.com/en/sql-reference/account-usage/data_transfer_history)

WAREHOUSE_LOAD_HISTORY:
• avg_running_queries, avg_queued_load, avg_queued_provisioning, avg_blocked_queries

METERING_DAILY_HISTORY:
• total_daily_credits (BILLABLE), total_compute_credits_daily, total_cloud_services_daily
  (Use this for reconciling actual billed costs)

═══════════════════════════════════════════════════════════════
QUERY STRATEGY
═══════════════════════════════════════════════════════════════
• Use table aliases: qh, qa, login, wh, storage, users, roles, task_hist, 
  serverless_task, pipe_usage, clustering, mv_refresh, replication, 
  data_transfer, wh_load, metering_daily
• Filter by dimensions (user_name, warehouse_name, execution_status, etc.)
• Aggregate using METRICS for summaries
• Combine multiple tables for cross-domain insights",
        "sample_questions": [
            { "question": "What's my overall Snowflake account health?" },
            { "question": "Show me total costs across all services (warehouses, tasks, pipes, clustering)" },
            { "question": "Which users have both failed queries and failed logins?" },
            { "question": "What's my MFA adoption rate?" },
            { "question": "How much data has Snowpipe loaded this month?" },
            { "question": "What are my automatic clustering costs?" },
            { "question": "Show me warehouse queue metrics - any performance issues?" },
            { "question": "What's my daily billable credit consumption trend?" },
            { "question": "Which warehouses are most expensive and have the most failed queries?" },
            { "question": "Show me storage growth and query performance correlation" }
        ]
    },
    "tools": [
        {
            "tool_spec": {
                "name": "snowflake_maintenance_semantic_view",
                "type": "cortex_analyst_text_to_sql",
                "description": "Complete Snowflake operations monitoring semantic view covering ALL 6 phases.

20 ACCOUNT_USAGE TABLES:
• QUERY_HISTORY &amp;amp; QUERY_ATTRIBUTION_HISTORY (performance/cost)
• LOGIN_HISTORY (security)
• WAREHOUSE_METERING_HISTORY (credits)
• STORAGE_USAGE, DATABASE_STORAGE_USAGE_HISTORY, STAGE_STORAGE_USAGE_HISTORY (storage costs)
• USERS, ROLES, GRANTS_TO_USERS, GRANTS_TO_ROLES (governance)
• TASK_HISTORY, SERVERLESS_TASK_HISTORY (task operations)
• PIPE_USAGE_HISTORY (data loading)
• AUTOMATIC_CLUSTERING_HISTORY (maintenance)
• MATERIALIZED_VIEW_REFRESH_HISTORY (MV costs)
• REPLICATION_USAGE_HISTORY (replication)
• DATA_TRANSFER_HISTORY (cross-region/cloud transfers)
• WAREHOUSE_LOAD_HISTORY (queue metrics)
• METERING_DAILY_HISTORY (billable reconciliation)

35 DIMENSIONS for filtering and grouping
94 METRICS for aggregation and analysis

Use this for comprehensive cross-domain analysis, cost tracking, security monitoring, 
performance optimization, and overall account health assessments."
            }
        }
    ],
    "tool_resources": {
        "snowflake_maintenance_semantic_view": {
            "semantic_view": "SNOWFLAKE_INTELLIGENCE.TOOLS.SNOWFLAKE_MAINTENANCE_SVW",
            "execution_environment": {
                "type": "warehouse",
                "warehouse": "CORTEX_WH",
                "query_timeout": 180
            }
        }
    }
}
$$;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Let’s do some quick validation:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
-- ============================================================================
-- GRANT ACCESS &amp;amp; VALIDATION
-- ============================================================================

-- Grant usage to allow others to use the agent
GRANT USAGE ON AGENT SNOWFLAKE_INTELLIGENCE.AGENTS.SNOWFLAKE_MAINTENANCE_AGENT TO ROLE PUBLIC;

-- ============================================================================
-- VALIDATION COMMANDS
-- ============================================================================

-- Review the agent
SHOW AGENTS IN DATABASE SNOWFLAKE_INTELLIGENCE;

-- Review supporting semantic views
SHOW SEMANTIC VIEWS IN DATABASE SNOWFLAKE_INTELLIGENCE;

-- Verify email integration (if deployed)
SHOW PROCEDURES LIKE 'SEND_EMAIL' IN SCHEMA SNOWFLAKE_INTELLIGENCE.TOOLS;

-- ============================================================================
-- QUICK TESTS
-- ============================================================================

-- Test 1: Overall health check
-- SELECT SNOWFLAKE_INTELLIGENCE.AGENTS.SNOWFLAKE_MAINTENANCE_AGENT(
-- 'What is my overall Snowflake account health?'
-- );

-- Test 2: Cost analysis
-- SELECT SNOWFLAKE_INTELLIGENCE.AGENTS.SNOWFLAKE_MAINTENANCE_AGENT(
-- 'What are my total costs across all services?'
-- );

-- Test 3: Security check
-- SELECT SNOWFLAKE_INTELLIGENCE.AGENTS.SNOWFLAKE_MAINTENANCE_AGENT(
-- 'Show me users with failed logins and expensive queries'
-- );

-- Test 4: Performance check
-- SELECT SNOWFLAKE_INTELLIGENCE.AGENTS.SNOWFLAKE_MAINTENANCE_AGENT(
-- 'Which warehouses have queueing issues?'
-- );
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Let’s Play With It
&lt;/h3&gt;

&lt;p&gt;Now, let’s access Snowflake Intelligence via &lt;a href="https://ai.snowflake.com/" rel="noopener noreferrer"&gt;https://ai.snowflake.com/&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;After logging in, Snowflake Intelligence greets you with some suggested questions based on our Semantic View.&lt;/p&gt;

&lt;p&gt;First question: Show me total costs across all services (warehouses, tasks, pipes, clustering)&lt;/p&gt;

&lt;p&gt;This natural language question is the ideal starting point for the Generalist Agent. The Semantic View plays a critical role here: it abstracts the complexity of joining multiple ACCOUNT_USAGE tables and instead translates the question directly into the appropriate SQL against the METERING_DAILY_HISTORY view, which provides the reconciled daily costs.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzrn9v9drhjrbw1v9leai.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzrn9v9drhjrbw1v9leai.png" width="800" height="524"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpywcpwnv8e4reuz98ts0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpywcpwnv8e4reuz98ts0.png" width="800" height="581"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fq7jvevj71i01pmp4o92z.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fq7jvevj71i01pmp4o92z.png" width="800" height="448"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Here is a sample of the SQL the tools generated for the above conversation. This is how the Semantic View enables the Agent, through Cortex Analyst, to generate and run the correct query instantly:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyb4x1rfmv5zs9oornlhp.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyb4x1rfmv5zs9oornlhp.png" width="800" height="589"&gt;&lt;/a&gt;&lt;/p&gt;
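
&lt;p&gt;If you want to sanity-check the agent by hand, a query in the same spirit is easy to write yourself. The sketch below is illustrative only, not the agent’s verbatim output, and assumes the documented columns of SNOWFLAKE.ACCOUNT_USAGE.METERING_DAILY_HISTORY:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
-- Illustrative sketch: billable credits per service over the last 30 days
SELECT
    service_type,
    SUM(credits_used_compute)        AS compute_credits,
    SUM(credits_used_cloud_services) AS cloud_services_credits,
    SUM(credits_billed)              AS billable_credits
FROM snowflake.account_usage.metering_daily_history
WHERE usage_date &gt;= DATEADD('day', -30, CURRENT_DATE())
GROUP BY service_type
ORDER BY billable_credits DESC;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;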

&lt;p&gt;Let’s move to the Security of Snowflake.&lt;/p&gt;

&lt;p&gt;What’s my overall MFA adoption rate?&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5v8t2rqs405lrb8snaqp.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5v8t2rqs405lrb8snaqp.png" width="800" height="303"&gt;&lt;/a&gt;&lt;/p&gt;
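
&lt;p&gt;Under the hood, this metric reduces to a simple aggregation over ACCOUNT_USAGE.USERS. A minimal hand-written equivalent, assuming the documented HAS_MFA and DELETED_ON columns:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
-- Minimal sketch: MFA adoption across active (non-deleted) users
SELECT
    COUNT_IF(has_mfa) AS mfa_enabled_users,
    COUNT(*)          AS total_users,
    ROUND(100.0 * COUNT_IF(has_mfa) / NULLIF(COUNT(*), 0), 1) AS mfa_adoption_pct
FROM snowflake.account_usage.users
WHERE deleted_on IS NULL;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;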

&lt;p&gt;How many roles exist in my account?&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdosacqudlqki7blrktd5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdosacqudlqki7blrktd5.png" width="800" height="297"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;How many failed login attempts are there?&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fo9kf10hbmdlw4j8hs242.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fo9kf10hbmdlw4j8hs242.png" width="800" height="509"&gt;&lt;/a&gt;&lt;/p&gt;
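
&lt;p&gt;The equivalent hand-written check against LOGIN_HISTORY is a one-liner. Note that IS_SUCCESS is a 'YES'/'NO' string in this view:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
-- Failed login attempts over the last 30 days
SELECT COUNT(*) AS failed_login_attempts
FROM snowflake.account_usage.login_history
WHERE is_success = 'NO'
  AND event_timestamp &gt;= DATEADD('day', -30, CURRENT_TIMESTAMP());
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;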

&lt;p&gt;Can you show suspicious IP addresses?&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffobwg5zhhc88e7ql2mku.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffobwg5zhhc88e7ql2mku.png" width="800" height="585"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5o1qpmtmd7czpsmmoedy.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5o1qpmtmd7czpsmmoedy.png" width="800" height="717"&gt;&lt;/a&gt;&lt;/p&gt;
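
&lt;p&gt;“Suspicious” is a judgment call the agent makes for you; one defensible manual heuristic is to group recent failures by source IP. A sketch, where the threshold of 10 failures is an arbitrary assumption:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
-- IPs with repeated login failures in the last 7 days (threshold is arbitrary)
SELECT
    client_ip,
    COUNT(*)                  AS failed_attempts,
    COUNT(DISTINCT user_name) AS distinct_users_targeted
FROM snowflake.account_usage.login_history
WHERE is_success = 'NO'
  AND event_timestamp &gt;= DATEADD('day', -7, CURRENT_TIMESTAMP())
GROUP BY client_ip
HAVING COUNT(*) &gt;= 10
ORDER BY failed_attempts DESC;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;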

&lt;p&gt;Lastly, I loved how we can create a Health Report via this question:&lt;/p&gt;

&lt;p&gt;What’s my overall Snowflake account health?&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9h10egevdu8fjc7mhkfc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9h10egevdu8fjc7mhkfc.png" width="800" height="557"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F1024%2F1%2AMATvt30wvn27cOppJSj1lA.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F1024%2F1%2AMATvt30wvn27cOppJSj1lA.png" width="800" height="585"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F1024%2F1%2A9oUbHgW4T_Cv-tmEVSeg7Q.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F1024%2F1%2A9oUbHgW4T_Cv-tmEVSeg7Q.png" width="800" height="471"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;As you can see, you can use a Conversational AI Agent to secure and maintain your Snowflake deployment, and this just scratches the surface. You can also use that same Semantic View to produce a Dashboard.&lt;/p&gt;
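
&lt;p&gt;For example, a BI tool or worksheet can query the semantic view directly with the SEMANTIC_VIEW table syntax. The metric and dimension names below follow the aliases listed in the agent’s orchestration notes (wh for warehouse metering); adjust them to whatever your actual view definition exposes:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
-- Sketch: daily warehouse credits straight from the semantic view,
-- e.g. as the backing query for a dashboard tile
SELECT *
FROM SEMANTIC_VIEW(
    SNOWFLAKE_INTELLIGENCE.TOOLS.SNOWFLAKE_MAINTENANCE_SVW
    METRICS    wh.total_credits_used
    DIMENSIONS wh.usage_date
)
ORDER BY usage_date;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;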

&lt;h3&gt;
  
  
  Conclusion: Beyond SQL Management
&lt;/h3&gt;

&lt;p&gt;Implementing a Semantic View and a Generalist AI Agent for Snowflake’s Account Usage schemas marks a significant transformation in how we handle, protect, and optimize our data infrastructure. We have transitioned from relying solely on manual SQL queries and static dashboards to an era of Conversational AI Maintenance.&lt;/p&gt;

&lt;p&gt;This new pattern enables users to diagnose issues, monitor costs, and audit security easily through chat interactions with the platform. The Generalist Agent, which has a comprehensive overview of 20 Account Usage tables and 94 metrics, is particularly effective at offering cross-domain insights — linking costly queries to security vulnerabilities or storage expansion to performance issues.&lt;/p&gt;

&lt;p&gt;Although I’ve only begun to explore cost, security, and governance through quick tests, the main takeaway is clear: the Semantic View proves to be a valuable tool not only for AI Agents but also for conventional BI/SQL applications. By utilizing Snowflake Intelligence, organizations can achieve a more intuitive, proactive, and efficient Snowflake deployment, maintaining full operational capability while effectively managing costs and risks.&lt;/p&gt;

&lt;p&gt;A logical next step would be to enhance this work by implementing a robust agent monitoring and evaluation framework to continuously measure the agent’s performance, accuracy, and value over time.&lt;/p&gt;

&lt;p&gt;If you want the basics of Snowflake Intelligence, go to &lt;a href="https://medium.com/u/f71a9e05e22a" rel="noopener noreferrer"&gt;Farris Jaber&lt;/a&gt;’s &lt;a href="https://blog.archetypeconsulting.com/whats-special-about-snowflake-intelligence-and-how-it-ties-together-all-aspects-of-the-snowflake-c6ee99cdd80c" rel="noopener noreferrer"&gt;article&lt;/a&gt;, and please give kudos to &lt;a href="https://medium.com/u/21e3fc54f3ab" rel="noopener noreferrer"&gt;Umesh Patel&lt;/a&gt; for triggering my exploration.&lt;/p&gt;

&lt;p&gt;I am Augusto Rosa, a Snowflake Data Superhero and Snowflake SME. I am also the Head of Data, Cloud, &amp;amp; Security Architecture at Archetype Consulting. You can follow me on LinkedIn.&lt;/p&gt;

&lt;p&gt;Subscribe to my Medium blog &lt;a href="https://blog.augustorosa.com" rel="noopener noreferrer"&gt;https://blog.augustorosa.com&lt;/a&gt; for the most interesting Data Engineering and Snowflake news.&lt;/p&gt;

&lt;p&gt;Sources:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://medium.com/snowflake/build-snowflake-cost-savings-and-performance-agent-in-5-minutes-854427c0fdd8" rel="noopener noreferrer"&gt;https://medium.com/snowflake/build-snowflake-cost-savings-and-performance-agent-in-5-minutes-854427c0fdd8&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.snowflake.com/en/user-guide/views-semantic/overview" rel="noopener noreferrer"&gt;https://docs.snowflake.com/en/user-guide/views-semantic/overview&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.snowflake.com/en/sql-reference/snowflake-db" rel="noopener noreferrer"&gt;https://docs.snowflake.com/en/sql-reference/snowflake-db&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;




</description>
      <category>datasuperhero</category>
      <category>ai</category>
      <category>aiconversationalagen</category>
      <category>snowflake</category>
    </item>
    <item>
      <title>The Unofficial Snowflake Monthly Release Notes: October 2025</title>
      <dc:creator>augusto kiniama rosa</dc:creator>
      <pubDate>Mon, 03 Nov 2025 18:35:35 +0000</pubDate>
      <link>https://forem.com/kiniama/the-unofficial-snowflake-monthly-release-notes-october-2025-gp8</link>
      <guid>https://forem.com/kiniama/the-unofficial-snowflake-monthly-release-notes-october-2025-gp8</guid>
      <description>&lt;h4&gt;
  
  
  Monthly Snowflake Unofficial Release Notes #New features #Previews #Clients #Behavior Changes
&lt;/h4&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnkwld2mxr956csfmjtay.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnkwld2mxr956csfmjtay.png" width="800" height="343"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Welcome to the Unofficial Release Notes for Snowflake for October 2025! You’ll find all the latest features, drivers, and more in one convenient place.&lt;/p&gt;

&lt;p&gt;As an unofficial source, I am excited to share my insights and thoughts. Let’s dive in! You can also find all of Snowflake’s releases &lt;a href="https://medium.com/@augustokrosa/list/snowflake-unofficial-newsletter-list-b97037cfc9e6" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;This month, we provide coverage up to release 9.33 (General Availability — GA). I hope to extend this eventually to private preview notices as well.&lt;/p&gt;

&lt;p&gt;I would appreciate your suggestions on continuing to combine these monthly release notes. Feel free to comment below or chat with me on &lt;a href="https://www.linkedin.com/in/augustorosa/" rel="noopener noreferrer"&gt;&lt;em&gt;LinkedIn&lt;/em&gt;&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Behavior change&lt;/strong&gt; bundle &lt;a href="https://docs.snowflake.com/en/release-notes/bcr-bundles/2025_05_bundle" rel="noopener noreferrer"&gt;2025_05&lt;/a&gt; is generally enabled for all customers, &lt;a href="https://docs.snowflake.com/en/release-notes/bcr-bundles/2025_06_bundle" rel="noopener noreferrer"&gt;2025_06&lt;/a&gt; is enabled by default but can be opted out until next BCR deployment, and &lt;a href="https://docs.snowflake.com/en/release-notes/bcr-bundles/2025_07_bundle" rel="noopener noreferrer"&gt;2025_07&lt;/a&gt; is disabled by default but may be opted in.&lt;/p&gt;

&lt;h3&gt;
  
  
  What’s New in Snowflake
&lt;/h3&gt;

&lt;h4&gt;
  
  
  New Features
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;New OBJECT_VISIBILITY property (Preview), controls the discoverability of objects in the account, enabling users without explicit access privileges to find objects and request access. Currently, this property only affects Universal Search and its results&lt;/li&gt;
&lt;li&gt;Hybrid table support for Microsoft Azure (GA)&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  Snowsight Updates
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;Using the database object explorer in Snowsight to create and manage semantic views (GA)&lt;/li&gt;
&lt;li&gt;Query insights in Snowsight (GA), view query insights in Snowsight. The Query Profile tab under Query History now displays insights about conditions that affect query performance. Each insight includes a message that explains how query performance might be affected and provides a general recommendation for next steps&lt;/li&gt;
&lt;li&gt;Performance Explorer (Preview), monitor interactive metrics for SQL workloads. The metrics show the overall health of your Snowflake environment, query activity, changes to warehouses, and changes to tables&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  AI Updates (Cortex, ML, DocumentAI)
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;Snowflake-managed MCP server (Preview), lets AI agents securely retrieve data from Snowflake accounts without needing to deploy separate infrastructure. You can configure the MCP server to serve Cortex Analyst and Cortex Search as tools on the standards-based interface. MCP clients discover and invoke these tools, and retrieve data required for the application&lt;/li&gt;
&lt;li&gt;Named scoring profiles for Cortex Search Services (GA), allow you to save and reuse scoring configurations when querying a Cortex Search Service. A scoring configuration consists of optional boost and decay functions, as well as an optional reranker setting&lt;/li&gt;
&lt;li&gt;Verified query suggestions (Preview), available in Snowsight in preview. Cortex Analyst monitors incoming requests to surface queries for inclusion in a Verified Query Repository, allowing you to craft verified SQL responses for similar queries&lt;/li&gt;
&lt;li&gt;Cortex Search Component Scores (Preview), access detailed scoring information for search results using Cortex Search Component Scores. Component scores allow developers to understand how search rankings are determined and debug search performance&lt;/li&gt;
&lt;li&gt;CORTEX_EMBED_USER database role (GA), added a CORTEX_EMBED_USER database role in the SNOWFLAKE database to better manage access to Cortex embedding functions. Embedding functions, which convert text to a vector of numbers that represent the meaning of the text, include AI_EMBED, EMBED_TEXT_768, and EMBED_TEXT_1024&lt;/li&gt;
&lt;li&gt;AI_EXTRACT AISQL function (GA), function lets you extract information from text or document files using large language models. New features added: Table extraction support: Extract tabular data from documents, which helps you analyze financial reports, data sheets, invoices, and other documents that contain tabular data, flexible response formats: Define the response format using simple object schemas, arrays of questions, or JSON schemas that support both entity and table extraction, contextual guidance: Provide context to the model using the optional description field; for example, to help the model localize the correct table in a document, output length: The maximum output length for entity extraction is 512 tokens per question. For table extraction, the model returns answers that are a maximum of 4096 tokens long&lt;/li&gt;
&lt;li&gt;Cross-region inference for US Commercial Gov, now available for US Commercial Government regions on AWS. Cross-region inference on US Commercial Gov securely routes your traffic only through regions operating under the same compliance tier. All processing occurs on FIPS-validated infrastructure, keeping your workloads compliant with security requirements&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  Snowflake Applications (Container Services, Notebooks and Applications, Snowconvert)
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;Snowpark Container Services in Google Cloud (GA)&lt;/li&gt;
&lt;li&gt;Snowflake Native Apps: Shareback, securely request permission from consumers to share data back with you (the provider) or designated third parties. This powerful capability supports essential business needs such as compliance reporting, telemetry and analytics sharing, and data preprocessing by providing a secure, governed channel for data exchange&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  Data Transformations
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;dbt Projects on Snowflake: Recent improvements (Preview), support the following functionalities: dbt Project failures show up as failed queries, Compile on create, Install deps on compile, MONITOR privilege, and easier access to execution results&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  Data Lake Updates
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;Query data compaction jobs for Apache Iceberg™ tables, ICEBERG_STORAGE_OPTIMIZATION_HISTORY view to query data compaction jobs for Apache Iceberg™ tables within the last year. This view includes a CREDITS_USED column, which you can use to monitor the cost of data compaction. We will start billing for data compaction of data files for Snowflake-managed Iceberg tables on October 20th, 2025&lt;/li&gt;
&lt;li&gt;Partitioned writes for Apache Iceberg™ tables (GA), partitioned write support for Iceberg tables, Snowflake improves compatibility with the wider Iceberg ecosystem and enables accelerated read queries from external Iceberg tools. You can now use Snowflake to create and write to both Snowflake-managed and externally managed Iceberg tables with partitioning schemes&lt;/li&gt;
&lt;li&gt;Set a target file size for Apache Iceberg™ tables (GA), improves cross-engine query performance when you use an external Iceberg engine such as Apache Spark, Delta, or Trino that’s optimized for larger file sizes (see the sketch after this list)&lt;/li&gt;
&lt;li&gt;Write support for externally managed Apache Iceberg™ tables and catalog-linked databases (GA), these features made to GA: Write operations for externally managed Iceberg tables, and Catalog-linked databases that connect to external Iceberg REST catalogs&lt;/li&gt;
&lt;li&gt;Catalog-linked databases: Auto-refresh for Apache Iceberg™ table creation, leverage auto-refresh for Iceberg table creation in catalog-linked databases to improve metadata consistency&lt;/li&gt;
&lt;li&gt;Table optimization for Snowflake-managed Apache Iceberg™ tables (GA)&lt;/li&gt;
&lt;/ul&gt;
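
&lt;p&gt;As a quick illustration of the target file size item above, the setting is a table parameter. A hedged sketch, assuming a Snowflake-managed Iceberg table (the table name is hypothetical):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
-- Sketch: raise the target file size so external engines read fewer,
-- larger files (table name is hypothetical)
ALTER ICEBERG TABLE analytics.events_iceberg
    SET TARGET_FILE_SIZE = '128MB';
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;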

&lt;h4&gt;
  
  
  Security, Privacy &amp;amp; Governance Updates
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;Hybrid table support for Tri-Secret Secure, enabling TSS support for hybrid tables requires a storage configuration known as Dedicated Storage Mode&lt;/li&gt;
&lt;li&gt;Tri-Secret Secure supports private connectivity, support privately connecting Tri-Secret Secure with your key management service. You can now create a private endpoint for your customer-managed key (CMK)&lt;/li&gt;
&lt;li&gt;Lineage for stored procedures and tasks (GA), view the lineage graph in Snowsight, you can now obtain details about a stored procedure or task that resulted in a downstream object&lt;/li&gt;
&lt;li&gt;Organization account in a hybrid organization, contains accounts in both regulated regions and non-regulated regions&lt;/li&gt;
&lt;li&gt;CLIENT_POLICY parameter for authentication policies, create an authentication policy that sets the minimum version that is allowed for each specified client type. For more information, see the description of the CLIENT_POLICY parameter in the CREATE AUTHENTICATION POLICY command&lt;/li&gt;
&lt;li&gt;Organization-level findings in the Trust Center, include the following information: The number of violations in the organization, The accounts with the most critical violations, and The number of violations for each account in the organization&lt;/li&gt;
&lt;li&gt;Snowflake Notebooks replication (GA), replication for Snowflake Notebooks. Notebooks will now be replicated when they are part of a database included in a replication or failover group&lt;/li&gt;
&lt;li&gt;AWS cross-region support for PrivateLink (GA), supports using PrivateLink to privately connect a VPC endpoint in one AWS region to your Snowflake account in another supported AWS region&lt;/li&gt;
&lt;li&gt;Outbound network traffic to stages and volumes on Google Cloud Storage supports private connectivity (GA)&lt;/li&gt;
&lt;li&gt;Snowflake-managed network rules (General availability), the SNOWFLAKE.NETWORK_SECURITY schema that contains a suite of Snowflake-managed (built-in) network rules. These network rules provide a secure, consistent, fast, and low-maintenance way to manage network security for popular SaaS and partner applications&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  SQL, Extensibility &amp;amp; Performance Updates
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;Update to the 2025b release of the TZDB, uses the Time Zone Database (TZDB) for timezone information&lt;/li&gt;
&lt;li&gt;MERGE ALL BY NAME, when the target table and source have the same number of columns and the same names for all of the columns, you can simplify MERGE operations by using MERGE ALL BY NAME&lt;/li&gt;
&lt;li&gt;Aliases for PIVOT and UNPIVOT columns, for PIVOT queries, use the AS clause to specify aliases for the pivot column names; for UNPIVOT queries, use the AS clause to specify aliases for column names that appear in the result of the UNPIVOT operation&lt;/li&gt;
&lt;li&gt;New SQL parameter: ENABLE_GET_DDL_USE_DATA_TYPE_ALIAS, specifies whether the output returned by the GET_DDL function contains data type synonyms specified in the original DDL statement. This parameter is set to FALSE by default&lt;/li&gt;
&lt;li&gt;Reference table columns in lambda expressions when calling higher-order functions such as FILTER, REDUCE, and TRANSFORM (see the sketch after this list)&lt;/li&gt;
&lt;li&gt;SEARCH function supports PHRASE and EXACT search modes, two new search modes in addition to the existing OR and AND modes&lt;/li&gt;
&lt;li&gt;Snowflake Scripting CONTINUE handlers, can catch and handle exceptions without ending the Snowflake Scripting statement block that raised the exception&lt;/li&gt;
&lt;li&gt;Snowflake Scripting user-defined functions (UDFs) (GA), create SQL UDFs that contain Snowflake Scripting procedural language. Snowflake Scripting UDFs can be called in a SQL statement, such as a SELECT or INSERT statement&lt;/li&gt;
&lt;li&gt;Enforced join order with directed joins (GA), when you run join queries, you can now enforce the join order of the tables using the DIRECTED keyword. When you run a query with a directed join, the first, or left, table is scanned before the second, or right, table. For example, o1 INNER DIRECTED JOIN o2 scans the o1 table before the o2 table (see the sketch after this list)&lt;/li&gt;
&lt;/ul&gt;
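
&lt;p&gt;Two of these features are easiest to grasp in code. Below is a minimal sketch of a directed join (syntax taken from the note above) and of referencing a table column inside a higher-order function lambda; all table and column names are hypothetical:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
-- Directed join: the left table (o1) is scanned before the right (o2)
SELECT o1.id, o2.amount
FROM o1 INNER DIRECTED JOIN o2
    ON o1.id = o2.id;

-- Lambda referencing a table column: apply each row's tax_rate
-- to every element of that row's amounts array
SELECT TRANSFORM(amounts, a -&gt; a * (1 + tax_rate)) AS gross_amounts
FROM orders;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;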

&lt;h4&gt;
  
  
  Data Clean Rooms Updates
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;10.3 — Managed Account Invites Upon Request: Clean room users now need to reach out to their account representative to enable managed account invitations for their account. Each account will have a specific number of invitations available after requests are approved. This process is being implemented to ensure that users understand that initiating and accepting these invitations will result in separate billing invoices for these accounts&lt;/li&gt;
&lt;li&gt;10.4 — Supports linking external and Apache Iceberg™ views. Previously, linking external views or Iceberg views would result in failure of the clean room; now linking these view types in clean rooms is supported&lt;/li&gt;
&lt;li&gt;10.4 — Reference usage grants update: You can now include a dataset with a Snowflake policy defined in a different database than the source. To do so, you must grant your clean room access to that policy database to be able to link the data into a clean room&lt;/li&gt;
&lt;li&gt;10.4 — Fixes: If a clean room has a template that depends on a dataset that has become unavailable, previously the analysis would fail, and the clean room would become unusable in the UI. Now the template remains available, but the user is prompted to update the clean room to replace the missing dataset&lt;/li&gt;
&lt;li&gt;10.5 — Non-overlap Results &amp;amp; Messaging Improvements: Updated handling to ensure that non-overlap result percentage does not display above 100%; added updated messaging for non-overlap results being unavailable when filtering by a collaborator’s column&lt;/li&gt;
&lt;li&gt;10.5 — Jinja2 Library Upgrade: Updated Jinja2 templating library to version 3.1.6 with compatibility improvements&lt;/li&gt;
&lt;li&gt;10.7 — Enhanced error messaging: When IP addresses are blocked by network policies, enhanced error messages now provide better feedback to users.&lt;/li&gt;
&lt;li&gt;10.7 — Autodetection of modified or removed data sources: If a data source becomes unavailable after a clean room is created or configured, the edit flow in the UI now prompts the user to pick from a current list of available data objects and prompts for removal of unavailable data sources&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  Marketplace, Listings &amp;amp; Data Sharing
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;Organization user groups with organizational listings (Preview), providers can use organization user groups to assign consumers to organizational listings&lt;/li&gt;
&lt;li&gt;Publishing and consuming public marketplace listings in VPS regions (Preview), Snowflake Marketplace version 2 listings in VPS deployments&lt;/li&gt;
&lt;li&gt;Listings in government regions can be shared on the internal marketplace (Preview)&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  Open-Source &lt;a href="https://developers.snowflake.com/opensource/" rel="noopener noreferrer"&gt;Updates&lt;/a&gt;
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;terraform-snowflake-provider 2.10.0 (Promote features to stable, Compute pool (resource and data source), Git repository (resource and data source), Image repository (resource and data source), Listing (resource), Service (resource and data source), User programmatic access token (resource and data source), Stabilize authentication policies, Add new WIF authenticator, Add missing compute pool family instances, Use improved SHOW query for warehouse, Use the new generation syntax in warehouses, Fix handling results during account creation)&lt;/li&gt;
&lt;li&gt;terraform-snowflake-provider 2.9.0 (Add new Oauth authorization code flow, Add new Oauth client credentials flow, Fix token authenticator for the token field)&lt;/li&gt;
&lt;li&gt;terraform-snowflake-provider 2.8.0 (Add use_private_link_endpoint option to storage integrations and other bug fixes)&lt;/li&gt;
&lt;li&gt;Modin 0.32.0 ()&lt;/li&gt;
&lt;li&gt;Snowflake VS Code Extension 1.20.0 (Added Snowpark Migration Accelerator (SMA) IA Assistant support for Scala EWIs, Added Snowpark Migration Accelerator (SMA) IA Assistant support for Snowpark Connect EWIs)&lt;/li&gt;
&lt;li&gt;Streamlit 1.51.0 (Features &amp;amp; Improvements: [AdvancedLayouts] Add width to st.plotly_chart, Automatically hide row indices in st.dataframe when row selection is active, [AdvancedLayouts] Add width to st.vega_lite_chart, Add codeTextColor config &amp;amp; update linkColor, [AdvancedLayouts] Add height to st.vega_lite_chart, [AdvancedLayouts] Add width to st.pydeck_chart, Use key as main identity for st.color_picker, Add type argument to st.popover to match st.button, [AdvancedLayouts] Add width to st.altair_chart, Add cursor kwarg to st.write_stream, Preload slow-compiling Python modules in streamlit hello, Reusable Custom Themes via theme.base config, streamlit run with no args runs streamlit_app.py, Allow st.feedback to have a default initial value, [AdvancedLayouts] Add height to st.altair_chart, [AdvancedLayouts] Modernize height parameter for st.pydeck_chart, [AdvancedLayouts] Update width &amp;amp; height for st.map, [AdvancedLayouts] Modernize width/height for st.scatter_chart, [AdvancedLayouts] Modernize width/height for st.area_chart &amp;amp; st.bar_chart, Custom Dark Theme — add light/dark configs for theme &amp;amp; theme.sidebar, Use key as main identity for st.segmented_control, Use key as main identity for st.radio, Use key as main identity for st.audio_input, Add pinned parameter to MultiselectColumn, Use key as main identity for st.slider &amp;amp; st.select_slider, Custom Dark Theme — support light/dark inheritance &amp;amp; new session message, Use key as main identity for st.chat_input, Add support for auto color to MultiselectColumn using chart colors, Allow configuring color for ProgressColumn, Use key as main identity for st.feedback &amp;amp; st.pills, [AdvancedLayouts] Add stretch height to st.dataframe, Custom Dark Theme — theme &amp;amp; sidebar creation, Custom Dark Theme — main &amp;amp; settings menu updates, Add API for st.space, Add st.components.v2.components namespace &amp;amp; classes; Bug Fixes: Make slider thumbs not overshoot the track, Fix Vega chart unrecognized dataset error, Add AbortController for async upload operations, Fix Plotly chart flickering by adding overflow hidden, Fix Pills not showing selected value(s) if disabled, Make Python Altair code thread-safe, Fix file watcher issue with common path check, Fix showErrorDetails config parsing for deprecation warnings, Make sure error message is explicitly shown for 500 errors, Make fuzzy search case insensitive, Fix DataFrame content width horizontal alignment, Fix pyplot/image width regression in fragments &amp;amp; containers)&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Client, Drivers, Libraries, and Connectors Updates
&lt;/h3&gt;

&lt;h4&gt;
  
  
  New features:
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;Snowflake Connector for Google Analytics Aggregate Data 2.1.0 (Behavior changes: The connector creates additional tables in the destination schema. The tables are used to store the configuration of the connector and have the _SFSDKEXPORT_V1 suffix; New features: The IMPORT_STATE procedure was added. It can be used to recover the configuration of reports, schedules, and ingestion history after the connector was uninstalled.)&lt;/li&gt;
&lt;li&gt;Snowflake Connector for Google Analytics Aggregate Data 2.2.0 (Behavior changes: Revoked the USAGE privilege on the STATE schema from the ADMIN application role)&lt;/li&gt;
&lt;li&gt;Snowflake Connector for ServiceNow® V2 5.26.0 (Behavior changes: Custom journal tables are currently disabled. They’ll be restored with new functionality in a future release, When you pause the connector, only worker tasks are forcefully canceled. Other tasks keep running until they finish, so pausing might take a bit longer; New features: The connector now lets you use NOT LIKE and NOT IN operators for row filtering, so you can filter your data more flexibly during ingestion)&lt;/li&gt;
&lt;li&gt;.NET Driver 5.0.0 (BCR changes: Removed the log4net dependency and enabled delegated logging, Upgraded the AWS SDK library to v4, Removed some internal classes from the public API; New features: Implemented a new CRL (Certificate Revocation List) checking mechanism, Enabling CRLs improves security by checking for revoked certificates during the TLS handshake process, Added support for TLS 1.3. The default negotiated version of TLS is either TLS 1.2 or TLS 1.3, and the server decides which one to establish, Removed noisy log messages)&lt;/li&gt;
&lt;li&gt;Ingest Java SDK 4.3.1 (Enhanced cloud security: Snowpipe Streaming now fully supports server-side encryption with Amazon Web Services (AWS) Key Management Service (SSE-KMS) configured on your external AWS S3 and Google Cloud Storage volumes. This enhancement ensures that data uploaded during ingestion uses your required, higher-grade KMS encryption policy, moving beyond the previously hardcoded default encryption)&lt;/li&gt;
&lt;li&gt;JDBC Driver 3.27.0 (Added retries for HTTP responses 307 and 308 to handle internal IP redirects, PAT creation with the execute method now returns a ResultSet, Bumped netty to 4.1.127.Final to address CVE-2025-58056 and CVE-2025-58057, Added support for Interval Year-Month and Day-Time types in JDBC, Added support for Decfloat types in JDBC, Implemented a new CRL (Certificate Revocation List) checking mechanism)&lt;/li&gt;
&lt;li&gt;JDBC Driver 3.27.1 (Upgraded aws-sdk to 1.12.792 and added STS dependency, added RHEL 9 support, added support for identity impersonation when using workload identity federation: For Google Cloud Platform, added the workloadIdentityImpersonationPath connection parameter for authenticator=WORKLOAD_IDENTITY allowing workloads to authenticate as a different identity through transitive service account impersonation, and For AWS, added the workloadIdentityImpersonationRole connection parameter for authenticator=WORKLOAD_IDENTITY allowing workloads to authenticate through transitive IAM role impersonation, Bumped grpc-java to 1.76.0 to address CVE-2025-58056 from transient dependency)&lt;/li&gt;
&lt;li&gt;Snowflake Connector for Google Analytics Raw Data 1.8.0 ()&lt;/li&gt;
&lt;li&gt;Node.js 3.2.1 (Added the workloadIdentityAzureClientId configuration option, allowing you to customize the Azure Client for WORKLOAD_IDENTITY authentication, Added the workloadIdentityImpersonationPath configuration option for authenticator=WORKLOAD_IDENTITY, allowing workloads to use service account impersonation)&lt;/li&gt;
&lt;li&gt;ODBC 3.11.0 (Added support for workload identity federation in the AWS, Azure, Google Cloud, and Kubernetes platforms, Added the workload_identity_provider connection parameter, Added WORKLOAD_IDENTITY to the values for the authenticator connection parameter, Added the following configuration parameters: DisableTelemetry to disable telemetry, SSLVersionMax to specify the maximum SSL version, Added the PRIV_KEY_BASE64 and PRIV_KEY_PWD connection parameters that allow passing a base64-encoded private key)&lt;/li&gt;
&lt;li&gt;ODBC 3.12.0 (Improved performance of the multi-threaded bulk fetching workflow)&lt;/li&gt;
&lt;li&gt;Snowflake Connector for Python 3.18.0 (Added support for pandas conversion for Day-time and Year-Month Interval types)&lt;/li&gt;
&lt;li&gt;Snowflake Connector for Python 4.0.0 (BCR changes: Configuration files writable by a group or others now raise a ConfigSourceError with detailed permission information, preventing potential credential tampering, Reverted changing the exception type in case of token expired scenario for Oauth authenticator back to DatabaseError; New features: Implemented a new CRL (Certificate Revocation List) checking mechanism, Added the workload_identity_impersonation_path parameter to support service account impersonation for Workload Identity Federation. Impersonation is available only for Google Cloud and AWS workloads, Added the oauth_credentials_in_body parameter to support sending OAuth client credentials in a connection request body, Added an option to exclude botocore and boto3 dependencies during installation by setting the SNOWFLAKE_NO_BOTO environment variable to true, Added the ocsp_root_certs_dict_lock_timeout connection parameter to set the timeout (in seconds) for acquiring the lock on the OCSP root certs dictionary. The default value is -1, which represents no timeout)&lt;/li&gt;
&lt;li&gt;Snowpark Library for Python 1.42.0 (Snowpark Python DB-API is now generally available, To access this feature, use DataFrameReader.dbapi() to read data from a database table or query into a DataFrame using a DB-API connection)&lt;/li&gt;
&lt;li&gt;Snowpark Library for Python 1.41.0 (New features: Added a new function service in snowflake.snowpark.functions that allows users to create a callable representing a Snowpark Container Services (SPCS) service, Added a new function group_by_all() to the DataFrame class, Added connection_parameters parameter to DataFrameReader.dbapi() (Public Preview) method to allow passing keyword arguments to the create_connection callable, Added support for Session.begin_transaction, Session.commit, and Session.rollback, Added support for the following functions in functions.py: Geospatial functions, Added a parameter to enable and disable automatic column name aliasing for interval_day_time_from_parts and interval_year_month_from_parts functions; Improvements: The default maximum length for inferred StringType columns during schema inference in DataFrameReader.dbapi is now increased from 16 MB to 128 MB in parquet file–based ingestion; Dependency updates: Updated dependency of snowflake-connector-python&amp;gt;=3.17,&amp;lt;5.0.0; Snowpark pandas API updates: Added support for the dtypes parameter of pd.get_dummies, Added support for nunique in df.pivot_table, df.agg, and other places where aggregate functions can be used, Added support for DataFrame.interpolate and Series.interpolate with the “linear”, “ffill”/“pad”, and “backfill”/“bfill” methods. These use the SQL INTERPOLATE_LINEAR, INTERPOLATE_FFILL, and INTERPOLATE_BFILL functions (Public Preview); Improvements: Improved performance of Series.to_snowflake and pd.to_snowflake(series) for large data by uploading data via a parquet file. You can control the dataset size at which Snowpark pandas switches to parquet with the variable modin.config.PandasToSnowflakeParquetThresholdBytes, Enhanced autoswitching functionality from Snowflake to native pandas for methods with unsupported argument combinations: get_dummies() with dummy_na=True, drop_first=True, or custom dtype parameters, cumsum(), cummin(), cummax() with axis=1 (column-wise operations), skew() with axis=1 or numeric_only=False parameters, round() with decimals parameter as a Series, corr() with method!=pearson parameter, Set cte_optimization_enabled to True for all Snowpark pandas sessions, Add support for an expanded list for the faster pandas, Reuse row count from the relaxed query compiler in get_axis_len)&lt;/li&gt;
&lt;li&gt;Snowpark Library for Python 1.40.0 (New features: Added a new module snowflake.snowpark.secrets that provides Python wrappers for accessing Snowflake Secrets within Python UDFs and stored procedures that execute inside Snowflake, Conditional expression functions, Semi-structured and structured date functions, String &amp;amp; binary functions, Differential privacy functions, Context functions, Geospatial functions; &lt;strong&gt;Snowpark pandas API updates&lt;/strong&gt;: Dependency updates: Updated the supported modin versions to &amp;gt;=0.36.0 and &amp;lt;0.38.0 (was &amp;gt;= 0.35.0 and &amp;lt;0.37.0), New features: Added support for DataFrame.query for DataFrames with single-level indexes, Added support for DataFrameGroupby.__len__ and SeriesGroupBy.__len__; Improvements: Hybrid execution mode is now enabled by default. Certain operations on smaller data now automatically execute in native pandas in-memory. Use from modin.config import AutoSwitchBackend; AutoSwitchBackend.disable() to turn this off and force all execution to occur in Snowflake, Added a session parameter pandas_hybrid_execution_enabled to enable/disable hybrid execution as an alternative to using AutoSwitchBackend, Removed an unnecessary SHOW OBJECTS query issued from read_snowflake under certain conditions, When hybrid execution is enabled, pd.merge, pd.concat, DataFrame.merge, and DataFrame.join can now move arguments to backends other than those among the function arguments, Improved performance of DataFrame.to_snowflake and pd.to_snowflake(dataframe) for large data by uploading data via a parquet file. You can control the dataset size at which Snowpark pandas switches to parquet with the variable modin.config.PandasToSnowflakeParquetThresholdBytes)&lt;/li&gt;
&lt;li&gt;Snowpark Connect for Spark 0.32.0 (Support for RepairTable, Make jdk4py an optional dependency of Snowpark Connect for Spark to simplify configuring Java home for end users, Support more interval type cases)&lt;/li&gt;
&lt;li&gt;Snowpark Connect for Spark 0.31.0 (Add support for expressions in the GROUP BY clause when the clause is explicitly selected, Add error codes to the error messages for better troubleshooting)&lt;/li&gt;
&lt;li&gt;Snowflake ML 1.16.0 (New modeling features: Support for scikit-learn versions earlier than 1.8, New ML Jobs features: Support for configuring the runtime image via the runtime_environment parameter at submission time. You may specify an image tag or a full image URL, New Model Registry features: Ability to mark model methods as volatile or immutable. Volatile methods may return different results when called multiple times with the same input, while immutable methods always return the same result for the same input. Methods in supported model types are immutable by default, while methods in custom models are volatile by default)&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  Bug fixes:
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;Snowflake Connector for Google Analytics Aggregate Data 2.1.1 (Fixed an issue where the export process failed because scoped temporary tables could not be created in the destination schema)&lt;/li&gt;
&lt;li&gt;Snowflake Connector for Google Analytics Aggregate Data 2.1.2 (The IMPORT_STATE procedure now grants SELECT privilege to the application roles ADMIN and DATA_READER)&lt;/li&gt;
&lt;li&gt;Snowflake Connector for ServiceNow® V2 5.26.0 (The connector now retries curl errors more times, making it more resilient to network issues in Azure deployments)&lt;/li&gt;
&lt;li&gt;Ingest Java SDK 4.3.1 (Fixed vulnerable dependencies and cleaned up internal dependency workarounds)&lt;/li&gt;
&lt;li&gt;JDBC Driver 3.27.0 (Fixed permission check of the .toml configuration file, Fixed pattern search for file when QUOTED_IDENTIFIERS_IGNORE_CASE is enabled)&lt;/li&gt;
&lt;li&gt;JDBC Driver 3.27.1 (Fixed exponential backoff retry time for non-auth requests)&lt;/li&gt;
&lt;li&gt;Node.js 3.2.1 (Fixed a regression causing PUT operations to encrypt files with the wrong smkId)&lt;/li&gt;
&lt;li&gt;ODBC 3.7.1 (Fixed a bug with numeric data conversion when using bulk fetching)&lt;/li&gt;
&lt;li&gt;ODBC 3.8.1 (Fixed a bug with numeric data conversion when using bulk fetching)&lt;/li&gt;
&lt;li&gt;ODBC 3.9.1 (Fixed a bug with numeric data conversion when using bulk fetching)&lt;/li&gt;
&lt;li&gt;ODBC 3.10.1 (Fixed a bug with numeric data conversion when using bulk fetching)&lt;/li&gt;
&lt;li&gt;ODBC 3.11.0 (Fixed an issue with the in-band telemetry event handler to properly reset the events, Fixed the HTTP headers used to authenticate via OKTA, Removed the trailing slash from the default RedirectUri within the OAuth Authorization process)&lt;/li&gt;
&lt;li&gt;ODBC 3.11.1 (Fixed a bug with numeric data conversion when using bulk fetching)&lt;/li&gt;
&lt;li&gt;ODBC 3.12.0 (Fixed a bug where, during OIDC usage, the token was not required, causing errors, Fixed the MacOS release to include the x86_64 architecture, Fixed a bug with DEFAULT_VARCHAR_SIZE in the configuration of the default varchar length parameter, Fixed a bug with numeric data conversion when using bulk fetching)&lt;/li&gt;
&lt;li&gt;Snowflake Connector for Python 4.0.0 (Fixed get_results_from_sfqid when using DictCursor and executing multiple statements at once, Fixed retry behavior for ECONNRESET errors, Fixed the return type of SnowflakeConnection.cursor(cursor_class) to match the type of cursor_class, Constrained the types of fetchone, :code:fetchmany, and fetchall, Fixed the “No AWS region was found” error when AWS region was set in the AWS_DEFAULT_REGION variable instead of in AWS_REGION for the WORKLOAD_IDENTITY authenticator)&lt;/li&gt;
&lt;li&gt;Snowpark Library for Python 1.40.0 (Snowpark pandas API updates: Bug fixes: Fixed a bug that caused DataFrame.limit() to fail if the executed SQL contained parameter binding when used in non-stored-procedure/udxf environments, Added an experimental fix for a bug in schema query generation that could cause invalid SQL to be generated when using nested structured types; Improvements: Improved DataFrameReader.dbapi (Public Preview) so it doesn’t retry on non-retryable errors, such as SQL syntax error on external data source query, Removed unnecessary warnings about local package version mismatch when using session.read.option(‘rowTag’, ).xml() or xpath functions, Improved DataFrameReader.dbapi (Public Preview) reading performance by setting the default fetch_size parameter value to 100000, Improved error message for XSD validation failure when reading XML files using session.read.option(‘rowValidationXSDPath’, ).xml(), Fixed multiple bugs in DataFrameReader.dbapi (Public Preview): Fixed UDTF ingestion failure with pyodbc driver caused by unprocessed row data, Fixed SQL Server query input failure due to incorrect select query generation, Fixed UDTF ingestion not preserving column nullability in the output schema, Fixed an issue that caused the program to hang during multithreaded Parquet-based ingestion when a data fetching error occurred, Fixed a bug in schema parsing when custom schema strings used upper-cased data type names (NUMERIC, NUMBER, DECIMAL, VARCHAR, STRING, TEXT), Fixed a bug in Session.create_dataframe where schema string parsing failed when using upper-cased data type names (e.g., NUMERIC, NUMBER, DECIMAL, VARCHAR, STRING, TEXT))&lt;/li&gt;
&lt;li&gt;Snowpark Library for Python 1.41.0 (Fixed a bug that DataFrameReader.xml fails to parse XML files with undeclared namespaces when ignoreNamespace is True, Added a fix for floating point precision discrepancies in interval_day_time_from_parts, Fixed a bug where writing Snowpark pandas DataFrames on the pandas backend with a column multiindex to Snowflake with to_snowflake would raise KeyError, Fixed a bug that DataFrameReader.dbapi (Public Preview) is not compatible with oracledb 3.4.0, Fixed a bug where modin would unintentionally be imported during session initialization in some scenarios, Fixed a bug where session.udf|udtf|udaf|sproc.register failed when an extra session argument was passed. These methods do not expect a session argument; please remove it if provided; Snowpark pandas API updates: Fixed a bug where the row count was not cached in the ordered DataFrame each time count_rows() was called)&lt;/li&gt;
&lt;li&gt;Snowpark Connect for Spark 0.32.0 (Fix Join issues by refactoring qualifiers, Fix percentile_cont to allow filter and sort order expressions, Fix histogram_numeric UDAF, Fix the COUNT function when called with multiple args)&lt;/li&gt;
&lt;li&gt;Snowpark Connect for Spark 0.31.0 (Fix the window function unsupported cast issue)&lt;/li&gt;
&lt;li&gt;Snowflake ML 1.16.0 (Model Registry bug fixes: Remove redundant pip dependency warnings when artifact_repository_map is provided for warehouse model deployments)&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Conclusion
&lt;/h3&gt;

&lt;p&gt;As we conclude this month’s unofficial Snowflake release notes, it’s evident that October 2025 brought one of the most significant waves of improvements across AI, governance, performance, and the broader Snowflake ecosystem. From powerful Cortex and DocumentAI upgrades to major advances in Apache Iceberg interoperability, Native Apps, and Snowpark, Snowflake continues to develop into a more open, developer-friendly, and AI-integrated data platform. The pace of innovation remains rapid — and the intersection of AI + data governance + secure collaboration is becoming the new core of Snowflake’s strategy.&lt;/p&gt;

&lt;p&gt;Thanks for joining me for another deep dive into what’s new. I hope these curated notes saved you time and clarified the most important changes you need to know. If this was helpful, I’d love to hear your feedback—whether it’s suggestions on format, sections you’d like to see expanded, or requests for private previews in future editions. Drop a comment or reach out on LinkedIn, and I’ll keep refining this resource for the community.&lt;/p&gt;

&lt;p&gt;I hear that there are a lot of new product announcements coming during BUILD on November 4th to 6th. I had an early peek, and they are very interesting.&lt;/p&gt;

&lt;p&gt;Enjoy the reading.&lt;/p&gt;

&lt;p&gt;I am Augusto Rosa, a Snowflake Data Superhero and Snowflake SME. I am also the Head of Data, Cloud, &amp;amp; Security Architecture at &lt;a href="https://archetypeconsulting.com/" rel="noopener noreferrer"&gt;Archetype Consulting&lt;/a&gt;. You can follow me on &lt;a href="https://www.linkedin.com/in/augustorosa/" rel="noopener noreferrer"&gt;LinkedIn&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Subscribe to my Medium blog &lt;a href="https://blog.augustorosa.com/" rel="noopener noreferrer"&gt;https://blog.augustorosa.com&lt;/a&gt; for the most interesting Data Engineering and Snowflake news.&lt;/p&gt;

&lt;h4&gt;
  
  
  Sources:
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://docs.snowflake.com/en/release-notes/preview-features" rel="noopener noreferrer"&gt;https://docs.snowflake.com/en/release-notes/preview-features&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.snowflake.com/en/release-notes/new-features" rel="noopener noreferrer"&gt;https://docs.snowflake.com/en/release-notes/new-features&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.snowflake.com/en/release-notes/sql-improvements" rel="noopener noreferrer"&gt;https://docs.snowflake.com/en/release-notes/sql-improvements&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.snowflake.com/en/release-notes/performance-improvements-2024" rel="noopener noreferrer"&gt;https://docs.snowflake.com/en/release-notes/performance-improvements-2024&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.snowflake.com/en/release-notes/clients-drivers/monthly-releases" rel="noopener noreferrer"&gt;https://docs.snowflake.com/en/release-notes/clients-drivers/monthly-releases&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.snowflake.com/en/release-notes/connectors/gard-2024" rel="noopener noreferrer"&gt;https://docs.snowflake.com/en/release-notes/connectors/&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://marketplace.visualstudio.com/items?itemName=snowflake.snowflake-vsc" rel="noopener noreferrer"&gt;https://marketplace.visualstudio.com/items?itemName=snowflake.snowflake-vsc&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.snowconvert.com/sc/general/release-notes/release-notes" rel="noopener noreferrer"&gt;https://docs.snowconvert.com/sc/general/release-notes/release-notes&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;




</description>
      <category>datasuperhero</category>
      <category>snowflake</category>
      <category>ai</category>
      <category>data</category>
    </item>
    <item>
      <title>Snowflake Migration Series — Lesson 4: People and Process: The Work That Makes The Tech Stick</title>
      <dc:creator>augusto kiniama rosa</dc:creator>
      <pubDate>Mon, 13 Oct 2025 19:02:16 +0000</pubDate>
      <link>https://forem.com/kiniama/snowflake-migration-series-lesson-4-people-and-process-the-work-that-makes-the-tech-stick-2a9o</link>
      <guid>https://forem.com/kiniama/snowflake-migration-series-lesson-4-people-and-process-the-work-that-makes-the-tech-stick-2a9o</guid>
      <description>&lt;h3&gt;
  
  
  Snowflake Migration Series — Lesson 4: People and Process: The Work That Makes The Tech Stick
&lt;/h3&gt;

&lt;h4&gt;
  
  
  Great Migrations Fail When People Are Left Behind. You Don’t Just Move Code; You Move Habits
&lt;/h4&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F1024%2F0%2AqXZ6f8d-V4m6vSrG" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F1024%2F0%2AqXZ6f8d-V4m6vSrG" width="1024" height="680"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;Photo by Suzanne D. Williams on Unsplash&lt;/em&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  tl;dr
&lt;/h3&gt;

&lt;p&gt;Migrations don’t fail because Snowflake or dbt are bad tools. They fail when people are left behind. Many focus only on the technical aspects—like configuring Snowflake or rewriting code—and assume the team will naturally adapt to the new processes. This oversight can cause frustration, hinder adoption, and prevent realizing the platform’s full benefits.&lt;/p&gt;

&lt;p&gt;You aren’t just moving code — you’re changing habits. This lesson gives you a simple plan: the skills to grow, the roles to staff, the rituals to run every week, and how to use AI as a helpful co-pilot.&lt;/p&gt;

&lt;p&gt;To achieve long-term success, the people and process work stream should be prioritized equally with the technical migration outlined in the project charter, ensuring sufficient resources are allocated.&lt;/p&gt;

&lt;p&gt;In &lt;a href="https://dev.to/kiniama/snowflake-migration-series-lesson-1-the-power-of-a-strategic-mvp-your-first-slice-of-value-4k45-temp-slug-8662951"&gt;Lesson 1&lt;/a&gt;, RetailCorp shipped a thin, working slice. In &lt;a href="https://dev.to/kiniama/migration-series-lesson-2-unearthing-the-truth-a-strategic-approach-to-legacy-system-assessment-52ak-temp-slug-937858"&gt;Lesson 2&lt;/a&gt;, we assessed the legacy world while building. In &lt;a href="https://medium.com/snowflake/migration-series-lesson-3-unearthing-the-logic-a-realistic-guide-to-business-logic-translation-ca7a0de27eb8" rel="noopener noreferrer"&gt;Lesson 3&lt;/a&gt;, we translated business logic into Snowflake-native patterns. Lesson 4 is how those patterns become the new normal.&lt;/p&gt;

&lt;h4&gt;
  
  
  Why People &amp;amp; Process Matter
&lt;/h4&gt;

&lt;p&gt;After our initial successes, RetailCorp's team began to drift: a few developers kept rebuilding nightly batches as a backup, analysts maintained private spreadsheets, and reviews became infrequent. Once we simplified and made the new process more visible, with short sprints, small demos, and clear tests, adoption increased significantly.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;The New Ecosystem of Skills&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Adopting a modern data stack such as Snowflake and dbt isn't just about switching tools—it’s about embracing a whole new environment that requires fresh skills. Expecting an experienced Informatica team to quickly acquire these new competencies amid a migration is unrealistic. A modern data team now must excel in:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Build well&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Modular SQL with dbt Proficiency: Writing modular SQL, using Jinja for templating, writing macros, defining tests, and understanding materializations.&lt;/li&gt;
&lt;li&gt;Data contracts as tests (not_null, unique, relationships, accepted_values).&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Ship safely&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Version Control: Branching and pull requests in Git are fundamental for collaborative code development. Make sure there are clear code reviews (one data engineer + one analytics engineer).&lt;/li&gt;
&lt;li&gt;Automation: Building automated CI/CD pipelines for testing and deployment.&lt;/li&gt;
&lt;li&gt;Git for branches/PRs; CI to run tests on every PR; CD to deploy on merge.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Spend wisely&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Snowflake Specifics: Understanding virtual warehouses, cost management, query profiles, role-based access control, semi-structured data types, and simple cost budgets/alerts.&lt;/li&gt;
&lt;li&gt;Cloud Cost Management: A crucial new skill in a consumption-based cloud world.&lt;/li&gt;
&lt;/ul&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;Keep it teachable: one page per topic, one example, one exercise. A Snowflake&lt;/em&gt; &lt;a href="https://quickstarts.snowflake.com/" rel="noopener noreferrer"&gt;&lt;em&gt;quickstart&lt;/em&gt;&lt;/a&gt; &lt;em&gt;tailored to your use case is a good example.&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h4&gt;
  
  
  &lt;strong&gt;Embedded and Agile Learning is Key&lt;/strong&gt;
&lt;/h4&gt;

&lt;p&gt;Embed training in every sprint, not as an afterthought. Rather than isolated, one-off sessions, skill development must be woven into the migration plan using agile methods. Success depends on:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Adopting Agile: Implement a sprint-based method that involves data engineers, analysts, and key business users directly in planning and developing the MVP and following phases.&lt;/li&gt;
&lt;li&gt;Continuous Learning Cycles: Embed learning opportunities within the sprints. For instance, one sprint could focus on building dbt models, while another emphasizes setting up dbt tests or configuring a CI pipeline. Pair programming is especially useful in this setting.&lt;/li&gt;
&lt;li&gt;Develop the CI/CD pipeline gradually during the initial parallel phase. This method provides fast gains and encourages robust software engineering practices.&lt;/li&gt;
&lt;li&gt;Promoting new norms entails actively encouraging code reviews, establishing shared coding standards, and enhancing collaboration through Git.&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  Embrace Formal Change Management
&lt;/h4&gt;

&lt;p&gt;This marks a major change in work dynamics. Developing a formal change management plan is crucial, involving clear communication of the reasons for the change, continuous support, celebrating milestones such as the MVP launch, and actively managing resistance to the new processes. Supporting your team throughout this transition is vital for securing long-term success.&lt;/p&gt;

&lt;h3&gt;
  
  
  A Week-by-Week 90-day enablement plan
&lt;/h3&gt;

&lt;p&gt;Adapt the plan to fit your organization: for teams under 5 people, collapse it to 60 days; for teams over 20, extend it to 120 days and add role-specific tracks. You can also create versions of this as business units move to the new platform.&lt;/p&gt;

&lt;h4&gt;
  
  
  &lt;strong&gt;Weeks 1–2: Set the stage&lt;/strong&gt;
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;Create a one-page team rulebook that outlines naming conventions, folder structure, default dbt tests, and a PR checklist. This document will serve as a template Git project, promoting best practices, and can be reused as needed.&lt;/li&gt;
&lt;li&gt;Stand up CI: run dbt tests on pull requests and block merges if tests fail (a minimal sketch of this gate follows after this list).&lt;/li&gt;
&lt;li&gt;Pick a roadmap item as a learning slice that will ship in 2 weeks.&lt;/li&gt;
&lt;/ul&gt;
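
&lt;p&gt;As a concrete illustration of the CI gate above, here is a minimal sketch, assuming the dbt CLI is installed and a profile is configured; wiring the script into your CI system (GitHub Actions, GitLab CI, etc.) is left to your platform:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;#!/usr/bin/env python3
"""Minimal CI gate: run dbt tests and fail the build on any error."""
import subprocess
import sys

# Run the project's dbt tests; scope with --select if runs get slow.
result = subprocess.run(["dbt", "test"], capture_output=True, text=True)
print(result.stdout)

if result.returncode != 0:
    print(result.stderr, file=sys.stderr)
    sys.exit(1)  # a non-zero exit is what blocks the merge in CI
&lt;/code&gt;&lt;/pre&gt;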

&lt;h4&gt;
  
  
  &lt;strong&gt;Weeks 3–6: Ship small, learn fast&lt;/strong&gt;
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;Two-week sprints, with each one delivering a small production update.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Rituals:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Daily 10-minute standup: discuss updates and obstacles.&lt;/li&gt;
&lt;li&gt;Mid-sprint partner rotation hour.&lt;/li&gt;
&lt;li&gt;End-of-sprint demo and a decision-log entry; keep it simple: one paragraph describing what changed and why.&lt;/li&gt;
&lt;li&gt;Begin a rotating on-call duty for data; the person on duty addresses red builds that day.&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  &lt;strong&gt;Weeks 7–12: Make it normal&lt;/strong&gt;
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;Include cost checks such as budgets and query alerts.&lt;/li&gt;
&lt;li&gt;Include two cross-team reviews, analytics ↔ data engineering, to distribute patterns more effectively.&lt;/li&gt;
&lt;li&gt;Display a basic scoreboard (see the suggested metrics below) on a wall or wiki.&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  Metrics that change behavior
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;Lead time (from roadmap to prod)&lt;/li&gt;
&lt;li&gt;% of PRs with tests&lt;/li&gt;
&lt;li&gt;Red builds fixed within 24h&lt;/li&gt;
&lt;li&gt;Duplicate logic removed&lt;/li&gt;
&lt;li&gt;Number of unique presenters in demos (shared ownership)&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
&lt;strong&gt;How AI Catalyzes Team Adoption&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;AI tools can serve as a valuable resource in implementing this people-centric, agile learning approach. They function as immediate tutors and support the quick adoption of new norms.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;During a learning-focused sprint, team members can utilize AI to get instant, personalized guidance. For example: “I need to write a dbt test to ensure my foreign keys are valid. Can you show me an example of a relationship test?”&lt;/li&gt;
&lt;li&gt;Code Review Assistants: To foster the development of new standards, you can use AI to pre-screen code. Developers might ask an AI to “Review my SQL code against our team’s style guide and suggest improvements” before submitting a pull request for human review (a minimal sketch of this pre-screen follows after this list).&lt;/li&gt;
&lt;li&gt;On-Demand Onboarding Content: AIs can instantly generate learning materials for your sprints. For example, they can create a short tutorial with a practical exercise on using Jinja to build a dynamic date filter in a dbt model. This supports continuous learning while reducing the workload for senior team members.&lt;/li&gt;
&lt;/ul&gt;
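
&lt;p&gt;A hedged sketch of that pre-screen idea, calling the SNOWFLAKE.CORTEX.COMPLETE SQL function from Snowpark Python; the connection name, model, style guide, and SQL under review are all placeholders:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# Sketch: pre-screen a SQL change with Cortex COMPLETE before human review.
from snowflake.snowpark import Session

session = Session.builder.config("connection_name", "default").create()

style_guide = "Use CTEs over subqueries; snake_case names; no SELECT *."
sql_under_review = "select * from ORDERS o, CUSTOMERS c where o.cid=c.id"

prompt = (
    "Review this SQL against our style guide and suggest improvements.\n"
    f"Style guide: {style_guide}\nSQL:\n{sql_under_review}"
)

# Bind the model name and prompt as parameters; the model is a placeholder.
review = session.sql(
    "SELECT SNOWFLAKE.CORTEX.COMPLETE(?, ?)",
    params=["mistral-large2", prompt],
).collect()[0][0]
print(review)
&lt;/code&gt;&lt;/pre&gt;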

&lt;h3&gt;
  
  
  Conclusion
&lt;/h3&gt;

&lt;p&gt;Lessons 1–3 focused on showing value and clarifying the logic, while Lesson 4 highlights making the new approach sustainable. Successful implementation isn't just flipping switches; it's about building skills, creating simple weekly routines, and documenting straightforward rules for everyone. When teams actively work on the tasks, deliver gradually, and learn during the sprint instead of after, the platform you develop is more likely to be used as intended.&lt;/p&gt;

&lt;p&gt;Adopt a teachable approach by dedicating one page per topic, including an example, and providing an exercise. Ensure visibility through brief demonstrations, a dynamic decision log, and a public scoreboard. Focus on humane methods: prefer pairing over policing, utilize checklists instead of lectures, and see AI as a helpful co-pilot rather than the pilot. This will help the desired habits become normal quickly.&lt;/p&gt;

&lt;p&gt;Your next step is simple: select a small segment, run the 2-week cycle, and monitor the results, including lead time, the share of PRs with tests, and whether red builds are fixed within a day. This evidence helps build trust across the organization and enables you to confidently retire outdated methods.&lt;/p&gt;

&lt;p&gt;Next: Lesson 5 — The Final Gate: proving trust with automated, cell-level validation.&lt;/p&gt;

&lt;p&gt;I am Augusto Rosa, a Snowflake Data Superhero and Snowflake SME. I am also the Head of Data, Cloud, &amp;amp; Security Architecture at &lt;a href="https://archetypeconsulting.com/" rel="noopener noreferrer"&gt;Archetype Consulting&lt;/a&gt;. You can follow me on &lt;a href="https://www.linkedin.com/in/augustorosa/" rel="noopener noreferrer"&gt;LinkedIn&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Subscribe to my Medium blog &lt;a href="https://blog.augustorosa.com/" rel="noopener noreferrer"&gt;https://blog.augustorosa.com&lt;/a&gt; for the most interesting Data Engineering and Snowflake news.&lt;/p&gt;




</description>
      <category>snowflake</category>
      <category>datasuperhero</category>
      <category>cloudmigration</category>
      <category>datamigration</category>
    </item>
    <item>
      <title>Snowflake Migration Series — Lesson 3: Unearthing the Logic: A Realistic Guide to Business Logic…</title>
      <dc:creator>augusto kiniama rosa</dc:creator>
      <pubDate>Tue, 07 Oct 2025 14:01:58 +0000</pubDate>
      <link>https://forem.com/kiniama/snowflake-migration-series-lesson-3-unearthing-the-logic-a-realistic-guide-to-business-logic-132p</link>
      <guid>https://forem.com/kiniama/snowflake-migration-series-lesson-3-unearthing-the-logic-a-realistic-guide-to-business-logic-132p</guid>
      <description>&lt;h3&gt;
  
  
  Snowflake Migration Series — Lesson 3: Unearthing the Logic: A Realistic Guide to Business Logic Translation
&lt;/h3&gt;

&lt;h4&gt;
  
  
  The Reality: Tools are Accelerators, Not Finishers
&lt;/h4&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fda15xdxit7cednigaa2h.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fda15xdxit7cednigaa2h.png" width="800" height="451"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  tl;dr
&lt;/h3&gt;

&lt;p&gt;When migrating data warehouses, one of the biggest challenges is translating years of business logic from an older system like Informatica to a newer, more flexible platform like dbt. It’s easy to wish for a quick, magic button solution, but the reality is that this process is complex and requires careful work. The key is to change how we see this task: it’s not just about converting code, but about uncovering and thoughtfully redesigning the underlying business logic to better support the organization. We assess first (scan) to map what’s safe to automate, then convert what’s safe and redesign the rest.&lt;/p&gt;

&lt;p&gt;In &lt;a href="https://dev.to/kiniama/snowflake-migration-series-lesson-1-the-power-of-a-strategic-mvp-your-first-slice-of-value-4k45-temp-slug-8662951"&gt;Lesson 1&lt;/a&gt;, RetailCorp shipped a thin, working slice. In &lt;a href="https://dev.to/kiniama/migration-series-lesson-2-unearthing-the-truth-a-strategic-approach-to-legacy-system-assessment-52ak-temp-slug-937858"&gt;Lesson 2&lt;/a&gt;, we assessed the legacy world while we kept building. Now we face the messy heart of migration, which means years of business rules spread across stored procs, webservices, ETL tools, and scripts. The goal is not to copy code line-by-line. The goal is to carry the meaning of the logic into a simpler, Snowflake‑native shape.&lt;/p&gt;

&lt;h3&gt;
  
  
  The Challenge
&lt;/h3&gt;

&lt;p&gt;At RetailCorp, we found hundreds of routines doing the same thing in different ways. Some updated facts row by row. Some used lookups that made sense years ago. When we tried a straight convert, it ran for the most part, but it was slow and confusing. When we stepped back and asked, “What is this trying to do for the business? What value does the business need?”, the answer was simple: upsert orders, keep history, and calculate totals the same way every time. We rebuilt those ideas with fewer moving parts.&lt;/p&gt;

&lt;h3&gt;
  
  
  Tools Are Just Accelerators
&lt;/h3&gt;

&lt;p&gt;Automated conversion tools are powerful and even required. They can speed up migration by handling a big volume of transformations. However, expecting these tools to create a perfect, one-to-one conversion is a mistake. The truth is, these tools are accelerators to help us get to value faster, and they will not finish the job. They give you a starting point, but a successful migration requires a tool-assisted, expert-led approach. AI is even accelerating the success you get from tools. You need to budget for tools to handle most of the work and for expert humans to oversee the most complex and valuable parts of the process.&lt;/p&gt;

&lt;h4&gt;
  
  
  Rethinking Logic, Not Just Rewriting Code
&lt;/h4&gt;

&lt;p&gt;When it comes to a legacy system, the most crucial patterns often can't be translated directly. For instance, a tool might convert an Informatica dynamic lookup into pure SQL, but the outcome would be extremely inefficient on a platform like Snowflake. We need to translate the intent, not the syntax.&lt;/p&gt;

&lt;p&gt;At this point, having an expert in the loop becomes crucial. Someone who thoroughly understands both the source and target platforms recognizes that the goal is to capture the business &lt;em&gt;intent&lt;/em&gt;, not the exact code. They'd identify the inefficient tool output and &lt;em&gt;redesign&lt;/em&gt; the pattern to utilize a native feature of the new platform, such as an intermediate table or statement in Snowflake. This expert-led redesign ensures the new system is not only accurate but also efficient and easy to maintain.&lt;/p&gt;

&lt;p&gt;That being said, from what I've seen, you can guide the system with best practices. Essentially, since your code converter relies on AI, you or your vendor can provide your own expertise. I might even share those guidelines in a future post.&lt;/p&gt;

&lt;p&gt;First, we scan with the converter in assessment mode to identify what’s auto-friendly and what needs redesign. Then, we convert the safe parts in conversion mode and send everything else to engineers for native Snowflake/dbt patterns. That’s the entire tool-assisted, expert-led process.&lt;/p&gt;

&lt;h4&gt;
  
  
  &lt;strong&gt;A Tool-Assisted, Expert-Led Strategy&lt;/strong&gt;
&lt;/h4&gt;

&lt;p&gt;Setting realistic expectations is really important. A good migration plan recognizes that while tools can help speed up about 70% of the translation work, the remaining 30% needs a skilled human touch. This isn't a sign that the tools are failing; it's simply the reality when moving complex business logic that has been developed over many years. It’s that human expertise that truly makes automation effective.&lt;/p&gt;

&lt;p&gt;Remember to handle migrations thoughtfully. Doing so helps ensure your business transitions smoothly without losing any of the key elements that keep it running well. Never lose focus on the value to RetailCorp you are trying to create.&lt;/p&gt;

&lt;h4&gt;
  
  
  How AI Enables Expertise
&lt;/h4&gt;

&lt;p&gt;LLMs are the perfect partner for experts in this tool-assisted, expert-led model. They serve as a powerful co-pilot, boosting the expert’s ability to deliver high-value redesign work.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Speeding Up Redesigns:&lt;/strong&gt; An expert can share inefficient, tool-generated code with an LLM and request a platform-native redesign. For instance: "Rewrite this cursor-based SQL logic as a single, set-based MERGE statement optimized for Snowflake."&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Brainstorming New Patterns:&lt;/strong&gt; When dealing with a complex legacy pattern, an expert can use an LLM to explore different options. For example, they might ask, "What are the typical patterns for implementing transaction control in dbt on Snowflake? What are the pros and cons of each approach?" Sometimes it can be easier to do this part with Snowflake Cortex COMPLETE, as it has the context of the DDLs for schemas and tables, or you can pass those as part of your AI questions.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Documenting Intent:&lt;/strong&gt; Once an expert rewrites a piece of logic, they can use an LLM to create documentation for the new approach that the rest of the team can follow. For example, "Explain why this MERGE statement is a more efficient way to implement the original dynamic lookup logic, and document it in Markdown format."&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Troubleshooting Faster:&lt;/strong&gt; If you cannot immediately identify the error and why it is failing, run the error through AI for faster diagnosis. It is much faster to collaborate with the expert to reach a conclusion instead of spending hours or days trying to find a solution or waiting for support. “Help me troubleshoot my problem, see error log #error# and give potential fixes.”&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  The List: Short Checklist for Logic Translation
&lt;/h4&gt;

&lt;p&gt;&lt;strong&gt;1) Assess and Triage:&lt;/strong&gt; Run a converter in assessment mode to get coverage % and hotspots, and tag each object: Auto (convert), Assist (AI + review), Manual (expert redesign).&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2) Convert what’s safe:&lt;/strong&gt; Batch-convert DDL and straightforward SQL and land output in a repo; open a PR labeled “Converted (needs review)”.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3) Redesign the hard parts:&lt;/strong&gt; Replace cursors/row-by-row with set-based MERGE (a minimal sketch follows below), turn nightly “monster jobs” into streams + tasks or dbt incrementals, and collapse “mystery joins” into clear, tested models with one owner.&lt;/p&gt;
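
&lt;p&gt;A minimal sketch of the step 3 redesign, collapsing a row-by-row cursor loop into one set-based MERGE issued from Snowpark Python; the table and column names are illustrative, not taken from any real project:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# Sketch: one set-based MERGE in place of a cursor loop.
# Table and column names are illustrative.
from snowflake.snowpark import Session

session = Session.builder.config("connection_name", "default").create()

session.sql("""
    MERGE INTO orders AS tgt
    USING staged_orders AS src
      ON tgt.order_id = src.order_id
    WHEN MATCHED THEN UPDATE SET
      tgt.status = src.status,
      tgt.total  = src.total
    WHEN NOT MATCHED THEN INSERT (order_id, status, total)
      VALUES (src.order_id, src.status, src.total)
""").collect()
&lt;/code&gt;&lt;/pre&gt;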

&lt;p&gt;&lt;strong&gt;4) Lock truth with tests:&lt;/strong&gt; Add four basics: not_null, unique, relationships, and accepted_values, and treat tests as the contract between tech and business.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;5) Use AI as a co-pilot, not a pilot:&lt;/strong&gt; Ask for a Snowflake-native rewrite (e.g., “convert to one MERGE”), and generate first-pass docs and error explanations; humans approve.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;6) Write down why:&lt;/strong&gt; One paragraph per redesign: We changed X to Y because Z (impact: speed/cost/clarity).&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;7) Prove it:&lt;/strong&gt; Diff legacy vs new for the top tables, attach a pass/fail report to the ticket, and treat two green cycles in a row as ready to cut over (a minimal diff sketch follows below).&lt;/p&gt;
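
&lt;p&gt;A minimal sketch of the step 7 diff, assuming the legacy and migrated tables share the same columns; the database, schema, and table names are placeholders:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# Sketch: two-directional diff of legacy vs. migrated tables.
from snowflake.snowpark import Session

session = Session.builder.config("connection_name", "default").create()

legacy = session.table("legacy_db.public.orders")
migrated = session.table("analytics.marts.orders")

# EXCEPT in both directions: rows missing from the new table,
# and rows the new table has that the legacy table does not.
only_in_legacy = legacy.minus(migrated).count()
only_in_new = migrated.minus(legacy).count()

if only_in_legacy == 0 and only_in_new == 0:
    print("PASS: tables match row for row")
else:
    print(f"FAIL: {only_in_legacy} rows missing, {only_in_new} unexpected")
&lt;/code&gt;&lt;/pre&gt;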

&lt;p&gt;&lt;strong&gt;8) Celebrate and retire:&lt;/strong&gt; Remove the old jobs. Close the loop with stakeholders. Celebrate.&lt;/p&gt;

&lt;h3&gt;
  
  
  Conclusion
&lt;/h3&gt;

&lt;p&gt;By blending smart tools, expert insights, and the growing capabilities of large language models, you can more easily handle the tricky parts of translating logic. This approach helps you create a new platform that’s strong, fast, and up-to-date.&lt;/p&gt;

&lt;p&gt;In the adventure of migrating business logic, remember that tools are like fast cars — they can accelerate your progress, but won’t get you to the finish line alone. While automated tools handle much of the heavy lifting, a skilled expert is essential to navigate the complexities and redesign logic for efficiency. Think of them as your trusty GPS, helping you avoid the potholes and ensure everything functions smoothly in its new home.&lt;/p&gt;

&lt;p&gt;So, as you dive into this migration journey, embrace a tool-assisted, expert-led strategy. Mix the power of automation with human insight for a successful transition. With a dash of humor and a sprinkle of patience, you’ll not only keep your data safe but also transform it into something stronger and more efficient — just like a well-planned move! Happy migrating!&lt;/p&gt;

&lt;p&gt;Keep a watch for Lesson 4, which is coming in a week or so. Meanwhile, catch up to the series, &lt;a href="https://dev.to/kiniama/snowflake-migration-series-lesson-1-the-power-of-a-strategic-mvp-your-first-slice-of-value-4k45-temp-slug-8662951"&gt;Lesson 1&lt;/a&gt; and &lt;a href="https://dev.to/kiniama/migration-series-lesson-2-unearthing-the-truth-a-strategic-approach-to-legacy-system-assessment-52ak-temp-slug-937858"&gt;Lesson 2&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;I am Augusto Rosa, a Snowflake Data Superhero and Snowflake SME. I am also the Head of Data, Cloud, &amp;amp; Security Architecture at &lt;a href="https://archetypeconsulting.com/" rel="noopener noreferrer"&gt;Archetype Consulting&lt;/a&gt;. You can follow me on &lt;a href="https://www.linkedin.com/in/augustorosa/" rel="noopener noreferrer"&gt;LinkedIn&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Subscribe to my Medium blog &lt;a href="https://blog.augustorosa.com/" rel="noopener noreferrer"&gt;https://blog.augustorosa.com&lt;/a&gt; for the most interesting Data Engineering and Snowflake news.&lt;/p&gt;




</description>
      <category>snowflake</category>
      <category>data</category>
      <category>datasuperhero</category>
      <category>migration</category>
    </item>
    <item>
      <title>The Unofficial Snowflake Monthly Release Notes: September 2025</title>
      <dc:creator>augusto kiniama rosa</dc:creator>
      <pubDate>Thu, 02 Oct 2025 15:44:02 +0000</pubDate>
      <link>https://forem.com/kiniama/the-unofficial-snowflake-monthly-release-notes-september-2025-22lm</link>
      <guid>https://forem.com/kiniama/the-unofficial-snowflake-monthly-release-notes-september-2025-22lm</guid>
      <description>&lt;h4&gt;
  
  
  Monthly Snowflake Unofficial Release Notes #New features #Previews #Clients #Behavior Changes
&lt;/h4&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnkwld2mxr956csfmjtay.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnkwld2mxr956csfmjtay.png" width="800" height="343"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Welcome to the September 2025 Unofficial Release Notes for Snowflake! Here, you’ll find all the latest features, drivers, and more in one convenient place.&lt;/p&gt;

&lt;p&gt;As an unofficial source, I am excited to share my insights and thoughts. Let’s dive in! You can also find all of Snowflake’s releases &lt;a href="https://medium.com/@augustokrosa/list/snowflake-unofficial-newsletter-list-b97037cfc9e6" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;This month, we provide coverage up to release 9.29 (General Availability — GA). I hope to extend this eventually to private preview notices as well.&lt;/p&gt;

&lt;p&gt;I would appreciate your suggestions on continuing to combine these monthly release notes. Feel free to comment below or chat with me on &lt;a href="https://www.linkedin.com/in/augustorosa/" rel="noopener noreferrer"&gt;&lt;em&gt;LinkedIn&lt;/em&gt;&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Behavior change&lt;/strong&gt; bundle &lt;a href="https://docs.snowflake.com/en/release-notes/bcr-bundles/2025_04_bundle" rel="noopener noreferrer"&gt;2025_04&lt;/a&gt; is generally enabled for all customers, &lt;a href="https://docs.snowflake.com/en/release-notes/bcr-bundles/2025_05_bundle" rel="noopener noreferrer"&gt;2025_05&lt;/a&gt; is enabled by default but can be opted out until next BCR deployment, and &lt;a href="https://docs.snowflake.com/en/release-notes/bcr-bundles/2025_06_bundle" rel="noopener noreferrer"&gt;2025_06&lt;/a&gt; is disabled by default but may be opted in.&lt;/p&gt;

&lt;h3&gt;
  
  
  What’s New in Snowflake
&lt;/h3&gt;

&lt;h4&gt;
  
  
  New Features
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;Snowpipe Streaming with high-performance architecture (GA), the new architecture is designed from the ground up for large-scale, real-time data ingestion with high throughput and low latency. Key features and benefits: ingest data at up to 10 GB per second per table, with ingest-to-query latencies typically under 10 seconds, multi-language client support: Java and Python SDKs, built on a shared Rust core for efficiency, centralize your ingestion logic using the PIPE object, benefit from a new throughput-based pricing model that provides predictable costs&lt;/li&gt;
&lt;li&gt;Cost management — Updating budgets more frequently, the time period between consumption and a budget receiving information about that consumption is called the budget refresh interval, which can now be as short as an hour&lt;/li&gt;
&lt;li&gt;FILE data type (GA), enables multimodal AISQL workflows with unstructured data stored on internal or external stages. FILE values provide a way to reference files without encapsulating the actual file content. FILE objects let you: Store references to files in tables and pass them to AISQL functions, avoid duplicating file data and process it more efficiently by creating and passing FILE values as references, integrate with existing data architectures by combining DIRECTORY functions with TO_FILE (see the sketch after this list)&lt;/li&gt;
&lt;/ul&gt;
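
&lt;p&gt;A hedged sketch of the DIRECTORY + TO_FILE combination mentioned in the FILE item above, issued from Snowpark Python; the stage and table names are placeholders, and the stage is assumed to have a directory table enabled:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# Sketch: store FILE references to staged documents in a table.
# Stage and table names are placeholders.
from snowflake.snowpark import Session

session = Session.builder.config("connection_name", "default").create()

session.sql("""
    CREATE OR REPLACE TABLE doc_refs AS
    SELECT TO_FILE('@doc_stage', RELATIVE_PATH) AS doc
    FROM DIRECTORY(@doc_stage)
""").collect()

# The doc column now holds FILE references that can be passed to
# AISQL functions without duplicating the underlying file content.
&lt;/code&gt;&lt;/pre&gt;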

&lt;h4&gt;
  
  
  Snowsight Updates
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;Workspaces (GA), a unified editor for creating and managing code across file types. Workspaces enhance SQL editing with nested folders, rich editing, Copilot help, better charting, and column stats. All workspace content is file-based, simplifying complex projects and Git integration for version control, collaboration, and workflow consistency&lt;/li&gt;
&lt;li&gt;Query insights in Snowsight (Preview), the Query Profile tab under Query History now displays insights about conditions that affect query performance, explaining how performance might be affected and providing a general recommendation for next steps&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  AI Updates (Cortex, ML, DocumentAI)
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;Document AI models in the model registry, Snowflake now stores published or trained models in the Snowflake Model Registry, allowing copying between databases or schemas within or across accounts for easy management, versioning, and RBAC. The registry acts as a control plane for deploying Document AI model versions safely and efficiently across environments&lt;/li&gt;
&lt;li&gt;Cortex Agents: Admin object REST API (Preview), Create an agent via REST API and integrate it into your app to perform tasks or respond to queries. Configure a thread to maintain in-memory context, eliminating the need to send context repeatedly. Includes expanded agent features, Cortex Agents workflow updates, and new REST API endpoints&lt;/li&gt;
&lt;li&gt;Support for Snowflake Cortex AISQL in incremental dynamic table refresh, AISQL functions can now appear in the SELECT clause for dynamic tables in incremental refresh mode. The same availability restrictions as described in AISQL functions apply; add AI-powered insights directly to your dynamic tables, automatically analyzing data as it updates&lt;/li&gt;
&lt;li&gt;AI_FILTER Performance Optimization (Preview), delivers a 2–10x speedup and reduces token usage by up to 60% for suitable queries using AI functions in SELECT, WHERE, and JOIN … ON clauses (see the AI_FILTER sketch after this list)&lt;/li&gt;
&lt;li&gt;Cortex AISQL AI_TRANSLATE (GA), provides industry-leading quality for translations of call transcripts, product reviews, social media comments, and other text content with 20% fewer tokens than in preview&lt;/li&gt;
&lt;li&gt;Page filtering for AI_PARSE_DOCUMENT, includes page filtering capabilities, allowing you to parse specific pages or ranges within large documents and process only the content you need&lt;/li&gt;
&lt;li&gt;AI_COUNT_TOKENS AISQL function (Preview), helps you size your AI workloads by calculating the total number of input tokens processed by AISQL large language models and task-specific functions, so you can size queries appropriately before hitting model limits and accurately estimate costs based on input token usage&lt;/li&gt;
&lt;li&gt;Cortex Analyst semantic model improvements, support for private facts and metrics, and derived metrics calculated from other metrics using arithmetic operations and functions&lt;/li&gt;
&lt;li&gt;Cortex Agents integration for Microsoft Teams and Copilot (Preview), enhances Microsoft Teams and Copilot 365 with multi-agent support, higher-quality responses, multi-turn conversations, and a streamlined setup experience&lt;/li&gt;
&lt;/ul&gt;
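
&lt;p&gt;To make those clause positions concrete, here is a hedged sketch of AI_FILTER as a WHERE predicate, run from Snowpark Python; the table, column, and prompt text are illustrative:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# Sketch: AI_FILTER as a row-level predicate in a WHERE clause.
# Table, column, and prompt are illustrative.
from snowflake.snowpark import Session

session = Session.builder.config("connection_name", "default").create()

positive = session.sql("""
    SELECT review_id, review_text
    FROM product_reviews
    WHERE AI_FILTER(PROMPT('Is this review positive? {0}', review_text))
""")
positive.show()
&lt;/code&gt;&lt;/pre&gt;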

&lt;h4&gt;
  
  
  Snowflake Applications (Container Services, Notebooks, Streamlit, and Applications, Snowconvert, Openflow)
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;Snowpark Container Services in Google Cloud (GA)&lt;/li&gt;
&lt;li&gt;GRANT OWNERSHIP ON NOTEBOOK (GA)&lt;/li&gt;
&lt;li&gt;Automated granting of privileges (GA), Snowflake Native App providers can automate privilege granting to add needed privileges to the manifest. This enables the app to create objects in the consumer account without manual privilege grants or object creation by the consumer.&lt;/li&gt;
&lt;li&gt;App specifications (GA), Allow a Snowflake Native App provider to specify connection info the app requests. When the consumer installs it, they review and approve or decline&lt;/li&gt;
&lt;li&gt;Feature policies (GA), Snowflake Native App administrators can set feature policies to restrict objects that an app can create in the consumer account. They can review the required privileges in the listing before installing&lt;/li&gt;
&lt;li&gt;SnowConvert AI Verification (Preview), strengthens SnowConvert AI by automating functional validation of converted database code. AI Verification uses synthetic data generation, AI-driven unit testing, and AI-driven error resolution&lt;/li&gt;
&lt;li&gt;Snowflake Native Apps support for FedRAMP on AWS for apps with containers (Preview), apps with containers can be distributed to any Snowflake customer who can use them in a FedRAMP region&lt;/li&gt;
&lt;li&gt;Snowflake Openflow — Snowflake Deployments (Preview), run on Snowpark Container Services (SPCS) and provide a streamlined and integrated solution for data integration and connectivity across interoperable storage like Iceberg and Snowflake native storage&lt;/li&gt;
&lt;li&gt;Support for Streamlit in Snowflake in the People’s Republic of China (Preview)&lt;/li&gt;
&lt;li&gt;Snowconvert — IBM DB2 SQL Support&lt;/li&gt;
&lt;li&gt;Snowconvert — Lots of fixes for Teradata, MS SQL Server, Oracle, SSIS, BigQuery&lt;/li&gt;
&lt;li&gt;Snowconvert — AI Verification step for SQL Server migrations&lt;/li&gt;
&lt;li&gt;Snowconvert — Tableau support&lt;/li&gt;
&lt;li&gt;Snowconvert — ETL &amp;amp; SSIS&lt;/li&gt;
&lt;li&gt;Snowconvert — dbt&lt;/li&gt;
&lt;li&gt;Snowconvert — PowerBI Support&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  Data Lake Updates
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;Partitioned writes for Apache Iceberg™ tables (Preview), improves compatibility with the wider Iceberg ecosystem and enables faster read queries from external tools. You can now use Snowflake to create and write to both managed and external Iceberg tables with partitioning&lt;/li&gt;
&lt;li&gt;Support for position row-level deletes when writing to externally managed Apache Iceberg™ tables or catalog-linked databases on Amazon S3 or Google Cloud (Preview), these deletes are supported when Snowflake performs update, delete, and merge operations on the tables&lt;/li&gt;
&lt;li&gt;Support for position row-level deletes when writing to externally managed Apache Iceberg™ tables or catalog-linked databases on Azure (Preview), these deletes are supported when Snowflake performs update, delete, and merge operations on the table files. This feature is a performance improvement for these operations&lt;/li&gt;
&lt;li&gt;Prevent data compaction on Snowflake-managed Apache Iceberg™ tables, new ENABLE_DATA_COMPACTION parameter to specify whether Snowflake should perform data compaction on Snowflake-managed Apache Iceberg™ tables. Snowflake still performs compaction on these tables by default&lt;/li&gt;
&lt;li&gt;New system function to replace the catalog integration for an externally managed Apache Iceberg™ table, new SYSTEM$SET_CATALOG_INTEGRATION system function to replace the catalog integration associated with an externally managed Apache Iceberg™ table and you can use this function to access the latest Iceberg features for your tables, such as write support for externally managed Iceberg tables&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  Realtime Data (Hybrid tables &amp;amp; SnowPostGres)
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;Hybrid table support for Microsoft Azure (Preview)&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  Security, Privacy &amp;amp; Governance Updates
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;Classifying views automatically (GA), Set up sensitive data classification so database views get automatically classified at regular intervals. Before, only tables could be automatically classified&lt;/li&gt;
&lt;li&gt;Excluding objects from automatic classification (Preview), automatically classifies all sensitive data in a database with a classification profile. You can configure Snowflake to exclude schemas, tables, or columns from being classified, so they are skipped during the process&lt;/li&gt;
&lt;li&gt;Using Snowsight to monitor data quality (Preview), the new Data Quality tab is available when you are viewing a table or view in Snowsight, use data profiling to view statistics about your object, such as row counts, null values, and value distributions and monitor the results of data metric functions (DMFs) associated with the object&lt;/li&gt;
&lt;li&gt;Multi-factor authentication — Support for one-time passcodes, generate one-time passcodes (OTPs) that users can use as their second factor of authentication when signing in to Snowflake with multi-factor authentication (MFA); OTPs can also be used to provide break-glass access&lt;/li&gt;
&lt;li&gt;Data lineage for tasks, Determine that data moved from a source to a downstream object due to a task. Selecting the arrow between them shows task information&lt;/li&gt;
&lt;li&gt;Using SQL for Cortex Powered Object Descriptions (GA), call a stored procedure, AI_GENERATE_TABLE_DESC, to programmatically generate Cortex Powered Object Descriptions&lt;/li&gt;
&lt;li&gt;External OAuth support for Snowflake Open Catalog catalog integration (GA), support External OAuth. To configure a catalog integration with External OAuth, first configure External OAuth in Open Catalog, and then use the new OAUTH_TOKEN_URI parameter for the integration&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  SQL, Extensibility &amp;amp; Performance Updates
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;New SYS_CONTEXT function for getting context about applications, sessions, and organizations; use it to get context information about the current application, current environment, and current session&lt;/li&gt;
&lt;li&gt;Read consistency mode for sessions with near-concurrent changes, set the READ_CONSISTENCY_MODE account-level parameter to define the level of consistency guarantees that are required for sessions with near-concurrent changes&lt;/li&gt;
&lt;li&gt;Retrieve bind variable values (Preview), retrieve the values of the bind variables for queries that have been executed by using the BIND_VALUES table function in the INFORMATION_SCHEMA schema&lt;/li&gt;
&lt;li&gt;You can use the RESAMPLE clause and a set of interpolation functions to fill gaps in time-series data, simplifies the process of generating continuous, uniformly-sampled time-series data&lt;/li&gt;
&lt;li&gt;More efficient workload distribution, improves query execution time by detecting and adaptively redistributing workloads across nodes in the warehouse, without user intervention&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  Data Clean Rooms Updates
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;Version 10.2 (Welcome modal: fixed welcome modal links and UI display issues; Add Data step: when no data is available, users can still refresh the tables list in case data has recently become available)&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  Marketplace, Listings &amp;amp; Data Sharing
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;Billing views for Snowflake resellers and distributors, Views in the new BILLING schema offer billing data for Snowflake resellers' customers. For example, PARTNER_CONTRACT_ITEMS reveals reseller-customer contracts. Only resellers and distributors can access these views&lt;/li&gt;
&lt;li&gt;Declarative Sharing (Preview), allows providers to share and sell data products, enhanced by Snowflake Notebooks to help Snowflake consumers visualize and explore the data. Declarative Sharing’s simplified development experience makes it easy to get started quickly&lt;/li&gt;
&lt;li&gt;Declarative Shared Native Apps (Preview), allows providers to share and sell data products, enhanced by Snowflake Notebooks to help Snowflake consumers visualize and explore the data; its simplified development experience makes it easy to get started quickly&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  Open-Source &lt;a href="https://developers.snowflake.com/opensource/" rel="noopener noreferrer"&gt;Updates&lt;/a&gt;
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;terraform-snowflake-provider 2.7.0 (support for generation 2 Standard warehouses and resource constraints for Snowpark-optimized warehouses, several bug fixes and documentation improvements)&lt;/li&gt;
&lt;li&gt;Modin 0.37.0 (bugfixes for Series.json, DataFrame.rename, and eval, and performance improvements for joins with AutoSwitchBackend enabled)&lt;/li&gt;
&lt;li&gt;Modin 0.36.0 (bug fix, a performance improvement for query() and eval(), and changes to the testing suite.)&lt;/li&gt;
&lt;li&gt;Snowflake VS Code Extension 1.19.1 (bug fix: Setting connection configuration path now accepts directory paths and will transform it into the expected file path)&lt;/li&gt;
&lt;li&gt;Streamlit 1.50.0 (Charts: Width option for st.line_chart, Sort option for st.bar_chart, Height option for st.graphviz_chart, Chart column color configuration; Widgets: MultiselectColumn support in st.dataframe and st.data_editor, Default parameter for st.tabs, Border option for st.table, New states for CopyButton (hover, active, focus-visible), Button widgets keyed by identity, Time/date input widgets keyed by identity, Selectbox and multiselect keyed by identity, Sample rate option for st.audio_input, Metric supports decimal values, Slider tick labels on hover; Theming: Theming font source support, Theme main color options, Theme background color options, Theme text color options)&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Client, Drivers, Libraries and Connectors Updates
&lt;/h3&gt;

&lt;h4&gt;
  
  
  New features:
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;Snowflake Connector for ServiceNow® V2 5.25.0 (Behavior changes: For records in the sys_created_on or sys_updated_on columns with null values, the connector inserts an update event only when the record has changed since the last ingestion. Previously, the connector inserted an update event to the event log table during each ingestion cycle, regardless of whether the record changed. This behavior could cause the event log table to grow indefinitely, even if no changes were found in the table)&lt;/li&gt;
&lt;li&gt;Go Snowflake Driver 1.17.0 (Added support for Go 1.25, dropped support for Go 1.22, added ability to configure OCSP for each individual connection, added DECFLOAT support. See the gosnowflake documentation for details, added proxy options to connection parameters, added client_session_keep_alive_heartbeat_frequency connection parameter, added support for multi-part downloads for S3, Azure and Google Cloud, added Config.singleAuthenticationPrompt to control authentication flow. When true, only one authentication is performed at a time, allowing for manual interactions such as MFA or OAuth. Default is true)&lt;/li&gt;
&lt;li&gt;Node.js 2.3.0 (Added certificate revocation list (CRL) validation support. For configuration options, see Certificate revocation list (CRL) options)&lt;/li&gt;
&lt;li&gt;Snowflake CLI 3.12.0 (Added the !edit command to the snow sql command to support external editors, added the --partial option to the snow logs command to support partial, case-insensitive matching of log messages, improved parsing of !source with trailing comments, upgraded to typer=0.17.3 to improve the display of help messages, improved output handling with streaming queries in the snow sql command)&lt;/li&gt;
&lt;li&gt;Snowflake Connector for Python 3.17.4 (Added support for allowing intermediate certificates from the trust store to act as root certificates, updated bundled urllib3 to version v2.5.0, updated bundled requests to version v2.32.5, dropped support for OpenSSL versions older than 1.1.1)&lt;/li&gt;
&lt;li&gt;Snowflake Python API 1.8.0 (Added support for proxy configuration. You can provide proxy settings by using the HTTPS_PROXY environment variable)&lt;/li&gt;
&lt;li&gt;Snowpark Library for Python 1.39.0 (New features: downgraded to level logging.DEBUG - 1 the log message saying that the Snowpark DataFrame reference of an internal DataFrameReference object has changed, eliminated duplicate parameter check queries for casing status when retrieving the session, retrieved DataFrame row counts through object metadata to avoid a COUNT(*) query (performance), added support for applying the Snowflake Cortex function Complete, introduced faster pandas: Improved performance by deferring row position computation, the following operations are currently supported and can benefit from the optimization: read_snowflake, repr, loc, reset_index, merge, and binary operations, if a lazy object (e.g., DataFrame or Series) depends on a mix of supported and unsupported operations, the optimization will not be used, updated the error message for when Snowpark pandas is referenced within apply, added a session parameter dummy_row_pos_optimization_enabled to enable/disable dummy row position optimization in faster pandas, dependency updates: Updated the supported modin versions to &amp;gt;=0.35.0 and &amp;lt;0.37.0 (was previously &amp;gt;= 0.34.0 and &amp;lt;0.36.0); Snowpark local testing updates: added support to allow patching functions.ai_complete)&lt;/li&gt;
&lt;li&gt;Snowpark Library for Python 1.38.0 (New features: Added support for the following AI-powered functions in functions.py: ai_extract, ai_parse_document, ai_transcribe, added time travel support for querying historical data: Session.table() now supports time travel parameters: time_travel_mode, statement, offset, timestamp, timestamp_type, stream, DataFrameReader.table() supports the same time travel parameters as direct arguments, DataFrameReader supports time travel via option chaining (e.g., session.read.option(“time_travel_mode”, “at”).option(“offset”, -60).table(“my_table”); see the time-travel sketch after this list), added support for specifying the following parameters to DataFrameWriter.copy_into_location for validation and writing data to external locations: validation_mode, storage_integration, credentials, encryption, added support for Session.directory and Session.read.directory to retrieve the list of all files on a stage with metadata, added support for DataFrameReader.jdbc (Private Preview) that allows the JDBC driver to ingest external data sources, added support for FileOperation.copy_files to copy files from a source location to an output stage, added support for the following scalar functions in functions.py: all_user_names, bitand, bitand_agg, bitor, bitor_agg, bitxor, bitxor_agg, current_account_name, current_client, current_ip_address, current_role_type, current_organization_name, current_organization_user, current_secondary_roles, current_transaction, getbit; Snowpark pandas API updates: completed support for more functions on the “Pandas” and “Ray” backends, added support for Index.get_level_values())&lt;/li&gt;
&lt;li&gt;Snowflake ML 1.15.0 (Behavior changes: Model Registry behavior changes: drop support for deprecated conversational task type for Huggingface models. This task type has been deprecated by HuggingFace for some time and is due for removal from their API in the imminent future)&lt;/li&gt;
&lt;li&gt;Snowflake ML 1.14.0 (New ML Jobs features: The additional_payloads argument of the MLJob.submit_* methods has been renamed to imports to better reflect its purpose. additional_payloads has been deprecated and will be removed in a future release)&lt;/li&gt;
&lt;li&gt;Snowflake ML 1.13.0 (New Model Registry features: you can now log a HuggingFace model without having to load the model in memory using huggingface_pipeline.HuggingFacePipelineModel. Requires the huggingface_hub package. To disable downloading from the HuggingFace repository, pass download_snapshot=False when instantiating huggingface_pipeline.HuggingFacePipelineModel, you can now use XGBoost’s enable_categorical=True models with pandas DataFrames, when listing services, the PrivateLink inference endpoint is shown in the ModelVersion list.)&lt;/li&gt;
&lt;li&gt;Snowflake ML 1.12.0 (New Model Registry features: log text-generation models with signatures compatible with OpenAI chat completion, New Model Monitoring features: segment columns to enable filtered analysis, specified in the segment_columns field in the model monitor source options. Segment columns must exist in the source table and be of string type. add_segment_column and drop_segment_column methods are provided to add or remove segment columns in existing model monitors)&lt;/li&gt;
&lt;li&gt;Snowpipe Streaming SDK 1.0.0 (released the SDK to support the general availability of Snowpipe Streaming with high performance architecture for AWS deployments, performance and stability improvements)&lt;/li&gt;
&lt;/ul&gt;
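
&lt;p&gt;Here is the 1.38.0 time-travel option chaining quoted above as a small runnable sketch; the connection name and table are placeholders:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# Sketch: read a table as of 60 seconds ago via option chaining,
# per the Snowpark 1.38.0 notes. Connection and table are placeholders.
from snowflake.snowpark import Session

session = Session.builder.config("connection_name", "default").create()

df = (
    session.read
    .option("time_travel_mode", "at")
    .option("offset", -60)  # seconds relative to the current time
    .table("my_table")
)
df.show()
&lt;/code&gt;&lt;/pre&gt;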

&lt;h4&gt;
  
  
  Bug fixes:
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;Snowflake Connector for ServiceNow® V2 5.25.2 (Fixed an issue that caused the schema table to be created incorrectly during state import, fixed an issue in filtered reload mode where some state events could be saved in the wrong order, which could lead to missed updates, improved handling of large responses from ServiceNow, added more detailed logging for ServiceNow response properties)&lt;/li&gt;
&lt;li&gt;Snowflake Connector for ServiceNow® V2 5.25.1 (Fixed an issue that could cause the metadata tables to be ingested incorrectly during reload when the connector is globally configured to fetch display values. As a result of this issue, flattened views were not created for some tables. If this issue occurred, the following metadata tables had to be reloaded: sys_dictionary, sys_db_object, sys_glide_object)&lt;/li&gt;
&lt;li&gt;Snowflake Connector for ServiceNow® V2 5.25.0 (increased the range of page sizes that the connector tries during filtered ingestion. When fetching data, the connector should now be more resilient to timeout errors that come from the ServiceNow® API, fixed the internal cleanup job to retain internal connector information that is needed to perform ingestion. Previously, when this information was removed, it could cause ingestion failures, fixed an error during the creation of flattened views. This error was caused by a missing column in the internal connector table)&lt;/li&gt;
&lt;li&gt;Go Snowflake Driver 1.17.0 (Fixed missing DisableTelemetry option in Config, fixed multistatements in large result sets, fixed unnecessary retries when a context is cancelled, fixed a regression in loading TOML connection files, fixed race conditions in stage downloads)&lt;/li&gt;
&lt;li&gt;Node.js 2.3.0 (Improved debug logs when downloading query result chunks, fixed missing error handling in getResultsFromQueryId(), fixed invalid transformation of null values to “” when using stage binds, extended typing of Bind)&lt;/li&gt;
&lt;li&gt;Snowflake CLI 3.12.0 (Fixed crashes with older x86_64 Intel CPUs, fixed the ! commands in snow sql commands so they no longer require a trailing ; for evaluation, fixed using ctx.var in snow sql with Jinja templating, fixed issues when pasting content with trailing new lines, fixed an issue with snow snowpark deploy failing on duplicated packages, fixed an issue causing a snow spcs logs IndexOutOfRange error)&lt;/li&gt;
&lt;li&gt;Snowflake Connector for Python 3.17.3 (Enhanced configuration file permission warning messages: Improved warning messages for readable permission issues to include clear instructions on how to skip warnings using the SF_SKIP_WARNING_FOR_READ_PERMISSIONS_ON_CONFIG_FILE environment variable, fixed the bug with staging pandas dataframes on AWS — the regional endpoint is used when required: This fix addresses the issue with the create_dataframe call on Snowpark)&lt;/li&gt;
&lt;li&gt;Snowpark Library for Python 1.39.1 (Added an experimental fix for a bug in schema query generation that could cause invalid SQL to be generated when using nested structured types)&lt;/li&gt;
&lt;li&gt;Snowpark Library for Python 1.39.0 (Fixed an issue with drop_duplicates where the same data source could be read multiple times in the same query but in a different order each time, resulting in missing rows in the final result. The fix ensures that the data source is read only once, fixed a bug with hybrid execution mode where an AssertionError was unexpectedly raised by certain indexing operations)&lt;/li&gt;
&lt;li&gt;Snowpark Library for Python 1.38.0 (Bug fixes: fixed the __repr__ of TimestampType to match the actual subtype it represents, fixed a bug in DataFrameReader.dbapi where UDTF ingestion did not work in stored procedures, fixed a bug in schema inference that caused incorrect stage prefixes to be used; Improvements: enhanced error handling in DataFrameReader.dbapi thread-based ingestion to prevent unnecessary operations, which improves resource efficiency, bumped cloudpickle dependency to also support cloudpickle==3.1.1 in addition to previous versions, improved DataFrameReader.dbapi (Public Preview) ingestion performance for PostgreSQL and MySQL by using a server-side cursor to fetch data; Improvements: set the default transfer limit in hybrid execution for data leaving Snowflake to 100k, which can be overridden with the SnowflakePandasTransferThreshold environment variable. This configuration is appropriate for scenarios with two available engines, “pandas” and “Snowflake,” on relational workloads, improved the import error message by adding --upgrade to pip install “snowflake-snowpark-python[modin]” in the message, reduced the telemetry messages from the modin client by pre-aggregating into five-second windows and only keeping a narrow band of metrics that are useful for tracking hybrid execution and native pandas performance, set the initial row count only when hybrid execution is enabled, which reduces the number of queries issued for many workloads, added a new test parameter for integration tests to enable hybrid execution; Bug fixes: raised NotImplementedError instead of AttributeError on attempting to call Snowflake extension functions/methods to_dynamic_table(), cache_result(), to_view(), create_or_replace_dynamic_table(), and create_or_replace_view() on DataFrames or Series using the pandas or ray backends)&lt;/li&gt;
&lt;li&gt;Snowflake ML 1.12.0 (Model Registry bug fixes: fixed an issue where the string representation of dictionary-type output columns was being incorrectly created during structured output deserialization, losing the original data type, fixed an inference server performance issue for wide (500+ features) and JSON inputs)&lt;/li&gt;
&lt;li&gt;SQLAlchemy 1.7.7 (fixed an issue that threw an exception for structured type columns dropped while collecting metadata)&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Conclusion
&lt;/h3&gt;

&lt;p&gt;This month’s release emphasizes three main themes: real-time processing, AI-native features, and governed interoperability. Snowpipe Streaming (GA) provides reliable low-latency ingestion at scale through a throughput-based approach. The FILE data type (GA) enables multimodal AISQL patterns without needing to duplicate blobs. Snowsight Workspaces (GA) and new Query Insights enhance daily development capabilities. In AI, DocumentAI models are now part of the Registry, Cortex Agents feature a REST API and integrate with Teams/Copilot, and AISQL sees improvements like faster (AI_FILTER), more cost-effective (AI_COUNT_TOKENS), and more versatile features (AI_TRANSLATE GA, dynamic-table support). Governance enhancements include automatic view classification, Snowsight Data Quality, MFA one-time codes, task lineage, and External OAuth for Open Catalog.&lt;/p&gt;

&lt;p&gt;The platform layer now includes OpenFlow Snowflake-native deployments (Preview), Native App controls such as automated grants and feature policies, SPCS on GCP (GA), and broad improvements to Iceberg like partitioned writes, position deletes, tunable compaction, and catalog integration updates. SQL performance and usability see tangible improvements with features like SYS_CONTEXT, read-consistency modes, bind-value introspection, time-series RESAMPLE, and adaptive workload distribution. Client/driver and OSS updates also ensure the ecosystem stays current. Overall, Snowflake is streamlining the process from data ingestion to insights, integrating AI into core workflows, and enhancing enterprise controls. When planning roadmaps, focus on streaming pipelines, AI-assisted transformations (using dynamic tables), and Iceberg-compatible designs — these features are ready for deployment.&lt;/p&gt;

&lt;p&gt;Enjoy the read.&lt;/p&gt;

&lt;p&gt;I am Augusto Rosa, a Snowflake Data Superhero and Snowflake SME. I am also the Head of Data, Cloud, &amp;amp; Security Architecture at &lt;a href="https://archetypeconsulting.com/" rel="noopener noreferrer"&gt;Archetype Consulting&lt;/a&gt;. You can follow me on &lt;a href="https://www.linkedin.com/in/augustorosa/" rel="noopener noreferrer"&gt;LinkedIn&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Subscribe to my Medium blog &lt;a href="https://blog.augustorosa.com/" rel="noopener noreferrer"&gt;https://blog.augustorosa.com&lt;/a&gt; for the most interesting Data Engineering and Snowflake news.&lt;/p&gt;

&lt;h4&gt;
  
  
  Sources:
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://docs.snowflake.com/en/release-notes/preview-features" rel="noopener noreferrer"&gt;https://docs.snowflake.com/en/release-notes/preview-features&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.snowflake.com/en/release-notes/new-features" rel="noopener noreferrer"&gt;https://docs.snowflake.com/en/release-notes/new-features&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.snowflake.com/en/release-notes/sql-improvements" rel="noopener noreferrer"&gt;https://docs.snowflake.com/en/release-notes/sql-improvements&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.snowflake.com/en/release-notes/performance-improvements-2024" rel="noopener noreferrer"&gt;https://docs.snowflake.com/en/release-notes/performance-improvements-2024&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.snowflake.com/en/release-notes/clients-drivers/monthly-releases" rel="noopener noreferrer"&gt;https://docs.snowflake.com/en/release-notes/clients-drivers/monthly-releases&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.snowflake.com/en/release-notes/connectors/gard-2024" rel="noopener noreferrer"&gt;https://docs.snowflake.com/en/release-notes/connectors/&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://marketplace.visualstudio.com/items?itemName=snowflake.snowflake-vsc" rel="noopener noreferrer"&gt;https://marketplace.visualstudio.com/items?itemName=snowflake.snowflake-vsc&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.snowflake.com/en/migrations/snowconvert-docs/general/release-notes/release-notes/README" rel="noopener noreferrer"&gt;https://docs.snowflake.com/en/migrations/snowconvert-docs/general/release-notes/release-notes/README&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;




</description>
      <category>ai</category>
      <category>snowflake</category>
      <category>data</category>
      <category>newreleases</category>
    </item>
    <item>
      <title>Migration Series — Lesson 2: Unearthing the Truth: A Strategic Approach to Legacy System Assessment</title>
      <dc:creator>augusto kiniama rosa</dc:creator>
      <pubDate>Fri, 19 Sep 2025 19:01:45 +0000</pubDate>
      <link>https://forem.com/kiniama/migration-series-lesson-2-unearthing-the-truth-a-strategic-approach-to-legacy-system-assessment-59b2</link>
      <guid>https://forem.com/kiniama/migration-series-lesson-2-unearthing-the-truth-a-strategic-approach-to-legacy-system-assessment-59b2</guid>
      <description>&lt;h3&gt;
  
  
  Migration Series — Lesson 2: Unearthing the Truth: A Strategic Approach to Legacy System Assessment
&lt;/h3&gt;

&lt;h4&gt;
  
  
  The Need to Perform Upfront Deeper Legacy Systems Assessments
&lt;/h4&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F1024%2F0%2Ag-GL9Oa-MIRTqewd" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F1024%2F0%2Ag-GL9Oa-MIRTqewd" width="1024" height="701"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;Photo by Harshil Gudka on Unsplash&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;The first step in any large-scale data migration is often the most contentious. Leaders face a difficult balance between the technical need for a deep, upfront analysis of the legacy system and the business’s demand to show progress and deliver value quickly. The traditional method, where a lengthy assessment is required before any development, is a trap. It often causes “analysis paralysis,” stalling the project before it even truly starts.&lt;/p&gt;

&lt;p&gt;A successful migration requires a new mindset. The initial assessment should not be a “frozen” waterfall stage but a dynamic, parallel activity. See the first article of the series here: &lt;a href="https://blog.augustorosa.com/snowflake-migration-series-lesson-1-the-power-of-a-strategic-mvp-your-first-slice-of-value-3ee99ee0f8ac?source=user_profile_page---------2-------------fab031e08990----------------------" rel="noopener noreferrer"&gt;Lesson 1&lt;/a&gt;.&lt;/p&gt;
&lt;h3&gt;
  
  
  &lt;strong&gt;The Parallel-Track Solution: Assess and Build Simultaneously&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;The most effective strategy is to perform a deep assessment in &lt;strong&gt;parallel&lt;/strong&gt; with building the foundational elements of your new cloud platform, like Snowflake. This approach allows you to demonstrate progress on two fronts: you are de-risking the migration by understanding the legacy environment while simultaneously building the core infrastructure needed for the future state.&lt;/p&gt;

&lt;p&gt;This foundational work is a critical part of a “smart start” and should include:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Defining items like security roles and access controls. In the Snowflake case, see this &lt;a href="https://blog.augustorosa.com/snowflake-architecture-series-a-foundational-checklist-fcb6c1db7f19" rel="noopener noreferrer"&gt;list&lt;/a&gt; to get you started.&lt;/li&gt;
&lt;li&gt;Establishing CI/CD pipelines early to enable automation for your transformation workflows and Snowflake Account Object Management (security, warehouses, governance, and so on).&lt;/li&gt;
&lt;li&gt;Specifying the base structures for your code deployment and testing frameworks.&lt;/li&gt;
&lt;li&gt;Ingesting the raw data required for this phase, using an ELT approach (see the sketch after this list).&lt;/li&gt;
&lt;/ul&gt;
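
&lt;p&gt;To make this concrete, here is a minimal sketch of what that early foundational scripting might look like in Snowflake SQL. All names (role, database, schema, stage, table) are placeholders, and a real security model and CI/CD pipeline would be considerably more elaborate:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;-- Sketch only: placeholder names; adapt to your own security model and tooling.

-- Functional role that the transformation tooling will assume
CREATE ROLE IF NOT EXISTS transformer_role;
GRANT ROLE transformer_role TO ROLE sysadmin;

-- Landing zone for raw, untransformed data (ELT: load first, transform later)
CREATE DATABASE IF NOT EXISTS raw_db;
CREATE SCHEMA IF NOT EXISTS raw_db.landing;
GRANT USAGE ON DATABASE raw_db TO ROLE transformer_role;

-- Land raw files as-is from a stage; transformation happens downstream
CREATE TABLE IF NOT EXISTS raw_db.landing.orders_raw (payload VARIANT);
COPY INTO raw_db.landing.orders_raw
  FROM @raw_db.landing.source_stage/orders/
  FILE_FORMAT = (TYPE = 'JSON');
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;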

&lt;p&gt;By doing this in parallel, the assessment informs the build, and the build provides a tangible platform for the eventual migration.&lt;/p&gt;
&lt;h3&gt;
  
  
  &lt;strong&gt;Redefining “Assessment”: From Exhaustive Inventory to Strategic Survey&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;The approach also changes the nature of the assessment itself. The goal is no longer a traditional, exhaustive inventory of every single legacy asset. Instead, it becomes a “survey and risk assessment” activity designed to find out just enough to begin the migration intelligently. The purpose is to identify a valuable, technologically representative Minimum Viable Product (MVP) without boiling the ocean.&lt;/p&gt;

&lt;p&gt;Your assessment should be surgically focused on the following:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Pick a thin slice:&lt;/strong&gt; Go for one that aligns with a core stakeholder need, for example, a critical BI asset for Sales with metrics like Annual Contract Value, or a directly operational use case like risk score reports. It is best to pick something that does not touch everything in your ecosystem; you might even focus on just one or two source systems.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Inventory what matters:&lt;/strong&gt; Instead of cataloging everything, inventory only what relates to the thin slice: its direct dependencies and the systems it touches&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Map dependencies for the MVP:&lt;/strong&gt; Conduct end-to-end dependency mapping specifically for the selected MVP candidate, not the entire legacy system.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Analyze complexity to de-risk:&lt;/strong&gt; Analyze the complexity of your first target to ensure it is achievable and won’t become a nightmare to deliver.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This reflects a migration mindset rather than legacy thinking. It’s not focused on thoroughly assessing old programs' performance but on understanding their logic and risks sufficiently to proceed.&lt;/p&gt;
&lt;h3&gt;
  
  
  &lt;strong&gt;How LLMs Accelerate the Strategic Survey&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Throughout this series, I will discuss LLMs and their role in speeding up the process. For most of the migration, I use a prompt instructing the LLM to handle roughly 75% of the work itself, rather than merely generating code that creates the logic. In my experience, an LLM can analyze most simple-to-intermediate systems for logic and risk far faster than a human can.&lt;/p&gt;

&lt;p&gt;Here are some things that you can do in this stage:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Rapid Complexity Analysis:&lt;/strong&gt; You can feed legacy ETL scripts or database procedures into an LLM to get a quick analysis of their complexity. This helps you triage potential MVP candidates and avoid inadvertently selecting a “nightmare” target for your first phase.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Scoped Dependency Mapping:&lt;/strong&gt; Instead of manually combing through code, you can use an LLM to analyze it for a specific business area. Ask it to identify source and target tables for a proposed MVP, speeding up dependency mapping for that part of the system. I used an LLM to analyze an SAP universe file and quickly identify the base tables; you could find the same information by reading the documentation manually (it sits on page 415), but I guarantee that reading the PDF takes far longer than having the LLM do it for you.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;On-Demand Documentation:&lt;/strong&gt; LLMs can instantly generate plain-language explanations of legacy code. This allows your team to quickly understand the business logic within a potential MVP without spending months on traditional documentation efforts.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;By combining a parallel-track methodology with the accelerating power of LLMs, you can satisfy the dual mandate of any migration: building a robust future platform while delivering tangible business value from the very beginning.&lt;/p&gt;

&lt;p&gt;Here is a sample prompt that analyzes SAP BI Universes without using any software:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;You are an expert SAP/BO universe analyst and dbt/Snowflake data modeler on macOS. Repeat the following end-to-end workflow for a SAP universe and produce all deliverables exactly as specified.

Context
- OS: macOS
- Repo (working): /Users/augustorosa/Documents/git/sap-universe-to-dbt
- Universe file: sample/RW_SalesAndOperations.unv
- Target Snowflake source schema name: sap_prd

Objectives
1) Analyze the .unv and extract SQL/table metadata
   - Detect that .unv is a zip; list and extract contents
   - Parse UNW_Storage/Tables, Joins, Objects; find SQL-like text (SELECTs, joins, @Prompt, AFS/"...").
   - Build a complete inventory of referenced base objects: schema.object (e.g., prd.VBAK, prd."/AFS/MARM", Z* views/tables); include AFS SUB_* list.

2) Produce CSV inventories
   - outputs/base_tables_full.csv: schema,object including prd.* plus any unqualified table names found in Objects
   - outputs/base_tables.csv: simplified schema,object unique base list (optional)
   - outputs/base_tables_universe_only.csv: universe-only objects not found in the PDF (see step 4)

3) ERD
   - Create two ERDs (Mermaid erDiagram):
     a) Core OTC: VBAK, VBAP, VBEP, LIKP, LIPS, VBRK, VBRP, VBPA, KNA1 (+ keys/PK/FK, cardinalities)
     b) Master/pricing/Z-views/AFS: MARA, MAKT, MVKE, ADRC, T005T, TVM*, TVV*, A904/KONP, Z* views; show N:1 text/pricing relationships and links to MARA
   - Use clear PK/FK attributes, 1:N, N:M, and notes for VGBEL/VGPOS relationships.

4) PDF comparison
   - Convert sample/Sales and Operations.pdf to text; extract prd.* tokens
   - Compute overlaps and deltas vs universe inventory
   - Write universe-only list (step 2c)

Deliverables
- CSVs:
  - outputs/base_tables_full.csv
  - outputs/base_tables.csv (optional simple)
  - outputs/base_tables_universe_only.csv
- ERDs (Mermaid):
  - Core OTC diagram
  - Master/pricing/Z-views/AFS diagram

Constraints &amp;amp; Checks
- Use exact quoting for AFS names ("/AFS/…"); alias to safe snake_case in raw staging only.
- Keep ERD readable: split into 2+ diagrams to avoid overcrowding; include cardinalities and key fields.

At the end
- Print a short summary: counts of tables found, AFS SUB_* list, paths of produced CSVs, and created dbt files.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This is an illustrative prompt, not necessarily the best one, but the point stands: until now, someone would have had to write custom software to analyze existing environments, or do the work manually. LLMs can take on at least a portion of that effort.&lt;/p&gt;

&lt;p&gt;Let’s also remember that Snowflake offers access to the top models on its platform, and you can perform this analysis directly within Snowflake. I plan to cover this aspect in another article, including how to use the Snowflake Cortex platform for migrations.&lt;/p&gt;
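&lt;p&gt;As a rough sketch of how that in-platform analysis could look, assuming the legacy ETL or procedure definitions have already been landed in a table (the table, column, and model names below are illustrative), you can ask Cortex to explain and triage each asset directly in SQL:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;-- Sketch: assumes legacy ETL/procedure definitions were landed in a table.
-- SNOWFLAKE.CORTEX.COMPLETE(model, prompt) runs the prompt against a
-- Snowflake-hosted model; the model name below is illustrative.
SELECT
  asset_name,
  SNOWFLAKE.CORTEX.COMPLETE(
    'llama3.1-70b',
    'Explain the business logic of this legacy procedure in plain language, '
      || 'then rate its migration complexity as low, medium, or high: '
      || asset_definition
  ) AS analysis
FROM migration_db.assessment.legacy_assets;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;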

&lt;h3&gt;
  
  
  Conclusion
&lt;/h3&gt;

&lt;p&gt;A strategic legacy system assessment isn’t about stalling progress with endless analysis. It’s about finding the right balance — learning just enough to move forward intelligently while laying down the foundations of your future platform. In &lt;a href="https://medium.com/@augustokrosa/3ee99ee0f8ac?utm_source=chatgpt.com" rel="noopener noreferrer"&gt;Lesson 1: The Power of a Strategic MVP&lt;/a&gt;, I emphasized the importance of carving out a first slice of value early in the migration. Lesson 2 builds on that idea: your assessment should be focused, parallel to the build, and accelerated by modern tools like LLMs.&lt;/p&gt;

&lt;p&gt;This shift in mindset — from exhaustive cataloging to targeted surveying — helps you reduce risk, prevent analysis paralysis, and demonstrate progress simultaneously. It creates the foundation for migrations that are quicker, smarter, and more aligned with business goals.&lt;/p&gt;

&lt;p&gt;In the next article, &lt;em&gt;Migration Series — Lesson 3: Unearthing the Logic: A Realistic Guide to Business Logic Translation&lt;/em&gt;, I’ll dive into the next major challenge: how to extract, understand, and translate business logic from legacy systems into your new environment without losing meaning or control.&lt;/p&gt;

&lt;p&gt;I am Augusto Rosa, a Snowflake Data Superhero and Snowflake SME. You can follow me on &lt;a href="https://www.linkedin.com/in/augustorosa/" rel="noopener noreferrer"&gt;LinkedIn&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Subscribe to my Medium blog &lt;a href="https://blog.augustorosa.com/" rel="noopener noreferrer"&gt;https://blog.augustorosa.com&lt;/a&gt; for the most interesting Data Engineering and Snowflake news.&lt;/p&gt;




</description>
      <category>snowflake</category>
      <category>dataarchitecture</category>
      <category>datamigration</category>
      <category>migration</category>
    </item>
    <item>
      <title>Snowflake Migration Series — Lesson 1: The Power of a Strategic MVP: Your First Slice of Value</title>
      <dc:creator>augusto kiniama rosa</dc:creator>
      <pubDate>Fri, 12 Sep 2025 14:01:44 +0000</pubDate>
      <link>https://forem.com/kiniama/snowflake-migration-series-lesson-1-the-power-of-a-strategic-mvp-your-first-slice-of-value-3cj3</link>
      <guid>https://forem.com/kiniama/snowflake-migration-series-lesson-1-the-power-of-a-strategic-mvp-your-first-slice-of-value-3cj3</guid>
      <description>&lt;h3&gt;
  
  
  Snowflake Migration Series — Lesson 1: The Power of a Strategic MVP: Your First Slice of Value
&lt;/h3&gt;

&lt;h4&gt;
  
  
  Empower Your Snowflake Migration Through Battle-Tested Lessons
&lt;/h4&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F1024%2F0%2Awr92Ya6fnw8jW7fY" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F1024%2F0%2Awr92Ya6fnw8jW7fY" width="1024" height="626"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;Photo by Barth Bailey on Unsplash&lt;/em&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  tl;dr
&lt;/h3&gt;

&lt;p&gt;This new series of articles is a follow-up to my &lt;a href="https://www.youtube.com/live/IsbLoMbPiMA?si=2stPrbdqXQB18WL6" rel="noopener noreferrer"&gt;migration talk&lt;/a&gt; this year. After fielding a set of questions from clients and at our Toronto Snowflake User Group, I decided to turn the talk into a series of articles, each discussing one of the lessons.&lt;/p&gt;

&lt;p&gt;While I want to get into lesson 1, let's discuss the challenge first. For context, I am talking about a retail corporation and its journey into a large data migration to Snowflake.&lt;/p&gt;

&lt;h3&gt;
  
  
  The Challenge
&lt;/h3&gt;

&lt;p&gt;When RetailCorp started exploring migration to Snowflake, the potential advantages appeared highly appealing. These included real scalability to manage peak holiday traffic without system outages, cost savings by replacing large hardware costs and Oracle licenses with a pay-as-you-go model, increased agility for analysts to access data and generate reports more quickly, and sophisticated analytics features like machine learning for predicting customer churn and forecasting demand. Snowflake’s architecture effectively fulfilled its technical promises through its separation of storage and compute, strong handling of semi-structured data, and solid data sharing features.&lt;/p&gt;

&lt;p&gt;However, the migration process turned out to be more difficult than expected. The team found that their outdated legacy systems from over a decade ago contained hidden complexities, such as poorly documented Informatica workflows built by former developers, with dependencies reaching into unforeseen systems. Additionally, the team faced notable skill gaps: although they were experts in Informatica, they required quick learning in areas like Snowflake SQL, Azure, dbt transformations, Git version control, and managing cloud costs.&lt;/p&gt;

&lt;p&gt;The technical challenges went beyond just mastering new tools. While automated conversion tools from Informatica to dbt offered some help, they didn't serve as the perfect solution hoped for. Data validation became particularly time-consuming, as it required significant effort to confirm that the outputs of the new Snowflake/dbt pipeline matched trusted legacy Informatica reports exactly. Moreover, although the pay-as-you-go model seemed appealing in theory, it often led to unexpected costs when inefficient queries or improperly sized virtual warehouses ran continuously.&lt;/p&gt;

&lt;p&gt;These experiences underscored the gap between cloud migration promises and actual implementation, providing valuable lessons learned during this complex transformation process.&lt;/p&gt;

&lt;h3&gt;
  
  
  The MVP
&lt;/h3&gt;

&lt;p&gt;Your first step is to audit the environment and identify all source and target data sets, at least for the workload in question, preferably using automated software if you are in a complex environment. For example, I have created small Python utilities that capture the environment and use LLMs to analyze the data, including columns. These utilities use AI to flag early signs of complexity, and they work well, surfacing many of the issues up front.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvjuf5swjues6uvkbvvcs.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvjuf5swjues6uvkbvvcs.png" width="800" height="800"&gt;&lt;/a&gt;&lt;/p&gt;
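
&lt;p&gt;My utilities are written in Python, but the core idea can be sketched directly in SQL, assuming the source system’s column metadata has already been landed in a table. Everything here (table, columns, model) is a hypothetical placeholder:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;-- Rough SQL equivalent of the idea: score per-table migration complexity
-- from landed column metadata. Table and model names are hypothetical.
SELECT
  table_name,
  SNOWFLAKE.CORTEX.COMPLETE(
    'llama3.1-70b',
    'Given these columns and their types, flag early signs of migration '
      || 'complexity (low/medium/high) and explain why: '
      || LISTAGG(column_name || ' ' || data_type, ', ')
  ) AS complexity_assessment
FROM assessment_db.inventory.source_columns
GROUP BY table_name;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;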

&lt;p&gt;Once you set up a parallel process to assess the legacy system while building your new cloud environment, the main question is: where do you begin? The best approach is to focus on the Minimum Viable Product (MVP). In a data migration context, however, an MVP isn't merely a small pilot or a temporary prototype. It should be the immediate, concrete outcome of your initial evaluation and foundational actions — a comprehensive, meaningful segment that showcases the new platform's complete end-to-end capabilities.&lt;/p&gt;

&lt;h4&gt;
  
  
  &lt;strong&gt;Proving a Critical Path with Business Outcome&lt;/strong&gt;
&lt;/h4&gt;

&lt;p&gt;The primary purpose of the MVP is to prove that one critical, end-to-end data path works in the new environment. This path should be identified and de-risked through your parallel survey of the legacy system. Crucially, the outcome must be valuable to a business sponsor, delivering a vertical business outcome rather than a simple “like-for-like” technical port of an old process.&lt;/p&gt;

&lt;p&gt;Getting this single, imperfect thread working in production achieves several goals:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;It lays a new foundation for future work.&lt;/li&gt;
&lt;li&gt;It builds confidence among stakeholders.&lt;/li&gt;
&lt;li&gt;It shows tangible progress, creating momentum.&lt;/li&gt;
&lt;li&gt;It becomes the learning ground for the entire team.&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  &lt;strong&gt;Building and Showcasing the New Foundation&lt;/strong&gt;
&lt;/h4&gt;

&lt;p&gt;The MVP is more than just data; it serves as a way to showcase the new, modern platform components. The aim is to bring the new “wiring patterns” to life using actual data slices. This means the MVP should actively build and showcase foundational elements like:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The new secure cloud environment.&lt;/li&gt;
&lt;li&gt;CI/CD pipelines built on an Infrastructure as Code (IaC) framework.&lt;/li&gt;
&lt;li&gt;The modern deployment process and security model.&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  &lt;strong&gt;The First Step in an Iterative Journey&lt;/strong&gt;
&lt;/h4&gt;

&lt;p&gt;The MVP is clearly identified as the initial stage in an iterative process. Once completed, it starts a “scan-select-build-deploy-operate” cycle. Insights gained from delivering the first segment guide the choice and development of the next, such as shifting focus from sales data to customer experience or inventory data. Each subsequent segment expands on the previous one, increasing the platform's capabilities and complexity as part of an ongoing assessment process.&lt;/p&gt;

&lt;p&gt;This approach enables leadership to demonstrate regular successes, helping sustain project momentum and secure continuous budget approvals, thereby effectively minimizing the risk of a large-scale, overly ambitious “boil the ocean” project.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;How LLMs Help Deliver Your MVP&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;LLMs can be a powerful accelerator in the “scan-select-build” portion of your MVP cycle.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Selecting the Right Slice:&lt;/strong&gt; When deciding on an MVP, you can use an LLM to help analyze the trade-offs. You can provide the LLM with summaries of different business processes and ask it to “Compare the potential business impact versus the technical complexity for a ‘customer churn prediction’ pipeline versus a ‘daily sales reporting’ pipeline, based on these descriptions.” This aids in selecting a high-impact, achievable first target.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Creating Test Plans:&lt;/strong&gt; To ensure the MVP is robust, use an LLM to draft a comprehensive test plan. “Create a test plan for our sales data MVP. Include sections for data validation, unit tests for dbt models, integration tests for the CI/CD pipeline, and user acceptance testing criteria for the final dashboard.”&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;By defining the MVP as the core upfront goal, you create focus for the entire team. It’s the mechanism that turns the promise of a modern data platform into a tangible reality, building confidence and momentum for the full migration journey ahead.&lt;/p&gt;

&lt;h3&gt;
  
  
  Conclusion
&lt;/h3&gt;

&lt;p&gt;Every major migration starts with uncertainty, complexity, and unexpected challenges within legacy systems. RetailCorp’s experience teaches us that success isn’t achieved by attempting to move everything simultaneously but by demonstrating value early through a carefully selected MVP. Concentrating on a single, vital pathway that achieves complete business results helps build a reliable foundation for the team to trust and build upon.&lt;/p&gt;

&lt;p&gt;An MVP isn’t merely a pilot; it serves as the foundation that integrates modern cloud methods, security, and automation into production with tangible business value. Technologies like LLMs expedite this process by assisting teams in assessing trade-offs, creating code scaffolding, and developing more robust test plans. This leads to quicker confidence building, momentum, and gaining executive backing.&lt;/p&gt;

&lt;p&gt;Migration isn't solely about technology; it's about providing proof, fostering trust, and gaining insights rapidly. By adopting a strategic MVP, you transform Snowflake’s promise into tangible progress, laying the foundation for all subsequent lessons in this series.&lt;/p&gt;

&lt;p&gt;My next article in this series will cover how to conduct the actual assessment. Keep an eye out for it.&lt;/p&gt;

&lt;p&gt;I am Augusto Rosa, a Snowflake Data Superhero and Snowflake SME. I am also the Head of Data, Cloud, &amp;amp; Security Architecture at &lt;a href="https://archetypeconsulting.com/" rel="noopener noreferrer"&gt;Archetype Consulting&lt;/a&gt;. You can follow me on &lt;a href="https://www.linkedin.com/in/augustorosa/" rel="noopener noreferrer"&gt;LinkedIn&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Subscribe to my Medium blog &lt;a href="https://blog.augustorosa.com/" rel="noopener noreferrer"&gt;https://blog.augustorosa.com&lt;/a&gt; for the most interesting Data Engineering and Snowflake news.&lt;/p&gt;




</description>
      <category>snowflake</category>
      <category>datamigration</category>
      <category>data</category>
      <category>migration</category>
    </item>
    <item>
      <title>The Unofficial Snowflake Monthly Release Notes: August 2025</title>
      <dc:creator>augusto kiniama rosa</dc:creator>
      <pubDate>Wed, 03 Sep 2025 17:24:05 +0000</pubDate>
      <link>https://forem.com/kiniama/the-unofficial-snowflake-monthly-release-notes-august-2025-605</link>
      <guid>https://forem.com/kiniama/the-unofficial-snowflake-monthly-release-notes-august-2025-605</guid>
      <description>&lt;h4&gt;
  
  
  Monthly Snowflake Unofficial Release Notes #New features #Previews #Clients #Behavior Changes
&lt;/h4&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnkwld2mxr956csfmjtay.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnkwld2mxr956csfmjtay.png" width="800" height="343"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Welcome to the Unofficial Release Notes for Snowflake for August 2025! You’ll find all the latest features, drivers, and more in one convenient place.&lt;/p&gt;

&lt;p&gt;As an unofficial source, I am excited to share my insights and thoughts. Let’s dive in! You can also find all of Snowflake’s releases &lt;a href="https://medium.com/@augustokrosa/list/snowflake-unofficial-newsletter-list-b97037cfc9e6" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;This month, we provide coverage up to release 9.25 (General Availability — GA). I hope to extend this eventually to private preview notices as well.&lt;/p&gt;

&lt;p&gt;I would appreciate your suggestions on continuing to combine these monthly release notes. Feel free to comment below or chat with me on &lt;a href="https://www.linkedin.com/in/augustorosa/" rel="noopener noreferrer"&gt;&lt;em&gt;LinkedIn&lt;/em&gt;&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Behavior change&lt;/strong&gt; bundle &lt;a href="https://docs.snowflake.com/en/release-notes/bcr-bundles/2025_03_bundle" rel="noopener noreferrer"&gt;2025_03&lt;/a&gt; is generally enabled for all customers, &lt;a href="https://docs.snowflake.com/en/release-notes/bcr-bundles/2025_04_bundle" rel="noopener noreferrer"&gt;2025_04&lt;/a&gt; is enabled by default but can be opted out until next BCR deployment, and &lt;a href="https://docs.snowflake.com/en/release-notes/bcr-bundles/2025_05_bundle" rel="noopener noreferrer"&gt;2025_05&lt;/a&gt; is disabled by default but may be opted in.&lt;/p&gt;

&lt;h3&gt;
  
  
  What’s New in Snowflake
&lt;/h3&gt;

&lt;h4&gt;
  
  
  New Features
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;Snowflake Intelligence (Preview), Gain insights and act on data with Snowflake Intelligence agents that answer questions, provide insights, and show visualizations. You can create charts and get instant answers using natural language, and access and analyze thousands of data sources, including structured and unstructured data together&lt;/li&gt;
&lt;li&gt;Workload identity federation (GA) allows workloads like services, applications, and containers to authenticate to Snowflake without managing long-lived credentials. It offers security similar to External OAuth but is easier to implement. This involves configuring the workload's native identity provider, creating a Snowflake service user, and ensuring the workload uses a Snowflake driver capable of sending an attestation or security token from the identity provider&lt;/li&gt;
&lt;li&gt;Support for stored procedures in data lineage (Preview) extends lineage capabilities beyond data and ML to include process connections between source and target objects. When viewing the lineage graph in Snowsight, you can now get details about a stored procedure that impacted a downstream object. If nested, you can also see details about the top-level stored procedure in the hierarchy&lt;/li&gt;
&lt;li&gt;Private facts and metrics in semantic views: if you define a fact or metric solely for calculations within the semantic view and not for query results, use the PRIVATE keyword to mark it as private. Private facts and metrics cannot be queried or used in query conditions&lt;/li&gt;
&lt;li&gt;Write Once, Read Many (WORM) snapshots (Preview): WORM snapshots are immutable backups of specific Snowflake tables, schemas, or databases, managed as snapshot sets within Snowflake. You can set policies for automatic snapshots, deletions, and protection of key snapshots&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  Snowsight Updates
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;Organization profile updates, Support for creating organization profiles in Snowsight. When creating an organization profile, you can now assign specific roles within the account that can access it&lt;/li&gt;
&lt;li&gt;Snowsight navigation menu updates (Gradual rollout), organized by feature groups under key categories to help you find the tools you need more quickly&lt;/li&gt;
&lt;li&gt;Contacts (GA) associate users with objects like databases and tables, enabling users to contact the right person for help. Each contact, a schema-level object, contains communication details such as email or URL. An object can have multiple contacts if their purposes differ; for example, a table may have one contact for access approval and another for support&lt;/li&gt;
&lt;li&gt;Trust Center email notifications (GA), Configure the Trust Center to send email notifications for violations. You can choose to receive alerts for all enabled scanners in a package or individual scanners, and set the severity level for notifications&lt;/li&gt;
&lt;li&gt;Database object explorer support for semantic views (Preview), use the database object explorer in Snowsight to create and manage semantic views&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  AI Updates (Cortex, ML, DocumentAI)
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;Document AI table extraction (GA), Extract tables from various document formats. You can now also export and import CSV files to review table extraction answers more easily&lt;/li&gt;
&lt;li&gt;Cortex Agents: admin configuration UI (Preview), Create an agent from the Agent admin page in Snowsight UI to answer questions and provide insights using a semantic view, model, Cortex Search, or a combination&lt;/li&gt;
&lt;li&gt;Cortex AISQL with AI_TRANSCRIBE (Preview), AI_TRANSCRIBE offers scalable SQL-native speech-to-text AI processing for extracting insights from customer care, healthcare, and meetings. Files are processed directly from object storage, avoiding data transfer, with no infrastructure setup needed for immediate use (see the sketch after this list)&lt;/li&gt;
&lt;li&gt;Snowflake ML Jobs (GA) is a framework that uses the Container Runtime from any environment. The ML Jobs SDK allows you to submit and manage jobs with Snowpark Container Services, utilize GPU and high-memory CPU instances for intensive tasks, and work in your preferred environment like VS Code or external notebooks&lt;/li&gt;
&lt;li&gt;Using SQL for Cortex Powered Object Descriptions (Preview), call the stored procedure AI_GENERATE_TABLE_DESC to generate descriptions. It uses Snowflake Cortex COMPLETE to automatically produce descriptions for tables, views, and columns&lt;/li&gt;
&lt;li&gt;Cortex Search Service replication (Preview), Supports replicating Cortex Search Services from a source to multiple target accounts within the same organization. It integrates with Snowflake replication and failover groups for point-in-time consistency on the target&lt;/li&gt;
&lt;li&gt;Distributed processing in Snowflake ML: Many Model Training and Distributed Partition Function, Supports distributed processing for training models and data processing across partitions. Many Model Training (MMT) efficiently trains multiple models on data partitions by specifying a column, training models in parallel. The Distributed Partition Function (DPF) processes data across nodes, partitioning by a column and executing Python functions in parallel&lt;/li&gt;
&lt;li&gt;AI Parse Document layout mode (GA), a fully managed SQL function that extracts the page layout in Markdown format, preserving text, tables, and structural elements from documents with enterprise-grade accuracy and scale&lt;/li&gt;
&lt;li&gt;AI_EXTRACT function (Preview), Extract information from strings or files using AI_EXTRACT, which retrieves data from unstructured sources like text, images, and documents, such as financial, tax, contract, invoice, medical, marketing, and regulatory records&lt;/li&gt;
&lt;li&gt;Model Registry model deployment UI (Preview), Deploy models directly to SPCS Model Serving from the Model Registry UI and view or suspend the inference service directly from the same interface&lt;/li&gt;
&lt;/ul&gt;
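
&lt;p&gt;As a small illustration of the AISQL items above, here is roughly what an AI_TRANSCRIBE call looks like, assuming an audio file has already been uploaded to a stage (the stage and file names are placeholders):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;-- Sketch: stage and file names are placeholders. The staged audio file is
-- processed in place; no separate infrastructure is needed.
SELECT AI_TRANSCRIBE(TO_FILE('@call_recordings/support_call_001.mp3'));
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;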

&lt;h4&gt;
  
  
  Snowflake Applications (Container Services, Notebooks and Applications, Snowconvert)
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;Snowpark Container Services in Google Cloud (GA)&lt;/li&gt;
&lt;li&gt;Snowpark Container Services batch jobs (Preview), support for running multiple replicas of a Snowpark Container Services job service&lt;/li&gt;
&lt;li&gt;CORS configuration to enable cross-origin requests to a Snowpark Container Services service (GA)&lt;/li&gt;
&lt;li&gt;New stage volume implementation in Snowpark Container Services (Preview), uses only limited in-memory caching, providing predictable behavior and faster throughput. This version will replace the current implementation. Snowflake recommends evaluating this preview unless your workload requires unsupported random writes or file appends&lt;/li&gt;
&lt;li&gt;Snowflake Native App Framework — MONITOR privilege support for apps (GA), supports granting the MONITOR privilege on an app to a user role or to another app&lt;/li&gt;
&lt;li&gt;Support for Streamlit 1.46 (GA), now supported in Streamlit in Snowflake.&lt;/li&gt;
&lt;li&gt;Support for custom components in Streamlit in Snowflake (Preview), Streamlit in Snowflake now supports custom components that don’t need external service calls&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  Data Lake Updates
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;Set a target file size for Apache Iceberg™ tables (Preview), Set a target Parquet file size for Iceberg tables to boost query performance with engines like Spark, Delta, or Trino. Set it during table creation or later with ALTER ICEBERG TABLE (see the sketch after this list)&lt;/li&gt;
&lt;li&gt;Apache Iceberg™ tables: Row-level deletes for externally managed tables (GA), Supports row-level deletes with positional delete files for external Iceberg tables. Iceberg engines can update, delete, and merge these tables using copy-on-write and merge-on-read modes. This enhances interoperability between Snowflake and external tools managing Iceberg data, ensuring consistent behavior across compute engines&lt;/li&gt;
&lt;/ul&gt;
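
&lt;p&gt;For the target file size item above, the change is a single table parameter, roughly as follows (the table name is a placeholder, and you should check the documentation for the exact parameter values supported):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;-- Sketch: raise the target Parquet file size on an existing Iceberg table to
-- reduce small-file overhead for external engines. Table name is a placeholder.
ALTER ICEBERG TABLE analytics.events SET TARGET_FILE_SIZE = '64MB';
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;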

&lt;h4&gt;
  
  
  Realtime Data (Hybrid tables &amp;amp; SnowPostGres)
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;Hybrid table storage for Time Travel data, consumption for hybrid table storage now takes into account the data that is retained by Time Travel&lt;/li&gt;
&lt;li&gt;Hybrid table support for periodic rekeying (GA), accounts that contain hybrid tables can enable and use periodic rekeying without any additional configuration&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  Data Pipelines/Data Loading/Unloading Updates
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;Dynamic tables: Support for immutability constraints, enabling control over updates by marking parts of a dynamic table as immutable. This prevents updates or deletions, restricts modifications for certain rows, and limits future changes while permitting incremental updates elsewhere&lt;/li&gt;
&lt;li&gt;Dynamic tables: Support for backfill, create a dynamic table with initial data from a regular table. Backfilling is low-cost, zero-copy, making source data immediately available. You can backfill while defining a custom refresh query for future updates. With immutability constraints, backfilled data stays unchanged, even if it no longer matches the upstream source, ensuring persistence&lt;/li&gt;
&lt;li&gt;Apache Arrow library upgraded to 21.0.0; Snowflake 9.23 now uses Apache Arrow 21.0.0 for unloading Parquet data&lt;/li&gt;
&lt;li&gt;Dynamic tables: Support for UNION in incremental refresh mode, The UNION set operator (equivalent to UNION ALL followed by SELECT DISTINCT) now works with dynamic table incremental refresh (see the sketch after this list)&lt;/li&gt;
&lt;li&gt;Snowflake Connectors for Microsoft Power Apps (GA), Allows connection to Snowflake from Microsoft Power Apps, Power Automate, Copilot Studio, and other Microsoft apps. The Power Platform enables creating flows and adding actions to run and return results of custom SQL statements in Snowflake&lt;/li&gt;
&lt;/ul&gt;
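
&lt;p&gt;A minimal sketch of the UNION support mentioned above, with placeholder names, target lag, and warehouse:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;-- Sketch: UNION (deduplicating) is now supported in incremental refresh.
-- Names, target lag, and warehouse are placeholders.
CREATE OR REPLACE DYNAMIC TABLE all_orders
  TARGET_LAG = '10 minutes'
  WAREHOUSE = transform_wh
AS
  SELECT order_id, amount FROM web_orders
  UNION
  SELECT order_id, amount FROM store_orders;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;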

&lt;h4&gt;
  
  
  Security, Privacy &amp;amp; Governance Updates
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;Sensitive data classification: Automatic classification of a database (GA), set a classification profile on a database so that all tables and views within the database are automatically classified for sensitive data&lt;/li&gt;
&lt;li&gt;Data quality DMF Expectations (GA): Use expectations to set quality checks for a data metric function (DMF). When you associate an expectation with a DMF-object pair, if the DMF's value doesn't meet the expectation, it is flagged as a violation&lt;/li&gt;
&lt;li&gt;Self-service activation of Tri-Secret Secure (GA), Customers with ACCOUNTADMIN can activate or deactivate Tri-Secret Secure for Snowflake using system functions, without Snowflake support. The self-service process also makes managing your customer managed key (CMK) easier&lt;/li&gt;
&lt;li&gt;Support for keys generated with Elliptic Curve Digital Signature Algorithms (ECDSA), For Snowflake authentication methods using cryptographic keys (key-pair and External OAuth), you can generate keys with Elliptic Curve Digital Signature Algorithm curves like P-256, P-384, and P-521. These signatures use the hash algorithms SHA-256, SHA-384, and SHA-512, respectively&lt;/li&gt;
&lt;li&gt;Data quality: Updated privilege model allows non-owners to associate a data metric function with an object, Users with SELECT privilege on a table or view can now associate it with a data metric function (DMF) for data quality checks, a task previously limited to the owner. This update introduces a new property: EXECUTE AS ROLE, indicating the role the DMF runs with&lt;/li&gt;
&lt;li&gt;Object tags: New limit for allowed values, The ALLOWED_VALUES property of a tag controls which values can be associated with it. This list now allows 5,000 values, up from 300 (see the sketch after this list)&lt;/li&gt;
&lt;/ul&gt;
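
&lt;p&gt;The ALLOWED_VALUES change above in practice, with placeholder names and values:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;-- Sketch: ALLOWED_VALUES constrains the values assignable to a tag; the
-- list can now hold up to 5,000 entries. Names and values are placeholders.
CREATE TAG IF NOT EXISTS governance.tags.cost_center
  ALLOWED_VALUES 'finance', 'engineering', 'marketing';
ALTER TAG governance.tags.cost_center ADD ALLOWED_VALUES 'sales';
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;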

&lt;h4&gt;
  
  
  SQL, Extensibility &amp;amp; Performance Updates
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;Enforced join order with directed joins (Preview), Enforce table join order with the DIRECTED keyword. A directed join scans the first (left) table before the second (right). For example, o1 INNER DIRECTED JOIN o2 scans o1 before o2. Use directed joins when migrating workloads with join order directives or to improve performance by controlling join sequence (see the sketch after this list)&lt;/li&gt;
&lt;li&gt;Tracing SQL statements run from handler code (GA), traces SQL statements executed with other traced code, like in a stored procedure or user-defined function handler&lt;/li&gt;
&lt;li&gt;Snowflake Scripting user-defined functions (UDFs), create SQL UDFs with Snowflake Scripting, which can be called in SQL statements like SELECT or INSERT, making them more flexible than stored procedures that only use SQL CALL&lt;/li&gt;
&lt;li&gt;ALTER LISTING command to simplify adding and removing targets (GA), You can now modify listings by adding or removing targets (accounts, roles, and organizations) without passing all existing targets. The ALTER LISTING command lets you provide a partial manifest with only the targets to change, using the familiar structures targets, external_targets, and organization_targets from the listing manifest reference&lt;/li&gt;
&lt;li&gt;Querying semantic views (General availability), Use a SELECT statement to query a semantic view, specifying the SEMANTIC_VIEW clause with desired dimensions and metrics. You can also filter results based on dimensions&lt;/li&gt;
&lt;li&gt;Semantic views: To list facts in a semantic view, schema, database, or account, run the SHOW SEMANTIC FACTS command&lt;/li&gt;
&lt;li&gt;Semantic views: Use ALTER SEMANTIC VIEW ... RENAME TO ... to rename a view&lt;/li&gt;
&lt;li&gt;Improved query performance with intelligent workload optimization, which continuously analyzes your workload patterns and automatically applies workload-specific optimizations. Intelligent workload optimization is only available on Snowflake generation 2 (Gen2) standard warehouses, and enhances the efficiency of queries with commonly used selective predicate patterns&lt;/li&gt;
&lt;/ul&gt;
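
&lt;p&gt;To make the directed join and semantic view items above concrete, here are two minimal sketches; all table, view, dimension, and metric names are placeholders:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;-- Directed join: o1 is scanned before o2 (example from the item above).
SELECT o1.id, o2.value
FROM o1 INNER DIRECTED JOIN o2 ON o1.id = o2.id;

-- Querying a semantic view (GA): pick metrics and dimensions defined in the view.
SELECT * FROM SEMANTIC_VIEW(
    sales_sv
    METRICS orders.total_revenue
    DIMENSIONS customer.region
  );
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;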

&lt;h4&gt;
  
  
  Data Clean Rooms Updates
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;Simplified cross-region sharing. Previously, a provider had to call both request_laf_cleanroom_requests and mount_laf_cleanroom_requests_share to get requests from consumers in different regions. Now, they can just call mount_request_logs_for_all_consumers&lt;/li&gt;
&lt;li&gt;Simplified installation by automating service user creation and verification, or reusing existing ones, reducing setup complexity&lt;/li&gt;
&lt;li&gt;Single-account testing allows acting as both provider and consumer in one account for a clean room test, making it easier for users with a single Snowflake account to test clean room code.&lt;/li&gt;
&lt;li&gt;Configurable refresh rates for Cross-Cloud Auto-Fulfillment. The default refresh rate for provider clean room data sent to consumers in other cloud regions has been shortened from 24 hours to 30 minutes. This refresh rate is configurable&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  Open-Source &lt;a href="https://developers.snowflake.com/opensource/" rel="noopener noreferrer"&gt;Updates&lt;/a&gt;
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;terraform-snowflake-provider 2.6.0 (&lt;strong&gt;Bug fixes&lt;/strong&gt;: Connection management improvements: connection fixes follow-up, enhance connection management and logging for secondary connections, fix updating privileges in grant_privileges_to_account_role, small failover group fixes, adjust PAT alter after fixes in Snowflake, &lt;strong&gt;Improvements&lt;/strong&gt;: change convert signature, upgrade go Snowflake driver, update migration guide, import docs, and task tests; &lt;strong&gt;Misc&lt;/strong&gt;: Test improvements: add manual test for consumption billing entity, address skipped tests, adjust architests after moving acceptance tests to a separate package, adjust security integration tests, fix check destroy logic, fix unsafe query integration test, move more task tests to account level, remove unnecessary TestAccPreCheck, run functional tests after setting up Terraform, use DropSafely in test setup helpers, code cleanup and refactoring, clean up all TODOs created during Terraform Plugin Framework and RestAPI research, clean up TODO comments left with “next PR” without issue number, sweeper adjustments, Nuke roles, databases, warehouses, shares, and network policies, add new database to the protected databases in sweepers, sweep security integrations, find versions of provider in gh organization, add new roadmap entry)&lt;/li&gt;
&lt;li&gt;Modin 0.32.0&lt;/li&gt;
&lt;li&gt;Snowflake VS Code Extension 1.19.0 (Public Preview release of Snowpark Migration Accelerator Migration Assistant: resolve Snowpark Migration Accelerator migration issues with the assistance of Snowflake Cortex AI, browse for conversion issues in the current workspace using the new SMA issues panel, interact with Cortex to generate explanations and recommendations using new SMA Assistant WebView, enable this feature in your settings via the snowflake.snowparkMigrationAcceleratorAssistant.enabled property, configurable AI model preferences with automatic fallback support for Claude 3.7 Sonnet, Claude 3.5 Sonnet, Claude 4 Sonnet, Llama 3.1 70B, and Mistral Large 2, prioritize SNOWFLAKE_HOME environment variable for determining connection configuration file location to conform to documented driver behavior)&lt;/li&gt;
&lt;li&gt;Snowflake VS Code Extension 1.18.0 (Private Preview release of Snowpark Migration Accelerator Migration Assistant: resolve Snowpark Migration Accelerator migration issues with the assistance of Snowflake Cortex AI, browse for conversion issues in the current workspace using the new &lt;strong&gt;SMA issues&lt;/strong&gt; panel, interact with Cortex to generate explanations and recommendations using new SMA Assistant WebView)&lt;/li&gt;
&lt;li&gt;Streamlit 1.49.0 (Breaking Changes: clean up experimental dialog, fragment and widget replay, deprecate st.bokeh_chart in favour of streamlit-bokeh , allow editing of ListColumn via st.data_editor , New Features: [feat] Add configurable websocket ping interval to fix proxy connection drops, add directory support for st.file_uploader, add support for sparkline charts to st.metric , [feat] Simplify all copy to clipboard handling, allow setting duration for st.toast , [AdvancedLayouts] Updates dataframe and data editor to the new style for width and height, add an option for a wider st.dialog , add single- and multi-cell selections to st.dataframe , show check icon on column name copy , added st.pdf , add single- and multi-cell selection options to st.dataframe , add key to st.form_submit_button , add toolbar action to show underlying vega chart data, [AdvancedLayouts] Updates to st.image width/height and adding width/height to st.pyplot, add yellow to text/background/badge/divider color options, add explicit support for Pydantic models in cache_data/resource, add directory upload support to st.chat_input, update slider value on mouse release, add format_func support to SelectboxColumn , [AdvancedLayouts] Adds a width parameters to graphviz charts, allow select all/ deselect all when hiding dataframe columns from UI; Bug Fixes: [fix] Skeleton width should be full screen, [fix] st.chat_input collapses after submit, [fix] iframe and html change in default width when no width specified, [Fix] st.toast handles custom theming, [fix] Do not allow removing already uploaded files when disabled, ensure narrow currency symbol is used for formatting by, revert “Make st.logout use end_session_endpoint if provided in OIDC config (&lt;a href="https://github.com/streamlit/streamlit/pull/11901" rel="noopener noreferrer"&gt;#11901&lt;/a&gt;)”, [fix] Allow dismissing connection error dialog (and not having it continue to pop up), [fix] Cache layout_config data and use during replay, update st.page_link radii config, single mark charts use first of chartCategoricalColors , [fix] File Upload Drop Target truncates subtext, update credentials.py: try to fix showEmailPrompt in case where there is no email, [fix] Selectbox with accept_new_options on mobile shows keyboard, [fix] PlotlyChart with null value selection works properly , fix: prevents underlying column menu from opening when column visibility menu is open for st.dataframe , apply radii from theme to CheckboxColumn , [fix] Log deprecation warning for use_container_width in st.dataframe)&lt;/li&gt;
&lt;li&gt;Streamlit 1.49.1 (small fixes)&lt;/li&gt;
&lt;li&gt;Streamlit 1.48.0 (small fixes)&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Client, Drivers, Libraries and Connectors Updates
&lt;/h3&gt;

&lt;h4&gt;
  
  
  New features:
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;Snowflake Connector for Google Analytics Raw Data 2.11.2 (Improved the logging for the connector operations. This improved logging includes the following more detailed information about BigQuery streaming downloads: Download progress percentage, Throttling information, Amount of data downloaded in each batch)&lt;/li&gt;
&lt;li&gt;.NET Driver 4.8.0 (Added support for workload identity federation in the AWS, Azure, Google Cloud, and Kubernetes platforms, Added the WORKLOAD_IDENTITY_PROVIDER connection parameter, Added WORKLOAD_IDENTITY to the values for the authenticator connection parameter, Added support of single use refresh tokens during the OAuth flow)&lt;/li&gt;
&lt;li&gt;Go Snowflake Driver 1.16.0 (Added support for workload identity federation in the AWS, Azure, Google Cloud, and Kubernetes platforms, Added the WorkloadIdentityProvider connection parameter, Added AuthTypeWorkloadIdentityFederation to the values for the authenticator connection parameter, Added support for CRL (certificate revocation list): CRL specifies the certificates that a given CA explicitly revoked. This driver now uses either CRL or OCSP to verify TLS certificates, CRL is disabled by default. Please review the CRL Enablement Guideline in this KB article &lt;a href="https://community.snowflake.com/s/article/Replacing-OCSP-with-CRL-as-the-method-of-certificate-revocation-checking" rel="noopener noreferrer"&gt;https://community.snowflake.com/s/article/Replacing-OCSP-with-CRL-as-the-method-of-certificate-revocation-checking&lt;/a&gt; and test in advisory mode before production use, Added support for opt-in single-use refresh tokens in the OAuth flow, Implemented a connectivity diagnostic tool, Added a session ID to logs produced by the connection and heartbeat modules, Added the RegisterTLSConfig function that lets you pass your own TLSConfig for the driver to use, Removed the dependency to static list of root CAs for OCSP checking. Now, the default list of root CAs is used)&lt;/li&gt;
&lt;li&gt;Ingest Java SDK 4.2.0 (Improved the reliability of streaming ingest into Iceberg tables, ensuring that your data is consistently uploaded to the correct location, improved how the SDK manages table keys, which ensures that our system stays in sync and helps maintain the stability and security of your tables, improved system stability for high-volume data by allowing connections to retry for up to five minutes, preventing immediate closures)&lt;/li&gt;
&lt;li&gt;JDBC Driver 3.26.1 (Added support for TLS version 1.3, including the following parameter: MIN_TLS_VERSION specifies the minimum SSL/TLS version to use when initiating a TLS handshake, MAX_TLS_VERSION specifies the maximum SSL/TLS version to use when initiating a TLS handshake)&lt;/li&gt;
&lt;li&gt;JDBC Driver 3.26.0 (Added support for workload identity federation in the AWS, Azure, Google Cloud, and Kubernetes platforms, Added the workloadIdentityProvider connection parameter, Added WORKLOAD_IDENTITY to the values for the authenticator connection parameter)&lt;/li&gt;
&lt;li&gt;Node.js 2.2.0 (Added support for Workload Identity Federation in the AWS, Azure, Google Cloud, and Kubernetes platforms, added the workloadIdentityProvider connection parameter, added WORKLOAD_IDENTITY to the values for the authenticator connection parameter, added the queryTag connection parameter to set the QUERY_TAG session parameter)&lt;/li&gt;
&lt;li&gt;ODBC 3.11.0 (Added support for workload identity federation in the AWS, Azure, Google Cloud, and Kubernetes platforms, added the workload_identity_provider connection parameter, added WORKLOAD_IDENTITY to the values for the authenticator connection parameter, added the following configuration parameters: DisableTelemetry to disable telemetry, SSLVersionMax to specify the maximum SSL version, added the PRIV_KEY_BASE64 and PRIV_KEY_PWD connection parameters that allow passing a base64-encoded private key)&lt;/li&gt;
&lt;li&gt;PHP PDO Driver for Snowflake 3.3.0 (Added ARM64 support for Linux, added support for the Easy Logging feature in a configuration file)&lt;/li&gt;
&lt;li&gt;Snowflake CLI 3.11.0 (Added the snow connection remove command, added support for the runtime_environment_version field in notebook entity configurations to let you specify the runtime environment version for containerized notebooks, added the snow auth oidc commands for managing workload identity federation authentication, including snow auth oidc read-token to read and display OIDC tokens from CI/CD environments, with GitHub Actions provider support in these commands for password-less authentication in CI/CD pipelines)&lt;/li&gt;
&lt;li&gt;Snowflake Connector for Python 3.17.0 (Added support for workload identity federation in the AWS, Azure, Google Cloud, and Kubernetes platforms, added the workload_identity_provider connection parameter, added WORKLOAD_IDENTITY to the values for the authenticator connection parameter (see the connection sketch after this list), added an unsafe_skip_file_permissions_check flag to skip file permission checks on the cache and configuration, added basic JSON support for Interval types, added populating of type_code in ResultMetadata for interval types, relaxed the pyarrow version constraint; versions &amp;gt;= 19 can now be used, introduced the snowflake_version property to the connection, added support for the use_vectorized_scanner parameter in the write_pandas function, added support for proxy setup via connection parameters without relying on environment variables)&lt;/li&gt;
&lt;li&gt;Snowflake Connector for Python 3.17.1 (Added the infer_schema parameter to write_pandas to perform schema inference on the passed data)&lt;/li&gt;
&lt;li&gt;Snowpark Library for Python 1.37.0 (Added support for the following xpath functions in functions.py: xpath, xpath_string, xpath_boolean, xpath_int, xpath_float, xpath_double, xpath_long, xpath_short, added support for the use_vectorized_scanner parameter in the Session.write_arrow() function, the DataFrame profiler now adds the following information about each query: describe query time, execution time, and SQL query text (to view this information, call session.dataframe_profiler.enable() and call get_execution_profile on a DataFrame), added support for DataFrame.col_ilike, added support for non-blocking stored procedure calls that return AsyncJob objects (see the async-call sketch after this list), added the block: bool = True parameter to Session.call(); when block=False, it returns an AsyncJob instead of blocking until completion, added the block: bool = True parameter to StoredProcedure.__call__() for async support across both named and anonymous stored procedures, added Session.call_nowait(), which is equivalent to Session.call(block=False); Snowpark pandas API updates: added support for efficient transfer of data between Snowflake and &lt;a href="https://www.ray.io/" rel="noopener noreferrer"&gt;Ray&lt;/a&gt; with the DataFrame.set_backend method (the installed version of modin must be at least 0.35.0, and ray must be installed), updated the supported modin versions to &amp;gt;=0.34.0 and &amp;lt;0.36.0 (was previously &amp;gt;=0.33.0 and &amp;lt;0.35.0), added support for pandas 2.3 when the installed modin version is 0.35.0 or greater)&lt;/li&gt;
&lt;li&gt;Snowpark Library for Python 1.36.0 (Session.create_dataframe now accepts keyword arguments that are forwarded in the internal call to Session.write_pandas or Session.write_arrow when creating a DataFrame from a pandas DataFrame or a pyarrow table, added new APIs for AsyncJob: AsyncJob.is_failed() returns a bool indicating whether a job has failed and can be used in combination with AsyncJob.is_done() to determine whether a job is finished and erred, AsyncJob.status() returns a string representing the current query status (such as “RUNNING”, “SUCCESS”, or “FAILED_WITH_ERROR”) for detailed monitoring without calling result(), added a DataFrame profiler; to use it, call get_execution_profile() on your desired DataFrame, and it reports the queries executed to evaluate the DataFrame and statistics about each of the query operators (currently an experimental feature), added support for the ai_sentiment function in functions.py, updated the interface for the context.configure_development_features experimental feature; all development features are disabled by default unless explicitly enabled by the user; Improvements: hybrid execution row estimate improvements and a reduction of eager calls, added a new configuration variable to control transfer costs out of Snowflake when using hybrid execution, added support for creating permanent and immutable UDFs/UDTFs with DataFrame/Series/GroupBy.apply, map, and transform by passing the snowflake_udf_params keyword argument, added support for mapping np.unique to DataFrame and Series inputs using pd.unique)&lt;/li&gt;
&lt;li&gt;Snowpark ML 1.11.0 (New Model Registry features: Made image_repo argument optional in ModelVersion.create_service. If not specified, a default image repository is used)&lt;/li&gt;
&lt;li&gt;Snowpark ML 1.10.0 (New Model Registry features: Added progress bars for ModelVersion.create_service and ModelVersion.log_model, logs from ModelVersion.create_service are now written to a file. The location of the log file is shown in the console)&lt;/li&gt;
&lt;li&gt;SnowSQL 1.4.5 (added support for workload identity federation in the AWS, Azure, Google Cloud, and Kubernetes platforms, added the workload_identity_provider connection parameter, added WORKLOAD_IDENTITY to the values for the authenticator connection parameter)&lt;/li&gt;
&lt;/ul&gt;
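
&lt;p&gt;Several of the drivers above ship the same workload identity federation (WIF) surface: a WORKLOAD_IDENTITY authenticator value plus a provider parameter. As a minimal sketch, here is what that looks like with the Python connector 3.17.0; the account identifier and provider value are placeholders, and it assumes the code runs on a platform whose identity Snowflake has been configured to trust:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# Minimal sketch: credential-less login via workload identity federation.
# The parameter names come from the 3.17.0 notes above; the account and
# provider values are placeholders for illustration.
import snowflake.connector

conn = snowflake.connector.connect(
    account="myorg-myaccount",          # hypothetical account identifier
    authenticator="WORKLOAD_IDENTITY",  # new authenticator value
    workload_identity_provider="AWS",   # assumed value; AWS/Azure/GCP/K8s supported
)
cur = conn.cursor()
cur.execute("SELECT CURRENT_USER()")
print(cur.fetchone())
conn.close()
&lt;/code&gt;&lt;/pre&gt;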
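
&lt;p&gt;And since Snowpark 1.36/1.37 added non-blocking stored procedure calls, here is a hedged sketch of the AsyncJob pattern those notes describe; the procedure name is hypothetical and the session is assumed to be configured elsewhere:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# Hedged sketch of the non-blocking call APIs named in the Snowpark
# 1.36.0/1.37.0 notes above. The procedure name is a placeholder.
import time
from snowflake.snowpark import Session

session = Session.builder.getOrCreate()  # assumes a configured connection

# Returns an AsyncJob instead of blocking until the procedure finishes.
job = session.call("MY_DB.MY_SCHEMA.LONG_RUNNING_PROC", block=False)
# Equivalent shorthand: session.call_nowait("MY_DB.MY_SCHEMA.LONG_RUNNING_PROC")

while not job.is_done():
    print("status:", job.status())  # e.g. "RUNNING"
    time.sleep(5)

if job.is_failed():
    print("procedure failed")
else:
    print("result:", job.result())
&lt;/code&gt;&lt;/pre&gt;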

&lt;h4&gt;
  
  
  Bug fixes:
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;.NET Driver 4.8.0 (Removed trailing slash from the default RedirectUri within the OAuth Authorization process, Fixed a problem with ignoring endpoint override in AWS FIPS deployments)&lt;/li&gt;
&lt;li&gt;Go Snowflake Driver 1.16.0 (Fixed an issue where error messages were not displayed while reading in structured types, fixed a memory leak in the arrow batches example, fixed issues with query cancellation, removed the trailing slash from the default RedirectUri within the OAuth Authorization process, fixed an issue with ignoring the maximum retry count when the timeout is not set)&lt;/li&gt;
&lt;li&gt;Ingest Java SDK 4.3.0 (Fixed vulnerable dependencies)&lt;/li&gt;
&lt;li&gt;JDBC Driver 3.26.1 (Fixed an issue with a NullPointerException when MFA is enabled in Okta and native Okta authentication is used, fixed an issue with CloseableHttpClient being cached indefinitely, Increased netty to version 4.1.124)&lt;/li&gt;
&lt;li&gt;JDBC Driver 3.26.0 (Fixed the OAuth Authorization Code flow’s default redirect URI by removing a trailing slash to comply with RFC 6749 Section 3.1.2, fixed a bug resulting in a NullPointerException when using SnowflakeChunkDownloader with connection pooling, fixed a bug preventing the use of auto-config with connection pooling, fixed a bug that prevented applications from terminating immediately because of telemetry threads, forced proxy basic authentication for the S3 client, removed the requirement for the SF_ENABLE_EXPERIMENTAL_AUTHENTICATION environment variable in order to use workload identity federation, fixed array binding for the Date datatype)&lt;/li&gt;
&lt;li&gt;Node.js 2.2.0 (Fixed a network error when connecting with an expired OAuth access token, fixed the OAuth Authorization Code’s default value for redirect URI by removing a trailing / (slash) to be compliant with RFC 6749 Section 3.1.2, improved errors for GET commands)&lt;/li&gt;
&lt;li&gt;ODBC 3.11.0 (Fixed an issue with the in-band telemetry event handler to properly reset the events, fixed the HTTP headers used to authenticate via OKTA, removed the trailing slash from the default RedirectUri within the OAuth Authorization process)&lt;/li&gt;
&lt;li&gt;Snowflake CLI 3.10.1 (Fixed snow dbt deploy command to properly handle fully qualified names, fixed snow dbt deploy command to properly handle local directories with dots in names)&lt;/li&gt;
&lt;li&gt;Snowflake Connector for Python 3.17.0 (Fixed OAuth authenticator values, fixed a bug where a PAT with an external session authenticator was used while external_session_id was not provided in SnowflakeRestful.fetch, fixed the case-sensitivity of Oauth and programmatic_access_token authenticator values, fixed unclear error messages for incorrect authenticator values, fixed GCS staging by ensuring the endpoint has a scheme, fixed a bug where time-zoned timestamps fetched as a pandas.DataFrame or pyarrow.Table would overflow due to unnecessary precision. A clear error is now raised if an overflow cannot be prevented)&lt;/li&gt;
&lt;li&gt;Snowflake Connector for Python 3.17.1 (Reverted the snowflake namespace back to non-module)&lt;/li&gt;
&lt;li&gt;Snowflake Connector for Python 3.17.2 (Added the ability to disable endpoint-based platform detection by setting platform_detection_timeout_seconds to zero (see the sketch after this list), fixed a bug where platform_detection was retrying failed requests with warnings to non-existent endpoints)&lt;/li&gt;
&lt;li&gt;Snowpark Library for Python 1.37.0 (Fixed a bug in the CTE optimization stage where a deepcopy of internal plans would cause a memory spike when a DataFrame is created locally via session.create_dataframe() with large input data, fixed a bug in DataFrameReader.parquet where the ignore_case option in infer_schema_options was not respected, fixed a bug where to_pandas() produced differently formatted column names depending on whether the query result format was JSON or ARROW, deprecated pkg_resources, added a dependency on protobuf&amp;lt;6.32; Snowpark pandas API updates: Fixed an issue in hybrid execution mode (Private Preview) where pd.to_datetime and pd.to_timedelta would unexpectedly raise IndexError, fixed a bug where pd.explain_switch would raise IndexError or return None if called before any potential switch operations were performed)&lt;/li&gt;
&lt;li&gt;Snowpark Library for Python 1.36.0 (fixed an issue where the Snowpark pandas plugin would unconditionally disable AutoSwitchBackend even when users have explicitly configured it programmatically or with environment variables)&lt;/li&gt;
&lt;li&gt;Snowpark ML 1.11.0 (ML Jobs bug fixes: fixed TypeError: SnowflakeCursor.execute() got an unexpected keyword argument ‘_force_qmark_paramstyle’ inside stored procedure, fixed Error: Unable to retrieve head IP address when not all instances start within the timeout period)&lt;/li&gt;
&lt;/ul&gt;
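
&lt;p&gt;One small quality-of-life item from the fixes above: connector 3.17.2 lets you skip the cloud platform detection probes entirely. A hedged sketch follows, assuming the setting is accepted as an ordinary connection keyword; all connection values are placeholders:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# Hedged sketch: disabling endpoint-based platform detection in
# snowflake-connector-python 3.17.2+. Connection values are placeholders;
# the setting is assumed to be accepted as a connect() keyword.
import snowflake.connector

conn = snowflake.connector.connect(
    account="myorg-myaccount",
    user="MY_USER",
    password="replace-me",                 # use a secrets manager in practice
    platform_detection_timeout_seconds=0,  # 0 disables the detection requests
)
&lt;/code&gt;&lt;/pre&gt;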

&lt;h3&gt;
  
  
  Conclusion
&lt;/h3&gt;

&lt;p&gt;That’s a big month. December 2025 tightened the core and pushed the edges at the same time: security and governance got simpler to roll out (workload identity federation, Tri-Secret self-service, database-level auto-classification), pipelines became more controllable (dynamic tables with backfill and immutability), lineage grew up (stored procedures in graphs), and AI moved from “neat” to “operational” (Cortex Agents UI, AISQL transcribe, DocumentAI table extraction, ML Jobs GA). On the platform surface, Snowsight continues to pull more of the dev and governance loop into one place (contacts, Trust Center emails, semantic view tooling), while data lake and performance work (Iceberg targets and deletes, directed joins, Gen2 optimizations) make hybrid, high-throughput setups feel first-class.&lt;/p&gt;

&lt;p&gt;If you want quick wins this month, do this:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Roll out &lt;strong&gt;workload identity federation&lt;/strong&gt; to one non-prod service and bump drivers to the latest.&lt;/li&gt;
&lt;li&gt;Pilot &lt;strong&gt;dynamic table backfill + immutability&lt;/strong&gt; on a small mart; document the refresh contract.&lt;/li&gt;
&lt;li&gt;Mark non-exposed calculations in &lt;strong&gt;semantic views&lt;/strong&gt; as PRIVATE, and start querying them directly from SQL.&lt;/li&gt;
&lt;li&gt;Turn on &lt;strong&gt;Trust Center notifications&lt;/strong&gt; and &lt;strong&gt;database-level classification&lt;/strong&gt; in a sandbox.&lt;/li&gt;
&lt;li&gt;Test &lt;strong&gt;directed joins&lt;/strong&gt; for a legacy workload where join order matters.&lt;/li&gt;
&lt;li&gt;Kick the tires on &lt;strong&gt;ML Jobs GA&lt;/strong&gt; or the &lt;strong&gt;Cortex Agent admin UI&lt;/strong&gt; with a narrow, high-value use case.&lt;/li&gt;
&lt;li&gt;If you work with Iceberg engines, set the &lt;strong&gt;Iceberg target file size&lt;/strong&gt; and validate external row-level deletes.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;BCR status this cycle: 2025_05 generally enabled; 2025_06 on by default (opt-out until the next BCR deployment); 2025_07 opt-in. Plan your change windows accordingly.&lt;/p&gt;

&lt;p&gt;I’ll keep aggregating these releases and adding context you can use in the field. Tell me what you shipped — or where you’re stuck — and I’ll add that feedback into next month’s notes.&lt;/p&gt;

&lt;p&gt;I am Augusto Rosa, a Snowflake Data Superhero and Snowflake SME. I am also the Head of Data, Cloud, &amp;amp; Security Architecture at &lt;a href="https://archetypeconsulting.com/" rel="noopener noreferrer"&gt;Archetype Consulting&lt;/a&gt;. You can follow me on &lt;a href="https://www.linkedin.com/in/augustorosa/" rel="noopener noreferrer"&gt;LinkedIn&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Subscribe to my Medium blog &lt;a href="https://blog.augustorosa.com/" rel="noopener noreferrer"&gt;https://blog.augustorosa.com&lt;/a&gt; for the most interesting Data Engineering and Snowflake news.&lt;/p&gt;

&lt;h4&gt;
  
  
  Sources:
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://docs.snowflake.com/en/release-notes/preview-features" rel="noopener noreferrer"&gt;https://docs.snowflake.com/en/release-notes/preview-features&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.snowflake.com/en/release-notes/new-features" rel="noopener noreferrer"&gt;https://docs.snowflake.com/en/release-notes/new-features&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.snowflake.com/en/release-notes/sql-improvements" rel="noopener noreferrer"&gt;https://docs.snowflake.com/en/release-notes/sql-improvements&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.snowflake.com/en/release-notes/performance-improvements-2024" rel="noopener noreferrer"&gt;https://docs.snowflake.com/en/release-notes/performance-improvements-2024&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.snowflake.com/en/release-notes/clients-drivers/monthly-releases" rel="noopener noreferrer"&gt;https://docs.snowflake.com/en/release-notes/clients-drivers/monthly-releases&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.snowflake.com/en/release-notes/connectors/gard-2024" rel="noopener noreferrer"&gt;https://docs.snowflake.com/en/release-notes/connectors/&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://marketplace.visualstudio.com/items?itemName=snowflake.snowflake-vsc" rel="noopener noreferrer"&gt;https://marketplace.visualstudio.com/items?itemName=snowflake.snowflake-vsc&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.snowconvert.com/sc/general/release-notes/release-notes" rel="noopener noreferrer"&gt;https://docs.snowconvert.com/sc/general/release-notes/release-notes&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;




</description>
      <category>data</category>
      <category>ai</category>
      <category>snowflake</category>
      <category>hottopics</category>
    </item>
    <item>
      <title>The Unofficial Snowflake Monthly Release Notes: JULY 2025</title>
      <dc:creator>augusto kiniama rosa</dc:creator>
      <pubDate>Mon, 04 Aug 2025 04:06:32 +0000</pubDate>
      <link>https://forem.com/kiniama/the-unofficial-snowflake-monthly-release-notes-july-2025-2dbm</link>
      <guid>https://forem.com/kiniama/the-unofficial-snowflake-monthly-release-notes-july-2025-2dbm</guid>
      <description>&lt;h4&gt;
  
  
  Monthly Snowflake Unofficial Release Notes #New features #Previews #Clients #Behavior Changes
&lt;/h4&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnkwld2mxr956csfmjtay.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnkwld2mxr956csfmjtay.png" width="800" height="343"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Welcome to the Unofficial Release Notes for Snowflake for July 2025! You’ll find all the latest features, drivers, and more in one convenient place.&lt;/p&gt;

&lt;p&gt;As an unofficial source, I am excited to share my insights and thoughts. Let’s dive in! You can also find all of Snowflake’s releases &lt;a href="https://blog.infostrux.com/unofficial-release-snowflake/home" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;This month, we provide coverage up to release 9.21 (General Availability — GA). I hope to extend this eventually to private preview notices as well.&lt;/p&gt;

&lt;p&gt;I would appreciate your suggestions on continuing to combine these monthly release notes. Feel free to comment below or chat with me on &lt;a href="https://www.linkedin.com/in/augustorosa/" rel="noopener noreferrer"&gt;&lt;em&gt;LinkedIn&lt;/em&gt;&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Behavior change&lt;/strong&gt; bundle &lt;a href="https://docs.snowflake.com/en/release-notes/bcr-bundles/2025_02_bundle" rel="noopener noreferrer"&gt;2025_02&lt;/a&gt; is generally enabled for all customers, &lt;a href="https://docs.snowflake.com/en/release-notes/bcr-bundles/2025_03_bundle" rel="noopener noreferrer"&gt;2025_03&lt;/a&gt; is enabled by default but can be opted out until the next BCR deployment, &lt;a href="https://docs.snowflake.com/en/release-notes/bcr-bundles/2025_04_bundle" rel="noopener noreferrer"&gt;2025_04&lt;/a&gt; is disabled by default but may be opted in, and 2025_05 is planned to arrive with release 9.22.&lt;/p&gt;

&lt;h3&gt;
  
  
  What’s New in Snowflake
&lt;/h3&gt;

&lt;h4&gt;
  
  
  New Features
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;Alerts on new data (GA), triggered when new rows are added to a specified table or view; Snowflake evaluates the alert condition against those rows. You can set alerts on new data, such as error messages, dynamic table refreshes, or task executions logged to the event table, to stay notified (see the sketch after this list)&lt;/li&gt;
&lt;/ul&gt;
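
&lt;p&gt;For orientation, the sketch below shows the long-standing scheduled form of CREATE ALERT, run through the Python connector; the new-data variant evaluates the condition against newly arrived rows rather than on a fixed schedule, so check the docs for its exact trigger clause. All object names and connection values are placeholders:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# Hedged sketch: a scheduled alert, executed via the Python connector.
# Object names are placeholders; the new "alerts on new data" feature
# triggers on row arrival instead of the SCHEDULE shown here.
import snowflake.connector

conn = snowflake.connector.connect(
    account="myorg-myaccount", user="MY_USER", password="replace-me"
)
conn.cursor().execute("""
    CREATE OR REPLACE ALERT error_alert
      WAREHOUSE = alert_wh
      SCHEDULE = '5 MINUTE'
      IF (EXISTS (
        SELECT 1 FROM my_log_table WHERE severity = 'ERROR'
      ))
      THEN CALL notify_on_error()
""")
&lt;/code&gt;&lt;/pre&gt;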

&lt;h4&gt;
  
  
  Snowsight Updates
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;Billing contact updates (GA) for on-demand, self-service customers. Trial account holders adding a payment method can now update their billing contact info&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  AI Updates (Cortex, ML, DocumentAI)
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;AI Observability in Snowflake Cortex (GA), enables you to evaluate and trace your generative AI apps, making them more trustworthy and transparent. Systematically measure performance by running evaluations, logging traces for debugging, and benchmarking for deployments. Key features include: Evaluations, Comparison, Tracing&lt;/li&gt;
&lt;li&gt;Cortex Agents integration for Microsoft Teams and Copilot (Preview), allows natural language queries of structured and unstructured data, now supporting integration with Microsoft Teams and Microsoft 365 Copilot. Available in preview in Azure US East 2 (Virginia). Users can interact with a Cortex Agent in Teams or Copilot, making Snowflake data more accessible where users work&lt;/li&gt;
&lt;li&gt;Snowflake AISQL AI_SENTIMENT (GA), offers advanced sentiment classification across various content and languages. It helps organizations understand customer feelings and the specific aspects influencing satisfaction or concern (see the sketch after this list)&lt;/li&gt;
&lt;li&gt;Snowflake AISQL AI_EMBED multimodal embeddings (Preview), enabling customers to generate high-quality image and text embedding vectors directly within Snowflake using simple SQL. Embedding vectors allow text and images to be compared and searched based on their features. AI_EMBED allows organizations to: Develop advanced image search and similarity tools: Find similar products, medical images, or design assets across large datasets. Convert complex visuals into searchable vectors: Transform unstructured content into data. Improve content moderation: Detect and flag inappropriate visual media. Optimize digital asset management: Organize and retrieve marketing, brand, and creative assets via semantic image search. Support manufacturing quality control: Detect defects by comparing product images to standards. Enable intelligent document processing: Extract insights from invoices, contracts, and forms by embedding text and layout.&lt;/li&gt;
&lt;li&gt;ML Explainability visualizations (General availability), these visualizations help reveal how features affect the model’s behavior and predictions&lt;/li&gt;
&lt;li&gt;Snowflake Multi-Node ML Jobs (Preview), enables running distributed ML workflows within Snowflake ML container runtimes on multiple compute nodes. Multi-node ML jobs distribute work across nodes, handling large datasets and complex models with better performance and scalability&lt;/li&gt;
&lt;/ul&gt;
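
&lt;p&gt;As a quick taste of AI_SENTIMENT, here is a hedged sketch run through the Python connector; the table and column names are placeholders, and the exact shape of the returned object is described in the AISQL docs:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# Hedged sketch: calling the AI_SENTIMENT AISQL function over a table.
# Table, column, and connection values are placeholders.
import snowflake.connector

conn = snowflake.connector.connect(
    account="myorg-myaccount", user="MY_USER", password="replace-me"
)
cur = conn.cursor()
cur.execute("""
    SELECT review_id,
           AI_SENTIMENT(review_text) AS sentiment
    FROM product_reviews
    LIMIT 10
""")
for row in cur:
    print(row)
&lt;/code&gt;&lt;/pre&gt;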

&lt;h4&gt;
  
  
  Snowflake Applications (Container Services, Notebooks and Applications, Snowconvert)
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;Snowflake Native App Framework support for Snowflake machine learning models (GA), introduces machine learning models in the Snowflake Native App Framework, enabling use of Snowflake ML models in a Snowflake Native App&lt;/li&gt;
&lt;li&gt;Snowflake Native App with Snowpark Container Services support for Google Cloud (Preview), Apps with containers can be deployed on Google Cloud&lt;/li&gt;
&lt;li&gt;Support for Streamlit 1.45.1 (GA) is now supported in Streamlit in Snowflake&lt;/li&gt;
&lt;li&gt;Snowconvert 1.11.1, Support for new Snowflake &lt;a href="https://docs.snowflake.com/en/release-notes/2025/9_18#snowflake-scripting-output-out-arguments-general-availability" rel="noopener noreferrer"&gt;Out Arguments syntax&lt;/a&gt; within Snowflake Scripting on &lt;a href="https://docs.snowconvert.com/sc/translation-references/teradata/teradata-to-snowflake-scripting-translation-reference/output-parameters" rel="noopener noreferrer"&gt;Teradata&lt;/a&gt;, &lt;a href="https://docs.snowconvert.com/sc/translation-references/oracle/pl-sql-to-snowflake-scripting/output-parameters" rel="noopener noreferrer"&gt;Oracle&lt;/a&gt;, &lt;a href="https://docs.snowconvert.com/sc/translation-references/transact/snowflake-scripting/output-parameters" rel="noopener noreferrer"&gt;SQL Server&lt;/a&gt;, and &lt;a href="https://docs.snowconvert.com/sc/translation-references/redshift/sql-statements/create-procedure/arguments-mode" rel="noopener noreferrer"&gt;Redshift&lt;/a&gt; migrations, &lt;strong&gt;Fixes&lt;/strong&gt; : Enhanced Teradata Data Type Handling: JSON to VARIANT migration, improved recovery on Redshift procedures written with Python&lt;/li&gt;
&lt;li&gt;Snowconvert 1.11.0, New Data Validation framework integration for SQL Server End-to-End experience: Now, users can validate their data after migrating it. The Data Validation framework offers the following validations: Schema validation: Validate the table structure to attest the correct mappings among datatypes, and Metrics validation: Generate metrics of the data stored in a table, ensuring the consistency of your data post-migration&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  Data Lake Updates
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;Partitioned writes for Apache Iceberg™ tables (Preview), Improves compatibility with the Iceberg ecosystem and speeds up read queries from external Iceberg tools. Snowflake now allows creating and writing to both Snowflake-managed and external Iceberg tables with partitioning&lt;/li&gt;
&lt;li&gt;Write support for externally managed Apache Iceberg™ tables and catalog-linked databases (Preview), these features enhance data workflows between Snowflake and the Iceberg ecosystem. Key features include creating Iceberg tables in remote catalogs via Snowflake, performing full DML on externally managed tables, linking a Snowflake database to remote Iceberg catalogs (e.g., AWS Glue, Snowflake Open Catalog), and discovering multiple remote tables without individual definitions&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  Data Pipelines/Data Loading/Unloading Updates
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;Dynamic tables: Support for externally managed Apache Iceberg™ tables (GA), create dynamic tables that read from Iceberg tables managed by external catalogs, enabling data processing from external data lakes without duplicating or ingesting data into Snowflake (see the sketch after this list)&lt;/li&gt;
&lt;li&gt;Run Spark workloads on Snowflake (Preview), run existing Spark workloads directly on Snowflake using Snowpark Connect for Spark&lt;/li&gt;
&lt;/ul&gt;
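
&lt;p&gt;A minimal sketch of the dynamic-table-over-Iceberg pattern, assuming a catalog-linked database and placeholder object names throughout:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# Minimal sketch: a dynamic table reading from an externally managed
# Iceberg table. All object names and connection values are placeholders.
import snowflake.connector

conn = snowflake.connector.connect(
    account="myorg-myaccount", user="MY_USER", password="replace-me"
)
conn.cursor().execute("""
    CREATE OR REPLACE DYNAMIC TABLE sales_summary
      TARGET_LAG = '15 minutes'
      WAREHOUSE = transform_wh
      AS
        SELECT region, SUM(amount) AS total_amount
        FROM my_linked_db.raw.iceberg_sales  -- table managed by an external catalog
        GROUP BY region
""")
&lt;/code&gt;&lt;/pre&gt;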

&lt;h4&gt;
  
  
  Security, Privacy &amp;amp; Governance Updates
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;Workload Identity Federation for AWS, GCP and Azure (Private Preview), enables secure, credential-less access to Snowflake resources for workloads (applications, services, or scripts) running outside of Snowflake&lt;/li&gt;
&lt;li&gt;Enforce privatelink-only access (Preview), Disable public access to privatelink-only accounts&lt;/li&gt;
&lt;li&gt;Data Quality: ACCEPTED_VALUES DMF function (GA), The new ACCEPTED_VALUES function checks if column values match a Boolean expression and returns the count of non-matching records, indicating data quality issues&lt;/li&gt;
&lt;li&gt;External network access with private connectivity: Google Cloud (GA), Create and manage Google Cloud Private Service Connect endpoints to enable access from external networks&lt;/li&gt;
&lt;li&gt;Cortex Powered Object Descriptions (GA), You can now generate descriptions for Snowflake objects with SELECT privilege using Snowflake Cortex, without needing OWNERSHIP privilege, except when saving descriptions&lt;/li&gt;
&lt;li&gt;Single-use refresh tokens for Snowflake OAuth (GA), Use single-use refresh tokens to boost your Snowflake OAuth security integrations&lt;/li&gt;
&lt;li&gt;Automatic classification of a database (Preview), set a classification profile on a database rather than a schema so that all tables and views within the database are automatically classified for sensitive data&lt;/li&gt;
&lt;li&gt;Determine which databases and schemas are monitored by automatic sensitive data classification (Preview), call a system function to identify tables and views automatically classified by sensitive data classification. SYSTEM$SHOW_SENSITIVE_DATA_MONITORED_ENTITIES returns databases and schemas with classification profiles, indicating that objects are classified at the profile's specified interval (see the sketch after this list)&lt;/li&gt;
&lt;li&gt;Automatic tag propagation: Event table to monitor conflicts (GA), Use an event table to collect telemetry data on automatic tag propagation, including conflicts and resolutions. After Snowflake begins data collection, query the table, create a stream for changes, or set alerts for specific events&lt;/li&gt;
&lt;li&gt;Account Usage: New CREDENTIALS view, View details about Programmatic access tokens, Passkeys, and Time-based one-time passcodes (TOTPs) in the new CREDENTIALS view within the ACCOUNT_USAGE schema&lt;/li&gt;
&lt;/ul&gt;
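
&lt;p&gt;Checking classification coverage looks like a one-liner; a sketch follows, assuming a no-argument call (verify the function's exact signature in the docs), with placeholder connection values:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# Sketch: listing entities monitored by automatic sensitive data
# classification. Connection values are placeholders, and a no-argument
# call is assumed; check the docs for the exact signature.
import snowflake.connector

conn = snowflake.connector.connect(
    account="myorg-myaccount", user="MY_USER", password="replace-me"
)
cur = conn.cursor()
cur.execute("SELECT SYSTEM$SHOW_SENSITIVE_DATA_MONITORED_ENTITIES()")
print(cur.fetchone()[0])
&lt;/code&gt;&lt;/pre&gt;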

&lt;h4&gt;
  
  
  SQL, Extensibility &amp;amp; Performance Updates
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;Snowflake Scripting output (OUT) arguments (GA), When an output argument is defined in a Snowflake Scripting stored procedure, it can return the current value to a calling program, like an anonymous block or another stored procedure&lt;/li&gt;
&lt;li&gt;Query insights (GA) are messages that detail how query performance could be impacted and offer general recommendations for follow-up actions. These insights can be accessed through the QUERY_INSIGHTS view&lt;/li&gt;
&lt;li&gt;You can use the ORDER BY ALL clause to sort results by all columns in the SELECT list, eliminating the need to specify each column individually (see the sketch after this list)&lt;/li&gt;
&lt;/ul&gt;
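
&lt;p&gt;ORDER BY ALL is the easiest of these to show; a minimal sketch with placeholder names:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# Minimal sketch: ORDER BY ALL sorts by every column in the SELECT list,
# left to right. Table, column, and connection values are placeholders.
import snowflake.connector

conn = snowflake.connector.connect(
    account="myorg-myaccount", user="MY_USER", password="replace-me"
)
cur = conn.cursor()
cur.execute("""
    SELECT region, product, SUM(amount) AS total_amount
    FROM sales
    GROUP BY region, product
    ORDER BY ALL  -- equivalent to ORDER BY region, product, total_amount
""")
for row in cur:
    print(row)
&lt;/code&gt;&lt;/pre&gt;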

&lt;h4&gt;
  
  
  Data Clean Rooms Updates
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;Check your account’s provider activation history for clean room calls via dcr_health.provider_run_provider_activation_history&lt;/li&gt;
&lt;li&gt;See clean room tasks running or recently stopped in your account. The new procedure dcr_health.dcr_tasks_health_check provides information about these tasks&lt;/li&gt;
&lt;li&gt;Analysis reports are now retained for Audience Overlap, SQL, and custom templates when editing or deleting a clean room. Previously, such actions would delete those reports&lt;/li&gt;
&lt;li&gt;Cross-Cloud Auto-Fulfillment must now be enabled before installing a clean room, which simplifies the sharing flow and enhances the experience&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  Open-Source &lt;a href="https://developers.snowflake.com/opensource/" rel="noopener noreferrer"&gt;Updates&lt;/a&gt;
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;terraform-snowflake-provider 2.4.0 ( &lt;strong&gt;What’s new&lt;/strong&gt; : Implement User Programmatic Access Tokens, Add PAT integration tests, Add a user_programmatic_access_token resource, Add a user_programmatic_access_tokens data source, Implement rotating pats, Implement Current Organization Account; &lt;strong&gt;Misc&lt;/strong&gt; : Add an example of recreating the pipe on changing stage attributes, Fix tests and update documentation after reverting changes from BCR 2025_03, Add basic listings to sdk, Add builders for legacy config DTO, Address pre-push errors, Adjust sweepers and test workflows, Mark DecodeSnowflakeID function as legacy, Add tests for the Plugin Framework: Test custom type with metadata, Test enum handling (suppression and validation), Test optional computed in plugin framework, Test optional with backing field, Test parameters handling (multiple variants), Test zero values in plugin framework; Use existing schema descriptions instead of string equivalents, Update labels in the repository)&lt;/li&gt;
&lt;li&gt;terraform-snowflake-provider 1.2.3 ( &lt;strong&gt;Misc:&lt;/strong&gt; Add Snowflake BCR migration guide, Set version to v1.2.3; &lt;strong&gt;Bug fixes&lt;/strong&gt; : Fix data types parsing for functions and procedures with 2025_03 Bundle, Introduce a new function and procedure parsing function)&lt;/li&gt;
&lt;li&gt;terraform-snowflake-provider 2.3.0 (What’s new: Add programmatic access token support to SDK, Add support for PROGRAMMATIC_ACCESS_TOKEN authenticator; &lt;strong&gt;Misc&lt;/strong&gt; : Account modification test assertion, Add Snowflake BCR migration guide, Configure plugin framework in functional tests, Do not build the whole project after the changelog entry, Enable testifylint and fix reported issues, Set up muxing in tests, Small account adjustments; &lt;strong&gt;Bug fixes&lt;/strong&gt; : Fix data types parsing for functions and procedures with 2025_03 Bundle; Introduce a new function and procedure, parsing function, remove unused conversion functions interfering with other tests)&lt;/li&gt;
&lt;li&gt;Modin 0.34.0 ( &lt;strong&gt;Stability and Bugfixes&lt;/strong&gt; : Preserve dtypes when inserting column to empty frame, Fix name ambiguity for value_counts() on Pandas backend, Add copy parameter to array methods, Log backend switching information with the modin logger, Display ‘modin.pandas’ instead of ‘None’ in backend switching information, Implement array_function stub, &lt;strong&gt;Update testing suite&lt;/strong&gt; : Use https for modin-datasets.intel.com, Stop calling np.array(copy=None) for numpy&amp;lt;2, Allow xgboost to log to root, Fix test_pickle by correctly using fixtures, Cap mpi4py&amp;lt;4.1 in CI; &lt;strong&gt;New Features&lt;/strong&gt; : Consider self_cost in hybrid casting calculator, Support pinning groupby objects in place, Support set_backend() for groupby objects, Support pin_backend(inplace=False) for groupby objects)&lt;/li&gt;
&lt;li&gt;Snowflake VS Code Extension 1.17.0 (Features: Enhanced Cortex AI integration with streaming support for real-time response generation; Bug Fixes: Fixed Snowpark: Debug execution for China deployments, Fixed Snowpark checkpoints loading into Jupyter Notebook files)&lt;/li&gt;
&lt;li&gt;Streamlit 1.47.1 (small bug fix)&lt;/li&gt;
&lt;li&gt;Streamlit 1.47.0 (Enhanced Theming and Customization: Font Weight Control, Categorical Colors for Charts, Dataframe and Heading Customization, Pandas Styler Support; expanded Widget Functionality: Width and Height Parameters, Chat Input Enhancements, Improved Column Configuration, Enhanced LinkColumn, Markdown in Dialog Titles, Spinner with Elapsed Time, Bytes Format for Columns, Proxy Support for Custom Components; Bug Fixes and Other Improvements)&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Client, Drivers, Libraries and Connectors Updates
&lt;/h3&gt;

&lt;h4&gt;
  
  
  New features:
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;Go Snowflake Driver 1.15.0 (Private Preview (PrPr) features: Added support for Workload Identity Federation in the AWS, Azure, GCP, and Kubernetes platforms. This feature can only be accessed by setting the SF_ENABLE_EXPERIMENTAL_AUTHENTICATION environment variable to true, New features and updates: Added support for snake-case connection parameters, Optimized memory consumption during execution of PUT commands)&lt;/li&gt;
&lt;li&gt;JDBC Driver 3.25.1 (Added the ENABLE_WILDCARDS_IN_SHOW_METADATA_COMMANDS parameter to enable using patterns in DatabaseMetaData SHOW … IN … commands, added the OWNER_ONLY_STAGE_FILE_PERMISSIONS_ENABLED parameter which forces the directory that contains the stage files to have owner only permissions (0600))&lt;/li&gt;
&lt;li&gt;JDBC Driver 3.25.0 (Added support for sovereign clouds and removed obsolete issuer checks for Workload Identity Federation)&lt;/li&gt;
&lt;li&gt;Node.js 2.1.1 (Private Preview (PrPr) features: Added support for Workload Identity Federation in the AWS, Azure, GCP, and Kubernetes platforms. This feature can only be accessed by setting the SF_ENABLE_EXPERIMENTAL_AUTHENTICATION environment variable to true, Removed token caching for Client Credentials authentication, This release introduces TypeScript for development: The npm package contains compiled JavaScript code that contains no anticipated breaking changes for driver users)&lt;/li&gt;
&lt;li&gt;ODBC Driver 3.10.0 (Private Preview (PrPr) features: Added support for Workload Identity Federation in the AWS, Azure, GCP, and Kubernetes platforms. This feature can only be accessed by setting the SF_ENABLE_EXPERIMENTAL_AUTHENTICATION environment variable to true, Added support for configuring connection parameters in TOML files)&lt;/li&gt;
&lt;li&gt;Snowflake CLI 3.10.0 (Deprecations: Snowpark processor in the Snowflake Native App Framework, New features and updates: Added support for passing an OAuth token with the --token option, added the ability to suppress new Snowflake CLI version messages, added the following new --format options for outputting data:CSV, which formats query output as CSV, JSON_EXT, which outputs JSON as JSON objects instead of strings, added the --enabled_templating option for the snow sql command that lets you specify which of the following templates to use when resolving variables: Standard (&amp;lt;% ... %&amp;gt;), enabled by default, Legacy (&amp;amp;{ ... }), enabled by default, Jinja ({{ ... }}), disabled by default, added a packages alias for artifact_repository_packages in the snowflake.yml schema, added the snow stage copy @src_stage @dst_stage command for copying files directly between two named stages, added support for the DBT deploy, execute, and list commands)&lt;/li&gt;
&lt;li&gt;Snowflake Connector for Python 3.16.0 (Added the client_fetch_use_mp connection parameter that enables multi-processed fetching of result batches, which usually reduces fetching time, added support for the new Personal Access Token (PAT) authentication mechanism with external session ID, added the bulk_upload_chunks parameter to the write_pandas function; setting this parameter to True changes the behavior of write_pandas to first write all the data chunks to the local disk and then perform a wildcard upload of the chunks folder to the stage, while when set to False (the default), the chunks are saved, uploaded, and deleted one by one (see the sketch after this list), added Windows support for Python 3.13, added basic arrow support for Interval types, added support for Snowflake OAuth for local applications)&lt;/li&gt;
&lt;li&gt;Snowflake Python APIs 1.7.0 (Added support to the following methods for specifying the point-of-time reference when you use Time Travel to create streams: PointOfTimeStatement, PointOfTimeStream, PointOfTimeTimestamp)&lt;/li&gt;
&lt;li&gt;Snowpark Library for Python 1.35.0 (New features: Added support for the following functions in functions.py: ai_embed, try_parse_json, Improvements: Improved query parameter in DataFrameReader.dbapi (Private Preview) so that parentheses aren’t needed around the query, improved error experience in DataFrameReader.dbapi (Private Preview) for exceptions raised when inferring the schema of the target data source; Snowpark local testing updates: added local testing support for reading files with SnowflakeFile. The testing support uses local file paths, the Snow URL semantic (snow://…), local testing framework stages, and Snowflake stages (@stage/file_path))&lt;/li&gt;
&lt;li&gt;Snowpark Library for Python 1.34.0 (Added a new option TRY_CAST to DataFrameReader; when TRY_CAST is True, columns are wrapped in a TRY_CAST statement instead of a hard cast when loading data, added a new option USE_RELAXED_TYPES to the INFER_SCHEMA_OPTIONS of DataFrameReader; when set to True, this option casts all strings to max-length strings and all numeric types to DoubleType, added debuggability improvements to eagerly validate DataFrame schema metadata, which can be enabled using snowflake.snowpark.context.configure_development_features(), added a new function snowflake.snowpark.dataframe.map_in_pandas that allows users to map a function across a DataFrame; the mapping function takes an iterator of pandas DataFrames as input and provides one as output, added a TTL cache for describe queries, so repeated queries in a 15-second interval use the cached value rather than requerying Snowflake, added a parameter fetch_with_process to DataFrameReader.dbapi (PrPr) to enable multiprocessing for parallel data fetching in local ingestion; by default, local ingestion uses multithreading, and multiprocessing can improve performance for CPU-bound tasks like Parquet file generation, added a new function snowflake.snowpark.functions.model that allows users to call methods of a model; &lt;strong&gt;Improvements:&lt;/strong&gt; Added support for row validation using an XSD schema via the rowValidationXSDPath option when reading XML files with a row tag using the rowTag option, improved SQL generation for session.table().sample() to generate a flat SQL statement, added support for complex column expressions as input for functions.explode, added debuggability improvements to show which Python lines a SQL compilation error corresponds to; enable this using snowflake.snowpark.context.configure_development_features(), noting that it also depends on AST collection being enabled in the session via session.ast_enabled = True, set enforce_ordering=True when calling to_snowpark_pandas() from a Snowpark DataFrame containing DML/DDL queries instead of throwing a NotImplementedError; &lt;strong&gt;Snowpark Local testing Updates&lt;/strong&gt;: Fixed a bug when processing windowed functions that led to incorrect indexing in results, when a scalar numeric is passed to fillna, Snowflake will now ignore non-numeric columns instead of producing an error; &lt;strong&gt;Snowpark pandas API Updates&lt;/strong&gt;: Added support for DataFrame.to_excel and Series.to_excel, added support for pd.read_feather, pd.read_orc, and pd.read_stata, added support for pd.explain_switch() to return debugging information on hybrid execution decisions, added support for pd.read_snowflake when the global modin backend is pandas, added support for pd.to_dynamic_table, pd.to_iceberg, and pd.to_view; Improvements: added modin telemetry on API calls and hybrid engine switches, show more helpful error messages to Snowflake Notebook users when the modin or pandas version does not match the requirements, added a data type guard to the cost functions for hybrid execution mode (Private Preview) that checks for data type compatibility, added automatic switching to the pandas backend in hybrid execution mode (Private Preview) for many methods that are not directly implemented in pandas on Snowflake, set the type and other standard fields for pandas on Snowflake telemetry; &lt;strong&gt;Dependency updates&lt;/strong&gt;: added tqdm and ipywidgets as dependencies so that progress bars appear when the user switches between modin backends, updated the supported modin versions to &amp;gt;=0.33.0 and &amp;lt;0.35.0 (was previously &amp;gt;=0.32.0 and &amp;lt;0.34.0))&lt;/li&gt;
&lt;li&gt;Snowpark ML 1.9.1 (New DataConnector features: DataConnector objects can now be pickled, New Dataset features: Dataset objects can now be pickled, New Model Registry features: Models hosted on Snowpark Container Services now support wide input (500+ features))&lt;/li&gt;
&lt;li&gt;SnowSQL 1.4.4 (Updated openssl to version 3 for Windows)&lt;/li&gt;
&lt;/ul&gt;
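
&lt;p&gt;Two of the connector 3.16.0 additions above compose nicely; a hedged sketch follows, with placeholder connection details and a target table that is assumed to already exist:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# Hedged sketch combining two Python connector 3.16.0 additions named
# above: multi-process result fetching and bulk chunk uploads in
# write_pandas. Connection details and the table name are placeholders.
import pandas as pd
import snowflake.connector
from snowflake.connector.pandas_tools import write_pandas

conn = snowflake.connector.connect(
    account="myorg-myaccount",
    user="MY_USER",
    password="replace-me",
    client_fetch_use_mp=True,  # fetch result batches with multiple processes
)

df = pd.DataFrame({"ID": [1, 2], "NAME": ["a", "b"]})
write_pandas(
    conn,
    df,
    table_name="MY_TABLE",    # assumed to exist with matching columns
    bulk_upload_chunks=True,  # write all chunks locally, then one wildcard upload
)
&lt;/code&gt;&lt;/pre&gt;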

&lt;h4&gt;
  
  
  Bug fixes:
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;.NET Driver 4.7.0 (Set ConfigureAwait(false) for asynchronous Programmatic Access Token authentications, fixed an issue with the missing OAuthClientSecret parameter provided externally to a connection string when creating sessions that use the MinPoolSize feature)&lt;/li&gt;
&lt;li&gt;Go Snowflake Driver 1.15.0 (issue with permission handling for the configuration.toml file)&lt;/li&gt;
&lt;li&gt;JDBC Driver 3.25.1 (Fixed unnecessary exception wrapping during network retries, added retries for protocol_version error during TLS negotiation, fixed an issue with the default trust manager not extending X509ExtendedTrustManager, added a missing log parameter to the Session logs)&lt;/li&gt;
&lt;li&gt;JDBC Driver 3.25.0 (Fixed a bug that prevented TelemetryThreadPool from scaling based on the workload, fixed access token expiration handling for the legacy OAuth flow, removed an obsolete error log on HTTP response checks)&lt;/li&gt;
&lt;li&gt;Node.js 2.1.3 (Fixed an issue with using the Google Cloud Platform (GCP) XML API when useVirtualUrl=true, fixed a permission check for .toml configuration files, fixed unhandled resources after creating a connection to prevent the process from terminating when using external browser authentication, fixed an issue with oauthEnableSingleUseRefreshTokens in the authorization code flow)&lt;/li&gt;
&lt;li&gt;Node.js 2.1.2 (Fixed a TypeScript error that was introduced in version 2.1.1)&lt;/li&gt;
&lt;li&gt;Node.js 2.1.1 (Corrected an issue where Util.getProxyFromEnv incorrectly assumed HTTPS, causing HTTP_PROXY values to be ignored for HTTP traffic (port 80), improved extractQueryStatus to handle cases where getQueryResponse returns a null response, preventing occasional breaks, added ErrorCode to the core instance)&lt;/li&gt;
&lt;li&gt;ODBC Driver 3.10.0 (Fixed an issue with supporting virtual-style domains, fixed an issue that could potentially cause a buffer overflow)&lt;/li&gt;
&lt;li&gt;Snowflake CLI 3.10.0 (Fixed an issue where the snow sql command would fail when snowflake.yml is invalid and the query has no templating, fixed an issue with JSON serialization for the Decimal, time, and binary data types)&lt;/li&gt;
&lt;li&gt;Snowflake Connector for Python 3.16.0 (Fixed write_pandas special characters usage in the location name, fixed the usage of use_virtual_url when building the location for a Google Cloud Storage (GCS) client)&lt;/li&gt;
&lt;li&gt;Snowflake Python APIs 1.7.0 (Fixed a warning: 'allow_population_by_field_name' has been renamed to 'validate_by_name'; restored the behavior of the drop method of DAGOperation such that drop_finalizer must be set to True before the finalizer task is dropped (as a result of changes in the 9.20 Snowflake release, fetch_task_dependents started returning the finalizer task alongside other tasks that belong to the Directed Acyclic Graph (DAG), which caused the drop method to always drop the finalizer))&lt;/li&gt;
&lt;li&gt;Snowpark Library for Python 1.35.0 (Fixed a bug in DataFrameReader.dbapi (Private Preview) that fails dbapi with process exit code 1 in a Python stored procedure, fixed a bug in DataFrameReader.dbapi (Private Preview) where custom_schema accepts an illegal schema, fixed a bug in DataFrameReader.dbapi (Private Preview) where custom_schema doesn’t work when connecting to Postgres and MySQL, fixed a bug in schema inference that causes it to fail for external stages)&lt;/li&gt;
&lt;li&gt;Snowpark Library for Python 1.34.0 (Fixed a bug caused by redundant validation when creating an iceberg table, fixed a bug in DataFrameReader.dbapi (Private Preview) where closing the cursor or connection could unexpectedly raise an error and terminate the program, fixed ambiguous column errors when using table functions in DataFrame.select() that have output columns matching the input DataFrame’s columns. This improvement works when DataFrame columns are provided as Column objects, fixed a bug where having a NULL in a column with DecimalTypes would cast the column to FloatTypes instead and lead to precision loss)&lt;/li&gt;
&lt;li&gt;Snowpark ML 1.9.1 (Model Registry bug fixes: fixed a bug with setting the PAD token when the HuggingFace text-generation model had multiple EOS tokens. The handler now picks the first EOS token as the PAD token)&lt;/li&gt;
&lt;li&gt;SnowSQL 1.4.3 (Updated !system command library cleanup. Removed deprecation warning for setuptools)&lt;/li&gt;
&lt;li&gt;SQLAlchemy 1.7.6 (Fixed an issue with get_multi_indexes that assigned the wrong returned indexes when processing multiple indexes in a table)&lt;/li&gt;
&lt;li&gt;Native SDK for Connectors Java 2.2.0 (replacement of the SnowSQL tool with the new Snowflake CLI tool, which streamlines development workflows. This version also includes updated Java dependencies, For developers, this release introduces several new test builders for handlers, allowing for more comprehensive and customizable testing of connector components. These include builders for reset configuration, resource creation, and enabling/disabling resources, the SDK now provides new in-memory implementations for various services, such as scheduler, connector configuration, and task management. These in-memory versions are invaluable for unit testing, as they allow developers to test their connector logic without needing to connect to a live Snowflake instance. Additional new features include: New assertion classes for ingestion configuration and UUIDs, which simplify the process of writing assertions in tests, New classes for integration testing, such as SharedObjects, PathResolver, and ProcedureDescriptor, which provide helpful utilities for managing test objects and procedures, The InMemoryIngestionProcessRepository now has an implemented endProcess method, which previously threw an UnsupportedOperationException)&lt;/li&gt;
&lt;li&gt;Native SDK for Connectors Java 2.1.0 (significant new features and behavior changes aimed at improving connector management, configuration, and security, a major enhancement is the introduction of new procedures for managing the connector lifecycle. These include PUBLIC.RESET_CONFIGURATION() to reset the configuration wizard, and PUBLIC.RECOVER_CONNECTOR_STATE(STRING) to reset the connector's state. These procedures give developers more control over the connector's state and configuration. Additionally, the TASK_REACTOR.REMOVE_INSTANCE(STRING) procedure has been added to allow for the removal of a Task Reactor instance, this release also brings improvements to resource management with new callbacks for the PUBLIC.CREATE_RESOURCE() procedure and the introduction of ENABLE_RESOURCE(), DISABLE_RESOURCE(), and UPDATE_RESOURCE() procedures. These allow for more dynamic and programmatic management of resources within the connector, from a security and authentication perspective, this version introduces OAuth as an authentication mechanism in the Connection Configuration step of the Wizard. This is a significant improvement as it removes the need for users to create EXTERNAL ACCESS INTEGRATION and SECRET objects with credentials, other notable changes in version 2.1.0 include: the adoption of a new approach to identifiers, updates to the example connector to align with the latest SDK release, the explicit specification of Java 11 as the target build version, the addition of a missing grant for the VIEWER and DATA_READER app roles on the Streamlit UI, corrections to the setup.sql script to prevent failures during application version upgrades or downgrades)&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Conclusion
&lt;/h3&gt;

&lt;p&gt;Snowflake’s July 2025 releases mark a deliberate evolution of the platform’s core identity. Beyond its origins as a cloud data warehouse, Snowflake is establishing itself as a comprehensive, intelligent, and secure data cloud — an essential operating system for modern data-driven enterprises.&lt;/p&gt;

&lt;p&gt;Three dominant themes emerge from this wave of innovation:&lt;/p&gt;

&lt;p&gt;AI, and more AI: the infusion of intelligence is now at the heart of the platform. With the general availability of AI Observability in Cortex and new multimodal embedding capabilities, Snowflake is bringing advanced AI and machine learning directly to where the data lives. This strategy democratizes AI, allowing organizations to perform complex tasks like sentiment analysis and image similarity searches with simple SQL. The integration of Cortex Agents with tools like Microsoft Teams and Copilot is a clear signal that Snowflake aims to push data insights directly into the daily workflows of business users, not just analysts.&lt;/p&gt;

&lt;p&gt;Security and governance: there is an unwavering focus on building a bedrock of trust through proactive security and governance. Features like Workload Identity Federation, privatelink-only enforcement, and the automatic classification of sensitive data are critical for enterprises navigating complex regulatory landscapes. These are not simply security add-ons; they are foundational enhancements designed to automate governance and embed zero-trust principles deep within the platform, making it possible to secure data at scale.&lt;/p&gt;

&lt;p&gt;An open platform: Snowflake is doubling down on its commitment to openness and developer empowerment. The deep and continued investment in Apache Iceberg™, the ability to run Spark workloads, and the constant stream of updates to the Native App Framework, Snowpark, and various drivers demonstrate a clear vision. Snowflake is not trying to create a walled garden. Instead, it is building a powerful, interoperable ecosystem where developers can use the tools they know to build the next generation of data-intensive applications directly on the platform.&lt;/p&gt;

&lt;p&gt;For organizations building a modern data strategy, these updates offer a compelling roadmap. They tackle key challenges: responsible AI use, data trust and security, and empowering developers to innovate freely. These enhancements keep Snowflake a key part of the modern enterprise data stack.&lt;/p&gt;

&lt;p&gt;Enjoy the reading.&lt;/p&gt;

&lt;p&gt;I am Augusto Rosa, a Snowflake Data Superhero and Snowflake SME. I am also the Head of Data, Cloud, &amp;amp; Security Architecture at &lt;a href="https://archetypeconsulting.com/" rel="noopener noreferrer"&gt;Archetype Consulting&lt;/a&gt;. You can follow me on &lt;a href="https://www.linkedin.com/in/augustorosa/" rel="noopener noreferrer"&gt;LinkedIn&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Subscribe to my Medium blog &lt;a href="https://blog.augustorosa.com/" rel="noopener noreferrer"&gt;https://blog.augustorosa.com&lt;/a&gt; for the most interesting Data Engineering and Snowflake news.&lt;/p&gt;

&lt;h3&gt;
  
  
  Sources:
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://docs.snowflake.com/en/release-notes/preview-features" rel="noopener noreferrer"&gt;https://docs.snowflake.com/en/release-notes/preview-features&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.snowflake.com/en/release-notes/new-features" rel="noopener noreferrer"&gt;https://docs.snowflake.com/en/release-notes/new-features&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.snowflake.com/en/release-notes/sql-improvements" rel="noopener noreferrer"&gt;https://docs.snowflake.com/en/release-notes/sql-improvements&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.snowflake.com/en/release-notes/performance-improvements-2024" rel="noopener noreferrer"&gt;https://docs.snowflake.com/en/release-notes/performance-improvements-2024&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.snowflake.com/en/release-notes/clients-drivers/monthly-releases" rel="noopener noreferrer"&gt;https://docs.snowflake.com/en/release-notes/clients-drivers/monthly-releases&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.snowflake.com/en/release-notes/connectors/gard-2024" rel="noopener noreferrer"&gt;https://docs.snowflake.com/en/release-notes/connectors/&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://marketplace.visualstudio.com/items?itemName=snowflake.snowflake-vsc" rel="noopener noreferrer"&gt;https://marketplace.visualstudio.com/items?itemName=snowflake.snowflake-vsc&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.snowconvert.com/sc/general/release-notes/release-notes" rel="noopener noreferrer"&gt;https://docs.snowconvert.com/sc/general/release-notes/release-notes&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;




</description>
      <category>newreleases</category>
      <category>data</category>
      <category>snowconvert</category>
      <category>streamlit</category>
    </item>
    <item>
      <title>The Data Engineer’s 2025 Upskilling: Get Started As a Snowflake Data Engineer</title>
      <dc:creator>augusto kiniama rosa</dc:creator>
      <pubDate>Wed, 23 Jul 2025 13:54:21 +0000</pubDate>
      <link>https://forem.com/kiniama/the-data-engineers-2025-upskilling-get-started-as-a-snowflake-data-engineer-17n0</link>
      <guid>https://forem.com/kiniama/the-data-engineers-2025-upskilling-get-started-as-a-snowflake-data-engineer-17n0</guid>
      <description>&lt;h4&gt;
  
  
  A 2025 Comprehensive Quick Guide to Mastering Snowflake Data Engineering
&lt;/h4&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F1024%2F0%2Ah3hizDoXdYIu1zaG" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F1024%2F0%2Ah3hizDoXdYIu1zaG" width="1024" height="637"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;Photo by Håkon Grimstad on Unsplash&lt;/em&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  tl;dr
&lt;/h3&gt;

&lt;p&gt;The field of data engineering is continually evolving. Skills that once seemed advanced are now standard. The role of the data engineer is fundamentally changing, similar to the broader developments in AI. We are no longer just building data pipelines; instead, we focus on designing intelligent, scalable, and automated data platforms. For those of us working with Snowflake, this transition offers a distinctive and exciting array of challenges and opportunities.&lt;/p&gt;

&lt;p&gt;This post provides a roadmap for data engineers aiming to not only stay relevant but also lead in this new era. We will examine the changing skill requirements and explore various Snowflake features and resources.&lt;/p&gt;

&lt;p&gt;The era of just being a “pipeline builder” is ending. Today’s data engineers engage in a wider array of tasks, including orchestrating AI-driven workflows, handling unstructured data, and integrating Large Language Models (LLMs). In the Snowflake context, this means we need to concentrate on several key areas for upskilling.&lt;/p&gt;

&lt;p&gt;Companies no longer seek only clean data; they desire intelligent, near-real-time systems that generate measurable business value. This requires adopting a platform-architect mindset to develop solutions that are reusable, scalable, and cost-efficient.&lt;/p&gt;

&lt;h3&gt;
  
  
  A Snowflake-Centric Roadmap for the 2025 Data Engineer
&lt;/h3&gt;

&lt;p&gt;To succeed in this evolving environment, a clear plan is essential. Here's a Snowflake-focused roadmap with resources to help guide your journey:&lt;/p&gt;

&lt;h4&gt;
  
  
  1. Master the Modern Data Stack on Snowflake
&lt;/h4&gt;

&lt;p&gt;A thorough grasp of Snowflake’s core components is essential, as it serves as the foundation for developing all other skills.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe7p3pi7uhdm11m22jpyc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe7p3pi7uhdm11m22jpyc.png" width="356" height="246"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Core Concepts:&lt;/strong&gt; Understanding Snowflake’s architecture—such as virtual warehouses, storage, and the services layer—is essential. To validate these core skills, aiming for the &lt;strong&gt;SnowPro Core Certification&lt;/strong&gt; is a great objective. This detailed study guide can serve as a helpful resource for that goal: &lt;a href="https://github.com/augustorosa/snowflake-snowpro-core-study-notes" rel="noopener noreferrer"&gt;Snowpro Core Study Notes&lt;/a&gt; (this guide was originally created by &lt;a href="https://www.linkedin.com/in/ivayloboiadjiev/" rel="noopener noreferrer"&gt;Ivaylo&lt;/a&gt; and further updated by me).&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Data Ingestion and Engineering:&lt;/strong&gt; Being skilled in loading and transforming data in Snowflake is essential. This covers techniques for both batch and continuous data loading; a minimal loading sketch follows this list. For practical experience, the &lt;strong&gt;“Getting Started — Data Engineering with Snowflake”&lt;/strong&gt; Quickstart offers an excellent starting point.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Resource:&lt;/strong&gt; &lt;a href="https://quickstarts.snowflake.com/guide/snowflake-northstar-data-engineering/index.html?utm_cta=drift#0" rel="noopener noreferrer"&gt;Getting Started — Data Engineering with Snowflake&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Resource:&lt;/strong&gt; &lt;a href="https://github.com/augustorosa/snowflake-snowpro-core-study-notes" rel="noopener noreferrer"&gt;Snowpro Core Study Notes&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;
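
&lt;p&gt;To make the batch-versus-continuous distinction concrete, here is a minimal SQL sketch. The stage, file format, table, and pipe names (raw_stage, csv_fmt, raw_events, events_pipe) are hypothetical placeholders, not taken from the Quickstart:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;-- Batch loading: a one-off COPY INTO from a named stage
CREATE OR REPLACE FILE FORMAT csv_fmt TYPE = CSV SKIP_HEADER = 1;

COPY INTO raw_events
  FROM @raw_stage/events/
  FILE_FORMAT = (FORMAT_NAME = csv_fmt);

-- Continuous loading: a pipe auto-ingests files as they land
-- (AUTO_INGEST also requires cloud event notifications to be set up)
CREATE OR REPLACE PIPE events_pipe AUTO_INGEST = TRUE AS
  COPY INTO raw_events
    FROM @raw_stage/events/
    FILE_FORMAT = (FORMAT_NAME = csv_fmt);
&lt;/code&gt;&lt;/pre&gt;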

&lt;h4&gt;
  
  
  2. Embrace Modern Data Modelling and Transformations
&lt;/h4&gt;

&lt;p&gt;Once data enters Snowflake, its true value emerges through strong transformation and modeling. This process turns raw data into dependable, analytics-ready assets. Applying software engineering principles here is essential for creating scalable and maintainable data solutions.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fz1o0njqpkzc4ahnwjjri.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fz1o0njqpkzc4ahnwjjri.png" width="454" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Data Modelling:&lt;/strong&gt; A solid grasp of data modeling techniques, including dimensional modeling (star schemas), is crucial for creating data warehouses that are efficient and easy for business users to analyze and query. For a detailed guide with practical examples, the book &lt;strong&gt;“Data Modeling with Snowflake”&lt;/strong&gt; serves as an excellent resource; a small star-schema sketch follows the resource list below.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Book Resource:&lt;/strong&gt; &lt;a href="https://www.packtpub.com/en-us/product/data-modeling-with-snowflake-9781837634453" rel="noopener noreferrer"&gt;Data Modeling with Snowflake — Packt Publishing&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Code Examples:&lt;/strong&gt; &lt;a href="https://github.com/PacktPublishing/Data-Modeling-with-Snowflake" rel="noopener noreferrer"&gt;GitHub — PacktPublishing/Data-Modeling-with-Snowflake&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;
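
&lt;p&gt;As a minimal illustration of the star-schema idea (the table and column names are hypothetical, not from the book), a fact table holds measurable events and joins to descriptive dimensions:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;-- A tiny star schema: one fact table keyed to one dimension
CREATE OR REPLACE TABLE dim_customer (
  customer_key  INTEGER AUTOINCREMENT,
  customer_id   STRING,
  customer_name STRING,
  region        STRING
);

CREATE OR REPLACE TABLE fct_orders (
  customer_key  INTEGER,        -- points at dim_customer
  order_date    DATE,
  order_amount  NUMBER(12,2)
);

-- Business users then slice facts by dimension attributes
SELECT d.region, SUM(f.order_amount) AS total_sales
FROM fct_orders f
JOIN dim_customer d ON f.customer_key = d.customer_key
GROUP BY d.region;
&lt;/code&gt;&lt;/pre&gt;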

&lt;p&gt;&lt;strong&gt;Transformation tooling&lt;/strong&gt; today means version control, automated testing, and modular SQL code.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5lw9t6nq8mtu3ndv8541.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5lw9t6nq8mtu3ndv8541.png" width="720" height="540"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;dbt Core&lt;/strong&gt; has established itself as the industry standard for data transformation within warehouses. By learning dbt, you can develop complete transformation pipelines using SQL or Python, manage dependencies, perform data quality tests, and automatically generate documentation. Snowflake offers a dedicated Quickstart guide to help you get started quickly; see the model sketch after this list.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Resource:&lt;/strong&gt; &lt;a href="https://quickstarts.snowflake.com/guide/data_teams_with_dbt_core/index.html#0" rel="noopener noreferrer"&gt;Getting Started with dbt Core and Snowflake&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;SQLMesh&lt;/strong&gt; is a rising, powerful alternative that brings innovative features such as automatic data diffing and virtual data environments. It aims to enhance deployment safety and efficiency by analyzing the potential effect of changes before they are implemented in production.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Resource:&lt;/strong&gt; &lt;a href="https://sqlmesh.readthedocs.io/en/latest/integrations/engines/snowflake/" rel="noopener noreferrer"&gt;SQLMesh Snowflake Integration Documentation&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;
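
&lt;p&gt;A dbt model is simply a SELECT statement in a .sql file; dbt compiles it, orders it against its dependencies, and materializes it in Snowflake. A minimal sketch, with hypothetical model names:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;-- models/marts/customer_orders.sql
{{ config(materialized='table') }}

SELECT
    c.customer_id,
    COUNT(o.order_id) AS order_count
FROM {{ ref('stg_customers') }} c
LEFT JOIN {{ ref('stg_orders') }} o
    ON c.customer_id = o.customer_id
GROUP BY c.customer_id
&lt;/code&gt;&lt;/pre&gt;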

&lt;h4&gt;
  
  
  3. Embrace Generative AI and LLMs with Snowflake Cortex
&lt;/h4&gt;

&lt;p&gt;The use of Generative AI marks one of the most significant changes in the data landscape. Snowflake leads in this area with Snowflake Cortex, a collection of AI tools that bring the capabilities of LLMs directly to your data.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Cortex Fundamentals:&lt;/strong&gt; As a data engineer, focus on utilizing Cortex's built-in functions for tasks such as sentiment analysis, translation, and text summarization directly within your SQL queries. The new &lt;strong&gt;SnowPro Associate: Platform Certification&lt;/strong&gt; specifically tests knowledge in this domain.&lt;/p&gt;
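&lt;p&gt;For example, the following functions run entirely inside a SQL query (the product_reviews table is a hypothetical example, not from the certification material):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;-- Score, translate, and summarize free-text reviews in place
SELECT
    review_text,
    SNOWFLAKE.CORTEX.SENTIMENT(review_text)              AS sentiment_score,
    SNOWFLAKE.CORTEX.TRANSLATE(review_text, 'de', 'en')  AS english_text,
    SNOWFLAKE.CORTEX.SUMMARIZE(review_text)              AS summary
FROM product_reviews;
&lt;/code&gt;&lt;/pre&gt;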

&lt;p&gt;&lt;strong&gt;Building with Cortex:&lt;/strong&gt; Go beyond simple functions to create intelligent applications. The “Build a Customer Review Analytics Dashboard with Snowflake Cortex and Streamlit” Quickstart provides a great hands-on experience.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Resource:&lt;/strong&gt; &lt;a href="https://quickstarts.snowflake.com/guide/avalanche-customer-review-data-analytics/index.html" rel="noopener noreferrer"&gt;Build a Customer Review Analytics Dashboard with Snowflake Cortex and Streamlit&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  4. Understand MLOps and LLMOps in the Snowflake Ecosystem
&lt;/h4&gt;

&lt;p&gt;Even if you're not a data scientist, it's becoming more important for data engineers to understand the machine learning lifecycle.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Snowpark for Python&lt;/strong&gt; serves as your entry point for developing and deploying machine learning models within Snowflake. Get to know the Snowpark API and discover how it enables you to manipulate data with Python directly where it resides.&lt;/p&gt;
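
&lt;p&gt;The Snowpark DataFrame API itself is Python, but the same “code runs where the data lives” idea shows up in a Python function registered through SQL. A minimal sketch, with a hypothetical function and table name:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;-- Register a Python UDF that executes inside Snowflake
CREATE OR REPLACE FUNCTION clean_text(s STRING)
  RETURNS STRING
  LANGUAGE PYTHON
  RUNTIME_VERSION = '3.10'
  HANDLER = 'clean'
AS
$$
def clean(s):
    # The Python runs next to the data; nothing leaves Snowflake
    return s.strip().lower() if s else None
$$;

SELECT clean_text(review_text) FROM product_reviews;
&lt;/code&gt;&lt;/pre&gt;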

&lt;p&gt;&lt;strong&gt;End-to-End ML Workflows:&lt;/strong&gt; The "Machine Learning with Snowpark Python" Quickstart offers a detailed and advanced overview of how data engineering and MLOps integrate to build a complete ML pipeline.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Resource:&lt;/strong&gt; &lt;a href="https://quickstarts.snowflake.com/guide/getting-started-with-snowflake-cortex-ml-forecasting-and-classification/index.html?index=..%2F..index#0" rel="noopener noreferrer"&gt;Getting Started with Snowflake ML Forecasting and Classification&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  5. Adopt a Platform Engineering Mindset
&lt;/h4&gt;

&lt;p&gt;Treating your data pipelines as products is a central theme for 2025. This involves creating reusable assets and automating every process you can.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Infrastructure as Code (IaC) and CI/CD:&lt;/strong&gt; Discover how to handle your Snowflake infrastructure and data pipelines with tools like Terraform and GitHub Actions. This enables repeatable, version-controlled, and automated deployments, applying software development best practices to data management.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Creating a CI/CD Pipeline:&lt;/strong&gt; This Quickstart provides a practical guide on implementing these ideas by walking you through building a CI/CD pipeline with GitHub and the Snowflake CLI.&lt;/p&gt;
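
&lt;p&gt;With schemachange (used in the Quickstart below), each change to your account is a versioned SQL script that the pipeline applies exactly once, in order. A hypothetical first migration might look like this:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;-- V1.1.0__create_staging_schema.sql
-- schemachange runs scripts in version order and records
-- what it applied in a change-history table
CREATE SCHEMA IF NOT EXISTS staging;

CREATE TABLE IF NOT EXISTS staging.raw_events (
    event_id   STRING,
    payload    VARIANT,
    loaded_at  TIMESTAMP_NTZ DEFAULT CURRENT_TIMESTAMP()
);
&lt;/code&gt;&lt;/pre&gt;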

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Resource:&lt;/strong&gt; &lt;a href="https://quickstarts.snowflake.com/guide/devops_dcm_schemachange_github/index.html?index=..%2F..index#0" rel="noopener noreferrer"&gt;DevOps: Database Change Management with schemachange and GitHub&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Resource:&lt;/strong&gt; &lt;a href="https://blog.archetypeconsulting.com/learning-to-think-before-you-code-cb854ea77ba8" rel="noopener noreferrer"&gt;Learning To Think Before You Code&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  6. Level Up on System Design and Architecture
&lt;/h4&gt;

&lt;p&gt;Ultimately, mastering the design and development of scalable, end-to-end data systems on Snowflake is crucial. For engineers, knowledge and experience in system design and architecture are among the most valuable skills to build. The two topics below help develop them.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Scalable Data Pipelines:&lt;/strong&gt; Discover how to create robust and efficient data pipelines with features such as Dynamic Tables and Snowpipe Streaming, enabling automated and incremental data transformations.&lt;/p&gt;
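
&lt;p&gt;As a minimal sketch of the declarative, incremental style this enables (table and warehouse names are hypothetical):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;-- A dynamic table refreshes itself as raw_events changes,
-- staying within the declared freshness target
CREATE OR REPLACE DYNAMIC TABLE daily_event_counts
  TARGET_LAG = '5 minutes'
  WAREHOUSE = transform_wh
AS
  SELECT event_date, COUNT(*) AS event_count
  FROM raw_events
  GROUP BY event_date;
&lt;/code&gt;&lt;/pre&gt;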

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Resource:&lt;/strong&gt; &lt;a href="https://quickstarts.snowflake.com/guide/CDC_SnowpipeStreaming_DynamicTables/index.html?index=..%2F..index#0" rel="noopener noreferrer"&gt;Snowpipe Streaming and Dynamic Tables for Real-Time Ingestion (CDC Use Case)&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Performance and Cost Optimization:&lt;/strong&gt; Learn how to efficiently manage your Snowflake expenses by focusing on warehouse sizing, query tuning, and resource tracking. The Quickstart titled “Build a Query Cost Monitoring Tool with Snowflake and Streamlit” offers practical guidance to develop this essential skill.&lt;/p&gt;
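
&lt;p&gt;A good starting point, before building a full dashboard, is the ACCOUNT_USAGE share (note it can lag real time by up to a few hours):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;-- Credits consumed per warehouse over the last 30 days
SELECT
    warehouse_name,
    SUM(credits_used) AS credits_30d
FROM SNOWFLAKE.ACCOUNT_USAGE.WAREHOUSE_METERING_HISTORY
WHERE start_time BETWEEN DATEADD(day, -30, CURRENT_TIMESTAMP())
                     AND CURRENT_TIMESTAMP()
GROUP BY warehouse_name
ORDER BY credits_30d DESC;
&lt;/code&gt;&lt;/pre&gt;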

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Resource:&lt;/strong&gt; &lt;a href="https://quickstarts.snowflake.com/guide/query-cost-monitoring/index.html?index=..%2F..index#0" rel="noopener noreferrer"&gt;Build a Query Cost Monitoring Tool with Snowflake and Streamlit&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Your Path Forward: Continuous Learning
&lt;/h3&gt;

&lt;p&gt;The data engineering field in 2025 will keep changing. Snowflake data engineers who adopt a mindset of ongoing learning and focus on essential areas can not only stay current but also lead the development of future intelligent data platforms. Resources, such as Snowflake’s Quickstarts and community-created study guides for certifications like the SnowPro Core, are easily accessible.&lt;/p&gt;

&lt;p&gt;The key question is: Are you prepared to create the future?&lt;/p&gt;

&lt;p&gt;I am Augusto Rosa, a Snowflake Data Superhero, Snowflake SME, and Snowflake Toronto User-Group Organizer. I am also the Head of Data, Cloud, &amp;amp; Security Architecture at &lt;a href="https://archetypeconsulting.com/" rel="noopener noreferrer"&gt;Archetype Consulting&lt;/a&gt;. You can follow me on &lt;a href="https://www.linkedin.com/in/augustorosa/" rel="noopener noreferrer"&gt;LinkedIn&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Subscribe to my Medium blog &lt;a href="https://blog.augustorosa.com/" rel="noopener noreferrer"&gt;https://blog.augustorosa.com&lt;/a&gt; and Archetype Consulting blogs &lt;a href="https://blog.archetypeconsulting.com/" rel="noopener noreferrer"&gt;https://blog.archetypeconsulting.com/&lt;/a&gt; for the most interesting Data Engineering and Snowflake news.&lt;/p&gt;

&lt;h3&gt;
  
  
  References
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://quickstarts.snowflake.com/" rel="noopener noreferrer"&gt;https://quickstarts.snowflake.com/&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/augustorosa" rel="noopener noreferrer"&gt;https://github.com/augustorosa&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;




</description>
      <category>ai</category>
      <category>roadmaps</category>
      <category>snowflake</category>
      <category>data</category>
    </item>
  </channel>
</rss>
