New updates and product improvements
You can now write Delta Tables to any major object storage destination.
You can now easily rotate keys on Snowflake and other key-pair-enabled connections.
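For context, rotating a key-pair connection generally means generating a fresh key pair, registering the new public key on the warehouse user alongside the old one, and then swapping the connection over. A minimal sketch of the Snowflake side using the `cryptography` library (the user name `prequel_user` is illustrative):

```python
from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric import rsa

# Generate a fresh 2048-bit RSA key pair for the rotation.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

private_pem = private_key.private_bytes(
    encoding=serialization.Encoding.PEM,
    format=serialization.PrivateFormat.PKCS8,
    encryption_algorithm=serialization.NoEncryption(),
).decode()

public_pem = private_key.public_key().public_bytes(
    encoding=serialization.Encoding.PEM,
    format=serialization.PublicFormat.SubjectPublicKeyInfo,
).decode()

# Register the new public key as the secondary key in Snowflake so both keys
# remain valid during the cutover, then update the connection's private key
# and unset the old public key once the new one is confirmed working:
#   ALTER USER prequel_user SET RSA_PUBLIC_KEY_2 = '<public_pem, without header/footer lines>';
#   ALTER USER prequel_user UNSET RSA_PUBLIC_KEY;
print(public_pem)
```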
Advanced users on Enterprise plans can now tune specific worker sizes to enable higher performance on extra-large workloads.
You can now specify models as "append-only" where applicable for improved transfer efficiency. This speeds up destination loading by reducing the data scanned on merges across every destination type.
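Conceptually, an append-only model lets the destination skip the merge step entirely: because rows are never updated, each batch can be inserted directly instead of being matched against the existing table. A rough illustration of the difference, with hypothetical table and column names (plain SQL wrapped in Python strings):

```python
# Merge (default behavior): the destination scans existing rows for matching keys
# before updating or inserting, so cost grows with the size of the target table.
merge_sql = """
MERGE INTO analytics.events AS dst
USING staged_batch AS src
  ON dst.event_id = src.event_id
WHEN MATCHED THEN UPDATE SET payload = src.payload, updated_at = src.updated_at
WHEN NOT MATCHED THEN INSERT (event_id, payload, updated_at)
  VALUES (src.event_id, src.payload, src.updated_at)
"""

# Append-only: rows are never updated, so new batches are inserted directly and
# no existing data needs to be scanned.
append_sql = """
INSERT INTO analytics.events (event_id, payload, updated_at)
SELECT event_id, payload, updated_at FROM staged_batch
"""
```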
You can now configure partitions on specific model types. This enables more efficient queries for both write jobs and downstream workloads where query patterns are predictable.
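In file-based destinations, partitioning typically means the written data is laid out by the partition column so that downstream queries can prune whole partitions. A sketch using the open-source `deltalake` writer; the path and column names are illustrative, not Prequel configuration:

```python
import pandas as pd
from deltalake import write_deltalake

# Example batch of transferred rows; event_date is the predictable filter column.
batch = pd.DataFrame(
    {
        "event_id": [1, 2, 3],
        "event_date": ["2024-01-01", "2024-01-01", "2024-01-02"],
        "payload": ["a", "b", "c"],
    }
)

# Writing with a partition column lays the table out as event_date=.../ folders,
# so queries filtering on event_date only read the matching partitions.
write_deltalake(
    "s3://example-bucket/analytics/events",  # illustrative destination path
    batch,
    mode="append",
    partition_by=["event_date"],
)
```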
You can now validate data integrity between your source and destination. This feature probabilistically verifies data integrity by comparing source and destination rows.
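The general idea behind a probabilistic check is to sample rows and compare fingerprints rather than diff every record. A minimal sketch of that approach (not the actual implementation; using `id` as the primary key is an assumption):

```python
import hashlib
import random

def row_fingerprint(row: dict) -> str:
    """Stable hash of a row's values, used to compare source and destination copies."""
    canonical = "|".join(f"{k}={row[k]}" for k in sorted(row))
    return hashlib.sha256(canonical.encode()).hexdigest()

def probabilistic_check(source_rows, destination_rows, sample_size=100, seed=0):
    """Sample source rows and verify each has an identical counterpart in the destination."""
    dst_by_id = {r["id"]: r for r in destination_rows}
    rng = random.Random(seed)
    sample = rng.sample(source_rows, min(sample_size, len(source_rows)))
    mismatches = [
        r["id"]
        for r in sample
        if r["id"] not in dst_by_id
        or row_fingerprint(r) != row_fingerprint(dst_by_id[r["id"]])
    ]
    return mismatches
```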
You can now use windowed transfers to "checkpoint" through larger transfers. This feature is particularly useful for high-volume destinations, where working through the transfer in "chunks" can unlock value earlier for the data recipient or offer an indication of progress at transfer time.
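The underlying pattern is to walk through the transfer window by window and persist a checkpoint after each chunk, so a retry resumes from the last completed window rather than from the beginning. A minimal sketch (the callback names are illustrative):

```python
from datetime import datetime, timedelta

def windowed_transfer(start, end, window, transfer_fn, checkpoint_fn):
    """Walk through [start, end) in fixed-size windows, checkpointing after each chunk."""
    cursor = start
    while cursor < end:
        window_end = min(cursor + window, end)
        transfer_fn(cursor, window_end)   # move only the rows belonging to this window
        checkpoint_fn(window_end)         # persist progress; a retry resumes from here
        cursor = window_end

# Illustrative usage: move a month of data one day at a time.
windowed_transfer(
    start=datetime(2024, 1, 1),
    end=datetime(2024, 2, 1),
    window=timedelta(days=1),
    transfer_fn=lambda lo, hi: print(f"transferring rows updated in [{lo}, {hi})"),
    checkpoint_fn=lambda ts: print(f"checkpoint at {ts}"),
)
```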
You can now resolve transfer errors faster with contextual error codes.
You can now incorporate Databricks’ Unity Catalog functionality when you send data to Databricks destinations.
You can now send data to Azure Blob Storage, Cloudflare R2, SQL Server, and Google Sheets.
You can now sync and refresh on a per-destination basis.
You can now split large transfers into smaller chunks.
You can now write optimized source queries to reduce costs.
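A common way to reduce source-side cost is to make the model query incremental: select only the columns you need and filter on a last-updated timestamp so each sync scans just the rows that changed. An illustrative pair of queries with hypothetical table and column names (plain SQL wrapped in Python strings):

```python
# Scanning the whole table on every sync re-reads unchanged rows and drives up
# warehouse costs.
full_refresh_sql = """
SELECT account_id, plan, usage, updated_at
FROM analytics.account_usage
"""

# Filtering on a last-updated column (and selecting only the needed columns) lets
# each sync scan just the rows that changed since the previous run.
incremental_sql = """
SELECT account_id, plan, usage, updated_at
FROM analytics.account_usage
WHERE updated_at > :last_synced_at
"""
```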
You can now connect to a source or destination using role-based access control (RBAC).
You can now send data from multiple sources.
You can now assign multiple products to a single destination.
You can now use webhooks to send notifications to Slack, PagerDuty, and more.
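As an example of consuming these notifications, a small handler can reshape a webhook event and forward it to a Slack incoming webhook. A hedged sketch in Python; the event fields and the webhook URL are placeholders rather than the documented payload:

```python
import requests

SLACK_WEBHOOK_URL = "https://hooks.slack.com/services/T000/B000/XXXX"  # placeholder URL

def notify_slack(event: dict) -> None:
    """Forward a transfer notification to a Slack incoming webhook.

    The event fields read here (status, destination, error_code) are illustrative;
    consult the webhook documentation for the actual payload shape.
    """
    text = (
        f"Transfer to {event.get('destination', 'unknown destination')} "
        f"finished with status {event.get('status', 'unknown')}"
    )
    if event.get("error_code"):
        text += f" (error code: {event['error_code']})"
    requests.post(SLACK_WEBHOOK_URL, json={"text": text}, timeout=10)
```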
You can now use Prequel with your schema-tenanted database.