Skill Me UP posted this session on what Synapse means for your modern data warehouse.
In this session we will explore Azure Synapse, the interface, the technology, and the development studio, to evaluate what impact the latest Massively Parallel Processing compute engine in Azure will have on your existing or planned Modern Data Warehouse on the Azure platform.
Did you ever wonder how much further AI can scale?
In this session, Nidhi Chappell (Head of Product, Specialized Azure Compute at Microsoft) and Christopher Berner (Head of Compute at OpenAI) share their perspectives and insights on how the Microsoft-OpenAI partnership is taking significant steps to eliminate the barriers to scaling AI workloads.
Of specific interest is OpenAI’s new GPT-3 natural language processing model, which has 175 billion parameters.
The rapid evolution of the cloud to support massive computational models across HPC and AI workloads is shifting paradigms, giving customers options that were previously possible only with dedicated on-premises solutions or supercomputing centers.
Steve Scott, Technical Fellow and CVP Hardware Architecture at Microsoft Azure, shares his experiences from his first 5 months at Microsoft.
Brian Blanchard joins Scott Hanselman to discuss Azure landing zones and how you can prepare your destination Azure environment—not only to receive migrating applications, but also to balance agility, governance, and security considerations.
Brian Blanchard joins Scott Hanselman to discuss how you can unblock your cloud adoption efforts using the Cloud Adoption Framework governance methodology. This agile, iterative methodology enables governance maturity without impeding migration or innovation.
KPMG Ignition Tokyo, the centerpiece of KPMG Japan’s digital strategy, delivers specialty software solutions to its global clients. With a multi-cloud and hybrid approach, the firm is rolling out its next-generation, AI-based audit software built on Azure, and implementing Azure Arc to deliver seamless solutions for clients across multiple hybrid data estates.
Databricks livestreamed this webinar on how to combine the best of data warehouses and data lakes into a simple and unified approach with Delta Lake, for improved data reliability, performance, and operations.
Companies look to support both business analytics and machine learning initiatives within their organizations, but often face challenges with complex operations, proprietary technologies, and unreliable data.
Join our How to Build a Cloud Data Platform technical training series, where we’ll explore how to use Apache Spark™, Delta Lake, MLflow, and other open source technologies to construct your cloud data platform with Databricks to handle all your use cases — data engineering, data science, machine learning, and business analytics. These virtual sessions will include concepts, architectures, and demos.
At the end of each session, you will be given redemption codes for additional free Databricks self-paced training and/or demo notebooks for hands-on practice.