Scott Jeschonek joins Scott Hanselman to talk about Azure HPC Cache.

Whether you are rendering a movie scene, searching for variants in a genome, or running machine learning against a data set, HPC Cache can provide very low-latency, high-throughput access to the required file data.

Better still, your data can remain in its Network Attached Storage (NAS) environment in your data center while you drive your jobs into Azure Compute.


Usually, supercomputers installed at academic and national labs get configured once, bought as quickly as possible before the grant money runs out, installed and tested, and put to use for four or five years or so.

Rarely is a machine upgraded even once, much less a few times.

But that is not the case with the “Corona” system at Lawrence Livermore National Laboratory, which was commissioned in 2017, when North America had a total solar eclipse – hence its name.

This machine, procured under the Commodity Technology Systems (CTS-1) contract not only to do useful work but also to assess the CPU and GPU architectures provided by AMD, was not named after the coronavirus pandemic now spreading around the Earth. Nevertheless, it is being upgraded one more time and put into service as a weapon against the SARS-CoV-2 virus, which causes the COVID-19 illness that has infected at least 2.75 million people (confirmed by test, with the actual number very likely higher) and killed at least 193,000 worldwide.

NVIDIA CEO Jensen Huang addresses 1,400+ attendees of SC19, the annual supercomputing conference, in Denver.

He introduced a reference design for building GPU-accelerated Arm servers, announced the world’s largest GPU-accelerated cloud-based supercomputer on Microsoft Azure, and unveiled NVIDIA Magnum IO storage software to eliminate data transfer bottlenecks for AI, data science, and HPC workloads.

Here’s another story of how big data, high-performance computing, and TensorFlow are reshaping medicine as we know it.

Virtual drug screening has the potential to accelerate the development of new treatments. Using molecular docking, molecular dynamics and other algorithms, researchers can quickly screen for new drug candidates. This saves the enormous expense and time that would have been required to make the same conclusions about those candidates in the lab and in clinical trials.
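The core screening loop described above can be sketched in a few lines: score each candidate molecule with a docking function and keep the best-ranked hits. This is only an illustrative sketch; `dock_score` is a hypothetical stand-in for a real docking engine (production pipelines use tools such as AutoDock or molecular dynamics packages), and the toy scores below are invented for the example.

```python
# A minimal sketch of virtual screening: rank candidate molecules by a
# (hypothetical) docking score and keep the strongest predicted binders.
# By convention here, lower scores mean stronger predicted binding.

def screen_candidates(candidates, dock_score, top_n=10):
    """Return the top_n candidates with the lowest (best) docking scores."""
    scored = [(dock_score(mol), mol) for mol in candidates]
    scored.sort(key=lambda pair: pair[0])  # best (most negative) first
    return [mol for _, mol in scored[:top_n]]

# Toy example with made-up scores standing in for a real docking run.
toy_scores = {"mol_a": -7.2, "mol_b": -5.1, "mol_c": -9.4}
hits = screen_candidates(toy_scores, toy_scores.get, top_n=2)
print(hits)  # the two best-scoring candidates
```

In a real pipeline the expensive step is computing `dock_score` itself; the ranking and filtering shown here is what lets researchers discard weak candidates cheaply before committing lab time and clinical-trial budget to the survivors.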