Armon Dadgar (@armon), HashiCorp CTO and co-founder, and Aaron Schlesinger (@arschles) walk us through the core concepts of Infrastructure as Code (IaC) and how it goes beyond what people typically think when they hear “Infrastructure.” They break down the what, when, how, and why of IaC and how it makes developers’ lives easier, whether you’re running a simple application or have a complex, multi-node system. You’ll learn how you can use HashiCorp Terraform to get up and running with IaC, going from nothing to a complete carbon copy of your production environment at the click of a button (read: you focus on building, testing, and deploying, not spinning up test environments and hoping they’re close to what’s in production).
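
To make the “click of a button” part concrete, here’s a minimal sketch (mine, not from the episode) that drives the Terraform CLI from Python to stand up and tear down a copy of an environment. It assumes the terraform binary is on your PATH, and the infra/ directory and staging.tfvars file are placeholders for wherever your own configuration lives.

```python
# Minimal sketch: spin up a throwaway copy of an environment via the Terraform CLI.
# Assumptions: `terraform` is installed and on PATH; ./infra holds your Terraform
# configuration plus a staging.tfvars variable file (both names are hypothetical).
import subprocess

def run(args, cwd="infra"):
    """Run a terraform command and fail loudly if it errors."""
    print("+ terraform", " ".join(args))
    subprocess.run(["terraform", *args], cwd=cwd, check=True)

def ensure_workspace(name):
    # `workspace new` errors if the workspace already exists; fall back to selecting it.
    try:
        run(["workspace", "new", name])
    except subprocess.CalledProcessError:
        run(["workspace", "select", name])

def spin_up_copy(name="staging"):
    run(["init", "-input=false"])                 # download providers and modules
    ensure_workspace(name)                        # isolate state for this copy
    run(["apply", "-auto-approve", f"-var-file={name}.tfvars"])

def tear_down(name="staging"):
    ensure_workspace(name)
    run(["destroy", "-auto-approve", f"-var-file={name}.tfvars"])

if __name__ == "__main__":
    spin_up_copy()   # one command: from nothing to a running environment
```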

It’s no secret that Deep Space Nine is one of my favorite shows of all time. In fact, I think it represents the high-water mark of Star Trek. Sadly, CBS doesn’t see it the same way and the licensing around Star Trek is split between CBS and Paramount. That leads to some “interesting” creative choices.

What’s interesting is how a fan was able to upconvert footage shot on NTSC video to HD using AI.

From the article:

Don’t expect to see a fully upgraded DS9 hit the web in the near future. Legal issues notwithstanding (CBS is already antsy about unofficial Star Trek material), it’s still a technical challenge. It took CaptRobau two days to process five minutes of footage on his PC, and there are 176 episodes of the sci-fi classic. It’d take enthusiasts a long time to remaster everything. And you can likely forget about a fan-made 4K update. While AI Gigapixel does work, it’s both more intensive and has the unusual effect of creating very sharp edges while doing relatively little to sharpen everything else.

With the rise of Machine Learning came the rise of developer tools and libraries. What are they good for, and which ones should every data scientist and ML engineer know? This article sheds some light on those questions.

A deep learning framework is an interface, library or a tool which allows us to build deep learning models more easily and quickly, without getting into the details of underlying algorithms. They provide a clear and concise way for defining models using a collection of pre-built and optimized components.
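
To see what “pre-built and optimized components” looks like in practice, here’s a tiny illustrative snippet (my own, not the article’s) using Keras from TensorFlow; the data is random noise just to keep it self-contained.

```python
# A toy classifier built entirely from pre-built Keras components:
# no hand-written backpropagation or optimizer code anywhere.
import numpy as np
import tensorflow as tf

# Fake data standing in for a real dataset: 1000 samples, 20 features, 3 classes.
X = np.random.rand(1000, 20).astype("float32")
y = np.random.randint(0, 3, size=1000)

model = tf.keras.Sequential([
    tf.keras.Input(shape=(20,)),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(3, activation="softmax"),
])

# Loss, optimizer, and metrics are all off-the-shelf components.
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

model.fit(X, y, epochs=3, batch_size=32, verbose=0)
model.summary()
```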

Here’s an inspirational story for you this Sunday about a regular guy who launched an ISP from his garage.

Many people complain about their internet service, but Brandt Kuykendall did something about it. A resident of the small town of Dillon Beach, CA, he found the service to his town was too slow and expensive. After months chasing down companies to get access to internet infrastructure, he finally started a DIY ISP in his garage – and neighbors were clamoring for access to his faster, cheaper, and better-serviced network.

Here’s a great exploration of Bayes’ Theorem and how to use it in real world problems.

Bayes’ theorem is a way to figure out conditional probability. Conditional probability is the probability of an event happening, given that it has some relationship to one or more other events. For example, your probability of getting a parking space is connected to the time of day you park, where you park, and what conventions are going on at any time. Bayes’ theorem is slightly more nuanced. In a nutshell, it gives you the actual probability of an event given information about tests.
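
Here’s a small worked example of that “probability of an event given information about tests” idea, with made-up numbers: a test with 99% sensitivity and a 5% false-positive rate, for a condition affecting 1% of people. Bayes’ theorem, P(A|B) = P(B|A) · P(A) / P(B), turns those into the probability you actually have the condition after a positive result.

```python
# Bayes' theorem for a diagnostic test: P(condition | positive test).
# All numbers are illustrative, not taken from the linked article.

def posterior(prior, sensitivity, false_positive_rate):
    """P(condition | positive test) via Bayes' theorem."""
    # P(positive) = P(pos | condition)P(condition) + P(pos | no condition)P(no condition)
    p_positive = sensitivity * prior + false_positive_rate * (1 - prior)
    return sensitivity * prior / p_positive

# 1% prevalence, 99% sensitivity, 5% false-positive rate.
print(posterior(prior=0.01, sensitivity=0.99, false_positive_rate=0.05))
# ~0.17: even a "99% accurate" test leaves only about a 17% chance you have the condition.
```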

Predicting the stock market is one of the most difficult things to do given all the variables. There are numerous factors involved – physical vs. psychological factors, rational and irrational behavior, and so on. All these aspects combine to make share prices volatile and very difficult to predict accurately.

In this article, we will work with historical data about the stock prices of a publicly listed company. We will implement a mix of machine learning algorithms to predict the future stock price of this company, starting with simple algorithms like averaging and linear regression, and then move on to advanced techniques like Auto ARIMA and LSTM.
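
As a taste of the first two techniques the article covers, here’s a toy sketch (my own, run on synthetic random-walk “prices” rather than real ticker data) of a moving-average forecast and a linear-regression trend; Auto ARIMA and LSTM are best left to the article itself.

```python
# Toy forecast of a synthetic "stock price" with a moving average and linear regression.
# Synthetic random-walk data only; not the article's dataset or code.
import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(42)
prices = pd.Series(100 + rng.normal(0, 1, 500).cumsum(), name="close")

# Technique 1: predict tomorrow's price as the mean of the last 20 days.
moving_avg_forecast = prices.rolling(window=20).mean().iloc[-1]

# Technique 2: fit a linear trend to (day index -> price) and extrapolate one step.
X = np.arange(len(prices)).reshape(-1, 1)
model = LinearRegression().fit(X, prices.values)
linreg_forecast = model.predict([[len(prices)]])[0]

print(f"last close:        {prices.iloc[-1]:.2f}")
print(f"20-day moving avg: {moving_avg_forecast:.2f}")
print(f"linear regression: {linreg_forecast:.2f}")
```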

While Uber is known as one of the pioneers in self-driving vehicles, its autonomous vehicle division has been a source of contention for investors. TechCrunch recently reported numbers that were less than flattering: the ride-hailing company was spending $20 million a month on developing self-driving technologies. The Wall Street Journal estimates that Uber spent about $750 million on building out self-driving technologies before scaling back in 2018.

However, all is not bleak: Uber’s autonomous vehicle unit may be about to get a massive ($1 billion+) cash injection?

It’s highly possible, according to news reports indicating a group of investors including SoftBank Group is putting money into the division. The Wall Street Journal reported last night that Uber, more formally known as Uber Technologies Inc., was in “late-stage” discussions with a consortium that would invest in the startup’s self-driving vehicle division.

TensorFlow 2.0 has arrived, with a focus on ease of use, developer productivity, and scalability.

Now there’s a contest to show off your TF2 chops: The #PoweredByTF 2.0 Challenge.

Here’s a synopsis:

Developers of all ages, backgrounds, and skill levels are encouraged to submit projects. Teams may have between 1 and 6 participants. Participants are encouraged to expand the scope of an existing TensorFlow 1.x project, to migrate and continue work on a historic TensorFlow 1.x project, or to create an entirely new software solution using TensorFlow 2.0.

Keras and eager execution. Robust model deployment in production on any platform. […]
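
If you haven’t tried 2.0 yet, the eager-execution half of that quote is easy to see in a few lines: operations run immediately and gradients come from tf.GradientTape, with no sessions or graph-building boilerplate. A minimal sketch, not part of the challenge materials:

```python
# Eager execution in TensorFlow 2.0: ops run immediately, gradients via GradientTape.
import tensorflow as tf

x = tf.Variable(3.0)

with tf.GradientTape() as tape:
    y = x ** 2 + 2.0 * x         # computed right away, no session needed

dy_dx = tape.gradient(y, x)      # d/dx (x^2 + 2x) = 2x + 2 = 8 at x = 3
print(y.numpy(), dy_dx.numpy())  # 15.0 8.0
```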

Donovan Brown and Gopi Chigakkagari discuss how to integrate Azure Pipelines with various third-party tools to achieve a full DevOps cycle with multi-cloud support. You can continue to use your existing tools and still get the benefits of Azure Pipelines: application release orchestration, deployment, approvals, and full traceability all the way back to the code or issue.

