Jeremy Fielding has put together a handy video introducing motor types and power ratings, with guidance on how to wire, speed-control, and use all the common types of motors, focusing on reusing motors salvaged from appliances and other sources. It covers steppers, BLDC, PMDC, single- and three-phase, universal motors, and more.
Two Minute Papers explores the paper “ImageNet-trained CNNs are biased towards texture; increasing shape bias improves accuracy and robustness.” The source documents and code are available at the following links:
Is it possible to use machine learning without needing to code? [Spoiler alert: it is.] Watch this video to see Siraj Raval explore this question and Ludwig, a Python library from Uber’s AI lab that they’ve been using internally for two years.
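Ludwig’s “no code” approach centers on a declarative model definition rather than hand-written training code. As a minimal sketch (the CSV file and its column names here are hypothetical, chosen only for illustration), a model definition might look like:

```yaml
# model.yaml - declare what goes in and what comes out;
# Ludwig infers the rest (encoders, training loop, etc.)
input_features:
  - name: text        # hypothetical column in the CSV
    type: text
output_features:
  - name: class       # hypothetical label column
    type: category
```

Training is then a single command against a CSV file, with no Python written by the user:

```shell
ludwig train --data_csv reviews.csv --model_definition_file model.yaml
```

The column names and file paths above are assumptions for the sketch; the general shape (input_features/output_features plus the `ludwig train` CLI) is how the library is used.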
Data integration is complex, with many moving parts. It helps organizations combine data and complex business processes in hybrid data environments. Failures are common in data integration workflows, whether because data does not arrive on time, because of functional code issues in your pipelines, or because of infrastructure problems.
A common requirement is the ability to rerun failed activities within data integration workflows. Sometimes you also need to rerun activities to reprocess data after an upstream error. Azure Data Factory now lets you rerun an entire pipeline, or rerun it downstream from a particular activity inside the pipeline.
- Rerun activities inside your Azure Data Factory pipelines (blog post)
- Visually monitor Azure data factories – Rerun activities inside a pipeline (docs)
- Azure Data Factory (overview)
- Azure Data Factory (pricing details)
- Create a free account (Azure)
Never miss an episode: follow @AzureFriday.
- Cognilytica is amazing! (04:00)
- All chatbots are dumb – for now. (09:00)
- Machine Learning vs. Machine Reasoning (11:30)
- The DIKUW Pyramid (11:55)
- More about Knowledge Graph… (14:00)
- More about Common Sense… (15:00)
- On generalization (16:05)
- ML and the Elephant in the Room (16:22)
- Movie reference: Guardians of the Galaxy (17:00)
- How did the AI Today podcast get started? (18:00)
- AI Today podcast with Dragos Margineantu, AI Chief Technologist at Boeing (19:44)
- Is AI retro? (22:50)
- Movie Reference: Short Circuit (23:30)
- Did you find data or did data find you? (25:00)
- Tech Breakfast DC (28:30)
- AOL plug (31:25)
- What’s your favorite part of your current gig? (32:00)
- More about pseudo-AI… (33:45)
- Shout-out to Brent Ozar (just not by name) (38:00)
- When I’m not working, I enjoy ___? (39:45)
- I think the coolest thing in technology is ___? (41:12)
- Bubble programming language (42:15)
- I look forward to the day when I can use technology to ___. (45:00)
- “Don’t overshare…” (46:30)
- The loneliest people (47:00)
- Warning: Do not watch movies while driving. (48:30)
- Also, eating tacos while driving is difficult. (49:00)
- “Lefties are alright…” – Kathleen (49:30)
- Ron may be a pool shark. (51:30)
- Ron and Kathleen write for TechTarget and Forbes. (53:00)
- Ron’s book recommendation: Hackers: Heroes of the Computer Revolution (56:00)
- Kathleen’s book recommendation: My Not-So-Perfect Life (57:00)
- Kathleen’s other book recommendation: The Glass Castle (57:40)
- You can glamp at Sandy River in Farmville (1:00:00)
This video explores the output of the GAN architecture described in the following paper:
We propose an alternative generator architecture for generative adversarial networks, borrowing from style transfer literature. The new architecture leads to an automatically learned, unsupervised separation of high-level attributes (e.g., pose and identity when trained on human faces) and stochastic variation in the generated images (e.g., freckles, hair), and it enables intuitive, scale-specific control of the synthesis. The new generator improves the state-of-the-art in terms of traditional distribution quality metrics, leads to demonstrably better interpolation properties, and also better disentangles the latent factors of variation. To quantify interpolation quality and disentanglement, we propose two new, automated methods that are applicable to any generator architecture. Finally, we introduce a new, highly varied and high-quality dataset of human faces.
One of the promises of IoT is to bring the intelligence of the cloud to the edge, running IoT data analytics as close as possible to the data source. This reduces latency, optimizes performance and response times, supports offline scenarios, helps comply with privacy policies and regulations, reduces data transfer costs, and more.
One thing you really have to consider when bringing artificial intelligence to the edge is the hardware you will need to run these powerful algorithms. Ted Way from the Azure Machine Learning team joins Olivier on the IoT Show to discuss hardware acceleration at the edge for AI. They discuss the scenarios and technologies Microsoft develops and uses to accelerate AI in the cloud and at the edge, such as graphics cards, FPGAs, and CPUs. To illustrate all this, Ted walks us through real-life scenarios and demos IoT Edge running machine learning vision algorithms.
Learn more about hardware acceleration for AI at the Edge: https://docs.microsoft.com/azure/machine-learning/service/concept-accelerate-with-fpgas
Create a Free Account (Azure): https://aka.ms/aft-iot
BBC Click visited the world’s biggest smartphone show, Mobile World Congress, to see all the latest launches and developments in mobile tech.