Introduction to Machine Learning

Series: Introduction to Machine Learning
User rating: 4.8 (30)
Episodes: 25
Content rating: TVPG
Year: 2006
Language: English

About

This series teaches you about machine-learning programs and how to write them in the Python programming language. For those new to Python, a "get-started" tutorial is included. Professor Michael L. Littman covers major concepts and techniques, all illustrated with real-world examples such as medical diagnosis, game-playing, spam filters, and media special effects.

Episodes

1 to 6 of 25

1. Telling the Computer What We Want

31m

Professor Littman gives a bird's-eye view of machine learning, covering its history, key concepts, terms, and techniques as a preview for the rest of the series. Look at a simple example involving medical diagnosis. Then, focus on a machine-learning program for a video green screen, used widely in television and film. Contrast this with a traditional program to solve the same problem.

2. Starting with Python Notebooks and Colab

18m

The demonstrations in this series use the Python programming language, the most popular and widely supported language in machine learning. Dr. Littman shows you how to run the programming examples from your web browser, which avoids installing software on your own computer and gives you more processing power than a typical home computer.

3. Decision Trees for Logical Rules

32m

Can machine learning beat a rhyming rule, taught in elementary school, for determining whether a word is spelled with an I-E or an E-I, as in "diet" and "weigh"? Discover that a decision tree is a convenient tool for approaching this problem. After experimenting, use Python to build a decision tree that predicts how likely an individual is to develop diabetes based on eight health factors.
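The episode builds its predictor in Python; a minimal sketch of the same idea using scikit-learn's DecisionTreeClassifier (the tiny dataset here is made up for illustration, with only two of the eight health factors, and is not the course's data):

```python
from sklearn.tree import DecisionTreeClassifier

# Toy stand-in data: each row is [glucose, bmi], two of the eight
# health factors; labels mark diabetes (1) or no diabetes (0).
X = [[150, 33], [85, 22], [168, 35], [90, 24], [140, 31], [100, 23]]
y = [1, 0, 1, 0, 1, 0]

# A shallow tree stays small enough to read as a set of logical rules
tree = DecisionTreeClassifier(max_depth=2)
tree.fit(X, y)

print(tree.predict([[155, 30]]))  # classify a new individual
```

The learned tree is just a nested set of if-then thresholds on the input features, which is why decision trees suit problems expressible as logical rules.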

4. Neural Networks for Perceptual Rules

30m

Graduate to a more difficult class of problems: learning from images and auditory information. Here, it makes sense to address the task more or less the way the brain does, using a form of computation called a neural network. Explore the general characteristics of this powerful tool. Among the examples, compare decision-tree and neural-network approaches to recognizing handwritten digits.
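The handwritten-digit comparison can be sketched with scikit-learn's small multilayer perceptron on its bundled 8x8 digits dataset (an illustrative stand-in, not the course's own code):

```python
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# 8x8 grayscale digit images, flattened to 64 pixel features each
X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# One hidden layer of 32 units: a small neural network
net = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0)
net.fit(X_train, y_train)

print(f"test accuracy: {net.score(X_test, y_test):.2f}")
```

Swapping `MLPClassifier` for `DecisionTreeClassifier` on the same data is a quick way to reproduce the episode's comparison between the two approaches.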

5. Opening the Black Box of a Neural Network

29m

Take a deeper dive into neural networks by working through a simple algorithm implemented in Python. Return to the green-screen problem from the first episode to build a learning algorithm that places the professor against a new backdrop.
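One way to frame the green-screen task as learning is to classify each pixel's color as background or foreground; a minimal sketch with logistic regression (the pixel values and labels here are invented for illustration, and real training data would be sampled from actual frames):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy training pixels: RGB values labeled green-screen background (1)
# or foreground (0)
pixels = np.array([
    [30, 200, 40], [25, 190, 35], [40, 210, 50],    # green background
    [180, 150, 120], [90, 60, 50], [200, 180, 170], # skin/clothing tones
])
labels = np.array([1, 1, 1, 0, 0, 0])

clf = LogisticRegression().fit(pixels, labels)

# Classify every pixel of a (tiny) 1x2 image
image = np.array([[[28, 195, 38], [185, 155, 125]]])
mask = clf.predict(image.reshape(-1, 3)).reshape(1, 2)
print(mask)  # 1 = replace with the new backdrop, 0 = keep
```

The mask then selects which pixels get replaced by the new backdrop, which is the learned counterpart of a hand-tuned color threshold.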

6. Bayesian Models for Probability Prediction

29m

A program need not understand the content of an email to know with high probability that it's spam. Discover how machine learning does so with the Naive Bayes approach, which applies Bayes' theorem to a deliberately simplified model of language generation. The technique illustrates a very useful strategy: reasoning backward from effects (in this case, words) to their causes (spam).
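A minimal Naive Bayes spam filter can be sketched with scikit-learn's bag-of-words tools (the four-email corpus is made up for illustration):

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

# Tiny made-up corpus; 1 = spam, 0 = not spam
emails = [
    "win a free prize now",
    "free money claim your prize",
    "meeting agenda for tomorrow",
    "lunch plans this week",
]
labels = [1, 1, 0, 0]

# Bag-of-words counts: the simplified model of language generation,
# which ignores word order entirely
vec = CountVectorizer()
X = vec.fit_transform(emails)

model = MultinomialNB().fit(X, labels)
print(model.predict(vec.transform(["claim your free prize"])))
```

The classifier works backward from the observed words to the most probable cause, weighing how often each word appears in spam versus legitimate mail.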

Extended Details

  • Closed Captions: English