Machine Learning Courses - Page 14

Showing results 131-140 of 485
Machine Learning in the Enterprise
This course takes a real-world, practical approach to the ML workflow through a case study: an ML team faces several business requirements and use cases. The team must understand the tools required for data management and governance and choose the best approach for data preprocessing, from an overview of Dataflow and Dataprep to using BigQuery for preprocessing tasks. The team is presented with three options for building machine learning models for two specific use cases, and the course explains why the team would use AutoML, BigQuery ML, or custom training to achieve their objectives. The course then takes a deeper dive into custom training, covering requirements from training code structure, storage, and loading large datasets to exporting a trained model. You will build a custom training machine learning model and package it into a container image with little knowledge of Docker. The case study team examines hyperparameter tuning with Vertex Vizier and how it can be used to improve model performance. To understand model improvement further, we dive into a bit of theory, discussing regularization, dealing with sparsity, and other essential concepts and principles. We end with an overview of prediction and model monitoring and how Vertex AI can be used to manage ML models.
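As a rough illustration of the regularization and sparsity ideas mentioned above (not course material, and separate from the Vertex AI tooling the course uses), here is a minimal scikit-learn sketch contrasting an L1 (Lasso) penalty, which drives many coefficients to exactly zero, with an L2 (Ridge) penalty, which only shrinks them. The dataset and parameters are made up for illustration.

```python
# Minimal sketch (not course material): L1 vs. L2 regularization and sparsity.
# Assumes scikit-learn is installed; dataset and parameters are illustrative.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import Lasso, Ridge

X, y = make_regression(n_samples=200, n_features=20, n_informative=5,
                       noise=10.0, random_state=0)

lasso = Lasso(alpha=1.0).fit(X, y)   # L1 penalty: pushes many weights to exactly zero
ridge = Ridge(alpha=1.0).fit(X, y)   # L2 penalty: shrinks weights but keeps them nonzero

print("nonzero Lasso coefficients:", np.sum(lasso.coef_ != 0))
print("nonzero Ridge coefficients:", np.sum(ridge.coef_ != 0))
```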
Regression Analysis with Yellowbrick
Welcome to this project-based course on Regression Analysis with Yellowbrick. In this project, we will build a machine learning model to predict the compressive strength of high-performance concrete (HPC). Although we will use linear regression, the emphasis of this project is on using visualization techniques to steer our machine learning workflow. Visualization plays a crucial role throughout the analytical process and is indispensable for effective analysis, model selection, and evaluation. This project makes use of a diagnostic platform called Yellowbrick, which allows data scientists and machine learning practitioners to visualize the entire model selection process and steer toward better, more explainable models. Yellowbrick hosts several datasets from the UCI Machine Learning Repository. We'll be working with the concrete dataset, which is well suited for regression tasks; it contains 1030 instances and 8 real-valued attributes with a continuous target. We will cover the following topics in our machine learning workflow: exploratory data analysis (EDA), feature and target analysis, regression modelling, cross-validation, model evaluation, and hyperparameter tuning. This course runs on Coursera's hands-on project platform called Rhyme. On Rhyme, you do projects in a hands-on manner in your browser. You will get instant access to pre-configured cloud desktops containing all of the software and data you need for the project. Everything is already set up directly in your internet browser so you can just focus on learning. For this project, you'll get instant access to a cloud desktop with Python, Jupyter, Yellowbrick, and scikit-learn pre-installed. Notes: - You will be able to access the cloud desktop 5 times. However, you will be able to access the instruction videos as many times as you want. - This course works best for learners who are based in the North America region. We're currently working on providing the same experience in other regions.
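For readers curious what this kind of workflow looks like in code, here is a minimal sketch assuming the tools named above (Yellowbrick and scikit-learn) are installed. It loads the concrete dataset bundled with Yellowbrick and uses a residuals plot to evaluate a linear regression model; the actual project notebooks may differ.

```python
# Minimal sketch (assumes yellowbrick and scikit-learn are installed).
# Loads the UCI concrete dataset bundled with Yellowbrick and visualizes
# residuals of a linear regression model; the course's exact steps may differ.
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
from yellowbrick.datasets import load_concrete
from yellowbrick.regressor import ResidualsPlot

X, y = load_concrete()                     # 1030 instances, 8 real-valued features
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

visualizer = ResidualsPlot(LinearRegression())
visualizer.fit(X_train, y_train)           # fit the model and record training residuals
visualizer.score(X_test, y_test)           # score on held-out data and plot residuals
visualizer.show()                          # render the diagnostic plot
```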
AI Strategy and Governance
In this course, you will discover AI and the strategies used to transform business and gain a competitive advantage. You will explore the multitude of uses for AI in an enterprise setting and the tools available to lower the barriers to AI use. You will get a closer look at the purpose, function, and use cases for explainable AI. This course will also provide you with the tools to build responsible AI governance algorithms, as faculty dive into the large datasets you can expect to see in an enterprise setting and how they affect the business on a greater scale. Finally, you will examine AI in the organizational structure, how AI is playing a crucial role in change management, and the risks associated with AI processes. By the end of this course, you will learn different strategies to recognize biases that exist within data, how to maintain and build trust with user data and privacy, and what it takes to construct a responsible governance strategy. For additional reading, Professor Hosanagar's book "A Human’s Guide to Machine Intelligence" provides more extensive information on the topics covered in this module.
Support Vector Machine Classification in Python
In this 1-hour long guided project-based course, you will learn how to use Python to implement a Support Vector Machine (SVM) algorithm for classification. This type of algorithm assigns data points to classes and makes predictions on new data. The results are visualized as scatter plots in which the classes are separated by a linear decision boundary. You will learn the fundamental theory and practical illustrations behind Support Vector Machines and learn to fit, examine, and use supervised SVM classification models in Python. We will walk you step by step through supervised machine learning problems. With every task in this project, you will expand your knowledge, develop new skills, and broaden your experience in machine learning. In particular, you will build a Support Vector Machine classifier, and by the end of this project, you will be able to build your own SVM classification model with clear visualizations. In order to be successful in this project, you only need to know the basics of Python and classification algorithms.
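As a minimal illustration of the kind of model described above, the sketch below fits a linear-kernel SVM with scikit-learn on a synthetic two-class dataset and plots the predictions; the course's own data and visualizations may differ.

```python
# Minimal sketch (assumes scikit-learn and matplotlib are installed); the
# course's dataset and visualization style may differ.
import matplotlib.pyplot as plt
from sklearn.datasets import make_blobs
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Two well-separated clusters, so a linear decision boundary works well.
X, y = make_blobs(n_samples=200, centers=2, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = SVC(kernel="linear").fit(X_train, y_train)
print("test accuracy:", clf.score(X_test, y_test))

# Scatter plot of the held-out points colored by predicted class.
plt.scatter(X_test[:, 0], X_test[:, 1], c=clf.predict(X_test), cmap="coolwarm")
plt.title("SVM predictions on held-out data")
plt.show()
```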
Build Better Generative Adversarial Networks (GANs)
In this course, you will: - Assess the challenges of evaluating GANs and compare different generative models - Use the Fréchet Inception Distance (FID) method to evaluate the fidelity and diversity of GANs - Identify sources of bias and the ways to detect it in GANs - Learn and implement the techniques associated with the state-of-the-art StyleGANs The DeepLearning.AI Generative Adversarial Networks (GANs) Specialization provides an exciting introduction to image generation with GANs, charting a path from foundational concepts to advanced techniques through an easy-to-understand approach. It also covers social implications, including bias in ML and the ways to detect it, privacy preservation, and more. Build a comprehensive knowledge base and gain hands-on experience in GANs. Train your own model using PyTorch, use it to create images, and evaluate a variety of advanced GANs. This Specialization provides an accessible pathway for all levels of learners looking to break into the GANs space or apply GANs to their own projects, even without prior familiarity with advanced math and machine learning research.
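For context on the FID metric mentioned above, here is a minimal NumPy/SciPy sketch of the Fréchet Inception Distance formula. It assumes you already have Inception-network activations for real and generated images; it is only an illustration, not the Specialization's code.

```python
# Minimal sketch of the Fréchet Inception Distance (FID) formula, assuming
# `real_feats` and `fake_feats` are Inception-network activations for real and
# generated images (arrays of shape [n_samples, n_features]); illustrative only.
import numpy as np
from scipy import linalg

def frechet_inception_distance(real_feats, fake_feats):
    mu_r, mu_f = real_feats.mean(axis=0), fake_feats.mean(axis=0)
    cov_r = np.cov(real_feats, rowvar=False)
    cov_f = np.cov(fake_feats, rowvar=False)
    # FID = ||mu_r - mu_f||^2 + Tr(cov_r + cov_f - 2 * (cov_r @ cov_f)^(1/2))
    covmean = linalg.sqrtm(cov_r @ cov_f)
    if np.iscomplexobj(covmean):          # numerical noise can yield tiny imaginary parts
        covmean = covmean.real
    return np.sum((mu_r - mu_f) ** 2) + np.trace(cov_r + cov_f - 2.0 * covmean)

# Example with random "activations" (real usage extracts Inception-v3 features).
rng = np.random.default_rng(0)
print(frechet_inception_distance(rng.normal(size=(500, 64)),
                                 rng.normal(loc=0.5, size=(500, 64))))
```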
Pneumonia Classification using PyTorch
In this 2-hour guided project, you are going to use the EfficientNet model and train it on a pneumonia chest X-ray dataset. The dataset consists of nearly 5600 chest X-ray images in two categories (Pneumonia/Normal). Our main aim for this project is to build a pneumonia classifier that can assign a chest X-ray scan to one of the two classes. You will load and fine-tune the pretrained EfficientNet model and create a simple PyTorch trainer to train it. In order to be successful in this project, you should be familiar with Python, convolutional neural networks, and basic PyTorch. This is a hands-on, practical project that focuses primarily on implementation, not on the theory behind convolutional neural networks. Note: This course works best for learners who are based in the North America region. We’re currently working on providing the same experience in other regions.
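Here is a minimal sketch of the fine-tuning setup described above, assuming torchvision's pretrained EfficientNet-B0 and a hypothetical chest_xray/train folder laid out for ImageFolder; the project's actual data path, transforms, and trainer will differ.

```python
# Minimal sketch (assumes torch and torchvision are installed); the dataset
# path, transforms, and training details in the actual project may differ.
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

device = "cuda" if torch.cuda.is_available() else "cpu"

# Pretrained EfficientNet-B0 with its classifier head replaced for 2 classes.
model = models.efficientnet_b0(weights=models.EfficientNet_B0_Weights.DEFAULT)
model.classifier[1] = nn.Linear(model.classifier[1].in_features, 2)
model = model.to(device)

# Hypothetical folder layout: chest_xray/train/{NORMAL,PNEUMONIA}/*.jpeg
transform = transforms.Compose([transforms.Resize((224, 224)), transforms.ToTensor()])
train_ds = datasets.ImageFolder("chest_xray/train", transform=transform)
loader = torch.utils.data.DataLoader(train_ds, batch_size=32, shuffle=True)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

model.train()
for images, labels in loader:             # one pass over the data for illustration
    images, labels = images.to(device), labels.to(device)
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
```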
Supervised Machine Learning: Classification
This course introduces you to one of the main families of supervised machine learning models: classification. You will learn how to train predictive models to classify categorical outcomes and how to use error metrics to compare models. The hands-on section of this course focuses on best practices for classification, including train/test splits and handling data sets with unbalanced classes. By the end of this course you should be able to: -Differentiate uses and applications of classification and classification ensembles -Describe and use logistic regression models -Describe and use decision tree and tree-ensemble models -Describe and use other ensemble methods for classification -Use a variety of error metrics to compare and select the classification model that best suits your data -Use oversampling and undersampling as techniques to handle unbalanced classes in a data set. Who should take this course? This course targets aspiring data scientists interested in acquiring hands-on experience with supervised machine learning classification techniques in a business setting. What skills should you have? To make the most out of this course, you should have familiarity with programming in a Python development environment, as well as a fundamental understanding of data cleaning, exploratory data analysis, calculus, linear algebra, probability, and statistics.
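As a small illustration of comparing classifiers with error metrics on unbalanced data (one of the topics listed above), the sketch below uses scikit-learn's class_weight option rather than explicit oversampling or undersampling; the data and models are illustrative only, not course exercises.

```python
# Minimal sketch (assumes scikit-learn is installed); compares a logistic
# regression and a tree ensemble with error metrics on an imbalanced,
# synthetic dataset. Course exercises may use different data and techniques.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

# 90/10 class imbalance, so accuracy alone is a misleading metric.
X, y = make_classification(n_samples=2000, weights=[0.9, 0.1], random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=0)

for model in (LogisticRegression(max_iter=1000, class_weight="balanced"),
              RandomForestClassifier(class_weight="balanced", random_state=0)):
    model.fit(X_train, y_train)
    print(type(model).__name__)
    print(classification_report(y_test, model.predict(X_test)))
```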
Block.one: Getting Started with The EOSIO Blockchain
This is a self-paced lab that takes place in the Google Cloud console. In this lab, you create a virtual machine (VM) to host an EOSIO blockchain.
The Economics of AI
The course introduces you to cutting-edge research in the economics of AI and the implications for economic growth and labor markets. We start by analyzing the nature of intelligence and information theory. Then we connect our analysis to modeling production and technological change in economics, and how these processes are affected by AI. Next we turn to how technological change drives aggregate economic growth, covering a range of scenarios including a potential growth singularity. We also study the impact of AI-driven technological change on labor markets and workers, evaluating to what extent fears about technological unemployment are well-founded. We continue with an analysis of economic policies to deal with advanced AI. Finally, we evaluate the potential for transformative progress in AI to lead to significant disruptions and study the problem of how humans can control highly intelligent AI algorithms.
Probabilistic Graphical Models 2: Inference
Probabilistic graphical models (PGMs) are a rich framework for encoding probability distributions over complex domains: joint (multivariate) distributions over large numbers of random variables that interact with each other. These representations sit at the intersection of statistics and computer science, relying on concepts from probability theory, graph algorithms, machine learning, and more. They are the basis for the state-of-the-art methods in a wide variety of applications, such as medical diagnosis, image understanding, speech recognition, natural language processing, and many, many more. They are also a foundational tool in formulating many machine learning problems. This course is the second in a sequence of three. Following the first course, which focused on representation, this course addresses the question of probabilistic inference: how a PGM can be used to answer questions. Even though a PGM generally describes a very high dimensional distribution, its structure is designed so as to allow questions to be answered efficiently. The course presents both exact and approximate algorithms for different types of inference tasks, and discusses where each could best be applied. The (highly recommended) honors track contains two hands-on programming assignments, in which key routines of the most commonly used exact and approximate algorithms are implemented and applied to a real-world problem.
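To make the idea of exact inference concrete, here is a minimal NumPy sketch of variable elimination on a toy chain A → B → C; it is only an illustration and not the course's (far more general) assignment code.

```python
# Minimal sketch of exact inference on a toy chain A -> B -> C (all binary),
# eliminating variables by summing them out; illustrative only, not the
# course's honors-track assignment code.
import numpy as np

p_a = np.array([0.6, 0.4])                      # P(A)
p_b_given_a = np.array([[0.7, 0.3],             # P(B | A=0)
                        [0.2, 0.8]])            # P(B | A=1)
p_c_given_b = np.array([[0.9, 0.1],             # P(C | B=0)
                        [0.4, 0.6]])            # P(C | B=1)

# P(B) = sum_a P(A=a) * P(B | A=a)  (eliminate A)
p_b = p_a @ p_b_given_a
# P(C) = sum_b P(B=b) * P(C | B=b)  (eliminate B)
p_c = p_b @ p_c_given_b

print("P(B) =", p_b)                            # marginal over B
print("P(C) =", p_c)                            # marginal over C
```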