Machine Learning Courses
Getting Started with AWS Machine Learning
Machine learning (ML) is one of the fastest-growing areas in technology and a highly sought-after skill set in today's job market. The World Economic Forum states that the growth of artificial intelligence (AI) could create 58 million net new jobs in the next few years, yet it is estimated that there are currently only 300,000 AI engineers worldwide, while millions are needed. This means there is a unique and immediate opportunity for you to get started learning the essential ML concepts used to build AI applications, no matter what your skill level is. Learning the foundations of ML now will help you keep pace with this growth, expand your skills, and even advance your career.
This course will teach you how to get started with AWS Machine Learning. Key topics include: Machine Learning on AWS, Computer Vision on AWS, and Natural Language Processing (NLP) on AWS. Each topic consists of several modules that dive deep into a variety of ML concepts and AWS services, along with insights from experts to put the concepts into practice.
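As a taste of what the hands-on side looks like, here is a minimal sketch of calling Amazon Comprehend, one of the AWS NLP services, from Python with boto3. It assumes AWS credentials and a default region are already configured, and it is not part of the course materials.

```python
import boto3

# Assumes AWS credentials and a default region are configured locally.
comprehend = boto3.client("comprehend")

# Detect the dominant sentiment of a short piece of text.
response = comprehend.detect_sentiment(
    Text="The new checkout flow is fast and easy to use.",
    LanguageCode="en",
)
print(response["Sentiment"])        # e.g. POSITIVE
print(response["SentimentScore"])   # per-class confidence scores
```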
Computer Vision Basics
By the end of this course, learners will understand what computer vision is, as well as its mission of making computers see and interpret the world as humans do, by learning core concepts of the field and receiving an introduction to human vision capabilities. They will be equipped to identify key application areas of computer vision and to understand the digital imaging process. The course covers the crucial elements that enable computer vision: digital signal processing, neuroscience, and artificial intelligence. Topics include color, light, and image formation; early, mid-, and high-level vision; and the mathematics essential for computer vision. Learners will be able to apply mathematical techniques to complete computer vision tasks.
This course is ideal for anyone curious about or interested in exploring the concepts of computer vision. It is also useful for those who want a refresher on the mathematical concepts behind computer vision. Learners should have basic programming skills and experience (an understanding of for loops and if/else statements), specifically in MATLAB (MathWorks provides the basics here: https://www.mathworks.com/learn/tutorials/matlab-onramp.html). Learners should also be familiar with the following: basic linear algebra (matrix-vector operations and notation), 3D coordinate systems and transformations, basic calculus (derivatives and integration), and basic probability (random variables).
Material includes online lectures, videos, demos, hands-on exercises, project work, readings and discussions. Learners gain experience writing computer vision programs through online labs using MATLAB* and supporting toolboxes.
* A free license to install MATLAB for the duration of the course is available from MathWorks.
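As a small taste of the mathematics involved (shown in Python/NumPy here rather than the course's MATLAB), grayscale conversion is a per-pixel matrix-vector operation: the dot product of each RGB triple with fixed luma weights. The weights below are the standard ITU-R BT.601 coefficients.

```python
import numpy as np

# A tiny 2x2 RGB "image" with channel values in [0, 1].
image = np.array([
    [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]],
    [[0.0, 0.0, 1.0], [1.0, 1.0, 1.0]],
])

# Grayscale = weighted combination of R, G, B per pixel (BT.601 weights).
weights = np.array([0.299, 0.587, 0.114])
gray = image @ weights

print(gray)  # 2x2 array of luminance values
```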
Analyze Box Office Data with Seaborn and Python
Welcome to this project-based course on analyzing box office data with Seaborn and Python. In this course, you will work with The Movie Database (TMDB) Box Office Prediction data set. The motion picture industry is raking in more revenue than ever with its expansive growth around the world. Can we build models to accurately predict movie revenue? Could the results from these models be used to further increase revenue? We try to answer these questions through exploratory data analysis (EDA) in this project and the next. The statistical data visualization libraries Seaborn and Plotly will be our workhorses for generating interactive, publication-quality graphs. By the end of this course, you will be able to produce data visualizations in Python with Seaborn and apply graphical techniques used in exploratory data analysis (EDA).
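As a preview of the kind of plot you will build, here is a minimal Seaborn sketch. It assumes the TMDB training file is named train.csv and has budget and revenue columns; verify both against the actual data set.

```python
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt

# Assumed file and column names for the TMDB Box Office Prediction data.
df = pd.read_csv("train.csv")

# Scatter plot with a fitted regression line: does budget track revenue?
sns.regplot(data=df, x="budget", y="revenue", scatter_kws={"alpha": 0.3})
plt.title("Budget vs. revenue in the TMDB data set")
plt.show()
```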
This course runs on Coursera's hands-on project platform called Rhyme. On Rhyme, you do projects in a hands-on manner in your browser. You will get instant access to pre-configured cloud desktops containing all of the software and data you need for the project. Everything is already set up directly in your internet browser so you can just focus on learning. For this project, you’ll get instant access to a cloud desktop with Python, Jupyter, and scikit-learn pre-installed.
Notes:
- You will be able to access the cloud desktop 5 times. However, you will be able to access the instruction videos as many times as you want.
- This course works best for learners who are based in the North America region. We’re currently working on providing the same experience in other regions.
Code Free Data Science
The Code Free Data Science class is designed for learners seeking to gain or expand their knowledge of Data Science. Participants will receive basic training in the effective predictive analytics approaches that accompany the growing discipline of Data Science, with no programming required. Machine Learning methods will be presented using the KNIME Analytics Platform to discover patterns and relationships in data. Predicting future trends and behaviors allows for proactive, data-driven decisions. During the class, learners will acquire new skills to apply predictive algorithms to real data and to evaluate, validate, and interpret the results, with no programming prerequisites of any kind. Participants will gain the essential skills to design, build, verify, and test predictive models.
You Will Learn
• How to design Data Science workflows without any programming involved
• Essential Data Science skills to design, build, test and evaluate predictive models
• Data manipulation and preparation, and classification and clustering methods
• Ways to apply Data Science algorithms to real data and evaluate and interpret the results
Extract, Transform, and Load Data
This course is designed for business and data professionals seeking to learn the first technical phase of the data science process, known as Extract, Transform, and Load (ETL).
Learners will be taught how to collect data from multiple sources so that it is available to be transformed and cleaned, and will then dive into the collected data sets to prepare and clean the data so that it can later be loaded into its ultimate destination. At the conclusion of the course, learners will load data into its ultimate destination so that it can be analyzed and modeled.
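As a minimal illustration of the three phases (not course material), the following Python sketch extracts records from a hypothetical CSV file, transforms them with pandas, and loads them into a SQLite destination.

```python
import sqlite3
import pandas as pd

# Extract: pull raw records from a CSV source (hypothetical file name).
raw = pd.read_csv("orders_raw.csv")

# Transform: clean and standardize before loading.
raw = raw.dropna(subset=["order_id"])  # drop rows missing the key column
raw["order_date"] = pd.to_datetime(raw["order_date"], errors="coerce")
raw["amount"] = raw["amount"].astype(float)

# Load: write the cleaned data to its destination for analysis.
with sqlite3.connect("warehouse.db") as conn:
    raw.to_sql("orders", conn, if_exists="replace", index=False)
```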
The typical student in this course will have experience working with data and aptitude with computer programming.
Advanced AI Techniques for the Supply Chain
In this course, we'll learn about more advanced machine learning methods that are used to tackle problems in the supply chain. We'll start with an overview of the different ML paradigms (regression/classification) and where the latest models fit into these breakdowns. Then, we'll dive deeper into some specific techniques and use cases, such as using neural networks to predict product demand and random forests to classify products. An important part of using these models is understanding their assumptions and required preprocessing steps. We'll end with a project that applies these advanced techniques to an image classification problem: finding faulty products coming out of a machine.
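As a flavor of one such technique, here is a minimal scikit-learn sketch of a random forest classifier trained on synthetic stand-in features; the course's actual product data and features will differ.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Synthetic stand-in for product features (e.g., weight, cost, lead time).
X, y = make_classification(n_samples=500, n_features=8, n_informative=5,
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0)

# Fit an ensemble of 100 decision trees and check held-out accuracy.
clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)
print(f"Held-out accuracy: {accuracy_score(y_test, clf.predict(X_test)):.2f}")
```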
Natural Language Processing on Google Cloud
This course is an introduction to sequence models and their applications, including an overview of sequence model architectures and how to handle inputs of variable length.
• Predict future values of a time-series
• Classify free form text
• Address time-series and text problems with recurrent neural networks
• Choose between RNNs/LSTMs and simpler models
• Train and reuse word embeddings in text problems
You will get hands-on practice building and optimizing your own text classification and sequence models on a variety of public datasets in the labs we’ll work on together.
Prerequisites: Basic SQL, familiarity with Python and TensorFlow
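As a rough sketch of the kind of model the labs cover, here is a minimal TensorFlow/Keras LSTM text classifier; the vocabulary size, sequence length, and binary label are illustrative assumptions rather than the labs' actual settings.

```python
import tensorflow as tf

# Illustrative sizes: a 10,000-word vocabulary and 64-dim embeddings.
VOCAB_SIZE = 10_000

model = tf.keras.Sequential([
    tf.keras.layers.Embedding(VOCAB_SIZE, 64),       # token ids -> dense vectors
    tf.keras.layers.LSTM(64),                        # recurrent encoder
    tf.keras.layers.Dense(1, activation="sigmoid"),  # binary text label
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])

# Build with (batch, time) shaped input of padded token-id sequences.
model.build(input_shape=(None, 100))
model.summary()
```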
Bracketology with Google Machine Learning
This is a self-paced lab that takes place in the Google Cloud console. In this lab you use Machine Learning (ML) to analyze the public NCAA dataset and predict NCAA tournament brackets.
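The lab walks through the Google Cloud console, but the core idea reduces to training a model in BigQuery ML. Here is a minimal Python sketch of that idea; the dataset, table, and column names are hypothetical placeholders, not the lab's actual schema.

```python
from google.cloud import bigquery

client = bigquery.Client()  # assumes default credentials and project

# Train a logistic regression in BigQuery ML to predict game outcomes.
# `my_dataset.training_features` and its columns are hypothetical.
sql = """
CREATE OR REPLACE MODEL `my_dataset.bracket_model`
OPTIONS (model_type = 'logistic_reg', input_label_cols = ['win']) AS
SELECT season, seed_diff, points_diff, win
FROM `my_dataset.training_features`
"""
client.query(sql).result()  # wait for training to finish
```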
Object Detection Using Facebook's Detectron2
In this 2-hour-long project-based course, you will learn how to train an object detection model using Facebook's Detectron2. Detectron2 is a research platform and production library for deep learning built by Facebook AI Research (FAIR). We will build an object detection model that identifies written English and Hindi text, a language identification task that can be extended to other use cases. We will look at the entire cycle of model development and evaluation in Detectron2, starting with how to load a dataset, visualize it, and prepare it as input to the deep learning model.
We will then look at how to build a Faster R-CNN model in Detectron2 and customize it, and how to configure the model's parameters and hyperparameters. Finally, we will train the model and move on to model inference and evaluation.
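To make that workflow concrete, here is a minimal sketch of configuring and training a Faster R-CNN model with Detectron2's model zoo and DefaultTrainer; the dataset name, class count, and solver settings are illustrative assumptions, not the course's actual configuration.

```python
from detectron2 import model_zoo
from detectron2.config import get_cfg
from detectron2.engine import DefaultTrainer

# Assumes a dataset named "text_lang_train" has already been registered
# with DatasetCatalog; the name and class count here are hypothetical.
cfg = get_cfg()
cfg.merge_from_file(
    model_zoo.get_config_file("COCO-Detection/faster_rcnn_R_50_FPN_3x.yaml"))
cfg.MODEL.WEIGHTS = model_zoo.get_checkpoint_url(
    "COCO-Detection/faster_rcnn_R_50_FPN_3x.yaml")
cfg.DATASETS.TRAIN = ("text_lang_train",)
cfg.DATASETS.TEST = ()
cfg.SOLVER.IMS_PER_BATCH = 2
cfg.SOLVER.BASE_LR = 0.00025
cfg.SOLVER.MAX_ITER = 1000
cfg.MODEL.ROI_HEADS.NUM_CLASSES = 2  # two classes: English text, Hindi text

trainer = DefaultTrainer(cfg)
trainer.resume_or_load(resume=False)
trainer.train()
```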
Note: This course works best for learners who are based in the North America region. We’re currently working on providing the same experience in other regions.
Build an End-to-End Data Capture Pipeline using Document AI
This is a self-paced lab that takes place in the Google Cloud console. In this lab you use Cloud Functions and Pub/Sub to create an end-to-end document processing pipeline using Document AI. The Document AI API is a document understanding solution that takes unstructured data, such as documents and emails, and makes the data easier to understand, analyze, and consume.
In this lab, you will create a document processing pipeline that automatically processes documents as they are uploaded to Cloud Storage. The pipeline consists of a primary Cloud Function that processes new files uploaded to Cloud Storage using a Document AI form processor and then saves the form data detected in those files to BigQuery. If the form data includes any address fields, the address data is written to a Pub/Sub topic, which in turn triggers a second Cloud Function that uses the Geocoding API to obtain geographic coordinates for the address; these coordinates are also written to BigQuery.
This is a simple pipeline that uses a general form processor to detect basic form data, such as a labelled field containing address information. Document AI processors that use one of the specialized parsers (beyond the scope of this lab) provide enhanced entity information for specific document types, even when those documents do not include labelled fields. For example, a Document AI invoice parser can provide detailed address and supplier information from an unlabelled invoice document, because it understands the layout of invoices.
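As a rough sketch of the form-parsing step at the heart of the primary Cloud Function, the following Python uses the Document AI client library to run a processor over a PDF and collect labelled field pairs; the project, location, and processor IDs are placeholders, and the lab's own code may differ.

```python
from google.cloud import documentai_v1 as documentai

# Minimal sketch, assuming a form processor already exists;
# PROJECT_ID, LOCATION, and PROCESSOR_ID are placeholders.
def parse_form(pdf_bytes: bytes) -> dict:
    client = documentai.DocumentProcessorServiceClient()
    name = client.processor_path("PROJECT_ID", "LOCATION", "PROCESSOR_ID")
    request = documentai.ProcessRequest(
        name=name,
        raw_document=documentai.RawDocument(
            content=pdf_bytes, mime_type="application/pdf"),
    )
    document = client.process_document(request=request).document

    # Collect labelled field name/value pairs detected by the processor.
    fields = {}
    for page in document.pages:
        for field in page.form_fields:
            key = field.field_name.text_anchor.content or ""
            value = field.field_value.text_anchor.content or ""
            fields[key.strip()] = value.strip()
    return fields
```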