Data Science Courses - Page 7
Showing results 61-70 of 1407
Sentiment Analysis with Deep Learning using BERT
In this 2-hour long project, you will learn how to analyze a dataset for sentiment analysis. You will learn how to read in a PyTorch BERT model and adjust its architecture for multi-class classification, and how to configure an optimizer and scheduler for effective training and performance. While fine-tuning this model, you will learn how to design a training and evaluation loop to monitor model performance as it trains, including saving and loading models. Finally, you will build a sentiment analysis model that leverages BERT's large-scale language knowledge.
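As a rough sketch of the workflow described above, the snippet below loads a pretrained BERT model for multi-class classification, sets up an optimizer and scheduler, and runs one training step; the use of the Hugging Face transformers library, the bert-base-uncased checkpoint, and the three-class setup are illustrative assumptions, not the course's exact materials.

```python
# Minimal sketch (assumed: Hugging Face transformers, bert-base-uncased, 3 sentiment classes)
import torch
from transformers import BertTokenizer, BertForSequenceClassification, get_linear_schedule_with_warmup

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=3)

# Encode a toy batch for multi-class sentiment classification
texts = ["great product", "terrible service", "it was okay"]
labels = torch.tensor([2, 0, 1])
batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")

# Optimizer and linear warm-up scheduler, the pieces you adjust during fine-tuning
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)
scheduler = get_linear_schedule_with_warmup(optimizer, num_warmup_steps=0, num_training_steps=100)

# One step of the training loop
model.train()
loss = model(**batch, labels=labels).loss
loss.backward()
optimizer.step()
scheduler.step()
optimizer.zero_grad()

# Saving and reloading the fine-tuned weights
torch.save(model.state_dict(), "bert_sentiment.pt")
model.load_state_dict(torch.load("bert_sentiment.pt"))
```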
Note: This course works best for learners who are based in the North America region. We’re currently working on providing the same experience in other regions.
Doing Economics: Measuring Climate Change
This course will give you practical experience in working with real-world data, with applications to important policy issues in today’s society. Each week, you will learn specific data handling skills in Excel and use these techniques to analyse climate change data, with appropriate readings to provide background information on the data you are working with. You will also learn about the consequences of climate change and how governments can address this issue.
After completing this course, you should be able to:
• Understand how data can be used to assess the extent of climate change
• Produce appropriate bar charts, line charts, and scatterplots to visualise data
• Calculate and interpret summary statistics (mean, median, variance, percentile, correlation)
• Explain the challenges with designing and implementing policies that address climate change
No prior knowledge in economics or statistics is required for this course. No knowledge of Excel is required, except a familiarity with the interface and how to enter and clear data.
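The course itself works in Excel, but as an illustration of the summary statistics listed above, here is a short pandas sketch on invented temperature-anomaly figures (the numbers and column names are made up for the example, not real climate data).

```python
# Illustrative only: toy climate-style data, invented for this example
import pandas as pd

df = pd.DataFrame({
    "year": range(2000, 2010),
    "temp_anomaly_c": [0.39, 0.54, 0.63, 0.62, 0.54, 0.68, 0.64, 0.66, 0.54, 0.66],
    "co2_ppm": [369.6, 371.1, 373.3, 375.8, 377.5, 379.8, 381.9, 383.8, 385.6, 387.4],
})

# Summary statistics: mean, median, variance, percentile, correlation
print(df["temp_anomaly_c"].mean())
print(df["temp_anomaly_c"].median())
print(df["temp_anomaly_c"].var())
print(df["temp_anomaly_c"].quantile(0.9))         # 90th percentile
print(df["temp_anomaly_c"].corr(df["co2_ppm"]))   # correlation with CO2 concentration
```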
New Product Development For Small Businesses and Start-Ups
In this 1-hour 40-minute project-based course, you will learn about the process of developing a new product for start-up companies and small and medium-sized enterprises (SMEs). You will learn about idea generation and evaluation in product development, using an idea generation model and online resources such as Google Trends and Amazon. You will use methods to evaluate your product concept in terms of market segmentation, growth potential, and the competition your product will face. You will also evaluate a supplier and the cost of your product by analyzing component prices and production rates. By the end of this project, you will be able to create a full retrospective plan for the product launch and understand how and why the product specifications are set.
Note: This course works best for learners who are based in the North America region. We’re currently working on providing the same experience in other regions.
AI Workflow: Enterprise Model Deployment
This is the fifth course in the IBM AI Enterprise Workflow Certification specialization. You are STRONGLY encouraged to complete these courses in order, as they are not independent courses but parts of a workflow in which each course builds on the previous ones.
This course introduces you to an area that few data scientists get to experience: deploying models for use in large enterprises. Apache Spark is one of the most commonly used frameworks for running machine learning models at scale, and this course covers best practices for using Spark for data manipulation, model training, and model tuning. The use case calls for the creation and deployment of a recommender system. The course wraps up with an introduction to model deployment technologies.
By the end of this course you will be able to:
1. Use Apache Spark's RDDs, DataFrames, and pipelines
2. Employ spark-submit scripts to interface with Spark environments
3. Explain how collaborative filtering and content-based filtering work
4. Build a data ingestion pipeline using Apache Spark and Apache Spark streaming
5. Analyze hyperparameters in machine learning models on Apache Spark
6. Deploy machine learning algorithms using the Apache Spark machine learning interface
7. Deploy a machine learning model from Watson Studio to Watson Machine Learning
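As a hedged sketch of objectives 1, 3, and 6, the snippet below trains a small collaborative-filtering recommender with Spark's built-in ALS estimator inside an ML pipeline; the toy ratings and column names are invented, and the course's own use case and code will differ.

```python
# Minimal PySpark sketch (assumes a local Spark installation; ratings data is invented)
from pyspark.sql import SparkSession
from pyspark.ml import Pipeline
from pyspark.ml.recommendation import ALS

spark = SparkSession.builder.appName("recommender-sketch").getOrCreate()

# Toy explicit ratings: (userId, itemId, rating)
ratings = spark.createDataFrame(
    [(0, 0, 4.0), (0, 1, 2.0), (1, 1, 3.0), (1, 2, 5.0), (2, 0, 1.0), (2, 2, 4.0)],
    ["userId", "itemId", "rating"],
)

# Collaborative filtering via alternating least squares, wrapped in a Pipeline
als = ALS(userCol="userId", itemCol="itemId", ratingCol="rating",
          rank=5, maxIter=5, regParam=0.1, coldStartStrategy="drop")
model = Pipeline(stages=[als]).fit(ratings)

# Top-2 item recommendations per user
model.stages[-1].recommendForAllUsers(2).show(truncate=False)

spark.stop()
```

A script like this would typically be submitted to a cluster with spark-submit (objective 2), for example `spark-submit recommender_sketch.py`, with master and resource options set for the target environment.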
Who should take this course?
This course targets existing data science practitioners who have expertise in building machine learning models and who want to deepen their skills in building and deploying AI in large enterprises. If you are an aspiring Data Scientist, this course is NOT for you, as you need real-world expertise to benefit from the content of these courses.
What skills should you have?
It is assumed that you have completed Courses 1 through 4 of the IBM AI Enterprise Workflow specialization and that you have a solid understanding of the following topics prior to starting this course:
• Fundamentals of linear algebra
• Sampling, probability theory, and probability distributions
• Descriptive and inferential statistical concepts
• General understanding of machine learning techniques and best practices
• Practiced understanding of Python and the packages commonly used in data science: NumPy, Pandas, matplotlib, scikit-learn
• Familiarity with IBM Watson Studio
• Familiarity with the design thinking process
Troubleshooting and Solving Data Join Pitfalls
This is a self-paced lab that takes place in the Google Cloud console. This lab focuses on how to reverse-engineer the relationships between data tables and the pitfalls to avoid when joining them together.
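The lab itself runs in the Google Cloud console, but the classic pitfall it targets can be shown in a few lines of pandas (an illustrative stand-in, not the lab's environment): joining on a key that is not unique silently multiplies rows.

```python
# Illustrative fan-out pitfall: duplicate join keys inflate the row count
import pandas as pd

orders = pd.DataFrame({"order_id": [1, 2], "sku": ["A", "B"]})
# 'sku' is unintentionally duplicated in the lookup table
prices = pd.DataFrame({"sku": ["A", "A", "B"], "price": [10.0, 12.0, 8.0]})

joined = orders.merge(prices, on="sku", how="left")
print(len(orders), "orders ->", len(joined), "rows after the join")   # 2 -> 3

# A quick check before joining: is the lookup key actually unique?
print("price key unique?", prices["sku"].is_unique)  # False -> deduplicate or aggregate first
```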
Advanced Deep Learning Methods for Healthcare
This course covers deep learning (DL) methods, healthcare data, and applications of DL methods to healthcare problems. The course includes activities such as video lectures, self-guided programming labs, homework assignments (both written and programming), and a large project.
The first phase of the course will include video lectures on different DL and health application topics, self-guided labs, and multiple homework assignments. In this phase, you will build up your knowledge and experience in developing practical deep learning models on healthcare data. The second phase of the course will be a large project that can lead to a technical report and a functioning demo of deep learning models addressing specific healthcare problems. We expect that the best projects could lead to scientific publications.
Using SAS Viya REST APIs with Python and R
SAS Viya is an in-memory distributed environment used to analyze big data quickly and efficiently. In this course, you’ll learn how to use the SAS Viya APIs to take control of SAS Cloud Analytic Services from a Jupyter Notebook using R or Python. You’ll learn to upload data into the cloud, analyze data, and create predictive models with SAS Viya using familiar open source functionality via the SWAT package -- the SAS Scripting Wrapper for Analytics Transfer. You’ll learn how to create both machine learning and deep learning models to tackle a variety of data sets and complex problems. And once SAS Viya has done the heavy lifting, you’ll be able to download data to the client and use native open source syntax to compare results and create graphics.
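As a hedged sketch of the Python side of that workflow, the snippet below connects to a CAS server with the SWAT package, uploads a small DataFrame, and asks for summary statistics; the host, port, and credentials are placeholders, and the exact connection details depend on your SAS Viya deployment.

```python
# Minimal SWAT sketch (placeholder host/port/credentials; needs a reachable SAS Viya / CAS server)
import pandas as pd
import swat

# Connect to SAS Cloud Analytic Services (CAS)
conn = swat.CAS("cas-server.example.com", 5570, "username", "password")

# Upload a small client-side DataFrame into the cloud
df = pd.DataFrame({"x": [1.0, 2.0, 3.0], "y": [2.1, 3.9, 6.2]})
tbl = conn.upload_frame(df, casout={"name": "toy", "replace": True})

# Run a server-side action, then pull results back to the client
print(tbl.summary())       # descriptive statistics computed in CAS
local = tbl.to_frame()     # download the data as a pandas DataFrame

conn.close()
```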
Bioconductor for Genomic Data Science
Learn to use tools from the Bioconductor project to perform analysis of genomic data. This is the fifth course in the Genomic Big Data Specialization from Johns Hopkins University.
TensorFlow for CNNs: Learn and Practice CNNs
This guided project course is part of the "Tensorflow for Convolutional Neural Networks" series, which builds on the second course of the DeepLearning.AI TensorFlow Developer Professional Certificate and helps learners reinforce their skills and build more projects with TensorFlow.
In this 2-hour long project-based course, you will learn the fundamentals of CNNs: their structure, their components, and how they work. You will get hands-on practice solving a real-world image classification task by creating, training, and testing a neural network with TensorFlow on real-world images, and you will get a bonus deep learning exercise implemented with TensorFlow. By the end of this project, you will have learned the fundamentals of convolutional neural networks and created a deep learning model with TensorFlow on a real-world dataset.
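A minimal sketch of the kind of model such a project builds: a small Keras CNN for image classification. The input shape, class count, and layer sizes below are illustrative assumptions, not the project's exact architecture.

```python
# Minimal Keras CNN sketch (illustrative shapes: 64x64 RGB images, 10 classes)
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(32, (3, 3), activation="relu", input_shape=(64, 64, 3)),
    tf.keras.layers.MaxPooling2D((2, 2)),
    tf.keras.layers.Conv2D(64, (3, 3), activation="relu"),
    tf.keras.layers.MaxPooling2D((2, 2)),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(10, activation="softmax"),
])

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()

# Training would then look like:
# model.fit(train_images, train_labels, epochs=5, validation_data=(val_images, val_labels))
```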
This class is for learners who want to learn how to work with convolutional neural networks and use Python to build them with TensorFlow, as well as for learners who are currently taking, or have already finished, a basic deep learning course and are looking for a practical deep learning project with TensorFlow. The project also gives learners further experience in creating and training convolutional neural networks, strengthens their TensorFlow skills, and can support their career goals when added to their portfolios.
Interpretable Machine Learning Applications: Part 4
In this 1-hour long guided project, you will learn how to use the "What-If" Tool (WIT) in the context of training and testing machine learning prediction models. In particular, you will learn how to: a) set up a machine learning application in Python using interactive Python notebooks in Google's Colab(oratory), a.k.a. "zero configuration", environment; b) import and prepare the data; c) train and test classifiers as prediction models; d) analyze the behavior of the trained prediction models with WIT for specific data points (on an individual basis); and e) analyze the behavior of the trained prediction models with WIT on a global basis, i.e., considering all test data.
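As a hedged illustration of steps d) and e), the sketch below wires a trained scikit-learn classifier into the What-If Tool in a notebook. The witwidget calls shown (WitConfigBuilder, set_custom_predict_fn, set_label_vocab, WitWidget) are used to the best of my understanding of that package, and the toy data and predict_fn helper are invented, so treat this as a sketch rather than the project's exact notebook code.

```python
# Sketch: exploring a trained classifier with the What-If Tool in a Colab/Jupyter notebook
# (toy data; assumes the witwidget package is installed, e.g. via `pip install witwidget`)
import numpy as np
from sklearn.linear_model import LogisticRegression
from witwidget.notebook.visualization import WitConfigBuilder, WitWidget

# Train a small prediction model on invented data
X = np.random.rand(200, 2)
y = (X[:, 0] + X[:, 1] > 1.0).astype(int)
clf = LogisticRegression().fit(X, y)

# WIT takes a list of examples (features plus label) and a function returning class probabilities
test_examples = [list(x) + [int(label)] for x, label in zip(X[:50], y[:50])]

def predict_fn(examples):
    feats = np.array(examples)[:, :2]          # drop the label column
    return clf.predict_proba(feats).tolist()

config = (WitConfigBuilder(test_examples, feature_names=["f1", "f2", "label"])
          .set_custom_predict_fn(predict_fn)
          .set_label_vocab(["negative", "positive"]))
WitWidget(config, height=600)                  # renders the interactive tool in the notebook
```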
Note: This course works best for learners who are based in the North America region. We’re currently working on providing the same experience in other regions.