
Data Science Courses - Page 68

Showing results 671-680 of 1407
Capstone: Connected Planning for Business Transformation
This course is the capstone project for learners completing the Connected Planning for Business Transformation specialization. In this project, you will develop a rationale and roadmap for Connected Planning implementation in your own organization. By examining the current state of planning in the organization, identifying specific areas where Connected Planning will deliver significant benefits, and addressing organizational obstacles, you will prepare yourself to advocate for Connected Planning, drive its adoption in your organization, and help transform your business. This course is presented by Anaplan, provider of a leading technology platform that is purpose-built for Connected Planning.
RStudio for Six Sigma - Process Capability
Welcome to RStudio for Six Sigma - Process Capability. This is a project-based course that should take under 2 hours to finish. Before diving into the project, please take a look at the course objectives and structure. By the end of this project, you will understand concepts such as DPU, DPO, and DPMO; learn to import a discrete defect data file and perform Process Capability Analysis; understand Throughput Yield and Rolled Throughput Yield (RTY) and calculate RTY for data imported from a file; and understand Z score, short- and long-term standard deviation, short- and long-term Z Bench, Cp, Cpk, Pp, and Ppk, and perform Process Capability Analysis for continuous data.
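The defect metrics named above follow from a few simple formulas: DPU = defects / units, DPO = defects / (units × opportunities), DPMO = DPO × 1,000,000, and RTY is the product of the per-step yields. A minimal sketch of these calculations in Python (the course itself works in R/RStudio), with made-up example numbers:

```python
# Illustrative sketch of the Six Sigma metrics named above; all figures are
# hypothetical example values, not course data.
import math

defects = 24          # hypothetical: total defects observed
units = 500           # hypothetical: units inspected
opportunities = 10    # hypothetical: defect opportunities per unit

dpu = defects / units                     # Defects Per Unit
dpo = defects / (units * opportunities)   # Defects Per Opportunity
dpmo = dpo * 1_000_000                    # Defects Per Million Opportunities

# Throughput yield for one step (Poisson approximation) and Rolled Throughput
# Yield across several steps (product of the per-step yields).
tpy = math.exp(-dpu)
step_yields = [0.98, 0.95, 0.99]          # hypothetical per-step yields
rty = math.prod(step_yields)

print(f"DPU={dpu:.3f}  DPO={dpo:.4f}  DPMO={dpmo:.0f}  TPY={tpy:.3f}  RTY={rty:.3f}")
```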
Advanced Data Science Capstone
Completers of this project have demonstrated a deep understanding of massively parallel data processing, data exploration and visualization, and advanced machine learning and deep learning, and have applied that knowledge in a real-world, practical use case in which they justify architectural decisions and show how the characteristics of different algorithms, frameworks, and technologies affect model performance and scalability. Please note: You are required to create a short video presentation at the end of the course. This is mandatory to pass. You don't need to share the video publicly.
Using Databases with Python
This course will introduce students to the basics of the Structured Query Language (SQL) as well as basic database design for storing data as part of a multi-step data gathering, analysis, and processing effort. The course will use SQLite3 as its database. We will also build web crawlers and multi-step data gathering and visualization processes. We will use the D3.js library to do basic data visualization. This course will cover Chapters 14-15 of the book “Python for Everybody”. To succeed in this course, you should be familiar with the material covered in Chapters 1-13 of the textbook and the first three courses in this specialization. This course covers Python 3.
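As a taste of the kind of database work the course covers, here is a minimal Python sqlite3 sketch; the database file, table, and column names are illustrative, not the course's own examples:

```python
# Minimal SQLite sketch: create a table, upsert a count, and read it back.
import sqlite3

conn = sqlite3.connect("example.sqlite")
cur = conn.cursor()

cur.execute("CREATE TABLE IF NOT EXISTS Counts (org TEXT UNIQUE, count INTEGER)")
cur.execute("INSERT OR IGNORE INTO Counts (org, count) VALUES (?, 0)", ("example.org",))
cur.execute("UPDATE Counts SET count = count + 1 WHERE org = ?", ("example.org",))
conn.commit()

for org, count in cur.execute("SELECT org, count FROM Counts ORDER BY count DESC"):
    print(org, count)

conn.close()
```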
Deploying a Pytorch Computer Vision Model API to Heroku
Welcome to the “Deploying a Pytorch Computer Vision Model API to Heroku” guided project. Computer vision is one of the prominent fields of AI, with numerous real-world applications including self-driving cars, image recognition, and object tracking. The ability to make models available for real-world use is an essential skill for anyone interested in AI engineering, especially in computer vision, and that is why this project exists. In this project, we will deploy a Flask REST API that uses one of PyTorch's pre-trained computer vision image classification models. This API will be able to receive an image, run inference with the pre-trained model, and return the predicted classification. This is an intermediate Python project for anyone interested in learning how to productionize PyTorch computer vision models in the real world via a REST API on Heroku. It requires prior knowledge of how to build and train PyTorch models (we will not be building or training models), how to use Git, and a fundamental understanding of REST APIs. Learners will also need a Heroku account and some familiarity with the Python Flask module and the Postman API Platform. At the end of this project, learners will have a publicly available API they can use to demonstrate their knowledge of deploying computer vision models.
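For orientation, here is a minimal sketch of such an endpoint using Flask and a pre-trained torchvision classifier; the route name, upload field, and response format are assumptions for illustration, not the project's exact code (older torchvision versions use pretrained=True instead of the weights argument):

```python
# Minimal Flask endpoint serving a pre-trained torchvision image classifier.
import io

import torch
from flask import Flask, jsonify, request
from PIL import Image
from torchvision import models, transforms

app = Flask(__name__)
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.eval()

# Standard ImageNet preprocessing for the pre-trained model.
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

@app.route("/predict", methods=["POST"])
def predict():
    # Expect the image as a multipart file upload named "file" (assumed name).
    image = Image.open(io.BytesIO(request.files["file"].read())).convert("RGB")
    batch = preprocess(image).unsqueeze(0)
    with torch.no_grad():
        class_id = int(model(batch).argmax(dim=1))
    return jsonify({"class_id": class_id})

if __name__ == "__main__":
    app.run()
```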
Create a Big Number KPI Dashboard in Tableau Public
Tableau is widely recognized as one of the premier data visualization software programs. For many years, access to the program was limited to those who purchased licenses. Recently, Tableau launched a public version that lets anyone create impressive data visualizations for free. Account members can also share and join projects to collaborate on work that can change the world. By the end of this project, you will learn how to create an easy-to-understand communication that focuses attention on the specific metrics that guide decisions. We will learn how to create an account, how to load data sets, and how to manipulate data to create a Big Number KPI Dashboard in Tableau Public. Note: This course works best for learners who are based in the North America region. We’re currently working on providing the same experience in other regions.
Applied Data Science Capstone
This is the final course in the IBM Data Science Professional Certificate as well as the Applied Data Science with Python Specialization. This capstone project course will give you the chance to practice the work that data scientists do in real life when working with datasets. In this course you will assume the role of a Data Scientist working for a startup intending to compete with SpaceX, and in the process follow the Data Science methodology involving data collection, data wrangling, exploratory data analysis, data visualization, model development, model evaluation, and reporting your results to stakeholders. You will be tasked with predicting whether the first stage of the SpaceX Falcon 9 rocket will land successfully. With the help of your Data Science findings and models, the competing startup you have been hired by can make more informed bids against SpaceX for a rocket launch. In this course, there will not be much new learning; instead, you’ll focus on hands-on work to demonstrate and apply what you have learned in previous courses. By successfully completing this Capstone you will have added a project to your data science and machine learning portfolio to showcase to employers.
Medical Diagnosis using Support Vector Machines
In this one-hour, project-based course, you will learn the basics of support vector machines using Python and scikit-learn. The dataset we will use comes from the National Institute of Diabetes and Digestive and Kidney Diseases and contains anonymized diagnostic measurements for a set of female patients. We will train a support vector machine to predict whether a new patient has diabetes based on those measurements. By the end of this course, you will be able to model an existing dataset with the goal of making predictions about new data. This is a first step on the path to mastering machine learning. Note: This course works best for learners who are based in the North America region. We’re currently working on providing the same experience in other regions.
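As a rough sketch of that workflow with scikit-learn, assuming a local copy of the dataset saved as diabetes.csv with the common Pima Indians Diabetes column layout (an Outcome label plus diagnostic measurement columns):

```python
# Minimal SVM classification sketch; file name and column names are assumptions.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

df = pd.read_csv("diabetes.csv")          # hypothetical local copy of the dataset
X = df.drop(columns=["Outcome"])          # diagnostic measurements
y = df["Outcome"]                         # 1 = diabetes, 0 = no diabetes

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# Feature scaling matters for SVMs; a pipeline keeps it tied to the model.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
clf.fit(X_train, y_train)
print("Test accuracy:", clf.score(X_test, y_test))
```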
Build a Business Architecture using AWS Organization
In this 2-hour-long project-based course, you will learn how to: • Create an 'AWS Organization' • Add member accounts to the 'Organization' • Attach a 'Service Control Policy' to member accounts • Enable 'CloudTrail' for your Organization. This course is a good fit for people who want to build their business in the AWS Cloud. If you want to manage a business architecture deployed in the AWS Cloud, you must be familiar with 'AWS Organizations'; with this powerful tool, you can manage multiple AWS accounts in a much easier way. The course is also a useful learning experience for aspiring AWS Solutions Architect Associates looking for real-world, hands-on experience to build confidence for Solutions Architect interviews, and for those preparing to pass the AWS Solutions Architect Associate certification exam.
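The course itself works through these steps in the AWS console; for readers who prefer code, a rough boto3 equivalent might look like the following, with placeholder account ID, e-mail address, policy ID, and bucket name:

```python
# Illustrative boto3 sketch of the console steps listed above; all IDs,
# names, and the e-mail address are placeholders.
import boto3

org = boto3.client("organizations")
ct = boto3.client("cloudtrail")

# Create the organization (ALL features enables Service Control Policies).
org.create_organization(FeatureSet="ALL")

# Create a member account in the organization.
org.create_account(Email="member@example.com", AccountName="Example Member")

# Attach a Service Control Policy to a member account (placeholder IDs).
org.attach_policy(PolicyId="p-examplepolicyid", TargetId="111122223333")

# Create an organization-wide CloudTrail trail (requires an existing S3 bucket).
ct.create_trail(Name="org-trail", S3BucketName="example-trail-bucket",
                IsOrganizationTrail=True)
```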
Predictive Modeling with Logistic Regression using SAS
This course covers predictive modeling using SAS/STAT software with emphasis on the LOGISTIC procedure. This course also discusses selecting variables and interactions, recoding categorical variables based on the smooth weight of evidence, assessing models, treating missing values, and using efficiency techniques for massive data sets. You learn to use logistic regression to model an individual's behavior as a function of known inputs, create effect plots and odds ratio plots, handle missing data values, and tackle multicollinearity in your predictors. You also learn to assess model performance and compare models.
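The course teaches these techniques in SAS with the LOGISTIC procedure; as a loose analogue only, a logistic fit and odds-ratio readout in Python with statsmodels (on made-up data) looks like this:

```python
# Rough Python/statsmodels analogue of a logistic model with odds ratios;
# the data and column names below are fabricated for illustration.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "income": rng.normal(50, 10, 500),
    "prior_purchases": rng.poisson(2, 500),
})
# Simulate a binary response driven by the two inputs.
logit = 0.05 * df["income"] + 0.4 * df["prior_purchases"] - 3.5
df["responded"] = rng.random(500) < 1 / (1 + np.exp(-logit))

X = sm.add_constant(df[["income", "prior_purchases"]])
model = sm.Logit(df["responded"].astype(int), X).fit(disp=0)

print(model.summary())
print("Odds ratios:\n", np.exp(model.params))  # odds multiplier per unit increase
```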