
Data Analysis Courses - Page 40

Statistical Inference
Statistical inference is the process of drawing conclusions about populations or scientific truths from data. There are many modes of performing inference, including statistical modeling, data-oriented strategies, and explicit use of designs and randomization in analyses. Furthermore, there are broad theories (frequentist, Bayesian, likelihood, design-based, …) and numerous complexities (missing data, observed and unobserved confounding, biases) in performing inference. A practitioner can often be left in a debilitating maze of techniques, philosophies, and nuance. This course presents the fundamentals of inference with a practical approach to getting things done. After taking this course, students will understand the broad directions of statistical inference and use this information to make informed choices when analyzing data.
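As a toy illustration of the frequentist mode of inference mentioned above, here is a minimal sketch in Python (the simulated data and the normal approximation are our assumptions, not the course's own example):

```python
import numpy as np

# Simulated sample standing in for real data (assumption: any numeric sample works here).
rng = np.random.default_rng(42)
sample = rng.normal(loc=5.0, scale=2.0, size=100)

# Frequentist 95% confidence interval for the population mean,
# using the normal approximation: mean +/- 1.96 * standard error.
mean = sample.mean()
se = sample.std(ddof=1) / np.sqrt(len(sample))
ci_low, ci_high = mean - 1.96 * se, mean + 1.96 * se
print(f"mean = {mean:.2f}, 95% CI = ({ci_low:.2f}, {ci_high:.2f})")
```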
Statistics For Data Science
This is a hands-on project to give you an overview of how to use statistics in data science.
Publishing Visualizations in R with Shiny and flexdashboard
Data visualization is a critical skill for anyone who routinely works with quantitative data - which is to say that data visualization is a tool that almost every worker needs today. One of the critical tools for data visualization today is the R statistical programming language. Especially in conjunction with the tidyverse software packages, R has become an extremely powerful and flexible platform for making figures, tables, and reproducible reports. However, R can be intimidating for first-time users, and there are so many resources online that they can be difficult to sort through without guidance. This course is the fourth in the Specialization "Data Visualization and Dashboarding in R." Learners will come to this course with a strong background in making visualizations in R using ggplot2. To build on those skills, this course covers creating interactive visualizations using Shiny, as well as combining different kinds of figures made in R into interactive dashboards.
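The course itself works in R, but the reactive input-to-output pattern Shiny teaches carries over directly; as a rough sketch of that pattern using the Python port of Shiny (the `shiny` package on PyPI - an illustration, not the course's own code):

```python
import matplotlib.pyplot as plt
import numpy as np
from shiny import App, render, ui

# UI: one slider input and one plot output.
app_ui = ui.page_fluid(
    ui.input_slider("n", "Sample size", min=10, max=1000, value=100),
    ui.output_plot("hist"),
)

# Server: the plot re-renders whenever the slider value changes.
def server(input, output, session):
    @render.plot
    def hist():
        data = np.random.default_rng(0).normal(size=input.n())
        fig, ax = plt.subplots()
        ax.hist(data, bins=30)
        return fig

app = App(app_ui, server)  # run with: shiny run app.py
```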
Debugging Applications for Site Reliability Engineers
This is a self-paced lab that takes place in the Google Cloud console. Cloud Debugger lets developers debug running code with live request data. In this lab you will set breakpoints and log points on the fly to examine what caused an application's performance issues.
Fundamentals of Machine Learning for Supply Chain
This course will teach you how to leverage the power of Python to understand complicated supply chain datasets. Even if you are not familiar with supply chain fundamentals, the rich datasets that we will use as a canvas will help orient you with several Pythonic tools and best practices for exploratory data analysis (EDA). As such, though all datasets are geared towards supply-chain-minded professionals, the lessons are easily generalizable to other use cases.
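A minimal sketch of the kind of first-pass EDA the course describes, using pandas (the file name and column names here are hypothetical):

```python
import pandas as pd

# Hypothetical supply chain dataset; the path and columns are illustrative only.
df = pd.read_csv("shipments.csv", parse_dates=["ship_date"])

# First-pass EDA: shape, dtypes, summary statistics, and missing values.
print(df.shape)
print(df.dtypes)
print(df.describe())
print(df.isna().sum())

# Example aggregate: average delivery delay per warehouse (hypothetical columns).
print(df.groupby("warehouse")["delay_days"].mean().sort_values())
```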
Google Cloud Pub/Sub: Qwik Start - Command Line
This is a self-paced lab that takes place in the Google Cloud console. This hands-on lab shows you how to publish and consume messages with a pull subscriber, using the Google Cloud command line. Watch the short video "Simplify Event Driven Processing with Cloud Pub/Sub": https://youtu.be/oKU2wbTXMTY
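The lab itself drives everything through the gcloud command line; as a rough sketch of the same publish-and-pull flow using the google-cloud-pubsub Python client instead (the project, topic, and subscription names are placeholders):

```python
from google.cloud import pubsub_v1

project_id = "my-project"    # placeholder
topic_id = "my-topic"        # placeholder
subscription_id = "my-sub"   # placeholder

# Publish a message to the topic.
publisher = pubsub_v1.PublisherClient()
topic_path = publisher.topic_path(project_id, topic_id)
future = publisher.publish(topic_path, b"Hello, Pub/Sub!")
print(f"Published message id: {future.result()}")

# Pull the message back with a pull subscriber and acknowledge it.
subscriber = pubsub_v1.SubscriberClient()
subscription_path = subscriber.subscription_path(project_id, subscription_id)
response = subscriber.pull(request={"subscription": subscription_path, "max_messages": 1})
for msg in response.received_messages:
    print(f"Received: {msg.message.data!r}")
    subscriber.acknowledge(request={"subscription": subscription_path, "ack_ids": [msg.ack_id]})
```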
Introduction to Node-red
By the end of this project, you will learn the basic concepts and fundamentals of Node-RED. Node-RED is an open-source, flow-based development tool for visual programming in JavaScript. It allows programmers to interconnect physical I/O, cloud-based systems, databases, and different APIs in many ways. It was originally designed to work with the Internet of Things, i.e. devices that interact with and control the real world, but as it has evolved, it has become useful for a range of applications. In this project, we are going to cover the key core nodes in Node-RED. In the final task of this project, we will create a weather alert application using Node-RED.
Covid-19 Cases Forecasting Using Fbprophet
Predictive models attempt to forecast future values based on historical data. In this hands-on project, we will analyze the transmission of the Covid-19 virus across the globe and train a time-series model (fbprophet) to project coronavirus-related cases in the United States.
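A minimal sketch of the fbprophet workflow the project uses; the CSV file and its contents are hypothetical, but the ds/y column convention and the fit/predict API are fbprophet's own:

```python
import pandas as pd
from fbprophet import Prophet  # newer releases import as: from prophet import Prophet

# fbprophet expects a dataframe with a 'ds' (date) column and a 'y' (value) column.
# Hypothetical file of daily case counts.
df = pd.read_csv("us_covid_cases.csv")  # columns: ds, y

model = Prophet()
model.fit(df)

# Forecast 30 days past the end of the historical data.
future = model.make_future_dataframe(periods=30)
forecast = model.predict(future)

# yhat is the point forecast; yhat_lower/yhat_upper bound the uncertainty interval.
print(forecast[["ds", "yhat", "yhat_lower", "yhat_upper"]].tail())
```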
Principles of fMRI 2
Functional Magnetic Resonance Imaging (fMRI) is the most widely used technique for investigating the living, functioning human brain as people perform tasks and experience mental states. It is a convergence point for work from many disciplines: psychologists, statisticians, physicists, computer scientists, neuroscientists, medical researchers, behavioral scientists, engineers, public health researchers, biologists, and others are coming together to advance our understanding of the human mind and brain. This course covers the analysis of fMRI data. It is a continuation of the course “Principles of fMRI, Part 1”.
AI Workflow: Business Priorities and Data Ingestion
This is the first course of a six-part specialization. You are STRONGLY encouraged to complete these courses in order, as they are not independent courses but part of a workflow where each course builds on the previous ones. This first course in the IBM AI Enterprise Workflow Certification specialization introduces you to the scope of the specialization and its prerequisites. Specifically, the courses in this specialization are meant for practicing data scientists who are knowledgeable about probability, statistics, linear algebra, and Python tooling for data science and machine learning. A hypothetical streaming media company will be introduced as your new client. You will be introduced to the concept of design thinking, IBM's framework for organizing large enterprise AI projects. You will also be introduced to the basics of scientific thinking, because the quality that distinguishes a seasoned data scientist from a beginner is creative, scientific thinking. Finally, you will start your work for the hypothetical media company by understanding the data they have, and by building a data ingestion pipeline using Python and Jupyter notebooks.

By the end of this course you should be able to:
1. Know the advantages of carrying out data science using a structured process
2. Describe how the stages of design thinking correspond to the AI enterprise workflow
3. Discuss several strategies used to prioritize business opportunities
4. Explain where data science and data engineering have the most overlap in the AI workflow
5. Explain the purpose of testing in data ingestion
6. Describe the use case for sparse matrices as a target destination for data ingestion
7. Know the initial steps that can be taken towards automation of data ingestion pipelines

Who should take this course? This course targets existing data science practitioners who have expertise building machine learning models and who want to deepen their skills in building and deploying AI in large enterprises. If you are an aspiring data scientist, this course is NOT for you, as you need real-world expertise to benefit from the content of these courses.

What skills should you have? It is assumed you have a solid understanding of the following topics prior to starting this course: a fundamental understanding of linear algebra; sampling, probability theory, and probability distributions; descriptive and inferential statistical concepts; a general understanding of machine learning techniques and best practices; a practiced understanding of Python and the packages commonly used in data science (NumPy, Pandas, matplotlib, scikit-learn); familiarity with IBM Watson Studio; and familiarity with the design thinking process.
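A minimal sketch of the kind of ingestion step this course builds, ending in a sparse matrix as the description mentions (the file name, column names, and the pivot chosen here are hypothetical):

```python
import pandas as pd
from scipy.sparse import csr_matrix

def ingest(path: str) -> csr_matrix:
    """Load raw usage logs, clean them lightly, and pivot into a sparse
    user-by-item matrix. Path and column names are illustrative only."""
    df = pd.read_csv(path)
    df = df.dropna(subset=["user_id", "item_id"])

    # Pivot to a (mostly zero) user x item count table, then store it sparsely.
    counts = pd.crosstab(df["user_id"], df["item_id"])
    return csr_matrix(counts.values)

matrix = ingest("streaming_logs.csv")
print(matrix.shape, f"{matrix.nnz} nonzero entries")
```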