
Data Analysis Courses - Page 35

Logistic Regression with NumPy and Python
Welcome to this project-based course on Logistic Regression with NumPy and Python. In this project, you will do all the machine learning without using any of the popular machine learning libraries such as scikit-learn and statsmodels. The aim of this project is to implement all the machinery of the learning algorithm yourself, including gradient descent, the cost function, and the logistic regression model, so you have a deeper understanding of the fundamentals. By the time you complete this project, you will be able to build a logistic regression model using Python and NumPy, conduct basic exploratory data analysis, and implement gradient descent from scratch. The prerequisites for this project are prior programming experience in Python and a basic understanding of machine learning theory. This course runs on Coursera's hands-on project platform, Rhyme, where you work on projects directly in your browser. You will get instant access to a pre-configured cloud desktop containing all of the software and data you need for the project, with everything already set up so you can focus on learning. For this project, you’ll get instant access to a cloud desktop with Python, Jupyter, NumPy, and Seaborn pre-installed.
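For a sense of what building this from scratch looks like, here is a minimal NumPy sketch of logistic regression trained by gradient descent; the function names, learning rate, and toy data are illustrative and not taken from the course materials.

```python
import numpy as np

def sigmoid(z):
    # Logistic function mapping real values to (0, 1)
    return 1.0 / (1.0 + np.exp(-z))

def cost(X, y, w, b):
    # Binary cross-entropy cost
    p = sigmoid(X @ w + b)
    eps = 1e-12  # avoid log(0)
    return -np.mean(y * np.log(p + eps) + (1 - y) * np.log(1 - p + eps))

def fit(X, y, lr=0.1, epochs=1000):
    # Batch gradient descent on the cross-entropy cost
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(epochs):
        p = sigmoid(X @ w + b)
        grad_w = X.T @ (p - y) / len(y)
        grad_b = np.mean(p - y)
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

# Tiny synthetic example
X = np.array([[0.5], [1.5], [2.5], [3.5]])
y = np.array([0, 0, 1, 1])
w, b = fit(X, y)
print(cost(X, y, w, b), sigmoid(X @ w + b).round(2))
```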
Computational Thinking with JavaScript 2: Model & Analyse
This is the second course in a sequence of four courses that develops essential 21st century computational thinking (CT) skills using the popular JavaScript programming language. At the end of this second course you will: know a framework for CT to help you model the real world using abstract data structures; have developed CT skills so that you can perform common data analytics tasks; be able to read and write programs in JavaScript that involve processing, analysing and visualizing data, using a specialised library; and be able to post your creations on the web to share your code with others. This course is suitable for learners who have taken the first course in this specialization, 'Computational Thinking in JavaScript 1: Draw and Animate', or for those who have basic JavaScript skills and want to learn about simple data analytics.
Beginner's guide to AWS Identity and Access Management (IAM)
In this 2-hour long project-based course, you will learn how to create IAM identities using the AWS Console and CLI, enable cross-account access, and create an identity provider in AWS. By the end of this course, you will have a good understanding of what AWS IAM is and what you can do with it.
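As an illustration only, the sketch below performs comparable steps with the boto3 Python SDK rather than the AWS Console or CLI used in the course; the user name, role name, and account ID are placeholders.

```python
import boto3

iam = boto3.client("iam")

# Create an IAM user (identity); the name is a placeholder
iam.create_user(UserName="example-analyst")

# Attach an AWS-managed read-only policy to the new user
iam.attach_user_policy(
    UserName="example-analyst",
    PolicyArn="arn:aws:iam::aws:policy/ReadOnlyAccess",
)

# Create a role that a second (trusted) account can assume --
# the basis of cross-account access; 111122223333 is a placeholder account ID
trust_policy = """{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Principal": {"AWS": "arn:aws:iam::111122223333:root"},
    "Action": "sts:AssumeRole"
  }]
}"""
iam.create_role(RoleName="example-cross-account-role",
                AssumeRolePolicyDocument=trust_policy)
```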
Measurement – Turning Concepts into Data
This course provides a framework for how analysts can create and evaluate quantitative measures. Consider the many tricky concepts that are often of interest to analysts, such as health, educational attainment and trust in government. This course will explore various approaches for quantifying these concepts. The course begins with an overview of the different levels of measurement and ways to transform variables. We’ll then discuss how to construct and build a measurement model. We’ll next examine surveys, as they are one of the most frequently used measurement tools. As part of this discussion, we’ll cover survey sampling, design and evaluation. Lastly, we’ll consider different ways to judge the quality of a measure, such as by its level of reliability or validity. By the end of this course, you should be able to develop and critically assess measures for concepts worth studying. After all, a good analysis is built on good measures.
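As one concrete example of judging a measure's reliability, the sketch below computes Cronbach's alpha for a hypothetical multi-item scale in Python; the data and scale are made up for illustration, and the course itself may use other statistics or tools.

```python
import numpy as np

def cronbach_alpha(items):
    # items: respondents x items matrix of scores
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    # alpha = k/(k-1) * (1 - sum of item variances / variance of total score)
    return (k / (k - 1)) * (1 - item_vars / total_var)

# Hypothetical 5-item "trust in government" scale, scored 1-5 by six respondents
scores = [
    [4, 4, 5, 3, 4],
    [2, 3, 2, 2, 3],
    [5, 5, 4, 4, 5],
    [1, 2, 1, 2, 1],
    [3, 3, 3, 4, 3],
    [4, 5, 4, 4, 4],
]
print(round(cronbach_alpha(scores), 2))
```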
Creating Models using Smartpls
In this 1-hour long project-based course, you will learn how to create path models using Smartpls. We will take a project on changing behavior and check whether attitudes or subjective norms have the greater impact on behavior. We will learn how to launch this new software, create the model, and run it. We will then show you how to interpret the results. Note: This course works best for learners who are based in the North America region. We’re currently working on providing the same experience in other regions.
Getting Started with Data Analytics on AWS
Learn how to go from raw data to meaningful insights using AWS in this one-week course. Throughout the course, you’ll learn the fundamentals of data analytics from AWS experts. Start off with an overview of the different types of data analytics techniques (descriptive, diagnostic, predictive, and prescriptive) before diving deeper into descriptive analytics. Then, apply your knowledge with a guided project that makes use of a simple but powerful dataset available by default in every AWS account: the logs from AWS CloudTrail. The CloudTrail service enables governance, compliance, operational auditing, and risk auditing of your AWS account. Through the project you’ll also get an introduction to Amazon Athena and Amazon QuickSight, and you’ll learn how to build a basic security dashboard as a simple but practical way of applying your newfound data analytics knowledge.
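As a rough illustration of querying CloudTrail logs with Athena, the Python (boto3) sketch below runs a simple aggregate query; the database name, table name, and S3 output location are placeholders rather than values from the course, which works through the Athena console.

```python
import time
import boto3

athena = boto3.client("athena")

# Count CloudTrail events by name -- a typical first query for a security dashboard.
# "cloudtrail_logs", "default", and the output bucket are placeholders.
query = """
SELECT eventname, COUNT(*) AS calls
FROM cloudtrail_logs
GROUP BY eventname
ORDER BY calls DESC
LIMIT 10
"""

run = athena.start_query_execution(
    QueryString=query,
    QueryExecutionContext={"Database": "default"},
    ResultConfiguration={"OutputLocation": "s3://example-athena-results/"},
)

# Poll until the query finishes, then fetch the result rows
while True:
    status = athena.get_query_execution(QueryExecutionId=run["QueryExecutionId"])
    state = status["QueryExecution"]["Status"]["State"]
    if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
        break
    time.sleep(1)

if state == "SUCCEEDED":
    rows = athena.get_query_results(QueryExecutionId=run["QueryExecutionId"])
    for row in rows["ResultSet"]["Rows"]:
        print([col.get("VarCharValue") for col in row["Data"]])
```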
Reverse and complement nucleic acid sequences (DNA, RNA) using Python
In this 1-hour long project-based course, you will learn the basic building blocks of the Python language and how to develop a Python program that constructs the reverse, complement, and reverse-complement of nucleic acid sequences (DNA, RNA). You will also make your code read a file containing a long DNA sequence, working with one of the complete genomes of the novel coronavirus.
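A minimal Python sketch of the core task might look like the following; the example sequence and file handling are illustrative rather than the course's own code.

```python
# Translation tables for DNA and RNA complements
COMPLEMENT_DNA = str.maketrans("ACGTacgt", "TGCAtgca")
COMPLEMENT_RNA = str.maketrans("ACGUacgu", "UGCAugca")

def reverse(seq):
    # Reverse the sequence
    return seq[::-1]

def complement(seq, table=COMPLEMENT_DNA):
    # Replace each base with its complementary base
    return seq.translate(table)

def reverse_complement(seq, table=COMPLEMENT_DNA):
    # Complement, then reverse
    return complement(seq, table)[::-1]

dna = "ATGCGTACCTTAG"  # made-up fragment, not the coronavirus genome
print(reverse(dna))
print(complement(dna))
print(reverse_complement(dna))

# Reading a long sequence from a plain-text file (path is a placeholder):
# with open("sequence.txt") as fh:
#     genome = "".join(line.strip() for line in fh)
```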
Framework for Data Collection and Analysis
This course will provide you with an overview of existing data products and a good understanding of the data collection landscape. With the help of various examples you will learn how to identify which data sources likely match your research question, how to turn your research question into measurable pieces, and how to think about an analysis plan. Furthermore, this course will provide you with a general framework that allows you not only to understand each step required for a successful data collection and analysis, but also to identify errors associated with different data sources. You will learn some metrics to quantify each potential error, so you will have tools at hand to describe the quality of a data source. Finally, we will introduce different large-scale data collection efforts by private industry and government agencies, and review the learned concepts through these examples. This course is suitable for beginners as well as those who know about one particular data source, but not others, and are looking for a general framework to evaluate data products.
Analysing Covid-19 Geospatial data with Python
In this one-hour guided project, you will learn how to process geospatial data using Python. We will go through different geoprocessing tasks, including how to create GeoDataFrames from CSV files and perform a spatial join.
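A minimal sketch of those two tasks with GeoPandas might look like the following; the file names and column names are placeholders, not the course's dataset.

```python
import pandas as pd
import geopandas as gpd

# Build a GeoDataFrame of case locations from a CSV with longitude/latitude columns
cases = pd.read_csv("covid_cases.csv")
cases_gdf = gpd.GeoDataFrame(
    cases,
    geometry=gpd.points_from_xy(cases["longitude"], cases["latitude"]),
    crs="EPSG:4326",
)

# Polygon boundaries (e.g. admin regions) read from a GeoJSON or shapefile
regions = gpd.read_file("regions.geojson").to_crs("EPSG:4326")

# Spatial join: attach each case point to the region polygon that contains it
# (geopandas >= 0.10; older versions use op= instead of predicate=)
joined = gpd.sjoin(cases_gdf, regions, how="inner", predicate="within")
print(joined.head())
```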
What are the Chances? Probability and Uncertainty in Statistics
This course focuses on how analysts can measure and describe the confidence they have in their findings. The course begins with an overview of the key probability rules and concepts that govern the calculation of uncertainty measures. We’ll then apply these ideas to variables (which are the building blocks of statistics) and their associated probability distributions. The second half of the course will delve into the computation and interpretation of uncertainty. We’ll discuss how to conduct a hypothesis test using both test statistics and confidence intervals. Finally, we’ll consider the role of hypothesis testing in a regression context, including what we can and cannot learn from the statistical significance of a coefficient. By the end of the course, you should be able to discuss statistical findings in probabilistic terms and interpret the uncertainty of a particular estimate.
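As a small illustration of reporting uncertainty in code, the Python sketch below runs a one-sample t-test and builds a 95% confidence interval with SciPy on simulated data; the benchmark value and sample are hypothetical and not from the course.

```python
import numpy as np
from scipy import stats

# Hypothetical sample: did a group score above a benchmark mean of 50?
rng = np.random.default_rng(0)
sample = rng.normal(loc=52, scale=10, size=40)

# One-sample t-test against the benchmark
t_stat, p_value = stats.ttest_1samp(sample, popmean=50)

# 95% confidence interval for the sample mean (t distribution, df = n - 1)
mean = sample.mean()
sem = stats.sem(sample)
ci_low, ci_high = stats.t.interval(0.95, len(sample) - 1, loc=mean, scale=sem)

print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
print(f"95% CI for the mean: ({ci_low:.2f}, {ci_high:.2f})")
```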