Data Analysis Courses - Page 93
JSON and Natural Language Processing in PostgreSQL
Within this course, you’ll learn how PostgreSQL creates and uses inverted indexes for JSON and natural language content. We will draw on various sources of data for our databases, including spidering an online API and storing the retrieved data in a JSON column in PostgreSQL. Students will explore how full-text inverted indexes are structured, build their own inverted indexes, and then make use of PostgreSQL’s built-in support for full-text indexes.
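As a rough illustration of the workflow described above, the sketch below stores a JSON document in a JSONB column and queries it through a GIN full-text index. It assumes a local PostgreSQL server and the psycopg2 driver; the connection string, table, and column names are invented for illustration and are not taken from the course.

```python
# Minimal sketch: store an API document in a JSONB column and query it
# with PostgreSQL's built-in full-text search backed by a GIN inverted index.
# Assumes a local PostgreSQL server and psycopg2; names below are illustrative.
import json
import psycopg2

conn = psycopg2.connect("dbname=example user=example")  # hypothetical DSN
cur = conn.cursor()

cur.execute("""
    CREATE TABLE IF NOT EXISTS docs (
        id SERIAL PRIMARY KEY,
        body JSONB
    )
""")

# A GIN index over text extracted from the JSON plays the role of the
# full-text inverted index discussed in the course.
cur.execute("""
    CREATE INDEX IF NOT EXISTS docs_gin
    ON docs USING GIN (to_tsvector('english', body->>'text'))
""")

doc = {"title": "Inverted indexes",
       "text": "PostgreSQL builds inverted indexes for full text search"}
cur.execute("INSERT INTO docs (body) VALUES (%s)", (json.dumps(doc),))
conn.commit()

# Full-text query that can be answered through the inverted index.
cur.execute("""
    SELECT body->>'title'
    FROM docs
    WHERE to_tsvector('english', body->>'text')
          @@ to_tsquery('english', 'inverted & index')
""")
print(cur.fetchall())

cur.close()
conn.close()
```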
The Structured Query Language (SQL)
In this course you will learn all about the Structured Query Language ("SQL"). We will review the origins of the language and its conceptual foundations, but primarily we will focus on learning the standard SQL commands, their syntax, and how to use these commands to analyze the data within a relational database. Our scope includes not only the SELECT statement for retrieving data and creating analytical reports, but also the DDL ("Data Definition Language") and DML ("Data Manipulation Language") commands necessary to create and maintain database objects.
The Structured Query Language (SQL) can be taken for academic credit as part of CU Boulder’s Master of Science in Data Science (MS-DS) degree offered on the Coursera platform. The MS-DS is an interdisciplinary degree that brings together faculty from CU Boulder’s departments of Applied Mathematics, Computer Science, Information Science, and others. With performance-based admissions and no application process, the MS-DS is ideal for individuals with a broad range of undergraduate education and/or professional experience in computer science, information science, mathematics, and statistics. Learn more about the MS-DS program at https://www.coursera.org/degrees/master-of-science-data-science-boulder.
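For a taste of the kinds of commands the course covers, here is a minimal, self-contained sketch run through Python's built-in sqlite3 module; the table and rows are invented, and the exact syntax of DDL and DML statements varies slightly between SQL dialects.

```python
# A self-contained taste of DDL, DML, and SELECT statements, run through
# Python's built-in sqlite3 module. The table and rows are invented for
# illustration; production SQL dialects differ in minor details.
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# DDL: define a table.
cur.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, customer TEXT, amount REAL)")

# DML: insert and update rows.
cur.executemany(
    "INSERT INTO orders (customer, amount) VALUES (?, ?)",
    [("Alice", 120.0), ("Bob", 75.5), ("Alice", 42.0)],
)
cur.execute("UPDATE orders SET amount = amount * 1.1 WHERE customer = 'Bob'")

# SELECT: a simple analytical report, one row per customer with total spend.
cur.execute(
    "SELECT customer, SUM(amount) AS total FROM orders GROUP BY customer ORDER BY total DESC"
)
for row in cur.fetchall():
    print(row)

conn.close()
```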
Data Literacy – What is it and why does it matter?
You might already know that data is not neutral. Our values and assumptions are influenced by the data surrounding us - the data we create, the data we collect, and the data we share with each other. Economic needs, social structures, or algorithmic biases can have profound consequences for the way we collect and use data. Most often, the result is an increase in inequity in the world. Data also changes the way we interact. It shapes our thoughts, our feelings, our preferences and actions. It determines what we have access to, and what we do not. It enables global dissemination of best practices and life-improving technologies, as well as the spread of mistrust and radicalization. This is why data literacy matters.
A key principle of data literacy is to have a heightened awareness of the risks and opportunities of data-driven technologies and to stay up-to-date with their consequences. In this course, we view data literacy from three perspectives: data in personal life, data in society, and data in knowledge production. The aim is threefold:
1. To expand your skills and abilities to identify, understand, and interpret the many roles of digital technologies in daily life.
2. To enable you to discern when data-driven technologies add value to people’s lives, and when they exploit human vulnerabilities or deplete the commons.
3. To cultivate a deeper understanding of how data-driven technologies are shaping knowledge production and how they may be realigned with real human needs and values.
The course is funded by Erasmus+ and developed by the 4EU+ University Alliance, including Charles University (Univerzita Karlova), Sorbonne University (Sorbonne Université), the University of Copenhagen (Københavns Universitet), the University of Milan (Università degli studi di Milano), and the University of Warsaw (Uniwersytet Warszawski).
Getting Started with Liquid to Customize the Looker User Experience
This is a Google Cloud Self-Paced Lab. In this lab you will use Liquid to customize dimensions and measures in Looker.
Getting started with Azure Data Explorer
In this 1-hour-long project-based course, you will learn to create an Azure Data Explorer (ADX) cluster in the Azure portal. You will learn to create databases and tables and perform data ingestion using commands as well as the one-click ingestion method. You will also learn to manage scaling in Azure Data Explorer and manage database permissions. You will conclude by learning how to query data in Azure Data Explorer using the Kusto Query Language.
You should have an active Azure account.
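As a rough sketch of what querying ADX with the Kusto Query Language can look like from Python (not part of the course materials), the snippet below assumes the azure-kusto-data package is installed and that you are signed in with the Azure CLI; the cluster URI, database, and table names are placeholders.

```python
# Rough sketch of running a Kusto Query Language (KQL) query against an
# Azure Data Explorer cluster from Python. Assumes azure-kusto-data is
# installed and Azure CLI sign-in; cluster, database, and table names
# below are placeholders, not values from the course.
from azure.kusto.data import KustoClient, KustoConnectionStringBuilder

cluster_uri = "https://myadxcluster.westeurope.kusto.windows.net"  # placeholder
kcsb = KustoConnectionStringBuilder.with_az_cli_authentication(cluster_uri)
client = KustoClient(kcsb)

# A small KQL query: aggregate rows by category and keep the top five groups.
query = "SampleTable | summarize count() by Category | top 5 by count_"
response = client.execute("SampleDatabase", query)

for row in response.primary_results[0]:
    print(row.to_dict())
```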
Handling Missing Values in R using tidyr
Missing data can be a serious headache for data analysts and scientists. This project-based course, Handling Missing Values in R using tidyr, is for people who are learning R and who are looking for useful ways to clean and manipulate data in R. We will not only talk about missing values; we will also spend a great deal of our time working hands-on with missing-value cases using the tidyr package. Rest assured that you will learn a lot of useful techniques here.
By the end of this 2-hour-long project, you will be able to calculate the proportion of missing values in the data and select the columns that contain missing values. You will also be able to use the drop_na(), replace_na(), and fill() functions in the tidyr package to handle missing values. By extension, we will learn how to chain all of these operations together using the pipe operator.
This project-based course is an intermediate-level course in R. Therefore, to complete this project, it is required that you have prior experience with R. I recommend that you complete the projects titled “Getting Started with R” and “Data Manipulation with dplyr in R” before you take this current project. These introductory projects will provide all the foundation necessary to complete this one. However, if you are already comfortable with R, please join me on this wonderful ride! Let’s get our hands dirty!
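The project itself works in R with tidyr; purely as a cross-reference, here is a rough analogue of the same steps sketched in Python with pandas (the data frame and column names are invented).

```python
# The course works in R with tidyr; this is only a rough pandas analogue
# of the same steps: measure missingness, then drop, replace, or fill
# missing values. The data frame and column names are invented.
import numpy as np
import pandas as pd

df = pd.DataFrame({
    "city":  ["Lagos", "Abuja", None, "Kano"],
    "sales": [120.0, np.nan, 95.0, np.nan],
})

# Proportion of missing values per column.
print(df.isna().mean())

# Columns that contain at least one missing value.
print(df.columns[df.isna().any()].tolist())

# drop_na()    -> dropna(): remove rows with any missing value.
# replace_na() -> fillna(): replace missing values with a given value.
# fill()       -> ffill(): carry the last observation forward.
cleaned = (
    df.assign(city=df["city"].ffill())          # like tidyr::fill(city)
      .fillna({"sales": df["sales"].mean()})    # like tidyr::replace_na()
)
print(cleaned)
```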
Data Manipulation and Management using MySQL Workbench
By the end of this project, you will be able to manage data efficiently in a specific database using MySQL Workbench. You will be able to select data from the tables in your database and use keywords in SELECT statements such as LIMIT and TOP. You will be able to update the data and filter it using WHERE clauses, applying conditions in different ways with keywords such as LIKE, AND, OR, and BETWEEN. Moreover, you will be able to apply aggregate functions in your SELECT statements along with GROUP BY, and finally, you will be able to join tables with each other using three different joins: LEFT, INNER, and RIGHT. A database management system stores, organizes, and manages a large amount of information within a single software application. Using such a system increases the efficiency of business operations and reduces overall costs. The world of data is constantly changing and evolving every second. This in turn has created a completely new dimension of growth and challenges for companies around the globe like Google, Facebook, or Amazon.
This guided project is for beginners in the field of data management and databases. It provides you with the basics of managing a database and its data, and equips you with knowledge of the first steps in data management and data extraction.
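To make the SQL patterns above concrete without a MySQL server, here is a self-contained sketch that runs the same kinds of statements (LIMIT, LIKE, BETWEEN, GROUP BY, joins) through Python's built-in sqlite3 module; the tables and data are invented, and minor syntax details differ from MySQL.

```python
# The project runs these statements in MySQL Workbench against a MySQL
# server; this self-contained sketch uses Python's sqlite3 to show the
# same SQL patterns (LIMIT, LIKE/BETWEEN, GROUP BY, joins). The tables
# and data are invented for illustration.
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
    CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT, country TEXT);
    CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER, amount REAL);
    INSERT INTO customers VALUES (1, 'Amina', 'Egypt'), (2, 'Bruno', 'Brazil'), (3, 'Chen', 'China');
    INSERT INTO orders VALUES (10, 1, 250.0), (11, 1, 80.0), (12, 2, 40.0);
""")

# Filtering with WHERE, LIKE, BETWEEN, OR, and LIMIT.
cur.execute("""
    SELECT name FROM customers
    WHERE country LIKE 'B%' OR id BETWEEN 1 AND 2
    LIMIT 5
""")
print(cur.fetchall())

# Aggregation with GROUP BY plus a LEFT JOIN (Chen has no orders, so the
# LEFT JOIN keeps that customer; an INNER JOIN would drop the row).
cur.execute("""
    SELECT c.name, COUNT(o.id) AS n_orders, COALESCE(SUM(o.amount), 0) AS total
    FROM customers c
    LEFT JOIN orders o ON o.customer_id = c.id
    GROUP BY c.name
    ORDER BY total DESC
""")
print(cur.fetchall())

conn.close()
```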
Note: This course works best for learners who are based in the North America region. We’re currently working on providing the same experience in other regions.
Julia for Beginners in Data Science
This guided project is for those who want to learn how to use Julia for data cleaning as well as exploratory analysis. The project covers the syntax of Julia from a data science perspective, so you will not build a finished application during this project.
While you are watching me code, you will get a cloud desktop with all the required software pre-installed. This will allow you to code along with me. After all, we learn best with active, hands-on learning.
Special Features:
1) Work with 2 real-world datasets.
2) Detailed variable description booklets are provided in the GitHub repository for this guided project.
3) This project provides challenges with solutions to encourage you to practice.
4) The real-world applications of each function are explained.
5) Best practices and tips are provided to ensure that you learn how to use Julia efficiently.
6) You get a copy of the Jupyter notebook that you create, which acts as a handy reference guide.
Please note that the version of Julia used is 1.0.4.
Note: This project works best for learners who are based in the North America region. We’re currently working on providing the same experience in other regions.
How To Create Effective Metrics
By the end of this project, you will be able to create effective metrics for a business. You will learn what metrics are, how to create benchmarks, and how to build a system for sharing and evaluating metrics. Excel is a great tool to use if you have plans to adopt a data-driven approach to making business decisions. We will be sharpening our data analysis tools in Excel during this project.
This is a great tool to use if you have plans to use data, analytics, and/or metrics to improve your business functions and decision-making.
Familiarity with basic business statistics and terms is helpful, but not required.
Getting Started in Google Analytics
In this project, you will learn how to connect your website to Google Analytics. You will be able to use Google Analytics to understand how your website is performing. You will become familiar with the Google Analytics interface and the standard reports to better understand your website audience. You will learn how to interpret this data to improve your website performance and effectiveness.
Note: This course works best for learners who are based in the North America region. We’re currently working on providing the same experience in other regions.