
Data Science Courses - Page 130

Data Literacy – What is it and why does it matter?
You might already know that data is not neutral. Our values and assumptions are influenced by the data surrounding us: the data we create, the data we collect, and the data we share with each other. Economic needs, social structures, or algorithmic biases can have profound consequences for the way we collect and use data. Most often, the result is an increase in inequity in the world. Data also changes the way we interact. It shapes our thoughts, our feelings, our preferences, and our actions. It determines what we have access to and what we do not. It enables the global dissemination of best practices and life-improving technologies, as well as the spread of mistrust and radicalization. This is why data literacy matters. A key principle of data literacy is to maintain a heightened awareness of the risks and opportunities of data-driven technologies and to stay up to date with their consequences. In this course, we view data literacy from three perspectives: data in personal life, data in society, and data in knowledge production. The aim is threefold:
1. To expand your skills and abilities to identify, understand, and interpret the many roles of digital technologies in daily life.
2. To enable you to discern when data-driven technologies add value to people’s lives, and when they exploit human vulnerabilities or deplete the commons.
3. To cultivate a deeper understanding of how data-driven technologies are shaping knowledge production and how they may be realigned with real human needs and values.
The course is funded by Erasmus+ and developed by the 4EU+ University Alliance, including Charles University (Univerzita Karlova), Sorbonne University (Sorbonne Université), the University of Copenhagen (Københavns Universitet), the University of Milan (Università degli studi di Milano), and the University of Warsaw (Uniwersytet Warszawski).
Getting Started with Liquid to Customize the Looker User Experience
This is a Google Cloud Self-Paced Lab. In this lab, you will use Liquid to customize dimensions and measures in Looker.
Getting started with Azure Data Explorer
In this 1-hour-long project-based course, you will learn to create an Azure Data Explorer (ADX) cluster in the Azure portal. You will learn to create databases and tables and to perform data ingestion using commands as well as the one-click ingestion method. You will also learn to manage scaling in Azure Data Explorer and to manage database permissions. You will conclude by learning how to query data in Azure Data Explorer using the Kusto Query Language (KQL). You should have an active Azure account.
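For a sense of what querying an ADX cluster can look like from code, here is a minimal, hedged Python sketch using the azure-kusto-data client; the cluster URI, database name, and the StormEvents sample table are placeholders and assumptions, not part of the course materials, which work in the Azure portal.

```python
# Minimal sketch: run a Kusto Query Language (KQL) query against an ADX cluster
# from Python. Requires the azure-kusto-data package and an Azure CLI login.
# The cluster URI, database, and table names below are hypothetical placeholders.
from azure.kusto.data import KustoClient, KustoConnectionStringBuilder

cluster_uri = "https://<your-cluster>.<region>.kusto.windows.net"  # placeholder
kcsb = KustoConnectionStringBuilder.with_az_cli_authentication(cluster_uri)
client = KustoClient(kcsb)

# A simple KQL query: take the first 10 rows of a table.
response = client.execute("SampleDatabase", "StormEvents | take 10")
for row in response.primary_results[0]:
    print(row)
```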
Handling Missing Values in R using tidyr
Missing data can be a serious headache for data analysts and scientists. This project-based course, Handling Missing Values in R using tidyr, is for people who are learning R and who seek useful ways to clean and manipulate data in R. We will not only talk about missing values; we will spend a great deal of our time hands-on, handling missing-value cases using the tidyr package. Rest assured that you will learn a great deal here. By the end of this 2-hour-long project, you will be able to calculate the proportion of missing values in the data and select columns that have missing values. You will also be able to use the drop_na(), replace_na(), and fill() functions in the tidyr package to handle missing values. By extension, we will learn how to chain all of these operations using the pipe operator. This is an intermediate-level project in R, so prior experience with R is required. I recommend that you complete the projects titled “Getting Started with R” and “Data Manipulation with dplyr in R” before you take this one; these introductory projects provide all the foundation you need. However, if you are comfortable with using R, please join me on this wonderful ride! Let’s get our hands dirty!
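The course itself works in R with tidyr, but as a rough illustration of the same three operations (dropping, replacing, and filling missing values), here is a hedged Python/pandas analogue; the toy data frame and column names are invented for the example and are not taken from the course.

```python
# Rough pandas analogue of the tidyr workflow described above (the course uses R).
# The toy data and column names are invented for illustration.
import pandas as pd
import numpy as np

df = pd.DataFrame({
    "age":   [34, np.nan, 29, np.nan],
    "city":  ["Oslo", "Lagos", None, "Pune"],
    "score": [88, 91, np.nan, 75],
})

print(df.isna().mean())              # proportion of missing values per column
print(df.columns[df.isna().any()])   # columns that contain missing values

cleaned = (
    df
    .dropna(subset=["age"])          # ~ tidyr::drop_na(age)
    .fillna({"city": "unknown"})     # ~ tidyr::replace_na(list(city = "unknown"))
    .ffill()                         # ~ tidyr::fill(..., .direction = "down")
)
print(cleaned)
```

The method chaining above plays the same role as chaining tidyr verbs with the pipe operator in R.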
Data Manipulation and Management using MySQL Workbench
By the end of this project, you will be able to manage data efficiently in a database using MySQL Workbench. You will be able to select data from tables in your database and use keywords in SELECT statements such as LIMIT and TOP. You will be able to update data and filter it with WHERE clauses, and apply conditions in different ways with keywords such as LIKE, AND, OR, and BETWEEN. Moreover, you will be able to apply aggregate functions in your SELECT statements together with GROUP BY, and finally you will be able to join tables using three different joins: LEFT, INNER, and RIGHT JOIN. A database management system stores, organizes, and manages a large amount of information within a single software application. Using such a system increases the efficiency of business operations and reduces overall costs. The world of data is constantly changing and evolving, which has created a completely new dimension of growth and challenges for companies around the globe such as Google, Facebook, and Amazon. This guided project is for beginners in the field of data management and databases. It provides you with the basics of managing databases and data, and equips you with the first steps in data management and data extraction. Note: This course works best for learners who are based in the North America region. We’re currently working on providing the same experience in other regions.
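To give a flavor of the kinds of statements the project covers, here is a minimal, hedged Python sketch that runs a few of them against MySQL using the mysql-connector-python package; the connection details and the customers/orders tables are placeholders invented for the example, and the guided project itself works directly inside MySQL Workbench.

```python
# Hedged sketch: the statements below mirror the SELECT / WHERE / GROUP BY / JOIN
# topics listed above. Connection details and the customers/orders tables are
# invented placeholders; the guided project uses MySQL Workbench directly.
import mysql.connector

conn = mysql.connector.connect(
    host="localhost", user="demo_user", password="demo_pass", database="demo_shop"
)
cur = conn.cursor()

# Filter rows with WHERE (LIKE, AND, BETWEEN) and cap the result with LIMIT.
cur.execute("""
    SELECT customer_id, name, city
    FROM customers
    WHERE name LIKE 'A%' AND signup_date BETWEEN '2023-01-01' AND '2023-12-31'
    LIMIT 10
""")
print(cur.fetchall())

# Aggregate with GROUP BY and combine tables with a LEFT JOIN.
cur.execute("""
    SELECT c.city, COUNT(o.order_id) AS order_count, SUM(o.total) AS revenue
    FROM customers AS c
    LEFT JOIN orders AS o ON o.customer_id = c.customer_id
    GROUP BY c.city
""")
print(cur.fetchall())

cur.close()
conn.close()
```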
Julia for Beginners in Data Science
This guided project is for those who want to learn how to use Julia for data cleaning as well as exploratory analysis. It covers the syntax of Julia from a data science perspective, so you will not build anything during the course of this project. While you watch me code, you will get a cloud desktop with all the required software pre-installed, allowing you to code along with me. After all, we learn best with active, hands-on learning. Special features:
1) Work with two real-world datasets.
2) Detailed variable-description booklets are provided in the GitHub repository for this guided project.
3) The project provides challenges with solutions to encourage you to practice.
4) The real-world applications of each function are explained.
5) Best practices and tips are provided to ensure that you learn how to use Julia efficiently.
6) You get a copy of the Jupyter notebook that you create, which acts as a handy reference guide.
Please note that the version of Julia used is 1.0.4. Note: This project works best for learners who are based in the North America region. We’re currently working on providing the same experience in other regions.
Machine Learning Foundations: A Case Study Approach
Do you have data and wonder what it can tell you? Do you need a deeper understanding of the core ways in which machine learning can improve your business? Do you want to be able to converse with specialists about anything from regression and classification to deep learning and recommender systems? In this course, you will get hands-on experience with machine learning from a series of practical case studies. At the end of the first course you will have studied how to predict house prices based on house-level features, analyze sentiment from user reviews, retrieve documents of interest, recommend products, and search for images. Through hands-on practice with these use cases, you will be able to apply machine learning methods in a wide range of domains. This first course treats the machine learning method as a black box. Using this abstraction, you will focus on understanding tasks of interest, matching these tasks to machine learning tools, and assessing the quality of the output. In subsequent courses, you will delve into the components of this black box by examining models and algorithms. Together, these pieces form the machine learning pipeline, which you will use in developing intelligent applications. Learning outcomes: By the end of this course, you will be able to:
- Identify potential applications of machine learning in practice.
- Describe the core differences in analyses enabled by regression, classification, and clustering.
- Select the appropriate machine learning task for a potential application.
- Apply regression, classification, clustering, retrieval, recommender systems, and deep learning.
- Represent your data as features to serve as input to machine learning models.
- Assess the model quality in terms of relevant error metrics for each task.
- Utilize a dataset to fit a model to analyze new data.
- Build an end-to-end application that uses machine learning at its core.
- Implement these techniques in Python.
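As a small taste of the "machine learning as a black box" idea behind the first case study (predicting house prices from house-level features), here is a hedged Python sketch using scikit-learn; the tiny dataset is invented purely for illustration and is not the course's dataset.

```python
# Minimal sketch of treating a regression model as a black box: fit it on a few
# labeled examples, then predict prices for new houses. Data below is invented.
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

# Features: [square feet, number of bedrooms]; target: sale price.
X = [[1400, 3], [1600, 3], [1700, 4], [1875, 4], [1100, 2], [2350, 5], [2450, 4], [1425, 3]]
y = [245000, 312000, 279000, 308000, 199000, 405000, 324000, 250000]

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

model = LinearRegression()      # the "black box" for this first course
model.fit(X_train, y_train)     # learn from the training examples

predictions = model.predict(X_test)
print("Mean absolute error:", mean_absolute_error(y_test, predictions))
print("Predicted price, 2000 sq ft / 3 bedrooms:", model.predict([[2000, 3]])[0])
```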
Machine Learning Data Lifecycle in Production
In the second course of the Machine Learning Engineering for Production Specialization, you will build data pipelines by gathering, cleaning, and validating datasets and assessing data quality; implement feature engineering, transformation, and selection with TensorFlow Extended to get the most predictive power out of your data; and establish the data lifecycle by leveraging data lineage and provenance metadata tools and following data evolution with enterprise data schemas. Understanding machine learning and deep learning concepts is essential, but if you’re looking to build an effective AI career, you need production engineering capabilities as well. Machine learning engineering for production combines the foundational concepts of machine learning with the functional expertise of modern software development and engineering roles to help you develop production-ready skills.
Week 1: Collecting, Labeling, and Validating Data
Week 2: Feature Engineering, Transformation, and Selection
Week 3: Data Journey and Data Storage
Week 4: Advanced Data Labeling Methods, Data Augmentation, and Preprocessing Different Data Types
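For a sense of the data-validation step in this lifecycle, here is a small, hedged Python sketch using TensorFlow Data Validation (TFDV), which is part of the TensorFlow Extended ecosystem; the toy DataFrames and column names are invented, and the course labs may use different tools and data.

```python
# Hedged sketch of data validation with TensorFlow Data Validation (TFDV).
# The toy data below is invented; it only illustrates the statistics -> schema ->
# anomaly-checking flow that the data lifecycle builds on.
import pandas as pd
import tensorflow_data_validation as tfdv

train_df = pd.DataFrame({"age": [25, 38, 41, 30], "country": ["DE", "FR", "DE", "IT"]})
serving_df = pd.DataFrame({"age": [29, 200], "country": ["FR", "??"]})  # "??" is unseen

train_stats = tfdv.generate_statistics_from_dataframe(train_df)
schema = tfdv.infer_schema(statistics=train_stats)   # inferred expectations about the data

serving_stats = tfdv.generate_statistics_from_dataframe(serving_df)
anomalies = tfdv.validate_statistics(statistics=serving_stats, schema=schema)
print(anomalies)  # values that do not match the inferred schema (e.g. the unseen country) may be flagged
```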
How To Create Effective Metrics
By the end of this project, you will be able to create effective metrics for a business. You will learn what metrics are, how to create benchmarks, and how to build a system for sharing and evaluating metrics. Excel is a great tool if you plan to adopt a data-driven approach to making business decisions, and we will be sharpening our data analysis skills in Excel during this project, using data, analytics, and metrics to improve business functions and decision making. Familiarity with basic business statistics and terms is helpful, but not required.
AI Applications in People Management
In this course, you will learn about Artificial Intelligence and Machine Learning as they apply to HR management. You will explore concepts related to the role of data in machine learning, AI applications, the limitations of using data in HR decisions, and how bias can be mitigated using blockchain technology. Machine learning is becoming faster and more streamlined, and you will gain firsthand knowledge of how to use current and emerging technology to manage the entire employee lifecycle. Through study and analysis, you will learn how to sift through tremendous volumes of data to identify patterns and make predictions in the best interest of your business. By the end of this course, you'll be able to identify how to incorporate AI to streamline all HR functions and how to work with data to take advantage of the power of machine learning.