
Data Science Courses - Page 60

Showing results 591-600 of 1407
Calculus through Data & Modelling: Techniques of Integration
In this course, we build on previously defined notions of the integral of a single-variable function over an interval. Now, we will extend our understanding of integrals to work with functions of more than one variable. First, we will learn how to integrate a real-valued multivariable function over different regions in the plane. Then, we will introduce vector functions, which assign a vector to each point. This will prepare us for the final course in the specialization on vector calculus. Finally, we will introduce techniques to approximate definite integrals when working with discrete data and, through a peer-reviewed project, apply these techniques to real-world problems.
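As a taste of that last topic, a definite integral can be approximated from discrete samples with the trapezoidal rule. The sketch below uses hypothetical samples of f(x) = x² on [0, 1] (in practice the y-values would come from measured data), and the names are illustrative, not from the course:

```python
import numpy as np

# Hypothetical discrete samples of f(x) = x**2 on [0, 1];
# in a real problem these y-values would be measured data.
x = np.linspace(0.0, 1.0, 101)
y = x ** 2

# Trapezoidal rule: sum the areas of the trapezoids formed
# between each pair of consecutive sample points.
approx = np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0

# The exact integral of x**2 over [0, 1] is 1/3.
print(approx)
```

With 101 evenly spaced samples the estimate agrees with the exact value 1/3 to about four decimal places, illustrating how finer sampling tightens the approximation.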
Deep-Dive into Tensorflow Activation Functions
You've learned how to use Tensorflow. You've learned the important functions, how to design and implement sequential and functional models, and have completed several test projects. What's next? It's time to take a deep dive into activation functions: the essential component of every node and layer of a neural network that decides whether or not to fire and, in most cases, adds an element of non-linearity. In this 2-hour course-based project, you will join me in a deep dive into an exhaustive list of activation functions usable in Tensorflow and other frameworks. I will explain the working details of each activation function, describe the differences between them and their pros and cons, and demonstrate each function being used, both from scratch and within Tensorflow. Join me and boost your AI & machine learning knowledge, while also receiving a certificate to boost your resume in the process! Note: This course works best for learners who are based in the North America region. We’re currently working on providing the same experience in other regions.
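To give a flavor of the "from scratch" side of the course, here is a minimal sketch of three common activation functions written with NumPy (in Tensorflow these correspond to built-ins such as `tf.nn.relu`, `tf.nn.sigmoid`, and `tf.nn.tanh`); the function names here are our own, not the course's:

```python
import numpy as np

def relu(x):
    # Passes positive inputs through unchanged; zeroes out negatives.
    return np.maximum(0.0, x)

def sigmoid(x):
    # Squashes any real input into the interval (0, 1).
    return 1.0 / (1.0 + np.exp(-x))

def tanh(x):
    # Squashes inputs into (-1, 1); zero-centered, unlike sigmoid.
    return np.tanh(x)

x = np.array([-2.0, 0.0, 2.0])
print(relu(x))        # [0. 0. 2.]
print(sigmoid(0.0))   # 0.5
```

Each of these introduces the non-linearity mentioned above: without it, a stack of layers would collapse into a single linear transformation.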
Data Visualization with Python
Data visualization is used in virtually every discipline these days. It is used to analyze web traffic to determine peak server load, to study population growth and death rates for biological analysis, to examine weather patterns over time, to follow stock market trends, and so on. Simply put, data visualization brings meaning to numbers and helps people understand them. Seeing the data change can draw attention to trends and spikes that may otherwise go unnoticed. Python is an open-source (free) programming language with libraries that can be used to read data and create useful graphics to present it. In this course, you will create an application that reads data from CSV files. You will learn how to visualize the data using various techniques using existing Python libraries. Note: This course works best for learners who are based in the North America region. We’re currently working on providing the same experience in other regions.
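A minimal sketch of that CSV-to-chart workflow, assuming the `matplotlib` library and a made-up monthly web-traffic dataset (the column names and values are illustrative, not from the course):

```python
import csv
import io
import matplotlib
matplotlib.use("Agg")  # render charts without needing a display
import matplotlib.pyplot as plt

# Hypothetical CSV contents; a real application would open a file
# with open("traffic.csv") instead of an in-memory string.
data = io.StringIO("month,visitors\nJan,120\nFeb,150\nMar,90\n")

months, visitors = [], []
for row in csv.DictReader(data):
    months.append(row["month"])
    visitors.append(int(row["visitors"]))

# A simple line chart makes the February spike and March dip
# immediately visible in a way the raw numbers do not.
plt.plot(months, visitors, marker="o")
plt.ylabel("Visitors")
plt.savefig("traffic.png")
```

The same parsed lists can be fed to bar charts, scatter plots, or other chart types by swapping the plotting call.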
Panel Data Analysis with R
In this 1-hour long project-based course, you will learn how to conduct Panel Data (Regression) Analysis. You will receive step-by-step instructions to analyze the 'RENTAL' dataset from 'Introductory Econometrics: A Modern Approach' by Wooldridge using R Studio. In this project, we will briefly discuss three models, namely Ordinary Least Squares (OLS), Fixed Effects (FE), and Random Effects (RE), and check which one fits the data best. You will also learn some additional diagnostic tests that were not required for this example but are useful for other panel datasets (especially macro panels). Note: This course works best for learners who are based in the North America region. We’re currently working on providing the same experience in other regions.
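The course itself works in R Studio; purely to illustrate the idea behind the Fixed Effects model, here is a hypothetical NumPy sketch of the "within" estimator, which demeans each variable inside each entity so that time-invariant entity effects cancel out before running OLS (all data and names below are made up for illustration):

```python
import numpy as np

# Hypothetical balanced panel: 3 entities, each observed 4 times.
# True model: y = 2*x + entity-specific effect (noise-free here
# so the estimator recovers the slope exactly).
entities = np.repeat([0, 1, 2], 4)
x = np.array([1., 2., 3., 4., 2., 4., 6., 8., 1., 3., 5., 7.])
effects = np.array([5.0, -3.0, 10.0])
y = 2.0 * x + effects[entities]

# Within (fixed-effects) transformation: subtract each entity's
# mean from its observations, sweeping out the fixed effects.
xd, yd = x.copy(), y.copy()
for e in np.unique(entities):
    mask = entities == e
    xd[mask] -= x[mask].mean()
    yd[mask] -= y[mask].mean()

# OLS on the demeaned data gives the fixed-effects slope.
beta_fe = (xd @ yd) / (xd @ xd)
print(beta_fe)  # 2.0
```

Pooled OLS on the raw data would be biased here because the entity effects correlate with x; demeaning removes them, which is exactly what FE estimation exploits.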
Structuring Machine Learning Projects
In the third course of the Deep Learning Specialization, you will learn how to build a successful machine learning project and get to practice decision-making as a machine learning project leader. By the end, you will be able to diagnose errors in a machine learning system; prioritize strategies for reducing errors; understand complex ML settings, such as mismatched training/test sets, and comparing to and/or surpassing human-level performance; and apply end-to-end learning, transfer learning, and multi-task learning. This is also a standalone course for learners who have basic machine learning knowledge. This course draws on Andrew Ng’s experience building and shipping many deep learning products. If you aspire to become a technical leader who can set the direction for an AI team, this course provides the "industry experience" that you might otherwise get only after years of ML work experience. The Deep Learning Specialization is our foundational program that will help you understand the capabilities, challenges, and consequences of deep learning and prepare you to participate in the development of leading-edge AI technology. It provides a pathway for you to gain the knowledge and skills to apply machine learning to your work, level up your technical career, and take the definitive step in the world of AI.
Process Data with Microsoft Azure Synapse Link for Cosmos DB
In the past, performing traditional analytical workloads with Azure Cosmos DB has been a challenge. ETL mechanisms to migrate data from Cosmos DB to platforms more suited to performing analytics exist, but they are a challenge to develop and maintain. Azure Synapse Link for Cosmos DB addresses the need to perform analytics over our transactional data without impacting our transactional workloads. This is made possible through the Azure Cosmos DB Analytical store, which allows us to sync our transactional data into an isolated column store without having to develop and manage complex ETL jobs, providing us with near real-time analytical capability on our data. In this project we will step through the process of configuring the services and processing data using Microsoft Azure Synapse Link for Cosmos DB. If you enjoy this project, we recommend taking the Microsoft Azure Data Fundamentals DP-900 Exam Prep Specialization: https://www.coursera.org/specializations/microsoft-azure-dp-900-data-fundamentals
Increasing Real Estate Management Profits: Harnessing Data Analytics
In this final course you will complete a Capstone Project using data analysis to recommend a method for improving profits for your company, Watershed Property Management, Inc. Watershed is responsible for managing thousands of residential rental properties throughout the United States. Your job is to persuade Watershed’s management team to pursue a new strategy for managing its properties that will increase their profits. To do this, you will: (1) Elicit information about important variables relevant to your analysis; (2) Draw upon your new MySQL database skills to extract relevant data from a real estate database; (3) Implement data analysis in Excel to identify the best opportunities for Watershed to increase revenue and maximize profits, while managing any new risks; (4) Create a Tableau dashboard to show Watershed executives the results of a sensitivity analysis; and (5) Articulate a significant and innovative business process change for Watershed, based on your data analysis, that you will recommend to company executives. Airbnb, our Capstone’s official sponsor, provided input on the project design. The top 10 Capstone completers each year will have the opportunity to present their work directly to senior data scientists at Airbnb live for feedback and discussion. Note: Only learners who have passed the four previous courses in the specialization are eligible to take the Capstone.
Build your first Search Engine using AWS Kendra
This project is focused on building your first search engine using Amazon Kendra without writing a single line of code. By the end of this guided project, you will be able to build your first enterprise search engine by leveraging Amazon’s Kendra. Search is an important capability required by almost all medium and large enterprises, as it helps filter relevant and required information in the world of big data. Search helps find relevant information quickly and saves the time it would take to comb through vast amounts of information. Google’s first product was a search engine, Amazon leverages search so customers can browse the millions of products listed on its marketplace, Facebook offers search for its users to find friends by name, location, etc., and Microsoft has its own Bing search engine. AWS Kendra provides search as a service, and as part of this guided project we shall study how to build a search engine.
Natural Language Processing with Probabilistic Models
In Course 2 of the Natural Language Processing Specialization, you will: a) Create a simple auto-correct algorithm using minimum edit distance and dynamic programming, b) Apply the Viterbi Algorithm for part-of-speech (POS) tagging, which is vital for computational linguistics, c) Write a better auto-complete algorithm using an N-gram language model, and d) Write your own Word2Vec model that uses a neural network to compute word embeddings using a continuous bag-of-words model. By the end of this Specialization, you will have designed NLP applications that perform question-answering and sentiment analysis, created tools to translate languages and summarize text, and even built a chatbot! This Specialization is designed and taught by two experts in NLP, machine learning, and deep learning. Younes Bensouda Mourri is an Instructor of AI at Stanford University who also helped build the Deep Learning Specialization. Łukasz Kaiser is a Staff Research Scientist at Google Brain and the co-author of Tensorflow, the Tensor2Tensor and Trax libraries, and the Transformer paper.
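The auto-correct piece mentioned in (a) rests on minimum edit distance computed by dynamic programming; a minimal sketch of the classic Levenshtein version with unit costs (the function name is ours, not the course's):

```python
def min_edit_distance(source, target):
    # Dynamic-programming (Levenshtein) edit distance: d[i][j] holds
    # the cheapest way to turn source[:i] into target[:j] using
    # unit-cost insertions, deletions, and substitutions.
    m, n = len(source), len(target)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i          # delete all of source[:i]
    for j in range(n + 1):
        d[0][j] = j          # insert all of target[:j]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            sub = 0 if source[i - 1] == target[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + sub)  # substitution/match
    return d[m][n]

print(min_edit_distance("kitten", "sitting"))  # 3
```

An auto-correct system would rank candidate corrections for a misspelled word by this distance, preferring candidates that are few edits away.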
Computer Vision Fundamentals with Google Cloud
This course describes different types of computer vision use cases and then highlights different machine learning strategies for solving these use cases. The strategies vary from experimenting with pre-built ML models through pre-built ML APIs and AutoML Vision to building custom image classifiers using linear models, deep neural network (DNN) models or convolutional neural network (CNN) models. The course shows how to improve a model's accuracy with augmentation, feature extraction, and fine-tuning hyperparameters while trying to avoid overfitting the data. The course also looks at practical issues that arise, for example, when one doesn't have enough data and how to incorporate the latest research findings into different models. Learners will get hands-on practice building and optimizing their own image classification models on a variety of public datasets in the labs they will work on.