
Information Technology Courses

Create a Component Anomaly Detection Model using Visual Inspection AI
This is a self-paced lab that takes place in the Google Cloud console. You will learn how to ingest, align, and train component anomaly detection models using Visual Inspection AI.
Full Stack Cloud Development Capstone Project
In this project you will demonstrate the skills that you have mastered in cloud native application development. You will apply your new knowledge to a real-life challenge and use your expertise to develop a successful solution. The project provides you with an opportunity to solidify your full stack proficiency. As you design a dynamic user experience, you will work with GitHub Actions to build, test, and deploy your application. You will develop frontend pages, add user administration, build actions for database operations, create backend services, connect to cloud native APIs, and launch CI/CD pipelines. You will boost your capabilities with cloud native services, JavaScript, Django, JSON, IBM Cloud Foundry, Python, and Kubernetes. Then you will devise a solution for managing the containerized deployment of your application. When you complete this project, you will have a working cloud native application showpiece that will impress potential employers.
Cloud Data Engineering
Welcome to the third course in the Building Cloud Computing Solutions at Scale Specialization! In this course, you will learn how to apply data engineering to real-world projects using the cloud computing concepts introduced in the first two courses of this series. By the end of this course, you will be able to develop data engineering applications using software development best practices, including continuous deployment, code quality tools, logging, instrumentation, and monitoring. Finally, you will use cloud-native technologies to tackle complex data engineering solutions. This course is ideal for beginners as well as intermediate students interested in applying cloud computing to data science, machine learning, and data engineering. Students should have beginner-level Linux and intermediate-level Python skills. For your project in this course, you will build a serverless data engineering pipeline on a cloud platform: Amazon Web Services (AWS), Azure, or Google Cloud Platform (GCP).
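To make the project concrete, here is a minimal sketch of one step of such a serverless pipeline, assuming AWS as the platform: a Lambda handler that reacts to an S3 upload, parses the new object as JSON, and writes a summary record to DynamoDB. The bucket trigger, the processed_events table, and the record shape are all hypothetical, not taken from the course.

```python
import json
import boto3

s3 = boto3.client("s3")
table = boto3.resource("dynamodb").Table("processed_events")  # hypothetical table name

def handler(event, context):
    """Lambda entry point, invoked by an S3 'object created' event."""
    for record in event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        # Read and parse the newly uploaded object.
        body = s3.get_object(Bucket=bucket, Key=key)["Body"].read()
        payload = json.loads(body)
        # Store a small summary row downstream.
        table.put_item(Item={"object_key": key, "item_count": len(payload)})
    return {"statusCode": 200}
```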
Azure Data Factory: Implement SCD Type 1
In this project, you will learn how to implement one of the most common concepts in real-world projects, the Slowly Changing Dimension (SCD) Type 1, using Azure Data Factory. Prerequisites: an Azure subscription and basic Azure Data Factory knowledge. The following tasks are covered in this project:
Task 1: Understand Slowly Changing Dimension (SCD) Type 1. In this task, we will look at the concept of Slowly Changing Dimensions and their different types, focusing on Type 1 with a simple example.
Task 2: Create Azure services such as Azure Data Factory and Azure SQL Database. In this task, we will create the Azure services used in later tasks. Azure SQL Database will contain the staging and dimension tables, while Azure Data Factory will be used to create the data pipeline.
Task 3: Create the staging and dimension tables in Azure SQL Database. In this task, we will create the staging and dimension tables in Azure SQL Database and insert some dummy records into the staging table.
Task 4: Create an ADF pipeline to implement SCD Type 1 (insert logic). In this task, we will create a pipeline in Azure Data Factory and implement the logic to insert new records that exist in the staging table but not in the dimension table. This is one use case of SCD Type 1.
Task 5: Create an ADF pipeline to implement SCD Type 1 (update logic). In this task, we will create a pipeline in Azure Data Factory and implement the logic to update records that exist in both the staging table and the dimension table. This is the other use case of SCD Type 1.
Task 6: Demo of the ADF pipeline. In this final task, we will run the pipeline to verify that it satisfies both SCD Type 1 use cases. All the best!
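For a concrete picture of the insert and update logic described above, here is a minimal sketch of SCD Type 1 expressed as a single T-SQL MERGE statement, run from Python with pyodbc. The table and column names (stg_customer, dim_customer, customer_id, name, city) and the connection string are hypothetical; in the project itself the same logic is built with Azure Data Factory pipeline activities rather than hand-written code.

```python
import pyodbc

# Hypothetical Azure SQL Database connection string -- replace with your own.
conn = pyodbc.connect(
    "DRIVER={ODBC Driver 18 for SQL Server};"
    "SERVER=myserver.database.windows.net;DATABASE=mydb;"
    "UID=myuser;PWD=mypassword"
)

# SCD Type 1 in one statement: rows present in both tables get their
# attributes overwritten (update logic, Task 5); rows present only in
# staging get inserted into the dimension (insert logic, Task 4).
MERGE_SQL = """
MERGE dim_customer AS tgt
USING stg_customer AS src
    ON tgt.customer_id = src.customer_id
WHEN MATCHED THEN
    UPDATE SET tgt.name = src.name, tgt.city = src.city
WHEN NOT MATCHED BY TARGET THEN
    INSERT (customer_id, name, city)
    VALUES (src.customer_id, src.name, src.city);
"""

cursor = conn.cursor()
cursor.execute(MERGE_SQL)
conn.commit()
conn.close()
```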
Configuring the C/C++ Extension Pack with Visual Studio Code
In this 1.5-hour guided project, you will learn how to install, configure, and use the C/C++ extension pack in Visual Studio Code. At the end of the class, you will be familiar with the major components of the extension pack. You will also be able to build and debug programs, customize your development experience, and distribute your configurations to other workstations. Topics include C++ colorization, IntelliSense, building, debugging, CMake Tools, SSH remote development, and the Doxygen documentation generator. Basic C or C++ programming experience is highly recommended.
User Authentication: Identity-Aware Proxy
This is a self-paced lab that takes place in the Google Cloud console. Learn how to restrict access to selected authenticated users with Identity-Aware Proxy (IAP), without special programming, and discover how to retrieve user identity information from IAP.
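As a rough illustration of retrieving identity information, here is a minimal Flask sketch (the framework choice is an assumption of this example, not necessarily what the lab uses) that reads the identity headers IAP injects into every request it has authenticated. In production, the signed X-Goog-IAP-JWT-Assertion header should be verified rather than trusting the plain headers.

```python
from flask import Flask, request

app = Flask(__name__)

@app.route("/")
def whoami():
    # IAP adds these headers to requests it has authenticated.
    # Values carry a prefix, e.g. "accounts.google.com:alice@example.com".
    email = request.headers.get("X-Goog-Authenticated-User-Email", "")
    user_id = request.headers.get("X-Goog-Authenticated-User-ID", "")
    return f"Hello, {email.split(':')[-1] or 'anonymous'} (id: {user_id.split(':')[-1]})"

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8080)
```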
Big Data Emerging Technologies
Every time you use Google to search for something, every time you use Facebook, Twitter, Instagram, or any other SNS (Social Network Service), and every time you buy from a recommended list of products on Amazon.com, you are using a big data system. In addition, big data technology supports your smartphone, smartwatch, Alexa, Siri, and automobile (if it is a newer model) every day. The top companies in the world currently use big data technology, and every company needs advanced big data technology support. Simply put, big data technology is not an option for your company; it is a necessity for survival and growth. So now is the right time to learn what big data is and how to use it to your company's advantage. This six-module course first focuses on the worldwide industry market-share rankings for big data hardware, software, and professional services, and then covers the top product lines and service types of the major big data companies. The lectures then focus on how big data analysis is possible based on the world's three most popular big data technologies: Hadoop, Spark, and Storm. The last part provides hands-on experience with one of the most famous and widely used big data statistical analysis systems in the world, IBM SPSS Statistics. This course was designed to prepare you to be more successful in business strategic planning in the upcoming big data era. Welcome to the amazing big data world!
Microsoft Power Platform Fundamentals
In this course, you will learn the business value and product capabilities of Power Platform. You will create simple Power Apps, connect data with Microsoft Dataverse, build a Power BI dashboard, automate a process with Power Automate, and build a chatbot with Power Virtual Agents. By the end of this course, you will be able to:
• Describe the business value of Power Platform
• Identify the core components of Power Platform
• Demonstrate the capabilities of Power BI
• Describe the capabilities of Power Apps
• Demonstrate the business value of Power Virtual Agents
Microsoft's Power Platform is not just for programmers and specialist technologists. If you, or the company that you work for, aspire to improve productivity by automating business processes, analyzing data to produce business insights, and acting more effectively by creating simple app experiences or even chatbots, this is the right course to kick-start your career and learn amazing skills. Microsoft Power Platform Fundamentals will act as a bedrock of fundamental knowledge to prepare you for the Microsoft Power Platform Fundamentals (PL-900) certification exam. You will be able to demonstrate your real-world knowledge of the fundamentals of Microsoft Power Platform. This course can accelerate your progress and give your career a boost, as you use your Microsoft skills to improve your team's productivity.
Preparing for DP-900: Microsoft Azure Data Fundamentals Exam
Microsoft certifications give you a professional advantage by providing globally recognized and industry-endorsed evidence of mastering skills in digital and cloud businesses. In this course, you will prepare to take the DP-900 Microsoft Azure Data Fundamentals certification exam. You will refresh your knowledge of the fundamentals of database concepts in a cloud environment, basic skills in cloud data services, and foundational knowledge of cloud data services within Microsoft Azure. You will recap non-relational data offerings, provisioning and deploying non-relational databases, and non-relational data stores with Microsoft Azure. You will test your knowledge in a series of practice exams mapped to all the main topics covered in the DP-900 exam, ensuring you're well prepared for certification success. You will prepare to pass the certification exam by taking practice tests with similar formats and content. You will also get a more detailed overview of the Microsoft certification program and where you can go next in your career, along with tips and tricks, testing strategies, useful resources, and information on how to sign up for the DP-900 proctored exam. By the end of this course, you will be ready to sign up for and take the DP-900 exam. This course is intended for candidates with both technical and non-technical backgrounds. Data science and software engineering experience is not required; however, some general programming knowledge or experience would be beneficial. To be successful in this course, you need basic computer literacy and proficiency in the English language. You should be familiar with basic computing concepts and terminology, general technology concepts, and the concepts of machine learning and artificial intelligence.
Orchestrating the Cloud with Kubernetes
This is a self-paced lab that takes place in the Google Cloud console. In this lab you will learn how to provision a complete Kubernetes cluster using Google Container Engine; deploy and manage Docker containers using kubectl; and break an application into microservices using Kubernetes’ Deployments and Services.
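In the lab these steps are performed with gcloud and kubectl; as a rough equivalent, here is a sketch using the official Kubernetes Python client to create a Deployment and expose it with a Service. The hello-app image is the sample container commonly used in Google's Kubernetes tutorials; the names, replica count, and ports are illustrative assumptions.

```python
from kubernetes import client, config

config.load_kube_config()  # authenticate using the local kubeconfig

# A Deployment running two replicas of a sample web container.
deployment = client.V1Deployment(
    metadata=client.V1ObjectMeta(name="hello-app"),
    spec=client.V1DeploymentSpec(
        replicas=2,
        selector=client.V1LabelSelector(match_labels={"app": "hello-app"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "hello-app"}),
            spec=client.V1PodSpec(containers=[
                client.V1Container(
                    name="hello-app",
                    image="gcr.io/google-samples/hello-app:1.0",
                    ports=[client.V1ContainerPort(container_port=8080)],
                )
            ]),
        ),
    ),
)
client.AppsV1Api().create_namespaced_deployment(namespace="default", body=deployment)

# A Service that load-balances traffic across the Deployment's pods.
service = client.V1Service(
    metadata=client.V1ObjectMeta(name="hello-app"),
    spec=client.V1ServiceSpec(
        selector={"app": "hello-app"},
        ports=[client.V1ServicePort(port=80, target_port=8080)],
        type="LoadBalancer",
    ),
)
client.CoreV1Api().create_namespaced_service(namespace="default", body=service)
```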