Software Development Courses - Page 57
Showing results 561-570 of 1266

Azure Synapse SQL Pool - Implement Polybase
In this 1-hour long project-based course, you will learn how to implement Polybase in Azure Synapse SQL Pool.
Polybase is, in simple terms, a feature of Azure SQL Pool that gives you a SQL interface to data stored in Azure Data Lake Storage, Blob Storage, or HDFS; in other words, you can execute SQL queries directly against the files containing the data.
To implement Polybase, the source we will use is a text file stored in Azure Data Lake Storage Gen2.
Prerequisites:
1. Azure subscription account
2. Basic understanding of Azure SQL Pool and Synapse Analytics
3. Basic understanding of T-SQL queries
Here is a brief description of the tasks we are going to perform in this project:
Task 1: Create Azure Data Lake Storage Gen2
In this task, we will create the ADLS account that will hold the source file (Customer.txt), which we will eventually read via SQL queries.
Task 2: Create the source file and upload it to the ADLS container
In this task, we will create a sample comma-delimited text file and upload it to the container created in the ADLS account. Tasks 1 and 2 prepare our source.
Task 3: Create Azure SQL Pool
In this task, we will create an Azure SQL Pool and an Azure Synapse Workspace. Polybase is a feature of Azure SQL Pool, so we need to create this service along with a Synapse Workspace account.
Task 4: Configure Polybase
The previous tasks created all the resources needed to configure and implement Polybase; in this task, we will configure it.
Task 5: Polybase in action
In this task, we will see Polybase in action by executing SQL queries against the Customer.txt file stored in the ADLS account and retrieving the data. A code sketch of Tasks 4 and 5 follows this task list.
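The sketch below compresses Tasks 4 and 5 into Python, sending the Polybase T-SQL statements to the SQL pool through pyodbc. The connection string, external object names, ADLS path, and Customer column list are illustrative assumptions rather than the course's exact code, and a database-scoped credential may also be needed depending on how the storage account is secured.

```python
# Hedged sketch of Tasks 4-5: create the Polybase objects, then query the file.
# All names, the column list, and the connection string are placeholders.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 18 for SQL Server};"
    "SERVER=<workspace>.sql.azuresynapse.net;DATABASE=<sql-pool>;"
    "UID=<user>;PWD=<password>",
    autocommit=True,
)
cur = conn.cursor()

# Task 4: external data source and file format pointing at the ADLS Gen2 container.
# Depending on the storage account's security, a DATABASE SCOPED CREDENTIAL may
# also need to be created and referenced in the data source definition.
cur.execute("""
CREATE EXTERNAL DATA SOURCE CustomerSource
WITH (LOCATION = 'abfss://<container>@<account>.dfs.core.windows.net');
""")
cur.execute("""
CREATE EXTERNAL FILE FORMAT CsvFormat
WITH (FORMAT_TYPE = DELIMITEDTEXT, FORMAT_OPTIONS (FIELD_TERMINATOR = ','));
""")

# External table over Customer.txt; the column list is an assumption.
cur.execute("""
CREATE EXTERNAL TABLE dbo.Customer_ext (CustomerId INT, Name NVARCHAR(100))
WITH (LOCATION = '/Customer.txt', DATA_SOURCE = CustomerSource, FILE_FORMAT = CsvFormat);
""")

# Task 5: Polybase in action - plain SQL over the file stored in ADLS.
for row in cur.execute("SELECT TOP 10 * FROM dbo.Customer_ext"):
    print(row)
```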

The Pytorch basics you need to start your ML projects
In this 1-hour long project-based course, you will learn the PyTorch basics you need to start your own machine learning projects.
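As a taste of what those basics typically include, here is a minimal, self-contained sketch of tensors, autograd, and a hand-written gradient-descent loop fitting a line; the toy data and hyperparameters are illustrative and not taken from the course.

```python
# Minimal PyTorch sketch: tensors, autograd, and manual gradient descent
# fitting y = 2x + 1. Toy data and learning rate are illustrative.
import torch

x = torch.tensor([[1.0], [2.0], [3.0]])
y = 2 * x + 1                               # target line

w = torch.zeros(1, requires_grad=True)      # learnable slope
b = torch.zeros(1, requires_grad=True)      # learnable intercept

for _ in range(1000):
    loss = ((x * w + b - y) ** 2).mean()    # mean squared error
    loss.backward()                         # populate w.grad and b.grad
    with torch.no_grad():
        w -= 0.1 * w.grad
        b -= 0.1 * b.grad
        w.grad.zero_()
        b.grad.zero_()

print(w.item(), b.item())                   # approaches 2.0 and 1.0
```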

Deploy a website to Azure with Azure App Service
In this course, you will see how web apps in Azure allow you to publish and manage your website easily without having to work with the underlying servers, storage, or network assets. Instead, you can focus on your website features and rely on the robust Azure platform to provide secure access to your site.
You will see how Azure App Service enables you to build and host web applications in the programming language of your choice without managing infrastructure. You will also learn how to create a website through the hosted web app platform in Azure App Service.
You will learn how to use the publishing features built into Visual Studio to deploy and manage ASP.NET Core web applications hosted on Azure. You'll use Azure App Service to scale a web app to match planned seasonal throughput requirements and also meet demand during short-term peak events. By the end of this course, you'll be able to create and maintain web apps that use Docker images that are stored in Container Registry.
This course will help you prepare for the Microsoft Certified: Azure Developer Associate certification, and it includes a practice exam that covers key skills measured by the certification exam. This is the sixth course in a program of 8 courses to help prepare you to take the exam.
This course is part of a Specialization intended for developers who want to demonstrate their expertise in all phases of cloud development, from requirements, definition, and design; to development, deployment, and maintenance; to performance tuning and monitoring. It is ideal for anyone interested in preparing for the AZ-204: Developing Solutions for Microsoft Azure exam. By the end of this program, you will be ready to sign up for and take Exam AZ-204: Developing Solutions for Microsoft Azure.

Google Cloud Fundamentals for AWS Professionals
Google Cloud Fundamentals for AWS Professionals introduces important concepts and terminology for working with Google Cloud. Through videos and hands-on labs, this course presents and compares many of Google Cloud's computing and storage services, along with important resource and policy management tools.

Computational Thinking with JavaScript 3: Organise & Interact
This third course in the Computational Thinking with JavaScript specialization applies the JavaScript skills you developed in the first two courses to the world of the web. You will learn how HTML and JavaScript together support the web pages with which we are so familiar, and develop skills so that you can create your own. As well as the text presentation at the heart of HTML, you will learn how to develop interactive, animated graphics, using JavaScript to dynamically add, remove, and adjust the HTML objects on the screen. Furthermore, you will leave the more sheltered Coursera programming environment, working with external programming environments and learning how to use new libraries. Through this practice with new application areas and new building blocks, we build on the computational thinking frameworks introduced in the earlier courses, focusing particularly on the challenges of maintaining a consistent understanding of the multiple computational representations required to master programming.

Getting Started With Game Development Using PyGame
In this 1-hour long project-based course, you will learn how to use the PyGame library for Python to create a basic single-player Pong replica, complete with a welcome screen, a game that responds to user input to move the paddle, scoring, and a game-over screen with user options. By the end of the course, learners will have a basic understanding of the PyGame library and will be able to create simple games built on shapes. No previous experience with PyGame is required, as this is a basic introduction to the library, but familiarity with Python is recommended.
Note: This course works best for learners who are based in the North America region. We’re currently working on providing the same experience in other regions.
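For a sense of the building blocks involved, here is a minimal, hedged PyGame sketch of a single-paddle game loop; the window size, speeds, and shapes are illustrative choices rather than the course's code.

```python
# Minimal PyGame sketch: event handling, paddle movement, and a bouncing ball.
# Window size, speeds, and colors are illustrative.
import pygame

pygame.init()
screen = pygame.display.set_mode((640, 480))
clock = pygame.time.Clock()

paddle = pygame.Rect(20, 200, 10, 80)        # left paddle
ball = pygame.Rect(320, 240, 12, 12)
ball_vel = [4, 4]
PADDLE_SPEED = 6

running = True
while running:
    for event in pygame.event.get():
        if event.type == pygame.QUIT:
            running = False

    keys = pygame.key.get_pressed()
    if keys[pygame.K_UP]:
        paddle.y -= PADDLE_SPEED
    if keys[pygame.K_DOWN]:
        paddle.y += PADDLE_SPEED
    paddle.clamp_ip(screen.get_rect())       # keep the paddle on screen

    ball.x += ball_vel[0]
    ball.y += ball_vel[1]
    if ball.top <= 0 or ball.bottom >= 480:
        ball_vel[1] = -ball_vel[1]           # bounce off top/bottom walls
    if ball.colliderect(paddle) or ball.right >= 640:
        ball_vel[0] = -ball_vel[0]           # bounce off paddle/right wall

    screen.fill((0, 0, 0))
    pygame.draw.rect(screen, (255, 255, 255), paddle)
    pygame.draw.ellipse(screen, (255, 255, 255), ball)
    pygame.display.flip()
    clock.tick(60)                           # cap at 60 frames per second

pygame.quit()
```

A full Pong replica would add the welcome screen, scoring when the ball passes the paddle, and the game-over screen described above.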

Mastering the Software Engineering Interview
You’ve hit a major milestone as a computer scientist and are becoming a capable programmer. You now know how to solve problems, write algorithms, and analyze solutions; and you have a wealth of tools (like data structures) at your disposal. You may now be ready for an internship or (possibly) an entry-level software engineering job. But can you land the internship/job? It depends in part on how well you can solve new technical problems and communicate during interviews. How can you get better at this? Practice!
With the support of Google’s recruiting and engineering teams, we’ve provided tips, examples, and practice opportunities in this course that may help you interview at a number of tech companies. We’ll help you organize into teams to practice. Lastly, we’ll give you basic job search advice, and tips for succeeding once you’re on the job.

Data Manipulation at Scale: Systems and Algorithms
Data analysis has replaced data acquisition as the bottleneck to evidence-based decision making: we are drowning in data. Extracting knowledge from large, heterogeneous, and noisy datasets requires not only powerful computing resources, but also the programming abstractions to use them effectively. The abstractions that emerged in the last decade blend ideas from parallel databases, distributed systems, and programming languages to create a new class of scalable data analytics platforms that form the foundation for data science at realistic scales.
In this course, you will learn the landscape of relevant systems, the principles on which they rely, their tradeoffs, and how to evaluate their utility against your requirements. You will learn how practical systems were derived from the frontier of research in computer science and what systems are coming on the horizon. Cloud computing, SQL and NoSQL databases, MapReduce and the ecosystem it spawned, Spark and its contemporaries, and specialized systems for graphs and arrays will be covered.
You will also learn the history and context of data science, the skills, challenges, and methodologies the term implies, and how to structure a data science project. At the end of this course, you will be able to:
1. Describe common patterns, challenges, and approaches associated with data science projects, and what makes them different from projects in related fields.
2. Identify and use the programming models associated with scalable data manipulation, including relational algebra, MapReduce, and other data flow models.
3. Use database technology adapted for large-scale analytics, including the concepts driving parallel databases, parallel query processing, and in-database analytics.
4. Evaluate key-value stores and NoSQL systems, describe their tradeoffs with comparable systems, the details of important examples in the space, and future trends.
5. “Think” in MapReduce to effectively write algorithms for systems including Hadoop and Spark, and write programs in Spark (see the sketch after this list). You will understand their limitations, design details, their relationship to databases, and their associated ecosystem of algorithms, extensions, and languages.
6. Describe the landscape of specialized Big Data systems for graphs, arrays, and streams.
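As a flavor of goal 5, here is a minimal PySpark sketch that expresses a word count in MapReduce style; the input file name is an illustrative placeholder, and the course itself may use different systems and APIs.

```python
# Minimal PySpark sketch: a word count written as map and reduce steps.
# Assumes a local Spark installation; "sample.txt" is a placeholder input.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("wordcount-sketch").getOrCreate()
lines = spark.sparkContext.textFile("sample.txt")

counts = (
    lines.flatMap(lambda line: line.split())   # map: line -> words
         .map(lambda word: (word, 1))          # map: word -> (word, 1)
         .reduceByKey(lambda a, b: a + b)      # reduce: sum counts per word
)

for word, count in counts.take(10):            # trigger the computation
    print(word, count)

spark.stop()
```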

Deploy Machine Learning Model into AWS Cloud Servers
By the end of this project, you will know how to build a spam detector using machine learning and launch it as a web API using AWS Elastic Beanstalk. You will use the Flask Python framework to create the API, basic machine learning methods to build the spam detector, and the AWS Management Console to deploy the spam detector onto AWS cloud servers. Additionally, you will learn how to switch between different versions of your web application and how to monitor your AWS servers using the Elastic Beanstalk management console.
Note: To avoid setup distractions during the course, we recommend that you create an Amazon AWS account beforehand. AWS provides a free tier for one year, and the course materials use services that fall under the free tier.
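To illustrate the shape of the finished project, here is a hedged Flask sketch of a spam-detector API of the kind described; the toy training data, the scikit-learn pipeline, and the route name are assumptions rather than the course's code. (Elastic Beanstalk's Python platform looks for a WSGI callable named `application` by default, which is why the app object is named that way.)

```python
# Hedged sketch of a spam-detector API: a tiny scikit-learn model behind Flask.
# The training data, model choice, and /predict route are illustrative.
from flask import Flask, request, jsonify
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Tiny illustrative training set; the course uses a real spam dataset.
texts = ["win a free prize now", "meeting at 10am tomorrow"]
labels = [1, 0]  # 1 = spam, 0 = ham

model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(texts, labels)

application = Flask(__name__)  # Elastic Beanstalk expects a callable named "application"

@application.route("/predict", methods=["POST"])
def predict():
    message = request.get_json(force=True).get("message", "")
    return jsonify({"spam": bool(model.predict([message])[0])})

if __name__ == "__main__":
    application.run(debug=True)  # local test; Beanstalk serves it in production
```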

Data Mining for Smart Cities
Internet of things (IoT) has become a significant component of urban life, giving rise to “smart cities.” These smart cities aim to transform present-day urban conglomerates into citizen-friendly and environmentally sustainable living spaces. The digital infrastructure of smart cities generates a huge amount of data that could help us better understand operations and other significant aspects of city life.
In this course, you will become aware of various data mining and machine learning techniques and the various datasets to which they can be applied. You will learn how to implement data mining in Python and interpret the results to extract actionable knowledge. The course includes hands-on experiments using various real-life datasets to enable you to experiment on your own novel, domain-related datasets. You will use the Python 3 programming language to read and preprocess the data and then implement various data mining tasks on the cleaned data to obtain the desired results. Subsequently, you will visualize the results to communicate them effectively.
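To make the read-preprocess-mine-visualize flow concrete, here is a minimal sketch assuming a hypothetical CSV of city sensor readings; the file name, column names, and the choice of k-means clustering are illustrative, not taken from the course.

```python
# Minimal sketch of the read -> preprocess -> mine -> visualize flow.
# "city_sensors.csv" and its columns are hypothetical placeholders.
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

df = pd.read_csv("city_sensors.csv")                    # read
df = df.dropna(subset=["traffic", "air_quality"])       # preprocess: drop gaps

features = StandardScaler().fit_transform(df[["traffic", "air_quality"]])
df["cluster"] = KMeans(n_clusters=3, n_init=10).fit_predict(features)  # mine

df.plot.scatter(x="traffic", y="air_quality", c="cluster", colormap="viridis")
plt.show()                                              # visualize
```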