
Software Development Courses - Page 56

Showing results 551-560 of 1266
Use C# to Process XML Data
By the end of this project, you will use C# to process XML data in a C# program. XML (eXtensible Markup Language) is used to transport data across the internet for display or other processing, and it provides a standard format against which data can be validated. C#, like other languages, contains classes to read and validate XML documents.
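The course works in C#, whose System.Xml classes handle reading and validation; purely as a language-neutral illustration of the same idea, here is a minimal sketch using Python's standard library. The sample document and element names are invented for illustration, not course material.

# Minimal sketch of reading XML with Python's standard library rather than the
# C# classes the course covers; the <catalog> document below is invented.
import xml.etree.ElementTree as ET

xml_text = """
<catalog>
  <book id="1"><title>XML Basics</title><price>19.99</price></book>
  <book id="2"><title>Data Interchange</title><price>24.50</price></book>
</catalog>
"""

root = ET.fromstring(xml_text)        # parse the document into an element tree
for book in root.findall("book"):     # iterate over <book> elements
    title = book.findtext("title")
    price = float(book.findtext("price"))
    print(book.get("id"), title, price)

# Schema validation is not part of xml.etree; in Python it typically needs a
# third-party library such as lxml, while C# exposes it through XmlReader settings.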
Launch an auto-scaling AWS EC2 virtual machine
In this 1-hour long project-based course, you will learn how to launch an auto-scaling AWS EC2 virtual machine using the AWS console. Amazon Elastic Compute Cloud (EC2) is the service you use to create and run virtual machines (VMs), also known as instances. By completing the steps in this guided project, you will successfully launch an auto-scaling Amazon EC2 virtual machine using the AWS console within the AWS Free Tier. You will also verify the auto-scaling EC2 virtual machine and then terminate your scaling infrastructure. Note: This course works best for learners who are based in the North American region. We’re currently working on providing the same experience in other regions.
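The project itself works in the AWS console; purely as a reference for the same idea at the API level, here is a minimal boto3 sketch. The group name, launch template name, region, and subnet ID are placeholders, not values from the course.

# Hypothetical API-level sketch of what the course does in the console:
# an EC2 Auto Scaling group built from an existing launch template.
# All names, the region, and the subnet ID are placeholders.
import boto3

autoscaling = boto3.client("autoscaling", region_name="us-east-1")

autoscaling.create_auto_scaling_group(
    AutoScalingGroupName="demo-asg",
    LaunchTemplate={"LaunchTemplateName": "demo-template", "Version": "$Latest"},
    MinSize=1,                 # keep at least one instance running
    MaxSize=3,                 # scale out to at most three instances
    DesiredCapacity=1,
    VPCZoneIdentifier="subnet-0123456789abcdef0",  # placeholder subnet
)

# Tear the infrastructure down afterwards, as the project does, to stay in the Free Tier.
autoscaling.delete_auto_scaling_group(AutoScalingGroupName="demo-asg", ForceDelete=True)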
Machine Learning Algorithms
In this course you will: a) understand the naïve Bayes algorithm, b) understand the Support Vector Machine algorithm, c) understand the Decision Tree algorithm, and d) understand clustering algorithms. Please make sure that you’re comfortable programming in Python and have a basic knowledge of mathematics, including matrix multiplication and conditional probability.
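As a minimal sketch of the four algorithm families named above, here they are in scikit-learn on its bundled iris dataset; the course does not specify a library, so scikit-learn and the dataset are assumptions made only for illustration.

# a)-c) are supervised classifiers; d) is unsupervised clustering.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier
from sklearn.cluster import KMeans

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# naive Bayes, support vector machine, decision tree: fit on labels, score on held-out data
for model in (GaussianNB(), SVC(), DecisionTreeClassifier()):
    model.fit(X_train, y_train)
    print(type(model).__name__, model.score(X_test, y_test))

# clustering: no labels are used when fitting
kmeans = KMeans(n_clusters=3, n_init=10).fit(X)
print("cluster sizes:", [int((kmeans.labels_ == k).sum()) for k in range(3)])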
RPA Lifecycle: Development and Testing
To adopt RPA, you begin with the Discovery and Design phases and proceed to the Development and Testing phase. RPA Lifecycle – Development and Testing is the second course of the Specialization on Implementing RPA with Cognitive Solutions and Analytics. In this course, you will learn how to develop and test bots. For this, you will use the Automation Anywhere Enterprise Client (or AAE Client) to record, modify, and run tasks. AAE Client is a desktop application with an intuitive interface that enables the creation of automated tasks with ease. It features ‘SMART’ Automation technology that quickly automates complex tasks without the need for any programming effort. The learning is reinforced through concept descriptions, bot building, and guided practice.
Node.js Backend Basics with Best Practices
By the end of this project, you will create a backend, built with industry best practices, that you can adapt to all of your different projects. This project gives you a head start with one of the most widely used backend libraries, Express.js. The project walks you through designing a Node.js architecture that follows the separation of concerns design pattern. Learning Node.js and Express.js will open the door for you to create solid and scalable backend systems customized to your projects. This guided project is for intermediate software developers who would like to learn how to deliver a scalable and well-designed backend to apply in their projects or in their work in the future.
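The course builds this layering in Node.js with Express.js; the sketch below is only a language-neutral stand-in in Python showing the separation-of-concerns idea (route, service, and data-access layers). The User model and in-memory store are invented for illustration.

# Route -> service -> repository layering; each layer knows only the layer below it.
from dataclasses import dataclass

@dataclass
class User:
    id: int
    name: str

class UserRepository:
    """Data-access layer: only knows how to store and fetch users."""
    def __init__(self):
        self._users = {}

    def save(self, user):
        self._users[user.id] = user
        return user

    def get(self, user_id):
        return self._users.get(user_id)

class UserService:
    """Business layer: validation and rules, no storage or HTTP details."""
    def __init__(self, repo):
        self._repo = repo

    def register(self, user_id, name):
        if not name:
            raise ValueError("name is required")
        return self._repo.save(User(user_id, name))

def register_handler(service, payload):
    """Route layer: translates a request payload into a service call."""
    user = service.register(payload["id"], payload["name"])
    return {"id": user.id, "name": user.name}

service = UserService(UserRepository())
print(register_handler(service, {"id": 1, "name": "Ada"}))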
Set up a Continuous Integration (CI) workflow in CircleCI
In this 1-hour long project-based course on setting up a Continuous Integration (CI) workflow in CircleCI, you will work through the complete workflow of getting a development project (a Node.js application) through version control (Git and GitHub) and into a simple CI pipeline in CircleCI. This course is designed for developers who have never worked with a CI tool before and who want to understand how continuous integration can benefit their development processes and/or how it fits into a development lifecycle. By the end of this course, you will have a working pipeline of your own (in your own CircleCI user account) which will handle the building and testing of your code based on any pull requests made to your project repository in GitHub. This is a beginner course and as such is not designed for intermediate developers, DevOps professionals, or students who already understand CI/CD and want a deep dive into CircleCI and its various configuration capabilities. Note: This course works best for learners who are based in the North America region. We’re currently working on providing the same experience in other regions.
Azure Synapse SQL Pool - Implement Polybase
In this 1-hour long project-based course, you will learn how to implement PolyBase in Azure Synapse SQL Pool. PolyBase, in simple words, is a feature of Azure SQL Pool that lets you access data stored in Azure Data Lake Storage, Blob Storage, or HDFS through a SQL interface; in effect, you can execute SQL queries directly on the files that contain the data. The source we consider for implementing PolyBase is a text file stored in Azure Data Lake Storage Gen2. Prerequisites: 1. An Azure subscription account. 2. A basic understanding of Azure SQL Pool and Synapse Analytics. 3. A basic understanding of T-SQL queries. Here is a brief description of the tasks we are going to perform in this project:
Task 1: Create Azure Data Lake Storage Gen2. In this task, we create the ADLS account that will hold the source file (Customer.txt), which we will eventually read via SQL queries.
Task 2: Create the source file and upload it to the ADLS container. In this task, we create a sample comma-delimited text file and upload it to the container created in the ADLS account. Tasks 1 and 2 prepare our source.
Task 3: Create the Azure SQL Pool. In this task, we create the Azure SQL Pool and the Azure Synapse workspace. PolyBase is a feature supported by Azure SQL Pool, so we need this service along with a Synapse workspace.
Task 4: Configure PolyBase. The preceding tasks create all the resources needed to configure and implement PolyBase; in this task, we configure it.
Task 5: PolyBase in action. In this task, we execute SQL queries against the Customer.txt file stored in the ADLS account and retrieve the data.
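As a hedged sketch of tasks 4 and 5, the T-SQL below is run from Python via pyodbc; the server, credential, secret, container, and column names are placeholders rather than the course's values, and it assumes a database master key already exists in the SQL pool.

# Sketch of the PolyBase configuration and query, with placeholder identifiers.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=<your-workspace>.sql.azuresynapse.net;DATABASE=<your-sql-pool>;"
    "UID=<user>;PWD=<password>", autocommit=True)
cur = conn.cursor()

for statement in [
    # credential holding the ADLS Gen2 storage account key (placeholder secret)
    "CREATE DATABASE SCOPED CREDENTIAL AdlsCred "
    "WITH IDENTITY = 'user', SECRET = '<storage-account-key>'",
    # external data source pointing at the container that holds Customer.txt
    "CREATE EXTERNAL DATA SOURCE CustomerAdls WITH (TYPE = HADOOP, "
    "LOCATION = 'abfss://<container>@<account>.dfs.core.windows.net', "
    "CREDENTIAL = AdlsCred)",
    # comma-delimited file format matching the sample file
    "CREATE EXTERNAL FILE FORMAT CsvFormat WITH (FORMAT_TYPE = DELIMITEDTEXT, "
    "FORMAT_OPTIONS (FIELD_TERMINATOR = ','))",
    # external table over Customer.txt; the columns here are assumed, not the course's
    "CREATE EXTERNAL TABLE dbo.CustomerExternal (CustomerId INT, "
    "CustomerName VARCHAR(100)) WITH (LOCATION = '/Customer.txt', "
    "DATA_SOURCE = CustomerAdls, FILE_FORMAT = CsvFormat)",
]:
    cur.execute(statement)

# PolyBase in action: an ordinary SQL query over the file in the lake
for row in cur.execute("SELECT TOP 10 * FROM dbo.CustomerExternal"):
    print(row)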
The PyTorch basics you need to start your ML projects
In this 1-hour long project-based course, you will learn how to use simple commands to create and manipulate files and folders, perform multiple complex tasks using one simple command, and use the superuser to perform high-privilege operations.
Deploy a website to Azure with Azure App Service
In this course, you will see how web apps in Azure allow you to publish and manage your website easily without having to work with the underlying servers, storage, or network assets. Instead, you can focus on your website features and rely on the robust Azure platform to provide secure access to your site. You will see how Azure App Service enables you to build and host web applications in the programming language of your choice without managing infrastructure. You will also learn how to create a website through the hosted web app platform in Azure App Service, and how to use the publishing features built into Visual Studio to deploy and manage ASP.NET Core web applications hosted on Azure. You'll use Azure App Service to scale a web app to match planned seasonal throughput requirements and to meet demand during short-term peak events. By the end of this course, you'll be able to create and maintain web apps that use Docker images stored in Container Registry. This course will help you prepare for the Microsoft Certified: Azure Developer Associate certification. In this course, you will take a practice exam that covers key skills measured in the exam. This is the sixth course in a program of 8 courses to help prepare you to take the exam. This course is part of a Specialization intended for developers who want to demonstrate their expertise in all phases of cloud development, from requirements, definition, and design; to development, deployment, and maintenance; to performance tuning and monitoring. It is ideal for anyone interested in preparing for the AZ-204: Developing Solutions for Microsoft Azure exam. By the end of this program, you will be ready to sign up for and take Exam AZ-204: Developing Solutions for Microsoft Azure.
Data Manipulation at Scale: Systems and Algorithms
Data analysis has replaced data acquisition as the bottleneck to evidence-based decision making: we are drowning in data. Extracting knowledge from large, heterogeneous, and noisy datasets requires not only powerful computing resources, but also the programming abstractions to use them effectively. The abstractions that emerged in the last decade blend ideas from parallel databases, distributed systems, and programming languages to create a new class of scalable data analytics platforms that form the foundation for data science at realistic scales. In this course, you will learn the landscape of relevant systems, the principles on which they rely, their tradeoffs, and how to evaluate their utility against your requirements. You will learn how practical systems were derived from the frontier of research in computer science and what systems are coming on the horizon. Cloud computing, SQL and NoSQL databases, MapReduce and the ecosystem it spawned, Spark and its contemporaries, and specialized systems for graphs and arrays will be covered. You will also learn the history and context of data science, the skills, challenges, and methodologies the term implies, and how to structure a data science project. At the end of this course, you will be able to:
1. Describe common patterns, challenges, and approaches associated with data science projects, and what makes them different from projects in related fields.
2. Identify and use the programming models associated with scalable data manipulation, including relational algebra, MapReduce, and other data flow models.
3. Use database technology adapted for large-scale analytics, including the concepts driving parallel databases, parallel query processing, and in-database analytics.
4. Evaluate key-value stores and NoSQL systems, describe their tradeoffs with comparable systems, the details of important examples in the space, and future trends.
5. "Think" in MapReduce to effectively write algorithms for systems including Hadoop and Spark, and write programs in Spark. You will understand their limitations, design details, their relationship to databases, and their associated ecosystem of algorithms, extensions, and languages.
6. Describe the landscape of specialized Big Data systems for graphs, arrays, and streams.
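Goal 5 above mentions thinking in MapReduce and writing Spark programs; as a minimal sketch of that style (not course material), here is the classic word count expressed in PySpark. The input path is a placeholder and a local Spark installation is assumed.

# The canonical MapReduce example (word count) written against Spark's RDD API.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("wordcount-sketch").getOrCreate()
lines = spark.sparkContext.textFile("data/sample.txt")  # placeholder path

counts = (
    lines.flatMap(lambda line: line.split())   # "map": emit one record per word
         .map(lambda word: (word, 1))          # key each word with a count of 1
         .reduceByKey(lambda a, b: a + b)      # "reduce": sum counts per key
)

for word, count in counts.take(10):
    print(word, count)

spark.stop()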