Data Science Courses
Exploratory Data Analysis with MATLAB
In this course, you will learn to think like a data scientist and ask questions of your data. You will use interactive features in MATLAB to extract subsets of data and to compute statistics on groups of related data. You will learn to use MATLAB to automatically generate code so you can learn syntax as you explore. You will also use interactive documents, called live scripts, to capture the steps of your analysis, communicate the results, and provide interactive controls allowing others to experiment by selecting groups of data.
These skills are valuable for those who have domain knowledge and some exposure to computational tools, but no programming background is required. To be successful in this course, you should have some knowledge of basic statistics (e.g., histograms, averages, standard deviation, curve fitting, interpolation).
By the end of this course, you will be able to load data into MATLAB, prepare it for analysis, visualize it, perform basic computations, and communicate your results to others. In your last assignment, you will combine these skills to assess damages following a severe weather event and communicate a polished recommendation based on your analysis of the data. You will be able to visualize the location of these events on a geographic map and create sliding controls allowing you to quickly visualize how a phenomenon changes over time.
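The grouped-statistics workflow described above (extracting subsets of related data and computing statistics per group) can be sketched in Python with pandas; the course itself uses MATLAB, so this is only an analogous illustration, and the weather-event figures below are invented.

```python
import pandas as pd

# Hypothetical weather-event records (invented for illustration)
events = pd.DataFrame({
    "event_type": ["flood", "hail", "flood", "tornado", "hail", "flood"],
    "damage_usd": [120000, 5000, 98000, 450000, 7000, 30000],
})

# Group related rows and compute summary statistics per group,
# analogous to grouped computations in a MATLAB live script
summary = events.groupby("event_type")["damage_usd"].agg(["count", "mean", "max"])
print(summary)
```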
Measuring Stock Liquidity
In this 1-hour long project-based course, you will learn how to use Average Daily Traded Volume and Share Turnover to measure liquidity, use Depth of Market (DOM) and Bid-Ask Spread to compare liquidity, and use Variance Ratio to quantify liquidity.
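Two of the measures named above are simple enough to sketch directly: Share Turnover is average traded volume divided by shares outstanding, and the relative Bid-Ask Spread is the quoted spread divided by the midpoint price. The figures below are invented for illustration only.

```python
# Hypothetical daily traded volumes for one stock (invented)
volumes = [1_200_000, 950_000, 1_100_000, 1_050_000, 900_000]
shares_outstanding = 50_000_000

# Average Daily Traded Volume and Share Turnover
adtv = sum(volumes) / len(volumes)
turnover = adtv / shares_outstanding

# Relative Bid-Ask Spread: (ask - bid) / midpoint
bid, ask = 101.50, 101.60
spread = (ask - bid) / ((ask + bid) / 2)

print(f"ADTV: {adtv:,.0f} shares")
print(f"Turnover: {turnover:.2%}")
print(f"Relative spread: {spread:.4%}")
```

A narrower relative spread and a higher turnover both indicate a more liquid stock.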
Note: This course works best for learners who are based in the North America region. We're currently working on providing the same experience in other regions.
This course's content is not intended to be investment advice and does not constitute an offer to perform any operations in the regulated or unregulated financial market.
Extract Text Data with Python and Regex
By the end of this project, you will know what regular expressions are and how they work. During the project, we will cover basic to advanced regex concepts by formatting phone numbers, email addresses, and URLs. We will then learn how to use regular expressions for data cleaning. Finally, in the last task, we will work with a dataset consisting of daily personal notes and use RegEx to pull useful information out of our raw text data.
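The kind of extraction the project describes can be sketched with Python's built-in `re` module. The patterns below are deliberately simple illustrations (real-world phone, email, and URL formats vary far more widely), and the sample note is invented.

```python
import re

# A hypothetical personal note (invented for illustration)
notes = "Call 555-867-5309 or email jane.doe@example.com; see https://example.com/notes"

# Simplified patterns; production-grade versions would be more permissive
phone_re = re.compile(r"\b\d{3}-\d{3}-\d{4}\b")
email_re = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
url_re = re.compile(r"https?://\S+")

phones = phone_re.findall(notes)
emails = email_re.findall(notes)
urls = url_re.findall(notes)
print(phones, emails, urls)
```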
AI & Law
About this Course
This four-week course titled AI and Law explores the way in which the increasing use of artificially intelligent technologies (AI) affects the practice and administration of law, defined in a broad sense. Topics discussed include the connection between AI and law in the context of legal responsibility, law-making, law enforcement, criminal law, the medical sector, and intellectual property law.
The course aims to equip members of the general public with an elementary ability to understand the meaningful potential of AI for their own lives. The course also aims to enable members of the general public to understand the consequences of using AI and to allow them to interact with AIs in a responsible, helpful, conscientious way. Please note that the law and content presented in this course is current as of the launch date of this course.
At the end of this course, you will be able to:
• Understand the legal significance of artificially intelligent software and hardware.
• Understand the impact of the emergence of artificial intelligence on the application and administration of law in the public sector in connection with the enforcement of criminal law, the modelling of law and in the context of administrative law.
• Understand the legal relevance of the use of artificially intelligent software in the private sector in connection with innovation and associated intellectual property rights, in the financial services sector and when predicting outcomes of legal proceedings.
• Understand the importance of artificial intelligence for selected legal fields, including labour law, competition law and health law.
Syllabus and Format
The course consists of four modules where one module represents about one week of part-time studies. A module includes a number of lectures and readings, and finishes with an assessment – a quiz and/or a peer graded assignment. The assessments are intended to encourage learning and ensure that you understand the material of the course. Participating in forum discussions is voluntary.
Modules
Module 1. AI and Law
Module 2. Legal AI in the Public Sector
Module 3. Legal AI in the Private Sector
Module 4. Selected Challenges
Lund University
Lund University was founded in 1666 and has for a number of years been ranked among the world’s top 100 universities. The University has 47 700 students and 7 500 staff based in Lund, Sweden. Lund University unites tradition with a modern, dynamic, and highly international profile. With eight different faculties and numerous research centers and specialized institutes, Lund is the strongest research university in Sweden and one of Scandinavia's largest institutions for education and research. The university annually attracts a large number of international students and offers a wide range of courses and programmes taught in English.
The Faculty of Law is one of Lund University’s four original faculties, dating back to 1666. It is a modern faculty with an international profile, welcoming both international and Swedish students. Education, research and interaction with the surrounding community are the main focus of the Faculty’s work. The connection between the three is particularly apparent in the programmes and courses offered by the university, including the university’s MOOC course in European Business Law. Students get the chance to immerse themselves in traditional legal studies while interacting with both researchers and professionally active lawyers with qualifications and experience from various areas of law.
The faculty offers three international Masters: two 2-year Master’s programmes in International Human Rights Law and European Business Law, and a 1-year Master’s in European and International Tax Law. Students from around 40 countries take part in the programmes which offer a unique subject specialization within each field, with highly qualified researchers and professional legal practitioners engaged in the teaching.
BigQuery Soccer Data Ingestion
This is a self-paced lab that takes place in the Google Cloud console. Get started with sports data science by importing soccer data on matches, teams, players, and match events into BigQuery tables.
Information access uses multiple formats, and BigQuery makes working with multiple data sources simple. In this lab you will get started with sports data science by importing external sports data sources into BigQuery tables. This will give you the basis for building more sophisticated analytics in subsequent labs.
Survival Analysis in R for Public Health
Welcome to Survival Analysis in R for Public Health!
The three earlier courses in this series covered statistical thinking, correlation, linear regression and logistic regression. This one will show you how to run survival – or “time to event” – analysis, explaining what’s meant by familiar-sounding but deceptive terms like hazard and censoring, which have specific meanings in this context. Using the popular and completely free software R, you’ll learn how to take a data set from scratch, import it into R, run essential descriptive analyses to get to know the data’s features and quirks, and progress from Kaplan-Meier plots through to multiple Cox regression. You’ll use data simulated from real, messy patient-level data for patients admitted to hospital with heart failure and learn how to explore which factors predict their subsequent mortality. You’ll learn how to test model assumptions and fit to the data and some simple tricks to get round common problems that real public health data have. There will be mini-quizzes on the videos and the R exercises with feedback along the way to check your understanding.
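The Kaplan-Meier estimate mentioned above has a simple underlying calculation: at each observed event time, the survival probability is multiplied by the fraction of at-risk patients who survive that time, with censored patients counted in the risk set but not as events. The course teaches this in R; the sketch below illustrates the arithmetic in plain Python with invented data.

```python
# Each record is (time, event): event=1 means death, event=0 means censoring.
# Data are invented for illustration only.
data = [(2, 1), (3, 1), (3, 0), (5, 1), (7, 0), (8, 1)]

def kaplan_meier(data):
    """Return [(time, survival_probability)] at each distinct event time."""
    event_times = sorted({t for t, e in data if e == 1})
    surv, curve = 1.0, []
    for t in event_times:
        at_risk = sum(1 for ti, _ in data if ti >= t)          # still under observation
        deaths = sum(1 for ti, e in data if ti == t and e == 1)
        surv *= 1 - deaths / at_risk                            # KM product-limit step
        curve.append((t, surv))
    return curve

for t, s in kaplan_meier(data):
    print(f"S({t}) = {s:.3f}")
```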
Prerequisites
Some formulae are given to aid understanding, but this is not one of those courses where you need a mathematics degree to follow it. You will need basic numeracy (for example, we will not use calculus) and familiarity with graphical and tabular ways of presenting results. The three previous courses in the series explained concepts such as hypothesis testing, p values, confidence intervals, correlation and regression and showed how to install R and run basic commands. In this course, we will recap all these core ideas in brief, but if you are unfamiliar with them, then you may prefer to take the first course in particular, Statistical Thinking in Public Health, and perhaps also the second, on linear regression, before embarking on this one.
Process Mining: Data science in Action
Process mining is the missing link between model-based process analysis and data-oriented analysis techniques. Through concrete data sets and easy to use software the course provides data science knowledge that can be applied directly to analyze and improve processes in a variety of domains.
Data science is the profession of the future, because organizations that are unable to use (big) data in a smart way will not survive. It is not sufficient to focus on data storage and data analysis. The data scientist also needs to relate data to process analysis. Process mining bridges the gap between traditional model-based process analysis (e.g., simulation and other business process management techniques) and data-centric analysis techniques such as machine learning and data mining. Process mining seeks the confrontation between event data (i.e., observed behavior) and process models (hand-made or discovered automatically). This technology has become available only recently, but it can be applied to any type of operational processes (organizations and systems). Example applications include: analyzing treatment processes in hospitals, improving customer service processes in a multinational, understanding the browsing behavior of customers using a booking site, analyzing failures of a baggage handling system, and improving the user interface of an X-ray machine. All of these applications have in common that dynamic behavior needs to be related to process models. Hence, we refer to this as "data science in action".
The course explains the key analysis techniques in process mining. Participants will learn various process discovery algorithms. These can be used to automatically learn process models from raw event data. Various other process analysis techniques that use event data will be presented. Moreover, the course will provide easy-to-use software, real-life data sets, and practical skills to directly apply the theory in a variety of application domains.
This course starts with an overview of approaches and technologies that use event data to support decision making and business process (re)design. Then the course focuses on process mining as a bridge between data mining and business process modeling. The course is at an introductory level with various practical assignments.
The course covers the three main types of process mining.
1. The first type of process mining is discovery. A discovery technique takes an event log and produces a process model without using any a-priori information. An example is the Alpha-algorithm that takes an event log and produces a process model (a Petri net) explaining the behavior recorded in the log.
2. The second type of process mining is conformance. Here, an existing process model is compared with an event log of the same process. Conformance checking can be used to check if reality, as recorded in the log, conforms to the model and vice versa.
3. The third type of process mining is enhancement. Here, the idea is to extend or improve an existing process model using information about the actual process recorded in some event log. Whereas conformance checking measures the alignment between model and reality, this third type of process mining aims at changing or extending the a-priori model. An example is the extension of a process model with performance information, e.g., showing bottlenecks. Process mining techniques can be used in an offline, but also online setting. The latter is known as operational support. An example is the detection of non-conformance at the moment the deviation actually takes place. Another example is time prediction for running cases, i.e., given a partially executed case the remaining processing time is estimated based on historic information of similar cases.
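The discovery idea described in point 1 can be illustrated with a toy example: before building a full process model, discovery algorithms such as the Alpha-algorithm derive ordering relations from the "directly-follows" pairs in an event log. The sketch below builds only that directly-follows relation (a simplification of a full discovery algorithm), from an invented log.

```python
from collections import Counter

# Event log: each trace is the ordered list of activities of one case (invented)
log = [
    ["register", "check", "decide", "pay"],
    ["register", "check", "decide", "reject"],
    ["register", "decide", "pay"],
]

# Directly-follows relation: how often activity a is immediately followed by b.
# Discovery algorithms derive their ordering relations from exactly these pairs.
dfg = Counter()
for trace in log:
    for a, b in zip(trace, trace[1:]):
        dfg[(a, b)] += 1

for (a, b), n in sorted(dfg.items()):
    print(f"{a} -> {b}: {n}")
```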
Process mining provides not only a bridge between data mining and business process management; it also helps to address the classical divide between "business" and "IT". Evidence-based business process management based on process mining helps to create a common ground for business process improvement and information systems development.
The course uses many examples using real-life event logs to illustrate the concepts and algorithms. After taking this course, one is able to run process mining projects and have a good understanding of the Business Process Intelligence field.
After taking this course you should:
- have a good understanding of Business Process Intelligence techniques (in particular process mining),
- understand the role of Big Data in today’s society,
- be able to relate process mining techniques to other analysis techniques such as simulation, business intelligence, data mining, machine learning, and verification,
- be able to apply basic process discovery techniques to learn a process model from an event log (both manually and using tools),
- be able to apply basic conformance checking techniques to compare event logs and process models (both manually and using tools),
- be able to extend a process model with information extracted from the event log (e.g., show bottlenecks),
- have a good understanding of the data needed to start a process mining project,
- be able to characterize the questions that can be answered based on such event data,
- explain how process mining can also be used for operational support (prediction and recommendation), and
- be able to conduct process mining projects in a structured manner.
Microsoft Power Platform Fundamentals
In this course, you will learn the business value and product capabilities of Power Platform. You will create simple Power Apps, connect data with Microsoft Dataverse, build a Power BI Dashboard, automate a process with Power Automate, and build a chatbot with Power Virtual Agents.
By the end of this course, you will be able to:
• Describe the business value of Power Platform
• Identify the core components of Power Platform
• Demonstrate the capabilities of Power BI
• Describe the capabilities of Power Apps
• Demonstrate the business value of Power Virtual Agents
Microsoft's Power Platform is not just for programmers and specialist technologists. If you, or the company that you work for, aspire to improve productivity by automating business processes, analyzing data to produce business insights, and acting more effectively by creating simple app experiences or even chatbots, this is the right course to kick-start your career and learn amazing skills.
Microsoft Power Platform Fundamentals will act as a bedrock of fundamental knowledge to prepare you for the Microsoft Certification: Power Platform Fundamentals (PL-900) exam. You will be able to demonstrate your real-world knowledge of the fundamentals of Microsoft Power Platform.
This course can accelerate your progress and give your career a boost, as you use your Microsoft skills to improve your team’s productivity.
Python for Data Analysis: Pandas & NumPy
In this hands-on project, we will cover the fundamentals of data analysis in Python and leverage the power of two important libraries: NumPy and pandas. They are among the most widely used Python libraries in data science, offering high-performance, easy-to-use data structures and analysis tools.
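The division of labor between the two libraries can be shown in a few lines: NumPy supplies fast vectorized math on raw arrays, while pandas adds labeled, grouped analysis on top of those arrays. The dataset below is invented for illustration.

```python
import numpy as np
import pandas as pd

# Small invented dataset to illustrate the two libraries together
prices = np.array([10.0, 12.5, 11.0, 13.0])
df = pd.DataFrame({"product": ["A", "B", "A", "B"], "price": prices})

# NumPy: vectorized computation on the raw array
print("mean price:", prices.mean())

# pandas: labeled, grouped analysis built on NumPy arrays
print(df.groupby("product")["price"].mean())
```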
Note: This course works best for learners who are based in the North America region. We’re currently working on providing the same experience in other regions.
Supply Chain Analytics Essentials
Welcome to Supply Chain Analytics - an exciting area that is in high demand!
In this introductory course to Supply Chain Analytics, I will take you on a journey through this fascinating area where supply chain management meets data analytics. You will see real-life examples of how analytics can be applied to various domains of a supply chain, from selling to logistics, production, and sourcing, to generate a significant social and economic impact. You will also learn about job market trends, job requirements, and how to prepare. Lastly, you will master a job intelligence tool to find preferred jobs by region, industry, and company.
Upon completing this course, you will
1. Understand why analytics is critical to supply chain management and its financial / economic impact.
2. See the pain points of a supply chain and how analytics may relieve them.
3. Learn supply chain analytics job opportunities, and use a job intelligence tool to make data-driven career decisions.
I hope you enjoy the course!