Duration 2 Days 12 CPD hours This course is intended for data analysts, data scientists, and business analysts who want to get started with using Python and machine learning techniques to analyze data and predict outcomes. Basic knowledge of computer programming and data analytics is a must. Familiarity with mathematical concepts such as algebra and basic statistics will be useful. Overview By the end of this course, you will have the skills you need to confidently use various machine learning algorithms to perform detailed data analysis and extract meaningful insights from data. This course is designed to give you practical guidance on industry-standard data analysis and machine learning tools in Python, with the help of realistic data. The course will help you understand how you can use pandas and Matplotlib to critically examine a dataset with summary statistics and graphs, and extract the insights you seek to derive. You will continue to build on your knowledge as you learn how to prepare data and feed it to machine learning algorithms, such as regularized logistic regression and random forest, using the scikit-learn package. You'll discover how to tune the algorithms to provide the best predictions on new and unseen data. As you delve into later sections, you'll be able to understand the working and output of these algorithms and gain insight into not only the predictive capabilities of the models but also their reasons for making these predictions.
Data Exploration and Cleaning
  Python and the Anaconda Package Management System
  Different Types of Data Science Problems
  Loading the Case Study Data with Jupyter and pandas
  Data Quality Assurance and Exploration
  Exploring the Financial History Features in the Dataset
  Activity 1: Exploring Remaining Financial Features in the Dataset
Introduction to Scikit-Learn and Model Evaluation
  Introduction
  Model Performance Metrics for Binary Classification
  Activity 2: Performing Logistic Regression with a New Feature and Creating a Precision-Recall Curve
Details of Logistic Regression and Feature Exploration
  Introduction
  Examining the Relationships between Features and the Response
  Univariate Feature Selection: What It Does and Doesn't Do
  Activity 3: Fitting a Logistic Regression Model and Directly Using the Coefficients
The Bias-Variance Trade-off
  Introduction
  Estimating the Coefficients and Intercepts of Logistic Regression
  Cross-Validation: Choosing the Regularization Parameter and Other Hyperparameters
  Activity 4: Cross-Validation and Feature Engineering with the Case Study Data
Decision Trees and Random Forests
  Introduction
  Decision Trees
  Random Forests: Ensembles of Decision Trees
  Activity 5: Cross-Validation Grid Search with Random Forest
Imputation of Missing Data, Financial Analysis, and Delivery to Client
  Introduction
  Review of Modeling Results
  Dealing with Missing Data: Imputation Strategies
  Activity 6: Deriving Financial Insights
  Final Thoughts on Delivering the Predictive Model to the Client
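As a preview of the workflow this outline walks through, here is a minimal sketch assuming pandas, Matplotlib, and scikit-learn. The file name and the "default" and "credit_limit" columns are hypothetical stand-ins for the course's case study data, not its actual dataset.

```python
# A minimal sketch: explore a dataset with pandas/Matplotlib, then fit a
# regularized logistic regression and tune it by cross-validation.
# File and column names are hypothetical placeholders.
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV, train_test_split

df = pd.read_csv("case_study.csv")       # hypothetical dataset
print(df.describe())                     # summary statistics
df["credit_limit"].hist(bins=30)         # quick look at one financial feature
plt.show()

X = df.drop(columns="default")           # hypothetical binary response column
y = df["default"]
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42, stratify=y
)

# Regularized logistic regression; C is the inverse regularization strength,
# chosen here by 5-fold cross-validation as the course outline describes.
grid = GridSearchCV(
    LogisticRegression(penalty="l2", solver="liblinear"),
    param_grid={"C": [0.01, 0.1, 1, 10]},
    cv=5,
    scoring="roc_auc",
)
grid.fit(X_train, y_train)
print(grid.best_params_, grid.score(X_test, y_test))
```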
Duration 4 Days 24 CPD hours This course is intended for data scientists who currently use Python or R to work with smaller datasets on a single machine and who need to scale up their analyses and machine learning models to large datasets on distributed clusters. Data engineers and developers with some knowledge of data science and machine learning may also find this workshop useful. Overview
  Overview of data science and machine learning at scale
  Overview of the Hadoop ecosystem
  Working with HDFS data and Hive tables using Hue
  Introduction to Cloudera Data Science Workbench
  Overview of Apache Spark 2
  Reading and writing data
  Inspecting data quality
  Cleansing and transforming data
  Summarizing and grouping data
  Combining, splitting, and reshaping data
  Exploring data
  Configuring, monitoring, and troubleshooting Spark applications
  Overview of machine learning in Spark MLlib
  Extracting, transforming, and selecting features
  Building and evaluating regression models
  Building and evaluating classification models
  Building and evaluating clustering models
  Cross-validating models and tuning hyperparameters
  Building machine learning pipelines
  Deploying machine learning models
Technologies used: Spark, Spark SQL, and Spark MLlib; PySpark and sparklyr; Cloudera Data Science Workbench (CDSW); Hue.
This workshop covers data science and machine learning workflows at scale using Apache Spark 2 and other key components of the Hadoop ecosystem. The workshop emphasizes the use of data science and machine learning methods to address real-world business challenges. Using scenarios and datasets from a fictional technology company, students discover insights to support critical business decisions and develop data products to transform the business. The material is presented through a sequence of brief lectures, interactive demonstrations, extensive hands-on exercises, and discussions. The Apache Spark demonstrations and exercises are conducted in Python (with PySpark) and R (with sparklyr) using the Cloudera Data Science Workbench (CDSW) environment; a brief PySpark sketch of this workflow follows the course details below. Additional course details: Nexus Humans Cloudera Data Scientist Training training program is a workshop that presents an invigorating mix of sessions, lessons, and masterclasses meticulously crafted to propel your learning expedition forward.
This immersive bootcamp-style experience boasts interactive lectures, hands-on labs, and collaborative hackathons, all strategically designed to fortify fundamental concepts. Guided by seasoned coaches, each session offers priceless insights and practical skills crucial for honing your expertise. Whether you're stepping into the realm of professional skills or a seasoned professional, this comprehensive course ensures you're equipped with the knowledge and prowess necessary for success. While we feel this is the best course for the Cloudera Data Scientist Training course and one of our Top 10 we encourage you to read the course outline to make sure it is the right content for you. Additionally, private sessions, closed classes or dedicated events are available both live online and at our training centres in Dublin and London, as well as at your offices anywhere in the UK, Ireland or across EMEA.
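For a concrete flavor of the read-clean-summarize-model sequence listed in the outline, here is a minimal PySpark sketch, assuming a Spark 2 session such as the one CDSW provides. The HDFS path and column names are hypothetical placeholders.

```python
# A minimal PySpark sketch: read data, cleanse and summarize it, then fit a
# Spark MLlib regression model. Paths and columns are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col
from pyspark.ml.feature import VectorAssembler
from pyspark.ml.regression import LinearRegression

spark = SparkSession.builder.appName("workshop-sketch").getOrCreate()

# Read and inspect
flights = spark.read.csv("hdfs:///data/flights.csv", header=True, inferSchema=True)
flights.printSchema()

# Cleanse and summarize
clean = flights.dropna(subset=["distance", "delay"]).filter(col("distance") > 0)
clean.groupBy("carrier").avg("delay").show()

# Assemble features and fit an MLlib model
assembled = VectorAssembler(inputCols=["distance"], outputCol="features").transform(clean)
model = LinearRegression(featuresCol="features", labelCol="delay").fit(assembled)
print(model.coefficients, model.intercept)
```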
Duration 2 Days 12 CPD hours This intermediate-level course is intended for business and technical specialists working with the Matching, Linking, and Search services of the InfoSphere MDM Virtual module. Overview
  Understand how Matching and Linking work for Virtual implementations of InfoSphere MDM
  Understand the MDM configuration project and database tables used by the PME
  Understand the PME algorithms (Standardization, Bucketing, and Comparison steps) and how to create and customize the algorithms using the workbench
  Understand how to analyze the Bucketing steps in an algorithm
  Understand how to generate weights for a given algorithm and how those weights are generated based on a sample database set
  Understand how to analyze the weights that are generated using the workbench
  Understand how to deploy the PME configuration for Virtual implementations of InfoSphere MDM
The InfoSphere MDM Virtual Module Algorithms V.11 course prepares students to work with and customize the algorithm configurations deployed to the InfoSphere MDM Probabilistic Matching Engine (PME) for Virtual MDM implementations.
PME and Virtual Overview
  Virtual MDM Overview
  Terminology (Source, Entity, Member, Attributes)
  PME and Virtual MDM (Algorithms, Weights, Comparison Scores, Thresholds)
  Virtual MDM Linkages and Tasks
Virtual MDM Algorithms
  Standardization
  Bucketing
  Comparison Functions
Virtual PME Data Model
  Algorithm configuration tables
  Member Derived Data
  Bucketing Data
Bucket Analysis
  Analysis Overview
  Attribute Completeness
  Bucket Analysis
Weights
  Weights Overview (Frequency-based weights, Edit Distance weights, and Parameterized weights)
  The weight formula
  Running weight generation
  Analyzing weights
  Bulk Cross Match process
  Pair Manager
  Threshold calculations
Additional course details: Nexus Humans ZZ880 IBM Virtual Module Algorithms for InfoSphere MDM V11 training program is a workshop that presents an invigorating mix of sessions, lessons, and masterclasses meticulously crafted to propel your learning expedition forward. This immersive bootcamp-style experience boasts interactive lectures, hands-on labs, and collaborative hackathons, all strategically designed to fortify fundamental concepts. Guided by seasoned coaches, each session offers priceless insights and practical skills crucial for honing your expertise. Whether you're stepping into the realm of professional skills or a seasoned professional, this comprehensive course ensures you're equipped with the knowledge and prowess necessary for success. While we feel this is the best course for the ZZ880 IBM Virtual Module Algorithms for InfoSphere MDM V11 course and one of our Top 10 we encourage you to read the course outline to make sure it is the right content for you. Additionally, private sessions, closed classes or dedicated events are available both live online and at our training centres in Dublin and London, as well as at your offices anywhere in the UK, Ireland or across EMEA.
Duration 1 Day 6 CPD hours This course is intended for a technical audience at an intermediate level. Overview Using Amazon SageMaker, this course teaches you how to:
  Prepare a dataset for training
  Train and evaluate a machine learning model
  Automatically tune a machine learning model
  Prepare a machine learning model for production
  Think critically about machine learning model results
In this course, learn how to solve a real-world use case with machine learning and produce actionable results using Amazon SageMaker. This course teaches you how to use Amazon SageMaker to cover the different stages of the typical data science process, from analyzing and visualizing a dataset, to preparing the data and feature engineering, down to the practical aspects of model building, training, tuning, and deployment.
Day 1
  Business problem: Churn prediction
  Load and display the dataset
  Assess features and determine which Amazon SageMaker algorithm to use
  Use Amazon SageMaker to train, evaluate, and automatically tune the model
  Deploy the model
  Assess the relative cost of errors
Additional course details: Nexus Humans Practical Data Science with Amazon SageMaker training program is a workshop that presents an invigorating mix of sessions, lessons, and masterclasses meticulously crafted to propel your learning expedition forward. This immersive bootcamp-style experience boasts interactive lectures, hands-on labs, and collaborative hackathons, all strategically designed to fortify fundamental concepts. Guided by seasoned coaches, each session offers priceless insights and practical skills crucial for honing your expertise. Whether you're stepping into the realm of professional skills or a seasoned professional, this comprehensive course ensures you're equipped with the knowledge and prowess necessary for success. While we feel this is the best course for the Practical Data Science with Amazon SageMaker course and one of our Top 10 we encourage you to read the course outline to make sure it is the right content for you. Additionally, private sessions, closed classes or dedicated events are available both live online and at our training centres in Dublin and London, as well as at your offices anywhere in the UK, Ireland or across EMEA.
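As a preview of the train-and-deploy workflow this course covers, here is a hedged sketch assuming the SageMaker Python SDK v2 and suitable IAM permissions. The role ARN, bucket, and data locations are hypothetical placeholders, and the built-in XGBoost algorithm stands in for whichever algorithm the course selects for the churn problem.

```python
# A hedged sketch: train a built-in XGBoost model on SageMaker, then deploy
# it to a real-time endpoint. All resource names are hypothetical.
import sagemaker
from sagemaker.estimator import Estimator
from sagemaker.inputs import TrainingInput

session = sagemaker.Session()
role = "arn:aws:iam::123456789012:role/SageMakerRole"      # hypothetical role

image_uri = sagemaker.image_uris.retrieve(
    "xgboost", session.boto_region_name, version="1.5-1"
)
estimator = Estimator(
    image_uri=image_uri,
    role=role,
    instance_count=1,
    instance_type="ml.m5.xlarge",
    output_path="s3://my-bucket/churn/output",             # hypothetical bucket
    sagemaker_session=session,
)
estimator.set_hyperparameters(objective="binary:logistic", num_round=100)
estimator.fit(
    {"train": TrainingInput("s3://my-bucket/churn/train.csv",  # hypothetical
                            content_type="text/csv")}
)

# Deploy the trained model to a real-time endpoint for inference.
predictor = estimator.deploy(initial_instance_count=1, instance_type="ml.m5.large")
```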
Duration 5 Days 30 CPD hours This introductory-level, fast-paced course is for skilled web developers new to React who have prior experience working with HTML5, CSS3, and JavaScript. Overview Our engaging instructors and mentors are highly experienced practitioners who bring years of current 'on-the-job' experience into every classroom. Working in a hands-on learning environment, guided by our expert team, attendees will learn about and explore:
  A basic and advanced understanding of React components
  An advanced, in-depth knowledge of how React works
  A complete understanding of using Redux
  How to build, validate, and populate interactive forms
  How to use inline styles for perfect-looking components
  How to test React components
  How to build and use components
  How to get control of your build process
  A deep understanding of data-driven modeling with props and state
  How to use client-side routing for pages in your apps
  How to debug a React application
Mastering React is a comprehensive hands-on course that aims to be the single most useful resource on getting up to speed quickly with React. Geared for more experienced web developers new to React, this course provides students with the core knowledge and hands-on skills they require to build reliable, powerful React apps. After the first few modules, you'll have a solid understanding of React's fundamentals and will be able to build a wide array of rich, interactive web apps with the framework. The first module is an introduction to the new functionality in ECMAScript 6 (JavaScript). Client-side routing between pages, managing complex state, and heavy API interaction at scale are also covered. This course consists of two parts. In the first part of the course, students will explore all the fundamentals with a progressive, example-driven approach. You'll create your first apps, learn how to write components, start handling user interaction, and manage rich forms. We end the first part by exploring the inner workings of Create React App (Facebook's tool for running React apps), writing automated unit tests, and building a multi-page app that uses client-side routing. The latter part of the course moves into more advanced concepts that you'll see used in large, production applications. These concepts explore strategies for data architecture, transport, and management: Redux is a state management paradigm based on the Flux architecture. Redux provides a structure for large state trees and allows you to decouple user interaction in your app from state changes. GraphQL is a powerful, typed REST API alternative where the client describes the data it needs. Hooks is the powerful new way to maintain state and properties with functional components and the future of React according to Facebook.
ES6 Primer (Optional)
  Prefer const and let over var
  Arrow functions
  Modules
  Object.assign()
  Template literals
  The spread operator and Rest parameters
  Enhanced object literals
  Default arguments
  Destructuring assignments
Your first React Web Application
  Setting up your development environment
  JavaScript ES6/ES7
  Getting started
  What's a component?
  Our first component
  Building the App
  Making the App data-driven
  Your app's first interaction
  Updating state and immutability
  Refactoring with the Babel plugin transform-class-properties
JSX and the Virtual DOM
  React Uses a Virtual DOM
  Why Not Modify the Actual DOM?
  What is a Virtual DOM?
  Virtual DOM Pieces
  ReactElement
  JSX
  JSX Creates Elements
  JSX Attribute Expressions
  JSX Conditional Child Expressions
  JSX Boolean Attributes
  JSX Comments
  JSX Spread Syntax
  JSX Gotchas
  JSX Summary
Components
  A time-logging app
  Getting started
  Breaking the app into components
  The steps for building React apps from scratch
  Updating timers
  Deleting timers
  Adding timing functionality
  Add start and stop functionality
  Methodology review
Advanced Component Configuration with props, state, and children
  ReactComponent
  props are the parameters
  PropTypes
  Default props with getDefaultProps()
  context
  state
  Stateless Components
  Talking to Children Components with props.children
Forms
  Forms 101
  Text Input
  Remote Data
  Async Persistence
  Redux Form Modules
Unit Testing & Jest
  Writing tests without a framework
  What is Jest?
  Using Jest
  Testing strategies for React applications
  Testing a basic React component with Enzyme
  Writing tests for the food lookup app
  Writing FoodSearch.test.js
Routing
  What's in a URL?
  React Router's core components
  Building the components of react-router
  Dynamic routing with React Router
  Supporting authenticated routes
Intro to Flux and Redux
  Why Flux?
  Flux is a Design Pattern
  Flux implementations
  Redux and Redux's key ideas
  Building a counter
  The core of Redux
  The beginnings of a chat app
  Building the reducer()
  Subscribing to the store
  Connecting Redux to React
Intermediate Redux
  Using createStore() from the redux library
  Representing messages as objects in state
  Introducing threads
  Adding the ThreadTabs component
  Supporting threads in the reducer
  Adding the action OPEN_THREAD
  Breaking up the reducer function
  Adding messagesReducer()
  Defining the initial state in the reducers
  Using combineReducers() from redux
React Hooks
  Motivation behind Hooks
  How Hooks Map to Component Classes
  Using Hooks Requires react 'next'
  useState() Hook Example
  useEffect() Hook Example
  useContext() Hook Example
  Using Custom Hooks
Using Webpack with Create React App
  JavaScript modules
  Create React App
  Exploring Create React App
  Webpack basics
  Making modifications
  Hot reloading; Auto-reloading
  Creating a production build
  Ejecting
  Using Create React App with an API server
  When to use Webpack/Create React App
Using GraphQL
  Your First GraphQL Query
  GraphQL Benefits
  GraphQL vs. REST
  GraphQL vs. SQL
  Relay and GraphQL Frameworks
  Chapter Preview
  Consuming GraphQL
  Exploring With GraphiQL
  GraphQL Syntax 101
  Complex Types
  Exploring a Graph
  Graph Nodes; Viewer
  Graph Connections and Edges
  Mutations
  Subscriptions
  GraphQL With JavaScript
  GraphQL With React
Duration 2 Days 12 CPD hours This introductory-level course is intended for business analysts and data analysts (or anyone else in the data science realm) who are already comfortable working with numerical data in Excel or other spreadsheet environments. No prior programming experience is required, and a browser is the only tool necessary for the course. Overview This course is approximately 50% hands-on, combining expert lecture, real-world demonstrations, and group discussions with machine-based practical labs and exercises. Our engaging instructors and mentors are highly experienced practitioners who bring years of current 'on-the-job' experience into every classroom. Throughout the hands-on course, students will learn to leverage Python scripting for data science (to a basic level) using the most current and efficient skills and techniques. Working in a hands-on learning environment, guided by our expert team, attendees will learn about and explore (to a basic level):
  How to work with Python interactively in web notebooks
  The essentials of Python scripting
  Key concepts necessary to enter the world of Data Science via Python
This course introduces data analysts and business analysts (as well as anyone interested in Data Science) to the Python programming language, as it's often used in Data Science in web notebooks. The goal of this course is to provide students with a baseline understanding of core concepts that can serve as a platform of knowledge to follow up with more in-depth training and real-world practice. A short Python sketch of several of these fundamentals follows the course details below.
An Overview of Python
  Why Python?
  Python in the Shell
  Python in Web Notebooks (iPython, Jupyter, Zeppelin)
  Demo: Python, Notebooks, and Data Science
Getting Started
  Using variables
  Built-in functions
  Strings
  Numbers
  Converting among types
  Writing to the screen
  Command line parameters
Flow Control
  About flow control
  White space
  Conditional expressions
  Relational and Boolean operators
  While loops
  Alternate loop exits
Sequences, Arrays, Dictionaries and Sets
  About sequences
  Lists and list methods
  Tuples
  Indexing and slicing
  Iterating through a sequence
  Sequence functions, keywords, and operators
  List comprehensions
  Generator expressions
  Nested sequences
  Working with Dictionaries
  Working with Sets
Working with files
  File overview
  Opening a text file
  Reading a text file
  Writing to a text file
  Reading and writing raw (binary) data
Functions
  Defining functions
  Parameters
  Global and local scope
  Nested functions
  Returning values
Essential Demos
  Sorting
  Exceptions
  Importing Modules
  Classes
  Regular Expressions
The standard library
  Math functions
  The string module
  Dates and times
Working with dates and times
  Translating timestamps
  Parsing dates from text
  Formatting dates
  Calendar data
Python and Data Science
  Data Science Essentials
  Pandas Overview
  NumPy Overview
  Scikit-learn Overview
  Matplotlib Overview
  Working with Python in Data Science
Additional course details: Nexus Humans Python for Data Science: Hands-on Technical Overview (TTPS4873) training program is a workshop that presents an invigorating mix of sessions, lessons, and masterclasses meticulously crafted to propel your learning expedition forward. This immersive bootcamp-style experience boasts interactive lectures, hands-on labs, and collaborative hackathons, all strategically designed to fortify fundamental concepts. Guided by seasoned coaches, each session offers priceless insights and practical skills crucial for honing your expertise.
Whether you're stepping into the realm of professional skills or a seasoned professional, this comprehensive course ensures you're equipped with the knowledge and prowess necessary for success. While we feel this is the best course for the Python for Data Science: Hands-on Technical Overview (TTPS4873) course and one of our Top 10 we encourage you to read the course outline to make sure it is the right content for you. Additionally, private sessions, closed classes or dedicated events are available both live online and at our training centres in Dublin and London, as well as at your offices anywhere in the UK, Ireland or across EMEA.
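As a small taste of the fundamentals the outline above lists, here is a self-contained sketch touching functions, flow control, comprehensions, dictionaries, and file I/O; all names and values are illustrative.

```python
# A small, self-contained sketch of several Python fundamentals from the
# course outline: functions, comprehensions, dictionaries, and file I/O.
def word_lengths(words):
    """Return a dict mapping each word to its length."""
    return {w: len(w) for w in words}

names = ["pandas", "NumPy", "Matplotlib"]
print(word_lengths(names))

# List comprehension with a conditional expression
short = [n for n in names if len(n) <= 5]
print(short)

# Writing to and reading from a text file
with open("names.txt", "w") as f:
    for n in names:
        f.write(n + "\n")

with open("names.txt") as f:
    for line in f:
        print(line.strip())
```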
Duration 1 Day 6 CPD hours This course is intended for: Data platform engineers; Solutions architects; IT professionals. Overview In this course, you will learn to:
  Apply data lake methodologies in planning and designing a data lake
  Articulate the components and services required for building an AWS data lake
  Secure a data lake with appropriate permissions
  Ingest, store, and transform data in a data lake
  Query, analyze, and visualize data within a data lake
In this course, you will learn how to build an operational data lake that supports analysis of both structured and unstructured data. You will learn the components and functionality of the services involved in creating a data lake. You will use AWS Lake Formation to build a data lake, AWS Glue to build a data catalog, and Amazon Athena to analyze data. The course lectures and labs further your learning with the exploration of several common data lake architectures. A brief Athena query sketch follows the course details below.
Module 1: Introduction to data lakes
  Describe the value of data lakes
  Compare data lakes and data warehouses
  Describe the components of a data lake
  Recognize common architectures built on data lakes
Module 2: Data ingestion, cataloging, and preparation
  Describe the relationship between data lake storage and data ingestion
  Describe AWS Glue crawlers and how they are used to create a data catalog
  Identify data formatting, partitioning, and compression for efficient storage and query
  Lab 1: Set up a simple data lake
Module 3: Data processing and analytics
  Recognize how data processing applies to a data lake
  Use AWS Glue to process data within a data lake
  Describe how to use Amazon Athena to analyze data in a data lake
Module 4: Building a data lake with AWS Lake Formation
  Describe the features and benefits of AWS Lake Formation
  Use AWS Lake Formation to create a data lake
  Understand the AWS Lake Formation security model
  Lab 2: Build a data lake using AWS Lake Formation
Module 5: Additional Lake Formation configurations
  Automate AWS Lake Formation using blueprints and workflows
  Apply security and access controls to AWS Lake Formation
  Match records with AWS Lake Formation FindMatches
  Visualize data with Amazon QuickSight
  Lab 3: Automate data lake creation using AWS Lake Formation blueprints
  Lab 4: Data visualization using Amazon QuickSight
Module 6: Architecture and course review
  Post-course knowledge check
  Architecture review
  Course review
Additional course details: Nexus Humans Building Data Lakes on AWS training program is a workshop that presents an invigorating mix of sessions, lessons, and masterclasses meticulously crafted to propel your learning expedition forward. This immersive bootcamp-style experience boasts interactive lectures, hands-on labs, and collaborative hackathons, all strategically designed to fortify fundamental concepts. Guided by seasoned coaches, each session offers priceless insights and practical skills crucial for honing your expertise. Whether you're stepping into the realm of professional skills or a seasoned professional, this comprehensive course ensures you're equipped with the knowledge and prowess necessary for success. While we feel this is the best course for the Building Data Lakes on AWS course and one of our Top 10 we encourage you to read the course outline to make sure it is the right content for you.
Additionally, private sessions, closed classes or dedicated events are available both live online and at our training centres in Dublin and London, as well as at your offices anywhere in the UK, Ireland or across EMEA.
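As a small illustration of the query-and-analyze step, here is a hedged boto3 sketch that runs an Amazon Athena query against a cataloged data lake table and waits for it to finish. The database, table, and results bucket are hypothetical placeholders.

```python
# A hedged boto3 sketch: run an Athena query and poll for completion.
# All resource names are hypothetical.
import time
import boto3

athena = boto3.client("athena")

response = athena.start_query_execution(
    QueryString="SELECT carrier, COUNT(*) AS flights FROM flights GROUP BY carrier",
    QueryExecutionContext={"Database": "datalake_db"},                  # hypothetical
    ResultConfiguration={"OutputLocation": "s3://my-bucket/athena-results/"},
)
query_id = response["QueryExecutionId"]

# Poll until the query reaches a terminal state (simplified; production code
# should use backoff and inspect failure reasons).
while True:
    status = athena.get_query_execution(QueryExecutionId=query_id)
    state = status["QueryExecution"]["Status"]["State"]
    if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
        break
    time.sleep(1)
print(state)
```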
Duration 1 Day 6 CPD hours In this hands-on workshop for Agile Scrum Masters, Release Train Engineers, and anyone serving as a Jira Administrator, Jira experts will lead you through advanced configuration and customization settings in Jira, from installation through to customized screens, workflows, filters, and reports.
Jira Administration
  Adding and managing Users
  Administering and managing Groups
  Global Jira Settings
  Jira layout and interface customization
  User authentication and security
Jira Customization
  Customization of screens and fields
  Customization of workflows
Project and Board Administration
  Configuring and managing Projects
  Configuring and managing Boards
  Creating and managing Filters
  JQL
Jira Integration
  Integrating Jira with Atlassian Tools
  Retrospectives and Documentation in Confluence
  Code management with Bitbucket
  Integration management with Bamboo
  Building a Dashboard with gadgets
  Jira Plug-ins and Marketplace
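As a small illustration of JQL in practice, here is a hedged Python sketch that runs a JQL query through the classic Jira REST search endpoint (available on Jira Server/Data Center and long supported on Cloud) using the requests library. The site URL, credentials, and DEMO project key are hypothetical placeholders.

```python
# A hedged sketch: search issues with JQL via Jira's REST API.
# URL, credentials, and project key are hypothetical.
import requests

JIRA_URL = "https://jira.example.com"            # hypothetical instance
AUTH = ("user@example.com", "api-token")         # hypothetical credentials

jql = 'project = DEMO AND status = "In Progress" ORDER BY updated DESC'
resp = requests.get(
    f"{JIRA_URL}/rest/api/2/search",
    params={"jql": jql, "maxResults": 10},
    auth=AUTH,
)
resp.raise_for_status()
for issue in resp.json()["issues"]:
    print(issue["key"], issue["fields"]["summary"])
```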
Duration 4 Days 24 CPD hours This course is intended for: Developers; Solutions Architects; Data Engineers; and anyone with little to no experience with ML who wants to learn about the ML pipeline using Amazon SageMaker. Overview In this course, you will learn to:
  Select and justify the appropriate ML approach for a given business problem
  Use the ML pipeline to solve a specific business problem
  Train, evaluate, deploy, and tune an ML model using Amazon SageMaker
  Describe some of the best practices for designing scalable, cost-optimized, and secure ML pipelines in AWS
  Apply machine learning to a real-life business problem after the course is complete
This course explores how to use the machine learning (ML) pipeline to solve a real business problem in a project-based learning environment. Students will learn about each phase of the pipeline from instructor presentations and demonstrations and then apply that knowledge to complete a project solving one of three business problems: fraud detection, recommendation engines, or flight delays. By the end of the course, students will have successfully built, trained, evaluated, tuned, and deployed an ML model using Amazon SageMaker that solves their selected business problem. A brief sketch of the hyperparameter tuning step appears after the course details below.
Module 0: Introduction
  Pre-assessment
Module 1: Introduction to Machine Learning and the ML Pipeline
  Overview of machine learning, including use cases, types of machine learning, and key concepts
  Overview of the ML pipeline
  Introduction to course projects and approach
Module 2: Introduction to Amazon SageMaker
  Introduction to Amazon SageMaker
  Demo: Amazon SageMaker and Jupyter notebooks
  Hands-on: Amazon SageMaker and Jupyter notebooks
Module 3: Problem Formulation
  Overview of problem formulation and deciding if ML is the right solution
  Converting a business problem into an ML problem
  Demo: Amazon SageMaker Ground Truth
  Hands-on: Amazon SageMaker Ground Truth
  Practice problem formulation
  Formulate problems for projects
Module 4: Preprocessing
  Overview of data collection and integration, and techniques for data preprocessing and visualization
  Practice preprocessing
  Preprocess project data
  Class discussion about projects
Module 5: Model Training
  Choosing the right algorithm
  Formatting and splitting your data for training
  Loss functions and gradient descent for improving your model
  Demo: Create a training job in Amazon SageMaker
Module 6: Model Evaluation
  How to evaluate classification models
  How to evaluate regression models
  Practice model training and evaluation
  Train and evaluate project models
  Initial project presentations
Module 7: Feature Engineering and Model Tuning
  Feature extraction, selection, creation, and transformation
  Hyperparameter tuning
  Demo: SageMaker hyperparameter optimization
  Practice feature engineering and model tuning
  Apply feature engineering and model tuning to projects
  Final project presentations
Module 8: Deployment
  How to deploy, inference, and monitor your model on Amazon SageMaker
  Deploying ML at the edge
  Demo: Creating an Amazon SageMaker endpoint
  Post-assessment
  Course wrap-up
Additional course details: Nexus Humans The Machine Learning Pipeline on AWS training program is a workshop that presents an invigorating mix of sessions, lessons, and masterclasses meticulously crafted to propel your learning expedition forward. This immersive bootcamp-style experience boasts interactive lectures, hands-on labs, and collaborative hackathons, all strategically designed to fortify fundamental concepts.
Guided by seasoned coaches, each session offers priceless insights and practical skills crucial for honing your expertise. Whether you're stepping into the realm of professional skills or a seasoned professional, this comprehensive course ensures you're equipped with the knowledge and prowess necessary for success. While we feel this is the best course for The Machine Learning Pipeline on AWS course and one of our Top 10 we encourage you to read the course outline to make sure it is the right content for you. Additionally, private sessions, closed classes or dedicated events are available both live online and at our training centres in Dublin and London, as well as at your offices anywhere in the UK, Ireland or across EMEA.
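As a preview of Module 7's hyperparameter tuning step, here is a hedged sketch assuming the SageMaker Python SDK v2 and an `estimator` configured as in the earlier SageMaker sketch. The metric name and parameter ranges are illustrative XGBoost values, not the course's prescribed settings.

```python
# A hedged sketch: automatic hyperparameter tuning with SageMaker.
# Assumes an existing `estimator`; channels and ranges are hypothetical.
from sagemaker.tuner import (
    ContinuousParameter,
    HyperparameterTuner,
    IntegerParameter,
)

tuner = HyperparameterTuner(
    estimator=estimator,                       # assumed to exist already
    objective_metric_name="validation:auc",
    objective_type="Maximize",
    hyperparameter_ranges={
        "eta": ContinuousParameter(0.01, 0.3),
        "max_depth": IntegerParameter(3, 10),
    },
    max_jobs=10,
    max_parallel_jobs=2,
)
tuner.fit({
    "train": "s3://my-bucket/train",           # hypothetical channels
    "validation": "s3://my-bucket/validation",
})
print(tuner.best_training_job())
```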