Duration 1 Day 6 CPD hours This course is intended for a technical audience at an intermediate level Overview Using Amazon SageMaker, this course teaches you how to: Prepare a dataset for training. Train and evaluate a machine learning model. Automatically tune a machine learning model. Prepare a machine learning model for production. Think critically about machine learning model results. In this course, learn how to solve a real-world use case with machine learning and produce actionable results using Amazon SageMaker. This course teaches you how to use Amazon SageMaker to cover the different stages of the typical data science process, from analyzing and visualizing a data set, to preparing the data and feature engineering, down to the practical aspects of model building, training, tuning and deployment. Day 1 Business problem: Churn prediction Load and display the dataset Assess features and determine which Amazon SageMaker algorithm to use Use Amazon SageMaker to train, evaluate, and automatically tune the model Deploy the model Assess relative cost of errors Additional course details: Nexus Humans Practical Data Science with Amazon SageMaker training program is a workshop that presents an invigorating mix of sessions, lessons, and masterclasses meticulously crafted to propel your learning expedition forward. This immersive bootcamp-style experience boasts interactive lectures, hands-on labs, and collaborative hackathons, all strategically designed to fortify fundamental concepts. Guided by seasoned coaches, each session offers priceless insights and practical skills crucial for honing your expertise. Whether you're just stepping into the professional realm or are a seasoned professional, this comprehensive course ensures you're equipped with the knowledge and prowess necessary for success. While we feel this is the best course for the Practical Data Science with Amazon SageMaker course and one of our Top 10, we encourage you to read the course outline to make sure it is the right content for you. Additionally, private sessions, closed classes or dedicated events are available both live online and at our training centres in Dublin and London, as well as at your offices anywhere in the UK, Ireland or across EMEA.
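For readers who want a concrete feel for the Day 1 workflow, the following is a minimal, hedged sketch of training and deploying a churn model with the SageMaker Python SDK. The S3 bucket, prefixes, and IAM role are placeholders you would replace with your own; the built-in XGBoost algorithm is used here because it is a common choice for tabular churn data, not because the course mandates it.

    import sagemaker
    from sagemaker.estimator import Estimator
    from sagemaker.inputs import TrainingInput

    session = sagemaker.Session()
    role = "arn:aws:iam::123456789012:role/SageMakerExecutionRole"  # placeholder IAM role
    bucket = "my-churn-bucket"                                      # placeholder S3 bucket

    # Resolve the built-in XGBoost container image for the current region.
    image_uri = sagemaker.image_uris.retrieve("xgboost", session.boto_region_name, version="1.5-1")

    estimator = Estimator(
        image_uri=image_uri,
        role=role,
        instance_count=1,
        instance_type="ml.m5.xlarge",
        output_path=f"s3://{bucket}/churn/output",
        sagemaker_session=session,
    )
    estimator.set_hyperparameters(objective="binary:logistic", num_round=100, eval_metric="auc")

    # Train on CSV data already staged in S3 (label in the first column, no header row).
    estimator.fit({
        "train": TrainingInput(f"s3://{bucket}/churn/train.csv", content_type="text/csv"),
        "validation": TrainingInput(f"s3://{bucket}/churn/validation.csv", content_type="text/csv"),
    })

    # Deploy the trained model behind a real-time endpoint for churn scoring.
    predictor = estimator.deploy(initial_instance_count=1, instance_type="ml.m5.large")

The automatic model tuning step listed in the outline would wrap this same estimator in a sagemaker.tuner.HyperparameterTuner with ranges for hyperparameters such as eta and max_depth.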
Duration 2 Days 12 CPD hours This introductory-level course is intended for Business Analysts and Data Analysts (or anyone else in the data science realm) who are already comfortable working with numerical data in Excel or other spreadsheet environments. No prior programming experience is required, and a browser is the only tool necessary for the course. Overview This course is approximately 50% hands-on, combining expert lecture, real-world demonstrations and group discussions with machine-based practical labs and exercises. Our engaging instructors and mentors are highly experienced practitioners who bring years of current 'on-the-job' experience into every classroom. Throughout the hands-on course, students will learn to leverage Python scripting for data science (to a basic level) using the most current and efficient skills and techniques. Working in a hands-on learning environment, guided by our expert team, attendees will learn about and explore (to a basic level): How to work with Python interactively in web notebooks The essentials of Python scripting Key concepts necessary to enter the world of Data Science via Python This course introduces data analysts and business analysts (as well as anyone interested in Data Science) to the Python programming language, as it's often used in Data Science in web notebooks. The goal of this course is to provide students with a baseline understanding of core concepts that can serve as a platform of knowledge to follow up with more in-depth training and real-world practice. An Overview of Python Why Python? Python in the Shell Python in Web Notebooks (IPython, Jupyter, Zeppelin) Demo: Python, Notebooks, and Data Science Getting Started Using variables Built-in functions Strings Numbers Converting among types Writing to the screen Command line parameters Flow Control About flow control White space Conditional expressions Relational and Boolean operators While loops Alternate loop exits Sequences, Arrays, Dictionaries and Sets About sequences Lists and list methods Tuples Indexing and slicing Iterating through a sequence Sequence functions, keywords, and operators List comprehensions Generator Expressions Nested sequences Working with Dictionaries Working with Sets Working with files File overview Opening a text file Reading a text file Writing to a text file Reading and writing raw (binary) data Functions Defining functions Parameters Global and local scope Nested functions Returning values Essential Demos Sorting Exceptions Importing Modules Classes Regular Expressions The standard library Math functions The string module Dates and times Working with dates and times Translating timestamps Parsing dates from text Formatting dates Calendar data Python and Data Science Data Science Essentials Pandas Overview NumPy Overview scikit-learn Overview Matplotlib Overview Working with Python in Data Science Additional course details: Nexus Humans Python for Data Science: Hands-on Technical Overview (TTPS4873) training program is a workshop that presents an invigorating mix of sessions, lessons, and masterclasses meticulously crafted to propel your learning expedition forward. This immersive bootcamp-style experience boasts interactive lectures, hands-on labs, and collaborative hackathons, all strategically designed to fortify fundamental concepts. Guided by seasoned coaches, each session offers priceless insights and practical skills crucial for honing your expertise.
Whether you're just stepping into the professional realm or are a seasoned professional, this comprehensive course ensures you're equipped with the knowledge and prowess necessary for success. While we feel this is the best course for the Python for Data Science: Hands-on Technical Overview (TTPS4873) course and one of our Top 10, we encourage you to read the course outline to make sure it is the right content for you. Additionally, private sessions, closed classes or dedicated events are available both live online and at our training centres in Dublin and London, as well as at your offices anywhere in the UK, Ireland or across EMEA.
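To give prospective attendees a flavour of the notebook-style Python covered in the outline above, here is a small, self-contained sketch touching several of the listed topics: variables, sequences, dictionaries, list comprehensions, functions, and file handling. It is illustrative only and not taken from the course materials.

    # A few of the core Python constructs the outline refers to.

    def summarize(scores):
        """Return the count, total, and average of a sequence of numbers."""
        total = sum(scores)
        return {"count": len(scores), "total": total, "average": total / len(scores)}

    # Lists, dictionaries, and a list comprehension.
    quarterly_sales = [1250.0, 980.5, 1430.25, 1100.0]
    above_1000 = [s for s in quarterly_sales if s > 1000]   # filter with a comprehension

    summary = summarize(quarterly_sales)
    print(f"{summary['count']} quarters, average {summary['average']:.2f}")
    print("Quarters above 1000:", above_1000)

    # Simple text-file handling, as covered in the 'Working with files' module.
    with open("sales_summary.txt", "w") as f:
        for key, value in summary.items():
            f.write(f"{key}: {value}\n")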
Duration 2 Days 12 CPD hours This course is intended for Anyone who works with IBM SPSS Statistics and wants to learn advanced statistical procedures to be able to better answer research questions. Overview Introduction to advanced statistical analysis Group variables: Factor Analysis and Principal Components Analysis Group similar cases: Cluster Analysis Predict categorical targets with Nearest Neighbor Analysis Predict categorical targets with Discriminant Analysis Predict categorical targets with Logistic Regression Predict categorical targets with Decision Trees Introduction to Survival Analysis Introduction to Generalized Linear Models Introduction to Linear Mixed Models This course provides an application-oriented introduction to advanced statistical methods available in IBM SPSS Statistics. Students will review a variety of advanced statistical techniques and discuss situations in which each technique would be used, the assumptions made by each method, how to set up the analysis, and how to interpret the results. This includes a broad range of techniques for predicting variables, as well as methods to cluster variables and cases. Introduction to advanced statistical analysis Taxonomy of models Overview of supervised models Overview of models to create natural groupings Group variables: Factor Analysis and Principal Components Analysis Factor Analysis basics Principal Components basics Assumptions of Factor Analysis Key issues in Factor Analysis Improve the interpretability Use Factor and component scores Group similar cases: Cluster Analysis Cluster Analysis basics Key issues in Cluster Analysis K-Means Cluster Analysis Assumptions of K-Means Cluster Analysis TwoStep Cluster Analysis Assumptions of TwoStep Cluster Analysis Predict categorical targets with Nearest Neighbor Analysis Nearest Neighbor Analysis basics Key issues in Nearest Neighbor Analysis Assess model fit Predict categorical targets with Discriminant Analysis Discriminant Analysis basics The Discriminant Analysis model Core concepts of Discriminant Analysis Classification of cases Assumptions of Discriminant Analysis Validate the solution Predict categorical targets with Logistic Regression Binary Logistic Regression basics The Binary Logistic Regression model Multinomial Logistic Regression basics Assumptions of Logistic Regression procedures Testing hypotheses Predict categorical targets with Decision Trees Decision Trees basics Validate the solution Explore CHAID Explore CRT Comparing Decision Trees methods Introduction to Survival Analysis Survival Analysis basics Kaplan-Meier Analysis Assumptions of Kaplan-Meier Analysis Cox Regression Assumptions of Cox Regression Introduction to Generalized Linear Models Generalized Linear Models basics Available distributions Available link functions Introduction to Linear Mixed Models Linear Mixed Models basics Hierarchical Linear Models Modeling strategy Assumptions of Linear Mixed Models Additional course details: Nexus Humans 0G09A IBM Advanced Statistical Analysis Using IBM SPSS Statistics (v25) training program is a workshop that presents an invigorating mix of sessions, lessons, and masterclasses meticulously crafted to propel your learning expedition forward. This immersive bootcamp-style experience boasts interactive lectures, hands-on labs, and collaborative hackathons, all strategically designed to fortify fundamental concepts. Guided by seasoned coaches, each session offers priceless insights and practical skills crucial for honing your expertise.
Whether you're just stepping into the professional realm or are a seasoned professional, this comprehensive course ensures you're equipped with the knowledge and prowess necessary for success. While we feel this is the best course for the 0G09A IBM Advanced Statistical Analysis Using IBM SPSS Statistics (v25) course and one of our Top 10, we encourage you to read the course outline to make sure it is the right content for you. Additionally, private sessions, closed classes or dedicated events are available both live online and at our training centres in Dublin and London, as well as at your offices anywhere in the UK, Ireland or across EMEA.
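The course itself works in IBM SPSS Statistics rather than code, but for readers who think in scripts, here is a small Python/scikit-learn sketch of two of the techniques in the outline (k-means clustering and binary logistic regression) applied to synthetic data. It is purely illustrative of the concepts, not of SPSS itself; the data and variable names are invented.

    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.linear_model import LogisticRegression
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 3))                       # 200 cases, 3 continuous variables
    y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=200) > 0).astype(int)

    # Group similar cases: k-means on standardized variables (cf. K-Means Cluster Analysis).
    X_std = StandardScaler().fit_transform(X)
    clusters = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X_std)
    print("Cases per cluster:", np.bincount(clusters))

    # Predict a categorical target: binary logistic regression (cf. Binary Logistic Regression basics).
    model = LogisticRegression().fit(X, y)
    print("Coefficients:", model.coef_.round(2), "Accuracy:", round(model.score(X, y), 3))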
Duration 4 Days 24 CPD hours This course is best suited to developers, engineers, and architects who want to use Hadoop and related tools to solve real-world problems. Overview Skills learned in this course include: Creating a data set with the Kite SDK, Developing custom Flume components for data ingestion, Managing a multi-stage workflow with Oozie, Analyzing data with Crunch, Writing user-defined functions for Hive and Impala, and Indexing data with Cloudera Search. Cloudera University's four-day course for designing and building Big Data applications prepares you to analyze and solve real-world problems using Apache Hadoop and associated tools in the enterprise data hub (EDH). Introduction Application Architecture Scenario Explanation Understanding the Development Environment Identifying and Collecting Input Data Selecting Tools for Data Processing and Analysis Presenting Results to the User Defining & Using Datasets Metadata Management What is Apache Avro? Avro Schemas Avro Schema Evolution Selecting a File Format Performance Considerations Using the Kite SDK Data Module What is the Kite SDK? Fundamental Data Module Concepts Creating New Data Sets Using the Kite SDK Loading, Accessing, and Deleting a Data Set Importing Relational Data with Apache Sqoop What is Apache Sqoop? Basic Imports Limiting Results Improving Sqoop's Performance Sqoop 2 Capturing Data with Apache Flume What is Apache Flume? Basic Flume Architecture Flume Sources Flume Sinks Flume Configuration Logging Application Events to Hadoop Developing Custom Flume Components Flume Data Flow and Common Extension Points Custom Flume Sources Developing a Flume Pollable Source Developing a Flume Event-Driven Source Custom Flume Interceptors Developing a Header-Modifying Flume Interceptor Developing a Filtering Flume Interceptor Writing Avro Objects with a Custom Flume Interceptor Managing Workflows with Apache Oozie The Need for Workflow Management What is Apache Oozie? Defining an Oozie Workflow Validation, Packaging, and Deployment Running and Tracking Workflows Using the CLI Hue UI for Oozie Processing Data Pipelines with Apache Crunch What is Apache Crunch? Understanding the Crunch Pipeline Comparing Crunch to Java MapReduce Working with Crunch Projects Reading and Writing Data in Crunch Data Collection API Functions Utility Classes in the Crunch API Working with Tables in Apache Hive What is Apache Hive? Accessing Hive Basic Query Syntax Creating and Populating Hive Tables How Hive Reads Data Using the RegexSerDe in Hive Developing User-Defined Functions What are User-Defined Functions? Implementing a User-Defined Function Deploying Custom Libraries in Hive Registering a User-Defined Function in Hive Executing Interactive Queries with Impala What is Impala? Comparing Hive to Impala Running Queries in Impala Support for User-Defined Functions Data and Metadata Management Understanding Cloudera Search What is Cloudera Search? Search Architecture Supported Document Formats Indexing Data with Cloudera Search Collection and Schema Management Morphlines Indexing Data in Batch Mode Indexing Data in Near Real Time Presenting Results to Users Solr Query Syntax Building a Search UI with Hue Accessing Impala through JDBC Powering a Custom Web Application with Impala and Search
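Avro schemas come up early in this outline, so here is a small, hedged illustration of defining one and writing records to an Avro data file from Python using the fastavro library. The course itself works with the Kite SDK and Java tooling; the schema fields, namespace, and file path below are made up for the example.

    from fastavro import parse_schema, writer

    # A simple Avro schema for web events, defined as a Python dict.
    schema = {
        "type": "record",
        "name": "WebEvent",
        "namespace": "example.events",          # hypothetical namespace
        "fields": [
            {"name": "user_id", "type": "long"},
            {"name": "url", "type": "string"},
            {"name": "timestamp", "type": "long"},
        ],
    }
    parsed = parse_schema(schema)

    records = [
        {"user_id": 42, "url": "/home", "timestamp": 1700000000},
        {"user_id": 7, "url": "/checkout", "timestamp": 1700000042},
    ]

    # Write an Avro container file that downstream tools (Hive, Impala, Crunch) can read.
    with open("web_events.avro", "wb") as out:
        writer(out, parsed, records)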
Duration 0.5 Days 3 CPD hours This course is designed for business leaders and decision makers, including C-level executives, project managers, HR leaders, Marketing and Sales leaders, and technical sales consultants, who want to increase their knowledge of and familiarity with concepts surrounding data science. Other individuals who want to know more about basic data science concepts are also candidates for this course. This course is also designed to assist learners in preparing for the CertNexus DSBIZ™ (Exam DSZ-110) credential. Overview In this course, you will identify how data science supports business decisions. You will: Explain the fundamentals of data science. Describe common implementations of data science. Identify the impact data science can have on a business. The ability to identify and respond to changing trends is a hallmark of a successful business. Whether those trends are related to customers and sales or to regulatory and industry standards, businesses are wise to keep track of the variables that can affect the bottom line. In today's business landscape, data comes from numerous sources and in diverse forms. By leveraging data science concepts and technologies, businesses can mold all of that raw data into information that facilitates decisions to improve and expand the success of the business. Data Science Fundamentals What is Data Science? Types of Data Data Science Roles Data Science Implementation The Data Science Lifecycle Data Acquisition and Preparation Data Modeling and Visualization The Impact of Data Science Benefits of Data Science Challenges of Data Science Business Use Cases for Data Science Additional course details: Nexus Humans CertNexus Data Science for Business Professionals (DSBIZ) training program is a workshop that presents an invigorating mix of sessions, lessons, and masterclasses meticulously crafted to propel your learning expedition forward. This immersive bootcamp-style experience boasts interactive lectures, hands-on labs, and collaborative hackathons, all strategically designed to fortify fundamental concepts. Guided by seasoned coaches, each session offers priceless insights and practical skills crucial for honing your expertise. Whether you're just stepping into the professional realm or are a seasoned professional, this comprehensive course ensures you're equipped with the knowledge and prowess necessary for success. While we feel this is the best course for the CertNexus Data Science for Business Professionals (DSBIZ) course and one of our Top 10, we encourage you to read the course outline to make sure it is the right content for you. Additionally, private sessions, closed classes or dedicated events are available both live online and at our training centres in Dublin and London, as well as at your offices anywhere in the UK, Ireland or across EMEA.
Duration 5 Days 30 CPD hours This intermediate-level and beyond course is geared for experienced technical professionals in various roles, such as developers, data analysts, data engineers, software engineers, and machine learning engineers who want to leverage Scala and Spark to tackle complex data challenges and develop scalable, high-performance applications across diverse domains. Practical programming experience is required to participate in the hands-on labs. Overview Working in a hands-on learning environment led by our expert instructor you'll: Develop a basic understanding of Scala and Apache Spark fundamentals, enabling you to confidently create scalable and high-performance applications. Learn how to process large datasets efficiently, helping you handle complex data challenges and make data-driven decisions. Gain hands-on experience with real-time data streaming, allowing you to manage and analyze data as it flows into your applications. Acquire practical knowledge of machine learning algorithms using Spark MLlib, empowering you to create intelligent applications and uncover hidden insights. Master graph processing with GraphX, enabling you to analyze and visualize complex relationships in your data. Discover generative AI technologies using GPT with Spark and Scala, opening up new possibilities for automating content generation and enhancing data analysis. Embark on a journey to master the world of big data with our immersive course on Scala and Spark! Mastering Scala with Apache Spark for the Modern Data Enterprise is a five-day, hands-on course designed to provide you with the essential skills and tools to tackle complex data projects using the Scala programming language and Apache Spark, a high-performance data processing engine. Mastering these technologies will enable you to perform a wide range of tasks, from data wrangling and analytics to machine learning and artificial intelligence, across various industries and applications. Guided by our expert instructor, you'll explore the fundamentals of Scala programming and Apache Spark while gaining valuable hands-on experience with Spark programming, RDDs, DataFrames, Spark SQL, and data sources. You'll also explore Spark Streaming, performance optimization techniques, and the integration of popular external libraries, tools, and cloud platforms like AWS, Azure, and GCP. Machine learning enthusiasts will delve into Spark MLlib, covering basics of machine learning algorithms, data preparation, feature extraction, and various techniques such as regression, classification, clustering, and recommendation systems. Introduction to Scala Brief history and motivation Differences between Scala and Java Basic Scala syntax and constructs Scala's functional programming features Introduction to Apache Spark Overview and history Spark components and architecture Spark ecosystem Comparing Spark with other big data frameworks Basics of Spark Programming SparkContext and SparkSession Resilient Distributed Datasets (RDDs) Transformations and Actions Working with DataFrames Spark SQL and Data Sources Spark SQL library and its advantages Structured and semi-structured data sources Reading and writing data in various formats (CSV, JSON, Parquet, Avro, etc.)
Data manipulation using SQL queries Basic RDD Operations Creating and manipulating RDDs Common transformations and actions on RDDs Working with key-value data Basic DataFrame and Dataset Operations Creating and manipulating DataFrames and Datasets Column operations and functions Filtering, sorting, and aggregating data Introduction to Spark Streaming Overview of Spark Streaming Discretized Stream (DStream) operations Windowed operations and stateful processing Performance Optimization Basics Best practices for efficient Spark code Broadcast variables and accumulators Monitoring Spark applications Integrating External Libraries and Tools, Spark Streaming Using popular external libraries, such as Hadoop and HBase Integrating with cloud platforms: AWS, Azure, GCP Connecting to data storage systems: HDFS, S3, Cassandra, etc. Introduction to Machine Learning Basics Overview of machine learning Supervised and unsupervised learning Common algorithms and use cases Introduction to Spark MLlib Overview of Spark MLlib MLlib's algorithms and utilities Data preparation and feature extraction Linear Regression and Classification Linear regression algorithm Logistic regression for classification Model evaluation and performance metrics Clustering Algorithms Overview of clustering algorithms K-means clustering Model evaluation and performance metrics Collaborative Filtering and Recommendation Systems Overview of recommendation systems Collaborative filtering techniques Implementing recommendations with Spark MLlib Introduction to Graph Processing Overview of graph processing Use cases and applications of graph processing Graph representations and operations Introduction to Spark GraphX Overview of GraphX Creating and transforming graphs Graph algorithms in GraphX Big Data Innovation! Using GPT and Generative AI Technologies with Spark and Scala Overview of generative AI technologies Integrating GPT with Spark and Scala Practical applications and use cases Bonus Topics / Time Permitting Introduction to Spark NLP Overview of Spark NLP Preprocessing text data Text classification and sentiment analysis Putting It All Together Work on a capstone project that integrates multiple aspects of the course, including data processing, machine learning, graph processing, and generative AI technologies.
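The course teaches these APIs in Scala, but the same core Spark concepts in the outline (SparkSession, DataFrames, Spark SQL, and RDD transformations and actions) can be sketched compactly in PySpark for readers who already know Python. The file path and column names below are invented for the example.

    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("spark-basics-sketch").getOrCreate()

    # DataFrames and Spark SQL over a CSV source (cf. 'Working with DataFrames', 'Spark SQL and Data Sources').
    orders = spark.read.csv("orders.csv", header=True, inferSchema=True)  # hypothetical file
    orders.createOrReplaceTempView("orders")
    top_customers = spark.sql(
        "SELECT customer_id, SUM(amount) AS total FROM orders GROUP BY customer_id ORDER BY total DESC LIMIT 10"
    )
    top_customers.show()

    # Equivalent DataFrame API: filtering, grouping, aggregating (cf. 'Basic DataFrame and Dataset Operations').
    orders.filter(F.col("amount") > 0).groupBy("customer_id").agg(F.sum("amount").alias("total")).show()

    # RDD transformations and actions (cf. 'Basic RDD Operations'): a word count over an in-memory collection.
    lines = spark.sparkContext.parallelize(["spark makes big data simple", "scala and spark together"])
    counts = lines.flatMap(lambda s: s.split()).map(lambda w: (w, 1)).reduceByKey(lambda a, b: a + b)
    print(counts.collect())

    spark.stop()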
Duration 3 Days 18 CPD hours The ideal audience for the RPA and UiPath Boot Camp is beginners in the field of RPA and individuals in roles such as developers, project managers, operation analysts, and tech enthusiasts looking to familiarize themselves with automation technologies. It's also perfectly suited for business professionals keen on understanding and implementing automated solutions within their organizations to optimize processes. Overview This 'skills-centric' course is about 50% hands-on lab and 50% lecture, with extensive practical exercises designed to reinforce fundamental skills, concepts and best practices taught throughout the course. Working in a hands-on learning environment, led by our Automation Learning expert instructor, students will explore: Gain a thorough understanding of Robotic Process Automation (RPA) and its applications using UiPath, setting a solid foundation for future learning and application. Learn to record and play in UiPath Studio, a key skill that enables automating complex tasks in a user-friendly environment. Master the art of designing and controlling workflows using Sequencing, Flowcharting, and Control Flow, helping to streamline and manage automation processes effectively. Acquire practical skills in data manipulation, from variable management to CSV/Excel and data table conversions, empowering you to handle data-rich tasks with confidence. Develop competence in managing controls and exploring various plugins and extensions, providing a broader toolkit for handling diverse automation projects. Get hands-on experience with exception handling, debugging, logging, code management, and bot deployment, fundamental skills that ensure your automated processes are reliable and efficient. How to deploy and control Bots with UiPath Orchestrator The RPA and UiPath Boot Camp is an immersive, three-day course that serves as your guide to automating repetitive, rules-based business processes with software robots. RPA blends process analysis with practical tooling, and UiPath is one of the most widely adopted platforms for building these automations, helping organizations streamline operations, reduce manual errors, and free people for higher-value work. Our comprehensive curriculum covers a broad spectrum of RPA topics. Beginning with an introduction to RPA and the UiPath Studio recorder, the course moves to the hands-on design of workflows with sequences, flowcharts, and control flow, before delving into data manipulation, working with controls, screen scraping and OCR, plugins and extensions, assistant bots, and exception handling. Half of your time is dedicated to hands-on labs, where you'll experience the practical application of your knowledge, from recording and replaying automations to building robust, well-organized workflows. These labs serve as a microcosm of real-world scenarios, equipping you with the skills to automate business processes efficiently and reliably. Time permitting, you'll also explore code management practices, state machines, and deploying and controlling bots with UiPath Orchestrator.
By the end of the course, you'll have a well-rounded understanding of RPA with UiPath, and will leave equipped with the practical skills and insights that you can immediately put to use, helping your organization automate repetitive processes, streamline business operations, and improve the reliability of day-to-day work. You'll be able to record and build automations in UiPath Studio, design workflows with sequences, flowcharts, and control flow, manipulate data within your automations, handle exceptions and logging, and deploy and manage bots. What is Robotic Process Automation? Scope and techniques of automation Robotic process automation About UiPath The future of automation Record and Play UiPath stack Downloading and installing UiPath Studio Learning UiPath Studio Task recorder Step-by-step examples using the recorder Sequence, Flowchart, and Control Flow Sequencing the workflow Activities Control flow, various types of loops, and decision making Step-by-step example using Sequence and Flowchart Step-by-step example using Sequence and Control flow Data Manipulation Variables and scope Collections Arguments - Purpose and use Data table usage with examples Clipboard management File operation with step-by-step example CSV/Excel to data table and vice versa (with a step-by-step example) Taking Control of the Controls Finding and attaching windows Finding the control Techniques for waiting for a control Act on controls - mouse and keyboard activities Working with UiExplorer Handling events Revisit recorder Screen Scraping When to use OCR Types of OCR available How to use OCR Avoiding typical failure points Tame that Application with Plugins and Extensions Terminal plugin SAP automation Java plugin Citrix automation Mail plugin PDF plugin Web integration Excel and Word plugins Credential management Extensions - Java, Chrome, Firefox, and Silverlight Handling User Events and Assistant Bots What are assistant bots? Monitoring system event triggers Monitoring image and element triggers Launching an assistant bot on a keyboard event Exception Handling, Debugging, and Logging Exception handling Common exceptions and ways to handle them Logging and taking screenshots Debugging techniques Collecting crash dumps Error reporting Managing and Maintaining the Code Project organization Nesting workflows Reusability of workflows Commenting techniques State Machine When to use Flowcharts, State Machines, or Sequences Using config files and examples of a config file Integrating a TFS server Deploying and Maintaining the Bot Publishing using publish utility Overview of Orchestration Server Using Orchestration Server to control bots Using Orchestration Server to deploy bots License management Publishing and managing updates
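UiPath itself is a low-code environment, so the course builds these patterns with activities such as Try Catch and Retry Scope rather than hand-written code. Purely as a conceptual analogue, the Python sketch below shows the retry-with-logging-and-screenshot pattern that the "Exception Handling, Debugging, and Logging" module describes; the automation step and screenshot hook are hypothetical placeholders.

    import logging
    import time

    logging.basicConfig(level=logging.INFO, format="%(asctime)s %(levelname)s %(message)s")
    log = logging.getLogger("bot")

    def click_submit_button():
        """Hypothetical automation step; in UiPath this would be a Click activity."""
        raise TimeoutError("control 'Submit' not found")

    def take_screenshot(path):
        """Placeholder for a screenshot-on-failure hook (a Take Screenshot activity in UiPath)."""
        log.info("screenshot saved to %s", path)

    def run_step(step, retries=3, delay=2.0):
        """Retry a flaky step, logging each attempt and capturing evidence on final failure."""
        for attempt in range(1, retries + 1):
            try:
                step()
                log.info("step %s succeeded on attempt %d", step.__name__, attempt)
                return
            except Exception as exc:
                log.warning("attempt %d/%d failed: %s", attempt, retries, exc)
                if attempt == retries:
                    take_screenshot("failure.png")
                    raise
                time.sleep(delay)

    if __name__ == "__main__":
        try:
            run_step(click_submit_button)
        except Exception:
            log.error("workflow aborted after exhausting retries")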
Duration 3 Days 18 CPD hours Data Science for Marketing Analytics is designed for developers and marketing analysts looking to use new, more sophisticated tools in their marketing analytics efforts. It'll help if you have prior experience of coding in Python and knowledge of high school level mathematics. Some experience with databases, Excel, statistics, or Tableau is useful but not necessary. Overview By the end of this course, you will be able to build your own marketing reporting and interactive dashboard solutions. The course starts by teaching you how to use Python libraries, such as pandas and Matplotlib, to read data into Python, manipulate it, and create plots, using both categorical and continuous variables. Then, you'll learn how to segment a population into groups and use different clustering techniques to evaluate customer segmentation. As you make your way through the course, you'll explore ways to evaluate and select the best segmentation approach, and go on to create a linear regression model on customer value data to predict lifetime value. In the concluding sections, you'll gain an understanding of regression techniques and tools for evaluating regression models, and explore ways to predict customer choice using classification algorithms. Finally, you'll apply these techniques to create a churn model for modeling customer product choices. Data Preparation and Cleaning Data Models and Structured Data pandas Data Manipulation Data Exploration and Visualization Identifying the Right Attributes Generating Targeted Insights Visualizing Data Unsupervised Learning: Customer Segmentation Customer Segmentation Methods Similarity and Data Standardization k-means Clustering Choosing the Best Segmentation Approach Choosing the Number of Clusters Different Methods of Clustering Evaluating Clustering Predicting Customer Revenue Using Linear Regression Understanding Regression Feature Engineering for Regression Performing and Interpreting Linear Regression Other Regression Techniques and Tools for Evaluation Evaluating the Accuracy of a Regression Model Using Regularization for Feature Selection Tree-Based Regression Models Supervised Learning: Predicting Customer Churn Classification Problems Understanding Logistic Regression Creating a Data Science Pipeline Fine-Tuning Classification Algorithms Support Vector Machine Decision Trees Random Forest Preprocessing Data for Machine Learning Models Model Evaluation Performance Metrics Modeling Customer Choice Understanding Multiclass Classification Class Imbalanced Data Additional course details: Nexus Humans Data Science for Marketing Analytics training program is a workshop that presents an invigorating mix of sessions, lessons, and masterclasses meticulously crafted to propel your learning expedition forward. This immersive bootcamp-style experience boasts interactive lectures, hands-on labs, and collaborative hackathons, all strategically designed to fortify fundamental concepts. Guided by seasoned coaches, each session offers priceless insights and practical skills crucial for honing your expertise. Whether you're just stepping into the professional realm or are a seasoned professional, this comprehensive course ensures you're equipped with the knowledge and prowess necessary for success. While we feel this is the best course for the Data Science for Marketing Analytics course and one of our Top 10, we encourage you to read the course outline to make sure it is the right content for you.
Additionally, private sessions, closed classes or dedicated events are available both live online and at our training centres in Dublin and London, as well as at your offices anywhere in the UK, Ireland or across EMEA.
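As a taste of the segmentation workflow described above, here is a small, illustrative Python sketch using pandas and scikit-learn: standardize a few customer attributes, fit k-means for several cluster counts, and use the silhouette score to help choose between them. The column names and data are invented for the example.

    import pandas as pd
    from sklearn.cluster import KMeans
    from sklearn.metrics import silhouette_score
    from sklearn.preprocessing import StandardScaler

    # Hypothetical customer attributes; in practice these would come from your CRM or sales data.
    customers = pd.DataFrame({
        "annual_spend": [120, 950, 870, 60, 1500, 300, 45, 1100, 720, 200],
        "purchase_frequency": [2, 14, 11, 1, 20, 5, 1, 16, 9, 3],
        "tenure_months": [3, 36, 28, 2, 48, 12, 1, 40, 24, 8],
    })

    # Standardize so that no single attribute dominates the distance calculation.
    X = StandardScaler().fit_transform(customers)

    # Compare a few cluster counts using the silhouette score (higher is better).
    for k in range(2, 5):
        labels = KMeans(n_clusters=k, n_init=10, random_state=42).fit_predict(X)
        print(f"k={k}: silhouette={silhouette_score(X, labels):.3f}")

    # Fit the chosen segmentation and attach the segment label for reporting.
    customers["segment"] = KMeans(n_clusters=3, n_init=10, random_state=42).fit_predict(X)
    print(customers.groupby("segment").mean().round(1))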
Duration 5 Days 30 CPD hours This course is intended for Experienced system administrators and system integrators Consultants responsible for designing, implementing, and customizing vRealize Operations Overview By the end of the course, you should be able to meet the following objectives: List the vRealize Operations use cases Identify features and benefits of vRealize Operations Determine the vRealize Operations cluster that meets your monitoring requirements Deploy and configure a vRealize Operations cluster Use interface features to assess and troubleshoot operational problems Describe vRealize Operations certificates Create policies to meet the operational needs of your environment Recognize effective ways to optimize performance, capacity, and cost in data centers Troubleshoot and manage problems using workbench, alerts, and predefined dashboards Manage configurations Configure application monitoring using VMware vRealize Operations Cloud Appliance™ Create custom symptoms and alert definitions, reports, and views Create various custom dashboards using the dashboard creation canvas Configure widgets and widget interactions for dashboards Create super metrics Set up users and user groups for controlled access to your environment Extend the capabilities of vRealize Operations by adding management packs and configuring solutions Monitor the health of the vRealize Operations cluster by using self-monitoring dashboards This course provides you with the knowledge and skills to deploy a VMware vRealize Operations cluster that meets the monitoring requirements of your environment. This course includes advanced capabilities such as customizing alerts, views, reports, and dashboards and explains the deployment and architecture in vRealize Operations. This course explains application monitoring, certificates, policies, capacity and cost concepts, and workload optimization with real-world use cases. This course covers troubleshooting using the workbench, alerts, and predefined dashboards, and how to manage compliance and configurations. This course also covers several management packs.
Course Introduction Introduction and course logistics Course objectives Introduction to vRealize Operations List the vRealize Operations use cases Access the vRealize Operations User Interface (UI) vRealize Operations Architecture Identify the functions of components in a vRealize Operations node Identify the types of nodes and their role in a vRealize Operations cluster Outline how high availability is achieved in vRealize Operations List the components required to enable Continuous Availability (CA) Deploying vRealize Operations Design and size a vRealize Operations cluster Deploy a vRealize Operations node Install a vRealize Operations instance Describe different vRealize Operations deployment scenarios vRealize Operations Concepts Identify product UI components Create and use tags to group objects Use a custom group to group objects vRealize Operations Policies and Certificate Management Describe vRealize Operations certificates Create policies for various types of workloads Explain how policy inheritance works Capacity Optimization Define capacity planning terms Explain capacity planning models Assess the overall capacity of a data center and identify optimization recommendations What-If Scenarios and Costing in vRealize Operations Run what-if scenarios for adding workloads to a data center Discuss the types of cost drivers in vRealize Operations Assess the cost of your data center inventory Performance Optimization Introduction to performance optimization Define the business and operational intentions for a data center Automate the process of optimizing and balancing workloads in data centers Report the results of optimization potential Troubleshooting and Managing Configurations Describe the troubleshooting workbench Recognize how to troubleshoot problems by monitoring alerts Use step-by-step workflows to troubleshoot different vSphere objects Assess your environment's compliance with standards View the configurations of vSphere objects in your environment Operating System and Application Monitoring Describe native service discovery and application monitoring features Configure application monitoring Monitor operating systems and applications by using VMware vRealize® Operations Cloud Appliance™ Custom Alerts Create symptom definitions Create recommendations, actions, and notifications Create alert definitions that monitor resource demand in hosts and VMs Custom Views and Reports Build and use custom views in your environment Create custom reports for presenting data about your environment Custom Dashboards Create dashboards that use predefined and custom widgets Configure widgets to interact with other widgets and other dashboards Configure the Scoreboard widget to use a metric configuration file Manage dashboards by grouping dashboards and sharing dashboards with users Super Metrics Recognize different types of super metrics Create super metrics and associate them with objects Enable super metrics in policies User Access Control Recognize how users are authorized to access objects Determine privilege priorities when a user has multiple privileges Import users and user groups from an LDAP source Extending and managing a vRealize Operations Deployment Identify available management packs in the VMware Marketplace™
Monitor the health of a vRealize Operations cluster Generate a support bundle View vRealize Operations logs and audit reports Perform vRealize Operations cluster management tasks Additional course details: Notes: Delivery by TDSynex, Exit Certified and New Horizons, a VMware Authorised Training Centre (VATC). Nexus Humans VMware vRealize Operations: Install, Configure, Manage [V8.6] training program is a workshop that presents an invigorating mix of sessions, lessons, and masterclasses meticulously crafted to propel your learning expedition forward. This immersive bootcamp-style experience boasts interactive lectures, hands-on labs, and collaborative hackathons, all strategically designed to fortify fundamental concepts. Guided by seasoned coaches, each session offers priceless insights and practical skills crucial for honing your expertise. Whether you're just stepping into the professional realm or are a seasoned professional, this comprehensive course ensures you're equipped with the knowledge and prowess necessary for success. While we feel this is the best course for the VMware vRealize Operations: Install, Configure, Manage [V8.6] course and one of our Top 10, we encourage you to read the course outline to make sure it is the right content for you. Additionally, private sessions, closed classes or dedicated events are available both live online and at our training centres in Dublin and London, as well as at your offices anywhere in the UK, Ireland or across EMEA.
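The course is product-focused and largely UI-driven, but integrators often script against the vRealize Operations Suite API as well. The Python sketch below is a rough illustration of that pattern: acquire a token and list resources. The endpoint paths, header scheme, and payload fields are written from memory of the Suite API and should be treated as assumptions to verify against the API documentation for your vRealize Operations version; the hostname and credentials are placeholders.

    import requests

    VROPS = "https://vrops.example.com"                        # placeholder hostname
    AUTH = {"username": "admin", "password": "changeme"}       # placeholder credentials

    # Acquire an API token (assumed endpoint: /suite-api/api/auth/token/acquire).
    resp = requests.post(
        f"{VROPS}/suite-api/api/auth/token/acquire",
        json=AUTH,
        headers={"Accept": "application/json"},
        verify=False,  # lab-only; use proper certificates in production
    )
    resp.raise_for_status()
    token = resp.json()["token"]

    # List virtual machine resources (assumed endpoint and 'vRealizeOpsToken' auth scheme).
    resources = requests.get(
        f"{VROPS}/suite-api/api/resources",
        params={"resourceKind": "VirtualMachine"},
        headers={"Accept": "application/json", "Authorization": f"vRealizeOpsToken {token}"},
        verify=False,
    ).json()

    for res in resources.get("resourceList", []):
        print(res["identifier"], res["resourceKey"]["name"])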
Duration 3 Days 18 CPD hours This course is intended for The target audience for the SRE Practitioner course is professionals, including: Anyone focused on large-scale service scalability and reliability Anyone interested in modern IT leadership and organizational change approaches Business Managers Business Stakeholders Change Agents Consultants DevOps Practitioners IT Directors IT Managers IT Team Leaders Product Owners Scrum Masters Software Engineers Site Reliability Engineers System Integrators Tool Providers Overview After completing this course, students will have learned: Practical view of how to successfully implement a flourishing SRE culture in your organization. The underlying principles of SRE and an understanding of what it is not in terms of anti-patterns, and how you become aware of them to avoid them. The organizational impact of introducing SRE. Acing the art of SLIs and SLOs in a distributed ecosystem and extending the usage of Error Budgets beyond the normal to innovate and avoid risks. Building security and resilience by design in a distributed, zero-trust environment. How do you implement full-stack observability, distributed tracing and bring about an Observability-driven development culture? Curating data using AI to move from reactive to proactive and predictive incident management. Also, how you use DataOps to build clean data lineage. Why is Platform Engineering so important in building consistency and predictability of SRE culture? Implementing practical Chaos Engineering. Major incident response responsibilities for an SRE based on the incident command framework, and examples of the anatomy of unmanaged incidents. Perspective of why SRE can be considered the purest implementation of DevOps SRE Execution model Understanding the SRE role and understanding why reliability is everyone's problem. SRE success story learnings This course introduces a range of practices for advancing service reliability engineering through a mixture of automation, organizational ways of working and business alignment. Tailored for those focused on large-scale service scalability and reliability. SRE Anti-patterns Rebranding Ops or DevOps or Dev as SRE Users notice an issue before you do Measuring until my Edge False positives are worse than no alerts Configuration management trap for snowflakes The Dogpile: Mob incident response Point fixing Production Readiness Gatekeeper Fail-Safe really? SLO is a Proxy for Customer Happiness Define SLIs that meaningfully measure the reliability of a service from a user's perspective Defining System boundaries in a distributed ecosystem for defining correct SLIs Use error budgets to help your team have better discussions and make better data-driven decisions Overall, Reliability is only as good as the weakest link on your service graph Error thresholds when 3rd party services are used Building Secure and Reliable Systems SRE and their role in Building Secure and Reliable systems Design for Changing Architecture Fault tolerant Design Design for Security Design for Resiliency Design for Scalability Design for Performance Design for Reliability Ensuring Data Security and Privacy Full-Stack Observability Modern Apps are Complex & Unpredictable Slow is the new down Pillars of Observability Implementing Synthetic and End user monitoring Observability driven development Distributed Tracing What happens to Monitoring?
Instrumenting using Libraries and Agents Platform Engineering and AIOps Taking a Platform Centric View solves Organizational scalability challenges such as fragmentation, inconsistency and unpredictability. How do you use AIOps to improve Resiliency How can DataOps help you in the journey A simple recipe to implement AIOps Indicative measurement of AIOps SRE & Incident Response Management SRE Key Responsibilities towards incident response DevOps & SRE and ITIL OODA and SRE Incident Response Closed Loop Remediation and the Advantages Swarming - Food for Thought AI/ML for better incident management Chaos Engineering Navigating Complexity Chaos Engineering Defined Quick Facts about Chaos Engineering Chaos Monkey Origin Story Who is adopting Chaos Engineering Myths of Chaos Chaos Engineering Experiments GameDay Exercises Security Chaos Engineering Chaos Engineering Resources SRE is the Purest form of DevOps Key Principles of SRE SREs help increase Reliability across the product spectrum Metrics for Success Selection of Target areas SRE Execution Model Culture and Behavioral Skills are key SRE Case study Post-class assignments/exercises Non-abstract Large Scale Design (after Day 1) Engineering Instrumentation - Instrumenting Gremlin (after Day 2)
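Since SLIs, SLOs, and error budgets anchor much of this syllabus, here is a short, self-contained Python sketch of the arithmetic involved: compute an availability SLI from request counts, compare it with an SLO target, and report how much of the error budget a period has burned. The numbers are invented for the example.

    def error_budget_report(total_requests, failed_requests, slo_target=0.999):
        """Compute an availability SLI and error-budget consumption for one period."""
        sli = (total_requests - failed_requests) / total_requests     # fraction of good requests
        error_budget = 1.0 - slo_target                               # allowed failure fraction
        budget_used = (failed_requests / total_requests) / error_budget
        return {
            "sli": sli,
            "slo_met": sli >= slo_target,
            "error_budget_used": budget_used,   # 1.0 means the whole budget is gone
        }

    # Example: 10 million requests in a 30-day window, 7,500 of them failed, against a 99.9% SLO.
    report = error_budget_report(total_requests=10_000_000, failed_requests=7_500)
    print(f"SLI: {report['sli']:.5f}  SLO met: {report['slo_met']}  "
          f"Error budget used: {report['error_budget_used']:.0%}")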