Duration 3 Days 18 CPD hours This course is intended for This course is aimed at anyone who wants to harness the power of data analytics in their organization including: Business Analysts, Data Analysts, Reporting and BI professionals Analytics professionals and Data Scientists who would like to learn Python Overview This course teaches delegates with no prior programming or data analytics experience how to perform data manipulation, data analysis and data visualization in Python. Mastery of these techniques and how to apply them to business problems will allow delegates to immediately add value in their workplace by extracting valuable insight from company data to allow better, data-driven decisions. Outcome: After attending this course, delegates will: Be able to write effective Python code Know how to access their data from a variety of sources using Python Know how to identify and fix data quality using Python Know how to manipulate data to create analysis ready data Know how to analyze and visualize data to drive data driven decisioning across your organization Becoming a world class data analytics practitioner requires mastery of the most sophisticated data analytics tools. These programming languages are some of the most powerful and flexible tools in the data analytics toolkit. From business questions to data analytics, and beyond For data analytics tasks to affect business decisions they must be driven by a business question. This section will formally outline how to move an analytics project through key phases of development from business question to business solution. Delegates will be able: to describe and understand the general analytics process. to describe and understand the different types of analytics can be used to derive data driven solutions to business to apply that knowledge to their business context Basic Python Programming Conventions This section will cover the basics of writing R programs. Topics covered will include: What is Python? Using Anaconda Writing Python programs Expressions and objects Functions and arguments Basic Python programming conventions Data Structures in Python This section will look at the basic data structures that Python uses and accessing data in Python. Topics covered will include: Vectors Arrays and matrices Factors Lists Data frames Loading .csv files into Python Connecting to External Data This section will look at loading data from other sources into Python. Topics covered will include: Loading .csv files into a pandas data frame Connecting to and loading data from a database into a panda data frame Data Manipulation in Python This section will look at how Python can be used to perform data manipulation operations to prepare datasets for analytics projects. Topics covered will include: Filtering data Deriving new fields Aggregating data Joining data sources Connecting to external data sources Descriptive Analytics and Basic Reporting in Python This section will explain how Python can be used to perform basic descriptive. Topics covered will include: Summary statistics Grouped summary statistics Using descriptive analytics to assess data quality Using descriptive analytics to created business report Using descriptive analytics to conduct exploratory analysis Statistical Analysis in Python This section will explain how Python can be used to created more interesting statistical analysis. Topics covered will include: Significance tests Correlation Linear regressions Using statistical output to create better business decisions. 
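To give a flavour of the data manipulation and descriptive analytics techniques listed above, here is a minimal pandas sketch. The file name (sales.csv) and the column names (region, units, unit_price) are hypothetical, chosen purely for illustration.

    import pandas as pd

    # Load a .csv file into a pandas data frame
    # ("sales.csv" and its columns are hypothetical, for illustration only)
    sales = pd.read_csv("sales.csv")

    # Filtering data: keep only rows for one region
    uk_sales = sales[sales["region"] == "UK"]

    # Deriving a new field from existing columns
    sales["revenue"] = sales["units"] * sales["unit_price"]

    # Aggregating data: grouped summary statistics per region
    summary = sales.groupby("region")["revenue"].agg(["count", "mean", "sum"])
    print(summary)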
Data Visualisation in Python
This section will explain how Python can be used to create effective charts and visualizations. Topics covered will include: creating different chart types such as bar charts, box plots, histograms and line plots, and formatting charts.

Best Practices, Hints and Tips
This section will go through some best practice considerations that should be adopted if you are applying Python in a business context.
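As an illustration of the charting topics above, a minimal matplotlib sketch; the chart type, labels and figures are made up for illustration only.

    import matplotlib.pyplot as plt

    # Illustrative data only
    regions = ["North", "South", "East", "West"]
    revenue = [120, 95, 143, 80]

    fig, ax = plt.subplots()
    ax.bar(regions, revenue, color="steelblue")   # bar chart
    ax.set_title("Revenue by Region")             # formatting: title and axis labels
    ax.set_xlabel("Region")
    ax.set_ylabel("Revenue (GBP thousands)")
    plt.show()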
Duration 3 Days 18 CPD hours

Before taking this course, delegates should already be familiar with basic analytics techniques, comfortable with basic data manipulation tools such as spreadsheets and databases, and familiar with at least one programming language.

Overview: This course teaches delegates who are already familiar with analytics techniques and at least one programming language how to use that language effectively for three tasks: data manipulation and preparation, statistical analysis, and advanced analytics (including predictive modelling and segmentation). Mastery of these techniques will allow delegates to immediately add value in their workplace by extracting valuable insight from company data to allow better, data-driven decisions.

Outcomes: After completing the course, delegates will be capable of writing production-ready R code to perform advanced analytics tasks, enabling their organisations to make better, data-driven decisions.

Becoming a world-class data analytics practitioner requires mastery of the most sophisticated data analytics tools. These programming languages are some of the most powerful and flexible tools in the data analytics toolkit.

Topic 1 Intro to our chosen language
Topic 2 Basic programming conventions
Topic 3 Data structures
Topic 4 Accessing data
Topic 5 Descriptive statistics
Topic 6 Data visualisation
Topic 7 Statistical analysis
Topic 8 Advanced data manipulation
Topic 9 Advanced analytics: predictive modelling
Topic 10 Advanced analytics: segmentation
Duration 2 Days 12 CPD hours

Audience: data scientists, software developers, IT architects, and technical managers. Participants should have a general knowledge of statistics and programming, and should also be familiar with Python.

Overview: NumPy, pandas, Matplotlib, scikit-learn; Python REPLs; Jupyter Notebooks; data analytics life-cycle phases; data repairing and normalizing; data aggregation and grouping; data visualization; data science algorithms for supervised and unsupervised machine learning.

This course covers theoretical and technical aspects of using Python in Applied Data Science projects and Data Logistics use cases.

Python for Data Science
Using Modules; Listing Methods in a Module; Creating Your Own Modules; List Comprehension; Dictionary Comprehension; String Comprehension; Python 2 vs Python 3; Sets (Python 3+); Python Idioms; the Python Data Science "Ecosystem"; NumPy; NumPy Arrays; NumPy Idioms; pandas; Data Wrangling with pandas' DataFrame; SciPy; scikit-learn; SciPy or scikit-learn?; Matplotlib; Python vs R; Python on Apache Spark; Python Dev Tools and REPLs; Anaconda; IPython; Visual Studio Code; Jupyter; Jupyter Basic Commands; Summary

Applied Data Science
What is Data Science?; the Data Science Ecosystem; Data Mining vs. Data Science; Business Analytics vs. Data Science; Data Science, Machine Learning, AI?; Who is a Data Scientist?; the Data Science Skill Sets Venn Diagram; Data Scientists at Work; Examples of Data Science Projects; An Example of a Data Product; Applied Data Science at Google; Data Science Gotchas; Summary

Data Analytics Life-cycle Phases
The Big Data Analytics Pipeline; Data Discovery Phase; Data Harvesting Phase; Data Priming Phase; Data Logistics and Data Governance; Exploratory Data Analysis; Model Planning Phase; Model Building Phase; Communicating the Results; Production Roll-out; Summary

Repairing and Normalizing Data
Repairing and Normalizing Data; Dealing with Missing Data; Sample Data Set; Getting Info on Null Data; Dropping a Column; Interpolating Missing Data in pandas; Replacing the Missing Values with the Mean Value; Scaling (Normalizing) the Data; Data Preprocessing with scikit-learn; Scaling with the scale() Function; The MinMaxScaler Object; Summary

Descriptive Statistics Computing Features in Python
Descriptive Statistics; Non-uniformity of a Probability Distribution; Using NumPy for Calculating Descriptive Statistics Measures; Finding Min and Max in NumPy; Using pandas for Calculating Descriptive Statistics Measures; Correlation; Regression and Correlation; Covariance; Getting Pairwise Correlation and Covariance Measures; Finding Min and Max in a pandas DataFrame; Summary

Data Aggregation and Grouping
Data Aggregation and Grouping; Sample Data Set; The pandas.core.groupby.SeriesGroupBy Object; Grouping by Two or More Columns; Emulating SQL's WHERE Clause; Pivot Tables; Cross-Tabulation; Summary

Data Visualization with matplotlib
Data Visualization; What is matplotlib?; Getting Started with matplotlib; The Plotting Window; The Figure Options; The matplotlib.pyplot.plot() Function; The matplotlib.pyplot.bar() Function; The matplotlib.pyplot.pie() Function; Subplots; Using the matplotlib.gridspec.GridSpec Object; The matplotlib.pyplot.subplot() Function; Hands-on Exercise; Figures; Saving Figures to File; Visualization with pandas; Working with matplotlib in Jupyter Notebooks; Summary
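The repairing and normalizing topics above might look like the following minimal sketch over a small made-up data set; the interpolation, mean-imputation and MinMaxScaler steps mirror the module headings.

    import numpy as np
    import pandas as pd
    from sklearn.preprocessing import MinMaxScaler

    # Small illustrative data set with missing values
    df = pd.DataFrame({"age": [25, np.nan, 40, 33],
                       "income": [48000, 52000, np.nan, 61000]})

    df.info()                                   # getting info on null data
    df["age"] = df["age"].interpolate()         # interpolating missing data in pandas
    df["income"] = df["income"].fillna(df["income"].mean())  # replacing missing values with the mean

    # Scaling (normalizing) the data with scikit-learn's MinMaxScaler object
    scaler = MinMaxScaler()
    df[["age", "income"]] = scaler.fit_transform(df[["age", "income"]])
    print(df)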
Data Science and ML Algorithms in scikit-learn
Data Science, Machine Learning, AI?; Types of Machine Learning; Terminology: Features and Observations; Continuous and Categorical Features (Variables); Terminology: Axis; The scikit-learn Package; scikit-learn Estimators; Models, Estimators, and Predictors; Common Distance Metrics; The Euclidean Metric; The LIBSVM Format; Scaling of the Features; The Curse of Dimensionality; Supervised vs Unsupervised Machine Learning; Supervised Machine Learning Algorithms; Unsupervised Machine Learning Algorithms; Choose the Right Algorithm; Life-cycles of Machine Learning Development; Data Split for Training and Test Data Sets; Data Splitting in scikit-learn; Hands-on Exercise; Classification Examples; Classifying with k-Nearest Neighbors (SL); The k-Nearest Neighbors Algorithm; The Error Rate; Hands-on Exercise; Dimensionality Reduction; The Advantages of Dimensionality Reduction; Principal Component Analysis (PCA); Hands-on Exercise; Data Blending; Decision Trees (SL); Decision Tree Terminology; Decision Tree Classification in the Context of Information Theory; Information Entropy Defined; The Shannon Entropy Formula; The Simplified Decision Tree Algorithm; Using Decision Trees; Random Forests; SVM; The Naive Bayes Classifier (SL); The Naive Bayesian Probabilistic Model in a Nutshell; The Bayes Formula; Classification of Documents with Naive Bayes; Unsupervised Learning Type: Clustering; Clustering Examples; k-Means Clustering (UL); k-Means Clustering in a Nutshell; k-Means Characteristics; Regression Analysis; The Simple Linear Regression Model; Linear vs Non-Linear Regression; Linear Regression Illustration; Major Underlying Assumptions for Regression Analysis; The Least-Squares Method (LSM); Locally Weighted Linear Regression; Regression Models in Excel; Multiple Regression Analysis; Logistic Regression; Regression vs Classification; Time-Series Analysis; Decomposing Time-Series; Summary

Lab Exercises
Lab 1 - Learning the Lab Environment
Lab 2 - Using Jupyter Notebook
Lab 3 - Repairing and Normalizing Data
Lab 4 - Computing Descriptive Statistics
Lab 5 - Data Grouping and Aggregation
Lab 6 - Data Visualization with matplotlib
Lab 7 - Data Splitting
Lab 8 - The k-Nearest Neighbors Algorithm
Lab 9 - The k-Means Algorithm
Lab 10 - The Random Forest Algorithm
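As a sketch of the data splitting, k-nearest neighbors, and error rate topics above, the following uses scikit-learn's bundled Iris data set; the hyperparameter choices are illustrative only.

    from sklearn.datasets import load_iris
    from sklearn.model_selection import train_test_split
    from sklearn.neighbors import KNeighborsClassifier

    X, y = load_iris(return_X_y=True)

    # Data split for training and test data sets
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.3, random_state=42)

    # Classifying with k-nearest neighbors (a supervised learning algorithm)
    knn = KNeighborsClassifier(n_neighbors=5)
    knn.fit(X_train, y_train)

    # The error rate is 1 minus the accuracy on unseen test data
    print("Error rate:", 1 - knn.score(X_test, y_test))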
Duration 1 Day 6 CPD hours

This course is designed for data scientists with experience of Python who need to learn how to apply their data science and machine learning skills on Azure Databricks.

Overview: After completing this course, you will be able to: provision an Azure Databricks workspace and cluster; use Azure Databricks to train a machine learning model; use MLflow to track experiments and manage machine learning models; and integrate Azure Databricks with Azure Machine Learning.

Azure Databricks is a cloud-scale platform for data analytics and machine learning. In this course, students will learn how to use Azure Databricks to explore, prepare, and model data, and to integrate Databricks machine learning processes with Azure Machine Learning.

Introduction to Azure Databricks
Getting Started with Azure Databricks; Working with Data in Azure Databricks

Training and Evaluating Machine Learning Models
Preparing Data for Machine Learning; Training a Machine Learning Model

Managing Experiments and Models
Using MLflow to Track Experiments; Managing Models

Integrating Azure Databricks and Azure Machine Learning
Tracking Experiments with Azure Machine Learning; Deploying Models
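A minimal sketch of MLflow experiment tracking of the kind this course applies on Azure Databricks; the run name, parameter, and metric values here are invented for illustration (on Databricks, runs are recorded against the workspace's tracking server).

    import mlflow

    # Track one training run: log a hyperparameter and an evaluation metric
    # (values are illustrative, not from a real model)
    with mlflow.start_run(run_name="example-run"):
        mlflow.log_param("max_depth", 5)
        mlflow.log_metric("accuracy", 0.92)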
Duration 2 Days 12 CPD hours

This course is intended for DevOps engineers, software developers, telecommunications professionals, architects, and quality assurance and site reliability professionals.

Overview: Automate basic freestyle projects; Jenkins Pipelines and Groovy programming; software lifecycle management with Jenkins; popular plugins; scaling options; integrating Jenkins with Git and GitHub (as well as other software control management platforms); triggering Jenkins with webhooks; deploying into Docker and Kubernetes; CI/CD with Jenkins.

This course covers the fundamentals necessary to deploy and utilize the Jenkins automation server. Jenkins enables users to immediately begin automating both their individual and collaborative workflows. Jenkins is a proven solution for a wide variety of tasks, ranging from the helpful automation of scripts (such as Python and Ansible) to creating complex pipelines that govern the technical parts of not only Continuous Integration but Continuous Delivery (CI/CD) as well. Jenkins is free, open source, and easily controlled with a simple web-based UI; it can be expanded by third-party plugins and is deployable on nearly any on-site (Linux, Windows and Mac) or cloud platform.

Overview of Jenkins
Overview of Continuous Integration and Continuous Deployment (CI/CD)
Understanding Git and GitHub
Git Branching
Methods for Installing Jenkins
Jenkins Dashboard
Jenkins Jobs
Getting Started with Freestyle Jobs
Triggering Builds
HTTP Webhooks
Augmenting Jenkins with Plugins
Overview of Docker and Dockerfile for Building and Launching Images
Pipeline Jobs for Continuous Integration and Continuous Deployment
Pipeline Build Stage
Pipeline Testing Stage
Post-Build Actions
SMTP and Other Notifications
Programming Pipelines with Groovy
More Groovy Programming Essentials
Extracting Jenkins Data Analytics to Support Project Management
Troubleshooting Failures
Auditing stdout and stderr with Jenkins
Jenkins REST API
Controlling the Jenkins API with Python
Jenkins Security
Scaling Jenkins
Jenkins CLI
Building a Kubernetes Cluster and Deploying Jenkins
How to start successfully using Jenkins to automate aspects of your job the moment this course ends
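To illustrate the "Controlling the Jenkins API with Python" topic, a minimal sketch against the Jenkins JSON REST API; the server URL, credentials, and job name are hypothetical, and hardened servers may additionally require a CSRF crumb.

    import requests

    # Hypothetical Jenkins server and API token, for illustration only
    JENKINS_URL = "http://jenkins.example.com:8080"
    AUTH = ("admin", "api-token")

    # Query the server's job list via the JSON REST API
    resp = requests.get(f"{JENKINS_URL}/api/json", auth=AUTH)
    print([job["name"] for job in resp.json()["jobs"]])

    # Trigger a build of a freestyle job (hypothetical job name)
    requests.post(f"{JENKINS_URL}/job/example-job/build", auth=AUTH)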
Duration 2 Days 12 CPD hours

If you are a data analyst, data scientist, or a business analyst who wants to get started with using Python and machine learning techniques to analyze data and predict outcomes, this course is for you. Basic knowledge of computer programming and data analytics is a must. Familiarity with mathematical concepts such as algebra and basic statistics will be useful.

Overview: By the end of this course, you will have the skills you need to confidently use various machine learning algorithms to perform detailed data analysis and extract meaningful insights from data.

This course is designed to give you practical guidance on industry-standard data analysis and machine learning tools in Python, with the help of realistic data. The course will help you understand how you can use pandas and Matplotlib to critically examine a dataset with summary statistics and graphs, and extract the insights you seek to derive. You will continue to build on your knowledge as you learn how to prepare data and feed it to machine learning algorithms, such as regularized logistic regression and random forest, using the scikit-learn package. You'll discover how to tune the algorithms to provide the best predictions on new and unseen data. As you delve into later sections, you'll be able to understand the working and output of these algorithms, and gain insight into not only the predictive capabilities of the models but also their reasons for making these predictions.

Data Exploration and Cleaning
Python and the Anaconda Package Management System; Different Types of Data Science Problems; Loading the Case Study Data with Jupyter and pandas; Data Quality Assurance and Exploration; Exploring the Financial History Features in the Dataset; Activity 1: Exploring Remaining Financial Features in the Dataset

Introduction to Scikit-Learn and Model Evaluation
Introduction; Model Performance Metrics for Binary Classification; Activity 2: Performing Logistic Regression with a New Feature and Creating a Precision-Recall Curve

Details of Logistic Regression and Feature Exploration
Introduction; Examining the Relationships between Features and the Response; Univariate Feature Selection: What It Does and Doesn't Do; Activity 3: Fitting a Logistic Regression Model and Directly Using the Coefficients

The Bias-Variance Trade-off
Introduction; Estimating the Coefficients and Intercepts of Logistic Regression; Cross-Validation: Choosing the Regularization Parameter and Other Hyperparameters; Activity 4: Cross-Validation and Feature Engineering with the Case Study Data

Decision Trees and Random Forests
Introduction; Decision Trees; Random Forests: Ensembles of Decision Trees; Activity 5: Cross-Validation Grid Search with Random Forest

Imputation of Missing Data, Financial Analysis, and Delivery to Client
Introduction; Review of Modeling Results; Dealing with Missing Data: Imputation Strategies; Activity 6: Deriving Financial Insights; Final Thoughts on Delivering the Predictive Model to the Client
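As a sketch of the cross-validation grid search with random forest covered in Activity 5, the following uses a bundled scikit-learn data set rather than the course's case study data; the hyperparameter grid is illustrative only.

    from sklearn.datasets import load_breast_cancer
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import GridSearchCV, train_test_split

    X, y = load_breast_cancer(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

    # Cross-validation grid search over two random forest hyperparameters
    grid = GridSearchCV(
        RandomForestClassifier(random_state=42),
        param_grid={"n_estimators": [50, 100], "max_depth": [3, 6]},
        cv=5,
    )
    grid.fit(X_train, y_train)

    # Best hyperparameters found, and accuracy on held-out test data
    print(grid.best_params_, grid.score(X_test, y_test))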
Duration 4 Days 24 CPD hours

This course is designed for data analysts, business intelligence specialists, developers, system architects, and database administrators.

Overview: Skills gained in this training include: the features that Pig, Hive, and Impala offer for data acquisition, storage, and analysis; the fundamentals of Apache Hadoop and data ETL (extract, transform, load), ingestion, and processing with Hadoop; how Pig, Hive, and Impala improve productivity for typical analysis tasks; joining diverse datasets to gain valuable business insight; and performing real-time, complex queries on datasets.

Cloudera University's four-day data analyst training course, focusing on Apache Pig, Hive, and Cloudera Impala, will teach you to apply traditional data analytics and business intelligence skills to big data.

Hadoop Fundamentals
The Motivation for Hadoop; Hadoop Overview; Data Storage: HDFS; Distributed Data Processing: YARN, MapReduce, and Spark; Data Processing and Analysis: Pig, Hive, and Impala; Data Integration: Sqoop; Other Hadoop Data Tools; Exercise Scenarios Explanation

Introduction to Pig
What Is Pig?; Pig's Features; Pig Use Cases; Interacting with Pig

Basic Data Analysis with Pig
Pig Latin Syntax; Loading Data; Simple Data Types; Field Definitions; Data Output; Viewing the Schema; Filtering and Sorting Data; Commonly-Used Functions

Processing Complex Data with Pig
Storage Formats; Complex/Nested Data Types; Grouping; Built-In Functions for Complex Data; Iterating Grouped Data

Multi-Dataset Operations with Pig
Techniques for Combining Data Sets; Joining Data Sets in Pig; Set Operations; Splitting Data Sets

Pig Troubleshooting & Optimization
Troubleshooting Pig; Logging; Using Hadoop's Web UI; Data Sampling and Debugging; Performance Overview; Understanding the Execution Plan; Tips for Improving the Performance of Your Pig Jobs

Introduction to Hive & Impala
What Is Hive?; What Is Impala?; Schema and Data Storage; Comparing Hive to Traditional Databases; Hive Use Cases

Querying with Hive & Impala
Databases and Tables; Basic Hive and Impala Query Language Syntax; Data Types; Differences Between Hive and Impala Query Syntax; Using Hue to Execute Queries; Using the Impala Shell

Data Management
Data Storage; Creating Databases and Tables; Loading Data; Altering Databases and Tables; Simplifying Queries with Views; Storing Query Results

Data Storage & Performance
Partitioning Tables; Choosing a File Format; Managing Metadata; Controlling Access to Data

Relational Data Analysis with Hive & Impala
Joining Datasets; Common Built-In Functions; Aggregation and Windowing

Working with Impala
How Impala Executes Queries; Extending Impala with User-Defined Functions; Improving Impala Performance

Analyzing Text and Complex Data with Hive
Complex Values in Hive; Using Regular Expressions in Hive; Sentiment Analysis and N-Grams; Conclusion

Hive Optimization
Understanding Query Performance; Controlling Job Execution Plan; Bucketing; Indexing Data

Extending Hive
SerDes; Data Transformation with Custom Scripts; User-Defined Functions; Parameterized Queries

Choosing the Best Tool for the Job
Comparing MapReduce, Pig, Hive, Impala, and Relational Databases; Which to Choose?
Duration 1 Day 6 CPD hours

This class is intended for the following: data analysts, data scientists, and business analysts getting started with Google Cloud Platform; individuals responsible for designing pipelines and architectures for data processing, creating and maintaining machine learning and statistical models, querying datasets, visualizing query results and creating reports; and executives and IT decision makers evaluating Google Cloud Platform for use by data scientists.

Overview: This course teaches students the following skills: identify the purpose and value of the key Big Data and Machine Learning products in the Google Cloud Platform; use Cloud SQL and Cloud Dataproc to migrate existing MySQL and Hadoop/Pig/Spark/Hive workloads to Google Cloud Platform; employ BigQuery and Cloud Datalab to carry out interactive data analysis; train and use a neural network using TensorFlow; employ ML APIs; and choose between different data processing products on the Google Cloud Platform.

This course introduces participants to the Big Data and Machine Learning capabilities of Google Cloud Platform (GCP). It provides a quick overview of the Google Cloud Platform and a deeper dive into the data processing capabilities.

Introducing Google Cloud Platform
Google Platform Fundamentals Overview; Google Cloud Platform Big Data Products

Compute and Storage Fundamentals
CPUs on demand (Compute Engine); a global filesystem (Cloud Storage); CloudShell
Lab: Set up an Ingest-Transform-Publish data processing pipeline

Data Analytics on the Cloud
Stepping-stones to the cloud; Cloud SQL: your SQL database on the cloud
Lab: Importing data into Cloud SQL and running queries
Spark on Dataproc
Lab: Machine Learning Recommendations with Spark on Dataproc

Scaling Data Analysis
Fast random access; Datalab; BigQuery
Lab: Build a machine learning dataset

Machine Learning
Machine Learning with TensorFlow
Lab: Carry out ML with TensorFlow
Pre-built models for common needs
Lab: Employ ML APIs

Data Processing Architectures
Message-oriented architectures with Pub/Sub; creating pipelines with Dataflow; reference architecture for real-time and batch data processing

Summary
Why GCP?; Where to go from here; Additional Resources
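A minimal sketch of the kind of interactive BigQuery analysis this course covers, using the google-cloud-bigquery client library and a public sample data set; it assumes the library is installed and GCP application default credentials are configured.

    from google.cloud import bigquery  # requires google-cloud-bigquery and GCP credentials

    client = bigquery.Client()  # picks up the project from the environment

    # Interactive analysis against a BigQuery public sample data set
    sql = """
        SELECT name, SUM(number) AS total
        FROM `bigquery-public-data.usa_names.usa_1910_2013`
        GROUP BY name
        ORDER BY total DESC
        LIMIT 5
    """
    for row in client.query(sql).result():
        print(row.name, row.total)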
Duration 2 Days 12 CPD hours

The primary audience for this course is as follows: system engineers, system administrators, architects, channel partners, and data analysts.

Overview: Upon completing this course, you will be able to meet these overall objectives: describe how harnessing the power of your machine data enables you to make decisions based on facts, not intuition or best guesses; reduce the time you spend investigating incidents by up to 90%; find and fix problems faster by learning new technical skills for real-world scenarios; get started with Splunk Enterprise, from installation and data onboarding to running search queries to creating simple reports and dashboards; accelerate time to value with turnkey Splunk integrations for dozens of Cisco products and platforms; and ensure faster, more predictable Splunk deployments with a proven Cisco Validated Design and the latest Cisco UCS server.

This course will cover how Splunk software scales to collect and index hundreds of terabytes of data per day across multi-geography, multi-datacenter and cloud-based infrastructures. Using Cisco's Unified Computing System (UCS) Integrated Infrastructure for Big Data offers linear scalability along with operational simplification for single-rack and multiple-rack deployments.

Cisco Integrated Infrastructure for Big Data and Splunk
What is Cisco CPA?; Architecture Benefits for Splunk; Components of IIBD and Relationship to Splunk Architecture; Cisco UCS Integrated Infrastructure for Big Data with Splunk Enterprise

Splunk: Big Data Analytics
NFS Configurations for the Splunk Frozen Data Storage; NFS Client Configurations on the Indexers

Splunk: Start Searching
Chargeback Reporting; Building Custom Reports Using the Report Builder

Application Containers
Understanding Application Containers

Understanding Advanced Tasks
Task Library & Inputs; CLI & SSH Task; Understanding Compound Tasks; Custom Tasks; Open Automation

Troubleshooting UCS Director
Restart; Module Loading; Report Errors; Feature Loading; Report Registration

REST API: Automation
UCS Director Developer Tools; Accessing REST Using a REST Client; Accessing REST Using the REST API Browser

Open Automation SDK
Overview; Open Automation vs. Custom Tasks; Use Cases

UCS Director PowerShell API
Cisco UCS Director PowerShell Console; Installing & Configuring; Working with Cmdlets

Cloupia Script
Structure; Inputs & Outputs; Design; Examples
Duration 2 Days 12 CPD hours

This course is intended for business leaders, including managers/supervisors in the following roles: developer, architect, and video operator.

Overview: In this course, you will learn to: articulate the essential terms and concepts fundamental to video compression and distribution; describe the four fundamental stages of video streaming workflows (ingest, process, store and deliver); explain the importance of security in the AWS Cloud and how it is applied in video streaming workflows; analyze video streaming workflow diagrams using AWS services, based on simple to complex use cases; describe some of the key variables that influence workflow decisions; recognize how other AWS services for compliance, storage, and compute interact with AWS Media Services in video streaming workflows, and the functions they perform; describe strategies to test or prototype workflows to mitigate risk and cost impacts and optimize video streaming workflows; use the AWS Management Console to build and run simple video streaming workflows for live and video-on-demand content; recognize the automation and data analytics available for Media Services when used with AWS AI, and explore media-specific use cases for these services; and identify the next steps in exploring migration to the cloud for one or more Media Services.

This course covers the media and cloud fundamentals that will empower you to develop a cloud migration strategy for media workflows in support of business goals. The course covers important concepts related to video processing and delivery, the variables that can impact migration decisions, and real-world examples of hybrid and cloud use cases for AWS Media Services. It also introduces security, artificial intelligence, and analytics concepts to help you consider how AWS Media Services fit into your overall cloud strategy.

Module 1: Important Video Concepts
Video Metrics; Video Compression; Video Distribution; Major Protocols Used in Video Streaming

Module 2: Anatomy of Streaming Workflows
Ingest; Process; Store; Deliver

Module 3: Using AWS Services in Media Workflows: Video-on-Demand (VOD)
Introduction to AWS Media Services; Security; Variables Impacting Workflow Design; VOD Simple Use Cases; VOD Advanced Use Cases
Lab 1: Build and run a simple video streaming workflow for VOD content

Module 4: Using AWS Services in Media Workflows: Live Streaming
Challenges of Live Streaming; Live Streaming Simple Use Cases; Live Streaming Advanced Use Cases
Lab 2: Build and run a simple video streaming workflow for live content

Module 5: Optimizing Workflows
Cost Considerations; Mitigating Risk; Monitoring and Automation; Exploring Migration Options