Duration 1 Day 6 CPD hours
This course is intended for data scientists with experience of Python who need to learn how to apply their data science and machine learning skills on Azure Databricks.
Overview After completing this course, you will be able to: Provision an Azure Databricks workspace and cluster; Use Azure Databricks to train a machine learning model; Use MLflow to track experiments and manage machine learning models; Integrate Azure Databricks with Azure Machine Learning.
Azure Databricks is a cloud-scale platform for data analytics and machine learning. In this course, students will learn how to use Azure Databricks to explore, prepare, and model data, and to integrate Databricks machine learning processes with Azure Machine Learning.
Introduction to Azure Databricks: Getting Started with Azure Databricks; Working with Data in Azure Databricks
Training and Evaluating Machine Learning Models: Preparing Data for Machine Learning; Training a Machine Learning Model
Managing Experiments and Models: Using MLflow to Track Experiments; Managing Models
Integrating Azure Databricks and Azure Machine Learning: Tracking Experiments with Azure Machine Learning; Deploying Models
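To give a flavour of the MLflow tracking covered in the Managing Experiments and Models module, here is a minimal Python sketch; the experiment path, parameter, and metric value are illustrative assumptions rather than course lab values:

```python
# A minimal MLflow tracking sketch (illustrative, not the course lab).
import mlflow

mlflow.set_experiment("/Shared/demo-experiment")  # hypothetical experiment path

with mlflow.start_run():                      # everything logged below is grouped into one run
    mlflow.log_param("regularization", 0.1)   # a hyperparameter
    mlflow.log_metric("rmse", 42.7)           # an evaluation metric
```

On Azure Databricks the tracking server is built in, so runs logged this way appear in the workspace's Experiments UI.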
Duration 4 Days 24 CPD hours
The primary audience for this course is data professionals, data architects, and business intelligence professionals who want to learn about data engineering and building analytical solutions using data platform technologies that exist on Microsoft Azure. The secondary audience for this course includes data analysts and data scientists who work with analytical solutions built on Microsoft Azure.
In this course, the student will learn how to implement and manage data engineering workloads on Microsoft Azure, using Azure services such as Azure Synapse Analytics, Azure Data Lake Storage Gen2, Azure Stream Analytics, Azure Databricks, and others. The course focuses on common data engineering tasks such as orchestrating data transfer and transformation pipelines, working with data files in a data lake, creating and loading relational data warehouses, capturing and aggregating streams of real-time data, and tracking data assets and lineage.
Prerequisites Successful students start this course with knowledge of cloud computing and core data concepts and professional experience with data solutions: AZ-900T00 Microsoft Azure Fundamentals; DP-900T00 Microsoft Azure Data Fundamentals
1 - Introduction to data engineering on Azure: What is data engineering; Important data engineering concepts; Data engineering in Microsoft Azure
2 - Introduction to Azure Data Lake Storage Gen2: Understand Azure Data Lake Storage Gen2; Enable Azure Data Lake Storage Gen2 in Azure Storage; Compare Azure Data Lake Store to Azure Blob storage; Understand the stages for processing big data; Use Azure Data Lake Storage Gen2 in data analytics workloads
3 - Introduction to Azure Synapse Analytics: What is Azure Synapse Analytics; How Azure Synapse Analytics works; When to use Azure Synapse Analytics
4 - Use Azure Synapse serverless SQL pool to query files in a data lake: Understand Azure Synapse serverless SQL pool capabilities and use cases; Query files using a serverless SQL pool; Create external database objects
5 - Use Azure Synapse serverless SQL pools to transform data in a data lake: Transform data files with the CREATE EXTERNAL TABLE AS SELECT statement; Encapsulate data transformations in a stored procedure; Include a data transformation stored procedure in a pipeline
6 - Create a lake database in Azure Synapse Analytics: Understand lake database concepts; Explore database templates; Create a lake database; Use a lake database
7 - Analyze data with Apache Spark in Azure Synapse Analytics: Get to know Apache Spark; Use Spark in Azure Synapse Analytics; Analyze data with Spark; Visualize data with Spark
8 - Transform data with Spark in Azure Synapse Analytics: Modify and save dataframes; Partition data files; Transform data with SQL
9 - Use Delta Lake in Azure Synapse Analytics: Understand Delta Lake; Create Delta Lake tables; Create catalog tables; Use Delta Lake with streaming data; Use Delta Lake in a SQL pool
10 - Analyze data in a relational data warehouse: Design a data warehouse schema; Create data warehouse tables; Load data warehouse tables; Query a data warehouse
11 - Load data into a relational data warehouse: Load staging tables; Load dimension tables; Load time dimension tables; Load slowly changing dimensions; Load fact tables; Perform post-load optimization
12 - Build a data pipeline in Azure Synapse Analytics: Understand pipelines in Azure Synapse Analytics; Create a pipeline in Azure Synapse Studio; Define data flows; Run a pipeline
13 - Use Spark Notebooks in an Azure Synapse Pipeline:
Understand Synapse Notebooks and Pipelines; Use a Synapse notebook activity in a pipeline; Use parameters in a notebook
14 - Plan hybrid transactional and analytical processing using Azure Synapse Analytics: Understand hybrid transactional and analytical processing patterns; Describe Azure Synapse Link
15 - Implement Azure Synapse Link with Azure Cosmos DB: Enable a Cosmos DB account to use Azure Synapse Link; Create an analytical store enabled container; Create a linked service for Cosmos DB; Query Cosmos DB data with Spark; Query Cosmos DB with Synapse SQL
16 - Implement Azure Synapse Link for SQL: What is Azure Synapse Link for SQL?; Configure Azure Synapse Link for Azure SQL Database; Configure Azure Synapse Link for SQL Server 2022
17 - Get started with Azure Stream Analytics: Understand data streams; Understand event processing; Understand window functions
18 - Ingest streaming data using Azure Stream Analytics and Azure Synapse Analytics: Stream ingestion scenarios; Configure inputs and outputs; Define a query to select, filter, and aggregate data; Run a job to ingest data
19 - Visualize real-time data with Azure Stream Analytics and Power BI: Use a Power BI output in Azure Stream Analytics; Create a query for real-time visualization; Create real-time data visualizations in Power BI
20 - Introduction to Microsoft Purview: What is Microsoft Purview?; How Microsoft Purview works; When to use Microsoft Purview
21 - Integrate Microsoft Purview and Azure Synapse Analytics: Catalog Azure Synapse Analytics data assets in Microsoft Purview; Connect Microsoft Purview to an Azure Synapse Analytics workspace; Search a Purview catalog in Synapse Studio; Track data lineage in pipelines
22 - Explore Azure Databricks: Get started with Azure Databricks; Identify Azure Databricks workloads; Understand key concepts
23 - Use Apache Spark in Azure Databricks: Get to know Spark; Create a Spark cluster; Use Spark in notebooks; Use Spark to work with data files; Visualize data
24 - Run Azure Databricks Notebooks with Azure Data Factory: Understand Azure Databricks notebooks and pipelines; Create a linked service for Azure Databricks; Use a Notebook activity in a pipeline; Use parameters in a notebook
Additional course details: Nexus Humans DP-203T00 Data Engineering on Microsoft Azure training program is a workshop that presents an invigorating mix of sessions, lessons, and masterclasses meticulously crafted to propel your learning expedition forward. This immersive bootcamp-style experience boasts interactive lectures, hands-on labs, and collaborative hackathons, all strategically designed to fortify fundamental concepts. Guided by seasoned coaches, each session offers priceless insights and practical skills crucial for honing your expertise. Whether you're just stepping into the realm of professional skills or are a seasoned professional, this comprehensive course ensures you're equipped with the knowledge and prowess necessary for success. While we feel this is the best course for the DP-203T00 Data Engineering on Microsoft Azure course and one of our Top 10, we encourage you to read the course outline to make sure it is the right content for you. Additionally, private sessions, closed classes or dedicated events are available both live online and at our training centres in Dublin and London, as well as at your offices anywhere in the UK, Ireland or across EMEA.
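To give a flavour of the Delta Lake work in module 9, here is a minimal PySpark sketch; the storage path and table name are hypothetical, and it assumes a cluster with Delta Lake available, as in Synapse Spark pools and Databricks:

```python
# A minimal Delta Lake sketch in PySpark (paths and names are hypothetical).
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()   # pre-created in Synapse/Databricks notebooks

df = spark.read.load("abfss://data@mylake.dfs.core.windows.net/orders/",
                     format="csv", header=True)                     # read raw files from the lake
df.write.format("delta").mode("overwrite").save("/delta/orders")    # write a Delta table
spark.sql("CREATE TABLE IF NOT EXISTS orders USING DELTA LOCATION '/delta/orders'")  # catalog it
```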
About this Training Course This 3 full-day training course will introduce participants to the Microsoft Power BI® software solution for extracting, manipulating, visualising and analysing data. This is a very practical, hands-on course that takes participants through a series of exercises which help users understand the Power BI® environment, how to use the key areas of functionality, and how to apply the tools it contains to design and produce analyses of their own data. The first two days focus on learning the key concepts and practising these using clean, simple datasets. The third day provides participants with the opportunity to apply what they've learned to their own data. This makes the course far more relevant and meaningful for them, it allows our facilitator to help them structure their data models, queries and DAX formulas correctly, and it allows our facilitator to help them solve any additional problems that may arise but which were not covered as part of the standard course. In addition, at the end of the day, each participant walks away with something of real, practical use for their job role. Many previous participants have remarked that they obtained the most value from the course during the third day because otherwise, they wouldn't be able to do what they need to do.
This is an introductory course and although it does not assume any prior experience with Power BI®, participants will gain much more from the course if they have at least used Power BI® a little prior to attending. Participants who have taught themselves Power BI® will also benefit from attending as the course will fill in a number of gaps in their knowledge and will also extend what they know. A general understanding of databases, Excel formulas, and Excel Pivot Tables is useful though not essential. Comprehensive course notes, exercises and completed solutions are included.
Microsoft® Power BI® is a trademark of Microsoft Corporation in the United States and/or other countries.
Training Objectives Upon completion of this training course, participants will be able to: Confidently use the Power BI® solution, including Power BI® Desktop, PowerBI®.com and the Power BI® Gateway; Extract data from a variety of data sources and manipulate the data extracted so it is ready for analysis; Combine data sources together and gain an introductory understanding of the M language; Write formulas using the DAX language for generating custom columns, measures and tables; Design reports and dashboards using a wide range of both built-in and custom visuals; Publish reports and dashboards to PowerBI®.com; Share reports and dashboards with others using PowerBI®.com; Customise reports and dashboards so that different user groups automatically see their own personalised views
Target Audience This training course is intended for: Financial Analysts; Accountants; Budgeting and planning specialists; Treasury Risk Managers; Strategic Planners
Course Level Basic or Foundation
Trainer Your expert course leader has a Masters (Applied Finance & Investment), B.Comm (Accounting & Information Systems), CISA, FAIM, F Fin and is a Microsoft Certified Excel Expert. He has over 20 years' experience in financial modelling, forecasting, valuation, model auditing, and management reporting for clients throughout the world. He is skilled in the development and maintenance of analytical tools and financial models for middle-market companies to large corporates, at all levels of complexity, in both domestic and international settings. He has trained delegates from a wide variety of Oil & Gas companies including Chevron, Woodside, BHP Billiton, Petronas, Carigali, Shell, Nippon, Eni, Pertamina, Inpex, and many more. He provides training in financial modelling for companies throughout the Asia, Oceania, Middle East and African regions. Before his current role, he spent 6 years working in the Corporate and IT Consulting divisions of a large, multinational Chartered Accounting firm. He is the author of a number of white papers on financial modelling on subjects such as Financial Modelling Best Practices and Financial Model Auditing.
Highlights from his oil and gas experience include: Development of economic models to assist Decision Analysts modelling a wide range of scenarios for multinational oil & gas assets; Auditing and further development of life-of-project models for Chevron's Strategic Planning Division analysing their North West Shelf assets; Development of business plan and budgeting models for multinational oil & gas assets; Development of cash flow and taxation models for a variety of oil & gas companies; Consulting on Sarbanes-Oxley spreadsheet remediation and risk assessment.
POST TRAINING COACHING SUPPORT (OPTIONAL) To further optimise your learning experience from our courses, we also offer individualised 'One to One' coaching support for 2 hours post training. We can help improve your competence in your chosen area of interest, based on your learning needs and available hours. This is a great opportunity to improve your capability and confidence in a particular area of expertise. It will be delivered over a secure video conference call by one of our senior trainers. They will work with you to create a tailor-made coaching program that will help you achieve your goals faster. Request further information on post-training support and the fees applicable.
Accreditations and Affiliations
Welcome to "The Fintech Frontier: Why FDs Need to Know About Fintech," the podcast where we delve into the world of financial technology.
There are numerous areas where fintech can make a significant impact. For example, payment processing and reconciliation can be streamlined through digital payment solutions and automated tools. Data analytics and artificial intelligence can enhance financial forecasting, risk management, and fraud detection. Blockchain technology can revolutionise supply chain finance and streamline processes involving multiple parties. By understanding the capabilities of these fintech solutions, FDs can identify areas for improvement and select the right technologies to optimise their financial operations.
Additionally, fintech can greatly enhance financial reporting and analysis. Advanced data analytics tools can extract meaningful insights from vast amounts of financial data, enabling FDs to make data-driven decisions and identify trends and patterns. Automation of repetitive tasks, such as data entry and reconciliation, reduces the risk of errors and frees up valuable time for FDs to focus on strategic initiatives. The adoption of cloud-based financial management systems also provides flexibility, scalability, and real-time access to financial data, empowering FDs to make informed decisions on the go.
With the rapid pace of fintech advancements, how can FDs stay up to date and navigate the evolving fintech landscape? Continuous learning and engagement with the fintech community are key. Attend industry conferences, participate in webinars and workshops, and engage with fintech startups and established players. Networking with professionals in the field, joining fintech-focused associations, and following relevant publications and blogs can help FDs stay abreast of the latest fintech developments. Embracing a mindset of curiosity and adaptability is crucial in navigating the ever-changing fintech landscape. I would also encourage FDs to foster partnerships and collaborations with fintech companies. Engage in conversations with fintech providers to understand their solutions and explore potential synergies. By forging strategic partnerships, FDs can gain access to cutting-edge technologies and co-create innovative solutions tailored to their organisation's unique needs.
As we conclude, do you have any final thoughts or advice for our FD audience regarding fintech? Embrace fintech as an opportunity, not a threat. Seek to understand its potential and how it can align with your organisation's goals and strategies. Be open to experimentation and pilot projects to test the viability of fintech solutions. Remember that fintech is a tool to enhance and optimise financial processes, and as FDs, we have a crucial role in driving its effective implementation.
https://www.fdcapital.co.uk/podcast/the-fintech-frontier-why-fds-need-to-know-about-fintech/
Duration 2 Days 12 CPD hours
This course is intended for: DevOps Engineers; Software Developers; Telecommunications Professionals; Architects; Quality Assurance & Site Reliability Professionals
Overview Automate basic freestyle projects; Jenkins Pipelines and Groovy programming; Software lifecycle management with Jenkins; Popular plugins; Scaling options; Integrating Jenkins with Git and GitHub (as well as other source control management platforms); Triggering Jenkins with webhooks; Deploying into Docker and Kubernetes; CI/CD with Jenkins
This course covers the fundamentals necessary to deploy and utilize the Jenkins automation server. Jenkins enables users to immediately begin automating both their individual and collaborative workflows. Jenkins is a proven solution for a wide variety of tasks ranging from the helpful automation of scripts (such as Python and Ansible) to creating complex pipelines that govern the technical parts of not only Continuous Integration, but Continuous Delivery (CI/CD) as well. Jenkins is free, open source, and easily controlled with a simple web-based UI. It can be expanded by third-party plugins and is deployable on nearly any on-site (Linux, Windows and Mac) or cloud platform.
Overview of Jenkins; Overview of Continuous Integration and Continuous Deployment (CI/CD); Understanding Git and GitHub; Git Branching; Methods for Installing Jenkins; Jenkins Dashboard; Jenkins Jobs; Getting Started with Freestyle Jobs; Triggering Builds; HTTP Webhooks; Augmenting Jenkins with Plugins; Overview of Docker and Dockerfile for Building and Launching Images; Pipeline Jobs for Continuous Integration and Continuous Deployment; Pipeline Build Stage; Pipeline Testing Stage; Post-Build Actions; SMTP and Other Notifications; Programming Pipelines with Groovy; More Groovy Programming Essentials; Extracting Jenkins Data Analytics to Support Project Management; Troubleshooting Failures; Auditing stdout and stderr with Jenkins; Jenkins REST API; Controlling the Jenkins API with Python; Jenkins Security; Scaling Jenkins; Jenkins CLI; Building a Kubernetes Cluster and Deploying Jenkins; How to start successfully using Jenkins to automate aspects of your job the moment this course ends
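Since the outline includes controlling the Jenkins API with Python, here is a minimal sketch of triggering a job and reading its status over the REST API; the host, job name, and credentials are illustrative assumptions:

```python
# A minimal Jenkins REST API sketch (host, job, and credentials are hypothetical).
import requests

JENKINS = "http://localhost:8080"
AUTH = ("admin", "api-token")   # API-token auth avoids the CSRF crumb requirement

# Trigger a build, then read the status of the most recent build.
requests.post(f"{JENKINS}/job/demo-job/build", auth=AUTH, timeout=10).raise_for_status()
info = requests.get(f"{JENKINS}/job/demo-job/lastBuild/api/json", auth=AUTH, timeout=10).json()
print(info.get("result"))       # "SUCCESS", "FAILURE", or None while still running
```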
Duration 2 Days 12 CPD hours
This course is intended for data analysts, data scientists, and business analysts who want to get started with using Python and machine learning techniques to analyze data and predict outcomes. Basic knowledge of computer programming and data analytics is a must. Familiarity with mathematical concepts such as algebra and basic statistics will be useful.
Overview By the end of this course, you will have the skills you need to confidently use various machine learning algorithms to perform detailed data analysis and extract meaningful insights from data.
This course is designed to give you practical guidance on industry-standard data analysis and machine learning tools in Python, with the help of realistic data. The course will help you understand how you can use pandas and Matplotlib to critically examine a dataset with summary statistics and graphs, and extract the insights you seek to derive. You will continue to build on your knowledge as you learn how to prepare data and feed it to machine learning algorithms, such as regularized logistic regression and random forest, using the scikit-learn package. You'll discover how to tune the algorithms to provide the best predictions on new and unseen data. As you delve into later sections, you'll be able to understand the working and output of these algorithms and gain insight into not only the predictive capabilities of the models but also their reasons for making these predictions.
Data Exploration and Cleaning: Python and the Anaconda Package Management System; Different Types of Data Science Problems; Loading the Case Study Data with Jupyter and pandas; Data Quality Assurance and Exploration; Exploring the Financial History Features in the Dataset; Activity 1: Exploring Remaining Financial Features in the Dataset
Introduction to Scikit-Learn and Model Evaluation: Introduction; Model Performance Metrics for Binary Classification; Activity 2: Performing Logistic Regression with a New Feature and Creating a Precision-Recall Curve
Details of Logistic Regression and Feature Exploration: Introduction; Examining the Relationships between Features and the Response; Univariate Feature Selection: What It Does and Doesn't Do; Activity 3: Fitting a Logistic Regression Model and Directly Using the Coefficients
The Bias-Variance Trade-off: Introduction; Estimating the Coefficients and Intercepts of Logistic Regression; Cross-Validation: Choosing the Regularization Parameter and Other Hyperparameters; Activity 4: Cross-Validation and Feature Engineering with the Case Study Data
Decision Trees and Random Forests: Introduction; Decision Trees; Random Forests: Ensembles of Decision Trees; Activity 5: Cross-Validation Grid Search with Random Forest
Imputation of Missing Data, Financial Analysis, and Delivery to Client: Introduction; Review of Modeling Results; Dealing with Missing Data: Imputation Strategies; Activity 6: Deriving Financial Insights; Final Thoughts on Delivering the Predictive Model to the Client
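As a taste of the scikit-learn workflow the course builds up to, here is a minimal sketch of fitting a regularized logistic regression and scoring it with cross-validation; the synthetic data merely stands in for the case study dataset:

```python
# A minimal regularized-logistic-regression sketch with synthetic stand-in data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))                  # 500 samples, 4 features
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)  # a simple binary response

model = LogisticRegression(C=1.0)              # C sets the inverse regularization strength
scores = cross_val_score(model, X, y, cv=5, scoring="roc_auc")
print(scores.mean())                           # cross-validated AUC
```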
Duration 1 Day 6 CPD hours
This class is intended for: data analysts, data scientists, and business analysts getting started with Google Cloud Platform; individuals responsible for designing pipelines and architectures for data processing, creating and maintaining machine learning and statistical models, querying datasets, visualizing query results and creating reports; executives and IT decision makers evaluating Google Cloud Platform for use by data scientists.
Overview This course teaches students the following skills: Identify the purpose and value of the key Big Data and Machine Learning products in the Google Cloud Platform; Use Cloud SQL and Cloud Dataproc to migrate existing MySQL and Hadoop/Pig/Spark/Hive workloads to Google Cloud Platform; Employ BigQuery and Cloud Datalab to carry out interactive data analysis; Train and use a neural network using TensorFlow; Employ ML APIs; Choose between different data processing products on the Google Cloud Platform.
This course introduces participants to the Big Data and Machine Learning capabilities of Google Cloud Platform (GCP). It provides a quick overview of the Google Cloud Platform and a deeper dive into the data processing capabilities.
Introducing Google Cloud Platform: Google Platform Fundamentals Overview; Google Cloud Platform Big Data Products
Compute and Storage Fundamentals: CPUs on demand (Compute Engine); A global filesystem (Cloud Storage); CloudShell; Lab: Set up an Ingest-Transform-Publish data processing pipeline
Data Analytics on the Cloud: Stepping-stones to the cloud; Cloud SQL: your SQL database on the cloud; Lab: Importing data into Cloud SQL and running queries; Spark on Dataproc; Lab: Machine Learning Recommendations with Spark on Dataproc
Scaling Data Analysis: Fast random access; Datalab; BigQuery; Lab: Build a machine learning dataset
Machine Learning: Machine Learning with TensorFlow; Lab: Carry out ML with TensorFlow; Pre-built models for common needs; Lab: Employ ML APIs
Data Processing Architectures: Message-oriented architectures with Pub/Sub; Creating pipelines with Dataflow; Reference architecture for real-time and batch data processing
Summary: Why GCP?; Where to go from here; Additional Resources
Additional course details: Nexus Humans Google Cloud Platform Big Data and Machine Learning Fundamentals training program is a workshop that presents an invigorating mix of sessions, lessons, and masterclasses meticulously crafted to propel your learning expedition forward. This immersive bootcamp-style experience boasts interactive lectures, hands-on labs, and collaborative hackathons, all strategically designed to fortify fundamental concepts. Guided by seasoned coaches, each session offers priceless insights and practical skills crucial for honing your expertise. Whether you're just stepping into the realm of professional skills or are a seasoned professional, this comprehensive course ensures you're equipped with the knowledge and prowess necessary for success. While we feel this is the best course for the Google Cloud Platform Big Data and Machine Learning Fundamentals course and one of our Top 10, we encourage you to read the course outline to make sure it is the right content for you. Additionally, private sessions, closed classes or dedicated events are available both live online and at our training centres in Dublin and London, as well as at your offices anywhere in the UK, Ireland or across EMEA.
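To illustrate the kind of interactive analysis the BigQuery labs cover, here is a minimal sketch using the google-cloud-bigquery Python client against a public dataset; it assumes application default credentials are configured and is not the lab itself:

```python
# A minimal BigQuery query sketch (assumes application default credentials).
from google.cloud import bigquery

client = bigquery.Client()
sql = """
    SELECT name, SUM(number) AS total
    FROM `bigquery-public-data.usa_names.usa_1910_2013`
    GROUP BY name ORDER BY total DESC LIMIT 5
"""
for row in client.query(sql).result():   # submits the job and waits for results
    print(row.name, row.total)
```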
Duration 4 Days 24 CPD hours
This course is intended for data analysts, business intelligence specialists, developers, system architects, and database administrators.
Overview Skills gained in this training include: The features that Pig, Hive, and Impala offer for data acquisition, storage, and analysis; The fundamentals of Apache Hadoop and data ETL (extract, transform, load), ingestion, and processing with Hadoop; How Pig, Hive, and Impala improve productivity for typical analysis tasks; Joining diverse datasets to gain valuable business insight; Performing real-time, complex queries on datasets.
Cloudera University's four-day data analyst training course focusing on Apache Pig and Hive and Cloudera Impala will teach you to apply traditional data analytics and business intelligence skills to big data.
Hadoop Fundamentals: The Motivation for Hadoop; Hadoop Overview; Data Storage: HDFS; Distributed Data Processing: YARN, MapReduce, and Spark; Data Processing and Analysis: Pig, Hive, and Impala; Data Integration: Sqoop; Other Hadoop Data Tools; Exercise Scenarios Explanation
Introduction to Pig: What Is Pig?; Pig's Features; Pig Use Cases; Interacting with Pig
Basic Data Analysis with Pig: Pig Latin Syntax; Loading Data; Simple Data Types; Field Definitions; Data Output; Viewing the Schema; Filtering and Sorting Data; Commonly-Used Functions
Processing Complex Data with Pig: Storage Formats; Complex/Nested Data Types; Grouping; Built-In Functions for Complex Data; Iterating Grouped Data
Multi-Dataset Operations with Pig: Techniques for Combining Data Sets; Joining Data Sets in Pig; Set Operations; Splitting Data Sets
Pig Troubleshooting & Optimization: Troubleshooting Pig; Logging; Using Hadoop's Web UI; Data Sampling and Debugging; Performance Overview; Understanding the Execution Plan; Tips for Improving the Performance of Your Pig Jobs
Introduction to Hive & Impala: What Is Hive?; What Is Impala?; Schema and Data Storage; Comparing Hive to Traditional Databases; Hive Use Cases
Querying with Hive & Impala: Databases and Tables; Basic Hive and Impala Query Language Syntax; Data Types; Differences Between Hive and Impala Query Syntax; Using Hue to Execute Queries; Using the Impala Shell
Data Management: Data Storage; Creating Databases and Tables; Loading Data; Altering Databases and Tables; Simplifying Queries with Views; Storing Query Results
Data Storage & Performance: Partitioning Tables; Choosing a File Format; Managing Metadata; Controlling Access to Data
Relational Data Analysis with Hive & Impala: Joining Datasets; Common Built-In Functions; Aggregation and Windowing
Working with Impala: How Impala Executes Queries; Extending Impala with User-Defined Functions; Improving Impala Performance
Analyzing Text and Complex Data with Hive: Complex Values in Hive; Using Regular Expressions in Hive; Sentiment Analysis and N-Grams; Conclusion
Hive Optimization: Understanding Query Performance; Controlling Job Execution Plan; Bucketing; Indexing Data
Extending Hive: SerDes; Data Transformation with Custom Scripts; User-Defined Functions; Parameterized Queries
Choosing the Best Tool for the Job: Comparing MapReduce, Pig, Hive, Impala, and Relational Databases; Which to Choose?
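For a sense of how the Hive querying covered here can be scripted, the following is a minimal sketch using the PyHive library; the host, user, and tables are illustrative assumptions, and in the course itself queries are run through Hue and the Impala shell:

```python
# A minimal HiveQL-from-Python sketch via PyHive (host and tables are hypothetical).
from pyhive import hive

conn = hive.connect(host="hadoop-edge", port=10000, username="analyst")
cur = conn.cursor()
cur.execute("""
    SELECT c.region, COUNT(*) AS orders
    FROM orders o JOIN customers c ON o.cust_id = c.id
    GROUP BY c.region
""")
for region, n in cur.fetchall():   # a simple join-and-aggregate, as in the course labs
    print(region, n)
```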
Duration 2 Days 12 CPD hours
The primary audience for this course is as follows: System Engineers; System Administrators; Architects; Channel Partners; Data Analysts
Overview Upon completing this course, you will be able to meet these overall objectives: Describe how harnessing the power of your machine data enables you to make decisions based on facts, not intuition or best guesses; Reduce the time you spend investigating incidents by up to 90%; Find and fix problems faster by learning new technical skills for real-world scenarios; Get started with Splunk Enterprise, from installation and data onboarding to running search queries to creating simple reports and dashboards; Accelerate time to value with turnkey Splunk integrations for dozens of Cisco products and platforms; Ensure faster, more predictable Splunk deployments with a proven Cisco Validated Design and the latest Cisco UCS server.
This course will cover how Splunk software scales to collect and index hundreds of terabytes of data per day, across multi-geography, multi-datacenter and cloud-based infrastructures. Using Cisco's Unified Computing System (UCS) Integrated Infrastructure for Big Data offers linear scalability along with operational simplification for single-rack and multiple-rack deployments.
Cisco Integrated Infrastructure for Big Data and Splunk: What is Cisco CPA?; Architecture benefits for Splunk; Components of IIBD and relationship to Splunk Architecture; Cisco UCS Integrated Infrastructure for Big Data with Splunk Enterprise
Splunk - Big Data Analytics: NFS Configurations for the Splunk Frozen Data Storage; NFS Client Configurations on the Indexers
Splunk - Start Searching: Chargeback Reporting; Building custom reports using the report builder
Application Containers: Understanding Application Containers; Understanding Advanced Tasks; Task Library & Inputs; CLI & SSH Task; Understanding Compound Tasks; Custom Tasks
Open Automation Troubleshooting: UCS Director Restart; Module Loading; Report Errors; Feature Loading; Report Registration
REST API - Automation: UCS Director Developer Tools; Accessing REST using a REST client; Accessing REST using the REST API browser
Open Automation SDK: Overview; Open Automation vs. Custom Tasks; Use Cases
UCS Director PowerShell API: Cisco UCS Director PowerShell Console; Installing & Configuring; Working with Cmdlets
Cloupia Script: Structure; Inputs & Outputs; Design Examples
Additional course details: Nexus Humans Cisco Splunk for Cisco Integrated Infrastructure (SPLUNK) training program is a workshop that presents an invigorating mix of sessions, lessons, and masterclasses meticulously crafted to propel your learning expedition forward. This immersive bootcamp-style experience boasts interactive lectures, hands-on labs, and collaborative hackathons, all strategically designed to fortify fundamental concepts. Guided by seasoned coaches, each session offers priceless insights and practical skills crucial for honing your expertise. Whether you're just stepping into the realm of professional skills or are a seasoned professional, this comprehensive course ensures you're equipped with the knowledge and prowess necessary for success. While we feel this is the best course for the Cisco Splunk for Cisco Integrated Infrastructure (SPLUNK) course and one of our Top 10, we encourage you to read the course outline to make sure it is the right content for you.
Additionally, private sessions, closed classes or dedicated events are available both live online and at our training centres in Dublin and London, as well as at your offices anywhere in the UK, Ireland or across EMEA.
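Since the outline touches on accessing REST APIs with a REST client, here is a minimal Python sketch of a oneshot search against Splunk's management API; the host, credentials, and search string are illustrative assumptions:

```python
# A minimal Splunk oneshot search sketch (host and credentials are hypothetical).
import requests

SPLUNK = "https://localhost:8089"        # Splunk management port
AUTH = ("admin", "changeme")

resp = requests.post(
    f"{SPLUNK}/services/search/jobs",
    auth=AUTH,
    verify=False,                        # lab-only: tolerate a self-signed certificate
    data={
        "search": "search index=_internal | head 5",
        "exec_mode": "oneshot",          # run synchronously and return results
        "output_mode": "json",
    },
)
resp.raise_for_status()
for result in resp.json().get("results", []):
    print(result.get("_raw"))
```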
Duration 2 Days 12 CPD hours
This course is intended for business leaders, including managers/supervisors in the following roles: Developer; Architect; Video Operator
Overview In this course, you will learn to: Articulate the essential terms and concepts fundamental to video compression and distribution; Describe the four fundamental stages of video streaming workflows: ingest, process, store and deliver; Explain the importance of security in the AWS Cloud and how it is applied in video streaming workflows; Analyze video streaming workflow diagrams using AWS services, based on simple to complex use cases; Describe some of the key variables that influence workflow decisions; Recognize how other AWS services for compliance, storage, and compute interact with AWS Media Services in video streaming workflows and the functions they perform; Describe strategies to test or prototype workflows to mitigate risk and cost impacts and optimize video streaming workflows; Use the AWS Management Console to build and run simple video streaming workflows for live and video-on-demand content; Recognize the automation and data analytics available for Media Services when used with AWS AI and explore media-specific use cases for these services; Identify the next steps in exploring migration to the cloud for one or more Media Services
This course covers the media and cloud fundamentals that will empower you to develop a cloud migration strategy for media workflows in support of business goals. The course covers important concepts related to video processing and delivery, the variables that can impact migration decisions, and real-world examples of hybrid and cloud use cases for AWS Media Services. It also introduces security, artificial intelligence, and analytics concepts to help you consider how AWS Media Services fit into your overall cloud strategy.
Module 1: Important video concepts - Video Metrics; Video Compression; Video Distribution; Major Protocols Used in Video Streaming
Module 2: Anatomy of streaming workflows - Ingest; Process; Store; Deliver
Module 3: Using AWS services in media workflows, video-on-demand (VOD) - Introduction to AWS Media Services; Security; Variables Impacting Workflow Design; VOD Simple Use Cases; VOD Advanced Use Cases; Lab 1: Build and run a simple video streaming workflow for VOD content
Module 4: Using AWS services in media workflows, live streaming - Challenges of Live Streaming; Live Streaming Simple Use Cases; Live Streaming Advanced Use Cases; Lab 2: Build and run a simple video streaming workflow for live content
Module 5: Optimizing workflows - Cost Considerations; Mitigating Risk; Monitoring and Automation; Exploring Migration Options
Additional course details: Nexus Humans AWS Media Essentials for IT Business Decision Makers training program is a workshop that presents an invigorating mix of sessions, lessons, and masterclasses meticulously crafted to propel your learning expedition forward. This immersive bootcamp-style experience boasts interactive lectures, hands-on labs, and collaborative hackathons, all strategically designed to fortify fundamental concepts. Guided by seasoned coaches, each session offers priceless insights and practical skills crucial for honing your expertise. Whether you're just stepping into the realm of professional skills or are a seasoned professional, this comprehensive course ensures you're equipped with the knowledge and prowess necessary for success.
While we feel this is the best course for the AWS Media Essentials for IT Business Decision Makers course and one of our Top 10, we encourage you to read the course outline to make sure it is the right content for you. Additionally, private sessions, closed classes or dedicated events are available both live online and at our training centres in Dublin and London, as well as at your offices anywhere in the UK, Ireland or across EMEA.
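As a small illustration of scripting the services the labs build in the console, here is a minimal boto3 sketch that lists AWS Elemental MediaLive channels and looks up the account's MediaConvert endpoint; the region is an illustrative assumption:

```python
# A minimal boto3 sketch for AWS Media Services (region is hypothetical).
import boto3

medialive = boto3.client("medialive", region_name="us-east-1")
for ch in medialive.list_channels()["Channels"]:        # live streaming channels
    print(ch.get("Name"), ch.get("State"))

mediaconvert = boto3.client("mediaconvert", region_name="us-east-1")
print(mediaconvert.describe_endpoints()["Endpoints"])   # account-specific API endpoint
```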