Duration 4 Days 24 CPD hours This course is intended for Administrators, architects, and business leaders who manage Nutanix clusters in the datacenter Managers and technical staff seeking information to drive purchase decisions Anyone who is seeking the Nutanix Certified Professional - Multicloud Infrastructure (NCP-MCI) certification Overview During this program, attendees will: Develop a working knowledge of the Nutanix product family. Understand the requirements and considerations involved in setting up a Nutanix cluster. Familiarize themselves with cluster management and monitoring via the Prism web console. Learn how to create, manage, and migrate VMs, set up data protection services, and plan for business continuity. Understand how to plan and handle upgrades, assess future requirements, and create what-if scenarios to address scaling for business needs. The Nutanix Enterprise Cloud Administration (ECA) course enables administrators (system, network, and storage) to successfully configure and manage Nutanix in the datacenter. The course covers many of the tasks Nutanix administrators perform through the use of graphical user interfaces (GUIs) and command line interfaces (CLIs). It also provides insight into a Nutanix cluster's failover and self-healing capabilities, offers tips for solving common problems, and provides guidelines for collecting information when interacting with Nutanix Support. Introduction This section describes the Nutanix HCI solution, walks you through the components of the Nutanix Enterprise Cloud, and explains the relationship between physical and logical cluster components. Managing the Nutanix Cluster In this section, you will use the Prism console to monitor a cluster, configure a cluster using various interfaces, use the REST API Explorer to manage the cluster, and learn how to deploy Nutanix-specific PowerShell cmdlets. Securing the Nutanix Cluster This section shows how to secure a Nutanix cluster through user authentication, SSL certificate installation, and cluster access control. Acropolis Networking This section explains how to configure managed and unmanaged Acropolis networks and describes the use of Open vSwitch (OVS) in Acropolis. You will learn how to display and manage network details, differentiate between supported OVS bond modes, and gain insight into the default network configuration. VM Management This section shows you how to upload images, and how to create and manage virtual machines. Health Monitoring and Alerts In this section, you will use the Health Dashboard to monitor a cluster's health and performance. You will also use the Analysis Dashboard to create charts that you can export with detailed information on a variety of components and metrics. Distributed Storage Fabric This section discusses creating and configuring storage containers, including the storage optimization features: deduplication, compression, and erasure coding. AHV Workload Migration Using Nutanix Move, this section shows how to migrate workloads to a cluster running AHV. This is followed by a lab where a VM running on a Nutanix cluster configured with ESXi is migrated to a Nutanix cluster running AHV. Files and Volumes This section gives you detailed information on Nutanix Volumes, which provides highly available, high-performance block storage through a few easy configuration steps. It also discusses Nutanix Files.
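The cluster-management module above mentions the Prism REST API Explorer and Nutanix-specific PowerShell cmdlets. Purely as a hedged illustration of what scripted cluster access can look like, the Python sketch below lists VMs through the Prism v2 REST API; the host name, credentials, port, and endpoint path are assumptions for illustration and are not taken from the course material.

```python
# Minimal sketch: query a Nutanix Prism cluster over its REST API.
# Assumptions (not from the course): Prism listens on TCP 9440 and exposes
# a v2.0 endpoint at /PrismGateway/services/rest/v2.0/vms; adjust for your cluster.
import requests
from requests.auth import HTTPBasicAuth

PRISM_HOST = "prism-cluster.example.com"   # hypothetical cluster virtual IP / FQDN
USERNAME = "admin"                          # hypothetical credentials
PASSWORD = "changeme"

def list_vms():
    url = f"https://{PRISM_HOST}:9440/PrismGateway/services/rest/v2.0/vms"
    resp = requests.get(
        url,
        auth=HTTPBasicAuth(USERNAME, PASSWORD),
        verify=False,   # lab clusters often use self-signed certificates
        timeout=30,
    )
    resp.raise_for_status()
    for vm in resp.json().get("entities", []):
        print(vm.get("name"), vm.get("power_state"))

if __name__ == "__main__":
    list_vms()
```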
Understanding Infrastructure Resiliency This section shows how Nutanix provides comprehensive data protection at all levels of the virtual datacenter: logical and physical. Data Protection Data can be replicated between Nutanix clusters, synchronously and asynchronously. This section shows how to configure a Protection Domain (PD) and Remote Sites, recover a VM from a PD, and perform a planned failover of a PD. Prism Central Having discussed and used Prism Element earlier, this section looks at the capabilities of Prism Central. With the added functionality provided by a Pro license, the focus is on features related to monitoring and managing multiple activities across a set of clusters. Monitoring the Nutanix Cluster This section shows you where to locate and how to interpret cluster-related log files. In addition, you will take a closer look at the Nutanix Support Portal and online help. Cluster Management and Expansion This section outlines essential life-cycle operations, including starting/stopping a Nutanix cluster, as well as starting/shutting down a node. You will also learn how to expand a cluster, manage licenses, and upgrade the cluster's software and firmware. Remote Office Branch Office (ROBO) Deployments In this section, you will understand various configurations and requirements specific to a ROBO site. This includes hardware/software, Witness VM, networking, failure and recovery scenarios for two-node clusters, and the seeding process. Additional course details: Nexus Humans NECA: Nutanix Enterprise Cloud Administration training program is a workshop that presents an invigorating mix of sessions, lessons, and masterclasses meticulously crafted to propel your learning expedition forward. This immersive bootcamp-style experience boasts interactive lectures, hands-on labs, and collaborative hackathons, all strategically designed to fortify fundamental concepts. Guided by seasoned coaches, each session offers priceless insights and practical skills crucial for honing your expertise. Whether you're stepping into the realm of professional skills or a seasoned professional, this comprehensive course ensures you're equipped with the knowledge and prowess necessary for success. While we feel this is the best course for the NECA: Nutanix Enterprise Cloud Administration course and one of our Top 10 we encourage you to read the course outline to make sure it is the right content for you. Additionally, private sessions, closed classes or dedicated events are available both live online and at our training centres in Dublin and London, as well as at your offices anywhere in the UK, Ireland or across EMEA.
Duration 1 Days 6 CPD hours This course is intended for Learners taking this course are interested in employee experiences or Microsoft Viva and want to learn how to assess, plan, strategize, design, and manage digital employee experiences that use Microsoft Viva, Microsoft Teams, SharePoint, and Power Platform. A learner in this role will collaborate with multiple teams to scope, design, and implement new digital employee experiences, such as onboarding, career and skill development, rewards and recognition, employee wellbeing, and employee retention. Learners should have a foundational understanding of Microsoft technologies, including Microsoft 365, Teams, SharePoint, and a deep understanding of Microsoft Viva features and capabilities. They may have experience in one or more of the following disciplines: human resources, people development, change management, information technology, or culture development. Overview By the end of this module, you'll be able to: Evaluate existing systems and identify requirements Identify stakeholders and users Recommend employee experience solutions and strategies Describe the four experience areas of Connection, Growth, Purpose, and Insights supported by Viva. Explain what Microsoft Viva apps are. Identify resources needed to set up each Viva app. Create an adoption plan to use Viva to solve business scenarios for the four employee experience areas of Connection, Insight, Purpose, and Growth. Describe the main features of Viva Connections List technical requirements/prerequisites for Viva Connections implementation Explain the differences between desktop and mobile experiences Identify 2-3 business use cases for Viva Connections Identify key stakeholders for the deployment of Viva Connections Align and prioritize scenarios for Viva Connections Plan and design for the Dashboard, the Feed, and Resources by scenarios and audiences Consider how your organization will scale adoption Assess your organization's existing learning experiences. Plan and strategize for Viva Learning. Coordinate the implementation of Viva Learning. Recommend an adoption strategy for Viva Learning. In this course, you'll learn how to bring people together to create an optimal employee experience that enables your organization to improve productivity, develop empathetic leadership, and transform how employees feel about their work. In your organization today, are people being treated well, or are their needs neglected? Are your teams aligned on goals with a sense of purpose? Are you driving the business outcomes that you need? The Microsoft Viva employee experience platform provides the infrastructure to create the culture of trust, collaboration, well-being, and active listening that you envision. This training course will provide Microsoft Employee Experience Platform Specialists with a comprehensive overview of Microsoft Viva, as well as Microsoft 365, Teams, and SharePoint. It will cover how to identify requirements for designing experiences for employee onboarding, career and skill development, rewards and recognition, compensation and benefits, employee wellbeing, and employee retention. It will also cover how to design solutions to meet these requirements, and how to collaborate with senior executive leadership, human resources, IT, adoption and change management, and learning and organizational development departments. Finally, it will cover how to continuously improve employee experiences based on data-driven insights and feedback. 
Design digital employee experiences Introduction Case study - Tailwind Traders Evaluate current employee experiences Consider employee privacy and data requirements Assemble business stakeholders and define goals Explore Viva experience areas Understand Viva licensing Knowledge check Summary and resources Introduction to the Microsoft Viva suite Introduction to Microsoft Viva Understand Viva apps Get started with Microsoft Viva Use Viva to keep everyone informed, included, and inspired Use Viva to get actionable insights to foster well-being and productivity Use Viva to align people's work to team and organization goals Use Viva to help employees learn, grow, and succeed Knowledge check Summary Introduction to Viva Connections Introduction What do users experience? When to use Viva Connections? What technical requirements must be met to deploy Viva Connections? Knowledge check Summary Plan for Viva Connections Introduction Build your team and meet requirements Analyze tasks and scenarios for Viva Connections Plan for Viva Connections Dashboard, Feed and Resources Plan to announce, launch, and scale adoption Knowledge check Summary Design skilling and growth experiences with Viva Learning Introduction Case study - Tailwind Traders Plan for Viva Learning Assemble Viva Learning admins and stakeholders Understand content sources with Viva Learning Coordinate setup and configuration of Viva Learning Develop adoption strategies for Viva Learning Develop an org-wide learning culture Knowledge check Summary and resources Additional course details: Nexus Humans MS-080T00: Employee Experience Platform Specialist training program is a workshop that presents an invigorating mix of sessions, lessons, and masterclasses meticulously crafted to propel your learning expedition forward. This immersive bootcamp-style experience boasts interactive lectures, hands-on labs, and collaborative hackathons, all strategically designed to fortify fundamental concepts. Guided by seasoned coaches, each session offers priceless insights and practical skills crucial for honing your expertise. Whether you're stepping into the realm of professional skills or a seasoned professional, this comprehensive course ensures you're equipped with the knowledge and prowess necessary for success. While we feel this is the best course for the MS-080T00: Employee Experience Platform Specialist course and one of our Top 10 we encourage you to read the course outline to make sure it is the right content for you. Additionally, private sessions, closed classes or dedicated events are available both live online and at our training centres in Dublin and London, as well as at your offices anywhere in the UK, Ireland or across EMEA.
Duration 2 Days 12 CPD hours This course is intended for Experienced DataStage developers seeking training in more advanced DataStage job techniques and who seek techniques for working with complex types of data resources. Overview Use Connector stages to read from and write to database tables Handle SQL errors in Connector stages Use Connector stages with multiple input links Use the File Connector stage to access Hadoop HDFS data Optimize jobs that write to database tables Use the Unstructured Data stage to extract data from Excel spreadsheets Use the Data Masking stage to mask sensitive data processed within a DataStage job Use the Hierarchical stage to parse, compose, and transform XML data Use the Schema Library Manager to import and manage XML schemas Use the Data Rules stage to validate fields of data within a DataStage job Create custom data rules for validating data Design a job that processes a star schema data warehouse with Type 1 and Type 2 slowly changing dimensions This course is designed to introduce you to advanced parallel job data processing techniques in DataStage v11.5. In this course you will develop data techniques for processing different types of complex data resources including relational data, unstructured data (Excel spreadsheets), and XML data. In addition, you will learn advanced techniques for processing data, including techniques for masking data and techniques for validating data using data rules. Finally, you will learn techniques for updating data in a star schema data warehouse using the DataStage SCD (Slowly Changing Dimensions) stage. Even if you are not working with all of these specific types of data, you will benefit from this course by learning advanced DataStage job design techniques, techniques that go beyond those utilized in the DataStage Essentials course. Accessing databases Connector stage overview - Use Connector stages to read from and write to relational tables - Working with the Connector stage properties Connector stage functionality - Before / After SQL - Sparse lookups - Optimize insert/update performance Error handling in Connector stages - Reject links - Reject conditions Multiple input links - Designing jobs using Connector stages with multiple input links - Ordering records across multiple input links File Connector stage - Read and write data to Hadoop file systems Demonstration 1: Handling database errors Demonstration 2: Parallel jobs with multiple Connector input links Demonstration 3: Using the File Connector stage to read and write HDFS files Processing unstructured data Using the Unstructured Data stage in DataStage jobs - Extract data from an Excel spreadsheet - Specify a data range for data extraction in an Unstructured Data stage - Specify document properties for data extraction. 
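The Unstructured Data stage itself is configured in the DataStage Designer rather than in code, but the idea of pulling a specific cell range out of an Excel workbook is easy to show in plain Python. The sketch below uses openpyxl with a hypothetical file name, sheet, and range purely as an analogy to the stage's data range and document properties; it is not how DataStage implements the extraction.

```python
# Analogy only: extract a fixed cell range from an Excel sheet, similar in spirit
# to specifying a data range in the DataStage Unstructured Data stage.
# The file name, sheet name, and range are hypothetical.
from openpyxl import load_workbook

def read_range(path="customers.xlsx", sheet="Sheet1", cell_range="A2:C10"):
    wb = load_workbook(path, read_only=True, data_only=True)
    ws = wb[sheet]
    rows = []
    for row in ws[cell_range]:                 # iterate the bounded range only
        rows.append([cell.value for cell in row])
    wb.close()
    return rows

if __name__ == "__main__":
    for record in read_range():
        print(record)
```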
Demonstration 1: Processing unstructured data Data masking Using the Data Masking stage in DataStage jobs - Data masking techniques - Data masking policies - Applying policies for masking context-aware data types - Applying policies for masking generic data types - Repeatable replacement - Using reference tables - Creating custom reference tables Demonstration 1: Data masking Using data rules Introduction to data rules - Using the Data Rules Editor - Selecting data rules - Binding data rule variables - Output link constraints - Adding statistics and attributes to the output information Use the Data Rules stage to validate foreign key references in source data Create custom data rules Demonstration 1: Using data rules Processing XML data Introduction to the Hierarchical stage - Hierarchical stage Assembly editor - Use the Schema Library Manager to import and manage XML schemas Composing XML data - Using the HJoin step to create parent-child relationships between input lists - Using the Composer step Writing Hierarchical data to a relational table Using the Regroup step Consuming XML data - Using the XML Parser step - Propagating columns Transforming XML data - Using the Aggregate step - Using the Sort step - Using the Switch step - Using the H-Pivot step Demonstration 1: Importing XML schemas Demonstration 2: Compose hierarchical data Demonstration 3: Consume hierarchical data Demonstration 4: Transform hierarchical data Updating a star schema database Surrogate keys - Design a job that creates and updates a surrogate key source key file from a dimension table Slowly Changing Dimensions (SCD) stage - Star schema databases - SCD stage Fast Path pages - Specifying purpose codes - Dimension update specification - Design a job that processes a star schema database with Type 1 and Type 2 slowly changing dimensions Demonstration 1: Build a parallel job that updates a star schema database with two dimensions Additional course details: Nexus Humans KM423 IBM InfoSphere DataStage v11.5 - Advanced Data Processing training program is a workshop that presents an invigorating mix of sessions, lessons, and masterclasses meticulously crafted to propel your learning expedition forward. This immersive bootcamp-style experience boasts interactive lectures, hands-on labs, and collaborative hackathons, all strategically designed to fortify fundamental concepts. Guided by seasoned coaches, each session offers priceless insights and practical skills crucial for honing your expertise. Whether you're stepping into the realm of professional skills or a seasoned professional, this comprehensive course ensures you're equipped with the knowledge and prowess necessary for success. While we feel this is the best course for the KM423 IBM InfoSphere DataStage v11.5 - Advanced Data Processing course and one of our Top 10 we encourage you to read the course outline to make sure it is the right content for you. Additionally, private sessions, closed classes or dedicated events are available both live online and at our training centres in Dublin and London, as well as at your offices anywhere in the UK, Ireland or across EMEA.
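The DataStage SCD stage handles the dimension maintenance described in the outline above through its Fast Path pages, but the underlying Type 1 versus Type 2 behaviour is easier to see in a few lines of code. The sketch below is a generic Python illustration of the two update styles against an in-memory dimension table; it is not the DataStage implementation, and the column names are hypothetical.

```python
# Generic illustration of slowly changing dimension updates (not DataStage-specific).
# Type 1: overwrite the attribute in place (history is lost).
# Type 2: expire the current row and insert a new row with a new surrogate key.
from datetime import date

dimension = [
    {"sk": 1, "customer_id": "C100", "city": "Dublin", "current": True,
     "effective_from": date(2020, 1, 1), "effective_to": None},
]
next_sk = 2

def apply_type1(customer_id, new_city):
    for row in dimension:
        if row["customer_id"] == customer_id and row["current"]:
            row["city"] = new_city              # overwrite, no history kept

def apply_type2(customer_id, new_city, as_of):
    global next_sk
    for row in dimension:
        if row["customer_id"] == customer_id and row["current"]:
            row["current"] = False              # close out the old version
            row["effective_to"] = as_of
    dimension.append({"sk": next_sk, "customer_id": customer_id, "city": new_city,
                      "current": True, "effective_from": as_of, "effective_to": None})
    next_sk += 1

apply_type2("C100", "London", date(2024, 6, 1))
for row in dimension:
    print(row)
```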
Duration 5 Days 30 CPD hours Overview By the end of the course, you should be able to meet the following objectives: List the operational challenges for rolling out and operating telco services, including 5G. Identify the role of VMware Telco Cloud products in supporting telco services. Discuss the role of VMware technologies such as vSphere, NSX, and Tanzu in implementing telco services. Outline the role of native tools and other VMware monitoring tools such as vRealize Operations and vRealize Log Insight in maintaining network services. Choose the VMware Telco Cloud products that meet your application requirements. Deploy a solution architecture that meets VMware best practices for delivering services using VMware Telco Cloud products. Implement and maintain VMware Telco Cloud products in a secure manner. Identify the tools and remediation pathways for maintaining the availability and performance of your applications and infrastructure using VMware Telco Cloud and vRealize Suite products. Follow specific steps to resolve application performance and availability problems. Scale your VMware Telco Cloud products to meet operational requirements in line with VMware best practices. Optimize the operation of VMware Telco Cloud products to ensure SLAs are met. This five-day, hands-on training course provides the knowledge to operate and scale VMware Telco Cloud version 2.x products in a telco cloud provider environment. In this course, you are exposed to the entire VMware Telco Cloud portfolio, and the tools and methodologies available to ensure they operate effectively. In addition, you are presented with various scenarios where you will be guided through the process of identifying, analyzing, and formulating solutions to performance and other problems. Course Introduction Introductions and course logistics Course objectives Overview of Network Transformation Reviews the technologies that enable modern networks Lists the components of modern service provider networks Outlines characteristics of modern service provider networks in meeting customer application needs Service Delivery with VMware Telco Cloud Outlines the components of the VMware Telco Cloud portfolio Reviews the role each VMware Telco Cloud product plays in delivering telco services Specifies the dependencies each product has on underlying technologies Supporting VMware Telco Cloud Service Delivery Reviews the products that implement the virtualization, management, platform, and orchestration layers Outlines the role played by other VMware products such as NSX in delivering cloud services Outlines open-source integration options with VMware Telco Cloud products Securing VMware Telco Cloud Reviews security threats that affect telco services Identifies the critical telco assets that are prone to attack Outlines best practices for securing VMware Telco Cloud products and underlying technology Provides an overview of appropriate security controls for VMware Telco Cloud products Assessing Service Provision Reviews tools and methodologies used to gather requirements Outlines how to assess cloud-native capabilities Documenting findings Identifying security vulnerabilities with Helm.
Reviews VMware's Customer Engagement process Capturing infrastructure requirements from TCA Designing a VMware Telco Cloud Solution Selecting appropriate deployment topology Pros and cons of design choices How a design choice might be affected by other factors such as NSX and TKG deployment, or data center architecture Outlines typical scenarios where specific products align with identified requirements Documenting a design Designing for availability Ensuring a design aligns with best practice Specifying monitoring options Implementing VMware Telco Cloud Products Review deployment options for VMware products Integrating new products with existing ones Outline post-installation tasks Adding the new products as data sources in monitoring tools such as vRealize Operations and vRealize Network Insight Ensuring products meet security requirements Configuring monitoring software Outlines the xNF onboarding process in TCA Maintaining Telco Services Outlines typical administrative tasks in ensuring services are maintained Use of native and other VMware performance monitoring tools Reviewing performance data Role of SLAs in service maintenance Reviews scenarios where known behaviors indicate problems Troubleshooting Deployed Telco Services Reviews the troubleshooting tools available Using tools to gather useful data Outlines how event correlation can be used to isolate problems Using a methodology to determine the root cause of a problem Steps to identify and resolve a problem Reviews scenarios where known problems are isolated and resolution steps identified Scaling VMware Telco Cloud Products Assessing whether operations are impacted by a lack of resources now or will be in the future Reviews performance optimization options Identifies implications for other products if you scale VMware Telco Cloud products Reviews VMware sizing guidelines Additional course details: Notes Delivery by TDSynex, Exit Certified and New Horizons, a VMware Authorised Training Centre (VATC) Nexus Humans VMware Telco Cloud: Day 2 Operate and Scale [V2.x] training program is a workshop that presents an invigorating mix of sessions, lessons, and masterclasses meticulously crafted to propel your learning expedition forward. This immersive bootcamp-style experience boasts interactive lectures, hands-on labs, and collaborative hackathons, all strategically designed to fortify fundamental concepts. Guided by seasoned coaches, each session offers priceless insights and practical skills crucial for honing your expertise. Whether you're stepping into the realm of professional skills or a seasoned professional, this comprehensive course ensures you're equipped with the knowledge and prowess necessary for success. While we feel this is the best course for the VMware Telco Cloud: Day 2 Operate and Scale [V2.x] course and one of our Top 10 we encourage you to read the course outline to make sure it is the right content for you. Additionally, private sessions, closed classes or dedicated events are available both live online and at our training centres in Dublin and London, as well as at your offices anywhere in the UK, Ireland or across EMEA.
Duration 3 Days 18 CPD hours This course is intended for The ideal audience for the RPA and UiPath Boot Camp is beginners in the field of RPA and individuals in roles such as developers, project managers, operation analysts, and tech enthusiasts looking to familiarize themselves with automation technologies. It's also perfectly suited for business professionals keen on understanding and implementing automated solutions within their organizations to optimize processes. Overview This 'skills-centric' course is about 50% hands-on lab and 50% lecture, with extensive practical exercises designed to reinforce fundamental skills, concepts, and best practices taught throughout the course. Working in a hands-on learning environment, led by our Automation Learning expert instructor, students will explore: Gain a thorough understanding of Robotic Process Automation (RPA) and its applications using UiPath, setting a solid foundation for future learning and application. Learn to record and play in UiPath Studio, a key skill that enables automating complex tasks in a user-friendly environment. Master the art of designing and controlling workflows using Sequencing, Flowcharting, and Control Flow, helping to streamline and manage automation processes effectively. Acquire practical skills in data manipulation, from variable management to CSV/Excel and data table conversions, empowering you to handle data-rich tasks with confidence. Develop competence in managing controls and exploring various plugins and extensions, providing a broader toolkit for handling diverse automation projects. Get hands-on experience with exception handling, debugging, logging, code management, and bot deployment, fundamental skills that ensure your automated processes are reliable and efficient. Learn how to deploy and control bots with UiPath Orchestrator. The Hands-on Natural Language Processing (NLP) Boot Camp is an immersive, three-day course that serves as your guide to building machines that can read and interpret human language. NLP is a unique interdisciplinary field, blending computational linguistics with artificial intelligence to help machines understand, interpret, and generate human language. In an increasingly data-driven world, NLP skills provide a competitive edge, enabling the development of sophisticated projects such as voice assistants, text analyzers, chatbots, and so much more. Our comprehensive curriculum covers a broad spectrum of NLP topics. Beginning with an introduction to NLP and feature extraction, the course moves to the hands-on development of text classifiers, exploration of web scraping and APIs, before delving into topic modeling, vector representations, text manipulation, and sentiment analysis. Half of your time is dedicated to hands-on labs, where you'll experience the practical application of your knowledge, from creating pipelines and text classifiers to web scraping and analyzing sentiment. These labs serve as a microcosm of real-world scenarios, equipping you with the skills to efficiently process and analyze text data. Time permitting, you'll also explore modern tools like Python libraries, the OpenAI GPT-3 API, and TensorFlow, using them in a series of engaging exercises.
By the end of the course, you'll have a well-rounded understanding of NLP, and will leave equipped with the practical skills and insights that you can immediately put to use, helping your organization gain valuable insights from text data, streamline business processes, and improve user interactions with automated text-based systems. You'll be able to process and analyze text data effectively, implement advanced text representations, apply machine learning algorithms for text data, and build simple chatbots. What is Robotic Process Automation? Scope and techniques of automation Robotic process automation About UiPath The future of automation Record and Play UiPath stack Downloading and installing UiPath Studio Learning UiPath Studio Task recorder Step-by-step examples using the recorder Sequence, Flowchart, and Control Flow Sequencing the workflow Activities Control flow, various types of loops, and decision making Step-by-step example using Sequence and Flowchart Step-by-step example using Sequence and Control flow Data Manipulation Variables and scope Collections Arguments: Purpose and use Data table usage with examples Clipboard management File operation with step-by-step example CSV/Excel to data table and vice versa (with a step-by-step example) Taking Control of the Controls Finding and attaching windows Finding the control Techniques for waiting for a control Act on controls: mouse and keyboard activities Working with UiExplorer Handling events Revisit recorder Screen Scraping When to use OCR Types of OCR available How to use OCR Avoiding typical failure points Tame that Application with Plugins and Extensions Terminal plugin SAP automation Java plugin Citrix automation Mail plugin PDF plugin Web integration Excel and Word plugins Credential management Extensions: Java, Chrome, Firefox, and Silverlight Handling User Events and Assistant Bots What are assistant bots? Monitoring system event triggers Monitoring image and element triggers Launching an assistant bot on a keyboard event Exception Handling, Debugging, and Logging Exception handling Common exceptions and ways to handle them Logging and taking screenshots Debugging techniques Collecting crash dumps Error reporting Managing and Maintaining the Code Project organization Nesting workflows Reusability of workflows Commenting techniques State Machine When to use Flowcharts, State Machines, or Sequences Using config files and examples of a config file Integrating a TFS server Deploying and Maintaining the Bot Publishing using publish utility Overview of Orchestration Server Using Orchestration Server to control bots Using Orchestration Server to deploy bots License management Publishing and managing updates
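In UiPath, CSV/Excel-to-data-table conversions are performed with built-in activities (Read CSV, Build Data Table, Write CSV) rather than hand-written code. As a loose analogy for what those activities do, the short Python sketch below round-trips a CSV file through a pandas DataFrame; the file name and column names are hypothetical and not part of the course.

```python
# Loose analogy to UiPath's Read CSV / data table / Write CSV activities,
# using pandas. File name and column names are hypothetical.
import pandas as pd

def csv_round_trip(in_path="orders.csv", out_path="orders_filtered.csv"):
    df = pd.read_csv(in_path)                         # CSV -> data table (DataFrame)
    df["total"] = df["quantity"] * df["unit_price"]   # simple per-row calculation
    high_value = df[df["total"] > 100]                # filter rows, as a Filter Data Table would
    high_value.to_csv(out_path, index=False)          # data table -> CSV
    return high_value

if __name__ == "__main__":
    print(csv_round_trip().head())
```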
Duration 3 Days 18 CPD hours This course is intended for Analyst Developer End User Implementer Overview Schedule and Burst Reports Perform Translations Create Reports Integrated With Oracle BI EE Administer BI Publisher Server Describe BI Publisher Technology and Architecture Create reports from OBI EE data sources Create and Modify Data Models Create RTF Templates by Using Template Builder Explore and Use the Form Field Method for Creating RTF Templates Create Layouts by Using the Layout Editor This Oracle BI Publisher 12c training will help you build a foundation of understanding how to best leverage this solution. Through Classroom Training or Live Virtual Class Training, you'll learn the ins and outs of how to use this solution. BI Publisher Technology and Architecture Functional Components Layout Templates Multitier Architecture Enterprise Server Architecture and Performance and Scalability Document Generation Process and Output Formats Supported Data Sources Bursting Overview Internationalization and Language Support Getting Started with BI Publisher Logging In, the Home Page, and Global Header, and Setting Account Preferences Viewing Reports Managing Repository Objects Managing Favorites Using Create Report wizard to Create Reports Selecting Data: Data Model, Spreadsheet, and BI Subject Area Configuring Report Properties Using the Data Model Editor Exploring the Schemas Used in the Course Exploring the Data Model Editor UI and the Supported Data Sources Creating a Private Data Source Creating a Simple Data Model based on a SQL Query Data Set Using Query Builder to Build a Query Viewing Data and Saving Sample Data Sets Adding Parameters and LOVs to the Query Configuring Parameter Settings and Viewing Reports with Parameters Working with Layout Editor Opening the Layout Editor and Navigating the Layout Editor UI Creating a Layout by Using a Basic Template Inserting a Layout Grid Adding a Table, Formatting Columns, Defining Sorts and Groups, and Applying Conditional Formats Inserting and Editing Charts, and Converting Charts to a Pivot Tables Adding Repeating Sections, Text Items, and Images Working with Lists, Gauges and Pivot Tables Creating Boilerplates Using Template Builder to Create RTF Templates Using the BI Publisher Menu Bar Creating an RTF Template from a Sample, Changing Field Properties, and Previewing Table Data Adding a Chart to an RTF Template Designing an RTF Template for a BI Publisher Report Creating a BI Publisher Report by Using Template Builder in Online Mode Exploring the Basic and Form Field Methods Exploring Advanced RTF Template Techniques Including Conditional Formats, Watermarks, Page-Level Calculations, Running Totals, Grouping, and Sorting BI Publisher Server: Administration and Security Describing the Administration Page Creating the JDBC Connections Setting, Viewing, and Updating Data Sources Describing the Security Model for BI Publisher and Oracle Fusion Middleware Describing Groups, Users, Roles, and Permissions Describing Delivery Options Including Print, Fax, Email, WebDav, HTTP Server, FTP, and CUPS Describing and Configuring BI Publisher Scheduler Integrating with Oracle BI Presentation Services and Oracle Endeca Server Scheduling and Bursting Reports Scheduling and Describing a Report Job and Related Options Managing and Viewing a Report Job Viewing Report Job History Scheduling a Report with Trigger Describing Bursting Adding a Bursting Definition to a Data Model Scheduling a Bursting Job Integrating BI Publisher with Oracle BI Enterprise 
Edition Configuring Presentation Services Integration Navigating Oracle BI EE Creating a Report based on OBI EE Subject Area Creating a Data Model and Report based on a BI Server SQL Query Creating a Data Model and Report based on an Oracle BI Analysis Adding a BI Publisher Report to an Oracle BI EE Dashboard Creating Data Models and BI Publisher Reports Based on Other Data Sources Configuring Presentation Services Integration Describing the Web Services Data Source Describing the HTTP (XML/RSS Feed) Data Source Explaining Proxy Setting for Web Services and HTTP Data Sources Creating a BI Publisher Report based on an External Web Service Creating a BI Publisher Report based on an HTTP Data Set Creating a BI Publisher Report Based on XML File Creating a BI Publisher Report Based on CSV Data source Performing Translations Describing Translation Types Translating by Using the Localized Template Option Translating by Using the XLIFF Option Managing XLIFF Translations on BI Publisher Server Describing the Overall Translation Process Describing Catalog Translation Exporting and Importing the XLIFF for a Catalog Folder Additional course details: Nexus Humans Oracle BI Publisher 12c R1: Fundamentals training program is a workshop that presents an invigorating mix of sessions, lessons, and masterclasses meticulously crafted to propel your learning expedition forward. This immersive bootcamp-style experience boasts interactive lectures, hands-on labs, and collaborative hackathons, all strategically designed to fortify fundamental concepts. Guided by seasoned coaches, each session offers priceless insights and practical skills crucial for honing your expertise. Whether you're stepping into the realm of professional skills or a seasoned professional, this comprehensive course ensures you're equipped with the knowledge and prowess necessary for success. While we feel this is the best course for the Oracle BI Publisher 12c R1: Fundamentals course and one of our Top 10 we encourage you to read the course outline to make sure it is the right content for you. Additionally, private sessions, closed classes or dedicated events are available both live online and at our training centres in Dublin and London, as well as at your offices anywhere in the UK, Ireland or across EMEA.
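BI Publisher data models built on SQL query data sets use bind parameters much like any other Oracle client. Purely as an illustration of that idea outside the BI Publisher data model editor, the hedged sketch below runs a parameterized query with the python-oracledb driver; the connection details, table, and bind name are assumptions made up for the example, not part of the course.

```python
# Illustration only: a parameterized SQL query of the kind a BI Publisher
# SQL data set would use. Connection details and schema are hypothetical.
import oracledb

def employees_by_department(dept_id):
    with oracledb.connect(user="hr", password="hr", dsn="dbhost/orclpdb1") as conn:
        with conn.cursor() as cur:
            cur.execute(
                "SELECT employee_id, last_name, salary "
                "FROM employees WHERE department_id = :dept_id "
                "ORDER BY last_name",
                dept_id=dept_id,      # bind variable, comparable to a report parameter
            )
            return cur.fetchall()

if __name__ == "__main__":
    for row in employees_by_department(10):
        print(row)
```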
Duration 5 Days 30 CPD hours This course is intended for Experienced system administrators and system integrators responsible for using the advanced features of vRealize Automation in enterprise deployments. Overview By the end of the course, you should be able to meet the following objectives: Describe and configure vRealize Automation in a clustered enterprise deployment using VMware vRealize Suite Lifecycle Manager. Scale VMware Identity Manager to support High Availability. Configure security certificates in vRealize Automation from external Certificate Authorities. Describe the clustered deployment architecture, including Kubernetes pods and services. Create and configure advanced blueprints with complex YAML and cloudConfig. Use vRealize Automation advanced blueprints to deploy a 2-tier application using MySQL and phpMyAdmin. Practice troubleshooting techniques with advanced YAML blueprints in vRealize Automation. Use advanced VMware NSX-T Data Center networking features including NAT, routed networks, load balancers, security groups, and tags. Use VMware Code Stream to integrate vRealize Automation with Kubernetes. Create Code Stream pipelines. Create and use Ansible playbooks that integrate with vRealize Automation. Configure vRealize Automation to integrate with Puppet. Configure and use ABX actions to create day-2 actions and interface with PowerShell scripts. Use vracli commands, log files, and VMware vRealize Log Insight to troubleshoot vRealize Automation and vRealize Automation deployments. This five-day course is a follow-on to the VMware vRealize Automation: Install, Configure, Manage course. In this course, you go deeper into the advanced features of vRealize Automation to deploy user systems and interface vRealize Automation with other platforms, and you learn how to deploy an enterprise-level cluster environment using LCM. This course relies heavily on hands-on labs.
Course Introduction Introductions and course logistics Course objectives vRealize Automation Clustered Deployment Use LCM in a clustered deployment Configure External Certificates Configure NSX-T Data Center load balancer Install vRealize Automation using Clustered Deployment Scale VMware Identity Manager to support High Availability vRealize Automation Clustered Deployment Architecture List of Kubernetes Pods The vRealize Automation Kubernetes Architecture Relationship of Kubernetes Pods to Services Logs and their locations Blueprint deployment workflow with Kubernetes Service interaction Backup strategies and potential problems Advanced Blueprints Use advanced YAML and cloudConfig to deploy a functioning 2-tier application with a phpMyAdmin front-end server and a MySQL database server Use troubleshooting techniques to debug problems in advanced YAML blueprints List the log files that can aid in troubleshooting blueprint deployment Advanced Networking Use VMware NSX-T Data Center advanced features in blueprints Interfacing to IPAMs Use NSX-T Data Center NAT in blueprints Use NSX-T Data Center routed networks Use NSX-T Data Center load balancers Use NSX-T Data Center security groups Use tags with NSX-T Data Center network profiles Using vRealize Orchestrator Create Day-2 Actions with vRealize Orchestrator workflows Troubleshoot vRealize Orchestrator cluster issues Use vRealize Orchestrator to add computer objects to Active Directory when vRealize Automation deploys blueprints Use a tagging approach to vRealize Orchestrator workflows Use dynamic forms with vRealize Orchestrator Using ABX Actions Determine when to use ABX and when to use vRealize Orchestrator Use ABX to create day-2 Actions Call PowerShell from ABX Kubernetes Integration Create a Kubernetes namespace from vRealize Automation Connect to an existing Kubernetes cluster Automate the deployment of an application to a Kubernetes cluster with Code Stream Use Kubernetes in Extensibility Code Stream Create and use CI/CD pipelines Use the Code Stream user interface Add states and tasks to a Code Stream pipeline Integrate code from Code Stream with Git Using GitLab Integration Configure the GitLab Integration Use Gitlab with blueprints Configuration Management Describe the use case of Ansible and Ansible Tower Connect to Ansible Tower Use Ansible playbooks Use Puppet in configuration management Troubleshooting vracli commands and when to use them Check the status of Kubernetes pods and services Correct the state of pods and services Diagnose and solve vRealize Automation infrastructure problems Diagnose and solve vRealize Automation failures to deploy blueprints and services Use vRealize Log Insight for troubleshooting Additional course details:Notes Delivery by TDSynex, Exit Certified and New Horizons an VMware Authorised Training Centre (VATC) Nexus Humans VMware vRealize Automation: Advanced Features and Troubleshooting [v8.x] training program is a workshop that presents an invigorating mix of sessions, lessons, and masterclasses meticulously crafted to propel your learning expedition forward. This immersive bootcamp-style experience boasts interactive lectures, hands-on labs, and collaborative hackathons, all strategically designed to fortify fundamental concepts. Guided by seasoned coaches, each session offers priceless insights and practical skills crucial for honing your expertise. 
Whether you're stepping into the realm of professional skills or a seasoned professional, this comprehensive course ensures you're equipped with the knowledge and prowess necessary for success. While we feel this is the best course for the VMware vRealize Automation: Advanced Features and Troubleshooting [v8.x] course and one of our Top 10 we encourage you to read the course outline to make sure it is the right content for you. Additionally, private sessions, closed classes or dedicated events are available both live online and at our training centres in Dublin and London, as well as at your offices anywhere in the UK, Ireland or across EMEA.
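The Kubernetes Integration module in the outline above covers creating a Kubernetes namespace and connecting to an existing cluster from vRealize Automation. For comparison only, the sketch below performs the same namespace creation directly with the official Kubernetes Python client; it is not the vRealize Automation mechanism, and the kubeconfig context and namespace name are assumptions.

```python
# Not the vRealize Automation workflow: a direct illustration of creating a
# Kubernetes namespace with the official Python client. Names are hypothetical.
from kubernetes import client, config

def create_namespace(name="vra-demo"):
    config.load_kube_config()                   # uses the current kubeconfig context
    core = client.CoreV1Api()
    ns = client.V1Namespace(metadata=client.V1ObjectMeta(name=name))
    return core.create_namespace(body=ns)

if __name__ == "__main__":
    created = create_namespace()
    print("created namespace:", created.metadata.name)
```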
Duration 3 Days 18 CPD hours This course is intended for: System administrators and operators who are operating in the AWS Cloud, and information technology workers who want to increase their system operations knowledge. Overview In this course, you will learn to: Recognize the AWS services that support the different phases of Operational Excellence, a Well-Architected Framework pillar. Manage access to AWS resources using AWS Accounts and Organizations and AWS Identity and Access Management (IAM). Maintain an inventory of in-use AWS resources using AWS services such as AWS Systems Manager, AWS CloudTrail, and AWS Config. Develop a resource deployment strategy utilizing metadata tags, Amazon Machine Images, and Control Tower to deploy and maintain an AWS cloud environment. Automate resource deployment using AWS services such as AWS CloudFormation and AWS Service Catalog. Use AWS services to manage AWS resources through SysOps lifecycle processes such as deployments and patches. Configure a highly available cloud environment that leverages AWS services such as Amazon Route 53 and Elastic Load Balancing to route traffic for optimal latency and performance. Configure AWS Auto Scaling and Amazon Elastic Compute Cloud auto scaling to scale your cloud environment based on demand. Use Amazon CloudWatch and associated features such as alarms, dashboards, and widgets to monitor your cloud environment. Manage permissions and track activity in your cloud environment using AWS services such as AWS CloudTrail and AWS Config. Deploy your resources to an Amazon Virtual Private Cloud (Amazon VPC), establish necessary connectivity to your Amazon VPC, and protect your resources from disruptions of service. State the purpose, benefits, and appropriate use cases for mountable storage in your AWS cloud environment. Explain the operational characteristics of object storage in the AWS cloud, including Amazon Simple Storage Service (Amazon S3) and Amazon S3 Glacier. Build a comprehensive costing model to help gather, optimize, and predict your cloud costs using services such as AWS Cost Explorer and the AWS Cost & Usage Report. This course teaches systems operators and anyone performing system operations functions how to install, configure, automate, monitor, secure, maintain, and troubleshoot the services, networks, and systems on AWS necessary to support business applications. The course also covers specific AWS features, tools, and best practices related to these functions.
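Several of the objectives above (monitoring with Amazon CloudWatch, alarms, and scaling on demand) come down to a handful of API calls. As a small, hedged example of what that looks like in code, the sketch below creates a CPU alarm on a single EC2 instance with boto3; the instance ID, region, and threshold are assumptions chosen for illustration.

```python
# Minimal sketch: create a CloudWatch alarm on EC2 CPU utilisation with boto3.
# Instance ID, region, and threshold are hypothetical values.
import boto3

def create_cpu_alarm(instance_id="i-0123456789abcdef0", region="eu-west-1"):
    cloudwatch = boto3.client("cloudwatch", region_name=region)
    cloudwatch.put_metric_alarm(
        AlarmName=f"high-cpu-{instance_id}",
        Namespace="AWS/EC2",
        MetricName="CPUUtilization",
        Dimensions=[{"Name": "InstanceId", "Value": instance_id}],
        Statistic="Average",
        Period=300,                      # five-minute datapoints
        EvaluationPeriods=2,             # two consecutive breaches before alarming
        Threshold=80.0,
        ComparisonOperator="GreaterThanThreshold",
        TreatMissingData="missing",
    )

if __name__ == "__main__":
    create_cpu_alarm()
```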
Module 1: Introduction to System Operations on AWS Systems operations AWS Well-Architected Framework AWS Well-Architected Tool Module 2a: Access Management Access management Resources, accounts, and AWS Organizations Module 2b: System Discovery Methods to interact with AWS services Introduction to monitoring services Tools for automating resource discovery Inventory with AWS Systems Manager and AWS Config Troubleshooting scenario Hands-On Lab: Auditing AWS Resources with AWS Systems Manager and AWS Config Module 3: Deploying and Updating Resources Systems operations in deployments Tagging strategies Deployment using Amazon Machine Images (AMIs) Deployment using AWS Control Tower Troubleshooting scenario Module 4: Automating Resource Deployment Deployment using AWS CloudFormation Deployment using AWS Service Catalog Troubleshooting scenario Hands-On Lab: Infrastructure as Code Module 5: Manage Resources AWS Systems Manager Troubleshooting scenario Hands-On Lab: Operations as Code Module 6a: Configure Highly Available Systems Distributing traffic with Elastic Load Balancing Amazon Route 53 Module 6b: Automate Scaling Scaling with AWS Auto Scaling Scaling with Spot Instances Managing licenses with AWS License Manager Troubleshooting scenario Module 7: Monitor and Maintain System Health Monitoring and maintaining healthy workloads Monitoring distributed applications Monitoring AWS infrastructure Monitoring your AWS account Troubleshooting scenario Hands-On Lab: Monitoring Applications and Infrastructure Module 8: Data Security and System Auditing Maintain a strong identity and access foundation Implement detection mechanisms Automate incident remediation Troubleshooting scenario Hands-On Lab: Securing the Environment Module 9: Operate Secure and Resilient Networks Building a secure Amazon Virtual Private Cloud (Amazon VPC) Networking beyond the VPC Troubleshooting scenario Module 10a: Mountable Storage Configuring Amazon Elastic Block Storage (Amazon EBS) Sizing Amazon EBS volumes for performance Using Amazon EBS snapshots Using Amazon Data Lifecycle Manager to manage your AWS resources Creating backup and data recovery plans Configuring shared file system storage Module 10b: Object Storage Deploying Amazon Simple Storage Service (Amazon S3) with Access Logs, Cross-Region Replication, and S3 Intelligent-Tiering Hands-On Lab: Automating with AWS Backup for Archiving and Recovery Module 11: Cost Reporting, Alerts, and Optimization Gain AWS expenditure awareness Use control mechanisms for cost management Optimize your AWS spend and usage Hands-On Lab: Capstone lab for SysOps Additional course details: Nexus Humans Systems Operations on AWS training program is a workshop that presents an invigorating mix of sessions, lessons, and masterclasses meticulously crafted to propel your learning expedition forward. This immersive bootcamp-style experience boasts interactive lectures, hands-on labs, and collaborative hackathons, all strategically designed to fortify fundamental concepts. Guided by seasoned coaches, each session offers priceless insights and practical skills crucial for honing your expertise. Whether you're stepping into the realm of professional skills or a seasoned professional, this comprehensive course ensures you're equipped with the knowledge and prowess necessary for success. While we feel this is the best course for the Systems Operations on AWS course and one of our Top 10 we encourage you to read the course outline to make sure it is the right content for you. 
Additionally, private sessions, closed classes or dedicated events are available both live online and at our training centres in Dublin and London, as well as at your offices anywhere in the UK, Ireland or across EMEA.
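Module 11 on cost reporting in the outline above maps onto the Cost Explorer API. As a hedged sketch of gaining expenditure awareness programmatically, the example below pulls one month of unblended cost grouped by service with boto3; the date range is an arbitrary assumption, and Cost Explorer must already be enabled on the account.

```python
# Sketch: retrieve one month of cost, grouped by service, via the Cost Explorer API.
# The date range is arbitrary; Cost Explorer must be enabled on the account.
import boto3

def monthly_cost_by_service(start="2024-01-01", end="2024-02-01"):
    ce = boto3.client("ce")
    resp = ce.get_cost_and_usage(
        TimePeriod={"Start": start, "End": end},
        Granularity="MONTHLY",
        Metrics=["UnblendedCost"],
        GroupBy=[{"Type": "DIMENSION", "Key": "SERVICE"}],
    )
    for group in resp["ResultsByTime"][0]["Groups"]:
        service = group["Keys"][0]
        amount = group["Metrics"]["UnblendedCost"]["Amount"]
        print(f"{service}: {amount}")

if __name__ == "__main__":
    monthly_cost_by_service()
```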
Duration 3 Days 18 CPD hours This course is intended for: Those who will provide container orchestration management in the AWS Cloud, including: DevOps engineers Systems administrators Overview In this course, you will learn to: Review and examine containers, Kubernetes, and Amazon EKS fundamentals, and the impact of containers on workflows. Build an Amazon EKS cluster by selecting the correct compute resources to support worker nodes. Secure your environment with AWS Identity and Access Management (IAM) authentication by creating an Amazon EKS service role for your cluster. Deploy an application on the cluster. Publish container images to ECR and secure access via IAM policy. Automate and deploy applications, examine automation tools and pipelines. Create a GitOps pipeline using WeaveFlux. Collect monitoring data through metrics, logs, and tracing with AWS X-Ray, and identify metrics for performance tuning. Review scenarios where bottlenecks require the best scaling approach using horizontal or vertical scaling. Assess the tradeoffs between efficiency, resiliency, and cost, and the impact of tuning one over the other. Describe and outline a holistic, iterative approach to optimizing your environment. Design for cost, efficiency, and resiliency. Configure the AWS networking services to support the cluster. Describe how EKS/Amazon Virtual Private Cloud (VPC) functions and simplifies inter-node communications. Describe the function of VPC Container Network Interface (CNI). Review the benefits of a service mesh. Upgrade your Kubernetes, Amazon EKS, and third-party tools. Amazon EKS makes it easy for you to run Kubernetes on AWS without needing to install, operate, and maintain your own Kubernetes control plane. In this course, you will learn container management and orchestration for Kubernetes using Amazon EKS. You will build an Amazon EKS cluster, configure the environment, deploy the cluster, and then add applications to your cluster. You will manage container images using Amazon Elastic Container Registry (ECR) and learn how to automate application deployment. You will deploy applications using CI/CD tools. You will learn how to monitor and scale your environment by using metrics, logging, tracing, and horizontal/vertical scaling. You will learn how to design and manage a large container environment by designing for efficiency, cost, and resiliency. You will configure AWS networking services to support the cluster and learn how to secure your Amazon EKS environment.
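Building, and later upgrading, an EKS cluster as described above usually starts with checking what the control plane reports. The hedged sketch below uses boto3 to list clusters in a region and print the Kubernetes version, API endpoint, and status of one of them; the cluster name and region are assumptions for illustration.

```python
# Sketch: inspect an Amazon EKS cluster's version and endpoint with boto3.
# Cluster name and region are hypothetical.
import boto3

def describe_eks_cluster(name="demo-cluster", region="eu-west-1"):
    eks = boto3.client("eks", region_name=region)
    print("clusters in region:", eks.list_clusters()["clusters"])
    cluster = eks.describe_cluster(name=name)["cluster"]
    print("name:    ", cluster["name"])
    print("version: ", cluster["version"])      # Kubernetes control-plane version
    print("endpoint:", cluster["endpoint"])     # API server URL used by kubectl
    print("status:  ", cluster["status"])

if __name__ == "__main__":
    describe_eks_cluster()
```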
Module 0: Course Introduction Course preparation activities and agenda Module 1: Container Fundamentals Best practices for building applications Container fundamentals Components of a container Module 2: Kubernetes Fundamentals Container orchestration Kubernetes objects Kubernetes internals Preparing for Lab 1: Deploying Kubernetes Pods Module 3: Amazon EKS Fundamentals Introduction to Amazon EKS Amazon EKS control plane Amazon EKS data plane Fundamentals of Amazon EKS security Amazon EKS API Module 4: Building an Amazon EKS Cluster Configuring your environment Creating an Amazon EKS cluster Demo: Configuring and deploying clusters in the AWS Management Console Working with eksctl Preparing for Lab 2: Building an Amazon EKS Cluster Module 5: Deploying Applications to Your Amazon EKS Cluster Configuring Amazon Elastic Container Registry (Amazon ECR) Demo: Configuring Amazon ECR Deploying applications with Helm Demo: Deploying applications with Helm Continuous deployment in Amazon EKS GitOps and Amazon EKS Preparing for Lab 3: Deploying App Module 6: Configuring Observability in Amazon EKS Configuring observability in an Amazon EKS cluster Collecting metrics Using metrics for automatic scaling Managing logs Application tracing in Amazon EKS Gaining and applying insight from observability Preparing for Lab 4: Monitoring Amazon EKS Module 7: Balancing Efficiency, Resilience, and Cost Optimization in Amazon EKS The high level overview Designing for resilience Designing for cost optimization Designing for efficiency Module 8: Managing Networking in Amazon EKS Review: Networking in AWS Communicating in Amazon EKS Managing your IP space Deploying a service mesh Preparing for Lab 5: Exploring Amazon EKS Communication Module 9: Managing Authentication and Authorization in Amazon EKS Understanding the AWS shared responsibility model Authentication and authorization Managing IAM and RBAC Demo: Customizing RBAC roles Managing pod permissions using RBAC service accounts Module 10: Implementing Secure Workflows Securing cluster endpoint access Improving the security of your workflows Improving host and network security Managing secrets Preparing for Lab 6: Securing Amazon EKS Module 11: Managing Upgrades in Amazon EKS Planning for an upgrade Upgrading your Kubernetes version Amazon EKS platform versions Additional course details: Nexus Humans Running Containers on Amazon Elastic Kubernetes Service (Amazon EKS) training program is a workshop that presents an invigorating mix of sessions, lessons, and masterclasses meticulously crafted to propel your learning expedition forward. This immersive bootcamp-style experience boasts interactive lectures, hands-on labs, and collaborative hackathons, all strategically designed to fortify fundamental concepts. Guided by seasoned coaches, each session offers priceless insights and practical skills crucial for honing your expertise. Whether you're stepping into the realm of professional skills or a seasoned professional, this comprehensive course ensures you're equipped with the knowledge and prowess necessary for success. While we feel this is the best course for the Running Containers on Amazon Elastic Kubernetes Service (Amazon EKS) course and one of our Top 10 we encourage you to read the course outline to make sure it is the right content for you. Additionally, private sessions, closed classes or dedicated events are available both live online and at our training centres in Dublin and London, as well as at your offices anywhere in the UK, Ireland or across EMEA.
Duration 3 Days 18 CPD hours This course is intended for The ideal audience for this course includes database enthusiasts, IT professionals, and developers who are eager to expand their knowledge and skill set in database management and optimization. Roles that would greatly benefit from attending this course include: Database Developers: Those who design, implement, and maintain databases as part of their primary responsibilities and want to improve their expertise in schema design, query optimization, and advanced database features. Backend Developers: Professionals who work on server-side application logic and require a strong understanding of database management to integrate data storage and retrieval processes into their applications. Overview Upon completing this course, database developers will be able to: Design and implement efficient database schemas by employing normalization techniques, appropriate indexing strategies, and partitioning methods to optimize data storage and retrieval processes. Develop advanced SQL queries, including joining multiple tables, utilizing subqueries, and aggregating data, to extract valuable insights and facilitate decision-making processes. Implement stored procedures, functions, and triggers to automate common database tasks, enforce data integrity, and improve overall application performance. Apply database performance tuning techniques, such as query optimization, index management, and transaction control, to ensure optimal resource usage and enhanced system responsiveness. Integrate databases with various programming languages and platforms, enabling seamless data access and manipulation for web, mobile, and desktop applications. PostgreSQL is a powerful, open-source object-relational database management system that emphasizes extensibility, data integrity, and high performance. Its versatility and robust feature set make it an ideal choice for developers working on projects of all sizes, from small-scale applications to enterprise-level systems. By learning PostgreSQL, developers can tap into its advanced capabilities, such as full-text search, spatial data support, and customizable data types, allowing them to create efficient and scalable solutions tailored to their unique needs. PostgreSQL for Database Developers is a three-day hands-on course that explores the fundamentals of database management, covering everything from installation and management to advanced SQL functions. Designed for beginners and enthusiasts alike, this course will equip you with the knowledge and skills required to effectively harness the power of PostgreSQL in today's data-driven landscape. Throughout the course, you'll be immersed in a variety of essential topics, such as understanding data types, creating and managing indexes, working with array values, and optimizing queries for improved performance. You'll gain valuable hands-on experience with real-world exercises, including the use of the psql client, writing triggers and stored procedures with PL/pgSQL, and exploring advanced SQL functions like Common Table Expressions (CTE), Window Functions, and Recursive Queries. You'll exit this course with a solid foundation in PostgreSQL, enabling you to confidently navigate and manage your databases with ease and efficiency.
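Since the course highlights Common Table Expressions and Window Functions, here is a small, hedged sketch of running one such query from Python with psycopg2; the connection string and the orders table are assumptions made up for the example, not material from the course.

```python
# Sketch: a CTE plus a window function executed through psycopg2.
# Connection details and the orders table are hypothetical.
import psycopg2

QUERY = """
WITH order_totals AS (
    SELECT customer_id, SUM(amount) AS total
    FROM orders
    GROUP BY customer_id
)
SELECT customer_id,
       total,
       RANK() OVER (ORDER BY total DESC) AS spend_rank   -- window function
FROM order_totals;
"""

def rank_customers():
    with psycopg2.connect("dbname=shop user=postgres host=localhost") as conn:
        with conn.cursor() as cur:
            cur.execute(QUERY)
            return cur.fetchall()

if __name__ == "__main__":
    for customer_id, total, spend_rank in rank_customers():
        print(customer_id, total, spend_rank)
```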
Installing & Managing PostgreSQL PostgreSQL installation process Optimal configuration settings User and role management Database backup and restoration Overview of PostgreSQL Database PostgreSQL architecture overview Understanding database objects Efficient data storage Transaction management basics Using the psql client Introduction to psql Essential psql commands Executing queries effectively Managing databases with psql Understanding PostgreSQL data types Numeric data types explored Character and binary types Date, time, and boolean values Array and other types Understanding sequences Sequence creation and usage Customizing sequence behavior Implementing auto-increment columns Sequence manipulation and control Creating & managing indexes PostgreSQL index fundamentals Designing partial indexes Utilizing expression-based indexes Index management techniques Using COPY to load data COPY command overview Importing and exporting data Handling CSV and binary formats Performance considerations Working with Array Values Array value basics Array manipulation functions Querying arrays efficiently Multidimensional array handling Advanced SQL Functions Mastering Common Table Expressions Utilizing Window Functions Regular Expressions in SQL Crafting Recursive Queries Writing triggers & stored procedures with PL/pgSQL PL/pgSQL variables usage Implementing loop operations PERFORM and EXECUTE statements Developing PostgreSQL triggers Using the PostgreSQL query optimizer Query analysis and optimization EXPLAIN command insights PostgreSQL query operators Identifying performance bottlenecks Improving query performance Query performance tuning Index optimization strategies Efficient database partitioning Connection and resource management Wrap Up & Additional Resources Further learning opportunities Staying up-to-date with PostgreSQL Community engagement and support Additional course details: Nexus Humans PostgreSQL for Database Developers (TTDB7024) training program is a workshop that presents an invigorating mix of sessions, lessons, and masterclasses meticulously crafted to propel your learning expedition forward. This immersive bootcamp-style experience boasts interactive lectures, hands-on labs, and collaborative hackathons, all strategically designed to fortify fundamental concepts. Guided by seasoned coaches, each session offers priceless insights and practical skills crucial for honing your expertise. Whether you're stepping into the realm of professional skills or a seasoned professional, this comprehensive course ensures you're equipped with the knowledge and prowess necessary for success. While we feel this is the best course for the PostgreSQL for Database Developers (TTDB7024) course and one of our Top 10 we encourage you to read the course outline to make sure it is the right content for you. Additionally, private sessions, closed classes or dedicated events are available both live online and at our training centres in Dublin and London, as well as at your offices anywhere in the UK, Ireland or across EMEA.
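The index and query-optimizer modules above pair naturally: you create a partial index and then confirm with EXPLAIN that the planner uses it. The sketch below does both through psycopg2 against a hypothetical orders table; the table, column names, and connection string are assumptions for illustration only.

```python
# Sketch: create a partial index and inspect the plan with EXPLAIN via psycopg2.
# The orders table, its columns, and the connection string are hypothetical.
import psycopg2

def create_index_and_explain():
    with psycopg2.connect("dbname=shop user=postgres host=localhost") as conn:
        with conn.cursor() as cur:
            # Partial index: only rows still awaiting shipment are indexed.
            cur.execute(
                "CREATE INDEX IF NOT EXISTS idx_orders_pending "
                "ON orders (created_at) WHERE status = 'pending'"
            )
            cur.execute(
                "EXPLAIN SELECT * FROM orders "
                "WHERE status = 'pending' AND created_at > now() - interval '7 days'"
            )
            for (plan_line,) in cur.fetchall():
                print(plan_line)

if __name__ == "__main__":
    create_index_and_explain()
```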