Duration 1 Day 6 CPD hours This course is intended for Technical professionals involved in architecting, building, and operating AWS solutions. Overview In this course, you will learn to: Identify the Well-Architected Framework features, design principles, design pillars, and common uses Apply the design principles, key services, and best practices for each pillar of the Well-Architected Framework Use the Well-Architected Tool to conduct Well-Architected Reviews The Well-Architected Framework enables you to make informed decisions about your customers' architectures in a cloud-native way and understand the impact of design decisions that are made. By using the Well-Architected Framework, you will understand the risks in your architecture and ways to mitigate them. This course is designed to provide a deep dive into the AWS Well-Architected Framework and its 5 pillars. This course also covers the Well-Architected Review process and using the AWS Well-Architected Tool to complete reviews. Module 1: Well-Architected Introduction History of Well-Architected Goals of Well-Architected What is the AWS Well-Architected Framework? The AWS Well-Architected Tool Module 2: Design Principles Operational Excellence
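For readers who want to see the AWS Well-Architected Tool introduced in Module 1 in action outside the console, here is a minimal sketch using the AWS SDK for Python (boto3); it is not part of the course materials, and the region, workload name, description, and review owner are placeholder assumptions.

import boto3  # AWS SDK for Python

# Placeholder region; the Well-Architected Tool API is regional.
wa = boto3.client("wellarchitected", region_name="eu-west-1")

# List the lenses available for a review (including the Well-Architected Framework lens).
for lens in wa.list_lenses()["LensSummaries"]:
    print(lens["LensName"], "-", lens["LensAlias"])

# Define a workload to review against the Well-Architected Framework lens.
workload = wa.create_workload(
    WorkloadName="retail-web-app",            # placeholder workload name
    Description="Customer-facing web tier",   # placeholder description
    Environment="PRODUCTION",
    Lenses=["wellarchitected"],               # alias of the framework lens
    ReviewOwner="architect@example.com",      # placeholder owner
    AwsRegions=["eu-west-1"],
)
print("Created workload:", workload["WorkloadId"])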
Duration 1 Day 6 CPD hours This course is intended for Learners who will find this course applicable to their work include: Solutions architects Cloud practitioners Data engineers Data scientists Developers Overview In this course, you will explore: Workload definition and key concepts The AWS Well-Architected Framework Review phases, process, best practices, and antipatterns High and medium risks Prioritizing improvements to the AWS Well-Architected workflow Locating and using the AWS Well-Architected Framework white paper, labs, prebuilt solutions in the AWS solutions library, AWS Well-Architected independent software vendors (ISVs), and AWS Well-Architected Partner Program (WAPP) This interactive course provides a deep dive into Amazon Web Services (AWS) best practices to help you perform effective and efficient AWS Well-Architected Framework Reviews. The course covers the phases of a review, including how to prepare, run, and get guidance after a review has been performed. Attendees should have familiarity with the AWS concepts, terminology, services, and tools that are covered in the intermediate, 200-level AWS Well-Architected Best Practices course. This course provides an AWS Well-Architected Framework Review simulation and instructor-led group exercises and discussions regarding prioritizing and solutioning risks. The content focuses on teaching learners how to prepare proposals on high and medium risk issues using the AWS Well-Architected Tool. Module 1: AWS Well-Architected Framework Reviews Workload definition Key concepts of a workload AWS Well-Architected Review phases AWS Well-Architected Review approach, lessons learned, and use case AWS Well-Architected Review best practices AWS Well-Architected Review anti-patterns Module 2: Customer Scenario Group Sessions Demonstration of a Review question and answer example Operational excellence Group role-play exercise Two questions in this pillar Security Group role-play exercise Three questions in this pillar Reliability Group role-play exercise Three questions in this pillar Performance efficiency Group role-play exercise Three questions in this pillar Cost optimization Group role-play exercise Three questions in this pillar Module 3: Risk Solutions and Priorities AWS Well-Architected workflow Defining and solutioning high risk issues (HRIs) and medium risk issues (MRIs) Identifying significant risks and solutioning group discussion for: Operational excellence Security Reliability Performance efficiency Cost optimization Prioritizing improvements Module 4: Resources Resource pages AWS Well-Architected ISVs Module 5: Course Summary Objective recap Debrief What's next? Additional course details: Nexus Humans Advanced AWS Well-Architected Best Practices training program is a workshop that presents an invigorating mix of sessions, lessons, and masterclasses meticulously crafted to propel your learning expedition forward. This immersive bootcamp-style experience boasts interactive lectures, hands-on labs, and collaborative hackathons, all strategically designed to fortify fundamental concepts. Guided by seasoned coaches, each session offers priceless insights and practical skills crucial for honing your expertise. Whether you're stepping into the realm of professional skills or a seasoned professional, this comprehensive course ensures you're equipped with the knowledge and prowess necessary for success. 
While we feel this is the best course for the Advanced AWS Well-Architected Best Practices course and one of our Top 10, we encourage you to read the course outline to make sure it is the right content for you. Additionally, private sessions, closed classes or dedicated events are available both live online and at our training centres in Dublin and London, as well as at your offices anywhere in the UK, Ireland or across EMEA.
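Because Module 3 of this course centres on finding and prioritizing high risk issues (HRIs) and medium risk issues (MRIs) recorded during a review, the following hedged sketch shows how such risk counts could be pulled from the AWS Well-Architected Tool with boto3; the workload ID and region are placeholders and this is not course lab code.

import boto3

wa = boto3.client("wellarchitected", region_name="eu-west-1")

# Summarise risk counts per workload: HIGH maps to HRIs, MEDIUM to MRIs.
for summary in wa.list_workloads()["WorkloadSummaries"]:
    risks = summary.get("RiskCounts", {})
    print(summary["WorkloadName"], "HRIs:", risks.get("HIGH", 0), "MRIs:", risks.get("MEDIUM", 0))

# List the high-risk questions for one workload against the framework lens.
workload_id = "0123456789abcdef0123456789abcdef"  # placeholder workload ID
answers = wa.list_answers(WorkloadId=workload_id, LensAlias="wellarchitected")
for answer in answers["AnswerSummaries"]:
    if answer.get("Risk") == "HIGH":
        print(answer["QuestionTitle"])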
Duration 5 Days 30 CPD hours This course is intended for This course is suitable for anyone responsible for configuring, managing or supporting a Veeam Availability Suite v11 environment. This includes Senior Engineers and Architects responsible for creating architectures for Veeam environments. Overview After completing this course, attendees should be able to: Describe Veeam Availability Suite components' usage scenarios and relevance to your environment. Effectively manage data availability in on-site, off-site, cloud and hybrid environments. Ensure both Recovery Time Objectives (RTOs) and Recovery Point Objectives (RPOs) are met. Configure Veeam Availability Suite to ensure data is protected effectively. Adapt to an organization's evolving technical and business data protection needs. Ensure recovery is possible, effective, efficient, secure and compliant with business requirements. Provide visibility of the business data assets, reports and dashboards to monitor performance and risks. Design and architect a Veeam solution in a real-world environment Describe best practices, review an existing infrastructure and assess business/project requirements Identify relevant infrastructure metrics and perform component (storage, CPU, memory) quantity sizing Provide implementation and testing guidelines in line with designs Innovatively address design challenges and pain points, matching appropriate Veeam Backup & Replication features with requirements Veeam Certified Architect is the highest level of Veeam technical certifications. Engineers who complete both the Veeam Availability Suite v11: Configuration and Management and Veeam Backup & Replication v11: Architecture and Design programs (courses + exams) will be granted the 'Veeam Certified Architect' (VMCA) title by Veeam. Introduction Veeam Availability Suite v11: Configuration and Management Describe RTOs and RPOs, what they mean for your business, how to manage and monitor performance against them The 3-2-1 Rule and its importance in formulating a successful backup strategy Identify key Veeam Availability Suite components and describe their usage scenarios and deployment types Building backup capabilities Backup methods, the appropriate use cases and impact on underlying file systems Create, modify, optimize and delete backup jobs, including Agents and NAS Backup jobs. Explore different tools and methods to maximize environment performance Ensure efficiency by being able to select appropriate transport modes while being aware of the impact of various backup functions on the infrastructure Building replication capabilities Identify and describe the options available for replication and impacts of using them Create and modify replication jobs, outline considerations to ensure success Introduce the new Continuous Data Protection (CDP) policy Secondary backups Simple vs. 
advanced backup copy jobs, how to create and modify them using best practices to ensure efficient recovery Discuss using tapes for backups Advanced repository capabilities Ensure repository scalability using a capability such as SOBR on-premises and off-site, including integration with cloud storage Ensure compatibility with existing deduplication appliances Introduce the new hardened repository Protecting data in the cloud Review how Veeam can protect the data of a cloud native application Review how Veeam Cloud Connect enables you to take advantage of cloud services built on Veeam Review how Veeam can be used to protect your Office 365 data Restoring from backup Ensure you have the confidence to use the correct restore tool at the right time for restoring VMs, bare metal and individual content such as files and folders Utilize Secure Restore to prevent the restoration of malware Describe how to use Staged Restore to comply with things like General Data Protection Regulation (GDPR) before releasing restores to production Identify, describe and utilize the different explorers and instant recovery tools and features Recovery from replica Identify and describe in detail failover features and the appropriate usage Develop, prepare and test failover plans to ensure recovery Disaster recovery from replica to meet a variety of real-world recovery needs Testing backup and replication Testing backups and replicas to ensure you can recover what you need, when you need to Configure and set up virtual sandbox environments based on backup, replicas and storage snapshots Veeam Backup Enterprise Manager and Veeam ONE Introduce the concept of monitoring your virtual, physical and cloud environments with Veeam Backup Enterprise Manager and Veeam ONE Configuration backup Locate, migrate or restore backup configuration Introduction Veeam Backup & Replication v11: Architecture and Design Review the architecture principles Explore what a successful architecture looks like Review Veeam's architecture methodology Discovery Analyze the existing environment Uncover relevant infrastructure metrics Uncover assumptions and risks Identify complexity in the environment Conceptual design Review scenario and data from discovery phase Identify logical groups of objects that will share resources based on requirements Create a set of detailed tables of business and technical requirements, constraints, assumptions and risks Review infrastructure data with each product component in mind Create high level design and data flow Logical design Match critical components and features of VBR with requirements Create logical groupings Determine location of components and relationship to logical grouping Aggregate totals of component resources needed per logical grouping Calculate component (storage, CPU, memory) quantity sizing Physical/tangible design Convert the logical design into a physical design Physical hardware sizing Create a list of physical Veeam backup components Implementation and Governance Review physical design and implementation plan Review Veeam deployment hardening Describe the architect's obligations to the implementation team Provide guidance on implementation specifics that relate to the design Validation and Iteration Provide framework for how to test the design Further develop the design according to a modification scenario
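The Architecture and Design portion of this course asks you to calculate component (storage, CPU, memory) quantity sizing. The sketch below is a generic back-of-the-envelope repository capacity estimate in Python, not a Veeam sizing formula from the course; the weekly-full/daily-incremental scheme, change rate, data reduction ratio, and retention figures are all placeholder assumptions.

# Rough repository capacity estimate for a weekly-full / daily-incremental scheme.
# All inputs are illustrative assumptions, not values taken from the course.

def repository_estimate_tb(source_tb, daily_change_rate, reduction_ratio,
                           retention_days, fulls_retained):
    """Return an approximate repository size in TB."""
    full_size = source_tb / reduction_ratio                        # size of one full backup
    incremental = source_tb * daily_change_rate / reduction_ratio  # size of one incremental
    return fulls_retained * full_size + retention_days * incremental

if __name__ == "__main__":
    estimate = repository_estimate_tb(
        source_tb=50,            # protected source data
        daily_change_rate=0.05,  # 5% daily change
        reduction_ratio=2.0,     # assumed 2:1 compression/dedupe
        retention_days=14,       # daily restore points kept
        fulls_retained=4,        # weekly fulls kept
    )
    print(f"Estimated repository capacity: {estimate:.1f} TB")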
Duration 2 Days 12 CPD hours This course is intended for Cloud Solutions Architects, Site Reliability Engineers, Systems Operations professionals, DevOps Engineers, IT managers. Individuals using Google Cloud Platform to create new solutions or to integrate existing systems, application environments, and infrastructure with the Google Cloud Platform. Overview Apply a tool set of questions, techniques and design considerations Define application requirements and express them objectively as KPIs, SLOs and SLIs Decompose application requirements to find the right microservice boundaries Leverage Google Cloud developer tools to set up modern, automated deployment pipelines Choose the appropriate Google Cloud storage services based on application requirements Architect cloud and hybrid networks Implement reliable, scalable, resilient applications balancing key performance metrics with cost Choose the right Google Cloud deployment services for your applications Secure cloud applications, data and infrastructure Monitor service level objectives and costs using Stackdriver tools This course features a combination of lectures, design activities, and hands-on labs to show you how to use proven design patterns on Google Cloud to build highly reliable and efficient solutions and operate deployments that are highly available and cost-effective. This course was created for those who have already completed the Architecting with Google Compute Engine or Architecting with Google Kubernetes Engine course. Defining the Service Describe users in terms of roles and personas. Write qualitative requirements with user stories. Write quantitative requirements using key performance indicators (KPIs). Evaluate KPIs using SLOs and SLIs. Determine the quality of application requirements using SMART criteria. Microservice Design and Architecture Decompose monolithic applications into microservices. Recognize appropriate microservice boundaries. Architect stateful and stateless services to optimize scalability and reliability. Implement services using 12-factor best practices. Build loosely coupled services by implementing a well-designed REST architecture. Design consistent, standard RESTful service APIs. DevOps Automation Automate service deployment using CI/CD pipelines. Leverage Cloud Source Repositories for source and version control. Automate builds with Cloud Build and build triggers. Manage container images with Google Container Registry. Create infrastructure with code using Deployment Manager and Terraform. Choosing Storage Solutions Choose the appropriate Google Cloud data storage service based on use case, durability, availability, scalability and cost. Store binary data with Cloud Storage. Store relational data using Cloud SQL and Spanner. Store NoSQL data using Firestore and Cloud Bigtable. Cache data for fast access using Memorystore. Build a data warehouse using BigQuery. Google Cloud and Hybrid Network Architecture Design VPC networks to optimize for cost, security, and performance. Configure global and regional load balancers to provide access to services. Leverage Cloud CDN to provide lower latency and decrease network egress. Evaluate network architecture using the Cloud Network Intelligence Center. Connect networks using peering and VPNs. Create hybrid networks between Google Cloud and on-premises data centers using Cloud Interconnect. Deploying Applications to Google Cloud Choose the appropriate Google Cloud deployment service for your applications. 
Configure scalable, resilient infrastructure using Instance Templates and Groups. Orchestrate microservice deployments using Kubernetes and GKE. Leverage App Engine for a completely automated platform as a service (PaaS). Create serverless applications using Cloud Functions. Designing Reliable Systems Design services to meet requirements for availability, durability, and scalability. Implement fault-tolerant systems by avoiding single points of failure, correlated failures, and cascading failures. Avoid overload failures with the circuit breaker and truncated exponential backoff design patterns. Design resilient data storage with lazy deletion. Analyze disaster scenarios and plan for disaster recovery using cost/risk analysis. Security Design secure systems using best practices like separation of concerns, principle of least privilege, and regular audits. Leverage Cloud Security Command Center to help identify vulnerabilities. Simplify cloud governance using organizational policies and folders. Secure people using IAM roles, Identity-Aware Proxy, and Identity Platform. Manage the access and authorization of resources by machines and processes using service accounts. Secure networks with private IPs, firewalls, and Private Google Access. Mitigate DDoS attacks by leveraging Cloud DNS and Cloud Armor. Maintenance and Monitoring Manage new service versions using rolling updates, blue/green deployments, and canary releases. Forecast, monitor, and optimize service cost using the Google Cloud pricing calculator and billing reports and by analyzing billing data. Observe whether your services are meeting their SLOs using Cloud Monitoring and Dashboards. Use Uptime Checks to determine service availability. Respond to service outages using Cloud Monitoring Alerts. Additional course details: Nexus Humans Architecting with Google Cloud: Design and Process training program is a workshop that presents an invigorating mix of sessions, lessons, and masterclasses meticulously crafted to propel your learning expedition forward. This immersive bootcamp-style experience boasts interactive lectures, hands-on labs, and collaborative hackathons, all strategically designed to fortify fundamental concepts. Guided by seasoned coaches, each session offers priceless insights and practical skills crucial for honing your expertise. Whether you're stepping into the realm of professional skills or a seasoned professional, this comprehensive course ensures you're equipped with the knowledge and prowess necessary for success. While we feel this is the best course for the Architecting with Google Cloud: Design and Process course and one of our Top 10, we encourage you to read the course outline to make sure it is the right content for you. Additionally, private sessions, closed classes or dedicated events are available both live online and at our training centres in Dublin and London, as well as at your offices anywhere in the UK, Ireland or across EMEA.
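The Designing Reliable Systems module above recommends avoiding overload failures with circuit breaker and truncated exponential backoff patterns. Below is a minimal, generic Python sketch of truncated exponential backoff with jitter; it is not taken from the course labs, and the retry limits and the flaky_call placeholder are assumptions.

import random
import time

def call_with_backoff(operation, max_attempts=5, base_delay=1.0, max_delay=32.0):
    """Retry `operation`, doubling the wait (with jitter) up to a ceiling."""
    for attempt in range(max_attempts):
        try:
            return operation()
        except Exception:
            if attempt == max_attempts - 1:
                raise  # give up after the final attempt
            # Truncated exponential backoff: 2^attempt capped at max_delay, plus jitter.
            delay = min(base_delay * (2 ** attempt), max_delay)
            time.sleep(delay + random.uniform(0, 1))

def flaky_call():
    # Placeholder for a request to an overloaded backend.
    if random.random() < 0.7:
        raise RuntimeError("backend overloaded")
    return "ok"

print(call_with_backoff(flaky_call))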
Duration 3 Days 18 CPD hours This course is intended for Ideal candidates for this course: Consultants Pre-sales Engineers Sales Engineers Systems Engineers Solutions Architects Overview This course teaches advanced-level HPE Server technologies. Topics include: HPE Apollo Servers HPE Moonshot Servers HPE Integrity Superdome X Servers Management Tools Customer Engagement Skills Recognizing Industry Trends Describe trends affecting enterprises and explain how these trends lead to the four Transformation Areas Describe key business challenges enterprises are facing. Review the role of a server architect, emphasizing how the architect helps companies. Provide an overview of the HPE enterprise server solutions covered in this course: Apollo solutions Moonshot Integrity Superdome X Gathering Customer Requirements Identify key decision makers and explain how to engage them in a discussion about the company's business requirements and challenges Obtain data and documentation required to understand the company's business requirements Explain best practices for creating requirements statements and documents Advanced Architecture for Server Solutions Analyze the special needs of data, High Performance Computing (HPC), and mission-critical workloads Given a customer's specific requirements, architect a solution for data, HPC, and mission-critical workloads HPE Apollo Solutions for HPC Explain the features and benefits of HPE Apollo 2000, 6000, and 8000 solutions Position HPE Apollo 2000 and 6000 solutions for the right use cases and workloads Create an implementation plan for an HPE Apollo 2000 or 6000 solution, including plans for the proper performance, scalability, high availability, and management HPE Apollo 4000 for Data-Driven Organizations Briefly describe the HPE Apollo 4000 portfolio Position HPE Apollo 4000 solutions for the right use cases Create an implementation plan for an HPE Apollo 4000 solution, including plans for the proper performance, scalability, and high availability HPE Moonshot Solutions Briefly describe the HPE Moonshot portfolio Position HPE Moonshot solutions for the right use cases Explain options and best practices for designing the networking component of an HPE Moonshot solution HPE Moonshot Workloads Position HPE Moonshot cartridges for the right use cases and workloads Create an implementation plan for the following solutions, including plans for the proper performance, scalability, and high availability: Big data and analytics solution Video processing solution Mobile workspace solution Web infrastructure solution HPE Integrity Superdome X Solutions Explain the benefits of the HPE Integrity Superdome X and describe its available options Explain the benefits of nPar and RAS features for HPE Integrity X solutions Position HPE Integrity Superdome X solutions for the right use cases Create an implementation plan for HPE Integrity X solutions, including plans for the proper performance, scalability, fault tolerance, high availability, and manageability Monitoring and Managing HPE Solutions Recommend and substantiate the HPE management tools that optimize administrative operations for various customer environments Explain the benefits of the HPE Representational State Transfer (REST) application program interface (API) Working with Customer Business Financials Demonstrate business acumen through an 
ability to analyze financial statements Define basic financial terms used when talking with a customer's executive officers Calculate key performance indicators (KPIs) to analyze a customer's financial health and understand industry and company trends Use HPE tools to analyze a company's financial position Additional course details: Nexus Humans Architecting Adv HPE Server Solutions Rev 16.21 training program is a workshop that presents an invigorating mix of sessions, lessons, and masterclasses meticulously crafted to propel your learning expedition forward. This immersive bootcamp-style experience boasts interactive lectures, hands-on labs, and collaborative hackathons, all strategically designed to fortify fundamental concepts. Guided by seasoned coaches, each session offers priceless insights and practical skills crucial for honing your expertise. Whether you're stepping into the realm of professional skills or a seasoned professional, this comprehensive course ensures you're equipped with the knowledge and prowess necessary for success. While we feel this is the best course for the Architecting Adv HPE Server Solutions Rev 16.21 course and one of our Top 10, we encourage you to read the course outline to make sure it is the right content for you. Additionally, private sessions, closed classes or dedicated events are available both live online and at our training centres in Dublin and London, as well as at your offices anywhere in the UK, Ireland or across EMEA.
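Since the Working with Customer Business Financials topics include calculating KPIs from a customer's financial statements, here is an illustrative sketch of a few common ratios; the figures and the choice of ratios are assumptions for demonstration, not HPE course material.

# Illustrative financial KPIs computed from placeholder statement figures (in millions).
revenue = 1200.0
cost_of_goods_sold = 780.0
net_income = 96.0
current_assets = 450.0
current_liabilities = 300.0
total_assets = 1500.0

gross_margin = (revenue - cost_of_goods_sold) / revenue  # profitability of core sales
net_margin = net_income / revenue                         # overall profitability
current_ratio = current_assets / current_liabilities      # short-term liquidity
return_on_assets = net_income / total_assets              # how efficiently assets generate profit

print(f"Gross margin:     {gross_margin:.1%}")
print(f"Net margin:       {net_margin:.1%}")
print(f"Current ratio:    {current_ratio:.2f}")
print(f"Return on assets: {return_on_assets:.1%}")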
Duration 3 Days 18 CPD hours This course is intended for This class is intended for the following participants: Cloud architects, administrators, and SysOps/DevOps personnel Individuals using Google Cloud Platform to create new solutions or to integrate existing systems, application environments, and infrastructure with the Google Cloud Platform. Overview This course teaches participants the following skills: Understand how software containers work Understand the architecture of Kubernetes Understand the architecture of Google Cloud Platform Understand how pod networking works in Kubernetes Engine Create and manage Kubernetes Engine clusters using the GCP Console and gcloud/kubectl commands Launch, roll back and expose jobs in Kubernetes Manage access control using Kubernetes RBAC and Google Cloud IAM Manage pod security policies and network policies Use Secrets and ConfigMaps to isolate security credentials and configuration artifacts Understand GCP choices for managed storage services Monitor applications running in Kubernetes Engine This class introduces participants to deploying and managing containerized applications on Google Kubernetes Engine (GKE) and the other services provided by Google Cloud Platform. Through a combination of presentations, demos, and hands-on labs, participants explore and deploy solution elements, including infrastructure components such as pods, containers, deployments, and services; as well as networks and application services. This course also covers deploying practical solutions including security and access management, resource management, and resource monitoring. Introduction to Google Cloud Platform Use the Google Cloud Platform Console Use Cloud Shell Define cloud computing Identify GCP's compute services Understand regions and zones Understand the cloud resource hierarchy Administer your GCP resources Containers and Kubernetes in GCP Create a container using Cloud Build Store a container in Container Registry Understand the relationship between Kubernetes and Google Kubernetes Engine (GKE) Understand how to choose among GCP compute platforms Kubernetes Architecture Understand the architecture of Kubernetes: pods, namespaces Understand the control-plane components of Kubernetes Create container images using Google Cloud Build Store container images in Google Container Registry Create a Kubernetes Engine cluster Kubernetes Operations Work with the kubectl command Inspect the cluster and Pods View a Pod's console output Sign in to a Pod interactively Deployments, Jobs, and Scaling Create and use Deployments Create and run Jobs and CronJobs Scale clusters manually and automatically Configure Node and Pod affinity Get software into your cluster with Helm charts and Kubernetes Marketplace GKE Networking Create Services to expose applications that are running within Pods Use load balancers to expose Services to external clients Create Ingress resources for HTTP(S) load balancing Leverage container-native load balancing to improve Pod load balancing Define Kubernetes network policies to allow and block traffic to pods Persistent Data and Storage Use Secrets to isolate security credentials Use ConfigMaps to isolate configuration artifacts Push out and roll back updates to Secrets and ConfigMaps Configure Persistent Storage Volumes for Kubernetes Pods Use StatefulSets to ensure that claims on persistent storage volumes persist across restarts Access Control and Security in Kubernetes and Kubernetes Engine Understand Kubernetes authentication and 
authorization Define Kubernetes RBAC roles and role bindings for accessing resources in namespaces Define Kubernetes RBAC cluster roles and cluster role bindings for accessing cluster-scoped resources Define Kubernetes pod security policies Understand the structure of GCP IAM Define IAM roles and policies for Kubernetes Engine cluster administration Logging and Monitoring Use Stackdriver to monitor and manage availability and performance Locate and inspect Kubernetes logs Create probes for wellness checks on live applications Using GCP Managed Storage Services from Kubernetes Applications Understand pros and cons for using a managed storage service versus self-managed containerized storage Enable applications running in GKE to access GCP storage services Understand use cases for Cloud Storage, Cloud SQL, Cloud Spanner, Cloud Bigtable, Cloud Firestore, and BigQuery from within a Kubernetes application
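The access control module above defines Kubernetes RBAC roles and role bindings for namespaced resources. A minimal sketch using the official Kubernetes Python client is shown below; the course labs themselves work through the GCP Console and kubectl, so the client library, namespace, role name, and read-only pod permissions here are illustrative assumptions.

from kubernetes import client, config

# Load credentials from the local kubeconfig (e.g. created by `gcloud container clusters get-credentials`).
config.load_kube_config()
rbac = client.RbacAuthorizationV1Api()

# A namespaced Role granting read-only access to Pods; names are illustrative.
pod_reader = client.V1Role(
    metadata=client.V1ObjectMeta(name="pod-reader", namespace="demo"),
    rules=[
        client.V1PolicyRule(
            api_groups=[""],              # "" means the core API group
            resources=["pods"],
            verbs=["get", "list", "watch"],
        )
    ],
)

rbac.create_namespaced_role(namespace="demo", body=pod_reader)
print("Created Role pod-reader in namespace demo")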
Duration 2 Days 12 CPD hours This course is intended for This class is primarily intended for the following participants: Technical employees using GCP, including customer companies, partners, and system integrators: deployment engineers, cloud architects, cloud administrators, system engineers, and SysOps/DevOps engineers. Individuals using GCP to create, integrate, or modernize solutions using secure, scalable microservices architectures in hybrid environments. Overview Connect and manage Anthos GKE clusters from GCP Console whether clusters are part of Anthos on Google Cloud or Anthos deployed on VMware. Understand how service mesh proxies are installed, configured and managed. Configure centralized logging, monitoring, tracing, and service visualizations wherever the Anthos GKE clusters are hosted. Understand and configure fine-grained traffic management. Use service mesh security features for service-service authentication, user authentication, and policy-based service authorization. Install a multi-service application spanning multiple clusters in a hybrid environment. Understand how services communicate across clusters. Migrate services between clusters. Install Anthos Config Management, use it to enforce policies, and explain how it can be used across multiple clusters. This two-day instructor-led course prepares students to modernize, manage, and observe their applications using Kubernetes whether the application is deployed on-premises or on Google Cloud Platform (GCP). Through presentations and hands-on labs, participants explore and deploy using Kubernetes Engine (GKE), GKE Connect, Istio service mesh, and Anthos Config Management capabilities that enable operators to work with modern applications even when split among multiple clusters hosted by multiple providers, or on-premises. 
Anthos Overview Describe challenges of hybrid cloud Discuss modern solutions Describe the Anthos Technology Stack Managing Hybrid Clusters using Kubernetes Engine Understand Anthos GKE hybrid environments, with Admin and User clusters Register and authenticate remote Anthos GKE clusters in GKE Hub View and manage registered clusters, in cloud and on-premises, using GKE Hub View workloads in all clusters from GKE Hub Lab: Managing Hybrid Clusters using Kubernetes Engine Introduction to Service Mesh Understand service mesh, and the problems it solves Understand Istio architecture and components Explain the Istio on GKE add-on and its lifecycle, versus OSS Istio Understand request network traffic flow in a service mesh Create a GKE cluster, with a service mesh Configure a multi-service application with service mesh Enable external access using an ingress gateway Explain the multi-service example applications: Hipster Shop and Bookinfo Lab: Installing Open Source Istio on Kubernetes Engine Lab: Installing the Istio on GKE Add-On with Kubernetes Engine Observing Services using Service Mesh Adapters Understand service mesh flexible adapter model Understand service mesh telemetry processing Explain Stackdriver configurations for logging and monitoring Compare telemetry defaults for cloud and on-premises environments Configure and view custom metrics using service mesh View cluster and service metrics with pre-configured dashboards Trace microservice calls with timing data using service mesh adapters Visualize and discover service attributes with service mesh Lab: Telemetry and Observability with Istio Managing Traffic Routing with Service Mesh Understand the service mesh abstract model for traffic management Understand service mesh service discovery and load balancing Review and compare traffic management use cases and configurations Understand ingress configuration using service mesh Visualize traffic routing with live generated requests Configure a service mesh gateway to allow access to services from outside the mesh Apply virtual services and destination rules for version-specific routing Route traffic based on application-layer configuration Shift traffic from one service version to another, with fine-grained control, like a canary deployment Lab: Managing Traffic Routing with Istio and Envoy Managing Policies and Security with Service Mesh Understand authentication and authorization in service mesh Explain mTLS flow for service to service communication Adopt mutual TLS authentication across the service mesh incrementally Enable end-user authentication for the frontend service Use service mesh access control policies to secure access to the frontend service Lab: Managing Policies and Security with Service Mesh Managing Policies using Anthos Config Management Understand the challenge of managing resources across multiple clusters Understand how a Git repository is used as a configuration source of truth Explain the Anthos Config Management components, and object lifecycle Install and configure Anthos Config Management, operators, tools, and related Git repository Verify cluster configuration compliance and drift management Update workload configuration using repo changes Lab: Managing Policies in Kubernetes Engine using Anthos Config Configuring Anthos GKE for Multi-Cluster Operation Understand how multiple clusters work together using DNS, root CA, and service discovery Explain service mesh control-plane architectures for multi-cluster Configure a multi-service application using service mesh across 
multiple clusters with multiple control-planes Configure a multi-service application using service mesh across multiple clusters with a shared control-plane Configure service naming/discovery between clusters Review ServiceEntries for cross-cluster service discovery Migrate workload from a remote cluster to an Anthos GKE cluster Lab: Configuring GKE for Multi-Cluster Operation with Istio Lab: Configuring GKE for Shared Control Plane Multi-Cluster Operation
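The traffic-routing module in this outline shifts traffic between service versions with virtual services and destination rules, as in a canary deployment. As a hedged illustration only, the sketch below uses the Kubernetes Python client to apply an Istio VirtualService that weights traffic 90/10 between two subsets; the host, namespace, subset names, and API version are assumptions, and the course labs apply equivalent YAML manifests rather than Python.

from kubernetes import client, config

config.load_kube_config()
custom = client.CustomObjectsApi()

# An Istio VirtualService splitting traffic 90/10 between two subsets (canary style).
# A matching DestinationRule defining the v1/v2 subsets is assumed to exist already.
virtual_service = {
    "apiVersion": "networking.istio.io/v1beta1",
    "kind": "VirtualService",
    "metadata": {"name": "reviews-canary", "namespace": "demo"},
    "spec": {
        "hosts": ["reviews"],
        "http": [{
            "route": [
                {"destination": {"host": "reviews", "subset": "v1"}, "weight": 90},
                {"destination": {"host": "reviews", "subset": "v2"}, "weight": 10},
            ]
        }],
    },
}

custom.create_namespaced_custom_object(
    group="networking.istio.io",
    version="v1beta1",
    namespace="demo",
    plural="virtualservices",
    body=virtual_service,
)
print("Applied VirtualService reviews-canary (90% v1 / 10% v2)")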
Duration 5 Days 30 CPD hours This course is intended for This course is recommended for IT Professionals and Consultants. Overview Identify risks and areas for improvement in a Citrix Virtual Apps and Desktops environment by assessing relevant information in an existing deployment. Determine core Citrix Virtual Apps and Desktops design decisions and align them to business requirements to achieve a practical solution. Design a Citrix Virtual Apps and Desktops disaster recovery plan and understand different disaster recovery considerations. This advanced 5-day training course teaches the design principles for creating a Citrix Virtual Apps and Desktops virtualization solution. In this training, you will also learn how to assess existing environments, explore different scenarios, and make design decisions based on business requirements. This course covers the Citrix Consulting approach to design and covers the key design decisions through lectures, lab exercises, and interactive discussions. You will also learn about additional considerations and advanced configurations for multi-location solutions and disaster recovery planning. This training will help you prepare for the Citrix Certified Expert in Virtualization (CCE-V) exam. Module 1: Methodology & Assessment The Citrix Consulting Methodology Citrix Consulting Methodology Use Business Drivers Prioritize Business Drivers User Segmentation User Segmentation Process App Assessment Introduction App Assessment Analysis Why Perform a Capabilities Assessment? Common Capabilities Assessment Risks Module 2: User Layer Endpoint Considerations Peripherals Considerations Citrix Workspace App Version Considerations Citrix Workspace App Multiple Version Considerations Network Connectivity and the User Experience Bandwidth and Latency Considerations Graphics Mode Design Considerations HDX Transport Protocols Considerations Media Content Redirection Considerations Session Interruption Management Session Reliability Feature Considerations Session Interruption Management Auto-Client Reconnect Feature Considerations Session Interruption Management ICA Keep-Alive Feature Considerations Module 3: Access Layer Access Matrix Access Layer Access Layer Communications Double-Hop Access Layer Considerations Citrix Cloud Access Layer Considerations Use Cases for Multiple Stores Define Access Paths per User Group Define Number of URLs Configuration and Prerequisites for Access Paths Citrix Gateway Scalability Citrix Gateway High Availability StoreFront Server Scalability StoreFront Server High Availability Module 4: Resource Layer - Images FlexCast Models VDA Machine Scalability VDA Machine Sizing with NUMA VDA Machine Sizing VDA Machine Scalability Cloud Considerations Scalability Testing and Monitoring Secure VDA Machines Network Traffic Secure VDA Machines Prevent Breakouts Secure VDA Machines Implement Hardening Secure VDA Machines Anti-Virus Review of Image Methods Citrix Provisioning Overall Benefits and Considerations Citrix Provisioning Target Device Boot Methods Citrix Provisioning Read Cache and Sizing Citrix Provisioning Write Cache Type Citrix Provisioning vDisk Store Location Citrix Provisioning Network Design Citrix Provisioning Scalability Considerations Citrix Machine Creation Services Overall Benefits and Considerations Citrix Machine Creation Services Cloning Types Citrix Machine Creation Services Storage Locations & Sizing Citrix Machine Creation Services Read and Write Cache App Layering Considerations Image Management Methods Module 5: Resource Layer - 
Applications and Personalization Application Delivery Option Determine the Optimal Deployment Method for an App General Application Concerns Profile Strategy Profile Types Review Citrix Profile Management Design Considerations Citrix Profile Management Scaling Citrix Profile Management Permissions Policies Review Optimize Logon Performance with Policies Printing Considerations Module 6: Control Layer Pod Architecture Introduction Pod Architecture Considerations Citrix Virtual Apps and Desktops Service Design Considerations Implement User Acceptance Testing Load Balancing the Machine Running the VDA Citrix Director Design Considerations Management Console Considerations Change Control Delivery Controller Scalability and High Availability Control Layer Security Configuration Logging Considerations Session Recording Module 7: Hardware/Compute Layer Hypervisor Host Hardware Considerations Separating Workloads Considerations Workload Considerations VMs Per Host and Hosts Per Pool Citrix Hypervisor Scalability VM Considerations in Azure and Amazon Web Services Storage Tier Considerations Storage I/O Considerations Storage Architecture Storage RAID & Disk Type Storage Sizing LUNs Storage Bandwidth Storage in Public Cloud Datacenter Networking Considerations Securing Hypervisor Administrative Access Secure the Physical Datacenter Secure the Virtual Datacenter Module 8: High Availability and Multiple Location Environments Redundancy vs. Fault Tolerance vs. High Availability Multi-Location Architecture Considerations Multi-Site Architecture Considerations Global Server Load Balancing Optimal Gateway Routing Zone Preference and Failover StoreFront Resource Aggregation StoreFront Subscription Sync Hybrid Environment Options Citrix Provisioning Across Sites Site Database Scalability and High Availability Citrix Provisioning Across Sites Considerations Citrix Machine Creation Across Sites App Layering Across Sites Managing Roaming Profiles and Citrix Workspace App Configurations Across Devices Profile Management Multi-Site Replication Considerations Folder Redirections and Other User Data in a Multi-Location Environment Application Data Considerations Cloud-Based Storage Replication Options Multi-Location Printing Considerations Zone Considerations Active Directory Considerations Module 9: Disaster Recovery Tiers of Disaster Recovery Disaster Recovery Considerations Business Continuity Planning and Testing Citrix Standard of Business Continuity
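Modules 4 and 7 of this course cover VDA machine sizing and hypervisor host sizing. The sketch below is a generic, back-of-the-envelope host-count estimate with N+1 redundancy; the user counts, session density, and headroom figures are placeholder assumptions rather than Citrix sizing guidance.

import math

# Placeholder inputs for a simple host-count estimate.
concurrent_users = 1800   # peak concurrent sessions
users_per_vda = 60        # sessions per multi-session VDA machine (assumed)
vdas_per_host = 8         # VDA VMs per hypervisor host (assumed)
headroom = 0.8            # keep hosts at roughly 80% of capacity

vda_machines = math.ceil(concurrent_users / users_per_vda)
hosts_needed = math.ceil(vda_machines / (vdas_per_host * headroom))
hosts_with_redundancy = hosts_needed + 1   # N+1 for host failure or maintenance

print(f"VDA machines: {vda_machines}")
print(f"Hosts (N+1):  {hosts_with_redundancy}")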
Duration 4 Days 24 CPD hours This course is intended for System installers System integrators System administrators Network administrators Solutions designers Overview After completing this course, you should be able to: Explain transactional service activation and how it relates to business requirements Explain the benefits and uses of Cisco NSO Explain how Cisco NSO communicates with network devices Understand the NETCONF protocol and be able to read and write simple YANG models Install NSO and describe how NSO uses NETCONF and the Device Manager component Understand the difference between devices that are fully NETCONF capable and those that are less or not NETCONF capable Explain the YANG service model structure Describe how YANG is used with NSO, create and deploy a service, and explain NSO FASTMAP Design and manage services with YANG models Perform NSO configuration and basic troubleshooting, and describe the following NSO features: integration options, alarms and reporting, scalability and performance options, and available function packs Use logs to troubleshoot the Cisco NSO deployment and check NSO communication with network devices Explain the mapping logic of service parameters to device models and consequently to device configurations Describe the use of different integration options and APIs Explain the use of Reactive FASTMAP for manipulating and implementing advanced Network Functions Virtualization (NFV) components Describe the use of feature components and function packs Define and explain the European Telecommunications Standards Institute (ETSI) Open Source NFV Management and Orchestration (MANO) principles and solution Work with the alarm console, and understand the NSO alarm structure and how it conforms to modern network operations procedures The Cisco NSO Essentials for Programmers and Network Architects (NSO201) course introduces you to Cisco© Network Services Orchestrator (NSO). You will learn to install Cisco NSO and use it to manage devices and create services based on YANG templates with XPath. This course provides a brief overview of NSO as a network automation solution, as well as an introduction to NETCONF, YANG, and XPath. You will learn about service packages, network element drivers, and Application Programming Interfaces (APIs). The course also covers service creation, device and configuration management, NSO maintenance, NSO options and integrations, and basic NSO troubleshooting. Introduction to Cisco NSO Meeting Challenges with Orchestration Challenges of Network Management Challenges of Network Orchestration NSO Features and Benefits That Meet Challenges Standardized Approach What Is NSO? Logical Architecture Components What Does NSO Do? Orchestration Use Cases How Does NSO Work? Introduction to NETCONF and YANG Packages Mapping Logic Network Element Drivers (NEDs) Resources and Training Resources Training Get Started with Cisco NSO Installing Cisco NSO Setup Overview Cisco NSO Local Installation Installing NEDs Using NetSim NETCONF Overview Challenges of Network Management Introduction to NETCONF NETCONF Operation Device Manager Device Manager Overview Device Configuration Management Device Connection Management Templates and Groups Other Device Management Tools Service Manager Essentials YANG Overview Introduction to YANG Other Representations of YANG Data Types XPath Overview Basic YANG Statements Can You Spot the Error? 
Using Services Package Architecture Creating a Service Package Sample Service Configuration Service Template YANG Service Model Deploying a Service Model-to-Model Mapping Mapping Introduction Mapping Logic FASTMAP Template Processing Service Design and Cisco NSO Programmability Service Design Service Design Overview Top-Down Approach Bottom-Up Approach Device Configuration Service Model Service Management Service Management Tasks Service Lifecycle Management Guidelines NSO Programmability Introduction NSO Programmability Overview Python Service Skeleton Creating a Service YANG Model Creating a Service Template Template Processing with Python Cisco NSO Flexibility System Configuration and Troubleshooting System Configuration System Troubleshooting Integration Integration Options NETCONF Server Web Integration SNMP Agent Alarm Management and Reporting Alarm Management Reporting Scalability and Performance High Availability High-Availability Cluster Communications Clustering Layered Service Architecture Addressing Performance Limitations Components and Function Packs Function Packs NFV Orchestration Reactive FASTMAP
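The NETCONF Overview topics in this outline explain how NSO communicates with devices over NETCONF. As a standalone illustration outside of NSO, the sketch below uses the third-party ncclient Python library to open a NETCONF session and fetch the running configuration; the device address and credentials are placeholders, and using ncclient here (rather than NSO's own Device Manager) is an assumption for demonstration.

from ncclient import manager  # third-party NETCONF client library

# Placeholder device details; any NETCONF-capable device or NSO netsim device would do.
with manager.connect(
    host="10.0.0.10",
    port=830,
    username="admin",
    password="admin",
    hostkey_verify=False,
) as session:
    # Print the NETCONF capabilities advertised by the device.
    for capability in session.server_capabilities:
        print(capability)

    # Retrieve the device's running configuration as XML.
    running = session.get_config(source="running")
    print(running.xml)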
Duration 4 Days 24 CPD hours This course is intended for System installers System integrators System administrators Network administrators Solutions designers Overview After completing this course, you should be able to: Explain transactional service activation and how it relates to business requirements Explain the benefits and uses of Cisco NSO Explain how Cisco NSO communicates with network devices Understand the NETCONF protocol and be able to read and write simple YANG models Install NSO and describe how NSO uses NETCONF and the Device Manager component Understand the difference between devices that are fully NETCONF capable and those that are less or not NETCONF capable Explain the YANG service model structure Describe how YANG is used with NSO, create and deploy a service, and explain NSO FASTMAP Design and manage services with YANG models Perform NSO configuration and basic troubleshooting, and describe the following NSO features: integration options, alarms and reporting, scalability and performance options, and available function packs Use logs to troubleshoot the Cisco NSO deployment and check NSO communication with network devices Explain the mapping logic of service parameters to device models and consequently to device configurations Describe the use of different integration options and APIs Explain the use of Reactive FASTMAP for manipulating and implementing advanced Network Functions Virtualization (NFV) components Describe the use of feature components and function packs Define and explain the European Telecommunications Standards Institute (ETSI) Open Source NFV Management and Orchestration (MANO) principles and solution Work with the alarm console, and understand the NSO alarm structure and how it conforms to modern network operations procedures The Cisco NSO Essentials for Programmers and Network Architects (NSO201) v. 4.0 course introduces you to Cisco© Network Services Orchestrator (NSO). You will learn to install Cisco NSO and use it to manage devices and create services based on YANG templates with XPath. This course provides an overview of NSO as a network automation solution, as well as introductions to NETCONF, YANG, and XPath. You will learn about managing devices and creating device templates, service management and service package creation, network element drivers, interfacing with other systems using APIs, configuring and troubleshooting system settings, managing alarms and reporting, configuring NSO for scalability and performance, and capabilities that can be added to Cisco NSO. Introduction to Cisco NSO Meeting Challenges with Orchestration Challenges of Network Management Challenges of Network Orchestration NSO Features and Benefits That Meet Challenges Standardized Approach What Is NSO? Logical Architecture Components What Does NSO Do? Orchestration Use Cases How Does NSO Work? Introduction to NETCONF and YANG Packages Mapping Logic Network Element Drivers (NEDs) Resources and Training Resources Training Get Started with Cisco NSO Installing Cisco NSO Setup Overview Cisco NSO Local Installation Installing NEDs Using NetSim NETCONF Overview Challenges of Network Management Introduction to NETCONF NETCONF Operation Device Manager Device Manager Overview Device Configuration Management Device Connection Management Templates and Groups Other Device Management Tools Service Manager Essentials YANG Overview Introduction to YANG Other Representations of YANG Data Types XPath Overview Basic YANG Statements Can You Spot the Error? 
Using Services Package Architecture Creating a Service Package Sample Service Configuration Service Template YANG Service Model Deploying a Service Model-to-Model Mapping Mapping Introduction Mapping Logic FASTMAP Template Processing Service Design and Cisco NSO Programmability Service Design Service Design Overview Top-Down Approach Bottom-Up Approach Device Configuration Service Model Service Management Service Management Tasks Service Lifecycle Management Guidelines NSO Programmability Introduction NSO Programmability Overview Python Service Skeleton Creating a Service YANG Model Creating a Service Template Template Processing with Python Cisco NSO Flexibility System Configuration and Troubleshooting System Configuration System Troubleshooting Integration Integration Options NETCONF Server Web Integration SNMP Agent Alarm Management and Reporting Alarm Management Reporting Scalability and Performance High Availability High-Availability Cluster Communications Clustering Layered Service Architecture Addressing Performance Limitations Components and Function Packs Function Packs NFV Orchestration Reactive FASTMAP
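The Service Design and Cisco NSO Programmability topics above introduce the Python service skeleton and template processing with Python. The sketch below follows the general shape of an NSO Python service package as generated by ncs-make-package, but the service point name, template name, and the device leaf in the service model are placeholders, and the exact skeleton used in the course labs may differ.

# -*- mode: python; python-indent: 4 -*-
import ncs
from ncs.application import Service


class ServiceCallbacks(Service):
    # Called on service create/re-deploy; FASTMAP records the resulting device changes.
    @Service.create
    def cb_create(self, tctx, root, service, proplist):
        self.log.info('Service create(service=', service._path, ')')
        tvars = ncs.template.Variables()
        tvars.add('DEVICE', service.device)           # leaf assumed to exist in the service YANG model
        template = ncs.template.Template(service)
        template.apply('my-service-template', tvars)  # placeholder XML template name


class Main(ncs.application.Application):
    def setup(self):
        self.log.info('my-service RUNNING')
        # The service point string must match the servicepoint in the YANG model (placeholder).
        self.register_service('my-service-servicepoint', ServiceCallbacks)

    def teardown(self):
        self.log.info('my-service FINISHED')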