Duration 2 Days 12 CPD hours This course is intended for This class is primarily intended for the following participants: Technical employees using GCP, including customers, partners, and system integrators: deployment engineers, cloud architects, cloud administrators, system engineers, and SysOps/DevOps engineers. Individuals using GCP to create, integrate, or modernize solutions using secure, scalable microservices architectures in hybrid environments. Overview Connect and manage Anthos GKE clusters from the GCP Console, whether the clusters are part of Anthos on Google Cloud or Anthos deployed on VMware. Understand how service mesh proxies are installed, configured, and managed. Configure centralized logging, monitoring, tracing, and service visualizations wherever the Anthos GKE clusters are hosted. Understand and configure fine-grained traffic management. Use service mesh security features for service-to-service authentication, user authentication, and policy-based service authorization. Install a multi-service application spanning multiple clusters in a hybrid environment. Understand how services communicate across clusters. Migrate services between clusters. Install Anthos Config Management, use it to enforce policies, and explain how it can be used across multiple clusters. This two-day instructor-led course prepares students to modernize, manage, and observe their applications using Kubernetes, whether the application is deployed on-premises or on Google Cloud Platform (GCP). Through presentations and hands-on labs, participants explore and deploy Kubernetes Engine (GKE), GKE Connect, Istio service mesh, and Anthos Config Management capabilities that enable operators to work with modern applications even when they are split among multiple clusters hosted by multiple providers or on-premises.
Anthos Overview Describe challenges of hybrid cloud Discuss modern solutions Describe the Anthos Technology Stack Managing Hybrid Clusters using Kubernetes Engine Understand Anthos GKE hybrid environments, with Admin and User clusters Register and authenticate remote Anthos GKE clusters in GKE Hub View and manage registered clusters, in cloud and on-premises, using GKE Hub View workloads in all clusters from GKE Hub Lab: Managing Hybrid Clusters using Kubernetes Engine Introduction to Service Mesh Understand service mesh and the problems it solves Understand Istio architecture and components Explain the Istio on GKE add-on and its lifecycle, versus OSS Istio Understand request network traffic flow in a service mesh Create a GKE cluster with a service mesh Configure a multi-service application with service mesh Enable external access using an ingress gateway Explain the multi-service example applications: Hipster Shop and Bookinfo Lab: Installing Open Source Istio on Kubernetes Engine Lab: Installing the Istio on GKE Add-On with Kubernetes Engine Observing Services using Service Mesh Adapters Understand the service mesh flexible adapter model Understand service mesh telemetry processing Explain Stackdriver configurations for logging and monitoring Compare telemetry defaults for cloud and on-premises environments Configure and view custom metrics using service mesh View cluster and service metrics with pre-configured dashboards Trace microservice calls with timing data using service mesh adapters Visualize and discover service attributes with service mesh Lab: Telemetry and Observability with Istio Managing Traffic Routing with Service Mesh Understand the service mesh abstract model for traffic management Understand service mesh service discovery and load balancing Review and compare traffic management use cases and configurations Understand ingress configuration using service mesh Visualize traffic routing with live generated requests Configure a service mesh gateway to allow access to services from outside the mesh Apply virtual services and destination rules for version-specific routing Route traffic based on application-layer configuration Shift traffic from one service version to another with fine-grained control, as in a canary deployment Lab: Managing Traffic Routing with Istio and Envoy Managing Policies and Security with Service Mesh Understand authentication and authorization in service mesh Explain the mTLS flow for service-to-service communication Adopt mutual TLS authentication across the service mesh incrementally Enable end-user authentication for the frontend service Use service mesh access control policies to secure access to the frontend service Lab: Managing Policies and Security with Service Mesh Managing Policies using Anthos Config Management Understand the challenge of managing resources across multiple clusters Understand how a Git repository is used as a configuration source of truth Explain the Anthos Config Management components and object lifecycle Install and configure Anthos Config Management, operators, tools, and the related Git repository Verify cluster configuration compliance and drift management Update workload configuration using repo changes Lab: Managing Policies in Kubernetes Engine using Anthos Config Management Configuring Anthos GKE for Multi-Cluster Operation Understand how multiple clusters work together using DNS, root CA, and service discovery Explain service mesh control-plane architectures for multi-cluster operation Configure a multi-service application using service mesh across
multiple clusters with multiple control-planes Configure a multi-service application using service mesh across multiple clusters with a shared control-plane Configure service naming/discovery between clusters Review ServiceEntries for cross-cluster service discovery Migrate a workload from a remote cluster to an Anthos GKE cluster Lab: Configuring GKE for Multi-Cluster Operation with Istio Lab: Configuring GKE for Shared Control Plane Multi-Cluster Operation
Duration 3 Days 18 CPD hours This course is intended for Administrators or application owners who are responsible for deploying and managing Kubernetes clusters and workloads Overview By the end of the course, you should be able to meet the following objectives: Describe the VMware Tanzu Mission Control architecture Configure user and group access Create and manage Kubernetes clusters Control access Create image registry, network, quota, security, custom, and mutation policies Connect your on-premises vSphere with Tanzu Supervisor to VMware Tanzu Mission Control Create, manage, and back up VMware Tanzu Kubernetes Grid™ clusters Create and manage Amazon Elastic Kubernetes Service clusters Perform cluster inspections Manage packages in your clusters Monitor and secure Kubernetes environments During this course, you focus on using VMware Tanzu® Mission Control™ to provision and manage Kubernetes clusters. The course covers how to apply image registry, network, security, quota, custom, and mutation policies to Kubernetes environments. It focuses on how to deploy, upgrade, back up, and monitor Kubernetes clusters on VMware vSphere® with VMware Tanzu®, and it also covers package management using the VMware Tanzu Mission Control catalog. Course Introduction Introduction and course logistics Course Objectives What Is VMware Tanzu Mission Control Describe VMware Tanzu Mission Control Describe vSphere with Tanzu Describe Tanzu Kubernetes Grid Describe VMware Tanzu® for Kubernetes Operations Explain how to request access to VMware Tanzu Mission Control Describe VMware Cloud™ services Describe the VMware Cloud services catalog Explain how to access VMware Tanzu Mission Control Identify the components of VMware Tanzu Mission Control Explain the resource hierarchy of VMware Tanzu Mission Control Access, Users, and Groups Explain VMware Cloud services and enterprise federation Describe VMware Cloud services roles Explain multifactor authentication Describe the VMware Tanzu Mission Control UI List the components of the VMware Tanzu Mission Control UI Describe the VMware Tanzu CLI Describe the VMware Tanzu Mission Control API Cluster Lifecycle Management Outline the steps for registering a management cluster to VMware Tanzu Mission Control Discuss what a management cluster is Describe provisioners Explain the purpose of a cloud provider account Describe Amazon Elastic Kubernetes Service Describe Azure Kubernetes Service Workload Clusters Describe Tanzu Kubernetes Grid workload clusters Explain how to create a cluster Explain how to configure a cluster Describe Amazon Elastic Kubernetes Service workload clusters Describe Azure Kubernetes Service workload clusters Explain how to attach a Kubernetes cluster Explain how to verify the connections to the cluster Describe cluster health Policy Management Explain how access policies grant users access to different resources Describe the policy model Describe the available policy types Explain how image registry policies restrict the image registries from which container images can be pulled Outline how network policies are applied to clusters Discuss how security policies control deployment of pods in a cluster Discuss how quota policies manage resource consumption in your clusters Discuss how custom policies implement specialized policies that govern your Kubernetes clusters Describe mutation policies Explain how Policy Insights reports VMware Tanzu Mission Control policy issues Tanzu Mission Control Catalog Describe the VMware Tanzu Mission Control catalog Explain how to
install packages Describe cert-manager Explain Service Discovery and ExternalDNS Describe Multus CNI and Whereabouts Describe Fluent Bit Explain Prometheus and Grafana Describe Harbor Describe Flux Describe Helm Describe Git repositories Tanzu Mission Control Day 2 Operations Describe data protection Describe cluster inspections Explain life cycle management Describe VMware Aria Operations™ for Applications Discuss VMware Tanzu® Service Mesh™ Advanced edition Describe VMware Aria Cost™ powered by CloudHealth®
Learn to use DevOps tools from an industry point of view. This course will help you get firsthand experience of what it is like to be a DevOps engineer. Create DevOps CI/CD pipelines using Git, Jenkins, Ansible, Docker, SonarQube, and Kubernetes on AWS. Start your DevOps journey today. This course has been created from the perspective of a DevOps engineer who doesn't typically write application code.
A comprehensive introduction to the modern microservices architecture based on the most popular technologies such as .NET Core, Docker, Kubernetes, Istio Service Mesh, and many more.
Duration 5 Days 30 CPD hours This course is intended for Motivations: use and manage containers from first principles and architect basic applications for Kubernetes. Roles: general technical audiences and IT professionals. CN251 is an intensive cloud native training bootcamp for IT professionals looking to develop skills in deploying and administering containerized applications in Kubernetes. Over the course of five days, students start by learning first principles of application containerization, then learn how to stand up a containerized application in Kubernetes, and finally ramp up their skills for day-one operational tasks in managing a Kubernetes production environment. CN251 is an ideal course for those who need to accelerate the development of their IT skills for a rapidly changing technology landscape. Additional course details: Nexus Humans Cloud Native Operations Bootcamp training program is a workshop that presents an invigorating mix of sessions, lessons, and masterclasses meticulously crafted to propel your learning expedition forward. This immersive bootcamp-style experience boasts interactive lectures, hands-on labs, and collaborative hackathons, all strategically designed to fortify fundamental concepts. Guided by seasoned coaches, each session offers priceless insights and practical skills crucial for honing your expertise. Whether you're just stepping into this field or a seasoned professional, this comprehensive course ensures you're equipped with the knowledge and prowess necessary for success. While we feel this is the best course for the Cloud Native Operations Bootcamp course and one of our Top 10, we encourage you to read the course outline to make sure it is the right content for you. Additionally, private sessions, closed classes or dedicated events are available both live online and at our training centres in Dublin and London, as well as at your offices anywhere in the UK, Ireland or across EMEA.
Duration 2 Days 12 CPD hours This course is intended for Operators and application owners who are responsible for deploying and managing policies for multiple Kubernetes clusters across on-premises and public cloud environments. Overview By the end of the course, you should be able to meet the following objectives: Describe the VMware Tanzu Mission Control architecture Configure user and group access Create access, image registry, network, security, quota, and custom policies Connect your on-premises vSphere with Tanzu Supervisor cluster to VMware Tanzu Mission Control Create, manage, and back up Tanzu Kubernetes clusters Perform cluster inspections Monitor and secure Kubernetes environments During this two-day course, you focus on using VMware Tanzu® Mission Control™ to provision and manage Kubernetes clusters. The course covers how to apply access, image registry, network, security, quota, and custom policies to Kubernetes environments. For cluster provisioning and management, the course focuses on deploying, upgrading, backing up, and monitoring Kubernetes clusters on VMware vSphere® with Tanzu. Given the abstractions of VMware Tanzu Mission Control, the learnings should be transferable to public cloud environments. Introducing VMware Tanzu Mission Control VMware Tanzu Mission Control Accessing VMware Tanzu Mission Control VMware Cloud™ services access control VMware Tanzu Mission Control architecture Cluster Management Attached clusters Management clusters Provisioned clusters Cluster inspections Data protection VMware Tanzu® Observability™ by Wavefront VMware Tanzu® Service Mesh™ Policy Management Policy management Access policies Image registry policies Network policies Security policies Quota policies Custom policies Policy insights
Intro to containers training course description This course looks at the technologies of containers and microservices. The course starts with a look at what containers are, moving on to working with containers. Networking containers and container orchestration are then studied. The course finishes with monitoring containers with Prometheus and other systems. Hands on sessions are used to reinforce the theory rather than teach specific products, although Docker and Kubernetes are used. What will you learn Use containers. Build containers. Orchestrate containers. Evaluate container technologies. Intro to containers training course details Who will benefit: Those wishing to work with containers. Prerequisites: Introduction to virtualization. Duration 2 days Intro to containers training course contents What are containers? Virtualization, VMs, What are containers? What are microservices? Machine containers, application containers. Benefits. Container runtime tools Docker, LXC, Windows containers. Architecture, components. Hands on Installing Docker client and server. Working with containers Docker workflow, Docker images, Docker containers, Dockerfile, Building, running, storing images. Creating containers. Starting, stopping and controlling containers. Public repositories, private registries. Hands on Exploring containers. Microservices What are microservices? Modular architecture, IPC. Hands on Persistence and containers. Networking containers Linking, no networking, host, bridge. The Container Network Interface (CNI). Hands on Container networking. Container orchestration engines Docker swarm: Nodes, services, tasks. Apache Mesos: Mesos master, agents, frameworks. Kubernetes: Kubectl, master node, worker nodes. OpenStack: Architecture, containers in OpenStack. Amazon ECS: Architecture, how it works. Hands on Setup and access a Kubernetes cluster. Managing containers Monitoring, logging, collecting metrics, cluster monitoring tools: Heapster. Hands on Using Prometheus with Kubernetes.
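As a flavour of the monitoring and metrics-collection topics above, the following is a minimal, hypothetical sketch (not part of the course materials) of how an application running in a container can expose custom metrics on a /metrics endpoint for Prometheus to scrape, using the Prometheus Java simpleclient; the metric name, port, and workload are illustrative assumptions.

```java
import io.prometheus.client.Counter;
import io.prometheus.client.exporter.HTTPServer;
import io.prometheus.client.hotspot.DefaultExports;

public class MetricsDemo {
    // Hypothetical counter; the name and help text are illustrative only.
    static final Counter REQUESTS = Counter.build()
            .name("demo_requests_total")
            .help("Total number of demo requests handled.")
            .register();

    public static void main(String[] args) throws Exception {
        DefaultExports.initialize();               // also export standard JVM metrics
        HTTPServer server = new HTTPServer(8000);  // serves /metrics for Prometheus to scrape
        while (true) {
            REQUESTS.inc();                        // simulate work being counted
            Thread.sleep(1000);
        }
    }
}
```

In a Kubernetes setup such as the one used in the hands-on sessions, Prometheus would typically be pointed at an endpoint like this through its scrape configuration.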
Network virtualization training course description This course covers network virtualization. It has been designed to enable network engineers to recognise and handle the requirements of networking Virtual Machines. Both internal and external network virtualization is covered along with the technologies used to map overlay networks on to the physical infrastructure. Hands on sessions are used to reinforce the theory rather than teach specific manufacturer implementations. What will you learn Evaluate network virtualization implementations and technologies. Connect Virtual Machines with virtual switches. Explain how overlay networks operate. Describe the technologies in overlay networks. Network virtualization training course details Who will benefit: Engineers networking virtual machines. Prerequisites: Introduction to virtualization. Duration 2 days Network virtualization training course contents Virtualization review Hypervisors, VMs, containers, migration issues, Data Centre network design. TOR and spine switches. VM IP addressing and MAC addresses. Hands on VM network configuration Network virtualization What is network virtualization, internal virtual networks, external virtual networks. Wireless network virtualization: spectrum, infrastructure, air interface. Implementations: Open vSwitch, NSX, Cisco, others. Hands on VM communication over the network. Single host network virtualization NICs, vNICs, resource allocation, vSwitches, tables, packet walks. vRouters. Hands on vSwitch configuration, MAC and ARP tables. Container networks Single host, network modes: Bridge, host, container, none. Hands on Docker networking. Multi host network virtualization Access control, path isolation, controllers, overlay networks. L2 extensions. NSX manager. OpenStack neutron. Packet walks. Distributed logical firewalls. Load balancing. Hands on Creating, configuring and using a distributed vSwitch. Mapping virtual to physical networks VXLAN, VTEP, VXLAN encapsulation, controllers, multicasts and VXLAN. VRF lite, GRE, MPLS VPN, 802.1x. Hands on VXLAN configuration. Orchestration vCenter, vagrant, OpenStack, Kubernetes, scheduling, service discovery, load balancing, plugins, CNI, Kubernetes architecture. Hands on Kubernetes networking. Summary Performance, NFV, automation. Monitoring in virtual networks.
Learn to build an amazing REST API with Spring Boot and understand what all the hype around microservices is about.
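For orientation, here is a minimal sketch of what a Spring Boot REST endpoint can look like; the class names, route, and payload are illustrative assumptions rather than the course's actual exercises.

```java
import java.util.Map;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RequestParam;
import org.springframework.web.bind.annotation.RestController;

@SpringBootApplication
public class DemoApplication {
    public static void main(String[] args) {
        SpringApplication.run(DemoApplication.class, args);
    }
}

// Hypothetical controller: GET /api/greeting?name=Ada returns {"message":"Hello, Ada"}
@RestController
class GreetingController {
    @GetMapping("/api/greeting")
    public Map<String, String> greeting(@RequestParam(defaultValue = "world") String name) {
        return Map.of("message", "Hello, " + name);
    }
}
```

Spring Boot auto-configures an embedded web server, so a controller like this is enough to serve JSON responses without any further setup.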
Duration 3 Days 18 CPD hours This course is intended for Developers Architects Administrators Overview By the end of the course, you should be able to meet the following objectives: Install and configure RabbitMQ Activate and use plugins such as the web management console Implement messaging patterns and applications using the Java client Set up a cluster of RabbitMQ nodes Configure high availability appropriately Tune and optimize RabbitMQ for better performance Secure RabbitMQ This intensive instructor-led course in RabbitMQ provides a deep dive into how to install, configure, and develop applications that leverage RabbitMQ messaging. The course begins with RabbitMQ installation and general configuration. It continues with developing messaging applications using the Java APIs, and delves into more advanced topics including clustering, high availability, performance, and security. Modules are accompanied by lab exercises that provide hands-on experience. Introduction to Spring Essentials Kubernetes Overview BOSH Introduction Deploy, Patch & Upgrade Deploy a simple release Inside the VM Persistent Disks Patch the OS Upgrade Nginx Entry Point Set up a jumpbox Platform Infrastructure Pave the IaaS Deploy ops manager Deploy BOSH director Containerized Workloads Deploy Pivotal Container Service Provision a Kubernetes Cluster Harbor Container Registry Application Deployment Helm Advanced BOSH Deploy a distributed system Deploy Concourse CredHub Troubleshooting Concourse Deployment Concourse Day 2 Operations
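To give a flavour of the Java-client work described in the RabbitMQ objectives above, here is a minimal, hypothetical publish-and-consume sketch using the RabbitMQ Java client; the queue name, host, and message content are illustrative assumptions, not the course's lab code.

```java
import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;
import com.rabbitmq.client.DeliverCallback;
import java.nio.charset.StandardCharsets;

public class HelloRabbit {
    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("localhost"); // assumes a local RabbitMQ broker on the default port

        try (Connection connection = factory.newConnection();
             Channel channel = connection.createChannel()) {

            // Declare a non-durable, non-exclusive, non-auto-delete queue (illustrative name).
            channel.queueDeclare("demo-queue", false, false, false, null);

            // Publish a message to the default exchange, routed by queue name.
            channel.basicPublish("", "demo-queue", null,
                    "Hello, RabbitMQ".getBytes(StandardCharsets.UTF_8));

            // Consume it back with auto-acknowledgement.
            DeliverCallback onDeliver = (consumerTag, delivery) ->
                    System.out.println("Received: "
                            + new String(delivery.getBody(), StandardCharsets.UTF_8));
            channel.basicConsume("demo-queue", true, onDeliver, consumerTag -> { });

            Thread.sleep(1000); // give the consumer a moment before resources are closed
        }
    }
}
```

The same publish/consume pattern underpins the more advanced topics in the course, such as running it against a cluster of RabbitMQ nodes with high availability configured.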