This course is perfect for quality assurance professionals who want to step into automation testing with Cypress. You will learn Cypress from scratch and become a specialist in building a solid Cypress automation framework to test any real-world web application.
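For a flavor of what the building blocks of such a framework look like, here is a minimal Cypress spec in TypeScript. It is an illustrative sketch only; the routes, data-cy selectors, and credentials are placeholder assumptions, not material from the course.

```ts
// cypress/e2e/login.cy.ts -- illustrative only; routes and selectors are assumptions
describe('login page', () => {
  it('signs a user in and lands on the dashboard', () => {
    cy.visit('/login');                            // baseUrl assumed to be set in cypress.config.ts
    cy.get('[data-cy=username]').type('demo');     // data-cy attributes are hypothetical
    cy.get('[data-cy=password]').type('s3cret', { log: false });
    cy.get('[data-cy=submit]').click();
    cy.url().should('include', '/dashboard');      // assert navigation succeeded
    cy.contains('Welcome').should('be.visible');   // assert the page greets the user
  });
});
```

In a full framework, selectors and test data would typically live in page objects and fixtures rather than inline in the spec.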
Learn to operate Nomad to deploy and manage applications and services across multiple environments, including on-premises, cloud, and hybrid. An expert instructor will guide you through lectures, demonstrations, and real-world scenarios, giving you the skills and knowledge you need to succeed with HashiCorp Nomad.
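As a rough illustration of the kind of deployment Nomad handles, the sketch below registers a single Docker task through Nomad's HTTP job-register endpoint. The agent address, datacenter, and container image are placeholders, and the JSON job body is a deliberately simplified assumption rather than a complete job specification.

```ts
// registerJob.ts -- minimal sketch against a local Nomad agent (http://localhost:4646 assumed)
// Real jobs usually also declare resources, networks, and health checks.
const job = {
  Job: {
    ID: 'web',
    Name: 'web',
    Type: 'service',
    Datacenters: ['dc1'],                      // placeholder datacenter
    TaskGroups: [
      {
        Name: 'web',
        Count: 2,
        Tasks: [
          {
            Name: 'web',
            Driver: 'docker',
            Config: { image: 'nginx:1.25' },   // placeholder image
          },
        ],
      },
    ],
  },
};

fetch('http://localhost:4646/v1/jobs', {       // Nomad job-register endpoint
  method: 'PUT',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify(job),
})
  .then((res) => res.json())
  .then((body) => console.log('evaluation:', body))
  .catch((err) => console.error(err));
```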
By encouraging you to build real-world applications, this course teaches you the concepts of ASP.NET scaffolding, Model View Controller (MVC), and Entity Framework. You will start by setting up the environment and then move on to practical activities that build your understanding of ASP.NET MVC development.
This course takes you through the concepts of containers and Kubernetes in a practical way. You will learn how to create, ship, run, and manage containerized web applications both on premises and in the cloud.
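To make the "create, ship, run" idea concrete, here is a minimal sketch that creates a two-replica Deployment with the official @kubernetes/client-node library (the pre-1.0 positional call style is assumed; newer client versions use object parameters). The image, names, and namespace are placeholders, not values from the course.

```ts
// deploy.ts -- minimal sketch, assuming @kubernetes/client-node < 1.0 (positional arguments)
import * as k8s from '@kubernetes/client-node';

const kc = new k8s.KubeConfig();
kc.loadFromDefault();                                  // reads ~/.kube/config or in-cluster credentials

const apps = kc.makeApiClient(k8s.AppsV1Api);

const deployment: k8s.V1Deployment = {
  metadata: { name: 'web' },
  spec: {
    replicas: 2,
    selector: { matchLabels: { app: 'web' } },
    template: {
      metadata: { labels: { app: 'web' } },
      spec: {
        containers: [{ name: 'web', image: 'nginx:1.25', ports: [{ containerPort: 80 }] }],
      },
    },
  },
};

apps
  .createNamespacedDeployment('default', deployment)   // same effect as `kubectl apply` of the manifest
  .then(() => console.log('Deployment created'))
  .catch((err) => console.error('Failed to create Deployment:', err));
```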
Are you looking for a course that teaches coding to absolute beginners? Do you want to learn programming concepts using extremely simple flowcharts and pseudocode? Are you looking for a step-by-step approach to learning the basics of programming? If your answer is YES to any of the above, this course is for you.
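As an illustration of how a flowchart or pseudocode maps onto real code (this snippet is not material from the course itself), the short TypeScript example below walks the classic start, loop, decision, output shape:

```ts
// Pseudocode: "set total to 0; for each number from 1 to N, if it is even, add it to total; output total"
function sumOfEvens(n: number): number {
  let total = 0;                  // START: total = 0
  for (let i = 1; i <= n; i++) {  // LOOP: i goes from 1 to n
    if (i % 2 === 0) {            // DECISION: is i even?
      total += i;                 //   yes -> add i to total
    }
  }
  return total;                   // OUTPUT: total
}

console.log(sumOfEvens(10));      // prints 30 (2 + 4 + 6 + 8 + 10)
```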
Duration: 3 Days, 18 CPD hours

This course is intended for:
Cluster administrators (junior systems administrators, junior cloud administrators) interested in deploying additional clusters to meet increasing demands from their organizations.
Cluster engineers (senior systems administrators, senior cloud administrators, cloud engineers) interested in planning and designing OpenShift clusters to meet the performance and reliability needs of different workloads, and in creating workbooks for these installations.
Site reliability engineers (SREs) interested in deploying test bed clusters to validate new settings, updates, customizations, operational procedures, and responses to incidents.

Overview:
Validate infrastructure prerequisites for an OpenShift cluster.
Run the OpenShift installer with custom settings.
Describe and monitor each stage of the OpenShift installation process.
Collect troubleshooting information during an ongoing installation, or after a failed installation.
Complete the configuration of cluster services in a newly installed cluster.

Installing OpenShift on cloud, virtual, or physical infrastructure: Red Hat OpenShift Installation Lab (DO322) teaches essential skills for installing an OpenShift cluster in a range of environments, from proof of concept to production, and how to identify customizations that may be required because of the underlying cloud, virtual, or physical infrastructure. This course is based on Red Hat OpenShift Container Platform 4.6.

1 - Introduction to container technology: Describe how software can run in containers orchestrated by Red Hat OpenShift Container Platform.
2 - Create containerized services: Provision a server using container technology.
3 - Manage containers: Manipulate prebuilt container images to create and manage containerized services.
4 - Manage container images: Manage the life cycle of a container image from creation to deletion.
5 - Create custom container images: Design and code a Dockerfile to build a custom container image.
6 - Deploy containerized applications on OpenShift: Deploy single-container applications on OpenShift Container Platform.
7 - Troubleshoot containerized applications: Troubleshoot a containerized application deployed on OpenShift.
8 - Deploy and manage applications on an OpenShift cluster: Use various application packaging methods to deploy applications to an OpenShift cluster, then manage their resources.
9 - Design containerized applications for OpenShift: Select a containerization method for an application and create a container to run on an OpenShift cluster.
10 - Publish enterprise container images: Create an enterprise registry and publish container images to it.
11 - Build applications: Describe the OpenShift build process, then trigger and manage builds (see the sketch after this outline).
12 - Customize source-to-image (S2I) builds: Customize an existing S2I base image and create a new one.
13 - Create applications from OpenShift templates: Describe the elements of a template and create a multicontainer application template.
14 - Manage application deployments: Monitor application health and implement various deployment methods for cloud-native applications.
15 - Perform comprehensive review: Create and deploy cloud-native applications on OpenShift.
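As a rough sketch of the build process covered in the build and S2I modules above, the snippet below creates an S2I BuildConfig through the Kubernetes API using @kubernetes/client-node (pre-1.0 positional call style assumed). The project name, Git repository, and builder image stream tag are placeholder assumptions, not values from the course.

```ts
// buildconfig.ts -- illustrative only; OpenShift's build.openshift.io resources are created here
// as custom objects. Names, the repo URL, and the image stream tag are assumptions.
import * as k8s from '@kubernetes/client-node';

const kc = new k8s.KubeConfig();
kc.loadFromDefault();
const custom = kc.makeApiClient(k8s.CustomObjectsApi);

const buildConfig = {
  apiVersion: 'build.openshift.io/v1',
  kind: 'BuildConfig',
  metadata: { name: 'myapp' },
  spec: {
    source: { type: 'Git', git: { uri: 'https://github.com/example/myapp.git' } }, // placeholder repo
    strategy: {
      type: 'Source',                                                              // S2I build strategy
      sourceStrategy: {
        from: { kind: 'ImageStreamTag', name: 'nodejs:latest', namespace: 'openshift' }, // assumed builder image
      },
    },
    output: { to: { kind: 'ImageStreamTag', name: 'myapp:latest' } },
  },
};

custom
  .createNamespacedCustomObject('build.openshift.io', 'v1', 'myproject', 'buildconfigs', buildConfig)
  .then(() => console.log('BuildConfig created'))
  .catch((err) => console.error(err));
```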
Duration: 2 Days, 12 CPD hours

This course is intended for operators and application owners who are responsible for deploying and managing policies for multiple Kubernetes clusters across on-premises and public cloud environments.

Overview: By the end of the course, you should be able to meet the following objectives:
Describe the VMware Tanzu Mission Control architecture
Configure user and group access
Create access, image registry, network, security, quota, and custom policies
Connect your on-premises vSphere with Tanzu Supervisor cluster to VMware Tanzu Mission Control
Create, manage, and back up Tanzu Kubernetes clusters
Perform cluster inspections
Monitor and secure Kubernetes environments

During this two-day course, you focus on using VMware Tanzu Mission Control to provision and manage Kubernetes clusters. The course covers how to apply access, image registry, network, security, quota, and custom policies to Kubernetes environments. For cluster provisioning and management, the course focuses on deploying, upgrading, backing up, and monitoring Kubernetes clusters on VMware vSphere with Tanzu. Given the abstractions of VMware Tanzu Mission Control, the learning should be transferable to public cloud.

Introducing VMware Tanzu Mission Control: VMware Tanzu Mission Control; Accessing VMware Tanzu Mission Control; VMware Cloud services access control; VMware Tanzu Mission Control architecture
Cluster Management: Attached clusters; Management clusters; Provisioned clusters; Cluster inspections; Data protection; VMware Tanzu Observability by Wavefront; VMware Tanzu Service Mesh
Policy Management: Policy management; Access policies; Image registry policies; Network policies (see the sketch after this outline); Security policies; Quota policies; Custom policies; Policy insights
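A standard Kubernetes NetworkPolicy is the kind of cluster-level object that the network policies above govern. The sketch below creates one directly with @kubernetes/client-node (pre-1.0 call style assumed), purely as an illustration; it does not use the Tanzu Mission Control API, and the names and labels are placeholders.

```ts
// networkpolicy.ts -- illustrative only; created directly on a cluster, not through Tanzu Mission Control
import * as k8s from '@kubernetes/client-node';

const kc = new k8s.KubeConfig();
kc.loadFromDefault();
const net = kc.makeApiClient(k8s.NetworkingV1Api);

const policy: k8s.V1NetworkPolicy = {
  metadata: { name: 'allow-frontend-to-api' },           // placeholder names and labels
  spec: {
    podSelector: { matchLabels: { app: 'api' } },        // the policy applies to api pods
    policyTypes: ['Ingress'],
    ingress: [
      {
        from: [{ podSelector: { matchLabels: { app: 'frontend' } } }], // only frontend pods may connect
      },
    ],
  },
};

net
  .createNamespacedNetworkPolicy('default', policy)
  .then(() => console.log('NetworkPolicy created'))
  .catch((err) => console.error(err));
```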
Duration: 2 Days, 12 CPD hours

This class is primarily intended for the following participants: technical employees using GCP, including customer companies, partners, and system integrators (deployment engineers, cloud architects, cloud administrators, system engineers, and SysOps/DevOps engineers), and individuals using GCP to create, integrate, or modernize solutions using secure, scalable microservices architectures in hybrid environments.

Overview:
Connect and manage Anthos GKE clusters from the GCP Console, whether the clusters are part of Anthos on Google Cloud or Anthos deployed on VMware.
Understand how service mesh proxies are installed, configured, and managed.
Configure centralized logging, monitoring, tracing, and service visualizations wherever the Anthos GKE clusters are hosted.
Understand and configure fine-grained traffic management.
Use service mesh security features for service-to-service authentication, user authentication, and policy-based service authorization.
Install a multi-service application spanning multiple clusters in a hybrid environment.
Understand how services communicate across clusters.
Migrate services between clusters.
Install Anthos Config Management, use it to enforce policies, and explain how it can be used across multiple clusters.

This two-day instructor-led course prepares students to modernize, manage, and observe their applications using Kubernetes, whether the application is deployed on-premises or on Google Cloud Platform (GCP). Through presentations and hands-on labs, participants explore and deploy using Kubernetes Engine (GKE), GKE Connect, the Istio service mesh, and Anthos Config Management, capabilities that enable operators to work with modern applications even when they are split among multiple clusters hosted by multiple providers or on-premises.
Anthos Overview: Describe challenges of hybrid cloud; Discuss modern solutions; Describe the Anthos technology stack
Managing Hybrid Clusters using Kubernetes Engine: Understand Anthos GKE hybrid environments, with admin and user clusters; Register and authenticate remote Anthos GKE clusters in GKE Hub; View and manage registered clusters, in cloud and on-premises, using GKE Hub; View workloads in all clusters from GKE Hub; Lab: Managing Hybrid Clusters using Kubernetes Engine
Introduction to Service Mesh: Understand service mesh and the problems it solves; Understand Istio architecture and components; Explain the Istio on GKE add-on and its lifecycle versus OSS Istio; Understand request network traffic flow in a service mesh; Create a GKE cluster with a service mesh; Configure a multi-service application with service mesh; Enable external access using an ingress gateway; Explain the multi-service example applications: Hipster Shop and Bookinfo; Lab: Installing Open Source Istio on Kubernetes Engine; Lab: Installing the Istio on GKE Add-On with Kubernetes Engine
Observing Services using Service Mesh Adapters: Understand the service mesh flexible adapter model; Understand service mesh telemetry processing; Explain Stackdriver configurations for logging and monitoring; Compare telemetry defaults for cloud and on-premises environments; Configure and view custom metrics using service mesh; View cluster and service metrics with preconfigured dashboards; Trace microservice calls with timing data using service mesh adapters; Visualize and discover service attributes with service mesh; Lab: Telemetry and Observability with Istio
Managing Traffic Routing with Service Mesh: Understand the service mesh abstract model for traffic management; Understand service mesh service discovery and load balancing; Review and compare traffic management use cases and configurations; Understand ingress configuration using service mesh; Visualize traffic routing with live generated requests; Configure a service mesh gateway to allow access to services from outside the mesh; Apply virtual services and destination rules for version-specific routing; Route traffic based on application-layer configuration; Shift traffic from one service version to another with fine-grained control, like a canary deployment (see the sketch after this outline); Lab: Managing Traffic Routing with Istio and Envoy
Managing Policies and Security with Service Mesh: Understand authentication and authorization in service mesh; Explain the mTLS flow for service-to-service communication; Adopt mutual TLS authentication across the service mesh incrementally; Enable end-user authentication for the frontend service; Use service mesh access control policies to secure access to the frontend service; Lab: Managing Policies and Security with Service Mesh
Managing Policies using Anthos Config Management: Understand the challenge of managing resources across multiple clusters; Understand how a Git repository is used as a configuration source of truth; Explain the Anthos Config Management components and object lifecycle; Install and configure Anthos Config Management, operators, tools, and the related Git repository; Verify cluster configuration compliance and drift management; Update workload configuration using repo changes; Lab: Managing Policies in Kubernetes Engine using Anthos Config
Configuring Anthos GKE for Multi-Cluster Operation: Understand how multiple clusters work together using DNS, root CA, and service discovery; Explain service mesh control-plane architectures for multi-cluster operation; Configure a multi-service application using service mesh across multiple clusters with multiple control planes; Configure a multi-service application using service mesh across multiple clusters with a shared control plane; Configure service naming/discovery between clusters; Review ServiceEntries for cross-cluster service discovery; Migrate a workload from a remote cluster to an Anthos GKE cluster; Lab: Configuring GKE for Multi-Cluster Operation with Istio; Lab: Configuring GKE for Shared Control Plane Multi-Cluster Operation
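As referenced in the traffic-routing module above, here is a minimal sketch of the version-weighted routing a canary rollout uses: an Istio VirtualService that sends 90% of traffic to subset v1 and 10% to v2, created as a custom object with @kubernetes/client-node (pre-1.0 call style assumed). The service name is a placeholder, and the sketch assumes a matching DestinationRule already defines the v1 and v2 subsets.

```ts
// canary.ts -- illustrative sketch; assumes Istio is installed and a DestinationRule defines subsets v1/v2
import * as k8s from '@kubernetes/client-node';

const kc = new k8s.KubeConfig();
kc.loadFromDefault();
const custom = kc.makeApiClient(k8s.CustomObjectsApi);

const virtualService = {
  apiVersion: 'networking.istio.io/v1beta1',
  kind: 'VirtualService',
  metadata: { name: 'reviews' },                  // placeholder service name
  spec: {
    hosts: ['reviews'],
    http: [
      {
        route: [
          { destination: { host: 'reviews', subset: 'v1' }, weight: 90 }, // stable version
          { destination: { host: 'reviews', subset: 'v2' }, weight: 10 }, // canary version
        ],
      },
    ],
  },
};

custom
  .createNamespacedCustomObject('networking.istio.io', 'v1beta1', 'default', 'virtualservices', virtualService)
  .then(() => console.log('VirtualService created: 90/10 canary split'))
  .catch((err) => console.error(err));
```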
Duration: 5 Days, 30 CPD hours

Audience for this course: This course is designed for system administrators responsible for creating OpenShift Enterprise instances, deploying applications, creating process customizations, and managing instances and projects.

Prerequisites for this course:
Have taken Red Hat Enterprise Linux Administration I and II (RH124 and RH134), or have equivalent Red Hat Enterprise Linux system administration experience
Be certified as a Red Hat Certified System Administrator (RHCSA), or have equivalent Red Hat Enterprise Linux system administration experience
Be certified as a Red Hat Certified Engineer (RHCE)

Overview: Learn to install, configure, and manage OpenShift Enterprise by Red Hat instances. OpenShift Enterprise Administration (DO280) prepares the system administrator to install, configure, and manage OpenShift Enterprise by Red Hat instances. OpenShift Enterprise, Red Hat's platform-as-a-service (PaaS) offering, provides predefined deployment environments for applications of all types through its use of container technology. This creates an environment that supports DevOps principles such as reduced time to market and continuous delivery. In this course, students will learn how to install and configure an instance of OpenShift Enterprise, test the instance by deploying a real-world application, and manage projects and applications through hands-on labs.

Course content summary:
Container concepts
Configuring resources with the command-line interface
Building a pod
Enabling services for a pod
Creating routes (see the sketch at the end of this course description)
Downloading and configuring images
Rolling back and activating deployments
Creating custom S2I images

This course will empower you to install and administer the Red Hat OpenShift Container Platform, with hands-on, lab-based materials that show you how to install, configure, and manage OpenShift clusters and deploy sample applications to further understand how developers will use the platform. This course is based on Red Hat Enterprise Linux 7.5 and OpenShift Container Platform 3.9. OpenShift is a containerized application platform that allows your enterprise to manage container deployments and scale your applications using Kubernetes. OpenShift provides predefined application environments and builds upon Kubernetes to provide support for DevOps principles such as reduced time to market, infrastructure as code, continuous integration (CI), and continuous delivery (CD).

1 - INTRODUCTION TO RED HAT OPENSHIFT ENTERPRISE: Review features and architecture of OpenShift Enterprise.
2 - INSTALL OPENSHIFT ENTERPRISE: Install OpenShift Enterprise and configure a master and node.
3 - EXECUTE COMMANDS: Execute commands using the command-line interface.
4 - BUILD APPLICATIONS: Create, build, and deploy applications to an OpenShift Enterprise instance.
5 - PERSISTENT STORAGE: Provision persistent storage and use it for the internal registry.
6 - BUILD APPLICATIONS WITH SOURCE-TO-IMAGE (S2I): Create and build applications with S2I and templates.
7 - MANAGE THE SYSTEM: Use OpenShift Enterprise components to manage deployed applications.
8 - CUSTOMIZE OPENSHIFT ENTERPRISE: Customize resources and processes used by OpenShift Enterprise.
9 - COMPREHENSIVE REVIEW: Practice and demonstrate knowledge and skills learned in the course.

NOTE: The course outline is subject to change with technology advances and as the nature of the underlying job evolves. For questions or confirmation on a specific objective or topic, please contact us.
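As referenced in the course content summary, exposing a pod typically means fronting it with a Service and then a Route; the sketch below does both with @kubernetes/client-node (pre-1.0 call style assumed), treating the OpenShift Route as a custom object. The names, port, and project are placeholders, not values from the course.

```ts
// expose.ts -- illustrative only; assumes a pod or deployment labeled app=web already exists in 'myproject'
import * as k8s from '@kubernetes/client-node';

const kc = new k8s.KubeConfig();
kc.loadFromDefault();
const core = kc.makeApiClient(k8s.CoreV1Api);
const custom = kc.makeApiClient(k8s.CustomObjectsApi);

const service: k8s.V1Service = {
  metadata: { name: 'web' },
  spec: {
    selector: { app: 'web' },                    // matches the pod labels
    ports: [{ port: 8080 }],                     // targetPort defaults to the same value
  },
};

const route = {
  apiVersion: 'route.openshift.io/v1',
  kind: 'Route',
  metadata: { name: 'web' },
  spec: {
    to: { kind: 'Service', name: 'web' },        // send external traffic to the Service
    port: { targetPort: 8080 },
  },
};

async function expose(): Promise<void> {
  await core.createNamespacedService('myproject', service);
  await custom.createNamespacedCustomObject('route.openshift.io', 'v1', 'myproject', 'routes', route);
  console.log('Service and Route created');
}

expose().catch((err) => console.error(err));
```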
Additional course details: The Nexus Humans Red Hat OpenShift Administration II: Operating a Production Kubernetes Cluster (DO280) training program is a workshop that presents an invigorating mix of sessions, lessons, and masterclasses meticulously crafted to propel your learning expedition forward. This immersive bootcamp-style experience boasts interactive lectures, hands-on labs, and collaborative hackathons, all strategically designed to fortify fundamental concepts. Guided by seasoned coaches, each session offers priceless insights and practical skills crucial for honing your expertise. Whether you are just beginning to build your professional skills or are a seasoned professional, this comprehensive course ensures you are equipped with the knowledge and prowess necessary for success. While we feel this is the best course for Red Hat OpenShift Administration II: Operating a Production Kubernetes Cluster (DO280) and one of our Top 10, we encourage you to read the course outline to make sure it is the right content for you. Additionally, private sessions, closed classes, or dedicated events are available both live online and at our training centres in Dublin and London, as well as at your offices anywhere in the UK, Ireland, or across EMEA.
Duration: 2 Days, 12 CPD hours

This is an introductory-level class for intermediate-skilled team members. Students should have prior software development experience or exposure, some basic familiarity with containers, and the ability to navigate the command line.

Overview: This course is approximately 50% hands-on, combining expert lecture, real-world demonstrations, and group discussions with machine-based practical labs and exercises. Our engaging instructors and mentors are highly experienced practitioners who bring years of current on-the-job experience into every classroom. Working in a hands-on learning environment led by our expert facilitator, students will explore:
What a Kubernetes cluster is, and how to deploy and manage one on-premises and in the cloud.
How Kubernetes fits into the cloud-native ecosystem, and how it interfaces with other important technologies such as Docker.
The major Kubernetes components that let us deploy and manage applications in a modern cloud-native fashion.
How to define and manage applications with declarative manifest files that should be version-controlled and treated like code.

Containerization has taken the IT world by storm in the last few years. Large software houses, from Google to Amazon, run significant portions of their production load in containers. Kubernetes is an open-source system for automating deployment, scaling, and management of containerized applications. This is a hands-on, workshop-style course that teaches core features and functionality of Kubernetes. You will leave this course knowing how to build a Kubernetes cluster, and how to deploy and manage applications on that cluster.

Getting Started: Our sample application; Kubernetes concepts; Declarative vs. imperative; Kubernetes network model; First contact with kubectl; Setting up Kubernetes
Working with Containers: Running our first containers on Kubernetes; Exposing containers; Shipping images with a registry; Running our application on Kubernetes
Exploring the Kubernetes Dashboard: The Kubernetes dashboard; Security implications of kubectl apply; Scaling a deployment (see the sketch after this outline); Daemon sets; Labels and selectors; Rolling updates
Next Steps: Accessing logs from the CLI; Managing stacks with Helm; Namespaces; Next steps
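As referenced in the outline item on scaling a deployment, here is a minimal sketch of a read-modify-write scale operation with @kubernetes/client-node (pre-1.0 positional call style assumed); the deployment name and namespace are placeholders, and `kubectl scale deployment web --replicas=5` would do the same job interactively.

```ts
// scale.ts -- minimal sketch, assuming @kubernetes/client-node < 1.0 (positional arguments)
import * as k8s from '@kubernetes/client-node';

const kc = new k8s.KubeConfig();
kc.loadFromDefault();
const apps = kc.makeApiClient(k8s.AppsV1Api);

async function scale(name: string, namespace: string, replicas: number): Promise<void> {
  // Read the current Deployment, change spec.replicas, and write it back:
  // the API-level equivalent of `kubectl scale`.
  const { body } = await apps.readNamespacedDeployment(name, namespace);
  body.spec!.replicas = replicas;
  await apps.replaceNamespacedDeployment(name, namespace, body);
}

scale('web', 'default', 5)
  .then(() => console.log('scaled to 5 replicas'))
  .catch((err) => console.error(err));
```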