Duration: 2 days (12 CPD hours)

This course is intended for: Technical employees using GCP, including customer companies, partners, and system integrators (deployment engineers, cloud architects, cloud administrators, system engineers, and SysOps/DevOps engineers), and individuals using GCP to create, integrate, or modernize solutions using secure, scalable microservices architectures in hybrid environments.

Overview: Connect and manage Anthos GKE clusters from the GCP Console, whether the clusters are part of Anthos on Google Cloud or Anthos deployed on VMware. Understand how service mesh proxies are installed, configured, and managed. Configure centralized logging, monitoring, tracing, and service visualizations wherever the Anthos GKE clusters are hosted. Understand and configure fine-grained traffic management. Use service mesh security features for service-to-service authentication, user authentication, and policy-based service authorization. Install a multi-service application spanning multiple clusters in a hybrid environment. Understand how services communicate across clusters. Migrate services between clusters. Install Anthos Config Management, use it to enforce policies, and explain how it can be used across multiple clusters.

This two-day instructor-led course prepares students to modernize, manage, and observe their applications using Kubernetes, whether the application is deployed on-premises or on Google Cloud Platform (GCP). Through presentations and hands-on labs, participants explore and deploy Kubernetes Engine (GKE), GKE Connect, Istio service mesh, and Anthos Config Management capabilities that enable operators to work with modern applications even when split among multiple clusters hosted by multiple providers or on-premises.
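As a small taste of the fine-grained traffic management objective above, the sketch below uses the Kubernetes Python client to create an Istio VirtualService that splits requests 90/10 between two versions of a service. It is a minimal sketch, not course material: the service name, namespace, and subsets (which would be defined in a matching DestinationRule) are illustrative, and the Istio API version may differ by release.

```python
from kubernetes import client, config

config.load_kube_config()  # uses the current kubectl context

# Weighted routing between two subsets of the same service (canary-style).
virtual_service = {
    "apiVersion": "networking.istio.io/v1alpha3",
    "kind": "VirtualService",
    "metadata": {"name": "frontend", "namespace": "demo"},
    "spec": {
        "hosts": ["frontend"],
        "http": [{
            "route": [
                {"destination": {"host": "frontend", "subset": "v1"}, "weight": 90},
                {"destination": {"host": "frontend", "subset": "v2"}, "weight": 10},
            ]
        }],
    },
}

client.CustomObjectsApi().create_namespaced_custom_object(
    group="networking.istio.io",
    version="v1alpha3",
    namespace="demo",
    plural="virtualservices",
    body=virtual_service,
)
```

Gradually shifting the weights toward the new subset is the canary pattern exercised in the traffic-routing lab listed in the outline below.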
Course outline:
- Anthos Overview: describe challenges of hybrid cloud; discuss modern solutions; describe the Anthos technology stack.
- Managing Hybrid Clusters using Kubernetes Engine: understand Anthos GKE hybrid environments, with admin and user clusters; register and authenticate remote Anthos GKE clusters in GKE Hub; view and manage registered clusters, in cloud and on-premises, using GKE Hub; view workloads in all clusters from GKE Hub. Lab: Managing Hybrid Clusters using Kubernetes Engine.
- Introduction to Service Mesh: understand service mesh and the problems it solves; understand Istio architecture and components; explain the Istio on GKE add-on and its lifecycle versus open source Istio; understand request network traffic flow in a service mesh; create a GKE cluster with a service mesh; configure a multi-service application with service mesh; enable external access using an ingress gateway; explain the multi-service example applications, Hipster Shop and Bookinfo. Labs: Installing Open Source Istio on Kubernetes Engine; Installing the Istio on GKE Add-On with Kubernetes Engine.
- Observing Services using Service Mesh Adapters: understand the service mesh flexible adapter model; understand service mesh telemetry processing; explain Stackdriver configurations for logging and monitoring; compare telemetry defaults for cloud and on-premises environments; configure and view custom metrics using service mesh; view cluster and service metrics with pre-configured dashboards; trace microservice calls with timing data using service mesh adapters; visualize and discover service attributes with service mesh. Lab: Telemetry and Observability with Istio.
- Managing Traffic Routing with Service Mesh: understand the service mesh abstract model for traffic management; understand service mesh service discovery and load balancing; review and compare traffic management use cases and configurations; understand ingress configuration using service mesh; visualize traffic routing with live generated requests; configure a service mesh gateway to allow access to services from outside the mesh; apply virtual services and destination rules for version-specific routing; route traffic based on application-layer configuration; shift traffic from one service version to another with fine-grained control, as in a canary deployment. Lab: Managing Traffic Routing with Istio and Envoy.
- Managing Policies and Security with Service Mesh: understand authentication and authorization in service mesh; explain the mTLS flow for service-to-service communication; adopt mutual TLS authentication across the service mesh incrementally (see the sketch after this outline); enable end-user authentication for the frontend service; use service mesh access control policies to secure access to the frontend service. Lab: Managing Policies and Security with Service Mesh.
- Managing Policies using Anthos Config Management: understand the challenge of managing resources across multiple clusters; understand how a Git repository is used as a configuration source of truth; explain the Anthos Config Management components and object lifecycle; install and configure Anthos Config Management, operators, tools, and the related Git repository; verify cluster configuration compliance and drift management; update workload configuration using repo changes. Lab: Managing Policies in Kubernetes Engine using Anthos Config.
- Configuring Anthos GKE for Multi-Cluster Operation: understand how multiple clusters work together using DNS, root CA, and service discovery; explain service mesh control-plane architectures for multi-cluster operation; configure a multi-service application using service mesh across multiple clusters with multiple control planes; configure a multi-service application using service mesh across multiple clusters with a shared control plane; configure service naming/discovery between clusters; review ServiceEntries for cross-cluster service discovery; migrate a workload from a remote cluster to an Anthos GKE cluster. Labs: Configuring GKE for Multi-Cluster Operation with Istio; Configuring GKE for Shared Control Plane Multi-Cluster Operation.
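The sketch referenced in the security module above shows one way an incremental mTLS rollout can be expressed. It assumes a recent Istio release that exposes the PeerAuthentication resource (security.istio.io/v1beta1); the Istio on GKE add-on covered in class may use an older authentication policy API, and the namespace name is illustrative.

```python
from kubernetes import client, config

config.load_kube_config()

# Namespace-scoped PeerAuthentication: PERMISSIVE accepts both plaintext and
# mTLS traffic, the usual first step of an incremental rollout; switch the
# mode to "STRICT" once every workload in the namespace has a sidecar.
peer_auth = {
    "apiVersion": "security.istio.io/v1beta1",
    "kind": "PeerAuthentication",
    "metadata": {"name": "default", "namespace": "demo"},
    "spec": {"mtls": {"mode": "PERMISSIVE"}},
}

client.CustomObjectsApi().create_namespaced_custom_object(
    group="security.istio.io",
    version="v1beta1",
    namespace="demo",
    plural="peerauthentications",
    body=peer_auth,
)
```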
Duration: 3 days (18 CPD hours)

This course is intended for: Cluster administrators (junior systems administrators, junior cloud administrators) interested in deploying additional clusters to meet increasing demands from their organizations. Cluster engineers (senior systems administrators, senior cloud administrators, cloud engineers) interested in the planning and design of OpenShift clusters to meet the performance and reliability requirements of different workloads, and in creating workbooks for these installations. Site reliability engineers (SREs) interested in deploying test bed clusters to validate new settings, updates, customizations, operational procedures, and responses to incidents.

Overview: Validate infrastructure prerequisites for an OpenShift cluster. Run the OpenShift installer with custom settings. Describe and monitor each stage of the OpenShift installation process (see the sketch below). Collect troubleshooting information during an ongoing installation, or after a failed installation. Complete the configuration of cluster services in a newly installed cluster. Install OpenShift on cloud, virtual, or physical infrastructure.

Red Hat OpenShift Installation Lab (DO322) teaches essential skills for installing an OpenShift cluster in a range of environments, from proof of concept to production, and how to identify customizations that may be required because of the underlying cloud, virtual, or physical infrastructure. This course is based on Red Hat OpenShift Container Platform 4.6.

Course outline:
1 - Introduction to container technology: describe how software can run in containers orchestrated by Red Hat OpenShift Container Platform.
2 - Create containerized services: provision a server using container technology.
3 - Manage containers: manipulate prebuilt container images to create and manage containerized services.
4 - Manage container images: manage the life cycle of a container image from creation to deletion.
5 - Create custom container images: design and code a Dockerfile to build a custom container image.
6 - Deploy containerized applications on OpenShift: deploy single-container applications on OpenShift Container Platform.
7 - Troubleshoot containerized applications: troubleshoot a containerized application deployed on OpenShift.
8 - Deploy and manage applications on an OpenShift cluster: use various application packaging methods to deploy applications to an OpenShift cluster, then manage their resources.
9 - Design containerized applications for OpenShift: select a containerization method for an application and create a container to run on an OpenShift cluster.
10 - Publish enterprise container images: create an enterprise registry and publish container images to it.
11 - Build applications: describe the OpenShift build process, then trigger and manage builds.
12 - Customize source-to-image (S2I) builds: customize an existing S2I base image and create a new one.
13 - Create applications from OpenShift templates: describe the elements of a template and create a multi-container application template.
14 - Manage application deployments: monitor application health and implement various deployment methods for cloud-native applications.
15 - Perform comprehensive review: create and deploy cloud-native applications on OpenShift.
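As a rough illustration of monitoring an installation while it converges (referenced in the overview above), the following sketch reads the cluster's ClusterOperator objects with the Kubernetes Python client. It assumes a kubeconfig produced by the installer and is not part of the official course material.

```python
from kubernetes import client, config

config.load_kube_config()  # assumes the kubeconfig generated by the installer

# ClusterOperator objects (config.openshift.io/v1) report the progress of each
# operator during installation; one that stays Available=False or Degraded=True
# is a good place to start collecting troubleshooting information.
operators = client.CustomObjectsApi().list_cluster_custom_object(
    group="config.openshift.io", version="v1", plural="clusteroperators"
)

for op in operators["items"]:
    conditions = {c["type"]: c["status"] for c in op["status"].get("conditions", [])}
    print(f'{op["metadata"]["name"]:30s} '
          f'Available={conditions.get("Available")} '
          f'Progressing={conditions.get("Progressing")} '
          f'Degraded={conditions.get("Degraded")}')
```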
Duration: 2 days (12 CPD hours)

This course is intended for: Operators and application owners who are responsible for deploying and managing policies for multiple Kubernetes clusters across on-premises and public cloud environments.

Overview: By the end of the course, you should be able to meet the following objectives:
- Describe the VMware Tanzu Mission Control architecture
- Configure user and group access
- Create access, image registry, network, security, quota, and custom policies (see the sketch below)
- Connect your on-premises vSphere with Tanzu Supervisor cluster to VMware Tanzu Mission Control
- Create, manage, and back up Tanzu Kubernetes clusters
- Perform cluster inspections
- Monitor and secure Kubernetes environments

During this two-day course, you focus on using VMware Tanzu Mission Control to provision and manage Kubernetes clusters. The course covers how to apply access, image registry, network, security, quota, and custom policies to Kubernetes environments. For cluster provisioning and management, the course focuses on deploying, upgrading, backing up, and monitoring Kubernetes clusters on VMware vSphere with Tanzu. Given the abstractions of VMware Tanzu Mission Control, the learnings should be transferable to public cloud.

Course outline:
- Introducing VMware Tanzu Mission Control: VMware Tanzu Mission Control; accessing VMware Tanzu Mission Control; VMware Cloud services access control; VMware Tanzu Mission Control architecture.
- Cluster Management: attached clusters; management clusters; provisioned clusters; cluster inspections; data protection; VMware Tanzu Observability by Wavefront; VMware Tanzu Service Mesh.
- Policy Management: policy management; access policies; image registry policies; network policies; security policies; quota policies; custom policies; policy insights.
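For context on the network policies mentioned above, the sketch below creates a default-deny ingress NetworkPolicy directly with the Kubernetes Python client; the point of Tanzu Mission Control is to push this kind of policy consistently to many clusters rather than applying it one cluster at a time. The namespace name is illustrative, and this is not the Tanzu Mission Control API itself.

```python
from kubernetes import client, config

config.load_kube_config()

# A default-deny ingress policy for one namespace -- roughly the kind of
# network policy a platform team might roll out fleet-wide.
policy = client.V1NetworkPolicy(
    metadata=client.V1ObjectMeta(name="default-deny-ingress", namespace="team-a"),
    spec=client.V1NetworkPolicySpec(
        pod_selector=client.V1LabelSelector(),  # empty selector = all pods
        policy_types=["Ingress"],               # no ingress rules -> deny all
    ),
)

client.NetworkingV1Api().create_namespaced_network_policy(
    namespace="team-a", body=policy
)
```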
Duration: 3 days (18 CPD hours)

This course is intended for: Platform operators who are responsible for deploying and managing Tanzu Kubernetes clusters.

Overview: By the end of the course, you should be able to meet the following objectives:
- Describe how Tanzu Kubernetes Grid fits in the VMware Tanzu portfolio
- Describe the Tanzu Kubernetes Grid architecture
- Deploy and manage Tanzu Kubernetes Grid management clusters
- Deploy and manage Tanzu Kubernetes Grid workload clusters
- Deploy, configure, and manage Tanzu Kubernetes Grid packages
- Perform basic troubleshooting

During this three-day course, you focus on installing VMware Tanzu Kubernetes Grid on a VMware vSphere environment and then provisioning and managing Tanzu Kubernetes Grid clusters. The course covers how to install Tanzu Kubernetes Grid packages for image registry, authentication, logging, ingress, multi-pod network interfaces, service discovery, and monitoring. The concepts learned in this course are transferable for users who must install Tanzu Kubernetes Grid on other supported clouds.

Course outline:
- Course Introduction: introductions and course logistics; course objectives.
- Introducing VMware Tanzu Kubernetes Grid: identify the VMware Tanzu products responsible for Kubernetes life cycle management and describe the main differences between them; explain the core concepts of Tanzu Kubernetes Grid, including bootstrap, Tanzu Kubernetes Grid management and workload clusters, and the role of Cluster API; list the components of a Tanzu Kubernetes Grid instance; illustrate how to use the Tanzu CLI; identify the requirements for a bootstrap machine; define the Carvel tool set; define Cluster API; identify the infrastructure providers; list the Cluster API controllers; identify the Cluster API custom resource definitions.
- Management Clusters: list the requirements for deploying a management cluster; differentiate between deploying on vSphere 6.7 Update 3 and vSphere 7; describe the components of NSX Advanced Load Balancer; explain how Tanzu Kubernetes Grid integrates with NSX Advanced Load Balancer; explain how Kubernetes manages authentication; define Pinniped; define Dex; describe the Pinniped authentication workflow; list the steps to install a Tanzu Kubernetes Grid management cluster; summarize the events of a management cluster creation; demonstrate how to use commands when working with management clusters.
- Tanzu Kubernetes Clusters: list the steps to build a custom image; describe the available customizations; identify the options for deploying Tanzu Kubernetes Grid clusters; explain how Tanzu Kubernetes Grid clusters are created; discuss which VMs make up a Tanzu Kubernetes Grid cluster; list the pods that run on a Tanzu Kubernetes cluster; describe the Tanzu Kubernetes Grid core add-ons that are installed on a cluster.
- Configuring and Managing Tanzu Kubernetes Grid Instances: define the Tanzu Kubernetes Grid packages; describe the Harbor image registry; define Fluent Bit; identify the logs that Fluent Bit collects; explain basic Fluent Bit configuration; describe the Contour ingress controller; demonstrate how to install Contour on a Tanzu Kubernetes Grid cluster; demonstrate how to install service discovery with ExternalDNS; define Multus CNI; define Prometheus; define Grafana.
- Troubleshooting: discuss the various Tanzu Kubernetes Grid logs; identify the location of Tanzu Kubernetes Grid logs; explain the purpose of crash diagnostics; demonstrate how to use SSH to connect to a Tanzu Kubernetes Grid VM; describe the steps for troubleshooting a failed cluster deployment (see the sketch after this outline).

Additional course details: Delivery by TDSynex, Exit Certified, and New Horizons, a VMware Authorised Training Centre (VATC).
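Because Tanzu Kubernetes Grid provisions clusters through Cluster API, a first step when troubleshooting a failed cluster deployment (referenced in the outline above) is often to inspect the Machine objects in the management cluster. The sketch below does that with the Kubernetes Python client; the kubeconfig context name is illustrative, and the Cluster API version is an assumption that varies across Tanzu Kubernetes Grid releases.

```python
from kubernetes import client, config

# Point at the management cluster's kubeconfig context (name is illustrative).
config.load_kube_config(context="tkg-mgmt-admin@tkg-mgmt")

# A stalled workload-cluster deployment usually shows up as Machine objects
# stuck in a non-Running phase. Older releases expose v1alpha3 rather than
# v1beta1, so adjust the version to match your environment.
machines = client.CustomObjectsApi().list_namespaced_custom_object(
    group="cluster.x-k8s.io", version="v1beta1",
    namespace="default", plural="machines",
)

for m in machines["items"]:
    print(m["metadata"]["name"], m.get("status", {}).get("phase", "Unknown"))
```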
Duration: 2 days (12 CPD hours)

This course is intended for: This is an introductory-level class for intermediate-skilled team members. Students should have prior software development experience or exposure, have some basic familiarity with containers, and should also be able to navigate the command line.

Overview: This course is approximately 50% hands-on, combining expert lecture, real-world demonstrations, and group discussions with machine-based practical labs and exercises. Our engaging instructors and mentors are highly experienced practitioners who bring years of current 'on-the-job' experience into every classroom. Working in a hands-on learning environment led by our expert facilitator, students will explore: what a Kubernetes cluster is, and how to deploy and manage one on-premises and in the cloud; how Kubernetes fits into the cloud-native ecosystem, and how it interfaces with other important technologies such as Docker; the major Kubernetes components that let us deploy and manage applications in a modern cloud-native fashion; and how to define and manage applications with declarative manifest files that should be version-controlled and treated like code.

Containerization has taken the IT world by storm in the last few years. Large software houses, starting with Google and Amazon, run significant portions of their production load in containers. Kubernetes is an open-source system for automating deployment, scaling, and management of containerized applications. This is a hands-on, workshop-style course that teaches the core features and functionality of Kubernetes. You will leave this course knowing how to build a Kubernetes cluster, and how to deploy and manage applications on that cluster.

Course outline:
- Getting Started: our sample application; Kubernetes concepts; declarative vs imperative; the Kubernetes network model; first contact with kubectl; setting up Kubernetes.
- Working with Containers: running our first containers on Kubernetes; exposing containers; shipping images with a registry; running our application on Kubernetes.
- Exploring the Kubernetes Dashboard: the Kubernetes dashboard; security implications of kubectl apply; scaling a deployment (see the sketch below); daemon sets; labels and selectors; rolling updates.
- Next Steps: accessing logs from the CLI; managing stacks with Helm; namespaces; next steps.
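To give a flavour of the declarative approach and the scaling topic noted in the outline above, here is a minimal sketch using the Kubernetes Python client to create a Deployment and then scale it. The names, namespace, and image are placeholders, not the course's actual sample application.

```python
from kubernetes import client, config

config.load_kube_config()
apps = client.AppsV1Api()

# Declare a two-replica Deployment (image and labels are placeholders).
deployment = client.V1Deployment(
    metadata=client.V1ObjectMeta(name="hello-web", labels={"app": "hello-web"}),
    spec=client.V1DeploymentSpec(
        replicas=2,
        selector=client.V1LabelSelector(match_labels={"app": "hello-web"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "hello-web"}),
            spec=client.V1PodSpec(containers=[
                client.V1Container(name="web", image="nginx:1.25",
                                   ports=[client.V1ContainerPort(container_port=80)])
            ]),
        ),
    ),
)
apps.create_namespaced_deployment(namespace="default", body=deployment)

# Scaling is just a change to the desired state; the controller does the rest.
apps.patch_namespaced_deployment_scale(
    name="hello-web", namespace="default", body={"spec": {"replicas": 5}}
)
```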
Duration: 3 days (18 CPD hours)

This class is intended for the following participants: Cloud architects, administrators, and SysOps/DevOps personnel; and individuals using Google Cloud Platform to create new solutions or to integrate existing systems, application environments, and infrastructure with Google Cloud Platform.

Overview: This course teaches participants the following skills:
- Understand how software containers work
- Understand the architecture of Kubernetes
- Understand the architecture of Google Cloud Platform
- Understand how pod networking works in Kubernetes Engine
- Create and manage Kubernetes Engine clusters using the GCP Console and gcloud/kubectl commands
- Launch, roll back, and expose jobs in Kubernetes
- Manage access control using Kubernetes RBAC and Google Cloud IAM
- Manage pod security policies and network policies
- Use Secrets and ConfigMaps to isolate security credentials and configuration artifacts
- Understand GCP choices for managed storage services
- Monitor applications running in Kubernetes Engine

This class introduces participants to deploying and managing containerized applications on Google Kubernetes Engine (GKE) and the other services provided by Google Cloud Platform. Through a combination of presentations, demos, and hands-on labs, participants explore and deploy solution elements, including infrastructure components such as pods, containers, deployments, and services, as well as networks and application services. This course also covers deploying practical solutions, including security and access management, resource management, and resource monitoring.

Course outline:
- Introduction to Google Cloud Platform: use the Google Cloud Platform Console; use Cloud Shell; define cloud computing; identify GCP's compute services; understand regions and zones; understand the cloud resource hierarchy; administer your GCP resources.
- Containers and Kubernetes in GCP: create a container using Cloud Build; store a container in Container Registry; understand the relationship between Kubernetes and Google Kubernetes Engine (GKE); understand how to choose among GCP compute platforms.
- Kubernetes Architecture: understand the architecture of Kubernetes (pods, namespaces); understand the control-plane components of Kubernetes; create container images using Google Cloud Build; store container images in Google Container Registry; create a Kubernetes Engine cluster.
- Kubernetes Operations: work with the kubectl command; inspect the cluster and Pods; view a Pod's console output; sign in to a Pod interactively.
- Deployments, Jobs, and Scaling: create and use Deployments; create and run Jobs and CronJobs; scale clusters manually and automatically; configure Node and Pod affinity; get software into your cluster with Helm charts and Kubernetes Marketplace.
- GKE Networking: create Services to expose applications that are running within Pods; use load balancers to expose Services to external clients; create Ingress resources for HTTP(S) load balancing; leverage container-native load balancing to improve Pod load balancing; define Kubernetes network policies to allow and block traffic to Pods.
- Persistent Data and Storage: use Secrets to isolate security credentials; use ConfigMaps to isolate configuration artifacts; push out and roll back updates to Secrets and ConfigMaps; configure Persistent Storage Volumes for Kubernetes Pods; use StatefulSets to ensure that claims on persistent storage volumes persist across restarts.
- Access Control and Security in Kubernetes and Kubernetes Engine: understand Kubernetes authentication and authorization; define Kubernetes RBAC roles and role bindings for accessing resources in namespaces (see the sketch after this outline); define Kubernetes RBAC cluster roles and cluster role bindings for accessing cluster-scoped resources; define Kubernetes pod security policies; understand the structure of GCP IAM; define IAM roles and policies for Kubernetes Engine cluster administration.
- Logging and Monitoring: use Stackdriver to monitor and manage availability and performance; locate and inspect Kubernetes logs; create probes for wellness checks on live applications.
- Using GCP Managed Storage Services from Kubernetes Applications: understand the pros and cons of using a managed storage service versus self-managed containerized storage; enable applications running in GKE to access GCP storage services; understand use cases for Cloud Storage, Cloud SQL, Cloud Spanner, Cloud Bigtable, Cloud Firestore, and BigQuery from within a Kubernetes application.
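The sketch referenced in the access control module above creates a namespace-scoped RBAC Role and RoleBinding with the Kubernetes Python client. On GKE, Cloud IAM decides who can reach the cluster at all, and RBAC then refines what they may do inside it. The namespace and user identity are hypothetical placeholders.

```python
from kubernetes import client, config

config.load_kube_config()
rbac = client.RbacAuthorizationV1Api()

# A namespace-scoped Role that only allows reading Pods.
role = {
    "apiVersion": "rbac.authorization.k8s.io/v1",
    "kind": "Role",
    "metadata": {"name": "pod-reader", "namespace": "dev"},
    "rules": [{"apiGroups": [""], "resources": ["pods"],
               "verbs": ["get", "list", "watch"]}],
}
rbac.create_namespaced_role(namespace="dev", body=role)

# Bind the Role to a (hypothetical) user identity.
binding = {
    "apiVersion": "rbac.authorization.k8s.io/v1",
    "kind": "RoleBinding",
    "metadata": {"name": "pod-reader-binding", "namespace": "dev"},
    "subjects": [{"kind": "User", "name": "dev-user@example.com",
                  "apiGroup": "rbac.authorization.k8s.io"}],
    "roleRef": {"kind": "Role", "name": "pod-reader",
                "apiGroup": "rbac.authorization.k8s.io"},
}
rbac.create_namespaced_role_binding(namespace="dev", body=binding)
```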
Duration: 5 days (30 CPD hours)

This course is intended for: Professionals in job roles such as communication engineers, project managers, network engineers, software engineers, and system architects.

The Developing Applications for Cisco Webex and Webex Devices (DEVWBX) v1.1 course prepares you to use the programmability features of Webex, the Cisco enterprise solution for video conferencing, online meetings, online training, webinars, web conferencing, cloud calling, and collaboration. Through a combination of lessons and hands-on labs, you will learn about the Webex Application Programming Interface (API) foundation, meetings, devices, teams, messaging, embedding Cisco Webex, administration, and compliance. You will learn how to leverage Webex APIs to extend the functionalities of teams, meetings, and devices, and explore how these APIs can help automate, administer, and enforce compliance. This course prepares you for the 300-920 Developing Applications for Cisco Webex and Webex Devices (DEVWBX) exam.

Course outline:
- Introducing Webex APIs Foundations: Webex as an extensible platform.
- Building Cisco Webex Teams Applications: introduction to Webex messaging (see the sketch below).
- Developing with the Webex Meetings XML API: describe the capabilities of Cisco Webex Meetings APIs.
- Automating and Extending Cisco Collaboration Devices with xAPI: overview, capabilities, and transport methods for Cisco endpoint device programmability.
- Embedding Cisco Webex: benefits of embedding Cisco Webex into other applications.
- Managing Administration and Compliance with Cisco Webex APIs: administer a Cisco Webex organization.
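As a small taste of the messaging topic noted in the outline above, the sketch below posts a message to a Webex room over the REST API. The access token and room ID are placeholders supplied through environment variables; this is a minimal illustration, not a lab from the course.

```python
import os
import requests

# Placeholders: export WEBEX_ACCESS_TOKEN and WEBEX_ROOM_ID before running.
WEBEX_TOKEN = os.environ["WEBEX_ACCESS_TOKEN"]
ROOM_ID = os.environ["WEBEX_ROOM_ID"]

# Post a Markdown-formatted message to a room via the Webex messages endpoint.
resp = requests.post(
    "https://webexapis.com/v1/messages",
    headers={"Authorization": f"Bearer {WEBEX_TOKEN}"},
    json={"roomId": ROOM_ID, "markdown": "Build **passed** for commit abc123."},
    timeout=10,
)
resp.raise_for_status()
print("Created message:", resp.json()["id"])
```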
Duration: 4 days (24 CPD hours)

Overview: Topics include: installation of a multi-node Kubernetes cluster using kubeadm, and how to grow a cluster; choosing and implementing cluster networking; various methods of application lifecycle management, including scaling, updates, and rollbacks; configuring security, both for the cluster and for containers; managing storage available to containers; monitoring, logging, and troubleshooting of containers and the cluster; configuring scheduling and affinity of container deployments; using Helm and charts to automate application deployment; and understanding Federation for fault tolerance and higher availability.

In this vendor-agnostic course, you'll learn the installation, configuration, and administration of a production-grade Kubernetes cluster.

Course outline:
- Introduction: Linux Foundation; Linux Foundation training; Linux Foundation certifications; laboratory exercises, solutions, and resources; distribution details; labs.
- Basics of Kubernetes: define Kubernetes; cluster structure; adoption; project governance and CNCF; labs.
- Installation and Configuration: getting started with Kubernetes; Minikube; kubeadm (see the sketch after this outline); more installation tools; labs.
- Kubernetes Architecture: Kubernetes architecture; networking; other cluster systems; labs.
- APIs and Access: API access; annotations; working with a simple Pod; kubectl and the API; Swagger and OpenAPI; labs.
- API Objects: API objects; the v1 group; API resources; RBAC APIs; labs.
- Managing State with Deployments: deployment overview; managing deployment states; Deployments and ReplicaSets; DaemonSets; labels; labs.
- Services: overview; accessing services; DNS; labs.
- Volumes and Data: volumes overview; volumes; persistent volumes; passing data to Pods; ConfigMaps; labs.
- Ingress: overview; ingress controller; ingress rules; labs.
- Scheduling: overview; scheduler settings; policies; affinity rules; taints and tolerations; labs.
- Logging and Troubleshooting: overview; troubleshooting flow; basic start sequence; monitoring; logging; troubleshooting resources; labs.
- Custom Resource Definitions: overview; custom resource definitions; aggregated APIs; labs.
- Kubernetes Federation: overview; federated resources; labs.
- Helm: overview; Helm; using Helm; labs.
- Security: overview; accessing the API; authentication and authorization; admission controllers; Pod policies; network policies; labs.
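The kubeadm installation step referenced in the outline above boils down to two commands: initialise the first control-plane node, then generate a join command for the workers. The sketch below wraps those commands in Python; the pod CIDR value is illustrative and must match whichever CNI plugin you install afterwards.

```python
import subprocess

def run(cmd):
    """Run a command and return its stdout, raising on a non-zero exit."""
    return subprocess.run(cmd, check=True, capture_output=True, text=True).stdout

# On the first control-plane node: bootstrap the cluster with kubeadm.
# The pod network CIDR here is an example value.
print(run(["kubeadm", "init", "--pod-network-cidr=192.168.0.0/16"]))

# Produce the join command that each worker node will execute.
join_cmd = run(["kubeadm", "token", "create", "--print-join-command"]).strip()
print("Run this on each worker node:", join_cmd)
```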
Duration: 5 days (30 CPD hours)

This course is intended for: Professionals who need to maintain or set up a Kubernetes cluster; container orchestration engineers; DevOps professionals.

Overview:
- Cluster architecture, installation, and configuration
- Rolling out and rolling back applications in production
- Scaling clusters and applications to best use
- How to create robust, self-healing deployments
- Networking configuration on cluster nodes, services, and CoreDNS
- Persistent and intelligent storage for applications
- Troubleshooting cluster, application, and user errors
- Vendor-agnostic, cloud provider-based Kubernetes

Kubernetes is a cloud orchestration platform providing reliability, replication, and stability while maximizing resource utilization for applications and services. By the conclusion of this hands-on, vendor-agnostic training you will go back to work with the knowledge, skills, and abilities to design, implement, and maintain a production-grade Kubernetes cluster. We prioritize covering all objectives and concepts necessary for passing the Certified Kubernetes Administrator (CKA) exam. You will be provided the components necessary to assemble your own high-availability Kubernetes environment and configure, expand, and control it to meet the demands made of cluster administrators. Your week of intensive, hands-on training will conclude with a mock CKA exam that simulates the real exam.

Course outline:
- Cluster Architecture, Installation & Configuration: Each student will be given an environment that allows them to build a Kubernetes cluster from scratch. After a detailed discussion of key architectural components and primitives, students will install and compare two production-grade Kubernetes clusters.
- Review: Kubernetes Fundamentals: After successfully instantiating their own Kubernetes cluster, students will be guided through the foundational concepts of deploying and managing applications in a production environment.
- Workloads & Scheduling: After establishing a solid Kubernetes command-line foundation, students will be led through discussion and hands-on labs that focus on effectively creating applications that are easy to configure, simple to manage, quick to scale, and able to heal themselves.
- Services & Networking: Thoroughly understanding the underlying physical and network infrastructure of a Kubernetes cluster is an essential skill for a Certified Kubernetes Administrator. After an in-depth discussion of the Kubernetes networking model, students explore the networking of their cluster's control plane, workers, Pods, and Services.
- Storage: Certified Kubernetes Administrators are often in charge of designing and implementing the storage architecture for their clusters. After discussing many common cluster storage solutions and how to best use each, students practice incorporating stateful storage into their applications.
- Troubleshooting: A Certified Kubernetes Administrator is expected to be an effective troubleshooter for their cluster. The lecture covers a variety of ways to evaluate and optimize available log information for efficient troubleshooting, and the labs have students practice diagnosing and resolving several typical issues within their Kubernetes cluster (see the sketch after this outline).
- Certified Kubernetes Administrator Practice Exam: Just like the Cloud Native Computing Foundation CKA exam, students will be given two hours to complete hands-on tasks in their own Kubernetes environment. Unlike the certification exam, students taking the Alta3 CKA Practice Exam will have scoring and documented answers available immediately after the exam is complete, and will have built-in class time to re-examine topics that they wish to discuss in greater depth.
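The troubleshooting sketch referenced in the outline above shows a typical first pass when a workload misbehaves: find the pods that are not healthy, then look at the abnormal events in their namespaces. It uses the Kubernetes Python client against the current kubeconfig context and is only an illustration of the workflow, not an exam answer.

```python
from kubernetes import client, config

config.load_kube_config()
core = client.CoreV1Api()

# Flag pods that are not Running or Succeeded, then print non-Normal events
# from their namespaces.
problem_namespaces = set()
for pod in core.list_pod_for_all_namespaces().items:
    if pod.status.phase not in ("Running", "Succeeded"):
        print(f"{pod.metadata.namespace}/{pod.metadata.name}: {pod.status.phase}")
        problem_namespaces.add(pod.metadata.namespace)

for ns in problem_namespaces:
    for event in core.list_namespaced_event(ns).items:
        if event.type != "Normal":
            print(f"[{ns}] {event.reason}: {event.message}")
```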
Duration: 1 day (6 CPD hours)

This course is intended for the following participants: Cloud professionals interested in taking the Data Engineer certification exam; data engineering professionals interested in taking the Data Engineer certification exam.

Overview: This course teaches participants the following skills:
- Position the Professional Data Engineer certification
- Provide information, tips, and advice on taking the exam
- Review the sample case studies
- Review each section of the exam, covering the highest-level concepts sufficient to build confidence in what is known by the candidate and to indicate skill gaps/areas of study if not known by the candidate
- Connect candidates to appropriate target learning

This course will help prospective candidates plan their preparation for the Professional Data Engineer exam. The session will cover the structure and format of the examination, as well as its relationship to other Google Cloud certifications. Through lectures, quizzes, and discussions, candidates will familiarize themselves with the domains covered by the examination, to help them devise a preparation strategy, rehearse useful skills including exam question reasoning and case comprehension, and review tips and topics from the Data Engineering curriculum.

Course outline:
- Understanding the Professional Data Engineer Certification: position the Professional Data Engineer certification among the offerings; distinguish between Associate and Professional; provide guidance between Professional Data Engineer and Associate Cloud Engineer; describe how the exam is administered and the exam rules; provide general advice about taking the exam.
- Sample Case Studies for the Professional Data Engineer Exam: Flowlogistic; MJTelco.
- Designing and Building (review and preparation tips): designing data processing systems; designing flexible data representations; designing data pipelines; designing data processing infrastructure; building and maintaining data structures and databases; building and maintaining flexible data representations; building and maintaining pipelines; building and maintaining processing infrastructure.
- Analyzing and Modeling (review and preparation tips): analyzing data and enabling machine learning; analyzing data; machine learning; machine learning model deployment; modeling business processes for analysis and optimization; mapping business requirements to data representations; optimizing data representations, data infrastructure performance, and cost.
- Reliability, Policy, and Security (review and preparation tips): designing for reliability; performing quality control; assessing, troubleshooting, and improving data representation and data processing infrastructure; recovering data; visualizing data and advocating policy; building (or selecting) data visualization and reporting tools; advocating policies and publishing data and reports; designing for security and compliance; designing secure data infrastructure and processes; designing for legal compliance.
- Resources and Next Steps: resources for learning more about designing data processing systems, data structures, and databases; resources for learning more about data analysis, machine learning, business process analysis, and optimization; resources for learning more about data visualization and policy; resources for learning more about reliability design; resources for learning more about business process analysis and optimization; resources for learning more about reliability, policies, security, and compliance.