Overview
This 2-day programme covers the latest techniques used for fixed income attribution. This hands-on course enables participants to gain practical working experience of fixed income attribution, from planning to implementation and analysis. After completing the course you will have developed the skills to: understand how attribution works and the value it adds to the investment process; interpret attribution reports from commercial systems; assess the strengths and weaknesses of commercially available attribution software; make informed build vs. buy decisions; and present results in terms accessible to all parts of the business.
Who the course is for
Performance analysts, fund and portfolio managers, investment officers, fixed income professionals (marketing/sales), auditors and compliance, quants and IT developers.
Course Content
To learn more about the day-by-day course content please click here. To learn more about schedule, pricing & delivery options, book a meeting with a course specialist now.
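To make the idea of attribution concrete, the Python sketch below decomposes a single bond's monthly return into carry, yield-curve and spread effects using a simple duration-based model. The factor names, durations and yield moves are illustrative assumptions only; they do not represent the methodology of any particular commercial attribution system covered in the course.

```python
# Illustrative duration-based fixed income attribution for one bond over one period.
# All inputs are hypothetical; real systems use richer curve and spread models.

def attribute_bond_return(total_return, yield_start, dt_years,
                          mod_duration, d_bench_yield,
                          spread_duration, d_spread):
    """Decompose a bond's period return into carry, curve, spread and residual."""
    carry = yield_start * dt_years                       # income from holding the bond
    curve = -mod_duration * d_bench_yield                # price effect of benchmark yield move
    spread = -spread_duration * d_spread                 # price effect of credit spread move
    residual = total_return - (carry + curve + spread)   # selection / unexplained
    return {"carry": carry, "curve": curve, "spread": spread, "residual": residual}


effects = attribute_bond_return(
    total_return=0.0125,                      # 1.25% over the month (hypothetical)
    yield_start=0.048, dt_years=1 / 12,
    mod_duration=6.2, d_bench_yield=-0.0010,  # benchmark yields fell 10 bp
    spread_duration=5.8, d_spread=-0.0005,    # spreads tightened 5 bp
)
for factor, value in effects.items():
    print(f"{factor:>8}: {value * 100:6.3f}%")
```

Running the sketch shows roughly 0.40% carry, 0.62% curve effect and 0.29% spread effect, leaving a small residual, which is the kind of decomposition an attribution report presents position by position.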
Duration
2 Days 12 CPD hours
This course is intended for
This class is primarily intended for the following participants: technical employees using GCP, including customer companies, partners and system integrators: deployment engineers, cloud architects, cloud administrators, system engineers, and SysOps/DevOps engineers; and individuals using GCP to create, integrate, or modernize solutions using secure, scalable microservices architectures in hybrid environments.
Overview
Connect and manage Anthos GKE clusters from GCP Console whether clusters are part of Anthos on Google Cloud or Anthos deployed on VMware. Understand how service mesh proxies are installed, configured and managed. Configure centralized logging, monitoring, tracing, and service visualizations wherever the Anthos GKE clusters are hosted. Understand and configure fine-grained traffic management. Use service mesh security features for service-service authentication, user authentication, and policy-based service authorization. Install a multi-service application spanning multiple clusters in a hybrid environment. Understand how services communicate across clusters. Migrate services between clusters. Install Anthos Config Management, use it to enforce policies, and explain how it can be used across multiple clusters.
This two-day instructor-led course prepares students to modernize, manage, and observe their applications using Kubernetes, whether the application is deployed on-premises or on Google Cloud Platform (GCP). Through presentations and hands-on labs, participants explore and deploy using Kubernetes Engine (GKE), GKE Connect, the Istio service mesh and Anthos Config Management capabilities that enable operators to work with modern applications even when split among multiple clusters hosted by multiple providers, or on-premises.
Anthos Overview
Describe challenges of hybrid cloud. Discuss modern solutions. Describe the Anthos Technology Stack.
Managing Hybrid Clusters using Kubernetes Engine
Understand Anthos GKE hybrid environments, with Admin and User clusters. Register and authenticate remote Anthos GKE clusters in GKE Hub. View and manage registered clusters, in cloud and on-premises, using GKE Hub. View workloads in all clusters from GKE Hub.
Lab: Managing Hybrid Clusters using Kubernetes Engine
Introduction to Service Mesh
Understand service mesh, and the problems it solves. Understand Istio architecture and components. Explain the Istio on GKE add-on and its lifecycle, vs OSS Istio. Understand request network traffic flow in a service mesh. Create a GKE cluster with a service mesh. Configure a multi-service application with service mesh. Enable external access using an ingress gateway. Explain the multi-service example applications: Hipster Shop and Bookinfo.
Lab: Installing Open Source Istio on Kubernetes Engine
Lab: Installing the Istio on GKE Add-On with Kubernetes Engine
Observing Services using Service Mesh Adapters
Understand the service mesh flexible adapter model. Understand service mesh telemetry processing. Explain Stackdriver configurations for logging and monitoring. Compare telemetry defaults for cloud and on-premises environments. Configure and view custom metrics using service mesh. View cluster and service metrics with pre-configured dashboards. Trace microservice calls with timing data using service mesh adapters. Visualize and discover service attributes with service mesh.
Lab: Telemetry and Observability with Istio
Managing Traffic Routing with Service Mesh
Understand the service mesh abstract model for traffic management. Understand service mesh service discovery and load balancing. Review and compare traffic management use cases and configurations. Understand ingress configuration using service mesh. Visualize traffic routing with live generated requests. Configure a service mesh gateway to allow access to services from outside the mesh. Apply virtual services and destination rules for version-specific routing. Route traffic based on application-layer configuration. Shift traffic from one service version to another, with fine-grained control, like a canary deployment (a code sketch follows this outline).
Lab: Managing Traffic Routing with Istio and Envoy
Managing Policies and Security with Service Mesh
Understand authentication and authorization in service mesh. Explain the mTLS flow for service-to-service communication. Adopt mutual TLS authentication across the service mesh incrementally. Enable end-user authentication for the frontend service. Use service mesh access control policies to secure access to the frontend service.
Lab: Managing Policies and Security with Service Mesh
Managing Policies using Anthos Config Management
Understand the challenge of managing resources across multiple clusters. Understand how a Git repository is used as a configuration source of truth. Explain the Anthos Config Management components and object lifecycle. Install and configure Anthos Config Management, operators, tools, and the related Git repository. Verify cluster configuration compliance and drift management. Update workload configuration using repo changes.
Lab: Managing Policies in Kubernetes Engine using Anthos Config
Configuring Anthos GKE for Multi-Cluster Operation
Understand how multiple clusters work together using DNS, root CA, and service discovery. Explain service mesh control-plane architectures for multi-cluster operation. Configure a multi-service application using service mesh across multiple clusters with multiple control planes. Configure a multi-service application using service mesh across multiple clusters with a shared control plane. Configure service naming/discovery between clusters. Review ServiceEntries for cross-cluster service discovery. Migrate a workload from a remote cluster to an Anthos GKE cluster.
Lab: Configuring GKE for Multi-Cluster Operation with Istio
Lab: Configuring GKE for Shared Control Plane Multi-Cluster Operation
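The fine-grained traffic shifting covered in the Managing Traffic Routing topic is normally applied with kubectl or istioctl and YAML manifests in the labs. As a rough sketch of the same idea, the snippet below uses the Kubernetes Python client to create an Istio VirtualService that splits traffic 90/10 between two versions of the Bookinfo reviews service; the namespace and subset names are assumptions, and matching DestinationRule subsets are presumed to exist.

```python
# Illustrative canary traffic split: send 90% of traffic to subset v1 and 10% to v2.
# Assumes Istio is installed and DestinationRule subsets "v1"/"v2" already exist.
from kubernetes import client, config

config.load_kube_config()  # or load_incluster_config() when running inside a cluster
api = client.CustomObjectsApi()

virtual_service = {
    "apiVersion": "networking.istio.io/v1alpha3",
    "kind": "VirtualService",
    "metadata": {"name": "reviews", "namespace": "default"},
    "spec": {
        "hosts": ["reviews"],
        "http": [{
            "route": [
                {"destination": {"host": "reviews", "subset": "v1"}, "weight": 90},
                {"destination": {"host": "reviews", "subset": "v2"}, "weight": 10},
            ]
        }],
    },
}

# Apply the VirtualService as a custom resource; adjusting the weights later
# gradually shifts traffic toward the new version, as in a canary rollout.
api.create_namespaced_custom_object(
    group="networking.istio.io", version="v1alpha3",
    namespace="default", plural="virtualservices", body=virtual_service,
)
```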
Duration
5 Days 30 CPD hours
This course is intended for
Audience for this course: this course is designed for system administrators responsible for creating OpenShift Enterprise instances, deploying applications, creating process customizations, and managing instances and projects.
Prerequisites for this course: have taken Red Hat Enterprise Linux Administration I and II (RH124 and RH134), or equivalent Red Hat® Enterprise Linux® system administration experience; be certified as a Red Hat Certified System Administrator (RHCSA), or equivalent Red Hat Enterprise Linux system administration experience; be certified as a Red Hat Certified Engineer (RHCE®).
Overview
Learn to install, configure, and manage OpenShift Enterprise by Red Hat instances. OpenShift Enterprise Administration (DO280) prepares the system administrator to install, configure, and manage OpenShift Enterprise by Red Hat® instances. OpenShift Enterprise, Red Hat's platform-as-a-service (PaaS) offering, provides pre-defined deployment environments for applications of all types through its use of container technology. This creates an environment that supports DevOps principles such as reduced time to market and continuous delivery. In this course, students will learn how to install and configure an instance of OpenShift Enterprise, test the instance by deploying a real-world application, and manage projects/applications through hands-on labs.
Course content summary: container concepts; configuring resources with the command line interface; building a pod; enabling services for a pod; creating routes; downloading and configuring images; rolling back and activating deployments; creating custom S2I images.
This course will empower you to install and administer the Red Hat® OpenShift® Container Platform, with hands-on, lab-based materials that show you how to install, configure, and manage OpenShift clusters and deploy sample applications to further understand how developers will use the platform. This course is based on Red Hat® Enterprise Linux® 7.5 and OpenShift Container Platform 3.9. OpenShift is a containerized application platform that allows your enterprise to manage container deployments and scale your applications using Kubernetes. OpenShift provides predefined application environments and builds upon Kubernetes to provide support for DevOps principles such as reduced time to market, infrastructure-as-code, continuous integration (CI), and continuous delivery (CD).
1 - INTRODUCTION TO RED HAT OPENSHIFT ENTERPRISE
Review features and architecture of OpenShift Enterprise.
2 - INSTALL OPENSHIFT ENTERPRISE
Install OpenShift Enterprise and configure a master and node.
3 - EXECUTE COMMANDS
Execute commands using the command line interface.
4 - BUILD APPLICATIONS
Create, build, and deploy applications to an OpenShift Enterprise instance.
5 - PERSISTENT STORAGE
Provision persistent storage and use it for the internal registry.
6 - BUILD APPLICATIONS WITH SOURCE-TO-IMAGE (S2I)
Create and build applications with S2I and templates.
7 - MANAGE THE SYSTEM
Use OpenShift Enterprise components to manage deployed applications.
8 - CUSTOMIZE OPENSHIFT ENTERPRISE
Customize resources and processes used by OpenShift Enterprise.
9 - COMPREHENSIVE REVIEW
Practice and demonstrate knowledge and skills learned in the course.
10 - NOTE: Course outline is subject to change with technology advances and as the nature of the underlying job evolves. For questions or confirmation on a specific objective or topic, please contact us.
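The pod, service and route steps in the course content summary are performed with the oc command line in the labs. The sketch below shows, under stated assumptions, equivalent objects created through the Kubernetes Python client against an OpenShift cluster, with the Route applied as an OpenShift-specific custom resource; the namespace, names and image are placeholders.

```python
# Illustrative sketch: pod, service, and OpenShift route for a simple web app.
# In the course labs these steps are performed with the `oc` CLI.
from kubernetes import client, config

config.load_kube_config()
core = client.CoreV1Api()
custom = client.CustomObjectsApi()
ns = "demo"  # placeholder project/namespace

pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name="hello", labels={"app": "hello"}),
    spec=client.V1PodSpec(containers=[
        client.V1Container(name="hello", image="registry.example.com/hello:latest",
                           ports=[client.V1ContainerPort(container_port=8080)])]))
core.create_namespaced_pod(namespace=ns, body=pod)

svc = client.V1Service(
    metadata=client.V1ObjectMeta(name="hello"),
    spec=client.V1ServiceSpec(selector={"app": "hello"},
                              ports=[client.V1ServicePort(port=8080, target_port=8080)]))
core.create_namespaced_service(namespace=ns, body=svc)

# Routes are OpenShift-specific objects (route.openshift.io/v1), so they are
# created as custom resources rather than through the core Kubernetes API.
route = {"apiVersion": "route.openshift.io/v1", "kind": "Route",
         "metadata": {"name": "hello"},
         "spec": {"to": {"kind": "Service", "name": "hello"},
                  "port": {"targetPort": 8080}}}
custom.create_namespaced_custom_object(group="route.openshift.io", version="v1",
                                        namespace=ns, plural="routes", body=route)
```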
Additional course details: Nexus Humans Red Hat OpenShift Administration II: Operating a Production Kubernetes Cluster (DO280) training program is a workshop that presents an invigorating mix of sessions, lessons, and masterclasses meticulously crafted to propel your learning expedition forward. This immersive bootcamp-style experience boasts interactive lectures, hands-on labs, and collaborative hackathons, all strategically designed to fortify fundamental concepts. Guided by seasoned coaches, each session offers priceless insights and practical skills crucial for honing your expertise. Whether you're stepping into the realm of professional skills or a seasoned professional, this comprehensive course ensures you're equipped with the knowledge and prowess necessary for success. While we feel this is the best course for the Red Hat OpenShift Administration II: Operating a Production Kubernetes Cluster (DO280) course and one of our Top 10 we encourage you to read the course outline to make sure it is the right content for you. Additionally, private sessions, closed classes or dedicated events are available both live online and at our training centres in Dublin and London, as well as at your offices anywhere in the UK, Ireland or across EMEA.
Duration
4 Days 24 CPD hours
This course is intended for
Data Analysts responsible for data quality using QualityStage, Data Quality Architects, and Data Cleansing Developers.
Overview
List the common data quality contaminants. Describe each of the following processes: Investigation, Standardization, Match, and Survivorship. Describe QualityStage architecture. Describe QualityStage clients and their functions. Import metadata. Build and run DataStage/QualityStage jobs and review results. Build Investigate jobs. Use Character Discrete, Concatenate, and Word Investigations to analyze data fields. Describe the Standardize stage. Identify Rule Sets. Build jobs using the Standardize stage. Interpret standardization results. Investigate unhandled data and patterns. Build a QualityStage job to identify matching records. Apply multiple Match passes to increase efficiency. Interpret and improve match results. Build a QualityStage Survive job that will consolidate matched records into a single master record. Build a single job to match data using a Two-Source match.
This course teaches how to build QualityStage parallel jobs that investigate, standardize, match, and consolidate data records. Students will gain experience by building an application that combines customer data from three source systems.
Data Quality Issues
Listing the common data quality contaminants. Describing data quality processes.
QualityStage Overview
Describing QualityStage architecture. Describing QualityStage clients and their functions.
Developing with QualityStage
Importing metadata. Building DataStage/QualityStage jobs. Running jobs. Reviewing results.
Investigate
Building Investigate jobs. Using Character Discrete, Concatenate, and Word Investigations to analyze data fields. Reviewing results.
Standardize
Describing the Standardize stage. Identifying Rule Sets. Building jobs using the Standardize stage. Interpreting standardize results. Investigating unhandled data and patterns.
Match
Building a QualityStage job to identify matching records. Applying multiple Match passes to increase efficiency. Interpreting and improving Match results.
Survive
Building a QualityStage survive job that will consolidate matched records into a single master record.
Two-Source Match
Building a QualityStage job to match data using a reference match.
Additional course details: Nexus Humans KM213 IBM InfoSphere QualityStage Essentials v11.5 training program is a workshop that presents an invigorating mix of sessions, lessons, and masterclasses meticulously crafted to propel your learning expedition forward. This immersive bootcamp-style experience boasts interactive lectures, hands-on labs, and collaborative hackathons, all strategically designed to fortify fundamental concepts. Guided by seasoned coaches, each session offers priceless insights and practical skills crucial for honing your expertise. Whether you're stepping into the realm of professional skills or a seasoned professional, this comprehensive course ensures you're equipped with the knowledge and prowess necessary for success. While we feel this is the best course for the KM213 IBM InfoSphere QualityStage Essentials v11.5 course and one of our Top 10 we encourage you to read the course outline to make sure it is the right content for you. Additionally, private sessions, closed classes or dedicated events are available both live online and at our training centres in Dublin and London, as well as at your offices anywhere in the UK, Ireland or across EMEA.
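The matching and survivorship concepts above can be illustrated outside of QualityStage itself. The sketch below is plain Python, not QualityStage job syntax: it blocks candidate records on postcode, scores pairs on name similarity, and consolidates matched records into a single master record. The field names, sample records and threshold are invented for illustration.

```python
# Conceptual illustration of match & survivorship (not QualityStage job syntax).
# Records from multiple source systems are blocked on postcode, scored on name
# similarity, and matched pairs are consolidated into one master record.
from difflib import SequenceMatcher
from itertools import combinations

records = [
    {"id": 1, "name": "J. Smith",   "postcode": "SW1A 1AA", "phone": None,            "source": "CRM"},
    {"id": 2, "name": "John Smith", "postcode": "SW1A 1AA", "phone": "020 7946 0000", "source": "Billing"},
    {"id": 3, "name": "Jane Doe",   "postcode": "EC1A 1BB", "phone": "020 7946 0001", "source": "CRM"},
]

def match_score(a, b):
    """Crude pairwise score: exact postcode block plus fuzzy name similarity."""
    same_block = a["postcode"] == b["postcode"]
    name_sim = SequenceMatcher(None, a["name"].lower(), b["name"].lower()).ratio()
    return (1.0 if same_block else 0.0) * name_sim

def survive(group):
    """Survivorship rule: keep the most complete (longest) value for each field."""
    master = {}
    for field in ("name", "postcode", "phone"):
        values = [r[field] for r in group if r[field]]
        master[field] = max(values, key=len) if values else None
    return master

matches = [(a, b) for a, b in combinations(records, 2) if match_score(a, b) > 0.6]
for a, b in matches:
    print("master record:", survive([a, b]))
```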
Duration
1 Day 6 CPD hours
This course is intended for
This course is intended for the following participants: application developers, Cloud Solutions Architects, DevOps Engineers, and IT managers; individuals using Google Cloud Platform to create new solutions or to integrate existing systems, application environments, and infrastructure with the Google Cloud Platform.
Overview
At the end of the course, students will be able to: understand container basics; containerize an existing application; understand Kubernetes concepts and principles; deploy applications to Kubernetes using the CLI; and set up a continuous delivery pipeline using Jenkins.
Learn to containerize workloads in Docker containers, deploy them to Kubernetes clusters provided by Google Kubernetes Engine, and scale those workloads to handle increased traffic. Students will also learn how to continuously deploy new code in a Kubernetes cluster to provide application updates.
Introduction to Containers and Docker
Acquaint yourself with containers, Docker, and the Google Container Registry. Create a container. Package a container using Docker. Store a container image in Google Container Registry. Launch a Docker container.
Kubernetes Basics
Deploy an application with microservices in a Kubernetes cluster. Provision a complete Kubernetes cluster using Kubernetes Engine. Deploy and manage Docker containers using kubectl. Break an application into microservices using Kubernetes Deployments and Services.
Deploying to Kubernetes
Create and manage Kubernetes deployments. Create a Kubernetes deployment. Trigger, pause, resume, and roll back updates. Understand and build canary deployments.
Continuous Deployment with Jenkins
Build a continuous delivery pipeline. Provision Jenkins in your Kubernetes cluster. Create a Jenkins pipeline. Implement a canary deployment using Jenkins.
Additional course details: Nexus Humans Getting Started with Google Kubernetes Engine training program is a workshop that presents an invigorating mix of sessions, lessons, and masterclasses meticulously crafted to propel your learning expedition forward. This immersive bootcamp-style experience boasts interactive lectures, hands-on labs, and collaborative hackathons, all strategically designed to fortify fundamental concepts. Guided by seasoned coaches, each session offers priceless insights and practical skills crucial for honing your expertise. Whether you're stepping into the realm of professional skills or a seasoned professional, this comprehensive course ensures you're equipped with the knowledge and prowess necessary for success. While we feel this is the best course for the Getting Started with Google Kubernetes Engine course and one of our Top 10 we encourage you to read the course outline to make sure it is the right content for you. Additionally, private sessions, closed classes or dedicated events are available both live online and at our training centres in Dublin and London, as well as at your offices anywhere in the UK, Ireland or across EMEA.
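The "package a container and store it in Google Container Registry" steps are done with the docker and gcloud CLIs in the labs. As a rough equivalent, the sketch below uses the Docker SDK for Python to build an image from a local Dockerfile and push it to a registry; the project ID, image name and authentication setup are assumptions.

```python
# Illustrative sketch: build an image from a local Dockerfile and push it to a
# Container Registry-style tag. Assumes the Docker daemon is running and you are
# already authenticated to the registry (e.g. via `gcloud auth configure-docker`).
import docker

client = docker.from_env()

image_tag = "gcr.io/my-project/hello-app:v1"   # placeholder project/image
image, build_logs = client.images.build(path=".", tag=image_tag)
for line in build_logs:
    print(line.get("stream", ""), end="")      # stream the build output

# Push the tagged image; push() streams the registry's progress messages.
for line in client.images.push("gcr.io/my-project/hello-app", tag="v1",
                               stream=True, decode=True):
    print(line)
```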
Duration
3 Days 18 CPD hours
This course is intended for
This course is intended for those who will provide container orchestration management in the AWS Cloud, including DevOps engineers and systems administrators.
Overview
In this course, you will learn to: review and examine containers, Kubernetes and Amazon EKS fundamentals and the impact of containers on workflows; build an Amazon EKS cluster by selecting the correct compute resources to support worker nodes; secure your environment with AWS Identity and Access Management (IAM) authentication by creating an Amazon EKS service role for your cluster; deploy an application on the cluster; publish container images to ECR and secure access via IAM policy; automate and deploy applications, and examine automation tools and pipelines; create a GitOps pipeline using WeaveFlux; collect monitoring data through metrics, logs, and tracing with AWS X-Ray, and identify metrics for performance tuning; review scenarios where bottlenecks require the best scaling approach using horizontal or vertical scaling; assess the tradeoffs between efficiency, resiliency, and cost, and the impact of tuning one over the other; describe and outline a holistic, iterative approach to optimizing your environment; design for cost, efficiency, and resiliency; configure the AWS networking services to support the cluster; describe how Amazon EKS and Amazon Virtual Private Cloud (VPC) work together to simplify inter-node communications; describe the function of the VPC Container Network Interface (CNI); review the benefits of a service mesh; and upgrade your Kubernetes, Amazon EKS, and third-party tools.
Amazon EKS makes it easy for you to run Kubernetes on AWS without needing to install, operate, and maintain your own Kubernetes control plane. In this course, you will learn container management and orchestration for Kubernetes using Amazon EKS. You will build an Amazon EKS cluster, configure the environment, deploy the cluster, and then add applications to your cluster. You will manage container images using Amazon Elastic Container Registry (ECR) and learn how to automate application deployment. You will deploy applications using CI/CD tools. You will learn how to monitor and scale your environment by using metrics, logging, tracing, and horizontal/vertical scaling. You will learn how to design and manage a large container environment by designing for efficiency, cost, and resiliency. You will configure AWS networking services to support the cluster and learn how to secure your Amazon EKS environment.
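Building the cluster itself is usually done with eksctl or the AWS console in this course; the sketch below shows a minimal equivalent with boto3, assuming an existing IAM service role, subnets and security group (the ARNs and IDs shown are placeholders).

```python
# Illustrative sketch: create an Amazon EKS control plane with boto3 and wait
# until it is active. Role ARN, subnet and security group IDs are placeholders;
# worker nodes (managed node groups or self-managed) are added separately.
import boto3

eks = boto3.client("eks", region_name="eu-west-1")

eks.create_cluster(
    name="demo-cluster",
    version="1.29",
    roleArn="arn:aws:iam::123456789012:role/eksClusterRole",
    resourcesVpcConfig={
        "subnetIds": ["subnet-aaaa1111", "subnet-bbbb2222"],
        "securityGroupIds": ["sg-0123456789abcdef0"],
    },
)

# Block until the control plane reports ACTIVE, then print its API endpoint.
eks.get_waiter("cluster_active").wait(name="demo-cluster")
cluster = eks.describe_cluster(name="demo-cluster")["cluster"]
print(cluster["status"], cluster["endpoint"])
```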
Module 0: Course Introduction
Course preparation activities and agenda.
Module 1: Container Fundamentals
Best practices for building applications. Container fundamentals. Components of a container.
Module 2: Kubernetes Fundamentals
Container orchestration. Kubernetes objects. Kubernetes internals. Preparing for Lab 1: Deploying Kubernetes Pods.
Module 3: Amazon EKS Fundamentals
Introduction to Amazon EKS. Amazon EKS control plane. Amazon EKS data plane. Fundamentals of Amazon EKS security. Amazon EKS API.
Module 4: Building an Amazon EKS Cluster
Configuring your environment. Creating an Amazon EKS cluster. Demo: Configuring and deploying clusters in the AWS Management Console. Working with eksctl. Preparing for Lab 2: Building an Amazon EKS Cluster.
Module 5: Deploying Applications to Your Amazon EKS Cluster
Configuring Amazon Elastic Container Registry (Amazon ECR). Demo: Configuring Amazon ECR. Deploying applications with Helm. Demo: Deploying applications with Helm. Continuous deployment in Amazon EKS. GitOps and Amazon EKS. Preparing for Lab 3: Deploying App.
Module 6: Configuring Observability in Amazon EKS
Configuring observability in an Amazon EKS cluster. Collecting metrics. Using metrics for automatic scaling. Managing logs. Application tracing in Amazon EKS. Gaining and applying insight from observability. Preparing for Lab 4: Monitoring Amazon EKS.
Module 7: Balancing Efficiency, Resilience, and Cost Optimization in Amazon EKS
The high level overview. Designing for resilience. Designing for cost optimization. Designing for efficiency.
Module 8: Managing Networking in Amazon EKS
Review: Networking in AWS. Communicating in Amazon EKS. Managing your IP space. Deploying a service mesh. Preparing for Lab 5: Exploring Amazon EKS Communication.
Module 9: Managing Authentication and Authorization in Amazon EKS
Understanding the AWS shared responsibility model. Authentication and authorization. Managing IAM and RBAC. Demo: Customizing RBAC roles. Managing pod permissions using RBAC service accounts.
Module 10: Implementing Secure Workflows
Securing cluster endpoint access. Improving the security of your workflows. Improving host and network security. Managing secrets. Preparing for Lab 6: Securing Amazon EKS.
Module 11: Managing Upgrades in Amazon EKS
Planning for an upgrade. Upgrading your Kubernetes version. Amazon EKS platform versions.
Additional course details: Nexus Humans Running Containers on Amazon Elastic Kubernetes Service (Amazon EKS) training program is a workshop that presents an invigorating mix of sessions, lessons, and masterclasses meticulously crafted to propel your learning expedition forward. This immersive bootcamp-style experience boasts interactive lectures, hands-on labs, and collaborative hackathons, all strategically designed to fortify fundamental concepts. Guided by seasoned coaches, each session offers priceless insights and practical skills crucial for honing your expertise. Whether you're stepping into the realm of professional skills or a seasoned professional, this comprehensive course ensures you're equipped with the knowledge and prowess necessary for success. While we feel this is the best course for the Running Containers on Amazon Elastic Kubernetes Service (Amazon EKS) course and one of our Top 10 we encourage you to read the course outline to make sure it is the right content for you. Additionally, private sessions, closed classes or dedicated events are available both live online and at our training centres in Dublin and London, as well as at your offices anywhere in the UK, Ireland or across EMEA.
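Module 6's use of metrics for automatic scaling is typically configured with kubectl and YAML. As an illustrative alternative, the sketch below creates a CPU-based HorizontalPodAutoscaler with the Kubernetes Python client; the Deployment name, namespace and thresholds are placeholders, and a metrics server is assumed to be running in the cluster.

```python
# Illustrative sketch: scale a Deployment on CPU utilization with a
# HorizontalPodAutoscaler, created through the Kubernetes Python client.
from kubernetes import client, config

config.load_kube_config()  # kubeconfig produced by `aws eks update-kubeconfig`

hpa = client.V1HorizontalPodAutoscaler(
    metadata=client.V1ObjectMeta(name="web-hpa"),
    spec=client.V1HorizontalPodAutoscalerSpec(
        scale_target_ref=client.V1CrossVersionObjectReference(
            api_version="apps/v1", kind="Deployment", name="web"),
        min_replicas=2,
        max_replicas=10,
        target_cpu_utilization_percentage=60,  # requires the metrics server
    ),
)

client.AutoscalingV1Api().create_namespaced_horizontal_pod_autoscaler(
    namespace="default", body=hpa)
```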
Want to learn everything about Python, from installing to coding, with a liberal dose of fun sprinkled into the learning? Then this Python Programming Tutorials For Beginners course is what you need.
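For a flavour of where the tutorials begin, here is the kind of first program a beginner writes: variables, a function, a loop and formatted output. It is a generic illustration, not an excerpt from the tutorial itself.

```python
# A first taste of Python: a function, a list, a loop, and f-strings.
def greet(name):
    return f"Hello, {name}!"

learners = ["Ada", "Grace", "Linus"]
for learner in learners:
    print(greet(learner))

# Basic arithmetic with variables.
hours_per_day = 1.5
days = 30
print(f"Practising {hours_per_day} hours a day for {days} days = {hours_per_day * days} hours")
```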
Duration
3 Days 18 CPD hours
This course is intended for
Anyone who wants to qualify as a professional tester. The certification also offers a good qualification for builders, designers, programmers and project managers.
Overview
This three-day training provides a general introduction to information systems testing. The objective of the training is to prepare students for the ISTQB Foundation exam. Important topics discussed include the importance of testing, testing in relation to system development, and the fundamentals of a structured testing process. The different phases in a test project are explained, after which several test techniques (both black box and white box) are discussed. This foundation training therefore covers the testing basics for both test managers and testers. In addition to theory, the training also includes a number of mock exams, so that the topics covered are placed even better in the exam context.
Course Outline
Test principles, life cycle testing, static techniques, test specification techniques, black-box techniques, white-box techniques, and experience-based techniques.
Additional course details: Nexus Humans Certified Tester Foundation Level (CTFL) training program is a workshop that presents an invigorating mix of sessions, lessons, and masterclasses meticulously crafted to propel your learning expedition forward. This immersive bootcamp-style experience boasts interactive lectures, hands-on labs, and collaborative hackathons, all strategically designed to fortify fundamental concepts. Guided by seasoned coaches, each session offers priceless insights and practical skills crucial for honing your expertise. Whether you're stepping into the realm of professional skills or a seasoned professional, this comprehensive course ensures you're equipped with the knowledge and prowess necessary for success. While we feel this is the best course for the Certified Tester Foundation Level (CTFL) course and one of our Top 10 we encourage you to read the course outline to make sure it is the right content for you. Additionally, private sessions, closed classes or dedicated events are available both live online and at our training centres in Dublin and London, as well as at your offices anywhere in the UK, Ireland or across EMEA.
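One of the black-box test specification techniques in the syllabus, boundary value analysis, can be sketched in a few lines of Python. The discount rule and its boundaries below are invented purely for illustration; the point is that test cases sit at and around each boundary of the equivalence partitions.

```python
# Illustrative boundary value analysis for an invented discount rule:
#   order total < 100    -> 0% discount
#   100 <= total < 500   -> 5% discount
#   total >= 500         -> 10% discount
def discount_rate(order_total):
    if order_total < 100:
        return 0.00
    if order_total < 500:
        return 0.05
    return 0.10

# Test cases are chosen at and around each boundary (99/100/101, 499/500/501),
# plus one representative value from each equivalence partition.
cases = {
    50: 0.00, 99: 0.00, 100: 0.05, 101: 0.05,
    250: 0.05, 499: 0.05, 500: 0.10, 501: 0.10, 1000: 0.10,
}
for total, expected in cases.items():
    assert discount_rate(total) == expected, f"failed at boundary value {total}"
print("all boundary value cases passed")
```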
Duration
3 Days 18 CPD hours
This course is intended for
Professionals who want unparalleled creative freedom, productivity, and precision for producing superb 3D modeling.
Overview
Fundamental concepts and workflows for creating 3D models using AutoCAD. Represent a design by creating solid primitives, solid or surface models from cross-sectional geometry, or composite models from multiple solid models. Complete a solid model design by adding the necessary features to detail, duplicate, and position 3D models. Convert 2D objects to 3D objects. Document a 3D design by creating 2D drawings for production and visualization. Communicate design ideas using visual styles, lights, model walk-through tools, and renderings.
In this course, you will learn the fundamental concepts and workflows for creating 3D models using AutoCAD.
Introduction
Advanced Text Objects
Annotation Scale Overview. Using Fields. Controlling the Draw Order.
Working with Tables
Working with Linked Tables. Creating Table Styles.
Projects - Advanced Annotation
Dynamic Blocks
Working with Dynamic Blocks. Creating Dynamic Block Definitions. Dynamic Block Authoring Tools. Additional Visibility Options.
Attributes
Inserting Blocks with Attributes. Editing Attribute Values. Defining Attributes. Redefining Blocks with Attributes. Extracting Attributes.
Projects - Advanced Blocks & Attributes
Output and Publishing
Output For Electronic Review. Autodesk Design Review. Publishing Drawing Sets.
Other Tools for Collaboration
eTransmit. Hyperlinks.
Cloud Collaboration and 2D Automation
Connecting to the Cloud. Sharing Drawings in the Cloud. Attach Navisworks Files. Attach BIM 360 Glue Models.
Introduction to Sheet Sets
Overview of Sheet Sets. Creating Sheet Sets. Creating Sheets in Sheet Sets. Adding Views to Sheets. Importing Layouts to Sheet Sets.
Publishing & Customizing Sheet Sets
Transmitting and Archiving Sheet Sets. Publishing Sheet Sets. Customizing Sheet Sets. Custom Blocks for Sheet Sets.
Projects - Sheet Sets
Managing Layers
Working in the Layer Properties Manager. Creating Layer Filters. Setting Layer States.
CAD Standards
CAD Standards Concepts. Configuring Standards. Checking Standards. Layer Translator.
System Setup
Options Dialog Box. System Variables. Dynamic Input Settings. Drawing Utilities. Managing Plotters. Plot Styles.
Introduction to Customization
Why Customize? Creating a Custom Workspace.
Customizing the User Interface
Using the Customize User Interface (CUI) Dialog Box. Customizing the Ribbon. Customizing the Quick Access Toolbar. Customizing Menus. Keyboard Shortcuts.
Macros & Custom Routines
Custom Commands & Macros. Running Scripts. Action Recorder. Editing Action Macros. Loading Custom Routines.
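The Macros & Custom Routines topic can be illustrated with a custom routine driven from Python. The sketch below assumes the third-party pyautocad package, which wraps AutoCAD's COM automation interface, and a running AutoCAD session on Windows; in the course itself, custom routines are script files, action macros and loaded custom commands rather than Python, so treat this purely as an illustration of automating drawing tasks.

```python
# Illustrative custom routine driving AutoCAD through its COM automation API
# via the third-party pyautocad package (Windows only; AutoCAD must be running).
# Geometry, text and sizes are placeholders.
from pyautocad import Autocad, APoint

acad = Autocad(create_if_not_exists=True)
acad.prompt("Running custom Python routine...\n")

# Draw a simple A3-sized frame and a label in model space.
p1, p2 = APoint(0, 0), APoint(420, 0)
p3, p4 = APoint(420, 297), APoint(0, 297)
for start, end in [(p1, p2), (p2, p3), (p3, p4), (p4, p1)]:
    acad.model.AddLine(start, end)

acad.model.AddText("DRAWING TITLE", APoint(10, 10), 8)  # text, insertion point, height

# Report what the routine created, similar to checking a script's result.
for obj in acad.iter_objects(["Line", "Text"]):
    print(obj.ObjectName)
```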