Duration: 3 days (18 CPD hours)

This course is intended for application developers who want to build cloud-native applications or redesign existing applications that will run on Google Cloud Platform.

Overview
This course teaches participants the following skills:
  Use best practices for application development.
  Choose the appropriate data storage option for application data.
  Implement federated identity management.
  Develop loosely coupled application components or microservices.
  Integrate application components and data sources.
  Debug, trace, and monitor applications.
  Perform repeatable deployments with containers and deployment services.
  Choose the appropriate application runtime environment; use Google Kubernetes Engine as a runtime environment and later switch to a no-ops solution with the Google App Engine flexible environment.

Learn how to design, develop, and deploy applications that seamlessly integrate components from the Google Cloud ecosystem. This course uses lectures, demos, and hands-on labs to show you how to use Google Cloud services and pre-trained machine learning APIs to build secure, scalable, and intelligent cloud-native applications.

Best Practices for Application Development
  Code and environment management.
  Design and development of secure, scalable, reliable, loosely coupled application components and microservices.
  Continuous integration and delivery.
  Re-architecting applications for the cloud.

Google Cloud Client Libraries, Google Cloud SDK, and Google Firebase SDK
  How to set up and use Google Cloud Client Libraries, Google Cloud SDK, and Google Firebase SDK.
  Lab: Set up Google Client Libraries, Cloud SDK, and Firebase SDK on a Linux instance and set up application credentials.

Overview of Data Storage Options
  Overview of options to store application data.
  Use cases for Google Cloud Storage, Cloud Firestore, Cloud Bigtable, Google Cloud SQL, and Cloud Spanner.

Best Practices for Using Cloud Firestore
  Best practices for using Cloud Firestore in Datastore mode: queries, built-in and composite indexes, inserting and deleting data (batch operations), transactions, and error handling.
  Bulk-loading data into Cloud Firestore by using Google Cloud Dataflow.
  Lab: Store application data in Cloud Datastore.

Performing Operations on Cloud Storage
  Operations that can be performed on buckets and objects.
  Consistency model.
  Error handling.

Best Practices for Using Cloud Storage
  Naming buckets for static websites and other uses.
  Naming objects (from an access distribution perspective).
  Performance considerations.
  Setting up and debugging a CORS configuration on a bucket.
  Lab: Store files in Cloud Storage.

Handling Authentication and Authorization
  Cloud Identity and Access Management (IAM) roles and service accounts.
  User authentication by using Firebase Authentication.
  User authentication and authorization by using Cloud Identity-Aware Proxy.
  Lab: Authenticate users by using Firebase Authentication.

Using Pub/Sub to Integrate Components of Your Application
  Topics, publishers, and subscribers.
  Pull and push subscriptions.
  Use cases for Cloud Pub/Sub.
  Lab: Develop a backend service to process messages in a message queue.

Adding Intelligence to Your Application
  Overview of pre-trained machine learning APIs such as the Cloud Vision API and the Cloud Natural Language API.

Using Cloud Functions for Event-Driven Processing
  Key concepts such as triggers, background functions, and HTTP functions.
  Use cases.
  Developing and deploying functions.
  Logging, error reporting, and monitoring.
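To give a flavour of the Pub/Sub lab, here is a minimal Python sketch (not part of the official course materials) that publishes a message and then pulls it back with the google-cloud-pubsub client library. It assumes Application Default Credentials are configured; the project, topic, and subscription IDs are placeholders.

```python
# Minimal sketch of publishing to and pulling from Cloud Pub/Sub with the
# google-cloud-pubsub client library. Project, topic, and subscription IDs
# are placeholders; the lab's actual resource names will differ.
from google.cloud import pubsub_v1

PROJECT_ID = "my-project"          # placeholder
TOPIC_ID = "orders"                # placeholder
SUBSCRIPTION_ID = "orders-worker"  # placeholder

def publish_message(payload: bytes) -> None:
    """Publish a single message and block until Pub/Sub confirms it."""
    publisher = pubsub_v1.PublisherClient()
    topic_path = publisher.topic_path(PROJECT_ID, TOPIC_ID)
    future = publisher.publish(topic_path, data=payload)
    print(f"Published message ID: {future.result()}")

def pull_messages(max_messages: int = 10) -> None:
    """Pull a batch of messages from a pull subscription and acknowledge them."""
    subscriber = pubsub_v1.SubscriberClient()
    subscription_path = subscriber.subscription_path(PROJECT_ID, SUBSCRIPTION_ID)
    response = subscriber.pull(
        request={"subscription": subscription_path, "max_messages": max_messages}
    )
    ack_ids = []
    for received in response.received_messages:
        print(f"Processing: {received.message.data!r}")
        ack_ids.append(received.ack_id)
    if ack_ids:
        subscriber.acknowledge(
            request={"subscription": subscription_path, "ack_ids": ack_ids}
        )

if __name__ == "__main__":
    publish_message(b"order-1234 created")
    pull_messages()
```

A pull subscription like this suits a backend worker that controls its own pace; the push model covered in the same section instead delivers messages to an HTTPS endpoint you expose.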
Managing APIs with Cloud Endpoints
  OpenAPI deployment configuration.
  Lab: Deploy an API for your application.

Deploying Applications
  Creating and storing container images.
  Repeatable deployments with deployment configuration and templates.
  Lab: Use Deployment Manager to deploy a web application into Google App Engine flexible environment test and production environments.

Execution Environments for Your Application
  Considerations for choosing an execution environment for your application or service: Google Compute Engine (GCE), Google Kubernetes Engine (GKE), App Engine flexible environment, Cloud Functions, Cloud Dataflow, and Cloud Run.
  Lab: Deploying your application on App Engine flexible environment.

Debugging, Monitoring, and Tuning Performance
  Application Performance Management tools.
  Stackdriver Debugger.
  Stackdriver Error Reporting.
  Lab: Debugging an application error by using Stackdriver Debugger and Error Reporting.
  Stackdriver Logging.
  Key concepts related to Stackdriver Trace and Stackdriver Monitoring.
  Lab: Use Stackdriver Monitoring and Stackdriver Trace to trace a request across services, observe, and optimize performance.
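As a rough illustration of the logging topics above, here is a minimal Python sketch that writes a structured application log entry with the google-cloud-logging client library (Stackdriver Logging is now branded Cloud Logging). The log name and fields are placeholders, and Application Default Credentials are assumed.

```python
# Minimal sketch of writing application log entries that surface in
# Stackdriver Logging (now branded Cloud Logging), using the
# google-cloud-logging client library. The log name and fields are
# placeholders; Application Default Credentials are assumed.
from google.cloud import logging as cloud_logging

def report_checkout_failure(order_id: str, error: Exception) -> None:
    client = cloud_logging.Client()
    logger = client.logger("checkout-service")  # placeholder log name

    # Structured entries make it easy to filter, chart, and alert on fields later.
    logger.log_struct(
        {
            "message": "Checkout failed",
            "order_id": order_id,
            "error_type": type(error).__name__,
            "error_detail": str(error),
        },
        severity="ERROR",
    )

if __name__ == "__main__":
    try:
        raise ValueError("card declined")
    except ValueError as exc:
        report_checkout_failure("order-1234", exc)
```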
Duration: 5 days (30 CPD hours)

This course is intended for anyone responsible for configuring, maintaining, and troubleshooting Symantec Data Loss Prevention. It is also intended for technical users responsible for creating and maintaining Symantec Data Loss Prevention policies and the incident response structure.

Overview
At the completion of the course, you will be able to configure and administer the Enforce server, detection servers, and DLP Agents, as well as reporting, workflow, incident response management, policy management and detection, response management, user and role administration, directory integration, and filtering.

This course is designed to provide you with the fundamental knowledge to configure and administer the Symantec Data Loss Prevention Enforce platform.

Introduction to Symantec Data Loss Prevention
  Symantec Data Loss Prevention overview
  Symantec Data Loss Prevention architecture

Navigation and Reporting
  Navigating the user interface
  Reporting and analysis
  Report navigation, preferences, and features
  Report filters
  Report commands
  Incident snapshot
  Incident Data Access
  Hands-on labs: Become familiar with navigation and tools in the user interface. Create, filter, summarize, customize, and distribute reports. Create users, roles, and attributes.

Incident Remediation and Workflow
  Incident remediation and workflow
  Managing users and attributes
  Custom attribute lookup
  User Risk Summary
  Hands-on labs: Remediate incidents and configure a user's reporting preferences.

Policy Management
  Policy overview
  Creating policy groups
  Using policy templates
  Building policies
  Policy development best practices
  Hands-on labs: Use policy templates and the policy builder to configure and apply new policies.

Response Rule Management
  Response rule overview
  Configuring Automated Response rules
  Configuring Smart Response rules
  Response rule best practices
  Hands-on labs: Create and use Automated and Smart Response rules.

Described Content Matching
  DCM detection methods
  Hands-on labs: Create policies that include DCM and then use those policies to capture incidents.

Exact Data Matching and Directory Group Matching
  Exact data matching (EDM)
  Advanced EDM
  Directory group matching (DGM)
  Hands-on labs: Create policies that include EDM and DGM, and then use those policies to capture incidents.

Indexed Document Matching
  Indexed document matching (IDM)
  Hands-on labs: Create policies that include IDM rules and then use those policies to capture incidents.

Vector Machine Learning
  Vector Machine Learning (VML)
  Hands-on labs: Create a VML profile, import document sets, and create a VML policy.

Network Monitor
  Review of Network Monitor
  Protocols
  Traffic filtering
  Network Monitor best practices
  Hands-on labs: Apply IP and L7 filters.

Network Prevent
  Network Prevent overview
  Introduction to Network Prevent (Email)
  Introduction to Network Prevent (Web)
  Hands-on labs: Configure Network Prevent (Email) response rules, incorporate them into policies, and use the policies to capture incidents.

Mobile Email Monitor and Mobile Prevent
  Introduction to Mobile Email Monitor
  Mobile Prevent overview
  Configuration
  VPN configuration
  Policy and Response Rule Creation
  Reporting and Remediation
  Troubleshooting

Network Discover and Network Protect
  Network Discover and Network Protect overview
  Configuring Discover targets
  Configuring Box cloud targets
  Protecting data
  Auto-discovery of servers and shares
  Running and managing scans
  Reports and remediation
  Network Discover and Network Protect best practices
  Hands-on labs: Create and run a filesystem target using various response rules, including quarantining.

Endpoint Prevent
  Endpoint Prevent overview
  Detection capabilities at the Endpoint
  Configuring Endpoint Prevent
  Creating Endpoint response rules
  Viewing Endpoint Prevent incidents
  Endpoint Prevent best practices
  Managing DLP Agents
  Hands-on labs: Create Agent Groups and Endpoint response rules; monitor and block Endpoint actions; view Endpoint incidents; and use the Enforce console to manage DLP Agents.

Endpoint Discover
  Endpoint Discover overview
  Creating and running Endpoint Discover targets
  Using Endpoint Discover reports and reporting features
  Hands-on labs: Create Endpoint Discover targets, run Endpoint Discover targets, and view Endpoint Discover incidents.

Enterprise Enablement
  Preparing for risk reduction
  Risk reduction
  DLP Maturity model

System Administration
  Server administration
  Language support
  Incident Delete
  Credential management
  Troubleshooting
  Diagnostic tools
  Troubleshooting scenario
  Getting support
  Hands-on labs: Interpret event reports and traffic reports, configure alerts, and use the Log Collection and Configuration tool.

Additional course details: The Nexus Humans Symantec Data Loss Prevention 14.0 - Administration training program is a workshop combining interactive lectures, hands-on labs, and collaborative hackathons, guided by seasoned coaches, and is designed to suit both newcomers and experienced professionals. While we feel this is the best course for Symantec Data Loss Prevention 14.0 - Administration and one of our Top 10, we encourage you to read the course outline to make sure it is the right content for you. Private sessions, closed classes, and dedicated events are available both live online and at our training centres in Dublin and London, as well as at your offices anywhere in the UK, Ireland, or across EMEA.
Duration: 4 days (24 CPD hours)

This course is intended for candidates who are familiar with Dynamics 365 Customer Insights and have firsthand experience with one or more additional Dynamics 365 apps, Power Query, Microsoft Dataverse, Common Data Model, and Microsoft Power Platform. They should also have working knowledge of practices related to privacy, compliance, consent, security, responsible AI, and data retention policy.

Overview
After completing this course, you will be able to:
  Clean, transform, and ingest data into Dynamics 365 Customer Insights
  Create a unified customer profile
  Work with Dynamics 365 Audience insights
  Enrich data and predictions
  Set up and manage external connections
  Administer and monitor Customer Insights

Customer Data Platform specialists implement solutions that provide insight into customer profiles and that track engagement activities to help improve customer experiences and increase customer retention. In this course, students will learn about the Dynamics 365 Customer Insights solution, including how to unify customer data with prebuilt connectors, predict customer intent with rich segmentation, and maintain control of customer data. This specialty course starts with creating a unified profile and then moves on to working with customer data.

Module 1: Get started with Dynamics 365 Customer Insights
  Introduction to the customer data platform
  Administer Dynamics 365 Customer Insights
  Explore user permissions in Dynamics 365 Customer Insights

Module 2: Ingest data into Dynamics 365 Customer Insights
  Import and transform data
  Connect to data sources
  Work with data

Module 3: Create a unified customer profile in Dynamics 365 Customer Insights
  Map data
  Match data
  Merge data
  Find customers

Module 4: Work with Dynamics 365 Customer Insights
  Explore Audience insights
  Define relationships and activities
  Work with measures
  Work with segments

Module 5: Enrich data and predictions with Audience insights
  Enrich data
  Use predictions
  Use machine learning models

Module 6: Manage external connections with Customer Data Platform
  Export Customer Insights data
  Use Customer Insights with Microsoft Power Platform
  Display Customer Insights data in Dynamics 365 apps
  More ways to extend Customer Insights
ITIL® 4 Specialist: Create, Deliver and Support: Virtual In-House Training

The ITIL® 4 Specialist: Create, Deliver, and Support module is part of the Managing Professional stream for ITIL® 4. Candidates need to pass the related certification exam when working towards the Managing Professional (MP) designation. This course is based on the ITIL® 4 Specialist: Create, Deliver, and Support exam specifications from AXELOS. With the help of ITIL® 4 concepts and terminology, exercises, and examples included in the course, candidates acquire the relevant knowledge required to pass the certification exam.

What You Will Learn
The learning objectives of the course are based on the following learning outcomes of the ITIL® 4 Specialist: Create, Deliver, and Support exam specification:
  Understand how to plan and build a service value stream to create, deliver, and support services
  Know how relevant ITIL® practices contribute to the creation, delivery, and support across the SVS and value streams
  Know how to create, deliver, and support services

Organization and Culture
  Organizational Structures
  Team Culture
  Continuous Improvement
  Collaborative Culture
  Customer-Oriented Mindset
  Positive Communication
  Effective Teams
  Capabilities, Roles, and Competencies
  Workforce Planning
  Employee Satisfaction Management
  Results-Based Measuring and Reporting

Information Technology to Create, Deliver, and Support
  Service Integration and Data Sharing
  Reporting and Advanced Analytics
  Collaboration and Workflow
  Robotic Process Automation
  Artificial Intelligence and Machine Learning
  CI/CD
  Information Model

Value Stream
  Anatomy of a Value Stream
  Designing a Value Stream
  Value Stream Mapping

Value Stream to Create, Deliver, and Support Services
  Value Stream for Creation of a New Service
  Value Stream for User Support
  Value Stream Model for Restoration of a Live Service

Prioritize and Manage Work
  Managing Queues and Backlogs
  Shift-Left Approach
  Prioritizing Work

Commercial and Sourcing Considerations
  Build or Buy
  Sourcing Models
  Service Integration and Management
Duration: 3 days (18 CPD hours)

This course is designed for existing Python programmers who have at least one year of Python experience and who want to expand their programming proficiency in Python 3.

Overview
In this course, you will expand your Python proficiencies. You will:
  Select an object-oriented programming approach for Python applications.
  Create object-oriented Python applications.
  Create a desktop application.
  Create data-driven applications.
  Create and secure web service-connected applications.
  Program Python for data science.
  Implement unit testing and exception handling.
  Package an application for distribution.

Python continues to be a popular programming language, perhaps owing to its easy learning curve, small code footprint, and versatility for business, web, and scientific uses. Python is useful for developing custom software tools, applications, web services, and cloud applications. In this course, you'll build upon your basic Python skills, learning more advanced topics such as object-oriented programming patterns, development of graphical user interfaces, data management, creating web service-connected apps, performing data science tasks, unit testing, and creating and installing packages and executable applications.

Lesson 1: Selecting an Object-Oriented Programming Approach for Python Applications
  Topic A: Implement Object-Oriented Design
  Topic B: Leverage the Benefits of Object-Oriented Programming

Lesson 2: Creating Object-Oriented Python Applications
  Topic A: Create a Class
  Topic B: Use Built-in Methods
  Topic C: Implement the Factory Design Pattern

Lesson 3: Creating a Desktop Application
  Topic A: Design a Graphical User Interface (GUI)
  Topic B: Create Interactive Applications

Lesson 4: Creating Data-Driven Applications
  Topic A: Connect to Data
  Topic B: Store, Update, and Delete Data in a Database

Lesson 5: Creating and Securing a Web Service-Connected App
  Topic A: Select a Network Application Protocol
  Topic B: Create a RESTful Web Service
  Topic C: Create a Web Service Client
  Topic D: Secure Connected Applications

Lesson 6: Programming Python for Data Science
  Topic A: Clean Data with Python
  Topic B: Visualize Data with Python
  Topic C: Perform Linear Regression with Machine Learning

Lesson 7: Implementing Unit Testing and Exception Handling
  Topic A: Handle Exceptions
  Topic B: Write a Unit Test
  Topic C: Execute a Unit Test

Lesson 8: Packaging an Application for Distribution
  Topic A: Create and Install a Package
  Topic B: Generate Alternative Distribution Files
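To illustrate the kind of material covered in Lesson 2, Topic C, the following is a minimal sketch of the factory design pattern in Python 3; the exporter classes and format names are illustrative only and not drawn from the course materials.

```python
# Minimal illustration of the factory design pattern (Lesson 2, Topic C).
# The exporter classes and format names are illustrative placeholders.
import json
from abc import ABC, abstractmethod

class Exporter(ABC):
    """Common interface that every concrete exporter must implement."""

    @abstractmethod
    def export(self, data: dict) -> str:
        ...

class JsonExporter(Exporter):
    def export(self, data: dict) -> str:
        return json.dumps(data)

class CsvExporter(Exporter):
    def export(self, data: dict) -> str:
        header = ",".join(data.keys())
        row = ",".join(str(value) for value in data.values())
        return f"{header}\n{row}"

def exporter_factory(fmt: str) -> Exporter:
    """Return the exporter that matches the requested format string."""
    exporters = {"json": JsonExporter, "csv": CsvExporter}
    try:
        return exporters[fmt.lower()]()
    except KeyError:
        raise ValueError(f"Unsupported export format: {fmt!r}")

if __name__ == "__main__":
    record = {"name": "Ada", "score": 95}
    for fmt in ("json", "csv"):
        print(exporter_factory(fmt).export(record))
```

The factory centralizes the mapping from a format string to a concrete class, so calling code never needs to import or name the concrete exporters directly.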
Duration: 3 days (18 CPD hours)

This course is intended for anyone starting to write SAS programs.

Overview
  Use SAS Studio and SAS Enterprise Guide to write and submit SAS programs.
  Access SAS, Microsoft Excel, and text data.
  Explore and validate data.
  Prepare data by subsetting rows and computing new columns.
  Analyze and report on data.
  Export data and results to Excel, PDF, and other formats.
  Use SQL in SAS to query and join tables.

This course is for users who want to learn how to write SAS programs to access, explore, prepare, and analyze data. It is the entry point to learning SAS programming for data science, machine learning, and artificial intelligence.

Essentials
  The SAS programming process.
  Using SAS programming tools.
  Understanding SAS syntax.

Accessing Data
  Understanding SAS data.
  Accessing data through libraries.
  Importing data into SAS.

Exploring and Validating Data
  Exploring data.
  Filtering rows.
  Formatting columns.
  Sorting data and removing duplicates.

Preparing Data
  Reading and filtering data.
  Computing new columns.
  Conditional processing.

Analyzing and Reporting on Data
  Enhancing reports with titles, footnotes, and labels.
  Creating frequency reports.
  Creating summary statistics reports.

Exporting Results
  Exporting data.
  Exporting reports.

Using SQL in SAS
  Using Structured Query Language (SQL) in SAS.
  Joining tables using SQL in SAS.

Additional course details: The Nexus Humans SAS Programming 1 - Essentials training program is a workshop combining interactive lectures, hands-on labs, and collaborative hackathons, guided by seasoned coaches, and is designed to suit both newcomers and experienced professionals. While we feel this is the best course for SAS Programming 1 - Essentials and one of our Top 10, we encourage you to read the course outline to make sure it is the right content for you. Private sessions, closed classes, and dedicated events are available both live online and at our training centres in Dublin and London, as well as at your offices anywhere in the UK, Ireland, or across EMEA.
Duration: 1 day (6 CPD hours)

The audience for this course includes software developers and data scientists who need to use large language models for generative AI. Some programming experience is recommended, but the course will be valuable to anyone seeking to understand how the Azure OpenAI service can be used to implement generative AI solutions.

Note: Generative AI is a fast-evolving field of artificial intelligence, and the Azure OpenAI service is subject to frequent changes. The course materials are maintained to reflect the latest version of the service at the time of writing.

Azure OpenAI Service provides access to OpenAI's powerful large language models, such as GPT, the model family behind the popular ChatGPT service. These models enable various natural language processing (NLP) solutions to understand, converse, and generate content. Users can access the service through REST APIs, SDKs, and Azure OpenAI Studio. In this course, you'll learn how to provision the Azure OpenAI service, deploy models, and use them in generative AI applications.

Prerequisites
  Familiarity with Azure and the Azure portal.
  Experience programming with C# or Python.

1 - Get started with Azure OpenAI Service
  Access Azure OpenAI Service
  Use Azure OpenAI Studio
  Explore types of generative AI models
  Deploy generative AI models
  Use prompts to get completions from models
  Test models in Azure OpenAI Studio's playgrounds

2 - Build natural language solutions with Azure OpenAI Service
  Integrate Azure OpenAI into your app
  Use the Azure OpenAI REST API
  Use the Azure OpenAI SDK

3 - Apply prompt engineering with Azure OpenAI Service
  Understand prompt engineering
  Write more effective prompts
  Provide context to improve accuracy

4 - Generate code with Azure OpenAI Service
  Construct code from natural language
  Complete code and assist the development process
  Fix bugs and improve your code

5 - Generate images with Azure OpenAI Service
  What is DALL-E?
  Explore DALL-E in Azure OpenAI Studio
  Use the Azure OpenAI REST API to consume DALL-E models

6 - Use your own data with Azure OpenAI Service
  Understand how to use your own data
  Add your own data source
  Chat with your model using your own data

Additional course details: The Nexus Humans AI-050T00: Develop Generative AI Solutions with Azure OpenAI Service training program is a workshop combining interactive lectures, hands-on labs, and collaborative hackathons, guided by seasoned coaches, and is designed to suit both newcomers and experienced professionals. While we feel this is the best course for AI-050T00: Develop Generative AI Solutions with Azure OpenAI Service and one of our Top 10, we encourage you to read the course outline to make sure it is the right content for you. Private sessions, closed classes, and dedicated events are available both live online and at our training centres in Dublin and London, as well as at your offices anywhere in the UK, Ireland, or across EMEA.
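Before the SDK topics in section 2, it may help to see what an SDK call looks like. The following is a minimal Python sketch of a chat completion request through the Azure OpenAI SDK (the openai package, v1.x style); the endpoint, API version, and deployment name are placeholders, and since the SDK surface changes frequently, treat this as an illustration rather than reference code.

```python
# Minimal sketch of a chat completion call via the Azure OpenAI SDK
# (the `openai` Python package, v1.x style). Endpoint, key, API version,
# and deployment name are placeholders supplied via environment variables.
import os
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],  # e.g. https://<resource>.openai.azure.com/
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-02-01",                            # placeholder; use a currently supported version
)

response = client.chat.completions.create(
    model="my-gpt-deployment",  # the deployment name you created, not the base model name
    messages=[
        {"role": "system", "content": "You are a helpful assistant for Azure questions."},
        {"role": "user", "content": "In one sentence, what does Azure OpenAI Studio do?"},
    ],
    max_tokens=100,
    temperature=0.7,
)

print(response.choices[0].message.content)
```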
Duration: 1 day (6 CPD hours)

This course is intended for:
  Data platform engineers
  Architects and operators who build and manage data analytics pipelines

Overview
In this course, you will learn to:
  Compare the features and benefits of data warehouses, data lakes, and modern data architectures
  Design and implement a batch data analytics solution
  Identify and apply appropriate techniques, including compression, to optimize data storage
  Select and deploy appropriate options to ingest, transform, and store data
  Choose the appropriate instance and node types, clusters, auto scaling, and network topology for a particular business use case
  Understand how data storage and processing affect the analysis and visualization mechanisms needed to gain actionable business insights
  Secure data at rest and in transit
  Monitor analytics workloads to identify and remediate problems
  Apply cost management best practices

In this course, you will learn to build batch data analytics solutions using Amazon EMR, an enterprise-grade Apache Spark and Apache Hadoop managed service. You will learn how Amazon EMR integrates with open-source projects such as Apache Hive, Hue, and HBase, and with AWS services such as AWS Glue and AWS Lake Formation. The course addresses data collection, ingestion, cataloging, storage, and processing components in the context of Spark and Hadoop. You will learn to use EMR Notebooks to support both analytics and machine learning workloads. You will also learn to apply security, performance, and cost management best practices to the operation of Amazon EMR.

Module A: Overview of Data Analytics and the Data Pipeline
  Data analytics use cases
  Using the data pipeline for analytics

Module 1: Introduction to Amazon EMR
  Using Amazon EMR in analytics solutions
  Amazon EMR cluster architecture
  Interactive Demo 1: Launching an Amazon EMR cluster
  Cost management strategies

Module 2: Data Analytics Pipeline Using Amazon EMR: Ingestion and Storage
  Storage optimization with Amazon EMR
  Data ingestion techniques

Module 3: High-Performance Batch Data Analytics Using Apache Spark on Amazon EMR
  Apache Spark on Amazon EMR use cases
  Why Apache Spark on Amazon EMR
  Spark concepts
  Interactive Demo 2: Connect to an EMR cluster and perform Scala commands using the Spark shell
  Transformation, processing, and analytics
  Using notebooks with Amazon EMR
  Practice Lab 1: Low-latency data analytics using Apache Spark on Amazon EMR

Module 4: Processing and Analyzing Batch Data with Amazon EMR and Apache Hive
  Using Amazon EMR with Hive to process batch data
  Transformation, processing, and analytics
  Practice Lab 2: Batch data processing using Amazon EMR with Hive
  Introduction to Apache HBase on Amazon EMR

Module 5: Serverless Data Processing
  Serverless data processing, transformation, and analytics
  Using AWS Glue with Amazon EMR workloads
  Practice Lab 3: Orchestrate data processing in Spark using AWS Step Functions

Module 6: Security and Monitoring of Amazon EMR Clusters
  Securing EMR clusters
  Interactive Demo 3: Client-side encryption with EMRFS
  Monitoring and troubleshooting Amazon EMR clusters
  Demo: Reviewing Apache Spark cluster history

Module 7: Designing Batch Data Analytics Solutions
  Batch data analytics use cases
  Activity: Designing a batch data analytics workflow

Module B: Developing Modern Data Architectures on AWS
  Modern data architectures
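To make Module 3 more concrete, here is a minimal PySpark sketch of the kind of batch transformation you might run from an EMR Notebook or a spark-submit step on an EMR cluster; the bucket names, paths, and column names are placeholders, not course assets.

```python
# Minimal PySpark sketch of a batch aggregation job of the kind run on an
# Amazon EMR cluster (EMR Notebook or spark-submit). Bucket names, paths,
# and column names are placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("daily-sales-rollup").getOrCreate()

# Read raw CSV data from S3 (EMR provides the S3 connector out of the box).
orders = spark.read.csv(
    "s3://my-raw-data-bucket/orders/2024/",  # placeholder path
    header=True,
    inferSchema=True,
)

# Aggregate revenue and order counts per product category per day.
daily_rollup = (
    orders
    .withColumn("order_date", F.to_date("order_timestamp"))
    .groupBy("order_date", "category")
    .agg(
        F.sum("amount").alias("revenue"),
        F.count("*").alias("order_count"),
    )
)

# Write the result back to S3 as partitioned Parquet for downstream queries.
(
    daily_rollup.write
    .mode("overwrite")
    .partitionBy("order_date")
    .parquet("s3://my-curated-data-bucket/daily_sales/")  # placeholder path
)

spark.stop()
```

Writing the output as partitioned Parquet is a common storage optimization covered in Module 2: columnar compression shrinks the data, and partition pruning reduces what later queries have to scan.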
Duration: 1 day (6 CPD hours)

This course is intended for data warehouse engineers, data platform engineers, and architects and operators who build and manage data analytics pipelines.

Prerequisites
  Completed either AWS Technical Essentials or Architecting on AWS
  Completed Building Data Lakes on AWS

Overview
In this course, you will learn to:
  Compare the features and benefits of data warehouses, data lakes, and modern data architectures
  Design and implement a data warehouse analytics solution
  Identify and apply appropriate techniques, including compression, to optimize data storage
  Select and deploy appropriate options to ingest, transform, and store data
  Choose the appropriate instance and node types, clusters, auto scaling, and network topology for a particular business use case
  Understand how data storage and processing affect the analysis and visualization mechanisms needed to gain actionable business insights
  Secure data at rest and in transit
  Monitor analytics workloads to identify and remediate problems
  Apply cost management best practices

In this course, you will build a data analytics solution using Amazon Redshift, a cloud data warehouse service. The course focuses on the data collection, ingestion, cataloging, storage, and processing components of the analytics pipeline. You will learn to integrate Amazon Redshift with a data lake to support both analytics and machine learning workloads. You will also learn to apply security, performance, and cost management best practices to the operation of Amazon Redshift.

Module A: Overview of Data Analytics and the Data Pipeline
  Data analytics use cases
  Using the data pipeline for analytics

Module 1: Using Amazon Redshift in the Data Analytics Pipeline
  Why Amazon Redshift for data warehousing?
  Overview of Amazon Redshift

Module 2: Introduction to Amazon Redshift
  Amazon Redshift architecture
  Interactive Demo 1: Touring the Amazon Redshift console
  Amazon Redshift features
  Practice Lab 1: Load and query data in an Amazon Redshift cluster

Module 3: Ingestion and Storage
  Ingestion
  Interactive Demo 2: Connecting your Amazon Redshift cluster using a Jupyter notebook with Data API
  Data distribution and storage
  Interactive Demo 3: Analyzing semi-structured data using the SUPER data type
  Querying data in Amazon Redshift
  Practice Lab 2: Data analytics using Amazon Redshift Spectrum

Module 4: Processing and Optimizing Data
  Data transformation
  Advanced querying
  Practice Lab 3: Data transformation and querying in Amazon Redshift
  Resource management
  Interactive Demo 4: Applying mixed workload management on Amazon Redshift
  Automation and optimization
  Interactive Demo 5: Amazon Redshift cluster resizing from the dc2.large to ra3.xlplus cluster

Module 5: Security and Monitoring of Amazon Redshift Clusters
  Securing the Amazon Redshift cluster
  Monitoring and troubleshooting Amazon Redshift clusters

Module 6: Designing Data Warehouse Analytics Solutions
  Data warehouse use case review
  Activity: Designing a data warehouse analytics workflow

Module B: Developing Modern Data Architectures on AWS
  Modern data architectures
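Interactive Demo 2 relies on the Redshift Data API. As a rough illustration of what such a call looks like from Python, here is a minimal boto3 sketch; the region, cluster identifier, database, user, table, and SQL text are placeholders, not course assets.

```python
# Minimal sketch of querying Amazon Redshift with the Redshift Data API via
# boto3, in the spirit of Interactive Demo 2. Cluster identifier, database,
# user, table, and region are placeholders; AWS credentials come from the
# default credential chain.
import time
import boto3

client = boto3.client("redshift-data", region_name="us-east-1")  # placeholder region

submitted = client.execute_statement(
    ClusterIdentifier="my-redshift-cluster",  # placeholder
    Database="dev",                           # placeholder
    DbUser="awsuser",                         # placeholder
    Sql="SELECT COUNT(*) AS row_count FROM my_table;",  # placeholder query
)
statement_id = submitted["Id"]

# The Data API is asynchronous: poll until the statement finishes.
while True:
    status = client.describe_statement(Id=statement_id)["Status"]
    if status in ("FINISHED", "FAILED", "ABORTED"):
        break
    time.sleep(1)

if status == "FINISHED":
    result = client.get_statement_result(Id=statement_id)
    for record in result["Records"]:
        # Each field is a dict keyed by its type, e.g. stringValue or longValue.
        print([list(field.values())[0] for field in record])
else:
    print(f"Statement ended with status {status}")
```

Because the Data API is asynchronous and HTTP-based, no JDBC/ODBC driver or persistent connection is required, which is why it pairs well with Jupyter notebooks and serverless callers.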