Linux network administration 2 course description

LPIC-2 is the second certification in LPI's multi-level professional certification program. This course teaches the skills necessary to pass the LPI 202 exam, the second of two LPIC-2 exams. Specifically, the course covers the administration of Linux systems in small-to-medium-sized mixed networks.

What will you learn
Install and configure fundamental network services.

Linux network administration 2 course details
Who will benefit: Linux administrators.
Prerequisites: Linux engineer certification 1 (LPIC-2)
Duration: 5 days

Linux network administration 2 course contents
Part II The LPI 202 Exam

Organizing Email Services: The Linux Mail System, Mail Transfer Agent, Mail Delivery Agent, Mail User Agent, Email Protocols, SMTP, POP, IMAP, Using Email Servers, Sendmail, Postfix, Local Email Delivery, Procmail Basics, Sieve, Remote Email Delivery, Courier, Dovecot. (A minimal SMTP sketch follows this course outline.)

DNS: DNS and BIND, Configuring a DNS Server, Starting, Stopping, and Reloading BIND, Configuring BIND Logging, Creating and Maintaining DNS Zones, BIND Zone Files, Managing BIND Zones, Securing a DNS Server, Jailing BIND, DNSSEC, TSIG, Employing DANE.

Offering Web Services: Web Servers, HTTP, The Apache Web Server, Installing and configuring Apache, Hosting Dynamic Web Applications, Secure Web Servers, Proxy Servers, Installing and configuring Squid, Configuring Clients, Nginx Server, Installing Nginx, Configuring Nginx.

Sharing Files: Samba, Configuring Samba, Troubleshooting Samba, NFS, Configuring NFS, Securing NFS, Troubleshooting NFS, FTP Servers, Configuring vsftpd, Configuring Pure-FTPd.

Managing Network Clients: Assigning Network Addresses, DHCP, Linux DHCP Software, Installing and configuring a DHCP Server and clients, Authentication Service, PAM Basics, Configuring PAM, PAM Application Files, Network Directories, LDAP Basics, OpenLDAP Server, LDAP Clients.

Setting Up System Security: Server Network Security, Port Scanning, Intrusion Detection Systems, External Network Security, iptables, Routing in Linux, Connecting Securely to a Server, OpenSSH, OpenVPN, Security Resources, US-CERT, SANS Institute, Bugtraq.
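The email services module above lists SMTP among the core protocols. As a minimal, hedged illustration (not part of the course materials), this Python sketch submits a message to a local MTA such as Postfix; the localhost:25 endpoint and the example addresses are assumptions.

```python
import smtplib
from email.message import EmailMessage

# Build a simple message; the addresses are hypothetical placeholders.
msg = EmailMessage()
msg["From"] = "alice@example.com"
msg["To"] = "bob@example.com"
msg["Subject"] = "SMTP test"
msg.set_content("Testing local mail delivery through the MTA.")

# Assumes an MTA (e.g. Postfix) is listening on localhost:25.
with smtplib.SMTP("localhost", 25) as smtp:
    smtp.send_message(msg)
```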
This course brings together all the important topics related to modern distributed applications and systems in one place. Explore the common challenges that arise when designing and implementing large-scale distributed systems, and how big-tech companies solve those problems. Throughout the course, we will build a distributed URL shortening service.
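To make the running project concrete: the core of a URL shortener is mapping a long URL to a short, unique code. The sketch below is illustrative only, not the course's implementation; it derives the short key by base62-encoding a numeric ID, one common approach, and a plain dict stands in for the distributed storage layer.

```python
import string

ALPHABET = string.digits + string.ascii_lowercase + string.ascii_uppercase  # base62

def encode_base62(n: int) -> str:
    """Encode a non-negative integer as a base62 string."""
    if n == 0:
        return ALPHABET[0]
    digits = []
    while n:
        n, rem = divmod(n, 62)
        digits.append(ALPHABET[rem])
    return "".join(reversed(digits))

# In a distributed deployment the ID would come from a coordinated
# generator (e.g. per-node ranges); here a simple counter suffices.
store: dict[str, str] = {}
next_id = 12345

def shorten(long_url: str) -> str:
    global next_id
    code = encode_base62(next_id)
    next_id += 1
    store[code] = long_url
    return f"https://sho.rt/{code}"  # hypothetical short domain

print(shorten("https://example.com/some/very/long/path"))  # -> https://sho.rt/3d7
```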
Definitive Puppet training course description

Puppet is a framework and toolset for configuration management. This course covers Puppet to enable delegates to manage configurations. Hands-on sessions follow all the major sections.

What will you learn
Deploy Puppet. Manage configurations with Puppet. Build hosts with Puppet. Produce reports with Puppet.

Definitive Puppet training course details
Who will benefit: Anyone working with Puppet.
Prerequisites: Linux fundamentals.
Duration: 2 days

Definitive Puppet training course contents

Getting started with Puppet: What is Puppet, Selecting the right version of Puppet, Installing Puppet, Configuring Puppet.

Developing and deploying Puppet: The puppet apply command and modes of operation, Foreground Puppet master, Developing Puppet with Vagrant, Environments, Making changes to the development environment, Testing the new environments with the Puppet agent, Environment branching and merging, Dynamic Puppet environments with Git branches, Summary, Resources.

Scaling Puppet: Identifying the challenges, Running the Puppet master with Apache and Passenger, Testing the Puppet master in Apache, Load balancing multiple Puppet masters, Scaling further, Load balancing alternatives, Measuring performance, Splay time, Summary, Going further, Resources.

Externalizing Puppet configuration: External node classification (see the ENC sketch after this course outline), Storing node configuration in LDAP, Summary, Resources.

Exporting and storing configuration: Virtual resources, Getting started with exported and stored configurations, Using exported resources, Expiring state resources, Summary, Resources.

Puppet consoles: The Foreman, Puppet Enterprise console, Puppetboard, Summary, Resources.

Tools and integration: Puppet Forge and the module tool, Searching and installing a module from the Forge, Generating a module, Managing module dependencies, Testing the modules, Developing Puppet modules with Geppetto, Summary, Resources.

Reporting with Puppet: Getting started, Configuring reporting, Report processors, Custom reporting, Other Puppet reporters, Summary, Resources.

Extending Facter and Puppet: Writing and distributing custom facts, Developing custom types, providers and functions, Summary, Resources.

MCollective: Installing and configuring MCollective, testing, MCollective plugins, accessing hosts with metadata.

Hiera: Lists, initial Hiera configuration, the Hiera command line utility, complex data structures, additional backends, Hiera functions in depth, module data bindings, Hiera examples, Hiera-2.
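The "Externalizing Puppet configuration" section covers external node classification. By Puppet's documented ENC contract, the master runs an executable with the node's certname as its argument and reads a YAML classification (classes, parameters, environment) from stdout. A minimal Python sketch, assuming PyYAML is installed; the class names and the "web" prefix rule are hypothetical:

```python
#!/usr/bin/env python3
"""A minimal sketch of a Puppet external node classifier (ENC).

Puppet invokes the ENC with the node's certname and expects a YAML
document on stdout. The classes and parameters below are placeholders.
"""
import sys
import yaml  # PyYAML

def classify(certname: str) -> dict:
    # Hypothetical rule: hosts named web* get the webserver profile.
    if certname.startswith("web"):
        return {
            "classes": {"profile::webserver": {"port": 80}},
            "environment": "production",
        }
    return {"classes": {"profile::base": {}}, "environment": "production"}

if __name__ == "__main__":
    if len(sys.argv) != 2:
        sys.exit("usage: enc.py <certname>")
    print(yaml.safe_dump(classify(sys.argv[1]), default_flow_style=False))
```

Pointing the master at such a script (node_terminus = exec, external_nodes = /path/to/enc.py in puppet.conf) replaces node definitions in site manifests with whatever data source the script consults.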
This course primarily focuses on explaining the concepts of Python and PySpark. It will help you enhance your data analysis skills using the structured Spark DataFrame API.
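For a flavour of the structured DataFrame API the course builds on, here is a minimal, self-contained PySpark sketch; the column names and data are illustrative, not course material:

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("dataframe-demo").getOrCreate()

# A tiny in-memory DataFrame; in practice you would read CSV or Parquet.
df = spark.createDataFrame(
    [("alice", 34), ("bob", 29), ("carol", 41)],
    ["name", "age"],
)

# Filter rows, then aggregate with a built-in function.
df.filter(F.col("age") > 30).agg(F.avg("age").alias("avg_age")).show()

spark.stop()
```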
Duration 4 Days 24 CPD hours

This course is intended for: This course is best suited to developers, engineers, and architects who want to use Hadoop and related tools to solve real-world problems.

Overview: Skills learned in this course include: creating a data set with the Kite SDK; developing custom Flume components for data ingestion; managing a multi-stage workflow with Oozie; analyzing data with Crunch; writing user-defined functions for Hive and Impala; and indexing data with Cloudera Search. Cloudera University's four-day course for designing and building Big Data applications prepares you to analyze and solve real-world problems using Apache Hadoop and associated tools in the enterprise data hub (EDH).

Introduction

Application Architecture: Scenario Explanation, Understanding the Development Environment, Identifying and Collecting Input Data, Selecting Tools for Data Processing and Analysis, Presenting Results to the User.

Defining & Using Datasets: Metadata Management, What is Apache Avro?, Avro Schemas, Avro Schema Evolution, Selecting a File Format, Performance Considerations. (A hedged Avro read/write sketch follows this course outline.)

Using the Kite SDK Data Module: What is the Kite SDK?, Fundamental Data Module Concepts, Creating New Data Sets Using the Kite SDK, Loading, Accessing, and Deleting a Data Set.

Importing Relational Data with Apache Sqoop: What is Apache Sqoop?, Basic Imports, Limiting Results, Improving Sqoop's Performance, Sqoop 2.

Capturing Data with Apache Flume: What is Apache Flume?, Basic Flume Architecture, Flume Sources, Flume Sinks, Flume Configuration, Logging Application Events to Hadoop.

Developing Custom Flume Components: Flume Data Flow and Common Extension Points, Custom Flume Sources, Developing a Flume Pollable Source, Developing a Flume Event-Driven Source, Custom Flume Interceptors, Developing a Header-Modifying Flume Interceptor, Developing a Filtering Flume Interceptor, Writing Avro Objects with a Custom Flume Interceptor.

Managing Workflows with Apache Oozie: The Need for Workflow Management, What is Apache Oozie?, Defining an Oozie Workflow, Validation, Packaging, and Deployment, Running and Tracking Workflows Using the CLI, Hue UI for Oozie.

Processing Data Pipelines with Apache Crunch: What is Apache Crunch?, Understanding the Crunch Pipeline, Comparing Crunch to Java MapReduce, Working with Crunch Projects, Reading and Writing Data in Crunch, Data Collection API Functions, Utility Classes in the Crunch API.

Working with Tables in Apache Hive: What is Apache Hive?, Accessing Hive, Basic Query Syntax, Creating and Populating Hive Tables, How Hive Reads Data, Using the RegexSerDe in Hive.

Developing User-Defined Functions: What are User-Defined Functions?, Implementing a User-Defined Function, Deploying Custom Libraries in Hive, Registering a User-Defined Function in Hive.

Executing Interactive Queries with Impala: What is Impala?, Comparing Hive to Impala, Running Queries in Impala, Support for User-Defined Functions, Data and Metadata Management.

Understanding Cloudera Search: What is Cloudera Search?, Search Architecture, Supported Document Formats.

Indexing Data with Cloudera Search: Collection and Schema Management, Morphlines, Indexing Data in Batch Mode, Indexing Data in Near Real Time.

Presenting Results to Users: Solr Query Syntax, Building a Search UI with Hue, Accessing Impala through JDBC, Powering a Custom Web Application with Impala and Search.
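The datasets module above centers on Avro schemas. As a hedged, self-contained sketch of defining a schema and writing records against it, using the fastavro library rather than the course's own tooling; the record fields are made up:

```python
from io import BytesIO
from fastavro import parse_schema, writer, reader

# A hypothetical Avro record schema for illustration.
schema = parse_schema({
    "type": "record",
    "name": "User",
    "namespace": "example",
    "fields": [
        {"name": "name", "type": "string"},
        {"name": "age", "type": "int"},
    ],
})

records = [{"name": "alice", "age": 34}, {"name": "bob", "age": 29}]

buf = BytesIO()
writer(buf, schema, records)   # serialize to the Avro container format
buf.seek(0)
for rec in reader(buf):        # read the records back, schema included
    print(rec)
```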
Duration 5 Days 30 CPD hours

This course is intended for: The CHFI course will benefit: police and other law enforcement personnel; defense and military personnel; e-business security professionals; systems administrators; legal professionals; banking, insurance, and other professionals; government agencies.

Overview: At the end of this course, you will possess the skills needed to: understand the fundamentals of computer forensics; understand the computer forensic investigation process; describe in detail different types of hard disks and file systems; understand data acquisition and duplication; counteract anti-forensic techniques; leverage forensic skills in Windows, Linux, and Mac; investigate web attacks; understand dark web forensics; deploy forensic techniques for databases, cloud, and networks; investigate email crimes including malware; perform forensics in mobile and IoT environments.

Every crime leaves a digital footprint, and you need the skills to track those footprints. In this course, students will learn to unravel these pieces of evidence, decode them, and report them. From decoding a hack to taking legal action against the perpetrators, they will become an active respondent in times of cyber-breaches.

Computer Forensics in Today's World: 1.1. Understand the Fundamentals of Computer Forensics 1.2. Understand Cybercrimes and their Investigation Procedures 1.3. Understand Digital Evidence 1.4. Understand Forensic Readiness, Incident Response and the Role of SOC (Security Operations Center) in Computer Forensics 1.5. Identify the Roles and Responsibilities of a Forensic Investigator 1.6. Understand the Challenges Faced in Investigating Cybercrimes 1.7. Understand Legal Compliance in Computer Forensics

Computer Forensics Investigation Process: 2.1. Understand the Forensic Investigation Process and its Importance 2.2. Understand the Pre-investigation Phase 2.3. Understand First Response 2.4. Understand the Investigation Phase 2.5. Understand the Post-investigation Phase

Understanding Hard Disks and File Systems: 3.1. Describe Different Types of Disk Drives and their Characteristics 3.2. Explain the Logical Structure of a Disk 3.3. Understand Booting Process of Windows, Linux and Mac Operating Systems 3.4. Understand Various File Systems of Windows, Linux and Mac Operating Systems 3.5. Examine File System Using Autopsy and The Sleuth Kit Tools 3.6. Understand Storage Systems 3.7. Understand Encoding Standards and Hex Editors 3.8. Analyze Popular File Formats Using Hex Editor

Data Acquisition and Duplication: 4.1. Understand Data Acquisition Fundamentals 4.2. Understand Data Acquisition Methodology 4.3. Prepare an Image File for Examination. (A minimal evidence-hashing sketch follows this course outline.)

Defeating Anti-forensics Techniques: 5.1. Understand Anti-forensics Techniques 5.2. Discuss Data Deletion and Recycle Bin Forensics 5.3. Illustrate File Carving Techniques and Ways to Recover Evidence from Deleted Partitions 5.4. Explore Password Cracking/Bypassing Techniques 5.5. Detect Steganography, Hidden Data in File System Structures, Trail Obfuscation, and File Extension Mismatch 5.6. Understand Techniques of Artifact Wiping, Overwritten Data/Metadata Detection, and Encryption 5.7. Detect Program Packers and Footprint Minimizing Techniques 5.8. Understand Anti-forensics Countermeasures

Windows Forensics: 6.1. Collect Volatile and Non-volatile Information 6.2. Perform Windows Memory and Registry Analysis 6.3. Examine the Cache, Cookie and History Recorded in Web Browsers 6.4. Examine Windows Files and Metadata 6.5. Understand ShellBags, LNK Files, and Jump Lists 6.6. Understand Text-based Logs and Windows Event Logs

Linux and Mac Forensics: 7.1. Understand Volatile and Non-volatile Data in Linux 7.2. Analyze Filesystem Images Using The Sleuth Kit 7.3. Demonstrate Memory Forensics Using Volatility & PhotoRec 7.4. Understand Mac Forensics

Network Forensics: 8.1. Understand Network Forensics 8.2. Explain Logging Fundamentals and Network Forensic Readiness 8.3. Summarize Event Correlation Concepts 8.4. Identify Indicators of Compromise (IoCs) from Network Logs 8.5. Investigate Network Traffic 8.6. Perform Incident Detection and Examination with SIEM Tools 8.7. Monitor and Detect Wireless Network Attacks

Investigating Web Attacks: 9.1. Understand Web Application Forensics 9.2. Understand Internet Information Services (IIS) Logs 9.3. Understand Apache Web Server Logs 9.4. Understand the Functionality of Intrusion Detection System (IDS) 9.5. Understand the Functionality of Web Application Firewall (WAF) 9.6. Investigate Web Attacks on Windows-based Servers 9.7. Detect and Investigate Various Attacks on Web Applications

Dark Web Forensics: 10.1. Understand the Dark Web 10.2. Determine How to Identify the Traces of Tor Browser during Investigation 10.3. Perform Tor Browser Forensics

Database Forensics: 11.1. Understand Database Forensics and its Importance 11.2. Determine Data Storage and Database Evidence Repositories in MSSQL Server 11.3. Collect Evidence Files on MSSQL Server 11.4. Perform MSSQL Forensics 11.5. Understand Internal Architecture of MySQL and Structure of Data Directory 11.6. Understand Information Schema and List MySQL Utilities for Performing Forensic Analysis 11.7. Perform MySQL Forensics on WordPress Web Application Database

Cloud Forensics: 12.1. Understand the Basic Cloud Computing Concepts 12.2. Understand Cloud Forensics 12.3. Understand the Fundamentals of Amazon Web Services (AWS) 12.4. Determine How to Investigate Security Incidents in AWS 12.5. Understand the Fundamentals of Microsoft Azure 12.6. Determine How to Investigate Security Incidents in Azure 12.7. Understand Forensic Methodologies for Containers and Microservices

Investigating Email Crimes: 13.1. Understand Email Basics 13.2. Understand Email Crime Investigation and its Steps 13.3. U.S. Laws Against Email Crime

Malware Forensics: 14.1. Define Malware and Identify the Common Techniques Attackers Use to Spread Malware 14.2. Understand Malware Forensics Fundamentals and Recognize Types of Malware Analysis 14.3. Understand and Perform Static Analysis of Malware 14.4. Analyze Suspicious Word and PDF Documents 14.5. Understand Dynamic Malware Analysis Fundamentals and Approaches 14.6. Analyze Malware Behavior on System Properties in Real-time 14.7. Analyze Malware Behavior on Network in Real-time 14.8. Describe Fileless Malware Attacks and How they Happen 14.9. Perform Fileless Malware Analysis - Emotet

Mobile Forensics: 15.1. Understand the Importance of Mobile Device Forensics 15.2. Illustrate Architectural Layers and Boot Processes of Android and iOS Devices 15.3. Explain the Steps Involved in Mobile Forensics Process 15.4. Investigate Cellular Network Data 15.5. Understand SIM File System and its Data Acquisition Method 15.6. Illustrate Phone Locks and Discuss Rooting of Android and Jailbreaking of iOS Devices 15.7. Perform Logical Acquisition on Android and iOS Devices 15.8. Perform Physical Acquisition on Android and iOS Devices 15.9. Discuss Mobile Forensics Challenges and Prepare Investigation Report

IoT Forensics: 16.1. Understand IoT and IoT Security Problems 16.2. Recognize Different Types of IoT Threats 16.3. Understand IoT Forensics 16.4. Perform Forensics on IoT Devices
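The data acquisition module (4.x) emphasizes preparing an image file for examination. A standard step is computing a cryptographic hash of the acquired image so its integrity can be demonstrated later. This is a minimal illustrative sketch, not the course's lab tooling; the file path is a placeholder:

```python
import hashlib

def hash_evidence(path: str, algo: str = "sha256", chunk_size: int = 1 << 20) -> str:
    """Hash a (possibly very large) evidence file in fixed-size chunks."""
    h = hashlib.new(algo)
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

# Record the digest at acquisition time and again at verification time;
# matching digests demonstrate the image was not altered in between.
print(hash_evidence("disk_image.dd"))  # placeholder path
```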
Duration 4 Days 24 CPD hours

This course is intended for: This course is appropriate for developers and administrators who intend to use HBase.

Overview: Skills learned on the course include: the use cases and usage occasions for HBase, Hadoop, and RDBMSs; using the HBase shell to directly manipulate HBase tables; designing optimal HBase schemas for efficient data storage and recovery; how to connect to HBase using the Java API, configure the HBase cluster, and administer an HBase cluster; and best practices for identifying and resolving performance bottlenecks. Cloudera University's four-day training course for Apache HBase enables participants to store and access massive quantities of multi-structured data and perform hundreds of thousands of operations per second.

Introduction to Hadoop & HBase: What Is Big Data?, Introducing Hadoop, Hadoop Components, What Is HBase?, Why Use HBase?, Strengths of HBase, HBase in Production, Weaknesses of HBase.

HBase Tables: HBase Concepts, HBase Table Fundamentals, Thinking About Table Design.

The HBase Shell: Creating Tables with the HBase Shell, Working with Tables, Working with Table Data.

HBase Architecture Fundamentals: HBase Regions, HBase Cluster Architecture, HBase and HDFS Data Locality.

HBase Schema Design: General Design Considerations, Application-Centric Design, Designing HBase Row Keys, Other HBase Table Features.

Basic Data Access with the HBase API: Options to Access HBase Data, Creating and Deleting HBase Tables, Retrieving Data with Get, Retrieving Data with Scan, Inserting and Updating Data, Deleting Data.

More Advanced HBase API Features: Filtering Scans, Best Practices, HBase Coprocessors.

HBase on the Cluster: How HBase Uses HDFS, Compactions and Splits.

HBase Reads & Writes: How HBase Writes Data, How HBase Reads Data, Block Caches for Reading.

HBase Performance Tuning: Column Family Considerations, Schema Design Considerations, Configuring for Caching, Dealing with Time Series and Sequential Data, Pre-Splitting Regions.

HBase Administration and Cluster Management: HBase Daemons, ZooKeeper Considerations, HBase High Availability, Using the HBase Balancer, Fixing Tables with hbck, HBase Security.

HBase Replication & Backup: HBase Replication, HBase Backup, MapReduce and HBase Clusters.

Using Hive & Impala with HBase: Using Hive and Impala with HBase.

Appendix A: Accessing Data with Python and Thrift: Thrift Usage, Working with Tables, Getting and Putting Data, Scanning Data, Deleting Data, Counters, Filters. (A hedged Python/Thrift access sketch follows this course outline.)

Appendix B: OpenTSDB.
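Appendix A covers accessing HBase data with Python and Thrift. As a hedged sketch of that path using the happybase library (the table and column family names are made up, and it assumes the HBase Thrift server is running on its default localhost port):

```python
import happybase

# Connect to the HBase Thrift gateway (assumed to be on localhost:9090).
connection = happybase.Connection("localhost")
table = connection.table("users")  # hypothetical table with family 'info'

# Put a row, read it back, then scan a row-key range.
table.put(b"row-001", {b"info:name": b"alice", b"info:age": b"34"})
print(table.row(b"row-001"))

for key, data in table.scan(row_start=b"row-000", row_stop=b"row-100"):
    print(key, data)

connection.close()
```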
Better than REST APIs! Build a fast and scalable HTTP/2 API for a Go microservice with gRPC and protocol buffers (protobufs).
Duration 1 Day 6 CPD hours

This course is intended for: data platform engineers; architects and operators who build and manage data analytics pipelines.

Overview: In this course, you will learn to: compare the features and benefits of data warehouses, data lakes, and modern data architectures; design and implement a batch data analytics solution; identify and apply appropriate techniques, including compression, to optimize data storage; select and deploy appropriate options to ingest, transform, and store data; choose the appropriate instance and node types, clusters, auto scaling, and network topology for a particular business use case; understand how data storage and processing affect the analysis and visualization mechanisms needed to gain actionable business insights; secure data at rest and in transit; monitor analytics workloads to identify and remediate problems; apply cost management best practices.

In this course, you will learn to build batch data analytics solutions using Amazon EMR, an enterprise-grade Apache Spark and Apache Hadoop managed service. You will learn how Amazon EMR integrates with open-source projects such as Apache Hive, Hue, and HBase, and with AWS services such as AWS Glue and AWS Lake Formation. The course addresses data collection, ingestion, cataloging, storage, and processing components in the context of Spark and Hadoop. You will learn to use EMR Notebooks to support both analytics and machine learning workloads. You will also learn to apply security, performance, and cost management best practices to the operation of Amazon EMR.

Module A: Overview of Data Analytics and the Data Pipeline. Data analytics use cases, Using the data pipeline for analytics.

Module 1: Introduction to Amazon EMR. Using Amazon EMR in analytics solutions, Amazon EMR cluster architecture, Interactive Demo 1: Launching an Amazon EMR cluster (a hedged provisioning sketch follows this outline), Cost management strategies.

Module 2: Data Analytics Pipeline Using Amazon EMR: Ingestion and Storage. Storage optimization with Amazon EMR, Data ingestion techniques.

Module 3: High-Performance Batch Data Analytics Using Apache Spark on Amazon EMR. Apache Spark on Amazon EMR use cases, Why Apache Spark on Amazon EMR, Spark concepts, Interactive Demo 2: Connect to an EMR cluster and perform Scala commands using the Spark shell, Transformation, processing, and analytics, Using notebooks with Amazon EMR, Practice Lab 1: Low-latency data analytics using Apache Spark on Amazon EMR.

Module 4: Processing and Analyzing Batch Data with Amazon EMR and Apache Hive. Using Amazon EMR with Hive to process batch data, Transformation, processing, and analytics, Practice Lab 2: Batch data processing using Amazon EMR with Hive, Introduction to Apache HBase on Amazon EMR.

Module 5: Serverless Data Processing. Serverless data processing, transformation, and analytics, Using AWS Glue with Amazon EMR workloads, Practice Lab 3: Orchestrate data processing in Spark using AWS Step Functions.

Module 6: Security and Monitoring of Amazon EMR Clusters. Securing EMR clusters, Interactive Demo 3: Client-side encryption with EMRFS, Monitoring and troubleshooting Amazon EMR clusters, Demo: Reviewing Apache Spark cluster history.

Module 7: Designing Batch Data Analytics Solutions. Batch data analytics use cases, Activity: Designing a batch data analytics workflow.

Module B: Developing Modern Data Architectures on AWS. Modern data architectures.
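Module 1 introduces EMR cluster architecture and launching a cluster. For a sense of what provisioning looks like programmatically, here is a hedged boto3 sketch, not course material; the release label, instance types, and the default IAM roles are assumptions that vary by account and region:

```python
import boto3

emr = boto3.client("emr", region_name="us-east-1")

# Launch a small long-running cluster with Spark and Hive installed.
response = emr.run_job_flow(
    Name="demo-cluster",
    ReleaseLabel="emr-6.10.0",  # assumed release; pick a current one
    Applications=[{"Name": "Spark"}, {"Name": "Hive"}],
    Instances={
        "InstanceGroups": [
            {"Name": "Primary", "InstanceRole": "MASTER",
             "InstanceType": "m5.xlarge", "InstanceCount": 1},
            {"Name": "Core", "InstanceRole": "CORE",
             "InstanceType": "m5.xlarge", "InstanceCount": 2},
        ],
        "KeepJobFlowAliveWhenNoSteps": True,  # stay up after steps finish
    },
    JobFlowRole="EMR_EC2_DefaultRole",  # assumes the default roles exist
    ServiceRole="EMR_DefaultRole",
)
print(response["JobFlowId"])  # cluster ID, e.g. j-XXXXXXXXXXXXX
```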