Duration 4 Days 24 CPD hours This course is intended for This course is appropriate for developers and administrators who intend to use HBase. Overview Skills learned on the course include: The use cases and usage occasions for HBase, Hadoop, and RDBMS Using the HBase shell to directly manipulate HBase tables Designing optimal HBase schemas for efficient data storage and recovery How to connect to HBase using the Java API, configure the HBase cluster, and administer an HBase cluster Best practices for identifying and resolving performance bottlenecks Cloudera University's four-day training course for Apache HBase enables participants to store and access massive quantities of multi-structured data and perform hundreds of thousands of operations per second. Introduction to Hadoop & HBase What Is Big Data? Introducing Hadoop Hadoop Components What Is HBase? Why Use HBase? Strengths of HBase HBase in Production Weaknesses of HBase HBase Tables HBase Concepts HBase Table Fundamentals Thinking About Table Design The HBase Shell Creating Tables with the HBase Shell Working with Tables Working with Table Data HBase Architecture Fundamentals HBase Regions HBase Cluster Architecture HBase and HDFS Data Locality HBase Schema Design General Design Considerations Application-Centric Design Designing HBase Row Keys Other HBase Table Features Basic Data Access with the HBase API Options to Access HBase Data Creating and Deleting HBase Tables Retrieving Data with Get Retrieving Data with Scan Inserting and Updating Data Deleting Data More Advanced HBase API Features Filtering Scans Best Practices HBase Coprocessors HBase on the Cluster How HBase Uses HDFS Compactions and Splits HBase Reads & Writes How HBase Writes Data How HBase Reads Data Block Caches for Reading HBase Performance Tuning Column Family Considerations Schema Design Considerations Configuring for Caching Dealing with Time Series and Sequential Data Pre-Splitting Regions HBase Administration and Cluster Management HBase Daemons ZooKeeper Considerations HBase High Availability Using the HBase Balancer Fixing Tables with hbck HBase Security HBase Replication & Backup HBase Replication HBase Backup MapReduce and HBase Clusters Using Hive & Impala with HBase Using Hive and Impala with HBase Appendix A: Accessing Data with Python and Thrift Thrift Usage Working with Tables Getting and Putting Data Scanning Data Deleting Data Counters Filters Appendix B: OpenTSDB
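For readers who want a concrete feel for the data-access topics above, the following is a minimal sketch of basic HBase operations from Python over Thrift (the approach covered in Appendix A). It assumes a running HBase Thrift server on localhost:9090 and the happybase package; the table, column family, and row key names are illustrative only.

```python
# Minimal HBase access over Thrift using happybase.
# Assumes the HBase Thrift server is listening on localhost:9090;
# the table and column family names here are illustrative.
import happybase

connection = happybase.Connection(host='localhost', port=9090)

# Create a table with one column family (skip if it already exists)
if b'sensor_readings' not in connection.tables():
    connection.create_table('sensor_readings', {'data': dict()})

table = connection.table('sensor_readings')

# Put: insert or update a cell (row key design drives scan performance)
table.put(b'device42#20240101', {b'data:temperature': b'21.5'})

# Get: retrieve a single row by key
row = table.row(b'device42#20240101')
print(row[b'data:temperature'])

# Scan: iterate over rows sharing a key prefix
for key, columns in table.scan(row_prefix=b'device42#'):
    print(key, columns)

# Delete: remove a row
table.delete(b'device42#20240101')
```

The same get, put, scan, and delete operations map directly onto the Java API calls covered in the course modules.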
Duration 1 Days 6 CPD hours This course is intended for This course is intended for: Data platform engineers Architects and operators who build and manage data analytics pipelines Overview In this course, you will learn to: Compare the features and benefits of data warehouses, data lakes, and modern data architectures Design and implement a batch data analytics solution Identify and apply appropriate techniques, including compression, to optimize data storage Select and deploy appropriate options to ingest, transform, and store data Choose the appropriate instance and node types, clusters, auto scaling, and network topology for a particular business use case Understand how data storage and processing affect the analysis and visualization mechanisms needed to gain actionable business insights Secure data at rest and in transit Monitor analytics workloads to identify and remediate problems Apply cost management best practices In this course, you will learn to build batch data analytics solutions using Amazon EMR, an enterprise-grade Apache Spark and Apache Hadoop managed service. You will learn how Amazon EMR integrates with open-source projects such as Apache Hive, Hue, and HBase, and with AWS services such as AWS Glue and AWS Lake Formation. The course addresses data collection, ingestion, cataloging, storage, and processing components in the context of Spark and Hadoop. You will learn to use EMR Notebooks to support both analytics and machine learning workloads. You will also learn to apply security, performance, and cost management best practices to the operation of Amazon EMR. Module A: Overview of Data Analytics and the Data Pipeline Data analytics use cases Using the data pipeline for analytics Module 1: Introduction to Amazon EMR Using Amazon EMR in analytics solutions Amazon EMR cluster architecture Interactive Demo 1: Launching an Amazon EMR cluster Cost management strategies Module 2: Data Analytics Pipeline Using Amazon EMR: Ingestion and Storage Storage optimization with Amazon EMR Data ingestion techniques Module 3: High-Performance Batch Data Analytics Using Apache Spark on Amazon EMR Apache Spark on Amazon EMR use cases Why Apache Spark on Amazon EMR Spark concepts Interactive Demo 2: Connect to an EMR cluster and perform Scala commands using the Spark shell Transformation, processing, and analytics Using notebooks with Amazon EMR Practice Lab 1: Low-latency data analytics using Apache Spark on Amazon EMR Module 4: Processing and Analyzing Batch Data with Amazon EMR and Apache Hive Using Amazon EMR with Hive to process batch data Transformation, processing, and analytics Practice Lab 2: Batch data processing using Amazon EMR with Hive Introduction to Apache HBase on Amazon EMR Module 5: Serverless Data Processing Serverless data processing, transformation, and analytics Using AWS Glue with Amazon EMR workloads Practice Lab 3: Orchestrate data processing in Spark using AWS Step Functions Module 6: Security and Monitoring of Amazon EMR Clusters Securing EMR clusters Interactive Demo 3: Client-side encryption with EMRFS Monitoring and troubleshooting Amazon EMR clusters Demo: Reviewing Apache Spark cluster history Module 7: Designing Batch Data Analytics Solutions Batch data analytics use cases Activity: Designing a batch data analytics workflow Module B: Developing Modern Data Architectures on AWS Modern data architectures
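As a small illustration of the batch-processing pattern that the Spark modules work through, here is a hedged PySpark sketch; the bucket, paths, and column names are invented for the example, and on Amazon EMR this would typically be submitted with spark-submit or run from an EMR notebook.

```python
# Illustrative batch transformation in PySpark.
# Bucket, paths, and column names are placeholders, not course assets.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("batch-analytics-example").getOrCreate()

# Read raw data from S3 (EMRFS exposes S3 to Spark as a file system)
orders = spark.read.csv("s3://example-bucket/raw/orders/", header=True, inferSchema=True)

# Transform: filter, aggregate, and write back in a columnar format
daily_totals = (
    orders
    .filter(F.col("status") == "COMPLETE")
    .groupBy("order_date")
    .agg(F.sum("amount").alias("total_amount"),
         F.count("*").alias("order_count"))
)

daily_totals.write.mode("overwrite").parquet("s3://example-bucket/curated/daily_totals/")
```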
Duration 4 Days 24 CPD hours This course is intended for The primary audience for this course is data professionals, data architects, and business intelligence professionals who want to learn about data engineering and building analytical solutions using data platform technologies that exist on Microsoft Azure. The secondary audience for this course includes data analysts and data scientists who work with analytical solutions built on Microsoft Azure. In this course, the student will learn how to implement and manage data engineering workloads on Microsoft Azure, using Azure services such as Azure Synapse Analytics, Azure Data Lake Storage Gen2, Azure Stream Analytics, Azure Databricks, and others. The course focuses on common data engineering tasks such as orchestrating data transfer and transformation pipelines, working with data files in a data lake, creating and loading relational data warehouses, capturing and aggregating streams of real-time data, and tracking data assets and lineage. Prerequisites Successful students start this course with knowledge of cloud computing and core data concepts and professional experience with data solutions. AZ-900T00 Microsoft Azure Fundamentals DP-900T00 Microsoft Azure Data Fundamentals 1 - Introduction to data engineering on Azure What is data engineering Important data engineering concepts Data engineering in Microsoft Azure 2 - Introduction to Azure Data Lake Storage Gen2 Understand Azure Data Lake Storage Gen2 Enable Azure Data Lake Storage Gen2 in Azure Storage Compare Azure Data Lake Store to Azure Blob storage Understand the stages for processing big data Use Azure Data Lake Storage Gen2 in data analytics workloads 3 - Introduction to Azure Synapse Analytics What is Azure Synapse Analytics How Azure Synapse Analytics works When to use Azure Synapse Analytics 4 - Use Azure Synapse serverless SQL pool to query files in a data lake Understand Azure Synapse serverless SQL pool capabilities and use cases Query files using a serverless SQL pool Create external database objects 5 - Use Azure Synapse serverless SQL pools to transform data in a data lake Transform data files with the CREATE EXTERNAL TABLE AS SELECT statement Encapsulate data transformations in a stored procedure Include a data transformation stored procedure in a pipeline 6 - Create a lake database in Azure Synapse Analytics Understand lake database concepts Explore database templates Create a lake database Use a lake database 7 - Analyze data with Apache Spark in Azure Synapse Analytics Get to know Apache Spark Use Spark in Azure Synapse Analytics Analyze data with Spark Visualize data with Spark 8 - Transform data with Spark in Azure Synapse Analytics Modify and save dataframes Partition data files Transform data with SQL 9 - Use Delta Lake in Azure Synapse Analytics Understand Delta Lake Create Delta Lake tables Create catalog tables Use Delta Lake with streaming data Use Delta Lake in a SQL pool 10 - Analyze data in a relational data warehouse Design a data warehouse schema Create data warehouse tables Load data warehouse tables Query a data warehouse 11 - Load data into a relational data warehouse Load staging tables Load dimension tables Load time dimension tables Load slowly changing dimensions Load fact tables Perform post load optimization 12 - Build a data pipeline in Azure Synapse Analytics Understand pipelines in Azure Synapse Analytics Create a pipeline in Azure Synapse Studio Define data flows Run a pipeline 13 - Use Spark Notebooks in an Azure Synapse Pipeline 
Understand Synapse Notebooks and Pipelines Use a Synapse notebook activity in a pipeline Use parameters in a notebook 14 - Plan hybrid transactional and analytical processing using Azure Synapse Analytics Understand hybrid transactional and analytical processing patterns Describe Azure Synapse Link 15 - Implement Azure Synapse Link with Azure Cosmos DB Enable Cosmos DB account to use Azure Synapse Link Create an analytical store enabled container Create a linked service for Cosmos DB Query Cosmos DB data with Spark Query Cosmos DB with Synapse SQL 16 - Implement Azure Synapse Link for SQL What is Azure Synapse Link for SQL? Configure Azure Synapse Link for Azure SQL Database Configure Azure Synapse Link for SQL Server 2022 17 - Get started with Azure Stream Analytics Understand data streams Understand event processing Understand window functions 18 - Ingest streaming data using Azure Stream Analytics and Azure Synapse Analytics Stream ingestion scenarios Configure inputs and outputs Define a query to select, filter, and aggregate data Run a job to ingest data 19 - Visualize real-time data with Azure Stream Analytics and Power BI Use a Power BI output in Azure Stream Analytics Create a query for real-time visualization Create real-time data visualizations in Power BI 20 - Introduction to Microsoft Purview What is Microsoft Purview? How Microsoft Purview works When to use Microsoft Purview 21 - Integrate Microsoft Purview and Azure Synapse Analytics Catalog Azure Synapse Analytics data assets in Microsoft Purview Connect Microsoft Purview to an Azure Synapse Analytics workspace Search a Purview catalog in Synapse Studio Track data lineage in pipelines 22 - Explore Azure Databricks Get started with Azure Databricks Identify Azure Databricks workloads Understand key concepts 23 - Use Apache Spark in Azure Databricks Get to know Spark Create a Spark cluster Use Spark in notebooks Use Spark to work with data files Visualize data 24 - Run Azure Databricks Notebooks with Azure Data Factory Understand Azure Databricks notebooks and pipelines Create a linked service for Azure Databricks Use a Notebook activity in a pipeline Use parameters in a notebook Additional course details: Nexus Humans DP-203T00 Data Engineering on Microsoft Azure training program is a workshop that presents an invigorating mix of sessions, lessons, and masterclasses meticulously crafted to propel your learning expedition forward. This immersive bootcamp-style experience boasts interactive lectures, hands-on labs, and collaborative hackathons, all strategically designed to fortify fundamental concepts. Guided by seasoned coaches, each session offers priceless insights and practical skills crucial for honing your expertise. Whether you're stepping into the realm of professional skills or a seasoned professional, this comprehensive course ensures you're equipped with the knowledge and prowess necessary for success. While we feel this is the best course for the DP-203T00 Data Engineering on Microsoft Azure course and one of our Top 10 we encourage you to read the course outline to make sure it is the right content for you. Additionally, private sessions, closed classes or dedicated events are available both live online and at our training centres in Dublin and London, as well as at your offices anywhere in the UK, Ireland or across EMEA.
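To make the Delta Lake module more concrete, below is a minimal PySpark sketch of the kind of code a Synapse Spark pool notebook might run. The storage account, container, and table names are placeholders, and the sketch assumes the workspace's linked Data Lake Storage Gen2 account is accessible from the pool.

```python
# Illustrative Delta Lake usage from a Spark pool.
# The abfss path and table name are placeholders.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("delta-example").getOrCreate()

# Load source files from the data lake
df = spark.read.parquet("abfss://files@mydatalake.dfs.core.windows.net/raw/products/")

# Write the data as a Delta table (adds the _delta_log transaction log)
delta_path = "abfss://files@mydatalake.dfs.core.windows.net/delta/products/"
df.write.format("delta").mode("overwrite").save(delta_path)

# Register a catalog table over the Delta files so it can be queried with SQL
spark.sql(f"CREATE TABLE IF NOT EXISTS products USING DELTA LOCATION '{delta_path}'")
spark.sql("SELECT COUNT(*) AS row_count FROM products").show()
```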
Intro to containers training course description This course looks at the technologies of containers and microservices. The course starts with a look at what containers are, moving on to working with containers. Networking containers and container orchestration are then studied. The course finishes with monitoring containers with Prometheus and other systems. Hands on sessions are used to reinforce the theory rather than teach specific products, although Docker and Kubernetes are used. What will you learn Use containers. Build containers. Orchestrate containers. Evaluate container technologies. Intro to containers training course details Who will benefit: Those wishing to work with containers. Prerequisites: Introduction to virtualization. Duration 2 days Intro to containers training course contents What are containers? Virtualization, VMs, What are containers? What are microservices? Machine containers, application containers. Benefits. Container runtime tools Docker, LXC, Windows containers. Architecture, components. Hands on Installing Docker client and server. Working with containers Docker workflow, Docker images, Docker containers, Dockerfile, Building, running, storing images. Creating containers. Starting, stopping and controlling containers. Public repositories, private registries. Hands on Exploring containers. Microservices What are microservices? Modular architecture, IPC. Hands on Persistence and containers. Networking containers Linking, no networking, host, bridge. The Container Network Interface. Hands on Container networking. Container orchestration engines Docker Swarm: Nodes, services, tasks. Apache Mesos: Mesos master, agents, frameworks. Kubernetes: Kubectl, master node, worker nodes. OpenStack: Architecture, containers in OpenStack. Amazon ECS: Architecture, how it works. Hands on Setup and access a Kubernetes cluster. Managing containers Monitoring, logging, collecting metrics, cluster monitoring tools: Heapster. Hands on Using Prometheus with Kubernetes.
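The hands-on sessions use the Docker command line, but the same container lifecycle (pull, run, inspect, stop, remove) can be sketched with the Docker SDK for Python. The snippet below assumes a local Docker daemon and the docker package; the image and container names are illustrative.

```python
# Illustrative container lifecycle via the Docker SDK for Python.
# Assumes a local Docker daemon and `pip install docker`.
import docker

client = docker.from_env()

# Pull an image from a public repository
client.images.pull("nginx:alpine")

# Create and start a container, publishing container port 80 on host port 8080
container = client.containers.run("nginx:alpine", name="demo-web",
                                  ports={"80/tcp": 8080}, detach=True)

# List running containers and read recent logs
for c in client.containers.list():
    print(c.name, c.status)
print(container.logs(tail=10))

# Stop and remove the container
container.stop()
container.remove()
```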
Duration 4 Days 24 CPD hours This course is intended for Hadoop Developers Overview Through instructor-led discussion and interactive, hands-on exercises, participants will navigate the Hadoop ecosystem, learning topics such as: How data is distributed, stored, and processed in a Hadoop cluster How to use Sqoop and Flume to ingest data How to process distributed data with Apache Spark How to model structured data as tables in Impala and Hive How to choose the best data storage format for different data usage patterns Best practices for data storage This training course is the best preparation for the challenges faced by Hadoop developers. Participants will learn to identify which tool is the right one to use in a given situation, and will gain hands-on experience in developing using those tools. Course Outline Introduction Introduction to Hadoop and the Hadoop Ecosystem Hadoop Architecture and HDFS Importing Relational Data with Apache Sqoop Introduction to Impala and Hive Modeling and Managing Data with Impala and Hive Data Formats Data Partitioning Capturing Data with Apache Flume Spark Basics Working with RDDs in Spark Writing and Deploying Spark Applications Parallel Programming with Spark Spark Caching and Persistence Common Patterns in Spark Data Processing Spark SQL and DataFrames Conclusion Additional course details: Nexus Humans Developer Training for Spark and Hadoop training program is a workshop that presents an invigorating mix of sessions, lessons, and masterclasses meticulously crafted to propel your learning expedition forward. This immersive bootcamp-style experience boasts interactive lectures, hands-on labs, and collaborative hackathons, all strategically designed to fortify fundamental concepts. Guided by seasoned coaches, each session offers priceless insights and practical skills crucial for honing your expertise. Whether you're stepping into the realm of professional skills or a seasoned professional, this comprehensive course ensures you're equipped with the knowledge and prowess necessary for success. While we feel this is the best course for the Developer Training for Spark and Hadoop course and one of our Top 10 we encourage you to read the course outline to make sure it is the right content for you. Additionally, private sessions, closed classes or dedicated events are available both live online and at our training centres in Dublin and London, as well as at your offices anywhere in the UK, Ireland or across EMEA.
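For a flavour of the Spark topics in this outline (RDDs, DataFrames, and Spark SQL), here is a minimal PySpark sketch; the data is invented for the example, and on a real cluster the input would normally come from HDFS rather than an in-memory list.

```python
# Illustrative RDD, DataFrame, and Spark SQL usage in PySpark.
# The sample data is invented; a cluster job would read from HDFS or cloud storage.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("dev-training-example").getOrCreate()
sc = spark.sparkContext

# RDD basics: parallelize, transform, and reduce
words = sc.parallelize(["hadoop", "spark", "hive", "impala", "spark"])
counts = words.map(lambda w: (w, 1)).reduceByKey(lambda a, b: a + b)
print(counts.collect())

# DataFrames and Spark SQL over structured data
df = spark.createDataFrame(counts, ["word", "total"])
df.createOrReplaceTempView("word_counts")
spark.sql("SELECT word, total FROM word_counts ORDER BY total DESC").show()
```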
Total PHP training course description PHP provides for the creation of dynamic web sites. This hands on training course looks at programming with PHP with an emphasis on building dynamic websites. Forms, state management and database integration are all covered with practicals used throughout the course to reinforce theory sessions. What will you learn Create dynamic web sites using PHP. Write PHP programs. Debug PHP programs. Examine existing code and determine its function. Total PHP training course details Who will benefit: Anyone creating dynamic web sites. Prerequisites: Software development fundamentals Duration 3 days Total PHP training course contents What is PHP? PHP history, dynamic web pages, how PHP works, alternatives to PHP. Downloading and installing PHP. Installing MySQL, installing Apache, platform issues. A first PHP web page A basic PHP script, PHP page structure. PHP comments. Integrating PHP and HTML. PHP forms HTML forms, taking values from forms. PHP and HTML Page inputs, environment inputs. phpinfo(), other form elements, sticky fields, generalised code, tables, forms, form elements, style sheets, JavaScript. Variables, operators and expressions Expressions, data types, assignments, scope, constants, HTTP environment variables, getting data from forms using variables. Operators Arithmetic, logical, relational, Boolean, others. Control statements Conditional: if, else, elseif, switch. Loops: while, do while, for, break, continue, exit. Functions Built-in functions, declaration, arguments, scope, loading functions from other files, defaulting parameters, call by value/name. Arrays Indexes, array initialisation, array manipulation, multi-dimensional arrays, array functions. String handling What is a string, string functions, matching, extraction, replacement. String operations, cleansing, sprintf, formatting web pages, strpos and others, splitting strings, regular expressions. PHP and databases Database structure, Database APIs, MySQL, Creating tables, Editing tables, simple SQL queries using PHP, building HTML tables using SQL queries, SQL injection, security issues, error handling. File I/O Opening, reading, writing files. Permissions, ownership, locking, directories. PHP, cookies and sessions State, Cookie properties, setting cookies, retrieving cookies, expiring/deleting cookies. Sessions, session variables, session IDs. PHP and email Emailing from servers, attachments. Objects OOP, PHP classes, constructors, instances.
Duration 4 Days 24 CPD hours This course is intended for The workshop is designed for data scientists who currently use Python or R to work with smaller datasets on a single machine and who need to scale up their analyses and machine learning models to large datasets on distributed clusters. Data engineers and developers with some knowledge of data science and machine learning may also find this workshop useful. Overview Overview of data science and machine learning at scale Overview of the Hadoop ecosystem Working with HDFS data and Hive tables using Hue Introduction to Cloudera Data Science Workbench Overview of Apache Spark 2 Reading and writing data Inspecting data quality Cleansing and transforming data Summarizing and grouping data Combining, splitting, and reshaping data Exploring data Configuring, monitoring, and troubleshooting Spark applications Overview of machine learning in Spark MLlib Extracting, transforming, and selecting features Building and evaluating regression models Building and evaluating classification models Building and evaluating clustering models Cross-validating models and tuning hyperparameters Building machine learning pipelines Deploying machine learning models Spark, Spark SQL, and Spark MLlib PySpark and sparklyr Cloudera Data Science Workbench (CDSW) Hue This workshop covers data science and machine learning workflows at scale using Apache Spark 2 and other key components of the Hadoop ecosystem. The workshop emphasizes the use of data science and machine learning methods to address real-world business challenges. Using scenarios and datasets from a fictional technology company, students discover insights to support critical business decisions and develop data products to transform the business. The material is presented through a sequence of brief lectures, interactive demonstrations, extensive hands-on exercises, and discussions. The Apache Spark demonstrations and exercises are conducted in Python (with PySpark) and R (with sparklyr) using the Cloudera Data Science Workbench (CDSW) environment. The workshop is designed for data scientists who currently use Python or R to work with smaller datasets on a single machine and who need to scale up their analyses and machine learning models to large datasets on distributed clusters. Data engineers and developers with some knowledge of data science and machine learning may also find this workshop useful. Overview of data science and machine learning at scale Overview of the Hadoop ecosystem Working with HDFS data and Hive tables using Hue Introduction to Cloudera Data Science Workbench Overview of Apache Spark 2 Reading and writing data Inspecting data quality Cleansing and transforming data Summarizing and grouping data Combining, splitting, and reshaping data Exploring data Configuring, monitoring, and troubleshooting Spark applications Overview of machine learning in Spark MLlib Extracting, transforming, and selecting features Building and evaluating regression models Building and evaluating classification models Building and evaluating clustering models Cross-validating models and tuning hyperparameters Building machine learning pipelines Deploying machine learning models Additional course details: Nexus Humans Cloudera Data Scientist Training training program is a workshop that presents an invigorating mix of sessions, lessons, and masterclasses meticulously crafted to propel your learning expedition forward. 
This immersive bootcamp-style experience boasts interactive lectures, hands-on labs, and collaborative hackathons, all strategically designed to fortify fundamental concepts. Guided by seasoned coaches, each session offers priceless insights and practical skills crucial for honing your expertise. Whether you're stepping into the realm of professional skills or a seasoned professional, this comprehensive course ensures you're equipped with the knowledge and prowess necessary for success. While we feel this is the best course for the Cloudera Data Scientist Training course and one of our Top 10 we encourage you to read the course outline to make sure it is the right content for you. Additionally, private sessions, closed classes or dedicated events are available both live online and at our training centres in Dublin and London, as well as at your offices anywhere in the UK, Ireland or across EMEA.
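As a concrete, if simplified, illustration of the MLlib workflow the workshop covers (feature extraction, model building, and evaluation), here is a hedged PySpark sketch; the columns and toy data are invented, and the course exercises use much larger datasets inside CDSW.

```python
# Illustrative MLlib pipeline: feature assembly, regression, evaluation.
# The toy data and column names are invented for the example.
from pyspark.sql import SparkSession
from pyspark.ml import Pipeline
from pyspark.ml.feature import VectorAssembler
from pyspark.ml.regression import LinearRegression
from pyspark.ml.evaluation import RegressionEvaluator

spark = SparkSession.builder.appName("mllib-pipeline-example").getOrCreate()

# Toy data: two numeric features and a numeric label
data = spark.createDataFrame(
    [(1.0, 2.0, 5.1), (2.0, 1.0, 6.9), (3.0, 4.0, 12.8),
     (4.0, 3.0, 13.2), (5.0, 6.0, 20.1), (6.0, 5.0, 21.0)],
    ["feature_a", "feature_b", "label"],
)

# Assemble raw columns into the single vector column MLlib estimators expect
assembler = VectorAssembler(inputCols=["feature_a", "feature_b"], outputCol="features")
lr = LinearRegression(featuresCol="features", labelCol="label")
pipeline = Pipeline(stages=[assembler, lr])

# Fit and score the same data; a real workflow would hold out a test set and
# cross-validate, as covered in the workshop
model = pipeline.fit(data)
predictions = model.transform(data)

evaluator = RegressionEvaluator(labelCol="label", predictionCol="prediction", metricName="rmse")
print("RMSE:", evaluator.evaluate(predictions))
```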
Duration 1 Days 6 CPD hours This course is intended for The audience for this course is individuals who want to learn the fundamentals of database concepts in a cloud environment, get basic skilling in cloud data services, and build their foundational knowledge of cloud data services within Microsoft Azure. Overview Describe core data concepts Identify considerations for relational data on Azure Describe considerations for working with non-relational data on Azure Describe an analytics workload on Azure In this course, students will gain foundational knowledge of core data concepts and related Microsoft Azure data services. Students will learn about core data concepts such as relational, non-relational, big data, and analytics, and build their foundational knowledge of cloud data services within Microsoft Azure. Students will explore fundamental relational data concepts and relational database services in Azure. They will explore Azure storage for non-relational data and the fundamentals of Azure Cosmos DB. Students will learn about large-scale data warehousing, real-time analytics, and data visualization. 1 - Explore core data concepts Identify data formats Explore file storage Explore databases Explore transactional data processing Explore analytical data processing 2 - Explore data roles and services Explore job roles in the world of data Identify data services 3 - Explore fundamental relational data concepts Understand relational data Understand normalization Explore SQL Describe database objects 4 - Explore relational database services in Azure Describe Azure SQL services and capabilities Describe Azure services for open-source databases 5 - Explore Azure Storage for non-relational data Explore Azure blob storage Explore Azure Data Lake Storage Gen2 Explore Azure Files Explore Azure Tables 6 - Explore fundamentals of Azure Cosmos DB Describe Azure Cosmos DB Identify Azure Cosmos DB APIs 7 - Explore fundamentals of large-scale data warehousing Describe data warehousing architecture Explore data ingestion pipelines Explore analytical data stores 8 - Explore fundamentals of real-time analytics Understand batch and stream processing Explore common elements of stream processing architecture Explore Azure Stream Analytics Explore Apache Spark on Microsoft Azure 9 - Explore fundamentals of data visualization Describe Power BI tools and workflow Describe core concepts of data modeling Describe considerations for data visualization
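The course is conceptual rather than code-driven, but as a small illustration of the non-relational access pattern discussed in the Azure Cosmos DB module, the following is a hedged sketch using the azure-cosmos Python package; the endpoint, key, and database and container names are placeholders.

```python
# Illustrative Azure Cosmos DB access with the azure-cosmos SDK.
# Endpoint, key, and database/container names are placeholders.
from azure.cosmos import CosmosClient

# In practice the endpoint and key come from the Azure portal or Key Vault
client = CosmosClient("https://example-account.documents.azure.com:443/",
                      credential="<account-key>")

database = client.get_database_client("appdb")
container = database.get_container_client("products")

# Upsert a JSON document (the 'id' property is required)
container.upsert_item({"id": "1", "category": "bikes", "name": "Road bike", "price": 450})

# Query documents with a parameterized SQL-like query
items = container.query_items(
    query="SELECT c.id, c.name, c.price FROM c WHERE c.category = @category",
    parameters=[{"name": "@category", "value": "bikes"}],
    enable_cross_partition_query=True,
)
for item in items:
    print(item)
```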
Securing UNIX systems training course description This course teaches you everything you need to know to build a safe Linux environment. The first section handles cryptography and authentication with certificates, openssl, mod_ssl, DNSSEC and filesystem encryption. Then Host security and hardening is covered with intrusion detection, and also user management and authentication. Filesystem Access control is then covered. Finally network security is covered with network hardening, packet filtering and VPNs. What will you learn Secure UNIX accounts. Secure UNIX file systems. Secure UNIX access through the network. Securing UNIX systems course details Who will benefit: Linux technical staff needing to secure their systems. Prerequisites: Linux system administration (LPIC-1) Duration 5 days Securing UNIX systems course contents Cryptography Certificates and Public Key Infrastructures X.509 certificates, lifecycle, fields and certificate extensions. Trust chains and PKI. openssl. Public and private keys. Certification authority. Manage server and client certificates. Revoke certificates and CAs. Encryption, signing and authentication SSL, TLS, protocol versions. Transport layer security threats, e.g. MITM. Apache HTTPD with mod_ssl for HTTPS service, including SNI and HSTS. HTTPD with mod_ssl to authenticate users using certificates. HTTPD with mod_ssl to provide OCSP stapling. Use OpenSSL for SSL/TLS client and server tests. Encrypted File Systems Block device and file system encryption. dm-crypt with LUKS to encrypt block devices. eCryptfs to encrypt file systems, including home directories, PAM integration, plain dm-crypt and EncFS. DNS and cryptography DNSSEC and DANE. BIND as an authoritative name server serving DNSSEC secured zones. BIND as a recursive name server that performs DNSSEC validation, KSK, ZSK, Key Tag, Key generation, key storage, key management and key rollover, Maintenance and resigning of zones, Use DANE. TSIG. Host Security Host Hardening BIOS and boot loader (GRUB 2) security. Disable useless software and services, sysctl for security related kernel configuration, particularly ASLR, Exec-Shield and IP / ICMP configuration, Limit resource usage. Work with chroot environments, Security advantages of virtualization. Host Intrusion Detection The Linux Audit system, chkrootkit, rkhunter, including updates, Linux Malware Detect, Automate host scans using cron, AIDE, including rule management, OpenSCAP. User Management and Authentication NSS and PAM, Enforce password policies. Lock accounts automatically after failed login attempts, SSSD, Configure NSS and PAM for use with SSSD, SSSD authentication against Active Directory, IPA, LDAP, Kerberos and local domains, Kerberos tickets. FreeIPA Installation and Samba Integration FreeIPA, architecture and components. Install and manage a FreeIPA server and domain, Active Directory replication and Kerberos cross-realm trusts, sudo, autofs, SSH and SELinux integration in FreeIPA. Access Control Discretionary Access Control File ownership and permissions, SUID, SGID. Access control lists, extended attributes and attribute classes. Mandatory Access Control TE, RBAC, MAC, DAC. SELinux, AppArmor and Smack. Network File Systems NFSv4 security issues and improvements, NFSv4 server and clients, NFSv4 authentication mechanisms (LIPKEY, SPKM, Kerberos), NFSv4 pseudo file system, NFSv4 ACLs. 
CIFS clients, CIFS Unix Extensions, CIFS security modes (NTLM, Kerberos), mapping and handling of CIFS ACLs and SIDs in a Linux system. Network Security Network Hardening FreeRADIUS, nmap, scan methods. Wireshark, filters and statistics. Rogue router advertisements and DHCP messages. Network Intrusion Detection ntop, Cacti, bandwidth usage monitoring, Snort, rule management, OpenVAS, NASL. Packet Filtering Firewall architectures, DMZ, netfilter, iptables and ip6tables, standard modules, tests and targets. IPv4 and IPv6 packet filtering. Connection tracking, NAT. IP sets and netfilter rules, nftables and nft. ebtables. conntrackd. Virtual Private Networks OpenVPN server and clients for both bridged and routed VPN networks. IPsec server and clients for routed VPN networks using IPsec-Tools / racoon. L2TP.
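The course itself works with the openssl command line, but purely to illustrate the X.509 concepts listed under Cryptography (subject, issuer, validity period, key pair, signing), here is a hedged sketch that builds a self-signed certificate with Python's cryptography package; the name and lifetime are illustrative and this is not part of the course tooling.

```python
# Illustrative self-signed X.509 certificate using the 'cryptography' package.
# The common name and validity period are placeholders.
import datetime

from cryptography import x509
from cryptography.x509.oid import NameOID
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import rsa

# Generate a private key; the matching public key goes into the certificate
key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

# Self-signed: subject and issuer are the same name
subject = issuer = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "example.test")])

cert = (
    x509.CertificateBuilder()
    .subject_name(subject)
    .issuer_name(issuer)
    .public_key(key.public_key())
    .serial_number(x509.random_serial_number())
    .not_valid_before(datetime.datetime.utcnow())
    .not_valid_after(datetime.datetime.utcnow() + datetime.timedelta(days=30))
    .sign(key, hashes.SHA256())
)

print(cert.public_bytes(serialization.Encoding.PEM).decode())
```

A certification authority differs mainly in that the issuer name and signing key belong to the CA rather than to the certificate's subject.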
Securing Linux systems training course description This course teaches you everything you need to know to build a safe Linux environment. The first section handles cryptography and authentication with certificates, openssl, mod_ssl, DNSSEC and filesystem encryption. Then Host security and hardening is covered with intrusion detection, and also user management and authentication. Filesystem Access control is then covered. Finally network security is covered with network hardening, packet filtering and VPNs. What will you learn Secure Linux accounts. Secure Linux file systems. Secure Linux access through the network. Securing Linux systems training course details Who will benefit: Linux technical staff needing to secure their systems. Prerequisites: Linux system administration (LPIC-1) Duration 5 days Securing Linux systems training course contents Cryptography Certificates and Public Key Infrastructures X.509 certificates, lifecycle, fields and certificate extensions. Trust chains and PKI. openssl. Public and private keys. Certification authority. Manage server and client certificates. Revoke certificates and CAs. Encryption, signing and authentication SSL, TLS, protocol versions. Transport layer security threats, e.g. MITM. Apache HTTPD with mod_ssl for HTTPS service, including SNI and HSTS. HTTPD with mod_ssl to authenticate users using certificates. HTTPD with mod_ssl to provide OCSP stapling. Use OpenSSL for SSL/TLS client and server tests. Encrypted File Systems Block device and file system encryption. dm-crypt with LUKS to encrypt block devices. eCryptfs to encrypt file systems, including home directories, PAM integration, plain dm-crypt and EncFS. DNS and cryptography DNSSEC and DANE. BIND as an authoritative name server serving DNSSEC secured zones. BIND as a recursive name server that performs DNSSEC validation, KSK, ZSK, Key Tag, Key generation, key storage, key management and key rollover, Maintenance and resigning of zones, Use DANE. TSIG. Host Security Host Hardening BIOS and boot loader (GRUB 2) security. Disable useless software and services, sysctl for security related kernel configuration, particularly ASLR, Exec-Shield and IP / ICMP configuration, Limit resource usage. Work with chroot environments, Security advantages of virtualization. Host Intrusion Detection The Linux Audit system, chkrootkit, rkhunter, including updates, Linux Malware Detect, Automate host scans using cron, AIDE, including rule management, OpenSCAP. User Management and Authentication NSS and PAM, Enforce password policies. Lock accounts automatically after failed login attempts, SSSD, Configure NSS and PAM for use with SSSD, SSSD authentication against Active Directory, IPA, LDAP, Kerberos and local domains, Kerberos tickets. FreeIPA Installation and Samba Integration FreeIPA, architecture and components. Install and manage a FreeIPA server and domain, Active Directory replication and Kerberos cross-realm trusts, sudo, autofs, SSH and SELinux integration in FreeIPA. Access Control Discretionary Access Control File ownership and permissions, SUID, SGID. Access control lists, extended attributes and attribute classes. Mandatory Access Control TE, RBAC, MAC, DAC. SELinux, AppArmor and Smack. Network File Systems NFSv4 security issues and improvements, NFSv4 server and clients, NFSv4 authentication mechanisms (LIPKEY, SPKM, Kerberos), NFSv4 pseudo file system, NFSv4 ACLs. 
CIFS clients, CIFS Unix Extensions, CIFS security modes (NTLM, Kerberos), mapping and handling of CIFS ACLs and SIDs in a Linux system. Network Security Network Hardening FreeRADIUS, nmap, scan methods. Wireshark, filters and statistics. Rogue router advertisements and DHCP messages. Network Intrusion Detection ntop, Cacti, bandwidth usage monitoring, Snort, rule management, OpenVAS, NASL. Packet Filtering Firewall architectures, DMZ, netfilter, iptables and ip6tables, standard modules, tests and targets. IPv4 and IPv6 packet filtering. Connection tracking, NAT. IP sets and netfilter rules, nftables and nft. ebtables. conntrackd. Virtual Private Networks OpenVPN server and clients for both bridged and routed VPN networks. IPsec server and clients for routed VPN networks using IPsec-Tools / racoon. L2TP.