Duration 5 Days 30 CPD hours This course is intended for IT Professionals with expertise in designing and implementing solutions running on Microsoft Azure. They should have broad knowledge of IT operations, including networking, virtualization, identity, security, business continuity, disaster recovery, data platform, budgeting, and governance. Azure Solution Architects use the Azure Portal and, as they become more adept, the Command Line Interface. Candidates must have expert-level skills in Azure administration and have experience with Azure development processes and DevOps processes. Overview Secure identities with Azure Active Directory, users, and groups. Implement identity solutions spanning on-premises and cloud-based capabilities. Apply monitoring solutions for collecting, combining, and analyzing data from different sources. Manage subscriptions, accounts, Azure policies, and Role-Based Access Control. Administer Azure using the Resource Manager, Azure portal, Cloud Shell, and CLI. Configure intersite connectivity solutions like VNet Peering and virtual network gateways. Administer Azure App Service, Azure Container Instances, and Kubernetes. This course teaches Solutions Architects how to translate business requirements into secure, scalable, and reliable solutions. Lessons include virtualization, automation, networking, storage, identity, security, data platform, and application infrastructure. This course outlines how decisions in each of these areas affect an overall solution. Implement Azure Active Directory Overview of Azure Active Directory Users and Groups Domains and Custom Domains Azure AD Identity Protection Implement Conditional Access Configure Fraud Alerts for MFA Implement Bypass Options Configure Guest Users in Azure AD Configure Trusted IPs Manage Multiple Directories Implement and Manage Hybrid Identities Install and Configure Azure AD Connect Configure Password Sync and Password Writeback Configure Azure AD Connect Health Implement Virtual Networking Virtual Network Peering Implement VNet Peering Implement VMs for Windows and Linux Select Virtual Machine Size Configure High Availability Implement Azure Dedicated Hosts Deploy and Configure Scale Sets Configure Azure Disk Encryption Implement Load Balancing and Network Security Implement Azure Load Balancer Implement an Application Gateway Understand Web Application Firewall Implement Azure Firewall Implement Azure Front Door Implement Azure Traffic Manager Implement Storage Accounts Storage Accounts Blob Storage Storage Security Managing Storage Accessing Blobs and Queues using AAD Implement NoSQL Databases Configure Storage Account Tables Select Appropriate CosmosDB APIs Implement Azure SQL Databases Configure Azure SQL Database Settings Implement Azure SQL Database Managed Instances High-Availability and Azure SQL Database In this module, you will learn how to Create an Azure SQL Database (single database) Create an Azure SQL Database Managed Instance Recommend high-availability architectural models used in Azure SQL Database Automate Deployment and Configuration of Resources Azure Resource Manager Templates Save a Template for a VM Evaluate Location of New Resources Configure a Virtual Hard Disk Template Deploy from a template Create and Execute an Automation Runbook Implement and Manage Azure Governance Create Management Groups, Subscriptions, and Resource Groups Overview of Role-Based Access Control (RBAC) Role-Based Access Control (RBAC) Roles Azure AD Access Reviews Implement
and Configure an Azure Policy Azure Blueprints Manage Security for Applications Azure Key Vault Azure Managed Identity Manage Workloads in Azure Migrate Workloads using Azure Migrate VMware - Agentless Migration VMware - Agent-Based Migration Implement Azure Backup Azure to Azure Site Recovery Implement Azure Update Management Implement Container-Based Applications Azure Container Instances Configure Azure Kubernetes Service Implement an Application Infrastructure Create and Configure Azure App Service Create an App Service Web App for Containers Create and Configure an App Service Plan Configure Networking for an App Service Create and Manage Deployment Slots Implement Logic Apps Implement Azure Functions Implement Cloud Infrastructure Monitoring Azure Infrastructure Security Monitoring Azure Monitor Azure Workbooks Azure Alerts Log Analytics Network Watcher Azure Service Health Monitor Azure Costs Azure Application Insights Unified Monitoring in Azure
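As a flavour of the "Automate Deployment and Configuration of Resources" module above, here is a minimal sketch of deploying an Azure Resource Manager template from Python. It assumes the azure-identity and azure-mgmt-resource packages and an authenticated Azure account; the subscription ID, resource group, and storage account name are hypothetical placeholders, and the exact request shape can vary slightly between SDK versions.

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.resource import ResourceManagementClient

SUBSCRIPTION_ID = "00000000-0000-0000-0000-000000000000"  # placeholder
RESOURCE_GROUP = "demo-rg"                                # assumed to exist

# A minimal inline ARM template that creates a single storage account.
template = {
    "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
    "contentVersion": "1.0.0.0",
    "resources": [{
        "type": "Microsoft.Storage/storageAccounts",
        "apiVersion": "2022-09-01",
        "name": "demostore20240101",  # storage account names must be globally unique
        "location": "westeurope",
        "sku": {"name": "Standard_LRS"},
        "kind": "StorageV2",
    }],
}

client = ResourceManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)

# Incremental mode leaves unrelated resources in the group untouched.
poller = client.deployments.begin_create_or_update(
    RESOURCE_GROUP,
    "storage-deployment",
    {"properties": {"mode": "Incremental", "template": template}},
)
print(poller.result().properties.provisioning_state)  # e.g. "Succeeded"
```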
Duration 3 Days 18 CPD hours Overview The goal of this course is to enable technical students new to Cassandra to begin working with Cassandra in an optimal manner. Throughout the course students will learn to: Understand the Big Data needs that C* addresses Be familiar with the operation and structure of C* Be able to install and set up a C* database Use the C* tools, including cqlsh, nodetool, and ccm (Cassandra Cluster Manager) Be familiar with the C* architecture, and how a C* cluster is structured Understand how data is distributed and replicated in a C* cluster Understand core C* data modeling concepts, and use them to create well-structured data models Be familiar with the C* eventual consistency model and use it intelligently Be familiar with consistency mechanisms such as read repair and hinted handoff Understand and use CQL to create tables and query for data Know and use the CQL data types (numerical, textual, uuid, etc.) Be familiar with the various kinds of primary keys available (simple, compound, and composite primary keys) Be familiar with the C* write and read paths Understand C* deletion and compaction The Cassandra (C*) database is a massively scalable NoSQL database that provides high availability and fault tolerance, as well as linear scalability when adding new nodes to a cluster. It has many powerful capabilities, such as tunable and eventual consistency, that allow it to meet the needs of modern applications, but also introduce a new paradigm for data modeling that many organizations do not have the expertise to use in the best way. Introduction to Cassandra is a hands-on course designed to teach attendees the basics of how to create good data models with Cassandra. This technical course has a focus on the practical aspects of working with C*, and introduces essential concepts needed to understand Cassandra, including enough coverage of internal architecture to make good decisions. It is hands-on, with labs that provide experience in core functionality. Students will also explore CQL (Cassandra Query Language), as well as some of the 'anti-patterns' that lead to non-optimal C* data models, and be ready to work on production systems involving Cassandra.
Session 1: Cassandra Overview Why We Need Cassandra - Big Data Challenges vs RDBMS High level Cassandra Overview Cassandra Features Optional: Basic Cassandra Installation and Configuration Session 2: Cassandra Architecture and CQL Overview Cassandra Architecture Overview Cassandra Clusters and Rings Nodes and Virtual Nodes Data Replication in Cassandra Introduction to CQL Defining Tables with a Single Primary Key Using cqlsh for Interactive Querying Selecting and Inserting/Upserting Data with CQL Data Replication and Distribution Basic Data Types (including uuid, timeuuid) Session 3: Data Modeling and CQL Core Concepts Defining a Compound Primary Key CQL for Compound Primary Keys Partition Keys and Data Distribution Clustering Columns Overview of Internal Data Organization Overview of Other Querying Capabilities ORDER BY, CLUSTERING ORDER BY, UPDATE, DELETE, ALLOW FILTERING Batch Queries Data Modeling Guidelines Denormalization Data Modeling Workflow Data Modeling Principles Primary Key Considerations Composite Partition Keys Defining with CQL Data Distribution with Composite Partition Key Overview of Internal Data Organization Session 4: Additional CQL Capabilities Indexing Primary/Partition Keys and Pagination with token() Secondary Indexes and Usage Guidelines Cassandra collections Collection Structure and Uses Defining and Querying Collections (set, list, and map) Materialized View Overview Usage Guidelines Session 5: Data Consistency In Cassandra Overview of Consistency in Cassandra CAP Theorem Eventual (Tunable) Consistency in C* - ONE, QUORUM, ALL Choosing CL ONE Choosing CL QUORUM Achieving Immediate Consistency Overview of Other Consistency Levels Supportive Consistency Mechanisms Writing / Hinted Handoff Read Repair Nodetool repair Session 6: Internal Mechanisms Ring Details Partitioners Gossip Protocol Snitches Write Path Overview / Commit Log Memtables and SSTables Write Failure Unavailable Nodes and Node Failure Requirements for Write Operations Read Path Overview Read Mechanism Replication and Caching Deletion/Compaction Overview Delete Mechanism Tombstones and Compaction Session 7: Working with IntelliJ Configuring JDBC Data Source for Cassandra Reading Schema Information Querying and Editing Tables. Additional course details: Nexus Humans Introduction to Cassandra (TTDS6776) training program is a workshop that presents an invigorating mix of sessions, lessons, and masterclasses meticulously crafted to propel your learning expedition forward. This immersive bootcamp-style experience boasts interactive lectures, hands-on labs, and collaborative hackathons, all strategically designed to fortify fundamental concepts. Guided by seasoned coaches, each session offers priceless insights and practical skills crucial for honing your expertise. Whether you're stepping into the realm of professional skills for the first time or are a seasoned professional, this comprehensive course ensures you're equipped with the knowledge and prowess necessary for success. While we feel this is the best course for the Introduction to Cassandra (TTDS6776) course and one of our Top 10 we encourage you to read the course outline to make sure it is the right content for you. Additionally, private sessions, closed classes or dedicated events are available both live online and at our training centres in Dublin and London, as well as at your offices anywhere in the UK, Ireland or across EMEA.
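For a concrete taste of the CQL and consistency topics in Sessions 2-5, here is a minimal sketch using the DataStax Python driver (cassandra-driver) against a single local node; the keyspace and table are invented for illustration.

```python
import uuid

from cassandra import ConsistencyLevel
from cassandra.cluster import Cluster
from cassandra.query import SimpleStatement

cluster = Cluster(["127.0.0.1"])   # contact point for a local C* node
session = cluster.connect()

session.execute("""
    CREATE KEYSPACE IF NOT EXISTS demo
    WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 1}
""")
session.set_keyspace("demo")

# Compound primary key: user_id is the partition key,
# posted_at is a clustering column ordering rows within each partition.
session.execute("""
    CREATE TABLE IF NOT EXISTS posts (
        user_id uuid,
        posted_at timeuuid,
        body text,
        PRIMARY KEY (user_id, posted_at)
    ) WITH CLUSTERING ORDER BY (posted_at DESC)
""")

user_id = uuid.uuid4()
session.execute(
    "INSERT INTO posts (user_id, posted_at, body) VALUES (%s, %s, %s)",
    (user_id, uuid.uuid1(), "Hello, Cassandra"),
)

# Tunable consistency: read this partition at QUORUM instead of the default ONE.
query = SimpleStatement(
    "SELECT body FROM posts WHERE user_id = %s",
    consistency_level=ConsistencyLevel.QUORUM,
)
for row in session.execute(query, (user_id,)):
    print(row.body)
```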
Duration 2 Days 12 CPD hours This course is intended for Cloud Solutions Architects, Site Reliability Engineers, Systems Operations professionals, DevOps Engineers, IT managers. Individuals using Google Cloud Platform to create new solutions or to integrate existing systems, application environments, and infrastructure with the Google Cloud Platform. Overview Apply a tool set of questions, techniques and design considerations Define application requirements and express them objectively as KPIs, SLOs and SLIs Decompose application requirements to find the right microservice boundaries Leverage Google Cloud developer tools to set up modern, automated deployment pipelines Choose the appropriate Google Cloud Storage services based on application requirements Architect cloud and hybrid networks Implement reliable, scalable, resilient applications balancing key performance metrics with cost Choose the right Google Cloud deployment services for your applications Secure cloud applications, data and infrastructure Monitor service level objectives and costs using Stackdriver tools This course features a combination of lectures, design activities, and hands-on labs to show you how to use proven design patterns on Google Cloud to build highly reliable and efficient solutions and operate deployments that are highly available and cost-effective. This course was created for those who have already completed the Architecting with Google Compute Engine or Architecting with Google Kubernetes Engine course. Defining the Service Describe users in terms of roles and personas. Write qualitative requirements with user stories. Write quantitative requirements using key performance indicators (KPIs). Evaluate KPIs using SLOs and SLIs. Determine the quality of application requirements using SMART criteria. Microservice Design and Architecture Decompose monolithic applications into microservices. Recognize appropriate microservice boundaries. Architect stateful and stateless services to optimize scalability and reliability. Implement services using 12-factor best practices. Build loosely coupled services by implementing a well-designed REST architecture. Design consistent, standard RESTful service APIs. DevOps Automation Automate service deployment using CI/CD pipelines. Leverage Cloud Source Repositories for source and version control. Automate builds with Cloud Build and build triggers. Manage container images with Google Container Registry. Create infrastructure with code using Deployment Manager and Terraform. Choosing Storage Solutions Choose the appropriate Google Cloud data storage service based on use case, durability, availability, scalability and cost. Store binary data with Cloud Storage. Store relational data using Cloud SQL and Spanner. Store NoSQL data using Firestore and Cloud Bigtable. Cache data for fast access using Memorystore. Build a data warehouse using BigQuery. Google Cloud and Hybrid Network Architecture Design VPC networks to optimize for cost, security, and performance. Configure global and regional load balancers to provide access to services. Leverage Cloud CDN to provide lower latency and decrease network egress. Evaluate network architecture using the Cloud Network Intelligence Center. Connect networks using peering and VPNs. Create hybrid networks between Google Cloud and on-premises data centers using Cloud Interconnect. Deploying Applications to Google Cloud Choose the appropriate Google Cloud deployment service for your applications.
Configure scalable, resilient infrastructure using Instance Templates and Groups. Orchestrate microservice deployments using Kubernetes and GKE. Leverage App Engine for a completely automated platform as a service (PaaS). Create serverless applications using Cloud Functions. Designing Reliable Systems Design services to meet requirements for availability, durability, and scalability. Implement fault-tolerant systems by avoiding single points of failure, correlated failures, and cascading failures. Avoid overload failures with the circuit breaker and truncated exponential backoff design patterns. Design resilient data storage with lazy deletion. Analyze disaster scenarios and plan for disaster recovery using cost/risk analysis. Security Design secure systems using best practices like separation of concerns, principle of least privilege, and regular audits. Leverage Cloud Security Command Center to help identify vulnerabilities. Simplify cloud governance using organizational policies and folders. Secure people using IAM roles, Identity-Aware Proxy, and Identity Platform. Manage the access and authorization of resources by machines and processes using service accounts. Secure networks with private IPs, firewalls, and Private Google Access. Mitigate DDoS attacks by leveraging Cloud DNS and Cloud Armor. Maintenance and Monitoring Manage new service versions using rolling updates, blue/green deployments, and canary releases. Forecast, monitor, and optimize service cost using the Google Cloud pricing calculator and billing reports and by analyzing billing data. Observe whether your services are meeting their SLOs using Cloud Monitoring and Dashboards. Use Uptime Checks to determine service availability. Respond to service outages using Cloud Monitoring Alerts. Additional course details: Nexus Humans Architecting with Google Cloud: Design and Process training program is a workshop that presents an invigorating mix of sessions, lessons, and masterclasses meticulously crafted to propel your learning expedition forward. This immersive bootcamp-style experience boasts interactive lectures, hands-on labs, and collaborative hackathons, all strategically designed to fortify fundamental concepts. Guided by seasoned coaches, each session offers priceless insights and practical skills crucial for honing your expertise. Whether you're stepping into the realm of professional skills for the first time or are a seasoned professional, this comprehensive course ensures you're equipped with the knowledge and prowess necessary for success. While we feel this is the best course for the Architecting with Google Cloud: Design and Process course and one of our Top 10 we encourage you to read the course outline to make sure it is the right content for you. Additionally, private sessions, closed classes or dedicated events are available both live online and at our training centres in Dublin and London, as well as at your offices anywhere in the UK, Ireland or across EMEA.
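To make the "Store binary data with Cloud Storage" line of the storage module concrete, here is a minimal sketch using the google-cloud-storage client with Application Default Credentials; the bucket and object names are hypothetical.

```python
from google.cloud import storage

client = storage.Client()  # picks up Application Default Credentials

# Hypothetical bucket -- bucket names are globally unique in practice.
bucket = client.bucket("example-architecting-demo")

# Upload a small JSON report as an object.
blob = bucket.blob("reports/2024-01-01.json")
blob.upload_from_string('{"slo_met": true}', content_type="application/json")

# Read it back.
print(blob.download_as_text())
```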
Course Overview The SQL, NoSQL, Big Data and Hadoop Level 4 course is designed to provide aspiring data engineers with the skills to fast-track their career. It will introduce you to key database and data engineering concepts, taking you through the different classifications of databases and software. You will learn how to build a data-driven organisation step-by-step, best practices for data analysis, how to use Elasticsearch as a search engine, and much more. This course is open to everyone and can be studied on a part-time or full-time basis. It is ideal for anyone looking to work in this field or gain a better understanding of Hadoop from a database perspective. Fast-track your career by enrolling today and learn tips and shortcuts from experienced industry professionals. This best-selling SQL, NoSQL, Big Data and Hadoop Level 4 has been developed by industry professionals and has already been completed by hundreds of satisfied students. This in-depth SQL, NoSQL, Big Data and Hadoop Level 4 is suitable for anyone who wants to build their professional skill set and improve their expert knowledge. The SQL, NoSQL, Big Data and Hadoop Level 4 is CPD-accredited, so you can be confident you're completing a quality training course that will boost your CV and enhance your career potential. The SQL, NoSQL, Big Data and Hadoop Level 4 is made up of several information-packed modules which break down each topic into bite-sized chunks to ensure you understand and retain everything you learn. After successfully completing the SQL, NoSQL, Big Data and Hadoop Level 4, you will be awarded a certificate of completion as proof of your new skills. If you are looking to pursue a new career and want to build your professional skills to excel in your chosen field, the certificate of completion from the SQL, NoSQL, Big Data and Hadoop Level 4 will help you stand out from the crowd. You can also validate your certification on our website. We know that you are busy and that time is precious, so we have designed the SQL, NoSQL, Big Data and Hadoop Level 4 to be completed at your own pace, whether that's part-time or full-time. Get full course access upon registration and access the course materials from anywhere in the world, at any time, from any internet-enabled device. Our experienced tutors are here to support you through the entire learning process and answer any queries you may have via email.
Work with tables, partitions, indexes, encryption, and database administration in the AWS Cloud with AWS DynamoDB.
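A minimal sketch of the kind of table work described above, assuming the boto3 SDK with configured AWS credentials; the table name, region, and items are invented for illustration.

```python
import boto3
from boto3.dynamodb.conditions import Key

dynamodb = boto3.resource("dynamodb", region_name="eu-west-1")

# Create a table with a composite primary key (partition + sort key).
table = dynamodb.create_table(
    TableName="Movies",
    KeySchema=[
        {"AttributeName": "year", "KeyType": "HASH"},    # partition key
        {"AttributeName": "title", "KeyType": "RANGE"},  # sort key
    ],
    AttributeDefinitions=[
        {"AttributeName": "year", "AttributeType": "N"},
        {"AttributeName": "title", "AttributeType": "S"},
    ],
    BillingMode="PAY_PER_REQUEST",  # on-demand, no capacity planning needed
)
table.wait_until_exists()

table.put_item(Item={"year": 2015, "title": "Example Film", "rating": 8})

# Query one partition, ordered by the sort key.
response = table.query(KeyConditionExpression=Key("year").eq(2015))
print(response["Items"])
```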
Overview This comprehensive course on SQL NoSQL Big Data and Hadoop will deepen your understanding of this topic. After successful completion of this course, you will have acquired the required skills in this sector. This SQL NoSQL Big Data and Hadoop comes with accredited certification from CPD, which will enhance your CV and help you stand out in the job market. So enrol in this course today to fast-track your career. How will I get my certificate? At the end of the course there will be an online written test, which you can take either during or after the course. After successfully completing the test, you will be able to order your certificate; these are included in the price. Who is this course for? No experience or previous qualifications are required for enrolment on this SQL NoSQL Big Data and Hadoop. It is available to all students, of all academic backgrounds. Requirements Our SQL NoSQL Big Data and Hadoop is fully compatible with PCs, Macs, laptops, tablets and smartphone devices. This course has been designed to be fully compatible with tablets and smartphones so you can access your course on Wi-Fi, 3G or 4G. There is no time limit for completing this course; it can be studied in your own time at your own pace. Career Path Learning this new skill will help you to advance in your career. It will diversify your job options and help you develop new techniques to keep up with the fast-changing world. This skillset will help you to: Open doors of opportunity Increase your adaptability Keep you relevant Boost confidence And much more! Course Curriculum 14 sections • 130 lectures • 22:34:00 total length •Introduction: 00:07:00 •Building a Data-driven Organization - Introduction: 00:04:00 •Data Engineering: 00:06:00 •Learning Environment & Course Material: 00:04:00 •Movielens Dataset: 00:03:00 •Introduction to Relational Databases: 00:09:00 •SQL: 00:05:00 •Movielens Relational Model: 00:15:00 •Movielens Relational Model: Normalization vs Denormalization: 00:16:00 •MySQL: 00:05:00 •Movielens in MySQL: Database import: 00:06:00 •OLTP in RDBMS: CRUD Applications: 00:17:00 •Indexes: 00:16:00 •Data Warehousing: 00:15:00 •Analytical Processing: 00:17:00 •Transaction Logs: 00:06:00 •Relational Databases - Wrap Up: 00:03:00 •Distributed Databases: 00:07:00 •CAP Theorem: 00:10:00 •BASE: 00:07:00 •Other Classifications: 00:07:00 •Introduction to KV Stores: 00:02:00 •Redis: 00:04:00 •Install Redis: 00:07:00 •Time Complexity of Algorithm: 00:05:00 •Data Structures in Redis : Key & String: 00:20:00 •Data Structures in Redis II : Hash & List: 00:18:00 •Data structures in Redis III : Set & Sorted Set: 00:21:00 •Data structures in Redis IV : Geo & HyperLogLog: 00:11:00 •Data structures in Redis V : Pubsub & Transaction: 00:08:00 •Modelling Movielens in Redis: 00:11:00 •Redis Example in Application: 00:29:00 •KV Stores: Wrap Up: 00:02:00 •Introduction to Document-Oriented Databases: 00:05:00 •MongoDB: 00:04:00 •MongoDB Installation: 00:02:00 •Movielens in MongoDB: 00:13:00 •Movielens in MongoDB: Normalization vs Denormalization: 00:11:00 •Movielens in MongoDB: Implementation: 00:10:00 •CRUD Operations in MongoDB: 00:13:00 •Indexes: 00:16:00 •MongoDB Aggregation Query - MapReduce function: 00:09:00 •MongoDB Aggregation Query - Aggregation Framework: 00:16:00 •Demo: MySQL vs MongoDB.
Modeling with Spark: 00:02:00 •Document Stores: Wrap Up: 00:03:00 •Introduction to Search Engine Stores: 00:05:00 •Elasticsearch: 00:09:00 •Basic Terms Concepts and Description: 00:13:00 •Movielens in Elasticsearch: 00:12:00 •CRUD in Elasticsearch: 00:15:00 •Search Queries in Elasticsearch: 00:23:00 •Aggregation Queries in Elasticsearch: 00:23:00 •The Elastic Stack (ELK): 00:12:00 •Use case: UFO Sighting in Elasticsearch: 00:29:00 •Search Engines: Wrap Up: 00:04:00 •Introduction to Columnar databases: 00:06:00 •HBase: 00:07:00 •HBase Architecture: 00:09:00 •HBase Installation: 00:09:00 •Apache Zookeeper: 00:06:00 •Movielens Data in HBase: 00:17:00 •Performing CRUD in HBase: 00:24:00 •SQL on HBase - Apache Phoenix: 00:14:00 •SQL on HBase - Apache Phoenix - Movielens: 00:10:00 •Demo : GeoLife GPS Trajectories: 00:02:00 •Wide Column Store: Wrap Up: 00:05:00 •Introduction to Time Series: 00:09:00 •InfluxDB: 00:03:00 •InfluxDB Installation: 00:07:00 •InfluxDB Data Model: 00:07:00 •Data manipulation in InfluxDB: 00:17:00 •TICK Stack I: 00:12:00 •TICK Stack II: 00:23:00 •Time Series Databases: Wrap Up: 00:04:00 •Introduction to Graph Databases: 00:05:00 •Modelling in Graph: 00:14:00 •Modelling Movielens as a Graph: 00:10:00 •Neo4J: 00:04:00 •Neo4J installation: 00:08:00 •Cypher: 00:12:00 •Cypher II: 00:19:00 •Movielens in Neo4J: Data Import: 00:17:00 •Movielens in Neo4J: Spring Application: 00:12:00 •Data Analysis in Graph Databases: 00:05:00 •Examples of Graph Algorithms in Neo4J: 00:18:00 •Graph Databases: Wrap Up: 00:07:00 •Introduction to Big Data With Apache Hadoop: 00:06:00 •Big Data Storage in Hadoop (HDFS): 00:16:00 •Big Data Processing : YARN: 00:11:00 •Installation: 00:13:00 •Data Processing in Hadoop (MapReduce): 00:14:00 •Examples in MapReduce: 00:25:00 •Data Processing in Hadoop (Pig): 00:12:00 •Examples in Pig: 00:21:00 •Data Processing in Hadoop (Spark): 00:23:00 •Examples in Spark: 00:23:00 •Data Analytics with Apache Spark: 00:09:00 •Data Compression: 00:06:00 •Data serialization and storage formats: 00:20:00 •Hadoop: Wrap Up: 00:07:00 •Introduction to Big Data SQL Engines: 00:03:00 •Apache Hive: 00:10:00 •Apache Hive : Demonstration: 00:20:00 •MPP SQL-on-Hadoop: Introduction: 00:03:00 •Impala: 00:06:00 •Impala : Demonstration: 00:18:00 •PrestoDB: 00:13:00 •PrestoDB : Demonstration: 00:14:00 •SQL-on-Hadoop: Wrap Up: 00:02:00 •Data Architectures: 00:05:00 •Introduction to Distributed Commit Logs: 00:07:00 •Apache Kafka: 00:03:00 •Confluent Platform Installation: 00:10:00 •Data Modeling in Kafka I: 00:13:00 •Data Modeling in Kafka II: 00:15:00 •Data Generation for Testing: 00:09:00 •Use case: Toll fee Collection: 00:04:00 •Stream processing: 00:11:00 •Stream Processing II with Stream + Connect APIs: 00:19:00 •Example: Kafka Streams: 00:15:00 •KSQL : Streaming Processing in SQL: 00:04:00 •KSQL: Example: 00:14:00 •Demonstration: NYC Taxi and Fares: 00:01:00 •Streaming: Wrap Up: 00:02:00 •Database Polyglot: 00:04:00 •Extending your knowledge: 00:08:00 •Data Visualization: 00:11:00 •Building a Data-driven Organization - Conclusion: 00:07:00 •Conclusion: 00:03:00 •Assignment - SQL NoSQL Big Data and Hadoop: 00:00:00
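To illustrate the Redis data-structure lectures in the curriculum above, here is a minimal sketch using the redis-py client against a local server; the Movielens-flavoured keys are invented.

```python
import redis

r = redis.Redis(host="localhost", port=6379, decode_responses=True)

# String: a simple key-value pair.
r.set("movie:1:title", "Toy Story")

# Hash: one movie stored as a field/value map.
r.hset("movie:1", mapping={"title": "Toy Story", "year": 1995})

# Sorted set: movies ranked by average rating.
r.zadd("ratings", {"movie:1": 8.3, "movie:2": 7.9})

# Top two movies by score, highest first.
print(r.zrevrange("ratings", 0, 1, withscores=True))
print(r.hgetall("movie:1"))
```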
Register on the SQL NoSQL Big Data and Hadoop today and build the experience, skills and knowledge you need to enhance your professional development and work towards your dream job. Study this course through online learning and take the first steps towards a long-term career. The course consists of a number of easy-to-digest, in-depth modules, designed to provide you with a detailed, expert level of knowledge. Learn through a mixture of instructional video lessons and online study materials. Receive online tutor support as you study the course, to ensure you are supported every step of the way. Get a digital certificate as proof of your course completion. The SQL NoSQL Big Data and Hadoop offers incredible value and allows you to study at your own pace. Access the course modules from any internet-enabled device, including computers, tablets, and smartphones. The course is designed to increase your employability and equip you with everything you need to be a success. Enrol on the course now and start learning instantly! What You Get With The SQL NoSQL Big Data and Hadoop Receive an e-certificate upon successful completion of the course Get taught by experienced, professional instructors Study at a time and pace that suits your learning style Get instant feedback on assessments 24/7 help and advice via email or live chat Get full tutor support on weekdays (Monday to Friday) Course Design The course is delivered through our online learning platform, accessible through any internet-connected device. There are no formal deadlines or teaching schedules, meaning you are free to study the course at your own pace. You are taught through a combination of Video lessons Online study materials Certification Upon successful completion of the course, you will be able to obtain your course completion e-certificate free of cost. Print copy by post is also available at an additional cost of £9.99 and PDF Certificate at £4.99. Who Is This Course For: The course is ideal for those who already work in this sector or are an aspiring professional. This course is designed to enhance your expertise and boost your CV. Learn key skills and gain a professional qualification to prove your newly-acquired knowledge. Requirements: The online training is open to all students and has no formal entry requirements. To study the SQL NoSQL Big Data and Hadoop, all you need is a passion for learning, a good understanding of English, numeracy, and IT skills. You must also be over the age of 16.
Course Content Section 01: Introduction Introduction 00:07:00 Building a Data-driven Organization - Introduction 00:04:00 Data Engineering 00:06:00 Learning Environment & Course Material 00:04:00 Movielens Dataset 00:03:00 Section 02: Relational Database Systems Introduction to Relational Databases 00:09:00 SQL 00:05:00 Movielens Relational Model 00:15:00 Movielens Relational Model: Normalization vs Denormalization 00:16:00 MySQL 00:05:00 Movielens in MySQL: Database import 00:06:00 OLTP in RDBMS: CRUD Applications 00:17:00 Indexes 00:16:00 Data Warehousing 00:15:00 Analytical Processing 00:17:00 Transaction Logs 00:06:00 Relational Databases - Wrap Up 00:03:00 Section 03: Database Classification Distributed Databases 00:07:00 CAP Theorem 00:10:00 BASE 00:07:00 Other Classifications 00:07:00 Section 04: Key-Value Store Introduction to KV Stores 00:02:00 Redis 00:04:00 Install Redis 00:07:00 Time Complexity of Algorithm 00:05:00 Data Structures in Redis : Key & String 00:20:00 Data Structures in Redis II : Hash & List 00:18:00 Data structures in Redis III : Set & Sorted Set 00:21:00 Data structures in Redis IV : Geo & HyperLogLog 00:11:00 Data structures in Redis V : Pubsub & Transaction 00:08:00 Modelling Movielens in Redis 00:11:00 Redis Example in Application 00:29:00 KV Stores: Wrap Up 00:02:00 Section 05: Document-Oriented Databases Introduction to Document-Oriented Databases 00:05:00 MongoDB 00:04:00 MongoDB Installation 00:02:00 Movielens in MongoDB 00:13:00 Movielens in MongoDB: Normalization vs Denormalization 00:11:00 Movielens in MongoDB: Implementation 00:10:00 CRUD Operations in MongoDB 00:13:00 Indexes 00:16:00 MongoDB Aggregation Query - MapReduce function 00:09:00 MongoDB Aggregation Query - Aggregation Framework 00:16:00 Demo: MySQL vs MongoDB. 
Modeling with Spark 00:02:00 Document Stores: Wrap Up 00:03:00 Section 06: Search Engines Introduction to Search Engine Stores 00:05:00 Elasticsearch 00:09:00 Basic Terms Concepts and Description 00:13:00 Movielens in Elasticsearch 00:12:00 CRUD in Elasticsearch 00:15:00 Search Queries in Elasticsearch 00:23:00 Aggregation Queries in Elasticsearch 00:23:00 The Elastic Stack (ELK) 00:12:00 Use case: UFO Sighting in Elasticsearch 00:29:00 Search Engines: Wrap Up 00:04:00 Section 07: Wide Column Store Introduction to Columnar databases 00:06:00 HBase 00:07:00 HBase Architecture 00:09:00 HBase Installation 00:09:00 Apache Zookeeper 00:06:00 Movielens Data in HBase 00:17:00 Performing CRUD in HBase 00:24:00 SQL on HBase - Apache Phoenix 00:14:00 SQL on HBase - Apache Phoenix - Movielens 00:10:00 Demo : GeoLife GPS Trajectories 00:02:00 Wide Column Store: Wrap Up 00:04:00 Section 08: Time Series Databases Introduction to Time Series 00:09:00 InfluxDB 00:03:00 InfluxDB Installation 00:07:00 InfluxDB Data Model 00:07:00 Data manipulation in InfluxDB 00:17:00 TICK Stack I 00:12:00 TICK Stack II 00:23:00 Time Series Databases: Wrap Up 00:04:00 Section 09: Graph Databases Introduction to Graph Databases 00:05:00 Modelling in Graph 00:14:00 Modelling Movielens as a Graph 00:10:00 Neo4J 00:04:00 Neo4J installation 00:08:00 Cypher 00:12:00 Cypher II 00:19:00 Movielens in Neo4J: Data Import 00:17:00 Movielens in Neo4J: Spring Application 00:12:00 Data Analysis in Graph Databases 00:05:00 Examples of Graph Algorithms in Neo4J 00:18:00 Graph Databases: Wrap Up 00:07:00 Section 10: Hadoop Platform Introduction to Big Data With Apache Hadoop 00:06:00 Big Data Storage in Hadoop (HDFS) 00:16:00 Big Data Processing : YARN 00:11:00 Installation 00:13:00 Data Processing in Hadoop (MapReduce) 00:14:00 Examples in MapReduce 00:25:00 Data Processing in Hadoop (Pig) 00:12:00 Examples in Pig 00:21:00 Data Processing in Hadoop (Spark) 00:23:00 Examples in Spark 00:23:00 Data Analytics with Apache Spark 00:09:00 Data Compression 00:06:00 Data serialization and storage formats 00:20:00 Hadoop: Wrap Up 00:07:00 Section 11: Big Data SQL Engines Introduction to Big Data SQL Engines 00:03:00 Apache Hive 00:10:00 Apache Hive : Demonstration 00:20:00 MPP SQL-on-Hadoop: Introduction 00:03:00 Impala 00:06:00 Impala : Demonstration 00:18:00 PrestoDB 00:13:00 PrestoDB : Demonstration 00:14:00 SQL-on-Hadoop: Wrap Up 00:02:00 Section 12: Distributed Commit Log Data Architectures 00:05:00 Introduction to Distributed Commit Logs 00:07:00 Apache Kafka 00:03:00 Confluent Platform Installation 00:10:00 Data Modeling in Kafka I 00:13:00 Data Modeling in Kafka II 00:15:00 Data Generation for Testing 00:09:00 Use case: Toll fee Collection 00:04:00 Stream processing 00:11:00 Stream Processing II with Stream + Connect APIs 00:19:00 Example: Kafka Streams 00:15:00 KSQL : Streaming Processing in SQL 00:04:00 KSQL: Example 00:14:00 Demonstration: NYC Taxi and Fares 00:01:00 Streaming: Wrap Up 00:02:00 Section 13: Summary Database Polyglot 00:04:00 Extending your knowledge 00:08:00 Data Visualization 00:11:00 Building a Data-driven Organization - Conclusion 00:07:00 Conclusion 00:03:00 Resources Resources - SQL NoSQL Big Data And Hadoop 00:00:00
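As a taste of Section 12's distributed commit log (and its toll-fee use case), here is a minimal produce-and-consume sketch assuming the kafka-python package and a broker on localhost; the topic and event fields are invented.

```python
import json

from kafka import KafkaConsumer, KafkaProducer

producer = KafkaProducer(
    bootstrap_servers="localhost:9092",
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)
# One toll-gate event appended to the log.
producer.send("toll-events", {"plate": "ABC123", "gate": 4, "fee": 2.5})
producer.flush()

consumer = KafkaConsumer(
    "toll-events",
    bootstrap_servers="localhost:9092",
    auto_offset_reset="earliest",  # start from the beginning of the log
    value_deserializer=lambda b: json.loads(b.decode("utf-8")),
    consumer_timeout_ms=5000,      # stop iterating when no new messages arrive
)
for message in consumer:
    print(message.offset, message.value)
```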
Duration 1 Day 6 CPD hours This course is intended for database developers who plan to migrate their MongoDB or Cassandra DB workloads to Azure using Cosmos DB. Overview Building Globally Distributed Applications with Cosmos DB Migrate MongoDB Workloads to Cosmos DB Migrate Cassandra DB Workloads to Cosmos DB This course will teach students what Cosmos DB is and how to migrate MongoDB and Cassandra workloads to Cosmos DB. Building Globally Distributed Applications with Cosmos DB Cosmos DB overview Cosmos DB APIs Provisioning Throughput Partitioning/Sharding Best Practices Migrate MongoDB Workloads to Cosmos DB Understand Migration Benefits Migration Planning Data Migration Application Migration Post-migration considerations Migrate Cassandra DB Workloads to Cosmos DB Understand Migration Benefits Migration Planning Data Migration Application Migration Post-migration considerations Additional course details: Nexus Humans DP-060T00 Migrate NoSQL workloads to Azure Cosmos DB training program is a workshop that presents an invigorating mix of sessions, lessons, and masterclasses meticulously crafted to propel your learning expedition forward. This immersive bootcamp-style experience boasts interactive lectures, hands-on labs, and collaborative hackathons, all strategically designed to fortify fundamental concepts. Guided by seasoned coaches, each session offers priceless insights and practical skills crucial for honing your expertise. Whether you're stepping into the realm of professional skills for the first time or are a seasoned professional, this comprehensive course ensures you're equipped with the knowledge and prowess necessary for success. While we feel this is the best course for the DP-060T00 Migrate NoSQL workloads to Azure Cosmos DB course and one of our Top 10 we encourage you to read the course outline to make sure it is the right content for you. Additionally, private sessions, closed classes or dedicated events are available both live online and at our training centres in Dublin and London, as well as at your offices anywhere in the UK, Ireland or across EMEA.
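To make the "Provisioning Throughput" and "Partitioning/Sharding" topics concrete, here is a minimal sketch using the azure-cosmos Python SDK for the Core (SQL) API, one of the Cosmos DB APIs the course surveys; the endpoint, key, and names are hypothetical placeholders.

```python
from azure.cosmos import CosmosClient, PartitionKey

# Hypothetical endpoint and key -- taken from the Azure portal in practice.
client = CosmosClient(
    "https://myaccount.documents.azure.com:443/",
    credential="<primary-key>",
)

database = client.create_database_if_not_exists(id="appdb")

# The partition key path drives data distribution across physical partitions;
# throughput is provisioned in request units per second (RU/s).
container = database.create_container_if_not_exists(
    id="orders",
    partition_key=PartitionKey(path="/customerId"),
    offer_throughput=400,
)

container.upsert_item({"id": "1", "customerId": "c42", "status": "shipped"})

# Point the query at a single partition to avoid a cross-partition fan-out.
for item in container.query_items(
    query="SELECT * FROM c WHERE c.customerId = 'c42'",
    partition_key="c42",
):
    print(item)
```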
Master MongoDB, an open-source document database and leading NoSQL database that provides high performance, high availability, and automatic scaling. This course covers the MongoDB Community version for beginners and provides over 50 live-running queries, including creating new databases and tables.
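A minimal sketch of the sort of live queries the course runs, assuming the pymongo driver and a local MongoDB Community server; the database, collection, and documents are invented.

```python
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")
db = client["library"]  # databases and collections are created lazily

# Create: insert a document.
db.books.insert_one({"title": "Moby-Dick", "author": "Melville", "year": 1851})

# Read: find with a filter.
print(db.books.find_one({"year": {"$lt": 1900}}))

# Update and delete.
db.books.update_one({"title": "Moby-Dick"}, {"$set": {"read": True}})
db.books.delete_one({"title": "Moby-Dick"})

# Index to speed up lookups by title.
db.books.create_index("title")
```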
A beginner-level course that will help you learn all you need to know about building applications using Python 3, FastAPI, MongoDB, and NoSQL, as well as front-end technologies such as HTML, CSS, JSX, and React JS, with live demonstrations. You need to know the basics of HTML, CSS, and JavaScript to get started.
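A minimal sketch of the kind of FastAPI application the course builds, assuming the fastapi, pydantic, and uvicorn packages; the Item model and routes are invented for illustration. Save it as main.py and run it with: uvicorn main:app --reload

```python
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class Item(BaseModel):
    name: str
    price: float

items = {}  # in-memory store: item_id -> Item

@app.post("/items/{item_id}")
def create_item(item_id: int, item: Item):
    # The request body is validated against the Item model automatically.
    items[item_id] = item
    return item

@app.get("/items/{item_id}")
def read_item(item_id: int):
    # FastAPI serialises the Pydantic model (or None) to JSON automatically.
    return items.get(item_id)
```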
Learn all about the brand-new Firestore, a NoSQL document-based technology.
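A minimal sketch of writing and reading a Firestore document from Python, assuming the google-cloud-firestore package and Application Default Credentials; the collection and fields are invented.

```python
from google.cloud import firestore

db = firestore.Client()  # uses Application Default Credentials

# Documents live in collections; both are created on first write.
doc_ref = db.collection("users").document("alice")
doc_ref.set({"name": "Alice", "joined": firestore.SERVER_TIMESTAMP})

print(doc_ref.get().to_dict())

# Query the collection for matching documents.
for doc in db.collection("users").where("name", "==", "Alice").stream():
    print(doc.id, doc.to_dict())
```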
Tired of browsing and searching for the Big Data course you are looking for? Can't find the complete package that fulfils all your needs? Then don't worry as you have just found the solution. Take a minute and look through this extensive bundle that has everything you need to succeed. After surveying thousands of learners just like you and considering their valuable feedback, our industry experts designed this all-in-one Big Data bundle. We prioritised what learners were looking for in a complete package and developed this in-demand Big Data course that will enhance your skills and prepare you for the competitive job market. So, our experts are available to answer your queries on Big Data and help you along your learning journey. Advanced audio-visual learning modules of these Big Data courses are broken down into little chunks so that you can learn at your own pace without being overwhelmed by too much material at once. Furthermore, to help you showcase your expertise in Big Data, we have prepared a special gift of 1 hardcopy certificate and 1 PDF certificate for the title course completely free of cost. These certificates will enhance your credibility and encourage prospective employers to pick you over the rest. This Big Data Bundle Consists of the following Premium courses: Course 01: SQL NoSQL Big Data and Hadoop Course 02: Complete Microsoft Power BI 2021 Course 03: Introduction to Data Analysis Course 04: Python for Data Analysis Course 05: Statistical Analysis Course 06: Data Analytics with Tableau Course 07: Basic Google Data Studio Course 08: Fundamentals of Business Analysis Course 09: Complete Introduction to Business Data Analysis Level 3 Course 10: Business Intelligence and Data Mining Masterclass Course 11: Research Methods in Business Course 12: Basic Graph Theory Algorithms Course 13: Data Protection and Data Security Level 2 Course 14: Data Analysis in Excel Level 3 Course Enrol now in Big Data to advance your career, and use the premium study materials from Apex Learning. The bundle incorporates basic to advanced level skills to shed some light on your way and boost your career. Hence, you can strengthen your Big Data expertise and essential knowledge, which will assist you in reaching your goal. Moreover, you can learn from any place in your own time without travelling for classes. CPD 165 CPD hours / points Accredited by CPD Quality Standards Who is this course for? Anyone from any background can enrol in this Big Data bundle. Requirements Our Big Data course is fully compatible with PCs, Macs, laptops, tablets and smartphone devices. Career path Having this Big Data expertise will increase the value of your CV and open you up to multiple job sectors. Certificates Certificate of completion Digital certificate - Included You will get the PDF Certificate for the title course (SQL NoSQL Big Data and Hadoop) absolutely Free! Certificate of completion Hard copy certificate - Included You will get the Hard Copy certificate for the title course (SQL NoSQL Big Data and Hadoop) absolutely Free! Other Hard Copy certificates are available for £10 each. Please Note: The delivery charge inside the UK is £3.99, and international students must pay a £9.99 shipping cost.