38 Apache Spark courses

Spark Programming in Scala for Beginners with Apache Spark 3

By Packt

This course does not require any prior knowledge of Apache Spark or Hadoop. The author explains Spark architecture and fundamental concepts to help you get up to speed and grasp the content of the course. You will come away understanding Spark programming and able to apply that knowledge to build data engineering solutions.
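
For readers new to Spark, here is a minimal sketch of the kind of Scala batch program a beginner course like this builds toward. It is illustrative only and not taken from the course materials; the file path and column name are placeholders.

    // Illustrative only: a minimal Spark batch job in Scala (not course material).
    import org.apache.spark.sql.SparkSession

    object HelloSpark {
      def main(args: Array[String]): Unit = {
        // SparkSession is the entry point to DataFrame and SQL functionality
        val spark = SparkSession.builder()
          .appName("HelloSpark")
          .master("local[*]") // run locally on all available cores
          .getOrCreate()

        // Read a CSV file into a DataFrame (path and columns are placeholders)
        val df = spark.read
          .option("header", "true")
          .option("inferSchema", "true")
          .csv("data/sample.csv")

        // A transformation (groupBy/count) followed by an action (show)
        df.groupBy("country").count().show()

        spark.stop()
      }
    }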

Spark Programming in Scala for Beginners with Apache Spark 3
Delivered Online On Demand, 6 hours 47 minutes
£14.99

Real-Time Stream Processing Using Apache Spark 3 for Scala Developers

By Packt

Learn the process of designing and developing big data engineering projects using Apache Spark. This example-driven, advanced-level course will help you understand real-time stream processing with Apache Spark, so that you can apply that knowledge to build real-time stream processing solutions.
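
As an illustration of the topic area (not material from the course), here is a minimal Structured Streaming word count in Scala; the host and port are placeholders.

    // Illustrative only: Structured Streaming word count over a socket source.
    import org.apache.spark.sql.SparkSession
    import org.apache.spark.sql.functions.{col, explode, split}

    object StreamingWordCount {
      def main(args: Array[String]): Unit = {
        val spark = SparkSession.builder()
          .appName("StreamingWordCount")
          .master("local[*]")
          .getOrCreate()

        // Read an unbounded stream of lines (host and port are placeholders)
        val lines = spark.readStream
          .format("socket")
          .option("host", "localhost")
          .option("port", 9999)
          .load()

        // Split each line into words and maintain a running count per word
        val counts = lines
          .select(explode(split(col("value"), " ")).as("word"))
          .groupBy("word")
          .count()

        // Emit the full updated result table to the console on each trigger
        counts.writeStream
          .outputMode("complete")
          .format("console")
          .start()
          .awaitTermination()
      }
    }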

Real-Time Stream Processing Using Apache Spark 3 for Scala Developers
Delivered Online On Demand, 3 hours 23 minutes
£22.99

Apache Spark 3 Advance Skills for Cracking Job Interviews

By Packt

A carefully structured advanced-level course on Apache Spark 3 to help you clear your job interviews. This course covers advanced topics and concepts that are part of the Databricks Spark certification exam. Boost your skills in Spark 3 architecture and memory management.
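
By way of illustration only, interview questions on memory management often revolve around configuration settings like those in the spark-shell-style snippet below. The values shown are arbitrary examples for discussion, not recommendations from this course; real sizing depends on the cluster.

    // Example values only, not tuning advice.
    import org.apache.spark.sql.SparkSession

    val spark = SparkSession.builder()
      .appName("TunedApp")
      .config("spark.executor.memory", "4g")           // JVM heap per executor
      .config("spark.executor.cores", "4")             // task slots per executor
      .config("spark.memory.fraction", "0.6")          // heap share for execution + storage
      .config("spark.sql.shuffle.partitions", "200")   // parallelism of shuffles
      .getOrCreate()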

Apache Spark 3 Advance Skills for Cracking Job Interviews
Delivered Online On Demand, 3 hours 47 minutes
£67.99

Real-Time Stream Processing Using Apache Spark 3 for Python Developers

By Packt

Get to grips with real-time stream processing using PySpark and Spark Structured Streaming, and apply that knowledge to build stream processing solutions. This course is example-driven and follows a working-session-like approach.

Real-Time Stream Processing Using Apache Spark 3 for Python Developers
Delivered Online On Demand, 4 hours 34 minutes
£93.99

Spark Programming in Python for Beginners with Apache Spark 3

By Packt

Advance your data skills by mastering Spark programming in Python. This beginner-level course will help you understand the core concepts of Apache Spark 3 and show you how to apply those concepts to build data engineering solutions.

Spark Programming in Python for Beginners with Apache Spark 3
Delivered Online On Demand, 6 hours 35 minutes
£37.99

Apache Spark with Scala - Hands-On with Big Data!

By Packt

This is a comprehensive and practical Apache Spark course. In this course, you will learn and master the art of framing data analysis problems as Spark problems through 20+ hands-on examples, and then scale them up to run on cloud computing services. Explore Spark 3, IntelliJ, Structured Streaming, and a stronger focus on the Dataset API.
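
To illustrate the Dataset API the course emphasizes, here is a small, self-contained Scala sketch (not from the course materials) showing a typed Dataset with compile-time-checked transformations; the case class and data are hypothetical.

    // Illustrative only: a typed Dataset with compile-time-checked operations.
    import org.apache.spark.sql.SparkSession

    // The case class supplies the Dataset's schema and encoder
    case class Person(name: String, age: Int)

    object DatasetDemo {
      def main(args: Array[String]): Unit = {
        val spark = SparkSession.builder()
          .appName("DatasetDemo")
          .master("local[*]")
          .getOrCreate()
        import spark.implicits._

        // Build a Dataset[Person] from in-memory data
        val people = Seq(Person("Ada", 36), Person("Grace", 45)).toDS()

        // Typed transformations: the compiler checks field names and types
        val adultNames = people.filter(_.age >= 18).map(_.name)
        adultNames.show()

        spark.stop()
      }
    }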

Apache Spark with Scala - Hands-On with Big Data!
Delivered Online On Demand, 8 hours 55 minutes
£74.99

Mastering Scala with Apache Spark for the Modern Data Enterprise (TTSK7520)

By Nexus Human

Duration: 5 days (30 CPD hours)

Who it is for: This intermediate-and-beyond level course is geared for experienced technical professionals in various roles, such as developers, data analysts, data engineers, software engineers, and machine learning engineers who want to leverage Scala and Spark to tackle complex data challenges and develop scalable, high-performance applications across diverse domains. Practical programming experience is required to participate in the hands-on labs.

Overview: Working in a hands-on learning environment led by our expert instructor, you'll:
• Develop a basic understanding of Scala and Apache Spark fundamentals, enabling you to confidently create scalable and high-performance applications.
• Learn how to process large datasets efficiently, helping you handle complex data challenges and make data-driven decisions.
• Gain hands-on experience with real-time data streaming, allowing you to manage and analyze data as it flows into your applications.
• Acquire practical knowledge of machine learning algorithms using Spark MLlib, empowering you to create intelligent applications and uncover hidden insights.
• Master graph processing with GraphX, enabling you to analyze and visualize complex relationships in your data.
• Discover generative AI technologies using GPT with Spark and Scala, opening up new possibilities for automating content generation and enhancing data analysis.

Embark on a journey to master the world of big data with our immersive course on Scala and Spark! Mastering Scala with Apache Spark for the Modern Data Enterprise is a five-day, hands-on course designed to provide you with the essential skills and tools to tackle complex data projects using the Scala programming language and Apache Spark, a high-performance data processing engine. Mastering these technologies will enable you to perform a wide range of tasks, from data wrangling and analytics to machine learning and artificial intelligence, across various industries and applications.

Guided by our expert instructor, you'll explore the fundamentals of Scala programming and Apache Spark while gaining valuable hands-on experience with Spark programming, RDDs, DataFrames, Spark SQL, and data sources. You'll also explore Spark Streaming, performance optimization techniques, and the integration of popular external libraries, tools, and cloud platforms like AWS, Azure, and GCP. Machine learning enthusiasts will delve into Spark MLlib, covering the basics of machine learning algorithms, data preparation, feature extraction, and various techniques such as regression, classification, clustering, and recommendation systems.

Course outline (a brief Scala sketch of the RDD basics covered here follows this list):
• Introduction to Scala: brief history and motivation; differences between Scala and Java; basic Scala syntax and constructs; Scala's functional programming features
• Introduction to Apache Spark: overview and history; Spark components and architecture; Spark ecosystem; comparing Spark with other big data frameworks
• Basics of Spark Programming: SparkContext and SparkSession; Resilient Distributed Datasets (RDDs); transformations and actions; working with DataFrames
• Spark SQL and Data Sources: Spark SQL library and its advantages; structured and semi-structured data sources; reading and writing data in various formats (CSV, JSON, Parquet, Avro, etc.); data manipulation using SQL queries
• Basic RDD Operations: creating and manipulating RDDs; common transformations and actions on RDDs; working with key-value data
• Basic DataFrame and Dataset Operations: creating and manipulating DataFrames and Datasets; column operations and functions; filtering, sorting, and aggregating data
• Introduction to Spark Streaming: overview of Spark Streaming; Discretized Stream (DStream) operations; windowed operations and stateful processing
• Performance Optimization Basics: best practices for efficient Spark code; broadcast variables and accumulators; monitoring Spark applications
• Integrating External Libraries and Tools: using popular external libraries, such as Hadoop and HBase; integrating with cloud platforms: AWS, Azure, GCP; connecting to data storage systems: HDFS, S3, Cassandra, etc.
• Introduction to Machine Learning Basics: overview of machine learning; supervised and unsupervised learning; common algorithms and use cases
• Introduction to Spark MLlib: overview of Spark MLlib; MLlib's algorithms and utilities; data preparation and feature extraction
• Linear Regression and Classification: linear regression algorithm; logistic regression for classification; model evaluation and performance metrics
• Clustering Algorithms: overview of clustering algorithms; k-means clustering; model evaluation and performance metrics
• Collaborative Filtering and Recommendation Systems: overview of recommendation systems; collaborative filtering techniques; implementing recommendations with Spark MLlib
• Introduction to Graph Processing: overview of graph processing; use cases and applications of graph processing; graph representations and operations
• Introduction to Spark GraphX: overview of GraphX; creating and transforming graphs; graph algorithms in GraphX
• Big Data Innovation! Using GPT and Generative AI Technologies with Spark and Scala: overview of generative AI technologies; integrating GPT with Spark and Scala; practical applications and use cases
• Bonus topics, time permitting - Introduction to Spark NLP: overview of Spark NLP; preprocessing text data; text classification and sentiment analysis
• Putting It All Together: work on a capstone project that integrates multiple aspects of the course, including data processing, machine learning, graph processing, and generative AI technologies
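
As a taste of the "Basics of Spark Programming" and "Basic RDD Operations" topics in the outline above, here is a brief hypothetical Scala sketch of RDD transformations and actions. It is not taken from the course labs; the data is made up.

    // Illustrative only: RDD transformations (lazy) and actions (eager).
    import org.apache.spark.sql.SparkSession

    object RddBasics {
      def main(args: Array[String]): Unit = {
        val spark = SparkSession.builder()
          .appName("RddBasics")
          .master("local[*]")
          .getOrCreate()
        val sc = spark.sparkContext

        // Transformations build a lineage; nothing executes yet
        val squares = sc.parallelize(1 to 10).map(n => n * n)

        // Key-value RDDs support aggregations such as reduceByKey
        val totals = sc.parallelize(Seq(("a", 1), ("b", 2), ("a", 3)))
          .reduceByKey(_ + _)

        // Actions trigger execution of the lineage
        println(squares.collect().mkString(", "))
        totals.collect().foreach(println)

        spark.stop()
      }
    }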

Mastering Scala with Apache Spark for the Modern Data Enterprise (TTSK7520)
Delivered Online, Flexible Dates
Price on Enquiry

DP-601T00 Implementing a Lakehouse with Microsoft Fabric

By Nexus Human

Duration: 1 day (6 CPD hours)

Who it is for: The primary audience for this course is data professionals who are familiar with data modeling, extraction, and analytics. It is designed for professionals interested in gaining knowledge about Lakehouse architecture, the Microsoft Fabric platform, and how to enable end-to-end analytics using these technologies. Job roles: Data Analyst, Data Engineer, Data Scientist.

Overview:
• Describe end-to-end analytics in Microsoft Fabric
• Describe core features and capabilities of lakehouses in Microsoft Fabric
• Create a lakehouse
• Ingest data into files and tables in a lakehouse
• Query lakehouse tables with SQL
• Configure Spark in a Microsoft Fabric workspace
• Identify suitable scenarios for Spark notebooks and Spark jobs
• Use Spark dataframes to analyze and transform data
• Use Spark SQL to query data in tables and views
• Visualize data in a Spark notebook
• Understand Delta Lake and delta tables in Microsoft Fabric
• Create and manage delta tables using Spark
• Use Spark to query and transform data in delta tables
• Use delta tables with Spark structured streaming
• Describe Dataflow (Gen2) capabilities in Microsoft Fabric
• Create Dataflow (Gen2) solutions to ingest and transform data
• Include a Dataflow (Gen2) in a pipeline

This course is designed to build your foundational skills in data engineering on Microsoft Fabric, focusing on the Lakehouse concept. It explores the powerful capabilities of Apache Spark for distributed data processing, and the essential techniques for efficient data management, versioning, and reliability that come from working with Delta Lake tables. It also explores data ingestion and orchestration using Dataflows Gen2 and Data Factory pipelines. The course includes a combination of lectures and hands-on exercises that will prepare you to work with lakehouses in Microsoft Fabric.

Course outline (a brief delta table sketch follows this list):
• Introduction to end-to-end analytics using Microsoft Fabric: explore end-to-end analytics with Microsoft Fabric; data teams and Microsoft Fabric; enable and use Microsoft Fabric; knowledge check
• Get started with lakehouses in Microsoft Fabric: explore the Microsoft Fabric Lakehouse; work with Microsoft Fabric Lakehouses; exercise - create and ingest data with a Microsoft Fabric Lakehouse
• Use Apache Spark in Microsoft Fabric: prepare to use Apache Spark; run Spark code; work with data in a Spark dataframe; work with data using Spark SQL; visualize data in a Spark notebook; exercise - analyze data with Apache Spark
• Work with Delta Lake Tables in Microsoft Fabric: understand Delta Lake; create delta tables; work with delta tables in Spark; use delta tables with streaming data; exercise - use delta tables in Apache Spark
• Ingest Data with Dataflows Gen2 in Microsoft Fabric: understand Dataflows (Gen2) in Microsoft Fabric; explore Dataflows (Gen2) in Microsoft Fabric; integrate Dataflows (Gen2) and pipelines in Microsoft Fabric; exercise - create and use a Dataflow (Gen2) in Microsoft Fabric
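
To give a flavour of the delta table work described above, here is a hedged, shell-style Scala sketch of writing and querying a delta table with the Spark DataFrame API. It assumes a Spark runtime with Delta Lake available (as in a Fabric lakehouse), and the table and column names are placeholders; Fabric notebooks more commonly use PySpark or Spark SQL, but Scala is used here for consistency with the other sketches in this listing.

    // Hedged sketch: assumes a Spark runtime with Delta Lake available,
    // such as a Microsoft Fabric lakehouse; names are placeholders.
    import org.apache.spark.sql.SparkSession

    val spark = SparkSession.builder().appName("DeltaSketch").getOrCreate()
    import spark.implicits._

    // Write a small DataFrame out as a managed delta table
    val sales = Seq(("2024-01-01", 120.0), ("2024-01-02", 98.5))
      .toDF("order_date", "amount")
    sales.write.format("delta").mode("overwrite").saveAsTable("sales_delta")

    // Query the delta table back with Spark SQL
    spark.sql(
      "SELECT order_date, SUM(amount) AS total FROM sales_delta GROUP BY order_date"
    ).show()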

DP-601T00 Implementing a Lakehouse with Microsoft Fabric
Delivered Online, Flexible Dates
£595

Apache Spark 3 for Data Engineering and Analytics with Python

By Packt

This course focuses primarily on the concepts of Python and PySpark. It will help you enhance your data analysis skills using the structured Spark DataFrame APIs.

Apache Spark 3 for Data Engineering and Analytics with Python
Delivered Online On Demand, 8 hours 30 minutes
£41.99

Building Batch Data Analytics Solutions on AWS

By Nexus Human

Duration: 1 day (6 CPD hours)

Who it is for: Data platform engineers, and architects and operators who build and manage data analytics pipelines.

Overview: In this course, you will learn to:
• Compare the features and benefits of data warehouses, data lakes, and modern data architectures
• Design and implement a batch data analytics solution
• Identify and apply appropriate techniques, including compression, to optimize data storage
• Select and deploy appropriate options to ingest, transform, and store data
• Choose the appropriate instance and node types, clusters, auto scaling, and network topology for a particular business use case
• Understand how data storage and processing affect the analysis and visualization mechanisms needed to gain actionable business insights
• Secure data at rest and in transit
• Monitor analytics workloads to identify and remediate problems
• Apply cost management best practices

In this course, you will learn to build batch data analytics solutions using Amazon EMR, an enterprise-grade Apache Spark and Apache Hadoop managed service. You will learn how Amazon EMR integrates with open-source projects such as Apache Hive, Hue, and HBase, and with AWS services such as AWS Glue and AWS Lake Formation. The course addresses data collection, ingestion, cataloging, storage, and processing components in the context of Spark and Hadoop. You will learn to use EMR Notebooks to support both analytics and machine learning workloads, and to apply security, performance, and cost management best practices to the operation of Amazon EMR.

Course outline (a brief Spark-shell sketch follows this list):
• Module A: Overview of Data Analytics and the Data Pipeline - data analytics use cases; using the data pipeline for analytics
• Module 1: Introduction to Amazon EMR - using Amazon EMR in analytics solutions; Amazon EMR cluster architecture; Interactive Demo 1: launching an Amazon EMR cluster; cost management strategies
• Module 2: Data Analytics Pipeline Using Amazon EMR: Ingestion and Storage - storage optimization with Amazon EMR; data ingestion techniques
• Module 3: High-Performance Batch Data Analytics Using Apache Spark on Amazon EMR - Apache Spark on Amazon EMR use cases; why Apache Spark on Amazon EMR; Spark concepts; Interactive Demo 2: connect to an EMR cluster and perform Scala commands using the Spark shell; transformation, processing, and analytics; using notebooks with Amazon EMR; Practice Lab 1: low-latency data analytics using Apache Spark on Amazon EMR
• Module 4: Processing and Analyzing Batch Data with Amazon EMR and Apache Hive - using Amazon EMR with Hive to process batch data; transformation, processing, and analytics; Practice Lab 2: batch data processing using Amazon EMR with Hive; introduction to Apache HBase on Amazon EMR
• Module 5: Serverless Data Processing - serverless data processing, transformation, and analytics; using AWS Glue with Amazon EMR workloads; Practice Lab 3: orchestrate data processing in Spark using AWS Step Functions
• Module 6: Security and Monitoring of Amazon EMR Clusters - securing EMR clusters; Interactive Demo 3: client-side encryption with EMRFS; monitoring and troubleshooting Amazon EMR clusters; Demo: reviewing Apache Spark cluster history
• Module 7: Designing Batch Data Analytics Solutions - batch data analytics use cases; Activity: designing a batch data analytics workflow
• Module B: Developing Modern Data Architectures on AWS - modern data architectures
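
As an informal illustration of the Spark-shell work mentioned in Module 3, the following Scala commands show the general shape of reading from and writing to Amazon S3 on an EMR cluster. This is not from the course labs: the bucket, prefix, and column names are placeholders, and `spark` is the session the Spark shell predefines.

    // Hedged sketch of Spark-shell commands on an EMR cluster; the `spark`
    // session is predefined by the shell, and the S3 locations are placeholders.
    val events = spark.read
      .option("header", "true")
      .csv("s3://example-bucket/input/events.csv")

    // Aggregate, then write the result back to S3 in a columnar format
    events.groupBy("event_type")
      .count()
      .write
      .mode("overwrite")
      .parquet("s3://example-bucket/output/event_counts/")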

Building Batch Data Analytics Solutions on AWS
Delivered Online, Flexible Dates
Price on Enquiry

Educators matching "Apache Spark"

Nobleprog Pakistan

NobleProg is an international training and consultancy group, delivering high-quality courses to every sector, covering artificial intelligence, IT, management, and applied statistics. Over the last 17 years, we have trained more than 50,000 people from over 6,000 companies and organisations.

Our courses include classroom (both public and closed) and instructor-led online formats, giving you the choice and flexibility to suit your time, budget, and level of expertise. We practice what we preach: we use a great deal of the technologies and methods that we teach, and continuously upgrade and improve our courses, keeping up to date with all the latest developments. Our trainers are hand-picked and have been through rigorous checks and interviews, and all courses are evaluated by delegates, ensuring continuous feedback and improvement.

NobleProg in numbers:
• 17+ years of experience
• 15+ offices all over the world
• 1,000+ trainers cooperating with NobleProg
• 1,400+ course outlines offered
• 6,100+ companies that entrusted us
• 58,000+ satisfied participants

NobleProg is The World's Local Training Provider. Our mission is to provide comprehensive training and consultancy solutions all over the world, in an effective and accessible way, tailored to consumers' needs. We offer practical, real-world knowledge supported by a full understanding of the theory. Our expert trainers are skilled in the latest knowledge transfer techniques, blending presentation, demonstration, and hands-on learning. We understand that our learners are excited to be gaining new skills, and we thrive off that energy to deliver exceptional training events.

Investing in upskilling or reskilling with NobleProg means you stay ahead. Our catalogue is constantly evolving, and we offer the most in-demand courses: Java, JavaScript, SQL, Visual Basic for Applications (VBA), as well as Apache Spark, OpenStack, TensorFlow, Selenium, artificial intelligence, and data analysis. Our offer consists of more than 1,400 training outlines covering more than 120 technologies. At NobleProg we emphasise the need not only to follow the latest technological trends, but also to anticipate change. We focus on delivering professional skills and certifications that will have a real impact.

NobleProg's history: NobleProg was established in 2005 in Krakow, Poland, and has gradually expanded its operations to other global markets since. In just two years, the first international branch was opened in London. The overwhelming potential of NobleProg, combined with the rising need for self-development programmes, especially in the field of technological skills, prompted the company to change its business model into a franchise. By doing so, in a short period of time the company allowed a number of people passionate about education and new technologies to join the NobleProg team. With each year the territorial reach of NobleProg expanded further, and we now have offices on every continent. NobleProg is the World's Local Training Provider.

Nexus Human

London

Nexus Human, established over 20 years ago, stands as a pillar of excellence in the realm of IT and business skills training and education in Ireland and the UK.

For over two decades, Nexus Human has been a steadfast source of reliable and high-quality training solutions, catering to a diverse range of professional and educational needs. With a strong reputation in the training industry, Nexus Human has consistently demonstrated its commitment to equipping individuals and organisations with the skills and knowledge required to thrive in today's dynamic world. Our training programs span a wide spectrum, encompassing IT certifications, business skills, and much more.

What sets Nexus Human apart is our unwavering dedication to staying at the forefront of industry trends and technology advancements. Our expert instructors, coupled with cutting-edge training resources, ensure that students receive the most up-to-date and relevant knowledge available.

The impact of Nexus Human extends far and wide, helping individuals enhance their career prospects and aiding businesses in achieving their goals. This 20-year journey has solidified our institution's standing as a trusted partner in personal and professional growth, offering reliable, excellent training that continues to shape the future. Whether you seek to upskill, reskill, or simply stay ahead of the curve, Nexus Human is the place to turn for an educational experience marked by quality, reliability, and innovation.