
116 Data Engineering courses

Data Science Projects with Python

By Nexus Human

Duration: 2 Days (12 CPD hours)

Who should attend: If you are a data analyst, data scientist, or a business analyst who wants to get started with using Python and machine learning techniques to analyze data and predict outcomes, this course is for you. Basic knowledge of computer programming and data analytics is a must. Familiarity with mathematical concepts such as algebra and basic statistics will be useful.

Overview: By the end of this course, you will have the skills you need to confidently use various machine learning algorithms to perform detailed data analysis and extract meaningful insights from data.

This course is designed to give you practical guidance on industry-standard data analysis and machine learning tools in Python, with the help of realistic data. The course will help you understand how you can use pandas and Matplotlib to critically examine a dataset with summary statistics and graphs, and extract the insights you seek to derive. You will continue to build on your knowledge as you learn how to prepare data and feed it to machine learning algorithms, such as regularized logistic regression and random forest, using the scikit-learn package. You'll discover how to tune the algorithms to provide the best predictions on new and unseen data. As you delve into later sections, you'll be able to understand the working and output of these algorithms and gain insight into not only the predictive capabilities of the models but also their reasons for making these predictions.

Course outline:
Data Exploration and Cleaning: Python and the Anaconda Package Management System; Different Types of Data Science Problems; Loading the Case Study Data with Jupyter and pandas; Data Quality Assurance and Exploration; Exploring the Financial History Features in the Dataset; Activity 1: Exploring Remaining Financial Features in the Dataset
Introduction to Scikit-Learn and Model Evaluation: Introduction; Model Performance Metrics for Binary Classification; Activity 2: Performing Logistic Regression with a New Feature and Creating a Precision-Recall Curve
Details of Logistic Regression and Feature Exploration: Introduction; Examining the Relationships between Features and the Response; Univariate Feature Selection: What It Does and Doesn't Do; Activity 3: Fitting a Logistic Regression Model and Directly Using the Coefficients
The Bias-Variance Trade-off: Introduction; Estimating the Coefficients and Intercepts of Logistic Regression; Cross Validation: Choosing the Regularization Parameter and Other Hyperparameters; Activity 4: Cross-Validation and Feature Engineering with the Case Study Data
Decision Trees and Random Forests: Introduction; Decision Trees; Random Forests: Ensembles of Decision Trees; Activity 5: Cross-Validation Grid Search with Random Forest
Imputation of Missing Data, Financial Analysis, and Delivery to Client: Introduction; Review of Modeling Results; Dealing with Missing Data: Imputation Strategies; Activity 6: Deriving Financial Insights; Final Thoughts on Delivering the Predictive Model to the Client
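To give a flavour of the workflow the course walks through (a quick look at a dataset with pandas, then a regularized logistic regression tuned with scikit-learn), here is a minimal sketch. The file name and column names are placeholders, not the actual case study data.

```python
# Minimal sketch of the pandas + scikit-learn workflow covered in the course.
# 'case_study.csv' and the column names are placeholders, not the real dataset.
import pandas as pd
from sklearn.model_selection import train_test_split, GridSearchCV
from sklearn.linear_model import LogisticRegression

df = pd.read_csv("case_study.csv")
print(df.describe())                              # summary statistics for a first look
print(df["default_next_month"].value_counts())    # class balance of the response

X = df.drop(columns=["default_next_month"])
y = df["default_next_month"]
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42)

# Regularized logistic regression; C is the inverse regularization strength.
grid = GridSearchCV(LogisticRegression(max_iter=1000),
                    param_grid={"C": [0.01, 0.1, 1, 10]},
                    cv=4, scoring="roc_auc")
grid.fit(X_train, y_train)
print(grid.best_params_, grid.score(X_test, y_test))
```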

Delivered Online, Flexible Dates
Price on Enquiry

Cloudera Data Analyst Training - Using Pig, Hive, and Impala with Hadoop

By Nexus Human

Duration: 4 Days (24 CPD hours)

Who should attend: This course is designed for data analysts, business intelligence specialists, developers, system architects, and database administrators.

Overview: Skills gained in this training include: the features that Pig, Hive, and Impala offer for data acquisition, storage, and analysis; the fundamentals of Apache Hadoop and data ETL (extract, transform, load), ingestion, and processing with Hadoop; how Pig, Hive, and Impala improve productivity for typical analysis tasks; joining diverse datasets to gain valuable business insight; and performing real-time, complex queries on datasets.

Cloudera University's four-day data analyst training course, focusing on Apache Pig, Hive, and Cloudera Impala, will teach you to apply traditional data analytics and business intelligence skills to big data.

Course outline:
Hadoop Fundamentals: The Motivation for Hadoop; Hadoop Overview; Data Storage: HDFS; Distributed Data Processing: YARN, MapReduce, and Spark; Data Processing and Analysis: Pig, Hive, and Impala; Data Integration: Sqoop; Other Hadoop Data Tools; Exercise Scenarios Explanation
Introduction to Pig: What Is Pig?; Pig's Features; Pig Use Cases; Interacting with Pig
Basic Data Analysis with Pig: Pig Latin Syntax; Loading Data; Simple Data Types; Field Definitions; Data Output; Viewing the Schema; Filtering and Sorting Data; Commonly Used Functions
Processing Complex Data with Pig: Storage Formats; Complex/Nested Data Types; Grouping; Built-In Functions for Complex Data; Iterating Grouped Data
Multi-Dataset Operations with Pig: Techniques for Combining Data Sets; Joining Data Sets in Pig; Set Operations; Splitting Data Sets
Pig Troubleshooting & Optimization: Troubleshooting Pig; Logging; Using Hadoop's Web UI; Data Sampling and Debugging; Performance Overview; Understanding the Execution Plan; Tips for Improving the Performance of Your Pig Jobs
Introduction to Hive & Impala: What Is Hive?; What Is Impala?; Schema and Data Storage; Comparing Hive to Traditional Databases; Hive Use Cases
Querying with Hive & Impala: Databases and Tables; Basic Hive and Impala Query Language Syntax; Data Types; Differences Between Hive and Impala Query Syntax; Using Hue to Execute Queries; Using the Impala Shell
Data Management: Data Storage; Creating Databases and Tables; Loading Data; Altering Databases and Tables; Simplifying Queries with Views; Storing Query Results
Data Storage & Performance: Partitioning Tables; Choosing a File Format; Managing Metadata; Controlling Access to Data
Relational Data Analysis with Hive & Impala: Joining Datasets; Common Built-In Functions; Aggregation and Windowing
Working with Impala: How Impala Executes Queries; Extending Impala with User-Defined Functions; Improving Impala Performance
Analyzing Text and Complex Data with Hive: Complex Values in Hive; Using Regular Expressions in Hive; Sentiment Analysis and N-Grams; Conclusion
Hive Optimization: Understanding Query Performance; Controlling Job Execution Plan; Bucketing; Indexing Data
Extending Hive: SerDes; Data Transformation with Custom Scripts; User-Defined Functions; Parameterized Queries
Choosing the Best Tool for the Job: Comparing MapReduce, Pig, Hive, Impala, and Relational Databases; Which to Choose?
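The course itself works in Pig Latin and Hive/Impala SQL rather than Python, but as a rough illustration of the kind of join-and-aggregate query the Hive and Impala modules cover, here is a small hedged sketch that submits a query to an Impala daemon using the impyla client. The host, table, and column names are placeholders, not part of the course materials.

```python
# Hedged sketch: running an Impala SQL query from Python with the impyla client.
# Host, port, table, and column names are placeholders for illustration only.
from impala.dbapi import connect

conn = connect(host="impala-daemon.example.com", port=21050)
cur = conn.cursor()
cur.execute("""
    SELECT c.region, COUNT(*) AS orders, SUM(o.total) AS revenue
    FROM orders o
    JOIN customers c ON o.customer_id = c.id
    GROUP BY c.region
    ORDER BY revenue DESC
""")
for region, orders, revenue in cur.fetchall():
    print(region, orders, revenue)
cur.close()
conn.close()
```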

Delivered Online, Flexible Dates
Price on Enquiry

Big Data Architecture Workshop

By Nexus Human

Duration: 3 Days (18 CPD hours)

Who should attend: Senior executives, CIOs and CTOs, business intelligence executives, marketing executives, data and business analytics specialists, innovation specialists and entrepreneurs, academics, and other people interested in big data.

Overview: More specifically, BDAW addresses advanced big data architecture topics, including data formats, transformation, real-time, batch and machine learning processing, scalability, fault tolerance, security and privacy, and minimizing the risk of an unsound architecture and technology selection.

Big Data Architecture Workshop (BDAW) is a learning event that addresses advanced big data architecture topics. BDAW brings together technical contributors into a group setting to design and architect solutions to a challenging business problem. The workshop addresses big data architecture problems in general, and then applies them to the design of a challenging system. Throughout the highly interactive workshop, students apply concepts to real-world examples, resulting in detailed synergistic discussions. The workshop is conducive for students to learn techniques for architecting big data systems, not only from Cloudera's experience but also from the experiences of fellow students.

Course outline:
Workshop Application Use Cases: Oz Metropolitan; Architectural questions; Team activity: Analyze Metroz Application Use Cases
Application Vertical Slice: Definition; Minimizing risk of an unsound architecture; Selecting a vertical slice; Team activity: Identify an initial vertical slice for Metroz
Application Processing: Real-time and near-real-time processing; Batch processing; Data access patterns; Delivery and processing guarantees; Machine Learning pipelines; Team activity: Identify delivery and processing patterns in Metroz, characterize response time requirements, identify Machine Learning pipelines
Application Data: Three V's of Big Data; Data Lifecycle; Data Formats; Transforming Data; Team activity: Metroz Data Requirements
Scalable Applications: Scale up, scale out, scale to X; Determining if an application will scale; Poll: scalable airport terminal designs; Hadoop and Spark Scalability; Team activity: Scaling Metroz
Fault Tolerant Distributed Systems: Principles; Transparency; Hardware vs. Software redundancy; Tolerating disasters; Stateless functional fault tolerance; Stateful fault tolerance; Replication and group consistency; Fault tolerance in Spark and MapReduce; Application tolerance for failures; Team activity: Identify Metroz component failures and requirements
Security and Privacy: Principles; Privacy; Threats; Technologies; Team activity: Identify threats and security mechanisms in Metroz
Deployment: Cluster sizing and evolution; On-premise vs. Cloud; Edge computing; Team activity: Select deployment for Metroz
Technology Selection: HDFS; HBase; Kudu; Relational Database Management Systems; MapReduce; Spark, including streaming, SparkSQL and SparkML; Hive; Impala; Cloudera Search; Data Sets and Formats; Team activity: Technologies relevant to Metroz
Software Architecture: Architecture artifacts; One platform or multiple, lambda architecture; Team activity: Produce high-level architecture, selected technologies, revisit vertical slice; Vertical slice demonstration

Additional course details: The Nexus Humans Big Data Architecture Workshop training program is a workshop that presents an invigorating mix of sessions, lessons, and masterclasses meticulously crafted to propel your learning expedition forward. This immersive bootcamp-style experience boasts interactive lectures, hands-on labs, and collaborative hackathons, all strategically designed to fortify fundamental concepts. Guided by seasoned coaches, each session offers priceless insights and practical skills crucial for honing your expertise. Whether you're taking your first steps into professional skills or are already a seasoned professional, this comprehensive course ensures you're equipped with the knowledge and prowess necessary for success. While we feel this is the best course for the Big Data Architecture Workshop and one of our Top 10, we encourage you to read the course outline to make sure it is the right content for you. Additionally, private sessions, closed classes or dedicated events are available both live online and at our training centres in Dublin and London, as well as at your offices anywhere in the UK, Ireland or across EMEA.
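Although the workshop is design-focused rather than code-focused, a tiny PySpark sketch may help make the "scale out" discussion under Scalable Applications and Technology Selection concrete: the same grouped aggregation runs unchanged on a laptop or on a large cluster. The input path and column names below are illustrative assumptions, not workshop materials.

```python
# Illustrative PySpark sketch: the same aggregation scales out across a cluster.
# The input path and column names are placeholder assumptions.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("metroz-sketch").getOrCreate()

events = spark.read.json("hdfs:///data/metroz/events/")   # distributed read
summary = (events
           .groupBy("sensor_id")
           .agg(F.count("*").alias("readings"),
                F.avg("value").alias("avg_value")))
summary.write.mode("overwrite").parquet("hdfs:///data/metroz/summary/")
spark.stop()
```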

Delivered Online, Flexible Dates
Price on Enquiry

Python With Data Science

By Nexus Human

Duration: 2 Days (12 CPD hours)

Who should attend: Data scientists, software developers, IT architects, and technical managers. Participants should have a general knowledge of statistics and programming, and be familiar with Python.

Overview: NumPy, pandas, Matplotlib, scikit-learn; Python REPLs; Jupyter Notebooks; data analytics life-cycle phases; data repairing and normalizing; data aggregation and grouping; data visualization; data science algorithms for supervised and unsupervised machine learning.

The course covers theoretical and technical aspects of using Python in Applied Data Science projects and Data Logistics use cases.

Course outline:
Python for Data Science: Using Modules; Listing Methods in a Module; Creating Your Own Modules; List Comprehension; Dictionary Comprehension; String Comprehension; Python 2 vs Python 3; Sets (Python 3+); Python Idioms; Python Data Science 'Ecosystem'; NumPy; NumPy Arrays; NumPy Idioms; pandas; Data Wrangling with pandas' DataFrame; SciPy; scikit-learn; SciPy or scikit-learn?; Matplotlib; Python vs R; Python on Apache Spark; Python Dev Tools and REPLs; Anaconda; IPython; Visual Studio Code; Jupyter; Jupyter Basic Commands; Summary
Applied Data Science: What is Data Science?; Data Science Ecosystem; Data Mining vs. Data Science; Business Analytics vs. Data Science; Data Science, Machine Learning, AI?; Who is a Data Scientist?; Data Science Skill Sets Venn Diagram; Data Scientists at Work; Examples of Data Science Projects; An Example of a Data Product; Applied Data Science at Google; Data Science Gotchas; Summary
Data Analytics Life-cycle Phases: Big Data Analytics Pipeline; Data Discovery Phase; Data Harvesting Phase; Data Priming Phase; Data Logistics and Data Governance; Exploratory Data Analysis; Model Planning Phase; Model Building Phase; Communicating the Results; Production Roll-out; Summary
Repairing and Normalizing Data: Repairing and Normalizing Data; Dealing with the Missing Data; Sample Data Set; Getting Info on Null Data; Dropping a Column; Interpolating Missing Data in pandas; Replacing the Missing Values with the Mean Value; Scaling (Normalizing) the Data; Data Preprocessing with scikit-learn; Scaling with the scale() Function; The MinMaxScaler Object; Summary
Descriptive Statistics Computing Features in Python: Descriptive Statistics; Non-uniformity of a Probability Distribution; Using NumPy for Calculating Descriptive Statistics Measures; Finding Min and Max in NumPy; Using pandas for Calculating Descriptive Statistics Measures; Correlation; Regression and Correlation; Covariance; Getting Pairwise Correlation and Covariance Measures; Finding Min and Max in pandas DataFrame; Summary
Data Aggregation and Grouping: Data Aggregation and Grouping; Sample Data Set; The pandas.core.groupby.SeriesGroupBy Object; Grouping by Two or More Columns; Emulating the SQL's WHERE Clause; The Pivot Tables; Cross-Tabulation; Summary
Data Visualization with matplotlib: Data Visualization; What is matplotlib?; Getting Started with matplotlib; The Plotting Window; The Figure Options; The matplotlib.pyplot.plot() Function; The matplotlib.pyplot.bar() Function; The matplotlib.pyplot.pie() Function; Subplots; Using the matplotlib.gridspec.GridSpec Object; The matplotlib.pyplot.subplot() Function; Hands-on Exercise; Figures; Saving Figures to File; Visualization with pandas; Working with matplotlib in Jupyter Notebooks; Summary
Data Science and ML Algorithms in scikit-learn: Data Science, Machine Learning, AI?; Types of Machine Learning; Terminology: Features and Observations; Continuous and Categorical Features (Variables); Terminology: Axis; The scikit-learn Package; scikit-learn Estimators; Models, Estimators, and Predictors; Common Distance Metrics; The Euclidean Metric; The LIBSVM Format; Scaling of the Features; The Curse of Dimensionality; Supervised vs Unsupervised Machine Learning; Supervised Machine Learning Algorithms; Unsupervised Machine Learning Algorithms; Choose the Right Algorithm; Life-cycles of Machine Learning Development; Data Split for Training and Test Data Sets; Data Splitting in scikit-learn; Hands-on Exercise; Classification Examples; Classifying with k-Nearest Neighbors (SL); k-Nearest Neighbors Algorithm; The Error Rate; Hands-on Exercise; Dimensionality Reduction; The Advantages of Dimensionality Reduction; Principal Component Analysis (PCA); Hands-on Exercise; Data Blending; Decision Trees (SL); Decision Tree Terminology; Decision Tree Classification in Context of Information Theory; Information Entropy Defined; The Shannon Entropy Formula; The Simplified Decision Tree Algorithm; Using Decision Trees; Random Forests; SVM; Naive Bayes Classifier (SL); Naive Bayesian Probabilistic Model in a Nutshell; Bayes Formula; Classification of Documents with Naive Bayes; Unsupervised Learning Type: Clustering; Clustering Examples; k-Means Clustering (UL); k-Means Clustering in a Nutshell; k-Means Characteristics; Regression Analysis; Simple Linear Regression Model; Linear vs Non-Linear Regression; Linear Regression Illustration; Major Underlying Assumptions for Regression Analysis; Least-Squares Method (LSM); Locally Weighted Linear Regression; Regression Models in Excel; Multiple Regression Analysis; Logistic Regression; Regression vs Classification; Time-Series Analysis; Decomposing Time-Series; Summary
Lab Exercises: Lab 1 - Learning the Lab Environment; Lab 2 - Using Jupyter Notebook; Lab 3 - Repairing and Normalizing Data; Lab 4 - Computing Descriptive Statistics; Lab 5 - Data Grouping and Aggregation; Lab 6 - Data Visualization with matplotlib; Lab 7 - Data Splitting; Lab 8 - k-Nearest Neighbors Algorithm; Lab 9 - The k-means Algorithm; Lab 10 - The Random Forest Algorithm
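As a rough illustration of the data repairing, normalizing, and grouping topics listed above, here is a small hedged sketch using pandas and scikit-learn; the sample DataFrame contents are invented purely for the example.

```python
# Small sketch of repairing, normalizing, and grouping data with pandas/scikit-learn.
# The sample data is invented purely for illustration.
import pandas as pd
from sklearn.preprocessing import MinMaxScaler

df = pd.DataFrame({
    "city":  ["Oslo", "Oslo", "Lima", "Lima", "Lima"],
    "temp":  [21.0, None, 27.5, 26.0, None],
    "sales": [100, 120, 90, None, 110],
})

print(df.isnull().sum())                              # how much data is missing per column
df["temp"] = df["temp"].interpolate()                 # repair missing values by interpolation
df["sales"] = df["sales"].fillna(df["sales"].mean())  # or replace them with the mean

# Normalize the numeric columns to the [0, 1] range
df[["temp", "sales"]] = MinMaxScaler().fit_transform(df[["temp", "sales"]])

# Aggregate and group, similar to SQL's GROUP BY
print(df.groupby("city")[["temp", "sales"]].mean())
```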

Delivered Online, Flexible Dates
Price on Enquiry

Data Science for Marketing Analytics

By Nexus Human

Duration: 3 Days (18 CPD hours)

Who should attend: Data Science for Marketing Analytics is designed for developers and marketing analysts looking to use new, more sophisticated tools in their marketing analytics efforts. It will help if you have prior experience of coding in Python and knowledge of high-school-level mathematics. Some experience with databases, Excel, statistics, or Tableau is useful but not necessary.

Overview: By the end of this course, you will be able to build your own marketing reporting and interactive dashboard solutions.

The course starts by teaching you how to use Python libraries, such as pandas and Matplotlib, to read data from Python, manipulate it, and create plots using both categorical and continuous variables. Then, you'll learn how to segment a population into groups and use different clustering techniques to evaluate customer segmentation. As you make your way through the course, you'll explore ways to evaluate and select the best segmentation approach, and go on to create a linear regression model on customer value data to predict lifetime value. In the concluding sections, you'll gain an understanding of regression techniques and tools for evaluating regression models, and explore ways to predict customer choice using classification algorithms. Finally, you'll apply these techniques to create a churn model for modeling customer product choices.

Course outline:
Data Preparation and Cleaning: Data Models and Structured Data; pandas; Data Manipulation
Data Exploration and Visualization: Identifying the Right Attributes; Generating Targeted Insights; Visualizing Data
Unsupervised Learning: Customer Segmentation: Customer Segmentation Methods; Similarity and Data Standardization; k-means Clustering
Choosing the Best Segmentation Approach: Choosing the Number of Clusters; Different Methods of Clustering; Evaluating Clustering
Predicting Customer Revenue Using Linear Regression: Understanding Regression; Feature Engineering for Regression; Performing and Interpreting Linear Regression
Other Regression Techniques and Tools for Evaluation: Evaluating the Accuracy of a Regression Model; Using Regularization for Feature Selection; Tree-Based Regression Models
Supervised Learning: Predicting Customer Churn: Classification Problems; Understanding Logistic Regression; Creating a Data Science Pipeline
Fine-Tuning Classification Algorithms: Support Vector Machine; Decision Trees; Random Forest; Preprocessing Data for Machine Learning Models; Model Evaluation; Performance Metrics
Modeling Customer Choice: Understanding Multiclass Classification; Class Imbalanced Data

Additional course details: The Nexus Humans Data Science for Marketing Analytics training program is a workshop that presents an invigorating mix of sessions, lessons, and masterclasses meticulously crafted to propel your learning expedition forward. This immersive bootcamp-style experience boasts interactive lectures, hands-on labs, and collaborative hackathons, all strategically designed to fortify fundamental concepts. Guided by seasoned coaches, each session offers priceless insights and practical skills crucial for honing your expertise. Whether you're taking your first steps into professional skills or are already a seasoned professional, this comprehensive course ensures you're equipped with the knowledge and prowess necessary for success. While we feel this is the best course for Data Science for Marketing Analytics and one of our Top 10, we encourage you to read the course outline to make sure it is the right content for you. Additionally, private sessions, closed classes or dedicated events are available both live online and at our training centres in Dublin and London, as well as at your offices anywhere in the UK, Ireland or across EMEA.
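To make the segmentation material concrete, here is a minimal hedged sketch of standardizing customer features and clustering them with k-means in scikit-learn; the file name and column names are placeholders rather than the course dataset.

```python
# Hedged sketch: customer segmentation with standardization + k-means clustering.
# 'customers.csv' and its columns are placeholders, not the course data.
import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

customers = pd.read_csv("customers.csv")   # e.g. spend, frequency, recency per customer
cols = ["annual_spend", "purchase_frequency", "days_since_last_order"]

X = StandardScaler().fit_transform(customers[cols])   # put features on a comparable scale

kmeans = KMeans(n_clusters=4, n_init=10, random_state=0).fit(X)
customers["segment"] = kmeans.labels_

# Profile each segment by its average feature values
print(customers.groupby("segment")[cols].mean())
```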

Delivered Online, Flexible Dates
Price on Enquiry

R Programming for Data Science (v1.0)

By Nexus Human

Duration: 5 Days (30 CPD hours)

Who should attend: This course is designed for students who want to learn the R programming language, particularly students who want to leverage R for data analysis and data science tasks in their organization. The course is also designed for students with an interest in applying statistics to real-world problems. A typical student in this course should have several years of experience with computing technology, along with proficiency in at least one other programming language.

Overview: In this course, you will use R to perform common data science tasks. You will: set up an R development environment and execute simple code; perform operations on atomic data types in R, including characters, numbers, and logicals; perform operations on data structures in R, including vectors, lists, and data frames; write conditional statements and loops; structure code for reuse with functions and packages; manage data by loading and saving datasets, manipulating data frames, and more; analyze data through exploratory analysis, statistical analysis, and more; create and format data visualizations using base R and ggplot2; and create simple statistical models from data.

In our data-driven world, organizations need the right tools to extract valuable insights from that data. The R programming language is one of the tools at the forefront of data science. Its robust set of packages and statistical functions makes it a powerful choice for analyzing data, manipulating data, performing statistical tests on data, and creating predictive models from data. Likewise, R is notable for its strong data visualization tools, enabling you to create high-quality graphs and plots that are incredibly customizable. This course will teach you the fundamentals of programming in R to get you started. It will also teach you how to use R to perform common data science tasks and achieve data-driven results for the business.

Course outline:
Lesson 1: Setting Up R and Executing Simple Code. Topic A: Set Up the R Development Environment; Topic B: Write R Statements
Lesson 2: Processing Atomic Data Types. Topic A: Process Characters; Topic B: Process Numbers; Topic C: Process Logicals
Lesson 3: Processing Data Structures. Topic A: Process Vectors; Topic B: Process Factors; Topic C: Process Data Frames; Topic D: Subset Data Structures
Lesson 4: Writing Conditional Statements and Loops. Topic A: Write Conditional Statements; Topic B: Write Loops
Lesson 5: Structuring Code for Reuse. Topic A: Define and Call Functions; Topic B: Apply Loop Functions; Topic C: Manage R Packages
Lesson 6: Managing Data in R. Topic A: Load Data; Topic B: Save Data; Topic C: Manipulate Data Frames Using Base R; Topic D: Manipulate Data Frames Using dplyr; Topic E: Handle Dates and Times
Lesson 7: Analyzing Data in R. Topic A: Examine Data; Topic B: Explore the Underlying Distribution of Data; Topic C: Identify Missing Values
Lesson 8: Visualizing Data in R. Topic A: Plot Data Using Base R Functions; Topic B: Plot Data Using ggplot2; Topic C: Format Plots in ggplot2; Topic D: Create Combination Plots
Lesson 9: Modeling Data in R. Topic A: Create Statistical Models in R; Topic B: Create Machine Learning Models in R

Delivered Online, Flexible Dates
Price on Enquiry

Hands-on Predictive Analytics with Python (TTPS4879)

By Nexus Human

Duration: 3 Days (18 CPD hours)

Who should attend: This course is geared for attendees experienced with Python who wish to learn and use basic machine learning algorithms and concepts. Students should have skills at least equivalent to the Python for Data Science courses we offer.

Overview: Working in a hands-on learning environment, guided by our expert team, attendees will learn to: understand the main concepts and principles of predictive analytics; use the Python data analytics ecosystem to implement end-to-end predictive analytics projects; explore advanced predictive modeling algorithms with an emphasis on theory and intuitive explanations; deploy a predictive model's results as an interactive application; understand the stages involved in producing complete predictive analytics solutions; define a problem, propose a solution, and prepare a dataset; use visualizations to explore relationships and gain insights into the dataset; build regression and classification models using scikit-learn; use Keras to build powerful neural network models that produce accurate predictions; and serve a model's predictions as a web application.

Predictive analytics is an applied field that employs a variety of quantitative methods using data to make predictions. It involves much more than just throwing data onto a computer to build a model. This course provides practical coverage to help you understand the most important concepts of predictive analytics. Using practical, step-by-step examples, we build predictive analytics solutions while using cutting-edge Python tools and packages. Hands-on Predictive Analytics with Python is a three-day, hands-on course that guides students through a step-by-step approach to defining problems and identifying relevant data. Students will learn how to perform data preparation, explore and visualize relationships, and build, tune, evaluate, and deploy models. Each stage has relevant practical examples and efficient Python code. You will work with models such as KNN, Random Forests, and neural networks using the most important libraries in Python's data science stack: NumPy, pandas, Matplotlib, Seaborn, Keras, Dash, and so on. In addition to hands-on code examples, you will find intuitive explanations of the inner workings of the main techniques and algorithms used in predictive analytics.

Course outline:
The Predictive Analytics Process: Technical requirements; What is predictive analytics?; Reviewing important concepts of predictive analytics; The predictive analytics process; A quick tour of Python's data science stack
Problem Understanding and Data Preparation: Technical requirements; Understanding the business problem and proposing a solution; Practical project: diamond prices; Practical project: credit card default
Dataset Understanding: Exploratory Data Analysis: Technical requirements; What is EDA?; Univariate EDA; Bivariate EDA; Introduction to graphical multivariate EDA
Predicting Numerical Values with Machine Learning: Technical requirements; Introduction to ML; Practical considerations before modeling; MLR; Lasso regression; KNN; Training versus testing error
Predicting Categories with Machine Learning: Technical requirements; Classification tasks; Credit card default dataset; Logistic regression; Classification trees; Random forests; Training versus testing error; Multiclass classification; Naive Bayes classifiers
Introducing Neural Nets for Predictive Analytics: Technical requirements; Introducing neural network models; Introducing TensorFlow and Keras; Regressing with neural networks; Classification with neural networks; The dark art of training neural networks
Model Evaluation: Technical requirements; Evaluation of regression models; Evaluation for classification models; The k-fold cross-validation
Model Tuning and Improving Performance: Technical requirements; Hyperparameter tuning; Improving performance
Implementing a Model with Dash: Technical requirements; Model communication and/or deployment phase; Introducing Dash; Implementing a predictive model as a web application

Additional course details: The Nexus Humans Hands-on Predictive Analytics with Python (TTPS4879) training program is a workshop that presents an invigorating mix of sessions, lessons, and masterclasses meticulously crafted to propel your learning expedition forward. This immersive bootcamp-style experience boasts interactive lectures, hands-on labs, and collaborative hackathons, all strategically designed to fortify fundamental concepts. Guided by seasoned coaches, each session offers priceless insights and practical skills crucial for honing your expertise. Whether you're taking your first steps into professional skills or are already a seasoned professional, this comprehensive course ensures you're equipped with the knowledge and prowess necessary for success. While we feel this is the best course for Hands-on Predictive Analytics with Python (TTPS4879) and one of our Top 10, we encourage you to read the course outline to make sure it is the right content for you. Additionally, private sessions, closed classes or dedicated events are available both live online and at our training centres in Dublin and London, as well as at your offices anywhere in the UK, Ireland or across EMEA.
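To give a flavour of the modelling stages described above, here is a small hedged sketch that fits a random forest regressor and a tiny Keras network to a numeric target; the data loading is a placeholder and not the course's own project files.

```python
# Hedged sketch of the regression workflow: train/test split, a random forest,
# and a small Keras network. Data loading is a placeholder, not the course files.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error
from tensorflow import keras

df = pd.read_csv("diamonds.csv")                   # placeholder file name
X = df.drop(columns=["price"]).select_dtypes("number")
y = df["price"]
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_train, y_train)
print("RF MAE:", mean_absolute_error(y_test, rf.predict(X_test)))

model = keras.Sequential([
    keras.layers.Dense(64, activation="relu", input_shape=(X_train.shape[1],)),
    keras.layers.Dense(32, activation="relu"),
    keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mae")
model.fit(X_train, y_train, epochs=20, batch_size=64, verbose=0)
print("NN MAE:", model.evaluate(X_test, y_test, verbose=0))
```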

Delivered Online, Flexible Dates
Price on Enquiry

Google Cloud Platform Big Data and Machine Learning Fundamentals

By Nexus Human

Duration: 1 Day (6 CPD hours)

Who should attend: This class is intended for the following: data analysts, data scientists, and business analysts getting started with Google Cloud Platform; individuals responsible for designing pipelines and architectures for data processing, creating and maintaining machine learning and statistical models, querying datasets, visualizing query results, and creating reports; and executives and IT decision makers evaluating Google Cloud Platform for use by data scientists.

Overview: This course teaches students the following skills: identify the purpose and value of the key Big Data and Machine Learning products in the Google Cloud Platform; use Cloud SQL and Cloud Dataproc to migrate existing MySQL and Hadoop/Pig/Spark/Hive workloads to Google Cloud Platform; employ BigQuery and Cloud Datalab to carry out interactive data analysis; train and use a neural network using TensorFlow; employ ML APIs; and choose between different data processing products on the Google Cloud Platform.

This course introduces participants to the Big Data and Machine Learning capabilities of Google Cloud Platform (GCP). It provides a quick overview of the Google Cloud Platform and a deeper dive into the data processing capabilities.

Course outline:
Introducing Google Cloud Platform: Google Platform Fundamentals Overview; Google Cloud Platform Big Data Products
Compute and Storage Fundamentals: CPUs on demand (Compute Engine); A global filesystem (Cloud Storage); CloudShell; Lab: Set up an Ingest-Transform-Publish data processing pipeline
Data Analytics on the Cloud: Stepping-stones to the cloud; Cloud SQL: your SQL database on the cloud; Lab: Importing data into Cloud SQL and running queries; Spark on Dataproc; Lab: Machine Learning Recommendations with Spark on Dataproc
Scaling Data Analysis: Fast random access; Datalab; BigQuery; Lab: Build a machine learning dataset
Machine Learning: Machine Learning with TensorFlow; Lab: Carry out ML with TensorFlow; Pre-built models for common needs; Lab: Employ ML APIs
Data Processing Architectures: Message-oriented architectures with Pub/Sub; Creating pipelines with Dataflow; Reference architecture for real-time and batch data processing
Summary: Why GCP?; Where to go from here; Additional Resources

Additional course details: The Nexus Humans Google Cloud Platform Big Data and Machine Learning Fundamentals training program is a workshop that presents an invigorating mix of sessions, lessons, and masterclasses meticulously crafted to propel your learning expedition forward. This immersive bootcamp-style experience boasts interactive lectures, hands-on labs, and collaborative hackathons, all strategically designed to fortify fundamental concepts. Guided by seasoned coaches, each session offers priceless insights and practical skills crucial for honing your expertise. Whether you're taking your first steps into professional skills or are already a seasoned professional, this comprehensive course ensures you're equipped with the knowledge and prowess necessary for success. While we feel this is the best course for Google Cloud Platform Big Data and Machine Learning Fundamentals and one of our Top 10, we encourage you to read the course outline to make sure it is the right content for you. Additionally, private sessions, closed classes or dedicated events are available both live online and at our training centres in Dublin and London, as well as at your offices anywhere in the UK, Ireland or across EMEA.
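For a sense of the BigQuery portion of the material, here is a minimal hedged sketch that runs a query against a BigQuery public dataset from Python. It assumes the google-cloud-bigquery client library is installed and GCP credentials are already configured; the dataset choice is illustrative, not part of the course labs.

```python
# Hedged sketch: querying a BigQuery public dataset with the Python client library.
# Assumes `pip install google-cloud-bigquery` and authenticated GCP credentials.
from google.cloud import bigquery

client = bigquery.Client()  # uses the active project from your credentials
query = """
    SELECT name, SUM(number) AS total
    FROM `bigquery-public-data.usa_names.usa_1910_2013`
    GROUP BY name
    ORDER BY total DESC
    LIMIT 5
"""
for row in client.query(query).result():
    print(row.name, row.total)
```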

Delivered Online, Flexible Dates
Price on Enquiry

Data Warehousing on AWS

By Nexus Human

Duration: 3 Days (18 CPD hours)

Who should attend: This course is intended for database architects, database administrators, database developers, and data analysts and scientists.

Overview: This course is designed to teach you how to: discuss the core concepts of data warehousing, and the intersection between data warehousing and big data solutions; launch an Amazon Redshift cluster and use its components, features, and functionality to implement a data warehouse in the cloud; use other AWS data and analytic services, such as Amazon DynamoDB, Amazon EMR, Amazon Kinesis, and Amazon S3, to contribute to the data warehousing solution; architect the data warehouse; identify performance issues, optimize queries, and tune the database for better performance; use Amazon Redshift Spectrum to analyze data directly from an Amazon S3 bucket; and use Amazon QuickSight to perform data analysis and visualization tasks against the data warehouse.

Data Warehousing on AWS introduces you to concepts, strategies, and best practices for designing a cloud-based data warehousing solution using Amazon Redshift, the petabyte-scale data warehouse in AWS. This course demonstrates how to collect, store, and prepare data for the data warehouse by using other AWS services such as Amazon DynamoDB, Amazon EMR, Amazon Kinesis, and Amazon S3. Additionally, this course demonstrates how to use Amazon QuickSight to perform analysis on your data.

Course outline:
Module 1: Introduction to Data Warehousing: Relational databases; Data warehousing concepts; The intersection of data warehousing and big data; Overview of data management in AWS; Hands-on lab 1: Introduction to Amazon Redshift
Module 2: Introduction to Amazon Redshift: Conceptual overview; Real-world use cases; Hands-on lab 2: Launching an Amazon Redshift cluster
Module 3: Launching clusters: Building the cluster; Connecting to the cluster; Controlling access; Database security; Loading data; Hands-on lab 3: Optimizing database schemas
Module 4: Designing the database schema: Schemas and data types; Columnar compression; Data distribution styles; Data sorting methods
Module 5: Identifying data sources: Data sources overview; Amazon S3; Amazon DynamoDB; Amazon EMR; Amazon Kinesis Data Firehose; AWS Lambda Database Loader for Amazon Redshift; Hands-on lab 4: Loading real-time data into an Amazon Redshift database
Module 6: Loading data: Preparing data; Loading data using COPY; Concurrent write operations; Troubleshooting load issues; Hands-on lab 5: Loading data with the COPY command
Module 7: Writing queries and tuning for performance: Amazon Redshift SQL; User-Defined Functions (UDFs); Factors that affect query performance; The EXPLAIN command and query plans; Workload Management (WLM); Hands-on lab 6: Configuring workload management
Module 8: Amazon Redshift Spectrum: Amazon Redshift Spectrum; Configuring data for Amazon Redshift Spectrum; Amazon Redshift Spectrum queries; Hands-on lab 7: Using Amazon Redshift Spectrum
Module 9: Maintaining clusters: Audit logging; Performance monitoring; Events and notifications; Lab 8: Auditing and monitoring clusters; Resizing clusters; Backing up and restoring clusters; Resource tagging, limits, and constraints; Hands-on lab 9: Backing up, restoring and resizing clusters
Module 10: Analyzing and visualizing data: Power of visualizations; Building dashboards; Amazon QuickSight editions and features
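To make the COPY-based loading in Module 6 concrete, here is a small hedged sketch that connects to a Redshift cluster with psycopg2 and loads CSV files from Amazon S3; the cluster endpoint, credentials, table, bucket, and IAM role ARN are all placeholders for illustration.

```python
# Hedged sketch: loading S3 data into Amazon Redshift with the COPY command.
# Endpoint, credentials, table, bucket, and IAM role below are placeholders.
import psycopg2

conn = psycopg2.connect(
    host="my-cluster.abc123.eu-west-1.redshift.amazonaws.com",
    port=5439, dbname="dev", user="awsuser", password="********")
conn.autocommit = True

copy_sql = """
    COPY sales
    FROM 's3://my-bucket/sales/2024/'
    IAM_ROLE 'arn:aws:iam::123456789012:role/RedshiftCopyRole'
    FORMAT AS CSV
    IGNOREHEADER 1;
"""
with conn.cursor() as cur:
    cur.execute(copy_sql)                      # bulk load from S3 in parallel
    cur.execute("SELECT COUNT(*) FROM sales;")
    print("rows loaded:", cur.fetchone()[0])
conn.close()
```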

Delivered Online, Flexible Dates
Price on Enquiry

From Data to Insights with Google Cloud Platform

By Nexus Human

Duration: 3 Days (18 CPD hours)

Who should attend: Data analysts, business analysts, and business intelligence professionals, as well as cloud data engineers who will be partnering with data analysts to build scalable data solutions on Google Cloud Platform. To get the most out of this course, we recommend participants have some proficiency with ANSI SQL.

Overview: This course teaches students the following skills: derive insights from data using the analysis and visualization tools on Google Cloud Platform; interactively query datasets using Google BigQuery; load, clean, and transform data at scale; visualize data using Google Data Studio and other third-party platforms; distinguish between exploratory and explanatory analytics and when to use each approach; explore new datasets and uncover hidden insights quickly and effectively; and optimize data models and queries for price and performance.

Want to know how to query and process petabytes of data in seconds? Curious about data analysis that scales automatically as your data grows? Welcome to the Data Insights course! This four-course accelerated online specialization teaches course participants how to derive insights through data analysis and visualization using the Google Cloud Platform. The courses feature interactive scenarios and hands-on labs where participants explore, mine, load, visualize, and extract insights from diverse Google BigQuery datasets. The courses also cover data loading, querying, schema modeling, optimizing performance, query pricing, and data visualization.

Course outline:
Introduction to Data on the Google Cloud Platform: Highlight Analytics Challenges Faced by Data Analysts; Compare Big Data On-Premises vs on the Cloud; Learn from Real-World Use Cases of Companies Transformed through Analytics on the Cloud; Navigate Google Cloud Platform Project Basics; Lab: Getting started with Google Cloud Platform
Big Data Tools Overview: Walkthrough Data Analyst Tasks, Challenges, and Introduce Google Cloud Platform Data Tools; Demo: Analyze 10 Billion Records with Google BigQuery; Explore 9 Fundamental Google BigQuery Features; Compare GCP Tools for Analysts, Data Scientists, and Data Engineers; Lab: Exploring Datasets with Google BigQuery
Exploring your Data with SQL: Compare Common Data Exploration Techniques; Learn How to Code High Quality Standard SQL; Explore Google BigQuery Public Datasets; Visualization Preview: Google Data Studio; Lab: Troubleshoot Common SQL Errors
Google BigQuery Pricing: Walkthrough of a BigQuery Job; Calculate BigQuery Pricing: Storage, Querying, and Streaming Costs; Optimize Queries for Cost; Lab: Calculate Google BigQuery Pricing
Cleaning and Transforming your Data: Examine the 5 Principles of Dataset Integrity; Characterize Dataset Shape and Skew; Clean and Transform Data using SQL; Clean and Transform Data using a new UI: Introducing Cloud Dataprep; Lab: Explore and Shape Data with Cloud Dataprep
Storing and Exporting Data: Compare Permanent vs Temporary Tables; Save and Export Query Results; Performance Preview: Query Cache; Lab: Creating new Permanent Tables
Ingesting New Datasets into Google BigQuery: Query from External Data Sources; Avoid Data Ingesting Pitfalls; Ingest New Data into Permanent Tables; Discuss Streaming Inserts; Lab: Ingesting and Querying New Datasets
Data Visualization: Overview of Data Visualization Principles; Exploratory vs Explanatory Analysis Approaches; Demo: Google Data Studio UI; Connect Google Data Studio to Google BigQuery; Lab: Exploring a Dataset in Google Data Studio
Joining and Merging Datasets: Merge Historical Data Tables with UNION; Introduce Table Wildcards for Easy Merges; Review Data Schemas: Linking Data Across Multiple Tables; Walkthrough JOIN Examples and Pitfalls; Lab: Join and Union Data from Multiple Tables
Advanced Functions and Clauses: Review SQL Case Statements; Introduce Analytical Window Functions; Safeguard Data with One-Way Field Encryption; Discuss Effective Sub-query and CTE Design; Compare SQL and JavaScript UDFs; Lab: Deriving Insights with Advanced SQL Functions
Schema Design and Nested Data Structures: Compare Google BigQuery vs Traditional RDBMS Data Architecture; Normalization vs Denormalization: Performance Tradeoffs; Schema Review: The Good, The Bad, and The Ugly; Arrays and Nested Data in Google BigQuery; Lab: Querying Nested and Repeated Data
More Visualization with Google Data Studio: Create Case Statements and Calculated Fields; Avoid Performance Pitfalls with Cache Considerations; Share Dashboards and Discuss Data Access Considerations
Optimizing for Performance: Avoid Google BigQuery Performance Pitfalls; Prevent Hotspots in your Data; Diagnose Performance Issues with the Query Explanation Map; Lab: Optimizing and Troubleshooting Query Performance
Advanced Insights: Introducing Cloud Datalab; Cloud Datalab Notebooks and Cells; Benefits of Cloud Datalab
Data Access: Compare IAM and BigQuery Dataset Roles; Avoid Access Pitfalls; Review Members, Roles, Organizations, Account Administration, and Service Accounts
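Tying together the standard SQL exploration and query pricing modules, here is a minimal hedged sketch that performs a BigQuery dry run to estimate bytes scanned before actually paying for the query. It assumes the google-cloud-bigquery client library and authenticated credentials, and the public sample dataset and its column names are recalled from memory rather than taken from the course labs.

```python
# Hedged sketch: estimate query cost with a BigQuery dry run, then execute it.
# Assumes google-cloud-bigquery is installed and credentials are configured;
# the public sample table and column names are assumptions for illustration.
from google.cloud import bigquery

client = bigquery.Client()
sql = """
    SELECT year, COUNT(*) AS births
    FROM `bigquery-public-data.samples.natality`
    GROUP BY year
    ORDER BY year
"""

dry = client.query(sql, job_config=bigquery.QueryJobConfig(dry_run=True))
print("Would scan about", dry.total_bytes_processed / 1e9, "GB")

rows = client.query(sql).result()   # run it for real once the cost looks sensible
for row in rows:
    print(row.year, row.births)
```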

Delivered Online, Flexible Dates
Price on Enquiry