Duration: 2 days (12 CPD hours)

This course is intended for IBM SPSS Statistics users who want to familiarize themselves with the statistical capabilities of IBM SPSS Statistics Base, and for anyone who wants to refresh their statistical knowledge and experience.

Overview: Introduction to statistical analysis; Describing individual variables; Testing hypotheses; Testing hypotheses on individual variables; Testing on the relationship between categorical variables; Testing on the difference between two group means; Testing on differences between more than two group means; Testing on the relationship between scale variables; Predicting a scale variable: Regression; Introduction to Bayesian statistics; Overview of multivariate procedures.

This course provides an application-oriented introduction to the statistical component of IBM SPSS Statistics. Students will review several statistical techniques and discuss situations in which they would use each technique, how to set up the analysis, and how to interpret the results. This includes a broad range of techniques for exploring and summarizing data, as well as investigating and testing relationships. Students will gain an understanding of when and why to use these various techniques, and how to apply them with confidence, interpret their output, and graphically display the results.

Introduction to statistical analysis: Identify the steps in the research process; Identify measurement levels.
Describing individual variables: Chart individual variables; Summarize individual variables; Identify the normal distribution; Identify standardized scores.
Testing hypotheses: Principles of statistical testing; One-sided versus two-sided testing; Type I and Type II errors and power.
Testing hypotheses on individual variables: Identify population parameters and sample statistics; Examine the distribution of the sample mean; Test a hypothesis on the population mean; Construct confidence intervals; Tests on a single variable.
Testing on the relationship between categorical variables: Chart the relationship; Describe the relationship; Test the hypothesis of independence; Assumptions; Identify differences between the groups; Measure the strength of the association.
Testing on the difference between two group means: Chart the relationship; Describe the relationship; Test the hypothesis of two equal group means; Assumptions.
Testing on differences between more than two group means: Chart the relationship; Describe the relationship; Test the hypothesis of all group means being equal; Assumptions; Identify differences between the group means.
Testing on the relationship between scale variables: Chart the relationship; Describe the relationship; Test the hypothesis of independence; Assumptions; Treatment of missing values.
Predicting a scale variable (regression): Explain linear regression; Identify unstandardized and standardized coefficients; Assess the fit; Examine residuals; Include 0-1 independent variables; Include categorical independent variables.
Introduction to Bayesian statistics: Bayesian statistics and classical test theory; The Bayesian approach; Evaluate a null hypothesis; Overview of Bayesian procedures in IBM SPSS Statistics.
Overview of multivariate procedures: Overview of supervised models; Overview of models to create natural groupings.
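For orientation, here is a minimal sketch of the kind of two-group mean comparison covered under "Testing on the difference between two group means", written in Python with SciPy rather than in IBM SPSS Statistics itself; the group values and the 5% significance level are invented for illustration and are not course data.

```python
# Illustrative only: a Welch two-sample t-test on made-up group values,
# mirroring the "difference between two group means" topic above.
import numpy as np
from scipy import stats

group_a = np.array([12.1, 14.3, 11.8, 13.5, 12.9, 15.0])
group_b = np.array([10.4, 11.2, 12.0, 10.9, 11.7, 11.1])

t_stat, p_value = stats.ttest_ind(group_a, group_b, equal_var=False)
print(f"t = {t_stat:.3f}, p = {p_value:.4f}")

# Interpret against an assumed 5% significance level.
if p_value < 0.05:
    print("Reject the null hypothesis of equal group means.")
else:
    print("Insufficient evidence to reject equal group means.")
```

In the course itself, the equivalent analysis is run through the IBM SPSS Statistics menus and output viewer rather than through code.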
Duration: 4 days (24 CPD hours)

This course is designed for data analysts, business intelligence specialists, developers, system architects, and database administrators.

Overview: Skills gained in this training include: The features that Pig, Hive, and Impala offer for data acquisition, storage, and analysis; The fundamentals of Apache Hadoop and data ETL (extract, transform, load), ingestion, and processing with Hadoop; How Pig, Hive, and Impala improve productivity for typical analysis tasks; Joining diverse datasets to gain valuable business insight; Performing real-time, complex queries on datasets.

Cloudera University's four-day data analyst training course, focusing on Apache Pig, Hive, and Cloudera Impala, will teach you to apply traditional data analytics and business intelligence skills to big data.

Hadoop Fundamentals: The Motivation for Hadoop; Hadoop Overview; Data Storage: HDFS; Distributed Data Processing: YARN, MapReduce, and Spark; Data Processing and Analysis: Pig, Hive, and Impala; Data Integration: Sqoop; Other Hadoop Data Tools; Exercise Scenarios Explanation.
Introduction to Pig: What Is Pig?; Pig's Features; Pig Use Cases; Interacting with Pig.
Basic Data Analysis with Pig: Pig Latin Syntax; Loading Data; Simple Data Types; Field Definitions; Data Output; Viewing the Schema; Filtering and Sorting Data; Commonly-Used Functions.
Processing Complex Data with Pig: Storage Formats; Complex/Nested Data Types; Grouping; Built-In Functions for Complex Data; Iterating Grouped Data.
Multi-Dataset Operations with Pig: Techniques for Combining Data Sets; Joining Data Sets in Pig; Set Operations; Splitting Data Sets.
Pig Troubleshooting & Optimization: Troubleshooting Pig; Logging; Using Hadoop's Web UI; Data Sampling and Debugging; Performance Overview; Understanding the Execution Plan; Tips for Improving the Performance of Your Pig Jobs.
Introduction to Hive & Impala: What Is Hive?; What Is Impala?; Schema and Data Storage; Comparing Hive to Traditional Databases; Hive Use Cases.
Querying with Hive & Impala: Databases and Tables; Basic Hive and Impala Query Language Syntax; Data Types; Differences Between Hive and Impala Query Syntax; Using Hue to Execute Queries; Using the Impala Shell.
Data Management: Data Storage; Creating Databases and Tables; Loading Data; Altering Databases and Tables; Simplifying Queries with Views; Storing Query Results.
Data Storage & Performance: Partitioning Tables; Choosing a File Format; Managing Metadata; Controlling Access to Data.
Relational Data Analysis with Hive & Impala: Joining Datasets; Common Built-In Functions; Aggregation and Windowing.
Working with Impala: How Impala Executes Queries; Extending Impala with User-Defined Functions; Improving Impala Performance.
Analyzing Text and Complex Data with Hive: Complex Values in Hive; Using Regular Expressions in Hive; Sentiment Analysis and N-Grams; Conclusion.
Hive Optimization: Understanding Query Performance; Controlling Job Execution Plan; Bucketing; Indexing Data.
Extending Hive: SerDes; Data Transformation with Custom Scripts; User-Defined Functions; Parameterized Queries.
Choosing the Best Tool for the Job: Comparing MapReduce, Pig, Hive, Impala, and Relational Databases; Which to Choose?
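To give a flavour of the query style the Hive and Impala modules work towards, the sketch below runs a HiveQL-style join and aggregation through PySpark's Hive support; the orders and customers tables and their columns are hypothetical, and the course itself works through tools such as Hue and the Impala shell rather than this API.

```python
# Hypothetical example: a HiveQL-style join and aggregation submitted via
# PySpark's Hive integration. Table and column names are placeholders.
from pyspark.sql import SparkSession

spark = (SparkSession.builder
         .appName("hive-style-join")
         .enableHiveSupport()
         .getOrCreate())

result = spark.sql("""
    SELECT c.region,
           COUNT(*)     AS order_count,
           SUM(o.total) AS revenue
    FROM orders o
    JOIN customers c ON o.customer_id = c.id
    GROUP BY c.region
    ORDER BY revenue DESC
""")
result.show()

spark.stop()
```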
Duration: 2 days (12 CPD hours)

This course is intended for Authors.

Overview: Please refer to course overview.

This course teaches experienced authors advanced report building techniques to enhance, customize, manage, and distribute reports. Additionally, the student will learn how to create highly interactive and engaging reports that can be run offline by creating Active Reports.

Create query models: Build a query and connect it to a report; Answer a business question by referencing data in a separate query.
Create reports based on query relationships: Create join relationships between queries; Combine data containers based on relationships from different queries; Create a report comparing the percentage of change.
Introduction to dimensional reporting concepts: Examine data sources and model types; Describe the dimensional approach to queries; Apply report authoring styles.
Introduction to dimensional data in reports: Use members to create reports; Identify sets and tuples in reports; Use query calculations and set definitions.
Dimensional report context: Examine dimensional report members; Examine dimensional report measures; Use the default measure to create a summarized column in a report.
Focus your dimensional data: Focus your report by excluding members of a defined set; Compare the use of the filter() function to a detail filter; Filter dimensional data using slicers.
Calculations and dimensional functions: Examine dimensional functions; Show totals and exclude members; Create a percent of base calculation.
Create advanced dynamic reports: Use query macros; Control report output using a query macro; Create a dynamic growth report; Create a report that displays summary data before detailed data and uses singletons to summarize information.
Design effective prompts: Create a prompt that allows users to select conditional formatting values; Create a prompt that provides users a choice between different filters; Create a prompt to let users choose a column sort order; Create a prompt to let users select a display type.
Examine the report specification: Examine report specification flow; Identify considerations when modifying report specifications; Customize reporting objects.
Distribute reports: Burst a report to email recipients by using a data item; Burst a list report to the IBM Cognos Analytics portal by using a burst table; Burst a crosstab report to the IBM Cognos Analytics portal by using a burst table and a master detail relationship.
Enhance user interaction with HTML: Create interactive reports using HTML; Include additional information with tooltips; Send emails using links in a report.
Introduction to IBM Cognos Active Reports: Examine Active Report controls and variables; Create a simple Active Report using Static and Data-driven controls; Change filtering and selection behavior in a report; Create interaction between multiple controls and variables.
Active Report charts and decks: Create an Active Report with a Deck; Create an Active Report with 11.0 visualizations.
Duration: 3 days (18 CPD hours)

This course is intended for Authors.

Overview: Please refer to course overview.

This course provides authors with an introduction to building reports using Cognos Analytics. Techniques to enhance, customize, and manage reports will be explored. Activities will illustrate and reinforce key concepts during this learning opportunity.

What is IBM Cognos Analytics - Reporting: The Welcome page; Consume report content; Interactive filtering; Working with reports; Dimensionally modeled relational data.
Use personal data sources and data modules: Upload personal data; Upload custom images; Using navigation paths in a data module.
Examine list reports: Group data; Format columns; Include headers and footers.
Aggregate fact data: Identify differences in aggregation.
Multiple facts and repeated information: Use shared dimensions to create multi-fact queries; Present repeated information.
Add repeated information to reports: Create a mailing list report.
Create crosstab reports: Add measures to a crosstab; Data sources for a crosstab.
Create complex crosstab reports: Add items as peers; Create crosstab nodes and crosstab members.
Work with crosstab data: Format, sort, and aggregate a crosstab; Create discontinuous crosstab reports.
Create visualization reports: Visualization categories; Customize visualizations; Client side visualizations; Enhanced map visualizations.
Focus reports using filters: Create filters to narrow the focus; Use advanced detail filters; Apply a filter with aggregation; Use summary filters.
Focus reports using prompts: Examine parameters and prompts; Create a parameter for a report item; Add a prompt page; Add a prompt item to a report; Identify a prompt type; Create a cascading prompt.
Use calculations: What are calculations; Add Date and Time functions; Add string functions; Display prompt selections in report titles.
Customize reports with conditional formatting: Three steps for conditional formatting; Create a variable; Assign the variable to a report object; Format based on the conditional value; Conditionally render report objects.
Drill-through definitions: Navigate to related data.
Enhance report layout: View the structure of a report; Use Guided report layout; Force page breaks; Create horizontal pagination; Modify the report structure; Format objects across reports.
Use additional report-building techniques: Enhance a report design; Add objects to reports; Convert a list to a crosstab; Explore reuse.

Additional course details: Nexus Humans B6258 IBM Cognos Analytics - Author Reports Fundamentals V11.1.x training program.
Duration: 2 days (12 CPD hours)

This course is intended for Report Authors.

Overview: Create query models; Create reports based on query relationships; Introduction to dimensional data; Introduction to dimensional data in reports; Dimensional report context; Focus your dimensional data; Calculations and dimensional functions; Create advanced dynamic reports.

This offering teaches Professional Report Authors about advanced report building techniques using relational data models, dimensional data, and ways of enhancing, customizing, managing, and distributing professional reports. The course builds on topics presented in the Fundamentals course. Activities will illustrate and reinforce key concepts during this learning activity.

Create query models: Build a query and connect it to a report; Answer a business question by referencing data in a separate query.
Create reports based on query relationships: Create join relationships between queries; Combine data containers based on relationships from different queries; Create a report comparing the percentage of change.
Introduction to dimensional reporting concepts: Examine data sources and model types; Describe the dimensional approach to queries; Apply report authoring styles.
Introduction to dimensional data in reports: Use members to create reports; Identify sets and tuples in reports; Use query calculations and set definitions.
Dimensional report context: Examine dimensional report members; Examine dimensional report measures; Use the default measure to create a summarized column in a report.
Focus your dimensional data: Focus your report by excluding members of a defined set; Compare the use of the filter() function to a detail filter; Filter dimensional data using slicers.
Calculations and dimensional functions: Examine dimensional functions; Show totals and exclude members; Create a percent of base calculation.
Create advanced dynamic reports: Use query macros; Control report output using a query macro; Create a dynamic growth report; Create a report that displays summary data before detailed data and uses singletons to summarize information.
Design effective prompts: Create a prompt that allows users to select conditional formatting values; Create a prompt that provides users a choice between different filters; Create a prompt to let users choose a column sort order; Create a prompt to let users select a display type.
Examine the report specification: Examine report specification flow; Identify considerations when modifying report specifications; Customize reporting objects.
Distribute reports: Burst a report to email recipients by using a data item; Burst a list report to the IBM Cognos Analytics portal by using a burst table; Burst a crosstab report to the IBM Cognos Analytics portal by using a burst table and a master detail relationship.
Enhance user interaction with HTML: Create interactive reports using HTML; Include additional information with tooltips; Send emails using links in a report.
Introduction to IBM Cognos Active Reports: Examine Active Report controls and variables; Create a simple Active Report using Static and Data-driven controls; Change filtering and selection behavior in a report; Create interaction between multiple controls and variables.
Active Report charts and decks: Create an Active Report with a Data deck; Use Master detail relationships with Decks; Optimize Active Reports; Create an Active Report with new visualizations.
Duration: 2 days (12 CPD hours)

This course is intended for report authors working with dimensional data sources.

Through interactive demonstrations and exercises, participants will learn how to author reports that navigate and manipulate dimensional data structures using the specific dimensional functions and features available in IBM Cognos Analytics.

Introduction to Dimensional Concepts: Identify different data sources and models; Investigate the OLAP dimensional structure; Identify dimensional data items and expressions; Differentiate the IBM Cognos Analytics query language from SQL and MDX; Differentiate relational and dimensional report authoring styles.
Introduction to Dimensional Data in Reports: Work with members; Identify sets and tuples in IBM Cognos Analytics.
Dimensional Report Context: Understand the purpose of report context; Understand how data is affected by default and root members.
Focus Your Dimensional Data: Compare dimensional queries to relational queries; Explain the importance of filtering dimensional queries; Evaluate different filtering techniques; Filter based on dimensions and members; Filter based on measure values; Filter using a slicer.
Calculations & Dimensional Functions: Use IBM Cognos Analytics dimensional functions to create sets and tuples; Perform arithmetic operations in OLAP queries; Identify coercion errors and rules.
Functions for Navigating Dimensional Hierarchies: Navigate dimensional data using family functions.
Relative Functions: Navigate dimensional data using relative functions; Navigate dimensional data using relative time functions.
Advanced Drilling Techniques & Member Sets: Understand default drill-up and drill-down functionality; Identify cases when you need to override default drilling behavior; Configure advanced drilling behavior to support sophisticated use cases; Define member sets to support advanced drilling; Define member sets to support functions.
Set Up Drill-Through Reports: Navigate from a specific report to a target report; Drill down to greater detail and then navigate to the target report; Navigate between reports created using different data sources.
End-to-End Workshop: Review concepts covered throughout the course.
Duration: 1 day (6 CPD hours)

This course is intended for data platform engineers, and for architects and operators who build and manage data analytics pipelines.

Overview: In this course, you will learn to: Compare the features and benefits of data warehouses, data lakes, and modern data architectures; Design and implement a batch data analytics solution; Identify and apply appropriate techniques, including compression, to optimize data storage; Select and deploy appropriate options to ingest, transform, and store data; Choose the appropriate instance and node types, clusters, auto scaling, and network topology for a particular business use case; Understand how data storage and processing affect the analysis and visualization mechanisms needed to gain actionable business insights; Secure data at rest and in transit; Monitor analytics workloads to identify and remediate problems; Apply cost management best practices.

In this course, you will learn to build batch data analytics solutions using Amazon EMR, an enterprise-grade Apache Spark and Apache Hadoop managed service. You will learn how Amazon EMR integrates with open-source projects such as Apache Hive, Hue, and HBase, and with AWS services such as AWS Glue and AWS Lake Formation. The course addresses data collection, ingestion, cataloging, storage, and processing components in the context of Spark and Hadoop. You will learn to use EMR Notebooks to support both analytics and machine learning workloads. You will also learn to apply security, performance, and cost management best practices to the operation of Amazon EMR.

Module A (Overview of Data Analytics and the Data Pipeline): Data analytics use cases; Using the data pipeline for analytics.
Module 1 (Introduction to Amazon EMR): Using Amazon EMR in analytics solutions; Amazon EMR cluster architecture; Interactive Demo 1: Launching an Amazon EMR cluster; Cost management strategies.
Module 2 (Data Analytics Pipeline Using Amazon EMR: Ingestion and Storage): Storage optimization with Amazon EMR; Data ingestion techniques.
Module 3 (High-Performance Batch Data Analytics Using Apache Spark on Amazon EMR): Apache Spark on Amazon EMR use cases; Why Apache Spark on Amazon EMR; Spark concepts; Interactive Demo 2: Connect to an EMR cluster and perform Scala commands using the Spark shell; Transformation, processing, and analytics; Using notebooks with Amazon EMR; Practice Lab 1: Low-latency data analytics using Apache Spark on Amazon EMR.
Module 4 (Processing and Analyzing Batch Data with Amazon EMR and Apache Hive): Using Amazon EMR with Hive to process batch data; Transformation, processing, and analytics; Practice Lab 2: Batch data processing using Amazon EMR with Hive; Introduction to Apache HBase on Amazon EMR.
Module 5 (Serverless Data Processing): Serverless data processing, transformation, and analytics; Using AWS Glue with Amazon EMR workloads; Practice Lab 3: Orchestrate data processing in Spark using AWS Step Functions.
Module 6 (Security and Monitoring of Amazon EMR Clusters): Securing EMR clusters; Interactive Demo 3: Client-side encryption with EMRFS; Monitoring and troubleshooting Amazon EMR clusters; Demo: Reviewing Apache Spark cluster history.
Module 7 (Designing Batch Data Analytics Solutions): Batch data analytics use cases; Activity: Designing a batch data analytics workflow.
Module B (Developing Modern Data Architectures on AWS): Modern data architectures.
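As an illustration of the batch pattern described in Modules 2 and 3 (ingest from Amazon S3, transform with Apache Spark, store an optimized copy), here is a minimal PySpark sketch; the bucket names, paths, and column names are placeholders, not course lab assets.

```python
# Minimal sketch of an EMR-style batch job: read raw CSV from S3, aggregate,
# and write partitioned Parquet back to S3. All paths and columns are placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("emr-batch-analytics").getOrCreate()

# Ingest raw data (placeholder S3 location).
events = spark.read.csv("s3://example-bucket/raw/events/", header=True, inferSchema=True)

# Transform: keep successful events and count them per day and event type.
daily_counts = (events
                .filter(F.col("status") == "ok")
                .groupBy(F.to_date("event_time").alias("event_date"), "event_type")
                .count())

# Store in a columnar, partitioned layout to optimize later queries.
(daily_counts.write
             .mode("overwrite")
             .partitionBy("event_date")
             .parquet("s3://example-bucket/curated/daily_counts/"))

spark.stop()
```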
Duration: 2 days (12 CPD hours)

This course is designed for the Business Analyst professional who is involved with testing the functionality of technology projects.

Overview: Develop an understanding of basic concepts associated with User Acceptance Testing; See how UAT applies to the Software Development Lifecycle (SDLC); Recognize benefits of improved quality of deployed software using User Acceptance Testing; Identify the key roles, activities and deliverables which make up User Acceptance Testing; Use a Business Use Case to define scenarios for testing; Create a UAT test plan and write UAT test cases with associated test data; Understand the process for testing functional and non-functional requirements; Identify the challenges of testing vendor-supplied applications.

This course looks at the issues which drive the need for a UAT process and describes the components of the process. It is designed to help Business Analysts to develop an understanding of their role, the process, and the deliverables associated with UAT.

Day 1: Software Testing - the Basics; Understanding the Tester's Terminology; The UAT Planning Process.
Day 2: UAT Test Coverage; Creating & Executing the UAT Test Cases; Verifying the Test Results; Testing Vendor-Supplied Applications.

Additional course details: Nexus Humans BA29 - User Acceptance Testing for Business Analysts training program.
Duration: 2 days (12 CPD hours)

This introductory-level course is intended for Business Analysts and Data Analysts (or anyone else in the data science realm) who are already comfortable working with numerical data in Excel or other spreadsheet environments. No prior programming experience is required, and a browser is the only tool necessary for the course.

Overview: This course is approximately 50% hands-on, combining expert lecture, real-world demonstrations and group discussions with machine-based practical labs and exercises. Our engaging instructors and mentors are highly experienced practitioners who bring years of current 'on-the-job' experience into every classroom. Throughout the hands-on course, students will learn to leverage Python scripting for data science (to a basic level) using the most current and efficient skills and techniques. Working in a hands-on learning environment, guided by our expert team, attendees will learn about and explore (to a basic level): how to work with Python interactively in web notebooks; the essentials of Python scripting; and key concepts necessary to enter the world of Data Science via Python.

This course introduces data analysts and business analysts (as well as anyone interested in Data Science) to the Python programming language, as it's often used in Data Science in web notebooks. The goal of this course is to provide students with a baseline understanding of core concepts that can serve as a platform of knowledge to follow up with more in-depth training and real-world practice.

Additional course details: Nexus Humans Python for Data Science Primer: Hands-on Technical Overview (TTPS4872) training program.
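For a sense of the notebook-style Python the primer introduces, here is a minimal sketch using pandas on invented, spreadsheet-like data; the column names and figures are made up, and the specific libraries covered may differ from those used in the actual course.

```python
# Spreadsheet-style work in Python: a derived column and a pivot-table-like
# aggregation on invented sales data.
import pandas as pd

sales = pd.DataFrame({
    "region": ["North", "South", "North", "South", "East"],
    "units":  [120, 95, 130, 87, 64],
    "price":  [9.99, 9.99, 10.49, 10.49, 11.25],
})

sales["revenue"] = sales["units"] * sales["price"]   # like a formula column in a spreadsheet
summary = sales.groupby("region")["revenue"].sum()   # like a pivot-table aggregation
print(summary)
```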
Duration: 1 day (6 CPD hours)

This course is intended for administrators, business analysts, or report writers who are new to creating reports or dashboards within Salesforce.

Overview: A student in this class will learn the basic Salesforce object model, and how to create and secure reports and dashboards. The instructor will lead students through exercises to create tabular, summary, matrix, and joined reports. Students will learn advanced reporting functionality such as charting, report summary fields, bucket fields, conditional highlighting, advanced report filters, and building custom report types. Finally, the student will learn how to create and run dashboards and schedule and email reports and dashboards.

This course is specifically designed to teach administrators, business analysts, or report writers how to utilize the basic and advanced analytic capabilities of Salesforce.

Introductions / Login to Training Orgs; Overview of Salesforce Object Model; Tabular, Summary, Matrix, Joined Reports; Charts, Bucket Fields, Report Summary Fields, Conditional Highlighting; Custom Report Types; Dashboards; Report & Dashboard Scheduling.

Additional course details: Nexus Humans Introduction to Salesforce.com Analytics - Building Reports and Dashboards training program.
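The class itself works in the point-and-click report and dashboard builders, but for readers curious about the underlying Salesforce object model, here is a hedged Python sketch using the third-party simple_salesforce library to run a SOQL query; the credentials, object, and fields shown are placeholders, and this library is not part of the course.

```python
# Hypothetical illustration of querying the Salesforce object model with SOQL
# via the third-party simple_salesforce package (not covered in the course).
from simple_salesforce import Salesforce

sf = Salesforce(username="user@example.com",       # placeholder credentials
                password="your-password",
                security_token="your-token")

# Pull the raw rows behind a typical "closed won opportunities" report.
results = sf.query("SELECT Name, Amount, CloseDate FROM Opportunity "
                   "WHERE StageName = 'Closed Won' LIMIT 10")

for record in results["records"]:
    print(record["Name"], record["Amount"], record["CloseDate"])
```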