
119 Educators providing HIV courses delivered Live Online

Curative Care Alliance


London

About Us With our organisational members in over 100 countries, we provide a global voice on hospice and palliative care. The Worldwide Hospice Palliative Care Alliance (WHPCA) is an international non-governmental organisation focusing exclusively on hospice and palliative care development worldwide. We are a network of national and regional hospice and palliative care organisations and affiliate organisations. Our mission is: To bring together the global palliative care community to improve well-being and reduce unnecessary suffering for those in need of palliative care, in collaboration with the regional and national hospice and palliative care organisations and other partners. We believe that no-one with a life-limiting condition, such as cancer or HIV, should live and die with unnecessary pain and distress. Our vision is a world with universal access to hospice and palliative care. Our mission is to foster, promote and influence the delivery of affordable, quality palliative care. The WHPCA is registered in the UK, where our secretariat staff are currently based. WHPCA Key Messages Hospice and palliative care aim to relieve suffering and to improve the quality of life of people and their families and carers facing life-threatening and life-limiting illness. At least 40 million people need palliative care annually, including 20 million at the end of life. 18 million of these die in avoidable pain and distress. Pain management is essential to hospice and palliative care, and the WHPCA works to improve access to these essential medications. Over 75% of the world’s population lacks adequate access to the medications needed to treat their pain. The WHPCA believes that the person accessing care should be at the centre of their care. Palliative care looks after the physical, psychological, social, practical, legal and spiritual needs of the person and their family. The WHPCA advocates for hospice and palliative care worldwide and supports national and regional organisations to integrate hospice and palliative care into their country’s health systems. The WHPCA works with partner organisations to care for people, their family members and carers to alleviate pain and distress and promote quality of life.

Recovery Coaching Scotland


London

WHY RECOVERY COACHING? Background The illicit use of drugs, particularly opiates, benzodiazepines and psychostimulants, causes significant problems within Scotland, as it does in other parts of the UK and Europe. Some of these problems are primarily social in nature, involving, for example, increases in acquisitive crime, prostitution, unemployment, family breakdown and homelessness. Others are more clearly associated with health problems, for example, the transmission of communicable diseases (HIV, hepatitis), injecting-related injuries and increased demands upon health care services. Similarly, alcohol problems are a major concern for public health in Scotland. Short-term problems such as intoxication can lead to a risk of injury and are associated with violence and social disorder. Over the longer term, excessive consumption can cause irreversible damage to parts of the body such as the liver and brain. Alcohol can also lead to mental health problems, for example, alcohol dependency and increased risk of suicide. In addition, alcohol is recognised as a contributory factor in many other diseases including cancer, stroke and heart disease. Wider social problems include family disruption, absenteeism from work and financial difficulties. The Alcohol Framework 2018: Preventing Harm, published by the Scottish Government, includes the estimate from the 2010 study, The Societal Cost of Alcohol Misuse in Scotland for 2007, that this excessive consumption costs Scotland an estimated £3.6 billion each year. Our Challenge There are a number of characteristics in the behaviours, profile and patterns of drug use, and of the people who use drugs, that both differentiate and add complexity to the nature of our challenge, such as: High-risk patterns of drug use, including use of multiple different drugs (poly-drug use) and alcohol. High levels of social deprivation and poverty, and highly stigmatised people. Drug Misuse & Treatment in Scottish Prisons From 2009/10 to 2018/19, testing was conducted across all Scottish prisons annually. During one month of the year, prisoners arriving in custody were voluntarily tested for the presence of illegal or illicit drugs. Similarly, those leaving custody during the month were tested to assess progress towards the 'reduced or stabilised' offender outcome. Some key points have been: In 2018/19, of the tests carried out at prison entry, 75% were positive for drugs. The illegal/illicit drugs most commonly detected when entering prison in 2018/19 were cannabis, benzodiazepines, opiates and cocaine. In 2018/19, of the tests carried out when leaving prison, 26% were positive for illegal/illicit drugs.

En-light It Up!


Communication and Marketing Agency in the Health and Pharmaceutical Business Freelance Account Director, Jan–May 2018 Marketing and communication strategy development for ADHD and Eradication programme for HCV. Spoken Brand Narratives, London, UK Communication and Marketing Agency in the Health and Pharmaceutical Business Freelance Account Director, May–Sep 2018 Marketing and communication strategy development for ovarian cancer, acute coronary syndrome, Dravet syndrome. Cherry Advertising, London, UK Communication and Marketing Agency in the Health and Pharmaceutical Business Freelance Account Director, Jan–April 2018 Marketing and communication strategy development for unresectable hepatocellular carcinoma and radioactive iodine -refractory differentiated thyroid cancer. Concentric Health Experience, London, UK Communication and Marketing Agency in the Health and Pharmaceutical Business Freelance Account Director, Oct–Nov 2017 Marketing and communication strategy development for the global launch of a new antibiotic against nosocomial pneumonia infections. McCann Health, London, UK Communication and Marketing Agency in the Health and Pharmaceutical Business Account Director, May– Sep 2017 Marketing and communication strategy development for breakthrough validated comprehensive genomic profiling of tumours; painkillers; COPD and asthma. Havas Lynx, London, UK Communication and Marketing Agency in the Health and Pharmaceutical Business Freelance Account Director, Mar–Oct 2016 Marketing and communication strategy development for EU lobby/PR project in immuno-oncology. Publicis LifeBrands Resolute, London, UK Communication and Marketing Agency in the Health and Pharmaceutical Business Freelance Sr Account Manager, Sep 2015 – Feb 2016 Marketing and communication strategy development for the first vaccine against dengue fever. TBWA\PW, London, UK Communication and Marketing Agency in the Health and Pharmaceutical Business Freelance Sr Account Manager, Jun – Sep 2015 Marketing and communication strategy development for oral care, life science diagnostics. Sudler & Hennessey, London, UK Communication and Marketing Agency in the Health and Pharmaceutical Business Freelance Account Manager, Jun 2014 – May 2015 Marketing and communication strategy development for cardiology, primary and secondary care. Peaxi Communications, Milano, Italy Healthcare Marketing and communications consultant, founder & director Apr 2013 – May 2014 Healthcare advertising and medical communication advisor and consultancy. QBGROUP, Padova, Italy Communication and Marketing Agency in the Health and Pharmaceutical Business Key Account Manager, Feb–Mar 2013 Medical education/event-communications agency with advanced ICT assets, including augmented reality, 3D graphic design and holographic production. Sudler & Hennessey, Milano, Italy Communication and Marketing Agency in the Health and Pharmaceutical Business Account Manager, May 2011 – Jan 2013 Medical education (events and web-based programmes), promotion and advertising for diabetes, oncology, COPD, angioedema, rheumatoid arthritis. GDS Brand Consultancy, Milano, Italy Consulting agency offering brand strategy implementation services Freelance Consultant, Feb–May 2011 Events organisation and promotion support. Wyeth Consumer Healthcare (later Pfizer), Milano, Italy Pharmaceutical company Junior Product Manager, Mar–Sep 2010 Marketing and branding activities for OTC and vitamin supplements. 
Value Relations International, Milano, Italy Communication and Media Relations Agency in the Health and Pharmaceutical Business Account for media relations, Nov 2009 – Jan 2010 Media-relations activities for multiple clients. MolMed, Milano, Italy Biotech company Business Development & Communication Associate, Apr 2007 – May 2009 Business development for research, development and clinical validation of innovative therapies to treat cancer and gene therapies. EDUCATION AND TRAINING CTI Co-Active Life Coach – International Certification (validated by ICF), London, UK Co-Active curriculum and International Certification completed in May 2019 Theta Healing Institute of Knowledge, founded by Vianna Stibal Advanced Theta Healing Practitioner and Certified ThetaHealing Instructor Jun 2018 – present Courses and certifications received with abilitation to teach: Basic DNA, Advanced DNA, Dig Deeper, SoulMates, Growing Your Relationships series (You and your Significant Other, You and the Creator, You and the Earth, You and your Inner Circle), World Relations, Manifesting and Abundance. Courses and certifications received as advanced practitioner: Basic DNA, Advanced DNA, Dig Deeper, SoulMates, Growing Your Relationships series (You and your Significant Other, You and the Creator, You and the Earth, You and your Inner Circle), World Relations, Manifesting and Abundance, Intuitive Anatomy, Disease and Disorders, Planes of Existence, DNA 3. City, University of London, London, UK Short course: Business and Management – Coaching for Business, Jan-Mar 2018 Business School “Il Sole 24ORE”, Milano, Italy Full time Marketing & Communication Master, May-Nov 2009 University “Vita-Salute San Raffaele”, Milano, Italy Specialty degree in Medical, Cellular and Molecular Biotechnology (drug development processes in bio-pharma companies), Oct 2004 – Mar 2007 Score: 108/110 MolMed, Milano, Italy AIDS Gene Therapy Laboratory, Sep 2005 – Mar 2007 Experimental laboratory project for graduation. Final thesis on the following research project: “Analysis of the possible interference of lentiviral vectors on HIV-1 integration”. University “Vita-Salute San Raffaele”, Milano, Italy Bachelor’s degree in Medical and Pharmaceutical Biotechnology (human health), Oct 2001 – Oct 2004 Score: 110/110 DIBIT, San Raffaele Scientific Institute, Milano, Italy Oncology Molecular Genetics Laboratory, Nov 2002 – Oct 2004 Internship in an academic research laboratory; learning of basic experimental techniques.

Kings College Hospital Maternity


London

We are a leading London maternity hospital and care for more than 8,000 pregnant women and birthing people and their babies each year. We provide all aspects of obstetric and midwifery care, from before conception and before birth (antenatal) to birth and after delivery (postnatal). The majority of pregnant women and people will be cared for by our expert team of midwives who are experienced in supporting those with uncomplicated pregnancies and births. When your circumstances are more complex, our specialist obstetric doctors and allied health professionals will work alongside your midwife to give you the care and support your need to have a safe and satisfying birth. You will have your own ideas about how you would like your baby to be born – whether at home or in hospital – and we do our best to help you to achieve this. We have obstetric-led birthing rooms, midwife-led birth suites with birth pools, obstetric theatres for both planned and emergency caesareans, and a homebirth service. Are you pregnant and want to have your baby with King's? You do not have to see your GP before contacting us. Please complete the King's College Hospital antenatal self-referral form to refer yourself and send to kch-tr.antenatalreferral@nhs.net. We will then email you with a reference number to confirm we have received your referral. Your first appointments with the midwife and scanning team will be sent to you either via post or email. Please note we may contact and share information with other health professionals as required. We see pregnant women and people who live in the below postcode areas in Lambeth, Southwark, and Lewisham. Referrals from those who live outside this catchment area will also be considered: SW2, SW4, SW8, SW9, SW16 SE1, SE4, SE5, SE11, SE14, SE15, SE16, SE17, SE19, SE21, SE22, SE23, SE24, SE25, SE26, SE27 CR7 Antenatal care (before the birth) This is provided by the midwifery team caring for women and pregnant people in your local area, alongside your GP or obstetrician. During your pregnancy, you will have regular appointments to make sure you and your baby are well. You will be offered routine health checks such as blood tests and other screenings, you can read more about the different scans, tests and antenatal care you can expect on the NHS website. Your screening choices are explained in this screening information leaflet, which is produced by Public Health England and available in several languages. We also provide care if screening finds you have an infectious disease, including Hepatitis B, HIV or syphilis. Badger Notes You can access your pregnancy notes and leaflets via the Badger Notes website or app. Your account will be activated after your first midwife appointment. You can use the digital maternity notes platform to communicate with your care team and we recommend you use the ‘Conversations’ option to share your birth preferences with us before your birth. Your midwife can help you with this. Clinic and scan locations Read your appointment letter carefully to see where to go for your appointments, because these are held at a variety of locations. This includes children’s centres, GP and health centres, and a number of buildings on the King's site, including Stork on the Hill, Midwives House and the Community Midwives Centre. Ultrasound (nuchal) scans take place in the Harris Birthright Centre, in the Fetal Medicine Research Institute. Buildings on the hospital site are shown on the King's campus map. 
Parent education classes We offer a range of online workshops to help prepare you for birth and baby. Join the 'Welcome to King’s Maternity' workshop in your first trimester to learn more about how to stay healthy in pregnancy, the services we offer, and other workshops that may be suitable for you. To sign up to a workshop, go to our parent education Eventbrite page. Email kch-tr.parenteducation@nhs.net for more information. Urgent advice If you need urgent advice and are: pregnant and currently receiving care at King's; have just given birth at King's; or have had a home birth with King's: 24 hours a day, 7 days a week: Telephone Assessment Line +44 (0)20 3299 8389 Monday-Friday, 9am-5pm: contact the midwifery team leading your care Out of hours: contact the Nightingale Birth Centre. Where to give birth You can choose to give birth: in the Nightingale Birth Centre at King’s at home with the help of our community-based midwives, if you live in King’s catchment area. Our Maternity Department is on the third and fourth floors of the Golden Jubilee Wing and includes the Nightingale Birth Centre. Our facilities include 10 labour rooms, operating theatres, recovery rooms and a high dependency unit (HDU). Midwife-led birthing suite You have the choice of two midwife-led birthing rooms, each with a birthing pool and their own shower and toilet, where we have created a ‘home from home’ feel for your birth environment. Homebirth Our home birth midwife team (called Phoenix) provide a home birth service within the King’s catchment area. If you are interested in this option, indicate this on your antenatal self-referral form, or contact your community midwife. We will support women and birthing people to make informed choices about where they would like to birth their babies. There may be instances when a home birth might not be recommended, and your midwife or doctor can discuss these with you. Neonatal Unit Babies who need special care are looked after in the Neonatal Unit by our specialist team, it is located opposite Nightingale Birth Centre on the fourth floor of Golden Jubilee Wing. Anthony Nolan umbilical cord blood donation If you give birth at King’s College Hospital, you can help save the life of someone with blood cancer by donating your umbilical cord blood to the Anthony Nolan Cord Blood Programme after you give birth. We are one of five hospitals in the UK where women can donate their umbilical cords. Please watch this short animation about donating your cord blood. If you would like to register to donate cord blood, please speak with your midwife or one of the dedicated cord blood collectors at King’s College Hospital. Find out more about Anthony Nolan’s Cord blood programme and their lifesaving work. If you have any questions about cord blood donation, please get in touch with the team at Anthony Nolan: Cord.Collection@anthonynolan.org After the birth (postnatal) If everything with your birth has been uncomplicated we encourage you to go home within a few hours. You can contact the maternity unit at any time day or night if you have any concerns. If you or your baby needs to stay in hospital for additional care you will be transferred to William Gilliatt postnatal ward for the remainder of your stay. This ward contains four-bedded bays and shared bathrooms. You and your baby room in together and birth partners are able to visit 24 hours a day. Going home Our care does not stop once you are at home. 
When you leave King’s you should have a visit from your community midwife within 24 hours. They will plan visits with you over the next 10 days. If you live outside King’s area your details will be passed to your local community midwives who will take over your care. If you would like support with breastfeeding, we have specialist infant feeding midwives who offer virtual workshops and in-person support via referral from your community midwife. Get involved If you'd like to help us improve our maternity services for parents and babies, join the King’s Maternity Voices Partnership (MVP). Feedback Friends and Family You can tell us what you did and didn’t like about your care by completing the Friends and Family feedback form, it only takes a couple of minutes and you can comment on your antenatal, birth and postnatal ward or postnatal community care. PALS The Patient Advice and Liaison Service (PALS) is a service that offers support, information and assistance to patients, relatives and visitors. They can also provide help and advice if you have a concern or complaint that staff have not been able to resolve for you.

Courses matching "HIV"


Mergers and Acquisitions - Virtual Learning

By EMG Associates UK Limited

Mergers and Acquisitions - Virtual Learning Why Attend This practical course covers the key steps in the Mergers and Acquisitions (M&A) process, from the initial step of valuing the shares in a company through to closing the deal. Whether or not participants practice M&A, this course will provide them with an insider's look into what is an undeniably major force in today's corporate arena. This course will give participants an A-Z understanding of the M&A process and the ability to evaluate whether a merger or acquisition fits with their organization's strategy. As a result, they will identify the most lucrative M&A opportunities, select the best partners and get the maximum reward from the deal. Course Methodology In this interactive training course participants will frequently work in pairs as well as in larger groups to complete exercises, and regional and international case studies. Course Objectives By the end of the course, participants will be able to: Identify attractive Mergers and Acquisitions (M&A) opportunities Formulate the initial steps and the preliminary agreements for a merger or acquisition Carry out a full due diligence into the state of affairs of a target company Understand the Share Purchase Agreement (SPA) and the Asset Purchase Agreement (APA) Take an active role in the exchange and completion stages of a merger or acquisition Be an effective part of the post-merger integration to ensure the smooth running of the new organization Target Audience This course is suitable for anyone involved in the identification, planning and execution of a Mergers and Acquisitions opportunity. This includes CEOs, managing directors, general managers, financial directors, accountants, board members, commercial directors, business development directors, strategy planners and analysts, and in-house counsel. Target Competencies Identifying M&A opportunities Due Diligence Organizing Acquisitions Structuring Negotiations Post-acquisition Integration Post-acquisition Audit Note The Dubai Government Legal Affairs Department has introduced a Continuing Legal Professional Development (CLPD) programme for legal consultants authorised to practise through a licensed firm in the Emirate of Dubai. We are proud to announce that the Dubai Government Legal Affairs Department has accredited EMG Associates as a CLPD provider. In addition, all our legal programmes have been approved. This PLUS Specialty Training Legal course qualifies for 4 elective CLPD points. Fundamentals of Mergers and Acquisitions (M&A) Distinction between Mergers and Acquisitions Types of Mergers & Acquisitions: Horizontal, Vertical, Conglomerate Knowledge of areas of law required in M&A The Preliminary documents required in M&A Heads of terms - legally binding? Confidentiality - do they need to be in writing? Lockout/exclusivity agreements - requirements for enforceability How to structure the Acquisition Share sale Advantages and disadvantages from the buyer's perspective Advantages and disadvantages from the seller's perspective Business sale Advantages and disadvantages from the buyer's perspective Advantages and disadvantages from the seller's perspective Hive down - a combination of asset sale and share sale Looking at different valuation techniques Real Estate Value Relief from Royalty Discounted Cash Flow Market Multiples Dividend Yield Net Assets The Due Diligence Process What is it? Why do it?
Scope of due diligence Legal Financial Commercial Operational The Purchase Agreements Share Sale Purchase Agreement v Asset Purchase Agreement v Business Purchase Agreements Provisions in a Share Purchase Agreement Importance of warranties and indemnities in purchase agreements Negotiating warranties from a Share Purchase Agreement Contractual protection for the seller Disclosure letter Intellectual property What happens to IP in M&A Stages of the IP during the M&A process Dispute Resolution in M&A Litigation Arbitration Mediation The Exchange and completion stages of M&A Seller's document Buyer's document The auction process The relevant stages Advantages and disadvantages from the buyer's and the seller's perspective
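
The valuation techniques listed above include discounted cash flow (DCF). As a rough illustrative sketch only (the figures, discount rate and growth rate below are invented and are not course material), this is the arithmetic behind a simple DCF: discount each year's projected free cash flow plus a terminal value back to the present.

```python
# Minimal DCF sketch: discount projected free cash flows and a terminal value
# back to today. All figures and rates are hypothetical illustrations.

def dcf_value(cash_flows, discount_rate, terminal_growth):
    """Enterprise value = PV of forecast cash flows + PV of terminal value."""
    pv_forecast = sum(
        cf / (1 + discount_rate) ** year
        for year, cf in enumerate(cash_flows, start=1)
    )
    # Gordon-growth terminal value, based on the final forecast year's cash flow
    terminal_value = cash_flows[-1] * (1 + terminal_growth) / (discount_rate - terminal_growth)
    pv_terminal = terminal_value / (1 + discount_rate) ** len(cash_flows)
    return pv_forecast + pv_terminal

# Five years of projected free cash flow (in GBP millions), 10% discount rate, 2% terminal growth
print(round(dcf_value([10, 12, 14, 15, 16], 0.10, 0.02), 1))
```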

Mergers and Acquisitions - Virtual Learning
Delivered Online, Flexible Dates
£1,599

Cloudera Data Analyst Training - Using Pig, Hive, and Impala with Hadoop

By Nexus Human

Duration 4 Days 24 CPD hours This course is intended for This course is designed for data analysts, business intelligence specialists, developers, system architects, and database administrators. Overview Skills gained in this training include: the features that Pig, Hive, and Impala offer for data acquisition, storage, and analysis; the fundamentals of Apache Hadoop and data ETL (extract, transform, load), ingestion, and processing with Hadoop; how Pig, Hive, and Impala improve productivity for typical analysis tasks; joining diverse datasets to gain valuable business insight; and performing real-time, complex queries on datasets. Cloudera University's four-day data analyst training course focusing on Apache Pig and Hive and Cloudera Impala will teach you to apply traditional data analytics and business intelligence skills to big data. Hadoop Fundamentals The Motivation for Hadoop Hadoop Overview Data Storage: HDFS Distributed Data Processing: YARN, MapReduce, and Spark Data Processing and Analysis: Pig, Hive, and Impala Data Integration: Sqoop Other Hadoop Data Tools Exercise Scenarios Explanation Introduction to Pig What Is Pig? Pig's Features Pig Use Cases Interacting with Pig Basic Data Analysis with Pig Pig Latin Syntax Loading Data Simple Data Types Field Definitions Data Output Viewing the Schema Filtering and Sorting Data Commonly-Used Functions Processing Complex Data with Pig Storage Formats Complex/Nested Data Types Grouping Built-In Functions for Complex Data Iterating Grouped Data Multi-Dataset Operations with Pig Techniques for Combining Data Sets Joining Data Sets in Pig Set Operations Splitting Data Sets Pig Troubleshoot & Optimization Troubleshooting Pig Logging Using Hadoop's Web UI Data Sampling and Debugging Performance Overview Understanding the Execution Plan Tips for Improving the Performance of Your Pig Jobs Introduction to Hive & Impala What Is Hive? What Is Impala? Schema and Data Storage Comparing Hive to Traditional Databases Hive Use Cases Querying with Hive & Impala Databases and Tables Basic Hive and Impala Query Language Syntax Data Types Differences Between Hive and Impala Query Syntax Using Hue to Execute Queries Using the Impala Shell Data Management Data Storage Creating Databases and Tables Loading Data Altering Databases and Tables Simplifying Queries with Views Storing Query Results Data Storage & Performance Partitioning Tables Choosing a File Format Managing Metadata Controlling Access to Data Relational Data Analysis with Hive & Impala Joining Datasets Common Built-In Functions Aggregation and Windowing Working with Impala How Impala Executes Queries Extending Impala with User-Defined Functions Improving Impala Performance Analyzing Text and Complex Data with Hive Complex Values in Hive Using Regular Expressions in Hive Sentiment Analysis and N-Grams Conclusion Hive Optimization Understanding Query Performance Controlling Job Execution Plan Bucketing Indexing Data Extending Hive SerDes Data Transformation with Custom Scripts User-Defined Functions Parameterized Queries Choosing the Best Tool for the Job Comparing MapReduce, Pig, Hive, Impala, and Relational Databases Which to Choose?
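
To give a flavour of the querying material covered above (Hive and Impala query syntax, joins, aggregation), here is the kind of statement the course works with. The course itself uses Pig Latin, Hue and the Impala shell; the sketch below instead submits an equivalent HiveQL-style query through PySpark's Hive support, and the table and column names are invented for illustration.

```python
# A taste of the "Querying with Hive & Impala" material: a HiveQL-style
# aggregation. The course itself uses Hue, the Impala shell, and Pig Latin;
# here the same query is submitted through PySpark's Hive support instead.
# The table and column names are invented for illustration.
from pyspark.sql import SparkSession

spark = (SparkSession.builder
         .appName("hive-query-sketch")
         .enableHiveSupport()          # lets spark.sql() read Hive metastore tables
         .getOrCreate())

# Equivalent to running the same statement in Hive or Impala:
top_customers = spark.sql("""
    SELECT c.customer_id,
           COUNT(o.order_id)   AS orders,
           SUM(o.total_amount) AS revenue
    FROM   customers c
    JOIN   orders    o ON o.customer_id = c.customer_id
    GROUP  BY c.customer_id
    ORDER  BY revenue DESC
    LIMIT  10
""")
top_customers.show()
```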

Cloudera Data Analyst Training - Using Pig, Hive, and Impala with Hadoop
Delivered Online, Flexible Dates
Price on Enquiry

Designing and Building Big Data Applications

By Nexus Human

Duration 4 Days 24 CPD hours This course is intended for This course is best suited to developers, engineers, and architects who want to use Hadoop and related tools to solve real-world problems. Overview Skills learned in this course include: creating a data set with the Kite SDK; developing custom Flume components for data ingestion; managing a multi-stage workflow with Oozie; analyzing data with Crunch; writing user-defined functions for Hive and Impala; and indexing data with Cloudera Search. Cloudera University's four-day course for designing and building Big Data applications prepares you to analyze and solve real-world problems using Apache Hadoop and associated tools in the enterprise data hub (EDH). Introduction Application Architecture Scenario Explanation Understanding the Development Environment Identifying and Collecting Input Data Selecting Tools for Data Processing and Analysis Presenting Results to the User Defining & Using Datasets Metadata Management What is Apache Avro? Avro Schemas Avro Schema Evolution Selecting a File Format Performance Considerations Using the Kite SDK Data Module What is the Kite SDK? Fundamental Data Module Concepts Creating New Data Sets Using the Kite SDK Loading, Accessing, and Deleting a Data Set Importing Relational Data with Apache Sqoop What is Apache Sqoop? Basic Imports Limiting Results Improving Sqoop's Performance Sqoop 2 Capturing Data with Apache Flume What is Apache Flume? Basic Flume Architecture Flume Sources Flume Sinks Flume Configuration Logging Application Events to Hadoop Developing Custom Flume Components Flume Data Flow and Common Extension Points Custom Flume Sources Developing a Flume Pollable Source Developing a Flume Event-Driven Source Custom Flume Interceptors Developing a Header-Modifying Flume Interceptor Developing a Filtering Flume Interceptor Writing Avro Objects with a Custom Flume Interceptor Managing Workflows with Apache Oozie The Need for Workflow Management What is Apache Oozie? Defining an Oozie Workflow Validation, Packaging, and Deployment Running and Tracking Workflows Using the CLI Hue UI for Oozie Processing Data Pipelines with Apache Crunch What is Apache Crunch? Understanding the Crunch Pipeline Comparing Crunch to Java MapReduce Working with Crunch Projects Reading and Writing Data in Crunch Data Collection API Functions Utility Classes in the Crunch API Working with Tables in Apache Hive What is Apache Hive? Accessing Hive Basic Query Syntax Creating and Populating Hive Tables How Hive Reads Data Using the RegexSerDe in Hive Developing User-Defined Functions What are User-Defined Functions? Implementing a User-Defined Function Deploying Custom Libraries in Hive Registering a User-Defined Function in Hive Executing Interactive Queries with Impala What is Impala? Comparing Hive to Impala Running Queries in Impala Support for User-Defined Functions Data and Metadata Management Understanding Cloudera Search What is Cloudera Search? Search Architecture Supported Document Formats Indexing Data with Cloudera Search Collection and Schema Management Morphlines Indexing Data in Batch Mode Indexing Data in Near Real Time Presenting Results to Users Solr Query Syntax Building a Search UI with Hue Accessing Impala through JDBC Powering a Custom Web Application with Impala and Search
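
The outline above includes a module on developing user-defined functions, which in this course are written in Java and registered in Hive. As a rough Python analogue only (not the course's approach), the sketch below shows the same idea in PySpark: define a function, register it, and call it from SQL. The table and column names are invented.

```python
# The course's "Developing User-Defined Functions" module writes Hive UDFs in
# Java and registers them with CREATE FUNCTION. As a rough Python analogue
# (not the course's approach), here is the same idea in PySpark: define a
# function, register it, and call it from SQL. Names below are invented.
from pyspark.sql import SparkSession
from pyspark.sql.types import StringType

spark = SparkSession.builder.appName("udf-sketch").getOrCreate()

def normalize_phone(raw):
    """Keep digits only, e.g. '(555) 123-4567' -> '5551234567'."""
    return "".join(ch for ch in raw if ch.isdigit()) if raw else None

# Make the function callable from SQL, similar in spirit to Hive's CREATE FUNCTION
spark.udf.register("normalize_phone", normalize_phone, StringType())

df = spark.createDataFrame([("(555) 123-4567",), ("555.987.6543",)], ["phone"])
df.createOrReplaceTempView("contacts")
spark.sql("SELECT phone, normalize_phone(phone) AS digits FROM contacts").show()
```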

Designing and Building Big Data Applications
Delivered Online, Flexible Dates
Price on Enquiry

Building Batch Data Analytics Solutions on AWS

By Nexus Human

Duration 1 Days 6 CPD hours This course is intended for This course is intended for: Data platform engineers Architects and operators who build and manage data analytics pipelines Overview In this course, you will learn to: Compare the features and benefits of data warehouses, data lakes, and modern data architectures Design and implement a batch data analytics solution Identify and apply appropriate techniques, including compression, to optimize data storage Select and deploy appropriate options to ingest, transform, and store data Choose the appropriate instance and node types, clusters, auto scaling, and network topology for a particular business use case Understand how data storage and processing affect the analysis and visualization mechanisms needed to gain actionable business insights Secure data at rest and in transit Monitor analytics workloads to identify and remediate problems Apply cost management best practices In this course, you will learn to build batch data analytics solutions using Amazon EMR, an enterprise-grade Apache Spark and Apache Hadoop managed service. You will learn how Amazon EMR integrates with open-source projects such as Apache Hive, Hue, and HBase, and with AWS services such as AWS Glue and AWS Lake Formation. The course addresses data collection, ingestion, cataloging, storage, and processing components in the context of Spark and Hadoop. You will learn to use EMR Notebooks to support both analytics and machine learning workloads. You will also learn to apply security, performance, and cost management best practices to the operation of Amazon EMR. Module A: Overview of Data Analytics and the Data Pipeline Data analytics use cases Using the data pipeline for analytics Module 1: Introduction to Amazon EMR Using Amazon EMR in analytics solutions Amazon EMR cluster architecture Interactive Demo 1: Launching an Amazon EMR cluster Cost management strategies Module 2: Data Analytics Pipeline Using Amazon EMR: Ingestion and Storage Storage optimization with Amazon EMR Data ingestion techniques Module 3: High-Performance Batch Data Analytics Using Apache Spark on Amazon EMR Apache Spark on Amazon EMR use cases Why Apache Spark on Amazon EMR Spark concepts Interactive Demo 2: Connect to an EMR cluster and perform Scala commands using the Spark shell Transformation, processing, and analytics Using notebooks with Amazon EMR Practice Lab 1: Low-latency data analytics using Apache Spark on Amazon EMR Module 4: Processing and Analyzing Batch Data with Amazon EMR and Apache Hive Using Amazon EMR with Hive to process batch data Transformation, processing, and analytics Practice Lab 2: Batch data processing using Amazon EMR with Hive Introduction to Apache HBase on Amazon EMR Module 5: Serverless Data Processing Serverless data processing, transformation, and analytics Using AWS Glue with Amazon EMR workloads Practice Lab 3: Orchestrate data processing in Spark using AWS Step Functions Module 6: Security and Monitoring of Amazon EMR Clusters Securing EMR clusters Interactive Demo 3: Client-side encryption with EMRFS Monitoring and troubleshooting Amazon EMR clusters Demo: Reviewing Apache Spark cluster history Module 7: Designing Batch Data Analytics Solutions Batch data analytics use cases Activity: Designing a batch data analytics workflow Module B: Developing Modern Data Architectures on AWS Modern data architectures
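
To give a sense of the kind of batch workload described above, here is a minimal PySpark job of the sort typically submitted to an Amazon EMR cluster: read raw data from S3, aggregate it, and write the result back to S3 as Parquet. The bucket names, paths and columns are placeholders, not material from the course labs.

```python
# Sketch of the kind of batch Spark job run on Amazon EMR: read raw data from
# S3, transform it, and write results back to S3 as Parquet. The bucket names,
# paths, and schema are placeholders, not course material.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("emr-batch-sketch").getOrCreate()

orders = (spark.read
          .option("header", "true")
          .csv("s3://example-raw-bucket/orders/"))        # hypothetical input path

daily_revenue = (orders
                 .withColumn("amount", F.col("amount").cast("double"))
                 .groupBy("order_date")
                 .agg(F.sum("amount").alias("revenue")))

(daily_revenue.write
 .mode("overwrite")
 .parquet("s3://example-curated-bucket/daily_revenue/"))  # hypothetical output path
```

On EMR such a script would typically be submitted with spark-submit as a cluster step or run from an EMR Notebook, as the course modules describe.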

Building Batch Data Analytics Solutions on AWS
Delivered Online, Flexible Dates
Price on Enquiry

Developer Training for Spark and Hadoop

By Nexus Human

Duration 4 Days 24 CPD hours This course is intended for Hadoop Developers Overview Through instructor-led discussion and interactive, hands-on exercises, participants will navigate the Hadoop ecosystem, learning topics such as: how data is distributed, stored, and processed in a Hadoop cluster; how to use Sqoop and Flume to ingest data; how to process distributed data with Apache Spark; how to model structured data as tables in Impala and Hive; how to choose the best data storage format for different data usage patterns; and best practices for data storage. This training course is the best preparation for the challenges faced by Hadoop developers. Participants will learn to identify which tool is the right one to use in a given situation, and will gain hands-on experience in developing using those tools. Course Outline Introduction Introduction to Hadoop and the Hadoop Ecosystem Hadoop Architecture and HDFS Importing Relational Data with Apache Sqoop Introduction to Impala and Hive Modeling and Managing Data with Impala and Hive Data Formats Data Partitioning Capturing Data with Apache Flume Spark Basics Working with RDDs in Spark Writing and Deploying Spark Applications Parallel Programming with Spark Spark Caching and Persistence Common Patterns in Spark Data Processing Spark SQL and DataFrames Conclusion Additional course details: Nexus Humans Developer Training for Spark and Hadoop training program is a workshop that presents an invigorating mix of sessions, lessons, and masterclasses meticulously crafted to propel your learning expedition forward. This immersive bootcamp-style experience boasts interactive lectures, hands-on labs, and collaborative hackathons, all strategically designed to fortify fundamental concepts. Guided by seasoned coaches, each session offers priceless insights and practical skills crucial for honing your expertise. Whether you're stepping into the realm of professional skills or a seasoned professional, this comprehensive course ensures you're equipped with the knowledge and prowess necessary for success. While we feel this is the best course for the Developer Training for Spark and Hadoop course and one of our Top 10 we encourage you to read the course outline to make sure it is the right content for you. Additionally, private sessions, closed classes or dedicated events are available both live online and at our training centres in Dublin and London, as well as at your offices anywhere in the UK, Ireland or across EMEA.
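
As a small illustration of the "Working with RDDs in Spark" topic listed above, here is the classic word count written with the RDD API; the HDFS path is a placeholder, not a course dataset.

```python
# Minimal illustration of the "Working with RDDs in Spark" topic: the classic
# word count, written with the RDD API. The HDFS path is a placeholder.
from pyspark import SparkContext

sc = SparkContext(appName="rdd-wordcount-sketch")

counts = (sc.textFile("hdfs:///data/sample.txt")          # hypothetical input file
            .flatMap(lambda line: line.split())           # split lines into words
            .map(lambda word: (word.lower(), 1))          # pair each word with 1
            .reduceByKey(lambda a, b: a + b))             # sum counts per word

# Print the ten most frequent words
for word, count in counts.takeOrdered(10, key=lambda kv: -kv[1]):
    print(word, count)

sc.stop()
```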

Developer Training for Spark and Hadoop
Delivered Online, Flexible Dates
Price on Enquiry

Cloudera Training for Apache HBase

By Nexus Human

Duration 4 Days 24 CPD hours This course is intended for This course is appropriate for developers and administrators who intend to use HBase. Overview Skills learned on the course include: the use cases and usage occasions for HBase, Hadoop, and RDBMS; using the HBase shell to directly manipulate HBase tables; designing optimal HBase schemas for efficient data storage and recovery; how to connect to HBase using the Java API, configure the HBase cluster, and administer an HBase cluster; and best practices for identifying and resolving performance bottlenecks. Cloudera University's four-day training course for Apache HBase enables participants to store and access massive quantities of multi-structured data and perform hundreds of thousands of operations per second. Introduction to Hadoop & HBase What Is Big Data? Introducing Hadoop Hadoop Components What Is HBase? Why Use HBase? Strengths of HBase HBase in Production Weaknesses of HBase HBase Tables HBase Concepts HBase Table Fundamentals Thinking About Table Design The HBase Shell Creating Tables with the HBase Shell Working with Tables Working with Table Data HBase Architecture Fundamentals HBase Regions HBase Cluster Architecture HBase and HDFS Data Locality HBase Schema Design General Design Considerations Application-Centric Design Designing HBase Row Keys Other HBase Table Features Basic Data Access with the HBase API Options to Access HBase Data Creating and Deleting HBase Tables Retrieving Data with Get Retrieving Data with Scan Inserting and Updating Data Deleting Data More Advanced HBase API Features Filtering Scans Best Practices HBase Coprocessors HBase on the Cluster How HBase Uses HDFS Compactions and Splits HBase Reads & Writes How HBase Writes Data How HBase Reads Data Block Caches for Reading HBase Performance Tuning Column Family Considerations Schema Design Considerations Configuring for Caching Dealing with Time Series and Sequential Data Pre-Splitting Regions HBase Administration and Cluster Management HBase Daemons ZooKeeper Considerations HBase High Availability Using the HBase Balancer Fixing Tables with hbck HBase Security HBase Replication & Backup HBase Replication HBase Backup MapReduce and HBase Clusters Using Hive & Impala with HBase Using Hive and Impala with HBase Appendix A: Accessing Data with Python and Thrift Thrift Usage Working with Tables Getting and Putting Data Scanning Data Deleting Data Counters Filters Appendix B: OpenTSDB
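
In the spirit of "Appendix A: Accessing Data with Python and Thrift" in the outline above, here is a minimal sketch of basic puts, gets and scans using the third-party happybase client (the course's primary API is Java, and happybase is not named in the outline). The host, table and column names are invented, and an HBase Thrift server must be running for this to work.

```python
# Basic HBase access over Thrift using the happybase client: put a row, read it
# back by key, and scan a key range. Host, table, and column names are invented.
import happybase

connection = happybase.Connection("hbase-thrift-host")    # hypothetical Thrift host
table = connection.table("users")

# Insert/update a row: row key plus column-family:qualifier values
table.put(b"user#1001", {b"info:name": b"Ada", b"info:city": b"London"})

# Point read by row key (HBase "Get")
print(table.row(b"user#1001"))

# Scan a key range ("Scan") - HBase rows are sorted by row key
for key, data in table.scan(row_start=b"user#1000", row_stop=b"user#2000"):
    print(key, data)

connection.close()
```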

Cloudera Training for Apache HBase
Delivered Online, Flexible Dates
Price on Enquiry

Cloudera Administrator Training for Apache Hadoop

By Nexus Human

Duration 4 Days 24 CPD hours This course is intended for This course is best suited to systems administrators and IT managers. Overview Skills gained in this training include: determining the correct hardware and infrastructure for your cluster; proper cluster configuration and deployment to integrate with the data center; configuring the FairScheduler to provide service-level agreements for multiple users of a cluster; best practices for preparing and maintaining Apache Hadoop in production; and troubleshooting, diagnosing, tuning, and solving Hadoop issues. Cloudera University's four-day administrator training course for Apache Hadoop provides participants with a comprehensive understanding of all the steps necessary to operate and maintain a Hadoop cluster. The Case for Apache Hadoop Why Hadoop? Core Hadoop Components Fundamental Concepts HDFS HDFS Features Writing and Reading Files NameNode Memory Considerations Overview of HDFS Security Using the Namenode Web UI Using the Hadoop File Shell Getting Data into HDFS Ingesting Data from External Sources with Flume Ingesting Data from Relational Databases with Sqoop REST Interfaces Best Practices for Importing Data YARN & MapReduce What Is MapReduce? Basic MapReduce Concepts YARN Cluster Architecture Resource Allocation Failure Recovery Using the YARN Web UI MapReduce Version 1 Planning Your Hadoop Cluster General Planning Considerations Choosing the Right Hardware Network Considerations Configuring Nodes Planning for Cluster Management Hadoop Installation and Initial Configuration Deployment Types Installing Hadoop Specifying the Hadoop Configuration Performing Initial HDFS Configuration Performing Initial YARN and MapReduce Configuration Hadoop Logging Installing and Configuring Hive, Impala, and Pig Hive Impala Pig Hadoop Clients What is a Hadoop Client? Installing and Configuring Hadoop Clients Installing and Configuring Hue Hue Authentication and Authorization Cloudera Manager The Motivation for Cloudera Manager Cloudera Manager Features Express and Enterprise Versions Cloudera Manager Topology Installing Cloudera Manager Installing Hadoop Using Cloudera Manager Performing Basic Administration Tasks Using Cloudera Manager Advanced Cluster Configuration Advanced Configuration Parameters Configuring Hadoop Ports Explicitly Including and Excluding Hosts Configuring HDFS for Rack Awareness Configuring HDFS High Availability Hadoop Security Why Hadoop Security Is Important Hadoop's Security System Concepts What Kerberos Is and How it Works Securing a Hadoop Cluster with Kerberos Managing and Scheduling Jobs Managing Running Jobs Scheduling Hadoop Jobs Configuring the FairScheduler Impala Query Scheduling Cluster Maintenance Checking HDFS Status Copying Data Between Clusters Adding and Removing Cluster Nodes Rebalancing the Cluster Cluster Upgrading Cluster Monitoring & Troubleshooting General System Monitoring Monitoring Hadoop Clusters Common Troubleshooting Hadoop Clusters Common Misconfigurations Additional course details: Nexus Humans Cloudera Administrator Training for Apache Hadoop training program is a workshop that presents an invigorating mix of sessions, lessons, and masterclasses meticulously crafted to propel your learning expedition forward. This immersive bootcamp-style experience boasts interactive lectures, hands-on labs, and collaborative hackathons, all strategically designed to fortify fundamental concepts. Guided by seasoned coaches, each session offers priceless insights and practical skills crucial for honing your expertise. 
Whether you're stepping into the realm of professional skills or a seasoned professional, this comprehensive course ensures you're equipped with the knowledge and prowess necessary for success. While we feel this is the best course for the Cloudera Administrator Training for Apache Hadoop course and one of our Top 10 we encourage you to read the course outline to make sure it is the right content for you. Additionally, private sessions, closed classes or dedicated events are available both live online and at our training centres in Dublin and London, as well as at your offices anywhere in the UK, Ireland or across EMEA.
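
As a small illustration of the command-line administration topics above ("Using the Hadoop File Shell", checking HDFS status), here is a minimal sketch that drives two standard HDFS commands from Python; it assumes a configured hdfs client on the machine, and the directory path is a placeholder.

```python
# Day-to-day HDFS administration from the command line: list a directory and
# print the cluster health report. Assumes the 'hdfs' client is installed and
# configured; the path is a placeholder.
import subprocess

def run(cmd):
    """Run a shell command and return its stdout, raising on failure."""
    result = subprocess.run(cmd, capture_output=True, text=True, check=True)
    return result.stdout

# List a directory in HDFS (equivalent to typing the command in a terminal)
print(run(["hdfs", "dfs", "-ls", "/user"]))

# Cluster health summary: capacity, DataNodes, under-replicated blocks, etc.
print(run(["hdfs", "dfsadmin", "-report"]))
```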

Cloudera Administrator Training for Apache Hadoop
Delivered Online, Flexible Dates
Price on Enquiry

Cloudera Data Scientist Training

By Nexus Human

Duration 4 Days 24 CPD hours This course is intended for The workshop is designed for data scientists who currently use Python or R to work with smaller datasets on a single machine and who need to scale up their analyses and machine learning models to large datasets on distributed clusters. Data engineers and developers with some knowledge of data science and machine learning may also find this workshop useful. Overview Overview of data science and machine learning at scale Overview of the Hadoop ecosystem Working with HDFS data and Hive tables using Hue Introduction to Cloudera Data Science Workbench Overview of Apache Spark 2 Reading and writing data Inspecting data quality Cleansing and transforming data Summarizing and grouping data Combining, splitting, and reshaping data Exploring data Configuring, monitoring, and troubleshooting Spark applications Overview of machine learning in Spark MLlib Extracting, transforming, and selecting features Building and evaluating regression models Building and evaluating classification models Building and evaluating clustering models Cross-validating models and tuning hyperparameters Building machine learning pipelines Deploying machine learning models Spark, Spark SQL, and Spark MLlib PySpark and sparklyr Cloudera Data Science Workbench (CDSW) Hue This workshop covers data science and machine learning workflows at scale using Apache Spark 2 and other key components of the Hadoop ecosystem. The workshop emphasizes the use of data science and machine learning methods to address real-world business challenges. Using scenarios and datasets from a fictional technology company, students discover insights to support critical business decisions and develop data products to transform the business. The material is presented through a sequence of brief lectures, interactive demonstrations, extensive hands-on exercises, and discussions. The Apache Spark demonstrations and exercises are conducted in Python (with PySpark) and R (with sparklyr) using the Cloudera Data Science Workbench (CDSW) environment. The workshop is designed for data scientists who currently use Python or R to work with smaller datasets on a single machine and who need to scale up their analyses and machine learning models to large datasets on distributed clusters. Data engineers and developers with some knowledge of data science and machine learning may also find this workshop useful. Overview of data science and machine learning at scale; overview of the Hadoop ecosystem; working with HDFS data and Hive tables using Hue; introduction to Cloudera Data Science Workbench; overview of Apache Spark 2; reading and writing data; inspecting data quality; cleansing and transforming data; summarizing and grouping data; combining, splitting, and reshaping data; exploring data; configuring, monitoring, and troubleshooting Spark applications; overview of machine learning in Spark MLlib; extracting, transforming, and selecting features; building and evaluating regression models; building and evaluating classification models; building and evaluating clustering models; cross-validating models and tuning hyperparameters; building machine learning pipelines; deploying machine learning models. Additional course details: Nexus Humans Cloudera Data Scientist Training training program is a workshop that presents an invigorating mix of sessions, lessons, and masterclasses meticulously crafted to propel your learning expedition forward. 
This immersive bootcamp-style experience boasts interactive lectures, hands-on labs, and collaborative hackathons, all strategically designed to fortify fundamental concepts. Guided by seasoned coaches, each session offers priceless insights and practical skills crucial for honing your expertise. Whether you're stepping into the realm of professional skills or a seasoned professional, this comprehensive course ensures you're equipped with the knowledge and prowess necessary for success. While we feel this is the best course for the Cloudera Data Scientist Training course and one of our Top 10 we encourage you to read the course outline to make sure it is the right content for you. Additionally, private sessions, closed classes or dedicated events are available both live online and at our training centres in Dublin and London, as well as at your offices anywhere in the UK, Ireland or across EMEA.
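
To illustrate the "building machine learning pipelines" theme above, here is a minimal PySpark MLlib sketch: assemble feature columns into a vector, fit a classifier, and score it. The toy data and column names are invented; the workshop uses its own fictional-company datasets.

```python
# Minimal PySpark MLlib pipeline sketch: assemble features, fit a classifier,
# and score it. The toy churn data and column names are invented.
from pyspark.sql import SparkSession
from pyspark.ml import Pipeline
from pyspark.ml.feature import VectorAssembler
from pyspark.ml.classification import LogisticRegression
from pyspark.ml.evaluation import BinaryClassificationEvaluator

spark = SparkSession.builder.appName("mllib-pipeline-sketch").getOrCreate()

df = spark.createDataFrame(
    [(34.0, 2.0, 0), (12.0, 7.0, 1), (51.0, 1.0, 0), (8.0, 9.0, 1)],
    ["monthly_spend", "support_calls", "churned"],
)

pipeline = Pipeline(stages=[
    VectorAssembler(inputCols=["monthly_spend", "support_calls"], outputCol="features"),
    LogisticRegression(featuresCol="features", labelCol="churned"),
])

model = pipeline.fit(df)            # in practice, fit on a training split
predictions = model.transform(df)   # and evaluate on held-out data instead
print("AUC:", BinaryClassificationEvaluator(labelCol="churned").evaluate(predictions))
```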

Cloudera Data Scientist Training
Delivered Online, Flexible Dates
Price on Enquiry

Introduction to Hadoop Administration (TTDS6503)

By Nexus Human

Duration 3 Days 18 CPD hours This course is intended for This is an introductory-level course designed to teach experienced systems administrators how to install, maintain, monitor, troubleshoot, optimize, and secure Hadoop. Previous Hadoop experience is not required. Overview Working within an engaging, hands-on learning environment, guided by our expert team, attendees will learn to: understand the benefits of distributed computing; understand the Hadoop architecture (including HDFS and MapReduce); define administrator participation in Big Data projects; plan, implement, and maintain Hadoop clusters; deploy and maintain additional Big Data tools (Pig, Hive, Flume, etc.); plan, deploy and maintain HBase on a Hadoop cluster; monitor and maintain hundreds of servers; and pinpoint performance bottlenecks and fix them. Apache Hadoop is an open source framework for creating reliable and distributable compute clusters. Hadoop provides an excellent platform (with other related frameworks) to process large unstructured or semi-structured data sets from multiple sources to dissect, classify, learn from and make suggestions for business analytics, decision support, and other advanced forms of machine intelligence. This is an introductory-level, hands-on lab-intensive course geared for the administrator (new to Hadoop) who is charged with maintaining a Hadoop cluster and its related components. You will learn how to install, maintain, monitor, troubleshoot, optimize, and secure Hadoop. Introduction Hadoop history and concepts Ecosystem Distributions High level architecture Hadoop myths Hadoop challenges (hardware / software) Planning and installation Selecting software and Hadoop distributions Sizing the cluster and planning for growth Selecting hardware and network Rack topology Installation Multi-tenancy Directory structure and logs Benchmarking HDFS operations Concepts (horizontal scaling, replication, data locality, rack awareness) Nodes and daemons (NameNode, Secondary NameNode, HA Standby NameNode, DataNode) Health monitoring Command-line and browser-based administration Adding storage and replacing defective drives MapReduce operations Parallel computing before MapReduce: compare HPC versus Hadoop administration MapReduce cluster loads Nodes and Daemons (JobTracker, TaskTracker) MapReduce UI walk through MapReduce configuration Job config Job schedulers Administrator view of MapReduce best practices Optimizing MapReduce Fool proofing MR: what to tell your programmers YARN: architecture and use Advanced topics Hardware monitoring System software monitoring Hadoop cluster monitoring Adding and removing servers and upgrading Hadoop Backup, recovery, and business continuity planning Cluster configuration tweaks Hardware maintenance schedule Oozie scheduling for administrators Securing your cluster with Kerberos The future of Hadoop
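
As a sketch of the monitoring topics above (health monitoring, browser-based administration), note that the NameNode web UI also exposes its metrics as JSON at /jmx. The probe below reads a few commonly exposed FSNamesystemState attributes; the host and port are placeholders (9870 is the Hadoop 3 default, 50070 on Hadoop 2), and the exact attribute names can vary by version.

```python
# Probe NameNode health through its JMX-over-HTTP endpoint. The address is a
# placeholder, and the attribute names are ones commonly exposed by the
# FSNamesystemState bean; missing attributes print as "n/a".
import json
from urllib.request import urlopen

NAMENODE = "http://namenode.example.com:9870"   # hypothetical NameNode address

with urlopen(f"{NAMENODE}/jmx?qry=Hadoop:service=NameNode,name=FSNamesystemState") as resp:
    beans = json.load(resp)["beans"]

state = beans[0] if beans else {}
for key in ("NumLiveDataNodes", "NumDeadDataNodes", "CapacityTotal", "CapacityRemaining"):
    print(key, state.get(key, "n/a"))
```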

Introduction to Hadoop Administration (TTDS6503)
Delivered Online, Flexible Dates
Price on Enquiry

Google Cloud Platform Big Data and Machine Learning Fundamentals

By Nexus Human

Duration 1 Days 6 CPD hours This course is intended for This class is intended for the following: Data analysts, Data scientists, Business analysts getting started with Google Cloud Platform. Individuals responsible for designing pipelines and architectures for data processing, creating and maintaining machine learning and statistical models, querying datasets, visualizing query results and creating reports. Executives and IT decision makers evaluating Google Cloud Platform for use by data scientists. Overview This course teaches students the following skills: identify the purpose and value of the key Big Data and Machine Learning products in the Google Cloud Platform; use Cloud SQL and Cloud Dataproc to migrate existing MySQL and Hadoop/Pig/Spark/Hive workloads to Google Cloud Platform; employ BigQuery and Cloud Datalab to carry out interactive data analysis; train and use a neural network using TensorFlow; employ ML APIs; and choose between different data processing products on the Google Cloud Platform. This course introduces participants to the Big Data and Machine Learning capabilities of Google Cloud Platform (GCP). It provides a quick overview of the Google Cloud Platform and a deeper dive into the data processing capabilities. Introducing Google Cloud Platform Google Platform Fundamentals Overview. Google Cloud Platform Big Data Products. Compute and Storage Fundamentals CPUs on demand (Compute Engine). A global filesystem (Cloud Storage). CloudShell. Lab: Set up an Ingest-Transform-Publish data processing pipeline. Data Analytics on the Cloud Stepping-stones to the cloud. Cloud SQL: your SQL database on the cloud. Lab: Importing data into CloudSQL and running queries. Spark on Dataproc. Lab: Machine Learning Recommendations with Spark on Dataproc. Scaling Data Analysis Fast random access. Datalab. BigQuery. Lab: Build machine learning dataset. Machine Learning Machine Learning with TensorFlow. Lab: Carry out ML with TensorFlow Pre-built models for common needs. Lab: Employ ML APIs. Data Processing Architectures Message-oriented architectures with Pub/Sub. Creating pipelines with Dataflow. Reference architecture for real-time and batch data processing. Summary Why GCP? Where to go from here Additional Resources Additional course details: Nexus Humans Google Cloud Platform Big Data and Machine Learning Fundamentals training program is a workshop that presents an invigorating mix of sessions, lessons, and masterclasses meticulously crafted to propel your learning expedition forward. This immersive bootcamp-style experience boasts interactive lectures, hands-on labs, and collaborative hackathons, all strategically designed to fortify fundamental concepts. Guided by seasoned coaches, each session offers priceless insights and practical skills crucial for honing your expertise. Whether you're stepping into the realm of professional skills or a seasoned professional, this comprehensive course ensures you're equipped with the knowledge and prowess necessary for success. While we feel this is the best course for the Google Cloud Platform Big Data and Machine Learning Fundamentals course and one of our Top 10 we encourage you to read the course outline to make sure it is the right content for you. Additionally, private sessions, closed classes or dedicated events are available both live online and at our training centres in Dublin and London, as well as at your offices anywhere in the UK, Ireland or across EMEA.
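
As a small taste of the BigQuery material above, here is an interactive query against a public dataset using the google-cloud-bigquery Python client. It assumes the library is installed and default credentials are configured; the query itself is illustrative rather than a course lab.

```python
# Run an interactive SQL query against a BigQuery public dataset. Assumes the
# google-cloud-bigquery library is installed and application default
# credentials (and a default project) are configured in the environment.
from google.cloud import bigquery

client = bigquery.Client()  # picks up project/credentials from the environment

query = """
    SELECT word, SUM(word_count) AS total
    FROM `bigquery-public-data.samples.shakespeare`
    GROUP BY word
    ORDER BY total DESC
    LIMIT 5
"""

# Submit the query job and iterate over the result rows
for row in client.query(query).result():
    print(row.word, row.total)
```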

Google Cloud Platform Big Data and Machine Learning Fundamentals
Delivered Online, Flexible Dates
Price on Enquiry