Data Integration with Cloud Data Fusion
The Data Integration with Cloud Data Fusion course is a two-day, intermediate-level training program that introduces participants to Google Cloud's data integration capabilities using Cloud Data Fusion. The course examines common data integration challenges, makes the case for a dedicated data integration platform, and shows how Cloud Data Fusion can integrate data from a variety of sources and formats. Participants learn to design and run batch and real-time data processing pipelines, build data transformations with Wrangler, use connectors to integrate data from various sources and formats, and configure execution environments. The course is intended primarily for Data Engineers and Data Analysts who have completed the Big Data and Machine Learning Fundamentals course, and it also helps prepare for the Google Cloud Professional Data Engineer certification exam.
Course Objectives
Below is a summary of the main objectives of the Data Integration with Cloud Data Fusion course:
- Get familiar with Cloud Data Fusion features.
- Design and execute data processing pipelines.
- Use Wrangler for data transformations.
- Use connectors to integrate data from different sources.
- Configure execution environments for data pipelines.
- Implement best practices for data pipeline design and management.
- Monitor and troubleshoot data pipelines using Cloud Data Fusion tools.
- Optimize performance and cost of data processing tasks.
Course Certification
This course helps you prepare to take the:
Google Cloud Certified Professional Data Engineer Exam.
Course Outline
Module 01: Introduction to data integration and Cloud Data Fusion
- Data integration: what, why, challenges
- Data integration tools used in industry
- User personas
- Introduction to Cloud Data Fusion
- Data integration critical capabilities
- Cloud Data Fusion UI components
- Understand the need for data integration
- List the situations/cases where data integration can help businesses
- List the available data integration platforms and tools
- Identify the challenges with data integration
- Understand the use of Cloud Data Fusion as a data integration platform
- Create a Cloud Data Fusion instance
- Become familiar with the core framework and major components of Cloud Data Fusion
Module 02: Building pipelines
- Cloud Data Fusion architecture
- Core concepts
- Data pipelines and directed acyclic graphs (DAG)
- Pipeline Lifecycle
- Designing pipelines in Pipeline Studio
- Understand Cloud Data Fusion architecture
- Define what a data pipeline is
- Understand the DAG representation of a data pipeline
- Learn to use Pipeline Studio and its components
- Design a simple pipeline using Pipeline Studio
- Deploy and execute a pipeline
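The DAG idea covered in this module can be illustrated with a few lines of standard Python. The stage names below are hypothetical, and this is only a sketch of why a pipeline's stages form a directed acyclic graph with a valid execution order; it is not how Cloud Data Fusion represents or executes pipelines internally.

```python
from graphlib import TopologicalSorter  # Python 3.9+

# Hypothetical stages of a simple batch pipeline, expressed as a DAG:
# each key is a stage, each value is the set of stages it depends on.
stages = {
    "GCS Source": set(),
    "Wrangler Transform": {"GCS Source"},
    "BigQuery Sink": {"Wrangler Transform"},
}

# A topological sort yields an order in which every stage runs
# only after all of its upstream dependencies have completed.
order = list(TopologicalSorter(stages).static_order())
print(order)  # ['GCS Source', 'Wrangler Transform', 'BigQuery Sink']
```

Because the graph is acyclic, such an order always exists; a cycle (a stage depending on its own output) would make the pipeline impossible to schedule.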
Module 03: Designing complex pipelines
- Branching, Merging and Joining
- Actions and Notifications
- Error handling and Macros
- Pipeline Configurations, Scheduling, Import and Export
- Perform branching, merging, and join operations
- Execute pipelines with runtime arguments using macros
- Work with error handlers
- Run pre- and post-pipeline tasks with the help of actions and notifications
- Schedule pipelines for execution
- Import and export existing pipelines
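Macros in Cloud Data Fusion use a `${name}` placeholder syntax that is resolved from runtime arguments when a run starts. The sketch below mimics that substitution with Python's standard `string.Template` (which happens to use the same `${...}` notation); the property name and argument values are invented for illustration, and real Data Fusion macros also support functions not shown here.

```python
from string import Template

# Hypothetical pipeline property containing Data Fusion-style ${...} macros.
path_property = Template("gs://${bucket}/input/${run_date}.csv")

# Runtime arguments supplied when the pipeline run is started.
runtime_args = {"bucket": "my-demo-bucket", "run_date": "2025-01-09"}

# Macros are resolved against the runtime arguments before execution.
resolved = path_property.substitute(runtime_args)
print(resolved)  # gs://my-demo-bucket/input/2025-01-09.csv
```

The same deployed pipeline can thus read a different input path on every run simply by changing the runtime arguments, with no redesign in Pipeline Studio.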
Module 04: Pipeline execution environment
- Schedules and triggers
- Execution environment: Compute profile and provisioners
- Monitoring pipelines
- Understand the composition of an execution environment
- Configure your pipeline's execution environment, logging, and metrics
- Understand concepts such as compute profiles and provisioners
Module 05: Building Transformations and Preparing Data with Wrangler
- Wrangler
- Directives
- User-defined directives
- Understand the use of Wrangler and its main components
- Transform data using Wrangler UI
- Transform data using directives/CLI methods
- Create and use user-defined directives
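As a taste of what this module covers, a Wrangler recipe is an ordered list of directives applied to each record. The fragment below is a hypothetical recipe (the column names are invented) in the directive syntax Wrangler uses; consult the Wrangler directives reference for the full catalog.

```
parse-as-csv :body ',' true
drop :body
uppercase :state
fill-null-or-empty :city 'N/A'
```

Each line transforms the output of the previous one: parse the raw CSV column (treating the first row as headers), drop the now-redundant raw column, then clean up two of the parsed fields.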
Module 06: Connectors and streaming pipelines
- Connectors
- DLP
- Reference architecture for streaming applications
- Building streaming pipelines
- Understand the data integration architecture
- List various connectors
- Use the Cloud Data Loss Prevention (DLP) API
- Understand the reference architecture of streaming pipelines
- Build and execute a streaming pipeline
Module 07: Metadata and data lineage
- Metadata
- Data lineage
- List types of metadata
- Differentiate between business, technical, and operational metadata
- Understand what data lineage is
Module 08: Course Summary
- Understand the importance of maintaining data lineage
- Differentiate between metadata and data lineage
Course Mode
Instructor-Led Remote Live Classroom Training.
Trainers
Trainers are official GCP instructors, also certified in other IT technologies, with years of hands-on experience in the industry and in training.
Lab Topology
For all types of delivery, trainees can remotely access real Google Cloud environments in our labs 24 hours a day. Each participant implements the various configurations hands-on, getting practical, immediate feedback on the theoretical concepts.
Course Details
Course Prerequisites
Attendance at the Google Cloud Big Data and Machine Learning Fundamentals course is recommended.
Course Duration
Intensive formula: 2 days (9:00 to 17:00).
Course Frequency
Ask for other types of attendance.
Course Date
- Data Integration with Cloud Data Fusion Course (Intensive Formula) – 09/01/2025 – 09:00–17:00
- Data Integration with Cloud Data Fusion Course (Intensive Formula) – 27/03/2025 – 09:00–17:00
- Data Integration with Cloud Data Fusion Course (Intensive Formula) – 29/05/2025 – 09:00–17:00
- Data Integration with Cloud Data Fusion Course (Intensive Formula) – 24/07/2025 – 09:00–17:00
Steps to Enroll
You can register by asking to be contacted via the following link, by calling the office at the international number +355 45 301 313, or by sending a request to the email info@hadartraining.com.