Introduction to HPE Data Fabric Database - DEV 330

Learn to install, design, and build an HPE Data Fabric database, then write applications to process and analyze data.

About this Course

This course teaches developers the skills required to install, design, and build an HPE Data Fabric database, and then to write applications that process and analyze data. It covers the HBase API (for working with legacy data and applications) and the OJAI API (for loading and processing new data of all types). You will design and build tables and a schema for your database, then use that database to write applications that load and manipulate data, maximize performance, control access to data, and query simple, complex, and streaming data.

This is the first course in the HPE Data Fabric Database Series.

What's Covered

1: Perform Common Database Operations

Lessons:

  • Create a New Table and Load Data
  • Compare Database Schema
  • Query Data in Data Fabric Database

Lab Activities:

  • Create Tables and Load Data into MapR Database
  • Manipulate Rows, Columns, and Column Families
  • Load JSON Data, Monitor Data Splits
  • Use Drill to Query Data Fabric Database Tables (see the query sketch below)
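
To give a sense of what the Lesson 1 labs build toward, here is a minimal Java sketch that queries a Data Fabric Database JSON table through Drill's JDBC driver. The table path /user/mapr/customers, the field names, and the connection URL are illustrative assumptions for this sketch, not values from the course materials.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

// Minimal sketch: query a Data Fabric Database JSON table through Apache
// Drill's JDBC driver. The Drill JDBC driver jar must be on the classpath.
// The table path, field names, and connection URL below are placeholders.
public class DrillQueryExample {
    public static void main(String[] args) throws Exception {
        // The connection string is cluster-specific; a direct-drillbit URL is shown here.
        String url = "jdbc:drill:drillbit=localhost:31010";

        try (Connection conn = DriverManager.getConnection(url);
             Statement stmt = conn.createStatement()) {

            // Drill addresses a Data Fabric Database table by its file-system
            // path, quoted with backticks, through the dfs storage plugin.
            String sql = "SELECT _id, first_name, last_name "
                       + "FROM dfs.`/user/mapr/customers` LIMIT 5";

            try (ResultSet rs = stmt.executeQuery(sql)) {
                while (rs.next()) {
                    System.out.printf("%s: %s %s%n",
                        rs.getString("_id"),
                        rs.getString("first_name"),
                        rs.getString("last_name"));
                }
            }
        }
    }
}
```

In the labs themselves, this kind of query is typically issued interactively from Drill's SQL shell rather than from Java; the SQL is the same either way.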

2: Use MapR-DB APIs

Lessons:

  • Set Up the Data Fabric Database Client
  • Introduction to the HBase API
  • Introduction to the OJAI API

Lab Activities:

  • Set Up a Data Fabric Client
  • Use the HBase API
  • Use the OJAI API (see the API sketches below)
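
The two APIs introduced in this lesson present the database in different models: the HBase API deals in binary tables, rows, and column families, while the OJAI API deals in JSON documents. The minimal sketches below illustrate each. The table paths, column family name, and document contents are assumptions made for the example, and both sketches assume a client already configured for your cluster (as in the "Set Up a Data Fabric Client" lab).

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

// Sketch of the HBase API against a binary table. The table path and the
// column family "cf" are placeholders and are assumed to exist already
// (for example, created during the Lesson 1 labs).
public class HBaseApiExample {
    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        try (Connection conn = ConnectionFactory.createConnection(conf);
             Table table = conn.getTable(TableName.valueOf("/user/mapr/customers_binary"))) {

            // Write one cell: row key "c001", column family "cf", qualifier "name".
            Put put = new Put(Bytes.toBytes("c001"));
            put.addColumn(Bytes.toBytes("cf"), Bytes.toBytes("name"), Bytes.toBytes("Sam"));
            table.put(put);

            // Read the same cell back.
            Get get = new Get(Bytes.toBytes("c001"));
            Result result = table.get(get);
            String name = Bytes.toString(
                result.getValue(Bytes.toBytes("cf"), Bytes.toBytes("name")));
            System.out.println("name = " + name);
        }
    }
}
```

And a corresponding sketch with the OJAI API against a JSON table (again, the path and the document fields are placeholders):

```java
import org.ojai.Document;
import org.ojai.store.Connection;
import org.ojai.store.DocumentStore;
import org.ojai.store.DriverManager;

// Sketch of the OJAI API against a JSON table. The table path and the
// document contents are placeholders for this example.
public class OjaiApiExample {
    public static void main(String[] args) throws Exception {
        // "ojai:mapr:" selects the Data Fabric OJAI driver.
        try (Connection connection = DriverManager.getConnection("ojai:mapr:");
             DocumentStore store = connection.getStore("/user/mapr/customers")) {

            // Build a JSON document and insert (or replace) it by its _id.
            Document doc = connection.newDocument(
                "{\"_id\":\"c001\",\"name\":\"Sam\",\"orders\":3}");
            store.insertOrReplace(doc);

            // Fetch the document back by its _id and print it as JSON.
            Document fetched = store.findById("c001");
            System.out.println(fetched.asJsonString());
        }
    }
}
```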

Prerequisites

  • Basic knowledge of Hadoop and intermediate knowledge of Linux
  • Experience using a text editor such as vi
  • A terminal program installed; familiarity with commands such as mv, cp, ssh, grep, cd, and useradd
  • Beginner-to-intermediate fluency with Java in an IDE

Curriculum

  • Lesson 1: Perform Common Data Fabric Database Operations
  • Quiz 1
  • Lesson 2: Use Data Fabric Database APIs
  • Quiz 2
  • Course Materials
  • Lab Guide
  • Lab Environment Setup Guide
