Phoenix is an open-source SQL interface for HBase that works as an SQL layer on top of the HBase architecture. Through well-defined, standard SQL APIs, it serves as a trusted data platform for OLTP and operational analytics on Hadoop. Phoenix compiles SQL queries into multiple HBase scans and runs them in parallel, using native HBase APIs rather than MapReduce frameworks.

  • Supports standard SQL and JDBC APIs with full ACID transaction capabilities.
  • Uses HBase as its backing store, enabling OLTP and operational analytics on Hadoop.
  • Reduces the amount of code users have to write to access HBase.
  • Allows for performance optimizations that are transparent to the user.
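As a quick illustration of the standard SQL surface described above, a minimal Phoenix session might look like the following (the table and column names are illustrative, not part of this product):

```sql
-- Create a Phoenix table; it is backed by an HBase table of the same name,
-- and the PRIMARY KEY constraint maps to the HBase row key.
CREATE TABLE IF NOT EXISTS us_population (
    state CHAR(2) NOT NULL,
    city VARCHAR NOT NULL,
    population BIGINT
    CONSTRAINT pk PRIMARY KEY (state, city));

-- Phoenix uses UPSERT (insert-or-update) rather than INSERT.
UPSERT INTO us_population VALUES ('CA', 'Los Angeles', 3971883);

-- Standard SQL queries are compiled into parallel HBase scans.
SELECT state, SUM(population) FROM us_population GROUP BY state;
```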

For additional details on Phoenix, please refer to the Apache Phoenix documentation.

The Phoenix tab provides a user-friendly interface to manage and run Phoenix scripts with ease. It provides the following features:

Interactively run Phoenix scripts

Phoenix scripts can be run interactively within Big Data Studio by typing Phoenix queries directly into the provided console.


On a Kerberos-enabled cluster, the logged-in AD user must have privileges on the ‘default’ and ‘system’ namespaces to work with Phoenix tables. If connected as a user without these privileges, Phoenix throws an exception such as “No Current Connection”.

Run All

You can run all commands in the script file loaded in the editor, one by one, through the interactive console by clicking the “Run All” button or by choosing the “Run in Console” option from the context menu.

Run Selection

You can run the selected commands in the script file through the interactive console, one by one, by clicking the “Run Selection” button or by choosing the “Run Selection in Console” option from the context menu.


An autocomplete feature is available in the editor. It suggests keywords as you type and allows you to accept a suggestion or select one by pressing the “down arrow” key.

Manage script files

You can create a new script file or load an existing one using the “Script” button.

You can save the file under a new name using the “Save As” button.

You have options to import scripts from a folder, create new scripts, and delete scripts present in the tree view.


We ship several samples that you can use to get started.

Working with Phoenix Schema

Big Data Studio provides a simple interface to create new schemas and tables, along with an option to manage the Phoenix schema through a tree-like explorer.

Create Schema

Click the “New Schema” button in the Phoenix tab, enter a schema name in the prompt, and click the “Create” button to create a new schema.
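Creating a schema through this dialog corresponds to a Phoenix DDL statement such as the following (the schema name is illustrative; note that Phoenix schemas require namespace mapping to be enabled on the cluster):

```sql
-- Requires phoenix.schema.isNamespaceMappingEnabled=true in hbase-site.xml;
-- the schema maps to an HBase namespace of the same name.
CREATE SCHEMA IF NOT EXISTS sales;
```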

Create Table

To create a new table, click the “New Table” button in the Phoenix tab, select a schema, and provide the SQL query to create the table, based on the template provided in the editor area of the prompt box.
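The SQL you provide in the prompt box is an ordinary Phoenix CREATE TABLE statement; a minimal sketch, with a hypothetical schema and table, might be:

```sql
-- Hypothetical table in the selected schema; the PRIMARY KEY column(s)
-- form the HBase row key, other columns become HBase cells.
CREATE TABLE IF NOT EXISTS sales.orders (
    order_id BIGINT NOT NULL PRIMARY KEY,
    customer VARCHAR,
    amount DECIMAL(10, 2));
```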

Phoenix Schema Explorer

You can explore Phoenix schemas and tables in a simple tree-like view under the Schema tab.

Open the context menu by right-clicking a schema, a table, or the empty tree area to explore the available options.

With this simple interface you can create schemas and tables, view the top 500 rows, drop schemas and tables, alter tables, and add new columns.
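The alter and drop actions in the explorer correspond to standard Phoenix DDL statements such as the following (schema, table, and column names are illustrative):

```sql
-- Add a new column to an existing table.
ALTER TABLE sales.orders ADD order_date DATE;

-- Drop a table, then its schema (a schema must be empty before it can be dropped).
DROP TABLE sales.orders;
DROP SCHEMA sales;
```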

Phoenix Table Viewer

You can view the contents of a Phoenix table in the Result tab using the select table option available in the Schema tab. The result can be viewed as both plain text and a grid.
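The “view top 500 rows” action in the explorer corresponds to a query of the following form (table name illustrative):

```sql
-- Fetch at most the first 500 rows of the table.
SELECT * FROM sales.orders LIMIT 500;
```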

CSV Export

You can export the results generated by running a Phoenix script to CSV format by clicking the “CSV Export” button in the ribbon.

Phoenix C# and Java samples

You can run the Phoenix samples in Java, and you can run the C# samples through the Phoenix ODBC driver interface. You must install the Phoenix ODBC driver on your machine to run the Phoenix C# samples.