Interactive Demo

Demonstration of Oracle SQL-compatible functions and syntax

Cloud Service lets you run Oracle SQL queries in the cloud using EDB Postgres Advanced Server. This demonstration shows two Oracle SQL-syntax queries running unmodified on a Cloud Service test cluster, populated with the Chinook sample database.

Watch the video, or load up psql and follow along.

Connecting to the demo cluster with psql

You can use any recent version of psql to connect to EDB Postgres Advanced Server. The version that ships with Advanced Server has a few nice SQL*Plus compatibility features (with more available in EDB*Plus). The queries and commands used here work the same in either version of psql. For convenience, these examples use the version of psql available in the EDB Postgres Advanced Server container image used by Cloud Native PostgreSQL and internally by Cloud Service. You can follow along by installing Docker and running:

docker pull quay.io/enterprisedb/edb-postgres-advanced
docker run --rm -it quay.io/enterprisedb/edb-postgres-advanced /bin/bash
Output
[postgres@0fe3c244563c /]$ 
Note

If you prefer a graphical tool to execute Oracle-syntax-compatible queries or run Oracle PL/SQL-compatible code, we recommend pgAdmin.

The connection string for this demo's EDB Postgres Advanced Server cluster looks like this:

postgres://demo:password@p-vmpc7c40fm.pg.biganimal.io:5432/chinook?sslmode=require

In case you're unfamiliar with PostgreSQL connection URIs:

  • demo is the user role you're connecting as. This is a user set up with select privileges on the database.

  • password is the password for this user.

    Passwords in connection strings.

    This example illustrates a complete connection URL, including the password. That's fine for a demonstration and might also be acceptable for application configuration if access to the configuration is limited. Avoid this practice for admin, superuser, or other roles used interactively. psql prompts for a password if none is supplied, as shown in the example after this list.

  • p-vmpc7c40fm.pg.biganimal.io is the host name for the EDB Postgres Advanced Server cluster on Cloud Service that you're connecting to.

  • 5432 is the usual PostgreSQL port number.

  • chinook is the name of the database.

  • sslmode=require ensures that you establish a secure connection.
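
If you prefer not to embed the password in the URI at all, you can omit it and let psql prompt for it. For example, connecting to the same demo cluster and entering the password interactively:

psql "postgres://demo@p-vmpc7c40fm.pg.biganimal.io:5432/chinook?sslmode=require"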

With that in hand, launch psql:

psql postgres://demo:password@p-vmpc7c40fm.pg.biganimal.io:5432/chinook?sslmode=require
Output
psql (14.2.1, server 13.6.10 (Debian 13.6.10-1+deb10))
SSL connection (protocol: TLSv1.3, cipher: TLS_AES_256_GCM_SHA384, bits: 256, compression: off)
Type "help" for help.

chinook=>

Here's the schema:

\dt
Output
             List of relations
 Schema |     Name      | Type  |   Owner   
--------+---------------+-------+-----------
 public | album         | table | edb_admin
 public | artist        | table | edb_admin
 public | customer      | table | edb_admin
 public | employee      | table | edb_admin
 public | genre         | table | edb_admin
 public | invoice       | table | edb_admin
 public | invoiceline   | table | edb_admin
 public | mediatype     | table | edb_admin
 public | playlist      | table | edb_admin
 public | playlisttrack | table | edb_admin
 public | track         | table | edb_admin
(11 rows)

The employee table is defined as follows:

\d+ employee
Output
Table "public.employee"
   Column   |            Type             | Collation | Nullable | Default | Storage  | Stats target | Description 
------------+-----------------------------+-----------+----------+---------+----------+--------------+-------------
 employeeid | numeric                     |           | not null |         | main     |              | 
 lastname   | character varying(20)       |           | not null |         | extended |              | 
 firstname  | character varying(20)       |           | not null |         | extended |              | 
 title      | character varying(30)       |           |          |         | extended |              | 
 reportsto  | numeric                     |           |          |         | main     |              | 
 birthdate  | timestamp without time zone |           |          |         | plain    |              | 
 hiredate   | timestamp without time zone |           |          |         | plain    |              | 
 address    | character varying(70)       |           |          |         | extended |              | 
 city       | character varying(40)       |           |          |         | extended |              | 
 state      | character varying(40)       |           |          |         | extended |              | 
 country    | character varying(40)       |           |          |         | extended |              | 
 postalcode | character varying(10)       |           |          |         | extended |              | 
 phone      | character varying(24)       |           |          |         | extended |              | 
 fax        | character varying(24)       |           |          |         | extended |              | 
 email      | character varying(60)       |           |          |         | extended |              | 
Indexes:
    "pk_employee" PRIMARY KEY, btree (employeeid)
Foreign-key constraints:
    "fk_employeereportsto" FOREIGN KEY (reportsto) REFERENCES employee(employeeid)
Referenced by:
    TABLE "customer" CONSTRAINT "fk_customersupportrepid" FOREIGN KEY (supportrepid) REFERENCES employee(employeeid)
    TABLE "employee" CONSTRAINT "fk_employeereportsto" FOREIGN KEY (reportsto) REFERENCES employee(employeeid)
Access method: heap

This table has a reportsto field, which means it models a hierarchical reporting structure: some employees report to other employees, who might in turn report to still other employees.
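
To see the direct relationships this field encodes, a simple self-join lists each employee alongside their immediate manager. (This query is just an illustration, not part of the demo, but it runs against the same table.)

SELECT e.firstname, e.lastname, m.lastname AS manager
FROM employee e
LEFT JOIN employee m ON m.employeeid = e.reportsto
ORDER BY e.employeeid;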

Demo #1: Exposing an organization hierarchy with CONNECT BY

Construct a hierarchical query to expose this chain of command.

Modern SQL can use a recursive CTE for this, as those are widely supported (a sketch of that approach follows this example). But Oracle has, for decades, supported an alternative mechanism for querying hierarchies: CONNECT BY, as shown in the following:

SELECT firstname, lastname, (
	SELECT LISTAGG(lastname, ', ')
	FROM employee rt
	START WITH rt.employeeid=e.reportsto
	CONNECT BY employeeid = PRIOR reportsto
	) AS "chain of command"
FROM employee e;
Output
 firstname | lastname | chain of command 
-----------+----------+------------------
 Andrew    | Adams    | 
 Nancy     | Edwards  | Adams
 Jane      | Peacock  | Edwards, Adams
 Margaret  | Park     | Edwards, Adams
 Steve     | Johnson  | Edwards, Adams
 Michael   | Mitchell | Adams
 Robert    | King     | Mitchell, Adams
 Laura     | Callahan | Mitchell, Adams
(8 rows)

Here, the CONNECT BY clause and the LISTAGG function are used in a subquery to generate the chain of command for each employee: who they report to, who that person reports to, and so on.
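
For comparison, here's one way to produce the same chain of command with a standard recursive CTE. This sketch isn't part of the demo; it accumulates the chain in an array rather than using an aggregate function, and it should run on any recent PostgreSQL:

-- Walk from each employee up the tree, accumulating manager last names
WITH RECURSIVE chain (employeeid, firstname, lastname, reportsto, path) AS (
	SELECT employeeid, firstname, lastname, reportsto, ARRAY[]::varchar[]
	FROM employee
	UNION ALL
	SELECT c.employeeid, c.firstname, c.lastname, mgr.reportsto, c.path || mgr.lastname
	FROM chain c
	JOIN employee mgr ON mgr.employeeid = c.reportsto
)
SELECT firstname, lastname, array_to_string(path, ', ') AS "chain of command"
FROM chain
WHERE reportsto IS NULL  -- keep only the rows that reached the top of the hierarchy
ORDER BY employeeid;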

The LISTAGG() function was introduced in Oracle 11g Release 2, and few other database systems support it. PostgreSQL does support the equivalent string_agg(). In the CONNECT BY query above, it could be used as a drop-in replacement:

SELECT firstname, lastname, (
	SELECT string_agg(lastname, ', ')
	FROM employee rt
	START WITH rt.employeeid=e.reportsto
	CONNECT BY employeeid = PRIOR reportsto
	) AS "chain of command"
FROM employee e;
Output
 firstname | lastname | chain of command
-----------+----------+------------------
 Andrew    | Adams    |
 Nancy     | Edwards  | Adams
 Jane      | Peacock  | Edwards, Adams
 Margaret  | Park     | Edwards, Adams
 Steve     | Johnson  | Edwards, Adams
 Michael   | Mitchell | Adams
 Robert    | King     | Mitchell, Adams
 Laura     | Callahan | Mitchell, Adams
(8 rows)

But the semantics of the two functions diverge for even slightly less trivial uses, specifically when the grouping construct comes into play, as the next demonstration shows.

Demo #2: Group concatenation with LISTAGG

As the schema listing earlier showed, this database also has album and track tables containing metadata on digital recordings. You can use some analytic functions, including LISTAGG, to put together a report on average track storage requirements for albums with "baby" in the title.

SELECT UNIQUE title,
       ROUND(AVG(bytes) OVER (PARTITION BY mediatypeid)/1048576 ) media_avg_mb,
       LISTAGG(t.name || ' (' || ROUND(bytes/1048576) || ' mb)', chr(10))
         WITHIN GROUP (ORDER BY trackid)
         OVER (PARTITION BY title)  track_list
FROM track t
JOIN album USING (albumid)
JOIN mediatype USING (mediatypeid)
WHERE lower(title) LIKE '%baby%'
ORDER BY title;
Output
    title     | media_avg_mb |                    track_list
--------------+--------------+---------------------------------------------------
 Achtung Baby |            9 | Zoo Station (9 mb)                               +
              |              | Even Better Than The Real Thing (7 mb)           +
              |              | One (9 mb)                                       +
              |              | Until The End Of The World (9 mb)                +
              |              | Who's Gonna Ride Your Wild Horses (10 mb)        +
              |              | So Cruel (11 mb)                                 +
              |              | The Fly (8 mb)                                   +
              |              | Mysterious Ways (8 mb)                           +
              |              | Tryin' To Throw Your Arms Around The World (7 mb)+
              |              | Ultraviolet (Light My Way) (10 mb)               +
              |              | Acrobat (8 mb)                                   +
              |              | Love Is Blindness (8 mb)
(1 row)

Trying to replace LISTAGG with string_agg in this example fails, because the expression syntax for string_agg is different:

SELECT UNIQUE title,
       ROUND(AVG(bytes) OVER (PARTITION BY mediatypeid)/1048576 ) media_avg_mb,
       string_agg(t.name || ' (' || ROUND(bytes/1048576) || ' mb)', chr(10))
         WITHIN GROUP (ORDER BY trackid)
         OVER (PARTITION BY title)  track_list
FROM track t
JOIN album USING (albumid)
JOIN mediatype USING (mediatypeid)
WHERE lower(title) LIKE '%baby%'
ORDER BY title;
Output
ERROR:  function string_agg(text, text, numeric) does not exist
LINE 3:        string_agg(t.name || ' (' || ROUND(bytes/1048576) || ...
               ^
HINT:  No function matches the given name and argument types. You might need to add explicit type casts.

This isn't difficult to correct, but it requires restructuring the query to replace the grouping construct. Such work can quickly accumulate errors. Fortunately, EDB Postgres Advanced Server supports LISTAGG in addition to string_agg, so this query doesn't need to change when migrating from Oracle.
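
For the curious, here's roughly what that restructuring could look like in native PostgreSQL. This is a sketch for illustration only, not a tested migration: UNIQUE becomes DISTINCT, and string_agg runs as a window function with an explicit frame in place of the LISTAGG WITHIN GROUP ... OVER construct:

-- Native PostgreSQL sketch: DISTINCT plus string_agg as a window function
SELECT DISTINCT title,
       ROUND(AVG(bytes) OVER (PARTITION BY mediatypeid)/1048576) media_avg_mb,
       string_agg(t.name || ' (' || ROUND(bytes/1048576) || ' mb)', chr(10))
         OVER (PARTITION BY title ORDER BY trackid
               ROWS BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING) track_list
FROM track t
JOIN album USING (albumid)
JOIN mediatype USING (mediatypeid)
WHERE lower(title) LIKE '%baby%'
ORDER BY title;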

Compatibility preserves the value of your existing work

In both of the examples shown here, you probably wouldn't use these functions and this syntax for new work. There are better, more familiar, or at least more widely available equivalents provided natively by PostgreSQL and many other databases. But by supporting them, EDB Postgres Advanced Server lets you reuse existing logic with minimal modification, so you can focus your time and expertise on solving new problems.

Try it on your own cluster: export and import

If you want to try these examples on your own Cloud Service cluster, follow these instructions to import the example database.

  1. Connect to your EDB Postgres Advanced Server cluster as edb_admin using psql. You can get the command for this from your cluster's Overview tab. It looks like this, where <YOUR CLUSTER HOSTNAME> is specific to your cluster:

    psql -W "postgres://edb_admin@<YOUR CLUSTER HOSTNAME>:5432/edb_admin?sslmode=require"

    You're prompted for the password you specified when creating your cluster.

  2. Create a new database and connect to it. You're prompted again for your cluster password:

    CREATE DATABASE chinook;
    \c chinook
  3. Create a limited-privilege user for connecting to this database:

    CREATE USER demo WITH PASSWORD 'password';
    ALTER DEFAULT PRIVILEGES IN SCHEMA public GRANT SELECT ON TABLES TO demo;

    See Details on managing access in Cloud Service databases for more information.

  4. Quit psql (\q).

  5. To export and import, you need compatible versions of pg_dump and pg_restore. If you aren't already running EDB Postgres Advanced Server, you can use the container image used by Cloud Service to ensure compatibility.

    docker pull quay.io/enterprisedb/edb-postgres-advanced
    docker run --rm -it quay.io/enterprisedb/edb-postgres-advanced /bin/bash
  6. Export the Chinook sample database from EDB's cluster using pg_dump:

    pg_dump --format custom \
        "postgres://demo:password@p-vmpc7c40fm.pg.biganimal.io:5432/chinook?sslmode=require" \
        > /tmp/chinook.dump
  7. Get the host name of your cluster from the Connect tab on the Console. Use it in place of <YOUR CLUSTER HOSTNAME> to invoke pg_restore:

    pg_restore --no-owner \
        -d "postgres://edb_admin@<YOUR CLUSTER HOSTNAME>:5432/chinook?sslmode=require" \
        /tmp/chinook.dump
    Note

    You might see an error about pg_stat_statements. You can safely ignore it. pg_stat_statements is a very useful extension and is installed by default on Cloud Service clusters. However, because you're connecting as the admin user and not a superuser, you can't modify it. The rest of the schema and data are restored normally.

  8. Finally, connect to this database:

    psql "postgres://demo:password@<YOUR CLUSTER HOSTNAME>:5432/chinook?sslmode=require"

Now you can try some queries on your own cluster.
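
As a quick sanity check (a suggestion, not part of the original demo), confirm that the demo role can read the imported data and find the top of the reporting chain:

SELECT COUNT(*) AS tracks FROM track;
SELECT firstname, lastname FROM employee WHERE reportsto IS NULL;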
