Category Archives: Learning

How to secure your ORDS service

Securing your Oracle REST Data Services module only takes a couple of minutes.

As you can see in the video, this will be a fairly short how-to.

Assumptions

For this post, I’m going to assume:

  • You’re able to use SQL Developer.
  • You already have an ORDS module created that you want to secure.
  • You have at least a basic understanding of Roles, Privileges, Authentication and Authorization.

Test the Existing Service

Before we begin, I’ll test the current service with a curl command.

This returns a response with a status of “HTTP/1.1 200 OK” and a JSON object with an array of items.
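The same check can be scripted; here’s a rough equivalent using Python’s requests library (the endpoint URI is a placeholder for one of my Beacon module services):

import requests

# Placeholder URI, substitute one of your own module endpoints.
uri = 'https://localhost:8080/ords/beacon/beacon/items'

response = requests.get(uri, verify=False)  # self-signed cert on my VM
print(response.status_code)  # 200 while the service is still unsecured
print(response.json())       # a JSON object with an array of items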


Secure Your Service

First up I will create a Role and a Privilege that I’ll use to secure my service.

Create a Role
  1. Expand the schema connection and REST Data Services, if they are not already expanded.
  2. Right click on Roles.
  3. Click New Role…

  1. Enter a name for the role.  I’m using bc-role.
  2. Click Apply.

Create a Privilege
  1. Expand the schema connection and REST Data Services, if they are not already expanded.
  2. Right click on Privileges.
  3. Click New Privilege…

  1. Enter a name for the privilege.  I’m using bc-priv.
  2. Select the new role ‘bc-role’.
  3. Select the ORDS Module.  I’m using my Beacon module.
  4. Click Apply.

Be aware, this will only add security to the specific modules you select in the bottom section.  If you add a new module later and want it covered by this privilege, you need to remember to edit this privilege and select the new module in the ‘Protect Modules’ selector.

Alternatively, you can use the ‘Protect Resources’ tab to apply security based on a URI pattern.

For my examples, I’m working from a VirtualBox VM and I have used ‘beacon’ as my schema alias.  So the base for all of my REST services in this schema would be https://localhost:8080/ords/beacon/

I entered the URI pattern /*

This will protect all ORDS services for this schema.  If I wanted to use this method to protect just an admin URI template in my beacon module (the full URI is https://localhost:8080/ords/beacon/beacon/admin), I would use a URI pattern like /beacon/admin*

There are many more complex patterns you could use for your services, but for this post, I am only protecting the Beacon module so I am using the first example.

Test
This now returns a response with a status of “HTTP/1.1 401 Unauthorized”.

Access Your Service

I want to set up my ORDS module so it can be accessed by a third party application so I’ll set up OAuth 2.

Create an OAuth 2 Client

Using the OAUTH PL/SQL package (its create_client procedure), I will create an OAuth2 client using the privilege I created above.

Then I will grant the new OAuth2 client the role I created above using oauth.grant_client_role.

Test

In order to generate an access token, I need to get my client id and secret from the user_ords_clients table.

When you create an OAuth2 client, a REST service ‘/oauth/token’ is created for you that your applications can use to generate a bearer token using the above id and secret.
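As a sketch, the token request uses the client id and secret as HTTP basic auth credentials with the client_credentials grant (shown here with Python’s requests library; substitute your own values):

import requests

client_id = 'CLIENT_ID_FROM_USER_ORDS_CLIENTS'
client_secret = 'CLIENT_SECRET_FROM_USER_ORDS_CLIENTS'

response = requests.post(
    'https://localhost:8080/ords/beacon/oauth/token',
    auth=(client_id, client_secret),            # HTTP basic auth
    data={'grant_type': 'client_credentials'},
    verify=False,                               # self-signed cert on my VM
)
print(response.json())  # access_token, token_type and expires_in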

Notice that this token expires in 3600 seconds.  Once it expires, you would repeat the above call to generate a new token.

Now that I have an access_token I can include it in the Authorization header to access my secure ORDS service.  Notice that since this is a Bearer token I set the Authorization value to ‘Bearer <<my token>>’.

Example Usage

The following is an example of how to make secure REST calls from an external application after you’ve followed the above steps.

On the application server

Store the client_id and client_secret in environment variables.

In your application
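A rough sketch of that flow in Python (the environment variable names here are my own choice):

import os
import requests

client_id = os.environ['ORDS_CLIENT_ID']
client_secret = os.environ['ORDS_CLIENT_SECRET']

base = 'https://localhost:8080/ords/beacon'

# Generate a bearer token, then use it to call the secured service.
token = requests.post(
    base + '/oauth/token',
    auth=(client_id, client_secret),
    data={'grant_type': 'client_credentials'},
    verify=False,
).json()['access_token']

response = requests.get(
    base + '/beacon/admin',                     # the admin template from above
    headers={'Authorization': 'Bearer ' + token},
    verify=False,
)
print(response.status_code)  # 200 now that the request is authorized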

Summary

Setting up security on your ORDS modules is quick and fairly easy.

OAuth2 is just one way to access your secured ORDS modules.  If you’d like to set up Basic Auth, check out this post by Jeff Smith.

Consult the ORDS documentation for other examples and a much more in-depth explanation.

How to use an SSH Tunnel in Oracle Developer Cloud Service Build Jobs

Oracle Developer Cloud Service is a hosted team development and delivery platform with all kinds of tools to help your team be more efficient.  In this post, I will cover how to use an SSH tunnel to connect to your database in a build job.

This is current as of October 2018.

Connecting to your database through an SSH tunnel is fairly simple, and you won’t have to ask your network admin to open a port in your firewall and/or load balancers.

If you’re connecting to an Oracle Cloud Database, it should be pre-configured to allow database connections through an SSH tunnel.  If not, check with your server admin for assistance.

If you’re not familiar with SSH tunnels you can find out more here.

However, if you’re reading this post you probably just want to skip to the how-to so let’s get started.

Configure your build job

Open your Developer Cloud Service project, click the ‘Build’ tab and select the build you want to work with.

Click the ‘Configure’ button

Authorized SSH Keys

You will need an SSH Key that has been authorized to connect to the database server.  I recommend generating a new key that will only be used from your DevCs builds.  Name the new key pair something that will let you know it’s for this DevCs project.

Helpful links:

If you generated a new key pair named DevCsProj1, you should have two files.  The file without an extension ‘DevCsProj1’ is your private key and the file with .pub ‘DevCsProj1.pub’ is your public key.

Add a build environment configuration

Select the ‘Build Environment’ tab, click ‘Add Build Environment’ and choose ‘SSH Configuration’.

  1. Open the private key (from above) in a text editor, copy everything in the file and paste it in the ‘Private Key’ file text area.
  2. You can do the same for the ‘Public Key’ text area but it is not required for this example.
  3. If you created your keys with a password, enter the password in the ‘Pass Phrase’ field.
  4. For a little extra security, you could get the public key for your Database Server and enter it in the ‘SSH Server Public Key’ text area.  This will ensure that your build job only connects to that server and will help protect against connecting to a different server if the IP address is re-assigned.  This field is optional.
  5. Check ‘Create SSH tunnel’.
  6. Enter the SSH username.  (This is not a database user, this is an authorized SSH user on the server.)
  7. If you’d rather use an SSH user/password instead of an SSH Key file, you could enter the password in the password field.  I leave it blank in order to only connect with the keys.
  8. Enter the port you want to use on the DevCs side of the tunnel in ‘Local Port’.
  9. If you leave ‘Remote Host’ empty it will default to localhost.  The remote host is used on the other side of the tunnel to make the connection as if you were on that machine.  Since I’m intending to connect to the database that is on the same server that I’m SSH’d into I can use localhost.  If I wanted to connect to a different server from that side of the tunnel, I could enter the address for the other remote server.  For example, if I had a server that was only accessible from inside of a network that includes the SSH server, I can SSH into the network and the tunnel will end on the other internal server.
  10. If you’d like to re-use your keys for other SSH commands you can check ‘Setup files in ~/.ssh for command-line ssh tools’.  It is not necessary for this example.

The ‘Connect String’ displayed at the bottom shows the ssh command DevCs will use to create the tunnel.

ssh -L localhost:1521:localhost:1521 opc@129.111.111.111

Let’s break it down:

  • Create a local ssh tunnel.  ssh -L
  • On the local DevCs server map to the ‘Local Port’ value.  localhost:1521 (The first one)
  • Once connected to the SSH server map the tunnel end to ‘Remote Host’:’Remote Port’.   localhost:1521 (The second one)
  • Make the SSH connection as ‘Username’@’SSH Server’.  opc@129.111.111.111

OK, time to use the new tunnel.  I’ll use a SQLcl Builder.

Add / Edit a builder

Select the Builders tab, click Add Builder and choose SQLcl Builder.

If you’ve used a SQLcl builder before this will all be the same, except for the connect string.

  1. Enter your database username in the ‘Username’ field.
  2. Enter your database password in the ‘Password’ field.
  3. If you’re connecting to a database that is using wallet credentials such as Oracle Exadata Express Cloud Service, enter the location of your Credentials file.  I’m not so I will leave it empty.
  4. Since I now have an SSH tunnel in place I will connect to the local (DevCs server) end of the tunnel and use the ‘Local Port’ value from the SSH Configuration.  //localhost:1521/[servicename].  Even if I had defined a remote host and/or port other than localhost / 1521 in the tunnel configuration, I would still use localhost:[Local Port].  The tunnel takes care of the mapping on the other end.
  5. Enter the SQL File or Inline SQL you want to run.
  6. Save the Job Configuration.

Run it

Click the ‘Build Now’ button and when you look at the console output you will see something similar to this.

  • SSH is set up.
  • The SSH tunnel is opened.
  • SQLcl executed my script.  (I did not enable any output in my script so none is displayed.)
  • SQLcl disconnects from my Database.
  • The SSH tunnel is closed.
  • The SSH environment is removed.

Once you’ve added an SSH Build Environment to your build job and tested it, you can start adding one to each build job that uses a database connection.  After that’s done you can close port 1521 on your Database server (assuming you don’t need it for other applications).

Oracle Developer Cloud Service is constantly being improved, so let me know in the comments if this guide becomes out of date and I will update it.

Execute PL/SQL calls with Python and cx_Oracle

After you’ve got the hang of performing Basic CRUD operations with cx_Oracle you’re ready to start tapping into some of the real power of the Oracle Database.

Why use PL/SQL?

Python is an excellent language for most things you want your application to do, but when you’re processing data it just goes faster if you do the work where the data is.

This post will cover how to execute Oracle PL/SQL functions and procedures using Python and cx_Oracle.  I’m assuming you’re already familiar with PL/SQL; if not, you can get some help from Steven Feuerstein and Bryn Llewellyn.  (Additional resources at the end.)

Prerequisites

  • Python 3
  • Oracle Database version 12+
  • Basic Oracle PL/SQL and SQL knowledge.

Setup

If you’d like to follow along with the examples you’ll need to create the following objects in a database schema that is safe to experiment in.  Make sure you have permissions to create the following objects.

To keep everything clean, I’ll be putting my PL/SQL code into a package called pet_manager.

Cleanup

To clean up the database when you are finished with the series, you need to drop the two tables and the package.  Please make sure you’re connected to the correct schema where you created the tables.

Boilerplate template

The template we will be using is:

  1. Install cx_Oracle.
  2. Import the cx_Oracle driver.
  3. Import os module used to read the environment variable.
  4. Get the connection string from the environment variable.
  5. Create the connection object.
  6. Create the cursor object.
I will include this code section with all Python examples and use the connection object “con” and the cursor object “cur” throughout the series.

For each exercise, replace the “# Your code here” line with your code.
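Here’s a sketch of that boilerplate (the environment variable name is my own choice; use whatever you set):

import cx_Oracle
import os

# Get the connection string, e.g. user/password@host:port/service,
# from an environment variable.
connect_string = os.environ.get('DB_CONNECT')

con = cx_Oracle.connect(connect_string)
cur = con.cursor()

# Your code here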

Anonymous PL/SQL Block

I’m going to start off with the most basic process and simply execute an anonymous block of PL/SQL code to reset the database tables.
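Sketched out, with the block body trimmed down (the real block empties the demo tables):

cur.execute("""
    begin
        delete from lcs_pets;  -- the real block also resets the other demo table
        commit;
    end;""")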

You can execute any DDL or DML statement like this, but if you’re going to run PL/SQL it’s usually best to compile it to the database.

Execute a PL/SQL Procedure

Using the code from the anonymous block I created a procedure in the PL/SQL package called reset_data.

To call this procedure from Python we use the cursor.callproc method and pass in the package.procedure name to execute.
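Which is a single line:

cur.callproc('pet_manager.reset_data')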

Assuming everything works, there will not be any response.  So this works as a ‘fire and forget’ way to call database procedures.

Pass Parameters

I have a procedure in my PL/SQL package that we can use to create a new pet in the lcs_pets table.  It accepts the pet_name, owner_id and pet_type.  Using these values it will insert a new entry into the lcs_pets table.

Now on the Python side.

I prefer to set my values with variables so that my code is easier to read, so I’ll create and set pet_name, owner_id and pet_type.

Next, I’ll call the cursor.callproc method and add an array containing the values to pass in the order they are defined in the database.
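Something like this (the procedure name create_pet is my stand-in; use whatever you named yours):

pet_name = 'Sandy'
owner_id = 1
pet_type = 'dog'

# Positional parameters, in the order the procedure declares them.
cur.callproc('pet_manager.create_pet', [pet_name, owner_id, pet_type])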

If everything works there will not be any response.

You can also use keyword parameters.  This makes your code easier to read and means you don’t need to worry about the order of the parameters.
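Using the same hypothetical procedure, with assumed parameter names:

cur.callproc('pet_manager.create_pet', keywordParameters={
    'p_pet_name': pet_name,   # parameter names assumed
    'p_owner_id': owner_id,
    'p_pet_type': pet_type,
})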

Once again, if everything works there will not be any response.

Get PL/SQL Function Return Values

When a row is added to the lcs_pets table a new id is automatically generated.  Having this id can be useful so I created a function in my PL/SQL package that will create a new pet in the lcs_pets table, just like in the previous function, but it will return the new id.

Using Python to call a function in the database and get the return value I’ll use the cursor.callfunc method.

  1. I set the variables that I’ll use as arguments to the function.
  2. Define a new_pet_id variable and assign it the value returned from callfunc.
  3. The second argument of the callfunc method is used to define the type of the data being returned.  I’ll set it to int.  (cx_Oracle will handle the NUMBER to int conversion.)
  4. I pass in the array of values just like I did when I used callproc.
  5. Print the returned value for new_pet_id.
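Put together, the sketch looks like this (the function name is assumed):

pet_name = 'Sandy'
owner_id = 1
pet_type = 'dog'

# callfunc(name, return_type, parameters); int maps the NUMBER return value.
new_pet_id = cur.callfunc('pet_manager.add_pet_func', int,
                          [pet_name, owner_id, pet_type])
print(new_pet_id)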

Out Parameters

Out parameters can be very handy when you need to pass back more than one piece of information.  I have an add_pet function in the PL/SQL package that will check to see if the pet type you’re adding needs a license or not.  The function will return the new id like before, and a ‘yes’ or ‘no’ through the out parameter.

To work with the out parameter in Python I’ll add a string variable called ‘need_license’.  It can be defined using ‘cursor.var(str)‘. Then we just add the new variable to the values array in the correct position.  This works the same when using out parameters with the callproc method.

To get the value from ‘need_license’ we call its getvalue() function.
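A sketch of the whole exchange (parameter order assumed):

need_license = cur.var(str)

# The out parameter rides along in the values array, in position.
new_pet_id = cur.callfunc('pet_manager.add_pet', int,
                          [pet_name, owner_id, pet_type, need_license])
print(new_pet_id)
print(need_license.getvalue())  # 'yes' or 'no'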

Accept Argument Values

So far I’ve hard-coded the variable values in the Python code and the methods are fairly simple, so there’s a low chance of errors.  But, for most methods, we want to accept parameter values that can be passed into the Python code then on to the PL/SQL functions.  I’ll modify the Python method to accept command line arguments.

We need to import sys so that we can use sys.argv[] to grab the command line arguments and assign them to the variables.
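Roughly:

import sys

# script.py <pet_name> <owner_id> <pet_type>
pet_name = sys.argv[1]
owner_id = sys.argv[2]  # passed through as-is, so the database does the checking
pet_type = sys.argv[3]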

If I run this to add a dog, I get:

Adding a fish, I get:

PL/SQL Exceptions

Now that I’m accepting outside argument values, it’s almost certain that the above code will eventually hit an error.  If an error happens in the Python code you can handle it as you normally would.  But, what if there’s an error thrown by the PL/SQL code?

It’s easy enough to test this.  Make the same call as before but pass in a string for the second value.

I would recommend that you handle errors as close to where they happen as you can.  In this example, you could catch the error in the PL/SQL function and either handle it or raise it.  If you don’t handle it in PL/SQL it will be passed back to cx_Oracle which will throw a cx_Oracle.DatabaseError.  At that point, you can handle it as you would when any other Error is thrown in your Python application.
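On the Python side, a minimal sketch of catching that error (using the hypothetical create_pet procedure from before):

try:
    cur.callproc('pet_manager.create_pet', [pet_name, owner_id, pet_type])
except cx_Oracle.DatabaseError as e:
    # Handle (or log and re-raise) the error from the PL/SQL call.
    print('Database error:', e)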

Additional Resources

CI/CD for Database Developers – Export Database Objects into Version Control

With this post, I kick off a series that walks you through the process of building database applications using a CI/CD pipeline. I will be covering:

  • Export Database Objects into Version Control (This Post)
  • Use Schema Migration
  • Unit Test PL/SQL
  • Build and Deploy
  • Automate

Links will be added for each topic as that article is published. I will use the DinoDate application (source) as my code base.

For this post, assume that I have DinoDate installed and all development to date has been done directly in the database (source code not stored externally in files).

Since, however, it’s really not a good idea to edit and maintain code directly in the database, I’d like to switch to storing the DDL for my database schema objects in files, managed by a version control system. Let’s get going!

Export the Database Objects out to Files

I could export the code for my objects out into files by writing queries against built-in database views like ALL_SOURCE.  I have used this method a few times, and you probably have, too. You can make it work, but it puts the onus on you to get everything right. I’d rather think about other things, so I ask myself: is there a tool that will take care of the heavy lifting on this step?

There sure is, and it’s free:  Oracle SQL Developer has a great export tool. Assuming you’ve got SQL Developer installed and you can connect to your schema, here’s what you do:

Under the Tools menu, click on Database Export.

The first step of the wizard configures the general export attributes:

  1. Choose the connection for the schema you want to export.
  2. Select the options you’d like the wizard to use when creating the export scripts.
    I like to add drop statements to my scripts.  You may also want to export grants.
  3. I want to export some of the master table data so I will leave the export data section set with defaults.
  4. Choose “Save As Separate Directories” and select the export directory.

In step 2 I left all of the default options selected.

I want all of the schema objects so I skip step 3.

I selected the tables that contain the pre-loaded master data for the application.

Everything looks good, so click Finish.

The export wizard also creates a master run script and opens the script in a new worksheet (I’ll come back to this later):

Now that I have everything exported, I want to get the files into version control ASAP.

Add the Objects to Version Control

I’ll be using Git for version control.  SQL Developer has some nice integrated tools for working with Git but for now, I’m just going to use the command line.

  1. Change to the export directory.
  2. Initialize a Git repository.
  3. Add all files to the repo.
  4. Commit the changes.
Assuming that you’re using a central Git repository, you should push the new repo up to the shared repo.  That process can differ based on which repository host you’re using so I’ll leave it up to you to consult the documentation.

With my files stored safely in Git, it’s time to

Verify / Clean up

Whenever you use an automated code generation/export tool, you should verify the results.

For example, you may want to change the generated SQL for the dinosaure_id column from

to

After you verify each file, run a code beautifier on it.  Maintaining a standard code format will make it easier to see the differences between changes.  This will make your future code reviews much easier.

Make sure you commit each file as you change it.  Don’t wait till the end to do a massive commit.

Organize

In step 1 of the wizard, I selected Save as Separate Directories so all of my object scripts have been grouped into directories by type, like this:

This is a perfectly fine way to organize your scripts.  However, I like to group mine by whether the scripts will be changed and re-run, or run a single time with any future changes made in new scripts.  For example:

If I were going to run everything from scripts, I would also break down the Run_Once directory into product versions with a subdirectory for ‘create from scratch’ and one for ‘updates’, like this:

In another post, I’ll show you how to use a Schema Migration tool which makes managing schema changes much easier.

Master Build Script

Now that everything is cleaned up and organized, we need to modify the master script that was generated by the export.  It should be named with a timestamp in a format similar to this:

Generated-YYYYMMDDHH24MISS.sql.

My exported script was named Generated-20180706134514.sql.

You will want to edit this file and modify the directory for the scripts to match the changes you made above.  Also, you need to verify that the scripts are being run in the order of dependencies.  Objects should be created after the objects they depend on.

If you’re not planning to use a Schema Migration tool you’ll want to create separate master scripts for each of the create and update directories.

Test

Create a new schema that’s safe to test with, and run your master scripts.  After you get the scripts running without errors, do a schema compare to check for any differences between your new schema and the schema you exported from.

When it’s all the way you want it, you are now ready and able to work from files to build your database code.

Working From Files

From this point, you can now follow a new and improved workflow when it comes to editing the code for your database objects:

  1. Pull the latest version of your code from the shared repository. NEVER ASSUME YOU ALREADY HAVE THE LATEST.
  2. Make your changes.
  3. Compile to the database.
  4. Commit (to your source code repository, not your transaction) often.
  5. Push your changes back up to the shared repository.

Of course, that is a very simplified workflow. You should start a discussion with your team about using more advanced methods to further automate and improve your processes.

I sincerely hope you skimmed this post because you gave up long ago on editing code directly in the database. If that is not the case, I hope this article helps you make the change. Because then the next time something goes badly wrong, you can simply recover from Git (or your repository of choice).

No tearing your hair out. No gnashing of teeth. No self-hating recriminations. Just a quick recovery and get back to work.

And then you can move on to more interesting challenges in improving the way you write and maintain your database application code.

Quick and Easy Raspberry Pi Setup

So, you have a shiny new Raspberry Pi and you’d like to install Linux on it.

This is a short guide to help you get Raspbian installed and configured for Wifi and SSH access.  You should be able to follow this guide even if you don’t connect a keyboard or monitor to your Pi.

NOTE: In order to complete all of the steps in this guide, you will need to be able to access and edit files in an ext4 filesystem.  I will mark these steps with [ext4].  This may require additional steps and/or software for Windows systems.

First up, you will need to download a couple of files.

  • Raspbian
    I usually download the image from raspberrypi.org.  There are two images available.  The desktop image comes with a GUI front end and a few pre-installed applications.  The lite image is a ‘headless’ install, meaning you will boot to a command prompt and there is no GUI desktop installed.  This guide will work for either version, pick whichever version you want.
  • Etcher
    Using Etcher makes it easy to flash an image to your sd card.

Put the SD Card in Your Computer

For the next few steps, keep the SD Card in your computer.  You will not put the card in your Pi until we get to the “Boot your Pi” section below.

Flash Raspbian to the SD Card

Run Etcher:

  1. Choose the Raspbian zip file you downloaded above.
  2. Select your SD Card from the list.  WARNING: Etcher does its best to make sure it only shows SD Cards in the list, but you should always make sure it’s the card you want to install to; the selected drive will be formatted!
  3. Click Flash.

If you open your favorite file explorer you should now see two new drives listed.

  • boot
  • rootfs [ext4]
Enable SSH Access
  1. Change into boot.  (Modify the ‘cd’ command below to match the path to boot on your system.)
  2. Create an empty file called ‘ssh’.
That’s it.  The first time you boot your Pi ssh will automatically be enabled.  (Don’t boot it up yet.)

Connect Without a Password

Having to type a password every time you SSH or SCP to your Pi gets old after a while.  If you don’t mind typing the password every time, you can skip this step.

Add your public key to your Pi [ext4]:

  1. Locate the ssh public key you want to use (example ~/.ssh/id_rsa.pub). If you don’t already have one, you can follow the steps in this guide.
  2. Change into the pi home on rootfs.  (Modify the ‘cd’ command below to match the path to rootfs on your system.)
  3. Create the .ssh directory.
  4. Append your public key to the authorized_keys file.  (You can repeat this step to append as many keys as you need for the different systems you intend to use.)
If you’re not able to access the ext4 partition from your operating system you can follow this guide on raspberrypi.org after you boot up the Pi.

Enable Wifi [ext4]
  1. Change into rootfs/etc/wpa_supplicant.
  2. Edit wpa_supplicant.conf.  (You may need sudo to edit this file.)
  3. Append a network entry to the end of the file, as shown in the sketch after this list.  Replace “Your SSID” and “Your WPA Password” with the values to connect to your Wifi.
  4. Save the file.
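The network entry follows standard wpa_supplicant syntax, along these lines:

network={
    ssid="Your SSID"
    psk="Your WPA Password"
}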

If you’re not able to access the ext4 partition from your operating system you can follow these instructions after you boot up the Pi with the following changes.

  1. Connect a keyboard and monitor to your Pi.
  2. Change into /etc/wpa_supplicant.
  3. Continue with step 2 above.

Now it’s time to

Boot your Pi

Put the SD Card in your Pi and boot it up.

If you’re not connecting your Pi to a display, you should be able to get its IP address from your Wifi Router admin page.

  1. Connect with ssh.
  2. Change the default pi user password. (default password: raspberry).  Even though we set up access using a public key, the Pi can still be accessed with a password so it’s a good idea to set a new password.
  3. Run an update.

Do Something Fun

At this point, you have a little Linux machine all set up and ready to use for your projects.  Go do something fun with it.

If there are other configurations you’d like me to add to the guide please leave a comment.

Deploy a Python application to Oracle Application Container Cloud Service

About Application Container Cloud Service

ACCS provides a pre-configured platform (Platform as a Service, or PaaS) where you can quickly deploy and host your applications.  For many of today’s applications, the hosting server is just that, a place to host the application.  Most of the time the only thing an application needs from the server is to have it support the application’s programming language and to provide in and out connections through ports.  Using a PaaS such as ACCS frees you from all of the extra work of configuring and maintaining a server and allows you to focus on perfecting your application.

ACCS supports multiple languages but for this post, I’ll focus on Python.

DinoDate

For the examples, I will be deploying the DinoDate application. DinoDate was written as an open source learning application that can be used to demonstrate database concepts with multiple programming languages.  It currently has both Python and NodeJS mid-tier applications and is backed by an Oracle Database.

The following instructions show how to deploy the Python version of DinoDate to an Oracle ACCS instance.

If you don’t have access to Oracle Cloud services, you can try the Oracle Cloud with $300 of free credit.

Download/Clone the DinoDate application.

Database

First, you’ll need a database.

Create an Oracle Cloud database, or if you already have an Oracle Database, make sure that you can safely create and destroy the DD and DD_NON_EBR schemas.

Connect to your database as sys with sysdba and run coreDatabase/dd_master_install.sql.  (Use your password and connect string)

Prepare the DinoDate Application

Download oraclejet.zip (version 4.1.0).  (Current versions as of the time of this post.)

  • Extract the Oracle JET files
  • Run bower install

Download necessary files

The Docker container for Python used by ACCS comes with Python installed.  We’ll need to include the rest of the dependencies.

Package the Files to Deploy

  • Create a deploy directory with a lib subdirectory.
  • Copy the front end client into the deploy directory.
  • Copy the python application into the deploy directory.
  • Extract the Oracle instant client files into the deploy/lib directory.  (Change the command to point to where your files are located.)
  • Change to the deploy directory.
Create a shell script, launchPython.sh, to install the dependencies and launch the application.
Create the manifest file: manifest.json

This file declares that we will use Python version 3.6.0 and provides the command that will be used to start the application.
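A minimal manifest along those lines might look like this (the command assumes the launchPython.sh script created above):

{
  "runtime": {
    "majorVersion": "3.6.0"
  },
  "command": "sh launchPython.sh"
}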

Create the deployment file: deployment.json

This file includes the environment variables DinoDate needs and sets the ACCS deployment to use 1G of memory and only install 1 instance.  PYTHONPATH is the directory we will install the Python modules into and LD_LIBRARY_PATH is used by cx_Oracle to locate the Oracle client files.

Replace “YourJdbcConnecString” with the JDBC connect string for your database.
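As a sketch, with the lib paths and the connect-string variable name as placeholder assumptions:

{
  "memory": "1G",
  "instances": "1",
  "environment": {
    "PYTHONPATH": "lib",
    "LD_LIBRARY_PATH": "lib/instantclient",
    "DB_CONNECT": "YourJdbcConnecString"
  }
}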

Important Note

ACCS is pre-configured to listen on $PORT so we set our application to listen on that port.  Do not attempt to change $PORT.  When ACCS performs its post-deploy check it will open the application using $PORT; if the application is not listening on that port and returns a 404, the deployment will fail and be removed.

Create a zip file with the required DinoDate deploy files.

Deploy to ACCS

In your browser navigate to the Oracle Application Container Cloud Service Console.

Push the Create Application button to open the platform selection panel.

Push the Python button to open the application definition panel and expand the ‘More Options’ section.

  • Populate [Name] with DinoDatePython.
  • Click ‘Choose File’ for Archive and select the DinoDatePythonACCS.zip file.
  • Click ‘Choose File’ for Manifest and select the manifest.json file.
  • Click ‘Choose File’ for Deployment Configuration and select the deployment.json file.

You can change the values in the other fields as you’d like, but notice that since we defined “memory”: “1G” and “instances”: “1” in the deployment.json file those values will change automatically.

It’s also possible to include the manifest.json file in the DinoDatePythonACCS.zip file instead of uploading it separately.

Click Create.

It may take several minutes for ACCS to set up the environment and deploy the application.  Once it’s done click on the URL: link to open the application.

Try it out

You can log in with any of the existing users, such as:

  • Bob
    bob@example.com
  • Admin
    admin@example.com

Use any value for the password; the application doesn’t check it.

Click on the Search tab and search for ‘eat’; it should return 6 of the pre-loaded dinosaurs.

Quick Review

  1. Download the dependencies.
  2. Create a launch script that will install the dependencies and launch the application.
  3. Collect the required deployment artifacts and dependencies into a .zip file.
  4. Create a manifest.json file that contains at least the required Python version and the command used to start your application.
  5. Create a deployment.json file that contains any needed environment variable definitions.  Optionally you can include ACCS environment definitions such as required memory and number of instances.  (This file is optional.  You could include the environment variables in your launch script.)
    Reminder: ACCS will use the pre-defined environment variable $PORT.  Make sure your application listens on $PORT.
  6. Use the ACCS service console to upload your 3 files and create your new application.

If you run into any trouble, leave a comment and I’ll be happy to help.

Deploy a Node.js application to Oracle Application Container Cloud Service

About Application Container Cloud Service

ACCS provides a pre-configured platform (Platform as a Service, or PaaS) where you can quickly deploy and host your applications.  For many of today’s applications, the hosting server is just that, a place to host the application.  Most of the time the only thing an application needs from the server is to have it support the application’s programming language and to provide in and out connections through ports.  Using a PaaS such as ACCS frees you from all of the extra work of configuring and maintaining a server and allows you to focus on perfecting your application.

ACCS supports multiple languages but for this post, I’ll focus on Node.js.

DinoDate

For the examples, I will be deploying the DinoDate application. DinoDate was written as an open source learning application that can be used to demonstrate database concepts with multiple programming languages.  It currently has both Python and NodeJS mid-tier applications and is backed by an Oracle Database.

The following instructions show how to deploy the Node.js version of DinoDate to an Oracle ACCS instance.

If you don’t have access to Oracle Cloud services, you can try the Oracle Cloud with $300 of free credit.

Download/Clone the DinoDate application.

Database

First, you’ll need a database.

Create an Oracle Cloud database, or if you already have an Oracle Database, make sure that you can safely create and destroy the DD and DD_NON_EBR schemas.

Connect to your database as sys with sysdba and run coreDatabase/dd_master_install.sql.  (Use your password and connect string)

Prepare the DinoDate Application

Download oraclejet.zip (version 4.1.0).  (Current versions as of the time of this post.)

  • Extract the Oracle JET files
  • Run bower install
  • Install the NodeJS modules.  ACCS assumes your deploy package will include all necessary modules.
  • Create a deploy directory.
  • Copy the front end client into the deploy directory.
  • Copy the nodejs application into the deploy directory.
  • Change to the deploy/nodejs directory.
Create the environment variables.  (Replace YourJdbcConnecString with your JDBC connect string.)
Test the application to make sure everything is working.
  • Open your browser to localhost:3000
  • Log in as bob@example.com (any password will work)
  • Open the search tab and execute a search.  I typically search for ‘eat’; it will return several members.

Ctrl-c to stop the node server then switch back to the deploy directory.

Package the Files to Deploy

Create the manifest file: manifest.json

This file declares that we will use Node.js version 8 and provides the command that will be used to start the application.
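For example (the start command is an assumption; use whatever launches your server):

{
  "runtime": {
    "majorVersion": "8"
  },
  "command": "node server.js"
}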

Create the deployment file: deployment.json

This file includes the environment variables DinoDate needs and sets the ACCS deployment to use 1G of memory and only install 1 instance.

Replace “YourJdbcConnecString” with the JDBC connect string for your database.

Important Note

ACCS is pre-configured to listen on $PORT so we set our application to listen on that port.  Do not attempt to change $PORT.  When ACCS performs its post-deploy check it will open the application using $PORT; if the application is not listening on that port and returns a 404, the deployment will fail and be removed.

Create a zip file with the required DinoDate deploy files.

Important Note

The Node.js platform in ACCS has the oracledb module pre-installed.  If we were to upload the module we just installed, it would create a conflict that would cause the deployment to fail and be removed, so we exclude it from the deployment .zip file.

Deploy to ACCS

In your browser navigate to the Oracle Application Container Cloud Service Console.

Push the Create Application button to open the platform selection panel.


Push the Node button to open the application definition panel.

  • Populate [Name] with DinoDate.
  • Click ‘Choose File’ for Archive and select the DinoDateNodeACCS.zip file.
  • Click ‘Choose File’ for Manifest and select the manifest.json file.
  • Click ‘Choose File’ for Deployment Configuration and select the deployment.json file.

You can change the values in the other fields as you’d like, but notice that since we defined “memory”: “1G” and “instances”: “1” in the deployment.json file those values will change automatically.

It’s also possible to include the manifest.json file in the DinoDateNodeACCS.zip file instead of uploading it separately.

Click Create.

It may take several minutes for ACCS to set up the environment and deploy the application.  Once it’s done click on the URL: link to open the application.

Try it out

You can log in with any of the existing users, such as:

  • Bob
    bob@example.com
  • Admin
    admin@example.com

Use any value for the password; the application doesn’t check it.

Click on the Search tab and search for ‘eat’; it should return 6 of the pre-loaded dinosaurs.

Quick Review

  1. Build your application and test it.
  2. Collect the required deployment artifacts and dependencies into a .zip file.
    Reminder: do not include the oracledb module.
  3. Create a manifest.json file that contains at least the required Node.js version and the command used to start your application.
  4. Create a deployment.json file that contains any needed environment variable definitions.  Optionally you can include ACCS environment definitions such as required memory and number of instances.  (This file is optional)
    Reminder: ACCS will use the pre-defined environment variable $PORT.  Make sure your application listens on $PORT.
  5. Use the ACCS service console to upload your 3 files and create your new application.

If you run into any trouble, leave a comment and I’ll be happy to help.

My ODTUG GeekAThon 2017 Entry

The rules and other information can be found at ODTUG GeekAThon 2017.

Problem

My son Alex attends a school where the students have some ‘bonus features’, or as the school puts it: “Educating Exceptional People”.  There are some students at Alex’s school who sometimes try to wander away.  Obviously, this could be a problem but the school staff is extremely well trained and they keep a close watchful eye on all of the students.  Still, I’d like to try and make their lives a little easier, and the students a little safer.

There are commercial systems available that could notify the administration and/or lock doors when a beacon worn by a student is detected in a hazardous zone, such as leaving the school.  That sounds perfect. There’s just one problem: those systems can be very expensive.

Proposal

Implement a student tracking and door lock automation system that can operate on inexpensive components and open source the software.  I will set up a test environment at my house and my son will test it with me.

Desired Features

  • Central to the whole system is a way to detect a beacon when it enters a specific area such as near an exit door or a faculty-only area.
  • Ability to send notifications.
  • Ability to trigger a physical event such as a door lock or audible alert.
  • Log beacon detection events in a database.
    • Beacon Id.
    • Distance from the scanner.
    • Timestamp.
  • Affordable components.

Initial Idea

After browsing the web for a while I decided I would set up multiple scanners with overlapping zones then use trilateration (I like saying that word) to determine the position of the beacon.

I would set up multiple scanners, measure the distances between them and plug that data into my database.  When a scanner detects a beacon it would use my ORDS service to POST its own id, the beacon id and the calculated distance to the beacon.  On the database, I would use Oracle Spatial queries to determine the location of the beacon.  Finally, I would compare the beacon location to defined zones in my house and trigger the alerts/actions for the zones.

I have a tendency to over-engineer my projects.  I once built a doghouse that weighed close to 200 lbs.  (It was awesome.)

After getting most of this working, I realized that I could achieve the project goals by simply placing a single scanner near each zone and let that scanner initiate the alert actions for its zone.  Sometimes less is more.

Hardware

I already had a bunch of Raspberry Pis so I decided to use a couple of my Pi 3s.  Since I’m always looking for an excuse to buy more toys, I decided to get a Pi Zero W.

I have a z-wave enabled deadbolt and a Z-Stick USB hub that I can control using Home-Assistant.io.  For the audio notification, I’ll push a ‘text to speech’ action to my Sonos speaker.  I can make the Sonos say anything I want, this entertains me a lot, my family… not so much.

Software

  • Raspbian Linux
    • Linux modules
      • bluetooth
      • bluez
      • libbluetooth-dev
      • libudev-dev
  • NodeJS
    • NodeJS Modules
      • bleacon
      • request
  • IFTTT.com
  • Home-Assistant.io
  • Oracle Database
  • Oracle Rest Data Services (ORDS)

The installation instructions are in the GitHub repo.

Database

The beacons are set to transmit every two seconds and can be detected by multiple sensors.  I always like to keep track of my data so of course, I’m pushing it to a database.  I’m using an Oracle Cloud Database with an ORDS (Oracle Rest Data Services) front end to collect the data.  When a Raspberry Pi detects a beacon, it will calculate the distance then POST the data to the database.  The database will automatically record a time-stamp when the record is inserted.

This is included in the current code and it’s what I need to collect the data for the “Initial Idea” section above.

If I decide to implement the feature to track the beacon’s position throughout my house, I just need to determine the fixed position of each scanner relative to a point in my house and, using the data I’m already collecting, run an Oracle Spatial query that defines a circle from each scanner with a radius of the distance to the beacon.  Where the circles overlap is the approximate location of the beacon.  The official term (linked above) is Trilateration, but you can think of it as a Venn Diagram.
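For the curious, the underlying math is straightforward.  Here’s an illustrative Python sketch (not the project code, which is NodeJS) that solves for a position from three scanners and their measured distances:

import numpy as np

def trilaterate(p1, p2, p3, r1, r2, r3):
    # Subtracting the first circle's equation from the other two
    # leaves two linear equations in x and y, solved directly below.
    A = 2 * np.array([[p2[0] - p1[0], p2[1] - p1[1]],
                      [p3[0] - p1[0], p3[1] - p1[1]]], dtype=float)
    b = np.array([r1**2 - r2**2 - np.dot(p1, p1) + np.dot(p2, p2),
                  r1**2 - r3**2 - np.dot(p1, p1) + np.dot(p3, p3)], dtype=float)
    return np.linalg.solve(A, b)

# Three scanners at known positions, each about 2.83m from the beacon.
print(trilaterate((0, 0), (4, 0), (0, 4), 2.83, 2.83, 2.83))  # ~(2, 2)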

How I Deployed the System

If you’d rather, you can watch the video and skip this section.

I configured and positioned three Raspberry Pis throughout my house.  I put a Pi 3 in the hallway outside of the bedrooms, a Pi 3 near the front door and a Pi Zero W outside on the front porch.

  • The first Raspberry Pi 3 in the hallway is set to trigger an alert when the beacon is approximately 2 meters away. This alert will send a notification through IFTTT* to the app on my phone.
  • The Raspberry Pi 3 near the front door fires an alert when the beacon is approximately 1 meter away.
    This alert has three actions:

    • Send the ‘lock’ command to the deadbolt through the REST interface of Home-Assistant.io using Z-Wave.  Home-Assistant.io and the Z-Wave USB dongle are also installed on this Pi.
    • Set the Sonos volume to max and send ‘Locking the front door’ to the Sonos speaker using the text to speech function in Home-Assistant.io.
    • Send a notification through IFTTT to my cell phone.
  • The Raspberry Pi Zero W outside near the front door will trigger an alert when the beacon is approximately 1 meter away.
    This alert has three actions:

    • Send the ‘unlock’ command to the deadbolt through the REST interface of Home-Assistant.io using Z-Wave.
      (If Alex makes it outside, I want the door unlocked so he can come back in.)
    • Set the Sonos volume to max and send ‘Unlocking the front door’ to the Sonos speaker using the text to speech function in Home-Assistant.io.
    • Send a notification through IFTTT to my cell phone.

*IFTTT can also send a text message but the free tier only allows a limited number of texts to be sent each month. I chose to use notifications through their Android app since they are unlimited and I would have burned through the text quota the first time I forgot to limit how often I send a notification. In a live situation, it could send out multiple texts.

Challenges

I had intended to use OpenHab for the home automation features of the project, but when I built the project there was a bug in the Z-Wave addon that made interacting with the deadbolt more difficult.  I tried out Home-Assistant.io and so far I really like it.  Each application has its own strengths and weaknesses, but they both run on a Raspberry Pi so I may use both for future projects.  I’d like to mention they are both open source which is an added bonus.

The beacon distance tracking is not as accurate as I hoped, but it’s fine for this project.  The signal can be degraded by walls, bodies or other objects being between the beacon and scanner.  To improve the accuracy, I implemented a weighted rolling average function as part of the distance calculation to smooth out some of the spikes.  Deploying more scanners would also greatly improve the accuracy if I implement the position tracking.
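The idea, sketched in Python for illustration (the project’s actual implementation is NodeJS, and the weighting scheme here is just one reasonable choice):

from collections import deque

class SmoothedDistance:
    """Weighted rolling average where newer readings count more."""
    def __init__(self, size=5):
        self.readings = deque(maxlen=size)

    def add(self, distance):
        self.readings.append(distance)
        # Weights 1..n, with the newest reading weighted highest.
        weights = list(range(1, len(self.readings) + 1))
        return sum(w * r for w, r in zip(weights, self.readings)) / sum(weights)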

Future Improvements

  • Add an Oracle JET front end for configuration and control of the system.
  • Add a map display that can show the beacons live.
  • Change the Alert/Action code to be more generic and provide a mechanism to define them in the front end.
  • Find a small inexpensive wearable BLE beacon or design one with a small rechargeable battery and a 3D printed enclosure.

Final Thoughts

If you decided not to participate in the GeekAThon this year please join in next year.  It is a great way to learn some new skills and have fun at the same time.  I am sure parts of what I described above sound intimidating. But if you’d like to try your hand at this or similar projects, don’t hesitate to contact me for help. And while I can’t speak for the other GeekAThon participants, this year or past years, I am certain they will be eager to help you, too.

This project has been a lot of fun, I learned a lot.  I’m looking forward to next year!

Becoming a DevOps “expert”

I’ve decided to learn more about DevOps.

I’ve always been a believer in automating repetitive tasks and letting machines do as much of “my” work as they can.  The way I learn best is (as you can tell by the name of my blog) I learn about the topic, I build something from what I’ve learned and I share my experience.

Given that DevOps is a very big topic, it will take more than one or two posts to do it justice.  This post is the first in a series of blog posts, videos and presentations that I plan to create as I learn more.  I think the best place to start is with what I “think” I know now.

What is DevOps?

A couple years ago I was using Jenkins to create a continuous delivery pipeline for a project I was working on.  I was the only one working on the project and after seeing a CD demonstration at a conference I figured I’d give it a try.  I had everything working and I was quite pleased with myself.  Then I started hearing the term DevOps and assumed it was just a term for what I was already doing.  I was partially right.

DevOps is more than just automating the software delivery process, it’s also a cultural mindset.  It’s developers and operations working together throughout the full lifecycle of a project instead of in separate silos.  Since I was working solo on that project I missed out on this aspect.  Currently, I’m not working on any project where I can experience the full cultural aspect so I plan to mentally assume different roles as I work through the learning process.

If you’d like a better definition, there is plenty of material available on the web from real experts.  I only wanted to document what DevOps means to me as I start to learn more.

My current plan.

I have been working on an open source application used for demonstrations and learning called DinoDate.  I am going to build a DevOps process around this application.  My focus will be a bit more on the database aspects of DevOps since the database isn’t always used to its full potential and sometimes even treated like a bucket of data.  I will be building this process using the Oracle Developer Cloud Service against an Oracle Cloud Database and other Oracle Cloud services, as well as other tools such as Jenkins against an Oracle Database on a VM.

Plan Steps:
  1. Define the steps to manually deploy DinoDate as is.
    1. Automate the build and deploy process, which currently consists of running some scripts and using scp to copy the code to an Oracle Compute instance where I have already set up Python and NodeJS.
    2. Deploy the NodeJS and Python apps to an Oracle Application Container Cloud instance.
  2. Add some open source tools to improve the process.
    • Build script using Gradle.
    • Schema object version control using Liquibase.
    • Unit tests for the PL/SQL using UTPLSQL.
  3. Automate creating the infrastructure (DB, Compute instance) from scratch then deploy, test and destroy.
  4. Reproduce the entire CD pipeline using Jenkins (or another tool) against a VM.

Once I’m satisfied with my understanding of the tools and workflow, I’ll find a project that would benefit from a DevOps environment and encourage (OK, pester) them to switch to a DevOps process with an offer to act as the DevOps “expert”.

More to come.

Keep an eye out here and on my YouTube channel for how-to and ‘lessons learned’ posts that I’ll make as I go.  Feel free to post a comment if you see that I’ve already got something wrong or if you have a specific interest you’d like me to focus on as I go.


Getting started with Oracle Rest Data Services

Most applications today store data of some type; most likely, that data is stored in a database.  There are many ways to get data from the application to the database and back, but one of the most popular methods is using RESTful services.  If you’re not familiar with REST, think of it as an easy way to let two computers talk to each other.  For a more detailed explanation check out this Wikipedia page.

If you are familiar with REST you’re probably used to standing up a server and building a server side application that connects to your database and provides a REST API.

Oracle provides a simpler solution called Oracle REST Data Services or ORDS for short.  ORDS is a quick way to build a REST API directly to your database.  If you’d like a more thorough explanation, check out the ORDS site.

A Short Tutorial

Setup a VM

I’ll be using the Developer Days VM on VirtualBox for the tutorial.  This VM has the Oracle 12c Database and ORDS already installed and ready to go.

  1. Download the Database App Development VM.  I’m using the one from June 13, 2017.
  2. Create a new appliance and start it.
  3. Inside the appliance, open a terminal and enter the following commands.  Provide a password when prompted.
Now we have the VM running and we’ve created an ORDS user “ords_dev”.

SQL Developer

For these examples, I’ll be using SQL Developer version 4.2.0.

If you don’t already have SQL Developer installed you can download it here.

Connect to the HR schema

Open SQL Developer and create a connection to the HR schema.

  • Connection Name:  Anything you’d like.  I’m using Hr – VM
  • Username: hr
  • Password: oracle
  • Hostname: localhost
  • Port: 1521
  • Service name: oracle
    (Make sure you select the Service name radio button.)

Test the connection and connect.

Rest Enable The Schema
  1. Right click on the HR connection.
  2. Click REST Services.
  3. Click Enable REST Services…

  • Enable schema: checked
  • Schema alias: personnel
    (Remember this for later.)
  • Authorization required: un-checked
    For production applications, you will want to use authorization but I’m not going to cover it here.

You can click Finish or if you’d like to see the summary page you can click Next then Finish.

REST Data Services Wizard

From here SQL Developer offers a couple different ways to run the REST Data Services wizard.

One way you can work with the wizard is through the database connection.

This method does not require you to have an ORDS user, but the full ORDS URI won’t be automatically provided in the wizard so you’ll need to get that from the ORDS admin.  I’ll cover the URI below.

For this tutorial, I’ll be using the…

REST Development Panel
  1. Click the View menu item.
  2. Click REST Data Services.
  3. Click Development.

The REST Development panel (on the right) should now be in the left panel bar.


Connect to ORDS
  1. Click the Connect icon.
  2. Create a new connection.
  3. Populate the ORDS connection data.

This is an ORDS connection using the ORDS user we created in the VM earlier NOT the HR schema user.

Connection Name: HR-VM
Username: ords_dev
(The username is case sensitive.)
Select: http
Hostname: localhost
Port: 8080
Server Path: /ords
Schema/Workspace: /personnel
(If you used a different value when you rest enabled the schema use that value here ‘/your_alias’)

  1. Click OK in the New RESTful Services Connection panel.
  2. Select your new connection and click OK.
  3. Enter the password we created earlier: oracle
  4. Click OK.

New Module

A module is a collection of related REST services.  How the services are related is up to your imagination.  I usually think of a module like a package and the services as functions inside the package.

To create a new module:

  1. Right click on Modules.
  2. Click New Module…

The wizard will open and we can populate the data.  The purpose of my module is to manage the personnel so I’m going to name my module Manage.

Module Name: Manage
URI Prefix: manage
Check the Publish check box.

Notice that when you enter the URI Prefix the Example URI is expanded to include that value.  This is the URI I mentioned above.  If you run the wizard through the database connection the URI will include a generic value for the first part that refers to the ORDS server.  (http://localhost:8080/ords/personnel/)

Click Next.

Template URI

The template URI identifies a specific REST service endpoint.  In this case employees.  Notice that when you enter the URI Pattern the Example URI is expanded to also include that value.

Let’s break apart the URI.  First, we have the schema alias ‘personnel’ that gives us access to the HR schema.  Next, we created a module to ‘manage’ the HR schema records. Finally, we created a specific URI to handle transactions for ’employees’.

Method Handlers

Now that we’ve created the service endpoint to work with employees, we need to ‘Handle’ the different HTTP ‘Methods’ we intend to use.

A quick web search for ‘http rest methods’ will return pages of discussions on the available methods and how to “properly use them” but the short version is:

GET: Retrieve records with or without search criteria.
POST: Create records without providing the primary key.
PUT: Replace a record with a given primary key.  This can also be used to create a record if you’ve pre-assigned it a primary key.
DELETE: Remove a record with a given primary key.

We’ll start by creating a simple GET all handler.

  • Method: GET
  • Source Type: Query
  • Data Format: JSON
  • Pagination Size: 25
    We’ll leave this at the default value of 25.  It’s a good idea to define a pagination size, we don’t want to accidentally return a billion records in one call.  More on this later.

Click Next, review the summary and click Finish.

Get Query

Our GET method will return the Employee id, Hire Date, First and Last name for all employees.

If the GET employees SQL Worksheet did not automatically open, expand Manage, employees and click on GET.

Enter a query like this into the SQL Worksheet.

select employee_id, hire_date, first_name, last_name
from employees

Push the new module to ORDS
  1. Right click on the Manage module.
  2. Click on Upload.

Post

To create new records we’ll want a handler for the POST method.

  1. Right click the employees URI template.
  2. Click Add Handler.
  3. Click Post.

Notice that GET is grayed out since you can only have one method handler of each type per URI template.

We use the MIME Types to define the data format that we’ll accept.  Click the green plus to add a new MIME Type and enter application/json.  Click Apply.

If the POST employees SQL Worksheet did not automatically open, expand Manage, employees and click on POST.

ORDS uses PL/SQL for methods that change data, POST, PUT and DELETE.  PL/SQL gives us a greater amount of control which in turn provides better security.

Enter this PL/SQL into the SQL Worksheet.

Notice the use of bind variables in the PL/SQL.  If the data keys coming into our REST service match our bind variables, ORDS will auto-map the values.  However, if the keys do not match or we have additional use cases, we will need to map the bind variables using the Parameters tab. For this service, we will be passing in data values with keys that match the bind variables.

Since we are creating a new record and the primary key is auto-generated, it will be useful to the end user if we return the new id.  Above, we’ve defined a new bind variable :newid to pass this value back.  There is also another bind variable :status that we’ll use to change the response status from 200 (success) to 201 (success and I created a new record).

Parameters

Click on the Parameters tab and enter the following values.

Column definitions:

  • Name – Used by ORDS.
    • newid will be the key in the JSON object that returns the id to the user.
    • X-APEX-STATUS-CODE is a built-in ORDS parameter used to set the status of the response object.
  • Bind Parameter – The bind variable used in our PL/SQL.
  • Access Method – Defines the direction in the transaction we intend to use the parameters; IN, OUT or IN/OUT.
  • Source Type is where the parameter will be used.
    • newid will be in the response body.
    • X-APEX-STATUS-CODE will be in the response header.
  • Data Type – Data type for the returned value.  When all else fails, choose STRING.
Push the modified module to ORDS
  1. Right click on the Manage module.
  2. Click on Upload.

At this point, we have created and deployed a fully functional REST API with the ability to GET all employees and POST a new employee.

It’s time to….

Test the Service

Switch to the Details tab for either the GET or the POST method handler.  At the bottom, you can copy the URI for the new REST service.

URI: http://localhost:8080/ords/personnel/manage/employees

GET

To test the GET method you could simply enter the URI into a web browser and it will return the records.  Using my test tool, I enter the URI and hit send.
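Any REST client will do; here’s a quick sketch of the same request in Python:

import requests

response = requests.get('http://localhost:8080/ords/personnel/manage/employees')
print(response.status_code)  # 200
print(response.json())       # {"items": [...], ...}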

I receive back a JSON object with an “items” array that has 25 employee entries in it.  Below, I’ve trimmed a few out of the middle to keep it short.

Remember, I set the Pagination Size to 25 in the GET method, so ORDS returns the first 25 records.  Notice at the bottom of the JSON object after the array there is a “first” object.  The “$ref” value will take you to the first page of records.  This is automatically added to the response by ORDS when pagination is enabled.

There is also a “next” object added by ORDS to indicate that there are more records on the server.  When you write your client side application, you would process the returned records and check to see if there is a “next” object.  If there is, you could use the URI in the “$ref” object to fetch the next set of records.  You would loop through this process until the last set of records.  When you reach the last set there will not be a “next” object.

After the first page, you would start to see a “prev” object containing a “$ref” object that you can use to reverse through the records.

If you set Pagination Size to 0 the service will return every record at once and the navigation objects will not be included.

POST

In your REST testing tool:

  • Change the method to POST.
  • Add a header.
    Content-Type: application/json
  • Enter a payload with the new employee’s data (see the sketch after this list).
  • Send the request.
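Here’s the same test sketched in Python; the payload keys are assumptions and would need to match the bind variables in your POST handler:

import requests

payload = {  # keys must match the bind variables in the POST handler
    'first_name': 'Grant',
    'last_name': 'Able',
    'hire_date': '2017-06-13T00:00:00Z',
}
response = requests.post(
    'http://localhost:8080/ords/personnel/manage/employees',
    json=payload,  # also sets the Content-Type: application/json header
)
print(response.status_code)  # expect 201
print(response.json())       # should include the newly generated id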

You should receive a response with a status of “201 Created” and the response body should contain the newly generated id.

Our service is deployed and the tests return the data we expect.

The wizards are a great way to quickly define REST services for your database, but you won’t want to use them when you deploy your application.  Instead, we can…

Export SQL

For mass deployment (or for people who just prefer to type everything) a SQL script is a better option.

Another difference between the REST Development panel and REST Data Services in the database connection is that you can export the SQL using the database connection tool.

Open the HR database connection and expand the REST Data Services item.  If you do not see your new service, click on the REST Data Services item and click the refresh arrows at the top of the panel.

  1. Expand Modules.
  2. Right click on Manage.
  3. Select Export…

In the window that pops up:

  1. Check the Enable Schema check box if you want to include the statement.
  2. Un-Check Privileges.
  3. Enter a filename and location.
  4. Click Apply.
  5. Open the file.

You can now include this SQL script in your application build process to deploy the REST services right alongside the rest of your database objects.

When you need a REST API to work with your database, ORDS and the SQL Developer wizards will save you a ton of time and help you create very robust and elegant solutions.

Please leave me a comment if you have trouble or find any bugs.