Category Archives: DevOps

How to use an SSH Tunnel in Oracle Developer Cloud Service Build Jobs

Oracle Developer Cloud Service is a hosted team development and delivery platform with all kinds of tools to help your team be more efficient.  In this post, I will cover how to use an SSH tunnel to connect to your database in a build job.

This is current as of October 2018.

Connecting to your database through an SSH tunnel is fairly simple, and you won’t have to ask your network admin to open a port in your firewall and/or load balancers.

If you’re connecting to an Oracle Cloud Database, it should be pre-configured to allow database connections through an SSH tunnel.  If not, check with your server admin for assistance.

If you’re not familiar with SSH tunnels you can find out more here.

However, if you’re reading this post you probably just want to skip to the how-to so let’s get started.

Configure your build job

Open your Developer Cloud Service project, click the ‘Build’ tab and select the build you want to work with.

Click the ‘Configure’ button

Authorized SSH Keys

You will need an SSH Key that has been authorized to connect to the database server.  I recommend generating a new key that will only be used from your DevCs builds.  Name the new key pair something that will let you know it’s for this DevCs project.


If you generated a new key pair named DevCsProj1, you should have two files.  The file without an extension ‘DevCsProj1’ is your private key and the file with .pub ‘DevCsProj1.pub’ is your public key.
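
If you need to generate the pair, here is a minimal sketch using OpenSSH’s ssh-keygen (the key type, size, and comment are my choices, not requirements):

    ssh-keygen -t rsa -b 4096 -C "DevCsProj1 build key" -f DevCsProj1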

Add a build environment configuration

Select the ‘Build Environment’ tab, click ‘Add Build Environment’ and choose ‘SSH Configuration’.

  1. Open the private key (from above) in a text editor, copy everything in the file, and paste it into the ‘Private Key’ text area.
  2. You can do the same for the ‘Public Key’ text area but it is not required for this example.
  3. If you created your keys with a password, enter the password in the ‘Pass Phrase’ field.
  4. For a little extra security, you could get the public key for your Database Server and enter it in the ‘SSH Server Public Key’ text area.  This will ensure that your build job only connects to that server and will help protect against connecting to a different server if the IP address is re-assigned.  This field is optional.
  5. Check ‘Create SSH tunnel’.
  6. Enter the SSH username.  (This is not a database user, this is an authorized SSH user on the server.)
  7. If you’d rather use an SSH user/password instead of an SSH Key file, you could enter the password in the password field.  I leave it blank in order to only connect with the keys.
  8. Enter the port you want to use on the DevCs side of the tunnel in ‘Local Port’.
  9. If you leave ‘Remote Host’ empty, it will default to localhost.  The remote host is where the far end of the tunnel terminates, as if you were making the connection from the SSH server itself.  Since I intend to connect to the database on the same server I’m SSH’d into, I can use localhost.  To reach a different machine from that side of the tunnel, I could enter its address instead.  For example, if a server is only reachable from inside a network that includes the SSH server, I can SSH into that network and end the tunnel on the internal server.
  10. If you’d like to re-use your keys for other SSH commands you can check ‘Setup files in ~/.ssh for command-line ssh tools’.  It is not necessary for this example.

The ‘Connect String’ displayed at the bottom shows the ssh command DevCs will use to create the tunnel.

ssh -L localhost:1521:localhost:1521 opc@129.111.111.111

Let’s break it down:

  • Create a local ssh tunnel:  ssh -L
  • On the local DevCs server, listen on the ‘Local Port’ value:  localhost:1521 (the first one)
  • Once connected to the SSH server, map the tunnel end to ‘Remote Host’:’Remote Port’:  localhost:1521 (the second one)
  • Make the SSH connection as ‘Username’@’SSH Server’:  opc@129.111.111.111

OK, time to use the new tunnel.  I’ll use a SQLcl Builder.

Add / Edit a builder

Select the Builders tab, click Add Builder and choose SQLcl Builder.

If you’ve used a SQLcl builder before this will all be the same, except for the connect string.

  1. Enter your database username in the ‘Username’ field.
  2. Enter your database password in the ‘Password’ field.
  3. If you’re connecting to a database that uses wallet credentials, such as Oracle Exadata Express Cloud Service, enter the location of your credentials file.  I’m not, so I will leave it empty.
  4. Since I now have an SSH tunnel in place, I will connect to the local (DevCs server) end of the tunnel using the ‘Local Port’ value from the SSH Configuration:  //localhost:1521/[servicename].  Even if I had defined a remote host and/or port other than localhost/1521 in the tunnel configuration, I would still use localhost:[Local Port]; the tunnel takes care of the mapping on the other end.  (See the sketch after this list.)
  5. Enter the SQL File or Inline SQL you want to run.
  6. Save the Job Configuration.
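
If you want to sanity-check the same connection from your own command line, a hedged SQLcl example (the user and service name here are hypothetical) would look like:

    sql dd/[password]@//localhost:1521/dd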

Run it

Click the ‘Build Now’ button and when you look at the console output you will see something similar to this.

  • SSH is set up.
  • The SSH tunnel is opened.
  • SQLcl executed my script.  (I did not enable any output in my script so none is displayed.)
  • SQLcl disconnects from my Database.
  • The SSH tunnel is closed.
  • The SSH environment is removed.

Once you’ve added an SSH Build Environment to your build job and tested it, you can add one to each build job that uses a database connection.  After that’s done, you can close port 1521 on your database server (assuming you don’t need it for other applications).

Oracle Developer Cloud Service is constantly being improved, so let me know in the comments if this guide becomes out of date and I will update it.

CI/CD for Database Developers – Export Database Objects into Version Control

With this post, I kick off a series that walks you through the process of building  database applications using a CI/CD pipeline. I will be covering:

  • Export Database Objects into Version Control (This Post)
  • Use Schema Migration
  • Unit Test PL/SQL
  • Build and Deploy
  • Automate

Links will be added for each topic as that article is published. I will use the DinoDate application (source) as my code base.

For this post, assume that I have DinoDate installed and all development to date  has been done directly in the database (source code not stored externally in files).

Since, however, it’s really not a good idea to edit and maintain code directly in the database, I’d like to switch to storing the DDL for my database schema objects in files, managed by a version control system. Let’s get going!

Export the Database Objects out to Files

I could export the code for my objects out into files by writing queries against  built-in database views like ALL_SOURCE.  I have used this method a few times, and you probably have, too. You can make it work, but it puts the onus on you to get everything right. I’d rather think about other things, so I ask myself: is there a tool that will take care of the heavy lifting on this step?

There sure is, and it’s free:  Oracle SQL Developer has a great export tool. Assuming you’ve got SQL Developer installed and you can connect to your schema, here’s what you do:

Under the Tools menu, click on Database Export.

The first step of the wizard configures the general export attributes:

  1. Choose the connection for the schema you want to export.
  2. Select the options you’d like the wizard to use when creating the export scripts.
    I like to add drop statements to my scripts.  You may also want to export grants.
  3. I want to export some of the master table data so I will leave the export data section set with defaults.
  4. Choose “Save As Separate Directories” and select the export directory.

In step 2 I left all of the default options selected.

I want all of the schema objects so I skip step 3.

I selected the tables that contain the pre-loaded master data for the application.

Everything looks good, so click Finish.

The export wizard also creates a master run script and opens the script in a new worksheet (I’ll come back to this later):

Now that I have everything exported, I want to get the files into version control as soon as possible.

Add the Objects to Version Control

I’ll be using Git for version control.  SQL Developer has some nice integrated tools for working with Git but for now, I’m just going to use the command line.

  1. Change to the export directory.
  2. Initialize a Git repository.
  3. Add all files to the repo.
  4. Commit the changes.  (These four steps are sketched as shell commands below.)
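
The four steps translate to commands like the following (the export path is a placeholder; use your own):

    cd <your-export-directory>   # 1. change to the export directory
    git init                     # 2. initialize a Git repository
    git add .                    # 3. add all files to the repo
    git commit -m "Initial export of schema objects"   # 4. commit the changes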
Assuming that you’re using a central Git repository, you should push the new repo up to the shared repo.  That process differs based on which repository host you’re using, so I’ll leave it to you to consult their documentation.

With my files stored safely in Git, it’s time to

Verify / Clean up

Whenever you use an automated code generation/export tool, you should verify the results.

For example, you may want to simplify the generated SQL for the dinosaure_id column; a hedged before/after sketch follows.
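The export wizard tends to spell out every default.  Assuming it produced a fully-expanded identity clause (this is an illustration, not the actual DinoDate DDL), you might trim it down like this:

    -- generated (assumed example)
    dinosaure_id NUMBER GENERATED ALWAYS AS IDENTITY MINVALUE 1
        MAXVALUE 9999999999999999999999999999
        INCREMENT BY 1 START WITH 341 CACHE 20 NOORDER NOCYCLE,

    -- simplified
    dinosaure_id NUMBER GENERATED ALWAYS AS IDENTITY,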

After you verify each file, run a code beautifier on it.  Maintaining a standard code format makes it easier to see the differences between changes, which will make your future code reviews much easier.

Make sure you commit each file as you change it.  Don’t wait till the end to do a massive commit.

Organize

In step 1 of the wizard, I selected Save as Separate Directories, so all of my object scripts have been grouped into directories by object type (tables, views, packages, and so on).

This is a perfectly fine way to organize your scripts.  However, I like to group mine by whether the scripts will be changed and re-run vs. run a single time, with future changes made in new scripts.

If I were going to run everything from scripts, I would also break down the Run_Once directory into product versions, with a subdirectory for ‘create from scratch’ and one for ‘updates’, as sketched below.
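
A hedged sketch of that layout (the directory names are my own convention, not something the wizard generates):

    Run_On_Change/
        packages/
        procedures/
        views/
    Run_Once/
        v1.0/
            create/
            updates/
        v1.1/
            create/
            updates/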

In another post, I’ll show you how to use a Schema Migration tool which makes managing schema changes much easier.

Master Build Script

Now that everything is cleaned up and organized, we need to modify the master script that was generated by the export.  It should be named with a timestamp in a format similar to this:

Generated-YYYYMMDDHH24MISS.sql.

My exported script was named Generated-20180706134514.sql.

You will want to edit this file and update the script paths to match the changes you made above.  Also, verify that the scripts run in dependency order: objects should be created after the objects they depend on.

If you’re not planning to use a Schema Migration tool you’ll want to create separate master scripts for each of the create and update directories.

Test

Create a new schema that’s safe to test with, and run your master scripts.  After you get the scripts running without errors, do a schema compare to check for any differences between your new schema and the schema you exported from.

When it’s all the way you want it, you are ready to work from files to build your database code.

Working From  Files

From this point, you can now follow a new and improved workflow when it comes to editing the code for your database objects:

  1. Pull the latest version of your code from the shared repository. NEVER ASSUME YOU ALREADY HAVE THE LATEST.
  2. Make your changes.
  3. Compile to the database.
  4. Commit (to your source code repository, not your transaction) often.
  5. Push your changes back up to the shared repository.

Of course, that is a very simplified workflow. You should start a discussion with your team about using more advanced methods to further automate and improve your processes.

I sincerely hope you skimmed this post because you gave up long ago on editing code directly in the database. If that is not the case, I hope this article helps you make the change. Because then the next time something goes badly wrong,  you can simply recover from Git (or your repository of choice).

No tearing your hair out. No gnashing of teeth. No self-hating recriminations. Just a quick recovery and get back to work.

And then you can move on to more interesting challenges in improving the way you write and maintain your database application code.

Tips to help PL/SQL developers get started with CI/CD

In most ways, PL/SQL development is just like working with any other language, but sometimes it can be a little different.  If you’d like to create a Continuous Deployment pipeline for your PL/SQL application, here is a short list of tips to help you get started.

Work From Files

Do not use the Database as your source code repository.  If you are making changes to your application in a live database, stop right now and go read PL/SQL 101: Save your source code to files by Steven Feuerstein.

Now that you’re working from files, those files should be checked into…

Version Control

There are a lot of version control applications out there; the most popular right now is probably Git.  One advantage of using Git is that you work in your own local copy of the repository, making frequent commits that only you see.  If you’re working in your own personal database, you could compile your changes to the database at this point.

If you’re working in a database shared with other people and you’re ready to compile your code, git pull from the central repository to get any changes since your last pull.  Handle any merge conflicts, then git push your changes back up to the shared repository.  After the push, you can compile your code to the database.  This helps ensure that people don’t overwrite each other’s changes.

Making frequent small commits will help keep everything running smoothly.

What about your schema objects?

Your schema objects, such as tables and views, are as much a part of your application as any other component and changes to these objects should be version controlled as well.  The process of keeping track of schema changes is called Schema Migration and there are open source tools you can use such as Flyway and Liquibase.

Unit Testing

In any application it’s smart to have good test coverage; if your goal is Continuous Deployment, it’s critical.  A unit test exercises a single unit of code, which in PL/SQL is typically a function or procedure.

Unit tests typically follow these steps:

  1. Set up the environment to simulate how it will be when the application is deployed.  This might include changing configuration settings and loading test data.
  2. Execute the code unit.
  3. Validate the results.
  4. Clean up the environment, resetting it to the state it was in before running the tests.

utPLSQL is a great tool for unit testing your PL/SQL applications.  You will write a package of tests for each ‘unit’ of your application which should test all of the required functionality and potential errors.  utPLSQL is an active open source project with an impressive set of features.  Check out the docs to get started.
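
As a minimal sketch, a utPLSQL v3 test package for a hypothetical dd_search package might look like this (the package, procedure, and expected values are assumptions for illustration, not DinoDate code):

    create or replace package test_dd_search as
      --%suite(DinoDate search)

      --%test(Search for 'eat' finds at least one member)
      procedure search_finds_matches;
    end test_dd_search;
    /
    create or replace package body test_dd_search as
      procedure search_finds_matches is
        l_count pls_integer;
      begin
        -- dd_search.count_matches is a hypothetical unit under test
        l_count := dd_search.count_matches(p_term => 'eat');
        ut.expect(l_count).to_be_greater_than(0);
      end search_finds_matches;
    end test_dd_search;
    /

You would then run the suite with:  exec ut.run('test_dd_search');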

Build

Building a database application usually consists of running the change scripts in a specific order.  It’s common to create a master script that executes the others in order of how the objects depend on each other.  However, if the master script simply executes the other scripts, you will need to create additional scripts to track and verify changes, and more scripts to give you the ability to rollback the changes if/when there’s a problem.

There are build tools such as Gradle and Maven that can easily execute your scripts.  But you’ll still need to create the additional control scripts.  If you use a Schema Migration tool it should include a lot of these additional functions without having to write extra scripts.  For an example, check out Dino Date which has a Liquibase migration included.

How to handle the PL/SQL code

You could include your PL/SQL code in a Schema Migration changeset but adding schema migration notation to your PL/SQL introduces another layer of complexity and potential errors.

In the Dino Date runOnChange directory, you will find examples of setting up Liquibase changesets that watch for changes in the files of objects that you would rather keep ‘pure’.  When you run a migration, if a file has changed, Liquibase will run the new version.  (A sketch follows.)
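
A hedged example of such a changeset (the id, author, and file path are illustrative, patterned after the DinoDate approach rather than copied from it):

    <changeSet id="dd_search_pkg" author="dino" runOnChange="true">
        <!-- re-runs whenever the checksum of the referenced file changes -->
        <sqlFile path="plsql/dd_search.pkb"
                 relativeToChangelogFile="true"
                 endDelimiter="/"
                 splitStatements="false"/>
    </changeSet>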

In a shared database environment, you should execute a schema migration after you pull/merge/push your changes into your version control system.

Automate!

All of these pieces can be tied together and automated with an automation server such as Hudson or Jenkins (both are open source) to create a build pipeline.

(Diagram: a typical deployment pipeline.  Original by Jez Humble.)

A simple (maybe too simple) build pipeline using the above tools could follow these steps:

  1. Developer makes a change and pushes it to the shared Git repository.
  2. Hudson notices the repository has changed and triggers the build pipeline.
  3. The project is pulled from Git.
  4. Liquibase deploys the changes to a test database.
  5. utPLSQL is triggered to run the unit tests.
  6. Liquibase deploys the changes to the production database.

Other Useful Tools

  • Edition-Based Redefinition [PDF] can be used to deploy applications with little to no downtime.
  • Oracle Developer Cloud Service comes with a ton of pre-configured tools to help with almost every aspect of your development process.
  • Gitora can help you version control your application if you are not able to move out to files.

Oracle Developer Cloud Service – Use Maven Repository Artifacts in Build Jobs

Maven Repository

Oracle Developer Cloud Service (DevCs) includes a lot of great tools to help you build applications.  The Maven repository is useful for storing build dependencies and artifacts.

Maven artifacts can be uploaded to the repository after they are created by your build.  Dependencies can be manually uploaded ahead of time and pulled into the build jobs that need them.  Let’s take a look at how to work with dependency files in the Maven repository.

Upload Files

We’ll need to upload one or more files first.

  1. Click the Upload button.
  2. Drag and drop files into the box or click “select artifact files” and select the files.

My project uses Oracle JET version 4.1.0 so I’ll add it to my Maven repository.

Populate the file data.

  • GroupId: DinoDate
  • ArtifactId: oracleJet
  • Version: 4.1.0
  • Packaging: zip

Click the Start Upload button.

Note: the file may be renamed in the repository.  For example, the oraclejet_4.1.0.zip file will be renamed to oraclejet-4.1.0.zip using the above data.

Now that we have a file in the repository, let’s use it in our build.

Use a Build Tool

If you’re using a build tool such as Maven or Gradle, you can get the dependency declaration for the Maven repository by clicking on the root of the project then copying it from the dependency declaration tab for your tool.
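
For the file uploaded above, the copied declaration should look roughly like this Maven snippet (a sketch based on the values we entered; always prefer the declaration DevCs displays):

    <dependency>
        <groupId>DinoDate</groupId>
        <artifactId>oracleJet</artifactId>
        <version>4.1.0</version>
        <type>zip</type>
    </dependency>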

To get the file information:

  • Drill down into the folder.
  • Click on the file you need.
  • Copy the dependency declaration for the file.

At this point, you can run your build tool in a build job as normal.

But if you’re not using a build tool there is another way.

cURL

To use a tool such as cURL, you need to get the URL for the file you’re after.

First, we’ll need the DevCs Maven repository URL for the project.

  1. Click the root ( / ) of the repository.
  2. Copy the URL from the dependency declaration panel.

The repository URL will be similar to the assumed example sketched below.

Next, add the file information to the URL.  This is the same data (plus the file name) we entered when we uploaded the file.

If you need to find the data for a file:

  • Drill down into the folder.
  • Click on the file you need.
  • Copy the information from the dependency declaration for the file.

Now fill out the rest of the URL, as sketched below.
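
Here’s a hedged illustration of the whole URL (the base repository URL is invented; substitute the one you copied).  The Maven layout maps the GroupId to a path, then appends the ArtifactId, Version, and file name:

    # repository URL (assumed)
    https://myorg.developer.ocp.oraclecloud.com/myorg/s/myproject/maven/

    # + GroupId/ArtifactId/Version/file name
    https://myorg.developer.ocp.oraclecloud.com/myorg/s/myproject/maven/DinoDate/oracleJet/4.1.0/oraclejet-4.1.0.zip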

Use cURL in the Build Job

The different components in DevCs already have permission to access each other, so the build job can simply cURL files from the Maven repository.

Add an “Execute Shell” build step and add a cURL command (with your repository URL) like the first command in the sketch below.

You can use any of the cURL features as you normally would; for example, the second command in the sketch grabs multiple files in one call.
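
A sketch of both variants (repository URL assumed as above; the second artifact is hypothetical):

    # single file
    curl -O https://myorg.developer.ocp.oraclecloud.com/myorg/s/myproject/maven/DinoDate/oracleJet/4.1.0/oraclejet-4.1.0.zip

    # multiple files in one call
    curl -O https://myorg.developer.ocp.oraclecloud.com/myorg/s/myproject/maven/DinoDate/oracleJet/4.1.0/oraclejet-4.1.0.zip \
         -O https://myorg.developer.ocp.oraclecloud.com/myorg/s/myproject/maven/DinoDate/instantclient/12.2.0.1/instantclient-12.2.0.1.zip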

Why Store Dependencies in the Project?

Using the DevCs Maven Repository to store dependencies has many benefits, including:

  • It will be faster than pulling dependencies across the internet.
  • You won’t have to worry about a remote repository being offline or files being removed.
  • It helps protect you from security problems if a remote repository is compromised.

Please leave a comment if you have questions or need additional help.

Deploy a Python application to Oracle Application Container Cloud Service

About Application Container Cloud Service

ACCS provides a pre-configured platform (Platform as a Service, or PaaS) where you can quickly deploy and host your applications.  For many of today’s applications, the hosting server is just that: a place to host the application.  Most of the time, the only thing an application needs from the server is support for the application’s programming language and in/out connections through ports.  Using a PaaS such as ACCS frees you from the extra work of configuring and maintaining a server and lets you focus on perfecting your application.

ACCS supports multiple languages but for this post, I’ll focus on Python.

DinoDate

For the examples, I will be deploying the DinoDate application. DinoDate was written as an open source learning application that can be used to demonstrate database concepts with multiple programming languages.  It currently has both Python and NodeJS mid-tier applications and is backed by an Oracle Database.

The following instructions show how to deploy the Python version of DinoDate to an Oracle ACCS instance.

If you don’t have access to Oracle Cloud services, you can try the Oracle Cloud with $300 of free credit.

Download/Clone the DinoDate application.

Database

First, you’ll need a database.

Create an Oracle Cloud database, or if you already have an Oracle Database, make sure that you can safely create and destroy the DD and DD_NON_EBR schemas.

Connect to your database as sys with sysdba and run coreDatabase/dd_master_install.sql.  (Use your password and connect string)
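
For example, with SQL*Plus (the password and connect string are placeholders):

    sqlplus sys/YourPassword@YourConnectString as sysdba @coreDatabase/dd_master_install.sql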

Prepare the DinoDate Application

Download oraclejet.zip (version 4.1.0).  (Current versions as of the time of this post.)

  • Extract the Oracle JET files
  • Run bower install

Download necessary files

The Docker container for Python used by ACCS comes with Python installed.  We’ll need to include the rest of the dependencies.

Package the Files to Deploy

  • Create a deploy directory with a lib subdirectory.
  • Copy the front end client into the deploy directory.
  • Copy the Python application into the deploy directory.
  • Extract the Oracle Instant Client files into the deploy/lib directory.  (Change the command to point to where your files are located.)
  • Change to the deploy directory.
  • Create a shell script, launchPython.sh, to install the dependencies and launch the application.
  • Create the manifest file: manifest.json

This file declares that we will use Python version 3.6.0 and provides the command that will be used to start the application.
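
A minimal sketch of that manifest (the command assumes the launchPython.sh script created above):

    {
      "runtime": {
        "majorVersion": "3.6.0"
      },
      "command": "sh launchPython.sh"
    }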

Create the deployment file: deployment.json

This file includes the environment variables DinoDate needs and sets the ACCS deployment to use 1G of memory and only install 1 instance.  PYTHONPATH is the directory we will install the Python modules into and LD_LIBRARY_PATH is used by cx_Oracle to locate the Oracle client files.

Replace “YourJdbcConnecString” with the JDBC connect string for your database.
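
A sketch of a matching deployment file (DD_CONNECT_STRING is a hypothetical variable name, and the lib paths assume the layout created above; use the names your build of DinoDate expects):

    {
      "memory": "1G",
      "instances": "1",
      "environment": {
        "PYTHONPATH": "lib",
        "LD_LIBRARY_PATH": "lib/instantclient_12_2",
        "DD_CONNECT_STRING": "YourJdbcConnecString"
      }
    }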

Important Note

ACCS is pre-configured to listen on $PORT, so we set our application to listen on that port.  Do not attempt to change $PORT.  When ACCS performs its post-deploy check, it opens the application using $PORT; if the application is not listening on that port, the check returns a 404 and the deployment will fail and be removed.
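
In a Python app using bottle (as DinoDate’s Python mid-tier does), honoring $PORT looks roughly like this (a sketch, not DinoDate’s actual startup code):

    import os
    from bottle import default_app, run

    app = default_app()  # the application's routes would already be registered on this app

    # ACCS injects the PORT environment variable; fall back to 8088 for local runs
    port = int(os.environ.get('PORT', 8088))
    run(app, host='0.0.0.0', port=port)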

Create a zip file with the required DinoDate deploy files.

Deploy to ACCS

In your browser navigate to the Oracle Application Container Cloud Service Console.

Push the Create Application button to open the platform selection panel.

Push the Python button to open the application definition panel and expand the ‘More Options’ section.

  • Populate [Name] with DinoDatePython.
  • Click ‘Choose File’ for Archive and select the DinoDatePythonACCS.zip file.
  • Click ‘Choose File’ for Manifest and select the manifest.json file.
  • Click ‘Choose File’ for Deployment Configuration and select the deployment.json file.

You can change the values in the other fields as you’d like, but notice that since we defined "memory": "1G" and "instances": "1" in the deployment.json file, those values will change automatically.

It’s also possible to include the manifest.json file in the DinoDatePythonACCS.zip file instead of uploading it separately.

Click Create.

It may take several minutes for ACCS to set up the environment and deploy the application.  Once it’s done, click on the URL: link to open the application.

Try it out

You can log in with any of the existing users, such as:

  • Bob
    bob@example.com
  • Admin
    admin@example.com

Use any value for the password, the application doesn’t check it.

Click on the Search tab and search for ‘eat’; it should return 6 of the pre-loaded dinosaurs.

Quick Review

  1. Download the dependencies.
  2. Create a launch script that will install the dependencies and launch the application.
  3. Collect the required deployment artifacts and dependencies into a .zip file.
  4. Create a manifest.json file that contains at least the required Python version and the command used to start your application.
  5. Create a deployment.json file that contains any needed environment variable definitions.  Optionally you can include ACCS environment definitions such as required memory and number of instances.  (This file is optional.  You could include the environment variables in your launch script.)
    Reminder: ACCS will use the pre-defined environment variable $PORT.  Make sure your application listens on $PORT.
  6. Use the ACCS service console to upload your 3 files and create your new application.

If you run into any trouble, leave a comment and I’ll be happy to help.

Deploy a Node.js application to Oracle Application Container Cloud Service

About Application Container Cloud Service

ACCS provides a pre-configured platform (Platform as a Service, or PaaS) where you can quickly deploy and host your applications.  For many of today’s applications, the hosting server is just that: a place to host the application.  Most of the time, the only thing an application needs from the server is support for the application’s programming language and in/out connections through ports.  Using a PaaS such as ACCS frees you from the extra work of configuring and maintaining a server and lets you focus on perfecting your application.

ACCS supports multiple languages but for this post, I’ll focus on Node.js.

DinoDate

For the examples, I will be deploying the DinoDate application. DinoDate was written as an open source learning application that can be used to demonstrate database concepts with multiple programming languages.  It currently has both Python and NodeJS mid-tier applications and is backed by an Oracle Database.

The following instructions show how to deploy the Node.js version of DinoDate to an Oracle ACCS instance.

If you don’t have access to Oracle Cloud services, you can try the Oracle Cloud with $300 of free credit.

Download/Clone the DinoDate application.

Database

First, you’ll need a database.

Create an Oracle Cloud database, or if you already have an Oracle Database, make sure that you can safely create and destroy the DD and DD_NON_EBR schemas.

Connect to your database as sys with sysdba and run coreDatabase/dd_master_install.sql.  (Use your password and connect string)

Prepare the DinoDate Application

Download oraclejet.zip (version 4.1.0).  (Current versions as of the time of this post.)

  • Extract the Oracle JET files
  • Run bower install
  • Install the NodeJS modules.  ACCS assumes your deploy package will include all necessary modules.
  • Create a deploy directory.
  • Copy the front end client into the deploy directory.
  • Copy the nodejs application into the deploy directory.
  • Change to the deploy/nodejs directory.
  • Create the environment variables.  (Replace YourJdbcConnecString with your JDBC connect string.)
  • Test the application to make sure everything is working:
    • Open your browser to localhost:3000.
    • Log in as bob@example.com (any password will work).
    • Open the search tab and execute a search.  I typically search for ‘eat’; it will return several members.

Ctrl-c to stop the node server then switch back to the deploy directory.

Package the Files to Deploy

Create the manifest file: manifest.json

This file declares that we will use Node.js version 8 and provides the command that will be used to start the application.
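
A minimal sketch of that manifest (the start command and entry-point file name are assumptions; use your application’s actual entry point):

    {
      "runtime": {
        "majorVersion": "8"
      },
      "command": "node dd_serve.js"
    }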

Create the deployment file: deployment.json

This file includes the environment variables DinoDate needs and sets the ACCS deployment to use 1G of memory and only install 1 instance.

Replace “YourJdbcConnecString” with the JDBC connect string for your database.

Important Note

ACCS is pre-configured to listen on $PORT, so we set our application to listen on that port.  Do not attempt to change $PORT.  When ACCS performs its post-deploy check, it opens the application using $PORT; if the application is not listening on that port, the check returns a 404 and the deployment will fail and be removed.

Create a zip file with the required DinoDate deploy files.

Important Note

The Node.js platform in ACCS has the oracledb module pre-installed.  If we were to upload the module we just installed, it would conflict with the pre-installed one and the deployment would fail and be removed, so we exclude it from the deployment .zip file (see the sketch below).
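
A hedged example of building the archive while excluding the module (the zip name matches this post; the module path is an assumption about the deploy layout):

    # build the deploy archive, skipping the pre-installed oracledb module
    zip -r DinoDateNodeACCS.zip . -x "nodejs/node_modules/oracledb/*"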

Deploy to ACCS

In your browser navigate to the Oracle Application Container Cloud Service Console.

Push the Create Application button to open the platform selection panel.


Push the Node button to open the application definition panel.

  • Populate [Name] with DinoDate.
  • Click ‘Choose File’ for Archive and select the DinoDateNodeACCS.zip file.
  • Click ‘Choose File’ for Manifest and select the manifest.json file.
  • Click ‘Choose File’ for Deployment Configuration and select the deployment.json file.

You can change the values in the other fields as you’d like, but notice that since we defined "memory": "1G" and "instances": "1" in the deployment.json file, those values will change automatically.

It’s also possible to include the manifest.json file in the DinoDateNodeACCS.zip file instead of uploading it separately.

Click Create.

It may take several minutes for ACCS to set up the environment and deploy the application.  Once it’s done, click on the URL: link to open the application.

Try it out

You can log in with any of the existing users, such as:

  • Bob
    bob@example.com
  • Admin
    admin@example.com

Use any value for the password, the application doesn’t check it.

Click on the Search tab and search for ‘eat’; it should return 6 of the pre-loaded dinosaurs.

Quick Review

  1. Build your application and test it.
  2. Collect the required deployment artifacts and dependencies into a .zip file.
    Reminder: do not include the oracledb module.
  3. Create a manifest.json file that contains at least the required Node.js version and the command used to start your application.
  4. Create a deployment.json file that contains any needed environment variable definitions.  Optionally you can include ACCS environment definitions such as required memory and number of instances.  (This file is optional)
    Reminder: ACCS will use the pre-defined environment variable $PORT.  Make sure your application listens on $PORT.
  5. Use the ACCS service console to upload your 3 files and create your new application.

If you run into any trouble, leave a comment and I’ll be happy to help.

Deploy NodeJS & Python3 applications on an Oracle Cloud Compute instance

DinoDate

DinoDate currently has both Python and NodeJS mid-tier applications and is backed by an Oracle Database.

The following instructions show how to deploy DinoDate to an Oracle Cloud Compute instance.  However, if you just need to deploy a NodeJS or Python application, the same instructions should help you install Node and/or Python 3.

If you don’t have access to an Oracle Database you can try the Oracle Cloud for free.

Database

Download/Clone DinoDate to get the database scripts you’ll need.

Create an Oracle Cloud database.

Connect to your database as sys with sysdba and run coreDatabase/dd_master_install.sql.  (Use your password and connect string)

Compute

Create an Oracle Cloud Compute instance.

Open the ports for our NodeJS and Python apps.

Download and scp the following to your new compute instance.  (Current versions as of the time of this post.)

Open an ssh connection to your compute instance.  (Use your ssh key and the public IP address for your compute instance)

  • Switch to su
  • Update your instance
  • Install some tools we’ll need

Install both Oracle Instant Client files

Install NodeJS 8

  • Install some tools we’ll need
  • Enable the config manager
  • Install Python 3.5
  • Enable Python 3.5
  • Upgrade pip
  • Install the python modules for DinoDate
    • cx_Oracle
    • bottle
  • Exit scl bash:

Exit su

Add the following to your .bash_profile:

  • Create the environment variables for DinoDate (use the JDBC connect string for your database).
  • Enable Python 3.5.

Re-run .bash_profile, then:

  • Clone DinoDate to your Compute instance.
  • Extract the Oracle JET files.
  • Run bower install.
  • Install the NodeJS modules.
  • Use pm2 to start the NodeJS version of DinoDate.
    The --watch parameter will restart the application if the files change.
  • (We already installed the Python modules above.)
  • Use pm2 to start the Python version of DinoDate.
    The --watch parameter will restart the application if the files change.

‘pm2 startup’ will generate the command needed to restart our applications on boot.  The sketch below extracts and executes the command from the generated text.
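
A hedged sketch of those pm2 steps (the entry-point file names are assumptions; DinoDate’s actual script names may differ):

    pm2 start dd_serve.js --watch                         # NodeJS version (script name assumed)
    pm2 start dd_serve.py --watch --interpreter python3   # Python version (script name assumed)
    pm2 startup | tail -n 1 | bash                        # run the boot-time command pm2 generates
    pm2 save                                              # save the process list so pm2 restores it on boot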

Try it out

Open a browser and pull up DinoDate:

  • NodeJS
    http://YourComputePublicIP:3000
  • Python
    http://YourComputePublicIP:8088

You can log in with any of the existing users, such as:

  • Bob
    bob@example.com
  • Admin
    admin@example.com

Use any value for the password, the application doesn’t check it.

Click on the Search tab and search for ‘eat’; it should return 6 of the pre-loaded dinosaurs.

If you run into any trouble, leave a comment and I’ll be happy to help.

Becoming a DevOps “expert”

I’ve decided to learn more about DevOps.

I’ve always been a believer in automating repetitive tasks and letting machines do as much of “my” work as they can.  The way I learn best (as you can tell by the name of my blog) is to learn about a topic, build something from what I’ve learned, and share my experience.

Given that DevOps is a very big topic, it will take more than one or two posts to do it justice.  This post is the first in a series of blog posts, videos and presentations that I plan to create as I learn more.  I think the best place to start is with what I “think” I know now.

What is DevOps?

A couple years ago I was using Jenkins to create a continuous delivery pipeline for a project I was working on.  I was the only one working on the project and after seeing a CD demonstration at a conference I figured I’d give it a try.  I had everything working and I was quite pleased with myself.  Then I started hearing the term DevOps and assumed it was just a term for what I was already doing.  I was partially right.

DevOps is more than just automating the software delivery process, it’s also a cultural mindset.  It’s developers and operations working together throughout the full lifecycle of a project instead of in separate silos.  Since I was working solo on that project I missed out on this aspect.  Currently, I’m not working on any project where I can experience the full cultural aspect so I plan to mentally assume different roles as I work through the learning process.

If you’d like a better definition, there is plenty of material available on the web from real experts.  I only wanted to document what DevOps means to me as I start to learn more.

My current plan.

I have been working on an open source application used for demonstrations and learning called DinoDate.  I am going to build a DevOps process around this application.  My focus will be a bit more on the database aspects of DevOps, since the database isn’t always used to its full potential and is sometimes even treated like a bucket of data.  I will be building this process using the Oracle Developer Cloud Service against an Oracle Cloud Database and other Oracle Cloud services, as well as other tools such as Jenkins against an Oracle Database on a VM.

Plan Steps:
  1. Define the steps to manually deploy DinoDate as is.
    1. Automate the build and deploy process, which currently is: run some scripts and scp the code to an Oracle Compute instance where I have already set up Python and NodeJS.
    2. Deploy the NodeJS and Python apps to an Oracle Application Container Cloud instance.
  2. Add some open source tools to improve the process.
    • Build script using Gradle.
    • Schema object version control using Liquibase.
    • Unit tests for the PL/SQL using utPLSQL.
  3. Automate creating the infrastructure (DB, Compute instance) from scratch then deploy, test and destroy.
  4. Reproduce the entire CD pipeline using Jenkins (or another tool) against a VM.

Once I’m satisfied with my understanding of the tools and workflow, I’ll find a project that would benefit from a DevOps environment and encourage (OK, pester) them to switch to a DevOps process, with an offer to act as the DevOps “expert”.

More to come.

Keep an eye out here and on my YouTube channel for how-to and ‘lessons learned’ posts that I’ll make as I go.  Feel free to post a comment if you see that I’ve already got something wrong or if you have a specific interest you’d like me to focus on as I go.