With this post, I kick off a series that walks you through the process of building database applications using a CI/CD pipeline. I will be covering:
- Export Database Objects into Version Control (This Post)
- Use Schema Migration
- Unit Test PL/SQL
- Build and Deploy
- Automate
Links will be added for each topic as that article is published. I will use the DinoDate application (source) as my code base.
For this post, assume that I have DinoDate installed and all development to date has been done directly in the database (source code not stored externally in files).
However, since it’s really not a good idea to edit and maintain code directly in the database, I’d like to switch to storing the DDL for my database schema objects in files managed by a version control system. Let’s get going!
Export the Database Objects out to Files
I could export the code for my objects out into files by writing queries against built-in database views like ALL_SOURCE. I have used this method a few times, and you probably have, too. A do-it-yourself export might start with queries along these lines (a sketch; the package and table names are examples):
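-- One row per line of stored PL/SQL source (DD_SEARCH_PKG is a hypothetical name).
SELECT text
  FROM all_source
 WHERE owner = 'DD'
   AND name  = 'DD_SEARCH_PKG'
   AND type  = 'PACKAGE BODY'
 ORDER BY line;

-- Or let the built-in DBMS_METADATA package generate complete DDL for an object.
SELECT dbms_metadata.get_ddl('TABLE', 'DD_DINOSAURS', 'DD') FROM dual;

You can make it work, but it puts the onus on you to get everything right. I’d rather think about other things, so I ask myself: is there a tool that will take care of the heavy lifting on this step?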
There sure is, and it’s free: Oracle SQL Developer has a great export tool. Assuming you’ve got SQL Developer installed and you can connect to your schema, here’s what you do:
Under the Tools menu, click on Database Export.
The first step of the wizard configures the general export attributes:
- Choose the connection for the schema you want to export.
- Select the options you’d like the wizard to use when creating the export scripts.
I like to add drop statements to my scripts. You may also want to export grants.
- I want to export some of the master table data, so I will leave the Export Data section set to its defaults.
- Choose “Save As Separate Directories” and select the export directory.
In step 2, I left all of the default options selected.
I want all of the schema objects, so I skip step 3.
In the data step, I selected the tables that contain the pre-loaded master data for the application.
Everything looks good, so click Finish.
The export wizard also creates a master run script and opens the script in a new worksheet (I’ll come back to this later):
Now that I have everything exported, I want to get the files into version control asap.
Add the Objects to Version Control
I’ll be using Git for version control. SQL Developer has some nice integrated tools for working with Git but for now, I’m just going to use the command line.
- Change to the export directory.
- Initialize a Git repository.
- Add all files to the repo.
- Commit the changes.
[~]$ cd ~/Projects/Oracle/DevOpsLab/DdlExport/DD
[DD]$ git init
Initialized empty Git repository in /mnt/f5e545bf-a2d6-45c0-b7ea-29d8476a5e06/Projects/Oracle/DevOpsLab/DdlExport/DD/.git/
[DD]$ git add .
[DD]$ git commit -m "Initial Commit"
[master (root-commit) f94be76] Initial Commit
 94 files changed, 3222 insertions(+)
 create mode 100644 CONSTRAINTS/DATABASECHANGELOG.sql
 create mode 100644 CONSTRAINTS/DATABASECHANGELOGLOCK.sql
 create mode 100644 CONSTRAINTS/DD_DEQUEUE_ERRORS_T.sql
 create mode 100644 CONSTRAINTS/DD_DINOSAURS_T.sql
...
With my files stored safely in Git, it’s time to verify and clean up.
Verify / Clean up
Whenever you use an automated code generation/export tool, you should verify the results.
For example, you may want to change the generated SQL for the dinosaur_id column from
"DINOSAUR_ID" NUMBER GENERATED BY DEFAULT AS IDENTITY MINVALUE 1 MAXVALUE 9999999999999999999999999999 INCREMENT BY 1 START WITH 1 CACHE 20 NOORDER NOCYCLE NOKEEP NOSCALE,
to
dinosaur_id NUMBER GENERATED ALWAYS AS IDENTITY,

The explicit MINVALUE/MAXVALUE/CACHE settings in the generated version are just the defaults, and switching to GENERATED ALWAYS keeps inserts from supplying their own IDs.
After you verify each file, run a code beautifier on it. Maintaining a standard code format will make it easier to see the differences between changes. This will make your future code reviews much easier.
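If you have Oracle SQLcl handy, its FORMAT command can apply the SQL Developer formatter from the command line. A minimal sketch, assuming a recent SQLcl release and using an example file name from this export:

# Start SQLcl without connecting, format one script, and overwrite it.
sql /nolog <<'EOF'
format file TABLES/DD_DINOSAURS.sql TABLES/DD_DINOSAURS.sql
exit
EOF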
Make sure you commit each file as you change it; don’t wait until the end to do one massive commit.
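For example (the file name and commit message are illustrative):

git add TABLES/DD_DINOSAURS.sql
git commit -m "Simplify the identity column DDL for DD_DINOSAURS"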
Organize
In step 1 of the wizard, I selected “Save As Separate Directories,” so all of my object scripts have been grouped into directories by type, like this:
dd
-CONSTRAINTS
-DATA_TABLE
-DATA_VIEW
-FUNCTIONS
-INDEXES
-PACKAGE_BODIES
-PACKAGES
-PROCEDURES
-REF_CONSTRAINTS
-SYNONYMS
-TABLES
-TRIGGERS
-TYPES
-VIEWS
This is a perfectly fine way to organize your scripts. However, I like to group mine by whether the scripts will be changed and re-run, versus run a single time with any future changes made in new scripts. For example:
dd
-Run_Once
 -CONSTRAINTS
 -DATA_TABLE
 -DATA_VIEW
 -INDEXES
 -REF_CONSTRAINTS
 -SYNONYMS
 -TABLES
 -TYPES
-Run_On_Change
 -FUNCTIONS
 -PACKAGES
 -PACKAGE_BODIES
 -PROCEDURES
 -TRIGGERS
 -VIEWS
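If you want to make the same split, a few shell commands from the export directory will do it (a sketch assuming the directory names shown above):

mkdir Run_Once Run_On_Change
mv CONSTRAINTS DATA_TABLE DATA_VIEW INDEXES REF_CONSTRAINTS SYNONYMS TABLES TYPES Run_Once/
mv FUNCTIONS PACKAGES PACKAGE_BODIES PROCEDURES TRIGGERS VIEWS Run_On_Change/
git add -A
git commit -m "Group scripts into Run_Once and Run_On_Change"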
If I were going to run everything from scripts, I would also break down the Run_Once directory into product versions with a subdirectory for ‘create from scratch’ and one for ‘updates’, like this:
-Run_Once
 -v4.1
  -create
   -new_table1.sql
   -new_table2.sql
  -update
   -alter_table5.sql
   -alter_table8.sql
 -v5.0
  -create
   -new_table1.sql
   -new_table2.sql
  -update
   -alter_table12.sql
In another post, I’ll show you how to use a Schema Migration tool which makes managing schema changes much easier.
Master Build Script
Now that everything is cleaned up and organized, we need to modify the master script that was generated by the export. It should be named with a timestamp in a format similar to this:
Generated-YYYYMMDDHH24MISS.sql.
My exported script was named Generated-20180706134514.sql.
You will want to edit this file, changing the script directories to match the reorganization you did above. You also need to verify that the scripts run in dependency order: objects should be created after the objects they depend on.
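After those edits, the master script might look something like this (a sketch only; your generated file will list its own objects, and DD_SEARCH_PKG is a hypothetical name):

-- Generated-20180706134514.sql, with paths updated for the new layout.
-- Run-once objects first, in dependency order...
@@Run_Once/TABLES/DD_DINOSAURS.sql
@@Run_Once/CONSTRAINTS/DD_DINOSAURS_T.sql
-- ...then the re-runnable code objects.
@@Run_On_Change/PACKAGES/DD_SEARCH_PKG.sql
@@Run_On_Change/PACKAGE_BODIES/DD_SEARCH_PKG.sql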
If you’re not planning to use a schema migration tool, you’ll want to create separate master scripts for each of the create and update directories.
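For instance, each version’s update directory could get its own small driver script (a sketch based on the hypothetical layout above):

-- master_update_v5.0.sql: apply only the v5.0 changes.
@@Run_Once/v5.0/update/alter_table12.sql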
Test
Create a new schema that’s safe to test with, and run your master scripts. After you get the scripts running without errors, do a schema compare to check for any differences between your new schema and the schema you exported from.
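Here’s one way to set up that throwaway schema (a minimal sketch; the user name, password, and privilege list are examples to adapt):

-- Run as a DBA.
CREATE USER dd_test IDENTIFIED BY "Example_Passw0rd" QUOTA UNLIMITED ON users;
GRANT CREATE SESSION, CREATE TABLE, CREATE VIEW, CREATE SEQUENCE,
      CREATE PROCEDURE, CREATE TRIGGER, CREATE TYPE, CREATE SYNONYM TO dd_test;

-- Then connect as dd_test and run the master script:
-- @Generated-20180706134514.sql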
Once everything is the way you want it, you’re ready to work from files to build your database code.
Working From Files
From this point, you can follow a new and improved workflow when editing the code for your database objects (a minimal Git sketch follows the list):
- Pull the latest version of your code from the shared repository. NEVER ASSUME YOU ALREADY HAVE THE LATEST.
- Make your changes.
- Compile to the database.
- Commit (to your source code repository, not your transaction) often.
- Push your changes back up to the shared repository.
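In Git terms, one pass through that loop might look like this (the file name and commit message are illustrative):

git pull                                  # never assume you already have the latest
# ...edit a script, compile it to the database, test...
git add Run_On_Change/PACKAGE_BODIES/DD_SEARCH_PKG.sql
git commit -m "Handle empty search results"
git push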
Of course, that is a very simplified workflow. You should start a discussion with your team about using more advanced methods to further automate and improve your processes.
I sincerely hope you skimmed this post because you gave up long ago on editing code directly in the database. If that’s not the case, I hope this article helps you make the change, because then the next time something goes badly wrong, you can simply recover from Git (or your repository of choice).
No tearing your hair out. No gnashing of teeth. No self-hating recriminations. Just a quick recovery and get back to work.
And then you can move on to more interesting challenges in improving the way you write and maintain your database application code.