It seems like everybody knows how to train predictive models in languages like Python, R, and Julia. But what about SQL? How can you leverage the power of a well-known database language for machine learning?
Don’t feel like reading? Check out my video on the topic.
SQL has been around for decades, but still isn’t recognized as a language for machine learning. Sure, I’d pick Python any day of the week, but sometimes in-database machine learning is the only option.
We’ll use Oracle Cloud for this article. It’s free, so please register and create an instance of the OLTP database (version 19c, 0.2 TB of storage). Once done, establish a connection through SQL Developer Web – or any other tool.
We’ll use a slightly modified version of a well-known Housing prices dataset – you can download it here. I’ve chosen this dataset because it doesn’t require any extensive preparation, so the focus can immediately be shifted to predictive modeling.
The article is structured as follows:

- Dataset loading and preparation
- Model training and validation

Dataset loading and preparation
If you are following along, I hope you’ve downloaded the dataset already. You need to load it into the database with a tool of your choice – I’m using SQL Developer Web, a free tool provided by Oracle Cloud.
The loading process is straightforward – click on the upload button, choose the CSV file, and click Next a couple of times:
Mine is now stored in the housingprices table. Here’s what the first couple of rows look like:
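If you want to sanity-check the load, a quick preview query does the trick (assuming the housingprices table name from above):

```sql
-- Preview the first five rows of the loaded dataset
SELECT *
  FROM housingprices
 FETCH FIRST 5 ROWS ONLY;
```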
Oracle Machine Learning (OML) is a bit strange when it comes to creating models. Your data table must contain a column with an ID – numbers generated from a sequence. Our dataset doesn’t have that by default, so let’s add it manually:
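A sketch of one way to add it – the sequence and column names (seq_housing, row_id) are my own choices, not anything OML mandates:

```sql
-- Create a sequence and use it to populate a new ID column
CREATE SEQUENCE seq_housing;

ALTER TABLE housingprices ADD (row_id INTEGER);

UPDATE housingprices
   SET row_id = seq_housing.NEXTVAL;

COMMIT;
```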
There’s only one thing left to do in this section: the train/test split. In SQL, that’s done by creating two views. The first view has 75% of the data randomly sampled, and the second one has the remaining 25%. The percentages were chosen arbitrarily:
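Here’s one way to express the split. I’m hashing the ID column with ORA_HASH so the two views are guaranteed to be complementary; the bucket count (100 buckets, 0–99) and the 75 threshold are arbitrary, and the view names are my own:

```sql
-- 75% of the rows go to training...
CREATE OR REPLACE VIEW housing_train_data AS
SELECT * FROM housingprices
 WHERE ORA_HASH(row_id, 99) < 75;

-- ...and the remaining 25% to testing
CREATE OR REPLACE VIEW housing_test_data AS
SELECT * FROM housingprices
 WHERE ORA_HASH(row_id, 99) >= 75;
```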
We now have two views – housing_train_data for training and housing_test_data for testing. There’s still one thing left to do before model training: specifying the settings for the model. Let’s do that in the next section.
Oracle uses a settings table for training predictive models. The table is made of two columns – one for the setting name and the other for its value.
Here’s how to create the table:
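A minimal sketch – the table name housing_model_settings is my own choice:

```sql
-- Settings table: one row per (setting name, setting value) pair
CREATE TABLE housing_model_settings (
    setting_name  VARCHAR2(30),
    setting_value VARCHAR2(4000)
);
```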
The following information will be stored in this table:
- Type of model – let’s use a Generalized Linear Model (GLM)
- Diagnostics table name – for model statistics
- Data preparation technique – let’s go with the automatic
- Feature selection – disabled or enabled, let’s go with enabled
- Feature generation – same as with feature selection
Yes, you are reading this right – all of that is done automatically without the need for your assistance. Let’s fill the settings table next:
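A sketch of the inserts, using the constants the DBMS_DATA_MINING package exposes for these settings; the diagnostics table name GLMR_DIAGNOSTICS is an arbitrary choice of mine:

```sql
BEGIN
    -- Algorithm: Generalized Linear Model
    INSERT INTO housing_model_settings VALUES
        (dbms_data_mining.algo_name, dbms_data_mining.algo_generalized_linear_model);
    -- Table where Oracle will write model diagnostics
    INSERT INTO housing_model_settings VALUES
        (dbms_data_mining.glms_diagnostics_table_name, 'GLMR_DIAGNOSTICS');
    -- Automatic data preparation
    INSERT INTO housing_model_settings VALUES
        (dbms_data_mining.prep_auto, dbms_data_mining.prep_auto_on);
    -- Enable feature selection and feature generation
    INSERT INTO housing_model_settings VALUES
        (dbms_data_mining.glms_ftr_selection, dbms_data_mining.glms_ftr_selection_enable);
    INSERT INTO housing_model_settings VALUES
        (dbms_data_mining.glms_ftr_generation, dbms_data_mining.glms_ftr_generation_enable);
    COMMIT;
END;
/
```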
Here’s how the table looks like:
And that’s it! Let’s continue with the model training.
Model training and validation
The model training boils down to a single procedure call. Here’s the code:
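Something along these lines, via the DBMS_DATA_MINING.CREATE_MODEL procedure. The case ID column and settings table refer to the objects created above; the target column name median_house_value is my guess at what the dataset calls the price column – adjust it to match yours:

```sql
BEGIN
    dbms_data_mining.create_model(
        model_name          => 'GLMR_Regression_Housing',
        mining_function     => dbms_data_mining.regression,
        data_table_name     => 'housing_train_data',
        case_id_column_name => 'row_id',
        target_column_name  => 'median_house_value',
        settings_table_name => 'housing_model_settings'
    );
END;
/
```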
As you can see, you need to name your model first. The name GLMR_Regression_Housing is entirely arbitrary. After a second or so, the model is trained. Don’t be alarmed by the number of tables Oracle creates – they’re required for the model to work properly.
Next, let’s take a peek at the model performance on the train set. It can be obtained with the following query:
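One way to pull the global model statistics (mean absolute error, R-squared, and friends) is the GET_MODEL_DETAILS_GLOBAL table function – a sketch, assuming the model name from above:

```sql
-- Global quality statistics for the trained model
SELECT global_detail_name,
       global_detail_value
  FROM TABLE(dbms_data_mining.get_model_details_global('GLMR_Regression_Housing'))
 ORDER BY global_detail_name;
```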
On average, the model is wrong by 7.2 units of the price, and it captures just north of 70% of the variance.
Let’s take a look at the most significant features next. A feature can be considered significant for prediction if its P-value is near 0 (less than 0.05 will do). Here’s the query:
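A sketch of one way to get there, through the GET_MODEL_DETAILS_GLM table function, which returns one row per coefficient along with its P-value:

```sql
-- Features whose P-value suggests a significant contribution
SELECT attribute_name,
       feature_expression,
       coefficient,
       p_value
  FROM TABLE(dbms_data_mining.get_model_details_glm('GLMR_Regression_Housing'))
 WHERE p_value < 0.05
 ORDER BY p_value;
```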
As you can see, these features weren’t present initially in the dataset but were created automatically by Oracle’s feature generator.
Finally, let’s apply this model to the test set:
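Scoring can be done with the SQL PREDICTION function; here the results land in a housing_test_predictions table, matching the name used below:

```sql
-- Score the test set and store the predictions
CREATE TABLE housing_test_predictions AS
SELECT row_id,
       PREDICTION(GLMR_Regression_Housing USING *) AS predicted_value
  FROM housing_test_data;
```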
The results are stored in the housing_test_predictions table. I don’t know what’s the deal with the scientific notation, but it’s good enough for further evaluation. I’ll leave that up to you, as you only need to JOIN the housing_test_data view with the housing_test_predictions table to see how good the results are.
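A sketch of that comparison, assuming the target column is called median_house_value and the predictions table has the row_id and predicted_value columns shown earlier:

```sql
-- Actual vs. predicted values, side by side
SELECT t.row_id,
       t.median_house_value AS actual_value,
       p.predicted_value
  FROM housing_test_data        t
  JOIN housing_test_predictions p
    ON p.row_id = t.row_id;
```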
Machine learning isn’t a Python- or R-specific thing anymore. Adoption in SQL isn’t the most intuitive for a data scientist, and the documentation is awful, but in-database ML provides an excellent way for business users to get in touch with predictive modeling.
Don’t miss out on the rest of the ML with SQL series:
Feel free to leave any thoughts in the comment section below.