Friday, October 23, 2015

Retrieve decimals when rounding an average in SQL

AVG returns the same data type as its input, so averaging an integer column silently drops the decimals. Cast the values to FLOAT (or DECIMAL) first:

SELECT ROUND(AVG(CAST(column_name AS FLOAT)), 2) FROM [database].[dbo].[table]

Net Promoter Score (NPS in SQL Server)

As our customers know, we send short email surveys to all those who ring our service centre. One of the questions we ask is our Ultimate Question: Would you recommend Smartsalary to a friend or colleague?

Check out the results for two of our top agents in the service centre last month.
Using the NPS methodology below, you can see that these two agents had 64% promoters and 6% detractors, for a total NPS of 58% (64% minus 6%).
Given that NPS scores can range from +100% down to -100%, our current companywide score in the low 40s is pretty good. Though as a company, we would do well to better understand our higher performers and try to replicate their skills throughout the organisation.

SQL solution:
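
A minimal sketch of the calculation in SQL Server, assuming a hypothetical SurveyResponses table with an AgentName column and a 0-10 Score column, and using the standard NPS cut-offs (9-10 promoter, 7-8 passive, 0-6 detractor):

-- Hypothetical schema: SurveyResponses(AgentName, Score), Score in 0-10.
-- NPS = % promoters (9-10) minus % detractors (0-6).
SELECT
    AgentName,
    100.0 * SUM(CASE WHEN Score >= 9 THEN 1 ELSE 0 END) / COUNT(*) AS PromoterPct,
    100.0 * SUM(CASE WHEN Score <= 6 THEN 1 ELSE 0 END) / COUNT(*) AS DetractorPct,
    100.0 * (SUM(CASE WHEN Score >= 9 THEN 1 ELSE 0 END)
           - SUM(CASE WHEN Score <= 6 THEN 1 ELSE 0 END)) / COUNT(*) AS NPS
FROM SurveyResponses
GROUP BY AgentName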

Thursday, October 22, 2015

How to choose algorithms for Microsoft Azure Machine Learning

  • The Machine Learning Algorithm Cheat Sheet
  • Flavors of machine learning
  • Considerations when choosing an algorithm
  • Algorithm notes

Tuesday, October 20, 2015

Machine learning algorithms grouped by similarity

Algorithms are often grouped by similarity in terms of their function (how they work), for example tree-based methods and neural-network-inspired methods.

I think this is the most useful way to group algorithms and it is the approach we will use here.

This is a useful grouping method, but it is not perfect. Some algorithms could just as easily fit into multiple categories, like Learning Vector Quantization, which is both a neural-network-inspired method and an instance-based method. There are also category names that describe both the problem and the class of algorithm, such as Regression and Clustering.

We could handle these cases by listing algorithms twice or by selecting the group that subjectively is the “best” fit. I like this latter approach of not duplicating algorithms to keep things simple.

In this section I list many of the popular machine learning algorithms grouped the way I think is most intuitive. It is not exhaustive in either the groups or the algorithms, but I think it is representative and will be useful to you to get an idea of the lay of the land.

Please Note: There is a strong bias towards algorithms used for classification and regression, the two most prevalent supervised machine learning problems you will encounter.

Regression Algorithms
Regression is concerned with modelling the relationship between variables, iteratively refined using a measure of error in the predictions made by the model.

Regression methods are a workhorse of statistics and have been co-opted into statistical machine learning. This may be confusing because we can use regression to refer both to the class of problem and to the class of algorithm. Really, regression is a process.

The most popular regression algorithms are:
  • Ordinary Least Squares Regression (OLSR)
  • Linear Regression
  • Logistic Regression
  • Stepwise Regression
  • Multivariate Adaptive Regression Splines (MARS)
  • Locally Estimated Scatterplot Smoothing (LOESS)
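
To make the "measure of error" concrete: for a single predictor, ordinary least squares minimises the squared prediction error and has a closed form built entirely from averages, so it can even be fitted in SQL. A minimal sketch, assuming a hypothetical table xy with FLOAT columns x and y:

-- OLS for one predictor: slope = cov(x, y) / var(x), intercept from the means.
-- Hypothetical table xy(x FLOAT, y FLOAT).
SELECT Slope,
       AvgY - Slope * AvgX AS Intercept
FROM (
    SELECT (AVG(x * y) - AVG(x) * AVG(y))
           / (AVG(x * x) - AVG(x) * AVG(x)) AS Slope,
           AVG(x) AS AvgX,
           AVG(y) AS AvgY
    FROM xy
) AS s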

Instance-based Algorithms

Instance-based learning models a decision problem with instances or examples of training data that are deemed important or required by the model.

Such methods typically build up a database of example data and compare new data to the database using a similarity measure in order to find the best match and make a prediction. For this reason, instance-based methods are also called winner-take-all methods and memory-based learning. Focus is put on representation of the stored instances and similarity measures used between instances.

The most popular instance-based algorithms are:
  • k-Nearest Neighbour (kNN)
  • Learning Vector Quantization (LVQ)
  • Self-Organizing Map (SOM)
  • Locally Weighted Learning (LWL)
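
The "database of examples plus similarity measure" description maps directly onto a query: find the k stored rows closest to a new point and let them vote. A minimal sketch of k-Nearest Neighbour classification, assuming a hypothetical Training table with two numeric features and a Label:

-- Hypothetical schema: Training(x1 FLOAT, x2 FLOAT, Label VARCHAR(20)).
DECLARE @qx1 FLOAT = 1.5, @qx2 FLOAT = 2.0, @k INT = 5;

SELECT TOP (1) Label                  -- majority vote among the neighbours
FROM (
    SELECT TOP (@k) Label             -- the k rows nearest the query point
    FROM Training
    ORDER BY (x1 - @qx1) * (x1 - @qx1) + (x2 - @qx2) * (x2 - @qx2)
) AS Neighbours
GROUP BY Label
ORDER BY COUNT(*) DESC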

Regularization Algorithms

Regularization methods are extensions made to another method (typically regression methods) that penalize models based on their complexity, favoring simpler models that are also better at generalizing.

I have listed regularization algorithms separately here because they are popular, powerful and generally simple modifications made to other methods.

The most popular regularization algorithms are:
  • Ridge Regression
  • Least Absolute Shrinkage and Selection Operator (LASSO)
  • Elastic Net
  • Least-Angle Regression (LARS)
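
To see the penalty at work: for a single centred predictor, ridge regression simply adds the penalty term to the denominator of the OLS slope, shrinking it towards zero. A minimal sketch, assuming the same hypothetical xy table as above and a made-up @lambda:

-- Ridge slope for one centred predictor: Sxy / (Sxx + lambda).
-- @lambda = 0 recovers ordinary least squares; larger values shrink the slope.
DECLARE @lambda FLOAT = 10.0;

SELECT SUM((x - m.AvgX) * (y - m.AvgY))
       / (SUM((x - m.AvgX) * (x - m.AvgX)) + @lambda) AS RidgeSlope
FROM xy
CROSS JOIN (SELECT AVG(x) AS AvgX, AVG(y) AS AvgY FROM xy) AS m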

Decision Tree Algorithms
Decision tree methods construct a model of decisions made based on actual values of attributes in the data.

Decisions fork in tree structures until a prediction decision is made for a given record. Decision trees are trained on data for classification and regression problems. Decision trees are often fast and accurate and a big favorite in machine learning.

The most popular decision tree algorithms are:
  • Classification and Regression Tree (CART)
  • Iterative Dichotomiser 3 (ID3)
  • C4.5 and C5.0 (different versions of a powerful approach)
  • Chi-squared Automatic Interaction Detection (CHAID)
  • Decision Stump
  • M5
  • Conditional Decision Trees
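
Once fitted, a tree is just a nest of if/else tests, which in SQL is a nested CASE; each WHEN is one fork on an attribute value. A sketch of the scoring side with made-up splits, assuming a hypothetical Customers table:

-- Made-up splits for illustration only, not a trained model.
SELECT CustomerId,
       CASE
           WHEN Tenure < 12 THEN
               CASE WHEN MonthlySpend < 50 THEN 'Churn' ELSE 'Stay' END
           ELSE 'Stay'
       END AS Prediction
FROM Customers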

Bayesian Algorithms
Bayesian methods are those that explicitly apply Bayes’ Theorem for problems such as classification and regression.

The most popular Bayesian algorithms are:
  • Naive Bayes
  • Gaussian Naive Bayes
  • Multinomial Naive Bayes
  • Averaged One-Dependence Estimators (AODE)
  • Bayesian Belief Network (BBN)
  • Bayesian Network (BN)
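
Applying Bayes’ Theorem here largely reduces to counting: estimate P(label) and P(feature value | label) as frequencies, then multiply them under Naive Bayes’ independence assumption. A sketch of the counting step, assuming a hypothetical Training table with a Label and one categorical feature Colour:

-- Hypothetical schema: Training(Label, Colour).
SELECT Label,
       Colour,
       COUNT(*) * 1.0
           / SUM(COUNT(*)) OVER (PARTITION BY Label) AS PrColourGivenLabel,
       SUM(COUNT(*)) OVER (PARTITION BY Label) * 1.0
           / SUM(COUNT(*)) OVER ()                   AS PrLabel
FROM Training
GROUP BY Label, Colour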

Clustering Algorithms
Clustering, like regression, describes both the class of problem and the class of methods.

Clustering methods are typically organized by their modelling approach, such as centroid-based and hierarchical. All methods are concerned with using the inherent structures in the data to best organize it into groups of maximum commonality.

The most popular clustering algorithms are:
  • k-Means
  • k-Medians
  • Expectation Maximisation (EM)
  • Hierarchical Clustering
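
A centroid-based method like k-Means alternates two steps: assign each point to its nearest centroid, then recompute each centroid as the mean of its points. One iteration of that loop fits in a single query; a sketch assuming hypothetical Points and Centroids tables:

-- Hypothetical schema: Points(x, y), Centroids(Id, cx, cy).
-- In practice the two steps repeat until the assignments stop changing.
SELECT a.CentroidId,
       AVG(a.x) AS NewCx,
       AVG(a.y) AS NewCy
FROM (
    SELECT p.x, p.y,
           (SELECT TOP (1) c.Id
            FROM Centroids c
            ORDER BY (p.x - c.cx) * (p.x - c.cx)
                   + (p.y - c.cy) * (p.y - c.cy)) AS CentroidId
    FROM Points p
) AS a
GROUP BY a.CentroidId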

Association Rule Learning Algorithms

Association rule learning methods extract rules that best explain observed relationships between variables in data.

These rules can discover important and commercially useful associations in large multidimensional datasets that can be exploited by an organisation.

The most popular association rule learning algorithms are:
  • Apriori algorithm
  • Eclat algorithm
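
At the heart of both algorithms sit two counts per rule: support (how often the items co-occur) and confidence (how often the consequent follows the antecedent). A sketch for single-item rules, assuming a hypothetical Orders basket table with one row per item per order:

-- Hypothetical schema: Orders(OrderId, Item), one row per item per order.
SELECT a.Item AS IfBought,
       b.Item AS ThenBought,
       COUNT(*) * 1.0 / t.TotalOrders AS Support,
       COUNT(*) * 1.0 / ic.ItemOrders AS Confidence
FROM Orders a
JOIN Orders b
  ON b.OrderId = a.OrderId AND b.Item <> a.Item
JOIN (SELECT Item, COUNT(DISTINCT OrderId) AS ItemOrders
      FROM Orders GROUP BY Item) AS ic
  ON ic.Item = a.Item
CROSS JOIN (SELECT COUNT(DISTINCT OrderId) AS TotalOrders FROM Orders) AS t
GROUP BY a.Item, b.Item, ic.ItemOrders, t.TotalOrders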

Artificial Neural Network Algorithms
Artificial Neural Networks are models that are inspired by the structure and/or function of biological neural networks.

They are a class of pattern-matching methods commonly used for regression and classification problems, but they are really an enormous subfield comprising hundreds of algorithms and variations for all manner of problem types.

Note that I have separated out Deep Learning from neural networks because of the massive growth and popularity in the field. Here we are concerned with the more classical methods.

The most popular artificial neural network algorithms are:
  • Perceptron
  • Back-Propagation
  • Hopfield Network
  • Radial Basis Function Network (RBFN)
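
A single perceptron is just a weighted sum plus a bias pushed through a step function, so its forward pass is one expression. A sketch with made-up weights, assuming a hypothetical Inputs table (the training step, adjusting the weights from prediction errors, is not shown):

-- Made-up weights; hypothetical schema: Inputs(x1 FLOAT, x2 FLOAT).
DECLARE @w1 FLOAT = 0.7, @w2 FLOAT = -0.4, @b FLOAT = 0.1;

SELECT x1, x2,
       CASE WHEN @w1 * x1 + @w2 * x2 + @b > 0 THEN 1 ELSE 0 END AS Output
FROM Inputs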

Deep Learning Algorithms

Deep Learning methods are a modern update to Artificial Neural Networks that exploit abundant cheap computation.

They are concerned with building much larger and more complex neural networks, and as commented above, many methods are concerned with semi-supervised learning problems where large datasets contain very little labelled data.

The most popular deep learning algorithms are:
  • Deep Boltzmann Machine (DBM)
  • Deep Belief Networks (DBN)
  • Convolutional Neural Network (CNN)
  • Stacked Auto-Encoders

Dimensionality Reduction Algorithms
Like clustering methods, dimensionality reduction methods seek and exploit the inherent structure in the data, but in this case in an unsupervised manner, in order to summarise or describe the data using less information.

This can be useful to visualize high-dimensional data or to simplify data that can then be used in a supervised learning method. Many of these methods can be adapted for use in classification and regression. The most popular dimensionality reduction algorithms are:
  • Principal Component Analysis (PCA)
  • Principal Component Regression (PCR)
  • Partial Least Squares Regression (PLSR)
  • Sammon Mapping
  • Multidimensional Scaling (MDS)
  • Projection Pursuit
  • Linear Discriminant Analysis (LDA)
  • Mixture Discriminant Analysis (MDA)
  • Quadratic Discriminant Analysis (QDA)
  • Flexible Discriminant Analysis (FDA)

Ensemble Algorithms
Ensemble methods are models composed of multiple weaker models that are independently trained and whose predictions are combined in some way to make the overall prediction.

Much effort is put into what types of weak learners to combine and the ways in which to combine them. This is a very powerful class of techniques and as such is very popular. The most popular ensemble algorithms are:
  • Boosting
  • Bootstrapped Aggregation (Bagging)
  • AdaBoost
  • Stacked Generalization (blending)
  • Gradient Boosting Machines (GBM)
  • Gradient Boosted Regression Trees (GBRT)
  • Random Forest
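
The "combine in some way" step is often as simple as a majority vote across the weak learners' predictions. A sketch of that voting step, assuming a hypothetical Predictions table holding each model's label for each row:

-- Hypothetical schema: Predictions(ModelId, RowId, Label).
SELECT p.RowId,
       (SELECT TOP (1) Label
        FROM Predictions p2
        WHERE p2.RowId = p.RowId
        GROUP BY Label
        ORDER BY COUNT(*) DESC) AS VotedLabel
FROM Predictions p
GROUP BY p.RowId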

Machine learning algorithms grouped by learning style

There are different ways an algorithm can model a problem based on its interaction with the experience or environment or whatever we want to call the input data.

It is popular in machine learning and artificial intelligence textbooks to first consider the learning styles that an algorithm can adopt.

There are only a few main learning styles or learning models that an algorithm can have and we’ll go through them here with a few examples of algorithms and problem types that they suit.

This taxonomy, or way of organizing machine learning algorithms, is useful because it forces you to think about the roles of the input data and the model preparation process, and to select the approach that is most appropriate for your problem in order to get the best result.

Let’s take a look at three different learning styles in machine learning algorithms:

Supervised Learning
Input data is called training data and has a known label or result, such as spam/not-spam or a stock price at a point in time.

A model is prepared through a training process where it is required to make predictions and is corrected when those predictions are wrong. The training process continues until the model achieves a desired level of accuracy on the training data.

Example problems are classification and regression.

Example algorithms include Logistic Regression and the Back Propagation Neural Network.
Unsupervised Learning

Input data is not labelled and does not have a known result.

A model is prepared by deducing structures present in the input data. This may be to extract general rules, it may be through a mathematical process to systematically reduce redundancy, or it may be to organize the data by similarity.

Example problems are clustering, dimensionality reduction and association rule learning.

Example algorithms include the Apriori algorithm and k-Means.
Semi-Supervised Learning
Input data is a mixture of labelled and unlabelled examples.

There is a desired prediction problem but the model must learn the structures to organize the data as well as make predictions.

Example problems are classification and regression.

Example algorithms are extensions to other flexible methods that make assumptions about how to model the unlabelled data.
Overview

When crunching data to model business decisions, you are most typically using supervised and unsupervised learning methods.

A hot topic at the moment is semi-supervised learning methods in areas such as image classification where there are large datasets with very few labelled examples.

Sunday, October 18, 2015

Machine learning definitions and terminology

Example: 
An object or instance in the data used for learning or evaluation.

Features:
The set of attributes, often represented as a vector, associated with an example, e.g., height and weight for gender prediction.

Labels:
  • In classification, the category associated with an object, e.g., positive or negative in binary classification.
  • In regression, a real-valued number.
Training data: 
Data used to train the algorithm.

Test data: 
Data used exclusively to test the algorithm.

Some standard learning scenarios:
  • Supervised learning: labeled training data.
  • Unsupervised learning: no labeled data. 
  • Semi-supervised learning: labeled training data + unlabeled data.
  • Transductive learning: labeled training data + unlabeled test data.

Thursday, October 8, 2015

How to check which triggers are enabled or disabled

SELECT name, is_disabled FROM sys.triggers


Select data from string up to a space in SQL Server?

You can use a combination of LEFT and CHARINDEX to find the index of the first space, and then grab everything to the left of it. Appending a space to the string being searched guards against values that contain no space, where CHARINDEX would return 0 and LEFT would fail on the negative length:

SELECT LEFT(YourColumn, CHARINDEX(' ', YourColumn + ' ') - 1)

Thursday, October 1, 2015

How to find open transactions in SQL Server

DBCC OPENTRAN (Transact-SQL)

Applies To: SQL Server 2014, SQL Server 2016 Preview
DBCC OPENTRAN helps to identify active transactions that may be preventing log truncation. DBCC OPENTRAN displays information about the oldest active transaction and the oldest distributed and nondistributed replicated transactions, if any, within the transaction log of the specified database. Results are displayed only if there is an active transaction that exists in the log or if the database contains replication information. An informational message is displayed if there are no active transactions in the log.
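
For example, to check a specific database (the database name is a placeholder), optionally returning the output as a rowset:

-- Report the oldest active transaction in the named database.
DBCC OPENTRAN ('YourDatabase');

-- The same, returned as a result set that can be captured in a table.
DBCC OPENTRAN ('YourDatabase') WITH TABLERESULTS;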